From a technology perspective, one of the biggest changes we’ve seen over the past year has been a dramatic acceleration in cloud computing initiatives. The pandemic has proven once and for all that cloud computing really does work, even in the most challenging of circumstances, providing greater speed, agility and resilience.
And with this new level of trust and appreciation of cloud computing, huge numbers of businesses have gone from running only a handful of applications in the cloud to wanting to shift significant parts of their IT estate over to a cloud environment, as quickly as they possibly can.
Indeed, as organisations have rushed through digital transformation programs to deliver new digital services to both customers and employees during the pandemic, most have relied heavily on the cloud to enable them to move at the required speed and scale.
The pandemic will certainly come to be seen as a tipping point in the transition to cloud computing, speeding up what was already an inevitable switch by several years. Indeed, Gartner has forecast that worldwide end-user spending on public cloud services will grow by 18.4 per cent in 2021, and that cloud computing will account for 14.2 per cent of total global enterprise IT spending in 2024, up from 9.1 per cent in 2020.
Monitoring headaches in the cloud
This marked shift towards cloud computing is undoubtedly delivering benefits, enabling the digital transformation initiatives organisations have relied on throughout the pandemic. In many cases, the level and speed of innovation that has been achieved simply wouldn’t have been possible using legacy technologies.
However, there is a sting in the tail. The rapid acceleration of cloud initiatives has had a profound impact on the IT department, adding huge complexity and piling even greater pressure onto technologists.
In our latest Agents of Transformation report, Agents of Transformation 2021: The Rise of Full-Stack Observability, we found that 77 per cent of global technologists are experiencing greater levels of complexity as a result of the acceleration of cloud computing initiatives during the pandemic. And 78 per cent cited technology sprawl and the need to manage a patchwork of legacy and cloud technologies as an additional source of complexity.
On the back of rapid digital transformation over the past year, technologists have rightly put even more focus on monitoring the entire IT estate, from customer-facing applications through to third-party services and core infrastructure such as network and storage. But whilst their established monitoring approaches and tools have, to a large degree, provided visibility across traditional legacy environments, they have been found wanting in new hybrid cloud environments.
The reason for this is that within a software-defined cloud environment, nothing is fixed; everything is constantly changing in real time. And that makes monitoring far more difficult.
Traditional approaches to monitoring were built around physical IT infrastructure – technologists knew they were operating five servers and 10 network links – they were dealing with constants. This allowed for fixed dashboards for each layer of the IT stack. But the nature of cloud computing is that organisations continually scale their use of IT up and down according to business need. For instance, a company might be using two servers to support a customer-facing application, suddenly increase that to 25 servers to meet a surge in demand, and then drop back down to five a few hours later, adapting its network and storage infrastructure along the way.
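The elasticity described above can be sketched in a few lines of Python. This is a hypothetical illustration, not a real autoscaler: the request rates and per-server capacity are assumed figures chosen to echo the 2 → 25 → 5 example, and the point is simply that the fleet a monitoring tool must track is a moving target, not a constant.

```python
def servers_needed(requests_per_second: int, capacity_per_server: int = 100) -> int:
    """Size the fleet to demand, assuming each server handles ~100 req/s.

    The capacity figure is an illustrative assumption, not a benchmark.
    """
    # Ceiling division: always provision enough servers, with a minimum of one.
    return max(1, -(-requests_per_second // capacity_per_server))

# Demand over a few hours: quiet, then a surge, then quiet again.
demand = [150, 2500, 450]
fleet_sizes = [servers_needed(rps) for rps in demand]
print(fleet_sizes)  # the fleet grows and shrinks with load: [2, 25, 5]
```

A dashboard wired to a fixed list of five hosts has no way to follow a fleet that looks like this, which is exactly why static monitoring views break down in elastic environments.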
Traditional monitoring solutions simply aren't designed for this dynamic, infrastructure-as-code approach, and that means most technologists can no longer see the health of their full IT stack in a single pane of glass. In fact, three-quarters of technologists now report they are being held back because they have multiple, disconnected monitoring solutions, and worryingly, more than two-thirds admit they now waste significant time because they can't easily isolate where performance issues are actually happening. The acceleration of cloud computing initiatives is undoubtedly the major driver of this issue.
Full-stack observability must include the cloud
Looking ahead, technologists are under no illusions: the transition to the cloud is only going to gather pace, as organisations continue to prioritise digital transformation to get through the pandemic and exploit new opportunities in a turbulent marketplace.
Technologists are also fully aware that unless they find a way to gain greater visibility and insight into all IT environments, they will be unable to drive the rapid, sustainable digital transformation their organisations need. Indeed, 79 per cent of technologists state that they need to adopt more comprehensive observability tools to achieve their organisations’ innovation goals.
Without genuine full-stack observability, technologists simply don’t stand a chance of being able to quickly identify and fix technology issues before they impact end users and the business.
IT and business leaders need to recognise that unless they address this issue now, they are jeopardising all of their efforts and investment in digital transformation. Organisations can develop the most innovative, cloud-based applications for their customers and staff, but unless their technologists have the right level of visibility and tools to optimise IT performance in real-time, then they will never be able to deliver faultless digital experiences.
Technologists need to be able to monitor all technical areas across their IT stack, including within cloud environments, and to directly link technology performance to end user experience and business outcomes, so they can prioritise actions and focus on what really matters to the business. Get this right, and then organisations really can start to take full advantage of the cloud.
James Harvey is EMEAR CTO at Cisco AppDynamics