Shrinking IT Budgets, Growing Development Expectations — How to Reconcile the Two?

In 2026, global IT investment is under pressure. AI expectations are rising. Someone has to reconcile these two realities — and that responsibility typically falls to the IT leader. In Hungary, this pressure is particularly acute. Years of elevated inflation, a high-interest-rate environment, and slower economic growth have collectively narrowed the room for maneuver in enterprise IT budgets.

This is reflected in Dell’Oro Group’s global analysis: telecom capex is projected to decline 2 percent in 2026, with that trend persisting through 2030. Meanwhile, business demand for AI continues to grow — faster decisions, automated processes, intelligent systems.

The IT leader is no longer weighing whether to invest or to cut costs. Both are required, simultaneously. And to do both, one thing is essential: a precise understanding of where money is being lost in day-to-day operations. That is the real significance of observability in 2026.

This article examines how organizations can sustain progress even under tight budget constraints.

Two Trends Converging — Without a Playbook

Dell’Oro Group’s analysis, published in March 2026, projects that global telecom capex will decline 2 percent this year, with that trajectory expected to hold through 2030. That alone is not surprising. What is surprising is that during the same period, AI expectations — from the business side, from ownership, and in the context of competitive pressure — are increasing significantly.

The IT leader finds themselves in an unusual position: managing less while delivering more. Fewer resources, yet a mandate to sustain reliable, scalable, and increasingly AI-driven operations.

This is not exclusively a telecommunications problem. The same pressure is felt in financial services, energy, and manufacturing.

CAPEX and OPEX — What Do They Mean in an IT Context?

Before addressing where and how savings can be achieved, it is worth clarifying two concepts that sit at the center of most IT budget discussions — and are often conflated.

CAPEX (Capital Expenditure) refers to one-time, capital-type investments. In an IT context, this means servers, network infrastructure, software licenses, or the cost of deploying a new platform. CAPEX is decision-driven: the organization consciously commits a one-time expenditure in exchange for expected long-term value. When budgets tighten, CAPEX is what leaders can defer — delaying a deployment, waiting on an upgrade cycle.

OPEX (Operational Expenditure), by contrast, represents the ongoing cost of running the business: engineering hours, operational fees, incident management, troubleshooting, and the downstream consequences of unplanned downtime.

The distinction matters particularly in the current environment. CAPEX can be held back — and that is precisely what organizations across Hungary and globally are doing. OPEX, however, will not decrease on its own. It only becomes manageable when the organization understands, with precision, what is driving unnecessary operational cost.

In most IT organizations, that understanding is missing — and the consequences show up directly in unpredictable, reactive operations. This is examined in detail in our article on predictable IT operations in complex systems.

Where Does the Money Actually Go?

OPEX rarely disappears in a single large line item. It drains quietly, gradually, across many small points simultaneously.

  • The largest cost driver is unplanned outage management. When a critical system goes down, the cost extends well beyond the time spent resolving the incident. It includes the hours of every affected team, the parallel investigations running concurrently, the business-side impact, and the consequences of SLA breaches. For a mid-sized enterprise, a single significant incident can represent a substantial financial exposure — in direct costs and eroded trust alike.
  • The second major driver is manual troubleshooting. When monitoring tools are fragmented, engineers spend a significant portion of their time navigating between dashboards, reconciling data from disparate sources, and manually piecing together correlations. This is structured capacity waste — one that siloed operations only deepen.
  • The third driver is less visible: the cost of a reactive operations culture. When an organization typically learns about a problem only after customers are already experiencing it, every response is firefighting. And firefighting is, by nature, more expensive than prevention.
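The scale of this drain is easier to grasp with a rough back-of-envelope model. The sketch below is illustrative only: the incident counts, hourly rates, and revenue-impact figures are hypothetical assumptions, not data from this article.

```python
# Back-of-envelope model of the annual OPEX drain from unplanned incidents.
# Every input figure below is a hypothetical assumption for illustration.

def annual_opex_drain(
    incidents_per_year: int,
    mttr_hours: float,
    engineers_per_incident: int,
    hourly_rate_eur: float,
    revenue_loss_per_hour_eur: float,
) -> float:
    """Sum of engineering time cost and business impact for unplanned outages."""
    engineering_cost = (
        incidents_per_year * mttr_hours * engineers_per_incident * hourly_rate_eur
    )
    business_cost = incidents_per_year * mttr_hours * revenue_loss_per_hour_eur
    return engineering_cost + business_cost

# Hypothetical mid-sized enterprise: 24 major incidents a year, 6-hour MTTR,
# 5 engineers pulled into each incident, EUR 60/h loaded engineering cost,
# EUR 2,000/h of business impact while the service is degraded.
drain = annual_opex_drain(24, 6.0, 5, 60.0, 2000.0)
print(f"Estimated annual drain: EUR {drain:,.0f}")  # → EUR 331,200
```

Even with deliberately modest inputs, the model lands in the hundreds of thousands of euros per year — and none of it appears as a single budget line item.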

According to Dynatrace’s 2024 State of Observability report — a global survey of 1,300 CIOs conducted by Qualtrics — 75 percent of IT leaders reported that increasing system complexity is directly driving up operational costs. As a vendor-commissioned study, the framing should be taken with appropriate context, though the directional finding aligns with broader industry observations.

Without AI vs. With AI — What Changes in OPEX?

The conversation around AI adoption tends to center on opportunity: faster processes, automated decisions, intelligent systems. All of that is real. But there is another side to the picture that receives less attention.

“The operational cost of AI-based systems is higher than that of traditional systems.”

Not necessarily because of licensing costs, but because AI system behavior is harder to predict. A traditional application either works or it does not. An AI-based system can operate incorrectly — slowly, inaccurately, unpredictably — without anyone immediately noticing.

From an OPEX perspective, this represents a meaningful risk. When a fault occurs in an AI-driven process — whether an automated decision chain, a recommendation engine, or a predictive analytics module — the impact propagates quickly. The deeper AI is integrated into operations, the harder it becomes to trace where and when something degraded. This dynamic is examined in detail in our article on decision paralysis in enterprise environments.

None of this argues against AI adoption. The case for it remains strong. The question is whether the organization is equipped to govern what it deploys.

Without observability, AI adoption can become a significant OPEX amplifier. Incident resolution times grow. Fault sources become harder to isolate. Engineers search for increasingly non-obvious correlations in increasingly complex systems. The efficiency gains AI produces on one side are partially or fully offset by the increased operational burden on the other. The AIOps model addresses precisely this gap: framing the oversight and automated response capabilities for AI-based systems within a unified operational framework.

How Does Observability Reduce OPEX? — The TELUS Example

The reasoning above may sound abstract. A concrete case illustrates what it looks like in practice.

TELUS is one of Canada’s largest telecommunications providers: 18 million customers, hundreds of interdependent digital services, 25 teams working in parallel. Telecommunications is among the most operationally complex environments, where system downtime produces immediate, measurable business impact — and where keeping OPEX under control is a matter of operational survival.

TELUS had faced the same challenge that most large enterprises recognize: each team had visibility into its own domain, but the end-to-end picture was missing. When a fault surfaced, determining the cause was difficult. Preventing recurrence was harder still. Monitoring data from disparate tools had to be manually correlated, response was reactive, and engineering capacity was consistently absorbed by incident management.

Following the deployment of Dynatrace, that changed fundamentally. A unified observability platform provided real-time visibility into how system components interacted, where anomalies were emerging, and which process was likely causing an observed issue — before customers experienced any impact.

The results are measurable. Mean time to resolution (MTTR) decreased by 45 percent. Development cycles accelerated, and security risks became identifiable earlier in the pipeline. Less engineering time goes to incident response, downtime windows are shorter, and coordination overhead has decreased. The team is the same — operating with higher efficiency and lower operational pressure.
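What a 45 percent MTTR reduction means in freed-up capacity can be sketched with a quick calculation. The incident volume and team-size figures below are hypothetical assumptions chosen for illustration; only the 45 percent reduction comes from the case itself.

```python
# Engineer-hours recovered by a 45% MTTR reduction.
# Incident count, baseline MTTR, and team size are hypothetical assumptions.

def incident_engineer_hours(incidents: int, mttr_hours: float, engineers: int) -> float:
    """Total engineer-hours absorbed by incident response per year."""
    return incidents * mttr_hours * engineers

before = incident_engineer_hours(24, 6.0, 5)         # baseline: 720 h/year
after = incident_engineer_hours(24, 6.0 * 0.55, 5)   # MTTR reduced by 45%
freed = before - after
print(f"Engineer-hours freed per year: {freed:.0f}")  # → 324
```

The point is not the exact number but the mechanism: the same headcount recovers a meaningful block of capacity without any hiring or new CAPEX.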

The TELUS example is relevant to a Hungarian enterprise for a straightforward reason: wherever many systems run in parallel, where teams operate in silos, and where fault causality is difficult to trace, OPEX will consistently be higher than it needs to be.

Closing

An IT organization running fragmented, manually operated monitoring tools is paying more to operate than it should. The reason is structural: incidents surface only once they are already felt. Root causes are found only after engineers have spent hours searching. Alerts fire only after the impact has already propagated.

Observability changes that dynamic. It does not promise fault-free operations — no approach does. What it offers is what an IT leader is actually looking for: controlled operations. Problems that are visible early. Interventions that are targeted. Decisions that are grounded in evidence.

Under tightening budget constraints, this is the most viable path to extracting maximum value from existing resources.

If you would like to assess where your organization stands in terms of observability maturity, the Telvice team is available for a complimentary consultation. Get in touch.
