The average enterprise SOC now consumes between £3m and £8m per year in staff, tooling, and managed service fees. In most cases, it delivers alert volumes no human team can meaningfully process, coverage gaps that wouldn't survive a serious audit, and a board report that describes busyness rather than security posture.
This isn't a people problem. It's an engineering problem.
The real drivers of SOC cost
SOC budgets balloon for a predictable set of reasons. Tool sprawl — where each threat category has its own platform, its own console, its own alert queue — creates integration debt that consumes analyst time just to manage. SIEM licensing scales with data volume, which means every new data source is another cost centre. And the headcount model — hiring analysts to review alerts — is a treadmill: more alerts means more analysts, which means more cost, with no ceiling in sight.
The result is a function that is both expensive and fragile. Remove three analysts and detection capability drops significantly. That is not a mature operating model.
The automation gap
The most underutilised lever in most SOC environments is automation. SOAR platforms have existed for years, but implementation tends to be shallow — a handful of enrichment playbooks and some basic ticketing integration, rather than genuine reduction in analyst workload. The reason is almost always the same: automation was bolted on after the fact, rather than engineered in from the start.
A well-engineered SOAR deployment can automate 40–70% of tier-1 alert handling. Not by replacing analyst judgement on complex incidents, but by eliminating the repetitive enrichment, triage and false-positive suppression work that consumes the majority of analyst time. The analysts who remain work on harder problems, stay engaged longer, and cost less to retain.
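To make the shape of that tier-1 automation concrete, here is a minimal sketch of a triage playbook in Python. Everything in it is illustrative rather than a real SOAR API: the `Alert` structure, the benign-source allow-list, the threat-intel lookup and the disposition values are all assumptions chosen to show the suppress/enrich/route pattern, not a specific vendor's playbook language.

```python
from dataclasses import dataclass, field

# Hypothetical allow-list of sources known to generate false positives,
# e.g. internal vulnerability scanners. Values are illustrative.
KNOWN_BENIGN_SOURCES = {"10.0.8.15", "10.0.8.16"}

@dataclass
class Alert:
    source_ip: str
    category: str
    severity: str
    enrichment: dict = field(default_factory=dict)

def triage(alert: Alert, threat_intel: dict) -> str:
    """Return a disposition: 'suppress', 'auto_close' or 'escalate'."""
    # Step 1: false-positive suppression against the benign allow-list,
    # before any analyst ever sees the alert.
    if alert.source_ip in KNOWN_BENIGN_SOURCES:
        return "suppress"
    # Step 2: enrichment — attach reputation data so nobody has to
    # look it up manually at review time.
    alert.enrichment["reputation"] = threat_intel.get(alert.source_ip, "unknown")
    # Step 3: simple disposition logic for the high-volume,
    # low-complexity categories; everything ambiguous still escalates.
    if alert.enrichment["reputation"] == "malicious":
        return "escalate"
    if alert.severity == "low" and alert.category == "policy_violation":
        return "auto_close"
    return "escalate"
```

The design point is the default: anything the rules cannot confidently suppress or close falls through to a human, which is what keeps analyst judgement in the loop for the hard cases.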
Detection as code
Most organisations manage their detection logic the same way they managed infrastructure in 2012 — manually, inconsistently, and with no version control. Detection rules are created by individuals, stored in SIEM UIs, and drift over time as the environment changes. There is no peer review. There is no testing. There is no audit trail.
Detection as code — managing SIEM rules, correlation logic and response playbooks in a version-controlled repository with CI/CD pipelines, automated testing and change approval gates — transforms the detection engineering function. It enables teams to ship more detection, retire stale rules confidently, and evidence coverage to auditors without manual effort.
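What "automated testing" of detection logic looks like in practice can be sketched in a few lines. This example stores a rule as data in the repository and replays fixture events against it in CI; the rule schema, the `condition` callable and the fixtures are all assumptions for illustration — real deployments would typically use a format like Sigma or a vendor DSL, with the same test-on-every-change workflow.

```python
# A detection rule stored in the repo as data, not in a SIEM UI.
# The schema here is illustrative, not a standard format.
RULE = {
    "id": "auth-001",
    "title": "Excessive failed logins in one batch of events",
    "condition": lambda events: sum(
        1 for e in events if e["action"] == "login_failed"
    ) >= 5,
}

def rule_fires(rule: dict, events: list) -> bool:
    """Evaluate a rule against a batch of events — the core of a CI check
    that runs on every pull request before a rule reaches production."""
    return rule["condition"](events)

# Positive and negative fixtures checked into the repo alongside the rule,
# so a change that breaks detection fails review instead of failing silently.
POSITIVE = [{"action": "login_failed"}] * 5
NEGATIVE = [{"action": "login_failed"}] * 2 + [{"action": "login_ok"}]
```

Once rules live in a repository with fixtures like these, peer review, change approval and an audit trail come for free from the version control system itself.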
The economics are significant. Teams that have implemented detection as code consistently report 30–50% reductions in the time spent managing detection logic, with measurably better coverage quality.
AI-assisted triage: where it actually helps
Large language models are beginning to deliver genuine value in tier-1 triage — not as autonomous decision-makers, but as enrichment engines. An LLM-assisted triage workflow can summarise alert context, pull relevant threat intelligence, cross-reference recent incidents, and produce a prioritised recommendation in seconds. The analyst reviews and decides; the model does the legwork.
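A sketch of that workflow, in Python: context is assembled into a single prompt, the model produces a recommendation, and the output is attached to the alert rather than acted on. The `llm` parameter is a stand-in for any model client — a deliberate assumption, not a specific provider's API — and the field names are illustrative.

```python
def build_triage_prompt(alert: dict, intel: dict, recent_incidents: list) -> str:
    """Assemble alert context, threat intel and incident history into one
    prompt. The model summarises and recommends; the analyst decides."""
    lines = [
        f"Alert: {alert['title']} (severity: {alert['severity']})",
        f"Threat intel for {alert['source_ip']}: "
        + intel.get(alert["source_ip"], "no match"),
        "Recent related incidents: " + (", ".join(recent_incidents) or "none"),
        "Task: summarise the context and recommend a priority (P1-P4) "
        "with reasoning.",
    ]
    return "\n".join(lines)

def assisted_triage(alert: dict, intel: dict, recent_incidents: list, llm):
    """`llm` is any callable taking a prompt string and returning text —
    a placeholder for a real model client, which is an assumption here."""
    prompt = build_triage_prompt(alert, intel, recent_incidents)
    recommendation = llm(prompt)
    # The model's output is a recommendation attached to the alert,
    # never an automatic resolution — the human stays in the loop.
    return {"alert": alert, "recommendation": recommendation}
```

The useful property is that the expensive part — gathering and summarising context — happens in seconds, while the decision itself remains a one-line review for the analyst.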
This is not the same as vendor AI features that promise to "automatically resolve" alerts. Those claims should be treated with deep scepticism. The value of LLMs in the SOC today is in accelerating human decision-making, not replacing it.
What a leaner SOC actually looks like
The pattern that works is: consolidate data into a well-governed SIEM or XDR platform; engineer automation for the high-volume, low-complexity alert categories; implement detection as code so that coverage is managed with the same rigour as infrastructure; and deploy AI-assisted enrichment to accelerate the analyst workflows that cannot be automated.
The result, typically, is a smaller team doing substantially better work — with measurable coverage, evidenced detection, and a cost trajectory that doesn't automatically increase as the threat landscape evolves.