A Series B SaaS company. £12M ARR. 85 employees. A GTM team of 32 across sales, marketing, and customer success.

And 14 tools powering the revenue engine.

Total annual spend: £527k. That’s before the RevOps team spent 40% of their capacity just keeping the integrations alive. Before the quarterly “data reconciliation sprints” that pulled three people off strategic work for a week at a time. Before the onboarding tax, where every new hire needed six weeks to learn the tool landscape.

The stack wasn’t broken. It was worse than broken. It was functioning just well enough that nobody questioned it.

I was brought in to run a GTM tech stack audit as part of a broader operational design engagement. The brief was to “find some savings.” What we found was an architecture problem masquerading as a procurement problem.

The Starting Point: 14 Tools, Zero Architecture

Here’s what the stack looked like when I mapped it:

Core systems: CRM (Salesforce, £82k/yr), marketing automation (HubSpot Marketing Hub, £38k/yr), customer success platform (Gainsight, £45k/yr).

Overlay tools: Forecasting (Clari, £52k/yr), conversation intelligence (Gong, £34k/yr), sales engagement (Outreach, £41k/yr), data enrichment (ZoomInfo, £48k/yr), CPQ (Salesforce CPQ, £36k/yr).

Point solutions: Attribution (Bizible, £28k/yr), proposal generation (Proposify, £12k/yr), contract management (Juro, £18k/yr), scheduling (Chili Piper, £15k/yr), intent data (Bombora, £42k/yr), customer feedback (Pendo, £36k/yr).

Every one of these was purchased to solve a real problem. That’s what makes stack bloat so insidious. Each decision was rational in isolation. But nobody was making architecture decisions. They were making procurement decisions. And there’s a massive difference.

The result was a classic multi-tool data tax: 14 platforms, 23 integrations (11 of which were custom-built middleware), and a data model so fragmented that the same contact record could exist in seven different states across seven different systems.

The Audit: What We Actually Found

The audit ran over three weeks. Not because the methodology was complex, but because untangling dependencies in a 14-tool stack takes time. You can’t just look at licence costs. You have to trace every data flow, every integration, every workflow that touches more than one system.

Phase 1: Usage and adoption mapping

We pulled usage data from every platform. Logins, feature utilisation, API call volumes, workflow execution rates. The findings were stark.

Three tools had fewer than 30% of licensed users logging in monthly. Proposify was being used by exactly two reps. The rest had reverted to Google Docs. Bombora’s intent signals were feeding a dashboard that nobody had opened in four months.

£72k annually on tools that the team had already abandoned in practice.
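The adoption screen itself is simple once the usage data is pulled: one ratio per tool, flagged against a threshold. A sketch, with illustrative seat counts and login figures rather than the client's real data:

```python
# Hypothetical sketch of the Phase 1 adoption screen. Tool names, seat
# counts, and login figures are illustrative, not the client's real data.
ADOPTION_FLOOR = 0.30  # below this monthly-active rate, a tool counts as abandoned

tools = [
    {"name": "Proposify", "seats": 25, "monthly_active": 2,  "annual_cost": 12_000},
    {"name": "Bombora",   "seats": 10, "monthly_active": 1,  "annual_cost": 42_000},
    {"name": "Gong",      "seats": 30, "monthly_active": 27, "annual_cost": 34_000},
]

def abandoned(tool: dict) -> bool:
    """A tool is abandoned when its monthly-active rate falls below the floor."""
    return tool["monthly_active"] / tool["seats"] < ADOPTION_FLOOR

wasted = sum(t["annual_cost"] for t in tools if abandoned(t))
for t in tools:
    rate = t["monthly_active"] / t["seats"]
    flag = "  <- flag for removal" if abandoned(t) else ""
    print(f'{t["name"]}: {rate:.0%} monthly adoption{flag}')
print(f"Annual spend on abandoned tools: £{wasted:,}")
```

The hard part is not the arithmetic. It is getting honest usage exports out of 14 different admin consoles.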

Phase 2: Overlap analysis

This is where it got interesting. Seven of the 14 tools had significant functional overlap.

Gong and Outreach both tracked email engagement, with different numbers. Clari and Salesforce both housed pipeline data, with different stage definitions. HubSpot and Bizible both ran attribution models, producing conflicting results that marketing and sales used to argue with each other in every pipeline review.

The overlap wasn’t just redundancy. It was actively creating conflicting data. The RevOps team spent an estimated 15 hours per week reconciling numbers across platforms. That’s nearly a full headcount consumed by a problem the tools themselves were creating.

Phase 3: Integration burden

This was the hidden cost nobody had quantified.

The 23 integrations required constant maintenance. When Salesforce pushed an API update, three downstream integrations broke. When Outreach changed their webhook format, the enrichment pipeline failed silently for two weeks before anyone noticed. The RevOps team had become, functionally, an integration maintenance team.

40% of RevOps capacity was consumed by keeping tools talking to each other. Not building process. Not improving data quality. Not enabling the GTM team. Just plumbing.

Phase 4: Value attribution

We scored every tool on three dimensions. Direct revenue impact: does this tool directly enable deals to close? Data contribution: does it make the system of record more accurate? Operational necessity: would removing it break a critical workflow?

Four tools scored high on all three. Three scored high on one dimension only. Seven scored low across the board when you accounted for the integration overhead they created.
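The rubric can be sketched as a small scoring model. The 0-2 dimension scores, the overhead penalty rate, and the verdict thresholds below are all assumptions for illustration; the engagement's actual weights differed:

```python
# Illustrative version of the Phase 4 rubric. Dimension scores (0-2),
# the overhead penalty, and the verdict thresholds are assumptions,
# not the engagement's actual weights.
from dataclasses import dataclass

@dataclass
class ToolScore:
    name: str
    revenue_impact: int          # directly enables deals to close? (0-2)
    data_contribution: int       # improves the system of record? (0-2)
    operational_necessity: int   # removal breaks a critical workflow? (0-2)
    integration_hours_per_month: float  # maintenance burden the tool creates

    def net_score(self, hourly_penalty: float = 0.02) -> float:
        # integration overhead is charged against the tool's gross value
        gross = self.revenue_impact + self.data_contribution + self.operational_necessity
        return gross - self.integration_hours_per_month * hourly_penalty

def verdict(tool: ToolScore) -> str:
    score = tool.net_score()
    if score >= 5:
        return "keep"
    if score >= 3:
        return "review"
    return "remove or replace"
```

Under these assumed weights, a CRM scoring 2/2/2 with 10 hours of monthly integration upkeep nets 5.8 and stays; an intent-data feed scoring 0/1/0 with 20 hours nets 0.6 and goes. The key design choice is that integration burden is a first-class negative term, not a footnote.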

The Architecture: What Stayed, What Left, What Was Built

Rationalisation is not a cost-cutting exercise. It’s an architecture exercise. The goal isn’t fewer tools. It’s a coherent system where every component has a clear role and data flows in one direction.

What stayed (6 tools, £243k/yr)

Salesforce remained as the system of record, but with a rebuilt data model. Proper validation rules. Enforced stage definitions. Required fields that were actually required. This alone eliminated the need for Clari as a forecasting overlay, because the underlying data became trustworthy.

HubSpot stayed for marketing automation. Gong stayed for conversation intelligence, but with its engagement tracking disabled in favour of a single source from HubSpot. Outreach stayed for sales engagement. ZoomInfo stayed for enrichment, but with a reduced licence tier after we eliminated the seats that Bombora’s intent data had been justifying.

Gainsight stayed for customer success, though I flagged it as a candidate for replacement in the next review cycle given emerging alternatives.

What was removed (5 tools, £173k saved)

Clari, Bizible, Bombora, Chili Piper, and Pendo. Each removal required a migration plan. Bizible’s attribution data was rebuilt natively in HubSpot. Chili Piper’s scheduling was replaced with Calendly’s free tier plus a lightweight routing rule in Salesforce.

What was replaced with custom AI builds (3 tools, £66k replaced, £12k build cost)

This is where the death of the £100k SaaS contract thesis plays out in practice.

Salesforce CPQ (£36k/yr) was replaced with a custom quoting system. The company had four products, two pricing models, and a discount approval workflow. That’s not complexity that justifies a £36k annual platform. We built a custom quoting tool using AI-assisted development that pulled pricing rules from a single config table, generated quotes as PDFs, and routed approvals through Slack. Total build time: two weeks. Ongoing cost: effectively zero beyond Salesforce API limits.
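A heavily simplified sketch of the shape of that system. Product names, prices, and the approval threshold below are hypothetical, and the Slack routing is reduced to a flag; the real build read its config table from Salesforce. The point is the architecture: pricing lives in one table, and the quoting logic is a few dozen lines.

```python
# Simplified, hypothetical sketch of a config-driven quoting engine.
# Products, prices, and the 15% threshold are illustrative; the
# production system routed approvals through Slack rather than
# returning a flag.
PRICE_BOOK = {
    "core":      {"unit_price": 500.0,    "model": "per_seat"},
    "analytics": {"unit_price": 12_000.0, "model": "flat"},
}
MAX_SELF_SERVE_DISCOUNT = 0.15  # above this, the quote routes to an approver

def quote_line(product: str, seats: int = 1) -> float:
    rule = PRICE_BOOK[product]
    return rule["unit_price"] * (seats if rule["model"] == "per_seat" else 1)

def build_quote(lines: list[tuple[str, int]], discount: float = 0.0) -> dict:
    subtotal = sum(quote_line(product, seats) for product, seats in lines)
    return {
        "subtotal": subtotal,
        "total": subtotal * (1 - discount),
        "needs_approval": discount > MAX_SELF_SERVE_DISCOUNT,
    }
```

Twenty seats of the per-seat product plus the flat-fee product quotes at £22,000 before discount; anything above a 15% discount flips needs_approval and, in the real system, would ping the approver in Slack.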

Juro (£18k/yr) and Proposify (£12k/yr) were replaced with a single document generation system. Contracts and proposals were templated, populated from Salesforce data, and delivered for e-signature through DocuSign, which the company already had but wasn’t using for this workflow. The custom layer handled the logic. The existing tool handled the signing.
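The document layer follows the same pattern: a template, populated from CRM fields, handed off to the signing tool. A minimal sketch, with illustrative field names and template text; the production system pulled live Salesforce data and delivered through DocuSign's API:

```python
# Hedged sketch of the document-generation layer. Field names and
# template text are illustrative, not the client's actual contracts.
from string import Template

CONTRACT_TEMPLATE = Template(
    "This agreement is between $vendor and $customer for $product, "
    "at £$annual_fee per year, commencing $start_date."
)

def render_contract(record: dict) -> str:
    # substitute() raises KeyError on a missing field, so an incomplete
    # CRM record fails loudly instead of shipping a contract with a blank
    return CONTRACT_TEMPLATE.substitute(record)

doc = render_contract({
    "vendor": "ExampleCo Ltd",
    "customer": "Acme plc",
    "product": "Core Platform",
    "annual_fee": "24,000",
    "start_date": "1 January 2025",
})
```

Failing loudly on missing fields matters more than it looks: a blank in a generated contract is a legal problem, not a formatting one.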

These weren’t science experiments. They’ve been running in production for over a year. The quoting system has processed over 400 quotes. The document system has generated over 600 contracts. Zero downtime. Zero “we need to go back to the old tool” conversations.

The Results

The numbers after 12 months of operation:

£284k in annual spend eliminated. From £527k to £243k in direct tool costs, plus the £12k one-time build cost for the custom replacements. Net first-year saving of £272k. Net annual saving from year two onwards of £284k.

80% reduction in data reconciliation time. From 15 hours per week to approximately 3. The remaining reconciliation is between HubSpot and Salesforce, which is manageable and follows a single, documented sync protocol.

RevOps capacity recovered. The team went from spending 40% of their time on integration maintenance to under 10%. That freed capacity was redirected to building the forecasting model, improving territory design, and running the AI revenue architecture initiative that’s now driving their next stage of growth.

New hire onboarding dropped from 6 weeks to 2 weeks. Fewer tools means fewer logins, fewer training sessions, fewer “ask Sarah how that integration works” conversations.

Forecast accuracy improved by 23 percentage points. From 64% to 87% weighted forecast accuracy. Not because we bought a better forecasting tool. Because we fixed the data that forecasting depends on.

Lessons: Why This Requires Architecture Thinking

The temptation with tech stack rationalisation is to treat it as a procurement exercise. Find the expensive tools. Negotiate better rates. Cancel the ones nobody uses. That approach saves money. It doesn’t fix the problem.

The problem is always architectural. It’s data flowing through too many systems. It’s conflicting sources of truth. It’s an integration layer that consumes more resources than the tools it connects.

Three principles guided this engagement:

Start with the data model, not the tool list. Before deciding what to keep or cut, you need to map every data entity, every flow, every transformation. The tool decisions follow from the data architecture, not the other way around.

Measure integration cost, not just licence cost. A £15k tool that requires £40k worth of RevOps time to maintain is a £55k tool. Most companies never make this calculation, which is why “cheap” point solutions proliferate.
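The calculation is worth writing down. Assuming roughly 500 hours a year of RevOps time at a loaded rate of £80/hour (both figures illustrative, chosen to reproduce the £40k above):

```python
# The true-cost principle as arithmetic. The 500 hours/year and £80/hour
# loaded rate are illustrative assumptions, not measured figures.
def true_annual_cost(licence: float, maintenance_hours_per_year: float,
                     loaded_hourly_rate: float) -> float:
    """A tool's real annual cost: licence fee plus the ops time it consumes."""
    return licence + maintenance_hours_per_year * loaded_hourly_rate

cost = true_annual_cost(15_000, 500, 80)  # 15,000 + 40,000 = 55,000
```

Run this for every tool in the stack and the ranking of "cheap" and "expensive" tools usually inverts.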

Build only what’s truly specific to your business. The custom AI builds worked because they replaced tools where the company’s requirements were simpler than the platform’s capabilities. If your CPQ needs are genuinely complex, buy Salesforce CPQ. If you have four products and two pricing models, you don’t need a platform. You need a script with a config file.

Tech stack rationalisation is not about spending less on software. It’s about spending nothing on architecture that actively works against you. Every redundant tool, every conflicting data source, every integration that exists because two systems can’t agree on what a “qualified opportunity” means, that’s not a tool cost. That’s a tax on every decision your GTM team makes.

The £284k this company saved wasn’t hiding in licence fees. It was hiding in the architecture that nobody had designed on purpose.