
As the new year sets in, technology leaders are asking a pointed question: after all the hype, where is the business value from AI? Over the past two years, organisations have eagerly rolled out horizontal AI tools, from out-of-the-box chatbots to co-pilots embedded in productivity suites like Microsoft 365 Copilot, in the hope of achieving quick productivity gains. Yet studies from MIT and McKinsey suggest that close to 95 percent of generative AI initiatives fail to deliver meaningful return on investment.

The reality now coming into focus is that simply inserting generic AI into existing workflows rarely changes the fundamentals of how a business operates. While horizontal AI is often a necessary starting point, and one we actively advocate at Digital Bricks, its impact tends to plateau when it is treated as an end state rather than a foundation. When organisations rely solely on broad, general-purpose AI without addressing the deeper structure of their business processes, the result is often incremental efficiency at best, rather than measurable strategic value.

Horizontal AI

Horizontal AI tools are general-purpose by design. They aim for broad productivity gains across any industry or role: automating emails, generating meeting notes, speeding up coding or copywriting. With Microsoft 365 Copilot, the focus is usually on time and cost savings.

“With general tools such as chatbots, co-pilots and digital assistants, the focus is mostly on productivity, time and cost reduction,”

-Neil Sholay, Vice President of AI Business Value at Oracle.

That focus can indeed yield efficiency improvements. The limitation of this horizontal approach is that it rarely touches the heart of business operations. General-purpose AI tools rarely solve a specific, pressing business problem on their own. An executive might marvel at a chatbot answering FAQs, but if core processes like order fulfillment or risk analysis remain unchanged, the AI isn’t moving any needle that matters. Sholay argues that simply adding a co-pilot or chatbot on top of existing workflows “does not lead to a transformation of the core of the business processes.” Many such pilots stall out precisely because they lack a clear, measurable impact.

The Vertical AI Difference

Increasingly, the leaders in AI adoption are those taking a vertical approach – diving deep into domain-specific AI solutions tailored to their industry, data and business processes. Vertical AI (also called domain-specific AI) refers to systems built with expertise in a particular field, whether that’s banking, supply chain logistics, healthcare diagnostics, or any other specialised arena. Unlike horizontal tools that offer generic skills to everyone, vertical AI is purpose-built to solve complex problems at the core of a specific business. It’s the difference between a general helper and a master craftsman: one is a jack-of-all-trades, the other is deeply knowledgeable about the task at hand.

Horizontal & Vertical AI in the Microsoft AI Stack

Neil Sholay offers a good example: he explains that Oracle has bet heavily on vertical AI applications that integrate directly into workflows. Why? Because they consistently see higher and more measurable returns than from universal AI toolkits. Specialized AI – tuned for tasks like fraud detection in banking, predictive maintenance in manufacturing, or route optimization in logistics – tends to deliver dramatic improvements where it counts. These systems are embedded in the workflow, often rethinking how that workflow operates from the ground up.

Several factors give vertical AI an edge. First, it operates on real-time, domain-specific data, applying patterns learned from historical training to current live information (a process known as AI inference). This means a vertical AI for, say, inventory management isn’t generating generic advice – it’s reading yesterday’s sales, today’s deliveries, and weather forecasts, then making decisions that solve your inventory problem. Second, vertical solutions benefit from clearer governance and feedback loops. Because they tackle defined tasks in a controlled context, it’s easier to build in oversight. Users and experts can continually feed corrections and improvements to the model (e.g. flagging a fraud alert as a false positive or providing the outcome of a predictive maintenance schedule), and the system can learn from this feedback within the domain. Over time, the AI becomes more accurate and aligned to the business, something a broad model never achieves when it’s not tuned to specifics.
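
To make that inference-plus-feedback pattern concrete, here is a minimal, illustrative sketch in Python. The names (InventorySignal, InventoryModel, record_feedback) and the simple heuristic are our own placeholders rather than a real trained model; the point is the shape of the loop: live domain data in, a decision out, expert feedback back into the system.

```python
from dataclasses import dataclass


@dataclass
class InventorySignal:
    """Live, domain-specific inputs fed to the model at inference time."""
    sku: str
    yesterdays_sales: int
    inbound_deliveries: int
    weather_risk: float  # 0.0 (none) to 1.0 (severe), taken from a forecast feed


@dataclass
class ReorderDecision:
    sku: str
    reorder_qty: int
    rationale: str


class InventoryModel:
    """Stand-in for a model trained on historical demand data."""

    def __init__(self) -> None:
        self.feedback: list[tuple[InventorySignal, int]] = []

    def decide(self, signal: InventorySignal) -> ReorderDecision:
        # Simple heuristic in place of a learned policy: cover yesterday's demand
        # net of stock already on the way, padded when bad weather threatens supply.
        base = max(signal.yesterdays_sales - signal.inbound_deliveries, 0)
        buffer = int(base * signal.weather_risk)
        return ReorderDecision(
            sku=signal.sku,
            reorder_qty=base + buffer,
            rationale=f"demand={signal.yesterdays_sales}, "
                      f"inbound={signal.inbound_deliveries}, "
                      f"weather_risk={signal.weather_risk}",
        )

    def record_feedback(self, signal: InventorySignal, actual_shortfall: int) -> None:
        # Experts feed outcomes back; a production system would recalibrate on this data.
        self.feedback.append((signal, actual_shortfall))


model = InventoryModel()
signal = InventorySignal("SKU-42", yesterdays_sales=120, inbound_deliveries=40, weather_risk=0.3)
decision = model.decide(signal)                     # inference on today's live data
print(decision)
model.record_feedback(signal, actual_shortfall=5)   # expert-supplied outcome closes the loop
```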

Crucially, vertical AI can drive changes that horizontal tools cannot touch. Organizations often discover that implementing a domain-focused AI solution forces them to break down silos and connect systems in new ways – effectively redesigning business processes with AI at the center. This kind of end-to-end reengineering is where the real transformative power of AI lies. We see AI agents, large language models, and other AI components being woven into the fabric of workflows, data streams, and domain knowledge bases to create something qualitatively new. It’s not just answering questions or automating a task here and there – it’s building intelligent workflows that can sense, decide, and act within the context of the business.

The Layers of a Vertical AI Solution

How do vertical AI systems achieve such depth? They typically comprise multiple layers working in concert, each adding a piece of the puzzle:

  • Knowledge and Retrieval Layer: A vertical AI taps into rich, domain-specific data sources – whether internal databases, industry knowledge bases, or real-time sensor feeds. This layer ensures the AI always works with relevant, up-to-date context. Robust retrieval mechanisms (like enterprise search or vector databases for text embeddings) are used to fetch the information most pertinent to the task at hand.
  • Domain-Specific AI Models: At the heart is a model (or set of models) specialized for the task and industry. This could be a fine-tuned language model fluent in financial jargon for a fintech application, or a computer vision model trained on manufacturing defects for an assembly line. Because these models are trained on proprietary, context-rich data, they perform the task with a level of accuracy and nuance that a general model can’t match. They execute the core domain task – be it diagnosing an X-ray or flagging an anomalous transaction – and provide outputs or decisions tailored to the business context.
  • Autonomous Orchestration Layer: An emerging frontier in vertical AI is the use of AI agents that can orchestrate multi-step processes with minimal human input. This layer is about tying tasks together and acting on the results. These AI agents use techniques like dynamic chaining of prompts and actions, adjusting their workflow based on live inputs or intermediate results. They operate under predefined guardrails (to ensure they don’t exceed authority or risk), but within those limits they exhibit a degree of autonomy. This orchestration layer effectively turns decision outputs into real-world operations. It’s what makes the difference between an AI that suggests and an AI that implements.

By combining these layers – contextual knowledge, specialized task execution, and autonomous action – vertical AI solutions can handle end-to-end scenarios in a business process. They retrieve the right data, make an expert judgment, and then carry it through to a tangible outcome. The payoff is significant: companies get AI systems that not only know their business, but can also do business work, all while fitting into the existing IT ecosystem with proper oversight.
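
As a minimal sketch of how the three layers can work together, consider an insurance-claims scenario of our own invention. Every function name and threshold below is a hypothetical placeholder rather than a reference implementation, but the control flow mirrors the retrieve-judge-act pattern just described, including the guardrail that hands high-risk cases back to a human.

```python
from typing import Callable

# --- Knowledge and retrieval layer (assumed: an enterprise search or vector store) ---
def retrieve_context(claim_id: str) -> dict:
    """Fetch the domain data relevant to this task; a canned record stands in here."""
    return {"claim_id": claim_id, "amount": 12_500, "history": ["late payment", "address change"]}

# --- Domain-specific model (assumed: a fine-tuned classifier in production) ---
def score_fraud_risk(context: dict) -> float:
    """Stand-in for a specialised model; returns a risk score in [0, 1]."""
    return 0.15 + 0.35 * ("address change" in context["history"])

# --- Autonomous orchestration layer operating within guardrails ---
def handle_claim(claim_id: str,
                 approve: Callable[[str], None],
                 escalate: Callable[[str], None]) -> None:
    context = retrieve_context(claim_id)   # 1. retrieve the right data
    risk = score_fraud_risk(context)       # 2. make an expert judgment
    if risk < 0.3:                         # 3. act autonomously where it is safe...
        approve(claim_id)
    else:
        escalate(claim_id)                 # ...and route high-risk cases to a human

handle_claim("CLM-001",
             approve=lambda c: print(f"{c}: auto-approved"),
             escalate=lambda c: print(f"{c}: routed to human review"))
```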

Importantly, vertical AI isn’t about throwing caution to the wind and letting machines run wild. The most successful domain-specific AI deployments are characterized by careful design and human oversight at critical junctures. Let’s explore what that entails.

Critical Success Factors for Vertical AI

If vertical AI is so promising, how can organizations ensure its success? There are some clear success factors that distinguish the 5% of AI projects that thrive from the 95% that falter. In conversations with CIOs and AI leaders (such as Neil Sholay of Oracle), five key factors emerge for successfully deploying AI in the enterprise:

Robust, Accessible Data Foundations: Quality data is the fuel of AI, and in vertical solutions this means having the right domain-specific data ready for use. Companies must invest in making their enterprise data both accessible and AI-ready – cleaned, unified, and rich with the context of their business. Whether it’s historical maintenance logs for an airline’s predictive engine maintenance AI or years of call transcripts for a telecom customer service AI, these systems thrive only if fed with relevant, high-quality data. Often this involves breaking down internal data silos and ensuring data governance is in place so the AI can tap into all the information it needs (within compliance boundaries). Simply put, no data, no AI – and the better the data, the better the results.

Human Oversight and Control: Even as AI takes on more tasks, human judgment must remain in the loop.

“Human intervention must always be ingrained in the control mechanisms to ensure the balance between autonomy and risk,”

-Max Dinser, Chief Executive Officer at Digital Bricks

This means from day one, vertical AI should be designed with checkpoints or controls where humans review and guide the outcomes. Human oversight is not just about catching mistakes – it also provides accountability and ethical governance, especially important in high-stakes applications. The goal is to let AI operate autonomously where it’s safe and efficient, but always under a framework of human control that can step in whenever the AI’s decisions could have serious implications.

Strong Governance and Orchestration of AI Agents: As AI agents and automated workflows proliferate, companies need a governance framework to manage these AI components. This includes overseeing how models are used, ensuring security and privacy of data, and orchestrating the interactions between multiple AI agents or tools. It’s increasingly common for “citizen developers” (non-programmers in the business) to create AI-driven automations using no-code tools like Microsoft Copilot Studio. Governance must extend to these user-built agents as well: Who monitors their performance? Are they following compliance rules? A centralized AI governance board or platform can enforce standards, track outcomes, and ensure that all these moving parts serve the company’s strategy reliably and safely.

User-Centric Design (Conversational and Contextual UI): No matter how powerful an AI system is, if end-users cannot easily work with it, adoption will lag. A critical but sometimes overlooked success factor is designing the user interface and experience such that interacting with AI feels natural and intuitive. In practice, this means making AI tools conversational, context-aware, and dynamic. Instead of forcing users to adapt to the AI, the AI should adapt to the users’ workflows. A vertical AI solution might present itself as a conversational assistant within the apps employees already use. By fitting seamlessly into how people work, vertical AI becomes a trusted collaborator rather than a complicated new system. This dramatically improves adoption rates and business impact.

Built-in Business Value Metrics: Finally, organizations must embed measurement of business impact into the AI systems from the start. It’s not enough to deploy an AI and hope for the best – success is achieved by continuously tracking how the AI is moving the needle on key metrics. By instrumenting AI solutions with analytics, companies can iteratively refine their use cases and settings to maximize value. Moreover, tying AI to mission-critical data and processes (as vertical AI does) naturally creates bigger impacts. An AI embedded in, say, the finance department’s processes can directly influence working capital or risk exposure; one in supply chain can tangibly cut inventory costs or improve fulfillment times. Indeed, we’re already seeing major European firms like ING and KPN weaving AI into their core workflows in finance, supply chain, and HR – areas where improvements show up on quarterly reports. From day one, ask: “How will this AI deliver value, and how will we know?” – and bake those answers into the system’s design.
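
One way to bake those answers into the design is to instrument every AI-handled task with a small value ledger from day one. The sketch below is illustrative only: the metric names and the invoice-matching use case are assumptions, not a prescription, but recording time saved, cost, and outcome per task is what makes the quarterly-report conversation possible.

```python
import time
from dataclasses import dataclass, field


@dataclass
class ValueLedger:
    """Accumulates per-task metrics so AI impact shows up in business terms."""
    records: list[dict] = field(default_factory=list)

    def log(self, use_case: str, minutes_saved: float, cost_eur: float, outcome: str) -> None:
        # One row per AI-handled task, captured at the point of execution.
        self.records.append({"use_case": use_case, "minutes_saved": minutes_saved,
                             "cost_eur": cost_eur, "outcome": outcome, "ts": time.time()})

    def summary(self, use_case: str) -> dict:
        rows = [r for r in self.records if r["use_case"] == use_case]
        return {"tasks": len(rows),
                "hours_saved": sum(r["minutes_saved"] for r in rows) / 60,
                "total_cost_eur": sum(r["cost_eur"] for r in rows)}


ledger = ValueLedger()
ledger.log("invoice-matching", minutes_saved=12, cost_eur=0.04, outcome="matched")
ledger.log("invoice-matching", minutes_saved=9, cost_eur=0.05, outcome="escalated")
print(ledger.summary("invoice-matching"))
```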

These success factors underscore a common theme: treating AI as a strategic capability that is woven into the fabric of the business, rather than a black-box tool handed off to IT. It requires forethought – about data, people, process, and value – but it ensures that when AI is turned on, it runs in the right direction.

Licensing and Cost

Beyond technical success, there is a practical question every executive eventually asks: what is this going to cost us? Licensing and pricing models play a decisive role in whether AI initiatives move beyond experimentation.

Microsoft has taken a layered approach. With Microsoft 365 Copilot, Copilot Studio, Azure and Foundry, AI capabilities are increasingly embedded across the stack, but with a clear distinction between horizontal productivity and vertical extensibility. Out-of-the-box Copilot features are licensed per user, lowering the barrier to entry and enabling organisations to realise immediate productivity gains (horizontal). More advanced scenarios, such as extending Copilot with domain data, workflows, and custom actions via Copilot Studio, shift the cost model toward usage and infrastructure rather than fixed licence uplifts. Foundry further reinforces this model by giving organisations full control over model selection, orchestration, memory, and governance, while consumption is driven by actual compute, storage, and inference requirements.

This approach makes the economics of vertical AI more transparent, but not necessarily cheaper by default. As with any cloud-native architecture, scale matters. Running large language models, orchestrating agent workflows, or executing high-volume inference pipelines consumes compute, memory, and networking resources. These costs do not disappear simply because AI is embedded in the platform. Instead, they surface through Azure consumption. The advantage is predictability and control: organisations can see where costs originate, optimise workloads, and align spend with business-critical use cases rather than opaque licence bundles.

AI cost management shifts from negotiating licence premiums to architecting responsibly. Questions move from “what does this feature cost per user?” to “what does this workflow cost per transaction, per case, or per outcome?” This is where vertical AI becomes economically compelling. When AI is embedded directly into revenue-generating or cost-intensive processes, its value becomes measurable. If a vertical agent built on Azure reduces processing time, mitigates risk, or unlocks new capacity at scale, infrastructure cost stops being a concern and becomes a lever.
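
To make “cost per transaction, per case, or per outcome” concrete, a back-of-the-envelope consumption model can look like the sketch below. Every price in it is a placeholder of our own, not a published Azure or Foundry rate; the exercise of rolling meter-level consumption up into a per-outcome figure is what turns infrastructure cost into a lever rather than a surprise.

```python
# Illustrative back-of-the-envelope: translating consumption-based pricing
# into a cost-per-outcome figure. All rates below are assumed placeholders;
# substitute your own negotiated meter prices.

PRICE_PER_1K_INPUT_TOKENS = 0.003   # EUR, assumed
PRICE_PER_1K_OUTPUT_TOKENS = 0.012  # EUR, assumed
ORCHESTRATION_OVERHEAD = 0.002      # EUR per model call: storage, networking, logging (assumed)


def cost_per_case(input_tokens: int, output_tokens: int, llm_calls: int) -> float:
    token_cost = (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
               + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    return llm_calls * ORCHESTRATION_OVERHEAD + token_cost


# A hypothetical claims-triage workflow: ~6k tokens of context, ~800 tokens of output, 3 model calls.
per_case = cost_per_case(input_tokens=6000, output_tokens=800, llm_calls=3)
print(f"cost per case: EUR {per_case:.4f}")                  # ~EUR 0.03 in this illustration
print(f"cost per 100,000 cases/year: EUR {per_case * 100_000:,.0f}")
```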

Licensing considerations therefore loop back to impact. A vertical AI system that demonstrably saves millions in operational cost or accelerates revenue justifies its infrastructure footprint. The risk lies not in spending on AI, but in deploying it without a clear value anchor. A disciplined approach is to begin with tightly scoped pilots, measure real outcomes, and scale only where value is proven. From what we see at Digital Bricks, the market is steadily moving in this direction, toward value-aligned, consumption-based models that reward purposeful design over indiscriminate rollout. Microsoft’s stack, particularly when combining Copilot Studio with Azure AI Foundry, is well suited to this shift, provided organisations treat AI not as a feature, but as an operational capability with explicit ownership and accountability.

The Infrastructure Backbone

Hand in hand with cost comes the question of infrastructure. Under the hood of any successful vertical AI system sits a substantial technical backbone: high-performance compute, specialised accelerators, resilient data pipelines, secure integration layers, and continuous monitoring. Organisations that underestimate these requirements often discover that an AI pilot which ran comfortably on a limited dataset and a single environment becomes far more complex when scaled to production with expectations of high availability, low latency, and enterprise-grade reliability. As a result, CTOs are increasingly treating AI initiatives with the same architectural discipline as any mission-critical system.

The major cloud platforms are racing to provide the AI infrastructure that enterprises can depend on at scale. Microsoft’s investments in Azure, spanning GPU-optimised regions, AI-ready networking, and managed services for model hosting and orchestration, reflect a recognition that AI adoption rises or falls on infrastructure readiness. Microsoft Foundry builds on this foundation by abstracting much of the complexity involved in deploying, governing, and scaling models, while still allowing organisations to retain control over where workloads run and how resources are consumed. For enterprises, adopting AI on a mature cloud platform is not merely about access to models, but about inheriting an industrial-grade operating environment that would be impractical to replicate internally.

This acceleration in AI infrastructure investment coincides with a broader shift in enterprise architecture thinking. Not long ago, best-of-breed approaches dominated, with organisations deliberately spreading workloads across multiple vendors to avoid dependency and maximise flexibility. In practice, many are now re-evaluating this approach for AI workloads. As AI systems cut across data, security, identity, and application layers, fragmentation introduces friction. Integrating multiple AI platforms can complicate governance, increase security risk, and slow down deployment. In contrast, standardising the AI layer on a single platform, often Microsoft Azure in European enterprises, simplifies integration, unifies data access patterns, and enables consistent governance and monitoring without eliminating vendor diversity elsewhere in the stack.

For European organisations in particular, infrastructure decisions are inseparable from regulatory and compliance considerations. The EU AI Act places emphasis on risk management, transparency, and accountability, especially for AI systems operating in sensitive or regulated domains. Running vertical AI workloads within defined Azure regions, supported by established security, identity, and compliance controls, allows organisations to align innovation with regulatory expectations. This is increasingly important as enterprises move beyond experimentation and begin embedding AI directly into operational workflows where errors, bias, or downtime carry real consequences.

Ultimately, scaling vertical AI is not simply a question of adding more compute. It is an architectural challenge that requires balancing performance, governance, and integration at scale. Leaders who invest early in the right infrastructure backbone place themselves in a position where successful pilots can transition smoothly into enterprise-wide capabilities. Those who do not risk discovering that technical ambition has outpaced the foundations required to support it.

Adoption

Even with strong technology in place, organisational dynamics ultimately determine whether AI takes root. A recurring theme in AI adoption is the tension between bottom-up experimentation and top-down direction. Much of the early momentum has come from grassroots efforts, with engineers, data teams, and digitally curious business units experimenting with AI tools and building proofs of concept in relative isolation. This bottom-up energy has been invaluable, particularly in demonstrating what is technically possible with tools such as Microsoft Copilot, Copilot Studio, and Azure. However, experimentation alone rarely translates into sustained impact without coordination and intent at leadership level.

To move beyond isolation, organisations increasingly need explicit sponsorship from senior leadership. This does not mean centralising all innovation, but rather setting a clear direction for where AI should create value. When the C-suite and business unit leaders actively frame AI as a strategic capability, rather than an IT experiment, adoption accelerates. In practice, this means aligning AI initiatives with business outcomes, defining where Copilot-led productivity ends and where vertical, workflow-embedded AI begins, and ensuring that efforts across departments reinforce rather than duplicate one another.

Many organisations address this by establishing a central AI function, often positioned as a Centre of Excellence or enablement hub. Within Microsoft-centric environments, this role increasingly focuses on governance, architectural patterns, and reusable assets across Copilot Studio extensions and Azure AI workloads. A central team can consolidate learnings, define guardrails, and ensure consistency across security, data access, and identity, while still allowing individual teams to build and extend AI solutions relevant to their domain.

“The coordination of a Centre of Excellence avoids a common failure mode: multiple departments independently building similar agents against different datasets and tools, resulting in fragmented capability and uneven risk exposure,”

-Max Dinser, Chief Executive Officer at Digital Bricks

Crucially, successful adoption is not an IT-only concern. AI delivers value only when it addresses real operational pain points, and those are owned by the business. Finance, HR, operations, and customer teams must play an active role in shaping use cases. A finance team struggling with reconciliation cycles, or an HR function overwhelmed by manual screening, is far better positioned to define meaningful AI applications than a purely technical team. When business stakeholders co-design AI solutions, adoption improves, resistance decreases, and AI is perceived as an enabler rather than an imposed system.

This is where enablement and upskilling become decisive. Organisations adopting Microsoft Copilot at scale are quickly discovering that access alone does not guarantee value. Different roles interact with AI in fundamentally different ways. The needs of a finance analyst, a policy officer, or a customer service lead vary significantly, particularly as AI moves from horizontal productivity into domain-specific workflows. In our work at Digital Bricks, we consistently see stronger outcomes where AI literacy and capability building are tailored to roles and responsibilities, rather than delivered as generic training. This role-specific focus helps teams understand not only how to use AI, but where it fits, where it does not, and when escalation or human judgement remains essential.

Top-down involvement increasingly extends to financial leadership as well. Involving the CFO early can bring discipline to AI adoption by anchoring initiatives in measurable outcomes. Rather than slowing progress, this often accelerates it by filtering out low-impact experiments and concentrating investment on use cases that materially affect cost, risk, or revenue. When AI initiatives demonstrate tangible value, they build internal credibility and justify further expansion.

Finally, AI adoption should be viewed as a continuation of digital transformation, not a competing priority. Cloud migration, data modernisation, and security consolidation create the foundations upon which AI depends. At the same time, AI initiatives often expose inefficiencies in existing processes, prompting further digital investment. Many European organisations are therefore integrating AI considerations directly into their Microsoft Azure and M365 roadmaps, ensuring that as systems modernise, they are designed with AI-readiness in mind. The most effective organisations do not wait for transformation to be complete before adopting AI. They allow each to inform and strengthen the other.

The Human Element

Amid all the technology, it’s vital to remember that AI is a tool to augment human expertise, not replace it. As Karsten Marijnissen, Field CTO at Incentro, aptly stated, “AI agents are not magic.” Too many companies burn cash believing AI is a plug-and-play miracle worker, when in reality success with AI depends as much on human knowledge as on software development. Marijnissen suggests it’s roughly half software engineering, half human domain know-how. This resonates strongly in vertical AI: the best outcomes occur when seasoned professionals team up with AI systems, each bringing their strengths.

For any organization building vertical AI, it’s wise to involve your internal domain experts from the start. If you’re developing an AI to help with pharmaceutical research, have your top chemists and clinicians work hand-in-hand with the data scientists. If it’s an AI for precision agriculture, get the veteran agronomists and field managers in the room with the AI engineers. These experts will help define the problem correctly, ensure the AI is fed the right data and constraints, and validate the outputs. Moreover, their buy-in will drive adoption – when respected experts trust and champion the AI because they helped craft it, their peers are more likely to trust it as well.

What's Next?

Organizations that master domain-specific AI now will not only solve immediate problems but also create a foundation of expertise and technology that competitors will find hard to match. These early movers in vertical AI serve as exemplars to their peers – much like early adopters of the internet or mobile tech became industry leaders. They show what’s possible when AI is treated as a core capability aligned with business strategy.

One exciting aspect of vertical AI’s future is the potential for reuse and scalability of solutions across adjacent problems. When you develop a vertical AI solution, you’re not just solving one issue; you’re often building a collection of components and knowledge that can be applied elsewhere. The intellectual property (IP) – data sets, fine-tuned models, custom algorithms – gained from one vertical application becomes an asset for tackling similar challenges in other departments or even different industries.

Modern AI development platforms are accelerating this reuse through what Microsoft calls “component collections” and “dynamic chaining.” In tools like Copilot Studio, teams can package parts of their AI solutions (prompts, sub-models, integration routines) into modular components. These components can then be recombined in different sequences – or chained dynamically – to handle new tasks. Imagine a library of AI components your organization has built: one component handles retrieving customer data, another summarizes a legal document, another triggers an email workflow. With dynamic chaining, you could quickly assemble a new AI-driven process (say, for a new regulatory compliance check) by linking these existing pieces together, rather than starting from scratch. This modular approach means each new vertical AI project is easier and faster than the last, because you’re standing on the shoulders of what’s already built. It’s a bit like having a set of Lego blocks specifically designed for your industry – you can build new structures by reusing the blocks in different configurations.
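
As a plain illustration of that reuse idea (not Copilot Studio’s actual authoring model, whose building blocks are topics, prompts, and connectors rather than Python functions), the sketch below treats components as composable steps that can be chained into a new workflow. All function names and data are hypothetical.

```python
from typing import Callable

# A "component" here is just a named, reusable step that transforms a shared context dict.
Component = Callable[[dict], dict]


def fetch_customer_data(ctx: dict) -> dict:
    ctx["customer"] = {"id": ctx["customer_id"], "segment": "SME"}   # placeholder lookup
    return ctx


def summarise_document(ctx: dict) -> dict:
    ctx["summary"] = f"Summary of {ctx['document']}"                 # placeholder LLM call
    return ctx


def trigger_email_workflow(ctx: dict) -> dict:
    print(f"email queued for customer {ctx['customer']['id']}: {ctx['summary']}")
    return ctx


def chain(*components: Component) -> Component:
    """Compose existing components into a new end-to-end workflow."""
    def run(ctx: dict) -> dict:
        for step in components:
            ctx = step(ctx)
        return ctx
    return run


# A new compliance-check process assembled from existing blocks, not built from scratch.
compliance_check = chain(fetch_customer_data, summarise_document, trigger_email_workflow)
compliance_check({"customer_id": "C-1001", "document": "updated KYC policy"})
```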

From a strategic view, this capability turns vertical AI into a force multiplier. Companies not only gain efficiency and insight from individual AI solutions, but they also accumulate a competitive knowledge base – a sort of AI playbook for their domain. Over time, the organizations that have invested in vertical AI will have an arsenal of battle-tested models, curated data, and effective processes, which newcomers or laggards will have to spend a lot of time and money to replicate. This is how early adoption can translate into sustained competitive advantage.

Another forward-looking consideration is how vertical AI aligns with the trajectory of regulation and public trust. Europe’s regulatory climate is emphasizing transparency, risk management, and accountability. Vertical AI, with its emphasis on domain-specific governance and human oversight, is well-suited to thrive under these rules. The future of AI in regulated environments belongs to those who can demonstrate mastery over their AI’s purpose and behavior. Domain specialization makes that easier because the scope is narrower and more definable. We expect regulatory clarity to actually spur more vertical AI development, as organizations invest in compliant, trustworthy AI systems tailored to their sector’s standards.

To wrap up: the shift from horizontal to vertical AI is a maturation of how businesses think about artificial intelligence. “Depth over breadth” is becoming the mantra: the real wins come from applying AI with focus, expertise, and purpose. Companies that embrace this model are already seeing orders-of-magnitude greater impact than those sticking to generic tools. At the same time, they’re building capabilities that can scale and adapt, fueled by reusable components and smarter orchestration. The journey isn’t always easy – it demands vision, collaboration between tech and business, and diligent governance – but the value on the table is immense.

At Digital Bricks, we say "go vertical, and go all in". Identify the areas where AI can truly move the needle for your organization, and concentrate your efforts there. Encourage your teams to develop deep domain AI expertise and share learnings internally. Keep humans at the heart of the loop, even as you automate. If you do this, AI will be a core driver of your business’s innovation and growth. In a few years, we will likely look around the market and see that the standout performers are those who figured out how to make AI work specifically for them. More vertical, done in the right way, truly means more value.