A new survey of 900 CEOs reveals that eight in 10 (80%) believe failed AI strategies could threaten their jobs as boards demand measurable AI results and stronger governance.
Image credit: Pexels


Boards are demanding measurable AI results as executive confidence in AI systems, governance, and accountability begins to fracture, according to a new global CEO survey from Dataiku and Harris Poll. The AI boom has entered a new phase inside the corporate world – accountability. After two years of executives racing to adopt generative AI tools and reassure investors they were not falling behind competitors, CEOs are now confronting a more personal risk – the possibility that failed AI strategies could cost them their jobs.

A new global study from Dataiku, conducted by Harris Poll, found that 80% of CEOs worldwide believe their role will be at risk by the end of 2026 if their AI strategies fail. In the US, that pressure appears even more intense, with 81% of CEOs saying a fellow executive will likely be removed over a failed AI initiative or AI-related crisis.

MEASURABLE & TRUSTED RETURNS

The findings highlight a rapidly shifting reality in enterprise AI adoption. Boards no longer want experimentation alone; they want measurable returns, governance, and proof that AI systems can be trusted at scale.

The report, Global AI Confessions Report: CEO Edition 2026, surveyed 900 CEOs globally and paints a picture of growing tension in the boardroom as AI becomes deeply embedded in business operations while trust in the technology remains uneven.

The findings come as enterprises accelerate investments in generative AI platforms, AI copilots and autonomous AI agents amid mounting pressure from shareholders and boards to demonstrate ROI from billions in AI spending. At the same time, concerns around hallucinations, legal liability, explainability, and vendor lock-in continue to rise across industries.

Earlier reports focusing on responsible AI business practices highlighted growing pressure on companies to move beyond AI experimentation and implement stronger governance frameworks as AI adoption accelerates across industries.

KEY AI FINDINGS

Key findings from the Dataiku report show that:

  • 80% of global CEOs say their jobs are at risk if AI strategies fail by 2026. 
  • 87% would stake their careers on AI success.
  • Confidence in deploying AI agents at scale dropped from 41% to 31% in one year. 
  • 79% fear AI agents could create legal risks.
  • 72% of US CEOs say boards are pressuring them to deliver measurable AI outcomes. 
  • 96% believe employees are already using unauthorised “Shadow AI” tools.

CEOS ARE BETTING CAREERS ON AI

One of the report’s most striking findings is that 87% of CEOs said they would personally stake their job on the success of their company’s AI initiatives. Yet confidence in AI systems appears to be weakening even as adoption accelerates. According to the study:

  • Confidence in deploying AI agents at scale dropped from 41% to 31% in just one year. 
  • 79% of CEOs worry AI agents could create legal risk. 
  • More than half fear poor explainability could trigger customer trust or brand crises. 
  • 51% say they have delayed AI initiatives because of regulatory uncertainty. 

The findings suggest many executives feel trapped between aggressive AI expectations from boards and lingering uncertainty around whether enterprise AI systems are mature enough for high-stakes decision-making.

AI LEADERSHIP & GOVERNANCE

That tension is increasingly reshaping corporate leadership itself. “Every enterprise now has access to powerful AI. The differentiator is whether they can turn that power into reliable business decisions,” said Florian Douetteau, CEO of Dataiku. “That is the cognitive dissonance happening in the C-suite right now: CEOs are staking their jobs on AI, but still questioning its outputs and struggling to control the systems they say they own.”

The findings align with a broader trend emerging across the enterprise technology market, where AI governance and operational oversight are beginning to replace speed and experimentation as executive priorities.

A recent report on AI risks in the workplace found many companies were still failing to adequately prepare employees for the legal, ethical and operational risks associated with generative AI systems.

GROWING BOARD PRESSURE

The report indicates that corporate boards are becoming increasingly impatient with vague AI strategies and experimental deployments that fail to generate measurable business outcomes. Globally, 62% of CEOs said their boards are actively pressuring them to deliver measurable AI-driven results. In the US, that figure climbed to 72%, up from 61% the previous year. Meanwhile:

  • 78% of US CEOs rank AI strategy as a top or high priority. 
  • Yet only one in three say it is their company’s single highest priority. 
  • 89% of US CEOs say their personal involvement in AI decisions has increased. 

The pressure reflects a broader shift in the enterprise AI market. Following the rapid rise of OpenAI, Microsoft Copilot, and enterprise AI automation platforms, many companies are now moving from pilot programs to production deployments, forcing executives to prove AI investments can improve productivity, reduce costs or generate new revenue.

That transition, however, is proving difficult. As previously reported, CEOs worldwide expect AI investment to more than double over the next two years despite ongoing concerns surrounding ROI, implementation complexity and operational oversight.

CEOS DON’T FULLY TRUST AI SYSTEMS

Despite aggressive AI adoption plans, many CEOs remain skeptical of the systems shaping business decisions. The report found that:

  • 80% of CEOs actively question or challenge AI outputs. 
  • 51% still require human approval for business-critical decisions. 
  • 34% would not allow AI to make decisions without human oversight.

AI is already influencing more than 40 business-critical decisions per CEO annually, according to the study, and 94% said they are comfortable disclosing AI-assisted decisions to their boards. But trust in AI vendors and platforms also appears to be deteriorating. The survey found:

  • 76% of CEOs believe they are overly dependent on too few AI vendors. 
  • 67% said they challenged AI platform or vendor decisions made by CIOs or technology teams during the past year.

That growing skepticism reflects rising industry concerns around vendor lock-in, opaque AI systems, and the long-term risks of depending heavily on a small number of AI infrastructure providers.

AI GOVERNANCE: A BOARDROOM PRIORITY

The report suggests governance may now be overtaking innovation as the defining issue in enterprise AI. According to the survey:

  • 96% of CEOs believe employees are already using unauthorised generative AI tools, commonly referred to as “Shadow AI”. 
  • 57% warn explainability gaps could create customer trust or reputational crises. 
  • Governance ranked higher than workforce readiness or orchestration as the most important factor for AI success.

The increased focus on governance reflects a wider shift already emerging in boardrooms globally. A previous report on board priorities found that AI governance, ESG and workforce concerns are becoming central strategic issues for corporate directors.

The findings also come as governments worldwide move toward stricter AI regulation, including the European Union’s AI Act and expanding discussions around AI liability, transparency and compliance standards in the United States.

Another report on responsible AI governance frameworks highlighted growing efforts by organisations to establish formal AI governance policies as companies race to deploy generative AI technologies safely.

The report also exposed a widening gap between boardroom confidence and operational readiness. While 94% of CEOs said they are comfortable telling boards that AI influenced strategic decisions, only 34% of data leaders said their AI agents could pass a basic decision audit. At the same time:

  • 83% of CEOs expect to deploy AI agents into full production during 2026. 
  • Yet only 25% of CIOs say they can monitor all AI agents in real time. 

The disconnect highlights one of the biggest unresolved questions in enterprise AI adoption: whether organisations are scaling AI faster than they can govern it.

WHY THE FINDINGS MATTER

The findings suggest enterprise AI has entered a new accountability phase. For much of the past two years, companies were rewarded simply for adopting AI tools and demonstrating innovation momentum to investors and boards. 

That dynamic is now changing. Executives are increasingly being evaluated not on whether they are deploying AI, but whether those systems can deliver measurable business value safely, reliably and at scale. The report highlights several trends likely to shape enterprise AI strategy in 2026:

  • AI governance is becoming a board-level priority rather than a technical issue. 
  • Explainability and auditability are emerging as critical enterprise requirements. 
  • CEOs are becoming more directly involved in AI oversight and vendor decisions. 
  • Legal, regulatory and reputational risks are increasingly influencing deployment decisions. 
  • Trust in AI systems may become a competitive differentiator for enterprises. 

The findings also suggest many organisations may be scaling AI adoption faster than their governance and operational oversight capabilities can keep pace.

AI ACCOUNTABILITY

For much of the past two years, the dominant corporate fear around AI was falling behind competitors. That fear is evolving. Today, executives appear increasingly concerned that AI failures – whether operational, legal, financial, or reputational – could directly threaten leadership credibility and shareholder trust.

The growing anxiety around AI accountability is also reshaping workforce expectations. A recent report on AI and employment trends found that many companies are increasingly tying employee performance and future staffing decisions directly to AI adoption and productivity.

The Dataiku report suggests the next phase of enterprise AI will not simply be defined by adoption, but by accountability. And increasingly, CEOs themselves may be the ones held responsible when AI systems fail.

Click here to access the Global AI Confessions Report: CEO Edition.
