Six Essential Inquiries Every Board Should Ask Before Approving That Next AI Project

AI for Executives
Executive boards must make informed technology investment decisions, especially regarding AI. In this article, Jennifer Stirrup discusses emerging best practices based on industry research. Essential inquiries include strategic alignment, data quality, risk assessment, ethical considerations, long-term impacts, and governance structure. Proactive oversight ensures competitive advantage and stakeholder trust.

AI for Executive Boards

The Governance Imperative: Beyond Simple Project Approval

In today's rapidly evolving business landscape, boards face unprecedented pressure to make informed decisions about technology investments—particularly AI initiatives. These decisions carry far-reaching implications for organizational risk, competitive advantage, and stakeholder trust. According to a recent PwC survey, only about half of board directors feel they receive adequate information about AI-related risks, with directors noting a gap in board-level AI expertise and confidence regarding oversight. 

“The board’s job is to see what management does not see.” – Jennifer Stirrup

The approval of major initiatives should never be a tick-box exercise, yet asking the right questions is often a lost art. Asking insightful questions is a vital governance skill that can determine organizational success or failure. This is particularly true for AI projects, where the stakes extend beyond traditional ROI considerations to encompass ethical, legal, and societal implications.

Let's explore the six essential inquiries every board should consider before approving that next significant AI project. Asking the right questions makes success far more likely.

1. Strategic Alignment: Solving Real Business Problems or Technology Searching for Purpose?

The first and perhaps most fundamental question any board should ask is whether a proposed project—especially an AI initiative—addresses a specific business challenge with measurable impact. This inquiry goes beyond superficial alignment statements to probe the actual business case.

According to Harvard Business Review, 70% of digital transformation initiatives fail to achieve their objectives, often because they lack clear strategic alignment. For AI projects specifically, this failure rate climbs even higher when technology is deployed without a well-defined business purpose.

Key sub-questions to consider:

  • Does this initiative directly support our strategic priorities or core competitive advantages?
  • Can management articulate specific KPIs that will improve as a result?
  • How does this project compare to alternative approaches for solving the same business problem?
  • Is this an appropriate time to undertake this initiative, considering our other strategic priorities?

A McKinsey study found that organizations with clear links between digital initiatives and core business strategy were 1.5 times more likely to report successful outcomes. Boards must ensure that projects aren't initiated merely because of technology buzzwords, but because they genuinely advance strategic objectives.

2. Data Foundation: Can Excellence Emerge from Mediocrity?

For AI projects in particular, the quality of underlying data determines the ceiling of possible outcomes. Boards should interrogate the organization's data readiness before approving significant AI investments.


As data expert Thomas C. Redman observes in his book, Data Quality: The Field Guide (2001), “Bad data is a corporate cancer that spreads slowly through an organisation, wasting time, increasing costs, weakening decision making, frustrating customers, and undermining strategy.” This is particularly true for AI initiatives, which amplify both the benefits of good data and the costs of poor data.

Critical areas of data assessment:

  • Has a formal data quality assessment been conducted for the data sources powering this initiative?
  • What percentage of our data meets quality standards for this application?
  • How are we addressing data privacy, security, and compliance requirements?
  • Is our data infrastructure adequate to support the proposed AI workloads?
  • What ongoing data governance processes will ensure continued data quality?
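
To make the data-readiness questions above concrete, a first-pass data quality assessment can be as simple as measuring completeness and validity per field. The sketch below is a minimal, hypothetical Python illustration; the column names, validity rules, and the 95% quality bar are assumptions for demonstration, not a standard.

```python
# Minimal data-quality profile: for each column, the percentage of rows
# that have a value (completeness) and that pass a simple validity rule.
# Column names, rules, and the 95% bar are illustrative assumptions.

def profile_quality(rows, rules):
    """rows: list of dicts; rules: {column: predicate on a value}."""
    report = {}
    total = len(rows)
    for column, is_valid in rules.items():
        present = [r[column] for r in rows if r.get(column) is not None]
        valid = [v for v in present if is_valid(v)]
        report[column] = {
            "completeness_pct": 100.0 * len(present) / total if total else 0.0,
            "validity_pct": 100.0 * len(valid) / total if total else 0.0,
        }
    return report

if __name__ == "__main__":
    customers = [
        {"age": 34, "email": "a@example.com"},
        {"age": None, "email": "b@example.com"},
        {"age": 290, "email": "not-an-email"},
    ]
    rules = {
        "age": lambda v: 0 <= v <= 120,
        "email": lambda v: "@" in v,
    }
    report = profile_quality(customers, rules)
    # Flag any column falling below the (illustrative) 95% validity bar
    failing = [c for c, m in report.items() if m["validity_pct"] < 95.0]
    print(failing)
```

A report like this gives management a direct, quantified answer to the board's "what percentage of our data meets quality standards?" question.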

According to Sogeti Labs (part of the CapGemini Group), implementing strong data governance significantly improves data quality, decision-making, compliance, and innovation. These enhancements ultimately translate into better financial performance, including cost savings, revenue growth, and improved profitability. Boards should be especially wary when management cannot provide clear answers about data readiness, as this often signals future implementation challenges.

3. Risk Assessment: Mapping to Enterprise Risk Management

Every significant project carries risks, but AI initiatives introduce novel challenges that may not fit neatly into traditional risk frameworks. Effective boards require comprehensive risk mapping before approval.

MIT Sloan research shows that organisations report significantly better outcomes with integrated risk management. 

Risk assessment inquiry should include:

  • How specifically are AI-related risks mapped to our enterprise risk management framework?
  • What processes exist for identifying and mitigating algorithmic bias?
  • What monitoring mechanisms will detect emerging risks during implementation?
  • How have similar organizations experienced and addressed comparable risks?
  • What business continuity provisions exist if the AI systems fail or require revision?
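
One way the bias-mitigation question above can be operationalized is a periodic fairness check on model decisions. The sketch below computes a simple demographic parity ratio between groups; the group labels, the loan-approval scenario, and the 0.8 cut-off (the "four-fifths rule" used in US adverse-impact analysis) are illustrative assumptions, and real bias audits use a broader set of metrics.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def parity_ratio(decisions):
    """Ratio of lowest to highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical loan-approval decisions for two demographic groups
    decisions = ([("A", True)] * 80 + [("A", False)] * 20
                 + [("B", True)] * 50 + [("B", False)] * 50)
    ratio = parity_ratio(decisions)
    # Flag for review if below the (illustrative) four-fifths threshold
    print(round(ratio, 3), "review" if ratio < 0.8 else "ok")  # → 0.625 review
```

Running such a check on a schedule, and reporting breaches upward, turns "what processes exist for identifying bias?" from an aspiration into a monitoring mechanism.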

A particularly critical consideration is regulatory compliance. With AI regulation evolving rapidly across jurisdictions, boards must ensure that initiatives include appropriate compliance provisions and can adapt to emerging requirements.

4. Ethical Guardrails: From Principles to Practice

Ethics in AI deployment has moved from a theoretical concern to a business-critical consideration. Boards now face increasing pressure to ensure that AI implementations align with organizational values and societal expectations.

A recent Deloitte survey indicates that 76% of executives say their organisations are conducting ethical AI training for their workforce, a strong sign of the perceived importance of ethics in AI. However, the results also reveal a gap between awareness and action: in a different but comparable Deloitte survey, only 27% of companies reported having distinct ethical standards or comprehensive ethical frameworks for AI in place.


Essential ethical governance questions include:

  • What specific ethical frameworks guide the development and deployment of this AI initiative?
  • Has an AI ethics committee been established with clear authority to intervene?
  • How will ethical considerations be operationalized throughout the project lifecycle?
  • What mechanisms exist for stakeholder feedback on ethical dimensions?
  • How will ethical outcomes be measured and reported to the board?

Accenture research supports the broader message that mature, responsible AI practices lead to substantial increases in stakeholder trust, improved talent attraction, stronger reputational outcomes, and more efficient scaling and deployment of AI initiatives.

5. Long-Term Impact: Beyond Immediate ROI

While quarterly results matter, boards must balance short-term metrics with long-term organizational transformation. This is particularly important for AI initiatives, which often deliver their most significant value through cumulative effects rather than immediate returns.

Key long-term considerations include:

  • Beyond immediate ROI, how does this initiative transform operational resilience?
  • What cultural shifts might result from this implementation?
  • How will this project affect our talent strategy and organizational capabilities?
  • What competitive advantages might emerge as second-order effects?
  • How might this initiative affect our stakeholder relationships and brand perception?

According to BCG analysis, companies that simultaneously drive cost efficiency and revenue innovation significantly outperform their peers in total shareholder return (TSR) over five years: value creation depends on both cutting costs and growing revenues, with cost reduction (32%) and revenue growth (43%) together driving lasting TSR outperformance. In other words, organisations that combine immediate efficiency gains (often linked to cost savings) with long-term capability-building (enabling sustained growth through innovation and revenue expansion) outperform their peers over a five-year horizon. Boards play a vital role in ensuring this balanced perspective: a dual focus on short-term and long-term priorities in technology investments delivers a significant competitive advantage over time.

6. Governance Structure: Expertise and Ongoing Oversight

The final inquiry addresses the board's own readiness to provide effective oversight throughout the project lifecycle. This self-assessment is particularly important for AI initiatives, which require specialized knowledge to govern effectively.

Studies by Harvard Law School indicate that executive leaders are advocating for more AI and generative AI expertise on boards, yet only a minority of directors currently believe such expertise is very important to have. This gap between directors and executives remains a persistent governance challenge.


Governance structure questions include:

  • Does our board possess adequate expertise for effective AI oversight?
  • What education or external advisors might strengthen our governance capabilities?
  • How will ongoing model performance be monitored and reported to the board?
  • What decision thresholds will trigger board review after initial approval?
  • Are responsibilities clearly delineated between management execution and board oversight?
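
The "decision thresholds" question above can be backed by a simple escalation rule: management monitors agreed model metrics continuously, and any breach automatically puts the initiative back on the board agenda. The sketch below is a hypothetical Python illustration; the metric names and limits are assumptions a board and management team would agree for their own context, not standards.

```python
# Hypothetical escalation rule: compare live model metrics against
# board-agreed thresholds and report any breaches that should trigger
# board-level review. Metric names and limits are illustrative.

THRESHOLDS = {
    "accuracy": ("min", 0.90),       # escalate if accuracy falls below
    "parity_ratio": ("min", 0.80),   # fairness floor (four-fifths rule)
    "drift_score": ("max", 0.25),    # input-distribution drift ceiling
}

def breaches(metrics, thresholds=THRESHOLDS):
    """Return the metrics whose values breach their agreed thresholds."""
    flagged = {}
    for name, (kind, limit) in thresholds.items():
        value = metrics.get(name)
        if value is None:
            continue  # unreported metrics are a separate governance issue
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            flagged[name] = (value, limit)
    return flagged

if __name__ == "__main__":
    live = {"accuracy": 0.87, "parity_ratio": 0.91, "drift_score": 0.31}
    for name, (value, limit) in breaches(live).items():
        print(f"Escalate to board: {name}={value} (limit {limit})")
```

The point is less the code than the governance contract it encodes: thresholds are agreed up front, monitored by management, and a breach is an automatic trigger for board review rather than a judgment call.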

Industry research from MIT Sloan finds that boards with digital and AI-savvy directors achieve markedly better results on digital initiatives, and commentary in Forbes and Harvard Business Review highlights dedicated technology committees as an emerging best practice for effective oversight. Stanford Graduate School of Business and the Rock Center for Corporate Governance frequently discuss the importance of board-level attention to technology and digital transformation governance, provided the right committee structures are in place. Ultimately, the research shows that tech-savvy boards and technology-focused committees contribute positively to digital transformation outcomes, making them an emerging best practice for organisations to explore.

The Competitive Advantage of Proactive Governance

Organizations with robust AI governance practices consistently outperform competitors, as the research cited above has shown. However, few boards feel fully prepared for effective AI oversight. Regulatory frameworks, from the EU's AI Act to emerging U.S. standards, are changing at speed in an effort to catch up with the technology, so proactive governance becomes a competitive advantage rather than simply a risk consideration.

KPMG’s Global Tech Report 2024 supports the broad message that mature digital governance provides a substantial uplift in profitability and business value. The 2024 report reveals that across all surveyed technology categories, there has been a “25 percentage point year-on-year increase in the number of executives who say these [digital transformation] systems have had a positive impact on their company’s profitability.” In other words, a greater proportion of organisations are seeing positive returns from technology, especially those with more mature, integrated digital governance and data frameworks.

From Inquiry to Action: Implementing Effective Project Governance

Implementing these six inquiries requires more than adding questions to a checklist. It demands a fundamental shift in how boards approach their oversight responsibilities:

  1. Develop specialized expertise through board education or dedicated committees
  2. Establish clear reporting frameworks for AI-specific metrics and risks
  3. Create governance cadences that match the speed of technological change
  4. Foster cultures of transparency where concerns can surface without fear
  5. Build relationships with technical experts who can translate complex concepts
  6. Benchmark governance practices against industry leaders and evolving standards

Ready to Strengthen Your Board's AI Governance?

Is your board equipped to ask these critical questions about your next AI initiative? Are you confident in your governance approach as AI transforms your industry?

As an experienced AI governance advisor, I help boards develop the frameworks, expertise, and processes needed for effective oversight of complex technology initiatives. My tailored approach ensures your governance structures match both your organizational context and emerging best practices.

Take the next step in strengthening your AI governance:

Schedule a confidential 30-minute consultation to discuss your board's specific governance challenges and opportunities. During this call, we'll explore practical approaches to implementing these six essential inquiries in your organization.

Book your consultation today or email me directly at jenstirrup@jenstirrup.com to start the conversation.

In a business environment where technology governance increasingly determines competitive outcomes, boards that ask the right questions are fulfilling their fiduciary duties while creating sustainable advantage through responsible innovation.
