A recent Harvard study has raised important questions about the role of generative AI in executive decision-making. Conducted during a series of executive education sessions from mid-2024 to early 2025, the research involved over 300 managers and executives tasked with forecasting Nvidia’s stock price one month into the future.
Participants were split into two groups: one consulted with peers, while the other turned to ChatGPT for support. In the end, those who relied on ChatGPT not only grew more optimistic in their forecasts but also made less accurate predictions. The group that engaged in peer discussion, by contrast, produced more conservative and more accurate estimates.
These findings challenge the growing assumption that AI tools automatically enhance decision-making in corporate settings.
While generative AI tools like ChatGPT are widely adopted for their speed, fluency, and data-processing capabilities, this experiment revealed their limitations in high-stakes strategic forecasting. As AI continues to permeate the boardroom, the study serves as a reminder that even powerful tools require careful, human-centered implementation.
The study’s findings also suggest that leaning on AI can amplify cognitive biases, fuel misplaced confidence, and disconnect users from practical realities. These insights raise questions about how and when to rely on AI, especially when decisions can have significant financial or strategic consequences.
Why It Matters: Generative AI has tremendous potential in streamlining operations, analyzing data, and enhancing productivity. Yet, when it comes to nuanced, strategic, or forward-looking decisions, its influence can backfire. This study demonstrates that AI’s confident tone and trend-based analysis may give users a false sense of certainty, encouraging decisions that are less grounded in realism. Leaders need to understand not only what AI can do, but what it might do to their thinking. By integrating human dialogue and critical scrutiny, organizations can use AI more effectively and avoid costly mistakes.
- AI Consultation Raised Forecasts but Reduced Accuracy: Executives who consulted ChatGPT raised their one-month price forecasts for Nvidia by an average of $5.11. While this demonstrated AI’s influence in shaping expectations, it also highlighted a key problem: these revised estimates were less accurate than their original guesses. When compared to actual market outcomes, AI-assisted forecasts missed the mark by a wider margin than forecasts revised after peer discussions (a sketch of this error comparison follows the list). This suggests that while AI might present compelling analyses, its outputs do not necessarily improve decision quality and may even lead users further from reality.
- Peer Discussions Led to More Accurate and Cautious Estimates: Executives who participated in small peer group discussions tended to lower their forecasts by about $2.20, showing a shift toward more conservative and, as it turned out, more accurate estimates. The process of verbalizing and debating their views likely allowed for self-correction and the integration of contextual knowledge that AI lacked. The peer environment also fostered a sense of responsibility and realism, where participants hesitated to make overly bullish claims in front of colleagues. This dynamic helped counterbalance personal biases and led to a more grounded consensus.
- AI Promoted Overconfidence Through “Pinpoint” Forecasting: After using ChatGPT, executives were significantly more likely to provide hyper-specific predictions with decimal-point precision, such as forecasting Nvidia’s price to the cent or fraction of a dollar. This behavior, which psychologists link to overconfidence, suggests that ChatGPT’s detailed and authoritative tone may embolden users to overestimate their own understanding or the precision of the forecast. In contrast, peer conversations led to more generalized and cautious estimates, reducing the likelihood of such illusory precision. This overconfidence could be dangerous in real-world settings, where executives may make bold decisions based on falsely perceived certainty.
- Trend Extrapolation Skewed AI’s Advice: ChatGPT’s forecasts likely relied on extrapolating recent patterns, such as Nvidia’s dramatic stock rise in prior months. Since the model draws on historical data without real-time market updates or contextual awareness of possible inflection points, it may inherently favor the continuation of recent trends. This is known as extrapolation bias and can be misleading, especially in volatile markets (a sketch of this mechanism also follows the list). Executives, influenced by ChatGPT’s optimistic tone and detailed rationale, may have overlooked signals that suggested a plateau or correction was more likely. This underscores a critical limitation of using AI for financial forecasting without human oversight.
- Practical Implications for Corporate AI Use: The study offers important lessons for how executives and organizations should approach AI in strategic contexts. First, AI tools should be used as supplements, not substitutes, for human reasoning and discussion. Second, decision-makers must be aware of AI’s limitations, including its lack of real-time data and inability to incorporate emotional or contextual nuance. Third, organizations should build safeguards into their decision-making processes, such as encouraging peer review or setting protocols for evaluating AI outputs critically. By combining the speed and analytical power of AI with human judgment and peer dialogue, leaders can make more informed, balanced decisions.
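To make the accuracy comparison in the first bullet concrete, here is a minimal sketch of how forecast error can be scored, assuming mean absolute error as the metric. Every price and forecast below is a hypothetical placeholder, not the study’s data; only the direction of the revisions (AI-assisted up, peer-revised down) mirrors the reported findings.

```python
# Minimal sketch of scoring forecast accuracy with mean absolute error.
# All numbers are hypothetical placeholders, not the study's data.

def mean_absolute_error(forecasts, actual):
    """Average absolute distance between forecasts and the realized price."""
    return sum(abs(f - actual) for f in forecasts) / len(forecasts)

actual_price = 120.00  # hypothetical realized price one month out

# Hypothetical revised forecasts: AI-assisted participants revised
# upward (~$5 on average), peer groups revised downward (~$2 on average).
ai_revised = [128.50, 131.00, 126.75]
peer_revised = [118.00, 121.50, 119.25]

print(f"AI-assisted MAE:  {mean_absolute_error(ai_revised, actual_price):.2f}")
print(f"Peer-revised MAE: {mean_absolute_error(peer_revised, actual_price):.2f}")
```

A lower MAE means forecasts landed closer to the realized price; in the study, the peer-revised group came out ahead on exactly this kind of comparison.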
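And here is a minimal sketch of the extrapolation bias described in the fourth bullet, assuming a simple least-squares trend line as a stand-in for whatever pattern-continuation a language model might apply internally. The price series is hypothetical.

```python
# Minimal sketch of naive trend extrapolation (extrapolation bias).
# Prices are hypothetical; the point is the mechanism, not the numbers.

def linear_extrapolate(prices, steps_ahead):
    """Fit a least-squares line through recent prices, project it forward."""
    n = len(prices)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(prices) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, prices)) \
            / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + steps_ahead)

# A hypothetical run-up: steady gains over the last six observations.
recent_prices = [95.0, 100.0, 104.0, 109.0, 113.0, 118.0]

# The fitted trend simply continues upward -- there is no notion of a
# plateau or correction, which is exactly the bias described above.
print(f"Extrapolated price: {linear_extrapolate(recent_prices, 4):.2f}")
```

Because the fitted line has no concept of mean reversion or inflection points, the projection is bullish whenever the recent window trends upward, which mirrors the upward revisions the study observed among AI-assisted participants.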
Go Deeper -> Research: Executives Who Used Gen AI Made Worse Predictions – Harvard Business Review