The Gartner Data & Analytics Summit, held March 9 to 11, 2026 at the Gaylord Palms Resort in Orlando, arrived under a different center of gravity than any of its predecessors. Where prior editions had treated AI as one significant track among several, the 2026 agenda placed it at the spine of the program, with a dedicated AI track, more than 137 research-driven sessions, and a roster of over 60 Gartner analysts whose framing had visibly shifted from whether enterprises should deploy AI to whether their data infrastructure can survive the deployment they have already committed to. The expanded AI agenda was not a marketing addition. It was a structural admission about where the discipline is heading.
An agenda restructured around AI execution
The Summit’s five comprehensive tracks and three spotlight tracks marked a quiet reorganization. Sessions on AI strategy, responsible AI, governance, generative AI, large language models, retrieval-augmented generation, prompt engineering, and machine learning were folded into the main program rather than relegated to a side stream. The signal to an audience anchored by C-level data leaders, heads of AI, chief data officers, and chief analytics officers was unambiguous: the data and analytics function is being absorbed into an AI execution mandate, and the operating model needs to evolve before the gap widens.
Gartner’s opening keynote, delivered by VP Analyst Adam Ronthal and Director Analyst Georgia O’Callaghan on navigating AI along the data and analytics journey to value, set the controlling argument for the week. Success with AI, in their framing, is not about being fastest. It is about finding a path to value that the organization can sustain while managing risk and cost. The line landed because most attendees had already watched colleagues elsewhere ship AI initiatives that produced press releases but not measurable outcomes.
The governance question Gartner cannot avoid
The second narrative arc through the program was the governance gap. Sarah Turkaly, Director Analyst, framed it in stark terms: data and analytics governance has historically lagged in innovating, and the discipline is now at a tipping point where it could become a single point of failure for AI. Her session on governance of AI, by AI, and for AI laid out the architectural challenge that most enterprises have not yet addressed.
VP Analyst Nate Novosel sharpened the warning in a separate session. Organizations with low data governance maturity are significantly more likely to fail at realizing the value of their AI initiatives, full stop. The corollary Gartner offered was practical rather than ideological: identify business outcomes first, then design governance to deliver them incrementally. This pragmatic reframing matters because most governance programs have been generating activity, including policies, asset registers, and rule libraries, without demonstrating measurable business impact, a credibility problem that AI deployment has now made acute.
The pattern echoes themes covered in our AI governance enterprise analysis, and the failures Gartner described are largely the same failures documented in our data governance crisis report. What Orlando added was the specificity: governance that cannot prove continuous, measurable outcomes will struggle to justify its budget in an AI-driven enterprise.
Headline keynotes and what they actually argued
Beyond the opening, three Summit keynotes carried disproportionate weight. Chris Howard, Distinguished VP Analyst and Chief of Research at Gartner, delivered “Beyond AI,” a session whose underlying thesis was that the AI conversation needs to expand past model capability into organizational design, talent strategy, and the redistribution of decision rights. Peter Hinssen of Nexxworks gave the guest keynote on leading through what he calls “the Never Normal,” arguing that the post-pandemic operating conditions are not transitional but permanent, and that decision-making frameworks built for stable contexts will continue to underperform. Naomi Bagdonas, executive advisor and author, closed the keynote slate with a session on improv and humor as competitive advantages, an unusual choice for a data conference, and one whose warm reception suggested how much the audience has changed.
The technical sessions delivered the operational substance. Nina Showell, Principal Analyst, made the case that unstructured data management has become the prerequisite for generative AI rather than an adjacent concern. By 2027, in Gartner’s projection, IT spending on multistructured data management will account for 40 percent of total data management spend. Through 2028, organizations attempting to build proprietary unstructured metadata solutions will incur costs more than 300 percent higher than firms that adapt existing document and records infrastructure.
The predictions that should reshape 2026 planning
Rita Sallam, Distinguished VP Analyst, presented Gartner’s top data and analytics predictions for 2026 and beyond. Several deserve direct attention from executive teams.
By 2027, 75 percent of hiring processes will include certifications and testing for workplace AI proficiency during recruiting, a shift that will reshape talent acquisition pipelines at firms whose recruiting infrastructure was built around generic technology competence. The pattern intersects directly with the trends covered in our HR AI analysis and HR tech news report.
Through 2027, generative AI and AI agent adoption will create the first true challenge to mainstream productivity tools in 30 years, prompting a $58 billion market shakeup. The figure is large enough to make the implication clear: the Office and Workspace category, stable since the early 2010s, is structurally exposed.
By 2029, AI agents are projected to generate 10 times more data from physical environments than from all digital AI applications combined, a forecast that quietly redefines what storage, networking, and edge infrastructure planning needs to absorb.
By 2030, half of organizations will use autonomous AI agents to interpret governance policies and technical standards into machine-verifiable data contracts, automating compliance enforcement. This is the prediction that most directly threatens the headcount logic of current governance programs.
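To make the 2030 prediction concrete, a machine-verifiable data contract is a governance rule expressed as structured checks that software, rather than a steward, enforces. The sketch below is purely illustrative and not drawn from any Gartner material: the contract fields, thresholds, and record shapes are hypothetical, but they show the kind of artifact an agent could plausibly generate from a written policy such as "customer records must carry an email and be no more than 24 hours old."

```python
from datetime import datetime, timedelta, timezone

# Hypothetical minimal data contract: a written governance policy rendered
# as a structure that code can enforce without human interpretation.
CONTRACT = {
    "required_fields": ["customer_id", "email"],
    "max_age_hours": 24,
}

def verify(record: dict, contract: dict, now: datetime) -> list[str]:
    """Return the list of contract violations for one record."""
    violations = []
    for field in contract["required_fields"]:
        if not record.get(field):
            violations.append(f"missing required field: {field}")
    if now - record["updated_at"] > timedelta(hours=contract["max_age_hours"]):
        violations.append("record is stale")
    return violations

now = datetime(2026, 3, 11, tzinfo=timezone.utc)
fresh = {"customer_id": "c1", "email": "a@b.com",
         "updated_at": now - timedelta(hours=2)}
stale = {"customer_id": "c2", "email": None,
         "updated_at": now - timedelta(hours=48)}

print(verify(fresh, CONTRACT, now))  # no violations
print(verify(stale, CONTRACT, now))  # missing email, stale record
```

The point of the prediction is not the check itself, which is trivial, but who writes it: today a governance team authors and maintains rules like these by hand; the forecast is that agents will derive them from policy text and keep them current.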
These projections compound. The themes covered in our agentic AI report and AI agents analysis were running through nearly every Summit session, and the framing in our recent AI news roundup tracks the same trajectory Gartner is now formalizing.
What the Summit implied for D&A leaders
The strategic reorientation surfacing from Orlando was not a list of best practices. It was an architectural argument: the data and analytics function needs to stop organizing around tools and roles and start organizing around outcomes that AI agents can act on. That shift implies a different operating model.
Jorg Heizenberg, VP Analyst, argued that D&A leaders should stop choosing between centralization and decentralization and adopt both, centralizing some functions and decentralizing others to match the specific needs of the organization. Multiple interconnected versions of the D&A organization, namely global, regional, local, and what Heizenberg called “the next version,” should be defined with clear dependencies and accountabilities, with delivery happening through cross-functional, multi-disciplinary teams.
The implication for AI readiness, articulated separately by analysts Mark Beyer and Roxane Edjlali, is precise. AI-ready data is data whose fitness for a specific AI use case can be proven, contextually and continuously, against the requirements of the technique and use case in question. This is a structurally different standard from the static data quality programs that most enterprises currently run. It is also the standard that will determine which AI initiatives produce value and which produce remediation costs.
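The difference between that standard and a static quality program can be sketched in code. In the hypothetical example below, readiness is evaluated against the spec of one named use case and is cheap enough to re-run on every refresh; the field names, thresholds, and the `readiness_report` function are all illustrative assumptions, not anything Beyer or Edjlali presented.

```python
# Illustrative sketch: AI-readiness as a per-use-case, re-runnable check
# rather than a one-time, dataset-wide quality score.
def readiness_report(rows: list[dict], spec: dict) -> dict:
    """Score a dataset against the requirements of one specific use case."""
    total = len(rows)
    filled = sum(1 for r in rows
                 if all(r.get(f) is not None for f in spec["required_fields"]))
    completeness = filled / total if total else 0.0
    return {
        "use_case": spec["use_case"],
        "completeness": completeness,
        "meets_spec": completeness >= spec["min_completeness"]
                      and total >= spec["min_rows"],
    }

# Hypothetical spec for one use case; another use case would carry its own.
churn_spec = {"use_case": "churn-prediction",
              "required_fields": ["tenure", "plan"],
              "min_completeness": 0.95,
              "min_rows": 3}

rows = [{"tenure": 12, "plan": "pro"},
        {"tenure": 3, "plan": "basic"},
        {"tenure": None, "plan": "pro"}]  # fails the completeness bar

report = readiness_report(rows, churn_spec)
print(report["meets_spec"])  # False: fit for this use case is not proven
```

The same rows might pass a different use case's spec and fail this one, which is exactly the contextual, continuous standard the analysts described.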
For D&A leaders planning the next 18 months, the Summit’s quiet but consistent message was that the function is being asked to transform faster than current headcount and tooling assumptions will allow. The leaders who succeed will be those who treat metadata as infrastructure, connect governance investments directly to AI workstreams, and use agents to scale enforcement and curation without scaling headcount, themes also surfacing in our hidden governance risks coverage.
The question Gartner left in the room
Orlando did not produce neat answers. It produced sharper questions, which may have been the point. The most consequential one is whether the data and analytics function, as currently constituted in most enterprises, is structurally capable of supporting the AI ambitions its own leadership has already committed to publicly.
So the question for any D&A leader returning from Orlando, or reading the agenda from a distance, is this: if your organization had to prove the continuous AI-readiness of its data tomorrow, against a specific high-stakes use case, what evidence could you produce, and how long would gathering it take?
