Artificial intelligence is quickly becoming a baseline expectation for product teams, not a competitive edge. Airtable’s 2026 Predictions for Product Teams report confirms the shift: 76% of product leaders are scaling up AI investments, and 92% now tie their work directly to revenue — double the share from a few years ago. Yet this ambitious push largely ignores a quieter crisis.
According to the same report, nearly half of product teams lack the time for strategic planning, roadmap development, or even basic data analysis. Without space to think about where AI fits, new tools become just another layer of operational burden rather than a source of clarity.
Preksha Kothari, a product manager at The Edge Fitness Clubs in Pennsylvania and an Advisory Council Member at Products That Count, the world’s largest professional community for product managers, has spent years navigating this tension firsthand. With a track record of shipping and rebuilding digital products across different industries, she knows what operational overload looks like from the inside.
At the same time, she studies transformer architectures and multimodal AI at an academic level, with papers published at IEEE conferences, which gives her a technical understanding of what these tools can and cannot do before they ever reach a product roadmap. It is that rare mix of hands-on product experience and deep research fluency that lets her see clearly where AI will strengthen a product and where the real work starts long before any model gets involved.
What a broken checkout teaches about AI readiness
Across the industry, a pattern keeps repeating: a team adopts an AI-powered analytics suite or an automated personalization engine, expecting conversions to climb. Months later, the numbers barely move. Not because the tool is bad, but because nobody paused to ask what was actually broken. When a checkout flow confuses users or a mobile layout buries the signup button, no amount of machine learning on top will compensate. AI amplifies whatever it is pointed at — and if it is pointed at a mess, it amplifies the mess.
“Most product problems I have seen do not need a neural network,” Kothari says. “They need someone to sit with the data, watch real sessions, and figure out where people give up. That part is unglamorous, but it is where the money is.”

At The Edge Fitness Clubs, a 44-location chain in the northeastern United States, she inherited exactly this kind of situation. The online membership join flow — the single most important digital revenue channel for the company — was underperforming. Plan selection was confusing, the mobile payment experience was losing users, and sign-ups were stalling midway.
Kothari stripped the flow back to basics — simplified the interface, removed friction from the payment steps, and validated every change with A/B tests. Bounce rates fell, online conversions improved, and she later applied the same discipline to the company’s broader website overhaul, cleaning up navigation and legacy UX debt that had built up over the years. No AI involved — just structured product work.
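For readers curious what that validation discipline looks like in practice, the sketch below shows a standard two-proportion z-test, the usual statistical check for whether a lift in conversion rate is real or just noise. The numbers are purely illustrative, not figures from The Edge Fitness Clubs.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Z-score and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical traffic split: control checkout vs. simplified checkout
z, p = two_proportion_z_test(conv_a=420, n_a=10_000, conv_b=505, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")   # p < 0.05 suggests the lift is not noise
```

The same check applies to any binary outcome, from completed checkouts to signup clicks, which is part of what makes A/B testing a staple of the structured product work described here.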
Getting the foundation right is the first real step toward any meaningful AI integration. But a clean product does not mean AI can be dropped in anywhere. The next challenge is understanding the technology well enough to know where it genuinely adds value — and that requires a different kind of expertise.
Why most product managers cannot tell a good AI tool from a bad one
Here is what most failed AI integrations have in common: the product manager who approved them could not explain how the underlying model actually works. Vendor demos look impressive; pitch decks promise efficiency gains. But when a language model hallucinates in production or a recommendation engine surfaces irrelevant results, the PM who does not understand the mechanism has no way to diagnose the failure — let alone prevent it.
“You cannot make good product decisions about a technology you treat as a black box,” Kothari says. “If you do not know where a model loses accuracy, you will not catch the problem until your users do.”
She is not speaking hypothetically. Her own IEEE research focuses precisely on the places where AI models fall short. A paper she presented at the CCWC 2025 conference in Las Vegas examined how transformer architectures lose coherence as input sequences grow longer, a limitation that directly affects any product relying on language models to process complex queries.
A second study, presented at IEEE’s World AI IoT Congress in Seattle, explored how combining computer vision with speech translation inside large language models can reduce errors in cross-modal communication. In both cases, large language models were examined through the lens of reliability — specifically, where and why that reliability breaks down.
Kothari applies the same lens when she evaluates student work at university hackathons. Reviewing projects at HackDuke, one of the largest social-impact hackathons in the country, and at Georgia Tech's Hacklytics, which focuses on analytics and AI, she consistently notices the same pattern: teams build confident prototypes on top of AI but rarely test what happens when the model receives unexpected input. A demo runs smoothly on curated data; in the real world, users type garbage, skip fields, and behave in ways no training set anticipated.
“The prototypes are genuinely impressive — the technical bar has gone way up,” she says. “But when I ask a team what their model does with an edge case, most of them have not thought about it yet. And that is exactly the moment a product fails.”
That blind spot, Kothari argues, is not unique to students. It runs through product teams at companies of every size. And while not every PM needs to publish in IEEE to close the gap, the fix is more accessible than most people assume. In her view, it comes down to three questions any product manager can ask before integrating an AI tool: what data was it trained on, where does accuracy drop, and what happens when the input falls outside the training set. If the vendor cannot answer clearly, you are not buying a tool — you are buying a risk.
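Her third question in particular is easy to turn into a concrete test. The sketch below is a hypothetical edge-case harness in that spirit; `toy_sentiment_model` is a stand-in for whatever vendor model is under evaluation, and the point is the pattern of probing with malformed input, not the toy model itself.

```python
def toy_sentiment_model(text: str) -> dict:
    """Stand-in model: crude keyword scoring with a confidence value."""
    positive = {"great", "love", "good"}
    words = text.lower().split()
    score = sum(w in positive for w in words) / len(words) if words else 0.0
    return {"label": "positive" if score > 0.3 else "negative", "confidence": score}

curated = ["I love this gym", "Great classes and good trainers"]
edge_cases = ["", "   ", "asdfgh ???", "love " * 500, "12345", None]

for sample in curated + edge_cases:
    try:
        out = toy_sentiment_model(sample)
        print(f"{str(sample)[:25]!r:30} -> {out['label']} ({out['confidence']:.2f})")
    except Exception as exc:  # a crash on odd input is exactly the failure to surface
        print(f"{str(sample)[:25]!r:30} -> ERROR: {type(exc).__name__}: {exc}")
```

A model that crashes on empty input, or reports high confidence on gibberish, has already answered Kothari's three questions, just not in the way the vendor's demo promised.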
From research papers to a 300,000-person community
Kothari is convinced that AI, used correctly, can genuinely make product managers' lives easier: not by replacing their judgment, but by handling the repetitive, data-heavy work that eats up hours meant for strategy. She shares that conviction through the Product Talk podcast, an award-winning show with millions of downloads, where she discusses how large language models support problem-solving in product management. Through her Advisory Council role at Products That Count, she helps shape how a community of over 300,000 product professionals approaches AI adoption, not as hype but as a practical skill.
“AI is a tool, not a magic wand,” Kothari says. “It can optimize your workflow, speed up your research, and surface patterns you would miss on your own. But only if you understand what you are pointing it at. Otherwise, you are just automating confusion.”
She mentors young women entering product management and AI careers, and her long-term goal is entrepreneurship — building products that make technology accessible beyond the engineering world. For now, Kothari believes the path forward is simple: fix what is broken, learn how the technology works, then let AI into the process. If the Airtable numbers are any indication, the teams that follow this order will not just survive the current wave — they will be the ones who actually get value from it.