The Age of AI Agents: What Leaders Need to Know for 2026 & Beyond

Open Innovator Knowledge Session | January 19, 2026

Open Innovator organized a groundbreaking knowledge session on “The Age of AI Agents: What Leaders Need to Know for 2026 & Beyond” on January 19, 2026, marking the first major discussion of the new year on how leadership must evolve as we transition from the chatbot era to the agentic AI era.

This pivotal session brought together global experts to examine a fundamental shift: moving from AI that suggests to AI that executes, from copilots to an autonomous digital workforce. The panel explored critical questions around trust, accountability, ethics, and the strategic decisions leaders must make as AI agents become capable of acting independently while humans sleep, transforming not just how work gets done but who, or what, does it.

Expert Panel

The session featured four distinguished experts bringing diverse perspectives from policy, healthcare technology, enterprise transformation, and AI development:

Beatriz Zambrano Serrano – Expert at the intersection of MedTech and virtual reality, ensuring AI agents work in high-stakes healthcare environments where the margin for error is zero, with deep expertise in VR-based medical training simulations.

Hanane Boujemi – Tech policy expert and “guardian of the guardrails,” navigating the policy landscape to keep AI innovation ethical and legal, with nearly two decades of experience working at the highest levels of both big tech and government.

Ahmed Elrayes – Enterprise transformation veteran and “organizational architect,” serving as an advisor on digital transformation who bridges the gap between high-tech AI agents and high-impact human teams, particularly in government and Saudi Arabian markets.

Puneet Agarwal – Founder of AI LifeBOT, turning agentic theory into digital workforce reality with over 100 AI agents deployed across healthcare, manufacturing, retail, and other sectors globally.

The discussion was expertly moderated by Naman Kothari from NASSCOM, who framed the conversation around a provocative premise: “AI won’t replace you, but a leader who manages AI agents will replace the leader who still thinks AI is just a fancy Google search.”

From Copilot to Chief of Staff: Understanding the Shift

Naman opened the session with a powerful distinction that set the tone for the entire discussion. In 2024, if you asked a chatbot to help you get to London for a Tuesday meeting, it would act as a copilot—scanning the web, presenting flight options, hotel prices, and weather forecasts, perhaps even drafting an email. But then the work stopped. You still had to book the flight, coordinate the Uber, and manage the calendar.

In 2026, agentic AI changes everything. You tell the agent “I need to be in London for that Tuesday meeting. Make it happen.” The agent doesn’t give you a list—it negotiates for you, identifies the best pricing, transacts on your behalf, and syncs your calendar. As Naman put it: “Gen AI is your consultant. The AI agent is your chief of staff.”
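
To make the distinction concrete, here is a minimal sketch, in Python, of the copilot-versus-agent difference Naman described: the copilot returns options for the human to act on, while the agent selects, transacts, and syncs the calendar itself. The functions, data, and booking logic below are illustrative assumptions, not any product's actual API.

```python
from dataclasses import dataclass

@dataclass
class FlightOption:
    carrier: str
    price_usd: float

def search_flights(destination: str, day: str) -> list[FlightOption]:
    # Hypothetical stand-in for a flight-search integration.
    return [FlightOption("AirA", 420.0), FlightOption("AirB", 385.0)]

def copilot_plan_trip(destination: str, day: str) -> list[FlightOption]:
    # Copilot mode: gather the options and hand them back; the human still books.
    return search_flights(destination, day)

def agent_plan_trip(destination: str, day: str) -> str:
    # Agent mode: choose the best option, transact, and sync the calendar.
    options = search_flights(destination, day)
    best = min(options, key=lambda o: o.price_usd)
    booking_ref = f"BOOK-{best.carrier}-{day}"   # stand-in for a real booking call
    return f"{day}: fly {best.carrier} to {destination} ({booking_ref})"

print(copilot_plan_trip("London", "Tuesday"))   # a list of options, nothing booked
print(agent_plan_trip("London", "Tuesday"))     # the trip handled end to end
```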

First Responsibilities: What Would You Trust to an AI Executive?

The panel tackled a provocative scenario: if an AI agent joined your leadership team tomorrow as a decision-maker, what would you trust it with first, and what would you never give up?

Healthcare: Data Analysis, Not Value Judgments

Beatriz brought the critical perspective from high-stakes medical environments. She would immediately hand her AI agent all the accumulated training simulation data—information on how medical personnel performed, where mistakes occurred, what was effective and intuitive. “There are many things that we as humans miss,” she explained, noting the difficulty of processing vast amounts of training data to improve simulators and make practice as real as possible.

However, Beatriz drew a firm line: she would never let AI design the actual training scenarios. “That’s really an ethical call. It’s a value-based judgment. You need to understand why you’re doing the case, all the demographical information about the patient. For that, you need real physicians and real experts.”

Enterprise: Operational Tasks, Not Strategic Decisions

Ahmed identified a major opportunity in the operational realm. He observed that in his work across organizations, particularly in Saudi Arabia, people are drowning—spending 70-80% of their time on repetitive operational tasks like pulling data from multiple sources, issuing reports, managing IT service requests, and writing feedback comments.

“The first thing I would give them is operational tasks that are repeated with clear decisions,” Ahmed stated. “I don’t want to replace my team yet, but I want to free them for more strategic work, more creative work, more work involving ethical values.”

What would he never hand over? “Any responsibility that requires strategic decisions dealing with values and customers, something that has accountability with it. Anything that involves culture, values, or human perspective—I wouldn’t give to an AI agent yet.”

Policy: Building Intelligence and Empathy First

Hanane offered a fascinating perspective from the policy world. Before deploying AI agents, she would focus on having them develop “exceptional communication skills”—not personality traits, but values essential to policy-making. She emphasized that agents need high levels of intelligence, empathy, and the ability to navigate complexity and ambiguity.

“We need to look at the foundations of how to make technology work with the help of policy—not to hinder it, but to help it benefit either the business model, service delivery, or scientific research,” Hanane explained. She stressed that data is agnostic until we can make sense of it, and agents need to be intelligent enough not to replace humans but to guide, coach, and anchor them.

What must remain human? Strategic decisions that require understanding your specific situation and context. “You need to be able to make the right call for your own situation and not apply a blanket policy.”

AI Development: HR Leadership, With a Human in the Loop

Puneet brought practical experience from building AI agents. With characteristic humor, he said that if an AI agent joined his team, he would “hand it over lead HR.” Jokes apart, he acknowledged that the current reality requires human oversight.

His more serious point focused on preparation: “How I empower the AI agent for the future is critical. Every decision, including hyper-personalization and decision-making, the AI agent will be able to do better than us as we move forward—because it will be more enriched with data, with clean data.” The current limitation? We don’t have clean data yet, which is why human-in-the-loop remains essential.
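
As a rough illustration of the human-in-the-loop pattern Puneet described, the sketch below routes decisions above an assumed risk threshold to a human reviewer before the agent acts. The function names, threshold, and example values are hypothetical, not AI LifeBOT's implementation.

```python
from typing import Callable

def execute_with_human_in_loop(
    decision: str,
    risk_score: float,
    act: Callable[[str], None],
    ask_human: Callable[[str], bool],
    risk_threshold: float = 0.7,
) -> bool:
    """Run low-risk decisions autonomously; escalate critical ones to a person."""
    if risk_score >= risk_threshold:
        if not ask_human(decision):          # human reviewer makes the final call
            return False
    act(decision)                            # the agent executes the decision
    return True

# Example usage with trivial stand-ins for the real action and reviewer:
executed = execute_with_human_in_loop(
    decision="refund customer order #1234",
    risk_score=0.85,                         # above the threshold, so a human reviews it
    act=lambda d: print(f"executing: {d}"),
    ask_human=lambda d: True,                # stand-in for an approval workflow
)
print("executed:", executed)
```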

Bold Predictions: The State of AI Agents by End of 2026

The session moved to rapid-fire predictions about where we’ll be by the end of 2026.

Will Employees Manage More AI Agents Than Human Subordinates?

Ahmed’s answer: Not yet. Especially in government sectors and markets like Saudi Arabia, significant preparation work needs to happen first. “There are regulations around cloud services and other technologies. Organizations need to build themselves in terms of data structure, automation, and systems,” he explained.

Beyond technical limitations, Ahmed identified a critical cultural barrier: “A lot of leaders treat agentic AI as a collaborative tool, an added tool—not as a fundamental operational change in how you deliver value within your services.” His bold prediction: the majority of organizations are still behind, though some unicorns will emerge.

Will the Most Powerful AI Agents Live in Headsets and Glasses?

Beatriz’s answer: True, but with major caveats. She pointed to exciting startups in San Francisco combining humanoid robotics with AI, seeing this as the future trend—if politics and regulation allow it. “I see that trend, if regulation allows it, which is very difficult in my opinion. That would be the most powerful. However, I don’t know when we would allow it to enact its full power.”

Hanane reinforced this point, noting that wearables are “the ticket item”—the hardware that will make a huge difference for the AI hype, but “we need to get it right this time because we have the frameworks in place.” She cited significant challenges: fitting all the necessary chips into wearable form factors, operating beyond current infrastructure layers, and navigating regulatory pushback.

Who’s Liable When AI Agents Make Million-Dollar Mistakes?

When asked who gets fired if an AI agent makes a massive financial error—the CEO, CTO, or software vendor—Hanane’s answer was unequivocal: Everyone will be on the case of the CEO.

Drawing from her experience at Meta, she explained that when CEOs make wrong bets on major initiatives, they bear ultimate responsibility. But her deeper point was about getting the fundamentals right: “We need to do more work to get everybody on board. We need consensus building. Doing things on your own or coming with a top-down approach doesn’t work anymore. Testing regulatory readiness in some markets before deploying products is critical.”

Real-World Impact: AI Agents Already Transforming Industries

Puneet provided concrete examples of how AI LifeBOT is deploying agentic AI across sectors:

Healthcare:

  • Avatar-based appointment booking systems that talk to patients in real-time, helping them schedule appointments based on doctor preferences, clinic locations, and availability—all integrated with backend hospital management systems
  • Diabetic boot sensors that measure foot pressure and alert patients to prevent ulcers, shifting from reactive treatment to preventive medicine

Manufacturing:

  • Voice-enabled predictive maintenance systems allowing blue-collar workers to speak to machines in their native languages (Spanish, Chinese, Hindi, Arabic, English)
  • Workers can ask questions about machine maintenance in natural language, with data captured by IoT devices but engagement happening through voice

Retail & Consumer Products:

  • Customer service AI agents handling warranty services, repair requests, and support calls
  • Analysis of customer journeys across channels to improve support delivery

Cross-Functional Applications:

  • Over 100 AI agents created for various functions: legal, IT, sales, operations, finance, marketing
  • Sector-agnostic, plug-and-play solutions deployed across the US, India, Africa, and Southeast Asia

Critically, Puneet emphasized built-in safeguards: “We understand these kinds of mistakes can happen due to hallucination. While building agents for enterprises, we are putting a lot of checks and balances. We have agents doing actions, and we also have anti-agents doing audit and performance management of other agents.”
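
A minimal sketch of that checks-and-balances idea: one agent proposes an action, and a second, independent audit agent reviews it against policy before anything executes. The policy check, names, and numbers below are illustrative assumptions, not AI LifeBOT code.

```python
def action_agent(ticket: dict) -> dict:
    # Proposes an action for a support ticket (stand-in for an LLM-driven agent).
    return {"ticket_id": ticket["id"], "action": "issue_refund", "amount": ticket["amount"]}

def audit_agent(proposal: dict, policy_limit: float = 200.0) -> tuple[bool, str]:
    # Independently checks the proposal against policy before it is executed.
    if proposal["action"] == "issue_refund" and proposal["amount"] > policy_limit:
        return False, "refund exceeds policy limit; escalate to a human"
    return True, "within policy"

ticket = {"id": "T-42", "amount": 350.0}
proposal = action_agent(ticket)
approved, reason = audit_agent(proposal)
print(proposal, approved, reason)
```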

Critical Warnings: The Gaps We’re Missing

The Clean Data Problem

Puneet was direct about current limitations: “One important reason for immaturity is the data we have right now. It will take more time to reach maturity where we don’t require human-in-the-loop. Right now, for critical decisions, we are putting human-in-the-loop.”

The Wrong Goal: Efficiency Over Innovation

Ahmed issued a powerful warning about what leaders will regret: “Leaders will think ‘I wish I didn’t run after ROI or cost reduction or efficiency from the start.’ They were running after the hype, saying ‘I have agentic AI’ without understanding what AI is, what agentic AI is, what LLM is.”

He emphasized the need for capacity building: “Leaders should spend more money on understanding the technology, what it’s capable of doing, how to deploy it correctly, and change management—how to treat the technology within their organization.”

The danger? “It’s not like implementing a new ERP where you can put it back to manual. The damage of going after AI and creating more issues is very hard to recover from. You need a roadmap, an ambition roadmap for managing change, educating people, and having governance in place.”

The Equity Gap

Beatriz raised a profound concern about global inequality: “The world is not equitable at the moment. We have a lot of disparity. I would love for everybody to be at the same level before all these advancements happen, because if not, some are inevitably going to be left behind.”

She called for foundational work first: “I would really like for all governments, people, and leaders to build the infrastructure—to be connected to the Internet, to have basic digital literacy and digital skills. And then when we have that base, we can advance.”

Hanane reinforced this, noting that “a few billion people are not yet connected to the Internet. We have a big chunk of data that we aim to process which is not yet available.”

Looking to 2030: What Will Future Leaders Say We Got Wrong?

In the session’s most thought-provoking segment, panelists imagined what leaders in 2030 or 2035 will say about the decisions being made today.

Hanane: “We Worried About the Wrong Things”

“I would definitely think of AI as not as smart as we all think,” Hanane projected. “Ten years down the line, as a policy maker—hopefully by then I’ll become a minister—I’ll be thinking these AI agents are not as smart as us. We have to be on top of the technology as humans because we can make sense of communication and the implicit much better than any agent we train ourselves.”

Her vision: AI should be “more of a tool in systematic decisions that cuts time and energy and helps optimize processes, especially when running large projects—whether in big companies or at the level of government.”

But she warned: “The machine ultimately will never outsmart the human. We need to mobilize the machine to follow instructions, have checks and balances in situ, make sure foundations are there for infrastructure, deployment, and data protection frameworks.”

Ahmed: “We Chased Efficiency Instead of Transformation”

Ahmed’s regret prediction was pointed: “Leaders will think ‘I wish I didn’t run after ROI or cost reduction from the start, running after the hype.’ A lot of organizations lack understanding of what’s AI, what’s agentic AI, what’s LLM.”

His prescription: “Probably would have spent more money in capacity building, understanding the technology, change management, and how to treat the technology within the organization. Many organizations treat AI as an add-on technology which is not—it has a profound impact on organization structure, decision-making, hierarchy, and workforce.”

Beatriz: “We Learned Humility From Our Blind Spots”

Beatriz offered the most optimistic perspective: “I’m very positive about the future. Agentic AI has actually made me very humble because I’ve seen what I have missed personally, what blind spots I have. Technology shows us what we’re lacking, and if we are humble enough to really analyze their judgment versus what we would have done, we can really learn and advance as a society.”

Her hope: “First really help everybody to be at the same level. Build infrastructure, get connected to the Internet, have basic digital literacy and skills. Then when we have that base, we can advance.”

Key Takeaways for Leaders

1. From Automation to Autonomy

As Puneet emphasized, “Agent AI is a mind shift. We are moving from automation to autonomy. This is not going to be stopped—this is the future. But we have to understand the consequences.”

2. Don’t Pave the Cow Path—Build New Roads

Ahmed’s warning resonates: Don’t just seek efficiency gains. Use AI agents to reimagine entirely new business models and ways of delivering value that were previously impossible.

3. Human-in-the-Loop Remains Essential

Across all perspectives, the consensus was clear: for critical decisions involving values, culture, strategy, and accountability, human judgment remains non-negotiable—at least for now.

4. Foundation First, Innovation Second

Hanane’s policy perspective emphasized getting the basics right: infrastructure, data protection frameworks, digital literacy, and regulatory readiness must precede widespread deployment.

5. Technology Only Works When It Works for Everyone

Beatriz’s closing remark captured the ethical imperative: “There’s a lot of power in agentic AI, but honestly, it’s only worth it if it makes us better and if it makes humanity better. So let’s all work towards that.”

Conclusion: The Leadership Evolution Has Begun

The session made clear that 2026 marks a pivotal moment. As Naman framed it at the close: “The age of agents isn’t just a tech upgrade—it’s a leadership evolution.”

Future leaders won’t be remembered for the agents they deployed, but for the culture of innovation they built to manage them. The challenge ahead isn’t technological—it’s human. It’s about building capacity, managing change, establishing governance, ensuring equity, and maintaining the ethical compass that only humans can provide.

The digital workforce is here. The question is: are leaders ready to orchestrate it?

This Open Innovator Knowledge Session featured expert insights on navigating the agentic AI revolution. A huge shoutout to the brilliant speakers—Beatriz Zambrano Serrano, Hanane Boujemi, Ahmed Elrayes, and Puneet Agarwal—for bringing clarity, candour, and perspective to the discussion. Special thanks to Puneet Agarwal, founder of AI LifeBOT, for showing what innovation with intent truly looks like.
