Categories
Events Visibility Quotient

Empowering the Core: Women Redefining the AI Value Chain

The rapid ascent of Artificial Intelligence is often discussed through the lens of silicon, datasets, and compute power. However, as the global tech landscape shifts toward 2026, a more critical narrative is emerging: the human architecture behind the algorithms. On March 9, 2026, a landmark session titled “Women Across the AI Value Chain” brought together a powerhouse of leaders to dismantle the stereotypes and structural barriers that have historically sidelined female voices in technology. Hosted by Open Innovator and supported by the Mexican Embassy in Germany, the dialogue served as more than a commemorative event for International Women’s Day; it was a strategic masterclass on leadership, influence, and the future of innovation.

Panelists

  • Isma Khemies – Advocate for inclusive leadership in AI
  • Shayma Kurz – Driving innovation through ethical AI practices
  • Sina Landorff – Championing diversity in tech ecosystems
  • Angeley Mullins – Scaling global AI-driven businesses
  • Linda Kohl – Breaking barriers in AI adoption and strategy
  • Jomy Jose – Empowering women in AI entrepreneurship

Co-Hosts

  • Adriana Carmona Beltran – Facilitating dialogue on women in AI leadership
  • Tedix – Partner organization amplifying voices in technology

Ecosystem Partners

  • Oliver Contla – Secretaría de Relaciones Exteriores de México, supporting international collaboration
  • Francisco Quiroga – Secretaría de Relaciones Exteriores de México, strengthening global AI networks

The Invisible Foundation of Leadership

The conversation opened with a poignant reflection on the nature of unrecognized leadership. Drawing a parallel between the high-stakes world of AI and the domestic sphere, the host highlighted how women have historically managed complex systems—families, communities, and educational environments—with resilience and innovation, yet these efforts are rarely labeled as “leadership.”

In the context of the AI value chain, this invisibility often persists. While women are integral to the development, ethical oversight, and deployment of AI, their contributions frequently remain behind the scenes.

The goal of the panel was to bridge this gap, moving from quiet contribution to radical visibility. As emphasized during the discussion, visibility creates opportunity. When a woman is seen as a decisive founder or an expert engineer, she provides a blueprint for the next generation. The panel sought to redefine traits like empathy and decisiveness not as gendered characteristics, but as essential human qualities necessary for navigating the “real system” of AI: the people who make the decisions.

Navigating the “Boys’ Club” and Building Credibility

Shayma Kurz, a veteran of the automotive industry and a former engineer at Mercedes-Benz, provided a visceral look at the challenges of navigating male-dominated technical environments. In industries like automotive and AI infrastructure, women often find themselves as the “only one in the room.” Kurz’s journey is a testament to the fact that influence in technical spaces is not built through the volume of one’s voice, but through the undeniable quality of one’s work.

Kurz identified three pillars for building credibility: competency, value creation, and strategic relationships. She emphasized that to succeed in a “boys’ club,” a woman must often solve the problems that others cannot. By becoming the person who can fix a broken data architecture or streamline a complex process, the focus shifts from gender to utility. However, Kurz also warned against the trap of waiting for an invitation to speak. Influence, she noted, is often built before a meeting starts. By aligning stakeholders and understanding the technical “pain points” of a project ahead of time, women can enter decision-making rooms with a foundation of support that makes their presence undeniable.

The Shift from Hierarchy to Data-Augmented Decisions

Jomy Jose, bringing two decades of experience across hospitality and insurance, explored how the nature of decision-making itself is evolving. In the past, corporate structures were strictly hierarchical, with decisions flowing from the top down based largely on seniority and intuition. Today, the integration of AI has transformed this into a data-augmented process.

According to Jose, AI acts as a “helper” that compresses the time between analysis and action. Decisions are now a hybrid of human judgment and AI-supported insights. This shift presents a unique opportunity for women. As AI agents and agentic workflows take over operational tasks, the value of strategic oversight increases. Jose emphasized that communities play a vital role here. By creating psychologically safe spaces for women to experiment with new tools and ask “stupid” questions, professional networks accelerate the learning curve and help women stay at the forefront of the AI value chain.

The Structural Gap: Informal Power vs. Formal Title

One of the most striking segments of the discussion was led by Isma Khemies, an executive coach with deep roots in international key account management. Isma deconstructed the “structural gap” that exists in large organizations. On paper, decisions are made by C-suite executives and board members. In reality, power resides where risk, revenue, and relationships intersect.

Isma shared a sobering personal account of the “competency paradox.” In her previous role, she was the “Wikipedia of the company,” holding deep influence over clients worth millions. Yet, she was passed over for a Sales Director position precisely because she was too valuable in her current role. This highlights a recurring theme for women in tech: holding immense informal power (resolving conflicts, spotting risks, and maintaining client trust) without the formal title or compensation to match. To close this gap, Isma argued that women must move closer to the Profit and Loss (P&L) statements. Influence must be made measurable. If a woman’s leadership is the reason a multi-million dollar account remains loyal, that impact must be quantified and used as leverage for formal advancement.

Scaling AI Through Diversity and Inclusion

The panelists, including Sina Landorff, Angeley Mullins, and Linda Kohl, collectively reinforced the idea that scaling AI requires a diversity of perspectives. AI is not just about the model; it is about the deployment of that model in a human world. When women lead AI teams, they bring a holistic view of the “value chain”—from the ethical sourcing of data to the final user experience.

The discussion touched upon the “double bind” mentioned by Adriana Carmona Beltran: the reality that women are often criticized for being “too manly” if they are decisive, or “too feminine” if they are soft. The consensus among the superwomen on the panel was to reject these labels entirely. By focusing on the high-stakes outcomes—revenue growth, risk mitigation, and technological breakthrough—these leaders are carving out a new definition of authority that is based on impact rather than performance of gender.

A Community of Innovation

The success of the “Women Across the AI Value Chain” event was a collaborative effort. A huge shoutout goes to the superwomen panelists: Isma Khemies, Shayma Kurz, Sina Landorff, Angeley Mullins, Linda Kohl, and Jomy Jose. Their willingness to share raw, unvarnished experiences provided a masterclass for everyone in the room.

The conversation was brought to life by co-host Adriana Carmona Beltran and the support of Tedix. Furthermore, the dialogue was amplified by incredible ecosystem partners Oliver Contla and Francisco Quiroga from the Secretaría de Relaciones Exteriores de México, whose support underscores the global importance of inclusive innovation.

Conclusion

As we look toward the future of the AI ecosystem, it is clear that technical skill alone is not enough. The leaders of tomorrow will be those who can navigate complex social architectures, leverage data-augmented insights, and turn informal influence into formal power. The journey of these women shows that while the glass ceiling still exists, it is being cracked by the sheer force of competency and community. By stepping into the spotlight and claiming their roles as builders, scalers, and influencers, women are not just participating in the AI value chain—they are defining it.


About Open Innovator

Open Innovator is a global platform dedicated to fostering collaboration, breaking down silos, and empowering the next generation of tech leaders. We believe that the best innovations happen when diverse minds meet at the intersection of technology and humanity. Through sessions like these, we aim to bridge the gap between theory and real-world impact.

Join the Movement

Are you ready to be part of the future of AI? We are always looking for passionate innovators, thinkers, and leaders to join our growing ecosystem.

Write to us today at open-innovator@quotients.com to join our community and stay updated on upcoming sessions!

Categories
Data Trust Quotients Events

Report: The AI vs. AI Digital Arms Race

March 6, 2026

The global technological landscape has reached a pivotal tipping point where the narrative of Artificial Intelligence has shifted from “assistance” to “autonomy.” We have officially entered an era of a digital arms race—a state where AI systems are simultaneously being engineered to compromise global infrastructure and to defend it.

In a landmark knowledge session organized by DTQ, a panel of elite practitioners from the banking, telecommunications, and aviation sectors convened to dissect this “AI vs. AI” phenomenon. The consensus was clear: the battlefield has moved beyond human reaction times. The security of our future now depends on how we architect the machines that fight on our behalf.

The session brought together three leading practitioners in AI-driven cybersecurity across banking, telecom, and aviation:

  • Dr. Sudin Baraokar – AI and quantum scientist, former Head of Innovation at SBI, architect of the Yono app (100M+ users), and builder of AI-native banking systems.
  • Daxesh Parikh – EVP at DoveLoft Limited, specializing in telecom-based authentication for government, banking, and fintech, working with major Indian banks on next-gen security beyond OTPs.
  • Sabarikumar KB – Group Manager & CSO at Airbus, with frontline SOC experience countering AI-generated attacks and expertise in aviation security architecture.

Moderator: Dr. Akvile, founder and CEO of System Akvile, participant in G20 AI governance discussions, with extensive work on AI in the health and youth sectors.

The Opening Salvo: From Tools to Combatants

The discussion opened with a provocative observation: technology is advancing at a velocity that has outpaced traditional oversight. Only a few years ago, AI was seen as a helpful tool for automation; today, it has become a primary combatant. Some systems are designed to create problems, while others are built to stop them, turning the digital landscape into a battle where one AI generates threats and another AI counters them—leaving humans as spectators to the unfolding drama.

This drama plays out through a sophisticated cycle: attackers deploy Large Language Models to craft flawless phishing campaigns, generate hyper-realistic deepfakes for social engineering, and automate brute-force hacking that can probe millions of vulnerabilities in seconds. In response, defensive AI is being woven into the fabric of networks, detecting anomalies and neutralizing threats at machine speed.

Banking Infrastructure: Resiliency at 24,000 TPS

The primary concern for any digital economy is the stability of its financial heart. Dr. Sudin Baraokar, an AI and Quantum Scientist with a storied career at SBI, IBM, and GE, provided a masterclass on how banking infrastructure is evolving to survive an AI-native world.

The Scale of the Challenge

Dr. Sudin shared staggering benchmarks from his tenure as Head of Innovation at the State Bank of India (SBI). These figures provide the context for why traditional security is no longer sufficient:

  • Transaction Speed: Core banking systems are benchmarked at 24,000 transactions per second (TPS).
  • Daily Volume: Handling approximately 1.5 billion transactions daily.
  • Customer Reach: Protecting the data of 500 million customers across 700 million accounts.
  • The Yono Factor: The Yono digital lending app has now crossed 100 million users, representing a massive surface area for potential attacks.

The Shift to Artificial Superintelligence (ASI)

Dr. Sudin emphasized that the advent of AI and Gen AI allows banks to “talk to their data” in ways previously unimagined. The shift is moving away from static rules and manual libraries toward Security Model Management.

“Previously, we used to have a whole lot of templates and rules, but now it’s all model-driven,” he explained. This allows for a three-level approach to security:

  1. Level 1 (Business Rules & Intent): Establishing the foundational logic of what a transaction should look like.
  2. Level 2 (Reasoning): Using AI to analyze the context and intent behind system behavior.
  3. Level 3 (Decisioning): Enabling the system to take autonomous action to block a threat.
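The three-level approach above can be pictured as a decision pipeline. The sketch below is purely illustrative: the `Transaction` fields, thresholds, and rules are invented for the example and are not SBI's actual security model.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    device_known: bool

def level1_rules(tx: Transaction) -> bool:
    """Level 1: static business rules and intent (illustrative limit)."""
    return tx.amount <= 100_000

def level2_reasoning(tx: Transaction) -> float:
    """Level 2: contextual reasoning; a stand-in for a model risk score."""
    risk = 0.0
    if not tx.device_known:
        risk += 0.5          # unfamiliar device raises risk
    if tx.country not in {"IN", "DE"}:
        risk += 0.3          # unusual geography raises risk
    return risk

def level3_decision(tx: Transaction, block_at: float = 0.6) -> str:
    """Level 3: autonomous decisioning - allow, challenge, or block."""
    if not level1_rules(tx):
        return "block"
    risk = level2_reasoning(tx)
    if risk >= block_at:
        return "block"
    if risk >= 0.3:
        return "challenge"   # step-up authentication
    return "allow"

print(level3_decision(Transaction(500.0, "IN", True)))   # low risk
print(level3_decision(Transaction(500.0, "US", False)))  # high risk
```

The point of the layering is that cheap static rules run first, model-based reasoning runs only on transactions that pass them, and the final action is taken autonomously at machine speed.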

The Human Factor: The Persistent Weakest Link

Moderator Dr. Akvile, Founder and CEO of System Akvile, brought a grounding perspective to the high-tech discussion. Despite the billions of dollars invested in AI shields, she pointed out that the most frequent point of failure is still the human being sitting at the keyboard.

The “Grandmother” Scam and Deepfakes

Dr. Akvile highlighted a growing trend in European banking: the largest investments are no longer just in software, but in human education. She shared anecdotes of “grandmothers” in Germany giving away banking details to AI-generated voices claiming to be their granddaughters.

“Banks are doing a lot to protect from cyberattacks, but the biggest issue is still the person handling the account,” she remarked. Whether it is using “Password123” or sharing sensitive data on fraudulent web pages, human fallibility provides a backdoor that even the most advanced AI struggles to close.

The Value of Information

Working with young people in the health sector, Dr. Akvile expressed concern over the “value of information.” In an age of deepfakes and AI influencers, the public’s ability to distinguish reality from manipulation is eroding. This creates a secondary security risk: the manipulation of public opinion to trigger bank runs or healthcare panics.

The Telecom Backbone: Beyond the OTP

Daxesh Parikh, Executive Vice President at DoveLoft Limited, pivoted the conversation toward the “nervous system” of the digital world: telecommunications. He argued that data theft is synonymous with “business paralysis.”

The RBI Mandate of 2026

In a significant update for the Indian BFSI sector, Parikh discussed the April 1, 2026, RBI mandate. The regulator is demanding a robust alternative to the One-Time Password (OTP) to prevent fraud and reduce friction.

“Fraudsters can weaponize SS7 and SIP protocols to intercept OTPs,” Parikh warned. The industry is moving toward Predictive Real-Time Authentication using the “crypto engine” already present in every SIM card.

The “Crypto Engine” Solution

By leveraging the unique cryptographic identity held by telecom operators, banks can verify a user’s identity without ever sending a text message. This “silent” authentication is already being used by Barclays Bank in Europe and is expected to become the global standard by 2030.
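The underlying idea is a challenge-response exchange keyed to the SIM. The sketch below is a simplified HMAC analogy under the assumption of a pre-shared key; real deployments use operator-provisioned keys inside the SIM's hardware crypto engine and standardized telecom protocols, not this toy exchange.

```python
import hashlib
import hmac
import secrets

# Assumption: a shared secret provisioned in the SIM and held by the operator.
sim_key = secrets.token_bytes(32)

def operator_challenge() -> bytes:
    """Operator issues a fresh random challenge for this authentication."""
    return secrets.token_bytes(16)

def sim_response(key: bytes, challenge: bytes) -> bytes:
    """The SIM signs the challenge silently - no SMS, no OTP typed by the user."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def operator_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Operator recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

ch = operator_challenge()
resp = sim_response(sim_key, ch)
print(operator_verify(sim_key, ch, resp))  # True
```

Because the secret never leaves the SIM and no code is sent over SMS, there is nothing for an SS7 or SIP interception to capture.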

Frontline Defense: The Struggling SOC

Sabarikumar KB (“Saba”), Group Manager and CSO at Airbus, provided a reality check from the Security Operations Center (SOC). She confirmed that traditional detection tools are “struggling” because they were built to recognize historical patterns.

The Experimentation Advantage

Attackers now have the “experimentation advantage.” Instead of sending one phishing email, they can use AI to generate 100,000 variations, testing each one against common filters until they find a “perfect” version that looks like a genuine internal HR update.

The SOC Shift

To counter this, Saba outlined a necessary evolution for security teams:

  • Behavior Over Signatures: Stop looking for what a file “is” and start looking at what it “does.”
  • Correlation Over Isolated Events: Using AI to connect a harmless-looking login with an unusual data export.
  • Analytical Thinking: Analysts must move from being “tool operators” to “investigators.”
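The “correlation over isolated events” principle can be illustrated with a toy example. The event names, records, and one-hour window below are assumptions for the sketch, not any real SOC schema.

```python
from datetime import datetime, timedelta

# Hypothetical event log: (timestamp, user, event_type)
events = [
    (datetime(2026, 3, 6, 2, 14), "alice", "login_unusual_hour"),
    (datetime(2026, 3, 6, 2, 40), "alice", "bulk_data_export"),
    (datetime(2026, 3, 6, 9, 5),  "bob",   "bulk_data_export"),
]

def correlate(events, window=timedelta(hours=1)):
    """Flag users whose unusual login is followed by a bulk export
    within the window - neither event alone would trigger an alert."""
    logins = [(t, u) for t, u, kind in events if kind == "login_unusual_hour"]
    exports = [(t, u) for t, u, kind in events if kind == "bulk_data_export"]
    alerts = []
    for login_time, user in logins:
        for export_time, export_user in exports:
            if user == export_user and login_time <= export_time <= login_time + window:
                alerts.append((user, login_time, export_time))
    return alerts

print(correlate(events))  # alice is flagged; bob's isolated export is not
```

Each event is harmless in isolation; the alert comes from the sequence, which is exactly the behavioral shift the panel described.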

Security by Design in an AI-Native World

The panel agreed that “Security by Design” has fundamentally changed. It is no longer enough to secure the infrastructure (the “car”); you must secure the intelligence (the “driver”).

The Three Pillars of Model Security

Dr. Sudin and Saba identified three critical areas where AI-native systems must be protected:

  1. Training Data Security: Preventing “data poisoning” where an attacker injects malicious data into the AI’s learning set.
  2. Model Behavior: Implementing filters to prevent “prompt injection,” where a user tricks an AI into bypassing its own safety rules.
  3. Lifecycle Monitoring: AI systems “drift” over time. Continuous monitoring is required to ensure the AI doesn’t develop harmful biases or vulnerabilities as it learns from new data.
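Lifecycle monitoring for drift can be sketched very crudely as a shift in the model's score distribution. The function below is a stand-in for production drift metrics (e.g., PSI or KL divergence); the data and the threshold of three standard deviations are invented for the example.

```python
import statistics

def drift_score(baseline: list[float], current: list[float]) -> float:
    """Shift of the current mean from the baseline mean, measured in
    baseline standard deviations - a crude drift signal."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(current) - mu) / sigma

# Hypothetical model risk scores collected at deployment vs. today.
baseline_scores = [0.10, 0.12, 0.11, 0.09, 0.13, 0.10]
current_scores  = [0.30, 0.28, 0.33, 0.31, 0.29, 0.32]

if drift_score(baseline_scores, current_scores) > 3:
    print("drift alert: audit or retrain the model")
```

The comparison runs continuously: once the live distribution moves far enough from the baseline, the system escalates for human review rather than silently serving a degraded model.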

Compliance: The Floor, Not the Ceiling

A common mistake made by organizations is treating compliance (GDPR, ISO, India’s DPDP) as the goal. Saba argued that compliance is merely the floor—the absolute minimum baseline.

“Compliance moves at the speed of governance, but threats move at the speed of code,” she noted. An organization can be 100% compliant and still be 100% vulnerable. The goal must shift from “being compliant” to “being resilient.”

The 2036 Vision: Agentic and Autonomic Security

Looking toward the next decade, Dr. Sudin outlined a future of Agentic Security. In this world, security fabrics will function like a neural network—automated, autonomic (self-managing), and self-audited.

He compared this transformation to the current $5 trillion investment in AI hardware, such as NVIDIA’s Blackwell chips, which feature 200 billion transistors. “We need to accelerate our journeys across business, data, and technology just as fast as the hardware is accelerating,” he urged.

Conclusion: Fortune Favors the Prepared

The DTQ session concluded with a final round of advice for the next generation of entrepreneurs and leaders:

  • Dr. Sudin: “Don’t depend on particular LLMs. Build your own organizational Small Language Models (SLMs) to own your IP and security.”
  • Daxesh Parikh: “Fortune favors the brave. Take calculated risks, align with AI-routing platforms early, and don’t wait indefinitely for the ‘perfect’ time.”
  • Saba: “Do the basics first. HTTPS, MFA, and API security are the foundations. AI is the roof. You cannot build the roof before the foundation.”
  • Dr. Akvile: “Preserve humanity. As we use more AI, we must ensure we don’t lose our empathy and authenticity.”

Final Takeaways

  1. AI vs. AI is Reality: Organizations must fight automation with intelligence.
  2. The OTP is Dying: Prepare for hardware-based, cryptographic identity.
  3. Model-Driven GRC: Governance must be integrated into the AI’s reasoning layer from Day Zero.
  4. Education is Essential: The human link must be strengthened through constant awareness.

The “AI vs. AI” digital arms race is not a drama we can afford to watch from the sidelines. It is a fundamental shift in the human-machine relationship, and the winners will be those who build their defenses as intelligently as their offenses.

This DTQ Session provided essential insights on the AI vs. AI battleground in cybersecurity. Expert panel: Dr. Sudin Baraokar (AI/Quantum Scientist, former SBI Head of Innovation), Daxesh Parikh (DoveLoft Limited), and Sabarikumar KB (Airbus CSO). Moderated by Dr. Akvile. Write to us at open-innovator@quotients.com to participate and for more information about our upcoming sessions.

Categories
Events Uncategorized Visibility Quotient

The Case for Patient Capital: Navigating the Myth vs. Reality of Long-Gestation Investments

Executive Summary

In a global market increasingly conditioned for rapid scaling and quarterly liquidity, the Open Innovator Session held on March 2, 2026, provided a contrarian framework for value creation.

The panel featured George Jones, Managing Director at Woodside Capital Partners; Keshia Theobald-van Gent, Venture Capital Partner at B Dev Ventures; and Matteo R. Oldani, Associate Partner at Your Group and Fractional CFO at Rosetta Omics. Together, they unpacked the structural realities of investing in technologies that require seven to eleven years to mature.

The consensus among the panel was clear: in sectors such as semiconductors, photonics, and life sciences, time is not a liability, but a strategic moat. When managed with financial discipline and commercial validation, long-horizon ventures offer superior defensibility and enhanced terminal value.

I. Deconstructing the Liquidity Myth

A primary friction point for LPs and GPs is the perceived “capital lock-up” inherent in deep tech. However, historical data and fund behavior suggest a more nuanced reality:

  • Fund Lifecycle Elasticity: While nominally ten-year vehicles, most venture funds operate on 15-to-17-year horizons through extensions, aligning naturally with the 9-year maturity average for semiconductors and 11-year average for IoT.
  • The Maturity Premium: Delayed liquidity often results in higher-quality exits. Companies with a decade of development enter acquisition talks with validated Intellectual Property (IP), stabilized risk profiles, and crystallized product-market fit.

“Liquidity is not absent in long-gestation cycles; it is deferred in exchange for enhanced valuation and competitive insulation.”

II. Risk Mitigation: Beyond the Binary “Moonshot”

The panel rejected the trope that deep tech is a binary “all-or-nothing” bet. Instead, they proposed a model of Incremental De-risking through disciplined milestone execution:

  1. Structured Experimentation: Success is predicated on completing full pilot cycles before pivoting.
  2. Market-Anchored Pivots: Tactical shifts must be driven by external feedback, not internal technical frustration.
  3. Mission Continuity: While tactics evolve, the core strategic objective must remain constant to maintain investor alignment.

III. The Transition: From Technical Elegance to Economic Validation

A critical failure point identified by Keshia Theobald-van Gent (B Dev Ventures) is the “Innovation Trap”—optimizing technology at the expense of market readiness.

Stage     | Focus               | Primary Objective
Seed      | Product Validation  | Technical Proof of Concept
Series B  | Economic Validation | Repeatable Sales & Unit Economics

To bridge this gap, founders must prioritize a clearly defined Ideal Customer Profile (ICP) and early evidence of Willingness to Pay (WTP). As the session noted: “Innovation gets you to Seed; discipline gets you to Series B.”

IV. Financial Architecture and Capital Efficiency

From a CFO perspective, Matteo R. Oldani emphasized that strategic patience is only viable when paired with rigorous financial oversight. Long-gestation founders must distinguish between EBITDA and Cash Flow while maintaining an acute understanding of investor incentives.

Lessons from the 2020–2025 Cycle:

The recent era of “cheap money” served as a cautionary tale. Excess capital often distorts discipline and inflates valuations beyond sustainable levels. The panel’s directive: Raise what is required, not what is offered. Efficiency is a structural advantage that reduces future fundraising pressure.

V. Designing for the Exit

The sequence of development should ideally follow a Sell → Design → Build methodology. By validating customer demand before final construction, downstream risks are significantly mitigated.

Exit Pathway Realities:

  • Acquisition Readiness: Should be integrated into the corporate DNA from Day 1.
  • Secondaries: Partial sales can provide interim liquidity, easing the pressure of the 10-year wait.
  • Ego vs. Tech: Investors cited “ego-driven decision making” and “founder detachment” as more frequent deal-killers than technical failure or messy cap tables.

Conclusion: The Decade Test

The session concluded with a shift in perspective on what constitutes a “successful” investment. While financial returns remain the primary metric, the enduring impact on healthcare, energy, and infrastructure provides the underlying stability of the asset class.

The Bottom Line:

The greatest wealth in the current venture ecosystem is not being built on 18-month hype cycles. It is being forged in the decade-long pursuit of hard tech. For the disciplined investor, long-horizon thinking remains the ultimate competitive edge.

Write to us at open-innovator@quotients.com to participate and get more information on our upcoming sessions.

Categories
Events

Ethical AI in Academia: Beyond Detection to Cultivation

Open Innovator Knowledge Session | February 2026

Open Innovator organized a critical knowledge session on ethical AI in academia, moving the conversation beyond sensationalized headlines about AI bans and cheating scandals to address how institutions can actually lead AI responsibly.

As moderator Dr. Nikolina Ljepava opened: Headlines scream that AI use is bad for students, thousands are caught cheating, and research integrity is compromised—creating panic that academia is under AI attack. But the real question isn’t whether AI should exist in academic institutions (it’s already in classrooms, research labs, and admission screening), but how institutions can cultivate ethical scholarship rather than just catching violations. The session brought together academic leaders to explore how universities can design frameworks that protect integrity while embracing innovation, shifting from prohibition to responsible integration.

Expert Panel

The session convened three academic leaders implementing AI governance at different institutional levels:

Professor Alaa Garad – Pro Vice Chancellor and Professor of Strategic Learning and Business Excellence at Abertay University, joining from Scotland. Creator of the learning-driven organization model and leader in strategic quality management, bringing decades of experience in organizational learning and institutional transformation.

Dr. Sheily Verma Panwar – Academic Program Director and Dean at CUQ Ulster University in Doha, teaching master’s level artificial intelligence programs. Specializing in integrating ethics into core AI education modules including machine learning, data engineering, and AI infrastructure.

Dr. Mayar Alsabah – Lecturer at Heriot-Watt University Dubai College of Technology, with extensive experience mentoring students, startups, and student entrepreneurship in the digital economy, bringing insights on AI-driven innovation and emerging ethical blind spots.

Moderated by Dr. Nikolina Ljepava, Acting Dean of the College of Business Administration at the University of Khorfakkan, bringing deep understanding of academic leadership and institutional responsibilities in the AI era.

Key Points & Strategic Frameworks

The Necessary Evolution: From Prohibition to Conversation

  • The 2022 Turning Point: The sudden rise of generative AI initially triggered defensive reactions: bans, rushed policies, and a focus on “catching” users.
  • The Shift: Institutions must move toward “responsible integration.” AI is already in labs and classrooms; the goal is to define how it exists there rather than trying to erase it.
  • A Culture of Awareness: Moving away from “guilty/not guilty” terminology toward a culture of transparent AI use and human oversight.

Non-Transferable Human Accountability

  • AI as a Tool, Not an Authority: AI outputs are aids, not final decisions. Responsibility for research and grading must remain with human academics.
  • The Traceability Requirement: Every academic outcome must be traceable back to a human “why.” Delegating judgment to systems risks “professional delusion” where no one is responsible for produced knowledge.
  • Mandatory Disclosure: Policies should require explicit documentation of how AI was used in any given assignment or research paper.

The Multi-Tier Integration Model

To effectively embed AI ethics, institutions should address four distinct levels:

  • Tier 1: Quality Review: Embedding AI standards into national and institutional quality assurance indicators.
  • Tier 2: Institutional Policy: Creating user-friendly, accessible policies (avoiding 20-page legal documents) that are easy for students to find and understand.
  • Tier 3: Curriculum Design: Making “Ethical AI Adoption” a formal learning objective in every program. This includes using a “Human-First” assignment strategy—where students maintain a version of their work before AI enhancement.
  • Tier 4: Leadership: Moving AI strategy out of the IT department and into the hands of senior executive management (Provosts and Deans).

Ethics as a “Core Literacy”

  • Against Standalone Modules: Ethics should not be a separate, theoretical “add-on.” It must be embedded directly into technical lessons (e.g., discussing data bias while teaching data science).
  • Professional Instinct: The goal is to graduate students who instinctively ask “Is this model safe?” rather than just “Is it accurate?”
  • Universal Requirement: AI ethics is no longer a specialized elective; it is a core literacy required for every discipline, from the arts to the sciences.

Identifying Ethical Blind Spots in Innovation

  • Epistemic Overconfidence: AI is “persuasively wrong.” Students may mistake AI fluency for factual truth, especially in underserved markets where data is sparse.
  • Strategic Convergence: If every student uses the same prompts and models, original thinking disappears, leading to a “homogenization” of ideas and average conclusions.

Practical Implementation & The “Digital Champion” Model

  • Internal Customers: Universities should include students in governance conversations to understand the reality of AI use on the ground.
  • AI Champions: Similar to the COVID-19 response, departments should appoint “AI Champions” to provide peer-to-peer mentoring and share best practices.
  • Budgetary Commitment: Institutions must move past “lip service” and allocate real budgets for mandatory faculty and student training.
  • Policy alone creates a culture of superficial compliance.
  • People will always find ways to bypass bans.
  • Literacy creates systematic resilience.
  • It gives individuals the intellectual immune system to recognize hallucinations, spot bias, and, most importantly, know when to apply their own judgment over machine output.

Conclusion: The Comprehensive Picture

Synthesizing the panel’s recommendations into a comprehensive framework:

1. Start from Top: Leadership must be aware what needs to be done, with serious commitment beyond lip service.

2. Policies That Live: Not oriented only toward compliance. Policies must live in curriculum and what we do on everyday basis.

3. Integration Everywhere: AI ethics should be in every AI learning module, but ethics of AI should be treated as core literacy—not only in AI-related courses but spanning across all disciplines and areas, because students use it everywhere.

4. Meaningful and Efficient Integration: Institutions must integrate AI without retreating into prohibition and policing, finding approaches that are useful and efficient while preserving the human touch: human creativity and analytical, critical thinking.

5. Avoid Mediocrity: Without proper integration, we risk producing average outputs and average thinking. The goal is maintaining excellence while leveraging AI’s capabilities.

The Mission Ahead: For everyone in academia, the new mission is to integrate AI in ways that are useful and efficient without sacrificing what makes education valuable: human creativity, critical thinking, original thought, and ethical judgment.

The Reality: In one hour, the panel scratched the surface of this topic. Much more can be said, and it will continue developing over time as technology advances and AI evolves. The conversation must continue as institutions, faculty, and students navigate this transformation together.

The shift required isn’t technological—it’s cultural, structural, and deeply human. Academic institutions face a choice: lead the AI integration thoughtfully and ethically, or risk becoming irrelevant as the traditional university model fundamentally transforms around them.


This Open Innovator Knowledge Session provided essential frameworks for embedding ethical AI in academic institutions. Expert panel: Professor Alaa Garad (Abertay University), Dr. Sheily Verma Panwar (CUQ Ulster University), and Dr. Mayar Alsabah (Heriot-Watt University Dubai). Moderated by Dr. Nikolina Ljepava (University of Khorfakkan).

Categories
Events Data Trust Quotients

From Data Privacy to Data Trust: The Evolution of Data Governance


Data Trust Quotient (DTQ) organized a critical knowledge session on February 20, 2026, addressing the fundamental shift from data privacy to data trust as AI systems scale across industries. The session explored a new category of risk: not just data theft, but quiet data manipulation that can make even the smartest AI make dangerously wrong decisions.

Expert Panel

The session convened four practitioners from highly regulated industries where data integrity is mission-critical:

Melwyn Rebeiro – CISO at Julius Baer, bringing extensive experience in security, risk, and compliance from ultra-regulated financial services environments, wearing both the Chief Information Security Officer and Data Protection Officer hats.

Rohit Ponnapalli – Internal CISO at Cloud4C Services, specializing in cloud security, enterprise protection, and cybersecurity for government smart city projects where real-time data integrity directly influences public infrastructure operations.

Ashwani Giri – Head of Data Standards and Governance at Zurich, working with enterprise privacy frameworks and regulators.

Mukul Agarwal – Head of IT with deep experience in IT strategy, systems, and digital transformation in the banking and financial services sector, bringing the skepticism and traceability mindset essential to financial industry operations.

Moderated by Betania Allo, international technology lawyer and AI policy expert based in Riyadh, working at the intersection of AI governance, cybersecurity, and cross-border regulatory strategy. Hosted by Data Trust (DTQ), a global platform bringing professionals together to share practices, address challenges, and co-create solutions for building stronger trust across industries.

The Shift: From Confidentiality to Verifiable Integrity

Regulators Are Changing Their Expectations

Ashwani opened by confirming that the shift is happening at ground level as AI adoption increases: organizations are preparing security documentation, holding internal discussions, and trying to understand what changes are required. Confidentiality is the past, now mature and clearly understood. The present focus is initiating discussions around veracity and verifiable data.

The Medical Prescription Analogy: Earlier, the goal was ensuring only the right people (patient and doctor) had access. Now the expectation is that nobody is altering the prescription in the background. With AI, the expectation is that data is not poisoned or drifting and that hallucinations are prevented.

Regulators as Trust Enablers: Regulators enable trust in the social ecosystem. As AI adoption drives changes, they’re moving from simply asking access-related questions (IAM) to expecting cryptographic proof of truth, verifiable audit trails, immutable integrity checks, and mechanisms providing confidence that claimed data is actually true.

The Verification Challenge: Organizations claim to have their bases covered, but when regulators try to verify, many cannot demonstrate it. Except for the most mature organizations with proper budgets and resourcing, most face this challenge, still trying to understand the changes before implementing them.

The Timeline: Similar to information security 15 years ago when organizations struggled with their own approaches, AI security faces similar challenges now. But this evolution will be much faster—5-10 years to reach maturity rather than decades.

AI Readiness Without Data Provenance Is Flying Without a Black Box

When asked if organizations can truly claim AI readiness without tracking who changed data and when, Ashwani was direct: AI readiness is definitely not there in many organizations. Provenance is absolutely essential.

The Right Thing, No Matter How Hard: Organizations should do the right thing regardless of difficulty. Provenance work is already happening in bits and pieces, but not in a structured format. Requirements include policies in place, dedicated teams (not stopgap arrangements), and full commitment rather than pulling people in merely to support tasks.

The Stark Reality: AI readiness without rigorous data governance is like flying a commercial plane without a black box, without proof of provenance or source of truth. It will land nowhere.

Automation Requirements: Regulators expect automated readiness testing and red teaming (validation testing of processes) to ensure controls are designed properly and working without glitches. If automation is less than 80%, it’s a problem.
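
The automated readiness testing described above can be sketched in a few lines. This is a toy illustration only; the control names and checks are hypothetical, and a real red-teaming harness would execute live probes rather than stub functions:

```python
# Hypothetical automated control checks; False = not yet automated or failing
CONTROLS = {
    "access_logging": lambda: True,
    "hash_verification": lambda: True,
    "drift_monitoring": lambda: False,
}

def automation_coverage(controls: dict) -> float:
    """Run every automated check and report the share that passes."""
    results = [check() for check in controls.values()]
    return sum(results) / len(results)

coverage = automation_coverage(CONTROLS)
print(f"automation coverage: {coverage:.0%}")  # below the 80% bar the panel cited
```

Under this scheme, any coverage figure below 0.8 would be flagged as a readiness gap.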

The Non-Negotiable Future: Regulators are signaling this now but will become more aggressive. Provenance will be non-negotiable. Without it, enterprises are building highly efficient black boxes.

Industry Readiness: Varied Responses to the Challenge

BFSI Leads, Others Follow at Their Own Pace

Different sectors respond differently. Banking, Financial Services, Insurance (BFSI) and healthcare—highly critical sectors—are early adopters responding well. Other industries respond at their own pace, some lagging behind, but everyone understands the importance.

The Leadership Ladder: Understanding and awareness exist, and behaviors are being introduced. Once understanding, awareness, behaviors, and ownership align, leadership emerges. AI leadership is still far away, but early adopters (especially BFSI) are doing well and holding internal discussions to create the right synergies.

No Choice But to Comply: Organizations understand this requirement is coming. They have no choice but to comply eventually.

The Vault Problem: Securing Contents, Not Just Containers

Mukul brought the financial services perspective with a critical observation: Skepticism is the word in BFSI. The industry doesn’t trust anything at face value unless traceability exists.

What Security Has Done Wrong: Traditional IT security secured the vault—fortifying infrastructure, ensuring nothing comes in, checking what goes out, logging and mitigating. But they haven’t verified what’s inside the vault.

The Critical Gap: Did someone with the absolute right key enter the vault and modify contents? Could be malicious intent or oversight. This is where data corruption matters.

Real-World Financial Risk: What if someone changed the interest rate for a customer’s loan for a specified period, reducing their outgo, causing damage of X amount to the financial institution, then reset it later? The change happened, reverted, damage was done, nobody noticed. This problem area lacks fair mitigation.

Insider Risk: The Blind Spot in Mature Security

Rohit emphasized this isn’t just about regulatory requirements—it’s about trust. Organizations have controls in place, but are they using those controls to monitor behavior changes or data changes?

The Maturity Imbalance: Security has organized as a fortress to prevent intrusion. Organizations are mature enough to prevent hackers from getting in. But there are fewer controls to tackle insider risk management—where data changes, data integrity, data accuracy, and data theft issues originate.

The Spending Gap: Leaving BFSI aside, other industries don’t spend much on tools. Organizations should start looking at insider threat and gaining trust from operations adapted to day-to-day life.

Zero Trust for Data: Beyond Access Control

Trust Nobody, Verify Everybody

Melwyn brought the perspective from Julius Baer’s highly regulated environment. Regulators are adopting zero trust—not trusting anybody, just verifying everybody. Whether insider or outsider, the boundary has completely changed.

The Regulatory Focus: Most regulators in India are focusing on having organizations adopt zero trust technology—trust nobody but always verify so legitimate users are the only ones accessing data.

The Evidence Requirement: If someone tries to tamper with data, at least you have logs or verifiable evidence that data has been tampered with and appropriate action can be taken.

From Access Zero Trust to Data Zero Trust

The zero trust mindset must extend directly to the data layer itself—continuously validating that information has not been altered.

The Shift Beyond Access: It’s not only about access control in zero trust, but also about the data itself. Always verify rather than trust the data. The source of data, integrity of data, and provenance of data must be verified in an irrefutable manner without tampering or malicious intent.

Why Data Is Everything: If there’s no data, there are no jobs for anyone in the room. Data is the critical aspect of decision-making and must be protected at all times.

The AI Attack Surface: Traditional cybersecurity techniques exist—encryption, hashing, salting. But with AI advent, various attacks are happening against data: injection, poisoning, and others.

The Survival Requirement: Focus must shift from zero trust access to zero trust data. Without it, organizations cannot make critical and crucial decisions and will not survive in a competitive, AI and ML-driven world.
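
The shift from zero trust access to zero trust data can be made concrete with a content fingerprint that is recomputed on every read. This is a minimal sketch of the idea (the record fields are invented), not a production integrity system, which would also need signed hashes and protected storage for the fingerprints themselves:

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Deterministic SHA-256 fingerprint of a record (canonical JSON)."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(record: dict, expected: str) -> bool:
    """Zero trust data: recompute the fingerprint before every use."""
    return fingerprint(record) == expected

loan = {"customer": "C-1001", "rate": 4.25}
stored = fingerprint(loan)       # captured when the data was written

loan["rate"] = 3.10              # quiet manipulation inside the vault
assert not verify(loan, stored)  # tampering is caught on read, not in a postmortem
```
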

Multi-Dimensional Accountability

Who Owns Risk When Data Is Quietly Manipulated?

In India, the trend shows most organizations still have CISOs taking care of data because they’re considered best positioned to understand both security and privacy requirements that the DPO job demands.

Different Layers of Ownership:

  • Data Owner: The reference point for data
  • CISO: Provides guardrails to guard data safety against malicious attacks
  • DPO: Concerned only with data privacy, ensuring it’s not impacted or hampered
  • Governance: Legal and compliance teams ensuring every control is covered

Shared Responsibility: Each member has their own job in the organizational chart and must do their part in protecting data. But ultimately, the board has overall responsibility and accountability to ensure whatever guardrails or safety measures allocated to data protection are in place and nothing is missing.

When Data Alteration Creates Public Safety Risks

Rohit brought critical perspective from smart city and government projects where personally identifiable information (PII) and sensitive personal data are paramount—not just for cybersecurity but for counterterrorism.

The Bio-Weapon Example: If data about blood group distribution leaked—showing a city has the highest number of O-positive blood groups—a bio-weapon could be created targeting only that blood group, causing mass casualties and impacting national reputation.

Real-Time Utility Monitoring: Smart cities don’t just hold privacy data; they monitor real-time use of public services by citizens. Traffic analysis, water management during seasonal changes, public Wi-Fi usage—all create critical data that, if tampered with, could cause chaos in city operations.

The Efficiency Question: Models exist to monitor data alteration and access, but are they efficient? Considering the scale of operations, monitoring capabilities, budget limitations, and whether they treat public safety with the same seriousness as corporate security—efficiency remains a question mark.

The Tool Gap: Industry-Specific Maturity

When it comes to infrastructure security or user security, good controls exist across industries with mature maintenance. But data access management is a question mark depending on industry.

BFSI Advantage: The Reserve Bank of India mandates database access management tools. They have controls because they have solutions. They can develop use cases, rules, and alerts for abnormalities, modifications, deletions, additions, direct database access.

The Budget Challenge: Outside BFSI, getting board approval for database access management tools requires a very strong use case or customer escalation. Without these tools, organizations rely on DB soft logs requiring manual review—cumbersome for humans to identify abnormalities and more like postmortem analysis.

Real-Time vs. Postmortem: Manual review might take six days to discover data modification. By then, damage is done. With DAM tools in place, organizations can get alerts and act in real-time with preventive and corrective controls.
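
The real-time versus postmortem contrast can be sketched as a single DAM-style rule that fires the moment an unapproved principal modifies data, instead of surfacing days later in a manual log review. Account and table names here are hypothetical:

```python
# Hypothetical principals approved for direct database changes
APPROVED_USERS = {"change_mgmt_svc"}

def check_event(event: dict) -> list[str]:
    """DAM-style rule: flag direct UPDATE/DELETE by unapproved users."""
    alerts = []
    if event["action"] in {"UPDATE", "DELETE"} and event["user"] not in APPROVED_USERS:
        alerts.append(f"direct {event['action']} by {event['user']} on {event['table']}")
    return alerts

# Fires in real time, enabling preventive and corrective controls
assert check_event({"user": "jdoe", "action": "UPDATE", "table": "loan_rates"})
assert not check_event({"user": "change_mgmt_svc", "action": "UPDATE", "table": "loan_rates"})
```
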

Industry-Specific Reality: Controls are there but depend on how important security, integrity, and trust are to the board—determining what tools can be secured for data integrity monitoring.

Traditional Security Models Are Insufficient

Rohit identified a critical trend: Traditional data access had a system and a user or user-developed application. Controls were simple. Now there’s a third element: AI—self-adaptive, self-learning, and capable of directly accessing data.

Going Back to the Drawing Board: Everyone is returning to proper boards where they can define and design controls. The whole industry—technical people, operations teams—are validating whether traditional security controls are sufficient to handle AI operations.

The Use Case Problem: Concerns arise because controls must change for every use case. One AI tool might have eight use cases, each requiring different controls, different monitoring, different security on who’s accessing, what output is given, what data is accessed, privilege levels, potential injection attacks, and command exploitation.

Output Modification Threat: It’s not just about data modification. What if output is modified? Hackers don’t need to get into databases to modify data if they can modify output directly. This concern is getting significant attention.

The Level Question: Organizations must determine at what level they’re discussing data integrity—making it a complex, layered challenge.

Key Questions Defining Data Trust

Is Data Trust Just Rebranding Privacy?

Ashwani’s answer: Data trust is the next level of data privacy. Privacy focused on keeping data safe. The question now: Is the data you’ve kept trustable? Is somebody altering or changing it? Is it the right data collected in the first place?

End-to-End Protection: Ensuring you’re collecting data that’s right and fit for purpose, protecting it with all possible controls until consumption, and having the right pipeline protecting from end to end with proper lineage.

Traceability Requirement: You should be able to identify where trust is broken. If somebody altered data, you must be able to trace it.
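
One way to make "identify where trust is broken" concrete is a hash-chained lineage log: each step is chained to the one before it, so any later alteration breaks the chain at a traceable point. This is an illustrative sketch under invented actor names, not any specific product's lineage model:

```python
import hashlib

class Lineage:
    """Append-only lineage log with hash-chained steps."""
    def __init__(self):
        self.steps = []

    def record(self, actor: str, change: str):
        prev = self.steps[-1]["hash"] if self.steps else ""
        h = hashlib.sha256(f"{prev}|{actor}|{change}".encode()).hexdigest()
        self.steps.append({"actor": actor, "change": change, "hash": h})

    def broken_at(self) -> int:
        """Index of the first step whose hash no longer matches, or -1."""
        prev = ""
        for i, step in enumerate(self.steps):
            h = hashlib.sha256(f"{prev}|{step['actor']}|{step['change']}".encode()).hexdigest()
            if h != step["hash"]:
                return i
            prev = h
        return -1

log = Lineage()
log.record("etl_service", "ingested customer table")
log.record("analyst_x", "normalized rates")
assert log.broken_at() == -1               # chain intact

log.steps[1]["change"] = "raised rates"    # quiet alteration
assert log.broken_at() == 1                # trust broken at a traceable step
```
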

The Future Parameter: Data trust is next-step beyond traditional data privacy controls—paramount for successful AI-driven organizations in the fully AI-driven era ahead.

The DPO Triad: As Rohit suggested to a DPO colleague—information security has three attributes (confidentiality, integrity, availability). For DPOs, it should be privacy, security, and trust defining overall governance.

Three Years Forward: Trusted vs. Just Compliant

Melwyn’s perspective: Trust is extremely important, going one level beyond compliance; which of the two leads depends on the moment in time.

Why Both Matter: Everyone wants to be compliant because penalties are high and heavy. Everyone wants to be trusted because without being a trusted brand or company, you’re out of business—competitors are already ahead.

The Reversal: Compliance is not driving trust. Trust is driving compliance. It’s a non-negotiable, hand-in-glove situation.

The Drinkable Water Example: Mukul provided a perfect analogy: Someone asks for water. Giving a glass of water is compliance. But was that water drinkable? That’s trust. Would you trust the person who gave drinkable water, or just take water from someone who was merely compliant?

No Shortcut to Trust: Ashwani emphasized trust cannot be bought with budget instantly. It takes time, requiring continuous good work to earn it. Trust is a real differentiator earned only by fixing things at ground level. There’s no shortcut to trust.

Compliance as Checkbox vs. Backbone

Rohit highlighted that compliance is a satisfaction factor for customers. When you want to prove you have good security controls, compliance comes into picture.

The Dangerous Trend: Compliance is becoming a checkbox, which should not be taken lightly. Compliance should be the backbone on which you build more security controls. Some organizations treat it as a checkbox saying they’re compliant, but effectiveness and efficiency remain questionable.

Priority Actions for the Next 24 Months

People, Process, Technology—In That Order

Ashwani’s Framework: Organizations must ensure the right standards, policies, procedures, and mandates are in place, identify the right people for the work, and agree on a RACI matrix (who is responsible, accountable, consulted, and informed) that defines roles clearly.

Ground the framework first; the rest is technology-related. Fixing the people part, the human factor, is always the most important step. Once you fix the human vector, everything else comes with much more ease.

Mindset and Culture Change

Melwyn’s Priority: The mindset must change when discussing privacy, data security, and integrity. Culture has to be there. Without the right mindset, culture, ethos, and ethics to govern, even the best controls, equipment, or security will not work.

The right mindset is the key to success.

Access Monitoring and Traceability

Rohit’s Focus: Culture is a never-ending job of awareness sessions and phishing simulations; some 10-20% of people will still violate policy despite the effort. But purely for trust, organizations have enough controls to know who has access to systems.

Three Critical Questions: Focus on controls understanding who has access to systems or data, who is modifying data, and what is being modified. Answer these three questions and trust can be easily built.

Explainable AI with Human in the Loop

Mukul’s Guidance: Many organizations live in the hype of deploying AI and trusting their data with AI. There must be a human in the loop, and AI must be explainable.

Explainable AI with human in the loop is the keyword when trusting data with AI models. At least jobs are safe with this explanation—people are still needed to validate.

Conclusion: Trust Cannot Be Bought, Only Earned

The session revealed unanimous agreement: The future belongs to organizations with the most trusted data, not just the most data or the most advanced AI.

Trust is the cornerstone of AI-driven ecosystems. Provenance is non-negotiable. Zero trust must extend from access control to the data layer itself. Accountability is multi-dimensional across boards, executive leadership, technology teams, and legal compliance.

As India accelerates its AI ambitions (hosting the AI Summit during this session), embedding verifiable integrity at scale becomes essential—not only for foundational institutional credibility across sectors but for defining long-term leadership.

Key principles emerged: Do the right thing no matter how hard. Fix the human factor first. Treat compliance as backbone, not checkbox. Remember there’s no shortcut to trust—it must be earned through continuous good work fixing things at ground level.

The shift from data privacy to data trust represents the next evolution in data governance—moving from protecting data from unauthorized access to ensuring data remains true, accurate, and verifiable throughout its lifecycle in AI-driven systems.


This Data Trust Knowledge Session provided essential frameworks for organizations navigating the evolution from data privacy to data trust. Expert panel: Melwyn Rebeiro (Julius Baer), Rohit Ponnapalli (Cloud4C Services), Ashwani Giri (Zurich), and Mukul Agarwal (BFSI sector). Moderated by Betania Allo.

Categories
Data Trust Quotients DTQ Visibility Quotient

The AI Trust Fall: Building Confidence in an Era of Hallucination


Data Trust Knowledge Session | February 9, 2026

Open Innovator organized a critical knowledge session on AI trust as systems transition from experimental tools to enterprise infrastructure. With tech giants leading trillion-dollar-plus investments in AI, the focus has shifted from model performance to governance, real-world decision-making, and managing a new category of risk: internal intelligence that can hallucinate facts, bypass traditional logic, and sound completely convincing. The session explored how to design systems, governance, and human oversight so that trust is earned, verified, and continuously managed across cybersecurity, telecom infrastructure, healthcare, and enterprise platforms.

Expert Panel

Vijay Banda – Chief Strategy Officer pioneering cognitive security, where monitors must monitor other monitors and validation layers become essential for AI-generated outputs.

Rajat Singh – Executive Vice President bringing telecommunications and 5G expertise where microsecond precision is non-negotiable and errors cascade globally.

Rahul Venkat – Senior Staff Scientist in AI and healthcare, architecting safety nets that leverage AI intelligence without compromising clinical accuracy.

Varij Saurabh – VP and Director of Products for Enterprise Search, with 15-20 years building platforms where probabilistic systems must deliver reliable business foundations.

Moderated by Rudy Shoushany, AI governance expert and founder of BCCM Management and TxDoc. Hosted by Data Trust, a community focused on data privacy, protection, and responsible AI governance.

Cognitive Security: The New Paradigm

Vijay declared that traditional security from 2020 is dead. The era of cognitive security has arrived: it is like having a copilot monitor the pilot’s behavior, not just the plane’s systems. Security used to be deterministic, with known anomalies; now it is probabilistic and unpredictable. You can’t patch a hallucination the way you patch a server.

Critical Requirements:

  • Validation layers for all AI-generated content, cross-checked by another agent using golden sources of truth
  • Human oversight checking whether outputs are garbage in/garbage out or, worse, confidential data leakage
  • Zero trust of data: never assume AI outputs are correct without verification
  • Training AI systems on correct parameters, acceptable outputs, and inherent biases

The shift: These aren’t insider threats anymore, but probabilistic scenarios where data from AI engines gets used by employees without proper validation.
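
A minimal sketch of such a validation layer follows; the golden source, claim keys, and statuses are all hypothetical. AI output is cross-checked against a source of truth before it reaches an employee, and unverifiable claims are escalated to a human:

```python
# Hypothetical golden source of truth maintained by the enterprise
GOLDEN_SOURCE = {"q3_revenue": "4.2M"}

def validate(claim_key: str, ai_answer: str) -> dict:
    """Cross-check an AI-generated claim before it reaches an employee."""
    truth = GOLDEN_SOURCE.get(claim_key)
    if truth is None:
        return {"status": "escalate", "reason": "no golden record; human review"}
    if ai_answer != truth:
        return {"status": "reject", "reason": f"conflicts with golden source ({truth})"}
    return {"status": "pass", "reason": None}

assert validate("q3_revenue", "4.2M")["status"] == "pass"
assert validate("q3_revenue", "5.0M")["status"] == "reject"    # likely hallucination
assert validate("q4_revenue", "3.0M")["status"] == "escalate"  # no source to check
```

In practice the cross-check could itself be another agent, as the panel suggested, but the gate-before-delivery structure is the same.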

Telecom Precision: Layered Architecture for Zero Error

Rajat explained why the AI trust question has become urgent. Early social media was a separate dimension from real life. Now AI-generated content directly affects real lives: deepfakes, synthesized datasets submitted to governments, and critical infrastructure decisions.

The Telecom Solution: Upstream vs. Downstream

Systems are divided into two zones:

Upstream (Safe Zone): AI can freely find correlations, test hypotheses, and experiment without affecting live networks.

Downstream (Guarded Zone): Where changes affect physical networks. Only deterministic systems are allowed: rule engines, policy makers, closed-loop automation, and mandatory human-in-the-loop.

Core Principle: Observation ≠ Decision ≠ Action. This separation embedded in architecture creates the first step toward near-zero error.
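
The Observation ≠ Decision ≠ Action separation can be sketched as three isolated stages, where a deterministic allow-list and a human approval, not the model, control the guarded zone. Metric names and actions here are invented for illustration:

```python
def observe(metrics: dict) -> str:
    """Upstream (safe zone): the AI side may freely propose hypotheses."""
    return "reduce_cell_power" if metrics["load"] < 0.2 else "no_change"

# Downstream gate: a deterministic allow-list decides what may proceed
ALLOWED_ACTIONS = {"no_change", "reduce_cell_power"}

def decide(proposal: str) -> str:
    return proposal if proposal in ALLOWED_ACTIONS else "no_change"

def act(decision: str, human_approved: bool) -> bool:
    """Guarded zone: network-affecting actions require a human in the loop."""
    return human_approved and decision != "no_change"

proposal = observe({"load": 0.1})
assert act(decide(proposal), human_approved=False) is False  # blocked without approval
assert act(decide(proposal), human_approved=True) is True    # executes with approval
```

An unlisted proposal such as "shut_down_network" would be coerced to "no_change" at the gate, so an AI error upstream cannot cascade downstream.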

Additional safeguards include digital twins, policy engines, and keeping cognitive systems separate from deterministic ones. The key insight: zero error means zero learning. Managed errors within boundaries drive innovation.

Why Telecom Networks Rarely Crash: A layered architecture with what looks like too many layers is, in practice, exactly the right amount, preventing failures from cascading.

Healthcare: Knowledge Graphs and Moving Goalposts

Rahul acknowledged hallucination exists but noted we’re not yet at a stage of extreme worry. The issue: as AI answers more questions correctly, doctors will eventually start trusting it blindly like they trust traditional software. That’s when problems will emerge.

Healthcare Is Different from Code

You can’t test AI solutions on your body to see if they work. The costs of errors are catastrophically higher than software bugs. Doctors haven’t started extensively using AI for patient care because they don’t have 100% trust—yet.

The Knowledge Graph Moat

The competitive advantage isn’t ChatGPT or the AI model itself—it’s the curated knowledge graph that companies and institutions build as their foundation for accurate answers.

Technical Safeguards:

  • Validation layers
  • LLM-as-judge (another LLM checking if the first is lying)
  • Multiple generation testing (hallucinations produce different explanations each time)
  • Self-consistency checks
  • Mechanistic interpretability (examining network layers)
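
The multiple-generation and self-consistency checks above can be sketched as a majority vote over repeated samples: hallucinations tend to vary across generations, while grounded answers tend to agree. The answers and threshold below are illustrative only:

```python
from collections import Counter

def self_consistency(answers: list[str], threshold: float = 0.7):
    """Accept an answer only if a strong majority of samples agree on it."""
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    return (top, agreement >= threshold)

# Stable answer across five samples: trusted
assert self_consistency(["12mg", "12mg", "12mg", "12mg", "10mg"]) == ("12mg", True)
# Divergent answers: flagged for human review
assert self_consistency(["12mg", "8mg", "15mg", "12mg", "10mg"])[1] is False
```
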

The Continuous Challenge: The moment you publish a defense technique, AI finds a way to beat it. Like cybersecurity, this is a continuous process, not a one-time solution.

AI Beyond Human Capabilities

Rahul challenged the assumption that all ground truth must come from humans. DeepMind can invent drugs at speeds impossible for humans. AI-guided ultrasounds performed by untrained midwives in rural areas can provide gestational age assessments as accurately as trained professionals, bringing healthcare to underserved communities.

The pragmatic question for clinical-grade AI: Do benefits outweigh risks? Evaluation must go beyond gross statistics to ensure systems work on every subgroup, especially the most marginalized communities.

Enterprise Platforms: Living with Probabilistic Systems

Varij’s philosophy after 15-20 years building AI systems: You have to learn to live with the weakness. Accept that AI is probabilistic, not deterministic. Once you accept this reality, you automatically start thinking about problems where AI can still outperform humans.

The Accuracy Argument

When customers complained about system accuracy, the response was simple: If humans are 80% accurate and the AI system is 95% accurate, you’re still better off with AI.

Look for Scale Opportunities

Choose use cases where scale matters. If you can do 10 cases daily and AI enables 1,000 cases daily with better accuracy, the business value is transformative.

Reframe Problems to Create New Value

Example: Competitors used ethnographers with clipboards spending a week analyzing 6 hours of video for $100,000 reports. The AI solution used thousands of cameras processing video in real-time, integrated with transaction systems, showing complete shopping funnels for physical stores—value impossible with previous systems.

The Product Manager’s Transformed Role

The traditional PM workflow (write user stories, define expectations, create acceptance criteria, hand off to testers) is breaking down.

The New Reality:

Model evaluations (evals) have moved from testers to product managers. PMs must now write 50-100 test cases as evaluations, knowing exactly what deserves 100% marks, before testing can begin.
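
A toy version of such PM-owned evals might look like the sketch below. The prompts, required terms, and stand-in model are all hypothetical; real eval suites would use many more cases and richer scoring than substring checks:

```python
def run_evals(model, cases: list[dict]) -> float:
    """Score a model against PM-authored eval cases; each case states
    exactly what a full-marks answer must contain."""
    passed = sum(
        all(term in model(c["prompt"]).lower() for term in c["must_contain"])
        for c in cases
    )
    return passed / len(cases)

# Hypothetical eval cases a PM might write for a search assistant
cases = [
    {"prompt": "refund policy", "must_contain": ["30 days"]},
    {"prompt": "support hours", "must_contain": ["9am", "5pm"]},
]

def toy_model(prompt: str) -> str:          # stand-in for the real system
    return {"refund policy": "Refunds within 30 days.",
            "support hours": "Open 9am-5pm."}[prompt]

assert run_evals(toy_model, cases) == 1.0   # full marks on this tiny suite
```
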

Three Critical Pillars for Reliable Foundations:

1. Data Quality Pipelines – Monitor how data moves into systems, through embeddings, and retrieval processes. Without quality data in a timely manner, AI cannot provide reliable insights.

2. Prompt Engineering – Simply asking systems to use only verified links, not hallucinate, and depend on high-quality sources increases performance 10-15%. Grounding responses in provided data and requiring traceability are essential.

3. Observability and Traceability – If mistakes happen, you must trace where they started and how they reached endpoints. Companies are building LLM observation platforms that score outputs in real-time on completeness, accuracy, precision, and recall.

The shift from deterministic to probabilistic means defining what’s good enough for customers while balancing accuracy, timeliness, cost, and performance parameters.

Non-Negotiable Guardrails

Single Source of Truth – Enterprises must maintain authentic sources of truth with verification mechanisms before AI-generated data reaches employees. Critical elements include verification layers, single source of truth, and data lineage tracking to differentiate artificiality from fact.

NIST AI RMF + ISO 42001 – Start with NIST AI Risk Management Framework to tactically map risks and identify which need prioritizing. Then implement governance using ISO 42001 as the compliance backbone.

Architecture First, Not Model First – Success depends on layered architectures with clear trust boundaries, not on having the smartest AI model.

Success Factors for the Next 3-5 Years

The next decade won’t be won by making AI perfectly truthful. Success belongs to organizations with better system engineers who understand failure, leaders who design trust boundaries, and teams who treat AI as a junior genius rather than an oracle.

What Telecom Deploys: Not intelligence, but responsibility. AI’s role is to amplify human judgment, not replace it. Understanding this prevents operational chaos and enables practical implementation.

AI Will Always Generalize: It will always overfit narratives. Everyone uses ChatGPT or similar tools for context before important sessions—this will continue. Success depends on knowing exactly where AI must not be trusted and making wrong answers as harmless as possible.

The AGI Question and Investment Reality

Panel perspectives on AGI varied from already here in certain forms, to not caring because AI is just a tool, to being far from achieving Nobel Prize-winning scientist level intelligence despite handling mediocre middle-level tasks.

From an investment perspective, AGI timing matters critically for companies like OpenAI. With trillions in commitments to data centers and infrastructure, if AGI isn’t claimed by 2026-2027, a significant market correction is likely when demand fails to match massive supply buildout.

Key Takeaways

1. Cognitive Security Has Replaced Traditional Security – Validation layers, zero trust of AI data, and semantic telemetry are mandatory.

2. Separate Observation from Decision from Action – Layered architecture prevents errors from cascading into mission-critical systems.

3. Knowledge Graphs Are the Real Moat – In healthcare and critical domains, competitive advantage comes from curated knowledge, not the LLM.

4. Accept Probabilistic Reality – Design around AI being 95% accurate vs. humans at 80%, choosing use cases where AI’s scale advantages transform value.

5. PMs Now Own Evaluations – The testing function has moved to product managers who must define what’s good enough in a probabilistic world.

6. Human-in-the-Loop Is Non-Negotiable – Structured intervention at critical decision points, not just oversight.

7. Single Source of Truth – Authentic data sources with verification mechanisms before AI outputs reach employees.

8. Continuous Process, Not One-Time Fix – Like cybersecurity, AI trust requires ongoing vigilance as defenses and attacks evolve.

9. Responsibility Over Intelligence – Deploy systems designed for responsibility and amplifying human judgment, not autonomous decision-making.

10. Better System Engineers Win – Success belongs to those who understand where AI must not be trusted and design boundaries accordingly.
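Takeaways 2 and 6 can be illustrated with a minimal sketch. All names, confidence values, and thresholds below are invented for illustration, not a real framework: observation, decision, and action live in separate layers, and a critical action never executes without a human checkpoint.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Observation:
    signal: str
    confidence: float  # model's self-reported confidence, 0..1

@dataclass
class Decision:
    action: str
    critical: bool  # critical decisions require human sign-off

def observe(raw_event: str) -> Observation:
    # Layer 1: the AI only describes what it sees; it cannot act.
    return Observation(signal=raw_event, confidence=0.9)

def decide(obs: Observation) -> Optional[Decision]:
    # Layer 2: map observations to proposed actions; low-confidence
    # observations produce no decision at all, so errors stop here.
    if obs.confidence < 0.8:
        return None
    critical = "outage" in obs.signal
    return Decision(action=f"remediate:{obs.signal}", critical=critical)

def act(dec: Decision, human_approves: Callable[[Decision], bool]) -> str:
    # Layer 3: critical actions never execute without explicit approval.
    if dec.critical and not human_approves(dec):
        return "escalated-to-human"
    return f"executed:{dec.action}"

obs = observe("outage-cell-42")
dec = decide(obs)
result = act(dec, human_approves=lambda d: False)  # human withholds approval
print(result)  # escalated-to-human
```

Because each layer has a single responsibility, a hallucinated observation can at worst produce a proposal, never a direct action on a mission-critical system.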

Conclusion

The session revealed a unified perspective: The question isn’t whether AI can be trusted absolutely, but how we architect systems where trust is earned through verification, maintained through continuous monitoring, and bounded by clear human authority.

From cognitive security frameworks to layered telecom architectures, from healthcare knowledge graphs to PM evaluation ownership, the message is consistent: Design for the reality that AI will make mistakes, then ensure those mistakes are caught before they cascade into catastrophic failures.

The AI trust fall isn’t about blindly falling backward hoping AI catches you. It’s about building safety nets first—validation layers, zero trust of data, single sources of truth, human-in-the-loop checkpoints, and organizational structures where responsibility always rests with humans who understand both the power and limitations of their AI tools.

Organizations that thrive won’t have the most advanced AI—they’ll have mastered responsible deployment, treating AI as the junior genius it is, not the oracle we might wish it to be.


This Data Trust Knowledge Session provided essential frameworks for building AI trust in mission-critical environments. Expert panel: Vijay Banda, Rajat Singh, Rahul Venkat, and Varij Saurabh. Moderated by Rudy Shoushany.

Categories
Events

From Hype to Cash Flow: What Will Actually Make Money in AI by 2026


Open Innovator Knowledge Session | January 30, 2026

Open Innovator organized a no-holds-barred knowledge session on “From Hype to Cash Flow: What Will Actually Make Money in AI by 2026” on January 30, 2026, cutting through the noise to address the market’s critical pivot point. As moderator Naman Kothari put it: “Everyone is an AI expert in today’s world. We have reached peak hype.” But as 2026 unfolds, the market has developed low tolerance for potential and high hunger for profit.

The gold rush is over; we’re in the settlement phase where nobody cares how cool your algorithm is if your cash flow is stuck in a tailspin. This candid session brought together three investors and growth strategists who literally spend their days separating signal from noise, steering companies through high-stakes waters where the question isn’t about the technology—it’s about whether the same old business logic dressed in a new hoodie can actually generate sustainable revenue.

Expert Panel

The session convened three experts who evaluate AI investments from distinctly different but complementary perspectives:

Deborah Boechat – Founder & CEO of Onit Center, bringing over 10 years of global experience helping startups and scaleups turn innovation into revenue through international expansion across the US, Latin America, Europe, and Asia Pacific, with strategic growth and capital connections that bridge four continents.

Carolina Castilla – Creator of the world’s first Artificial Intelligence Awareness Experiment (Love My Robot, Inc.) and Venture Capitalist at VC Lab, bringing culture, capital, and consciousness into the AI economy. An electronic music producer who uses performances as a “Trojan horse” to access Fortune 100 meetings, Carolina has developed AI-powered risk assessment tools that analyze startup viability in 30 seconds based on 100 questions VCs actually ask.

Hugo Lara – Investment Associate at BDev Ventures, backing Seed to Series B B2B software companies with real traction and a relentless focus on revenue generation, specifically targeting companies at the half-million ARR mark who are ready to scale to $5-10 million.

The discussion was moderated by Naman Kothari from NASSCOM, who framed the central challenge: “The era of cheap capital and high hype is over. We are now in the era of cash flow.”

The Litmus Test: Real Revenue Driver or Shiny Distraction?

The Founder’s Dilemma: People vs. AI Tools

Deborah opened with a fundamental tension she observes across continents: founders struggling between hiring more people or substituting them with AI tools that might perform similarly.

“Working with founders in their decision-making process around growth, I notice there’s a struggle: Should I hire more people or substitute that amount of people for a tech or AI tool that could perform in a maybe similar way?”

Her litmus test centers on understanding the business model and cash flow internally. The critical question: Is an AI tool a suitable solution, or can existing team members handle it?

“From a realistic standpoint, founders need good balance. Everyone wants to go for AI—’Show me what’s trending right now in AI that can help me solve this problem.’ But technology is a tool, a great guidance. There’s a need to keep balance between both, respecting budget.”

Budget respect is paramount, especially for companies seeking funding. “Technology is globalized—even though it’s different demographics and geographies, at the end of the day, it’s a similar struggle or decision-making process.”

The 30-Second Verdict: Can AI Replace Investor Intuition?

Carolina brought a provocative perspective: she can assess investment readiness in 30 seconds using AI analysis of pitch decks, powered by research collected from interviewing VCs at startup competitions.

“I started interviewing VCs and collecting data—I got 100 questions that VCs ask to know if a startup is going to succeed. I sat with my CTO and started prompting four years ago. Now I can get a pitch deck and say, ‘OK, these are the red flags or green flags of this startup.'”

But she immediately qualified this capability: “That doesn’t mean anything. I’m a general manager of a fund. My responsibility is make money for my investors, make my startups successful. But my real mission is asking: How do you personally separate innovation that improves human life from innovation that just accelerates everything, focusing on profits without taking care of workers who helped build the system?”

Her investment philosophy: “I follow founders with compelling value proposition. Obviously cash flow is the most important, but the question is more about human nature—where is this going and why are we putting so much into this?”
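The readiness-screening idea Carolina describes can be sketched as a simple scoring pass over a question bank. The questions, weights, and threshold below are invented for illustration; her actual tool prompts an LLM against roughly 100 questions collected from VCs.

```python
# Hypothetical question bank: each entry is a yes/no signal with a weight
# reflecting how heavily VCs weigh it. All values are illustrative.
QUESTIONS = {
    "has_revenue": 3,
    "proprietary_tech": 2,
    "defined_icp": 2,
    "founder_track_record": 1,
}

def screen_pitch(answers: dict[str, bool], threshold: float = 0.6) -> dict:
    # Aggregate answered-yes weights into a 0..1 readiness score and
    # split the questions into green flags (yes) and red flags (no).
    total = sum(QUESTIONS.values())
    score = sum(w for q, w in QUESTIONS.items() if answers.get(q, False))
    return {
        "readiness": score / total,
        "green_flags": [q for q in QUESTIONS if answers.get(q, False)],
        "red_flags": [q for q in QUESTIONS if not answers.get(q, False)],
        "fast_track": score / total >= threshold,
    }

verdict = screen_pitch({"has_revenue": True, "defined_icp": True,
                        "proprietary_tech": False})
print(verdict["fast_track"], verdict["red_flags"])
```

As Carolina stresses, a score like this is only triage for a faster introduction; verifying that the claims behind each answer are true remains human due diligence.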

Integration Over Transformation: The Sustainable Path

Hugo provided the clearest framework for distinguishing temporary spikes from sustainable paths to $5-10 million cash flow.

“Ask the fundamental question: Is AI actually needed here, or is this just a nice-to-have that will pass at some point?”

He emphasized this is especially critical for companies targeting enterprise segments, where deals can be large and create the illusion of rapid traction.

The Integration Principle: “What I believe is the type of approach that looks to integrate will stick better than the type of approach that tries to transform the whole process. Look at it like trying to convert a combustion gas car into an EV—also while the car is moving. That’s going to be very hard. It’s going to sound promising, but companies rarely have time to wait for all that disruption to happen.”

His prescription: “When companies try to integrate into current discussions—pricing, churn, revenue growth—decisions that have mattered since the beginning, if you make your way into those conversations and are consistent about delivering results, that has better chance to stick past the initial early adopters.”

The warning: Many companies at the hype peak promised complete transformations of critical processes like sales and procurement—strategic areas that are extremely hard to change in short windows. “That’s why you hear headlines about ‘AI is not returning anything on investment’—because many ideas started with trying to change the way business is done instead of integrating into the way business is actually carried out nowadays.”

The Moats and Misfires: Where Investors See Value

Will AI Augment Capital Decisions or Will Human Judgment Remain the Final Moat?

Carolina’s answer was unequivocal but nuanced: “I will not call it intuition. AI helps me—I get a pitch deck and in 30 seconds I know if I’m introducing this startup to my advisors. But that is just an assessment. The real thing about investing is human and is responsibility.”

She outlined what AI can and cannot do:

What AI Can Do:

  • Save time understanding a business, company, founder, team, readiness, business plan, competitive advantage, compelling value proposition
  • Provide readiness scores and assessments

What AI Cannot Do:

  • Provide responsibility (“The founder gets the money and goes to Hawaii to party”)
  • Replace due diligence teams that verify data accuracy
  • Ensure founder tenacity to respond to boards and continue raising money

Her perspective on the CEO role: “Everybody loves their startups, doing their products, doing demos. But the CEO life is not that—you get a team to do all that. You just keep knocking doors and raise money for the rest of your startup’s life.”

Bottom line: “Yes, we can get some readiness, but no, AI doesn’t give us responsibility.”

The Phantom Growth Trap: When Momentum Isn’t Scalable

Hugo identified the most common false confidence at the $500K+ ARR stage: “Founders often feel like they’ve nailed sales. When you talk to them, they actually don’t have a playbook—their sales is still founder-led and network-driven.”

This is perfectly fine, but it’s often not scalable or repeatable. “If selling depends on you being at the right place at the right time, talking to the right person, that’s very hard to replicate at scale. It can be a way of building a business, but not necessarily one that will generate double-digit growth, which is what you need for venture-backable companies.”

BDev Ventures specifically looks for $1 million+ ARR because “we back-tested it and found this is where companies usually have a go-to-market strategy that’s not evolving very fast and a sales team that can replicate the playbook.”

But the verification is critical: “Sometimes that’s not the case, and that’s certainly one of the most common false positives I find in day-to-day conversations.”

Global Expansion: Strength or Distraction?

Deborah brought crucial perspective on when international growth strengthens cash flow versus when it becomes a distraction from monetization.

The Right Stage to Expand:

  • The company has matured in their current market
  • They have a defined ICP (Ideal Customer Profile)—they know exactly who their client is
  • They have cash flow and revenue coming in
  • The business model is proven, sells well, and is scalable

The Critical Analysis: “If Company X was able to grow in that specific market, if they dominate what they’re doing from a scalable perspective even though it’s local, that’s good momentum to start looking outside. Look at other markets where your ICP has similar behavior.”

Important Considerations Beyond Finding Customers:

  • Some countries/regions have barriers to entry
  • Understand the new area or region thoroughly
  • The purpose is more clients, opportunities, and revenue—not just a selling point like “we’re operating in this country”

Deborah’s warning: “If they’re not being profitable, it’s just not effective. It’s better for these companies to stay where they are and shore up before expanding to a new region.”

Her formula: “It’s all about good timing and understanding the numbers. Cash is king—it’s important to respect timing and money.”

Practical Investment Criteria: What Actually Matters

How to Decide: Double Down or Write Off?

When asked how to decide whether to double down on an AI investment or write it off as sunk cost, Hugo emphasized: “You really have to take a very close look at who’s actually winning.”

While cash flow is the lifeblood, for most of the growth stage “you’re going to look at how fast revenue is growing—that’s the first thing I would look at to prove the bets they’re making are paying off.”

The deeper analysis: “As Deborah mentioned, if growth came from a bet placed on global expansion and it actually paid out, that’s talking about not only revenue growth but the execution of something as complex as going to another country and actually selling. Those are signals about the team, not just superficial stages or numbers.”

Hugo’s framework: “There has to be a story and narrative behind the success that you believe will continue. That’s where you double down. The contrary is where you’d be more cautious—about companies making bets that aren’t paying off one after the other.”

Building Models vs. Solving Real Problems

When asked whether investors prefer AI companies building models or applying AI to solve real business problems, Carolina identified a nuanced trade-off:

The Risk of Building on Existing Platforms: “If you just build your agents or things on top of Anthropic or OpenAI, there is a risk.”

The Attraction of Proprietary LLMs: “Obviously you’re more attractive if you’re building your own LLM. But now, competing with billion and trillion-dollar companies like OpenAI is impossible.”

What She Actually Looks For:

  • Privacy by design
  • Auditability
  • Bias testing
  • Strong security
  • Business models that don’t depend on exploiting attention or selling data

Carolina’s fund structure reflects this philosophy: “We’re investing seed in AI for enterprise, but we left 20% of the fund for AI that invests in the creator economy.”

Her stance on generative AI: “I’m totally against Gen AI and the tokenization model. I like to invest in businesses where the business model is not exploiting.”

Rapid Fire Insights: Gut Reactions from the Front Lines

Founder-Market Fit vs. Aggressive Unit Economics

Hugo’s answer: Founder-market fit.

His reasoning: “I find it has been more durable over time than unit economics. Unit economics becoming so important over the last 10 years is more a product of markets not being able to keep up with amounts of cash needed in new rounds.”

He provided historical context: Ten years ago, a Series A would be $3 million. “Nowadays a Series A can be upwards of $50 million. That’s why investors require companies not only to go for the full hike at all costs, but to make camp whenever possible—to look at if they’re making money, which you wouldn’t see in the early 2000s or even the 90s.”

The constant across time: “What you will see is always: who are you investing in?”

Can AI Become a Better Limited Partner Than Humans?

Carolina’s answer: False.

Her 15-second defense: “AI is prompted by a human. The human has mistakes by itself. I prompted AI when I was at a very big company, and the AI could think what I was prompting was correct. But I could probably be mistaken. So no, false.”

Dominating One Market vs. Being Average Across Three Continents

Deborah’s answer: Average across three continents.

Her reasoning: “Simply because when you’re in three different economies, worst-case scenario, if there’s a crash in one economy, you still have two to handle your business as an option.”

Will the AI Bubble Burst Before End of 2026?

Unanimous answer: No.

All three panelists agreed the AI bubble will not burst before 2026 ends, though Deborah qualified: “Not that soon.”

Green Flags: What Gets Investors Excited

Deborah: The Global Powerhouse DNA

Green flags that signal a global powerhouse, not just a tool seller:

  1. They have clients and revenue coming in
  2. Product-market fit is established – they understand exactly who their customer is
  3. With that configuration properly understood, scaling becomes easier

Deborah’s summary: “Definitely great flags: #1, do they have clients? If yes, they’re getting started. But certainly product-market fit—that’s definitely a path to scalability.”

On MVP stage specifically: “Do you have demand? Do you have people who are going to purchase your solution? It’s important to understand to which point AI is going to help or substitute humans. That’s something investors look at: How far can you go without our money? If you can make a lot of sales with less effort, that’s something investors will look at.”

Carolina: Compelling Value Proposition Wins

When asked what signal her AI picks up that humans might miss that makes her lean forward, Carolina’s answer was simple and direct: “Compelling value proposition.”

Her AI analyzes 100 questions to answer one prompt: “Is this startup going to succeed?” The tool came from organizing Startup World Cup regionals with access to the best startups and judges globally.

The workflow: “In 30 seconds it tells me how ready this pitch is for due diligence. That doesn’t mean what they say is real—there needs to be a person looking at data, team, resume, how they manage finances. It’s just a tool that tells me, ‘OK, this is a startup worth faster introduction than the other one.'”

Her cautionary tale: She once had a founder claim $30 million in signed MOUs in a pitch. “They fooled all the judges. When we went into diligence, everything was a lie. So I get data from the pitch—’Oh, what an amazing startup.’ Let’s get the meeting. ‘Oh, this is not a breakthrough. The LLM is not proprietary—they took 60% of OpenAI.'”

The conclusion: “AI can analyze data that humans present, but there’s due diligence and responsibility from the GM, investor, and founder to organize a business that’s good for the world, the team, the investors, and all stakeholders.”

Hugo: Trust Built Through Long Conversations

Beyond ARR and spreadsheets, Hugo looks for behavioral green flags—specifically how founders talk about cash flow.

“Now that AI is helping us do most manual processes at VC funds to save time, I’m sure from today until probably end of this year, most funds will switch to spending 90-95% of their time building trust with founders they’re looking to invest in and nurturing those relationships.”

Why? “At the end of the day, it’s like marrying for at least 10 years with a company. You need to be sure you’re investing in people you trust.”

What stands out in deals:

  • Who’s the founder? What have they done before?
  • How much do they actually know their business?
  • Can they articulate a narrative beyond ‘give me this money and I’ll generate this result’?
  • Can they go into the workings? Point out assumptions and levers that need to be pulled?

Hugo’s honest assessment: “I’ve had answers like, ‘I will work harder than the competition,’ and I’m sure they will—but that’s very hard for me to buy.”

BDev Ventures’ process reflects this: “Our investment process is very long—three to five months. But for the 60+ companies in our portfolio, that’s been beneficial. We’ve been working together for a good amount of time and know each other very well. Founders know if we’re the right investor; we’ve built enough conviction to make a decision.”

His conclusion: “As much as the market pushes for faster due diligence, I think it’s always going to be an investment in people. My green flag would be that I trust the person across the screen.”

The Five-Year Outlook: Infrastructure, Applications, or Tools?

When asked what type of AI investment will give the highest returns over the next five years, the panel offered varied but infrastructure-leaning perspectives:

Carolina: “I feel mobility could be a big one.”

Hugo: “I think infrastructure—but the infrastructure that could make a big difference. I’m not sure all VC funds will be able to invest in that, but I do think it’s a type of technology that could change people’s lives.”

Deborah: “I would also go with Hugo on infrastructure. I feel like it’s a good momentum. To consider five years from now with the evolution of technology and AI tools—we’re at the very beginning. As years go by, people are just going to think out-of-the-box and bring in solutions.”

Key Takeaways for 2026

1. Balance Is Everything

Technology is a tool and great guidance, but founders need balance between AI solutions and human capabilities, always respecting budget constraints.

2. Integration Beats Transformation

AI approaches that integrate into current business processes (pricing, churn, revenue growth) stick better than those promising complete transformations of strategic functions.

3. Cash Flow Is Still King

In 2026, the market has low tolerance for potential and high hunger for profit. Durability of the problem being solved matters more than algorithmic sophistication.

4. Founder-Market Fit Endures

While unit economics matter, founder-market fit has proven more durable over time. Investors are spending 90-95% of their time building trust with founders.

5. Scalability Requires More Than Founder-Led Sales

At $500K+ ARR, the most common false confidence is thinking you’ve “nailed sales” when it’s still founder-led and network-driven—hard to replicate at scale.

6. Global Expansion Timing Is Critical

Expand only when you’ve dominated your current market with proven ICP, cash flow, and scalable business model. Otherwise, it’s a distraction from monetization.

7. AI Cannot Replace Due Diligence Responsibility

AI can assess readiness in 30 seconds, but it cannot verify data accuracy, ensure founder tenacity, or guarantee they won’t go to Hawaii instead of executing.

8. Compelling Value Proposition Remains Supreme

Despite all the AI analysis tools, the fundamental question remains: Does this solve a real problem in a way customers will pay for consistently?

Conclusion: From Gold Rush to Settlement Phase

As Naman concluded: “The AI gold rush is moving into its most important phase—the phase of accountability.”

The unified message from all three investors: “In 2026, the market isn’t looking for the most advanced AI. It’s looking for the most indispensable business.”

From Deborah’s insights on global expansion timing to Carolina’s perspective on augmented decision-making while maintaining human responsibility, to Hugo’s warnings about phantom growth traps, the consensus is clear: Stop looking at valuation. Start looking at value.

The era of cheap capital and AI labels on every pitch deck is over. The settlement phase has begun, and in this phase, the question isn’t whether your algorithm is cool—it’s whether your business logic actually generates predictable, sustainable cash flow.

This Open Innovator Knowledge Session delivered a pragmatic master class on separating AI signal from noise in 2026. Deep appreciation to the expert panel—Deborah Boechat (Onit Center), Carolina Castilla (VC Lab/Love My Robot), and Hugo Lara (BDev Ventures)—for cutting through the hype and providing the real-world math on what will actually make money in AI.

Categories
Events

The AI Arms Race: Defense vs. Offense


Open Innovator Knowledge Session | January 27, 2026

Open Innovator organized a critical knowledge session on “The AI Arms Race: Defense vs. Offense” on January 27, 2026, addressing one of the most urgent challenges facing organizations today: the exponential acceleration of AI-powered cyber threats.

With Gartner reporting that cyber attacks now occur every 39 seconds—meaning five companies worldwide face breach attempts during a typical opening introduction—the session explored a stark reality: we are no longer worried about hackers in basements or even organized crime, but about code that doesn’t sleep, doesn’t blink, and operates at speeds human brains cannot perceive.

The panel examined whether AI-driven defenses have finally given organizations a defender’s advantage, or whether we’re simply building taller walls for increasingly sophisticated attackers wielding autonomous digital weapons.

Expert Panel

The session convened four cybersecurity leaders—dubbed the “AI Avengers”—bringing expertise from infrastructure security, governance, zero-trust architecture, and enterprise AI deployment:

Clen C Richard – “The Zero Trust Visionary,” multi-award-winning strategist who builds digital immune systems, specializing in environments where trust is never assumed and verification is continuous—even when the entity requesting access looks and sounds exactly like your CEO.

Rudy Shoushany – “The Governance Architect,” Forbes Tech Council veteran who translates cybersecurity from an IT line item into the boardroom’s digital transformation insurance policy, bridging the gap between technical reality and executive decision-making.

Ella Türümina – “The AI Readiness Architect” at Siemens and founder of her own AI consulting practice, serving as the bridge between big ambition and big protection, ensuring enterprise AI scaling doesn’t inadvertently leave backdoors open for autonomous intruders.

Fadi Adam – “The Infrastructure Sentinel,” CEH-certified professional who has witnessed the zero moments of corporate security evolution firsthand, ensuring that when AI battles commence, the foundational infrastructure doesn’t just hold—it fights back.

The discussion was expertly moderated by Naman Kothari from NASSCOM, who framed the critical challenge: “We are literally bringing human brains to a machine gun fight.”

The Arms Race Reality: Speed as the New Currency

Naman opened with alarming statistics that set the urgency level:

  • AI-driven phishing attacks have spiked 1,200% in recent months
  • Attacks now happen in milliseconds while traditional human response times are measured in hours
  • 76% of organizations admit they’re struggling to keep pace with AI-powered attack speeds
  • The threat has evolved from Nigerian prince emails to deepfake CEOs on Zoom calls requesting urgent wire transfers

The fundamental question: Has the defender finally gained an advantage, or are we just building more sophisticated defenses for even more sophisticated attacks?

Round One: The Defender’s Advantage—Real or Illusion?

Zero Trust: Coverage Gaps Remain Critical

Clen Richard opened with a sobering reality check: “The defender’s advantage is there, but the asymmetry problem still exists. As defenders, we have to be right 100% of the time. Attackers only have to be right once.”

He highlighted a critical gap: even with AI deployed for threat defense, only 18% of the attack surface is currently covered. The remaining 82% remains vulnerable, demanding urgent attention.

For AI agents specifically, Clen outlined a three-layer verification model essential for zero-trust environments:

Layer 1: Identity – “Who are you?” New frameworks like SPIFFE and SPIRE use short-lived tokens and automatically rotated credentials to continuously verify AI agent identities.

Layer 2: Behavioral Drift Detection – “What are you becoming?” As AI agents evolve and touch more systems, organizations must detect abnormal patterns and drifts from expected behavior.

Layer 3: Intent Analysis – “Why are you acting like this?” The Explainable AI Index (AEI) helps determine at what confidence level machines should act autonomously, requiring AI to justify decisions and explain why signals were considered malicious.
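The three layers Clen outlines can be sketched as sequential gates. This is a hedged illustration, not SPIFFE/SPIRE APIs: the token format, drift metric, and thresholds are all assumptions.

```python
import time

TOKEN_TTL = 300  # seconds; short-lived credentials per the zero-trust model

def verify_identity(token: dict, now: float) -> bool:
    # Layer 1: "Who are you?" Reject unknown or expired credentials.
    return (token.get("agent_id") is not None
            and now < token["issued_at"] + TOKEN_TTL)

def drift_score(expected: dict, observed: dict) -> float:
    # Layer 2: "What are you becoming?" L1 distance between the agent's
    # expected and observed action-frequency profiles.
    keys = set(expected) | set(observed)
    return sum(abs(expected.get(k, 0.0) - observed.get(k, 0.0)) for k in keys)

def may_act_autonomously(token, now, expected, observed, confidence,
                         drift_limit=0.3, confidence_bar=0.85) -> bool:
    # Layer 3: "Why are you acting like this?" Autonomous action requires
    # a valid identity, stable behavior, and confidence above the bar.
    return (verify_identity(token, now)
            and drift_score(expected, observed) <= drift_limit
            and confidence >= confidence_bar)

now = time.time()
token = {"agent_id": "agent-7", "issued_at": now - 60}
expected = {"read_logs": 0.7, "open_ticket": 0.3}
drifted = {"read_logs": 0.5, "open_ticket": 0.2, "exfil_data": 0.3}
print(may_act_autonomously(token, now, expected, drifted, confidence=0.9))
# False: the unexpected "exfil_data" activity pushes drift past the limit
```

The key design point matches the panel’s framing: even a correctly identified, highly confident agent is blocked the moment its behavior drifts from what it was provisioned to do.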

Clen’s verdict: “Speed is definitely a multiplier, and both attackers and defenders have the advantage. But at the end of the day, it’s the overall maturity in how you handle this that gives defenders the advantage.”

Governance: The Tug of War Where Attackers Stay Ahead

Rudy Shoushany brought the governance perspective with stark honesty: “It’s a tug of war. Winning is—I won’t say we’re not succeeding, but with smart AI and automation, vibe coding has become accessible to all. We’re seeing new attacks where AI itself develops attacks, not just humans anymore.”

He identified a critical escalation: AI agents are now creating their own attacks, moving beyond purely human-directed threats.

The governance gap is severe. Despite frameworks existing on paper, Rudy noted: “When you put it on the ground, the attacker has always been one step ahead. And now with AI involved, it’s maybe 2 steps or even 3 steps more advanced.”

A disturbing reality persists: “We talk with senior management, and unfortunately, many of them still don’t take the cybersecurity aspect as serious as it is and should be—even with all the bad experiences they’ve faced.”

Rudy pointed to vibe coding as an example of governance failure: organizations allow the technology without proper testing frameworks, creating “a new kind of vulnerabilities in the governance itself.”

His call to action: Management must act faster, more proactively, and “more viciously in a defense perspective,” potentially implementing blue team/red team methodologies to constantly work toward strong governance.

Enterprise Reality: Speed Meets Scale Challenges

Ella Türümina brought practical insights from working with Siemens-scale enterprises and her own AI consulting practice launched in 2025. Her philosophy: move from “progress to innovation” by first evaluating whether organizations truly need AI or if automation would suffice, then building ecosystems that satisfy ROI ambitions.

“Not building taller walls, but bringing defense in other dimensions and making governance sexy again,” Ella explained, noting that enterprise scale offers both advantages and challenges.

The upside: When a CEO issues a directive in a large organization, implementation happens universally. “If your CEO issues the circular which everyone has to implement from this evening, then people just do it.”

The downside: Global enterprises span cultures, regions, and tools, making rollout time-consuming. However, with proper lean management and change management, “people also have fire in their eyes and they want to go through with you.”

Ella emphasized a three-layer pyramid approach: Governance first, then architecture, then deployment with continuous monitoring. The monitoring is critical—tracking AI performance against KPIs and swapping models when performance degrades.
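The continuous-monitoring step Ella describes can be sketched as a sliding-window KPI check with an automatic model swap. The window size, KPI floor, and model names below are assumptions for illustration.

```python
from collections import deque

class ModelMonitor:
    # Hypothetical monitor: track a model's KPI over a sliding window and
    # swap in the fallback model when the windowed average degrades.
    def __init__(self, active="model-v2", fallback="model-v1",
                 window=5, kpi_floor=0.8):
        self.active, self.fallback = active, fallback
        self.scores = deque(maxlen=window)
        self.kpi_floor = kpi_floor

    def record(self, kpi: float) -> str:
        # Record one KPI sample; once the window is full, swap models if
        # the average has fallen below the agreed floor.
        self.scores.append(kpi)
        avg = sum(self.scores) / len(self.scores)
        if len(self.scores) == self.scores.maxlen and avg < self.kpi_floor:
            self.active, self.fallback = self.fallback, self.active
            self.scores.clear()  # start fresh for the swapped-in model
        return self.active

monitor = ModelMonitor()
for kpi in [0.9, 0.85, 0.7, 0.7, 0.65]:  # performance degrades over time
    current = monitor.record(kpi)
print(current)  # model-v1: the degraded model has been swapped out
```

The point of the sketch is the governance-first pyramid: the KPI floor is an agreed business threshold, and the architecture (monitor plus fallback) enforces it without waiting for a human to notice the degradation.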

Infrastructure: The Foundation Must Fight Back

Fadi Adam brought the conversation to the technical foundation, emphasizing that AI enhances compliance with frameworks like GDPR, HIPAA, and PCI, but only when organizations actually follow these regulations.

His core belief: “AI is a tool to enhance. We cannot just replace humans. You cannot replace humans.”

Fadi warned against the dangerous assumption that AI tools can simply replace human security professionals: “Most companies think, ‘I will buy this AI tool to do the job of a human,’ but after some time they have a breach or a loophole they cannot close, because AI will not support full automation.”

He stressed several critical practices:

  • Zero trust always – “Never trust, always verify, revalidations”
  • Patch accurately and test before going live – citing the CrowdStrike incident where untested patches caused massive system failures
  • Test patches outside working hours to avoid business interruption
  • Maintain updated disaster recovery plans for business continuity

Fadi’s perspective on the arms race: “AI will enhance the defense mechanism, but the question is: when we implement AI, are we ready for it? Because if you’re not ready, something goes wrong always.”

Round Two: Battlefield-Specific Challenges

Securing Borderless Infrastructure When Code Is the Person

Fadi tackled the challenge of autonomous AI agents moving freely through networks. His prescription:

1. Zero trust as the foundation – Always verify, never assume
2. Short-lived credentials – Not long-lived credentials that create persistent vulnerabilities
3. AI agent identity management – Each agent must have a verified identity tracking what it does, sees, and shares
4. Kill switches – Manual override capability when AI executes unauthorized code
5. Comparable models and tools – Multiple validation systems
6. Rule-based AND behavior-based restrictions – Dual-layer control mechanisms

“The AI agent should always be looked over through the identity—what it does, what it should do, what it should see, and what it should share,” Fadi emphasized, noting the lethal risk of AI accessing and sharing data without proper constraints.
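Fadi’s first two prescriptions, zero trust and short-lived credentials, can be combined into a small sketch: every agent action re-checks both expiry and scope on every call. This is purely illustrative; the class, scopes, and TTL below are hypothetical, not part of any product discussed in the session.

```python
import secrets
import time
from dataclasses import dataclass, field

# Hypothetical sketch: short-lived, scoped credentials for an AI agent.
@dataclass
class AgentCredential:
    agent_id: str
    scopes: frozenset          # what the agent may do, see, and share
    issued_at: float
    ttl_seconds: int = 300     # short-lived: minutes, not months
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))

    def is_valid(self, now=None) -> bool:
        now = time.time() if now is None else now
        return now < self.issued_at + self.ttl_seconds

def authorize(cred: AgentCredential, action: str, now=None) -> bool:
    # Zero trust: every call re-verifies expiry AND scope; nothing is
    # trusted because it was trusted a moment ago.
    return cred.is_valid(now) and action in cred.scopes

cred = AgentCredential("booking-agent-7", frozenset({"read:calendar"}),
                       issued_at=time.time())
assert authorize(cred, "read:calendar")           # in scope, not expired
assert not authorize(cred, "share:customer_pii")  # scope violation blocked
assert not authorize(cred, "read:calendar", now=time.time() + 3600)  # expired
```

Because the credential expires in minutes, a compromised token cannot become the persistent vulnerability Fadi warns about.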

Shadow AI: Innovation Underground or Security Nightmare?

Rudy addressed the explosive challenge of shadow AI—employees using unvetted AI tools to move faster, creating unauthorized backdoors.

His counterintuitive solution: Don’t try to ban it. You can’t.

“I did something like that 20 years ago,” Rudy admitted. “If you kill innovation in an organization, employees will be frustrated and find alternative ways. This is shadow IT, shadow AI. Organizations get it wrong—you cannot ban it. It’s there. It will always remain.”

His approach: The Sandbox Freedom Environment

Instead of driving innovation underground, create approved AI environments with clear boundaries:

  • Clear data boundaries
  • Transparent access
  • Freedom, not surveillance
  • A controlled environment

“Do whatever you want there, but give us reporting. Let us learn. Most initiatives that go underground have no reporting—I never learn what’s happening. I’m changing this to get the output, learn, and put it back in the enterprise.”
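Rudy’s sandbox idea can be sketched in a few lines: experiments run freely inside declared data boundaries, and every call is reported so the organization learns from it instead of losing it underground. All names and datasets below are hypothetical.

```python
# Hypothetical sketch of a "sandbox freedom environment": clear data
# boundaries plus mandatory reporting, so shadow AI becomes learning.
ALLOWED_DATASETS = {"public_docs", "synthetic_sales"}  # the data boundary

audit_log = []  # the reporting Rudy asks for: freedom, not surveillance

def sandbox_query(user: str, dataset: str, prompt: str) -> str:
    if dataset not in ALLOWED_DATASETS:
        audit_log.append((user, dataset, "BLOCKED"))
        raise PermissionError(f"{dataset} is outside the sandbox boundary")
    audit_log.append((user, dataset, "OK"))
    return f"result for {prompt!r} over {dataset}"  # stand-in for a model call

sandbox_query("alice", "public_docs", "summarize Q3 notes")
try:
    sandbox_query("bob", "customer_pii", "enrich leads")  # blocked, and logged
except PermissionError:
    pass
```

The audit log is the point: even blocked attempts are recorded, giving leadership the visibility that banned, underground tools never provide.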

Rudy’s rule of thumb for C-suites: “If employees are faster than your policy cycle, guess what? The policies are obsolete.”

This is the current reality with AI. Organizations need agility in both delivery and policy creation. “Governance is not there to police. Governance is there to enhance the environment in a very subtle way so everyone knows it’s a tool to drive innovation and enable, but with the guardrails we need.”

Security by Design: Speed AND Safety

Ella challenged the false choice between innovation speed and security: “Sometimes it may seem like you have to choose between speed and safety and fail at both. But the real insight is that security by design doesn’t compete with innovation—it accelerates it.”

How? Governance clarity eliminates rework. Automation compresses timelines. Stage rollouts catch problems at 1% scale instead of 100%.

Her three-layer approach:

  1. Work with people – Lean management and change management for buy-in
  2. Guide through governance first, then architecture – Bridge legacy and modern systems
  3. Deploy with safety gates – Continuous monitoring, transparency, and close analytics

The payoff is measurable. Ella cited 2025 consulting reports from McKinsey and the World Economic Forum: Organizations using this three-layer approach report almost 30% gains from automation, with incident response times measured in minutes rather than hours.

“To sum up, we don’t need to choose between security and speed. We go from the foundation—governance, architecture, deployment everywhere with control, people controlling the sequence, and onboarding with learning curves. Success will be around the corner when you have clarity and transparency.”
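The “catch problems at 1% scale” idea behind Ella’s safety gates can be sketched as a staged rollout gate: widen exposure only while the KPI holds, shrink back the moment it degrades. The stage fractions and KPI threshold below are illustrative assumptions, not figures from the session.

```python
# Hypothetical sketch of a staged rollout with safety gates: ship to 1%
# of traffic first, check the KPI, and only widen when it holds.
STAGES = [0.01, 0.10, 0.50, 1.00]   # fraction of traffic per stage
KPI_FLOOR = 0.95                    # e.g. minimum acceptable success rate

def next_stage(current: float, observed_kpi: float) -> float:
    """Advance one stage while the KPI holds; roll back to 1% if it degrades."""
    if observed_kpi < KPI_FLOOR:
        return STAGES[0]            # problem caught at small scale
    i = STAGES.index(current)
    return STAGES[min(i + 1, len(STAGES) - 1)]

assert next_stage(0.01, 0.99) == 0.10   # healthy: widen the rollout
assert next_stage(0.50, 0.90) == 0.01   # degraded: shrink back to 1%
assert next_stage(1.00, 0.99) == 1.00   # fully rolled out, KPI holding
```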

Identity in the Age of AI Agents

Clen tackled perhaps the most profound challenge: verifying identity when the “person” requesting access is code that can be perfectly spoofed in milliseconds.

His starting point: Accept that identity is no longer purely human.

“We have to accept the fact that identity layer is now beyond humans,” Clen stated. “It’s now shared by applications, by machines, and machine identity is very, very critical.”

He pointed to Privileged Access Management (PAM) as an example of this evolution. Traditional PAM focused on RDP, SSH, and web access—now called “legacy PAM.” The new concept: Modern PAM with zero standing privileges and app-to-app permissions.

AI’s detection capabilities are demonstrating their power: AI solutions have detected vulnerabilities in SQLite that were hidden for over 20 years, undetected by traditional fuzzing methods. “That’s when you see the capability of AI—the speed is just unmatched.”

For continuous verification, new frameworks are emerging:

  • SPIFFE and Spire – Using short-lived certificates and existing authentication layers
  • Continuous authentication – Not authenticating once, but continuously proving legitimacy

Clen described a proof of concept with WSO2 and Microsoft involving booking agents: “Although they are part of the same app, when the user agent speaks to the booking agent, the booking agent still must verify it. There is segregation on the identity at the agent level, and they must continuously verify their authenticity before they can work with others.”
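Continuous verification of this kind can be sketched as follows. This is a toy model loosely inspired by SPIFFE-style short-lived identities, not the actual WSO2/Microsoft proof of concept; all identifiers are hypothetical.

```python
import time

# Hypothetical sketch of continuous verification between agents: the
# booking agent re-verifies the caller on EVERY request, even though
# both agents are part of the same application.
class ShortLivedID:
    def __init__(self, workload_id: str, ttl: float = 60.0):
        self.id = workload_id            # SPIFFE-style identifier (illustrative)
        self.expires = time.time() + ttl # short-lived by construction

    def valid(self) -> bool:
        return time.time() < self.expires

TRUSTED = {"spiffe://demo/user-agent"}   # allow-list held by the booking agent

def booking_agent_handle(caller: ShortLivedID, request: str) -> str:
    # Continuous authentication: no session trust carried over from the
    # previous call; identity and freshness are checked each time.
    if caller.id not in TRUSTED or not caller.valid():
        raise PermissionError("caller failed verification")
    return f"booked: {request}"

ua = ShortLivedID("spiffe://demo/user-agent")
assert booking_agent_handle(ua, "BER->LHR Tuesday") == "booked: BER->LHR Tuesday"
```

A rogue or expired identity fails the check even if it once succeeded, which is the segregation-at-the-agent-level Clen describes.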

Rapid Fire Insights: Bold Predictions

Biometrics vs. Passwords

Clen’s answer: Biometrics. “Passwords are easy to crack and breach. Biometrics are more complicated in implementation.” When challenged about deepfakes, he acknowledged the cat-and-mouse game: “Liveness detection systems to detect deepfakes are also evolving.”

Brand Reputation = Cybersecurity Record?

Rudy’s answer: True. His reasoning was blunt: “I will not work with a company that has been hacked, that has been compromised. Very simply, I will not do that. I will not put my money, not spend anything. It’s reputation today.”

Speed vs. Security?

Ella’s answer: Security. “There is no speed without security. You can be as fast as possible, but to really contribute long-term, you need a safe environment. First the governance, then innovation and speed rise exponentially afterwards.”

Humans as the Weakest Link?

Fadi’s answer: True. “Humans always make errors. We are made to make errors and fail. The AI corrects human error.”

Rudy’s addition: “A human will always be accountable and is the weakest point. But AI could also be the weakest point—it will be the strongest, but also the weakest. We must balance, balance, balance.”

Clen’s critical question: “When you offload everything to AI and AI makes a mistake that causes business disruption, who’s accountable? The AI? The developer? The person who used it? Your company? This is a very interesting topic.”

Trillion-Dollar Cyber Attack Before 2030?

Unanimous answer: Yes. All four panelists agreed this milestone will be reached before the decade ends.

Looking to 2030: The Smartest Moves We Must Make Now

Clen: The Partnership Between Human and AI

“By 2030, it’s not about who has the power or what model is strongest—it’s the partnership between human and AI.”

He used Security Operations Centers (SOCs) as an example: analyst burnout is very real due to overwhelming incident volumes. AI is simultaneously the solution and the cause—attackers leverage AI creating more pressure, while analysts can use AI for quick decisions.

The key distinction: “Do we reach full autonomy? No. Humans always have to be in the loop—it’s human ON the loop rather than human IN the loop.”

Clen’s vision: Routine tasks must be autonomous. Irreversible tasks must always remain within human control. This partnership is the key for the future.

Rudy: Commoditize AI, Preserve Human Judgment

“By 2030, I think the discussion of AI won’t be here anymore—it will be something else. But our human judgments will always remain key. It will not be commoditized.”

Organizations that will thrive:

  • Open up innovation through smart sandboxing
  • Preserve human override authority
  • Stay on top of the latest research to challenge the algorithms
  • Reward employees working in 24/7/365 SOC environments

Rudy’s critical framework: Create a map showing where AI is taking over, then identify how humans remain involved and who owns each piece. “We will be ruled by machines in the future, so how can humans stay valid in this loop?”

Ella: Map Decision Rights on a Single Page

Ella’s advice for C-level executives: “Pick your 5-10 most critical AI-driven decisions—where money moves, people are hired, machines are controlled—and map the decision rights on a single page for each.”

Write it down in plain language:

  • The decision
  • The role of AI
  • The role of humans
  • The escalation triggers
  • The exact way a human can override the system
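Ella’s plain-language fields map naturally onto a simple record, one entry per critical decision. A minimal sketch, with a wholly hypothetical example entry:

```python
from dataclasses import dataclass

# Hypothetical sketch of a one-page decision-rights map as a record.
@dataclass(frozen=True)
class DecisionRights:
    decision: str            # what is being decided
    ai_role: str             # what the AI does
    human_role: str          # what the human does
    escalation_trigger: str  # when a human must be pulled in
    override: str            # the exact way a human overrides the system

page = [
    DecisionRights(
        decision="Approve refunds over $10k",
        ai_role="Recommend approve/deny with supporting evidence",
        human_role="Finance lead makes the final call",
        escalation_trigger="Model confidence below 0.8 or amount over $50k",
        override="Finance lead sets status manually in the payments console",
    ),
]

# No decision ships without an explicit, written override path.
assert all(entry.override for entry in page)
```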

“Your AI suddenly goes from being magic to reality—just a tool like Excel. It’s no longer powerful above humans. It’s just a tool which humans operate, not something operating above them.”

Ella’s conclusion: “AI won’t replace humans. It’s just a new normal.”

Fadi: Align With the AI Shield

“Companies and organizations should adapt—not just adapt, but adapt under the pressures of governance. All companies should comply and follow standards and regulations to align with the AI shield to build very strong defense infrastructure.”

He emphasized the CIA triad (Confidentiality, Integrity, Availability) as the foundation for every AI implementation in defense.

Most importantly: “Human always has to supervise the AI shield. Always human has to be in the picture.”

Key Takeaways for 2026 and Beyond

1. The Attack Surface Has Exploded

Only 18% of the attack surface is currently covered by AI-driven defenses. The remaining 82% represents urgent unaddressed risk.

2. Governance Is Not Policing

Create sandbox environments where innovation can flourish with guardrails, not underground where you have no visibility or control.

3. Speed Without Security Fails

Security by design doesn’t slow innovation—it accelerates it by eliminating costly rework and catching problems at 1% scale.

4. Identity Must Be Continuously Verified

In a world where AI agents request access, authentication cannot be a one-time event—it must be continuous at every interaction.

5. Human ON the Loop, Not IN the Loop

Routine tasks should be autonomous. Irreversible decisions must remain under human control with clear override mechanisms.

6. Shadow AI Cannot Be Banned

Organizations must provide approved environments for experimentation rather than driving innovation underground where security cannot monitor or learn from it.

7. Kill Switches Are Non-Negotiable

Every autonomous AI agent must have manual override capability when it executes unauthorized actions.
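A kill switch of this kind can be as simple as a shared flag that every agent action checks before executing. The class below is a hypothetical sketch, not a specific product’s mechanism:

```python
import threading

# Hypothetical sketch of a kill switch: a shared flag every agent action
# checks before executing, with a manual override that halts it instantly.
class KillSwitch:
    def __init__(self):
        self._halted = threading.Event()

    def pull(self):
        # The manual override: safe to call from any thread.
        self._halted.set()

    def guard(self, action, *args):
        # Every agent action passes through here; nothing runs once pulled.
        if self._halted.is_set():
            raise RuntimeError("agent halted by kill switch")
        return action(*args)

ks = KillSwitch()
assert ks.guard(lambda x: x * 2, 21) == 42  # normal operation
ks.pull()                                   # human steps in
try:
    ks.guard(lambda x: x * 2, 21)
    halted = False
except RuntimeError:
    halted = True
assert halted
```

Using a `threading.Event` keeps the override usable from a monitoring thread while agent work runs elsewhere.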

Conclusion: The Question Isn’t If Machines Are Coming—It’s Whether We’re Ready to Lead Them

As Naman concluded: “AI is the weapon, but the human is still the architect.”

The panel consensus was unambiguous: by 2030, organizations that survive and thrive will be those that master the partnership between human judgment and AI capability. They will commoditize AI while preserving irreplaceable human override authority. They will map decision rights clearly, create governance frameworks that enable rather than police, and build infrastructure where humans remain supervisors, not spectators.

The arms race is real. The trillion-dollar cyber attack is coming before 2030. But the defender’s advantage exists for those mature enough to implement zero-trust frameworks, short-lived credentials, behavioral drift detection, explainable AI, and—most critically—the wisdom to know when humans must step in.

As the session closed, the roadmap was clear: The question isn’t whether the machines are coming. It’s whether you are ready to lead them.

This Open Innovator Knowledge Session provided a survival manual for the AI arms race. Huge appreciation to the expert panel—Clen C Richard, Rudy Shoushany, Ella Türümina, and Fadi Adam—for their candid insights on defending against autonomous threats while scaling AI safely and strategically.

Categories
Events

The Age of AI Agents: What Leaders Need to Know for 2026 & Beyond


Open Innovator Knowledge Session | January 19, 2026

Open Innovator organized a groundbreaking knowledge session on “The Age of AI Agents: What Leaders Need to Know for 2026 & Beyond” on January 19, 2026, marking the first major discussion of the new year on how leadership must evolve as we transition from the chatbot era to the agentic AI era.

This pivotal session brought together global experts to examine a fundamental shift: moving from AI that suggests to AI that executes, from copilots to an autonomous digital workforce. The panel explored critical questions around trust, accountability, ethics, and the strategic decisions leaders must make as AI agents become capable of acting independently while humans sleep, transforming not just how work gets done, but who—or what—does it.

Expert Panel

The session featured four distinguished experts bringing diverse perspectives from policy, healthcare technology, enterprise transformation, and AI development:

Beatriz Zambrano Serrano – Expert at the intersection of MedTech and virtual reality, ensuring AI agents work in high-stakes healthcare environments where the margin for error is zero, with deep expertise in VR-based medical training simulations.

Hanane Boujemi – Tech policy expert and “guardian of the guardrails,” navigating the policy landscape to keep AI innovation ethical and legal, with nearly two decades of experience working at the highest levels of both big tech and government.

Ahmed Elrayes – Enterprise transformation veteran and “organizational architect,” serving as an advisor on digital transformation who bridges the gap between high-tech AI agents and high-impact human teams, particularly in government and Saudi Arabian markets.

Puneet Agarwal – Founder of AI LifeBOT, turning agentic theory into digital workforce reality with over 100 AI agents deployed across healthcare, manufacturing, retail, and other sectors globally.

The discussion was expertly moderated by Naman Kothari from NASSCOM, who framed the conversation around a provocative premise: “AI won’t replace you, but a leader who manages AI agents will replace the leader who still thinks AI is just a fancy Google search.”

From Copilot to Chief of Staff: Understanding the Shift

Naman opened the session with a powerful distinction that set the tone for the entire discussion. In 2024, if you asked a chatbot to help you get to London for a Tuesday meeting, it would act as a copilot—scanning the web, presenting flight options, hotel prices, and weather forecasts, perhaps even drafting an email. But then the work stopped. You still had to book the flight, coordinate the Uber, and manage the calendar.

In 2026, agentic AI changes everything. You tell the agent “I need to be in London for that Tuesday meeting. Make it happen.” The agent doesn’t give you a list—it negotiates for you, identifies the best pricing, transacts on your behalf, and syncs your calendar. As Naman put it: “Gen AI is your consultant. The AI agent is your chief of staff.”

First Responsibilities: What Would You Trust to an AI Executive?

The panel tackled a provocative scenario: if an AI agent joined your leadership team tomorrow as a decision-maker, what would you trust it with first, and what would you never give up?

Healthcare: Data Analysis, Not Value Judgments

Beatriz brought the critical perspective from high-stakes medical environments. She would immediately hand her AI agent all the accumulated training simulation data—information on how medical personnel performed, where mistakes occurred, what was effective and intuitive. “There are many things that we as humans miss,” she explained, noting the difficulty of processing vast amounts of training data to improve simulators and make practice as real as possible.

However, Beatriz drew a firm line: she would never let AI design the actual training scenarios. “That’s really an ethical call. It’s a value-based judgment. You need to understand why you’re doing the case, all the demographical information about the patient. For that, you need real physicians and real experts.”

Enterprise: Operational Tasks, Not Strategic Decisions

Ahmed identified a major opportunity in the operational realm. He observed that in his work across organizations, particularly in Saudi Arabia, people are drowning—spending 70-80% of their time on repetitive operational tasks like pulling data from multiple sources, issuing reports, managing IT service requests, and writing feedback comments.

“The first thing I would give them is operational tasks that are repeated with clear decisions,” Ahmed stated. “I don’t want to replace my team yet, but I want to free them for more strategic work, more creative work, more work involving ethical values.”

What would he never hand over? “Any responsibility that requires strategic decisions dealing with values and customers, something that has accountability with it. Anything that involves culture, values, or human perspective—I wouldn’t give to an AI agent yet.”

Policy: Building Intelligence and Empathy First

Hanane offered a fascinating perspective from the policy world. Before deploying AI agents, she would focus on having them develop “exceptional communication skills”—not personality traits, but values essential to policy-making. She emphasized that agents need high levels of intelligence, empathy, and the ability to navigate complexity and ambiguity.

“We need to look at the foundations of how to make technology work with the help of policy—not to hinder it, but to help it benefit either the business model, service delivery, or scientific research,” Hanane explained. She stressed that data is agnostic until we can make sense of it, and agents need to be intelligent enough not to replace humans but to guide, coach, and anchor them.

What must remain human? Strategic decisions that require understanding your specific situation and context. “You need to be able to make the right call for your own situation and not apply a blanket policy.”

AI Development: Leading HR, With Human Loop

Puneet brought practical experience from building AI agents. With characteristic humor, he said that if an AI agent joined his team, “I will hand it over lead HR.” But jokes apart, he acknowledged that the current reality requires human oversight.

His more serious point focused on preparation: “How I empower the AI agent for the future is critical. Every decision, including hyper-personalization and decision-making, the AI agent will be able to do better than us as we move forward—because it will be more enriched with data, with clean data.” The current limitation? We don’t have clean data yet, which is why human-in-the-loop remains essential.

Bold Predictions: The State of AI Agents by End of 2026

The session moved to rapid-fire predictions about where we’ll be by the end of 2026.

Will Employees Manage More AI Agents Than Human Subordinates?

Ahmed’s answer: Not yet. Especially in government sectors and markets like Saudi Arabia, significant preparation work needs to happen first. “There are regulations around cloud services and other technologies. Organizations need to build themselves in terms of data structure, automation, and systems,” he explained.

Beyond technical limitations, Ahmed identified a critical cultural barrier: “A lot of leaders treat agentic AI as a collaborative tool, an added tool—not as a fundamental operational change in how you deliver value within your services.” His bold prediction: the majority of organizations are still behind, though some unicorns will emerge.

Will the Most Powerful AI Agents Live in Headsets and Glasses?

Beatriz’s answer: True, but with major caveats. She pointed to exciting startups in San Francisco combining humanoid robotics with AI, seeing this as the future trend—if politics and regulation allow it. “I see that trend, if regulation allows it, which is very difficult in my opinion. That would be the most powerful. However, I don’t know when we would allow it to enact its full power.”

Hanane reinforced this point, noting that wearables are “the ticket item”—the hardware that will make a huge difference for the AI hype, but “we need to get it right this time because we have the frameworks in place.” She cited significant challenges: fitting all the necessary chips into wearable form factors, operating beyond current infrastructure layers, and navigating regulatory pushback.

Who’s Liable When AI Agents Make Million-Dollar Mistakes?

When asked who gets fired if an AI agent makes a massive financial error—the CEO, CTO, or software vendor—Hanane’s answer was unequivocal: Everyone will be on the case of the CEO.

Drawing from her experience at Meta, she explained that when CEOs make wrong bets on major initiatives, they bear ultimate responsibility. But her deeper point was about getting the fundamentals right: “We need to do more work to get everybody on board. We need consensus building. Doing things on your own or coming with a top-down approach doesn’t work anymore. Testing regulatory readiness in some markets before deploying products is critical.”

Real-World Impact: AI Agents Already Transforming Industries

Puneet provided concrete examples of how AI LifeBOT is deploying agentic AI across sectors:

Healthcare:

  • Avatar-based appointment booking systems that talk to patients in real-time, helping them schedule appointments based on doctor preferences, clinic locations, and availability—all integrated with backend hospital management systems
  • Diabetic boot sensors that measure foot pressure and alert patients to prevent ulcers, shifting from reactive treatment to preventive medicine

Manufacturing:

  • Voice-enabled predictive maintenance systems allowing blue-collar workers to speak to machines in their native languages (Spanish, Chinese, Hindi, Arabic, English)
  • Workers can ask questions about machine maintenance in natural language, with data captured by IoT devices but engagement happening through voice

Retail & Consumer Products:

  • Customer service AI agents handling warranty services, repair requests, and support calls
  • Analysis of customer journeys across channels to improve support delivery

Cross-Functional Applications:

  • Over 100 AI agents created for various functions: legal, IT, sales, operations, finance, marketing
  • Sector-agnostic, plug-and-play solutions deployed across US, India, Africa, and Southeast Asia

Critically, Puneet emphasized built-in safeguards: “We understand these kinds of mistakes can happen due to hallucination. While building agents for enterprises, we are putting a lot of checks and balances. We have agents doing actions, and we also have anti-agents doing audit and performance management of other agents.”
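The agent-plus-anti-agent pattern Puneet describes can be sketched as an audit gate between an acting agent and the outside world. Both “agents” below are hypothetical stand-ins, not AI LifeBOT’s implementation:

```python
# Hypothetical sketch of the "agents plus anti-agents" pattern: one agent
# acts, a second agent audits its output before it is accepted, so
# hallucination-style failures escalate to a human instead of shipping.
def action_agent(question: str) -> str:
    # Stand-in for an LLM-backed worker agent.
    answers = {"warranty period?": "24 months per the 2024 policy"}
    return answers.get(question, "I am not sure")

def anti_agent_audit(question: str, answer: str) -> bool:
    # Stand-in audit: reject empty or evasive answers.
    return bool(answer) and "not sure" not in answer

def run_with_audit(question: str):
    answer = action_agent(question)
    if not anti_agent_audit(question, answer):
        return None  # escalate to a human instead of shipping the answer
    return answer

assert run_with_audit("warranty period?") == "24 months per the 2024 policy"
assert run_with_audit("meaning of life?") is None  # audit blocked it
```

In a real deployment the auditor would itself be a model with its own metrics, which is why Puneet frames it as performance management of other agents.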

Critical Warnings: The Gaps We’re Missing

The Clean Data Problem

Puneet was direct about current limitations: “One important reason for immaturity is the data we have right now. It will take more time to reach maturity where we don’t require human-in-the-loop. Right now, for critical decisions, we are putting human-in-the-loop.”

The Wrong Goal: Efficiency Over Innovation

Ahmed issued a powerful warning about what leaders will regret: “Leaders will think ‘I wish I didn’t run after ROI or cost reduction or efficiency from the start.’ They were running after the hype, saying ‘I have agentic AI’ without understanding what AI is, what agentic AI is, what LLM is.”

He emphasized the need for capacity building: “Leaders should spend more money on understanding the technology, what it’s capable of doing, how to deploy it correctly, and change management—how to treat the technology within their organization.”

The danger? “It’s not like implementing a new ERP where you can put it back to manual. The damage of going after AI and creating more issues is very hard to recover from. You need a roadmap, an ambition roadmap for managing change, educating people, and having governance in place.”

The Equity Gap

Beatriz raised a profound concern about global inequality: “The world is not equitable at the moment. We have a lot of disparity. I would love for everybody to be at the same level before all these advancements happen, because if not, some are inevitably going to be left behind.”

She called for foundational work first: “I would really like for all governments, people, and leaders to build the infrastructure—to be connected to the Internet, to have basic digital literacy and digital skills. And then when we have that base, we can advance.”

Hanane reinforced this, noting that “a few billion people are not yet connected to the Internet. We have a big chunk of data that we aim to process which is not yet available.”

Looking to 2030: What Will Future Leaders Say We Got Wrong?

In the session’s most thought-provoking segment, panelists imagined what leaders in 2030 or 2035 will say about the decisions being made today.

Hanane: “We Worried About the Wrong Things”

“I would definitely think of AI as not as smart as we all think,” Hanane projected. “Ten years down the line, as a policy maker—hopefully by then I’ll become a minister—I’ll be thinking these AI agents are not as smart as us. We have to be on top of the technology as humans because we can make sense of communication and the implicit much better than any agent we train ourselves.”

Her vision: AI should be “more of a tool in systematic decisions that cuts time and energy and helps optimize processes, especially when running large projects—whether in big companies or at the level of government.”

But she warned: “The machine ultimately will never outsmart the human. We need to mobilize the machine to follow instructions, have checks and balances in situ, make sure foundations are there for infrastructure, deployment, and data protection frameworks.”

Ahmed: “We Chased Efficiency Instead of Transformation”

Ahmed’s regret prediction was pointed: “Leaders will think ‘I wish I didn’t run after ROI or cost reduction from the start, running after the hype.’ A lot of organizations lack understanding of what’s AI, what’s agentic AI, what’s LLM.”

His prescription: “Probably would have spent more money in capacity building, understanding the technology, change management, and how to treat the technology within the organization. Many organizations treat AI as an add-on technology which is not—it has a profound impact on organization structure, decision-making, hierarchy, and workforce.”

Beatriz: “We Learned Humility From Our Blind Spots”

Beatriz offered the most optimistic perspective: “I’m very positive about the future. Agentic AI has actually made me very humble because I’ve seen what I have missed personally, what blind spots I have. Technology shows us what we’re lacking, and if we are humble enough to really analyze their judgment versus what we would have done, we can really learn and advance as a society.”

Her hope: “First really help everybody to be at the same level. Build infrastructure, get connected to the Internet, have basic digital literacy and skills. Then when we have that base, we can advance.”

Key Takeaways for Leaders

1. From Automation to Autonomy

As Puneet emphasized, “Agent AI is a mind shift. We are moving from automation to autonomy. This is not going to be stopped—this is the future. But we have to understand the consequences.”

2. Don’t Pave the Cow Path—Build New Roads

Ahmed’s warning resonates: Don’t just seek efficiency gains. Use AI agents to reimagine entirely new business models and ways of delivering value that were previously impossible.

3. Human-in-the-Loop Remains Essential

Across all perspectives, the consensus was clear: for critical decisions involving values, culture, strategy, and accountability, human judgment remains non-negotiable—at least for now.

4. Foundation First, Innovation Second

Hanane’s policy perspective emphasized getting the basics right: infrastructure, data protection frameworks, digital literacy, and regulatory readiness must precede widespread deployment.

5. Technology Only Works When It Works for Everyone

Beatriz’s closing remark captured the ethical imperative: “There’s a lot of power in agentic AI, but honestly, it’s only worth it if it makes us better and if it makes humanity better. So let’s all work towards that.”

Conclusion: The Leadership Evolution Has Begun

The session made clear that 2026 marks a pivotal moment. As Naman framed it at the close: “The age of agents isn’t just a tech upgrade—it’s a leadership evolution.”

Future leaders won’t be remembered for the agents they deployed, but for the culture of innovation they built to manage them. The challenge ahead isn’t technological—it’s human. It’s about building capacity, managing change, establishing governance, ensuring equity, and maintaining the ethical compass that only humans can provide.

The digital workforce is here. The question is: are leaders ready to orchestrate it?

This Open Innovator Knowledge Session featured expert insights on navigating the agentic AI revolution. A huge shoutout to the brilliant speakers—Beatriz Zambrano Serrano, Hanane Boujemi, Ahmed Elrayes, and Puneet Agarwal—for bringing clarity, candour, and perspective to the discussion. Special thanks to Puneet Agarwal, founder of AI LifeBOT, for showing what innovation with intent truly looks like.

Categories
Events

The New Face of Leadership: Redefining Thinking in the Age of AI


Open Innovator organized a groundbreaking knowledge session on “The New Face of Leadership: Redefining Thinking in the Age of AI” on December 11, 2025, bringing together three distinguished women leaders from across the globe to address a critical challenge facing organizations today.

As AI rapidly reshapes how teams think, how organizations move, and how leaders must lead, the session explored an uncomfortable truth: the leadership mindsets that drove success in the past decade cannot sustain us through the coming years.

Hosted in collaboration with Net4Tech, a global ecosystem advancing women’s careers in technology, the 60-minute panel discussion moved beyond tools and algorithms to examine the deeper evolution of leadership required when machines think alongside humans, touching on essential themes of empathy, ethics, psychological safety, and the critical thinking skills leaders need to stay trusted, relevant, and effective in 2026 and beyond.

Expert Panel

The session featured four distinguished women leaders in technology and innovation (a moderator and three panelists) as part of the Open Innovator Knowledge Sessions:

  • Begonia Vazquez Merayo (Moderator) – Founder of Net4Tech, a global ecosystem advancing women’s careers in technology, and leadership coach advocating for equality in tech.
  • Adriana Carmona Beltran – CEO and Founder of Tedix, global entrepreneur with experience building innovative tech startups across continents.
  • Deborah Hüller – Partner at IBM Consulting, expert in analytics and AI since 2014, advising federal agencies on digital transformation and public sector modernization.
  • Dr. Kamila Klug – Director of Business Development at Altair, Advisory Board Member.

Key Insights: Leadership Capabilities for the AI Era

The Curiosity Imperative

Adriana Carmona opened the discussion by identifying curiosity as the most underestimated leadership skill. “We cannot lead people in a future that we are afraid of,” she emphasized, advocating for a discovery mindset over fear when approaching AI. She positioned AI not as a replacement but as a tool to augment human capabilities, stressing that leaders must inspire curiosity in their teams to explore new possibilities.

Critical Thinking as a Compass

Dr. Kamila Klug highlighted the shift from traditional leadership to navigating changing terrain with values as a compass. She emphasized that in the AI era, critical thinking is essential for challenging assumptions and choosing the right path from multiple AI-generated options. Leaders must question not just AI outputs but their own biases to avoid creating echo chambers.

Psychological Safety in Fast-Changing Times

Deborah Hüller introduced psychological safety as a crucial leadership focus, noting that “AI accelerates change and change only works when people feel safe to learn.” She stressed that as teams face constant unlearning and relearning, creating environments where people can fail forward becomes essential rather than optional.

Cultural Philosophy and Human-Centric Innovation

Adriana shared insights from building companies across continents, emphasizing that leadership and innovation are inherently cultural. Her leadership philosophy combines emotional intelligence, empathy, and curiosity to understand diverse cultures and build meaningful connections. “AI is not here to replace us. AI is here to augment us,” she stated, positioning human relationship-building as the key differentiator in an AI-enhanced world.

Public Sector Transformation Challenges

Deborah addressed the complex challenge of transforming government institutions, which operate under zero-error tolerance and strict public fund management. She identified a critical tension: public servants are desperate to experiment with AI and fail forward, but the system doesn’t permit it. “We need to change the culture AND the system,” she emphasized, calling for systemic reforms that allow responsible experimentation while maintaining public trust.

Leading Through Continuous Transformation

Dr. Kamila Klug drew from her experience across countries and industries to advocate for coherent adaptability—maintaining core values while navigating constant change. She emphasized asking the right questions rather than simply asking many questions, and using AI as a “thinking sparring partner” that challenges rather than simply provides solutions.

Critical Warnings: AI Bias and Inclusion

The panel raised crucial concerns about AI perpetuating biases. Adriana offered a compelling example: AI systems trained predominantly on English-language, US-centric data might recommend California for agricultural projects while overlooking opportunities in Tanzania or Mozambique simply because data on those regions is scarce. “If we are not aware of that, we are actually leaving behind a big part of society,” she warned.

Dr. Kamila Klug added a linguistic dimension, explaining how language itself shapes thought: speakers tend to describe a bridge as “strong” in languages where the noun is grammatically masculine, and as “beautiful” where it is feminine. These biases embed themselves in AI training data.

Deborah emphasized the importance of inclusive automation design, noting that much AI will operate in the background without human oversight. “If we are not having inclusion in mind when building these systems, it will end in a non-inclusive world,” she cautioned.

Balancing Agility and Structure

Responding to audience questions about maintaining agility without chaos, the panel offered practical guidance. Adriana advocated for defining clear “North Stars”—specific goals that guide decision-making amid constant change. Deborah added that even agile environments need agreed-upon structures, with transparent communication when those structures evolve.

The Path Forward

The session concluded with a call to action from Begonia: “We are democratizing technology. We are opening doors for everyone to become a leader in AI.” She urged participants to avoid creating a new “AI gap” that would disproportionately affect women, encouraging everyone to be conscious creators who challenge biases and shape the future deliberately.

The consensus: AI presents unprecedented opportunities, but leaders must approach it with curiosity, critical thinking, psychological safety, and unwavering commitment to inclusion. As Adriana summarized, AI should be treated as a “junior collaborator” that requires training, guardrails, and guidance—not as a perfect oracle.


This Open Innovator Knowledge Session was part of “The New Face of Leadership” movement in collaboration with Net4Tech. Open Innovator specializes in digital transformation and innovation strategies, co-creating solutions where bold ideas turn into action. Write to us at open-innovator@quotients.com