From Data Privacy to Data Trust: The Evolution of Data Governance

Data Trust Quotient (DTQ) organized a critical knowledge session on February 20, 2026, addressing the fundamental shift from data privacy to data trust as AI systems scale across industries. The session explored a new category of risk: not just data theft, but quiet data manipulation that can make even the smartest AI make dangerously wrong decisions.

Expert Panel

The session convened four practitioners from highly regulated industries where data integrity is mission-critical:

Melwyn Rebeiro – CISO at Julius Baer, bringing extensive experience in security, risk, and compliance from ultra-regulated financial services environments, wearing both the Chief Information Security Officer and Data Protection Officer hats.

Rohit Ponnapalli – Internal CISO at Cloud4C Services, specializing in cloud security, enterprise protection, and cybersecurity for government smart city projects where real-time data integrity directly influences public infrastructure operations.

Ashwani Giri – Head of Data Standards and Governance at Zurich, working with enterprise privacy frameworks and regulators.

Mukul Agarwal – Head of IT with deep experience in IT strategy, systems, and digital transformation in the banking and financial services sector, bringing the skepticism and traceability mindset essential to financial industry operations.

Moderated by Betania Allo, international technology lawyer and AI policy expert based in Riyadh, working at the intersection of AI governance, cybersecurity, and cross-border regulatory strategy. Hosted by Data Trust (DTQ), a global platform bringing professionals together to share practices, address challenges, and co-create solutions for building stronger trust across industries.

The Shift: From Confidentiality to Verifiable Integrity

Regulators Are Changing Their Expectations

Ashwani opened by confirming the shift is happening at ground level as AI adoption increases. Organizations are preparing security documentation, holding internal discussions, and trying to understand what changes are required. Confidentiality was the focus of the past and is now much more mature and clearly understood. The present focus: initiating discussions around veracity and verifiable data.

The Medical Prescription Analogy: Earlier, the idea was ensuring only the right people (patient and doctor) had access. Now the expectation is that nobody is altering the prescription in the background. With AI, the expectation is that data is not poisoned or drifting, that hallucinations and poisoning are prevented.

Regulators as Trust Enablers: Regulators enable trust in the social ecosystem. As AI adoption drives changes, they’re moving from simply asking access-related questions (IAM) to expecting cryptographic proof of truth, verifiable audit trails, immutable integrity checks, and mechanisms providing confidence that claimed data is actually true.

The Verification Challenge: Organizations claim to have their bases covered, but when regulators try to verify, many cannot demonstrate it. Except for the most mature organizations with proper budgets and resourcing, most face this challenge of trying to understand the changes before implementing them.

The Timeline: Similar to information security 15 years ago when organizations struggled with their own approaches, AI security faces similar challenges now. But this evolution will be much faster—5-10 years to reach maturity rather than decades.

AI Readiness Without Data Provenance Is Flying Without a Black Box

When asked if organizations can truly claim AI readiness without tracking who changed data and when, Ashwani was direct: AI readiness is definitely not there in many organizations. Provenance is absolutely essential.

The Right Thing, No Matter How Hard: Organizations should do the right thing regardless of difficulty. Provenance work is already happening in bits and pieces but not in structured format. Requirements include policies in place, dedicated teams (not stopgap arrangements), and full commitment—not pulling people just to support tasks.

The Stark Reality: AI readiness without rigorous data governance is like flying a commercial plane without a black box, without proof of provenance or source of truth. It will land nowhere.

Automation Requirements: Regulators expect automated readiness testing and red teaming (validation testing of processes) to ensure controls are designed properly and working without glitches. If automation is less than 80%, it’s a problem.
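As an illustration only (not something shown in the session), the following minimal sketch turns the idea of automated readiness checks and the 80% automation expectation into a runnable example. The control names and the inventory structure are hypothetical placeholders.

```python
# Illustrative readiness check over a hypothetical control inventory.
# The 80% automation threshold comes from the discussion above; everything else is made up.

CONTROLS = [
    {"name": "access-review", "automated": True, "last_test_passed": True},
    {"name": "integrity-hash-verification", "automated": True, "last_test_passed": True},
    {"name": "provenance-capture", "automated": False, "last_test_passed": True},
    {"name": "red-team-prompt-suite", "automated": True, "last_test_passed": False},
]

def readiness_report(controls, automation_threshold=0.80):
    automated = sum(1 for c in controls if c["automated"])
    coverage = automated / len(controls)
    failing = [c["name"] for c in controls if not c["last_test_passed"]]
    return {
        "automation_coverage": round(coverage, 2),
        "meets_threshold": coverage >= automation_threshold,
        "failing_controls": failing,
    }

print(readiness_report(CONTROLS))
# {'automation_coverage': 0.75, 'meets_threshold': False, 'failing_controls': ['red-team-prompt-suite']}
```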

The Non-Negotiable Future: Regulators are signaling this now but will become more aggressive. Provenance will be non-negotiable. Without it, enterprises are building highly efficient black boxes.

Industry Readiness: Varied Responses to the Challenge

BFSI Leads, Others Follow at Their Own Pace

Different sectors respond differently. Banking, Financial Services, Insurance (BFSI) and healthcare—highly critical sectors—are early adopters responding well. Other industries respond at their own pace, some lagging behind, but everyone understands the importance.

The Leadership Ladder: Understanding and awareness exist. Behaviors are being introduced. Once understanding, awareness, behaviors, and ownership align, leadership emerges. AI leadership is still far away, but early adopters (especially BFSI) are doing well and having internal discussions to create right synergies.

No Choice But to Comply: Organizations understand this requirement is coming. They have no choice but to comply eventually.

The Vault Problem: Securing Contents, Not Just Containers

Mukul brought the financial services perspective with a critical observation: Skepticism is the word in BFSI. The industry doesn’t trust anything at face value unless traceability exists.

What Security Has Done Wrong: Traditional IT security secured the vault—fortifying infrastructure, ensuring nothing comes in, checking what goes out, logging and mitigating. But they haven’t verified what’s inside the vault.

The Critical Gap: Did someone with the absolute right key enter the vault and modify contents? Could be malicious intent or oversight. This is where data corruption matters.

Real-World Financial Risk: What if someone changed the interest rate for a customer’s loan for a specified period, reducing their outgo, causing damage of X amount to the financial institution, then reset it later? The change happened, reverted, damage was done, nobody noticed. This problem area lacks fair mitigation.

Insider Risk: The Blind Spot in Mature Security

Rohit emphasized this isn’t just about regulatory requirements—it’s about trust. Organizations have controls in place, but are they using those controls to monitor behavior changes or data changes?

The Maturity Imbalance: Security has organized as a fortress to prevent intrusion. Organizations are mature enough to prevent hackers from getting in. But there are fewer controls to tackle insider risk management—where data changes, data integrity, data accuracy, and data theft issues originate.

The Spending Gap: Leaving BFSI aside, other industries don’t spend much on tools. Organizations should start looking at insider threat management and at building trust into day-to-day operations.

Zero Trust for Data: Beyond Access Control

Trust Nobody, Verify Everybody

Melwyn brought the perspective from Julius Baer’s highly regulated environment. Regulators are adopting zero trust—not trusting anybody, just verifying everybody. Whether insider or outsider, the boundary has completely changed.

The Regulatory Focus: Most regulators in India are focusing on having organizations adopt zero trust technology—trust nobody but always verify so legitimate users are the only ones accessing data.

The Evidence Requirement: If someone tries to tamper with data, at least you have logs or verifiable evidence that data has been tampered with and appropriate action can be taken.

From Access Zero Trust to Data Zero Trust

The zero trust mindset must extend directly to the data layer itself—continuously validating that information has not been altered.

The Shift Beyond Access: It’s not only about access control in zero trust, but also about the data itself. Always verify rather than trust the data. The source of data, integrity of data, and provenance of data must be verified in an irrefutable manner without tampering or malicious intent.

Why Data Is Everything: If there’s no data, there are no jobs for anyone in the room. Data is the critical aspect of decision-making and must be protected at all times.

The AI Attack Surface: Traditional cybersecurity techniques exist—encryption, hashing, salting. But with AI advent, various attacks are happening against data: injection, poisoning, and others.

The Survival Requirement: Focus must shift from zero trust access to zero trust data. Without it, organizations cannot make critical and crucial decisions and will not survive in a competitive, AI and ML-driven world.
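To make the "always verify the data" idea concrete, here is a minimal sketch using a keyed hash from the Python standard library. It is not the panel's method, just one simple way to make silent modification detectable; the record fields and key handling are hypothetical (a real deployment would manage keys in a KMS/HSM).

```python
import hmac, hashlib, json

SECRET_KEY = b"rotate-me"  # hypothetical key; in practice managed by a KMS/HSM

def seal(record: dict) -> str:
    """Compute a keyed digest over a canonical serialization of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(record: dict, stored_digest: str) -> bool:
    """Re-compute the digest and compare in constant time."""
    return hmac.compare_digest(seal(record), stored_digest)

loan = {"customer_id": "C-1001", "interest_rate": 7.25, "term_months": 240}
digest = seal(loan)

loan["interest_rate"] = 5.0          # silent manipulation
print(verify(loan, digest))          # False -> tampering is detectable
```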

Multi-Dimensional Accountability

Who Owns Risk When Data Is Quietly Manipulated?

In India, the trend shows most organizations still have CISOs taking care of data because they’re considered best positioned to understand both security and privacy requirements that the DPO job demands.

Different Layers of Ownership:

  • Data Owner: The reference point for data
  • CISO: Provides guardrails to guard data safety against malicious attacks
  • DPO: Concerned only with data privacy, ensuring it’s not impacted or hampered
  • Governance: Legal and compliance teams ensuring every control is covered

Shared Responsibility: Each member has their own job in the organizational chart and must do their part in protecting data. But ultimately, the board has overall responsibility and accountability to ensure whatever guardrails or safety measures allocated to data protection are in place and nothing is missing.

When Data Alteration Creates Public Safety Risks

Rohit brought critical perspective from smart city and government projects where personally identifiable information (PII) and sensitive personal data are paramount—not just for cybersecurity but for counterterrorism.

The Bio-Weapon Example: If data about blood group distribution leaked—showing a city has the highest number of O-positive blood groups—a bio-weapon could be created targeting only that blood group, causing mass casualties and impacting national reputation.

Real-Time Utility Monitoring: Smart cities don’t just hold privacy data; they monitor real-time use of public services by citizens. Traffic analysis, water management during seasonal changes, public Wi-Fi usage—all create critical data that, if tampered with, could cause chaos in city operations.

The Efficiency Question: Models exist to monitor data alteration and access, but are they efficient? Considering the scale of operations, monitoring capabilities, budget limitations, and whether they treat public safety with the same seriousness as corporate security—efficiency remains a question mark.

The Tool Gap: Industry-Specific Maturity

When it comes to infrastructure security or user security, good controls exist across industries with mature maintenance. But data access management is a question mark depending on industry.

BFSI Advantage: The Reserve Bank of India mandates database access management tools. They have controls because they have solutions. They can develop use cases, rules, and alerts for abnormalities, modifications, deletions, additions, direct database access.

The Budget Challenge: Outside BFSI, getting board approval for database access management tools requires a very strong use case or a customer escalation. Without these tools, organizations rely on raw database logs requiring manual review, which is cumbersome for humans trying to spot abnormalities and amounts to postmortem analysis.

Real-Time vs. Postmortem: Manual review might take six days to discover data modification. By then, damage is done. With DAM tools in place, organizations can get alerts and act in real-time with preventive and corrective controls.
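The sketch below illustrates the kind of use-case rules a DAM tool supports (alerts on abnormal modifications or deletions). It is not the API of any specific product; the audit-event fields and rules are hypothetical.

```python
from datetime import datetime

# Hypothetical audit events, as a DAM tool or database audit log might emit them.
events = [
    {"user": "svc_batch", "action": "UPDATE", "table": "loans",
     "column": "interest_rate", "ts": datetime(2026, 2, 20, 2, 14)},
    {"user": "jdoe", "action": "DELETE", "table": "loans",
     "column": None, "ts": datetime(2026, 2, 20, 11, 3)},
]

RULES = [
    # direct modification of a sensitive column
    lambda e: e["action"] == "UPDATE" and e["column"] == "interest_rate",
    # any delete on the loans table by a human (non-service) account
    lambda e: e["action"] == "DELETE" and e["table"] == "loans" and not e["user"].startswith("svc_"),
]

def alerts(events):
    for e in events:
        if any(rule(e) for rule in RULES):
            yield f"ALERT {e['ts'].isoformat()} {e['user']} {e['action']} {e['table']}"

for a in alerts(events):
    print(a)
```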

Industry-Specific Reality: Controls are there but depend on how important security, integrity, and trust are to the board—determining what tools can be secured for data integrity monitoring.

Traditional Security Models Are Insufficient

Rohit identified a critical trend: Traditional data access had a system and a user or user-developed application. Controls were simple. Now there’s a third element: AI—self-adaptive, self-learning, and capable of directly accessing data.

Going Back to the Drawing Board: Everyone is returning to the drawing board to define and design controls. The whole industry, from technical people to operations teams, is validating whether traditional security controls are sufficient to handle AI operations.

The Use Case Problem: Concerns arise because controls must change for every use case. One AI tool might have eight use cases, each requiring different controls, different monitoring, different security on who’s accessing, what output is given, what data is accessed, privilege levels, potential injection attacks, and command exploitation.

Output Modification Threat: It’s not just about data modification. What if output is modified? Hackers don’t need to get into databases to modify data if they can modify output directly. This concern is getting significant attention.

The Level Question: Organizations must determine at what level they’re discussing data integrity—making it a complex, layered challenge.

Key Questions Defining Data Trust

Is Data Trust Just Rebranding Privacy?

Ashwani’s answer: Data trust is the next level of data privacy. Privacy focused on keeping data safe. The question now: Is the data you’ve kept trustable? Is somebody altering or changing it? Is it the right data collected in the first place?

End-to-End Protection: Ensuring you’re collecting data that’s right and fit for purpose, protecting it with all possible controls until consumption, and having the right pipeline protecting from end to end with proper lineage.

Traceability Requirement: You should be able to identify where trust is broken. If somebody altered data, you must be able to trace it.
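One way to make "trace where trust is broken" tangible is a hash-chained lineage record, sketched below. This is an illustration under assumed field names, not the approach described by the panel: each lineage entry's hash covers the entry plus the previous hash, so any altered step is identifiable.

```python
import hashlib, json

def link(entry: dict, prev_hash: str) -> dict:
    """Append a lineage entry whose hash covers the entry plus the previous hash."""
    body = json.dumps(entry, sort_keys=True) + prev_hash
    return {**entry, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain) -> int:
    """Return the index of the first broken link, or -1 if the chain is intact."""
    prev = "GENESIS"
    for i, e in enumerate(chain):
        body = json.dumps({k: v for k, v in e.items() if k not in ("prev", "hash")},
                          sort_keys=True) + prev
        if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return i
        prev = e["hash"]
    return -1

chain, prev = [], "GENESIS"
for step in ({"step": "collected", "source": "crm"},
             {"step": "cleansed", "job": "dedupe-v2"},
             {"step": "consumed", "model": "credit-scoring"}):
    entry = link(step, prev)
    chain.append(entry)
    prev = entry["hash"]

chain[1]["job"] = "tampered"      # alter an intermediate step
print(verify_chain(chain))        # 1 -> trust broke at the cleansing step
```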

The Future Parameter: Data trust is the next step beyond traditional data privacy controls, and it will be paramount for successful organizations in the fully AI-driven era ahead.

The DPO Triad: As Rohit suggested to a DPO colleague—information security has three attributes (confidentiality, integrity, availability). For DPOs, it should be privacy, security, and trust defining overall governance.

Three Years Forward: Trusted vs. Just Compliant

Melwyn’s perspective: Trust is extremely important, going one level beyond compliance. The relationship between compliance and trust shifts over time.

Why Both Matter: Everyone wants to be compliant because penalties are high and heavy. Everyone wants to be trusted because without being a trusted brand or company, you’re out of business—competitors are already ahead.

The Reversal: Compliance is not driving trust. Trust is driving compliance. It’s a non-negotiable, hand-in-glove situation.

The Drinkable Water Example: Mukul provided a perfect analogy: Someone asks for water. Giving a glass of water is compliance. But was that water drinkable? That’s trust. Would you trust the person who gave drinkable water, or just take water from someone who was merely compliant?

No Shortcut to Trust: Ashwani emphasized trust cannot be bought with budget instantly. It takes time, requiring continuous good work to earn it. Trust is a real differentiator earned only by fixing things at ground level. There’s no shortcut to trust.

Compliance as Checkbox vs. Backbone

Rohit highlighted that compliance is a satisfaction factor for customers. When you want to prove you have good security controls, compliance comes into picture.

The Dangerous Trend: Compliance is becoming a checkbox, which should not be taken lightly. Compliance should be the backbone on which you build more security controls. Some organizations treat it as a checkbox saying they’re compliant, but effectiveness and efficiency remain questionable.

Priority Actions for the Next 24 Months

People, Process, Technology—In That Order

Ashwani’s Framework: Organizations must ensure the right standards, policies, procedures, and mandates are in place. Identify the right people for the work and agree on a RACI matrix (who is responsible, accountable, consulted, and informed) that defines roles clearly.

Ground the framework first; everything else is technology-related. Fixing the people part, the human factor, is always the most important. Once you fix the human factor, everything else comes with much more ease.

Mindset and Culture Change

Melwyn’s Priority: The mindset must change when discussing privacy, data security, and integrity. Culture has to be there. Without the right mindset, culture, ethos, and ethics to govern, even the best controls, equipment, or security will not work.

The right mindset is the key to success.

Access Monitoring and Traceability

Rohit’s Focus: Culture is a never-ending job of awareness sessions and phishing simulations, with 10-20% of users still violating policy despite the effort. But purely for trust, organizations already have enough controls to know who has access to systems.

Three Critical Questions: Focus on controls understanding who has access to systems or data, who is modifying data, and what is being modified. Answer these three questions and trust can be easily built.

Explainable AI with Human in the Loop

Mukul’s Guidance: Many organizations live in the hype of deploying AI and trusting their data with AI. There must be a human in the loop, and AI must be explainable.

Explainable AI with human in the loop is the keyword when trusting data with AI models. At least jobs are safe with this explanation—people are still needed to validate.

Conclusion: Trust Cannot Be Bought, Only Earned

The session revealed unanimous agreement: The future belongs to organizations with the most trusted data, not just the most data or the most advanced AI.

Trust is the cornerstone of AI-driven ecosystems. Provenance is non-negotiable. Zero trust must extend from access control to the data layer itself. Accountability is multi-dimensional across boards, executive leadership, technology teams, and legal compliance.

As India accelerates its AI ambitions (it was hosting the AI Summit at the time of this session), embedding verifiable integrity at scale becomes essential, not only for foundational institutional credibility across sectors but for defining long-term leadership.

Key principles emerged: Do the right thing no matter how hard. Fix the human factor first. Treat compliance as backbone, not checkbox. Remember there’s no shortcut to trust—it must be earned through continuous good work fixing things at ground level.

The shift from data privacy to data trust represents the next evolution in data governance—moving from protecting data from unauthorized access to ensuring data remains true, accurate, and verifiable throughout its lifecycle in AI-driven systems.


This Data Trust Knowledge Session provided essential frameworks for organizations navigating the evolution from data privacy to data trust. Expert panel: Melwyn Rebeiro (Julius Baer), Rohit Ponnapalli (Cloud4C Services), Ashwani Giri (Zurich), and Mukul Agarwal (BFSI sector). Moderated by Betania Allo.

The AI Trust Fall: Building Confidence in an Era of Hallucination

Data Trust Knowledge Session | February 9, 2026

Open Innovator organized a critical knowledge session on AI trust as systems transition from experimental tools to enterprise infrastructure. With tech giants leading trillion-dollar-plus investments in AI, the focus has shifted from model performance to governance, real-world decision-making, and managing a new category of risk: internal intelligence that can hallucinate facts, bypass traditional logic, and sound completely convincing. The session explored how to design systems, governance, and human oversight so that trust is earned, verified, and continuously managed across cybersecurity, telecom infrastructure, healthcare, and enterprise platforms.

Expert Panel

Vijay Banda – Chief Strategy Officer pioneering cognitive security, where monitors must monitor other monitors and validation layers become essential for AI-generated outputs.

Rajat Singh – Executive Vice President bringing telecommunications and 5G expertise where microsecond precision is non-negotiable and errors cascade globally.

Rahul Venkat – Senior Staff Scientist in AI and healthcare, architecting safety nets that leverage AI intelligence without compromising clinical accuracy.

Varij Saurabh – VP and Director of Products for Enterprise Search, with 15-20 years building platforms where probabilistic systems must deliver reliable business foundations.

Moderated by Rudy Shoushany, AI governance expert and founder of BCCM Management and TxDoc. Hosted by Data Trust, a community focused on data privacy, protection, and responsible AI governance.

Cognitive Security: The New Paradigm

Vijay declared that traditional security from 2020 is dead. The era of cognitive security has arrived: it is like having a copilot monitor the pilot’s behavior, not just the plane’s systems. Security used to be deterministic with known anomalies; now it’s probabilistic and unpredictable. You can’t patch a hallucination the way you patch a server.

Critical Requirements:

  • Validation layers for all AI-generated content, cross-checked by another agent using golden sources of truth
  • Human oversight checking if outputs are garbage in/garbage out, or worse, confidential data leakage
  • Zero trust of data: never assume AI outputs are correct without verification
  • Training AI systems on correct parameters, acceptable outputs, and inherent biases

The shift: These aren’t insider threats anymore, but probabilistic scenarios where data from AI engines gets used by employees without proper validation.
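The following sketch illustrates the validation-layer idea from the list above: cross-checking AI-generated claims against a golden source before they reach employees. The extraction step and field names are hypothetical; a real system would use structured outputs or a second model as the checker.

```python
# Illustrative validation layer: compare claims in an AI-generated answer
# against a "golden source" of truth before release.

GOLDEN_SOURCE = {"q3_revenue_usd": 412_000_000, "employee_count": 5_200}

def validate(ai_answer: dict, golden: dict) -> list[str]:
    """Return a list of findings; an empty list means the answer passed."""
    findings = []
    for field, claimed in ai_answer.items():
        if field not in golden:
            findings.append(f"unverifiable claim: {field}")
        elif claimed != golden[field]:
            findings.append(f"mismatch on {field}: claimed {claimed}, source says {golden[field]}")
    return findings

answer = {"q3_revenue_usd": 450_000_000, "employee_count": 5_200}
print(validate(answer, GOLDEN_SOURCE))
# ['mismatch on q3_revenue_usd: claimed 450000000, source says 412000000']
```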

Telecom Precision: Layered Architecture for Zero Error

Rajat explained why the AI trust question has become urgent. Early social media was a separate dimension from real life. Now AI-generated content directly affects real lives: deepfakes, synthesized datasets submitted to governments, and critical infrastructure decisions.

The Telecom Solution: Upstream vs. Downstream

Systems are divided into two zones:

Upstream (Safe Zone): AI can freely find correlations, test hypotheses, and experiment without affecting live networks.

Downstream (Guarded Zone): Where changes affect physical networks. Only deterministic systems are allowed: rule engines, policy makers, closed-loop automation, and mandatory human-in-the-loop.

Core Principle: Observation ≠ Decision ≠ Action. This separation embedded in architecture creates the first step toward near-zero error.

Additional safeguards include digital twins, policy engines, and keeping cognitive systems separate from deterministic ones. The key insight: zero error means zero learning. Managed errors within boundaries drive innovation.

Why Telecom Networks Rarely Crash: Layered architecture with what seems like too many layers but is actually the right amount, preventing cascading failures.
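Below is a minimal sketch of the "Observation ≠ Decision ≠ Action" separation: an AI recommendation from the upstream zone passes through a deterministic policy gate and a mandatory human approval before anything touches the live network. The action names, threshold, and data shapes are illustrative assumptions, not the panelists' implementation.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float

# Upstream: the AI may observe and recommend freely (hypothetical output).
recommendation = Recommendation(action="reroute_traffic(cell=4471)", confidence=0.93)

# Downstream: only deterministic policy plus a human gate may touch the live network.
POLICY_ALLOWLIST = {"reroute_traffic", "scale_capacity"}

def downstream_gate(rec: Recommendation, human_approved: bool) -> bool:
    """Deterministic checks; the AI never executes directly."""
    verb = rec.action.split("(")[0]
    if verb not in POLICY_ALLOWLIST:
        return False                      # action class not permitted at all
    if rec.confidence < 0.9:
        return False                      # below policy threshold
    return human_approved                 # mandatory human-in-the-loop

print(downstream_gate(recommendation, human_approved=False))  # False: observed, not acted on
```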

Healthcare: Knowledge Graphs and Moving Goalposts

Rahul acknowledged hallucination exists but noted we’re not yet at a stage of extreme worry. The issue: as AI answers more questions correctly, doctors will eventually start trusting it blindly like they trust traditional software. That’s when problems will emerge.

Healthcare Is Different from Code

You can’t test AI solutions on your body to see if they work. The costs of errors are catastrophically higher than software bugs. Doctors haven’t started extensively using AI for patient care because they don’t have 100% trust—yet.

The Knowledge Graph Moat

The competitive advantage isn’t ChatGPT or the AI model itself—it’s the curated knowledge graph that companies and institutions build as their foundation for accurate answers.

Technical Safeguards:

  • Validation layers
  • LLM-as-judge (another LLM checking if the first is lying)
  • Multiple generation testing (hallucinations produce different explanations each time)
  • Self-consistency checks
  • Mechanistic interpretability (examining network layers)

The Continuous Challenge: The moment you publish a defense technique, AI finds a way to beat it. Like cybersecurity, this is a continuous process, not a one-time solution.
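The multiple-generation and self-consistency safeguards listed above can be sketched in a few lines: sample the model several times and treat low agreement as a hallucination signal to route for human review. The stand-in model and the 0.8 agreement threshold are hypothetical.

```python
from collections import Counter
import random

def self_consistency(generate, question: str, n: int = 5) -> tuple[str, float]:
    """Sample the model n times and measure agreement on the final answer.
    Low agreement is treated as a hallucination signal, not proof of correctness."""
    answers = [generate(question) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n

# Hypothetical stand-in for a real model call:
def fake_model(q):
    return random.choice(["12 mg", "12 mg", "12 mg", "120 mg"])

answer, agreement = self_consistency(fake_model, "What is the recommended dose?")
if agreement < 0.8:
    print("Low self-consistency: route to human review")
```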

AI Beyond Human Capabilities

Rahul challenged the assumption that all ground truth must come from humans. DeepMind can invent drugs at speeds impossible for humans. AI-guided ultrasounds performed by untrained midwives in rural areas can provide gestational age assessments as accurately as trained professionals, bringing healthcare to underserved communities.

The pragmatic question for clinical-grade AI: Do benefits outweigh risks? Evaluation must go beyond gross statistics to ensure systems work on every subgroup, especially the most marginalized communities.

Enterprise Platforms: Living with Probabilistic Systems

Varij’s philosophy after 15-20 years building AI systems: You have to learn to live with the weakness. Accept that AI is probabilistic, not deterministic. Once you accept this reality, you automatically start thinking about problems where AI can still outperform humans.

The Accuracy Argument

When customers complained about system accuracy, the response was simple: If humans are 80% accurate and the AI system is 95% accurate, you’re still better off with AI.

Look for Scale Opportunities

Choose use cases where scale matters. If you can do 10 cases daily and AI enables 1,000 cases daily with better accuracy, the business value is transformative.

Reframe Problems to Create New Value

Example: Competitors used ethnographers with clipboards spending a week analyzing 6 hours of video for $100,000 reports. The AI solution used thousands of cameras processing video in real-time, integrated with transaction systems, showing complete shopping funnels for physical stores—value impossible with previous systems.

The Product Manager’s Transformed Role

The traditional PM workflow (write user stories, define expectations, create acceptance criteria, hand off to testers) is breaking down.

The New Reality:

Model evaluations (evals) have moved from testers to product managers. PMs must now write 50-100 test cases as evaluations, knowing exactly what deserves 100% marks, before testing can begin.
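A minimal sketch of what such PM-owned evals can look like follows: each case states the input and what a full-marks answer must contain, and a tiny harness scores the system against them. The case contents and the stubbed answer function are hypothetical.

```python
# Illustrative eval file a PM might own; names and cases are made up.
EVALS = [
    {"query": "refund policy for damaged goods",
     "must_contain": ["30 days", "original receipt"]},
    {"query": "warranty length for model X200",
     "must_contain": ["2 years"]},
]

def run_evals(answer_fn, evals):
    results = []
    for case in evals:
        answer = answer_fn(case["query"]).lower()
        passed = all(term.lower() in answer for term in case["must_contain"])
        results.append({"query": case["query"], "passed": passed})
    score = sum(r["passed"] for r in results) / len(results)
    return score, results

# `answer_fn` would wrap the real system; a stub keeps the sketch runnable.
score, results = run_evals(lambda q: "Refunds within 30 days with original receipt.", EVALS)
print(score, results)
```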

Three Critical Pillars for Reliable Foundations:

1. Data Quality Pipelines – Monitor how data moves into systems, through embeddings, and retrieval processes. Without quality data in a timely manner, AI cannot provide reliable insights.

2. Prompt Engineering – Simply asking systems to use only verified links, not hallucinate, and depend on high-quality sources increases performance 10-15%. Grounding responses in provided data and requiring traceability are essential.

3. Observability and Traceability – If mistakes happen, you must trace where they started and how they reached endpoints. Companies are building LLM observation platforms that score outputs in real-time on completeness, accuracy, precision, and recall.

The shift from deterministic to probabilistic means defining what’s good enough for customers while balancing accuracy, timeliness, cost, and performance parameters.
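The sketch below illustrates that "good enough" framing, echoing the observability pillar above: score each output on a few simple dimensions and gate it against thresholds agreed with the customer. The dimensions, weights, and thresholds are hypothetical placeholders, not a real scoring platform.

```python
# Illustrative "good enough" gate for probabilistic outputs.
THRESHOLDS = {"completeness": 0.8, "groundedness": 0.9, "latency_s": 3.0}

def score_output(answer: str, required_fields: list[str], cited_sources: int, latency_s: float) -> dict:
    completeness = sum(f.lower() in answer.lower() for f in required_fields) / len(required_fields)
    groundedness = 1.0 if cited_sources > 0 else 0.0   # naive stand-in for a citation check
    return {"completeness": completeness, "groundedness": groundedness, "latency_s": latency_s}

def good_enough(scores: dict) -> bool:
    return (scores["completeness"] >= THRESHOLDS["completeness"]
            and scores["groundedness"] >= THRESHOLDS["groundedness"]
            and scores["latency_s"] <= THRESHOLDS["latency_s"])

s = score_output("Churn was 3.1% in Q3 [S1].", ["churn", "Q3"], cited_sources=1, latency_s=1.4)
print(s, good_enough(s))
```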

Non-Negotiable Guardrails

Single Source of Truth – Enterprises must maintain authentic sources of truth with verification mechanisms before AI-generated data reaches employees. Critical elements include verification layers, single source of truth, and data lineage tracking to differentiate artificiality from fact.

NIST AI RMF + ISO 42001 – Start with NIST AI Risk Management Framework to tactically map risks and identify which need prioritizing. Then implement governance using ISO 42001 as the compliance backbone.

Architecture First, Not Model First – Success depends on layered architectures with clear trust boundaries, not on having the smartest AI model.

Success Factors for the Next 3-5 Years

The next decade won’t be won by making AI perfectly truthful. Success belongs to organizations with better system engineers who understand failure, leaders who design trust boundaries, and teams who treat AI as a junior genius rather than an oracle.

What Telecom Deploys: Not intelligence, but responsibility. AI’s role is to amplify human judgment, not replace it. Understanding this prevents operational chaos and enables practical implementation.

AI Will Always Generalize: It will always overfit narratives. Everyone uses ChatGPT or similar tools for context before important sessions—this will continue. Success depends on knowing exactly where AI must not be trusted and making wrong answers as harmless as possible.

The AGI Question and Investment Reality

Panel perspectives on AGI varied: some felt it is already here in certain forms, some don’t care because AI is just a tool, and others see it as far from Nobel Prize-winning scientist-level intelligence even though it handles middling tasks well.

From an investment perspective, AGI timing matters critically for companies like OpenAI. With trillions in commitments to data centers and infrastructure, if AGI isn’t claimed by 2026-2027, a significant market correction is likely when demand fails to match massive supply buildout.

Key Takeaways

1. Cognitive Security Has Replaced Traditional Security – Validation layers, zero trust of AI data, and semantic telemetry are mandatory.

2. Separate Observation from Decision from Action – Layered architecture prevents errors from cascading into mission-critical systems.

3. Knowledge Graphs Are the Real Moat – In healthcare and critical domains, competitive advantage comes from curated knowledge, not the LLM.

4. Accept Probabilistic Reality – Design around AI being 95% accurate vs. humans at 80%, choosing use cases where AI’s scale advantages transform value.

5. PMs Now Own Evaluations – The testing function has moved to product managers who must define what’s good enough in a probabilistic world.

6. Human-in-the-Loop Is Non-Negotiable – Structured intervention at critical decision points, not just oversight.

7. Single Source of Truth – Authentic data sources with verification mechanisms before AI outputs reach employees.

8. Continuous Process, Not One-Time Fix – Like cybersecurity, AI trust requires ongoing vigilance as defenses and attacks evolve.

9. Responsibility Over Intelligence – Deploy systems designed for responsibility and amplifying human judgment, not autonomous decision-making.

10. Better System Engineers Win – Success belongs to those who understand where AI must not be trusted and design boundaries accordingly.

Conclusion

The session revealed a unified perspective: The question isn’t whether AI can be trusted absolutely, but how we architect systems where trust is earned through verification, maintained through continuous monitoring, and bounded by clear human authority.

From cognitive security frameworks to layered telecom architectures, from healthcare knowledge graphs to PM evaluation ownership, the message is consistent: Design for the reality that AI will make mistakes, then ensure those mistakes are caught before they cascade into catastrophic failures.

The AI trust fall isn’t about blindly falling backward hoping AI catches you. It’s about building safety nets first—validation layers, zero trust of data, single sources of truth, human-in-the-loop checkpoints, and organizational structures where responsibility always rests with humans who understand both the power and limitations of their AI tools.

Organizations that thrive won’t have the most advanced AI—they’ll have mastered responsible deployment, treating AI as the junior genius it is, not the oracle we might wish it to be.


This Data Trust Knowledge Session provided essential frameworks for building AI trust in mission-critical environments. Expert panel: Vijay Banda, Rajat Singh, Rahul Venkat, and Varij Saurabh. Moderated by Rudy Shoushany.

Ethics by Design: Global Leaders Convene to Address AI’s Moral Imperative

In a world where ChatGPT gained 100 million users in two months—an accomplishment that took the telephone 75 years—the importance of ethical technology has never been more pressing. Open Innovator on November 14th hosted a global panel on “Ethical AI: Ethics by Design,” bringing together experts from four continents for a 60-minute virtual conversation moderated by Naman Kothari of Nasscom. The panelists were Ahmed Al Tuqair from Riyadh, Mehdi Khammassi from Doha, Bilal Riyad from Qatar, Jakob Bares from WHO in Prague, and Apurv from the Bay Area. They discussed how ethics must grow with rapidly advancing AI systems and why shared accountability is now required for meaningful, safe technological advancement.

Ethics: Collective Responsibility in the AI Ecosystem

The discussion quickly established that ethics cannot be attributed to a single group; instead, founders, investors, designers, and policymakers together build a collective accountability architecture. Ahmed stressed that ethics by design must start at ideation, not as a late-stage audit. Raya Innovations evaluates early-stage ventures on both market fit and social impact, asking direct questions about bias, harm, and unintended consequences before any code is written. Mehdi developed this into three pillars: human-centricity, openness, and responsibility, stating that technology should remain a benefit to humans rather than a danger. Jakob added the algorithmic layer: values must become testable requirements and architectural patterns. With the WHO implementing multiple AI technologies, defining the human role in increasingly automated operations has become critical.

Structured Speed: Innovating Responsibly While Maintaining Momentum

Maintaining both speed and responsibility became a common topic. Ahmed proposed “structured speed,” in which quick, repeatable ethical assessments are integrated directly into agile development. These are not bureaucratic restrictions, but rather concise, practical prompts: what is the worst-case situation for misuse? Who might be excluded by the default options? Do partners adhere to key principles? The goal is to incorporate clear, non-negotiable principles into daily workflows rather than forming large committees. As a result, Ahmed claimed, ethics becomes a competitive advantage, allowing businesses to move rapidly and with purpose. Without such guidance, rapid innovation risks becoming disruptive noise. This narrative resonated with the panelists, emphasizing that prudent development can accelerate, rather than delay, long-term growth.

Cultural Contexts and Divergent Ethical Priorities

Mehdi demonstrated how ethics differs across cultural and economic environments. Individual privacy is a priority in Western Europe and North America, as evidenced by comprehensive consent procedures and rigorous regulatory frameworks. In contrast, many African and Asian regions prioritize collective stability and accessibility while operating under less stringent regulatory control. Emerging markets frequently focus ethical discussions on inclusion and opportunity, whereas industrialized economies prioritize risk minimization. Despite these differences, Mehdi pushed for universal ethical principles, arguing that all people, regardless of place, deserve equal protection. He admitted, however, that inconsistent regulations result in dramatically different realities. This cultural lens highlighted that while ethics is internationally relevant, its local expression, and the issues connected with it, remains intensely context-dependent.

Enterprise Lessons: The High Costs of Ethical Oversights

Bilal highlighted stark lessons from enterprise organizations, where ethical failings have multimillion-dollar consequences. At Microsoft, retrofitting ethics into existing products resulted in enormous disruptions that could have been prevented with early design assessments. He outlined enterprise “tenant frameworks,” in which each feature is subject to sign-offs across privacy, security, accessibility, localization, and geopolitical domains—often with 12 or more reviews. When crises arise, these systems maintain customer trust while also providing legal defenses. Bilal used Google Glass as a cautionary tale: billions were lost because privacy and consent concerns were disregarded. He also mentioned Workday’s legal challenges over alleged employment bias. While established organizations can weather such storms, startups rarely can, making early ethical guardrails a requirement of survival rather than preference.

Public Health AI: Designing for Integrity and Human Autonomy

Jakob provided a public-health viewpoint, highlighting how AI design decisions might affect millions. Following significant budget constraints, WHO’s most recent AI systems are aimed at enhancing internal procedures such as reporting and finance. In one donor-reporting tool, the team focused on “epistemic integrity,” ensuring outputs are factual while protecting employee autonomy. Jakob warned against Goodhart’s Law: overoptimizing a particular metric to the detriment of overall value. They put in place protections against surveillance overreach, automation bias, power imbalances, and data exploitation. Maintaining checks and balances across measures helps ensure that efficiency gains do not compromise quality or hurt employees. His findings showed that ethical deployment requires continual monitoring rather than one-time judgments, especially when AI takes over duties previously performed by specialists.

Aurva’s Approach: Security and Observability in the Agentic AI Era

The panel then moved on to practical solutions, with Apurv introducing Aurva, an AI-powered data security copilot inspired by Meta’s post-Cambridge Analytica revisions. Aurva enables enterprises to identify where data is stored, who has access to it, and how it is used, which is crucial in contexts where information is scattered across multiple systems and providers. Its technologies detect misuse, restrict privilege creep, and give users visibility into AI agents, models, and permissions. Apurv contrasted generative AI, which behaves like a maturing junior engineer, with agentic AI, which operates independently like a senior engineer making multi-step judgments. This autonomy necessitates supervision. Aurva serves 25 customers across different continents, with a strong focus on banking and healthcare, where AI-driven risks and regulatory needs are highest.

Actionable Next Steps and the Imperative for Ethical Mindsets

In conclusion, panelists offered concrete advice: begin with human-impact visibility, undertake early bias and harm evaluations, build feedback loops, train teams to develop a shared ethical understanding, and implement observability tools for AI. Jakob underlined the importance of monitoring, while others stressed that ethics must be integrated into everyday decisions rather than reduced to marketing clichés. The virtual event ended with a unifying message: ethical AI is no longer optional. As agentic AI becomes more independent, early, preemptive frameworks protect both consumers and companies’ long-term viability.

Reach out to us at open-innovator@quotients.com or drop us a line to delve into the transformative potential of groundbreaking technologies and participate in our events. We’d love to explore the possibilities with you.

Ethical AI: Constructing Fair and Transparent Systems for a Sustainable Future

Artificial Intelligence (AI) is reshaping the global landscape, with its influence extending into sectors such as healthcare, agritech, and sustainable living. To ensure AI operates in a manner that is fair, accountable, and transparent, the concept of Ethical AI has become increasingly important. Ethical AI is not merely about minimizing negative outcomes; it is about actively creating equitable environments, fostering sustainable development, and empowering communities.

The Pillars of Ethical AI

For AI to be both responsible and sustainable, it must be constructed upon five core ethical principles:

Accountability: Ensuring that AI systems are equipped with clear accountability mechanisms is crucial. This means that when an AI system makes a decision or influences an outcome, there must be a way to track and assess its impact. In the healthcare sector, where AI is increasingly utilized for diagnostic and treatment purposes, maintaining a structured governance framework that keeps medical professionals as the ultimate decision-makers is vital. This protects against AI overriding patient autonomy.

Transparency: Often, AI operates as a black box, making the reasoning behind its decisions obscure. Ethical AI demands transparency, which translates to algorithms that are auditable, interpretable, and explainable. By embracing open-source AI development and mandating companies to reveal the logic underpinning their algorithms, trust in AI-driven systems can be significantly bolstered.

Fairness & Bias Mitigation: AI models are frequently trained on historical data that may carry biases from societal disparities. It is essential to integrate fairness into AI from the outset to prevent discriminatory practices. This involves using fairness-focused training methods and ensuring data diversity, which can mitigate biases and promote equitable AI applications across various demographics.

Privacy & Security: The handling of personal data is a critical aspect of ethical AI. With AI systems interacting with vast amounts of sensitive information, adherence to data protection laws, such as the General Data Protection Regulation (GDPR) and India’s Digital Personal Data Protection Act, is paramount. A commitment to privacy and security helps prevent unauthorized data access and misuse, reinforcing the ethical integrity of AI systems.

Sustainability: AI must consider long-term environmental and societal consequences. This means prioritizing energy-efficient models and sustainable data centers to reduce the carbon footprint associated with AI training. Ethical AI practices should also emphasize the responsible use of AI to enhance climate resilience rather than contribute to environmental degradation.

Challenges in Ethical AI Implementation

Several obstacles stand in the way of achieving ethical AI:

Bias in Training Data

AI models learn from historical data, which often reflect societal prejudices. This can lead to the perpetuation and amplification of discrimination. For instance, an AI system used for loan approvals might inadvertently reject individuals from marginalized communities due to biases embedded in the training data.

The Explainability Conundrum

Advanced AI models like GPT-4 and deep neural networks are highly complex, making it difficult to comprehend their decision-making processes. This lack of explainability undermines accountability, especially in healthcare where AI-driven diagnostic tools must provide clear rationales for their suggestions.

Regulatory & Policy Lag

While the ethical discourse around AI is evolving, legal frameworks are struggling to keep up with technological advancements. The absence of a unified set of global AI ethics standards results in a patchwork of national regulations that can be inconsistent.

Economic & Social Disruptions

AI has the potential to transform industries, but without careful planning, it could exacerbate economic inequalities. Addressing the need for inclusive workforce transitions and equitable access to AI technologies is essential to prevent adverse societal impacts.

Divergent Global Ethical AI Approaches

Ethical AI policies vary widely among countries, leading to inconsistencies in governance. The contrast between Europe’s emphasis on strict data privacy, China’s focus on AI-driven economic growth, and India’s balance between innovation and ethical safeguards exemplifies the challenge of achieving a cohesive international approach.

Takeaway

Ethical AI represents not only a technical imperative but also a social obligation. By embracing ethical guidelines, we can ensure that AI contributes to fairness, accountability, and sustainability across industries. The future of AI is contingent upon ethical leadership that prioritizes human empowerment over mere efficiency optimization. Only through collective efforts can we harness the power of AI to create a more equitable and sustainable world.

Write to us at Open-Innovator@Quotients.com or Innovate@Quotients.com to get exclusive insights.

Industry Leaders Chart the Course for Responsible AI Implementation at OI Knowledge Session

In the “Responsible AI Knowledge Session,” experts from diverse fields emphasize data privacy, cultural context, and ethical practices as artificial intelligence increasingly shapes our daily decisions. The session reveals practical strategies for building trustworthy AI systems while navigating regulatory challenges and maintaining human oversight.

Executive Summary

The “Responsible AI Knowledge Session,” hosted by Open Innovator on April 17th, served as a platform for leading figures in the industry to address the vital necessity of ethically integrating artificial intelligence as it permeates various facets of our daily lives.

The session’s discourse revolved around the significance of linguistic diversity in AI models, establishing trust through ethical methodologies, the influence of regulations, and the imperatives of transparency, as well as the essence of cross-disciplinary collaboration for the effective adoption of AI.

Speakers underscored the importance of safeguarding data privacy, considering cultural contexts, and actively involving stakeholders throughout the AI development process, advocating for a methodical, iterative approach.

Key Speakers

The session featured insights from several AI industry experts:

  • Sarah Matthews, Addeco Group, discussing marketing applications
  • Rym Bachouche, CNTXT AI, addressing implementation strategies
  • Alexandra Feeley, Oxford University Press, focusing on localization and cultural contexts
  • Michael Charles Borrelli, Director at AI and Partners
  • Abilash Soundararajan, Founder of PrivaSapien
  • Moderated by Naman Kothari, NASSCOM CoE

Insights

Alexandra Feeley from Oxford University Press described the organization’s initiatives to promote linguistic and cultural diversity in AI by leveraging its substantial language resources. This involves digitizing under-resourced languages and enhancing the reliability of generative AI through authoritative data sources like dictionaries, thereby enabling AI models to reflect contemporary language usage more precisely.

Sarah Matthews, specializing in AI’s role in marketing, stressed the importance of maintaining transparency and incorporating human elements in customer interactions, alongside ethical data stewardship. She highlighted the need for marketers to communicate openly about AI usage while ensuring that AI-generated content adheres to brand values.

Alexandra Feeley delved into cultural sensitivity in AI localization, emphasizing that a simple translation approach is insufficient without an understanding of cultural subtleties. She accentuated the importance of using native languages in AI systems for precision and high-quality experiences, especially for languages such as Hindi.

Michael Charles Borrelli, from AI and Partners, introduced the concept of “Know Your AI” (KYI), drawing a parallel with the financial sector’s “Know Your Client” (KYC) practice. Borrelli posited that AI products require rigorous pre- and post-market scrutiny, akin to pharmaceutical oversight, to foster trust and ensure commercial viability.

Rym Bachouche underscored a common error where organizations hasten AI implementation without adequate data preparation and interdisciplinary alignment. The session’s panellists emphasized the foundational work of data cleansing and annotation, often neglected in favor of swift innovation.

Abilash Soundararajan, founder of PrivaSapien, presented a privacy-enhancing technology aimed at practical responsible AI implementation. His platform integrates privacy management, threat modeling, and AI inference technologies to assist organizations in quantifying and mitigating data risks while adhering to regulations like HIPAA and GDPR, thereby ensuring model safety and accountability.

Collaboration and Implementation

Collaboration was a recurring theme, with a call for transparency and cooperation among legal, cloud security, and data science teams to operationalize AI principles effectively. Responsible AI practices were identified as a means to bolster client trust, secure contracts, and allay AI adoption apprehensions. Successful collaboration hinges on valuing each team’s expertise, fostering open dialogue, and knowledge sharing.

Moving Forward

The event culminated with a strong assertion of the critical need to maintain control over our data to prevent over-reliance on algorithms that could jeopardize our civilization. The speakers advocated for preserving human critical thinking, educating future generations on technology risks, and committing to perpetual learning and curiosity. They suggested that a successful AI integration is an ongoing commitment that encompasses operational, ethical, regulatory, and societal dimensions rather than a checklist-based endeavor.

In summary, the session highlighted the profound implications AI has for humanity’s future and the imperative for responsible development and deployment practices. The experts called for an experimental and iterative approach to AI innovation, focusing on staff training and fostering data-driven cultures within organizations to ensure that AI initiatives remain both effective and ethically sound.

Reach out to us at open-innovator@quotients.com to join our upcoming sessions. We explore a wide range of technological advancements, the startups driving them, and their influence on the industry and related ecosystems.

Responsible AI: Principles, Practices, and Challenges

The emergence of artificial intelligence (AI) has been a catalyst for profound transformation across various sectors, reshaping the paradigms of work, innovation, and technology interaction. However, the swift progression of AI has also illuminated a critical set of ethical, legal, and societal challenges that underscore the urgency of embracing a responsible AI framework. This framework is predicated on the ethical creation, deployment, and management of AI systems that uphold societal values, minimize potential detriments, and maximize benefits.

Foundational Principles of Responsible AI

Responsible AI is anchored by several key principles aimed at ensuring fairness, transparency, accountability, and human oversight. Ethical considerations are paramount, serving as the guiding force behind the design and implementation of AI to prevent harmful consequences while fostering positive impacts. Transparency is a cornerstone, granting stakeholders the power to comprehend the decision-making mechanisms of AI systems. This is inextricably linked to fairness, which seeks to eradicate biases in data and algorithms to ensure equitable outcomes.

Accountability is a critical component, demanding clear lines of responsibility for AI decisions and actions. This is bolstered by the implementation of audit trails that can meticulously track and scrutinize AI system performance. Additionally, legal and regulatory compliance is imperative, necessitating adherence to existing standards like data protection laws and industry-specific regulations. Human oversight is irreplaceable, providing the governance structures and ethical reviews essential for maintaining control over AI technologies.

The Advantages of Responsible AI

Adopting responsible AI practices yields a multitude of benefits for organizations, industries, and society at large. Trust and enhanced reputation are significant by-products of a commitment to ethical AI, which appeals to stakeholders such as consumers, employees, and regulators. This trust is a valuable currency in an era increasingly dominated by AI, contributing to a stronger brand identity. Moreover, responsible AI acts as a bulwark against risks stemming from legal and regulatory non-compliance.

Beyond the corporate sphere, responsible AI has the potential to propel societal progress by prioritizing social welfare and minimizing negative repercussions. This is achieved by developing technologies that are aligned with societal advancement without compromising ethical integrity.

Barriers to Implementing Responsible AI

Despite its clear benefits, implementing responsible AI faces several challenges. The intricate nature of AI systems complicates transparency and explainability. Highly sophisticated models can obscure the decision-making process, making it difficult for stakeholders to fully comprehend their functioning.

Bias in training data also presents a persistent issue, as historical data may embody societal prejudices, thus resulting in skewed outcomes. Countering this requires both technical prowess and a dedication to diversity, including the use of comprehensive datasets.

The evolving legal and regulatory landscape introduces further complexities, as new AI-related laws and regulations demand continuous system adaptations. Additionally, AI security vulnerabilities, such as susceptibility to adversarial attacks, necessitate robust protective strategies.

Designing AI Systems with Responsible Practices in Mind

The creation of AI systems that adhere to responsible AI principles begins with a commitment to minimizing biases and prejudices. This is achieved through the utilization of inclusive datasets that accurately represent all demographics, the application of fairness metrics to assess equity, and the regular auditing of algorithms to identify and rectify biases.
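As a concrete illustration of the fairness metrics mentioned here (not prescribed by the article), the sketch below compares selection rates across a sensitive attribute and applies the common "four-fifths" ratio as a red flag. The data, group labels, and threshold are illustrative assumptions.

```python
import numpy as np

# Illustrative fairness audit: compare selection rates across a sensitive attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])     # model decisions
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def selection_rate(y, g, value):
    return y[g == value].mean()

rate_a = selection_rate(y_pred, group, "A")
rate_b = selection_rate(y_pred, group, "B")
disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}, ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:   # common "four-fifths" rule of thumb
    print("Potential adverse impact: audit features and retrain with fairness constraints")
```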

Data privacy is another essential design aspect. By integrating privacy considerations from the onset—through methods like encryption, anonymization, and federated learning—companies can safeguard sensitive information and foster trust among users. Transparency is bolstered by selecting interpretable models and clearly communicating AI processes and limitations to stakeholders.

Leveraging Tools and Governance for Responsible AI

The realization of responsible AI is facilitated by a range of tools and technologies. Explainability tools, such as SHAP and LIME, offer insight into AI decision-making. Meanwhile, privacy-preserving frameworks like TensorFlow Federated support secure data sharing for model training.
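A minimal sketch of using one of the named explainability tools follows, assuming the shap and scikit-learn packages are installed; the dataset is synthetic and the model choice is only for illustration.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # 4 synthetic features
y = X[:, 0] + 0.5 * X[:, 2]              # outcome driven mainly by features 0 and 2

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])   # shape (5, 4): per-feature contributions
print(np.round(shap_values, 3))              # features 0 and 2 dominate, as constructed
```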

Governance frameworks are pivotal in enforcing responsible AI practices. These frameworks define roles and responsibilities, institute accountability measures, and incorporate regular audits to evaluate AI system performance and ethical compliance.

The Future of Responsible AI

Responsible AI transcends a mere technical challenge to become a moral imperative that will significantly influence the trajectory of technology within society. By championing its principles, organizations can not only mitigate risks but also drive innovation that harmonizes with societal values. This journey is ongoing, requiring collaboration, vigilance, and a collective commitment to ethical advancement as AI technologies continue to evolve.

Reach out to us at open-innovator@quotients.com or drop us a line to delve into the transformative potential of groundbreaking technologies. We’d love to explore the possibilities with you

Unleashing AI’s Promise: Walking the Tightrope Between Bias and Inclusion

Artificial intelligence (AI) and machine learning have infiltrated almost every facet of contemporary life. Algorithms now underpin many of the decisions that affect our everyday lives, from the streaming entertainment we consume to the recruiting tools used by employers to hire personnel. In terms of equity and inclusiveness, the emergence of AI is a double-edged sword.


On one hand, there is a serious risk that AI systems would perpetuate and even magnify existing prejudices and unfair discrimination against minorities if not built appropriately. On the other hand, if AI is guided in an ethical, transparent, and inclusive manner, technology has the potential to help systematically diminish inequities.

The Risks of Biased AI


The primary issue is that AI algorithms are not inherently unbiased; they reflect the biases contained in the data used to train them, as well as the prejudices of the humans who create them. Numerous cases have shown that AI may be biased against women, ethnic minorities, and other groups.


One company’s recruitment software was shown to downgrade candidates from institutions with a higher percentage of female students. Criminal risk assessment systems have shown racial biases, proposing harsher punishments for Black offenders. Some face recognition systems have been criticised for higher error rates in detecting women and people with darker skin tones.

Debiasing AI for Inclusion


Fortunately, there is an increasing awareness and effort to create more ethical, fair, and inclusive AI systems. A major focus is on expanding diversity among AI engineers and product teams, as the IT sector is still dominated by white males whose viewpoints might contribute to blind spots.
Initiatives are being implemented to give digital skills training to underrepresented groups. Organizations are also bringing in more female role models, mentors, and inclusive team members to help prevent groupthink.


On the technical side, researchers are exploring statistical and algorithmic approaches to “debias” machine learning. One strategy is to carefully curate training data to ensure it is representative and to check for proxies of sensitive attributes such as gender and ethnicity.

Another is to apply algorithmic techniques during the modelling phase so that the chosen “fairness” definition does not produce discriminatory outcomes. Dedicated tools enable the auditing and mitigation of AI biases.
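The sketch below shows one simple way to look for such proxies: measure how strongly each candidate feature correlates with the sensitive attribute, even when that attribute is excluded from training. The data and the 0.5 correlation cut-off are illustrative assumptions only.

```python
import numpy as np

# Illustrative proxy check for sensitive attributes.
rng = np.random.default_rng(1)
sensitive = rng.integers(0, 2, size=500)                       # e.g. a protected group flag
postcode_score = sensitive * 0.9 + rng.normal(0, 0.3, 500)     # a proxy in disguise
hours_online = rng.normal(5, 2, 500)                           # unrelated feature

features = {"postcode_score": postcode_score, "hours_online": hours_online}

for name, values in features.items():
    corr = abs(np.corrcoef(values, sensitive)[0, 1])
    flag = "POSSIBLE PROXY" if corr > 0.5 else "ok"
    print(f"{name}: |corr|={corr:.2f} {flag}")
```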


Transparency around AI decision-making systems is also essential, particularly when utilised in areas such as criminal justice sentencing. The growing area of “algorithmic auditing” seeks to open up AI’s “black boxes” and ensure their fairness.

AI for Social Impact


In addition to debiasing approaches, AI provides significant opportunity to directly address disparities through creative applications. Digital accessibility tools are one example, with apps that employ computer vision to describe the environment for visually impaired individuals.


In general, artificial intelligence (AI) has “great potential to simplify uses in the digital world and thus narrow the digital divide.” Smart assistants, automated support systems, and personalised user interfaces can help marginalised groups get access to technology.


In the workplace, AI is used to analyse employee data and uncover gender and ethnicity pay inequalities that need to be addressed. Smart writing assistants can also check job descriptions for biased wording and recommend more inclusive phrases to support diversity hiring. Data-for-good volunteer organisations are also using AI and machine learning to create social impact initiatives that attempt to reduce societal disparities.


The Path Forward


Finally, AI represents a double-edged sword: it may aggravate social prejudices and discrimination against minorities, or it can be a strong force for making the world more egalitarian and welcoming. The route forward demands a multi-pronged strategy: implementing stringent procedures to debias training data and modelling methodologies; prioritising transparency and fairness in AI systems, particularly in high-stakes decision-making; and continuing research on AI-for-social-good applications that directly address inequality.

With the combined efforts of engineers, politicians, and society, we can realise AI’s enormous promise as an equalising force for good. However, attention will be required to ensure that these powerful technologies do not exacerbate inequities, but rather contribute to the creation of a more just and inclusive society.

To learn more about AI’s implications and the path to ethical, inclusive AI, contact us at open-innovator@quotients.com. Our team has extensive knowledge of AI bias reduction, algorithmic auditing, and leveraging AI as a force for social good.