Open Innovator Knowledge Session | February 2026
Open Innovator organized a critical knowledge session on ethical AI in academia, moving the conversation beyond sensationalized headlines about AI bans and cheating scandals to address how institutions can lead responsible AI adoption in practice.
As moderator Dr. Nikolina Ljepava opened: Headlines scream that AI use is bad for students, thousands are caught cheating, and research integrity is compromised—creating panic that academia is under AI attack. But the real question isn’t whether AI should exist in academic institutions (it’s already in classrooms, research labs, and admission screening), but how institutions can cultivate ethical scholarship rather than just catching violations. The session brought together academic leaders to explore how universities can design frameworks that protect integrity while embracing innovation, shifting from prohibition to responsible integration.
Expert Panel
The session convened three academic leaders implementing AI governance at different institutional levels:
Professor Alaa Garad – Pro Vice Chancellor and Professor of Strategic Learning and Business Excellence at Abertay University, joining from Scotland. Creator of the learning-driven organization model and leader in strategic quality management, bringing decades of experience in organizational learning and institutional transformation.
Dr. Sheily Verma Panwar – Academic Program Director and Dean at CUQ Ulster University in Doha, teaching master’s level artificial intelligence programs. Specializing in integrating ethics into core AI education modules including machine learning, data engineering, and AI infrastructure.
Dr. Mayar Alsabah – Lecturer at Heriot-Watt University Dubai College of Technology, with extensive experience mentoring students, startups, and student entrepreneurship in the digital economy, bringing insights on AI-driven innovation and emerging ethical blind spots.
Moderated by Dr. Nikolina Ljepava, Acting Dean of the College of Business Administration at the University of Khorfakkan, bringing deep understanding of academic leadership and institutional responsibilities in the AI era.
Key Points & Strategic Frameworks
The Necessary Evolution: From Prohibition to Conversation
- The 2022 Turning Point: The sudden rise of generative AI initially triggered defensive reactions: bans, rushed policies, and a focus on “catching” users.
- The Shift: Institutions must move toward “responsible integration.” AI is already in labs and classrooms; the goal is to define how it exists there rather than trying to erase it.
- A Culture of Awareness: Moving away from “guilty/not guilty” terminology toward a culture of transparent AI use and human oversight.
Non-Transferable Human Accountability
- AI as a Tool, Not an Authority: AI outputs are aids, not final decisions. Responsibility for research and grading must remain with human academics.
- The Traceability Requirement: Every academic outcome must be traceable back to a human “why.” Delegating judgment to systems risks “professional delusion” where no one is responsible for produced knowledge.
- Mandatory Disclosure: Policies should require explicit documentation of how AI was used in any given assignment or research paper.
The Multi-Tier Integration Model
To effectively embed AI ethics, institutions should address four distinct levels:
- Tier 1: Quality Review: Embedding AI standards into national and institutional quality assurance indicators.
- Tier 2: Institutional Policy: Creating user-friendly, accessible policies (avoiding 20-page legal documents) that are easy for students to find and understand.
- Tier 3: Curriculum Design: Making “Ethical AI Adoption” a formal learning objective in every program. This includes using a “Human-First” assignment strategy—where students maintain a version of their work before AI enhancement.
- Tier 4: Leadership: Moving AI strategy out of the IT department and into the hands of senior executive management (Provosts and Deans).
Ethics as a “Core Literacy”
- Against Standalone Modules: Ethics should not be a separate, theoretical “add-on.” It must be embedded directly into technical lessons (e.g., discussing data bias while teaching data science).
- Professional Instinct: The goal is to graduate students who instinctively ask “Is this model safe?” rather than just “Is it accurate?”
- Universal Requirement: AI ethics is no longer a specialized elective; it is a core literacy required for every discipline, from the arts to the sciences.
Identifying Ethical Blind Spots in Innovation
- Epistemic Overconfidence: AI is “persuasively wrong.” Students may mistake AI fluency for factual truth, especially in underserved markets where data is sparse.
- Strategic Convergence: If every student uses the same prompts and models, original thinking disappears, leading to a “homogenization” of ideas and average conclusions.
Practical Implementation & The “Digital Champion” Model
- Internal Customers: Universities should include students in governance conversations to understand the reality of AI use on the ground.
- AI Champions: Similar to the COVID-19 response, departments should appoint “AI Champions” to provide peer-to-peer mentoring and share best practices.
- Budgetary Commitment: Institutions must move past “lip service” and allocate real budgets for mandatory faculty and student training.
- Policy Alone Is Insufficient: Policy by itself creates a culture of superficial compliance, and people will always find ways to bypass bans.
- Literacy Creates Systemic Resilience: AI literacy gives individuals the intellectual immune system to recognize hallucinations, spot bias, and, most importantly, know when to apply their own judgment over machine output.
Conclusion: The Comprehensive Picture
Synthesizing the panel’s recommendations into a comprehensive framework:
1. Start from the Top: Leadership must understand what needs to be done and commit seriously, beyond lip service.
2. Policies That Live: Policies must not be oriented solely toward compliance; they must live in the curriculum and in everyday practice.
3. Integration Everywhere: AI ethics should appear in every AI learning module, and it should also be treated as a core literacy, spanning all disciplines and areas rather than only AI-related courses, because students use AI everywhere.
4. Meaningful and Efficient Integration: Institutions must find ways to integrate all of this without retreating from it, and without prohibition and policing. The approaches chosen should be useful and efficient while preserving the human touch: human creativity and analytical, critical thinking.
5. Avoid Mediocrity: Without proper integration, we risk producing average outputs and average thinking. The goal is maintaining excellence while leveraging AI’s capabilities.
The Mission Ahead: For everyone in academia, the new mission is finding ways to integrate AI that are useful and efficient without sacrificing what makes education valuable: human creativity, critical thinking, original thought, and ethical judgment.
The Reality: In one hour, the panel only scratched the surface of this topic. Much more remains to be said, and the subject will keep developing as technology advances and AI evolves. The conversation must continue as institutions, faculty, and students navigate this transformation together.
The shift required isn’t technological—it’s cultural, structural, and deeply human. Academic institutions face a choice: lead the AI integration thoughtfully and ethically, or risk becoming irrelevant as the traditional university model fundamentally transforms around them.
This Open Innovator Knowledge Session provided essential frameworks for embedding ethical AI in academic institutions. Expert panel: Professor Alaa Garad (Abertay University), Dr. Sheily Verma Panwar (CUQ Ulster University), and Dr. Mayar Alsabah (Heriot-Watt University Dubai). Moderated by Dr. Nikolina Ljepava (University of Khorfakkan).