Gemini Research G: Student Conference Meta-Analysis
Category: research
Date: 15 February 2026, 18:06 UTC
Original File: GEMINI_RESEARCH_G_STUDENT_CONFERENCES.md
Comprehensive analysis of 3 years of student-led research at Imperial College
Structural Evolution of Academic Integrity: A Meta-Analysis of Student-Led Research at Imperial College London (2023–2025)
Source: Gemini Deep Research Output (Prompt G)
Date: 15 February 2026
Input: Analysis of 3 years of Imperial College London student conference videos (2023-2025)
Executive Summary
The advent of large language models and generative artificial intelligence has catalyzed a paradigm shift in higher education, necessitating a fundamental re-evaluation of how academic integrity is defined, monitored, and maintained. At Imperial College London, the module “Researching Academic Integrity in an Artificial Intelligence Driven World” (COMP60024) has provided a unique longitudinal vantage point for observing this transition from the perspective of the students themselves.
By positioning undergraduates as researchers within an interdisciplinary framework, the college has moved beyond traditional top-down policy implementation to a model of co-inquiry that captures the nuanced lived experiences of the first generation of “AI natives.”
This report analyzes three years of student research presented at public conferences between 2023 and 2025, identifying critical themes that are often overlooked in faculty-led studies and proposing a refined pedagogical model for the future of academic ethics.
The 2023 Inception: Reactive Policy and Technological Benchmarking
In early 2023, the global academic community was primarily concerned with the immediate threat posed by the public release of ChatGPT. The research conducted by the inaugural cohort of the COMP60024 module reflected this reactive posture, focusing on the capability of generative tools to bypass assessments and the readiness of institutions to respond.
Institutional Governance and the AI Misconduct Readiness Rating
A foundational contribution to the field was provided by the research group comprising Stefanus, Chen, and Lily, who sought to quantify institutional preparedness through the development of the AI Misconduct Readiness (AIMR) rating system.
Methodology:
- Systematic qualitative analysis of policy documents from the top 50 UK universities
- Keyword-based analysis to extract AI-related information (a minimal sketch of such a scan follows the findings table below)
- Ratings assigned on a scale of 1–4 (1 = high preparedness, 4 = no guidance)
Key Findings:
- Average AIMR score: 2.83 (indicating lack of readiness)
- Only 20% of leading UK universities explicitly mentioned AI in student-facing policies
- 58% mentioned “third-party services” or “contract cheating” with vague definitions
Critical Insight: Ambiguity creates a “grey zone” of misconduct where students unintentionally breach integrity due to lack of clear permission tiers.
| University Type (2023 Sample) | Presence of AI-Specific Language | Avg AIMR Score | Primary Misconduct Definition Used |
|---|---|---|---|
| Russell Group | 28% | 2.65 | Third-party services |
| Non-Russell Group | 14% | 3.10 | Plagiarism/Collusion |
| STEM-focused | 40% | 2.10 | Unauthorized software use |
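The group's scoring instrument is not reproduced in the source, but a minimal sketch of a keyword-based policy scan of this kind might look as follows. The keyword tiers and thresholds here are illustrative assumptions, not the published AIMR rubric.

```python
# Illustrative keyword tiers; the published AIMR rubric is not reproduced in the source.
AI_SPECIFIC = ["artificial intelligence", "generative ai", "chatgpt", "large language model"]
INDIRECT = ["third-party service", "contract cheating", "unauthorised assistance"]

def aimr_score(policy_text: str) -> int:
    """Assign a rough 1-4 readiness score (1 = high preparedness, 4 = no guidance)."""
    text = policy_text.lower()
    specific_hits = sum(kw in text for kw in AI_SPECIFIC)
    indirect_hits = sum(kw in text for kw in INDIRECT)
    if specific_hits >= 2:   # explicit, repeated AI-specific guidance
        return 1
    if specific_hits == 1:   # AI mentioned, but coverage is thin
        return 2
    if indirect_hits >= 1:   # only vague "third-party" style language
        return 3
    return 4                 # no relevant guidance found

# Toy corpus standing in for the 50 scraped policy documents.
policies = {
    "University A": "Use of generative AI tools such as ChatGPT must be declared in all submissions.",
    "University B": "Submitting work produced by third-party services constitutes misconduct.",
}
scores = {name: aimr_score(text) for name, text in policies.items()}
print(scores, sum(scores.values()) / len(scores))
```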
Performance Benchmarks Across STEM and Humanities
Victoria, Regina, Liam, and Sami conducted empirical testing of ChatGPT’s ability to perform academic tasks.
Research Question: How effective is ChatGPT at answering and marking undergraduate exams across disciplines?
Evaluation Framework:
- Accuracy
- Precision
- Relevance
- Depth
- Logic
- Originality
Key Findings:
- Subject Gap Identified: AI performs well in language-based subjects (e.g. History) but struggles with complex multi-step reasoning (e.g. Mathematics, Chemistry)
- Figure/Diagram Interpretation: AI completely failed to interpret visual elements
- Novel Application: AI as a “pre-marking” assistant - unreliable for final grading but valuable for structural feedback
Insight: Shift from “cheating machine” to “sophisticated but fallible tutor”
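Applying the six-criterion rubric above is straightforward once markers have assigned per-criterion scores. The sketch below assumes an unweighted 1–5 scale per criterion and toy per-subject scores; both are assumptions rather than the group's documented scoring method.

```python
from statistics import mean

# The six criteria from the group's evaluation framework.
CRITERIA = ["accuracy", "precision", "relevance", "depth", "logic", "originality"]

def rubric_mean(scores: dict[str, int]) -> float:
    """Unweighted average of the per-criterion scores (1-5 scale assumed)."""
    return mean(scores[c] for c in CRITERIA)

# Toy scores for one ChatGPT answer per subject; the group's data came from marked exam scripts.
answers = {
    "History":     {"accuracy": 4, "precision": 4, "relevance": 5, "depth": 4, "logic": 4, "originality": 3},
    "Mathematics": {"accuracy": 2, "precision": 2, "relevance": 4, "depth": 2, "logic": 2, "originality": 2},
}
for subject, scores in answers.items():
    print(subject, round(rubric_mean(scores), 2))
```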
Socio-Technical Factors and Demographic Disparities
Girish and Adelina investigated how demographics influence understanding of plagiarism.
Key Findings:
- International students face challenges due to cultural/pedagogical “clash” between education systems
- Higher socioeconomic status correlates with higher propensity for cheating (high-stakes pressure, financial access to services)
- Critical gap: Lack of research on academic integrity for disabled and neurodivergent students
- Rigid policies fail to distinguish “unauthorized assistance” from legitimate assistive technologies
The 2024 Expansion: Market Dynamics and the Post-Pandemic Reality
By 2024, the research focus had matured to address the sophisticated ecosystem surrounding academic misconduct.
The Longitudinal Impact of the COVID-19 Pandemic
One group analyzed global trends in misconduct reporting during and after the pandemic:
- Cheating reports spiked by 140% during the shift to remote learning
- Disconnect identified: Faculty viewed proctoring as deterrent; students saw it as “privacy invasion” that increased anxiety
- Paradox: Proctoring incentivized finding “untraceable” methods (secondary devices, local AI models)
Mechanical Engineering and the Handwriting Loophole
Amia’s research highlighted a critical technological gap:
- Mechanical Engineering relies on handwritten calculations and diagrams
- No reliable way to detect AI assistance in handwritten submissions
- Recommendation: “Integrity-by-design” - sequential drafts, oral vivas to verify authenticity
The Predator-Prey Dynamics of Contract Cheating
Hamish, James, and Nurin analyzed 36 essay mills.
Key Finding: Predatory marketing strategy - services market “relief” not “success”
Student Motivations:
- Personal crises
- Extreme financial pressure
- Poor time management
Service Marketing: “24/7 deadline support”, “low-cost solutions”
Lavender’s research on ghostwriting in Natural Sciences:
- Services moved from Google Ads to student-only social spaces (Discord, WhatsApp, WeChat)
- Chinese international students targeted on WeChat
- Exploitation of fear of failure and language barriers
- Framed ghostwriting as “tutoring service”
| Marketing Tactic | Frequency on Cheating Sites (2024) | Primary Student Motivator Targeted |
|---|---|---|
| “Plagiarism-Free Guarantee” | 92% | Fear of detection/penalties |
| “24/7 Live Support” | 84% | Time pressure/Poor planning |
| “Affordable Pricing” | 76% | Financial constraint |
| “Expert Writers/Tutors” | 68% | Academic self-doubt |
AI and the Future of Intellectual Property
Kai, Marcus, and Irene investigated IP implications:
- Case studies: AI-designed hardware (NASA), AI-discovered antibiotics (MIT)
- Current legal barrier: Definition of “inventor” as human being
- Proposal: New IP categories for “AI-assisted human innovation”
- Students need to learn correct attribution rather than how to hide AI contributions
The 2025 Normalization: Analytics, Detection Skepticism, and Inter-Institutional Variance
Longitudinal Analysis of the AIMR Scale
Tony Loas, Emma, Adam, and Ray re-evaluated AIMR ratings (2023→2025).
Dramatic Improvement:
- 2023: 2.82 (lack of readiness)
- 2025: 1.3 (high readiness)
- 86% of universities now mention “artificial intelligence”
- But only 8% mention specific “chatbots”
New Problem: “Broad-brush” policies fail to provide granular guidance:
- Grammar correction (often allowed) versus idea generation (often prohibited)
Notable: The group used DeepSeek as a secondary research assistant, demonstrating transparent AI use in research
The Failure of AI Detection and the Rise of Stylometry
Eevee, Gabriel, Tiisha, and Harry challenged AI detection software reliance.
Testing: five common detectors (including GPTZero and Sapling)
Findings:
- High false-positive rates on human-written academic text
- Bias against non-native English speakers, whose structured, formal writing is statistically similar to AI prose
Solution Proposed: “Hybrid Stylometry”
- Instead of “Is this AI?”, analyze whether submission matches student’s “individual writing traits” and “voice”
- Longitudinal style-tracking more accurate and less biased
- Shifts focus to student’s personal development and authentic voice
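The group's stylometric pipeline is not described at implementation level; a minimal sketch of the idea (comparing a new submission's style markers against a student's prior, verified work) might look as follows. The function-word feature set and cosine-similarity comparison are illustrative assumptions, not the group's model.

```python
from collections import Counter
from math import sqrt

# A small set of style markers (function words); real stylometric models use far richer feature sets.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "however", "therefore", "which", "because"]

def style_vector(text: str) -> list[float]:
    """Relative frequency of each function word, a crude proxy for an author's 'voice'."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Compare a new submission against the centroid of the student's earlier, verified coursework.
prior_texts = [
    "The results of the experiment suggest that the model overfits because of limited data.",
    "However, the second trial shows that the effect holds, which supports the hypothesis.",
]
prior_vectors = [style_vector(t) for t in prior_texts]
centroid = [sum(col) / len(col) for col in zip(*prior_vectors)]
new_submission = "The analysis indicates that the trend is robust because the sample is large."
print(f"Style similarity to prior work: {cosine(style_vector(new_submission), centroid):.2f}")
```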
Search Engine Analytics as a Proxy for Integrity Trends
Rosie, Jason, and Bo introduced a groundbreaking methodology using Google Trends data.
Cyclical Pattern Identified:
- Searches for “AI essay” and “AI homework” peak during mid-terms and finals
- Drop to near-zero during holidays
“Tool-Shifting” Trend:
- “AI essay” searches peaked in 2023 and are now declining
- “AI code” and “AI rephrasing” searches rising
Hypothesis: Students shifting toward “modular” assistance to avoid detection
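The source does not name the tooling the group used to pull search-interest data. The sketch below uses the unofficial pytrends package and illustrative exam-period months; both are assumptions, not the group's documented pipeline.

```python
# pytrends is an unofficial Google Trends client (pip install pytrends); the group's
# actual tooling is not named in the source, so this is an assumption.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-GB", tz=0)
terms = ["AI essay", "AI homework", "AI code", "AI rephrasing"]
pytrends.build_payload(terms, timeframe="2023-01-01 2025-12-31", geo="GB")
interest = pytrends.interest_over_time()  # weekly relative search interest, scaled 0-100

# Contrast exam-season weeks with the rest of the year to expose the cyclical pattern.
exam_season = interest.index.month.isin([1, 5, 6])  # illustrative: January and summer exam periods
print("Exam-season mean interest:\n", interest.loc[exam_season, terms].mean())
print("Off-season mean interest:\n", interest.loc[~exam_season, terms].mean())
```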
The Policy Maze of the Golden Triangle
Alex, Fergus, Eugen, and Adrian compared policies across Imperial, UCL, and LSE.
“Policy Maze” Identified: Students on intercollegiate modules are subject to vastly different rules for the same piece of work.
| Institution | Policy Category | Key Restriction | Permitted Assistance |
|---|---|---|---|
| Imperial | Prohibitive | No AI for grammar/ideas unless authorized | Minimal/Departmental |
| UCL | Educative | Must disclose all AI use | Brainstorming, structuring |
| LSE | Collaborative | No generation of full content | Grammar, idea generation |
Recommendation: Legal mandate for “Standardized Baseline Policy” across UK higher education
Student vs. Faculty Perspectives: Bridging the “Empathy Gap”
Divergent Questions and Solutions
Faculty ask: “How do we stop students from using AI to bypass learning outcomes?”
Students ask: “How do we use AI to achieve learning outcomes more effectively, and why won’t the university tell us how to do that legally?”
Faculty Solutions
- Return to in-person, proctored exams
- Lockdown browsers
- Punitive enforcement
Student Solutions
- “Authentic Assessment” - tasks too localized/reflective to outsource
- Incremental Submission (research plan → draft → final)
- Addresses root cause: last-minute panic
The Hidden Motivator: Personal Crisis
Critical Finding: The “personal crisis” variable is almost entirely absent from faculty policy.
Student Reality: A significant percentage of students engage in misconduct during periods of:
- Acute mental health struggle
- Bereavement
- Extreme financial stress
Current Problem: Punitive policies leave students feeling they “have no way out” but to cheat; they are too afraid to ask for extensions that might trigger an investigation.
The Pedagogical Model: Why the “Students as Researchers” Framework Works
Core Success Factors
1. Metacognition through Inquiry
- Students engage with ethical foundations of own education
- “Active learning” beyond plagiarism lectures
- Develop “healthy skepticism” and sophisticated understanding
2. Interdisciplinary Skill Development
- Teams cross Computer Science, Engineering, Natural Sciences
- Mimics modern academic/industrial research reality
- Public conference develops “transversal skills”
3. Agility in Rapidly Evolving Field
- A fixed, faculty-written curriculum would be obsolete by the time it was approved
- Students naturally pivot to current phenomena (DeepSeek, modular rephrasing)
- Ensures “bottom-up” and current understanding
Spotlight on Standout Student Work
1. The AIMR Scale Longitudinal Dataset (2023–2025)
- First quantifiable metric for institutional readiness in the UK
- Critical finding: policy “coverage” ≠ “clarity”
- Publication-ready contribution to educational policy literature
2. The Google Trends Search Interest Mapping (2025)
- Methodological triumph
- Innovative solution to social desirability bias
- Roadmap for future detection efforts
3. The Engineering-Specific Integrity Audit (2024)
- Use of “Expert Informants”
- Bridged student-faculty gap
- Demonstrates “Students as Researchers” as internal QA mechanism
4. The Hybrid Stylometry Detection Model (2025)
- Empirical critique of educational technology
- Aligned with faculty concerns about ESL bias
- “Voice-based” rather than “statistically-based” integrity model
Recommendations for Amplifying the Student Voice
1. Formalize Student-Staff Integrity Partnerships
- Move beyond consulting to active partnership
- “Student Integrity Champions” in each department
- Identify subject-specific loopholes
- Translate complex policies into student-friendly guidance
2. Transition to Process-Based, Authentic Assessment
The student research converges on a “detection is dead” consensus. Invest instead in:
- Incremental submissions with feedback loops
- Oral examinations or “viva voce” components
- Reflective journals documenting research “story”
3. Address the “Policy Maze” through Standardization
Urgent Need: “National Academic Integrity Framework”
- Common definitions
- “Permissible use tiers” for AI
- Reduce “accidental misconduct”
4. Decouple Support from Discipline
- Mental health and financial support without triggering integrity investigation
- “Safe Disclosure” pathways
- Allow students to admit they are overwhelmed before resorting to cheating
Conclusion
The Imperial College London student conferences of 2023–2025 reveal that the “AI Revolution” in education is not just a technical challenge, but a sociological one.
Faculty focus: Mechanics of detection and preservation of legacy standards
Student focus: Transparency, modular assistance, empathetic institutional governance
The shift: From reactive anxiety (2023) to data-driven skepticism (2025)
The best defense against academic misconduct is not a better detector, but a more engaged and empowered student body.