Gemini Research G: Student Conference Meta-Analysis

Category: research
Date: 15 February 2026, 18:06 UTC
Original File: GEMINI_RESEARCH_G_STUDENT_CONFERENCES.md


Comprehensive analysis of three years of student-led research at Imperial College London


Structural Evolution of Academic Integrity: A Meta-Analysis of Student-Led Research at Imperial College London (2023–2025)

Source: Gemini Deep Research Output (Prompt G)
Date: 15 February 2026
Input: Analysis of 3 years of Imperial College London student conference videos (2023-2025)


Executive Summary

The advent of large language models and generative artificial intelligence has catalyzed a paradigm shift in higher education, necessitating a fundamental re-evaluation of how academic integrity is defined, monitored, and maintained. At Imperial College London, the module “Researching Academic Integrity in an Artificial Intelligence Driven World” (COMP60024) has provided a unique longitudinal vantage point for observing this transition from the perspective of the students themselves.

By positioning undergraduates as researchers within an interdisciplinary framework, the college has moved beyond traditional top-down policy implementation to a model of co-inquiry that captures the nuanced lived experiences of the first generation of “AI natives.”

This report analyzes three years of student research presented at public conferences between 2023 and 2025, identifying critical themes that are often overlooked in faculty-led studies and proposing a refined pedagogical model for the future of academic ethics.


The 2023 Inception: Reactive Policy and Technological Benchmarking

In early 2023, the global academic community was primarily concerned with the immediate threat posed by the public release of ChatGPT. The research conducted by the inaugural cohort of the COMP60024 module reflected this reactive posture, focusing on the capability of generative tools to bypass assessments and the readiness of institutions to respond.

Institutional Governance and the AI Misconduct Readiness Rating

A foundational contribution to the field was provided by the research group comprising Stefanus, Chen, and Lily, who sought to quantify institutional preparedness through the development of the AI Misconduct Readiness (AIMR) rating system.

Methodology:

Key Findings:

Critical Insight: Ambiguity creates a “grey zone” of misconduct where students unintentionally breach integrity due to lack of clear permission tiers.

| University Type (2023 Sample) | Presence of AI-Specific Language | Avg AIMR Score | Primary Misconduct Definition Used |
|---|---|---|---|
| Russell Group | 28% | 2.65 | Third-party services |
| Non-Russell Group | 14% | 3.10 | Plagiarism/Collusion |
| STEM-focused | 40% | 2.10 | Unauthorized software use |
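
The group’s rubric is not reproduced in this report; the sketch below is a hypothetical illustration of how a rating of this kind could be computed, scoring a policy document against weighted boolean criteria. The criteria names and weights are assumptions, not the group’s actual instrument.

```python
# Hypothetical illustration of a rubric-based readiness score in the
# spirit of the students' AIMR scale. The actual criteria, weights, and
# scale direction are NOT given in the report; everything here is assumed.

AIMR_CRITERIA = {
    "mentions_generative_ai_explicitly": 1.0,
    "defines_permitted_vs_prohibited_use": 1.0,
    "specifies_disclosure_requirements": 1.0,
    "gives_assessment_specific_guidance": 1.0,
    "describes_sanctions_and_appeals": 1.0,
}

def aimr_score(policy_flags: dict) -> float:
    """Sum the weights of the rubric criteria a policy document satisfies."""
    return sum(
        weight
        for criterion, weight in AIMR_CRITERIA.items()
        if policy_flags.get(criterion, False)
    )

# A policy that names generative AI but offers no granular guidance:
example = {
    "mentions_generative_ai_explicitly": True,
    "specifies_disclosure_requirements": True,
    "describes_sanctions_and_appeals": True,
}
print(aimr_score(example))  # 3.0 out of a possible 5.0
```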

Performance Benchmarks Across STEM and Humanities

Victoria, Regina, Liam, and Sami conducted empirical testing of ChatGPT’s ability to perform academic tasks.

Research Question: Effectiveness of ChatGPT in answering and marking undergraduate exams across disciplines

Evaluation Framework:

Key Findings:

Insight: Shift from “cheating machine” to “sophisticated but fallible tutor”
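
The evaluation framework itself is not reproduced here. As a minimal sketch of how answering-and-marking effectiveness could be quantified, assuming the natural comparison of ChatGPT-assigned marks against examiner-assigned marks on the same scripts (all marks below are invented):

```python
from statistics import correlation, mean  # correlation requires Python 3.10+

# Invented marks for illustration; the group's real data are not
# reproduced in this report.
human_marks = [62, 74, 58, 81, 69, 77, 55, 90]
ai_marks    = [65, 70, 61, 78, 72, 74, 60, 84]

# Agreement between ChatGPT-assigned and examiner-assigned marks.
r = correlation(human_marks, ai_marks)

# Systematic bias: does the model mark more leniently on average?
bias = mean(ai_marks) - mean(human_marks)

print(f"Pearson r = {r:.2f}, mean bias = {bias:+.1f} marks")
```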

Socio-Technical Factors and Demographic Disparities

Girish and Adelina investigated how demographic factors influence students’ understanding of plagiarism.

Key Findings:


The 2024 Expansion: Market Dynamics and the Post-Pandemic Reality

By 2024, the research focus had matured to address the increasingly sophisticated commercial ecosystem surrounding academic misconduct.

The Longitudinal Impact of the COVID-19 Pandemic

One group analyzed global misconduct trends across the pandemic period:

Mechanical Engineering and the Handwriting Loophole

Amia’s research highlighted a critical technological gap:

The Predator-Prey Dynamics of Contract Cheating

Hamish, James, and Nurin analyzed 36 essay mills.

Key Finding: Services employ a predatory marketing strategy, selling “relief” rather than “success”.

Student Motivations:

Service Marketing: “24/7 deadline support”, “low-cost solutions”

Lavender’s research on ghostwriting in the Natural Sciences:

| Marketing Tactic | Frequency on Cheating Sites (2024) | Primary Student Motivator Targeted |
|---|---|---|
| “Plagiarism-Free Guarantee” | 92% | Fear of detection/penalties |
| “24/7 Live Support” | 84% | Time pressure/Poor planning |
| “Affordable Pricing” | 76% | Financial constraint |
| “Expert Writers/Tutors” | 68% | Academic self-doubt |

AI and the Future of Intellectual Property

Kai, Marcus, and Irene investigated the intellectual property implications of AI-generated work:


The 2025 Normalization: Analytics, Detection Skepticism, and Inter-Institutional Variance

Longitudinal Analysis of the AIMR Scale

Tony Loas, Emma, Adam, and Ray re-evaluated AIMR ratings (2023→2025).

Dramatic Improvement:

New Problem: “Broad-brush” policies fail to provide granular guidance:

Notable: The group used DeepSeek as a secondary research assistant, demonstrating transparent AI use within the research process itself.

The Failure of AI Detection and the Rise of Stylometry

Eevee, Gabriel, Tiisha, and Harry challenged institutional reliance on AI detection software.

Testing: five common detectors (including GPTZero and Sapling)

Findings:
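
The group’s quantitative findings are not reproduced in this summary. As a hedged illustration of the benchmark design, the sketch below computes the two error rates such a test yields, false positives (human work flagged as AI) and false negatives (AI work passed as human), from labelled samples with invented data:

```python
# Sketch of the benchmark shape described above. Detector outputs here
# are stand-in booleans, not real API calls to any product.

def error_rates(predictions: list, labels: list) -> tuple:
    """Return (false_positive_rate, false_negative_rate).

    labels: True if the text is actually AI-generated.
    predictions: True if the detector flagged it as AI-generated.
    """
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    negatives = sum(not l for l in labels)  # genuinely human texts
    positives = sum(l for l in labels)      # genuinely AI texts
    return fp / negatives, fn / positives

# Example: 6 human-written and 6 AI-written samples, one detector's flags.
labels = [False] * 6 + [True] * 6
flags  = [False, True, False, False, True, False,  # 2 human texts flagged
          True, True, False, True, True, False]    # 2 AI texts missed
fpr, fnr = error_rates(flags, labels)
print(f"FPR = {fpr:.0%}, FNR = {fnr:.0%}")  # FPR = 33%, FNR = 33%
```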

Solution Proposed: “Hybrid Stylometry”
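
The internals of the proposed model are not detailed in this report. A minimal sketch, assuming the core stylometric idea of comparing a submission’s stylistic fingerprint against a baseline built from the same student’s earlier verified work (the feature set and distance measure are illustrative assumptions):

```python
import re
from statistics import mean, stdev

def stylometric_features(text: str) -> dict:
    """A few classic stylometric signals; real models use many more."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "mean_sentence_len": mean(len(s.split()) for s in sentences),
        "type_token_ratio": len(set(words)) / len(words),
        "mean_word_len": mean(len(w) for w in words),
    }

def deviation_from_baseline(submission: str, prior_work: list) -> float:
    """Average z-score distance of a submission from the author's own
    baseline, built from at least two pieces of earlier verified work.
    Higher values suggest a break from the student's established style."""
    baseline = [stylometric_features(t) for t in prior_work]
    sub = stylometric_features(submission)
    z_scores = []
    for key, value in sub.items():
        vals = [b[key] for b in baseline]
        sd = stdev(vals) or 1e-9  # avoid division by zero
        z_scores.append(abs(value - mean(vals)) / sd)
    return mean(z_scores)
```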

Rosie, Jason, and Bo introduced a groundbreaking methodology using Google Trends search data.

Cyclical Pattern Identified:

“Tool-Shifting” Trend:

Hypothesis: Students shifting toward “modular” assistance to avoid detection
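
The group’s exact query terms are not listed in this report. A minimal sketch of the approach using pytrends, an unofficial Python client for Google Trends (the search terms, region, and timeframe below are illustrative assumptions):

```python
from pytrends.request import TrendReq  # unofficial Google Trends client

pytrends = TrendReq(hl="en-GB", tz=0)
terms = ["essay writing service", "ai essay writer", "paraphrasing tool"]
pytrends.build_payload(kw_list=terms, timeframe="2023-01-01 2025-12-31", geo="GB")

# Weekly search interest, indexed 0-100 per term.
interest = pytrends.interest_over_time()

# Crude seasonality check: average interest by calendar month to see
# whether peaks align with assessment periods.
monthly = interest[terms].groupby(interest.index.month).mean()
print(monthly)
```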

The Policy Maze of the Golden Triangle

Alex, Fergus, Eugen, and Adrian compared AI policies across Imperial, UCL, and LSE.

“Policy Maze” Identified: Students on intercollegiate modules are subject to vastly different rules for the same work.

| Institution | Policy Category | Key Restriction | Permitted Assistance |
|---|---|---|---|
| Imperial | Prohibitive | No AI for grammar/ideas unless authorized | Minimal/Departmental |
| UCL | Educative | Must disclose all AI use | Brainstorming, structuring |
| LSE | Collaborative | No generation of full content | Grammar, idea generation |

Recommendation: A legal mandate for a “Standardized Baseline Policy” across UK higher education
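
To make the maze concrete, a toy sketch that encodes the table above and derives the only safe behaviour for a cross-registered student as the intersection of what every institution permits (the boolean readings of the table are an assumption; real policies are far more nuanced):

```python
# Toy boolean reading of the policy table above.
POLICIES = {
    "Imperial": {"grammar": False, "brainstorming": False},
    "UCL":      {"grammar": False, "brainstorming": True},
    "LSE":      {"grammar": True,  "brainstorming": True},
}

def safe_for_intercollegiate(institutions, activity):
    """Only what every participating institution permits is safe."""
    return all(POLICIES[i][activity] for i in institutions)

print(safe_for_intercollegiate(["UCL", "LSE"], "brainstorming"))      # True
print(safe_for_intercollegiate(["Imperial", "UCL"], "brainstorming")) # False
```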


Student vs. Faculty Perspectives: Bridging the “Empathy Gap”

Divergent Questions and Solutions

Faculty ask: “How do we stop students from using AI to bypass learning outcomes?”

Students ask: “How do we use AI to achieve learning outcomes more effectively, and why won’t the university tell us how to do that legally?”

Faculty Solutions

Student Solutions

The Hidden Motivator: Personal Crisis

Critical Finding: The “personal crisis” variable is almost entirely absent from faculty-designed policy.

Student Reality: A significant percentage of students engage in misconduct during periods of:

Current Problem: Punitive policies leave students feeling they “have no way out” but to cheat, too afraid to request extensions that might trigger a misconduct investigation.


The Pedagogical Model: Why the “Students as Researchers” Framework Works

Core Success Factors

1. Metacognition through Inquiry

2. Interdisciplinary Skill Development

3. Agility in Rapidly Evolving Field


Spotlight on Standout Student Work

1. The AIMR Scale Longitudinal Dataset (2023–2025)

2. The Engineering-Specific Integrity Audit (2024)

3. The Hybrid Stylometry Detection Model (2025)


Recommendations for Amplifying the Student Voice

1. Formalize Student-Staff Integrity Partnerships

2. Transition to Process-Based, Authentic Assessment

“Detection is Dead” consensus

Invest in:

3. Address the “Policy Maze” through Standardization

Urgent Need: “National Academic Integrity Framework”

4. Decouple Support from Discipline


Conclusion

The Imperial College London student conferences of 2023–2025 reveal that the “AI Revolution” in education is not just a technical challenge, but a sociological one.

Faculty focus: Mechanics of detection and preservation of legacy standards

Student focus: Transparency, modular assistance, empathetic institutional governance

The shift: From reactive anxiety (2023) to data-driven skepticism (2025)

The best defense against academic misconduct is not a better detector, but a more engaged and empowered student body.


Full research output analyzed from Gemini Deep Research (Prompt G)
Videos analyzed: 3 years of Imperial College London student conferences (2023-2025)

