Claude Research #2: CS Curricula Analysis
Category: research
Date: 15 February 2026, 17:36 UTC
Original File: CLAUDE_RESEARCH_2_CS_CURRICULA.md
26 universities compared - GenAI in CS education
Generative AI in CS Curricula: 26 Universities Compared
Claude Deep Research Output #2 - Analysis
Source: Claude.ai Artifact (0ca9705a-cc1b-4cdc-b4f3-6086418a68a1)
Date: 2026-02-15
Researcher: Thomas Lancaster (via Claude Deep Research)
Topic: Prompt B - GenAI in CS Education
Universities Analyzed: 22 Russell Group UK + MIT, Stanford, CMU, ETH Zurich
EXECUTIVE SUMMARY
Key Finding: No university has banned GenAI outright, but a “performance-learning paradox” has emerged:
- Students using AI complete tasks faster and score higher on homework
- But perform measurably worse on exams (2 letter grades lower at CMU)
- Stanford reported: CS problem set scores increased, exam scores declined
Central Driver: This paradox is forcing curriculum redesign across the sector.
CRITICAL FINDING: The Performance-Learning Paradox
Stanford (November 2025 Faculty Senate)
- CS problem sets: Scores increased with AI use
- Midterm/final exams: Scores declined
- VP for Undergraduate Education reported this pattern
Carnegie Mellon (Course 15-112)
- Students using AI received 2 letter grades lower on average
- Spring 2025: Highest drop rate in years
- Instructor Michael Taylor: “Every change we made somehow all tied back to AI”
Implications:
- AI assists immediate task completion
- But undermines deep learning
- Students fail when AI unavailable (exams)
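The paradox above is an empirical claim about score gaps. A minimal sketch of how it could be tested against a gradebook — comparing the homework-exam gap between self-reported AI users and non-users — is below; all numbers are hypothetical and are not data from any institution named here:

```python
# Hypothetical gradebook: does the homework-exam gap differ between
# students who report using AI and those who do not?
# All scores are illustrative placeholders, not real institutional data.
from statistics import mean

students = [
    # (used_ai, homework_pct, exam_pct)
    (True, 92, 61), (True, 95, 58), (True, 89, 65), (True, 94, 60),
    (False, 81, 78), (False, 78, 74), (False, 84, 80), (False, 79, 75),
]

def mean_gap(used_ai: bool) -> float:
    """Average (homework - exam) gap, in percentage points, for one group."""
    gaps = [hw - ex for ai, hw, ex in students if ai == used_ai]
    return mean(gaps)

ai_gap = mean_gap(True)      # large gap: strong homework, weak exams
no_ai_gap = mean_gap(False)  # small gap: scores track each other

print(f"AI users:  homework-exam gap = {ai_gap:.1f} points")
print(f"Non-users: homework-exam gap = {no_ai_gap:.1f} points")
```

A real replication would of course need consent, controls for prior attainment, and a proper statistical test rather than a raw mean comparison.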
UK RUSSELL GROUP ANALYSIS (22 Universities)
Framework: All Follow the Russell Group Shared Principles
Russell Group Principles (July 2023, updated Feb 2025). Common implementation features:
- Promote responsible use rather than prohibition
- Tiered/categorical permission systems
- Module-level decision making
DETAILED INSTITUTION PROFILES
University of Cambridge
Policy Date: 15 October 2024 (CS Department-specific)
Approach: Most detailed publicly available CS policy
Key Provisions:
- ❌ AI prohibited in ALL examinations (regardless of location/format)
- ✅ Local editing tools permitted (grammar, spell-check)
- ✅ Viva voce mechanism: Suspected students must demonstrate work is their own
- ❌ AI-generated output never presented as student’s own work unless explicitly permitted
Unique Feature: Oral examination option for verification
University of Edinburgh
School of Informatics
Approach: Most restrictive school-level default among UK institutions
Key Provisions:
- ❌ Default: GenAI systems MUST NOT be used for assessed work
- ✅ Unless explicitly allowed in writing by course organiser
- Infrastructure: Built ELM (Edinburgh Language Models)
- Zero-data-retention agreement with OpenAI
- Integrated with JupyterAI for coding instruction
Unique Feature: Institutional AI platform with privacy protection
University of Oxford
First UK Uni to Provide ChatGPT Edu
Timeline: September 2025
- Member of OpenAI’s NextGenAI consortium
- GPT-5 model access for all staff/students
Policy:
- Every assessment must explain AI permissions
- Unauthorized use = academic misconduct
- No CS department-specific policy - follows university-wide guidance
Gap: No publicly available CS-specific guidance
Imperial College London
Most Sophisticated AI Infrastructure
Platform: dAIsy (custom-built institutional platform)
- Multiple models: GPT-5, Claude, DeepSeek
- Single secure interface
Courses:
- “Introduction to AI-Assisted Programming” (for doctoral students)
- GitHub Copilot training
- Mandatory “Introduction to Generative AI” online course for all students
Tool: Assessment “stress test” for staff
- Evaluate how GenAI affects specific assessment tasks
Unique Feature: Comprehensive institutional platform + mandatory training
UCL
Three-Category Framework
| Category | AI Use | Examples |
|---|---|---|
| Category 1 | AI prohibited | Invigilated exams |
| Category 2 | Assistive role only | Brainstorming, proofreading |
| Category 3 | AI as primary tool | Effective use = marking criteria |
Critical Stance:
- Does NOT use GenAI detectors when marking (a stance shared by most Russell Group institutions)
University of Bristol
Four-Category System
| Category | Description |
|---|---|
| Prohibited | No AI use |
| Minimal | Default level (most assessments) |
| Selective | Some AI permitted |
| Integral | AI required |
Research: Funding AI-generated feedback for first-year programming
University of Birmingham
Most Restrictive University-Wide Default
- ❌ AI use NOT permitted unless explicitly stated
- ✅ GenAI proofreading restricted to spelling/grammar only
King’s College London
Four Levels of AI Use
From routine tools → AI-integral assessment
Advice: “Against integrating generative AI into most summative assessments”
University of Manchester
World’s First: Microsoft 365 Copilot Rollout to 65,000 Users
- Entire community (students + staff)
- Free GitHub Copilot for all
University of Leeds
Traffic-Light System
- 🟢 Green: AI permitted
- 🟡 Amber: Limited use
- 🔴 Red: Prohibited
Investigation: Student newspaper found 17% admitted using AI in “Red” assignments
University of Glasgow
Explicit Code Prohibition
“Generating code or generating code comments” prohibited in restricted assessments
University of Exeter
Four-Tier System
- AI-integrated
- AI-assisted
- AI-limited
- AI-prohibited
Feature: Mandatory compliance checkbox at submission
Queen’s University Belfast
RAISE Framework (bespoke)
- Responsible use
- AI best practice
- Integrity
- Support
- Equitable access
CRITICAL GAP ACROSS UK SECTOR
Finding: None of the 22 Russell Group CS departments has a publicly available, department-specific GenAI policy
- All operate under university-wide frameworks
- Devolved decision-making to module level
- Internal VLEs/module handbooks contain guidance (not public)
INTERNATIONAL INSTITUTIONS
MIT
Course-Level Experimentation (No Department-Wide Policy)
Course Examples:
| Course | AI Policy |
|---|---|
| 6.100A (Intro) | AI for concepts OK; code generation prohibited; PyTutor AI tutor promoted |
| 6.1010 | Red-Yellow-Green; GitHub Copilot = Red (prohibited) |
| 6.1020 (Software Construction) | All code-generation tools banned |
| 6.1210 (Algorithms) | Pivoted: Problem sets 25%→5%; Exams 75%→95% |
Math Department request: all courses should include in-person assessments (AI cited as a “big factor”)
Infrastructure:
- ChatGPT Enterprise for faculty
- OpenAI NextGenAI consortium member
Stanford
Most Structured Institutional Response
Academic Integrity Working Group (AIWG):
- Proctoring pilot: in its 3rd year, covering 50+ courses
- Found AI detectors unsuitable for misconduct evidence (high false positives)
Platform: AI Playground
- ChatGPT (GPT-5)
- Claude (4 Sonnet, 4.5 Sonnet)
- Gemini, DeepSeek, Llama
CS106A (Chris Piech):
- ❌ AI code completion prohibited (must disable in PyCharm)
- ✅ Students build AI-powered apps in “Infinite Story” assignment
- Goal: teach students to program WITH GenAI as a component, not a crutch
Graduate School of Business:
- Instructors may NOT ban AI for take-home coursework
Data (Nov 2025 Faculty Senate):
- Problem set scores increased
- Exam scores declined
Carnegie Mellon (CMU)
Ranked #1 Globally in AI
Default: GenAI = “unauthorized assistance” unless explicitly permitted
Eberly Center: 6 template syllabus policies
- From full prohibition → full encouragement
Course 15-112 (Intro):
- Highest drop rate in years (Spring 2025)
- Students using AI heavily failed quizzes/exams
Instructor Response (Michael Taylor):
- Increased quiz weighting
- In-class homework rewrites (live coding)
- Flipped classroom formats
- Quote: “Every change we made somehow all tied back to AI”
New Course: 15-113 “Effective Coding with AI”
- Build projects using AI tools strategically
- Evaluate code for correctness, security, maintainability
Provocative Course: “Students will not write actual code”
- Manage AI-assisted development
- Tools: Cursor, Windsurf
- Method: “Mob programming” sessions
ETH Zurich
“Proactive Approach” Emphasizing Responsible Use
- Developing institutional frameworks
- Emphasis on responsible integration
- (Content truncated in source)
COMPARATIVE ANALYSIS
Assessment Restructuring Trends
| Institution | Change | Rationale |
|---|---|---|
| MIT 6.1210 | Problem sets 25%→5%; Exams 75%→95% | In-person examination |
| CMU 15-112 | Increased quiz weighting | Reduce homework reliance |
| CMU | Live coding rewrites | Verify authentic ability |
| Stanford | Proctoring pilot (50+ courses) | Exam integrity |
Infrastructure Investment
| Institution | Platform | Features |
|---|---|---|
| Imperial | dAIsy | Multi-model, secure, single interface |
| Edinburgh | ELM | Zero-retention, JupyterAI integration |
| Oxford | ChatGPT Edu | GPT-5, NextGenAI consortium |
| Stanford | AI Playground | Multiple models (GPT, Claude, Gemini, etc.) |
| Manchester | Microsoft 365 Copilot | 65,000 users |
Policy Spectrum
| Approach | Institutions |
|---|---|
| Near-total prohibition (exams) | Cambridge |
| Restrictive default | Edinburgh, Birmingham |
| Tiered frameworks | Most Russell Group |
| AI as learning tool | Imperial, CMU 15-113 |
| Instructors cannot ban AI | Stanford GSB |
IMPLICATIONS FOR RESEARCH
High-Value Research Questions:
Performance-Learning Paradox Replication
- Test at other institutions
- Longitudinal tracking
- Causal mechanisms
Assessment Redesign Efficacy
- Which changes work?
- Cost-benefit analysis
- Student learning outcomes
Infrastructure Comparison
- Which platforms most effective?
- Privacy vs functionality
- Cost analysis
Policy Compliance
- 17% at Leeds used AI in “Red” assignments
- Actual vs stated policy adherence
- Enforcement mechanisms
Department vs University Policies
- Gap in publicly available CS-specific guidance
- Module-level variation
- Best practice identification
Methodological Opportunities:
- Scrape and Compare - Policy document analysis (your method)
- Survey Faculty - Implementation experiences
- Student Interviews - Understanding of policies
- Longitudinal Tracking - Grade patterns pre/post AI
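The “scrape and compare” step can be sketched as a keyword-tagging pass over collected policy texts, producing coarse feature labels per institution. The feature names, regex patterns, and sample texts below are illustrative assumptions, not a validated coding scheme:

```python
# Sketch of policy-document comparison: tag each policy text with coarse
# feature labels via keyword matching. Patterns and sample texts are
# hypothetical placeholders for a real coding scheme.
import re

FEATURES = {
    "prohibition": r"\b(prohibit|must not|not permitted|banned)\b",
    "tiered": r"\b(tier|categor|traffic.light|level)\w*\b",
    "declaration": r"\b(declare|checkbox|acknowledge)\w*\b",
    "detection": r"\b(detector|detection)\b",
}

def tag_policy(text: str) -> set[str]:
    """Return the set of feature labels whose keywords appear in the text."""
    lowered = text.lower()
    return {name for name, pat in FEATURES.items() if re.search(pat, lowered)}

# Toy example documents (not real policy wording)
policies = {
    "Uni A": "GenAI use is not permitted unless the course organiser states otherwise.",
    "Uni B": "We use a traffic-light system; students must declare AI use at submission.",
}

for uni, text in policies.items():
    print(uni, sorted(tag_policy(text)))
```

In practice keyword tagging would only be a first pass; manual coding of a sample would be needed to check that the labels match what the policies actually say.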
CONTENT CREATION OPPORTUNITIES
Twitter/X Threads:
- “The AI Paradox: Higher homework scores, lower exam scores”
- “26 universities, 26 approaches: How CS is handling GenAI”
- “Why Oxford gives all students GPT-5 (and what happened)”
- “CMU’s radical experiment: A course where you don’t write code”
Blog Posts:
- “The Performance-Learning Paradox: Data from 26 Universities”
- “Comparing CS GenAI Policies: From Prohibition to Integration”
- “Infrastructure Wars: Which University Built the Best AI Platform?”
Academic Paper:
- “The Performance-Learning Paradox in CS Education: A Multi-Institutional Analysis”
- “Policy Responses to GenAI: A Comparative Study of 26 Leading Universities”
TEAM ACTION ITEMS
Immediate (This Week):
- Riley: Verify key statistics with original sources
- Ava: Create comparison matrix (institution × policy features)
- Quinn: Draft “Performance-Learning Paradox” Twitter thread
- Kai: Design infographic showing policy spectrum
Short-term (Next 2 Weeks):
- Riley: Find full text of Cambridge, Imperial, CMU policies
- Ava: Quantitative analysis of policy patterns
- Quinn: Blog post “26 Universities, One Paradox”
- Zak: Propose conference paper structure
Medium-term (Conference):
- Replicate performance-learning analysis
- Survey UK CS departments on implementation
- Design assessment redesign efficacy study
- Prepare live demo dataset
CONNECTION TO YOUR PRIOR WORK
Building on:
- Lancaster (2016) - Contract cheating handbook (assessment redesign)
- Lancaster & Clarke (2007) - Detection process (policy analysis)
- Lancaster (2019) - Student reasons (understanding why students use AI)
New Contribution:
- First multi-institutional CS-specific analysis
- Performance-learning paradox documentation
- Infrastructure/platform comparison
- Real-time policy tracking (2024-2026)
SUMMARY
This research reveals a sector in rapid transformation:
- No outright bans anywhere
- Universal concern about the performance-learning paradox
- Massive infrastructure investment (custom platforms)
- Assessment restructuring as primary response
- Gap between policy and practice (17% non-compliance at Leeds)
Central Challenge: Students appear to be learning less while performing better on AI-assisted tasks—a fundamental threat to educational integrity.
Saved: 2026-02-15
Status: Ready for analysis, content creation, and research development
Next: Team begins content creation and comparison analysis