Claude Research #2: CS Curricula Analysis

Published: 15 February 2026 | Category: research

Original File: CLAUDE_RESEARCH_2_CS_CURRICULA.md


26 universities compared - GenAI in CS education


Generative AI in CS Curricula: 26 Universities Compared

Claude Deep Research Output #2 - Analysis

Source: Claude.ai Artifact (0ca9705a-cc1b-4cdc-b4f3-6086418a68a1)
Date: 2026-02-15
Researcher: Thomas Lancaster (via Claude Deep Research)
Topic: Prompt B - GenAI in CS Education
Universities Analyzed: 22 Russell Group UK + MIT, Stanford, CMU, ETH Zurich


EXECUTIVE SUMMARY

Key Finding: No university has banned GenAI outright, but a “performance-learning paradox” has emerged: students score higher on AI-assisted coursework while scoring lower on exams that test independent ability.

Central Driver: This paradox is forcing curriculum redesign across the sector.


CRITICAL FINDING: The Performance-Learning Paradox

Stanford (November 2025 Faculty Senate)

Carnegie Mellon (Course 15-122)

Implications:


UK RUSSELL GROUP ANALYSIS (22 Universities)

Framework: All Follow 5 Shared Principles

Russell Group Principles (July 2023, updated Feb 2025):


DETAILED INSTITUTION PROFILES

University of Cambridge

Policy Date: 15 October 2024 (CS Department-specific)

Approach: Most detailed publicly available CS policy

Key Provisions:

Unique Feature: Oral examination option for verification


University of Edinburgh

School of Informatics

Approach: Most restrictive default among UK institutions

Key Provisions:

Unique Feature: Institutional AI platform with privacy protection


University of Oxford

First UK Uni to Provide ChatGPT Edu

Timeline: September 2025

Policy:

Gap: No publicly available CS-specific guidance


Imperial College London

Most Sophisticated AI Infrastructure

Platform: dAIsy (custom-built institutional platform)

Courses:

Tool: Assessment “stress test” for staff

Unique Feature: Comprehensive institutional platform + mandatory training


UCL

Three-Category Framework

| Category | AI Use | Examples |
|---|---|---|
| Category 1 | AI prohibited | Invigilated exams |
| Category 2 | Assistive role only | Brainstorming, proofreading |
| Category 3 | AI as primary tool | Effective use = marking criteria |

Critical Stance:


University of Bristol

Four-Category System

| Category | Description |
|---|---|
| Prohibited | No AI use |
| Minimal | Default level (most assessments) |
| Selective | Some AI permitted |
| Integral | AI required |

Research: Funding AI-generated feedback for first-year programming


University of Birmingham

Most Restrictive University-Wide Default


King’s College London

Four Levels of AI Use

From routine tools → AI-integral assessment

Advice: “Against integrating generative AI into most summative assessments”


University of Manchester

World’s First: Microsoft 365 Copilot rolled out to 65,000 users


University of Leeds

Traffic-Light System

Investigation: A student newspaper found that 17% of students admitted using AI in “Red” (prohibited) assignments


University of Glasgow

Explicit Code Prohibition

“Generating code or generating code comments” prohibited in restricted assessments


University of Exeter

Four-Tier System

Feature: Mandatory compliance checkbox at submission


Queen’s University Belfast

RAISE Framework (bespoke)


CRITICAL GAP ACROSS UK SECTOR

Finding: None of the 22 Russell Group CS departments has a publicly available, department-specific GenAI policy


INTERNATIONAL INSTITUTIONS

MIT

Course-Level Experimentation (No Department-Wide Policy)

Course Examples:

| Course | AI Policy |
|---|---|
| 6.100A (Intro) | AI for concepts OK; code generation prohibited; PyTutor AI tutor promoted |
| 6.1010 | Red-Yellow-Green; GitHub Copilot = Red (prohibited) |
| 6.1020 (Software Construction) | All code-generation tools banned |
| 6.1210 (Algorithms) | Pivoted: problem sets 25%→5%; exams 75%→95% |

Math Department Request: All courses to include in-person assessments, citing AI as a “big factor”

Infrastructure:


Stanford

Most Structured Institutional Response

Academic Integrity Working Group (AIWG):

Platform: AI Playground

CS106A (Chris Piech):

Graduate School of Business:

Data (Nov 2025 Faculty Senate):


Carnegie Mellon (CMU)

Ranked #1 Globally in AI

Default: GenAI = “unauthorized assistance” unless explicitly permitted

Eberly Center: 6 template syllabus policies

Course 15-112 (Intro):

Instructor Response (Michael Taylor):

New Course: 15-113 “Effective Coding with AI”

Provocative Course: “Students will not write actual code”


ETH Zurich

“Proactive Approach” Emphasizing Responsible Use


COMPARATIVE ANALYSIS

Assessment Redesign

| Institution | Change | Rationale |
|---|---|---|
| MIT 6.1210 | Problem sets 25%→5%; exams 75%→95% | In-person examination |
| CMU 15-112 | Increased quiz weighting | Reduce homework reliance |
| CMU | Live coding rewrites | Verify authentic ability |
| Stanford | Proctoring pilot (50+ courses) | Exam integrity |
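The stakes of MIT 6.1210’s reweighting (problem sets 25%→5%, exams 75%→95%) can be illustrated numerically. A minimal sketch, using hypothetical scores for a student who excels on AI-assisted problem sets but performs worse on exams:

```python
# Worked example (hypothetical scores): how reweighting problem sets from
# 25% to 5% of the final grade shifts the outcome for a student with
# strong AI-assisted homework but weaker exam performance.

def final_grade(psets: float, exams: float, pset_weight: float) -> float:
    """Weighted final grade; the exam weight is the remainder."""
    return pset_weight * psets + (1 - pset_weight) * exams

psets, exams = 95.0, 65.0               # hypothetical scores, for illustration
old = final_grade(psets, exams, 0.25)   # old weighting: 25% psets / 75% exams
new = final_grade(psets, exams, 0.05)   # new weighting: 5% psets / 95% exams

print(f"old weighting: {old:.1f}")      # 0.25*95 + 0.75*65 = 72.5
print(f"new weighting: {new:.1f}")      # 0.05*95 + 0.95*65 = 66.5
```

Under the old weights the inflated homework score lifts the grade by seven points; under the new weights the grade tracks exam performance almost exactly, which is presumably the intended effect.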

Infrastructure Investment

| Institution | Platform | Features |
|---|---|---|
| Imperial | dAIsy | Multi-model, secure, single interface |
| Edinburgh | ELM | Zero-retention, JupyterAI integration |
| Oxford | ChatGPT Edu | GPT-5, NextGenAI consortium |
| Stanford | AI Playground | Multiple models (GPT, Claude, Gemini, etc.) |
| Manchester | Microsoft 365 Copilot | 65,000 users |

Policy Spectrum

| Approach | Institutions |
|---|---|
| Near-total prohibition (exams) | Cambridge |
| Restrictive default | Edinburgh, Birmingham |
| Tiered frameworks | Most Russell Group |
| AI as learning tool | Imperial, CMU 15-113 |
| Cannot ban AI | Stanford GSB |
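Tiered frameworks like Bristol’s four-category system lend themselves to a machine-readable encoding, which would also support the cross-institution comparison work described later. A minimal sketch, in which the tier names follow Bristol’s system but the permitted-use lists are assumptions chosen purely for illustration:

```python
# Illustrative encoding (not an official tool) of a four-tier AI-use
# framework: each tier maps to the set of AI uses it permits. The tier
# names mirror Bristol's system; the permitted uses are assumptions.

TIERS = {
    "Prohibited": set(),                                  # no AI use
    "Minimal":    {"spelling/grammar"},                   # default level
    "Selective":  {"spelling/grammar", "brainstorming"},  # some AI permitted
    "Integral":   {"spelling/grammar", "brainstorming",
                   "code generation"},                    # AI required
}

def is_permitted(tier: str, use: str) -> bool:
    """Return True if `use` is allowed under the given assessment tier."""
    return use in TIERS[tier]

print(is_permitted("Minimal", "code generation"))   # False
print(is_permitted("Integral", "code generation"))  # True
```

Encoding policies this way makes divergence between institutions explicit: two universities using the same tier label can be compared by the actual permission sets behind it.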

IMPLICATIONS FOR RESEARCH

High-Value Research Questions:

  1. Performance-Learning Paradox Replication

    • Test at other institutions
    • Longitudinal tracking
    • Causal mechanisms
  2. Assessment Redesign Efficacy

    • Which changes work?
    • Cost-benefit analysis
    • Student learning outcomes
  3. Infrastructure Comparison

    • Which platforms most effective?
    • Privacy vs functionality
    • Cost analysis
  4. Policy Compliance

    • 17% at Leeds used AI in “Red” assignments
    • Actual vs stated policy adherence
    • Enforcement mechanisms
  5. Department vs University Policies

    • Gap in publicly available CS-specific guidance
    • Module-level variation
    • Best practice identification

Methodological Opportunities:

  1. Scrape and Compare - Policy document analysis (your method)
  2. Survey Faculty - Implementation experiences
  3. Student Interviews - Understanding of policies
  4. Longitudinal Tracking - Grade patterns pre/post AI
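The “scrape and compare” method above can be sketched in a few lines: given scraped policy texts, count restrictive versus permissive keywords to place each document on a rough prohibition-integration spectrum. The two policy snippets, the keyword lists, and the scoring are all invented for illustration, not a validated instrument:

```python
# Minimal sketch of keyword-based policy comparison. The input texts here
# are invented stand-ins for scraped policy documents; the keyword lists
# and the scoring formula are assumptions for illustration only.
import re
from collections import Counter

RESTRICTIVE = {"prohibited", "banned", "unauthorized", "misconduct"}
PERMISSIVE  = {"permitted", "encouraged", "integral", "responsible"}

def policy_lean(text: str) -> float:
    """Score in [-1, 1]: negative = restrictive, positive = permissive."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    r = sum(words[w] for w in RESTRICTIVE)
    p = sum(words[w] for w in PERMISSIVE)
    return 0.0 if r + p == 0 else (p - r) / (p + r)

policy_a = "Generative AI is prohibited; use constitutes unauthorized assistance."
policy_b = "Responsible AI use is permitted and encouraged in coursework."

print(policy_lean(policy_a))  # -1.0 (only restrictive terms)
print(policy_lean(policy_b))  # 1.0 (only permissive terms)
```

A real study would need a validated coding scheme and human verification, but even this crude score would let the 26 institutions be ranked and tracked over time.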

CONTENT CREATION OPPORTUNITIES

Twitter/X Threads:

  1. “The AI Paradox: Higher homework scores, lower exam scores”
  2. “26 universities, 26 approaches: How CS is handling GenAI”
  3. “Why Oxford gives all students GPT-5 (and what happened)”
  4. “CMU’s radical experiment: A course where you don’t write code”

Blog Posts:

  1. “The Performance-Learning Paradox: Data from 26 Universities”
  2. “Comparing CS GenAI Policies: From Prohibition to Integration”
  3. “Infrastructure Wars: Which University Built the Best AI Platform?”

Academic Paper:

  1. “The Performance-Learning Paradox in CS Education: A Multi-Institutional Analysis”
  2. “Policy Responses to GenAI: A Comparative Study of 26 Leading Universities”

TEAM ACTION ITEMS

Immediate (This Week):

Short-term (Next 2 Weeks):

Medium-term (Conference):


CONNECTION TO YOUR PRIOR WORK

Building on:

New Contribution:


SUMMARY

This research reveals a sector in rapid transformation:

Central Challenge: Students appear to be learning less while performing better on AI-assisted tasks—a fundamental threat to educational integrity.


Saved: 2026-02-15
Status: Ready for analysis, content creation, and research development
Next: Team begins content creation and comparison analysis

