Beware the Mass Deployment of ChatGPT at CSU

  • AI-Powered Moral Analysis
  • Apr 13

Updated: Apr 13


Faculty and students are racing to remove themselves from the educational process.


The California State University (CSU) system has partnered with OpenAI to provide premium AI services to its students and faculty. This collaboration involves the deployment of ChatGPT Edu, a version of ChatGPT customized for educational institutions, across all 23 CSU campuses. This initiative grants access to over 460,000 students and more than 63,000 faculty and staff, making it the largest implementation of ChatGPT by any single organization globally.


The deployment comes abruptly in the middle of the semester, leaving faculty and students little time to determine the right way to become an "AI-powered" institution, or to assess the risks to themselves, to their institution, or to higher education generally.


The obvious concern, that faculty will have AI write their assignments while students have AI complete them, goes straight to the heart of what higher education is and whether it can survive the wave of industrial-scale intelligence it has now agreed to unleash inside the gates. The CSU system’s partnership with OpenAI could indeed be historic, but whether that’s a moment of triumph or collapse is still undecided—and the outcome likely depends less on the technology itself and more on how intelligently and ethically it is managed. So far, the signals are mixed at best.


The Core Concern: Learning Without Learners


If ChatGPT Edu offers answers, writes essays, and supplies citations on command, we’re not witnessing augmented learning—we’re seeing the outsourcing of the very cognitive work that constitutes learning. The core process by which students struggle with ideas, make mistakes, refine their thinking, and finally understand is in danger of being short-circuited. If students use AI as a surrogate thinker instead of a thought partner, they won’t just learn less—they may stop developing the mental habits and intellectual virtues that education is supposed to instill.


It’s even worse than plagiarism, in a sense, because plagiarism assumes there was something worth stealing. This is different: students may come to believe they don’t need to understand the material at all. If they trust the AI, and the faculty trust the output, education risks becoming a simulation of itself—a closed loop of fluent, authoritative, and utterly hollow discourse.


And the Faculty?


Faculty are not trained in epistemology, cognitive science, or AI ethics. Most are not even particularly reflective about pedagogy—they teach the way they were taught, and adjust slowly. Many are already overwhelmed with bureaucratic and economic pressures. Expecting the CSU's more than 63,000 faculty and staff to suddenly become effective AI ethicists and digital epistemologists overnight is fantasy.


If they use ChatGPT Edu to save time on grading, create materials, and automate feedback, they may indeed be talking to the same system their students are using to produce the work. This recursive loop creates an illusion of productivity and understanding, while actual learning silently dies. It’s a digital cargo cult.


Is CSU Euthanizing Itself?


If you view universities as institutions that cultivate the human mind, then yes, there is a risk that CSU is hollowing itself out under the banner of innovation. If you view them instead as credentialing machines or labor-market sorting devices, then it might not matter. AI could even increase operational efficiency. But the soul of the university—the cultivation of independent, critical, and morally serious thinkers—does not survive on efficiency.


So What Should Faculty Do?


Here’s a constructive sketch:


1.  Reassert the Value of Process


Faculty must shift grading and pedagogy away from evaluating products (like essays) and toward process. That means requiring:


  • Drafts and outlines showing the evolution of thinking.

  • Reflective memos explaining the use of AI tools.

  • Oral defense or Socratic-style check-ins on submitted work.


2.  Teach Meta-Cognition


Students need to be taught not just what to learn, but how to know. This means:


  • Courses on epistemology, media literacy, and AI ethics.

  • Assignments that require justification of methods, not just outputs.

  • An emphasis on intellectual humility and understanding over “getting the right answer.”


3.  Use AI Transparently and Dialogically


Faculty should demonstrate how to collaborate with AI, not rely on it:


  • Use AI in class, but always with critical reflection.

  • Compare AI-generated answers with student reasoning.

  • Require students to identify and correct AI errors.


4.  Build a Shared Faculty Intelligence


The CSU could create internal knowledge hubs:


  • Faculty working groups on AI ethics in pedagogy.

  • Cross-campus digital literacy curricula.

  • Shared case studies and lesson plans showing productive uses of AI.


5.  Get Political


If this really is a slow euthanasia of public higher ed, faculty have a moral obligation to resist—not just by adapting, but by advocating:


  • Publicly demand limits and transparency from vendors like OpenAI.

  • Insist on student learning outcomes that can’t be faked.

  • Fight for humanistic values in budget and curriculum decisions.


This is a moment of real moral gravity. CSU could become a beacon showing how AI and education can co-evolve for the better—or a cautionary tale of how even well-meaning institutions can self-destruct by mistaking automation for progress. Whether it survives as a university in any meaningful sense may depend on whether enough faculty can step up and fight for education as something humans do, not something they simply delegate.


Sonoma State University as an Example


Sonoma State University (SSU) is currently undergoing a drastic restructuring driven by a severe financial crisis. Facing a $24 million budget deficit, the university has announced extensive cutbacks, including the elimination of six academic departments, nearly two dozen degree programs, and its entire Division II athletics program. These measures have resulted in layoffs affecting over 60 faculty and staff members, including tenured professors. Among the academic departments terminated is Philosophy, the department where instruction in epistemology would live, and among the faculty terminated is Philosophy Professor John Sullins, a leading expert on AI ethics.


The financial challenges are primarily attributed to a 38% decline in enrollment since 2015, which has reduced tuition revenue and decreased funding from the CSU system. Although some of the decline is due to demographic changes, CSU refused to replace President Judy Sakaki after she resigned in June 2022, instead assigning a succession of interim presidents (Mike Lee, Nathan Evans, and Emily Cutrer) and leaving the institution rudderless for three years. Recruiting, marketing, and program development stagnated as a result of the leadership instability.


The Elimination of Philosophy and the Firing of John Sullins


Eliminating a philosophy program in the very moment CSU is rolling out AI across the entire system is not just shortsighted—it’s catastrophic. And to fire someone like John Sullins, who has been a national figure in AI ethics, is to gut the system’s moral immune response at the precise moment it is most needed. This isn’t just another budget cut—it’s a structural abdication of responsibility. CSU is scaling up technologies that challenge the very nature of knowledge, agency, and personhood while simultaneously silencing the few people professionally trained to interrogate those challenges.


It’s hard to think of a more chilling metaphor for our time: the AI ethicist is fired, but the AI remains employed.


What SSU Should Do versus What SSU Is Doing


The pedagogical shift toward emphasizing process over product requires smaller class sizes, deeper engagement, and better-trained faculty. But SSU is doing the opposite:


  • Class sizes are increasing to compensate for fewer instructors.

  • Faculty are being laid off, including tenured professors, which undermines both morale and institutional memory.

  • Programs that teach critical thinking, ethics, and epistemology are being eliminated—precisely the fields students need if they’re to learn how to think with AI rather than be replaced by it.


There is a tragic contradiction here: CSU proclaims it is preparing students for the future, but in practice it is stripping away the very capacities—philosophical reasoning, moral reflection, epistemic humility—that make humans worth educating in an AI-saturated world.


The Broader Pattern: Self-Inflicted Collapse


Chronic mismanagement has produced a crisis, and now the crisis is being used to justify decisions that will only deepen the mismanagement. Rather than steering the ship toward moral and intellectual rigor, CSU is:


  • Outsourcing intellectual work to AI.

  • Eliminating the disciplines that help us think clearly and act wisely.

  • Shrinking human judgment to make room for digital process.


This isn’t just shortsighted; it borders on nihilism. If the university cannot stand for learning—real, human learning—then what is it standing for?


The real danger isn’t just the death of philosophy. It’s the death of the conditions that make philosophy possible: reflective inquiry, dialogic engagement, and the space to not yet know.


If AI is allowed to define the future of learning without those things, it won’t be because the technology was too powerful. It will be because the stewards of the university gave up the fight before it began.


[This article was almost entirely written by AI.]


