This is a short preview. The full version contains extended text for all sections, a conclusion, and a formatted bibliography.
Submitted by:
Group
First Name Last Name
Supervisor:
Prof. Dr. First Name Last Name
The rapid proliferation of generative artificial intelligence in American classrooms has outpaced the development of comprehensive institutional frameworks. Where traditional pedagogical models relied on students' manual synthesis of sources, the arrival of large language models such as ChatGPT and Claude has redefined the boundaries of student agency. Grassini (2023) argues that this technological infusion presents both transformative potential and significant risks for standardized educational settings. Unlike previous digital transitions, the AI era forces a fundamental reassessment of how knowledge is acquired and validated in an environment where machine-generated content is often indistinguishable from human output.

Educational institutions currently face a crisis of verification. As students increasingly use tools such as GitHub Copilot in technical disciplines (Avramovic & Avramovic, 2024) or LLMs in the humanities, the traditional essay or coding assignment loses its diagnostic utility. This erosion of academic integrity is not merely a technical glitch but a systemic challenge to the philosophical foundations of American education. Crawford and Cowling (2023) suggest that without decisive leadership focused on character and revised assessment, the integration of AI may inadvertently undermine the very learning outcomes it seeks to enhance. Nguyen and Trương (2025) identify emerging themes in generative AI that point toward personalized learning environments, yet these benefits remain tethered to the risks of algorithmic bias and cognitive over-reliance.

Addressing these complexities requires a multi-stage approach aimed at analyzing the impact of artificial intelligence on educational norms and institutional policy in the United States. The initial phase of this study examines the evolution of AI-driven educational tools, tracing their trajectory from simple automated tutors to sophisticated generative agents (Schrag & Short, 2025).
Subsequent analysis compares the performance and reliability of current LLMs, such as Bard, Bing, and ChatGPT, to determine their specific pedagogical utility (Rudolph & Tan, 2023). The final component identifies practical strategies for integrating AI while safeguarding academic integrity against sophisticated forms of automated plagiarism.

The object of this investigation is the diverse array of artificial intelligence technologies currently deployed across the U.S. education sector. Within this broad scope, the subject is specifically the influence of generative AI on student learning outcomes and evolving assessment practices. This distinction allows for a targeted evaluation of how particular tools, ranging from literacy-focused computational linguistics (Schrag & Short, 2025) to legal education aids (Li, 2025), alter the cognitive demands placed on learners.

Methodologically, this coursework employs a qualitative synthesis of current literature and of policy guidance issued by American higher education institutions (Ganguly & Johri, 2025). By triangulating international trends with domestic data (Cabanillas-Garcia, 2025), the study offers a nuanced perspective on the American educational landscape. This approach draws on comparative frameworks, such as those used by Meng and Luo (2024) to evaluate teaching strategies across different geopolitical contexts.

The narrative begins with a historical overview of AI integration, followed by a comparative analysis of model performance. Subsequent sections address the ethical implications for institutional policy and conclude with a framework for sustainable pedagogical adaptation.