This is a brief preview. The full version includes expanded text for all sections, a conclusion, and a formatted bibliography.
Author:
Group
Full Name
Advisor:
Prof. Dr. Name
The sudden proliferation of Large Language Models across American campuses has outpaced the development of robust institutional frameworks, creating a volatile educational environment. Schrag and Short (2025) identify significant potential for computational linguistics to enhance secondary literacy education, yet the actual deployment of these tools often occurs without sufficient pedagogical oversight. Basch and Hillyer (2025) report high levels of AI usage among college students for coursework, with adoption remaining largely informal and decentralized. Such a disconnect between student practice and faculty guidance exposes a critical vulnerability in the American credit-hour system.

The core challenge lies in reconciling the efficiency of generative tools with the necessity of cognitive struggle in the learning process. If AI handles the foundational synthesis of information, students may bypass the very mental exercises required to develop critical thinking. Meng and Luo (2024) observe that while the United States leads in technological innovation, its teaching strategies struggle to adapt as cohesively as centralized systems elsewhere. This lag introduces risks to academic integrity and threatens the traditional metrics used to evaluate student competency. Educators face a dilemma: banning these tools risks obsolescence, yet uncritical integration may erode the value of a degree.

This investigation analyzes the complex consequences of artificial intelligence technologies on the American educational landscape. The primary goal is to evaluate how generative AI influences pedagogical standards and student learning outcomes across diverse disciplines. Specifically, the following analysis examines the integration of these tools into current teaching practices and identifies the systemic risks inherent in AI-driven academic environments.
Special attention is directed toward student skill acquisition in STEM fields, where computational accuracy often competes with conceptual understanding. To address these tensions, the study proposes institutional policies designed to balance technological utility with rigorous academic standards. The object of study is the integration of artificial intelligence within the United States education system, while the subject focuses on the intersection of generative AI tools, pedagogical standards, and student learning outcomes.

Research by Ganguly and Johri (2025) regarding guidance issued by higher education institutions suggests that current policies are often reactive rather than proactive. By synthesizing bibliometric data from Afzaal and Xiao (2024) with qualitative assessments of emerging international trends (Nguyen & Trương, 2025), this coursework employs a comparative analytical methodology. This approach allows for a nuanced evaluation of how different educational tiers respond to the AI influx.

The subsequent chapters are organized to provide a logical progression from theoretical integration to practical policy solutions. Initial sections detail the current state of AI adoption in US classrooms, followed by an assessment of the risks to literacy and STEM proficiency. The final segments evaluate the political and regulatory landscape, drawing on Weaver's (2018) foundational work on AI regulation and Jian Li's (2025) analysis of the politics of AI empowerment in higher education. This structure ensures a comprehensive evaluation of the technological shifts currently redefining American scholarship.