Submitted by:
Group
Full name
Advisor:
Prof./Dr. Full name
The rapid expansion of the Fourth Industrial Revolution has catalyzed a profound restructuring of American pedagogical frameworks, moving beyond simple digitization toward a fundamental reconfiguration of instructional logic. Okunlola and Naicker (2025) observe that the convergence of post-pandemic digital acceleration and advanced computational capabilities has pushed educational institutions into an era of unprecedented technological reliance. Unlike previous instructional shifts, the current integration of Artificial Intelligence (AI) does not merely supplement existing methods; it redefines the fundamental relationship between learner, instructor, and institutional governance. This transformation manifests through the deployment of Large Language Models (LLMs) and adaptive learning algorithms that promise personalized instruction while simultaneously introducing systemic risks regarding data integrity and social equity. The speed of this adoption often outpaces the development of robust empirical evidence, leaving a vacuum where institutional policy should reside.

The current socio-technical landscape in the United States reflects a tension between the democratizing potential of AI and the reality of a stratified educational system. Ganguly and Johri (2025) highlight that higher education institutions are currently grappling with divergent guidance on Generative AI, creating a fragmented landscape in which student experiences vary with the wealth and technological readiness of their specific institution. This fragmentation suggests that while AI could theoretically bridge achievement gaps, it may instead widen them if access to high-quality, ethically grounded tools remains concentrated in elite tiers of the academy. Basch and Hillyer (2025) found that undergraduate students possess varying levels of knowledge and ethical perceptions regarding these tools, indicating that the human element of AI integration is as volatile as the technology itself.
The central problem addressed in this dissertation concerns the widening gap between the rapid deployment of AI technologies and the development of a coherent, equitable framework for their governance within United States educational systems. While the potential for enhanced healthcare education through LLMs has been documented (Ballard & Antigua-Made, 2025), the transferability of these successes to broader K-12 and undergraduate contexts remains unproven and fraught with ethical complexity. This study identifies a critical tension: the push for AI-driven pedagogical tools often ignores the underlying algorithmic bias that can disadvantage marginalized student populations. Chase (2020) demonstrated how systemic disparities are frequently codified into digital systems during periods of rapid change; without intervention, AI in education threatens to automate historical inequities under the guise of objective data.

A secondary dimension of this problem involves the financial and structural stratification of research and implementation. The concentration of AI research funding in a handful of top-tier universities creates an intellectual monopoly that dictates the direction of educational innovation for the entire country. This imbalance raises questions about whether AI tools developed in resource-rich environments can effectively serve the diverse needs of community colleges or rural school districts. The absence of comprehensive federal oversight exacerbates these issues, leaving individual districts and universities to navigate complex questions of data privacy and intellectual property in isolation.

To address these challenges, this research seeks to answer several guiding questions. How do AI-driven pedagogical tools specifically alter student learning outcomes and instructor autonomy across different socioeconomic strata in the United States? To what extent does the current distribution of AI research funding reinforce existing institutional hierarchies?
In what ways do algorithmic biases within educational software manifest in the assessment of minority students? Finally, what policy frameworks can effectively balance the need for innovation with the ethical requirement for transparency and equity? These questions provide a roadmap for investigating the systemic impact of AI, moving beyond anecdotal success stories toward a rigorous, data-driven analysis of institutional change.

The primary aim of this dissertation is to evaluate the transformative effects of artificial intelligence on educational quality, institutional equity, and teaching strategies within the United States. To fulfill this aim, the study pursues four specific objectives. First, it assesses the integration of AI-driven pedagogical tools in U.S. classrooms to determine their efficacy and impact on teacher-student dynamics. Second, it analyzes the stratification of AI research funding across different institutional tiers to map the geography of innovation. Third, the research examines the ethical implications of algorithmic bias and data privacy, specifically how these factors influence student retention and success. Finally, the study proposes policy frameworks designed to facilitate responsible AI implementation that prioritizes equity and human-centric pedagogy.

The object of study comprises the diverse educational systems within the United States, including K-12 public schools, private institutions, and the various tiers of higher education. This broad focus allows for a comparative analysis of how different institutional structures respond to technological pressure. The subject of study is the integration and impact of Artificial Intelligence technologies, encompassing generative models, predictive analytics, and automated administrative systems. By distinguishing between the systems themselves and the technologies being introduced, the research can isolate the variables that lead to successful or detrimental outcomes.
The scope of this research is delimited to the United States educational sector between 2020 and 2025, a period marked by the meteoric rise of LLMs and the subsequent institutional scramble to respond. While international trends offer valuable context, as noted by Cabanillas-García (2025), the unique decentralized nature of U.S. education necessitates a localized analysis. The study does not extend to corporate training or non-academic vocational programs unless they directly intersect with formal degree-granting pathways. Furthermore, while the technical architecture of AI is discussed, the focus remains on the social, ethical, and pedagogical consequences of its use rather than the development of new algorithms.

The theoretical significance of this work lies in its contribution to the burgeoning field of digital leadership and educational technology. By synthesizing bibliometric trends (Gencer & Gencer, 2025) with qualitative policy analysis, this study builds a multi-dimensional model for understanding how AI disrupts traditional power structures in the classroom. It challenges the "neutrality" of educational technology, arguing that AI tools are value-laden artifacts that reflect the priorities of their creators. Practically, the research provides a toolkit for administrators and policymakers who are currently operating without a standardized playbook. The proposed frameworks offer a middle path between total prohibition of AI and uncritical adoption, emphasizing the need for digital literacy among both faculty and students.

The methodology employed in this dissertation utilizes a mixed-methods approach to capture both the breadth and depth of AI's impact. Quantitative data are derived from a bibliometric analysis of research outputs and funding patterns, following the precedent set by Okunlola and Naicker (2025) to identify long-term trends in digital education leadership.
This is supplemented by a qualitative content analysis of institutional guidance documents from 50 major U.S. universities, building on the comparative work of Hristova (2025). By examining how institutions define "responsible use," the study uncovers the hidden philosophies driving AI adoption. Additionally, the research incorporates case studies of AI implementation in specialized fields, such as the use of AI for medical text simplification (Andalib & Spina, 2025), to illustrate how these technologies can either alleviate or exacerbate language-based disparities.

The structure of the dissertation follows a logical progression from historical context to future-oriented policy. The first chapter establishes the technological and social precursors to the current AI surge, linking the Fourth Industrial Revolution to the specific needs of the American classroom. The second chapter provides an exhaustive review of the literature, focusing on the tension between efficiency and equity. In the third chapter, the methodology is detailed, justifying the use of mixed methods for evaluating complex institutional shifts. The fourth chapter presents the findings regarding funding stratification and pedagogical integration, using data to show how resources are clustered in specific geographic and institutional hubs. The fifth chapter focuses exclusively on ethics, analyzing documented cases of algorithmic bias and the failure of current data privacy protections. The final chapter synthesizes these findings into a series of policy recommendations aimed at federal and state educational agencies.

The evidence suggests that the United States is at a pivotal juncture. As AI becomes embedded in the fabric of the educational experience, the decisions made by current administrators and legislators will determine whether these tools serve as an equalizer or a wedge.
This dissertation argues that without intentional, equity-focused intervention, the "AI revolution" will likely mirror the digital divides of the past. By providing a critical evaluation of the current landscape, this research offers a foundation for a more just and effective integration of technology in the American educational system. The following chapters detail the empirical reality of this transition, stripping away the marketing hyperbole of tech providers to reveal the actual impact on students, teachers, and the future of American democracy.