Developing strategic AI leadership for future-ready universities
Explore how universities can define ethical, forward-looking AI strategies that advance innovation while upholding academic integrity and purpose.
Leading in the age of AI
Artificial Intelligence (AI) is transforming all industries, and higher education is no exception. From accelerating research breakthroughs to augmenting personalized learning experiences, AI holds enormous potential to revolutionize the academic landscape. To harness AI effectively, universities must establish strategic leadership and governance frameworks that are flexible enough to foster innovation, yet robust enough to mitigate risks.
Today's academic leaders are tasked with balancing tradition and innovation, and AI adds yet another element to the scale — raising new questions such as:
How can we craft AI strategies that align with our institutional missions?
Who should be involved in shaping AI governance?
What safeguards are needed to ensure ethical and inclusive implementation?
This guide offers insights into strategic AI leadership, drawing on expert perspectives and emerging practices. It explores governance principles, the roles of key stakeholders, considerations for teaching and research, notable trends and inspiring global examples.
The academic landscape of AI adoption
Artificial intelligence (AI) has become a powerful and dynamic force at the heart of higher education. Today, its reach across universities is striking, and we can examine its impact through multiple lenses: students, faculty, researchers and institutional leadership.
Students
A global survey conducted in August 2024 by the Digital Education Council found that “86% of students already use AI in their studies, with more than half using AI on a weekly basis, at least.” ChatGPT, Grammarly and Microsoft Copilot are among the most frequently used platforms, supporting students with tasks from researching and drafting assignments to improving writing and grasping complex concepts (Digital Education Council, 2024).
Yet while student usage rates are rising, Inside Higher Ed’s Student Voice survey also highlights an ongoing gap in AI literacy and policy clarity. Although 51% of US students credit AI with helping them achieve better grades, a notable 31% are not sure when or how it is appropriate to use generative AI in their coursework (Inside Higher Ed, 2024). Students also expressed a clear desire for institutional support — calling for professional and ethical training, clearer guidance and integration of AI skills into majors and career pathways (Inside Higher Ed, 2024; Inside Higher Ed, 2025).
A 2025 study published in Computers and Education Open by Huang and Wu highlights how the growing use of generative AI can amplify students’ academic anxiety and performance pressures, sometimes leading to over-reliance that hampers independent learning and decision-making. Yet the researchers also found that students themselves recognize these risks, advocating for balanced, reflective use of AI as a tool for — rather than a substitute for — discovery and critical thought.
Dr. Elizabeth Reilley, who leads AI policy efforts at Arizona State University, echoed this need for education-first strategies during the THE US Digital Universities summit in June 2025:
“We try to address some of those issues... through education... really helping our students to be able to access and use these technologies effectively and responsibly, instead of focusing on that kind of surveillance.”
Her remarks reflect a growing consensus among academic leaders that empowering students through ethical AI literacy is more effective than punitive or restrictive approaches. This aligns with broader student feedback calling for transparency, skill-building and inclusive governance.
This sentiment is also reflected in a recent video from The Chronicle of Higher Education. It offers an additional look at how students are navigating generative AI — it features five undergraduates who share candid perspectives on its benefits, limitations and ethical implications (The Chronicle of Higher Education, 2025).
Faculty
Faculty adoption of AI is also steadily rising, with 61% reporting using it for teaching — though most do so on a limited scale (Digital Education Council, 2025). Confidence remains low, however: only 14% feel equipped to use AI effectively in the classroom, and many cite ongoing concerns around trust, ethics and training (Inside Higher Ed, 2024; Ithaka S+R, 2024).
The broader landscape is evolving quickly. A 2024 survey by Ellucian found that over 90% of higher education administrative leaders in the US and Canada expect AI to become central to university operations within two years, driven by trends like adaptive learning, administrative automation and AI-powered research (Ellucian, 2024). Yet other surveys show that faculty and students often feel underprepared to understand, work with and apply AI in an ethical and consistent way, calling for stronger training, clearer policies and improved digital literacy (Chegg, 2025; Inside Higher Ed, 2024). Academic integrity, bias and privacy remain top concerns — and most US provosts (80%) report their institutions still lack comprehensive AI policies, even as 92% of faculty request more guidance (Inside Higher Ed, 2024).
Faculty must be equipped with tools and policies that ensure AI enhances — not replaces — critical thinking and creativity. Transparent standards around data use and algorithmic bias are essential to maintain trust and academic integrity.
Researchers
AI is rapidly becoming a transformative tool in the research community. A global survey by Elsevier found that 94% of researchers believe AI will accelerate knowledge discovery, and 86% expect it to improve the overall quality of their work. While 37% have already used AI professionally, many remain cautious — calling for transparency, ethical safeguards and trusted content to guide responsible use. Concerns about misinformation, critical errors and weakened reasoning are widespread, with 81% of researchers worried AI could erode essential thinking skills (Elsevier, 2024).
As AI becomes more embedded in research workflows — from literature reviews to data analysis — calls for institutional leadership are growing. Researchers want clear policies, training and infrastructure to support responsible AI use. A proposed framework from Macquarie University and Queensland University of Technology in Australia emphasizes the need for principles-based governance rooted in transparency, accountability and research integrity (Journal of Higher Education Policy and Management, 2025).
Institutional leadership
Institutional leadership plays a pivotal role in shaping how AI is adopted across higher education. Elsevier’s 2024 Academic Transformation Survey reports that while many universities recognize AI’s transformational potential, only 34% of leaders report meaningful progress in integrating generative AI effectively and responsibly, and just 44% see high transformational potential in it (Elsevier, 2024). Resource constraints, knowledge gaps, regulatory uncertainty and institutional inertia all contribute to slow adoption.
Regional differences also reflect divergent approaches to AI governance and capacity-building. As shared in the survey: “the Americas and Europe place far greater priority and see far greater transformational potential in the effective and responsible integration and adoption of GenAI. For academic leaders in Asia Pacific, it is a noticeably less critical issue — in India, it is the objective with both the least progress made and least transformational potential ascribed. In China, out of 25 objectives, both AI-related objectives are prioritized the lowest overall.”
At the same time, governance frameworks are beginning to shape leadership agendas as much as technology itself. In Europe, the EU AI Act provides a harmonized framework that translates principles of responsible AI use into law across member states. Italy has taken a significant early step, introducing its National Strategy for Artificial Intelligence 2024–2026 — now under parliamentary discussion — which explicitly includes research and higher education as priority domains (Agenzia per l’Italia Digitale, 2024).
Leaders also face the challenge of aligning AI initiatives with core academic values — such as critical thinking and creativity, academic integrity, and equity and access — ensuring that innovation does not come at the expense of integrity or inclusivity. Institutions like Arizona State University, which have implemented clear AI strategies to enhance student support and course design, report significant improvements in retention rates and operational efficiency, demonstrating the tangible benefits of a well-defined approach to AI integration. The path forward requires thoughtful planning, cross-functional collaboration and a commitment to shared benefit for learners, educators, researchers and communities alike.
This leadership imperative was captured well by Eduardo Pedrosa, Executive Director of the APEC Secretariat (Philippines), at the APEC University Leaders Forum 2025 when he emphasized the strategic importance of AI in shaping regional transformation:
There is a clear consensus among APEC member economies, stakeholders and working groups that digital transformation with AI at its heart is a pivotal force reshaping our region’s economic and social landscape.
For university leaders, this underscores the urgency of aligning institutional strategies with regional priorities and preparing students for a rapidly evolving workforce.
Principles of AI strategy and policy governance in higher education institutions
Effective AI strategy and governance in higher education begin with clear guiding principles that align with institutional goals while upholding ethical and inclusive values. As Chris Day, Vice-Chancellor and President of Newcastle University, emphasizes, “institutions must prioritize transparency, equity and accountability in their AI initiatives.”
Watch this video to hear Chris Day, Vice-Chancellor and President of Newcastle University, on AI policy and its challenges at the 2024 THE World Academic Summit.
A recent whitepaper introduces the CRAFT framework — Culture, Rules, Access, Familiarity and Trust — as a strategic model for institutions seeking to move beyond reactive AI policies toward inclusive, values-driven transformation. It calls for universities to empower faculty, redesign assessment and engage students as co-creators of ethical AI-enhanced learning environments.
In both teaching and research, ethical AI use requires clear standards for data collection, algorithmic decision-making and bias mitigation. Inclusive governance committees can help ensure these standards reflect diverse academic needs.
To illustrate how universities around the world are translating strategic principles into practice, the following examples highlight diverse governance models, values frameworks and implementation approaches — offering insight into how institutions are shaping responsible AI policy.
Singapore Management University (SMU), Singapore — Centre for AI and Data Governance. Promotes adaptable, principle-driven governance for research and community-first AI governance, engaging communities in co-creating AI policy. Referenced in the ASEF White Paper.
These themes echo across recent interviews, articles and conferences, where experts consistently stress the importance of involving diverse stakeholders — students, educators and administrators — in shaping AI policies. By embedding these perspectives, institutions can develop AI strategies that not only drive innovation but also are open, transparent and inclusive of every facet of their community — fostering trust and confidence in the institution’s policies, priorities and direction.
Five pillars of AI strategy and policy in higher education institutions
1. Alignment with institutional goals
AI strategies must align with a university’s broader objectives — whether that means demonstrating research impact, fostering global collaboration, enhancing teaching outcomes or supporting student success. When thoughtfully integrated, AI can help institutions achieve these goals more efficiently: it can automate time-consuming tasks like scheduling, data management and student support, freeing up resources to focus on educational quality; enable personalized learning; accelerate research through large-scale data analysis; and strengthen collaboration across institutions. Initiatives like the UNU Global AI Network, for example, bring together universities, governments and civil society to co-create AI solutions that advance sustainable development.
2. Ethical guidelines and transparency
Any AI agent or tool — whether internally developed or externally sourced — must operate within clear and comprehensive ethical frameworks. These frameworks are essential to prevent bias in algorithms, foster trust and promote fairness. As AI becomes more embedded in decision-making processes, it is especially critical to address issues such as data bias, accountability and transparency to ensure equitable outcomes. The question of research integrity is equally pressing: as AI systems begin to generate new knowledge or discoveries independently, institutions must consider how to validate, attribute and ethically govern findings that no human fully understands — and what it means to trust results we cannot fully explain. By prioritizing ethical practices, universities can set a standard for responsible innovation, cultivating a culture of trust and inclusivity while advancing technological progress. This commitment must also extend to the classroom and to learning standards, where students are taught how to use AI responsibly and ethically.
3. Data privacy and security
AI systems rely heavily on data to function effectively — but without robust data governance, they introduce significant risks. Compliance with regulations such as the EU’s General Data Protection Regulation (GDPR) must be a top priority to protect sensitive student and faculty information. Insights gathered from global academic leaders emphasized that data privacy and security are perhaps the most critical elements of any AI policy, noting that the success of AI initiatives hinges on building trust with users — and that trust begins with safeguarding their data. Without strong protections, the risks of misuse, breaches or non-compliance can quickly undermine the benefits AI aims to deliver. A recurring theme was that the issue is less about drafting standalone AI policies and more about ensuring that existing standards for data privacy, usage, transparency and security — typically governed by IT and CTO offices — are rigorously upheld. In this view, a strong AI policy is only as effective as the institution’s underlying data governance framework.
4. Inclusivity, accessibility and research equity
AI has the potential to be a powerful equalizer in higher education, bridging gaps and creating opportunities for all learners. By investing in AI tools that assist students with disabilities — such as text-to-speech applications, personalized learning platforms, or tools that support those with visual or hearing impairments — educators can foster more inclusive learning environments. Additionally, AI can help create equitable access to resources by tailoring content to individual needs, addressing disparities in learning styles and breaking down barriers to education for underserved communities. As the World Economic Forum notes in its article How AI could improve accessibility in education and equality in schools, “AI has the potential to improve accessibility in education, ensuring all learners can benefit from the same opportunities.” Prioritizing these advancements ensures no student or stakeholder is left behind.

This vision of inclusive AI echoes globally. At the United Nations Security Council session on AI governance in New York (September 2025), Yejin Choi, Professor of Computer Science and Senior Fellow at Stanford University’s Institute for Human-Centered AI, emphasized:
Let us expand what intelligence can be — and let everyone everywhere have a role in building it.
Her remarks underscore the importance of democratizing AI development, ensuring that students and researchers from all backgrounds can shape and benefit from these technologies.
One example of the transformative power of AI lies in its ability to translate research into multiple languages, highlighting how this capability opens doors for researchers worldwide to access and contribute to global scholarship. This is especially critical in a field where English has long dominated academic publishing, often sidelining valuable insights from non-English-speaking communities.
As Theresa Mayer, PhD, Vice President for Research at Carnegie Mellon University, notes in AI for Science: A paradigm shift for scientific discovery and translation, “These experimental platforms will democratize access and participation across a broad and inclusive population, enhance interdisciplinary and nimble collaboration around the world and speed up translation from scientific discovery into practice.” This underscores how AI is not only accelerating research but also making it more globally inclusive.
5. Adaptability and continuous review
The rapid evolution of AI technologies requires governance frameworks that are not only robust but also adaptable to keep pace with advancements. Flexible frameworks allow institutions to respond effectively to emerging technologies and unforeseen challenges, ensuring that regulations remain relevant and impactful. Regular reviews are essential in this process, enabling policymakers and organizations to assess the implications of new AI developments, address potential risks and update guidelines as necessary.
For example, the European Union’s AI Act emphasizes ongoing evaluation to mitigate risks while fostering innovation. Similarly, the OECD AI Principles advocate for regular assessments to ensure AI systems align with ethical and human-centric values.
The emergence of agentic AI — systems capable of autonomous, goal-directed behavior — introduces new governance complexities. These agents can perform multistep tasks with minimal human oversight, such as negotiating contracts, conducting research or managing financial transactions. As highlighted in recent policy discussions, agentic AI challenges traditional accountability structures and raises urgent questions around liability, transparency and ethics. Anticipatory governance strategies, such as those outlined in the OECD’s Steering AI’s Future framework, stress the need for proactive, flexible oversight that can evolve alongside these technologies.
By building adaptability and review mechanisms into governance structures, institutions can better navigate the dynamic AI landscape while safeguarding both institutional and societal interests — particularly as technologies evolve rapidly and AI systems become more prevalent, from today’s agentic models to innovations we have yet to imagine.
Cultivating ethical AI practices in higher education
Leadership-driven governance is key to successful AI strategy. Whether through oversight committees, policy frameworks or institutional guidelines, universities must proactively manage the risks and opportunities AI brings. A strong governance structure ensures alignment with institutional goals, ethical implementation and the flexibility to adapt to emerging challenges.
Universities have a unique role in shaping the future of AI. By convening experts from fields such as computer science, ethics, law, sociology and anthropology, universities can foster inclusive approaches to AI design and application that reflect diverse perspectives and societal impacts.
It is also important for institutions to engage external stakeholders, including industry partners, policymakers and community organizations. This broader engagement supports a more holistic understanding of AI’s potential risks and benefits and helps align academic innovation with societal needs.
AI strategy in higher education requires collaboration across diverse stakeholders, but who should be included?
Engaging stakeholders for collective impact
Strategic AI leadership relies on contributions from multiple stakeholders. Building a cross-functional team can ensure that a university’s approach to AI is holistic and adaptable. Potential stakeholders in AI policy development include faculty experts in computer science, ethics and the humanities; researchers; data privacy officers; legal advisors; representatives from student organizations; and Chief Technology Officers (CTOs) or IT experts with deep knowledge of AI systems.
Additionally, involving policymakers and funders can help ensure that AI strategies align with regulatory frameworks and broader societal goals. Bringing in industry partners can also provide valuable real-world perspectives, ensuring that policies align with evolving technological trends and workforce needs. This collaborative approach helps universities create comprehensive strategies that address both opportunities and challenges in AI.
Faculty and researchers must be empowered to co-create AI policies that reflect both pedagogical and research priorities — especially as AI blurs traditional boundaries between disciplines.
Who should be involved and the value they bring
Developing a comprehensive and effective AI strategy in higher education requires collaboration across diverse stakeholders. Each group contributes unique insights, expertise and perspectives, ensuring that the approach is inclusive, balanced and aligned with institutional goals. Below, we outline the roles and value brought by key players in shaping AI strategy and governance.
Administrators and staff
Faculty
Researchers
Students
CTO and IT leadership
Policymakers
Industry partners
Funders
1. Administrators
University administrators play a critical role in overseeing the development and execution of AI strategies, ensuring alignment with institutional priorities, operational efficiency and regulatory compliance. In addition to university presidents and chancellors, this group also includes staff from the offices of research, faculty affairs, libraries and academic leadership such as deans and provosts.
As AI becomes more embedded in university operations — from admissions and advising to research and curriculum design — administrators are uniquely positioned to guide its responsible adoption and long-term sustainability.
Contributions:
Strategic oversight: administrators align AI initiatives with the university’s mission, vision and long-term strategic plans.
Resource allocation: they manage funding, staffing and infrastructure investments critical for implementing AI technologies.
Policy development: administrators establish governance frameworks and policies that promote ethical use, mitigate risks and ensure compliance with legal and accreditation standards.
Cross-functional coordination: they facilitate collaboration across departments, ensuring AI efforts are integrated and not siloed.
Risk management: administrators assess and address risks related to data privacy, cybersecurity, reputational impact and equity in AI deployment.
Key benefits:
Through their leadership and coordination, administrators ensure that AI efforts are scalable, sustainable and compliant with regulatory requirements. Their involvement helps universities balance innovation with accountability, fostering trust among students, faculty and external partners. Crucially, administrators ensure that AI strategies are aligned with institutional goals — advancing academic excellence, operational efficiency, equity and long-term strategic vision.
In an era of constrained budgets, AI’s potential to reduce costs and increase operational efficiency is particularly compelling — with potential gains across research offices, admissions, human resources and facilities operations. By streamlining workflows, automating routine processes and improving data-driven decision-making, AI can help institutions direct time and resources toward other high-value priorities.
University administrators ensure that AI strategies are aligned with institutional goals.
2. Faculty
Faculty members are pivotal in shaping AI initiatives, as both implementers and beneficiaries of these strategies. Their direct involvement ensures that AI is tailored to enhance teaching, research and administrative processes.
As AI tools increasingly shape education, research and institutional policy, faculty must remain central to decision-making processes. John Warner, a writing professor, author and longtime columnist for Inside Higher Ed, emphasized the urgency of faculty agency in his July 25, 2025 column: “Different institutions are adopting different stances and much of the adaptation is falling on faculty, in some cases with minimal guidance. While considering how these tools impact what's happening at the level of course and pedagogy is a necessity, it also seems clear that faculty concerned about preserving their own rights should be considering some of the institutional/structural issues.” His remarks underscore the importance of shared governance and proactive faculty involvement in shaping AI policy — not just to protect academic freedom, but to ensure that AI enhances rather than erodes educational quality.
Contributions:
Subject matter expertise: faculty bring a deep understanding of disciplinary and interdisciplinary knowledge across the sciences, social sciences and arts and humanities, allowing for the development of AI applications that meet academic needs.
Pedagogical insight: they identify how AI can support innovative teaching methods and personalized learning.
Ethical lens: faculty help evaluate the ethical implications of AI and ensure its use aligns with the values of academic rigor and integrity. They play a key role in ensuring that AI tools used in teaching uphold ethical standards — including transparency in data use, fairness in algorithmic grading and protection of student privacy.
Key benefits:
Faculty engagement ensures that AI strategies address genuine academic challenges while remaining grounded in educational priorities and ethical considerations. Their involvement ensures that AI enhances critical thinking and creativity, rather than replacing them — preserving the human-centric values of pedagogy.
Faculty involvement ensures that AI enhances rather than erodes educational quality.
3. Researchers
Researchers are central to the development and governance of AI in higher education, driving inquiry, innovation and ethical reflection. As creators, users and critics of AI technologies, they are uniquely positioned to shape these policies. Their influence is essential for ensuring that AI development and deployment align with ethical, social and academic values — particularly as institutions balance the dual imperatives of innovation and responsibility. Researchers also bring critical perspectives on transparency, reproducibility and the societal impact of AI.
Contributions:
Knowledge generation: researchers produce foundational and applied insights that shape institutional understanding of AI capabilities, limitations and implications.
Ethical leadership: through scholarship and advocacy, researchers help define norms around fairness, accountability and responsible AI use. They advocate for transparency in algorithmic processes and address biases in AI-generated insights.
Policy development: faculty researchers often serve on governance committees, contributing evidence-based recommendations for institutional AI policies. They also contribute to policies that safeguard sensitive research data, ensuring compliance with privacy regulations and maintaining public trust.
Cross-sector collaboration: researchers engage with industry, government and civil society to align academic inquiry with real-world challenges and regulatory contexts.
Key benefits:
Researchers ensure that AI policy is grounded in rigorous analysis, ethical foresight and academic integrity. Their contributions help institutions navigate complexity, anticipate unintended consequences and maintain credibility in a rapidly evolving technological landscape.
Because they understand firsthand how research is conducted — including the tools, workflows and challenges faced by teams that may be smaller or operating with constrained resources — researchers play a crucial role in shaping effective institutional AI strategies. Their participation helps ensure that the right tools, use cases and priorities are identified and supported, aligning governance frameworks with the practical realities of academic research.
In doing so, researchers strengthen institutional capacity to develop policies that are credible, relevant and responsive to both current and future needs across the research ecosystem.
Researchers bring critical perspectives on transparency, reproducibility and the societal impact of AI.
4. Students
Students, as primary users and beneficiaries of many AI-powered tools, offer essential insights into the design and implementation of AI systems. Their feedback helps ensure usability, relevance and equitable access to resources that enhance their learning experience.
Today’s university students — primarily members of Generation Z — have complex and evolving relationships with AI. While they are frequent users of Generative AI tools, they also express concerns about their long-term implications. A recent article, The AI Generation Gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers?, notes that “many [Gen Z students] also expressed worries about the larger impact of GenAI on the job market and society, with both students and teachers anxious or distressed about job losses or the potential of ‘humans being replaced’ in the future, as well as the undermining of academic degrees and integrity, privacy and transparency concerns and any threats GenAI may pose to society and human values should it come to develop its own, misaligned set of values” (Chan, 2023). These concerns underscore the importance of including student voices in AI policy discussions — particularly around job displacement, mistrust of faculty AI use for evaluating work, violations of intellectual property for human creators and data privacy.
The importance of preparing students for the ethical dimensions of AI was also expressed clearly by Jae Weon Choi, President of Pusan National University in South Korea, during the APEC University Leaders Forum 2025:
AI is reshaping the way we think, learn, live and govern, while also raising pressing ethical and social questions. In this context, the role of universities must go beyond research and education. We must also take responsibility for helping students develop ethical reasoning, a sense of community and global citizenship.
His remarks reinforce the idea that AI literacy must extend beyond technical skills to include ethical reflection, civic awareness and a commitment to inclusive values — especially as students navigate the societal implications of emerging technologies.
As future stewards of AI, students must be actively included in institutional policy and strategy forums — not only to ensure relevance and equity, but to shape a future that reflects their values, concerns and aspirations.
Contributions:
User-centric feedback: students offer real-world insights into how AI affects their learning and engagement, helping to improve tool design and functionality.
Innovative ideas: tech-savvy students often bring creative approaches to AI challenges through coursework, internships and research collaborations.
Awareness of inclusivity: by sharing diverse perspectives, students help shape AI strategies that are equitable and accessible to all.
Key benefits:
Students ensure that AI strategies remain user-focused while contributing to inclusive and innovative solutions that benefit diverse academic communities.
Their participation helps institutions understand how AI tools are actually used in learning environments — informing decisions about ethics, accessibility and digital literacy. By sharing firsthand perspectives on issues such as academic integrity, equity of access and responsible use, students help shape policies that are both practical and trusted by the campus community.
Involving students in policy development also promotes a culture of transparency and collaboration, ensuring that AI strategies support learning outcomes, protect student rights and build confidence in how technology is deployed across teaching and assessment.
Spotlight: Gen Z and AI
A growing body of articles, studies and insights is exploring how Generation Z perceives and engages with AI. As the next generation filling university classrooms and entering the workforce, their perspectives are increasingly important in shaping conversations about the future of technology and policy. While this page does not focus exclusively on Gen Z, their perspectives are a vital part of the broader conversation. For those interested in further reading, here are a few resources to explore:
A Robot Stole My Internship: Explores how AI is reshaping entry-level roles and internships, raising concerns about Gen Z’s access to early career opportunities.
How AI Is Changing — Not ‘Killing’ — College: Inside Higher Ed’s Student Voice survey reveals how students are using generative AI for learning, and how it is affecting their critical thinking.
5. CTOs and IT leaders
Chief technology officers (CTOs) and IT leaders are indispensable to effective AI strategy and governance in higher education. As architects and stewards of the institution’s digital backbone, they provide both the vision and the technical foundation essential for responsible AI innovation.
Contributions:
Digital infrastructure and security: CTOs and IT teams oversee the deployment and maintenance of secure, scalable infrastructure required to support AI initiatives. Their work ensures that AI tools are integrated smoothly and safely into university systems.
Strategic implementation: beyond technical operations, CTOs play a strategic role — translating institutional AI ambitions into practical, sustainable solutions. They advise on the adoption of new technologies, manage system interoperability and future-proof the institution’s digital environment to accommodate evolving AI capabilities.
Data governance and privacy: they establish and enforce policies that safeguard sensitive student, faculty and research data. Their leadership is central to upholding data privacy standards and regulatory compliance, building trust among all institutional stakeholders.
Policy development and risk management: CTOs and IT departments are trusted collaborators in shaping governance frameworks for AI. They help identify and mitigate risks — such as bias, misuse and cybersecurity threats — and ensure ethical guidelines keep pace with rapid technological advancement.
Change management and capacity building: IT leaders support campus-wide training and capacity-building efforts, equipping faculty, staff and students with the tools and understanding needed to use AI responsibly.
Key benefits:
As Chris Day, Vice-Chancellor and President at Newcastle University, articulated in the video Perspectives on higher education: AI and universities: challenges and opportunities, “a university’s ambitions for AI are only as strong as the digital foundation beneath them. CTOs and IT leaders are uniquely positioned to guide both strategic vision and practical implementation—translating big ideas into secure, sustainable solutions.” Insights gained from the THE US Digital Universities conference echo this, noting institutions advance with the most confidence “where IT teams are trusted collaborators in setting governance frameworks, ensuring data privacy and navigating the complexities of emerging technology with agility.”
By elevating CTOs and IT leaders as core participants — not simply technical support — universities strengthen both resilience and leadership in an AI-driven future. Their partnership is fundamental to establishing responsible, scalable and innovative AI strategies that serve the entire academic community.
CTOs and IT leaders are the architects and stewards of the institution’s digital backbone, providing the vision and technical foundation essential for responsible AI innovation.
6. Policymakers
Policymakers shape the broader regulatory, ethical and funding environment for AI adoption in higher education. Their role is not simply to ensure institutional compliance, but to promote the consistent application of AI-related frameworks, standards and protections across the higher education landscape within a given country or region.
This consistency is essential for safeguarding academic integrity, protecting student data and ensuring equitable access to AI tools and resources. It also helps universities navigate the fast-evolving AI landscape with clarity and confidence.
However, the role of policymakers varies significantly across national contexts. In some countries, governments take an active role in setting centralized AI strategies and mandates for higher education (e.g., the EU’s AI Act or Canada’s Pan-Canadian AI Strategy). In others, policymaking is more decentralized, with universities or regional bodies developing their own approaches within broader national or international guidelines. These differences shape how AI is governed, funded and integrated into academic life.
This need for consistent and inclusive governance was echoed at the United Nations Security Council session on AI governance, held in New York in September 2025. In his remarks, António Guterres, Secretary-General of the United Nations and former Prime Minister of Portugal, emphasized the importance of global collaboration:
Together, these initiatives aim to connect science, policy and practice; provide every country a seat at the table; and reduce fragmentation.
His statement reinforces the role of policymakers in creating enabling environments that not only support innovation but also ensure equitable access and representation across regions and institutions. For universities, this means aligning AI strategies with broader national and international frameworks — and advocating for policies that reflect the public interest.
Contributions:
Regulatory guidance: policymakers develop frameworks for data protection, privacy, transparency and accountability that guide institutional decision-making.
Funding opportunities: they enable access to government grants, public-private partnerships and national initiatives that support university innovation.
Public interest representation: policymakers advocate for the ethical implications of AI, encouraging universities to address societal challenges such as equity, misinformation and labor market disruption.
Standardization across institutions: they help ensure that AI policies are applied consistently across universities, reducing fragmentation and promoting interoperability.
Key benefits:
Policymakers can create an enabling environment for responsible AI innovation in higher education. By promoting consistency across institutions and aligning governance frameworks with national and global standards, they help universities balance autonomy with accountability — tailored to the unique political, cultural and regulatory contexts of each country.
Policymakers play a role in creating enabling environments that not only support innovation but also ensure equitable access and representation across regions and institutions.
7. Industry partners
Collaboration with industry partners enriches AI initiatives by fostering innovation, providing practical insights and equipping universities to address rapidly evolving technological trends. Their involvement in AI policy brings valuable real-world perspectives on implementation, risk management and ethical considerations, helping institutions shape frameworks that are both principled and practical.
Contributions:
Technological expertise: industry partners contribute state-of-the-art solutions and insights into emerging AI trends, capabilities and use cases.
Joint research initiatives: companies often co-develop research projects that integrate academic inquiry with real-world applications.
Skill development: through internships, mentorships and training programs, industry partners help students and faculty build practical AI competencies.
Policy collaboration: industry stakeholders offer critical input on governance models, data standards and responsible innovation, enriching institutional approaches to AI policy.
Key benefits:
Industry involvement bridges academic research and applied innovation, enhancing institutional relevance, accelerating technology transfer and opening pathways for funding, scalable solutions and long-term collaboration.
Industry partners provide real-world insights on implementation, risk management and ethical issues, helping to develop principled and practical frameworks.
Examples of global university–industry partnerships in AI policy and development
| University | Industry partner(s) | Country/region | Focus areas | Impact |
| --- | --- | --- | --- | --- |
| University of Cambridge | Google DeepMind | UK | Ethical AI, human-centered AI | Funded CHIA research center; supports PhDs from underrepresented groups |
| University of Edinburgh | Eisai, Gates Ventures, LifeArc, NatWest, BBC | UK | Healthcare AI, responsible AI, banking innovation | Multiple hubs including NEURii and BRAID; integrating AI into healthcare and finance |
| University of Florida | NVIDIA | USA | AI infrastructure, education | Created the first AI university in the US; expanded computing and research capacity |
| Université Paris-Saclay | Mistral AI and EdTech France | France | Generative AI in education and research | Improved the student learning experience; facilitates the work of teaching and administrative staff |
| Cardiff University | IQE plc | UK | Semiconductor AI applications | Developed a translational research facility for compound semiconductors |
| University of Tehran | Various Iranian tech firms | Iran | AI governance, Industry 4.0 | Joint committees and shared platforms for AI education and policy |
| Lund University | Swedish tech companies | Sweden | AI innovation and research | Research collaboration driven by access |
| University of Edinburgh | Aberdeen Group | UK | Generative AI in finance | Developed AI tools for investment research and sustainability |
| Asia Pacific University of Technology & Innovation (APU) | Vero AI, TusStar Malaysia | Malaysia | Digital economy, AI innovation | MOU to foster AI collaboration, innovation and regional policy development in Southeast Asia |
| Asia Pacific University of Technology & Innovation (APU) | — | Malaysia | AI policy, education equity, workforce development | Co-hosted APEC University Leaders Forum; advanced regional dialogue on AI’s societal and educational impact |
| Thailand Meteorological Department (in partnership with universities) | Huawei | Thailand | AI for climate and weather forecasting | Piloted Huawei’s Pangu-Weather model; improved forecast speed and accuracy; supports AI policy in public services |
8. Funders
Engagement with funders strengthens AI initiatives by aligning institutional goals with broader societal, economic and ethical priorities. Funders bring strategic perspectives on impact, accountability and long-term sustainability, helping shape AI policy frameworks that are both mission-driven and future-oriented.
Contributions:
Strategic direction: funders help define priorities for AI research and implementation, often emphasizing equity, ethics and public benefit.
Policy influence: through grant conditions and program design, funders encourage responsible governance practices and transparent evaluation metrics.
Capacity building: funding supports infrastructure, interdisciplinary collaboration and workforce development essential for AI readiness.
Accountability mechanisms: funders promote rigorous standards for data use, research integrity and outcome measurement, reinforcing institutional credibility.
Key benefits:
Funders play a critical role in shaping the scope and integrity of AI initiatives, ensuring that university efforts are aligned with public interest and long-term impact. Their involvement fosters a culture of responsibility, innovation and strategic foresight across academic and operational domains.
Funders shape the scope and integrity of AI initiatives, ensuring efforts are aligned with public interest and have long-term impact.
Featured Insight
Kathryn Magnay, Director of Research Infrastructure at UK Research and Innovation (UKRI), emphasized the importance of funders in shaping national AI strategies through inclusive consultation and cross-sector collaboration:
We worked across all research councils and consulted widely with the community to define a shared vision for AI — one that supports development and deployment across the research landscape and beyond.
Her remarks reflect how funders like UKRI are not only financing innovation but actively guiding its ethical and strategic direction — ensuring that AI development aligns with public interest and academic integrity.
In this short clip, Magnay shares her perspectives on AI.
The power of collaboration
No single group can build a fully effective or future-ready AI strategy alone. The most meaningful outcomes emerge when faculty, students, researchers, administrators, policymakers, industry partners, funders and especially CTOs and IT teams work together.
Each group’s unique perspective fosters innovation, reinforces ethical practices and grounds AI strategies in academic excellence and societal benefit. Through collaboration across all roles, universities build the capacity to both pioneer and steward AI responsibly.
Key roles and responsibilities in AI strategy
| Stakeholder | Key contributions | Unique perspective and purpose |
| --- | --- | --- |
| **On-campus stakeholders** | | |
| Administrators and staff | Long-term strategy | Foster institutional sustainability |
| Faculty | Innovation in pedagogy and research | Discipline-appropriate AI development |
| Researchers | Shape institutional understanding of AI capabilities, limitations and implications | Ground AI policy through rigorous analysis and ethical foresight |
| Students | Usability feedback | Ground strategies in real learner needs |
| CTOs and IT | Technical solutions | Operationalize AI securely and at scale |
| **Off-campus stakeholders** | | |
| Policymakers | Regulation and funding | Enable compliant and supported innovation |
| Industry | Real-world insights | Workforce alignment and applied innovation |
| Funders | Funding and impact | Establish funding mechanisms and outcome measurement standards |
Examples of universities driving innovation
Global universities are playing a pivotal role in advancing AI research and fostering innovative collaborations. Around the world, institutions are experimenting with emerging technologies while embedding principles of inclusivity and ethics to ensure that progress benefits all learners and communities. The following examples highlight how some universities are approaching AI strategy and governance — offering insights into diverse policies, priorities and practices across higher education. These are a handful of examples, not an exhaustive list, but they reflect the breadth of activity and innovation taking place globally.
| University | Examples of AI contributions or focus | AI strategy or policy | Links to AI policies, positions or insights |
| --- | --- | --- | --- |
| Arizona State University | AI Innovation Challenge, OpenAI partnership, AI in education and research | Principled Innovation Framework, Ethics Committee, AI Acceleration and Digital Trust Guideline | — |
Conclusion: Leading with purpose in an AI-driven future
As artificial intelligence reshapes the contours of higher education, universities face a defining moment in shaping how AI transforms learning, research and institutional values. The choices made today — about governance, ethics, collaboration and inclusion — will define not only how institutions adapt, but how they lead.
Strategic AI leadership is not a technical challenge alone; it is a human one. It demands foresight, humility and a commitment to shared values. By engaging diverse stakeholders, investing in ethical frameworks and aligning AI initiatives with institutional missions, universities can ensure that innovation serves the public good.
The path forward is not linear, nor is it uniform. But it is collaborative. Institutions that embrace adaptability, transparency and interdisciplinary dialogue will be best positioned to navigate complexity and shape a future where AI enhances, rather than replaces, the core values of academia.
Whether in the classroom or the lab, AI must be governed in ways that preserve academic integrity, foster inclusivity and empower human creativity. Strategic leadership ensures that AI serves as a collaborative partner — not a substitute — in advancing higher education.
As Chris Day reminds us, “AI should empower education without compromising the principles that make it equitable and trustworthy.” That is the challenge — and the opportunity — for every university leader today.
Elsevier has been at the forefront of developing trusted AI tools for higher education — helping institutions navigate the dawning of the digital age with integrity and purpose. To explore how we’re building confidence in AI across research and academia, visit our feature on AI Trust in Higher Education.
Want to stay informed? Sign up for our newsletter to receive the latest insights, strategies and global perspectives on responsible AI in higher education.