L’X gathers U7+ students around the topic of AI in Education
Once a year, the U7+ Student Forum brings together students from the U7+ alliance member universities worldwide, inviting them to exchange with several experts on a pressing topic of global concern. The fourth edition of the event, the U7+ Student Forum 2024, was themed “Artificial Intelligence in Higher Education”. It aimed to spark discussion of the thorny question “How can we maximize the benefits and mitigate the risks?” and to produce recommendations that help shape guidelines for the appropriate use of AI in higher education.
Deeply committed to the U7+ Alliance, École Polytechnique is a founding member of this international university network. After hosting the alliance’s student forum in 2023, l’X also organized this year’s edition, in conjunction with the U7+ Student & Alumni Network. The U7+ Student Forum 2024 brought together students from eighteen universities located in France, Canada, Italy, Japan, Germany, Ghana, Senegal, and the United States.
Workshops with AI experts
Structured in four sessions, the program featured lectures from experts in Artificial Intelligence (AI) from different partner universities, workshop sessions, and group discussions. The first three sessions each focused on a specific field and opened with a short lecture from an AI expert introducing the subject, followed by group discussions inviting the participants to explore the topic further and to address specific questions arising from the use of AI.
Alfonso Awadalla Carreño, a student in École Polytechnique’s Master in Data and Economics for Public Policy, had already participated in the NEXT Forum in Milan with the U7+ network and said he “knew how interesting and constructive such a forum can be. AI is a rapidly evolving topic, and the U7+ student forum offered a promising event to collect and discuss ideas from students with various backgrounds around the world”. Considering that “it could be a starting point for developing a more global framework and regulation of AI in higher education”, Alfonso Awadalla Carreño explains that “being able to contribute was important to me”.
The teams were assigned the tricky task of formulating recommendations on the use of AI in higher education. During the final session of the student forum, all five teams presented their conclusions to the experts, who shared their feedback.
AI expert and Professor at the Mohammed VI Polytechnic University in Morocco, Lamiae Azizi opened this year’s forum with a lecture that outlined important characteristics of AI and its applications. Lamiae Azizi, who develops AI and generative AI models for big data and complex systems, spoke about machine learning and the different tasks it can perform, such as classification, clustering, and regression, and explained that three different types of learning exist: supervised, unsupervised, and reinforcement learning.
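The distinction between these learning paradigms can be illustrated with a minimal sketch in plain Python (an illustrative example, not taken from the lecture): the same toy data is handled once with labels (supervised classification) and once without (unsupervised clustering).

```python
# Supervised learning: labelled examples train a nearest-centroid classifier.
labelled = [(1.0, "A"), (1.5, "A"), (8.0, "B"), (8.5, "B")]

def centroid(label):
    pts = [x for x, l in labelled if l == label]
    return sum(pts) / len(pts)

centroids = {l: centroid(l) for l in {"A", "B"}}

def classify(x):
    # Predict the label whose centroid is closest to x.
    return min(centroids, key=lambda l: abs(x - centroids[l]))

# Unsupervised learning: the same points, stripped of their labels,
# are split into two clusters at the largest gap between neighbours.
points = sorted(x for x, _ in labelled)
gaps = [(points[i + 1] - points[i], i) for i in range(len(points) - 1)]
split = max(gaps)[1]
clusters = (points[:split + 1], points[split + 1:])

print(classify(2.0))  # "A": closest to the centroid of the "A" examples
print(clusters)       # ([1.0, 1.5], [8.0, 8.5])
```

Reinforcement learning, the third paradigm, differs from both: instead of a fixed dataset, an agent learns from rewards received while interacting with an environment.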
Looking closer at the use of AI in education, she showed that its potential applications range from personalized learning assistants, automated grading, and predictive analytics that spot students at risk of dropping out, to automated lesson planning, intelligent content creation, virtual reality, and adaptive learning platforms.
As AI offers the possibility to design personalized learning experiences, receive timely feedback, automate tasks, and gain data-driven insights, its applications can certainly benefit both students and educators. AI could, for example, provide new “tools to make the inclusion of students with learning disabilities more effective in educational systems”, suggests Marième Diop Johnson, a PhD candidate in Educational Sciences at Cheikh Anta Diop University of Dakar, Senegal, who participated in the U7+ forum.
However, AI also harbors downsides, and Professor Azizi did not conceal that the use of AI in education carries risks as well. These risks stem, on the one hand, from AI’s limited contextual understanding and, on the other, from data privacy and security concerns, cost and implementation challenges, and the ethical considerations that must be taken into account when AI systems are developed and deployed.
Responsible implementation of AI in Higher Education
Eve Gaumond, a lawyer and PhD student at the University of Montréal affiliated with the CIFAR Chair on AI and Human Rights, gave a lecture that offered another valuable outlook on the responsible implementation of AI in Higher Education, which is the subject of her work.
Addressing the audience, Eve Gaumond raised the question of whether the real problem with AI in the academic environment is its use to further a vision of higher education focused on diplomas and certifications rather than on learning. She argues that “if we use AI to further another vision that is more focused on learning and being transformed by learning, maybe we can deal with the problems we are currently facing regarding the use of AI in higher education and even improve the quality of higher education”.
Explaining how the use of AI within the market-based approach risks leading to problems such as plagiarism, discrimination, and privacy violations, Eve Gaumond also described how a human-rights approach to higher education could prevent these issues from arising and serve as a compass to guide the use of AI to improve the quality of higher education.
A platform to voice students’ recommendations
During the U7+ forum’s fourth and final session, the students presented the recommendations on the best use of AI in education that they had developed in teams. “The thought-provoking and productive discussions with the team, which allowed us to develop policy recommendations for the use of AI in higher education, were one of the most rewarding aspects of the forum”, declared Scott Pardy, a student from McGill University, Canada. For him, “the most relevant recommendation to fully realize the benefits of AI, while minimizing the risks, is to focus on using the technology as a supplement rather than a replacement to critical thinking”.
The team in which Alfonso Awadalla Carreño participated shares this observation. “An interesting recommendation for the right use of AI in education could reside in redesigning exams and evaluations. If exams are redesigned in a way that allows for the use of AI, asking students to demonstrate their knowledge and skills to be evaluated, while using AI, an exam’s pertinence wouldn’t be weakened, but its focus shifted”, he summarizes.
Among the recommendations, the students also presented the call to establish clear and binding ethical guidelines. Furthermore, they recommend that universities offer elective or mandatory training in AI to enable students to gain “AI literacy”. More specifically, to prevent risks linked to AI, they suggest mandatory courses in AI ethics and responsibility for all data science-related programs. The students will submit their final recommendations to the university presidents of the U7+ Alliance.
Trustworthy and Responsible Artificial Intelligence
Profound ethical concerns arise not only from the large energy consumption of AI, and hence its contribution to climate change, but also from the potential of AI systems to embed biases and to threaten fundamental human rights.
The importance of designing Artificial Intelligence systems that are ethical, secure, and frugal in energy consumption was at the center of the lecture of AI expert Sonia Vanier, a professor at École Polytechnique’s Computer Science department. Sonia Vanier also holds the international Chair for Teaching and Research for Trustworthy and Responsible Artificial Intelligence at École Polytechnique, which rallies academic researchers and industrial experts to address the challenges of Artificial Intelligence and develop more reliable AI systems, aiming to realize the power of AI while preventing its risks.
Sonia Vanier's talk at the U7+ student forum gave the audience a better understanding of a number of AI issues. The high energy consumption of AI, which requires new frugal models, is one of the most pressing. Furthermore, Sonia Vanier also pointed out that current AI systems are designed to generate content without guaranteeing its reliability.
Indeed, the generative AI models in use do not guarantee fair, ethically sound, and secure results. Deep learning models operate as black boxes, offering neither explainability nor interpretability of their outputs.
It is crucial to integrate additional rules, reasoning, and verification mechanisms into AI systems to ensure explainability, safety, and fairness, while detecting and correcting biases induced by the training data. These additional mechanisms are indispensable for AI systems that must guarantee reliability and safety, such as self-driving cars and trains or health-related tools.
“It is crucial to develop new approaches to make the most of the power of AI while preventing the risks. AI is turning our world upside down and we are only at the beginning of these technological revolutions that we should not fear but understand”, stresses Sonia Vanier.