What do initiatives such as personalized and adaptive learning, chatbots for education, automatic translators or the use of predictive learning analytics have in common? All of them are components of a ‘data-driven education’.
In many countries, there is a clear interest in expanding the role of digital technologies in education, which is inevitably leading toward more data-intensive educational systems. With the growing interest in adaptive intelligent tutoring systems offering natural language interaction, tools for predicting school dropout, and new automated systems to boost student recruitment, it is likely that the importance of data-intensive technologies for education will increase in the years to come.
Although such digital innovations can bring new benefits, it is also important to understand that they could transform the current landscape of education in unexpected directions. The loss of, unauthorized access to, or disclosure of personal information has gained media attention recently, but lack of transparency, automated bias, and the use of data to influence user behavior are also very important challenges that need to be weighed when exploring these trends.
The changing landscape of education will require not only that students and teachers become more data literate, but also that education organizations and administrators develop a (more) proactive and comprehensive strategy when planning for, implementing, and increasingly interacting with data-intensive education systems.
With (advanced) intelligent systems (for instance, those capable of identifying patterns or recognizing voices, faces, images, texts or even keystrokes), there is going to be a greater need for education in algorithm literacy. This will mean not only expanding some of the current definitions of digital literacy — including those related to the use of artificial intelligence (AI) — but also developing new institutional capacities, supporting educators and administrators in adopting these tools in safe, ethical, and transparent ways.
The growing relevance of data-intensive systems opens new challenges (and questions) that are expected to play a critical role during the coming decade. Here are some of the questions that will be important to systematically consider and answer within (and outside) educational institutions as countries adopt tools that enable more data-driven educational practices:
Privacy and data protection: Who has my data? Is the data secure? What data is held where, and who has access to it? Who is tracking me? What are my rights? How to protect my privacy? Where to get related help?
Ethical use of data: What are the risks of relying on automated systems? How to embrace technological solutions for education without ignoring ethical implications? In what processes and circumstances is it appropriate to use data-intensive (AI) systems?
Data accountability: What assessment has been made of the ethical use of data? Has the data been captured with the knowledge and consent of all the parties involved? If personal data previously collected is intended to be used for a new purpose, what should be done? What quality control mechanisms need to be in place and implemented to use the best possible data?
Algorithmic literacy: What positive and negative impacts could the use of AI in education have on people? How to critically assess outcomes from the use of AI systems? To what extent should current frameworks of digital literacy address a deeper understanding of the ethical and social implications of big data?
Agency and responsibility: How to prepare students and educators to protect themselves from unintended uses of technology? Can end-users be more actively involved in the design or application of data-intensive tools for education?
Bias awareness: How to minimize the impact of bias on certain users or groups? What datasets was the algorithm trained on, and what are their limitations and potential biases?
Transparency: How are student data collected, analyzed and used? How to overcome the ‘black box problem’, when an algorithm’s complexity is inscrutable even to its developers? What are the best practices to keep a transparent data policy? How to keep the data clear, consistent, and understandable?
Explainability: What does it mean to open AI’s ‘black box’? How to make related terms and conditions more user-friendly? (Here is an interesting example of simplified terms and conditions of different social media platforms.)
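The bias-awareness questions above — which datasets an algorithm was trained on, and how representative they are — can be made concrete with a small check. The sketch below is purely illustrative: the records, field names, and groupings are hypothetical, not drawn from any real education dataset, but the idea of auditing group representation before training a predictive model applies generally.

```python
from collections import Counter

def group_representation(records, group_key):
    """Return each group's share of the dataset, to spot under-representation
    before the data is used to train a predictive model."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training records for a dropout-prediction tool.
records = [
    {"region": "urban", "dropped_out": False},
    {"region": "urban", "dropped_out": False},
    {"region": "urban", "dropped_out": True},
    {"region": "rural", "dropped_out": True},
]

shares = group_representation(records, "region")
print(shares)  # rural students make up only a quarter of the training data
```

A model trained on such skewed data may perform far worse for the under-represented group, which is exactly the kind of limitation the bias-awareness questions ask institutions to surface.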
There is little doubt that there is a growing need for amplifying and diversifying existing conceptions of what it means to be ‘literate’ in a digital age. As new frameworks are elaborated to enable higher levels of transparency and accountability, people and institutions will need to understand these challenges and educate themselves on both opportunities as well as the societal impacts of these innovations.
According to a recent report from UNESCO on this topic, there are at least six major challenges:
- Developing a comprehensive view of public policy on AI for sustainable development;
- Ensuring inclusion and equity for AI in education;
- Preparing teachers for an AI-powered education;
- Developing quality and inclusive data systems;
- Enhancing research on AI in education; and
- Dealing with ethics and transparency in data collection, use, and dissemination.
The growing volume of data being collected within an education system could offer richer, more sophisticated overviews of how students are learning, and provide useful insights on how to better support them with the use of technology. However, many fundamental questions remain about the potential long-term consequences of tracking and profiling today’s students.
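To illustrate what a predictive learning analytics tool of the kind mentioned above might do, here is a deliberately simplified sketch of a dropout-risk score. The logistic form is a common modeling choice, but the input variables and weights here are invented for illustration; a real system would learn them from (carefully audited) data.

```python
import math

def dropout_risk(attendance_rate, avg_grade):
    """Toy logistic risk score: lower attendance and lower grades
    push the score toward 1 (higher risk). Weights are illustrative
    assumptions, not taken from any real system."""
    z = 4.0 - 5.0 * attendance_rate - 0.04 * avg_grade
    return 1.0 / (1.0 + math.exp(-z))

# A student attending 40% of classes with a grade average of 40/100
# scores much higher than one attending 95% with an average of 85/100.
at_risk = dropout_risk(0.40, 40)
not_at_risk = dropout_risk(0.95, 85)
```

Even this toy example raises the questions discussed throughout the post: which students end up flagged, on what data the weights were fitted, and whether those affected can understand and contest the score.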
The availability of good data can help lead to good decisions. This is true in education, as it is in other sectors. But the opposite can also happen if the right actions are not taken. If we are entering the ‘datafication of education’, countries will need to define rules and guidelines to ensure that present and future technology-enhanced education is beneficial, reducing and mitigating risks along the way. Although it is much too early to predict the potential impact of the use of AI in education, it is not too early to discuss how to better prepare for the world that is coming.
Here is a selection of relevant initiatives and sources that can be of help for those who would like to learn more about this topic:
- Artificial Intelligence (AI) and Education (Congressional Research Service, 2018)
- Global guidelines for Ethics in Learning Analytics (International Council for Open and Distance Education, 2019)
- Memorandum on Artificial Intelligence and Child Rights (UNICEF Innovation, Human Rights Center, UC Berkeley, 2019)
- Data Ethics Decision Aid and toolkit (Utrecht University, 2017)
- ETICO platform for targeting ethic issues in education (UNESCO-IIEP, undated)
- Review of the online learning and Artificial Intelligence education market (British Department for Education, 2018)
- A basic introduction to AI (University of Helsinki and Reaktor, 2018)
You may be interested in the following related posts on the EduTech blog:
- What are developing countries doing to help keep kids safe online?
- On-line safety for students in developing countries
- Who owns the content and data produced in schools?
Note: The image used at the top of this blog post comes from Christa Dodoo on Unsplash. All photos published on Unsplash can be used for free.