Professional discussion on a new kind of action literacy

As part of the UNESCO Global Media and Information Literacy Week 2023 programme series, our research institute organised a workshop on 25 October 2023 entitled "Human information literacy of artificial intelligence". The event addressed the challenge of media and information literacy from a societal perspective, in the context of rapidly expanding artificial intelligence-based services.

Bernát Török, director of the Institute of the Information Society, said in his opening speech that media literacy was already a focus of his work years ago, but at that time he approached it "only" from the legal side. In the early 2000s, traditional models prevailed and the prohibitive, restrictive role of law worked more successfully. While it remains important to clarify what restrictive legal instruments can achieve today, in the current media environment of internet platforms and artificial intelligence the law can deliver only limited results. Media literacy has become an essential issue in terms of societal risks and challenges, and law alone can no longer provide sufficient answers to these problems. Bernát Török pointed out that the workshop was deliberately organised during the UNESCO thematic week, as the organisation's publications take a fine-tuned approach that puts social awareness and literacy in the foreground rather than the regulatory logic of the European Union, which is in line with the spirit of the meeting.

Árpád Rab, senior researcher at IIS, began his keynote presentation by noting that although the development of artificial intelligence has been under way for a long time, the experience of it has spread through wider society only in the past year. For AI-based services to work well, business ethics, the regulatory role of law and the awareness of society are all needed. According to the researcher, the aim of artificial intelligence is to create technology-based collaborations with whose help, given the right amount of resources, we can maintain our current standard of living and save the Earth even with a population of 10 billion. Árpád Rab said that in the future we can count on more and more such technological cooperation: we will use these tools ever more instinctively, trust in them will grow, but so will our vulnerability, and people will need new skills. Media literacy will become increasingly important, as we will access critical services through mediatised activities. AI is an instinctive technology; people can learn to use it easily because it follows the logic of actions they already know. AI often only makes recommendations, but because life moves fast these recommendations frequently become micro-decisions, and so we can be manipulated. According to the researcher, the changes in digital culture run much deeper than what we currently measure, so we need to measure new things and develop new skills: for example, the ability to decide whether one needs to use AI at all, and how that AI will serve the person in the course of the cooperation. People need to be educated about the data used by the service that makes a recommendation to them, the context in which the data is processed, the variables used to make the recommendation and the validity of the recommendation. This is a new kind of literacy that the research institute is actively working on.

László Z. Karvalics joined the discussion online and gave a short presentation on the concept of information literacy. He said that many people criticise the current Hungarian translation of information literacy, "információs írástudás", and prefer the term "műveltség" instead. In deciding this question, it is important to clarify whether information literacy is considered an elementary-level skill or rather a higher-order concept that belongs to the domain of literacy in the sense of "műveltség". At the moment the term is better described as "műveltség", but given the growing role of AI there is a scenario in which – as we interact constantly with the digital world around us, with the machine side and the human side continuously shaping each other – it may in time again be counted as part of basic literacy ("alap írástudás"). In his presentation, László Z. Karvalics first described, using the systematisation of the researcher Ng, the skills that fall within the scope of AI literacy (knowing and understanding, using and applying, developing and evaluating, and being aware of ethical considerations), and then noted that the social science discourse is currently almost exclusively ethical; although ethical considerations are important, this narrative overwhelms the others. According to the speaker, there are several types of literacy today, and he highlighted some of them in his presentation: data literacy, visual literacy, critical media literacy, epistemic literacy, participatory literacy and futures literacy, all of which are interlinked. Another important component of AI literacy is anti-tabloidisation or detabloidisation preparedness: our knowledge of AI is currently distorted by social and other media platforms, and to develop an appropriate AI literacy we must first clear away these misinterpretations and narratives.

Katalin Fehér, associate professor at the UPS, presented a research project that examined AI research conducted worldwide over the past ten years which did not approach the field from a computer science or engineering perspective. Only highly ranked journals and their most cited articles were examined; a total of 607 articles were analysed in detail. The research found that the most intertwined and most discussed topic area is fake media (journalism, fake news, deepfakes and social media), which indicates that this has become one of the most important issues around AI worldwide in the last decade. The speaker pointed out that the research also showed that ethics is not the most dominant issue in the use of AI; most research relates instead to communication, information flow and media. According to Katalin Fehér, teachers in most schools currently have no real experience with generative AI. On the one hand there is media panic about its use, on the other a sense of expectation and hope about what new methodologies could be used to work with students. There are already good practices in the United States on how a teacher can act as facilitator or mentor in the elementary school classroom and how this can lead to effective outcomes in working together. There is a growing trend towards personalised and collaborative digital literacy and a need to teach students about questions of responsibility.

Levente Székely, director of the MCC's Youth Research Institute, pointed out in his opening remarks that the early adopters and innovators who are the first to take up AI technologies and their everyday applications are mostly young people. According to the speaker, the current situation resembles the turn of the millennium, when everyone was talking about the internet. At the moment AI is in a kind of beta operation (a "betaverse"): the concepts have not yet been defined precisely, yet use is already intensive, a period of both dangers and opportunities. According to the youth researcher, young people often use these AI tools in whatever ways they find most exciting, which has triggered moral panic several times. In his presentation the speaker also presented the results of a 2020 youth survey in which young people were asked about their fears. Respondents turned out to be least afraid of artificial intelligence and robots, and much more concerned about the future and their economic situation. Levente Székely emphasised that the level of frustration in society, and especially among young people, has increased over the past three years.

László Kun, a doctoral student at the UPS, discussed data use and data awareness in the broader public sector. The speaker outlined the reasons why the state may need to make better use of data. The collection, storage and use of information have always been essential to the functioning of the state, and with digitalisation new data processing technologies and methods have emerged. Another important reason is that the new needs and challenges facing the state, and the development of the service state, require new solutions. László Kun emphasised that technology is ahead of regulation and governance, and that policy makers are only now starting to address this. Several factors play a role in improving data management, such as the further development of state operations (e.g. raising digitisation to a higher level, utilising the results of previous projects, making better use of data assets), more resource-efficient operation, the dissemination of innovative solutions, and the security and sovereignty issues of connectivity and availability. The speaker presented several areas that could contribute to developing data use and data awareness, including data-driven projects, increasingly data-aware management, improving public analytical and forecasting capabilities, developing knowledge bases and knowledge centres, and developing and making available methodologies.

András Pünkösty, researcher at IIS, began his presentation, entitled "How data ethics can help build the right trust in AI solutions?", by defining the concept of data underlying artificial intelligence systems. A central issue in this definition is whether to start from the property-rights aspect of data, tied to business uses, or from the fundamental-rights aspects tied to the person. Further questions raised by the researcher included whether AI needs to be regulated and whether the EU is helping innovation with the draft AI Act. According to him, it is precisely in such cases, when we are confronted with a new phenomenon, that ethical discourse is relevant, and the ethical narrative becomes important for developing an appropriate set of concepts. In his view, a new digital ethics needs to be developed in which the protection of human dignity is valued and values and rights are not dictated by technology. With digitalisation and the digital economy, enforcement is confronted with situations that did not exist before, and a new kind of enforcement reality is emerging. The UK, the US and the EU take different approaches to what constitutes a "good AI society". According to Pünkösty, EU regulation is a step towards a mature information society: it manages risks and helps to build informed trust in society. The researcher identified business, applied and theoretical data ethics as the three layers of awareness and trust. Ethical data use rests on three aspects: the transparency of data processing, the quality of consent and the effectiveness of data protection.

Zsolt Ződi, a senior researcher at our research institute, gave a presentation on the legal aspects of AI literacy. The researcher drew attention to the similarity between law and AI awareness. AI awareness is about people understanding what happens to them when they interact with an AI. The law is likewise a complex system run by professionals, and ordinary people need help to understand it. Zsolt Ződi looked for an answer to the question of how the schemes and tools already used in law for this problem could be applied to AI literacy. Legal literacy can be improved in two ways: by making the law more understandable, for example through shorter sentences and less legal jargon, and by providing legal education. The same applies to AI: the explainability of AI (limited by the black-box phenomenon) and education about AI are the key tools for raising awareness. In this context Zsolt Ződi highlighted the legal design movement, the essence of which is to design the interfaces of legal content on the internet and in various applications so that they are understandable for everyone. According to Zsolt Ződi, the most important elements of AI ethics are transparency and explainability. Explainability means that people should know when they have come into contact with AI, be able to see how AI systems work (when AI enters a complex decision process, what data it uses and where it comes from, how representative that data is) and be able to interpret the outputs. The researcher pointed out that there are now many methods for explaining AI systems and that text-based methods are not always the most appropriate. There are visual methods, methods that list examples, methods that present critical data sets or decision parameters, and methods that describe the function of each layer of a neural network; which of these is the right one will be decided by the technology and the domain together.
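To give a concrete sense of the "visual" family of explanation methods mentioned above, the following minimal Python sketch plots which inputs most influence a model's decisions. It is an illustration only, not part of the presentation: the dataset, model and library choices (scikit-learn's permutation importance, matplotlib) are our own assumptions about one common way such an explanation can be produced.

```python
# Minimal sketch (illustrative only): a visual, non-text-based explanation of
# which input features drive a trained model's decisions.
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer          # example dataset (assumption)
from sklearn.ensemble import RandomForestClassifier      # example model (assumption)
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an example classifier on held-out data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Measure how much shuffling each feature degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Present the ten most influential decision parameters as a bar chart,
# i.e. a visual rather than textual explanation of the model's behaviour.
order = result.importances_mean.argsort()[-10:]
plt.barh(X.columns[order], result.importances_mean[order])
plt.xlabel("Mean decrease in accuracy when the feature is permuted")
plt.title("Which inputs drive the model's decisions?")
plt.tight_layout()
plt.show()
```

Whether such a chart, an example-based explanation or a layer-by-layer description is the right choice depends, as the speaker noted, on the technology and the domain together.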

Árpád Rab concluded the workshop by saying that this meeting was the first step on a long journey and that the institute will continue to work with the invited speakers. A valuable debate has been launched, with novel insights that should be followed up with primary research, a convergence of standpoints and societal recommendations.