
The pervasive influence of social media platforms has undeniably transformed the landscape of contemporary discourse. While these platforms facilitate open discussions on diverse topics, they simultaneously present significant challenges associated with misinformation and the manipulation of public opinion.
With the advent of generative Artificial Intelligence (AI) and Large Language Models (LLMs), the dissemination of misinformation has become increasingly sophisticated, threatening the integrity of online discourse and eroding trust in institutions and among communities. It is crucial for policymakers, academics, and industry leaders to collectively address these issues. The goal of the summer school on “Artificial Intelligence for a Secure Society” is to empower individuals with the means to foster more secure communities, both online and offline.
By gathering experts from diverse fields, the program will tackle pressing challenges through the lens of AI. Participants will learn how artificial intelligence can be used to identify, analyze, mitigate, and counteract various threats, including misinformation, disinformation, fake news, orchestrated online campaigns, and information warfare.
Download the programme
Topics covered will include:
- Data Science
- Artificial Intelligence
- Cybersecurity
- Social Network Analysis
- Network Science
- Generative AI
- Trustworthy AI
- Explainable AI
- Neural Networks
- Machine/Deep Learning
Features of the Summer School:
- Keynotes
- Interactive lectures
- Hands-on group projects
- Networking opportunities with experts and peers
General Chairs
- Carmela Comito (ICAR-CNR)
- Maurizio Tesconi (IIT-CNR)
- Giuseppe Manco (ICAR-CNR)
Organizing Committee
- Angelica Liguori (ICAR-CNR)
- Daniela Gallo (ICAR-CNR & University of Salento)
- Fabrizio Durante (University of Salento)
Topic 1: Artificial Intelligence: opportunities and risks
Abstract:
The first part of the talk will cover Artificial Intelligence areas such as reasoning, planning, perception, machine learning, and generative AI. It will then focus on descriptive, predictive, prescriptive, and generative models, showing examples of their use.
The second part of the talk will focus on the features an AI system should exhibit to be considered trustworthy and aligned with our values. Opportunities and risks will be highlighted.
Hours: 1.15
Speaker: Michela Milano
Speaker Bio:
Michela Milano is a Full Professor at the University of Bologna, where she also serves as the Director of the ALMA-AI Strategic Center on Human-Centered Artificial Intelligence and the Director of the Digital Societies Center at the Fondazione Bruno Kessler in Trento. She has published over 180 papers in international conferences and journals. Michela Milano has served as Vice President of EurAI and was part of the expert group on Artificial Intelligence appointed by the Italian Government to develop a national AI strategy in 2019 and 2022. She is also a member of the national delegation to the Program Committee of Horizon Europe’s Cluster 4. She is a EurAI Fellow, an ELLIS Fellow, and a member of the Academy of Sciences and the Academy of Engineering and Technology.
Topic 2: Emergent social conventions and collective bias in LLM populations
Abstract:
Social conventions are the foundation of social coordination, shaping how individuals come together to form a society. In this talk, I will present theoretical and experimental findings that demonstrate the spontaneous emergence of social norms in human groups, as well as the existence of tipping points in social conventions. I will then explore the case of populations of large language models (LLMs). As AI agents increasingly communicate using natural language, understanding how they develop conventions is crucial for interpreting and managing their collective behaviour. I will show that LLM populations can establish social conventions and highlight how collective biases can emerge even when individual agents appear unbiased. The ability of AI agents to develop norms without explicit programming has significant implications for designing AI systems that align with human values and societal goals.
Hours: 1.15
Speaker: Andrea Baronchelli
Speaker Bio:
Andrea Baronchelli is Professor of Complexity Science at City St George’s, University of London. His research on human dynamics in decentralised socio-technical systems — including topics such as social norms, (mis)information spreading, polarisation in social networks, blockchain ecosystems, and dark markets — has been published in high-impact journals including Science, PNAS, and Nature Human Behaviour, and has been widely covered by the press.
Topic 3: The Language of Collective Action: Building Solutions in the Digital Era
Abstract:
Global challenges like climate change, mass migration, and pandemics demand more than top-down institutional responses. Grassroots, behavior-driven solutions are essential, and the online world offers a unique space to foster them. Language plays a pivotal role in shaping social interactions, and digital platforms enable large-scale, rapid mobilization for collective action. To harness this potential, we must first identify the linguistic markers that signal agreement, cooperation, and collective intent. By analyzing textual traces, we can measure engagement in collective action and understand the mechanisms driving it. Building on these insights, we can then design interventions that enhance participation. Strategic discourse framing has been shown to influence user alignment and encourage collective behavior. Additionally, AI-driven persuasive messaging can further amplify mobilization efforts. This talk explores how computational methods can be used to detect and shape the language of collective action, bridging insights from linguistics, social science, and AI. By understanding and leveraging the dynamics of online discourse, we can develop more effective strategies for addressing pressing societal dilemmas.
Hours: 1.15
Speaker: Luca Maria Aiello
Speaker Bio:
Luca Maria Aiello is Professor of Data Science at the IT University of Copenhagen, where he is a member of the NEtwoRks, Data and Society group (NERDS). Previously, he worked for 10 years as a Research Scientist at Yahoo Labs and Nokia Bell Labs. He conducts research in Computational Social Science, with a special focus on understanding cooperation processes through the study of conversational language using advanced Natural Language Processing tools. He has been awarded a 3-year grant from the Carlsberg Foundation to investigate emergent processes of climate action in online social media. His work has been covered by hundreds of articles published by news outlets worldwide, including Wired, WSJ, and BBC. He was once interviewed by Captain James T. Kirk.
Topic 4: Discover SoBigData: the European Research Infrastructure for social mining
Abstract:
SoBigData Research Infrastructure (RI) offers a sustainable, open, and fully compliant platform for discovering, sharing, and analyzing large-scale social data. In this session, you’ll get an overview of SoBigData RI as a powerful resource for reproducible AI and social-mining research. We’ll explore how it enables collaborative workflows, brings together diverse data sources across Europe, and upholds ethical AI practices. To illustrate its impact, we’ll showcase recent community-driven projects and tools developed within the SoBigData ecosystem.
Hours: 1.15
Speaker: Roberto Trasarti
Speaker Bio:
Roberto Trasarti is a Senior Researcher at the Institute of Information Science and Technologies (ISTI) of the Italian National Research Council (CNR), where he specializes in artificial intelligence, data mining, research infrastructures, and the analysis of large-scale mobility data. In his role as Coordinator of the SoBigData Research Infrastructure (www.sobigdata.eu), he leads multidisciplinary efforts to develop and deploy advanced tools and methodologies for the collection, integration, and exploitation of social and mobility data at the European scale. He is currently focused on the development of the infrastructure and on the ethical use of artificial intelligence. His past works underpin data-driven insights and innovations across the transportation, urban planning, and smart-city domains, contributing to both fundamental research and real-world applications.
Topic 5: On Retrieval Augmented Generation and some Unexpected Effects
Abstract:
Unexpected behaviors in retrieval-based language generation challenge conventional wisdom. Historically, text retrieval relied on inverted indexes, but the shift to Transformer-powered dense retrieval has unveiled unforeseen effects when integrating external knowledge. While relevant documents generally boost accuracy, the presence of distracting or even randomly selected texts can lead to dramatic performance fluctuations—sometimes degrading results, other times surprisingly enhancing them. Furthermore, common practices such as fine-tuning and alignment (e.g., RLHF), typically aimed at optimizing model performance, may inadvertently compromise the system’s ability to indicate when no suitable evidence exists. These insights call for a critical reevaluation of retrieval strategies and fine-tuning approaches to fully exploit the potential of modern retrieval augmented systems.
Hours: 1.15
Speaker: Fabrizio Silvestri
Speaker Bio:
Fabrizio Silvestri is a Full Professor at the Dipartimento di Ingegneria Informatica, Automatica e Gestionale (DIAG) of Sapienza University of Rome and coordinates the Ph.D. program in Data Science. His research focuses on artificial intelligence, particularly on machine learning techniques for web search and natural language processing. He has published over 150 papers in international journals and conferences and holds nine industrial patents. His work has been recognized with awards such as the Test-of-Time Award at the ECIR 2018 conference for a paper published in 2007, along with several best paper awards.
Before his academic career, he worked for eight years in industrial research, holding roles at Yahoo Research and Facebook AI. At Facebook AI, he led teams that developed AI methods to address malicious online activities. Professor Silvestri earned his Ph.D. in computer science from the University of Pisa in 2003, after completing an M.Sc. in computer science (magna cum laude) in 2000. He currently leads the RSTLess research group, which investigates approaches for robust, safe, and transparent deep learning.
Topic 6: The Role of Persuasion Techniques in Disinformation Detection
Abstract:
Disinformation is not merely a problem of false content – it is a strategic communication challenge, leveraging persuasive techniques and compelling narratives to shape beliefs and behaviours. This talk explores how narrative structures and rhetorical strategies are employed in the creation and spread of disinformation, highlighting the psychological and emotional mechanisms underlying it. I will present state-of-the-art tools for detecting persuasion techniques and discuss their role in shaping narratives and promoting intents and agendas, offering insights into how these techniques improve the identification and mitigation of deceptive content. The talk will also feature a short demonstration of how to use the tools introduced.
Hours: 1.15
Speaker: Giovanni Da San Martino
Speaker Bio:
Giovanni Da San Martino is Associate Professor at the University of Padova. He has been Principal Investigator for several projects, received a best paper award and organised events around the topic of disinformation.
Topic 7: Security and Trust in the Age of the "MindMesh"
Abstract:
As artificial intelligence becomes ubiquitous, we are on the verge of a profound transformation: the emergence of the MindMesh, a global, dynamic network of autonomous intelligent agents acting on behalf of individuals, organizations, and machines. Unlike traditional Internet of Things (IoT) systems, the MindMesh is populated by cognitive entities capable of making decisions, interacting, negotiating, and sometimes even deceiving one another. This new digital ecosystem presents unprecedented opportunities (such as fully autonomous services, adaptive infrastructures, and human-centric delegation of digital tasks) but also introduces fundamental challenges for cybersecurity, governance, and human rights. In this talk, we explore how the rise of the MindMesh will reshape digital security. We discuss the shift from securing devices to securing interactions among intelligent agents, the risk of informational overload and AI-driven bias amplification, and the growing need for machine-level trust, ethically grounded communication protocols, and agent certification frameworks. We will examine the emergence of malicious AIs designed to manipulate or attack other agents, and the provocative idea of establishing a policing system within the MindMesh (autonomous entities tasked with enforcing machine ethics and behavioral integrity).
Participants will be encouraged to think boldly about the future of digital societies and to identify novel, interdisciplinary research questions ranging from intrusion detection in agent ecosystems to legal frameworks for AI accountability. The session aims to provide a conceptual foundation and a forward-looking research agenda for securing tomorrow’s AI-mediated world.
Hours: 1.15
Speaker: Ettore Ritacco
Speaker Bio:
Dr. Ettore Ritacco is a Researcher (RTDB) at the University of Udine and a Research Scientist at the Institute for High Performance Computing and Networks (ICAR-CNR), National Research Council of Italy. He holds a Ph.D. in Engineering of Systems and Informatics from the University of Calabria, where he served as Assistant Professor. His research lies at the intersection of data science, machine learning, and artificial intelligence, with a focus on generative AI, user profiling, anomaly detection, cybersecurity, recommendation systems, and social network analysis. Dr. Ritacco has coordinated and contributed to numerous national and industry-funded projects, including initiatives on smart manufacturing, occupational safety, and blockchain-based data validation. He has authored over 50 peer-reviewed publications in top-tier journals, international and national conferences and actively collaborates with both academic and industrial partners. In addition to his research activities, Dr. Ritacco is a course lecturer in topics such as computer architecture, generative AI, and advanced data science. He is also member and scientific advisor of the academic spin-off Open Knowledge Technologies S.r.l.
Topic 8: A Quick and Crash Course on Watermarking
Abstract:
Digital watermarking makes it possible to hide information within a digital carrier, such as text, video, or network traffic. For instance, cloaked data can be used to check the integrity of software, track the diffusion of digital media, or enforce copyright constraints. With the diffusion of AI frameworks, watermarking schemes are also becoming important for supporting the security and privacy requirements of modern software ecosystems. For instance, they make it possible to protect code generated through large language models, or to determine whether an image has been created by a human or a machine. Unfortunately, the availability of techniques to conceal data within other data also opens the door to many security issues; for example, malicious payloads may be cloaked within AI models. This course briefly introduces the core concepts of digital watermarking and outlines the main research questions to be faced in handling massive digital content and supporting ethical needs.
Hours: 3
Speaker: Luca Caviglione
Speaker Bio:
Luca Caviglione is a Senior Research Scientist at the Institute for Applied Mathematics and Information Technologies of the National Research Council of Italy. He holds a Ph.D. in Electronic and Computer Engineering from the University of Genoa, Italy. His research interests include traffic analysis, network security, and mitigation of threats against the software supply chain.
Topic 9: Analyzing and Countering Information Disorder in the Digital Age
Abstract:
This lecture provides an overview of key challenges and methodologies for analysing and mitigating information disorder in digital environments. Participants will explore the distinctions between misinformation, disinformation, and foreign information manipulation and interference (FIMI), scrutinising their socio-political and economic impacts. The session will present analytical frameworks such as DISARM (DISinformation Analysis and Risk Management) and the ABCDE model, along with methodologies for tracking and countering digital threats. Additionally, the role of Artificial Intelligence (AI) will be discussed, both as a tool for detecting and analysing disinformation patterns and as a potential enabler of synthetic content and manipulation tactics. Through an interdisciplinary approach that integrates social sciences, cybersecurity, and media literacy, attendees will examine real-world case studies and strategies for enhancing cognitive security, resilience, and policy responses to disinformation.
Hours: 3
Speaker: Domenico Furno
Speaker Bio:
Domenico Furno is a computer science researcher (RTD-A) in SERICS, working on AI methods and techniques to enhance Information Disorder Awareness.
Topic 10: Echo Chambers, Polarization and the Role of AI in Online Social Media
Abstract:
In online social media, users’ tendency to prefer like-minded narratives contributes to alarming phenomena such as opinion polarization and the formation of echo chambers. The interaction between human biases in information consumption and algorithmic recommendations not only shapes opinion dynamics but also influences how information circulates and persists within communities. We will review evidence on opinion polarization, echo chamber formation, and related phenomena such as user migration and the emergence of so-called “echo platforms.” Finally, we will consider two applications of AI in the context of social media: first, the detection of toxicity in online conversations to better understand user engagement and persistence; second, the use of Large Language Models (LLMs) to assess misinformation and content credibility at scale. In particular, we compare LLM judgments with expert assessments, evaluating new directions for automated content moderation and the study of information circulation online.
Hours: 3
Speaker: Matteo Cinelli
Speaker Bio:
Matteo Cinelli is an Assistant Professor of Computer Science at Sapienza University of Rome. His research focuses on networks, data science and social media.
Topic 11: Large Language Models as Misinformation Detectors: Promise, Pitfalls, and Research Frontiers
Abstract:
Large Language Models (LLMs) such as GPT-4 have demonstrated remarkable capabilities in understanding and generating natural language. But can they also help us detect and counter misinformation? In this lecture, we will explore the current state of research on the use of LLMs for misinformation detection, with a focus on recent studies evaluating their performance in tasks such as fake news classification and reliability assessment of news sources. We will discuss the strengths and limitations of prompting-based approaches and the challenges of generalizing across topics.
In the second part of the talk, we will highlight emerging research on personality-infused language models — LLMs trained or prompted to simulate individual traits (e.g., based on the Big Five model) — and how these human-like attributes may influence their susceptibility to misinformation. These studies open up new avenues for understanding both the vulnerabilities and robustness of AI systems when exposed to deceptive content, with implications for safety and adversarial testing.
Hours: 3
Speaker: Marinella Petrocchi
Speaker Bio:
Marinella Petrocchi is a Senior Researcher at the Institute of Informatics and Telematics of the National Research Council (IIT-CNR) in Pisa, Italy, in the Trust, Security and Privacy research unit. She also collaborates with the Sysma unit at the IMT School for Advanced Studies in Lucca, Italy. Her research lies at the intersection of Cybersecurity, Artificial Intelligence, and Data Science. Specifically, she studies novel techniques for detecting online fake news and fake accounts, and automated methods for ranking the reliability of online news media. She is the author of several international publications on these topics and regularly gives talks and lectures on the subject. She is CNR Lead of the project Humane: Holistic sUpports to inforMAtioN disordEr, under the NRRP MUR program funded by the EU – NGEU.
Topic 12: Game-theoretic Aspects of Security
Abstract:
Over the last decade, several models and results from Algorithmic Game Theory have led to practical applications in real-world security domains. These scenarios are often modeled as Security Games, which capture the strategic interaction between a defender and one or more attackers. The defender allocates limited security resources across multiple targets, while the attacker may choose one or more targets to attack. The outcome and resulting payoffs for both players depend on whether the selected target is defended. If it is, the defender gains an advantage, while the attacker incurs a loss. This game-theoretic framework facilitates the design of more effective and less predictable security strategies.
In these talks, we will provide an overview of key problems and techniques related to security games, ranging from Network Design Games and Stackelberg Competition to the Bayesian Persuasion framework.
Hours: 3
Speaker: Cosimo Vinci and Vittorio Bilò
Speaker Bio:
Cosimo Vinci is a tenure-track researcher at the University of Salento. His research focuses on Algorithmic Game Theory and approximation and randomized algorithms. Vittorio Bilò is an Associate Professor at the University of Salento. His research focuses on the design of algorithms and their application to Economics and Social Choice.
Topic 13: Securing the Future: Misinformation, Data Infrastructures, and the AI Lifecycle of Predictive Power
Abstract:
This keynote introduces techno-economic futurity, a conceptual and diagnostic framework for mapping and intervening in AI-driven misinformation infrastructures. Rather than treating misinformation as static content to be detected, I propose it be understood as the output of predictive systems that capture, enclose, and reframe user behaviour through recursive cycles of personalisation and feedback. Misinformation, in this view, is not an anomaly but a structural feature of AI architectures optimised for attention extraction and behavioural influence.
As generative AI systems increasingly mediate public discourse, the need to rethink misinformation as a temporal and infrastructural phenomenon becomes urgent. Futurity names the orchestration of time within AI systems: how past interactions and present contexts are recursively folded into predictive outputs that shape future actions, perceptions, and beliefs. These systems do not merely reflect reality—they prefigure it, narrowing what can be seen, trusted, and known. Infrastructures of prediction thus generate misinformation not as error, but as effective modulation.
To analyse this dynamic, I introduce a diagnostic triad:
- Non-rivalry: Data’s infinite reusability enables recursive application across domains, making it both economically potent and strategically vulnerable across adversarial contexts.
- Excludability: Technical and legal enclosures generate informational and economic inequality by concentrating control over knowledge flows.
- Futurity: Predictive infrastructures recursively generate value by modulating civic life and social perception over time.
I apply this framework to the Google AI stack, illustrating how user data becomes infrastructured misinformation: personalised, adaptive, and refined through ongoing feedback. These systems continuously learn from interaction to optimise belief-shaping outputs. I conclude by proposing democratic futurity as a countermeasure: participatory, accountable infrastructures that reassert public agency over the systems predicting and producing our shared futures—and the epistemic conditions of a secure society.
Hours: 1.15
Speaker: Mark Coté
Speaker Bio:
Mark Coté is a Reader in Data and Society in the Digital Humanities department of King’s College London. His cross-disciplinary research addresses the relationship between the human and the technical object and examines the societal dimensions of data, computation, and AI. He has been PI or CI on EPSRC, H2020, and AHRC grants valued at more than £10 million. He collaborates with computer scientists in social data analytics and cybersecurity, social scientists, policy experts, and legal scholars. He is a PI and Strategic Board member of REPHRAIN, the UK’s national research centre for online harm mitigation and data empowerment, and a PI on SoBigData, the European research infrastructure for social data analytics. His work has been presented at conferences and keynotes globally and has been published widely in leading journals across disciplines, including Big Data & Society and IEEE Computer. His keynote talk will propose an innovative socio-technical approach for mapping and intervening in AI-driven misinformation infrastructures.
Who can Apply
The summer school is open to Postgraduate Students, PhD Students, and Researchers interested in Artificial Intelligence.
Deadlines
Early Registration: May 10, 2024
Late Registration: June 10, 2024
How to apply
You can pre-register for the summer school by sending an email expressing your interest to ai4ss[at]icar.cnr.it. Pre-registration is now open. Please attach your CV and a motivation letter. You will receive a confirmation email regarding your enrollment. Once confirmed, you will be able to complete your registration through the website. Since the venue has limited capacity, registration requests will be processed on a first-come, first-served basis.