Seeing the Forest and the Trees: AI Bias, Political Relativity, and the Language of International Relations
Abstract
Biases in machine learning (ML) and artificial intelligence (AI) are well known: AI systems learn bias from word embeddings and replicate human-like prejudices such as gender and racial/ethnic stereotypes. In international politics, biased AI can generate erroneous forecasting models that miss critical events such as the Arab Spring, or that get the direction or magnitude of predictions wrong. Event data is a particular genre of political data that reports and encodes the actions of, and relationships between, actors in the international system, including countries, NGOs, individuals, and groups of people. Event data sets represent a significant conceptual, technological, and financial investment and are used to inform government policy decisions, but the algorithms that produce them ignore temporal and linguistic nuances, which biases event code generation and political forecasting models. This chapter focuses on the theoretical foundations of bias in AI for international relations research, examining how political events are described and encoded differently depending on the source and on the perspective, language, and culture of its author.
Publication Title
The Frontlines of Artificial Intelligence Ethics: Human-Centric Perspectives on Technology’s Advance
Recommended Citation
Windsor, L. (2022). Seeing the Forest and the Trees: AI Bias, Political Relativity, and the Language of International Relations. The Frontlines of Artificial Intelligence Ethics: Human-Centric Perspectives on Technology’s Advance, 45-60. https://doi.org/10.4324/9781003030928-5