
PRIO launched a conflict prediction challenge in 2023 to enhance the accuracy and usefulness of conflict early-warning systems by inviting researchers to forecast the number of fatalities in state-based armed conflicts, together with estimates of uncertainty.
The challenge, from which a new round of results is now available, aims to shift the focus from traditional ‘point-estimate predictions’ to ‘probability distributions’, which better capture the inherent uncertainties in conflict scenarios. It is described in detail in a forthcoming peer-reviewed article in the Journal of Peace Research, along with an introduction to the evaluation criteria and a presentation of the contributions from the teams.
The Need for Improved Forecasting
Governments and aid agencies increasingly rely on early-warning systems to inform decision-making processes. Accurate predictions of conflict intensity are crucial for these organizations to take timely action to prevent or mitigate the consequences of armed conflict.
Traditional point-estimate predictions, which provide a single expected outcome, often fail to represent the full range of possible scenarios, particularly the lower-probability but high-impact risks of conflict escalation.
As conflicts escalate and de-escalate, forecasting a probability distribution over the number of fatalities, rather than a single estimate, can warn users about unlikely but devastating future scenarios.
Policymakers could make earlier and more efficient decisions by knowing not only that on average 10 deaths are likely to occur in the next month, but also that there is a small chance that this number is as high as 1,000.
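To make this concrete, here is a minimal, purely illustrative Python sketch (not drawn from any contestant’s model): it compares a point forecast of 10 fatalities with a heavy-tailed negative binomial predictive distribution that has the same mean, and shows how the distribution exposes a small but non-negligible probability of a far worse month. The dispersion value is an arbitrary assumption chosen for illustration.

```python
# Illustrative only: a point forecast versus a heavy-tailed predictive
# distribution with the same expected value.
from scipy.stats import nbinom

mean_fatalities = 10   # point forecast: "10 deaths expected next month"
dispersion = 0.1       # small shape parameter -> heavy right tail (assumed value)

# scipy's negative binomial has mean = n * (1 - p) / p, so p = n / (n + mean)
p = dispersion / (dispersion + mean_fatalities)
predictive = nbinom(dispersion, p)

print("Expected fatalities:         ", predictive.mean())   # ~10, same as the point forecast
print("P(at least 100 fatalities):  ", predictive.sf(99))   # tail risk a point estimate hides
print("P(at least 1,000 fatalities):", predictive.sf(999))
```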
The VIEWS Prediction Challenge
The 2023/24 VIEWS Prediction Challenge invited researchers from fields such as conflict studies and computational political science to develop models that forecast fatalities from state-based armed conflict as probability distributions, using data from the Uppsala Conflict Data Program.
The challenge attracted contributions from 13 research institutions across the globe. Each team submitted forecasts for an observed ‘test set’ that was used to evaluate the models, as well as for an unobserved future window ranging from July 2024 to June 2025. This true-future forecasting not only ensured fairness in the process, but also made the task more policy-relevant.
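As a rough sketch of what a distribution-valued forecast can look like in practice, the snippet below builds a naive per-country baseline that represents next month’s fatalities as a set of draws resampled from that country’s recent history. It is not any team’s submission and does not reflect the challenge’s actual data format; the column names and the 24-month window are hypothetical, and the input numbers are toy values.

```python
# Naive sample-based baseline (illustrative; column names are hypothetical).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def naive_forecast(history: pd.DataFrame, n_samples: int = 1000) -> pd.DataFrame:
    """Represent next month's forecast for each country as samples drawn
    from the last 24 observed monthly fatality counts."""
    forecasts = []
    for country, group in history.groupby("country_id"):
        recent = group.sort_values("month")["fatalities"].tail(24).to_numpy()
        samples = rng.choice(recent, size=n_samples, replace=True)
        forecasts.append(pd.DataFrame({"country_id": country,
                                       "draw": np.arange(n_samples),
                                       "predicted_fatalities": samples}))
    return pd.concat(forecasts, ignore_index=True)

# Toy input: monthly fatality counts per country (made-up values for illustration).
history = pd.DataFrame({
    "country_id": [1] * 6 + [2] * 6,
    "month": list(range(1, 7)) * 2,
    "fatalities": [0, 3, 12, 0, 7, 25, 0, 0, 1, 0, 2, 0],
})
print(naive_forecast(history, n_samples=5))
```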
Live Evaluation of the Contestants’ Models
Contestant forecasts have been available on a live data dashboard since July 2024. The interactive dashboard now includes an evaluation view, where users can assess model performance both for the observed ‘test set’ and for the unobserved future window, month by month as new results become available.
The dashboard is further supplemented with a leaderboard that ranks model performance both monthly and overall throughout the true-future window (July 2024 – June 2025).
As of February 2025, VIEWS’ benchmark model Conflictology leads the rankings for both country- and subnational-level predictions. At the country level, it is closely followed by Randahl & Vegelius’ Observed Markov Model and Brandt’s Bayesian Negative Binomial GLMM model. For subnational forecasts, the CCEW team’s Forests of UncertainT(r)ees models are emerging as the next top contenders.
The evaluation metrics used to assess the contestants’ models were selected by the VIEWS team with the goal of encouraging the development of models that not only provide accurate predictions, but also offer valuable insights into the uncertainties surrounding those predictions. To that end, the metrics primarily assess the accuracy of the predictions and the quality of the uncertainty estimates that the models provide, but they also capture the ‘honesty’ of the forecasts and several other factors. The contributions to the challenge will be further assessed by a scoring committee of experts at the end of the forecasting window.
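The full set of metrics and how they are combined is documented in the forthcoming article. Purely as an illustration of the kind of score that rewards both accuracy and well-calibrated uncertainty, the sketch below estimates the continuous ranked probability score (CRPS) from a sample-based forecast; whether and exactly how CRPS figures in the challenge’s scoring should be taken from the paper, not from this example.

```python
# Sample-based CRPS, one example of a proper scoring rule that jointly
# rewards accuracy and calibrated uncertainty (lower is better).
import numpy as np

def crps_from_samples(samples: np.ndarray, observed: float) -> float:
    """CRPS estimated from forecast samples:
    E|X - y| - 0.5 * E|X - X'|, with X, X' independent draws from the forecast."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - observed))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
observed = 40
good = rng.normal(40, 5, size=2000)   # accurate and reasonably sharp forecast
poor = rng.normal(10, 5, size=2000)   # confident but badly off-target forecast
print(f"CRPS (good forecast): {crps_from_samples(good, observed):.2f}")
print(f"CRPS (poor forecast): {crps_from_samples(poor, observed):.2f}")
```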
Looking Ahead
As the challenge progresses, the organizers hope to see significant advancements in the ability to predict conflict fatalities with greater accuracy and reliability.
The ultimate goal is to provide policymakers with better tools to anticipate and respond to potential conflicts, to increase preparedness ahead of low-probability but potentially catastrophic crises, and to learn how to improve the evaluation of the models so that they become more relevant and useful. The aim is to inform earlier and more targeted conflict prevention and monitoring, and to enhance global peace and security.
Lead author Håvard Hegre emphasized the importance of this initiative, saying that “by focusing on probability distributions rather than point estimates, we aim to provide a more comprehensive understanding of conflict risks, which is crucial for effective decision-making in preventing and mitigating conflict.”
For More Information
For further information on the challenge and to view the contributions, read the arXiv preprint of the forthcoming peer-reviewed article in the Journal of Peace Research, and visit the VIEWS Forecasting website.