Misinformation spreads rapidly through social networks — polarizing opinion, eroding trust, and influencing real-world decisions. Stopping it requires deciding which nodes to intervene on and in what order, given limited resources and uncertainty about future spread.
InfoSpread is a framework that casts this as a generalized planning problem:
the network is a symbolic state, node beliefs are fluents, and interventions are actions with
preconditions and effects. This lets us leverage decades of AI planning research — search, heuristics,
abstraction — to compute principled intervention policies.
Prior approaches treat spread control as a pure simulation or optimization problem, training a separate model per network. This breaks down for three reasons: the models are tied to a fixed graph size, the resulting policies do not transfer to new topologies without retraining, and the learned behavior is opaque. Planning-based formulations naturally handle all three: PDDL models are graph-size-agnostic, policies trained on plan traces transfer across topologies, and plan traces are inherently interpretable.
We model a social network as a PDDL domain: nodes are objects, opinions are predicates
(e.g., infected(v)), and interventions are actions
(e.g., inoculate(v, neighbor)). A planner generates traces across
diverse synthetic graphs, producing a dataset of (state, action) pairs.
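The trace-generation step can be sketched in a few lines of Python. The snippet below is illustrative, not the paper's pipeline: it uses a simple greedy rule as a stand-in for the symbolic planner, a hypothetical `random_graph` helper for the synthetic graphs, and records (state, action) pairs in the same spirit as the `inoculate(v, neighbor)` action with its healthy-node precondition.

```python
import random

def random_graph(n, p, rng):
    """Erdos-Renyi-style synthetic graph as an adjacency list."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def plan_trace(adj, seeds, budget):
    """Stand-in greedy 'planner': at each step, inoculate the healthy
    node with the most infected neighbors, recording (state, action)
    pairs as training data."""
    infected, inoculated, trace = set(seeds), set(), []
    for _ in range(budget):
        # Precondition: candidate nodes must still be healthy.
        candidates = [v for v in adj if v not in infected and v not in inoculated]
        if not candidates:
            break
        action = max(candidates, key=lambda v: len(adj[v] & infected))
        trace.append((frozenset(infected), action))
        # Effect: the chosen node can no longer be infected.
        inoculated.add(action)
        # One spread step: infection crosses edges to unprotected nodes.
        infected |= {u for v in infected for u in adj[v] if u not in inoculated}
    return trace

rng = random.Random(0)
dataset = []
for _ in range(10):
    g = random_graph(20, 0.15, rng)
    seeds = rng.sample(range(20), 2)
    dataset.extend(plan_trace(g, seeds, budget=5))
```

Each entry pairs a symbolic state (the set of infected nodes) with the intervention the planner chose there, which is exactly the supervision signal the policy network needs.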
A Graph Convolutional Network (GCN) is trained on these plan traces to
predict the next intervention given the current network state. Because the GCN operates on graph
structure directly, the learned policy generalizes to unseen topologies — including real-world
networks like Cora — without retraining.
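Why does the policy transfer? A GCN's weights act per-edge and per-node rather than per-graph. The minimal NumPy sketch below (my own illustration, not the trained model; weights are random and the single `is_infected` feature is an assumption) shows a two-layer graph convolution scoring every node as a candidate intervention, with no dependence on graph size baked into the parameters.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: symmetric-normalized adjacency
    (with self-loops) times features times weights, then ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

def policy_scores(A, H, W1, W2):
    """Two-layer GCN scoring each node as the next intervention
    target.  The weight shapes depend only on feature dimensions,
    so the same W1, W2 apply to graphs of any size or topology."""
    Z = gcn_layer(A, gcn_layer(A, H, W1), W2)
    return Z.squeeze(-1)  # one score per node

# Toy 4-node path graph; node feature = [is_infected].
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
H = np.array([[1.],[0.],[0.],[0.]])  # node 0 infected
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(1, 8)), rng.normal(size=(8, 1))
scores = policy_scores(A, H, W1, W2)
```

In the real system these weights would be fit to the planner's (state, action) pairs; at inference, the argmax score over healthy nodes picks the next intervention.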
The companion InfoSpread web tool lets users upload any social network (CSV/edge-list), configure spread dynamics (infection rate, recovery rate), and run the learned policy interactively. The visualization shows intervention choices step by step with opinion color overlays.
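The spread dynamics the tool exposes (infection rate, recovery rate) follow the usual discrete-time SIR pattern. The sketch below is a guess at such a loop, not the tool's implementation: a per-step intervention removes the most at-risk susceptible node, and comparing runs with and without it yields an infection-rate reduction in the style of the reported metric.

```python
import random

def simulate(adj, seeds, beta, gamma, steps, budget=0, seed=0):
    """Discrete-time SIR spread with an optional per-step intervention
    that inoculates (removes) the susceptible node with the most
    infected neighbors.  Returns the fraction of nodes ever infected."""
    rng = random.Random(seed)
    S = set(adj) - set(seeds); I = set(seeds); R = set(); ever = set(seeds)
    for _ in range(steps):
        for _ in range(budget):  # interventions this step
            at_risk = [v for v in S if adj[v] & I]
            if not at_risk:
                break
            S.discard(max(at_risk, key=lambda v: len(adj[v] & I)))
        # Each susceptible node is infected by each infected neighbor
        # independently with probability beta; recovery with gamma.
        new_I = {v for v in S if any(rng.random() < beta for u in adj[v] & I)}
        recovered = {v for v in I if rng.random() < gamma}
        S -= new_I; I = (I | new_I) - recovered; R |= recovered
        ever |= new_I
    return len(ever) / len(adj)

# 30-node ring graph, one seed infection.
adj = {v: {(v - 1) % 30, (v + 1) % 30} for v in range(30)}
base = simulate(adj, {0}, beta=0.5, gamma=0.1, steps=20)
treated = simulate(adj, {0}, beta=0.5, gamma=0.1, steps=20, budget=1)
reduction = 1 - treated / base
```

An uploaded edge list only needs to be parsed into the `adj` dictionary for the same loop to run on an arbitrary network.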
On the Cora citation network (2000 nodes), InfoSpread achieves an 86% reduction in infection rate compared to a no-intervention baseline, outperforming greedy heuristics and standard influence-maximization approaches. The learned policy transfers across Erdős–Rényi, Barabási–Albert, and small-world graph families without retraining.
The interactive demo received the AAAI 2024 Best Demo Award, recognizing both the research contribution and the tool's accessibility for practitioners studying misinformation dynamics.
A natural extension is modeling belief evolution across a multi-turn human-AI conversation — where each turn corresponds to an "intervention" that shifts user beliefs. In follow-up work (GenPlan @ AAAI '25), we interpret dialog states as opinion-network snapshots and use the same generalized planning framework to guide conversations toward constructive outcomes.