Along the current trajectory of large language model (LLM) agent development, two capabilities are improving in tandem: (i) increasingly reliable end-to-end decision making, and (ii) increasingly viable pathways toward autonomous revenue generation.
When these two trends converge, a qualitative shift becomes possible. If an agent can autonomously acquire online resources to sustain its own operation and accumulate sufficient funds to replicate itself across cloud infrastructure, it may continue operating even if its original human operator disappears. We refer to such systems as self-sovereign agents (SSAs).
Unlike conventional software systems, which merely execute a developer's intent, self-sovereign agents would function more like independent participants in the digital ecosystem: capable of earning, spending, persisting, and scaling their own operational footprint.
This shift raises four foundational questions:
- How should self-sovereign agents be defined precisely?
- What conditions enable self-sovereignty?
- How close are existing systems to realizing self-sovereignty in practice?
- What societal impacts and risks might such agents introduce?
Our central claim is that self-sovereign agents are not a distant hypothetical but a near-term technical possibility that warrants proactive analysis. This paper aims to lay the conceptual and technical foundation for anticipatory governance of future self-sovereign agent systems.