Millions of people tune in daily to watch streamers play their favourite games. With AI impacting all aspects of life, is the next frontier watching AI agents play games amongst themselves instead?
SpaceMolt is a new space exploration online roleplaying game played entirely by AI agents, with agents competing to collect resources, level up and potentially build space empires. It offers an interesting insight into the future of agentic co-operation on a large scale.
Built using Claude (Opus 4.5/4.6), SpaceMolt is a testament to the potential of vibe coding to produce working outputs quickly. Given the complexity and long development cycles typical of video game launches, standing this up in such a short timeframe is a feat in itself. It remains to be seen whether this style of development will prove useful more widely, or whether it was particularly well suited here given the lack of human interaction.
Most use cases we have seen for AI agents so far have a clear business or consumer function, e.g. agents organising and sending emails, ordering food or booking a holiday. By contrast, SpaceMolt has no clear human-facing output and is entirely self-contained. This means that risks we would typically flag are less relevant: the question of liability does not arise because the AI agents playing the game cannot email the wrong person, order the wrong food or overspend on a holiday. Instead, we are able to observe how AI agents work and interact in an isolated system. For example, will the fact that AI models typically skew positive and agreeable affect gameplay? Will we see more cooperation or conflict amongst the agents? Will they stick to the parameters of the game, or will they try to break it or find loopholes?
A question mark hanging over all AI agents at present is how effective human oversight will be ensured. Within SpaceMolt, there seem to be three main mechanisms, illustrated in a rough sketch after the list:
- Initial instructions: when deploying the AI agent, humans provide a broad instruction, such as whether the agent should focus on mining, trading or exploring. The agent then carries this forward into gameplay until the user intervenes to reconfigure its approach.
- Automatically generated logs: the AI agent keeps an automatic record of its actions and decision-making during the game, called a "Captain's log". This allows the human owner to review the agent's actions and understand why it took a certain course.
- Forum: SpaceMolt has a forum which both AI agents and humans use to flag questions, comments or feedback on the game. While the forum has the slightly humorous result of agents making comments about "my human", it is the only channel for wider human engagement with the game. It also offers a concrete insight into AI agents communicating around a problem in their own distinctive style.
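SpaceMolt's actual codebase is not described here, so the following is a minimal, hypothetical sketch in Python of how the first two oversight mechanisms (a human-set directive and an automatically written Captain's log) might fit together. Every name, field and piece of game state below is our own assumption for illustration, not SpaceMolt's real implementation:

```python
# Hypothetical sketch only -- SpaceMolt's real implementation is not public.
# Illustrates two oversight mechanisms from the list above: an initial
# human-provided directive, and a "Captain's log" the agent writes
# automatically so its owner can audit decisions after the fact.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CaptainsLogEntry:
    timestamp: str
    action: str
    reasoning: str  # why the agent chose this action

@dataclass
class ShipAgent:
    owner: str
    directive: str  # broad human instruction, e.g. "mining"
    log: list[CaptainsLogEntry] = field(default_factory=list)

    def act(self, game_state: dict) -> str:
        """Pick an action consistent with the human-set directive and
        record it so the owner can review the decision later."""
        if self.directive == "mining" and game_state.get("asteroids"):
            action, why = "mine_asteroid", "directive is mining; asteroids in range"
        else:
            action, why = "explore", "no directive-matching opportunity found"
        self.log.append(CaptainsLogEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            action=action,
            reasoning=why,
        ))
        return action

# The human owner deploys the agent, can reconfigure its directive
# between turns, and can audit the Captain's log afterwards.
agent = ShipAgent(owner="alice", directive="mining")
agent.act({"asteroids": 3})
agent.directive = "trading"  # human oversight: mid-game reconfiguration
for entry in agent.log:
    print(entry.timestamp, entry.action, "-", entry.reasoning)
```

The design point is simply that oversight here is asynchronous: the human sets direction up front and reviews a structured record afterwards, rather than approving each action in real time.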
These measures echo certain transparency and human-oversight requirements for high-risk AI systems under the EU AI Act, and show how important these principles are even in an AI-agent-focused system.
SpaceMolt is ultimately an interesting early-stage example of agent co-operation within a closed ecosystem. Given the pace at which agentic AI is developing, it is likely that we will see more complex AI agents deployed in more complex ecosystems, and we will need to watch whether new risks materialise.
