# 🎮 Pokémon: AI Adventure
A text-based Pokémon RPG powered by LangChain and LangGraph, where an LLM narrates your journey, generates wild encounters, and drives the story, while deterministic game logic handles battles, catching, and stats.
See the related YouTube video: https://www.youtube.com/embed/TMPeW-D6MvU

See the related blog post: https://ivanmosquera.net/
```
           ┌────────────┐
           │   START    │
           └─────┬──────┘
                 │
           ┌─────▼──────┐
           │   intro    │ ← Professor Oak welcomes you
           └─────┬──────┘
                 │
           ┌─────▼──────┐
           │  explore   │ ← interrupt() for input
           └─────┬──────┘◄───────────────────────────────┐
                 │                                       │
        ┌────────┴─────────┐                             │
        ▼                  ▼                             │
   ┌─────────┐     ┌───────────────┐                     │
   │  heal   │     │ encounter_chk │                     │
   └────┬────┘     └───────┬───────┘                     │
        │          ┌───────┴────────┐                    │
        │    no encounter       encounter                │
        │          │                │                    │
        │          │        ┌───────▼───────┐            │
        │          │        │    battle     │◄───┐       │
        │          │        │  interrupt()  │    │ ongoing
        │          │        └───────┬───────┘    │       │
        │          │          ┌─────┼─────┐      │       │
        │          │         win   loss  loop────┘       │
        │          │          │     │                    │
        └──────────┴──────────┴─────┼────────────────────┘
                                    │
                            ┌───────▼───────┐
                            │   game_over   │
                            └───────┬───────┘
                                    │
                            ┌───────▼───────┐
                            │      END      │
                            └───────────────┘
```
| Component | Role |
|---|---|
| LLM (Ollama) | Narrates scenes, generates wild Pokémon, powers Professor Oak |
| LangGraph StateGraph | Manages game phases (explore → encounter → battle → ...) |
| `interrupt()` | Pauses the graph to wait for player input each turn |
| Structured Output | Forces the LLM to return typed Pokémon data (Pydantic models) |
| MemorySaver | Checkpoints game state so the graph can resume after interrupts |
- Python 3.10+
- Ollama installed and running locally

- Install Ollama from ollama.com or via Homebrew:

  ```bash
  brew install ollama
  ```

- Start the Ollama server:

  ```bash
  ollama serve
  ```

- Pull a model that supports tool calling (required for structured output in this game). Recommended models:

  | Model | Size | Command |
  |---|---|---|
  | Qwen 2.5 (recommended) | 7B | `ollama pull qwen2.5` |
  | Llama 3.1 | 8B | `ollama pull llama3.1` |
  | Mistral | 7B | `ollama pull mistral` |

  Note: the game uses structured output (`with_structured_output`), which requires tool/function calling support. Not all Ollama models support this – the models listed above do.

- Update the `model` field in `main.py` to match the model you pulled:

  ```python
  llm = ChatOllama(
      model="qwen2.5",  # must match the model name from `ollama list`
      base_url="http://localhost:11434",
      max_tokens=4096,
      temperature=0.7,
  )
  ```
```bash
# Clone the repo
git clone
cd ivmos-langx-pokemon-game

# Create a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install langchain langchain-ollama langgraph pydantic typing_extensions
```

To play, activate the environment and run the game:

```bash
source venv/bin/activate
python3 main.py
```

Once the game starts, Professor Oak introduces you and hands you your starter Pokémon (Charmander). From there:
- **Explore** – Type `explore` or `search grass` to roam the area
- **Heal** – Type `go to town` or `heal` to visit a Pokémon Center
- **Battle** – When a wild Pokémon appears, choose a move name, `catch`, or `run`
- **Quit** – Type `quit` or `exit` at any prompt
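Internally, free-form input like this has to be mapped onto the graph's routes. The sketch below is a hypothetical helper (`normalize_action` is not a function in `main.py`) illustrating how simple keyword matching could normalize whatever the player types:

```python
# Hypothetical helper – not part of main.py. Sketches how free-form
# player input could be mapped onto the game's routing keywords.
def normalize_action(text: str) -> str:
    text = text.strip().lower()
    if text in ("quit", "exit"):
        return "quit"
    if "heal" in text or "town" in text:
        return "heal"        # route to the Pokemon Center
    if "grass" in text or "explore" in text:
        return "explore"     # roam / trigger encounter checks
    return "unknown"

print(normalize_action("Go to Town"))  # heal
```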
```
🎮 POKÉMON: AI ADVENTURE

What's your name, trainer? > Ash

🎮 Professor Oak smiles as he hands you a Poké Ball...

📍 Pallet Town | Team: Charmander (39/39 HP)
What do you do? (explore / search grass / go to town / check team)
> search grass

🎮 You wade into the tall grass north of Pallet Town...

⚡ A wild Pidgey (Lv.4, normal-type) appeared!
   HP: 28 | ATK: 18 | DEF: 15

⚔️ BATTLE – Turn 1
Charmander: 39/39 HP
Wild Pidgey: 28/28 HP
Your moves: [Scratch / Ember / Growl / Smokescreen]
Or: [catch] / [run]
> Ember

🎮 🗡️ Charmander uses Ember! CRITICAL HIT! (24 damage)
👾 Wild Pidgey strikes back! (8 damage)
Charmander: 31/39 HP | Pidgey: 4/28 HP
> catch

🎮 👍 Gotcha! Pidgey was caught!
📦 Added to your team!
```
```
ivmos-langx-pokemon-game/
├── main.py      # Complete game – all nodes, graph, and game loop
├── README.md
└── venv/        # Virtual environment (not committed)
```
The LLM connection is configured at the top of `main.py`:

```python
llm = ChatOllama(
    model="qwen3.5:35b-a3b",
    base_url="http://localhost:11434",
    max_tokens=4096,
    temperature=0.7,
)
```

To use a different model or server, update `model` and `base_url`.
- **New locations** – Add entries to a location map and update `route_after_explore`
- **Gym battles** – Create NPC trainer nodes with LLM-driven move selection
- **Evolution** – Track XP in `GameState` and trigger evolution at thresholds
- **Save/Load** – Swap `MemorySaver` for `SqliteSaver` to persist game sessions to disk
- **Streaming** – Use `game.stream()` for real-time typewriter-style narration
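As an illustration of the Evolution idea, XP thresholds could be checked after each battle. This is a minimal sketch with made-up data – neither `EVOLUTIONS` nor `maybe_evolve` exists in `main.py`:

```python
# Hypothetical sketch – not part of main.py.
# Maps a species to (evolved form, XP threshold).
EVOLUTIONS = {"Charmander": ("Charmeleon", 100)}

def maybe_evolve(pokemon: dict) -> dict:
    """Return an evolved copy of the Pokemon once its XP crosses the threshold."""
    entry = EVOLUTIONS.get(pokemon["name"])
    if entry and pokemon.get("xp", 0) >= entry[1]:
        evolved_name, _ = entry
        return {**pokemon, "name": evolved_name}
    return pokemon

print(maybe_evolve({"name": "Charmander", "xp": 120})["name"])  # Charmeleon
```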
This project demonstrates core LangChain and LangGraph patterns. Here's where each concept appears in main.py:
**ChatOllama – LLM Connection**

Wraps the Ollama REST API in LangChain's unified chat model interface. Configured once and reused across all chains.
```python
# main.py – top of file
llm = ChatOllama(
    model="qwen3.5:35b-a3b",
    base_url="http://localhost:11434",
    ...
)
```

**Prompt Templates & Message Roles**
`ChatPromptTemplate` with a `SystemMessage` sets the LLM's persona (narrator), while `{variables}` make prompts reusable across different game contexts.
```python
# main.py – narrator chain
narrator = (
    ChatPromptTemplate.from_messages([
        ("system", """You are the narrator of a Pokémon text adventure game.
        Player: {player_name} | Location: {location} ..."""),
        MessagesPlaceholder("history"),
        ("human", "{input}"),
    ])
    | llm
)
```

**Structured Output with Pydantic**

Forces the LLM to return valid typed data instead of free text. Used to generate wild Pokémon with guaranteed fields and value ranges.
```python
# main.py – schema + binding
class WildPokemonSchema(BaseModel):
    name: str
    type: str
    level: int = Field(ge=2, le=50)
    hp: int = Field(ge=20, le=120)
    ...

encounter_generator = llm.with_structured_output(WildPokemonSchema)
```
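Because the schema carries `Field` constraints, out-of-range values are rejected before they ever reach game logic. The following standalone demonstration uses only `pydantic` (no LLM involved) to show the constraint behaviour:

```python
from pydantic import BaseModel, Field, ValidationError

class WildPokemonSchema(BaseModel):
    name: str
    type: str
    level: int = Field(ge=2, le=50)
    hp: int = Field(ge=20, le=120)

# Values inside the declared ranges validate cleanly.
pidgey = WildPokemonSchema(name="Pidgey", type="normal", level=4, hp=28)

# Values outside the ranges raise a ValidationError instead of
# silently producing an impossible Pokemon.
try:
    WildPokemonSchema(name="Mewtwo", type="psychic", level=99, hp=28)
except ValidationError:
    print("rejected: level=99 violates le=50")
```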
**LCEL Chains (pipe operator)**

The `|` operator composes prompt → LLM into a single callable chain. The narrator chain uses this to go from template to response in one `.invoke()` call.
```python
# main.py – chain composition
narrator = (
    ChatPromptTemplate.from_messages([...])
    | llm  # prompt output pipes into the LLM
)
```
**Message History (Memory)**

`MessagesPlaceholder("history")` injects past conversation messages into the prompt, giving the LLM context about earlier events in the adventure.
```python
# main.py – inside nodes like intro_node, explore_node
response = narrator.invoke({
    ...
    "history": state["messages"][-8:],  # last 8 messages for context
    "input": "...",
})
```

**StateGraph & TypedDict**
The entire game state is a `TypedDict`. LangGraph passes it through every node, and each node returns only the keys it wants to update.
```python
# main.py – state definition
class GameState(TypedDict):
    messages: Annotated[list, add_messages]  # reducer: appends, not replaces
    player_name: str
    location: str
    pokemon_team: list[dict]
    wild_pokemon: dict | None
    badge_count: int
    game_phase: str
    turn_count: int
```

**Nodes – Game Phase Functions**

Each node is a function that receives the full state and returns a partial update. The graph handles merging.
```python
# main.py – node examples (signatures only)
def intro_node(state: GameState) -> dict:            # opening scene
def explore_node(state: GameState) -> dict:          # player roams
def encounter_check_node(state: GameState) -> dict:  # random encounter roll
def battle_node(state: GameState) -> dict:           # fight / catch / run
def heal_node(state: GameState) -> dict:             # Pokémon Center
def game_over_node(state: GameState) -> dict:        # defeat screen
```
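As a concrete illustration of the partial-update pattern, a node like `heal_node` might look roughly like this. It is a simplified sketch, assuming each team-member dict carries `hp` and `max_hp` keys – not the exact code from `main.py`:

```python
def heal_node(state: dict) -> dict:
    """Restore the whole team, returning only the keys that change."""
    healed_team = [{**p, "hp": p["max_hp"]} for p in state["pokemon_team"]]
    return {
        "pokemon_team": healed_team,
        "game_phase": "explore",  # hand control back to the explore loop
    }

state = {"pokemon_team": [{"name": "Charmander", "hp": 12, "max_hp": 39}],
         "location": "Pallet Town"}
update = heal_node(state)
print(update["pokemon_team"][0]["hp"])  # 39
```

Note that untouched keys like `location` are absent from the returned dict; the graph merges the update into the existing state.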
**Conditional Edges – Branching Logic**

`add_conditional_edges` calls a routing function that reads the state and returns a string key, directing the graph to the next node dynamically.
```python
# main.py – routing
def route_after_explore(state: GameState) -> str:
    # check the player's last action
    last = state["messages"][-1].content.lower()
    if "town" in last or "heal" in last:
        return "heal"
    return "encounter_check"

graph.add_conditional_edges("explore", route_after_explore,
    {"heal": "heal", "encounter_check": "encounter_check"})
```

**interrupt() – Human-in-the-Loop**
Pauses graph execution and surfaces a prompt to the player. When the player responds, `Command(resume=...)` feeds their input back into the node.
```python
# main.py – inside explore_node
action = interrupt(
    f"\n📍 {state['location']} | Team: {team_str}\n"
    f"What do you do? (explore / search grass / go to town / check team)"
)

# main.py – inside the game loop
result = game.invoke(Command(resume=player_input), config)
```
**MemorySaver – Checkpointing**

Required for `interrupt()` to work. Saves graph state between pauses so execution can resume exactly where it left off. Each game session gets a unique `thread_id`.
```python
# main.py – compile with checkpointer
checkpointer = MemorySaver()
game = graph.compile(checkpointer=checkpointer)

# main.py – session config
config = {"configurable": {"thread_id": f"game-{name}"}}
```

**add_messages Reducer**
The `Annotated[list, add_messages]` annotation on the `messages` field tells LangGraph to append new messages instead of overwriting the list – essential for building conversation history.
```python
# main.py – in GameState
messages: Annotated[list, add_messages]

# When a node returns {"messages": [AIMessage(...)]},
# LangGraph appends it to the existing list automatically.
```
**Graph Construction & Compilation**

Nodes are registered, edges define flow, and `.compile()` produces a runnable graph. The full wiring:
```python
# main.py – graph assembly
graph = StateGraph(GameState)

graph.add_node("intro", intro_node)
graph.add_node("explore", explore_node)
graph.add_node("encounter_check", encounter_check_node)
graph.add_node("battle", battle_node)
graph.add_node("heal", heal_node)
graph.add_node("game_over", game_over_node)

graph.add_edge(START, "intro")
graph.add_edge("intro", "explore")
graph.add_conditional_edges("explore", route_after_explore, {...})
graph.add_edge("heal", "explore")
graph.add_conditional_edges("encounter_check", route_after_encounter_check, {...})
graph.add_conditional_edges("battle", route_after_battle, {...})
graph.add_edge("game_over", END)

game = graph.compile(checkpointer=checkpointer)
```