🎮 Pokémon: AI Adventure

A text-based Pokémon RPG powered by LangChain and LangGraph, where an LLM narrates your journey, generates wild encounters, and drives the story, while deterministic game logic handles battles, catching, and stats.

See the related YouTube video: https://www.youtube.com/embed/TMPeW-D6MvU

See the related blog post: https://ivanmosquera.net/

Game Graph

        ┌──────────┐
        │  START   │
        └────┬─────┘
             │
        ┌────▼─────┐
        │  intro   │  ← Professor Oak welcomes you
        └────┬─────┘
             │
        ┌────▼─────┐ ◄──────────────────────────┐
        │ explore  │  ← interrupt() for input   │
        └────┬─────┘                            │
             │                                  │
      ┌──────┴──────┐                           │
      ▼             ▼                           │
 ┌────────┐  ┌──────────────┐                   │
 │  heal  │  │encounter_chk │                   │
 └───┬────┘  └──────┬───────┘                   │
     │        ┌─────┴──────┐                    │
     │   no encounter   encounter               │
     │        │            │                    │
     │        │     ┌──────▼──────┐             │
     │        │     │   battle    │◄──┐         │
     │        │     │ interrupt() │   │ ongoing │
     │        │     └──────┬──────┘   │         │
     │        │      ┌─────┼─────┐    │         │
     │        │     win   loss  loop──┘         │
     │        │      │     │                    │
     └────────┴──────┴─────┼────────────────────┘
                           │
                    ┌──────▼──────┐
                    │  game_over  │
                    └──────┬──────┘
                           │
                    ┌──────▼──────┐
                    │     END     │
                    └─────────────┘

How It Works

Component             Role
--------------------  --------------------------------------------------------------
LLM (Ollama)          Narrates scenes, generates wild Pokémon, powers Professor Oak
LangGraph StateGraph  Manages game phases (explore → encounter → battle → ...)
interrupt()           Pauses the graph to wait for player input each turn
Structured Output     Forces the LLM to return typed Pokémon data (Pydantic models)
MemorySaver           Checkpoints game state so the graph can resume after interrupts

Prerequisites

  • Python 3.10+
  • Ollama installed and running locally

Installing Ollama and a model

  1. Install Ollama from ollama.com or via Homebrew:

    brew install ollama
  2. Start the Ollama server:

    ollama serve
  3. Pull a model that supports tool calling (required for structured output in this game). Recommended models:

    Model                   Size  Command
    ----------------------  ----  --------------------
    Qwen 2.5 (recommended)  7B    ollama pull qwen2.5
    Llama 3.1               8B    ollama pull llama3.1
    Mistral                 7B    ollama pull mistral

    Note: The game uses structured output (with_structured_output), which requires tool/function calling support. Not all Ollama models support this; the models listed above do.

  4. Update the model field in main.py to match the model you pulled:

    llm = ChatOllama(
        model="qwen2.5",  # must match the model name from `ollama list`
        base_url="http://localhost:11434",
        max_tokens=4096,
        temperature=0.7,
    )

Setup

# Clone the repo
git clone 
cd ivmos-langx-pokemon-game

# Create a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install langchain langchain-ollama langgraph pydantic typing_extensions

Run

source venv/bin/activate
python3 main.py

Gameplay

Once the game starts, Professor Oak introduces you and hands you your starter Pokémon (Charmander). From there:

  • Explore: type explore or search grass to roam the area
  • Heal: type go to town or heal to visit a Pokémon Center
  • Battle: when a wild Pokémon appears, choose a move name, catch, or run
  • Quit: type quit or exit at any prompt
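
Battle resolution, catching, and stats are handled by deterministic code rather than the LLM. main.py's actual formulas are not reproduced in this README, so the following is only a hypothetical sketch of what damage and catch-chance logic in this style might look like (function names and all numbers are assumptions):

```python
import random

def damage(attacker_atk: int, defender_def: int, crit_chance: float = 0.1) -> int:
    """Hypothetical damage formula: attack minus half defense, min 1, doubled on crit."""
    base = max(1, attacker_atk - defender_def // 2)
    if random.random() < crit_chance:
        return base * 2  # critical hit
    return base

def catch_chance(current_hp: int, max_hp: int) -> float:
    """Hypothetical catch odds: the lower the wild Pokemon's HP, the better."""
    return 1.0 - 0.8 * (current_hp / max_hp)

# A Pidgey at 4/28 HP is much easier to catch than at full health
print(round(catch_chance(4, 28), 2))   # → 0.89
print(round(catch_chance(28, 28), 2))  # → 0.2
```

Keeping this math outside the LLM is what guarantees consistent HP and damage numbers from turn to turn.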

Example Session

🎮 POKÉMON: AI ADVENTURE
What's your name, trainer? > Ash

🎮 Professor Oak smiles as he hands you a Poké Ball...

📍 Pallet Town | Team: Charmander (39/39 HP)
What do you do? (explore / search grass / go to town / check team)
> search grass

🎮 You wade into the tall grass north of Pallet Town...

⚡ A wild Pidgey (Lv.4, normal-type) appeared!
   HP: 28 | ATK: 18 | DEF: 15

⚔️  BATTLE - Turn 1
  Charmander: 39/39 HP
  Wild Pidgey: 28/28 HP
  Your moves: [Scratch / Ember / Growl / Smokescreen]
  Or: [catch] / [run]
> Ember

🎮 🗡️ Charmander uses Ember! CRITICAL HIT! (24 damage)
🐾 Wild Pidgey strikes back! (8 damage)
  Charmander: 31/39 HP | Pidgey: 4/28 HP

> catch

🎮 🎉 Gotcha! Pidgey was caught!
📦 Added to your team!

Project Structure

ivmos-langx-pokemon-game/
├── main.py          # Complete game: all nodes, graph, and game loop
├── README.md
└── venv/            # Virtual environment (not committed)

Configuration

The LLM connection is configured at the top of main.py:

llm = ChatOllama(
    model="qwen3.5:35b-a3b",
    base_url="http://localhost:11434",
    max_tokens=4096,
    temperature=0.7,
)

To use a different model or server, update model and base_url.

Extending the Game

  • New locations: add entries to a location map and update route_after_explore
  • Gym battles: create NPC trainer nodes with LLM-driven move selection
  • Evolution: track XP in GameState and trigger evolution at thresholds
  • Save/Load: swap MemorySaver for SqliteSaver to persist game sessions to disk
  • Streaming: use game.stream() for real-time typewriter-style narration
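
As an illustration of the Evolution idea above, here is one hedged sketch of XP tracking with a threshold trigger. The field names, XP-per-level rate, and evolution table are hypothetical, not part of main.py:

```python
# Hypothetical evolution table: name -> (evolved form, level threshold)
EVOLUTIONS = {"Charmander": ("Charmeleon", 16)}

def gain_xp(pokemon: dict, xp: int) -> dict:
    """Add XP, derive the level (100 XP per level), and evolve past a threshold."""
    pokemon = dict(pokemon)  # don't mutate the caller's state
    pokemon["xp"] = pokemon.get("xp", 0) + xp
    pokemon["level"] = 1 + pokemon["xp"] // 100
    evo = EVOLUTIONS.get(pokemon["name"])
    if evo and pokemon["level"] >= evo[1]:
        pokemon["name"] = evo[0]  # trigger evolution at the threshold
    return pokemon

charmander = {"name": "Charmander", "xp": 1450, "level": 15}
print(gain_xp(charmander, 100)["name"])  # → Charmeleon
```

A battle-win node could call something like this and merge the result back into pokemon_team.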

Concepts & Code Reference

This project demonstrates core LangChain and LangGraph patterns. Here's where each concept appears in main.py:

LangChain Fundamentals

ChatOllama: LLM Connection

Wraps the Ollama REST API into LangChain's unified chat model interface. Configured once and reused across all chains.

# main.py - top of file
llm = ChatOllama(
    model="qwen3.5:35b-a3b",
    base_url="http://localhost:11434",
    ...
)

Prompt Templates & Message Roles

ChatPromptTemplate with a system message sets the LLM's persona (narrator), while {variables} make prompts reusable across different game contexts.

# main.py - narrator chain
narrator = (
    ChatPromptTemplate.from_messages([
        ("system", """You are the narrator of a Pokémon text adventure game.
Player: {player_name} | Location: {location} ..."""),
        MessagesPlaceholder("history"),
        ("human", "{input}"),
    ])
    | llm
)

Structured Output with Pydantic

Forces the LLM to return valid typed data instead of free text. Used to generate wild Pokémon with guaranteed fields and value ranges.

# main.py - schema + binding
class WildPokemonSchema(BaseModel):
    name: str
    type: str
    level: int = Field(ge=2, le=50)
    hp: int = Field(ge=20, le=120)
    ...

encounter_generator = llm.with_structured_output(WildPokemonSchema)
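
The payoff of the Pydantic schema is that out-of-range LLM output fails validation instead of corrupting game state. A small demonstration, with the schema fields abbreviated from the one above (assumes pydantic v2):

```python
from pydantic import BaseModel, Field, ValidationError

class WildPokemonSchema(BaseModel):
    name: str
    type: str
    level: int = Field(ge=2, le=50)
    hp: int = Field(ge=20, le=120)

# Valid data parses into a typed object
pidgey = WildPokemonSchema(name="Pidgey", type="normal", level=4, hp=28)

# Out-of-range values are rejected before they ever reach game logic
try:
    WildPokemonSchema(name="Mewtwo", type="psychic", level=99, hp=28)
except ValidationError as e:
    print("rejected:", e.error_count(), "error(s)")
```

with_structured_output performs this same validation on the model's tool-call output, which is why tool-calling support is required.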

LCEL Chains (the pipe operator)

The | operator composes prompt → LLM into a single callable chain. The narrator chain uses this to go from template to response in one .invoke() call.

# main.py - chain composition
narrator = (
    ChatPromptTemplate.from_messages([...])
    | llm  # prompt output pipes into the LLM
)

Message History (Memory)

MessagesPlaceholder("history") injects past conversation messages into the prompt, giving the LLM context about earlier events in the adventure.

# main.py - inside nodes like intro_node, explore_node
response = narrator.invoke({
    ...
    "history": state["messages"][-8:],  # last 8 messages for context
    "input": "...",
})

LangGraph Game Engine

StateGraph & TypedDict

The entire game state is a TypedDict. LangGraph passes it through every node, and each node returns only the keys it wants to update.

# main.py - state definition
class GameState(TypedDict):
    messages: Annotated[list, add_messages]  # reducer: appends, not replaces
    player_name: str
    location: str
    pokemon_team: list[dict]
    wild_pokemon: dict | None
    badge_count: int
    game_phase: str
    turn_count: int

Nodes: Game Phase Functions

Each node is a function that receives the full state and returns a partial update. The graph handles merging.

# main.py - node examples
def intro_node(state: GameState) -> dict:       # opening scene
def explore_node(state: GameState) -> dict:      # player roams
def encounter_check_node(state: GameState) -> dict:  # random encounter roll
def battle_node(state: GameState) -> dict:       # fight / catch / run
def heal_node(state: GameState) -> dict:         # Pokémon Center
def game_over_node(state: GameState) -> dict:    # defeat screen
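
For instance, a heal node only needs to return the keys it changes. A hedged sketch of what such a body could look like (the real main.py implementation may differ):

```python
def heal_node(state: dict) -> dict:
    """Restore every team member to full HP and move the player back to town."""
    healed = [dict(p, hp=p["max_hp"]) for p in state["pokemon_team"]]
    # Only the changed keys are returned; LangGraph merges them into the state
    return {"pokemon_team": healed, "location": "Pallet Town"}

state = {"pokemon_team": [{"name": "Charmander", "hp": 12, "max_hp": 39}],
         "location": "Route 1"}
print(heal_node(state))
```

Untouched keys such as badge_count or turn_count simply carry over unchanged.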

Conditional Edges: Branching Logic

add_conditional_edges calls a routing function that reads the state and returns a string key, directing the graph to the next node dynamically.

# main.py - routing
def route_after_explore(state: GameState) -> str:
    # read the player's last action from the message history
    last = state["messages"][-1].content.lower()
    if "town" in last or "heal" in last:
        return "heal"
    return "encounter_check"

graph.add_conditional_edges("explore", route_after_explore,
    {"heal": "heal", "encounter_check": "encounter_check"})

interrupt(): Human-in-the-Loop

Pauses graph execution and surfaces a prompt to the player. When the player responds, Command(resume=...) feeds their input back into the node.

# main.py - inside explore_node
action = interrupt(
    f"\n📍 {state['location']} | Team: {team_str}\n"
    f"What do you do? (explore / search grass / go to town / check team)"
)

# main.py - inside the game loop
result = game.invoke(Command(resume=player_input), config)

MemorySaver: Checkpointing

Required for interrupt() to work. Saves graph state between pauses so execution can resume exactly where it left off. Each game session gets a unique thread_id.

# main.py - compile with checkpointer
checkpointer = MemorySaver()
game = graph.compile(checkpointer=checkpointer)

# main.py - session config
config = {"configurable": {"thread_id": f"game-{name}"}}

add_messages Reducer

The Annotated[list, add_messages] annotation on the messages field tells LangGraph to append new messages instead of overwriting the list, which is essential for building conversation history.

# main.py - in GameState
messages: Annotated[list, add_messages]

# When a node returns {"messages": [AIMessage(...)]},
# LangGraph appends it to the existing list automatically.

Graph Construction & Compilation

Nodes are registered, edges define flow, and .compile() produces a runnable graph. The full wiring:

# main.py - graph assembly
graph = StateGraph(GameState)

graph.add_node("intro", intro_node)
graph.add_node("explore", explore_node)
graph.add_node("encounter_check", encounter_check_node)
graph.add_node("battle", battle_node)
graph.add_node("heal", heal_node)
graph.add_node("game_over", game_over_node)

graph.add_edge(START, "intro")
graph.add_edge("intro", "explore")
graph.add_conditional_edges("explore", route_after_explore, {...})
graph.add_edge("heal", "explore")
graph.add_conditional_edges("encounter_check", route_after_encounter_check, {...})
graph.add_conditional_edges("battle", route_after_battle, {...})
graph.add_edge("game_over", END)

game = graph.compile(checkpointer=checkpointer)

Built With

  • LangChain: LLM orchestration
  • LangGraph: Stateful graph execution
  • Ollama: Local/remote LLM inference
