Examples¶
All examples run without an API key using `MNESIS_MOCK_LLM=1`:

```
MNESIS_MOCK_LLM=1 uv run python examples/01_basic_session.py
MNESIS_MOCK_LLM=1 uv run python examples/05_parallel_processing.py
```
To use a real LLM, set the appropriate API key and omit the flag.
01 — Basic Session¶
Demonstrates the core `send()` loop:

- Creating a session with `MnesisSession.create()`
- Sending messages and reading `TurnResult`
- Monitoring `compaction_triggered`
- Inspecting cumulative token usage
- Async context manager lifecycle
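A minimal sketch of that loop, using hypothetical stand-ins for the names above (`MnesisSession`, `TurnResult`, `send()`) — the real signatures may differ, so treat this as a shape, not the API:

```python
import asyncio
from dataclasses import dataclass


# Hypothetical stand-in for the real TurnResult; fields mirror what the
# example inspects (text, compaction flag, cumulative tokens).
@dataclass
class TurnResult:
    text: str
    compaction_triggered: bool
    total_tokens: int


class MnesisSession:
    """Stub session illustrating create()/send() and the async lifecycle."""

    def __init__(self) -> None:
        self._tokens = 0

    @classmethod
    async def create(cls) -> "MnesisSession":
        return cls()

    async def __aenter__(self) -> "MnesisSession":
        return self

    async def __aexit__(self, *exc) -> bool:
        return False

    async def send(self, prompt: str) -> TurnResult:
        # Toy token accounting: one "token" per whitespace-separated word.
        self._tokens += len(prompt.split())
        return TurnResult(
            text=f"echo: {prompt}",
            compaction_triggered=self._tokens > 100,
            total_tokens=self._tokens,
        )


async def main() -> list:
    results = []
    session = await MnesisSession.create()
    async with session:  # context manager closes the session on exit
        for prompt in ["hello", "how are you"]:
            result = await session.send(prompt)
            results.append(result)
    return results


results = asyncio.run(main())
```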
02 — Long-Running Agent¶
`examples/02_long_running_agent.py`

- Subscribing to `EventBus` events
- Streaming callbacks via `on_part`
- Manually triggering `session.compact()`
- Inspecting compaction results
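The subscribe/emit and streaming-callback pattern can be sketched as follows; the `EventBus` methods and event names here are assumptions, not the library's actual API:

```python
from collections import defaultdict


# Toy EventBus: handlers registered per event name, invoked on emit.
class EventBus:
    def __init__(self) -> None:
        self._subs = defaultdict(list)

    def subscribe(self, event: str, handler) -> None:
        self._subs[event].append(handler)

    def emit(self, event: str, payload) -> None:
        for handler in self._subs[event]:
            handler(payload)


seen = []
bus = EventBus()
bus.subscribe("compaction.started", seen.append)  # hypothetical event name

# An on_part streaming callback collects partial output as it arrives:
chunks = []


def on_part(part: str) -> None:
    chunks.append(part)


for part in ["Hel", "lo"]:
    on_part(part)
bus.emit("compaction.started", {"reason": "budget"})
```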
03 — Tool Use¶
- Passing tool definitions to `send()`
- Handling `ToolPart` streaming states (pending → running → completed)
- Inspecting tombstoned tool outputs after pruning
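A sketch of the state transitions and tombstoning described above, with a hypothetical `ToolPart` class (the real one is not shown here):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ToolPart:
    """Hypothetical tool-call part tracking its streaming state."""

    name: str
    state: str = "pending"
    output: Optional[str] = None
    history: list = field(default_factory=list)

    def _move(self, state: str) -> None:
        self.history.append(state)
        self.state = state

    def run(self, fn, *args) -> None:
        # pending -> running -> completed, mirroring the streaming states.
        self._move("running")
        self.output = fn(*args)
        self._move("completed")

    def tombstone(self) -> None:
        # After pruning, the real output is replaced by a placeholder.
        self.output = "[tombstoned]"


part = ToolPart(name="add")
part.run(lambda a, b: str(a + b), 2, 3)
states = ["pending"] + part.history
part.tombstone()
```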
04 — Large Files¶
- Using `LargeFileHandler` to ingest files
- `FileRefPart` and content-addressed storage
- Cache hits on repeated file access
- Exploration summaries (AST outline, schema keys, etc.)
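Content-addressed storage and cache hits can be illustrated with a toy store keyed by content hash; this is a sketch of the idea, not the `LargeFileHandler` implementation:

```python
import hashlib


class FileStore:
    """Toy content-addressed blob store: identical content, identical key."""

    def __init__(self) -> None:
        self._blobs = {}
        self.cache_hits = 0

    def ingest(self, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        if digest in self._blobs:
            self.cache_hits += 1  # repeated access is a cache hit
        else:
            self._blobs[digest] = content
        # A FileRefPart would carry this digest instead of the raw bytes.
        return digest


store = FileStore()
ref1 = store.ingest(b"big file contents")
ref2 = store.ingest(b"big file contents")  # second ingest hits the cache
```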
05 — Parallel Processing¶
`examples/05_parallel_processing.py`

- `LLMMap` with a Pydantic output schema
- `AgenticMap` with independent sub-agent sessions
- Concurrency control and per-item retry
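The concurrency-control and per-item-retry pattern a map primitive like this typically uses can be sketched with a semaphore; this is a generic illustration, not the `LLMMap`/`AgenticMap` internals:

```python
import asyncio


async def map_with_retry(items, worker, max_concurrency=2, retries=2):
    """Run worker over items with bounded concurrency and per-item retry."""
    sem = asyncio.Semaphore(max_concurrency)

    async def run_one(item):
        async with sem:  # at most max_concurrency workers in flight
            for attempt in range(retries + 1):
                try:
                    return await worker(item)
                except Exception:
                    if attempt == retries:
                        raise  # retries exhausted for this item

    # gather preserves input order in its results
    return await asyncio.gather(*(run_one(i) for i in items))


attempts = {}


async def flaky_double(n):
    attempts[n] = attempts.get(n, 0) + 1
    if n == 3 and attempts[n] == 1:
        raise RuntimeError("transient failure")  # fails once, then succeeds
    return n * 2


results = asyncio.run(map_with_retry([1, 2, 3], flaky_double))
```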
06 — BYO-LLM¶
- Using `session.record()` to inject turns from your own SDK
- Building the LLM message list from `session.messages()`
- Passing explicit `TokenUsage` for accurate compaction budgeting
This example includes a canned-response stub so it runs without any API key. See BYO-LLM for a full explanation.
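The record-your-own-turns pattern could look like the following sketch, with a stub session and a canned-response function standing in for your SDK; method signatures are assumptions:

```python
from dataclasses import dataclass


@dataclass
class TokenUsage:
    """Explicit token counts supplied by the caller's own SDK."""

    input_tokens: int
    output_tokens: int


class Session:
    """Stub session exposing messages() and record() as described above."""

    def __init__(self) -> None:
        self._messages = []

    def messages(self) -> list:
        return list(self._messages)

    def record(self, role, content, usage=None) -> None:
        self._messages.append({"role": role, "content": content, "usage": usage})


def canned_llm(messages):
    # Stand-in for your own SDK call; returns a canned response plus usage.
    return "canned reply", TokenUsage(input_tokens=12, output_tokens=3)


session = Session()
session.record("user", "hello")
# Build the request from the session's message list, then record the reply
# along with explicit TokenUsage so compaction budgeting stays accurate.
reply, usage = canned_llm(session.messages())
session.record("assistant", reply, usage)
```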