mnesis.models.message

message

Core message and part data models for Mnesis.

FinishReason

Bases: StrEnum

Why the LLM stopped generating.

String enum so result.finish_reason == "stop" comparisons work without importing the enum.

Important — FinishReason.ERROR: when finish_reason is FinishReason.ERROR (or "error"), the LLM call failed. TurnResult.text contains an error description, not a model response. Always check finish_reason before processing text.

TextPart

Bases: BaseModel

A plain text segment of a message.

ReasoningPart

Bases: BaseModel

Chain-of-thought reasoning text (e.g. Claude extended thinking, o1 reasoning).

ToolStatus

Bases: BaseModel

Mutable lifecycle status of a tool call. Updated in-place via update_part_status().

ToolPart

Bases: BaseModel

A tool call and its result within an assistant message.

compacted_at property

compacted_at: int | None

Convenience accessor for the pruning tombstone timestamp.

is_protected property

is_protected: bool

Protected tools are never pruned by ToolOutputPruner.

CompactionMarkerPart

Bases: BaseModel

Metadata marker embedded in a compaction summary message.

Records which messages were compacted and which escalation level succeeded.

StepStartPart

Bases: BaseModel

Marker for the start of an agentic step within a turn.

StepFinishPart

Bases: BaseModel

Marker for the end of an agentic step, including token usage.

PatchPart

Bases: BaseModel

A git-format unified diff representing file changes made during a turn.

FileRefPart

Bases: BaseModel

A content-addressed reference to a large file stored outside the context window.

When the LargeFileHandler intercepts a file that exceeds the inline threshold, it stores it externally and replaces the content with this reference object. The ContextBuilder renders this as a structured [FILE: ...] block.

content_id instance-attribute

content_id: str

SHA-256 hex digest of the file content. Used as cache key.

path instance-attribute

path: str

Original file path as provided by the caller.

file_type instance-attribute

file_type: str

Detected MIME type or language identifier (e.g. 'python', 'application/json').

token_count instance-attribute

token_count: int

Estimated token count of the full file content.

exploration_summary instance-attribute

exploration_summary: str

Deterministic structural description of the file (classes, functions, keys, etc.).

TokenUsage

Bases: BaseModel

Token counts for a single LLM response or cumulative session usage.

effective_total

effective_total() -> int

Return total, computing from parts when the explicit total is zero.
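The fallback behavior can be sketched as follows. The field names (`input`, `output`, `total`) are assumptions for illustration and may not match mnesis's actual schema:

```python
from dataclasses import dataclass


@dataclass
class TokenUsage:
    """Simplified stand-in; field names are assumed, not mnesis's schema."""
    input: int = 0
    output: int = 0
    total: int = 0

    def effective_total(self) -> int:
        # Use the provider-reported total when present; otherwise
        # fall back to summing the individual parts.
        return self.total if self.total else self.input + self.output


assert TokenUsage(input=10, output=5, total=0).effective_total() == 15
assert TokenUsage(input=10, output=5, total=20).effective_total() == 20
```

This guards against providers that omit an explicit total in their usage payload.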

MessageError

Bases: BaseModel

Structured error attached to an assistant message when the turn fails.

Message

Bases: BaseModel

A single message stored in the ImmutableStore.

Messages are append-only — the only mutable fields are token usage and finish_reason (updated after streaming completes), and part status fields (managed via update_part_status).

id instance-attribute

id: str

ULID-based sortable ID, e.g. msg_01JXYZ6K3MNPQR4STUVWXYZ01.

created_at class-attribute instance-attribute

created_at: int = Field(
    default_factory=lambda: int(time() * 1000)
)

Unix millisecond timestamp.

is_summary class-attribute instance-attribute

is_summary: bool = False

True if this message is a compaction summary produced by the CompactionEngine.

mode class-attribute instance-attribute

mode: str | None = None

'compaction' for summary messages, None for regular turns.

MessageWithParts

Bases: BaseModel

A message together with its ordered list of typed parts.

model_id property

model_id: str

The model that produced this message (empty string for user messages).

tokens property

tokens: TokenUsage | None

Token usage for this message (None for user messages or before streaming completes).

text_content

text_content() -> str

Concatenate text from all TextPart objects in this message.
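A sketch of the concatenation behavior: only TextPart segments contribute, while tool calls, reasoning, and marker parts are skipped. Part shapes here are simplified stand-ins:

```python
from dataclasses import dataclass


@dataclass
class TextPart:
    """Simplified stand-in for mnesis's TextPart."""
    text: str


@dataclass
class ToolPart:
    """Simplified stand-in; only used to show it is skipped."""
    name: str


def text_content(parts: list) -> str:
    # Join text from TextPart objects only, in part order.
    return "".join(p.text for p in parts if isinstance(p, TextPart))


parts = [TextPart("Hello "), ToolPart("search"), TextPart("world")]
assert text_content(parts) == "Hello world"
```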

ContextBudget

Bases: BaseModel

Token budget for assembling the context window on a single turn.

usable property

usable: int

Maximum tokens available for conversation history.

fits

fits(token_count: int) -> bool

Return True if token_count fits within the usable budget.

remaining

remaining(used: int) -> int

Return remaining tokens given how many have been used.
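The budget arithmetic can be sketched as below. The `context_limit` and `reserved` field names are assumptions for illustration; the real model may partition the budget differently:

```python
from dataclasses import dataclass


@dataclass
class ContextBudget:
    """Simplified stand-in; field names are assumed."""
    context_limit: int  # model's total context window
    reserved: int       # tokens held back for output, system prompt, etc.

    @property
    def usable(self) -> int:
        # Maximum tokens available for conversation history.
        return self.context_limit - self.reserved

    def fits(self, token_count: int) -> bool:
        return token_count <= self.usable

    def remaining(self, used: int) -> int:
        return self.usable - used


budget = ContextBudget(context_limit=128_000, reserved=8_000)
assert budget.usable == 120_000
assert budget.fits(100_000)
assert budget.remaining(100_000) == 20_000
```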

TurnResult

Bases: BaseModel

The result of a single MnesisSession.send() call.

Contains the assistant's text response, token usage, finish reason, and indicators for compaction and doom loop detection.

Important — finish_reason == "error": When finish_reason is "error", the LLM call failed and text contains an error description, not an LLM response. Do not treat the text as a model reply in this case. Always check finish_reason before processing text.

finish_reason instance-attribute

finish_reason: FinishReason | str

Why the LLM stopped generating.

Type is FinishReason | str: known values are the FinishReason enum members; providers may return additional string values.

  • FinishReason.STOP / "stop" — natural end of response.
  • FinishReason.MAX_TOKENS / "max_tokens" — output token limit reached.
  • FinishReason.LENGTH / "length" — alias used by some providers.
  • FinishReason.TOOL_CALLS / "tool_calls" — model requested tool execution.
  • FinishReason.ERROR / "error" — the LLM call failed; text contains the error description, not a model response. Do not pass this text to the model as if it were a real reply.
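A caller-side guard for the error case can look like this. The TurnResult here is a simplified stand-in, and `handle` is a hypothetical helper, not part of the mnesis API:

```python
from dataclasses import dataclass


@dataclass
class TurnResult:
    """Simplified stand-in for mnesis's TurnResult."""
    text: str
    finish_reason: str


def handle(result: TurnResult) -> str:
    """Hypothetical caller-side guard: check finish_reason before text."""
    if result.finish_reason == "error":
        # On error, text is an error description, not a model reply.
        raise RuntimeError(f"LLM call failed: {result.text}")
    return result.text
```

Raising (or branching) on the error case up front keeps failure text from being mistaken for a model reply downstream.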

cost instance-attribute

cost: float

Estimated USD cost of this turn. Always 0.0 — not yet implemented.

CompactionResult

Bases: BaseModel

The result of a compaction run.

Records which escalation level succeeded, how many messages were compressed, the token savings achieved, and how many tool outputs were pruned.

level_used instance-attribute

level_used: int

1 = selective LLM, 2 = aggressive LLM, 3 = deterministic fallback.

pruned_tool_outputs class-attribute instance-attribute

pruned_tool_outputs: int = 0

Number of tool output parts tombstoned by the ToolOutputPruner during this run.

pruned_tokens class-attribute instance-attribute

pruned_tokens: int = 0

Tokens reclaimed by pruning tool outputs during this run.

PruneResult

Bases: BaseModel

The result of a ToolOutputPruner pass.

RecordResult

Bases: BaseModel

The result of a MnesisSession.record() call.

Records the IDs of the persisted user and assistant messages so callers can reference them later (e.g. for event subscriptions or debugging).