SDKs & Agents
Official client libraries for Python and TypeScript — with context optimization that cuts token costs by 82–90% (measured on live APIs). Plus native integrations for LangChain, CrewAI, AutoGen, LangGraph, OpenAI Agents SDK, and Anthropic SDK.
The client.context() method handles both simple salience windows and goal-driven optimization: pass just maxFacts for a fast window, or add goals/sessionId to trigger the full optimization engine.
Python SDK
Full async and sync clients with LangChain tool wrappers. Published on PyPI — no source build needed.
Installation
pip install nocturnusai
# With framework integrations
pip install nocturnusai[langchain]
pip install nocturnusai[crewai]
pip install nocturnusai[autogen]
pip install nocturnusai[langgraph]
pip install nocturnusai[openai-agents]
# Install all integrations
pip install nocturnusai[all]
Sync Client
from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300", tenant_id="default") as client:
    # Store facts
    client.assert_fact("parent", ["alice", "bob"])
    client.assert_fact("parent", ["bob", "charlie"])

    # Teach a rule
    client.assert_rule(
        head={"predicate": "grandparent", "args": ["?x", "?z"]},
        body=[
            {"predicate": "parent", "args": ["?x", "?y"]},
            {"predicate": "parent", "args": ["?y", "?z"]},
        ]
    )

    # Infer
    results = client.infer("grandparent", ["?who", "charlie"])
    print(results)  # [grandparent(alice, charlie)]
Async Client
import asyncio

from nocturnusai import NocturnusAIClient

async def main():
    async with NocturnusAIClient("http://localhost:9300") as client:
        await client.assert_fact("human", ["socrates"])
        results = await client.infer("mortal", ["?who"])
        print(results)

asyncio.run(main())
process_turns() — Conversation Ingestion
process_turns() is the primary entry point for feeding raw conversation turns into Nocturnus.
It extracts structured facts, stores them under the given scope, and returns a briefing ready for your system prompt —
including an incremental delta so you only send what changed.
from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300", database="mydb", tenant_id="agent-1") as client:
    # Process raw conversation turns into an optimized context window
    result = client.process_turns(
        turns=[
            "user: Reset my password for john@example.com",
            "assistant: I'll look up your account now.",
            "tool: Account found: John Doe, plan: Enterprise",
        ],
        scope="support-session",
        session_id="sess-42",
    )
    print(result.briefing_delta)       # Only what changed since last call (equals full briefing on first call)
    print(result.new_facts_extracted)  # Number of structured facts pulled from turns
On later calls with the same session_id, briefing_delta contains only the additions — so you can append it to your system prompt instead of replacing the whole thing.
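The append-only pattern can be sketched without a running server. This is a minimal model of folding deltas into a prompt; apply_briefing_delta is a hypothetical helper, not part of the SDK, and it assumes briefing_delta is plain text:

```python
def apply_briefing_delta(system_prompt: str, briefing_delta: str) -> str:
    """Append an incremental briefing delta to a running system prompt.

    On the first call the delta equals the full briefing, so starting from an
    empty prompt yields the complete briefing; later calls only append changes.
    """
    if not briefing_delta:
        return system_prompt  # nothing changed this turn
    if not system_prompt:
        return briefing_delta  # first turn: delta is the full briefing
    return f"{system_prompt}\n{briefing_delta}"

# Simulated multi-turn usage (strings stand in for result.briefing_delta)
prompt = ""
prompt = apply_briefing_delta(prompt, "User: John Doe, plan: Enterprise")
prompt = apply_briefing_delta(prompt, "")  # no new facts this turn
prompt = apply_briefing_delta(prompt, "Request: password reset")
print(prompt)
```

Because only non-empty deltas are appended, the prompt grows with new facts rather than being rebuilt every turn.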
Context Optimization
The Python SDK includes full support for the Context Management Engine — the feature that cuts token costs by 82–90% (measured on live APIs). Use the unified client.context() method for both simple and goal-driven windows:
from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    session_id = "session-42"

    # Simple salience-ranked window (fast path)
    ctx = client.context(max_facts=50)

    # Goal-driven context optimization (add goals to trigger the optimization engine)
    ctx = client.context(
        goals=[{"predicate": "eligible_for_sla", "args": ["acme_corp"]}],
        max_facts=25,
        session_id=session_id,
        format="natural",  # returns formattedText for direct LLM use
    )
    approx_tokens = max(1, ctx.total_char_count // 4)
    print(f"{ctx.total_facts_included} facts, ~{approx_tokens} tokens")

    # Incremental diffs for multi-turn conversations
    diff = client.diff_context(session_id=session_id)  # only sends what changed since last call

    # Clear diff state when the conversation ends
    client.clear_context_session(session_id)
The context_window() and optimize_context() methods still work but are deprecated. Migrate to context(), which supports all parameters from both.
| Context Capability | Python SDK | TypeScript SDK |
|---|---|---|
| Unified context (simple + goal-driven) | context() | context() |
| Salience-ranked window (deprecated) | context_window() | contextWindow() |
| Goal-driven window (deprecated) | optimize_context() | optimizeContext() |
| Incremental diff | diff_context() | diffContext() |
| Clear diff session | clear_context_session() | clearContextSession() |
| Ingest text + optimize | ingest_and_optimize() | ingestAndOptimize() |
LangChain Integration
Drop Nocturnus into any LangChain agent in one line. Seven pre-built tools map directly to the core API:
| Tool Name | Maps To | Description |
|---|---|---|
| nocturnusai_assert | /tell | Store a fact |
| nocturnusai_query | /query | Pattern match stored facts |
| nocturnusai_infer | /ask | Run logical inference |
| nocturnusai_teach | /teach | Define a logical rule |
| nocturnusai_context | /memory/context | Get salience-ranked context window |
| nocturnusai_optimize | /memory/context | Goal-driven optimized context |
| nocturnusai_extract | /extract | Extract facts from raw text (requires LLM) |
Quick Start
from nocturnusai import SyncNocturnusAIClient
from nocturnusai.langchain import get_nocturnusai_tools
client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)
With a LangChain Agent
from langchain_anthropic import ChatAnthropic
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
llm = ChatAnthropic(model="claude-sonnet-4-6")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an assistant with access to a verified knowledge base."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
result = executor.invoke({
    "input": "Alice is Bob's parent. Bob is Charlie's parent. Who is Charlie's grandparent?"
})
print(result["output"])
# "Alice is Charlie's grandparent."
CrewAI Integration
Five BaseTool subclasses and a Storage backend for CrewAI agents. Each tool has a Pydantic input schema for structured argument validation.
| Tool | Purpose |
|---|---|
| NocturnusAITellTool | Assert a fact into the knowledge base |
| NocturnusAIAskTool | Run logical inference queries |
| NocturnusAITeachTool | Define logical rules |
| NocturnusAIForgetTool | Retract facts |
| NocturnusAIContextTool | Get salience-ranked context window |
Quick Start
from nocturnusai import SyncNocturnusAIClient
from nocturnusai.crewai import get_nocturnusai_tools, NocturnusAIStorage
client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)
storage = NocturnusAIStorage(client=client)
With a CrewAI Agent
from crewai import Agent, Task, Crew
reasoner = Agent(
    role="Knowledge Reasoner",
    goal="Store facts and answer questions using logical inference",
    backstory="You are an expert at structured reasoning.",
    tools=tools,
)

task = Task(
    description="Alice is Bob's parent. Bob is Charlie's parent. "
                "Who is Charlie's grandparent?",
    agent=reasoner,
    expected_output="The grandparent relationship",
)

crew = Crew(agents=[reasoner], tasks=[task])
result = crew.kickoff()
AutoGen Integration
Five plain Python tool functions and an async Memory protocol implementation for AutoGen agents.
Quick Start
from nocturnusai import SyncNocturnusAIClient
from nocturnusai.autogen import get_nocturnusai_tools, NocturnusAIMemory
client = SyncNocturnusAIClient("http://localhost:9300")
# Get tool functions: tell, ask, teach, forget, context
tools = get_nocturnusai_tools(client)
# Or use as agent memory
memory = NocturnusAIMemory(client=client)
Tool Functions
The five tool functions work with or without autogen-agentchat installed:
| Function | Purpose |
|---|---|
| nocturnusai_tell | Assert a fact (predicate + JSON args) |
| nocturnusai_ask | Query via inference (use ?-prefixed variables) |
| nocturnusai_teach | Define a logical rule (JSON head + body) |
| nocturnusai_forget | Retract a fact |
| nocturnusai_context | Get salience-ranked context window |
Memory Protocol
NocturnusAIMemory implements the AutoGen Memory interface (add, query, update_context, clear, close), storing messages as NocturnusAI facts with salience-ranked retrieval.
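The storage-and-retrieval pattern behind this can be illustrated with a toy in-memory class. This is a simplified model of salience-ranked retrieval only — not the real NocturnusAIMemory, which persists facts on the server and implements the full AutoGen Memory protocol:

```python
from dataclasses import dataclass, field

@dataclass
class ToySalienceMemory:
    """Illustrative model: entries carry a salience score, and queries return
    the highest-salience entries first, truncated to the window size."""
    entries: list = field(default_factory=list)  # (salience, content) pairs

    def add(self, content: str, salience: float = 0.5) -> None:
        self.entries.append((salience, content))

    def query(self, limit: int = 3) -> list:
        ranked = sorted(self.entries, key=lambda e: -e[0])
        return [content for _, content in ranked[:limit]]

mem = ToySalienceMemory()
mem.add("user prefers dark mode", salience=0.9)
mem.add("smalltalk about weather", salience=0.1)
mem.add("user is on Enterprise plan", salience=0.8)
print(mem.query(limit=2))  # ['user prefers dark mode', 'user is on Enterprise plan']
```

The real implementation delegates ranking to the server's salience engine; only the "rank, then truncate to a window" shape carries over.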
LangGraph Integration
A checkpoint saver that persists LangGraph graph state as NocturnusAI facts, using scopes for thread isolation.
Quick Start
from nocturnusai import SyncNocturnusAIClient
from nocturnusai.langgraph import NocturnusAICheckpointSaver
client = SyncNocturnusAIClient("http://localhost:9300")
saver = NocturnusAICheckpointSaver(client=client)
# Use with a LangGraph compiled graph
app = graph.compile(checkpointer=saver)
How It Works
Each checkpoint is stored as a fact with predicate lg_checkpoint and args [thread_id, state_json, metadata_json]. LangGraph threads map to NocturnusAI scopes for isolation. The saver implements put, get_tuple, and list for full checkpoint lifecycle management.
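The round trip through that fact shape can be sketched as follows. The helper names are illustrative (not SDK methods); only the predicate name and args layout come from the description above:

```python
import json

def encode_checkpoint(thread_id: str, state: dict, metadata: dict) -> tuple:
    """Build the (predicate, args) pair described above:
    args = [thread_id, state_json, metadata_json]."""
    return "lg_checkpoint", [thread_id, json.dumps(state), json.dumps(metadata)]

def decode_checkpoint(args: list) -> tuple:
    """Recover the original thread id, state, and metadata from a fact's args."""
    thread_id, state_json, metadata_json = args
    return thread_id, json.loads(state_json), json.loads(metadata_json)

predicate, args = encode_checkpoint("thread-7", {"step": 3}, {"source": "loop"})
thread_id, state, metadata = decode_checkpoint(args)
print(predicate, thread_id, state, metadata)
```

Serializing state and metadata to JSON strings keeps each checkpoint a flat fact, which is what lets the saver reuse the ordinary fact store and scope-based isolation.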
OpenAI Agents SDK Integration
Five tool functions that work with or without the openai-agents package. When the package is installed, functions are automatically decorated with @function_tool.
Quick Start
from nocturnusai import SyncNocturnusAIClient
from nocturnusai.openai_agents import get_nocturnusai_tools
client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)
# Use with an OpenAI Agent
from agents import Agent
agent = Agent(
    name="reasoner",
    instructions="You are a knowledge reasoning agent.",
    tools=tools,
)
Anthropic SDK Integration
JSON schema tool definitions and a dispatcher function for use with the Anthropic Messages API. Zero framework dependencies — works with the raw anthropic SDK.
Quick Start
import anthropic

from nocturnusai import SyncNocturnusAIClient
from nocturnusai.anthropic_tools import get_nocturnusai_tool_definitions, handle_tool_call

anthropic_client = anthropic.Anthropic()
client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tool_definitions()

# Pass tool definitions to Claude
response = anthropic_client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Alice likes Bob. Who likes Bob?"}],
)

# Handle tool calls from Claude's response
for block in response.content:
    if block.type == "tool_use":
        result = handle_tool_call(client, block.name, block.input)
Tool Definitions
Returns 5 Anthropic-compatible tool definitions with full JSON schemas: nocturnusai_tell, nocturnusai_ask, nocturnusai_teach, nocturnusai_forget, and nocturnusai_context. The handle_tool_call() dispatcher routes tool names to the appropriate client methods.
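The dispatch step is just name-based routing to client methods. Here is a minimal model of that pattern — dispatch_tool_call and the two-entry mapping are illustrative, not the SDK's actual handle_tool_call(), and StubClient stands in for a live connection:

```python
def dispatch_tool_call(client, name: str, tool_input: dict):
    """Route an Anthropic tool_use block's name to a client method."""
    handlers = {
        "nocturnusai_tell": lambda: client.assert_fact(
            tool_input["predicate"], tool_input["args"]
        ),
        "nocturnusai_ask": lambda: client.infer(
            tool_input["predicate"], tool_input["args"]
        ),
    }
    if name not in handlers:
        raise ValueError(f"unknown tool: {name}")
    return handlers[name]()

# Stub client so the flow runs without a server
class StubClient:
    def assert_fact(self, predicate, args):
        return {"ok": True, "predicate": predicate, "args": args}

    def infer(self, predicate, args):
        return [f"{predicate}({', '.join(args)})"]

out = dispatch_tool_call(
    StubClient(), "nocturnusai_ask", {"predicate": "likes", "args": ["?who", "bob"]}
)
print(out)  # ['likes(?who, bob)']
```

In practice the result would be serialized back to Claude as a tool_result content block to complete the tool-use turn.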
TypeScript SDK
Zero-dependency typed client. Works in Node.js 18+ and modern browsers. Published on npm.
Installation
npm install nocturnusai-sdk
Usage
import { NocturnusAIClient } from 'nocturnusai-sdk';

const client = new NocturnusAIClient({
  baseUrl: 'http://localhost:9300',
  database: 'mydb',
  tenantId: 'default',
});

// Assert facts
await client.assertFact('parent', ['alice', 'bob']);
await client.assertFact('parent', ['bob', 'charlie']);

// Assert a rule
await client.assertRule(
  { predicate: 'grandparent', args: ['?x', '?z'] },
  [
    { predicate: 'parent', args: ['?x', '?y'] },
    { predicate: 'parent', args: ['?y', '?z'] },
  ]
);

// Infer
const results = await client.infer('grandparent', ['?who', 'charlie']);
console.log(results);
processTurns() — Conversation Ingestion
processTurns() is the primary entry point for feeding raw conversation turns into Nocturnus.
Returns a briefing and a delta field so you only send what changed to the model.
import { NocturnusAIClient } from 'nocturnusai-sdk';
const client = new NocturnusAIClient({
  baseUrl: 'http://localhost:9300',
  database: 'mydb',
  tenantId: 'agent-1',
});

const result = await client.processTurns({
  turns: [
    'user: Reset my password for john@example.com',
    "assistant: I'll look up your account now.",
    'tool: Account found: John Doe, plan: Enterprise',
  ],
  scope: 'support-session',
  sessionId: 'sess-42',
});

console.log(result.briefingDelta);      // Only what changed (equals full briefing on first call)
console.log(result.newFactsExtracted);  // Number of structured facts
Context Optimization with TypeScript
The TypeScript SDK covers the full context-management loop with the unified context() method,
incremental diffs, session cleanup, and one-shot text ingestion.
// 1) Simple salience-ranked window (fast path)
const window = await client.context({
  maxFacts: 25,
  minSalience: 0.1,
  scope: 'session_acme_42',
});

// 2) Goal-driven window (add goals to trigger the optimization engine)
const optimized = await client.context({
  goals: [{ predicate: 'eligible_for_sla', args: ['acme_corp'] }],
  maxFacts: 25,
  sessionId: 'session-42',
  format: 'natural', // returns formattedText for direct LLM use
  includeRules: true,
});

// 3) Incremental diff for later turns
const diff = await client.diffContext({
  sessionId: 'session-42',
  maxFacts: 25,
});

// 4) End session when thread closes
await client.clearContextSession('session-42');

// 5) One-shot ingestion from raw text
const ingested = await client.ingestAndOptimize({
  text: 'Customer says they are enterprise and blocked on SLA credits.',
  goals: [{ predicate: 'eligible_for_sla', args: ['acme_corp'] }],
  maxFacts: 15,
});

console.log(
  optimized.totalFactsIncluded,
  diff.added?.length ?? 0,
  ingested.context.totalCharCount,
);
The old contextWindow() and optimizeContext() methods are deprecated. Use the unified context() method instead.
MCP Client
The SDK also includes an MCP client for JSON-RPC 2.0 tool calls:
import { NocturnusAIMCPClient } from 'nocturnusai-sdk';
const mcp = new NocturnusAIMCPClient({
  baseUrl: 'http://localhost:9300',
});

// Initialize MCP session
await mcp.initialize();

// Discover available tools
const tools = await mcp.listTools();

// Call a tool
const result = await mcp.callTool('ask', {
  predicate: 'grandparent',
  args: ['?who', 'charlie'],
});