LLM overview
These crates build on each other:
- `llm_client` — provider-neutral HTTP client with typed requests, streaming SSE, model profiles, and wire-format abstraction (OpenAI-compatible / Anthropic Messages).
- `llm_context_core` — WASM-compatible context window management: token estimation, budget tracking, history strategies, and long-term memory traits.
- `llm_tools` — lightweight, WASM-safe tool registry and `registry_from!` macro for collecting LLM tools and dispatching by name.
- `llm_tool_macros` — proc-macro crate providing `#[llm_tool]` to register plain Rust functions as LLM tools with auto-generated JSON Schema.
- `tool_web_search` — reusable LLM tool that performs web search via the Tavily API and returns a `ToolSpec` for any agent's registry.
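The registry idea described for `llm_tools` — collect tools under a name, then dispatch by name — can be sketched in plain Rust. The types below (`Registry`, `ToolFn`) are illustrative stand-ins, not the crate's actual API; see the crate's `src/lib.rs` for the real types such as `ToolSpec`:

```rust
use std::collections::HashMap;

// Illustrative tool shape: a handler taking string arguments (e.g. raw
// JSON) and returning a string result. The real crate's types may differ.
type ToolFn = fn(&str) -> String;

struct Registry {
    tools: HashMap<String, ToolFn>,
}

impl Registry {
    fn new() -> Self {
        Registry { tools: HashMap::new() }
    }

    fn register(&mut self, name: &str, f: ToolFn) {
        self.tools.insert(name.to_string(), f);
    }

    // Dispatch by name: look the tool up and invoke it, or return None
    // for an unknown tool.
    fn call(&self, name: &str, args: &str) -> Option<String> {
        self.tools.get(name).map(|f| f(args))
    }
}

fn echo(args: &str) -> String {
    format!("echo: {args}")
}

fn main() {
    let mut reg = Registry::new();
    reg.register("echo", echo);
    assert_eq!(reg.call("echo", "hi"), Some("echo: hi".to_string()));
    assert!(reg.call("missing", "").is_none());
}
```

A `HashMap` keyed by tool name is the natural fit here because an LLM's tool-call response identifies the tool by its string name, so lookup has to happen at runtime rather than through static dispatch.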
Feature flags and public exports are defined in each crate's `Cargo.toml` and `src/lib.rs`. Use those as the source of truth; this site summarizes them and does not replace `cargo doc`.
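As an illustration of the budget-tracking idea from `llm_context_core`, here is a minimal sketch. The 4-characters-per-token heuristic and the `Budget` type are assumptions for this example only; the crate's actual estimator and types may differ:

```rust
// Rough heuristic: English text averages ~4 characters per token.
// Illustrative only; a real estimator may use a tokenizer instead.
fn estimate_tokens(text: &str) -> usize {
    (text.chars().count() + 3) / 4
}

// Tracks token usage against a fixed context-window limit.
struct Budget {
    limit: usize,
    used: usize,
}

impl Budget {
    fn new(limit: usize) -> Self {
        Budget { limit, used: 0 }
    }

    // Accept the message only if its estimated cost fits the window.
    fn try_add(&mut self, text: &str) -> bool {
        let cost = estimate_tokens(text);
        if self.used + cost > self.limit {
            return false;
        }
        self.used += cost;
        true
    }
}

fn main() {
    let mut b = Budget::new(5);
    assert!(b.try_add("hello world"));  // 11 chars ≈ 3 tokens, fits
    assert!(!b.try_add("hello world")); // 3 more would exceed the limit of 5
    assert_eq!(b.used, 3);
}
```

The point of a budget like this is to decide, before sending a request, which history entries still fit — which is what the history strategies in `llm_context_core` build on.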
See also: LLM Client guide for end-to-end setup, and LLM Tools guide for registering custom tools.
Full API reference: coming soon via published rustdoc (see Getting started for local cargo doc).