llm_client
Crate: llm_client · Path: crates/llm/llm_client
Description (Cargo.toml): Simplified, provider-agnostic LLM client built on protocol_transport_core
Provider-neutral LLM HTTP client: typed requests, streaming SSE events, model profiles, and an LlmClient facade. WireFormat selects JSON shape (OpenAI-compatible vs Anthropic Messages), not a single vendor — OpenRouter uses WireFormat::OpenAiCompat; Bedrock Claude often uses WireFormat::AnthropicMessages. On native targets, streaming is incremental over SSE. On WASM, responses are buffered then parsed.
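The wire-format distinction can be illustrated with a standalone sketch. This is not the crate's actual serialization (which lives in its `prepare` module); it only mimics how a wire-format choice, rather than a vendor, determines the JSON request shape.

```rust
// Illustrative sketch only: a wire format selects the JSON body shape,
// independent of which vendor hosts the endpoint.
#[derive(Clone, Copy)]
enum WireFormatSketch {
    OpenAiCompat,      // {"model": ..., "messages": [...]}
    AnthropicMessages, // {"model": ..., "max_tokens": ..., "messages": [...]}
}

fn request_body(format: WireFormatSketch, model: &str, user_text: &str) -> String {
    match format {
        WireFormatSketch::OpenAiCompat => format!(
            r#"{{"model":"{model}","messages":[{{"role":"user","content":"{user_text}"}}]}}"#
        ),
        // Anthropic's Messages API requires an explicit max_tokens field.
        WireFormatSketch::AnthropicMessages => format!(
            r#"{{"model":"{model}","max_tokens":1024,"messages":[{{"role":"user","content":"{user_text}"}}]}}"#
        ),
    }
}

fn main() {
    let body = request_body(WireFormatSketch::OpenAiCompat, "gpt-4o-mini", "Hello");
    assert!(body.contains(r#""messages""#));
    println!("{body}");
}
```

The same OpenAI-compatible body would be sent to OpenRouter or any other OpenAI-compatible endpoint; only the base URL and auth change.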
Feature flags
From crates/llm/llm_client/Cargo.toml:
| Feature | Purpose |
|---|---|
| `default` | Empty; no features enabled by default |
Native-only dependencies (reqwest, tokio) are activated via cfg(not(target_arch = "wasm32")) target gates, not feature flags.
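The shape of such a target gate in a manifest looks roughly like the following. This is an illustrative fragment, not the crate's exact Cargo.toml; dependency versions are placeholders.

```toml
# Illustrative: native-only dependencies behind a target gate, not a feature flag.
[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
reqwest = "0.12"
tokio = { version = "1", features = ["rt"] }
```

Because the gate is evaluated at build time from the compilation target, WASM builds simply never see these dependencies; no feature negotiation is needed.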
Public API (from src/lib.rs)
Modules: auth, client, error, model_client, prepare, profile, stream, types
Re-exports:
- `LlmClient`, `LlmClientBuilder`, `WireFormat`: client facade and builder
- `ApiKeyAuth`, `AnthropicApiKeyAuth`, `AuthProvider`, `AzureCredential`, `AzureOpenAiAuth`: auth providers
- `LlmError`, `LlmResult`: error types
- `ApiMode`, `ClientCapabilities`: model client config
- `ModelCapabilities`, `ModelConfig`, `ModelFamily`, `ModelProfile`: model profiles
- `StreamingPolicy`: re-exported from `protocol_transport_core`
- `LlmEventStream`, `SseParser`, `StreamEvent`: streaming SSE types
- `types::*`: `ChatMessage`, `LlmRequest`, `LlmResponse`, `ToolCall`, `ToolSchema`, etc.
Example sketch
```rust
use llm_client::auth::ApiKeyAuth;
use llm_client::client::{LlmClient, WireFormat};
use llm_client::{ChatMessage, LlmRequest};

let client = LlmClient::builder(WireFormat::OpenAiCompat)
    .base_url("https://api.openai.com/v1")
    .auth(ApiKeyAuth::new(std::env::var("OPENAI_API_KEY").unwrap()))
    .build()?;

let resp = client
    .chat(LlmRequest {
        model: "gpt-4o-mini".into(),
        messages: vec![ChatMessage {
            role: "user".into(),
            content: Some("Hello".into()),
            ..Default::default()
        }],
        ..Default::default()
    })
    .await?;
```

See the LLM Client guide for a comprehensive walkthrough of setup, configuration, and streaming patterns.
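For intuition on what the crate's `SseParser` consumes on native targets, here is a standalone sketch of the common SSE framing used by streaming LLM APIs ("data: <json>" lines, with an OpenAI-style `[DONE]` sentinel). This is illustrative only and does not reproduce the crate's incremental parser.

```rust
// Illustrative: split a buffered SSE stream into its data payloads.
// Streaming LLM endpoints emit "data: <json chunk>" lines; OpenAI-compatible
// streams terminate with a "data: [DONE]" sentinel.
fn sse_data_events(raw: &str) -> Vec<String> {
    let mut events = Vec::new();
    for line in raw.lines() {
        if let Some(payload) = line.strip_prefix("data: ") {
            if payload == "[DONE]" {
                break; // end-of-stream sentinel
            }
            events.push(payload.to_string());
        }
    }
    events
}

fn main() {
    let raw = "data: {\"delta\":\"Hel\"}\n\ndata: {\"delta\":\"lo\"}\n\ndata: [DONE]\n";
    let events = sse_data_events(raw);
    assert_eq!(events.len(), 2);
}
```

This buffered form mirrors the WASM path described above (collect the whole response, then parse); the native path instead feeds bytes to the parser as they arrive.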
Full API reference: cargo doc -p llm_client --no-deps