| Title: | Unified Interface for AI Model Providers |
| Version: | 1.1.0 |
| Description: | A production-grade AI toolkit for R featuring a layered architecture (Specification, Utilities, Providers, Core), request interception support, robust error handling with exponential retry delays, support for multiple AI model providers ('OpenAI', 'Anthropic', etc.), local small language model inference, distributed 'MCP' ecosystem, multi-agent orchestration, progressive knowledge loading through skills, and a global skill store for sharing AI capabilities. |
| License: | MIT + file LICENSE |
| Encoding: | UTF-8 |
| RoxygenNote: | 7.3.3 |
| Imports: | R6, httr2, jsonlite, rlang, yaml, callr, processx, memoise, digest, parallel, stats, tools, utils, base64enc, curl, openssl |
| Suggests: | testthat (≥ 3.0.0), httptest2, cli, knitr, rmarkdown, skimr, evaluate, shiny, shinyjs, bslib, commonmark, ggplot2, dplyr, readr, readxl, DBI, RSQLite, torch, onnx, rstudioapi, DT, withr, httpuv, devtools, fs, dotenv, quarto, pdftools |
| Config/testthat/edition: | 3 |
| URL: | https://github.com/YuLab-SMU/aisdk, https://yulab-smu.top/aisdk/ |
| BugReports: | https://github.com/YuLab-SMU/aisdk/issues |
| NeedsCompilation: | no |
| Packaged: | 2026-03-27 00:48:33 UTC; xiayh |
| Author: | Yonghe Xia [aut, cre] |
| Maintainer: | Yonghe Xia <xiayh17@gmail.com> |
| Depends: | R (≥ 4.1.0) |
| Repository: | CRAN |
| Date/Publication: | 2026-03-31 15:00:07 UTC |
aisdk: AI SDK for R
Description
A production-grade AI SDK for R featuring a layered architecture, middleware support, robust error handling, and support for multiple AI model providers.
Architecture
The SDK uses a 4-layer architecture:
- Specification Layer: Abstract interfaces (LanguageModelV1, EmbeddingModelV1)
- Utilities Layer: Shared tools (HTTP, retry, registry, middleware)
- Provider Layer: Concrete implementations (OpenAIProvider, etc.)
- Core Layer: High-level API (generate_text, stream_text, embed)
Quick Start
library(aisdk)
# Create an OpenAI provider
openai <- create_openai()
# Generate text
result <- generate_text(
model = openai$language_model("gpt-4o"),
prompt = "Explain R in one sentence."
)
print(result$text)
# Or use the registry for cleaner syntax
get_default_registry()$register("openai", openai)
result <- generate_text("openai:gpt-4o", "Hello!")
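Streaming works the same way. The callback convention callback(text, done) is shared across the streaming methods documented below; the exact stream_text() signature is assumed here to mirror generate_text() with an added callback, and the model id is a placeholder requiring an OPENAI_API_KEY:

```r
# Stream tokens as they arrive and print them incrementally
stream_text(
  model = openai$language_model("gpt-4o"),
  prompt = "Write a haiku about R.",
  callback = function(text, done) {
    if (!done) cat(text) else cat("\n")
  }
)
```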
Author(s)
Maintainer: Yonghe Xia xiayh17@gmail.com
See Also
Useful links:
Report bugs at https://github.com/YuLab-SMU/aisdk/issues
SDK Feature Flags
Description
Global feature flags for controlling SDK behavior. These flags allow gradual migration to new features while maintaining backward compatibility.
Usage
.sdk_features
Format
An object of class environment of length 7.
System Prompt for Auto-Fix
Description
System Prompt for Auto-Fix
Usage
AUTO_FIX_SYSTEM_PROMPT
Format
An object of class character of length 1.
Agent Class
Description
R6 class representing an AI agent. Agents are the worker units in the multi-agent architecture. Each agent has a name, description (for semantic routing), system prompt (persona), and a set of tools it can use.
Key design principle: Agents are stateless regarding conversation history. The ChatSession holds the shared state (history, memory, environment).
Public fields
nameUnique identifier for this agent.
descriptionDescription of the agent's capability. This is the "API" that the LLM Manager uses for semantic routing.
system_promptThe agent's persona/instructions.
toolsList of Tool objects this agent can use.
modelDefault model ID for this agent.
Methods
Public methods
Method new()
Initialize a new Agent.
Usage
Agent$new( name, description, system_prompt = NULL, tools = NULL, skills = NULL, model = NULL )
Arguments
nameUnique name for this agent (e.g., "DataCleaner", "Visualizer").
descriptionA clear description of what this agent does. This is used by the Manager LLM to decide which agent to delegate to.
system_promptOptional system prompt defining the agent's persona.
toolsOptional list of Tool objects the agent can use.
skillsOptional character vector of skill paths or "auto" to discover skills. When provided, this automatically loads skills, creates tools, and updates the system prompt.
modelOptional default model ID for this agent.
Returns
An Agent object.
Method run()
Run the agent with a given task.
Usage
Agent$run( task, session = NULL, context = NULL, model = NULL, max_steps = 10, ... )
Arguments
taskThe task instruction (natural language).
sessionOptional ChatSession for shared state. If NULL, a temporary session is created.
contextOptional additional context to inject (e.g., from parent agent).
modelOptional model override. Uses session's model if not provided.
max_stepsMaximum ReAct loop iterations. Default 10.
...Additional arguments passed to generate_text.
Returns
A GenerateResult object from generate_text.
Method stream()
Run the agent with streaming output.
Usage
Agent$stream( task, callback = NULL, session = NULL, context = NULL, model = NULL, max_steps = 10, ... )
Arguments
taskThe task instruction (natural language).
callbackFunction to handle streaming chunks: callback(text, done).
sessionOptional ChatSession for shared state.
contextOptional additional context to inject.
modelOptional model override.
max_stepsMaximum ReAct loop iterations. Default 10.
...Additional arguments passed to stream_text.
Returns
A GenerateResult object (accumulated).
Method as_tool()
Convert this agent to a Tool.
Usage
Agent$as_tool()
Details
This allows the agent to be used as a delegate target by a Manager agent. The tool wraps the agent's run() method and uses the agent's description for semantic routing.
Returns
A Tool object that wraps this agent.
Method create_session()
Create a stateful ChatSession from this agent.
Usage
Agent$create_session(model = NULL, ...)
Arguments
modelOptional model override.
...Additional arguments passed to ChatSession$new.
Returns
A ChatSession object initialized with this agent's config.
Method print()
Print method for Agent.
Usage
Agent$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Agent$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
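A sketch of the agent lifecycle described above, using the documented Agent$new(), run(), as_tool(), and create_session() signatures. The model id is a placeholder and an API key for the chosen provider is assumed:

```r
library(aisdk)

# A single-purpose agent; arguments follow Agent$new() as documented above
cleaner <- Agent$new(
  name = "DataCleaner",
  description = "Cleans and validates data frames before analysis.",
  system_prompt = "You are a meticulous data-cleaning assistant.",
  model = "openai:gpt-4o"  # placeholder model id
)

# One-shot task: a temporary ChatSession is created internally
result <- cleaner$run("List common issues to check in the iris dataset.")
print(result$text)

# Wrap the agent as a Tool so a Manager agent can delegate to it
delegate_tool <- cleaner$as_tool()

# Or keep state across turns with a dedicated session
session <- cleaner$create_session()
```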
AgentRegistry Class
Description
R6 class for managing a collection of Agent objects. Provides storage, lookup, and automatic delegation tool generation for multi-agent systems.
Methods
Public methods
Method new()
Initialize a new AgentRegistry.
Usage
AgentRegistry$new(agents = NULL)
Arguments
agentsOptional list of Agent objects to register immediately.
Method register()
Register an agent.
Usage
AgentRegistry$register(agent)
Arguments
agentAn Agent object to register.
Returns
Invisible self for chaining.
Method get()
Get an agent by name.
Usage
AgentRegistry$get(name)
Arguments
nameThe agent name.
Returns
The Agent object, or NULL if not found.
Method has()
Check if an agent is registered.
Usage
AgentRegistry$has(name)
Arguments
nameThe agent name.
Returns
TRUE if registered, FALSE otherwise.
Method list_agents()
List all registered agent names.
Usage
AgentRegistry$list_agents()
Returns
Character vector of agent names.
Method get_all()
Get all registered agents.
Usage
AgentRegistry$get_all()
Returns
List of Agent objects.
Method unregister()
Unregister an agent.
Usage
AgentRegistry$unregister(name)
Arguments
nameThe agent name to remove.
Returns
Invisible self for chaining.
Method generate_delegate_tools()
Generate delegation tools for all registered agents.
Usage
AgentRegistry$generate_delegate_tools( flow = NULL, session = NULL, model = NULL )
Arguments
flowOptional Flow object for context-aware execution.
sessionOptional ChatSession for shared state.
modelOptional model ID for agent execution.
Details
Creates a list of Tool objects that wrap each agent's run() method. These tools can be given to a Manager agent for semantic routing.
Returns
A list of Tool objects.
Method generate_prompt_section()
Generate a prompt section describing available agents.
Usage
AgentRegistry$generate_prompt_section()
Details
Creates a formatted string listing all agents and their descriptions. Useful for injecting into a Manager's system prompt.
Returns
A character string.
Method print()
Print method for AgentRegistry.
Usage
AgentRegistry$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
AgentRegistry$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
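Putting the registry methods above together (agent definitions are illustrative; execution would additionally require a configured provider):

```r
library(aisdk)

# Two illustrative worker agents
cleaner <- Agent$new(
  name = "DataCleaner",
  description = "Cleans and validates raw data."
)
plotter <- Agent$new(
  name = "Visualizer",
  description = "Creates plots from cleaned data."
)

registry <- AgentRegistry$new(agents = list(cleaner, plotter))
registry$list_agents()  # character vector of registered names

# register() returns invisible self, so calls chain
registry$register(Agent$new(name = "Reporter",
                            description = "Writes summary reports."))

# Delegation tools for a Manager agent, plus a prompt section
# describing each worker for semantic routing
tools <- registry$generate_delegate_tools()
cat(registry$generate_prompt_section())
```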
AgentTeam Class
Description
R6 class representing a team of agents.
Public fields
nameName of the team.
membersList of registered agents (workers).
managerThe manager agent (created automatically).
default_modelDefault model ID for the team (optional).
sessionOptional shared ChatSession for the team.
Methods
Public methods
Method new()
Initialize a new AgentTeam.
Usage
AgentTeam$new(name = "AgentTeam", model = NULL, session = NULL)
Arguments
nameName of the team.
modelOptional default model for the team.
sessionOptional shared ChatSession (or SharedSession).
Returns
A new AgentTeam object.
Method register_agent()
Register an agent to the team.
Usage
AgentTeam$register_agent( name, description, skills = NULL, tools = NULL, system_prompt = NULL, model = NULL )
Arguments
nameName of the agent.
descriptionDescription of the agent's capabilities.
skillsCharacter vector of skills to load for this agent.
toolsList of explicit Tool objects.
system_promptOptional system prompt override.
modelOptional default model for this agent (overrides team default).
Returns
Self (for chaining).
Method run()
Run the team on a task.
Usage
AgentTeam$run(task, model = NULL, session = NULL)
Arguments
taskThe task instruction.
modelModel ID to use for the Manager.
sessionOptional shared ChatSession (or SharedSession).
Returns
The result from the Manager agent.
Method print()
Print team info.
Usage
AgentTeam$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
AgentTeam$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
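A minimal team, following the method signatures above. The model id is a placeholder and requires the matching API key; the Manager agent is created automatically by AgentTeam$new():

```r
library(aisdk)

team <- AgentTeam$new(name = "AnalysisTeam", model = "openai:gpt-4o")

# register_agent() returns self, so registrations chain
team$register_agent(
  name = "Cleaner",
  description = "Cleans and validates raw data."
)$register_agent(
  name = "Visualizer",
  description = "Creates plots from cleaned data."
)

# The Manager decomposes the task and delegates via semantic routing
result <- team$run("Clean the built-in mtcars data, then plot mpg vs wt.")
```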
AiHubMix Language Model Class
Description
Language model implementation for AiHubMix's chat completions API. Inherits from OpenAILanguageModel as AiHubMix provides an OpenAI-compatible API.
Super classes
aisdk::LanguageModelV1 -> aisdk::OpenAILanguageModel -> AiHubMixLanguageModel
Methods
Public methods
Inherited methods
aisdk::LanguageModelV1$generate()
aisdk::LanguageModelV1$has_capability()
aisdk::LanguageModelV1$stream()
aisdk::OpenAILanguageModel$do_generate()
aisdk::OpenAILanguageModel$do_stream()
aisdk::OpenAILanguageModel$execute_request()
aisdk::OpenAILanguageModel$format_tool_result()
aisdk::OpenAILanguageModel$get_config()
aisdk::OpenAILanguageModel$get_history_format()
aisdk::OpenAILanguageModel$initialize()
Method parse_response()
Parse the API response into a GenerateResult. Overrides parent to extract AiHubMix-specific reasoning fields.
Usage
AiHubMixLanguageModel$parse_response(response)
Arguments
responseThe parsed API response.
Returns
A GenerateResult object.
Method build_payload()
Build the request payload for non-streaming generation. Overrides parent to process caching and reasoning parameters.
Usage
AiHubMixLanguageModel$build_payload(params)
Arguments
paramsA list of call options.
Returns
A list with url, headers, and body.
Method build_stream_payload()
Build the request payload for streaming generation. Overrides parent to process caching and reasoning parameters.
Usage
AiHubMixLanguageModel$build_stream_payload(params)
Arguments
paramsA list of call options.
Returns
A list with url, headers, and body.
Method clone()
The objects of this class are cloneable with this method.
Usage
AiHubMixLanguageModel$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
AiHubMix Provider Class
Description
Provider class for AiHubMix.
Super class
aisdk::OpenAIProvider -> AiHubMixProvider
Methods
Public methods
Inherited methods
Method new()
Initialize the AiHubMix provider.
Usage
AiHubMixProvider$new(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_keyAiHubMix API key. Defaults to AIHUBMIX_API_KEY env var.
base_urlBase URL. Defaults to https://aihubmix.com/v1.
headersOptional additional headers.
Method language_model()
Create a language model.
Usage
AiHubMixProvider$language_model(model_id = NULL)
Arguments
model_idThe model ID (e.g., "claude-sonnet-3-5", "claude-opus-3", "gpt-4o").
Returns
An AiHubMixLanguageModel object.
Method clone()
The objects of this class are cloneable with this method.
Usage
AiHubMixProvider$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Anthropic Language Model Class
Description
Language model implementation for Anthropic's Messages API.
Super class
aisdk::LanguageModelV1 -> AnthropicLanguageModel
Methods
Public methods
Inherited methods
Method new()
Initialize the Anthropic language model.
Usage
AnthropicLanguageModel$new(model_id, config, capabilities = list())
Arguments
model_idThe model ID (e.g., "claude-sonnet-4-20250514").
configConfiguration list with api_key, base_url, headers, etc.
capabilitiesOptional list of capability flags.
Method get_config()
Get the configuration list.
Usage
AnthropicLanguageModel$get_config()
Returns
A list with provider configuration.
Method do_generate()
Generate text (non-streaming).
Usage
AnthropicLanguageModel$do_generate(params)
Arguments
paramsA list of call options including messages, temperature, etc.
Returns
A GenerateResult object.
Method do_stream()
Generate text (streaming).
Usage
AnthropicLanguageModel$do_stream(params, callback)
Arguments
paramsA list of call options.
callbackA function called for each chunk: callback(text, done).
Returns
A GenerateResult object.
Method format_tool_result()
Format a tool execution result for Anthropic's API.
Usage
AnthropicLanguageModel$format_tool_result( tool_call_id, tool_name, result_content )
Arguments
tool_call_idThe ID of the tool call (tool_use_id in Anthropic terms).
tool_nameThe name of the tool (not used by Anthropic but kept for interface consistency).
result_contentThe result content from executing the tool.
Returns
A list formatted as a message for Anthropic's API.
Method get_history_format()
Get the message format for Anthropic.
Usage
AnthropicLanguageModel$get_history_format()
Method clone()
The objects of this class are cloneable with this method.
Usage
AnthropicLanguageModel$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Anthropic Provider Class
Description
Provider class for Anthropic. Can create language models.
Public fields
specification_versionProvider spec version.
Methods
Public methods
Method new()
Initialize the Anthropic provider.
Usage
AnthropicProvider$new( api_key = NULL, base_url = NULL, api_version = NULL, headers = NULL, name = NULL )
Arguments
api_keyAnthropic API key. Defaults to ANTHROPIC_API_KEY env var.
base_urlBase URL for API calls. Defaults to https://api.anthropic.com/v1.
api_versionAnthropic API version header. Defaults to "2023-06-01".
headersOptional additional headers.
nameOptional provider name override.
Method enable_caching()
Enable or disable prompt caching.
Usage
AnthropicProvider$enable_caching(enable = TRUE)
Arguments
enableLogical.
Method language_model()
Create a language model.
Usage
AnthropicProvider$language_model(model_id = "claude-sonnet-4-20250514")
Arguments
model_idThe model ID (e.g., "claude-sonnet-4-20250514", "claude-3-5-sonnet-20241022").
Returns
An AnthropicLanguageModel object.
Method clone()
The objects of this class are cloneable with this method.
Usage
AnthropicProvider$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
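A usage sketch for the provider above. AnthropicProvider$new() reads ANTHROPIC_API_KEY from the environment by default, so a valid key is assumed:

```r
library(aisdk)

anthropic <- AnthropicProvider$new()

# Opt in to prompt caching (provider-level switch)
anthropic$enable_caching(TRUE)

model <- anthropic$language_model("claude-sonnet-4-20250514")
result <- generate_text(
  model = model,
  prompt = "Explain R's lazy evaluation in one sentence."
)
print(result$text)
```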
Bailian Language Model Class
Description
Language model implementation for Alibaba Cloud DashScope's chat completions API. Inherits from OpenAI model but adds support for DashScope-specific features like reasoning content extraction from Qwen reasoning models.
Super classes
aisdk::LanguageModelV1 -> aisdk::OpenAILanguageModel -> BailianLanguageModel
Methods
Public methods
Inherited methods
aisdk::LanguageModelV1$generate()
aisdk::LanguageModelV1$has_capability()
aisdk::LanguageModelV1$stream()
aisdk::OpenAILanguageModel$build_payload()
aisdk::OpenAILanguageModel$build_stream_payload()
aisdk::OpenAILanguageModel$do_generate()
aisdk::OpenAILanguageModel$do_stream()
aisdk::OpenAILanguageModel$execute_request()
aisdk::OpenAILanguageModel$format_tool_result()
aisdk::OpenAILanguageModel$get_config()
aisdk::OpenAILanguageModel$get_history_format()
aisdk::OpenAILanguageModel$initialize()
Method parse_response()
Parse the API response into a GenerateResult. Overrides parent to extract DashScope-specific reasoning_content.
Usage
BailianLanguageModel$parse_response(response)
Arguments
responseThe parsed API response.
Returns
A GenerateResult object.
Method clone()
The objects of this class are cloneable with this method.
Usage
BailianLanguageModel$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Bailian Provider Class
Description
Provider class for Alibaba Cloud Bailian / DashScope platform.
Super class
aisdk::OpenAIProvider -> BailianProvider
Methods
Public methods
Inherited methods
Method new()
Initialize the Bailian provider.
Usage
BailianProvider$new(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_keyDashScope API key. Defaults to DASHSCOPE_API_KEY env var.
base_urlBase URL. Defaults to https://dashscope.aliyuncs.com/compatible-mode/v1.
headersOptional additional headers.
Method language_model()
Create a language model.
Usage
BailianProvider$language_model(model_id = NULL)
Arguments
model_idThe model ID (e.g., "qwen-plus", "qwen-turbo", "qwq-32b").
Returns
A BailianLanguageModel object.
Method clone()
The objects of this class are cloneable with this method.
Usage
BailianProvider$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Channel Adapter
Description
Base class for transport adapters that translate external messaging
events into normalized aisdk channel events.
Public fields
idUnique channel identifier.
configAdapter configuration.
Methods
Public methods
Method new()
Initialize a channel adapter.
Usage
ChannelAdapter$new(id, config = list())
Arguments
idChannel identifier.
configAdapter configuration list.
Method parse_request()
Parse a raw channel request.
Usage
ChannelAdapter$parse_request(headers = NULL, body = NULL, ...)
Arguments
headersRequest headers as a named list.
bodyRaw body as JSON string or parsed list.
...Optional transport-specific values.
Returns
A normalized parse result list.
Method resolve_session_key()
Resolve a stable session key for an inbound message.
Usage
ChannelAdapter$resolve_session_key(message, policy = list())
Arguments
messageNormalized inbound message list.
policySession policy list.
Returns
Character scalar session key.
Method format_inbound_message()
Format an inbound prompt for a ChatSession.
Usage
ChannelAdapter$format_inbound_message(message)
Arguments
messageNormalized inbound message list.
Returns
Character scalar prompt.
Method prepare_inbound_message()
Prepare an inbound message using session state.
Usage
ChannelAdapter$prepare_inbound_message(session, message)
Arguments
sessionCurrent ChatSession.
messageNormalized inbound message list.
Returns
Possibly enriched inbound message list.
Method send_text()
Send a final text reply back to the channel.
Usage
ChannelAdapter$send_text(message, text, ...)
Arguments
messageOriginal normalized inbound message.
textFinal outbound text.
...Optional adapter-specific values.
Returns
Transport-specific response.
Method send_status()
Optionally send an intermediate status message.
Usage
ChannelAdapter$send_status( message, status = c("thinking", "working", "error"), text = NULL, ... )
Arguments
messageOriginal normalized inbound message.
statusStatus name such as "thinking", "working", or "error".
textOptional status text override.
...Optional adapter-specific values.
Returns
Adapter-specific status result, or NULL if unsupported.
Method send_attachment()
Optionally send a generated local attachment.
Usage
ChannelAdapter$send_attachment(message, path, ...)
Arguments
messageOriginal normalized inbound message.
pathAbsolute local file path.
...Optional adapter-specific values.
Returns
Adapter-specific attachment result, or NULL if unsupported.
Method clone()
The objects of this class are cloneable with this method.
Usage
ChannelAdapter$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Channel Runtime
Description
Coordinates channel adapters, durable session state, and ChatSession
execution for external messaging integrations.
Public fields
session_storeDurable store for channel sessions.
Methods
Public methods
Method new()
Initialize a channel runtime.
Usage
ChannelRuntime$new( session_store, model = NULL, agent = NULL, tools = NULL, hooks = NULL, registry = NULL, max_steps = 10, session_policy = channel_default_session_policy() )
Arguments
session_storeFile-backed session store.
modelOptional default model id.
agentOptional default agent.
toolsOptional default tools.
hooksOptional session hooks.
registryOptional provider registry.
max_stepsMaximum tool execution steps.
session_policySession routing policy list.
Method register_adapter()
Register a channel adapter.
Usage
ChannelRuntime$register_adapter(adapter)
Arguments
adapterChannel adapter instance.
Returns
Invisible self.
Method get_adapter()
Get a channel adapter.
Usage
ChannelRuntime$get_adapter(channel_id)
Arguments
channel_idAdapter identifier.
Returns
Adapter instance.
Method handle_request()
Handle a raw channel request.
Usage
ChannelRuntime$handle_request(channel_id, headers = NULL, body = NULL, ...)
Arguments
channel_idAdapter identifier.
headersRequest headers.
bodyRaw or parsed body.
...Optional adapter-specific values.
Returns
A normalized runtime response.
Method process_message()
Process one normalized inbound message.
Usage
ChannelRuntime$process_message(channel_id, message)
Arguments
channel_idAdapter identifier.
messageNormalized inbound message.
Returns
Processing result list.
Method create_child_session()
Create a child session linked to a parent session.
Usage
ChannelRuntime$create_child_session( parent_session_key, child_session_key = NULL, inherit_history = TRUE, metadata = NULL )
Arguments
parent_session_keyParent session key.
child_session_keyOptional child key. Generated if omitted.
inherit_historyWhether to copy parent state into the child.
metadataOptional metadata to merge into the child session.
Returns
The child session key.
Method clone()
The objects of this class are cloneable with this method.
Usage
ChannelRuntime$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Channel Session Store
Description
Abstract persistence seam for channel-driven sessions.
Methods
Public methods
Method load_session()
Load a persisted ChatSession.
Usage
ChannelSessionStore$load_session( session_key, tools = NULL, hooks = NULL, registry = NULL )
Arguments
session_keySession key.
toolsOptional tools to reattach.
hooksOptional hooks to reattach.
registryOptional provider registry.
Returns
A ChatSession or NULL.
Method save_session()
Save a ChatSession and update store metadata.
Usage
ChannelSessionStore$save_session(session_key, session, record = NULL)
Arguments
session_keySession key.
sessionChatSession instance.
recordOptional record fields to merge.
Returns
Store-specific save result.
Method update_record()
Update a store record without persisting a session file.
Usage
ChannelSessionStore$update_record(session_key, record)
Arguments
session_keySession key.
recordRecord fields to merge.
Returns
Store-specific update result.
Method get_record()
Retrieve a single session record.
Usage
ChannelSessionStore$get_record(session_key)
Arguments
session_keySession key.
Returns
Store-specific record object.
Method list_sessions()
List all session records.
Usage
ChannelSessionStore$list_sessions()
Returns
Store-specific collection of session records.
Method has_processed_event()
Check whether an event id has already been processed.
Usage
ChannelSessionStore$has_processed_event(channel_id, event_id)
Arguments
channel_idChannel identifier.
event_idEvent identifier.
Returns
Logical scalar.
Method mark_processed_event()
Mark an event id as processed.
Usage
ChannelSessionStore$mark_processed_event(channel_id, event_id, payload = NULL)
Arguments
channel_idChannel identifier.
event_idEvent identifier.
payloadOptional event payload to keep in the dedupe index.
Returns
Store-specific event record.
Method link_child_session()
Register a child session relationship.
Usage
ChannelSessionStore$link_child_session(parent_session_key, child_session_key)
Arguments
parent_session_keyParent session key.
child_session_keyChild session key.
Returns
Store-specific link result.
Method clone()
The objects of this class are cloneable with this method.
Usage
ChannelSessionStore$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Chat Manager
Description
Manages asynchronous chat generation using background processes. Handles IPC for streaming tokens and tool call bridging.
Public fields
processThe background callr process
ipc_dirTemp directory for inter-process communication
last_read_posLast position read from output file
Methods
Public methods
Method new()
Initialize a new ChatManager
Usage
ChatManager$new()
Method start_generation()
Start async text generation
Usage
ChatManager$start_generation(model, messages, system = NULL, tools = NULL)
Arguments
modelThe model (will be serialized for bg process)
messagesThe message history
systemThe system prompt
toolsThe tools list
Method poll()
Poll for new output and status
Usage
ChatManager$poll()
Returns
List with text, done, waiting_tool, tool_call, error
Method resolve_tool()
Resolve a tool call with result
Usage
ChatManager$resolve_tool(result)
Arguments
resultThe tool execution result
Method cleanup()
Cleanup resources
Usage
ChatManager$cleanup()
Method clone()
The objects of this class are cloneable with this method.
Usage
ChatManager$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
ChatSession Class
Description
R6 class representing a stateful chat session. Automatically manages conversation history, supports tool execution, and provides persistence.
Methods
Public methods
Method new()
Initialize a new ChatSession.
Usage
ChatSession$new( model = NULL, system_prompt = NULL, tools = NULL, hooks = NULL, history = NULL, max_steps = 10, registry = NULL, memory = NULL, metadata = NULL, envir = NULL, agent = NULL )
Arguments
modelA LanguageModelV1 object or model string ID (e.g., "openai:gpt-4o").
system_promptOptional system prompt for the conversation.
toolsOptional list of Tool objects for function calling.
hooksOptional HookHandler object for event hooks.
historyOptional initial message history (list of message objects).
max_stepsMaximum steps for tool execution loops. Default 10.
registryOptional ProviderRegistry for model resolution.
memoryOptional initial shared memory (list). For multi-agent state sharing.
metadataOptional session metadata (list). Used for channel/runtime state.
envirOptional shared R environment. For multi-agent data sharing.
agentOptional Agent object. If provided, the session inherits the agent's tools and system prompt.
Method send()
Send a message and get a response.
Usage
ChatSession$send(prompt, ...)
Arguments
promptThe user message to send.
...Additional arguments passed to generate_text.
Returns
The GenerateResult object from the model.
Method send_stream()
Send a message with streaming output.
Usage
ChatSession$send_stream(prompt, callback, ...)
Arguments
promptThe user message to send.
callbackFunction called for each chunk: callback(text, done).
...Additional arguments passed to stream_text.
Returns
Invisible NULL (output is via callback).
Method append_message()
Append a message to the history.
Usage
ChatSession$append_message(role, content, reasoning = NULL)
Arguments
roleMessage role: "user", "assistant", "system", or "tool".
contentMessage content.
reasoningOptional reasoning text to attach to the message.
Method get_history()
Get the conversation history.
Usage
ChatSession$get_history()
Returns
A list of message objects.
Method get_last_response()
Get the last response from the assistant.
Usage
ChatSession$get_last_response()
Returns
The content of the last assistant message, or NULL.
Method clear_history()
Clear the conversation history.
Usage
ChatSession$clear_history(keep_system = TRUE)
Arguments
keep_systemIf TRUE, keeps the system prompt. Default TRUE.
Method switch_model()
Switch to a different model.
Usage
ChatSession$switch_model(model)
Arguments
modelA LanguageModelV1 object or model string ID.
Method get_model_id()
Get current model identifier.
Usage
ChatSession$get_model_id()
Returns
Model ID string.
Method stats()
Get token usage statistics.
Usage
ChatSession$stats()
Returns
A list with token counts and message stats.
Method save()
Save session to a file.
Usage
ChatSession$save(path, format = NULL)
Arguments
pathFile path (supports .rds or .json extension).
formatOptional format override: "rds" or "json". Auto-detected from path.
Method as_list()
Export session state as a list (for serialization).
Usage
ChatSession$as_list()
Returns
A list containing session state.
Method restore()
Restore session from a file.
Usage
ChatSession$restore(path, format = NULL)
Arguments
pathFile path (supports .rds or .json extension).
formatOptional format override: "rds" or "json". Auto-detected from path.
Method restore_from_list()
Restore session state from a list.
Usage
ChatSession$restore_from_list(data)
Arguments
dataA list exported by as_list().
Method print()
Print method for ChatSession.
Usage
ChatSession$print()
Method get_memory()
Get a value from shared memory.
Usage
ChatSession$get_memory(key, default = NULL)
Arguments
keyThe key to retrieve.
defaultDefault value if key not found. Default NULL.
Returns
The stored value or default.
Method set_memory()
Set a value in shared memory.
Usage
ChatSession$set_memory(key, value)
Arguments
keyThe key to store.
valueThe value to store.
Returns
Invisible self for chaining.
Method list_memory()
List all keys in shared memory.
Usage
ChatSession$list_memory()
Returns
Character vector of memory keys.
Method get_metadata()
Get a value from session metadata.
Usage
ChatSession$get_metadata(key, default = NULL)
Arguments
keyThe metadata key to retrieve.
defaultDefault value if key is not present.
Returns
The stored metadata value or default.
Method set_metadata()
Set a value in session metadata.
Usage
ChatSession$set_metadata(key, value)
Arguments
keyThe metadata key to set.
valueThe value to store.
Returns
Invisible self for chaining.
Method merge_metadata()
Merge a named list into session metadata.
Usage
ChatSession$merge_metadata(values)
Arguments
valuesNamed list of metadata values.
Returns
Invisible self for chaining.
Method list_metadata()
List metadata keys.
Usage
ChatSession$list_metadata()
Returns
Character vector of metadata keys.
Method clear_memory()
Clear shared memory.
Usage
ChatSession$clear_memory(keys = NULL)
Arguments
keysOptional specific keys to clear. If NULL, clears all.
Returns
Invisible self for chaining.
Method get_envir()
Get the shared R environment.
Usage
ChatSession$get_envir()
Details
This environment is shared across all agents using this session. Agents can store and retrieve data frames, models, and other R objects.
Returns
An environment object.
Method eval_in_session()
Evaluate an expression in the session environment.
Usage
ChatSession$eval_in_session(expr)
Arguments
exprAn expression to evaluate.
Returns
The result of the evaluation.
Method list_envir()
List objects in the session environment.
Usage
ChatSession$list_envir()
Returns
Character vector of object names.
Method checkpoint()
Save a memory snapshot to a file (checkpoint for Mission resume).
Usage
ChatSession$checkpoint(path = NULL)
Arguments
pathFile path (.rds). If NULL, uses a temp file and returns the path.
Returns
Invisible file path.
Method restore_checkpoint()
Restore memory and history from a checkpoint file.
Usage
ChatSession$restore_checkpoint(path)
Arguments
pathFile path to a checkpoint created by checkpoint().
Returns
Invisible self for chaining.
Method clone()
The objects of this class are cloneable with this method.
Usage
ChatSession$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
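The metadata, memory, and checkpoint methods above compose naturally. A minimal sketch (the ChatSession constructor arguments are not documented in this section, so the bare `ChatSession$new()` call is an assumption):

```r
library(aisdk)

# Assumes a default-constructible session; constructor arguments may differ.
session <- ChatSession$new()

# Chainable metadata setters (each returns invisible self)
session$set_metadata("user_id", "u-123")$merge_metadata(list(locale = "en"))
session$list_metadata()
session$get_metadata("missing_key", default = "n/a")

# Persist a snapshot, then restore it (e.g., for Mission resume)
path <- session$checkpoint()       # writes a temp .rds and returns the path
session$clear_memory()
session$restore_checkpoint(path)
```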
Computer Class
Description
R6 class providing computer abstraction with atomic tools for file operations, bash execution, and R code execution.
Public fields
working_dir: Current working directory.
sandbox_mode: Sandbox mode: "strict", "permissive", or "none".
execution_log: Log of executed commands.
Methods
Public methods
Method new()
Initialize computer abstraction
Usage
Computer$new(working_dir = tempdir(), sandbox_mode = "permissive")
Arguments
working_dir: Working directory. Defaults to tempdir().
sandbox_mode: Sandbox mode: "strict", "permissive", or "none".
Method bash()
Execute bash command
Usage
Computer$bash(command, timeout_ms = 30000, capture_output = TRUE)
Arguments
command: Bash command to execute.
timeout_ms: Timeout in milliseconds (default: 30000).
capture_output: Whether to capture output (default: TRUE).
Returns
List with stdout, stderr, exit_code
Method read_file()
Read file contents
Usage
Computer$read_file(path, encoding = "UTF-8")
Arguments
path: File path (relative to working_dir or absolute).
encoding: File encoding (default: "UTF-8").
Returns
File contents as character string
Method write_file()
Write file contents
Usage
Computer$write_file(path, content, encoding = "UTF-8")
Arguments
path: File path (relative to working_dir or absolute).
content: Content to write.
encoding: File encoding (default: "UTF-8").
Returns
Success status
Method execute_r_code()
Execute R code
Usage
Computer$execute_r_code(code, timeout_ms = 30000, capture_output = TRUE)
Arguments
code: R code to execute.
timeout_ms: Timeout in milliseconds (default: 30000).
capture_output: Whether to capture output (default: TRUE).
Returns
List with result, output, error
Method get_log()
Get execution log
Usage
Computer$get_log()
Returns
List of logged executions
Method clear_log()
Clear the execution log.
Usage
Computer$clear_log()
Method clone()
The objects of this class are cloneable with this method.
Usage
Computer$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
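A sketch of the atomic tools above; all paths resolve relative to working_dir, and the bash example assumes a Unix-like shell:

```r
library(aisdk)

comp <- Computer$new(working_dir = tempdir(), sandbox_mode = "permissive")

# File round-trip
comp$write_file("notes.txt", "hello from aisdk")
comp$read_file("notes.txt")

# Shell and R execution both return structured results
res <- comp$bash("echo done", timeout_ms = 5000)
res$exit_code                          # list fields: stdout, stderr, exit_code

rres <- comp$execute_r_code("sum(1:10)")
rres$result                            # list fields: result, output, error

length(comp$get_log())                 # each call above was logged
comp$clear_log()
```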
DeepSeek Language Model Class
Description
Language model implementation for DeepSeek's chat completions API.
Inherits from OpenAI model but adds support for DeepSeek-specific features
like reasoning content extraction from deepseek-reasoner model.
Super classes
aisdk::LanguageModelV1 -> aisdk::OpenAILanguageModel -> DeepSeekLanguageModel
Methods
Public methods
Inherited methods
aisdk::LanguageModelV1$generate()
aisdk::LanguageModelV1$has_capability()
aisdk::LanguageModelV1$stream()
aisdk::OpenAILanguageModel$build_payload()
aisdk::OpenAILanguageModel$build_stream_payload()
aisdk::OpenAILanguageModel$do_generate()
aisdk::OpenAILanguageModel$do_stream()
aisdk::OpenAILanguageModel$execute_request()
aisdk::OpenAILanguageModel$format_tool_result()
aisdk::OpenAILanguageModel$get_config()
aisdk::OpenAILanguageModel$get_history_format()
aisdk::OpenAILanguageModel$initialize()
Method parse_response()
Parse the API response into a GenerateResult. Overrides parent to extract DeepSeek-specific reasoning_content.
Usage
DeepSeekLanguageModel$parse_response(response)
Arguments
response: The parsed API response.
Returns
A GenerateResult object.
Method clone()
The objects of this class are cloneable with this method.
Usage
DeepSeekLanguageModel$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
DeepSeek Provider Class
Description
Provider class for DeepSeek.
Super class
aisdk::OpenAIProvider -> DeepSeekProvider
Methods
Public methods
Inherited methods
Method new()
Initialize the DeepSeek provider.
Usage
DeepSeekProvider$new(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key: DeepSeek API key. Defaults to DEEPSEEK_API_KEY env var.
base_url: Base URL. Defaults to https://api.deepseek.com.
headers: Optional additional headers.
Method language_model()
Create a language model.
Usage
DeepSeekProvider$language_model(model_id = NULL)
Arguments
model_id: The model ID (e.g., "deepseek-chat", "deepseek-reasoner").
Returns
A DeepSeekLanguageModel object.
Method clone()
The objects of this class are cloneable with this method.
Usage
DeepSeekProvider$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
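A minimal sketch of the provider with its documented defaults; the shape of the messages argument is an assumption based on the OpenAI-style API this class inherits from:

```r
library(aisdk)

# Reads DEEPSEEK_API_KEY from the environment by default
provider <- DeepSeekProvider$new()
model <- provider$language_model("deepseek-reasoner")

result <- model$generate(
  messages = list(list(role = "user", content = "Why is the sky blue?"))
)

result$text        # final answer
result$reasoning   # chain-of-thought extracted by parse_response()
```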
Embedding Model V1 (Abstract Base Class)
Description
Abstract interface for embedding models.
Public fields
specification_version: The version of this specification.
provider: The provider identifier.
model_id: The model identifier.
Methods
Public methods
Method new()
Initialize the embedding model.
Usage
EmbeddingModelV1$new(provider, model_id)
Arguments
provider: Provider name.
model_id: Model ID.
Method do_embed()
Generate embeddings for a value. Abstract method.
Usage
EmbeddingModelV1$do_embed(value)
Arguments
value: A character string or vector to embed.
Returns
A list with embeddings.
Method clone()
The objects of this class are cloneable with this method.
Usage
EmbeddingModelV1$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Feishu Channel Adapter
Description
Transport adapter for Feishu/Lark event callbacks and text replies.
Super class
aisdk::ChannelAdapter -> FeishuChannelAdapter
Methods
Public methods
Method new()
Initialize the Feishu adapter.
Usage
FeishuChannelAdapter$new(
  app_id,
  app_secret,
  base_url = "https://open.feishu.cn",
  verification_token = NULL,
  encrypt_key = NULL,
  verify_signature = TRUE,
  send_text_fn = NULL,
  send_status_fn = NULL,
  download_resource_fn = NULL
)
Arguments
app_id: Feishu app id.
app_secret: Feishu app secret.
base_url: Feishu API base URL.
verification_token: Optional callback verification token.
encrypt_key: Optional event subscription encryption key.
verify_signature: Whether to validate Feishu callback signatures when applicable.
send_text_fn: Optional custom send function for tests or overrides.
send_status_fn: Optional custom status function for tests or overrides.
download_resource_fn: Optional custom downloader for inbound message resources.
Method parse_request()
Parse a Feishu callback request.
Usage
FeishuChannelAdapter$parse_request(headers = NULL, body = NULL, ...)
Arguments
headers: Request headers.
body: Raw JSON string or parsed list.
...: Unused.
Returns
Channel request result.
Method resolve_session_key()
Resolve a stable session key for a Feishu inbound message.
Usage
FeishuChannelAdapter$resolve_session_key(message, policy = list())
Arguments
message: Normalized inbound message.
policy: Session policy list.
Returns
Character scalar session key.
Method format_inbound_message()
Format a Feishu inbound message for a ChatSession.
Usage
FeishuChannelAdapter$format_inbound_message(message)
Arguments
message: Normalized inbound message.
Returns
Character scalar prompt.
Method prepare_inbound_message()
Prepare a Feishu inbound message using stored document context.
Usage
FeishuChannelAdapter$prepare_inbound_message(session, message)
Arguments
session: Current ChatSession.
message: Normalized inbound message.
Returns
Enriched inbound message.
Method send_text()
Send a final text reply to Feishu.
Usage
FeishuChannelAdapter$send_text(message, text, ...)
Arguments
message: Original normalized inbound message.
text: Final outbound text.
...: Unused.
Returns
Parsed API response.
Method send_status()
Send an intermediate status message to Feishu.
Usage
FeishuChannelAdapter$send_status(
message,
status = c("thinking", "working", "error"),
text = NULL,
...
)
Arguments
message: Original normalized inbound message.
status: Status name.
text: Optional status text.
...: Unused.
Returns
Parsed API response or NULL.
Method send_attachment()
Send a generated local attachment to Feishu.
Usage
FeishuChannelAdapter$send_attachment(message, path, ...)
Arguments
message: Original normalized inbound message.
path: Absolute local file path.
...: Unused.
Returns
Parsed API response or NULL.
Method clone()
The objects of this class are cloneable with this method.
Usage
FeishuChannelAdapter$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
File Channel Session Store
Description
File-backed session store for external messaging channels.
Super class
aisdk::ChannelSessionStore -> FileChannelSessionStore
Public fields
base_dir: Base directory for persisted channel sessions.
Methods
Public methods
Method new()
Initialize a file-backed channel session store.
Usage
FileChannelSessionStore$new(base_dir)
Arguments
base_dir: Base directory for store files.
Method get_session_path()
Get the on-disk session file path for a key.
Usage
FileChannelSessionStore$get_session_path(session_key)
Arguments
session_key: Session key.
Returns
Absolute file path.
Method get_index_path()
Get the channel index path.
Usage
FileChannelSessionStore$get_index_path()
Returns
Absolute file path.
Method list_sessions()
List all session records.
Usage
FileChannelSessionStore$list_sessions()
Returns
Named list of session records.
Method has_processed_event()
Check whether an event id has already been processed.
Usage
FileChannelSessionStore$has_processed_event(channel_id, event_id)
Arguments
channel_id: Channel identifier.
event_id: Event identifier.
Returns
Logical scalar.
Method mark_processed_event()
Mark an event id as processed.
Usage
FileChannelSessionStore$mark_processed_event(channel_id, event_id, payload = NULL)
Arguments
channel_id: Channel identifier.
event_id: Event identifier.
payload: Optional event payload to keep in the dedupe index.
Returns
Invisible stored event record.
Method get_record()
Get a single session record.
Usage
FileChannelSessionStore$get_record(session_key)
Arguments
session_key: Session key.
Returns
Session record or NULL.
Method load_session()
Load a persisted ChatSession.
Usage
FileChannelSessionStore$load_session(session_key, tools = NULL, hooks = NULL, registry = NULL)
Arguments
session_key: Session key.
tools: Optional tools to reattach.
hooks: Optional hooks to reattach.
registry: Optional provider registry.
Returns
A ChatSession or NULL if no persisted state exists.
Method save_session()
Save a ChatSession and update the local index.
Usage
FileChannelSessionStore$save_session(session_key, session, record = NULL)
Arguments
session_key: Session key.
session: ChatSession instance.
record: Optional record fields to merge into the index.
Returns
Invisible normalized record.
Method update_record()
Update a record without saving a session file.
Usage
FileChannelSessionStore$update_record(session_key, record)
Arguments
session_key: Session key.
record: Record fields to merge.
Returns
Invisible updated record.
Method link_child_session()
Register a child session relationship.
Usage
FileChannelSessionStore$link_child_session(parent_session_key, child_session_key)
Arguments
parent_session_key: Parent session key.
child_session_key: Child session key.
Returns
Invisible updated parent record.
Method clone()
The objects of this class are cloneable with this method.
Usage
FileChannelSessionStore$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
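A sketch of the store's event de-duplication and session persistence; the bare `ChatSession$new()` call is an assumption, since the session constructor is not documented here:

```r
library(aisdk)

store <- FileChannelSessionStore$new(
  base_dir = file.path(tempdir(), "channel-sessions")
)

# Skip duplicate webhook deliveries
if (!store$has_processed_event("feishu", "evt-001")) {
  store$mark_processed_event("feishu", "evt-001")
}

# Persist and reload a session under a stable key
key <- "feishu:chat-42"
store$save_session(key, ChatSession$new())   # constructor args may differ
restored <- store$load_session(key)          # NULL if nothing was persisted
```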
Flow Class
Description
R6 class representing an orchestration layer for multi-agent systems. Features:
Comprehensive delegation tracing
Automatic delegate_task tool generation
Depth and context limits with guardrails
Result aggregation and summarization
Methods
Public methods
Method new()
Initialize a new Flow.
Usage
Flow$new(session, model = NULL, registry = NULL, max_depth = 5, max_steps_per_agent = 10, max_context_tokens = 4000, enable_guardrails = TRUE)
Arguments
session: A ChatSession object.
model: Optional default model ID to use. If NULL, inherits from session.
registry: Optional AgentRegistry for agent lookup.
max_depth: Maximum delegation depth. Default 5.
max_steps_per_agent: Maximum ReAct steps per agent. Default 10.
max_context_tokens: Maximum context tokens per delegation. Default 4000.
enable_guardrails: Enable safety guardrails. Default TRUE.
Method depth()
Get the current call stack depth.
Usage
Flow$depth()
Returns
Integer depth.
Method current()
Get the current active agent.
Usage
Flow$current()
Returns
The currently active Agent, or NULL.
Method session()
Get the shared session.
Usage
Flow$session()
Returns
The ChatSession object.
Method set_global_context()
Set the global context (the user's original goal).
Usage
Flow$set_global_context(context)
Arguments
context: Character string describing the overall goal.
Returns
Invisible self for chaining.
Method global_context()
Get the global context.
Usage
Flow$global_context()
Returns
The global context string.
Method delegate()
Delegate a task to another agent with enhanced tracking.
Usage
Flow$delegate(agent, task, context = NULL, priority = "normal")
Arguments
agent: The Agent to delegate to.
task: The task instruction.
context: Optional additional context.
priority: Task priority: "high", "normal", "low". Default "normal".
Returns
The text result from the delegate agent.
Method generate_delegate_tool()
Generate the delegate_task tool for manager agents.
Usage
Flow$generate_delegate_tool()
Details
Creates a single unified tool that can delegate to any registered agent. This is more efficient than generating separate tools per agent.
Returns
A Tool object for delegation.
Method run()
Run a primary agent with enhanced orchestration.
Usage
Flow$run(agent, task, use_unified_delegate = TRUE)
Arguments
agent: The primary/manager Agent to run.
task: The user's task/input.
use_unified_delegate: Use single delegate_task tool. Default TRUE.
Returns
The final result from the primary agent.
Method get_delegation_history()
Get delegation history.
Usage
Flow$get_delegation_history(agent_name = NULL, limit = NULL)
Arguments
agent_name: Optional filter by agent name.
limit: Maximum number of records to return.
Returns
A list of delegation records.
Method delegation_stats()
Get delegation statistics.
Usage
Flow$delegation_stats()
Returns
A list with counts, timing, and success rates.
Method clear_history()
Clear delegation history.
Usage
Flow$clear_history()
Returns
Invisible self for chaining.
Method print()
Print method for Flow.
Usage
Flow$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Flow$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
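A sketch of the orchestration API above. Agent construction is not documented in this section, so `manager` is a hypothetical Agent object, and the bare `ChatSession$new()` call is likewise an assumption:

```r
library(aisdk)

session <- ChatSession$new()          # constructor args may differ
flow <- Flow$new(session, max_depth = 3, max_steps_per_agent = 5)

flow$set_global_context("Summarize sales data and draft a report")

# `manager` is a hypothetical primary Agent with delegation tools attached
result <- flow$run(manager, "Produce the Q3 report")

# Inspect what was delegated along the way
flow$get_delegation_history(limit = 10)
flow$delegation_stats()               # counts, timing, success rates
```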
Gemini Language Model Class
Description
Language model implementation for Gemini's generateContent API.
Super class
aisdk::LanguageModelV1 -> GeminiLanguageModel
Methods
Public methods
Inherited methods
Method new()
Initialize the Gemini language model.
Usage
GeminiLanguageModel$new(model_id, config)
Arguments
model_id: The model ID (e.g., "gemini-1.5-pro").
config: Configuration list with api_key, base_url, headers, etc.
Method get_config()
Get the configuration list.
Usage
GeminiLanguageModel$get_config()
Returns
A list with provider configuration.
Method build_payload_internal()
Build the request payload for generation
Usage
GeminiLanguageModel$build_payload_internal(params, stream = FALSE)
Arguments
params: A list of call options.
stream: Whether to build for streaming.
Returns
A list with url, headers, and body.
Method do_generate()
Generate text (non-streaming).
Usage
GeminiLanguageModel$do_generate(params)
Arguments
params: A list of call options including messages, temperature, etc.
Returns
A GenerateResult object.
Method do_stream()
Generate text (streaming).
Usage
GeminiLanguageModel$do_stream(params, callback)
Arguments
params: A list of call options.
callback: A function called for each chunk: callback(text, done).
Returns
A GenerateResult object.
Method format_tool_result()
Format a tool execution result for Gemini's API.
Usage
GeminiLanguageModel$format_tool_result(tool_call_id, tool_name, result_content)
Arguments
tool_call_id: The ID of the tool call (unused in Gemini but present for interface compatibility).
tool_name: The name of the tool.
result_content: The result content from executing the tool.
Returns
A list formatted as a message for Gemini API.
Method get_history_format()
Get the message format for Gemini.
Usage
GeminiLanguageModel$get_history_format()
Method clone()
The objects of this class are cloneable with this method.
Usage
GeminiLanguageModel$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Gemini Provider Class
Description
Provider class for Google Gemini.
Public fields
specification_version: Provider spec version.
Methods
Public methods
Method new()
Initialize the Gemini provider.
Usage
GeminiProvider$new(api_key = NULL, base_url = NULL, headers = NULL, name = NULL)
Arguments
api_key: Gemini API key. Defaults to GEMINI_API_KEY env var.
base_url: Base URL for API calls. Defaults to https://generativelanguage.googleapis.com/v1beta/models.
headers: Optional additional headers.
name: Optional provider name override.
Method language_model()
Create a language model.
Usage
GeminiProvider$language_model(
model_id = Sys.getenv("GEMINI_MODEL", "gemini-2.5-flash")
)
Arguments
model_id: The model ID (e.g., "gemini-1.5-pro", "gemini-1.5-flash", "gemini-2.0-flash").
Returns
A GeminiLanguageModel object.
Method clone()
The objects of this class are cloneable with this method.
Usage
GeminiProvider$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
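A sketch of the provider with its documented defaults; the messages argument shape is an assumption based on the do_generate() parameter list:

```r
library(aisdk)

# Reads GEMINI_API_KEY from the environment by default
gemini <- GeminiProvider$new()

# model_id defaults to Sys.getenv("GEMINI_MODEL", "gemini-2.5-flash")
model <- gemini$language_model("gemini-1.5-flash")

# Streaming: the callback receives each chunk plus a done flag
model$stream(
  callback = function(text, done) if (!done) cat(text),
  messages = list(list(role = "user", content = "A one-line haiku about R"))
)
```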
Generate Result
Description
Result object returned by model generation.
Details
This class uses lock_objects = FALSE to allow dynamic field addition.
This enables the ReAct loop and other components to attach additional
metadata (like steps, all_tool_calls) without modifying the class.
For models that support reasoning/thinking (like OpenAI o1/o3, DeepSeek, Claude with extended thinking),
the reasoning field contains the model's chain-of-thought content.
For Responses API models, response_id contains the server-side response ID
which can be used for multi-turn conversations without sending full history.
Public fields
text: The generated text content.
usage: Token usage information (list with prompt_tokens, completion_tokens, total_tokens).
finish_reason: Reason the model stopped generating.
warnings: Any warnings from the model.
raw_response: The raw response from the API.
tool_calls: List of tool calls requested by the model. Each item contains id, name, arguments.
steps: Number of ReAct loop steps taken (when max_steps > 1).
all_tool_calls: Accumulated list of all tool calls made across all ReAct steps.
reasoning: Chain-of-thought/reasoning content from models that support it (o1, o3, DeepSeek, etc.).
response_id: Server-side response ID for Responses API (used for stateful multi-turn conversations).
Methods
Public methods
Method new()
Initialize a GenerateResult object.
Usage
GenerateResult$new(text = NULL, usage = NULL, finish_reason = NULL, warnings = NULL, raw_response = NULL, tool_calls = NULL, steps = NULL, all_tool_calls = NULL, reasoning = NULL, response_id = NULL)
Arguments
text: Generated text.
usage: Token usage.
finish_reason: Reason for stopping.
warnings: Warnings.
raw_response: Raw API response.
tool_calls: Tool calls requested by the model.
steps: Number of ReAct steps taken.
all_tool_calls: All tool calls across steps.
reasoning: Chain-of-thought content.
response_id: Server-side response ID for Responses API.
Method print()
Print method for GenerateResult.
Usage
GenerateResult$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
GenerateResult$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Hook Handler
Description
R6 class to manage and execute hooks.
Public fields
hooks: List of hook functions.
Methods
Public methods
Method new()
Initialize HookHandler
Usage
HookHandler$new(hooks_list = list())
Arguments
hooks_list: A list of hook functions. Supported hooks:
on_generation_start(model, prompt, tools)
on_generation_end(result)
on_tool_start(tool, args)
on_tool_end(tool, result)
on_tool_approval(tool, args) - Return TRUE to approve, FALSE to deny.
Method trigger_generation_start()
Trigger on_generation_start
Usage
HookHandler$trigger_generation_start(model, prompt, tools)
Arguments
model: The language model object.
prompt: The prompt being sent.
tools: The list of tools provided.
Method trigger_generation_end()
Trigger on_generation_end
Usage
HookHandler$trigger_generation_end(result)
Arguments
result: The generation result object.
Method trigger_tool_start()
Trigger on_tool_start
Usage
HookHandler$trigger_tool_start(tool, args)
Arguments
tool: The tool object.
args: The arguments for the tool.
Method trigger_tool_end()
Trigger on_tool_end
Usage
HookHandler$trigger_tool_end(tool, result)
Arguments
tool: The tool object.
result: The result from the tool execution.
Method clone()
The objects of this class are cloneable with this method.
Usage
HookHandler$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
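A sketch of wiring up the supported hooks; the `tool$name` and `model$model_id` fields referenced inside the callbacks are assumptions based on the Tool and LanguageModelV1 documentation:

```r
library(aisdk)

hooks <- HookHandler$new(hooks_list = list(
  on_generation_start = function(model, prompt, tools) {
    message("calling model: ", model$model_id)
  },
  on_tool_start = function(tool, args) message("running tool: ", tool$name),
  # Gate dangerous tools: return FALSE to deny execution
  on_tool_approval = function(tool, args) tool$name != "bash"
))

# Triggers are normally fired by the SDK, but can be driven manually in tests
hooks$trigger_generation_start(model = list(model_id = "gpt-4o"),
                               prompt = "hi", tools = list())
```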
Known Aesthetic Types
Description
Registry of known aesthetics with their expected types.
Usage
KNOWN_AESTHETICS
Format
An object of class list of length 24.
Known Position Types with Parameters
Description
Known Position Types with Parameters
Usage
KNOWN_POSITIONS
Format
An object of class list of length 7.
Language Model V1 (Abstract Base Class)
Description
Abstract interface for language models. All LLM providers must implement this class.
Uses do_ prefix for internal methods to prevent direct usage by end-users.
Public fields
specification_version: The version of this specification.
provider: The provider identifier (e.g., "openai").
model_id: The model identifier (e.g., "gpt-4o").
capabilities: Model capability flags (e.g., is_reasoning_model).
Methods
Public methods
Method new()
Initialize the model with provider and model ID.
Usage
LanguageModelV1$new(provider, model_id, capabilities = list())
Arguments
provider: Provider name.
model_id: Model ID.
capabilities: Optional list of capability flags.
Method has_capability()
Check if model has a specific capability.
Usage
LanguageModelV1$has_capability(cap)
Arguments
cap: Capability name (e.g., "is_reasoning_model").
Returns
Logical.
Method generate()
Public generation method (wrapper for do_generate).
Usage
LanguageModelV1$generate(...)
Arguments
...: Call options passed to do_generate.
Returns
A GenerateResult object.
Method stream()
Public streaming method (wrapper for do_stream).
Usage
LanguageModelV1$stream(callback, ...)
Arguments
callback: Function to call with each chunk.
...: Call options passed to do_stream.
Returns
A GenerateResult object.
Method do_generate()
Generate text (non-streaming). Abstract method.
Usage
LanguageModelV1$do_generate(params)
Arguments
params: A list of call options.
Returns
A GenerateResult object.
Method do_stream()
Generate text (streaming). Abstract method.
Usage
LanguageModelV1$do_stream(params, callback)
Arguments
params: A list of call options.
callback: A function called for each chunk (text, done).
Returns
A GenerateResult object (accumulated from the stream).
Method format_tool_result()
Format a tool execution result for the provider's API.
Usage
LanguageModelV1$format_tool_result(tool_call_id, tool_name, result_content)
Arguments
tool_call_id: The ID of the tool call.
tool_name: The name of the tool.
result_content: The result content from executing the tool.
Returns
A list formatted as a message for this provider's API.
Method get_history_format()
Get the message format used by this model's API for history.
Usage
LanguageModelV1$get_history_format()
Returns
A character string ("openai" or "anthropic").
Method clone()
The objects of this class are cloneable with this method.
Usage
LanguageModelV1$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
MCP Client
Description
Connect to and communicate with an MCP server process.
Details
Manages connection to an external MCP server via stdio.
Public fields
process: The processx process object.
server_info: Information about the connected server.
capabilities: Server capabilities.
Methods
Public methods
Method new()
Create a new MCP Client
Usage
McpClient$new(command, args = character(), env = NULL)
Arguments
command: The command to run (e.g., "npx", "python").
args: Command arguments (e.g., c("-y", "@modelcontextprotocol/server-github")).
env: Environment variables as a named character vector.
Returns
A new McpClient object
Method list_tools()
List available tools from the MCP server
Usage
McpClient$list_tools()
Returns
A list of tool definitions
Method call_tool()
Call a tool on the MCP server
Usage
McpClient$call_tool(name, arguments = list())
Arguments
name: The tool name.
arguments: Tool arguments as a named list.
Returns
The tool result
Method list_resources()
List available resources from the MCP server
Usage
McpClient$list_resources()
Returns
A list of resource definitions
Method read_resource()
Read a resource from the MCP server
Usage
McpClient$read_resource(uri)
Arguments
uri: The resource URI.
Returns
The resource contents
Method is_alive()
Check if the MCP server process is alive
Usage
McpClient$is_alive()
Returns
TRUE if alive, FALSE otherwise
Method close()
Close the MCP client connection
Usage
McpClient$close()
Method as_sdk_tools()
Convert MCP tools to SDK Tool objects
Usage
McpClient$as_sdk_tools()
Returns
A list of Tool objects
Method clone()
The objects of this class are cloneable with this method.
Usage
McpClient$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
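A sketch of connecting to a stdio MCP server; the example launches the Model Context Protocol filesystem server via npx (requires Node.js), and the "read_file" tool name belongs to that server, not to aisdk:

```r
library(aisdk)

client <- McpClient$new(
  command = "npx",
  args = c("-y", "@modelcontextprotocol/server-filesystem", tempdir())
)

client$list_tools()                              # definitions from the server
client$call_tool("read_file", list(path = "notes.txt"))

# Bridge MCP tools into SDK Tool objects for generate_text()
tools <- client$as_sdk_tools()

client$close()
```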
MCP Discovery Class
Description
R6 class for discovering MCP servers on the local network using mDNS/DNS-SD (Bonjour) protocol.
Public fields
discovered: List of discovered MCP endpoints.
registry_url: URL of the remote skill registry.
Methods
Public methods
Method new()
Create a new MCP Discovery instance.
Usage
McpDiscovery$new(registry_url = NULL)
Arguments
registry_url: Optional URL for remote skill registry.
Returns
A new McpDiscovery object.
Method scan_network()
Scan the local network for MCP servers.
Usage
McpDiscovery$scan_network(timeout_seconds = 5, service_type = "_mcp._tcp")
Arguments
timeout_seconds: How long to scan for services.
service_type: The mDNS service type to look for.
Returns
A data frame of discovered services.
Method register()
Register a known MCP endpoint manually.
Usage
McpDiscovery$register(name, host, port, capabilities = NULL)
Arguments
name: Service name.
host: Hostname or IP address.
port: Port number.
capabilities: Optional list of capabilities.
Returns
Self (invisibly).
Method query_capabilities()
Query a discovered server for its capabilities.
Usage
McpDiscovery$query_capabilities(host, port)
Arguments
host: Hostname or IP.
port: Port number.
Returns
A list of server capabilities.
Method list_endpoints()
List all discovered MCP endpoints.
Usage
McpDiscovery$list_endpoints()
Returns
A data frame of endpoints.
Method search_registry()
Search the remote registry for skills.
Usage
McpDiscovery$search_registry(query)
Arguments
query: Search query.
Returns
A data frame of matching skills.
Method print()
Print method for McpDiscovery.
Usage
McpDiscovery$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
McpDiscovery$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
MCP Router Class
Description
A virtual MCP server that aggregates tools from multiple downstream MCP servers into a unified interface. Supports hot-swapping and skill negotiation.
Public fields
clients: List of connected MCP clients.
tool_map: Mapping of tool names to their source clients.
capabilities: Aggregated capabilities from all clients.
Methods
Public methods
Method new()
Create a new MCP Router.
Usage
McpRouter$new()
Returns
A new McpRouter object.
Method add_client()
Add an MCP client to the router.
Usage
McpRouter$add_client(name, client)
Arguments
name: Unique name for this client.
client: An McpClient object.
Returns
Self (invisibly).
Method connect()
Connect to an MCP server and add it to the router.
Usage
McpRouter$connect(name, command, args = character(), env = NULL)
Arguments
name: Unique name for this connection.
command: Command to run the MCP server.
args: Command arguments.
env: Environment variables.
Returns
Self (invisibly).
Method remove_client()
Remove an MCP client from the router (hot-swap out).
Usage
McpRouter$remove_client(name)
Arguments
name: Name of the client to remove.
Returns
Self (invisibly).
Method list_tools()
List all available tools across all connected clients.
Usage
McpRouter$list_tools()
Returns
A list of tool definitions.
Method call_tool()
Call a tool, routing to the appropriate client.
Usage
McpRouter$call_tool(name, arguments = list())
Arguments
name: Tool name.
arguments: Tool arguments.
Returns
The tool result.
Method as_sdk_tools()
Get all tools as SDK Tool objects for use with generate_text.
Usage
McpRouter$as_sdk_tools()
Returns
A list of Tool objects.
Method negotiate()
Negotiate capabilities with a specific client.
Usage
McpRouter$negotiate(client_name)
Arguments
client_name: Name of the client.
Returns
A list of negotiated capabilities.
Method status()
Get router status.
Usage
McpRouter$status()
Returns
A list with status information.
Method close()
Close all client connections.
Usage
McpRouter$close()
Method print()
Print method for McpRouter.
Usage
McpRouter$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
McpRouter$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
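A sketch of aggregating and hot-swapping downstream servers; the npx-launched servers are external examples (require Node.js), not part of aisdk:

```r
library(aisdk)

router <- McpRouter$new()

# Aggregate several downstream servers behind one interface
router$connect("fs", "npx",
               c("-y", "@modelcontextprotocol/server-filesystem", tempdir()))
router$connect("gh", "npx", c("-y", "@modelcontextprotocol/server-github"))

router$list_tools()             # union of both servers' tools
router$status()

# Hot-swap: drop one backend without disturbing the other
router$remove_client("gh")

tools <- router$as_sdk_tools()  # ready for generate_text()
router$close()
```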
MCP Server
Description
Expose R functions as MCP tools to external clients.
Details
Serves R tools and resources via MCP protocol over stdio.
Public fields
name: Server name.
version: Server version.
tools: Registered tools.
resources: Registered resources.
Methods
Public methods
Method new()
Create a new MCP Server
Usage
McpServer$new(name = "r-mcp-server", version = "0.1.0")
Arguments
name: Server name.
version: Server version.
Returns
A new McpServer object
Method add_tool()
Add a tool to the server
Usage
McpServer$add_tool(tool)
Arguments
tool: A Tool object from the SDK.
Returns
self (for chaining)
Method add_resource()
Add a resource to the server
Usage
McpServer$add_resource(uri, name, description = "", mime_type = "text/plain", read_fn)
Arguments
uri: Resource URI.
name: Resource name.
description: Resource description.
mime_type: MIME type.
read_fn: Function that returns the resource content.
Returns
self (for chaining)
Method listen()
Start listening for MCP requests on stdin/stdout. This is a blocking call.
Usage
McpServer$listen()
Method process_message()
Process a single MCP message (for testing)
Usage
McpServer$process_message(json_str)
Arguments
json_str: The JSON-RPC message.
Returns
The response, or NULL for notifications
Method clone()
The objects of this class are cloneable with this method.
Usage
McpServer$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
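A sketch of exposing an R resource over MCP; the "r://sessioninfo" URI is an arbitrary example, and process_message() lets you exercise the server without the blocking listen() loop:

```r
library(aisdk)

server <- McpServer$new(name = "r-stats-server", version = "0.1.0")

server$add_resource(
  uri = "r://sessioninfo",
  name = "session-info",
  description = "Current R session info",
  mime_type = "text/plain",
  read_fn = function() paste(capture.output(sessionInfo()), collapse = "\n")
)

# Drive the server with a raw JSON-RPC message for testing
server$process_message('{"jsonrpc":"2.0","id":1,"method":"resources/list"}')

# server$listen()   # blocking stdio loop for real clients
```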
MCP SSE Client
Description
Connect to an MCP server via Server-Sent Events (SSE).
Details
Manages connection to a remote MCP server via SSE transport.
Super class
aisdk::McpClient -> McpSseClient
Public fields
endpoint: The POST endpoint for sending messages (received from SSE init).
auth_headers: Authentication headers.
Methods
Public methods
Inherited methods
Method new()
Create a new MCP SSE Client
Usage
McpSseClient$new(url, headers = list())
Arguments
url: The SSE endpoint URL.
headers: Named list of headers (e.g., for auth).
Returns
A new McpSseClient object
Method clone()
The objects of this class are cloneable with this method.
Usage
McpSseClient$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Middleware (Base Class)
Description
Defines a middleware that can intercept and modify model operations.
Public fields
name: A descriptive name for this middleware.
Methods
Public methods
Method transform_params()
Transform parameters before calling the model.
Usage
Middleware$transform_params(params, type, model)
Arguments
params: The original call parameters.
type: Either "generate" or "stream".
model: The model being called.
Returns
The transformed parameters.
Method wrap_generate()
Wrap the generate operation.
Usage
Middleware$wrap_generate(do_generate, params, model)
Arguments
do_generate: A function that calls the model's do_generate.
params: The (potentially transformed) parameters.
model: The model being called.
Returns
The result of the generation.
Method wrap_stream()
Wrap the stream operation.
Usage
Middleware$wrap_stream(do_stream, params, model, callback)
Arguments
do_stream: A function that calls the model's do_stream.
params: The (potentially transformed) parameters.
model: The model being called.
callback: The streaming callback function.
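Since Middleware is a base class, a concrete interceptor subclasses it and overrides only the hooks it needs. A minimal logging sketch (R6 subclassing conventions assumed; whether do_generate is invoked with no arguments inside wrap_generate is an assumption based on the signatures above):

```r
library(R6)

LoggingMiddleware <- R6Class("LoggingMiddleware",
  inherit = Middleware,
  public = list(
    name = "logging",
    # Inspect and pass through parameters before the model call
    transform_params = function(params, type, model) {
      message("calling model (", type, ")")
      params
    },
    # Time the underlying generate call
    wrap_generate = function(do_generate, params, model) {
      t0 <- Sys.time()
      result <- do_generate()
      message("generation took ",
              round(difftime(Sys.time(), t0, units = "secs"), 2), "s")
      result
    }
  )
)
```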
Method clone()
The objects of this class are cloneable with this method.
Usage
Middleware$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Mission Class
Description
R6 class representing a persistent, goal-oriented execution mission. A Mission is the global leadership layer above Agent/AgentTeam/Flow.
Key capabilities:
Full state machine: pending -> planning -> running -> succeeded/failed/stalled
LLM-driven auto-planning: converts a goal string into ordered MissionSteps
Step-level retry with exponential backoff + error-context injection
DAG dependency resolution (depends_on)
Parallel step execution (parallel = TRUE groups)
Checkpoint persistence: save() / resume()
Full audit log for post-mortem analysis
MissionHookHandler integration for observability
Public fields
id: Unique mission UUID.
goal: Natural language goal description.
steps: List of MissionStep objects.
status: Mission status string.
session: SharedSession used across all steps.
model: Default model ID for this mission.
stall_policy: Named list defining failure recovery behavior.
hooks: MissionHookHandler for lifecycle events.
audit_log: List of event records in chronological order.
auto_plan: If TRUE and steps is NULL, use the LLM to plan before running.
default_executor: Default executor for auto-planned steps.
Methods
Public methods
Method new()
Initialize a new Mission.
Usage
Mission$new( goal, steps = NULL, model = NULL, executor = NULL, stall_policy = NULL, hooks = NULL, session = NULL, auto_plan = TRUE )
Arguments
goal: Natural language goal description.
steps: Optional list of MissionStep objects. If NULL and auto_plan = TRUE, the LLM plans them.
model: Default model ID (e.g., "anthropic:claude-opus-4-6").
executor: Default executor for all steps (Agent, AgentTeam, Flow, or function). Used for auto-planned steps when no per-step executor is specified.
stall_policy: Named list with on_tool_failure, on_step_timeout, on_max_retries, escalate_fn. Defaults to default_stall_policy().
hooks: MissionHookHandler for lifecycle events.
session: Optional SharedSession. Created automatically if NULL.
auto_plan: If TRUE, call the LLM to decompose the goal into steps when steps is NULL.
Method run()
Run the Mission synchronously until completion or stall.
Usage
Mission$run(model = NULL, ...)
Arguments
model: Optional model override. Falls back to self$model.
...: Additional arguments (reserved for future use).
Returns
Invisible self (inspect $status, $steps, $audit_log for results).
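Putting the constructor and run() together, an auto-planned mission looks roughly like this (the goal and model ID are illustrative; with steps = NULL and auto_plan = TRUE the LLM produces the MissionStep list):

```r
m <- Mission$new(
  goal      = "Summarize the latest sales CSV and draft a short report",
  model     = "openai:gpt-4o",  # illustrative model ID
  auto_plan = TRUE              # LLM decomposes the goal into steps
)
m$run()

m$status          # "succeeded", "failed", or "stalled"
m$step_summary()  # named character vector: step_id -> status
```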
Method save()
Save mission state to a file for later resumption.
Usage
Mission$save(path)
Arguments
path: File path (.rds).
Method resume()
Resume a Mission from a saved checkpoint.
Usage
Mission$resume(path)
Arguments
path: File path to a previously saved mission state (.rds).
Details
Steps that are already "done" are skipped. Pending/failed/retrying steps are re-executed. The executor must be re-attached via $set_executor() or by providing a default_executor at Mission creation.
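Checkpointing pairs save() with resume(); done steps are skipped on resume. A sketch (whether resume() restores state into an existing instance, and the exact $set_executor() signature, are assumptions based on the details above):

```r
m$save("mission_checkpoint.rds")

# Later, possibly in a new R session:
m2 <- Mission$new(goal = "placeholder", auto_plan = FALSE)
m2$resume("mission_checkpoint.rds")

# Executors are not serialized; re-attach before re-running
# (signature assumed from the Details section)
# m2$set_executor(my_agent)
m2$run()
```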
Method step_summary()
Get a summary of step statuses.
Usage
Mission$step_summary()
Returns
Named character vector: step_id -> status.
Method print()
Print method.
Usage
Mission$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Mission$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
MissionHookHandler Class
Description
R6 class to manage Mission-level lifecycle hooks. Supported events span the full Mission state machine: planning -> step execution -> completion / stall / escalation.
Public fields
hooks: Named list of hook functions.
Methods
Public methods
Method new()
Initialize a MissionHookHandler.
Usage
MissionHookHandler$new(hooks_list = list())
Arguments
hooks_list: Named list of hook functions. Supported hooks:
on_mission_start(mission) - Called when a Mission begins running.
on_mission_planned(mission) - Called after LLM planning produces steps.
on_step_start(step, attempt) - Called before each step attempt.
on_step_done(step, result) - Called when a step succeeds.
on_step_failed(step, error, attempt) - Called on each step failure.
on_mission_stall(mission, step) - Called when a step exceeds max_retries.
on_mission_done(mission) - Called when the Mission completes (succeeded or failed).
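The supported hooks map directly onto a hooks_list, for example:

```r
hooks <- MissionHookHandler$new(hooks_list = list(
  on_mission_start = function(mission) {
    message("mission started: ", mission$goal)
  },
  on_step_start = function(step, attempt) {
    message("step ", step$id, ", attempt ", attempt)
  },
  on_step_failed = function(step, error, attempt) {
    warning("step ", step$id, " failed: ", error)
  },
  on_mission_done = function(mission) {
    message("mission finished with status: ", mission$status)
  }
))
```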
Method trigger_mission_start()
Trigger on_mission_start.
Usage
MissionHookHandler$trigger_mission_start(mission)
Arguments
mission: The Mission object.
Method trigger_mission_planned()
Trigger on_mission_planned.
Usage
MissionHookHandler$trigger_mission_planned(mission)
Arguments
mission: The Mission object (steps are now populated).
Method trigger_step_start()
Trigger on_step_start.
Usage
MissionHookHandler$trigger_step_start(step, attempt)
Arguments
step: The MissionStep object.
attempt: Integer attempt number (1 = first try).
Method trigger_step_done()
Trigger on_step_done.
Usage
MissionHookHandler$trigger_step_done(step, result)
Arguments
step: The MissionStep object.
result: The text result from the executor.
Method trigger_step_failed()
Trigger on_step_failed.
Usage
MissionHookHandler$trigger_step_failed(step, error, attempt)
Arguments
step: The MissionStep object.
error: The error message string.
attempt: Integer attempt number.
Method trigger_mission_stall()
Trigger on_mission_stall.
Usage
MissionHookHandler$trigger_mission_stall(mission, step)
Arguments
mission: The Mission object.
step: The step that caused the stall.
Method trigger_mission_done()
Trigger on_mission_done.
Usage
MissionHookHandler$trigger_mission_done(mission)
Arguments
mission: The completed Mission object.
Method clone()
The objects of this class are cloneable with this method.
Usage
MissionHookHandler$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
MissionOrchestrator Class
Description
R6 class that manages a queue of Missions and executes them within a concurrency limit. Provides full observability via status summaries and stall detection through reconcile().
Public fields
pending_queue: List of Mission objects waiting to run.
running: List of Mission objects currently executing.
completed: List of Mission objects that have finished.
max_concurrent: Maximum simultaneous missions. Default 3.
global_session: Optional SharedSession shared across all missions.
global_model: Default model ID for missions that don't specify one.
async_handles: List of callr::r_bg handles (for run_async).
Methods
Public methods
Method new()
Initialize a new MissionOrchestrator.
Usage
MissionOrchestrator$new(max_concurrent = 3, model = NULL, session = NULL)
Arguments
max_concurrent: Maximum simultaneous missions. Default 3.
model: Optional default model for all missions.
session: Optional shared SharedSession.
Method submit()
Submit a Mission to the orchestrator queue.
Usage
MissionOrchestrator$submit(mission)
Arguments
mission: A Mission object.
Returns
Invisible self for chaining.
Method run_all()
Run all submitted missions respecting the concurrency limit.
Usage
MissionOrchestrator$run_all(model = NULL)
Arguments
model: Optional model override for all missions in this run.
Details
Missions are executed in batches of max_concurrent. Within each batch, missions run in parallel (via parallel::mclapply on Unix, sequentially on Windows). Completed missions are moved to $completed.
Returns
Invisibly returns the list of completed Mission objects.
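A typical batch run submits several missions and lets the orchestrator enforce the concurrency limit (goals and model ID are illustrative):

```r
orch <- MissionOrchestrator$new(max_concurrent = 2, model = "openai:gpt-4o")
orch$submit(Mission$new(goal = "Clean the dataset"))
orch$submit(Mission$new(goal = "Fit the model"))
orch$submit(Mission$new(goal = "Write the report"))

done <- orch$run_all()   # batches of 2; parallel on Unix, sequential on Windows
orch$status()            # data.frame with id, goal, status, n_steps
```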
Method run_async()
Run a single Mission asynchronously in a background process.
Usage
MissionOrchestrator$run_async(mission, model = NULL)
Arguments
mission: A Mission object.
model: Optional model override.
Details
Uses callr::r_bg to launch the mission in a separate R process. The mission state is serialized to a temp file, executed, and the result is written back. Call $poll_async() to check completion.
Returns
A list with $handle (callr process), $mission_id, $checkpoint_path.
Method poll_async()
Poll all async handles and collect completed missions.
Usage
MissionOrchestrator$poll_async()
Returns
A named list with completed (a list of Mission objects) and still_running (an integer count).
Method reconcile()
Stall detection: check for missions that appear stuck.
Usage
MissionOrchestrator$reconcile(stall_threshold_secs = 600)
Arguments
stall_threshold_secs: Missions running longer than this are flagged. Default 600 (10 minutes).
Returns
A list of stalled mission IDs.
Method status()
Get a status summary of all missions.
Usage
MissionOrchestrator$status()
Returns
A data.frame with id, goal, status, n_steps columns.
Method print()
Print method.
Usage
MissionOrchestrator$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
MissionOrchestrator$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
MissionStep Class
Description
A single execution unit within a Mission. Each step wraps an executor (Agent, AgentTeam, Flow, or plain R function) and handles its own retry loop with error-history injection.
Public fields
id: Unique step identifier.
description: Natural language description of what this step does.
executor: Agent, AgentTeam, Flow, or function to perform the step.
status: Current status: "pending" | "running" | "done" | "failed" | "retrying".
max_retries: Maximum retry attempts before escalation. Default 2.
retry_count: Number of retries attempted so far.
timeout_secs: Optional per-step timeout in seconds. NULL = no timeout.
parallel: If TRUE, this step may run concurrently with other parallel steps.
depends_on: Character vector of step IDs that must complete before this step.
result: The text result from the executor on success.
error_history: List of failure records, each containing attempt, error, and timestamp.
Methods
Public methods
Method new()
Initialize a MissionStep.
Usage
MissionStep$new( id, description, executor = NULL, max_retries = 2, timeout_secs = NULL, parallel = FALSE, depends_on = NULL )
Arguments
id: Unique step ID (e.g., "step_1").
description: Natural language task description.
executor: Agent, AgentTeam, Flow, or R function.
max_retries: Maximum retries before stall escalation. Default 2.
timeout_secs: Optional per-step timeout. Default NULL.
parallel: Can run in parallel with other parallel steps. Default FALSE.
depends_on: Character vector of prerequisite step IDs. Default NULL.
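Hand-built steps express the DAG through depends_on and parallel. Here two independent steps feed a third (executors omitted for brevity; a default executor supplied at Mission creation would cover them):

```r
steps <- list(
  MissionStep$new("fetch_a", "Download dataset A", parallel = TRUE),
  MissionStep$new("fetch_b", "Download dataset B", parallel = TRUE),
  MissionStep$new("merge",   "Merge datasets A and B",
                  depends_on = c("fetch_a", "fetch_b"))
)

m <- Mission$new(goal = "Build the merged dataset",
                 steps = steps, auto_plan = FALSE)
```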
Method run()
Execute this step once (no retry logic; handled by Mission).
Usage
MissionStep$run(session, model, context = NULL)
Arguments
session: A ChatSession for shared state.
model: Model ID string.
context: Optional error-injection context string.
Returns
Character string result, or stops with an error.
Method print()
Print method.
Usage
MissionStep$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
MissionStep$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
NVIDIA Language Model Class
Description
Language model implementation for NVIDIA's chat completions API. Inherits from OpenAI model but adds support for NVIDIA-specific features like "enable_thinking" and reasoning content extraction.
Super classes
aisdk::LanguageModelV1 -> aisdk::OpenAILanguageModel -> NvidiaLanguageModel
Methods
Public methods
Inherited methods
aisdk::LanguageModelV1$generate()
aisdk::LanguageModelV1$has_capability()
aisdk::LanguageModelV1$stream()
aisdk::OpenAILanguageModel$build_payload()
aisdk::OpenAILanguageModel$build_stream_payload()
aisdk::OpenAILanguageModel$do_generate()
aisdk::OpenAILanguageModel$do_stream()
aisdk::OpenAILanguageModel$execute_request()
aisdk::OpenAILanguageModel$format_tool_result()
aisdk::OpenAILanguageModel$get_config()
aisdk::OpenAILanguageModel$get_history_format()
aisdk::OpenAILanguageModel$initialize()
Method parse_response()
Parse the API response into a GenerateResult. Overrides parent to extract NVIDIA-specific reasoning_content.
Usage
NvidiaLanguageModel$parse_response(response)
Arguments
response: The parsed API response.
Returns
A GenerateResult object.
Method clone()
The objects of this class are cloneable with this method.
Usage
NvidiaLanguageModel$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
NVIDIA Provider Class
Description
Provider class for NVIDIA.
Super class
aisdk::OpenAIProvider -> NvidiaProvider
Methods
Public methods
Inherited methods
Method new()
Initialize the NVIDIA provider.
Usage
NvidiaProvider$new(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key: NVIDIA API key. Defaults to the NVIDIA_API_KEY env var.
base_url: Base URL. Defaults to https://integrate.api.nvidia.com/v1.
headers: Optional additional headers.
Method language_model()
Create a language model.
Usage
NvidiaProvider$language_model(model_id = NULL)
Arguments
model_id: The model ID (e.g., "z-ai/glm4.7").
Returns
A NvidiaLanguageModel object.
Method clone()
The objects of this class are cloneable with this method.
Usage
NvidiaProvider$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Object Strategy
Description
Object Strategy
Details
Strategy for generating structured objects based on a JSON Schema. This strategy instructs the LLM to produce valid JSON matching the schema, and handles parsing and validation of the output.
Super class
aisdk::OutputStrategy -> ObjectStrategy
Public fields
schema: The schema definition (from z_object, etc.).
schema_name: Human-readable name for the schema.
Methods
Public methods
Method new()
Initialize the ObjectStrategy.
Usage
ObjectStrategy$new(schema, schema_name = "json_schema")
Arguments
schema: A schema object created by z_object, z_array, etc.
schema_name: An optional name for the schema (default: "json_schema").
Method get_instruction()
Generate the instruction for the LLM to output valid JSON.
Usage
ObjectStrategy$get_instruction()
Returns
A character string with the prompt instruction.
Method validate()
Validate and parse the LLM output as JSON.
Usage
ObjectStrategy$validate(text, is_final = FALSE)
Arguments
text: The raw text output from the LLM.
is_final: Logical, TRUE if this is the final output.
Returns
The parsed R object (list), or NULL if parsing fails.
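In practice the strategy is driven by a schema from the package's z_* constructors: get_instruction() supplies the prompt text and validate() parses the model's reply. A sketch (only z_object and z_array are named in this section; z_string and its signature are assumptions for illustration):

```r
# Hypothetical schema built with the package's schema helpers
schema <- z_object(
  name = z_string("The person's name"),  # helper signature assumed
  age  = z_string("Age in years")
)
strategy <- ObjectStrategy$new(schema, schema_name = "person")

cat(strategy$get_instruction())  # instruction telling the LLM to emit JSON

# Parse a (simulated) final model reply; returns a list, or NULL on failure
obj <- strategy$validate('{"name": "Ada", "age": "36"}', is_final = TRUE)
obj$name
```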
Method clone()
The objects of this class are cloneable with this method.
Usage
ObjectStrategy$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
OpenAI Embedding Model
Description
Embedding model implementation for OpenAI's embeddings API.
Super class
aisdk::EmbeddingModelV1 -> OpenAIEmbeddingModel
Methods
Public methods
Method new()
Initialize the OpenAI embedding model.
Usage
OpenAIEmbeddingModel$new(model_id, config)
Arguments
model_id: The model ID (e.g., "text-embedding-3-small").
config: Configuration list.
Method do_embed()
Generate embeddings for a value.
Usage
OpenAIEmbeddingModel$do_embed(value)
Arguments
value: A character string or vector to embed.
Returns
A list with embeddings and usage.
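The embedding model is normally obtained from a provider rather than constructed directly. A sketch using the provider factory documented later in this manual (API key read from the OPENAI_API_KEY env var):

```r
provider  <- OpenAIProvider$new()
emb_model <- provider$embedding_model("text-embedding-3-small")

res <- emb_model$do_embed(c("first sentence", "second sentence"))
str(res$embeddings)  # embeddings component of the returned list
res$usage            # usage component, per the docs above
```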
Method clone()
The objects of this class are cloneable with this method.
Usage
OpenAIEmbeddingModel$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
OpenAI Language Model Class
Description
Language model implementation for OpenAI's chat completions API.
Super class
aisdk::LanguageModelV1 -> OpenAILanguageModel
Methods
Public methods
Inherited methods
Method new()
Initialize the OpenAI language model.
Usage
OpenAILanguageModel$new(model_id, config, capabilities = list())
Arguments
model_id: The model ID (e.g., "gpt-4o").
config: Configuration list with api_key, base_url, headers, etc.
capabilities: Optional list of capability flags.
Method get_config()
Get the configuration list.
Usage
OpenAILanguageModel$get_config()
Returns
A list with provider configuration.
Method build_payload()
Build the request payload for non-streaming generation. Subclasses can override to customize payload construction.
Usage
OpenAILanguageModel$build_payload(params)
Arguments
params: A list of call options.
Returns
A list with url, headers, and body.
Method execute_request()
Execute the API request.
Usage
OpenAILanguageModel$execute_request(url, headers, body)
Arguments
url: The API endpoint URL.
headers: A named list of HTTP headers.
body: The request body.
Returns
The parsed API response.
Method parse_response()
Parse the API response into a GenerateResult. Subclasses can override to extract provider-specific fields (e.g., reasoning_content).
Usage
OpenAILanguageModel$parse_response(response)
Arguments
response: The parsed API response.
Returns
A GenerateResult object.
Method do_generate()
Generate text (non-streaming). Uses template method pattern.
Usage
OpenAILanguageModel$do_generate(params)
Arguments
params: A list of call options including messages, temperature, etc.
Returns
A GenerateResult object.
Method build_stream_payload()
Build the request payload for streaming generation. Subclasses can override to customize stream payload construction.
Usage
OpenAILanguageModel$build_stream_payload(params)
Arguments
params: A list of call options.
Returns
A list with url, headers, and body.
Method do_stream()
Generate text (streaming).
Usage
OpenAILanguageModel$do_stream(params, callback)
Arguments
params: A list of call options.
callback: A function called for each chunk: callback(text, done).
Returns
A GenerateResult object.
Method format_tool_result()
Format a tool execution result for OpenAI's API.
Usage
OpenAILanguageModel$format_tool_result(tool_call_id, tool_name, result_content)
Arguments
tool_call_id: The ID of the tool call.
tool_name: The name of the tool (not used by OpenAI but kept for interface consistency).
result_content: The result content from executing the tool.
Returns
A list formatted as a message for OpenAI's API.
Method get_history_format()
Get the message format for OpenAI.
Usage
OpenAILanguageModel$get_history_format()
Method clone()
The objects of this class are cloneable with this method.
Usage
OpenAILanguageModel$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
OpenAI Provider Class
Description
Provider class for OpenAI. Can create language and embedding models.
Public fields
specification_version: Provider spec version.
Methods
Public methods
Method new()
Initialize the OpenAI provider.
Usage
OpenAIProvider$new( api_key = NULL, base_url = NULL, organization = NULL, project = NULL, headers = NULL, name = NULL, disable_stream_options = FALSE )
Arguments
api_key: OpenAI API key. Defaults to the OPENAI_API_KEY env var.
base_url: Base URL for API calls. Defaults to https://api.openai.com/v1.
organization: Optional OpenAI organization ID.
project: Optional OpenAI project ID.
headers: Optional additional headers.
name: Optional provider name override (for compatible APIs).
disable_stream_options: Disable the stream_options parameter (for providers that don't support it).
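Because base_url, name, and disable_stream_options are configurable, the same provider class can front any OpenAI-compatible endpoint. A sketch (URL, env var name, and model ID are placeholders):

```r
# Stock OpenAI: key read from OPENAI_API_KEY
openai <- OpenAIProvider$new()

# An OpenAI-compatible gateway (placeholders throughout)
compat <- OpenAIProvider$new(
  api_key  = Sys.getenv("MY_API_KEY"),
  base_url = "https://llm.example.com/v1",
  name     = "my-gateway",
  disable_stream_options = TRUE  # for servers lacking stream_options
)
model <- compat$language_model("my-model-id")
```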
Method language_model()
Create a language model.
Usage
OpenAIProvider$language_model(model_id = Sys.getenv("OPENAI_MODEL", "gpt-4o"))
Arguments
model_id: The model ID (e.g., "gpt-4o", "gpt-4o-mini").
Returns
An OpenAILanguageModel object.
Method responses_model()
Create a language model using the Responses API.
Usage
OpenAIProvider$responses_model(model_id)
Arguments
model_id: The model ID (e.g., "o1", "o3-mini", "gpt-4o").
Details
The Responses API is designed for:
Models with built-in reasoning (o1, o3 series)
Stateful multi-turn conversations (server maintains history)
Advanced features like structured outputs
The model maintains conversation state internally via response IDs.
Call model$reset() to start a fresh conversation.
Returns
An OpenAIResponsesLanguageModel object.
Method smart_model()
Smart model factory that automatically selects the best API.
Usage
OpenAIProvider$smart_model(model_id, api_format = c("auto", "chat", "responses"))
Arguments
model_id: The model ID.
api_format: API format to use: "auto" (default), "chat", or "responses".
Details
When api_format = "auto" (default), the method automatically selects:
Responses API for reasoning models (o1, o3, o1-mini, o3-mini)
Chat Completions API for all other models (gpt-4o, gpt-4, etc.)
You can override this by explicitly setting api_format.
Returns
A language model object (either OpenAILanguageModel or OpenAIResponsesLanguageModel).
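The selection rule above can be exercised explicitly:

```r
provider <- OpenAIProvider$new()

m1 <- provider$smart_model("o3-mini")  # auto: reasoning model -> Responses API
m2 <- provider$smart_model("gpt-4o")   # auto: -> Chat Completions API
m3 <- provider$smart_model("gpt-4o", api_format = "responses")  # forced override
```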
Method embedding_model()
Create an embedding model.
Usage
OpenAIProvider$embedding_model(model_id = "text-embedding-3-small")
Arguments
model_id: The model ID (e.g., "text-embedding-3-small").
Returns
An OpenAIEmbeddingModel object.
Method clone()
The objects of this class are cloneable with this method.
Usage
OpenAIProvider$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
OpenAI Responses Language Model Class
Description
Language model implementation for OpenAI's Responses API. This API is designed for stateful multi-turn conversations where the server maintains conversation history, and supports advanced features like:
Built-in reasoning/thinking (for o1, o3 models)
Server-side conversation state management via response IDs
Structured output items (reasoning, message, tool calls)
The Responses API uses a different request/response format than Chat Completions:
Request:
inputfield instead ofmessages, optionalprevious_response_idResponse:
outputarray with typed items instead ofchoices
Super class
aisdk::LanguageModelV1 -> OpenAIResponsesLanguageModel
Methods
Public methods
Inherited methods
Method new()
Initialize the OpenAI Responses language model.
Usage
OpenAIResponsesLanguageModel$new(model_id, config, capabilities = list())
Arguments
model_id: The model ID (e.g., "o1", "o3-mini", "gpt-4o").
config: Configuration list with api_key, base_url, headers, etc.
capabilities: Optional list of capability flags.
Method get_config()
Get the configuration list.
Usage
OpenAIResponsesLanguageModel$get_config()
Returns
A list with provider configuration.
Method get_last_response_id()
Get the last response ID (for debugging/advanced use).
Usage
OpenAIResponsesLanguageModel$get_last_response_id()
Returns
The last response ID or NULL.
Method reset()
Reset the conversation state (clear response ID). Call this to start a fresh conversation.
Usage
OpenAIResponsesLanguageModel$reset()
Method do_generate()
Generate text (non-streaming) using Responses API.
Usage
OpenAIResponsesLanguageModel$do_generate(params)
Arguments
params: A list of call options including messages, temperature, etc.
Returns
A GenerateResult object.
Method do_stream()
Generate text (streaming) using Responses API.
Usage
OpenAIResponsesLanguageModel$do_stream(params, callback)
Arguments
params: A list of call options.
callback: A function called for each chunk: callback(text, done).
Returns
A GenerateResult object.
Method format_tool_result()
Format a tool execution result for Responses API.
Usage
OpenAIResponsesLanguageModel$format_tool_result( tool_call_id, tool_name, result_content )
Arguments
tool_call_id: The ID of the tool call.
tool_name: The name of the tool.
result_content: The result content from executing the tool.
Returns
A list formatted as a message for Responses API.
Method get_history_format()
Get the message format for Responses API.
Usage
OpenAIResponsesLanguageModel$get_history_format()
Method clone()
The objects of this class are cloneable with this method.
Usage
OpenAIResponsesLanguageModel$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
OpenRouter Language Model Class
Description
Language model implementation for OpenRouter's chat completions API. Inherits from OpenAI model but adds support for OpenRouter-specific features like reasoning content extraction from reasoning models.
Super classes
aisdk::LanguageModelV1 -> aisdk::OpenAILanguageModel -> OpenRouterLanguageModel
Methods
Public methods
Inherited methods
aisdk::LanguageModelV1$generate()
aisdk::LanguageModelV1$has_capability()
aisdk::LanguageModelV1$stream()
aisdk::OpenAILanguageModel$build_payload()
aisdk::OpenAILanguageModel$build_stream_payload()
aisdk::OpenAILanguageModel$do_generate()
aisdk::OpenAILanguageModel$do_stream()
aisdk::OpenAILanguageModel$execute_request()
aisdk::OpenAILanguageModel$format_tool_result()
aisdk::OpenAILanguageModel$get_config()
aisdk::OpenAILanguageModel$get_history_format()
aisdk::OpenAILanguageModel$initialize()
Method parse_response()
Parse the API response into a GenerateResult. Overrides parent to extract reasoning_content from reasoning models.
Usage
OpenRouterLanguageModel$parse_response(response)
Arguments
response: The parsed API response.
Returns
A GenerateResult object.
Method clone()
The objects of this class are cloneable with this method.
Usage
OpenRouterLanguageModel$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
OpenRouter Provider Class
Description
Provider class for OpenRouter.
Super class
aisdk::OpenAIProvider -> OpenRouterProvider
Methods
Public methods
Inherited methods
Method new()
Initialize the OpenRouter provider.
Usage
OpenRouterProvider$new(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key: OpenRouter API key. Defaults to the OPENROUTER_API_KEY env var.
base_url: Base URL. Defaults to https://openrouter.ai/api/v1.
headers: Optional additional headers.
Method language_model()
Create a language model.
Usage
OpenRouterProvider$language_model(model_id = NULL)
Arguments
model_id: The model ID (e.g., "openai/gpt-4o", "anthropic/claude-sonnet-4-20250514", "deepseek/deepseek-r1", "google/gemini-2.5-pro").
Returns
An OpenRouterLanguageModel object.
Method clone()
The objects of this class are cloneable with this method.
Usage
OpenRouterProvider$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Output Strategy Interface
Description
Output Strategy Interface
Details
Abstract R6 class defining the interface for output strategies.
Subclasses must implement get_instruction() and validate().
Methods
Public methods
Method new()
Initialize the strategy.
Usage
OutputStrategy$new()
Method get_instruction()
Get the system prompt instruction for this strategy.
Usage
OutputStrategy$get_instruction()
Returns
A character string with instructions for the LLM.
Method validate()
Parse and validate the output text.
Usage
OutputStrategy$validate(text, is_final = FALSE)
Arguments
text: The raw text output from the LLM.
is_final: Logical, TRUE if this is the final output (not streaming).
Returns
The parsed and validated object.
Method clone()
The objects of this class are cloneable with this method.
Usage
OutputStrategy$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Project Memory Class
Description
R6 class for managing persistent project memory using SQLite. Stores code snippets, error fixes, and execution graphs for resuming failed long-running jobs.
Public fields
db_path: Path to the SQLite database file.
project_root: Root directory of the project.
Methods
Public methods
Method new()
Create or connect to a project memory database.
Usage
ProjectMemory$new(project_root = tempdir(), db_name = "memory.sqlite")
Arguments
project_root: Project root directory. Defaults to tempdir().
db_name: Database filename. Defaults to "memory.sqlite".
Returns
A new ProjectMemory object.
Method store_snippet()
Store a successful code snippet for future reference.
Usage
ProjectMemory$store_snippet( code, description = NULL, tags = NULL, context = NULL )
Arguments
code: The R code that was executed successfully.
description: Optional description of what the code does.
tags: Optional character vector of tags for categorization.
context: Optional context about when/why this code was used.
Returns
The ID of the stored snippet.
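Storing and retrieving snippets follows directly from the signatures above (requires the suggested RSQLite backend):

```r
mem <- ProjectMemory$new(project_root = tempdir())

id <- mem$store_snippet(
  code        = "df <- read.csv('sales.csv'); summary(df)",
  description = "Load and summarize sales data",
  tags        = c("io", "summary")
)

hits <- mem$search_snippets("sales", limit = 5)  # data frame of matches
```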
Method store_fix()
Store an error fix for learning.
Usage
ProjectMemory$store_fix( original_code, error, fixed_code, fix_description = NULL )
Arguments
original_code: The code that produced the error.
error: The error message.
fixed_code: The corrected code.
fix_description: Description of what was fixed.
Returns
The ID of the stored fix.
Method find_similar_fix()
Find a similar fix from memory.
Usage
ProjectMemory$find_similar_fix(error)
Arguments
error: The error message to match.
Returns
A list with the fix details, or NULL if not found.
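The fix store turns past errors into a lookup table for self-repair. A sketch (the field names on the returned list, such as fixed_code, are assumptions; the docs only promise "a list with the fix details"):

```r
mem <- ProjectMemory$new(project_root = tempdir())

mem$store_fix(
  original_code   = "mean(x)",
  error           = "object 'x' not found",
  fixed_code      = "x <- df$amount; mean(x)",
  fix_description = "Define x before use"
)

# Later, when a similar error resurfaces:
fix <- mem$find_similar_fix("object 'x' not found")
if (!is.null(fix)) fix$fixed_code  # field name assumed
```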
Method search_snippets()
Search for relevant code snippets.
Usage
ProjectMemory$search_snippets(query, limit = 10)
Arguments
query: Search query (matches description, tags, or code).
limit: Maximum number of results.
Returns
A data frame of matching snippets.
Method store_workflow_node()
Store execution graph node for workflow persistence.
Usage
ProjectMemory$store_workflow_node( workflow_id, node_id, node_type, code, status = "pending", result = NULL, dependencies = NULL )
Arguments
workflow_id: Unique identifier for the workflow.
node_id: Unique identifier for this node.
node_type: Type of node (e.g., "transform", "model", "output").
code: The code for this node.
status: Node status ("pending", "running", "completed", "failed").
result: Optional serialized result.
dependencies: Character vector of node IDs this depends on.
Returns
The database row ID.
Method update_node_status()
Update workflow node status.
Usage
ProjectMemory$update_node_status(workflow_id, node_id, status, result = NULL)
Arguments
workflow_id: Workflow identifier.
node_id: Node identifier.
status: New status.
result: Optional result to store.
Method get_workflow()
Get workflow state for resuming.
Usage
ProjectMemory$get_workflow(workflow_id)
Arguments
workflow_id: Workflow identifier.
Returns
A list with workflow nodes and their states.
Method get_resumable_nodes()
Resume a failed workflow from the last successful point.
Usage
ProjectMemory$get_resumable_nodes(workflow_id)
Arguments
workflow_id: Workflow identifier.
Returns
List of node IDs that need to be re-executed.
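Together, the node store and get_resumable_nodes() let a failed pipeline restart from its last good point (workflow and node IDs are illustrative):

```r
mem <- ProjectMemory$new(project_root = tempdir())

mem$store_workflow_node("wf1", "load", "transform",
                        code = "df <- read.csv('d.csv')",
                        status = "completed")
mem$store_workflow_node("wf1", "model", "model",
                        code = "fit <- lm(y ~ x, data = df)",
                        status = "failed", dependencies = "load")

todo <- mem$get_resumable_nodes("wf1")  # node IDs needing re-execution
mem$update_node_status("wf1", "model", "completed")
```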
Method store_conversation()
Store a conversation turn for context.
Usage
ProjectMemory$store_conversation(session_id, role, content, metadata = NULL)
Arguments
session_id: Session identifier.
role: Message role ("user", "assistant", "system").
content: Message content.
metadata: Optional metadata list.
Method get_conversation()
Get conversation history for a session.
Usage
ProjectMemory$get_conversation(session_id, limit = 100)
Arguments
session_id: Session identifier.
limit: Maximum number of messages.
Returns
A data frame of conversation messages.
Method store_review()
Store or update a human review for an AI-generated chunk.
Usage
ProjectMemory$store_review( chunk_id, file_path, chunk_label, prompt, response, status = "pending", ai_agent = NULL, uncertainty = NULL, session_id = NULL, review_mode = NULL, runtime_mode = NULL, artifact_json = NULL, execution_status = NULL, execution_output = NULL, final_code = NULL, error_message = NULL )
Arguments
chunk_id: Unique identifier for the chunk.
file_path: Path to the source file.
chunk_label: Chunk label from knitr.
prompt: The prompt sent to the AI.
response: The AI's response.
status: Review status ("pending", "approved", "rejected").
ai_agent: Optional agent name.
uncertainty: Optional uncertainty level.
session_id: Optional session identifier for transcript/provenance.
review_mode: Optional normalized review mode.
runtime_mode: Optional normalized runtime mode.
artifact_json: Optional JSON review artifact payload.
execution_status: Optional execution state.
execution_output: Optional execution output text.
final_code: Optional finalized executable code.
error_message: Optional execution or generation error.
Returns
The database row ID.
Method store_review_artifact()
Store structured review artifact metadata for a chunk.
Usage
ProjectMemory$store_review_artifact( chunk_id, artifact, session_id = NULL, review_mode = NULL, runtime_mode = NULL )
Arguments
chunk_id: Chunk identifier.
artifact: A serializable list representing the review artifact.
session_id: Optional session identifier.
review_mode: Optional normalized review mode.
runtime_mode: Optional normalized runtime mode.
Returns
Invisible TRUE.
Method get_review()
Get a review by chunk ID.
Usage
ProjectMemory$get_review(chunk_id)
Arguments
chunk_id: Chunk identifier.
Returns
A list with review details, or NULL if not found.
Method get_review_artifact()
Get a parsed review artifact by chunk ID.
Usage
ProjectMemory$get_review_artifact(chunk_id)
Arguments
chunk_id: Chunk identifier.
Returns
A list artifact, or NULL if none is stored.
Method get_review_runtime_record()
Get a review together with its parsed artifact.
Usage
ProjectMemory$get_review_runtime_record(chunk_id)
Arguments
chunk_id: Chunk identifier.
Returns
A list with review and artifact, or NULL if not found.
Method get_reviews_for_file()
Get all reviews for a given source file.
Usage
ProjectMemory$get_reviews_for_file(file_path)
Arguments
file_path: Source document path.
Returns
A data frame of reviews ordered by updated time.
Method record_review_saveback()
Record a saveback lifecycle event for one or more chunk reviews.
Usage
ProjectMemory$record_review_saveback( chunk_ids, source_path, html_path = NULL, status = "requested", rerendered = FALSE, message = NULL )
Arguments
chunk_ids: Character vector of chunk identifiers.
source_path: Source document path.
html_path: Optional rendered HTML path.
status: Saveback status string.
rerendered: Whether a rerender occurred.
message: Optional message.
Returns
Invisible TRUE.
Method update_execution_result()
Update execution result fields for a chunk review.
Usage
ProjectMemory$update_execution_result( chunk_id, execution_status, execution_output = NULL, final_code = NULL, error_message = NULL )
Arguments
chunk_id: Chunk identifier.
execution_status: Execution state string.
execution_output: Optional execution output text.
final_code: Optional finalized executable code.
error_message: Optional execution error.
Returns
Invisible TRUE.
Method append_review_event()
Append an audit event for a reviewed chunk.
Usage
ProjectMemory$append_review_event(chunk_id, event_type, payload = NULL)
Arguments
chunk_id: Chunk identifier.
event_type: Event type string.
payload: Optional serializable payload list.
Returns
The database row ID.
Method update_review_status()
Update review status.
Usage
ProjectMemory$update_review_status(chunk_id, status)
Arguments
chunk_id: Chunk identifier.
status: New status ("approved" or "rejected").
Method get_pending_reviews()
Get pending reviews, optionally filtered by file.
Usage
ProjectMemory$get_pending_reviews(file_path = NULL)
Arguments
file_path: Optional file path filter.
Returns
A data frame of pending reviews.
Method stats()
Get memory statistics.
Usage
ProjectMemory$stats()
Returns
A list with counts and sizes.
Method clear()
Clear all memory (use with caution).
Usage
ProjectMemory$clear(confirm = FALSE)
Arguments
confirm: Must be TRUE to proceed.
Method print()
Print method for ProjectMemory.
Usage
ProjectMemory$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
ProjectMemory$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Provider Registry
Description
Manages registered providers and allows accessing models by ID.
Methods
Public methods
Method new()
Initialize the registry.
Usage
ProviderRegistry$new(separator = ":")
Arguments
separator: The separator between provider and model IDs (default: ":").
Method register()
Register a provider.
Usage
ProviderRegistry$register(id, provider)
Arguments
id: The provider ID (e.g., "openai").
provider: The provider object (must have a language_model method).
Method language_model()
Get a language model by ID.
Usage
ProviderRegistry$language_model(id)
Arguments
id: Model ID in the format "provider:model" (e.g., "openai:gpt-4o").
Returns
A LanguageModelV1 object.
Method embedding_model()
Get an embedding model by ID.
Usage
ProviderRegistry$embedding_model(id)
Arguments
id: Model ID in the format "provider:model".
Returns
An EmbeddingModelV1 object.
Method list_providers()
List all registered provider IDs.
Usage
ProviderRegistry$list_providers()
Returns
A character vector of provider IDs.
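A typical registry workflow, registering a provider and resolving a model by its compound ID, can be sketched as below; OpenAIProvider$new() is assumed to read OPENAI_API_KEY from the environment:

```r
# Build a registry, register one provider, and resolve a model by ID.
registry <- ProviderRegistry$new(separator = ":")
registry$register("openai", OpenAIProvider$new())
registry$list_providers()                          # character vector of provider IDs
model <- registry$language_model("openai:gpt-4o")  # a LanguageModelV1 object
```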
Method clone()
The objects of this class are cloneable with this method.
Usage
ProviderRegistry$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
SSEAggregator R6 Class
Description
Accumulates streaming chunks into a final GenerateResult.
Handles two tool call formats:
- OpenAI format: Chunked deltas with index, id, function.name, function.arguments
- Anthropic format: content_block_start with id+name, then input_json_delta chunks
Methods
Public methods
Method new()
Initialize the aggregator.
Usage
SSEAggregator$new(callback)
Arguments
callback: User callback function: callback(text, done).
Method on_text_delta()
Handle a text content delta.
Usage
SSEAggregator$on_text_delta(text)
Arguments
text: The text chunk.
Method on_reasoning_delta()
Handle a reasoning/thinking content delta.
Usage
SSEAggregator$on_reasoning_delta(text)
Arguments
text: The reasoning text chunk.
Method on_reasoning_start()
Signal the start of a reasoning block (Anthropic thinking).
Usage
SSEAggregator$on_reasoning_start()
Method on_block_stop()
Signal content block stop (closes reasoning if open).
Usage
SSEAggregator$on_block_stop()
Method on_tool_call_delta()
Handle OpenAI-format tool call deltas.
Usage
SSEAggregator$on_tool_call_delta(tool_calls)
Arguments
tool_calls: List of tool call delta objects from the choices delta.
Method on_tool_start()
Handle Anthropic-format tool use block start.
Usage
SSEAggregator$on_tool_start(index, id, name, input = NULL)
Arguments
index: Block index (0-based from API, converted to 1-based internally).
id: Tool call ID.
name: Tool name.
input: Initial input (usually NULL or empty).
Method on_tool_input_delta()
Handle Anthropic-format input_json_delta.
Usage
SSEAggregator$on_tool_input_delta(index, partial_json)
Arguments
index: Block index (0-based from API).
partial_json: Partial JSON string.
Method on_finish_reason()
Store finish reason.
Usage
SSEAggregator$on_finish_reason(reason)
Arguments
reason: The finish reason string.
Method on_usage()
Store usage information.
Usage
SSEAggregator$on_usage(usage)
Arguments
usage: Usage list.
Method on_raw_response()
Store last raw response for diagnostics.
Usage
SSEAggregator$on_raw_response(response)
Arguments
response: The raw response data.
Method on_done()
Signal stream completion.
Usage
SSEAggregator$on_done()
Method build_result()
Finalize accumulated state into a GenerateResult.
Usage
SSEAggregator$build_result()
Returns
A GenerateResult object.
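The aggregator's lifecycle can be sketched by driving the handlers manually; in real use the streaming transport invokes them as SSE events arrive:

```r
# Feed text deltas into an SSEAggregator and finalize the result.
agg <- SSEAggregator$new(function(text, done) {
  if (!done) cat(text)    # print each chunk as it arrives
})
agg$on_text_delta("Hello, ")
agg$on_text_delta("world!")
agg$on_finish_reason("stop")
agg$on_done()
result <- agg$build_result()   # GenerateResult with the accumulated text
```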
Method clone()
The objects of this class are cloneable with this method.
Usage
SSEAggregator$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
SandboxManager Class
Description
R6 class that manages an isolated R environment for executing LLM-generated R code. Tools are bound as callable functions within this environment, enabling the LLM to batch-invoke and process data locally.
Methods
Public methods
Method new()
Initialize a new SandboxManager.
Usage
SandboxManager$new(
tools = list(),
preload_packages = c("dplyr", "purrr"),
max_output_chars = 8000,
parent_env = NULL
)
Arguments
tools: Optional list of Tool objects to bind into the sandbox.
preload_packages: Character vector of package names to preload into the sandbox (their exports become available). Default: c("dplyr", "purrr").
max_output_chars: Maximum characters to capture from code output. Prevents a runaway print() from flooding the context. Default: 8000.
parent_env: Optional parent environment for the sandbox. When a ChatSession is available, pass session$get_envir() here to enable cross-step variable persistence.
Method bind_tools()
Bind Tool objects into the sandbox as callable R functions.
Usage
SandboxManager$bind_tools(tools)
Arguments
tools: A list of Tool objects to bind.
Returns
Invisible self (for chaining).
Method execute()
Execute R code in the sandbox environment.
Usage
SandboxManager$execute(code_str)
Arguments
code_str: A character string containing R code to execute.
Returns
A character string with captured stdout, or an error message.
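A minimal sandbox session might look like the sketch below; the preloaded package and the code string are illustrative:

```r
# Create a sandbox, run code in it, and reset user state.
sb <- SandboxManager$new(preload_packages = c("dplyr"), max_output_chars = 8000)
out <- sb$execute("mtcars |> dplyr::summarise(mean_mpg = mean(mpg)) |> print()")
cat(out)    # captured stdout, truncated at max_output_chars
sb$reset()  # clears user variables; tool bindings and packages are preserved
```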
Method get_tool_signatures()
Get human-readable signatures for all bound tools.
Usage
SandboxManager$get_tool_signatures()
Returns
A character string with Markdown-formatted tool documentation.
Method get_env()
Get the sandbox environment.
Usage
SandboxManager$get_env()
Returns
The R environment used by the sandbox.
Method list_tools()
Get list of bound tool names.
Usage
SandboxManager$list_tools()
Returns
Character vector of tool names available in the sandbox.
Method reset()
Reset the sandbox environment (clear all user variables). Tool bindings and preloaded packages are preserved.
Usage
SandboxManager$reset()
Method print()
Print method for SandboxManager.
Usage
SandboxManager$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
SandboxManager$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
SharedSession Class
Description
R6 class representing an enhanced session for multi-agent systems. Extends ChatSession with:
Execution context tracking (call stack, delegation history)
Sandboxed code execution with safety guardrails
Variable scoping and access control
Comprehensive tracing and observability
Super class
aisdk::ChatSession -> SharedSession
Methods
Public methods
Inherited methods
aisdk::ChatSession$append_message(), aisdk::ChatSession$as_list(), aisdk::ChatSession$checkpoint(), aisdk::ChatSession$clear_history(), aisdk::ChatSession$clear_memory(), aisdk::ChatSession$eval_in_session(), aisdk::ChatSession$get_envir(), aisdk::ChatSession$get_history(), aisdk::ChatSession$get_last_response(), aisdk::ChatSession$get_memory(), aisdk::ChatSession$get_metadata(), aisdk::ChatSession$get_model_id(), aisdk::ChatSession$list_envir(), aisdk::ChatSession$list_memory(), aisdk::ChatSession$list_metadata(), aisdk::ChatSession$merge_metadata(), aisdk::ChatSession$restore(), aisdk::ChatSession$restore_checkpoint(), aisdk::ChatSession$restore_from_list(), aisdk::ChatSession$save(), aisdk::ChatSession$send(), aisdk::ChatSession$send_stream(), aisdk::ChatSession$set_memory(), aisdk::ChatSession$set_metadata(), aisdk::ChatSession$stats(), aisdk::ChatSession$switch_model()
Method new()
Initialize a new SharedSession.
Usage
SharedSession$new( model = NULL, system_prompt = NULL, tools = NULL, hooks = NULL, max_steps = 10, registry = NULL, sandbox_mode = "strict", trace_enabled = TRUE )
Arguments
model: A LanguageModelV1 object or model string ID.
system_prompt: Optional system prompt for the conversation.
tools: Optional list of Tool objects.
hooks: Optional HookHandler object.
max_steps: Maximum steps for tool execution loops. Default 10.
registry: Optional ProviderRegistry for model resolution.
sandbox_mode: Sandbox mode: "strict", "permissive", or "none". Default "strict".
trace_enabled: Enable execution tracing. Default TRUE.
Method push_context()
Push an agent onto the execution stack.
Usage
SharedSession$push_context(agent_name, task, parent_agent = NULL)
Arguments
agent_name: Name of the agent being activated.
task: The task being delegated.
parent_agent: Name of the delegating agent (or NULL for root).
Returns
Invisible self for chaining.
Method pop_context()
Pop the current agent from the execution stack.
Usage
SharedSession$pop_context(result = NULL)
Arguments
result: Optional result from the completed agent.
Returns
The popped context, or NULL if stack was empty.
Method get_context()
Get the current execution context.
Usage
SharedSession$get_context()
Returns
A list with current_agent, depth, and delegation_stack.
Method set_global_task()
Set the global task (user's original request).
Usage
SharedSession$set_global_task(task)
Arguments
task: The global task description.
Returns
Invisible self for chaining.
Method execute_code()
Execute R code in a sandboxed environment.
Usage
SharedSession$execute_code( code, scope = "global", timeout_ms = 30000, capture_output = TRUE )
Arguments
code: R code to execute (character string).
scope: Variable scope: "global", "agent", or a custom scope name.
timeout_ms: Execution timeout in milliseconds. Default 30000.
capture_output: Capture stdout/stderr. Default TRUE.
Returns
A list with result, output, error, and duration_ms.
Method get_var()
Get a variable from a specific scope.
Usage
SharedSession$get_var(name, scope = "global", default = NULL)
Arguments
name: Variable name.
scope: Scope name. Default "global".
default: Default value if not found.
Returns
The variable value or default.
Method set_var()
Set a variable in a specific scope.
Usage
SharedSession$set_var(name, value, scope = "global")
Arguments
name: Variable name.
value: Variable value.
scope: Scope name. Default "global".
Returns
Invisible self for chaining.
Method list_vars()
List variables in a scope.
Usage
SharedSession$list_vars(scope = "global", pattern = NULL)
Arguments
scope: Scope name. Default "global".
pattern: Optional pattern to filter names.
Returns
Character vector of variable names.
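Scoped state and sandboxed execution on a shared session can be sketched as below; the model ID is a placeholder and any LanguageModelV1 object or string ID works:

```r
# Scoped variables plus sandboxed code execution on a SharedSession.
ss <- SharedSession$new(model = "openai:gpt-4o", sandbox_mode = "strict")
ss$set_var("threshold", 0.05, scope = "global")
ss$get_var("threshold")                       # 0.05
res <- ss$execute_code("x <- 1 + 1; x", timeout_ms = 5000)
res$result                                    # value of the last expression
ss$list_vars(scope = "global")                # includes "threshold"
```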
Method summarize_vars()
Get a summary of all variables in a scope.
Usage
SharedSession$summarize_vars(scope = "global")
Arguments
scope: Scope name. Default "global".
Returns
A data frame with name, type, and size information.
Method create_scope()
Create a new variable scope.
Usage
SharedSession$create_scope(scope_name, parent_scope = "global")
Arguments
scope_name: Name for the new scope.
parent_scope: Parent scope name. Default "global".
Returns
Invisible self for chaining.
Method delete_scope()
Delete a variable scope.
Usage
SharedSession$delete_scope(scope_name)
Arguments
scope_name: Name of the scope to delete.
Returns
Invisible self for chaining.
Method trace_event()
Record a trace event.
Usage
SharedSession$trace_event(event_type, data = list())
Arguments
event_type: Type of event (e.g., "context_push", "code_execution").
data: Event data as a list.
Returns
Invisible self for chaining.
Method get_trace()
Get the execution trace.
Usage
SharedSession$get_trace(event_types = NULL, agent = NULL)
Arguments
event_types: Optional filter by event types.
agent: Optional filter by agent name.
Returns
A list of trace events.
Method clear_trace()
Clear the execution trace.
Usage
SharedSession$clear_trace()
Returns
Invisible self for chaining.
Method trace_summary()
Get trace summary statistics.
Usage
SharedSession$trace_summary()
Returns
A list with event counts, agent activity, and timing info.
Method set_access_control()
Set access control for an agent.
Usage
SharedSession$set_access_control(agent_name, permissions)
Arguments
agent_name: Agent name.
permissions: List of permissions (read_scopes, write_scopes, tools).
Returns
Invisible self for chaining.
Method check_permission()
Check if an agent has permission for an action.
Usage
SharedSession$check_permission(agent_name, action, target)
Arguments
agent_name: Agent name.
action: Action type: "read", "write", or "tool".
target: Target scope or tool name.
Returns
TRUE if permitted, FALSE otherwise.
Method get_sandbox_mode()
Get sandbox mode.
Usage
SharedSession$get_sandbox_mode()
Returns
The current sandbox mode.
Method set_sandbox_mode()
Set sandbox mode.
Usage
SharedSession$set_sandbox_mode(mode)
Arguments
mode: Sandbox mode: "strict", "permissive", or "none".
Returns
Invisible self for chaining.
Method print()
Print method for SharedSession.
Usage
SharedSession$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
SharedSession$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Skill Class
Description
R6 class representing a skill with progressive loading capabilities. A Skill consists of:
Level 1: YAML frontmatter (name, description) - always loaded
Level 2: SKILL.md body (detailed instructions) - on demand
Level 3: R scripts (executable code) - executed by agent
Public fields
name: The unique name of the skill (from YAML frontmatter).
description: A brief description of the skill (from YAML frontmatter).
path: The directory path containing the skill files.
Methods
Public methods
Method new()
Create a new Skill object by parsing a SKILL.md file.
Usage
Skill$new(path)
Arguments
path: Path to the skill directory (containing SKILL.md).
Returns
A new Skill object.
Method load()
Load the full SKILL.md body content (Level 2).
Usage
Skill$load()
Returns
Character string containing the skill instructions.
Method execute_script()
Execute an R script from the skill's scripts directory (Level 3). Uses callr for safe, isolated execution.
Usage
Skill$execute_script(script_name, args = list())
Arguments
script_name: Name of the script file (e.g., "normalize.R").
args: Named list of arguments to pass to the script.
Returns
The result from the script execution.
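The three loading levels can be sketched end to end; the skill directory and script name below are assumptions, and the directory must contain a SKILL.md file:

```r
# Progressive loading: metadata -> instructions -> executable script.
sk <- Skill$new("skills/normalize-data")   # parses YAML frontmatter (Level 1)
sk$name                                    # metadata, always loaded
body <- sk$load()                          # full SKILL.md body (Level 2)
sk$list_scripts()                          # available scripts, e.g. "normalize.R"
result <- sk$execute_script("normalize.R", # Level 3, isolated via callr
                            args = list(method = "zscore"))
```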
Method list_scripts()
List available scripts in the skill's scripts directory.
Usage
Skill$list_scripts()
Returns
Character vector of script file names.
Method list_resources()
List available reference files in the skill's references directory.
Usage
Skill$list_resources()
Returns
Character vector of reference file names.
Method read_resource()
Read content of a reference file from the references directory.
Usage
Skill$read_resource(resource_name)
Arguments
resource_name: Name of the reference file.
Returns
Character string containing the resource content.
Method get_asset_path()
Get the absolute path to an asset in the assets directory.
Usage
Skill$get_asset_path(asset_name)
Arguments
asset_name: Name of the asset file or directory.
Returns
Absolute path string.
Method print()
Print a summary of the skill.
Usage
Skill$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Skill$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
SkillRegistry Class
Description
R6 class that manages a collection of skills. Provides methods to:
Scan directories for SKILL.md files
Cache skill metadata (Level 1)
Retrieve skills by name
Generate prompt sections for LLM context
Methods
Public methods
Method new()
Create a new SkillRegistry, optionally scanning a directory.
Usage
SkillRegistry$new(path = NULL)
Arguments
path: Optional path to scan for skills on creation.
Returns
A new SkillRegistry object.
Method scan_skills()
Scan a directory for skill folders containing SKILL.md files.
Usage
SkillRegistry$scan_skills(path, recursive = FALSE)
Arguments
path: Path to the directory to scan.
recursive: Whether to scan subdirectories. Default FALSE.
Returns
The registry object (invisibly), for chaining.
Method get_skill()
Get a skill by name.
Usage
SkillRegistry$get_skill(name)
Arguments
name: The name of the skill to retrieve.
Returns
The Skill object, or NULL if not found.
Method has_skill()
Check if a skill exists in the registry.
Usage
SkillRegistry$has_skill(name)
Arguments
name: The name of the skill to check.
Returns
TRUE if the skill exists, FALSE otherwise.
Method list_skills()
List all registered skills with their names and descriptions.
Usage
SkillRegistry$list_skills()
Returns
A data.frame with columns: name, description.
Method count()
Get the number of registered skills.
Usage
SkillRegistry$count()
Returns
Integer count of skills.
Method generate_prompt_section()
Generate a prompt section listing available skills. This can be injected into the system prompt.
Usage
SkillRegistry$generate_prompt_section()
Returns
Character string with formatted skill list.
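A registry is typically built once and its skill list injected into the system prompt; a sketch with an illustrative scan path:

```r
# Scan for skills, look one up, and expose the catalog to the LLM.
reg <- SkillRegistry$new()
reg$scan_skills("skills/", recursive = TRUE)
reg$count()                                # number of registered skills
if (reg$has_skill("normalize-data")) {     # hypothetical skill name
  sk <- reg$get_skill("normalize-data")
}
system_prompt <- paste("You are a data analyst.",
                       reg$generate_prompt_section())
```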
Method print()
Print a summary of the registry.
Usage
SkillRegistry$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
SkillRegistry$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Skill Store Class
Description
R6 class for managing the global skill store, including installation, updates, and discovery of skills.
Public fields
registry_url: URL of the skill registry.
install_path: Local path for installed skills.
installed: List of installed skills.
Methods
Public methods
Method new()
Create a new SkillStore instance.
Usage
SkillStore$new(registry_url = NULL, install_path = NULL)
Arguments
registry_url: URL of the skill registry.
install_path: Local installation path.
Returns
A new SkillStore object.
Method install()
Install a skill from the registry or a GitHub repository.
Usage
SkillStore$install(skill_ref, version = NULL, force = FALSE)
Arguments
skill_ref: Skill reference (e.g., "username/skillname" or registry name).
version: Optional specific version to install.
force: Force reinstallation even if already installed.
Returns
The installed Skill object.
Method uninstall()
Uninstall a skill.
Usage
SkillStore$uninstall(name)
Arguments
name: Skill name.
Returns
Self (invisibly).
Method get()
Get an installed skill.
Usage
SkillStore$get(name)
Arguments
name: Skill name.
Returns
A Skill object or NULL.
Method list_installed()
List installed skills.
Usage
SkillStore$list_installed()
Returns
A data frame of installed skills.
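The store's install/search lifecycle can be sketched as below; "username/skillname" is a placeholder reference, not a real skill:

```r
# Search the global store, install a skill, and manage it locally.
store <- SkillStore$new()
store$search(query = "normalize")          # data frame of matching skills
sk <- store$install("username/skillname")  # returns the installed Skill object
store$list_installed()                     # data frame of installed skills
store$uninstall("skillname")
```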
Method search()
Search the registry for skills.
Usage
SkillStore$search(query = NULL, capability = NULL)
Arguments
query: Search query.
capability: Filter by capability.
Returns
A data frame of matching skills.
Method update_all()
Update all installed skills to latest versions.
Usage
SkillStore$update_all()
Returns
Self (invisibly).
Method validate()
Validate a skill.yaml manifest.
Usage
SkillStore$validate(path)
Arguments
path: Path to skill directory or skill.yaml file.
Returns
A list with validation results.
Method print()
Print method for SkillStore.
Usage
SkillStore$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
SkillStore$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
SLM Engine Class
Description
R6 class for managing local Small Language Model inference. Provides a unified interface for loading model weights, running inference, and managing model lifecycle.
Public fields
model_path: Path to the model weights file.
model_name: Human-readable model name.
backend: The inference backend ("onnx", "torch", "gguf").
config: Model configuration parameters.
loaded: Whether the model is currently loaded in memory.
Methods
Public methods
Method new()
Create a new SLM Engine instance.
Usage
SlmEngine$new(model_path, backend = "gguf", config = list())
Arguments
model_path: Path to the model weights file (GGUF, ONNX, or PT format).
backend: Inference backend to use: "gguf" (default), "onnx", or "torch".
config: Optional list of configuration parameters.
Returns
A new SlmEngine object.
Method load()
Load the model into memory.
Usage
SlmEngine$load()
Returns
Self (invisibly).
Method unload()
Unload the model from memory.
Usage
SlmEngine$unload()
Returns
Self (invisibly).
Method generate()
Generate text completion from a prompt.
Usage
SlmEngine$generate( prompt, max_tokens = 256, temperature = 0.7, top_p = 0.9, stop = NULL )
Arguments
prompt: The input prompt text.
max_tokens: Maximum number of tokens to generate.
temperature: Sampling temperature (0.0 to 2.0).
top_p: Nucleus sampling parameter.
stop: Optional stop sequences.
Returns
A list with generated text and metadata.
Method stream()
Stream text generation with a callback function.
Usage
SlmEngine$stream( prompt, callback, max_tokens = 256, temperature = 0.7, top_p = 0.9, stop = NULL )
Arguments
prompt: The input prompt text.
callback: Function called with each generated token.
max_tokens: Maximum number of tokens to generate.
temperature: Sampling temperature.
top_p: Nucleus sampling parameter.
stop: Optional stop sequences.
Returns
A list with the complete generated text and metadata.
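A local inference session can be sketched as below; the GGUF path is a placeholder, and load() must succeed before generate() or stream() is called:

```r
# Load a local small language model, generate, stream, then free memory.
eng <- SlmEngine$new("models/phi-3-mini.gguf", backend = "gguf")
eng$load()
out <- eng$generate("Summarise R's lazy evaluation in one sentence.",
                    max_tokens = 64, temperature = 0.2)
eng$stream("Count to five:", callback = function(token) cat(token))
eng$unload()
```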
Method info()
Get model information and statistics.
Usage
SlmEngine$info()
Returns
A list with model metadata.
Method print()
Print method for SlmEngine.
Usage
SlmEngine$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
SlmEngine$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Stepfun Language Model Class
Description
Language model implementation for Stepfun's chat completions API. Inherits from OpenAILanguageModel as Stepfun provides an OpenAI-compatible API.
Super classes
aisdk::LanguageModelV1 -> aisdk::OpenAILanguageModel -> StepfunLanguageModel
Methods
Public methods
Inherited methods
aisdk::LanguageModelV1$generate(), aisdk::LanguageModelV1$has_capability(), aisdk::LanguageModelV1$stream(), aisdk::OpenAILanguageModel$do_generate(), aisdk::OpenAILanguageModel$do_stream(), aisdk::OpenAILanguageModel$execute_request(), aisdk::OpenAILanguageModel$format_tool_result(), aisdk::OpenAILanguageModel$get_config(), aisdk::OpenAILanguageModel$get_history_format(), aisdk::OpenAILanguageModel$initialize(), aisdk::OpenAILanguageModel$parse_response()
Method build_payload()
Build the payload for the Stepfun API.
Usage
StepfunLanguageModel$build_payload(params)
Arguments
params: A list of parameters for the API call.
Method build_stream_payload()
Build the stream payload for the Stepfun API.
Usage
StepfunLanguageModel$build_stream_payload(params)
Arguments
params: A list of parameters for the API call.
Method clone()
The objects of this class are cloneable with this method.
Usage
StepfunLanguageModel$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Stepfun Provider Class
Description
Provider class for Stepfun.
Super class
aisdk::OpenAIProvider -> StepfunProvider
Methods
Public methods
Inherited methods
Method new()
Initialize the Stepfun provider.
Usage
StepfunProvider$new(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key: Stepfun API key. Defaults to the STEPFUN_API_KEY env var.
base_url: Base URL. Defaults to https://api.stepfun.com/v1.
headers: Optional additional headers.
Method language_model()
Create a language model.
Usage
StepfunProvider$language_model(model_id = NULL)
Arguments
model_id: The model ID (e.g., "step-3.5-flash").
Returns
A StepfunLanguageModel object.
Method clone()
The objects of this class are cloneable with this method.
Usage
StepfunProvider$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Theme Element Types
Description
Registry of theme element types and their properties.
Usage
THEME_ELEMENT_TYPES
Format
An object of class list of length 6.
Theme Component Hierarchy
Description
Defines the hierarchical structure of theme components.
Usage
THEME_HIERARCHY
Format
An object of class list of length 54.
Telemetry Class
Description
R6 class for logging events in a structured format (JSON).
Public fields
trace_id: Current trace ID for the session.
pricing_table: Pricing for common models (USD per 1M tokens).
Methods
Public methods
Method new()
Initialize Telemetry
Usage
Telemetry$new(trace_id = NULL)
Arguments
trace_id: Optional trace ID. If NULL, generates a random one.
Method log_event()
Log an event
Usage
Telemetry$log_event(type, ...)
Arguments
type: Event type (e.g., "generation_start", "tool_call").
...: Additional fields to log.
Method as_hooks()
Create hooks for telemetry
Usage
Telemetry$as_hooks()
Returns
A HookHandler object pre-configured with telemetry logs.
Method calculate_cost()
Calculate estimated cost for a generation result
Usage
Telemetry$calculate_cost(result, model_id = NULL)
Arguments
result: The GenerateResult object.
model_id: Optional model ID string. If NULL, attempts to infer the model from context (not yet reliable; passing it via log_event may be better).
Returns
Estimated cost in USD, or NULL if unknown.
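Typical use wires Telemetry into a session via hooks; a sketch, where the event fields passed through ... are free-form:

```r
# Structured, trace-scoped logging around a generation.
tel <- Telemetry$new()
tel$log_event("generation_start", model = "openai:gpt-4o")
hooks <- tel$as_hooks()   # HookHandler pre-configured with telemetry logs
# Given a GenerateResult `result` from a completed call:
# cost <- tel$calculate_cost(result, model_id = "gpt-4o")  # USD estimate or NULL
```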
Method clone()
The objects of this class are cloneable with this method.
Usage
Telemetry$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Tool Class
Description
R6 class representing a callable tool for LLM function calling. A Tool connects an LLM's tool call request to an R function.
Public fields
name: The unique name of the tool.
description: A description of what the tool does.
parameters: A z_object schema defining the tool's parameters.
layer: Tool layer: "llm" (loaded into context) or "computer" (executed via bash/filesystem).
meta: Optional metadata for the tool (e.g., caching configuration).
Methods
Public methods
Method new()
Initialize a Tool.
Usage
Tool$new(name, description, parameters, execute, layer = "llm", meta = NULL)
Arguments
name: Unique tool name (used by LLM to call the tool).
description: Description of the tool's purpose.
parameters: A z_object schema defining expected parameters.
execute: An R function that implements the tool logic.
layer: Tool layer: "llm" or "computer" (default: "llm").
meta: Optional metadata list (e.g., cache_control).
Method to_api_format()
Convert tool to API format.
Usage
Tool$to_api_format(provider = "openai")
Arguments
provider: Provider name ("openai" or "anthropic"). Default "openai".
Returns
A list in the format expected by the API.
Method run()
Execute the tool with given arguments.
Usage
Tool$run(args, envir = NULL)
Arguments
args: A list or named list of arguments.
envir: Optional environment in which to evaluate the tool function. When provided, the environment is passed as .envir in the args list, allowing the execute function to access and modify session variables.
Returns
The result of executing the tool function.
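Defining and invoking a tool directly can be sketched as below; the z_object() schema layer is referenced by this manual, but z_number() and the tool name are illustrative assumptions:

```r
# A minimal tool: schema, implementation, API format, and a direct call.
add_tool <- Tool$new(
  name        = "add_numbers",                      # hypothetical tool
  description = "Add two numbers.",
  parameters  = z_object(a = z_number(), b = z_number()),
  execute     = function(a, b) a + b
)
add_tool$to_api_format(provider = "openai")  # list in OpenAI tool format
add_tool$run(list(a = 2, b = 3))             # 5
```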
Method print()
Print method for Tool.
Usage
Tool$print()
Method clone()
The objects of this class are cloneable with this method.
Usage
Tool$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Volcengine Language Model Class
Description
Language model implementation for Volcengine's chat completions API.
Inherits from OpenAI model but adds support for Volcengine-specific features
like reasoning content extraction from models that support reasoning_content.
Super classes
aisdk::LanguageModelV1 -> aisdk::OpenAILanguageModel -> VolcengineLanguageModel
Methods
Public methods
Inherited methods
aisdk::LanguageModelV1$generate(), aisdk::LanguageModelV1$has_capability(), aisdk::LanguageModelV1$stream(), aisdk::OpenAILanguageModel$build_payload(), aisdk::OpenAILanguageModel$build_stream_payload(), aisdk::OpenAILanguageModel$do_generate(), aisdk::OpenAILanguageModel$do_stream(), aisdk::OpenAILanguageModel$execute_request(), aisdk::OpenAILanguageModel$format_tool_result(), aisdk::OpenAILanguageModel$get_config(), aisdk::OpenAILanguageModel$get_history_format(), aisdk::OpenAILanguageModel$initialize()
Method parse_response()
Parse the API response into a GenerateResult. Overrides parent to extract Volcengine-specific reasoning_content.
Usage
VolcengineLanguageModel$parse_response(response)
Arguments
response: The parsed API response.
Returns
A GenerateResult object.
Method clone()
The objects of this class are cloneable with this method.
Usage
VolcengineLanguageModel$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Volcengine Provider Class
Description
Provider class for the Volcengine Ark platform.
Super class
aisdk::OpenAIProvider -> VolcengineProvider
Methods
Public methods
Inherited methods
Method new()
Initialize the Volcengine provider.
Usage
VolcengineProvider$new(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key: Volcengine API key. Defaults to the ARK_API_KEY env var.
base_url: Base URL. Defaults to https://ark.cn-beijing.volces.com/api/v3.
headers: Optional additional headers.
Method language_model()
Create a language model.
Usage
VolcengineProvider$language_model(model_id = NULL)
Arguments
model_id: The model ID (e.g., "doubao-1-5-pro-256k-250115" or "gpt-4o").
Returns
A VolcengineLanguageModel object.
Method clone()
The objects of this class are cloneable with this method.
Usage
VolcengineProvider$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
xAI Language Model Class
Description
Language model implementation for xAI's chat completions API. Inherits from OpenAILanguageModel, as xAI provides an OpenAI-compatible API.
Super classes
aisdk::LanguageModelV1 -> aisdk::OpenAILanguageModel -> XAILanguageModel
Methods
Public methods
Inherited methods
aisdk::LanguageModelV1$generate(), aisdk::LanguageModelV1$has_capability(), aisdk::LanguageModelV1$stream(), aisdk::OpenAILanguageModel$build_payload(), aisdk::OpenAILanguageModel$build_stream_payload(), aisdk::OpenAILanguageModel$do_generate(), aisdk::OpenAILanguageModel$do_stream(), aisdk::OpenAILanguageModel$execute_request(), aisdk::OpenAILanguageModel$format_tool_result(), aisdk::OpenAILanguageModel$get_config(), aisdk::OpenAILanguageModel$get_history_format(), aisdk::OpenAILanguageModel$initialize()
Method parse_response()
Parse the API response into a GenerateResult. Overrides parent to extract xAI-specific reasoning_content.
Usage
XAILanguageModel$parse_response(response)
Arguments
response: The parsed API response.
Returns
A GenerateResult object.
Method clone()
The objects of this class are cloneable with this method.
Usage
XAILanguageModel$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
xAI Provider Class
Description
Provider class for xAI.
Super class
aisdk::OpenAIProvider -> XAIProvider
Methods
Public methods
Inherited methods
Method new()
Initialize the xAI provider.
Usage
XAIProvider$new(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key: xAI API key. Defaults to the XAI_API_KEY env var.
base_url: Base URL. Defaults to https://api.x.ai/v1.
headers: Optional additional headers.
Method language_model()
Create a language model.
Usage
XAIProvider$language_model(model_id = NULL)
Arguments
model_id: The model ID (e.g., "grok-beta", "grok-2-1212").
Returns
An XAILanguageModel object.
Method clone()
The objects of this class are cloneable with this method.
Usage
XAIProvider$clone(deep = FALSE)
Arguments
deep: Whether to make a deep clone.
Add Stable IDs to Nested List
Description
Recursively traverses a list and adds _id fields where missing
based on content hashing.
Usage
add_stable_ids(x, prefix = NULL)
Arguments
x: List to process.
prefix: Optional prefix for IDs.
Value
Modified list.
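A short illustrative sketch (not part of the original manual; the input list and prefix are made up):
# Tag nested list entries with stable content-hash IDs
cfg <- list(
  steps = list(
    list(name = "load"),
    list(name = "clean")
  )
)
tagged <- add_stable_ids(cfg, prefix = "pipeline")
# Entries that lacked an `_id` field now carry one derived from their content,
# so the same content always maps to the same ID across runs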
Performance & Benchmarking: Agent Evals
Description
Testing infrastructure for LLM-powered code. Provides testthat integration with custom expectations for evaluating AI agent performance, tool accuracy, and hallucination rates.
Agent Library: Built-in Agent Specialists
Description
Factory functions for creating standard library agents for common tasks. These agents are pre-configured with appropriate system prompts and tools for their respective specializations.
Agent Registry: Agent Storage and Lookup
Description
AgentRegistry R6 class for storing and retrieving Agent instances. Used by the Flow system for agent delegation.
AI Chat Server
Description
Shiny module server for AI-powered chat, featuring non-blocking streaming via background processes and tool execution bridge.
Usage
aiChatServer(
id,
model,
tools = NULL,
context = NULL,
system = NULL,
debug = FALSE,
on_message_complete = NULL
)
Arguments
id: The namespace ID for the module.
model: Either a LanguageModelV1 object, or a string ID like "openai:gpt-4o".
tools: Optional list of Tool objects for function calling.
context: Optional reactive expression that returns context data to inject into the system prompt. This is read with …
system: Optional system prompt.
debug: Reactive expression or logical. If TRUE, shows raw debug output in UI.
on_message_complete: Optional callback function called when a message is complete. Takes one argument: the complete assistant message text.
Value
A reactive value containing the chat history.
AI Chat UI
Description
Creates a modern, streaming-ready chat interface for Shiny applications.
Usage
aiChatUI(id, height = "500px")
Arguments
id: The namespace ID for the module.
height: Height of the chat window (e.g., "400px").
Value
A Shiny UI definition.
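A minimal Shiny app wiring aiChatUI() and aiChatServer() together (an illustrative sketch, not from the package manual; the model ID is an assumption and requires a configured API key):
if (interactive()) {
  library(shiny)
  library(aisdk)
  ui <- fluidPage(
    aiChatUI("chat", height = "400px")
  )
  server <- function(input, output, session) {
    # Returns a reactive value holding the chat history
    history <- aiChatServer("chat", model = "openai:gpt-4o")
  }
  shinyApp(ui, server)
}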
Analyze R Package for Skill Creation
Description
Introspects an installed R package to understand its capabilities, exported functions, and documentation. This is used by the Skill Architect to "learn" a package.
Usage
analyze_r_package(package)
Arguments
package: Name of the package to analyze.
Value
A string summary of the package.
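A quick illustrative call (any installed package works; jsonlite is just an example):
if (interactive()) {
  pkg_summary <- analyze_r_package("jsonlite")
  cat(pkg_summary)
}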
Annotate model capabilities based on ID
Description
Annotate model capabilities based on ID
Usage
annotate_model_capabilities(df)
Arguments
df: Data frame with 'id' column.
Value
Data frame with added logical columns
API Configuration Server
Description
Server logic for the API Configuration UI.
Usage
apiConfigServer(id)
Arguments
id: The namespace ID for the module.
API Configuration UI
Description
Creates a Shiny UI for configuring API providers.
Usage
apiConfigUI(id)
Arguments
id: The namespace ID for the module.
Value
A Shiny UI definition.
API Diagnostics
Description
Provides diagnostic tools to test internet connectivity, DNS resolution, and API reachability.
Human-in-the-Loop Authorization
Description
Provides dynamic authorization hooks to pause Agent execution and request user permission for elevated risk operations.
Auto-detect Variables
Description
Detects variable names mentioned in the prompt that exist in the environment.
Usage
auto_detect_vars(prompt, envir)
Arguments
prompt: The user's prompt.
envir: The environment to check.
Value
A character vector of variable names.
Autonomous Data Science Pipelines
Description
Self-healing runtime for R code execution. Implements a "Hypothesis-Fix-Verify" loop that feeds error messages, stack traces, and context back to an LLM for automatic error correction.
Execute R code with automatic error recovery using LLM assistance. When code fails, the error is analyzed and a fix is attempted automatically.
Usage
auto_fix(
expr,
model = NULL,
max_attempts = 3,
context = NULL,
verbose = TRUE,
memory = NULL
)
Arguments
expr: The R expression to execute.
model: The LLM model to use for error analysis (default: from options).
max_attempts: Maximum number of fix attempts (default: 3).
context: Optional additional context about the code's purpose.
verbose: Print progress messages (default: TRUE).
memory: Optional ProjectMemory object for learning from past fixes.
Value
The result of successful execution, or an error if all attempts fail.
Examples
## Not run:
# Simple usage - auto-fix a data transformation
result <- auto_fix({
df <- read.csv("data.csv")
df %>%
filter(value > 100) %>%
summarize(mean = mean(value))
})
# With context for better error understanding
result <- auto_fix(
expr = {
model <- lm(y ~ x, data = df)
},
context = "Fitting a linear regression model to predict sales"
)
## End(Not run)
Benchmark Agent
Description
Run a benchmark suite against an agent and collect performance metrics.
Usage
benchmark_agent(agent, tasks, tools = NULL, verbose = TRUE)
Arguments
agent: An Agent object or model string.
tasks: A list of benchmark tasks (see Details).
tools: Optional list of tools for the agent.
verbose: Print progress.
Details
Each task in the tasks list should have:
- prompt: The task prompt
- expected: Expected output or criteria
- category: Optional category for grouping
- ground_truth: Optional ground truth for hallucination checking
Value
A benchmark result object with metrics.
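An illustrative benchmark suite following the task fields listed in Details (prompts, expectations, and the model ID are made up; requires a configured provider):
if (interactive()) {
  tasks <- list(
    list(
      prompt = "What is 2 + 2?",
      expected = "4",
      category = "arithmetic"
    ),
    list(
      prompt = "Name the capital of France.",
      expected = "Paris",
      category = "factual"
    )
  )
  metrics <- benchmark_agent("openai:gpt-4o", tasks)
}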
Build Console System Prompt
Description
Build the system prompt for the console agent.
Usage
build_console_system_prompt(working_dir, sandbox_mode, language)
Arguments
working_dir: Current working directory.
sandbox_mode: Sandbox mode setting.
language: Language preference.
Value
System prompt string.
Build Context
Description
Builds context string from R objects in the environment.
Usage
build_context(prompt, context_spec, envir)
Arguments
prompt: The user's prompt.
context_spec: NULL (auto-detect), FALSE (skip), or character vector of var names.
envir: The environment to look for variables.
Value
A character string with context information.
Build Fix Prompt
Description
Build Fix Prompt
Usage
build_fix_prompt(code, error, call, traceback, context, memory_hint, attempt)
Caching System
Description
Utilities for caching tool execution results and other expensive operations.
Cache Tool
Description
Wrap a tool with caching capabilities using the memoise package.
Usage
cache_tool(tool, cache = NULL)
Arguments
tool: The Tool object to cache.
cache: An optional memoise cache configuration (e.g., cache_memory() or cache_filesystem()). Defaults to …
Value
A new Tool object that caches its execution.
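An illustrative sketch (assumes my_tool is an existing Tool object; the cache directory is made up):
if (interactive()) {
  # Wrap with the default (in-memory) memoise cache
  cached <- cache_tool(my_tool)
  # Or use a filesystem cache that persists across sessions
  cached_fs <- cache_tool(
    my_tool,
    cache = memoise::cache_filesystem("~/.aisdk-tool-cache")
  )
}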
Capture Traceback
Description
Capture Traceback
Usage
capture_traceback()
Channel Document Ingest
Description
Helpers for extracting, chunking, and summarizing inbound document attachments before they are injected into chat context.
Feishu Channel Adapter
Description
Feishu adapter built on top of the generic channel runtime seam. Phase 1 focuses on text events and final text replies.
Channel Runtime
Description
Runtime orchestration layer for driving ChatSession objects from external
messaging channels.
Channel Session Store
Description
Durable local storage for channel-driven chat sessions and their routing metadata.
Channel Integration Types
Description
Low-level types and seams for external messaging channels. These abstractions sit above providers and below UI surfaces.
Connect and Diagnose API Reachability
Description
Tests connectivity to a specific LLM, provider, or URL. This is helpful for diagnosing network issues, DNS failures, or SSL problems.
Usage
check_api(model = NULL, url = NULL, registry = NULL)
Arguments
model: Optional. A …
url: Optional. A specific URL to test.
registry: Optional ProviderRegistry to use if …
Value
A list containing diagnostic results (invisible).
Examples
if (interactive()) {
# Test by passing a URL directly
check_api(url = "https://api.openai.com/v1")
# Test a model directly
model <- create_openai()$language_model("gpt-4o")
check_api(model)
}
Check AST Safety
Description
Analyze R code for unsafe function calls or operations before execution.
Usage
check_ast_safety(code_str)
Arguments
code_str: Character string containing R code.
Value
The parsed AST if safe. Throws an error if unsafe.
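An illustrative sketch (which calls count as unsafe depends on the sandbox policy; the flagged example below is an assumption):
# Benign code parses and the AST is returned
ast <- check_ast_safety("mean(c(1, 2, 3))")
# A call like system() is the kind of operation expected to be rejected
try(check_ast_safety('system("rm -rf /")'))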
Check SDK Version Compatibility
Description
Check if code is compatible with the current SDK version and suggest migration steps if needed.
Usage
check_sdk_compatibility(code_version)
Arguments
code_version: Version string the code was written for.
Value
A list with compatible (logical) and suggestions (character vector).
Examples
if (interactive()) {
result <- check_sdk_compatibility("0.8.0")
if (!result$compatible) {
cat("Migration needed:\n")
cat(paste(result$suggestions, collapse = "\n"))
}
}
Clear AI Engine Session
Description
Clears the cached session(s) for the AI engine. Useful for resetting state between documents.
Usage
clear_ai_session(session_name = NULL)
Arguments
session_name: Optional name of specific session to clear. If NULL, clears all.
Value
Invisible NULL.
Compatibility Layer: Feature Flags and Migration Support
Description
Provides feature flags, compatibility shims, and migration utilities for controlled breaking changes in the agent SDK.
Launch API Configuration App
Description
Launches a Shiny application to configure API providers and environment variables.
Usage
configure_api()
Value
A Shiny app object
Console Chat: Interactive REPL
Description
Interactive terminal chat interface for ChatSession. Provides a REPL (Read-Eval-Print Loop) for conversing with LLMs. By default, enables an intelligent terminal agent that can execute commands, manage files, and run R code through natural language.
Console Agent: Intelligent Terminal Assistant
Description
Creates a default agent for console_chat() that enables natural language interaction with the terminal. Users can ask the agent to run commands, execute R code, read/write files, and more through conversational language.
Console App State Helpers
Description
Internal helpers for the incremental console TUI architecture. These functions centralize view mode, capability detection, per-turn transcript state, and append-only status/timeline rendering.
Start Console Chat
Description
Launch an interactive chat session in the R console. Supports streaming output, slash commands, and colorful display using the cli package.
The console UI has three presentation modes:
- clean: compact default output with a stable status bar
- inspect: keeps the compact transcript but adds a per-turn tool timeline and an overlay-backed inspector
- debug: shows detailed tool logs and thinking output for troubleshooting
In agent mode, console_chat() can execute shell and R tools, summarize tool
progress inline, and open an inspector overlay for the latest turn or a
specific tool. The current implementation uses a shared frame builder for the
status bar, tool timeline, and overlay surfaces, while preserving an
append-only terminal fallback.
By default, the console operates in agent mode with tools for bash execution,
file operations, R code execution, and more. Set agent = NULL for simple
chat without tools.
Usage
console_chat(
session = NULL,
system_prompt = NULL,
tools = NULL,
hooks = NULL,
stream = TRUE,
verbose = FALSE,
agent = "auto",
working_dir = tempdir(),
sandbox_mode = "permissive",
show_thinking = verbose
)
Arguments
session: A ChatSession object, a LanguageModelV1 object, or a model string ID to create a new session.
system_prompt: Optional system prompt (merged with agent prompt if agent is used).
tools: Optional list of additional Tool objects.
hooks: Optional HookHandler object.
stream: Whether to use streaming output. Default TRUE.
verbose: Logical. If …
agent: Agent configuration. Options: …
working_dir: Working directory for the console agent. Defaults to tempdir().
sandbox_mode: Sandbox mode for the console agent: "strict", "permissive" (default), or "none".
show_thinking: Logical. Whether to show model thinking blocks when the provider exposes them. Defaults to verbose.
Value
The ChatSession object (invisibly) when chat ends.
Examples
if (interactive()) {
# Start with default agent (intelligent terminal mode)
console_chat("openai:gpt-4o")
# Start in debug mode with full tool logs
console_chat("openai:gpt-4o", verbose = TRUE)
# Simple chat mode without tools
console_chat("openai:gpt-4o", agent = NULL)
# Start with an existing session
chat <- create_chat_session("anthropic:claude-3-5-sonnet-latest")
console_chat(chat)
# Start with a custom agent
agent <- create_agent("MathAgent", "Does math", system_prompt = "You are a math wizard.")
console_chat("openai:gpt-4o", agent = agent)
# Available commands in the chat:
# /quit or /exit - End the chat
# /save [path] - Save session to file
# /load [path] - Load session from file
# /model - Open the provider/model chooser
# /model [id] - Switch to a different model
# /model current - Show the active model
# /history - Show conversation history
# /stats - Show token usage statistics
# /clear - Clear conversation history
# /stream [on|off] - Toggle streaming mode
# /inspect [on|off] - Toggle inspect mode
# /inspect turn - Open overlay for the latest turn
# /inspect tool <index> - Open overlay for a tool in the latest turn
# /inspect next - Move inspector overlay to the next tool
# /inspect prev - Move inspector overlay to the previous tool
# /inspect close - Close the active inspect overlay
# /debug [on|off] - Toggle detailed tool/thinking output
# /local [on|off] - Toggle local execution mode (Global Environment)
# /help - Show available commands
# /agent [on|off] - Toggle agent mode
}
Console Confirmation Prompt
Description
Ask a yes/no question with numbered choices. Returns TRUE for yes,
FALSE for no, or NULL if cancelled.
Usage
console_confirm(question)
Arguments
question: The question to display.
Value
TRUE if user selects Yes, FALSE for No, NULL
if cancelled.
Examples
if (interactive()) {
if (isTRUE(console_confirm("Overwrite existing file?"))) {
message("Overwriting...")
}
}
Console Frame Helpers
Description
Internal helpers for building and rendering a structured console frame from
ConsoleAppState. This is the first step toward region ownership and later
diff rendering, while still using an append-only renderer today.
Console Text Input
Description
Prompt the user for free-text input with optional default value.
Usage
console_input(prompt, default = NULL)
Arguments
prompt: The prompt message to display.
default: Optional default value shown in brackets. Returned if user presses Enter without typing.
Value
The user's input string, default if empty input and default
is set, or NULL if empty input with no default.
Examples
if (interactive()) {
name <- console_input("Project name", default = "my-project")
api_key <- console_input("API key")
}
Console Interactive Menu
Description
Present a numbered list of choices and return the user's selection.
Styled with cli to match the console chat interface. Similar to
utils::menu() but with cli formatting.
Usage
console_menu(title, choices)
Arguments
title: The question or prompt to display.
choices: Character vector of options to present.
Value
The index of the selected choice (integer), or NULL if
cancelled (user enters 'q' or empty input).
Examples
if (interactive()) {
selection <- console_menu("Which database?", c("PostgreSQL", "SQLite", "DuckDB"))
}
Console Setup Helpers
Description
Internal helpers for human-friendly console_chat() startup, including
provider profile discovery, .Renviron persistence, and interactive model
selection.
Construct Prompt
Description
Combines user prompt with context.
Usage
construct_prompt(user_prompt, context_str)
Arguments
user_prompt: The user's original prompt.
context_str: The context string (may be empty).
Value
The full prompt to send to the LLM.
Create Image Content
Description
Creates an image content object for a multimodal message. Automatically handles URLs and local files (converted to base64).
Usage
content_image(image_path, media_type = "auto", detail = "auto")
Arguments
image_path: Path to a local file or a URL.
media_type: MIME type of the image (e.g., "image/jpeg", "image/png"). If NULL, attempts to guess from the file extension.
detail: Image detail setting supported by some models (e.g., "auto", "low", "high").
Value
A list representing the image content in OpenAI-compatible format.
Create Text Content
Description
Creates a text content object for a multimodal message.
Usage
content_text(text)
Arguments
text: The text string.
Value
A list representing the text content.
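An illustrative multimodal message combining both content helpers (the file path and model ID are assumptions; the message shape follows the OpenAI-compatible format described above):
if (interactive()) {
  msg <- list(
    role = "user",
    content = list(
      content_text("Describe this chart in one sentence."),
      content_image("chart.png", media_type = "image/png", detail = "low")
    )
  )
  model <- create_openai()$language_model("gpt-4o")
  result <- model$generate(messages = list(msg))
  print(result$text)
}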
Context Management
Description
Utilities for capturing and summarizing R objects for LLM context.
Core API: High-Level Functions
Description
User-facing high-level API functions for interacting with AI models.
Core Object API: Structured Output Generation
Description
Functions for generating structured objects from LLMs using schemas.
Generate a structured R object (list) from a language model based on a schema. The model is instructed to output valid JSON matching the schema, which is then parsed and returned as an R list.
Usage
generate_object(
model = NULL,
prompt,
schema,
schema_name = "result",
system = NULL,
temperature = 0.3,
max_tokens = NULL,
mode = c("json", "tool"),
registry = NULL,
...
)
Arguments
model: Either a LanguageModelV1 object, or a string ID like "openai:gpt-4o".
prompt: A character string prompt describing what to generate.
schema: A schema object created by …
schema_name: Optional human-readable name for the schema (default: "result").
system: Optional system prompt.
temperature: Sampling temperature (0-2). Default 0.3 (lower for structured output).
max_tokens: Maximum tokens to generate.
mode: Output mode: "json" (prompt-based) or "tool" (function calling). Currently, only "json" mode is implemented.
registry: Optional ProviderRegistry to use (defaults to global registry).
...: Additional arguments passed to the model.
Value
A GenerateObjectResult with:
- object: The parsed R object (list)
- usage: Token usage information
- raw_text: The raw text output from the LLM
- finish_reason: The reason the generation stopped
Examples
if (interactive()) {
# Define a schema for the expected output
schema <- z_object(
title = z_string(description = "Title of the article"),
keywords = z_array(z_string()),
sentiment = z_enum(c("positive", "negative", "neutral"))
)
# Generate structured object
result <- generate_object(
model = "openai:gpt-4o",
prompt = "Analyze this article: 'R programming is great for data science!'",
schema = schema
)
print(result$object$title)
print(result$object$sentiment)
}
Create an Agent
Description
Factory function to create a new Agent object.
Usage
create_agent(
name,
description,
system_prompt = NULL,
tools = NULL,
skills = NULL,
model = NULL
)
Arguments
name: Unique name for this agent.
description: A clear description of what this agent does.
system_prompt: Optional system prompt defining the agent's persona.
tools: Optional list of Tool objects the agent can use.
skills: Optional character vector of skill paths or "auto".
model: Optional default model ID for this agent.
Value
An Agent object.
Examples
if (interactive()) {
# Create a simple math agent
math_agent <- create_agent(
name = "MathAgent",
description = "Performs arithmetic calculations",
system_prompt = "You are a math assistant. Return only numerical results."
)
# Run the agent
result <- math_agent$run("Calculate 2 + 2", model = "openai:gpt-4o")
# Create an agent with skills
stock_agent <- create_agent(
name = "StockAnalyst",
description = "Stock analysis agent",
skills = "auto"
)
}
Create an Agent Registry
Description
Factory function to create a new AgentRegistry.
Usage
create_agent_registry(agents = NULL)
Arguments
agents: Optional list of Agent objects to register immediately.
Value
An AgentRegistry object.
Examples
if (interactive()) {
# Create registry with agents
cleaner <- create_agent("Cleaner", "Cleans data")
plotter <- create_agent("Plotter", "Creates visualizations")
registry <- create_agent_registry(list(cleaner, plotter))
print(registry$list_agents()) # "Cleaner", "Plotter"
}
Create AiHubMix Provider
Description
Factory function to create an AiHubMix provider.
AiHubMix provides a unified API for various models including Claude, OpenAI, Gemini, etc.
Usage
create_aihubmix(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key: AiHubMix API key. Defaults to AIHUBMIX_API_KEY env var.
base_url: Base URL for API calls. Defaults to https://aihubmix.com/v1.
headers: Optional additional headers.
Value
An AiHubMixProvider object.
Examples
if (interactive()) {
aihubmix <- create_aihubmix()
model <- aihubmix$language_model("claude-sonnet-3-5")
result <- generate_text(model, "Explain quantum computing in one sentence.")
}
Create AiHubMix Provider (Anthropic API Format)
Description
Factory function to create an AiHubMix provider using the Anthropic-compatible API. This allows you to use AiHubMix Claude models with the native Anthropic API format, unlocking advanced features like Prompt Caching.
Usage
create_aihubmix_anthropic(
api_key = NULL,
extended_caching = FALSE,
headers = NULL
)
Arguments
api_key: AiHubMix API key. Defaults to AIHUBMIX_API_KEY env var.
extended_caching: Logical. If TRUE, enables the 1-hour beta cache for Claude.
headers: Optional additional headers.
Details
AiHubMix provides an Anthropic-compatible endpoint at https://aihubmix.com/v1.
This convenience function wraps create_anthropic() with AiHubMix-specific defaults.
Value
An AnthropicProvider object configured for AiHubMix.
Examples
if (interactive()) {
# Use AiHubMix via Anthropic API format (unlocks caching)
aihubmix_claude <- create_aihubmix_anthropic()
model <- aihubmix_claude$language_model("claude-3-5-sonnet-20241022")
result <- generate_text(model, "Hello Claude!")
}
Create AiHubMix Provider (Gemini API Format)
Description
Factory function to create an AiHubMix provider using the Gemini-compatible API. This allows you to use Gemini models with the native Gemini API structure.
Usage
create_aihubmix_gemini(api_key = NULL, headers = NULL)
Arguments
api_key: AiHubMix API key. Defaults to AIHUBMIX_API_KEY env var.
headers: Optional additional headers.
Details
AiHubMix provides a Gemini-compatible endpoint at https://aihubmix.com/gemini/v1beta/models.
This convenience function wraps create_gemini() with AiHubMix-specific defaults.
Value
A GeminiProvider object configured for AiHubMix.
Examples
if (interactive()) {
# Use AiHubMix via Gemini API format
aihubmix_gemini <- create_aihubmix_gemini()
model <- aihubmix_gemini$language_model("gemini-2.5-flash")
result <- generate_text(model, "Hello Gemini!")
}
Create Anthropic Provider
Description
Factory function to create an Anthropic provider.
Usage
create_anthropic(
api_key = NULL,
base_url = NULL,
api_version = NULL,
headers = NULL,
name = NULL
)
Arguments
api_key: Anthropic API key. Defaults to ANTHROPIC_API_KEY env var.
base_url: Base URL for API calls. Defaults to https://api.anthropic.com/v1.
api_version: Anthropic API version header. Defaults to "2023-06-01".
headers: Optional additional headers.
name: Optional provider name override.
Value
An AnthropicProvider object.
Examples
if (interactive()) {
anthropic <- create_anthropic(api_key = "sk-ant-...")
model <- anthropic$language_model("claude-sonnet-4-20250514")
}
Artifact Tools for File Persistence
Description
Artifact Tools for File Persistence
Usage
create_artifact_dir(base_dir = NULL)
Create Alibaba Cloud Bailian Provider
Description
Factory function to create an Alibaba Cloud Bailian provider using the DashScope API.
Usage
create_bailian(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key: DashScope API key. Defaults to DASHSCOPE_API_KEY env var.
base_url: Base URL for API calls. Defaults to https://dashscope.aliyuncs.com/compatible-mode/v1.
headers: Optional additional headers.
Value
A BailianProvider object.
Supported Models
DashScope platform hosts Qwen series and other models:
- qwen-plus: Balanced performance model
- qwen-turbo: Fast & cost-effective model
- qwen-max: Most capable model
- qwq-32b: Reasoning model with chain-of-thought
- qwen-vl-plus: Vision-language model
- Other third-party models available on the platform
Examples
if (interactive()) {
bailian <- create_bailian()
# Standard chat model
model <- bailian$language_model("qwen-plus")
result <- generate_text(model, "Hello")
# Reasoning model (QwQ with chain-of-thought)
model <- bailian$language_model("qwq-32b")
result <- generate_text(model, "Solve: What is 15 * 23?")
print(result$reasoning) # Chain-of-thought reasoning
# Default model (qwen-plus)
model <- bailian$language_model()
}
Create a Channel Runtime
Description
Helper for constructing a ChannelRuntime.
Usage
create_channel_runtime(
session_store,
model = NULL,
agent = NULL,
skills = NULL,
tools = NULL,
hooks = NULL,
registry = NULL,
max_steps = 10,
session_policy = channel_default_session_policy()
)
Arguments
session_store: File-backed session store.
model: Optional default model id.
agent: Optional default agent.
skills: Optional skill paths or …
tools: Optional default tools.
hooks: Optional session hooks.
registry: Optional provider registry.
max_steps: Maximum tool execution steps.
session_policy: Session routing policy list.
Value
A ChannelRuntime.
Create a Chat Session
Description
Factory function to create a new ChatSession object.
Usage
create_chat_session(
model = NULL,
system_prompt = NULL,
tools = NULL,
hooks = NULL,
max_steps = 10,
metadata = NULL,
agent = NULL
)
Arguments
model: A LanguageModelV1 object or model string ID.
system_prompt: Optional system prompt.
tools: Optional list of Tool objects.
hooks: Optional HookHandler object.
max_steps: Maximum tool execution steps. Default 10.
metadata: Optional session metadata (list).
agent: Optional Agent object to initialize from.
Value
A ChatSession object.
Examples
if (interactive()) {
# Create a chat session
chat <- create_chat_session(
model = "openai:gpt-4o",
system_prompt = "You are a helpful R programming assistant."
)
# Create from an existing agent
agent <- create_agent("MathAgent", "Does math", system_prompt = "You are a math wizard.")
chat <- create_chat_session(model = "openai:gpt-4o", agent = agent)
# Send messages
response <- chat$send("How do I read a CSV file?")
print(response$text)
# Continue the conversation (history is maintained)
response <- chat$send("What about Excel files?")
# Check stats
print(chat$stats())
# Save session
chat$save("my_session.rds")
}
Create a CoderAgent
Description
Creates an agent specialized in writing and executing R code. The agent can execute R code in the session environment, making results available to other agents. Enhanced version with better safety controls and debugging support.
Usage
create_coder_agent(
name = "CoderAgent",
safe_mode = TRUE,
timeout_ms = 30000,
max_output_lines = 200
)
Arguments
name: Agent name. Default "CoderAgent".
safe_mode: If TRUE (default), restricts file system and network access.
timeout_ms: Code execution timeout in milliseconds. Default 30000.
max_output_lines: Maximum output lines to return. Default 200.
Value
An Agent object configured for R code execution.
Examples
if (interactive()) {
coder <- create_coder_agent()
session <- create_shared_session(model = "openai:gpt-4o")
result <- coder$run(
"Create a data frame with 3 rows and calculate the mean",
session = session,
model = "openai:gpt-4o"
)
}
Create Computer Tools
Description
Create atomic tools for computer abstraction layer. These tools provide a small set of primitives that agents can use to perform complex actions.
Usage
create_computer_tools(
computer = NULL,
working_dir = tempdir(),
sandbox_mode = "permissive"
)
Arguments
computer: Computer instance (default: create new).
working_dir: Working directory. Defaults to tempdir().
sandbox_mode: Sandbox mode: "strict", "permissive", or "none".
Value
List of Tool objects
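An illustrative sketch pairing the computer tools with a chat session (the model ID is an assumption; requires a configured provider):
if (interactive()) {
  tools <- create_computer_tools(sandbox_mode = "strict")
  chat <- create_chat_session(model = "openai:gpt-4o", tools = tools)
  response <- chat$send("List the files in the working directory.")
  print(response$text)
}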
Create Console Agent
Description
Create the default intelligent terminal agent for console_chat(). This agent can execute commands, manage files, and run R code through natural language interaction.
Usage
create_console_agent(
working_dir = tempdir(),
sandbox_mode = "permissive",
additional_tools = NULL,
language = "auto"
)
Arguments
working_dir: Working directory. Defaults to tempdir().
sandbox_mode: Sandbox mode: "strict", "permissive", or "none" (default: "permissive").
additional_tools: Optional list of additional Tool objects to include.
language: Language for responses: "auto", "en", or "zh" (default: "auto").
Value
An Agent object configured for console interaction.
Examples
if (interactive()) {
# Create default console agent
agent <- create_console_agent()
# Create with custom working directory
agent <- create_console_agent(working_dir = "~/projects/myapp")
# Use with console_chat
console_chat("openai:gpt-4o", agent = agent)
}
Create Console Tools
Description
Create a set of tools optimized for console/terminal interaction. Includes computer tools (bash, read_file, write_file, execute_r_code) plus additional console-specific tools.
Usage
create_console_tools(working_dir = tempdir(), sandbox_mode = "permissive")
Arguments
working_dir: Working directory. Defaults to tempdir().
sandbox_mode: Sandbox mode: "strict", "permissive", or "none" (default: "permissive").
Value
A list of Tool objects.
Examples
if (interactive()) {
tools <- create_console_tools()
# Use with an agent or session
session <- create_chat_session(model = "openai:gpt-4o", tools = tools)
}
Create a custom provider
Description
Creates a dynamic wrapper around existing model classes (OpenAI, Anthropic)
based on user-provided configuration. The returned provider can be registered
in the global ProviderRegistry.
Usage
create_custom_provider(
provider_name,
base_url,
api_key = NULL,
api_format = c("chat_completions", "responses", "anthropic_messages"),
use_max_completion_tokens = FALSE
)
Arguments
provider_name: The identifier name for this custom provider (e.g., "my_custom_openai_proxy").
base_url: The base URL for the API endpoint.
api_key: The API key for authentication. If NULL, defaults to checking environment variables.
api_format: The underlying API format to use. Supports "chat_completions" (OpenAI default), "responses" (OpenAI Responses API), and "anthropic_messages" (Anthropic Messages API).
use_max_completion_tokens: A boolean flag. If TRUE, injects the …
Value
A custom provider object with a language_model(model_id) method.
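An illustrative sketch (the provider name, URL, and environment variable are made up):
if (interactive()) {
  proxy <- create_custom_provider(
    provider_name = "my_proxy",
    base_url = "https://llm.example.com/v1",
    api_key = Sys.getenv("MY_PROXY_API_KEY"),
    api_format = "chat_completions"
  )
  model <- proxy$language_model("gpt-4o")
  result <- generate_text(model, "Hello!")
}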
Create a DataAgent
Description
Creates an agent specialized in data manipulation using dplyr and tidyr. The agent can filter, transform, summarize, and reshape data frames in the session environment.
Usage
create_data_agent(name = "DataAgent", safe_mode = TRUE)
Arguments
name: Agent name. Default "DataAgent".
safe_mode: If TRUE (default), restricts operations to data manipulation only.
Value
An Agent object configured for data manipulation.
Examples
if (interactive()) {
data_agent <- create_data_agent()
session <- create_shared_session(model = "openai:gpt-4o")
session$set_var("sales", data.frame(
product = c("A", "B", "C"),
revenue = c(100, 200, 150)
))
result <- data_agent$run(
"Calculate total revenue and find the top product",
session = session,
model = "openai:gpt-4o"
)
}
Create DeepSeek Provider
Description
Factory function to create a DeepSeek provider.
DeepSeek offers two main models:
- deepseek-chat: Standard chat model (DeepSeek-V3.2 non-thinking mode)
- deepseek-reasoner: Reasoning model with chain-of-thought (DeepSeek-V3.2 thinking mode)
Usage
create_deepseek(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key |
DeepSeek API key. Defaults to DEEPSEEK_API_KEY env var. |
base_url |
Base URL. Defaults to "https://api.deepseek.com". |
headers |
Optional additional headers. |
Value
A DeepSeekProvider object.
Examples
if (interactive()) {
# Basic usage with deepseek-chat
deepseek <- create_deepseek()
model <- deepseek$language_model("deepseek-chat")
result <- generate_text(model, "Hello!")
# Using deepseek-reasoner for chain-of-thought reasoning
model_reasoner <- deepseek$language_model("deepseek-reasoner")
result <- model_reasoner$generate(
messages = list(list(role = "user", content = "Solve: What is 15 * 23?")),
max_tokens = 500
)
print(result$text) # Final answer
print(result$reasoning) # Chain-of-thought reasoning
# Streaming with reasoning
stream_text(model_reasoner, "Explain quantum entanglement step by step")
}
Create DeepSeek Provider (Anthropic API Format)
Description
Factory function to create a DeepSeek provider using the Anthropic-compatible API. This allows you to use DeepSeek models with the Anthropic API format.
Usage
create_deepseek_anthropic(api_key = NULL, headers = NULL)
Arguments
api_key |
DeepSeek API key. Defaults to DEEPSEEK_API_KEY env var. |
headers |
Optional additional headers. |
Details
DeepSeek provides an Anthropic-compatible endpoint at https://api.deepseek.com/anthropic.
This convenience function wraps create_anthropic() with DeepSeek-specific defaults.
Note: When using an unsupported model name, the API backend will automatically
map it to deepseek-chat.
Value
An AnthropicProvider object configured for DeepSeek.
Examples
if (interactive()) {
# Use DeepSeek via Anthropic API format
deepseek <- create_deepseek_anthropic()
model <- deepseek$language_model("deepseek-chat")
result <- generate_text(model, "Hello!")
# This is useful for tools that expect Anthropic API format
# such as Claude Code integration
}
Create a Delegate Tool for an Agent
Description
Internal function to create a Tool that delegates to an Agent.
Usage
create_delegate_tool(agent, flow = NULL, session = NULL, model = NULL)
Arguments
agent |
The Agent to wrap. |
flow |
Optional Flow object for context tracking. |
session |
Optional ChatSession for shared state. |
model |
Optional model ID for execution. |
Value
A Tool object.
Create Embeddings
Description
Generate embeddings for text using an embedding model.
Usage
create_embeddings(model, value, registry = NULL)
Arguments
model |
Either an EmbeddingModelV1 object, or a string ID like "openai:text-embedding-3-small". |
value |
A character string or vector to embed. |
registry |
Optional ProviderRegistry to use. |
Value
A list with embeddings and usage information.
Examples
if (interactive()) {
model <- create_openai()$embedding_model("text-embedding-3-small")
result <- create_embeddings(model, "Hello, world!")
print(length(result$embeddings[[1]]))
}
Create an EnvAgent
Description
Creates an agent specialized in R environment and package management. The agent can check, install, and manage R packages with safety controls.
Usage
create_env_agent(
name = "EnvAgent",
allow_install = FALSE,
allowed_repos = "https://cloud.r-project.org"
)
Arguments
name |
Agent name. Default "EnvAgent". |
allow_install |
Allow package installation. Default FALSE. |
allowed_repos |
CRAN mirror URLs for installation. |
Value
An Agent object configured for environment management.
Examples
if (interactive()) {
env_agent <- create_env_agent(allow_install = TRUE)
session <- create_shared_session(model = "openai:gpt-4o")
result <- env_agent$run(
"Check if tidyverse is installed and load it",
session = session,
model = "openai:gpt-4o"
)
}
Create a Feishu Channel Adapter
Description
Helper for creating a FeishuChannelAdapter.
Usage
create_feishu_channel_adapter(
app_id,
app_secret,
base_url = "https://open.feishu.cn",
verification_token = NULL,
encrypt_key = NULL,
verify_signature = TRUE,
send_text_fn = NULL,
send_status_fn = NULL,
download_resource_fn = NULL
)
Arguments
app_id |
Feishu app id. |
app_secret |
Feishu app secret. |
base_url |
Feishu API base URL. |
verification_token |
Optional callback verification token. |
encrypt_key |
Optional event subscription encryption key. |
verify_signature |
Whether to validate Feishu callback signatures when applicable. |
send_text_fn |
Optional custom send function for tests or overrides. |
send_status_fn |
Optional custom status function for tests or overrides. |
download_resource_fn |
Optional custom downloader for inbound message resources. |
Value
A FeishuChannelAdapter.
Create a Feishu Channel Runtime
Description
Construct a ChannelRuntime and register a Feishu adapter on it.
Usage
create_feishu_channel_runtime(
session_store,
app_id,
app_secret,
base_url = "https://open.feishu.cn",
verification_token = NULL,
encrypt_key = NULL,
verify_signature = TRUE,
send_text_fn = NULL,
send_status_fn = NULL,
download_resource_fn = NULL,
model = NULL,
agent = NULL,
skills = "auto",
tools = NULL,
hooks = NULL,
registry = NULL,
max_steps = 10,
session_policy = channel_default_session_policy()
)
Arguments
session_store |
Channel session store. |
app_id |
Feishu app id. |
app_secret |
Feishu app secret. |
base_url |
Feishu API base URL. |
verification_token |
Optional callback verification token. |
encrypt_key |
Optional event subscription encryption key. |
verify_signature |
Whether to validate Feishu callback signatures when applicable. |
send_text_fn |
Optional custom send function for tests or overrides. |
send_status_fn |
Optional custom status function for tests or overrides. |
download_resource_fn |
Optional custom downloader for inbound message resources. |
model |
Optional default model id. |
agent |
Optional default agent. |
skills |
Optional skill paths, or "auto" (the default). |
tools |
Optional default tools. |
hooks |
Optional session hooks. |
registry |
Optional provider registry. |
max_steps |
Maximum tool execution steps. |
session_policy |
Optional session policy overrides. |
Value
A ChannelRuntime with the Feishu adapter registered.
Create a Feishu Event Processor
Description
Create a plain event processor for Feishu events that have already arrived through an authenticated ingress such as the official long-connection SDK.
Usage
create_feishu_event_processor(runtime)
Arguments
runtime |
A ChannelRuntime object. |
Value
A function (payload) that processes one Feishu event payload.
Create a Feishu Webhook Handler
Description
Create a transport-agnostic handler that turns a raw Feishu callback request into a JSON HTTP response payload.
Usage
create_feishu_webhook_handler(runtime)
Arguments
runtime |
A ChannelRuntime object. |
Value
A function (headers, body) that returns a response list.
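Examples
An illustrative sketch of serving the handler with httpuv (in Suggests). The runtime setup, port, and store path are placeholders, and serializing the returned response list with jsonlite is an assumption about its shape:
if (interactive()) {
  store <- create_file_channel_session_store(file.path(tempdir(), "feishu"))
  runtime <- create_feishu_channel_runtime(
    session_store = store,
    app_id = Sys.getenv("FEISHU_APP_ID"),
    app_secret = Sys.getenv("FEISHU_APP_SECRET"),
    model = "openai:gpt-4o"
  )
  handler <- create_feishu_webhook_handler(runtime)
  httpuv::runServer("0.0.0.0", 8080, list(
    call = function(req) {
      body <- rawToChar(req$rook.input$read())
      res <- handler(as.list(req$HEADERS), body)
      list(status = 200L,
           headers = list("Content-Type" = "application/json"),
           body = jsonlite::toJSON(res, auto_unbox = TRUE))
    }
  ))
}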
Create a FileAgent
Description
Creates an agent specialized in file system operations using fs and readr. The agent can read, write, and manage files with safety guardrails.
Usage
create_file_agent(
name = "FileAgent",
allowed_dirs = ".",
allowed_extensions = c("csv", "tsv", "txt", "json", "rds", "rda", "xlsx", "xls")
)
Arguments
name |
Agent name. Default "FileAgent". |
allowed_dirs |
Character vector of allowed directories. Default current dir. |
allowed_extensions |
Character vector of allowed file extensions. |
Value
An Agent object configured for file operations.
Examples
if (interactive()) {
file_agent <- create_file_agent(
allowed_dirs = c("./data", "./output"),
allowed_extensions = c("csv", "json", "txt", "rds")
)
session <- create_shared_session(model = "openai:gpt-4o")
result <- file_agent$run(
"Read the sales.csv file and store it as 'sales_data'",
session = session,
model = "openai:gpt-4o"
)
}
Create a File Channel Session Store
Description
Helper for creating a local file-backed channel session store.
Usage
create_file_channel_session_store(base_dir)
Arguments
base_dir |
Base directory for channel session state. |
Value
A FileChannelSessionStore.
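Examples
A minimal sketch using a temporary directory as the state root:
if (interactive()) {
  store <- create_file_channel_session_store(
    file.path(tempdir(), "channel-sessions")
  )
}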
Create a Flow
Description
Factory function to create a new Flow object for enhanced multi-agent orchestration.
Usage
create_flow(
session,
model = NULL,
registry = NULL,
max_depth = 5,
max_steps_per_agent = 10,
enable_guardrails = TRUE
)
Arguments
session |
A ChatSession object. |
model |
Optional default model ID to use (e.g., "openai:gpt-4o"). |
registry |
Optional AgentRegistry for agent lookup and delegation. |
max_depth |
Maximum delegation depth. Default 5. |
max_steps_per_agent |
Maximum ReAct steps per agent. Default 10. |
enable_guardrails |
Enable safety guardrails. Default TRUE. |
Value
A Flow object.
Examples
if (interactive()) {
# Create an enhanced multi-agent flow
session <- create_chat_session()
cleaner <- create_agent("Cleaner", "Cleans data")
plotter <- create_agent("Plotter", "Creates plots")
registry <- create_agent_registry(list(cleaner, plotter))
manager <- create_agent("Manager", "Coordinates data analysis")
flow <- create_flow(
session = session,
model = "openai:gpt-4o",
registry = registry,
enable_guardrails = TRUE
)
# Run the manager with auto-delegation (unified delegate_task tool)
result <- flow$run(manager, "Load data and create a visualization")
}
Create Gemini Provider
Description
Factory function to create a Gemini provider.
Usage
create_gemini(api_key = NULL, base_url = NULL, headers = NULL, name = NULL)
Arguments
api_key |
Gemini API key. Defaults to GEMINI_API_KEY env var. |
base_url |
Base URL for API calls. Defaults to https://generativelanguage.googleapis.com/v1beta/models. |
headers |
Optional additional headers. |
name |
Optional provider name override. |
Value
A GeminiProvider object.
Examples
if (interactive()) {
gemini <- create_gemini(api_key = "AIza...")
model <- gemini$language_model("gemini-1.5-pro")
}
Create Hooks
Description
Helper to create a HookHandler from a list of functions.
Usage
create_hooks(...)
Arguments
... |
Named arguments matching supported hook names. |
Value
A HookHandler object.
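Examples
A sketch; the hook name on_tool_start and its signature are assumptions — consult the HookHandler documentation for the supported hook names:
if (interactive()) {
  hooks <- create_hooks(
    on_tool_start = function(tool, args) {
      message("Running tool: ", tool$name)
    }
  )
  session <- create_shared_session(model = "openai:gpt-4o", hooks = hooks)
}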
Create Invalid Tool Handler
Description
Creates a special "invalid" tool that handles unrecognized or failed tool calls gracefully. This allows the system to continue operating and provide meaningful feedback to the LLM.
Usage
create_invalid_tool_handler()
Value
A Tool object for handling invalid tool calls.
Create an MCP Client
Description
Convenience function to create and connect to an MCP server.
Usage
create_mcp_client(command, args = character(), env = NULL)
Arguments
command |
The command to run the MCP server |
args |
Command arguments |
env |
Environment variables |
Value
An McpClient object
Examples
if (interactive()) {
# Connect to GitHub MCP server
client <- create_mcp_client(
"npx",
c("-y", "@modelcontextprotocol/server-github"),
env = c(GITHUB_PERSONAL_ACCESS_TOKEN = Sys.getenv("GITHUB_TOKEN"))
)
# List available tools
tools <- client$list_tools()
# Use tools with generate_text
result <- generate_text(
model = "openai:gpt-4o",
prompt = "List my GitHub repos",
tools = client$as_sdk_tools()
)
client$close()
}
Create an MCP Server
Description
Convenience function to create an MCP server.
Usage
create_mcp_server(name = "r-mcp-server", version = "0.1.0")
Arguments
name |
Server name |
version |
Server version |
Value
An McpServer object
Examples
if (interactive()) {
# Create a server with a custom tool
server <- create_mcp_server("my-r-server")
# Add a tool
server$add_tool(tool(
name = "calculate",
description = "Perform a calculation",
parameters = z_object(
expression = z_string(description = "R expression to evaluate")
),
execute = function(args) {
eval(parse(text = args$expression))
}
))
# Start listening (blocking)
server$listen()
}
Create MCP SSE Client
Description
Create MCP SSE Client
Usage
create_mcp_sse_client(url, headers = list())
Arguments
url |
The SSE endpoint URL |
headers |
named list of headers (e.g. for auth) |
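Examples
A minimal sketch; the endpoint URL and token variable are placeholders:
if (interactive()) {
  client <- create_mcp_sse_client(
    "https://mcp.example.com/sse",
    headers = list(Authorization = paste("Bearer", Sys.getenv("MCP_TOKEN")))
  )
}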
Create a Mission
Description
Factory function to create a new Mission object.
Usage
create_mission(
goal,
steps = NULL,
model = NULL,
executor = NULL,
stall_policy = NULL,
hooks = NULL,
session = NULL,
auto_plan = TRUE
)
Arguments
goal |
Natural language goal description. |
steps |
Optional list of MissionStep objects. If NULL and auto_plan=TRUE, the LLM will decompose the goal into steps automatically. |
model |
Default model ID (e.g., "anthropic:claude-opus-4-6"). |
executor |
Default executor (Agent, AgentTeam, Flow, or R function). Required when auto_plan = TRUE. |
stall_policy |
Named list for failure recovery. See default_stall_policy(). |
hooks |
MissionHookHandler for lifecycle events. |
session |
Optional SharedSession. Created automatically if NULL. |
auto_plan |
If TRUE (default), use LLM to create steps when none are provided. |
Value
A Mission object.
Examples
if (interactive()) {
# Auto-planned mission
agent <- create_agent("Analyst", "Analyzes data", model = "openai:gpt-4o")
mission <- create_mission(
goal = "Load the iris dataset and summarize each species",
executor = agent,
model = "openai:gpt-4o"
)
mission$run()
# Manual steps
mission2 <- create_mission(
goal = "Data pipeline",
steps = list(
create_step("step_1", "Load CSV data", executor = agent),
create_step("step_2", "Summarize statistics", executor = agent,
depends_on = "step_1")
),
model = "openai:gpt-4o"
)
mission2$run()
}
Create Mission Hooks
Description
Factory function to create a MissionHookHandler from named hook functions.
Usage
create_mission_hooks(
on_mission_start = NULL,
on_mission_planned = NULL,
on_step_start = NULL,
on_step_done = NULL,
on_step_failed = NULL,
on_mission_stall = NULL,
on_mission_done = NULL
)
Arguments
on_mission_start |
Optional function(mission) called when a Mission begins. |
on_mission_planned |
Optional function(mission) called after LLM planning. |
on_step_start |
Optional function(step, attempt) called before each step attempt. |
on_step_done |
Optional function(step, result) called on step success. |
on_step_failed |
Optional function(step, error, attempt) called on step failure. |
on_mission_stall |
Optional function(mission, step) called on stall detection. |
on_mission_done |
Optional function(mission) called on Mission completion. |
Value
A MissionHookHandler object.
Examples
hooks <- create_mission_hooks(
on_step_done = function(step, result) {
message("Completed: ", step$description)
},
on_mission_stall = function(mission, step) {
message("STALL detected at step: ", step$id)
}
)
Create a Mission Orchestrator
Description
Factory function to create a MissionOrchestrator.
Usage
create_mission_orchestrator(max_concurrent = 3, model = NULL, session = NULL)
Arguments
max_concurrent |
Maximum simultaneous missions. Default 3. |
model |
Optional default model for all missions. |
session |
Optional shared SharedSession. |
Value
A MissionOrchestrator object.
Examples
if (interactive()) {
orchestrator <- create_mission_orchestrator(max_concurrent = 5, model = "openai:gpt-4o")
orchestrator$submit(create_mission("Task A", executor = agent_a))
orchestrator$submit(create_mission("Task B", executor = agent_b))
orchestrator$submit(create_mission("Task C", executor = agent_c))
results <- orchestrator$run_all()
print(orchestrator$status())
}
Create NVIDIA Provider
Description
Factory function to create an NVIDIA provider.
Usage
create_nvidia(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key |
NVIDIA API key. Defaults to NVIDIA_API_KEY env var. |
base_url |
Base URL. Defaults to "https://integrate.api.nvidia.com/v1". |
headers |
Optional additional headers. |
Value
An NvidiaProvider object.
Examples
if (interactive()) {
nvidia <- create_nvidia()
model <- nvidia$language_model("z-ai/glm4.7")
# Enable thinking/reasoning
result <- generate_text(model, "Who are you?",
chat_template_kwargs = list(enable_thinking = TRUE)
)
print(result$reasoning)
}
Create OpenAI Provider
Description
Factory function to create an OpenAI provider.
Usage
create_openai(
api_key = NULL,
base_url = NULL,
organization = NULL,
project = NULL,
headers = NULL,
name = NULL,
disable_stream_options = FALSE
)
Arguments
api_key |
OpenAI API key. Defaults to OPENAI_API_KEY env var. |
base_url |
Base URL for API calls. Defaults to https://api.openai.com/v1. |
organization |
Optional OpenAI organization ID. |
project |
Optional OpenAI project ID. |
headers |
Optional additional headers. |
name |
Optional provider name override (for compatible APIs). |
disable_stream_options |
Disable stream_options parameter (for providers like Volcengine that don't support it). |
Value
An OpenAIProvider object.
Token Limit Parameters
The SDK provides a unified max_tokens parameter that automatically maps to the
correct API field based on the model and API type:
-
Chat API (standard models): max_tokens -> max_tokens
-
Chat API (o1/o3 models): max_tokens -> max_completion_tokens
-
Responses API: max_tokens -> max_output_tokens (total: reasoning + answer)
For advanced users who need fine-grained control:
-
max_completion_tokens: Explicitly set completion tokens (Chat API, o1/o3)
-
max_output_tokens: Explicitly set total output limit (Responses API)
-
max_answer_tokens: Limit answer only, excluding reasoning (Responses API, Volcengine-specific)
Examples
if (interactive()) {
# Basic usage with Chat Completions API
openai <- create_openai(api_key = "sk-...")
model <- openai$language_model("gpt-4o")
result <- generate_text(model, "Hello!")
# Using Responses API for reasoning models
openai <- create_openai()
model <- openai$responses_model("o1")
result <- generate_text(model, "Solve this math problem...")
print(result$reasoning) # Access chain-of-thought
# Smart model selection (auto-detects best API)
model <- openai$smart_model("o3-mini") # Uses Responses API
model <- openai$smart_model("gpt-4o") # Uses Chat Completions API
# Token limits - unified interface
# For standard models: limits generated content
result <- model$generate(messages = msgs, max_tokens = 1000)
# For o1/o3 models: automatically maps to max_completion_tokens
model_o1 <- openai$language_model("o1")
result <- model_o1$generate(messages = msgs, max_tokens = 2000)
# For Responses API: automatically maps to max_output_tokens (total limit)
model_resp <- openai$responses_model("o1")
result <- model_resp$generate(messages = msgs, max_tokens = 2000)
# Advanced: explicitly control answer-only limit (Volcengine Responses API)
result <- model_resp$generate(messages = msgs, max_answer_tokens = 500)
# Multi-turn conversation with Responses API
model <- openai$responses_model("o1")
result1 <- generate_text(model, "What is 2+2?")
result2 <- generate_text(model, "Now multiply that by 3") # Remembers context
model$reset() # Start fresh conversation
}
Create OpenRouter Provider
Description
Factory function to create an OpenRouter provider.
Usage
create_openrouter(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key |
OpenRouter API key. Defaults to OPENROUTER_API_KEY env var. |
base_url |
Base URL for API calls. Defaults to https://openrouter.ai/api/v1. |
headers |
Optional additional headers. |
Value
An OpenRouterProvider object.
Supported Models
OpenRouter provides access to hundreds of models from many providers:
-
OpenAI: "openai/gpt-4o", "openai/o1"
-
Anthropic: "anthropic/claude-sonnet-4-20250514"
-
Google: "google/gemini-2.5-pro"
-
DeepSeek: "deepseek/deepseek-r1", "deepseek/deepseek-chat-v3-0324"
-
Meta: "meta-llama/llama-4-maverick"
And many more at https://openrouter.ai/models
Examples
if (interactive()) {
openrouter <- create_openrouter()
# Access any model via a unified API
model <- openrouter$language_model("openai/gpt-4o")
result <- generate_text(model, "Hello!")
# Reasoning model
model <- openrouter$language_model("deepseek/deepseek-r1")
result <- generate_text(model, "Solve: 15 * 23")
print(result$reasoning)
}
Create Orchestration Flow (Compatibility Wrapper)
Description
Creates an orchestration flow using Flow. Provided for backward compatibility.
Usage
create_orchestration(
session,
model,
registry = NULL,
max_depth = 5,
max_steps_per_agent = 10,
...
)
Arguments
session |
A session object. |
model |
The default model ID. |
registry |
Optional AgentRegistry. |
max_depth |
Maximum delegation depth. Default 5. |
max_steps_per_agent |
Maximum ReAct steps per agent. Default 10. |
... |
Additional arguments. |
Value
A Flow object.
Create Permission Hook
Description
Create a hook that enforces a permission mode for tool execution.
Usage
create_permission_hook(
mode = c("implicit", "explicit", "escalate"),
allowlist = c("search_web", "read_resource", "read_file")
)
Arguments
mode |
Permission mode: one of "implicit", "explicit", or "escalate". |
allowlist |
List of tool names that are auto-approved in "escalate" mode. Default includes read-only tools like "search_web", "read_file". |
Value
A HookHandler object.
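Examples
A minimal sketch attaching the hook to a session:
if (interactive()) {
  # Auto-approve the allowlisted read-only tools; escalate everything else
  hooks <- create_permission_hook(
    mode = "escalate",
    allowlist = c("search_web", "read_file")
  )
  session <- create_shared_session(model = "openai:gpt-4o", hooks = hooks)
}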
Create a PlannerAgent
Description
Creates an agent specialized in breaking down complex tasks into steps using chain-of-thought reasoning. The planner helps decompose problems and create action plans.
Usage
create_planner_agent(name = "PlannerAgent")
Arguments
name |
Agent name. Default "PlannerAgent". |
Value
An Agent object configured for planning and reasoning.
Examples
if (interactive()) {
planner <- create_planner_agent()
result <- planner$run(
"How should I approach building a machine learning model for customer churn?",
model = "openai:gpt-4o"
)
}
Create R Code Interpreter Tool
Description
Creates a meta-tool (execute_r_code) backed by a SandboxManager.
This single tool replaces all individual tools for the LLM, enabling
batch execution, data filtering, and local computation.
Usage
create_r_code_tool(sandbox)
Arguments
sandbox |
A SandboxManager object. |
Value
A Tool object named execute_r_code.
Create Sandbox System Prompt
Description
Generates a system prompt section that instructs the LLM how to use the R code sandbox effectively.
Usage
create_sandbox_system_prompt(sandbox)
Arguments
sandbox |
A SandboxManager object. |
Value
A character string to append to the system prompt.
Create Schema from Function
Description
Inspects an R function and generates a z_object schema based on its arguments and default values.
Usage
create_schema_from_func(
func,
include_args = NULL,
exclude_args = NULL,
params = NULL,
func_name = NULL,
type_mode = c("infer", "any")
)
Arguments
func |
The R function to inspect. |
include_args |
Optional character vector of argument names to include. If provided, only these arguments will be included in the schema. |
exclude_args |
Optional character vector of argument names to exclude. |
params |
Optional named list of parameter values to use as defaults. This allows overriding the function's default values (e.g., with values extracted from an existing plot layer). |
func_name |
Optional string of the function name to look up documentation. If not provided, attempts to infer from 'func' symbol. |
type_mode |
How to assign parameter types. "infer" (default) uses default values to infer types. "any" uses z_any() for all parameters. |
Value
A z_object schema.
Examples
if (interactive()) {
my_func <- function(a = 1, b = "text", c = TRUE) {}
schema <- create_schema_from_func(my_func)
print(schema)
# Override defaults
schema_override <- create_schema_from_func(my_func, params = list(a = 99))
}
Create Session (Compatibility Wrapper)
Description
Creates a session using either the new SharedSession or legacy ChatSession based on feature flags. This provides a migration path for existing code.
Usage
create_session(
model = NULL,
system_prompt = NULL,
tools = NULL,
hooks = NULL,
max_steps = 10,
...
)
Arguments
model |
A LanguageModelV1 object or model string ID. |
system_prompt |
Optional system prompt. |
tools |
Optional list of Tool objects. |
hooks |
Optional HookHandler object. |
max_steps |
Maximum tool execution steps. Default 10. |
... |
Additional arguments passed to session constructor. |
Value
A SharedSession or ChatSession object.
Examples
if (interactive()) {
# Automatically uses SharedSession if feature enabled
session <- create_session(model = "openai:gpt-4o")
# Force legacy session
sdk_set_feature("use_shared_session", FALSE)
session <- create_session(model = "openai:gpt-4o")
}
Create a Shared Session
Description
Factory function to create a new SharedSession object.
Usage
create_shared_session(
model = NULL,
system_prompt = NULL,
tools = NULL,
hooks = NULL,
max_steps = 10,
sandbox_mode = "strict",
trace_enabled = TRUE
)
Arguments
model |
A LanguageModelV1 object or model string ID. |
system_prompt |
Optional system prompt. |
tools |
Optional list of Tool objects. |
hooks |
Optional HookHandler object. |
max_steps |
Maximum tool execution steps. Default 10. |
sandbox_mode |
Sandbox mode: "strict", "permissive", or "none". Default "strict". |
trace_enabled |
Enable execution tracing. Default TRUE. |
Value
A SharedSession object.
Examples
if (interactive()) {
# Create a shared session for multi-agent use
session <- create_shared_session(
model = "openai:gpt-4o",
sandbox_mode = "strict",
trace_enabled = TRUE
)
# Execute code safely
result <- session$execute_code("x <- 1:10; mean(x)")
# Check trace
print(session$trace_summary())
}
Create Skill Scaffold
Description
Create a new skill project with the standard structure.
Usage
create_skill(name, path = tempdir(), author = NULL, description = NULL)
Arguments
name |
Skill name. |
path |
Directory to create the skill in. |
author |
Author name. |
description |
Brief description. |
Value
Path to the created skill directory.
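Examples
A minimal sketch; the skill name, author, and description are placeholders:
if (interactive()) {
  skill_path <- create_skill(
    name = "sales_reporting",
    path = tempdir(),
    author = "Jane Doe",
    description = "Summarize weekly sales data"
  )
  # Register the new skill so agents can discover it
  registry <- create_skill_registry(tempdir())
  registry$list_skills()
}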
Create a SkillArchitect Agent
Description
Creates an advanced agent specialized in creating, testing, and refining new skills. It follows a rigorous "Ingest -> Design -> Implement -> Verify" workflow.
Usage
create_skill_architect_agent(
name = "SkillArchitect",
registry = NULL,
model = NULL
)
Arguments
name |
Agent name. Default "SkillArchitect". |
registry |
Optional SkillRegistry object (defaults to creating one from inst/skills). |
model |
The model object to use for verification (spawning a tester agent). |
Value
An Agent object configured for skill architecture.
Create Skill Forge Tools
Description
Wraps the analysis and testing functions into Tools for the Skill Architect.
Usage
create_skill_forge_tools(registry, model)
Arguments
registry |
SkillRegistry for looking up skills during testing. |
model |
Model to use for the test runner. |
Create a Skill Registry
Description
Convenience function to create and populate a SkillRegistry.
Usage
create_skill_registry(path, recursive = FALSE)
Arguments
path |
Path to scan for skills. |
recursive |
Whether to scan subdirectories. Default FALSE. |
Value
A populated SkillRegistry object.
Examples
if (interactive()) {
# Scan a skills directory
registry <- create_skill_registry(".aimd/skills")
# List available skills
registry$list_skills()
# Get a specific skill
skill <- registry$get_skill("seurat_analysis")
}
Create Skill Tools
Description
Create the built-in tools for interacting with skills.
Usage
create_skill_tools(registry)
Arguments
registry |
A SkillRegistry object. |
Value
A list of Tool objects.
Create Standard Agent Registry
Description
Creates an AgentRegistry pre-populated with standard library agents based on feature flags.
Usage
create_standard_registry(
include_data = TRUE,
include_file = TRUE,
include_env = TRUE,
include_coder = TRUE,
include_visualizer = TRUE,
include_planner = TRUE,
file_allowed_dirs = ".",
env_allow_install = FALSE
)
Arguments
include_data |
Include DataAgent. Default TRUE. |
include_file |
Include FileAgent. Default TRUE. |
include_env |
Include EnvAgent. Default TRUE. |
include_coder |
Include CoderAgent. Default TRUE. |
include_visualizer |
Include VisualizerAgent. Default TRUE. |
include_planner |
Include PlannerAgent. Default TRUE. |
file_allowed_dirs |
Allowed directories for FileAgent. |
env_allow_install |
Allow package installation for EnvAgent. |
Value
An AgentRegistry object.
Examples
if (interactive()) {
# Create registry with all standard agents
registry <- create_standard_registry()
# Create registry with only data and visualization agents
registry <- create_standard_registry(
include_file = FALSE,
include_env = FALSE,
include_planner = FALSE
)
}
Create a MissionStep
Description
Factory function to create a MissionStep.
Usage
create_step(
id,
description,
executor = NULL,
max_retries = 2,
timeout_secs = NULL,
parallel = FALSE,
depends_on = NULL
)
Arguments
id |
Unique step ID (e.g., "step_1"). |
description |
Natural language task description. |
executor |
Agent, AgentTeam, Flow, or R function to execute the step. |
max_retries |
Maximum retry attempts before stall escalation. Default 2. |
timeout_secs |
Optional per-step timeout in seconds. Default NULL. |
parallel |
If TRUE, this step may run in parallel with other parallel steps. |
depends_on |
Character vector of prerequisite step IDs. |
Value
A MissionStep object.
Examples
if (interactive()) {
step <- create_step(
id = "load_data",
description = "Load the CSV file and return a summary",
executor = my_agent,
max_retries = 3
)
}
Create Stepfun Provider
Description
Factory function to create a Stepfun provider.
Usage
create_stepfun(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key |
Stepfun API key. Defaults to STEPFUN_API_KEY env var. |
base_url |
Base URL for API calls. Defaults to https://api.stepfun.com/v1. |
headers |
Optional additional headers. |
Value
A StepfunProvider object.
Supported Models
-
step-1-32k: Model: step-1-32k (Tools) | ctx: 32k
-
step-1v-32k: Vision-enabled model with 32k context (Vision, Tools) | ctx: 32k
-
step-1-8k: Model: step-1-8k
-
step-1-256k: Model: step-1-256k
-
step-1v-8k: Model: step-1v-8k (Vision)
-
step-2-16k: Model: step-2-16k
-
step-1x-medium: Model: step-1x-medium
-
step-tts-mini: Model: step-tts-mini (Audio)
-
step-2-16k-202411: Model: step-2-16k-202411
-
step-asr: Model: step-asr (Audio)
-
step-1o-vision-32k: Model: step-1o-vision-32k (Vision)
-
step-2-mini: Model: step-2-mini
-
step-2-16k-exp: Model: step-2-16k-exp
-
step-1o-turbo-vision: Model: step-1o-turbo-vision (Vision)
-
step-1o-audio: Model: step-1o-audio (Audio)
-
... and 16 more models. Use list_models("stepfun") to see all.
Examples
if (interactive()) {
stepfun <- create_stepfun()
model <- stepfun$language_model("step-1-8k")
result <- generate_text(model, "Explain quantum computing in one sentence.")
}
Create a Stream Renderer
Description
Creates an environment to manage the state of a streaming response, including thinking indicators and tool execution status.
Usage
create_stream_renderer()
Value
A list of functions for rendering.
Create an Agent Team
Description
Helper to create an AgentTeam.
Usage
create_team(name = "AgentTeam", model = NULL, session = NULL)
Arguments
name |
Team name. |
model |
Optional default model for the team. |
session |
Optional shared ChatSession for the team. |
Value
An AgentTeam object.
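Examples
A minimal sketch; how agents are then attached and run depends on the AgentTeam methods, which are not shown here:
if (interactive()) {
  session <- create_session(model = "openai:gpt-4o")
  team <- create_team("AnalysisTeam", model = "openai:gpt-4o",
                      session = session)
}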
Create Telemetry
Description
Create Telemetry
Usage
create_telemetry(trace_id = NULL)
Arguments
trace_id |
Optional trace ID. |
Value
A Telemetry object.
Create a VisualizerAgent
Description
Creates an agent specialized in creating data visualizations using ggplot2. Enhanced version with plot type recommendations, theme support, and automatic data inspection.
Usage
create_visualizer_agent(
name = "VisualizerAgent",
output_dir = NULL,
default_theme = "theme_minimal",
default_width = 8,
default_height = 6
)
Arguments
name |
Agent name. Default "VisualizerAgent". |
output_dir |
Optional directory to save plots. If NULL, plots are stored in the session environment. |
default_theme |
Default ggplot2 theme. Default "theme_minimal". |
default_width |
Default plot width in inches. Default 8. |
default_height |
Default plot height in inches. Default 6. |
Value
An Agent object configured for data visualization.
Examples
if (interactive()) {
visualizer <- create_visualizer_agent()
session <- create_shared_session(model = "openai:gpt-4o")
session$set_var("df", data.frame(x = 1:10, y = (1:10)^2))
result <- visualizer$run(
"Create a scatter plot of df showing the relationship between x and y",
session = session,
model = "openai:gpt-4o"
)
}
Create Volcengine/Ark Provider
Description
Factory function to create a Volcengine provider using the Ark API.
Usage
create_volcengine(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key |
Volcengine API key. Defaults to ARK_API_KEY env var. |
base_url |
Base URL for API calls. Defaults to https://ark.cn-beijing.volces.com/api/v3. |
headers |
Optional additional headers. |
Value
A VolcengineProvider object.
Supported Models
- doubao-lite-128k-240428
- doubao-pro-128k-240515
- doubao-lite-4k-240328
- doubao-lite-32k-240428
- doubao-pro-4k-240515
- doubao-lite-4k-character-240515
- doubao-embedding-text-240515
- mistral-7b-instruct-v0.2 (Vision)
- doubao-pro-4k-character-240515
- doubao-pro-4k-functioncall-240515
- doubao-lite-4k-pretrain-character-240516
- doubao-pro-32k-character-240528
- doubao-pro-4k-browsing-240524
- doubao-pro-32k-functioncall-240515
- doubao-pro-4k-functioncall-240615
- ... and 123 more models. Use list_models("volcengine") to see all.
API Formats
Volcengine supports both Chat Completions API and Responses API:
- language_model(): Uses Chat Completions API (standard)
- responses_model(): Uses Responses API (for reasoning models)
- smart_model(): Auto-selects based on model ID
Token Limit Parameters for Volcengine Responses API
Volcengine's Responses API has two mutually exclusive token limit parameters:
- max_output_tokens: Total limit including reasoning + answer (default mapping)
- max_tokens (API level): Answer-only limit, excluding reasoning
The SDK's unified max_tokens parameter maps to max_output_tokens by default, which is the safe choice to prevent runaway reasoning costs.
For advanced users who want answer-only limits:
- Use the max_answer_tokens parameter to explicitly set an answer-only limit
- Use the max_output_tokens parameter to explicitly set a total limit
Examples
if (interactive()) {
volcengine <- create_volcengine()
# Chat API (standard models)
model <- volcengine$language_model("doubao-1-5-pro-256k-250115")
result <- generate_text(model, "Hello")
# Responses API (reasoning models like DeepSeek)
model <- volcengine$responses_model("deepseek-r1-250120")
# Default: max_tokens limits total output (reasoning + answer)
result <- model$generate(messages = msgs, max_tokens = 2000)
# Advanced: limit only the answer part (reasoning can be longer)
result <- model$generate(messages = msgs, max_answer_tokens = 500)
# Smart model selection (auto-detects best API)
model <- volcengine$smart_model("deepseek-r1-250120")
}
Create xAI Provider
Description
Factory function to create an xAI provider.
xAI provides Grok models:
-
Grok: "grok-3", "grok-4", "grok-beta", "grok-2-1212", etc.
Usage
create_xai(api_key = NULL, base_url = NULL, headers = NULL)
Arguments
api_key |
xAI API key. Defaults to XAI_API_KEY env var. |
base_url |
Base URL for API calls. Defaults to https://api.x.ai/v1. |
headers |
Optional additional headers. |
Value
An XAIProvider object.
Examples
if (interactive()) {
xai <- create_xai()
model <- xai$language_model("grok-beta")
result <- generate_text(model, "Explain quantum computing in one sentence.")
}
Create Schema for ggtree Function
Description
Specialized wrapper around create_schema_from_func for ggtree/ggplot2 functions. Handles common mapping and data arguments specifically.
Usage
create_z_ggtree(func, layer = NULL)
Arguments
func |
The R function (e.g., geom_tiplab). |
layer |
Optional ggplot2 Layer object. If provided, its parameters (aes_params and geom_params) will be used to override the schema defaults. This is useful for creating "Edit Mode" forms for existing plot layers. |
Value
A z_object schema.
Debug Log Helper
Description
Log debug information if debug mode is enabled.
Usage
debug_log(context, data)
Arguments
context |
A string describing the context. |
data |
A list of data to log. |
Deprecation Warning Helper
Description
Issue a deprecation warning with migration guidance.
Usage
deprecation_warning(old_fn, new_fn, version = "2.0.0")
Arguments
old_fn |
Name of the deprecated function/pattern. |
new_fn |
Name of the replacement. |
version |
Version when it will be removed. |
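Examples
A short sketch of how this helper might be called; the function names below are hypothetical placeholders, not real package APIs:

```r
if (interactive()) {
  # Emits a warning pointing users at the replacement
  deprecation_warning("old_chat()", "create_chat_session()", version = "2.0.0")
}
```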
Download Model from Hugging Face
Description
Download a quantized model from Hugging Face Hub.
Usage
download_model(repo_id, filename, dest_dir = NULL, quiet = FALSE)
Arguments
repo_id |
The Hugging Face repository ID (e.g., "TheBloke/Llama-2-7B-GGUF"). |
filename |
The specific file to download. |
dest_dir |
Destination directory. Defaults to "~/.cache/aisdk/models". |
quiet |
Suppress download progress. |
Value
Path to the downloaded file.
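Examples
A usage sketch; the repository ID matches the one cited in the arguments above, while the filename is an illustrative GGUF file that may not exist:

```r
if (interactive()) {
  # Files land in ~/.cache/aisdk/models by default
  path <- download_model(
    repo_id = "TheBloke/Llama-2-7B-GGUF",
    filename = "llama-2-7b.Q4_K_M.gguf"
  )
}
```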
Check if API tests should be enabled
Description
Check if API tests should be enabled
Usage
enable_api_tests()
Value
TRUE if API tests are enabled and keys are available, FALSE otherwise
AI Engine Function
Description
The core engine function for {ai} knitr chunks.
Usage
eng_ai(options)
Arguments
options |
A list of chunk options provided by knitr. |
Value
A character string suitable for knitr output.
Execute Tool Calls
Description
Execute a list of tool calls returned by an LLM. This function safely executes each tool, handling errors gracefully and returning a standardized result format.
Implements a multi-layer defense strategy:
1. Tool name repair (case fixing, snake_case conversion, fuzzy matching)
2. Invalid tool routing for graceful degradation
3. Argument parsing with JSON repair
4. Error capture and structured error responses
Usage
execute_tool_calls(
tool_calls,
tools,
hooks = NULL,
envir = NULL,
repair_enabled = TRUE
)
Arguments
tool_calls |
A list of tool call objects, each with id, name, and arguments. |
tools |
A list of Tool objects to search for matching tools. |
hooks |
Optional HookHandler object. |
envir |
Optional environment in which to execute tools. When provided, tool functions can access and modify variables in this environment, enabling cross-agent data sharing through a shared session environment. |
repair_enabled |
Whether to attempt tool call repair (default TRUE). |
Value
A list of execution results, each containing:
- id: The tool call ID
- name: The tool name
- result: The execution result (or error message)
- is_error: TRUE if an error occurred during execution
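Examples
A sketch of the input and output shapes described above. The tool constructor name (create_tool) is an assumption and may differ in the package; the tool-call structure follows the id/name/arguments fields documented here:

```r
if (interactive()) {
  # One tool and one tool call (illustrative)
  tools <- list(create_tool(
    name = "add",
    description = "Add two numbers",
    fn = function(a, b) a + b
  ))
  calls <- list(list(id = "call_1", name = "add",
                     arguments = '{"a": 1, "b": 2}'))
  results <- execute_tool_calls(calls, tools)
  results[[1]]$result
}
```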
Expect LLM Pass
Description
Custom testthat expectation that evaluates whether an LLM response meets specified criteria. Uses an LLM judge to assess the response.
Usage
expect_llm_pass(response, criteria, model = NULL, threshold = 0.7, info = NULL)
Arguments
response |
The LLM response to evaluate (text or GenerateResult object). |
criteria |
Character string describing what constitutes a passing response. |
model |
Model to use for judging (default: same as response or gpt-4o). |
threshold |
Minimum score (0-1) to pass (default: 0.7). |
info |
Additional information to include in failure message. |
Value
Invisibly returns the evaluation result.
Examples
if (interactive()) {
test_that("agent answers math questions correctly", {
result <- generate_text(
model = "openai:gpt-4o",
prompt = "What is 2 + 2?"
)
expect_llm_pass(result, "The response should contain the number 4")
})
}
Expect No Hallucination
Description
Test that an LLM response does not contain hallucinated information when compared against ground truth.
Usage
expect_no_hallucination(
response,
ground_truth,
model = NULL,
tolerance = 0.1,
info = NULL
)
Arguments
response |
The LLM response to check. |
ground_truth |
The factual information to check against. |
model |
Model to use for checking. |
tolerance |
Allowed deviation (0 = strict, 1 = lenient). |
info |
Additional information for failure message. |
Expect Tool Selection
Description
Test that an agent selects the correct tool(s) for a given task.
Usage
expect_tool_selection(result, expected_tools, exact = FALSE, info = NULL)
Arguments
result |
A GenerateResult object from generate_text with tools. |
expected_tools |
Character vector of expected tool names. |
exact |
If TRUE, require exactly these tools (no more, no less). |
info |
Additional information for failure message. |
Extract Code from LLM Response
Description
Extract Code from LLM Response
Usage
extract_code_from_response(response)
Extract Geom Parameters from ggproto Object
Description
Dynamically extracts parameter information from a ggplot2 geom. This handles the "scattered definitions" problem by reading from source.
Usage
extract_geom_params(geom_name)
Arguments
geom_name |
Name of the geom (e.g., "point", "line"). |
Value
List with default_aes, required_aes, optional_aes, extra_params.
Extract Guides
Description
Extract Guides
Usage
extract_guides(guides_obj)
Extract R Code
Description
Extracts R code from markdown code blocks in the LLM response.
Usage
extract_r_code(text)
Arguments
text |
The LLM response text. |
Value
A character string containing all extracted R code.
Extract Theme for Frontend
Description
Extracts theme with structured units and pixel values.
Usage
extract_theme_for_frontend(theme, plot_dims)
Extract Theme Values
Description
Extracts theme values with proper element type handling.
Usage
extract_theme_values(theme)
Arguments
theme |
A ggplot2 theme object. |
Value
A list of theme values.
Fetch available models from API provider
Description
Fetch available models from API provider
Usage
fetch_api_models(provider, api_key = NULL, base_url = NULL)
Arguments
provider |
Provider name ("openai", "nvidia", "anthropic", etc.) |
api_key |
API key |
base_url |
Base URL |
Value
A data frame with 'id' column and capability flag columns
Find Closest Match
Description
Find the closest matching string using Levenshtein distance.
Usage
find_closest_match(target, candidates, max_distance = 3)
Arguments
target |
The target string to match. |
candidates |
A vector of candidate strings. |
max_distance |
Maximum allowed edit distance (default 3). |
Value
The closest match, or NULL if none within max_distance.
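The matching behavior described here can be reproduced with base R's adist(), which computes the same Levenshtein edit distance; this sketch mirrors the documented semantics without calling the package internals:

```r
candidates <- c("search_web", "fetch_page", "read_file")
target <- "serch_web"  # a misspelled tool name

d <- drop(adist(target, candidates))  # Levenshtein edit distances
# Keep the closest candidate only if it is within the allowed distance
best <- if (min(d) <= 3) candidates[which.min(d)] else NULL
best  # "search_web" (distance 1: one inserted letter)
```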
Find Tool by Name
Description
Find a tool in a list of tools by its name.
Usage
find_tool(tools, name)
Arguments
tools |
A list of Tool objects. |
name |
The tool name to find. |
Value
The Tool object if found, NULL otherwise.
Generate Fix
Description
Generate Fix
Usage
generate_fix(code, hypothesis, model)
Generate Hypothesis
Description
Generate Hypothesis
Usage
generate_hypothesis(code, error, model)
Generate Document Strings for Models
Description
Helper for roxygen2 @eval tag to dynamically insert supported models into documentation.
Shows enriched information (context window, capabilities) when available.
Usage
generate_model_docs(provider, max_items = 15)
Arguments
provider |
The name of the provider. |
max_items |
Maximum number of models to display. Defaults to 15. |
Value
A string containing roxygen-formatted documentation of the models.
Generate Stable ID
Description
Generates a stable unique identifier for a plot element.
Usage
generate_stable_id(type, ..., prefix = NULL)
Arguments
type |
Type of element (e.g., "layer", "guide"). |
... |
Components to include in the ID hash. |
prefix |
Optional prefix for the ID. |
Value
A stable ID string.
Generate Text
Description
Generate text using a language model. This is the primary high-level function for non-streaming text generation.
When tools are provided and max_steps > 1, the function will automatically execute tool calls and feed results back to the LLM in a ReAct-style loop until the LLM produces a final response or max_steps is reached.
Usage
generate_text(
model = NULL,
prompt,
system = NULL,
temperature = 0.7,
max_tokens = NULL,
tools = NULL,
max_steps = 1,
sandbox = FALSE,
skills = NULL,
session = NULL,
hooks = NULL,
registry = NULL,
...
)
Arguments
model |
Either a LanguageModelV1 object, or a string ID like "openai:gpt-4o". |
prompt |
A character string prompt, or a list of messages. |
system |
Optional system prompt. |
temperature |
Sampling temperature (0-2). Default 0.7. |
max_tokens |
Maximum tokens to generate. |
tools |
Optional list of Tool objects for function calling. |
max_steps |
Maximum number of generation steps (tool execution loops). Default 1 (single generation, no automatic tool execution). Set to higher values (e.g., 5) to enable automatic tool execution. |
sandbox |
Logical. If TRUE, enables R-native programmatic sandbox mode.
All tools are bound into an isolated R environment and replaced by a single
|
skills |
Optional path to skills directory, or a SkillRegistry object. When provided, skill tools are auto-injected and skill summaries are added to the system prompt. |
session |
Optional ChatSession object. When provided, tool executions run in the session's environment, enabling cross-agent data sharing. |
hooks |
Optional HookHandler object for intercepting events. |
registry |
Optional ProviderRegistry to use (defaults to global registry). |
... |
Additional arguments passed to the model. |
Value
A GenerateResult object with text and optionally tool_calls. When max_steps > 1 and tools are used, the result includes:
- steps: Number of steps taken
- all_tool_calls: List of all tool calls made across all steps
Examples
if (interactive()) {
# Using hooks
my_hooks <- create_hooks(
on_generation_start = function(model, prompt, tools) message("Starting..."),
on_tool_start = function(tool, args) message("Calling tool ", tool$name)
)
result <- generate_text(model, "...", hooks = my_hooks)
}
Generate Verification Hypothesis
Description
Generate Verification Hypothesis
Usage
generate_verification_hypothesis(code, result, test_fn, model)
Get AI Engine Session
Description
Gets the current AI engine session for inspection or manual interaction.
Usage
get_ai_session(session_name = "default")
Arguments
session_name |
Name of the session. Default is "default". |
Value
A ChatSession object or NULL if not initialized.
Get Anthropic base URL from environment
Description
Get Anthropic base URL from environment
Usage
get_anthropic_base_url()
Value
Base URL for Anthropic API (default: official)
Get Anthropic model name from environment
Description
Get Anthropic model name from environment
Usage
get_anthropic_model()
Value
Model name (default: claude-sonnet-4-20250514)
Get Anthropic model ID from environment
Description
Get Anthropic model ID from environment
Usage
get_anthropic_model_id()
Value
Model ID (default: anthropic:claude-sonnet-4-20250514)
Get Default Registry
Description
Returns the global default provider registry, creating it if necessary.
Usage
get_default_registry()
Value
A ProviderRegistry object.
Get Default System Prompt
Description
Returns the default system prompt for the AI engine.
Usage
get_default_system_prompt()
Value
A character string.
Get or Create Global Memory
Description
Get the global project memory instance, creating it if necessary.
Usage
get_memory()
Value
A ProjectMemory object.
Get Default Model
Description
Returns the current package-wide default language model. This is used by
high-level helpers when model = NULL. If no explicit default has been set,
get_model() falls back to getOption("aisdk.default_model") and then to
"openai:gpt-4o".
Usage
get_model(default = "openai:gpt-4o")
Arguments
default |
Fallback model identifier when no explicit default has been set. |
Value
A model identifier string or a LanguageModelV1 object.
Examples
get_model()
Get Full Model Info
Description
Returns the full metadata for a single model as a list. Useful for framework internals to auto-configure parameters (e.g., max_tokens, context_window).
Usage
get_model_info(provider, model_id)
Arguments
provider |
The name of the provider. |
model_id |
The model ID string. |
Value
A list containing all available metadata for the model, or NULL if not found.
Get OpenAI Base URL from environment
Description
Get OpenAI Base URL from environment
Usage
get_openai_base_url()
Value
Base URL string
Get OpenAI Embedding Model from environment
Description
Get OpenAI Embedding Model from environment
Usage
get_openai_embedding_model()
Value
Model name string
Get OpenAI Model from environment
Description
Get OpenAI Model from environment
Usage
get_openai_model()
Value
Model name string
Get OpenAI Model ID from environment
Description
Get OpenAI Model ID from environment
Usage
get_openai_model_id()
Value
Model ID string
Get or Create Session
Description
Retrieves the current chat session from the cache, or creates a new one. Sessions persist across chunks within a single knit process.
Usage
get_or_create_session(options)
Arguments
options |
Chunk options containing potential |
Value
A ChatSession object.
Get R Context
Description
Generates a text summary of R objects to be used as context for the LLM.
Usage
get_r_context(vars, envir = parent.frame())
Arguments
vars |
Character vector of variable names to include. |
envir |
The environment to look for variables in. Default is parent.frame(). |
Value
A single string containing the summaries of the requested variables.
Examples
if (interactive()) {
df <- data.frame(x = 1:10, y = rnorm(10))
context <- get_r_context("df")
cat(context)
}
Get Skill Store
Description
Get the global skill store instance.
Usage
get_skill_store()
Value
A SkillStore object.
Export ggplot as Frontend-Ready JSON
Description
Exports a ggplot object as JSON optimized for frontend rendering. Addresses all frontend feedback:
- Strict scalar typing (explicit handling of missing values)
- Structured units with pre-calculated pixel values
- Stable IDs for React keys
- Consistent Array of Structures pattern
Usage
ggplot_to_frontend_json(
plot,
width = 800,
height = 600,
include_data = TRUE,
include_built = FALSE,
pretty = FALSE
)
Arguments
plot |
A ggplot object. |
width |
Plot width in pixels. |
height |
Plot height in pixels. |
include_data |
Whether to include data. |
include_built |
Whether to include ggplot_build() output. |
pretty |
Format JSON with indentation. |
Value
JSON string optimized for frontend.
Examples
if (interactive()) {
library(ggplot2)
p <- ggplot(mtcars, aes(wt, mpg)) + geom_point()
json <- ggplot_to_frontend_json(p, width = 800, height = 600)
}
Convert ggplot to Frontend-Friendly Object
Description
Internal function for structured conversion.
Usage
ggplot_to_frontend_object(plot, include_data, plot_dims)
Convert ggplot Object to Schema-Compliant Structure
Description
Converts a ggplot object to a JSON-serializable structure with precise empty value handling and render hints for frontend.
Usage
ggplot_to_z_object(plot, include_data = TRUE, include_render_hints = TRUE)
Arguments
plot |
A ggplot object. |
include_data |
Whether to include data in output. |
include_render_hints |
Whether to include frontend render hints. |
Value
A list structure matching z_ggplot schema.
Check if specific provider key is available
Description
Check if specific provider key is available
Usage
has_api_key(provider)
Arguments
provider |
Provider name ("openai" or "anthropic") |
Value
TRUE if key is available and valid
Hooks System
Description
A system for intercepting and monitoring AI SDK events. Allows implementation of "Human-in-the-loop", logging, and validation.
Hypothesis-Fix-Verify Loop
Description
Advanced self-healing execution that generates hypotheses about errors, attempts fixes, and verifies the results.
Usage
hypothesis_fix_verify(
code,
model = NULL,
test_fn = NULL,
max_iterations = 5,
verbose = TRUE
)
Arguments
code |
Character string of R code to execute. |
model |
LLM model for analysis. |
test_fn |
Optional function to verify the result is correct. |
max_iterations |
Maximum fix iterations. |
verbose |
Print progress. |
Value
List with result, fix history, and verification status.
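Examples
A usage sketch; the failing code, model ID, and the `result` field access are illustrative, based on the argument and value descriptions above:

```r
if (interactive()) {
  out <- hypothesis_fix_verify(
    code = "mean(df$x)",                          # code expected to fail
    model = "openai:gpt-4o",
    test_fn = function(result) is.numeric(result),  # verification check
    max_iterations = 3
  )
  out$result
}
```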
Initialize a New Skill
Description
Creates a new skill directory with the standard "textbook" structure: SKILL.md, scripts/, references/, and assets/.
Usage
init_skill(name, path = tempdir())
Arguments
name |
Name of the skill. |
path |
Parent directory where the skill folder will be created. |
Value
Path to the created skill directory.
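Examples
A sketch of creating a skill scaffold; the skill name is illustrative:

```r
if (interactive()) {
  skill_dir <- init_skill("data-cleaning")
  # Standard layout: SKILL.md, scripts/, references/, assets/
  list.files(skill_dir)
}
```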
Install a Skill
Description
Install a skill from the global skill store or a GitHub repository.
Usage
install_skill(skill_ref, version = NULL, force = FALSE)
Arguments
skill_ref |
Skill reference (e.g., "username/skillname"). |
version |
Optional specific version. |
force |
Force reinstallation. |
Value
The installed Skill object.
Examples
if (interactive()) {
# Install from GitHub
install_skill("aisdk/data-analysis")
# Install specific version
install_skill("aisdk/visualization", version = "1.2.0")
# Force reinstall
install_skill("aisdk/ml-tools", force = TRUE)
}
Check if Value is Empty (ggplot2 semantics)
Description
Mirrors the behavior of ggplot2's internal empty() function.
Returns TRUE if the value is NULL, has 0 rows or 0 columns, or is a waiver object.
Usage
is_ggplot_empty(x)
Arguments
x |
Value to check. |
Value
Logical.
Create a JSON-RPC 2.0 error response object
Description
Create a JSON-RPC 2.0 error response object
Usage
jsonrpc_error(code, message, id = NULL, data = NULL)
Arguments
code |
The error code (integer) |
message |
The error message |
id |
The request ID this is responding to (can be NULL) |
data |
Optional additional error data |
Value
A list representing the JSON-RPC error response
Create a JSON-RPC 2.0 request object
Description
Create a JSON-RPC 2.0 request object
Usage
jsonrpc_request(method, params = NULL, id = NULL)
Arguments
method |
The method name |
params |
The parameters (list or NULL) |
id |
The request ID (integer or string) |
Value
A list representing the JSON-RPC request
Create a JSON-RPC 2.0 success response object
Description
Create a JSON-RPC 2.0 success response object
Usage
jsonrpc_response(result, id)
Arguments
result |
The result value |
id |
The request ID this is responding to |
Value
A list representing the JSON-RPC response
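For orientation, the lists these three helpers build follow the JSON-RPC 2.0 wire format. A plain-R sketch of the expected shapes (field names come from the spec, not from the package internals):

```r
# Request: method plus optional params and an id
req <- list(jsonrpc = "2.0", method = "tools/list", params = NULL, id = 1L)

# Success response: result is paired with the request id
ok <- list(jsonrpc = "2.0", result = list(tools = list()), id = 1L)

# Error response: -32601 is the spec's standard "Method not found" code
err <- list(jsonrpc = "2.0",
            error = list(code = -32601L, message = "Method not found"),
            id = 1L)
```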
Knitr Engine for AI
Description
Implements a custom knitr engine {ai} that allows using LLMs to generate
and execute R code within RMarkdown/Quarto documents.
List Available Local Models
Description
Scan common directories for available local model files.
Usage
list_local_models(paths = NULL)
Arguments
paths |
Character vector of directories to scan. Defaults to common locations. |
Value
A data frame with model information.
List Models for Provider
Description
Returns a data frame of models for the specified provider based on the static configuration. Includes enriched metadata when available (context window, pricing, capabilities).
Usage
list_models(provider = NULL)
Arguments
provider |
The name of the provider (e.g., "stepfun"). If NULL, all providers are listed. |
Value
A data frame containing model details.
List Installed Skills
Description
List all installed skills.
Usage
list_skills()
Value
A data frame of installed skills.
Load a Chat Session
Description
Load a previously saved ChatSession from a file.
Usage
load_chat_session(path, tools = NULL, hooks = NULL, registry = NULL)
Arguments
path |
File path to load from (.rds or .json). |
tools |
Optional list of Tool objects (tools are not saved, must be re-provided). |
hooks |
Optional HookHandler object. |
registry |
Optional ProviderRegistry. |
Value
A ChatSession object with restored state.
Examples
if (interactive()) {
# Load a saved session
chat <- load_chat_session("my_session.rds", tools = my_tools)
# Continue where you left off
response <- chat$send("Let's continue our discussion")
}
Load Models Configuration
Description
Internal helper to load the models JSON files.
Usage
load_models_config()
Value
A list representing the parsed JSON of models per provider.
Map native Anthropic SSE event to aggregator calls
Description
Translates a native Anthropic Messages API SSE event into the appropriate SSEAggregator method calls.
Usage
map_anthropic_chunk(event_type, event_data, agg)
Arguments
event_type |
SSE event type string (e.g. "content_block_delta"). |
event_data |
Parsed JSON data from SSE event. |
agg |
An SSEAggregator instance. |
Value
Logical TRUE if the stream should break (message_stop received).
Map OpenAI SSE chunk to aggregator calls
Description
Translates an OpenAI Chat Completions SSE data chunk into the appropriate SSEAggregator method calls.
Usage
map_openai_chunk(data, done, agg)
Arguments
data |
Parsed JSON data from SSE event (or NULL if done). |
done |
Logical, TRUE if stream is complete. |
agg |
An SSEAggregator instance. |
Deserialize a JSON-RPC message from MCP transport
Description
Deserialize a JSON-RPC message from MCP transport
Usage
mcp_deserialize(json_str)
Arguments
json_str |
The JSON string to parse |
Value
A list, or NULL on parse error
Distributed MCP Ecosystem
Description
Service discovery and dynamic composition for MCP (Model Context Protocol) servers. Implements mDNS/DNS-SD discovery, skill negotiation, and hot-swapping of tools at runtime.
Factory function to create an MCP discovery instance.
Usage
mcp_discover(registry_url = NULL)
Arguments
registry_url |
Optional URL for remote skill registry. |
Value
An McpDiscovery object.
Examples
if (interactive()) {
# Create discovery instance
discovery <- mcp_discover()
# Scan local network
services <- discovery$scan_network()
# Register a known endpoint
discovery$register("my-server", "localhost", 3000)
# List all discovered endpoints
discovery$list_endpoints()
}
Create MCP initialize request
Description
Create MCP initialize request
Usage
mcp_initialize_request(
client_info,
capabilities = structure(list(), names = character(0)),
id = 1L
)
Arguments
client_info |
List with name and version |
capabilities |
Client capabilities |
id |
Request ID |
Value
A JSON-RPC request for initialize
Create MCP initialized notification
Description
Create MCP initialized notification
Usage
mcp_initialized_notification()
Value
A JSON-RPC notification
Create MCP resources/list request
Description
Create MCP resources/list request
Usage
mcp_resources_list_request(id)
Arguments
id |
Request ID |
Value
A JSON-RPC request
Create MCP resources/read request
Description
Create MCP resources/read request
Usage
mcp_resources_read_request(uri, id)
Arguments
uri |
The resource URI |
id |
Request ID |
Value
A JSON-RPC request
Create MCP Router
Description
Factory function to create an MCP router for aggregating multiple servers.
Usage
mcp_router()
Value
An McpRouter object.
Examples
if (interactive()) {
# Create router
router <- mcp_router()
# Connect to multiple MCP servers
router$connect("github", "npx", c("-y", "@modelcontextprotocol/server-github"))
router$connect("filesystem", "npx", c("-y", "@modelcontextprotocol/server-filesystem"))
# Use aggregated tools with generate_text
result <- generate_text(
model = "openai:gpt-4o",
prompt = "List my GitHub repos and save to a file",
tools = router$as_sdk_tools()
)
# Hot-swap: remove a server
router$remove_client("github")
# Cleanup
router$close()
}
Serialize a JSON-RPC message for MCP transport
Description
Serialize a JSON-RPC message for MCP transport
Usage
mcp_serialize(msg)
Arguments
msg |
The message list to serialize |
Value
A JSON string
Create MCP tools/call request
Description
Create MCP tools/call request
Usage
mcp_tools_call_request(
tool_name,
arguments = structure(list(), names = character(0)),
id
)
Arguments
tool_name |
The tool name |
arguments |
Tool arguments as a list |
id |
Request ID |
Value
A JSON-RPC request
Create MCP tools/list request
Description
Create MCP tools/list request
Usage
mcp_tools_list_request(id)
Arguments
id |
Request ID |
Value
A JSON-RPC request
Migrate Legacy Code
Description
Provides migration guidance for legacy code patterns.
Usage
migrate_pattern(pattern)
Arguments
pattern |
The legacy pattern to migrate from. |
Value
A list with old_pattern, new_pattern, and example.
Examples
if (interactive()) {
# Get migration guidance for ChatSession
guidance <- migrate_pattern("ChatSession")
cat(guidance$example)
}
Mission Hook System
Description
Mission-level event hooks for observability and intervention. Provides a higher-level hook system above the agent-level HookHandler, enabling monitoring and control of the entire Mission lifecycle.
Mission Orchestrator: Concurrent Mission Scheduling
Description
MissionOrchestrator R6 class for managing multiple concurrent Missions. Implements the Coordinator layer from Symphony's architecture: poll loop, concurrency slots, stall detection, and result aggregation.
Concurrency model:
- Synchronous batch: run_all() executes up to max_concurrent missions simultaneously using parallel::mclapply (fork-based on Unix, sequential on Windows).
- Async per-mission: run_async() launches a mission in a callr background process and returns a handle for polling.
Model Shortcut
Description
Shortcut for default model configuration. Call with no arguments to read the
current default model, or pass a model to update it. This is equivalent to
calling get_model() and set_model() directly.
Usage
model(new)
Arguments
new |
Optional model identifier string or |
Value
When new is missing, returns the current default model. Otherwise
invisibly returns the previous default model.
Examples
model()
model("openai:gpt-4o-mini")
model(NULL)
Default Model Configuration
Description
Utilities for reading and updating the package-wide default language model.
High-level helpers that accept model = NULL, including generate_text(),
stream_text(), ChatSession$new(), create_chat_session(),
auto_fix(), and the knitr {ai} engine, use this default when no
explicit model is supplied.
Multimodal Helpers
Description
Helper functions for constructing multimodal messages (text and images).
Package a Skill
Description
Validates a skill and packages it into a .skill zip file.
Usage
package_skill(path, output_dir = tempdir())
Arguments
path |
Path to the skill directory. |
output_dir |
Directory to save the packaged file. Defaults to tempdir(). |
Value
Path to the created .skill file.
Parse Tool Arguments
Description
Robustly parse tool call arguments from various formats that different LLMs may return. Handles edge cases like incomplete JSON, malformed strings, and various empty representations.
Implements a multi-layer parsing strategy (inspired by Opencode):
1. Direct pass-through for already-parsed lists
2. Empty value detection and normalization
3. JSON repair for common LLM mistakes
4. Fallback parsing with JavaScript object literal support
5. Graceful degradation to empty args on failure
Usage
parse_tool_arguments(args, tool_name = "unknown")
Arguments
args |
The arguments to parse (can be string, list, or NULL). |
tool_name |
Optional tool name for better error messages. |
Value
A named list of parsed arguments (empty named list if no arguments).
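Examples
A sketch of the input formats described above; the exact repair behavior for malformed JSON is as documented, and the argument values are illustrative:

```r
if (interactive()) {
  parse_tool_arguments('{"query": "weather"}', tool_name = "search")  # well-formed
  parse_tool_arguments('{"query": "weather"', tool_name = "search")   # repaired if possible
  parse_tool_arguments(NULL, tool_name = "search")                    # empty named list
}
```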
Post to API with Retry
Description
Makes a POST request to an API endpoint with automatic retry on failure.
Implements exponential backoff and respects retry-after headers.
Usage
post_to_api(
url,
headers,
body,
max_retries = 2,
initial_delay_ms = 2000,
backoff_factor = 2
)
Arguments
url |
The API endpoint URL. |
headers |
A named list of HTTP headers. |
body |
The request body (will be converted to JSON). |
max_retries |
Maximum number of retries (default: 2). |
initial_delay_ms |
Initial delay in milliseconds (default: 2000). |
backoff_factor |
Multiplier for delay on each retry (default: 2). |
Value
The parsed JSON response.
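The retry schedule implied by the defaults can be computed directly: the delay before retry n is initial_delay_ms * backoff_factor^(n - 1), unless a retry-after header overrides it. A base-R sketch:

```r
initial_delay_ms <- 2000
backoff_factor   <- 2
max_retries      <- 2

# Delay before each retry, in milliseconds
delays <- initial_delay_ms * backoff_factor^(seq_len(max_retries) - 1)
delays  # 2000 4000
```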
Print GenerateObjectResult
Description
Print GenerateObjectResult
Usage
## S3 method for class 'GenerateObjectResult'
print(x, ...)
Arguments
x |
A GenerateObjectResult object. |
... |
Additional arguments (ignored). |
Print Benchmark Result
Description
Print Benchmark Result
Usage
## S3 method for class 'benchmark_result'
print(x, ...)
Arguments
x |
A benchmark result object. |
... |
Additional arguments (not used). |
Print Method for z_schema
Description
Pretty print a z_schema object.
Usage
## S3 method for class 'z_schema'
print(x, ...)
Arguments
x |
A z_schema object. |
... |
Additional arguments (ignored). |
Print Migration Guide
Description
Print a comprehensive migration guide for upgrading to the new SDK version.
Usage
print_migration_guide(verbose = TRUE)
Arguments
verbose |
Include detailed examples. Default TRUE. |
Value
Invisible NULL (prints to console).
Calculate Tool Accuracy
Description
Calculate Tool Accuracy
Usage
private_calculate_tool_accuracy(results, tasks)
Project Memory System
Description
Long-term memory storage for AI agents using SQLite. Stores successful code snippets, error fixes, and execution history for RAG (Retrieval Augmented Generation) and learning from past interactions.
Factory function to create or connect to a project memory database.
Usage
project_memory(project_root = tempdir(), db_name = "memory.sqlite")
Arguments
project_root |
Project root directory. |
db_name |
Database filename. |
Value
A ProjectMemory object.
Examples
if (interactive()) {
  # Create memory for current project
  memory <- project_memory()

  # Store a successful code snippet
  memory$store_snippet(
    code = "df %>% filter(x > 0) %>% summarize(mean = mean(y))",
    description = "Filter and summarize data",
    tags = c("dplyr", "summarize")
  )

  # Store an error fix
  memory$store_fix(
    original_code = "mean(df$x)",
    error = "argument is not numeric or logical",
    fixed_code = "mean(as.numeric(df$x), na.rm = TRUE)",
    fix_description = "Convert to numeric and handle NAs"
  )

  # Search for relevant snippets
  memory$search_snippets("summarize")
}
AiHubMix Provider
Description
Implementation for AiHubMix models. The AiHubMix API is OpenAI-compatible and provides extended support for features such as Claude's extended thinking and prompt caching.
Anthropic Provider
Description
Implementation for Anthropic Claude models.
Alibaba Cloud Bailian Provider
Description
Implementation for Alibaba Cloud Bailian (DashScope) hosted models. The DashScope API is OpenAI-compatible and supports Qwen-series models, including reasoning models (QwQ, Qwen3, etc.).
Custom Provider Factory
Description
A dynamic factory for creating custom provider instances.
This allows users to instantiate a model provider at runtime by configuring
the endpoint (base_url), credentials (api_key), network protocol/routing (api_format),
and specific capabilities (use_max_completion_tokens), without writing a new Provider class.
DeepSeek Provider
Description
Implementation for DeepSeek models. DeepSeek API is OpenAI-compatible with support for reasoning models.
Gemini Provider
Description
Implementation for Google Gemini models via REST API.
NVIDIA Provider
Description
Implementation for NVIDIA NIM and other NVIDIA-hosted models.
OpenAI Provider
Description
Implementation for OpenAI models.
OpenRouter Provider
Description
Implementation for OpenRouter, a unified API gateway for multiple LLM providers. OpenRouter API is OpenAI-compatible and provides access to models from OpenAI, Anthropic, Google, Meta, Mistral, DeepSeek, and many more.
Stepfun Provider
Description
Implementation for Stepfun models. Stepfun API is OpenAI-compatible.
Volcengine Provider
Description
Implementation for Volcengine Ark hosted models. Volcengine API is OpenAI-compatible with support for reasoning models (e.g., Doubao, DeepSeek).
xAI Provider
Description
Implementation for xAI (Grok) models. xAI API is OpenAI-compatible.
Create R Data Tasks Benchmark
Description
Create a standard benchmark suite for R data science tasks.
Usage
r_data_tasks(difficulty = "medium")
Arguments
difficulty |
Difficulty level: "easy", "medium", "hard". |
Value
A list of benchmark tasks.
Reactive Tool
Description
Create a tool that can modify Shiny reactive values.
This is a wrapper around the standard tool() function that provides
additional documentation and conventions for Shiny integration.
The execute function receives rv (reactiveValues) and session as
the first two arguments, followed by any tool-specific parameters.
Usage
reactive_tool(name, description, parameters, execute)
Arguments
name |
The name of the tool. |
description |
A description of what the tool does. |
parameters |
A schema object defining the tool's parameters. |
execute |
A function to execute. First two args are |
Value
A Tool object ready for use with aiChatServer.
Examples
if (interactive()) {
  # Create a tool that modifies a reactive value
  update_resolution_tool <- reactive_tool(
    name = "update_resolution",
    description = "Update the plot resolution",
    parameters = z_object(
      resolution = z_number() |> z_describe("New resolution value (50-500)")
    ),
    execute = function(rv, session, resolution) {
      rv$resolution <- resolution
      paste0("Resolution updated to ", resolution)
    }
  )

  # Use with aiChatServer by wrapping the execute function
  server <- function(input, output, session) {
    rv <- reactiveValues(resolution = 100)

    # Wrap the tool to inject rv and session
    wrapped_tools <- wrap_reactive_tools(
      list(update_resolution_tool),
      rv = rv,
      session = session
    )

    aiChatServer("chat", model = "openai:gpt-4o", tools = wrapped_tools)
  }
}
Register AI Engine
Description
Registers the {ai} engine with knitr. Call this function once before
knitting a document that uses {ai} chunks.
Usage
register_ai_engine()
Value
Invisible NULL.
Examples
if (interactive()) {
  library(aisdk)
  register_ai_engine()
  # Now you can use ```{ai} chunks in your RMarkdown
}
Reload project-level environment variables
Description
Forces R to re-read the .Renviron file without restarting the session. This is useful when you've modified .Renviron and don't want to restart R.
Usage
reload_env(path = ".Renviron")
Arguments
path |
Path to .Renviron file (default: project root) |
Value
Invisible TRUE if successful
Examples
if (interactive()) {
  # Reload environment after modifying .Renviron
  reload_env()

  # Now use the new keys
  Sys.getenv("OPENAI_API_KEY")
}
Remove R Code Blocks
Description
Removes fenced R code blocks from text, leaving the explanatory prose.
Usage
remove_r_code_blocks(text)
Arguments
text |
The text to process. |
Value
A character string with R code blocks removed.
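The core of such a function is a dot-matches-newline regular expression over the fenced blocks. A minimal sketch (`strip_r_blocks` is a hypothetical name, not the package's code):

```r
# Remove fenced R code blocks (```r ... ``` or ```{r} ... ```) from text
strip_r_blocks <- function(text) {
  gsub("(?s)```\\{?[rR]\\}?\\s*?\\n.*?```", "", text, perl = TRUE)
}
```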
Render Markdown Text
Description
Render markdown-formatted text in the console with beautiful styling. This function uses the same rendering engine as the streaming output, supporting headers, lists, code blocks, and other markdown elements.
Usage
render_text(text)
Arguments
text |
A character string containing markdown text, or a GenerateResult object. |
Value
NULL (invisibly)
Examples
if (interactive()) {
  # Render simple text
  render_text("# Hello\n\nThis is **bold** text.")

  # Render with code block
  render_text("Here is some R code:\n\n```r\nx <- 1:10\nmean(x)\n```")
}
Repair JSON String
Description
Attempt to repair common JSON malformations from LLM outputs. This is a lightweight repair function for common issues. For more complex repairs, use fix_json() from utils_json.R.
Handles:
- Missing closing braces/brackets
- Trailing commas
- Unquoted keys
- Truncated strings
- Single quotes instead of double quotes
Usage
repair_json_string(json_str)
Arguments
json_str |
The potentially malformed JSON string. |
Value
A repaired JSON string (best effort).
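Two of the repairs above can be sketched with simple substitutions (illustrative only; `repair_sketch` is a hypothetical stand-in and is far less careful than the real function, e.g. the quote swap breaks apostrophes inside strings):

```r
repair_sketch <- function(s) {
  s <- gsub("'", '"', s, fixed = TRUE)  # single -> double quotes (naive)
  s <- gsub(",\\s*\\}", "}", s)         # drop trailing comma before }
  s <- gsub(",\\s*\\]", "]", s)         # drop trailing comma before ]
  s
}
repair_sketch("{'a': 1,}")
```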
Repair Tool Call
Description
Attempts to repair a failed tool call. This implements a multi-layer repair strategy inspired by Opencode's experimental_repairToolCall:
1. Try to fix tool name case issues (e.g., "GetWeather" -> "get_weather")
2. If repair fails, route to an "invalid" tool for graceful handling
Usage
repair_tool_call(tool_call, tools, error_message = NULL)
Arguments
tool_call |
A list with name, arguments, and optionally id. |
tools |
A list of available Tool objects. |
error_message |
Optional error message from the failed call. |
Value
A repaired tool call list, or an "invalid" tool call if unrepairable.
Request User Authorization (HITL)
Description
Pauses execution and prompts the user for permission to execute a potentially
risky action. Supports console environments via readline.
Usage
request_authorization(action, risk_level = "YELLOW")
Arguments
action |
Character string describing the action the Agent wants to perform. |
risk_level |
Character string. One of "GREEN", "YELLOW", "RED". |
Value
Logical TRUE if user permits, otherwise throws an error with the rejection reason.
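A console-only sketch of the prompt flow. Auto-approving GREEN actions and denying in non-interactive sessions are assumptions for illustration, not necessarily the package's policy:

```r
ask_user <- function(action, risk_level = "YELLOW") {
  if (risk_level == "GREEN") return(TRUE)  # assumption: low-risk actions auto-approved
  answer <- if (interactive()) {
    readline(sprintf("Allow '%s' [%s risk]? (y/n): ", action, risk_level))
  } else {
    "n"                                    # assumption: non-interactive sessions deny
  }
  if (tolower(trimws(answer)) %in% c("y", "yes")) TRUE
  else stop("User rejected action: ", action)
}
```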
Run a Feishu Webhook Server
Description
Run a blocking httpuv loop for a Feishu callback endpoint.
This helper is intended for local demos and manual integration testing.
Usage
run_feishu_webhook_server(
  runtime,
  host = "127.0.0.1",
  port = 8788,
  path = "/feishu/webhook",
  poll_ms = 100
)
Arguments
runtime |
A |
host |
Bind host. |
port |
Bind port. |
path |
Callback path. |
poll_ms |
Event loop polling interval in milliseconds. |
Value
Invisible server handle. Interrupt the R process to stop it.
Safe Eval with Timeout
Description
Execute R code with a timeout to prevent infinite loops.
Usage
safe_eval(expr, timeout_seconds = 30, envir = parent.frame())
Arguments
expr |
Expression to evaluate. |
timeout_seconds |
Maximum execution time in seconds. |
envir |
Environment for evaluation. |
Value
The result or an error.
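One common base-R way to implement such a timeout is setTimeLimit(); this is a sketch under that assumption (the package may instead use callr or another mechanism):

```r
eval_with_timeout <- function(expr, timeout_seconds = 30, envir = parent.frame()) {
  setTimeLimit(elapsed = timeout_seconds, transient = TRUE)
  on.exit(setTimeLimit(elapsed = Inf), add = TRUE)  # always clear the limit
  tryCatch(eval(expr, envir = envir),
           error = function(e) e)                   # return the error, don't rethrow
}
eval_with_timeout(quote(1 + 1))
```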
Safe JSON Parser
Description
Parses a JSON string, attempting to repair it using fix_json if
the initial parse fails.
Usage
safe_parse_json(text)
Arguments
text |
A JSON string. |
Value
A parsed R object (list, vector, etc.) or NULL if parsing fails even after repair.
Examples
safe_parse_json('{"a": 1}')
safe_parse_json('{"a": 1,')
Safe Serialization to JSON
Description
Standardized internal helper for JSON serialization with common defaults.
Usage
safe_to_json(x, auto_unbox = TRUE, ...)
Arguments
x |
Object to serialize. |
auto_unbox |
Whether to automatically unbox single-element vectors. Default TRUE. |
... |
Additional arguments to jsonlite::toJSON. |
Value
A JSON string.
R-Native Programmatic Sandbox
Description
SandboxManager R6 class and utilities for building an R-native programmatic tool sandbox. Inspired by Anthropic's Programmatic Tool Calling, this module enables LLMs to write R code that batch-invokes registered tools and processes data locally (using dplyr/purrr), returning only concise results to the context.
Details
The core idea: instead of the LLM making N separate tool calls (each requiring
a round-trip), it writes a single R script that loops over inputs, calls tools
as ordinary R functions, filters/aggregates the results with dplyr, and
print()s only the key findings. This dramatically reduces token usage,
latency, and context window pressure.
Architecture
User tools -> SandboxManager -> isolated R environment
                                  - tool_a()
                                  - tool_b()
                                  - dplyr::*
                                  - purrr::*
create_r_code_tool() -> single "execute_r_code" tool (registered with the LLM)
Sanitize Object for JSON Serialization
Description
Standardizes R objects for consistent JSON serialization, especially for ggplot2 elements like units and margins.
Usage
sanitize_for_json(x, plot_dims = list(width = 8, height = 6))
Arguments
x |
Object to sanitize. |
plot_dims |
Optional list with width and height in inches. |
Value
A sanitized list or vector.
Sanitize R Code
Description
Patches common issues in LLM-generated R code before execution. Currently handles missing library() calls for %>% and other common operators.
Usage
sanitize_r_code(code)
Arguments
code |
The R code string to sanitize. |
Value
The sanitized code string.
Sanitize Theme Element for Frontend
Description
Sanitize Theme Element for Frontend
Usage
sanitize_theme_element(elem, plot_dims)
Scan for Skills
Description
Convenience function to scan a directory and return a SkillRegistry. Alias for create_skill_registry().
Usage
scan_skills(path, recursive = FALSE)
Arguments
path |
Path to scan for skills. |
recursive |
Whether to scan subdirectories. Default FALSE. |
Value
A populated SkillRegistry object.
Schema DSL: Lightweight JSON Schema Generator
Description
A lightweight DSL (Domain Specific Language) for defining JSON Schema structures in R, inspired by Zod from TypeScript. Used for defining tool parameters.
Schema Generator
Description
Utilities to automatically generate z_schema objects from R function signatures.
Convert Schema to JSON
Description
Convert a z_schema object to a JSON string suitable for API calls. Handles the R-specific auto_unbox issues properly.
Usage
schema_to_json(schema, pretty = FALSE)
Arguments
schema |
A z_schema object created by z_* functions. |
pretty |
If TRUE, format JSON with indentation. |
Value
A JSON string.
Examples
schema <- z_object(
  name = z_string(description = "User name")
)
cat(schema_to_json(schema, pretty = TRUE))
Convert Schema to Plain List
Description
Internal function to convert z_schema to plain list, stripping class attributes.
Usage
schema_to_list(schema)
Arguments
schema |
A z_schema object. |
Value
A plain list suitable for JSON conversion.
Reset the Variable Registry
Description
Clears all protected variables.
Usage
sdk_clear_protected_vars()
Get Feature Flag
Description
Get the current value of a feature flag.
Usage
sdk_feature(flag, default = NULL)
Arguments
flag |
Name of the feature flag. |
default |
Default value if flag not set. |
Value
The flag value.
Examples
if (interactive()) {
  # Check if shared session is enabled
  if (sdk_feature("use_shared_session")) {
    session <- create_shared_session(model = "openai:gpt-4o")
  } else {
    session <- create_chat_session(model = "openai:gpt-4o")
  }
}
Get Metadata for a Protected Variable
Description
Get Metadata for a Protected Variable
Usage
sdk_get_var_metadata(name)
Arguments
name |
Character string. The name of the variable. |
Value
A list with metadata (locked, cost, etc.), or NULL if not protected.
Check if a Variable is Locked
Description
Check if a Variable is Locked
Usage
sdk_is_var_locked(name)
Arguments
name |
Character string. The name of the variable. |
Value
TRUE if the variable is protected and locked, FALSE otherwise.
List Feature Flags
Description
List all available feature flags and their current values.
Usage
sdk_list_features()
Value
A named list of feature flags.
Examples
if (interactive()) {
  # See all feature flags
  print(sdk_list_features())
}
Protect a Variable from Agent Modification
Description
Marks a variable as protected so that the Agent cannot accidentally overwrite, shadow, or deeply copy it during sandbox execution.
Usage
sdk_protect_var(name, locked = TRUE, cost = "High")
Arguments
name |
Character string. The name of the variable to protect. |
locked |
Logical. If TRUE, the variable cannot be assigned to by the Agent. |
cost |
Character string. An indicator of the variable's computation/memory cost (e.g., "High", "Medium", "Low"). |
Value
Invisible TRUE.
Reset Feature Flags
Description
Reset all feature flags to their default values.
Usage
sdk_reset_features()
Value
Invisible NULL.
Set Feature Flag
Description
Set a feature flag value. Use this to enable/disable SDK features.
Usage
sdk_set_feature(flag, value)
Arguments
flag |
Name of the feature flag. |
value |
Value to set. |
Value
Invisible previous value.
Examples
if (interactive()) {
  # Disable shared session for legacy compatibility
  sdk_set_feature("use_shared_session", FALSE)

  # Enable legacy tool format
  sdk_set_feature("legacy_tool_format", TRUE)
}
Unprotect a Variable
Description
Removes protection from a previously protected variable.
Usage
sdk_unprotect_var(name)
Arguments
name |
Character string. The name of the variable to unprotect. |
Value
Invisible TRUE.
Search Skills
Description
Search the skill registry.
Usage
search_skills(query = NULL, capability = NULL)
Arguments
query |
Search query. |
capability |
Filter by capability. |
Value
A data frame of matching skills.
Session Management: Stateful Chat Sessions
Description
ChatSession R6 class for managing stateful conversations with LLMs. Provides automatic history management, persistence, and model switching.
Set Default Model
Description
Sets the package-wide default language model. Pass NULL to restore the
built-in default ("openai:gpt-4o" unless overridden with
options(aisdk.default_model = ...)).
Usage
set_model(new = NULL)
Arguments
new |
A model identifier string, a |
Value
Invisibly returns the previous default model.
Examples
old <- set_model("deepseek:deepseek-chat")
current <- get_model()
set_model(old)
set_model(NULL)
SharedSession: Enhanced Multi-Agent Session Management
Description
SharedSession R6 class providing enhanced environment management, execution context tracking, and safety guardrails for multi-agent orchestration.
Check if thinking content should be shown
Description
Check if thinking content should be shown
Usage
should_show_thinking()
Skill Manifest Specification
Description
The skill.yaml specification defines the structure for distributable skills.
Specification
# skill.yaml - Skill Manifest Specification v1.0

# Required fields
name: my-skill                  # Unique skill identifier (lowercase, hyphens)
version: 1.0.0                  # Semantic version
description: Brief description  # One-line description

# Author information
author:
  name: Author Name
  email: author@example.com
  url: https://github.com/author

# License (SPDX identifier)
license: MIT

# R package dependencies
dependencies:
  - dplyr >= 1.0.0
  - ggplot2

# System requirements
system_requirements:
  - python >= 3.8               # Optional external requirements

# MCP server configuration (optional)
mcp:
  command: npx                  # Command to start MCP server
  args:
    - -y
    - "@my-org/my-mcp-server"
  env:
    API_KEY: "${MY_API_KEY}"    # Environment variable substitution

# Capabilities this skill provides
capabilities:
  - data-analysis
  - visualization
  - machine-learning

# Prompt templates
prompts:
  system: |
    You are a specialized assistant for...
  examples:
    - "Analyze this dataset..."
    - "Create a visualization of..."

# Entry points
entry:
  main: SKILL.md                # Main skill instructions
  scripts: scripts/             # Directory containing R scripts

# Repository information
repository:
  type: github
  url: https://github.com/author/my-skill
Skill Registry: Scan and Manage Skills
Description
SkillRegistry class for discovering, caching, and retrieving skills. Scans directories for SKILL.md files and provides access to skill metadata.
Global Skill Store
Description
A CRAN-like experience for sharing AI capabilities. Skills are packaged with a skill.yaml manifest defining dependencies, MCP endpoints, and prompt templates.
Native SLM (Small Language Model) Engine
Description
Generic interface for loading and running local language models without external API dependencies. Supports multiple backends including ONNX Runtime and LibTorch for quantized model execution.
Factory function to create a new SLM Engine for local model inference.
Usage
slm_engine(model_path, backend = "gguf", config = list())
Arguments
model_path |
Path to the model weights file. |
backend |
Inference backend: "gguf" (default), "onnx", or "torch". |
config |
Optional configuration list. |
Value
An SlmEngine object.
Examples
if (interactive()) {
  # Load a GGUF model
  engine <- slm_engine("models/llama-3-8b-q4.gguf")
  engine$load()

  # Generate text
  result <- engine$generate("What is the capital of France?")
  cat(result$text)

  # Stream generation
  engine$stream("Tell me a story", callback = cat)

  # Cleanup
  engine$unload()
}
Specification Layer: Model Interfaces
Description
Abstract base classes (interfaces) for AI models.
SSE Stream Aggregator
Description
Universal chunk aggregator for Server-Sent Events (SSE) streaming. Manages all chunk-level state: text accumulation, reasoning/thinking transitions, tool call assembly, usage tracking, and result finalization.
This is a pure aggregator; it does not know about HTTP, SSE parsing, or provider-specific event types. Provider event mappers call its methods.
Start a Feishu Webhook Server
Description
Start a minimal httpuv server exposing a Feishu callback endpoint.
Usage
start_feishu_webhook_server(
  runtime,
  host = "127.0.0.1",
  port = 8788,
  path = "/feishu/webhook"
)
Arguments
runtime |
A |
host |
Bind host. |
port |
Bind port. |
path |
Callback path. |
Value
An httpuv server handle.
Standard Agent Library: Built-in Specialist Agents
Description
Factory functions for creating standard library agents with scoped tools and safety guardrails. These agents form the foundation of the multi-agent orchestration system.
Output Strategy System
Description
Implements the Strategy pattern for handling different structured output formats from LLMs. This allows the SDK to be extended with new output types (e.g., objects, enums, dataframes) without modifying core logic.
Stream from Anthropic API
Description
Makes a streaming POST request to Anthropic and processes their SSE format.
Anthropic uses event types like content_block_delta instead of OpenAI's format.
Also handles OpenAI-compatible format for proxy servers.
Usage
stream_anthropic(url, headers, body, callback)
Arguments
url |
The API endpoint URL. |
headers |
A named list of HTTP headers. |
body |
The request body (will be converted to JSON). |
callback |
A function called for each text delta. |
Value
A GenerateResult object.
Stream from API
Description
Makes a streaming POST request and processes Server-Sent Events (SSE) using httr2. Implements robust error recovery for malformed SSE data.
Usage
stream_from_api(url, headers, body, callback)
Arguments
url |
The API endpoint URL. |
headers |
A named list of HTTP headers. |
body |
The request body (will be converted to JSON). |
callback |
A function called for each parsed SSE data chunk. |
Stream Renderer: Enhanced CLI output
Description
Internal utilities for rendering streaming output and tool execution status in the R console using the cli package.
Stream from Responses API
Description
Makes a streaming POST request to OpenAI Responses API and processes SSE events. The Responses API uses different event types than Chat Completions.
Usage
stream_responses_api(url, headers, body, callback)
Arguments
url |
The API endpoint URL. |
headers |
A named list of HTTP headers. |
body |
The request body (will be converted to JSON). |
callback |
A function called for each event: callback(event_type, data, done). |
Stream Text
Description
Generate text using a language model with streaming output. This function provides a real-time stream of tokens through a callback.
Usage
stream_text(
  model = NULL,
  prompt,
  callback = NULL,
  system = NULL,
  temperature = 0.7,
  max_tokens = NULL,
  tools = NULL,
  max_steps = 1,
  sandbox = FALSE,
  skills = NULL,
  session = NULL,
  hooks = NULL,
  registry = NULL,
  ...
)
Arguments
model |
Either a LanguageModelV1 object, or a string ID like "openai:gpt-4o". |
prompt |
A character string prompt, or a list of messages. |
callback |
A function called for each text chunk: |
system |
Optional system prompt. |
temperature |
Sampling temperature (0-2). Default 0.7. |
max_tokens |
Maximum tokens to generate. |
tools |
Optional list of Tool objects for function calling. |
max_steps |
Maximum number of generation steps (tool execution loops). Default 1. Set to higher values (e.g., 5) to enable automatic tool execution. |
sandbox |
Logical. If TRUE, enables R-native programmatic sandbox mode.
See |
skills |
Optional path to skills directory, or a SkillRegistry object. |
session |
Optional ChatSession object for shared state. |
hooks |
Optional HookHandler object. |
registry |
Optional ProviderRegistry to use. |
... |
Additional arguments passed to the model. |
Value
A GenerateResult object (accumulated from the stream).
Examples
if (interactive()) {
  model <- create_openai()$language_model("gpt-4o")
  stream_text(model, "Tell me a story", callback = function(text, done) {
    if (!done) cat(text)
  })
}
Summarize Object
Description
Creates a concise summary of an R object for LLM consumption. Handles different object types appropriately.
Usage
summarize_object(obj, name)
Arguments
obj |
The object to summarize. |
name |
The name of the object (for display). |
Value
A string summary suitable for LLM context.
Agent Team: Automated Multi-Agent Orchestration
Description
AgentTeam class for managing a group of agents and automating their orchestration. It implements a "Manager-Worker" pattern where a synthesized Manager agent delegates tasks to registered worker agents based on their descriptions.
Test a Newly Created Skill
Description
Verifies a skill by running a test query against it in a sandboxed session.
Usage
test_new_skill(skill_name, test_query, registry, model)
Arguments
skill_name |
Name of the skill to test (must be in the registry). |
test_query |
A natural language query to test the skill (e.g., "Use hello_world to say hi"). |
registry |
The skill registry object. |
model |
A model object to use for the test agent. |
Value
A list containing success (boolean) and result (string).
Convert to Snake Case
Description
Convert camelCase or PascalCase to snake_case.
Usage
to_snake_case(x)
Arguments
x |
A character string. |
Value
Snake case version of the string.
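The conversion can be sketched with a lookaround regex (a minimal sketch; the package's rule set may handle more edge cases, e.g. acronym runs):

```r
# Insert "_" at each lowercase/digit -> uppercase boundary, then lowercase
snake <- function(x) {
  tolower(gsub("(?<=[a-z0-9])(?=[A-Z])", "_", x, perl = TRUE))
}
snake("GetWeather")  # "get_weather"
```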
Create a Tool
Description
Factory function to create a Tool object. This is the recommended way to define tools for LLM function calling.
Usage
tool(
  name,
  description,
  parameters = NULL,
  execute = NULL,
  layer = "llm",
  meta = NULL
)
Arguments
name |
Unique tool name (used by LLM to call the tool). |
description |
Description of the tool's purpose. Be descriptive to help the LLM understand when to use this tool. |
parameters |
A z_schema object (z_object/z_any/etc), a named list, a character vector, or NULL. When NULL, the schema is inferred from the execute function signature (if possible) and defaults to flexible types. |
execute |
An R function that implements the tool logic. It can accept a single list argument (args), or standard named parameters. List-style functions receive a single list argument containing parameters. |
layer |
Tool layer: "llm" (loaded into context) or "computer" (executed via bash/filesystem). Default is "llm". Computer layer tools are not loaded into context but executed via bash. |
meta |
Optional metadata associated with the tool (e.g., |
Value
A Tool object.
Examples
if (interactive()) {
  # Define a weather tool
  get_weather <- tool(
    name = "get_weather",
    description = "Get the current weather for a location",
    parameters = z_object(
      location = z_string(description = "The city name, e.g., 'Beijing'"),
      unit = z_enum(c("celsius", "fahrenheit"), description = "Temperature unit")
    ),
    execute = function(args) {
      # In real usage, call a weather API here
      paste("Weather in", args$location, "is 22 degrees", args$unit)
    }
  )

  # Use with generate_text
  result <- generate_text(
    model = "openai:gpt-4o",
    prompt = "What's the weather in Tokyo?",
    tools = list(get_weather)
  )
}
Create Tool Result Message
Description
Create a message representing the result of a tool call. Used to send tool execution results back to the LLM.
Usage
tool_result_message(tool_call_id, result, is_error = FALSE)
Arguments
tool_call_id |
The ID of the tool call this result responds to. |
result |
The result content (will be converted to string if needed). |
is_error |
If TRUE, indicates this result is an error message. |
Value
A list representing a tool result message.
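The returned message has roughly this shape. The exact field names below are an assumption based on the arguments above, not guaranteed to match the package's internals:

```r
# Illustrative constructor; field names are assumptions
make_tool_result <- function(tool_call_id, result, is_error = FALSE) {
  list(
    role = "tool",
    tool_call_id = tool_call_id,
    content = paste(as.character(result), collapse = "\n"),  # coerce to string
    is_error = is_error
  )
}
msg <- make_tool_result("call_123", 42)
```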
Uninstall a Skill
Description
Remove an installed skill.
Usage
uninstall_skill(name)
Arguments
name |
Skill name. |
Update Provider Models
Description
Fetches the model list from a provider's API and updates the local JSON config. Manually enriched metadata (pricing, context, capabilities, family, etc.) is preserved during re-sync: only the ID list is refreshed, existing enriched data is merged back.
Usage
update_provider_models(provider)
Arguments
provider |
The name of the provider (e.g., "stepfun"). |
Value
Invisible TRUE on success.
Update .Renviron with new values
Description
Updates or appends environment variables to the .Renviron file.
Usage
update_renviron(updates, path = ".Renviron")
Arguments
updates |
A named list of key-value pairs to update. |
path |
Path to .Renviron file (default: project root) |
Value
Invisible TRUE if successful
AST Safety Analysis
Description
Provides static analysis of R Abstract Syntax Trees (AST) to prevent evasion of sandbox restrictions.
Capture R Console Output
Description
Internal helpers to capture printed output, messages, and warnings from evaluated R expressions so tool execution can be rendered cleanly in the console UI.
CLI Utils: Markdown and Tool Rendering
Description
Utilities for rendering Markdown text and tool execution status in the console.
Environment Configuration Utilities
Description
Utilities for managing API keys and environment variables.
Utilities: HTTP and Retry Logic
Description
Provides standardized HTTP request handling with exponential backoff retry.
Implements a multi-layer defense strategy for handling API responses:
- Empty response bodies handled gracefully rather than raising a parse error
- JSON parsing with repair fallback
- SSE stream error recovery
- Graceful degradation on malformed data
JSON Utilities
Description
Provides robust utilities for parsing potentially truncated or malformed JSON strings, commonly encountered in streaming LLM outputs.
A robust utility that uses a finite state machine to close open brackets, braces, and quotes to make a truncated JSON string valid for parsing.
Usage
fix_json(json_str)
Arguments
json_str |
A potentially truncated JSON string. |
Value
A repaired JSON string.
Examples
fix_json('{"name": "Gene...')
fix_json('[1, 2, {"a":')
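The bracket-closing idea can be sketched with a small state machine (illustrative only; the real fix_json handles more cases, such as dangling keys and escapes inside nested structures):

```r
close_json <- function(s) {
  stack <- character(); in_str <- FALSE; esc <- FALSE
  for (ch in strsplit(s, "")[[1]]) {
    if (esc) { esc <- FALSE; next }            # skip escaped character
    if (ch == "\\") { esc <- TRUE; next }
    if (ch == '"') { in_str <- !in_str; next } # toggle string state
    if (in_str) next                           # ignore brackets inside strings
    if (ch %in% c("{", "[")) stack <- c(stack, ch)
    else if (ch %in% c("}", "]")) stack <- stack[-length(stack)]
  }
  closers <- c("{" = "}", "[" = "]")
  # Close any open string, then unwind the bracket stack in reverse order
  paste0(s, if (in_str) '"' else "", paste(closers[rev(stack)], collapse = ""))
}
close_json('{"name": "Gene')  # '{"name": "Gene"}'
```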
MCP Utility Functions
Description
JSON-RPC 2.0 message helpers for MCP communication.
Utilities: Middleware System
Description
Provides middleware functionality to wrap and enhance language models. Middleware can transform parameters and wrap generate/stream operations.
Model List Management Utilities
Description
Utilities for loading, querying, and formatting provider model lists from static configuration.
Model Synchronization Utilities
Description
Internal utilities for synchronizing model configurations with provider APIs. When re-syncing, manually enriched fields (pricing, context, capabilities, etc.) are preserved and merged back into the updated file.
Utilities: Provider Registry
Description
A registry for managing AI model providers.
Supports the provider:model syntax for accessing models.
Variable Registry
Description
Provides a mechanism to protect specific variables from being accidentally modified or duplicated by the Agent within the sandbox environment.
Walk an Abstract Syntax Tree
Description
Recursively traverse an R expression and apply a visitor function to each node.
Usage
walk_ast(expr, visitor)
Arguments
expr |
An R expression, call, or primitive type. |
visitor |
A function taking a node as argument. |
Wrap Language Model with Middleware
Description
Wraps a LanguageModelV1 with one or more middleware instances. Middleware is applied in order: first middleware transforms first, last middleware wraps closest to the model.
Usage
wrap_language_model(model, middleware, model_id = NULL, provider_id = NULL)
Arguments
model |
A LanguageModelV1 object. |
middleware |
A single Middleware object or a list of Middleware objects. |
model_id |
Optional custom model ID. |
provider_id |
Optional custom provider ID. |
Value
A new LanguageModelV1 object with middleware applied.
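The ordering rule ("first transforms first, last wraps closest to the model") is right-to-left function composition. A toy sketch with plain functions standing in for middleware and model (all names here are illustrative, not SDK API):

```r
# Each middleware takes the next handler and returns a wrapped handler
compose_middleware <- function(base, middlewares) {
  Reduce(function(inner, mw) mw(inner), rev(middlewares), init = base)
}
prefix_mw <- function(nxt) function(x) nxt(paste0("[pre] ", x))  # first: transforms input first
upper_mw <- function(nxt) function(x) toupper(nxt(x))            # last: wraps closest to model
fake_model <- function(x) paste("model saw:", x)

wrapped <- compose_middleware(fake_model, list(prefix_mw, upper_mw))
wrapped("hi")  # "MODEL SAW: [PRE] HI"
```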
Wrap Reactive Tools
Description
Wraps reactive tools to inject reactiveValues and session into their execute functions. Call this in your Shiny server before passing tools to aiChatServer.
Usage
wrap_reactive_tools(tools, rv, session)
Arguments
tools |
List of Tool objects, possibly including ReactiveTool objects. |
rv |
The reactiveValues object to inject. |
session |
The Shiny session object to inject. |
Value
List of wrapped Tool objects ready for use.
Aesthetic Mapping Schema
Description
Schema for ggplot2 aesthetic mappings (aes).
Usage
z_aes_mapping(known_only = FALSE)
Arguments
known_only |
If TRUE, only include known aesthetics in schema. |
Value
A z_object schema.
Create Any Schema
Description
Create a JSON Schema that accepts any JSON value.
Usage
z_any(description = NULL, nullable = TRUE, default = NULL)
Arguments
description |
Optional description of the field. |
nullable |
If TRUE, allows null values. |
default |
Optional default value. |
Value
A list representing JSON Schema for any value.
Examples
z_any(description = "Flexible input")
Create Array Schema
Description
Create a JSON Schema for array type.
Usage
z_array(
  items,
  description = NULL,
  nullable = FALSE,
  default = NULL,
  min_items = NULL,
  max_items = NULL
)
Arguments
items |
Schema for array items (created by z_* functions). |
description |
Optional description of the field. |
nullable |
If TRUE, allows null values. |
default |
Optional default value. |
min_items |
Optional minimum number of items. |
max_items |
Optional maximum number of items. |
Value
A list representing JSON Schema for array.
Examples
z_array(z_string(), description = "List of names")
Create Boolean Schema
Description
Create a JSON Schema for boolean type.
Usage
z_boolean(description = NULL, nullable = FALSE, default = NULL)
Arguments
description |
Optional description of the field. |
nullable |
If TRUE, allows null values. |
default |
Optional default value. |
Value
A list representing JSON Schema for boolean.
Examples
z_boolean(description = "Whether to include details")
Coordinate System Schema
Description
Schema for coordinate system.
Usage
z_coord()
Value
A z_object schema.
Create Dataframe Schema
Description
Create a schema that represents a dataframe (or list of row objects).
This is an R-specific convenience function that generates a JSON Schema
for an array of objects. The LLM will be instructed to output data in a
format that can be easily converted to an R dataframe using
dplyr::bind_rows() or do.call(rbind, lapply(..., as.data.frame)).
Usage
z_dataframe(
...,
.description = NULL,
.nullable = FALSE,
.default = NULL,
.min_rows = NULL,
.max_rows = NULL
)
Arguments
... |
Named arguments where names are column names and values are z_schema objects representing the column types. |
.description |
Optional description of the dataframe. |
.nullable |
If TRUE, allows null values. |
.default |
Optional default value. |
.min_rows |
Optional minimum number of rows. |
.max_rows |
Optional maximum number of rows. |
Value
A z_schema object representing an array of objects.
Examples
# Define a schema for a dataframe of genes
gene_schema <- z_dataframe(
gene_name = z_string(description = "Name of the gene"),
expression = z_number(description = "Expression level"),
significant = z_boolean(description = "Is statistically significant")
)
# Use with generate_object
# result <- generate_object(model, "Extract gene data...", gene_schema)
# df <- dplyr::bind_rows(result$object)
Describe Schema
Description
Add a description to a z_schema object (pipe-friendly).
Usage
z_describe(schema, description)
Arguments
schema |
A z_schema object. |
description |
The description string. |
Value
The modified z_schema object.
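Since z_describe() is pipe-friendly, it composes naturally with the other z_* builders; this sketch is equivalent to passing description directly:

```r
# Attach a description after construction via the native pipe.
schema <- z_string() |> z_describe("The city name")
```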
Create Empty-Aware Schema Wrapper
Description
Wraps a schema to explicitly handle empty values.
Adds _empty metadata for frontend rendering decisions.
Usage
z_empty_aware(schema, empty_behavior = "skip")
Arguments
schema |
Base z_schema. |
empty_behavior |
How the frontend should handle empty values: "skip", "placeholder", or "inherit". |
Value
Enhanced z_schema.
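A short sketch using the documented arguments; how the frontend consumes the _empty metadata is up to the rendering layer:

```r
# Ask the frontend to render a placeholder when this field is empty.
note_field <- z_empty_aware(
  z_string(description = "Optional note"),
  empty_behavior = "placeholder"
)
```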
Create Empty Object Schema
Description
Create a JSON Schema for an empty object {}.
Usage
z_empty_object(description = NULL)
Arguments
description |
Optional description. |
Value
A z_schema object.
Create Enum Schema
Description
Create a JSON Schema for string enum type.
Usage
z_enum(values, description = NULL, nullable = FALSE, default = NULL)
Arguments
values |
Character vector of allowed values. |
description |
Optional description of the field. |
nullable |
If TRUE, allows null values. |
default |
Optional default value. |
Value
A list representing JSON Schema for enum.
Examples
z_enum(c("celsius", "fahrenheit"), description = "Temperature unit")
Facet Schema
Description
Schema for faceting.
Usage
z_facet()
Value
A z_object schema.
Build Geom-Specific Layer Schema
Description
Creates a precise schema for a specific geom type.
Usage
z_geom_layer(geom_name)
Arguments
geom_name |
Name of the geom. |
Value
A z_object schema tailored to the geom.
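For example, assuming standard ggplot2 geom names (without the "geom_" prefix) are accepted:

```r
# Schema constrained to the aesthetics and parameters of geom_point;
# the accepted naming convention is an assumption.
point_layer <- z_geom_layer("point")
```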
GGPlot Object Schema
Description
Top-level schema for a ggplot object.
Usage
z_ggplot()
Value
A z_object schema.
Guide Schema
Description
Schema for guides (legend/axis).
Usage
z_guide()
Value
A z_object schema.
Create Integer Schema
Description
Create a JSON Schema for integer type.
Usage
z_integer(
description = NULL,
nullable = FALSE,
default = NULL,
minimum = NULL,
maximum = NULL
)
Arguments
description |
Optional description of the field. |
nullable |
If TRUE, allows null values. |
default |
Optional default value. |
minimum |
Optional minimum value. |
maximum |
Optional maximum value. |
Value
A list representing JSON Schema for integer.
Examples
z_integer(description = "Number of items", minimum = 0)
Layer Schema
Description
Schema for a single ggplot2 layer.
Usage
z_layer()
Value
A z_object schema.
Create Number Schema
Description
Create a JSON Schema for number (floating point) type.
Usage
z_number(
description = NULL,
nullable = FALSE,
default = NULL,
minimum = NULL,
maximum = NULL
)
Arguments
description |
Optional description of the field. |
nullable |
If TRUE, allows null values. |
default |
Optional default value. |
minimum |
Optional minimum value. |
maximum |
Optional maximum value. |
Value
A list representing JSON Schema for number.
Examples
z_number(description = "Temperature value", minimum = -100, maximum = 100)
Create Object Schema
Description
Create a JSON Schema for object type. This is the primary schema builder for defining tool parameters.
Usage
z_object(
...,
.description = NULL,
.required = NULL,
.additional_properties = FALSE
)
Arguments
... |
Named arguments where names are property names and values are z_schema objects created by z_* functions. |
.description |
Optional description of the object. |
.required |
Character vector of required field names. If NULL (default), all fields are considered required. |
.additional_properties |
Whether to allow additional properties. Default FALSE. |
Value
A list representing JSON Schema for object.
Examples
z_object(
location = z_string(description = "City name, e.g., Beijing"),
unit = z_enum(c("celsius", "fahrenheit"), description = "Temperature unit")
)
Position Adjustment Schema
Description
Schema for position adjustments with type-specific parameters.
Usage
z_position(position_type = NULL)
Arguments
position_type |
Optional specific position type for strict schema. |
Value
A z_object schema.
Scale Schema
Description
Schema for a scale definition.
Usage
z_scale()
Value
A z_object schema.
Create String Schema
Description
Create a JSON Schema for string type.
Usage
z_string(description = NULL, nullable = FALSE, default = NULL)
Arguments
description |
Optional description of the field. |
nullable |
If TRUE, allows null values. |
default |
Optional default value. |
Value
A list representing JSON Schema for string.
Examples
z_string(description = "The city name")
Theme Schema
Description
Schema for theme settings with full hierarchy support.
Usage
z_theme(flat = TRUE)
Arguments
flat |
If TRUE, returns a flat structure; if FALSE, a hierarchical one. |
Value
A z_object schema.
Create Element Schema
Description
Creates a schema for a specific theme element type.
Usage
z_theme_element(element_type)
Arguments
element_type |
One of the THEME_ELEMENT_TYPES names. |
Value
A z_object schema.