Prompt Requests
Learn how agent prompts become normalized model requests.
Prompt requests are the boundary between your application and an agent run.
Most workflows start with:
```ts
const response = await agent.prompt("Summarize this support ticket.").send();
```

That line creates a PromptRequest, builds a normalized completion request, sends it to the model, runs tools if needed, and returns a final prompt response.
1. Start With a Prompt
The simplest prompt is a string:
```ts
await agent.prompt("Write a short onboarding checklist.").send();
```

Use a structured Message when you need explicit content parts:
```ts
import { Message, UserContent } from "@anvia/core";

await agent
  .prompt(
    Message.user([
      UserContent.text("Inspect this screenshot."),
      UserContent.imageUrl("https://example.com/screenshot.png", {
        detail: "auto",
      }),
    ]),
  )
  .send();
```

Support for images and documents depends on the provider and model you choose.
2. Add History When Needed
```ts
const history = await conversations.loadMessages(conversationId);

const response = await agent
  .prompt(userInput)
  .withHistory(history)
  .send();
```

History is not global state inside Anvia. You load it from your application and pass it into the request.
3. Override Runtime Options Per Prompt
Prompt requests can override run behavior without rebuilding the agent:
```ts
const response = await agent
  .prompt("Check this order.")
  .maxTurns(2)
  .withToolConcurrency(2)
  .send();
```

Use this when a single request needs a tighter turn limit, different tool concurrency, approval handling, hooks, or tracing.
4. Request Flow
When you call send(), Anvia builds a normalized completion request in this order:
- Start with the current prompt.
- Add any history from .withHistory(...).
- Add agent instructions.
- Add static context from .context(...).
- Fetch dynamic context if retrieval is configured.
- Add tool definitions from tools, skills, and MCP servers.
- Add runtime settings such as temperature, max tokens, tool choice, and output schema.
- Send the request to the completion model.
The model response can include text, reasoning, and tool calls. If tool calls are present, Anvia executes the tools and loops until the model returns final text or the turn limit is reached.
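The execute-and-loop behavior can be sketched as plain control flow. This is an illustration of the pattern, not Anvia's internal code; the ModelReply shape, callModel, and runTool are stand-in assumptions:

```typescript
// Illustrative shapes; Anvia's internal types will differ.
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelReply = { text?: string; toolCalls: ToolCall[] };

// Mock model: asks for a tool once, then returns final text.
async function callModel(transcript: string[]): Promise<ModelReply> {
  return transcript.some((m) => m.startsWith("tool:"))
    ? { text: "done", toolCalls: [] }
    : { text: undefined, toolCalls: [{ name: "lookup", args: {} }] };
}

// Mock tool executor; a real one would dispatch to registered tools.
async function runTool(call: ToolCall): Promise<string> {
  return `tool:${call.name}:ok`;
}

// The core loop: call the model, execute any requested tools,
// feed results back, and stop on final text or the turn limit.
async function runLoop(prompt: string, maxTurns = 4): Promise<string> {
  const transcript = [prompt];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await callModel(transcript);
    if (reply.toolCalls.length === 0) return reply.text ?? "";
    for (const call of reply.toolCalls) {
      transcript.push(await runTool(call));
    }
  }
  throw new Error("turn limit reached");
}
```

With the mock model above, runLoop("Check this order.") performs one tool turn and then resolves to "done"; a maxTurns of 1 would instead hit the turn limit.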
5. Stream Instead of Waiting
Use stream() when the UI or job runner needs incremental events:
```ts
for await (const event of agent.prompt("Draft a launch checklist.").stream()) {
  if (event.type === "text_delta") {
    process.stdout.write(event.delta);
  }
  if (event.type === "final") {
    console.log(event.output);
  }
}
```

Streaming emits normalized events for text deltas, reasoning deltas, tool calls, tool results, turn boundaries, final output, and errors.
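A switch over a discriminated union keeps event handling exhaustive as you cover more event kinds. The union below is a sketch based on the event kinds listed above; field names beyond `type` are assumptions, not Anvia's documented payloads:

```typescript
// Illustrative event union; check Anvia's types for the real payloads.
type AgentEvent =
  | { type: "text_delta"; delta: string }
  | { type: "reasoning_delta"; delta: string }
  | { type: "tool_call"; name: string }
  | { type: "tool_result"; name: string; result: unknown }
  | { type: "turn_end"; turn: number }
  | { type: "final"; output: string }
  | { type: "error"; message: string };

// Each case narrows `event`, so accessing per-variant fields is type-safe.
function describe(event: AgentEvent): string {
  switch (event.type) {
    case "text_delta":
      return `text: ${event.delta}`;
    case "reasoning_delta":
      return `reasoning: ${event.delta}`;
    case "tool_call":
      return `calling ${event.name}`;
    case "tool_result":
      return `${event.name} returned`;
    case "turn_end":
      return `turn ${event.turn} complete`;
    case "final":
      return `final: ${event.output}`;
    case "error":
      return `error: ${event.message}`;
  }
}

console.log(describe({ type: "final", output: "ok" })); // → final: ok
```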
Next
Read Messages and History to understand the Message[] shape used by prompts, history, and responses.
