# Testing your Agents (ADK TypeScript)
Before you deploy your agent, you should test it to ensure that it is working as intended. The easiest way to test your agent in your development environment is with the `adk-ts api_server` command. This command launches a local Express.js server, where you can run cURL commands or send API requests to test your agent.
## Local testing
Local testing involves launching a local API server, creating a session, and sending queries to your agent.
### 1. Directory Structure
Ensure you are in the correct working directory relative to your agent code. The testing commands often expect to be run from the parent directory containing your agent folder(s), or from within the agent folder itself if you pass `.` as the agent directory.
A common structure might be:
```
parent_folder/          <-- Run commands from here, specifying '--agent_dir my_sample_agent' or '.'
|- my_sample_agent/
|  |- src/
|  |  |- agent.ts       <-- Your main agent definition
|  |- .env              <-- Environment variables (optional)
|  |- package.json
|  |- tsconfig.json
```
Or, if running commands inside the agent folder:
```
my_sample_agent/        <-- Run commands from here, specifying '--agent_dir .'
|- src/
|  |- agent.ts
|- .env
|- package.json
|- tsconfig.json
```
### 2. Launch the Local Server
Navigate to your project's root or the appropriate directory and launch the local API server using the ADK TypeScript CLI command. You need to specify the directory containing your agent modules.
```shell
# If run from the parent_folder containing 'my_sample_agent':
adk-ts api_server --agent_dir my_sample_agent

# Or, if run from inside the 'my_sample_agent' directory:
adk-ts api_server --agent_dir .
```
The output should appear similar to:

```
API server started on port 8000
Agent directory: /path/to/your/project/parent_folder/my_sample_agent
```

Your server is now running locally, typically at `http://localhost:8000` (the default port can be changed with `--port`).
### 3. Create a new session
With the API server still running, open a new terminal window or tab and create a new session with the agent using `curl` or a similar tool:
```shell
curl -X POST http://localhost:8000/apps/my_sample_agent/users/u_123/sessions/s_123 \
  -H "Content-Type: application/json" \
  -d '{"state": {"key1": "value1", "key2": 42}}'
```
Let's break down what's happening:

- `http://localhost:8000/apps/my_sample_agent/users/u_123/sessions/s_123`: This API endpoint (matching the implementation in `apiServer.ts`) creates a new session for your agent `my_sample_agent` (which should match the folder name specified in `--agent_dir`), for a user ID (`u_123`) and a session ID (`s_123`).
- `{"state": {"key1": "value1", "key2": 42}}`: This optional JSON body sets the initial state for the session. The ADK TypeScript library uses a `State` class internally, but the API accepts a plain JavaScript object.

This should return the session information if it was created successfully. The output will be a JSON representation of the `Session` object (see `src/sessions/types.ts`):
```json
{
  "id": "s_123",
  "appName": "my_sample_agent",
  "userId": "u_123",
  "state": {
    "key1": "value1",
    "key2": 42
  },
  "events": []
}
```
Info

You cannot create multiple sessions with exactly the same `appName`, `userId`, and `sessionId`. If you try to, the API server (depending on the session service implementation) might return an error like `{"error":"Session already exists: s_123"}`. To fix this, you can either delete that session (if the API supports it) or choose a different `sessionId`.
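The curl call above can also be scripted from TypeScript. The sketch below is illustrative only, not part of the ADK library: the `FetchLike` type, `sessionPath`, and `createSession` names are assumptions made for this example, and the request shape mirrors the curl command.

```typescript
// Sketch only: a small client for the session-creation endpoint shown above.
// All helper names here are hypothetical, not ADK TypeScript library APIs.

// Minimal fetch-shaped type so the helper works with global fetch (Node 18+)
// or any compatible HTTP client passed in for testing.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string }
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

// Build the session endpoint path from its three identifiers.
function sessionPath(appName: string, userId: string, sessionId: string): string {
  return `/apps/${appName}/users/${userId}/sessions/${sessionId}`;
}

// POST the optional initial state to create the session.
async function createSession(
  fetchFn: FetchLike,
  appName: string,
  userId: string,
  sessionId: string,
  state: Record<string, unknown> = {}
): Promise<unknown> {
  const res = await fetchFn(
    `http://localhost:8000${sessionPath(appName, userId, sessionId)}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ state }),
    }
  );
  if (!res.ok) throw new Error(`Session creation failed: ${res.status}`);
  return res.json();
}
```

With a running server, `createSession(fetch, "my_sample_agent", "u_123", "s_123", { key1: "value1", key2: 42 })` would mirror the curl command above.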
### 4. Send a query
There are two ways to send queries via POST to your agent: the `/run` and `/run_sse` routes, similar to the Python version.

- `POST http://localhost:8000/run`: Collects all events generated during the agent's turn and returns them as a JSON array in the response body.
- `POST http://localhost:8000/run_sse`: Returns a stream of Server-Sent Events (SSE). Each event object is sent as soon as it is generated by the agent, making it suitable for real-time updates. With `/run_sse`, you can also set `"streaming": true` in the request body to enable token-level streaming from the LLM (if the underlying model and flow support it).
#### Using `/run`

Send a POST request with your query:
```shell
curl -X POST http://localhost:8000/run \
  -H "Content-Type: application/json" \
  -d '{
    "appName": "my_sample_agent",
    "userId": "u_123",
    "sessionId": "s_123",
    "newMessage": {
      "role": "user",
      "parts": [{
        "text": "Hey whats the weather in new york today"
      }]
    }
  }'
```
(Note: the keys in the JSON body are `appName`, `userId`, `sessionId`, and `newMessage`.)
The response will be a JSON array containing all the `Event` objects generated during that turn. Each event object follows the structure defined in `src/events/Event.ts`:
```json
[
  {
    "invocationId": "inv-abcdef12",
    "author": "weather_agent_v1",
    "actions": {
      "stateDelta": {},
      "artifactDelta": {},
      "requestedAuthConfigs": {}
    },
    "id": "Evt1AbCd",
    "timestamp": 1710000100.123,
    "content": {
      "role": "model",
      "parts": [
        {
          "functionCall": {
            "name": "getWeather",
            "args": { "city": "new york" },
            "id": "adk-uuid-..."
          }
        }
      ]
    },
    "longRunningToolIds": []
  },
  {
    "invocationId": "inv-abcdef12",
    "author": "weather_agent_v1",
    "actions": {
      "stateDelta": {},
      "artifactDelta": {},
      "requestedAuthConfigs": {}
    },
    "id": "Evt2EfGh",
    "timestamp": 1710000101.456,
    "content": {
      "role": "user",
      "parts": [
        {
          "functionResponse": {
            "name": "getWeather",
            "response": {
              "status": "success",
              "report": "The weather in New York is sunny with a temperature of 25°C."
            },
            "id": "adk-uuid-..."
          }
        }
      ]
    }
  },
  {
    "invocationId": "inv-abcdef12",
    "author": "weather_agent_v1",
    "actions": {
      "stateDelta": {
        "last_weather_report": "The weather in New York is sunny with a temperature of 25°C."
      },
      "artifactDelta": {},
      "requestedAuthConfigs": {}
    },
    "id": "Evt3IjKl",
    "timestamp": 1710000102.789,
    "content": {
      "role": "model",
      "parts": [
        {
          "text": "The weather in New York is sunny with a temperature of 25°C."
        }
      ]
    },
    "partial": false,
    "turnComplete": false
  }
]
```
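When consuming the `/run` response programmatically, you usually want the agent's final text reply out of the events array. A minimal sketch of that extraction follows; the `AdkEvent` interface below is a simplified stand-in for the full `Event` structure, declared only for this example.

```typescript
// Sketch: pull the final text reply out of a /run response array.
// AdkEvent is a deliberately minimal structural type, not the library's Event.
interface AdkEvent {
  author: string;
  partial?: boolean;
  content?: { role: string; parts: Array<{ text?: string }> };
}

// Return the text of the last complete (non-partial) model event that
// contains a text part, or undefined if no such event exists.
function finalText(events: AdkEvent[]): string | undefined {
  for (let i = events.length - 1; i >= 0; i--) {
    const ev = events[i];
    if (ev.partial) continue; // skip token-streaming fragments
    const text = ev.content?.parts?.find((p) => p.text !== undefined)?.text;
    if (text !== undefined && ev.content?.role === "model") return text;
  }
  return undefined;
}
```

Applied to the example array above, this would return the weather report string from the third event, skipping the function-call and function-response events.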
#### Using `/run_sse`
```shell
curl -X POST http://localhost:8000/run_sse \
  -H "Content-Type: application/json" \
  -d '{
    "appName": "my_sample_agent",
    "userId": "u_123",
    "sessionId": "s_123",
    "newMessage": {
      "role": "user",
      "parts": [{
        "text": "Hey whats the weather in new york today"
      }]
    },
    "streaming": false
  }'
```
You can set `"streaming": true` to attempt token-level streaming from the LLM. The output will be a stream of Server-Sent Events:
```
data: {"invocationId":"inv-abcdef12","author":"weather_agent_v1", ... ,"content":{"role":"model","parts":[{"functionCall":{...}}]}}

data: {"invocationId":"inv-abcdef12","author":"weather_agent_v1", ... ,"content":{"role":"user","parts":[{"functionResponse":{...}}]}}

data: {"invocationId":"inv-abcdef12","author":"weather_agent_v1", ... ,"content":{"role":"model","parts":[{"text":"The weather in New York is sunny with a temperature of 25°C."}]}}
```
Info

With `/run_sse`, each `data:` line represents a complete JSON `Event` object sent as soon as it's available from the agent. If token streaming (`"streaming": true`) is enabled, you might receive multiple events with `partial: true` for text content before the final non-partial text event.
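If you consume `/run_sse` from code rather than curl, each `data:` line of the response body can be parsed back into an event object. A minimal sketch, assuming the one-JSON-object-per-`data:`-line framing shown above (the helper name is illustrative):

```typescript
// Sketch: parse an SSE response body from /run_sse into event objects.
// Assumes each `data:` line carries one complete JSON event, as in the
// stream shown above; blank lines and other SSE fields are ignored.
function parseSseEvents(body: string): unknown[] {
  return body
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => JSON.parse(line.slice("data:".length).trim()));
}
```

A real client would parse incrementally as chunks arrive rather than buffering the whole body, but the per-line logic is the same.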
## Integrations
ADK TypeScript uses callbacks (such as `beforeModelCallback`, `afterModelCallback`, `beforeToolCallback`, and `afterToolCallback`) to hook into the agent execution lifecycle. The library also includes basic OpenTelemetry tracing capabilities (see `src/telemetry.ts`).

These mechanisms allow integration with third-party observability tools. While specific integrations like Comet Opik aren't explicitly built into this codebase version, the callback and tracing foundation lets you capture detailed traces of agent calls and interactions for debugging, evaluation, and understanding behavior. You can implement custom callbacks to send data to your preferred observability platform.
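As a sketch of the callback approach, the hook below records each outgoing model request into an in-memory buffer. The exact ADK TypeScript callback signatures are defined by the library; the parameter type, `recordTrace` sink, and buffer here are hypothetical stand-ins for whatever your observability platform expects.

```typescript
// Sketch only: a beforeModelCallback-style observability hook.
// Types and the recordTrace sink are illustrative, not ADK APIs.
interface TraceRecord {
  timestamp: number;
  kind: "model_request" | "tool_call";
  payload: unknown;
}

const traceBuffer: TraceRecord[] = [];

// Stand-in for shipping data to your observability platform; a real
// implementation might batch and POST these records to a collector.
function recordTrace(kind: TraceRecord["kind"], payload: unknown): void {
  traceBuffer.push({ timestamp: Date.now(), kind, payload });
}

// Log the outgoing request, then return undefined so the agent
// proceeds with the model call unmodified.
function myBeforeModelCallback(request: unknown): undefined {
  recordTrace("model_request", request);
  return undefined;
}
```

The same pattern applies to tool callbacks: capture the payload, forward it to your tracing backend, and return `undefined` to leave agent behavior unchanged.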
## Deploying your agent
Now that you've verified the local operation of your agent, you're ready to move on to deploying it! Here are some ways you can deploy your ADK TypeScript agent:

- Deploy to Agent Engine on Vertex AI (if compatible): Check the official Agent Engine documentation for compatibility with custom ADK TypeScript agents.
- Deploy to Cloud Run: Use the `adk-ts deploy cloud_run` command (see `src/cli/cliDeploy.ts`) to containerize and deploy your agent as a serverless application on Google Cloud, giving you full control over scaling and management.