When to Use Custom Logging
Use the `log_request` method when:
- You’re not using PromptLayer’s proxied versions of OpenAI or Anthropic clients
- You’re not using `pl_client.run()` for executing prompts
- You need more flexibility (e.g., background processing, custom models)
- You want to track requests made outside the PromptLayer SDK
API Reference
For complete documentation on the `log_request` API, see the Log Request API Reference.
Request Parameters
When logging a custom request, you can use the following parameters (see API Reference for details):
- `provider` (required): The LLM provider name (e.g., “openai”, “anthropic”)
- `model` (required): The specific model used (e.g., “gpt-4o”, “claude-3-7-sonnet-20250219”)
- `input` (required): The input prompt in Prompt Blueprint format
- `output` (required): The model response in Prompt Blueprint format
- `request_start_time`: Timestamp when the request started
- `request_end_time`: Timestamp when the response was received
- `prompt_name`: Name of the prompt template, if using one from PromptLayer
- `prompt_id`: Unique identifier for the prompt template
- `prompt_version_number`: Version number of the prompt template
- `prompt_input_variables`: Variables used in the prompt template
- `input_tokens`: Number of input tokens used
- `output_tokens`: Number of output tokens generated
- `tags`: Array of strings for categorizing requests
- `metadata`: Custom JSON object for searching and filtering requests later
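Putting these parameters together, a call might be assembled as in the sketch below. The keyword names mirror the parameter list above, but all values are made-up placeholders, and the final `log_request` call itself (which requires a configured PromptLayer client and API key) is left as a comment:

```python
import time

# Sketch only: placeholder values for illustration.
request_start_time = time.time()
# ... your own LLM call would happen here ...
request_end_time = time.time()

log_request_kwargs = {
    # Required
    "provider": "openai",
    "model": "gpt-4o",
    "input": {"type": "chat", "messages": []},   # Prompt Blueprint format
    "output": {"type": "chat", "messages": []},  # Prompt Blueprint format
    # Optional
    "request_start_time": request_start_time,
    "request_end_time": request_end_time,
    "input_tokens": 12,
    "output_tokens": 48,
    "tags": ["docs-example", "custom-logging"],
    "metadata": {"environment": "dev", "feature": "search"},
}

# With a configured client, the request would then be logged as:
# pl_client.log_request(**log_request_kwargs)
```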
Basic Usage
The `input` and `output` must be in Prompt Blueprint format:


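As a sketch of what that looks like, the dictionaries below use an assumed chat-style Prompt Blueprint shape (a `type` field plus a list of `messages` with text content parts); the exact schema is defined by PromptLayer, so check the API Reference before relying on these field names. The SDK call is shown as a comment so the snippet stays runnable offline:

```python
# Assumed Prompt Blueprint chat shape -- verify against the API Reference.
input_blueprint = {
    "type": "chat",
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "What is the capital of France?"}],
        }
    ],
}

output_blueprint = {
    "type": "chat",
    "messages": [
        {
            "role": "assistant",
            "content": [{"type": "text", "text": "The capital of France is Paris."}],
        }
    ],
}

# With the PromptLayer SDK installed and an API key set, the request
# would then be logged roughly like this:
#
# from promptlayer import PromptLayer
# pl_client = PromptLayer(api_key="pl_...")
# pl_client.log_request(
#     provider="openai",
#     model="gpt-4o",
#     input=input_blueprint,
#     output=output_blueprint,
# )
```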