Given a prompt ID, variable values, and optionally any hyperparameters, this API returns a JSON object containing the raw prompt template.
Note: Unlike inference requests, Prompt Render API calls are processed through Portkey’s Control Plane services.
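To make the request shape concrete, here is a minimal sketch of assembling a Prompt Render call. The endpoint path (`/v1/prompts/{prompt_id}/render`), the `x-portkey-api-key` header, and the request body layout are assumptions for illustration; check Portkey's API reference for the exact names.

```python
# Sketch of building a Prompt Render API request (not sending it).
# Endpoint path, header name, and body shape are assumptions.
PORTKEY_BASE = "https://api.portkey.ai/v1"

def build_render_request(prompt_id, variables, api_key, **hyperparameters):
    """Assemble the URL, headers, and JSON body for a render call."""
    return {
        "url": f"{PORTKEY_BASE}/prompts/{prompt_id}/render",
        "headers": {
            "x-portkey-api-key": api_key,  # assumed header name
            "Content-Type": "application/json",
        },
        # Variable values fill the template's placeholders; any
        # hyperparameters passed here override those saved with the prompt.
        "json": {"variables": variables, **hyperparameters},
    }

req = build_render_request("pp-my-prompt", {"user_input": "Hello"},
                           "YOUR_PORTKEY_API_KEY", temperature=0.2)
# To actually send it:
# requests.post(req["url"], headers=req["headers"], json=req["json"])
```

Because render calls go through the Control Plane rather than the inference path, you would typically render once and reuse the returned template for one or more inference requests.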
Example: Using Prompt Render output in a new request
Here’s how you can take the output from the render API and use it to make a separate LLM call. We’ll use the OpenAI SDK as an example, but you can do the same with other frameworks such as LangChain.
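As a sketch, the steps above can look like the following. The shape of the render response (messages and hyperparameters under a `data` key) is an assumption for illustration, so the example converts a sample response into keyword arguments for an OpenAI-style chat completion call rather than hard-coding a real payload.

```python
# Hypothetical render response shape (an assumption, not copied from
# Portkey's docs): rendered messages plus stored hyperparameters.
SAMPLE_RENDER_RESPONSE = {
    "success": True,
    "data": {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize this text for me."},
        ],
        "model": "gpt-4o",
        "temperature": 0.7,
    },
}

def to_chat_request(render_response):
    """Turn a render response into kwargs for an OpenAI-style chat call."""
    data = render_response["data"]
    kwargs = {"model": data["model"], "messages": data["messages"]}
    if "temperature" in data:
        kwargs["temperature"] = data["temperature"]
    return kwargs

request_kwargs = to_chat_request(SAMPLE_RENDER_RESPONSE)
# With the OpenAI SDK, the separate LLM call would then be:
# client = OpenAI()
# completion = client.chat.completions.create(**request_kwargs)
```

The same `request_kwargs` dict maps naturally onto other frameworks too, since most accept a list of role/content messages plus model parameters.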
