Azure OpenAI is a cloud-based integration that can help you build intelligent applications by leveraging powerful language models.
Torq enables quick and easy integration with Azure OpenAI, so you can automate anything and everything within moments. Torq's public Azure OpenAI steps include:
Create Chat Completion
Create Completion
List Deployments
List Models
Create Embedding
If you don't see a step you need, you can create your own in various ways, such as using the Send an HTTP Request step or Torq’s Step Builder, and share it across your organization.
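As a point of reference, a Create Chat Completion step ultimately sends a JSON body like the one sketched below. The field names follow the Azure OpenAI chat completions schema; the message contents and parameter values are illustrative only.

```python
import json

# Illustrative Create Chat Completion request body.
# Field names follow the Azure OpenAI chat completions schema;
# the message contents and parameter values are examples.
body = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this alert in one sentence."},
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

print(json.dumps(body, indent=2))
```

The same shape applies if you build a custom call with the Send an HTTP Request step instead of the public step.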
Prerequisites
Confirm Azure access: Ensure your Azure subscription is approved for Azure OpenAI.
Decide on setup parameters: Choose the subscription, resource group, region, and network access (public access is typically required unless private networking is configured).
Create the Azure OpenAI resource
Create the resource: In the Azure Portal, click Create a resource, search for Azure OpenAI, and start the setup.
Configure basics: In the Basics tab, select the subscription, resource group, and region, set a resource name, and choose a pricing tier.
Configure network: In the Network tab, complete networking settings as needed.
(Optional) Add resource tags: In the Tags tab, add key–value pairs to identify ownership, environment, or cost center.
Review and create: Click Review + Create and open the resource once provisioning finishes.
Resulting endpoint: The resource endpoint will follow this format: https://<YOUR_RESOURCE_NAME>.openai.azure.com/
Retrieve the API key and endpoint resource
Open the Keys and Endpoint section: In the Azure OpenAI resource, navigate to Resource Management > Keys and Endpoint.
Copy required values: Copy Key 1 (or Key 2) and the Endpoint URL.
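If you want to confirm the key and endpoint work together before configuring Torq, a quick check is to call the List Models data-plane endpoint. The sketch below only builds the request; the endpoint, key, and api-version values are placeholders to replace with your own.

```python
import urllib.request

# Placeholders -- replace with the values copied from
# Resource Management > Keys and Endpoint.
endpoint = "https://<YOUR_RESOURCE_NAME>.openai.azure.com"
api_key = "<YOUR_API_KEY>"

# List Models is a simple way to verify the key/endpoint pair.
# Azure OpenAI authenticates with an api-key header, not a Bearer token.
req = urllib.request.Request(
    f"{endpoint}/openai/models?api-version=2024-06-01",
    headers={"api-key": api_key},
)
# urllib.request.urlopen(req)  # uncomment once real values are in place
```

A 200 response with a model list confirms the pair is valid; a 401 usually means the key was copied incorrectly.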
Create an Azure OpenAI step integration in Torq
Navigate to the integration: Go to Build > Integrations > Steps > Microsoft Azure OpenAI and click Add Instance.
Fill in the details:
Give the integration a unique and meaningful name.
Paste the Microsoft Azure API Key (Key 1 or Key 2) you copied earlier into the Azure API Key field.
Paste the Endpoint Resource name (the <YOUR_RESOURCE_NAME> portion of the Endpoint URL) that you copied previously.
Finalize: Click Add.
Create a model deployment
Open Azure AI Foundry: Go to https://ai.azure.com and select your Azure OpenAI resource.
Deploy a model: Navigate to Deployments, click + Deploy model, then choose Deploy base model.
Configure deployment: Select a base model and assign a custom deployment name.
Deploy the model: Complete the deployment.
Azure API calls reference the deployment name, not the base model name.
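To make this concrete, the sketch below shows how the data-plane URL is formed: the path contains the deployment name you chose, never the underlying base model name. The resource name, deployment name, and api-version here are examples, not values from your environment.

```python
# Sketch of the Azure OpenAI data-plane URL. Note that the path carries
# the custom deployment name, not the base model name.
def chat_completions_url(resource_name: str, deployment_name: str,
                         api_version: str = "2024-06-01") -> str:
    return (
        f"https://{resource_name}.openai.azure.com"
        f"/openai/deployments/{deployment_name}"
        f"/chat/completions?api-version={api_version}"
    )

# "torq-chat-prod" is an example deployment name, not a base model name.
print(chat_completions_url("aoai-torq-prod-eastus", "torq-chat-prod"))
```

If a call returns a "deployment not found" error, the most common cause is passing the base model name where the deployment name belongs.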
Capture the AI model name
Open deployment details: Select the newly created deployment.
Copy the model name: Locate and copy the Model value exactly as shown.
Required values: You will need both the deployment name and the AI model name for Torq.
Final checklist for Torq
Verify you have the following values:
Microsoft Azure API Key
Endpoint Resource
Deployment name
AI model name
Recommended naming conventions
Use consistent naming:
Resource name: aoai-torq-prod-<region>
Deployment name: torq-chat-prod or torq-embeddings-prod
AI model name: Copy exactly from Azure (do not modify)
Once setup is complete, you can start using Azure OpenAI steps in your workflows. With the required values configured and a model deployed, Torq can securely connect to Azure OpenAI, invoke the appropriate deployments, and scale automation reliably across environments.

