Azure OpenAI

Integrate Azure OpenAI with Torq to use Azure OpenAI steps and automate security workflows.

Azure OpenAI is a cloud-based service that helps you build intelligent applications by leveraging powerful language models.

Torq enables quick and easy integration with Azure OpenAI, so you can automate anything and everything within moments. Torq's public Azure OpenAI steps include:

  • Create Chat Completion

  • Create Completion

  • List Deployments

  • List Models

  • Create Embedding

If you don't see a step you need, you can create your own in various ways, such as using the Send an HTTP Request step or Torq’s Step Builder, and share it across your organization.

For details on using Azure OpenAI steps in Torq workflows, see the Torq workflow documentation.

Prerequisites

  • Confirm Azure access: Ensure your Azure subscription is approved for Azure OpenAI.

  • Decide on setup parameters: Choose the subscription, resource group, region, and network access (public access is typically required unless private networking is configured).

Create the Azure OpenAI resource

  1. Create the resource: In the Azure Portal, click Create a resource, search for Azure OpenAI, and start the setup.

  2. Configure basics: In the Basics tab, select the subscription, resource group, and region, set a resource name, and choose a pricing tier.

  3. Configure network: In the Network tab, complete networking settings as needed.

  4. (Optional) Add resource tags: In the Tags tab, add key–value pairs to identify ownership, environment, or cost center.

  5. Review and create: Click Review + Create and open the resource once provisioning finishes.
    Resulting endpoint: The resource endpoint will follow this format: https://<YOUR_RESOURCE_NAME>.openai.azure.com/
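
The endpoint can also be derived programmatically from the resource name. A minimal sketch (the resource name `my-aoai-resource` is a placeholder, not a value from this article):

```python
# Build the Azure OpenAI endpoint URL from a resource name.
# "my-aoai-resource" is a placeholder; substitute your own resource name.
def endpoint_for(resource_name: str) -> str:
    return f"https://{resource_name}.openai.azure.com/"

print(endpoint_for("my-aoai-resource"))
# https://my-aoai-resource.openai.azure.com/
```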

Retrieve the API key and endpoint resource

  1. Open the Keys and Endpoint section: In the Azure OpenAI resource, navigate to Resource Management > Keys and Endpoint.

  2. Copy required values: Copy Key 1 (or Key 2) and the Endpoint URL.

Create an Azure OpenAI step integration in Torq

  1. Navigate to the integration: Go to Build > Integrations > Steps > Microsoft Azure OpenAI and click Add Instance.

  2. Fill in the details:

    1. Give the integration a unique and meaningful name.

    2. Paste the Microsoft Azure API Key (Key 1 or Key 2) you copied earlier into the Azure API Key field.

    3. Paste the Endpoint Resource name from the Endpoint URL that you copied previously.

  3. Finalize: Click Add.
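
The Endpoint Resource name is the subdomain portion of the Endpoint URL. A sketch of extracting it, assuming the standard `<resource>.openai.azure.com` endpoint format (the URL shown is a placeholder):

```python
from urllib.parse import urlparse

# Extract the resource name (the subdomain) from an Azure OpenAI endpoint URL.
# The URL used below is a placeholder; use the Endpoint you copied from Azure.
def resource_name_from(endpoint_url: str) -> str:
    host = urlparse(endpoint_url).hostname  # e.g. my-aoai-resource.openai.azure.com
    return host.split(".")[0]

print(resource_name_from("https://my-aoai-resource.openai.azure.com/"))
```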

Create a model deployment

  1. Open Azure AI Foundry: Go to https://ai.azure.com and select your Azure OpenAI resource.

  2. Deploy a model: Navigate to Deployments, click + Deploy model, then choose Deploy base model.

  3. Configure deployment: Select a base model and assign a custom deployment name.

  4. Deploy the model: Complete the deployment.

Azure API calls reference the deployment name, not the base model name.
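
To illustrate, the request path for a chat completion is built from the deployment name; the base model name appears only in the deployment's configuration. A sketch of constructing such a URL (the resource name, deployment name, and `api-version` value are all placeholders; API versions change over time, so check Azure's documentation for a current one):

```python
# Construct a chat-completions request URL for an Azure OpenAI deployment.
# Note the path contains the DEPLOYMENT name, not the base model name.
def chat_completions_url(resource: str, deployment: str, api_version: str) -> str:
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

print(chat_completions_url("my-aoai-resource", "torq-chat-prod", "2024-02-01"))
```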

Capture the AI model name

  1. Open deployment details: Select the newly created deployment.

  2. Copy the model name: Locate and copy the Model value exactly as shown.

Required values: You will need both the deployment name and the AI model name for Torq.

Final checklist for Torq

Verify you have the following values:

  • Microsoft Azure API Key

  • Endpoint Resource

  • Deployment name

  • AI model name
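
As a sanity check, the checklist values map onto a request like this. The sketch below only assembles the URL, headers, and body without sending anything; every value is a placeholder for one you gathered above:

```python
import json

# Assemble (but do not send) a chat-completion request from the checklist values.
# All four values are placeholders; substitute the ones you collected.
api_key = "<MICROSOFT_AZURE_API_KEY>"
endpoint_resource = "my-aoai-resource"
deployment_name = "torq-chat-prod"
api_version = "2024-02-01"  # placeholder; use a current API version

url = (
    f"https://{endpoint_resource}.openai.azure.com/openai/deployments/"
    f"{deployment_name}/chat/completions?api-version={api_version}"
)
headers = {"api-key": api_key, "Content-Type": "application/json"}
body = json.dumps({"messages": [{"role": "user", "content": "Hello"}]})
```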

Recommended naming conventions

Use consistent naming:

  • Resource name: aoai-torq-prod-<region>

  • Deployment name: torq-chat-prod or torq-embeddings-prod

  • AI model name: Copy exactly from Azure (do not modify)

Once setup is complete, you can start using Azure OpenAI steps in your workflows. With the required values configured and a model deployed, Torq can securely connect to Azure OpenAI, invoke the appropriate deployments, and scale automation reliably across environments.
