AI Agent Instructions define the role, objective, and scope of your AI Agent in plain language. This helps align the Agent’s reasoning with your organizational needs and ensures consistency across different workflows. Think of it as the Agent’s “job description”: clear, purposeful, and detailed enough to guide its behavior effectively.
In Torq, this is done by filling out the Agent Instructions, where you describe the Agent’s mission in natural language. A well-written mission statement improves the Agent’s ability to choose tools, interpret context, and make decisions aligned with your objectives.
Keep in mind these instructions are general guidelines and suggested starting points, meant to be tailored and refined to match your organization’s specific needs and policies.
Why it matters
Clarity of Purpose: Gives the Agent clear direction about what it is supposed to accomplish.
Scope Definition: Prevents the Agent from attempting tasks outside of its intended role.
Context for Reasoning: Improves accuracy when selecting tools or interpreting ambiguous instructions.
How to write strong agent instructions
When filling out the Agent Instructions, we recommend covering the following elements:
Role: Define the identity or persona of the Agent. This helps frame its behavior and decision-making style. Keep it short and explicit. Think of this as the Agent’s title or position inside your team.
Example: “An AI SOC Analyst specializing in suspicious login investigations.”
Example: “A Cloud Security Assistant focused on identifying misconfigurations.”
Capabilities: Describe the core outcome the Agent should deliver. Be action-oriented and focus on measurable, tangible results. The objective should answer the question: “What does success look like for this Agent?”
Example: “Investigate alerts, enrich observables, and provide remediation recommendations.”
Example: “Respond to phishing incidents by collecting context, confirming with the user, and escalating if needed.”
Rules and Constraints: Outline the boundaries, platforms, and situations where the Agent should (and should not) act. This keeps it from overstepping into areas it’s not intended for. The scope ensures the Agent’s work is targeted and consistent, avoiding drift into irrelevant or unapproved tasks.
Example: “Focus only on endpoint security incidents in CrowdStrike and Okta environments.”
Example: “Assist with IAM-related alerts but do not take action on network or infrastructure logs.”
Guidance: Tell the Agent how to perform its role and complete its tasks. While the Role and Capabilities define what the Agent is and what it should do, Guidance shapes how it should behave, interact, and sequence its actions.
Example: Start by summarizing the alert in one sentence. Then enrich any available IOCs using the configured tools. Provide a short conclusion with severity, and format the output in Markdown. Keep the tone professional and concise.
Context: The specific case or event details the Agent should act on. Providing concrete details (date, file, user, platform, policy, etc.) lets the Agent tailor its investigation and communication accordingly.
Example: You are investigating a DLP alert triggered on March 12, 2025, at 10:45 AM UTC. The alert flagged a file named customer_data_export.csv that was shared externally with externaluser@gmail.com. The source system is Microsoft OneDrive, and the policy that triggered the alert was “Sensitive Data Shared Externally”.
These are suggestions. You don’t need to provide information for every sub-section, only the ones that make sense for your Agent.
Tips and best practices
When drafting Agent Instructions, keep them simple, practical, and aligned with your team’s real-world workflows. Well-written Instructions make your AI Agent clear and effective.
Keep it natural: Use plain, conversational language and avoid jargon.
Use clear verbs: Frame actions with strong words like investigate, enrich, notify, escalate.
Define role, goal, and scope: Always specify what the Agent is, what it should achieve, and where its boundaries lie. Skipping one often leads to vague or unpredictable behavior.
Be specific: Call out the exact systems, workflows, or tasks the Agent should focus on.
Avoid ambiguity: Don’t leave the Agent guessing about priorities or responsibilities.
Add examples: Show what the Agent should do and how outputs should look. This sets clear expectations, reduces errors, and makes instructions easier to debug, reuse, and optimize.
Align with real processes: Make sure the Agent fits into how your team already works instead of creating extra steps.
Be concise and precise: Good grammar and detail go a long way in helping the Agent understand your intent.
Keep it simple: No need for ‘please’ or ‘thank you’. While polite, they don’t help the AI and can make Agent Instructions a bit less clear or efficient.
Maintain a professional tone: Keep it neutral and factual; avoid small talk or off-topic comments.
Using guardrails
Guardrails are clear, explicit instructions in your prompt that shape and control the model’s behavior. They help prevent hallucinations, minimize unnecessary verbosity, and ensure outputs remain consistent, reliable, and safe for downstream automation.
Use best-effort controls to prevent hallucinations
Instruct the model to admit uncertainty when applicable:
If you are not sure about something, respond with: ‘I don’t know.’
Only answer if you have high confidence; otherwise, say ‘I cannot determine that.’
Example
You are a cybersecurity assistant. Only respond with a CVE if you're certain. If you can't identify the vulnerability, say: ‘No match found.’
Control output length
Limit how much the model says; this is especially important when generating Slack messages, Jira updates, or logs.
Respond in no more than 100 words.
Summarize in 2–3 bullet points max.
Return only a single paragraph.
Example
Summarize this incident report in under 3 bullet points for SOC review. Be concise and clear, and avoid speculation.
Enforce output format
Help downstream tools parse the output reliably:
Respond in JSON with these fields: severity, recommendation.
Use this Markdown template: ### Summary | ### Action Items.
Do not include any commentary outside the structured output.
Example
Return the result in strict JSON format:
{ "threat_level": "low", "actions": ["Monitor login attempts"] }
Do not add explanations outside the JSON block.
Control tone and formality
Keep tone appropriate for internal teams or end users:
Use a formal tone suitable for a SOC analyst.
Be neutral and professional; avoid exclamation marks.
Example
Draft a Slack message about this alert. Be neutral and factual, not alarming. Avoid phrases like ‘critical failure’ unless severity = high.
Creating goal-oriented agents
Build AI Agents that are finely tuned to your organization’s unique needs, processes, and terminology. Instead of relying on generic behavior, you can guide the Agent to operate within the context of your environment. By focusing on a clear goal, such as SOC operations (e.g., triaging alerts and enriching observables), IT automation (e.g., managing user accounts or provisioning resources), or compliance (e.g., auditing access logs and generating reports), you ensure that the Agent is equipped with:
Relevant tools that connect directly to the workflows and systems your teams rely on.
Contextual knowledge that helps the Agent understand how tasks fit into broader processes.
Behavioral guidelines that shape how the Agent responds, prioritizes, and interacts with users.
Example
Instead of one Agent that “handles alert triage,” break it into three Agents:
Enrichment Agent
Communication Agent
SOC Interaction Agent
Instead of one Phishing Agent, create:
Suspicious Intent Analyzer
IOCs Analyzer
Attachments Analyzer
SOC Interaction Agent
How to use
Select and Open an AI Agent: Choose the AI Agent you want to add instructions for, then click Configure Agent to start setup.
Write the Agent Instructions: In the Instructions tab, define the Agent’s role, goal, and scope using the guidelines above.
Click Save to finish.
Example
This section outlines how to configure a DLP Communication Security Agent in Torq. The agent’s purpose is to interact with users when a Data Loss Prevention (DLP) alert is triggered, gather context around the incident, and determine whether the activity poses a risk.
Role
You are a DLP Communication Security Agent. Your job is to engage with the source user of a Data Loss Prevention (DLP) alert, investigate their intent, and determine whether the event is legitimate or poses a security risk.
Capabilities
Introduce yourself to the user and explain the alert details (date, what was shared, destination, and why it may be risky).
Investigate the incident by asking up to three targeted clarification questions, one at a time, and stop early if the explanation is justified.
Summarize the user’s explanation and assess whether risk remains.
Conclude with a professional closing message thanking the user.
Guidance
Introduction: Introduce yourself as: “I’m Torq, the AI Agent of the Security team at Acme Corp.”
Explain the alert clearly: Include details such as:
Date and time of the alert
File(s) shared (if any)
Who the file was shared with
Why the action could be risky
Investigate with respectful questions: Examples include:
Confirming if they performed the action
Asking for their business justification for sharing the file
Clarifying intent, data sensitivity, or whether the sharing was meant to be external
Summarize neutrally: Provide a balanced summary of the user’s explanation and whether the intent appears justified.
Closing: End with: “Thank you for your cooperation; your honest response is appreciated.”
Rules and constraints
Ask no more than 3 questions, one at a time.
Always maintain confidentiality, professionalism, and a respectful tone.
Context
Here is the alert to investigate: {{ $.event }}
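At runtime, {{ $.event }} resolves to the triggering event data. Purely as an illustration, reusing the details from the DLP example earlier in this article, the resolved payload could resemble the following (the field names here are hypothetical, not an actual alert schema):

{
  "alert_time": "2025-03-12T10:45:00Z",
  "file_name": "customer_data_export.csv",
  "shared_with": "externaluser@gmail.com",
  "source_system": "Microsoft OneDrive",
  "policy": "Sensitive Data Shared Externally"
}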

