Transform Your SOC with Responsible AI

Discover how Torq's Socrates AI Analyst ensures trust through responsible use and transparency.


Socrates is designed with transparency and security at the core of every action it takes. As with any analyst, periodically reviewing Socrates' work is essential to maintaining accuracy and reliability. To support this, every action Socrates takes is recorded and easily trackable in the same location as human analysts' actions, so you can use Socrates in your business operations with confidence.

Socrates Training

Torq does not and will never use customer data to train the AI model behind Socrates. We adhere strictly to industry standards to protect your data: it is neither stored nor used for training. The LLM that Socrates uses is accessed through private accounts. For more details, refer to the Torq AI Terms.

Socrates Auditing and Monitoring

Every action Socrates performs is logged in the audit log, with Socrates identified as the actor and the instructing user recorded as the requesting actor—just like any analyst action. Additionally, all conversations with Socrates are saved within the relevant case context, ensuring full transparency and traceability. Learn more about monitoring Socrates.
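As a rough illustration of this model, the sketch below filters a set of audit entries down to the ones where Socrates was the actor while keeping the requesting user attached. The field names (actor, requesting_actor, action) and the sample values are assumptions made for the example, not Torq's actual audit log schema.

```python
# Illustrative sketch only: field names and values are assumed for the
# example and do not reflect Torq's real audit log schema. It shows the
# idea of reviewing Socrates' entries alongside human analysts' entries
# in the same log, with each AI action attributable to a requesting user.

audit_log = [
    {"actor": "Socrates", "requesting_actor": "jane.doe@example.com",
     "action": "Enriched IP 203.0.113.7 with threat intelligence"},
    {"actor": "jane.doe@example.com", "requesting_actor": None,
     "action": "Closed case #1042 as a false positive"},
]

# Keep only the entries where Socrates acted, preserving the requesting user.
socrates_entries = [e for e in audit_log if e["actor"] == "Socrates"]

for entry in socrates_entries:
    print(f'{entry["actor"]} (requested by {entry["requesting_actor"]}): {entry["action"]}')
```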

Socrates Tooling: Expanding and Managing Actions

Socrates comes pre-configured to inform you about its actions, explain its reasoning, and request user confirmation for sensitive or approval-required actions. Socrates is restricted to executing only those workflows that have been explicitly made available to it, ensuring controlled and secure operations. You can expand Socrates' capabilities by creating and tagging workflows for its use. Discover more about Socrates' tools.
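To make the scoping idea concrete, here is a minimal sketch assuming a hypothetical "socrates" tag and a simplified workflow structure; it is not Torq's actual configuration model, only an illustration of the principle that Socrates can execute nothing beyond the workflows explicitly exposed to it.

```python
# Illustrative sketch only: the tag name "socrates" and the workflow
# structure are assumptions for this example, not Torq's configuration model.

workflows = [
    {"name": "Enrich IP with threat intel", "tags": ["socrates", "enrichment"]},
    {"name": "Disable user account", "tags": ["remediation"]},
]

# Socrates may run a workflow only if it carries the tag that explicitly
# makes it available to the AI analyst.
available_to_socrates = [w["name"] for w in workflows if "socrates" in w["tags"]]

print(available_to_socrates)  # ['Enrich IP with threat intel']
```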
