Network Connectivity
Proxy Server
Self-hosted Step Runners communicate with the Torq service via an outbound gRPC-over-TLS connection; that is, a TLS transport carries the gRPC instructions. Additional operations (such as uploading logs and downloading images) are performed via outbound HTTPS connections. This communication can be configured to pass through a proxy server.
Add a Proxy for Docker Step Runners
To add a proxy to the deployment configuration file for Docker Step Runners:
Execute the Install Command: Retrieve the Docker install command from Torq and paste it in your terminal.
Copy the Output: Copy the command output and paste it into a new line.
Edit the File: Specify a proxy server by adding the flag
-e https_proxy=http://<proxy address>:<port>
Run the Edited File: Run the edited deployment configuration file and then confirm the service is running with docker ps.
(Optional) Deploy the Edited File on Another Machine: To deploy the edited deployment configuration file on another machine, retrieve the file by running the command
docker inspect --format "$(curl -Ls https://link.torq.io/<link from install command>)" $(docker ps | grep spd | awk '{print $1}')
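For illustration, an edited Docker deployment command with a proxy might look like the following sketch. The proxy address is a placeholder, and every other flag and the image name must come from your actual install command output rather than this example.
docker run -d \
  -e https_proxy=http://proxy.example.com:3128 \
  ... <remaining flags and image from the install command>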
Add a Proxy for Kubernetes (K8s) Step Runners
To add a proxy to the deployment configuration file for Kubernetes Step Runners:
Execute the Install Command: Retrieve the K8s install command from Torq, paste it in your terminal, and run it.
Copy the Output: Copy the command output and paste it into a new line.
Pipe the Output to a File: Delete everything after the Torq link and pipe the output to a file.
curl -H "Content-Type: application/x-sh" -s -L "https://link.torq.io/<link from install command>" > <file name>.yaml
Edit the File: Specify a proxy server by adding flags to data in the ConfigMap section (see the sketch after this list).
data:
  HTTP_PROXY: http://<proxy address>:<port>
  HTTPS_PROXY: http://<proxy address>:<port>
Run the Edited File: Run the edited deployment configuration file.
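As a sketch, the edited ConfigMap could look like the following once the proxy variables are added. The ConfigMap name and proxy address here are assumptions; keep the metadata generated by your install command and only add the two data keys.
apiVersion: v1
kind: ConfigMap
metadata:
  name: step-runner-config   # assumed name; keep the name already in your file
  namespace: torq
data:
  HTTP_PROXY: http://proxy.example.com:3128
  HTTPS_PROXY: http://proxy.example.com:3128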
Configure Per-Step Proxy Access
Adding a proxy to the Step Runner deployment affects the traffic generated by the Step Runner itself, but not the Steps it runs, because each Step is implemented as a separate container. If the traffic generated by a Runner-instantiated Step requires a proxy, you must configure per-Step proxy settings.
For HTTP Steps such as Send an HTTP request and any HTTP-based custom Steps, you can add optional parameters.
To configure per-Step proxy access:
Navigate to the Workflow: Go to Build > Workflows and open the Workflow.
Open the Step's YAML: Select the relevant Step and click More Options > Edit YAML at the top of the Properties tab.
Edit the YAML: Add the relevant variables under env and save the YAML.
env:
  HTTP_PROXY: http://<proxy address>:<port>
  HTTPS_PROXY: http://<proxy address>:<port>
  NO_PROXY: localhost
The proxy addresses will be reflected in the Step's UI and can be modified later directly from its parameters.
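These variables follow the common HTTP_PROXY/HTTPS_PROXY/NO_PROXY environment-variable convention, so NO_PROXY generally accepts a comma-separated list of hosts and domain suffixes to bypass. A sketch with placeholder values:
env:
  HTTP_PROXY: http://proxy.example.com:3128
  HTTPS_PROXY: http://proxy.example.com:3128
  NO_PROXY: localhost,127.0.0.1,.internal.example.com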
Bridge Network
A bridge network uses a software bridge that allows containers to communicate while providing isolation from containers that are not connected to it.
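For example, a user-defined bridge network can be created with Docker's standard CLI; the network name below is a placeholder used throughout this section's examples.
docker network create --driver bridge torq-runners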
Connect a Docker Step Runner to a Bridge Network
To connect a Docker Step Runner to a bridge network:
Execute the Install Command: Retrieve the Docker install command from Torq and paste it in your terminal.
Copy the Output: Copy the command output and paste it into a new line.
Edit the File: Connect to the bridge network by adding the flag
--network <network name>
Run the Edited File: Run the edited deployment configuration file and then confirm the service is running with docker ps.
(Optional) Deploy the Edited File on Another Machine: To deploy the edited deployment configuration file on another machine, retrieve the file by running the command
docker inspect --format "$(curl -Ls https://link.torq.io/<link from install command>)" $(docker ps | grep spd | awk '{print $1}')
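As an illustrative sketch, the edited command might look like the following; the network name matches the placeholder created above, and all remaining flags and the image must come from your actual install command output.
docker run -d --network torq-runners \
  ... <remaining flags and image from the install command>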
Torq strongly recommends against performing SSL/TLS inspection (also known as "man-in-the-middle" inspection) on communication with the Torq service. It can interfere with communication, weaken security, and waste CPU resources.
Resource Allocation
By default, Step Runners are allocated 128 MB of RAM and 250 CPU shares.
Each Step in a Workflow is implemented as a Docker container or Kubernetes pod. While most Steps require 256 MB of RAM and 250 CPU shares to run, some Steps, such as Run an inline Python script, demand more resources (2 GB of RAM and 1 CPU core).
Moreover, Workflows that run multiple Steps simultaneously (e.g., parallel loops and executions) can require up to five times the default CPU and RAM. If several Workflows run concurrently, multiply the resources by the anticipated number of simultaneous Workflow executions.
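As a worked example under these assumptions: a parallel loop running five Steps at once needs roughly 5 × 256 MB = 1.25 GB of RAM and 5 × 250 = 1,250 CPU shares for the Steps alone. If you anticipate three such Workflows executing concurrently, plan for roughly 3 × 1.25 GB ≈ 3.75 GB of RAM plus the Step Runner's own allocation.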
Change the CPU and Memory Allocation for Docker Step Runners
To modify the CPU and memory allocation in the deployment configuration file for Docker Step Runners:
Execute the Install Command: Retrieve the Docker install command from Torq and paste it in your terminal.
Copy the Output: Copy the command output and paste it into a new line.
Edit the File: Update the memory and CPU limits by adding the flags
-e DOCKER_LIMIT_MEMORY_MB=1024
-e DOCKER_LIMIT_CPU_SHARES=256
Run the Edited File: Run the edited deployment configuration file.
Delete the Old Container: Confirm the service is running with docker ps and delete the old SPD container.
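For illustration, the relevant portion of an edited command might look like this sketch; everything besides the two limit flags must come from your actual install command output.
docker run -d \
  -e DOCKER_LIMIT_MEMORY_MB=1024 \
  -e DOCKER_LIMIT_CPU_SHARES=256 \
  ... <remaining flags and image from the install command>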
Change the CPU and Memory Allocation for Kubernetes (K8s) Step Runners
To modify the CPU and memory allocation in the deployment configuration file for Kubernetes Step Runners:
Execute the Install Command: Retrieve the K8s install command from Torq, paste it in your terminal, and run it.
Copy the Output: Copy the command output and paste it into a new line.
Pipe the Output to a File: Delete everything after the Torq link and pipe the output to a file.
curl -H "Content-Type: application/x-sh" -s -L "https://link.torq.io/<link from install command>" > <file name>.yaml
Edit the File: Change the CPU and RAM resource allocation by modifying the default and max records in the LimitRange section. For example, increase the default values to half a CPU and 512 MB of RAM and the max values to 2 CPUs and 4 GiB of RAM.
---
apiVersion: v1
kind: LimitRange
metadata:
  name: steps-limits
  namespace: torq
spec:
  limits:
    - type: Container
      max:
        cpu: 2
        memory: 4Gi
      default:
        cpu: 500m
        memory: 512Mi
---
Run the Edited File: Run the edited deployment configuration file.
kubectl apply -f <file name>.yaml
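You can then confirm the new limits took effect with a standard kubectl query, assuming the steps-limits name and torq namespace from the example above:
kubectl describe limitrange steps-limits -n torq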
Expert Deployments
The following deployment types are one-off or highly advanced scenarios that are not relevant to most Step Runner deployments.
Deploy Step Runners Across Multiple Namespaces in a Single Cluster
Changing the namespace in the deployment configuration file enables Step Runners to be deployed in multiple namespaces within the same Kubernetes cluster, allowing DevOps teams to manage network segmentation as well as their own Kubernetes ResourceQuota requirements.
To deploy Step Runners across multiple namespaces in a single Kubernetes cluster:
Execute the Install Command: Retrieve the Kubernetes install command from Torq, delete | kubectl apply -f, and run the modified command in your terminal.
Edit the File: Open the file and change the namespace wherever namespace is mentioned, including SPD_NAMESPACE.
Run the Edited File: Run the edited deployment configuration file.
kubectl apply -f <file name>.yaml
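As a sketch, the substitution can be scripted with sed. This assumes the file declares namespace: torq and SPD_NAMESPACE: torq; verify the exact keys and values in your file before running.
sed -i 's/namespace: torq/namespace: team-a/g; s/SPD_NAMESPACE: torq/SPD_NAMESPACE: team-a/g' <file name>.yaml
kubectl apply -f <file name>.yaml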
Load a Step Runner Secret From an S3 Bucket
Secure storage, such as a private GitHub repository or an AWS S3 bucket, can hold Step Runner credentials, enabling Step Runners to be auto-scaled and auto-redeployed.
To load credentials for a Step Runner from an AWS S3 bucket:
Execute the Install Command: Retrieve the Kubernetes install command from Torq, paste it in your terminal, and run it.
Copy the Output: Copy the command output and paste it into a new line.
Pipe the Output to a File: Delete everything after the Torq link and pipe the output to a file.
curl -H "Content-Type: application/x-sh" -s -L "https://link.torq.io/<link from install command>" > <file name>.yaml
Copy the File's Secret: Open the YAML deployment configuration file and copy the key.json value.
data:
  key.json: <secret>
Upload the Key to the S3 Bucket: Upload key.json to the bucket and then delete the value from the YAML configuration file.
Modify the File's ConfigMap: Delete the GOOGLE_APPLICATION_CREDENTIALS variable from data in the ConfigMap section and add two new variables.
AWS_REGION: <region of S3 bucket>
GCLOUD_CREDS_URL: s3://<path where key was uploaded>
Prevent Mounting the Deleted Secret: Delete the volumeMounts and volumes sections and then save the file.
volumeMounts:
  - name: step-runner
    readOnly: true
    mountPath: /var/run/secrets/step-runner
***
volumes:
  - name: step-runner
    secret:
      secretName: step-runner
Run the Edited File: Run the edited configuration file.
kubectl apply -f <file name>.yaml
Attach a Role to the Service Account: Go to the AWS console and attach a role to the service account. The role should have the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::spd-s3-test/spdkey.txt"
    }
  ]
}
Apply a Trust Relationship to the Role: Add your cluster region and cluster ID to apply a trust relationship.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.<cluster region>.amazonaws.com/id/<cluster ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<cluster region>.amazonaws.com/id/<cluster ID>:sub": "system:serviceaccount:default:my-service-account",
          "oidc.eks.<cluster region>.amazonaws.com/id/<cluster ID>:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
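Note that on Amazon EKS, attaching an IAM role to a Kubernetes service account is typically done with the eks.amazonaws.com/role-arn annotation (IAM Roles for Service Accounts). A hedged example, reusing the account ID, service account name, and namespace shown in the trust policy above; the role name is a placeholder:
kubectl annotate serviceaccount my-service-account -n default \
  eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/<role name>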