
Running StashCache Origin in a Container


Currently, the StashCache Origin container only supports distribution of public data. If you would like to distribute private data requiring authentication, see the RPM installation guide.

The OSG operates the StashCache data federation, which provides organizations with a method to distribute their data in a scalable manner to thousands of jobs without needing to pre-stage data across sites or operate their own scalable infrastructure.

Stash Origins store copies of users' data. Each community (or experiment) needs to run one origin to export its data via the StashCache federation. This document outlines how to run such an origin in a Docker container.

Before Starting

Before starting the installation process, consider the following points:

  1. Docker: For the purpose of this guide, the host must have a running docker service and you must have the ability to start containers (i.e., belong to the docker Unix group).
  2. Network ports: The Stash Origin listens for incoming HTTP/S and XRootD connections on ports 1094 and 1095 (by default).
  3. File Systems: Stash Origin needs a host partition to store user data.
  4. Registration: Before deploying an origin, you must register the service in the OSG Topology.

Configuring the Origin

In addition to the required configuration above (ports and file systems), you may also configure the behavior of your origin through an environment variable file on the docker host, /opt/origin/.env. Replace <YOUR_RESOURCE_NAME> with the resource name of your origin as registered in Topology and <FQDN> with the public DNS name that should be used to contact your origin.
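The exact variable names read by the container depend on the image version; the following is a minimal sketch of /opt/origin/.env, assuming the image reads `XC_RESOURCENAME` and `XC_ORIGIN_FQDN` (treat these names as assumptions and verify them against the opensciencegrid/stash-origin image documentation):

```
# /opt/origin/.env -- hypothetical variable names; consult the
# opensciencegrid/stash-origin image documentation for the exact set.
XC_RESOURCENAME=<YOUR_RESOURCE_NAME>
XC_ORIGIN_FQDN=<FQDN>
```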


Populating Origin Data

The StashCache data federation namespace is shared by multiple VOs, so you must choose a namespace for your own VO's data. When running an origin container, your chosen namespace must be reflected in your host partition.

For example, if your host partition is /srv/origin-public and the name of your VO is ASTRO, you should store the Astro VO's public data in /srv/origin-public/astro/. Then, when starting the container, you will mount /srv/origin-public/ into /xcache/namespace in the container.
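As a concrete sketch (the VO name "astro" and the /srv/origin-public partition are just the example values from above; substitute your own):

```shell
# Create the namespace directory for the example "astro" VO
# inside the host partition /srv/origin-public.
mkdir -p /srv/origin-public/astro

# Data staged here, for example:
#   /srv/origin-public/astro/dataset1.tar
# becomes visible under the /astro namespace once /srv/origin-public
# is mounted at /xcache/namespace in the container.
```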

Running the Origin

It is recommended to use a container orchestration service such as docker-compose or kubernetes whose details are beyond the scope of this document. The following sections provide examples for starting origin containers from the command-line as well as a more production-appropriate method using systemd.
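For reference, the docker run options shown below map onto a docker-compose file roughly as follows. This is an untested sketch, not a supported deployment recipe; the service name is illustrative and the paths are the example values used throughout this document:

```yaml
# docker-compose.yml -- illustrative sketch; adjust paths to your site.
version: "3"
services:
  stash-origin:
    image: opensciencegrid/stash-origin:fresh
    restart: always
    ports:
      - "1094:1094"
      - "1095:1095"
    volumes:
      - /srv/origin-public:/xcache/namespace
    env_file:
      - /opt/origin/.env
```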

user@host $ docker run --rm --publish 1094:1094 \
             --publish 1095:1095 \
             --volume <HOST PARTITION>:/xcache/namespace \
             --env-file=/opt/origin/.env \
             opensciencegrid/stash-origin:fresh
Replace <HOST PARTITION> with the host directory containing the data that your origin should serve; see the Populating Origin Data section above for details.


A container deployed this way will serve the entire contents of <HOST PARTITION>.

Running an origin container with systemd

Below is an example systemd service file for a StashCache origin. This requires creating the environment file /opt/origin/.env as described above.


This example systemd file assumes <HOST PARTITION> is /srv/origin-public.

Create the systemd service file /etc/systemd/system/docker.stash-origin.service as follows:

[Unit]
Description=Stash Origin Container
After=docker.service

[Service]
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull opensciencegrid/stash-origin:fresh
ExecStart=/usr/bin/docker run --rm --name %n -p 1094:1094 -p 1095:1095 -v /srv/origin-public:/xcache/namespace --env-file /opt/origin/.env opensciencegrid/stash-origin:fresh

[Install]
WantedBy=multi-user.target


Reload systemd, then enable and start the service with:

root@host # systemctl daemon-reload
root@host # systemctl enable docker.stash-origin
root@host # systemctl start docker.stash-origin


You must register the origin before considering it a production service.

Validating Origin

To validate the origin, please follow the origin validation instructions.

Getting Help

To get assistance, please follow the OSG help procedures or contact OSG support directly.