OSPool: Serving Open Science throughput computing

What is the OSPool?

The OSPool is a source of computing capacity that is accessible to any researcher affiliated with a US academic institution. Capacity is allocated following a Fair-Share policy. To harness the OSPool capacity you will need to obtain an account via the OSG Portal.

Is my workload a match for the OSPool?

Each of your jobs must fit on a single server, and it must be portable so that it can run on a remote server. The distributed nature of the OSPool also constrains the size of a job's input and output sandboxes.
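
To make this concrete, here is a minimal sketch of an HTCondor submit file (HTCondor is the scheduler used on OSPool Access Points) for a single self-contained job. The file names (run_analysis.sh, input_data.tar.gz) are placeholders, and the resource values are purely illustrative rather than OSPool requirements.

    # Minimal HTCondor submit description (illustrative values only)
    # run_analysis.sh and input_data.tar.gz are placeholder names
    executable = run_analysis.sh
    arguments  = input_data.tar.gz

    # Declare the input sandbox explicitly; new files the job creates
    # are transferred back as the output sandbox when the job exits
    transfer_input_files  = input_data.tar.gz
    should_transfer_files = YES

    # Each job must fit within the resources of a single server
    request_cpus   = 1
    request_memory = 2GB
    request_disk   = 2GB

    log    = job.log
    output = job.$(Cluster).$(Process).out
    error  = job.$(Cluster).$(Process).err

    queue 1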

OSPool: Sharing computing capacity in support of Open Science

Who contributes capacity to the OSPool?

The computing resources for the OSPool are contributed by members of the OSG Compute Federation, typically campuses, government-supported supercomputing centers, and research collaborations. Each member individually determines its policy for contributing resources, including how much it contributes and when those resources are available. Some resource providers choose to share their resources only with specific research projects, while others contribute resources to everyone in the OSPool.

View institution contributions on our Institutions Page.

How can I harness the OSPool capacity?

Researchers affiliated with projects at US-based academic, non-profit, and government institutions can submit computational work to the OSPool via Access Points operated by the OSG.

Sign up for an OSPool account on the OSG Portal

Specifically, you can benefit from the OSPool capacity if you are a:

  • Researcher affiliated with a project at a US-based academic, government, or non-profit institution (via an OSG-Operated Access Point).
  • Researcher affiliated with such an institution or project that operates its own local Access Point.

Institutions or collaborations that would like to harness the capacity of the OSPool should contact [email protected]

View projects using the OSPool on the OSG Project Page.

What types of work run well on the OSPool?

For problems that can be run as numerous, self-contained jobs, the OSPool provides computing capacity that can transform the types of questions researchers are able to tackle (see the table below). A wide range of research problems and computational methods can be broken up or otherwise executed in this high-throughput computing (HTC) approach, including:

  • image analysis (including MRI, GIS, etc.)
  • text-based analysis, including DNA read mapping and other bioinformatics
  • parameter sweeps (see the submit-file sketch after this list)
  • model optimization approaches, including Monte Carlo methods
  • machine learning and AI executed with multiple independent training tasks, different parameters, and/or data subsets
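
For example, a parameter sweep maps naturally onto many independent jobs: a single submit file can queue one job per parameter combination. The sketch below is hypothetical; simulate.sh and params.txt are placeholder names, and params.txt is assumed to hold one whitespace-separated pair of alpha and beta values per line.

    # Hypothetical parameter sweep: one job per line of params.txt
    executable = simulate.sh
    arguments  = $(alpha) $(beta)

    request_cpus   = 1
    request_memory = 2GB
    request_disk   = 1GB

    log    = sweep.log
    output = sweep.$(Cluster).$(Process).out
    error  = sweep.$(Cluster).$(Process).err

    # alpha and beta take the values from each row of params.txt
    queue alpha, beta from params.txt

Each queued job runs independently, which is exactly the high-throughput pattern the OSPool is designed for.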

The OSPool is made up mostly of opportunistic capacity: contributing clusters may interrupt jobs at any time. The OSPool therefore best supports workloads of numerous jobs that individually complete or checkpoint within 20 hours.
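
For tasks that need more time, HTCondor's self-checkpointing support is one way to fit them into this model: an executable that periodically writes its state to a file and exits with a designated code can be resumed from that saved state after an interruption. The lines below are only a sketch under that assumption; long_task.sh is a placeholder name and exit code 85 is just an example value.

    # Sketch of a self-checkpointing job: the (placeholder) executable is
    # assumed to save its progress to a file in the job's sandbox and then
    # exit with code 85 each time it checkpoints; HTCondor preserves that
    # saved state and restarts the executable, which resumes from the file.
    executable            = long_task.sh
    checkpoint_exit_code  = 85
    should_transfer_files = YES

    request_cpus   = 1
    request_memory = 2GB
    request_disk   = 2GB

    queue 1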

Importantly, many compute tasks can take advantage of the OSPool with simple modifications, and we’d love to discuss options with you!

                              | Ideal jobs!                       | Still very advantageous | Maybe not, but get in touch!
Expected throughput, per user | 1000s concurrent cores            | 100s concurrent cores   | Let's discuss!
CPU                           | 1 per job                         | < 8 per job             | > 8 per job
Walltime                      | < 10 hrs*                         | < 20 hrs*               | > 20 hrs
RAM                           | < few GB                          | < 40 GB                 | > 40 GB
Input                         | < 500 MB                          | < 10 GB                 | > 10 GB**
Output                        | < 1 GB                            | < 10 GB                 | > 10 GB**
Software                      | pre-compiled binaries, containers | most other than →       | licensed software, non-Linux

*or checkpointable

** per job; you can work with a large dataset on OSG if it can be split into pieces

Learn more and chat with a Research Computing Facilitator by requesting an account.

Learning to use the OSPool

We maintain a comprehensive knowledge base of user documentation, and our active, supportive facilitation team assists all users on OSG-Operated Access Points.

Users submitting jobs can specify per-job compute resource requirements (e.g., CPU cores, memory, disk) and any special server requirements. We recommend submitting many jobs and taking advantage of all the cycles available, wherever they may be. We cannot guarantee that any single job will finish quickly, but the OSPool will allow you to accomplish a tremendous amount of work across jobs.
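
For example, these per-job requests and any special server requirements go directly into the submit file; the numbers below are placeholders, and the requirements expression (matching only servers that advertise the x86_64 architecture via the standard Arch attribute) is just one illustration of a special requirement.

    # Illustrative per-job resource requests
    request_cpus   = 4
    request_memory = 8GB
    request_disk   = 5GB

    # Example of a special server requirement: only match x86_64 machines
    requirements = (Arch == "X86_64")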