Swarthmore ITS provides access to high-performance computing systems for computationally intensive projects. Several different types of systems are available for faculty, staff, and students. Academic Technologists from ITS will work with you to determine which resource would best match your project.
...
Firebird Computing Cluster
Campus researchers have access to Firebird, Swarthmore's high-performance computer cluster. Learn more about the system on the Firebird Computing Cluster page (/wiki/spaces/ACADTECH/pages/20910101).
Technical Specifications
The cluster consists of 18 compute nodes, each with two CPUs, plus a head node that handles user logins and job scheduling:
12 mid-memory nodes (384GB RAM)
3 high-memory nodes (768GB RAM)
1 high-CPU node (72 cores)
2 GPU nodes, each with 4x NVIDIA 2080 Ti GPUs
Over 700TB storage
High-speed InfiniBand networking
Jobs are submitted through the Slurm job scheduling system.
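As an illustration, a minimal Slurm batch script might look like the sketch below. The partition names, resource limits, and available software modules on Firebird will differ; the values here are generic placeholders, so check with ITS for site-specific settings.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch; the resource requests below are
# placeholders, not Firebird-specific values.
#SBATCH --job-name=example        # name shown in squeue output
#SBATCH --ntasks=1                # run a single task
#SBATCH --cpus-per-task=4         # give that task four cores
#SBATCH --mem=8G                  # memory for the whole job
#SBATCH --time=01:00:00           # walltime limit (HH:MM:SS)
#SBATCH --output=example_%j.log   # %j expands to the Slurm job ID

# The #SBATCH lines above are comments to bash but directives to Slurm.
echo "Job running on $(hostname)"
```

You would submit the script with `sbatch example.sh` and check its status with `squeue -u $USER`.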
National and Regional Supercomputing Resources
...
Systems available on ACCESS include:
Jetstream2, a cloud-based virtual machine service hosted by Indiana University that can provide a variety of systems, including large-memory and GPU instances, for researchers and classes.
Open Science Pool (OSPool), consisting of a network of more than 60,000 computers available to run serial jobs with a short queue time.
Stampede3, a supercomputer at the Texas Advanced Computing Center with 1,858 compute nodes (more than 140,000 cores), over 330 terabytes of RAM, 13 petabytes of storage, and almost 10 petaflops of peak capability.
Swarthmore faculty and students have used ACCESS (and its precursors) for calculations of chemical structures, plasma physics simulations, and development of new computer science algorithms.
...
If you have a large job that can be broken down into small, independent pieces, high throughput computing (HTC) may be a way to reduce the time needed for your calculation. Instead of running a program on one large computer, you can create hundreds or thousands of small jobs that are sent to the Open Science Pool (OSPool), a set of thousands of computers across the country. Anyone can create an account at OSG Connect and start submitting jobs using the HTCondor system.
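To sketch the idea, a parameter sweep of this kind can be described in a short HTCondor submit file. This is a generic example, not a Swarthmore-specific one: the executable and file names below are placeholders.

```
# sweep.sub -- hypothetical HTCondor submit description file
executable      = analyze.sh         # placeholder program to run
arguments       = input_$(Process)   # $(Process) is 0, 1, 2, ... per job
output          = out.$(Process)     # per-job stdout file
error           = err.$(Process)     # per-job stderr file
log             = sweep.log          # one log for the whole batch
request_cpus    = 1
request_memory  = 1GB

queue 100                            # submit 100 independent jobs
```

Running `condor_submit sweep.sub` would queue 100 small jobs, each receiving a different `$(Process)` number, and OSPool matches them to available machines as they free up.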
...
The ITS Media Center has a set of high-end computers that can be used for video editing, image processing, and other computationally intensive work.
...