Facilities, Equipment, and Other Resources

The text below is appropriate for the National Science Foundation's required "Facilities, Equipment, and Other Resources" document.

USD has operated on-campus High Performance Computing (HPC) clusters since 2006, supporting multidisciplinary research in areas such as bioinformatics, computational biology, quantum chemistry, particle physics, and mathematics. HPC systems are also employed in undergraduate and graduate courses in computational chemistry, physics, and bioinformatics.

The Legacy Supercomputer was acquired in 2006 through USD’s Institutional Development Award from the National Institutes of Health. It was expanded in 2009 and 2011 with additional NIH funding, and the final expansion in 2013 was funded by the College of Arts and Sciences. Legacy consists of 680 AMD Opteron cores, with individual compute systems connected to separate dedicated 1Gb Ethernet networks for data storage and computational message-passing traffic. A network storage appliance provides 70TB of NFS storage for the cluster. Legacy runs the Rocks 6.2 Linux operating system distribution, which provides a high performance computing environment including the Grid Engine job and resource management system and the Ganglia system monitoring suite.
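
Researchers typically submit batch work to Legacy through Grid Engine. As a minimal sketch only, assuming the standard Python DRMAA bindings are available on the cluster and using a hypothetical job script and resource request (illustrative placeholders, not USD-specific settings), a submission might look like:

    import drmaa

    # Open a DRMAA session against the cluster's Grid Engine instance.
    session = drmaa.Session()
    session.initialize()

    # Describe the batch job; the script path and Grid Engine options
    # below are hypothetical placeholders for illustration only.
    template = session.createJobTemplate()
    template.remoteCommand = "./run_analysis.sh"                   # hypothetical job script
    template.nativeSpecification = "-pe mpi 16 -l h_rt=04:00:00"   # hypothetical resources

    # Submit the job and report the Grid Engine job identifier.
    job_id = session.runJob(template)
    print("Submitted Grid Engine job", job_id)

    # Release the template and close the session.
    session.deleteJobTemplate(template)
    session.exit()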

The Lawrence Supercomputer was acquired through a combination of state and federal funding: a FY16 SD Board of Regents Research and Development Innovation award and National Science Foundation Major Research Instrumentation award OAC-1626516. Lawrence runs the CentOS 7 Linux operating system and comprises over 2,000 CPU cores, including systems with 1.5TB of memory, NVIDIA P100 GPU accelerators, and over 400TB of ZFS network storage accessible via a 56Gb FDR InfiniBand network. Lawrence has an estimated performance of over 60 TFLOPS and is slated for production in early 2018.

In addition to local HPC resources, USD supports faculty whose computational scale-out needs require access to national HPC and cloud resources such as XSEDE and the Google and Amazon cloud platforms.

USD operates a Science DMZ network to support bulk research data flows. The Science DMZ is a separate network enclave, isolated from the TCP congestion often associated with traditional enterprise network traffic. To support high-speed, unencumbered scientific data movement, the Science DMZ employs a Data Transfer Node (DTN) connected to the HPC cluster network and other research data hubs on the Vermillion campus. The DTN hosts USD's Globus server, providing high-speed data transfer and publication capabilities based on GridFTP technology.
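
As a sketch of how data movement through the DTN can be automated, assuming the Globus Python SDK and using placeholder endpoint UUIDs, paths, and an access token (none of these are USD's actual identifiers), a transfer might be submitted as follows:

    import globus_sdk

    # Placeholder credentials and endpoint identifiers for illustration only.
    TRANSFER_TOKEN = "REPLACE_WITH_TOKEN"
    SOURCE_ENDPOINT = "aaaaaaaa-0000-0000-0000-000000000000"   # e.g., a campus DTN endpoint
    DEST_ENDPOINT = "bbbbbbbb-0000-0000-0000-000000000000"     # e.g., a collaborator's endpoint

    # Authenticate to the Globus Transfer service with an access token.
    authorizer = globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
    tc = globus_sdk.TransferClient(authorizer=authorizer)

    # Describe the transfer: one directory copied recursively between endpoints.
    tdata = globus_sdk.TransferData(tc, SOURCE_ENDPOINT, DEST_ENDPOINT,
                                    label="Example research data transfer")
    tdata.add_item("/projects/example/", "/incoming/example/", recursive=True)

    # Submit the transfer; Globus manages retries and integrity checking.
    result = tc.submit_transfer(tdata)
    print("Submitted Globus transfer task", result["task_id"])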

USD operates the South Dakota Data Store (SDDS, funded by NSF award ACI-1659282), housed at the South Dakota University Center data center in Sioux Falls, SD. SDDS is accessible via the cloud-based Globus data management platform (globus.org). SDDS includes an archival tier hosted on a magnetic tape library as well as a high-capacity disk tier for data sharing.

All high performance computing equipment is hosted in an environmentally controlled, physically secured data center.  The data center provides in-rack and ceiling cooling units as well as a dedicated fire suppression system.  Production equipment is protected by uninterruptible power supplies, and production data is backed up regularly.