| Current status | Apr 2 – Apr 6 |
|---|---|
| Jobs Running | 227 |
| Cores in Use | 18241 |
| /lustre/beagle Usage | 95% |
| /lustre/beagle2 Usage | 80% |
Status legend:

- System is operating at peak performance
- Beagle2 maintenance
- Beagle Upgrade
Director
The acquisition of the Beagle supercomputer was made possible by a grant from the National Institutes of Health (NIH) National Center for Research Resources (NCRR).
Ian Foster, director of the Computation Institute at the University of Chicago and Argonne National Laboratory, is the PI for this project. Working with UChicago’s team of technical and domain specialists, Foster identified the need for a powerful computational environment to serve the growing resource-intensive requirements of the biomedical research community.
Beagle’s “skin” was created by the Computation Institute’s Mark Hereld and Greg Cross. Beagle 2011 is built on three components: water and sky, divided by a wave. Moving to the right, the wave takes on the pitch of the double helix of DNA. The images of water and sky are computer-generated by a stochastic, context-free grammar. This application of stochastic image generation gives Beagle 2011 a fractal aspect that combines visual elements inspired by biology and mathematics, the disciplines at the heart of the research that Beagle will carry forward.
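To make the idea of stochastic, context-free image generation concrete, here is a minimal Python sketch in the same spirit. The grammar, rule names, and probabilities below are invented for illustration; they are not the actual rules behind the Beagle artwork. A single symbol, `wave`, randomly expands either into two half-width waves or into a terminal crest, which is what produces the self-similar, fractal look.

```python
import random

random.seed(42)  # make the random derivation reproducible

def wave(depth, x, width, shapes):
    """Expand the nonterminal 'wave' over the interval [x, x + width)."""
    if depth == 0:
        shapes.append(("crest", x, width))  # recursion limit: emit a terminal
        return
    if random.random() < 0.6:
        # Production 1 (p = 0.6): wave -> wave wave, two half-width copies
        wave(depth - 1, x, width / 2, shapes)
        wave(depth - 1, x + width / 2, width / 2, shapes)
    else:
        # Production 2 (p = 0.4): wave -> crest, a terminal shape
        shapes.append(("crest", x, width))

shapes = []
wave(6, 0.0, 1.0, shapes)
for name, x, w in shapes[:5]:
    print(f"{name} at x={x:.3f}, width={w:.3f}")
print(f"{len(shapes)} shapes in total")
```

Because each expansion is a random choice among productions for the same symbol, every run yields a different but structurally similar composition, the hallmark of this family of generative tools.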
System Specifications
- Ranked among the world’s 500 fastest machines (November 2015 list)
- Peak Performance: 212 TFlop/s
- Cray XE6 system
- CPU: AMD Opteron 6300 series (model 6380)
  - 64 GB per node × 724 compute nodes = 46336 GB
  - Max memory bandwidth for the Opteron 6380 is 102.4 GB/s
- GPU: NVIDIA Tesla K20
  - 32 GB per node × 4 GPU compute nodes = 128 GB
- Total RAM on compute nodes: 46336 + 128 = 46464 GB
- 728 compute nodes / 4 nodes per blade = 182 blades
- Extreme Scalability Mode (ESM), which supports large scalable custom applications.
- Cluster Compatibility Mode (CCM), which allows standard programs designed for smaller machines or clusters to run without modifications.
- The nodes are connected in a 3D torus topology via the Cray Gemini interconnect.
- A high-speed inter-processor network to support tightly coupled computational simulation and data-intensive analysis applications that involve frequent inter-process communication (a minimal sketch of this communication pattern follows this list).
- At least 32 GB of memory per compute node, for applications that create large in-memory data structures or that run many tasks on the same node.
- The ability to schedule large jobs quickly and easily as data become available, while still accommodating a very large number of smaller tasks.
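For a concrete feel of the tightly coupled, communication-heavy style of application that ESM and the Gemini interconnect target, here is a minimal sketch of an MPI ring exchange in Python. The use of mpi4py, the script name `ring.py`, and the exact `aprun` invocation are illustrative assumptions, not part of Beagle’s documented setup.

```python
# Minimal sketch (assumes mpi4py is installed): each rank repeatedly
# exchanges a token with its ring neighbors, the kind of frequent
# point-to-point traffic a 3D-torus interconnect like Gemini serves.
# Under ESM on a Cray XE6 this would be launched with something like:
#   aprun -n 64 python ring.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

token = rank
for _ in range(size):
    dest = (rank + 1) % size    # right neighbor in the ring
    source = (rank - 1) % size  # left neighbor in the ring
    # sendrecv pairs a send with a receive, avoiding deadlock in the ring
    token = comm.sendrecv(token, dest=dest, source=source)

# After `size` hops, every rank holds its own original token again
print(f"rank {rank}/{size} ended with token {token}")
```

Under CCM, a conventionally built cluster program of this kind could run unmodified; ESM with `aprun` is the path for large, scalable custom codes.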
Beagle2 Operations
Focus Areas
Beagle2 will focus primarily, but not exclusively, on biomedical research supported by NIH funding.
Some of the project areas include:
- Quantitative determination of free energies associated with large conformational changes in cell membranes
- Molecular structure and ligand interaction prediction in cellular networks
- Whole-body model for studies of electrical and thermal injury
- Computation of possible configurations of transcriptional networks
- Data-mining of biomedical literature to understand regulatory networks in cancer and to understand complex disease processes
- Mapping brain structure to human behavior
- Quantitative medical-image analysis
- High-volume text mining
- Genomic and metagenomic data analysis
- Modeling of economic impact of climate change
- Large-scale molecular dynamics
- Modeling of ion channels in nerve cells
- Study of transcriptional networks
Beagle Acknowledgment Statement
This research was supported in part by NIH through resources provided by the Computation Institute and the Biological Sciences Division of the University of Chicago and Argonne National Laboratory, under grant 1S10OD018495-01. We specifically acknowledge the assistance of [relevant staff members’ names].