|  | Jan 23 | Jan 22 | Jan 21 | Jan 20 | Jan 19 |
| --- | --- | --- | --- | --- | --- |
| Cores in Use | 21952 | 22400 | 22400 | 22168 | 22400 |
System is operating at peak performance.

Scheduled maintenance occurs at 7:00 AM on the first Tuesday of every month.
- All compute node CPUs upgraded from Magny-Cours to Abu Dhabi processors. With this upgrade, core count has been increased from 24 to 32 cores per node.
- All compute node RAM DIMMs replaced. Available RAM has been increased from 32GB to 64GB per node.
- Four NVIDIA GPU accelerator nodes have been added.
- Two additional DDN storage cabinets installed to provide new Lustre scratch storage. An additional 1.4 PB of usable space has been added with that upgrade.
- SMW and CLE system software upgraded from 7.0 to 7.2 and from 4.2 to 5.2, respectively.
- Development environment tools (Cray compiler, GNU compiler) updated.
- Lustre software updated from version 1.8 to 2.5. The issue that prevented group permissions from working properly on Lustre has now been fixed as well.
We had also planned to upgrade the network cards in the login nodes to 10Gbps connections, but have delayed that until January 2015. That upgrade will take place after ANL finishes the planned upgrades to its network infrastructure over the next few weeks.
With these changes, you will need to carefully review and modify any submit scripts you have previously used on Beagle. In particular, the new per-node core count changes the switches you pass to aprun, but you should also think more generally about how the increased hardware capabilities affect how you size and run your jobs; a sample script is sketched below. Our wiki has been updated to reflect these changes, but if you have any questions or concerns, please let us know.
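As a minimal sketch of what an updated script might look like (the job name, core counts, and application are illustrative, and the directives assume the Torque/Moab-style PBS interface typical of Cray XE6 systems; check the wiki for Beagle's actual settings):

```bash
#!/bin/bash
#PBS -N example_job
#PBS -l mppwidth=64       # total cores requested (2 nodes x 32 cores)
#PBS -l mppnppn=32        # cores per node: now 32, up from 24
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR

# The aprun switches must match the new node geometry:
# -n = total processing elements, -N = processing elements per node.
aprun -n 64 -N 32 ./my_app
```

A script written for the old 24-core nodes (for example, `aprun -N 24`) will now leave a quarter of each node idle, so updating these values is worth the few minutes it takes.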
We’re also still in the process of moving data from the old Lustre volume to the new one. By limiting access at first, we’ll be able to prioritize copying the data of users who need early access while we continue copying everyone else’s data in the background.
If you want to be considered for early access or have any questions or concerns of any kind, please let us know at email@example.com and we’ll do our best to answer ASAP.
As always, we appreciate your patience during this upgrade period.
The acquisition of the Beagle supercomputer was made possible by a grant from the National Institutes of Health (NIH) National Center for Research Resources (NCRR).
Ian Foster, director of the Computation Institute at the University of Chicago and Argonne National Laboratory, is the PI for this project. He and UChicago’s team of technical and domain specialists identified the need for a powerful computational environment to serve the growing resource-intensive requirements of the biomedical research community.
Beagle’s “skin” was created by the Computation Institute’s Mark Hereld and Greg Cross. Beagle 2011 is built on three components: water and sky are divided by a wave. Moving to the right, the wave takes on the pitch of the double helix of DNA. The images of water and sky are generated programmatically by a stochastic context-free grammar. This application of stochastic image generation gives Beagle 2011 a fractal aspect that combines visual elements inspired by biology and mathematics, disciplines at the heart of the research that Beagle will carry forward.
- Ranked among the world’s 500 fastest machines (Dec. 2015)
- Peak performance: 212 TFlops
- Cray XE6 system
- CPU: AMD Opteron 6300 series (model 6380)
- Max memory bandwidth (Opteron 6380): 102.4 GB/s
- GPU: NVIDIA Tesla K20
- Total RAM on compute nodes: 46336 GB + 128 GB = 46464 GB
  - 64 GB per node × 724 standard compute nodes = 46336 GB
  - 32 GB per node × 4 GPU compute nodes = 128 GB
- 728 compute nodes / 4 nodes per blade = 182 blades
- Extreme Scalability Mode (ESM), which supports large scalable custom applications.
- Cluster Compatibility Mode (CCM), which allows standard programs designed for smaller machines or clusters to run without modification (a sample CCM script follows this list).
- The nodes are connected in a 3D torus topology via the Cray Gemini interconnect.
- A high-speed inter-processor connection network to support tightly coupled computational simulation and data-intensive analysis applications that involve frequent inter-process communication.
- At least 32 GB of memory per compute node, for applications that create large in-memory data structures or that run many tasks on the same node.
- The ability to easily and quickly schedule large jobs as data become available while being able to pursue a very large number of smaller tasks.
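As an illustration of the second execution mode, here is a minimal CCM sketch; the application name is hypothetical, `ccmrun` is Cray's standard CCM launcher, and the exact module and queue setup on Beagle may differ:

```bash
#!/bin/bash
#PBS -N ccm_example
#PBS -l mppwidth=32       # one 32-core node
#PBS -l walltime=00:30:00

cd $PBS_O_WORKDIR

module load ccm           # enable Cluster Compatibility Mode support

# ccmrun launches an ordinary cluster binary inside CCM,
# bypassing the Cray-specific aprun launch path.
ccmrun ./standard_cluster_app
```

ESM jobs, by contrast, are launched directly with aprun, as in the submit script example earlier on this page.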
Beagle2 will focus — but not exclusively — on biomedical research supported by NIH funding.
Some of the project areas include:
- Quantitative determination of free energies associated with large conformational changes in cell membranes
- Molecular structure and ligand interaction prediction in cellular networks
- Whole-body model for studies of electrical and thermal injury
- Computation of possible configurations of transcriptional networks
- Data-mining of biomedical literature to understand regulatory networks in cancer and to understand complex disease processes
- Mapping brain structure to human behavior
- Quantitative medical-image analysis
- High volume text-mining
- Genomic and metagenomic data analysis
- Modeling of economic impact of climate change
- Large scale molecular dynamics
- Modeling of ion channels in nerve cells
- Study of transcriptional networks