Student Cluster Competition
Update: Eight Teams from Four Continents Selected for SC13 Student Cluster Competition
Eight student teams from universities in the United States, Germany, China and Australia have been selected to compete in the Standard Track of the Student Cluster Competition at SC13. This year’s teams include the first-ever team from Australia, which will travel nearly 9,000 miles from Perth for the competition. The competition will feature teams from:
- IVEC, a joint venture between CSIRO, Curtin University, Edith Cowan University, Murdoch University and the University of Western Australia (Australia)
- Massachusetts Green High Performance Computing Center (USA)
- National University of Defense Technology (China)
- The University of Colorado, Boulder (USA)
- The University of the Pacific (USA)
- The University of Tennessee, Knoxville (USA)
- The University of Texas, Austin (USA)
- Friedrich-Alexander University of Erlangen-Nuremberg (Germany)
“We received 13 proposals for the Student Cluster Competition and planned to select six teams to compete, but the caliber of the proposals was so high we decided to go with eight teams,” said Student Cluster Competition Chair Dustin Leverman of Oak Ridge National Laboratory. “Though the selection process was very difficult, it makes me confident that it will be a good competition this year. It’s always exciting to watch the students as they apply what they’ve learned in this hands-on race to the finish.”
Held as part of HPC Interconnections, the Student Cluster Competition is designed to introduce the next generation of students to the high-performance computing community. Over the last couple of years, the competition has drawn teams from around the world, including Canada, China, Costa Rica, Germany, Russia, and Taiwan.
In this real-time, non-stop, 48-hour challenge, teams of undergraduate and/or high school students assemble a small cluster on the SC13 exhibit floor and race to demonstrate the greatest sustained performance across a series of applications. In the Standard Track, teams of six (6) students partner with vendors to design and build a cutting-edge cluster from commercially available components that does not exceed a 26-amp power limit, and work with application experts to tune and run the competition codes.
New at SC13 is the Commodity Cluster track. In this track, teams of five (5) undergraduate and/or high school students will bring commercially available hardware that must not exceed a $2500 USD retail price limit and a 15-amp power limit. Both “off-the-shelf” and “off-the-wall” solutions are encouraged. Teams selected for the Commodity Cluster track are:
- Slippery Rock University, Slippery Rock, Penn.
- Skyline High School, Salt Lake City, Utah
- Bentley University, Waltham, Mass., and Northeastern University, Boston, Mass.
- Arizona State University, Tempe, Ariz.
The Commodity Cluster track is sponsored by:
- Bank of America
- UTRC (United Technologies Research Center)
- Procter and Gamble
Prior to the competition, a team works with its advisor and vendor partners to design and build a cutting-edge, commercially available small cluster constrained by the 26-amp power limit (Standard track) or $2500 USD cost limit and a 15-amp power limit (Commodity track). Teams must also learn the three open-source competition applications, which are listed below in the “Application Details” section. Teams are encouraged to enlist the help of domain specialists to assist with building, tuning, and understanding how the applications work. A fourth mystery application will be revealed on Monday night when the competition officially kicks off.
During SC13, teams will assemble, test, and tune their machines and run the HPCC benchmarks until the starting bell rings on Monday night at the Exhibit Opening Gala, when they will be given the competition data sets for all four applications. In full view of conference attendees, teams will execute the prescribed workload while showing progress and science visualization output on large high-resolution displays in their areas. Teams race to correctly complete the greatest number of application runs during the competition period until the close of the exhibit floor on Wednesday evening.
The showcase portion of the competition allows teams to show off what they’ve learned and what they can do with their equipment. Veteran HPC experts will be present to judge the visualizations and to interview each team on their cluster and application knowledge.
The winning team in each track will be determined based on a combined score for workload completed, benchmark performance, conference attendance, and interviews. Recognition will be given for the highest LINPACK (Standard track) and FLOPS-per-dollar LINPACK (Commodity track). The winning teams will be recognized at the SC13 Awards Ceremony luncheon on Thursday.
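The FLOPS-per-dollar metric mentioned above is straightforward arithmetic. The following sketch is purely illustrative (the performance and cost figures are invented, not actual competition results) and shows how a Commodity Track team might compare candidate configurations against the $2500 cap:

```python
# Hypothetical FLOPS-per-dollar comparison for the Commodity Track.
# All numbers below are invented examples, not competition data.

def flops_per_dollar(hpl_gflops: float, cluster_cost_usd: float) -> float:
    """LINPACK (HPL) performance per dollar of retail hardware cost."""
    return (hpl_gflops * 1e9) / cluster_cost_usd

# A $2,500 cluster sustaining 250 Gflop/s on HPL:
print(f"{flops_per_dollar(250.0, 2500.0):.2e} FLOPS/$")  # 1.00e+08 FLOPS/$
```

Because the denominator is capped at $2500, the metric largely rewards squeezing the most sustained HPL performance out of the fixed budget.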
Competition teams in the Standard track comprise six (6) student members and an advisor. Teams in the Commodity track comprise five (5) student members and an advisor.
Student members must be enrolled at an educational institution, but MUST NOT have been granted an undergraduate degree (as of the start of the contest). High school students are also eligible and encouraged to participate, either as team members on a college team or as members of a team made up of all high school students. Teams may be composed of members from multiple educational institutions, including teams with combined high school and college members. Institutions are not allowed to send teams to both tracks.
The required advisor must be an employee affiliated with the team’s educational institution. The advisor is encouraged to mentor the students leading up to the competition. Teams are encouraged to work with individuals from their institutions beyond their advisor, but only the advisor will receive sponsorship from the competition. During the competition, the advisor is responsible for the team at all times and must be available 24 hours a day. While the advisor is not allowed to provide technical assistance during the competition, he or she is encouraged to make fuel runs for the team and cheer during the long nights.
Teams are responsible for a portion of the costs associated with participation. Teams must provide their cluster hardware, shipping costs, team transportation (airfare and/or ground transportation) to SC13 and some meal costs. The Student Cluster Competition committee encourages teams to get as much financial support as possible from vendors, but the committee has funds to help cover registration, hotel, and exhibition costs; some food will be provided as part of SC13 breaks or social events. Teams are required to have sponsors (see next section) to help offset the required costs.
Team members must agree to a number of safety rules for the event. These rules are intended to prevent injury to students and to prevent damage to the facility and the equipment. Among a number of safety rules, each contestant will be limited to a maximum of 12 hours per day in the contest area. A safe competition makes a fun competition!
Teams will be required to attend other aspects of the convention beyond the Student Cluster Competition. Further details will be provided before the competition.
As a team sponsor, you will not only support and inspire the next generation of HPC experts but also benefit from added exhibit floor exposure and from having a group of enthusiastic students describing your technology.
If you are a team in search of a sponsor or a sponsor in search of a team, please contact us immediately at firstname.lastname@example.org to allow us ample time to facilitate a suitable match. Team partnerships must be solidified by the final architecture proposal deadline.
Teams are responsible for obtaining their cluster hardware and transporting it to the convention center. We encourage teams to look at sponsorship from hardware and software vendors, supercomputing centers and national labs, and other businesses. Teams in the Standard Track must partner with one or more sponsors to provide cluster hardware for the duration of the competition. Teams in the Commodity Track are encouraged to find vendor or local sponsors to provide the hardware for the duration of the competition or cover its cost. For both tracks, sponsors are encouraged to provide additional financial support, including hotel, booth decoration (signage, swag, and collateral material), per diem, and so on. This year teams must fund their own air/ground transportation to Denver for the competition. More assistance from sponsors allows us to invite more teams to compete.
In all cases, teams should be sure to have their cluster hardware for practice and preparation, ideally for one month or more prior to the competition. Sponsors should provide the necessary technical assistance to ensure that the proposed configuration remains under the power budget for the competition. They are strongly encouraged to provide training and interact closely with their teams in designing the computational systems.
Booths will be 10 x 10 feet and will back onto a solid wall or curtain. Teams must fit into this space for all activities and must keep their display visible to the viewing public.
The computational hardware (processors, switch, storage, etc.) must fit into an enclosure no larger than a single 42U rack, which must be provided by the team. Since thermal issues may be a factor, teams should exhaust hot air vertically from their systems.
The hardware must be commercially available at the time of competition start (Monday morning) and teams must display, for public view, a complete list of hardware and software used in the system. Commodity Track displays must include vendor sources and retail pricing information that will be reviewed and verified by the committee. Total cost of a commodity track team’s entire cluster must not exceed $2500 USD retail pricing, excluding any spare equipment brought along in case of emergencies, regardless of whether components are being purchased, donated, or loaned.
With the exception of spare components, no changes to the physical configuration are permitted after the start of the competition. Sleep states may be used (but not power-off or hibernation), as long as the system does not exceed the power limit when all machines in the rack are powered on in their lowest running (non-sleep) OS state. Teams are therefore not permitted to bring more equipment than fits within the power limit with all equipment powered on and running an OS. Other systems (such as laptops and monitors) may be powered from separate power sources provided by the conference.
Each team should bring a projector or other large visual display on which to continually showcase its progress, displaying visualization output from the applications along with any other dynamic content the team chooses. The contest area is in the public area of the conference, and the intention is to attract visitors to the contest activities.
A network drop will be provided for outgoing connections only. Teams will NOT be permitted to access their clusters from outside the local network. Wireless for laptops will be available throughout the convention center via SCinet. Computational hardware may be connected via wired connections only – wireless access is not permitted.
All components of the system, and any equipment used to access it, must be powered through the 120-volt, 20-amp circuits provided by the conference:
For the Standard Track, two circuits, each with a soft limit of 13 amps, will be provided. Power from each circuit will be delivered via an APC 7801 metered power distribution unit (PDU). Standard Track teams should tune their equipment never to exceed the 13-amp limit on either of the two PDUs, and should be prepared to adjust their hardware’s power consumption based on the amperage reported by the PDUs, which teams can read from the PDUs’ LED readouts as well as over Ethernet via SNMP. Electronic alarms will be sent if the power draw on either PDU exceeds the 13-amp soft limit, and point penalties will be assessed for each alarm and for failing to respond appropriately. Random momentary blips over the 13-amp limit due to supply voltage variations are not penalized, but any Standard Track team that registers over 13.00 amps for one minute is subject to penalty, and teams may be penalized or disqualified if they ever register 15 amps or more for any duration. Teams should never approach the 20-amp physical limit of the PDU circuit: convention center power is also breakered at 20 amps and may blow before the PDU, causing delays for the team as well as aggravation and expense for the competition organizers.
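The soft-limit rule above lends itself to simple monitoring logic. The sketch below is a hypothetical illustration, not official competition tooling: a real deployment would poll the APC 7801 PDU over SNMP for live amperage, while here we assume the per-second samples have already been collected in a list and simply check for a sustained run above the 13.00-amp soft limit.

```python
# Hypothetical soft-limit check a Standard Track team might run.
# Assumes `samples` holds one amperage reading per second, e.g. polled
# from the PDU over SNMP (polling code omitted).

HARD_LIMIT_A = 15.0   # registering this for any duration risks disqualification
SOFT_LIMIT_A = 13.0   # sustained draw above this for 60 s draws a penalty

def sustained_over_limit(samples, limit=SOFT_LIMIT_A, window=60):
    """Return True if `window` consecutive samples all exceed `limit`."""
    run = 0
    for amps in samples:
        run = run + 1 if amps > limit else 0
        if run >= window:
            return True
    return False

# A momentary blip over the soft limit is not penalized:
print(sustained_over_limit([12.5] * 58 + [13.4, 12.6]))  # False
# A full minute over the soft limit is:
print(sustained_over_limit([13.2] * 60))  # True
```

In practice a team would alarm well before the 60-second mark so there is time to throttle CPU frequencies or pause jobs before a penalty accrues.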
For the Commodity Track, one circuit will be provided. Commodity Track teams will not receive metered PDUs; instead, they will have power strips that trip if usage exceeds 15 amps. Teams should therefore tune their systems never to exceed the 15-amp limit, or all power to their system will be interrupted.
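For rough planning, the amperage limits above translate into wattage budgets. The arithmetic below assumes nominal 120 V mains; actual available wattage varies with line voltage and power factor, so this is a back-of-the-envelope sketch rather than a guarantee.

```python
# Back-of-the-envelope power budgets, assuming nominal 120 V mains.
VOLTS = 120.0

standard_budget_w = 2 * 13.0 * VOLTS   # two circuits, 13 A soft limit each
commodity_budget_w = 15.0 * VOLTS      # one circuit, 15 A strip limit

print(f"Standard Track:  {standard_budget_w:.0f} W")   # 3120 W
print(f"Commodity Track: {commodity_budget_w:.0f} W")  # 1800 W
```

Teams typically leave headroom below these figures, since benchmark runs can spike draw well above idle levels.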
Teams may choose any operating system and software stack that will run the challenges and display software. Teams may pre-load and test the applications and other software. Teams may study and tune the open-source benchmarks and applications for their platforms (within the rules, of course). We encourage teams to use schedulers to run their clusters autonomously while they enjoy other aspects of the conference.
As part of the SC13 Student Cluster Competition, each team will be required to run a set of scientific applications, described below.
• EARTH, ATMOSPHERIC and PLANETARY SCIENCES
WRF (Weather Research and Forecasting)
The WRF Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. It features multiple dynamical cores, a 3-dimensional variational (3DVAR) data assimilation system, and a software architecture allowing for computational parallelism and system extensibility. WRF is suitable for a broad spectrum of applications across scales ranging from meters to thousands of kilometers. The WRF Model is written in Fortran 90 with interfaces written in C; it depends on the MPI and netCDF libraries.
NEMO5 (NanoElectronics Modeling Tools)
NEMO5 is a multipurpose, multiscale highly parallelized nanodevice simulation tool based on open-source packages. The core capabilities of NEMO5 lie in the atomic-resolution calculation of nanostructure properties: strain relaxation, phonon modes, electronic structure using the tight-binding model, self-consistent Schroedinger-Poisson calculations, and quantum transport. A variety of semiconductor devices have been successfully simulated by NEMO5, including: Quantum Dots, HEMTs, double-gate MOSFETs, nanowires, topological insulators, and graphene.
NEMO5 is written in C++ and embeds a variety of portable open-source packages. NEMO5 source code and executables are available for free, upon request. NEMO5 requires C, C++ and Fortran compilers and depends on the MPI, PETSc, SLEPc and Boost libraries. Additionally, a Python interface to NEMO5 is included. NEMO5 has been tested on Linux (Debian, RHEL5/6, CentOS) and Windows systems. VisIt and ParaView are recommended for visualization.
GraphLab
The GraphLab project started in 2009 to develop a new parallel computation abstraction tailored to machine learning. GraphLab scales to graphs with billions of vertices and edges easily, performing orders of magnitude faster than competing systems. GraphLab combines advances in machine learning algorithms, asynchronous distributed graph computation, prioritized scheduling, and graph placement with optimized low-level system design and efficient data-structures to achieve unmatched performance and scalability in challenging machine learning tasks.
The GraphLab project consists of a core C++ GraphLab API and a collection of high-performance machine learning and data mining toolkits built on top of the GraphLab API. The API is built on top of standard cluster and cloud technologies: interprocess communication is accomplished over TCP-IP and MPI is used to launch and manage GraphLab programs. Each GraphLab process is multithreaded to fully utilize the multicore resources available on modern cluster nodes. GraphLab supports reading and writing to both Posix and HDFS filesystems.
• Mystery Application
The information for this application will be revealed at the competition.
For information about the Student Cluster Competition from previous years, please go to: http://www.studentclustercomp.