Invited Talks

Fifteen speakers from across HPC and the world of science – including Nobel Laureate Saul Perlmutter – are set to share their unique perspectives with the international supercomputing community as part of the invited talks program at SC13 this November. Invited talks provide a longer-term perspective than individual research papers, and put multiple research insights into broader context. At SC13 you will hear about the hardest challenges, the latest innovations in supercomputing and data analytics, and how new approaches are addressing the toughest and most complex questions of our time.

This year’s slate of fifteen invited speakers will address every aspect of HPC: the ways in which HPC and supercomputing are shaping modern scientific and engineering discovery, and the ways in which HPC is shaping relationships among nations. For details of when and where to hear these talks, please visit the interactive program schedule.

Climate Earth System Modeling for the IPCC Sixth Assessment Report (AR6): Higher Resolution and Complexity

Warren M. Washington
National Center for Atmospheric Research
Boulder, Colorado, USA

It is expected that higher-resolution coupled model components will be used for another round of climate and Earth system simulations using the Community Earth System Model (CESM). In addition, the treatment of physical processes will be more complex and more tightly coupled to the other components. A challenge is that the high-performance computer systems will be much faster and more difficult to use efficiently. Some preliminary simulations will be presented along with computational details.

Data, Computation, and the Fate of the Universe

Nobel Laureate Saul Perlmutter
University of California, Berkeley
Lawrence Berkeley National Laboratory

This talk will reach into the past to explain how integrating big data and careful analysis led to the discovery of the acceleration of the universe's expansion. It will go on to discuss the increasing importance and impact of coupling scientists with data, analysis, and simulation to gain future insights.

Big Data + Big Compute = An Extreme Scale Marriage for Smarter Science?

Alok N. Choudhary
Northwestern University
Voxsup, Inc.

Knowledge discovery has been driven by theory, by experiments, and by large-scale simulations on high-performance computers. Modern experiments and simulations involving satellites, telescopes, high-throughput instruments, sensors, and supercomputers yield massive amounts of data. What has changed recently is that the world is creating massive amounts of data at an astonishing pace and with astonishing diversity.

Processing, mining, and analyzing these data effectively and efficiently will be critical, because traditional ways of dealing with data cannot cope with their scale and speed. But there is also a social aspect to this acceleration: sharing “big data” unleashes thousands of people to ask questions and participate in discovery. This talk addresses a fundamental question: what are the challenges and opportunities in making extreme-scale systems an effective platform not only for traditional simulations, but also for data-intensive and data-driven computing that accelerates time to insight?

The Interplay Between Internet Security and Scale

Vern Paxson
University of California, Berkeley
Lawrence Berkeley National Laboratory

Internet security poses fundamentally hard problems due to its adversarial nature, but the challenges become especially complex at large scales. This talk will examine a range of such scaling issues: network speed, traffic volume, user diversity, forensic analysis, and the difficulties that attackers themselves face. These issues will be discussed both in the context of operational security and in terms of how they arise in cybersecurity research.

Performance Evaluation for Large-Scale Systems: Closed-Loop Control with Appropriate Metrics

Lizy Kurian John
University of Texas at Austin

The number of cores and systems utilized in high-end machines is increasing. On one hand, the availability of large numbers of nodes provides opportunities for isolating faulty nodes and finding enough working nodes; on the other hand, the presence of so many systems makes it difficult to isolate problems. Performance or power problems at one node, if not detected in a timely fashion, can result in massive problems. This talk will discuss challenges in evaluating large systems, metrics for detecting performance and power bugs, and potential approaches to creating reliable, high-performance, and energy-efficient systems with appropriate feedback.
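
As a purely illustrative sketch of the closed-loop idea, not a method from the talk itself, a monitor can sample a per-node metric, flag outliers against the population, and feed the result back to a scheduler; the metric, node names, readings, and threshold below are all hypothetical.

    import statistics

    def flag_anomalous_nodes(samples, tolerance=0.20):
        # Flag nodes whose metric (e.g., watts or seconds per iteration)
        # deviates from the population median by more than `tolerance`.
        # This is a toy feedback step, not the speaker's actual approach.
        median = statistics.median(samples.values())
        return [node for node, value in samples.items()
                if abs(value - median) > tolerance * median]

    # Hypothetical per-node power readings in watts; node names are made up.
    readings = {"node001": 310.0, "node002": 305.5,
                "node003": 512.0, "node004": 308.2}
    print(flag_anomalous_nodes(readings))  # ['node003']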

Extreme Scale Computational Biology

Klaus Schulten
University of Illinois

The molecular dynamics and visualization programs NAMD and VMD serve over 300,000 registered users in many fields of biology and medicine. These programs run on every kind of computer, from personal laptops to massive supercomputers, and will soon be able to generate atomic-level views of entire living cells, opening a treasure chest of data for biotechnology, pharmacology, and medicine. This lecture will summarize the impressive NAMD and VMD development over the last several years that led to the programs' excellent performance on Blue Waters, Titan, and Stampede. It will also describe the spectacular research discoveries enabled by petascale computing, which become more exciting every day and are leading to a veritable revolution in biology as researchers become able to describe very large structures that were previously inaccessible. Ongoing efforts in power-conscious NAMD and VMD computing on ARM+GPU processors, needed for utilizing the next generation of computers, will be discussed as well.
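
At their core, molecular dynamics codes such as NAMD repeatedly integrate Newton's equations of motion for every atom. The toy velocity-Verlet step below, written in plain Python/NumPy rather than taken from NAMD, is meant only to illustrate the kind of kernel that must scale to very large atom counts on machines like Blue Waters and Titan; the harmonic force law and particle counts are made up.

    import numpy as np

    def velocity_verlet_step(pos, vel, forces, masses, dt, force_fn):
        # One velocity-Verlet integration step: the basic time-stepping scheme
        # used by molecular dynamics codes (toy illustration, not NAMD code).
        acc = forces / masses[:, None]
        new_pos = pos + vel * dt + 0.5 * acc * dt**2
        new_forces = force_fn(new_pos)
        new_acc = new_forces / masses[:, None]
        new_vel = vel + 0.5 * (acc + new_acc) * dt
        return new_pos, new_vel, new_forces

    def force_fn(r):
        # Harmonic restoring force F = -k r with a made-up spring constant.
        return -0.1 * r

    # Hypothetical system: three non-interacting particles in a harmonic trap.
    pos = np.random.rand(3, 3)
    vel = np.zeros((3, 3))
    masses = np.ones(3)
    forces = force_fn(pos)
    for _ in range(1000):
        pos, vel, forces = velocity_verlet_step(pos, vel, forces, masses, 0.01, force_fn)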

Visualization of Strong Ground Motion and Tsunami for the Great 2011 Off Tohoku, Japan, Earthquake

Takashi Furumura
University of Tokyo

The strong impact of the 2011 earthquake off Tohoku, Japan (Mw=9.0), was reproduced by a large-scale FDM simulation on the K computer and the Earth Simulator, using a detailed earthquake source rupture model and a heterogeneous subsurface structure model. Visualization of the derived seismic wavefield illustrates the significance of this earthquake and the process by which large, long-duration ground motion developed in populated cities and a destructive tsunami was amplified along the Tohoku coast. The synthetic waveforms from the simulation are compared with observations from the high-density nationwide seismic network and the offshore ocean-bottom tsunami network deployed across Japan. We believe such reliable disaster information will be valuable for earthquake disaster reduction. Toward this goal, the development of integrated earthquake disaster prevention simulations is presented, linking sequences of simulations for earthquake nucleation, seismic wave and tsunami propagation, and building oscillation in interaction with soft soil.
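
For readers unfamiliar with the finite-difference method (FDM) mentioned above, the sketch below shows a schematic 1-D scalar-wave update in Python; it is a didactic stand-in, not the authors' 3-D elastic code, and the grid size, wave speed, and source are invented.

    import numpy as np

    # Schematic 1-D finite-difference time-domain update for the scalar wave
    # equation u_tt = c^2 u_xx (illustration only; the real simulation is a
    # 3-D heterogeneous elastic model run on the K computer / Earth Simulator).
    nx, nt = 400, 800
    c, dx = 3000.0, 100.0     # wave speed (m/s) and grid spacing (m), hypothetical
    dt = 0.5 * dx / c         # time step chosen to satisfy the CFL stability condition
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    u[nx // 2] = 1.0          # toy point source at the center of the grid

    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
        u_next = 2.0 * u - u_prev + (c * dt) ** 2 * lap
        u_prev, u = u, u_next  # fixed (zero) boundaries at both ends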

Uncertainty Quantification in Computational Models

Habib N. Najm
Sandia National Laboratories

Models of physical systems involve inputs/parameters determined from empirical measurements, which therefore exhibit a degree of uncertainty. Estimating the propagation of this uncertainty into computational model output predictions is crucial for purposes of validation, design optimization, and decision support. Recent years have seen significant developments in probabilistic methods for efficient uncertainty quantification (UQ) in computational models. These methods are grounded in the use of functional representations for random variables. In this context, the utility of Polynomial Chaos (PC) UQ methods has been demonstrated in a range of physical models, including structural mechanics, porous media, fluid dynamics, heat transfer, and chemistry. While high dimensionality remains a challenge, great strides have been made in dealing with moderate dimensionality along with nonlinearity and oscillatory dynamics. This talk will outline PC UQ methods and their utility both in estimating uncertain input parameters from empirical data and in propagating parametric uncertainty to model outputs.
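
As a brief orientation (generic notation, not taken from the talk), a Polynomial Chaos representation expands a model output as a truncated spectral series in orthogonal polynomials of standardized random variables \xi that represent the uncertain inputs:

    f(x,\xi) \;\approx\; \sum_{k=0}^{P} c_k(x)\,\Psi_k(\xi),
    \qquad
    \mathbb{E}[f] \approx c_0(x),
    \qquad
    \mathrm{Var}[f] \approx \sum_{k=1}^{P} c_k(x)^2 \,\langle \Psi_k^2 \rangle,

where the \Psi_k are orthogonal polynomials (e.g., Hermite polynomials for Gaussian \xi) and the coefficients c_k are obtained by Galerkin projection, non-intrusive spectral projection, or regression; low-order output statistics then follow directly from the coefficients, as indicated.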

Europe’s Place in a Global Race

Richard Kenway
University of Edinburgh

This talk describes Europe’s response to the opportunities and challenges of high-performance computing over the past 25 years. Today, those opportunities continue to grow, but so do the challenges: how to exploit exascale architectures and big data, how to organize e-infrastructure on a continental scale, and how to deliver the promised impact on the economy, science and society? Europe perceives itself to be in a global race. Will it win?

Integration of Quantum Computing into High Performance Computing

Colin P. Williams
D-Wave Systems, Inc.

The media likes to portray quantum computing as being in a head-to-head race with high performance computing. In reality, we foresee ways for quantum computing to complement and enhance high performance computing, and vice versa. This talk will describe D-Wave's approach to quantum computing, including its operating principles, system architecture, and evidence of quantumness, and will report the latest performance data. In particular, I will describe D-Wave's new "Vesuvius" 512-qubit quantum processor and several examples of computational problems mapped to its architecture. Next, strategies for integrating D-Wave's quantum processor into mainstream high performance computing systems will be discussed. As the quantum processor is naturally well suited to solving discrete combinatorial optimization, machine learning, and artificial intelligence problems, the talk should be of broad interest to computer scientists, physicists, and engineers with interests in a wide range of application areas.
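
To give a flavor of what mapping a problem to this kind of architecture involves (a generic illustration, not D-Wave's tooling or API), a discrete optimization problem is first cast as a quadratic unconstrained binary optimization (QUBO), equivalently an Ising model; the tiny brute-force check below uses made-up coefficients and problem size.

    from itertools import product

    # Toy QUBO: minimize sum over (i, j) of Q[i, j] * x_i * x_j for binary x.
    # The coefficients are invented for illustration; on annealing hardware the
    # problem would instead be embedded onto the physical qubit connectivity graph.
    Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): 2.0,
         (0, 1): 2.0, (1, 2): -1.5}

    def energy(x):
        return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

    best = min(product((0, 1), repeat=3), key=energy)
    print(best, energy(best))  # lowest-energy assignment and its value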

Computational Tools in Secondary Chemistry Education

Thom Dunning
University of Illinois at Urbana-Champaign

Science and engineering research has been revolutionized by computation, and opportunities abound for revolutionizing teaching and learning with computing technologies. But, to date, computing has largely been used to organize, prepare for, and disseminate courses. The potential of using these technologies to teach students the fundamental principles of a subject through authentic computational simulation is largely unexplored. Computational tools and simulations have been used to help teach high school and college chemistry. The simulations enable students to gain a deep, rich appreciation for the basic principles of chemistry. Further, the use of computational tools is enthusiastically embraced by teachers and students, resulting in improved performance of both and leading to increased student interest in chemistry. This presentation will cover experiences using computational tools and simulations in the high school chemistry classroom, as well as in undergraduate organic chemistry courses.

Scalable Computing on Tianhe-2 System

Yutong Lu
National University of Defense Technology (NUDT), Changsha, China

Scalability is one of the big challenges for post-petascale and exascale systems, requiring integrated technologies from architecture design to the system software stack. This talk presents the efforts at NUDT to design and implement the Tianhe-2 system for better scalability. In moving from a heterogeneous to a neo-heterogeneous architecture, comparisons among CPU, GPU, and MIC were obtained. A high-bandwidth, low-latency communication system with offloaded collective operations has been designed to improve the scalability of the interconnect. Multiple protocols in the custom MPI system, supporting different application models, are being improved. A hybrid, hierarchical I/O architecture and file system have also been designed to tackle the I/O bottleneck. A whole-system co-design policy for scalable computing allows domain applications to scale efficiently to even larger sizes.
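
As a small point of reference for the offloaded collective operations mentioned above (standard MPI usage, not NUDT's custom software), a collective such as an allreduce looks like this in Python with mpi4py; the array contents are purely illustrative.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    local = np.full(4, comm.Get_rank(), dtype=np.float64)  # toy per-rank data
    total = np.empty_like(local)
    # Collectives like this allreduce are exactly the operations a custom
    # interconnect can accelerate by offloading them to the network hardware.
    comm.Allreduce(local, total, op=MPI.SUM)
    if comm.Get_rank() == 0:
        print(total)  # element-wise sum of every rank's contribution

Run under a launcher such as mpirun -np 4 python allreduce_demo.py (script name is just an example); at the scale of a system like Tianhe-2 the same call spans a very large number of nodes, which is where hardware offload and protocol tuning matter.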

The Role of HPC in Education

Thom Dunning
University of Illinois at Urbana-Champaign, Champaign, Illinois, USA

Science and engineering research has been revolutionized by computation, and opportunities abound for revolutionizing teaching and learning with computing technologies. But, to date, computing has largely been used to organize, prepare for, and disseminate courses. The potential of using these technologies to teach students the fundamental principles of a subject through authentic computational simulation is largely unexplored.

Computational tools and simulations have been used to help teach high school and college chemistry. The simulations enable students to gain a deep, rich appreciation for the basic principles of chemistry. Further, the use of computational tools is enthusiastically embraced by teachers and students, results in improved performance of both, and leads to increased student interest in chemistry.

This presentation will cover experiences using computational tools and simulations in the high school chemistry classroom as well as in undergraduate organic chemistry courses.

Gathering Intelligence from Massive Graphs

David Bader
Georgia Institute of Technology, Atlanta, Georgia, USA

TBD

Transformative Impact of Computation in a Data-Driven World

Farnam Jahanian
National Science Foundation

We are witnessing unprecedented growth of scientific and social data, deep integration of the cyber and physical worlds, wireless connectivity at broadband speeds, and seamless access to resources in the cloud. These advances are transforming the way people work, play, communicate, learn, and discover. Investments in ambitious, long-term research and infrastructure, as well as in the development of a workforce empowered by computation and data, enable these advances and are a national imperative. This talk will focus on the technological advances and emerging trends that are shaping the future and accelerating the pace of discovery and innovation across all science and engineering disciplines. It will also describe how these trends inform priorities and programs at the National Science Foundation.