HPC Impact Showcase

Premier Sponsors: Deere & Company and Procter & Gamble

New for SC13, the HPC Impact Showcase reveals real-world applications of high-performance computing (HPC) via presentations in a theater setting on the exhibit floor.

The Showcase is designed to introduce attendees to the many ways that HPC is shaping our world through testimonials from companies, large and small, not directly affiliated with an SC13 exhibitor. Their stories relate the real-world experience of what it took to embrace HPC to better compete and succeed in their fields.

Whether you are new to HPC or a long-time professional, you are sure to see something new and exciting in the HPC Impact Showcase. Presentations will be framed for a non-expert audience interested in technology, and will discuss how the use of HPC has resulted in design, engineering, or manufacturing innovations (and will not include marketing or sales elements!). There are 16 confirmed participants, from John Deere and Procter & Gamble to Michael Waltrip Racing (so you can get your NASCAR on!).

Program of events

Introduction to the HPC Impact Showcase

Date: Monday, November 18th
Time: 7:00-7:30PM
Presenter(s): Raymond Bair (Argonne National Laboratory), David Halstead (National Radio Astronomy Observatory)

Abstract: Overview and highlights of the exciting presentations to be given by industry leaders throughout the week.

The HPC Initiative at the Council on Competitiveness

Date: Monday, November 18th
Time: 7:30-8:00PM
Presenter(s): Dr. Cynthia R. McIntyre, Dr. Walter L. Kirchner (The Council on Competitiveness)

Abstract: The 21st century poses new challenges to American competitiveness. Globalization, high-speed communications, enterprise resilience, and energy sustainability issues are forcing organizations at all levels to rethink and redefine how U.S. companies will remain competitive. The Council on Competitiveness is a nonpartisan and nongovernmental organization composed of CEOs, university presidents, and labor leaders working to ensure U.S. prosperity (Compete.org). The council’s High Performance Computing Initiative stimulates and facilitates wider usage of HPC across the private sector, with the goal of propelling  productivity, innovation, and competitiveness. The initiative includes efforts to foster public-private sector partnerships (e.g., the National Digital Engineering and Manufacturing Consortium, NDEMC.org) in order to better leverage resources and expertise and help overcome barriers to more widespread private sector usage.

HPC and Racecar Design

Date: Tuesday, November 19th
Time: 10:20-11:00AM
Presenter(s): Donour Sizemore (Michael Waltrip Racing)

Abstract: In professional motorsports, performance is measured by small margins – fractions of a second and thousandths of an inch. Close competition leads to tight regulation, which in turn escalates costs. Significant resources are needed to gain small advantages.

Aerodynamics is crucial to vehicle performance. Unfortunately, field testing aerodynamic systems is extraordinarily expensive. Most automotive engineers now use computational fluid dynamics to develop car bodies and cooling systems. In this talk, I discuss how Michael Waltrip racing uses high-performance computing to accelerate our design/test cycle. Topics include application development, resource acquisition, and results validation.

Michael Waltrip Racing is a professional stock car racing organization in Cornelius, NC. The company designs, builds, tests, and fields three teams in the NASCAR Sprint Cup Series.

Advanced Modeling of the Human Skin Barrier using High Performance Computing

Date: Tuesday, November 19th
Time: 11:00-11:40AM
Presenter(s): Dr. Russell DeVane (The Procter & Gamble Company)

Abstract: The stratum corneum (SC) forms the outermost layer of mammalian skin, providing a defense against external assault while also preventing dehydration. The 15 μm-thick structure is composed of protein-rich corneocytes embedded in a multilamellar lipid matrix. Although extensive experimental work has produced a somewhat detailed picture of the SC lipid matrix, molecular-level details are still lacking. Two key missing pieces are (1) knowledge of the molecular arrangement of the SC lipids and (2) a detailed mechanistic understanding of skin barrier disruption. These missing pieces, among others, restrict efforts to quantitatively model skin, especially with respect to skin penetration. Through a Department of Energy INCITE award, we are using the Oak Ridge Leadership Computing Facility system Titan to run large-scale molecular dynamics simulations in order to shed light on these questions regarding the SC lipid matrix. Guided by the most reliable experimental data available, we are investigating ensembles of coupled subsystems of SC lipids; the SC lipid matrix is a complex superstructure that cannot be modeled by independent simulations of a simplified model. Our massively parallel approach exploits leadership computing resources while using fully open source software. At this point we have not only gained insight into the mechanisms of skin lipid disruption but also acquired a better understanding of the roles of specific lipid components in the SC lipid matrix.

HPC – A Great Enabler in the Advancement of Chemical Technology and Science

Date: Tuesday, November 19th
Time: 11:40AM-12:20PM
Presenter(s): Dee Dickerson (The Dow Chemical Company)

Abstract: Early in the 1990s, The Dow Chemical Company (TDCC) ventured into the arena of computational fluid dynamics simulations using a supercomputer at NCSA. The purpose was to understand fluid migration and absorption by the superabsorbent (Drytech® particles) in baby diapers. Since then Dow has been performing computational simulations not only to solve problems but also to advance discoveries in chemistry.

Today Dow uses over 4,000 CPU cores for a multitude of research projects, manufacturing process design, problem solving, and process optimization.

HPC has also enabled better materials design and models for plant troubleshooting. Recently Dow AgroSciences (DAS) used HPC to identify and validate millions of genetic variants from thousands of plants. Such discovery is an important piece of the DAS pipeline for new variety development.

HPC enabled Dow researchers to meet stringent timelines and deliver implementable solutions to businesses. HPC provides the platform where large, complex, reacting flow models can be analyzed in parallel to significantly shorten the delivery time. Dow's science and engineering community advances technologies and develops cutting-edge computational capabilities involving complex multiphase reactive processes in dispersion applications, possible only with the advancement of high-speed, high-capacity, and large-memory HPC systems.

This presentation will demonstrate the wide application of HPC across disciplines and businesses in Dow. Many invention disclosures, priority applications, patents, external papers, and internal research reports have been directly based on the results of computational simulations. HPC is definitely an integral part of the scientific process at TDCC.

HPC, A Journey You Must Take

Date: Tuesday, November 19th
Time: 12:20-1:00PM
Presenter(s): Dr. Mohamad S. El-Zein (Deere & Company)

Abstract: Computing is a journey in which the speed of the vehicle has increased dramatically within a short period of time. The faster the vehicle, the shorter the product cycle time, provided the training, the software, the personnel, and the financial resources are available. Many surprising destinations and much false advertising need to be sorted out along the way, as do the entrenched legacy systems that companies depend on and that operate with low cost and speed.

For many enterprises, the journey starts with the pressures of the marketplace and the need to harvest the fruit of new technology and innovation. Along the way, the hurdles become overwhelming, whether it is the infrastructure, the software cost, or the trained resources.

Surprisingly, the main drawback to high-performance computing (HPC) is not the compute power or the hardware cost but the prehistoric cost models of the "engineering software."

This presentation will outline the computing journey over the years and where HPC is and will be used.

Accelerating Innovation While Reducing Risk – High Impact HPC at a Formula One Team and a Racing Bike Wheels Manufacturer

Date: Tuesday, November 19th
Time: 1:00-1:40PM
Presenter(s): Steve M. Legensky (Intelligent Light)

Abstract: For performance-oriented engineering organizations, high-performance computing (HPC) offers the promise of accelerating performance while reducing risk. The HPC-enabled success of two such organizations will be compared: one with large in-house HPC resources and dedicated computational fluid dynamics (CFD) engineering staff and the other a small engineering and manufacturing company producing consumer products. Common needs and different HPC strategies are discussed. These organizations depend on working with commercial software vendors to develop their capabilities and direct their investments to core skills and knowledge.

Red Bull Racing dominates Formula 1 racing with three consecutive Constructors and Drivers Championships, while Zipp Speed Weaponry has developed leading-edge products that resulted in a doubling of category revenue and 120 new manufacturing jobs for a market leader in a mature segment. The audience will see how different investment and HPC use models are producing extraordinary results and are illustrative of the path forward for innovators in many industries of any size. Through the use of HPC for CFD, these companies rapidly develop and deliver accurate, actionable information to engineers and managers through engineered CFD workflows. Each company has developed the ability to rapidly explore many ideas, put more mature designs into testing and production, and reduce the risks of exploring novel design approaches.

Next-Generation Sequencing: HPC Challenges and Cloud-Based Solutions

Date: Tuesday, November 19th
Time: 1:40-2:20PM
Presenter(s): Jason Tetrault

Abstract: Pharma companies are faced with unprecedented amounts of genetics data coming out of the public domain, which they need to combine with their own sequencing datasets in order to gain a better understanding of disease, its translation across species, and the corresponding stratification of a given species population. Most of today's public datasets are stored in the cloud, and it is almost impossible to bring all the data in-house for analysis. Thus, new hybrid HPC solutions need to be implemented. This new cloud reality brings an interesting challenge to the risk-averse, regulated pharmaceutical industry. The question is not "if" but rather "how" we run computing in the cloud and still protect patients' and donors' privacy (e.g., HIPAA, EU vs. U.S. legislation). This presentation will discuss "how" we got there, describing the challenges we faced, the decisions we took, and the solutions we built along the way to support our next-generation sequencing efforts.

High-Performance Computing in Pharmaceutical R&D - Transforming Data Into Therapies

Date: Wednesday, November 20th
Time: 10:20-11:00AM
Presenter(s): Guna Rajagopal (Janssen Research & Development (Johnson & Johnson))

Abstract: New experimental modalities incorporating technological advances in miniaturization, parallelization, and especially robotics have opened up new ways to study biological systems in their normal and diseased states and have catalyzed transformational changes in how we study disease and search for cures. The knowledge gained will enable the development of predictive models that dramatically increase the specificity and timeliness of diagnosis for a wide range of common diseases. These quantitative, data-based models could also guide the search for new therapies by delivering accurate predictions about the efficacy of treatment options, thus realizing the era of personalized medicine. Because of the enormous scale of the data in this new era of ubiquitous sequencing/imaging and the concomitant development of other data-intensive 'omics and imaging technologies, the computational challenge of making sense of massive, heterogeneous data sets to guide our R&D programs throughout Janssen Pharma is formidable indeed. Our goal at Janssen is to develop and deploy state-of-the-art informatics capabilities and computational expertise in high-performance computing in order to extract information and knowledge from these "big data" sets. This focus is guiding our ongoing efforts, illustrated by a few examples, to support researchers throughout our R&D enterprise in helping them bring products to market – as the patients are waiting!

Bicycle Aerodynamics: Fast-Paced Research and Innovation Using Cloud HPC

Date: Wednesday, November 20th
Time: 11:00-11:40AM
Presenter(s): Mio Suzuki (Trek Bicycles)

Abstract: In the ever-accelerating pace of the bicycle industry product development cycle, use of cloud-based high-performance computing (HPC) and macro-based automation has been playing a critical role in pushing the boundaries of Trek’s engineering R&D. This presentation will highlight the benefits that HPC-based computational fluid dynamics (CFD) has brought to Trek. Examples will be used to showcase the industry-leading design process.

Trek has successfully implemented an HPC-based CFD program in which production-related simulation solver elapsed time was reduced by as much as 90%. Beyond speed, HPC has enabled the engineers to change their design methodology: more intelligent model choices are now possible with fast DOE and optimization. Additional computational power has also brought high-fidelity simulation using detached eddy simulation (DES) to further enhance the understanding of bicycle aerodynamics. URANS and overset simulations have also been used to uncover aerodynamic insights that may be unique to bicycle testing.

The successful integration of on-demand cloud HPC and local process automation has led to a significant increase in productivity, specifically in reduced analysis turnaround time and rapid design iterations. This cost-effective, high-output approach will continue to drive the culture of innovation at Trek.

HPC: Where the Rubber Meets the 'Simulated' Road

Date: Wednesday, November 20th
Time: 11:40AM-12:20PM
Presenter(s): Chris Baker (Goodyear)

Abstract: Historically, new tires were developed by build-and-test cycles. Tires were beyond the capability of simulation technology because of the extreme loads, high speeds, large deformations, and soft/stiff materials. Goodyear has invested over 25 years in high-performance computing (HPC). The company has collaborated with national laboratories to develop highly parallel modeling codes that simulate structural, fluid, noise, and thermal characteristics under all types of loads and road conditions. Goodyear has moved from punch cards and single-CPU mainframe computers, sharing cycles, to expensive "big iron" supercomputers, and on to thousands of low-cost Linux compute nodes, multicore processors, high-speed interconnects, parallel storage, and multisite clustering that spans the globe. The switch to commodity hardware, Linux operating systems, and open source software has propelled Goodyear's tire development and shortened its time to market, enabling Goodyear to grow even through economic downturns. HPC has allowed Goodyear to replace most of its physical build-and-test approach with proven simulations that are a fraction of the cost and yield data not possible from driving tires around a racetrack for months. HPC enables Goodyear tire designers to model multiple iterations of a tire, varying materials, tread patterns, weight, thicknesses, and sizes until the customer's needs are met.

High Performance Computing at the Bosch Research and Technology Center for Industrial Applications

Date: Wednesday, November 20th
Time: 12:20-1:00PM
Presenter(s): Aleksandar Kojic (Robert Bosch Research and Technology Center)

Abstract: Researchers at the Bosch Research and Technology Center use high-performance computing for numerous projects with direct industrial relevance. This talk will provide examples of projects in energy technology from among the following: ab initio simulation of battery and thermoelectric materials; multiscale simulation of battery performance and safety; and the application of large-eddy simulations to turbulent combustion, including injection, combustion, and cavitation. Additional examples from the data-mining team will also be presented. This team researches and develops machine learning algorithms that iteratively process datasets substantially larger than typical memory sizes, for application to areas such as health informatics, manufacturing quality, and predictive maintenance.

Jeco Plastic Products – HPC Pallet Design

Date: Wednesday, November 20th
Time: 1:00-1:40PM
Presenter(s): Paul M. Thompson (Jeco Plastic Products, LLC)

Abstract: Jeco Plastic Products designs and manufactures durable pallets and containers for the worldwide printing and automotive industries, as well as other industries handling heavy products or delicate items that must be protected from damage. A recent collaboration with Purdue University and the National Digital Engineering and Manufacturing Consortium provided us with access to powerful computers and software, which were used to perform stress calculations and demonstrate to customers that plastic pallets can perform as effectively as metal pallets in many situations. With HPC and finite element analysis, Jeco was able to significantly strengthen an existing pallet design by making structural changes based on detailed 3D models. Jeco plastic pallets are stronger and last significantly longer than wooden pallets. They are much lighter in weight than metal pallets and can be designed to hold much larger loads than would seem possible. The calculations we now can do enable us to develop plastic structures for a wide variety of applications.

Extreme Data for Business Situational Awareness Demands High Performance Computing (HPC)

Date: Wednesday, November 20th
Time: 1:40-2:20PM
Presenter(s): Earl J. Dodd (LGT Advanced Technology Limited)

Abstract: Juxtaposing several trends—growth of big data, the fact that most data centers use as much “overhead” energy as they do to power their computing resources, inadequate data center infrastructure, and the move to the smart grid—a new agile, scalable, and sustainable energy technology market has formed.

LGT Advanced Technology seeks to address this new market with its sustainable energy system (SES). The SES represents the first renewable energy system that allows all forms of energy to cogenerate while storing energy for on-demand use. At its core is a fully decoupled wind turbine and tower complex using hydraulic transmission. Its energy storage system feeds ground-based generators that are uniquely clustered, delivering high-voltage direct current to specialized energy consumers in nearly all weather conditions.

Our global optimization processes will couple a high-performance computing (HPC) system and command operation center, dynamically optimizing an entire energy farm based on real-time operating and financial metrics.

LGT's use of HPC for breakthrough innovation on tough engineering problems:

  • Modeling and simulations of systems-of-systems
  • Computational fluid dynamics modeling to predict turbine performance
  • Continuous monitoring, reporting, and autonomic functions

HPC-supported nontraditional aspects of the SES:

  • Acoustic modeling for bat avoidance
  • Reductions in infrastructure size and weight

Just as the Internet transformed information and knowledge, clean and sustainable power-producing technology—efficient turbines, energy storage, and agile cogeneration—will have similar global impacts. Renewable energy is universally recognized as a necessary path on the route to global environmental and economic recovery. This recovery is impossible without HPC.

When “Big Iron” Isn’t Big Enough

Date: Thursday, November 21st
Time: 10:20-11:00AM
Presenter(s): Matthew H. Evans, Ph.D. (Lockheed Martin Corporation)

Abstract: Lockheed Martin relies on high-performance computing (HPC) to deliver advanced technology solutions, from undersea vehicles to aircraft to satellites. High-fidelity, physics-based models and increasingly powerful machines provide critical input to design and test efforts. Designing, building, and sustaining integrated systems require more than just powerful, stand-alone modeling, however. Understanding and optimizing the performance of interconnected system-of-systems depend on linking physics-based models across computing systems. For example, simulating a regional, integrated air and missile defense system requires simultaneously running interdependent models in several HPC environments. From basic components to full-scale systems, high-performance computing at Lockheed Martin relies on powerful machines and powerful networks.

High-Performance Computing at BP

Date: Thursday, November 21st
Time: 11:00-11:40AM
Presenter(s): Keith Gray (BP)

Abstract: Seismic imaging and rock physics are key research focus areas for BP's Upstream Business, and high-performance computing (HPC) is critical to enabling the research breakthroughs required for our successful exploration and development programs.

Our team will complete construction of a new computing facility and deliver a 2.2-petaflop system this year. The presentation will review the strategy for our HPC team, our history, current capabilities, and near-term plans. Also discussed will be the business value delivered by our seismic imaging research.

Billion Dollar Bets: HPC in Banking

Date: Thursday, November 21st
Time: 11:40AM-12:20PM
Presenter(s): Bradford Spiers (Bank of America Corporation)

Abstract: Pending corporate approval

High Performance Computing for High Patient Throughput Magnetic Resonance Imaging

Date: Thursday, November 21st
Time: 12:20-1:00PM
Presenter(s): Steven Cauley (A.A. Martinos Center for Biomedical Imaging, Mass General Hospital)

Abstract: The increased importance of health care cost containment has led to the development of techniques for achieving high patient throughput in magnetic resonance imaging (MRI).  These clinically relevant techniques, often a product of collaborative industrial and academic research efforts, reduce cost by minimizing patient imaging time. Sophisticated image reconstruction algorithms are typically required to ensure clinically relevant image quality. High-performance computing (HPC) resources are then utilized to facilitate real-time imaging for radiologists and neuroscience researchers.

Through a partnership with Siemens Medical Solutions, we illustrate a successful collaboration for the development, integration, and distribution of the Simultaneous Multi-Slice (SMS) imaging technique. SMS allows information from multiple regions of the human body to be encoded concurrently. The expertise of our industry collaborator was leveraged for the formulation of appropriate image-quality metrics. Based on these metrics, penalty-driven optimization models were formulated. Utilizing continuous feedback from Siemens, we successfully embedded our SMS reconstruction algorithm into several MRI platforms. Software packages were jointly released for testing at major hospital and research centers around the world. Work is ongoing to develop SMS into a real-time tool through utilization of the multi-CPU, multi-GPU HPC environment available on the newest Siemens scanners.

At this time we are involved in a new joint collaboration with Siemens for the development of real-time compressed sensing (CS) algorithms. CS is a general technique for producing images using rapidly acquired data. Here, we will focus on the balance between algorithm simplicity, algorithm efficiency, and the cost-benefit of HPC for MRI applications.

For more information, please contact HPC-Showcase@info.supercomputing.org.