SC13 will include nearly 30 full-day and half-day workshops that complement the overall Technical Program events, expand the knowledge base of its subject area, and extend its impact by providing greater depth of focus. These workshops are geared toward providing interaction and in-depth discussion of stimulating topics of interest to the HPC community.

For SC13, workshop submissions and acceptance dates are being moved ahead to accommodate the expanded focus on peer review. SC’s continuing goal is to provide an informative, high-quality technical program that meets the highest academic standards.

New programs highlighting innovative technologies in HPC will also be added, alongside the traditional program elements that our community relies upon to stay abreast of the complex, changing landscape of HPC. Together, these elements make SC the most informative, exciting, and stimulating technical program in HPC!

Proposal submissions for workshops closed on February 7, 2013.

Broader Engagement Workshop - Sunday and Monday

Abstract: The Broader Engagement (BE) Program’s goal is to increase the participation of individuals who have been traditionally under-represented in HPC. The program offers special activities to introduce, engage and support a diverse community at the conference and in HPC. There will be several technical sessions covering topics ranging from domain-specific languages and parallelism to performance tools, visualization techniques and HPC applications. Hands-on exercises will be included. Other workshop features include a Mentor-Protégé session, a Scavenger Hunt, and an interactive session focused on improving participants’ ability to collaborate, communicate, and take full advantage of the conference and each other’s expertise.

This workshop is organized by Damian Rouson, Sourcery, Inc.; and Mary Ann Leung.

HPC Educators Workshop – Sunday

Abstract: The HPC Educators Program offers sessions of interest to SC attendees on what to teach and how to teach HPC-related concepts to undergraduate students and early-career graduate students. The Sunday morning session will be “Supercomputing in Plain English,” providing a broad overview of HPC. The session will cover fundamentals of supercomputing, including introductions to storage hierarchy, parallelism, and Moore's Law, as well as the motivation for using HPC. The Sunday afternoon session will concern the mapping of CS2013 and NSF/TCPP parallel and distributed computing (PDC) recommendations and resources to core computer science courses. There will also be time for participants to brainstorm, discuss and plan how they can incorporate these PDC topics into the core CS curriculum at their home institutions.

This workshop is organized by Henry Neeman, University of Oklahoma; Joel C. Adams, Calvin College; Richard A. Brown, St. Olaf College; and Elizabeth Shoop, Macalester College.

HPC Educators Workshop - Monday: NSF/IEEE-TCPP Workshop on Education in Parallel, Distributed and High-Performance Computing

Abstract: Parallel and Distributed Computing (PDC) now permeates most computing activities. It is no longer sufficient for even novice programmers to acquire only traditional sequential programming skills. This workshop on the state of the art in Parallel, Distributed, and High Performance Computing Education will include contributed as well as invited papers from academia, industry, and other educational and research institutes on topics pertaining to the teaching of PDC and HPC topics in the Computer Science, Computer Engineering, Computational Science, and Domain Science and Engineering (and related) curricula. The emphasis of the workshop will be on undergraduate education, and the target audience will include attendees from among the SC13 Educators and related programs, academia, and industry. This effort is in coordination with the NSF/TCPP curriculum initiative for CS/CE undergraduates and with the SC13 HPC Educators Program.

This workshop is organized by Almadena Chtchelkanova from the National Science Foundation, Anshul Gupta from IBM TJ Watson Research Center, Sushil Prasad from Georgia State University, Arnold Rosenberg from Northeastern University, Alan Sussman from the University of Maryland, and Charles Weems from the University of Massachusetts.

Obtaining Bitwise Reproducible Results - Perspectives and Latest Advances

Abstract: It is a widely known HPC reality that many optimizations and scheduling techniques require a change in the order of operations, creating results that are not bitwise reproducible (BWR). This workshop will bring together members of the HPC community who are affected by the (non-) reproducibility phenomenon in various ways. A number of leading experts from numerical software tools, national labs, academia, the military, and both open-source and commercial software development will showcase the latest developments concerning their favorite HPC tool or research method solving a BWR problem, and each will proctor a 30-minute hands-on lab module for the audience to work on.
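As a minimal illustration of the underlying issue (an example added here, not material from the workshop itself): floating-point addition is not associative, so two mathematically equivalent summation orders can produce results that differ in the last bits.

```python
# Floating-point addition is not associative: any optimization or
# scheduling change that reorders a reduction can alter the bits of
# the result -- the root cause of bitwise non-reproducibility.
a, b, c = 0.1, 0.2, 0.3

left_to_right = (a + b) + c   # one evaluation order
right_to_left = a + (b + c)   # the same sum, reordered

print(left_to_right == right_to_left)  # False: the results differ in the last bit
```

Parallel reductions face exactly this problem, since the number of ranks or threads determines the order in which partial sums are combined.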

This workshop is organized by Noah Clemons, a Technical Consulting Engineer for the Developer Products Division at Intel Corporation.

The 8th Workshop on Ultrascale Visualization

Abstract: The output from leading-edge scientific simulations and experiments is so voluminous and complex that advanced visualization techniques are necessary to interpret the calculated results. Even though visualization technology has progressed significantly in recent years, we are barely capable of exploiting petascale data to its full extent, and exascale datasets are on the horizon. This workshop aims at addressing this pressing issue by fostering communication between visualization researchers and the users of visualization. Attendees will be introduced to the latest and greatest research innovations in large data visualization, and also learn how these innovations impact the scientific supercomputing and discovery process.

This workshop is organized by Kwan-Liu Ma from the University of California, Davis; Venkatram Vishwanath from Argonne National Laboratory; and Hongfeng Yu from the University of Nebraska-Lincoln.

The 8th Parallel Data Storage Workshop (PDSW)

Abstract: Peta- and exascale computing infrastructures make unprecedented demands on storage capacity, performance, concurrency, reliability, availability, and manageability. This one-day workshop focuses on the data storage problems and emerging solutions found in peta- and exascale scientific computing environments, with special attention to issues in which community collaboration can be crucial for problem identification, workload capture, solution interoperability, standards with community buy-in, and shared tools. This workshop seeks contributions on relevant topics, including but not limited to: performance and benchmarking, failure tolerance problems and solutions, APIs for high performance features, parallel file systems, high bandwidth storage architectures, wide area file systems, metadata intensive workloads, autonomics for HPC storage, virtualization for storage systems, archival storage advances, resource management innovations, storage systems for big data and analytics, and incorporation of emerging storage technologies.

This workshop is organized by John Bent from EMC, Garth Gibson from Carnegie Mellon University and Robert B. Ross from Argonne National Laboratory.

IA^3 2013 - Third Workshop on Irregular Applications: Architectures & Algorithms

Abstract: Many data intensive applications are naturally irregular. They may present irregular data structures, control flow or communication. Current supercomputing systems are organized around components optimized for data locality and regular computation. Developing irregular applications on them demands a substantial effort, and often leads to poor performance. However, solving these applications efficiently will be a key requirement for future systems. The solutions needed to address these challenges can only come by considering the problem from all perspectives: from micro- to system-architectures, from compilers to languages, from libraries to runtimes, from algorithm design to data characteristics. Only collaborative efforts among researchers with different expertise, including end users, domain experts, and computer scientists, could lead to significant breakthroughs. This workshop aims at bringing together scientists with all these different backgrounds to discuss, define and design methods and technologies for efficiently supporting irregular applications on current and future architectures.

This workshop is organized by John Feo, director of the Center for Adaptive Supercomputer Software at Pacific Northwest National Laboratory; Simone Secchi from the University of Cagliari; and Antonino Tumeo and Oreste Villa, both from Pacific Northwest National Laboratory.

The Second International Workshop on Data Intensive Scalable Computing Systems (DISCS)

Abstract: Existing high performance computing (HPC) systems are designed primarily for workloads requiring high rates of computation. However, the widening performance gap between processors and storage, and trends toward higher data intensity in scientific and engineering applications, suggest there is a need to rethink HPC system architectures, programming models, runtime systems, and tools with a focus on data intensive computing. The Second International Workshop on Data Intensive Scalable Computing Systems (DISCS) builds on the momentum generated by its predecessor workshop, providing a forum for researchers and other interested people in the areas of data intensive computing and high performance parallel computing to exchange ideas and discuss approaches for addressing Big Data challenges. The workshop includes a keynote address and presentation of peer-reviewed research papers, with ample opportunity for informal discussion throughout the day.

This workshop is organized by Yong Chen from Texas Tech University, Philip C. Roth of Oak Ridge National Laboratory and Xian-He Sun from the Illinois Institute of Technology.

Extreme-Scale Programming Tools

Abstract: As we approach exascale, architectural complexity and severe resource limitations with respect to power, memory and I/O make tools support for debugging and performance optimization more critical than ever before. However, these challenges also apply to tools development itself and, in particular, raise the importance of topics such as automatic tuning and methodologies for exascale tools-aided application development. This workshop will serve as a forum for application, system, and tool developers to discuss the requirements for future exascale-enabled tools and the roadblocks that need to be addressed on the way. We also highly encourage application developers to share their experiences with using the tools.

The event will also focus on the community-building process necessary to create an integrated tools-suite ready for an exascale software stack.

This workshop is organized by Michael Gerndt from the Technical University of Munich on behalf of the Virtual Institute - High Productivity Supercomputing, an international initiative of HPC programming-tool builders.

Python for High Performance and Scientific Computing (PyHPC 2013)

Abstract: Python is an established, high-level programming language with a large community in academia and industry. It is a general-purpose language adopted by many scientific applications such as computational fluid dynamics, biomolecular simulation, artificial intelligence, statistics, data analysis, and scientific visualization. More and more industrial domains are turning to it as well, including aeronautics, robotics, semiconductor manufacturing, automotive solutions, telecommunications, finance, and games. The use of Python for scientific, high performance parallel, and distributed computing has been increasing for years, particularly for data analysis and Big Data processing. Traditionally, system administrators have used Python extensively for automating tasks. Since Python is extremely easy to learn and has a very clean syntax, it is well-suited for education in scientific computing, mathematics, and other disciplines.

This workshop is organized by Andreas Schreiber from the German Aerospace Center (DLR) and William R. Scullin from Argonne National Laboratory.

4th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems (PMBS13)

Abstract: This workshop is concerned with the comparison of HPC systems through performance modeling, benchmarking or through the use of tools such as simulators. We are particularly interested in the ability to measure and make tradeoffs in software/hardware co-design to improve sustained application performance. We are also concerned with the assessment of future systems to ensure continued application scalability through peta- and exa-scale systems. The aim of this workshop is to bring together researchers, from industry and academia, concerned with the qualitative and quantitative evaluation and modeling of HPC systems. Authors are invited to submit novel research in all areas of performance modeling, benchmarking and simulation, and we welcome research that brings together current theory and practice. We recognize that the coverage of the term 'performance' has broadened to include power consumption and reliability, and that performance modeling is practiced through analytical methods and approaches based on software tools and simulators.

This workshop is organized by Todd Gamblin, Lawrence Livermore National Laboratory; Simon Hammond, Sandia National Labs; Stephen Jarvis, University of Warwick; Darren J. Kerbyson, Pacific Northwest National Laboratory; Rolf Riesen, IBM; and Yunquan Zhang, Chinese Academy of Sciences.

Workshop on Large-Scale Data Analytics (LSDA)

Abstract: While advances in technology have led to an unprecedented ability to collect, compute, and store huge amounts of data, the ability to extract useful information from raw data is becoming progressively more challenging. Large-scale data analytics offers challenges along multiple dimensions. It requires the ability to deal with large volumes of data from diverse sources; field systems that perform well with diverse data and scale extremely well with increasing data volumes; design and build efficient and effective predictive models that can accommodate diverse data types; and develop benchmarks for systems and analytical processes to enable comparisons among alternative approaches at different data volumes. The Workshop on Large-Scale Data Analytics (LSDA) will provide a venue for discussing the challenges and solutions in this area, including keynote presentations, lightning talks, and a group discussion.

This workshop is organized by Natasha Balac and Chaitan Baru from the San Diego Supercomputer Center.

The 8th Workshop on Workflows in Support of Large-Scale Science (WORKS13)

Abstract: Data Intensive Workflows (a.k.a. scientific workflows) are routinely used in most scientific disciplines today, especially in the context of parallel and distributed computing. Workflows provide a systematic way of describing the analysis and rely on workflow management systems to execute the complex analyses on a variety of distributed resources. This workshop focuses on the many facets of data-intensive workflow management systems, ranging from job execution to service management and the coordination of data, service and job dependencies. The workshop therefore covers a broad range of issues in the scientific workflow lifecycle that include: data intensive workflows representation and enactment; designing workflow composition interfaces; workflow mapping techniques that may optimize the execution of the workflow; workflow enactment engines that need to deal with failures in the application and execution environment; and a number of computer science problems related to scientific workflows such as semantic technologies, compiler methods, fault detection and tolerance.

This workshop is organized by Johan Montagnat from CNRS and Ian Taylor from Cardiff University.

3rd International Workshop on High Performance Computing, Networking and Analytics for the Power Grid

Abstract: The workshop promotes the use of high performance computing and networking for power grid applications. Technological and policy changes make this an urgent priority.

Sensor deployments on the grid are expected to increase geometrically in the immediate future, while the demand for clean energy generation is driving the use of non-dispatchable power sources such as solar and wind. New demands are being placed on the power infrastructure due to the introduction of plug-in vehicles. These trends reinforce the need for higher fidelity simulation of power grids, and higher frequency measurement of their state.

Traditional grid simulation and monitoring tools cannot handle the increased amounts of sensor data or computation imposed by these trends. The use of high performance computing and networking technologies is of paramount importance for the future power grid, particularly for its stable operation in the presence of intermittent generation and increased demands placed on its infrastructure.

This workshop is organized by Bora Akyol, Daniel G. Chavarría and Zhenyu (Henry) Huang, all from the Pacific Northwest National Laboratory.

The Fourth International Workshop on Data-Intensive Computing in the Clouds

Abstract: Applications and experiments in all areas of science are becoming increasingly complex and more demanding in terms of their computational and data requirements. As scientific applications become more data intensive, the management of data resources and dataflow between storage and compute resources is becoming the main bottleneck. Analyzing, visualizing, and disseminating these large data sets has become a major challenge, and data intensive computing is now considered the “fourth paradigm” in scientific discovery after theoretical, experimental, and computational science.

The fourth international workshop on Data-intensive Computing in the Clouds (DataCloud 2013) will provide the scientific community a dedicated forum for discussing new research, development, and deployment efforts in running data-intensive computing workloads on Cloud Computing infrastructures. The DataCloud 2013 workshop will focus on the use of cloud-based technologies to meet the new data intensive scientific challenges that are not well served by the current supercomputers, grids or compute-intensive clouds.

This workshop is organized by Hui Jin from Oracle, Yong Zhao from the University of Electronic Science and Technology of China, and Ziming Zheng from the University of Chicago.

WOLFHPC: Workshop on Domain-Specific Languages and High-Level Frameworks for High-Performance Computing

Abstract: Multi-level heterogeneous parallelism and deep memory hierarchies in current and emerging computer systems make their programming very difficult. Domain-specific languages (DSLs) and high-level frameworks (HLFs) provide convenient abstractions, shielding application developers from much of the complexity of explicit parallel programming in standard programming languages like C/C++/Fortran. However, achieving scalability and performance portability with DSLs and HLFs is a significant challenge. For example, very few high-level frameworks can make effective use of accelerators like GPUs and FPGAs. This workshop seeks to bring together developers and users of DSLs and HLFs to identify challenges and discuss solution approaches for their effective implementation and use on massively parallel systems.

This workshop is organized by Sriram Krishnamoorthy from Pacific Northwest National Laboratory, Sadayappan Ponnuswamy from Ohio State University and J. (Ram) Ramanujam from Louisiana State University.

8th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '13)

Abstract: The 8th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '13) brings together HPC operators and users from the scientific community with industrial technology providers, fostering discussion, collaboration and mutual exchange of knowledge and practical experiences. With virtualization now being deployed in scientific data centers, topics including management, performance, and HPC I/O in virtualized environments will be examined in talks and demonstrations, followed by a panel discussion.

This workshop is organized by Michael Alexander, Vienna University of Technology, and Gianluigi Zanetti, CRS4.

‘Building’ Energy Efficient High Performance Computing: Fourth Annual EE HPC WG Workshop

Abstract: This annual workshop is organized by the Energy Efficient HPC Working Group. It provides a strong blended focus that includes both the facilities and system perspectives, from architecture through design and implementation. The topics reflect the activities and interests of the EE HPC WG, a group with over 300 members from roughly 20 countries. Speakers from prior years include John Shalf, Lawrence Berkeley National Laboratory; Satoshi Matsuoka, Tokyo Institute of Technology; Herbert Huber, Leibniz Supercomputing Centre; Steve Hammond, National Renewable Energy Laboratory; Michael Patterson, Intel; and Nic Dube, Hewlett Packard. These well-known leaders in energy efficiency for supercomputing delivered a lively and informative session, and similar speakers can be expected for SC13.

This workshop is organized by Anna Maria Bailey, Lawrence Livermore National Laboratory; Natalie Bates, Energy Efficient HPC Working Group; Susan Coghlan, Argonne National Laboratory; Steve Hammond, National Renewable Energy Laboratory; James H. Laros, Sandia National Labs; Josip Loncaric, Los Alamos National Laboratory; James H. Rogers, Oak Ridge National Laboratory; William F. Tschudi, Lawrence Berkeley National Laboratory; and Ralph Wescott, Pacific Northwest National Laboratory.

6th Workshop on Many-Task Computing on Clouds, Grids, and Supercomputers

Abstract: The 6th workshop on Many-Task Computing on Clouds, Grids, and Supercomputers (MTAGS) will provide the scientific community a dedicated forum for presenting new research, development, and deployment efforts of large-scale many-task computing (MTC) applications on large scale clusters, clouds, grids, and supercomputers. MTC encompasses loosely coupled applications, which are generally composed of many tasks to achieve some larger application goal. This workshop will cover challenges that can hamper efficiency and utilization in running applications on large-scale systems, such as local resource manager scalability and granularity, efficient utilization of raw hardware, parallel file-system contention and scalability, data management, I/O management, reliability at scale, and application scalability. We welcome paper submissions on theoretical, simulation, and systems topics, with special consideration given to papers addressing the intersection of petascale/exascale challenges with large-scale cloud computing. We invite the submission of original research papers of six pages.

This workshop is organized by Ian Foster from the University of Chicago, Ioan Raicu, Illinois Institute of Technology, Justin Wozniak from Argonne National Laboratory, and Yong Zhao, University of Electronic Science and Technology of China.

First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE)

Abstract: Progress in scientific research is dependent on the quality and accessibility of software at all levels and it is now critical to address many new challenges related to the development, deployment, and maintenance of reusable software. In addition, it is essential that scientists, researchers, and students are able to learn and adopt a new set of software-related skills and methodologies. Established researchers are already acquiring some of these skills, and in particular a specialized class of software developers is emerging in academic environments who are an integral and embedded part of successful research teams. This workshop will provide a forum for discussion of the challenges, including both positions and experiences. The short papers and discussion will be archived as a basis for continued discussion, and we intend the workshop to feed into the collaborative writing of one or more journal publications.

This workshop is organized by Gabrielle Allen from the Skolkovo Institute of Science and Technology, Neil Chue Hong, Software Sustainability Institute, University of Edinburgh, Daniel S. Katz, University of Chicago, Manish Parashar, Rutgers University, and David Proctor from the National Science Foundation.

4th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA)

Abstract: Novel scalable scientific algorithms are needed to enable key science applications to exploit the computational power of large-scale systems. This is especially true for the current tier of leading petascale machines and the road to exascale computing as HPC systems continue to scale up in compute node and processor core count. These extreme-scale systems require novel scientific algorithms to hide network and memory latency, have very high computation/communication overlap, have minimal communication, and have no synchronization points. Scientific algorithms for multi-petaflop and exaflop systems also need to be fault tolerant and fault resilient, since the probability of faults increases with scale. With the advent of heterogeneous compute nodes that employ standard processors and GPGPUs, scientific algorithms need to match these architectures to extract the most performance. Key science applications require novel mathematical models and system software that address the scalability and resilience challenges of current- and future-generation extreme-scale HPC systems.

This workshop is organized by Vassil Alexandrov from the Barcelona Supercomputing Center, Jack Dongarra, University of Tennessee, Knoxville, and Christian Engelmann and Al Geist, both from Oak Ridge National Laboratory.

VISTech Workshop: Visualization Infrastructure & Systems Technology

Abstract: Human perception is centered on the ability to process information contained in visible light, and our visual interface is a tremendously powerful data processor. Every day we are inundated with staggering amounts of digital data. For many types of computational research, visualization is the only viable means of extracting information and developing understanding from this data. Integrating our visual capacity with technological capabilities has tremendous potential for transformational science. We seek to explore the intersection between human perception and large-scale visual analysis by studying visualization interfaces and interactive displays. This rich intersection includes: virtual reality systems, visualization through augmented reality, large scale visualization systems, novel visualization interfaces, high-resolution interfaces, mobile displays, and visualization display middleware. This workshop will provide a space where experts in the large-scale visualization technology field and users can come together to discuss state-of-the art technologies for visualization and visualization laboratories.

This workshop is organized by Kelly Gaither, Texas Advanced Computing Center, Falko Kuester from the University of California, San Diego, Jason Leigh, University of Illinois at Chicago, Aditi Majumder from the University of California, Irvine, and Eric Wernert, Indiana University.

First International Workshop on Software Engineering for High Performance Computing in Computational Science & Engineering (SE-HPCCSE)

Abstract: Due to the availability of hardware and increasing computing requirements, there is a demand to utilize high performance computing (HPC), including GPGPUs and computing clusters, for computational science & engineering (CSE) applications. Unfortunately, developing HPC software is not an easy task. Developers must solve reliability, availability, and maintainability problems at extreme scales, understand domain-specific constraints, deal with uncertainties inherent in scientific exploration, and develop algorithms that use computing resources efficiently. Software engineering (SE) researchers have developed tools and practices to support various development tasks, including validation and verification, design, requirements management, and maintenance. Development of HPC CSE software requires tailoring of SE tools and methods developed for more traditional software applications. The SE-HPCCSE workshop addresses this need by bringing together members of the SE and HPC CSE communities to share perspectives, present findings from research and practice, and generate an agenda to improve tools and practices for developing HPC CSE software.

This workshop is organized by Jeffrey Carver of the University of Alabama, Neil Chue Hong from the Software Sustainability Institute, University of Edinburgh, and Selim Ciraci, Pacific Northwest National Laboratory.

The Sixth Workshop on High Performance Computational Finance (WHPCF'13)

Abstract: The purpose of this workshop is to bring together practitioners, researchers, vendors, and scholars from the complementary fields of computational finance and high performance computing, in order to promote an exchange of ideas, develop common benchmarks and methodologies, discuss future collaborations and develop new research directions. Financial companies increasingly rely on high performance computers to analyze high volumes of financial data, automatically execute trades, and manage risk.

Recent years have seen a dramatic increase in compute capabilities across a variety of parallel systems. These systems have also become more complex, with trends toward heterogeneous systems consisting of general-purpose cores and acceleration devices. The workshop will enable the dissemination of recent advances and lessons learned in the application of high performance computing to computational finance among researchers, scholars, vendors, and practitioners, and will encourage and highlight collaborations between these groups in addressing high performance computing research challenges.

This workshop is organized by David Daly, IBM Research; Matthew Dixon, University of San Francisco; Jose Moreira, IBM; and Andrew Sheppard, Fountainhead.

4th SC Workshop on Petascale (Big) Data Analytics: Challenges and Opportunities

Abstract: The past decade has witnessed a data explosion, and petabyte-sized data archives are no longer uncommon. It is estimated that organizations with high end computing (HEC) infrastructures and data centers are doubling the amount of data they archive every year. At the same time, computing infrastructures are becoming more heterogeneous. Continuing the success of the first three workshops, held in conjunction with SC10, SC11, and SC12, this workshop broadens its scope beyond the cloud focus with an emphasis on middleware infrastructure that facilitates efficient data analytics on big data. The workshop intends to bring together researchers, developers, and practitioners from academia, government, and industry to discuss new and emerging trends in high end computing platforms, programming models, middleware and software services, and to outline the data mining and knowledge discovery approaches that can efficiently exploit this modern computing infrastructure.

This workshop is organized by Scott Klasky from Oak Ridge National Laboratory, Manish Parashar from Rutgers University, and Ranga Raju Vatsavai from Oak Ridge National Laboratory.

Energy Efficiency for SuperComputing (E2SC)

Abstract: With exascale systems on the horizon, we have entered an era in which power and energy consumption are the primary concerns for scalable computing. Addressing these concerns will require revolutionary methods, with much stronger integration among hardware features, system software, and applications. Equally important are capabilities for fine-grained spatial and temporal measurement and control, which enable energy-efficient computing across all of these layers. Current approaches to energy-efficient computing rely heavily on power-efficient hardware in isolation; however, it is pivotal that hardware expose mechanisms for energy efficiency so that a variety of workloads can be optimized. At the same time, high-fidelity measurement techniques, typically ignored in data-center-level measurement, are essential for a scalable and energy-efficient interplay among the application, system software, and hardware layers. This workshop will bring together researchers from these different communities, working on challenging problems in this area, for a dynamic exchange of ideas.

This workshop is organized by Kirk Cameron, Virginia Tech; Darren J. Kerbyson and Andres Marquez, Pacific Northwest National Laboratory; Dimitris Nikolopoulos, Queen's University Belfast; and Sudhakar Yalamanchili, Georgia Institute of Technology.

Exascale MPI

Abstract: The MPI design and its main implementations have proved surprisingly scalable. Some issues that hampered MPI scalability were addressed during the MPI 2.1-2.2 definition process, and this work continued into MPI 3.0. For these and many other reasons, MPI is currently the de facto standard for HPC systems and applications. However, there is a need to re-examine the Message Passing (MP) model and to explore new, innovative, and potentially disruptive concepts and algorithms, in part to explore roads other than those taken by the recently released MPI 3.0 standard. The aim of this workshop is to bring together researchers and developers to present and discuss innovative algorithms and concepts in the MP programming model, in particular as related to MPI. Possible workshop topics include innovative algorithms for collective operations, topology mapping, scheduling/routing to avoid network congestion, “fault-tolerant” communication, interoperability of MP and PGAS models, and integration of task-parallel models.

This workshop is organized by Erwin Laure and Stefano Markidis from KTH Royal Institute of Technology, Mark Ian Parsons and Lorna Smith from EPCC, The University of Edinburgh, and Jesper Larsson Träff from the Vienna University of Technology.

For details, please see:

The Third International Workshop on Network-Aware Data Management

Abstract: In addition to coping with increasing data volumes, future scientific collaborations require cooperative work at the extreme scale. As the number of multidisciplinary teams and experimental facilities grows, data sharing and resource coordination among distributed centers become more significant challenges with every passing year. In an age of extraordinary advances in communication technologies, the network infrastructure must be used efficiently to address the increasing data requirements of today's applications. Traditional network and data management techniques are unlikely to scale to future collaborative data-intensive systems; novel data access mechanisms and intelligent network middleware are needed to envision the design principles of future network-aware data management. This workshop will seek contributions from academia, government, and industry discussing emerging trends and current technological developments in dynamic resource provisioning, intelligent data-flow and resource coordination, end-to-end data processing, network-aware application design issues, and cutting-edge network performance problems.

This workshop is organized by Mehmet Balman, Suren Byna and Brian L. Tierney, all from Lawrence Berkeley National Laboratory.

For details, please see: