HPC Innovation Excellence Award Winners

The HPC User Forum Innovation Excellence Award winners to date are listed below.

1st HPC Innovation Excellence Award Winners
Announced at ISC 2011
  • Beijing Genomics Institute (China) — BGI has created genomics solutions that speed science and commercialization. BGI estimates that the solutions have saved hundreds of thousands of dollars. Project leader: Lin Fang.
  • BMI (U.S.) — Fuel-saving aerodynamic truck design. Estimated savings of $5,000 per truck, per year. Project leader: Mike Henderson.
  • Department of Defense HPC Modernization Program - DEW (U.S.) — $13.8 million initial savings from virtual prototyping of countermeasures based on high-power microwaves, with much greater potential savings. Project leader: Cray Henry.
  • Department of Defense HPC Modernization Program - COAMPS (U.S.) — Significantly improved model for tropical cyclone forecasting. Documented savings of $6 million in one event alone, with far greater potential savings.
  • Lawrence Livermore National Laboratory (U.S.) — Achieved 17 percent improvement in fuel economy of Class 8 tractor-trailers. Potential savings: $24.2 billion annually. Project leader: Kambiz Salari.
  • National Energy Research Scientific Computing Center (U.S.) — An international study has produced a much more detailed and longer (100-year) record of past weather to improve climate studies. Project leaders: Gilbert P. Compo and the National Energy Research Scientific Computing Center.
  • Ohio Supercomputer Center-Edison Welding Institute (U.S.) — Weld Predictor simulation tool cuts costs by an average 50 percent by minimizing experimental trials. Project leaders: Ashok Krishnamurthy, Kathy Kelley.
  • Shanghai Supercomputer Center (China) — New method to determine molecular configurations of large molecules on metal surfaces. Project leader: Tao Wang.
  • Shanghai Supercomputer Center (China) — Substantial time and costs savings in engineering design. Project leader: Jiancheng Wu.

2nd HPC Innovation Excellence Award Winners
Announced at SC 2011
  • Cornell University Center for Advanced Computing (U.S.). Through faster computations (more than 175 times speed-up), a better understanding of networks of coordinated amino-acid variation may enable the discovery of new therapeutic targets for the hepatitis C virus (HCV). With the cost per liver transplantation in the range of $280,000 for one year, liver transplantation for hepatitis C alone reaches a total cost of nearly $300 million per year. Moreover, the average lifetime cost for hepatitis C, in the absence of liver transplant, has been estimated to be about $100,000 per patient. Assuming that 80% of the 4.5 million Americans believed to be infected develop chronic liver disease, the total lifetime cost for this group (3.6 million) will be a staggering $360 billion in today's dollars. Assuming an estimated survival of 40 years, the annual health care costs for the affected U.S. population with chronic hepatitis C may be as high as $9 billion (the arithmetic behind these figures is sketched after this list). Project leaders: David A. Lifka, Paul Redfern

  • Continuous Casting Consortium (U.S.). Helped by HPC resources at the National Center for Supercomputing Applications, the Continuous Casting Consortium at the University of Illinois has developed comprehensive numerical models of the continuous casting of steel, including several ground-breaking numerical methods, to solve practical problems of interest to the steel industry. Based on the roughly 100 million tons of steel produced in the U.S. each year and the approximately $400 per ton net cost of scrapping, a one percent reduction in yield loss would save about $400 million per year, along with energy savings during reheating of about $350 million per year (see the sketch after this list). Project leaders: Seid Koric and Brian G. Thomas

  • Center for Development of Advanced Computing (India). C-DAC is part of the Department of Information Technology, Ministry of Communications and Information Technology, Government of India. C-DAC developed CHReME, an HPC application execution interface with the flexibility to create workflows and associate them with scientific applications. CHReME has been deployed at HPC sites within India and as far away as Armenia and Tanzania. The software is reducing costs by tens of thousands of dollars while improving product cycle times and increasing productivity in critical projects. Project leader: Goldi Misra
  • Swift Engineering (U.S.). Swift uses HPC to develop prototype vehicles for the automotive and aerospace industries much faster than these industries develop production vehicles. Its customers include Toyota, for whom Swift helped develop the aerodynamic shape of the Tundra truck and the NASCAR version of the Camry; Northrop Grumman, which had Swift develop the BAT Unmanned Aerial Vehicle; Eclipse Aviation, for whom Swift developed and built a new, light business jet in only 200 days; and others. Swift can design a vehicle in one season that would take a large OEM up to four years (Swift makes prototypes, not certified production vehicles). The prototypes can save automotive and aerospace firms tens to hundreds of millions of dollars in the design cycle. HPC enables Swift to explore aerodynamic design spaces 10 to 100 times faster than before, depending on the problem type. Project leader: Clayton Triggs

  • United Technologies Research Center/NERSC IPM (U.S.). With the help of Integrated Performance Monitoring (IPM) and staff assistance from the Department of Energy's NERSC facility, UTRC sped up a United Technologies-owned CFD code used to simulate multiphase flows. For the first time, this enabled the simulation of realistic fuel spray-related applications. Traditionally, such systems are designed by costly build-test-bust experiments because they cannot be effectively modeled or experimentally diagnosed (measurements in the very near field of the spray are not possible due to obscuration effects). The ability to simulate this problem helps reduce design cycles substantially and yields new insights into the physics that can produce sprays with enhanced properties. UTRC has seen reductions in design cycles of at least 66%. Project leader: Marios C. Soteriou
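
A quick arithmetic check of the savings figures quoted in the Cornell and Continuous Casting entries above. This is a minimal sketch: the inputs are the numbers stated in the entries themselves, and the script simply reproduces the quoted totals.

    # Hepatitis C figures quoted in the Cornell entry above
    infected = 4.5e6              # Americans believed to be infected
    chronic_fraction = 0.80       # fraction assumed to develop chronic liver disease
    lifetime_cost = 100_000       # estimated lifetime cost per patient, USD
    chronic_patients = infected * chronic_fraction          # 3.6 million patients
    total_lifetime_cost = chronic_patients * lifetime_cost  # ~$360 billion
    annual_cost = total_lifetime_cost / 40                  # ~$9 billion/year over a 40-year survival
    print(f"{chronic_patients:,.0f} patients, ${total_lifetime_cost/1e9:.0f}B lifetime, ${annual_cost/1e9:.0f}B/year")

    # Continuous casting figures quoted in the Continuous Casting Consortium entry above
    us_steel_tons = 100e6         # tons of steel produced in the U.S. per year
    scrap_cost_per_ton = 400      # net cost of scrapping, USD per ton
    yield_improvement = 0.01      # one percent reduction in yield loss
    casting_savings = us_steel_tons * yield_improvement * scrap_cost_per_ton  # ~$400 million/year
    print(f"${casting_savings/1e6:.0f}M per year")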

3rd HPC Innovation Excellence Award Winners
Announced at ISC 2012
  • GE Global Research (U.S.). By leveraging advances in available computational power, GE was able to model the unsteady flow physics in a turbine test rig. Previously, only the steady flow physics could be modeled. By comparing the results of the new unsteady calculations with previously obtained steady calculations, new insights into the flow physics not captured with steady CFD were obtained, such as a better understanding of wake dynamics. Aerodynamic engineers can use this understanding to improve aerodynamic efficiency, which will reduce engine fuel burn. Each year approximately $200 billion worth of fuel is consumed globally on GE’s gas turbine products, both aircraft engines and land-based gas turbines used for the production of electricity. Every 1% reduction in fuel consumption therefore saves the population of users of these products over $2 billion per year. Project leader: Richard Arthur

  • Department of Defense High Performance Computing Modernization Program (U.S.). The Computational Research and Engineering Acquisition Tools and Environments (CREATE) program, a multi-year effort to develop multi-physics-based software for the design of ships, air vehicles, and antennas, applied HPC physics-based simulation to develop a standard process for generating the aerodynamic data that supported airworthiness certifications for small unmanned aerial vehicles (UAVs). The data generated through computational methods as part of this effort also helped mitigate risk in the flight certification process by providing engineers with sound data with which to make more informed airworthiness assessments. Several UAV platforms have relied on this HPC-driven framework to gain approval, including RAVEN, AEROSTAR, and EXDRON. Without HPC resources, airworthiness decisions for small UAVs could not have been made on the required timescale. This approach saved a conservatively estimated $1 billion or more, and with continued use of the program the economic impact will continue to grow. Project leader: John E. West
      
  • Mary Bird Perkins Cancer Center and Louisiana State University (U.S.). Researchers at the centers performed clinical trials in silico that would have been prohibitively expensive and taken too long to conduct with traditional methods. The centers used an HPC-driven approach to inform clinical and health policy decisions. The in silico clinical trials compared the effectiveness of multiple therapy options, leading to better-informed clinical decisions. Simulation of therapy equipment resulted in savings of more than $100 million in infrastructure costs and over $12 million in research costs while enabling rapid advancements in cancer care research. Project leader: Wayne D. Newhauser

  • Aon Benfield Securities, Inc. (Canada). Aon has developed the PathWise platform, which uses GPU-based high performance computing to enable quantitative analysts to quickly and easily express financial application kernels, such as Monte Carlo simulations, using domain-specific interfaces. The computational capabilities offered by GPU-driven HPC enabled quantitative analysts to accelerate financial computations from days to minutes, with 50 to 100 times the throughput of conventional techniques. The PathWise platform achieved average savings of 90% in both HPC infrastructure costs and time to market, translating to several million dollars in savings. Project leaders: Peter Phillips, Amir Mohammed

  • BGI Shenzhen (China). BGI has developed a set of distributed computing applications using a MapReduce framework to process large genome data sets on clusters (the general map/reduce pattern is sketched after this list). By applying advanced software technologies including HDFS, Lustre, GlusterFS, and the Platform Symphony MapReduce framework, the institute has saved more than $20 million to date. For some application workloads, BGI achieved a significant improvement in processing capabilities while enabling the reuse of storage, resulting in reduced infrastructure costs while delivering results in less than 10 minutes, versus the prior time of 2.5 hours. Applications enabled through the MapReduce framework included: sequencing 1% of the human genome for the International Human Genome Project; contributing 10% to the International Human HapMap Project; conducting research to combat SARS and a German variant of the E. coli bacterium; and completely sequencing the rice, silkworm, and potato genomes and the human gut metagenome. Project leader: Lin Fang
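
The BGI entry above describes moving genome-processing workloads onto a MapReduce framework. The sketch below is purely illustrative of the map/shuffle/reduce pattern itself, using a toy k-mer count in plain Python; it is not BGI's pipeline and does not use the Platform Symphony or Hadoop APIs.

    from collections import defaultdict

    # Toy reads standing in for sequencing data; real inputs would stream from HDFS, Lustre, or GlusterFS.
    reads = ["ACGTACGT", "CGTACGTA", "TTACGTAC"]
    K = 4

    def map_phase(read):
        # Emit (k-mer, 1) pairs for every k-length substring of a read.
        for i in range(len(read) - K + 1):
            yield read[i:i + K], 1

    def reduce_phase(pairs):
        # Group by key and sum the counts (the grouping is what a framework's shuffle step does).
        counts = defaultdict(int)
        for kmer, n in pairs:
            counts[kmer] += n
        return dict(counts)

    pairs = [pair for read in reads for pair in map_phase(read)]
    print(reduce_phase(pairs))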

4th HPC Innovation Excellence Award Winners
Announced at SC 2012
  • Computational Research Laboratories (India). Engineers at Computational Research Laboratories (CRL), a subsidiary of Tata Consultancy Services, developed an HPC-driven computational fluid dynamics (CFD) procedure aimed at adding realism to real-world wind tunnel data. This innovation could become crucial for the aerospace industry and others using similar technology to improve their product designs. The CFD developments by CRL-TCS are targeted at increasing the accuracy of maximum-lift predictions for aircraft and optimizing the utilization of wind tunnel testing. Improving maximum-lift calculations could enable designers to use less material in aircraft, resulting in improved fuel efficiency and lower fuel costs. Improvements for wind tunnel testing alone have resulted in up to a 20% reduction in flap prediction tests and estimated savings of at least $2 million per product design. The CFD methodologies could potentially be applied to other problems, resulting in further savings. The investments to enable the CRL-TCS results, including the system cost, were between $1 million and $2 million. The simulations were conducted on the Hewlett Packard EKA system. Project leaders: Kishore Nikam and Anutosh Moitra

  • St. Vincent’s Institute of Medical Research (Australia). A cross-organizational team comprising researchers from St. Vincent’s Institute of Medical Research, Victorian Infectious Disease Research Laboratories, the IBM Research Collaboratory for Life Sciences – Melbourne, and the Victorian Life Sciences Computation Initiative developed a method to simulate the 3D atomic motion of the complete human rhinovirus on Australia’s fastest supercomputer, paving the way for new drug development. This is the first time the atomic motion of a complete human rhinovirus has been simulated on a computer. Understanding how anti-viral drugs work on rhinoviruses and related viruses can potentially speed up the development of new treatments, with potential savings in drug discovery and pre-clinical development of up to $1,000,000 per year. The research was conducted on an IBM Blue Gene/Q system, with storage infrastructure developed by DDN. Project leader: Professor Michael Parker, with Jason Roberts, Matthew Downton, Michael Kuiper

  • Cycle Computing (U.S.). Cycle Computing provisioned a utility supercomputer with more than 50,000 cores in the Amazon Web Services (AWS) cloud for Schrödinger and Nimbus Discovery to accelerate lead identification via virtual screening. Schrödinger ran a virtual screen of 21 million compounds against a protein target. The run cost under $4,900 per hour at peak and required no upfront capital. The cloud-based run time per job averaged 11 minutes and the total work completed topped 100,000 hours (see the cost arithmetic sketched after this list). Project leader: Jason Stowe

  • Intelligent Light (U.S.). The CFD workflow and software developed by Intelligent Light is benefiting the small-to-medium-size manufacturer Zipp Speed Weaponry. A direct consequence of the work was a rapid change in design direction to bring a revolutionary new racing wheel, “Zipp Firecrest,” to market. HPC and a highly productive, automated CFD workflow enabled Zipp to create a sustainable competitive advantage by helping it effectively design and functionalize aerodynamic racing wheels for cyclists. The hardware system was powered by an on-demand cloud from Dell R Systems. This project involved an aggregate investment of $130,000. Project leader: Roger R. Rintala

  • PayPal (U.S.). PayPal, an eBay company, leverages HPC technologies to provide real-time stream analytics for behavioral and situational modeling related to online fraud detection. The addition of HPC resources from SGI, Hewlett Packard, Mellanox, and QLogic (Intel) has enabled PayPal to advance to real-time fraud detection. The system handles more than 4 billion new records daily (a rate conversion is included in the sketch after this list). Conservatively estimated, the revenue savings generated by applying the HPC infrastructure to fraud detection exceed $710 million. Project leaders: Arno Kolster and Ryan Quick

  • Ramgen Power Systems LLC (U.S.).  Ramgen Power Systems is a small business in Washington State that is developing a novel gas compressor system based on shock wave compressor technology. The system will be used in supersonic flight applications. Use of the Cray "Jaguar" supercomputer at Oak Ridge National Laboratory cut down R&D costs by over $4 million. Using Jaguar, staff of the Oak Ridge Leadership Computing Facility were able to give these applications a 100-fold speedup in time to solution and two-fold decrease in memory usage per core. This project was enabled by a seed investment of around $500,000. Project Leader: Allan D. Grosvenor.

  • Barcelona Supercomputing Center – Alya Red (Spain). Barcelona Supercomputing Center developed a first-of-its-kind, in-house, end-to-end biomechanical model, including numerical methods, parallel implementation, mesh generation, and visualization. The Alya System is a computational mechanics code with two main features. First, it is specially designed to run with high efficiency in large-scale supercomputing facilities. Second, it is capable of solving different physics tasks in a coupled way, each with its own modeling characteristics: fluids, solids, electrical activity, species concentration, free surfaces, etc. The Alya Red biomechanical model can help bring drugs to market faster through HPC simulation-driven testing, which could result in tens of millions of dollars in potential savings. The long-term vision of the Alya Red project is to create an IT infrastructure of hardware and software that helps medical doctors, clinical researchers, and the pharmaceutical industry use HPC to positively impact healthcare. The Alya Red simulation relied on Linux-driven HPC clusters and IBM BG/P and BG/Q systems. The interconnect infrastructure included Myrinet and custom interconnect technology. Other tools used included ParaView, VisIt, Blender, and Octane. Total investment in developing the infrastructure was around $500,000. Project leader: Mariano Vazquez.
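
Two of the figures quoted in the entries above can be sanity-checked with straightforward arithmetic. In the sketch below, the Cycle Computing numbers are combined under the stated assumptions that the "over 50,000 cores" and "under $4,900 per hour" describe the same peak configuration and that the 100,000 hours of completed work are core-hours (the entry does not give the unit); the PayPal line is a simple rate conversion.

    # Cycle Computing / Schrodinger utility supercomputer (figures from the entry above)
    peak_cores = 50_000
    peak_cost_per_hour = 4_900                             # USD per hour at peak
    cost_per_core_hour = peak_cost_per_hour / peak_cores   # ~$0.10 per core-hour
    work_core_hours = 100_000                              # assumed unit: core-hours
    compute_years = work_core_hours / (24 * 365)           # ~11.4 years of single-core work
    print(f"~${cost_per_core_hour:.2f}/core-hour, ~{compute_years:.1f} compute-years")

    # PayPal fraud detection: 4 billion new records per day expressed as a sustained rate
    records_per_second = 4e9 / 86_400                      # ~46,000 records per second
    print(f"~{records_per_second:,.0f} records/second")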

5th HPC Innovation Excellence Award Winners
Announced at ISC 2013
  • Alenia Aermacchi (Italy). Engineers at Alenia Aermacchi utilized technical and scientific computing tools in the “Clean Sky” project to design a new generation of environmentally friendly aircraft. While seeking the most promising configuration, the engineers shaped two different wings to separately fulfill the two goals of enhancing aerodynamic performance and reducing wing weight. This was accomplished through the use of CFD and other technical computing tools. The two promising configurations enhanced cruise efficiency by 2.5% and reduced the wing weight by 4%. Lead: Enrica Marentino

  • High Performance GeoComputing Laboratory at the University of California, San Diego (U.S.). The High Performance GeoComputing Laboratory (HPGeoC) at the San Diego Supercomputer Center, an Organized Research Unit of the University of California, San Diego, has developed a highly scalable and efficient GPU-based finite difference code based on AWP-ODC, a community code developed and supported by the Southern California Earthquake Center (SCEC) for large-scale earthquake simulations. This GPU code, restructured to maximize throughput and reduce time-to-solution, achieved near-perfect weak-scaling parallel efficiency and sustained 2.33 petaflop/s on ORNL’s Titan supercomputer. Moreover, a 110-fold speedup over the CPU version has been demonstrated in key strain tensor calculations critical to probabilistic seismic hazard analysis. This achievement makes a California statewide hazard model reachable with existing supercomputers. The performance of the code is expected to take physics-based seismic hazard analysis to a new level using petascale, heterogeneous computing resources, with the potential to save the hundreds of millions of core-hours required by building engineering design. Lead: Yifeng Cui

  • DOD High Performance Computing Modernization Program (U.S.). This HPCMP-supported project, within the armor/anti-armor portfolio, provided direct support to the Warfighter program. The fundamental goals for using modeling and simulation in support of the armor/anti-armor development programs were to reduce time, resources, and risk while improving the quality of information available to designers, users, and decision makers. HPCMP capabilities enabled the development and testing of new armor/anti-armor capabilities in the following areas: propulsion, interior ballistics, external trajectory determination, terminal ballistics, warhead analysis, and sensors. The total upper bound ROI amounted to $935 million. Lead: Deborah Schwartz

  • DOD High Performance Computing Modernization Program (U.S.). This HPC innovation comprised a suite of cutting-edge computational efficiency enhancement methodologies applied to complex missile-related, aero-propulsive problems with combustion. These methodologies were integrated into CRAFT CFD® and CRUNCH CFD®, two CFD codes in widespread use by DoD to support missile design and evaluation, and encompassed gas-phase/multi-phase combustion as well as laminar/turbulent chemistry, including tabulated/neural network approaches, reduced/adaptive chemistry, turbulent scalar fluctuation model (SFM) and GPU acceleration. By leveraging state-of-the-art HPC resources provided by DoD HPCMP, these innovative methodologies delivered higher-fidelity predictive capabilities to the analysis of missile systems/components, thereby enabling CFD to serve as a cost-effective design tool and virtual testbed for missile evaluation. Lead: John West

  • ESTECO and Airworks Engineering (Italy). Airworks, a multidisciplinary mechanical engineering company, was facing the challenge of improving the efficiency of converting wind energy into electrical power by optimizing the whole assembly of a wind power unit rotor. Experts on the firm’s wind power unit, CAD (computer-aided design) specialists, and CFD (computational fluid dynamics) professionals from different organizations were involved in a complex design scenario and needed to collaborate effectively. Engineers from the University of Trieste prepared the parametric CAD model and set up CFD simulations, while Airworks developed aerodynamic performance calculations for the wind rotor blade and subsequently performed the optimization analysis. The technical computing-driven solution enabled the seamless execution of inter-organizational simulation workflows. With the optimization workflow in place, Airworks professionals were able to explore and evaluate new parametric geometry, leading to innovative designs. The end result was a wind turbine design with an outstanding power coefficient and an annual energy production increase as high as 1.26%. Lead: Paolo Vercesi

  • University College London and NAG HECToR dCSE (UK). HPC experts from NAG, working under NAG’s Computational Science and Engineering (CSE) support service for HECToR, the UK’s national academic supercomputing facility, optimized a Quantum Monte Carlo application for multicore architectures, resulting in a performance increase of a factor of four. The objectives of this dCSE project were to enable the CASINO Quantum Monte Carlo code to effectively use the multicore processors of HECToR’s Cray XT supercomputer and thus model more complex physical systems with greater efficiency. Shared memory techniques were introduced to allow larger models to be computed with greater efficiency by enabling multiple MPI processes on a single node to share a common data set, thus reducing the number of nodes needed for a given simulation (the general idea is sketched after this list). Further work, including hierarchical parallelism with OpenMP and I/O optimizations, improved the scalability of the code, enabling CASINO to run 60-80% faster for simulations using more than 10,000 cores. Following NAG’s work, the scientists were able to run on 40,000 cores of the Jaguar petascale supercomputer at Oak Ridge National Laboratory. It is estimated that this dCSE work saved 12 million AUs (allocation units) for a one-year research project on HECToR, equivalent to savings of as much as £760k, with the potential for future savings of up to several million pounds. Lead: HECToR dCSE

  • University of Warwick and NAG HECToR dCSE (UK). HPC experts from NAG, working under NAG’s Computational Science and Engineering (CSE) support service for HECToR, the UK’s national academic supercomputing facility, improved the scalability and performance of DL_POLY_3, a widely used software package for studying molecular dynamics. The 20-fold improvement in performance achieved by this project enabled a study of eggshell formation that was infeasible at the previous performance level. DL_POLY_3 is a general-purpose package for classical molecular dynamics (MD) simulations from STFC Daresbury Laboratory. University of Warwick researchers Mark Rodger and David Quigley, in collaboration with colleagues at the University of Sheffield, used DL_POLY_3 and the HECToR supercomputers to study the role of a protein called ovocleidin-17 (OC-17) in chicken eggshell formation. Significant performance improvements, especially in parallel I/O, were needed to make the modeling possible in feasible timescales on the HECToR supercomputers. Lead: HECToR dCSE

  • Bottero S.p.A. (Italy). Bottero recently introduced the EMOC, a completely new family of Mold Opening and Closing (MOC) mechanisms for the hollow glass industry. The project combined innovation in mold movement (speed and precision), the cooling system, and maintenance (“top mounted”) with compatibility with previous standards. This multidimensional scope led to hard space constraints, requiring a complex 3D kinematic scheme as well as advanced CFD simulation and design tools. A high performance level was required of the mechanism, particularly regarding clamping forces in the closed position, force available at the beginning of the mold stroke, reduced closing time, absence of vibration during movement, and robustness with respect to irregularities in the air supply. CFD tools and simulation techniques were used to improve the EMOC design. Lead: Alberto Marino

  • Polestar Racing (Sweden). The Polestar Racing vehicle model had to be modified without considering the chassis design parameters, which were previously the core of the optimization analysis. Design simulation took on even greater importance because the best combination of front-to-rear weight, aerodynamics, and brake distribution had to be found in three days. Polestar Racing utilized a suite of simulation tools from MSC Software, the Lap Time Simulation (LTS) in-house code, and modeFRONTIER to devise a technical computing-driven approach to improving performance. The resulting improvements in weight, aerodynamics, and brake distribution led to lap time reductions ranging from 0.19 to 0.50 seconds. Lead: Per Blomberg

  • RENCI (U.S.). Scientists and researchers at RENCI developed a comprehensive informatics framework called NCGENES, enabling medical decision support by systemizing genomic analysis and high performance computing to mine genomic data for clinical and research use. The framework tackles one of the biggest challenges in genomic medicine – the need for automated sorting of the millions of variants generated by genome-scale sequencing to identify the very few with actual clinical relevance. The innovative framework also provides a proof of principle for how genomic medicine can be carried out in a practical manner. NCGENES breaks new ground by guiding and defining best practices for the use of whole genome sequencing as a diagnostic tool to aid patients and clinicians in making sense of medical informatics. Lead: Phil Owen

  • RENCI – ADCIRC Surge Guidance System (U.S.). This system uses the coupled coastal circulation, storm surge, and wind wave model ADCIRC+SWAN to produce high-resolution forecasts of storm surge, near-shore waves, and water inundation when tropical or extra-tropical storms threaten the U.S. Atlantic or Gulf Coasts. The complete system, the ADCIRC Surge Guidance System (ASGS), includes an advanced web-based display (NC-Coastal Emergency and Risk Assessment). During an active storm, ASGS is run 2-4 times each day on a 150-node Dell PowerEdge M610 cluster (2 x 2.8 GHz quad-core Intel Nehalem-EP 5560) at the Renaissance Computing Institute (RENCI). The outputs from these runs are incorporated into guidance and forecasting efforts by the National Weather Service, the National Hurricane Center, and agencies such as the U.S. Coast Guard, the U.S. Army Corps of Engineers, FEMA, and local and regional emergency management personnel. The resulting forecasts are used for evacuation decisions, to position supplies and response personnel, for search and rescue, and for other event-based decision support as needed. Leads: Rick Luettich, Brian Blanton
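
The UCL/NAG dCSE entry above hinges on letting all MPI processes on a node read one shared copy of a large data set instead of holding private copies. The sketch below shows that general idea with MPI-3 shared-memory windows via mpi4py; it is an illustration of the technique only, not the mechanism implemented in CASINO for HECToR, and the array name and size are arbitrary.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    # Group the ranks that share a node; only they can share physical memory.
    node_comm = comm.Split_type(MPI.COMM_TYPE_SHARED)

    n = 1_000_000                                 # size of the shared table (hypothetical)
    itemsize = MPI.DOUBLE.Get_size()
    # One rank per node allocates the buffer; the others attach with zero-size requests.
    size = n * itemsize if node_comm.rank == 0 else 0
    win = MPI.Win.Allocate_shared(size, itemsize, comm=node_comm)

    buf, _ = win.Shared_query(0)                  # every rank gets a handle to rank 0's buffer
    table = np.ndarray(buffer=buf, dtype="d", shape=(n,))

    if node_comm.rank == 0:
        table[:] = np.arange(n)                   # fill the shared data once per node
    node_comm.Barrier()                           # readers wait until the data is in place

    # All ranks on the node now read the same physical memory rather than private copies.
    print(node_comm.rank, table[:3])
    win.Free()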

6th HPC Innovation Excellence Award Winners
Announced at SC 2013
  • GE Global Research (U.S.) Using a 40-million-CPU-hour Department of Energy award, GE Global Research has modeled the freezing behavior of water droplets on six different engineered surfaces under six operating conditions on the hybrid CPU/GPU Titan system at Oak Ridge National Lab (ORNL). Through recent advances in the field, including a joint simulation enhancement effort with ORNL to fully leverage the hardware infrastructure, GE Global Research has been able to accelerate these simulations by approximately 200-fold compared with just two years ago. Lead: Masako Yamada

  • The Procter & Gamble Company (U.S.) P&G researchers and collaborators at Temple University developed models at the molecular and mesoscale levels to understand the complex molecular interactions of full-formula consumer products such as shampoos, conditioners, facial creams, and laundry detergents. The HPC-driven research helped shed light on the performance of the complex formula interactions, versus inferring performance from isolated calculations. Results of the HPC-driven research led to a better understanding of interfacial phenomena, phase behavior, and the performance of several P&G products. Lead: Kelly L. Anderson

  • National Institute of Supercomputing and Networking, Korea Institute of Science and Technology Information (Korea) The EDISON (EDucation and research Integration through Simulation On the Net) Project, funded by the Ministry of Science, ICT and Future Planning, Korea, established an infrastructure on the Web where users across the country could easily access and utilize various engineering/science simulation tools for educational and research purposes. The EDISON project is accelerating research in five key areas: Computational Fluid Dynamics, Computational Chemistry, Nano Physics, Computational Structural Dynamics, and Multi-disciplinary Optimization. The Project utilizes a novel partnership model between the project and the respective domains to develop area-specific simulation tools that make HPC accessible to domain specialists. Lead: Kumwon Cho

  • GE Global Research (U.S.) GE Global Research’s work on Large Eddy Simulations (LES) leveraged petascale computing to break barriers in accurately characterizing the key flow physics of multi-scale turbulent mixing in boundary layer and shear flows. Findings from this research will significantly improve the prediction and design capabilities for next-generation aircraft engines and wind turbines, both from demonstrating the viability of LES as a characterization tool and as a source of physics guidance. Lead: Umesh Paliath

  • Spectraseis Inc. (U.S.) and CADMOS, University of Lausanne (Switzerland) Researchers doubled both acoustic and elastic solver throughput while improving code size and maintainability by harnessing the massively parallel computing capabilities of Fermi and Kepler GPUs. With the improved efficiency obtained by redesigning the code for GPUs, the time to solution was reduced from hours to seconds. The improved capability allowed Spectraseis to move from 2D to 3D and, in several cases, obtain more than 100x speed-up. Leads: Igor Podladtchikov and Yury Podladchikov

  • Intelligent Light (U.S.) Intelligent Light addressed the challenge of high volumes of CFD data using FieldView 14 data management and process automation tools. Intelligent Light contributed results from approximately 100 cases with more than 10,000 time steps each to deliver a complete response to the workshop objectives. A Cray XE6 was used to generate the CFD solutions and perform much of the post-processing. This project successfully demonstrated the value and practicality of using innovative workflow engineering with automation and data management for complex CFD studies. Lead: Dr. Earl P.N. Duque

  • Facebook (U.S.) Facebook manages a social graph composed of people, their friendships, subscriptions, and other connections. Facebook modified Apache Giraph to allow loading vertex data and edges from separate sources (GIRAPH-155). With appropriate garbage collection and performance tuning, Facebook was able to run an iteration of PageRank on an actual one-trillion-edge social graph formed by various user interactions in fewer than four minutes (a toy PageRank iteration is sketched after this list). Facebook can now cluster a monthly active user data set of one billion input vectors with 100 features into 10,000 centroids with k-means in less than 10 minutes per iteration. Lead: Avery Ching / Apache Giraph

  • HydrOcean/Ecole Centrale Nantes (France) SPH-flow is an innovative fluid dynamics solver based on a meshless, compressible, and time-explicit approach. The SPH-flow solver has been used in several industrial projects, including: impact forces during aircraft and helicopter ditching; free-surface simulations of ship wake and wave fields; multiphase emulsion simulations; extreme wave impacts on structures; simulation of tire hydroplaning; water films around car bodies; and underwater explosions. Leads: Dr. Erwan Jacqin, CEO of HydrOcean, a spinoff of the Ecole Centrale Nantes fluid dynamics lab, and Prof. David Le Touze, head of the SPH-flow research team at Ecole Centrale Nantes.

  • Imperial College London and NAG (UK) HPC experts from NAG and Imperial College London have implemented scientifically valuable new functionality and substantial performance improvements in the Incompact3D application. After the improvements, simulations can scale efficiently to 8,000 cores, with a run time of around 3.75 days (wall-clock time), over 6x faster than before. Furthermore, meshes for new high-resolution turbulence mixing and flow control simulations, which use up to 4096 x 4096 x 4096 grid points, can now utilize as many as 16,384 cores. Lead: NAG HECToR CSE Team

  • Queen Mary University of London and NAG (UK) NAG and Queen Mary University of London made significant improvements to CABARET (Compact Accurate Boundary Adjusting high Resolution Technique) code so that it may be used to solve the compressible Navier-Stokes equations and, in the context of this project, for the investigation of aircraft noise. The newly developed code was validated and tested against the serial code and a parallel efficiency of 72% was observed when using 250 cores of the XT4 part of HECToR with the quad core architecture. Lead: NAG HECToR CSE Team

  • Southern California Earthquake Center (U.S.) SCEC has built a special simulation platform, CyberShake, which uses the time-reversal physics of seismic reciprocity to reduce the computational cost by 1000x. Additionally, the production time for a complete regional CyberShake model at seismic frequencies up to 0.5 Hz has been reduced by 10x, and four new hazard models have been run on NCSA Blue Waters and TACC Stampede. SCEC researchers have developed a highly parallel, highly efficient CUDA-optimized wave propagation code, called AWP-ODC-GPU, that achieved a sustained performance of 2.8 petaflops on ORNL Titan. Lead: Southern California Earthquake Center Community Modeling Environment Collaboration

  • Princeton University/Princeton Plasma Physics Laboratory (U.S.) Using high-end supercomputing resources, advanced simulations of confinement physics for large-scale MFE plasmas have been carried out for the first time with very high phase-space resolution and long temporal duration, delivering important new scientific insights. This research was enabled by the new GTC-P code, developed to use multi-petascale capabilities on world-class systems such as the IBM Blue Gene/Q systems “Mira” at the ALCF and “Sequoia” at LLNL. Leads: William Tang, Bei Wang, and Stephane Ethier

  • Oak Ridge Leadership Computing Facility, Oak Ridge National Laboratory (U.S.) Researchers at ORNL have used the Titan supercomputer to perform the first simulations of organic solar cell active layers at scales commensurate with actual devices. By modifying the LAMMPS molecular dynamics software to use GPU acceleration on Titan, the researchers were able to perform simulations to study how different polymer blends can be used to alter the device morphology. The new insights will aid in the rational design of cheap solar cells with higher efficiency. Results are published in the journal Physical Chemistry Chemical Physics. Lead: W. Michael Brown and Jack C. Wells

  • Ford Werke GmbH (Germany) Researchers at Ford Werke GmbH have developed and deployed a new CAE process, which enables the optimization of the airflow through the cooling package of a vehicle using complex 3D CFD analysis. The Ford team also demonstrated it could run these complex simulations fast enough to enable their use within the time constraints of a vehicle development project. The team’s work on Jaguar at Oak Ridge National Lab will help Ford maximize the effectiveness and fuel efficiency of engine bay designs throughout the company. Lead: Dr. Burkhard Hupertz and Alex Akkerman
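
The Facebook entry above refers to running an iteration of PageRank over a trillion-edge graph in Apache Giraph. The toy sketch below shows what one such vertex-centric iteration computes, on a three-vertex graph in plain Python; it does not use the Giraph API, and the graph and damping factor are illustrative.

    # Adjacency list of a tiny directed graph: vertex -> outgoing neighbors.
    graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    damping = 0.85
    rank = {v: 1.0 / len(graph) for v in graph}

    def pagerank_iteration(graph, rank, damping):
        # Each vertex sends rank/out-degree along its edges; receivers sum the incoming messages.
        incoming = {v: 0.0 for v in graph}
        for v, neighbors in graph.items():
            share = rank[v] / len(neighbors)
            for w in neighbors:
                incoming[w] += share
        n = len(graph)
        return {v: (1 - damping) / n + damping * incoming[v] for v in graph}

    for _ in range(20):        # Giraph runs these supersteps in parallel across many workers
        rank = pagerank_iteration(graph, rank, damping)
    print(rank)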




7th HPC Innovation Excellence Award Winners
Announced at ISC 2014
  • University of Wisconsin-Madison (U.S.). University of Wisconsin researchers utilized HPC resources in combination with multiple advanced protein structure prediction algorithms and deep sequence data mining to construct a highly plausible capsid model for Rhinovirus-C (~600,000 atoms). The simulation model helps researchers explain why existing pharmaceuticals don't work on this virus. The modeling frameworks developed by the researchers provide angstrom-level predictions for new antivirals and a platform for vaccine development. Lead: Ann C. Palmenberg

  • Argonne National Laboratory, Caterpillar, Convergent Science (U.S.). Researchers from Argonne National Laboratory conducted one of the largest internal combustion engine simulations. Predictive internal combustion engine simulations require very fine spatial and temporal resolution and high-fidelity, robust models for two-phase flow, spray, turbulence, combustion, and emissions. The research has allowed Caterpillar Inc. to shrink its development timescales, resulting in significant cost savings. Caterpillar engineers predict that these HPC developments will reduce the number of multi-cylinder test programs by at least a factor of two, which will result in a cost saving of $500,000-$750,000 per year. Lead: Sibendu Som

  • CINECA (Italy). Engineers from THESAN srl, an Italian SME active in the renewable energy sector, teamed up with the Italian supercomputing center CINECA to pursue simulation-driven engineering of hydroelectric turbines. The research was conducted in the framework of the PRACE SHAPE (SME HPC Adoption Programme in Europe) initiative. The engineers and researchers built an HPC-based workflow to optimize the design of a new class of hydroelectric turbines. Using CFD, Thesan could generate cost savings by reducing or eliminating the production of physical prototypes, better understanding the flaws of earlier designs, and critically shortening the time to market. Leads: Raffaele Ponzini, Roberto Vadori, Giovanni Erbacci, Claudio Arlandini

  • Pipistrel d.o.o. (Slovenia). Engineers and scientists from Pipistrel utilized HPC and technical computing resources to design and develop the Taurus G4 aircraft. The aircraft was conceived, designed, and built in a mere 5 months, relying heavily on CAD and rapid prototyping techniques, and especially on CFD and other computational aerodynamics tools to evaluate flight performance and handling before committing to building the prototype. The aircraft introduced a unique twin-fuselage configuration, presenting significant challenges in designing the wings, the high-lift systems, and the overall configuration. HPC-based CFD was used as early as the conceptual design stage to optimize the shape of the engine nacelle and avoid premature flow separation. CFD was used in later design stages to optimize the slotted-flap high-lift geometry and, especially, to determine the lift and stability behavior of the complete aircraft configuration in ground effect. Lead: Prof. Dr. Gregor Veble

  • Culham Centre for Fusion Energy, EPCC at the University of Edinburgh, York Plasma Institute at the University of York, and Lund University. Researchers from CCFE, EPCC, and the Universities of York and Lund have made substantial recent optimizations to the well-known plasma turbulence code GS2. This included a total rewrite of the routines that calculate the response matrices required by the code's implicit algorithm, which has significantly accelerated GS2's initialization, typically by a factor of more than 10. Taken together, these optimizations have vastly reduced wall time, as illustrated by the impressive factor-of-20 speedup achieved for a benchmark calculation running on 8,192 cores. The optimized code achieves scaling efficiencies close to 50% at 4,096 cores and 30% at 8,192 cores for a typical calculation, compared with efficiencies of 4% and 2% respectively prior to these optimizations (parallel efficiency is illustrated in the sketch after this list). Leads: David Dickinson, Adrian Jackson, Colin M Roach and Joachim Hein

  • Westinghouse Electric Company LLC, ORNL (U.S.). Researchers from Westinghouse Electric Company and the Consortium for Advanced Simulation of LWRs (CASL), a U.S. Department of Energy (DOE) Innovation Hub, performed core physics simulations of the AP1000® PWR startup core using CASL's Virtual Environment for Reactor Application (VERA). These calculations, performed on the Oak Ridge Leadership Computing Facility (OLCF) "Titan" Cray XK7 system, produced 3D high-fidelity power distributions representing the conditions expected to occur during the AP1000 start-up. The set of results obtained provide insights that improve understanding of core conditions, helping to ensure safe startup of the AP1000 PWR first core. Lead: Fausto Franceschini (Westinghouse)

  • Rolls-Royce, Procter and Gamble, National Center for Supercomputing Applications, Cray Inc., Livermore Software Technology Corporation (U.S.). Researchers from NCSA, Rolls-Royce, Procter and Gamble, Cray Inc., and Livermore Software Technology Corporation were able to scale the commercial explicit finite element code LS-DYNA to 15,000 cores on Blue Waters. The research has the potential to transform several industries, including aerospace and automotive engine design and consumer product development and design. The researchers noted that the increased scalability can result in significant cost savings. Leads: Todd Simons, Seid Koric.
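
Parallel efficiency, as quoted for GS2 above, is the speedup over a baseline run divided by the corresponding increase in core count. The sketch below works through that definition with hypothetical timings; the numbers are illustrative, not measurements from either the GS2 or LS-DYNA projects.

    def parallel_efficiency(t_base, cores_base, t_n, cores_n):
        # Speedup relative to the baseline run, divided by the factor more cores used.
        speedup = t_base / t_n
        return speedup / (cores_n / cores_base)

    # Hypothetical timings: 1000 s on 512 cores versus 80 s on 8,192 cores.
    eff = parallel_efficiency(1000.0, 512, 80.0, 8192)
    print(f"efficiency at 8192 cores: {eff:.0%}")      # ~78%

    # The GS2 entry quotes efficiency at 8,192 cores rising from 2% to 30% after optimization,
    # i.e. roughly a 15x gain in delivered throughput at that core count (30% / 2%), consistent
    # in magnitude with the factor-of-20 speedup quoted for the benchmark calculation.
    print(30 / 2)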