Our program recognizes noteworthy achievements by users of High Performance Computing (HPC) technologies.
How winners will be selected
» All submissions will receive a careful and complete review
» Submissions must contain a clear description of the dollar value or scientific value received in order to qualify
» Volunteers from the HPC User Forum Steering Committee will conduct an initial ranking of the submissions (for the list of the steering committee members, click here).
» The final evaluation will then be conducted by selected experts in the specific area/discipline in which the work was done.
» Winners will be notified in advance of any public announcement
Recent Innovation Excellence Award Winners
Click the dropdown boxes below to see the most recent HPC User Forum Excellence Awards winners.
1st HPC Innovation and ROI Award Winners - Announced at ISC 2011
- Beijing Genomics Institute (China) — BGI has created genomics solutions that speed science and commercialization. BGI estimates that the solutions have saved hundreds of thousands of dollars. Project leader: Lin Fang.
- BMI (U.S.) — Fuel-saving aerodynamic truck design. Estimated savings of $5,000 per truck, per year. Project leader: Mike Henderson.
- Department of Defense HPC Modernization Program – DEW (U.S.) — $13.8 million initial savings from virtual prototyping of countermeasures based on high-power microwaves, with much greater potential savings. Project leader: Cray Henry.
- Department of Defense HPC Modernization Program – COAMPS (U.S.) — Significantly improved model for tropical cyclone forecasting. Documented savings of $6 million in one event alone, with far greater potential savings.
- Lawrence Livermore National Laboratory (U.S.) — Achieved 17 percent improvement in fuel economy of Class 8 tractor-trailers. Potential savings: $24.2 billion annually. Project leader: Kambiz Salari.
- National Energy Research Scientific Computing Center (U.S.) — An international study has enabled a much more detailed and longer (100-year) record of past weather to improve climate studies. Project leader: Gilbert P. Compo, with the National Energy Research Scientific Computing Center.
- Ohio Supercomputer Center-Edison Welding Institute (U.S.) — Weld Predictor simulation tool cuts costs by an average of 50 percent by minimizing experimental trials. Project leaders: Ashok Krishnamurthy, Kathy Kelley.
- Shanghai Supercomputer Center (China) — New method to determine molecular configurations of large molecules on metal surfaces. Project leader: Tao Wang.
- Shanghai Supercomputer Center (China) — Substantial time and cost savings in engineering design. Project leader: Jiancheng Wu.
2nd HPC Innovation and ROI Award Winners - Announced at SC 2011
- Cornell University Center for Advanced Computing (U.S.). Through faster computations (a speed-up of more than 175 times), a better understanding of networks of coordinated amino-acid variation may enable the discovery of new therapeutic targets for the hepatitis C virus (HCV). With the cost per liver transplantation around $280,000 for one year, liver transplantation for hepatitis C alone reaches a total cost of nearly $300 million per year. Moreover, the average lifetime cost of hepatitis C, in the absence of a liver transplant, has been estimated at about $100,000 per patient. Assuming that 80% of the 4.5 million Americans believed to be infected develop chronic liver disease, the total lifetime cost for this group (3.6 million people) will be a staggering $360 billion in today's dollars; assuming an estimated survival of 40 years, the annual health care costs for the affected U.S. population with chronic hepatitis C may be as high as $9 billion (this arithmetic is worked through in the sketch after this list). Project leaders: David A. Lifka, Paul Redfern
- Continuous Casting Consortium (U.S.). Helped by HPC resources at the National Center for Supercomputing Applications, the Continuous Casting Consortium at the University of Illinois has developed comprehensive numerical models of the continuous casting of steel, including several ground-breaking numerical methods, to solve practical problems of interest to the steel industry. Based on the roughly 100 million tons of steel produced in the U.S. each year and the approximately $400 per ton net cost of scrapping, a one percent reduction in yield loss would save about $400 million per year, along with energy savings during reheating of about $350 million per year. Project leaders: Seid Koric and Brian G. Thomas
- Center for Development of Advanced Computing (India). C-DAC is part of the Department of Information Technology, Ministry of Communications and Information Technology, Government of India. C-DAC developed CHReME, an HPC application execution interface with the flexibility to create workflows and associate them with scientific applications. CHReME has been deployed at HPC sites within India and as far away as Armenia and Tanzania. The software is reducing costs by tens of thousands of dollars while improving product cycle times and increasing productivity in critical projects. Project leader: Goldi Misra
- Swift Engineering (U.S.). Swift uses HPC to develop prototype vehicles for the automotive and aerospace industries much faster than these industries develop production vehicles. Its customers include Toyota, for whom Swift helped develop the aerodynamic shape of the Tundra truck and the NASCAR version of the Camry; Northrop Grumman, which had Swift develop the BAT Unmanned Aerial Vehicle; Eclipse Aviation, for whom Swift developed and built a new, light business jet in only 200 days; and others. Swift can design a vehicle in one season that would take a big OEM up to four years (Swift makes prototypes, not certified production vehicles). The prototypes can save auto/aerospace firms tens to hundreds of millions of dollars in the design cycle. HPC enables Swift to explore aerodynamic design spaces 10 to 100 times faster than before, depending on the problem type. Project leader: Clayton Triggs
- United Technologies Research Center/NERSC IPM (U.S.). With the help of Integrated Performance Monitoring (IPM) and staff assistance from the Department of Energy's NERSC facility, UTRC sped up a United Technologies-owned CFD code used to simulate multiphase flows. For the first time, this enabled the simulation of realistic fuel spray-related applications. Traditionally, such systems are designed by costly build-test-bust experiments, owing to the inability to model them effectively or to diagnose them experimentally (measurements in the very near field of the spray are not possible due to obscuration effects). The ability to simulate this problem helps reduce design cycles substantially and yields new insights into the physics that can produce sprays with enhanced properties. UTRC has seen reductions in design cycles of at least 66%. Project leader: Marios C. Soteriou
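The cost figures in the Cornell entry above combine into the headline estimates through simple arithmetic. As a quick check, here is that calculation as a short Python sketch; every figure is one quoted in the entry itself.

```python
# Worked arithmetic for the hepatitis C cost estimates quoted in the
# Cornell University entry above (all figures as stated in that entry).
infected = 4_500_000                  # Americans believed to be infected with HCV
chronic_fraction = 0.80               # share assumed to develop chronic liver disease
lifetime_cost_per_patient = 100_000   # USD, absent a liver transplant
survival_years = 40                   # assumed survival horizon

chronic_patients = int(infected * chronic_fraction)
total_lifetime_cost = chronic_patients * lifetime_cost_per_patient
annual_cost = total_lifetime_cost / survival_years

print(f"Chronic patients:    {chronic_patients:,}")                    # 3,600,000
print(f"Total lifetime cost: ${total_lifetime_cost / 1e9:.0f} billion")  # $360 billion
print(f"Annual cost:         ${annual_cost / 1e9:.0f} billion")          # $9 billion
```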
3rd HPC Innovation and ROI Award Winners - Announced at ISC 2012
- GE Global Research (U.S.). By leveraging the advancements in available computational power, GE was able to model the unsteady flow physics in a turbine test rig. Previously, only the steady flow physics could be modeled. By comparing the results of the new unsteady calculations with previously obtained steady calculations, new insights into the flow physics not captured with steady CFD were obtained, such as a better understanding of wake dynamics. Aerodynamic engineers can use these insights to improve aerodynamic efficiency, which will reduce engine fuel burn. Each year approximately $200 billion worth of fuel is consumed globally by GE's gas turbine products, both aircraft engines and land-based gas turbines used for the production of electricity. Every 1% reduction in fuel consumption therefore saves the users of these products over $2 billion per year. Project leader: Richard Arthur
- Department of Defense High Performance Computing Modernization Program (U.S.). The Computational Research and Engineering Acquisition Tools and Environments (CREATE) program, a multi-year effort to develop multi-physics-based software for the design of ships, air vehicles, and antennas, applied HPC physics-based simulation to develop a standard process for generating the aerodynamic data that supported airworthiness certifications for small unmanned aerial vehicles (UAVs). The data generated through computational methods also helped mitigate risk in the flight certification process by giving engineers sound data with which to make more informed airworthiness assessments. Several UAV platforms have relied on this HPC-driven framework to gain approval, including RAVEN, AEROSTAR, and EXDRON. Without HPC resources, airworthiness decisions for small UAVs could not have been made on the required timelines. This approach saved a conservatively estimated $1 billion or more, and with continued use of this program the economic impact will continue to grow. Project leader: John E. West
- Mary Bird Perkins Cancer Center and Louisiana State University (U.S.). Researchers at the centers performed clinical trials in silico that would have been prohibitively expensive and too slow to conduct with traditional methods. The center used an HPC-driven approach to inform clinical and health policy decisions. The in silico clinical trials compared the effectiveness of multiple therapy options, leading to better-informed clinical decisions. Simulation of therapy equipment resulted in savings of more than $100 million in infrastructure costs and over $12 million in research costs while enabling rapid advancements in cancer care research. Project leader: Wayne D. Newhauser
- Aon Benfield Securities, Inc. (Canada). Aon has developed the PathWise platform, which uses GPU-based high performance computing to let quantitative analysts quickly and easily express financial application kernels, such as Monte Carlo simulations, through domain-specific interfaces (a sketch of this kind of kernel follows this list). The GPU-driven HPC capability enabled quantitative analysts to accelerate financial computations from days to minutes, with 50-100 times the throughput of conventional techniques. The PathWise platform achieved an average 90% cost savings in both HPC infrastructure costs and time-to-market, translating to several million dollars in savings. Project leaders: Peter Phillips, Amir Mohammed
- BGI Shenzhen (China). BGI has developed a set of distributed computing applications using a MapReduce framework to process large genome data sets on clusters (a minimal illustration of the map and reduce phases follows this list). By applying advanced software technologies including HDFS, Lustre, GlusterFS, and the Platform Symphony MapReduce framework, the institute has saved more than $20 million to date. For some application workloads, BGI achieved a significant improvement in processing capabilities while enabling the reuse of storage, reducing infrastructure costs and delivering results in less than 10 minutes, versus the prior time of 2.5 hours. Applications enabled through the MapReduce framework included: sequencing 1% of the human genome for the International Human Genome Project; contributing 10% of the International Human HapMap Project; conducting research in combating SARS and a German outbreak strain of the E. coli bacterium; and completely sequencing the rice, silkworm, and potato genomes and the human gut metagenome. Project leader: Lin Fang
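The PathWise entry above describes analysts expressing Monte Carlo kernels through domain-specific interfaces. PathWise itself is proprietary, so as a hedged illustration of the kind of data-parallel kernel involved, the NumPy sketch below prices a European call option by Monte Carlo; the function name and every parameter are illustrative, not part of any PathWise API.

```python
import numpy as np

def mc_european_call(s0, strike, rate, sigma, t, n_paths=1_000_000, seed=0):
    """Monte Carlo price of a European call under geometric Brownian motion.

    The computation is vectorized over all simulated paths at once -- the
    same data-parallel structure that maps naturally onto a GPU."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal asset price under risk-neutral GBM dynamics.
    s_t = s0 * np.exp((rate - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(s_t - strike, 0.0)
    # Discount the mean payoff back to today.
    return np.exp(-rate * t) * payoff.mean()

print(mc_european_call(s0=100.0, strike=105.0, rate=0.02, sigma=0.2, t=1.0))
```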
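The BGI entry above centers on a MapReduce framework for genome data. The sketch below is not BGI's code; it is a minimal Python illustration of the two phases of the model, counting k-mers (length-k subsequences) across sequencing reads, with toy stand-in data.

```python
from collections import Counter
from itertools import chain

def map_phase(read, k=8):
    """Map step: emit every k-mer found in one sequencing read."""
    return (read[i:i + k] for i in range(len(read) - k + 1))

def reduce_phase(kmer_streams):
    """Reduce step: sum the counts of identical k-mers across all reads.
    In a real framework this aggregation runs distributed across a cluster."""
    return Counter(chain.from_iterable(kmer_streams))

reads = ["ACGTACGTGGA", "TTACGTACGTA"]   # toy stand-ins for sequencing reads
counts = reduce_phase(map_phase(r, k=4) for r in reads)
print(counts.most_common(3))
```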
4th HPC Innovation and ROI Award Winners - Announced at SC 2012
- Computational Research Laboratories (India). Engineers at Computational Research Laboratories (CRL), a subsidiary of Tata Consultancy Services, developed an HPC-driven computational fluid dynamics (CFD) procedure aimed at adding realism to real-world wind tunnel data. This innovation could become crucial for the aerospace industry and others using similar technology to improve their product designs. The CFD developments by CRL-TCS are targeted at increasing the accuracy of maximum-lift predictions for aircraft and optimizing the utilization of wind tunnel testing. Improving maximum-lift calculations could enable designers to use less material in aircraft, resulting in improved fuel efficiency and lower fuel costs. Improvements for wind tunnel testing alone have resulted in up to a 20% reduction in flap prediction tests and estimated savings of at least $2 million per product design. The CFD methodologies could potentially be applied to other problems, yielding further savings. The investments to enable the CRL-TCS results, including the system cost, were between $1 million and $2 million. The simulations were conducted on the Hewlett Packard EKA system. Project leaders: Kishore Nikam and Anutosh Moitra
- St. Vincent's Institute of Medical Research (Australia). A cross-organizational team comprising researchers from St. Vincent's Institute of Medical Research, Victorian Infectious Disease Research Laboratories, the IBM Research Collaboratory for Life Sciences – Melbourne, and the Victorian Life Sciences Computation Initiative developed a method to simulate the 3D atomic motion of the complete human rhinovirus on Australia's fastest supercomputer, paving the way for new drug development; this is the first time the atomic motion of a complete human rhinovirus has been simulated on a computer. Understanding how anti-viral drugs work on rhinoviruses and related viruses can potentially speed up the development of new treatments and reduce development costs, with potential savings in drug discovery and pre-clinical development of up to $1,000,000 per year. The research was conducted on an IBM Blue Gene/Q system, with storage infrastructure developed by DDN. Project leader: Professor Michael Parker, with Jason Roberts, Matthew Downton, Michael Kuiper
- Cycle Computing (U.S.). Cycle Computing provisioned a utility supercomputer of more than 50,000 cores in the Amazon Web Services (AWS) cloud for Schrödinger and Nimbus Discovery to accelerate lead identification via virtual screening (the fan-out structure of such a run is sketched after this list). Schrödinger ran a virtual screen of 21 million compounds against a protein target. The run cost under $4,900 per hour at peak and required no upfront capital. The cloud-based run time per job averaged 11 minutes, and the total work completed topped 100,000 compute-hours. Project leader: Jason Stowe
- Intelligent Light (U.S.). The CFD workflow and software developed by Intelligent Light are benefiting Zipp Speed Weaponry, a small-to-midsize manufacturer. A direct consequence of the work was a rapid change in design direction that brought a revolutionary new racing wheel, "Zipp Firecrest," to market. HPC and a highly productive, automated CFD workflow enabled Zipp to create a sustainable competitive advantage by helping the company effectively design aerodynamic racing wheels for cyclists. The hardware system was powered by an on-demand cloud from Dell R Systems. This project involved an aggregate investment of $130,000. Project leader: Roger R. Rintala
- PayPal (U.S.). PayPal, an eBay company, leverages HPC technologies to provide real-time stream analytics for behavioral and situational modeling related to online fraud detection. The addition of HPC resources from SGI, Hewlett Packard, Mellanox, and QLogic (Intel) has enabled PayPal to advance to real-time fraud detection. The system handles more than 4 billion new records daily. Conservatively estimated, the revenue savings generated by applying the HPC infrastructure to fraud detection exceed $710 million. Project leaders: Arno Kolster and Ryan Quick
- Ramgen Power Systems LLC (U.S.). Ramgen Power Systems is a small business in Washington State that is developing a novel gas compressor system based on shock wave compression technology derived from supersonic flight applications. Use of the Cray "Jaguar" supercomputer at Oak Ridge National Laboratory cut R&D costs by over $4 million. Using Jaguar, staff of the Oak Ridge Leadership Computing Facility were able to give these applications a 100-fold speedup in time to solution and a two-fold decrease in memory usage per core. This project was enabled by a seed investment of around $500,000. Project leader: Allan D. Grosvenor.
- Barcelona Supercomputing Center – Alya Red (Spain). The center developed a first-of-its-kind, in-house, end-to-end biomechanical model, including numerical methods, parallel implementation, mesh generation, and visualization. The Alya system is a computational mechanics code with two main features. First, it is specially designed to run with high efficiency in large-scale supercomputing facilities. Second, it can solve different physics tasks in a coupled way, each with its own modeling characteristics: fluids, solids, electrical activity, species concentration, free surfaces, etc. The Alya Red biomechanical model can help bring drugs to market faster through HPC simulation-driven testing, which could yield tens of millions of dollars in potential savings. The long-term vision of the Alya Red project is to create an IT infrastructure of hardware and software that helps medical doctors, clinical researchers, and the pharmaceutical industry use HPC to positively impact healthcare. The Alya Red simulations ran on Linux-driven HPC clusters and IBM BG/P and BG/Q systems, with interconnect infrastructure that included Myrinet and custom interconnect technology. Other tools used included ParaView, VisIt, Blender, and Octane. Total investment in developing the infrastructure was around $500,000. Project leader: Mariano Vazquez.
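The Cycle Computing run described above is an embarrassingly parallel fan-out: millions of independent docking jobs scattered across cloud cores. The Python sketch below shows that structure with a local process pool; the scoring function and compound names are placeholders, not Schrödinger's actual screening code, and a real cloud run would distribute the same independent tasks across many machines.

```python
from concurrent.futures import ProcessPoolExecutor

def score_compound(compound_id):
    """Placeholder for a docking/scoring job; in the real run each job
    screened one compound against the protein target for ~11 minutes.
    This deterministic fake score just stands in for a docking result."""
    return compound_id, (sum(map(ord, compound_id)) % 1000) / 1000.0

def screen(compounds, workers=8):
    # Each compound is an independent task -- the same embarrassingly
    # parallel shape that let the real workload spread over 50,000+ cores.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = dict(pool.map(score_compound, compounds))
    # Return the ten compounds with the lowest (best) scores.
    return sorted(results, key=results.get)[:10]

if __name__ == "__main__":
    print(screen([f"CHEM-{i:07d}" for i in range(10_000)]))
```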
5th HPC Innovation and ROI Award Winners - Announced at ISC 2013
- Alenia Aermacchi (Italy). Engineers at Alenia Aermacchi utilized technical and scientific computing tools in the "Clean Sky" project to design a new generation of environmentally friendly aircraft. While seeking the most promising configuration, the engineers shaped two different wings to separately fulfill the two goals of enhancing aerodynamic performance and reducing wing weight. This was accomplished through the use of CFD and other technical computing tools. The two promising configurations enhanced cruise efficiency by 2.5% and reduced wing weight by 4%. Lead: Enrica Marentino
- High Performance GeoComputing Laboratory at the University of California, San Diego (U.S.). The High Performance GeoComputing Laboratory (HPGeoC) at the San Diego Supercomputer Center, an Organized Research Unit of the University of California, San Diego, has developed a highly scalable and efficient GPU-based finite difference code based on AWP-ODC, a community code developed and supported by the Southern California Earthquake Center (SCEC) for large-scale earthquake simulations (a minimal sketch of the underlying stencil pattern appears after this list). The GPU code, restructured to maximize throughput and reduce time-to-solution, achieved near-perfect weak-scaling parallel efficiency and sustained 2.33 petaflop/s on ORNL's Titan supercomputer. Moreover, a 110-fold speedup over the CPU version has been demonstrated in key strain tensor calculations critical to probabilistic seismic hazard analysis. This achievement makes a California state-wide hazard model a goal that is now reachable with existing supercomputers. The performance of the code is expected to take physics-based seismic hazard analysis to a new level using petascale, heterogeneous computing resources, with the potential to save the hundreds of millions of core hours required by building engineering design. Lead: Yifeng Cui
- DOD High Performance Computing Modernization Program (U.S.). This HPCMP-supported project, within the armor/anti-armor portfolio, provided direct support to the Warfighter program. The fundamental goals for using modeling and simulation in support of the armor/anti-armor development programs were to reduce time, resources, and risk while improving the quality of information available to designers, users, and decision makers. HPCMP capabilities enabled the development and testing of new armor/anti-armor capabilities in the following areas: propulsion, interior ballistics, external trajectory determination, terminal ballistics, warhead analysis, and sensors. The total upper bound ROI amounted to $935 million. Lead: Deborah Schwartz
- DOD High Performance Computing Modernization Program (U.S.). This HPC innovation comprised a suite of cutting-edge computational efficiency enhancement methodologies applied to complex missile-related, aero-propulsive problems with combustion. These methodologies were integrated into CRAFT CFD® and CRUNCH CFD®, two CFD codes in widespread use by DoD to support missile design and evaluation, and encompassed gas-phase/multi-phase combustion as well as laminar/turbulent chemistry, including tabulated/neural network approaches, reduced/adaptive chemistry, turbulent scalar fluctuation model (SFM) and GPU acceleration. By leveraging state-of-the-art HPC resources provided by DoD HPCMP, these innovative methodologies delivered higher-fidelity predictive capabilities to the analysis of missile systems/components, thereby enabling CFD to serve as a cost-effective design tool and virtual testbed for missile evaluation. Lead: John West
- ESTECO and Airworks Engineering (Italy). Airworks, a multidisciplinary mechanical engineering company, faced the challenge of improving the efficiency of converting wind energy into electrical power by optimizing the whole assembly of a wind power unit rotor. The firm's wind power experts, CAD (computer-aided design) specialists, and CFD (computational fluid dynamics) professionals from different organizations were involved in a complex design scenario and needed to collaborate effectively. Engineers from the University of Trieste prepared the parametric CAD model and set up the CFD simulations, while Airworks developed aerodynamic performance calculations for the wind rotor blade and subsequently performed the optimization analysis. The technical computing-driven solution enabled the seamless execution of inter-organizational simulation workflows. With the optimization workflow in place, Airworks professionals were able to explore and evaluate new parametric geometry, leading to innovative designs. The end result was a wind turbine design with an outstanding power coefficient and an annual energy production increase as high as 1.26%. Lead: Paolo Vercesi
- University College London and NAG HECToR dCSE (UK). HPC experts from NAG, working under NAG's Computational Science and Engineering (CSE) support service for HECToR, the UK's national academic supercomputing facility, optimized a Quantum Monte Carlo application for multicore architectures, resulting in a four-fold performance increase. The objectives of this dCSE project were to enable the CASINO Quantum Monte Carlo code to use the multicore processors of HECToR's Cray XT supercomputer effectively and thus model more complex physical systems with greater efficiency. Shared-memory techniques were introduced to allow larger models to be computed with greater efficiency by enabling multiple MPI processes on a single node to share a common data set, reducing the number of nodes needed for a given simulation (a minimal sketch of this shared-memory technique appears after this list). Further work, including hierarchical parallelism with OpenMP and I/O optimizations, improved the scalability of the code, enabling CASINO to run 60-80% faster for simulations using more than 10,000 cores. Following NAG's work, the scientists were able to run on 40,000 cores of the Jaguar petascale supercomputer at Oak Ridge National Laboratory. It is estimated that this dCSE work saved 12 million AUs (allocation units) for a one-year research project on HECToR, equivalent to savings of as much as £760k, with potential future savings of up to several million pounds. Lead: HECToR dCSE
- University of Warwick and NAG HECToR dCSE (UK). HPC experts from NAG, working under NAG's Computational Science and Engineering (CSE) support service for HECToR, the UK's national academic supercomputing facility, improved the scalability and performance of DL_POLY_3, a widely used software package for studying molecular dynamics. The 20-fold improvement in performance achieved by this project enabled a study of eggshell formation that was infeasible at the previous performance level. DL_POLY_3 is a general-purpose package for classical molecular dynamics (MD) simulations from STFC Daresbury Laboratory. University of Warwick researchers Mark Rodger and David Quigley, in collaboration with colleagues at the University of Sheffield, used DL_POLY_3 and the HECToR supercomputers to study the role of a protein called ovocleidin-17 (OC-17) in chicken eggshell formation. Significant performance improvements, especially in parallel I/O, were needed to make the modeling possible on feasible timescales using the HECToR supercomputers. Lead: HECToR dCSE
- Bottero S.p.A. (Italy). Bottero has recently introduced the EMOC, a completely new family of Mold Opening and Closing (MOC) mechanisms for the hollow glass industry. This complex project combined innovation in mold movement (speed and precision), the cooling system, and ease of maintenance (a "top mounted" design) with compatibility with previous standards. These multiple requirements imposed hard space constraints, demanding complex 3D kinematic schemes as well as advanced CFD simulation and design tools. A high performance level was required of the mechanism, particularly regarding clamping forces in closed positions, force available at the beginning of the mold stroke, reduced closing time, absence of vibrations during movement, and robustness with respect to irregularities in the air supply. CFD tools and simulation techniques were utilized to improve the EMOC design. Lead: Alberto Marino
- Polestar Racing (Sweden). The Polestar Racing vehicle model had to be modified without considering the chassis design parameters, which were previously the core of the optimization analysis. Design simulation took on even greater importance because the best combination of front-to-rear weight, aerodynamic, and brake distribution had to be found in three days. Polestar Racing utilized a suite of simulation tools from MSC Software, the in-house Lap Time Simulation (LTS) code, and modeFRONTIER to devise a technical computing-driven approach to improving performance. The resulting improvements in weight, aerodynamics, and brake distribution led to lap time reductions ranging from 0.19 to 0.50 seconds. Lead: Per Blomberg
- RENCI (U.S.). Scientists and researchers at RENCI developed a comprehensive informatics framework called NCGENES, which enables medical decision support by systematizing genomic analysis and using high performance computing to mine genomic data for clinical and research use. The framework tackles one of the biggest challenges in genomic medicine: the need for automated sorting of the millions of variants generated by genome-scale sequencing to identify the very few with actual clinical relevance. The innovative framework also provides a proof of principle for how genomic medicine can be carried out in a practical manner. NCGENES breaks new ground by guiding and defining best practices for the use of whole genome sequencing as a diagnostic tool that helps patients and clinicians make sense of medical informatics. Lead: Phil Owen
- RENCI – ADCIRC Surge Guidance System (U.S.). This system uses the coupled coastal circulation, storm surge, and wind wave model ADCIRC+SWAN to produce high-resolution forecasts of storm surge, near-shore waves, and water inundation when tropical or extra-tropical storms threaten the U.S. Atlantic or Gulf Coasts. The complete system, the ADCIRC Surge Guidance System (ASGS), includes an advanced web-based display (NC-Coastal Emergency and Risk Assessment). During an active storm, ASGS is run 2-4 times each day on a 150-node Dell PowerEdge M610 cluster (2 x 2.8 GHz quad-core Intel Nehalem-EP 5560) at the Renaissance Computing Institute (RENCI). The outputs from these runs are incorporated into guidance and forecasting efforts by the National Weather Service, the National Hurricane Center, and agencies such as the U.S. Coast Guard, the U.S. Army Corps of Engineers, FEMA, and local and regional emergency management personnel. The resulting forecasts are used for evacuation decisions, to position supplies and response personnel, for search and rescue, and for other event-based decision support as needed. Leads: Rick Luettich, Brian Blanton
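The AWP-ODC-based GPU code in the HPGeoC entry above is a finite-difference wave-propagation solver. The NumPy sketch below is not the AWP-ODC kernel; it is a minimal 2D scalar wave-equation time step that shows the stencil pattern such codes parallelize across GPUs, under the assumption that a simple leapfrog scheme with periodic boundaries is an acceptable stand-in for the far richer physics of the real code.

```python
import numpy as np

def wave_step(u_prev, u_curr, c=0.5):
    """One explicit finite-difference time step of the 2D scalar wave
    equation; c is the Courant number (must stay below 1/sqrt(2) for
    stability). np.roll gives periodic boundaries for simplicity."""
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) - 4.0 * u_curr)
    return 2.0 * u_curr - u_prev + (c ** 2) * lap

n = 256
u0 = np.zeros((n, n))
u0[n // 2, n // 2] = 1.0          # point source in the center of the grid
u1 = u0.copy()
for _ in range(100):              # advance the wavefield 100 time steps
    u0, u1 = u1, wave_step(u0, u1)
```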
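The CASINO optimization above hinges on node-local shared memory: all MPI ranks on a node read one copy of a large data set instead of each holding its own. Here is a minimal mpi4py sketch of that technique, assuming an MPI-3 implementation with shared-memory windows is available; it is not NAG's actual code.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
# Group together the MPI ranks that live on the same physical node.
node = comm.Split_type(MPI.COMM_TYPE_SHARED)

n = 10_000_000                       # elements in the shared read-only table
itemsize = MPI.DOUBLE.Get_size()
# Only rank 0 on each node allocates; the other ranks attach to its window.
size = n * itemsize if node.rank == 0 else 0
win = MPI.Win.Allocate_shared(size, itemsize, comm=node)
buf, _ = win.Shared_query(0)
table = np.ndarray(buffer=buf, dtype="d", shape=(n,))

if node.rank == 0:
    table[:] = np.arange(n)          # populate once per node, not once per rank
node.Barrier()                       # everyone waits until the data is ready

# All ranks on the node now read the same physical memory.
print(f"rank {comm.rank}: table[42] = {table[42]}")
```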
6th HPC Innovation and ROI Award Winners - Announced at SC 2013
- GE Global Research (U.S.) Using a 40 million CPU hour Department of Energy award, GE Global Research has modeled the freezing behavior of water droplets on six different engineered surfaces under six operating conditions on the hybrid CPU/GPU Titan at Oak Ridge National Lab (ORNL). Through recent advances in the field, including a joint simulation enhancement effort with Oak Ridge National Lab to fully leverage hardware infrastructures, GE Global Research has been able to accelerate simulations by approximately 200-fold compared to even just two years ago. Lead: Masako Yamada
- The Procter & Gamble Company (U.S.) P&G researchers and collaborators at Temple University developed models at the molecular and mesoscale levels to understand complex molecular interactions of full-formula consumer products such as shampoos, conditioners, facial creams, and laundry detergents. The HPC-driven research helped shed light on the performance of the complex formula interactions rather than inferring performance from isolated calculations, and led to a better understanding of interfacial phenomena, phase behavior, and the performance of several P&G products. Lead: Kelly L. Anderson
- National Institute of Supercomputing and Networking, Korea Institute of Science and Technology Information (Korea) The EDISON (EDucation and research Integration through Simulation On the Net) Project, funded by the Ministry of Science, ICT and Future Planning, Korea, established an infrastructure on the Web where users across the country could easily access and utilize various engineering/science simulation tools for educational and research purposes. The EDISON project is accelerating research in five key areas: Computational Fluid Dynamics, Computational Chemistry, Nano Physics, Computational Structural Dynamics, and Multi-disciplinary Optimization. The Project utilizes a novel partnership model between the project and the respective domains to develop area-specific simulation tools that make HPC accessible to domain specialists. Lead: Kumwon Cho
- GE Global Research (U.S.) GE Global Research’s work on Large Eddy Simulations (LES) leveraged petascale computing to break barriers in accurately characterizing the key flow physics of multi-scale turbulent mixing in boundary layer and shear flows. Findings from this research will significantly improve the prediction and design capabilities for next-generation aircraft engines and wind turbines, both from demonstrating the viability of LES as a characterization tool and as a source of physics guidance. Lead: Umesh Paliath
- Spectraseis Inc (U.S.) and CADMOS, University of Lausanne (Switzerland) Researchers doubled both acoustic and elastic solver throughput while improving code size and maintainability by harnessing the massive parallel computing capabilities of Fermi and Kepler GPUs. With the improved efficiency obtained by redesigning the code for GPUs, the time to solution was reduced from hours to seconds. The improved capability allowed Spectraseis to move from 2D to 3D and, in several cases, obtain more than 100x speed-up. Lead: Igor Podladtchikov and Yury Podladchikov
- Intelligent Light (U.S.) Intelligent Light addressed the challenge of high volumes of CFD data using FieldView 14 data management and process automation tools. Intelligent Light contributed results from approximately 100 cases with more than 10,000 time steps each to deliver a complete response to the workshop objectives. A Cray XE6 was used to generate the CFD solutions and perform much of the post-processing. This project successfully demonstrated the value and practicality of using innovative workflow engineering with automation and data management for complex CFD studies. Lead: Dr. Earl P.N. Duque
- Facebook (U.S.) Facebook manages a social graph composed of people, their friendships, subscriptions, and other connections. Facebook modified Apache Giraph to allow loading vertex data and edges from separate sources (GIRAPH-155). With appropriate garbage collection and performance tuning, Facebook was able to run an iteration of PageRank on an actual one-trillion-edge social graph, formed by various user interactions, in fewer than four minutes. Facebook can now cluster a monthly active user data set of one billion input vectors with 100 features into 10,000 centroids with k-means in less than 10 minutes per iteration (a toy version of one k-means iteration appears after this list). Lead: Avery Ching / Apache Giraph
- HydrOcean/Ecole Centrale Nantes (France) SPH-flow is an innovative fluid dynamics solver based on a meshless, compressible, and time-explicit approach. The SPH-flow solver has been used in several industrial projects, including: impact forces of aircraft and helicopter ditching; free-surface simulations of ship wake and wave fields; multiphase emulsion simulations; extreme wave impacts on structures; simulation of tire hydroplaning; water films around car bodies; and underwater explosions. The project is led by Dr. Erwan Jacqin, CEO of HydrOcean, a spinoff from the Ecole Centrale Nantes fluid dynamics lab, and Prof. David Le Touze, head of the SPH-flow research team at Ecole Centrale Nantes.
- Imperial College London and NAG (UK) HPC experts from NAG and Imperial College London have implemented scientifically valuable new functionality and substantial performance improvements in the Incompact3D application. After the improvements, simulations can now scale efficiently to 8,000 cores, with a run time of around 3.75 days (wall-clock time), over 6x faster than before. Furthermore, meshes for new high-resolution turbulence mixing and flow control simulations, which use up to 4096 x 4096 x 4096 grid points, can now utilize as many as 16,384 cores. Lead: NAG HECToR CSE Team
- Queen Mary University of London and NAG (UK) NAG and Queen Mary University of London made significant improvements to the CABARET (Compact Accurate Boundary Adjusting high Resolution Technique) code so that it can be used to solve the compressible Navier-Stokes equations and, in the context of this project, to investigate aircraft noise. The newly developed code was validated against the serial code, and a parallel efficiency of 72% was observed when using 250 cores of the quad-core XT4 part of HECToR. Lead: NAG HECToR CSE Team
- Southern California Earthquake Center (U.S.) SCEC has built a special simulation platform, CyberShake, which uses the time-reversal physics of seismic reciprocity to reduce the computational cost by 1000x. Additionally, the production time for a complete regional CyberShake model at seismic frequencies up to 0.5 Hz has been reduced by 10x, and four new hazard models have been run on NCSA Blue Waters and TACC Stampede. SCEC researchers have developed a highly parallel, highly efficient CUDA-optimized wave propagation code, called AWP-ODC-GPU, that achieved a sustained performance of 2.8 petaflops on ORNL Titan. Lead: Southern California Earthquake Center Community Modeling Environment Collaboration
- Princeton University/Princeton Plasma Physics Laboratory (U.S.) Using high-end supercomputing resources, advanced simulations of confinement physics for large-scale MFE plasmas have been carried out for the first time with very high phase-space resolution and long temporal duration, delivering important new scientific insights. This research was enabled by the new GTC-P code, developed to use multi-petascale capabilities on world-class systems such as the IBM BG/Q systems "Mira" at the ALCF and "Sequoia" at LLNL. Leads: William Tang, Bei Wang, and Stephane Ethier
- Oak Ridge Leadership Computing Facility, Oak Ridge National Laboratory (U.S.) Researchers at ORNL have used the Titan supercomputer to perform the first simulations of organic solar cell active layers at scales commensurate with actual devices. By modifying the LAMMPS molecular dynamics software to use GPU acceleration on Titan, the researchers were able to perform simulations to study how different polymer blends can be used to alter the device morphology. The new insights will aid in the rational design of cheap solar cells with higher efficiency. Results are published in the journal Physical Chemistry Chemical Physics. Lead: W. Michael Brown and Jack C. Wells
- Ford Werke GmbH (Germany) Researchers at Ford Werke GmbH have developed and deployed a new CAE process, which enables the optimization of the airflow through the cooling package of a vehicle using complex 3D CFD analysis. The Ford team also demonstrated it could run these complex simulations fast enough to enable their use within the time constraints of a vehicle development project. The team’s work on Jaguar at Oak Ridge National Lab will help Ford maximize the effectiveness and fuel efficiency of engine bay designs throughout the company. Lead: Dr. Burkhard Hupertz and Alex Akkerman
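For reference against the Facebook entry earlier in this list, the toy NumPy function below computes exactly one k-means iteration, the step Giraph performs at billion-vector scale; it is not Giraph code, and the sizes here are deliberately tiny.

```python
import numpy as np

def kmeans_iteration(points, centroids):
    """One k-means step: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points. Facebook's
    Giraph run distributes this same step over a billion 100-feature
    vectors and 10,000 centroids."""
    # Squared distances from every point to every centroid.
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)
    new_centroids = np.array([
        points[assign == k].mean(axis=0) if np.any(assign == k) else centroids[k]
        for k in range(len(centroids))
    ])
    return new_centroids, assign

rng = np.random.default_rng(0)
pts = rng.normal(size=(1_000, 4))            # toy: 1,000 points, 4 features
cents = pts[rng.choice(len(pts), 3, replace=False)]
for _ in range(10):
    cents, labels = kmeans_iteration(pts, cents)
```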
7th HPC Innovation and ROI Award Winners - Announced at ISC 2014
- University of Wisconsin-Madison (U.S.). University of Wisconsin researchers utilized HPC resources, in combination with multiple advanced forms of protein structure prediction algorithms and deep sequence data mining, to construct a highly plausible capsid model for Rhinovirus-C (~600,000 atoms). The simulation model helps researchers explain why existing pharmaceuticals do not work on this virus. The modeling frameworks developed by the researchers provide angstrom-level predictions for new antivirals and a platform for vaccine development. Lead: Ann C. Palmenberg
- Argonne National Laboratory, Caterpillar, Convergent Science (U.S.). Researchers from Argonne National Laboratory conducted one of the largest internal combustion engine simulations. Predictive internal combustion engine simulations require very fine spatial and temporal resolutions and high-fidelity, robust models of two-phase flow, spray, turbulence, combustion, and emissions. The research has allowed Caterpillar Inc. to shrink its development timescales, resulting in significant cost savings. Caterpillar engineers predict that these HPC developments will reduce the number of multi-cylinder test programs by at least a factor of two, which will yield cost savings of $500,000-$750,000 per year. Lead: Sibendu Som
- CINECA (Italy). Engineers from THESAN srl, an Italian SME active in the renewable energy sector, teamed up with the Italian supercomputing center CINECA to develop simulation-driven engineering of hydroelectric turbines. The research was conducted in the framework of the PRACE SHAPE (SME HPC Adoption Programme in Europe) initiative. The engineers and researchers built an HPC-based workflow to optimize the design of a new class of hydroelectric turbines. Using CFD, THESAN could generate cost savings by reducing or eliminating the production of physical prototypes, better understanding the flaws of earlier design setups, and critically shortening the time to market. Leads: Raffaele Ponzini, Roberto Vadori, Giovanni Erbacci, Claudio Arlandini
- Pipistrel d.o.o. (Slovenia). Engineers and scientists from Pipistrel utilized HPC and technical computing resources to design and develop the Taurus G4 aircraft. The aircraft was conceived, designed, and built in a mere five months, relying heavily on CAD and rapid prototyping techniques, and especially on CFD and other computer aerodynamic tools, to evaluate flight performance and handling before committing to building the prototype. The aircraft introduced a unique twin-fuselage configuration, presenting significant challenges in designing the wings, the high-lift systems, and the overall configuration. HPC-based CFD was used as early as the conceptual design stage to optimize the shape of the engine nacelle and avoid premature flow separation, and in later design stages to optimize the high-lift slotted flap geometry and, especially, to determine the lift and stability behavior of the complete aircraft configuration in ground effect. Lead: Prof. Dr. Gregor Veble
- Culham Centre for Fusion Energy, EPCC at the University of Edinburgh, York Plasma Institute at the University of York, and Lund University. Researchers from CCFE, EPCC and the Universities of York and Lund have made substantial recent optimizations for the well-known plasma turbulence code, GS2. This included a total rewrite of the routines that calculate the response matrices required by the code’s implicit algorithm, which has significantly accelerated GS2’s initialization, typically by a factor of more than 10. Taken together, these optimizations have vastly reduced wall time, as illustrated by the impressive gain in speed by a factor of 20 that was achieved for a benchmark calculation running on 8,192 cores. The optimized code achieves scaling efficiencies close to 50% at 4,096 cores and 30% at 8,192 cores for a typical calculation, compared to efficiencies of 4% and 2% respectively prior to these optimizations. Leads: David Dickinson, Adrian Jackson, Colin M Roach and Joachim Hein
- Westinghouse Electric Company LLC, ORNL (U.S.). Researchers from Westinghouse Electric Company and the Consortium for Advanced Simulation of LWRs (CASL), a U.S. Department of Energy (DOE) Innovation Hub, performed core physics simulations of the AP1000® PWR startup core using CASL's Virtual Environment for Reactor Application (VERA). These calculations, performed on the Oak Ridge Leadership Computing Facility (OLCF) "Titan" Cray XK7 system, produced 3D high-fidelity power distributions representing the conditions expected to occur during the AP1000 startup. The set of results obtained provides insights that improve understanding of core conditions, helping to ensure safe startup of the AP1000 PWR first core. Lead: Fausto Franceschini (Westinghouse)
- Rolls-Royce, Procter and Gamble, National Center for Supercomputing Applications, Cray Inc., Livermore Software Technology Corporation (U.S.). Researchers from NCSA, Rolls-Royce, Procter and Gamble, Cray Inc., and Livermore Software Technology Corporation were able to scale the commercial explicit finite element code LS-DYNA to 15,000 cores on Blue Waters. The research has the potential to transform several industries, including aerospace and automotive engine design and consumer product development. The researchers cited that the increased scalability can result in significant cost savings. Leads: Todd Simons, Seid Koric.
8th HPC Innovation and ROI Award Winners - Announced at SC 2015
- Argonne National Laboratory (U.S.). Researchers in the Computing, Environment and Life Sciences Directorate (CELS) at Argonne National Laboratory developed ACCOLADES, a scalable workflow management tool that enables automotive design engineers to exploit task parallelism using large-scale computing (e.g., GPGPUs, multicore architectures, or the cloud). By effectively harnessing such large-scale computing capabilities, engine design engineers can concurrently simulate the drive cycles of thousands of vehicles in the wall time it normally takes to complete a single dyno test. According to experts from a leading automotive manufacturer, ACCOLADES in conjunction with dyno tests can greatly accelerate the test procedure, yielding an overall saving of $500,000 to $1 million during the R&D phase of an engine design/development program. Leads: Shashi Aithal and Stefan Wild
- Korea Institute of Science and Technology Information (Republic of Korea) The Department of Supercomputing Modeling and Simulation at the Korea Institute of Science and Technology Information (KISTI) runs a modeling and simulation program that offers Korean SMEs the opportunity to develop high-quality products using the supercomputers at KISTI. Through open calls, the project selects about 40 SME engineering projects every year and provides technical assistance, access to the supercomputers, and appropriate modeling and simulation software technology such as CFD and FEA. From 2004 to date, the project has assisted about 420 SMEs. The project recently assisted the development of a slow juicer made by NUC Corp, improving the juice extraction rate from 75% to 82.5% through numerical shape optimization of a screw using the "Tachyon II" supercomputer and fluid/structural analysis. This helped sales dramatically increase from about $1.9 million in 2010 to $96 million in 2014, and the company has hired 150 new employees to staff the additional manufacturing lines. Leads: Jaesung Kim and Sang Min Lee
- Sardina Systems (Tallinn, Estonia): This new company developed a technology that automates HPC operations in large-scale cloud data centers, such as collecting utilization metrics, driving scalable aggregation and consolidation of data, and matching resource demand to resource availability. The product, FishDirector, incorporates high performance parallel data aggregation and consolidation, coupled with high performance solvers that continuously solve for the optimal layout of virtual machines (VMs) across an entire compute facility, taking into account costs such as VM movement/migration costs and constraints on the placement of certain VMs, to drive higher overall server utilization and lower energy consumption. The firm states that it has demonstrated raising utilization from 20 percent to over 60 percent at one government facility by optimizing the placement of over 150,000 VMs. Lead: Kenneth Tan
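FishDirector's solvers are proprietary; to illustrate the consolidation problem they continuously solve, the Python sketch below packs VMs onto as few servers as possible with a first-fit-decreasing bin-packing heuristic. All names and utilization numbers are made up, and a production scheduler must also weigh the migration costs and placement constraints the entry mentions, which this sketch omits.

```python
def consolidate(vm_loads, server_capacity=1.0):
    """First-fit-decreasing heuristic for VM consolidation: place each VM
    (by descending load) on the first server with room, opening a new
    server only when none fits."""
    servers = []                              # each server is a list of (vm, load)
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for srv in servers:
            if sum(l for _, l in srv) + load <= server_capacity:
                srv.append((vm, load))        # fits on an existing server
                break
        else:
            servers.append([(vm, load)])      # open a new server
    return servers

vms = {f"vm{i}": 0.05 + 0.01 * (i % 30) for i in range(200)}  # toy utilizations
packed = consolidate(vms)
print(f"{len(vms)} VMs packed onto {len(packed)} servers")
```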
9th HPC Innovation and ROI Award Winners - Announced at ISC 2016
- The Centre for Computational Medicine, University of Toronto: From genomics to medical imaging, almost every discipline in health care is dealing with a "Data Deluge". Translating this into something that will ultimately benefit patients requires massive amounts of computation and storage in an environment that is fast, secure, and run with optimal efficiency. The University of Toronto's SickKids Centre for Computational Medicine uses a supercomputer operating at 107 trillion calculations per second to predict the minute differences between individual children to identify the most precise treatment possible for each child under their care.
- Disney Animation Studios: Frozen: Software engineers used advanced mathematics and physics, with assistance from mathematics researchers at the University of California, Los Angeles (and 4,000 computers), to design breathtaking, believable scenes. Tangled: This film employed a unique artistic style by blending features of computer-generated imagery (CGI) and traditional animation, while using non-photorealistic rendering to create the impression of a painting. Disney developed unique techniques and tools to make the computer "as pliable as the pencil" in creating what was then the most expensive animated movie of all time. Big Hero 6: Walt Disney Animation Studios created new software, called Denizen, to create over 700 distinctive characters. Another program, called Bonzai, was used to create the city's 250,000 trees, and a new rendering tool, called Hyperion, offered new illumination possibilities. Disney had to assemble a new supercomputing cluster just to handle Hyperion's intense processing demands, consisting of over 2,300 Linux workstations in four data centers, backed by a central storage system with a capacity of five petabytes.
- DreamWorks Animation: How to Train Your Dragon: Over the five years before the film's release, DreamWorks Animation overhauled its production workflow and animation software. How to Train Your Dragon 2 was the first DreamWorks Animation film to use "scalable multi-core processing", developed together with Hewlett-Packard. This "next revolution in filmmaking" enabled artists for the first time to work on rich, complex images in real time, instead of waiting eight hours to see the results the next day. Programs named Premo and Torch allowed unique subtlety, improving facial animation and enabling "the sensation of skin moving over muscle instead of masses moving together". Kung Fu Panda: The computer animation used in this film was more complex than anything DreamWorks had applied before. The company found help through the Department of Energy's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, winning a grant to refine and test its redesigned software on the leadership-class supercomputer at Oak Ridge National Laboratory. The prototype software was successfully tested and immediately put to use. Knowledge gained from the INCITE grant informed an entirely new rendering architecture and has become essential in creating all of DreamWorks' animated films. But the biggest win was the raw speed at which those responsible for the film's lighting could get back frames: iterations were sped up by an order of magnitude, a tenfold savings in total processing time, so an iteration that once took hours was accomplished in mere seconds. Monsters vs. Aliens: It took approximately 45.6 million computing hours to make this film, more than eight times as many as the original Shrek. Several hundred Hewlett-Packard workstations were used, along with a "render farm" of HP ProLiant blade servers with over 9,000 server processor cores to process the animation sequences. Animators used 120 terabytes (TB) of data to complete the film, including 6 TB for a single explosion scene. Since Monsters vs. Aliens, all feature films released by DreamWorks Animation have been produced in a stereoscopic 3D format, using Intel's InTru3D technology.
- Fortissimo/Ergolines: Ergolines, an Italian small and medium enterprise (SME), is a world leader in supplying control systems to the steel industry. The focus is mainly on continuous casting, where liquid steel becomes a solid mechanical structure. Simulation of the casting process enables the design of the control systems that allow steel-casting plants to operate at optimal levels. Without cloud-based HPC support, Ergolines would not have the wherewithal to perform the simulations that have led to better control systems with remarkable benefits to customers, workers, and the marketplace, such as lower costs, greater productivity, and improved safety. This was the company's first experience with HPC and its benefits, but early results show that total savings of up to 670,000 euros per year, per medium-size steel plant, are very achievable.
- KINETICA/United States Postal Service: How does a company that makes daily deliveries to more than 154 million addresses, using several hundred thousand vehicles and employees, improve efficiency using visualizations and analytics of real-time data? With Kinetica, an in-memory, GPU-accelerated database, which is how the United States Postal Service (USPS) is optimizing its operations. Faced with the daunting task of managing the nation's postal service, which covers a larger geographical area than any other, with increasingly limited resources, USPS needs to continuously improve safety, efficiency, and services without overspending. The complexity and dynamics of USPS logistics have reached all-time highs, while consumers have greater demands and more alternative options than ever before; they require sophisticated services like just-in-time supplies, tracking and delivery updates, and dynamic shipment routing. Improving end-to-end business process performance while concurrently reducing costs requires the ability to make fast business decisions based on live data. USPS has accomplished this with Kinetica, optimizing operational efficiency to save time and money.
- Novartis/Amazon Web Services (AWS)/Cycle Computing: Novartis ran a project that involved virtually screening ten million compounds against a common cancer target in less than a week in 2013. They calculated it would take 50,000 cores and close to a $40 million investment if they wanted to run the experiment internally. Partnering with Cycle Computing and AWS, Novartis built a platform leveraging Amazon Simple Storage Service (Amazon S3), Amazon Elastic Block Store (Amazon EBS), and four Availability Zones. The project ran across 10,600 Spot Instances (approximately 87,000 compute cores) and allowed Novartis to conduct 39 years of computational chemistry in nine hours, all for a cost of $4,232. Out of the 10 million compounds screened, three were successfully identified.
- University of Rochester Medical Center: By combining genetics, neurobiology, and supercomputing, researchers at the University of Rochester Medical Center, New York, identified a genetic mutation responsible for a potentially deadly seizure disorder found in infants and young children. Young people with the condition who survive beyond infancy often struggle for the rest of their lives with developmental disabilities, autism, and uncontrollable seizures. The researchers used a supercomputer cluster to quickly obtain a full genetic profile – more than 20,000 genes – for each study subject and to compare the results with data from other families. These findings opened up what was a "black box", enabling researchers to more fully understand the biological pathways associated with these disorders and why some patients do not respond to treatment.
10th HPC Innovation and ROI Award Winners - Announced at SC 2016
- Robert Wilhelmson and Leigh Orf, University of Wisconsin at Madison: "Unlocking the Mysteries of the Most Violent Tornadoes." Devastating, long-lived tornadoes are rare, but the death and destruction they cause is significant. This computational project reviewed nearly four years of the researchers' HPC-based work on this topic, with a strong focus on recent supercell thunderstorm simulations. The researchers presented an overview of the challenges that were overcome in order to simulate and visualize tornadoes, embedded within their parent thunderstorms, at ultra-high spatial and temporal resolution.
- David Ceperley, University of Illinois: "Hydrogen under Extreme Conditions." Hydrogen accounts for much of the visible mass in the universe, and the properties of hydrogen and helium are important for understanding large planets. However, experiments under the relevant conditions are challenging. Dr. Ceperley developed new Quantum Monte Carlo simulation methods to treat such systems and has been using them to study molecular dissociation in liquid hydrogen. After validation, the method can be used with more confidence in modeling the wide variety of observed astrophysical objects composed largely of hydrogen and helium under extreme conditions.
- Gerrit Botha, University of Cape Town: "Custom Genotyping Chip for African Populations." This computational project aims to produce genomic variant calls for the design of a cost-effective genotyping chip that will capture the genetic diversity of populations of African origin. This ongoing work will enable the identification of genetic variation specific to African populations, which will improve understanding of the links between genotype and disease in people of African origin, and thus extend the principles of personalized medicine to these underserved populations. Contributors include Adebowale Adeyemo, Zané Lombard, Nicola Mulder, Victor Jongeneel, Liudmila Mainzer, Gloria Rendon, Ayton Meintjes, and Sumir Panji. A larger team, including the Wellcome Trust Sanger Institute and the University of the Witwatersrand, contributed to the chip design, and several data providers within the H3Africa consortium provided samples.
- Ruby Mendenhall et al., University of Illinois: "Rescuing Lost History: Using Big Data to Recover Black Women's Lived Experiences." It is often said that history is written by the victors, but it is probably truer to say it is written by those who have the opportunity to write. Documents recording the lives of black women are often historically obscure, hidden away in vast library collections and unintentionally misleadingly titled or cataloged. Dr. Mendenhall is leading a collaboration of social scientists, humanities scholars, and digital researchers that is harnessing the power of high-performance computing to find and understand the historical experiences of black women by searching two massive databases of written works from the 18th through 20th centuries.
11th HPC Innovation and ROI Award Winners - Announced at ISC 2017
- ArcticDEM Project: Responding to Climate Change (National Center for Supercomputing Applications, National Geospatial-Intelligence Agency, Ohio State University, PGC, University of Colorado Boulder, University of Minnesota). This project responds to the need for high-quality elevation data in remote locations, the availability of technology to process big data, and the need for accurate measurement of topographic change. The data is used to predict sea level rise and coastal erosion and to support national security, civil engineering, aircraft safety, and many other science, governmental, and commercial applications.
- BP Seismic Imaging Research. BP's seismic imaging research has delivered major breakthroughs, critical in identifying over one billion additional barrels of reserves at its Gulf of Mexico offshore fields. With HPC, BP is able to test ideas quickly and scale to deliver results.
- Celeste Project: A New Model for Cataloging the Universe (Lawrence Berkeley National Laboratory). A Berkeley Lab-based research collaboration of astrophysicists, statisticians, and computer scientists is looking to shake things up with Celeste, a new statistical analysis model designed to enhance one of modern astronomy's most time-tested tools: sky surveys.
- Studio 100 and M.A.R.K.13: Turning the Famous Maya the Bee Character into a 3-D Film. The task required calculating each of the CGI-stereoscopic film's 150,000 images twice, once for the perspective of the left eye and once for the right. Given the detail-rich nature of the Maya the Bee film, the group averaged two hours per image on a single node, blazing fast in animation terms. Such times could not have been achieved on a standard PC.
- NASA Modular Supercomputing Facility Saves Water, Power, Money (NASA). This innovative concept, launched in January 2017, centers on an SGI/HPE supercomputer nicknamed Electra, which combines outdoor air and evaporative cooling to reduce annual water use by 99% and enable a PUE of 1.03. An imminent 28-fold system expansion is expected to save NASA about $35 million per year over alternative strategies.
- NVIDIA Tesla V100: Tackling Once-Impossible Challenges (NVIDIA). NVIDIA's Tesla V100 substantially advances the firm's chip density (21 billion transistors in an 815 mm2 chip) and is engineered to excel at AI and HPC. With 640 Tensor Cores, the V100 boasts 120 TF of performance on deep learning applications.
- DOME MicroDataCenter (IBM). This innovation from IBM's Zurich Research Lab integrates compute, storage, networking, power, and cooling in a package that is up to 20 times denser than today's typical data center technology. DOME MicroDataCenter has no moving parts, makes no noise, and is small enough for deployment on factory floors, in doctors' offices, and in other novel HPC environments.
- Bright Computing/Microsoft Azure Integration: Function-rich, Easy-to-learn (Bright/Microsoft). Smoothly integrating Bright's function-rich, easy-to-learn management software into Microsoft's important Azure public cloud service sets the stage for running a larger spectrum of HPC workloads in a public cloud environment, including support for InfiniBand, heterogeneous CPU-accelerator workloads, and more.
12th HPC Innovation and ROI Award Winners - Announced at SC 2017
- Uber Cloud (U.S.). HPC Simulations of the Impact of Drug-induced Arrhythmias in Living Hearts. Wolfgang Gentzsch. http://biomechanics.stanford.edu/paper/MEheart.pdf
- Stevens Institute of Technology/New York City Economic Development Corporation (U.S.). Model for the Effects of Sea Level Rise in the Hurricane Storm Surge. Phil Orton and Alan Blumberg. https://www.scientificamerican.com/article/hurricane-sandy-animations-could-improve-flood-forecasts/
- Columbia University (U.S.). Massive Genetic Study Shows How Humans are Evolving. Hakhamanesh Mostafavi. https://www.nature.com/news/massive-genetic-study-shows-how-humans-are-evolving-1.22565
- CINECA (Italy). SuRE_HPC: Sustainable & Renewable Energy HPC. Claudio Arlandini. https://www.zeco.it/of-background-2796/
- NERSC, Berkeley Lab (U.S.). Galactos. Prabhat. https://www.hpcwire.com/2017/10/02/galactos-project-solves-one-cosmologys-hardest-challenges/
- Pittsburgh Supercomputing Center (U.S.). Choice Works: PSC Models Show Value of Vaccine Choice in Fighting Flu. David Moses. https://www.psc.edu/choiceworks
- University of Toronto SickKids Centre for Computational Medicine (Canada). Research Centre Identifies Precision Treatments. Jorge Gonzalez-Outeirino, Carl Virtanen, and Michael Brudno. http://www.sickkids.ca/globalchildhealth/
- Technical University of Denmark (Denmark). Supercomputer Redesign of Aeroplane Wing Mirrors Bird Anatomy. Niels Aage. https://techxplore.com/news/2017-10-supercomputer-tool-optimize-airplane-wing.html
- Lawrence Livermore National Laboratory (U.S.). Researchers Eye Papermaking Improvements Through High-Performance Computing. Will Elmer. https://www.llnl.gov/news/quicker-picker-upper-lab-researchers-eye-papermaking-improvements-through-hpc
- Hewlett Packard Enterprise (U.S.). SpaceBorne Supercomputer. Mark R. Fernandez. https://news.hpe.com/hewlett-packard-enterprise-sends-supercomputer-into-space-to-accelerate-mission-to-mars/
To see current Award Winners, please click here.