Research Summary
High-performance computing (HPC) is the cornerstone of the scientific community's efforts to tackle computationally intensive problems in fields as diverse as green energy and biofuels, cancer research and drug discovery, weather forecasting and climate change, seismic processing for oil and gas, genomics and bioinformatics, astrophysics, materials science, automotive, defense, data mining and analytics, deep learning and AI, and financial computing. Current HPC systems have enabled research that was not possible a few years ago. However, the demand for more computational power is never-ending: far greater computational power on much larger-scale machines is required to unravel the remaining scientific mysteries. Research in HPC has therefore focused primarily on improving the performance of such computers. Parallel processing and efficient, scalable communication are at the heart of these powerful machines.
The Parallel Processing Research Laboratory (PPRL) carries out research in the main areas of high-performance computing (HPC) and networking, heterogeneous and accelerated cluster computing, high-performance and scalable communication runtimes, and parallel programming models. Our research is concerned, in part, with efficient and scalable communication runtimes, algorithms, and system software that boost system performance and scalability. This work supports scientific, engineering, and deep learning frameworks and applications, together with their parallel programming models (MPI, MPI+X, OpenMP, PGAS, OpenSHMEM, CUDA, OpenCL, OpenACC, etc.), on both traditional and heterogeneous (GPU, Xeon Phi) HPC systems and their high-performance interconnects (InfiniBand, Omni-Path, iWARP Ethernet, RoCE, and proprietary networks). Workload characterization of scientific, engineering, commercial, and deep learning frameworks and applications, along with benchmarking and performance evaluation of parallel programming models, HPC systems, and high-speed interconnects, is an integral part of our research.