ACENET: Parallel computing with OpenMP
Parallel computing is the business of breaking a large problem into tens, hundreds, or even thousands of smaller problems which can then be solved at the same time using a cluster of computers or a supercomputer. It can reduce processing time to a fraction of what it would be on a desktop or workstation, or enable you to tackle larger, more complex problems. It is widely used in big data mining, AI, time-critical simulations, and advanced graphics such as augmented or virtual reality. It is applied in fields as diverse as genetics, biotech, GIS, computational fluid dynamics, medical imaging, drug discovery, and agriculture.
In this session, we will introduce parallel programming using OpenMP, the most common way to program shared-memory multicore systems. It has been extensively adopted in the supercomputing world and is gaining attention in general-purpose computing as well. OpenMP facilitates parallel programming by providing cross-platform and cross-compiler support. Although OpenMP does not parallelize code automatically, existing code can be parallelized without significant rewriting: by adding compiler directives, C, C++, and Fortran programmers retain full control over how their code is parallelized. In addition to CPU parallel programming, modern OpenMP has GPU offloading capabilities. Compared to native GPU languages such as CUDA, OpenMP makes GPU programming easier and more performance-portable. Furthermore, OpenMP supports heterogeneous computations that use CPU and GPU resources simultaneously to improve application performance.
Prerequisites: Before you take this training, you should...
- have taken ACENET's training "Overview of Parallel Computing"
- have familiarity with either C, C++, or Fortran
This session will take place on:
- Tuesday, June 11, 2:00--4:00 pm
- Thursday, June 13, 2:00--4:00 pm