LES Code Parallelization

Revision as of 08:50, 15 March 2012

Name: LES Code Parallelization
Description: Parallelization of a Large Eddy Simulation Code
Project financing: SNIC
Is active: Yes
Start date: 2011-03-01
End date: 2012-02-29

This is an NSC-promoted project supporting code parallelization for prominent Swedish scientists. A serial Large Eddy Simulation (LES) code by Dr. L. Davidson at Chalmers University has been selected as the candidate. We provide

  • A standalone, lightweight partitioning code for 3-D structured geometries
  • Inter-processor and global communicators for halo information exchange in the baseline convection-diffusion code
  • Parallelization of the 1-D multigrid pressure Poisson solver
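None of the project's source is reproduced on this page; as a minimal sketch of what a block-wise decomposition of a structured domain involves, the following Python fragment (all names hypothetical, not taken from the actual partitioning code) splits each axis of an Nx × Ny × Nz mesh across a Cartesian processor grid, spreading any remainder over the lowest-ranked blocks:

```python
def partition_axis(n, nprocs, rank):
    """Split n cells along one axis among nprocs ranks.

    Returns the half-open local index range [lo, hi) for this rank;
    the first (n % nprocs) ranks each receive one extra cell, so block
    sizes differ by at most one.
    """
    base, rem = divmod(n, nprocs)
    lo = rank * base + min(rank, rem)
    hi = lo + base + (1 if rank < rem else 0)
    return lo, hi


def partition_3d(shape, grid, coords):
    """Local extents of one block.

    shape  = (Nx, Ny, Nz)  global mesh dimensions
    grid   = (Px, Py, Pz)  processor grid
    coords = this block's position in the processor grid
    """
    return tuple(partition_axis(n, p, c)
                 for n, p, c in zip(shape, grid, coords))
```

For example, `partition_3d((128, 64, 32), (4, 2, 1), (0, 0, 0))` yields the extents of the first block of a 4 × 2 × 1 decomposition.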

Abstract

Prof. Lars Davidson's LES (Large Eddy Simulation) fluid dynamics code has been chosen as a pilot project of NSC's code parallelisation service. We devised a standalone domain partitioning code that decomposes the computational domain across processors. We deploy MPI communication for the halo cell exchange; the exchange routines come in several forms so that they fit both the regular 3-D data structure and the 1-D converted data structure used by the multigrid scheme. Parallel performance shows linear speed-up at low processor counts, up to 20 cores. We do not observe further speed-up at higher processor counts in this strong-scaling measurement, because the original domain was deliberately kept small (around 2 million mesh points) so that performance could be compared against a single-core run. Nevertheless, we expect the code to perform well on larger core counts in a weak-scaling test. Furthermore, we emphasise that this parallelisation effort facilitates more detailed flow simulations in complex geometries, whose mesh systems must be built from many mesh points. We find that changing the time integration scheme would further improve performance by providing better convergence behaviour, which will be one of the main objectives of a next project.
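In the parallel code the halo exchange is carried out with MPI point-to-point communication between neighbouring blocks. As a serial sketch of the underlying data movement only (hypothetical names, one ghost cell per side, 1-D for brevity, not the project's actual routine), each block copies its boundary interior values into its neighbours' ghost cells, mimicking the send/receive pair an MPI exchange performs per face:

```python
def exchange_halos(blocks):
    """blocks: list of 1-D subdomains, each a Python list with one
    ghost cell at each end (index 0 and -1).

    For every pair of neighbouring blocks, copy the last/first
    *interior* value of one block into the adjacent ghost cell of the
    other, so each block sees its neighbour's boundary data.
    """
    for i in range(len(blocks) - 1):
        left, right = blocks[i], blocks[i + 1]
        left[-1] = right[1]    # neighbour's first interior -> my right ghost
        right[0] = left[-2]    # my last interior -> neighbour's left ghost
    return blocks
```

In the MPI version each of these assignments becomes one half of a send/receive pair, and the same pattern is applied face-by-face on the 3-D (and converted 1-D multigrid) data structures.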

Full details will be updated here.

Members

 Name            Centre   Role                 Field
 Soon-Heum Ko    NSC      Application expert   Computational fluid dynamics