Parallelization of a materials science code

Name:               Parallelization of a materials science code
Description:        Parallelization request for a materials science code
Project financing:  SNIC
Is active:          yes
Start date:         2013-05-01
End date:

This project is the result of a request from a research group at LiU. The code is embarrassingly parallel in nature. The parallelization task is to send and receive an array of a user-defined, complicated data type.

Requestors and collaborators:


Description

We have implemented a simple MPI send/receive interface for derived data types. The send/receive routines "know" the data structure: internally they pack and unpack the structure data into a character buffer. We have taken a data structure similar to the one described by the user and implemented the interface as a template; for the actual data structure the user only needs to modify the relevant places in the pack and unpack routines. All data-structure-dependent parts are placed in a separate module. In the main code the user simply calls MPI_Send_point / MPI_Recv_point to send / receive the derived data type.
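
As an illustration of the pack-and-send approach, below is a minimal sketch in C with MPI. The point_t structure, its fields, and the buffer-sizing details are assumptions made for this example; in the actual library the pack/unpack bodies follow the user's own data structure and are kept in a separate module.

    #include <mpi.h>
    #include <stdlib.h>

    /* Hypothetical derived type standing in for the user's data structure. */
    typedef struct {
        int     id;
        double  position[3];
        int     nvals;       /* length of the variable-size payload below */
        double *vals;
    } point_t;

    /* Pack one point_t into a character buffer and send it as MPI_PACKED. */
    void MPI_Send_point(point_t *p, int dest, int tag, MPI_Comm comm)
    {
        int sz, bufsize = 0, pos = 0;
        MPI_Pack_size(1, MPI_INT, comm, &sz);            bufsize += 2 * sz;  /* id, nvals */
        MPI_Pack_size(3, MPI_DOUBLE, comm, &sz);         bufsize += sz;      /* position  */
        MPI_Pack_size(p->nvals, MPI_DOUBLE, comm, &sz);  bufsize += sz;      /* vals      */

        char *buf = malloc(bufsize);
        MPI_Pack(&p->id,      1,        MPI_INT,    buf, bufsize, &pos, comm);
        MPI_Pack(p->position, 3,        MPI_DOUBLE, buf, bufsize, &pos, comm);
        MPI_Pack(&p->nvals,   1,        MPI_INT,    buf, bufsize, &pos, comm);
        MPI_Pack(p->vals,     p->nvals, MPI_DOUBLE, buf, bufsize, &pos, comm);

        MPI_Send(buf, pos, MPI_PACKED, dest, tag, comm);
        free(buf);
    }

    /* Receive the packed message and unpack it into a point_t. */
    void MPI_Recv_point(point_t *p, int source, int tag, MPI_Comm comm)
    {
        MPI_Status status;
        int bufsize, pos = 0;

        MPI_Probe(source, tag, comm, &status);           /* find the incoming message size */
        MPI_Get_count(&status, MPI_PACKED, &bufsize);

        char *buf = malloc(bufsize);
        MPI_Recv(buf, bufsize, MPI_PACKED, source, tag, comm, &status);

        MPI_Unpack(buf, bufsize, &pos, &p->id,      1, MPI_INT,    comm);
        MPI_Unpack(buf, bufsize, &pos, p->position, 3, MPI_DOUBLE, comm);
        MPI_Unpack(buf, bufsize, &pos, &p->nvals,   1, MPI_INT,    comm);
        p->vals = malloc(p->nvals * sizeof(double));
        MPI_Unpack(buf, bufsize, &pos, p->vals, p->nvals, MPI_DOUBLE, comm);
        free(buf);
    }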

The routines can be used as a stand-alone library to be linked at run time. The interface can be made more user friendly by "overloading" the MPI_Send_point / MPI_Recv_point routines as MPI_Send / MPI_Recv, respectively.
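
Assuming the sketch above, a call from the main code could then look as follows; the ranks, tag, and sample values are chosen purely for illustration.

    /* Illustrative use of the wrappers sketched above:
       rank 0 sends one point_t, rank 1 receives it.     */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            double v[2] = { 1.0, 2.0 };
            point_t pt = { .id = 1, .position = { 0.1, 0.2, 0.3 },
                           .nvals = 2, .vals = v };
            MPI_Send_point(&pt, 1, 0, MPI_COMM_WORLD);    /* dest = 1, tag = 0   */
        } else if (rank == 1) {
            point_t pt;
            MPI_Recv_point(&pt, 0, 0, MPI_COMM_WORLD);    /* source = 0, tag = 0 */
            free(pt.vals);                                /* allocated by the wrapper */
        }

        MPI_Finalize();
        return 0;
    }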


The code


Members

Name                 Centre   Role                 Field
Chandan Basu (NSC)   NSC      Application expert   Computational science
Johan Raber (NSC)    NSC      Application expert   Computational chemistry