Distributed memory programming
Distributed memory programming is a form of parallel programming. When a distributed memory program executes, a number of processes, commonly referred to as tasks, run simultaneously. Each task has its own private memory space, which the other tasks normally cannot access directly. The programmer has to distribute the data of the entire calculation across these private memory spaces, hence the name. Many distributed memory programs require frequent and rapid data exchanges between the tasks; explicit message passing is typically used to facilitate these exchanges.
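A minimal sketch of this model, assuming MPI as the message passing library (the article does not prescribe a particular one, but MPI is the de facto standard for this style of programming): two tasks each hold a private array, and task 0 passes one value to task 1 with an explicit send/receive pair.

```c
/* Sketch: each task owns a private array and exchanges a boundary
 * value with a neighbour via explicit messages. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, ntasks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this task's id  */
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks); /* number of tasks */

    /* private memory: every task has its own copy of this array */
    double local[4] = { rank, rank, rank, rank };

    if (rank == 0 && ntasks > 1) {
        /* task 0 sends its last element to task 1 */
        MPI_Send(&local[3], 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double ghost;
        /* task 1 receives the value into its own private memory */
        MPI_Recv(&ghost, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("task 1 received %.1f from task 0\n", ghost);
    }

    MPI_Finalize();
    return 0;
}
```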
Distributed memory programs using message passing run on a wide variety of compute platforms, ranging from multi-core desktop systems to the largest supercomputers in the world. Distributed memory programming is typically the programming model of choice for a clustered HPC system, when a program's requirements for computational speed or main memory exceed the capabilities of a single node. The HPC clusters available at the SNIC centres provide high-performance communication networks to minimise message passing overheads.
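Building and launching such a program (the sketch above, saved as the hypothetical file exchange.c) might look as follows, assuming an MPI implementation that provides the common mpicc compiler wrapper and mpiexec launcher; the exact commands vary between implementations, and HPC clusters typically start the tasks through a batch system instead:

```
mpicc -O2 exchange.c -o exchange    # compile and link against the MPI library
mpiexec -n 4 ./exchange             # launch 4 tasks, possibly spread across nodes
```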
Distributed memory applications are typically harder to develop than applications using other parallel programming models, such as shared memory programming. Their key advantage is that, when well designed, they can be deployed on a large number of processing elements. Because the tasks share no memory, they by design avoid cache conflicts between cores (such as false sharing), and for that reason often have a performance advantage over a shared memory implementation of the same functionality.
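To illustrate the scaling argument, the following sketch (again assuming MPI) computes a global sum: each task accumulates a partial sum entirely in its private memory, and a single collective operation combines the results, so no two tasks ever write to shared data.

```c
/* Sketch: partial sums live in private memory; one collective
 * operation combines them across all tasks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, ntasks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* each task sums its own 100-integer share, entirely in
     * private memory: task r handles r*100+1 .. (r+1)*100 */
    double partial = 0.0;
    for (int i = rank * 100 + 1; i <= (rank + 1) * 100; i++)
        partial += i;

    /* combine the private partial sums; no memory is ever shared */
    double total = 0.0;
    MPI_Allreduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum over %d tasks: %.0f\n", ntasks, total);

    MPI_Finalize();
    return 0;
}
```

The same source runs unchanged on 2 tasks or 2000; only the launcher's task count changes, which is what makes this model attractive for large processing-element counts.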