Category:Parallel programming
Parallel programming entails programming with multiple threads or processes.
In a parallel program, the computational work to be performed by the application is divided into a number of work packages. These work packages can then be assigned to a number of processing elements (e.g. the cores of a modern multi-core processor, or a GPU) and executed independently. This should deliver a faster time to solution than utilising a single processing element only. By deploying several hundred or even thousands of processing elements, calculations which would otherwise take many years to complete can finish in months or even weeks.
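As a minimal sketch of this idea (assuming a C compiler with OpenMP support, e.g. compiled with gcc -fopenmp), the loop iterations below form the work packages, and the OpenMP runtime assigns chunks of them to the available cores:

```c
/* Minimal sketch: dividing work into packages across the cores of a
   multi-core processor. Assumes OpenMP support (gcc -fopenmp). */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 1.0 / (double)(i + 1);

    /* The iterations are the work packages; OpenMP distributes
       chunks of them to the available threads, which execute
       their chunks independently. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);
    return 0;
}
```

The partial-sum computation here is only a placeholder for real application work; the point is that each thread processes its own share of the iterations without waiting for the others.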
In a typical parallel program, however, the work packages are not fully independent: they frequently require access to data generated or modified on other processing elements, and this data must therefore be communicated. On a distributed memory system, communication is typically facilitated by some form of message passing. On a shared memory system (e.g. a multi-core system) one has the choice of using message passing or shared memory programming techniques. With shared memory programming, one spawns a number of threads which all have access to a common shared memory space; the threads communicate by writing data to and reading data from this shared space.
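A minimal message-passing sketch using MPI is shown below (assuming an MPI library is installed; compile with mpicc and launch with mpirun). Each process (rank) produces a local result, which is a placeholder for real computation here, and the non-zero ranks communicate their data to rank 0 explicitly; contrast this with the shared memory example above, where the threads exchange data implicitly through the shared address space:

```c
/* Minimal sketch of message passing with MPI: each rank computes a
   partial result and sends it to rank 0, which combines them. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each processing element works on its own package of data. */
    double partial = (double)rank + 1.0;   /* placeholder for real work */

    if (rank != 0) {
        /* Communicate the locally produced data to rank 0. */
        MPI_Send(&partial, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    } else {
        double total = partial, recv;
        for (int src = 1; src < size; src++) {
            MPI_Recv(&recv, 1, MPI_DOUBLE, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += recv;
        }
        printf("total from %d ranks: %f\n", size, total);
    }

    MPI_Finalize();
    return 0;
}
```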
Experts
Name | Centre | Field | AE FTE | General activities
---|---|---|---|---
Birgitte Brydsö (HPC2N) | HPC2N | Parallel programming, HPC | | Training, general support
Jerry Eriksson (HPC2N) | HPC2N | Parallel programming, HPC | | HPC, parallel programming
Joachim Hein (LUNARC) | LUNARC | Parallel programming, performance optimisation | 85 | Parallel programming support, performance optimisation, HPC training
Marcus Lundberg (UPPMAX) | UPPMAX | Computational science, parallel programming, performance tuning, sensitive data | 100 | I help users with productivity, program performance, and parallelisation. I also work with allocations and with sensitive data questions.
Mirko Myllykoski (HPC2N) | HPC2N | Parallel programming, GPU computing | | Parallel programming, HPC, GPU programming, advanced support
Wei Zhang (NSC) | NSC | Computational science, parallel programming, performance optimisation | | Code optimization, parallelization