<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://docs.snic.se/w/index.php?action=history&amp;feed=atom&amp;title=Shared_memory_programming</id>
	<title>Shared memory programming - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://docs.snic.se/w/index.php?action=history&amp;feed=atom&amp;title=Shared_memory_programming"/>
	<link rel="alternate" type="text/html" href="http://docs.snic.se/w/index.php?title=Shared_memory_programming&amp;action=history"/>
	<updated>2026-05-04T05:01:42Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.31.10</generator>
	<entry>
		<id>http://docs.snic.se/w/index.php?title=Shared_memory_programming&amp;diff=3038&amp;oldid=prev</id>
		<title>Joachim Hein (LUNARC): Created page with &quot;Category:Parallel programming Shared memory programming is a form of parallel programming.  A shared memory program typically achieves its ...&quot;</title>
		<link rel="alternate" type="text/html" href="http://docs.snic.se/w/index.php?title=Shared_memory_programming&amp;diff=3038&amp;oldid=prev"/>
		<updated>2011-10-25T14:43:47Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;&lt;a href=&quot;/wiki/Category:Parallel_programming&quot; title=&quot;Category:Parallel programming&quot;&gt;Category:Parallel programming&lt;/a&gt; Shared memory programming is a form of &lt;a href=&quot;/wiki/Category:Parallel_programming&quot; title=&quot;Category:Parallel programming&quot;&gt;parallel programming&lt;/a&gt;.  A shared memory program typically achieves its ...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;[[Category:Parallel programming]]&lt;br /&gt;
Shared memory programming is a form of [[:Category:Parallel programming|parallel programming]].  A shared memory program typically achieves its parallelism by spawning threads.  The threads can be distributed onto more than one processing element (e.g. a core of a multi-core processor) to gain a parallel speed-up.  As the name suggests, all threads have access to a large shared memory area and can read and/or write to it.  When accessing the shared memory from different threads, care needs to be taken that these accesses happen in the right order to avoid data races.&lt;br /&gt;
&lt;br /&gt;
To write shared memory programs for a multi-core system, popular choices are [[pthreads]] for parallelising a C or C++ program, [[OpenMP]] for Fortran, C or C++ programs, or a threaded language such as [[Java]].  Many shared memory programs for a [[GPU]] are written in [[OpenCL]] or [[Cuda]].&lt;br /&gt;
&lt;br /&gt;
As implied above, executing a shared memory program requires specialist hardware.  Besides being capable of progressing more than one thread simultaneously, it needs to provide efficient access to the shared memory space from all these threads.  Fortunately, such hardware is no longer expensive these days.  A simple multi-core system or a single GPU can be used if the requirements on parallel speed-up are modest.&lt;/div&gt;</summary>
		<author><name>Joachim Hein (LUNARC)</name></author>
		
	</entry>
</feed>