In computing, shared memory is memory that can be accessed and used simultaneously by a number of different programs, allowing those programs to share data and avoid creating redundant copies of the same information. The programs may run on different processors or all on the same processor. Commonly used in concurrent and parallel computing, this approach allows multiple programs to share data without copying it from one program to another, which saves time and makes more efficient use of system resources.
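As a minimal sketch of programs sharing data without copying it, the following uses Python's multiprocessing.shared_memory module (Python 3.8+). One handle creates a named block of memory and writes into it; a second handle attaches to the same block by name and reads the very same bytes. For brevity both handles live in one process here, but a separate process could attach by the same name.

```python
from multiprocessing import shared_memory

# Create a 16-byte shared block; any other process could attach by name.
block = shared_memory.SharedMemory(create=True, size=16)
try:
    block.buf[:5] = b"hello"  # the "writer" fills the region

    # A "reader" attaches to the same block by name and sees the
    # same bytes directly; no copy of the data is made in between.
    reader = shared_memory.SharedMemory(name=block.name)
    data = bytes(reader.buf[:5])
    reader.close()
finally:
    block.close()
    block.unlink()  # release the underlying segment

print(data)
```

The key point is that the reader never receives a transmitted copy; both handles map the same underlying region of memory.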
Typically, shared memory as it relates to the actual hardware refers to blocks of the random access memory (RAM) available on a multiprocessor computer system. In this environment, several different processors can make use of the same memory without interfering with one another or reducing each other's efficiency. This means that all the processors are essentially working from the same data and code without slowing down the tasks each processor is executing.
A shared memory setup can develop a few issues, and the approach does have limits on how many processing units can practically be included in the multiprocessor system. This is because each processor typically keeps its own cache of recently used memory, and the work of keeping those caches consistent with one another, along with contention for the shared memory itself, grows as processors are added. With a smaller number of processors involved, this overhead does not impact the efficiency of the system to any great degree. To help avoid this type of problem, it is important to keep the amount of random access memory available on the system proportionally greater than the number of processors. Doing so helps prevent scaling or prioritization issues from developing, and keeps the system from performing at less than optimal efficiency, even during peak periods of usage.
Shared memory is not the only possible approach to managing tasks executed by multiple processors. A different strategy, known as distributed memory, gives each processor its own private memory, with data exchanged between processors by passing messages. As with shared memory, there is some potential for bottlenecks, depending on the number of processors involved and the nature of the tasks in execution. There is also a hybrid approach known as distributed shared memory that seeks to build on the strengths of both designs while minimizing the potential for operational problems.