Parallel computing occurs when a computer carries out more than one task simultaneously. This technique can allow a computer to work faster than it could by doing one thing at a time, just as a person with two free hands can carry more than a person with one. Traditionally, computer programs are written to be carried out one step at a time, in ways that do not allow parallel computing. For a program to be computed in parallel, it must be designed so it can be broken into smaller tasks that can be carried out independently. As an example of how parallel computing relates to the average person, many personal computers have multiple processing cores, which let them work on several tasks at the same time instead of one after another, as a single-processor computer would.
Writing a computer program so it can be broken into separate tasks that execute independently is often harder than writing one that executes linearly. In a program that executes sequentially, the first task usually produces information that is integral to the second task in the sequence. Without the information that results from carrying out the first task, the second task could be pointless to carry out. When a program is split into separate parts, timing things so that each part has the information it needs when it needs it, and is not making decisions based on outdated information, is a unique challenge. These timing problems, often called race conditions, are among the most common types of bugs that parallel programs face.
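The timing challenge described above can be sketched in a short program. This is an illustrative example, not taken from the original text: two tasks update one shared counter at the same time, and a lock is used so that neither task acts on an outdated value mid-update. The names (`worker`, `N_STEPS`) are assumptions chosen for the sketch.

```python
import threading

# Hypothetical shared value that two parallel tasks both update.
N_STEPS = 100_000
counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(N_STEPS):
        with lock:          # without this lock, the read-modify-write
            counter += 1    # below could interleave and lose updates

# Run two tasks at once, then wait for both to finish.
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock; without it, the result can vary
```

The lock forces each increment to complete before the other task reads the counter, which is one common way to avoid making a decision based on outdated information.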
The main advantage of parallel computing is that programs can execute faster. Parallel computing is only an efficient technique, however, if the hardware executing the program has a suitable architecture, such as more than one central processing unit (CPU). As an analogy, suppose each CPU is a person who can carry one box at a time. A program executing sequentially is a single person, so only one box moves at a time. Executing in parallel, the same program might split into two separate tasks and, if there are two CPUs to take advantage of, carry both boxes at the same time, completing the job faster.
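The box-carrying analogy can be sketched as code. This is a minimal illustration, assuming a made-up job of summing a list of numbers: the work is split into two independent halves, each half is handled by its own worker, and the partial answers are combined at the end. The names (`carry`, `boxes`) are illustrative, not from the text.

```python
from concurrent.futures import ThreadPoolExecutor

# The "boxes": a job that can be split into two independent halves.
boxes = list(range(1, 101))
halves = [boxes[:50], boxes[50:]]

def carry(half):
    # Each worker handles its own share, needing nothing from the other.
    return sum(half)

# Two workers, analogous to two CPUs carrying boxes at the same time.
with ThreadPoolExecutor(max_workers=2) as pool:
    partial_totals = list(pool.map(carry, halves))

total = sum(partial_totals)
print(total)  # 5050, the same answer a sequential sum would give
```

One caveat: in standard Python, threads do not actually speed up CPU-bound arithmetic like this because of the global interpreter lock; real speedup on multiple CPUs typically requires processes (for example `ProcessPoolExecutor`) instead. The structure of the split-and-combine pattern is the same either way.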