What is a Cache Bus?

Article Details
  • Written By: Kurt Inman
  • Edited By: Lauren Fritsky
  • Last Modified Date: 13 October 2019
  • Copyright Protected:
    Conjecture Corporation

A cache bus is a dedicated high-speed bus that a computer processor uses to communicate with its cache memory. Also known as a backside bus, it operates at a much greater speed than the system bus. A cache bus directly connects a processor core to its cache; it runs independently of the processor bus, transferring data across a wider, less restricted path. A cache bus is used in most modern processors to decrease the time required to read or modify frequently accessed data.

In the 1980s, cache memory was usually located on the motherboard, not on the processor chip itself. The cache was accessed over the processor bus, just like the regular system memory. The amount of cache memory was often quite small and offered only as an optional system performance enhancement.

As processor speed and efficiency increased in the early 1990s, the processor bus became a bottleneck; fast cache memory needed a way to interact with the processor without waiting for much slower system memory and input/output operations to finish. In the mid 1990s, most new processors adopted a dual-bus architecture to solve this problem. A high-speed cache bus was created to access the cache directly. This bus is not used for anything else—all other data transfers utilize the slower processor bus, also known as the front-side bus. The processor can use both buses simultaneously, resulting in substantially better performance.


Early dual-bus designs frequently used cache memory located on the motherboard; large amounts of on-chip cache were not yet cost-effective due to production yield problems. Later designs often incorporated a mix of internal and external cache as yield improved. Modern processors usually utilize a large amount of internal cache; many include 8 megabytes (MB) or more, compared to older designs that often had only 8 kilobytes (KB). In modern designs where the entire cache is on-chip, the cache bus can be quite short with a very wide data path, 512 bits in some processors. The bus typically runs at the same speed as the processor itself. The end result is that cache content can be read or modified very quickly.
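The payoff of a short, wide, processor-speed cache bus can be put in numbers. As a rough sketch, assume the 512-bit data path mentioned above and a hypothetical 3 gigahertz core clock (the clock figure is an illustrative assumption, not from the article):

```python
# Rough peak-bandwidth estimate for an on-chip cache bus.
# Assumptions (for illustration only): a 512-bit-wide data path,
# one transfer per cycle, and a 3 GHz core clock.

bus_width_bits = 512                # data path width cited in the article
clock_hz = 3_000_000_000            # assumed 3 GHz core clock

bytes_per_transfer = bus_width_bits // 8            # 64 bytes per cycle
peak_bandwidth = bytes_per_transfer * clock_hz      # bytes per second

print(bytes_per_transfer)           # 64
print(peak_bandwidth / 1e9)         # 192.0 (GB/s)
```

At one 64-byte transfer per cycle, such a bus could in principle move cache data at roughly 192 gigabytes per second, which is why keeping the entire cache on-chip pays off so handsomely.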

Each core of a multi-core processor may have its own cache or share one large common cache. In both cases, a cache bus connects each core to the appropriate cache memory. When each processor core has its own separate cache, coherency problems can arise. For example, when one core updates data in its cache, other copies of that data in other caches become out of date or "stale." One way this type of problem can be resolved is by using a special type of cache bus, sometimes called an inter-core bus. This bus links all of the caches together so that each one can monitor what the others are doing—if one updates a piece of shared data, the others can immediately reflect the new content.
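The stale-data scenario above can be sketched in a few lines of code. This is a toy model, not a real coherency protocol (real designs use schemes such as MESI, which are far more involved); all class and method names here are hypothetical:

```python
# Toy model of two cores with private caches and a snooping
# inter-core bus. When one core writes shared data, the bus
# broadcasts the write so the other core's copy never goes stale.

class InterCoreBus:
    def __init__(self):
        self.cores = []

    def attach(self, core):
        self.cores.append(core)

    def broadcast_write(self, writer, addr, value):
        # Let every other core snoop the write.
        for core in self.cores:
            if core is not writer:
                core.snoop(addr, value)

class Core:
    def __init__(self, name, bus):
        self.name = name
        self.cache = {}          # private cache: address -> value
        self.bus = bus
        bus.attach(self)

    def read(self, addr, memory):
        if addr not in self.cache:           # miss: fetch from memory
            self.cache[addr] = memory[addr]
        return self.cache[addr]

    def write(self, addr, value, memory):
        self.cache[addr] = value
        memory[addr] = value
        self.bus.broadcast_write(self, addr, value)

    def snoop(self, addr, value):
        if addr in self.cache:               # refresh our stale copy
            self.cache[addr] = value

memory = {0x100: 1}
bus = InterCoreBus()
core_a = Core("A", bus)
core_b = Core("B", bus)

core_a.read(0x100, memory)       # both cores now cache address 0x100
core_b.read(0x100, memory)
core_a.write(0x100, 42, memory)  # A updates; the bus keeps B current
print(core_b.read(0x100, memory))  # 42, not the stale value 1
```

Without the `broadcast_write` step, core B would keep serving the old value 1 from its private cache after A's update; the snooping bus is what keeps every cached copy consistent.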
