How Does a Multiple Client Server Work?

Article Details
  • Written By: Ray Hawk
  • Edited By: E. E. Hubbard
  • Last Modified Date: 11 October 2019
  • Copyright Protected:
    Conjecture Corporation

A multiple client server is a type of software architecture for computer networks in which clients, which can be basic workstations or fully functional personal computers, request information from a server computer. There are often software interfaces between the client and server as well, known as middleware, along with network routing and protocol software and security software such as firewalls. Depending on the size of a network, the servers and clients can either interact directly or through a three-tier architecture that provides additional processing between the two types of machines.

The most common type of multiple client server system for small businesses and homes is the single server with multiple clients. One server is able to handle dozens of information requests from client computers simultaneously. Contrary to popular belief, the server computer itself does not have to be the fastest, most powerful machine in the network to perform this role efficiently.
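The idea of one server answering many client requests at once can be sketched in Python by multiplexing sockets in a single event loop. Everything here is an illustrative assumption, not from the article: the echo-style replies, the message contents, and the use of an OS-chosen local port.

```python
import selectors
import socket
import threading

sel = selectors.DefaultSelector()

def accept(server_sock):
    # a new client has connected; watch its socket for requests
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, serve)

def serve(conn):
    # answer this client's request without blocking the other clients
    data = conn.recv(1024)
    if data:
        conn.sendall(b"echo: " + data)
    else:
        sel.unregister(conn)
        conn.close()

def event_loop():
    # one loop dispatches events for the listening socket and every client
    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
port = srv.getsockname()[1]
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ, accept)
threading.Thread(target=event_loop, daemon=True).start()

# two clients send requests to the same server at the same time
clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(2)]
for i, c in enumerate(clients):
    c.sendall(b"request %d" % i)
replies = [c.recv(1024) for c in clients]
for c in clients:
    c.close()
print(replies)
```

Note that the server here needs no special hardware: a single ordinary process keeps dozens of connections open by never blocking on any one of them.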


One primary distinction in multiple client server networks is that they can be local area networks (LANs), self-contained within one building and not necessarily connected to the Internet, or wide area networks (WANs). Wide area networks are multiple client server systems distributed across multiple geographic locations and almost exclusively tied into the Internet, though some large corporations have WAN systems that are independent of the Internet. The growth of the Internet, the development of the world wide web on top of it, and the increasing diversity of networking software and hardware choices have resulted in the term WAN taking on a broader meaning.

In the past, a WAN was one or more physical servers providing network support to a multitude of clients. The term is now more loosely defined, and a WAN can be built largely in software, as in cloud computing or systems of web browsers and web servers. More traditional WANs use file transfer protocol (FTP) and domain name system (DNS) architectures. File transfer and processing rates in WANs have also been improved through the use of the multi-threaded client server, a network built on central processing units (CPUs) that can seemingly execute many different program instructions simultaneously.
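The multi-threaded approach described above can be sketched with Python's standard `socketserver` module, which starts a new thread for each connecting client. The uppercasing handler, the message contents, and the thread-name bookkeeping are all illustrative assumptions.

```python
import socket
import socketserver
import threading

seen_threads = set()

class Handler(socketserver.BaseRequestHandler):
    def handle(self):
        # each client connection is served by its own thread
        seen_threads.add(threading.current_thread().name)
        data = self.request.recv(1024)
        self.request.sendall(data.upper())

# port 0: let the OS pick a free port
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

replies = []
for msg in (b"alpha", b"beta"):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(msg)
        replies.append(c.recv(1024))
server.shutdown()
print(replies, seen_threads)
```

Because `ThreadingTCPServer` dispatches every connection to a fresh thread, a slow request from one client does not stall the others, which is the throughput gain the paragraph describes.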

Web servers are a type of multiple client server built on virtual hosting. These networks are constructed entirely in software and don't require specific physical locations for the client or server computers. A web server can behave like a dedicated physical server while actually running across multiple machines, or several web servers can run at once on one section of a single server machine. The client in this case is a web browser that accesses the server and can run on any of a variety of computers not tied to one specific location.
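A minimal version of this arrangement can be sketched with Python's standard library: one process plays the web server, and an HTTP fetch stands in for the browser. The page text and the OS-chosen local port are hypothetical details for the sketch.

```python
import http.server
import threading
import urllib.request

class Page(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # serve the same small page to any browser that asks
        body = b"<html><body>hello from the web server</body></html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the sketch

# port 0: let the OS pick a free port
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Page)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# the "client" here is any HTTP fetch, standing in for a web browser
with urllib.request.urlopen("http://127.0.0.1:%d/" % port) as resp:
    page = resp.read()
server.shutdown()
print(page)
```

Nothing in the exchange depends on where either end physically sits; any machine that can reach the server's address could play the client role.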

Cloud computing has similarities to the workstation concept of earlier multiple client server designs. In both models, the client is a machine with little in the way of local resources to draw upon. Almost all of a cloud computing network's software is installed on the server itself, such as word processors, games, music and video applications, and more, and the client gains access to this software on the server in order to run it. The workstation, likewise, is a monitor and network connection with minimal resources, such as very little memory or processing capability, and, without access to the server, would not be a functional computer.

Web server architectures, cloud computing, and stripped-down workstation designs are all attempts to reduce the cost of a multiple client server network. Rather than distributing hardware resources and software to dozens or hundreds of client machines, the idea is to host them more economically on one central, powerful server. The vulnerability of this approach is that local copies of most files do not exist, and, if the network fails, many people could lose access to their work.

Both FTP and DNS systems are fundamental multi-client communication designs. FTP is a fast, reliable method of transmitting text and other files, usually in binary mode, across a network. It was an original transfer protocol from the era when the Internet was largely text-based, before the graphics-rich world wide web came into existence, and a great deal of file transfer still takes place over FTP, largely unseen by users of the web. DNS systems also arose early on, especially as the world wide web grew, as a way of replacing numeric Internet Protocol (IP) addresses with familiar, human-readable names for the servers that clients access.
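The name-to-address translation that DNS performs can be seen with a standard library lookup. The example below resolves `localhost`, chosen because it resolves locally without contacting a remote DNS server, which keeps the sketch self-contained; a public hostname would go through the same call but reach out over the network.

```python
import socket

# ask the resolver to turn a familiar name into numeric IP addresses,
# the same translation a DNS lookup performs for any hostname
infos = socket.getaddrinfo(
    "localhost", 80,
    family=socket.AF_INET,        # restrict to IPv4 for a stable result
    type=socket.SOCK_STREAM,
)
addresses = sorted({info[4][0] for info in infos})
print(addresses)
```

A client never needs to know the number itself: it hands the name to the resolver and connects to whatever address comes back.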

