Clustering is defined as the use of multiple computers, typically PCs or UNIX workstations, together with multiple storage devices and their interconnections, to form what appears to users as a single, highly available system. A workstation cluster is a set of loosely coupled processors in which every workstation acts as an autonomous, independent agent. Such a cluster performs computations faster than a conventional single-processor system.
Typically, a 4-CPU cluster is about 250-300% faster than a single-CPU PC. Clustering not only decreases computation time, but also permits the simulation of much larger computational models than before. Thanks to cluster computing, overnight analysis of a complete 3D injection-molding simulation of a very complicated model becomes possible.
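The speedup figures above can be made concrete with a little arithmetic. The sketch below (an illustration only, not a benchmark from the text) converts the quoted "250-300% faster" into a speedup multiplier and a parallel efficiency for a 4-CPU cluster:

```python
# Illustrative arithmetic for the speedup claim in the text.
# "X% faster" is read here as (1 + X/100) times the single-CPU speed;
# parallel efficiency is the achieved speedup divided by the CPU count.

def speedup_from_percent_faster(percent_faster):
    """Convert 'X% faster' into a speedup multiplier."""
    return 1.0 + percent_faster / 100.0

def parallel_efficiency(speedup, n_cpus):
    """Fraction of ideal linear speedup actually achieved."""
    return speedup / n_cpus

low = speedup_from_percent_faster(250)    # 3.5x the single-CPU speed
high = speedup_from_percent_faster(300)   # 4.0x the single-CPU speed

print(parallel_efficiency(low, 4))        # 0.875 (87.5% efficiency)
print(parallel_efficiency(high, 4))       # 1.0 (ideal linear speedup)
```

On this reading, the quoted range corresponds to 87.5-100% parallel efficiency on four CPUs, i.e. close to ideal linear scaling.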
A workstation cluster is defined in the Silicon Graphics project as follows:
"A distributed workstation cluster should be viewed as one computing resource and not as a set of individual workstations".
The details of the cluster were: 150 MHz R4400 workstations with 64 MB of memory and 1 GB of disk each, six 17" monitors, and operation over a local 10BaseT Ethernet. The computing environment included MPI (LAM and CHIMP), the Oxford BSP Library, and PVM.
The structure of the operating system makes it difficult to exploit the characteristics of current clusters, such as large primary memories, low-latency communication, and high processor speeds. Advances in processor and network technology have greatly increased the communication and computational power of local-area workstation clusters. Cluster computing can also be used for load balancing in multi-process systems.
A common use of cluster computing is to balance the traffic load on high-traffic Web sites. A web page request is sent to a manager server, which then decides which of several identical web servers should handle the request. In this way, cluster computing distributes web traffic evenly across the servers.
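The manager-server idea above can be sketched in a few lines. The text only says the manager "decides" which identical server handles each request; the round-robin selection policy below is an assumption chosen for illustration, as are the server names:

```python
# Minimal sketch of a manager server that forwards incoming requests to one
# of several identical back-end web servers. Round-robin selection is an
# assumed policy, not specified by the text.
import itertools

class Manager:
    def __init__(self, servers):
        # Cycle endlessly through the pool of identical servers.
        self._cycle = itertools.cycle(servers)

    def dispatch(self, request):
        # Pick the next server in turn and forward the request to it.
        server = next(self._cycle)
        return server, request

# Hypothetical server names, for illustration only.
manager = Manager(["web1", "web2", "web3"])
for req in ["GET /a", "GET /b", "GET /c", "GET /d"]:
    server, _ = manager.dispatch(req)
    print(server)  # web1, web2, web3, web1
```

Because every server in the pool is identical, any request can go to any server, and rotating through them keeps the per-server load roughly uniform.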