Explain the History of Parallel Computers
Research into, and implementations of, parallelism began as far back as the 1950s at IBM Corporation. The IBM STRETCH computer, also known as the IBM 7030, was built in 1959. Its design introduced new concepts such as overlapping I/O with processing and instruction look-ahead.

A serious effort toward designing parallel computers began with the development of ILLIAC IV in 1964 at the University of Illinois. It had a single control unit but multiple processing elements; at any one time, a single operation was executed on different data items by the different processing elements. The idea of pipelining was introduced in the CDC 7600 in 1969, which used a pipelined arithmetic unit.

During 1970-85, research in this field focused on the development of vector supercomputers. In 1976, the CRAY-1 was developed by Seymour Cray. The CRAY-1 was a pioneering effort in the development of vector registers: it used main memory only for load and store operations, did not use virtual memory, and had an optimized pipelined arithmetic unit with a clock period of 12.5 ns. The CRAY-1 achieved a speed of about 12.5 Mflops on 100 × 100 linear equation solutions. The next generation of Cray machines, the Cray X-MP, was developed during 1982-84; it combined multiple vector processors with a shared memory.

Apart from Cray, another major American producer of parallel computers, Control Data Corporation (CDC), manufactured supercomputers such as the CDC 7600. Its vector supercomputer, the Cyber 205, had a memory-to-memory architecture: input vector operands were streamed from main memory to the vector arithmetic unit, and results were stored back in main memory. The benefit of this architecture was that it did not limit the size of vector operands.
The drawback was that it required a very high-speed memory so that there would be no speed mismatch between the vector arithmetic units and main memory, and producing such high-speed memory is very expensive. The clock period of the Cyber 205 was 20 ns.
In the 1980s, Japan also began producing high-performance vector supercomputers, with NEC, Fujitsu and Hitachi as the main manufacturers. Hitachi built the S-810/210 and S-810/10 vector supercomputers in 1982; NEC built the SX-1 and Fujitsu the VP-200. All these machines used semiconductor technologies to attain speeds on a par with the Cray and Cyber machines; however, their operating systems and vectorizers were inferior to those of the American companies.