COLLECTIVE INTELLIGENT BRICKS
SAN JOSE, Calif.--IBM researchers are working on a new storage-system prototype that packs hard-drive modules into a dense, Rubik's Cube-like structure.

The company's Collective Intelligent Bricks project builds variously sized three-dimensional stacks out of the eight-inch modules, each filled with 12 hard drives and six network connections to keep data coursing through the collection. IBM envisions a day when hundreds of these storage "bricks" are stacked together, eventually with computing bricks in the same assemblage. By the first quarter of 2003, IBM hopes to have built a three-by-three-by-three-brick prototype with a total of 32 terabytes of storage, said Jai Menon, an IBM fellow and storage research manager at Big Blue's Almaden Research Center in California.

A three-by-three-by-three stack would have 27 bricks, and a six-by-six-by-six collection would have 216. With a collection of eight bricks on a side, there would be 512 bricks--that is, 6,144 hard drives--in a system only a little wider than an adult's arm span.

Even though bricks slide in and out of the system easily, IBM imagines people will just buy a few extra bricks and leave defunct bricks in place. Software keeps track of where information is stored and ensures that it will still be available despite any failures.

DEPT OF IT, PDCE 2009-2010

Those features are the real reason IBM is pushing the design.
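The brick counts above follow from simple cube arithmetic, and the quoted 32 terabytes for 27 bricks pins down the implied per-brick capacity. A quick sketch (the per-brick terabyte figure is derived from those numbers, not stated by IBM):

```python
# Scaling for an n x n x n cube of storage bricks.
# Assumption: the 3x3x3 prototype's 32 TB across 27 bricks fixes
# the per-brick capacity at 32/27 of a terabyte (roughly 1.2 TB).
DRIVES_PER_BRICK = 12
TB_PER_BRICK = 32 / 27

def cube_stats(n):
    """Return (bricks, drives, approx. terabytes) for an n-per-side cube."""
    bricks = n ** 3
    return bricks, bricks * DRIVES_PER_BRICK, bricks * TB_PER_BRICK

for n in (3, 6, 8):
    bricks, drives, tb = cube_stats(n)
    print(f"{n}x{n}x{n}: {bricks} bricks, {drives} drives, ~{tb:.0f} TB")
```

An eight-per-side cube gives exactly the 512 bricks and 6,144 drives mentioned above.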
Storage systems are expensive to manage, and Big Blue believes the resilient design will permit a single administrator to manage about 100 times as much storage capacity.

At present, though, these modular systems can't match the performance of the single large-system approach. The kings of the hill in high-end storage are million-dollar machines such as EMC's Symmetrix and Hitachi Data Systems' Lightning. Storage is a very competitive market today, with EMC leaning on an alliance with Dell Computer to spread its wares more widely even as rivals Hewlett-Packard and Sun Microsystems invest in new designs. With customers better able to dictate purchase terms, storage companies are searching for improvements they'll be able to charge for.

6. NEW TRICKS FOR BRICKS

IBM is working on several advances to try to make the bricks surpass current monolithic designs. First off, because the systems are connected on all six sides to their neighbors and must easily slide in and out, they can't be connected using ordinary plugs and sockets. IBM is using a technique Garner developed called "capacitive couplers," small pads that can send signals from one brick to another with which it's in contact.

A key part of the brick concept is data protection. As with current RAID (redundant array of inexpensive disks) technology, information such as a database file is stored on several disks so it won't be lost if a single disk fails. The brick idea takes that concept one step further, with the bricks automatically shuffling data from one brick to another to compensate for problems such as failed drives, bricks or networking elements.

And of course the bricks suffer an extreme version of a long-standing issue with high-end computers: overheating. In current prototypes, the bricks are impaled on vertical pipes that hold a water-cooling system.
Interior bricks, with no air circulation at all, would otherwise quickly suffer debilitating temperatures. "I believe the industry is moving toward liquid cooling," Garner said. Water cooling has its problems, though, including expense, maintenance difficulties, and a required connection to a specialized external system to cool the circulating water back down. IBM itself has switched its top-end mainframe servers from water cooling to air cooling.

More challenges arrive when building a three-dimensional lattice of networked nodes--each storing its own data along with routing information to find other data. IBM has been researching issues such as how many units can fail before the spread of data through the system is impaired.

The prototype brick system connects each brick to the other using Ethernet networks with a data transmission speed of one gigabit per second, but that version of the technology won't work for actual products because transmission delays, or latencies, are too long, Garner said. "Latencies are one of the issues with gigabit Ethernet," Garner said. Eventually, IBM expects to use either InfiniBand or 10-gigabit-per-second Ethernet. The project was formerly code-named IceCube.
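IBM's question of how many units can fail before the spread of data is impaired is, at bottom, a connectivity question on a three-dimensional mesh. A toy percolation check (a hypothetical sketch, not IBM's actual analysis) counts how much of the cube one live brick can still reach through its six face-neighbors:

```python
from collections import deque
from itertools import product

def reachable_fraction(n, failed):
    """Fraction of live bricks in an n^3 mesh reachable from one live brick,
    routing only through live neighbors on the six faces."""
    live = {p for p in product(range(n), repeat=3) if p not in failed}
    if not live:
        return 0.0
    start = next(iter(live))
    seen = {start}
    queue = deque([start])
    while queue:  # breadth-first search over the live lattice
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = (x + dx, y + dy, z + dz)
            if nb in live and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) / len(live)

# With no failures, every brick can route to every other brick.
assert reachable_fraction(3, set()) == 1.0
# Failing the nine bricks of the middle layer splits a 3x3x3 cube in two.
mid_layer = {(x, y, 1) for x in range(3) for y in range(3)}
print(reachable_fraction(3, mid_layer))  # 0.5: only one half stays reachable
```

Random failures rarely cut the mesh this cleanly, which is why a dense 3D lattice tolerates far more scattered failures than a chain or a flat grid of the same size.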
"This is almost like a revolution in packaging. We're taking advantage of the third dimension," said Almaden researcher Robert Garner. Garner and Menon showed off prototypes of the system on Friday.

IBM's brick storage system is an extension of the "blade" trend sweeping the computing industry. The vision is that customers will stack simple modular systems up piece by piece as demand grows instead of buying monolithic systems.
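The automatic data shuffling described under data protection can be illustrated with a toy replica-placement model. This is a minimal sketch assuming two copies per block and a simple least-loaded placement rule, not IBM's actual algorithm:

```python
class BrickPool:
    """Toy replica placement across bricks; re-replicates after a failure."""
    def __init__(self, brick_ids, replicas=2):
        self.replicas = replicas
        self.bricks = {b: set() for b in brick_ids}  # brick id -> block ids

    def store(self, block):
        # Put each copy on a different, lightly loaded brick.
        targets = sorted(self.bricks, key=lambda b: len(self.bricks[b]))
        for b in targets[:self.replicas]:
            self.bricks[b].add(block)

    def holders(self, block):
        return {b for b, blocks in self.bricks.items() if block in blocks}

    def fail(self, brick):
        # Copy every block the dead brick held onto a surviving brick,
        # restoring the replica count -- the "shuffling" described above.
        for block in self.bricks.pop(brick):
            survivors = [b for b in self.bricks if block not in self.bricks[b]]
            if survivors:
                best = min(survivors, key=lambda b: len(self.bricks[b]))
                self.bricks[best].add(block)

pool = BrickPool(range(27), replicas=2)
for blk in range(100):
    pool.store(blk)
pool.fail(0)  # one brick dies; its blocks are re-replicated elsewhere
assert all(len(pool.holders(blk)) == 2 for blk in range(100))
```

The point of the model is that no central rebuild step is needed: losing a brick only triggers copies of the blocks it held, which is what lets administrators leave defunct bricks in place.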
HP ROBOT HELPS DATA KEEP ITS COOL

The tech giant launched on Tuesday a service that analyses the air flow in a datacentre -- a facility filled with server, storage and networking systems -- to find the best arrangement of computing and air-conditioning gear inside. The service, which uses a complex modelling technology from HP Labs, can cut the energy spent cooling datacentres by as much as 25 percent, according to HP. Keeping datacentres cool is important because overheated computers can lose data or crash.

New technologies, such as server consolidation, are leading banks and other companies to centralise computing operations and to use blade servers and other systems that cram in hot processors ever more densely. Until now, a typical response from datacentre operators to technology changes such as these has been brute force -- bringing in bigger air conditioners, for example. "Most information technology people are not trained in thermodynamics," said Illuminata analyst David Freund.

Current datacentres -- specialised chambers dominated by hulking computer cabinets, uncomfortably chilly air and the roar of hundreds of computer fans -- typically have raised floors, under which cool air flows and power lines and networking cables are laid. Cool air is directed upward to computers, though some of it escapes through holes for cables. Intake ducts at the top of the room draw off the heated air and send it to a cooling system.

The first version of HP's service is a one-time analysis of a company's datacentre to give a prescription for the best way to arrange the computing equipment, the flow of cool air into the facility and the flow of hot air out, said Brian Donabedian, an HP site planner and environmental specialist. HP's service uses a technique called computational fluid dynamics to simulate how air flows through a complicated arrangement of ducts, computers and deflectors. The company began showing off the technology behind the analysis service in 2001.

Within two years or so, HP will begin offering a more sophisticated second-generation cooling service tied to its Utility Data Center product, said Donabedian. UDC distributes computing jobs across groups of servers and storage systems and can respond to changing workload demands automatically. In this second, "dynamic smart cooling," phase, the UDC control software will be able to move computing work away from hotter areas of a datacentre or adjust air conditioning systems to deal with hot spots, Donabedian said. It will combine stationary temperature sensors with others mounted on an HP robot patrolling the datacentre. With the cooling analysis service, HP hopes to boost its attempt to increase revenue from its profitable services group.
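The "dynamic smart cooling" idea of shifting computing work away from hot spots can be sketched as a greedy rebalancer. Everything below (the zone names, the one-degree-per-work-unit heat model, the 30-degree threshold) is a made-up illustration, not HP's UDC control logic:

```python
def rebalance(zone_temp, zone_load, threshold=30.0):
    """Greedy sketch of dynamic smart cooling: shift work units out of
    zones above a temperature threshold into the coolest zones."""
    moves = []
    # Visit the hottest zones first (snapshot of temperatures at call time).
    for zone, _ in sorted(zone_temp.items(), key=lambda kv: -kv[1]):
        while zone_temp[zone] > threshold and zone_load[zone] > 0:
            coolest = min(zone_temp, key=zone_temp.get)
            if coolest == zone:
                break  # nowhere cooler to move work to
            zone_load[zone] -= 1
            zone_load[coolest] += 1
            # Crude model: each work unit adds ~1 degree of local heat.
            zone_temp[zone] -= 1.0
            zone_temp[coolest] += 1.0
            moves.append((zone, coolest))
    return moves

temps = {"A": 35.0, "B": 24.0, "C": 22.0}
loads = {"A": 10, "B": 4, "C": 2}
rebalance(temps, loads)
print(temps)  # zone A cooled down to the 30-degree threshold
```

In HP's envisioned system the temperature inputs would come from the stationary and robot-mounted sensors, and the "moves" would be UDC reassigning jobs or retargeting the air conditioning rather than this toy heat model.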
