Introduction To Parallel Programming
After achieving enormous breakthroughs in serial programming and discovering its limitations, academics and computing professionals are now focusing on parallel programming. Parallel programming is a popular choice today for solving complex problems on multi-processor architectures. If the developments of the last decade give any indication, the future belongs to parallel computing. Parallel programming is intended to take advantage of non-local resources, to save cost and time, and to overcome memory constraints.
In this section, we shall introduce parallel programming and its classifications. We shall discuss some of the high-level languages used for parallel programming. There are also certain compiler-directive based packages that can be used along with high-level languages; we shall look at these in detail as well.
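To give a flavour of what a compiler-directive based package looks like, the sketch below uses OpenMP, one widely used directive-based API for C, C++ and Fortran. This is only a minimal illustration, not one of the packages covered later in the section; the array size and the loop body are placeholders.

    #include <stdio.h>

    int main(void) {
        double a[1000];

        /* The pragma below is a compiler directive: a compiler that supports
           OpenMP splits the loop iterations among several threads, while a
           compiler that does not know the directive simply ignores it and
           runs the loop serially. */
        #pragma omp parallel for
        for (int i = 0; i < 1000; i++) {
            a[i] = i * 2.0;
        }

        printf("a[999] = %f\n", a[999]);
        return 0;
    }

Compiled with OpenMP support enabled (for example, `gcc -fopenmp`), the loop runs in parallel; compiled without it, the same source still builds and runs serially, which is precisely the appeal of the directive-based approach.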
Traditionally, software has been written for serial computation, in which programs run on a computer having a single Central Processing Unit (CPU). Here, a problem is solved by a series of instructions, executed one after the other, one at a time, by the CPU. However, many complex, interrelated events happening at the same time, such as galactic and planetary orbital events, weather and ocean patterns, and tectonic plate drift, may require serial software of very high complexity. To solve these large problems and reduce computation time, a new programming paradigm called parallel programming was introduced.
To develop a parallel program, we must first decide whether the problem has any part that can be parallelized. Some problems, such as computing the Fibonacci sequence, offer little scope for parallelization because each value depends on the ones before it. Once it has been determined that the problem has some segment that can be parallelized, we break the problem into discrete chunks of work that can be distributed to multiple tasks. This partitioning may be data-centric or function-centric: in the former case, every task performs the same work on a different subset of the data, while in the latter, every task performs a different portion of the overall work. Depending on the partitioning approach, the tasks may need to communicate with one another, so we must also design the mechanisms for process synchronization and the mode of communication.
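The following sketch illustrates a data-centric partition together with an explicit synchronization mechanism, using POSIX threads in C. The names (`partial_sum`, `NUM_THREADS`, the array size) and the choice of a mutex as the synchronization mechanism are illustrative assumptions, not a prescribed design: each thread sums its own subset of the array, and a lock protects the shared total when the partial results are combined.

    #include <stdio.h>
    #include <pthread.h>

    #define N 1000000
    #define NUM_THREADS 4

    static double data[N];
    static double total = 0.0;
    static pthread_mutex_t total_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each task sums its own subset of the array (a data-centric partition),
       then synchronizes with the others via a mutex to update the shared total. */
    static void *partial_sum(void *arg) {
        long id = (long)arg;
        long chunk = N / NUM_THREADS;
        long start = id * chunk;
        long end = (id == NUM_THREADS - 1) ? N : start + chunk;

        double local = 0.0;
        for (long i = start; i < end; i++) {
            local += data[i];
        }

        pthread_mutex_lock(&total_lock);
        total += local;
        pthread_mutex_unlock(&total_lock);
        return NULL;
    }

    int main(void) {
        pthread_t threads[NUM_THREADS];

        for (long i = 0; i < N; i++) {
            data[i] = 1.0;  /* placeholder values */
        }

        for (long t = 0; t < NUM_THREADS; t++) {
            pthread_create(&threads[t], NULL, partial_sum, (void *)t);
        }
        for (long t = 0; t < NUM_THREADS; t++) {
            pthread_join(threads[t], NULL);
        }

        printf("total = %f\n", total);
        return 0;
    }

Here the communication between tasks is reduced to a single shared variable, so one mutex suffices; a function-centric partition, or a partition across separate machines, would typically call for a different communication mechanism such as message passing.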