In collective message passing, all the processes of a set participate in the communication. MPI provides a number of functions for collective message passing; some of them are discussed here.
MPI_Bcast(msgaddr, count, datatype, rank, comm):
This function is used by the process with rank rank in the communicator comm to broadcast a message to all the members (including itself) of the group.
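As an illustration, the following minimal C sketch (an assumption for this text, not part of the original) broadcasts one integer from process 0 to every process in MPI_COMM_WORLD; in the standard C binding the fourth argument is the rank of the broadcasting (root) process. It would typically be compiled with mpicc and launched with mpirun or mpiexec.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            value = 42;              /* only the root holds the value initially */
        /* every process calls MPI_Bcast; afterwards all processes have value == 42 */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("Process %d received value %d\n", rank, value);
        MPI_Finalize();
        return 0;
    }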
MPI_Allreduce (Sendaddr, Receiveaddr, count, datatype, op, comm):
This function combines the partial values stored in Sendaddr of every process using the operator op and stores the final result in Receiveaddr of every process in the communicator comm.
MPI_Scatter(Sendaddr, Scount, Sdatatype, Receiveaddr, Rcount, Rdatatype, Rank, Comm):
Using this function, the process with rank Rank in communicator Comm sends a distinct (personalized) message to every process (including itself); the messages are taken from the root's send buffer in the rank order of the receiving processes, and each one is stored in the receive buffer of the corresponding process. The first three parameters describe the buffer of the sending process and the next three describe the buffer of the receiving process.
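A minimal sketch of this (assuming, for illustration, exactly 4 processes and one integer per process) in which process 0 distributes one element of its send buffer to each process:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, recvval;
        int sendbuf[4] = {10, 20, 30, 40};   /* meaningful only on the root */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* process 0 sends sendbuf[i] to the process with rank i */
        MPI_Scatter(sendbuf, 1, MPI_INT, &recvval, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("Process %d received %d\n", rank, recvval);
        MPI_Finalize();
        return 0;
    }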
MPI_Gather (Sendaddr, Scount, Sdatatype, Receiveaddr, Rcount, Rdatatype, Rank, Comm):
Using this function, the process with rank Rank in communicator Comm receives a personalized message from every process (including itself); the messages are stored, ordered by the ranks of the sending processes, in the receive buffer of that process. The first three parameters describe the buffer of the sending process and the next three describe the buffer of the receiving process.
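Conversely, a minimal sketch (an illustrative assumption, with a fixed-size buffer for up to 64 processes) in which every process contributes its rank and process 0 collects the values in rank order:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, i;
        int recvbuf[64];                     /* significant only on the root */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        /* each process sends its rank; the root stores them ordered by sender rank */
        MPI_Gather(&rank, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (rank == 0)
            for (i = 0; i < size; i++)
                printf("recvbuf[%d] = %d\n", i, recvbuf[i]);
        MPI_Finalize();
        return 0;
    }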
MPI_Alltoall (Sendaddr, Scount, Sdatatype, Receiveaddr, Rcount, Rdatatype, Comm):
Using this function, every process in communicator Comm sends a personalized message to every other process (and to itself).
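A minimal sketch (assuming, for illustration, at most 64 processes and one integer per destination) in which each process prepares a different value for every process:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, i;
        int sendbuf[64], recvbuf[64];        /* assumes at most 64 processes */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        for (i = 0; i < size; i++)
            sendbuf[i] = 100 * rank + i;     /* personalized message for process i */
        /* after the call, recvbuf[i] holds the message that process i sent to us */
        MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);
        for (i = 0; i < size; i++)
            printf("Process %d got %d from process %d\n", rank, recvbuf[i], i);
        MPI_Finalize();
        return 0;
    }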
MPI_Reduce (Sendaddr, Receiveaddr, count, datatype, op, rank, comm):
This function combines the partial values stored in Sendaddr of every process into a single final result and stores it in Receiveaddr of the process with rank rank. op specifies the reduction operator (for example, sum or maximum).
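For example, a minimal sketch in which every process contributes its rank and process 0 receives the sum, using the predefined operator MPI_SUM:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, sum;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* combine the rank of every process with MPI_SUM; only rank 0 gets the result */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("Sum of all ranks = %d\n", sum);
        MPI_Finalize();
        return 0;
    }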
MPI_Scan (Sendaddr, Receiveaddr, count, datatype, op, comm):
This function combines the partial values into p final results, which are stored in the Receiveaddr of the p processes in the communicator comm: process i receives the reduction (under op) of the values contributed by processes 0 through i, i.e. a prefix reduction.
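A minimal sketch of an inclusive prefix sum, in which process i ends up with the sum of the ranks 0 through i:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, prefix;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* process i receives rank_0 + rank_1 + ... + rank_i */
        MPI_Scan(&rank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        printf("Process %d: prefix sum = %d\n", rank, prefix);
        MPI_Finalize();
        return 0;
    }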
MPI_Barrier(comm):
This function synchronises all the processes in the communicator comm: each process blocks at the barrier until every process in comm has called it.
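A minimal sketch in which no process executes the second printf until every process has reached the barrier (the printed lines may still interleave, since output ordering is not guaranteed by MPI):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("Process %d: before the barrier\n", rank);
        /* every process blocks here until all processes in MPI_COMM_WORLD arrive */
        MPI_Barrier(MPI_COMM_WORLD);
        printf("Process %d: after the barrier\n", rank);
        MPI_Finalize();
        return 0;
    }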