MPI_Send and MPI_Recv - updating single array value

0 votes
asked May 1, 2014 by newbee

I am writing an MPI version of an array update, where I update a single array from multiple processes. Following is my code:

#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <mpi.h>

unsigned int n_sigm;
int *suma_sigm;
int my_first_i = 0;
int my_last_i = 0;
using namespace std;

int main(int argc, char *argv[]) {
    int rank, size, i;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int allocatedTask = n_sigm / size;
    suma_sigm = (int *)malloc(sizeof(int) * n_sigm);
    if (size < 2) {
        printf("Please run with two processes.\n");
        fflush(stdout);
        MPI_Finalize();
        return 0;
    }
    if (rank != 0) {
        my_first_i = rank * allocatedTask;
        my_last_i = my_first_i + allocatedTask;
        cout << rank << " is rank and " << my_first_i << " is first and " << my_last_i << " is my last " << endl;
        for (i = my_first_i; i < my_last_i; i++) {
            suma_sigm[i] = rand() % n_sigm;
            cout << "value at " << i << " is : " << suma_sigm[i] << endl;
        }
        MPI_Send(suma_sigm, allocatedTask, MPI_INT, 0, 123, MPI_COMM_WORLD);
    } else {
        for (i = 0; i < allocatedTask; i++) {
            suma_sigm[i] = rand() % n_sigm;  // process 0 filling its own part
        }
        MPI_Send(suma_sigm, allocatedTask, MPI_INT, 0, 123, MPI_COMM_WORLD);
        for (i = 0; i < (int)n_sigm; i++) { suma_sigm[i] = 0; }
        for (int q = 0; q < size; q++) {
            MPI_Recv(suma_sigm, allocatedTask, MPI_INT, q, 123, MPI_COMM_WORLD, &status);
            cout << " Process_" << q << " :";
            int start = q * allocatedTask;
            int last = start + allocatedTask;
            for (int h = start; h < last; h++) {
                cout << "value2 at " << h << " is : " << suma_sigm[h] << endl;
            }
            cout << endl;
        }
        fflush(stdout);
    }
    MPI_Finalize();
    return 0;
}

As you can see, I generate the values for the array "suma_sigm" on all ranks and then send them. Before sending, the values print fine, but after receiving they are displayed as zero for all processes except process 0. Only process 0's send seems to deliver values that show up correctly after the receive.

1 Answer

+2 votes
answered Nov 8 by osgx

The task you want to solve can be solved more easily by using MPI_Gather.


Each process (root process included) sends the contents of its send buffer to the root process. The root process receives the messages and stores them in rank order.

The documentation also shows equivalent MPI_Send/MPI_Recv usage, which is similar to your code; note the "+ i * recvcount * extent" offset in MPI_Recv:

The outcome is as if each of the n processes in the group (including the root process) had executed a call to

 MPI_Send(sendbuf, sendcount, sendtype, root , ...),

and the root had executed n calls to

 MPI_Recv(recvbuf + i · recvcount · extent(recvtype), recvcount, recvtype, i, ...),


(Figure: idea of MPI_Gather)
