Data (messages) sent and received in MPI are referred to by their memory addresses.
Individual processes rely on communication (message passing) to coordinate the overall workflow.
The core communication routines are: MPI_Send, MPI_Recv, MPI_Bcast, MPI_Scatter, MPI_Gather, MPI_Reduce, and MPI_Barrier.

int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
- buf: pointer to the buffer containing the data elements to be sent.
- count: number of data elements to send.
- datatype: type of each element: MPI_BYTE, MPI_PACKED, MPI_CHAR, MPI_SHORT, MPI_INT, MPI_LONG, MPI_FLOAT, MPI_DOUBLE, MPI_LONG_DOUBLE, MPI_UNSIGNED_CHAR, or other user-defined types.
- dest: rank of the process the data elements are sent to.
- tag: an integer identifying the message; the programmer is responsible for managing tags.
- comm: communicator (typically just MPI_COMM_WORLD).

int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
- buf: pointer to the buffer into which the received data elements are written.
- count: number of data elements to receive.
- datatype: type of each element, as listed above.
- source: rank of the process from which the data elements are received.
- tag: an integer identifying the message; the programmer is responsible for managing tags.
- comm: communicator (typically just MPI_COMM_WORLD).
- status: pointer to an MPI_Status struct that carries additional information about the receive operation.

In the intro-mpi directory, create a file named send_recv.c with the following contents:
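The listing for send_recv.c is not preserved in this extract. The following is a minimal sketch consistent with the description: rank 0 sends a single integer to rank 1 (the payload value and variable names are assumptions):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int number;
    if (rank == 0) {
        number = 42;  /* payload value is illustrative */
        /* send one MPI_INT to rank 1 with tag 0 */
        MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("Process 0 sent %d to process 1\n", number);
    } else if (rank == 1) {
        /* receive one MPI_INT from rank 0 with tag 0 */
        MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Process 1 received %d from process 0\n", number);
    }

    MPI_Finalize();
    return 0;
}
```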
mpicc -o send_recv send_recv.c
mpirun --host compute01:1,compute02:1 -np 2 ./send_recv
In the intro-mpi directory, create a file named multi_send_recv.c with the following contents:
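The listing for multi_send_recv.c is also not preserved. A plausible sketch, given that the run command below launches four processes: rank 0 sends a distinct value to every other rank (the payload values are assumptions):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int value;
    if (rank == 0) {
        /* Rank 0 issues one MPI_Send per receiving rank:
           the number of sends matches the number of receives. */
        for (int dest = 1; dest < size; dest++) {
            value = dest * 10;  /* payload is illustrative */
            MPI_Send(&value, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        }
    } else {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Process %d received %d from process 0\n", rank, value);
    }

    MPI_Finalize();
    return 0;
}
```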
mpicc -o multi_send_recv multi_send_recv.c
mpirun --host compute01:2,compute02:2 -np 4 ./multi_send_recv
MPI_Recv is a blocking call: it does not return until a matching message has arrived. To see how this can cause a deadlock, in the intro-mpi directory create a file named deadlock_send_recv.c with the following contents:
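This listing is likewise missing from the extract. A minimal sketch of the classic mistake it demonstrates: both ranks post a blocking MPI_Recv before their MPI_Send, so neither receive can ever be matched and the program hangs:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int in, out = rank;  /* payload is illustrative */
    int other = 1 - rank;  /* partner rank (assumes exactly 2 processes) */

    /* DEADLOCK: each rank blocks in MPI_Recv waiting for a message
       that its partner never sends, because the partner is also
       blocked in MPI_Recv. Neither process reaches MPI_Send. */
    MPI_Recv(&in, 1, MPI_INT, other, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    MPI_Send(&out, 1, MPI_INT, other, 0, MPI_COMM_WORLD);

    printf("Process %d received %d\n", rank, in);  /* never reached */
    MPI_Finalize();
    return 0;
}
```

Swapping the order on one rank (send first, then receive) resolves the deadlock.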
mpicc -o deadlock_send_recv deadlock_send_recv.c
mpirun --host compute01:1,compute02:1 -np 2 ./deadlock_send_recv
Terminate the hanging program with Ctrl-C. Two lessons: the number of MPI_Send calls must always equal the number of MPI_Recv calls, and MPI_Send should preferably be called before the matching MPI_Recv.