MPI_Send and MPI_Recv
Overview
Teaching: 5 min
Exercises: 20 min

Questions

How do I send data from one rank to another?

Objectives

Introduce the MPI_Send and MPI_Recv functions
Communication
In this section we will use two MPI library functions, MPI_Send and MPI_Recv, to send data from one rank to another.

- MPI_Send/MPI_Recv are the basic building blocks for essentially all of the more specialized MPI commands described later.
- They are also the basic communication tools in your MPI application.
- Since they involve two ranks, they are called “point-to-point” communication (unlike “collective” communication which will be described later).
The process of communicating data follows a standard pattern. Rank A decides to send data to rank B. It first packs the data into a buffer; this avoids sending multiple messages, which would take more time. Rank A then calls MPI_Send to create a message for rank B. The communication device is then given the responsibility of routing the message to the correct destination.

Rank B must know that it is about to receive a message and acknowledges this by calling MPI_Recv. This sets up a buffer for writing the incoming data and instructs the communication device to listen for the message. The message will not actually be sent before the receiving rank calls MPI_Recv, even if MPI_Send has been called.
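The handshake above can be sketched in plain Python with two threads. This is NOT MPI — the class `ToyChannel` and its methods are invented for illustration only — but it mimics the key behaviour: the send cannot complete until a matching receive has been posted.

```python
# A toy, single-process sketch of the send/receive handshake described above.
# This is NOT MPI: it just mimics the pattern with two threads, where a
# "send" cannot complete until a matching "receive" has been posted.
import threading

class ToyChannel:
    def __init__(self):
        self.slot = None
        self.recv_posted = threading.Event()  # receiver has called recv()
        self.delivered = threading.Event()    # data has been handed over

    def send(self, data):
        # Like MPI_Send: wait until the receiver has posted a receive,
        # then hand the data over.
        self.recv_posted.wait()
        self.slot = data
        self.delivered.set()

    def recv(self):
        # Like MPI_Recv: announce readiness, then wait for the data.
        self.recv_posted.set()
        self.delivered.wait()
        return self.slot

channel = ToyChannel()
received = []

receiver = threading.Thread(target=lambda: received.append(channel.recv()))
receiver.start()
channel.send("Hello, world!")  # returns only after recv() has been posted
receiver.join()
print(received[0])  # -> Hello, world!
```

If the receiver thread were never started, `channel.send` would wait forever — the same situation that causes the deadlock explored in the Blocking exercise below.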
MPI_Send
int MPI_Send(
    void* data,
    int count,
    MPI_Datatype datatype,
    int destination,
    int tag,
    MPI_Comm communicator)

data: Pointer to the start of the data being sent
count: Number of elements to send
datatype: The type of the data being sent
destination: The rank number of the rank the data will be sent to
tag: A message tag (integer)
communicator: The communicator (we have used MPI_COMM_WORLD in earlier examples)
MPI_Recv
int MPI_Recv(
    void* data,
    int count,
    MPI_Datatype datatype,
    int source,
    int tag,
    MPI_Comm communicator,
    MPI_Status* status)

data: Pointer to where the received data should be written
count: Maximum number of elements to receive
datatype: The type of the data being received
source: The rank number of the rank sending the data
tag: A message tag (integer)
communicator: The communicator (we have used MPI_COMM_WORLD in earlier examples)
status: A pointer to an MPI_Status object, which holds information about the received message
MPI_Send
MPI_Send(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR

BUF: Vector containing the data to send
COUNT: Number of elements to send
DATATYPE: The type of the data being sent
DEST: The rank number of the rank the data will be sent to
TAG: A message tag (integer)
COMM: The communicator (we have used MPI_COMM_WORLD in earlier examples)
IERROR: Error status
MPI_Recv
MPI_Recv(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM
    INTEGER STATUS(MPI_STATUS_SIZE), IERROR

BUF: Vector the received data should be written to
COUNT: Maximum number of elements to receive
DATATYPE: The type of the data being received
SOURCE: The rank number of the rank sending the data
TAG: A message tag (integer)
COMM: The communicator (we have used MPI_COMM_WORLD in earlier examples)
STATUS: Integer array holding information about the received message
IERROR: Error status
MPI.Comm.send
def send(self, obj, int dest, int tag=0)

obj: The Python object being sent
dest: The rank number of the rank the data will be sent to
tag: A message tag (integer)
MPI.Comm.recv
def recv(self, buf=None, int source=ANY_SOURCE, int tag=ANY_TAG, Status status=None)

buf: The buffer object the received data should be written to (optional)
source: The rank number of the rank sending the data
tag: A message tag (integer)
status: An optional Status object holding information about the received message
The number of arguments can make these commands look complicated, so don’t worry if you need to refer back to the documentation regularly when working with them. The most important arguments specify what data needs to be sent or received and the destination or source of the message.
The message tag is used to differentiate messages, in case rank A has sent multiple pieces of data to rank B. When rank B requests a message with the correct tag, the data buffer will be overwritten with that message.
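Tag matching can be sketched as a mailbox keyed by (source, tag). This is a conceptual illustration in plain Python, not how MPI is implemented — the `Mailbox` class here is made up — but it shows why a receiver can pick out a specific message even when several are in flight from the same sender.

```python
# A conceptual sketch (NOT MPI) of message matching by source and tag.
from collections import deque

class Mailbox:
    def __init__(self):
        self.queues = {}  # (source, tag) -> queue of pending messages

    def deliver(self, source, tag, data):
        # An arriving message is filed under its (source, tag) pair.
        self.queues.setdefault((source, tag), deque()).append(data)

    def receive(self, source, tag):
        # Only a message with a matching source and tag is returned.
        return self.queues[(source, tag)].popleft()

box = Mailbox()
# Rank 0 sends two pieces of data with different tags ...
box.deliver(source=0, tag=0, data="first piece")
box.deliver(source=0, tag=1, data="second piece")
# ... and the receiver can request them in either order, by tag.
print(box.receive(source=0, tag=1))  # -> second piece
print(box.receive(source=0, tag=0))  # -> first piece
```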
The communicator is something we have seen before.
It specifies information about the system and where each rank actually is.
The status parameter in MPI_Recv will give information about any possible problems in transit.
Here’s an example program that uses MPI_Send and MPI_Recv to send the string “Hello, world!” from rank 0 to rank 1:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
   int rank, n_ranks;

   // First call MPI_Init
   MPI_Init(&argc, &argv);

   // Check that there are two ranks
   MPI_Comm_size(MPI_COMM_WORLD, &n_ranks);
   if( n_ranks != 2 ){
      printf("This example requires exactly two ranks\n");
      MPI_Finalize();
      return(1);
   }

   // Get my rank
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

   if( rank == 0 ){
      // 15 elements: the 14 characters plus the terminating null
      char *message = "Hello, world!\n";
      MPI_Send(message, 15, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
   }

   if( rank == 1 ){
      char message[16];
      MPI_Status status;
      MPI_Recv(message, 16, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
      printf("%s", message);
   }

   // Call finalize at the end
   return MPI_Finalize();
}
program hello
   implicit none
   include "mpif.h"

   integer rank, n_ranks, ierr
   integer status(MPI_STATUS_SIZE)
   character(len=13) message

   ! First call MPI_Init
   call MPI_Init(ierr)

   ! Check that there are two ranks
   call MPI_Comm_size(MPI_COMM_WORLD, n_ranks, ierr)
   if (n_ranks .ne. 2) then
      write(6,*) "This example requires exactly two ranks"
      error stop
   end if

   ! Get my rank
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

   if (rank == 0) then
      message = "Hello, world!"
      call MPI_Send( message, 13, MPI_CHARACTER, 1, 0, MPI_COMM_WORLD, ierr)
   end if

   if (rank == 1) then
      call MPI_Recv( message, 13, MPI_CHARACTER, 0, 0, MPI_COMM_WORLD, status, ierr)
      write(6,*) message
   end if

   ! Call MPI_Finalize at the end
   call MPI_Finalize(ierr)
end
from mpi4py import MPI
import sys

# Check that there are two ranks
n_ranks = MPI.COMM_WORLD.Get_size()
if n_ranks != 2:
    print("This example requires exactly two ranks")
    sys.exit(1)

# Get my rank
rank = MPI.COMM_WORLD.Get_rank()

if rank == 0:
    message = "Hello, world!"
    MPI.COMM_WORLD.send(message, dest=1, tag=0)

if rank == 1:
    message = MPI.COMM_WORLD.recv(source=0, tag=0)
    print(message)
Try It Out
Compile and run the above code.
MPI Types in C

In the above example we send a string of characters and therefore specify the type MPI_CHAR. For a complete list of possible types, see the reference.

MPI Types in Fortran

In the above example we send a string of characters and therefore specify the type MPI_CHARACTER. For a complete list of possible types, see the reference.
Communicating buffer-like objects in Python
The lower-case methods send and recv are used to communicate generic Python objects between MPI processes. It is also possible to send buffer-like objects (e.g. NumPy arrays) directly, which provides faster communication and can be useful when working with large data, but requires the memory for the receiving buffer to be allocated prior to communication. These methods start with uppercase letters, e.g. Send and Recv.
Many Ranks
Change the above example so that it works with any number of ranks. Pair even ranks with odd ranks and have each even rank send a message to the corresponding odd rank.
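As a hint, the pairing rule itself can be checked without MPI. The helper names `pair_of` and `has_pair` below are invented for illustration; this is plain Python, and the same arithmetic carries over to the rank variable in the MPI version.

```python
# A sketch of the even/odd pairing rule used in this exercise
# (plain Python -- no MPI needed to check the logic).
def pair_of(rank):
    # Odd ranks pair with the previous even rank,
    # even ranks with the next odd rank.
    return rank - 1 if rank % 2 == 1 else rank + 1

def has_pair(rank, n_ranks):
    # With an odd number of ranks, the last even rank has no partner.
    return pair_of(rank) < n_ranks

for rank in range(5):
    print(rank, "->", pair_of(rank), "exists:", has_pair(rank, 5))
```

Note that with 5 ranks, rank 4 would pair with rank 5, which does not exist, so rank 4 must do nothing.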
Solution
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
   int rank, n_ranks, my_pair;

   // First call MPI_Init
   MPI_Init(&argc, &argv);

   // Get the number of ranks
   MPI_Comm_size(MPI_COMM_WORLD, &n_ranks);

   // Get my rank
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

   // Figure out my pair
   if( rank%2 == 1 ){
      my_pair = rank - 1;
   } else {
      my_pair = rank + 1;
   }

   // Run only if my pair exists
   if( my_pair < n_ranks ){

      if( rank%2 == 0 ){
         // 15 elements: the 14 characters plus the terminating null
         char *message = "Hello, world!\n";
         MPI_Send(message, 15, MPI_CHAR, my_pair, 0, MPI_COMM_WORLD);
      }

      if( rank%2 == 1 ){
         char message[16];
         MPI_Status status;
         MPI_Recv(message, 16, MPI_CHAR, my_pair, 0, MPI_COMM_WORLD, &status);
         printf("%s", message);
      }
   }

   // Call finalize at the end
   return MPI_Finalize();
}
Solution
program hello
   implicit none
   include "mpif.h"

   integer rank, n_ranks, my_pair, ierr
   integer status(MPI_STATUS_SIZE)
   character(len=13) message

   ! First call MPI_Init
   call MPI_Init(ierr)

   ! Find the number of ranks
   call MPI_Comm_size(MPI_COMM_WORLD, n_ranks, ierr)

   ! Get my rank
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

   ! Figure out my pair
   if ( MOD(rank,2) == 1 ) then
      my_pair = rank - 1
   else
      my_pair = rank + 1
   end if

   ! Run only if my pair exists
   if ( my_pair < n_ranks ) then

      if ( MOD(rank,2) == 0 ) then
         message = "Hello, world!"
         call MPI_Send( message, 13, MPI_CHARACTER, my_pair, 0, MPI_COMM_WORLD, ierr)
      end if

      if ( MOD(rank,2) == 1 ) then
         call MPI_Recv( message, 13, MPI_CHARACTER, my_pair, 0, MPI_COMM_WORLD, status, ierr)
         write(6,*) message
      end if
   end if

   ! Call MPI_Finalize at the end
   call MPI_Finalize(ierr)
end
Solution
from mpi4py import MPI

# Get the number of ranks
n_ranks = MPI.COMM_WORLD.Get_size()

# Get my rank
rank = MPI.COMM_WORLD.Get_rank()

# Figure out my pair
if rank % 2 == 1:
    my_pair = rank - 1
else:
    my_pair = rank + 1

# Run only if my pair exists
if my_pair < n_ranks:

    if rank % 2 == 0:
        message = "Hello, world!"
        MPI.COMM_WORLD.send(message, dest=my_pair, tag=0)

    if rank % 2 == 1:
        message = MPI.COMM_WORLD.recv(source=my_pair, tag=0)
        print(message)
Hello Again, World!
Modify the Hello World code so that each rank sends its message to rank 0. Have rank 0 print each message.
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
   int rank;

   // First call MPI_Init
   MPI_Init(&argc, &argv);

   // Get my rank
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

   printf("Hello World, I'm rank %d\n", rank);

   // Call finalize at the end
   return MPI_Finalize();
}
program hello
   implicit none
   include "mpif.h"

   integer rank, ierr

   ! First call MPI_Init
   call MPI_Init(ierr)

   ! Get my rank
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

   write(6,*) "Hello World, I'm rank", rank

   ! Call MPI_Finalize at the end
   call MPI_Finalize(ierr)
end
from mpi4py import MPI

# Get my rank
rank = MPI.COMM_WORLD.Get_rank()

print("Hello World, I'm rank", rank)
Solution
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
   int rank, n_ranks;

   // First call MPI_Init
   MPI_Init(&argc, &argv);

   // Get my rank and the number of ranks
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &n_ranks);

   if( rank != 0 ){
      // All ranks other than 0 should send a message
      char message[30];
      sprintf(message, "Hello World, I'm rank %d\n", rank);
      MPI_Send(message, 30, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
   } else {
      // Rank 0 will receive each message and print them
      for( int sender=1; sender<n_ranks; sender++ ){
         char message[30];
         MPI_Status status;
         MPI_Recv(message, 30, MPI_CHAR, sender, 0, MPI_COMM_WORLD, &status);
         printf("%s", message);
      }
   }

   // Call finalize at the end
   return MPI_Finalize();
}
Solution
program hello
   implicit none
   include "mpif.h"

   integer rank, n_ranks, ierr
   integer sender
   integer status(MPI_STATUS_SIZE)
   character(len=40) message

   ! First call MPI_Init
   call MPI_Init(ierr)

   ! Get my rank and the number of ranks
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
   call MPI_Comm_size(MPI_COMM_WORLD, n_ranks, ierr)

   if (rank .NE. 0) then
      ! All ranks other than 0 should send a message
      write(message,*) "Hello World, I'm rank", rank
      call MPI_Send( message, 40, MPI_CHARACTER, 0, 0, MPI_COMM_WORLD, ierr)
   else
      ! Rank 0 will receive each message and print them
      do sender = 1, n_ranks-1
         call MPI_Recv( message, 40, MPI_CHARACTER, sender, 0, MPI_COMM_WORLD, status, ierr)
         write(6,*) message
      end do
   end if

   ! Call MPI_Finalize at the end
   call MPI_Finalize(ierr)
end
Solution
from mpi4py import MPI

# Get my rank and the number of ranks
rank = MPI.COMM_WORLD.Get_rank()
n_ranks = MPI.COMM_WORLD.Get_size()

if rank != 0:
    # All ranks other than 0 should send a message
    message = "Hello World, I'm rank {:d}".format(rank)
    MPI.COMM_WORLD.send(message, dest=0, tag=0)
else:
    # Rank 0 will receive each message and print them
    for sender in range(1, n_ranks):
        message = MPI.COMM_WORLD.recv(source=sender, tag=0)
        print(message)
Blocking
- Try this code and see what happens.
- (If you are using the MPICH library, this example might automagically work. With OpenMPI it shouldn't.)
- How would you change the code to fix the problem?
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char** argv) {
   int rank, n_ranks, neighbour;
   int n_numbers = 10000;
   int *send_message;
   int *recv_message;
   MPI_Status status;

   send_message = malloc(n_numbers*sizeof(int));
   recv_message = malloc(n_numbers*sizeof(int));

   // First call MPI_Init
   MPI_Init(&argc, &argv);

   // Get my rank and the number of ranks
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &n_ranks);

   // Check that there are exactly two ranks
   if( n_ranks != 2 ){
      printf("This example requires exactly two ranks\n");
      MPI_Finalize();
      return(1);
   }

   // Call the other rank the neighbour
   if( rank == 0 ){
      neighbour = 1;
   } else {
      neighbour = 0;
   }

   // Generate numbers to send
   for( int i=0; i<n_numbers; i++){
      send_message[i] = i;
   }

   // Send the message to the other rank
   MPI_Send(send_message, n_numbers, MPI_INT, neighbour, 0, MPI_COMM_WORLD);

   // Receive the message from the other rank
   MPI_Recv(recv_message, n_numbers, MPI_INT, neighbour, 0, MPI_COMM_WORLD, &status);
   printf("Message received by rank %d \n", rank);

   free(send_message);
   free(recv_message);

   // Call finalize at the end
   return MPI_Finalize();
}
program hello
   implicit none
   include "mpif.h"

   integer, parameter :: n_numbers = 10000
   integer i
   integer rank, n_ranks, neighbour, ierr
   integer status(MPI_STATUS_SIZE)
   integer send_message(n_numbers)
   integer recv_message(n_numbers)

   ! First call MPI_Init
   call MPI_Init(ierr)

   ! Get my rank and the number of ranks
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
   call MPI_Comm_size(MPI_COMM_WORLD, n_ranks, ierr)

   ! Check that there are exactly two ranks
   if (n_ranks .NE. 2) then
      write(6,*) "This example requires exactly two ranks"
      error stop
   end if

   ! Call the other rank the neighbour
   if (rank == 0) then
      neighbour = 1
   else
      neighbour = 0
   end if

   ! Generate numbers to send
   do i = 1, n_numbers
      send_message(i) = i
   end do

   ! Send the message to the other rank
   call MPI_Send( send_message, n_numbers, MPI_INTEGER, neighbour, 0, MPI_COMM_WORLD, ierr )

   ! Receive the message from the other rank
   call MPI_Recv( recv_message, n_numbers, MPI_INTEGER, neighbour, 0, MPI_COMM_WORLD, status, ierr )
   write(6,*) "Message received by rank", rank

   ! Call MPI_Finalize at the end
   call MPI_Finalize(ierr)
end
from mpi4py import MPI
import sys

n_numbers = 10000

# Get my rank and the number of ranks
rank = MPI.COMM_WORLD.Get_rank()
n_ranks = MPI.COMM_WORLD.Get_size()

# Check that there are exactly two ranks
if n_ranks != 2:
    print("This example requires exactly two ranks")
    sys.exit(1)

# Call the other rank the neighbour
if rank == 0:
    neighbour = 1
else:
    neighbour = 0

# Generate numbers to send
send_message = []
for i in range(n_numbers):
    send_message.append(i)

# Send the message to the other rank
MPI.COMM_WORLD.send(send_message, dest=neighbour, tag=0)

# Receive the message from the other rank
recv_message = MPI.COMM_WORLD.recv(source=neighbour, tag=0)
print("Message received by rank", rank)
Solution
MPI_Send will block execution until the receiving process has called MPI_Recv. This prevents the sender from unintentionally modifying the message buffer before the message is actually sent. Above, both ranks call MPI_Send and just wait for the other to respond. The solution is to have one of the ranks receive its message before sending.

Sometimes MPI_Send will actually make a copy of the buffer and return immediately. This generally happens only for short messages. Even when this happens, the actual transfer will not start before the receive is posted.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
   int rank, n_ranks;
   int n_numbers = 524288;
   int send_message[n_numbers];
   int recv_message[n_numbers];
   MPI_Status status;

   // First call MPI_Init
   MPI_Init(&argc, &argv);

   // Get my rank and the number of ranks
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &n_ranks);

   // Generate numbers to send
   for( int i=0; i<n_numbers; i++){
      send_message[i] = i;
   }

   if( rank == 0 ){
      // Rank 0 will send first
      MPI_Send(send_message, n_numbers, MPI_INT, 1, 0, MPI_COMM_WORLD);
   }

   if( rank == 1 ){
      // Rank 1 will receive its message before sending
      MPI_Recv(recv_message, n_numbers, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
      printf("Message received by rank %d \n", rank);
   }

   if( rank == 1 ){
      // Now rank 1 is free to send
      MPI_Send(send_message, n_numbers, MPI_INT, 0, 0, MPI_COMM_WORLD);
   }

   if( rank == 0 ){
      // And rank 0 will receive the message
      MPI_Recv(recv_message, n_numbers, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
      printf("Message received by rank %d \n", rank);
   }

   // Call finalize at the end
   return MPI_Finalize();
}
Solution
MPI_Send will block execution until the receiving process has called MPI_Recv. This prevents the sender from unintentionally modifying the message buffer before the message is actually sent. Above, both ranks call MPI_Send and just wait for the other to respond. The solution is to have one of the ranks receive its message before sending.

Sometimes MPI_Send will actually make a copy of the buffer and return immediately. This generally happens only for short messages. Even when this happens, the actual transfer will not start before the receive is posted.

program hello
   implicit none
   include "mpif.h"

   integer, parameter :: n_numbers = 524288
   integer i
   integer rank, n_ranks, ierr
   integer status(MPI_STATUS_SIZE)
   integer send_message(n_numbers)
   integer recv_message(n_numbers)

   ! First call MPI_Init
   call MPI_Init(ierr)

   ! Get my rank and the number of ranks
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
   call MPI_Comm_size(MPI_COMM_WORLD, n_ranks, ierr)

   ! Check that there are exactly two ranks
   if (n_ranks .NE. 2) then
      write(6,*) "This example requires exactly two ranks"
      error stop
   end if

   ! Generate numbers to send
   do i = 1, n_numbers
      send_message(i) = i
   end do

   if (rank == 0) then
      ! Rank 0 will send first
      call MPI_Send( send_message, n_numbers, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, ierr )
   end if

   if (rank == 1) then
      ! Rank 1 will receive its message before sending
      call MPI_Recv( recv_message, n_numbers, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, status, ierr )
      write(6,*) "Message received by rank", rank
   end if

   if (rank == 1) then
      ! Now rank 1 is free to send
      call MPI_Send( send_message, n_numbers, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, ierr )
   end if

   if (rank == 0) then
      ! And rank 0 will receive the message
      call MPI_Recv( recv_message, n_numbers, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, status, ierr )
      write(6,*) "Message received by rank", rank
   end if

   ! Call MPI_Finalize at the end
   call MPI_Finalize(ierr)
end
Solution
MPI.COMM_WORLD.send will block execution until the receiving process has called MPI.COMM_WORLD.recv. This prevents the sender from unintentionally modifying the message buffer before the message is actually sent. Above, both ranks call MPI.COMM_WORLD.send and just wait for the other to respond. The solution is to have one of the ranks receive its message before sending.

Sometimes MPI.COMM_WORLD.send will actually make a copy of the buffer and return immediately. This generally happens only for short messages. Even when this happens, the actual transfer will not start before the receive is posted.

from mpi4py import MPI
import sys

n_numbers = 10000

# Get my rank and the number of ranks
rank = MPI.COMM_WORLD.Get_rank()
n_ranks = MPI.COMM_WORLD.Get_size()

# Check that there are exactly two ranks
if n_ranks != 2:
    print("This example requires exactly two ranks")
    sys.exit(1)

# Generate numbers to send
send_message = []
for i in range(n_numbers):
    send_message.append(i)

if rank == 0:
    # Rank 0 will send first
    MPI.COMM_WORLD.send(send_message, dest=1, tag=0)

if rank == 1:
    # Rank 1 will receive its message before sending
    recv_message = MPI.COMM_WORLD.recv(source=0, tag=0)
    print("Message received by rank", rank)

if rank == 1:
    # Now rank 1 is free to send
    MPI.COMM_WORLD.send(send_message, dest=0, tag=0)

if rank == 0:
    # And rank 0 will receive the message
    recv_message = MPI.COMM_WORLD.recv(source=1, tag=0)
    print("Message received by rank", rank)
Ping Pong
Write a simplified simulation of ping pong according to the following rules:
- Ranks 0 and 1 participate
- Rank 0 starts with the ball
- The rank with the ball sends it to the other rank
- Both ranks count the number of times they get the ball
- After counting to 1 million, the rank gives up
- There are no misses or points
Solution
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
   int rank, neighbour;
   int max_count = 1000000;
   int counter;
   int bored;
   int ball = 1; // A dummy message to simulate the ball
   MPI_Status status;

   // First call MPI_Init
   MPI_Init(&argc, &argv);

   // Get my rank
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

   // Call the other rank the neighbour
   if( rank == 0 ){
      neighbour = 1;
   } else {
      neighbour = 0;
   }

   if( rank == 0 ){
      // Rank 0 starts with the ball. Send it to rank 1
      MPI_Send(&ball, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
   }

   // Now run a send and receive in a loop until someone gets bored
   counter = 0;
   bored = 0;
   while( !bored )
   {
      // Receive the ball
      MPI_Recv(&ball, 1, MPI_INT, neighbour, 0, MPI_COMM_WORLD, &status);

      // Increment the counter and send the ball back
      counter += 1;
      MPI_Send(&ball, 1, MPI_INT, neighbour, 0, MPI_COMM_WORLD);

      // Check if the rank is bored
      bored = counter >= max_count;
   }
   printf("rank %d is bored and giving up \n", rank);

   // Call finalize at the end
   return MPI_Finalize();
}
Solution
program pingpong
   implicit none
   include "mpif.h"

   integer ball, max_count, counter
   logical bored
   integer rank, neighbour, ierr
   integer status(MPI_STATUS_SIZE)

   ball = 1             ! A dummy message to simulate the ball
   max_count = 1000000

   ! First call MPI_Init
   call MPI_Init(ierr)

   ! Get my rank
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

   ! Call the other rank the neighbour
   if (rank == 0) then
      neighbour = 1
   else
      neighbour = 0
   end if

   ! Rank 0 starts with the ball. Send it to rank 1.
   if ( rank == 0 ) then
      call MPI_Send( ball, 1, MPI_INTEGER, neighbour, 0, MPI_COMM_WORLD, ierr )
   end if

   ! Now run send and receive in a loop until someone gets bored
   counter = 0
   bored = .false.
   do while ( .NOT. bored )
      ! Receive the ball
      call MPI_Recv( ball, 1, MPI_INTEGER, neighbour, 0, MPI_COMM_WORLD, status, ierr )

      ! Increment the counter and send the ball back
      counter = counter + 1
      call MPI_Send( ball, 1, MPI_INTEGER, neighbour, 0, MPI_COMM_WORLD, ierr )

      ! Check if the rank is bored
      bored = counter >= max_count
   end do
   write(6, *) "Rank ", rank, "is bored and giving up"

   ! Call MPI_Finalize at the end
   call MPI_Finalize(ierr)
end
Solution
from mpi4py import MPI

max_count = 1000000
ball = 1  # A dummy message to simulate the ball

# Get my rank
rank = MPI.COMM_WORLD.Get_rank()

# Call the other rank the neighbour
if rank == 0:
    neighbour = 1
else:
    neighbour = 0

if rank == 0:
    # Rank 0 starts with the ball. Send it to rank 1
    MPI.COMM_WORLD.send(ball, dest=1, tag=0)

# Now run a send and receive in a loop until someone gets bored
counter = 0
bored = False
while not bored:
    # Receive the ball
    ball = MPI.COMM_WORLD.recv(source=neighbour, tag=0)

    # Increment the counter and send the ball back
    counter += 1
    MPI.COMM_WORLD.send(ball, dest=neighbour, tag=0)

    # Check if the rank is bored
    bored = (counter >= max_count)

print("Rank {:d} is bored and giving up".format(rank))
Key Points
Use MPI_Send to send messages and MPI_Recv to receive them.

MPI_Recv will block the program until the message is received.