Tutorial 2 - Sending to/from different nodes

In many cases, we will not want to send data only from the master node to all other nodes. We can use a combination of bcast, send and recv to flexibly pass objects between arbitrary nodes.

Again, this example re-uses some of the classes from the previous tutorial, which must be available in test_mpiclasses.py (see Tutorial 1 for details).

Our new script looks like this:

#!/usr/bin/python3

import sys

from anamnesis import MPIHandler

from test_mpiclasses import ComplexPerson, ComplexPlace, ComplexTrain

# All nodes must perform this
m = MPIHandler(use_mpi=True)

# We need at least three nodes for this
if m.size < 3:
    print("Error: This example needs at least three MPI nodes")
    m.done()
    sys.exit(1)

if m.rank == 0:
    # We are the master node
    print("Master node")

    # Create a person to broadcast
    s_person = ComplexPerson('Fred', 42)

    print("Master: Created Person: {} {}".format(s_person.name, s_person.age))

    m.bcast(s_person)

    s_place = m.bcast(root=1)

    print("Master: Received Place: {}".format(s_place.location))

elif m.rank == 1:
    # We are slave node 1
    print("Slave node {}".format(m.rank))

    # Wait for our broadcast object to be ready
    s_person = m.bcast()

    print("Slave node {}: Received Person: {} {}".format(m.rank, s_person.name, s_person.age))

    # Now create our own object and broadcast it to the other nodes
    s_place = ComplexPlace('Manchester')

    print("Slave node {}: Created place: {}".format(m.rank, s_place.location))

    m.bcast(s_place, root=1)

    s_train = m.recv(source=2)

    print("Slave node {}: Received Train: {}".format(m.rank, s_train.destination))

else:
    # We are slave node 2
    print("Slave node {}".format(m.rank))

    # Wait for our first broadcast object to be ready
    s_person = m.bcast()

    print("Slave node {}: Received Person: {} {}".format(m.rank, s_person.name, s_person.age))

    # Wait for our second broadcast object to be ready
    s_place = m.bcast(root=1)

    print("Slave node {}: Received Place: {}".format(m.rank, s_place.location))

    # Create a train and send to node 1 only
    s_train = ComplexTrain('Durham')

    print("Slave node {}: Created train: {}".format(m.rank, s_train.destination))

    m.send(s_train, dest=1)

# We need to make sure that we finalise MPI otherwise
# we will get an error on exit
m.done()

Again, we need to run this code under an MPI environment (refer back to Tutorial 1 for details). We will get output similar to the following (the exact interleaving of lines may vary between runs):

Master node
Master: Created Person: Fred 42
Slave node 1
Slave node 2
Slave node 1: Received Person: Fred 42
Slave node 1: Created place: Manchester
Slave node 2: Received Person: Fred 42
Slave node 2: Received Place: Manchester
Master: Received Place: Manchester
Slave node 2: Created train: Durham
ComplexTrain.init_from_hdf5
We have already set destination: Durham
Slave node 1: Received Train: Durham

In order, our script does the following:

  1. Set up MPI
  2. Create a Person on node 0 (master) and bcast it to nodes 1 and 2
  3. Create a Place on node 1 and bcast it to nodes 0 and 2
  4. Create a Train on node 2 and send it to node 1 only (node 1 calls recv to receive it)
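
The data flow in these steps can be understood without an MPI installation. Purely as an illustrative sketch, the same choreography can be simulated with standard-library threads and queues. Note that the bcast, send and recv helpers and the inboxes dict below are stand-ins invented for this sketch, not the anamnesis API, and that real MPI collectives are matched in program order rather than by inspecting a tag as node2 does here:

```python
import queue
import threading

# One inbox per simulated node (rank 0, 1, 2).
inboxes = {rank: queue.Queue() for rank in range(3)}
results = {}

def bcast(obj, root):
    # A broadcast is modelled as a put into every inbox except the root's.
    for rank, box in inboxes.items():
        if rank != root:
            box.put(obj)

def send(obj, dest):
    inboxes[dest].put(obj)

def recv(rank):
    return inboxes[rank].get(timeout=5)

def node0():
    bcast(('Person', 'Fred', 42), root=0)    # step 2: master broadcasts
    results[0] = recv(0)                     # step 3: Place arrives from node 1

def node1():
    person = recv(1)                         # step 2: Person from node 0
    bcast(('Place', 'Manchester'), root=1)   # step 3: broadcast our own object
    train = recv(1)                          # step 4: point-to-point from node 2
    results[1] = (person, train)

def node2():
    # The two broadcasts may arrive in either order here, so match on the tag.
    a, b = recv(2), recv(2)
    person = a if a[0] == 'Person' else b
    place = b if a[0] == 'Person' else a
    send(('Train', 'Durham'), dest=1)        # step 4: send to node 1 only
    results[2] = (person, place)

threads = [threading.Thread(target=f) for f in (node0, node1, node2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results[0])     # ('Place', 'Manchester')
print(results[1][1])  # ('Train', 'Durham')
```

Unlike this sketch, MPI runs each rank as a separate process, and bcast is a collective call that every node must enter, which is why the script above calls m.bcast on all three branches for each broadcast.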

Using these examples, you should be able to see how we can flexibly send objects around our system.