RPi Bramble Help

Cluster computing 1

Get basic details

Write the script

Use the following script to test the cluster.

nano test_cluster.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()
name = MPI.Get_processor_name()

print(f'World size: {size}, Rank: {rank}, host: {name}.')

Distribute the script

The script needs to be in the same location on all the nodes. Copy it from the master to each node like this:

andrew@master:~ $ scp test_cluster.py andrew@node1:/home/andrew/
test_cluster.py                              100%  184    67.3KB/s   00:00
andrew@master:~ $ scp test_cluster.py andrew@node2:/home/andrew/
test_cluster.py                              100%  184   100.7KB/s   00:00
andrew@master:~ $ scp test_cluster.py andrew@node3:/home/andrew/
test_cluster.py                              100%  184   103.0KB/s   00:00
andrew@master:~ $

This process is tedious, and would be even more so with more nodes in the cluster, so shortly I will demonstrate a way to script all of this. A quick sketch of the idea follows.
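As a minimal sketch (not the full solution demonstrated later), a short Python loop can push the file to every node in one go. The hostnames node1 to node3 match the transcript above; this assumes passwordless SSH key authentication is already configured between the Pis:

import subprocess

# Push the test script to every node in the cluster.
# Assumes hostnames node1-node3 and passwordless SSH keys.
nodes = ['node1', 'node2', 'node3']

for node in nodes:
    subprocess.run(
        ['scp', 'test_cluster.py', f'andrew@{node}:/home/andrew/'],
        check=True,  # stop immediately if any copy fails
    )
    print(f'Copied test_cluster.py to {node}')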

Run the script

Run it by invoking the MPI system:

mpiexec -hostfile machinefile -n 4 python test_cluster.py

The "-n" parameter is the number of processes to be spawned. Setting this to 4 on a four-Pi cluster will cause each machine to return one set of values.
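The processes are spread across the hosts listed in machinefile. The file's contents are not shown here, but assuming MPICH-style syntax (a common choice on Pi clusters), it is typically just one hostname per line:

master
node1
node2
node3

When "-n" asks for more processes than there are entries, the launcher wraps around the list, which is how each Pi ends up running several ranks in the 16-process run further down.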

andrew@master:~ $ mpiexec -hostfile machinefile -n 4 python test_cluster.py
World size: 4, Rank: 0, host: master.
World size: 4, Rank: 1, host: node1.
World size: 4, Rank: 3, host: node3.
World size: 4, Rank: 2, host: node2.

Given that the Pis have four-core processors, we should make use of them.

andrew@master:~ $ mpiexec -hostfile machinefile -n 16 python test_cluster.py
World size: 16, Rank: 2, host: node2.
World size: 16, Rank: 1, host: node1.
World size: 16, Rank: 3, host: node3.
World size: 16, Rank: 4, host: master.
World size: 16, Rank: 6, host: node2.
World size: 16, Rank: 5, host: node1.
World size: 16, Rank: 7, host: node3.
World size: 16, Rank: 8, host: master.
World size: 16, Rank: 10, host: node2.
World size: 16, Rank: 13, host: node1.
World size: 16, Rank: 15, host: node3.
World size: 16, Rank: 12, host: master.
World size: 16, Rank: 14, host: node2.
World size: 16, Rank: 0, host: master.
World size: 16, Rank: 11, host: node3.
World size: 16, Rank: 9, host: node1.
andrew@master:~ $

Notice that the ranks do not print in order: the 16 processes run concurrently, and their output arrives as it is produced. Now that's parallel processing!

Sending data

To test the master node's ability to send data to the other nodes, we will use this code:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.rank
size = comm.size
name = MPI.Get_processor_name()

data = [5, 10, 15]

if rank == 0:
    # The master node will have rank 0
    comm.send(data[0], dest=1)  # Send specifically to rank 1
    print('Sent 5 to dest 1')
    comm.send(data[1], dest=2)  # Send specifically to rank 2
    print('Sent 10 to dest 2')
    comm.send(data[2], dest=3)  # Send specifically to rank 3
    print('Sent 15 to dest 3')
elif rank == 1:
    data = comm.recv(source=0)
    print(f'Rank {rank} on node {name} received {data}.')
elif rank == 2:
    data = comm.recv(source=0)
    print(f'Rank {rank} on node {name} received {data}.')
elif rank == 3:
    data = comm.recv(source=0)
    print(f'Rank {rank} on node {name} received {data}.')

Make sure to copy the file to the other Pis before trying to execute the code. Note that we have to reduce the value of "-n" in the execute line back to 4, since the script only addresses ranks 0 to 3. When we run it, we get the following output:

andrew@master:~ $ mpiexec -hostfile machinefile -n 4 python send_recv_2.py
Rank 1 on node node1 received 5.
Rank 2 on node node2 received 10.
Sent 5 to dest 1
Sent 10 to dest 2
Sent 15 to dest 3
Rank 3 on node node3 received 15.
andrew@master:~ $
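The chain of send() and recv() calls above grows with every node. As a rough sketch (not part of the walkthrough above), mpi4py's collective scatter() achieves the same distribution in a single call, handing one element of a list to each rank, master included:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.rank
name = MPI.Get_processor_name()

# Only the root rank supplies data; the list length must equal
# the number of processes (4 in this cluster).
data = [0, 5, 10, 15] if rank == 0 else None

value = comm.scatter(data, root=0)  # each rank receives one element
print(f'Rank {rank} on node {name} received {value}.')

Save it as, say, scatter_test.py (a hypothetical name), copy it to the nodes as before, and run it with the same mpiexec line using -n 4.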
Last modified: 22 April 2024