New images

Noah L. Schrick 2022-03-18 19:49:38 -05:00
parent c557aceb10
commit 91da8e314f
11 changed files with 1 addition and 0 deletions


@@ -64,6 +64,7 @@ Task Zero is performed by the root node, and is a conditional task; it is not gu
\TUsubsubsection{Task One}
\TUsubsubsection{Task Two}
\TUsubsubsection{Task Three}
Task Three is performed only by the root node, so no division of work is necessary. The work performed during this task is shown in Figure \ref{fig:sync-fire}.
\TUsubsubsection{Task Four and Task Five}
Intermediate database operations, though infrequent and possibly absent entirely for small graphs, are lengthy when they do occur. As discussed in Section \ref{sec:db-stor}, the two main memory consumers are the frontier and the instance, both of which are held by the root node. Since the database storage requests are blocking, the pipeline would stall for a lengthy period while the root node completed what could be two large storage operations. Tasks 4 and 5 alleviate this stall by executing independently of the regular pipeline execution flow, since no other task relies on data sent from them. The root node can thus asynchronously send the frontier and instance to the appropriate nodes as needed, clear its memory, and continue execution without delay.
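The offload pattern described above can be sketched in miniature. This is a hedged illustration only: the document's actual implementation uses MPI message passing (the analogous primitive would be a nonblocking send such as MPI_Isend), whereas this sketch uses Python threads, and the `store_to_database` function and its payloads are invented stand-ins for the real storage requests.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def store_to_database(name, payload):
    # Stand-in for a blocking database storage request.
    time.sleep(0.1)
    return f"{name}: {len(payload)} items stored"

# Hypothetical frontier and instance held by the root node.
frontier = list(range(1000))
instance = list(range(5000))

with ThreadPoolExecutor(max_workers=2) as pool:
    # Hand both large storages to background workers instead of blocking.
    pending = [pool.submit(store_to_database, "frontier", frontier),
               pool.submit(store_to_database, "instance", instance)]
    # The root node can drop its local references immediately and
    # continue the regular pipeline tasks while storage proceeds.
    frontier, instance = [], []
    results = [f.result() for f in pending]

print(results)
```

The key property mirrored here is that no downstream task waits on the storage results, so the critical path is unaffected; only the final `result()` calls synchronize, and in the MPI setting even that join is deferred to the receiving nodes.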
\TUsubsubsection{MPI Tags}

Binary files added (10 new images, 26--78 KiB each), including Chapter5_img/node-alloc.PNG.