Task 2 and Task 3
@@ -80,7 +80,6 @@ Once the computation work of Task 1 is completed, each node must send their comp
\label{fig:Task1-Case1}
\end{figure}

\begin{figure}[htp]
\includegraphics[width=\linewidth]{"./Chapter5_img/Task1-Case2.png"}
\vspace{.2truein} \centerline{}
@@ -89,13 +88,16 @@ Once the computation work of Task 1 is completed, each node must send their comp
\end{figure}

\TUsubsubsection{Task 2}
Each node in Task 2 iterates through its received partial applicable exploit list and creates new states with edges to the current state. However, Synchronous Firing work is performed during this process, and synchronizing grouped exploits that may be distributed across multiple nodes introduces additional overhead and complexity. To avoid this, each node checks its partial applicable exploit list for exploits that belong to a group, removes those exploits from its list, and sends them to the Task 2 local communicator root as a new partial list. Because the Task 2 local root now holds all group exploits, it can perform the Synchronous Firing work without any additional communication or synchronization with the other MPI nodes in the Task 2 stage. Aside from the additional setup the local root requires for Synchronous Firing, the work performed in this task by every MPI node is that shown in the Synchronous Firing figure (Figure \ref{fig:sync-fire}).
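The filtering and forwarding step can be illustrated with a minimal sketch (C++ with MPI, shown below); the \texttt{Exploit} layout, the tag value, and the raw-byte transfer are assumptions made for illustration, not the actual implementation.
\begin{verbatim}
// Sketch: each Task 2 node strips grouped exploits from its partial list
// and forwards them to the local communicator root, which then performs
// Synchronous Firing without further inter-node synchronization.
#include <mpi.h>
#include <vector>

struct Exploit { int id; int group_id; };   // group_id < 0: not in a group

void forward_group_exploits(std::vector<Exploit>& partial_list,
                            MPI_Comm task2_comm, int local_root)
{
    int rank, size;
    MPI_Comm_rank(task2_comm, &rank);
    MPI_Comm_size(task2_comm, &size);

    // Split the partial list: grouped exploits leave this node,
    // ungrouped exploits stay for normal local firing.
    std::vector<Exploit> grouped, kept;
    for (const Exploit& e : partial_list)
        (e.group_id >= 0 ? grouped : kept).push_back(e);
    partial_list.swap(kept);

    if (rank != local_root) {
        // Ship the grouped exploits to the local root as raw bytes.
        MPI_Send(grouped.data(),
                 static_cast<int>(grouped.size() * sizeof(Exploit)),
                 MPI_BYTE, local_root, /*tag=*/0, task2_comm);
        return;
    }

    // Local root: collect every other node's grouped exploits, then run
    // Synchronous Firing on the combined list.
    for (int src = 0; src < size; ++src) {
        if (src == local_root) continue;
        MPI_Status st;
        MPI_Probe(src, 0, task2_comm, &st);
        int bytes = 0;
        MPI_Get_count(&st, MPI_BYTE, &bytes);
        std::vector<Exploit> incoming(bytes / sizeof(Exploit));
        MPI_Recv(incoming.data(), bytes, MPI_BYTE, src, 0,
                 task2_comm, MPI_STATUS_IGNORE);
        grouped.insert(grouped.end(), incoming.begin(), incoming.end());
    }
    // 'grouped' now holds every group exploit; Synchronous Firing
    // proceeds locally on the root without further synchronization.
}
\end{verbatim}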
\TUsubsubsection{Task 3}
Task 3 is performed only by the root node, and no division of work is necessary. The root node continuously checks for new states until the Task 2 finalize signal is detected. This task consists of setting the new state's ID, adding it to the frontier, adding its information to the instance, and inserting information into the hash map. When the root node has processed all states and has received the Task 2 finalize signal, it completes Task 3 by sending the instance and/or frontier to Task 4 and/or Task 5, respectively, if applicable, and then proceeds to Task 0.
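A minimal sketch of this per-state bookkeeping is given below (C++); the \texttt{State} fields, the container types, and the hash-map layout are placeholders chosen for illustration rather than the actual data structures, and the surrounding receive loop that runs until the Task 2 finalize signal is omitted.
\begin{verbatim}
// Sketch of the Task 3 bookkeeping the root node performs for each
// new state received from Task 2. All types here are illustrative.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct State { std::uint64_t id = 0; std::uint64_t hash = 0; /* ... */ };

struct Task3Root {
    std::vector<State> frontier;                       // states awaiting expansion
    std::vector<State> instance;                       // attack graph built so far
    std::unordered_map<std::uint64_t, std::uint64_t> hash_to_id;
    std::uint64_t next_id = 0;

    // Process one new state; called repeatedly until the Task 2
    // finalize signal is detected and no states remain.
    void process(State s) {
        s.id = next_id++;                 // set the new state's ID
        hash_to_id[s.hash] = s.id;        // insert its entry into the hash map
        instance.push_back(s);            // add its information to the instance
        frontier.push_back(s);            // add it to the frontier
    }
};
\end{verbatim}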
\TUsubsubsection{Task 4 and Task 5}
Intermediate database operations, though infrequent and possibly never occurring for small graphs, are time-consuming when they do occur. As discussed in Section \ref{sec:db-stor}, the two main memory consumers are the frontier and the instance, both of which are held by the root node. Since the database storage requests are blocking, the pipeline would halt for a lengthy period while waiting for the root node to finish two potentially large storage operations. Tasks 4 and 5 alleviate this stall by executing independently of the regular pipeline execution flow. Since Tasks 4 and 5 do not send any data, no other task must wait for them to complete. The root node can therefore asynchronously send the frontier and instance to the appropriate nodes as needed, clear its memory, and continue execution without delay.
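The sketch below (C++ with MPI) illustrates this hand-off using non-blocking sends; the ranks, tags, and serialized buffers are hypothetical, and the actual implementation may manage request completion differently.
\begin{verbatim}
// Sketch: the root hands the serialized instance and frontier to the
// Task 4 and Task 5 nodes with non-blocking sends so it is not stalled
// by the database storage itself.
#include <mpi.h>
#include <vector>

void offload_to_storage(std::vector<char>& instance_buf,
                        std::vector<char>& frontier_buf,
                        int task4_rank, int task5_rank,
                        int tag_instance, int tag_frontier,
                        MPI_Comm comm)
{
    MPI_Request reqs[2];

    // Non-blocking sends return immediately: the root is not blocked
    // while Task 4/5 perform the lengthy database storage.
    MPI_Isend(instance_buf.data(), (int)instance_buf.size(), MPI_CHAR,
              task4_rank, tag_instance, comm, &reqs[0]);
    MPI_Isend(frontier_buf.data(), (int)frontier_buf.size(), MPI_CHAR,
              task5_rank, tag_frontier, comm, &reqs[1]);

    // ... the root continues with Task 0 while the transfers progress;
    // completion could also be polled periodically with MPI_Testall ...

    // MPI requires the send buffers to remain valid until the requests
    // complete; only then can the root safely release this memory.
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    instance_buf.clear();
    frontier_buf.clear();
}
\end{verbatim}
Using non-blocking point-to-point sends here keeps the storage path off the critical pipeline while still respecting MPI's buffer-lifetime rules before the root clears its memory.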
\TUsubsubsection{MPI Tags}
\TUsubsection{Performance Expectations}
\TUsubsection{Results}
The communication cost of the asynchronous sends for Tasks 4 and 5 is less than the time required for a database storage performed by the root node.