\TUchapter{UTILIZATION OF MESSAGE PASSING INTERFACE}
\TUsection{Introduction to MPI Utilization for Attack Graph Generation}
\TUsection{Necessary Components}
\TUsubsection{Serialization}
In order to distribute workloads across the nodes of a distributed system, various
types of data need to be sent and received. Support and mechanisms vary between MPI
implementations, but fundamental data types such as integers, doubles, characters,
and Booleans are supported directly by the MPI implementation. While this simplifies
some of the messages exchanged in the MPI approaches to attack graph generation, it
does not cover the vast majority of them.

RAGE implements many custom classes and structs that are used throughout the
generation process; qualities, topologies, network states, and exploits are a few
such examples. Rather than breaking each of these down into fundamental types
manually, serialization functions are leveraged to handle most of this work. RAGE
already incorporates the Boost graph libraries for auxiliary support, so this work
extended that dependency to also utilize the serialization libraries provided by
Boost. These libraries include support for serializing the STL containers, and many
of the RAGE classes have members that use them. An additional advantage of the Boost
approach is that it handles the nesting present in many of the RAGE classes. For
example, the NetworkState class has a member vector of Quality objects. When
serializing a NetworkState, Boost recursively serializes all members, including the
custom class members, provided that they also define serialization functions.

When using the serialization libraries, this work opted for the intrusive route, in
which the class definitions are altered directly. This was preferable to the
non-intrusive approach because the classes could be modified with relative ease, and
many of them did not expose enough of their internals for the non-intrusive approach
to be viable.
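
To illustrate the intrusive route, the following sketch uses simplified stand-ins
for the RAGE classes discussed above; the member fields are assumptions made for the
example, and only the serialize() member template reflects the mechanism Boost
requires.

\begin{verbatim}
#include <vector>
#include <boost/serialization/access.hpp>
#include <boost/serialization/vector.hpp>

// Simplified stand-in for the RAGE Quality class (members are illustrative).
class Quality {
    friend class boost::serialization::access;
    int attribute;
    int value;

    // Intrusive serialization: Boost invokes this for both saving and loading.
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & attribute;
        ar & value;
    }
};

// Simplified stand-in for the RAGE NetworkState class.
class NetworkState {
    friend class boost::serialization::access;
    std::vector<Quality> qualities;   // nested custom class members

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        // boost/serialization/vector.hpp serializes the STL container,
        // and each Quality is serialized through its own serialize().
        ar & qualities;
    }
};
\end{verbatim}

With serialize() in place, an instance can be written to a Boost text archive backed
by a string stream, and the resulting character buffer can be sent through MPI as an
array of MPI\_CHAR.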
\TUsubsection{Data Consistency}

\TUsection{Tasking Approach}

\TUsubsection{Introduction to the Tasking Approach}
At a high level, the compliance graph generation process can be broken down into six
main tasks, which are described in Figure \ref{fig:tasks}. Prior works, such as those
by the authors of \cite{li_concurrency_2019}, \cite{9150145}, and \cite{7087377},
parallelize graph generation using OpenMP, CUDA, and hyper-graph partitioning. The
approach presented here instead utilizes the Message Passing Interface (MPI) to
distribute the six identified tasks of RAGE across nodes and examines the effect on
speedup, efficiency, and scalability for attack and compliance graph generation.
\begin{figure}[htp]
\includegraphics[width=\linewidth]{"./Chapter5_img/horiz_task.drawio.png"}
\vspace{.2truein} \centerline{}
\caption{Task Overview of the Attack Graph Generation Process}
\label{fig:tasks}
\end{figure}

\TUsubsection{Algorithm Design}
The tasking approach leverages a pipeline structure built from the six tasks and the
available MPI nodes. Each stage of the pipeline passes the necessary data to the next
stage through MPI messages, and the nodes of the next stage receive the data and
execute their tasks. The pipeline is considered fully saturated when each task has a
dedicated node. When there are fewer nodes than tasks, some nodes process multiple
tasks. When there are more nodes than tasks, the additional nodes are assigned to
Tasks 1 and 2: timings collected from the serial approach on various networks showed
that Tasks 1 and 2 dominate the execution time, and that this dominance grows sharply
with network size. Node allocation can be seen in Figure \ref{fig:node-alloc}.

In determining which tasks should be handled by the root node, the two main
considerations were minimizing communication cost and avoiding unnecessary
complexity. In the serial approach, the frontier queue is the primary data structure
for the majority of the execution. Rather than using a distributed queue or passing
multiple sub-queues between nodes, the minimal option is to pass states individually;
this also keeps the design simple. Managing multiple frontier queues would require
duplication checks, multiple nodes reading from and writing to the database, and a
strategy for maintaining proper queue ordering, all of which would also increase the
communication cost. As a result, the root node is dedicated to Tasks 0 and 3.
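
As a rough sketch of the pipeline hand-off, the fragment below shows one way a stage
could forward a serialized state to the node responsible for the next task. The tag
value and helper names are illustrative assumptions rather than RAGE's exact
interface; the serialization itself follows the Boost approach described earlier.

\begin{verbatim}
#include <mpi.h>
#include <sstream>
#include <string>
#include <boost/archive/text_iarchive.hpp>
#include <boost/archive/text_oarchive.hpp>

const int TAG_STATE = 1;   // illustrative tag value

// Serialize a state with Boost and send it to the next pipeline stage.
template <class State>
void send_state(const State& s, int dest) {
    std::ostringstream oss;
    boost::archive::text_oarchive oa(oss);
    oa << s;
    const std::string buf = oss.str();
    MPI_Send(buf.data(), static_cast<int>(buf.size()), MPI_CHAR,
             dest, TAG_STATE, MPI_COMM_WORLD);
}

// Receive a serialized state from the previous stage and reconstruct it.
template <class State>
State recv_state(int src) {
    MPI_Status status;
    MPI_Probe(src, TAG_STATE, MPI_COMM_WORLD, &status);
    int count = 0;
    MPI_Get_count(&status, MPI_CHAR, &count);
    std::string buf(count, '\0');
    MPI_Recv(&buf[0], count, MPI_CHAR, src, TAG_STATE,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    std::istringstream iss(buf);
    boost::archive::text_iarchive ia(iss);
    State s;
    ia >> s;
    return s;
}
\end{verbatim}

Probing for the message size before receiving keeps the stages decoupled, since a
receiving node does not need to know in advance how large the serialized state is.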
\begin{figure}[htp]
\includegraphics[width=\linewidth]{"./Chapter5_img/node-alloc.png"}
\vspace{.2truein} \centerline{}
\caption{Node Allocation for each Task}
\label{fig:node-alloc}
\end{figure}

\TUsubsubsection{Communication Structure}
\TUsubsubsection{Task Zero}
Task Zero is performed by the root node and is conditional; it is not guaranteed to
execute at every pipeline iteration. It runs only when the frontier is empty but the
database still holds unexplored states. This situation arises under memory
constraints, when database storage is performed during execution to offload memory
demand, as discussed in Section \ref{sec:db-stor}. After Task Zero completes, a state
is popped from the frontier and the root node sends it to $n_1$. If the frontier
remains empty, the root node sends the finalize signal to all nodes.
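
A sketch of this dispatch logic on the root node is shown below. The queue type, the
database hooks, the send\_state() helper from the earlier sketch, and the finalize tag
are placeholders standing in for the corresponding RAGE structures rather than its
exact implementation.

\begin{verbatim}
// Root-node dispatch at the top of a pipeline iteration (sketch).
void dispatch_next_state(std::queue<NetworkState>& frontier, int world_size) {
    // Task 0: refill the frontier from the database only when the frontier
    // is empty but unexplored states remain in storage.
    if (frontier.empty() && database_has_unexplored_states()) {
        refill_frontier_from_database(frontier);
    }

    if (!frontier.empty()) {
        NetworkState state = frontier.front();
        frontier.pop();
        send_state(state, 1);   // hand the state to n1 for Task 1
    } else {
        // No work remains anywhere: signal every node to finalize.
        for (int node = 1; node < world_size; ++node) {
            MPI_Send(nullptr, 0, MPI_CHAR, node, TAG_FINALIZE, MPI_COMM_WORLD);
        }
    }
}
\end{verbatim}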
\TUsubsubsection{Task One}

\TUsubsubsection{Task Two}

\TUsubsubsection{Task Three}

\TUsubsubsection{Task Four and Task Five}
Intermediate database operations are infrequent, and may never occur for small
graphs, but they are lengthy when they do occur. As discussed in Section
\ref{sec:db-stor}, the two main memory consumers are the frontier and the instance,
both of which are held by the root node. Because the database storage requests are
blocking, the pipeline would halt while waiting for the root node to finish what are
potentially two large storage operations. Tasks 4 and 5 alleviate this stall by
executing independently of the regular pipeline flow, since no other task relies on
data sent from them. The root node can therefore asynchronously send the frontier and
the instance to the appropriate nodes as needed, clear its own copies, and continue
execution without delay.
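
A sketch of this non-blocking hand-off is given below; the destination ranks, tags,
and the pre-serialized string buffers are illustrative assumptions rather than RAGE's
exact interface.

\begin{verbatim}
#include <mpi.h>
#include <string>

const int TASK4_NODE = 4, TASK5_NODE = 5;        // illustrative ranks
const int TAG_FRONTIER = 40, TAG_INSTANCE = 50;  // illustrative tags

// Root node offloads the serialized frontier and instance to the Task 4
// and Task 5 nodes without blocking the rest of the pipeline.
void offload_to_storage_nodes(const std::string& frontier_buf,
                              const std::string& instance_buf,
                              MPI_Request requests[2]) {
    MPI_Isend(frontier_buf.data(), static_cast<int>(frontier_buf.size()),
              MPI_CHAR, TASK4_NODE, TAG_FRONTIER, MPI_COMM_WORLD, &requests[0]);
    MPI_Isend(instance_buf.data(), static_cast<int>(instance_buf.size()),
              MPI_CHAR, TASK5_NODE, TAG_INSTANCE, MPI_COMM_WORLD, &requests[1]);
    // The buffers must remain valid until MPI_Waitall() (or an equivalent
    // completion call) confirms the sends; only then can the root release them.
}
\end{verbatim}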
\TUsubsubsection{MPI Tags}

\TUsubsection{Performance Expectations}

\TUsubsection{Results}
The communication cost of the asynchronous sends for Tasks 4 and 5 is lower than the
time required for the root node to perform the corresponding database storage itself.
\TUsection{Subgraphing Approach}

\TUsubsection{Introduction to the Subgraphing Approach}

\TUsubsection{Algorithm Design}

\TUsubsubsection{Communication Structure}

\TUsubsubsection{Worker Nodes}

\TUsubsubsection{Root Node}

\TUsubsubsection{Database Node}

\TUsubsection{Performance Expectations}