Acknowledgements

This commit is contained in:
Noah L. Schrick 2022-03-27 15:00:08 -05:00
parent 49ca61bf20
commit d846ac644b
15 changed files with 102 additions and 60 deletions


@ -39,7 +39,7 @@ parameters rather than exploit files, but would otherwise function similarly in
\TUsubsection{Difficulties of Compliance Graphs and Introduction to Thesis Work} \label{sec:CG-diff}
Like attack graphs, compliance graphs suffer from the state space explosion problem. Since compliance graphs are also exhaustive, the resulting graphs can grow to incredibly large sizes. Compliance regulations
that need to be checked at each system state, such as SOX, HIPAA, GDPR, PCI DSS, or any other regulatory requirement, in conjunction with a large number of assets that need to be checked, can very quickly produce
these large resulting graphs. The creation of these graphs through a serial approach likewise becomes increasingly infeasible. Due to this, the high-performance computing (HPC) space presents itself as an appealing
approach. This work aims to extend the attack graph generator engine RAGE presented by the author in \cite{cook_rage_2018} to begin development for compliance graph generation. The example networks in this
work will also be in the compliance graph space, specifically examining vehicle maintenance compliance. This work will also examine approaches to leverage high-performance computing to aid in the generation of
compliance graphs.
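The scale of this explosion can be made concrete with a short sketch (purely illustrative; the function and the asset/quality counts below are assumptions, not figures from RAGE): if every asset carries a fixed set of qualities and each quality independently takes one of a fixed set of values, the number of candidate system states grows exponentially.

```python
def state_space_size(num_assets, qualities_per_asset, values_per_quality):
    """Count the states in an exhaustive graph where every
    (asset, quality) pair independently takes any of its values."""
    return values_per_quality ** (num_assets * qualities_per_asset)

# Even modest inputs explode: 5 assets with 4 boolean qualities each
# already yield 2^20 (over one million) candidate states.
print(state_space_size(5, 4, 2))  # → 1048576
```

Each additional regulation to check adds qualities per asset, so the exponent grows multiplicatively, which is why serial generation becomes infeasible so quickly.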


@ -4,36 +4,36 @@ as attack trees. This Chapter reviews a few of their efforts as they relate to t
\TUsection{Introduction to Graph Generation}
Graph generation as a broad topic has many challenges that prevent implementations from realizing the performance expected from a theoretical standpoint.
In actuality, graph generation often achieves only a very low percentage of its expected performance \cite{berry_graph_2007}. A few reasons
for this occurrence lie in the underlying mechanisms of graph generation. The generation is predominantly memory based (as opposed to based on processor speed),
where performance is tied to memory access time, the complexity of data dependency, and coarseness of parallelism \cite{berry_graph_2007}, \cite{zhang_boosting_2017},
\cite{ainsworth_graph_2016}. Graphs consume large amounts of memory through their
nodes and edges, graph data structures suffer from poor cache locality, and memory latency from the processor-memory gap all slow the generation process dramatically
\cite{berry_graph_2007}, \cite{ainsworth_graph_2016}. Section \ref{sec:gen_improv} discusses a few works that can be used to improve the graph generation process, and Section
\ref{sec:related_works} discusses a few works specific to attack graph generation improvements.
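The locality problem can be illustrated with a common mitigation (a generic technique sketched here, not the internal layout of any cited system): a compressed sparse row (CSR) layout packs all adjacency data into two contiguous arrays, so a node's neighbors are scanned sequentially instead of chased through pointers.

```python
def to_csr(num_nodes, edges):
    """Build a CSR adjacency: targets[offsets[i]:offsets[i+1]] holds
    node i's neighbors, keeping all edges contiguous in memory."""
    counts = [0] * num_nodes
    for u, _ in edges:
        counts[u] += 1
    offsets = [0] * (num_nodes + 1)
    for i in range(num_nodes):
        offsets[i + 1] = offsets[i] + counts[i]
    targets = [0] * len(edges)
    next_slot = offsets[:-1].copy()  # next free slot per source node
    for u, v in edges:
        targets[next_slot[u]] = v
        next_slot[u] += 1
    return offsets, targets

def neighbours(offsets, targets, u):
    """Sequential (cache-friendly) scan of node u's adjacency."""
    return targets[offsets[u]:offsets[u + 1]]
```

Compared with per-node pointer structures, the two flat arrays trade mutability for the sequential access patterns that graph traversals otherwise lack.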
\TUsection{Graph Generation Improvements} \label{sec:gen_improv}
For architectural and hardware techniques for generation improvement, the authors of \cite{ainsworth_graph_2016} discuss the high cache miss rate, and how general prefetching
does not increase the prediction rate due to nonsequential graph structures and data-dependent access patterns. However, the authors continue to discuss that the generation
algorithm is known in advance, so explicit tuning of the hardware prefetcher to follow the traversal order pattern can lead to better performance. The authors were able to achieve
over 2x performance improvement of a breadth-first search approach with this method. Another hardware approach is to make use of accelerators. The authors of \cite{yao_efficient_2018}
present an approach for minimizing the slowdown caused by the underlying graph atomic functions. By using the atomic function patterns, the authors utilized pipeline stages where vertex
updates can be processed in parallel dynamically. Other works, such as those by the authors of \cite{zhang_boosting_2017} and \cite{dai_fpgp_2016}, leverage field-programmable gate arrays
(FPGAs) for graph generation in the HPC space through various means. These include reducing memory strain by storing repeatedly accessed lists, results, and other data in the
on-chip block RAM, or even leveraging Hybrid Memory Cubes for optimizing parallel access.
From a data structure standpoint, the authors of \cite{arifuzzaman_fast_2015} describe the infeasibility of adjacency matrices in large-scale graphs, and this work and other works such as those
by the authors of \cite{yu_construction_2018} and \cite{liakos_memory-optimized_2016} discuss the appeal of distributing a graph representation across systems. The authors of
\cite{liakos_memory-optimized_2016} discuss the usage of distributed adjacency lists for assigning vertices to workers. The authors of \cite{liakos_memory-optimized_2016} and
\cite{balaji_graph_2016} present other techniques for minimizing communication costs by achieving high compression ratios while maintaining a low compression cost. The Boost Graph Library
and the Parallel Boost Graph Library both provide appealing features for working with graphs, with the latter library notably having interoperability with MPI, Graphviz, and METIS
\cite{noauthor_overview_nodate}, \cite{noauthor_boost_nodate}.
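The vertex-to-worker assignment behind a distributed adjacency list can be sketched as follows (a simplified single-process illustration; the modulo ownership rule and all names are assumptions, not the exact schemes of the cited works):

```python
def owner(vertex_id, num_workers):
    """Deterministically map a vertex to the worker storing its
    adjacency list."""
    return vertex_id % num_workers

def distribute(edges, num_workers):
    """Shard an edge list into per-worker adjacency lists keyed by
    source vertex."""
    shards = [{} for _ in range(num_workers)]
    for u, v in edges:
        shards[owner(u, num_workers)].setdefault(u, []).append(v)
    return shards
```

Cross-worker edges are then what drive the communication costs that the compression techniques above aim to reduce.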
\TUsection{Improvements Specific to Attack Graph Generation} \label{sec:related_works}
As a means of improving scalability of attack graphs, the authors of \cite{ou_scalable_2006} present a new representation scheme. Traditional attack graphs encode the entire network at each state,
but the representation presented by the authors uses logical statements to represent a portion of the network at each node. This is called a logical attack graph. This approach led to the reduction of the generation process
to quadratic time and reduced the number of nodes in the resulting graph to $\mathcal{O}({n}^2)$. However, this approach does require more analysis for identifying attack vectors. Another approach
presented by the authors of \cite{cook_scalable_2016} represents a description of systems and their qualities and topologies as a state, with a queue of unexplored states. This work was continued by the
authors of \cite{li_concurrency_2019} by implementing a hash table among other features. Each of these works demonstrates an improvement in scalability through refining the desirable information output.
Another approach for generation improvement is through parallelization. The authors of \cite{li_concurrency_2019} leverage OpenMP to parallelize the exploration of a FIFO queue. This parallelization also
includes the utilization of OpenMP's dynamic scheduling. In this approach, each thread receives a state to explore, where a critical section is employed to handle the atomic functions of merging new state


@ -1,18 +1,18 @@
\TUchapter{UTILITY EXTENSIONS TO THE RAGE ATTACK GRAPH GENERATOR}
\TUsection{Path Walking}
\par Due to the large-scale nature of attack graphs, analysis can become difficult and time-consuming. With some graphs reaching millions of states and edges,
analyzing the entire graph can be overwhelmingly complex. As a means of simplifying analysis, a potential strategy could be to consider only small subsets of
the graph at a time, rather than feeding the entire graph into an analysis algorithm. To aid in this effort, a path walking feature was implemented as a
separate program, and has two primary modes of usage. The goal of this feature is to output a subset of the graph that includes all possible paths from the
root state to a designated state. The first mode is a manual mode, where a user can input the desired state to walk to, and the program will output a separate
graph of all possible paths to the specified state. The second mode is an automatic mode, where the program will output separate subgraphs to all states in
the graph that have qualities of $``compliance$\_$vio = true"$, $``compliance$\_$vios > 0"$, or any other quality that can be specified by the user. The automatic mode can produce multiple subgraphs simultaneously if multiple states contain the quality being examined, and these subgraphs can then be
separately fed into an analysis program.
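One way to realize such a subset extraction (a sketch of the underlying idea, not necessarily this program's implementation) is to keep exactly the states that are both reachable from the root and able to reach the designated state, then retain only the edges between them:

```python
from collections import deque

def reachable(adj, start):
    """Breadth-first reachability set from start over adjacency adj."""
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def walk_paths(edges, root, target):
    """Return the edge subset lying on some root-to-target path."""
    forward, backward = {}, {}
    for u, v in edges:
        forward.setdefault(u, []).append(v)
        backward.setdefault(v, []).append(u)
    keep = reachable(forward, root) & reachable(backward, target)
    return [(u, v) for u, v in edges if u in keep and v in keep]
```

The intersection of the two reachability sets prunes every state that cannot lie on a root-to-target path, which is what shrinks the graph before analysis.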
Figure \ref{fig:PW} demonstrates an output of the Path Walking feature when walking to state 14. In this figure, the primary observable feature is that the
graph was reduced from 16 states to 6 states, and from 32 edges to 12 edges. The reduction from the original graph to the subset varies with the overall connectivity
of the original attack graph.
\begin{figure}[htp]
\includegraphics[width=\linewidth]{"./Chapter3_img/PW.png"}
\vspace{.2truein} \centerline{}
@ -21,9 +21,9 @@ of the original Attack Graph, but the reduction can aid in simplifying the analy
\end{figure}
\TUsection{Compound Operators} \label{sec:compops}
Many of the graphs previously generated by RAGE comprise states with features that can be fully enumerated. In many of the generated graphs, there is an
established set of qualities that will be used, with an established set of values. These typically have included $``compliance$\_$vio=true/false"$,
$``root=true/false"$, or other general $``true/false"$ values or $``version=X"$ qualities. To expand on the types and complexities of graphs that can be
generated, compound operators have been added to RAGE. When updating a state, rather than setting a quality to a specific value, the previous value can now
be modified by an amount specified through standard compound operators such as $\mathrel{+}=$, $\mathrel{-}=$, $\mathrel{*}=$, or $\mathrel{/}=$.
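The compound-operator update can be sketched as a small dispatch table (illustrative only; the dictionary-based state and the quality name are assumptions, not RAGE's internal representation):

```python
import operator

# standard compound operators mapped to their binary functions
COMPOUND_OPS = {
    "+=": operator.add,
    "-=": operator.sub,
    "*=": operator.mul,
    "/=": operator.truediv,
}

def apply_update(state, quality, op, amount):
    """Apply a compound-operator update to one quality of a state,
    returning a new state dict (the original state is left intact)."""
    new_state = dict(state)
    new_state[quality] = COMPOUND_OPS[op](state[quality], amount)
    return new_state
```

Setting a quality to a fixed value remains a plain assignment; the compound path only changes how the new value is derived from the previous one.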
@ -35,7 +35,7 @@ and no encoding scheme changes were necessary. This also allows for additional c
A few changes were necessary to allow for the addition of compound operators. Before the generation of an attack graph begins, all values are stored in a hash
table. For previous graphs generated by RAGE, this was not a problem, since all values could be fully enumerated and all possible values were known. When
using compound operators, however, not all values can be fully known. Approximating, prior to generation, which exploits will be applicable and what the absolute
minimum or maximum values will be is a difficult task, and not all values can be enumerated and stored into the hash table. As a result, real-time
updates to the hash table needed to be added to the generator. The original key-value scheme for hash tables relied on utilizing the size of the hash table for
values. Since the order in which updates happen may not always remain consistent (which is especially true in distributed computing environments), it is possible
for states to receive different hash values with the original hashing scheme. To prevent this, the hashing scheme was adjusted so that the new value of the
@ -47,7 +47,7 @@ Other changes involved updating classes (namely the Quality, EncodedQuality, Par
\TUsection{Color Coding}
As a visual aid for analysis purposes, color coding was another feature implemented as a postprocessing tool for RAGE. When viewing the output graph of RAGE, all states are
visibly identical in appearance apart from number of edges, edge IDs, and state IDs. To allow for visual differentiation, color coding can be enabled in the run script.
Color coding currently functions by working through the graph output text file, but it can be extended to read directly from Postgres instead. The feature scans through the
output file, and locates states that have $``compliance$\_$vios = X"$ (where \textit{X} is a number greater than 0), or $``compliance$\_$vio = true"$. For states that meet these
properties, the color coding feature will add a color to the Graphviz DOT file through the $[color=COL]$ attribute for the given node, where \textit{COL} is assigned based on severity.
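That postprocessing step can be sketched as follows (the severity palette below is a hypothetical example, not the tool's exact color assignment):

```python
# hypothetical palette, lowest to highest severity
SEVERITY_COLORS = ["yellow", "orange", "red"]

def color_for(vios):
    """Map a violation count (>= 1) to a color, saturating at the top."""
    return SEVERITY_COLORS[min(vios, len(SEVERITY_COLORS)) - 1]

def dot_node(node_id, vios):
    """Emit a Graphviz DOT node statement, colored only when the state
    has compliance violations."""
    if vios > 0:
        return f"{node_id} [color={color_for(vios)}];"
    return f"{node_id};"
```

Running the emitter over every state in the output file yields DOT statements that Graphviz renders with the severity coloring described above.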


@ -64,8 +64,12 @@
\newlabel{fig:subg_mod}{{5.12}{47}}
\@writefile{lof}{\contentsline {figure}{\numberline {5.13}{\ignorespaces Duplicate States Explored vs Actual Number of States for the 1-4 Service Tests\relax }}{48}{}\protected@file@percent }
\newlabel{fig:subg_dup}{{5.13}{48}}
\@writefile{lof}{\contentsline {figure}{\numberline {5.14}{\ignorespaces Speedup and Efficiency of MPI Subgraphing when using a DHT\relax }}{50}{}\protected@file@percent }
\newlabel{fig:subg_DHT_Spd}{{5.14}{50}}
\@writefile{lof}{\contentsline {figure}{\numberline {5.15}{\ignorespaces Runtime of MPI Subgraphing when using a DHT vs not using a DHT\relax }}{51}{}\protected@file@percent }
\newlabel{fig:subg_DHT_base}{{5.15}{51}}
\@setckpt{Chapter5}{
\setcounter{page}{52}
\setcounter{equation}{0}
\setcounter{enumi}{4}
\setcounter{enumii}{0}
@ -80,7 +84,7 @@
\setcounter{subsubsection}{0}
\setcounter{paragraph}{0}
\setcounter{subparagraph}{0}
\setcounter{figure}{15}
\setcounter{table}{2}
\setcounter{caption@flags}{2}
\setcounter{continuedfloat}{0}


@ -243,4 +243,23 @@ As noted from the Figures, the performance from this approach appears quite poor
\vspace{.2truein} \centerline{}
\caption{Duplicate States Explored vs Actual Number of States for the 1-4 Service Tests}
\label{fig:subg_dup}
\end{figure}
To minimize the duplicate work performed, a second approach using a distributed hash table (DHT) was attempted. With a DHT, each compute node would check that it was not duplicating work. This would limit the work needed by the root node, but each worker node would need to search the DHT. Using a DHT would increase the communication overhead, but if that overhead was less than the time taken for duplicate work, or was small enough to still process the frontier at a greater rate than the serial approach, then the distributed hash table would be considered advantageous. Rather than devising a unique strategy for a distributed hash table, this work made use of the Berkeley Container Library (BCL), which is open-source and provides distributed data structures with easy-to-use interfaces. Since BCL is a header-only library, it required minimal code alterations and could largely be dropped into the existing system. Testing was repeated with a setup identical to the approach without BCL. The results in terms of speedup and efficiency are seen in Figure \ref{fig:subg_DHT_Spd}. Results in terms of runtime between the DHT approach and the base approach are seen in Figure \ref{fig:subg_DHT_base}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{"./Chapter5_img/DHT_Spd.png"}
\includegraphics[width=\linewidth]{"./Chapter5_img/DHT_Eff.png"}
\caption{Speedup and Efficiency of MPI Subgraphing when using a DHT}
\label{fig:subg_DHT_Spd}
\end{figure}
\begin{figure}[htp]
\includegraphics[width=\linewidth]{"./Chapter5_img/DHT_noDHT.png"}
\vspace{.2truein} \centerline{}
\caption{Runtime of MPI Subgraphing when using a DHT vs not using a DHT}
\label{fig:subg_DHT_base}
\end{figure}
Implementing the DHT did prevent duplicate work, but the communication cost from repeated DHT queries by each worker node was far greater than that of the serial approach, and also greater than that of the first MPI subgraphing approach without the DHT. As a result, the MPI subgraphing approach is not viable as it stands. Future improvements or an entire rework will be needed, and this is discussed further in Section \ref{sec:FW}.
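The check-before-explore pattern that a DHT provides can be mimicked in a single-process stand-in (BCL itself is a C++/MPI library; the sharded set below only models the ownership logic, and every name here is illustrative):

```python
class ShardedStateTable:
    """Single-process stand-in for a distributed hash table: states are
    hashed to shards (one per worker), and a state may be explored only
    if its owning shard has not recorded it yet."""

    def __init__(self, num_shards):
        self.shards = [set() for _ in range(num_shards)]

    def claim(self, state):
        # returns True exactly once per state, mimicking the DHT check
        shard = self.shards[hash(state) % len(self.shards)]
        if state in shard:
            return False
        shard.add(state)
        return True
```

Each `claim` call stands in for one DHT query; the round trip that every frontier state incurs is exactly the communication overhead observed above.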

BIN Chapter5_img/DHT_Eff.png (new binary file, 96 KiB, not shown)

BIN Chapter5_img/DHT_Spd.png (new binary file, 98 KiB, not shown)

BIN Chapter5_img/DHT_noDHT.png (new binary file, 320 KiB, not shown)


@ -1,9 +1,9 @@
\relax
\@writefile{toc}{\contentsline {chapter}{\numberline {CHAPTER 6: }{\bf \uppercase {CONCLUSIONS AND FUTURE WORKS}}}{52}{}\protected@file@percent }
\@writefile{toc}{\contentsline {section}{\numberline {6.1}\bf Future Work}{52}{}\protected@file@percent }
\newlabel{sec:FW}{{6.1}{52}}
\@setckpt{Chapter6}{
\setcounter{page}{53}
\setcounter{equation}{0}
\setcounter{enumi}{4}
\setcounter{enumii}{0}


@ -31,9 +31,9 @@
\bibcite{cook_rage_2018}{9}
\bibcite{berry_graph_2007}{10}
\@writefile{toc}{{\hfill \ }}
\@writefile{toc}{\contentsline {section}{\hspace {-\parindent }NOMENCLATURE}{53}{}\protected@file@percent }
\@writefile{toc}{\addvspace {10pt}}
\@writefile{toc}{\contentsline {section}{\hspace {-\parindent }BIBLIOGRAPHY}{53}{}\protected@file@percent }
\@writefile{toc}{{\hfill \ }}
\bibcite{ainsworth_graph_2016}{11}
\bibcite{yao_efficient_2018}{12}
@ -52,4 +52,4 @@
\bibcite{CVE-2019-10747}{25}
\bibcite{louthan_hybrid_2011}{26}
\bibstyle{ieeetr}
\gdef \@abspage@last{65}


@ -19,3 +19,5 @@
\contentsline {figure}{\numberline {5.11}{\ignorespaces First iteration results of MPI Subgraphing in terms of Speedup and Efficiency\relax }}{45}{}%
\contentsline {figure}{\numberline {5.12}{\ignorespaces Modified Subgraphing Example Graph with Two New Edges\relax }}{47}{}%
\contentsline {figure}{\numberline {5.13}{\ignorespaces Duplicate States Explored vs Actual Number of States for the 1-4 Service Tests\relax }}{48}{}%
\contentsline {figure}{\numberline {5.14}{\ignorespaces Speedup and Efficiency of MPI Subgraphing when using a DHT\relax }}{50}{}%
\contentsline {figure}{\numberline {5.15}{\ignorespaces Runtime of MPI Subgraphing when using a DHT vs not using a DHT\relax }}{51}{}%


@ -1,4 +1,4 @@
This is pdfTeX, Version 3.141592653-2.6-1.40.23 (TeX Live 2021/Arch Linux) (preloaded format=pdflatex 2022.3.21) 27 MAR 2022 13:31
entering extended mode
restricted \write18 enabled.
%&-line parsing enabled.
@ -446,29 +446,47 @@ ng>]
<./Chapter5_img/dup.drawio.png, id=225, 824.07875pt x 743.77875pt>
File: ./Chapter5_img/dup.drawio.png Graphic file (type png)
<use ./Chapter5_img/dup.drawio.png>
Package pdftex.def Info: ./Chapter5_img/dup.drawio.png used on input line 235.
(pdftex.def) Requested size: 469.75499pt x 423.98099pt.
<./Chapter5_img/Dup_DHT.png, id=226, 796.065pt x 483.99pt>
File: ./Chapter5_img/Dup_DHT.png Graphic file (type png)
<use ./Chapter5_img/Dup_DHT.png>
Package pdftex.def Info: ./Chapter5_img/Dup_DHT.png used on input line 242.
(pdftex.def) Requested size: 469.75499pt x 285.59593pt.
[46] [47 <./Chapter5_img/dup.drawio.png>] [48 <./Chapter5_img/Dup_DHT.png>]
<./Chapter5_img/DHT_Spd.png, id=238, 421.575pt x 233.235pt>
File: ./Chapter5_img/DHT_Spd.png Graphic file (type png)
<use ./Chapter5_img/DHT_Spd.png>
Package pdftex.def Info: ./Chapter5_img/DHT_Spd.png used on input line 252.
(pdftex.def) Requested size: 469.75499pt x 259.89395pt.
<./Chapter5_img/DHT_Eff.png, id=239, 422.889pt x 233.235pt>
File: ./Chapter5_img/DHT_Eff.png Graphic file (type png)
<use ./Chapter5_img/DHT_Eff.png>
Package pdftex.def Info: ./Chapter5_img/DHT_Eff.png used on input line 253.
(pdftex.def) Requested size: 469.75499pt x 259.08965pt.
<./Chapter5_img/DHT_noDHT.png, id=240, 806.577pt x 496.692pt>
File: ./Chapter5_img/DHT_noDHT.png Graphic file (type png)
<use ./Chapter5_img/DHT_noDHT.png>
Package pdftex.def Info: ./Chapter5_img/DHT_noDHT.png used on input line 259.
(pdftex.def) Requested size: 469.75499pt x 289.27902pt.
) [49] [50 <./Chapter5_img/DHT_Spd.png> <./Chapter5_img/DHT_Eff.png>] [51 <./Ch
apter5_img/DHT_noDHT.png>]
\openout2 = `Chapter6.aux'.
(./Chapter6.tex
CHAPTER 6.
) [52
] (./Schrick-Noah_MS-Thesis.bbl [53
] [54]) [55]
(./Schrick-Noah_MS-Thesis.aux (./Chapter1.aux) (./Chapter2.aux) (./Chapter3.aux
) (./Chapter4.aux) (./Chapter5.aux) (./Chapter6.aux))
@ -485,13 +503,13 @@ LaTeX Warning: There were undefined references.
### semi simple group (level 1) entered at line 52 (\begingroup)
### bottom level
Here is how much of TeX's memory you used:
3746 strings out of 478276
71206 string characters out of 5853013
362050 words of memory out of 5000000
21877 multiletter control sequences out of 15000+600000
473155 words of font info for 41 fonts, out of 8000000 for 9000
1141 hyphenation exceptions out of 8191
67i,8n,77p,2199b,1424s stack positions out of 5000i,500n,10000p,200000b,80000s
{/usr/share/texmf-dist/fonts/enc/dvips/cm-super/cm-super-ts1.en
c}</usr/share/texmf-dist/fonts/type1/public/amsfonts/cm/cmbx12.pfb></usr/share/
texmf-dist/fonts/type1/public/amsfonts/cm/cmmi12.pfb></usr/share/texmf-dist/fon
@ -500,10 +518,10 @@ ts/type1/public/amsfonts/cm/cmr12.pfb></usr/share/texmf-dist/fonts/type1/public
y10.pfb></usr/share/texmf-dist/fonts/type1/public/amsfonts/cm/cmti12.pfb></usr/
share/texmf-dist/fonts/type1/public/amsfonts/cm/cmtt12.pfb></usr/share/texmf-di
st/fonts/type1/public/cm-super/sfrm1200.pfb>
Output written on Schrick-Noah_MS-Thesis.pdf (65 pages, 2059503 bytes).
PDF statistics:
304 PDF objects out of 1000 (max. 8388607)
171 compressed objects within 2 object streams
0 named destinations out of 1000 (max. 500000)
131 words of extra memory for PDF output out of 10000 (max. 10000000)



@ -145,7 +145,7 @@
% The number of members on the committee
% including the chairperson or the co-advisors
%
\committeesize=3
\advisor{Peter J. Hawrylak} % name of chairperson or co-advisor
\secondmember{John Hale} % name of second member or co-advisor
@ -193,9 +193,8 @@ letter.
%
% Place the text of your acknowledgements page here
%
I would like to acknowledge and give my sincerest gratitude to my wife for all of her continued support and patience through this process. I would also like to extend this appreciation to all my family and friends, who have provided constant encouragement and motivation during my work.
I would also like to greatly thank my advisor, Dr. Hawrylak, for his guidance, support, and motivation during my time in this program. He has played a remarkably vital role not only in this thesis, but also through the advice and encouragement offered throughout my graduate career. Further thanks to the members of my committee, Dr. Papa and Dr. Hale, who have also provided their expertise and support throughout this journey.
\afteracknowledgementsp


@ -67,10 +67,10 @@
\contentsline {subsubsection}{MPI Tags}{43}{}%
\contentsline {subsection}{\numberline {5.4.3}\it Performance Expectations and Use Cases}{43}{}%
\contentsline {subsection}{\numberline {5.4.4}\it Results}{44}{}%
\contentsline {chapter}{\numberline {CHAPTER 6: }{\bf \uppercase {CONCLUSIONS AND FUTURE WORKS}}}{52}{}%
\contentsline {section}{\numberline {6.1}\bf Future Work}{52}{}%
{\hfill \ }
\contentsline {section}{\hspace {-\parindent }NOMENCLATURE}{53}{}%
\addvspace {10pt}
\contentsline {section}{\hspace {-\parindent }BIBLIOGRAPHY}{53}{}%
{\hfill \ }