<HTML>
<!--
Copyright (c) Jeremy Siek 2000
Copyright (c) Daniel Trebbien 2010
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt)
-->
<Head>
<Title>Boost Graph Library: Graph Theory Review</Title>
</Head>
<BODY BGCOLOR="#ffffff" LINK="#0000ee" TEXT="#000000" VLINK="#551a8b"
ALINK="#ff0000">
<IMG SRC="../../../boost.png"
ALT="C++ Boost" width="277" height="86">
<BR Clear>
<H1>Review of Elementary Graph Theory</H1>
<P>
This chapter is meant as a refresher on elementary graph theory. If
the reader has some previous acquaintance with graph algorithms, this
chapter should be enough to get started. If the reader has no
previous background in graph algorithms, we suggest a more thorough
introduction such as <a
href="https://mitpress.mit.edu/algorithms">Introduction to Algorithms</a>
by Cormen, Leiserson, and Rivest.
<P>
<H1>The Graph Abstraction</H1>
<P>
A graph is a mathematical abstraction that is useful for solving many
kinds of problems. Fundamentally, a graph consists of a set of
vertices and a set of edges, where an edge is something that connects
two vertices in the graph. More precisely, a <a
name="def:graph"><I>graph</I></a> is a pair <i>(V,E)</i>, where
<i>V</i> is a finite set and <i>E</i> is a binary relation on
<i>V</i>. <i>V</i> is called a <a name="def:vertex-set"><I>vertex
set</I></a> whose elements are called <I>vertices</I>. <i>E</i> is a
collection of edges, where an <a name="def:edge"><I>edge</I></a> is a
pair <i>(u,v)</i> with <i>u,v</i> in <i>V</i>. In a <a
name="def:directed-graph"><I>directed graph</I></a>, edges are ordered
pairs, connecting a <a name="def:source"><I>source</I></a> vertex to a
<a name="def:target"><I>target</I></a> vertex. In an <a
name="def:undirected-graph"><I>undirected graph</I></a>, edges are
unordered pairs that connect the two vertices in both directions;
hence in an undirected graph <i>(u,v)</i> and <i>(v,u)</i> are two
ways of writing the same edge.
<P>
This definition of a graph is vague in certain respects; it does not
say what a vertex or edge represents. They could be cities with
connecting roads, or web pages with hyperlinks. These details are left
out of the definition of a graph for an important reason: they are not
a necessary part of the graph <I>abstraction</I>. By leaving out the
details we can construct a theory that is reusable and can help us
solve many different kinds of problems.
<P>
Back to the definition: a graph is a set of vertices and edges. For
purposes of demonstration, let us consider a graph where we have
labeled the vertices with letters, and we write an edge simply as a
pair of letters. Now we can write down an example of a directed graph
as follows:
<P>
<BR>
<DIV ALIGN="center">
<table><tr><td><tt>
V = {v, b, x, z, a, y } <br>
E = { (b,y), (b,y), (y,v), (z,a), (x,x), (b,x), (x,v), (a,z) } <br>
G = (V, E)
</tt></td></tr></table>
</DIV>
<BR CLEAR="ALL"><P></P>
<P>
<A HREF="#fig:directed-graph">Figure 1</A> gives a pictorial view of
this graph. The edge <i>(x,x)</i> is called a <a
name="def:self-loop"><I>self-loop</I></a>. Edges <i>(b,y)</i> and
<i>(b,y)</i> are <I>parallel edges</I>, which are allowed in a <a
name="def:multigraph"><I>multigraph</I></a> (but are normally not
allowed in a directed or undirected graph).
<P>
<P></P>
<DIV ALIGN="center"><A NAME="fig:directed-graph"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 1:</STRONG>
Example of a directed graph.</CAPTION>
<TR><TD><IMG SRC="./figs/digraph.gif" width="124" height="163"></TD></TR>
</TABLE>
</DIV><P></P>
<P>
Next we have a similar graph, though this time it is undirected. <A
HREF="#fig:undirected-graph">Figure 2</A> gives the pictorial view.
Self-loops are not allowed in undirected graphs. This graph is the <a
name="def:undirected-version"><I>undirected version</i></a> of the
previous graph (minus the parallel edge <i>(b,y)</i>), meaning it has
the same vertices and the same edges with their directions removed.
Also the self-loop has been removed, and edges such as <i>(a,z)</i>
and <i>(z,a)</i> are collapsed into one edge. One can go the other
way, and make a <a name="def:directed-version"><I>directed
version</I></a> of an undirected graph by replacing each edge with two
edges, one pointing in each direction.
<P>
<BR>
<DIV ALIGN="CENTER">
<table><tr><td><tt>
V = {v, b, x, z, a, y }<br>
E = { (b,y), (y,v), (z,a), (b,x), (x,v) }<br>
G = (V, E)
</tt></td></tr></table>
</DIV>
<BR CLEAR="ALL"><P></P>
<P>
<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:undirected-graph"></A><A NAME="1524"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 2:</STRONG>
Example of an undirected graph.</CAPTION>
<TR><TD><IMG SRC="./figs/undigraph.gif" width="103" height="163"></TD></TR>
</TABLE>
</DIV><P></P>
<P>
Now for some more graph terminology. If some edge <i>(u,v)</i> is in a
graph, then vertex <i>v</i> is <a
name="def:adjacent"><I>adjacent</I></a> to vertex <i>u</i>. In a
directed graph, edge <i>(u,v)</i> is an <a
name="def:out-edge"><I>out-edge</I></a> of vertex <i>u</i> and an <a
name="def:in-edge"><I>in-edge</I></a> of vertex <i>v</i>. In an
undirected graph, edge <i>(u,v)</i> is <a
name="def:incident-on"><I>incident on</I></a> vertices <i>u</i> and
<i>v</i>.
<P>
In <A HREF="#fig:directed-graph">Figure 1</A>,
vertex <i>y</i> is adjacent to vertex <i>b</i> (but <i>b</i> is not
adjacent to <i>y</i>). The edge <i>(b,y)</i> is an out-edge of
<i>b</i> and an in-edge of <i>y</i>. In <A
HREF="#fig:undirected-graph">Figure 2</A>,
<i>y</i> is adjacent to <i>b</i> and vice versa. The edge
<i>(y,b)</i> is incident on vertices <i>y</i> and <i>b</i>.
<P>
In a directed graph, the number of out-edges of a vertex is its <a
name="def:out-degree"><I>out-degree</I></a> and the number of in-edges
is its <a name="def:in-degree"><I>in-degree</I></a>. For an
undirected graph, the number of edges incident to a vertex is its <a
name="def:degree"><I>degree</I></a>. In <A
HREF="#fig:directed-graph">Figure 1</A>, vertex <i>b</i> has an
out-degree of 3 and an in-degree of zero. In <A
HREF="#fig:undirected-graph">Figure 2</A>, vertex <i>b</i> simply has
a degree of 2.
<P>
Now a <a name="def:path"><i>path</i></a> is a sequence of edges in a
graph such that the target vertex of each edge is the source vertex of
the next edge in the sequence. If there is a path starting at vertex
<i>u</i> and ending at vertex <i>v</i>, we say that <i>v</i> is <a
name="def:reachable"><i>reachable</i></a> from <i>u</i>. A path is <a
name="def:simple-path"><I>simple</I></a> if none of the vertices in
the sequence are repeated. The path &lt;(b,x), (x,v)&gt; is simple,
while the path &lt;(a,z), (z,a)&gt; is not. Also, the path &lt;(a,z),
(z,a)&gt; is called a <a name="def:cycle"><I>cycle</I></a> because the
first and last vertex in the path are the same. A graph with no cycles
is <a name="def:acyclic"><I>acyclic</I></a>.
<P>
A <a name="def:planar-graph"><I>planar graph</I></a> is a graph that
can be drawn on a plane without any of the edges crossing over each
other. Such a drawing is called a <I>plane graph</I>. A <a
name="def:face"><I>face</I></a> of a plane graph is a connected region
of the plane surrounded by edges. An important property of planar
graphs is that the number of faces, edges, and vertices are related
through Euler's formula: <i>|F| - |E| + |V| = 2</i>. This means that a
simple planar graph has at most <i>O(|V|)</i> edges.
<P>
<H1>Graph Data Structures</H1>
<P>
The primary property of a graph to consider when deciding which data
structure to use is <I>sparsity</I>, the number of edges relative to
the number of vertices in the graph. A graph where <i>E</i> is close
to <i>V<sup>2</sup></i> is a <I>dense</I> graph, whereas a graph
where <i>E = alpha V</i> and <i>alpha</i> is much smaller than
<i>V</i> is a <I>sparse</I> graph. For dense graphs, the
<I>adjacency-matrix representation</I> is usually the best choice,
whereas for sparse graphs the <I>adjacency-list representation</I> is
a better choice. The <I>edge-list representation</I> is also a
space-efficient choice for sparse graphs that is appropriate in some
situations.
<P>
<H2>Adjacency Matrix Representation</H2>
<P>
An adjacency-matrix representation of a graph is a two-dimensional <i>V
x V</i> array. Each element in the array <i>a<sub>uv</sub></i> stores
a Boolean value saying whether the edge <i>(u,v)</i> is in the graph.
<A HREF="#fig:adj-matrix">Figure 3</A> depicts an adjacency matrix for
the graph in <A HREF="#fig:directed-graph">Figure 1</A> (minus the
parallel edge <i>(b,y)</i>). The amount of space required to store
an adjacency matrix is <i>O(V<sup>2</sup>)</i>. Any edge can be
accessed, added, or removed in <i>O(1)</i> time. Adding or removing a
vertex requires reallocating and copying the whole graph, an
<i>O(V<sup>2</sup>)</i> operation. The <a
href="./adjacency_matrix.html"><tt>adjacency_matrix</tt></a> class
implements the BGL graph interface in terms of the adjacency-matrix
data structure.
<P>
<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:adj-matrix"></A><A NAME="1701"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 3:</STRONG>
The Adjacency Matrix Graph Representation.</CAPTION>
<TR><TD><IMG SRC="./figs/adj_matrix.gif" width="136" height="135"></TD></TR>
</TABLE>
</DIV><P></P>
<P>
<H2><A NAME="sec:adjacency-list-representation"></A>
Adjacency List Representation
</H2>
<P>
An adjacency-list representation of a graph stores an out-edge
sequence for each vertex. For sparse graphs this saves space, since
only <i>O(V + E)</i> memory is required. In addition, the out-edges
for each vertex can be accessed more efficiently. Edge insertion is
<i>O(1)</i>, though accessing any given edge is <i>O(alpha)</i>, where
<i>alpha</i> is the sparsity factor of the graph (which is equal to
the maximum number of out-edges for any vertex in the graph). <A
HREF="#fig:adj-list">Figure 4</A> depicts an
adjacency-list representation of the graph in <A
HREF="#fig:directed-graph">Figure 1</A>. The
<a href="./adjacency_list.html"><TT>adjacency_list</TT></a> class is
an implementation of the adjacency-list representation.
<P>
<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:adj-list"></A><A NAME="1713"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 4:</STRONG>
The Adjacency List Graph Representation.</CAPTION>
<TR><TD><IMG SRC="./figs/adj_list.gif" width="108" height="122"></TD></TR>
</TABLE>
</DIV><P></P>
<P>
<H2>Edge List Representation</H2>
<P>
An edge-list representation of a graph is simply a sequence of edges,
where each edge is represented as a pair of vertex IDs. The memory
required is only <i>O(E)</i>. Edge insertion is typically <i>O(1)</i>,
though accessing a particular edge is <i>O(E)</i> (not efficient). <A
HREF="#fig:edge-list">Figure 5</A> shows an
edge-list representation of the graph in <A
HREF="#fig:directed-graph">Figure 1</A>. The
<a href="./edge_list.html"><TT>edge_list</TT></a> adaptor class can be
used to create implementations of the edge-list representation.
<P>
<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:edge-list"></A><A NAME="1724"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 5:</STRONG>
The Edge List Graph Representation.</CAPTION>
<TR><TD><IMG SRC="./figs/edge_list.gif" width="322" height="22"></TD></TR>
</TABLE>
</DIV><P></P>
<P>
<H1>Graph Algorithms</H1>
<P>
<H2>Graph Search Algorithms</H2>
<P>
<I>Tree edges</I> are edges in the search tree (or forest) constructed
(implicitly or explicitly) by running a graph search algorithm over a
graph. An edge <i>(u,v)</i> is a tree edge if <i>v</i> was first
discovered while exploring (corresponding to the <a
href="./visitor_concepts.html">visitor</a> <TT>explore()</TT> method)
edge <i>(u,v)</i>. <I>Back edges</I> connect vertices to their
ancestors in a search tree. So for a back edge <i>(u,v)</i>, the vertex
<i>v</i> must be an ancestor of vertex <i>u</i>. Self-loops are
considered to be back edges. <I>Forward edges</I> are non-tree edges
<i>(u,v)</i> that connect a vertex <i>u</i> to a descendant <i>v</i>
in a search tree. <I>Cross edges</I> are edges that do not fall into
the above three categories.
<P>
<H2><A NAME="sec:bfs-algorithm"></A>
Breadth-First Search
</H2>
<P>
Breadth-first search is a traversal through a graph that touches all
of the vertices reachable from a particular source vertex. In
addition, the order of the traversal is such that the algorithm will
explore all of the neighbors of a vertex before proceeding on to the
neighbors of its neighbors. One way to think of breadth-first search
is that it expands like a wave emanating from a stone dropped into a
pool of water. Vertices in the same ``wave'' are the same distance
from the source vertex. A vertex is <I>discovered</I> the first time
it is encountered by the algorithm. A vertex is <I>finished</I> after
all of its neighbors are explored. Here's an example to help make this
clear. A graph is shown in <A
HREF="#fig:bfs-example">Figure 6</A> and the
BFS discovery and finish order for the vertices is shown below.
<P>
<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:bfs-example"></A><A NAME="1826"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 6:</STRONG>
Breadth-first search spreading through a graph.</CAPTION>
<TR><TD><IMG SRC="./figs/bfs_example.gif" width="242" height="143"></TD></TR>
</TABLE>
</DIV><P></P>
<P>
<PRE>
order of discovery: s r w v t x u y
order of finish: s r w v t x u y
</PRE>
<P>
We start at vertex <i>s</i>, and first visit <i>r</i> and <i>w</i> (the two
neighbors of <i>s</i>). Once both neighbors of <i>s</i> are visited, we visit the
neighbor of <i>r</i> (vertex <i>v</i>), then the neighbors of <i>w</i>
(the discovery order between <i>r</i> and <i>w</i> does not matter),
which are <i>t</i> and <i>x</i>. Finally we visit the neighbors of
<i>t</i> and <i>x</i>, which are <i>u</i> and <i>y</i>.
<P>
For the algorithm to keep track of where it is in the graph, and which
vertex to visit next, BFS needs to color the vertices (see the section
on <a href="./using_property_maps.html">Property Maps</a>
for more details about attaching properties to graphs). The place to
put the color can either be inside the graph, or it can be passed into
the algorithm as an argument.
<P>
<H2><A NAME="sec:dfs-algorithm"></A>
Depth-First Search
</H2>
<P>
A depth-first search (DFS) visits all the vertices in a graph. When
choosing which edge to explore next, this algorithm always chooses to
go ``deeper'' into the graph. That is, it will pick the next adjacent
unvisited vertex until reaching a vertex that has no unvisited
adjacent vertices. The algorithm will then backtrack to the previous
vertex and continue along any as-yet unexplored edges from that
vertex. After DFS has visited all the reachable vertices from a
particular source vertex, it chooses one of the remaining undiscovered
vertices and continues the search. This process creates a set of
<I>depth-first trees</I> which together form the <I>depth-first
forest</I>. A depth-first search categorizes the edges in the graph
into three categories: tree edges, back edges, and forward or
cross edges (it does not specify which). There are typically many
valid depth-first forests for a given graph, and therefore many
different (and equally valid) ways to categorize the edges.
<P>
One interesting property of depth-first search is that the discover
and finish times for each vertex form a parenthesis structure. If we
use an open parenthesis when a vertex is discovered, and a
close parenthesis when a vertex is finished, then the result is a
properly nested set of parentheses. <A
HREF="#fig:dfs-example">Figure 7</A> shows
DFS applied to an undirected graph, with the edges labeled in the
order they were explored. Below we list the vertices of the graph
ordered by discover and finish time, as well as show the parenthesis
structure. DFS is used as the kernel for several other graph
algorithms, including topological sort and two of the connected
component algorithms. It can also be used to detect cycles (see the <A
HREF="file_dependency_example.html#sec:cycles">Cyclic Dependencies</a>
section of the File Dependency Example).
<P>
<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:dfs-example"></A><A NAME="1841"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 7:</STRONG>
Depth-first search on an undirected graph.</CAPTION>
<TR><TD><IMG SRC="./figs/dfs.gif" width="141" height="204"></TD></TR>
</TABLE>
</DIV><P></P>
<P>
<PRE>
order of discovery: a b e d c f g h i
order of finish: d f c e b a i h g
parenthesis: (a (b (e (d d) (c (f f) c) e) b) a) (g (h (i i) h) g)
</PRE>
<P>
<H2><a name="sec:minimum-spanning-tree">Minimum Spanning Tree Problem</a></H2>
<P>
The <I>minimum-spanning-tree problem</I> is defined as follows: find
an acyclic subset <i>T</i> of <i>E</i> that connects all of the vertices
in the graph and whose total weight is minimized, where the
total weight is given by
<P></P>
<DIV ALIGN="left">
<i>w(T)</i> = sum of <i>w(u,v)</i> over all <i>(u,v)</i> in <i>T</i>,
where <i>w(u,v)</i> is the weight on the edge <i>(u,v)</i>
</DIV>
<BR CLEAR="ALL">
<P></P>
<i>T</i> is called the <I>minimum spanning tree</I>.
<!--
<P>
Kruskal's algorithm&nbsp;[<A
HREF="bibliography.html#kruskal56">18</A>]
solves the minimum spanning tree problem. It is a greedy
algorithm that calculates the minimum spanning tree for an undirected
graph with weighted edges.
<P>
This is Prim's algorithm&nbsp;[<A
HREF="bibliography.html#prim57:_short">25</A>] for solving the
minimum spanning tree problem for an undirected graph with weighted
edges. The implementation is simply a call to <a
href="./dijkstra_shortest_paths.html"><TT>dijkstra_shortest_paths()</TT></a>
with the appropriate choice of comparison and combine functors.
-->
<P>
<H2><a name="sec:shortest-paths-algorithms">Shortest-Paths Algorithms</a></H2>
<P>
One of the classic problems in graph theory is to find the shortest
path between two vertices in a graph. Formally, a <I>path</I> is a
sequence of vertices <i>&lt;v<sub>0</sub>,v<sub>1</sub>,...,v<sub>k</sub>&gt;</i>
in a graph <i>G = (V, E)</i> such that each vertex is connected to the
next vertex in the sequence (the edges
<i>(v<sub>i</sub>,v<sub>i+1</sub>)</i> for <i>i=0,1,...,k-1</i> are in
the edge set <i>E</i>). In the shortest-path problem, each edge is
given a real-valued weight. We can therefore talk about the <I>weight
of a path</I><BR>
<p></p>
<DIV ALIGN="left">
<i>w(p) = sum from i=1..k of w(v<sub>i-1</sub>,v<sub>i</sub>)</i>
</DIV>
<BR CLEAR="ALL"><P></P>
The <I>shortest path weight</I> from vertex <i>u</i> to <i>v</i> is then
<BR>
<p></p>
<DIV ALIGN="left">
<i>delta (u,v) = min { w(p) : u --> v }</i> if there is a path from
<i>u</i> to <i>v</i> <br>
<i>delta (u,v) = infinity</i> otherwise.
</DIV>
<BR CLEAR="ALL"><P></P>
A <I>shortest path</I> is any path whose weight is equal to the
<I>shortest path weight</I>.
<P>
There are several variants of the shortest path problem. Above we
defined the <I>single-pair</I> problem, but there is also the
<I>single-source problem</I> (all shortest paths from one vertex to
every other vertex in the graph), the equivalent
<I>single-destination problem</I>, and the <I>all-pairs problem</I>.
It turns out that there are no algorithms for solving the single-pair
problem that are asymptotically faster than algorithms that solve the
single-source problem.
<P>
A <I>shortest-paths tree</I> rooted at vertex <i>s</i> in graph
<i>G=(V,E)</i> is a directed subgraph <i>G'=(V',E')</i> where
<i>V'</i> is a subset of <i>V</i> and <i>E'</i> is a subset of
<i>E</i>, <i>V'</i> is the set of vertices reachable from <i>s</i>,
<i>G'</i> forms a rooted tree with root <i>s</i>, and for all
<i>v</i> in <i>V'</i> the unique simple path from <i>s</i> to
<i>v</i> in <i>G'</i> is a shortest path from <i>s</i> to <i>v</i> in
<i>G</i>. The result of a single-source algorithm is a shortest-paths
tree.
<P>
<H2><a name="sec:network-flow-algorithms">Network Flow Algorithms</a></H2>
<p>
A flow network is a directed graph <i>G=(V,E)</i> with a
<b><i>source</i></b> vertex <i>s</i> and a <b><i>sink</i></b> vertex
<i>t</i>. Each edge has a positive real-valued <b><i>capacity</i></b>
function <i>c</i> and there is a <b><i>flow</i></b> function <i>f</i>
defined over every vertex pair. The flow function must satisfy three
constraints:
<p>
<i>f(u,v) &lt;= c(u,v) for all (u,v) in V x V</i> (Capacity constraint) <br>
<i>f(u,v) = - f(v,u) for all (u,v) in V x V</i> (Skew symmetry)<br>
<i>sum<sub>v in V</sub> f(u,v) = 0 for all u in V - {s,t}</i> (Flow conservation)
<p>
The <b><i>flow</i></b> of the network is the net flow entering the
sink vertex <i>t</i> (which is equal to the net flow leaving the
source vertex <i>s</i>).<p>
<i>|f| = sum<sub>u in V</sub> f(u,t) = sum<sub>v in V</sub> f(s,v)</i>
<p>
The <b><i>residual capacity</i></b> of an edge is <i>r(u,v) = c(u,v) -
f(u,v)</i>. The edges with <i>r(u,v) &gt; 0</i> are residual edges
<i>E<sub>f</sub></i>, which induce the residual graph <i>G<sub>f</sub>
= (V, E<sub>f</sub>)</i>. An edge with <i>r(u,v) = 0</i> is
<b><i>saturated</i></b>.
<p>
The <b><i>maximum flow problem</i></b> is to determine the maximum
possible value for <i>|f|</i> and the corresponding flow values for
every vertex pair in the graph.
<p>
The <b><i>minimum cost maximum flow problem</i></b> is to determine
the maximum flow which minimizes <i>sum<sub>(u,v) in E</sub>
cost(u,v) * f(u,v)</i>.
<p>
A flow network is shown in <a href="#fig:max-flow">Figure
8</a>. Vertex <i>A</i> is the source vertex and <i>H</i> is the sink
vertex.
<P></P>
<DIV ALIGN="center"><A NAME="fig:max-flow"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 8:</STRONG> A Maximum Flow
Network.<br> Edges are labeled with the flow and capacity
values.</CAPTION>
<TR><TD><IMG SRC="./figs/max-flow.gif" width="578" height="240"></TD></TR>
</TABLE>
</DIV><P></P>
<p>
There is a long history of algorithms for solving the maximum flow
problem, with the first algorithm due to <a
href="./bibliography.html#ford56:_maxim">Ford and Fulkerson</a>. The
best general-purpose algorithm to date is the push-relabel algorithm
of <a
href="./bibliography.html#goldberg85:_new_max_flow_algor">Goldberg</a>,
which is based on the notion of a <b><i>preflow</i></b> introduced by
<a href="./bibliography.html#karzanov74:_deter">Karzanov</a>.
<h2><a name="sec:min-cut-algorithms">Minimum Cut Algorithms</a></h2>
<h3>Undirected Graphs</h3>
<p>Given an undirected graph <i>G</i> = (<i>V</i>, <i>E</i>), a <em>cut</em> of <i>G</i> is a partition of the vertices into two non-empty sets <i>X</i> and <img src="stoer_wagner_imgs/6e4.gif" alt="\overline{X} = V - X" style="vertical-align: middle; padding-bottom: 2px">. The <i>weight</i> of a cut is defined as the number of edges between sets <i>X</i> and <img src="stoer_wagner_imgs/f79.gif" alt="\overline{X}" style="vertical-align: middle; padding-bottom: 3px"> if <i>G</i> is unweighted, or the sum of the weights of all edges between sets <i>X</i> and <img src="stoer_wagner_imgs/f79.gif" alt="\overline{X}" style="vertical-align: middle; padding-bottom: 3px"> if <i>G</i> is weighted (each edge has an associated, non-negative weight).
<p>When the weight of a cut <img src="stoer_wagner_imgs/8b7.gif" alt="C = \{X, \overline{X}\}" style="vertical-align: middle"> is minimal for a graph <i>G</i> (that is, no other cut of <i>G</i> has a lesser weight), then the cut is known as a <em>minimum cut</em> or a <em>min-cut</em>. For example, given this weighted graph:
<p><a href="stoer_wagner_imgs/stoer_wagner-example.dot"><img src="stoer_wagner_imgs/stoer_wagner-example.gif"></a>
<p>the cut {{0, 6}, {3, 2, 7, 1, 5, 4}} has weight 13:
<p><a href="stoer_wagner_imgs/stoer_wagner-example-c1.dot"><img src="stoer_wagner_imgs/stoer_wagner-example-c1.gif"></a>
<p>and the min-cut is {{0, 1, 4, 5}, {2, 3, 6, 7}} (weight 4):
<p><a href="stoer_wagner_imgs/stoer_wagner-example-min-cut.dot"><img src="stoer_wagner_imgs/stoer_wagner-example-min-cut.gif"></a>
<p>Unlike this example, a graph will sometimes have multiple min-cuts, all of equal weight. A minimum cut algorithm determines one of them as well as the min-cut weight.
<h3>Directed Graphs</h3>
<p>Given a directed graph <i>G</i> = (<i>V</i>, <i>E</i>), a <em>cut</em> of <i>G</i> is a partition of the vertices into two non-empty sets <i>S</i> and <i>T</i>, where <i>S</i> is known as the set of <em>source vertices</em> and <i>T</i> is known as the set of <em>sink vertices</em>. The <em>capacity</em> of a cut <i>C</i> = (<i>S</i>, <i>T</i>) is the number of edges from a vertex in <i>S</i> to a vertex in <i>T</i> if <i>G</i> is unweighted, or the sum of weights of edges from a vertex in <i>S</i> to a vertex in <i>T</i> if <i>G</i> is weighted.
<p>When the capacity of a cut <i>C</i> = (<i>S</i>, <i>T</i>) of a directed graph is minimal (that is, no other cut of <i>G</i> has lesser capacity), then <i>C</i> is known as a <em>minimum cut</em> or <em>min-cut</em>.
<p>For example, given this directed graph:
<p><a href="stoer_wagner_imgs/digraph1.dot"><img src="stoer_wagner_imgs/digraph1.gif"></a>
<p>a min-cut is:
<p><a href="stoer_wagner_imgs/digraph1-min-cut.dot"><img src="stoer_wagner_imgs/digraph1-min-cut.gif"></a>
<p>where <i>S</i> = {0}, <i>T</i> = {1, 2, 3}, and the min-cut capacity is 1.
<br>
<HR>
<TABLE>
<TR valign=top>
<TD nowrap>Copyright &copy; 2000-2001</TD><TD>
<A HREF="http://www.boost.org/people/jeremy_siek.htm">Jeremy Siek</A>, Indiana University (<A HREF="mailto:jsiek@osl.iu.edu">jsiek@osl.iu.edu</A>)
</TD></TR></TABLE>
</BODY>
</HTML>