[/
  Copyright Oliver Kowalke 2016.
  Distributed under the Boost Software License, Version 1.0.
  (See accompanying file LICENSE_1_0.txt or copy at
  http://www.boost.org/LICENSE_1_0.txt
]

[/ import path is relative to this .qbk file]
[import ../examples/work_sharing.cpp]

[#migration]
[section:migration Migrating fibers between threads]

[heading Overview]
Each fiber owns a stack and manages its execution state, including all
registers and CPU flags, the instruction pointer and the stack pointer. That
means, in general, a fiber is not bound to a specific thread.[footnote The
["main] fiber on each thread, that is, the fiber on which the thread is
launched, cannot migrate to any other thread. Also __boost_fiber__ implicitly
creates a dispatcher fiber for each thread [mdash] this cannot migrate
either.][superscript,][footnote Of course it would be problematic to migrate a
fiber that relies on [link thread_local_storage thread-local storage].]

Migrating a fiber from a logical CPU with a heavy workload to another
logical CPU with a lighter workload might speed up the overall execution.
Note that on NUMA architectures it is not always advisable to migrate a fiber
between threads. Suppose fiber ['f] is running on logical CPU ['cpu0], which
belongs to NUMA node ['node0]. The data of ['f] are allocated in the physical
memory located at ['node0]. Migrating the fiber from ['cpu0] to another
logical CPU ['cpuX], which is part of a different NUMA node ['nodeX], might
reduce the performance of the application due to the increased latency of
memory access.
Only fibers that are contained in __algo__'s ready queue can migrate between
threads. You cannot migrate a running fiber, nor one that is __blocked__. You
cannot migrate a fiber if its [member_link context..is_context] method returns
`true` for `pinned_context`.

In __boost_fiber__ a fiber is migrated by invoking __context_detach__ on the
thread from which the fiber migrates and __context_attach__ on the thread to
which the fiber migrates.

Thus, fiber migration is accomplished by sharing state between instances of a
user-coded __algo__ implementation running on different threads. The fiber's
original thread calls [member_link algorithm..awakened], passing the
fiber's [class_link context][^*]. The `awakened()` implementation calls
__context_detach__.

At some later point, when the same or a different thread calls [member_link
algorithm..pick_next], the `pick_next()` implementation selects a ready
fiber and calls __context_attach__ on it before returning it.

As stated above, a `context` for which `is_context(pinned_context) == true`
must never be passed to either __context_detach__ or __context_attach__. It
may only be returned from `pick_next()` called by the ['same] thread that
passed that context to `awakened()`, as the sketch below illustrates.
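
The following is a condensed sketch of the two overrides, modeled on the
work-sharing example below. It assumes three data members of the user-coded
__algo__ implementation: a class-static shared ready queue `rqueue_`, a
class-static mutex `rqueue_mtx_` guarding it, and a per-instance local queue
`lqueue_` reserved for pinned contexts.

    void awakened( boost::fibers::context * ctx) noexcept {
        if ( ctx->is_context( boost::fibers::type::pinned_context) ) {
            // the main or dispatcher fiber of this thread: it must not
            // migrate, so keep it in the thread-local queue
            lqueue_.push_back( * ctx);
        } else {
            // detach the fiber from the current thread so that any
            // participating thread may later resume it
            ctx->detach();
            std::unique_lock< std::mutex > lk{ rqueue_mtx_ };
            rqueue_.push( ctx);
        }
    }

    boost::fibers::context * pick_next() noexcept {
        boost::fibers::context * ctx = nullptr;
        std::unique_lock< std::mutex > lk{ rqueue_mtx_ };
        if ( ! rqueue_.empty() ) {
            // a fiber shared by all threads: attach it to the running
            // thread before returning it
            ctx = rqueue_.front();
            rqueue_.pop();
            lk.unlock();
            boost::fibers::context::active()->attach( ctx);
        } else if ( ! lqueue_.empty() ) {
            // fall back to this thread's pinned contexts
            ctx = & lqueue_.front();
            lqueue_.pop_front();
        }
        return ctx;
    }
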
[heading Example of work sharing]

In the example [@../../examples/work_sharing.cpp work_sharing.cpp]
multiple worker fibers are created on the main thread. Each fiber receives a
character as a parameter at construction. This character is printed ten
times. Between each iteration the fiber calls __yield__, which puts the fiber
into the ready queue of the fiber-scheduler ['shared_ready_queue] running in
the current thread.

The next fiber ready to be executed is dequeued from the shared ready queue
and resumed by ['shared_ready_queue] running on ['any participating thread].

All instances of ['shared_ready_queue] share one global concurrent queue,
used as the ready queue. This mechanism shares all worker fibers between all
instances of ['shared_ready_queue], thus between all participating threads.
[heading Setup of threads and fibers]

In `main()` the fiber-scheduler is installed and the worker fibers and the
threads are launched.

[main_ws]

The start of the threads is synchronized with a barrier. The main fiber of
each thread (including the main thread) is suspended until all worker fibers
are complete. When the main fiber returns from __cond_wait__, the thread
terminates: the main thread joins all other threads.

[thread_fn_ws]

Each worker fiber executes the function `whatevah()` with character `me` as
parameter. The fiber yields in a loop and prints a message whenever it is
migrated to another thread.

[fiber_fn_ws]
[heading Scheduling fibers]

The fiber scheduler `shared_ready_queue` is like `round_robin`, except that it
shares a common ready queue among all participating threads. A thread
participates in this pool by executing [function_link use_scheduling_algorithm]
before any other __boost_fiber__ operation.
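
For instance (mirroring the example source), each participating thread
installs the scheduler like this:

    boost::fibers::use_scheduling_algorithm< shared_ready_queue >();
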
The important point about the ready queue is that it's a class static, common
to all instances of ['shared_ready_queue].
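
A condensed sketch of the relevant declarations: the member names mirror the
example source, while the exact container types are illustrative.

    class shared_ready_queue : public boost::fibers::algo::algorithm {
        // class statics: one ready queue and one mutex guarding it, shared
        // by all instances of shared_ready_queue, i.e. by all threads
        static std::queue< boost::fibers::context * >  rqueue_;
        static std::mutex                              rqueue_mtx_;

        // per-instance (thus per-thread) queue reserved for this thread's
        // main and dispatcher fibers, which must never migrate
        boost::fibers::scheduler::ready_queue_t        lqueue_;

        // ... constructor and virtual overrides follow
    };
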
Fibers that are enqueued via __algo_awakened__ (fibers that are ready to be
resumed) are thus available to all threads.

A separate, scheduler-specific queue must be reserved for the thread's main
fiber and dispatcher fibers: these may ['not] be shared between threads! When
we're passed either of these fibers, we push it onto that local queue instead
of into the shared queue: it would be Bad News for thread B to retrieve and
attempt to execute thread A's main fiber.
[awakened_ws]

When __algo_pick_next__ gets called inside one thread, a fiber is dequeued
from ['rqueue_] and is resumed on that thread.

[pick_next_ws]

The source code above is found in
[@../../examples/work_sharing.cpp work_sharing.cpp].

[endsect]