Modified: reef/site/wake.html
URL: 
http://svn.apache.org/viewvc/reef/site/wake.html?rev=1722413&r1=1722412&r2=1722413&view=diff
==============================================================================
--- reef/site/wake.html (original)
+++ reef/site/wake.html Wed Dec 30 21:09:32 2015
@@ -1,13 +1,13 @@
 <!DOCTYPE html>
 <!--
- | Generated by Apache Maven Doxia at 2015-12-04 
+ | Generated by Apache Maven Doxia at 2015-12-30 
  | Rendered using Apache Maven Fluido Skin 1.4
 -->
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20151204" />
+    <meta name="Date-Revision-yyyymmdd" content="20151230" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Apache REEF - Wake</title>
     <link rel="stylesheet" href="./css/apache-maven-fluido-1.4.min.css" />
@@ -446,7 +446,7 @@ under the License. --><h1>Wake</h1>
 <div class="section">
 <h2>Background<a name="Background"></a></h2>
 <p>Wake applications consist of asynchronous <i>event handlers</i> that run 
inside of <i>stages</i>. Stages provide scheduling primitives such as thread 
pool sizing and performance isolation between event handlers. In addition to 
event handler and stage APIs, Wake includes profiling tools and a rich standard 
library of primitives for system builders.</p>
-<p>Event driven processing frameworks improve upon the performance of threaded 
architectures in two ways: (1) Event handlers often have lower memory and 
context switching overhead than threaded solutions, and (2) event driven 
systems allow applications to allocate and monitor computational and I/O 
resources in an extremely fine-grained fashion. Modern threading packages have 
done much to address the first concern, and have significantly lowered 
concurrency control and other implementation overheads in recent years. 
However, fine grained resource allocation remains a challenge in threaded 
systems, and is Wake&#x2019;s primary advantage over threading.</p>
+<p>Event driven processing frameworks improve upon the performance of threaded 
architectures in two ways: (1) event handlers often have lower memory and 
context switching overhead than threaded solutions, and (2) event driven 
systems allow applications to allocate and monitor computational and I/O 
resources in an extremely fine-grained fashion. Modern threading packages have 
done much to address the first concern, and have significantly lowered 
concurrency control and other implementation overheads in recent years. 
However, fine-grained resource allocation remains a challenge in threaded 
systems, and is Wake&#x2019;s primary advantage over threading.</p>
 <p>Early event driven systems such as SEDA executed each event handler in a 
dedicated thread pool called a stage. This isolated low-latency event handlers 
(such as cache lookups) from expensive high-latency operations, such as disk 
I/O. With a single thread pool, high-latency I/O operations can easily 
monopolize the thread pool, causing all of the CPUs to block on disk I/O, even 
when there is computation to be scheduled. With separate thread pools, the 
operating system schedules I/O requests and computation separately, 
guaranteeing that runnable computations will not block on I/O requests.</p>
 <p>This is in contrast to event-driven systems such as the Click modular 
router that were designed to maximize throughput for predictable, low-latency 
event handlers. When possible, Click aggressively chains event handlers 
together, reducing the cost of an event dispatch to that of a function call, 
and allowing the compiler to perform optimizations such as inlining and 
constant propagation across event handlers.</p>
 <p>Wake allows developers to trade off between these two extremes by 
explicitly partitioning their event handlers into stages. Within a stage, event 
handlers engage in <i>thread-sharing</i> by simply calling each other directly. 
When an event crosses a stage boundary, it is placed in a queue of similar 
events. The queue is then drained by the threads managed by the receiving 
stage.</p>
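As a sketch of this tradeoff (using a local `EventHandler` interface that mirrors the one described below; the real API lives in `org.apache.reef.wake`, and the class name here is hypothetical), thread-sharing within a stage is a plain method call, while crossing a stage boundary means enqueueing the event for the receiving stage's own thread:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class StageBoundarySketch {
  // Local stand-in for Wake's EventHandler interface.
  interface EventHandler<T> { void onNext(T value); }

  static final List<String> handled = Collections.synchronizedList(new ArrayList<>());

  static void runOnce() {
    // Within a stage, thread-sharing: handlers simply call each other directly.
    EventHandler<String> sink = handled::add;
    EventHandler<String> upper = v -> sink.onNext(v.toUpperCase()); // direct call

    // Across a stage boundary: the event is queued, and the queue is drained
    // by a thread managed by the receiving stage.
    BlockingQueue<String> boundary = new LinkedBlockingQueue<>();
    Thread stageWorker = new Thread(() -> {
      try {
        while (true) { upper.onNext(boundary.take()); }
      } catch (InterruptedException e) { /* stage shut down */ }
    });
    stageWorker.start();

    try {
      boundary.put("hello");                      // event crosses the boundary here
      while (handled.isEmpty()) { Thread.sleep(1); }
      stageWorker.interrupt();
      stageWorker.join();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  public static void main(String[] args) {
    runOnce();
    System.out.println(handled); // [HELLO]
  }
}
```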
@@ -474,7 +474,7 @@ under the License. --><h1>Wake</h1>
   void onCompleted();
 }
 </pre></div>
-<p>The <tt>Observer</tt> is designed for stateful event handlers that need to 
be explicitly torn down at exit, or when errors occor. Such event handlers may 
maintain open network sockets, write to disk, buffer output, and so on. As with 
<tt>onNext()</tt>, neither <tt>onError()</tt> nor <tt>onCompleted()</tt> throw 
exceptions. Instead, callers should assume that they are asynchronously 
invoked.</p>
+<p>The <tt>Observer</tt> is designed for stateful event handlers that need to 
be explicitly torn down at exit, or when errors occur. Such event handlers may 
maintain open network sockets, write to disk, buffer output, and so on. As with 
<tt>onNext()</tt>, neither <tt>onError()</tt> nor <tt>onCompleted()</tt> throw 
exceptions. Instead, callers should assume that they are asynchronously 
invoked.</p>
 <p><tt>EventHandler</tt> and <tt>Observer</tt> implementations should be 
thread-safe and handle concurrent invocations of <tt>onNext()</tt>. However, it 
is illegal to call <tt>onCompleted()</tt> or <tt>onError()</tt> in a race with 
any calls to <tt>onNext()</tt>, and the call to <tt>onCompleted()</tt> or 
<tt>onError()</tt> must be the last call made to the object. Therefore, 
implementations of <tt>onCompleted()</tt> and <tt>onError()</tt> can assume 
they have a lock on <tt>this</tt>, and that <tt>this</tt> has not been torn 
down and is still in a valid state.</p>
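A minimal <tt>Observer</tt> that obeys these invariants might look like the following sketch (the interface is a local stand-in mirroring the one shown above, and the <tt>LineBuffer</tt> class is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class ObserverInvariantsSketch {
  // Local stand-in for Wake's Observer interface.
  interface Observer<T> {
    void onNext(T value);
    void onError(Exception error);
    void onCompleted();
  }

  // Buffers lines until the stream completes, then flushes them in one batch.
  static class LineBuffer implements Observer<String> {
    private final List<String> buffer = new ArrayList<>();
    final List<String> flushed = new ArrayList<>();

    // onNext() may be invoked concurrently, so it must synchronize.
    @Override public synchronized void onNext(String value) { buffer.add(value); }

    // Per the protocol above, onCompleted() never races with onNext(), so it
    // can assume exclusive access to a still-valid object: no lock needed.
    @Override public void onCompleted() {
      flushed.addAll(buffer);
      buffer.clear();
    }

    // Errors tear the handler down without flushing partial state.
    @Override public void onError(Exception error) { buffer.clear(); }
  }

  public static void main(String[] args) {
    LineBuffer b = new LineBuffer();
    b.onNext("a");
    b.onNext("b");
    b.onCompleted();               // must be the last call to the object
    System.out.println(b.flushed); // [a, b]
  }
}
```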
 <p>We chose these invariants because they are simple and easy to enforce. In 
most cases, application logic simply limits calls to <tt>onCompleted()</tt> and 
<tt>onError()</tt> to other implementations of <tt>onError()</tt> and 
<tt>onCompleted()</tt>, and relies upon Wake (and any intervening application 
logic) to obey the same protocol.</p></div>
 <div class="section">
@@ -491,13 +491,13 @@ under the License. --><h1>Wake</h1>
 
 <div class="source"><pre class="prettyprint">public interface RxStage&lt;T&gt; 
extends Observer&lt;T&gt;, Stage { }
 </pre></div>
-<p>In both cases, the stage simply exposes the same API as the event handler 
that it manages. This allows code that produces events to treat downstream 
stages and raw <tt>EventHandlers</tt> / <tt>Observers</tt> interchangebly. 
Recall that Wake implements thread sharing by allowing EventHandlers and 
Observers to directly invoke each other. Since Stages implement the same 
interface as raw EventHandlers and Observers, this pushes the placement of 
thread boundaries and other scheduling tradeoffs to the code that is 
instantiating the application. In turn, this simplifies testing and improves 
the reusability of code written on top of Wake.</p>
+<p>In both cases, the stage simply exposes the same API as the event handler 
that it manages. This allows code that produces events to treat downstream 
stages and raw <tt>EventHandlers</tt> / <tt>Observers</tt> interchangeably. 
Recall that Wake implements thread sharing by allowing EventHandlers and 
Observers to directly invoke each other. Since Stages implement the same 
interface as raw EventHandlers and Observers, this pushes the placement of 
thread boundaries and other scheduling tradeoffs to the code that is 
instantiating the application. In turn, this simplifies testing and improves 
the reusability of code written on top of Wake.</p>
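Concretely, this interchangeability can be sketched as follows (local interfaces and a hypothetical <tt>SimpleStage</tt>; Wake ships real stage implementations such as <tt>ThreadPoolStage</tt>): a producer written against <tt>EventHandler&lt;T&gt;</tt> works unchanged whether it is handed a raw handler or a stage wrapping one.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class InterchangeableSketch {
  interface EventHandler<T> { void onNext(T value); }
  interface Stage extends AutoCloseable { }

  // A stage runs its wrapped handler on its own thread, but exposes
  // the same EventHandler interface as the handler itself.
  static class SimpleStage<T> implements EventHandler<T>, Stage {
    private final EventHandler<T> handler;
    private final ExecutorService pool = Executors.newSingleThreadExecutor();
    SimpleStage(EventHandler<T> handler) { this.handler = handler; }
    @Override public void onNext(T value) { pool.submit(() -> handler.onNext(value)); }
    @Override public void close() throws InterruptedException {
      pool.shutdown();                            // accept no new events
      pool.awaitTermination(5, TimeUnit.SECONDS); // drain what is queued
    }
  }

  // The producer neither knows nor cares where the thread boundary is.
  static void produce(EventHandler<String> downstream) { downstream.onNext("event"); }

  public static void main(String[] args) throws Exception {
    List<String> seen = Collections.synchronizedList(new ArrayList<>());
    EventHandler<String> raw = seen::add;

    produce(raw);                      // direct call: thread-sharing
    try (SimpleStage<String> stage = new SimpleStage<>(raw)) {
      produce(stage);                  // same producer, now behind a queue
    }
    System.out.println(seen); // [event, event]
  }
}
```

Because the thread boundary is chosen where the application is wired together, the same <tt>produce()</tt> code can be unit-tested against a raw handler and deployed against a stage.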
 <div class="section">
 <h4>close() vs. onCompleted()<a name="close_vs._onCompleted"></a></h4>
 <p>It may seem strange that Wake RxStage exposes two shutdown methods: 
<tt>close()</tt> and <tt>onCompleted()</tt>. Since <tt>onCompleted()</tt> is 
part of the Observer API, it may be implemented in an asynchronous fashion. 
This makes it difficult for applications to cleanly shut down, since, even 
after <tt>onCompleted()</tt> has returned, resources may still be held by the 
downstream code.</p>
 <p>In contrast, <tt>close()</tt> is synchronous, and is not allowed to return 
until all queued events have been processed, and any resources held by the 
Stage implementation have been released. The upshot is that shutdown sequences 
in Wake work as follows: Once the upstream event sources are done calling 
<tt>onNext()</tt> (and all calls to <tt>onNext()</tt> have returned), 
<tt>onCompleted()</tt> or <tt>onError()</tt> is called exactly once per stage. 
After the <tt>onCompleted()</tt> or <tt>onError()</tt> call to a given stage 
has returned, <tt>close()</tt> must be called. Once <tt>close()</tt> returns, 
all resources have been released, and the JVM may safely exit, or the code that 
is invoking Wake may proceed under the assumption that no resources or memory 
have been leaked. Note that, depending on the implementation of the downstream 
Stage, there may be a delay between the return of calls such as 
<tt>onNext()</tt> or <tt>onCompleted()</tt> and their execution. Therefore, it 
is possible that the stage will continue to schedule <tt>onNext()</tt> calls after 
<tt>close()</tt> has been invoked. It is illegal for stages to drop events on 
shutdown, so the stage will execute the requests in its queue before it 
releases resources and returns from <tt>close()</tt>.</p>
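The shutdown protocol above can be sketched with a local <tt>Observer</tt> interface and a hypothetical single-threaded stage (Wake's real stages are more sophisticated): all <tt>onNext()</tt> calls finish, then <tt>onCompleted()</tt> is called exactly once, then <tt>close()</tt> blocks until every queued event has run.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ShutdownSketch {
  interface Observer<T> { void onNext(T v); void onError(Exception e); void onCompleted(); }

  // A single-threaded stage that may only release its worker once close()
  // has drained every queued event: no events are dropped on shutdown.
  static class QueueingStage implements Observer<String>, AutoCloseable {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    final List<String> seen = new ArrayList<>();
    private final Thread worker = new Thread(() -> {
      try {
        while (true) { queue.take().run(); }
      } catch (InterruptedException e) { /* close() interrupts once drained */ }
    });
    QueueingStage() { worker.start(); }

    @Override public void onNext(String v) { queue.add(() -> seen.add(v)); }
    @Override public void onError(Exception e) { }
    @Override public void onCompleted() { queue.add(() -> seen.add("<completed>")); }

    // Synchronous: block until every queued event has run, then stop the worker.
    @Override public void close() {
      try {
        while (!queue.isEmpty()) { Thread.sleep(1); }
        worker.interrupt();
        worker.join();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  }

  public static void main(String[] args) {
    QueueingStage stage = new QueueingStage();
    stage.onNext("a");
    stage.onNext("b");
    stage.onCompleted();            // exactly once, after all onNext() calls
    stage.close();                  // returns only after the queue is drained
    System.out.println(stage.seen); // [a, b, <completed>]
  }
}
```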
 <p><tt>Observer</tt> implementations do not expose a <tt>close()</tt> method, 
and generally do not invoke <tt>close()</tt>. Instead, when 
<tt>onCompleted()</tt> is invoked, the <tt>Observer</tt> should arrange for <tt>onCompleted()</tt> 
to be called on any <tt>Observer</tt> instances that <tt>this</tt> directly 
invokes, free any resources it is holding, and then return. Since the 
downstream <tt>onCompleted()</tt> calls are potentially asynchronous, it cannot 
assume that downstream cleanup completes before it returns.</p>
-<p>In a thread pool <tt>Stage</tt>, the final <tt>close()</tt> call will block 
until there are no more outstanding events queued in the stage. Once 
<tt>close()</tt> has been called (and returns) on each stage, no events are 
left in any queues, and no <tt>Observer</tt> or <tt>EventHandler</tt> objects 
are holding resources or scheduled on any cores, so shutdown is 
compelete.</p></div></div></div>
+<p>In a thread pool <tt>Stage</tt>, the final <tt>close()</tt> call will block 
until there are no more outstanding events queued in the stage. Once 
<tt>close()</tt> has been called (and returns) on each stage, no events are 
left in any queues, and no <tt>Observer</tt> or <tt>EventHandler</tt> objects 
are holding resources or scheduled on any cores, so shutdown is 
complete.</p></div></div></div>
 <div class="section">
 <h2>Helper libraries<a name="Helper_libraries"></a></h2>
 <p>Wake includes a number of standard library packages:</p>

