[GitHub] incubator-fluo-recipes pull request #130: Updated ExportQ and CFM to use new...

2017-05-03 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/incubator-fluo-recipes/pull/130




[GitHub] incubator-fluo-recipes pull request #130: Updated ExportQ and CFM to use new...

2017-05-03 Thread keith-turner
Github user keith-turner commented on a diff in the pull request:


https://github.com/apache/incubator-fluo-recipes/pull/130#discussion_r114658533
  
--- Diff: docs/combine-queue.md ---
@@ -0,0 +1,224 @@
+
+# Combine Queue Recipe
+
+## Background
+
+When many transactions try to modify the same keys, collisions will occur.  Too many collisions
+cause transactions to fail and throughput to nose dive.  For example, consider [phrasecount],
+which has many transactions processing documents.  Each transaction counts the phrases in a document
+and then updates global phrase counts.  Since each transaction attempts to update many phrases,
+the probability of collisions is high.
+
+## Solution
+
+The [combine queue recipe][CombineQueue] provides a reusable solution for updating many keys while
+avoiding collisions.  The recipe also organizes updates into batches in order to improve throughput.
+
+This recipe queues updates to keys for other transactions to process.  In the phrase count example,
+transactions processing documents queue updates but do not actually update the counts.  Below is an
+example of computing phrase counts using this recipe.
+
+ * TX1 queues a `+1` update for the phrase `we want lambdas now`
+ * TX2 queues a `+1` update for the phrase `we want lambdas now`
+ * TX3 reads the updates and the current value for the phrase `we want lambdas now`.  There is no current value and the updates sum to 2, so a new value of 2 is written.
+ * TX4 queues a `+2` update for the phrase `we want lambdas now`
+ * TX5 queues a `-1` update for the phrase `we want lambdas now`
+ * TX6 reads the updates and the current value for the phrase `we want lambdas now`.  The current value is 2 and the updates sum to 1, so a new value of 3 is written.
+
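As a rough illustration of the combining arithmetic in TX3 and TX6 above, here is a minimal sketch in plain Java; the `combine` helper and its types are hypothetical and are not part of the recipe's API.

```java
import java.util.List;
import java.util.Optional;

public class SumCombiner {
  // Hypothetical helper: fold queued updates into the current value, as TX3/TX6 do above.
  static Optional<Long> combine(Optional<Long> currentValue, List<Long> queuedUpdates) {
    long sum = currentValue.orElse(0L);
    for (long update : queuedUpdates) {
      sum += update;
    }
    return Optional.of(sum);
  }

  public static void main(String[] args) {
    // TX3: no current value, queued updates +1 and +1, so the new value is 2
    System.out.println(combine(Optional.empty(), List.of(1L, 1L)));
    // TX6: current value 2, queued updates +2 and -1, so the new value is 3
    System.out.println(combine(Optional.of(2L), List.of(2L, -1L)));
  }
}
```
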
+Transactions processing updates can also make additional updates of their own.
+For example, in addition to updating the current value for a phrase, the new
+value could also be placed on an export queue to update an external database.
+
+### Buckets
+
+A simple implementation of this recipe would have an update queue for each key.  However, the
+implementation is slightly more complex.  Each update queue is in a bucket, and transactions process
+all of the updates in a bucket.  This allows more efficient processing of updates for the following
+reasons:
+
+ * When updates are queued, notifications are made per bucket (instead of per key).
+ * The transaction processing the updates can scan the entire bucket reading updates, which avoids a seek for each key being updated.
+ * That transaction can also request a batch lookup to get the current value of all the keys being updated.
+ * Any additional actions taken on update (like adding something to an export queue) can also be batched.
+ * Data is organized to make reading existing values for keys in a bucket more efficient.
+
+Which bucket a key goes to is decided using hash and modulus, so that multiple updates for a key go
+to the same bucket.
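
A minimal sketch of that hash-and-modulus bucket assignment (illustrative only; the recipe's internal hashing may differ):

```java
// Illustrative only: map a key to one of numBuckets buckets so that every
// update for the same key lands in the same bucket.
static int bucketFor(String key, int numBuckets) {
  // Math.floorMod keeps the result non-negative even when hashCode() is negative.
  return Math.floorMod(key.hashCode(), numBuckets);
}
```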
+
+The initial number of tablets to create when applying table optimizations can be controlled by
+setting the buckets per tablet option when configuring a combine queue.  For example, if you
+have 20 tablet servers and 1000 buckets and want 2 tablets per tserver initially, then set buckets
+per tablet to 1000/(2*20) = 25.
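
The same computation spelled out (a throwaway sketch; the variable names are made up for illustration):

```java
int tabletServers = 20;
int buckets = 1000;
int initialTabletsPerServer = 2;

// buckets per tablet = total buckets / total initial tablets
int bucketsPerTablet = buckets / (initialTabletsPerServer * tabletServers);  // 1000 / 40 = 25
```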
+
+## Example Use
+
+The following code snippets show how to use this recipe for wordcount.  The first step is to
+configure it before initializing Fluo.  When initializing, an ID is needed.  This ID is used in two
+ways.  First, the ID is used as a row prefix in the table; therefore, nothing else should use that
+row range in the table.  Second, the ID is used in generating configuration keys.
+
+The following snippet shows how to configure a combine queue.
+
+```java
+FluoConfiguration fluoConfig = ...;
+
+// Set application properties for the combine queue.  These properties are read later by
+// the observers running on each worker.
+CombineQueue.configure(WcObserverProvider.ID)
+    .keyType(String.class).valueType(Long.class).buckets(119).finish(fluoConfig);
--- End diff --

I decided to go with `save()`




[GitHub] incubator-fluo-recipes pull request #130: Updated ExportQ and CFM to use new...

2017-05-03 Thread keith-turner
Github user keith-turner commented on a diff in the pull request:


https://github.com/apache/incubator-fluo-recipes/pull/130#discussion_r114650888
  
--- Diff: docs/combine-queue.md ---
@@ -0,0 +1,224 @@
...
+CombineQueue.configure(WcObserverProvider.ID)
+    .keyType(String.class).valueType(Long.class).buckets(119).finish(fluoConfig);
--- End diff --

I like `store()`; also thinking of `set()`.

```java
CombineQueue.configure(WcObserverProvider.ID)
    .keyType(String.class).valueType(Long.class).buckets(119).store(fluoConfig);
```

```java
CombineQueue.configure(WcObserverProvider.ID)
    .keyType(String.class).valueType(Long.class).buckets(119).set(fluoConfig);
```


[GitHub] incubator-fluo-recipes pull request #130: Updated ExportQ and CFM to use new...

2017-05-03 Thread keith-turner
Github user keith-turner commented on a diff in the pull request:


https://github.com/apache/incubator-fluo-recipes/pull/130#discussion_r114646561
  
--- Diff: docs/combine-queue.md ---
@@ -0,0 +1,224 @@
...
+CombineQueue.configure(WcObserverProvider.ID)
+    .keyType(String.class).valueType(Long.class).buckets(119).finish(fluoConfig);
+
+fluoConfig.setObserverProvider(WcObserverProvider.class);
+
+// initialize Fluo using fluoConfig
+```
+
+Assume the following observer is triggered when a document is updated.  It examines the new
+and old document content and determines the changes in word counts.  These changes are pushed to a
+combine queue.
+
+```java
+public class DocumentObserver implements StringObserver {
+  // word count combine queue
+  private CombineQueue<String, Long> wccq;
+
+  public static final Column NEW_COL = new Column("content", "new");
+  public static final Column 

[GitHub] incubator-fluo-recipes pull request #130: Updated ExportQ and CFM to use new...

2017-05-03 Thread mikewalch
Github user mikewalch commented on a diff in the pull request:


https://github.com/apache/incubator-fluo-recipes/pull/130#discussion_r114644943
  
--- Diff: docs/combine-queue.md ---
@@ -0,0 +1,224 @@
...
+CombineQueue.configure(WcObserverProvider.ID)
+    .keyType(String.class).valueType(Long.class).buckets(119).finish(fluoConfig);
--- End diff --

Another option is `store()`, but it's up to you.  I am fine with `finish()`.




[GitHub] incubator-fluo-recipes pull request #130: Updated ExportQ and CFM to use new...

2017-05-03 Thread mikewalch
Github user mikewalch commented on a diff in the pull request:


https://github.com/apache/incubator-fluo-recipes/pull/130#discussion_r114631218
  
--- Diff: docs/combine-queue.md ---
@@ -0,0 +1,224 @@
...
+CombineQueue.configure(WcObserverProvider.ID)
+    .keyType(String.class).valueType(Long.class).buckets(119).finish(fluoConfig);
--- End diff --

`finish()` could be called `save()`




[GitHub] incubator-fluo-recipes pull request #130: Updated ExportQ and CFM to use new...

2017-05-03 Thread mikewalch
Github user mikewalch commented on a diff in the pull request:


https://github.com/apache/incubator-fluo-recipes/pull/130#discussion_r114635795
  
--- Diff: docs/combine-queue.md ---
@@ -0,0 +1,224 @@
...

[GitHub] incubator-fluo-recipes pull request #130: Updated ExportQ and CFM to use new...

2017-05-02 Thread keith-turner
GitHub user keith-turner opened a pull request:

https://github.com/apache/incubator-fluo-recipes/pull/130

Updated ExportQ and CFM to use new ObserverProvider API

Create a new CombineQueue API that replaces the CollisionFreeMap.  The new
CombineQueue uses the new Fluo Observer APIs exclusively.  The CollisionFreeMap
only uses the old Fluo Observer APIs and was deprecated.

The ExportQueue was modified to support a fluent configuration mechanism.  This
can only be used with the new Observer API.
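
For reference, the fluent configuration style being referred to follows the chain shown in the quoted combine-queue docs; the sketch below uses the terminal `save(fluoConfig)` name settled on earlier in this thread and is an illustration, not an excerpt from this patch.

```java
// Fluent configuration chain, reusing the identifiers from the combine queue
// example quoted above; save(fluoConfig) is the terminal name chosen in the review.
CombineQueue.configure(WcObserverProvider.ID)
    .keyType(String.class).valueType(Long.class).buckets(119).save(fluoConfig);
```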

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/keith-turner/fluo-recipes observer-factory

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-fluo-recipes/pull/130.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #130


commit 3e25430f5b328978e4ae9ab97dabea7b0ce7e378
Author: Keith Turner 
Date:   2017-03-08T23:46:28Z

Updated ExportQ and CFM to use new ObserverProvider API

Create a new CombineQueue API that replaces the CollisionFreeMap.  The new
CombineQueue uses the new Fluo Observer APIs exclusively.  The CollisionFreeMap
only uses the old Fluo Observer APIs and was deprecated.

The ExportQueue was modified to support a fluent configuration mechanism.  This
can only be used with the new Observer API.



