cjmctague closed pull request #164: Spelling corrections for the website
URL: https://github.com/apache/fluo-website/pull/164
This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance:

diff --git a/README.md b/README.md
index 7ac54ee..7f967f6 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
 # Apache Fluo website
 Code powering the Apache Fluo website ([https://fluo.apache.org](https://fluo.apache.org)).
-[Contributing](CONTRIBUTING.md) decribes how to test locally.
+[Contributing](CONTRIBUTING.md) describes how to test locally.
 ## Update website for new release
diff --git a/_posts/release/2014-10-02-fluo-1.0.0-alpha-1.md b/_posts/release/2014-10-02-fluo-1.0.0-alpha-1.md
index cbb2daf..b6fad31 100644
--- a/_posts/release/2014-10-02-fluo-1.0.0-alpha-1.md
+++ b/_posts/release/2014-10-02-fluo-1.0.0-alpha-1.md
@@ -38,7 +38,7 @@ The FluoInputFormat allows a specific snapshot to be read into a mapreduce job.
 #### FluoFileOutputFormat and FluoOutputFormat
-The FluoFileOuputFormat enables the bulk ingest of a Fluo table using mapreduce by creating the Accumulo r-files in HDFS. The FluoOutputFormat pushes keys directly into the Accumulo tablet servers through the client API. [Fluo-35][35] added this feature.
+The FluoFileOutputFormat enables the bulk ingest of a Fluo table using mapreduce by creating the Accumulo r-files in HDFS. The FluoOutputFormat pushes keys directly into the Accumulo tablet servers through the client API. [Fluo-35][35] added this feature.
 #### Fluo Workers and Oracle running in Yarn
diff --git a/_posts/release/2018-02-26-fluo-1.2.0.md b/_posts/release/2018-02-26-fluo-1.2.0.md
index b6e5186..f6ca94a 100644
--- a/_posts/release/2018-02-26-fluo-1.2.0.md
+++ b/_posts/release/2018-02-26-fluo-1.2.0.md
@@ -148,7 +148,7 @@ Accumulo uses, it will not conflict with Fluo's version.
 Fluo's commit code is asynchronous in order to support high throughput. Before this release the high level commit logic was spread far and wide in the code. For this release the commit code was transitioned from Guava's
-ListenableFutre to Java 8's CompletableFuture in [ca63aaf]. This transition laid
+ListenableFuture to Java 8's CompletableFuture in [ca63aaf]. This transition laid
 the ground work for [6bf604f] which centralized the commit logic. Now the high level logic for the commit code is all in one place, making it much easier to understand.
diff --git a/_recipes-1-2/recipes/combine-queue.md b/_recipes-1-2/recipes/combine-queue.md
index 628f528..dd8bb1e 100644
--- a/_recipes-1-2/recipes/combine-queue.md
+++ b/_recipes-1-2/recipes/combine-queue.md
@@ -10,7 +10,7 @@ When many transactions try to modify the same keys, collisions will occur. Too
 cause transactions to fail and throughput to nose dive. For example, consider [phrasecount] which has many transactions processing documents. Each transaction counts the phrases in a document and then updates global phrase counts. Since transaction attempts to update many phrases
-, the probbaility of collisions is high.
+, the probability of collisions is high.
 ## Solution
diff --git a/docs/fluo-recipes/1.0.0-beta-1/cfm.md b/docs/fluo-recipes/1.0.0-beta-1/cfm.md
index 377daed..a60ffde 100644
--- a/docs/fluo-recipes/1.0.0-beta-1/cfm.md
+++ b/docs/fluo-recipes/1.0.0-beta-1/cfm.md
@@ -185,7 +185,7 @@ public class WordCountMap {
       }
       if (sum == 0) {
-        //returning absent will cause the collision free map to delte the current key
+        //returning absent will cause the collision free map to delete the current key
         return Optional.absent();
       } else {
         return Optional.of(sum);
@@ -225,7 +225,7 @@ This recipe makes two important guarantees about updates for a key when it
 calls `updatingValues()` on an `UpdateObserver`.
 * The new value reported for an update will be derived from combining all
-  updates that were committed before the transaction thats processing updates
+  updates that were committed before the transaction that's processing updates
   started. The implementation may have to make multiple passes over queued updates to achieve this. In the situation where TX1 queues a `+1` and later TX2 queues a `-1` for the same key, there is no need to worry about only seeing
diff --git a/docs/fluo-recipes/1.0.0-beta-1/export-queue.md b/docs/fluo-recipes/1.0.0-beta-1/export-queue.md
index bb967ed..30b4b94 100644
--- a/docs/fluo-recipes/1.0.0-beta-1/export-queue.md
+++ b/docs/fluo-recipes/1.0.0-beta-1/export-queue.md
@@ -32,7 +32,7 @@ public class MyObserver extends AbstractObserver {
   private static final TYPEL = new TypeLayer(new StringEncoder());
-  //reperesents a Query system extrnal to Fluo that is updated by Fluo
+  //represents a Query system external to Fluo that is updated by Fluo
   QuerySystem querySystem;
   @Override
@@ -153,7 +153,7 @@ application.
 //initialize Fluo using fluoConfig
 ```
-Below is updated version of the observer from above thats now using the export
+Below is updated version of the observer from above that's now using the export
 queue.
 ```java
@@ -252,7 +252,7 @@ that creates the export value.
 In the example above only one transaction will succeed because both are setting `row1 fam1:qual1`. Since adding to the export queue is part of the transaction, only the transaction that succeeds will add something to the
-queue. If the funtion ek() in the example is deterministic, then both
+queue. If the function ek() in the example is deterministic, then both
 transactions would have been trying to add the same key to the export queue. With the above method, we know that transactions adding entries to the queue for
@@ -266,7 +266,7 @@ same key.
 Both transactions succeed because they are writing to different cells (`rowB fam1:qual2` and `rowA fam1:qual2`). This approach makes it more difficult to reason about export entries with the same key, because the transactions adding those entries could have overlapped in time. This is an
-example of write skew mentioned in the Percolater paper.
+example of write skew mentioned in the Percolator paper.
 1. TH1 : key1 = ek(`row1`,`fam1:qual1`)
 1. TH1 : val1 = ev(tx1.get(`row1`,`fam1:qual1`), tx1.get(`rowA`,`fam1:qual2`))
diff --git a/docs/fluo-recipes/1.0.0-beta-1/table-optimization.md b/docs/fluo-recipes/1.0.0-beta-1/table-optimization.md
index f0f1be9..f09b38b 100644
--- a/docs/fluo-recipes/1.0.0-beta-1/table-optimization.md
+++ b/docs/fluo-recipes/1.0.0-beta-1/table-optimization.md
@@ -41,7 +41,7 @@ selective optimizations is need look into using the following methods instead.
 ## Command Example
 Fluo Recipes provides an easy way to optimize a Fluo table for configured
-recipes from the command line. This should be done after configuring reciped
+recipes from the command line. This should be done after configuring recipes
 and initializing Fluo. Below are example command for initializing in this way.
 ```bash
diff --git a/docs/fluo-recipes/1.0.0-beta-1/transient.md b/docs/fluo-recipes/1.0.0-beta-1/transient.md
index 86fbe37..ab9b16f 100644
--- a/docs/fluo-recipes/1.0.0-beta-1/transient.md
+++ b/docs/fluo-recipes/1.0.0-beta-1/transient.md
@@ -7,7 +7,7 @@ version: 1.0.0-beta-1
 ## Background
 Some recipes store transient data in a portion of the Fluo table. Transient
-data is data thats continually being added and deleted. Also these transient
+data is data that's continually being added and deleted. Also these transient
 data ranges contain no long term data. The way Fluo works, when data is deleted a delete marker is inserted but the data is actually still there. Over time these transient ranges of the table will have a lot more delete markers
@@ -15,7 +15,7 @@ than actual data if nothing is done.
 If nothing is done, then processing transient data will get increasingly slower over time.
 These deleted markers can be cleaned up by forcing Accumulo to compact the
-Fluo table, which will run Fluos garbage collection iterator. However,
+Fluo table, which will run Fluo's garbage collection iterator. However,
 compacting the entire table to clean up these ranges within a table is overkill. Alternatively, Accumulo supports compacting ranges of a table. So a good solution to the delete marker problem is to periodically compact just
@@ -37,7 +37,7 @@ TransientRegistry transientRegistry = new TransientRegistry(fluoConfig.getAppCon
 transientRegistry.addTransientRange(new RowRange(startRow, endRow));
 //Initialize Fluo using fluoConfig. This will store the registered ranges in
-//zookeeper making them availiable on any node later.
+//zookeeper making them available on any node later.
 ```
 ## Compacting Transient Ranges
@@ -60,7 +60,7 @@ fluo exec <app name> io.fluo.recipes.accumulo.cmds.CompactTransient [<interval>
 ```
 If no arguments are specified the command will call `compactTransient()` once.
-If only `<interval>` is specied the command will loop forever calling
+If only `<interval>` is specified, the command will loop forever calling
 `compactTransient()` sleeping `<interval>` seconds between calls. If `<count>` is additionally specified then the command will only loop `<count>` times.
diff --git a/docs/fluo-recipes/1.0.0-beta-2/cfm.md b/docs/fluo-recipes/1.0.0-beta-2/cfm.md
index 27eb409..8682975 100644
--- a/docs/fluo-recipes/1.0.0-beta-2/cfm.md
+++ b/docs/fluo-recipes/1.0.0-beta-2/cfm.md
@@ -184,7 +184,7 @@ public class WordCountMap {
       }
       if (sum == 0) {
-        //returning absent will cause the collision free map to delte the current key
+        //returning absent will cause the collision free map to delete the current key
         return Optional.absent();
       } else {
         return Optional.of(sum);
@@ -224,7 +224,7 @@ This recipe makes two important guarantees about updates for a key when it
 calls `updatingValues()` on an `UpdateObserver`.
 * The new value reported for an update will be derived from combining all
-  updates that were committed before the transaction thats processing updates
+  updates that were committed before the transaction that's processing updates
   started. The implementation may have to make multiple passes over queued updates to achieve this. In the situation where TX1 queues a `+1` and later TX2 queues a `-1` for the same key, there is no need to worry about only seeing
diff --git a/docs/fluo-recipes/1.0.0-beta-2/export-queue.md b/docs/fluo-recipes/1.0.0-beta-2/export-queue.md
index aa2ccb0..4d0efa4 100644
--- a/docs/fluo-recipes/1.0.0-beta-2/export-queue.md
+++ b/docs/fluo-recipes/1.0.0-beta-2/export-queue.md
@@ -32,7 +32,7 @@ public class MyObserver extends AbstractObserver {
   private static final TYPEL = new TypeLayer(new StringEncoder());
-  //reperesents a Query system extrnal to Fluo that is updated by Fluo
+  //represents a Query system external to Fluo that is updated by Fluo
   QuerySystem querySystem;
   @Override
@@ -153,7 +153,7 @@ application.
 //initialize Fluo using fluoConfig
 ```
-Below is updated version of the observer from above thats now using the export
+Below is updated version of the observer from above that's now using the export
 queue.
 ```java
@@ -252,7 +252,7 @@ that creates the export value.
 In the example above only one transaction will succeed because both are setting `row1 fam1:qual1`. Since adding to the export queue is part of the transaction, only the transaction that succeeds will add something to the
-queue. If the funtion ek() in the example is deterministic, then both
+queue. If the function ek() in the example is deterministic, then both
 transactions would have been trying to add the same key to the export queue. With the above method, we know that transactions adding entries to the queue for
@@ -266,7 +266,7 @@ same key.
 Both transactions succeed because they are writing to different cells (`rowB fam1:qual2` and `rowA fam1:qual2`). This approach makes it more difficult to reason about export entries with the same key, because the transactions adding those entries could have overlapped in time. This is an
-example of write skew mentioned in the Percolater paper.
+example of write skew mentioned in the Percolator paper.
 1. TH1 : key1 = ek(`row1`,`fam1:qual1`)
 1. TH1 : val1 = ev(tx1.get(`row1`,`fam1:qual1`), tx1.get(`rowA`,`fam1:qual2`))
diff --git a/docs/fluo-recipes/1.0.0-beta-2/index.md b/docs/fluo-recipes/1.0.0-beta-2/index.md
index d8888ba..8d5f313 100644
--- a/docs/fluo-recipes/1.0.0-beta-2/index.md
+++ b/docs/fluo-recipes/1.0.0-beta-2/index.md
@@ -52,19 +52,19 @@ Below are Maven dependencies for Fluo Recipes.
     <artifactId>fluo-recipes-kryo</artifactId>
     <version>${fluo-recipes.version}</version>
   </dependency>
-  <!--optional dependency assist w/ intergrating Accumulo and Fluo -->
+  <!--optional dependency assist w/ integrating Accumulo and Fluo -->
   <dependency>
     <groupId>io.fluo</groupId>
     <artifactId>fluo-recipes-accumulo</artifactId>
     <version>${fluo-recipes.version}</version>
   </dependency>
-  <!--optional dependency assist w/ intergrating Spark and Fluo -->
+  <!--optional dependency assist w/ integrating Spark and Fluo -->
   <dependency>
     <groupId>io.fluo</groupId>
     <artifactId>fluo-recipes-spark</artifactId>
     <version>${fluo-recipes.version}</version>
   </dependency>
-  <!--optional dependency helps when write Fluo intergeration test. -->
+  <!--optional dependency helps when write Fluo integration test. -->
   <dependency>
     <groupId>io.fluo</groupId>
     <artifactId>fluo-recipes-test</artifactId>
diff --git a/docs/fluo-recipes/1.0.0-beta-2/table-optimization.md b/docs/fluo-recipes/1.0.0-beta-2/table-optimization.md
index 3dbf444..dfcf9e0 100644
--- a/docs/fluo-recipes/1.0.0-beta-2/table-optimization.md
+++ b/docs/fluo-recipes/1.0.0-beta-2/table-optimization.md
@@ -42,7 +42,7 @@ selective optimizations is need look into using the following methods instead.
 ## Command Example
 Fluo Recipes provides an easy way to optimize a Fluo table for configured
-recipes from the command line. This should be done after configuring reciped
+recipes from the command line. This should be done after configuring recipes
 and initializing Fluo. Below are example command for initializing in this way.
 ```bash
diff --git a/docs/fluo-recipes/1.0.0-beta-2/testing.md b/docs/fluo-recipes/1.0.0-beta-2/testing.md
index e2462bd..4cc5834 100644
--- a/docs/fluo-recipes/1.0.0-beta-2/testing.md
+++ b/docs/fluo-recipes/1.0.0-beta-2/testing.md
@@ -3,7 +3,7 @@ layout: recipes-doc
 title: Testing
 version: 1.0.0-beta-2
 ---
-Fluo includes MiniFluo which makes it possible to write an integeration test that
+Fluo includes MiniFluo which makes it possible to write an integration test that
 runs against a real Fluo instance. Fluo Recipes provides the following utility code for writing an integration test.
diff --git a/docs/fluo-recipes/1.0.0-beta-2/transient.md b/docs/fluo-recipes/1.0.0-beta-2/transient.md
index 4d978fc..bef6f26 100644
--- a/docs/fluo-recipes/1.0.0-beta-2/transient.md
+++ b/docs/fluo-recipes/1.0.0-beta-2/transient.md
@@ -6,7 +6,7 @@ version: 1.0.0-beta-2
 ## Background
 Some recipes store transient data in a portion of the Fluo table. Transient
-data is data thats continually being added and deleted. Also these transient
+data is data that's continually being added and deleted. Also these transient
 data ranges contain no long term data. The way Fluo works, when data is deleted a delete marker is inserted but the data is actually still there. Over time these transient ranges of the table will have a lot more delete markers
@@ -14,7 +14,7 @@ than actual data if nothing is done.
 If nothing is done, then processing transient data will get increasingly slower over time.
 These deleted markers can be cleaned up by forcing Accumulo to compact the
-Fluo table, which will run Fluos garbage collection iterator. However,
+Fluo table, which will run Fluo's garbage collection iterator. However,
 compacting the entire table to clean up these ranges within a table is overkill. Alternatively, Accumulo supports compacting ranges of a table. So a good solution to the delete marker problem is to periodically compact just
@@ -36,7 +36,7 @@ TransientRegistry transientRegistry = new TransientRegistry(fluoConfig.getAppCon
 transientRegistry.addTransientRange(new RowRange(startRow, endRow));
 //Initialize Fluo using fluoConfig. This will store the registered ranges in
-//zookeeper making them availiable on any node later.
+//zookeeper making them available on any node later.
 ```
 ## Compacting Transient Ranges
@@ -59,7 +59,7 @@ fluo exec <app name> io.fluo.recipes.accumulo.cmds.CompactTransient [<interval>
 ```
 If no arguments are specified the command will call `compactTransient()` once.
-If only `<interval>` is specied the command will loop forever calling
+If only `<interval>` is specified, the command will loop forever calling
 `compactTransient()` sleeping `<interval>` seconds between calls.
 If `<count>` is additionally specified then the command will only loop `<count>` times.
diff --git a/docs/fluo-recipes/1.0.0-incubating/cfm.md b/docs/fluo-recipes/1.0.0-incubating/cfm.md
index e23ae38..6a77a42 100644
--- a/docs/fluo-recipes/1.0.0-incubating/cfm.md
+++ b/docs/fluo-recipes/1.0.0-incubating/cfm.md
@@ -192,7 +192,7 @@ public class WordCountMap {
       }
       if (sum == 0) {
-        //returning absent will cause the collision free map to delte the current key
+        //returning absent will cause the collision free map to delete the current key
         return Optional.absent();
       } else {
         return Optional.of(sum);
@@ -232,7 +232,7 @@ This recipe makes two important guarantees about updates for a key when it
 calls `updatingValues()` on an `UpdateObserver`.
 * The new value reported for an update will be derived from combining all
-  updates that were committed before the transaction thats processing updates
+  updates that were committed before the transaction that's processing updates
   started. The implementation may have to make multiple passes over queued updates to achieve this. In the situation where TX1 queues a `+1` and later TX2 queues a `-1` for the same key, there is no need to worry about only seeing
diff --git a/docs/fluo-recipes/1.0.0-incubating/export-queue.md b/docs/fluo-recipes/1.0.0-incubating/export-queue.md
index e8d0412..fe25863 100644
--- a/docs/fluo-recipes/1.0.0-incubating/export-queue.md
+++ b/docs/fluo-recipes/1.0.0-incubating/export-queue.md
@@ -32,7 +32,7 @@ public class MyObserver extends AbstractObserver {
   private static final TYPEL = new TypeLayer(new StringEncoder());
-  //reperesents a Query system extrnal to Fluo that is updated by Fluo
+  //represents a Query system external to Fluo that is updated by Fluo
   QuerySystem querySystem;
   @Override
@@ -158,7 +158,7 @@ application.
 //initialize Fluo using fluoConfig
 ```
-Below is updated version of the observer from above thats now using the export
+Below is updated version of the observer from above that's now using the export
 queue.
 ```java
@@ -257,7 +257,7 @@ that creates the export value.
 In the example above only one transaction will succeed because both are setting `row1 fam1:qual1`. Since adding to the export queue is part of the transaction, only the transaction that succeeds will add something to the
-queue. If the funtion ek() in the example is deterministic, then both
+queue. If the function ek() in the example is deterministic, then both
 transactions would have been trying to add the same key to the export queue. With the above method, we know that transactions adding entries to the queue for
@@ -271,7 +271,7 @@ same key.
 Both transactions succeed because they are writing to different cells (`rowB fam1:qual2` and `rowA fam1:qual2`). This approach makes it more difficult to reason about export entries with the same key, because the transactions adding those entries could have overlapped in time. This is an
-example of write skew mentioned in the Percolater paper.
+example of write skew mentioned in the Percolator paper.
 1. TH1 : key1 = ek(`row1`,`fam1:qual1`)
 1. TH1 : val1 = ev(tx1.get(`row1`,`fam1:qual1`), tx1.get(`rowA`,`fam1:qual2`))
diff --git a/docs/fluo-recipes/1.0.0-incubating/table-optimization.md b/docs/fluo-recipes/1.0.0-incubating/table-optimization.md
index 8ae7f54..8b87c00 100644
--- a/docs/fluo-recipes/1.0.0-incubating/table-optimization.md
+++ b/docs/fluo-recipes/1.0.0-incubating/table-optimization.md
@@ -38,7 +38,7 @@ TableOperations.optimizeTable(fluoConf);
 ## Command Example
 Fluo Recipes provides an easy way to optimize a Fluo table for configured
-recipes from the command line. This should be done after configuring reciped
+recipes from the command line. This should be done after configuring recipes
 and initializing Fluo. Below are example command for initializing in this way.
 ```bash
@@ -57,8 +57,8 @@ fluo exec app1 org.apache.fluo.recipes.accumulo.cmds.OptimizeTable
 ## Table optimization registry
-Recipes register themself by calling [TableOptimizations.registerOptimization()][1]. Anyone can use
-this mechanism, its not limited to use by exisitng recipes.
+Recipes register themselves by calling [TableOptimizations.registerOptimization()][1]. Anyone can use
+this mechanism, its not limited to use by existing recipes.
 [1]: {{ site.api_static }}/fluo-recipes-core/1.0.0-incubating/org/apache/fluo/recipes/core/common/TableOptimizations.html
 [2]: {{ site.api_static }}/fluo-recipes-accumulo/1.0.0-incubating/org/apache/fluo/recipes/accumulo/ops/TableOperations.html
diff --git a/docs/fluo-recipes/1.0.0-incubating/testing.md b/docs/fluo-recipes/1.0.0-incubating/testing.md
index 2fb31af..bd6c555 100644
--- a/docs/fluo-recipes/1.0.0-incubating/testing.md
+++ b/docs/fluo-recipes/1.0.0-incubating/testing.md
@@ -3,7 +3,7 @@ layout: recipes-doc
 title: Testing
 version: 1.0.0-incubating
 ---
-Fluo includes MiniFluo which makes it possible to write an integeration test that
+Fluo includes MiniFluo which makes it possible to write an integration test that
 runs against a real Fluo instance. Fluo Recipes provides the following utility code for writing an integration test.
diff --git a/docs/fluo-recipes/1.0.0-incubating/transient.md b/docs/fluo-recipes/1.0.0-incubating/transient.md
index e6b2914..5ff4517 100644
--- a/docs/fluo-recipes/1.0.0-incubating/transient.md
+++ b/docs/fluo-recipes/1.0.0-incubating/transient.md
@@ -6,7 +6,7 @@ version: 1.0.0-incubating
 ## Background
 Some recipes store transient data in a portion of the Fluo table. Transient
-data is data thats continually being added and deleted. Also these transient
+data is data that's continually being added and deleted. Also these transient
 data ranges contain no long term data. The way Fluo works, when data is deleted a delete marker is inserted but the data is actually still there. Over time these transient ranges of the table will have a lot more delete markers
@@ -14,7 +14,7 @@ than actual data if nothing is done.
 If nothing is done, then processing transient data will get increasingly slower over time.
 These deleted markers can be cleaned up by forcing Accumulo to compact the
-Fluo table, which will run Fluos garbage collection iterator. However,
+Fluo table, which will run Fluo's garbage collection iterator. However,
 compacting the entire table to clean up these ranges within a table is overkill. Alternatively, Accumulo supports compacting ranges of a table. So a good solution to the delete marker problem is to periodically compact just
@@ -36,7 +36,7 @@ TransientRegistry transientRegistry = new TransientRegistry(fluoConfig.getAppCon
 transientRegistry.addTransientRange(new RowRange(startRow, endRow));
 //Initialize Fluo using fluoConfig. This will store the registered ranges in
-//zookeeper making them availiable on any node later.
+//zookeeper making them available on any node later.
 ```
 ## Compacting Transient Ranges
diff --git a/docs/fluo-recipes/1.1.0-incubating/combine-queue.md b/docs/fluo-recipes/1.1.0-incubating/combine-queue.md
index 08d4e58..4f1d211 100644
--- a/docs/fluo-recipes/1.1.0-incubating/combine-queue.md
+++ b/docs/fluo-recipes/1.1.0-incubating/combine-queue.md
@@ -196,7 +196,7 @@ This recipe makes two important guarantees about updates for a key when it
 calls `process()` on a [ChangeObserver].
 * The new value reported for an update will be derived from combining all
-  updates that were committed before the transaction thats processing updates
+  updates that were committed before the transaction that's processing updates
   started. The implementation may have to make multiple passes over queued updates to achieve this. In the situation where TX1 queues a `+1` and later TX2 queues a `-1` for the same key, there is no need to worry about only seeing
diff --git a/docs/fluo-recipes/1.1.0-incubating/export-queue.md b/docs/fluo-recipes/1.1.0-incubating/export-queue.md
index 3ee379f..ec5ebd3 100644
--- a/docs/fluo-recipes/1.1.0-incubating/export-queue.md
+++ b/docs/fluo-recipes/1.1.0-incubating/export-queue.md
@@ -31,7 +31,7 @@ public class MyObserver implements StringObserver {
   static final Column UPDATE_COL = new Column("meta", "numUpdates");
   static final Column COUNTER_COL = new Column("meta", "counter1");
-  //reperesents a Query system extrnal to Fluo that is updated by Fluo
+  //represents a Query system external to Fluo that is updated by Fluo
   QuerySystem querySystem;
   @Override
@@ -166,7 +166,7 @@ public class FluoApp {
 }
 ```
-Below is updated version of the observer from above thats now using an export
+Below is updated version of the observer from above that's now using an export
 queue.
 ```java
@@ -272,7 +272,7 @@ that creates the export value.
 In the example above only one transaction will succeed because both are setting `row1 fam1:qual1`. Since adding to the export queue is part of the transaction, only the transaction that succeeds will add something to the
-queue. If the funtion ek() in the example is deterministic, then both
+queue. If the function ek() in the example is deterministic, then both
 transactions would have been trying to add the same key to the export queue. With the above method, we know that transactions adding entries to the queue for
@@ -286,7 +286,7 @@ same key.
 Both transactions succeed because they are writing to different cells (`rowB fam1:qual2` and `rowA fam1:qual2`). This approach makes it more difficult to reason about export entries with the same key, because the transactions adding those entries could have overlapped in time. This is an
-example of write skew mentioned in the Percolater paper.
+example of write skew mentioned in the Percolator paper.
 1. TH1 : key1 = ek(`row1`,`fam1:qual1`)
 1. TH1 : val1 = ev(tx1.get(`row1`,`fam1:qual1`), tx1.get(`rowA`,`fam1:qual2`))
diff --git a/docs/fluo-recipes/1.1.0-incubating/table-optimization.md b/docs/fluo-recipes/1.1.0-incubating/table-optimization.md
index 54f192a..218f69f 100644
--- a/docs/fluo-recipes/1.1.0-incubating/table-optimization.md
+++ b/docs/fluo-recipes/1.1.0-incubating/table-optimization.md
@@ -38,7 +38,7 @@ TableOperations.optimizeTable(fluoConf);
 ## Command Example
 Fluo Recipes provides an easy way to optimize a Fluo table for configured
-recipes from the command line. This should be done after configuring reciped
+recipes from the command line. This should be done after configuring recipes
 and initializing Fluo. Below are example command for initializing in this way.
 ```bash
@@ -57,8 +57,8 @@ fluo exec app1 org.apache.fluo.recipes.accumulo.cmds.OptimizeTable
 ## Table optimization registry
-Recipes register themself by calling [TableOptimizations.registerOptimization()][1]. Anyone can use
-this mechanism, its not limited to use by exisitng recipes.
+Recipes register themselves by calling [TableOptimizations.registerOptimization()][1]. Anyone can use
+this mechanism, its not limited to use by existing recipes.
 [1]: {{ site.api_static }}/fluo-recipes-core/1.1.0-incubating/org/apache/fluo/recipes/core/common/TableOptimizations.html
 [2]: {{ site.api_static }}/fluo-recipes-accumulo/1.1.0-incubating/org/apache/fluo/recipes/accumulo/ops/TableOperations.html
diff --git a/docs/fluo-recipes/1.1.0-incubating/testing.md b/docs/fluo-recipes/1.1.0-incubating/testing.md
index bc598d6..52ff9fd 100644
--- a/docs/fluo-recipes/1.1.0-incubating/testing.md
+++ b/docs/fluo-recipes/1.1.0-incubating/testing.md
@@ -3,7 +3,7 @@ layout: recipes-doc
 title: Testing
 version: 1.1.0-incubating
 ---
-Fluo includes MiniFluo which makes it possible to write an integeration test that
+Fluo includes MiniFluo which makes it possible to write an integration test that
 runs against a real Fluo instance. Fluo Recipes provides the following utility code for writing an integration test.
diff --git a/docs/fluo-recipes/1.1.0-incubating/transient.md b/docs/fluo-recipes/1.1.0-incubating/transient.md
index 1930270..b7265d9 100644
--- a/docs/fluo-recipes/1.1.0-incubating/transient.md
+++ b/docs/fluo-recipes/1.1.0-incubating/transient.md
@@ -6,7 +6,7 @@ version: 1.1.0-incubating
 ## Background
 Some recipes store transient data in a portion of the Fluo table. Transient
-data is data thats continually being added and deleted. Also these transient
+data is data that's continually being added and deleted. Also these transient
 data ranges contain no long term data. The way Fluo works, when data is deleted a delete marker is inserted but the data is actually still there. Over time these transient ranges of the table will have a lot more delete markers
@@ -14,7 +14,7 @@ than actual data if nothing is done.
 If nothing is done, then processing transient data will get increasingly slower over time.
 These delete markers can be cleaned up by forcing Accumulo to compact the
-Fluo table, which will run Fluos garbage collection iterator. However,
+Fluo table, which will run Fluo's garbage collection iterator. However,
 compacting the entire table to clean up these ranges within a table is overkill. Alternatively, Accumulo supports compacting ranges of a table. So a good solution to the delete marker problem is to periodically compact just
@@ -36,7 +36,7 @@ TransientRegistry transientRegistry = new TransientRegistry(fluoConfig.getAppCon
 transientRegistry.addTransientRange(new RowRange(startRow, endRow));
 //Initialize Fluo using fluoConfig. This will store the registered ranges in
-//zookeeper making them availiable on any node later.
+//zookeeper making them available on any node later.
 ```
 ## Compacting Transient Ranges
diff --git a/docs/fluo/1.0.0-beta-1/index.md b/docs/fluo/1.0.0-beta-1/index.md
index 2ed4bee..5cd9841 100644
--- a/docs/fluo/1.0.0-beta-1/index.md
+++ b/docs/fluo/1.0.0-beta-1/index.md
@@ -5,7 +5,7 @@ redirect_from: /docs/1.0.0-beta-1/
 version: 1.0.0-beta-1
 ---
-**Fluo is transaction layer that enables incremental processsing on big data.**
+**Fluo is transaction layer that enables incremental processing on big data.**
 Fluo is an implementation of [Percolator] built on [Accumulo] that runs in [YARN]. It is not recommended for production use yet.
diff --git a/docs/fluo/1.0.0-beta-1/metrics.md b/docs/fluo/1.0.0-beta-1/metrics.md
index 56f9a55..30ad393 100644
--- a/docs/fluo/1.0.0-beta-1/metrics.md
+++ b/docs/fluo/1.0.0-beta-1/metrics.md
@@ -73,7 +73,7 @@ shortened to `i.f`.
 Since multiple processes can report the same metrics to services like Graphite or Ganglia, each process adds a unique id. When running in yarn, this id is of the format `worker-<instance id>` or `oracle-<instance id>`. When not running
-from yarn, this id consist of a hostname and a base36 long thats unique across
+from yarn, this id consist of a hostname and a base36 long that's unique across
 all fluo processes. In the table below this composite id is represented with `<pid>`.
@@ -83,7 +83,7 @@ all fluo processes. In the table below this composite id is represented with
 |i.f.<pid>.tx.time.<cn> | [Timer][T] | *WHEN:* After each transaction. *WHAT:* Time transaction took to execute. Updated for failed and successful transactions. |
 |i.f.<pid>.tx.collisions.<cn> | [Histogram][H] | *WHEN:* After each transaction. *COND:* > 0 *WHAT:* Number of collisions a transaction had. |
 |i.f.<pid>.tx.set.<cn> | [Histogram][H] | *WHEN:* After each transaction. *WHAT:* Number of row/columns set by transaction |
-|i.f.<pid>.tx.read.<cn> | [Histogram][H] | *WHEN:* After each transaction. *WHAT:* Number of row/columns read by transaction that existed. There is currently no count of all reads (including non-existant data) |
+|i.f.<pid>.tx.read.<cn> | [Histogram][H] | *WHEN:* After each transaction. *WHAT:* Number of row/columns read by transaction that existed. There is currently no count of all reads (including nonexistent data) |
 |i.f.<pid>.tx.locks.timedout.<cn> | [Histogram][H] | *WHEN:* After each transaction. *COND:* > 0 *WHAT:* Number of timedout locks rolled back by transaction. These are locks that are held for very long periods by another transaction that appears to be alive based on zookeeper. |
 |i.f.<pid>.tx.locks.dead.<cn> | [Histogram][H] | *WHEN:* After each transaction. *COND:* > 0 *WHAT:* Number of dead locks rolled by a transaction. These are locks held by a process that appears to be dead according to zookeeper. |
 |i.f.<pid>.tx.status.<status>.<cn> | [Counter][C] | *WHEN:* After each transaction. *WHAT:* Counts for the different ways a transaction can terminate |
diff --git a/pages/api.md b/pages/api.md
index 515201a..c901b71 100644
--- a/pages/api.md
+++ b/pages/api.md
@@ -12,7 +12,7 @@ redirect_from:
 #### Apache Fluo API
-* <a href="{{ site.fluo_api_base }}/1.2.0/" target="_blank">1.2.0</a> - Februrary 26, 2018
+* <a href="{{ site.fluo_api_base }}/1.2.0/" target="_blank">1.2.0</a> - February 26, 2018
 * <a href="{{ site.fluo_api_base }}/1.1.0-incubating/" target="_blank">1.1.0-incubating</a> - June 12, 2017
 * <a href="{{ site.fluo_api_base }}/1.0.0-incubating/" target="_blank">1.0.0-incubating</a> - October 14, 2016

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: [email protected]

With regards,
Apache Git Services
