This is an automated email from the ASF dual-hosted git repository.

domgarguilo pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/accumulo-examples.git


The following commit(s) were added to refs/heads/main by this push:
     new 616f7ad  Fix typos in docs (#92)
616f7ad is described below

commit 616f7ad408345ddd03ceaeaa72c6b8e4a1665f6f
Author: Dom G <domgargu...@apache.org>
AuthorDate: Thu Mar 31 12:59:15 2022 -0400

    Fix typos in docs (#92)
---
 docs/classpath.md          | 4 ++--
 docs/compactionStrategy.md | 8 ++++----
 docs/dirlist.md            | 4 ++--
 docs/export.md             | 2 +-
 docs/isolation.md          | 4 ++--
 docs/reservations.md       | 2 +-
 docs/sample.md             | 2 +-
 docs/tabletofile.md        | 2 +-
 docs/wordcount.md          | 4 ++--
 9 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/docs/classpath.md b/docs/classpath.md
index 8e8cc28..efd37bc 100644
--- a/docs/classpath.md
+++ b/docs/classpath.md
@@ -17,7 +17,7 @@ limitations under the License.
 # Apache Accumulo Classpath Example
 
 This example shows how to use per table classpaths. The example leverages a
-test jar which contains a Filter that supresses rows containing "foo". The
+test jar which contains a Filter that suppresses rows containing "foo". The
 example shows copying the FooFilter.jar into HDFS and then making an Accumulo
 table reference that jar. For this example, a directory, `/user1/lib`, is
 assumed to exist in HDFS.
@@ -29,7 +29,7 @@ Create `/user1/lib` in HDFS if it does not exist.
 Execute the following command in the shell. Note that the `FooFilter.jar`
 is located within the Accumulo source distribution. 
 
-    $ hadoop fs -copyFromLocal /path/to/accumulo/test/src/main/resources/FooFilter.jar /user1/lib
+    $ hadoop fs -copyFromLocal /path/to/accumulo/test/src/main/resources/org/apache/accumulo/test/FooFilter.jar /user1/lib
 
 Execute following in Accumulo shell to setup classpath context
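For background, the FooFilter this doc keeps referring to is a row-suppressing iterator. A minimal sketch of that idea, assuming Accumulo's standard `Filter` base class (this is an illustration, not the FooFilter class actually shipped in the Accumulo test jar):

```java
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.iterators.Filter;

// Sketch of a row-suppressing filter in the spirit of the example's FooFilter:
// any key/value pair whose row contains "foo" is dropped from scan results.
public class FooRowFilter extends Filter {
  @Override
  public boolean accept(Key k, Value v) {
    return !k.getRow().toString().contains("foo");
  }
}
```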
 
diff --git a/docs/compactionStrategy.md b/docs/compactionStrategy.md
index 594b28d..8ae0908 100644
--- a/docs/compactionStrategy.md
+++ b/docs/compactionStrategy.md
@@ -45,10 +45,10 @@ The commands below will configure the BasicCompactionStrategy to:
  
 ```bash
 $ accumulo shell -u <username> -p <password> -e "config -t examples.test1 -s table.file.compress.type=snappy"
- $ accumulo shell -u <username> -p <password> -e "config -t test1 -s examples.table.majc.compaction.strategy=org.apache.accumulo.tserver.compaction.strategies.BasicCompactionStrategy"
- $ accumulo shell -u <username> -p <password> -e "config -t test1 -s examples.table.majc.compaction.strategy.opts.filter.size=250M"
- $ accumulo shell -u <username> -p <password> -e "config -t test1 -s examples.table.majc.compaction.strategy.opts.large.compress.threshold=100M"
- $ accumulo shell -u <username> -p <password> -e "config -t test1 -s examples.table.majc.compaction.strategy.opts.large.compress.type=gz"
+ $ accumulo shell -u <username> -p <password> -e "config -t examples.test1 -s examples.table.majc.compaction.strategy=org.apache.accumulo.tserver.compaction.strategies.BasicCompactionStrategy"
+ $ accumulo shell -u <username> -p <password> -e "config -t examples.test1 -s examples.table.majc.compaction.strategy.opts.filter.size=250M"
+ $ accumulo shell -u <username> -p <password> -e "config -t examples.test1 -s examples.table.majc.compaction.strategy.opts.large.compress.threshold=100M"
+ $ accumulo shell -u <username> -p <password> -e "config -t examples.test1 -s examples.table.majc.compaction.strategy.opts.large.compress.type=gz"
 ```
 
 Generate some data and files in order to test the strategy:
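The shell `config` commands in the hunk above each set a per-table property. The same properties can also be set from the Java client API; a minimal sketch, assuming a `client.properties` file and that the `examples.test1` table already exists:

```java
import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;

public class SetCompressionProperty {
  public static void main(String[] args) throws Exception {
    // Equivalent of: config -t examples.test1 -s table.file.compress.type=snappy
    try (AccumuloClient client = Accumulo.newClient().from("client.properties").build()) {
      client.tableOperations().setProperty("examples.test1",
          "table.file.compress.type", "snappy");
    }
  }
}
```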
diff --git a/docs/dirlist.md b/docs/dirlist.md
index 159f7aa..800f7ac 100644
--- a/docs/dirlist.md
+++ b/docs/dirlist.md
@@ -33,7 +33,7 @@ To begin, ingest some data with Ingest.java.
 
     $ ./bin/runex dirlist.Ingest --vis exampleVis --chunkSize 100000 /local/username/workspace
 
-This may take some time if there are large files in the /local/username/workspace directory. If you use 0 instead of 100000 on the command line, the ingest will run much faster, but it will not put any file data into Accumulo (the dataTable will be empty).
+This may take some time if there are large files in the /local/username/workspace directory. If you use 0 instead of 100000 as the `chunkSize`, the ingest will run much faster, but it will not put any file data into Accumulo (the dataTable will be empty).
 Note that running this example will create tables dirTable, indexTable, and dataTable in Accumulo that you should delete when you have completed the example.
 If you modify a file or add new files in the directory ingested (e.g. /local/username/workspace), you can run Ingest again to add new information into the Accumulo tables.
 
@@ -66,7 +66,7 @@ In this example, the authorizations and visibility are set to the same value, ex
 
 ## Directory Table
 
-Here is a illustration of what data looks like in the directory table:
+Here is an illustration of what data looks like in the directory table:
 
     row colf:colq [vis]        value
     000 dir:exec [exampleVis]    true
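Given the layout illustrated above, the directory table can be read back with a plain scanner as long as the caller holds the `exampleVis` authorization. A sketch, where the table name and `client.properties` file are assumptions based on the example rather than anything in this commit:

```java
import java.util.Map;

import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class ScanDirTable {
  public static void main(String[] args) throws Exception {
    try (AccumuloClient client = Accumulo.newClient().from("client.properties").build();
        Scanner scanner =
            client.createScanner("examples.dirTable", new Authorizations("exampleVis"))) {
      // Print row, column and value, mirroring the layout shown above.
      for (Map.Entry<Key,Value> entry : scanner) {
        System.out.println(entry.getKey() + " -> " + entry.getValue());
      }
    }
  }
}
```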
diff --git a/docs/export.md b/docs/export.md
index 87d5385..178829d 100644
--- a/docs/export.md
+++ b/docs/export.md
@@ -22,7 +22,7 @@ how to use this feature.
 The shell session below shows creating a table, inserting data, and exporting
 the table. A table must be offline to export it, and it should remain offline
 for the duration of the distcp. An easy way to take a table offline without
-interuppting access to it is to clone it and take the clone offline.
+interrupting access to it is to clone it and take the clone offline.
 
     root@test15> createnamespace examples
     root@test15> createtable examples.table1
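The clone-then-offline pattern described above can also be driven from the Java API. A sketch under the assumption that `examples.table1` exists and `/tmp/table1_export` is a writable HDFS directory:

```java
import java.util.Collections;

import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;

public class ExportClone {
  public static void main(String[] args) throws Exception {
    try (AccumuloClient client = Accumulo.newClient().from("client.properties").build()) {
      // Clone the table so the original stays online, then take the clone
      // offline and export it for the distcp step described above.
      client.tableOperations().clone("examples.table1", "examples.table1_exp",
          true, Collections.emptyMap(), Collections.emptySet());
      client.tableOperations().offline("examples.table1_exp", true);
      client.tableOperations().exportTable("examples.table1_exp", "/tmp/table1_export");
    }
  }
}
```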
diff --git a/docs/isolation.md b/docs/isolation.md
index 1a37567..412478a 100644
--- a/docs/isolation.md
+++ b/docs/isolation.md
@@ -20,8 +20,8 @@ Accumulo has an isolated scanner that ensures partial changes to rows are not
 seen. Isolation is documented in ../docs/isolation.html and the user manual.
 
 InterferenceTest is a simple example that shows the effects of scanning with
-and without isolation. This program starts two threads. One threads
-continually upates all of the values in a row to be the same thing, but
+and without isolation. This program starts two threads. One thread
+continually updates all the values in a row to be the same thing, but
 different from what it used to be. The other thread continually scans the
 table and checks that all values in a row are the same. Without isolation the
 scanning thread will sometimes see different values, which is the result of
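For reference, the isolation the scanning thread needs is a one-line wrapper in the client API. A sketch, with the table name `examples.isotest` assumed for illustration:

```java
import java.util.Map;

import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.IsolatedScanner;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class IsolatedScanSketch {
  public static void main(String[] args) throws Exception {
    try (AccumuloClient client = Accumulo.newClient().from("client.properties").build();
        Scanner plain = client.createScanner("examples.isotest", Authorizations.EMPTY);
        Scanner isolated = new IsolatedScanner(plain)) {
      // With the IsolatedScanner wrapper, a row is never observed half-updated,
      // which is the difference InterferenceTest demonstrates.
      for (Map.Entry<Key,Value> entry : isolated) {
        System.out.println(entry.getKey() + " -> " + entry.getValue());
      }
    }
  }
}
```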
diff --git a/docs/reservations.md b/docs/reservations.md
index 0b682ba..4ce6672 100644
--- a/docs/reservations.md
+++ b/docs/reservations.md
@@ -20,7 +20,7 @@ This example shows running a simple reservation system implemented using
 conditional mutations. This system guarantees that only one concurrent user can
 reserve a resource. The example's reserve command allows multiple users to be
 specified. When this is done, it creates a separate reservation thread for each
-user. In the example below threads are spun up for alice, bob, eve, mallory,
+user. In the example below, threads are spun up for alice, bob, eve, mallory,
 and trent to reserve room06 on 20140101. Bob ends up getting the reservation
 and everyone else is put on a wait list. The example code will take any string
 for what, when and who.
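A reservation here boils down to a conditional mutation: the write only succeeds if nobody already holds the slot. A bare-bones sketch of that primitive (the table name and row/column layout are illustrative, not the example's actual schema):

```java
import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.ConditionalWriter;
import org.apache.accumulo.core.client.ConditionalWriterConfig;
import org.apache.accumulo.core.data.Condition;
import org.apache.accumulo.core.data.ConditionalMutation;

public class ReserveSketch {
  public static void main(String[] args) throws Exception {
    try (AccumuloClient client = Accumulo.newClient().from("client.properties").build();
        ConditionalWriter writer =
            client.createConditionalWriter("examples.ars", new ConditionalWriterConfig())) {
      // Only write the reservation if the "tx:seq" column is currently absent,
      // i.e. nobody has reserved room06 on 20140101 yet.
      ConditionalMutation cm = new ConditionalMutation("room06:20140101");
      cm.addCondition(new Condition("tx", "seq"));
      cm.put("tx", "seq", "1");
      cm.put("res", "0001", "bob");
      ConditionalWriter.Status status = writer.write(cm).getStatus();
      System.out.println("reservation attempt: " + status); // ACCEPTED or REJECTED
    }
  }
}
```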
diff --git a/docs/sample.md b/docs/sample.md
index e631f12..bdf5be6 100644
--- a/docs/sample.md
+++ b/docs/sample.md
@@ -71,7 +71,7 @@ In order to make scanning the sample fast, sample data is partitioned as data is
 written to Accumulo.  This means if the sample configuration is changed, that
 data written previously is partitioned using a different criteria.  Accumulo
 will detect this situation and fail sample scans.  The commands below show this
-failure and fixiing the problem with a compaction.
+failure and fixing the problem with a compaction.
 
    root@instance examples.sampex> config -t examples.sampex -s table.sampler.opt.modulus=2
     root@instance examples.sampex> scan --sample
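The failure shown above comes from files on disk being partitioned under an older sampler configuration. Programmatically, the configuration change and the repairing compaction look roughly like this; a sketch assuming the RowSampler and options used by the sample example:

```java
import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.admin.CompactionConfig;
import org.apache.accumulo.core.client.sample.RowSampler;
import org.apache.accumulo.core.client.sample.SamplerConfiguration;

public class ResampleSketch {
  public static void main(String[] args) throws Exception {
    try (AccumuloClient client = Accumulo.newClient().from("client.properties").build()) {
      // Change the sampler configuration (the setProperty-style equivalent of
      // table.sampler.opt.modulus=2 in the shell command above).
      SamplerConfiguration sc = new SamplerConfiguration(RowSampler.class.getName());
      sc.addOption("hasher", "murmur3_32"); // assumed to match the example's hasher setting
      sc.addOption("modulus", "2");
      client.tableOperations().setSamplerConfiguration("examples.sampex", sc);
      // Then compact so existing files are rewritten with the new sample partitioning.
      client.tableOperations().compact("examples.sampex", new CompactionConfig().setWait(true));
    }
  }
}
```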
diff --git a/docs/tabletofile.md b/docs/tabletofile.md
index cb9f248..c72d5b8 100644
--- a/docs/tabletofile.md
+++ b/docs/tabletofile.md
@@ -34,7 +34,7 @@ write the key/value pairs to a file in HDFS.
 
 The following will extract the rows containing the column "cf:cq":
 
-    $ ./bin/runmr mapreduce.TableToFile -t exampmles.input --columns cf:cq --output /tmp/output
+    $ ./bin/runmr mapreduce.TableToFile -t examples.input --columns cf:cq --output /tmp/output
 
     $ hadoop fs -ls /tmp/output
     Found 2 items
diff --git a/docs/wordcount.md b/docs/wordcount.md
index 8da6599..4c5a27f 100644
--- a/docs/wordcount.md
+++ b/docs/wordcount.md
@@ -18,7 +18,7 @@ limitations under the License.
 
 The WordCount example ([WordCount.java]) uses MapReduce and Accumulo to compute
 word counts for a set of documents. This is accomplished using a map-only MapReduce
-job and a Accumulo table with combiners.
+job and an Accumulo table with combiners.
 
 To run this example, create a directory in HDFS containing text files. You can
 use the Accumulo README for data:
@@ -34,7 +34,7 @@ After creating the table, run the WordCount MapReduce job with your HDFS input d
 
     $ ./bin/runmr mapreduce.WordCount -i /wc
 
-[WordCount.java] creates an Accumulo table (named with a SummingCombiner iterator
+[WordCount.java] creates an Accumulo table named with a SummingCombiner iterator
 attached to it. It runs a map-only M/R job that reads the specified HDFS directory containing text files and
 writes word counts to Accumulo table.
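Attaching the SummingCombiner that WordCount relies on can also be done from the Java client rather than the shell. A sketch, assuming a word-count table named `examples.wordCount` with counts kept in a `count` column family:

```java
import java.util.Collections;

import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.client.AccumuloClient;
import org.apache.accumulo.core.client.IteratorSetting;
import org.apache.accumulo.core.iterators.LongCombiner;
import org.apache.accumulo.core.iterators.user.SummingCombiner;

public class AttachSummingCombiner {
  public static void main(String[] args) throws Exception {
    try (AccumuloClient client = Accumulo.newClient().from("client.properties").build()) {
      IteratorSetting setting = new IteratorSetting(10, "sum", SummingCombiner.class);
      // Sum values encoded as strings for every key in the "count" column family.
      SummingCombiner.setEncodingType(setting, LongCombiner.Type.STRING);
      SummingCombiner.setColumns(setting,
          Collections.singletonList(new IteratorSetting.Column("count")));
      client.tableOperations().attachIterator("examples.wordCount", setting);
    }
  }
}
```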
 
