[accumulo-website] branch asf-site updated: Jekyll build from master:301aac7

2019-12-23 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 257f0ab  Jekyll build from master:301aac7
257f0ab is described below

commit 257f0ab38360ab82931c77d9a4ea5b70b5c85df3
Author: Mike Walch 
AuthorDate: Mon Dec 23 11:42:15 2019 -0500

Jekyll build from master:301aac7

Updated min ruby version 2.5.4
---
 index.html => blog/2019/12/16/accumulo-proxy.html | 160 +++---
 feed.xml  | 103 --
 index.html|  14 +-
 news/index.html   |   7 +
 redirects.json|   2 +-
 search_data.json  |   8 ++
 6 files changed, 135 insertions(+), 159 deletions(-)

diff --git a/index.html b/blog/2019/12/16/accumulo-proxy.html
similarity index 56%
copy from index.html
copy to blog/2019/12/16/accumulo-proxy.html
index d2d6818..76cc90b 100644
--- a/index.html
+++ b/blog/2019/12/16/accumulo-proxy.html
@@ -25,7 +25,7 @@
 https://cdn.datatables.net/v/bs/jq-2.2.3/dt-1.10.12/datatables.min.css;>
 
 
-Apache Accumulo
+Accumulo Clients in Other Programming Languages
 
 https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js" 
integrity="sha256-BbhdlvQf/xTY9gja0Dq3HiwQF8LaCRTXxZKRutelT44=" 
crossorigin="anonymous">
 https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" 
integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa"
 crossorigin="anonymous">
@@ -138,116 +138,62 @@
 
 
   
-  
-  
-
-  Apache Accumulo is a sorted, distributed key/value store that 
provides robust, scalable data storage and retrieval.
-   Download
-  https://github.com/apache/accumulo; 
role="button"> GitHub
-
-With Apache Accumulo, users can store and manage 
large data sets across a cluster. Accumulo uses https://hadoop.apache.org;>Apache Hadoop's HDFS to store its data and 
https://zookeeper.apache.org;>Apache ZooKeeper for consensus. 
While many users interact directly with Accumulo, several open source projects use Accumulo as their 
underlying store.
-To learn more about Accumulo, take the Accumulo tour, read the user manual 
and run the Accumulo https://github.com/apache/accumulo-examples;>example code. Feel free 
to contact us if you have any questions.
+  Accumulo Clients in Other Programming 
Languages
+  
+  
 
-Major Features
+Date: 16 Dec 2019
 
-
-  
-Server-side programming
-Accumulo has a programming mechanism (called Iterators) that can modify key/value 
pairs at various points in the data management process.
-  
-  
-Cell-based access control
-Every Accumulo key/value pair has its own security label 
which limits query results based off user authorizations.
-  
-
-
-  
-Designed to scale
-Accumulo runs on a cluster using one or more HDFS instances. 
Nodes can be added or removed as the amount of data stored in Accumulo 
changes.
-  
-  
-Stable
-Accumulo has a stable client API that 
follows https://semver.org;>semantic versioning. Each Accumulo 
release goes through extensive testing.
-  
-
-  
-  
-
-  
-Latest News
-
-
-
-  
-   Nov 2019
-   Checking 
API use
-  
-
-
-
-  
-   Oct 2019
-   Using Azure 
Data Lake Gen2 storage as a data store for Accumulo
-  
-
-
-
-  
-   Sep 2019
-   Using HDFS Erasure 
Coding with Accumulo
-  
-
-
-
-  
-   Sep 2019
-   Using S3 as a 
data store for Accumulo
-  
-
-
-
-  
-   Aug 2019
-   Top 10 Reasons to 
Upgrade
-  
-
-
-
- View all posts in the news archive
-
-  
-
-
-  
-
-
-  
-
-  
-  https://twitter.com/apacheaccumulo;>@ApacheAccumulo
-
-
-  
-  https://www.linkedin.com/groups/4554913/;>Apache Accumulo 
Professionals
-
-
-  
-  https://github.com/apache/accumulo;>Apache Accumulo on 
GitHub
-
-  
-
+
 
+Apache Accumulo has an https://github.com/apache/accumulo-proxy;>Accumulo Proxy that allows 
communication with Accumulo using clients written
+in languages other than Java. This blog post shows how to run the Accumulo 
Proxy process using https://github.com/apache/fluo-uno;>Uno
+and communicate with Accumulo using a Python client.

[accumulo-website] branch master updated: Updated min ruby version 2.5.4

2019-12-23 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 301aac7  Updated min ruby version 2.5.4
301aac7 is described below

commit 301aac7441c4875527091a906e9dac78fc308895
Author: Mike Walch 
AuthorDate: Mon Dec 23 11:40:12 2019 -0500

Updated min ruby version 2.5.4
---
 Gemfile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Gemfile b/Gemfile
index ef20a63..760e1a8 100644
--- a/Gemfile
+++ b/Gemfile
@@ -1,4 +1,4 @@
-ruby '>2.6'
+ruby '>2.5.4'
 source 'https://rubygems.org'
 gem 'jekyll', '>= 3.7.4'
 gem 'jekyll-redirect-from', '>= 0.13.0'



[accumulo-proxy] branch master updated: Updated Ruby client code and documentation (#15)

2019-12-20 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-proxy.git


The following commit(s) were added to refs/heads/master by this push:
 new 32e2c09  Updated Ruby client code and documentation (#15)
32e2c09 is described below

commit 32e2c09a3c00442c6ea9afd181d36934bc308e2c
Author: Mike Walch 
AuthorDate: Fri Dec 20 11:49:52 2019 -0500

Updated Ruby client code and documentation (#15)
---
 README.md  | 17 +++-
 src/main/ruby/{proxy_constants.rb => Gemfile}  | 15 ++-
 .../accumulo.gemspec}  | 20 -
 src/main/ruby/{ => accumulo/lib}/accumulo_proxy.rb |  0
 .../ruby/{ => accumulo/lib}/proxy_constants.rb |  0
 src/main/ruby/{ => accumulo/lib}/proxy_types.rb|  0
 src/main/ruby/client.rb| 47 ++
 7 files changed, 78 insertions(+), 21 deletions(-)

diff --git a/README.md b/README.md
index 63bcb56..943ed5b 100644
--- a/README.md
+++ b/README.md
@@ -54,7 +54,7 @@ thrift -r --gen <language> <thrift file>
 
 # Create an Accumulo client using Python
 
-Run the commands below to install the Python bindings and create an example 
client:
+Run the commands below to install the Python bindings and create an example 
Python client:
 
 ```bash
 mkdir accumulo-client/
@@ -68,6 +68,21 @@ vim example.py
 pipenv run python2 example.py
 ```
 
+# Create an Accumulo client using Ruby
+
+Run the command below to create an example Ruby client:
+
+```bash
+mkdir accumulo-client/
+cd accumulo-client/
+cp /path/to/accumulo-proxy/src/main/ruby/Gemfile .
+vim Gemfile # Set correct path
+cp /path/to/accumulo-proxy/src/main/ruby/client.rb .
+gem install bundler
+bundle install
+bundle exec client.rb
+```
+
 [accumulo]: https://accumulo.apache.org
 [Thrift]: https://thrift.apache.org
 [Thrift tutorial]: https://thrift.apache.org/tutorial/
diff --git a/src/main/ruby/proxy_constants.rb b/src/main/ruby/Gemfile
similarity index 81%
copy from src/main/ruby/proxy_constants.rb
copy to src/main/ruby/Gemfile
index 189a7b4..fc5ad1f 100644
--- a/src/main/ruby/proxy_constants.rb
+++ b/src/main/ruby/Gemfile
@@ -12,14 +12,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-#
-# Autogenerated by Thrift Compiler (0.12.0)
-#
-# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
-#
-
-require 'thrift'
-require 'proxy_types'
-
-module Accumulo
-end
+ruby '>2.5'
+source "https://rubygems.org"
+gem 'thrift', '0.11.0.0'
+gem 'accumulo', :path => "/path/to/accumulo-proxy/src/main/ruby/accumulo"
diff --git a/src/main/ruby/proxy_constants.rb 
b/src/main/ruby/accumulo/accumulo.gemspec
similarity index 59%
copy from src/main/ruby/proxy_constants.rb
copy to src/main/ruby/accumulo/accumulo.gemspec
index 189a7b4..c0f4ff9 100644
--- a/src/main/ruby/proxy_constants.rb
+++ b/src/main/ruby/accumulo/accumulo.gemspec
@@ -12,14 +12,16 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-#
-# Autogenerated by Thrift Compiler (0.12.0)
-#
-# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
-#
-
-require 'thrift'
-require 'proxy_types'
 
-module Accumulo
+Gem::Specification.new do |s|
+  s.name= 'accumulo'
+  s.version = '1.0.0'
+  s.date= '2019-12-18'
+  s.summary = "Accumulo Client library using Proxy"
+  s.description = "Code that lets you communicate with Accumulo using the 
Proxy"
+  s.authors = ["Apache Accumulo committers"]
+  s.email   = 'u...@accumulo.apache.org'
+  s.files   = ["lib/accumulo_proxy.rb", "lib/proxy_types.rb", 
"lib/proxy_constants.rb"]
+  s.homepage= 'https://github.com/apache/accumulo-proxy'
+  s.license = 'Apache-2.0'
 end
diff --git a/src/main/ruby/accumulo_proxy.rb 
b/src/main/ruby/accumulo/lib/accumulo_proxy.rb
similarity index 100%
rename from src/main/ruby/accumulo_proxy.rb
rename to src/main/ruby/accumulo/lib/accumulo_proxy.rb
diff --git a/src/main/ruby/proxy_constants.rb 
b/src/main/ruby/accumulo/lib/proxy_constants.rb
similarity index 100%
rename from src/main/ruby/proxy_constants.rb
rename to src/main/ruby/accumulo/lib/proxy_constants.rb
diff --git a/src/main/ruby/proxy_types.rb 
b/src/main/ruby/accumulo/lib/proxy_types.rb
similarity index 100%
rename from src/main/ruby/proxy_types.rb
rename to src/main/ruby/accumulo/lib/proxy_types.rb
diff --git a/src/main/ruby/client.rb b/src/main/ruby/client.rb
new file mode 100755
index 000..97d3998
--- /dev/null
+++ b/src/main/ruby/client.rb
@@ -0,0 +1,47 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# cont

[accumulo-website] branch master updated: Created blog post about Proxy (#214)

2019-12-18 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 60d6934  Created blog post about Proxy (#214)
60d6934 is described below

commit 60d6934f15f4a560f6f8db669d5706ea55f84076
Author: Mike Walch 
AuthorDate: Wed Dec 18 10:18:31 2019 -0500

Created blog post about Proxy (#214)
---
 _posts/blog/2019-12-16-accumulo-proxy.md | 56 
 1 file changed, 56 insertions(+)

diff --git a/_posts/blog/2019-12-16-accumulo-proxy.md 
b/_posts/blog/2019-12-16-accumulo-proxy.md
new file mode 100644
index 000..f6e8b89
--- /dev/null
+++ b/_posts/blog/2019-12-16-accumulo-proxy.md
@@ -0,0 +1,56 @@
+---
+title: Accumulo Clients in Other Programming Languages
+---
+
+Apache Accumulo has an [Accumulo Proxy] that allows communication with 
Accumulo using clients written
+in languages other than Java. This blog post shows how to run the Accumulo 
Proxy process using [Uno]
+and communicate with Accumulo using a Python client.
+
+First, clone the [Accumulo Proxy] repository.
+
+```bash
+git clone https://github.com/apache/accumulo-proxy
+```
+
+Assuming you have [Uno] set up on your machine, configure `uno.conf` to start 
the [Accumulo Proxy]
+by setting the configuration below:
+
+```
+export POST_RUN_PLUGINS="accumulo-proxy"
+export PROXY_REPO=/path/to/accumulo-proxy
+```
+
+Run the following command to set up Accumulo again. The Proxy will be started 
after Accumulo runs.
+
+```
+uno setup accumulo
+```
+
+After Accumulo is set up, you should see the following output from uno:
+
+```
+Executing post run plugin: accumulo-proxy
+Installing Accumulo Proxy at 
/path/to/fluo-uno/install/accumulo-proxy-2.0.0-SNAPSHOT
+Accumulo Proxy 2.0.0-SNAPSHOT is running
+* view logs at /path/to/fluo-uno/install/logs/accumulo-proxy/
+```
+
+Next, follow the instructions below to create a Python 2.7 client that creates 
an Accumulo table
+named `pythontest` and writes data to it:
+
+```
+mkdir accumulo-client/
+cd accumulo-client/
+pipenv --python 2.7
+pipenv install thrift
+pipenv install -e /path/to/accumulo-proxy/src/main/python
+cp /path/to/accumulo-proxy/src/main/python/example.py .
+# Edit credentials if needed
+vim example.py
+pipenv run python2 example.py
+```
+
+Verify that the table was created or data was written using `uno ashell` or 
the Accumulo monitor.
+
+[Uno]: https://github.com/apache/fluo-uno
+[Accumulo Proxy]: https://github.com/apache/accumulo-proxy
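[Editorial note: the example.py the post copies is not reproduced in this digest. As a
rough sketch, not the shipped example, a proxy client boils down to opening a framed
Thrift transport to the proxy, logging in, creating the table, and flushing a batch of
cells. The method names (login, tableExists, createTable, updateAndFlush) come from the
Accumulo proxy Thrift API; the host, port, and credentials are placeholders, and the
thrift/accumulo imports assume the bindings installed by the pipenv steps above, so the
connection logic is kept inside a function that is only called when you run it yourself.]

```python
def pythontest_cells():
    """Pure helper: updateAndFlush takes a map of row -> list of cell
    updates; modeled here as plain dicts so it can be built and tested
    without a running proxy."""
    return {b"row1": [dict(colFamily=b"cf", colQualifier=b"cq", value=b"v1")]}

def write_example(host="localhost", port=42424):
    # Imports assume `pipenv install thrift` and
    # `pipenv install -e /path/to/accumulo-proxy/src/main/python` were run.
    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TCompactProtocol
    from accumulo import AccumuloProxy
    from accumulo.ttypes import ColumnUpdate, TimeType

    transport = TTransport.TFramedTransport(TSocket.TSocket(host, port))
    transport.open()
    client = AccumuloProxy.Client(TCompactProtocol.TCompactProtocol(transport))
    login = client.login("root", {"password": "secret"})  # placeholder credentials
    if not client.tableExists(login, "pythontest"):
        client.createTable(login, "pythontest", True, TimeType.MILLIS)
    cells = {row: [ColumnUpdate(**u) for u in updates]
             for row, updates in pythontest_cells().items()}
    client.updateAndFlush(login, "pythontest", cells)
    transport.close()
```

The default proxy port (42424) and the compact protocol match the stock proxy.properties
shown elsewhere in this digest; adjust both if your proxy is configured differently.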



[accumulo-proxy] branch master updated: Updated instructions in README (#12)

2019-12-17 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-proxy.git


The following commit(s) were added to refs/heads/master by this push:
 new b665bf4  Updated instructions in README (#12)
b665bf4 is described below

commit b665bf4b13182e35aeb1d7704a939e8303e2af19
Author: Mike Walch 
AuthorDate: Tue Dec 17 09:07:32 2019 -0500

Updated instructions in README (#12)

* The -c option no longer exists, as all accumulo-client.properties
  settings now reside in proxy.properties
* thrift must be installed to use the Python client
---
 README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 9740a2b..63bcb56 100644
--- a/README.md
+++ b/README.md
@@ -34,11 +34,11 @@ Thrift language binding).
 tar xzvf ./target/accumulo-proxy-2.0.0-SNAPSHOT-bin.tar.gz -C 
/path/to/install
 ```
 
-2. Edit `proxy.properties` and `accumulo-client.properties` and run the proxy.
+2. Edit `proxy.properties` and run the proxy.
 
 ```
 cd /path/to/install/accumulo-proxy-2.0.0-SNAPSHOT
-./bin/accumulo-proxy -p conf/proxy.properties -c 
$ACCUMULO_HOME/conf/accumulo-client.properties
+./bin/accumulo-proxy -p conf/proxy.properties
 ```
 
 # Build language specific bindings
@@ -60,6 +60,7 @@ Run the commands below to install the Python bindings and 
create an example clie
 mkdir accumulo-client/
 cd accumulo-client/
 pipenv --python 2.7
+pipenv install thrift
 pipenv install -e /path/to/accumulo-proxy/src/main/python
 cp /path/to/accumulo-proxy/src/main/python/example.py .
 # Edit credentials if needed



[accumulo-proxy] branch master updated: #10 - Created instructions for using proxy with python (#11)

2019-11-26 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-proxy.git


The following commit(s) were added to refs/heads/master by this push:
 new 435d845   #10 - Created instructions for using proxy with python (#11)
435d845 is described below

commit 435d845fdf98ed68198f2598feef97ee8235d161
Author: Mike Walch 
AuthorDate: Tue Nov 26 13:01:16 2019 -0500

 #10 - Created instructions for using proxy with python (#11)
---
 README.md  | 28 -
 src/main/python/.gitignore |  1 +
 src/main/python/accumulo/.gitignore|  1 +
 .../python/{ => accumulo}/AccumuloProxy-remote |  0
 src/main/python/{ => accumulo}/AccumuloProxy.py|  0
 src/main/python/{ => accumulo}/__init__.py |  0
 src/main/python/{ => accumulo}/constants.py|  0
 src/main/python/{ => accumulo}/ttypes.py   |  0
 src/main/python/example.py | 47 ++
 src/main/python/{__init__.py => setup.py}  | 11 -
 10 files changed, 86 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index ab23767..9740a2b 100644
--- a/README.md
+++ b/README.md
@@ -41,8 +41,35 @@ Thrift language binding).
 ./bin/accumulo-proxy -p conf/proxy.properties -c 
$ACCUMULO_HOME/conf/accumulo-client.properties
 ```
 
+# Build language specific bindings
+
+Bindings have been built in `src/main/` for Java, python, and ruby.
+
+Bindings for other languages can be built using the Thrift compiler. Follow 
the [Thrift tutorial]
+to install a Thrift compiler and use the following command to generate 
language bindings.
+
+```
+thrift -r --gen <language> <thrift file>
+```
+
+# Create an Accumulo client using Python
+
+Run the commands below to install the Python bindings and create an example 
client:
+
+```bash
+mkdir accumulo-client/
+cd accumulo-client/
+pipenv --python 2.7
+pipenv install -e /path/to/accumulo-proxy/src/main/python
+cp /path/to/accumulo-proxy/src/main/python/example.py .
+# Edit credentials if needed
+vim example.py
+pipenv run python2 example.py
+```
+
 [accumulo]: https://accumulo.apache.org
 [Thrift]: https://thrift.apache.org
+[Thrift tutorial]: https://thrift.apache.org/tutorial/
 [li]: https://img.shields.io/badge/license-ASL-blue.svg
 [ll]: https://www.apache.org/licenses/LICENSE-2.0
 [mi]: 
https://maven-badges.herokuapp.com/maven-central/org.apache.accumulo/accumulo-proxy/badge.svg
@@ -51,4 +78,3 @@ Thrift language binding).
 [jl]: https://www.javadoc.io/doc/org.apache.accumulo/accumulo-proxy
 [ti]: https://travis-ci.org/apache/accumulo-proxy.svg?branch=master
 [tl]: https://travis-ci.org/apache/accumulo-proxy
-
diff --git a/src/main/python/.gitignore b/src/main/python/.gitignore
new file mode 100644
index 000..c45695d
--- /dev/null
+++ b/src/main/python/.gitignore
@@ -0,0 +1 @@
+AccumuloProxy.egg-info/
diff --git a/src/main/python/accumulo/.gitignore 
b/src/main/python/accumulo/.gitignore
new file mode 100644
index 000..c45695d
--- /dev/null
+++ b/src/main/python/accumulo/.gitignore
@@ -0,0 +1 @@
+AccumuloProxy.egg-info/
diff --git a/src/main/python/AccumuloProxy-remote 
b/src/main/python/accumulo/AccumuloProxy-remote
similarity index 100%
rename from src/main/python/AccumuloProxy-remote
rename to src/main/python/accumulo/AccumuloProxy-remote
diff --git a/src/main/python/AccumuloProxy.py 
b/src/main/python/accumulo/AccumuloProxy.py
similarity index 100%
rename from src/main/python/AccumuloProxy.py
rename to src/main/python/accumulo/AccumuloProxy.py
diff --git a/src/main/python/__init__.py b/src/main/python/accumulo/__init__.py
similarity index 100%
copy from src/main/python/__init__.py
copy to src/main/python/accumulo/__init__.py
diff --git a/src/main/python/constants.py 
b/src/main/python/accumulo/constants.py
similarity index 100%
rename from src/main/python/constants.py
rename to src/main/python/accumulo/constants.py
diff --git a/src/main/python/ttypes.py b/src/main/python/accumulo/ttypes.py
similarity index 100%
rename from src/main/python/ttypes.py
rename to src/main/python/accumulo/ttypes.py
diff --git a/src/main/python/example.py b/src/main/python/example.py
new file mode 100644
index 000..cf04c74
--- /dev/null
+++ b/src/main/python/example.py
@@ -0,0 +1,47 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is dis

[accumulo-proxy] branch master updated: Fixes #5 Create tarball with scripts for running proxy (#9)

2019-11-12 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-proxy.git


The following commit(s) were added to refs/heads/master by this push:
 new 7a74561  Fixes #5 Create tarball with scripts for running proxy (#9)
7a74561 is described below

commit 7a74561253313562ad8fbe256d3588bbb456b010
Author: Mike Walch 
AuthorDate: Tue Nov 12 17:17:54 2019 -0500

Fixes #5 Create tarball with scripts for running proxy (#9)
---
 README.md  |  17 +++
 pom.xml|  25 
 .../main/assemble/bin/accumulo-proxy   |  16 +--
 src/main/assemble/binary-release.xml   |  27 
 src/main/assemble/component.xml|  69 +++
 .../main/assemble/conf/proxy-env.sh|  14 +--
 src/main/assemble/conf/proxy.properties| 137 +
 src/main/java/org/apache/accumulo/proxy/Proxy.java |  13 +-
 8 files changed, 293 insertions(+), 25 deletions(-)

diff --git a/README.md b/README.md
index 5468b42..ab23767 100644
--- a/README.md
+++ b/README.md
@@ -24,6 +24,23 @@ an Apache [Thrift] service so that users can use their 
preferred programming
 language to communicate with Accumulo (provided that language has a supported
 Thrift language binding).
 
+# Running the Accumulo proxy
+
+1. Build the proxy tarball and install it.
+
+```
+cd /path/to/accumulo-proxy
+mvn clean package -Ptarball
+tar xzvf ./target/accumulo-proxy-2.0.0-SNAPSHOT-bin.tar.gz -C 
/path/to/install
+```
+
+2. Edit `proxy.properties` and `accumulo-client.properties` and run the proxy.
+
+```
+cd /path/to/install/accumulo-proxy-2.0.0-SNAPSHOT
+./bin/accumulo-proxy -p conf/proxy.properties -c 
$ACCUMULO_HOME/conf/accumulo-client.properties
+```
+
 [accumulo]: https://accumulo.apache.org
 [Thrift]: https://thrift.apache.org
 [li]: https://img.shields.io/badge/license-ASL-blue.svg
diff --git a/pom.xml b/pom.xml
index a1d6d26..ac9a20f 100644
--- a/pom.xml
+++ b/pom.xml
@@ -663,6 +663,31 @@
   
   
 
+  tarball
+  
+
+  
+org.apache.maven.plugins
+maven-assembly-plugin
+
+  
+binary-assembly
+
+  single
+
+package
+
+  
+
src/main/assemble/binary-release.xml
+  
+
+  
+
+  
+
+  
+
+
   thrift
   
 
diff --git a/proxy.properties b/src/main/assemble/bin/accumulo-proxy
old mode 100644
new mode 100755
similarity index 74%
copy from proxy.properties
copy to src/main/assemble/bin/accumulo-proxy
index 3bb3b28..cec23e9
--- a/proxy.properties
+++ b/src/main/assemble/bin/accumulo-proxy
@@ -1,3 +1,5 @@
+#! /usr/bin/env bash
+
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
@@ -13,10 +15,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-# Port to run proxy on
-port=42424
-# Set to true if you wish to use an Mini Accumulo Cluster
-useMiniAccumulo=false
-protocolFactory=org.apache.thrift.protocol.TCompactProtocol$Factory
-tokenClass=org.apache.accumulo.core.client.security.tokens.PasswordToken
-maxFrameSize=16M
+BIN_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
+
+cd $BIN_DIR/..
+
+. conf/proxy-env.sh
+
+java org.apache.accumulo.proxy.Proxy "${@}"
diff --git a/src/main/assemble/binary-release.xml 
b/src/main/assemble/binary-release.xml
new file mode 100644
index 000..bc9862f
--- /dev/null
+++ b/src/main/assemble/binary-release.xml
@@ -0,0 +1,27 @@
+
+
+http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+  
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2
 http://maven.apache.org/xsd/assembly-1.1.2.xsd;>
+  bin
+  
+tar.gz
+  
+  
+src/main/assemble/component.xml
+  
+
diff --git a/src/main/assemble/component.xml b/src/main/assemble/component.xml
new file mode 100644
index 000..4dca57d
--- /dev/null
+++ b/src/main/assemble/component.xml
@@ -0,0 +1,69 @@
+
+
+http://maven.apache.org/plugins/maven-assembly-plugin/component/1.1.2; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+  
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/component/1.1.2
 http://maven.apache.org/xsd/component-1.1.2.xsd;>
+  
+
+  lib
+  0755
+  0644
+  true
+  false
+  
+
+${groupId}:${artifactId}
+ 

[accumulo-website] branch asf-site updated: Jekyll build from master:c010c61

2019-04-25 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ba6ca42  Jekyll build from master:c010c61
ba6ca42 is described below

commit ba6ca42499c0142fe79887f407bb2db3b80db80d
Author: Mike Walch 
AuthorDate: Thu Apr 25 16:06:04 2019 -0400

Jekyll build from master:c010c61

Created blog post and updated docs (#175)
---
 .../2019/04/24/using-spark-with-accumulo.html  | 124 +++--
 docs/2.x/development/spark.html|   8 +-
 feed.xml   | 113 ---
 index.html |  14 +--
 news/index.html|   7 ++
 redirects.json |   2 +-
 search_data.json   |  10 +-
 7 files changed, 60 insertions(+), 218 deletions(-)

diff --git a/index.html b/blog/2019/04/24/using-spark-with-accumulo.html
similarity index 56%
copy from index.html
copy to blog/2019/04/24/using-spark-with-accumulo.html
index 7cfd632..3574fbd 100644
--- a/index.html
+++ b/blog/2019/04/24/using-spark-with-accumulo.html
@@ -25,7 +25,7 @@
 https://cdn.datatables.net/v/bs/jq-2.2.3/dt-1.10.12/datatables.min.css;>
 
 
-Apache Accumulo
+Using Apache Spark with Accumulo
 
 https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js" 
integrity="sha256-BbhdlvQf/xTY9gja0Dq3HiwQF8LaCRTXxZKRutelT44=" 
crossorigin="anonymous">
 https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" 
integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa"
 crossorigin="anonymous">
@@ -136,120 +136,22 @@
 
 
   
-  
-  
-
-  Apache Accumulo is a sorted, distributed key/value store that 
provides robust, scalable data storage and retrieval.
-   Download
-  https://github.com/apache/accumulo; 
role="button"> GitHub
-
-With Apache Accumulo, users can store and manage 
large data sets across a cluster. Accumulo uses https://hadoop.apache.org;>Apache Hadoop's HDFS to store its data and 
https://zookeeper.apache.org;>Apache ZooKeeper for consensus. 
While many users interact directly with Accumulo, several open source projects use Accumulo as their 
underlying store.
-To learn more about Accumulo, take the Accumulo tour, read the user manual and run the Accumulo https://github.com/apache/accumulo-examples;>example code. Feel free 
to contact us if you have any questions.
+  Using Apache Spark with Accumulo
+  
+  
 
-Major Features
+Date: 24 Apr 2019
 
-
-   
-Server-side programming
-Accumulo has a programing meachinism (called Iterators) that can modify key/value 
pairs at various points in the data management process.
-  
-  
-Cell-based access control
-Every Accumulo key/value pair has its own security label 
which limits query results based off user authorizations.
-  
-
-
-   
-Designed to scale
-Accumulo runs on a cluster using one or more HDFS instances. 
Nodes can be added or removed as the amount of data stored in Accumulo 
changes.
-  
-  
-Stable
-Accumulo has a stable client API that 
follows https://semver.org;>semantic versioning. Each Accumulo 
release goes through extensive testing.
-  
-
-  
-  
-
-  
-Latest News
-
-
-
-  
-   Apr 2019
-   Apache Accumulo 1.9.3
-  
-
-
-
-  
-   Feb 2019
-   NoSQL Day 2019
-  
-
-
-
-  
-   Jan 2019
-   Apache Accumulo 
2.0.0-alpha-2
-  
-
-
-
-  
-   Oct 2018
-   Apache Accumulo 
2.0.0-alpha-1
-  
-
-
-
-  
-   Jul 2018
-   Apache Accumulo 1.9.2
-  
-
-
-
- View all posts in the news archive
-
-  
-
-
-  
-
-
-  
-
-  
-  https://twitter.com/apacheaccumulo;>@ApacheAccumulo
-
-
-  
-  https://www.linkedin.com/groups/4554913/;>Apache Accumulo 
Professionals
-
-
-  
-  https://github.com/apache/accumulo;>Apache Accumulo on 
GitHub
-
-
-  
-  #accumulo @ 
freenode
-
-  
-
+
 
+https://spark.apache.org/;>Apache Spark applications can read 
from and write to Accumulo tables.  To
+get started using Spark with Accumulo, checkout the Spark documentation in
+the 2.0 Accumulo user manual. The https://github.com/apache/accumulo-exam

[accumulo-website] branch master updated: Created blog post and updated docs (#175)

2019-04-25 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new c010c61  Created blog post and updated docs (#175)
c010c61 is described below

commit c010c611fefeba660ba0f16184a08636e5ab5ab1
Author: Mike Walch 
AuthorDate: Thu Apr 25 16:01:23 2019 -0400

Created blog post and updated docs (#175)
---
 _docs-2/development/spark.md|  8 
 _posts/blog/2019-04-24-using-spark-with-accumulo.md | 12 
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/_docs-2/development/spark.md b/_docs-2/development/spark.md
index e1bb251..d19b76f 100644
--- a/_docs-2/development/spark.md
+++ b/_docs-2/development/spark.md
@@ -4,7 +4,7 @@ category: development
 order: 3
 ---
 
-[Apache Spark] applications can read and write from Accumulo tables.
+[Apache Spark] applications can read from and write to Accumulo tables.
 
 Before reading this documentation, it may help to review the [MapReduce]
 documentation as API created for MapReduce jobs is used by Spark.
@@ -16,7 +16,7 @@ This documentation references code from the Accumulo [Spark 
example].
 1. Create a [shaded jar] with your Spark code and all of your dependencies 
(excluding
Spark and Hadoop). When creating the shaded jar, you should relocate Guava
as Accumulo uses a different version. The [pom.xml] in the [Spark example] 
is
-   a good reference and can be used a a starting point for a Spark application.
+   a good reference and can be used as a starting point for a Spark 
application.
 
 2. Submit the job by running `spark-submit` with your shaded jar. You should 
pass
in the location of your `accumulo-client.properties` that will be used to 
connect
@@ -43,7 +43,7 @@ JavaPairRDD data = 
sc.newAPIHadoopRDD(job.getConfiguration(),
 
 ## Writing to Accumulo table
 
-There are two ways to write an Accumulo table.
+There are two ways to write to an Accumulo table in Spark applications.
 
 ### Use a BatchWriter
 
@@ -95,7 +95,7 @@ try (AccumuloClient client = 
Accumulo.newClient().from(props).build()) {
 
 ## Reference
 
-* [Spark example] - Accumulo example application that uses Spark to read & 
write from Accumulo
+* [Spark example] - Example Spark application that reads from and writes to 
Accumulo
 * [MapReduce] - Documentation on reading/writing to Accumulo using MapReduce
 * [Apache Spark] - Spark project website
 
diff --git a/_posts/blog/2019-04-24-using-spark-with-accumulo.md 
b/_posts/blog/2019-04-24-using-spark-with-accumulo.md
new file mode 100644
index 000..9206c71
--- /dev/null
+++ b/_posts/blog/2019-04-24-using-spark-with-accumulo.md
@@ -0,0 +1,12 @@
+---
+title: "Using Apache Spark with Accumulo"
+---
+
+[Apache Spark] applications can read from and write to Accumulo tables.  To
+get started using Spark with Accumulo, checkout the [Spark 
documentation][docs] in
+the 2.0 Accumulo user manual. The [Spark example] application is a good 
starting point
+for using Spark with Accumulo.
+
+[Apache Spark]: https://spark.apache.org/
+[docs]: /docs/2.x/development/spark
+[Spark example]: https://github.com/apache/accumulo-examples/tree/master/spark



[accumulo-website] branch asf-site updated: Jekyll build from master:4f9c2aa

2019-04-25 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 3d74561  Jekyll build from master:4f9c2aa
3d74561 is described below

commit 3d74561acb87848bbfa8cb17cb13f3c8ad5ac289
Author: Mike Walch 
AuthorDate: Thu Apr 25 13:41:32 2019 -0400

Jekyll build from master:4f9c2aa

Update people.md (#173)

Added Holly Keebler as a contributor
---
 feed.xml  | 4 ++--
 people/index.html | 5 +
 redirects.json| 2 +-
 3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/feed.xml b/feed.xml
index 1d756bc..d6ecae0 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 
 https://accumulo.apache.org/
 https://accumulo.apache.org/feed.xml; rel="self" 
type="application/rss+xml"/>
-Wed, 24 Apr 2019 22:38:23 -0400
-Wed, 24 Apr 2019 22:38:23 -0400
+Thu, 25 Apr 2019 13:41:24 -0400
+Thu, 25 Apr 2019 13:41:24 -0400
 Jekyll v3.8.5
 
 
diff --git a/people/index.html b/people/index.html
index 2ba8fba..35bdb35 100644
--- a/people/index.html
+++ b/people/index.html
@@ -499,6 +499,11 @@
    
 
 
+  Holly Keebler
+  http://www.asrc.com;>Arctic Slope Regional Corp.
+  http://www.timeanddate.com/time/zones/et;>ET
+
+
   Hung Pham
   http://www.cloudera.com;>Cloudera
   http://www.timeanddate.com/time/zones/et;>ET
diff --git a/redirects.json b/redirects.json
index 9c19363..4bda3a4 100644
--- a/redirects.json
+++ b/redirects.json
@@ -1 +1 @@
-{"/release_notes/1.5.1.html":"https://accumulo.apache.org/release/accumulo-1.5.1/","/release_notes/1.6.0.html":"https://accumulo.apache.org/release/accumulo-1.6.0/","/release_notes/1.6.1.html":"https://accumulo.apache.org/release/accumulo-1.6.1/","/release_notes/1.6.2.html":"https://accumulo.apache.org/release/accumulo-1.6.2/","/release_notes/1.7.0.html":"https://accumulo.apache.org/release/accumulo-1.7.0/","/release_notes/1.5.3.html":"https://accumulo.apache.org/release/accumulo-1.5.3/;
 [...]
\ No newline at end of file
+{"/release_notes/1.5.1.html":"https://accumulo.apache.org/release/accumulo-1.5.1/","/release_notes/1.6.0.html":"https://accumulo.apache.org/release/accumulo-1.6.0/","/release_notes/1.6.1.html":"https://accumulo.apache.org/release/accumulo-1.6.1/","/release_notes/1.6.2.html":"https://accumulo.apache.org/release/accumulo-1.6.2/","/release_notes/1.7.0.html":"https://accumulo.apache.org/release/accumulo-1.7.0/","/release_notes/1.5.3.html":"https://accumulo.apache.org/release/accumulo-1.5.3/;
 [...]
\ No newline at end of file



[accumulo-website] branch master updated: Update people.md (#173)

2019-04-25 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 4f9c2aa  Update people.md (#173)
4f9c2aa is described below

commit 4f9c2aaeab8f75e7ed4bc9e34239e35ba3c80c7c
Author: hkeebler <49656678+hkeeb...@users.noreply.github.com>
AuthorDate: Thu Apr 25 13:40:07 2019 -0400

Update people.md (#173)

Added Holly Keebler as a contributor
---
 pages/people.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/pages/people.md b/pages/people.md
index 04b49bc..1864675 100644
--- a/pages/people.md
+++ b/pages/people.md
@@ -77,6 +77,7 @@ GitHub also has a [contributor list][github-contributors] 
based on commits.
 | Eugene Cheipesh |
   |   |
 | Gary Singh  | [Sabre Engineering][SABRE] 
   | [ET][ET]  |
 | Hayden Marchant |
   |   |
+| Holly Keebler   | [Arctic Slope Regional Corp.][ASRC]
   | [ET][ET]  |
 | Hung Pham   | [Cloudera][CLOUDERA]   
   | [ET][ET]  |
 | Jacob Meisler   | [Booz Allen Hamilton][BOOZ]
   | [ET][ET]  |
 | James Fiori | [Flywheel Data][FLYWHEEL]  
   | [ET][ET]  |



[accumulo-website] branch asf-site updated: Jekyll build from master:858470e

2019-04-24 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 2e30942  Jekyll build from master:858470e
2e30942 is described below

commit 2e30942eee25392159bebd3100d12cf5ca0903cb
Author: Mike Walch 
AuthorDate: Wed Apr 24 17:53:40 2019 -0400

Jekyll build from master:858470e

Created docs for using Apache Spark with Accumulo (#171)
---
 README.md  |   1 +
 docs/2.x/administration/caching.html   |   4 +-
 docs/2.x/administration/fate.html  |   4 +-
 docs/2.x/administration/in-depth-install.html  |   4 +-
 docs/2.x/administration/monitoring-metrics.html|   4 +-
 docs/2.x/administration/multivolume.html   |   4 +-
 docs/2.x/administration/replication.html   |   4 +-
 docs/2.x/administration/scan-executors.html|   4 +-
 docs/2.x/administration/upgrading.html |   4 +-
 docs/2.x/configuration/client-properties.html  |   4 +-
 docs/2.x/configuration/files.html  |   4 +-
 docs/2.x/configuration/overview.html   |   4 +-
 docs/2.x/configuration/server-properties.html  |   4 +-
 docs/2.x/development/development_tools.html|   4 +-
 docs/2.x/development/high_speed_ingest.html|   4 +-
 docs/2.x/development/iterators.html|   4 +-
 docs/2.x/development/mapreduce.html|   4 +-
 docs/2.x/development/proxy.html|   4 +-
 docs/2.x/development/sampling.html |   4 +-
 .../{high_speed_ingest.html => spark.html} | 207 ++---
 docs/2.x/development/summaries.html|   4 +-
 docs/2.x/getting-started/clients.html  |   4 +-
 docs/2.x/getting-started/design.html   |   4 +-
 docs/2.x/getting-started/features.html |   4 +-
 docs/2.x/getting-started/glossary.html |   4 +-
 docs/2.x/getting-started/quickstart.html   |   4 +-
 docs/2.x/getting-started/shell.html|   4 +-
 docs/2.x/getting-started/table_configuration.html  |   4 +-
 docs/2.x/getting-started/table_design.html |   4 +-
 docs/2.x/security/authentication.html  |   4 +-
 docs/2.x/security/authorizations.html  |   4 +-
 docs/2.x/security/kerberos.html|   4 +-
 docs/2.x/security/on-disk-encryption.html  |   4 +-
 docs/2.x/security/overview.html|   4 +-
 docs/2.x/security/permissions.html |   4 +-
 docs/2.x/security/wire-encryption.html |   4 +-
 docs/2.x/troubleshooting/advanced.html |   4 +-
 docs/2.x/troubleshooting/basic.html|   4 +-
 docs/2.x/troubleshooting/performance.html  |   4 +-
 .../troubleshooting/system-metadata-tables.html|   4 +-
 docs/2.x/troubleshooting/tools.html|   4 +-
 docs/2.x/troubleshooting/tracing.html  |   4 +-
 feed.xml   |   4 +-
 search_data.json   |   7 +
 44 files changed, 229 insertions(+), 150 deletions(-)

diff --git a/README.md b/README.md
index b6ed8f5..196cb78 100644
--- a/README.md
+++ b/README.md
@@ -49,6 +49,7 @@ The source for these tags is at 
[_plugins/links.rb](_plugins/links.rb).
 | dlink | Creates Documentation link | None
| `{% dlink getting-stared/clients %}`  
 |
 | durl  | Creates Documentation URL  | None
| `{% durl troubleshooting/performance 
%}`   |
 | ghi   | GitHub issue link  | None  | `{% ghi 100 %}` |
+| ghc   | GitHub code link  | Branch defaults to `gh_branch` setting 
in `_config.yml`. Override using `-b` | `{% ghc 
server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java %}` 
`{% ghc -b 1.9 README.md %}` |
 | jira   | Jira issue link  | None  | `{% jira ACCUMULO-1000 %}` |
 
 ## Updating property documentation
diff --git a/docs/2.x/administration/caching.html 
b/docs/2.x/administration/caching.html
index 8158ee9..9915e62 100644
--- a/docs/2.x/administration/caching.html
+++ b/docs/2.x/administration/caching.html
@@ -210,7 +210,7 @@
 
 MapReduce 
 
-Proxy 
+Spark 
 
 Development Tools 
 
@@ -218,6 +218,8 @@
 
 Summary Statistics 
 
+Proxy 
+
 High-Speed Ingest 
 
   
diff --git a/docs/2.x/administration/fate.html 
b/docs/2.x/administration/fate.html
index f303

[accumulo-website] branch master updated: Created docs for using Apache Spark with Accumulo (#171)

2019-04-24 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 858470e  Created docs for using Apache Spark with Accumulo (#171)
858470e is described below

commit 858470eece88756783aa5659cafafe5e0e92ba97
Author: Mike Walch 
AuthorDate: Wed Apr 24 17:49:39 2019 -0400

Created docs for using Apache Spark with Accumulo (#171)
---
 _docs-2/development/proxy.md |   2 +-
 _docs-2/development/spark.md | 109 +++
 2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/_docs-2/development/proxy.md b/_docs-2/development/proxy.md
index 1a5f598..9bd5a1c 100644
--- a/_docs-2/development/proxy.md
+++ b/_docs-2/development/proxy.md
@@ -1,7 +1,7 @@
 ---
 title: Proxy
 category: development
-order: 3
+order: 7
 ---
 
 The proxy API allows the interaction with Accumulo with languages other than 
Java.
diff --git a/_docs-2/development/spark.md b/_docs-2/development/spark.md
new file mode 100644
index 000..e1bb251
--- /dev/null
+++ b/_docs-2/development/spark.md
@@ -0,0 +1,109 @@
+---
+title: Spark
+category: development
+order: 3
+---
+
+[Apache Spark] applications can read from and write to Accumulo tables.
+
+Before reading this documentation, it may help to review the [MapReduce]
+documentation, as the API created for MapReduce jobs is also used by Spark.
+
+This documentation references code from the Accumulo [Spark example].
+
+## General configuration
+
+1. Create a [shaded jar] with your Spark code and all of your dependencies 
(excluding
+   Spark and Hadoop). When creating the shaded jar, you should relocate Guava
+   as Accumulo uses a different version. The [pom.xml] in the [Spark example] 
is
+   a good reference and can be used as a starting point for a Spark application.
+
+2. Submit the job by running `spark-submit` with your shaded jar. You should 
pass
+   in the location of your `accumulo-client.properties` that will be used to 
connect
+   to your Accumulo instance.
+```bash
+$SPARK_HOME/bin/spark-submit \
+  --class com.my.spark.job.MainClass \
+  --master yarn \
+  --deploy-mode client \
+  /path/to/spark-job-shaded.jar \
+  /path/to/accumulo-client.properties
+```
+
+## Reading from Accumulo table
+
+Apache Spark can read from an Accumulo table by using [AccumuloInputFormat].
+
+```java
+Job job = Job.getInstance();
+AccumuloInputFormat.configure().clientProperties(props).table(inputTable).store(job);
+JavaPairRDD<Key,Value> data = sc.newAPIHadoopRDD(job.getConfiguration(),
+    AccumuloInputFormat.class, Key.class, Value.class);
+```
+
+## Writing to Accumulo table
+
+There are two ways to write to an Accumulo table.
+
+### Use a BatchWriter
+
+Write your data to Accumulo by creating an AccumuloClient for each partition 
and writing all
+data in the partition using a BatchWriter.
+
+```java
+// Spark will automatically serialize this properties object and send it to 
each partition
+Properties props = Accumulo.newClientProperties()
+.from("/path/to/accumulo-client.properties").build();
+JavaPairRDD<Key,Value> dataToWrite = ... ;
+dataToWrite.foreachPartition(iter -> {
+  // Create client inside partition so that Spark does not attempt to 
serialize it.
+  try (AccumuloClient client = Accumulo.newClient().from(props).build();
+   BatchWriter bw = client.createBatchWriter(outputTable)) {
+iter.forEachRemaining(kv -> {
+  Key key = kv._1;
+  Value val = kv._2;
+  Mutation m = new Mutation(key.getRow());
+  m.at().family(key.getColumnFamily()).qualifier(key.getColumnQualifier())
+      .visibility(key.getColumnVisibility()).timestamp(key.getTimestamp()).put(val);
+  bw.addMutation(m);
+});
+  }
+});
+```
+
+### Using Bulk Import
+
+Partition your data and write it to RFiles. The [AccumuloRangePartitioner] 
found in the Accumulo
+Spark example can be used for partitioning data. After your data has been 
written to an output
+directory using [AccumuloFileOutputFormat] as RFiles, bulk import this 
directory into Accumulo.
+
+```java
+// Write Spark output to HDFS
+JavaPairRDD<Key,Value> dataToWrite = ... ;
+Job job = Job.getInstance();
+AccumuloFileOutputFormat.configure().outputPath(outputDir).store(job);
+Partitioner partitioner = new AccumuloRangePartitioner("3", "7");
+JavaPairRDD<Key,Value> partData = dataToWrite.repartitionAndSortWithinPartitions(partitioner);
+partData.saveAsNewAPIHadoopFile(outputDir.toString(), Key.class, Value.class,
+AccumuloFileOutputFormat.class);
+
+// Bulk import RFiles in HDFS into Accumulo
+try (AccumuloClient client = Accumulo.newClient().from(props).build()) {
+  client.tableOperations().importDirectory(outputDir.toString()).to(outputTable).load();
+}
+```
+
+## Reference
+
+* [Spark example] - Accumulo example application that uses Spark to read & write from Accumulo

[accumulo-examples] branch master updated: Remove examples.conf.template (#48)

2019-04-24 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-examples.git


The following commit(s) were added to refs/heads/master by this push:
 new 190cee5  Remove examples.conf.template (#48)
190cee5 is described below

commit 190cee5eb3ff8b234111f2ffe7a641c0e25da2c9
Author: Mike Walch 
AuthorDate: Wed Apr 24 16:45:29 2019 -0400

Remove examples.conf.template (#48)

* This file is no longer used by any example
---
 examples.conf.template | 27 ---
 1 file changed, 27 deletions(-)

diff --git a/examples.conf.template b/examples.conf.template
deleted file mode 100644
index 8563189..000
--- a/examples.conf.template
+++ /dev/null
@@ -1,27 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Properties prefixed with accumulo.examples are not general Accumulo 
properties
-# and are only used by example code in this repository. All other properties 
are
-# general Accumulo properties parsed by Accumulo's ClientConfiguration
-
-instance.zookeeper.host=localhost:2181
-instance.name=your-instance-name
-accumulo.examples.principal=root
-accumulo.examples.password=secret
-
-# Currently the examples only support authentication via username and password.
-# Kerberos authentication is currently not supported in the utility code used
-# by all of the examples.



[accumulo-website] branch master updated: Specify branch using -b with ghc tag (#172)

2019-04-24 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 5f52dae  Specify branch using -b with ghc tag (#172)
5f52dae is described below

commit 5f52dae659c997d02139dc93df05e7fddac66ac5
Author: Mike Walch 
AuthorDate: Wed Apr 24 16:44:45 2019 -0400

Specify branch using -b with ghc tag (#172)
---
 README.md | 2 +-
 _plugins/links.rb | 9 +++--
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index a8da500..196cb78 100644
--- a/README.md
+++ b/README.md
@@ -49,7 +49,7 @@ The source for these tags is at 
[_plugins/links.rb](_plugins/links.rb).
 | dlink | Creates Documentation link | None
| `{% dlink getting-stared/clients %}`  
 |
 | durl  | Creates Documentation URL  | None
| `{% durl troubleshooting/performance 
%}`   |
 | ghi   | GitHub issue link  | None  | `{% ghi 100 %}` |
-| ghc   | GitHub code link  | None  | `{% ghc 
server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java %}` |
+| ghc   | GitHub code link  | Branch defaults to `gh_branch` setting 
in `_config.yml`. Override using `-b` | `{% ghc 
server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java %}` 
`{% ghc -b 1.9 README.md %}` |
 | jira   | Jira issue link  | None  | `{% jira ACCUMULO-1000 %}` |
 
 ## Updating property documentation
diff --git a/_plugins/links.rb b/_plugins/links.rb
index 6225c49..61007a9 100755
--- a/_plugins/links.rb
+++ b/_plugins/links.rb
@@ -210,12 +210,17 @@ class GitHubCodeTag < Liquid::Tag
   end
 
   def render(context)
-path = @text.strip
-file_name = path.split('/').last
+args = @text.split(' ')
+path = args[0]
 branch = context.environments.first["page"]["gh_branch"]
 if branch.nil?
   branch = context.registers[:site].config["gh_branch"]
 end
+if args[0] == '-b'
+  branch = args[1]
+  path = args[2]
+end
+file_name = path.split('/').last
 url = "https://github.com/apache/accumulo/blob/#{branch}/#{path};
 return "[#{file_name}](#{url})"
   end
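For readers who don't follow Ruby, the `-b` handling added to `render` above boils down to: if the tag text starts with `-b`, the next token overrides the branch and the third is the path; otherwise the whole text is the path and the branch falls back to a default. An illustrative Java translation of that parsing (class and method names are invented here, not part of the site code):

```java
// Illustrative translation of the ghc Liquid tag's argument handling.
public class GhcLinkSketch {
  // Builds the markdown link the tag emits: [file](url)
  public static String render(String tagText, String defaultBranch) {
    String[] args = tagText.trim().split("\\s+");
    String branch = defaultBranch;
    String path = args[0];
    if ("-b".equals(args[0])) { // optional branch override: -b <branch> <path>
      branch = args[1];
      path = args[2];
    }
    String fileName = path.substring(path.lastIndexOf('/') + 1);
    String url = "https://github.com/apache/accumulo/blob/" + branch + "/" + path;
    return "[" + fileName + "](" + url + ")";
  }

  public static void main(String[] args) {
    System.out.println(render("-b 1.9 README.md", "master"));
    System.out.println(render("assemble/conf/accumulo-env.sh", "master"));
  }
}
```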



[accumulo-website] branch master updated: Add GitHub code link instructions to README.md

2019-04-23 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 1413edc  Add GitHub code link instructions to README.md
1413edc is described below

commit 1413edce496f0364f6b7862777eea9dc537272b8
Author: Mike Walch 
AuthorDate: Tue Apr 23 16:20:56 2019 -0400

Add GitHub code link instructions to README.md
---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index b6ed8f5..a8da500 100644
--- a/README.md
+++ b/README.md
@@ -49,6 +49,7 @@ The source for these tags is at 
[_plugins/links.rb](_plugins/links.rb).
 | dlink | Creates Documentation link | None
| `{% dlink getting-started/clients %}`  
 |
 | durl  | Creates Documentation URL  | None
| `{% durl troubleshooting/performance 
%}`   |
 | ghi   | GitHub issue link  | None  | `{% ghi 100 %}` |
+| ghc   | GitHub code link  | None  | `{% ghc 
server/tserver/src/main/java/org/apache/accumulo/tserver/TabletServer.java %}` |
 | jira   | Jira issue link  | None  | `{% jira ACCUMULO-1000 %}` |
 
 ## Updating property documentation



[accumulo-website] branch asf-site updated: Jekyll build from master:15c6a2c

2019-04-23 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 29221ca  Jekyll build from master:15c6a2c
29221ca is described below

commit 29221ca58e6905ba384ddfd59c6d72dbc25e4575
Author: Mike Walch 
AuthorDate: Tue Apr 23 15:21:34 2019 -0400

Jekyll build from master:15c6a2c

Created ghc tag to link to GitHub code (#170)

* Added links to example configuration files
---
 docs/2.x/configuration/files.html | 30 +-
 feed.xml  |  4 ++--
 search_data.json  |  2 +-
 3 files changed, 20 insertions(+), 16 deletions(-)

diff --git a/docs/2.x/configuration/files.html 
b/docs/2.x/configuration/files.html
index 63caad6..e9e02fd 100644
--- a/docs/2.x/configuration/files.html
+++ b/docs/2.x/configuration/files.html
@@ -423,17 +423,19 @@
 
 accumulo.properties
 
-Configures Accumulo server processes using server properties.
-This file can be found in the conf/ 
direcory. It is needed on every host that runs Accumulo processes.
-Therfore, any configuration should be replicated to all hosts of the Accumulo 
cluster. If a property is not
-configured here, it might have been configured another way.  See the
-quick 
start for help with configuring this file.
+The https://github.com/apache/accumulo/blob/master/assemble/conf/accumulo.properties;>accumulo.properties
 file configures Accumulo server processes using
+server properties. 
This file can be found in the conf/
+directory. It is needed on every host that runs Accumulo processes. Therefore, 
any configuration should be
+replicated to all hosts of the Accumulo cluster. If a property is not 
configured here, it might have been
+configured another way.  See 
the quick 
start for help with
+configuring this file.
 
 accumulo-client.properties
 
-Configures Accumulo client processes using client properties.
-If run accumulo shell without 
arguments, the Accumulo connection information in this file will be used.
-This file can be used to create an AccumuloClient in Java using the following 
code:
+The accumulo-client.properties file 
configures Accumulo client processes using
+client properties. If 
accumulo shell is run without arguments,
+the Accumulo connection information in this file will be used. This file can 
be used to create an AccumuloClient
+in Java using the following code:
 
 AccumuloClient client = Accumulo.newClient()
.from("/path/to/accumulo-client.properties").build();
@@ -443,22 +445,24 @@ This file can be used to create an AccumuloClient in Java 
using the following co
 
 accumulo-env.sh
 
-Configures the Java classpath and JVM options needed to run Accumulo 
processes. See the [quick install]
-for help with configuring this file.
+The https://github.com/apache/accumulo/blob/master/assemble/conf/accumulo-env.sh;>accumulo-env.sh
 file configures the Java classpath and JVM options needed to run
+Accumulo processes. See the [quick install] for help with configuring this 
file.
 
 Log configuration files
 
 log4j-service.properties
 
-Configures logging for most Accumulo services (i.e Master, Tablet Server, Garbage 
Collector) except for the Monitor.
+The https://github.com/apache/accumulo/blob/master/assemble/conf/log4j-service.properties;>log4j-service.properties
 file configures logging for most Accumulo services
+(i.e., Master, Tablet Server, Garbage 
Collector) except for the Monitor.
 
 log4j-monitor.properties
 
-Configures logging for the Monitor.
+The https://github.com/apache/accumulo/blob/master/assemble/conf/log4j-monitor.properties;>log4j-monitor.properties
 file configures logging for the Monitor.
 
 log4j.properties
 
-Configures logging for Accumulo commands (i.e accumulo init, accumulo shell, etc).
+The https://github.com/apache/accumulo/blob/master/assemble/conf/log4j.properties;>log4j.properties
 file configures logging for Accumulo commands (i.e., accumulo init,
+accumulo shell, etc).
 
 Host files
 
diff --git a/feed.xml b/feed.xml
index 0e76c1e..e11524d 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 
 https://accumulo.apache.org/
 https://accumulo.apache.org/feed.xml; rel="self" 
type="application/rss+xml"/>
-Tue, 23 Apr 2019 13:52:10 -0400
-Tue, 23 Apr 2019 13:52:10 -0400
+Tue, 23 Apr 2019 15:21:26 -0400
+Tue, 23 Apr 2019 15:21:26 -0400
 Jekyll v3.7.3
 
 
diff --git a/search_data.json b/search_data.json
index 69d7fe5..292350d 100644
--- a/search_data.json
+++ b/search_data.json
@@ -65,7 +65,7 @@
   
 "docs-2-x-configuration-files": {
   "title": "Configuration Files",
-  "content" : "Accumulo has the following configuration files 
which can be found in theconf/ directory o

[accumulo-website] branch master updated: Created ghc tag to link to GitHub code (#170)

2019-04-23 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 15c6a2c  Created ghc tag to link to GitHub code (#170)
15c6a2c is described below

commit 15c6a2cc819ba1af683e5c9c413caf7878805a52
Author: Mike Walch 
AuthorDate: Tue Apr 23 15:18:56 2019 -0400

Created ghc tag to link to GitHub code (#170)

* Added links to example configuration files
---
 _config.yml|  1 +
 _docs-2/configuration/files.md | 30 +-
 _plugins/links.rb  | 18 ++
 3 files changed, 36 insertions(+), 13 deletions(-)

diff --git a/_config.yml b/_config.yml
index 9998ce4..b5e607c 100644
--- a/_config.yml
+++ b/_config.yml
@@ -72,6 +72,7 @@ defaults:
   docs_baseurl: "/docs/2.x"
   javadoc_base: "https://static.javadoc.io/org.apache.accumulo;
   skiph1fortitle: "true"
+  gh_branch: "master"
 
 #whitelist: [jekyll-redirect-from]
 #plugins_dir: ./_plugins
diff --git a/_docs-2/configuration/files.md b/_docs-2/configuration/files.md
index a9558d5..541b981 100644
--- a/_docs-2/configuration/files.md
+++ b/_docs-2/configuration/files.md
@@ -9,17 +9,19 @@ Accumulo has the following configuration files which can be 
found in the
 
 ## accumulo.properties
 
-Configures Accumulo server processes using [server properties]({% durl 
configuration/server-properties %}).
-This file can be found in the `conf/` direcory. It is needed on every host 
that runs Accumulo processes.
-Therfore, any configuration should be replicated to all hosts of the Accumulo 
cluster. If a property is not
-configured here, it might have been [configured another way]({% durl 
configuration/overview %}).  See the
-[quick start] for help with configuring this file.
+The {% ghc assemble/conf/accumulo.properties %} file configures Accumulo 
server processes using
+[server properties]({% durl configuration/server-properties %}). This file can 
be found in the `conf/`
+directory. It is needed on every host that runs Accumulo processes. Therefore, 
any configuration should be
+replicated to all hosts of the Accumulo cluster. If a property is not 
configured here, it might have been
+[configured another way]({% durl configuration/overview %}).  See the [quick 
start] for help with
+configuring this file.
 
 ## accumulo-client.properties
 
-Configures Accumulo client processes using [client properties]({% durl 
configuration/client-properties %}).
-If run `accumulo shell` without arguments, the Accumulo connection information 
in this file will be used.
-This file can be used to create an AccumuloClient in Java using the following 
code:
+The `accumulo-client.properties` file configures Accumulo client processes 
using
+[client properties]({% durl configuration/client-properties %}). If `accumulo 
shell` is run without arguments,
+the Accumulo connection information in this file will be used. This file can 
be used to create an AccumuloClient
+in Java using the following code:
 
 ```java
 AccumuloClient client = Accumulo.newClient()
@@ -30,22 +32,24 @@ See the [quick start] for help with configuring this file.
 
 ## accumulo-env.sh
 
-Configures the Java classpath and JVM options needed to run Accumulo 
processes. See the [quick install]
-for help with configuring this file. 
+The {% ghc assemble/conf/accumulo-env.sh %} file configures the Java classpath 
and JVM options needed to run
+Accumulo processes. See the [quick install] for help with configuring this 
file.
 
 ## Log configuration files
 
 ### log4j-service.properties
 
-Configures logging for most Accumulo services (i.e [Master], [Tablet Server], 
[Garbage Collector]) except for the Monitor.
+The {% ghc assemble/conf/log4j-service.properties %} file configures logging 
for most Accumulo services
+(i.e., [Master], [Tablet Server], [Garbage Collector]) except for the Monitor.
 
 ### log4j-monitor.properties
 
-Configures logging for the [Monitor].
+The {% ghc assemble/conf/log4j-monitor.properties %} file configures logging 
for the [Monitor].
 
 ### log4j.properties
 
-Configures logging for Accumulo commands (i.e `accumulo init`, `accumulo 
shell`, etc).
+The {% ghc assemble/conf/log4j.properties %} file configures logging for 
Accumulo commands (i.e., `accumulo init`,
+`accumulo shell`, etc).
 
 ## Host files
 
diff --git a/_plugins/links.rb b/_plugins/links.rb
index f227890..6225c49 100755
--- a/_plugins/links.rb
+++ b/_plugins/links.rb
@@ -203,6 +203,23 @@ class JiraTag < Liquid::Tag
   end
 end
 
+class GitHubCodeTag < Liquid::Tag
+  def initialize(tag_name, text, tokens)
+super
+@text = text
+  end
+
+  def render(context)
+path = @text.strip
+file_name = path.split('/').last
+branch = context.environments.first["page"]["gh_branch"]
+i

[accumulo-website] branch asf-site updated: Jekyll build from master:51b6963

2019-04-23 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 1c76e47  Jekyll build from master:51b6963
1c76e47 is described below

commit 1c76e47b9b04620ecfd5ec35b51fe3439ed80aa8
Author: Mike Walch 
AuthorDate: Tue Apr 23 12:19:12 2019 -0400

Jekyll build from master:51b6963

Fixed authorizations documentation
---
 docs/2.x/security/authorizations.html | 5 +++--
 docs/2.x/troubleshooting/basic.html   | 3 ++-
 feed.xml  | 4 ++--
 redirects.json| 2 +-
 search_data.json  | 4 ++--
 5 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/docs/2.x/security/authorizations.html 
b/docs/2.x/security/authorizations.html
index 3adbcfc..4c33b99 100644
--- a/docs/2.x/security/authorizations.html
+++ b/docs/2.x/security/authorizations.html
@@ -423,8 +423,9 @@
 
 Configuration
 
-Accumulo’s https://static.javadoc.io/org.apache.accumulo/accumulo-server-base/2.0.0-alpha-2/org/apache/accumulo/server/security/handler/Authorizor.html;>Authorizor
 is configured by setting instance.security.authorizer.
 The default
-authorizor is described below.
+Accumulo’s https://static.javadoc.io/org.apache.accumulo/accumulo-server-base/2.0.0-alpha-2/org/apache/accumulo/server/security/handler/Authorizor.html;>Authorizor
 is configured by setting instance.security.authorizor.
 The default
+authorizor is the https://static.javadoc.io/org.apache.accumulo/accumulo-server-base/2.0.0-alpha-2/org/apache/accumulo/server/security/handler/ZKAuthorizor.html;>ZKAuthorizor
 which is described
+below.
 
 Security Labels
 
diff --git a/docs/2.x/troubleshooting/basic.html 
b/docs/2.x/troubleshooting/basic.html
index 26013c0..7816e14 100644
--- a/docs/2.x/troubleshooting/basic.html
+++ b/docs/2.x/troubleshooting/basic.html
@@ -546,7 +546,8 @@ processes should be stable on the order of months and not 
require frequent resta
 
 Accumulo is not showing me any data!
 
-Do you have your auths set so that it matches your visibilities?
+Is your client configured with authorizations that match your visibilities? 
 See the
+Authorizations documentation 
for help.
 
 What are my visibilities?
 
diff --git a/feed.xml b/feed.xml
index 7f9dbb8..a98ccd8 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 
 https://accumulo.apache.org/
 https://accumulo.apache.org/feed.xml; rel="self" 
type="application/rss+xml"/>
-Thu, 11 Apr 2019 21:44:37 -0400
-Thu, 11 Apr 2019 21:44:37 -0400
+Tue, 23 Apr 2019 12:19:03 -0400
+Tue, 23 Apr 2019 12:19:03 -0400
 Jekyll v3.7.3
 
 
diff --git a/redirects.json b/redirects.json
index 9c19363..4bda3a4 100644
--- a/redirects.json
+++ b/redirects.json
@@ -1 +1 @@
-{"/release_notes/1.5.1.html":"https://accumulo.apache.org/release/accumulo-1.5.1/","/release_notes/1.6.0.html":"https://accumulo.apache.org/release/accumulo-1.6.0/","/release_notes/1.6.1.html":"https://accumulo.apache.org/release/accumulo-1.6.1/","/release_notes/1.6.2.html":"https://accumulo.apache.org/release/accumulo-1.6.2/","/release_notes/1.7.0.html":"https://accumulo.apache.org/release/accumulo-1.7.0/","/release_notes/1.5.3.html":"https://accumulo.apache.org/release/accumulo-1.5.3/;
 [...]
\ No newline at end of file
+{"/release_notes/1.5.1.html":"https://accumulo.apache.org/release/accumulo-1.5.1/","/release_notes/1.6.0.html":"https://accumulo.apache.org/release/accumulo-1.6.0/","/release_notes/1.6.1.html":"https://accumulo.apache.org/release/accumulo-1.6.1/","/release_notes/1.6.2.html":"https://accumulo.apache.org/release/accumulo-1.6.2/","/release_notes/1.7.0.html":"https://accumulo.apache.org/release/accumulo-1.7.0/","/release_notes/1.5.3.html":"https://accumulo.apache.org/release/accumulo-1.5.3/;
 [...]
\ No newline at end of file
diff --git a/search_data.json b/search_data.json
index 026bea6..69d7fe5 100644
--- a/search_data.json
+++ b/search_data.json
@@ -205,7 +205,7 @@
   
 "docs-2-x-security-authorizations": {
   "title": "Authorizations",
-  "content" : "In Accumulo, data is written with security labels 
that limit access to only users with the proper 
authorizations. Configuration: Accumulo's Authorizor is configured by 
setting instance.security.authorizer. The default authorizor is described 
below. Security Labels: Every Key-Value pair in Accumulo has its own security 
label, stored under the column visibility element of the key, which is used to 
determine whether a given user meets the security requirements to

[accumulo-website] branch master updated: Fixed authorizations documentation

2019-04-23 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 51b6963  Fixed authorizations documentation
51b6963 is described below

commit 51b696321e837552bf5063ec4b15601b0cc3cbdf
Author: Mike Walch 
AuthorDate: Tue Apr 23 12:15:57 2019 -0400

Fixed authorizations documentation
---
 _docs-2/security/authorizations.md | 5 +++--
 _docs-2/troubleshooting/basic.md   | 3 ++-
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/_docs-2/security/authorizations.md 
b/_docs-2/security/authorizations.md
index 5d3b456..76af658 100644
--- a/_docs-2/security/authorizations.md
+++ b/_docs-2/security/authorizations.md
@@ -9,8 +9,9 @@ In Accumulo, data is written with [security labels] that limit 
access to only us
 
 ## Configuration
 
-Accumulo's [Authorizor] is configured by setting {% plink 
instance.security.authorizer %}. The default
-authorizor is described below.
+Accumulo's [Authorizor] is configured by setting {% plink 
instance.security.authorizor %}. The default
+authorizor is the {% jlink 
org.apache.accumulo.server.security.handler.ZKAuthorizor %} which is described
+below.
 
 ## Security Labels
 
diff --git a/_docs-2/troubleshooting/basic.md b/_docs-2/troubleshooting/basic.md
index 06ef2ff..3f5c27c 100644
--- a/_docs-2/troubleshooting/basic.md
+++ b/_docs-2/troubleshooting/basic.md
@@ -128,7 +128,8 @@ processes should be stable on the order of months and not 
require frequent resta
 
 **Accumulo is not showing me any data!**
 
-Do you have your auths set so that it matches your visibilities?
+Is your client configured with authorizations that match your visibilities?  
See the
+[Authorizations documentation]({% durl security/authorizations %}) for help.
 
 **What are my visibilities?**
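The auths-versus-visibilities check that both of these doc fixes describe can be sketched without a running cluster. This is a deliberately simplified illustration: it ignores the parentheses and quoted terms that real Accumulo visibility expressions support, and `VisibilitySketch` is a hypothetical class name, not part of Accumulo's API.

```java
import java.util.Arrays;
import java.util.Set;

// Simplified sketch of Accumulo's visibility decision: a key is visible
// when at least one '|'-separated clause of its label has every '&'-joined
// term present in the scanning user's authorizations. Not Accumulo's real
// ColumnVisibility parser (no parentheses or quoting).
public class VisibilitySketch {

    static boolean isVisible(String visibility, Set<String> auths) {
        if (visibility.isEmpty()) {
            return true; // an empty label is visible to everyone
        }
        for (String clause : visibility.split("\\|")) {
            boolean allTermsHeld = Arrays.stream(clause.split("&"))
                    .allMatch(term -> auths.contains(term.trim()));
            if (allTermsHeld) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> auths = Set.of("admin", "system");
        System.out.println(isVisible("admin&audit", auths)); // false
        System.out.println(isVisible("admin|audit", auths)); // true
    }
}
```

If a scan returns no data, this is the check to reason about: the client's configured authorizations must satisfy at least one clause of each entry's label.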
 



[accumulo-examples] branch master updated: Created Accumulo/Spark example (#39)

2019-03-28 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-examples.git


The following commit(s) were added to refs/heads/master by this push:
 new 8c3264c  Created Accumulo/Spark example (#39)
8c3264c is described below

commit 8c3264ce46500ab328297a0122e9ede669980938
Author: Mike Walch 
AuthorDate: Thu Mar 28 12:22:23 2019 -0400

Created Accumulo/Spark example (#39)
---
 README.md  |   2 +
 spark/.gitignore   |   6 +
 spark/README.md|  45 ++
 spark/pom.xml  | 117 +++
 spark/run.sh   |  28 
 .../java/org/apache/accumulo/spark/CopyPlus5K.java | 157 +
 6 files changed, 355 insertions(+)

diff --git a/README.md b/README.md
index 77c91bc..0be1400 100644
--- a/README.md
+++ b/README.md
@@ -83,6 +83,7 @@ Each example below highlights a feature of Apache Accumulo.
 | [rowhash] | Using MapReduce to read a table and write to a new column in the 
same table. |
 | [sample] | Building and using sample data in Accumulo. |
 | [shard] | Using the intersecting iterator with a term index partitioned by 
document. |
+| [spark] | Using Accumulo as input and output for Apache Spark jobs. |
 | [tabletofile] | Using MapReduce to read a table and write one of its columns 
to a file in HDFS. |
 | [terasort] | Generating random data and sorting it using Accumulo. |
 | [uniquecols] | Use MapReduce to count unique columns in Accumulo |
@@ -120,6 +121,7 @@ This repository can be used to test Accumulo release 
candidates.  See
 [rowhash]: docs/rowhash.md
 [sample]: docs/sample.md
 [shard]: docs/shard.md
+[spark]: spark/README.md
 [tabletofile]: docs/tabletofile.md
 [terasort]: docs/terasort.md
 [uniquecols]: docs/uniquecols.md
diff --git a/spark/.gitignore b/spark/.gitignore
new file mode 100644
index 000..f534230
--- /dev/null
+++ b/spark/.gitignore
@@ -0,0 +1,6 @@
+/.classpath
+/.project
+/.settings/
+/target/
+/*.iml
+/.idea
diff --git a/spark/README.md b/spark/README.md
new file mode 100644
index 000..af19029
--- /dev/null
+++ b/spark/README.md
@@ -0,0 +1,45 @@
+
+# Apache Accumulo Spark Example
+
+## Requirements
+
+* Accumulo 2.0+
+* Hadoop YARN installed & `HADOOP_CONF_DIR` set in environment
+* Spark installed & `SPARK_HOME` set in environment
+
+## Spark example
+
+The [CopyPlus5K] example will create an Accumulo table called 
`spark_example_input`
+and write 100 key/value entries into Accumulo with the values `0..99`. It then 
launches
a Spark application that does the following:
+
+* Read data from `spark_example_input` table using `AccumuloInputFormat`
+* Add 5000 to each value
+* Write the data to a new Accumulo table (called `spark_example_output`) using 
one of
+  two methods.
+  1. **Bulk import** - Write data to an RFile in HDFS using 
`AccumuloFileOutputFormat` and
+ bulk import to Accumulo table
+  2. **Batchwriter** - Creates a `BatchWriter` in Spark code to write to the 
table. 
+
+This application can be run using the command:
+
+./run.sh batch /path/to/accumulo-client.properties
+
+Change `batch` to `bulk` to use the bulk import method.
+
+[CopyPlus5K]: src/main/java/org/apache/accumulo/spark/CopyPlus5K.java
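The core transformation the README describes (read values `0..99`, add 5000, write them out) can be shown as a framework-free sketch. Assumptions for illustration: values are treated as plain decimal strings and `CopyPlus5KSketch` is a hypothetical name; the real example performs this step inside a Spark job via `AccumuloInputFormat`.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Framework-free sketch of CopyPlus5K's value transformation:
// each input value (a decimal string) is shifted by 5000.
public class CopyPlus5KSketch {

    static List<String> addFiveThousand(List<String> values) {
        return values.stream()
                .map(v -> String.valueOf(Long.parseLong(v) + 5000))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // The example table holds 100 entries with values "0".."99".
        List<String> input = IntStream.range(0, 100)
                .mapToObj(String::valueOf)
                .collect(Collectors.toList());
        List<String> output = addFiveThousand(input);
        System.out.println(output.get(0));  // 5000
        System.out.println(output.get(99)); // 5099
    }
}
```

The interesting design choice is not this arithmetic but the write path: bulk import writes RFiles via `AccumuloFileOutputFormat` and imports them in one step (efficient for large batches), while the `BatchWriter` streams mutations directly to tablet servers.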
diff --git a/spark/pom.xml b/spark/pom.xml
new file mode 100644
index 000..67f5de2
--- /dev/null
+++ b/spark/pom.xml
@@ -0,0 +1,117 @@
+
+
+http://maven.apache.org/POM/4.0.0; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance; 
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd;>
+  4.0.0
+  
+org.apache
+apache
+21
+  
+  org.apache.accumulo
+  accumulo-spark
+  2.0.0-SNAPSHOT
+  Apache Accumulo Spark Example
+  Example Spark Application for Apache Accumulo
+  
+2.0.0-SNAPSHOT
+3.2.0
+1.8
+1.8
+3.4.13
+  
+  
+
+  
+org.apache.zookeeper
+zookeeper
+${zookeeper.version}
+  
+
+  
+  
+
+  org.apache.accumulo
+  accumulo-core
+  ${accumulo.version}
+
+
+  org.apache.accumulo
+  accumulo-hadoop-mapreduce
+  ${accumulo.version}
+
+
+  org.apache.hadoop
+  hadoop-client-api
+  ${hadoop.version}
+
+
+  org.apache.spark
+  spark-core_2.11
+  2.4.0
+
+  
+  
+
+  
+org.apache.maven.plugins
+maven-compiler-plugin
+  
+
+  
+  
+
+  create-shade-jar
+  
+
+  
+org.apache.maven.plugins
+maven-shade-plugin
+
+  
+spark-shade-jar
+
+  shade
+
+package
+
+  ${project.artifactId}-shaded
+  true
+   

[accumulo-examples] branch master updated: Fix integration tests (#40)

2019-03-27 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-examples.git


The following commit(s) were added to refs/heads/master by this push:
 new 310c1da  Fix integration tests (#40)
310c1da is described below

commit 310c1da6cf086adf18eb318badb25de2e6e4f641
Author: Mike Walch 
AuthorDate: Wed Mar 27 16:01:50 2019 -0400

Fix integration tests (#40)
---
 src/test/java/org/apache/accumulo/examples/ExamplesIT.java | 3 ++-
 .../java/org/apache/accumulo/examples/filedata/ChunkInputFormatIT.java | 3 ++-
 .../java/org/apache/accumulo/examples/filedata/ChunkInputStreamIT.java | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/src/test/java/org/apache/accumulo/examples/ExamplesIT.java 
b/src/test/java/org/apache/accumulo/examples/ExamplesIT.java
index 9f713ca..11fcb9c 100644
--- a/src/test/java/org/apache/accumulo/examples/ExamplesIT.java
+++ b/src/test/java/org/apache/accumulo/examples/ExamplesIT.java
@@ -34,6 +34,7 @@ import java.util.List;
 import java.util.Map.Entry;
 import java.util.concurrent.TimeUnit;
 
+import org.apache.accumulo.core.client.Accumulo;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.BatchWriter;
@@ -101,7 +102,7 @@ public class ExamplesIT extends AccumuloClusterHarness {
 
   @Before
   public void setupTest() throws Exception {
-c = createAccumuloClient();
+c = Accumulo.newClient().from(getClientProps()).build();
 String user = c.whoami();
 String instance = getClientInfo().getInstanceName();
 String keepers = getClientInfo().getZooKeepers();
diff --git 
a/src/test/java/org/apache/accumulo/examples/filedata/ChunkInputFormatIT.java 
b/src/test/java/org/apache/accumulo/examples/filedata/ChunkInputFormatIT.java
index 23790df..87ca328 100644
--- 
a/src/test/java/org/apache/accumulo/examples/filedata/ChunkInputFormatIT.java
+++ 
b/src/test/java/org/apache/accumulo/examples/filedata/ChunkInputFormatIT.java
@@ -29,6 +29,7 @@ import java.util.ArrayList;
 import java.util.List;
 import java.util.Map.Entry;
 
+import org.apache.accumulo.core.client.Accumulo;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
@@ -76,7 +77,7 @@ public class ChunkInputFormatIT extends 
AccumuloClusterHarness {
 
   @Before
   public void setupInstance() throws Exception {
-client = createAccumuloClient();
+client = Accumulo.newClient().from(getClientProps()).build();
 tableName = getUniqueNames(1)[0];
 client.securityOperations().changeUserAuthorizations(client.whoami(), 
AUTHS);
   }
diff --git 
a/src/test/java/org/apache/accumulo/examples/filedata/ChunkInputStreamIT.java 
b/src/test/java/org/apache/accumulo/examples/filedata/ChunkInputStreamIT.java
index 138867b..cf28d1d 100644
--- 
a/src/test/java/org/apache/accumulo/examples/filedata/ChunkInputStreamIT.java
+++ 
b/src/test/java/org/apache/accumulo/examples/filedata/ChunkInputStreamIT.java
@@ -25,6 +25,7 @@ import java.util.ArrayList;
 import java.util.List;
 import java.util.Map.Entry;
 
+import org.apache.accumulo.core.client.Accumulo;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
@@ -68,7 +69,7 @@ public class ChunkInputStreamIT extends 
AccumuloClusterHarness {
 
   @Before
   public void setupInstance() throws Exception {
-client = createAccumuloClient();
+client = Accumulo.newClient().from(getClientProps()).build();
 tableName = getUniqueNames(1)[0];
 client.securityOperations().changeUserAuthorizations(client.whoami(), 
AUTHS);
   }



[accumulo-testing] branch master updated: Refactored RowHash and TeraSortIngest (#68)

2019-03-21 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-testing.git


The following commit(s) were added to refs/heads/master by this push:
 new d87d448  Refactored RowHash and TeraSortIngest (#68)
d87d448 is described below

commit d87d448278d43750cc978c9f37bacaf7e8591bb9
Author: Mike Walch 
AuthorDate: Thu Mar 21 16:05:39 2019 -0400

Refactored RowHash and TeraSortIngest (#68)
---
 bin/mapred | 67 ++
 conf/accumulo-testing.properties.example   | 29 
 .../org/apache/accumulo/testing/TestProps.java | 16 +
 .../apache/accumulo/testing/mapreduce/RowHash.java | 34 -
 .../accumulo/testing/mapreduce/TeraSortIngest.java | 82 +-
 .../randomwalk/ReplicationRandomWalkIT.java|  1 -
 6 files changed, 160 insertions(+), 69 deletions(-)

diff --git a/bin/mapred b/bin/mapred
new file mode 100755
index 000..f943b45
--- /dev/null
+++ b/bin/mapred
@@ -0,0 +1,67 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+bin_dir=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
+at_home=$( cd "$( dirname "$bin_dir" )" && pwd )
+
+function print_usage() {
+  cat <<EOF
+Usage: mapred <application> {-o test.<prop>=<value>}
+
+Available applications:
+
+terasort    Run Terasort
+rowhash     Run RowHash
+EOF
+}
+
+if [ -f "$at_home/conf/env.sh" ]; then
+  . "$at_home"/conf/env.sh
+else
+  . "$at_home"/conf/env.sh.example
+fi
+
+if [ -z "$1" ]; then
+  echo "ERROR: <application> needs to be set"
+  print_usage
+  exit 1
+fi
+
+mr_package="org.apache.accumulo.testing.mapreduce"
+case "$1" in
+  terasort)
+mr_main="${mr_package}.TeraSortIngest"
+;;
+  rowhash)
+mr_main="${mr_package}.RowHash"
+;;
+  *)
+echo "Unknown application: $1"
+print_usage
+exit 1
+esac
+
+export 
CLASSPATH="$TEST_JAR_PATH:$HADOOP_API_JAR:$HADOOP_RUNTIME_JAR:$CLASSPATH"
+
+if [ -n "$HADOOP_HOME" ]; then
+  export HADOOP_USE_CLIENT_CLASSLOADER=true
+  "$HADOOP_HOME"/bin/yarn jar "$TEST_JAR_PATH" "$mr_main" "$TEST_PROPS" 
"$ACCUMULO_CLIENT_PROPS" "${@:2}" 
+else
+  echo "Hadoop must be installed and HADOOP_HOME must be set!"
+  exit 1
+fi
diff --git a/conf/accumulo-testing.properties.example 
b/conf/accumulo-testing.properties.example
index 502bcde..9dbe4e0 100644
--- a/conf/accumulo-testing.properties.example
+++ b/conf/accumulo-testing.properties.example
@@ -119,3 +119,32 @@ test.ci.bulk.map.nodes=100
 # produce a bulk import file.
 test.ci.bulk.reducers.max=1024
 
+#
+# MapReduce Tests
+#
+
+# RowHash test
+# 
+# Table containing input data
+test.rowhash.input.table = terasort
+# Table where data will be output to
+test.rowhash.output.table = rowhash
+# Column that is fetched in input table
+test.rowhash.column = c
+
+# TeraSort ingest
+# ---
+# Table to ingest into
+test.terasort.table = terasort
+# Number of rows to ingest
+test.terasort.num.rows = 1
+# Minimum key size
+test.terasort.min.keysize = 10
+# Maximum key size
+test.terasort.max.keysize = 10
+# Minimum value size
+test.terasort.min.valuesize = 78
+# Maximum value size
+test.terasort.max.valuesize = 78
+# Number of table splits
+test.terasort.num.splits = 4
diff --git a/src/main/java/org/apache/accumulo/testing/TestProps.java 
b/src/main/java/org/apache/accumulo/testing/TestProps.java
index 3f2ca15..49ea718 100644
--- a/src/main/java/org/apache/accumulo/testing/TestProps.java
+++ b/src/main/java/org/apache/accumulo/testing/TestProps.java
@@ -33,6 +33,8 @@ public class TestProps {
   private static final String CI_SCANNER = CI + "scanner.";
   private static final String CI_VERIFY = CI + "verify.";
   private static final String CI_BULK = CI + "bulk.";
+  public static final String TERASORT = PREFIX + "terasort.";
+  public static final String ROWHASH = PREFIX + "

[accumulo-docker] branch master updated: Dockerfile & README updates (#9)

2019-03-19 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-docker.git


The following commit(s) were added to refs/heads/master by this push:
 new aa2a60c  Dockerfile & README updates (#9)
aa2a60c is described below

commit aa2a60cacb052fb8e763dcb76a511e97f1228b6e
Author: Mike Walch 
AuthorDate: Tue Mar 19 16:23:09 2019 -0400

Dockerfile & README updates (#9)
---
 Dockerfile | 16 +---
 README.md  |  4 +++-
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/Dockerfile b/Dockerfile
index 551046a..0350ae0 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -23,6 +23,8 @@ ARG HADOOP_VERSION=3.1.1
 ARG ZOOKEEPER_VERSION=3.4.13
 ARG HADOOP_USER_NAME=accumulo
 ARG ACCUMULO_FILE=
+ARG HADOOP_FILE=
+ARG ZOOKEEPER_FILE=
 
 ENV HADOOP_USER_NAME $HADOOP_USER_NAME
 
@@ -33,7 +35,7 @@ ENV APACHE_DIST_URLS \
   https://www.apache.org/dist/ \
   https://archive.apache.org/dist/
 
-COPY README.md $ACCUMULO_FILE /tmp/
+COPY README.md $ACCUMULO_FILE $HADOOP_FILE $ZOOKEEPER_FILE /tmp/
 
 RUN set -eux; \
   download() { \
@@ -50,8 +52,16 @@ RUN set -eux; \
 [ -n "$success" ]; \
   }; \
   \
-  download "hadoop.tar.gz" 
"hadoop/core/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz"; \
-  download "zookeeper.tar.gz" 
"zookeeper/zookeeper-$ZOOKEEPER_VERSION/zookeeper-$ZOOKEEPER_VERSION.tar.gz"; \
+  if [ -z "$HADOOP_FILE" ]; then \
+download "hadoop.tar.gz" 
"hadoop/core/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz"; \
+  else \
+cp "/tmp/$HADOOP_FILE" "hadoop.tar.gz"; \
+  fi; \
+  if [ -z "$ZOOKEEPER_FILE" ]; then \
+download "zookeeper.tar.gz" 
"zookeeper/zookeeper-$ZOOKEEPER_VERSION/zookeeper-$ZOOKEEPER_VERSION.tar.gz"; \
+  else \
+cp "/tmp/$ZOOKEEPER_FILE" "zookeeper.tar.gz"; \
+  fi; \
   if [ -z "$ACCUMULO_FILE" ]; then \
 download "accumulo.tar.gz" 
"accumulo/$ACCUMULO_VERSION/accumulo-$ACCUMULO_VERSION-bin.tar.gz"; \
   else \
diff --git a/README.md b/README.md
index 9d34287..2a5a117 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,9 @@
 **This is currently a work in progress that depends on unreleased features of 
Accumulo and will not be ready
 for use until after Accumulo 2.0.0 is released.**  Sometime after Accumulo 
2.0.0 is released this project
will make its first release. Eventually, this project will create an 
`apache/accumulo` image at DockerHub.
-Until then, you will need to build your own image.
+Until then, you will need to build your own image. The master branch of this 
repo creates a Docker image for
+Accumulo 2.0+. If you want to create a Docker image for Accumulo 1.9, there is 
a
+[1.9 branch](https://github.com/apache/accumulo-docker/tree/1.9) for that.
 
 ## Obtain the Docker image
 



[accumulo-docker] branch 1.9 created (now f80b503)

2019-03-19 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a change to branch 1.9
in repository https://gitbox.apache.org/repos/asf/accumulo-docker.git.


  at f80b503  Support Accumulo 1.9 docker image

This branch includes the following new commits:

 new 4e8d47c  First commit
 new f80b503  Support Accumulo 1.9 docker image

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[accumulo-docker] 02/02: Support Accumulo 1.9 docker image

2019-03-19 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch 1.9
in repository https://gitbox.apache.org/repos/asf/accumulo-docker.git

commit f80b5036d909b125cab3d781dba88d933b5b5bd8
Author: Mike Walch 
AuthorDate: Tue Mar 19 11:07:59 2019 -0400

Support Accumulo 1.9 docker image
---
 CONTRIBUTING.md|  30 
 Dockerfile |  92 
 LICENSE| 202 +
 NOTICE |   5 ++
 README.md  |  57 ++-
 accumulo-site.xml  |  51 ++
 generic_logger.xml |  67 ++
 monitor_logger.xml |  48 +
 8 files changed, 551 insertions(+), 1 deletion(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 000..7daf3d0
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,30 @@
+
+
+# Contributing to the Accumulo Docker Image
+
+Contributions to the Accumulo Docker Image can be made by creating a pull 
request to
+this repo on GitHub.
+
+Before creating a pull request, follow the instructions in the [README.md] to 
build
+the image and use it to run Accumulo in Docker.
+
+For general instructions on contributing to Accumulo projects, check out the
+[Accumulo Contributor guide][contribute].
+
+[README.md]: README.md
+[contribute]: https://accumulo.apache.org/contributor/
diff --git a/Dockerfile b/Dockerfile
new file mode 100644
index 000..e4b8808
--- /dev/null
+++ b/Dockerfile
@@ -0,0 +1,92 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+FROM centos:7
+
+RUN yum install -y java-1.8.0-openjdk-devel make gcc-c++ wget
+ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk
+
+ARG ACCUMULO_VERSION=1.9.2
+ARG HADOOP_VERSION=2.8.5
+ARG ZOOKEEPER_VERSION=3.4.13
+ARG HADOOP_USER_NAME=accumulo
+ARG ACCUMULO_FILE=
+ARG HADOOP_FILE=
+ARG ZOOKEEPER_FILE=
+
+ENV HADOOP_USER_NAME $HADOOP_USER_NAME
+
+ENV APACHE_DIST_URLS \
+  https://www.apache.org/dyn/closer.cgi?action=download&filename= \
+# if the version is outdated (or we're grabbing the .asc file), we might have 
to pull from the dist/archive :/
+  https://www-us.apache.org/dist/ \
+  https://www.apache.org/dist/ \
+  https://archive.apache.org/dist/
+
+COPY README.md $ACCUMULO_FILE $HADOOP_FILE $ZOOKEEPER_FILE /tmp/
+
+RUN set -eux; \
+  download() { \
+local f="$1"; shift; \
+local distFile="$1"; shift; \
+local success=; \
+local distUrl=; \
+for distUrl in $APACHE_DIST_URLS; do \
+  if wget -nv -O "$f" "$distUrl$distFile"; then \
+success=1; \
+break; \
+  fi; \
+done; \
+[ -n "$success" ]; \
+  }; \
+  \
+  if [ -z "$HADOOP_FILE" ]; then \
+download "hadoop.tar.gz" 
"hadoop/core/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz"; \
+  else \
+cp "/tmp/$HADOOP_FILE" "hadoop.tar.gz"; \
+  fi; \
+  if [ -z "$ZOOKEEPER_FILE" ]; then \
+download "zookeeper.tar.gz" 
"zookeeper/zookeeper-$ZOOKEEPER_VERSION/zookeeper-$ZOOKEEPER_VERSION.tar.gz"; \
+  else \
+cp "/tmp/$ZOOKEEPER_FILE" "zookeeper.tar.gz"; \
+  fi; \
+  if [ -z "$ACCUMULO_FILE" ]; then \
+download "accumulo.tar.gz" 
"accumulo/$ACCUMULO_VERSION/accumulo-$ACCUMULO_VERSION-bin.tar.gz"; \
+  else \
+cp "/tmp/$ACCUMULO_FILE" "accumulo.tar.gz"; \
+  fi;
+
+RUN tar xzf accumulo.tar.gz -C /tmp/
+RUN tar xzf hadoop.tar.gz -C /tmp/
+RUN tar xzf zookeeper.tar.gz -C /tmp/
+
+RUN mv /tmp/hadoop-$HADOOP_VERSION /opt/hadoop
+RUN mv /tmp/zookeeper-$ZOOKEEPER_VERSION /opt/zookeeper
+RUN mv /tmp/accumulo-$ACCUMULO_VERSION /opt/accumulo
+
+RUN cp /opt/accumulo/conf/examples/2GB/native-standalone/* /opt/accumulo/conf/
+RUN /opt/accumulo/bin/build_native_library.sh
+
+ADD ./accumulo-site.xml /opt/accumulo/conf
+ADD ./generic_logger.xml /opt/accumulo/conf
+ADD ./monitor_logger.xml /opt/accumulo/conf
+
+ENV HADOOP_HOME /opt/hadoop
+ENV ZOOKEEPER_HOME /opt/zookeeper
+ENV ACCUMULO_HOME /opt/accumulo
+ENV PATH "$PATH:$ACCUMULO_HOME/bin"
+
+ENTRYPOINT ["accumulo"]
+CMD ["help"]
diff 

[accumulo-docker] 01/02: First commit

2019-03-19 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch 1.9
in repository https://gitbox.apache.org/repos/asf/accumulo-docker.git

commit 4e8d47c3731eed67b3ef37d9d691b59ed1c955f1
Author: Mike Walch 
AuthorDate: Tue Mar 19 10:51:46 2019 -0400

First commit
---
 .gitignore | 1 +
 README.md  | 1 +
 2 files changed, 2 insertions(+)

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 000..335ec95
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+*.tar.gz
diff --git a/README.md b/README.md
new file mode 100644
index 000..6fbad74
--- /dev/null
+++ b/README.md
@@ -0,0 +1 @@
+= Accumulo Docker =



[accumulo-docker] branch 1.9 deleted (was f0a1fa7)

2019-03-19 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a change to branch 1.9
in repository https://gitbox.apache.org/repos/asf/accumulo-docker.git.


 was f0a1fa7  Dockerfile updates (#7)

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.



[accumulo-docker] branch 1.9 created (now f0a1fa7)

2019-03-18 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a change to branch 1.9
in repository https://gitbox.apache.org/repos/asf/accumulo-docker.git.


  at f0a1fa7  Dockerfile updates (#7)

No new revisions were added by this update.



[accumulo-testing] branch master updated: Update Accumulo version to 2.0.0-SNAPSHOT (#66)

2019-03-18 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-testing.git


The following commit(s) were added to refs/heads/master by this push:
 new 210b30f  Update Accumulo version to 2.0.0-SNAPSHOT (#66)
210b30f is described below

commit 210b30f14d15c0002400f630e7b7997a7cd33e30
Author: Mike Walch 
AuthorDate: Mon Mar 18 10:55:32 2019 -0400

Update Accumulo version to 2.0.0-SNAPSHOT (#66)
---
 pom.xml |  2 +-
 .../accumulo/testing/continuous/ContinuousIngest.java   |  6 +++---
 .../accumulo/testing/continuous/ContinuousWalk.java |  8 
 .../apache/accumulo/testing/ingest/VerifyIngest.java| 17 +
 .../testing/randomwalk/ReplicationRandomWalkIT.java |  5 -
 5 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/pom.xml b/pom.xml
index 8ec588a..020ba7b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -28,7 +28,7 @@
   Apache Accumulo Testing
   Testing tools for Apache Accumulo
   
-2.0.0-alpha-2
+2.0.0-SNAPSHOT
 
${project.basedir}/contrib/Eclipse-Accumulo-Codestyle.xml
 3.1.1
 1.8
diff --git 
a/src/main/java/org/apache/accumulo/testing/continuous/ContinuousIngest.java 
b/src/main/java/org/apache/accumulo/testing/continuous/ContinuousIngest.java
index 85ce6c3..a65e46f 100644
--- a/src/main/java/org/apache/accumulo/testing/continuous/ContinuousIngest.java
+++ b/src/main/java/org/apache/accumulo/testing/continuous/ContinuousIngest.java
@@ -34,8 +34,8 @@ import 
org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.security.ColumnVisibility;
-import org.apache.accumulo.core.trace.Trace;
-import org.apache.accumulo.core.trace.TraceSamplers;
+// import org.apache.accumulo.core.trace.Trace;
+// import org.apache.accumulo.core.trace.TraceSamplers;
 import org.apache.accumulo.core.util.FastFormat;
 import org.apache.accumulo.testing.TestProps;
 import org.slf4j.Logger;
@@ -128,7 +128,7 @@ public class ContinuousIngest {
   }
 
   BatchWriter bw = client.createBatchWriter(tableName);
-  bw = Trace.wrapAll(bw, TraceSamplers.countSampler(1024));
+  // bw = Trace.wrapAll(bw, TraceSamplers.countSampler(1024));
 
   Random r = new Random();
 
diff --git 
a/src/main/java/org/apache/accumulo/testing/continuous/ContinuousWalk.java 
b/src/main/java/org/apache/accumulo/testing/continuous/ContinuousWalk.java
index 2094ec9..6cc0e9c 100644
--- a/src/main/java/org/apache/accumulo/testing/continuous/ContinuousWalk.java
+++ b/src/main/java/org/apache/accumulo/testing/continuous/ContinuousWalk.java
@@ -28,8 +28,8 @@ import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.trace.Span;
-import org.apache.accumulo.core.trace.Trace;
+// import org.apache.accumulo.core.trace.Span;
+// import org.apache.accumulo.core.trace.Trace;
 import org.apache.accumulo.testing.TestProps;
 import org.apache.hadoop.io.Text;
 
@@ -66,7 +66,7 @@ public class ContinuousWalk {
   values.clear();
 
   long t1 = System.currentTimeMillis();
-  Span span = Trace.on("walk");
+  // Span span = Trace.on("walk");
   try {
 scanner.setRange(new Range(new Text(row)));
 for (Entry entry : scanner) {
@@ -74,7 +74,7 @@ public class ContinuousWalk {
   values.add(entry.getValue());
 }
   } finally {
-span.stop();
+// span.stop();
   }
   long t2 = System.currentTimeMillis();
 
diff --git a/src/main/java/org/apache/accumulo/testing/ingest/VerifyIngest.java 
b/src/main/java/org/apache/accumulo/testing/ingest/VerifyIngest.java
index c403beb..dd9aa18 100644
--- a/src/main/java/org/apache/accumulo/testing/ingest/VerifyIngest.java
+++ b/src/main/java/org/apache/accumulo/testing/ingest/VerifyIngest.java
@@ -16,7 +16,6 @@
  */
 package org.apache.accumulo.testing.ingest;
 
-import java.util.Arrays;
 import java.util.Iterator;
 import java.util.Map.Entry;
 import java.util.Random;
@@ -33,8 +32,8 @@ import org.apache.accumulo.core.data.PartialKey;
 import org.apache.accumulo.core.data.Range;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.trace.DistributedTrace;
-import org.apache.accumulo.core.trace.Trace;
+// import org.apache.accumulo.core.trace.DistributedTrace;
+// import org.apache.accumulo.core.trace.Trace;
 import org.apache.hadoop.io.Text;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -64,16 +63,18 @@ public class VerifyIngest {
 try (AccumuloClient client = 
Accum

[accumulo-testing] branch master updated: Update ClientOpts (#65)

2019-03-14 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-testing.git


The following commit(s) were added to refs/heads/master by this push:
 new a84249c  Update ClientOpts (#65)
a84249c is described below

commit a84249c353bbc8b921ef2d85a28256bdb528fa07
Author: Mike Walch 
AuthorDate: Thu Mar 14 18:47:35 2019 -0400

Update ClientOpts (#65)

* Reduce the number of options
* Create clients using Accumulo.newClient()
* Remove use MapReduce opts from Accumulo
---
 contrib/import-control.xml |   1 -
 .../apache/accumulo/testing/cli/ClientOpts.java| 100 +
 .../testing/continuous/UndefinedAnalyzer.java  |   3 +-
 .../testing/ingest/BulkImportDirectory.java|   7 +-
 .../apache/accumulo/testing/ingest/TestIngest.java |   9 +-
 .../accumulo/testing/ingest/VerifyIngest.java  |   7 +-
 .../apache/accumulo/testing/mapreduce/RowHash.java |  14 +--
 .../accumulo/testing/mapreduce/TeraSortIngest.java |  17 ++--
 .../accumulo/testing/merkle/cli/CompareTables.java |   3 +-
 .../testing/merkle/cli/ComputeRootHash.java|   3 +-
 .../testing/merkle/cli/GenerateHashes.java |   3 +-
 .../testing/merkle/cli/ManualComparison.java   |   3 +-
 .../testing/merkle/ingest/RandomWorkload.java  |   3 +-
 .../accumulo/testing/randomwalk/bulk/Verify.java   |   3 +-
 .../accumulo/testing/randomwalk/shard/Merge.java   |   4 +-
 .../org/apache/accumulo/testing/stress/Scan.java   |   3 +-
 .../org/apache/accumulo/testing/stress/Write.java  |   3 +-
 17 files changed, 93 insertions(+), 93 deletions(-)

diff --git a/contrib/import-control.xml b/contrib/import-control.xml
index df7f39d..60eae35 100644
--- a/contrib/import-control.xml
+++ b/contrib/import-control.xml
@@ -42,7 +42,6 @@
 
 
 
-
 
 
 
diff --git a/src/main/java/org/apache/accumulo/testing/cli/ClientOpts.java 
b/src/main/java/org/apache/accumulo/testing/cli/ClientOpts.java
index 2689b97..3cd7823 100644
--- a/src/main/java/org/apache/accumulo/testing/cli/ClientOpts.java
+++ b/src/main/java/org/apache/accumulo/testing/cli/ClientOpts.java
@@ -24,10 +24,13 @@ import java.io.InputStream;
 import java.net.URL;
 import java.nio.file.Path;
 import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
 import java.util.Properties;
 
-import org.apache.accumulo.core.client.Accumulo;
-import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.conf.ClientProperty;
 import org.apache.accumulo.core.conf.ConfigurationTypeHelper;
@@ -38,6 +41,7 @@ import org.apache.log4j.Logger;
 
 import com.beust.jcommander.IStringConverter;
 import com.beust.jcommander.Parameter;
+import com.beust.jcommander.converters.IParameterSplitter;
 
 public class ClientOpts extends Help {
 
@@ -82,27 +86,24 @@ public class ClientOpts extends Help {
 }
   }
 
-  @Parameter(names = {"-u", "--user"}, description = "Connection user")
-  private String principal = null;
+  public static class NullSplitter implements IParameterSplitter {
+@Override
+public List<String> split(String value) {
+  return Collections.singletonList(value);
+}
+  }
 
-  @Parameter(names = "-p", converter = PasswordConverter.class, description = 
"Connection password")
-  private Password password = null;
+  @Parameter(names = {"-u", "--user"}, description = "Connection user")
+  public String principal = null;
 
   @Parameter(names = "--password", converter = PasswordConverter.class,
   description = "Enter the connection password", password = true)
   private Password securePassword = null;
 
   public AuthenticationToken getToken() {
-return ClientProperty.getAuthenticationToken(getClientProperties());
+return ClientProperty.getAuthenticationToken(getClientProps());
   }
 
-  @Parameter(names = {"-z", "--keepers"},
-  description = "Comma separated list of zookeeper hosts 
(host:port,host:port)")
-  protected String zookeepers = null;
-
-  @Parameter(names = {"-i", "--instance"}, description = "The name of the 
accumulo instance")
-  protected String instance = null;
-
   @Parameter(names = {"-auths", "--auths"}, converter = AuthConverter.class,
   description = "the authorizations to use when reading or writing")
   public Authorizations auths = Authorizations.EMPTY;
@@ -110,16 +111,14 @@ public class ClientOpts extends Help {
   @Parameter(names = "--debug", description = "turn on TRACE-level log 
messages")
   public boolean debug = false;
 
-  @Pa

[accumulo-website] branch master updated: update multiple tserver docs

2019-03-13 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new cdb6b38  update multiple tserver docs
cdb6b38 is described below

commit cdb6b380251c7942b88790eb6afe99a61ef1068f
Author: Mike Walch 
AuthorDate: Wed Mar 13 14:38:25 2019 -0400

update multiple tserver docs
---
 _docs-2/administration/in-depth-install.md | 8 
 1 file changed, 8 insertions(+)

diff --git a/_docs-2/administration/in-depth-install.md 
b/_docs-2/administration/in-depth-install.md
index c22ce2c..d5979d2 100644
--- a/_docs-2/administration/in-depth-install.md
+++ b/_docs-2/administration/in-depth-install.md
@@ -450,6 +450,14 @@ these properties in [accumulo.properties]:
 * {% plink tserver.port.search %} = `true`
 * {% plink replication.receipt.service.port %} = `0`
 
+Multiple TabletServers cannot be started using the `accumulo-cluster` or 
`accumulo-service` commands at this time.
+The `accumulo` command must be used:
+
+```
+ACCUMULO_SERVICE_INSTANCE=1; ./bin/accumulo tserver &> ./logs/tserver1.out &
+ACCUMULO_SERVICE_INSTANCE=2; ./bin/accumulo tserver &> ./logs/tserver2.out &
+```
+
 ## Logging
 
 Accumulo processes each write to a set of log files. By default, these logs 
are found at directory



[accumulo-website] branch asf-site updated: Jekyll build from master:cdb6b38

2019-03-13 Thread mwalch

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 7ee5233  Jekyll build from master:cdb6b38
7ee5233 is described below

commit 7ee5233fe793e386a7fc08a2f72c0a6fb6fb76a6
Author: Mike Walch 
AuthorDate: Wed Mar 13 14:41:06 2019 -0400

Jekyll build from master:cdb6b38

update multiple tserver docs
---
 docs/2.x/administration/in-depth-install.html | 7 +++
 feed.xml  | 4 ++--
 search_data.json  | 2 +-
 3 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/docs/2.x/administration/in-depth-install.html 
b/docs/2.x/administration/in-depth-install.html
index cbb33a7..0ac011e 100644
--- a/docs/2.x/administration/in-depth-install.html
+++ b/docs/2.x/administration/in-depth-install.html
@@ -930,6 +930,13 @@ these properties in replication.receipt.service.port
 = 0
 
 
+Multiple TabletServers cannot be started using the accumulo-cluster or accumulo-service commands at this time.
+The accumulo command must be used:
+
+ACCUMULO_SERVICE_INSTANCE=1; ./bin/accumulo tserver &> ./logs/tserver1.out &
+ACCUMULO_SERVICE_INSTANCE=2; ./bin/accumulo tserver &> ./logs/tserver2.out &
+
+
 Logging
 
 Accumulo processes each write to a set of log files. By default, these logs 
are found at directory
diff --git a/feed.xml b/feed.xml
index 7108965..26682ab 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 
 https://accumulo.apache.org/
 https://accumulo.apache.org/feed.xml; rel="self" 
type="application/rss+xml"/>
-Mon, 11 Mar 2019 15:44:13 -0400
-Mon, 11 Mar 2019 15:44:13 -0400
+Wed, 13 Mar 2019 14:40:56 -0400
+Wed, 13 Mar 2019 14:40:56 -0400
 Jekyll v3.7.3
 
 
diff --git a/search_data.json b/search_data.json
index 2cad678..861e90a 100644
--- a/search_data.json
+++ b/search_data.json
@@ -16,7 +16,7 @@
   
 "docs-2-x-administration-in-depth-install": {
   "title": "In-depth Installation",
-  "content" : "This document provides detailed instructions for 
installing Accumulo. For basicinstructions, see the quick start.HardwareBecause 
we are running essentially two or three systems simultaneously layeredacross 
the cluster: HDFS, Accumulo and MapReduce, it is typical for hardware toconsist 
of 4 to 8 cores, and 8 to 32 GB RAM. This is so each running process can haveat 
least one core and 2 - 4 GB each.One core running HDFS can typically keep 2 to 
4 disks busy, so each machi [...]
+  "content" : "This document provides detailed instructions for 
installing Accumulo. For basicinstructions, see the quick start.HardwareBecause 
we are running essentially two or three systems simultaneously layeredacross 
the cluster: HDFS, Accumulo and MapReduce, it is typical for hardware toconsist 
of 4 to 8 cores, and 8 to 32 GB RAM. This is so each running process can haveat 
least one core and 2 - 4 GB each.One core running HDFS can typically keep 2 to 
4 disks busy, so each machi [...]
   "url": " /docs/2.x/administration/in-depth-install",
   "categories": "administration"
 },



[accumulo-examples] branch master updated: Fix integration tests (#37)

2019-03-12 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-examples.git


The following commit(s) were added to refs/heads/master by this push:
 new 3f3586b  Fix integration tests (#37)
3f3586b is described below

commit 3f3586bd237c8144bf9636c11889846b6dd6a654
Author: Mike Walch 
AuthorDate: Tue Mar 12 18:53:34 2019 -0400

Fix integration tests (#37)

* Due to changes in Accumulo
* Commented out use of DistributedTrace
* Updated ITs due to changes in TestIngest
  and ConfigurableMacBase
---
 .../accumulo/examples/client/TracingExample.java   |  4 ++--
 .../org/apache/accumulo/examples/ExamplesIT.java   | 28 --
 .../apache/accumulo/examples/dirlist/CountIT.java  |  3 ++-
 .../examples/filedata/ChunkInputFormatIT.java  |  2 +-
 .../accumulo/examples/mapreduce/MapReduceIT.java   | 14 ++-
 5 files changed, 23 insertions(+), 28 deletions(-)

diff --git 
a/src/main/java/org/apache/accumulo/examples/client/TracingExample.java 
b/src/main/java/org/apache/accumulo/examples/client/TracingExample.java
index 050bd2b..899008c 100644
--- a/src/main/java/org/apache/accumulo/examples/client/TracingExample.java
+++ b/src/main/java/org/apache/accumulo/examples/client/TracingExample.java
@@ -32,7 +32,7 @@ import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.security.Authorizations;
-import org.apache.accumulo.core.trace.DistributedTrace;
+// import org.apache.accumulo.core.trace.DistributedTrace;
 import org.apache.accumulo.examples.cli.ClientOnDefaultTable;
 import org.apache.accumulo.examples.cli.ScannerOpts;
 import org.apache.htrace.Sampler;
@@ -74,7 +74,7 @@ public class TracingExample {
   }
 
   private void enableTracing() {
-DistributedTrace.enable("myHost", "myApp");
+// DistributedTrace.enable("myHost", "myApp");
   }
 
   private void execute(Opts opts) throws TableNotFoundException, 
AccumuloException,
diff --git a/src/test/java/org/apache/accumulo/examples/ExamplesIT.java 
b/src/test/java/org/apache/accumulo/examples/ExamplesIT.java
index ff7045e..9f713ca 100644
--- a/src/test/java/org/apache/accumulo/examples/ExamplesIT.java
+++ b/src/test/java/org/apache/accumulo/examples/ExamplesIT.java
@@ -34,7 +34,6 @@ import java.util.List;
 import java.util.Map.Entry;
 import java.util.concurrent.TimeUnit;
 
-import org.apache.accumulo.core.cli.BatchWriterOpts;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.BatchWriter;
@@ -71,6 +70,7 @@ import org.apache.accumulo.harness.AccumuloClusterHarness;
 import org.apache.accumulo.minicluster.MemoryUnit;
 import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.test.TestIngest;
+import org.apache.accumulo.test.TestIngest.IngestParams;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -78,24 +78,19 @@ import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 import com.google.common.collect.Iterators;
 
 public class ExamplesIT extends AccumuloClusterHarness {
-  private static final Logger log = LoggerFactory.getLogger(ExamplesIT.class);
-  private static final BatchWriterOpts bwOpts = new BatchWriterOpts();
   private static final BatchWriterConfig bwc = new BatchWriterConfig();
-  private static final String visibility = "A|B";
   private static final String auths = "A,B";
 
-  AccumuloClient c;
-  BatchWriter bw;
-  IteratorSetting is;
-  String dir;
-  FileSystem fs;
-  Authorizations origAuths;
+  private AccumuloClient c;
+  private BatchWriter bw;
+  private IteratorSetting is;
+  private String dir;
+  private FileSystem fs;
+  private Authorizations origAuths;
 
   @Override
   public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration 
hadoopConf) {
@@ -245,13 +240,10 @@ public class ExamplesIT extends AccumuloClusterHarness {
 String tableName = getUniqueNames(1)[0];
 c.tableOperations().create(tableName);
 c.tableOperations().addConstraint(tableName, 
MaxMutationSize.class.getName());
-TestIngest.Opts opts = new TestIngest.Opts();
-opts.rows = 1;
-opts.cols = 1000;
-opts.setTableName(tableName);
-opts.setPrincipal(getAdminPrincipal());
+IngestParams params = new IngestParams(c.properties(), tableName, 1);
+params.cols = 1000;
 try {
-  TestIngest.ingest(c, opts, bwOpts);
+  TestIngest.ingest(c, params);
 } catch (MutationsRejectedException ex) {
   assertEquals(1, ex.getConstraintViolationSummaries().size());
 }
dif

[accumulo-examples] branch master updated: #35 - Factor out TestUtil API (#36)

2019-03-11 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-examples.git


The following commit(s) were added to refs/heads/master by this push:
 new f89f33d  #35 - Factor out TestUtil API (#36)
f89f33d is described below

commit f89f33d8605b3d7602e38bb9259bb25003770109
Author: Jeffrey Manno 
AuthorDate: Mon Mar 11 17:38:49 2019 -0400

#35 - Factor out TestUtil API (#36)
---
 contrib/import-control.xml | 1 -
 .../org/apache/accumulo/examples/mapreduce/bulk/BulkIngestExample.java | 3 +--
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/contrib/import-control.xml b/contrib/import-control.xml
index 22da90b..0c0f647 100644
--- a/contrib/import-control.xml
+++ b/contrib/import-control.xml
@@ -37,7 +37,6 @@
 
 
 
-
 
 
 
diff --git 
a/src/main/java/org/apache/accumulo/examples/mapreduce/bulk/BulkIngestExample.java
 
b/src/main/java/org/apache/accumulo/examples/mapreduce/bulk/BulkIngestExample.java
index 68ff994..f5d2dd6 100644
--- 
a/src/main/java/org/apache/accumulo/examples/mapreduce/bulk/BulkIngestExample.java
+++ 
b/src/main/java/org/apache/accumulo/examples/mapreduce/bulk/BulkIngestExample.java
@@ -25,7 +25,6 @@ import java.util.Collection;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.data.Key;
 import org.apache.accumulo.core.data.Value;
-import org.apache.accumulo.core.util.TextUtil;
 import org.apache.accumulo.examples.cli.ClientOpts;
 import org.apache.accumulo.hadoop.mapreduce.AccumuloFileOutputFormat;
 import org.apache.accumulo.hadoop.mapreduce.partition.RangePartitioner;
@@ -122,7 +121,7 @@ public class BulkIngestExample {
   new BufferedOutputStream(fs.create(new Path(workDir + 
"/splits.txt") {
 Collection<Text> splits = 
client.tableOperations().listSplits(SetupTable.tableName, 100);
 for (Text split : splits)
-  
out.println(Base64.getEncoder().encodeToString(TextUtil.getBytes(split)));
+  out.println(Base64.getEncoder().encodeToString(split.copyBytes()));
 job.setNumReduceTasks(splits.size() + 1);
   }
 
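The change above swaps `TextUtil.getBytes(split)` for `split.copyBytes()`; either way the split point is Base64-encoded before being written to the line-oriented `splits.txt` file, so arbitrary bytes in a row key survive the round trip. A minimal sketch of that round trip (the class and method names here are illustrative, not part of the example code):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SplitEncodingSketch {

    // Encode one split point for a line-oriented splits file.
    static String encode(byte[] split) {
        return Base64.getEncoder().encodeToString(split);
    }

    // Decode one line of the splits file back into raw bytes.
    static byte[] decode(String line) {
        return Base64.getDecoder().decode(line);
    }

    public static void main(String[] args) {
        // A split point may contain bytes that are unsafe in a text
        // file; Base64 keeps each one on a single clean line.
        byte[] split = "row_0500".getBytes(StandardCharsets.UTF_8);
        String line = encode(split);
        if (!new String(decode(line), StandardCharsets.UTF_8).equals("row_0500")) {
            throw new AssertionError("round trip failed");
        }
        System.out.println(line);
    }
}
```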



[accumulo-website] branch master updated: Improve upgrade instructions

2019-03-11 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 9c59475  Improve upgrade instructions
9c59475 is described below

commit 9c594759434f48e2ab830414b6eded82116915bc
Author: Mike Walch 
AuthorDate: Mon Mar 11 15:41:40 2019 -0400

Improve upgrade instructions
---
 _docs-2/administration/upgrading.md | 4 
 1 file changed, 4 insertions(+)

diff --git a/_docs-2/administration/upgrading.md 
b/_docs-2/administration/upgrading.md
index f054509..97e314b 100644
--- a/_docs-2/administration/upgrading.md
+++ b/_docs-2/administration/upgrading.md
@@ -23,6 +23,10 @@ Below are some changes in 2.0 that you should be aware of:
   ```
   accumulo convert-config -x old/accumulo-site.xml -p new/accumulo.properties
   ```
+* The following [server properties]({% durl configuration/server-properties 
%}) were deprecated for 2.0:
+   * {% plink general.classpaths %}
+   * {% plink tserver.metadata.readahead.concurrent.max %}
+   * {% plink tserver.readahead.concurrent.max  %}
 * `accumulo-client.properties` has replaced `client.conf`. The [client 
properties]({% durl configuration/client-properties %})
   in the new file are different so take care when customizing.
 * `accumulo-cluster` script has replaced the `start-all.sh` & `stop-all.sh` 
scripts.



[accumulo-website] branch asf-site updated: Jekyll build from master:9c59475

2019-03-11 Thread mwalch

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new c82eae0  Jekyll build from master:9c59475
c82eae0 is described below

commit c82eae0aa54ff5f154638496dadf9439f89bdf5f
Author: Mike Walch 
AuthorDate: Mon Mar 11 15:44:20 2019 -0400

Jekyll build from master:9c59475

Improve upgrade instructions
---
 docs/2.x/administration/upgrading.html | 7 +++
 feed.xml   | 4 ++--
 redirects.json | 2 +-
 search_data.json   | 2 +-
 4 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/docs/2.x/administration/upgrading.html 
b/docs/2.x/administration/upgrading.html
index b65759b..67c1960 100644
--- a/docs/2.x/administration/upgrading.html
+++ b/docs/2.x/administration/upgrading.html
@@ -438,6 +438,13 @@ from XML to properties or use the following Accumulo 
command.
 accumulo convert-config -x old/accumulo-site.xml -p 
new/accumulo.properties
 
   
+  The following server 
properties were deprecated for 2.0:
+
+  general.classpaths
+  tserver.metadata.readahead.concurrent.max
+  tserver.readahead.concurrent.max
+
+  
   accumulo-client.properties has 
replaced client.conf. The client properties
 in the new file are different so take care when customizing.
   accumulo-cluster script has 
replaced the start-all.sh  stop-all.sh scripts.
diff --git a/feed.xml b/feed.xml
index 1b574ce..7108965 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 
 https://accumulo.apache.org/
 https://accumulo.apache.org/feed.xml; rel="self" 
type="application/rss+xml"/>
-Thu, 28 Feb 2019 14:08:23 -0500
-Thu, 28 Feb 2019 14:08:23 -0500
+Mon, 11 Mar 2019 15:44:13 -0400
+Mon, 11 Mar 2019 15:44:13 -0400
 Jekyll v3.7.3
 
 
diff --git a/redirects.json b/redirects.json
index 6b9f5d5..9d051e4 100644
--- a/redirects.json
+++ b/redirects.json
@@ -1 +1 @@
-{"/release_notes/1.5.1.html":"https://accumulo.apache.org/release/accumulo-1.5.1/","/release_notes/1.6.0.html":"https://accumulo.apache.org/release/accumulo-1.6.0/","/release_notes/1.6.1.html":"https://accumulo.apache.org/release/accumulo-1.6.1/","/release_notes/1.6.2.html":"https://accumulo.apache.org/release/accumulo-1.6.2/","/release_notes/1.7.0.html":"https://accumulo.apache.org/release/accumulo-1.7.0/","/release_notes/1.5.3.html":"https://accumulo.apache.org/release/accumulo-1.5.3/;
 [...]
\ No newline at end of file
+{"/release_notes/1.5.1.html":"https://accumulo.apache.org/release/accumulo-1.5.1/","/release_notes/1.6.0.html":"https://accumulo.apache.org/release/accumulo-1.6.0/","/release_notes/1.6.1.html":"https://accumulo.apache.org/release/accumulo-1.6.1/","/release_notes/1.6.2.html":"https://accumulo.apache.org/release/accumulo-1.6.2/","/release_notes/1.7.0.html":"https://accumulo.apache.org/release/accumulo-1.7.0/","/release_notes/1.5.3.html":"https://accumulo.apache.org/release/accumulo-1.5.3/;
 [...]
\ No newline at end of file
diff --git a/search_data.json b/search_data.json
index 47160ab..2cad678 100644
--- a/search_data.json
+++ b/search_data.json
@@ -51,7 +51,7 @@
   
 "docs-2-x-administration-upgrading": {
   "title": "Upgrading Accumulo",
-  "content" : "Upgrading from 1.8/9 to 2.0Follow the steps below 
to upgrade your Accumulo instance and client to 2.0.Upgrade Accumulo 
instanceIMPORTANT! Before upgrading to Accumulo 2.0, you will need to upgrade 
to Java 8 and Hadoop 3.x.Upgrading to Accumulo 2.0 is done by stopping Accumulo 
1.8/9 and starting Accumulo 2.0.Before stopping Accumulo 1.8/9, install 
Accumulo 2.0 and configure it by following the 2.0 quick start.There are 
several changes to scripts and configuration in 2. [...]
+  "content" : "Upgrading from 1.8/9 to 2.0Follow the steps below 
to upgrade your Accumulo instance and client to 2.0.Upgrade Accumulo 
instanceIMPORTANT! Before upgrading to Accumulo 2.0, you will need to upgrade 
to Java 8 and Hadoop 3.x.Upgrading to Accumulo 2.0 is done by stopping Accumulo 
1.8/9 and starting Accumulo 2.0.Before stopping Accumulo 1.8/9, install 
Accumulo 2.0 and configure it by following the 2.0 quick start.There are 
several changes to scripts and configuration in 2. [...]
   "url": " /docs/2.x/administration/upgrading",
   "categories": "administration"
 },



[accumulo] branch 1.9 updated: Fix accumulo script (#1020)

2019-03-08 Thread mwalch

mwalch pushed a commit to branch 1.9
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/1.9 by this push:
 new d450ef4  Fix accumulo script (#1020)
d450ef4 is described below

commit d450ef40109439c5f1a0c7174dfabaf5110dbdb8
Author: Mike Walch 
AuthorDate: Fri Mar 8 16:59:21 2019 -0500

Fix accumulo script (#1020)
---
 assemble/bin/accumulo | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/assemble/bin/accumulo b/assemble/bin/accumulo
index cad9c9e..df8d297 100755
--- a/assemble/bin/accumulo
+++ b/assemble/bin/accumulo
@@ -175,7 +175,7 @@ fi
 # app isn't used anywhere, but it makes the process easier to spot when 
ps/top/snmp truncate the command line
 JAVA="${JAVA_HOME}/bin/java"
 exec "$JAVA" "-Dapp=$1" \
-   "$INSTANCE_OPT" \
+   $INSTANCE_OPT \
$ACCUMULO_OPTS \
-classpath "${CLASSPATH}" \
-XX:OnOutOfMemoryError="${ACCUMULO_KILL_CMD:-kill -9 %p}" \

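The one-character fix above removes the quotes around `$INSTANCE_OPT`: when the variable is empty, the quoted form still passes an empty-string argument to `java`, while the unquoted form expands to nothing at all. A minimal sketch of the difference (the `count_args` helper is illustrative, not part of the Accumulo script):

```shell
#!/bin/sh
# Stand-in for a command receiving arguments (e.g. java).
count_args() { echo "$#"; }

OPT=""
count_args "$OPT"   # quoted empty variable -> one (empty) argument: prints 1
count_args $OPT     # unquoted empty variable -> no arguments: prints 0
```

This is why the quoted `"$INSTANCE_OPT"` broke the script whenever the option was unset: the stray empty argument was interpreted as a (blank) command name.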


[accumulo] branch master updated: #1013 Fix ConfigurableMacBase (#1019)

2019-03-08 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new b17d0ae   #1013 Fix ConfigurableMacBase (#1019)
b17d0ae is described below

commit b17d0ae6f9a5d6d15145ffbca9700d8fcbd6058b
Author: Mike Walch 
AuthorDate: Fri Mar 8 12:39:41 2019 -0500

 #1013 Fix ConfigurableMacBase (#1019)

* Return client properties from MiniAccumuloCluster
* SSL configuration was not being returned by MiniAccumuloCluster
  causing SSL ITs to fail
---
 .../org/apache/accumulo/test/functional/ConfigurableMacBase.java | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git 
a/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
 
b/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
index 8f842dd..ad6055d 100644
--- 
a/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
+++ 
b/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
@@ -26,7 +26,6 @@ import java.io.OutputStream;
 import java.util.Map;
 import java.util.Properties;
 
-import org.apache.accumulo.core.client.Accumulo;
 import org.apache.accumulo.core.conf.ClientProperty;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.harness.AccumuloClusterHarness;
@@ -186,9 +185,7 @@ public class ConfigurableMacBase extends AccumuloITBase {
   }
 
   protected Properties getClientProperties() {
-return Accumulo.newClientProperties()
-.to(getCluster().getInstanceName(), 
getCluster().getZooKeepers()).as("root", ROOT_PASSWORD)
-.build();
+return cluster.getClientProperties();
   }
 
   protected ServerContext getServerContext() {



[accumulo] branch master updated: Integration test cleanup (#1015)

2019-03-06 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 93fb801  Integration test cleanup (#1015)
93fb801 is described below

commit 93fb8012db6f9f4abb7826cc1467a4cd68f7bdc4
Author: Mike Walch 
AuthorDate: Wed Mar 6 16:31:43 2019 -0500

Integration test cleanup (#1015)

* Use try with resources
* Simplify batch writer creation
* Put duplicate code in shared methods
---
 .../apache/accumulo/test/MetaGetsReadersIT.java|  16 +--
 .../org/apache/accumulo/test/MetaRecoveryIT.java   |  15 ++-
 .../test/MissingWalHeaderCompletesRecoveryIT.java  |  13 ++-
 .../org/apache/accumulo/test/NamespacesIT.java |  43 -
 .../org/apache/accumulo/test/OrIteratorIT.java |  37 +++
 .../test/RecoveryCompactionsAreFlushesIT.java  |  22 ++---
 .../accumulo/test/RewriteTabletDirectoriesIT.java  |  43 -
 .../java/org/apache/accumulo/test/SampleIT.java|  12 +--
 .../apache/accumulo/test/ScanFlushWithTimeIT.java  |  12 +--
 .../apache/accumulo/test/SplitCancelsMajCIT.java   |   3 +-
 .../org/apache/accumulo/test/SplitRecoveryIT.java  |  81 
 .../apache/accumulo/test/TableOperationsIT.java|  31 +++---
 .../accumulo/test/TabletServerHdfsRestartIT.java   |  12 +--
 .../org/apache/accumulo/test/TotalQueuedIT.java|  42 
 .../test/TracerRecoversAfterOfflineTableIT.java|  10 +-
 .../accumulo/test/UserCompactionStrategyIT.java|  23 ++---
 .../accumulo/test/VerifySerialRecoveryIT.java  |  12 +--
 .../org/apache/accumulo/test/VolumeChooserIT.java  |  13 ++-
 .../java/org/apache/accumulo/test/VolumeIT.java|  46 -
 .../org/apache/accumulo/test/YieldScannersIT.java  |   2 +-
 .../accumulo/test/functional/BatchScanSplitIT.java |   3 +-
 .../accumulo/test/functional/BloomFilterIT.java|   2 +-
 .../accumulo/test/functional/ConcurrencyIT.java|   3 +-
 .../accumulo/test/functional/ConstraintIT.java |  15 ++-
 .../accumulo/test/functional/CreateAndUseIT.java   |   3 +-
 .../test/functional/DeleteEverythingIT.java|   3 +-
 .../accumulo/test/functional/KerberosIT.java   |   2 +-
 .../accumulo/test/functional/LogicalTimeIT.java|   3 +-
 .../accumulo/test/functional/MetadataIT.java   |   6 +-
 .../accumulo/test/functional/RowDeleteIT.java  |   3 +-
 .../accumulo/test/functional/ScanIteratorIT.java   |   4 +-
 .../accumulo/test/functional/ScannerContextIT.java |   2 +-
 .../apache/accumulo/test/functional/SummaryIT.java |   3 +-
 .../apache/accumulo/test/functional/TimeoutIT.java |   3 +-
 .../CloseWriteAheadLogReferencesIT.java|  39 
 .../test/mapred/AccumuloFileOutputFormatIT.java|  25 +++--
 .../test/mapred/AccumuloInputFormatIT.java |  25 +++--
 .../mapred/AccumuloMultiTableInputFormatIT.java|  24 ++---
 .../test/mapred/AccumuloOutputFormatIT.java|  12 +--
 .../test/mapred/AccumuloRowInputFormatIT.java  |   9 +-
 .../apache/accumulo/test/mapred/TokenFileIT.java   |  13 ++-
 .../test/mapreduce/AccumuloFileOutputFormatIT.java |  25 +++--
 .../test/mapreduce/AccumuloInputFormatIT.java  |  53 +++---
 .../test/mapreduce/AccumuloOutputFormatIT.java |  22 +++--
 .../apache/accumulo/test/master/MergeStateIT.java  |   7 +-
 .../test/replication/CyclicReplicationIT.java  |  11 +--
 ...GarbageCollectorCommunicatesWithTServersIT.java |  43 -
 .../test/replication/KerberosReplicationIT.java|  20 ++--
 .../replication/MultiInstanceReplicationIT.java| 107 ++---
 .../accumulo/test/replication/ReplicationIT.java   |  94 +-
 .../UnorderedWorkAssignerReplicationIT.java| 107 ++---
 .../UnusedWalDoesntCloseReplicationStatusIT.java   |  35 ---
 52 files changed, 555 insertions(+), 659 deletions(-)

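One of the cleanups listed above replaces manually closed writers with try-with-resources blocks, which guarantee `close()` runs even when the body throws. A minimal sketch of the idiom with a stand-in `AutoCloseable` (the `Writer` class below is hypothetical, not an Accumulo API):

```java
public class TryWithResourcesSketch {

    // Stand-in resource that records its lifecycle events.
    static class Writer implements AutoCloseable {
        final StringBuilder log;
        Writer(StringBuilder log) { this.log = log; }
        void add(String mutation) { log.append("add:").append(mutation).append(";"); }
        @Override public void close() { log.append("closed;"); }
    }

    static String run() {
        StringBuilder log = new StringBuilder();
        // close() is invoked automatically when the block exits,
        // even if add() were to throw.
        try (Writer w = new Writer(log)) {
            w.add("m1");
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```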
diff --git a/test/src/main/java/org/apache/accumulo/test/MetaGetsReadersIT.java 
b/test/src/main/java/org/apache/accumulo/test/MetaGetsReadersIT.java
index 130daab..6caa3f1 100644
--- a/test/src/main/java/org/apache/accumulo/test/MetaGetsReadersIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/MetaGetsReadersIT.java
@@ -84,15 +84,15 @@ public class MetaGetsReadersIT extends ConfigurableMacBase {
 try (AccumuloClient c = 
Accumulo.newClient().from(getClientProperties()).build()) {
   c.tableOperations().create(tableName);
   Random random = new SecureRandom();
-  BatchWriter bw = c.createBatchWriter(tableName, null);
-  for (int i = 0; i < 5; i++) {
-byte[] row = new byte[100];
-random.nextBytes(row);
-Mutation m = new Mutation(row);
-m.put("", "", "");
-bw.addMutation(m);
+  try (BatchWriter bw = c.createBatchWriter(tableName)) {
+for (int i = 0; i < 5; i++) {

[accumulo] branch master updated: Replace loops that remove with Collection.removeIf (#1014)

2019-03-06 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new e6eb6b9  Replace loops that remove with Collection.removeIf (#1014)
e6eb6b9 is described below

commit e6eb6b9664e1a5054245fca44715c4d8877849e3
Author: Mike Walch 
AuthorDate: Wed Mar 6 16:31:23 2019 -0500

Replace loops that remove with Collection.removeIf (#1014)
---
 .../apache/accumulo/fate/zookeeper/ZooCache.java   | 23 +++---
 .../accumulo/server/client/BulkImporter.java   |  7 +--
 .../java/org/apache/accumulo/master/Master.java|  8 +---
 .../accumulo/shell/commands/SetIterCommand.java| 17 
 .../shell/commands/SetScanIterCommand.java |  8 +---
 .../shell/commands/SetShellIterCommand.java| 17 ++--
 .../accumulo/test/InterruptibleScannersIT.java | 13 +++-
 .../java/org/apache/accumulo/test/SampleIT.java|  8 +---
 8 files changed, 16 insertions(+), 85 deletions(-)

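The pattern replaced throughout this commit — an explicit `Iterator` loop calling `remove()` — and its `Collection.removeIf` equivalent can be sketched as follows (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class RemoveIfSketch {

    // Old style: explicit iterator so remove() is legal mid-iteration.
    static List<String> removeEmptyOld(List<String> values) {
        List<String> copy = new ArrayList<>(values);
        for (Iterator<String> i = copy.iterator(); i.hasNext();) {
            if (i.next().isEmpty()) {
                i.remove();
            }
        }
        return copy;
    }

    // New style: Collection.removeIf with a predicate does the same
    // work in one expression.
    static List<String> removeEmptyNew(List<String> values) {
        List<String> copy = new ArrayList<>(values);
        copy.removeIf(String::isEmpty);
        return copy;
    }

    public static void main(String[] args) {
        List<String> in = List.of("a", "", "b", "");
        if (!removeEmptyOld(in).equals(removeEmptyNew(in))) {
            throw new AssertionError("patterns disagree");
        }
        System.out.println(removeEmptyNew(in)); // prints [a, b]
    }
}
```

The same predicate form works on a map's key or value view, as the diff below does with `cache.keySet().removeIf(...)` and `assignmentFailures.values().removeIf(List::isEmpty)`.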
diff --git 
a/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java 
b/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java
index 499a9dc..7fff9ec 100644
--- a/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java
+++ b/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooCache.java
@@ -22,7 +22,6 @@ import java.security.SecureRandom;
 import java.util.Collections;
 import java.util.ConcurrentModificationException;
 import java.util.HashMap;
-import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.locks.Lock;
@@ -520,7 +519,6 @@ public class ZooCache {
 } finally {
   cacheReadLock.unlock();
 }
-
   }
 
   /**
@@ -550,28 +548,13 @@ public class ZooCache {
 Preconditions.checkState(!closed);
 cacheWriteLock.lock();
 try {
-  for (Iterator<String> i = cache.keySet().iterator(); i.hasNext();) {
-String path = i.next();
-if (path.startsWith(zPath))
-  i.remove();
-  }
-
-  for (Iterator<String> i = childrenCache.keySet().iterator(); 
i.hasNext();) {
-String path = i.next();
-if (path.startsWith(zPath))
-  i.remove();
-  }
-
-  for (Iterator<String> i = statCache.keySet().iterator(); i.hasNext();) {
-String path = i.next();
-if (path.startsWith(zPath))
-  i.remove();
-  }
+  cache.keySet().removeIf(path -> path.startsWith(zPath));
+  childrenCache.keySet().removeIf(path -> path.startsWith(zPath));
+  statCache.keySet().removeIf(path -> path.startsWith(zPath));
 
   immutableCache = new ImmutableCacheCopies(++updateCount, cache, 
statCache, childrenCache);
 } finally {
   cacheWriteLock.unlock();
 }
   }
-
 }
diff --git 
a/server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java 
b/server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java
index 4f56b2d..2028706 100644
--- 
a/server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java
+++ 
b/server/base/src/main/java/org/apache/accumulo/server/client/BulkImporter.java
@@ -236,12 +236,7 @@ public class BulkImporter {
 }
 
 // remove map files that have no more key extents to assign
-Iterator<Entry<Path,List<KeyExtent>>> afIter = 
assignmentFailures.entrySet().iterator();
-while (afIter.hasNext()) {
-  Entry<Path,List<KeyExtent>> entry = afIter.next();
-  if (entry.getValue().size() == 0)
-afIter.remove();
-}
+assignmentFailures.values().removeIf(List::isEmpty);
 
 Set<Entry<Path,Integer>> failureIter = failureCount.entrySet();
 for (Entry<Path,Integer> entry : failureIter) {
diff --git a/server/master/src/main/java/org/apache/accumulo/master/Master.java 
b/server/master/src/main/java/org/apache/accumulo/master/Master.java
index 3e5fd7f..5d7fc6d 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/Master.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/Master.java
@@ -821,13 +821,7 @@ public class Master
 
   public void clearMigrations(TableId tableId) {
 synchronized (migrations) {
-  Iterator<KeyExtent> iterator = migrations.keySet().iterator();
-  while (iterator.hasNext()) {
-KeyExtent extent = iterator.next();
-if (extent.getTableId().equals(tableId)) {
-  iterator.remove();
-}
-  }
+  migrations.keySet().removeIf(extent -> 
extent.getTableId().equals(tableId));
 }
   }
 
diff --git 
a/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java 
b/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
index d66d88b..092aef3 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/commands/SetIterCommand.java
@
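The diffs above all make the same change: hand-rolled `Iterator`/`remove()` loops become single calls to `Collection.removeIf` (Java 8+), which also works on a `Map`'s `values()` or `keySet()` views and removes the corresponding map entries. A minimal dependency-free sketch of the `BulkImporter` cleanup (the map contents and names here are illustrative, not Accumulo's):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RemoveIfDemo {

    // Illustrative stand-in for assignmentFailures: extents mapped to the
    // map files still awaiting assignment.
    public static Map<String, List<String>> sampleFailures() {
        Map<String, List<String>> m = new HashMap<>();
        m.put("extentA", new ArrayList<>(List.of("mapfile1")));
        m.put("extentB", new ArrayList<>()); // nothing left to assign
        return m;
    }

    public static void main(String[] args) {
        Map<String, List<String>> failures = sampleFailures();
        // Same cleanup the BulkImporter diff performs: drop entries whose
        // value list is empty. removeIf on the values() view removes the
        // whole map entry, so no explicit Iterator loop is needed.
        failures.values().removeIf(List::isEmpty);
        System.out.println(failures.keySet()); // prints [extentA]
    }
}
```

The `ZooCache` and `Master` hunks use the same idiom with a predicate lambda (`path -> path.startsWith(zPath)`, `extent -> extent.getTableId().equals(tableId)`) instead of a method reference.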

[accumulo] branch master updated: Inline createClient method in ConfigurableMacBase (#1013)

2019-03-05 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 94764fe  Inline createClient method in ConfigurableMacBase (#1013)
94764fe is described below

commit 94764fe6aa942c0b9143a59a2063b141db394f06
Author: Mike Walch 
AuthorDate: Tue Mar 5 13:51:35 2019 -0500

Inline createClient method in ConfigurableMacBase (#1013)

* Improve how client properties are created
* Reduce use of ClientInfo
---
 .../hadoop/its/mapred/AccumuloOutputFormatIT.java  | 13 ++---
 .../accumulo/hadoop/its/mapreduce/RowHashIT.java   |  3 +-
 .../org/apache/accumulo/test/AuditMessageIT.java   |  3 +-
 .../org/apache/accumulo/test/BalanceFasterIT.java  |  3 +-
 .../accumulo/test/BalanceWithOfflineTableIT.java   |  3 +-
 .../accumulo/test/BulkImportMonitoringIT.java  |  3 +-
 .../test/ConfigurableMajorCompactionIT.java|  3 +-
 .../accumulo/test/CountNameNodeOpsBulkIT.java  |  3 +-
 .../accumulo/test/DetectDeadTabletServersIT.java   |  3 +-
 .../org/apache/accumulo/test/ExistingMacIT.java|  3 +-
 .../apache/accumulo/test/GarbageCollectWALIT.java  |  3 +-
 .../org/apache/accumulo/test/LargeSplitRowIT.java  | 11 ++--
 .../java/org/apache/accumulo/test/ManySplitIT.java |  3 +-
 .../test/MasterRepairsDualAssignmentIT.java|  3 +-
 .../apache/accumulo/test/MetaGetsReadersIT.java|  3 +-
 .../org/apache/accumulo/test/MetaRecoveryIT.java   |  3 +-
 .../test/MissingWalHeaderCompletesRecoveryIT.java  |  9 ++--
 .../apache/accumulo/test/MultiTableRecoveryIT.java |  5 +-
 .../accumulo/test/RewriteTabletDirectoriesIT.java  |  3 +-
 .../accumulo/test/TabletServerGivesUpIT.java   |  3 +-
 .../accumulo/test/TabletServerHdfsRestartIT.java   |  3 +-
 .../org/apache/accumulo/test/TotalQueuedIT.java|  3 +-
 .../test/TracerRecoversAfterOfflineTableIT.java|  5 +-
 .../java/org/apache/accumulo/test/UnusedWALIT.java |  5 +-
 .../accumulo/test/VerifySerialRecoveryIT.java  |  3 +-
 .../accumulo/test/VolumeChooserFailureIT.java  | 19 +++
 .../org/apache/accumulo/test/VolumeChooserIT.java  | 35 ++--
 .../java/org/apache/accumulo/test/VolumeIT.java| 62 +++---
 .../org/apache/accumulo/test/WaitForBalanceIT.java |  3 +-
 .../accumulo/test/functional/BackupMasterIT.java   |  3 +-
 .../functional/BalanceAfterCommsFailureIT.java |  3 +-
 .../accumulo/test/functional/CleanTmpIT.java   |  3 +-
 .../test/functional/ConfigurableCompactionIT.java  |  5 +-
 .../test/functional/ConfigurableMacBase.java   | 22 +---
 .../accumulo/test/functional/DurabilityIT.java | 15 +++---
 .../test/functional/GarbageCollectorIT.java| 11 ++--
 .../test/functional/HalfDeadTServerIT.java |  3 +-
 .../test/functional/MetadataMaxFilesIT.java|  3 +-
 .../accumulo/test/functional/MetadataSplitIT.java  |  3 +-
 .../accumulo/test/functional/MonitorSslIT.java |  3 +-
 .../test/functional/RecoveryWithEmptyRFileIT.java  | 35 ++--
 .../test/functional/RegexGroupBalanceIT.java   |  3 +-
 .../test/functional/SessionDurabilityIT.java   |  9 ++--
 .../accumulo/test/functional/ShutdownIT.java   |  5 +-
 .../test/functional/SimpleBalancerFairnessIT.java  |  3 +-
 .../org/apache/accumulo/test/functional/SslIT.java | 17 +++---
 .../accumulo/test/functional/WALSunnyDayIT.java|  3 +-
 .../test/functional/WatchTheWatchCountIT.java  |  8 ++-
 .../accumulo/test/functional/ZooCacheIT.java   |  3 +-
 .../test/functional/ZookeeperRestartIT.java|  3 +-
 .../CloseWriteAheadLogReferencesIT.java|  3 +-
 .../test/mapred/AccumuloOutputFormatIT.java| 12 +++--
 .../accumulo/test/mapreduce/MapReduceIT.java   |  3 +-
 .../apache/accumulo/test/mapreduce/RowHash.java| 12 +
 .../apache/accumulo/test/master/MergeStateIT.java  |  3 +-
 .../accumulo/test/master/SuspendedTabletsIT.java   |  3 +-
 .../test/performance/RollWALPerformanceIT.java |  3 +-
 .../accumulo/test/proxy/ProxyDurabilityIT.java |  5 +-
 .../test/replication/FinishedWorkUpdaterIT.java|  3 +-
 ...GarbageCollectorCommunicatesWithTServersIT.java |  9 ++--
 .../replication/MultiInstanceReplicationIT.java|  9 ++--
 .../replication/MultiTserverReplicationIT.java |  5 +-
 .../RemoveCompleteReplicationRecordsIT.java|  3 +-
 .../accumulo/test/replication/ReplicationIT.java   | 27 +-
 .../replication/ReplicationOperationsImplIT.java   |  3 +-
 .../test/replication/SequentialWorkAssignerIT.java |  3 +-
 .../accumulo/test/replication/StatusMakerIT.java   |  3 +-
 .../test/replication/UnorderedWorkAssignerIT.java  |  3 +-
 .../UnorderedWorkAssignerReplicationIT.java|  9 ++--
 .../UnusedWalDoesntCloseReplicationStatusIT.java   |  3 +-
 .../accumulo/test/replication/WorkMakerIT.java |  3 +-
 71 files changed, 282 insertions

[accumulo] branch master updated: Simplified option parsing (#1010)

2019-03-05 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new c482550  Simplified option parsing (#1010)
c482550 is described below

commit c482550082a4b2386e4270ec4b78a3ee1c402b76
Author: Mike Walch 
AuthorDate: Tue Mar 5 13:17:26 2019 -0500

Simplified option parsing (#1010)

* Removed setters in ClientOpts
* Simplify RowHash options
---
 .../org/apache/accumulo/core/cli/ClientOpts.java   |  36 +
 .../java/org/apache/accumulo/core/util/Merge.java  |  11 +-
 .../apache/accumulo/core/cli/TestClientOpts.java   |  13 +-
 .../lib/MapReduceClientOnDefaultTable.java |  51 --
 .../lib/MapReduceClientOnRequiredTable.java|  47 --
 .../mapreduce/lib/MapReduceClientOpts.java |  31 ++--
 .../accumulo/hadoop/its/mapreduce/RowHashIT.java   |  13 +-
 .../apache/accumulo/server/cli/ServerUtilOpts.java |   5 +-
 .../server/util/CheckForMetadataProblems.java  |   3 +-
 .../apache/accumulo/server/util/LocalityCheck.java |   4 +-
 .../apache/accumulo/server/util/RandomWriter.java  |  11 +-
 .../accumulo/server/util/TableDiskUsage.java   |   3 +-
 .../server/util/VerifyTabletAssignments.java   |   3 +-
 .../apache/accumulo/master/state/MergeStats.java   |   3 +-
 .../java/org/apache/accumulo/tracer/TraceDump.java |   5 +-
 .../apache/accumulo/tracer/TraceTableStats.java|   3 +-
 .../org/apache/accumulo/test/TestBinaryRows.java   |   3 +-
 .../java/org/apache/accumulo/test/TestIngest.java  | 174 +
 .../apache/accumulo/test/TestMultiTableIngest.java |   9 +-
 .../apache/accumulo/test/TestRandomDeletes.java|   7 +-
 .../org/apache/accumulo/test/VerifyIngest.java |  77 ++---
 .../BalanceInPresenceOfOfflineTableIT.java |  13 +-
 .../apache/accumulo/test/functional/BulkIT.java|  48 +++---
 .../test/functional/BulkSplitOptimizationIT.java   |  24 ++-
 .../test/functional/ChaoticBalancerIT.java |  15 +-
 .../accumulo/test/functional/CompactionIT.java |  29 ++--
 .../apache/accumulo/test/functional/DeleteIT.java  |  22 +--
 .../test/functional/DynamicThreadPoolsIT.java  |  10 +-
 .../accumulo/test/functional/FateStarvationIT.java |  16 +-
 .../test/functional/FunctionalTestUtils.java   |  18 +--
 .../test/functional/GarbageCollectorIT.java|  20 ++-
 .../test/functional/HalfDeadTServerIT.java |   7 +-
 .../accumulo/test/functional/MasterFailoverIT.java |  13 +-
 .../apache/accumulo/test/functional/MaxOpenIT.java |  17 +-
 .../accumulo/test/functional/ReadWriteIT.java  |  37 ++---
 .../apache/accumulo/test/functional/RenameIT.java  |  23 ++-
 .../apache/accumulo/test/functional/RestartIT.java |  59 +++
 .../accumulo/test/functional/RestartStressIT.java  |  15 +-
 .../test/functional/SimpleBalancerFairnessIT.java  |   7 +-
 .../apache/accumulo/test/functional/SplitIT.java   |  15 +-
 .../apache/accumulo/test/functional/TableIT.java   |  16 +-
 .../accumulo/test/functional/WriteAheadLogIT.java  |  16 +-
 .../accumulo/test/functional/WriteLotsIT.java  |  19 +--
 .../apache/accumulo/test/mapreduce/RowHash.java| 105 +++--
 .../test/performance/ContinuousIngest.java |   3 +-
 .../test/performance/scan/CollectTabletStats.java  |   3 +-
 46 files changed, 448 insertions(+), 634 deletions(-)

diff --git a/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java 
b/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
index 1e66d90..e821283 100644
--- a/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
+++ b/core/src/main/java/org/apache/accumulo/core/cli/ClientOpts.java
@@ -26,12 +26,9 @@ import java.util.Map;
 import java.util.Properties;
 
 import org.apache.accumulo.core.Constants;
-import org.apache.accumulo.core.client.Accumulo;
-import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 import org.apache.accumulo.core.clientImpl.ClientInfoImpl;
 import org.apache.accumulo.core.conf.ClientProperty;
-import org.apache.accumulo.core.conf.ConfigurationTypeHelper;
 import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.ColumnVisibility;
 import org.apache.htrace.NullScope;
@@ -47,13 +44,6 @@ import com.beust.jcommander.converters.IParameterSplitter;
 
 public class ClientOpts extends Help {
 
-  public static class MemoryConverter implements IStringConverter {
-@Override
-public Long convert(String value) {
-  return ConfigurationTypeHelper.getFixedMemoryAsBytes(value);
-}
-  }
-
   public static class AuthConverter implements 
IStringConverter {
 @Override
 public Authorizations convert(String value) {
@@ -96,14 +86,14 @@ public class ClientOpts extends Help {
   }
 
   @Parameter(names = {"-u"

[accumulo] branch master updated: formatting

2019-02-27 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 9fdfa55  formatting
9fdfa55 is described below

commit 9fdfa554aaaf23d558689fa3239ab79eda2d0645
Author: Mike Walch 
AuthorDate: Wed Feb 27 18:01:27 2019 -0500

formatting
---
 .../java/org/apache/accumulo/core/data/Key.java|  6 +++---
 .../org/apache/accumulo/core/util/FastFormat.java  |  4 ++--
 .../core/util/format/DefaultFormatter.java |  2 +-
 .../server/master/balancer/GroupBalancerTest.java  | 24 +++---
 .../org/apache/accumulo/tserver/NativeMap.java |  8 
 5 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/core/src/main/java/org/apache/accumulo/core/data/Key.java 
b/core/src/main/java/org/apache/accumulo/core/data/Key.java
index 3859bfb..ea99d4b 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/Key.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Key.java
@@ -95,7 +95,7 @@ public class Key implements WritableComparable, 
Cloneable {
   }
 
   private final void init(byte[] r, int rOff, int rLen, byte[] cf, int cfOff, 
int cfLen, byte[] cq,
-  int cqOff, int cqLen, byte[] cv, int cvOff, int 
cvLen, long ts, boolean del, boolean copy) {
+  int cqOff, int cqLen, byte[] cv, int cvOff, int cvLen, long ts, boolean 
del, boolean copy) {
 row = copyIfNeeded(r, rOff, rLen, copy);
 colFamily = copyIfNeeded(cf, cfOff, cfLen, copy);
 colQualifier = copyIfNeeded(cq, cqOff, cqLen, copy);
@@ -216,7 +216,7 @@ public class Key implements WritableComparable, 
Cloneable {
* @see #builder()
*/
   public Key(byte[] row, int rOff, int rLen, byte[] cf, int cfOff, int cfLen, 
byte[] cq, int cqOff,
- int cqLen, byte[] cv, int cvOff, int cvLen, long ts) {
+  int cqLen, byte[] cv, int cvOff, int cvLen, long ts) {
 init(row, rOff, rLen, cf, cfOff, cfLen, cq, cqOff, cqLen, cv, cvOff, 
cvLen, ts, false, true);
   }
 
@@ -1068,7 +1068,7 @@ public class Key implements WritableComparable, 
Cloneable {
* @return given StringBuilder
*/
   public static StringBuilder appendPrintableString(byte[] ba, int offset, int 
len, int maxLen,
-StringBuilder sb) {
+  StringBuilder sb) {
 int plen = Math.min(len, maxLen);
 
 for (int i = 0; i < plen; i++) {
diff --git a/core/src/main/java/org/apache/accumulo/core/util/FastFormat.java 
b/core/src/main/java/org/apache/accumulo/core/util/FastFormat.java
index f1f9aca..d153d66 100644
--- a/core/src/main/java/org/apache/accumulo/core/util/FastFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/FastFormat.java
@@ -34,7 +34,7 @@ public class FastFormat {
   }
 
   public static int toZeroPaddedString(byte[] output, int outputOffset, long 
num, int width,
-   int radix, byte[] prefix) {
+  int radix, byte[] prefix) {
 Preconditions.checkArgument(num >= 0);
 
 String strNum = Long.toString(num, radix);
@@ -43,7 +43,7 @@ public class FastFormat {
   }
 
   private static int toZeroPaddedString(byte[] output, int outputOffset, 
String strNum, int width,
-byte[] prefix) {
+  byte[] prefix) {
 
 int index = outputOffset;
 
diff --git 
a/core/src/main/java/org/apache/accumulo/core/util/format/DefaultFormatter.java 
b/core/src/main/java/org/apache/accumulo/core/util/format/DefaultFormatter.java
index 14e6c46..eeb8af7 100644
--- 
a/core/src/main/java/org/apache/accumulo/core/util/format/DefaultFormatter.java
+++ 
b/core/src/main/java/org/apache/accumulo/core/util/format/DefaultFormatter.java
@@ -182,7 +182,7 @@ public class DefaultFormatter implements Formatter {
   }
 
   static StringBuilder appendBytes(StringBuilder sb, byte[] ba, int offset, 
int len,
-   int shownLength) {
+  int shownLength) {
 int length = Math.min(len, shownLength);
 return DefaultFormatter.appendBytes(sb, ba, offset, length);
   }
diff --git 
a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java
 
b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java
index 14548a5..8222ed3 100644
--- 
a/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java
+++ 
b/server/base/src/test/java/org/apache/accumulo/server/master/balancer/GroupBalancerTest.java
@@ -196,10 +196,10 @@ public class GroupBalancerTest {
   @Test
   public void testSingleGroup() {
 
-String[][] tests = {new String[]{"a", "b", "c", "d"}, new String[]{"a", 
"b", "c"},
-new String[]{"a", "b", "c", "d", "e"}, new St

[accumulo] branch master updated: Consolidate option parsing classes (#993)

2019-02-27 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new ecd896d  Consolidate option parsing classes (#993)
ecd896d is described below

commit ecd896d181c797b89d7815d9bd8f6fa9e24f9b7c
Author: Mike Walch 
AuthorDate: Wed Feb 27 15:25:48 2019 -0500

Consolidate option parsing classes (#993)
---
 .../apache/accumulo/core/cli/BatchScannerOpts.java | 31 
 .../accumulo/core/cli/ClientOnDefaultTable.java| 37 ---
 .../accumulo/core/cli/ClientOnRequiredTable.java   | 32 
 .../java/org/apache/accumulo/core/util/Merge.java  | 16 
 .../core/cli/ClientOnDefaultTableTest.java | 43 --
 ...erUtilOnRequiredTable.java => ContextOpts.java} |  4 +-
 .../apache/accumulo/server/util/RandomWriter.java  | 15 +++-
 .../accumulo/server/util/RandomizeVolumes.java | 13 +--
 .../java/org/apache/accumulo/tracer/TraceDump.java | 14 +++
 .../apache/accumulo/tracer/TraceTableStats.java| 13 ---
 .../org/apache/accumulo/test/TestBinaryRows.java   | 18 +
 .../java/org/apache/accumulo/test/TestIngest.java  | 18 +
 .../apache/accumulo/test/TestRandomDeletes.java| 30 ---
 .../org/apache/accumulo/test/VerifyIngest.java |  2 +-
 .../apache/accumulo/test/functional/BinaryIT.java  |  2 +-
 .../test/performance/ContinuousIngest.java | 17 ++---
 .../test/performance/scan/CollectTabletStats.java  | 18 +
 .../performance/scan/CollectTabletStatsTest.java   |  4 +-
 18 files changed, 103 insertions(+), 224 deletions(-)

diff --git 
a/core/src/main/java/org/apache/accumulo/core/cli/BatchScannerOpts.java 
b/core/src/main/java/org/apache/accumulo/core/cli/BatchScannerOpts.java
deleted file mode 100644
index 2706468..000
--- a/core/src/main/java/org/apache/accumulo/core/cli/BatchScannerOpts.java
+++ /dev/null
@@ -1,31 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.cli;
-
-import org.apache.accumulo.core.cli.ClientOpts.TimeConverter;
-
-import com.beust.jcommander.Parameter;
-
-public class BatchScannerOpts {
-  @Parameter(names = "--scanThreads", description = "Number of threads to use 
when batch scanning")
-  public Integer scanThreads = 10;
-
-  @Parameter(names = "--scanTimeout", converter = TimeConverter.class,
-  description = "timeout used to fail a batch scan")
-  public Long scanTimeout = Long.MAX_VALUE;
-
-}
diff --git 
a/core/src/main/java/org/apache/accumulo/core/cli/ClientOnDefaultTable.java 
b/core/src/main/java/org/apache/accumulo/core/cli/ClientOnDefaultTable.java
deleted file mode 100644
index 42dec8f..000
--- a/core/src/main/java/org/apache/accumulo/core/cli/ClientOnDefaultTable.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.cli;
-
-import com.beust.jcommander.Parameter;
-
-public class ClientOnDefaultTable extends ClientOpts {
-  @Parameter(names = "--table", description = "table to use")
-  private String tableName;
-
-  public ClientOnDefaultTable(String table) {
-this.tableName = table;
-  }
-
-  public String getTableName() {
-r

[accumulo] branch master updated: #982 - Suppress incorrect lgtm warning

2019-02-27 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new edc6d92   #982 - Suppress incorrect lgtm warning
edc6d92 is described below

commit edc6d924489813254c13ea7b5cf6a8e779e00861
Author: Mike Walch 
AuthorDate: Wed Feb 27 14:55:58 2019 -0500

 #982 - Suppress incorrect lgtm warning
---
 shell/src/main/java/org/apache/accumulo/shell/Shell.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/shell/src/main/java/org/apache/accumulo/shell/Shell.java 
b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
index 14db16d..7e2fb3b 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/Shell.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
@@ -1130,8 +1130,8 @@ public class Shell extends ShellOptions implements 
KeywordExecutable {
 
   private final void printHelp(String usage, String description, Options opts, 
int width)
   throws IOException {
-new HelpFormatter().printHelp(new PrintWriter(reader.getOutput()), width, 
usage, description,
-opts, 2, 5, null, true);
+PrintWriter pw = new PrintWriter(reader.getOutput()); // lgtm 
[java/output-resource-leak]
+new HelpFormatter().printHelp(pw, width, usage, description, opts, 2, 5, 
null, true);
 reader.getOutput().flush();
   }
 



[accumulo-testing] branch master updated: Clean up CLI parsing and stop depending on Accumulo internals (#61)

2019-02-27 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-testing.git


The following commit(s) were added to refs/heads/master by this push:
 new 65c55d0  Clean up CLI parsing and stop depending on Accumulo internals 
(#61)
65c55d0 is described below

commit 65c55d0d0929be1d3ae8eaa8ed8cffd4cf47ab4c
Author: Mike Walch 
AuthorDate: Wed Feb 27 15:07:29 2019 -0500

Clean up CLI parsing and stop depending on Accumulo internals (#61)

* Remove dependencies on Accumulo internal CLI parsing code
  by adding ClientOpts and Help
  by adding ClientOpts and Help
* Remove trace code from TestIngest
---
 contrib/import-control.xml |   8 -
 .../apache/accumulo/testing/cli/ClientOpts.java| 213 +
 .../java/org/apache/accumulo/testing/cli/Help.java |  53 +
 .../accumulo/testing/continuous/TimeBinner.java|   4 +-
 .../testing/continuous/UndefinedAnalyzer.java  |  22 +--
 .../testing/ingest/BulkImportDirectory.java|   8 +-
 .../apache/accumulo/testing/ingest/TestIngest.java |  52 ++---
 .../accumulo/testing/ingest/VerifyIngest.java  |  13 +-
 .../accumulo/testing/merkle/cli/CompareTables.java |   6 +-
 .../testing/merkle/cli/ComputeRootHash.java|  18 +-
 .../testing/merkle/cli/GenerateHashes.java |  17 +-
 .../testing/merkle/cli/ManualComparison.java   |   2 +-
 .../testing/merkle/ingest/RandomWorkload.java  |  34 ++--
 .../accumulo/testing/randomwalk/bulk/Verify.java   |  12 +-
 .../apache/accumulo/testing/scalability/Run.java   |   2 +-
 .../org/apache/accumulo/testing/stress/Scan.java   |   5 +-
 .../apache/accumulo/testing/stress/ScanOpts.java   |  16 +-
 .../org/apache/accumulo/testing/stress/Write.java  |  15 +-
 .../accumulo/testing/stress/WriteOptions.java  |  16 +-
 19 files changed, 362 insertions(+), 154 deletions(-)

diff --git a/contrib/import-control.xml b/contrib/import-control.xml
index b033e9c..df7f39d 100644
--- a/contrib/import-control.xml
+++ b/contrib/import-control.xml
@@ -46,14 +46,6 @@
 
 
 
-
-
-
-
-
-
-
-
 
 
 
diff --git a/src/main/java/org/apache/accumulo/testing/cli/ClientOpts.java 
b/src/main/java/org/apache/accumulo/testing/cli/ClientOpts.java
new file mode 100644
index 000..6643d44
--- /dev/null
+++ b/src/main/java/org/apache/accumulo/testing/cli/ClientOpts.java
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.testing.cli;
+
+import static java.nio.charset.StandardCharsets.UTF_8;
+
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URL;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.Properties;
+
+import org.apache.accumulo.core.client.Accumulo;
+import org.apache.accumulo.core.client.AccumuloClient;
+import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
+import org.apache.accumulo.core.conf.ClientProperty;
+import org.apache.accumulo.core.conf.ConfigurationTypeHelper;
+import org.apache.accumulo.core.security.Authorizations;
+import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.log4j.Level;
+import org.apache.log4j.Logger;
+
+import com.beust.jcommander.IStringConverter;
+import com.beust.jcommander.Parameter;
+
+public class ClientOpts extends Help {
+
+  public static class TimeConverter implements IStringConverter {
+@Override
+public Long convert(String value) {
+  return ConfigurationTypeHelper.getTimeInMillis(value);
+}
+  }
+
+  public static class AuthConverter implements 
IStringConverter {
+@Override
+public Authorizations convert(String value) {
+  return new Authorizations(value.split(","));
+}
+  }
+
+  public static class Password {
+public byte[] value;
+
+public Password(String dfault) {
+  value = dfault.getBytes(UTF_8);
+}
+
+@Override
+public String toString() {
+  return new String(value, UTF_8);
+}
+  }
+
+  public static class PasswordConverter implements IStringConverter {
+@Overrid
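The converters in the new `ClientOpts` above implement jcommander's `IStringConverter<T>`, a one-method interface jcommander calls to turn a raw `--option` string into a typed value. The sketch below declares a local stand-in for that interface so it compiles without the jcommander jar; the time parsing is a simplified illustration, not Accumulo's `ConfigurationTypeHelper.getTimeInMillis`:

```java
public class ConverterDemo {

    // Stand-in for com.beust.jcommander.IStringConverter<T>, declared
    // locally so this sketch runs without jcommander on the classpath.
    public interface IStringConverter<T> {
        T convert(String value);
    }

    // Mirrors the TimeConverter pattern above: turn "5s"/"200ms"-style
    // option strings into milliseconds (simplified suffix handling).
    public static class TimeConverter implements IStringConverter<Long> {
        @Override
        public Long convert(String value) {
            if (value.endsWith("ms")) {
                return Long.parseLong(value.substring(0, value.length() - 2));
            } else if (value.endsWith("s")) {
                return Long.parseLong(value.substring(0, value.length() - 1)) * 1000;
            }
            return Long.parseLong(value); // bare number: already millis
        }
    }

    public static void main(String[] args) {
        System.out.println(new TimeConverter().convert("5s")); // prints 5000
    }
}
```

With the real library, the converter is wired to an option via `@Parameter(names = "--scanTimeout", converter = TimeConverter.class)`, as in the deleted `BatchScannerOpts` shown earlier.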

[accumulo] branch master updated: More try-with-resources BatchWriter updates (#989)

2019-02-26 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 64a33c8  More try-with-resources BatchWriter updates (#989)
64a33c8 is described below

commit 64a33c8a41ebf9c4cb8dea42a50b3cf01567db02
Author: Mike Walch 
AuthorDate: Tue Feb 26 10:26:03 2019 -0500

More try-with-resources BatchWriter updates (#989)
---
 .../accumulo/test/functional/BloomFilterIT.java| 11 ++-
 .../accumulo/test/functional/CloneTestIT.java  |  3 +-
 .../test/functional/KerberosRenewalIT.java | 11 ++-
 .../accumulo/test/functional/LargeRowIT.java   | 27 +++-
 .../test/functional/ManyWriteAheadLogsIT.java  |  6 +-
 .../test/functional/MasterAssignmentIT.java| 11 ++-
 .../apache/accumulo/test/functional/MergeIT.java   | 39 ++-
 .../accumulo/test/functional/PermissionsIT.java| 39 +--
 .../accumulo/test/functional/ReadWriteIT.java  | 14 ++--
 .../apache/accumulo/test/functional/ScanIdIT.java  |  7 +-
 .../accumulo/test/functional/ScanIteratorIT.java   | 36 --
 .../accumulo/test/functional/ScanRangeIT.java  | 25 +++
 .../test/functional/ScanSessionTimeOutIT.java  | 17 ++---
 .../accumulo/test/functional/ScannerContextIT.java | 49 +++---
 .../apache/accumulo/test/functional/ScannerIT.java | 15 ++--
 .../test/functional/ServerSideErrorIT.java | 14 ++--
 .../test/functional/SessionBlockVerifyIT.java  | 17 ++---
 .../test/functional/SessionDurabilityIT.java   | 12 ++--
 .../test/functional/SparseColumnFamilyIT.java  | 30 
 .../apache/accumulo/test/functional/SummaryIT.java | 14 ++--
 .../apache/accumulo/test/functional/TabletIT.java  | 18 +++--
 .../functional/TabletStateChangeIteratorIT.java| 12 ++--
 .../apache/accumulo/test/functional/TimeoutIT.java | 18 +++--
 .../accumulo/test/functional/TooManyDeletesIT.java |  7 +-
 .../accumulo/test/functional/VisibilityIT.java | 79 ++
 .../accumulo/test/functional/WALSunnyDayIT.java| 32 -
 .../test/functional/ZookeeperRestartIT.java| 10 +--
 27 files changed, 253 insertions(+), 320 deletions(-)

diff --git 
a/test/src/main/java/org/apache/accumulo/test/functional/BloomFilterIT.java 
b/test/src/main/java/org/apache/accumulo/test/functional/BloomFilterIT.java
index e329fa4..3ede1ba 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/BloomFilterIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BloomFilterIT.java
@@ -28,7 +28,6 @@ import java.util.Random;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.admin.TableOperations;
 import org.apache.accumulo.core.conf.Property;
@@ -95,11 +94,11 @@ public class BloomFilterIT extends AccumuloClusterHarness {
 log.info("Writing complete");
 
 // test inserting an empty key
-BatchWriter bw = c.createBatchWriter(tables[3], new 
BatchWriterConfig());
-Mutation m = new Mutation(new Text(""));
-m.put(new Text(""), new Text(""), new Value("foo1".getBytes()));
-bw.addMutation(m);
-bw.close();
+try (BatchWriter bw = c.createBatchWriter(tables[3])) {
+  Mutation m = new Mutation(new Text(""));
+  m.put(new Text(""), new Text(""), new Value("foo1".getBytes()));
+  bw.addMutation(m);
+}
 c.tableOperations().flush(tables[3], null, null, true);
 
 for (String table : Arrays.asList(tables[0], tables[1], tables[2])) {
diff --git 
a/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java 
b/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
index 9133380..7b8fbbe 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
@@ -35,7 +35,6 @@ import java.util.TreeSet;
 import org.apache.accumulo.cluster.AccumuloCluster;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.admin.DiskUsage;
@@ -270,7 +269,7 @@ public class CloneTestIT extends AccumuloClusterHarness {
 
   client.tableOperations().addSplits(tables[0], splits);
 
-  try (BatchWriter bw = client.createBatchWriter(tables[0], new 
BatchWriterConfig())) {
+  try (

[accumulo] branch master updated: More try-with-resources BatchWriter updates (#989)

2019-02-26 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 64a33c8  More try-with-resources BatchWriter updates (#989)
64a33c8 is described below

commit 64a33c8a41ebf9c4cb8dea42a50b3cf01567db02
Author: Mike Walch 
AuthorDate: Tue Feb 26 10:26:03 2019 -0500

More try-with-resources BatchWriter updates (#989)
---
 .../accumulo/test/functional/BloomFilterIT.java| 11 ++-
 .../accumulo/test/functional/CloneTestIT.java  |  3 +-
 .../test/functional/KerberosRenewalIT.java | 11 ++-
 .../accumulo/test/functional/LargeRowIT.java   | 27 +++-
 .../test/functional/ManyWriteAheadLogsIT.java  |  6 +-
 .../test/functional/MasterAssignmentIT.java| 11 ++-
 .../apache/accumulo/test/functional/MergeIT.java   | 39 ++-
 .../accumulo/test/functional/PermissionsIT.java| 39 +--
 .../accumulo/test/functional/ReadWriteIT.java  | 14 ++--
 .../apache/accumulo/test/functional/ScanIdIT.java  |  7 +-
 .../accumulo/test/functional/ScanIteratorIT.java   | 36 --
 .../accumulo/test/functional/ScanRangeIT.java  | 25 +++
 .../test/functional/ScanSessionTimeOutIT.java  | 17 ++---
 .../accumulo/test/functional/ScannerContextIT.java | 49 +++---
 .../apache/accumulo/test/functional/ScannerIT.java | 15 ++--
 .../test/functional/ServerSideErrorIT.java | 14 ++--
 .../test/functional/SessionBlockVerifyIT.java  | 17 ++---
 .../test/functional/SessionDurabilityIT.java   | 12 ++--
 .../test/functional/SparseColumnFamilyIT.java  | 30 
 .../apache/accumulo/test/functional/SummaryIT.java | 14 ++--
 .../apache/accumulo/test/functional/TabletIT.java  | 18 +++--
 .../functional/TabletStateChangeIteratorIT.java| 12 ++--
 .../apache/accumulo/test/functional/TimeoutIT.java | 18 +++--
 .../accumulo/test/functional/TooManyDeletesIT.java |  7 +-
 .../accumulo/test/functional/VisibilityIT.java | 79 ++
 .../accumulo/test/functional/WALSunnyDayIT.java| 32 -
 .../test/functional/ZookeeperRestartIT.java| 10 +--
 27 files changed, 253 insertions(+), 320 deletions(-)

diff --git 
a/test/src/main/java/org/apache/accumulo/test/functional/BloomFilterIT.java 
b/test/src/main/java/org/apache/accumulo/test/functional/BloomFilterIT.java
index e329fa4..3ede1ba 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/BloomFilterIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/BloomFilterIT.java
@@ -28,7 +28,6 @@ import java.util.Random;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.admin.TableOperations;
 import org.apache.accumulo.core.conf.Property;
@@ -95,11 +94,11 @@ public class BloomFilterIT extends AccumuloClusterHarness {
 log.info("Writing complete");
 
 // test inserting an empty key
-BatchWriter bw = c.createBatchWriter(tables[3], new BatchWriterConfig());
-Mutation m = new Mutation(new Text(""));
-m.put(new Text(""), new Text(""), new Value("foo1".getBytes()));
-bw.addMutation(m);
-bw.close();
+try (BatchWriter bw = c.createBatchWriter(tables[3])) {
+  Mutation m = new Mutation(new Text(""));
+  m.put(new Text(""), new Text(""), new Value("foo1".getBytes()));
+  bw.addMutation(m);
+}
 c.tableOperations().flush(tables[3], null, null, true);
 
 for (String table : Arrays.asList(tables[0], tables[1], tables[2])) {
diff --git 
a/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java 
b/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
index 9133380..7b8fbbe 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
@@ -35,7 +35,6 @@ import java.util.TreeSet;
 import org.apache.accumulo.cluster.AccumuloCluster;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.admin.DiskUsage;
@@ -270,7 +269,7 @@ public class CloneTestIT extends AccumuloClusterHarness {
 
   client.tableOperations().addSplits(tables[0], splits);
 
-  try (BatchWriter bw = client.createBatchWriter(tables[0], new BatchWriterConfig())) {
+  try (

[accumulo-website] branch asf-site updated: Jekyll build from master:f665f04

2019-02-25 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 140b7c6  Jekyll build from master:f665f04
140b7c6 is described below

commit 140b7c60580307f410a3f0e54d9d39b0d12719f4
Author: Mike Walch 
AuthorDate: Mon Feb 25 17:21:47 2019 -0500

Jekyll build from master:f665f04

Updated client docs with 2.0 API changes (#160)

* Limited use of Text
* Used new 2.0 API where possible
---
 docs/2.x/development/mapreduce.html|  2 +-
 docs/2.x/getting-started/clients.html  |  2 +-
 docs/2.x/getting-started/table_design.html | 51 +++---
 feed.xml   |  4 +--
 search_data.json   |  6 ++--
 5 files changed, 32 insertions(+), 33 deletions(-)

diff --git a/docs/2.x/development/mapreduce.html 
b/docs/2.x/development/mapreduce.html
index c9ef96e..e824422 100644
--- a/docs/2.x/development/mapreduce.html
+++ b/docs/2.x/development/mapreduce.html
@@ -504,7 +504,7 @@ your job with yarn command.
 
 <a href="https://static.javadoc.io/org.apache.accumulo/accumulo-hadoop-mapreduce/2.0.0-alpha-2/org/apache/accumulo/hadoop/mapreduce/AccumuloInputFormat.html">AccumuloInputFormat</a> has optional settings.
 List<Range> ranges = new ArrayList<Range>();
-List<Pair<Text,Text>> columns = new ArrayList<Pair<Text,Text>>();
+Collection<IteratorSetting.Column> columns = new ArrayList<IteratorSetting.Column>();
 // populate ranges & columns
  IteratorSetting is = new IteratorSetting(30, RexExFilter.class);
  RegExFilter.setRegexs(is, ".*suffix", null, null, null, true);
diff --git a/docs/2.x/getting-started/clients.html 
b/docs/2.x/getting-started/clients.html
index 6e2f7ff..7cdff39 100644
--- a/docs/2.x/getting-started/clients.html
+++ b/docs/2.x/getting-started/clients.html
@@ -681,7 +681,7 @@ to return a subset of the columns available.
 
 try (Scanner scan = client.createScanner("table", auths)) {
   scan.setRange(new Range("harry","john"));
-  scan.fetchColumnFamily(new Text("attributes"));
+  scan.fetchColumnFamily("attributes");
 
   for (Entry<Key,Value> entry : scan) {
 Text row = entry.getKey().getRow();
diff --git a/docs/2.x/getting-started/table_design.html 
b/docs/2.x/getting-started/table_design.html
index fb2cab6..9151de9 100644
--- a/docs/2.x/getting-started/table_design.html
+++ b/docs/2.x/getting-started/table_design.html
@@ -435,11 +435,9 @@ if we have the following data in a comma-separated 
file:
 name in the column family, and a blank column qualifier:
 
 Mutation m = new Mutation(userid);
-final String column_qualifier = "";
-m.put("age", column_qualifier, age);
-m.put("address", column_qualifier, address);
-m.put("balance", column_qualifier, account_balance);
-
+m.at().family("age").put(age);
+m.at().family("address").put(address);
+m.at().family("balance").put(account_balance);
 writer.add(m);
 
 
@@ -451,7 +449,7 @@ userid as the range of a scanner and fetching specific 
columns:
 Range r = new Range(userid, userid); // single row
 Scanner s = client.createScanner("userdata", auths);
 s.setRange(r);
-s.fetchColumnFamily(new Text("age"));
+s.fetchColumnFamily("age");
 
 for (Entry<Key,Value> entry : s) {
   System.out.println(entry.getValue().toString());
@@ -517,7 +515,7 @@ of a lexicoder that encodes a java Date object so that it 
sorts lexicographicall
 
 // encode the rowId so that it is sorted lexicographically
 Mutation mutation = new Mutation(dateEncoder.encode(hour));
-mutation.put(new Text("colf"), new Text("colq"), new Value(new byte[]{}));
+mutation.at().family("colf").qualifier("colq").put(new byte[]{});
 
 
 If we want to return the most recent date first, we can reverse the sort 
order
@@ -533,7 +531,7 @@ with the reverse lexicoder:
 
 // encode the rowId so that it sorts in reverse lexicographic order
 Mutation mutation = new Mutation(reverseEncoder.encode(hour));
-mutation.put(new Text("colf"), new Text("colq"), new Value(new byte[]{}));
+mutation.at().family("colf").qualifier("colq").put(new byte[]{});
 
 
 Indexing
@@ -581,26 +579,26 @@ BatchScanner, which performs the lookups in multiple 
threads to multiple servers
 and returns an Iterator over all the rows retrieved. The rows returned are NOT 
in
 sorted order, as is the case with the basic Scanner interface.
 
-// first we scan the index for IDs of rows matching our query
-Text term = new Text("mySearchTerm");
-
-HashSet<Range> matchingRows = new HashSet<Range>();
+HashSet<Range> matchingRows = new HashSet<Range>();
 
-Scanner indexScanner = createScanner("index", auths);
-indexScanner.setRange(new Range(ter

[accumulo-website] branch master updated: Updated client docs with 2.0 API changes (#160)

2019-02-25 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new f665f04  Updated client docs with 2.0 API changes (#160)
f665f04 is described below

commit f665f04447bb7b74f1ce033e3e896690d9d6cff0
Author: Mike Walch 
AuthorDate: Mon Feb 25 17:21:05 2019 -0500

Updated client docs with 2.0 API changes (#160)

* Limited use of Text
* Used new 2.0 API where possible
---
 _docs-2/development/mapreduce.md|  2 +-
 _docs-2/getting-started/clients.md  |  2 +-
 _docs-2/getting-started/table_design.md | 49 -
 3 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/_docs-2/development/mapreduce.md b/_docs-2/development/mapreduce.md
index f5877e4..c312183 100644
--- a/_docs-2/development/mapreduce.md
+++ b/_docs-2/development/mapreduce.md
@@ -79,7 +79,7 @@ Follow the steps below to create a MapReduce job that reads 
from an Accumulo tab
 [AccumuloInputFormat] has optional settings.
 ```java
 List<Range> ranges = new ArrayList<Range>();
-List<Pair<Text,Text>> columns = new ArrayList<Pair<Text,Text>>();
+Collection<IteratorSetting.Column> columns = new ArrayList<IteratorSetting.Column>();
 // populate ranges & columns
 IteratorSetting is = new IteratorSetting(30, RexExFilter.class);
 RegExFilter.setRegexs(is, ".*suffix", null, null, null, true);
diff --git a/_docs-2/getting-started/clients.md 
b/_docs-2/getting-started/clients.md
index 68ac3ce..ec769d0 100644
--- a/_docs-2/getting-started/clients.md
+++ b/_docs-2/getting-started/clients.md
@@ -240,7 +240,7 @@ Authorizations auths = new Authorizations("public");
 
 try (Scanner scan = client.createScanner("table", auths)) {
   scan.setRange(new Range("harry","john"));
-  scan.fetchColumnFamily(new Text("attributes"));
+  scan.fetchColumnFamily("attributes");
 
   for (Entry<Key,Value> entry : scan) {
 Text row = entry.getKey().getRow();
diff --git a/_docs-2/getting-started/table_design.md 
b/_docs-2/getting-started/table_design.md
index e03d7b3..b029b2b 100644
--- a/_docs-2/getting-started/table_design.md
+++ b/_docs-2/getting-started/table_design.md
@@ -21,11 +21,9 @@ name in the column family, and a blank column qualifier:
 
 ```java
 Mutation m = new Mutation(userid);
-final String column_qualifier = "";
-m.put("age", column_qualifier, age);
-m.put("address", column_qualifier, address);
-m.put("balance", column_qualifier, account_balance);
-
+m.at().family("age").put(age);
+m.at().family("address").put(address);
+m.at().family("balance").put(account_balance);
 writer.add(m);
 ```
 
@@ -38,7 +36,7 @@ AccumuloClient client = Accumulo.newClient()
 Range r = new Range(userid, userid); // single row
 Scanner s = client.createScanner("userdata", auths);
 s.setRange(r);
-s.fetchColumnFamily(new Text("age"));
+s.fetchColumnFamily("age");
 
 for (Entry<Key,Value> entry : s) {
   System.out.println(entry.getValue().toString());
@@ -102,7 +100,7 @@ Date hour = new Date(epoch - (epoch % 360));
 
 // encode the rowId so that it is sorted lexicographically
 Mutation mutation = new Mutation(dateEncoder.encode(hour));
-mutation.put(new Text("colf"), new Text("colq"), new Value(new byte[]{}));
+mutation.at().family("colf").qualifier("colq").put(new byte[]{});
 ```
 
 If we want to return the most recent date first, we can reverse the sort order
@@ -119,7 +117,7 @@ Date hour = new Date(epoch - (epoch % 360));
 
 // encode the rowId so that it sorts in reverse lexicographic order
 Mutation mutation = new Mutation(reverseEncoder.encode(hour));
-mutation.put(new Text("colf"), new Text("colq"), new Value(new byte[]{}));
+mutation.at().family("colf").qualifier("colq").put(new byte[]{});
 ```
 
 ### Indexing
@@ -153,26 +151,26 @@ and returns an Iterator over all the rows retrieved. The 
rows returned are NOT i
 sorted order, as is the case with the basic Scanner interface.
 
 ```java
-// first we scan the index for IDs of rows matching our query
-Text term = new Text("mySearchTerm");
-
 HashSet<Range> matchingRows = new HashSet<Range>();
 
-Scanner indexScanner = createScanner("index", auths);
-indexScanner.setRange(new Range(term, term));
+// first we scan the index for IDs of rows matching our query
+try (Scanner indexScanner = client.createScanner("index", auths)) {
+  indexScanner.setRange(Range.exact("mySearchTerm"));
 
-// we retrieve the matching rowIDs and create a set of ranges
-for (Entry<Key,Value> entry : indexScanner) {
+  // we retrieve the matching rowIDs and create a set of ranges
+  for (Entry<Key,Value> entry : indexScanner) {
 matchingRows.add(new Range(entry.getKey().getColumnQualifier()));
+  }
 }
 
 // now we pass t

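The docs diff above replaces `m.put(family, qualifier, value)` with the fluent `m.at().family(...).qualifier(...).put(...)` chain from the 2.0 API. A minimal sketch of how such a builder chain is shaped; `FluentMutation` is a hypothetical stand-in, not the real `org.apache.accumulo.core.data.Mutation`:

```java
// Hypothetical sketch of the fluent builder shape used in the docs above.
// Not the real Accumulo Mutation; it only illustrates the call chain.
import java.util.ArrayList;
import java.util.List;

public class FluentMutationSketch {
  static class FluentMutation {
    final String row;
    final List<String> cells = new ArrayList<>();

    FluentMutation(String row) {
      this.row = row;
    }

    At at() {
      return new At(); // each at() starts a fresh cell builder
    }

    class At {
      String family = "";
      String qualifier = ""; // qualifier defaults to blank, as in the docs

      At family(String f) {
        this.family = f;
        return this;
      }

      At qualifier(String q) {
        this.qualifier = q;
        return this;
      }

      void put(String value) {
        cells.add(row + " " + family + ":" + qualifier + " -> " + value);
      }
    }
  }

  public static void main(String[] args) {
    FluentMutation m = new FluentMutation("userid");
    m.at().family("age").put("30");                       // blank qualifier
    m.at().family("balance").qualifier("usd").put("100");
    System.out.println(m.cells);
  }
}
```

The inner builder keeps the row fixed while family/qualifier accumulate, which is why each cell in the docs needs only the fields that differ from the defaults.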
[accumulo-website] branch master updated: Use cdnjs/cloudflare for jquery CDN

2019-02-25 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 4c0f299  Use cdnjs/cloudflare for jquery CDN
4c0f299 is described below

commit 4c0f299ad5ade62dca6c0c6f14ba520d7c6ec18c
Author: Mike Walch 
AuthorDate: Mon Feb 25 10:58:54 2019 -0500

Use cdnjs/cloudflare for jquery CDN
---
 _layouts/default.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_layouts/default.html b/_layouts/default.html
index 6af0960..fa5b09b 100644
--- a/_layouts/default.html
+++ b/_layouts/default.html
@@ -27,7 +27,7 @@
 
 {% if page.title_prefix %}{{ page.title_prefix | escape }}{% endif %}{% 
if page.title %}{{ page.title | escape }}{% else %}{{ site.title | escape }}{% 
endif %}
 
-<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.4/jquery.min.js"></script>
+<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js" integrity="sha256-BbhdlvQf/xTY9gja0Dq3HiwQF8LaCRTXxZKRutelT44=" crossorigin="anonymous"></script>
 <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
 <script src="https://cdn.datatables.net/v/bs/jq-2.2.3/dt-1.10.12/datatables.min.js"></script>
 {% include scripts.html %}



[accumulo] branch master updated: Use try-with-resources for BatchWriter (#985)

2019-02-25 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new c16928a  Use try-with-resources for BatchWriter (#985)
c16928a is described below

commit c16928a614f6c7e572c2df429aa81b60e0097e8e
Author: Mike Walch 
AuthorDate: Mon Feb 25 10:14:10 2019 -0500

Use try-with-resources for BatchWriter (#985)

* No need to pass in default BatchWriterConfig
---
 .../org/apache/accumulo/test/AuditMessageIT.java   | 27 +--
 .../java/org/apache/accumulo/test/CleanWalIT.java  | 21 
 .../apache/accumulo/test/ClientSideIteratorIT.java |  5 +-
 .../java/org/apache/accumulo/test/CloneIT.java |  5 +-
 .../accumulo/test/CompactionRateLimitingIT.java|  3 +-
 .../apache/accumulo/test/ConditionalWriterIT.java  | 56 ++
 .../test/ConfigurableMajorCompactionIT.java| 11 ++---
 .../org/apache/accumulo/test/ExistingMacIT.java| 25 +-
 .../java/org/apache/accumulo/test/FindMaxIT.java   |  3 +-
 .../org/apache/accumulo/test/ImportExportIT.java   | 16 +++
 .../apache/accumulo/test/KeyValueEqualityIT.java   | 22 -
 .../accumulo/test/functional/AddSplitIT.java   | 24 --
 .../test/functional/BadIteratorMincIT.java | 23 -
 .../test/functional/BadLocalityGroupMincIT.java| 12 ++---
 .../accumulo/test/functional/BatchScanSplitIT.java | 17 +++
 .../test/functional/BatchWriterFlushIT.java|  8 +---
 .../accumulo/test/functional/BloomFilterIT.java| 43 -
 .../accumulo/test/functional/ClassLoaderIT.java| 11 ++---
 .../accumulo/test/functional/CleanTmpIT.java   | 27 +--
 .../accumulo/test/functional/CombinerIT.java   | 14 +++---
 .../test/functional/ConcurrentDeleteTableIT.java   |  3 +-
 .../test/functional/ConfigurableCompactionIT.java  | 29 ++-
 .../accumulo/test/functional/CreateAndUseIT.java   | 15 +++---
 .../accumulo/test/functional/DeleteRowsIT.java | 16 +++
 .../test/functional/DeleteRowsSplitIT.java | 13 +++--
 .../test/functional/DeletedTablesDontFlushIT.java  |  8 ++--
 .../accumulo/test/functional/DurabilityIT.java | 16 +++
 .../test/functional/FateConcurrencyIT.java | 17 ---
 .../test/functional/GarbageCollectorIT.java| 45 +
 .../accumulo/test/functional/KerberosIT.java   | 47 +-
 30 files changed, 259 insertions(+), 323 deletions(-)

diff --git a/test/src/main/java/org/apache/accumulo/test/AuditMessageIT.java 
b/test/src/main/java/org/apache/accumulo/test/AuditMessageIT.java
index 947b6a5..982fbcf 100644
--- a/test/src/main/java/org/apache/accumulo/test/AuditMessageIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/AuditMessageIT.java
@@ -34,7 +34,6 @@ import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.BatchScanner;
 import org.apache.accumulo.core.client.BatchWriter;
-import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableExistsException;
 import org.apache.accumulo.core.client.TableNotFoundException;
@@ -303,13 +302,12 @@ public class AuditMessageIT extends ConfigurableMacBase {
 auditAccumuloClient.tableOperations().create(OLD_TEST_TABLE_NAME);
 
 // Insert some play data
-BatchWriter bw = auditAccumuloClient.createBatchWriter(OLD_TEST_TABLE_NAME,
-new BatchWriterConfig());
-Mutation m = new Mutation("myRow");
-m.put("cf1", "cq1", "v1");
-m.put("cf1", "cq2", "v3");
-bw.addMutation(m);
-bw.close();
+try (BatchWriter bw = auditAccumuloClient.createBatchWriter(OLD_TEST_TABLE_NAME)) {
+  Mutation m = new Mutation("myRow");
+  m.put("cf1", "cq1", "v1");
+  m.put("cf1", "cq2", "v3");
+  bw.addMutation(m);
+}
 
 // Prepare to export the table
 File exportDir = new File(getCluster().getConfig().getDir() + "/export");
@@ -392,13 +390,12 @@ public class AuditMessageIT extends ConfigurableMacBase {
 auditAccumuloClient.tableOperations().create(OLD_TEST_TABLE_NAME);
 
 // Insert some play data
-BatchWriter bw = auditAccumuloClient.createBatchWriter(OLD_TEST_TABLE_NAME,
-new BatchWriterConfig());
-Mutation m = new Mutation("myRow");
-m.put("cf1", "cq1", "v1");
-m.put("cf1", "cq2", "v3");
-bw.addMutation(m);
-bw.close();
+try (BatchWriter bw = auditAccumuloClient.createBatchWriter(OLD_TEST_TABLE_NAME)) {
+  Mutation m = new Mutation("myRo

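The pattern driving the #985 and #989 commits above can be shown with a small self-contained sketch. `RecordingWriter` below is a hypothetical stand-in for `BatchWriter` (which implements `AutoCloseable`); try-with-resources guarantees `close()` runs even on an exception path, which is what makes the explicit `bw.close()` calls removable:

```java
// Minimal sketch (not Accumulo code) of the try-with-resources refactoring.
// RecordingWriter is a hypothetical stand-in for BatchWriter.
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesSketch {
  static class RecordingWriter implements AutoCloseable {
    final List<String> mutations = new ArrayList<>();
    boolean closed = false;

    void addMutation(String m) {
      mutations.add(m); // queue the mutation, as BatchWriter.addMutation does
    }

    @Override
    public void close() {
      closed = true; // a real BatchWriter flushes queued mutations here
    }
  }

  public static void main(String[] args) {
    RecordingWriter bw = new RecordingWriter();
    // close() is guaranteed to run when the try block exits
    try (RecordingWriter w = bw) {
      w.addMutation("myRow cf1:cq1 -> v1");
      w.addMutation("myRow cf1:cq2 -> v3");
    }
    System.out.println(bw.mutations.size() + " " + bw.closed);
  }
}
```

Dropping the `new BatchWriterConfig()` argument in the same commits relies on the 2.0 `createBatchWriter(table)` overload that supplies default settings.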
[accumulo] branch master updated: Simplify CloneTestIT (#984)

2019-02-23 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new c7fcfab  Simplify CloneTestIT (#984)
c7fcfab is described below

commit c7fcfabe784b343c07094ae5a604fdaff4a0c4ce
Author: Mike Walch 
AuthorDate: Sat Feb 23 13:24:17 2019 -0500

Simplify CloneTestIT (#984)
---
 .../accumulo/test/functional/CloneTestIT.java  | 61 +-
 1 file changed, 24 insertions(+), 37 deletions(-)

diff --git 
a/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java 
b/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
index b0ff80c..9133380 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/CloneTestIT.java
@@ -36,7 +36,6 @@ import org.apache.accumulo.cluster.AccumuloCluster;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
-import org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.client.Scanner;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.client.admin.DiskUsage;
@@ -81,21 +80,7 @@ public class CloneTestIT extends AccumuloClusterHarness {
   Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE_INDEX.getKey(), "2M");
   c.tableOperations().setProperty(table1, 
Property.TABLE_FILE_MAX.getKey(), "23");
 
-  BatchWriter bw = writeData(table1, c);
-
-  Map<String,String> props = new HashMap<>();
-  props.put(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE.getKey(), "500K");
-
-  Set<String> exclude = new HashSet<>();
-  exclude.add(Property.TABLE_FILE_MAX.getKey());
-
-  c.tableOperations().clone(table1, table2, true, props, exclude);
-
-  Mutation m3 = new Mutation("009");
-  m3.put("data", "x", "1");
-  m3.put("data", "y", "2");
-  bw.addMutation(m3);
-  bw.close();
+  writeDataAndClone(c, table1, table2);
 
   checkData(table2, c);
 
@@ -179,9 +164,8 @@ public class CloneTestIT extends AccumuloClusterHarness {
 }
   }
 
-  private BatchWriter writeData(String table1, AccumuloClient c)
-  throws TableNotFoundException, MutationsRejectedException {
-BatchWriter bw = c.createBatchWriter(table1, new BatchWriterConfig());
+  private BatchWriter writeData(String table1, AccumuloClient c) throws Exception {
+BatchWriter bw = c.createBatchWriter(table1);
 
 Mutation m1 = new Mutation("001");
 m1.put("data", "x", "9");
@@ -198,6 +182,23 @@ public class CloneTestIT extends AccumuloClusterHarness {
 return bw;
   }
 
+  private void writeDataAndClone(AccumuloClient c, String table1, String table2) throws Exception {
+try (BatchWriter bw = writeData(table1, c)) {
+  Map<String,String> props = new HashMap<>();
+  props.put(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE.getKey(), "500K");
+
+  Set<String> exclude = new HashSet<>();
+  exclude.add(Property.TABLE_FILE_MAX.getKey());
+
+  c.tableOperations().clone(table1, table2, true, props, exclude);
+
+  Mutation m3 = new Mutation("009");
+  m3.put("data", "x", "1");
+  m3.put("data", "y", "2");
+  bw.addMutation(m3);
+}
+  }
+
   @Test
   public void testDeleteClone() throws Exception {
 String[] tableNames = getUniqueNames(3);
@@ -235,21 +236,7 @@ public class CloneTestIT extends AccumuloClusterHarness {
 
   c.tableOperations().create(table1);
 
-  BatchWriter bw = writeData(table1, c);
-
-  Map<String,String> props = new HashMap<>();
-  props.put(Property.TABLE_FILE_COMPRESSED_BLOCK_SIZE.getKey(), "500K");
-
-  Set<String> exclude = new HashSet<>();
-  exclude.add(Property.TABLE_FILE_MAX.getKey());
-
-  c.tableOperations().clone(table1, table2, true, props, exclude);
-
-  Mutation m3 = new Mutation("009");
-  m3.put("data", "x", "1");
-  m3.put("data", "y", "2");
-  bw.addMutation(m3);
-  bw.close();
+  writeDataAndClone(c, table1, table2);
 
   // delete source table, should not affect clone
   c.tableOperations().delete(table1);
@@ -283,9 +270,9 @@ public class CloneTestIT extends AccumuloClusterHarness {
 
   client.tableOperations().addSplits(tables[0], splits);
 
-  BatchWriter bw = client.createBatchWriter(tables[0], new BatchWriterConfig());
-  bw.addMutations(mutations);
-  bw.close();
+  try (BatchWriter bw = client.createBatchWriter(tables[0], new BatchWriterConfig())) {
+bw.addMutations(mutations);
+  }
 
   client.tableOperations().clone(tables[0], tables[1], true, null, null);
 



[accumulo] branch master updated: Simplify CreateInitialSplitsIT (#983)

2019-02-23 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new d57a444  Simplify CreateInitialSplitsIT (#983)
d57a444 is described below

commit d57a444da6def25d64f5bc90ea80e65b8fc82b1e
Author: Mike Walch 
AuthorDate: Sat Feb 23 13:23:43 2019 -0500

Simplify CreateInitialSplitsIT (#983)

* Reduce duplicate code
---
 .../test/functional/CreateInitialSplitsIT.java | 52 +++---
 1 file changed, 16 insertions(+), 36 deletions(-)

diff --git 
a/test/src/main/java/org/apache/accumulo/test/functional/CreateInitialSplitsIT.java
 
b/test/src/main/java/org/apache/accumulo/test/functional/CreateInitialSplitsIT.java
index 6ea1b44..3522151 100644
--- 
a/test/src/main/java/org/apache/accumulo/test/functional/CreateInitialSplitsIT.java
+++ 
b/test/src/main/java/org/apache/accumulo/test/functional/CreateInitialSplitsIT.java
@@ -86,6 +86,16 @@ public class CreateInitialSplitsIT extends 
AccumuloClusterHarness {
 assertTrue(client.tableOperations().exists(tableName));
   }
 
+  private void runTest(SortedSet<Text> expectedSplits) throws AccumuloSecurityException,
+  TableNotFoundException, AccumuloException, TableExistsException {
+NewTableConfiguration ntc = new NewTableConfiguration().withSplits(expectedSplits);
+assertFalse(client.tableOperations().exists(tableName));
+client.tableOperations().create(tableName, ntc);
+assertTrue(client.tableOperations().exists(tableName));
+Collection<Text> createdSplits = client.tableOperations().listSplits(tableName);
+assertEquals(expectedSplits, new TreeSet<>(createdSplits));
+  }
+
   /**
* Create initial splits by providing splits from a Java Collection.
*/
@@ -93,26 +103,14 @@ public class CreateInitialSplitsIT extends 
AccumuloClusterHarness {
   public void testCreateInitialSplits() throws TableExistsException, 
AccumuloSecurityException,
   AccumuloException, TableNotFoundException {
 tableName = getUniqueNames(1)[0];
-SortedSet<Text> expectedSplits = generateNonBinarySplits(3000, 32);
-NewTableConfiguration ntc = new NewTableConfiguration().withSplits(expectedSplits);
-assertFalse(client.tableOperations().exists(tableName));
-client.tableOperations().create(tableName, ntc);
-assertTrue(client.tableOperations().exists(tableName));
-Collection<Text> createdSplits = client.tableOperations().listSplits(tableName);
-assertEquals(expectedSplits, new TreeSet<>(createdSplits));
+runTest(generateNonBinarySplits(3000, 32));
   }
 
   @Test
   public void testCreateInitialSplitsWithEncodedSplits() throws 
TableExistsException,
   AccumuloSecurityException, AccumuloException, TableNotFoundException {
 tableName = getUniqueNames(1)[0];
-SortedSet<Text> expectedSplits = generateNonBinarySplits(3000, 32, true);
-NewTableConfiguration ntc = new NewTableConfiguration().withSplits(expectedSplits);
-assertFalse(client.tableOperations().exists(tableName));
-client.tableOperations().create(tableName, ntc);
-assertTrue(client.tableOperations().exists(tableName));
-Collection<Text> createdSplits = client.tableOperations().listSplits(tableName);
-assertEquals(expectedSplits, new TreeSet<>(createdSplits));
+runTest(generateNonBinarySplits(3000, 32, true));
   }
 
   /**
@@ -122,26 +120,14 @@ public class CreateInitialSplitsIT extends 
AccumuloClusterHarness {
   public void testCreateInitialBinarySplits() throws TableExistsException,
   AccumuloSecurityException, AccumuloException, TableNotFoundException {
 tableName = getUniqueNames(1)[0];
-SortedSet<Text> expectedSplits = generateBinarySplits(1000, 16);
-NewTableConfiguration ntc = new NewTableConfiguration().withSplits(expectedSplits);
-assertFalse(client.tableOperations().exists(tableName));
-client.tableOperations().create(tableName, ntc);
-assertTrue(client.tableOperations().exists(tableName));
-Collection<Text> createdSplits = client.tableOperations().listSplits(tableName);
-assertEquals(expectedSplits, new TreeSet<>(createdSplits));
+runTest(generateBinarySplits(1000, 16));
   }
 
   @Test
   public void testCreateInitialBinarySplitsWithEncodedSplits() throws 
TableExistsException,
   AccumuloSecurityException, AccumuloException, TableNotFoundException {
 tableName = getUniqueNames(1)[0];
-SortedSet<Text> expectedSplits = generateBinarySplits(1000, 16, true);
-NewTableConfiguration ntc = new NewTableConfiguration().withSplits(expectedSplits);
-assertFalse(client.tableOperations().exists(tableName));
-client.tableOperations().create(tableName, ntc);
-assertTrue(client.tableOperations().exists(tableName));
-Collection<Text> createdSplits = client.tableOperations().listSplits(tableName);
-assertEquals(expectedSplits, new TreeSet<>(createdSplits));
+  

[accumulo] branch master updated: #982 - Revert change to fix build

2019-02-23 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new c464099  #982 - Revert change to fix build
c464099 is described below

commit c4640998bf2558e27bb296cf09cb8e809d96f545
Author: Mike Walch 
AuthorDate: Sat Feb 23 08:50:53 2019 -0500

#982 - Revert change to fix build
---
 shell/src/main/java/org/apache/accumulo/shell/Shell.java | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/shell/src/main/java/org/apache/accumulo/shell/Shell.java 
b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
index ada3424..14db16d 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/Shell.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
@@ -1130,10 +1130,9 @@ public class Shell extends ShellOptions implements 
KeywordExecutable {
 
   private final void printHelp(String usage, String description, Options opts, 
int width)
   throws IOException {
-try (PrintWriter pw = new PrintWriter(reader.getOutput())) {
-  new HelpFormatter().printHelp(pw, width, usage, description, opts, 2, 5, null, true);
-  reader.getOutput().flush();
-}
+new HelpFormatter().printHelp(new PrintWriter(reader.getOutput()), width, usage, description,
+opts, 2, 5, null, true);
+reader.getOutput().flush();
   }
 
   public int getExitCode() {



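The revert above exists because try-with-resources on a `PrintWriter` that wraps a shared `Writer` also closes that underlying `Writer`, leaving the shell's console writer dead after the first `printHelp` call. A self-contained sketch of the pitfall; `Sink` is a hypothetical stand-in for the shell's console `Writer`:

```java
// Sketch of why the try-with-resources printHelp was reverted: closing the
// wrapping PrintWriter cascades close() to the shared underlying Writer.
import java.io.PrintWriter;
import java.io.Writer;

public class CloseWrapperSketch {
  static class Sink extends Writer {
    final StringBuilder buf = new StringBuilder();
    boolean closed = false;

    @Override
    public void write(char[] cbuf, int off, int len) {
      buf.append(cbuf, off, len);
    }

    @Override
    public void flush() {}

    @Override
    public void close() {
      closed = true; // after this, the shared writer is unusable
    }
  }

  public static void main(String[] args) {
    Sink sink = new Sink();
    try (PrintWriter pw = new PrintWriter(sink)) {
      pw.print("help text");
    } // pw.close() flushes, then closes sink too
    System.out.println(sink.closed);
  }
}
```

The flush-only version restored by the revert trades an unclosed `PrintWriter` wrapper for keeping the shared writer open, which is the behavior the shell needs.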
[accumulo] branch master updated: Fix code quality issues (#982)

2019-02-22 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 9056570  Fix code quality issues (#982)
9056570 is described below

commit 90565705bce67a38895e157a194ec654c3f9dfea
Author: Mike Walch 
AuthorDate: Fri Feb 22 16:18:58 2019 -0500

Fix code quality issues (#982)

* Fix if statement that is always true
* Remove resource leak
---
 .../apache/accumulo/core/client/mapred/AccumuloOutputFormat.java   | 2 +-
 .../accumulo/core/client/mapreduce/AccumuloOutputFormat.java   | 2 +-
 .../apache/accumulo/hadoopImpl/mapred/AccumuloRecordWriter.java| 2 +-
 .../apache/accumulo/hadoopImpl/mapreduce/AccumuloRecordWriter.java | 2 +-
 shell/src/main/java/org/apache/accumulo/shell/Shell.java   | 7 ---
 5 files changed, 8 insertions(+), 7 deletions(-)

diff --git 
a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java
 
b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java
index 9daadcc..2db6819 100644
--- 
a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java
+++ 
b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloOutputFormat.java
@@ -548,7 +548,7 @@ public class AccumuloOutputFormat implements OutputFormat<Text,Mutation> {
   try {
 mtbw.close();
   } catch (MutationsRejectedException e) {
-if (e.getSecurityErrorCodes().size() >= 0) {
+if (!e.getSecurityErrorCodes().isEmpty()) {
   HashMap<String,Set<SecurityErrorCode>> tables = new HashMap<>();
   for (Entry<TabletId,Set<SecurityErrorCode>> ke : e.getSecurityErrorCodes().entrySet()) {
 String tableId = ke.getKey().getTableId().toString();
diff --git 
a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
 
b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
index 99b414f..93cc7a6 100644
--- 
a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
+++ 
b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloOutputFormat.java
@@ -549,7 +549,7 @@ public class AccumuloOutputFormat extends OutputFormat<Text,Mutation> {
   try {
 mtbw.close();
   } catch (MutationsRejectedException e) {
-if (e.getSecurityErrorCodes().size() >= 0) {
+if (!e.getSecurityErrorCodes().isEmpty()) {
   HashMap> tables = new HashMap<>();
   for (Entry> ke : e.getSecurityErrorCodes().entrySet()) {
 String tableId = ke.getKey().getTableId().toString();
diff --git a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapred/AccumuloRecordWriter.java b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapred/AccumuloRecordWriter.java
index 5ce64a3..c0371b3 100644
--- a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapred/AccumuloRecordWriter.java
+++ b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapred/AccumuloRecordWriter.java
@@ -184,7 +184,7 @@ public class AccumuloRecordWriter implements RecordWriter {
 try {
   mtbw.close();
 } catch (MutationsRejectedException e) {
-  if (e.getSecurityErrorCodes().size() >= 0) {
+  if (!e.getSecurityErrorCodes().isEmpty()) {
 HashMap> tables = new HashMap<>();
 for (Map.Entry> ke : e.getSecurityErrorCodes().entrySet()) {
   String tableId = ke.getKey().getTableId().toString();
diff --git a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/AccumuloRecordWriter.java b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/AccumuloRecordWriter.java
index 680d813..9818b7b 100644
--- a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/AccumuloRecordWriter.java
+++ b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/AccumuloRecordWriter.java
@@ -185,7 +185,7 @@ public class AccumuloRecordWriter extends RecordWriter {
 try {
   mtbw.close();
 } catch (MutationsRejectedException e) {
-  if (e.getSecurityErrorCodes().size() >= 0) {
+  if (!e.getSecurityErrorCodes().isEmpty()) {
 HashMap> tables = new HashMap<>();
 for (Map.Entry> ke : e.getSecurityErrorCodes().entrySet()) {
   String tableId = ke.getKey().getTableId().toString();
diff --git a/shell/src/main/java/org/apache/accumulo/shell/Shell.java b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
index 14db16d..ada3424 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/Shell.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
@@ -1130,9 +1130,10 @@ public class Shell extends ShellOptions implements KeywordExecutable {
 
   private final void printHelp(String usage, String description, Options opts, 
i

[accumulo] branch master updated: Fix JavaScript code quality issues (#980)

2019-02-22 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 1fb85d3  Fix JavaScript code quality issues (#980)
1fb85d3 is described below

commit 1fb85d3a18fd82fd6b496c5688105dbe77f16051
Author: Mike Walch 
AuthorDate: Fri Feb 22 13:47:33 2019 -0500

Fix JavaScript code quality issues (#980)

* Removed unused variables
* Use delete only on properties
* Use === for string comparison
---
 .../apache/accumulo/monitor/resources/js/master.js |  6 ++
 .../apache/accumulo/monitor/resources/js/navbar.js | 22 +++---
 .../accumulo/monitor/resources/js/overview.js  | 20 ++--
 .../apache/accumulo/monitor/resources/js/server.js |  3 +--
 .../accumulo/monitor/resources/js/tservers.js  |  1 -
 .../apache/accumulo/monitor/resources/js/vis.js|  8 ++--
 6 files changed, 26 insertions(+), 34 deletions(-)
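This commit's `===` changes guard against JavaScript's type-coercing `==`. Java has an analogous equality pitfall that the same class of code-quality tools flags; a standalone sketch (not from this repository):

```java
public class EqualityDemo {
    public static void main(String[] args) {
        String status = new String("OK"); // forces a distinct heap object
        // Reference comparison can be false even when contents match,
        // just as loose comparison in JavaScript can surprise:
        System.out.println(status == "OK");
        // Content comparison is the intended, checker-approved form:
        System.out.println(status.equals("OK"));
    }
}
```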

diff --git a/server/monitor/src/main/resources/org/apache/accumulo/monitor/resources/js/master.js b/server/monitor/src/main/resources/org/apache/accumulo/monitor/resources/js/master.js
index 8a1717f..7f119e4 100644
--- a/server/monitor/src/main/resources/org/apache/accumulo/monitor/resources/js/master.js
+++ b/server/monitor/src/main/resources/org/apache/accumulo/monitor/resources/js/master.js
@@ -65,16 +65,14 @@ function recoveryList() {
   } else {
 $('#recoveryList').show();
 
-var items = [];
-
 // Creates the table for the recovery list
 $.each(data.recoveryList, function(key, val) {
   var items = [];
   items.push(createFirstCell(val.server, val.server));
   items.push(createRightCell(val.log, val.log));
   var date = new Date(parseInt(val.time));
-  date = date.toLocaleString().split(' ').join('');
-  items.push(createRightCell(val.time, date));
+  var dateStr = date.toLocaleString().split(' ').join('');
+  items.push(createRightCell(val.time, dateStr));
   items.push(createRightCell(val.progress, val.progress));
 
   $('', {
diff --git a/server/monitor/src/main/resources/org/apache/accumulo/monitor/resources/js/navbar.js b/server/monitor/src/main/resources/org/apache/accumulo/monitor/resources/js/navbar.js
index 4a93db2..9582666 100644
--- a/server/monitor/src/main/resources/org/apache/accumulo/monitor/resources/js/navbar.js
+++ b/server/monitor/src/main/resources/org/apache/accumulo/monitor/resources/js/navbar.js
@@ -47,39 +47,39 @@ function refreshSideBarNotifications() {
   undefined : JSON.parse(sessionStorage.status);
 
   // Setting individual status notification
-  if (data.masterStatus == 'OK') {
+  if (data.masterStatus === 'OK') {
 $('#masterStatusNotification').removeClass('error').addClass('normal');
   } else {
 $('#masterStatusNotification').removeClass('normal').addClass('error');
   }
-  if (data.tServerStatus == 'OK') {
+  if (data.tServerStatus === 'OK') {
 $('#serverStatusNotification').removeClass('error').removeClass('warning').
 addClass('normal');
-  } else if (data.tServerStatus == 'WARN') {
+  } else if (data.tServerStatus === 'WARN') {
 $('#serverStatusNotification').removeClass('error').removeClass('normal').
 addClass('warning');
   } else {
 
$('#serverStatusNotification').removeClass('normal').removeClass('warning').
 addClass('error');
   }
-  if (data.gcStatus == 'OK') {
+  if (data.gcStatus === 'OK') {
 $('#gcStatusNotification').removeClass('error').addClass('normal');
   } else {
 $('#gcStatusNotification').addClass('error').removeClass('normal');
   }
 
   // Setting overall status notification
-  if (data.masterStatus == 'OK' &&
-  data.tServerStatus == 'OK' &&
-  data.gcStatus == 'OK') {
+  if (data.masterStatus === 'OK' &&
+  data.tServerStatus === 'OK' &&
+  data.gcStatus === 'OK') {
 $('#statusNotification').removeClass('error').removeClass('warning').
 addClass('normal');
-  } else if (data.masterStatus == 'ERROR' ||
-  data.tServerStatus == 'ERROR' ||
-  data.gcStatus == 'ERROR') {
+  } else if (data.masterStatus === 'ERROR' ||
+  data.tServerStatus === 'ERROR' ||
+  data.gcStatus === 'ERROR') {
 $('#statusNotification').removeClass('normal').removeClass('warning').
 addClass('error');
-  } else if (data.tServerStatus == 'WARN') {
+  } else if (data.tServerStatus === 'WARN') {
 $('#statusNotification').removeClass('normal').removeClass('error').
 addClass('warning');
   }
diff --git a/server/monitor/src/main/resources/org/apache/accumulo/monitor/resources/js/overview.js b/server/monitor/src/main/resources/org/apache/accumulo/monitor/resources/js/overview.js
index 5f44cf6..e0d0a12 100644
--- a/server/monitor/src/main/resources/org/apache/accumulo/monitor/resources/js/overview.js
+++ b/serve

[accumulo] branch master updated: Fixes #600 - TransportCachingIT fills up disk with messages (#962)

2019-02-22 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new d88f462  Fixes #600 - TransportCachingIT fills up disk with messages (#962)
d88f462 is described below

commit d88f4629eb9a3e830b4afd69bb85fc1c14b311ec
Author: Jeffrey Zeiberg 
AuthorDate: Fri Feb 22 12:23:31 2019 -0500

Fixes #600 - TransportCachingIT fills up disk with messages (#962)

* TransportCachingIT fails if server can't be obtained in 100 tries
---
 .../apache/accumulo/test/TransportCachingIT.java   | 33 --
 1 file changed, 25 insertions(+), 8 deletions(-)
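The bounded retry loop this commit introduces can be sketched independently; `discoverServers` below is a hypothetical stand-in for the ZooKeeper tserver lookup, and the attempt limit and sleep interval mirror the diff:

```java
import java.util.ArrayList;
import java.util.List;

public class RetryLoop {
    static final int MAX_ATTEMPTS = 100;

    // Hypothetical lookup: returns nothing until servers have registered.
    static List<String> discoverServers(int attempt) {
        return attempt < 3 ? new ArrayList<>() : List.of("tserver1:9997");
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> servers = new ArrayList<>();
        int attempts = 0;
        while (servers.isEmpty()) {
            servers = discoverServers(attempts);
            attempts++;
            if (servers.isEmpty()) {
                if (attempts < MAX_ATTEMPTS) {
                    Thread.sleep(100); // brief pause, then retry
                } else {
                    // Fail fast instead of looping (and logging) forever,
                    // which is what filled up the disk in issue #600.
                    throw new IllegalStateException(
                        "no servers after " + MAX_ATTEMPTS + " attempts");
                }
            }
        }
        System.out.println(servers);
    }
}
```

Capping the attempts turns an unbounded warn-and-spin loop into a test failure with a clear message.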

diff --git a/test/src/main/java/org/apache/accumulo/test/TransportCachingIT.java b/test/src/main/java/org/apache/accumulo/test/TransportCachingIT.java
index 3196ac0..7044314 100644
--- a/test/src/main/java/org/apache/accumulo/test/TransportCachingIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/TransportCachingIT.java
@@ -47,6 +47,7 @@ import org.slf4j.LoggerFactory;
  */
 public class TransportCachingIT extends AccumuloClusterHarness {
  private static final Logger log = LoggerFactory.getLogger(TransportCachingIT.class);
+  private static int ATTEMPTS = 0;
 
   @Test
   public void testCachedTransport() throws InterruptedException {
@@ -70,14 +71,30 @@ public class TransportCachingIT extends AccumuloClusterHarness {
   }
 
   ArrayList servers = new ArrayList<>();
-  for (String tserver : children) {
-String path = zkRoot + Constants.ZTSERVERS + "/" + tserver;
-byte[] data = ZooUtil.getLockData(zc, path);
-if (data != null) {
-  String strData = new String(data, UTF_8);
-  if (!strData.equals("master"))
-servers.add(new ThriftTransportKey(
-new ServerServices(strData).getAddress(Service.TSERV_CLIENT), 
rpcTimeout, context));
+  while (servers.isEmpty()) {
+for (String tserver : children) {
+  String path = zkRoot + Constants.ZTSERVERS + "/" + tserver;
+  byte[] data = ZooUtil.getLockData(zc, path);
+  if (data != null) {
+String strData = new String(data, UTF_8);
+if (!strData.equals("master"))
+  servers.add(new ThriftTransportKey(
+  new 
ServerServices(strData).getAddress(Service.TSERV_CLIENT), rpcTimeout,
+  context));
+  }
+}
+ATTEMPTS++;
+if (!servers.isEmpty())
+  break;
+else {
+  if (ATTEMPTS < 100) {
+log.warn("Making another attempt to add ThriftTransportKey servers");
+Thread.sleep(100);
+  } else {
+log.error("Failed to add ThriftTransportKey servers - Failing TransportCachingIT test");
+org.junit.Assert
+.fail("Failed to add ThriftTransportKey servers - Failing TransportCachingIT test");
+  }
 }
   }
 



[accumulo] branch master updated: Fixed more lgtm warnings (#977)

2019-02-21 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 13d2a7d  Fixed more lgtm warnings (#977)
13d2a7d is described below

commit 13d2a7daf5b9e9cd2579e3d0cd031ef957fc8725
Author: Mike Walch 
AuthorDate: Thu Feb 21 19:50:04 2019 -0500

Fixed more lgtm warnings (#977)

* Use primitive types if variable defined with a boxed/wrapper
  type is never null
---
 .../org/apache/accumulo/core/client/BatchWriterConfig.java   | 11 ++-
 .../accumulo/core/clientImpl/ConditionalWriterImpl.java  |  2 +-
 .../core/file/streams/BoundedRangeFileInputStream.java   |  2 +-
 .../accumulo/core/iterators/system/VisibilityFilter.java |  2 +-
 .../accumulo/core/iterators/user/VisibilityFilter.java   |  2 +-
 .../java/org/apache/accumulo/core/data/NamespaceIdTest.java  |  4 ++--
 .../test/java/org/apache/accumulo/core/data/TableIdTest.java |  4 ++--
 .../org/apache/accumulo/core/file/FileOperationsTest.java|  2 +-
 .../test/java/org/apache/accumulo/fate/AgeOffStoreTest.java  | 12 ++--
 .../accumulo/hadoop/its/mapred/AccumuloInputFormatIT.java|  2 +-
 .../accumulo/hadoop/its/mapreduce/AccumuloInputFormatIT.java |  2 +-
 .../accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java|  4 ++--
 .../java/org/apache/accumulo/server/ServerConstants.java |  4 ++--
 .../server/rpc/TCredentialsUpdatingInvocationHandler.java|  2 +-
 .../accumulo/master/tableOps/bulkVer2/BulkImportMove.java|  2 +-
 .../apache/accumulo/monitor/rest/master/MasterResource.java  |  8 
 .../apache/accumulo/start/classloader/vfs/MiniDFSUtil.java   |  2 +-
 .../apache/accumulo/test/mapred/AccumuloInputFormatIT.java   |  2 +-
 .../accumulo/test/mapreduce/AccumuloInputFormatIT.java   |  2 +-
 19 files changed, 36 insertions(+), 35 deletions(-)
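A minimal standalone sketch (not the Accumulo code) of the boxed-vs-primitive issue behind this commit: a wrapper type that is never null costs allocations, requires `equals()` instead of `==`, and invites reference-comparison bugs. The cache-range remark assumes a typical JVM, where boxed values are only cached for -128..127.

```java
public class BoxedVsPrimitive {
    // Boxed style: every comparison needs equals(), every return may box.
    static Long boxedTimeout() {
        Long def = 0L;
        if (def.equals(0L))   // == here would compare references
            return Long.MAX_VALUE;
        return def;
    }

    // Primitive style: plain == works and nothing is allocated.
    static long primitiveTimeout() {
        long def = 0L;
        return def == 0L ? Long.MAX_VALUE : def;
    }

    public static void main(String[] args) {
        System.out.println(boxedTimeout());
        System.out.println(primitiveTimeout());
        // Reference-equality trap outside the boxed-value cache range:
        Long a = 1000L, b = 1000L;
        System.out.println(a == b);       // distinct objects
        System.out.println(a.equals(b));  // equal contents
    }
}
```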

diff --git a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
index 27ad00d..fe5eeef 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
@@ -50,7 +50,7 @@ public class BatchWriterConfig implements Writable {
   .getTimeInMillis(BATCH_WRITER_LATENCY_MAX.getDefaultValue());
   private Long maxLatency = null;
 
-  private static final Long DEFAULT_TIMEOUT = getDefaultTimeout();
+  private static final long DEFAULT_TIMEOUT = getDefaultTimeout();
   private Long timeout = null;
 
   private static final Integer DEFAULT_MAX_WRITE_THREADS = Integer
@@ -60,12 +60,13 @@ public class BatchWriterConfig implements Writable {
   private Durability durability = Durability.DEFAULT;
   private boolean isDurabilitySet = false;
 
-  private static Long getDefaultTimeout() {
-Long def = ConfigurationTypeHelper.getTimeInMillis(BATCH_WRITER_TIMEOUT_MAX.getDefaultValue());
-if (def.equals(0L))
+  private static long getDefaultTimeout() {
+long defVal = ConfigurationTypeHelper
+.getTimeInMillis(BATCH_WRITER_TIMEOUT_MAX.getDefaultValue());
+if (defVal == 0L)
   return Long.MAX_VALUE;
 else
-  return def;
+  return defVal;
   }
 
   /**
diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/ConditionalWriterImpl.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/ConditionalWriterImpl.java
index 88cad63..63990f5 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/ConditionalWriterImpl.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/ConditionalWriterImpl.java
@@ -817,7 +817,7 @@ class ConditionalWriterImpl implements ConditionalWriter {
   return b;
 
 try {
-  Boolean bb = ve.evaluate(new ColumnVisibility(testVis));
+  boolean bb = ve.evaluate(new ColumnVisibility(testVis));
   cache.put(new Text(testVis), bb);
   return bb;
 } catch (VisibilityParseException | BadArgumentException e) {
diff --git a/core/src/main/java/org/apache/accumulo/core/file/streams/BoundedRangeFileInputStream.java b/core/src/main/java/org/apache/accumulo/core/file/streams/BoundedRangeFileInputStream.java
index f40404c..0615313 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/streams/BoundedRangeFileInputStream.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/streams/BoundedRangeFileInputStream.java
@@ -87,7 +87,7 @@ public class BoundedRangeFileInputStream extends InputStream {
 final int n = (int) Math.min(Integer.MAX_VALUE, Math.min(len, (end - pos)));
 if (n == 0)
   return -1;
-Integer ret = 0;
+int ret = 0;
 synchronized (in) {
   // ensuring we are not closed which would be followed by someone else reusing the decompressor
   if (closed) {
diff --git 
a/core/src/main/java/org/apache/accumulo

[accumulo] branch master updated: Use lambdas when creating Threads (#975)

2019-02-21 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 3d91fa6  Use lambdas when creating Threads (#975)
3d91fa6 is described below

commit 3d91fa6291e274120082148f890a7fcb2d1fd324
Author: Mike Walch 
AuthorDate: Thu Feb 21 18:00:10 2019 -0500

Use lambdas when creating Threads (#975)
---
 .../main/java/org/apache/accumulo/proxy/Proxy.java |  2 +-
 .../org/apache/accumulo/tracer/TracerTest.java |  7 +--
 .../main/java/org/apache/accumulo/shell/Shell.java | 15 +++---
 .../accumulo/test/InterruptibleScannersIT.java | 53 ++
 .../apache/accumulo/test/MultiTableRecoveryIT.java | 36 +++
 .../org/apache/accumulo/test/ShellServerIT.java| 11 +
 .../apache/accumulo/test/SplitCancelsMajCIT.java   | 15 +++---
 .../accumulo/test/TabletServerGivesUpIT.java   | 19 
 .../test/functional/DeleteRowsSplitIT.java | 23 --
 .../accumulo/test/functional/ReadWriteIT.java  | 15 +++---
 .../accumulo/test/functional/ShutdownIT.java   | 17 +++
 .../accumulo/test/functional/ZooCacheIT.java   | 17 +++
 12 files changed, 95 insertions(+), 135 deletions(-)
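The Thread-creation refactor in this commit can be illustrated with a standalone sketch: subclassing `Thread` to override `run()` is replaced by passing a `Runnable` lambda (or a method reference such as `tserver::serve`) to the constructor.

```java
public class ThreadLambda {
    public static void main(String[] args) throws InterruptedException {
        // Pre-lambda style: anonymous subclass overriding run()
        Thread viaSubclass = new Thread() {
            @Override
            public void run() {
                System.out.println("anonymous subclass");
            }
        };
        // Lambda style: pass the body as a Runnable
        Thread viaLambda = new Thread(() -> System.out.println("lambda"));

        viaSubclass.start();
        viaLambda.start();
        viaSubclass.join();
        viaLambda.join();
    }
}
```

The lambda form is shorter and composes the task with the thread rather than coupling them by inheritance; output ordering between the two threads is not deterministic.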

diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java b/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java
index b117fee..99ccdae 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java
@@ -150,7 +150,7 @@ public class Proxy implements KeywordExecutable {
   Runtime.getRuntime().addShutdownHook(new Thread(() -> {
 try {
   accumulo.stop();
-} catch (InterruptedException|IOException e) {
+} catch (InterruptedException | IOException e) {
   throw new RuntimeException(e);
 } finally {
   if (!folder.delete())
diff --git a/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java b/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java
index 4b95777..797538d 100644
--- a/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java
+++ b/server/tracer/src/test/java/org/apache/accumulo/tracer/TracerTest.java
@@ -168,12 +168,7 @@ public class TracerTest {
 TThreadPoolServer.Args args = new TThreadPoolServer.Args(transport);
 args.processor(new Processor(TraceWrap.service(new Service(;
 final TServer tserver = new TThreadPoolServer(args);
-Thread t = new Thread() {
-  @Override
-  public void run() {
-tserver.serve();
-  }
-};
+Thread t = new Thread(tserver::serve);
 t.start();
 TTransport clientTransport = new TSocket(new Socket("localhost", socket.getLocalPort()));
 TestService.Iface client = new TestService.Client(new TBinaryProtocol(clientTransport),
diff --git a/shell/src/main/java/org/apache/accumulo/shell/Shell.java b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
index 92d84c4..b928524 100644
--- a/shell/src/main/java/org/apache/accumulo/shell/Shell.java
+++ b/shell/src/main/java/org/apache/accumulo/shell/Shell.java
@@ -538,16 +538,13 @@ public class Shell extends ShellOptions implements KeywordExecutable {
   final FileHistory history = new FileHistory(new File(historyPath));
   reader.setHistory(history);
   // Add shutdown hook to flush file history, per jline javadocs
-  Runtime.getRuntime().addShutdownHook(new Thread() {
-@Override
-public void run() {
-  try {
-history.flush();
-  } catch (IOException e) {
-log.warn("Could not flush history to file.");
-  }
+  Runtime.getRuntime().addShutdownHook(new Thread(() -> {
+try {
+  history.flush();
+} catch (IOException e) {
+  log.warn("Could not flush history to file.");
 }
-  });
+  }));
 } catch (IOException e) {
   log.warn("Unable to load history file at " + historyPath);
 }
diff --git a/test/src/main/java/org/apache/accumulo/test/InterruptibleScannersIT.java b/test/src/main/java/org/apache/accumulo/test/InterruptibleScannersIT.java
index abc855f..a647cde 100644
--- a/test/src/main/java/org/apache/accumulo/test/InterruptibleScannersIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/InterruptibleScannersIT.java
@@ -62,37 +62,34 @@ public class InterruptibleScannersIT extends AccumuloClusterHarness {
 scanner.addScanIterator(cfg);
 // create a thread to interrupt the slow scan
 final Thread scanThread = Thread.currentThread();
-Thread thread = new Thread() {
-  @Override
-  public void run() {
-try {
-  // ensure the scan is running: not perfect, the metadata tables could be scanned, too.
-   

[accumulo] branch master updated: Fix more lgtm.com warnings (#974)

2019-02-21 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new a72c863  Fix more lgtm.com warnings (#974)
a72c863 is described below

commit a72c863a6d36142c48a7e4fabbe236bac4d294ec
Author: Mike Walch 
AuthorDate: Thu Feb 21 16:56:14 2019 -0500

Fix more lgtm.com warnings (#974)

* Remove synchronized from getAuthorizations() in ScannerOptions
* Added synchronized to methods that override a synchronized method
* Stop overriding Thread.start() in Proxy
---
 .../accumulo/core/clientImpl/ScannerOptions.java|  2 +-
 .../core/crypto/streams/NoFlushOutputStream.java|  2 +-
 .../impl/SeekableByteArrayInputStream.java  |  4 ++--
 .../file/streams/BoundedRangeFileInputStream.java   |  4 ++--
 .../core/file/streams/RateLimitedOutputStream.java  |  4 ++--
 .../main/java/org/apache/accumulo/proxy/Proxy.java  | 21 +
 6 files changed, 17 insertions(+), 20 deletions(-)
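A standalone sketch (assumed names, not the Accumulo code) of the warning this commit addresses: when a subclass overrides a `synchronized` method without the keyword, the override's own body runs without the lock that callers of the base class may rely on.

```java
public class SyncOverride {
    static class Base {
        private int count;
        public synchronized void increment() { count++; }
        public synchronized int get() { return count; }
    }

    // The override is not synchronized: any code added here runs without
    // holding the lock, which lgtm-style checkers flag as a mismatch.
    static class Unsafe extends Base {
        @Override
        public void increment() { super.increment(); }
    }

    // Restating synchronized keeps the locking contract explicit.
    static class Safe extends Base {
        @Override
        public synchronized void increment() { super.increment(); }
    }

    public static void main(String[] args) {
        Safe s = new Safe();
        s.increment();
        System.out.println(s.get());
    }
}
```

The commit applies the fix in both directions: adding `synchronized` where an override dropped it, and removing it where the base contract never needed it.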

diff --git a/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java b/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
index 71c7900..d578bdc 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
@@ -219,7 +219,7 @@ public class ScannerOptions implements ScannerBase {
   }
 
   @Override
-  public synchronized Authorizations getAuthorizations() {
+  public Authorizations getAuthorizations() {
 throw new UnsupportedOperationException("No authorizations to return");
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/crypto/streams/NoFlushOutputStream.java b/core/src/main/java/org/apache/accumulo/core/crypto/streams/NoFlushOutputStream.java
index cea7798..1bff099 100644
--- a/core/src/main/java/org/apache/accumulo/core/crypto/streams/NoFlushOutputStream.java
+++ b/core/src/main/java/org/apache/accumulo/core/crypto/streams/NoFlushOutputStream.java
@@ -31,7 +31,7 @@ public class NoFlushOutputStream extends DataOutputStream {
* calls write a single byte at a time and will kill performance.
*/
   @Override
-  public void write(byte[] b, int off, int len) throws IOException {
+  public synchronized void write(byte[] b, int off, int len) throws IOException {
 out.write(b, off, len);
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/SeekableByteArrayInputStream.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/SeekableByteArrayInputStream.java
index 928c550..773a921 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/SeekableByteArrayInputStream.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/impl/SeekableByteArrayInputStream.java
@@ -106,12 +106,12 @@ public class SeekableByteArrayInputStream extends InputStream {
   }
 
   @Override
-  public void mark(int readAheadLimit) {
+  public synchronized void mark(int readAheadLimit) {
 throw new UnsupportedOperationException();
   }
 
   @Override
-  public void reset() {
+  public synchronized void reset() {
 throw new UnsupportedOperationException();
   }
 
diff --git a/core/src/main/java/org/apache/accumulo/core/file/streams/BoundedRangeFileInputStream.java b/core/src/main/java/org/apache/accumulo/core/file/streams/BoundedRangeFileInputStream.java
index 34bd8b4..f40404c 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/streams/BoundedRangeFileInputStream.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/streams/BoundedRangeFileInputStream.java
@@ -115,12 +115,12 @@ public class BoundedRangeFileInputStream extends InputStream {
   }
 
   @Override
-  public void mark(int readlimit) {
+  public synchronized void mark(int readlimit) {
 mark = pos;
   }
 
   @Override
-  public void reset() throws IOException {
+  public synchronized void reset() throws IOException {
 if (mark < 0)
   throw new IOException("Resetting to invalid mark");
 pos = mark;
diff --git a/core/src/main/java/org/apache/accumulo/core/file/streams/RateLimitedOutputStream.java b/core/src/main/java/org/apache/accumulo/core/file/streams/RateLimitedOutputStream.java
index 0397092..1fcbbba 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/streams/RateLimitedOutputStream.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/streams/RateLimitedOutputStream.java
@@ -36,13 +36,13 @@ public class RateLimitedOutputStream extends DataOutputStream {
   }
 
   @Override
-  public void write(int i) throws IOException {
+  public synchronized void write(int i) throws IOException {
 writeLimiter.acquire(1);
 out.write(i);
   }
 
   @Override
-  public void write(byte[] buffer, int offset, int length) throws IOException {

[accumulo] branch master updated: Fixed code quality issues found by lgtm.com (#967)

2019-02-21 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 7173ae2  Fixed code quality issues found by lgtm.com (#967)
7173ae2 is described below

commit 7173ae263c0e3929bc6ccd3039eb9e67eccf9e2e
Author: Mike Walch 
AuthorDate: Thu Feb 21 13:00:43 2019 -0500

Fixed code quality issues found by lgtm.com (#967)

* Fixed logging argument issues
* Removed null checks where never null
---
 .../java/org/apache/accumulo/fate/zookeeper/ZooLock.java  |  5 ++---
 .../master/balancer/HostRegexTableLoadBalancer.java   |  2 +-
 .../org/apache/accumulo/gc/SimpleGarbageCollector.java|  4 ++--
 .../src/main/java/org/apache/accumulo/master/Master.java  |  2 +-
 .../replication/RemoveCompleteReplicationRecords.java | 15 +--
 .../java/org/apache/accumulo/master/tableOps/Utils.java   |  2 +-
 .../java/org/apache/accumulo/tserver/TabletServer.java|  2 +-
 .../tserver/replication/ReplicationProcessor.java |  2 +-
 shell/src/main/java/org/apache/accumulo/shell/Shell.java  |  3 +--
 9 files changed, 15 insertions(+), 22 deletions(-)
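The logging fixes here concern `{}` placeholders that do not match the argument list (e.g. "memory threshold: {} of bytes" passed two values). The sketch below uses a tiny stand-in formatter, not SLF4J itself, to show how a missing placeholder silently drops an argument:

```java
public class PlaceholderDemo {
    // Minimal stand-in for slf4j-style {} substitution (illustration only).
    static String format(String msg, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = msg.indexOf("{}", from)) >= 0) {
            sb.append(msg, from, at);
            sb.append(argIdx < args.length ? args[argIdx++] : "{}");
            from = at + 2;
        }
        sb.append(msg.substring(from));
        return sb.toString();
    }

    public static void main(String[] args) {
        // One {} but two arguments: the second value never appears.
        System.out.println(format("memory threshold: {} of bytes", 0.5, 1024L));
        // Matching placeholders render both values.
        System.out.println(format("memory threshold: {} of {} bytes", 0.5, 1024L));
    }
}
```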

diff --git a/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooLock.java b/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooLock.java
index db71de3..0468b83 100644
--- a/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooLock.java
+++ b/core/src/main/java/org/apache/accumulo/fate/zookeeper/ZooLock.java
@@ -399,9 +399,8 @@ public class ZooLock implements Watcher {
   } catch (Exception ex) {
 if (lock != null || asyncLock != null) {
   lockWatcher.unableToMonitorLockNode(ex);
-  log.error(
-  "Error resetting watch on ZooLock " + lock == null ? asyncLock : lock + " " + event,
-  ex);
+  log.error("Error resetting watch on ZooLock {} {}", lock != null ? lock : asyncLock,
+  event, ex);
 }
   }
 
diff --git a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java
index 2f0cd8b..1916784 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/HostRegexTableLoadBalancer.java
@@ -412,7 +412,7 @@ public class HostRegexTableLoadBalancer extends TableLoadBalancer implements Con
 }
   }
 } catch (TException e1) {
-  LOG.error("Error in OOB check getting tablets for table {} from server {}", tid,
+  LOG.error("Error in OOB check getting tablets for table {} from server {} {}", tid,
   e.getKey().host(), e);
 }
   }
diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java b/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
index 41f2540..d5fc10f 100644
--- a/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
+++ b/server/gc/src/main/java/org/apache/accumulo/gc/SimpleGarbageCollector.java
@@ -161,7 +161,7 @@ public class SimpleGarbageCollector implements Iface {
 log.info("time delay: {} milliseconds", gcDelay);
 log.info("safemode: {}", opts.safeMode);
 log.info("verbose: {}", opts.verbose);
-log.info("memory threshold: {} of bytes", CANDIDATE_MEMORY_PERCENTAGE,
+log.info("memory threshold: {} of {} bytes", CANDIDATE_MEMORY_PERCENTAGE,
 Runtime.getRuntime().maxMemory());
 log.info("delete threads: {}", getNumDeleteThreads());
   }
@@ -368,7 +368,7 @@ public class SimpleGarbageCollector implements Iface {
 // uses suffixes to compare delete entries, there is no danger
 // of deleting something that should not be deleted. Must not change value of delete
 // variable because thats whats stored in metadata table.
-log.debug("Volume replaced {} -> ", delete, switchedDelete);
+log.debug("Volume replaced {} -> {}", delete, switchedDelete);
 fullPath = fs.getFullPath(FileType.TABLE, switchedDelete);
   } else {
 fullPath = fs.getFullPath(FileType.TABLE, delete);
diff --git a/server/master/src/main/java/org/apache/accumulo/master/Master.java b/server/master/src/main/java/org/apache/accumulo/master/Master.java
index e366c04..1bd4c2b 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/Master.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/Master.java
@@ -1299,7 +1299,7 @@ public class Mas

[accumulo] branch master updated: Fix BulkFileIT (#971)

2019-02-19 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 7c88356  Fix BulkFileIT (#971)
7c88356 is described below

commit 7c883560d4c269a51eac1320f88dc647a98190cd
Author: Mike Walch 
AuthorDate: Tue Feb 19 18:10:11 2019 -0500

Fix BulkFileIT (#971)

* Added future get() calls
* Future should not return failures
---
 .../apache/accumulo/master/tableOps/bulkVer1/LoadFiles.java| 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
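The pattern behind this fix: an exception thrown inside a submitted task is captured by its `Future` and only surfaces when `Future.get()` is called, so loops that submit work and never call `get()` can fail silently. A standalone sketch with a hypothetical task (not the Accumulo load code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureGetDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        List<Future<Void>> results = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            final int n = i;
            results.add(executor.submit(() -> {
                // Simulated load task; an exception thrown here is held
                // by the Future and rethrown only from get().
                if (n < 0) throw new IllegalStateException("load failed");
                return null;
            }));
        }
        // Without these get() calls, a failed task would go unnoticed
        // and the surrounding loop would proceed as if it had succeeded.
        for (Future<Void> f : results) {
            f.get();
        }
        executor.shutdown();
        System.out.println("all loads completed");
    }
}
```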

diff --git a/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer1/LoadFiles.java b/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer1/LoadFiles.java
index 21557d1..c31b98d 100644
--- a/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer1/LoadFiles.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/tableOps/bulkVer1/LoadFiles.java
@@ -126,7 +126,7 @@ class LoadFiles extends MasterRepo {
 
 final int RETRIES = Math.max(1, conf.getCount(Property.MASTER_BULK_RETRIES));
 for (int attempt = 0; attempt < RETRIES && filesToLoad.size() > 0; attempt++) {
-  List>> results = new ArrayList<>();
+  List> results = new ArrayList<>();
 
   if (master.onlineTabletServers().size() == 0)
 log.warn("There are no tablet server to process bulk import, waiting (tid = " + tid + ")");
@@ -159,7 +159,6 @@ class LoadFiles extends MasterRepo {
   if (servers.length > 0) {
 for (final String file : filesToLoad) {
   results.add(executor.submit(() -> {
-List failures = new ArrayList<>();
 ClientService.Client client = null;
 HostAndPort server = null;
 try {
@@ -177,18 +176,19 @@ class LoadFiles extends MasterRepo {
   setTime);
   if (fail.isEmpty()) {
 loaded.add(file);
-  } else {
-failures.addAll(fail);
   }
 } catch (Exception ex) {
   log.error("rpc failed server:" + server + ", tid:" + tid + " " + 
ex);
 } finally {
   ThriftUtil.returnClient(client);
 }
-return failures;
+return null;
   }));
 }
   }
+  for (Future f : results) {
+f.get();
+  }
   filesToLoad.removeAll(loaded);
   if (filesToLoad.size() > 0) {
 log.debug("tid " + tid + " attempt " + (attempt + 1) + " " + 
sampleList(filesToLoad, 10)



[accumulo-maven-plugin] branch master updated: Fix #2 - Maven plugin uses non public Accumulo API (#3)

2019-02-17 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-maven-plugin.git


The following commit(s) were added to refs/heads/master by this push:
 new cc3c8f8  Fix #2 - Maven plugin uses non public Accumulo API (#3)
cc3c8f8 is described below

commit cc3c8f83c49a8308d29f47056124243c46ffef13
Author: Mike Walch 
AuthorDate: Sun Feb 17 11:19:23 2019 -0500

Fix #2 - Maven plugin uses non public Accumulo API (#3)
---
 contrib/import-control.xml | 29 ++
 pom.xml|  6 -
 .../maven/plugin/AbstractAccumuloMojo.java | 13 +++---
 .../apache/accumulo/maven/plugin/StartMojo.java| 14 ---
 .../org/apache/accumulo/maven/plugin/StopMojo.java |  3 +--
 5 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/contrib/import-control.xml b/contrib/import-control.xml
new file mode 100644
index 000..5cfd06c
--- /dev/null
+++ b/contrib/import-control.xml
@@ -0,0 +1,29 @@
+
+
+https://checkstyle.org/dtds/import_control_1_4.dtd;>
+
+
+
+
+
+
+
+
+
+
+
diff --git a/pom.xml b/pom.xml
index 5c14f61..4d48065 100644
--- a/pom.xml
+++ b/pom.xml
@@ -89,7 +89,7 @@
 https://travis-ci.org/apache/accumulo-maven-plugin
   
   
-2.0.0-alpha-2
+2.0.0-SNAPSHOT
 
contrib/Eclipse-Accumulo-Codestyle.xml
 
 
@@ -395,6 +395,10 @@
 
 
 
+
+
+  
+
   
 
   
diff --git 
a/src/main/java/org/apache/accumulo/maven/plugin/AbstractAccumuloMojo.java 
b/src/main/java/org/apache/accumulo/maven/plugin/AbstractAccumuloMojo.java
index 5e192a7..c5f140b 100644
--- a/src/main/java/org/apache/accumulo/maven/plugin/AbstractAccumuloMojo.java
+++ b/src/main/java/org/apache/accumulo/maven/plugin/AbstractAccumuloMojo.java
@@ -16,12 +16,10 @@
  */
 package org.apache.accumulo.maven.plugin;
 
-import java.io.File;
 import java.net.MalformedURLException;
 import java.util.ArrayList;
-import java.util.Arrays;
 
-import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.minicluster.MiniAccumuloConfig;
 import org.apache.maven.artifact.Artifact;
 import org.apache.maven.plugin.AbstractMojo;
 import org.apache.maven.plugins.annotations.Parameter;
@@ -42,17 +40,14 @@ public abstract class AbstractAccumuloMojo extends 
AbstractMojo {
 return skip;
   }
 
-  void configureMiniClasspath(MiniAccumuloConfigImpl macConfig, String 
miniClasspath)
-  throws MalformedURLException {
+  void configureMiniClasspath(MiniAccumuloConfig macConfig) throws 
MalformedURLException {
 ArrayList<String> classpathItems = new ArrayList<>();
-if (miniClasspath == null && project != null) {
+if (project != null) {
   classpathItems.add(project.getBuild().getOutputDirectory());
   classpathItems.add(project.getBuild().getTestOutputDirectory());
   for (Artifact artifact : project.getArtifacts()) {
 classpathItems.add(artifact.getFile().toURI().toURL().toString());
   }
-} else if (miniClasspath != null && !miniClasspath.isEmpty()) {
-  
classpathItems.addAll(Arrays.asList(miniClasspath.split(File.pathSeparator)));
 }
 
 // Hack to prevent sisu-guava, a maven 3.0.4 dependency, from effecting 
normal accumulo
@@ -65,6 +60,6 @@ public abstract class AbstractAccumuloMojo extends 
AbstractMojo {
 if (sisuGuava != null)
   classpathItems.remove(sisuGuava);
 
-macConfig.setClasspathItems(classpathItems.toArray(new 
String[classpathItems.size()]));
+macConfig.setClasspath(classpathItems.toArray(new 
String[classpathItems.size()]));
   }
 }
diff --git a/src/main/java/org/apache/accumulo/maven/plugin/StartMojo.java 
b/src/main/java/org/apache/accumulo/maven/plugin/StartMojo.java
index 56d8ddf..c8e872d 100644
--- a/src/main/java/org/apache/accumulo/maven/plugin/StartMojo.java
+++ b/src/main/java/org/apache/accumulo/maven/plugin/StartMojo.java
@@ -23,8 +23,7 @@ import java.util.HashSet;
 import java.util.Set;
 
 import org.apache.accumulo.minicluster.MiniAccumuloCluster;
-import org.apache.accumulo.miniclusterImpl.MiniAccumuloClusterImpl;
-import org.apache.accumulo.miniclusterImpl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.minicluster.MiniAccumuloConfig;
 import org.apache.commons.io.FileUtils;
 import org.apache.maven.plugin.MojoExecutionException;
 import org.apache.maven.plugins.annotations.LifecyclePhase;
@@ -57,8 +56,7 @@ public class StartMojo extends AbstractAccumuloMojo {
   required = true)
   private int zooKeeperPort;
 
-  static Set<MiniAccumuloClusterImpl> runningClusters = Collections
-  .synchronizedSet(new HashSet<>());
+  static Set<MiniAccumuloCluster> runningClusters = 
Collections.synchronizedSet(new HashSet<>());
 
  @SuppressFBWarnings(value = "
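The configureMiniClasspath change above drops the miniClasspath override parameter, so the classpath is now always assembled from the project's output directories plus every project artifact and handed to the varargs setter as an array. A simplified, hedged sketch of that assembly (plain strings and files stand in for the Maven project model):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class MiniClasspathSketch {

    // Assemble classpath items the way AbstractAccumuloMojo now does:
    // build output dirs first, then each artifact, then convert to an
    // array for a varargs call like macConfig.setClasspath(items).
    static String[] buildClasspath(String outputDir, String testOutputDir,
            List<File> artifacts) {
        ArrayList<String> classpathItems = new ArrayList<>();
        classpathItems.add(outputDir);
        classpathItems.add(testOutputDir);
        for (File artifact : artifacts) {
            classpathItems.add(artifact.getAbsolutePath());
        }
        return classpathItems.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] cp = buildClasspath("target/classes", "target/test-classes",
                List.of(new File("guava.jar")));
        System.out.println(cp.length); // prints 3
    }
}
```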

[accumulo] branch master updated: Fix #910 - Add setClasspath() to MiniAccumuloConfig (#963)

2019-02-15 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 082ea3a  Fix #910 - Add setClasspath() to MiniAccumuloConfig (#963)
082ea3a is described below

commit 082ea3a4475f74e18e9fb07988f33c29d712fe23
Author: Mike Walch 
AuthorDate: Fri Feb 15 18:07:54 2019 -0500

Fix #910 - Add setClasspath() to MiniAccumuloConfig (#963)

* This allows accumulo-maven-plugin to only use public API
---
 .../org/apache/accumulo/minicluster/MiniAccumuloConfig.java | 13 +
 1 file changed, 13 insertions(+)

diff --git 
a/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
 
b/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
index 04e0a72..4019015 100644
--- 
a/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
+++ 
b/minicluster/src/main/java/org/apache/accumulo/minicluster/MiniAccumuloConfig.java
@@ -255,4 +255,17 @@ public class MiniAccumuloConfig {
 impl.setNativeLibPaths(nativePathItems);
 return this;
   }
+
+  /**
+   * Sets the classpath elements to use when spawning processes.
+   *
+   * @param classpathItems
+   *  the classpathItems to set
+   * @return the current instance
+   * @since 2.0.0
+   */
+  public MiniAccumuloConfig setClasspath(String... classpathItems) {
+impl.setClasspathItems(classpathItems);
+return this;
+  }
 }



[accumulo] branch master updated: Fix minor code quality issues (#960)

2019-02-14 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new c966961  Fix minor code quality issues (#960)
c966961 is described below

commit c9669613a0b463e449a07dea01d8e24ca7995e5b
Author: Mike Walch 
AuthorDate: Thu Feb 14 14:04:35 2019 -0500

Fix minor code quality issues (#960)

* Use Objects.requireNonNull rather than nonNull
* Close stream
* Set clientProps correctly if using mini in Proxy
---
 .../accumulo/core/conf/ClientConfigGenerate.java| 21 +++--
 .../main/java/org/apache/accumulo/proxy/Proxy.java  | 15 +--
 .../server/constraints/MetadataConstraints.java |  2 +-
 .../main/java/org/apache/accumulo/shell/Shell.java  |  2 +-
 4 files changed, 18 insertions(+), 22 deletions(-)

diff --git 
a/core/src/main/java/org/apache/accumulo/core/conf/ClientConfigGenerate.java 
b/core/src/main/java/org/apache/accumulo/core/conf/ClientConfigGenerate.java
index ebdfb03..55e996f 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/ClientConfigGenerate.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/ClientConfigGenerate.java
@@ -102,7 +102,7 @@ class ClientConfigGenerate {
 
 @Override
 void property(ClientProperty prop) {
-  Objects.nonNull(prop);
+  Objects.requireNonNull(prop);
   doc.print("|  "
   + prop.getKey() + " | ");
   String defaultValue = sanitize(prop.getDefaultValue()).trim();
@@ -163,7 +163,7 @@ class ClientConfigGenerate {
   private final TreeMap<String,ClientProperty> sortedProps = new TreeMap<>();
 
   private ClientConfigGenerate(PrintStream doc) {
-Objects.nonNull(doc);
+Objects.requireNonNull(doc);
 this.doc = doc;
 for (ClientProperty prop : ClientProperty.values()) {
   this.sortedProps.put(prop.getKey(), prop);
@@ -190,14 +190,15 @@ class ClientConfigGenerate {
   public static void main(String[] args)
   throws FileNotFoundException, UnsupportedEncodingException {
 if (args.length == 2) {
-  ClientConfigGenerate clientConfigGenerate = new ClientConfigGenerate(
-  new PrintStream(args[1], UTF_8.name()));
-  if (args[0].equals("--generate-markdown")) {
-clientConfigGenerate.generateMarkdown();
-return;
-  } else if (args[0].equals("--generate-config")) {
-clientConfigGenerate.generateConfigFile();
-return;
+  try (PrintStream stream = new PrintStream(args[1], UTF_8.name())) {
+ClientConfigGenerate clientConfigGenerate = new 
ClientConfigGenerate(stream);
+if (args[0].equals("--generate-markdown")) {
+  clientConfigGenerate.generateMarkdown();
+  return;
+} else if (args[0].equals("--generate-config")) {
+  clientConfigGenerate.generateConfigFile();
+  return;
+}
   }
 }
 throw new IllegalArgumentException("Usage: " + 
ClientConfigGenerate.class.getName()
diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java 
b/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java
index 27da6a1..d5ef5cb 100644
--- a/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java
+++ b/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java
@@ -136,12 +136,6 @@ public class Proxy implements KeywordExecutable {
 boolean useMini = Boolean
 .parseBoolean(proxyProps.getProperty(USE_MINI_ACCUMULO_KEY, 
USE_MINI_ACCUMULO_DEFAULT));
 
-if (!useMini && clientProps == null) {
-  System.err.println("The '-c' option must be set with an 
accumulo-client.properties file or"
-  + " proxy.properties must contain either useMiniAccumulo=true");
-  System.exit(1);
-}
-
 if (!proxyProps.containsKey("port")) {
   System.err.println("No port property");
   System.exit(1);
@@ -152,10 +146,7 @@ public class Proxy implements KeywordExecutable {
   final File folder = Files.createTempDirectory(System.currentTimeMillis() 
+ "").toFile();
   final MiniAccumuloCluster accumulo = new MiniAccumuloCluster(folder, 
"secret");
   accumulo.start();
-  clientProps.setProperty(ClientProperty.INSTANCE_NAME.getKey(),
-  accumulo.getConfig().getInstanceName());
-  clientProps.setProperty(ClientProperty.INSTANCE_ZOOKEEPERS.getKey(),
-  accumulo.getZooKeepers());
+  clientProps = accumulo.getClientProperties();
   Runtime.getRuntime().addShutdownHook(new Thread() {
 @Override
 public void start() {
@@ -169,6 +160,10 @@ public class Proxy implements KeywordExecutable {
   }
 }
   });
+} else if (clientProps == null) {
+  System.err.println("The '-c' option must be set with an 
accumulo-client.properties file or"
+
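The first hunks in this commit replace Objects.nonNull with Objects.requireNonNull. The difference matters: nonNull is a predicate that returns a boolean and never throws, so calling it for its side effect is a no-op, while requireNonNull actually enforces the null check. A small demonstration:

```java
import java.util.Objects;

public class NonNullDemo {

    // The bug fixed above: Objects.nonNull only returns false for null,
    // and discarding that result means nothing is checked at all.
    static boolean guardWithNonNull(Object o) {
        Objects.nonNull(o); // result discarded; no effect
        return true;        // reached even when o == null
    }

    // The fix: requireNonNull throws NullPointerException on null input.
    static boolean guardWithRequireNonNull(Object o) {
        Objects.requireNonNull(o);
        return true;
    }

    public static void main(String[] args) {
        System.out.println(guardWithNonNull(null)); // prints true
        try {
            guardWithRequireNonNull(null);
        } catch (NullPointerException expected) {
            System.out.println("rejected null"); // prints rejected null
        }
    }
}
```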

[accumulo] branch master updated: Fixes #927 - Added 'Clear Logs' link on Recent Logs page (#951)

2019-02-14 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 276e73c  Fixes #927 - Added 'Clear Logs' link on Recent Logs page  
(#951)
276e73c is described below

commit 276e73c1eb92c466673ad01a10445b6c05847715
Author: Jeffrey Zeiberg 
AuthorDate: Thu Feb 14 13:02:56 2019 -0500

Fixes #927 - Added 'Clear Logs' link on Recent Logs page  (#951)
---
 .../src/main/resources/org/apache/accumulo/monitor/templates/log.ftl   | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/server/monitor/src/main/resources/org/apache/accumulo/monitor/templates/log.ftl
 
b/server/monitor/src/main/resources/org/apache/accumulo/monitor/templates/log.ftl
index c393786..85f105a 100644
--- 
a/server/monitor/src/main/resources/org/apache/accumulo/monitor/templates/log.ftl
+++ 
b/server/monitor/src/main/resources/org/apache/accumulo/monitor/templates/log.ftl
@@ -127,3 +127,6 @@
 
 
   
+  
+   Clear Logs
+  



[accumulo] branch master updated: Fix #952 - Support running Accumulo using Java 11 (#956)

2019-02-12 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 33a789f  Fix #952 - Support running Accumulo using Java 11 (#956)
33a789f is described below

commit 33a789fb38988fa60865dccac047bda4e032b67a
Author: Mike Walch 
AuthorDate: Tue Feb 12 17:58:29 2019 -0500

Fix #952 - Support running Accumulo using Java 11 (#956)

* Accumulo can now be run with Java 11. More work is needed to build using 
Java 11.
* Added dependencies that are no longer included by JVM in Java 11
---
 assemble/pom.xml   | 16 
 assemble/src/main/assemblies/component.xml |  4 
 pom.xml| 18 +-
 3 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/assemble/pom.xml b/assemble/pom.xml
index 561c80c..83a6291 100644
--- a/assemble/pom.xml
+++ b/assemble/pom.xml
@@ -80,6 +80,14 @@
   true
 
 
+  com.sun.xml.bind
+  jaxb-core
+
+
+  com.sun.xml.bind
+  jaxb-impl
+
+
   commons-cli
   commons-cli
 
@@ -109,6 +117,10 @@
   commons-logging
 
 
+  javax.activation
+  activation
+
+
   javax.annotation
   javax.annotation-api
 
@@ -130,6 +142,10 @@
   javax.ws.rs-api
 
 
+  javax.xml.bind
+  jaxb-api
+
+
   jline
   jline
   true
diff --git a/assemble/src/main/assemblies/component.xml 
b/assemble/src/main/assemblies/component.xml
index 5118795..d799ce0 100644
--- a/assemble/src/main/assemblies/component.xml
+++ b/assemble/src/main/assemblies/component.xml
@@ -41,6 +41,8 @@
 com.google.code.gson:gson
 com.google.guava:guava
 com.google.protobuf:protobuf-java
+com.sun.xml.bind:jaxb-core
+com.sun.xml.bind:jaxb-impl
 commons-cli:commons-cli
 commons-codec:commons-codec
 commons-collections:commons-collections
@@ -48,11 +50,13 @@
 commons-io:commons-io
 commons-lang:commons-lang
 commons-logging:commons-logging
+javax.activation:activation
 javax.annotation:javax.annotation-api
 javax.el:javax.el-api
 javax.servlet:javax.servlet-api
 javax.validation:validation-api
 javax.ws.rs:javax.ws.rs-api
+javax.xml.bind:jaxb-api
 jline:jline
 log4j:log4j
 org.apache.commons:commons-math3
diff --git a/pom.xml b/pom.xml
index cffc3bb..6f1f1d0 100644
--- a/pom.xml
+++ b/pom.xml
@@ -130,6 +130,7 @@
 false
 2.9.8
 2.2.4
+2.3.0
 2.27
 9.4.11.v20180605
 1.8
@@ -225,6 +226,16 @@
 2.5.0
   
   
+com.sun.xml.bind
+jaxb-core
+${jaxb.version}
+  
+  
+com.sun.xml.bind
+jaxb-impl
+${jaxb.version}
+  
+  
 commons-cli
 commons-cli
 1.4
@@ -260,6 +271,11 @@
 1.2
   
   
+javax.activation
+activation
+1.1.1
+  
+  
 javax.annotation
 javax.annotation-api
 1.2
@@ -292,7 +308,7 @@
   
 javax.xml.bind
 jaxb-api
-2.2.12
+${jaxb.version}
   
   
 jline



[accumulo] branch master updated: Simplify how classpath is built for MiniAccumuloCluster (#945)

2019-02-09 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 90d3b31  Simplify how classpath is built for MiniAccumuloCluster (#945)
90d3b31 is described below

commit 90d3b31bfde26df89608193bf6d54797c5dd6a5e
Author: Mike Walch 
AuthorDate: Sat Feb 9 11:02:01 2019 -0500

Simplify how classpath is built for MiniAccumuloCluster (#945)
---
 minicluster/pom.xml|   4 -
 .../miniclusterImpl/MiniAccumuloClusterImpl.java   | 102 -
 2 files changed, 16 insertions(+), 90 deletions(-)

diff --git a/minicluster/pom.xml b/minicluster/pom.xml
index bf73807..a75010c 100644
--- a/minicluster/pom.xml
+++ b/minicluster/pom.xml
@@ -88,10 +88,6 @@
   accumulo-tserver
 
 
-  org.apache.commons
-  commons-vfs2
-
-
   org.apache.hadoop
   hadoop-client-api
 
diff --git 
a/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
 
b/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
index e7bda12..339eeda 100644
--- 
a/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
+++ 
b/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
@@ -28,9 +28,6 @@ import java.io.UncheckedIOException;
 import java.net.InetSocketAddress;
 import java.net.Socket;
 import java.net.URI;
-import java.net.URISyntaxException;
-import java.net.URL;
-import java.net.URLClassLoader;
 import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.util.ArrayList;
@@ -90,8 +87,6 @@ import 
org.apache.accumulo.server.zookeeper.ZooReaderWriterFactory;
 import org.apache.accumulo.start.Main;
 import org.apache.accumulo.start.classloader.vfs.MiniDFSUtil;
 import org.apache.commons.io.IOUtils;
-import org.apache.commons.vfs2.FileObject;
-import org.apache.commons.vfs2.impl.VFSClassLoader;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.fs.FileSystem;
@@ -156,91 +151,26 @@ public class MiniAccumuloClusterImpl implements 
AccumuloCluster {
 return _exec(clazz, jvmArgs2, args);
   }
 
-  private boolean containsConfigFile(File f) {
-if (!f.isDirectory()) {
-  return false;
-} else {
-  File[] files = f.listFiles(pathname -> 
pathname.getName().endsWith("site.xml")
-  || pathname.getName().equals("accumulo.properties"));
-  return files != null && files.length > 0;
-}
-  }
-
-  @SuppressFBWarnings(value = "PATH_TRAVERSAL_IN",
-  justification = "mini runs in the same security context as user 
providing the url")
-  private void append(StringBuilder classpathBuilder, URL url) throws 
URISyntaxException {
-File file = new File(url.toURI());
-// do not include dirs containing hadoop or accumulo config files
-if (!containsConfigFile(file))
-  
classpathBuilder.append(File.pathSeparator).append(file.getAbsolutePath());
-  }
-
-  private String getClasspath() throws IOException {
-
-try {
-  ArrayList<ClassLoader> classloaders = new ArrayList<>();
-
-  ClassLoader cl = this.getClass().getClassLoader();
-
-  while (cl != null) {
-classloaders.add(cl);
-cl = cl.getParent();
-  }
-
-  Collections.reverse(classloaders);
-
-  StringBuilder classpathBuilder = new StringBuilder();
-  classpathBuilder.append(config.getConfDir().getAbsolutePath());
-
-  if (config.getHadoopConfDir() != null)
-classpathBuilder.append(File.pathSeparator)
-.append(config.getHadoopConfDir().getAbsolutePath());
-
-  if (config.getClasspathItems() == null) {
+  private String getClasspath() {
+StringBuilder classpathBuilder = new StringBuilder();
+classpathBuilder.append(config.getConfDir().getAbsolutePath());
 
-// assume 0 is the system classloader and skip it
-for (int i = 1; i < classloaders.size(); i++) {
-  ClassLoader classLoader = classloaders.get(i);
-
-  if (classLoader instanceof URLClassLoader) {
-
-for (URL u : ((URLClassLoader) classLoader).getURLs()) {
-  append(classpathBuilder, u);
-}
-
-  } else if (classLoader instanceof VFSClassLoader) {
+if (config.getHadoopConfDir() != null)
+  classpathBuilder.append(File.pathSeparator)
+  .append(config.getHadoopConfDir().getAbsolutePath());
 
-VFSClassLoader vcl = (VFSClassLoader) classLoader;
-for (FileObject f : vcl.getFileObjects()) {
-  append(classpathBuilder, f.getURL());
-}
-  } else {
-if (classLoader.getClass().getName()
-.equals("jdk.inter

[accumulo] branch master updated (27ca87b -> 218de58)

2019-02-07 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git.


from 27ca87b  Move AccumuloClient.getInstanceID to InstanceOperations (#926)
 add 61c4491  Support running MiniAccumuloCluster using Java 11 (#924)
 new 218de58  Merge remote-tracking branch 'upstream/1.9'

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../miniclusterImpl/MiniAccumuloClusterImpl.java | 16 ++--
 1 file changed, 14 insertions(+), 2 deletions(-)



[accumulo] 01/01: Merge remote-tracking branch 'upstream/1.9'

2019-02-07 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git

commit 218de58b52a3eb43201d46e34f639c6a6a7dd58a
Merge: 27ca87b 61c4491
Author: Mike Walch 
AuthorDate: Thu Feb 7 17:33:00 2019 -0500

Merge remote-tracking branch 'upstream/1.9'

 .../miniclusterImpl/MiniAccumuloClusterImpl.java | 16 ++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --cc 
minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
index 5037f35,000..e7bda12
mode 100644,00..100644
--- 
a/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
+++ 
b/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
@@@ -1,859 -1,0 +1,871 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.miniclusterImpl;
 +
 +import static java.nio.charset.StandardCharsets.UTF_8;
 +import static 
org.apache.accumulo.fate.util.UtilWaitThread.sleepUninterruptibly;
 +
 +import java.io.File;
 +import java.io.FileInputStream;
 +import java.io.FileWriter;
 +import java.io.IOException;
 +import java.io.InputStream;
 +import java.io.UncheckedIOException;
 +import java.net.InetSocketAddress;
 +import java.net.Socket;
 +import java.net.URI;
 +import java.net.URISyntaxException;
 +import java.net.URL;
 +import java.net.URLClassLoader;
 +import java.nio.charset.StandardCharsets;
 +import java.nio.file.Files;
 +import java.util.ArrayList;
 +import java.util.Arrays;
 +import java.util.Collection;
 +import java.util.Collections;
 +import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.LinkedList;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Properties;
 +import java.util.Set;
 +import java.util.concurrent.ExecutionException;
 +import java.util.concurrent.ExecutorService;
 +import java.util.concurrent.Executors;
 +import java.util.concurrent.FutureTask;
 +import java.util.concurrent.TimeUnit;
 +import java.util.concurrent.TimeoutException;
 +
 +import org.apache.accumulo.cluster.AccumuloCluster;
 +import org.apache.accumulo.core.Constants;
 +import org.apache.accumulo.core.client.Accumulo;
 +import org.apache.accumulo.core.client.AccumuloClient;
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 +import org.apache.accumulo.core.clientImpl.ClientContext;
 +import org.apache.accumulo.core.clientImpl.MasterClient;
 +import 
org.apache.accumulo.core.clientImpl.thrift.ThriftNotActiveServiceException;
 +import org.apache.accumulo.core.clientImpl.thrift.ThriftSecurityException;
 +import org.apache.accumulo.core.conf.AccumuloConfiguration;
 +import org.apache.accumulo.core.conf.ClientProperty;
 +import org.apache.accumulo.core.conf.ConfigurationCopy;
 +import org.apache.accumulo.core.conf.DefaultConfiguration;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.conf.SiteConfiguration;
 +import org.apache.accumulo.core.master.thrift.MasterClientService;
 +import org.apache.accumulo.core.master.thrift.MasterGoalState;
 +import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
 +import org.apache.accumulo.core.trace.Tracer;
 +import org.apache.accumulo.core.util.Pair;
 +import org.apache.accumulo.fate.zookeeper.IZooReaderWriter;
 +import org.apache.accumulo.fate.zookeeper.ZooUtil;
 +import org.apache.accumulo.master.state.SetGoalState;
 +import org.apache.accumulo.minicluster.MiniAccumuloCluster;
 +import org.apache.accumulo.minicluster.ServerType;
 +import org.apache.accumulo.server.ServerContext;
 +import org.apache.accumulo.server.ServerUtil;
 +import org.apache.accumulo.server.fs.VolumeManager;
 +import org.apache.accumulo.server.fs.VolumeManagerImpl;
 +import org.apache.accumulo.server.init.Initialize;
 +import org.apache.accumulo.server.util.AccumuloStatus;
 +import org.apache.accumulo.server.util

[accumulo] branch 1.9 updated: Support running MiniAccumuloCluster using Java 11 (#924)

2019-02-07 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch 1.9
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/1.9 by this push:
 new 61c4491  Support running MiniAccumuloCluster using Java 11 (#924)
61c4491 is described below

commit 61c44919de22c92c98287e810467fc0df56bd87e
Author: Mike Walch 
AuthorDate: Thu Feb 7 17:19:50 2019 -0500

Support running MiniAccumuloCluster using Java 11 (#924)
---
 .../minicluster/impl/MiniAccumuloClusterImpl.java| 16 ++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git 
a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
 
b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
index 81bdebd..866038b 100644
--- 
a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
+++ 
b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniAccumuloClusterImpl.java
@@ -268,8 +268,20 @@ public class MiniAccumuloClusterImpl implements 
AccumuloCluster {
   append(classpathBuilder, f.getURL());
 }
   } else {
-throw new IllegalArgumentException(
-"Unknown classloader type : " + 
classLoader.getClass().getName());
+if (classLoader.getClass().getName()
+.equals("jdk.internal.loader.ClassLoaders$AppClassLoader")) {
+  log.debug("Detected Java 11 classloader: {}", 
classLoader.getClass().getName());
+} else {
+  log.debug("Detected unknown classloader: {}", 
classLoader.getClass().getName());
+}
+String javaClassPath = System.getProperty("java.class.path");
+if (javaClassPath == null) {
+  throw new IllegalStateException("java.class.path is not set");
+} else {
+  log.debug("Using classpath set by java.class.path system 
property: {}",
+  javaClassPath);
+}
+classpathBuilder.append(File.pathSeparator).append(javaClassPath);
   }
 }
   } else {
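The fix above stops treating an unrecognized classloader as fatal: on Java 9+ the application classloader is jdk.internal.loader.ClassLoaders$AppClassLoader rather than a URLClassLoader, so the cluster falls back to the java.class.path system property. A condensed sketch of that logic (a standalone method rather than the real MiniAccumuloClusterImpl):

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class ClasspathFallback {

    // A URLClassLoader can be walked for its URLs; anything else (the
    // normal case on Java 9+) falls back to the JVM's own classpath
    // instead of throwing IllegalArgumentException as the old code did.
    static String resolveClasspath(ClassLoader classLoader) {
        if (classLoader instanceof URLClassLoader) {
            StringBuilder sb = new StringBuilder();
            for (URL u : ((URLClassLoader) classLoader).getURLs()) {
                sb.append(u.getPath()).append(File.pathSeparator);
            }
            return sb.toString();
        }
        String javaClassPath = System.getProperty("java.class.path");
        if (javaClassPath == null) {
            throw new IllegalStateException("java.class.path is not set");
        }
        return javaClassPath;
    }

    public static void main(String[] args) {
        // Non-empty on any normal JVM launch, whichever branch is taken.
        System.out.println(
            resolveClasspath(ClasspathFallback.class.getClassLoader()).isEmpty());
    }
}
```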



[accumulo] branch master updated: Move AccumuloClient.getInstanceID to InstanceOperations (#926)

2019-02-07 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 27ca87b  Move AccumuloClient.getInstanceID to InstanceOperations (#926)
27ca87b is described below

commit 27ca87b1e48f607a8e38c9d996fef613d3803e1f
Author: Jeffrey Zeiberg 
AuthorDate: Thu Feb 7 12:37:23 2019 -0500

Move AccumuloClient.getInstanceID to InstanceOperations (#926)
---
 .../org/apache/accumulo/core/client/AccumuloClient.java  |  7 ---
 .../accumulo/core/client/admin/InstanceOperations.java   |  8 
 .../apache/accumulo/core/clientImpl/ClientContext.java   |  1 -
 .../accumulo/core/clientImpl/InstanceOperationsImpl.java |  6 ++
 .../replication/DistributedWorkQueueWorkAssigner.java|  4 +++-
 .../master/replication/SequentialWorkAssigner.java   |  2 +-
 .../master/replication/UnorderedWorkAssigner.java|  2 +-
 .../org/apache/accumulo/master/state/MergeStats.java |  4 ++--
 .../master/replication/SequentialWorkAssignerTest.java   |  7 +--
 .../master/replication/UnorderedWorkAssignerTest.java|  8 ++--
 shell/src/main/java/org/apache/accumulo/shell/Shell.java |  2 +-
 .../apache/accumulo/test/BadDeleteMarkersCreatedIT.java  |  3 ++-
 .../java/org/apache/accumulo/test/ExistingMacIT.java |  3 ++-
 .../test/ThriftServerBindsBeforeZooKeeperLockIT.java |  4 ++--
 .../src/main/java/org/apache/accumulo/test/VolumeIT.java |  8 +---
 .../accumulo/test/functional/AccumuloClientIT.java   |  2 +-
 .../apache/accumulo/test/functional/BackupMasterIT.java  |  2 +-
 .../functional/BalanceInPresenceOfOfflineTableIT.java|  2 +-
 .../accumulo/test/functional/DynamicThreadPoolsIT.java   |  3 ++-
 .../accumulo/test/functional/GarbageCollectorIT.java |  3 ++-
 .../org/apache/accumulo/test/functional/ReadWriteIT.java |  4 +++-
 .../org/apache/accumulo/test/functional/RestartIT.java   |  6 +++---
 .../test/functional/SimpleBalancerFairnessIT.java|  3 ++-
 .../accumulo/test/functional/TableChangeStateIT.java |  6 --
 .../test/functional/TabletStateChangeIteratorIT.java |  4 ++--
 .../test/replication/MultiTserverReplicationIT.java  | 16 +---
 .../apache/accumulo/test/replication/ReplicationIT.java  |  3 ++-
 27 files changed, 76 insertions(+), 47 deletions(-)

diff --git 
a/core/src/main/java/org/apache/accumulo/core/client/AccumuloClient.java 
b/core/src/main/java/org/apache/accumulo/core/client/AccumuloClient.java
index d3cde65..d23b1e6 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/AccumuloClient.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/AccumuloClient.java
@@ -265,13 +265,6 @@ public interface AccumuloClient extends AutoCloseable {
   String whoami();
 
   /**
-   * Returns a unique string that identifies this instance of accumulo.
-   *
-   * @return a UUID
-   */
-  String getInstanceID();
-
-  /**
* Retrieves a TableOperations object to perform table functions, such as 
create and delete.
*
* @return an object to manipulate tables
diff --git 
a/core/src/main/java/org/apache/accumulo/core/client/admin/InstanceOperations.java
 
b/core/src/main/java/org/apache/accumulo/core/client/admin/InstanceOperations.java
index b9be310..d17b9d7 100644
--- 
a/core/src/main/java/org/apache/accumulo/core/client/admin/InstanceOperations.java
+++ 
b/core/src/main/java/org/apache/accumulo/core/client/admin/InstanceOperations.java
@@ -130,4 +130,12 @@ public interface InstanceOperations {
* @since 1.7.0
*/
   void waitForBalance() throws AccumuloException;
+
+  /**
+   * Returns a unique string that identifies this instance of accumulo.
+   *
+   * @return a String
+   * @since 2.0.0
+   */
+  String getInstanceID();
 }
diff --git 
a/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientContext.java 
b/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientContext.java
index 6d65fd9..6294e6c 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientContext.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/ClientContext.java
@@ -387,7 +387,6 @@ public class ClientContext implements AccumuloClient {
*
* @return a UUID
*/
-  @Override
   public String getInstanceID() {
 ensureOpen();
 final String instanceName = info.getInstanceName();
diff --git 
a/core/src/main/java/org/apache/accumulo/core/clientImpl/InstanceOperationsImpl.java
 
b/core/src/main/java/org/apache/accumulo/core/clientImpl/InstanceOperationsImpl.java
index 5813f27..4713578 100644
--- 
a/core/src/main/java/org/apache/accumulo/core/clientImpl/InstanceOperationsImpl.java
+++ 
b/core/src/main/java/org/apache/accumulo/core/clientImpl/InstanceOperationsImpl.java
@@ -232,4 +232,10 @@ public class InstanceOperationsImpl implements 
InstanceOperations {
 }
 return null;
   }
+
+  @Override
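The many call-site changes in this commit's diffstat all follow one shape: getInstanceID() moves off the client onto InstanceOperations, so callers now go through the client's instance-operations accessor (assumed here to be the usual client.instanceOperations()). A self-contained sketch with local stand-in interfaces, not the real Accumulo types:

```java
public class ApiMoveSketch {

    // Stand-in for org.apache.accumulo.core.client.admin.InstanceOperations,
    // which now owns getInstanceID().
    interface InstanceOperations {
        String getInstanceID();
    }

    // Stand-in for AccumuloClient after the move: no getInstanceID() of its
    // own, only the accessor to InstanceOperations.
    interface AccumuloClient {
        InstanceOperations instanceOperations();
    }

    // New call shape: client.instanceOperations().getInstanceID()
    // (previously client.getInstanceID()).
    static String instanceIdOf(AccumuloClient client) {
        return client.instanceOperations().getInstanceID();
    }

    public static void main(String[] args) {
        AccumuloClient client = () -> () -> "3a6b-demo"; // fake instance id
        System.out.println(instanceIdOf(client)); // prints 3a6b-demo
    }
}
```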

[accumulo] branch master updated: #939 - use default methods (#940)

2019-02-06 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 234df09  #939 - use default methods (#940)
234df09 is described below

commit 234df09997454fe6519136c3ce00bb17956b1458
Author: Mike Walch 
AuthorDate: Wed Feb 6 18:34:00 2019 -0500

#939 - use default methods (#940)
---
 .../java/org/apache/accumulo/core/client/ScannerBase.java | 12 ++--
 .../apache/accumulo/core/clientImpl/ScannerOptions.java   | 15 ---
 2 files changed, 10 insertions(+), 17 deletions(-)

diff --git 
a/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java 
b/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
index a490f16..328c066 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
@@ -19,6 +19,7 @@ package org.apache.accumulo.core.client;
 import java.util.Iterator;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Objects;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.IteratorSetting.Column;
@@ -107,7 +108,10 @@ public interface ScannerBase extends 
Iterable>, AutoCloseable {
*  the column family to be fetched
* @since 2.0.0
*/
-  void fetchColumnFamily(CharSequence colFam);
+  default void fetchColumnFamily(CharSequence colFam) {
+Objects.requireNonNull(colFam);
+fetchColumnFamily(new Text(colFam.toString()));
+  }
 
   /**
* Adds a column to the list of columns that will be fetched by this 
scanner. The column is
@@ -152,7 +156,11 @@ public interface ScannerBase extends 
Iterable>, AutoCloseable {
*  the column qualifier of the column to be fetched
* @since 2.0.0
*/
-  void fetchColumn(CharSequence colFam, CharSequence colQual);
+  default void fetchColumn(CharSequence colFam, CharSequence colQual) {
+Objects.requireNonNull(colFam);
+Objects.requireNonNull(colQual);
+fetchColumn(new Text(colFam.toString()), new Text(colQual.toString()));
+  }
 
   /**
* Adds a column to the list of columns that will be fetched by this scanner.
diff --git 
a/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java 
b/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
index a9d45f2..71c7900 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
@@ -140,13 +140,6 @@ public class ScannerOptions implements ScannerBase {
   }
 
   @Override
-  public void fetchColumnFamily(CharSequence colFam) {
-checkArgument(colFam != null, "colFam is null");
-Column c = new Column(colFam.toString().getBytes(), null, null);
-fetchedColumns.add(c);
-  }
-
-  @Override
   public synchronized void fetchColumn(Text colFam, Text colQual) {
 checkArgument(colFam != null, "colFam is null");
 checkArgument(colQual != null, "colQual is null");
@@ -155,14 +148,6 @@ public class ScannerOptions implements ScannerBase {
   }
 
   @Override
-  public void fetchColumn(CharSequence colFam, CharSequence colQual) {
-checkArgument(colFam != null, "colFam is null");
-checkArgument(colQual != null, "colQual is null");
-Column c = new Column(colFam.toString().getBytes(), 
colQual.toString().getBytes(), null);
-fetchedColumns.add(c);
-  }
-
-  @Override
   public void fetchColumn(IteratorSetting.Column column) {
 checkArgument(column != null, "Column is null");
 fetchColumn(column.getColumnFamily(), column.getColumnQualifier());
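
The refactor above moves the duplicated `CharSequence` implementations out of `ScannerOptions` and into interface default methods that null-check and delegate to the `Text` overloads. The runnable sketch below shows that pattern with a minimal stand-in interface (`Fetcher`, `Text`, and `RecordingFetcher` are illustrative names, not Accumulo classes): implementations only provide the `Text` variant and inherit the `CharSequence` one for free.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class DefaultMethodDemo {
    // Minimal stand-in for org.apache.hadoop.io.Text.
    static final class Text {
        final String value;
        Text(String value) { this.value = value; }
    }

    // Same shape as the ScannerBase change: the CharSequence overload is a
    // default method that delegates to the Text overload, so every
    // implementation gets it without duplicating the logic.
    interface Fetcher {
        void fetchColumnFamily(Text colFam);

        default void fetchColumnFamily(CharSequence colFam) {
            Objects.requireNonNull(colFam);
            fetchColumnFamily(new Text(colFam.toString()));
        }
    }

    static final class RecordingFetcher implements Fetcher {
        final List<String> fetched = new ArrayList<>();

        @Override
        public void fetchColumnFamily(Text colFam) {
            fetched.add(colFam.value);
        }
    }

    public static void main(String[] args) {
        RecordingFetcher f = new RecordingFetcher();
        f.fetchColumnFamily("attributes"); // resolves to the default method
        System.out.println(f.fetched);     // [attributes]
    }
}
```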



[accumulo] branch master updated: Add ScannerBase.fetchColumn/Family using CharSequence (#939)

2019-02-06 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new e0e7916  Add ScannerBase.fetchColumn/Family using CharSequence (#939)
e0e7916 is described below

commit e0e791609f3048ba5d6af8ab2dbcd754ad1ad848
Author: Mike Walch 
AuthorDate: Wed Feb 6 15:40:31 2019 -0500

Add ScannerBase.fetchColumn/Family using CharSequence (#939)

* Modified some but not all tests to use new method
---
 .../apache/accumulo/core/client/ScannerBase.java   | 33 ++
 .../accumulo/core/clientImpl/ScannerOptions.java   | 15 ++
 .../apache/accumulo/test/ClientSideIteratorIT.java |  4 +--
 .../apache/accumulo/test/ConditionalWriterIT.java  | 14 -
 4 files changed, 57 insertions(+), 9 deletions(-)

diff --git 
a/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java 
b/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
index 7d200bf..a490f16 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
@@ -91,6 +91,25 @@ public interface ScannerBase extends 
Iterable>, AutoCloseable {
   void fetchColumnFamily(Text col);
 
   /**
+   * Adds a column family to the list of columns that will be fetched by this 
scanner. By default
+   * when no columns have been added the scanner fetches all columns. To fetch 
multiple column
+   * families call this function multiple times.
+   *
+   * 
+   * This can help limit which locality groups are read on the server side.
+   *
+   * 
+   * When used in conjunction with custom iterators, the set of column 
families fetched is passed to
+   * the top iterator's seek method. Custom iterators may change this set of 
column families when
+   * calling seek on their source.
+   *
+   * @param colFam
+   *  the column family to be fetched
+   * @since 2.0.0
+   */
+  void fetchColumnFamily(CharSequence colFam);
+
+  /**
* Adds a column to the list of columns that will be fetched by this 
scanner. The column is
* identified by family and qualifier. By default when no columns have been 
added the scanner
* fetches all columns.
@@ -122,6 +141,20 @@ public interface ScannerBase extends 
Iterable>, AutoCloseable {
   void fetchColumn(Text colFam, Text colQual);
 
   /**
+   * Adds a column to the list of columns that will be fetched by this 
scanner. The column is
+   * identified by family and qualifier. By default when no columns have been 
added the scanner
+   * fetches all columns. See the warning on {@link #fetchColumn(Text, Text)}
+   *
+   *
+   * @param colFam
+   *  the column family of the column to be fetched
+   * @param colQual
+   *  the column qualifier of the column to be fetched
+   * @since 2.0.0
+   */
+  void fetchColumn(CharSequence colFam, CharSequence colQual);
+
+  /**
* Adds a column to the list of columns that will be fetched by this scanner.
*
* @param column
diff --git 
a/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java 
b/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
index 71c7900..a9d45f2 100644
--- a/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
+++ b/core/src/main/java/org/apache/accumulo/core/clientImpl/ScannerOptions.java
@@ -140,6 +140,13 @@ public class ScannerOptions implements ScannerBase {
   }
 
   @Override
+  public void fetchColumnFamily(CharSequence colFam) {
+checkArgument(colFam != null, "colFam is null");
+Column c = new Column(colFam.toString().getBytes(), null, null);
+fetchedColumns.add(c);
+  }
+
+  @Override
   public synchronized void fetchColumn(Text colFam, Text colQual) {
 checkArgument(colFam != null, "colFam is null");
 checkArgument(colQual != null, "colQual is null");
@@ -148,6 +155,14 @@ public class ScannerOptions implements ScannerBase {
   }
 
   @Override
+  public void fetchColumn(CharSequence colFam, CharSequence colQual) {
+checkArgument(colFam != null, "colFam is null");
+checkArgument(colQual != null, "colQual is null");
+Column c = new Column(colFam.toString().getBytes(), 
colQual.toString().getBytes(), null);
+fetchedColumns.add(c);
+  }
+
+  @Override
   public void fetchColumn(IteratorSetting.Column column) {
 checkArgument(column != null, "Column is null");
 fetchColumn(column.getColumnFamily(), column.getColumnQualifier());
diff --git 
a/test/src/main/java/org/apache/accumulo/test/ClientSideIteratorIT.java 
b/test/src/main/java/org/apache/accumulo/test/ClientSideIteratorIT.java
index c727e68..3569c60 100644
--- a/test/src/main/java/org/apache/accumulo/test/ClientSideIteratorIT.java
+++ b/test/src/main/java/org/apache/accumulo
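
The javadoc added above explains that fetching a column family restricts which entries a scan returns (and can limit which locality groups are read). The toy model below is purely illustrative, not Accumulo code: it filters a sorted map by a "family" component the way a scan with fetched families returns only matching entries, in sorted order.

```java
import java.util.List;
import java.util.Set;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class FetchFamilyDemo {
    // Keep only entries whose family component is in the fetched set,
    // preserving the sorted order a tablet server would return.
    static List<String> fetch(TreeMap<String, String> table, Set<String> families) {
        return table.entrySet().stream()
            .filter(e -> families.contains(e.getKey().split("/")[1]))
            .map(e -> e.getKey() + "=" + e.getValue())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Toy key space: "row/family/qualifier" -> value, kept sorted.
        TreeMap<String, String> table = new TreeMap<>();
        table.put("row1/attributes/name", "alice");
        table.put("row1/edges/knows", "bob");
        table.put("row2/attributes/name", "bob");

        System.out.println(fetch(table, Set.of("attributes")));
        // [row1/attributes/name=alice, row2/attributes/name=bob]
    }
}
```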

[accumulo-examples] branch master updated: Fix QA issue

2019-01-31 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-examples.git


The following commit(s) were added to refs/heads/master by this push:
 new 0d84a80  Fix QA issue
0d84a80 is described below

commit 0d84a80ba1ebff4c3da54aca1535a5c86436d52c
Author: Mike Walch 
AuthorDate: Thu Jan 31 16:32:34 2019 -0500

Fix QA issue
---
 src/main/java/org/apache/accumulo/examples/bloom/BloomFilters.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/main/java/org/apache/accumulo/examples/bloom/BloomFilters.java 
b/src/main/java/org/apache/accumulo/examples/bloom/BloomFilters.java
index f45de93..6bba9ea 100644
--- a/src/main/java/org/apache/accumulo/examples/bloom/BloomFilters.java
+++ b/src/main/java/org/apache/accumulo/examples/bloom/BloomFilters.java
@@ -76,7 +76,7 @@ public class BloomFilters {
 Random r = new Random(seed);
 try (BatchWriter bw = client.createBatchWriter(tableName)) {
   for (int x = 0; x < 1_000_000; x++) {
-Long rowId = RandomBatchWriter.abs(r.nextLong()) % 1_000_000_000;
+long rowId = RandomBatchWriter.abs(r.nextLong()) % 1_000_000_000;
 Mutation m = RandomBatchWriter.createMutation(rowId, 50, new 
ColumnVisibility());
 bw.addMutation(m);
   }
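
The one-character fix above (`Long` to `long`) matters because the loop runs a million times: declaring the variable as boxed `Long` autoboxes every modulus result into a fresh object. The self-contained snippet below demonstrates the distinction; it does not use the Accumulo `RandomBatchWriter` helper, just the same arithmetic shape.

```java
public class BoxingDemo {
    public static void main(String[] args) {
        // The fixed line: keep the modulus result as a primitive long.
        long primitive = Math.abs(-42L) % 1_000_000_000;

        // The old line boxed every value: in a 1,000,000-iteration writer
        // loop that is a million needless Long allocations.
        Long boxed = Math.abs(-42L) % 1_000_000_000;

        // Values are equal; only the representation differs.
        System.out.println(primitive == boxed); // true (boxed is auto-unboxed)
    }
}
```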



[accumulo-website] branch master updated: Updates to 2.0 release notes (#148)

2019-01-30 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new f0cd66b  Updates to 2.0 release notes (#148)
f0cd66b is described below

commit f0cd66be721ca7c5c2736b2145a52a183569fe21
Author: Mike Walch 
AuthorDate: Wed Jan 30 16:46:05 2019 -0500

Updates to 2.0 release notes (#148)
---
 _posts/release/2017-09-05-accumulo-2.0.0.md | 15 ---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/_posts/release/2017-09-05-accumulo-2.0.0.md 
b/_posts/release/2017-09-05-accumulo-2.0.0.md
index 56bc0c0..720aaaf 100644
--- a/_posts/release/2017-09-05-accumulo-2.0.0.md
+++ b/_posts/release/2017-09-05-accumulo-2.0.0.md
@@ -12,9 +12,17 @@ to documentation for new features in addition to issues.
 
 ### New API for creating connections to Accumulo
 
-A fluent API for clients to build `Connector` objects was introduced in 
[ACCUMULO-4784].
-The new API deprecates `ClientConfiguration` and introduces its own properties
-file called `accumulo-client.properties` that ships with the Accumulo tarball.
+A fluent API for creating Accumulo clients was introduced in [ACCUMULO-4784] 
and [#634].
+The `Connector` and `ZooKeeperInstance` objects have been deprecated and 
replaced by
+`AccumuloClient` which is created from the `Accumulo` entry point. The new API 
also deprecates
+`ClientConfiguration` and introduces its own properties file called 
`accumulo-client.properties`
+that ships with the Accumulo tarball. The new API has the following benefits 
over the old API:
+  * All connection information can be specified in a properties file used to
create the client. This was not possible with the old API.
+  * The new API does not require `ZooKeeperInstance` to be created first 
before creating a client.
+  * The new client is closeable and does not rely on shared static resource 
management.
+  * Clients can be created using a new Java builder, `Properties` object, or 
`accumulo-client.properties`
+  * Clients can now be created with default settings for `BatchWriter`, 
`Scanner`, etc.
 See the [client documentation][clients] for more information on how to use the 
new API.
 
 ### Hadoop 3 and Java 8.
@@ -178,3 +186,4 @@ View the [Upgrading Accumulo documentation][upgrade] for 
guidance.
 [ACCUMULO-4490]: https://issues.apache.org/jira/browse/ACCUMULO-4490
 [ACCUMULO-4449]: https://issues.apache.org/jira/browse/ACCUMULO-4449
 [ACCUMULO-3652]: https://issues.apache.org/jira/browse/ACCUMULO-3652
+[#634]: https://github.com/apache/accumulo/issues/634
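
The release notes above describe the fluent 2.0 entry point (per the Accumulo client docs, calls of the shape `Accumulo.newClient().to(...).as(...).build()`). Building a real `AccumuloClient` needs a running instance, so the runnable sketch below shows only the fluent-builder pattern itself with hypothetical `Builder`/`Client` classes: each step returns `this`, which is what lets calls chain.

```java
public class FluentClientDemo {
    // Hypothetical result type standing in for a built client.
    static final class Client {
        final String instance, zookeepers, user;

        Client(String instance, String zookeepers, String user) {
            this.instance = instance;
            this.zookeepers = zookeepers;
            this.user = user;
        }
    }

    // Hypothetical builder mirroring the shape of the 2.0 entry point.
    static final class Builder {
        private String instance, zookeepers, user;

        Builder to(String instance, String zookeepers) {
            this.instance = instance;
            this.zookeepers = zookeepers;
            return this; // returning 'this' is what makes the API fluent
        }

        Builder as(String user) {
            this.user = user;
            return this;
        }

        Client build() {
            return new Client(instance, zookeepers, user);
        }
    }

    public static void main(String[] args) {
        Client c = new Builder().to("myinstance", "zoo1:2181").as("root").build();
        System.out.println(c.instance + " " + c.user); // myinstance root
    }
}
```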



[accumulo-website] branch asf-site updated: Jekyll build from master:77812b4

2019-01-29 Thread mwalch

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ead7d43  Jekyll build from master:77812b4
ead7d43 is described below

commit ead7d43263d1dbef828ff075968f27c931186535
Author: Mike Walch 
AuthorDate: Tue Jan 29 16:36:08 2019 -0500

Jekyll build from master:77812b4

Update client documentation (#144)
---
 docs/2.x/getting-started/clients.html | 72 +++
 feed.xml  |  4 +-
 search_data.json  |  2 +-
 3 files changed, 42 insertions(+), 36 deletions(-)

diff --git a/docs/2.x/getting-started/clients.html 
b/docs/2.x/getting-started/clients.html
index 3a08bc0..3afc323 100644
--- a/docs/2.x/getting-started/clients.html
+++ b/docs/2.x/getting-started/clients.html
@@ -566,18 +566,11 @@ changes to the columns of a single row. The changes are 
made atomically in the
 TabletServer. Clients then add Mutations to a https://static.javadoc.io/org.apache.accumulo/accumulo-core/2.0.0-alpha-1/org/apache/accumulo/core/client/BatchWriter.html;>BatchWriter
 which submits them to
 the appropriate TabletServers.
 
-Mutations can be created thus:
+The code below shows how a Mutation is created.
 
-Text rowID = new 
Text("row1");
-Text colFam = new Text("myColFam");
-Text colQual = new Text("myColQual");
-ColumnVisibility colVis = new ColumnVisibility("public");
-long timestamp = System.currentTimeMillis();
-
-Value value = new Value("myValue".getBytes());
-
-Mutation mutation = new Mutation(rowID);
-mutation.put(colFam, colQual, colVis, timestamp, value);
+Mutation mutation = new Mutation("row1");
+mutation.at().family("myColFam1").qualifier("myColQual1").visibility("public").put("myValue1");
+mutation.at().family("myColFam2").qualifier("myColQual2").visibility("public").put("myValue2");

BatchWriter
@@ -588,15 +581,13 @@ amortize network overhead. Care must be taken to avoid 
changing the contents of
 any Object passed to the BatchWriter since it keeps objects in memory while
 batching.
 
-Mutations are added to a BatchWriter thus:
-
-// BatchWriterConfig has reasonable 
defaults
-BatchWriterConfig config = new BatchWriterConfig();
-config.setMaxMemory(1000L); // bytes 
available to batchwriter for buffering mutations
+The code below shows how a Mutation is added to a BatchWriter:
 
-BatchWriter writer = client.createBatchWriter("table", config)
-writer.addMutation(mutation);
-writer.close();
+try (BatchWriter writer = client.createBatchWriter("mytable")) {
+  Mutation m = new Mutation("row1");
+  m.at().family("myfam").qualifier("myqual").visibility("public").put("myval");
+  writer.addMutation(m);
+}
 
 
 For more example code, see the https://github.com/apache/accumulo-examples/blob/master/docs/batch.md;>batch
 writing and scanning example.
@@ -678,20 +669,34 @@ to efficiently return ranges of consecutive keys and 
their associated values.Scanner
 
-To retrieve data, Clients use a https://static.javadoc.io/org.apache.accumulo/accumulo-core/2.0.0-alpha-1/org/apache/accumulo/core/client/Scanner.html;>Scanner,
 which acts like an Iterator over
-keys and values. Scanners can be configured to start and stop at particular 
keys, and
+To retrieve data, create a https://static.javadoc.io/org.apache.accumulo/accumulo-core/2.0.0-alpha-1/org/apache/accumulo/core/client/Scanner.html;>Scanner
 using https://static.javadoc.io/org.apache.accumulo/accumulo-core/2.0.0-alpha-1/org/apache/accumulo/core/client/AccumuloClient.html;>AccumuloClient.
 A Scanner acts like an Iterator over
+keys and values in the table.
+
+If a https://static.javadoc.io/org.apache.accumulo/accumulo-core/2.0.0-alpha-1/org/apache/accumulo/core/client/Scanner.html;>Scanner
 is created without https://static.javadoc.io/org.apache.accumulo/accumulo-core/2.0.0-alpha-1/org/apache/accumulo/core/security/Authorizations.html;>Authorizations,
 it uses all https://static.javadoc.io/org.apache.accumulo/accumulo-core/2.0.0-alpha-1/org/apache/accumulo/core/security/Authorizations.html;>Authorizationshttps://static.javadoc.io/org.apache.accumulo/accumulo-core/2.0.0-alpha-1/org/apache/accumulo/core/client/AccumuloClient.html;>AccumuloClient:
+
+Scanner s 
= client.createScanner("table");
+
+
+A scanner can also be created to only use a subset of a user’s https://static.javadoc.io/org.apache.accumulo/accumulo-core/2.0.0-alpha-1/org/apache/accumulo/core/security/Authorizations.html;>Authorizations.
+
+Scanner s 
= client.createScanner("table", new Authorizations("public"));
+
+
+Scanners can be configured to start and stop at particular keys, and
 to return a subset of the colu

[accumulo-website] branch master updated: Update client documentation (#144)

2019-01-29 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 77812b4  Update client documentation (#144)
77812b4 is described below

commit 77812b47d32383b4a3eb6f86c2a650a3e8d12411
Author: Mike Walch 
AuthorDate: Tue Jan 29 16:34:50 2019 -0500

Update client documentation (#144)
---
 _docs-2/getting-started/clients.md | 75 +-
 1 file changed, 42 insertions(+), 33 deletions(-)

diff --git a/_docs-2/getting-started/clients.md 
b/_docs-2/getting-started/clients.md
index cff9086..68ac3ce 100644
--- a/_docs-2/getting-started/clients.md
+++ b/_docs-2/getting-started/clients.md
@@ -115,19 +115,12 @@ changes to the columns of a single row. The changes are 
made atomically in the
 TabletServer. Clients then add Mutations to a [BatchWriter] which submits them 
to
 the appropriate TabletServers.
 
-Mutations can be created thus:
+The code below shows how a Mutation is created.
 
 ```java
-Text rowID = new Text("row1");
-Text colFam = new Text("myColFam");
-Text colQual = new Text("myColQual");
-ColumnVisibility colVis = new ColumnVisibility("public");
-long timestamp = System.currentTimeMillis();
-
-Value value = new Value("myValue".getBytes());
-
-Mutation mutation = new Mutation(rowID);
-mutation.put(colFam, colQual, colVis, timestamp, value);
+Mutation mutation = new Mutation("row1");
+mutation.at().family("myColFam1").qualifier("myColQual1").visibility("public").put("myValue1");
+mutation.at().family("myColFam2").qualifier("myColQual2").visibility("public").put("myValue2");
 ```
 
 ### BatchWriter
@@ -138,16 +131,14 @@ amortize network overhead. Care must be taken to avoid 
changing the contents of
 any Object passed to the BatchWriter since it keeps objects in memory while
 batching.
 
-Mutations are added to a BatchWriter thus:
+The code below shows how a Mutation is added to a BatchWriter:
 
 ```java
-// BatchWriterConfig has reasonable defaults
-BatchWriterConfig config = new BatchWriterConfig();
-config.setMaxMemory(1000L); // bytes available to batchwriter for 
buffering mutations
-
-BatchWriter writer = client.createBatchWriter("table", config)
-writer.addMutation(mutation);
-writer.close();
+try (BatchWriter writer = client.createBatchWriter("mytable")) {
+  Mutation m = new Mutation("row1");
+  m.at().family("myfam").qualifier("myqual").visibility("public").put("myval");
+  writer.addMutation(m);
+}
 ```
 
 For more example code, see the [batch writing and scanning example][batch].
@@ -224,21 +215,37 @@ to efficiently return ranges of consecutive keys and 
their associated values.
 
 ### Scanner
 
-To retrieve data, Clients use a [Scanner], which acts like an Iterator over
-keys and values. Scanners can be configured to start and stop at particular 
keys, and
+To retrieve data, create a [Scanner] using [AccumuloClient]. A Scanner acts 
like an Iterator over
+keys and values in the table.
+
+If a [Scanner] is created without [Authorizations], it uses all 
[Authorizations] granted
+to the user that created the [AccumuloClient]:
+
+```java
+Scanner s = client.createScanner("table");
+```
+
+A scanner can also be created to only use a subset of a user's 
[Authorizations].
+
+```java
+Scanner s = client.createScanner("table", new Authorizations("public"));
+```
+
+Scanners can be configured to start and stop at particular keys, and
 to return a subset of the columns available.
 
 ```java
-// specify which visibilities we are allowed to see
+// return data with visibilities that match specified auths
 Authorizations auths = new Authorizations("public");
 
-Scanner scan = client.createScanner("table", auths);
-scan.setRange(new Range("harry","john"));
-scan.fetchColumnFamily(new Text("attributes"));
+try (Scanner scan = client.createScanner("table", auths)) {
+  scan.setRange(new Range("harry","john"));
+  scan.fetchColumnFamily(new Text("attributes"));
 
-for (Entry entry : scan) {
-  Text row = entry.getKey().getRow();
-  Value value = entry.getValue();
+  for (Entry entry : scan) {
+Text row = entry.getKey().getRow();
+Value value = entry.getValue();
+  }
 }
 ```
 
@@ -281,12 +288,13 @@ TabletServers in parallel.
 ArrayList ranges = new ArrayList();
 // populate list of ranges ...
 
-BatchScanner bscan = client.createBatchScanner("table", auths, 10);
-bscan.setRanges(ranges);
-bscan.fetchColumnFamily("attributes");
+try (BatchScanner bscan = client.createBatchScanner("table", auths, 10)) {
+  bscan.setR

[accumulo] branch master updated: Fixes several ITs that are broken (#921)

2019-01-27 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new e26532a  Fixes several ITs that are broken (#921)
e26532a is described below

commit e26532a022574551f8c76869e8eb9f878b524a5a
Author: Mike Walch 
AuthorDate: Sun Jan 27 09:08:51 2019 -0500

Fixes several ITs that are broken (#921)

* Hadoop config cannot be pulled from ServerContext if it has not been
  constructed yet
* Remove usages of scanner after it has been closed
---
 .../miniclusterImpl/MiniAccumuloClusterImpl.java   |  2 +-
 .../apache/accumulo/test/ScanFlushWithTimeIT.java  |  6 +---
 .../accumulo/test/functional/ConcurrencyIT.java| 37 ++
 .../org/apache/accumulo/test/functional/SslIT.java |  4 +--
 .../test/functional/SslWithClientAuthIT.java   |  4 +--
 .../test/functional/WriteAheadLogEncryptedIT.java  |  2 +-
 6 files changed, 21 insertions(+), 34 deletions(-)

diff --git 
a/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
 
b/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
index db402f2..5037f35 100644
--- 
a/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
+++ 
b/minicluster/src/main/java/org/apache/accumulo/miniclusterImpl/MiniAccumuloClusterImpl.java
@@ -408,7 +408,7 @@ public class MiniAccumuloClusterImpl implements 
AccumuloCluster {
   siteConfig.put(Property.INSTANCE_DFS_DIR.getKey(), "/accumulo");
   config.setSiteConfig(siteConfig);
 } else if (config.useExistingInstance()) {
-  dfsUri = 
getServerContext().getHadoopConf().get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY);
+  dfsUri = 
config.getHadoopConfiguration().get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY);
 } else {
   dfsUri = "file:///";
 }
diff --git 
a/test/src/main/java/org/apache/accumulo/test/ScanFlushWithTimeIT.java 
b/test/src/main/java/org/apache/accumulo/test/ScanFlushWithTimeIT.java
index deff587..c31246e 100644
--- a/test/src/main/java/org/apache/accumulo/test/ScanFlushWithTimeIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/ScanFlushWithTimeIT.java
@@ -94,11 +94,7 @@ public class ScanFlushWithTimeIT extends 
AccumuloClusterHarness {
 
   private void testScanner(ScannerBase s, long expected) {
 long now = System.currentTimeMillis();
-try {
-  s.iterator().next();
-} finally {
-  s.close();
-}
+s.iterator().next();
 long diff = System.currentTimeMillis() - now;
 log.info("Diff = {}", diff);
 assertTrue("Scanner taking too long to return intermediate results: " + 
diff, diff < expected);
diff --git 
a/test/src/main/java/org/apache/accumulo/test/functional/ConcurrencyIT.java 
b/test/src/main/java/org/apache/accumulo/test/functional/ConcurrencyIT.java
index 20afa56..50613a9 100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/ConcurrencyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ConcurrencyIT.java
@@ -24,14 +24,10 @@ import java.util.Map;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.accumulo.core.client.AccumuloClient;
-import org.apache.accumulo.core.client.AccumuloException;
-import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.BatchWriter;
 import org.apache.accumulo.core.client.BatchWriterConfig;
 import org.apache.accumulo.core.client.IteratorSetting;
-import org.apache.accumulo.core.client.MutationsRejectedException;
 import org.apache.accumulo.core.client.Scanner;
-import org.apache.accumulo.core.client.TableExistsException;
 import org.apache.accumulo.core.client.TableNotFoundException;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.Mutation;
@@ -51,26 +47,27 @@ public class ConcurrencyIT extends AccumuloClusterHarness {
   static class ScanTask extends Thread {
 
 int count = 0;
-Scanner scanner = null;
+AccumuloClient client;
+String tableName;
+long time;
+
+ScanTask(AccumuloClient client, String tableName, long time) {
+  this.client = client;
+  this.tableName = tableName;
+  this.time = time;
+}
 
-ScanTask(AccumuloClient client, String tableName, long time) throws 
Exception {
-  try {
-scanner = client.createScanner(tableName, Authorizations.EMPTY);
+@Override
+public void run() {
+  try (Scanner scanner = client.createScanner(tableName, 
Authorizations.EMPTY)) {
 IteratorSetting slow = new IteratorSetting(30, "slow", 
SlowIterator.class);
 SlowIterator.setSleepTime(slow, time);
 scanner.addScanIterator(slow);
-  } finally {
-if (scanner != null) {
-  scanner.close();
- 

[accumulo-website] branch asf-site updated: Jekyll build from master:af2e63b

2019-01-26 Thread mwalch

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new b049a43  Jekyll build from master:af2e63b
b049a43 is described below

commit b049a4392e967bf6a907dd9138a5cbd7406cd06e
Author: Mike Walch 
AuthorDate: Sat Jan 26 15:29:32 2019 -0500

Jekyll build from master:af2e63b

Updates to table config
---
 docs/2.x/development/iterators.html   |   2 +-
 docs/2.x/getting-started/table_configuration.html | 102 +++---
 feed.xml  |   4 +-
 redirects.json|   2 +-
 search_data.json  |   4 +-
 5 files changed, 57 insertions(+), 57 deletions(-)

diff --git a/docs/2.x/development/iterators.html 
b/docs/2.x/development/iterators.html
index b8a5f9d..aa47085 100644
--- a/docs/2.x/development/iterators.html
+++ b/docs/2.x/development/iterators.html
@@ -436,7 +436,7 @@ in the iteration, Accumulo Iterators must also support the 
ability to “move”
 iteration (the Accumulo table). Accumulo Iterators are designed to be 
concatenated together, similar to applying a
 series of transformations to a list of elements. Accumulo Iterators can 
duplicate their underlying source to create
 multiple “pointers” over the same underlying data (which is extremely powerful 
since each stream is sorted) or they can
-merge multiple Iterators into a single view. In this sense, a collection of 
Iterators operating in tandem is close to
+merge multiple Iterators into a single view. In this sense, a collection of 
Iterators operating in tandem is closer to
 a tree-structure than a list, but there is always a sense of a flow of 
Key-Value pairs through some Iterators. Iterators
 are not designed to act as triggers nor are they designed to operate outside 
of the purview of a single table.
 
diff --git a/docs/2.x/getting-started/table_configuration.html 
b/docs/2.x/getting-started/table_configuration.html
index 771b117..1b8f83e 100644
--- a/docs/2.x/getting-started/table_configuration.html
+++ b/docs/2.x/getting-started/table_configuration.html
@@ -510,7 +510,7 @@ and place it in the lib/ directory of the
 constraint jars can be added to Accumulo and enabled without restarting but any
 change to an existing constraint class requires Accumulo to be restarted.
 
-See the https://github.com/apache/accumulo-examples/blob/master/docs/contraints.md;>constraints
 examples for example code.
+See the https://github.com/apache/accumulo-examples/blob/master/docs/constraints.md;>constraints
 examples for example code.
 
 Bloom Filters
 
@@ -1062,72 +1062,72 @@ importing tables.
 The shell session below illustrates creating a table, inserting data, and
 exporting the table.
 
-root@test15 createtable table1
-root@test15 table1 insert a cf1 cq1 v1
-root@test15 table1 insert h cf1 cq1 v2
-root@test15 table1 insert z cf1 cq1 v3
-root@test15 table1 insert z cf1 cq2 v4
-root@test15 table1 addsplits -t table1 b r
-root@test15 table1 scan
-a cf1:cq1 []v1
-h cf1:cq1 []v2
-z cf1:cq1 []v3
-z cf1:cq2 []v4
-root@test15 config -t table1 -s table.split.threshold=100M
-root@test15 table1 clonetable table1 table1_exp
-root@test15 table1 offline table1_exp
-root@test15 table1 exporttable -t table1_exp /tmp/table1_export
-root@test15 table1 quit
+root@test15 createtable table1
+root@test15 table1 insert a cf1 cq1 v1
+root@test15 table1 insert h cf1 cq1 v2
+root@test15 table1 insert z cf1 cq1 v3
+root@test15 table1 insert z cf1 cq2 v4
+root@test15 table1 addsplits -t table1 b r
+root@test15 table1 scan
+a cf1:cq1 []v1
+h cf1:cq1 []v2
+z cf1:cq1 []v3
+z cf1:cq2 []v4
+root@test15 config -t table1 -s table.split.threshold=100M
+root@test15 table1 clonetable table1 table1_exp
+root@test15 table1 offline table1_exp
+root@test15 table1 exporttable -t table1_exp /tmp/table1_export
+root@test15 table1 quit
 
 
 After executing the export command, a few files are created in the hdfs dir.
 One of the files is a list of files to distcp as shown below.
 
-$ hadoop fs -ls /tmp/table1_export
-Found 2 items
--rw-r--r--   3 user supergroup162 2012-07-25 09:56 
/tmp/table1_export/distcp.txt
--rw-r--r--   3 user supergroup821 2012-07-25 09:56 
/tmp/table1_export/exportMetadata.zip
-$ hadoop fs -cat /tmp/table1_export/distcp.txt
-hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F000.rf
-hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
+$ hadoop fs -ls /tmp/table1_export
+Found 2 items
+-rw-r--r--   3 user supergroup162 2012-07-25 09:56 
/tmp/table1_export/distcp.txt
+-rw-r--r--   3 user supergroup821 2012-07-25 09:56 
/tmp/table1_export/exportMetadata.zip
+$ had

[accumulo-website] branch master updated: Updates to table config

2019-01-26 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new af2e63b  Updates to table config
af2e63b is described below

commit af2e63b44ea49a953185a309c8c924ddba962162
Author: Mike Walch 
AuthorDate: Sat Jan 26 15:28:21 2019 -0500

Updates to table config
---
 _docs-2/getting-started/table_configuration.md | 11 +--
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/_docs-2/getting-started/table_configuration.md 
b/_docs-2/getting-started/table_configuration.md
index 027dc86..f0a61cc 100644
--- a/_docs-2/getting-started/table_configuration.md
+++ b/_docs-2/getting-started/table_configuration.md
@@ -631,8 +631,6 @@ importing tables.
 The shell session below illustrates creating a table, inserting data, and
 exporting the table.
 
-
-```
 root@test15> createtable table1
 root@test15 table1> insert a cf1 cq1 v1
 root@test15 table1> insert h cf1 cq1 v2
@@ -649,12 +647,10 @@ exporting the table.
 root@test15 table1> offline table1_exp
 root@test15 table1> exporttable -t table1_exp /tmp/table1_export
 root@test15 table1> quit
-```
 
 After executing the export command, a few files are created in the hdfs dir.
 One of the files is a list of files to distcp as shown below.
 
-```
 $ hadoop fs -ls /tmp/table1_export
 Found 2 items
 -rw-r--r--   3 user supergroup162 2012-07-25 09:56 
/tmp/table1_export/distcp.txt
@@ -662,20 +658,16 @@ One of the files is a list of files to distcp as shown 
below.
 $ hadoop fs -cat /tmp/table1_export/distcp.txt
 hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F000.rf
 hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
-```
 
 Before the table can be imported, it must be copied using `distcp`. After the
 `distcp` completes, the cloned table may be deleted.
 
-```
 $ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
-```
 
 The Accumulo shell session below shows importing the table and inspecting it.
 The data, splits, config, and logical time information for the table were
 preserved.
 
-```
 root@test15> importtable table1_copy /tmp/table1_export_dest
 root@test15> table table1_copy
 root@test15 table1_copy> scan
@@ -702,11 +694,10 @@ preserved.
 5;b srv:time []M1343224500467
 5;r srv:time []M1343224500467
 5< srv:time []M1343224500467
-```
 
 [bloom-filter-example]: 
https://github.com/apache/accumulo-examples/blob/master/docs/bloom.md
 [constraint]: {% jurl org.apache.accumulo.core.constraints.Constraint %}
-[constraints-example]: 
https://github.com/apache/accumulo-examples/blob/master/docs/contraints.md
+[constraints-example]: 
https://github.com/apache/accumulo-examples/blob/master/docs/constraints.md
 [iterators-user]: {% jurl org.apache.accumulo.core.iterators.user %}
 [option-describer]: {% jurl org.apache.accumulo.core.iterators.OptionDescriber 
%}
 [combiner]: {% jurl org.apache.accumulo.core.iterators.Combiner %}
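The table_configuration.md walkthrough above exports a table, distcp's the files listed in distcp.txt, and then imports the copy. As a minimal runnable sketch, the manifest can be sanity-checked before the copy; the paths below are the illustrative ones from the session, and on a real cluster you would read the file with `hadoop fs -cat` and copy it with `hadoop distcp -f`:

```shell
# Recreate the distcp.txt manifest locally with the example paths from the
# docs, then verify its shape before running the (cluster-only) distcp.
mkdir -p /tmp/table1_export_demo
cat > /tmp/table1_export_demo/distcp.txt <<'EOF'
hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F000.rf
hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
EOF
# An export manifest lists every RFile plus exactly one exportMetadata.zip.
wc -l < /tmp/table1_export_demo/distcp.txt
grep -c 'exportMetadata.zip' /tmp/table1_export_demo/distcp.txt
```

A quick count like this catches a truncated or empty export before any data is copied.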



[accumulo-website] branch asf-site updated: Jekyll build from master:ab08d17

2019-01-15 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 8e58cbc  Jekyll build from master:ab08d17
8e58cbc is described below

commit 8e58cbc1325e1622cab110ac5aaa309a42f9
Author: Mike Walch 
AuthorDate: Tue Jan 15 19:26:06 2019 -0500

Jekyll build from master:ab08d17

Improved links in docs
---
 docs/2.x/administration/in-depth-install.html | 11 ++-
 docs/2.x/development/mapreduce.html   |  2 +-
 feed.xml  |  4 ++--
 search_data.json  |  2 +-
 4 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/docs/2.x/administration/in-depth-install.html 
b/docs/2.x/administration/in-depth-install.html
index 42720b5..872a2ab 100644
--- a/docs/2.x/administration/in-depth-install.html
+++ b/docs/2.x/administration/in-depth-install.html
@@ -829,7 +829,7 @@ configuration is:
 general.vfs.context.classpath.app1.delegation=post
 
 
-To use contexts in your application you can set the table.classpath.context on your tables or use 
the setClassLoaderContext() method on 
Scanner
+To use contexts in your application you can set the table.classpath.context
 on your tables or use the setClassLoaderContext() method on Scanner
 and BatchScanner passing in the name of the context, app1 in the example 
above. Setting the property on the table allows your minc, majc, and scan 
 iterators to load classes from the locations defined by the context. Passing 
the context name to the scanners allows you to override the table setting
 to load only scan time iterators from a different location.
@@ -933,11 +933,12 @@ to be able to scale to using 10’s of GB of RAM and 10’s 
of CPU cores.
 Accumulo TabletServers bind certain ports on the host to accommodate remote 
procedure calls to/from
 other nodes. Running more than one TabletServer on a host requires that you 
set the environment variable
 ACCUMULO_SERVICE_INSTANCE to an 
instance number (i.e 1, 2) for each instance that is started. Also, set
-these properties in accumulo.properties:
+these properties in accumulo.properties:
 
-tserver.port.search=true
-replication.receipt.service.port=0
-
+
+  tserver.port.search = true
+  replication.receipt.service.port = 0
+
 
 Logging
 
diff --git a/docs/2.x/development/mapreduce.html 
b/docs/2.x/development/mapreduce.html
index 2e5a4bf..9f13f6c 100644
--- a/docs/2.x/development/mapreduce.html
+++ b/docs/2.x/development/mapreduce.html
@@ -473,7 +473,7 @@ MapReduce jobs to run with both Accumulo’s  Hadoop’s 
dependencies on th
 Since 2.0, Accumulo no longer has the same versions for dependencies as 
Hadoop. While this allows
 Accumulo to update its dependencies more frequently, it can cause problems if 
both Accumulo’s 
 Hadoop’s dependencies are on the classpath of the MapReduce job. When 
launching a MapReduce job that
-use Accumulo, you should build a shaded jar with all of your dependencies and 
complete the following
+use Accumulo, you should build a https://maven.apache.org/plugins/maven-shade-plugin/index.html;>shaded 
jar with all of your dependencies and complete the following
 steps so YARN only includes Hadoop code (and not all of Hadoop’s dependencies) 
when running your MapReduce job:
 
 
diff --git a/feed.xml b/feed.xml
index d9aab6e..9f0b2be 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 
 https://accumulo.apache.org/
 https://accumulo.apache.org/feed.xml; rel="self" 
type="application/rss+xml"/>
-Tue, 15 Jan 2019 17:50:28 -0500
-Tue, 15 Jan 2019 17:50:28 -0500
+Tue, 15 Jan 2019 19:25:58 -0500
+Tue, 15 Jan 2019 19:25:58 -0500
 Jekyll v3.7.3
 
 
diff --git a/search_data.json b/search_data.json
index 978b936..4b7588f 100644
--- a/search_data.json
+++ b/search_data.json
@@ -16,7 +16,7 @@
   
 "docs-2-x-administration-in-depth-install": {
   "title": "In-depth Installation",
-  "content" : "This document provides detailed instructions for 
installing Accumulo. For basicinstructions, see the quick start.HardwareBecause 
we are running essentially two or three systems simultaneously layeredacross 
the cluster: HDFS, Accumulo and MapReduce, it is typical for hardware toconsist 
of 4 to 8 cores, and 8 to 32 GB RAM. This is so each running process can haveat 
least one core and 2 - 4 GB each.One core running HDFS can typically keep 2 to 
4 disks busy, so each machi [...]
+  "content" : "This document provides detailed instructions for 
installing Accumulo. For basicinstructions, see the quick start.HardwareBecause 
we are running essentially two or three systems simultaneously layeredacross 
the cluster: HDFS, Accumulo and MapReduce, it is typical for hardware toconsist 
of 4
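The in-depth-install changes above describe running multiple TabletServers on one host by giving each instance a number via ACCUMULO_SERVICE_INSTANCE and enabling port search. A hedged sketch of the per-instance launch environment; nothing is actually started here, the commands are only written out:

```shell
# Each TabletServer instance on the host gets its own instance number via
# ACCUMULO_SERVICE_INSTANCE; tserver.port.search=true and
# replication.receipt.service.port=0 must also be set in accumulo.properties.
cmdfile=$(mktemp)
for i in 1 2; do
  echo "ACCUMULO_SERVICE_INSTANCE=$i accumulo tserver" >> "$cmdfile"
done
cat "$cmdfile"
```

On a real node each printed line would be run as its own service with that environment variable exported.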

[accumulo-website] branch master updated: Improved links in docs

2019-01-15 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new ab08d17  Improved links in docs
ab08d17 is described below

commit ab08d1718e49c359cf8b478bba072ed5fd22474b
Author: Mike Walch 
AuthorDate: Tue Jan 15 19:25:38 2019 -0500

Improved links in docs
---
 _docs-2/administration/in-depth-install.md | 10 --
 _docs-2/development/mapreduce.md   |  3 ++-
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/_docs-2/administration/in-depth-install.md 
b/_docs-2/administration/in-depth-install.md
index 530191c..c22ce2c 100644
--- a/_docs-2/administration/in-depth-install.md
+++ b/_docs-2/administration/in-depth-install.md
@@ -344,7 +344,7 @@ configuration is:
 general.vfs.context.classpath.app1.delegation=post
 ```
 
-To use contexts in your application you can set the `table.classpath.context` 
on your tables or use the `setClassLoaderContext()` method on Scanner
+To use contexts in your application you can set the {% plink 
table.classpath.context %} on your tables or use the `setClassLoaderContext()` 
method on Scanner
 and BatchScanner passing in the name of the context, app1 in the example 
above. Setting the property on the table allows your minc, majc, and scan 
 iterators to load classes from the locations defined by the context. Passing 
the context name to the scanners allows you to override the table setting
 to load only scan time iterators from a different location. 
@@ -445,12 +445,10 @@ to be able to scale to using 10's of GB of RAM and 10's 
of CPU cores.
 Accumulo TabletServers bind certain ports on the host to accommodate remote 
procedure calls to/from
 other nodes. Running more than one TabletServer on a host requires that you 
set the environment variable
 `ACCUMULO_SERVICE_INSTANCE` to an instance number (i.e 1, 2) for each instance 
that is started. Also, set
-these properties in [accumulo.properties]:
+these properties in [accumulo.properties]:
 
-```
-tserver.port.search=true
-replication.receipt.service.port=0
-```
+* {% plink tserver.port.search %} = `true`
+* {% plink replication.receipt.service.port %} = `0`
 
 ## Logging
 
diff --git a/_docs-2/development/mapreduce.md b/_docs-2/development/mapreduce.md
index adf0643..f5877e4 100644
--- a/_docs-2/development/mapreduce.md
+++ b/_docs-2/development/mapreduce.md
@@ -42,7 +42,7 @@ MapReduce jobs to run with both Accumulo's & Hadoop's 
dependencies on the classp
 Since 2.0, Accumulo no longer has the same versions for dependencies as 
Hadoop. While this allows
 Accumulo to update its dependencies more frequently, it can cause problems if 
both Accumulo's &
 Hadoop's dependencies are on the classpath of the MapReduce job. When 
launching a MapReduce job that
-use Accumulo, you should build a shaded jar with all of your dependencies and 
complete the following
+use Accumulo, you should build a [shaded jar] with all of your dependencies 
and complete the following
 steps so YARN only includes Hadoop code (and not all of Hadoop's dependencies) 
when running your MapReduce job:
 
 1. Set `export HADOOP_USE_CLIENT_CLASSLOADER=true` in your environment before 
submitting
@@ -181,6 +181,7 @@ The [Accumulo Examples repo][examples-repo] has several 
MapReduce examples:
 * [tablettofile] - Uses MapReduce to read a table and write one of its columns 
to a file in HDFS
 * [uniquecols] - Uses MapReduce to count unique columns in Accumulo
 
+[shaded jar]: https://maven.apache.org/plugins/maven-shade-plugin/index.html
 [AccumuloInputFormat]: {% jurl 
org.apache.accumulo.hadoop.mapreduce.AccumuloInputFormat %}
 [AccumuloOutputFormat]: {% jurl 
org.apache.accumulo.hadoop.mapreduce.AccumuloOutputFormat %}
 [AccumuloFileOutputFormat]: {% jurl 
org.apache.accumulo.hadoop.mapreduce.AccumuloFileOutputFormat %}
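The mapreduce.md change above links the shaded-jar step for submitting Accumulo MapReduce jobs. The submission environment it describes can be sketched as below; the jar and class names are placeholders for your own shaded application, and the yarn command is left as a comment because it requires a live cluster:

```shell
# Per the docs above, set this before submitting so YARN uses only Hadoop
# code (not all of Hadoop's dependencies) when running the job.
export HADOOP_USE_CLIENT_CLASSLOADER=true
echo "HADOOP_USE_CLIENT_CLASSLOADER=$HADOOP_USE_CLIENT_CLASSLOADER"
# yarn jar target/my-shaded-job.jar com.example.MyAccumuloJob   (placeholder names)
```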



[accumulo-website] branch asf-site updated: Jekyll build from master:f6cf9c8

2019-01-15 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new f2336a4  Jekyll build from master:f6cf9c8
f2336a4 is described below

commit f2336a48f05447786f759cead62f0faf93917f37
Author: Mike Walch 
AuthorDate: Tue Jan 15 17:50:36 2019 -0500

Jekyll build from master:f6cf9c8

Minor fixes

* Fixed monitor screenshot
* Added links to Accumulo property
---
 docs/2.x/administration/in-depth-install.html | 12 ++--
 docs/2.x/getting-started/features.html|  4 ++--
 feed.xml  |  4 ++--
 search_data.json  |  2 +-
 4 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/docs/2.x/administration/in-depth-install.html 
b/docs/2.x/administration/in-depth-install.html
index a56187a..42720b5 100644
--- a/docs/2.x/administration/in-depth-install.html
+++ b/docs/2.x/administration/in-depth-install.html
@@ -912,16 +912,16 @@ can be use to start/stop processes on a node.
 
 A note on rolling restarts
 
-For sufficiently large Accumulo clusters, restarting multiple TabletServers 
within a short window can place significant 
-load on the Master server.  If slightly lower availability is acceptable, this 
load can be reduced by globally setting 
-table.suspend.duration to a positive 
value.
+For sufficiently large Accumulo clusters, restarting multiple TabletServers 
within a short window can place significant
+load on the Master server.  If slightly lower availability is acceptable, this 
load can be reduced by globally setting
+table.suspend.duration
 to a positive value.
 
-With table.suspend.duration set to, 
say, 5m, Accumulo will wait 
+With table.suspend.duration
 set to, say, 5m, Accumulo will wait
 for 5 minutes for any dead TabletServer to return before reassigning that 
TabletServer’s responsibilities to other TabletServers.
-If the TabletServer returns to the cluster before the specified timeout has 
elapsed, Accumulo will assign the TabletServer 
+If the TabletServer returns to the cluster before the specified timeout has 
elapsed, Accumulo will assign the TabletServer
 its original responsibilities.
 
-It is important not to choose too large a value for table.suspend.duration, as during this time, 
all scans against the 
+It is important not to choose too large a value for table.suspend.duration,
 as during this time, all scans against the
 data that TabletServer had hosted will block (or time out).
 
 Running multiple 
TabletServers on a single node
diff --git a/docs/2.x/getting-started/features.html 
b/docs/2.x/getting-started/features.html
index 7960345..c16b43b 100644
--- a/docs/2.x/getting-started/features.html
+++ b/docs/2.x/getting-started/features.html
@@ -737,8 +737,8 @@ performance.  It displays table sizes, ingest and query 
statistics, server
 load, and last-update information.  It also allows the user to view recent
 diagnostic logs and traces.
 
-
-
+
+
 
 
 Tracing
diff --git a/feed.xml b/feed.xml
index 17d8955..d9aab6e 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 
 https://accumulo.apache.org/
 https://accumulo.apache.org/feed.xml; rel="self" 
type="application/rss+xml"/>
-Tue, 15 Jan 2019 11:08:06 -0500
-Tue, 15 Jan 2019 11:08:06 -0500
+Tue, 15 Jan 2019 17:50:28 -0500
+Tue, 15 Jan 2019 17:50:28 -0500
 Jekyll v3.7.3
 
 
diff --git a/search_data.json b/search_data.json
index 2fd8ae1..978b936 100644
--- a/search_data.json
+++ b/search_data.json
@@ -16,7 +16,7 @@
   
 "docs-2-x-administration-in-depth-install": {
   "title": "In-depth Installation",
-  "content" : "This document provides detailed instructions for 
installing Accumulo. For basicinstructions, see the quick start.HardwareBecause 
we are running essentially two or three systems simultaneously layeredacross 
the cluster: HDFS, Accumulo and MapReduce, it is typical for hardware toconsist 
of 4 to 8 cores, and 8 to 32 GB RAM. This is so each running process can haveat 
least one core and 2 - 4 GB each.One core running HDFS can typically keep 2 to 
4 disks busy, so each machi [...]
+  "content" : "This document provides detailed instructions for 
installing Accumulo. For basicinstructions, see the quick start.HardwareBecause 
we are running essentially two or three systems simultaneously layeredacross 
the cluster: HDFS, Accumulo and MapReduce, it is typical for hardware toconsist 
of 4 to 8 cores, and 8 to 32 GB RAM. This is so each running process can haveat 
least one core and 2 - 4 GB each.One core running HDFS can typically keep 2 to 
4 disks busy, so each machi [...]
   "url": " /docs/2.x/administration/in-depth-install",
   "categories": "administration"
 },



[accumulo-website] branch master updated: Minor fixes

2019-01-15 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new f6cf9c8  Minor fixes
f6cf9c8 is described below

commit f6cf9c8eb7c7543d6dcd3dd3aca3b1bedf824c78
Author: Mike Walch 
AuthorDate: Tue Jan 15 17:49:10 2019 -0500

Minor fixes

* Fixed monitor screenshot
* Added links to Accumulo property
---
 _docs-2/administration/in-depth-install.md | 13 +++--
 _docs-2/getting-started/features.md|  4 ++--
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/_docs-2/administration/in-depth-install.md 
b/_docs-2/administration/in-depth-install.md
index 72c7f7d..530191c 100644
--- a/_docs-2/administration/in-depth-install.md
+++ b/_docs-2/administration/in-depth-install.md
@@ -424,16 +424,16 @@ can be use to start/stop processes on a node.
 
  A note on rolling restarts
 
-For sufficiently large Accumulo clusters, restarting multiple TabletServers 
within a short window can place significant 
-load on the Master server.  If slightly lower availability is acceptable, this 
load can be reduced by globally setting 
-`table.suspend.duration` to a positive value.  
+For sufficiently large Accumulo clusters, restarting multiple TabletServers 
within a short window can place significant
+load on the Master server.  If slightly lower availability is acceptable, this 
load can be reduced by globally setting
+[table.suspend.duration] to a positive value.
 
-With `table.suspend.duration` set to, say, `5m`, Accumulo will wait 
+With [table.suspend.duration] set to, say, `5m`, Accumulo will wait
 for 5 minutes for any dead TabletServer to return before reassigning that 
TabletServer's responsibilities to other TabletServers.
-If the TabletServer returns to the cluster before the specified timeout has 
elapsed, Accumulo will assign the TabletServer 
+If the TabletServer returns to the cluster before the specified timeout has 
elapsed, Accumulo will assign the TabletServer
 its original responsibilities.
 
-It is important not to choose too large a value for `table.suspend.duration`, 
as during this time, all scans against the 
+It is important not to choose too large a value for [table.suspend.duration], 
as during this time, all scans against the
 data that TabletServer had hosted will block (or time out).
 
 ### Running multiple TabletServers on a single node
@@ -678,6 +678,7 @@ mailing lists at https://accumulo.apache.org for more info.
 [gc.port.client]: {% purl gc.port.client %}
 [master.port.client]: {% purl master.port.client %}
 [trace.port.client]: {% purl trace.port.client %}
+[table.suspend.duration]: {% purl table.suspend.duration %}
 [master.replication.coordinator.port]: {% purl 
master.replication.coordinator.port %}
 [replication.receipt.service.port]: {% purl replication.receipt.service.port %}
 [tserver.memory.maps.native.enabled]: {% purl 
tserver.memory.maps.native.enabled %}
diff --git a/_docs-2/getting-started/features.md 
b/_docs-2/getting-started/features.md
index bf0a9ba..bddbac4 100644
--- a/_docs-2/getting-started/features.md
+++ b/_docs-2/getting-started/features.md
@@ -308,8 +308,8 @@ performance.  It displays table sizes, ingest and query 
statistics, server
 load, and last-update information.  It also allows the user to view recent
 diagnostic logs and traces.
 
-
-
+
+
 
 
 ### Tracing
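As a sketch of the rolling-restart tuning above, table.suspend.duration can be written as a site-wide default; 5m is the illustrative value from the docs, not a recommendation for every cluster:

```shell
# Append the rolling-restart default to a demo properties file and show it.
# In practice this line would live in accumulo.properties (or be set with
# `config -s table.suspend.duration=5m` in the Accumulo shell).
props=$(mktemp)
printf 'table.suspend.duration=5m\n' >> "$props"
grep 'table.suspend.duration' "$props"
```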



[accumulo] branch master updated: Fix MultiInstanceReplicationIT & KerberosReplicationIT (#898)

2019-01-15 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new c24c611  Fix MultiInstanceReplicationIT & KerberosReplicationIT (#898)
c24c611 is described below

commit c24c611b6e1d3bb2b2d27e86f0396aaeddaac294
Author: Mike Walch 
AuthorDate: Tue Jan 15 14:54:33 2019 -0500

Fix MultiInstanceReplicationIT & KerberosReplicationIT (#898)

* Added back no-args constructor to SequentialWorkAssigner
  to fix ITs
---
 .../org/apache/accumulo/master/replication/SequentialWorkAssigner.java  | 2 ++
 1 file changed, 2 insertions(+)

diff --git 
a/server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
 
b/server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
index 390ebd9..323688f 100644
--- 
a/server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
+++ 
b/server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
@@ -58,6 +58,8 @@ public class SequentialWorkAssigner extends 
DistributedWorkQueueWorkAssigner {
   // @formatter:on
   private Map> queuedWorkByPeerName;
 
+  public SequentialWorkAssigner() {}
+
   public SequentialWorkAssigner(AccumuloConfiguration conf, AccumuloClient 
client) {
 configure(conf, client);
   }



[accumulo-website] branch asf-site updated: Jekyll build from master:942bfcb

2019-01-15 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new e7c6322  Jekyll build from master:942bfcb
e7c6322 is described below

commit e7c6322dd754345be95381a7c1c11a2b7a5bfa0e
Author: Mike Walch 
AuthorDate: Tue Jan 15 11:08:14 2019 -0500

Jekyll build from master:942bfcb

Updates links to M/R examples
---
 docs/2.x/administration/in-depth-install.html | 16 
 docs/2.x/development/high_speed_ingest.html   |  5 ++---
 docs/2.x/development/mapreduce.html   | 12 +++-
 docs/2.x/getting-started/clients.html | 13 -
 docs/2.x/getting-started/quickstart.html  |  2 +-
 feed.xml  |  4 ++--
 search_data.json  | 10 +-
 7 files changed, 29 insertions(+), 33 deletions(-)

diff --git a/docs/2.x/administration/in-depth-install.html 
b/docs/2.x/administration/in-depth-install.html
index e7b70ea..a56187a 100644
--- a/docs/2.x/administration/in-depth-install.html
+++ b/docs/2.x/administration/in-depth-install.html
@@ -593,8 +593,8 @@ and specify the following:
 Accumulo uses HADOOP_HOME and ZOOKEEPER_HOME to locate Hadoop and Zookeeper 
jars
 and adds them to the CLASSPATH variable. If 
you are running a vendor-specific release of Hadoop
 or Zookeeper, you may need to change how your CLASSPATH is built in accumulo-env.sh. If
-Accumulo has problems later on finding jars, run accumulo classpath -d to debug and print
-Accumulo’s classpath.
+Accumulo has problems later on finding jars, run accumulo classpath to print Accumulo’s
+classpath.
 
 You may want to change the default memory settings for Accumulo’s 
TabletServer which are
 set in the JAVA_OPTS settings for 
‘tservers’ in accumulo-env.sh. Note 
the
@@ -799,10 +799,12 @@ consideration. There is no enforcement of these warnings 
via the API.
 
 Configuring the ClassLoader
 
-Accumulo builds its Java classpath in accumulo-env.sh.  After 
an Accumulo application has started, it will load classes from the locations
-specified in the deprecated general.classpaths
 property. Additionally, Accumulo will load classes from the locations 
specified in the
-general.dynamic.classpaths
 property and will monitor and reload them if they change. The reloading 
feature is useful during the development
-and testing of iterators as new or modified iterator classes can be deployed 
to Accumulo without having to restart the database.
+Accumulo builds its Java classpath in accumulo-env.sh. This 
classpath can be viewed by running accumulo 
classpath.
+
+After an Accumulo application has started, it will load classes from the 
locations specified in the deprecated general.classpaths
 property.
+Additionally, Accumulo will load classes from the locations specified in the 
general.dynamic.classpaths
 property and will monitor and reload
+them if they change. The reloading feature is useful during the development 
and testing of iterators as new or modified iterator classes can be
+deployed to Accumulo without having to restart the database.
 
 Accumulo also has an alternate configuration for the classloader which will 
allow it to load classes from remote locations. This mechanism
 uses Apache Commons VFS which enables locations such as http and hdfs to be 
used. This alternate configuration also uses the
@@ -810,8 +812,6 @@ uses Apache Commons VFS which enables locations such as 
http and hdfs to be used
 general.vfs.classpaths
 property instead of the general.dynamic.classpaths
 property. As in the default configuration, this alternate
 configuration will also monitor the vfs locations for changes and reload if 
necessary.
 
-The Accumulo classpath can be viewed in human readable format by running 
accumulo classpath -d.
-
 ClassLoader Contexts
 
 With the addition of the VFS based classloader, we introduced the notion of 
classloader contexts. A context is identified
diff --git a/docs/2.x/development/high_speed_ingest.html 
b/docs/2.x/development/high_speed_ingest.html
index 3210596..3f0159b 100644
--- a/docs/2.x/development/high_speed_ingest.html
+++ b/docs/2.x/development/high_speed_ingest.html
@@ -533,10 +533,9 @@ import file.
 MapReduce Ingest
 
 It is possible to efficiently write many mutations to Accumulo in parallel 
via a
-MapReduce job.  Typically, a MapReduce job will process data that lives in HDFS
+MapReduce job. Typically, a MapReduce job will process data that lives in HDFS
 and write mutations to Accumulo using https://static.javadoc.io/org.apache.accumulo/accumulo-hadoop-mapreduce/2.0.0-alpha-1/org/apache/accumulo/hadoop/mapreduce/AccumuloOutputFormat.html;>AccumuloOutputFormat.
 For more information
-on how use to use MapReduce with Accumulo, see the MapReduce documentation
-and the https://github.com/apache/accumulo-examples/b

[accumulo-website] branch master updated: Updates links to M/R examples

2019-01-15 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new 942bfcb  Updates links to M/R examples
942bfcb is described below

commit 942bfcb2f7df249cc89e167cc42041600a381105
Author: Mike Walch 
AuthorDate: Tue Jan 15 11:03:27 2019 -0500

Updates links to M/R examples
---
 _docs-2/development/high_speed_ingest.md |  6 ++
 _docs-2/development/mapreduce.md | 17 +++--
 _docs-2/getting-started/clients.md   |  1 -
 3 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/_docs-2/development/high_speed_ingest.md 
b/_docs-2/development/high_speed_ingest.md
index 46fee58..3147454 100644
--- a/_docs-2/development/high_speed_ingest.md
+++ b/_docs-2/development/high_speed_ingest.md
@@ -106,13 +106,11 @@ import file.
 ## MapReduce Ingest
 
 It is possible to efficiently write many mutations to Accumulo in parallel via 
a
-MapReduce job.  Typically, a MapReduce job will process data that lives in HDFS
+MapReduce job. Typically, a MapReduce job will process data that lives in HDFS
 and write mutations to Accumulo using [AccumuloOutputFormat]. For more 
information
-on how use to use MapReduce with Accumulo, see the [MapReduce 
documentation][mapred-docs]
-and the [MapReduce example code][mapred-code].
+on how to use MapReduce with Accumulo, see the [MapReduce 
documentation][mapred-docs].
 
 [bulk-example]: 
https://github.com/apache/accumulo-examples/blob/master/docs/bulkIngest.md
 [AccumuloOutputFormat]: {% jurl 
org.apache.accumulo.hadoop.mapreduce.AccumuloOutputFormat %}
 [AccumuloFileOutputFormat]: {% jurl 
org.apache.accumulo.hadoop.mapreduce.AccumuloFileOutputFormat %}
 [mapred-docs]: {% durl development/mapreduce %}
-[mapred-code]: 
https://github.com/apache/accumulo-examples/blob/master/docs/mapred.md
diff --git a/_docs-2/development/mapreduce.md b/_docs-2/development/mapreduce.md
index 3295b6f..adf0643 100644
--- a/_docs-2/development/mapreduce.md
+++ b/_docs-2/development/mapreduce.md
@@ -171,9 +171,22 @@ can then be bulk imported into Accumulo:
 .outputPath(new Path("hdfs://localhost:8020/myoutput/")).store(job);
 ```
 
-The [MapReduce example][mapred-example] contains a complete example of using 
MapReduce with Accumulo.
+## Example Code
+
+The [Accumulo Examples repo][examples-repo] has several MapReduce examples:
+
+* [wordcount] - Uses MapReduce and Accumulo to do a word count on text files
+* [regex] - Uses MapReduce and Accumulo to find data using regular expressions
+* [rowhash] - Uses MapReduce to read a table and write to a new column in the 
same table
+* [tablettofile] - Uses MapReduce to read a table and write one of its columns 
to a file in HDFS
+* [uniquecols] - Uses MapReduce to count unique columns in Accumulo
 
-[mapred-example]: 
https://github.com/apache/accumulo-examples/blob/master/docs/mapred.md
 [AccumuloInputFormat]: {% jurl 
org.apache.accumulo.hadoop.mapreduce.AccumuloInputFormat %}
 [AccumuloOutputFormat]: {% jurl 
org.apache.accumulo.hadoop.mapreduce.AccumuloOutputFormat %}
 [AccumuloFileOutputFormat]: {% jurl 
org.apache.accumulo.hadoop.mapreduce.AccumuloFileOutputFormat %}
+[examples-repo]: https://github.com/apache/accumulo-examples/
+[wordcount]: 
https://github.com/apache/accumulo-examples/blob/master/docs/wordcount.md
+[regex]: https://github.com/apache/accumulo-examples/blob/master/docs/regex.md
+[rowhash]: 
https://github.com/apache/accumulo-examples/blob/master/docs/rowhash.md
+[tablettofile]: 
https://github.com/apache/accumulo-examples/blob/master/docs/tablettofile.md
+[uniquecols]: 
https://github.com/apache/accumulo-examples/blob/master/docs/uniquecols.md
diff --git a/_docs-2/getting-started/clients.md 
b/_docs-2/getting-started/clients.md
index c73ec89..cff9086 100644
--- a/_docs-2/getting-started/clients.md
+++ b/_docs-2/getting-started/clients.md
@@ -366,7 +366,6 @@ This page covers Accumulo client basics.  Below are links 
to additional document
 [Iterators]: {% durl development/iterators %}
 [Proxy]: {% durl development/proxy %}
 [MapReduce]: {% durl development/mapreduce %}
-[mapred-example]: 
https://github.com/apache/accumulo-examples/blob/master/docs/mapred.md
 [batch]: https://github.com/apache/accumulo-examples/blob/master/docs/batch.md
 [reservations]: 
https://github.com/apache/accumulo-examples/blob/master/docs/reservations.md
 [isolation]: 
https://github.com/apache/accumulo-examples/blob/master/docs/isolation.md



[accumulo-examples] branch master updated: More updates to MapReduce (#32)

2019-01-15 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-examples.git


The following commit(s) were added to refs/heads/master by this push:
 new 26efc49  More updates to MapReduce (#32)
26efc49 is described below

commit 26efc4950978d1575d92f04d0c38042334f17ee0
Author: Mike Walch 
AuthorDate: Tue Jan 15 10:08:51 2019 -0500

More updates to MapReduce (#32)

* WordCount now supports using HDFS path for client props
* Updated docs and fixed arguments to MapReduce job
---
 README.md  |  16 +--
 docs/compactionStrategy.md |   8 +-
 docs/dirlist.md|  18 ++--
 docs/isolation.md  |   4 +-
 docs/mapred.md | 114 -
 docs/sample.md |   8 +-
 docs/uniquecols.md |  23 +
 docs/wordcount.md  |  72 +
 .../examples/mapreduce/TokenFileWordCount.java | 107 ---
 .../accumulo/examples/mapreduce/WordCount.java |  12 ++-
 .../accumulo/examples/mapreduce/MapReduceIT.java   |   2 +-
 11 files changed, 134 insertions(+), 250 deletions(-)

diff --git a/README.md b/README.md
index 3a8ff8f..77c91bc 100644
--- a/README.md
+++ b/README.md
@@ -24,7 +24,7 @@ Follow the steps below to run the Accumulo examples:
 
 1. Clone this repository
 
-  git clone https://github.com/apache/accumulo-examples.git
+git clone https://github.com/apache/accumulo-examples.git
 
2. Follow [Accumulo's quickstart][quickstart] to install and run an Accumulo instance.
Accumulo has an [accumulo-client.properties] in `conf/` that must be configured as
@@ -34,13 +34,13 @@ Follow the steps below to run the Accumulo examples:
are set in your shell, you may be able to skip this step. Make sure `ACCUMULO_CLIENT_PROPS` is
set to the location of your [accumulo-client.properties].
 
-  cp conf/env.sh.example conf/env.sh
-  vim conf/env.sh
+cp conf/env.sh.example conf/env.sh
+vim conf/env.sh
 
3. Build the examples repo and copy the examples jar to Accumulo's `lib/ext` directory:
 
-  ./bin/build
-  cp target/accumulo-examples.jar /path/to/accumulo/lib/ext/
+./bin/build
+cp target/accumulo-examples.jar /path/to/accumulo/lib/ext/
 
4. Each Accumulo example has its own documentation and instructions for running the example which
are linked to below.
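The reason step 2 says pre-set shell variables may let you skip editing `conf/env.sh` is that the example file relies on the shell's `${VAR:-default}` expansion: an already-exported variable wins over the fallback. A minimal standalone sketch of that behavior (the `/opt/...` path is hypothetical):

```shell
# A variable exported before env.sh is sourced takes precedence over
# the placeholder fallback written in the example file.
unset ACCUMULO_CLIENT_PROPS
echo "props=${ACCUMULO_CLIENT_PROPS:-/path/to/accumulo-client.properties}"

export ACCUMULO_CLIENT_PROPS=/opt/accumulo/conf/accumulo-client.properties
echo "props=${ACCUMULO_CLIENT_PROPS:-/path/to/accumulo-client.properties}"
```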
@@ -76,7 +76,6 @@ Each example below highlights a feature of Apache Accumulo.
| [filter] | Using the AgeOffFilter to remove records more than 30 seconds old. |
| [helloworld] | Inserting records both inside map/reduce jobs and outside. And reading records between two rows. |
| [isolation] | Using the isolated scanner to ensure partial changes are not seen. |
-| [mapred] | Using MapReduce to read from and write to Accumulo tables. |
| [maxmutation] | Limiting mutation size to avoid running out of memory. |
| [regex] | Using MapReduce and Accumulo to find data using regular expressions. |
| [reservations] | Using conditional mutations to implement simple reservation system. |
@@ -86,7 +85,9 @@ Each example below highlights a feature of Apache Accumulo.
| [shard] | Using the intersecting iterator with a term index partitioned by document. |
| [tabletofile] | Using MapReduce to read a table and write one of its columns to a file in HDFS. |
| [terasort] | Generating random data and sorting it using Accumulo. |
+| [uniquecols] | Use MapReduce to count unique columns in Accumulo |
| [visibility] | Using visibilities (or combinations of authorizations). Also shows user permissions. |
+| [wordcount] | Use MapReduce and Accumulo to do a word count on text files |
 
 ## Release Testing
 
@@ -112,7 +113,6 @@ This repository can be used to test Accumulo release candidates.  See
 [filter]: docs/filter.md
 [helloworld]: docs/helloworld.md
 [isolation]: docs/isolation.md
-[mapred]: docs/mapred.md
 [maxmutation]: docs/maxmutation.md
 [regex]: docs/regex.md
 [reservations]: docs/reservations.md
@@ -122,6 +122,8 @@ This repository can be used to test Accumulo release candidates.  See
 [shard]: docs/shard.md
 [tabletofile]: docs/tabletofile.md
 [terasort]: docs/terasort.md
+[uniquecols]: docs/uniquecols.md
 [visibility]: docs/visibility.md
+[wordcount]: docs/wordcount.md
 [ti]: https://travis-ci.org/apache/accumulo-examples.svg?branch=master
 [tl]: https://travis-ci.org/apache/accumulo-examples
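The new [wordcount] example distributes a word count across MapReduce and Accumulo; for intuition only, the same map/shuffle/count steps can be sketched locally with standard tools (this is an illustration, not the example's code):

```shell
# Local word count over sample text: split into one word per line (map),
# group identical words (shuffle/sort), count each group, order by count.
printf 'a b a\nb a\n' | tr ' ' '\n' | sort | uniq -c | sort -rn
```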
diff --git a/docs/compactionStrategy.md b/docs/compactionStrategy.md
index a7c96d5..6b5bebc 100644
--- a/docs/compactionStrategy.md
+++ b/docs/compactionStrategy.md
@@ -44,13 +44,13 @@ The commands below will configure the TwoTierCompactionStrategy to use gz compression
 
 Generate some data and files in order

[accumulo] branch master updated: Fix #883 Specify client props as HDFS path in new M/R API (#894)

2019-01-11 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 44b4af2  Fix #883 Specify client props as HDFS path in new M/R API (#894)
44b4af2 is described below

commit 44b4af2c5243f7c123ea22e364b70158512a0643
Author: Mike Walch 
AuthorDate: Fri Jan 11 18:01:07 2019 -0500

Fix #883 Specify client props as HDFS path in new M/R API (#894)

* Also cleaned up new M/R API by removing unnecessary methods
  and using properties in place of ClientInfo
---
 .../hadoop/mapred/AccumuloOutputFormat.java|  12 +-
 .../hadoop/mapreduce/AccumuloOutputFormat.java |  12 +-
 .../hadoop/mapreduce/InputFormatBuilder.java   |  16 +-
 .../hadoop/mapreduce/OutputFormatBuilder.java  |  14 +-
 .../hadoopImpl/mapred/AccumuloRecordReader.java|   4 +-
 .../hadoopImpl/mapred/AccumuloRecordWriter.java|   4 +-
 .../hadoopImpl/mapreduce/AccumuloRecordReader.java |  11 +-
 .../hadoopImpl/mapreduce/AccumuloRecordWriter.java |   4 +-
 .../mapreduce/InputFormatBuilderImpl.java  |  42 ++---
 .../mapreduce/OutputFormatBuilderImpl.java |  23 ++-
 .../hadoopImpl/mapreduce/lib/ConfiguratorBase.java | 173 +++--
 .../mapreduce/lib/InputConfigurator.java   |   8 +-
 .../mapreduce/lib/OutputConfigurator.java  |   4 +-
 .../hadoop/its/mapred/AccumuloOutputFormatIT.java  |   1 -
 .../mapreduce/lib/ConfiguratorBaseTest.java|  43 ++---
 15 files changed, 116 insertions(+), 255 deletions(-)

diff --git a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapred/AccumuloOutputFormat.java b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapred/AccumuloOutputFormat.java
index 2386081..cce542d 100644
--- a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapred/AccumuloOutputFormat.java
+++ b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapred/AccumuloOutputFormat.java
@@ -17,13 +17,14 @@
 package org.apache.accumulo.hadoop.mapred;
 
 import java.io.IOException;
+import java.util.Properties;
 
 import org.apache.accumulo.core.client.Accumulo;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.core.clientImpl.ClientInfo;
+import org.apache.accumulo.core.conf.ClientProperty;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.hadoop.mapreduce.OutputFormatBuilder;
 import org.apache.accumulo.hadoopImpl.mapred.AccumuloRecordWriter;
@@ -46,11 +47,10 @@ public class AccumuloOutputFormat implements OutputFormat<Text,Mutation> {
 
   @Override
   public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
-ClientInfo clientInfo = OutputConfigurator.getClientInfo(CLASS, job);
-String principal = clientInfo.getPrincipal();
-AuthenticationToken token = clientInfo.getAuthenticationToken();
-try (AccumuloClient c = Accumulo.newClient().from(clientInfo.getProperties()).build()) {
-  if (!c.securityOperations().authenticateUser(principal, token))
+Properties clientProps = OutputConfigurator.getClientProperties(CLASS, job);
+AuthenticationToken token = ClientProperty.getAuthenticationToken(clientProps);
+try (AccumuloClient c = Accumulo.newClient().from(clientProps).build()) {
+  if (!c.securityOperations().authenticateUser(c.whoami(), token))
 throw new IOException("Unable to authenticate user");
 } catch (AccumuloException | AccumuloSecurityException e) {
   throw new IOException(e);
diff --git a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/AccumuloOutputFormat.java b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/AccumuloOutputFormat.java
index d85b2c8..4c84211 100644
--- a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/AccumuloOutputFormat.java
+++ b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/AccumuloOutputFormat.java
@@ -17,13 +17,14 @@
 package org.apache.accumulo.hadoop.mapreduce;
 
 import java.io.IOException;
+import java.util.Properties;
 
 import org.apache.accumulo.core.client.Accumulo;
 import org.apache.accumulo.core.client.AccumuloClient;
 import org.apache.accumulo.core.client.AccumuloException;
 import org.apache.accumulo.core.client.AccumuloSecurityException;
 import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
-import org.apache.accumulo.core.clientImpl.ClientInfo;
+import org.apache.accumulo.core.conf.ClientProperty;
 import org.apache.accumulo.core.data.Mutation;
 import org.apache.accumulo.hadoopImpl.mapreduce.AccumuloRecordWrite

[accumulo-testing] branch master updated: Updates for starting MapReduce jobs from randomwalk (#49)

2019-01-10 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-testing.git


The following commit(s) were added to refs/heads/master by this push:
 new a324da8  Updates for starting MapReduce jobs from randomwalk (#49)
a324da8 is described below

commit a324da87f21e1df4f12c7cf0ed4a94cb7c5dc83c
Author: Mike Walch 
AuthorDate: Thu Jan 10 12:38:56 2019 -0500

Updates for starting MapReduce jobs from randomwalk (#49)

* Set additional MapReduce configuration and Hadoop username
* These settings work if running randomwalk on user machine
  or in Docker
---
 Dockerfile | 5 +
 README.md  | 5 +++--
 bin/rwalk  | 2 +-
 conf/env.sh.example| 2 --
 src/main/docker/docker-entry   | 5 +
 src/main/java/org/apache/accumulo/testing/TestEnv.java | 7 +++
 6 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/Dockerfile b/Dockerfile
index 4038764..d543d8e 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -15,6 +15,11 @@
 
 FROM centos:7
 
+ARG HADOOP_HOME
+ARG HADOOP_USER_NAME
+ENV HADOOP_HOME ${HADOOP_HOME}
+ENV HADOOP_USER_NAME ${HADOOP_USER_NAME:-hadoop}
+
 RUN yum install -y java-1.8.0-openjdk-devel
 ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk
 
diff --git a/README.md b/README.md
index d659f8d..1a758a1 100644
--- a/README.md
+++ b/README.md
@@ -50,10 +50,11 @@ run in Docker:
 * `conf/accumulo-testing.properties` - Copy this from the example file and configure it
 * `target/accumulo-testing-2.0.0-SNAPSHOT-shaded.jar` - Can be created using `./bin/build`
 
-   Run the following command to create the image:
+   Run the following command to create the image. `HADOOP_HOME` should be where Hadoop is installed on your cluster.
+   `HADOOP_USER_NAME` should match the user running Hadoop on your cluster.
 
```
-   docker build -t accumulo-testing .
+   docker build --build-arg HADOOP_HOME=$HADOOP_HOME --build-arg HADOOP_USER_NAME=`whoami` -t accumulo-testing .
```
 
 2. The `accumulo-testing` image can run a single command:
diff --git a/bin/rwalk b/bin/rwalk
index 6bd7299..30cf0a7 100755
--- a/bin/rwalk
+++ b/bin/rwalk
@@ -34,7 +34,7 @@ else
   . "$at_home"/conf/env.sh.example
 fi
 
-export CLASSPATH="$TEST_JAR_PATH:$HADOOP_API_JAR:$HADOOP_RUNTIME_JAR:$HADOOP_CONF_DIR:$CLASSPATH"
+export CLASSPATH="$TEST_JAR_PATH:$HADOOP_API_JAR:$HADOOP_RUNTIME_JAR:$CLASSPATH"
 
 randomwalk_main="org.apache.accumulo.testing.randomwalk.Framework"
 
diff --git a/conf/env.sh.example b/conf/env.sh.example
index a48451c..bd372c3 100644
--- a/conf/env.sh.example
+++ b/conf/env.sh.example
@@ -18,8 +18,6 @@
 
 ## Hadoop installation
 export HADOOP_HOME="${HADOOP_HOME:-/path/to/hadoop}"
-## Hadoop configuration
-export HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-${HADOOP_HOME}/etc/hadoop}"
 ## Accumulo installation
 export ACCUMULO_HOME="${ACCUMULO_HOME:-/path/to/accumulo}"
 ## Path to Accumulo client properties
diff --git a/src/main/docker/docker-entry b/src/main/docker/docker-entry
index 5a98525..6f23e00 100755
--- a/src/main/docker/docker-entry
+++ b/src/main/docker/docker-entry
@@ -36,6 +36,11 @@ if [ -z "$1" ]; then
   exit 1
 fi
 
+if [ -z "$HADOOP_HOME" ]; then
+  echo "HADOOP_HOME must be set!"
+  exit 1
+fi
+
 case "$1" in
   cingest|rwalk)
 "${at_home}"/bin/"$1" "${@:2}"
diff --git a/src/main/java/org/apache/accumulo/testing/TestEnv.java b/src/main/java/org/apache/accumulo/testing/TestEnv.java
index e5ffa1a..601db3c 100644
--- a/src/main/java/org/apache/accumulo/testing/TestEnv.java
+++ b/src/main/java/org/apache/accumulo/testing/TestEnv.java
@@ -97,6 +97,13 @@ public class TestEnv implements AutoCloseable {
  hadoopConfig.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
  hadoopConfig.set("mapreduce.framework.name", "yarn");
  hadoopConfig.set("yarn.resourcemanager.hostname", getYarnResourceManager());
+  String hadoopHome = System.getenv("HADOOP_HOME");
+  if (hadoopHome == null) {
+throw new IllegalArgumentException("HADOOP_HOME must be set in env");
+  }
+  hadoopConfig.set("yarn.app.mapreduce.am.env", "HADOOP_MAPRED_HOME=" + hadoopHome);
+  hadoopConfig.set("mapreduce.map.env", "HADOOP_MAPRED_HOME=" + hadoopHome);
+  hadoopConfig.set("mapreduce.reduce.env", "HADOOP_MAPRED_HOME=" + hadoopHome);
 }
 return hadoopConfig;
   }
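The guard added to docker-entry and the `IllegalArgumentException` added to TestEnv enforce the same invariant: fail fast when `HADOOP_HOME` is missing instead of letting a YARN job die later. Reduced to a standalone shell sketch (the path and function name are illustrative, not project code):

```shell
# Fail fast when HADOOP_HOME is unset; otherwise print the derived
# MapReduce setting that TestEnv passes to yarn.app.mapreduce.am.env.
require_hadoop_home() {
  if [ -z "$HADOOP_HOME" ]; then
    echo "HADOOP_HOME must be set!" >&2
    return 1
  fi
  echo "HADOOP_MAPRED_HOME=$HADOOP_HOME"
}

HADOOP_HOME=/opt/hadoop
require_hadoop_home
```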



[accumulo-website] branch master updated: Updated script docs from changes in master (#143)

2019-01-10 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new dff1a2e  Updated script docs from changes in master (#143)
dff1a2e is described below

commit dff1a2ec0d5f6407c51dc5541538b50a5301ff2d
Author: Mike Walch 
AuthorDate: Thu Jan 10 11:28:52 2019 -0500

Updated script docs from changes in master (#143)

* Removed accumulo-util hadoop-jar
* Removed use of 'accumulo classpath -d'
---
 _docs-2/administration/in-depth-install.md | 16 
 _docs-2/getting-started/clients.md | 12 
 _docs-2/getting-started/quickstart.md  |  2 +-
 3 files changed, 9 insertions(+), 21 deletions(-)

diff --git a/_docs-2/administration/in-depth-install.md b/_docs-2/administration/in-depth-install.md
index 524edd3..72c7f7d 100644
--- a/_docs-2/administration/in-depth-install.md
+++ b/_docs-2/administration/in-depth-install.md
@@ -115,8 +115,8 @@ and specify the following:
 Accumulo uses `HADOOP_HOME` and `ZOOKEEPER_HOME` to locate Hadoop and Zookeeper jars
 and add them to the `CLASSPATH` variable. If you are running a vendor-specific release of Hadoop
 or Zookeeper, you may need to change how your `CLASSPATH` is built in [accumulo-env.sh]. If
-Accumulo has problems later on finding jars, run `accumulo classpath -d` to debug and print
-Accumulo's classpath.
+Accumulo has problems later on finding jars, run `accumulo classpath` to print Accumulo's
+classpath.
 
You may want to change the default memory settings for Accumulo's TabletServer which are
set in the `JAVA_OPTS` settings for 'tservers' in [accumulo-env.sh]. Note the
@@ -312,10 +312,12 @@ consideration. There is no enforcement of these warnings via the API.
 
 ### Configuring the ClassLoader
 
-Accumulo builds its Java classpath in [accumulo-env.sh].  After an Accumulo application has started, it will load classes from the locations
-specified in the deprecated [general.classpaths] property. Additionally, Accumulo will load classes from the locations specified in the
-[general.dynamic.classpaths] property and will monitor and reload them if they change. The reloading feature is useful during the development
-and testing of iterators as new or modified iterator classes can be deployed to Accumulo without having to restart the database.
+Accumulo builds its Java classpath in [accumulo-env.sh]. This classpath can be viewed by running `accumulo classpath`.
+
+After an Accumulo application has started, it will load classes from the locations specified in the deprecated [general.classpaths] property.
+Additionally, Accumulo will load classes from the locations specified in the [general.dynamic.classpaths] property and will monitor and reload
+them if they change. The reloading feature is useful during the development and testing of iterators as new or modified iterator classes can be
+deployed to Accumulo without having to restart the database.
 
Accumulo also has an alternate configuration for the classloader which will allow it to load classes from remote locations. This mechanism
uses Apache Commons VFS which enables locations such as http and hdfs to be used. This alternate configuration also uses the
@@ -323,8 +325,6 @@ uses Apache Commons VFS which enables locations such as http and hdfs to be used
 [general.vfs.classpaths] property instead of the [general.dynamic.classpaths] property. As in the default configuration, this alternate
 configuration will also monitor the vfs locations for changes and reload if necessary.
 
-The Accumulo classpath can be viewed in human readable format by running `accumulo classpath -d`.
-
 # ClassLoader Contexts
 
With the addition of the VFS based classloader, we introduced the notion of classloader contexts. A context is identified
diff --git a/_docs-2/getting-started/clients.md b/_docs-2/getting-started/clients.md
index bda0c66..c73ec89 100644
--- a/_docs-2/getting-started/clients.md
+++ b/_docs-2/getting-started/clients.md
@@ -305,7 +305,6 @@ of the different ways to execute client code.
 * build and execute an uber jar
 * add `accumulo classpath` to your Java classpath
 * use the `accumulo` command
-* use the `accumulo-util hadoop-jar` command
 
 ### Build and execute an uber jar
 
@@ -321,11 +320,6 @@ to include all of Accumulo's dependencies on your classpath:
 
 java -classpath /path/to/my.jar:/path/to/dep.jar:$(accumulo classpath) com.my.Main arg1 arg2
 
-If you would like to review which jars are included, the `accumulo classpath` command can
-output a more human readable format using the `-d` option which enables debugging:
-
-accumulo classpath -d
-
 ### Use the accumulo command
 
Another option for running your code is to use the Accumulo script which can execute a
@@ -341,12 +335,6 @@ the accumulo command.
 
 export CLASSPATH
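With the `-d` (human-readable) option removed from `accumulo classpath`, a plain-shell substitute for reviewing jars one per line is to split the output on `:`. Sketched here with a sample value rather than a live installation (on a real install the equivalent would be piping `accumulo classpath` through the same `tr`):

```shell
# Split a classpath on ':' so each jar appears on its own line.
CLASSPATH="/opt/accumulo/lib/accumulo-core.jar:/opt/hadoop/share/hadoop-client.jar"
echo "$CLASSPATH" | tr ':' '\n'
```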

[accumulo] branch master updated: Fix #885 Simplify 'accumulo classpath' command (#887)

2019-01-10 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 3a1dc3d  Fix #885 Simplify 'accumulo classpath' command (#887)
3a1dc3d is described below

commit 3a1dc3d36a3e51cb3b2f970211e3f3b0b8ed18b3
Author: Mike Walch 
AuthorDate: Thu Jan 10 10:53:44 2019 -0500

Fix #885 Simplify 'accumulo classpath' command (#887)

* Remove Java implementation and echo CLASSPATH variable
---
 assemble/bin/accumulo  |  5 ++
 .../org/apache/accumulo/core/util/Classpath.java   | 58 --
 .../main/java/org/apache/accumulo/start/Main.java  |  1 +
 .../apache/accumulo/test/start/KeywordStartIT.java |  2 -
 4 files changed, 6 insertions(+), 60 deletions(-)

diff --git a/assemble/bin/accumulo b/assemble/bin/accumulo
index 77665e2..895adac 100755
--- a/assemble/bin/accumulo
+++ b/assemble/bin/accumulo
@@ -51,6 +51,11 @@ function main() {
   mkdir -p "${ACCUMULO_LOG_DIR}" 2>/dev/null
   : "${MALLOC_ARENA_MAX:?"variable is not set in accumulo-env.sh"}"
 
+  if [[ $cmd == "classpath" ]]; then
+echo "$CLASSPATH"
+exit 0
+  fi
+
   if [[ -x "$JAVA_HOME/bin/java" ]]; then
 JAVA="$JAVA_HOME/bin/java"
   else
diff --git a/core/src/main/java/org/apache/accumulo/core/util/Classpath.java b/core/src/main/java/org/apache/accumulo/core/util/Classpath.java
deleted file mode 100644
index 02d24d5..0000000
--- a/core/src/main/java/org/apache/accumulo/core/util/Classpath.java
+++ /dev/null
@@ -1,58 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.accumulo.core.util;
-
-import org.apache.accumulo.start.Main;
-import org.apache.accumulo.start.spi.KeywordExecutable;
-
-import com.beust.jcommander.Parameter;
-import com.google.auto.service.AutoService;
-
-@AutoService(KeywordExecutable.class)
-public class Classpath implements KeywordExecutable {
-
-  static class Opts extends org.apache.accumulo.core.cli.Help {
-
-@Parameter(names = {"-d", "--debug"}, description = "Turns on debugging")
-public boolean debug = false;
-  }
-
-  @Override
-  public String keyword() {
-return "classpath";
-  }
-
-  @Override
-  public UsageGroup usageGroup() {
-return UsageGroup.CORE;
-  }
-
-  @Override
-  public String description() {
-return "Prints Accumulo classpath";
-  }
-
-  @Override
-  public void execute(final String[] args) throws Exception {
-
-Opts opts = new Opts();
-opts.parseArgs("accumulo classpath", args);
-
-Main.getVFSClassLoader().getMethod("printClassPath", boolean.class)
-.invoke(Main.getVFSClassLoader(), opts.debug);
-  }
-}
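The launcher change at the top of this commit replaces the deleted Java keyword class with a shell short-circuit: the `classpath` keyword is answered by echoing the `CLASSPATH` the script has already assembled, before any JVM starts. A reduced sketch of that dispatch pattern (sample values, not the real `assemble/bin/accumulo` script):

```shell
# Answer the 'classpath' keyword directly; fall through to the JVM launch
# path (stubbed here as an echo) for every other keyword.
run_keyword() {
  if [ "$1" = "classpath" ]; then
    echo "$CLASSPATH"
    return 0
  fi
  echo "would launch java for: $1"
}

CLASSPATH="/opt/accumulo/lib/accumulo-core.jar"
run_keyword classpath
```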
diff --git a/start/src/main/java/org/apache/accumulo/start/Main.java b/start/src/main/java/org/apache/accumulo/start/Main.java
index fa3d1dc..3b205e3 100644
--- a/start/src/main/java/org/apache/accumulo/start/Main.java
+++ b/start/src/main/java/org/apache/accumulo/start/Main.java
@@ -218,6 +218,7 @@ public class Main {
 printCommands(executables, UsageGroup.CORE);
 
 System.out.println(
+"  classpath  Prints Accumulo classpath\n" +
"  <main class> args  Runs Java <main class> located on Accumulo classpath");
 
 System.out.println("\nProcess Commands:");
diff --git a/test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java b/test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java
index 4bfa41e..1ca2c9d 100644
--- a/test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java
@@ -33,7 +33,6 @@ import java.util.Map.Entry;
 import java.util.TreeMap;
 
 import org.apache.accumulo.core.file.rfile.PrintInfo;
-import org.apache.accumulo.core.util.Classpath;
 import org.apache.accumulo.core.util.CreateToken;
 import org.apache.accumulo.core.util.Help;
 import org.apache.accumulo.core.util.Version;
@@ -103,

[accumulo-website] branch master updated: More updates to MapReduce docs (#142)

2019-01-07 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
 new d70ec3b  More updates to MapReduce docs (#142)
d70ec3b is described below

commit d70ec3bbe3bd16c9b644bd59f74a453ef17a17e4
Author: Mike Walch 
AuthorDate: Mon Jan 7 14:22:04 2019 -0500

More updates to MapReduce docs (#142)
---
 _docs-2/administration/upgrading.md |  8 -
 _docs-2/development/mapreduce.md| 72 +
 2 files changed, 72 insertions(+), 8 deletions(-)

diff --git a/_docs-2/administration/upgrading.md b/_docs-2/administration/upgrading.md
index 18c5714..f054509 100644
--- a/_docs-2/administration/upgrading.md
+++ b/_docs-2/administration/upgrading.md
@@ -42,7 +42,7 @@ Below are some changes in 2.0 that you should be aware of:
 - `log4j-service.properties` for all Accumulo services (except monitor)
 - `log4j-monitor.properties` for Accumulo monitor
 - `log4j.properties` for Accumulo clients and commands
-* [New Hadoop configuration is required]({% durl development/mapreduce#configuration %}) when reading or writing to Accumulo using MapReduce.
+* MapReduce jobs that read/write from Accumulo [must configure their dependencies differently]({% durl development/mapreduce#configure-dependencies-for-your-mapreduce-job %}).
 * Run the command `accumulo shell` to access the shell using configuration in `conf/accumulo-client.properties`
 
When your Accumulo 2.0 installation is properly configured, stop Accumulo 1.8/9 and start Accumulo 2.0:
@@ -78,6 +78,12 @@ Below is a list of recommended client API changes:
 * The API for [creating Accumulo clients]({% durl getting-started/clients#creating-an-accumulo-client %}) has changed in 2.0.
   * The old API using [ZooKeeperInstance], [Connector], [Instance], and [ClientConfiguration] has been deprecated.
   * [Connector] objects can be created from an [AccumuloClient] object using [Connector.from()]
+* Accumulo's [MapReduce API]({% durl development/mapreduce %}) has changed in 2.0.
+  * A new API has been introduced in the `org.apache.accumulo.hadoop` package of the `accumulo-hadoop-mapreduce` jar.
+  * The old API in the `org.apache.accumulo.core.client` package of the `accumulo-core` jar has been deprecated and will
+eventually be removed.
+  * For both the old and new API, you must [configure dependencies differently]({% durl development/mapreduce#configure-dependencies-for-your-mapreduce-job %})
+when creating your MapReduce job.
 
 ## Upgrading from 1.7 to 1.8
 
diff --git a/_docs-2/development/mapreduce.md b/_docs-2/development/mapreduce.md
index 7687ae8..3295b6f 100644
--- a/_docs-2/development/mapreduce.md
+++ b/_docs-2/development/mapreduce.md
@@ -8,10 +8,42 @@ Accumulo tables can be used as the source and destination of MapReduce jobs.
 
 ## General MapReduce configuration
 
-Since 2.0.0, Accumulo no longer has the same dependency versions (i.e. Guava, etc.) as Hadoop.
-When launching a MapReduce job that reads or writes to Accumulo, you should build a shaded jar
-with all of your dependencies and complete the following steps so YARN only includes Hadoop code
-(and not all of Hadoop dependencies) when running your MapReduce job:
+### Add Accumulo's MapReduce API to your dependencies
+
+If you are using Maven, add the following dependency to your `pom.xml` to use Accumulo's MapReduce API:
+
+```xml
+<dependency>
+  <groupId>org.apache.accumulo</groupId>
+  <artifactId>accumulo-hadoop-mapreduce</artifactId>
+  <version>{{ page.latest_release }}</version>
+</dependency>
+```
+
+The MapReduce API consists of the following classes:
+
+* If using Hadoop's **mapreduce** API:
+  * {% jlink -f org.apache.accumulo.hadoop.mapreduce.AccumuloInputFormat %}
+  * {% jlink -f org.apache.accumulo.hadoop.mapreduce.AccumuloOutputFormat %}
+  * {% jlink -f org.apache.accumulo.hadoop.mapreduce.AccumuloFileOutputFormat %}
+* If using Hadoop's **mapred** API:
+  * {% jlink -f org.apache.accumulo.hadoop.mapred.AccumuloInputFormat %}
+  * {% jlink -f org.apache.accumulo.hadoop.mapred.AccumuloOutputFormat %}
+  * {% jlink -f org.apache.accumulo.hadoop.mapred.AccumuloFileOutputFormat %}
+
+Before 2.0, the MapReduce API resided in the `org.apache.accumulo.core.client` package of the `accumulo-core` jar.
+While this old API still exists and can be used, it has been deprecated and will be removed eventually.
+
+### Configure dependencies for your MapReduce job
+
+Before 2.0, Accumulo used the same versions for dependencies (such as Guava) as Hadoop. This allowed
+MapReduce jobs to run with both Accumulo's & Hadoop's dependencies on the classpath.
+
+Since 2.0, Accumulo no longer has the same versions for dependencies as Hadoop. While this allows
+Accumulo to update its dependencies more frequently, it can cause problems if both Accumulo's &
+Hadoop's dependencies are on the classpath of the MapReduce job. When l
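The paragraph above (cut off by the archive) is making the shaded-jar point: bundle your own dependencies so only Hadoop's runtime comes from YARN. For the shape of that build step only, here is a hypothetical minimal `maven-shade-plugin` stanza of the kind such a setup assumes; the plugin version is illustrative and not taken from this commit:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.1</version>
  <executions>
    <execution>
      <!-- Bundle the job's dependencies into one jar at package time. -->
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```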

[accumulo] branch master updated: Fix MapReduce bug (#874)

2019-01-07 Thread mwalch
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new 0adec43  Fix MapReduce bug (#874)
0adec43 is described below

commit 0adec43da0d3652f4642c5fbcd16537fccf4a0aa
Author: Mike Walch 
AuthorDate: Mon Jan 7 14:21:11 2019 -0500

Fix MapReduce bug (#874)

* AccumuloOutputFormat is using wrong getClientInfo method
---
 .../java/org/apache/accumulo/hadoop/mapred/AccumuloOutputFormat.java  | 4 +---
 .../org/apache/accumulo/hadoop/mapreduce/AccumuloOutputFormat.java| 4 +---
 .../accumulo/hadoopImpl/mapreduce/AccumuloOutputFormatImpl.java   | 2 +-
 .../apache/accumulo/cluster/standalone/StandaloneClusterControl.java  | 2 --
 4 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapred/AccumuloOutputFormat.java b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapred/AccumuloOutputFormat.java
index f380c05..d29ed16 100644
--- a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapred/AccumuloOutputFormat.java
+++ b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapred/AccumuloOutputFormat.java
@@ -16,8 +16,6 @@
  */
 package org.apache.accumulo.hadoop.mapred;
 
-import static org.apache.accumulo.hadoopImpl.mapred.AccumuloOutputFormatImpl.getClientInfo;
-
 import java.io.IOException;
 
 import org.apache.accumulo.core.client.Accumulo;
@@ -46,7 +44,7 @@ public class AccumuloOutputFormat implements OutputFormat<Text,Mutation> {
 
   @Override
   public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
-ClientInfo clientInfo = getClientInfo(job);
+ClientInfo clientInfo = AccumuloOutputFormatImpl.getClientInfo(job);
 String principal = clientInfo.getPrincipal();
 AuthenticationToken token = clientInfo.getAuthenticationToken();
 try (AccumuloClient c = Accumulo.newClient().from(clientInfo.getProperties()).build()) {
diff --git a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/AccumuloOutputFormat.java b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/AccumuloOutputFormat.java
index 601b671..a33e93f 100644
--- a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/AccumuloOutputFormat.java
+++ b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoop/mapreduce/AccumuloOutputFormat.java
@@ -16,8 +16,6 @@
  */
 package org.apache.accumulo.hadoop.mapreduce;
 
-import static org.apache.accumulo.hadoopImpl.mapreduce.AbstractInputFormat.getClientInfo;
-
 import java.io.IOException;
 
 import org.apache.accumulo.core.client.Accumulo;
@@ -57,7 +55,7 @@ public class AccumuloOutputFormat extends OutputFormat<Text,Mutation> {
 
   @Override
   public void checkOutputSpecs(JobContext job) throws IOException {
-ClientInfo clientInfo = getClientInfo(job);
+ClientInfo clientInfo = AccumuloOutputFormatImpl.getClientInfo(job);
 String principal = clientInfo.getPrincipal();
 AuthenticationToken token = clientInfo.getAuthenticationToken();
 try (AccumuloClient c = Accumulo.newClient().from(clientInfo.getProperties()).build()) {
diff --git a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/AccumuloOutputFormatImpl.java b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/AccumuloOutputFormatImpl.java
index c1aa333..8750d74 100644
--- a/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/AccumuloOutputFormatImpl.java
+++ b/hadoop-mapreduce/src/main/java/org/apache/accumulo/hadoopImpl/mapreduce/AccumuloOutputFormatImpl.java
@@ -88,7 +88,7 @@ public class AccumuloOutputFormatImpl {
*
* @since 2.0.0
*/
-  protected static ClientInfo getClientInfo(JobContext context) {
+  public static ClientInfo getClientInfo(JobContext context) {
 return OutputConfigurator.getClientInfo(CLASS, context.getConfiguration());
   }
 
diff --git a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java b/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java
index 13c33f7..8c43f6c 100644
--- a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java
+++ b/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java
@@ -23,8 +23,6 @@ import java.io.BufferedReader;
 import java.io.File;
 import java.io.FileReader;
 import java.io.IOException;
-import java.net.URL;
-import java.security.CodeSource;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;



[accumulo] branch master updated: Remove 'accumulo-util hadoop-jar' command (#872)

2019-01-05 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/master by this push:
 new c265ea5  Remove 'accumulo-util hadoop-jar' command (#872)
c265ea5 is described below

commit c265ea5b16171032419b164e809f5478f70bbba8
Author: Mike Walch 
AuthorDate: Sat Jan 5 17:54:17 2019 -0500

Remove 'accumulo-util hadoop-jar' command (#872)

* Command doesn't work well now that Accumulo does not match Hadoop's dependencies
* Users should create a shaded jar instead and submit it using Hadoop's 'yarn jar' command
* Command was used by StandaloneClusterControl, but only by a unit test
---
 assemble/bin/accumulo-util | 69 --
 .../standalone/StandaloneClusterControl.java   | 37 +---
 .../standalone/StandaloneClusterControlTest.java   | 49 ---
 3 files changed, 2 insertions(+), 153 deletions(-)

diff --git a/assemble/bin/accumulo-util b/assemble/bin/accumulo-util
index d243cd4..13eb802 100755
--- a/assemble/bin/accumulo-util
+++ b/assemble/bin/accumulo-util
@@ -22,7 +22,6 @@ Usage: accumulo-util <command> (<argument> ...)
 Commands:
   build-native      Builds Accumulo native libraries
   dump-zoo          Dumps data in ZooKeeper
-  hadoop-jar        Runs 'hadoop jar' command with Accumulo jars
   gen-monitor-cert  Generates Accumulo monitor certificate
   load-jars-hdfs    Loads Accumulo jars in lib/ to HDFS for VFS classloader
   
@@ -182,71 +181,6 @@ function load_jars_hdfs() {
   "$HADOOP" fs -rm "$SYSTEM_CONTEXT_HDFS_DIR/slf4j*.jar"  > /dev/null
 }
 
-function hadoop_jar() {
-  if [[ -x "$HADOOP_HOME/bin/hadoop" ]]; then
-    HADOOP="$HADOOP_HOME/bin/hadoop"
-  else
-    HADOOP=$(which hadoop)
-  fi
-  if [[ ! -x "$HADOOP" ]]; then
-    echo "Could not find 'hadoop' command. Please set hadoop on your PATH or set HADOOP_HOME"
-    exit 1
-  fi
-  if [[ -z "$ZOOKEEPER_HOME" ]]; then
-    echo "ZOOKEEPER_HOME must be set!"
-    exit 1
-  fi
-
-  ZOOKEEPER_CMD="ls -1 $ZOOKEEPER_HOME/zookeeper-[0-9]*[^csn].jar "
-  if [[ $(eval "$ZOOKEEPER_CMD" | wc -l) -ne 1 ]] ; then
-    echo "Not exactly one zookeeper jar in $ZOOKEEPER_HOME"
-    exit 1
-  fi
-  ZOOKEEPER_LIB=$(eval "$ZOOKEEPER_CMD")
-
-  CORE_LIB="${lib}/accumulo-core.jar"
-  THRIFT_LIB="${lib}/libthrift.jar"
-  JCOMMANDER_LIB="${lib}/jcommander.jar"
-  COMMONS_VFS_LIB="${lib}/commons-vfs2.jar"
-  GUAVA_LIB="${lib}/guava.jar"
-  HTRACE_LIB="${lib}/htrace-core.jar"
-
-  USERJARS=" "
-  for arg in "$@"; do
-    if [[ "$arg" != "-libjars" ]] && [[ -z "$TOOLJAR" ]]; then
-      TOOLJAR="$arg"
-      shift
-    elif [[ "$arg" != "-libjars" ]] && [[ -z "$CLASSNAME" ]]; then
-      CLASSNAME="$arg"
-      shift
-    elif [[ -z "$USERJARS" ]]; then
-      USERJARS=$(echo "$arg" | tr "," " ")
-      shift
-    elif [[ "$arg" = "-libjars" ]]; then
-      USERJARS=""
-      shift
-    else
-      break
-    fi
-  done
-
-  LIB_JARS="$THRIFT_LIB,$CORE_LIB,$ZOOKEEPER_LIB,$JCOMMANDER_LIB,$COMMONS_VFS_LIB,$GUAVA_LIB,$HTRACE_LIB"
-  H_JARS="$THRIFT_LIB:$CORE_LIB:$ZOOKEEPER_LIB:$JCOMMANDER_LIB:$COMMONS_VFS_LIB:$GUAVA_LIB:$HTRACE_LIB"
-
-  for jar in $USERJARS; do
-    LIB_JARS="$LIB_JARS,$jar"
-    H_JARS="$H_JARS:$jar"
-  done
-  export HADOOP_CLASSPATH="$H_JARS:$HADOOP_CLASSPATH"
-
-  if [[ -z "$CLASSNAME" || -z "$TOOLJAR" ]]; then
-    echo "Usage: accumulo-util hadoop-jar path/to/myTool.jar my.tool.class.Name [-libjars my1.jar,my2.jar]" 1>&2
-    exit 1
-  fi
-
-  exec "$HADOOP" jar "$TOOLJAR" "$CLASSNAME" -libjars "$LIB_JARS" "$@"
-}
-
 function main() {
   SOURCE="${BASH_SOURCE[0]}"
   while [ -h "${SOURCE}" ]; do
@@ -266,9 +200,6 @@ function main() {
     dump-zoo)
       "$bin"/accumulo org.apache.accumulo.server.util.DumpZookeeper "${@:2}"
       ;;
-    hadoop-jar)
-      hadoop_jar "${@:2}"
-      ;;
     gen-monitor-cert)
       gen_monitor_cert
       ;;
diff --git a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java b/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java
index c374e86..13c33f7 100644
--- a/minicluster/src/main/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControl.java
+++ b/minicluster/src/main/java/org/apac

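[Editor's note] The removed hadoop_jar() function above built the same jar list twice, because Hadoop's `-libjars` flag takes a comma-separated list while `HADOOP_CLASSPATH` takes a colon-separated one. A minimal runnable sketch of just that joining step (the jar names are placeholders, not the real contents of Accumulo's lib/ directory):

```shell
# Join the same placeholder jar list two ways, as the removed function did:
# commas for 'hadoop jar ... -libjars', colons for HADOOP_CLASSPATH.
LIB_JARS="libthrift.jar,accumulo-core.jar,zookeeper.jar"
H_JARS="libthrift.jar:accumulo-core.jar:zookeeper.jar"
for jar in user1.jar user2.jar; do
  LIB_JARS="$LIB_JARS,$jar"
  H_JARS="$H_JARS:$jar"
done
echo "$LIB_JARS"   # comma form, passed to -libjars
echo "$H_JARS"     # colon form, exported as HADOOP_CLASSPATH
```

The replacement recommended in the commit message sidesteps this bookkeeping entirely: shade the dependencies into a single jar and submit it with Hadoop's own `yarn jar` launcher.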
[accumulo-docker] branch master updated: Dockerfile updates (#7)

2019-01-04 Thread mwalch

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-docker.git


The following commit(s) were added to refs/heads/master by this push:
 new f0a1fa7  Dockerfile updates (#7)
f0a1fa7 is described below

commit f0a1fa76a037e89ec5974a6374d536f825e22af1
Author: Mike Walch 
AuthorDate: Fri Jan 4 16:25:49 2019 -0500

Dockerfile updates (#7)

* Simplified arguments
* Allow building images using provided Accumulo tarball
---
 Dockerfile | 23 ++-
 README.md  |  6 +-
 2 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/Dockerfile b/Dockerfile
index 98bba83..551046a 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -18,14 +18,13 @@ FROM centos:7
 RUN yum install -y java-1.8.0-openjdk-devel make gcc-c++ wget
 ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk
 
-ARG HADOOP_VERSION
-ARG HADOOP_USER_NAME
-ARG ZOOKEEPER_VERSION
+ARG ACCUMULO_VERSION=2.0.0-alpha-1
+ARG HADOOP_VERSION=3.1.1
+ARG ZOOKEEPER_VERSION=3.4.13
+ARG HADOOP_USER_NAME=accumulo
+ARG ACCUMULO_FILE=
 
-ENV HADOOP_VERSION ${HADOOP_VERSION:-3.1.1}
-ENV HADOOP_USER_NAME ${HADOOP_USER_NAME:-accumulo}
-ENV ZOOKEEPER_VERSION ${ZOOKEEPER_VERSION:-3.4.13}
-ENV ACCUMULO_VERSION 2.0.0-alpha-1
+ENV HADOOP_USER_NAME $HADOOP_USER_NAME
 
 ENV APACHE_DIST_URLS \
  https://www.apache.org/dyn/closer.cgi?action=download&filename= \
@@ -34,6 +33,8 @@ ENV APACHE_DIST_URLS \
   https://www.apache.org/dist/ \
   https://archive.apache.org/dist/
 
+COPY README.md $ACCUMULO_FILE /tmp/
+
 RUN set -eux; \
   download() { \
 local f="$1"; shift; \
@@ -49,9 +50,13 @@ RUN set -eux; \
 [ -n "$success" ]; \
   }; \
   \
-  download "accumulo.tar.gz" "accumulo/$ACCUMULO_VERSION/accumulo-$ACCUMULO_VERSION-bin.tar.gz"; \
   download "hadoop.tar.gz" "hadoop/core/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz"; \
-  download "zookeeper.tar.gz" "zookeeper/zookeeper-$ZOOKEEPER_VERSION/zookeeper-$ZOOKEEPER_VERSION.tar.gz"
+  download "zookeeper.tar.gz" "zookeeper/zookeeper-$ZOOKEEPER_VERSION/zookeeper-$ZOOKEEPER_VERSION.tar.gz"; \
+  if [ -z "$ACCUMULO_FILE" ]; then \
+    download "accumulo.tar.gz" "accumulo/$ACCUMULO_VERSION/accumulo-$ACCUMULO_VERSION-bin.tar.gz"; \
+  else \
+    cp "/tmp/$ACCUMULO_FILE" "accumulo.tar.gz"; \
+  fi
 
 RUN tar xzf accumulo.tar.gz -C /tmp/
 RUN tar xzf hadoop.tar.gz -C /tmp/
diff --git a/README.md b/README.md
index 7b0be22..9d34287 100644
--- a/README.md
+++ b/README.md
@@ -33,10 +33,14 @@ building an image:
 cd /path/to/accumulo-docker
 docker build -t accumulo .
 
-   Or build the Accumulo docker image with specific versions of Hadoop, Zookeeper, etc using the command below:
+   Or build the Accumulo docker image with specific released versions of Hadoop, Zookeeper, etc that will be downloaded from Apache using the command below:
 
     docker build --build-arg ZOOKEEPER_VERSION=3.4.8 --build-arg HADOOP_VERSION=2.7.0 -t accumulo .
 
+   Or build with an Accumulo tarball (located in same directory as Dockerfile) using the command below:
+
+    docker build --build-arg ACCUMULO_VERSION=2.0.0-SNAPSHOT --build-arg ACCUMULO_FILE=accumulo-2.0.0-SNAPSHOT-bin.tar.gz -t accumulo .
+
 ## Image basics
 
The entrypoint for the Accumulo docker image is the `accumulo` script. While the primary use


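[Editor's note] The Dockerfile change above makes the source of the Accumulo tarball conditional: an empty ACCUMULO_FILE means "download the released version", and a non-empty one means "use the local file that was copied into /tmp". A small runnable sketch of just that branch, where `download` is a stub standing in for the Dockerfile's real helper:

```shell
# Stub for the Dockerfile's download helper; it only reports what it would fetch.
download() { echo "download $2 -> $1"; }

# Mirror of the new conditional: prefer a local tarball when one is provided.
fetch_accumulo() {
  local file="$1" version="2.0.0-alpha-1"
  if [ -z "$file" ]; then
    download "accumulo.tar.gz" "accumulo/$version/accumulo-$version-bin.tar.gz"
  else
    echo "cp /tmp/$file accumulo.tar.gz"
  fi
}

fetch_accumulo ""                                    # released-version path
fetch_accumulo "accumulo-2.0.0-SNAPSHOT-bin.tar.gz"  # local-tarball path
```

This is why the commit also adds `COPY README.md $ACCUMULO_FILE /tmp/`: the local tarball has to be inside the image before the RUN step can copy it.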

[accumulo] branch 1.9 updated: #859 - Fix CLASSPATH bug causing cur dir to be added (#864)

2019-01-04 Thread mwalch

mwalch pushed a commit to branch 1.9
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/1.9 by this push:
 new c8b4132  #859 - Fix CLASSPATH bug causing cur dir to be added (#864)
c8b4132 is described below

commit c8b413261530017faed40c5d86930da017e4292f
Author: Mike Walch 
AuthorDate: Fri Jan 4 14:33:26 2019 -0500

#859 - Fix CLASSPATH bug causing cur dir to be added (#864)
---
 assemble/bin/accumulo | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/assemble/bin/accumulo b/assemble/bin/accumulo
index a0fc17b..18e0571 100755
--- a/assemble/bin/accumulo
+++ b/assemble/bin/accumulo
@@ -122,7 +122,11 @@ if [ -z "${LOG4J_JAR}" -a -z "${CLASSPATH}" ]; then
exit 1
 fi
 
-CLASSPATH="${XML_FILES}:${START_JAR}:${SLF4J_JARS}:${LOG4J_JAR}:${CLASSPATH}"
+if [[ -n "$CLASSPATH" ]]; then
+  CLASSPATH="${XML_FILES}:${START_JAR}:${SLF4J_JARS}:${LOG4J_JAR}:${CLASSPATH}"
+else
+  CLASSPATH="${XML_FILES}:${START_JAR}:${SLF4J_JARS}:${LOG4J_JAR}"
+fi
 
 if [ -z "${JAVA_HOME}" -o ! -d "${JAVA_HOME}" ]; then
   echo "JAVA_HOME is not set or is not a directory.  Please make sure it's set globally or in conf/accumulo-env.sh"



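[Editor's note] The CLASSPATH fix above matters because unconditionally appending `:$CLASSPATH` when the variable is empty leaves a trailing `:`, and the JVM treats an empty classpath entry as the current directory. A minimal sketch of the guarded join the patch introduces (the jar names are placeholders, not the script's real variables):

```shell
# Build the classpath the way the patched script does: only append the
# user-supplied value when it is non-empty, so no trailing ':' (which the
# JVM would read as an implicit current-directory entry) sneaks in.
join_classpath() {
  local base="start.jar:slf4j.jar:log4j.jar" extra="$1"
  if [[ -n "$extra" ]]; then
    echo "$base:$extra"
  else
    echo "$base"
  fi
}

join_classpath ""         # base only, no trailing ':'
join_classpath "my.jar"   # base plus the extra entry
```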