mikewalch commented on a change in pull request #56: ACCUMULO-4784 Updating docs with Connector builder
URL: https://github.com/apache/accumulo-website/pull/56#discussion_r168012572
 
 

 ##########
 File path: _docs-2-0/getting-started/clients.md
 ##########
 @@ -16,92 +16,79 @@ If you are using Maven to create Accumulo client code, add the following to your
 </dependency>
 ```
 
-## Running Client Code
-
-There are multiple ways to run Java code that use Accumulo. Below is a list
-of the different ways to execute client code.
-
-* build and execute an uber jar
-* add `accumulo classpath` to your Java classpath
-* use the `accumulo` command
-* use the `accumulo-util hadoop-jar` command
-
-### Build and execute an uber jar
-
-If you have included `accumulo-core` as dependency in your pom, you can build an uber jar
-using the Maven assembly or shade plugin and use it to run Accumulo client code. When building
-an uber jar, you should set the versions of any Hadoop dependencies in your pom to match the
-version running on your cluster.
-
-### Add 'accumulo classpath' to your Java classpath
-
-To run Accumulo client code using the `java` command, use the `accumulo classpath` command
-to include all of Accumulo's dependencies on your classpath:
-
-    java -classpath /path/to/my.jar:/path/to/dep.jar:$(accumulo classpath) com.my.Main arg1 arg2
-
-If you would like to review which jars are included, the `accumulo classpath` command can
-output a more human readable format using the `-d` option which enables debugging:
-
-    accumulo classpath -d
-
-### Use the accumulo command
-
-Another option for running your code is to use the Accumulo script which can execute a
-main class (if it exists on its classpath):
-
-    accumulo com.foo.Client arg1 arg2
-
-While the Accumulo script will add all of Accumulo's dependencies to the classpath, you
-will need to add any jars that your create or depend on beyond what Accumulo already
-depends on. This can be accomplished by either adding the jars to the `lib/ext` directory
-of your Accumulo installation or by adding jars to the CLASSPATH variable before calling
-the accumulo command.
-
-    export CLASSPATH=/path/to/my.jar:/path/to/dep.jar; accumulo com.foo.Client arg1 arg2
-
-### Use the 'accumulo-util hadoop-jar' command
-
-If you are writing map reduce job that accesses Accumulo, then you can use
-`accumulo-util hadoop-jar` to run those jobs. See the [MapReduce example][mapred-example]
-for more information.
-
 ## Connecting
 
-All clients must first identify the Accumulo instance to which they will be
-communicating. Code to do this is as follows:
+Before writing Accumulo client code, you will need the following information.
+
+ * Accumulo instance name
+ * Zookeeper connection string
+ * Accumulo username & password
+
+The [Connector] object is the main entry point for Accumulo clients. It can be created using one
+of the following methods:
+
+1. Using the `accumulo-client.properties` file (a template can be found in the `conf/` directory
+   of the tarball distribution):
+    ```java
+    Connector conn = Connector.builder()
+                        .usingProperties("/path/to/accumulo-client.properties").build();
+    ```
+1. Using the builder methods of [Connector]:
+    ```java
+    Connector conn = Connector.builder().forInstance("myinstance", "zookeeper1,zookeeper2")
+                        .usingPasswordCredentials("myuser", "mypassword").build();
+    ```
+1. Using a Java Properties object:
+    ```java
+    Properties props = new Properties();
+    props.put("instance.name", "myinstance");
+    props.put("instance.zookeepers", "zookeeper1,zookeeper2");
+    props.put("auth.method", "password");
+    props.put("auth.username", "myuser");
+    props.put("auth.password", "mypassword");
+    Connector conn = Connector.builder().usingProperties(props).build();
+    ```
+
+If an `accumulo-client.properties` file or a Java Properties object is used to create a
+[Connector], the following [client properties][client-props] must be set:
+
+* [instance.name]
+* [instance.zookeepers]
+* [auth.method]
+* [auth.username]
+* [auth.password]
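+
+As a sketch, the corresponding entries in `accumulo-client.properties` would look like the
+following (the instance name, ZooKeeper hosts, and credentials below are placeholder values
+matching the examples above):
+
+```
+instance.name=myinstance
+instance.zookeepers=zookeeper1,zookeeper2
+auth.method=password
+auth.username=myuser
+auth.password=mypassword
+```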
+
+## Authentication
+
+When creating a [Connector], the user must be authenticated using one of the following
+implementations of [AuthenticationToken]:
+
+1. [PasswordToken] is the most commonly used implementation.
+1. [CredentialProviderToken] leverages the Hadoop CredentialProviders (new in Hadoop 2.6).
+   For example, the [CredentialProviderToken] can be used in conjunction with a Java KeyStore to
+   avoid storing passwords in cleartext. When stored in HDFS, a single KeyStore can be used across
+   an entire instance. Be aware that KeyStores stored on the local filesystem must be made
+   available to all nodes in the Accumulo cluster.
+1. [KerberosToken] can be provided to use Kerberos authentication. Using Kerberos requires
+   external setup and additional configuration, but provides a single point of authentication
+   through HDFS, YARN and ZooKeeper and allows for password-less authentication with Accumulo.
+
+    ```java
+    KerberosToken token = new KerberosToken();
+    Connector conn = Connector.builder().forInstance("myinstance", "zookeeper1,zookeeper2")
+                        .usingCredentials(token.getPrincipal(), token).build();
+    ```
 
-```java
-String instanceName = "myinstance";
-String zooServers = "zooserver-one,zooserver-two"
-Instance inst = new ZooKeeperInstance(instanceName, zooServers);
-
-Connector conn = inst.getConnector("user", new PasswordToken("passwd"));
-```
-
-The [PasswordToken] is the most common implementation of an [AuthenticationToken].
-This general interface allow authentication as an Accumulo user to come from
-a variety of sources or means. The [CredentialProviderToken] leverages the Hadoop
-CredentialProviders (new in Hadoop 2.6).
+## Writing Data
 
-For example, the [CredentialProviderToken] can be used in conjunction with a Java
-KeyStore to alleviate passwords stored in cleartext. When stored in HDFS, a single
-KeyStore can be used across an entire instance. Be aware that KeyStores stored on
-the local filesystem must be made available to all nodes in the Accumulo cluster.
+With a [Connector] created, it can be used to create objects (like the [BatchWriter]) for
+reading from and writing to Accumulo:
 
 ```java
-KerberosToken token = new KerberosToken();
-Connector conn = inst.getConnector(token.getPrincipal(), token);
+BatchWriter writer = conn.createBatchWriter("table", new BatchWriterConfig());
 
 Review comment:
   Fixed in de495ff2c
