Added: 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/security/AuthenticationAndAuthorization.txt
==============================================================================
--- 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/security/AuthenticationAndAuthorization.txt
 (added)
+++ 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/security/AuthenticationAndAuthorization.txt
 Thu Jul 28 01:17:26 2016
@@ -0,0 +1,239 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+================================
+Authentication and Authorization
+================================
+
+Most Hadoop components, such as HDFS, Yarn, Hive, etc., have security frameworks that support Simple, Kerberos, and LDAP authentication. Currently, Sqoop 2 provides two types of authentication: simple and Kerberos. The authentication module is pluggable, so more authentication types can be added. Additionally, role-based access control was introduced in Sqoop 1.99.6. We recommend using this capability in multi-tenant environments, so that malicious users cannot easily abuse the link and job objects you have created.
+
+Simple Authentication
+=====================
+
+Configuration
+-------------
+Modify the Sqoop configuration file, normally located at <Sqoop Folder>/conf/sqoop.properties.
+
+::
+
+  org.apache.sqoop.authentication.type=SIMPLE
+  
org.apache.sqoop.authentication.handler=org.apache.sqoop.security.authentication.SimpleAuthenticationHandler
+  org.apache.sqoop.anonymous=true
+
+-      Simple authentication is used by default. Commenting out the authentication configuration will result in the use of simple authentication.
+
+Run command
+-----------
+Start Sqoop server as usual.
+
+::
+
+  <Sqoop Folder>/bin/sqoop.sh server start
+
+Start Sqoop client as usual.
+
+::
+
+  <Sqoop Folder>/bin/sqoop.sh client
+
+Kerberos Authentication
+=======================
+
+Kerberos is a computer network authentication protocol that works on the basis of 'tickets' to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner. Its designers aimed it primarily at a client-server model, and it provides mutual authentication: both the user and the server verify each other's identity. Kerberos protocol messages are protected against eavesdropping and replay attacks.
+
+Dependency
+----------
+Set up a KDC server. Skip this step if a KDC server already exists. It is difficult to cover every way Kerberos can be set up (e.g., there are cross-realm setups and multi-trust environments). This section describes how to set up the Sqoop principals with a local deployment of MIT Kerberos.
+
+-      All components that use Kerberos authentication need a KDC server. If the current Hadoop cluster uses Kerberos authentication, a KDC server should already exist.
+-      If there is no KDC server, follow http://web.mit.edu/kerberos/krb5-devel/doc/admin/install_kdc.html to set one up.
+
+Configure Hadoop cluster to use Kerberos authentication.
+
+-      The authentication type should be set at the cluster level: all components must use the same authentication type, either Kerberos or not. In other words, Sqoop with Kerberos authentication cannot communicate with other Hadoop components, such as HDFS, Yarn, Hive, etc., that do not use Kerberos authentication, and vice versa.
+-      How to set up a Hadoop cluster with Kerberos authentication is out of the scope of this document. Follow the related links, such as https://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/SecureMode.html
+
+Create a keytab and principals for Sqoop 2 via kadmin on the command line.
+
+::
+
+  addprinc -randkey HTTP/<FQDN>@<REALM>
+  addprinc -randkey sqoop/<FQDN>@<REALM>
+  xst -k /home/kerberos/sqoop.keytab HTTP/<FQDN>@<REALM>
+  xst -k /home/kerberos/sqoop.keytab sqoop/<FQDN>@<REALM>
+
+-      <FQDN> should be replaced by the FQDN of the server, which can be found by running "hostname -f" on the command line.
+-      <REALM> should be replaced by the realm name in the krb5.conf file generated when installing the KDC server in the previous step.
+-      The principal HTTP/<FQDN>@<REALM> is used in communication between the Sqoop client and the Sqoop server. Since the Sqoop server is an HTTP server, the HTTP principal is required during the SPNEGO process, and it is case sensitive.
+-      HTTP requests can also be sent from other clients with SPNEGO support, such as a browser, wget, or curl.
+-      The principal sqoop/<FQDN>@<REALM> is used in communication between the Sqoop server and Hdfs/Yarn as the credential of the Sqoop server.
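+For example, with a valid Kerberos ticket, a SPNEGO-capable client such as curl can contact the Sqoop server directly (the URL below is illustrative; adjust the host and port to your deployment):
+
+::
+
+  kinit HTTP/<FQDN>@<REALM> -kt /home/kerberos/sqoop.keytab
+  curl --negotiate -u : http://<FQDN>:12000/sqoop/version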
+
+Configuration
+-------------
+Modify the Sqoop configuration file, normally located at <Sqoop Folder>/conf/sqoop.properties.
+
+::
+
+  org.apache.sqoop.authentication.type=KERBEROS
+  
org.apache.sqoop.authentication.handler=org.apache.sqoop.security.authentication.KerberosAuthenticationHandler
+  org.apache.sqoop.authentication.kerberos.principal=sqoop/_HOST@<REALM>
+  org.apache.sqoop.authentication.kerberos.keytab=/home/kerberos/sqoop.keytab
+  org.apache.sqoop.authentication.kerberos.http.principal=HTTP/_HOST@<REALM>
+  
org.apache.sqoop.authentication.kerberos.http.keytab=/home/kerberos/sqoop.keytab
+  org.apache.sqoop.authentication.kerberos.proxyuser=true
+
+-      When _HOST is used as the FQDN in a principal, it will be replaced by the real FQDN at runtime. See https://issues.apache.org/jira/browse/HADOOP-6632
+-      If the proxyuser parameter is set to true, the Sqoop server will use proxy user mode (the sqoop user impersonates the real client user) to run Yarn jobs. If false, the Sqoop server will run Yarn jobs as the sqoop user.
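+When proxy user mode is enabled, the Hadoop cluster must also be configured to allow the sqoop user to impersonate other users. A minimal sketch of the corresponding core-site.xml entries (the property names follow Hadoop's proxyuser convention; in production, restrict the values to specific hosts and groups rather than using wildcards):
+
+::
+
+  <property>
+    <name>hadoop.proxyuser.sqoop.hosts</name>
+    <value>*</value>
+  </property>
+  <property>
+    <name>hadoop.proxyuser.sqoop.groups</name>
+    <value>*</value>
+  </property>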
+
+Run command
+-----------
+Set SQOOP2_HOST to the FQDN.
+
+::
+
+  export SQOOP2_HOST=$(hostname -f)
+
+-      The command "hostname -f" prints the FQDN of the server.
+
+Start Sqoop server using sqoop user.
+
+::
+
+  sudo -u sqoop <Sqoop Folder>/bin/sqoop.sh server start
+
+Run kinit to generate ticket cache.
+
+::
+
+  kinit HTTP/<FQDN>@<REALM> -kt /home/kerberos/sqoop.keytab
+
+Start Sqoop client.
+
+::
+
+  <Sqoop Folder>/bin/sqoop.sh client
+
+Verify
+------
+If the Sqoop server has started successfully with Kerberos authentication, the 
following line will be in <@LOGDIR>/sqoop.log:
+
+::
+
+  2014-12-04 15:02:58,038 INFO  security.KerberosAuthenticationHandler [org.apache.sqoop.security.authentication.KerberosAuthenticationHandler.secureLogin(KerberosAuthenticationHandler.java:84)] Using Kerberos authentication, principal [sqoop/<FQDN>@HADOOP.COM] keytab [/home/kerberos/sqoop.keytab]
+
+If the Sqoop client was able to communicate with the Sqoop server, the following will be in <@LOGDIR>/sqoop.log:
+
+::
+
+  Refreshing Kerberos configuration
+  Acquire TGT from Cache
+  Principal is HTTP/<FQDN>@HADOOP.COM
+  null credentials from Ticket Cache
+  principal is HTTP/<FQDN>@HADOOP.COM
+  Will use keytab
+  Commit Succeeded
+
+Customized Authentication
+=========================
+
+Users can create their own authentication modules by performing the following steps:
+
+-      Create a customized authentication handler that extends the abstract class AuthenticationHandler.
+-      Implement the abstract functions doInitialize and secureLogin in AuthenticationHandler.
+
+::
+
+  // Requires imports for AuthenticationHandler and Logger.
+  public class MyAuthenticationHandler extends AuthenticationHandler {
+
+    private static final Logger LOG = Logger.getLogger(MyAuthenticationHandler.class);
+
+    @Override
+    public void doInitialize() {
+      securityEnabled = true;
+    }
+
+    @Override
+    public void secureLogin() {
+      LOG.info("Using customized authentication.");
+    }
+  }
+
+-      Modify configuration org.apache.sqoop.authentication.handler in <Sqoop 
Folder>/conf/sqoop.properties and set it to the customized authentication 
handler class name.
+-      Restart the Sqoop server.
+
+Authorization
+=============
+
+Users, Groups, and Roles
+------------------------
+
+At the core of Sqoop's authorization system are users, groups, and roles. 
Roles allow administrators to give a name to a set of grants which can be 
easily reused. A role may be assigned to users, groups, and other roles. For 
example, consider a system with the following users and groups.
+
+::
+
+  <User>: <Groups>
+  user_all: group1, group2
+  user1: group1
+  user2: group2
+
+Sqoop roles must be created manually before being used, unlike users and groups. Users and groups are managed by the login system (Linux, LDAP, or Kerberos). When a user wants to access a resource (connector, link, or job), the Sqoop 2 server will determine the username of this user and the associated groups. That information is then used to determine whether the user should have access to the requested resource, by comparing the required privileges of the Sqoop operation to the user's privileges using the following rules.
+
+- User privileges (Has the privilege been granted to the user?)
+- Group privileges (Does the user belong to any groups that the privilege has 
been granted to?)
+- Role privileges (Does the user or any of the groups that the user belongs to 
have a role that grants the privilege?)
+
+Administrator
+-------------
+
+There is a special user, the administrator, who cannot be created or deleted by command. The only way to set the administrator is to modify the configuration file. The administrator can run management commands to create and delete roles. However, the administrator does not implicitly have all privileges: the administrator must grant privileges to himself or herself in order to access a resource.
+
+Role management commands
+------------------------
+
+::
+
+  CREATE ROLE --role role_name
+  DROP ROLE --role role_name
+  SHOW ROLE
+
+- Only the administrator has the privilege to run these commands.
+
+Principal management commands
+-----------------------------
+
+::
+
+  GRANT ROLE --principal-type principal_type --principal principal_name --role 
role_name
+  REVOKE ROLE --principal-type principal_type --principal principal_name 
--role role_name
+  SHOW ROLE --principal-type principal_type --principal principal_name
+  SHOW PRINCIPAL --role role_name
+
+- principal_type: USER | GROUP | ROLE
+
+Privilege management commands
+-----------------------------
+
+::
+
+  GRANT PRIVILEGE --principal-type principal_type --principal principal_name 
--resource-type resource_type --resource resource_name --action action_name 
[--with-grant]
+  REVOKE PRIVILEGE --principal-type principal_type --principal principal_name 
[--resource-type resource_type --resource resource_name --action action_name] 
[--with-grant]
+  SHOW PRIVILEGE --principal-type principal_type --principal principal_name [--resource-type resource_type --resource resource_name --action action_name]
+
+- principal_type: USER | GROUP | ROLE
+- resource_type: CONNECTOR | LINK | JOB
+- action_type: ALL | READ | WRITE
+- With --with-grant in the GRANT PRIVILEGE command, the principal can grant his/her privilege to other users.
+- Without a resource in the REVOKE PRIVILEGE command, all privileges of this principal will be revoked.
+- With --with-grant in the REVOKE PRIVILEGE command, only the grant privilege of this principal will be removed. The principal still has the privilege to access the resource, but can no longer grant that privilege to others.
+- Without a resource in the SHOW PRIVILEGE command, all privileges of this principal will be listed.
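+As an end-to-end illustration (the role, group, and link names are hypothetical), an administrator could create a role, assign it to a group, and grant it read access to a link as follows:
+
+::
+
+  CREATE ROLE --role dev_role
+  GRANT ROLE --principal-type GROUP --principal group1 --role dev_role
+  GRANT PRIVILEGE --principal-type ROLE --principal dev_role --resource-type LINK --resource link1 --action READ
+  SHOW PRIVILEGE --principal-type ROLE --principal dev_role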

Added: 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/security/RepositoryEncryption.txt
==============================================================================
--- 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/security/RepositoryEncryption.txt
 (added)
+++ 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/security/RepositoryEncryption.txt
 Thu Jul 28 01:17:26 2016
@@ -0,0 +1,104 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+.. _repositoryencryption:
+
+=====================
+Repository Encryption
+=====================
+
+Sqoop 2 uses a database to store metadata about the various data sources it talks to; we call this database the repository.
+
+The repository can store passwords and other pieces of information that are security sensitive. Within the context of Sqoop 2, this information is referred to as sensitive inputs. Which inputs are considered sensitive is determined by the connector.
+
+We support encrypting sensitive inputs in the repository using a provided password or password generator. Sqoop 2 uses the provided password and the configured key generation algorithm (such as PBKDF2) to generate a key to encrypt sensitive inputs and a second HMAC key to verify their integrity.
+
+Only the sensitive inputs are encrypted. If an input is not defined as 
sensitive by the connector, it is NOT encrypted.
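+The key derivation described above can be sketched with the standard Java javax.crypto API (a simplified illustration, not Sqoop's actual implementation; salt handling is omitted, and the parameter values mirror the example configuration in this section):
+
+::
+
+  import javax.crypto.SecretKeyFactory;
+  import javax.crypto.spec.PBEKeySpec;
+
+  // Derive a 128-bit (16-byte) encryption key from the password
+  // using PBKDF2WithHmacSHA1 and 4000 rounds.
+  SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
+  PBEKeySpec spec = new PBEKeySpec("supersecret".toCharArray(), salt, 4000, 128);
+  byte[] encryptionKey = factory.generateSecret(spec).getEncoded();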
+
+Server Configuration
+=====================
+
+Note: This configuration will allow a new Sqoop instance to encrypt 
information or read from an already encrypted repository.
+It will not encrypt sensitive inputs in an existing repository. For 
instructions on how to encrypt an existing repository,
+please look here: :ref:`repositoryencryption-tool`
+
+First, repository encryption must be enabled.
+::
+
+    org.apache.sqoop.security.repo_encryption.enabled=true
+
+Then we configure the password:
+
+::
+
+    org.apache.sqoop.security.repo_encryption.password=supersecret
+
+Or the password generator:
+
+::
+
+    org.apache.sqoop.security.repo_encryption.password_generator=echo 
supersecret
+
+The plaintext password is always given preference over the password generator if both are present.
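+The password generator can be any command that prints the password to standard output, which avoids storing the plaintext password in sqoop.properties. For example (a hypothetical setup that reads the secret from a file readable only by the sqoop user):
+
+::
+
+    org.apache.sqoop.security.repo_encryption.password_generator=cat /etc/sqoop/repo_password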
+
+Then we can configure the HMAC algorithm. Please find the list of 
possibilities here:
+`Standard Algorithm Name Documentation - Mac 
<http://docs.oracle.com/javase/7/docs/technotes/guides/security/StandardNames.html#Mac>`_
+We can store digests with up to 1024 bits.
+
+::
+
+    org.apache.sqoop.security.repo_encryption.hmac_algorithm=HmacSHA256
+
+Then we configure the cipher algorithm. Possibilities can be found here:
+`Standard Algorithm Name Documentation - Cipher 
<http://docs.oracle.com/javase/7/docs/technotes/guides/security/StandardNames.html#Cipher>`_
+
+::
+
+    org.apache.sqoop.security.repo_encryption.cipher_algorithm=AES
+
+Then we configure the key size for the cipher in bytes. We can store up to 
1024 bit keys.
+
+::
+
+    org.apache.sqoop.security.repo_encryption.cipher_key_size=16
+
+Next we need to specify the cipher transformation. The options for this field 
are listed here:
+`Cipher (Java Platform SE 7) 
<http://docs.oracle.com/javase/7/docs/api/javax/crypto/Cipher.html>`_
+
+::
+
+    org.apache.sqoop.security.repo_encryption.cipher_spec=AES/CBC/PKCS5Padding
+
+The size of the initialization vector to use in bytes. We support up to 1024 
bit initialization vectors.
+
+::
+
+    org.apache.sqoop.security.repo_encryption.initialization_vector_size=16
+
+Next we need to specify the algorithm for secret key generation. Please refer to:
+`Standard Algorithm Name Documentation - SecretKeyFactory 
<http://docs.oracle.com/javase/7/docs/technotes/guides/security/StandardNames.html#SecretKeyFactory>`_
+
+::
+
+    
org.apache.sqoop.security.repo_encryption.pbkdf2_algorithm=PBKDF2WithHmacSHA1
+
+Finally specify the number of rounds/iterations for the generation of a key 
from a password.
+
+::
+
+    org.apache.sqoop.security.repo_encryption.pbkdf2_rounds=4000
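+
+Putting the settings above together, the repository encryption section of sqoop.properties might look like this (the values are the examples used above; tune them to your security requirements):
+
+::
+
+    org.apache.sqoop.security.repo_encryption.enabled=true
+    org.apache.sqoop.security.repo_encryption.password=supersecret
+    org.apache.sqoop.security.repo_encryption.hmac_algorithm=HmacSHA256
+    org.apache.sqoop.security.repo_encryption.cipher_algorithm=AES
+    org.apache.sqoop.security.repo_encryption.cipher_key_size=16
+    org.apache.sqoop.security.repo_encryption.cipher_spec=AES/CBC/PKCS5Padding
+    org.apache.sqoop.security.repo_encryption.initialization_vector_size=16
+    org.apache.sqoop.security.repo_encryption.pbkdf2_algorithm=PBKDF2WithHmacSHA1
+    org.apache.sqoop.security.repo_encryption.pbkdf2_rounds=4000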

Added: websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user.txt
==============================================================================
--- websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user.txt (added)
+++ websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user.txt Thu Jul 
28 01:17:26 2016
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==========
+User Guide
+==========
+
+.. toctree::
+   :glob:
+
+   user/*
\ No newline at end of file

Added: 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/CommandLineClient.txt
==============================================================================
--- 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/CommandLineClient.txt
 (added)
+++ 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/CommandLineClient.txt
 Thu Jul 28 01:17:26 2016
@@ -0,0 +1,533 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+===================
+Command Line Shell
+===================
+
+Sqoop 2 provides a command line shell that is capable of communicating with the Sqoop 2 server using the REST interface. The client is able to run in two modes: interactive and batch. The commands ``create``, ``update`` and ``clone`` are not currently supported in batch mode. Interactive mode supports all available commands.
+
+You can start the Sqoop 2 client in interactive mode using the command ``sqoop2-shell``::
+
+  sqoop2-shell
+
+Batch mode can be started by adding an additional argument representing the path to your Sqoop client script: ::
+
+  sqoop2-shell /path/to/your/script.sqoop
+
+A Sqoop client script is expected to contain valid Sqoop client commands, empty lines, and lines starting with ``#``, which denote comments. Comments and empty lines are ignored; all other lines are interpreted. Example script: ::
+
+  # Specify company server
+  set server --host sqoop2.company.net
+
+  # Executing given job
+  start job --name 1
+
+
+.. contents:: Table of Contents
+
+Resource file
+=============
+
+The Sqoop 2 client has the ability to load resource files, similarly to other command line tools. At the beginning of execution, the Sqoop client will check for the existence of the file ``.sqoop2rc`` in the home directory of the currently logged-in user. If such a file exists, it will be interpreted before any additional actions. This file is loaded in both interactive and batch mode. It can be used to execute any batch-compatible commands.
+
+Example resource file: ::
+
+  # Configure our Sqoop 2 server automatically
+  set server --host sqoop2.company.net
+
+  # Run in verbose mode by default
+  set option --name verbose --value true
+
+Commands
+========
+
+Sqoop 2 contains several commands that are documented in this section. Each command has one or more functions that accept various arguments. Not all commands are supported in both interactive and batch mode.
+
+Auxiliary Commands
+------------------
+
+Auxiliary commands improve the user experience and run purely on the client side. Thus, they do not need a working connection to the server.
+
+* ``exit`` Exit the client immediately. This command can also be executed by sending the EOT (end of transmission) character. This is CTRL+D on most common Linux shells, such as Bash or Zsh.
+* ``history`` Print out the command history. Please note that the Sqoop client saves history across executions, so you might see commands that you executed in previous runs.
+* ``help`` Show all available commands with short in-shell documentation.
+
+::
+
+ sqoop:000> help
+ For information about Sqoop, visit: http://sqoop.apache.org/
+
+ Available commands:
+   exit    (\x  ) Exit the shell
+   history (\H  ) Display, manage and recall edit-line history
+   help    (\h  ) Display this help message
+   set     (\st ) Configure various client options and settings
+   show    (\sh ) Display various objects and configuration options
+   create  (\cr ) Create new object in Sqoop repository
+   delete  (\d  ) Delete existing object in Sqoop repository
+   update  (\up ) Update objects in Sqoop repository
+   clone   (\cl ) Create new object based on existing one
+   start   (\sta) Start job
+   stop    (\stp) Stop job
+   status  (\stu) Display status of a job
+   enable  (\en ) Enable object in Sqoop repository
+   disable (\di ) Disable object in Sqoop repository
+
+Set Command
+-----------
+
+The set command allows you to set various properties of the client. Like the auxiliary commands, set does not require a connection to the Sqoop server. The set command is not used to reconfigure the Sqoop server.
+
+Available functions:
+
++---------------+------------------------------------------+
+| Function      | Description                              |
++===============+==========================================+
+| ``server``    | Set connection configuration for server  |
++---------------+------------------------------------------+
+| ``option``    | Set various client side options          |
++---------------+------------------------------------------+
+
+Set Server Function
+~~~~~~~~~~~~~~~~~~~
+
+Configure the connection to the Sqoop server - host, port, and web application name. Available arguments:
+
++-----------------------+---------------+--------------------------------------------------+
+| Argument              | Default value | Description                          
            |
++=======================+===============+==================================================+
+| ``-h``, ``--host``    | localhost     | Server name (FQDN) where Sqoop 
server is running |
++-----------------------+---------------+--------------------------------------------------+
+| ``-p``, ``--port``    | 12000         | TCP Port                             
            |
++-----------------------+---------------+--------------------------------------------------+
+| ``-w``, ``--webapp``  | sqoop         | Jetty's web application name         
            |
++-----------------------+---------------+--------------------------------------------------+
+| ``-u``, ``--url``     |               | Sqoop Server in url format           
            |
++-----------------------+---------------+--------------------------------------------------+
+
+Example: ::
+
+  set server --host sqoop2.company.net --port 80 --webapp sqoop
+
+or ::
+
+  set server --url http://sqoop2.company.net:80/sqoop
+
+Note: When the ``--url`` option is given, the ``--host``, ``--port`` and ``--webapp`` options will be ignored.
+
+Set Option Function
+~~~~~~~~~~~~~~~~~~~
+
+Configure Sqoop client related options. This function has two required arguments, ``name`` and ``value``. Name represents the internal property name and value holds the new value that should be set. The list of available option names follows:
+
++-------------------+---------------+---------------------------------------------------------------------+
+| Option name       | Default value | Description                              
                           |
++===================+===============+=====================================================================+
+| ``verbose``       | false         | Client will print additional information 
if verbose mode is enabled |
++-------------------+---------------+---------------------------------------------------------------------+
+| ``poll-timeout``  | 10000         | Server poll timeout in milliseconds      
                           |
++-------------------+---------------+---------------------------------------------------------------------+
+
+Example: ::
+
+  set option --name verbose --value true
+  set option --name poll-timeout --value 20000
+
+Show Command
+------------
+
+The show command displays various information, as described below.
+
+Available functions:
+
++----------------+--------------------------------------------------------------------------------------------------------+
+| Function       | Description                                                 
                                           |
++================+========================================================================================================+
+| ``server``     | Display connection information to the sqoop server (host, 
port, webapp)                                |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``option``     | Display various client side options                         
                                           |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``version``    | Show client build version, with an option -all it shows 
server build version and supported api versions|
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``connector``  | Show connector configurable and its related configs         
                                           |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``driver``     | Show driver configurable and its related configs            
                                           |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``link``       | Show links in sqoop                                         
                                           |
++----------------+--------------------------------------------------------------------------------------------------------+
+| ``job``        | Show jobs in sqoop                                          
                                           |
++----------------+--------------------------------------------------------------------------------------------------------+
+
+Show Server Function
+~~~~~~~~~~~~~~~~~~~~
+
+Show details about connection to Sqoop server.
+
++-----------------------+--------------------------------------------------------------+
+| Argument              |  Description                                         
        |
++=======================+==============================================================+
+| ``-a``, ``--all``     | Show all connection related information (host, port, 
webapp) |
++-----------------------+--------------------------------------------------------------+
+| ``-h``, ``--host``    | Show host                                            
        |
++-----------------------+--------------------------------------------------------------+
+| ``-p``, ``--port``    | Show port                                            
        |
++-----------------------+--------------------------------------------------------------+
+| ``-w``, ``--webapp``  | Show web application name                            
        |
++-----------------------+--------------------------------------------------------------+
+
+Example: ::
+
+  show server --all
+
+Show Option Function
+~~~~~~~~~~~~~~~~~~~~
+
+Show values of various client side options. This function will show all client 
options when called without arguments.
+
++-----------------------+--------------------------------------------------------------+
+| Argument              |  Description                                         
        |
++=======================+==============================================================+
+| ``-n``, ``--name``    | Show client option value with given name             
        |
++-----------------------+--------------------------------------------------------------+
+
+Please check table in `Set Option Function`_ section to get a list of all 
supported option names.
+
+Example: ::
+
+  show option --name verbose
+
+Show Version Function
+~~~~~~~~~~~~~~~~~~~~~
+
+Show the build versions of both client and server, as well as the supported REST API versions.
+
++------------------------+-----------------------------------------------+
+| Argument               |  Description                                  |
++========================+===============================================+
+| ``-a``, ``--all``      | Show all versions (server, client, api)       |
++------------------------+-----------------------------------------------+
+| ``-c``, ``--client``   | Show client build version                     |
++------------------------+-----------------------------------------------+
+| ``-s``, ``--server``   | Show server build version                     |
++------------------------+-----------------------------------------------+
+| ``-p``, ``--api``      | Show supported api versions                   |
++------------------------+-----------------------------------------------+
+
+Example: ::
+
+  show version --all
+
+Show Connector Function
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Show the persisted connector configurable and its related configs used in creating associated link and job objects.
+
++-----------------------+------------------------------------------------+
+| Argument              |  Description                                   |
++=======================+================================================+
+| ``-a``, ``--all``     | Show information for all connectors            |
++-----------------------+------------------------------------------------+
+| ``-c``, ``--cid <x>`` | Show information for connector with id ``<x>`` |
++-----------------------+------------------------------------------------+
+
+Example: ::
+
+  show connector --all or show connector
+
+Show Driver Function
+~~~~~~~~~~~~~~~~~~~~
+
+Show the persisted driver configurable and its related configs used in creating job objects.
+
+This function does not take any extra arguments. There is only one registered driver in Sqoop.
+
+Example: ::
+
+  show driver
+
+Show Link Function
+~~~~~~~~~~~~~~~~~~
+
+Show persisted link objects.
+
++-----------------------+------------------------------------------------------+
+| Argument              |  Description                                         
|
++=======================+======================================================+
+| ``-a``, ``--all``     | Show all available links                             
|
++-----------------------+------------------------------------------------------+
+| ``-n``, ``--name <x>``| Show link with name ``<x>``                          
|
++-----------------------+------------------------------------------------------+
+
+Example: ::
+
+  show link --all or show link --name linkName
+
+Show Job Function
+~~~~~~~~~~~~~~~~~
+
+Show persisted job objects.
+
++-----------------------+----------------------------------------------+
+| Argument              |  Description                                 |
++=======================+==============================================+
+| ``-a``, ``--all``     | Show all available jobs                      |
++-----------------------+----------------------------------------------+
+| ``-n``, ``--name <x>``| Show job with name ``<x>``                   |
++-----------------------+----------------------------------------------+
+
+Example: ::
+
+  show job --all or show job --name jobName
+
+Show Submission Function
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Show persisted job submission objects.
+
++-----------------------+-----------------------------------------------+
+| Argument              |  Description                                  |
++=======================+===============================================+
+| ``-j``, ``--job <x>`` | Show available submissions for given job name |
++-----------------------+-----------------------------------------------+
+| ``-d``, ``--detail``  | Show job submissions in full details          |
++-----------------------+-----------------------------------------------+
+
+Example: ::
+
+  show submission
+  show submission -j jobName
+  show submission --job jobName --detail
+
+Create Command
+--------------
+
+Creates new link and job objects. This command is supported only in interactive mode. It will ask the user to enter the link config when creating a link object, and the from/to and driver configs when creating a job object.
+
+Available functions:
+
++----------------+-------------------------------------------------+
+| Function       | Description                                     |
++================+=================================================+
+| ``link``       | Create new link object                          |
++----------------+-------------------------------------------------+
+| ``job``        | Create new job object                           |
++----------------+-------------------------------------------------+
+
+Create Link Function
+~~~~~~~~~~~~~~~~~~~~
+
+Create new link object.
+
++------------------------------+-------------------------------------------------------------+
+| Argument                     |  Description                                  
              |
++==============================+=============================================================+
+| ``-c``, ``--connector <x>``  |  Create new link object for connector with 
name ``<x>``     |
++------------------------------+-------------------------------------------------------------+
+
+
+Example: ::
+
+  create link --connector connectorName or create link -c connectorName
+
+Create Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Create new job object.
+
++------------------------+------------------------------------------------------------------+
+| Argument               |  Description                                        
             |
++========================+==================================================================+
+| ``-f``, ``--from <x>`` | Create new job object with a FROM link with name 
``<x>``         |
++------------------------+------------------------------------------------------------------+
+| ``-t``, ``--to <t>``   | Create new job object with a TO link with name 
``<x>``           |
++------------------------+------------------------------------------------------------------+
+
+Example: ::
+
+  create job --from fromLinkName --to toLinkName or create job -f fromLinkName -t toLinkName
+
+Update Command
+--------------
+
+Update command allows you to edit link and job objects. This command is supported only in interactive mode.
+
+Update Link Function
+~~~~~~~~~~~~~~~~~~~~
+
+Update existing link object.
+
++-----------------------+---------------------------------------------+
+| Argument              |  Description                                |
++=======================+=============================================+
+| ``-n``, ``--name <x>``|  Update existing link with name ``<x>``     |
++-----------------------+---------------------------------------------+
+
+Example: ::
+
+  update link --name linkName
+
+Update Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Update existing job object.
+
++-------------------------+----------------------------------------------+
+| Argument                |  Description                                 |
++=========================+==============================================+
+| ``-n``, ``--name <x>``  | Update existing job object with name ``<x>`` |
++-------------------------+----------------------------------------------+
+
+Example: ::
+
+  update job --name jobName
+
+
+Delete Command
+--------------
+
+Deletes link and job objects from Sqoop server.
+
+Delete Link Function
+~~~~~~~~~~~~~~~~~~~~
+
+Delete existing link object.
+
++-------------------------+-------------------------------------------+
+| Argument                |  Description                              |
++=========================+===========================================+
+| ``-n``, ``--name <x>``  |  Delete link object with name ``<x>``     |
++-------------------------+-------------------------------------------+
+
+Example: ::
+
+  delete link --name linkName
+
+
+Delete Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Delete existing job object.
+
++-------------------------+------------------------------------------+
+| Argument                |  Description                             |
++=========================+==========================================+
+| ``-n``, ``--name <x>``  | Delete job object with name ``<x>``      |
++-------------------------+------------------------------------------+
+
+Example: ::
+
+  delete job --name jobName
+
+
+Clone Command
+-------------
+
+Clone command will load an existing link or job object from the Sqoop server and allow the user to update it in place, which will result in the creation of a new link or job object. This command is not supported in batch mode.
+
+Clone Link Function
+~~~~~~~~~~~~~~~~~~~
+
+Clone existing link object.
+
++-------------------------+------------------------------------------+
+| Argument                |  Description                             |
++=========================+==========================================+
+| ``-n``, ``--name <x>``  |  Clone link object with name ``<x>``     |
++-------------------------+------------------------------------------+
+
+Example: ::
+
+  clone link --name linkName
+
+
+Clone Job Function
+~~~~~~~~~~~~~~~~~~
+
+Clone existing job object.
+
++-------------------------+------------------------------------------+
+| Argument                |  Description                             |
++=========================+==========================================+
+| ``-n``, ``--name <x>``  | Clone job object with name ``<x>``       |
++-------------------------+------------------------------------------+
+
+Example: ::
+
+  clone job --name jobName
+
+Start Command
+-------------
+
+Start command will begin execution of an existing Sqoop job.
+
+Start Job Function
+~~~~~~~~~~~~~~~~~~
+
+Start a job (submit a new submission). Starting an already running job is considered an invalid operation.
+
++----------------------------+----------------------------+
+| Argument                   |  Description               |
++============================+============================+
+| ``-n``, ``--name <x>``     | Start job with name ``<x>``|
++----------------------------+----------------------------+
+| ``-s``, ``--synchronous``  | Synchoronous job execution |
++----------------------------+----------------------------+
+
+Example: ::
+
+  start job --name jobName
+  start job --name jobName --synchronous
+
+Stop Command
+------------
+
+Stop command will interrupt a job execution.
+
+Stop Job Function
+~~~~~~~~~~~~~~~~~
+
+Interrupt running job.
+
++-------------------------+------------------------------------------+
+| Argument                |  Description                             |
++=========================+==========================================+
+| ``-n``, ``--name <x>``  | Interrupt running job with name ``<x>``  |
++-------------------------+------------------------------------------+
+
+Example: ::
+
+  stop job --name jobName
+
+Status Command
+--------------
+
+Status command will retrieve the last status of a job.
+
+Status Job Function
+~~~~~~~~~~~~~~~~~~~
+
+Retrieve last status for given job.
+
++-------------------------+------------------------------------------+
+| Argument                |  Description                             |
++=========================+==========================================+
+| ``-n``, ``--name <x>``  | Retrieve status for job with name ``<x>``|
++-------------------------+------------------------------------------+
+
+Example: ::
+
+  status job --name jobName

Added: websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/Connectors.txt
==============================================================================
--- websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/Connectors.txt (added)
+++ websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/Connectors.txt Thu Jul 28 01:17:26 2016
@@ -0,0 +1,24 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==========
+Connectors
+==========
+
+.. toctree::
+   :glob:
+
+   connectors/*

Added: websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/Examples.txt
==============================================================================
--- websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/Examples.txt (added)
+++ websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/Examples.txt Thu Jul 28 01:17:26 2016
@@ -0,0 +1,26 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+========
+Examples
+========
+
+This section contains various examples of how Sqoop can be configured for various use cases.
+
+.. toctree::
+   :glob:
+
+   examples/*

Added: websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/Sqoop5MinutesDemo.txt
==============================================================================
--- websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/Sqoop5MinutesDemo.txt (added)
+++ websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/Sqoop5MinutesDemo.txt Thu Jul 28 01:17:26 2016
@@ -0,0 +1,242 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+====================
+Sqoop 5 Minutes Demo
+====================
+
+This page will walk you through the basic usage of Sqoop. You need to have installed and configured the Sqoop server and client in order to follow this guide. The installation procedure is described in :doc:`/admin/Installation`. Please note that the exact output shown on this page might differ from yours as Sqoop evolves; all major information should however remain the same.
+
+Sqoop uses unique names or persistent ids to identify connectors, links, jobs and configs. We support querying an entity by its unique name or by its persistent database id.
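+
+For example, the same unique job name can be used to address a job throughout its lifecycle (``jobName`` is a placeholder): ::
+
+  sqoop:000> show job --name jobName
+  sqoop:000> start job --name jobName
+  sqoop:000> status job --name jobName
+  sqoop:000> stop job --name jobName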
+
+Starting Client
+===============
+
+Start the client in interactive mode using the following command: ::
+
+  sqoop2-shell
+
+Configure the client to use your Sqoop server: ::
+
+  sqoop:000> set server --host your.host.com --port 12000 --webapp sqoop
+
+Verify that the connection is working with a simple version check: ::
+
+  sqoop:000> show version --all
+  client version:
+    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
+  server version:
+    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
+  API versions:
+    [v1]
+
+You should receive output similar to that shown above, describing the Sqoop client build version, the server build version and the supported versions of the REST API.
+
+You can use the help command to check all the supported commands in the Sqoop shell.
+::
+
+  sqoop:000> help
+  For information about Sqoop, visit: http://sqoop.apache.org/
+
+  Available commands:
+    exit    (\x  ) Exit the shell
+    history (\H  ) Display, manage and recall edit-line history
+    help    (\h  ) Display this help message
+    set     (\st ) Configure various client options and settings
+    show    (\sh ) Display various objects and configuration options
+    create  (\cr ) Create new object in Sqoop repository
+    delete  (\d  ) Delete existing object in Sqoop repository
+    update  (\up ) Update objects in Sqoop repository
+    clone   (\cl ) Create new object based on existing one
+    start   (\sta) Start job
+    stop    (\stp) Stop job
+    status  (\stu) Display status of a job
+    enable  (\en ) Enable object in Sqoop repository
+    disable (\di ) Disable object in Sqoop repository
+
+
+Creating Link Object
+====================
+
+Check for the registered connectors on your Sqoop server: ::
+
+  sqoop:000> show connector
+  +------------------------+----------------+------------------------------------------------------+----------------------+
+  |          Name          |    Version     |                        Class                         | Supported Directions |
+  +------------------------+----------------+------------------------------------------------------+----------------------+
+  | hdfs-connector         | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.hdfs.HdfsConnector        | FROM/TO              |
+  | generic-jdbc-connector | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.jdbc.GenericJdbcConnector | FROM/TO              |
+  +------------------------+----------------+------------------------------------------------------+----------------------+
+
+Our example contains two connectors. The ``generic-jdbc-connector`` is a basic connector relying on the Java JDBC interface for communicating with data sources. It should work with most common databases that provide JDBC drivers. Please note that you must install JDBC drivers separately; they are not bundled with Sqoop due to incompatible licenses.
+
+The Generic JDBC Connector in our example has the name ``generic-jdbc-connector`` and we will use this value to create a new link object for this connector. Note that the link name should be unique.
+::
+
+  sqoop:000> create link --connector generic-jdbc-connector
+  Creating link for connector with name generic-jdbc-connector
+  Please fill following values to create new link object
+  Name: First Link
+
+  Link configuration
+  JDBC Driver Class: com.mysql.jdbc.Driver
+  JDBC Connection String: jdbc:mysql://mysql.server/database
+  Username: sqoop
+  Password: *****
+  JDBC Connection Properties:
+  There are currently 0 values in the map:
+  entry#protocol=tcp
+  New link was successfully created with validation status OK and name First Link
+
+Our new link object was created with the assigned name First Link.
+
+In the output of ``show connector --all`` we see that there is a hdfs-connector registered. Let us create another link object, but this time for the hdfs-connector instead.
+
+::
+
+  sqoop:000> create link --connector hdfs-connector
+  Creating link for connector with name hdfs-connector
+  Please fill following values to create new link object
+  Name: Second Link
+
+  Link configuration
+  HDFS URI: hdfs://nameservice1:8020/
+  New link was successfully created with validation status OK and name Second Link
+
+Creating Job Object
+===================
+
+Connectors implement the ``From`` direction for reading data and/or the ``To`` direction for writing data. The Generic JDBC Connector supports both of them. The list of supported directions for each connector can be seen in the output of the ``show connector --all`` command above. In order to create a job we need to specify the ``From`` and ``To`` parts of the job, uniquely identified by their link names. We already have 2 links created in the system; you can verify the same with the following command
+
+::
+
+  sqoop:000> show link --all
+  2 link(s) to show:
+  link with name First Link (Enabled: true, Created by root at 11/4/14 4:27 PM, Updated by root at 11/4/14 4:27 PM)
+  Using Connector with name generic-jdbc-connector
+    Link configuration
+      JDBC Driver Class: com.mysql.jdbc.Driver
+      JDBC Connection String: jdbc:mysql://mysql.ent.cloudera.com/sqoop
+      Username: sqoop
+      Password:
+      JDBC Connection Properties:
+        protocol = tcp
+  link with name Second Link (Enabled: true, Created by root at 11/4/14 4:38 PM, Updated by root at 11/4/14 4:38 PM)
+  Using Connector with name hdfs-connector
+    Link configuration
+      HDFS URI: hdfs://nameservice1:8020/
+
+Next, we can use the two link names to associate the ``From`` and ``To`` for the job.
+::
+
+   sqoop:000> create job -f "First Link" -t "Second Link"
+   Creating job for links with from name First Link and to name Second Link
+   Please fill following values to create new job object
+   Name: Sqoopy
+
+   FromJob configuration
+
+    Schema name:(Required)sqoop
+    Table name:(Required)sqoop
+    Table SQL statement:(Optional)
+    Table column names:(Optional)
+    Partition column name:(Optional) id
+    Null value allowed for the partition column:(Optional)
+    Boundary query:(Optional)
+
+  ToJob configuration
+
+    Output format:
+     0 : TEXT_FILE
+     1 : SEQUENCE_FILE
+    Choose: 0
+    Compression format:
+     0 : NONE
+     1 : DEFAULT
+     2 : DEFLATE
+     3 : GZIP
+     4 : BZIP2
+     5 : LZO
+     6 : LZ4
+     7 : SNAPPY
+     8 : CUSTOM
+    Choose: 0
+    Custom compression format:(Optional)
+    Output directory:(Required)/root/projects/sqoop
+
+    Driver Config
+    Extractors:(Optional) 2
+    Loaders:(Optional) 2
+    New job was successfully created with validation status OK and name Sqoopy
+
+Our new job object was created with the assigned name Sqoopy. Note that if a null value is allowed for the partition column, at least 2 extractors are needed for Sqoop to carry out the data transfer. If 1 extractor is specified in this scenario, Sqoop will ignore the setting and continue with 2 extractors.
+
+Start Job (a.k.a. Data Transfer)
+=================================
+
+You can start a sqoop job with the following command:
+::
+
+  sqoop:000> start job --name Sqoopy
+  Submission details
+  Job Name: Sqoopy
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
+
+You can iteratively check your running job status with the ``status job`` command:
+
+::
+
+  sqoop:000> status job -n Sqoopy
+  Submission details
+  Job Name: Sqoopy
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 20:09:16 PST: RUNNING  - 0.00 %
+
+Alternatively, you can start a Sqoop job and observe its running status with the following command:
+
+::
+
+  sqoop:000> start job -n Sqoopy -s
+  Submission details
+  Job Name: Sqoopy
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
+  2014-11-04 19:43:39 PST: RUNNING  - 0.00 %
+  2014-11-04 19:43:49 PST: RUNNING  - 10.00 %
+
+And finally, you can stop a running job at any time using the ``stop job`` command: ::
+
+  sqoop:000> stop job -n Sqoopy

Added: websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-FTP.txt
==============================================================================
--- websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-FTP.txt (added)
+++ websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-FTP.txt Thu Jul 28 01:17:26 2016
@@ -0,0 +1,81 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==================
+FTP Connector
+==================
+
+The FTP connector supports moving data between an FTP server and other supported Sqoop2 connectors.
+
+Currently only the TO direction is supported, i.e. writing records to an FTP server. A FROM connector is pending (SQOOP-2127).
+
+.. contents::
+   :depth: 3
+
+-----
+Usage
+-----
+
+To use the FTP Connector, create a link for the connector and a job that uses the link.
+
+**Link Configuration**
+++++++++++++++++++++++
+
+Inputs associated with the link configuration include:
+
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Input                       | Type    | Description                          
                                 | Example                    |
++=============================+=========+=======================================================================+============================+
+| FTP server hostname         | String  | Hostname for the FTP server.         
                                 | ftp.example.com            |
+|                             |         | *Required*.                          
                                 |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| FTP server port             | Integer | Port number for the FTP server. 
Defaults to 21.                       | 2100                       |
+|                             |         | *Optional*.                          
                                 |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Username                    | String  | The username to provide when 
connecting to the FTP server.            | sqoop                      |
+|                             |         | *Required*.                          
                                 |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Password                    | String  | The password to provide when 
connecting to the FTP server.            | sqoop                      |
+|                             |         | *Required*                           
                                 |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+
+**Notes**
+=========
+
+1. The FTP connector will attempt to connect to the FTP server as part of the link validation process. If for some reason a connection cannot be established, you'll see a corresponding warning message.
+
+**TO Job Configuration**
+++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the TO direction include:
+
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Input                       | Type    | Description                          
                                   | Example                           |
++=============================+=========+=========================================================================+===================================+
+| Output directory            | String  | The location on the FTP server that 
the connector will write files to.  | uploads                           |
+|                             |         | *Required*                           
                                   |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+
+**Notes**
+=========
+
+1. The *output directory* value needs to be an existing directory on the FTP server.
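+
+If the directory does not exist yet, it can be created ahead of time with any standard FTP client, for example (hostname and directory name are placeholders): ::
+
+  $ ftp ftp.example.com
+  ftp> mkdir uploads
+  ftp> quit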
+
+------
+Loader
+------
+
+During the *loading* phase, the connector will create uniquely named files in the *output directory* for each partition of data received from the **FROM** connector.

Added: websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-GenericJDBC.txt
==============================================================================
--- websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-GenericJDBC.txt (added)
+++ websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-GenericJDBC.txt Thu Jul 28 01:17:26 2016
@@ -0,0 +1,194 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+======================
+Generic JDBC Connector
+======================
+
+The Generic JDBC Connector can connect to any data source that adheres to the **JDBC 4** specification.
+
+.. contents::
+   :depth: 3
+
+-----
+Usage
+-----
+
+To use the Generic JDBC Connector, create a link for the connector and a job that uses the link.
+
+**Link Configuration**
+++++++++++++++++++++++
+
+Inputs associated with the link configuration include:
+
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+| Input                       | Type    | Description                          
                                 | Example                                  |
++=============================+=========+=======================================================================+==========================================+
+| JDBC Driver Class           | String  | The full class name of the JDBC 
driver.                               | com.mysql.jdbc.Driver                   
 |
+|                             |         | *Required* and accessible by the 
Sqoop server.                        |                                          
|
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+| JDBC Connection String      | String  | The JDBC connection string to use 
when connecting to the data source. | jdbc:mysql://localhost/test              |
+|                             |         | *Required*. Connectivity upon 
creation is optional.                   |                                       
   |
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+| Username                    | String  | The username to provide when 
connecting to the data source.           | sqoop                                
    |
+|                             |         | *Optional*. Connectivity upon 
creation is optional.                   |                                       
   |
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+| Password                    | String  | The password to provide when 
connecting to the data source.           | sqoop                                
    |
+|                             |         | *Optional*. Connectivity upon 
creation is optional.                   |                                       
   |
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+| JDBC Connection Properties  | Map     | A map of JDBC connection properties 
to pass to the JDBC driver        | profileSQL=true&useFastDateParsing=false |
+|                             |         | *Optional*.                          
                                 |                                          |
++-----------------------------+---------+-----------------------------------------------------------------------+------------------------------------------+
+
+**FROM Job Configuration**
+++++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the FROM direction include:
+
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Input                       | Type    | Description                                                             | Example                                     |
++=============================+=========+=========================================================================+=============================================+
+| Schema name                 | String  | The schema name the table is part of.                                   | sqoop                                       |
+|                             |         | *Optional*.                                                             |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Table name                  | String  | The table name to import data from.                                     | test                                        |
+|                             |         | *Optional*. See note below.                                             |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Table SQL statement         | String  | The SQL statement used to perform a **free form query**.                | ``SELECT COUNT(*) FROM test ${CONDITIONS}`` |
+|                             |         | *Optional*. See notes below.                                            |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Table column names          | String  | Columns to extract from the JDBC data source.                           | col1,col2                                   |
+|                             |         | *Optional*. Comma separated list of columns.                            |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Partition column name       | String  | The column name used to partition the data transfer process.            | col1                                        |
+|                             |         | *Optional*. Defaults to table's first column of primary key.            |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Null value allowed for      | Boolean | True or false depending on whether NULL values are allowed in data      | true                                        |
+| the partition column        |         | of the Partition column. *Optional*.                                    |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+| Boundary query              | String  | The query used to define an upper and lower boundary when partitioning. |                                             |
+|                             |         | *Optional*.                                                             |                                             |
++-----------------------------+---------+-------------------------------------------------------------------------+---------------------------------------------+
+
+**Notes**
+=========
+
+1. *Table name* and *Table SQL statement* are mutually exclusive. If *Table 
name* is provided, the *Table SQL statement* should not be provided. If *Table 
SQL statement* is provided then *Table name* should not be provided.
+2. *Table column names* should be provided only if *Table name* is provided.
+3. If the query joins tables that contain identically named columns, column aliases are required. For example: ``SELECT table1.id as "i", table2.id as "j" FROM table1 INNER JOIN table2 ON table1.id = table2.id``.
+
+**TO Job Configuration**
+++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the TO direction include:
+
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Input                       | Type    | Description                                                             | Example                                         |
++=============================+=========+=========================================================================+=================================================+
+| Schema name                 | String  | The schema name the table is part of.                                   | sqoop                                           |
+|                             |         | *Optional*.                                                             |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Table name                  | String  | The table name to export data to.                                       | test                                            |
+|                             |         | *Optional*. See note below.                                             |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Table SQL statement         | String  | The SQL statement used to perform a **free form query**.                | ``INSERT INTO test (col1, col2) VALUES (?, ?)`` |
+|                             |         | *Optional*. See note below.                                             |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Table column names          | String  | Columns to insert into the JDBC data source.                            | col1,col2                                       |
+|                             |         | *Optional*. Comma separated list of columns.                            |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Stage table name            | String  | The name of the table used as a *staging table*.                        | staging                                         |
+|                             |         | *Optional*.                                                             |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+| Should clear stage table    | Boolean | True or false depending on whether the staging table should be cleared  | true                                            |
+|                             |         | after the data transfer has finished. *Optional*.                       |                                                 |
++-----------------------------+---------+-------------------------------------------------------------------------+-------------------------------------------------+
+
+**Notes**
+=========
+
+1. *Table name* and *Table SQL statement* are mutually exclusive. If *Table 
name* is provided, the *Table SQL statement* should not be provided. If *Table 
SQL statement* is provided then *Table name* should not be provided.
+2. *Table column names* should be provided only if *Table name* is provided.
+
+-----------
+Partitioner
+-----------
+
+The Generic JDBC Connector partitioner generates conditions to be used by the 
extractor.
+How it partitions the data transfer varies with the partition column's data
+type, but each strategy roughly takes the following form:
+::
+
+  (upper boundary - lower boundary) / (max partitions)
+
+By default, the *primary key* will be used to partition the data unless 
otherwise specified.
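
For an integer partition column, this stride computation can be sketched as follows (a minimal illustration in Python; the connector's actual implementation is in Java and also handles the remaining data types and remainder distribution):

```python
def integer_partitions(lower, upper, max_partitions):
    """Split the range [lower, upper) into contiguous partitions.

    A simplified sketch of the strategy above: the stride is roughly
    (upper boundary - lower boundary) / (max partitions).
    """
    stride = max((upper - lower) // max_partitions, 1)
    partitions = []
    start = lower
    while start < upper:
        end = min(start + stride, upper)
        partitions.append((start, end))
        start = end
    return partitions

# Four even partitions over ids 0..100:
# [(0, 25), (25, 50), (50, 75), (75, 100)]
```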
+
+The following data types are currently supported:
+
+1. TINYINT
+2. SMALLINT
+3. INTEGER
+4. BIGINT
+5. REAL
+6. FLOAT
+7. DOUBLE
+8. NUMERIC
+9. DECIMAL
+10. BIT
+11. BOOLEAN
+12. DATE
+13. TIME
+14. TIMESTAMP
+15. CHAR
+16. VARCHAR
+17. LONGVARCHAR
+
+---------
+Extractor
+---------
+
+During the *extraction* phase, the JDBC data source is queried using SQL. This 
SQL will vary based on your configuration.
+
+- If *Table name* is provided, then the SQL statement generated will take on 
the form ``SELECT * FROM <table name>``.
+- If *Table name* and *Columns* are provided, then the SQL statement generated 
will take on the form ``SELECT <columns> FROM <table name>``.
+- If *Table SQL statement* is provided, then the provided SQL statement will 
be used.
+
+The conditions generated by the *partitioner* are appended to the end of the 
SQL query to query a section of data.
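
Putting the cases together, the extraction query could be assembled roughly like this (an illustrative sketch; the function name and condition format are assumptions, not part of the connector's API):

```python
def build_extraction_query(table=None, columns=None, sql=None, condition=None):
    """Mirror the three cases above: table only, table plus columns,
    or a user-supplied free-form query."""
    if sql is not None:
        # Free-form queries carry a ${CONDITIONS} placeholder for the
        # partitioner's condition.
        return sql.replace("${CONDITIONS}", condition or "1 = 1")
    query = "SELECT {} FROM {}".format(columns or "*", table)
    if condition:
        # The partitioner's condition selects one slice of the data.
        query += " WHERE " + condition
    return query
```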
+
+The Generic JDBC connector extracts CSV data usable by the *CSV Intermediate 
Data Format*.
+
+------
+Loader
+------
+
+During the *loading* phase, the JDBC data source is queried using SQL. This 
SQL will vary based on your configuration.
+
+- If *Table name* is provided, then the SQL statement generated will take on 
the form ``INSERT INTO <table name> (col1, col2, ...) VALUES (?,?,..)``.
+- If *Table name* and *Columns* are provided, then the SQL statement generated 
will take on the form ``INSERT INTO <table name> (<columns>) VALUES (?,?,..)``.
+- If *Table SQL statement* is provided, then the provided SQL statement will 
be used.
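
The generated ``INSERT`` statement can be sketched as follows (illustrative only, not the connector's actual code):

```python
def build_insert_statement(table, column_names):
    # One "?" placeholder per column, matching the form
    # INSERT INTO <table name> (col1, col2, ...) VALUES (?, ?, ...)
    placeholders = ", ".join("?" for _ in column_names)
    return "INSERT INTO {} ({}) VALUES ({})".format(
        table, ", ".join(column_names), placeholders)

# build_insert_statement("test", ["col1", "col2"])
# -> "INSERT INTO test (col1, col2) VALUES (?, ?)"
```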
+
+This connector expects to receive CSV data consumable by the *CSV Intermediate 
Data Format*.
+
+----------
+Destroyers
+----------
+
+The Generic JDBC Connector performs two operations in the destroyer in the TO 
direction:
+
+1. Copy the contents of the staging table to the desired table.
+2. Clear the staging table.
+
+No operations are performed in the FROM direction.
\ No newline at end of file

Added: 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-HDFS.txt
==============================================================================
--- 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-HDFS.txt
 (added)
+++ 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-HDFS.txt
 Thu Jul 28 01:17:26 2016
@@ -0,0 +1,159 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+==============
+HDFS Connector
+==============
+
+.. contents::
+   :depth: 3
+
+-----
+Usage
+-----
+
+To use the HDFS Connector, create a link for the connector and a job that uses 
the link.
+
+**Link Configuration**
+++++++++++++++++++++++
+
+Inputs associated with the link configuration include:
+
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Input                       | Type    | Description                                                           | Example                    |
++=============================+=========+=======================================================================+============================+
+| URI                         | String  | The URI of the HDFS File System.                                      | hdfs://example.com:8020/   |
+|                             |         | *Optional*. See note below.                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+| Configuration directory     | String  | Path to the cluster's configuration directory.                        | /etc/conf/hadoop           |
+|                             |         | *Optional*.                                                           |                            |
++-----------------------------+---------+-----------------------------------------------------------------------+----------------------------+
+
+**Notes**
+=========
+
+1. The specified URI will override the declared URI in your configuration.
+
+**FROM Job Configuration**
+++++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the FROM direction include:
+
++-----------------------------+---------+-------------------------------------------------------------------------+------------------+
+| Input                       | Type    | Description                                                             | Example          |
++=============================+=========+=========================================================================+==================+
+| Input directory             | String  | The location in HDFS that the connector should look for files in.       | /tmp/sqoop2/hdfs |
+|                             |         | *Required*. See note below.                                             |                  |
++-----------------------------+---------+-------------------------------------------------------------------------+------------------+
+| Null value                  | String  | The value of NULL in the contents of each file extracted.               | \N               |
+|                             |         | *Optional*. See note below.                                             |                  |
++-----------------------------+---------+-------------------------------------------------------------------------+------------------+
+| Override null value         | Boolean | Tells the connector to replace the specified NULL value.                | true             |
+|                             |         | *Optional*. See note below.                                             |                  |
++-----------------------------+---------+-------------------------------------------------------------------------+------------------+
+
+**Notes**
+=========
+
+1. All files in *Input directory* will be extracted.
+2. *Null value* and *override null value* should be used in conjunction. If 
*override null value* is not set to true, then *null value* will not be used 
when extracting data.
+
+**TO Job Configuration**
+++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the TO direction include:
+
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Input                       | Type    | Description                                                             | Example                           |
++=============================+=========+=========================================================================+===================================+
+| Output directory            | String  | The location in HDFS that the connector will load files to.             | /tmp/sqoop2/hdfs                  |
+|                             |         | *Optional*.                                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Output format               | Enum    | The format to output data to.                                           | CSV                               |
+|                             |         | *Optional*. See note below.                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Compression                 | Enum    | Compression class.                                                      | GZIP                              |
+|                             |         | *Optional*. See note below.                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Custom compression          | String  | Custom compression class.                                               | org.apache.sqoop.SqoopCompression |
+|                             |         | *Optional*.                                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Null value                  | String  | The value of NULL in the contents of each file loaded.                  | \N                                |
+|                             |         | *Optional*. See note below.                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Override null value         | Boolean | Tells the connector to replace the specified NULL value.                | true                              |
+|                             |         | *Optional*. See note below.                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+| Append mode                 | Boolean | Append to an existing output directory.                                 | true                              |
+|                             |         | *Optional*.                                                             |                                   |
++-----------------------------+---------+-------------------------------------------------------------------------+-----------------------------------+
+
+**Notes**
+=========
+
+1. *Output format* only supports CSV at the moment.
+2. *Compression* supports all Hadoop compression classes.
+3. *Null value* and *override null value* should be used in conjunction. If 
*override null value* is not set to true, then *null value* will not be used 
when loading data.
+
+-----------
+Partitioner
+-----------
+
+The HDFS Connector partitioner partitions based on total blocks in all files 
in the specified input directory.
+The partitioner attempts to place blocks in splits based on the *node* and
+*rack* on which they reside.
+
+---------
+Extractor
+---------
+
+During the *extraction* phase, the FileSystem API is used to query files from 
HDFS. The HDFS cluster used is the one defined by:
+
+1. The HDFS URI in the link configuration
+2. The Hadoop configuration in the link configuration
+3. The Hadoop configuration used by the execution framework
+
+The format of the data must be CSV. The NULL value in the CSV can be chosen 
via *null value*. For example::
+
+    1,\N
+    2,null
+    3,NULL
+
+In the above example, if *null value* is set to ``\N``, then only the first
+row's value will be interpreted as NULL.
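
The inference described above can be sketched as a simple field-by-field comparison (illustrative only; the connector's real CSV parsing also handles quoting and escaping):

```python
def parse_csv_line(line, null_value="\\N", override_null=True):
    """Replace fields that exactly match the configured null value with
    None; other spellings such as "null" or "NULL" pass through as-is."""
    fields = line.rstrip("\n").split(",")
    if not override_null:
        return fields
    return [None if field == null_value else field for field in fields]

# parse_csv_line("1,\\N")  -> ["1", None]
# parse_csv_line("2,null") -> ["2", "null"]
```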
+
+------
+Loader
+------
+
+During the *loading* phase, HDFS is written to via the FileSystem API. The 
number of files created is equal to the number of loads that run. The format of 
the data currently can only be CSV. The NULL value in the CSV can be chosen via 
*null value*. For example:
+
++--------------+-------+
+| Id           | Value |
++==============+=======+
+| 1            | NULL  |
++--------------+-------+
+| 2            | value |
++--------------+-------+
+
+If *null value* is set to ``\N``, here is how the data will look in HDFS::
+
+    1,\N
+    2,value
+
+----------
+Destroyers
+----------
+
+The HDFS TO destroyer moves all created files to the proper output directory.

Added: 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-Kafka.txt
==============================================================================
--- 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-Kafka.txt
 (added)
+++ 
websites/staging/sqoop/trunk/content/docs/1.99.7/_sources/user/connectors/Connector-Kafka.txt
 Thu Jul 28 01:17:26 2016
@@ -0,0 +1,63 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+===============
+Kafka Connector
+===============
+
+Currently, only the TO direction is supported.
+
+.. contents::
+   :depth: 3
+
+-----
+Usage
+-----
+
+To use the Kafka Connector, create a link for the connector and a job that 
uses the link.
+
+**Link Configuration**
+++++++++++++++++++++++
+
+Inputs associated with the link configuration include:
+
++----------------------+---------+-----------------------------------------------------------+-------------------------------------+
+| Input                | Type    | Description                                 
              | Example                             |
++======================+=========+===========================================================+=====================================+
+| Broker list          | String  | Comma separated list of kafka brokers.      
              | example.com:10000,example.com:11000 |
+|                      |         | *Required*.                                 
              |                                     |
++----------------------+---------+-----------------------------------------------------------+-------------------------------------+
+| Zookeeper connection | String  | Comma separated list of zookeeper servers 
in your quorum. | /etc/conf/hadoop                    |
+|                      |         | *Required*.                                 
              |                                     |
++----------------------+---------+-----------------------------------------------------------+-------------------------------------+
+
+**TO Job Configuration**
+++++++++++++++++++++++++
+
+Inputs associated with the Job configuration for the TO direction include:
+
++-------+---------+---------------------------------+----------+
+| Input | Type    | Description                     | Example  |
++=======+=========+=================================+==========+
+| topic | String  | The Kafka topic to transfer to. | my topic |
+|       |         | *Required*.                     |          |
++-------+---------+---------------------------------+----------+
+
+------
+Loader
+------
+
+During the *loading* phase, Kafka is written to directly from each loader. The 
order in which data is loaded into Kafka is not guaranteed.


