Added: websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/RESTAPI.txt
==============================================================================
--- websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/RESTAPI.txt 
(added)
+++ websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/RESTAPI.txt Fri 
Feb 27 00:59:26 2015
@@ -0,0 +1,1442 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+=========================
+Sqoop REST API Guide
+=========================
+
+This document explains how you can use the Sqoop REST API to build applications that interact with the Sqoop server.
+The REST API covers all aspects of managing Sqoop jobs and allows you to build an app in any programming language using HTTP over JSON.
+
+.. contents:: Table of Contents
+
+Initialization
+=========================
+
+Before continuing further, make sure that the Sqoop server is running.
+
+Then find out the details of the Sqoop server: ``host``, ``port`` and ``webapp``, and keep them in mind. Note that the Sqoop server runs on Apache Tomcat. To exercise a REST API for Sqoop, you assemble and send an HTTP request to a URL corresponding to that API. Generally, the URL contains the ``host`` on which the Sqoop server is running, the ``port`` at which the Sqoop server is listening, and the ``webapp``, the context path at which the Sqoop server is registered in the Apache Tomcat engine.
+
+Certain requests might need to contain some additional query parameters and post data. These parameters can be given via the HTTP headers, the request body, or both. All the content in the HTTP body is in ``JSON`` format.
+
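+A minimal sketch of assembling such a URL and issuing a request, written in Python with the third-party ``requests`` library (the host, port and webapp values here are assumptions for illustration; substitute your own server's details):
+
+::
+
+  import requests
+
+  # Assumed server details: adjust host, port and webapp for your deployment.
+  host, port, webapp = "localhost", 12000, "sqoop"
+  base_url = "http://%s:%s/%s" % (host, port, webapp)
+
+  # Every Sqoop REST call is an HTTP request against this base URL; the
+  # body, when present, is JSON (see the /version endpoint below).
+  response = requests.get(base_url + "/version")
+  print(response.json())
+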
+Understand Connector, Driver, Link and Job
+===========================================================
+
+To create and run a Sqoop job, we need to provide config values for connecting to a data source and then processing the data in that data source. Processing might be either reading from or writing to the data source. Thus we have configurable entities such as the ``From`` and ``To`` parts of a connector and the driver, each of which exposes configs with one or more inputs within them.
+
+For instance, a connector that represents a relational data source such as MySQL will expose config classes for connecting to the database. Some of the relevant inputs are the connection string, driver class, and the username and password to connect to the database. These configs remain the same for reading data from any of the tables within that database. Hence they are grouped under ``LinkConfiguration``.
+
+Each connector can support reading from and/or writing to the data source it represents. Reading from and writing to a data source are represented by ``From`` and ``To`` respectively. Specific configurations are required to perform the job of reading from or writing to the data source. These are grouped in the ``FromJobConfiguration`` and ``ToJobConfiguration`` objects of the connector.
+
+For instance, a connector that represents a relational data source such as MySQL will expose the table name to read from, or the SQL query to use while reading data, as a ``FromJobConfiguration``. Similarly, a connector that represents a data source such as HDFS will expose the output directory to write to as a ``ToJobConfiguration``.
+
+
+Objects
+==============
+
+This section covers all the objects that might exist in an API request and/or 
API response.
+
+Configs and Inputs
+------------------
+
+Before creating any link for a connector, or a job with associated ``From`` and ``To`` links, the first thing to do is to get familiar with all the configurations that the connector exposes.
+
+Each config consists of the following information:
+
++------------------+---------------------------------------------------------+
+|   Field          | Description                                             |
++==================+=========================================================+
+| ``id``           | The id of this config                                   |
++------------------+---------------------------------------------------------+
+| ``inputs``       | An array of inputs of this config                       |
++------------------+---------------------------------------------------------+
+| ``name``         | The unique name of this config per connector            |
++------------------+---------------------------------------------------------+
+| ``type``         | The type of this config (LINK/ JOB)                     |
++------------------+---------------------------------------------------------+
+
+A typical config object is shown below:
+
+::
+
+   {
+    id:7,
+    inputs:[
+      {
+         id: 25,
+         name: "throttlingConfig.numExtractors",
+         type: "INTEGER",
+         sensitive: false
+      },
+      {
+         id: 26,
+         name: "throttlingConfig.numLoaders",
+         type: "INTEGER",
+         sensitive: false
+       }
+    ],
+    name: "throttlingConfig",
+    type: "JOB"
+  }
+
+Each input object in a config is structured as follows:
+
++------------------+---------------------------------------------------------+
+|   Field          | Description                                             |
++==================+=========================================================+
+| ``id``           | The id of this input                                    |
++------------------+---------------------------------------------------------+
+| ``name``         | The unique name of this input per config                |
++------------------+---------------------------------------------------------+
+| ``type``         | The data type of this input field                       |
++------------------+---------------------------------------------------------+
+| ``size``         | The length of this input field                          |
++------------------+---------------------------------------------------------+
+| ``sensitive``    | Whether this input contains sensitive information       |
++------------------+---------------------------------------------------------+
+
+
+To send a filled config in the request, you should always use the config id and the input id to map the values to their corresponding names.
+For example, the following request contains an input value ``com.mysql.jdbc.Driver`` with input id ``7`` inside a config with id ``4`` that belongs to a link with id ``3``.
+
+::
+
+      link: {
+            id: 3,
+            enabled: true,
+            link-config-values: [{
+                id: 4,
+                inputs: [{
+                    id: 7,
+                    name: "linkConfig.jdbcDriver",
+                    value: "com.mysql.jdbc.Driver",
+                    type: "STRING",
+                    size: 128,
+                    sensitive: false
+                }, {
+                    id: 8,
+                    name: "linkConfig.connectionString",
+                    value: 
"jdbc%3Amysql%3A%2F%2Fmysql.ent.cloudera.com%2Fsqoop",
+                    type: "STRING",
+                    size: 128,
+                    sensitive: false
+                },
+                ...
+             }
+           }
+
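+Note that the connection string in the example above is percent-encoded (``jdbc%3Amysql%3A...``). A small sketch of producing such a value in Python, assuming your client encodes input values the way these examples do:
+
+::
+
+  from urllib.parse import quote
+
+  # Raw JDBC connection string for the linkConfig.connectionString input.
+  raw = "jdbc:mysql://mysql.ent.cloudera.com/sqoop"
+
+  # safe="" also percent-encodes ":" and "/", matching the examples here.
+  encoded = quote(raw, safe="")
+  print(encoded)  # jdbc%3Amysql%3A%2F%2Fmysql.ent.cloudera.com%2Fsqoop
+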
+Exception Response
+------------------
+
+Each operation on the Sqoop server might return an exception in the HTTP response. Remember to take this into account. The exception code and message can be found in both the header and the body of the response.
+
+Please refer to the "Header Parameters" section to see how to get the exception information from the header.
+
+In the body, the exception is expressed in ``JSON`` format. An example of the 
exception is:
+
+::
+
+  {
+    "message":"DERBYREPO_0030:Unable to load specific job metadata from 
repository - Couldn't find job with id 2",
+    "stack-trace":[
+      {
+        "file":"DerbyRepositoryHandler.java",
+        "line":1111,
+        "class":"org.apache.sqoop.repository.derby.DerbyRepositoryHandler",
+        "method":"findJob"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":451,
+        "class":"org.apache.sqoop.repository.JdbcRepository$16",
+        "method":"doIt"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":90,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"doWithConnection"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":61,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"doWithConnection"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":448,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"findJob"
+      },
+      {
+        "file":"JobRequestHandler.java",
+        "line":238,
+        "class":"org.apache.sqoop.handler.JobRequestHandler",
+        "method":"getJobs"
+      }
+    ],
+    "class":"org.apache.sqoop.common.SqoopException"
+  }
+
+Config and Input Validation Status Response
+--------------------------------------------
+
+The configs and the inputs associated with the connectors also provide custom validation rules for the values given to these input fields. Sqoop applies these custom validators and the corresponding validation logic when config values for the LINK and JOB are posted.
+
+
+An example of an OK status with the persisted ID:
+::
+
+ {
+    "id": 3,
+    "validation-result": [
+        {}
+    ]
+ }
+
+An example of an ERROR status:
+::
+
+   {
+     "validation-result": [
+       {
+        "linkConfig": [
+          {
+            "message": "Invalid URI. URI must either be null or a valid URI. 
Here are a few valid example URIs: hdfs://example.com:8020/, 
hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+            "status": "ERROR"
+          }
+        ]
+      }
+     ]
+   }
+
+Job Submission Status Response
+------------------------------
+
+After starting a job, you can look up its running status. There are 7 possible statuses:
+
++-----------------------------+---------------------------------------------------------+
+|   Status                    | Description                                    
         |
++=============================+=========================================================+
+| ``BOOTING``                 | In the middle of submitting the job            
         |
++-----------------------------+---------------------------------------------------------+
+| ``FAILURE_ON_SUBMIT``       | Unable to submit this job to remote cluster    
         |
++-----------------------------+---------------------------------------------------------+
+| ``RUNNING``                 | The job is running now                         
         |
++-----------------------------+---------------------------------------------------------+
+| ``SUCCEEDED``               | Job finished successfully                      
         |
++-----------------------------+---------------------------------------------------------+
+| ``FAILED``                  | Job failed                                     
         |
++-----------------------------+---------------------------------------------------------+
+| ``NEVER_EXECUTED``          | The job has never been executed since created  
         |
++-----------------------------+---------------------------------------------------------+
+| ``UNKNOWN``                 | The status is unknown                          
         |
++-----------------------------+---------------------------------------------------------+
+
+Header Parameters
+=================
+
+For all responses, the following parameters are available in the HTTP message header:
+
++---------------------------+----------+------------------------------------------------------------------------------+
+|   Parameter               | Required | Description                                                                  |
++===========================+==========+==============================================================================+
+| ``sqoop-error-code``      | false    | The error code when an error occurs on the server side for this request     |
++---------------------------+----------+------------------------------------------------------------------------------+
+| ``sqoop-error-message``   | false    | The explanation for an error code                                            |
++---------------------------+----------+------------------------------------------------------------------------------+
+
+So far, these are the only 2 parameters in the header of the response message. They exist only when something goes wrong on the server,
+and they always come along with an exception message in the response body.
+
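+A sketch of checking these headers on a response, again in Python with ``requests`` (the job id is just an example; any request can fail this way):
+
+::
+
+  import requests
+
+  # base_url as assembled in the Initialization section.
+  response = requests.get(base_url + "/v1/job/2")
+
+  # On failure the server sets both headers and puts a JSON exception
+  # (message, stack-trace, class) in the body.
+  if "sqoop-error-code" in response.headers:
+      print("error code:", response.headers["sqoop-error-code"])
+      print("error message:", response.headers["sqoop-error-message"])
+      print("exception class:", response.json().get("class"))
+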
+REST APIs
+==========
+
+This section describes all the REST APIs supported by the Sqoop server.
+
+For all Sqoop requests, the following request parameter is added automatically. Note that this user name is only used in simple authentication mode. In Kerberos mode, it is ignored by the Sqoop server, and the user name in the UGI authenticated by the Kerberos server is used instead.
+
++---------------------------+---------------------------------------------------------+
+|   Parameter               | Description                                             |
++===========================+=========================================================+
+| ``user.name``             | The name of the user who makes the requests             |
++---------------------------+---------------------------------------------------------+
+
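+In simple authentication mode the user name can ride along as a query parameter; a one-line sketch (Python, ``requests`` assumed):
+
+::
+
+  import requests
+
+  # base_url as assembled in the Initialization section.
+  response = requests.get(base_url + "/version", params={"user.name": "root"})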
+
+/version - [GET] - Get Sqoop Version
+-------------------------------------
+
+Get all the version metadata of the Sqoop software on the server side.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Fields of Response:
+
++--------------------+---------------------------------------------------------+
+|   Field            | Description                                             |
++====================+=========================================================+
+| ``source-revision``| The revision number of the Sqoop source code            |
++--------------------+---------------------------------------------------------+
+| ``api-versions``   | The version of the network protocol                     |
++--------------------+---------------------------------------------------------+
+| ``build-date``     | The Sqoop release date                                  |
++--------------------+---------------------------------------------------------+
+| ``user``           | The user who made the release                           |
++--------------------+---------------------------------------------------------+
+| ``source-url``     | The URL of the source code trunk                        |
++--------------------+---------------------------------------------------------+
+| ``build-version``  | The version of Sqoop on the server side                 |
++--------------------+---------------------------------------------------------+
+
+
+* Response Example:
+
+::
+
+   {
+    source-url: 
"git://vbasavaraj.local/Users/vbasavaraj/Projects/SqoopRefactoring/sqoop2/common",
+    source-revision: "418c5f637c3f09b94ea7fc3b0a4610831373a25f",
+    build-version: "2.0.0-SNAPSHOT",
+    api-versions: [
+       "v1"
+     ],
+    user: "vbasavaraj",
+    build-date: "Mon Nov 3 08:18:21 PST 2014"
+   }
+
+/v1/connectors - [GET]  Get all Connectors
+-------------------------------------------
+
+Get all the connectors registered in Sqoop.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example
+
+::
+
+  {
+    connectors: [{
+        id: 1,
+        link-config: [],
+        job-config: {},
+        name: "hdfs-connector",
+        class: "org.apache.sqoop.connector.hdfs.HdfsConnector",
+        all-config-resources: {},
+        version: "2.0.0-SNAPSHOT"
+    }, {
+        id: 2,
+        link-config: [],
+        job-config: {},
+        name: "generic-jdbc-connector",
+        class: "org.apache.sqoop.connector.jdbc.GenericJdbcConnector",
+        all-config-resources: {},
+        version: "2.0.0-SNAPSHOT"
+    }]
+  }
+
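+A sketch of fetching this list and looking up a connector id by name (Python, ``requests`` assumed):
+
+::
+
+  import requests
+
+  # base_url as assembled in the Initialization section.
+  connectors = requests.get(base_url + "/v1/connectors").json()["connectors"]
+  by_name = {c["name"]: c["id"] for c in connectors}
+  print(by_name["generic-jdbc-connector"])  # e.g. 2
+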
+/v1/connector/[cname] or /v1/connector/[cid] - [GET] - Get Connector
+---------------------------------------------------------------------
+
+Provide the id or unique name of the connector in the ``[cid]`` or ``[cname]`` part of the URL.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Fields of Response:
+
++--------------------------+----------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                            |
++==========================+========================================================================================+
+| ``id``                   | The id for the connector (registered as a configurable)                                |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``job-config``           | Connector job config and inputs for both FROM and TO                                   |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``link-config``          | Connector link config and inputs                                                       |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``all-config-resources`` | All config input labels and descriptions for the given connector                       |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``version``              | The build version required for config and input data upgrades                          |
++--------------------------+----------------------------------------------------------------------------------------+
+
+* Response Example:
+
+::
+
+   {
+    connector: {
+        id: 1,
+        job-config: {
+            TO: [{
+                id: 3,
+                inputs: [{
+                    id: 3,
+                    values: "TEXT_FILE,SEQUENCE_FILE",
+                    name: "toJobConfig.outputFormat",
+                    type: "ENUM",
+                    sensitive: false
+                }, {
+                    id: 4,
+                    values: 
"NONE,DEFAULT,DEFLATE,GZIP,BZIP2,LZO,LZ4,SNAPPY,CUSTOM",
+                    name: "toJobConfig.compression",
+                    type: "ENUM",
+                    sensitive: false
+                }, {
+                    id: 5,
+                    name: "toJobConfig.customCompression",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }, {
+                    id: 6,
+                    name: "toJobConfig.outputDirectory",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }],
+                name: "toJobConfig",
+                type: "JOB"
+            }],
+            FROM: [{
+                id: 2,
+                inputs: [{
+                    id: 2,
+                    name: "fromJobConfig.inputDirectory",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }],
+                name: "fromJobConfig",
+                type: "JOB"
+            }]
+        },
+        link-config: [{
+            id: 1,
+            inputs: [{
+                id: 1,
+                name: "linkConfig.uri",
+                type: "STRING",
+                size: 255,
+                sensitive: false
+            }],
+            name: "linkConfig",
+            type: "LINK"
+        }],
+        name: "hdfs-connector",
+        class: "org.apache.sqoop.connector.hdfs.HdfsConnector",
+        all-config-resources: {
+            fromJobConfig.label: "From Job configuration",
+                toJobConfig.ignored.label: "Ignored",
+                fromJobConfig.help: "Specifies information required to get 
data from Hadoop ecosystem",
+                toJobConfig.ignored.help: "This value is ignored",
+                toJobConfig.label: "ToJob configuration",
+                toJobConfig.storageType.label: "Storage type",
+                fromJobConfig.inputDirectory.label: "Input directory",
+                toJobConfig.outputFormat.label: "Output format",
+                toJobConfig.outputDirectory.label: "Output directory",
+                toJobConfig.outputDirectory.help: "Output directory for final 
data",
+                toJobConfig.compression.help: "Compression that should be used 
for the data",
+                toJobConfig.outputFormat.help: "Format in which data should be 
serialized",
+                toJobConfig.customCompression.label: "Custom compression 
format",
+                toJobConfig.compression.label: "Compression format",
+                linkConfig.label: "Link configuration",
+                toJobConfig.customCompression.help: "Full class name of the 
custom compression",
+                toJobConfig.storageType.help: "Target on Hadoop ecosystem 
where to store data",
+                linkConfig.help: "Here you supply information necessary to 
connect to HDFS",
+                linkConfig.uri.help: "HDFS URI used to connect to HDFS",
+                linkConfig.uri.label: "HDFS URI",
+                fromJobConfig.inputDirectory.help: "Directory that should be 
exported",
+                toJobConfig.help: "You must supply the information requested 
in order to get information where you want to store your data."
+        },
+        version: "2.0.0-SNAPSHOT"
+     }
+   }
+
+
+/v1/driver - [GET] - Get Sqoop Driver
+-----------------------------------------------
+
+The driver exposes configurations required for job execution.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Fields of Response:
+
++--------------------------+----------------------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                                        |
++==========================+====================================================================================================+
+| ``id``                   | The id for the driver (registered as a configurable)                                               |
++--------------------------+----------------------------------------------------------------------------------------------------+
+| ``job-config``           | Driver job config and inputs                                                                       |
++--------------------------+----------------------------------------------------------------------------------------------------+
+| ``version``              | The build version of the driver                                                                    |
++--------------------------+----------------------------------------------------------------------------------------------------+
+| ``all-config-resources`` | Driver exposed config and input labels and descriptions                                            |
++--------------------------+----------------------------------------------------------------------------------------------------+
+
+* Response Example:
+
+::
+
+ {
+    id: 3,
+    job-config: [{
+        id: 7,
+        inputs: [{
+            id: 25,
+            name: "throttlingConfig.numExtractors",
+            type: "INTEGER",
+            sensitive: false
+        }, {
+            id: 26,
+            name: "throttlingConfig.numLoaders",
+            type: "INTEGER",
+            sensitive: false
+        }],
+        name: "throttlingConfig",
+        type: "JOB"
+    }],
+    all-config-resources: {
+        throttlingConfig.numExtractors.label: "Extractors",
+            throttlingConfig.numLoaders.help: "Number of loaders that Sqoop 
will use",
+            throttlingConfig.numLoaders.label: "Loaders",
+            throttlingConfig.label: "Throttling resources",
+            throttlingConfig.numExtractors.help: "Number of extractors that 
Sqoop will use",
+            throttlingConfig.help: "Set throttling boundaries to not overload 
your systems"
+    },
+    version: "1"
+ }
+
+/v1/links/ - [GET]  Get all links
+-------------------------------------------
+
+Get all the links created in Sqoop.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example
+
+::
+
+  {
+    links: [
+      {
+        id: 1,
+        enabled: true,
+        update-user: "root",
+        link-config-values: [],
+        name: "First Link",
+        creation-date: 1415309361756,
+        connector-id: 1,
+        update-date: 1415309361756,
+        creation-user: "root"
+      },
+      {
+        id: 2,
+        enabled: true,
+        update-user: "root",
+        link-config-values: [],
+        name: "Second Link",
+        creation-date: 1415309390807,
+        connector-id: 2,
+        update-date: 1415309390807,
+        creation-user: "root"
+      }
+    ]
+  }
+
+
+/v1/links?cname=[cname] - [GET]  Get all links by Connector
+------------------------------------------------------------
+Get all the links for a given connector identified by the ``[cname]`` part.
+
+
+/v1/link/[lname]  or /v1/link/[lid] - [GET] - Get Link
+-------------------------------------------------------------------------------
+
+Provide the id or unique name of the link in the ``[lid]`` or ``[lname]`` part of the URL.
+
+Get all the details of the link, including the id, name, type and the corresponding config input values for the link.
+
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example:
+
+::
+
+ {
+    link: {
+        id: 1,
+        enabled: true,
+        link-config-values: [{
+            id: 1,
+            inputs: [{
+                id: 1,
+                name: "linkConfig.uri",
+                value: "hdfs%3A%2F%2Fnamenode%3A8090",
+                type: "STRING",
+                size: 255,
+                sensitive: false
+            }],
+            name: "linkConfig",
+            type: "LINK"
+        }],
+        update-user: "root",
+        name: "First Link",
+        creation-date: 1415287846371,
+        connector-id: 1,
+        update-date: 1415287846371,
+        creation-user: "root"
+    }
+ }
+
+/v1/link - [POST] - Create Link
+---------------------------------------------------------
+
+Create a new link object. Provide values to the link config inputs for the 
ones that are required.
+
+* Method: ``POST``
+* Format: ``JSON``
+* Fields of Request:
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``link``                 | The root of the post data in JSON                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``id``                   | The id of the link; can be left blank in the post data                               |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``enabled``              | Whether to enable this link (true/false)                                             |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-date``          | The last updated time of this link                                                   |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-date``        | The creation time of this link                                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-user``          | The user who updated this link                                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-user``        | The user who created this link                                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``name``                 | The name of this link                                                                |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``link-config-values``   | Config input values for the link config of the corresponding connector              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``connector-id``         | The id of the connector used for this link                                           |
++--------------------------+--------------------------------------------------------------------------------------+
+
+* Request Example:
+
+::
+
+  {
+    link: {
+        id: -1,
+        enabled: true,
+        link-config-values: [{
+            id: 1,
+            inputs: [{
+                id: 1,
+                name: "linkConfig.uri",
+                value: 
"hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+                type: "STRING",
+                size: 255,
+                sensitive: false
+            }],
+            name: "testInput",
+            type: "LINK"
+        }],
+        update-user: "root",
+        name: "testLink",
+        creation-date: 1415202223048,
+        connector-id: 1,
+        update-date: 1415202223048,
+        creation-user: "root"
+    }
+  }
+
+* Fields of Response:
+
++---------------------------+--------------------------------------------------------------------------------------+
+|   Field                   | Description                                                                          |
++===========================+======================================================================================+
+| ``id``                    | The id assigned to this newly created link                                           |
++---------------------------+--------------------------------------------------------------------------------------+
+| ``validation-result``     | The validation status for the link config inputs given in the post data             |
++---------------------------+--------------------------------------------------------------------------------------+
+
+* ERROR Response Example:
+
+::
+
+   {
+     "validation-result": [
+         {
+             "linkConfig": [
+                 {
+                     "message": "Invalid URI. URI must either be null or a 
valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, 
hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+                     "status": "ERROR"
+                 }
+             ]
+         }
+     ]
+   }
+
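+A sketch of calling this endpoint in Python (``requests`` assumed; the connector id and the config/input ids come from the Get Connector response and will differ per deployment):
+
+::
+
+  import requests
+
+  payload = {
+      "link": {
+          "id": -1,
+          "enabled": True,
+          "name": "testLink",
+          "connector-id": 1,
+          "link-config-values": [{
+              "id": 1,
+              "inputs": [{
+                  "id": 1,
+                  "name": "linkConfig.uri",
+                  "value": "hdfs%3A%2F%2Fnamenode%3A8020",
+                  "type": "STRING",
+                  "size": 255,
+                  "sensitive": False
+              }],
+              "name": "linkConfig",
+              "type": "LINK"
+          }]
+      }
+  }
+
+  # base_url as assembled in the Initialization section.
+  result = requests.post(base_url + "/v1/link", json=payload).json()
+  if result.get("id") is not None:   # the persisted id comes back on success
+      print("created link", result["id"])
+  else:                              # otherwise inspect validation-result
+      print(result["validation-result"])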
+
+/v1/link/[lname]  or /v1/link/[lid] - [PUT] - Update Link
+---------------------------------------------------------
+
+Update an existing link object with name ``[lname]`` or id ``[lid]``. To make the procedure of filling inputs easier, the general practice is to get the link first and then change some of the input values.
+
+* Method: ``PUT``
+* Format: ``JSON``
+
+* OK Response Example:
+
+::
+
+  {
+    "validation-result": [
+        {}
+    ]
+  }
+
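+A sketch of that get-modify-put cycle (Python, ``requests`` assumed; the PUT body is assumed to mirror the ``link`` object returned by GET):
+
+::
+
+  import requests
+
+  # Fetch the existing link, tweak one input value, and put it back.
+  link = requests.get(base_url + "/v1/link/1").json()["link"]
+  for config in link["link-config-values"]:
+      for field in config["inputs"]:
+          if field["name"] == "linkConfig.uri":
+              field["value"] = "hdfs%3A%2F%2Fnamenode%3A8020"
+
+  response = requests.put(base_url + "/v1/link/1", json={"link": link})
+  print(response.json()["validation-result"])
+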
+/v1/link/[lname]  or /v1/link/[lid]  - [DELETE] - Delete Link
+-----------------------------------------------------------------
+
+Delete a link with name ``[lname]`` or id ``[lid]``.
+
+* Method: ``DELETE``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/link/[lid]/enable  or /v1/link/[lname]/enable  - [PUT] - Enable Link
+--------------------------------------------------------------------------------
+
+Enable a link with id ``lid`` or name ``lname``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/link/[lid]/disable or /v1/link/[lname]/disable - [PUT] - Disable Link
+--------------------------------------------------------------------------------
+
+Disable a link with id ``lid`` or name ``lname``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
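+These enable/disable toggles (and their job equivalents below) are bare ``PUT`` requests with no body; a one-line sketch (Python, ``requests`` assumed):
+
+::
+
+  import requests
+
+  # base_url as assembled in the Initialization section.
+  requests.put(base_url + "/v1/link/1/disable")  # .../enable turns it back on
+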
+/v1/jobs/ - [GET]  Get all jobs
+-------------------------------------------
+
+Get all the jobs created in Sqoop.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example:
+
+::
+
+  {
+     jobs: [{
+        driver-config-values: [],
+            enabled: true,
+            from-connector-id: 1,
+            update-user: "root",
+            to-config-values: [],
+            to-connector-id: 2,
+            creation-date: 1415310157618,
+            update-date: 1415310157618,
+            creation-user: "root",
+            id: 1,
+            to-link-id: 2,
+            from-config-values: [],
+            name: "First Job",
+            from-link-id: 1
+       },{
+        driver-config-values: [],
+            enabled: true,
+            from-connector-id: 2,
+            update-user: "root",
+            to-config-values: [],
+            to-connector-id: 1,
+            creation-date: 1415310650600,
+            update-date: 1415310650600,
+            creation-user: "root",
+            id: 2,
+            to-link-id: 1,
+            from-config-values: [],
+            name: "Second Job",
+            from-link-id: 2
+       }]
+  }
+
+/v1/jobs?cname=[cname] - [GET]  Get all jobs by connector
+------------------------------------------------------------
+Get all the jobs for a given connector identified by the ``[cname]`` part.
+
+
+/v1/job/[jname] or /v1/job/[jid] - [GET] - Get Job
+-----------------------------------------------------
+
+Provide the name or the id of the job in the ``[jname]`` or ``[jid]`` part of the URL.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example:
+
+::
+
+  {
+    job: {
+        driver-config-values: [{
+                id: 7,
+                inputs: [{
+                    id: 25,
+                    name: "throttlingConfig.numExtractors",
+                    value: "3",
+                    type: "INTEGER",
+                    sensitive: false
+                }, {
+                    id: 26,
+                    name: "throttlingConfig.numLoaders",
+                    value: "3",
+                    type: "INTEGER",
+                    sensitive: false
+                }],
+                name: "throttlingConfig",
+                type: "JOB"
+            }],
+            enabled: true,
+            from-connector-id: 1,
+            update-user: "root",
+            to-config-values: [{
+                id: 6,
+                inputs: [{
+                    id: 19,
+                    name: "toJobConfig.schemaName",
+                    type: "STRING",
+                    size: 50,
+                    sensitive: false
+                }, {
+                    id: 20,
+                    name: "toJobConfig.tableName",
+                    value: "text",
+                    type: "STRING",
+                    size: 2000,
+                    sensitive: false
+                }, {
+                    id: 21,
+                    name: "toJobConfig.sql",
+                    type: "STRING",
+                    size: 50,
+                    sensitive: false
+                }, {
+                    id: 22,
+                    name: "toJobConfig.columns",
+                    type: "STRING",
+                    size: 50,
+                    sensitive: false
+                }, {
+                    id: 23,
+                    name: "toJobConfig.stageTableName",
+                    type: "STRING",
+                    size: 2000,
+                    sensitive: false
+                }, {
+                    id: 24,
+                    name: "toJobConfig.shouldClearStageTable",
+                    type: "BOOLEAN",
+                    sensitive: false
+                }],
+                name: "toJobConfig",
+                type: "JOB"
+            }],
+            to-connector-id: 2,
+            creation-date: 1415310157618,
+            update-date: 1415310157618,
+            creation-user: "root",
+            id: 1,
+            to-link-id: 2,
+            from-config-values: [{
+                id: 2,
+                inputs: [{
+                    id: 2,
+                    name: "fromJobConfig.inputDirectory",
+                    value: 
"hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }],
+                name: "fromJobConfig",
+                type: "JOB"
+            }],
+            name: "First Job",
+            from-link-id: 1
+    }
+ }
+
+
+/v1/job - [POST] - Create Job
+---------------------------------------------------------
+
+Create a new job object with the corresponding config values.
+
+* Method: ``POST``
+* Format: ``JSON``
+
+* Fields of Request:
+
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``job``                  | The root of the post data in JSON                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``from-link-id``         | The id of the FROM link for the job                                                  |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``to-link-id``           | The id of the TO link for the job                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``id``                   | The id of the job; can be left blank in the post data                                |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``enabled``              | Whether to enable this job (true/false)                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-date``          | The last updated time of this job                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-date``        | The creation time of this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-user``          | The user who updated this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-user``        | The user who created this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``name``                 | The name of this job                                                                 |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``from-config-values``   | Config input values for the FROM part of the job                                     |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``to-config-values``     | Config input values for the TO part of the job                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``driver-config-values`` | Config input values for the driver                                                   |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``from-connector-id``    | The ids of the connectors used for the FROM and TO parts of the job                  |
+| and ``to-connector-id``  | (see the request example below)                                                      |
++--------------------------+--------------------------------------------------------------------------------------+
+
+
+* Request Example:
+
+::
+
+ {
+   job: {
+     driver-config-values: [
+       {
+         id: 7,
+         inputs: [
+           {
+             id: 25,
+             name: "throttlingConfig.numExtractors",
+             value: "3",
+             type: "INTEGER",
+             sensitive: false
+           },
+           {
+             id: 26,
+             name: "throttlingConfig.numLoaders",
+             value: "3",
+             type: "INTEGER",
+             sensitive: false
+           }
+         ],
+         name: "throttlingConfig",
+         type: "JOB"
+       }
+     ],
+     enabled: true,
+     from-connector-id: 1,
+     update-user: "root",
+     to-config-values: [
+       {
+         id: 6,
+         inputs: [
+           {
+             id: 19,
+             name: "toJobConfig.schemaName",
+             type: "STRING",
+             size: 50,
+             sensitive: false
+           },
+           {
+             id: 20,
+             name: "toJobConfig.tableName",
+             value: "text",
+             type: "STRING",
+             size: 2000,
+             sensitive: false
+           },
+           {
+             id: 21,
+             name: "toJobConfig.sql",
+             type: "STRING",
+             size: 50,
+             sensitive: false
+           },
+           {
+             id: 22,
+             name: "toJobConfig.columns",
+             type: "STRING",
+             size: 50,
+             sensitive: false
+           },
+           {
+             id: 23,
+             name: "toJobConfig.stageTableName",
+             type: "STRING",
+             size: 2000,
+             sensitive: false
+           },
+           {
+             id: 24,
+             name: "toJobConfig.shouldClearStageTable",
+             type: "BOOLEAN",
+             sensitive: false
+           }
+         ],
+         name: "toJobConfig",
+         type: "JOB"
+       }
+     ],
+     to-connector-id: 2,
+     creation-date: 1415310157618,
+     update-date: 1415310157618,
+     creation-user: "root",
+     id: -1,
+     to-link-id: 2,
+     from-config-values: [
+       {
+         id: 2,
+         inputs: [
+           {
+             id: 2,
+             name: "fromJobConfig.inputDirectory",
+             value: 
"hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+             type: "STRING",
+             size: 255,
+             sensitive: false
+           }
+         ],
+         name: "fromJobConfig",
+         type: "JOB"
+       }
+     ],
+     name: "Test Job",
+     from-link-id: 1
+    }
+  }
+
+* Fields of Response:
+
++---------------------------+--------------------------------------------------------------------------------------+
+|   Field                   | Description                                                                          |
++===========================+======================================================================================+
+| ``id``                    | The id assigned to this newly created job                                            |
++---------------------------+--------------------------------------------------------------------------------------+
+| ``validation-result``     | The validation status for the job config and driver config inputs in the post data  |
++---------------------------+--------------------------------------------------------------------------------------+
+
+
+* ERROR Response Example:
+
+::
+
+   {
+     "validation-result": [
+         {
+             "linkConfig": [
+                 {
+                     "message": "Invalid URI. URI must either be null or a 
valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, 
hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+                     "status": "ERROR"
+                 }
+             ]
+         }
+     ]
+   }
+
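+Creating a job over REST follows the same pattern as creating a link; a compressed sketch (Python, ``requests`` assumed, with the payload loaded from a hypothetical ``job.json`` file shaped like the request example above):
+
+::
+
+  import json
+  import requests
+
+  with open("job.json") as f:
+      payload = json.load(f)
+
+  # base_url as assembled in the Initialization section.
+  result = requests.post(base_url + "/v1/job", json=payload).json()
+  print(result.get("id"), result.get("validation-result"))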
+
+/v1/job/[jid] - [PUT] - Update Job
+---------------------------------------------------------
+
+Update an existing job object with id ``[jid]``. To make the procedure of filling inputs easier, the general practice is to get the existing job object first and then change some of the inputs.
+
+* Method: ``PUT``
+* Format: ``JSON``
+
+The request and response content are the same as in Create Job.
+
+* OK Response Example:
+
+::
+
+  {
+    "validation-result": [
+        {}
+    ]
+  }
+
+
+/v1/job/[jid] - [DELETE] - Delete Job
+---------------------------------------------------------
+
+Delete a job with id ``jid``.
+
+* Method: ``DELETE``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/job/[jid]/enable - [PUT] - Enable Job
+---------------------------------------------------------
+
+Enable a job with id ``jid``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/job/[jid]/disable - [PUT] - Disable Job
+---------------------------------------------------------
+
+Disable a job with id ``jid``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+
+/v1/job/[jid]/start or /v1/job/[jname]/start - [PUT] - Start Job
+---------------------------------------------------------------------------------
+
+Start a job with name ``[jname]`` or with id ``[jid]`` to trigger the job execution.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``Submission Record``
+
+* BOOTING Response Example
+
+::
+
+  {
+    "submission": {
+      "progress": -1,
+      "last-update-date": 1415312531188,
+      "external-id": "job_1412137947693_0004",
+      "status": "BOOTING",
+      "job": 2,
+      "creation-date": 1415312531188,
+      "to-schema": {
+        "created": 1415312531426,
+        "name": "HDFS file",
+        "columns": []
+      },
+      "external-link": 
"http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/";,
+      "from-schema": {
+        "created": 1415312531342,
+        "name": "text",
+        "columns": [
+          {
+            "name": "id",
+            "nullable": true,
+            "unsigned": null,
+            "type": "FIXED_POINT",
+            "size": null
+          },
+          {
+            "name": "txt",
+            "nullable": true,
+            "type": "TEXT",
+            "size": null
+          }
+        ]
+      }
+    }
+  }
+
+* SUCCEEDED Response Example
+
+::
+
+   {
+     submission: {
+       progress: -1,
+       last-update-date: 1415312809485,
+       external-id: "job_1412137947693_0004",
+       status: "SUCCEEDED",
+       job: 2,
+       creation-date: 1415312531188,
+       external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+       counters: {
+         org.apache.hadoop.mapreduce.JobCounter: {
+           SLOTS_MILLIS_MAPS: 373553,
+           MB_MILLIS_MAPS: 382518272,
+           TOTAL_LAUNCHED_MAPS: 10,
+           MILLIS_MAPS: 373553,
+           VCORES_MILLIS_MAPS: 373553,
+           OTHER_LOCAL_MAPS: 10
+         },
+         org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: {
+           BYTES_WRITTEN: 0
+         },
+         org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: {
+           BYTES_READ: 0
+         },
+         org.apache.hadoop.mapreduce.TaskCounter: {
+           MAP_INPUT_RECORDS: 0,
+           MERGED_MAP_OUTPUTS: 0,
+           PHYSICAL_MEMORY_BYTES: 4065599488,
+           SPILLED_RECORDS: 0,
+           COMMITTED_HEAP_BYTES: 3439853568,
+           CPU_MILLISECONDS: 236900,
+           FAILED_SHUFFLE: 0,
+           VIRTUAL_MEMORY_BYTES: 15231422464,
+           SPLIT_RAW_BYTES: 1187,
+           MAP_OUTPUT_RECORDS: 1000000,
+           GC_TIME_MILLIS: 7282
+         },
+         org.apache.hadoop.mapreduce.FileSystemCounter: {
+           FILE_WRITE_OPS: 0,
+           FILE_READ_OPS: 0,
+           FILE_LARGE_READ_OPS: 0,
+           FILE_BYTES_READ: 0,
+           HDFS_BYTES_READ: 1187,
+           FILE_BYTES_WRITTEN: 1191230,
+           HDFS_LARGE_READ_OPS: 0,
+           HDFS_WRITE_OPS: 10,
+           HDFS_READ_OPS: 10,
+           HDFS_BYTES_WRITTEN: 276389736
+         },
+         org.apache.sqoop.submission.counter.SqoopCounters: {
+           ROWS_READ: 1000000
+         }
+       }
+     }
+   }
+
+
+* ERROR Response Example
+
+::
+
+  {
+    "submission": {
+      "progress": -1,
+      "last-update-date": 1415312390570,
+      "status": "FAILURE_ON_SUBMIT",
+      "error-summary": "org.apache.sqoop.common.SqoopException: 
GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run",
+      "job": 1,
+      "creation-date": 1415312390570,
+      "to-schema": {
+        "created": 1415312390797,
+        "name": "text",
+        "columns": [
+          {
+            "name": "id",
+            "nullable": true,
+            "unsigned": null,
+            "type": "FIXED_POINT",
+            "size": null
+          },
+          {
+            "name": "txt",
+            "nullable": true,
+            "type": "TEXT",
+            "size": null
+          }
+        ]
+      },
+      "from-schema": {
+        "created": 1415312390778,
+        "name": "HDFS file",
+        "columns": [
+        ]
+      },
+      "error-details": "org.apache.sqoop.common.SqoopException: 
GENERIC_HDFS_CONNECTOR_00"
+    }
+  }
+
+/v1/job/[jid]/stop or /v1/job/[jname]/stop - [PUT] - Stop Job
+---------------------------------------------------------------------------------
+
+Stop a job with name ``[jname]`` or with id ``[jid]`` to abort the running job.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``Submission Record``
+
+/v1/job/[jid]/status or /v1/job/[jname]/status - [GET] - Get Job Status
+---------------------------------------------------------------------------------
+
+Get the status of the running job with name ``[jname]`` or with id ``[jid]``.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``Submission Record``
+
+::
+
+  {
+      "submission": {
+          "progress": 0.25,
+          "last-update-date": 1415312603838,
+          "external-id": "job_1412137947693_0004",
+          "status": "RUNNING",
+          "job": 2,
+          "creation-date": 1415312531188,
+          "external-link": 
"http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/";
+      }
+  }
+
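+A sketch of starting a job and polling its status until it reaches a terminal state (Python, ``requests`` assumed, using ``PUT`` as the section headings indicate; the polling interval is arbitrary):
+
+::
+
+  import time
+  import requests
+
+  # base_url as assembled in the Initialization section.
+  requests.put(base_url + "/v1/job/2/start")
+
+  # Poll until the submission leaves the BOOTING/RUNNING states.
+  while True:
+      submission = requests.get(base_url + "/v1/job/2/status").json()["submission"]
+      print(submission["status"], "progress:", submission.get("progress"))
+      if submission["status"] not in ("BOOTING", "RUNNING"):
+          break
+      time.sleep(10)
+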
+/v1/submissions - [GET] - Get all job Submissions
+----------------------------------------------------------------------
+
+Get all the submissions for every job started in Sqoop.
+
+/v1/submissions?jname=[jname] - [GET] - Get Submissions by Job
+----------------------------------------------------------------------
+
+Retrieve all past job submissions for the given job. Each submission record will have details such as the status, counters and URLs for those submissions.
+
+Provide the name of the job in the ``[jname]`` part of the URL.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+* Fields of Response:
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``progress``             | The progress of the running Sqoop job                                                |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``job``                  | The id of the Sqoop job                                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-date``        | The submission timestamp                                                             |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``last-update-date``     | The timestamp of the last status update                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``status``               | The status of this job submission                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``external-id``          | The id of the Sqoop job running on Hadoop                                            |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``external-link``        | The link to track the job status on Hadoop                                           |
++--------------------------+--------------------------------------------------------------------------------------+
+
+* Response Example:
+
+::
+
+  {
+    submissions: [
+      {
+        progress: -1,
+        last-update-date: 1415312809485,
+        external-id: "job_1412137947693_0004",
+        status: "SUCCEEDED",
+        job: 2,
+        creation-date: 1415312531188,
+        external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+        counters: {
+          org.apache.hadoop.mapreduce.JobCounter: {
+            SLOTS_MILLIS_MAPS: 373553,
+            MB_MILLIS_MAPS: 382518272,
+            TOTAL_LAUNCHED_MAPS: 10,
+            MILLIS_MAPS: 373553,
+            VCORES_MILLIS_MAPS: 373553,
+            OTHER_LOCAL_MAPS: 10
+          },
+          org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: {
+            BYTES_WRITTEN: 0
+          },
+          org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: {
+            BYTES_READ: 0
+          },
+          org.apache.hadoop.mapreduce.TaskCounter: {
+            MAP_INPUT_RECORDS: 0,
+            MERGED_MAP_OUTPUTS: 0,
+            PHYSICAL_MEMORY_BYTES: 4065599488,
+            SPILLED_RECORDS: 0,
+            COMMITTED_HEAP_BYTES: 3439853568,
+            CPU_MILLISECONDS: 236900,
+            FAILED_SHUFFLE: 0,
+            VIRTUAL_MEMORY_BYTES: 15231422464,
+            SPLIT_RAW_BYTES: 1187,
+            MAP_OUTPUT_RECORDS: 1000000,
+            GC_TIME_MILLIS: 7282
+          },
+          org.apache.hadoop.mapreduce.FileSystemCounter: {
+            FILE_WRITE_OPS: 0,
+            FILE_READ_OPS: 0,
+            FILE_LARGE_READ_OPS: 0,
+            FILE_BYTES_READ: 0,
+            HDFS_BYTES_READ: 1187,
+            FILE_BYTES_WRITTEN: 1191230,
+            HDFS_LARGE_READ_OPS: 0,
+            HDFS_WRITE_OPS: 10,
+            HDFS_READ_OPS: 10,
+            HDFS_BYTES_WRITTEN: 276389736
+          },
+          org.apache.sqoop.submission.counter.SqoopCounters: {
+            ROWS_READ: 1000000
+          }
+        }
+      },
+      {
+        progress: -1,
+        last-update-date: 1415312390570,
+        status: "FAILURE_ON_SUBMIT",
+        error-summary: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run",
+        job: 1,
+        creation-date: 1415312390570,
+        error-details: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner...."
+      }
+    ]
+  }
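+
+As a quick illustration, such a submission list can be retrieved with any HTTP client, for example curl (host, port and webapp are deployment-specific; the ``/v1/submissions`` path is an assumption based on the API version reported by the server):
+
+::
+
+  curl -H "Accept: application/json" http://<host>:<port>/<webapp>/v1/submissions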
\ No newline at end of file

Added: 
websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/SecurityGuideOnSqoop2.txt
==============================================================================
--- 
websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/SecurityGuideOnSqoop2.txt
 (added)
+++ 
websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/SecurityGuideOnSqoop2.txt
 Fri Feb 27 00:59:26 2015
@@ -0,0 +1,173 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=========================
+Security Guide On Sqoop 2
+=========================
+
+Most Hadoop components, such as HDFS, YARN, Hive, etc., have security frameworks that support Simple, Kerberos and LDAP authentication. Currently Sqoop 2 provides two types of authentication: simple and Kerberos. The authentication module is pluggable, so more authentication types can be added.
+
+Simple Authentication
+=====================
+
+Configuration
+-------------
+Modify the Sqoop configuration file, normally located at <Sqoop Folder>/server/config/sqoop.properties.
+
+::
+
+  org.apache.sqoop.authentication.type=SIMPLE
+  
org.apache.sqoop.authentication.handler=org.apache.sqoop.security.Authentication.SimpleAuthenticationHandler
+  org.apache.sqoop.anonymous=true
+
+-      Simple authentication is used by default. Commenting out the authentication configuration will also result in simple authentication being used.
+
+Run command
+-----------
+Start Sqoop server as usual.
+
+::
+
+  <Sqoop Folder>/bin/sqoop.sh server start
+
+Start Sqoop client as usual.
+
+::
+
+  <Sqoop Folder>/bin/sqoop.sh client
+
+Kerberos Authentication
+=======================
+
+Kerberos is a computer network authentication protocol which works on the 
basis of 'tickets' to allow nodes communicating over a non-secure network to 
prove their identity to one another in a secure manner. Its designers aimed it 
primarily at a client–server model and it provides mutual 
authentication—both the user and the server verify each other's identity. 
Kerberos protocol messages are protected against eavesdropping and replay 
attacks.
+
+Dependency
+----------
+Set up a KDC server. Skip this step if a KDC server already exists. It is difficult to cover every way Kerberos can be set up (e.g., there are cross-realm setups and multi-trust environments); this section describes how to set up the sqoop principals with a local deployment of MIT Kerberos.
+
+-      All Kerberos-authenticated components need a KDC server. If the current Hadoop cluster already uses Kerberos authentication, a KDC server should exist.
+-      If there is no KDC server, follow http://web.mit.edu/kerberos/krb5-devel/doc/admin/install_kdc.html to set up one.
+
+Configure Hadoop cluster to use Kerberos authentication.
+
+-      The authentication type should be set at the cluster level: all components must use the same authentication type, Kerberos or not. In other words, Sqoop with Kerberos authentication cannot communicate with other Hadoop components, such as HDFS, YARN, Hive, etc., that run without Kerberos authentication, and vice versa.
+-      How to set up a Hadoop cluster with Kerberos authentication is out of the scope of this document. Follow the related links, such as https://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/SecureMode.html
+
+Create keytab and principal for Sqoop 2 via kadmin in command line.
+
+::
+
+  addprinc -randkey HTTP/<FQDN>@<REALM>
+  addprinc -randkey sqoop/<FQDN>@<REALM>
+  xst -k /home/kerberos/sqoop.keytab HTTP/<FQDN>@<REALM>
+  xst -k /home/kerberos/sqoop.keytab sqoop/<FQDN>@<REALM>
+
+-      The <FQDN> should be replaced by the FQDN of the server, which can be found by running ``hostname -f`` on the command line.
+-      The <REALM> should be replaced by the realm name in the krb5.conf file generated when installing the KDC server in the previous step.
+-      The principal HTTP/<FQDN>@<REALM> is used in communication between the Sqoop client and the Sqoop server. Since the Sqoop server is an HTTP server, the HTTP principal is required during the SPNEGO process, and it is case sensitive.
+-      HTTP requests can also be sent from other clients, such as a browser, wget or curl, provided they support SPNEGO.
+-      The principal sqoop/<FQDN>@<REALM> is used in communication between the Sqoop server and HDFS/YARN as the credential of the Sqoop server.
+
+Configuration
+-------------
+Modify the Sqoop configuration file, normally located at <Sqoop Folder>/server/config/sqoop.properties.
+
+::
+
+  org.apache.sqoop.authentication.type=KERBEROS
+  
org.apache.sqoop.authentication.handler=org.apache.sqoop.security.Authentication.KerberosAuthenticationHandler
+  org.apache.sqoop.authentication.kerberos.principal=sqoop/_HOST@<REALM>
+  org.apache.sqoop.authentication.kerberos.keytab=/home/kerberos/sqoop.keytab
+  org.apache.sqoop.authentication.kerberos.http.principal=HTTP/_HOST@<REALM>
+  
org.apache.sqoop.authentication.kerberos.http.keytab=/home/kerberos/sqoop.keytab
+  org.apache.sqoop.authentication.kerberos.proxyuser=true
+
+-      When _HOST is used as the FQDN in a principal, it will be replaced by the real FQDN at runtime. See https://issues.apache.org/jira/browse/HADOOP-6632
+-      If the proxyuser parameter is set to true, the Sqoop server will use proxy user mode (the sqoop user impersonates the real client user) to run the Yarn job; if false, the Sqoop server will run the Yarn job as the sqoop user itself (see the sketch below).
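+
+When proxy user mode is enabled, the Hadoop cluster itself typically must also allow the sqoop user to impersonate other users. A minimal sketch using Hadoop's standard impersonation settings in core-site.xml (the wildcard values are examples only; restrict them in production):
+
+::
+
+  <property>
+    <name>hadoop.proxyuser.sqoop.hosts</name>
+    <value>*</value>
+  </property>
+  <property>
+    <name>hadoop.proxyuser.sqoop.groups</name>
+    <value>*</value>
+  </property>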
+
+Run command
+-----------
+Set SQOOP2_HOST to the FQDN of the server.
+
+::
+
+  export SQOOP2_HOST=$(hostname -f)
+
+-      $(hostname -f) expands to the FQDN of the server; it should match the FQDN used in the Kerberos principals created earlier.
+
+Start Sqoop server using sqoop user.
+
+::
+
+  sudo -u sqoop <Sqoop Folder>/bin/sqoop.sh server start
+
+Run kinit to generate ticket cache.
+
+::
+
+  kinit HTTP/<FQDN>@<REALM> -kt /home/kerberos/sqoop.keytab
+
+Start Sqoop client.
+
+::
+
+  <Sqoop Folder>/bin/sqoop.sh client
+
+Verify
+------
+If the Sqoop server has started successfully with Kerberos authentication, the 
following line will be in <@LOGDIR>/sqoop.log:
+
+::
+
+  2014-12-04 15:02:58,038 INFO  security.KerberosAuthenticationHandler [org.apache.sqoop.security.Authentication.KerberosAuthenticationHandler.secureLogin(KerberosAuthenticationHandler.java:84)] Using Kerberos authentication, principal [sqoop/_HOST@HADOOP.COM] keytab [/home/kerberos/sqoop.keytab]
+
+If the Sqoop client was able to communicate with the Sqoop server, the 
following will be in <Sqoop Folder>/server/log/catalina.out:
+
+::
+
+  Refreshing Kerberos configuration
+  Acquire TGT from Cache
+  Principal is HTTP/<FQDN>@HADOOP.COM
+  null credentials from Ticket Cache
+  principal is HTTP/<FQDN>@HADOOP.COM
+  Will use keytab
+  Commit Succeeded
+
+Customized Authentication
+=========================
+
+Users can create their own authentication modules by performing the following steps:
+
+-      Create a customized authentication handler that extends the abstract class AuthenticationHandler.
+-      Implement the abstract functions doInitialize and secureLogin in the handler.
+
+::
+
+  // The Logger import is an assumption based on the Logger.getLogger(...)
+  // call below; adjust the imports to match your Sqoop version.
+  import org.apache.log4j.Logger;
+  import org.apache.sqoop.security.AuthenticationHandler;
+
+  public class MyAuthenticationHandler extends AuthenticationHandler {
+
+    private static final Logger LOG = Logger.getLogger(MyAuthenticationHandler.class);
+
+    public void doInitialize() {
+      // Tell the framework that this handler enforces security.
+      securityEnabled = true;
+    }
+
+    public void secureLogin() {
+      // Perform the actual login here; this sketch only logs a message.
+      LOG.info("Using customized authentication.");
+    }
+  }
+
+-      Modify the configuration org.apache.sqoop.authentication.handler in <Sqoop Folder>/server/config/sqoop.properties and set it to the customized authentication handler class name (see the example below).
+-      Restart the Sqoop server.
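+
+For example, assuming the handler above is packaged as the hypothetical class com.example.security.MyAuthenticationHandler, the property would read: ::
+
+  org.apache.sqoop.authentication.handler=com.example.security.MyAuthenticationHandler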
\ No newline at end of file

Added: 
websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/Sqoop5MinutesDemo.txt
==============================================================================
--- 
websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/Sqoop5MinutesDemo.txt 
(added)
+++ 
websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/Sqoop5MinutesDemo.txt 
Fri Feb 27 00:59:26 2015
@@ -0,0 +1,240 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+====================
+Sqoop 5 Minutes Demo
+====================
+
+This page will walk you through the basic usage of Sqoop. You need to have installed and configured the Sqoop server and client in order to follow this guide; the installation procedure is described on the `Installation page <Installation.html>`_. Please note that the exact output shown on this page might differ from yours as Sqoop evolves, but all major information should remain the same.
+
+Sqoop uses unique names or persistent ids to identify connectors, links, jobs and configs. We support querying an entity by its unique name or by its persistent database id.
+
+Starting Client
+===============
+
+Start the client in interactive mode using the following command: ::
+
+  sqoop2-shell
+
+Configure the client to use your Sqoop server: ::
+
+  sqoop:000> set server --host your.host.com --port 12000 --webapp sqoop
+
+Verify that the connection is working with a simple version check: ::
+
+  sqoop:000> show version --all
+  client version:
+    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
+  server version:
+    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
+  API versions:
+    [v1]
+
+You should receive output similar to the above, describing the Sqoop client build version, the server build version and the supported versions of the REST API.
+
+You can use the help command to check all the supported commands in the sqoop 
shell.
+::
+
+  sqoop:000> help
+  For information about Sqoop, visit: http://sqoop.apache.org/
+
+  Available commands:
+    exit    (\x  ) Exit the shell
+    history (\H  ) Display, manage and recall edit-line history
+    help    (\h  ) Display this help message
+    set     (\st ) Configure various client options and settings
+    show    (\sh ) Display various objects and configuration options
+    create  (\cr ) Create new object in Sqoop repository
+    delete  (\d  ) Delete existing object in Sqoop repository
+    update  (\up ) Update objects in Sqoop repository
+    clone   (\cl ) Create new object based on existing one
+    start   (\sta) Start job
+    stop    (\stp) Stop job
+    status  (\stu) Display status of a job
+    enable  (\en ) Enable object in Sqoop repository
+    disable (\di ) Disable object in Sqoop repository
+
+
+Creating Link Object
+==========================
+
+Check for the registered connectors on your Sqoop server: ::
+
+  sqoop:000> show connector
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+  | Id |          Name          |    Version     |                        Class                         | Supported Directions |
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+  | 1  | hdfs-connector         | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.hdfs.HdfsConnector        | FROM/TO              |
+  | 2  | generic-jdbc-connector | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.jdbc.GenericJdbcConnector | FROM/TO              |
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+
+Our example contains two connectors. The one with connector id 2 is called the ``generic-jdbc-connector``. This is a basic connector relying on the Java JDBC interface for communicating with data sources. It should work with the most common databases that provide JDBC drivers. Please note that you must install JDBC drivers separately; they are not bundled in Sqoop due to incompatible licenses.
+
+The Generic JDBC Connector in our example has persistent id 2, and we will use this value to create a new link object for this connector. Note that the link name should be unique.
+::
+
+  sqoop:000> create link -c 2
+  Creating link for connector with id 2
+  Please fill following values to create new link object
+  Name: First Link
+
+  Link configuration
+  JDBC Driver Class: com.mysql.jdbc.Driver
+  JDBC Connection String: jdbc:mysql://mysql.server/database
+  Username: sqoop
+  Password: *****
+  JDBC Connection Properties:
+  There are currently 0 values in the map:
+  entry#protocol=tcp
+  New link was successfully created with validation status OK and persistent id 1
+
+Our new link object was created with assigned id 1.
+
+In the ``show connector -all`` output we can see that there is an hdfs-connector registered in Sqoop with persistent id 1. Let us create another link object, but this time for the hdfs-connector instead.
+
+::
+
+  sqoop:000> create link -c 1
+  Creating link for connector with id 1
+  Please fill following values to create new link object
+  Name: Second Link
+
+  Link configuration
+  HDFS URI: hdfs://nameservice1:8020/
+  New link was successfully created with validation status OK and persistent id 2
+
+Creating Job Object
+===================
+
+Connectors implement ``From`` for reading data from a data source and/or ``To`` for writing data to a data source. The Generic JDBC Connector supports both of them. The list of supported directions for each connector can be seen in the output of the ``show connector -all`` command above. In order to create a job we need to specify the ``From`` and ``To`` parts of the job, uniquely identified by their link ids. We already have two links created in the system; you can verify the same with the following command
+
+::
+
+  sqoop:000> show link --all
+  2 link(s) to show:
+  link with id 1 and name First Link (Enabled: true, Created by root at 11/4/14 4:27 PM, Updated by root at 11/4/14 4:27 PM)
+  Using Connector id 2
+    Link configuration
+      JDBC Driver Class: com.mysql.jdbc.Driver
+      JDBC Connection String: jdbc:mysql://mysql.ent.cloudera.com/sqoop
+      Username: sqoop
+      Password:
+      JDBC Connection Properties:
+        protocol = tcp
+  link with id 2 and name Second Link (Enabled: true, Created by root at 11/4/14 4:38 PM, Updated by root at 11/4/14 4:38 PM)
+  Using Connector id 1
+    Link configuration
+      HDFS URI: hdfs://nameservice1:8020/
+
+Next, we can use the two link Ids to associate the ``From`` and ``To`` for the 
job.
+::
+
+   sqoop:000> create job -f 1 -t 2
+   Creating job for links with from id 1 and to id 2
+   Please fill following values to create new job object
+   Name: Sqoopy
+
+   FromJob configuration
+
+    Schema name:(Required)sqoop
+    Table name:(Required)sqoop
+    Table SQL statement:(Optional)
+    Table column names:(Optional)
+    Partition column name:(Optional) id
+    Null value allowed for the partition column:(Optional)
+    Boundary query:(Optional)
+
+  ToJob configuration
+
+    Output format:
+     0 : TEXT_FILE
+     1 : SEQUENCE_FILE
+    Choose: 0
+    Compression format:
+     0 : NONE
+     1 : DEFAULT
+     2 : DEFLATE
+     3 : GZIP
+     4 : BZIP2
+     5 : LZO
+     6 : LZ4
+     7 : SNAPPY
+     8 : CUSTOM
+    Choose: 0
+    Custom compression format:(Optional)
+    Output directory:(Required)/root/projects/sqoop
+
+    Driver Config
+    Extractors:(Optional) 2
+    Loaders:(Optional) 2
+    New job was successfully created with validation status OK and persistent id 1
+
+Our new job object was created with assigned id 1.
+
+Start Job (a.k.a. Data Transfer)
+=================================
+
+You can start a Sqoop job with the following command:
+::
+
+  sqoop:000> start job -j 1
+  Submission details
+  Job ID: 1
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
+
+You can iteratively check your running job status with the ``status job`` command:
+
+::
+
+  sqoop:000> status job -j 1
+  Submission details
+  Job ID: 1
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 20:09:16 PST: RUNNING  - 0.00 % 
+
+Alternatively, you can start a Sqoop job and observe its running status with the following command:
+
+::
+
+  sqoop:000> start job -j 1 -s
+  Submission details
+  Job ID: 1
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
+  2014-11-04 19:43:39 PST: RUNNING  - 0.00 %
+  2014-11-04 19:43:49 PST: RUNNING  - 10.00 %
+
+And finally, you can stop a running job at any time using the ``stop job`` command: ::
+
+  sqoop:000> stop job -j 1
\ No newline at end of file

Added: websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/Tools.txt
==============================================================================
--- websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/Tools.txt (added)
+++ websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/Tools.txt Fri Feb 
27 00:59:26 2015
@@ -0,0 +1,129 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=====
+Tools
+=====
+
+Tools are server commands that administrators can execute on the Sqoop server 
machine in order to perform various maintenance tasks. The tool execution will 
always perform a given task and finish. There are no long running services 
implemented as tools.
+
+In order to perform the maintenance task each tool is supposed to do, the tools need to be executed in exactly the same environment as the main Sqoop server. The tool binary will take care of setting up the ``CLASSPATH`` and other environment variables that might be required. However, it is up to the administrator to run the tool under the same user that is used for the server. This is usually configured automatically for various Hadoop distributions (such as Apache Bigtop).
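+
+For example, if the server runs under the sqoop user (the user name here is an assumption; match it to your deployment), a tool would be invoked as: ::
+
+  sudo -u sqoop sqoop2-tool verify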
+
+
+.. note:: Running tools while the Sqoop Server is also running is not recommended, as it might lead to data corruption and service disruption.
+
+List of available tools:
+
+* verify
+* upgrade
+
+To run the desired tool, execute the ``sqoop2-tool`` binary with the desired tool name. For example, to run the ``verify`` tool::
+
+  sqoop2-tool verify
+
+.. note:: Stop the Sqoop Server before running Sqoop tools. Running tools while the Sqoop Server is running can lead to data corruption and service disruption.
+
+Verify
+======
+
+The verify tool will verify the Sqoop server configuration by starting all subsystems, with the exception of the servlets, and tearing them down.
+
+To run the ``verify`` tool::
+
+  sqoop2-tool verify
+
+If the verification process succeeds, you should see messages like::
+
+  Verification was successful.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
+
+If the verification process finds any inconsistencies, it will print out the following message instead::
+
+  Verification has failed, please check Server logs for further details.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has failed.
+
+Further details on why the verification failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
+
+Upgrade
+=======
+
+Upgrades all versionable components inside Sqoop 2. This includes structural changes inside the repository and stored metadata. Running this tool on a Sqoop deployment that has already been upgraded will have no effect.
+
+To run the ``upgrade`` tool::
+
+  sqoop2-tool upgrade
+
+Upon successful upgrade you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+Execution failure will show the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has failed.
+
+Further details on why the upgrade process failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
+
+RepositoryDump
+==============
+
+Writes the user-created contents of the Sqoop repository to a file in JSON 
format. This includes connections, jobs and submissions.
+
+To run the ``repositorydump`` tool::
+
+  sqoop2-tool repositorydump -o repository.json
+
+As an option, the administrator can choose to include sensitive information 
such as database connection passwords in the file::
+
+  sqoop2-tool repositorydump -o repository.json --include-sensitive
+
+Upon successful execution, you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has finished 
correctly.
+
+If repository dump has failed, you will see the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has failed.
+
+Further details on why the repository dump failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
+
+RepositoryLoad
+==============
+
+Reads a JSON-formatted file created by RepositoryDump and loads it into the current Sqoop repository.
+
+To run the ``repositoryload`` tool::
+
+  sqoop2-tool repositoryload -i repository.json
+
+Upon successful execution, you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has finished 
correctly.
+
+If the repository load fails, you will see the following message (or an exception) instead::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has failed.
+
+Further details on why the repository load failed will be available in the Sqoop server log - the same file that the Sqoop server logs into.
+
+.. note:: If the repository dump was created without passwords (the default), the connections will not contain passwords and the jobs will fail to execute. In that case you will need to manually update the connections and set the passwords (a sketch follows the next note).
+.. note:: The RepositoryLoad tool will always generate new connections, jobs and submissions from the file, even when identical objects already exist in the repository.
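+
+A hedged sketch of fixing up such a connection from the shell (the exact command and flags are assumptions; consult ``help update`` in your shell version): ::
+
+  sqoop:000> update link -l 1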
+

Added: websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/Upgrade.txt
==============================================================================
--- websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/Upgrade.txt 
(added)
+++ websites/staging/sqoop/trunk/content/docs/1.99.5/_sources/Upgrade.txt Fri 
Feb 27 00:59:26 2015
@@ -0,0 +1,84 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=======
+Upgrade
+=======
+
+This page describes the procedure you need to follow in order to upgrade Sqoop from one release to a higher release. Upgrading the client and server components is discussed separately.
+
+.. note:: Only upgrades from one Sqoop 2 release to another are covered, starting with upgrades from version 1.99.2. This guide does not contain general information on how to upgrade from Sqoop 1 to Sqoop 2.
+
+Upgrading Server
+================
+
+As the Sqoop server uses a database repository for persisting Sqoop entities such as the connectors, the driver, links and jobs, the repository schema might need to be updated as part of the server upgrade. In addition, the configs and inputs described by the various connectors and the driver may also change with a new server version and might need a data upgrade.
+
+There are two ways to upgrade Sqoop entities in the repository: you can either execute the upgrade tool or configure the Sqoop server to perform all necessary upgrades on start up.
+
+It is strongly advised to back up the repository before moving on to the next steps. Backup instructions will vary depending on the repository implementation; for example, using MySQL as a repository will require a different backup procedure than Apache Derby. Please follow your repository's backup procedure (an example sketch follows).
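+
+As an illustration only, with the default embedded Derby repository a backup can be as simple as stopping the server and copying the repository directory (the path below is an assumption; check the repository JDBC URL in your sqoop.properties for the real location): ::
+
+  sqoop2-server stop
+  cp -r <Sqoop Folder>/repository /your/backup/location/repository-backup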
+
+Upgrading Server using upgrade tool
+-----------------------------------
+
+The preferred upgrade path is to explicitly run the `Upgrade Tool <Tools.html#upgrade>`_. The first step, however, is to shut down the server, as having both the server and the upgrade utility accessing the same repository might corrupt it::
+
+  sqoop2-server stop
+
+When the server has been successfully stopped, you can update the server bits 
and simply run the upgrade tool::
+
+  sqoop2-tool upgrade
+
+You should see that the upgrade process has been successful::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+In case of any failure, please take a look at the `Upgrade Tool <Tools.html#upgrade>`_ documentation page.
+
+Upgrading Server on start-up
+----------------------------
+
+The capability of performing the upgrade has been built into the server; however, it is disabled by default to avoid any unintentional changes to the repository. You can start the repository schema upgrade procedure by stopping the server: ::
+
+  sqoop2-server stop
+
+Before starting the server again you will need to enable the auto-upgrade 
feature that will perform all necessary changes during Sqoop Server start up.
+
+You need to set the following property in configuration file 
``sqoop.properties`` for the repository schema upgrade.
+::
+
+   org.apache.sqoop.repository.schema.immutable=false
+
+You need to set the following property in configuration file 
``sqoop.properties`` for the connector config data upgrade.
+::
+
+   org.apache.sqoop.connector.autoupgrade=true
+
+You need to set the following property in configuration file 
``sqoop.properties`` for the driver config data upgrade.
+::
+
+   org.apache.sqoop.driver.autoupgrade=true
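+
+Taken together, the relevant fragment of ``sqoop.properties`` for a full auto-upgrade (repository schema, connector configs and driver configs) would read: ::
+
+   org.apache.sqoop.repository.schema.immutable=false
+   org.apache.sqoop.connector.autoupgrade=true
+   org.apache.sqoop.driver.autoupgrade=true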
+
+When all properties are set, start the Sqoop server using the following command::
+
+  sqoop2-server start
+
+All required actions will be performed automatically during the server bootstrap. It is strongly advised to set all three properties back to their original values once the server has been successfully started and the upgrade has completed.
+
+Upgrading Client
+================
+
+The client does not require any manual steps during upgrade. Replacing the binaries with the updated version is sufficient.

