Added: 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/DevEnv.rst
==============================================================================
--- websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/DevEnv.rst 
(added)
+++ websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/DevEnv.rst 
Tue Nov 25 21:57:10 2014
@@ -0,0 +1,57 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=====================================
+Sqoop 2 Development Environment Setup
+=====================================
+
+This document describes how to set up a development environment for Sqoop 2.
+
+System Requirements
+===================
+
+Java
+----
+
+Sqoop is written in Java and requires version 1.6. You can `download Java <http://www.oracle.com/technetwork/java/javase/downloads/index.html>`_ and install it. Point JAVA_HOME to the installation directory, e.g. export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_32.
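+
+As a quick sanity check, you can verify that the variable points at a working JDK (the path below is just the example location from above)::
+
+  export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_32
+  $JAVA_HOME/bin/java -version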
+
+Maven
+-----
+
+Sqoop uses Maven 3 for building the project. Download `Maven <http://maven.apache.org/download.cgi>`_; its installation instructions are given in the `Maven documentation <http://maven.apache.org/download.cgi#Maven_Documentation>`_.
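+
+Once installed, you can confirm that Maven is available and check its version with::
+
+  mvn -version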
+
+Eclipse Setup
+=============
+
+Steps for downloading the source code are given in `Building Sqoop2 <BuildingSqoop2.html>`_.
+
+The Sqoop 2 project has multiple modules, and one module may depend on another; for example, the Sqoop 2 client module depends on the Sqoop 2 common module. Follow the steps below to create an Eclipse project and classpath for each module.
+
+::
+
+  # Install all packages into the local Maven repository
+  mvn clean install -DskipTests
+
+  # Add the M2_REPO variable to the Eclipse workspace
+  mvn eclipse:configure-workspace -Declipse.workspace=<path-to-eclipse-workspace-dir-for-sqoop-2>
+
+  # Create the Eclipse projects, with optional parameters to download sources and javadocs
+  mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
+
+Alternatively, you can manually add the M2_REPO classpath variable pointing to the local Maven repository in Eclipse: Window -> Preferences -> Java -> Classpath Variables -> click "New" -> in the new dialog box, enter Name as M2_REPO and Path as $HOME/.m2/repository -> click OK.
+
+After the above Maven commands complete successfully, import the Sqoop project modules into Eclipse: File -> Import -> General -> Existing Projects into Workspace -> click Next -> browse to the Sqoop 2 directory ($HOME/git/sqoop2) -> click OK -> the Import dialog shows multiple projects (sqoop-client, sqoop-common, etc.) -> select all modules -> click Finish.
+

Added: 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Installation.rst
==============================================================================
--- 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Installation.rst
 (added)
+++ 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Installation.rst
 Tue Nov 25 21:57:10 2014
@@ -0,0 +1,103 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+============
+Installation
+============
+
+Sqoop ships as one binary package, however it is composed of two separate parts - client and server. You need to install the server on a single node in your cluster. This node will then serve as an entry point for all connecting Sqoop clients. The server acts as a MapReduce client and therefore Hadoop must be installed and configured on the machine hosting the Sqoop server. Clients can be installed on any arbitrary number of machines. The client does not act as a MapReduce client and thus you do not need to install Hadoop on nodes that will act only as a Sqoop client.
+
+Server installation
+===================
+
+Copy the Sqoop artifact to the machine where you want to run the Sqoop server. This machine must have Hadoop installed and configured. You don't need to run any Hadoop-related services there, however the machine must be able to act as a Hadoop client. You should be able to list the contents of HDFS, for example: ::
+
+  hadoop dfs -ls
+
+The Sqoop server supports multiple Hadoop versions. However, as Hadoop major versions are not compatible with each other, Sqoop has multiple binary artifacts - one for each supported major version of Hadoop. You need to make sure that you're using the appropriate binary artifact for your specific Hadoop version. To install the Sqoop server, decompress the appropriate distribution artifact in a location of your convenience and change your working directory to this folder. ::
+
+  # Decompress Sqoop distribution tarball
+  tar -xvf sqoop-<version>-bin-hadoop<hadoop-version>.tar.gz
+
+  # Move decompressed content to any location
+  mv sqoop-<version>-bin-hadoop<hadoop-version> /usr/lib/sqoop
+
+  # Change working directory
+  cd /usr/lib/sqoop
+
+
+Installing Dependencies
+-----------------------
+
+Hadoop libraries must be available on the node where you are planning to run the Sqoop server, with proper configuration for the major services - ``NameNode`` and either ``JobTracker`` or ``ResourceManager``, depending on whether you are running Hadoop 1 or 2. There is no need to run any Hadoop service on the same node as the Sqoop server; just the libraries and configuration files must be available.
+
+The path to the Hadoop libraries is stored in the file ``catalina.properties`` inside the ``server/conf`` directory. You need to change the property called ``common.loader`` to contain all directories with your Hadoop libraries. The default expected locations are ``/usr/lib/hadoop`` and ``/usr/lib/hadoop/lib/``. Please check the comments in the file for a further description of how to configure different locations.
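+
+As a quick check, you can print the property that must list your Hadoop library directories (the exact default value shipped with the distribution may differ between releases)::
+
+  # Show the current common.loader setting
+  grep '^common.loader' server/conf/catalina.properties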
+
+Lastly you might need to install JDBC drivers that are not bundled with Sqoop because of incompatible licenses. You can add any arbitrary Java jar file to the Sqoop server by copying it into the ``lib/`` directory. You can create this directory if it does not exist already.
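+
+For example, to add the MySQL JDBC driver (the jar file name below is illustrative and depends on the driver version you downloaded)::
+
+  # Create the lib/ directory if it does not exist yet and copy the driver jar in
+  mkdir -p lib
+  cp mysql-connector-java-5.1.33-bin.jar lib/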
+
+Configuring PATH
+----------------
+
+All user and administrator facing shell commands are stored in the ``bin/`` directory. It's recommended to add this directory to your ``$PATH`` for easier execution, for example::
+
+  PATH=$PATH:`pwd`/bin/
+
+Further documentation pages will assume that you have the binaries on your ``$PATH``. You will need to call them with their full path if you decide to skip this step.
+
+Configuring Server
+------------------
+
+Before starting the server you should revise the configuration to match your specific environment. Server configuration files are stored in the ``server/config`` directory of the distributed artifact, alongside other configuration files of Tomcat.
+
+The file ``sqoop_bootstrap.properties`` specifies which configuration provider should be used for loading the configuration for the rest of the Sqoop server. The default value ``PropertiesConfigurationProvider`` should be sufficient.
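+
+You can inspect the configured provider as follows (shown only as a sketch; the exact property line may differ between releases)::
+
+  grep ConfigurationProvider server/config/sqoop_bootstrap.properties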
+
+
+The second configuration file, ``sqoop.properties``, contains the remaining configuration properties that can affect the Sqoop server. The file is very well documented, so check whether all configuration properties fit your environment. Defaults or very little tweaking should be sufficient for most common cases.
+
+You can verify the Sqoop server configuration using the `Verify Tool <Tools.html#verify>`__, for example::
+
+  sqoop2-tool verify
+
+Upon running the ``verify`` tool, you should see messages similar to the following::
+
+  Verification was successful.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
+
+Consult the `Verify Tool <Tools.html#upgrade>`__ documentation page in case of any failure.
+
+Server Life Cycle
+-----------------
+
+After installation and configuration you can start the Sqoop server with the following command: ::
+
+  sqoop2-server start
+
+Similarly, you can stop the server using the following command: ::
+
+  sqoop2-server stop
+
+By default the Sqoop server daemons use ports 12000 and 12001. You can set ``SQOOP_HTTP_PORT`` and ``SQOOP_ADMIN_PORT`` in the configuration file ``server/bin/setenv.sh`` to use different ports.
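+
+A sketch of the lines you might add to ``server/bin/setenv.sh`` (the port values are examples)::
+
+  # Run the HTTP and admin endpoints on non-default ports
+  export SQOOP_HTTP_PORT=24000
+  export SQOOP_ADMIN_PORT=24001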
+
+Client installation
+===================
+
+The client does not need extra installation and configuration steps. Just copy the Sqoop distribution artifact to the target machine and unzip it in the desired location. You can start the client with the following command: ::
+
+  sqoop2-shell
+
+You can find more documentation on the Sqoop client in the `Command Line Client <CommandLineClient.html>`_ section.
+
+

Added: 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/RESTAPI.rst
==============================================================================
--- 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/RESTAPI.rst 
(added)
+++ 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/RESTAPI.rst 
Tue Nov 25 21:57:10 2014
@@ -0,0 +1,1441 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+=========================
+Sqoop REST API Guide
+=========================
+
+This document will explain how you can use the Sqoop REST API to build applications interacting with the Sqoop server.
+The REST API covers all aspects of managing Sqoop jobs and allows you to build an app in any programming language using HTTP over JSON.
+
+.. contents:: Table of Contents
+
+Initialization
+=========================
+
+Before continuing further, make sure that the Sqoop server is running.
+
+Then find out the details of the Sqoop server: ``host``, ``port`` and ``webapp``, and keep them in mind. Note that the Sqoop server runs on Apache Tomcat. To exercise a REST API for Sqoop, you can assemble and send an HTTP request to a URL corresponding to that API. Generally, the URL contains the ``host`` on which the Sqoop server is running, the ``port`` at which the Sqoop server is listening and the ``webapp``, the context path at which the Sqoop server is registered in the Apache Tomcat engine.
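+
+For example, assuming the server runs on ``localhost``, listens on the default port ``12000`` and is registered under the context path ``sqoop``, the version API could be exercised with::
+
+  curl http://localhost:12000/sqoop/version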
+
+Certain requests might need to contain some additional query parameters and post data. These parameters can be given via the HTTP headers, the request body or both. All the content in the HTTP body is in ``JSON`` format.
+
+Understand Connector, Driver, Link and Job
+===========================================================
+
+To create and run a Sqoop job, we need to provide config values for connecting to a data source and then processing the data in that data source. Processing might be either reading from or writing to the data source. Thus we have configurable entities such as the ``From`` and ``To`` parts of the connectors and the driver, each of which exposes configs and one or more inputs within them.
+
+For instance, a connector that represents a relational data source such as MySQL will expose config classes for connecting to the database. Some of the relevant inputs are the connection string, the driver class, and the username and password to connect to the database. These configs remain the same for reading data from any of the tables within that database. Hence they are grouped under ``LinkConfiguration``.
+
+Each connector can support reading from and/or writing to the data source it represents. Reading from and writing to a data source are represented by From and To respectively. Specific configurations are required to perform the job of reading from or writing to the data source. These are grouped in the ``FromJobConfiguration`` and ``ToJobConfiguration`` objects of the connector.
+
+For instance, a connector that represents a relational data source such as MySQL will expose the table name to read from or the SQL query to use while reading data as a FromJobConfiguration. Similarly, a connector that represents a data source such as HDFS will expose the output directory to write to as a ToJobConfiguration.
+
+
+Objects
+==============
+
+This section covers all the objects that might exist in an API request and/or API response.
+
+Configs and Inputs
+------------------
+
+Before creating any link for a connector or a job with associated ``From`` and ``To`` links, the first thing to do is to get familiar with all the configurations that the connector exposes.
+
+Each config consists of the following information:
+
++------------------+---------------------------------------------------------+
+|   Field          | Description                                             |
++==================+=========================================================+
+| ``id``           | The id of this config                                   |
++------------------+---------------------------------------------------------+
+| ``inputs``       | An array of inputs of this config                       |
++------------------+---------------------------------------------------------+
+| ``name``         | The unique name of this config per connector            |
++------------------+---------------------------------------------------------+
+| ``type``         | The type of this config (LINK / JOB)                    |
++------------------+---------------------------------------------------------+
+
+A typical config object is shown below:
+
+::
+
+   {
+    id:7,
+    inputs:[
+      {
+         id: 25,
+         name: "throttlingConfig.numExtractors",
+         type: "INTEGER",
+         sensitive: false
+      },
+      {
+         id: 26,
+         name: "throttlingConfig.numLoaders",
+         type: "INTEGER",
+         sensitive: false
+       }
+    ],
+    name: "throttlingConfig",
+    type: "JOB"
+  }
+
+Each input object in a config is structured as follows:
+
++------------------+---------------------------------------------------------+
+|   Field          | Description                                             |
++==================+=========================================================+
+| ``id``           | The id of this input                                    |
++------------------+---------------------------------------------------------+
+| ``name``         | The unique name of this input per config                |
++------------------+---------------------------------------------------------+
+| ``type``         | The data type of this input field                       |
++------------------+---------------------------------------------------------+
+| ``size``         | The length of this input field                          |
++------------------+---------------------------------------------------------+
+| ``sensitive``    | Whether this input contains sensitive information       |
++------------------+---------------------------------------------------------+
+
+
+To send a filled config in the request, you should always use the config id and input id to map the values to their corresponding names.
+For example, the following request contains an input value ``com.mysql.jdbc.Driver`` with input id ``7`` inside a config with id ``4`` that belongs to a link with id ``3``.
+
+::
+
+      link: {
+            id: 3,
+            enabled: true,
+            link-config-values: [{
+                id: 4,
+                inputs: [{
+                    id: 7,
+                    name: "linkConfig.jdbcDriver",
+                    value: "com.mysql.jdbc.Driver",
+                    type: "STRING",
+                    size: 128,
+                    sensitive: false
+                }, {
+                    id: 8,
+                    name: "linkConfig.connectionString",
+                    value: "jdbc%3Amysql%3A%2F%2Fmysql.ent.cloudera.com%2Fsqoop",
+                    type: "STRING",
+                    size: 128,
+                    sensitive: false
+                },
+                ...
+             }
+           }
+
+Exception Response
+------------------
+
+Each operation on the Sqoop server might return an exception in the HTTP response. Remember to take this into account. The exception code and message can be found in both the header and the body of the response.
+
+Please refer to the "Header Parameters" section to find out how to get exception information from the header.
+
+In the body, the exception is expressed in ``JSON`` format. An example of the exception is:
+
+::
+
+  {
+    "message":"DERBYREPO_0030:Unable to load specific job metadata from 
repository - Couldn't find job with id 2",
+    "stack-trace":[
+      {
+        "file":"DerbyRepositoryHandler.java",
+        "line":1111,
+        "class":"org.apache.sqoop.repository.derby.DerbyRepositoryHandler",
+        "method":"findJob"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":451,
+        "class":"org.apache.sqoop.repository.JdbcRepository$16",
+        "method":"doIt"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":90,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"doWithConnection"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":61,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"doWithConnection"
+      },
+      {
+        "file":"JdbcRepository.java",
+        "line":448,
+        "class":"org.apache.sqoop.repository.JdbcRepository",
+        "method":"findJob"
+      },
+      {
+        "file":"JobRequestHandler.java",
+        "line":238,
+        "class":"org.apache.sqoop.handler.JobRequestHandler",
+        "method":"getJobs"
+      }
+    ],
+    "class":"org.apache.sqoop.common.SqoopException"
+  }
+
+Config and Input Validation Status Response
+--------------------------------------------
+
+The config and the inputs associated with the connectors also provide custom 
validation rules for the values given to these input fields. Sqoop applies 
these custom validators and its corresponding valdation logic when config 
values for the LINK and JOB are posted.
+
+
+An example of an OK status with the persisted ID:
+::
+
+ {
+    "id": 3,
+    "validation-result": [
+        {}
+    ]
+ }
+
+An example of an ERROR status:
+::
+
+   {
+     "validation-result": [
+       {
+        "linkConfig": [
+          {
+            "message": "Invalid URI. URI must either be null or a valid URI. 
Here are a few valid example URIs: hdfs://example.com:8020/, 
hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+            "status": "ERROR"
+          }
+        ]
+      }
+     ]
+   }
+
+Job Submission Status Response
+------------------------------
+
+After starting a job, you can look up its running status. There are 7 possible statuses:
+
++-----------------------------+---------------------------------------------------------+
+|   Status                    | Description                                             |
++=============================+=========================================================+
+| ``BOOTING``                 | In the middle of submitting the job                     |
++-----------------------------+---------------------------------------------------------+
+| ``FAILURE_ON_SUBMIT``       | Unable to submit this job to the remote cluster         |
++-----------------------------+---------------------------------------------------------+
+| ``RUNNING``                 | The job is running now                                  |
++-----------------------------+---------------------------------------------------------+
+| ``SUCCEEDED``               | Job finished successfully                               |
++-----------------------------+---------------------------------------------------------+
+| ``FAILED``                  | Job failed                                              |
++-----------------------------+---------------------------------------------------------+
+| ``NEVER_EXECUTED``          | The job has never been executed since it was created    |
++-----------------------------+---------------------------------------------------------+
+| ``UNKNOWN``                 | The status is unknown                                   |
++-----------------------------+---------------------------------------------------------+
+
+Header Parameters
+=================
+
+For all Sqoop requests, the following header parameters are supported:
+
++---------------------------+----------+---------------------------------------------------------+
+|   Parameter               | Required | Description                                             |
++===========================+==========+=========================================================+
+| ``sqoop-user-name``       | true     | The name of the user who makes the requests             |
++---------------------------+----------+---------------------------------------------------------+
+
+For all the responses, the following parameters in the HTTP message header are available:
+
++---------------------------+----------+------------------------------------------------------------------------------+
+|   Parameter               | Required | Description                                                                  |
++===========================+==========+==============================================================================+
+| ``sqoop-error-code``      | false    | The error code when an error happens on the server side for this request    |
++---------------------------+----------+------------------------------------------------------------------------------+
+| ``sqoop-error-message``   | false    | The explanation for an error code                                            |
++---------------------------+----------+------------------------------------------------------------------------------+
+
+So far, these are the only 2 parameters in the header of the response message.
+They only exist when something bad happens on the server,
+and they always come along with an exception message in the response body.
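+
+For example, a request that identifies the calling user through the ``sqoop-user-name`` header could look like this (host, port and context path as assumed in the Initialization section)::
+
+  curl -H "sqoop-user-name: root" http://localhost:12000/sqoop/v1/links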
+
+REST APIs
+==========
+
+This section elaborates all the REST APIs that are supported by the Sqoop server.
+
+/version - [GET] - Get Sqoop Version
+-------------------------------------
+
+Get all the version metadata of the Sqoop software on the server side.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Fields of Response:
+
++--------------------+---------------------------------------------------------+
+|   Field            | Description                                             |
++====================+=========================================================+
+| ``source-revision``| The revision number of the Sqoop source code            |
++--------------------+---------------------------------------------------------+
+| ``api-versions``   | The version of the network protocol                     |
++--------------------+---------------------------------------------------------+
+| ``build-date``     | The Sqoop release date                                  |
++--------------------+---------------------------------------------------------+
+| ``user``           | The user who made the release                           |
++--------------------+---------------------------------------------------------+
+| ``source-url``     | The url of the source code trunk                        |
++--------------------+---------------------------------------------------------+
+| ``build-version``  | The version of Sqoop on the server side                 |
++--------------------+---------------------------------------------------------+
+
+
+* Response Example:
+
+::
+
+   {
+    source-url: "git://vbasavaraj.local/Users/vbasavaraj/Projects/SqoopRefactoring/sqoop2/common",
+    source-revision: "418c5f637c3f09b94ea7fc3b0a4610831373a25f",
+    build-version: "2.0.0-SNAPSHOT",
+    api-versions: [
+       "v1"
+     ],
+    user: "vbasavaraj",
+    build-date: "Mon Nov 3 08:18:21 PST 2014"
+   }
+
+/v1/connectors - [GET]  Get all Connectors
+-------------------------------------------
+
+Get all the connectors registered in Sqoop.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example
+
+::
+
+  {
+    connectors: [{
+        id: 1,
+        link-config: [],
+        job-config: {},
+        name: "hdfs-connector",
+        class: "org.apache.sqoop.connector.hdfs.HdfsConnector",
+        all-config-resources: {},
+        version: "2.0.0-SNAPSHOT"
+    }, {
+        id: 2,
+        link-config: [],
+        job-config: {},
+        name: "generic-jdbc-connector",
+        class: "org.apache.sqoop.connector.jdbc.GenericJdbcConnector",
+        all-config-resources: {},
+        version: "2.0.0-SNAPSHOT"
+    }]
+  }
+
+/v1/connector/[cname] or /v1/connector/[cid] - [GET] - Get Connector
+---------------------------------------------------------------------
+
+Provide the id or unique name of the connector in the ``[cid]`` or ``[cname]`` part of the url.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Fields of Response:
+
++--------------------------+----------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                            |
++==========================+========================================================================================+
+| ``id``                   | The id for the connector ( registered as a configurable )                              |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``job-config``           | Connector job config and inputs for both FROM and TO                                   |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``link-config``          | Connector link config and inputs                                                       |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``all-config-resources`` | All config inputs labels and description for the given connector                       |
++--------------------------+----------------------------------------------------------------------------------------+
+| ``version``              | The build version required for config and input data upgrades                          |
++--------------------------+----------------------------------------------------------------------------------------+
+
+* Response Example:
+
+::
+
+   {
+    connector: {
+        id: 1,
+        job-config: {
+            TO: [{
+                id: 3,
+                inputs: [{
+                    id: 3,
+                    values: "TEXT_FILE,SEQUENCE_FILE",
+                    name: "toJobConfig.outputFormat",
+                    type: "ENUM",
+                    sensitive: false
+                }, {
+                    id: 4,
+                    values: "NONE,DEFAULT,DEFLATE,GZIP,BZIP2,LZO,LZ4,SNAPPY,CUSTOM",
+                    name: "toJobConfig.compression",
+                    type: "ENUM",
+                    sensitive: false
+                }, {
+                    id: 5,
+                    name: "toJobConfig.customCompression",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }, {
+                    id: 6,
+                    name: "toJobConfig.outputDirectory",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }],
+                name: "toJobConfig",
+                type: "JOB"
+            }],
+            FROM: [{
+                id: 2,
+                inputs: [{
+                    id: 2,
+                    name: "fromJobConfig.inputDirectory",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }],
+                name: "fromJobConfig",
+                type: "JOB"
+            }]
+        },
+        link-config: [{
+            id: 1,
+            inputs: [{
+                id: 1,
+                name: "linkConfig.uri",
+                type: "STRING",
+                size: 255,
+                sensitive: false
+            }],
+            name: "linkConfig",
+            type: "LINK"
+        }],
+        name: "hdfs-connector",
+        class: "org.apache.sqoop.connector.hdfs.HdfsConnector",
+        all-config-resources: {
+            fromJobConfig.label: "From Job configuration",
+            toJobConfig.ignored.label: "Ignored",
+            fromJobConfig.help: "Specifies information required to get data from Hadoop ecosystem",
+            toJobConfig.ignored.help: "This value is ignored",
+            toJobConfig.label: "ToJob configuration",
+            toJobConfig.storageType.label: "Storage type",
+            fromJobConfig.inputDirectory.label: "Input directory",
+            toJobConfig.outputFormat.label: "Output format",
+            toJobConfig.outputDirectory.label: "Output directory",
+            toJobConfig.outputDirectory.help: "Output directory for final data",
+            toJobConfig.compression.help: "Compression that should be used for the data",
+            toJobConfig.outputFormat.help: "Format in which data should be serialized",
+            toJobConfig.customCompression.label: "Custom compression format",
+            toJobConfig.compression.label: "Compression format",
+            linkConfig.label: "Link configuration",
+            toJobConfig.customCompression.help: "Full class name of the custom compression",
+            toJobConfig.storageType.help: "Target on Hadoop ecosystem where to store data",
+            linkConfig.help: "Here you supply information necessary to connect to HDFS",
+            linkConfig.uri.help: "HDFS URI used to connect to HDFS",
+            linkConfig.uri.label: "HDFS URI",
+            fromJobConfig.inputDirectory.help: "Directory that should be exported",
+            toJobConfig.help: "You must supply the information requested in order to get information where you want to store your data."
+        },
+        version: "2.0.0-SNAPSHOT"
+     }
+   }
+
+
+/v1/driver - [GET] - Get Sqoop Driver
+-----------------------------------------------
+
+The driver exposes configurations required for the job execution.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Fields of Response:
+
++--------------------------+----------------------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                                        |
++==========================+====================================================================================================+
+| ``id``                   | The id for the driver ( registered as a configurable )                                             |
++--------------------------+----------------------------------------------------------------------------------------------------+
+| ``job-config``           | Driver job config and inputs                                                                       |
++--------------------------+----------------------------------------------------------------------------------------------------+
+| ``version``              | The build version of the driver                                                                    |
++--------------------------+----------------------------------------------------------------------------------------------------+
+| ``all-config-resources`` | Driver exposed config and input labels and description                                             |
++--------------------------+----------------------------------------------------------------------------------------------------+
+
+* Response Example:
+
+::
+
+ {
+    id: 3,
+    job-config: [{
+        id: 7,
+        inputs: [{
+            id: 25,
+            name: "throttlingConfig.numExtractors",
+            type: "INTEGER",
+            sensitive: false
+        }, {
+            id: 26,
+            name: "throttlingConfig.numLoaders",
+            type: "INTEGER",
+            sensitive: false
+        }],
+        name: "throttlingConfig",
+        type: "JOB"
+    }],
+    all-config-resources: {
+        throttlingConfig.numExtractors.label: "Extractors",
+        throttlingConfig.numLoaders.help: "Number of loaders that Sqoop will use",
+        throttlingConfig.numLoaders.label: "Loaders",
+        throttlingConfig.label: "Throttling resources",
+        throttlingConfig.numExtractors.help: "Number of extractors that Sqoop will use",
+        throttlingConfig.help: "Set throttling boundaries to not overload your systems"
+    },
+    version: "1"
+ }
+
+/v1/links/ - [GET]  Get all links
+-------------------------------------------
+
+Get all the links created in Sqoop.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example
+
+::
+
+  {
+    links: [
+      {
+        id: 1,
+        enabled: true,
+        update-user: "root",
+        link-config-values: [],
+        name: "First Link",
+        creation-date: 1415309361756,
+        connector-id: 1,
+        update-date: 1415309361756,
+        creation-user: "root"
+      },
+      {
+        id: 2,
+        enabled: true,
+        update-user: "root",
+        link-config-values: [],
+        name: "Second Link",
+        creation-date: 1415309390807,
+        connector-id: 2,
+        update-date: 1415309390807,
+        creation-user: "root"
+      }
+    ]
+  }
+
+
+/v1/links?cname=[cname] - [GET]  Get all links by Connector
+------------------------------------------------------------
+Get all the links for a given connector identified by the ``[cname]`` part.
+
+
+/v1/link/[lname]  or /v1/link/[lid] - [GET] - Get Link
+-------------------------------------------------------------------------------
+
+Provide the id or unique name of the link in the ``[lid]`` or ``[lname]`` part of the url.
+
+Get all the details of the link, including the id, name, type and the corresponding config input values for the link.
+
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example:
+
+::
+
+ {
+    link: {
+        id: 1,
+        enabled: true,
+        link-config-values: [{
+            id: 1,
+            inputs: [{
+                id: 1,
+                name: "linkConfig.uri",
+                value: "hdfs%3A%2F%2Fnamenode%3A8090",
+                type: "STRING",
+                size: 255,
+                sensitive: false
+            }],
+            name: "linkConfig",
+            type: "LINK"
+        }],
+        update-user: "root",
+        name: "First Link",
+        creation-date: 1415287846371,
+        connector-id: 1,
+        update-date: 1415287846371,
+        creation-user: "root"
+    }
+ }
+
+/v1/link - [POST] - Create Link
+---------------------------------------------------------
+
+Create a new link object. Provide values to the link config inputs for the ones that are required.
+
+* Method: ``POST``
+* Format: ``JSON``
+* Fields of Request:
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``link``                 | The root of the post data in JSON                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``id``                   | The id of the link can be left blank in the post data                                |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``enabled``              | Whether to enable this link (true/false)                                             |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-date``          | The last updated time of this link                                                   |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-date``        | The creation time of this link                                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-user``          | The user who updated this link                                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-user``        | The user who created this link                                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``name``                 | The name of this link                                                                |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``link-config-values``   | Config input values for link config for the corresponding connector                  |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``connector-id``         | The id of the connector used for this link                                           |
++--------------------------+--------------------------------------------------------------------------------------+
+
+* Request Example:
+
+::
+
+  {
+    link: {
+        id: -1,
+        enabled: true,
+        link-config-values: [{
+            id: 1,
+            inputs: [{
+                id: 1,
+                name: "linkConfig.uri",
+                value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+                type: "STRING",
+                size: 255,
+                sensitive: false
+            }],
+            name: "testInput",
+            type: "LINK"
+        }],
+        update-user: "root",
+        name: "testLink",
+        creation-date: 1415202223048,
+        connector-id: 1,
+        update-date: 1415202223048,
+        creation-user: "root"
+    }
+  }
+
+* Fields of Response:
+
++---------------------------+--------------------------------------------------------------------------------------+
+|   Field                   | Description                                                                          |
++===========================+======================================================================================+
+| ``id``                    | The id assigned for this newly created link                                          |
++---------------------------+--------------------------------------------------------------------------------------+
+| ``validation-result``     | The validation status for the link config inputs given in the post data              |
++---------------------------+--------------------------------------------------------------------------------------+
+
+* ERROR Response Example:
+
+::
+
+   {
+     "validation-result": [
+         {
+             "linkConfig": [
+                 {
+                     "message": "Invalid URI. URI must either be null or a 
valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, 
hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+                     "status": "ERROR"
+                 }
+             ]
+         }
+     ]
+   }
+
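+A request of this shape can be submitted, for instance, with ``curl`` (endpoint details as assumed in the Initialization section; ``link.json`` is a hypothetical file holding the request body shown above)::
+
+  curl -X POST -H "sqoop-user-name: root" -H "Content-Type: application/json" \
+       -d @link.json http://localhost:12000/sqoop/v1/link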
+
+/v1/link/[lname]  or /v1/link/[lid] - [PUT] - Update Link
+---------------------------------------------------------
+
+Update an existing link object with name [lname] or id [lid]. To make the procedure of filling inputs easier, the general practice
+is to get the link first and then change some of the values for the inputs.
+
+* Method: ``PUT``
+* Format: ``JSON``
+
+* OK Response Example:
+
+::
+
+  {
+    "validation-result": [
+        {}
+    ]
+  }
+
+/v1/link/[lname]  or /v1/link/[lid]  - [DELETE] - Delete Link
+-----------------------------------------------------------------
+
+Delete a link with name [lname] or id [lid].
+
+* Method: ``DELETE``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
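+
+For example (endpoint details as assumed in the Initialization section)::
+
+  curl -X DELETE -H "sqoop-user-name: root" http://localhost:12000/sqoop/v1/link/1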
+
+/v1/link/[lid]/enable  or /v1/link/[lname]/enable  - [PUT] - Enable Link
+--------------------------------------------------------------------------------
+
+Enable a link with id ``lid`` or name ``lname``
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/link/[lid]/disable or /v1/link/[lname]/disable - [PUT] - Disable Link
+--------------------------------------------------------------------------------
+
+Disable a link with id ``lid`` or name ``lname``
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/jobs/ - [GET]  Get all jobs
+-------------------------------------------
+
+Get all the jobs created in Sqoop.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example:
+
+::
+
+  {
+     jobs: [{
+        driver-config-values: [],
+            enabled: true,
+            from-connector-id: 1,
+            update-user: "root",
+            to-config-values: [],
+            to-connector-id: 2,
+            creation-date: 1415310157618,
+            update-date: 1415310157618,
+            creation-user: "root",
+            id: 1,
+            to-link-id: 2,
+            from-config-values: [],
+            name: "First Job",
+            from-link-id: 1
+       },{
+        driver-config-values: [],
+            enabled: true,
+            from-connector-id: 2,
+            update-user: "root",
+            to-config-values: [],
+            to-connector-id: 1,
+            creation-date: 1415310650600,
+            update-date: 1415310650600,
+            creation-user: "root",
+            id: 2,
+            to-link-id: 1,
+            from-config-values: [],
+            name: "Second Job",
+            from-link-id: 2
+       }]
+  }
+
+/v1/jobs?cname=[cname] - [GET]  Get all jobs by connector
+------------------------------------------------------------
+Get all the jobs for a given connector identified by the ``[cname]`` part.
+
+
+/v1/job/[jname] or /v1/job/[jid] - [GET] - Get Job
+-----------------------------------------------------
+
+Provide the name or the id of the job in the ``[jname]`` or ``[jid]`` part of the url.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+
+* Response Example:
+
+::
+
+  {
+    job: {
+        driver-config-values: [{
+                id: 7,
+                inputs: [{
+                    id: 25,
+                    name: "throttlingConfig.numExtractors",
+                    value: "3",
+                    type: "INTEGER",
+                    sensitive: false
+                }, {
+                    id: 26,
+                    name: "throttlingConfig.numLoaders",
+                    value: "3",
+                    type: "INTEGER",
+                    sensitive: false
+                }],
+                name: "throttlingConfig",
+                type: "JOB"
+            }],
+            enabled: true,
+            from-connector-id: 1,
+            update-user: "root",
+            to-config-values: [{
+                id: 6,
+                inputs: [{
+                    id: 19,
+                    name: "toJobConfig.schemaName",
+                    type: "STRING",
+                    size: 50,
+                    sensitive: false
+                }, {
+                    id: 20,
+                    name: "toJobConfig.tableName",
+                    value: "text",
+                    type: "STRING",
+                    size: 2000,
+                    sensitive: false
+                }, {
+                    id: 21,
+                    name: "toJobConfig.sql",
+                    type: "STRING",
+                    size: 50,
+                    sensitive: false
+                }, {
+                    id: 22,
+                    name: "toJobConfig.columns",
+                    type: "STRING",
+                    size: 50,
+                    sensitive: false
+                }, {
+                    id: 23,
+                    name: "toJobConfig.stageTableName",
+                    type: "STRING",
+                    size: 2000,
+                    sensitive: false
+                }, {
+                    id: 24,
+                    name: "toJobConfig.shouldClearStageTable",
+                    type: "BOOLEAN",
+                    sensitive: false
+                }],
+                name: "toJobConfig",
+                type: "JOB"
+            }],
+            to-connector-id: 2,
+            creation-date: 1415310157618,
+            update-date: 1415310157618,
+            creation-user: "root",
+            id: 1,
+            to-link-id: 2,
+            from-config-values: [{
+                id: 2,
+                inputs: [{
+                    id: 2,
+                    name: "fromJobConfig.inputDirectory",
+                    value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+                    type: "STRING",
+                    size: 255,
+                    sensitive: false
+                }],
+                name: "fromJobConfig",
+                type: "JOB"
+            }],
+            name: "First Job",
+            from-link-id: 1
+    }
+ }
+
+
+/v1/job - [POST] - Create Job
+---------------------------------------------------------
+
+Create a new job object with the corresponding config values.
+
+* Method: ``POST``
+* Format: ``JSON``
+
+* Fields of Request:
+
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``job``                  | The root of the post data in JSON                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``from-link-id``         | The id of the from link for the job                                                  |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``to-link-id``           | The id of the to link for the job                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``id``                   | The id of the job can be left blank in the post data                                 |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``enabled``              | Whether to enable this job (true/false)                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-date``          | The last updated time of this job                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-date``        | The creation time of this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``update-user``          | The user who updated this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-user``        | The user who created this job                                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``name``                 | The name of this job                                                                 |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``from-config-values``   | Config input values for the FROM part of the job                                     |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``to-config-values``     | Config input values for the TO part of the job                                       |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``driver-config-values`` | Config input values for the driver                                                   |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``connector-id``         | The id of the connector used for this job                                            |
++--------------------------+--------------------------------------------------------------------------------------+
+
+
+* Request Example:
+
+::
+
+ {
+   job: {
+     driver-config-values: [
+       {
+         id: 7,
+         inputs: [
+           {
+             id: 25,
+             name: "throttlingConfig.numExtractors",
+             value: "3",
+             type: "INTEGER",
+             sensitive: false
+           },
+           {
+             id: 26,
+             name: "throttlingConfig.numLoaders",
+             value: "3",
+             type: "INTEGER",
+             sensitive: false
+           }
+         ],
+         name: "throttlingConfig",
+         type: "JOB"
+       }
+     ],
+     enabled: true,
+     from-connector-id: 1,
+     update-user: "root",
+     to-config-values: [
+       {
+         id: 6,
+         inputs: [
+           {
+             id: 19,
+             name: "toJobConfig.schemaName",
+             type: "STRING",
+             size: 50,
+             sensitive: false
+           },
+           {
+             id: 20,
+             name: "toJobConfig.tableName",
+             value: "text",
+             type: "STRING",
+             size: 2000,
+             sensitive: false
+           },
+           {
+             id: 21,
+             name: "toJobConfig.sql",
+             type: "STRING",
+             size: 50,
+             sensitive: false
+           },
+           {
+             id: 22,
+             name: "toJobConfig.columns",
+             type: "STRING",
+             size: 50,
+             sensitive: false
+           },
+           {
+             id: 23,
+             name: "toJobConfig.stageTableName",
+             type: "STRING",
+             size: 2000,
+             sensitive: false
+           },
+           {
+             id: 24,
+             name: "toJobConfig.shouldClearStageTable",
+             type: "BOOLEAN",
+             sensitive: false
+           }
+         ],
+         name: "toJobConfig",
+         type: "JOB"
+       }
+     ],
+     to-connector-id: 2,
+     creation-date: 1415310157618,
+     update-date: 1415310157618,
+     creation-user: "root",
+     id: -1,
+     to-link-id: 2,
+     from-config-values: [
+       {
+         id: 2,
+         inputs: [
+           {
+             id: 2,
+             name: "fromJobConfig.inputDirectory",
+             value: "hdfs%3A%2F%2Fvbsqoop-1.ent.cloudera.com%3A8020%2Fuser%2Froot%2Fjob1",
+             type: "STRING",
+             size: 255,
+             sensitive: false
+           }
+         ],
+         name: "fromJobConfig",
+         type: "JOB"
+       }
+     ],
+     name: "Test Job",
+     from-link-id: 1
+    }
+  }
+
+* Fields of Response:
+
++---------------------------+--------------------------------------------------------------------------------------+
+|   Field                   | Description                                                                          |
++===========================+======================================================================================+
+| ``id``                    | The id assigned for this newly created job                                           |
++---------------------------+--------------------------------------------------------------------------------------+
+| ``validation-result``     | The validation status for the job config and driver config inputs in the post data  |
++---------------------------+--------------------------------------------------------------------------------------+
+
+
+* ERROR Response Example:
+
+::
+
+   {
+     "validation-result": [
+         {
+             "linkConfig": [
+                 {
+                     "message": "Invalid URI. URI must either be null or a 
valid URI. Here are a few valid example URIs: hdfs://example.com:8020/, 
hdfs://example.com/, file:///, file:///tmp, file://localhost/tmp",
+                     "status": "ERROR"
+                 }
+             ]
+         }
+     ]
+   }
+
+
+/v1/job/[jid] - [PUT] - Update Job
+---------------------------------------------------------
+
+Update an existing job object with id [jid]. To make the procedure of filling inputs easier, the general practice
+is to get the existing job object first and then change some of the inputs.
+
+* Method: ``PUT``
+* Format: ``JSON``
+
+The same as Create Job.
+
+* OK Response Example:
+
+::
+
+  {
+    "validation-result": [
+        {}
+    ]
+  }
+
+
+/v1/job/[jid] - [DELETE] - Delete Job
+---------------------------------------------------------
+
+Delete a job with id ``jid``.
+
+* Method: ``DELETE``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/job/[jid]/enable - [PUT] - Enable Job
+---------------------------------------------------------
+
+Enable a job with id ``jid``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
+
+/v1/job/[jid]/disable - [PUT] - Disable Job
+---------------------------------------------------------
+
+Disable a job with id ``jid``.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``None``
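+
+For illustration, hedged ``curl`` sketches for both endpoints (host and port assumed): ::
+
+  # enable job 1
+  curl -X PUT http://localhost:12000/sqoop/v1/job/1/enable
+
+  # disable job 1
+  curl -X PUT http://localhost:12000/sqoop/v1/job/1/disable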
+
+
+/v1/job/[jid]/start or /v1/job/[jname]/start - [PUT] - Start Job
+---------------------------------------------------------------------------------
+
+Start a job with name ``[jname]`` or with id ``[jid]`` to trigger the job execution.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``Submission Record``
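+
+For illustration, a hedged ``curl`` sketch (host and port are assumptions taken from the demo setup): ::
+
+  # trigger execution of job 1
+  curl -X PUT http://localhost:12000/sqoop/v1/job/1/start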
+
+* BOOTING Response Example
+
+::
+
+  {
+    "submission": {
+      "progress": -1,
+      "last-update-date": 1415312531188,
+      "external-id": "job_1412137947693_0004",
+      "status": "BOOTING",
+      "job": 2,
+      "creation-date": 1415312531188,
+      "to-schema": {
+        "created": 1415312531426,
+        "name": "HDFS file",
+        "columns": []
+      },
+      "external-link": "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+      "from-schema": {
+        "created": 1415312531342,
+        "name": "text",
+        "columns": [
+          {
+            "name": "id",
+            "nullable": true,
+            "unsigned": null,
+            "type": "FIXED_POINT",
+            "size": null
+          },
+          {
+            "name": "txt",
+            "nullable": true,
+            "type": "TEXT",
+            "size": null
+          }
+        ]
+      }
+    }
+  }
+
+* SUCCEEDED Response Example
+
+::
+
+   {
+     submission: {
+       progress: -1,
+       last-update-date: 1415312809485,
+       external-id: "job_1412137947693_0004",
+       status: "SUCCEEDED",
+       job: 2,
+       creation-date: 1415312531188,
+       external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+       counters: {
+         org.apache.hadoop.mapreduce.JobCounter: {
+           SLOTS_MILLIS_MAPS: 373553,
+           MB_MILLIS_MAPS: 382518272,
+           TOTAL_LAUNCHED_MAPS: 10,
+           MILLIS_MAPS: 373553,
+           VCORES_MILLIS_MAPS: 373553,
+           OTHER_LOCAL_MAPS: 10
+         },
+         org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: {
+           BYTES_WRITTEN: 0
+         },
+         org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: {
+           BYTES_READ: 0
+         },
+         org.apache.hadoop.mapreduce.TaskCounter: {
+           MAP_INPUT_RECORDS: 0,
+           MERGED_MAP_OUTPUTS: 0,
+           PHYSICAL_MEMORY_BYTES: 4065599488,
+           SPILLED_RECORDS: 0,
+           COMMITTED_HEAP_BYTES: 3439853568,
+           CPU_MILLISECONDS: 236900,
+           FAILED_SHUFFLE: 0,
+           VIRTUAL_MEMORY_BYTES: 15231422464,
+           SPLIT_RAW_BYTES: 1187,
+           MAP_OUTPUT_RECORDS: 1000000,
+           GC_TIME_MILLIS: 7282
+         },
+         org.apache.hadoop.mapreduce.FileSystemCounter: {
+           FILE_WRITE_OPS: 0,
+           FILE_READ_OPS: 0,
+           FILE_LARGE_READ_OPS: 0,
+           FILE_BYTES_READ: 0,
+           HDFS_BYTES_READ: 1187,
+           FILE_BYTES_WRITTEN: 1191230,
+           HDFS_LARGE_READ_OPS: 0,
+           HDFS_WRITE_OPS: 10,
+           HDFS_READ_OPS: 10,
+           HDFS_BYTES_WRITTEN: 276389736
+         },
+         org.apache.sqoop.submission.counter.SqoopCounters: {
+           ROWS_READ: 1000000
+         }
+       }
+     }
+   }
+
+
+* ERROR Response Example
+
+::
+
+  {
+    "submission": {
+      "progress": -1,
+      "last-update-date": 1415312390570,
+      "status": "FAILURE_ON_SUBMIT",
+      "exception": "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run",
+      "job": 1,
+      "creation-date": 1415312390570,
+      "to-schema": {
+        "created": 1415312390797,
+        "name": "text",
+        "columns": [
+          {
+            "name": "id",
+            "nullable": true,
+            "unsigned": null,
+            "type": "FIXED_POINT",
+            "size": null
+          },
+          {
+            "name": "txt",
+            "nullable": true,
+            "type": "TEXT",
+            "size": null
+          }
+        ]
+      },
+      "from-schema": {
+        "created": 1415312390778,
+        "name": "HDFS file",
+        "columns": [
+        ]
+      },
+      "exception-trace": "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_00"
+    }
+  }
+
+/v1/job/[jid]/stop or /v1/job/[jname]/stop - [PUT] - Stop Job
+---------------------------------------------------------------------------------
+
+Stop a job with name ``[jname]`` or with id ``[jid]`` to abort the running job.
+
+* Method: ``PUT``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``Submission Record``
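+
+For illustration, a hedged ``curl`` sketch (host and port assumed): ::
+
+  # abort the running job 1
+  curl -X PUT http://localhost:12000/sqoop/v1/job/1/stop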
+
+/v1/job/[jid]/status or /v1/job/[jname]/status - [GET] - Get Job Status
+---------------------------------------------------------------------------------
+
+Get the status of the running job with name ``[jname]`` or with id ``[jid]``.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+* Response Content: ``Submission Record``
+
+::
+
+  {
+      "submission": {
+          "progress": 0.25,
+          "last-update-date": 1415312603838,
+          "external-id": "job_1412137947693_0004",
+          "status": "RUNNING",
+          "job": 2,
+          "creation-date": 1415312531188,
+          "external-link": "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/"
+      }
+  }
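+
+For illustration, a hedged ``curl`` sketch of a status poll (host and port assumed; job 2 matches the response example above): ::
+
+  # poll the status of job 2
+  curl http://localhost:12000/sqoop/v1/job/2/status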
+
+/v1/submissions - [GET] - Get all job Submissions
+----------------------------------------------------------------------
+
+Get all the submissions for every job started in Sqoop.
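+
+For illustration, a hedged ``curl`` sketch (host and port assumed): ::
+
+  # list every submission record known to the server
+  curl http://localhost:12000/sqoop/v1/submissions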
+
+/v1/submissions?jname=[jname] - [GET] - Get Submissions by Job
+----------------------------------------------------------------------
+
+Retrieve all past submissions for the given job. Each submission record includes details such as the status, counters and URLs for that submission.
+
+Provide the name of the job in the ``[jname]`` part of the URL.
+
+* Method: ``GET``
+* Format: ``JSON``
+* Request Content: ``None``
+* Fields of Response:
+
++--------------------------+--------------------------------------------------------------------------------------+
+|   Field                  | Description                                                                          |
++==========================+======================================================================================+
+| ``progress``             | The progress of the running Sqoop job                                               |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``job``                  | The id of the Sqoop job                                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``creation-date``        | The submission timestamp                                                             |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``last-update-date``     | The timestamp of the last status update                                              |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``status``               | The status of this job submission                                                    |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``external-id``          | The job id of the Sqoop job running on Hadoop                                        |
++--------------------------+--------------------------------------------------------------------------------------+
+| ``external-link``        | The link to track the job status on Hadoop                                           |
++--------------------------+--------------------------------------------------------------------------------------+
+
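+For illustration, a hedged ``curl`` sketch (host and port assumed; ``Sqoopy`` is a hypothetical job name): ::
+
+  # list all past submissions of the job named "Sqoopy"
+  curl "http://localhost:12000/sqoop/v1/submissions?jname=Sqoopy"
+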
+* Response Example:
+
+::
+
+  {
+    submissions: [
+      {
+        progress: -1,
+        last-update-date: 1415312809485,
+        external-id: "job_1412137947693_0004",
+        status: "SUCCEEDED",
+        job: 2,
+        creation-date: 1415312531188,
+        external-link: "http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0004/",
+        counters: {
+          org.apache.hadoop.mapreduce.JobCounter: {
+            SLOTS_MILLIS_MAPS: 373553,
+            MB_MILLIS_MAPS: 382518272,
+            TOTAL_LAUNCHED_MAPS: 10,
+            MILLIS_MAPS: 373553,
+            VCORES_MILLIS_MAPS: 373553,
+            OTHER_LOCAL_MAPS: 10
+          },
+          org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter: {
+            BYTES_WRITTEN: 0
+          },
+          org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter: {
+            BYTES_READ: 0
+          },
+          org.apache.hadoop.mapreduce.TaskCounter: {
+            MAP_INPUT_RECORDS: 0,
+            MERGED_MAP_OUTPUTS: 0,
+            PHYSICAL_MEMORY_BYTES: 4065599488,
+            SPILLED_RECORDS: 0,
+            COMMITTED_HEAP_BYTES: 3439853568,
+            CPU_MILLISECONDS: 236900,
+            FAILED_SHUFFLE: 0,
+            VIRTUAL_MEMORY_BYTES: 15231422464,
+            SPLIT_RAW_BYTES: 1187,
+            MAP_OUTPUT_RECORDS: 1000000,
+            GC_TIME_MILLIS: 7282
+          },
+          org.apache.hadoop.mapreduce.FileSystemCounter: {
+            FILE_WRITE_OPS: 0,
+            FILE_READ_OPS: 0,
+            FILE_LARGE_READ_OPS: 0,
+            FILE_BYTES_READ: 0,
+            HDFS_BYTES_READ: 1187,
+            FILE_BYTES_WRITTEN: 1191230,
+            HDFS_LARGE_READ_OPS: 0,
+            HDFS_WRITE_OPS: 10,
+            HDFS_READ_OPS: 10,
+            HDFS_BYTES_WRITTEN: 276389736
+          },
+          org.apache.sqoop.submission.counter.SqoopCounters: {
+            ROWS_READ: 1000000
+          }
+        }
+      },
+      {
+        progress: -1,
+        last-update-date: 1415312390570,
+        status: "FAILURE_ON_SUBMIT",
+        exception: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner run",
+        job: 1,
+        creation-date: 1415312390570,
+        exception-trace: "org.apache.sqoop.common.SqoopException: GENERIC_HDFS_CONNECTOR_0000:Error occurs during partitioner...."
+      }
+    ]
+  }
\ No newline at end of file

Added: 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Sqoop5MinutesDemo.rst
==============================================================================
--- 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Sqoop5MinutesDemo.rst
 (added)
+++ 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Sqoop5MinutesDemo.rst
 Tue Nov 25 21:57:10 2014
@@ -0,0 +1,222 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+====================
+Sqoop 5 Minutes Demo
+====================
+
+This page will walk you through the basic usage of Sqoop. You need to have the Sqoop server and client installed and configured in order to follow this guide. The installation procedure is described on the `Installation page <Installation.html>`_. Please note that the exact output shown in this page might differ from yours as Sqoop evolves. All major information should however remain the same.
+
+Sqoop uses unique names or persistent ids to identify connectors, links, jobs and configs. We support querying an entity by its unique name or by its persistent database id.
+
+Starting Client
+===============
+
+Start client in interactive mode using following command: ::
+
+  sqoop2-shell
+
+Configure client to use your Sqoop server: ::
+
+  sqoop:000> set server --host your.host.com --port 12000 --webapp sqoop
+
+Verify that connection is working by simple version checking: ::
+
+  sqoop:000> show version --all
+  client version:
+    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
+  server version:
+    Sqoop 2.0.0-SNAPSHOT source revision 418c5f637c3f09b94ea7fc3b0a4610831373a25f
+    Compiled by vbasavaraj on Mon Nov  3 08:18:21 PST 2014
+  API versions:
+    [v1]
+
+You should receive output similar to the above, describing the Sqoop client build version, the server build version and the supported versions of the REST API.
+
+You can use the help command to check all the supported commands in the Sqoop shell.
+
+::
+
+  sqoop:000> help
+  For information about Sqoop, visit: http://sqoop.apache.org/
+
+  Available commands:
+    exit    (\x  ) Exit the shell
+    history (\H  ) Display, manage and recall edit-line history
+    help    (\h  ) Display this help message
+    set     (\st ) Configure various client options and settings
+    show    (\sh ) Display various objects and configuration options
+    create  (\cr ) Create new object in Sqoop repository
+    delete  (\d  ) Delete existing object in Sqoop repository
+    update  (\up ) Update objects in Sqoop repository
+    clone   (\cl ) Create new object based on existing one
+    start   (\sta) Start job
+    stop    (\stp) Stop job
+    status  (\stu) Display status of a job
+    enable  (\en ) Enable object in Sqoop repository
+    disable (\di ) Disable object in Sqoop repository
+
+
+Creating Link Object
+==========================
+
+Check for the registered connectors on your Sqoop server: ::
+
+  sqoop:000> show connector --all
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+  | Id |          Name          |    Version     |                        Class                         | Supported Directions |
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+  | 1  | hdfs-connector         | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.hdfs.HdfsConnector        | FROM/TO              |
+  | 2  | generic-jdbc-connector | 2.0.0-SNAPSHOT | org.apache.sqoop.connector.jdbc.GenericJdbcConnector | FROM/TO              |
+  +----+------------------------+----------------+------------------------------------------------------+----------------------+
+
+Our example contains two connectors. The connector with Id 2 is called the ``generic-jdbc-connector``. This is a basic connector relying on the Java JDBC interface for communicating with data sources. It should work with most common databases that provide JDBC drivers. Please note that you must install JDBC drivers separately. They are not bundled in Sqoop due to incompatible licenses.
+
+The Generic JDBC Connector in our example has the persistent Id 2 and we will use this value to create a new link object for this connector. Note that the link name should be unique.
+
+::
+
+  sqoop:000> create link --cid 2
+  Creating link for connector with id 2
+  Please fill following values to create new link object
+  Name: First Link
+
+  Link configuration
+  JDBC Driver Class: com.mysql.jdbc.Driver
+  JDBC Connection String: jdbc:mysql://mysql.server/database
+  Username: sqoop
+  Password: *****
+  JDBC Connection Properties:
+  There are currently 0 values in the map:
+  entry#protocol=tcp
+  New link was successfully created with validation status OK and persistent id 1
+
+Our new link object was created with assigned id 1.
+
+In the ``show connector --all`` output above we see that there is an hdfs-connector registered in Sqoop with the persistent id 1. Let us create another link object, this time for the hdfs-connector instead.
+
+::
+
+  sqoop:000> create link --cid 1
+  Creating link for connector with id 1
+  Please fill following values to create new link object
+  Name: Second Link
+
+  Link configuration
+  HDFS URI: hdfs://nameservice1:8020/
+  New link was successfully created with validation status OK and persistent id 2
+
+Creating Job Object
+===================
+
+Connectors implement ``From`` for reading data from a data source and/or ``To`` for writing data to a data source. The Generic JDBC Connector supports both directions. The list of supported directions for each connector can be seen in the output of the ``show connector --all`` command above. In order to create a job we need to specify the ``From`` and ``To`` parts of the job, uniquely identified by their link Ids. We already have two links created in the system; you can verify the same with the following command.
+
+::
+
+  sqoop:000> show link --all
+  2 link(s) to show:
+  link with id 1 and name First Link (Enabled: true, Created by root at 11/4/14 4:27 PM, Updated by root at 11/4/14 4:27 PM)
+  Using Connector id 2
+    Link configuration
+      JDBC Driver Class: com.mysql.jdbc.Driver
+      JDBC Connection String: jdbc:mysql://mysql.ent.cloudera.com/sqoop
+      Username: sqoop
+      Password:
+      JDBC Connection Properties:
+        protocol = tcp
+  link with id 2 and name Second Link (Enabled: true, Created by root at 11/4/14 4:38 PM, Updated by root at 11/4/14 4:38 PM)
+  Using Connector id 1
+    Link configuration
+      HDFS URI: hdfs://nameservice1:8020/
+
+Next, we can use the two link Ids to associate the ``From`` and ``To`` for the job.
+::
+
+   sqoop:000> create job -f 1 -t 2
+   Creating job for links with from id 1 and to id 2
+   Please fill following values to create new job object
+   Name: Sqoopy
+
+   FromJob configuration
+
+    Schema name:(Required)sqoop
+    Table name:(Required)sqoop
+    Table SQL statement:(Optional)
+    Table column names:(Optional)
+    Partition column name:(Optional) id
+    Null value allowed for the partition column:(Optional)
+    Boundary query:(Optional)
+
+   ToJob configuration
+
+    Output format:
+      0 : TEXT_FILE
+      1 : SEQUENCE_FILE
+    Choose: 0
+    Compression format:
+      0 : NONE
+      1 : DEFAULT
+      2 : DEFLATE
+      3 : GZIP
+      4 : BZIP2
+      5 : LZO
+      6 : LZ4
+      7 : SNAPPY
+      8 : CUSTOM
+    Choose: 0
+    Custom compression format:(Optional)
+    Output directory:(Required)/root/projects/sqoop
+
+   Driver Config
+
+    Extractors: 2
+    Loaders: 2
+   New job was successfully created with validation status OK and persistent id 1
+
+Our new job object was created with assigned id 1.
+
+Start Job (a.k.a. Data Transfer)
+================================
+
+You can start a Sqoop job with the following command: ::
+
+  sqoop:000> start job --jid 1
+  Submission details
+  Job ID: 1
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 19:43:29 PST: BOOTING  - Progress is not available
+
+You can iteratively check your running job status with the ``status job`` command: ::
+
+  sqoop:000> status job --jid 1
+  Submission details
+  Job ID: 1
+  Server URL: http://localhost:12000/sqoop/
+  Created by: root
+  Creation date: 2014-11-04 19:43:29 PST
+  Lastly updated by: root
+  External ID: job_1412137947693_0001
+    http://vbsqoop-1.ent.cloudera.com:8088/proxy/application_1412137947693_0001/
+  2014-11-04 20:09:16 PST: RUNNING  - 0.00 % 
+
+And finally you can stop the running job at any time using the ``stop job`` command: ::
+
+  sqoop:000> stop job --jid 1
\ No newline at end of file

Added: 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Tools.rst
==============================================================================
--- websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Tools.rst 
(added)
+++ websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Tools.rst 
Tue Nov 25 21:57:10 2014
@@ -0,0 +1,129 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=====
+Tools
+=====
+
+Tools are server commands that administrators can execute on the Sqoop server machine in order to perform various maintenance tasks. A tool execution will always perform a given task and finish; there are no long running services implemented as tools.
+
+In order to perform the maintenance task each tool is supposed to do, tools need to be executed in exactly the same environment as the main Sqoop server. The tool binary will take care of setting up the ``CLASSPATH`` and other environment variables that might be required. However, it's up to the administrator to run the tool under the same user that is used for the server. This is usually configured automatically for various Hadoop distributions (such as Apache Bigtop).
+
+
+.. note:: Running tools while the Sqoop Server is also running is not recommended as it might lead to data corruption and service disruption.
+
+List of available tools:
+
+* verify
+* upgrade
+* repositorydump
+* repositoryload
+
+To run the desired tool, execute the ``sqoop2-tool`` binary with the desired tool name. For example, to run the ``verify`` tool::
+
+  sqoop2-tool verify
+
+.. note:: Stop the Sqoop Server before running Sqoop tools. Running tools while the Sqoop Server is running can lead to data corruption and service disruption.
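+
+As a sketch, a typical safe sequence looks like this (assuming the standard ``sqoop2-server`` control script is available, as described on the Upgrade page): ::
+
+  # stop the server, run the tool in the same environment, then restart
+  sqoop2-server stop
+  sqoop2-tool verify
+  sqoop2-server start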
+
+Verify
+======
+
+The verify tool will verify the Sqoop server configuration by starting all subsystems, with the exception of servlets, and tearing them down.
+
+To run the ``verify`` tool::
+
+  sqoop2-tool verify
+
+If the verification process succeeds, you should see messages like::
+
+  Verification was successful.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has finished correctly
+
+If the verification process finds any inconsistencies, it will print out the following message instead::
+
+  Verification has failed, please check Server logs for further details.
+  Tool class org.apache.sqoop.tools.tool.VerifyTool has failed.
+
+Further details on why the verification failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+Upgrade
+=======
+
+Upgrades all versionable components inside Sqoop 2. This includes structural changes inside the repository and stored metadata.
+Running this tool on a Sqoop deployment that was already upgraded will have no effect.
+
+To run the ``upgrade`` tool::
+
+  sqoop2-tool upgrade
+
+Upon successful upgrade you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+Execution failure will show the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has failed.
+
+Further details on why the upgrade process failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+RepositoryDump
+==============
+
+Writes the user-created contents of the Sqoop repository to a file in JSON format. This includes connections, jobs and submissions.
+
+To run the ``repositorydump`` tool::
+
+  sqoop2-tool repositorydump -o repository.json
+
+As an option, the administrator can choose to include sensitive information such as database connection passwords in the file::
+
+  sqoop2-tool repositorydump -o repository.json --include-sensitive
+
+Upon successful execution, you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has finished correctly.
+
+If the repository dump has failed, you will see the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryDumpTool has failed.
+
+Further details on why the dump process failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+RepositoryLoad
+==============
+
+Reads a JSON formatted file created by RepositoryDump and loads it into the current Sqoop repository.
+
+To run the ``repositoryload`` tool::
+
+  sqoop2-tool repositoryload -i repository.json
+
+Upon successful execution, you should see the following message::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has finished correctly.
+
+If the repository load failed, you will see the following message instead::
+
+  Tool class org.apache.sqoop.tools.tool.RepositoryLoadTool has failed.
+
+Alternatively you may see an exception. Further details on why the load process failed will be available in the Sqoop server log - the same file that the Sqoop Server logs into.
+
+.. note:: If the repository dump was created without passwords (the default), the connections will not contain a password and the jobs will fail to execute. In that case you'll need to manually update the connections and set the password.
+.. note:: The RepositoryLoad tool will always generate new connections, jobs and submissions from the file, even when identical objects already exist in the repository.
+
+
+
+
+
+

Added: 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Upgrade.rst
==============================================================================
--- 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Upgrade.rst 
(added)
+++ 
websites/staging/sqoop/trunk/content/docs/1.99.4/src/site/sphinx/Upgrade.rst 
Tue Nov 25 21:57:10 2014
@@ -0,0 +1,84 @@
+.. Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+=======
+Upgrade
+=======
+
+This page describes the procedure that you need to follow in order to upgrade Sqoop from one release to a higher release. Upgrading the client and server components is discussed separately.
+
+.. note:: Only updates from one Sqoop 2 release to another are covered, starting with upgrades from version 1.99.2. This guide does not contain general information on how to upgrade from Sqoop 1 to Sqoop 2.
+
+Upgrading Server
+================
+
+As the Sqoop server uses a database repository for persisting Sqoop entities such as the connectors, driver, links and jobs, the repository schema might need to be updated as part of the server upgrade. In addition, the configs and inputs described by the various connectors and the driver may also change with a new server version and might need a data upgrade.
+
+There are two ways to upgrade Sqoop entities in the repository: you can either execute the upgrade tool or configure the Sqoop server to perform all necessary upgrades on start up.
+
+It's strongly advised to back up the repository before moving on to the next steps. Backup instructions will vary depending on the repository implementation. For example, using MySQL as a repository will require a different backup procedure than Apache Derby. Please follow your repository's backup procedure.
+
+Upgrading Server using upgrade tool
+-----------------------------------
+
+The preferred upgrade path is to explicitly run the `Upgrade Tool <Tools.html#upgrade>`_. The first step, however, is to shut down the server, as having both the server and the upgrade utility accessing the same repository might corrupt it::
+
+  sqoop2-server stop
+
+When the server has been successfully stopped, you can update the server bits and simply run the upgrade tool::
+
+  sqoop2-tool upgrade
+
+You should see that the upgrade process has been successful::
+
+  Tool class org.apache.sqoop.tools.tool.UpgradeTool has finished correctly.
+
+In case of any failure, please take a look at the `Upgrade Tool <Tools.html#upgrade>`_ documentation page.
+
+Upgrading Server on start-up
+----------------------------
+
+The capability of performing the upgrade has been built into the server; however, it is disabled by default to avoid any unintentional changes to the repository. You can start the repository schema upgrade procedure by stopping the server: ::
+
+  sqoop2-server stop
+
+Before starting the server again you will need to enable the auto-upgrade feature that will perform all necessary changes during Sqoop Server start up.
+
+You need to set the following property in the configuration file ``sqoop.properties`` for the repository schema upgrade.
+::
+
+   org.apache.sqoop.repository.schema.immutable=false
+
+You need to set the following property in the configuration file ``sqoop.properties`` for the connector config data upgrade.
+::
+
+   org.apache.sqoop.connector.autoupgrade=true
+
+You need to set the following property in the configuration file ``sqoop.properties`` for the driver config data upgrade.
+::
+
+   org.apache.sqoop.driver.autoupgrade=true
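+
+Taken together, the relevant lines in ``sqoop.properties`` would look like this (a sketch combining the three settings above; ``#`` lines are standard properties-file comments): ::
+
+   # allow repository schema changes during start up
+   org.apache.sqoop.repository.schema.immutable=false
+   # automatically upgrade connector config data
+   org.apache.sqoop.connector.autoupgrade=true
+   # automatically upgrade driver config data
+   org.apache.sqoop.driver.autoupgrade=true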
+
+When all properties are set, start the Sqoop server using the following command::
+
+  sqoop2-server start
+
+All required actions will be performed automatically during the server bootstrap. It's strongly advised to set all three properties back to their original values once the server has been successfully started and the upgrade has completed.
+
+Upgrading Client
+================
+
+The client does not require any manual steps during upgrade. Replacing the binaries with the updated version is sufficient.

