[jira] [Commented] (SQOOP-3171) Import as parquet jobs failed randomly while multiple jobs concurrently importing into targets with same parent

2018-03-27 Thread Szabolcs Vasas (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16415586#comment-16415586
 ] 

Szabolcs Vasas commented on SQOOP-3171:
---

Hi guys,

Since Kite does not seem to have an active community, I don't expect any new 
releases.

However, I am planning to work on removing the Kite dependency from Sqoop, which 
should resolve all the Kite-related limitations. I don't have a timeline yet, 
but I hope I can deliver some partial results by the end of the next quarter.

Szabolcs

> Import as parquet jobs failed randomly while multiple jobs concurrently 
> importing into targets with same parent
> ---
>
> Key: SQOOP-3171
> URL: https://issues.apache.org/jira/browse/SQOOP-3171
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Xiaomin Zhang
>Assignee: Sandish Kumar HN
>Priority: Major
>
> Running multiple Parquet import jobs concurrently into the following target 
> directories:
> hdfs://ns/path/dataset1
> hdfs://ns/path/dataset2
> In some cases, one of the Sqoop jobs fails with the error below:
> 17/03/19 08:21:21 INFO mapreduce.Job: Job job_1488289274600_188649 failed 
> with state FAILED due to: Job commit failed: 
> org.kitesdk.data.DatasetIOException: Could not cleanly delete 
> path:hdfs://ns/path/.temp/job_1488289274600_188649
> at 
> org.kitesdk.data.spi.filesystem.FileSystemUtil.cleanlyDelete(FileSystemUtil.java:239)
> at 
> org.kitesdk.data.spi.filesystem.TemporaryFileSystemDatasetRepository.delete(TemporaryFileSystemDatasetRepository.java:61)
> at 
> org.kitesdk.data.mapreduce.DatasetKeyOutputFormat$MergeOutputCommitter.commitJob(DatasetKeyOutputFormat.java:395)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:274)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: File hdfs://ns/path/.temp does not 
> exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:705)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:106)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:763)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:759)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:759)
> at 
> org.kitesdk.data.spi.filesystem.FileSystemUtil.cleanlyDelete(FileSystemUtil.java:226)
> This is due to:
> https://issues.cloudera.org/browse/KITE-1155



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (SQOOP-3308) Mock ConnManager field in TestTableDefWriter

2018-03-27 Thread Szabolcs Vasas (JIRA)

 [ 
https://issues.apache.org/jira/browse/SQOOP-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szabolcs Vasas updated SQOOP-3308:
--
Attachment: SQOOP-3308.patch

> Mock ConnManager field in TestTableDefWriter
> 
>
> Key: SQOOP-3308
> URL: https://issues.apache.org/jira/browse/SQOOP-3308
> Project: Sqoop
>  Issue Type: Sub-task
>Affects Versions: 1.5.0
>Reporter: Szabolcs Vasas
>Assignee: Szabolcs Vasas
>Priority: Major
> Attachments: SQOOP-3308.patch, SQOOP-3308.patch, SQOOP-3308.patch, 
> SQOOP-3308.patch
>
>
> TableDefWriter depends on ConnManager to retrieve the column names and types 
> of the table. It also introduces a field called _externalColTypes_ for 
> testing purposes, and TestTableDefWriter uses this field to inject the test 
> table column names and types instead of mocking the ConnManager field.
> This setup makes it harder to add test cases to TestTableDefWriter and is not 
> good practice, so it should be fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 66282: Mock ConnManager field in TestTableDefWriter

2018-03-27 Thread Szabolcs Vasas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66282/
---

(Updated March 27, 2018, noon)


Review request for Sqoop.


Changes
---

Some more refactoring is added to TestTableDefWriter.


Bugs: SQOOP-3308
https://issues.apache.org/jira/browse/SQOOP-3308


Repository: sqoop-trunk


Description
---

This patch removes the externalColTypes field from TableDefWriter since it was 
only used for testing purposes.
TestTableDefWriter is changed to mock the ConnManager object provided to the 
TableDefWriter constructor, and a minor refactoring is done on the class.
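
For illustration, here is a minimal sketch of the mocking approach described
above, using Mockito. The table name, column names and wiring are made up for
this example, and the exact TableDefWriter constructor arguments may differ
from the actual patch:

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.sql.Types;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.sqoop.manager.ConnManager;

    public class MockedConnManagerSketch {

      static ConnManager newMockedConnManager() {
        // Column types the fake "employees" table should report.
        Map<String, Integer> columnTypes = new HashMap<>();
        columnTypes.put("id", Types.INTEGER);
        columnTypes.put("name", Types.VARCHAR);

        // Mock the ConnManager instead of injecting externalColTypes.
        ConnManager connManager = mock(ConnManager.class);
        when(connManager.getColumnNames("employees"))
            .thenReturn(new String[] {"id", "name"});
        when(connManager.getColumnTypes("employees"))
            .thenReturn(columnTypes);

        // The mocked manager can then be passed to the TableDefWriter
        // constructor in the tests.
        return connManager;
      }
    }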


Diffs (updated)
-

  src/java/org/apache/sqoop/hive/TableDefWriter.java e1424c383 
  src/test/org/apache/sqoop/hive/TestTableDefWriter.java 496b5add9 


Diff: https://reviews.apache.org/r/66282/diff/4/

Changes: https://reviews.apache.org/r/66282/diff/3-4/


Testing
---

ant clean test


Thanks,

Szabolcs Vasas



Re: Review Request 66277: Don't create HTML during Ivy report

2018-03-27 Thread Szabolcs Vasas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66277/#review200040
---


Ship it!




Hi Daniel,

Thank you for this patch. I totally agree: generating a 700MB HTML report takes a 
long time, and I was not really able to open and handle it in my browser...

- Szabolcs Vasas


On March 26, 2018, 11:55 a.m., daniel voros wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66277/
> ---
> 
> (Updated March 26, 2018, 11:55 a.m.)
> 
> 
> Review request for Sqoop.
> 
> 
> Bugs: SQOOP-3307
> https://issues.apache.org/jira/browse/SQOOP-3307
> 
> 
> Repository: sqoop-trunk
> 
> 
> Description
> ---
> 
> ant clean report invokes the ivy:report task and creates both HTML and 
> GraphML reports.
> Creation of the HTML report takes ~7 minutes and results in a ~700MB HTML 
> file that's hard to make use of, while the GraphML reporting is fast and 
> easier to read.
> 
> 
> Diffs
> -
> 
>   build.xml d85cf71 
> 
> 
> Diff: https://reviews.apache.org/r/66277/diff/1/
> 
> 
> Testing
> ---
> 
> `ant clean report`
> 
> 
> Thanks,
> 
> daniel voros
> 
>
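
As an illustration of the change under review, here is a minimal sketch of an
ivy:report invocation in build.xml restricted to GraphML output. The target
name and report directory property are hypothetical, and the attribute set
assumes the standard Ivy report task flags (graph/xsl/dot), so the actual
patch may differ:

    <target name="report" depends="resolve"
            description="Generate Ivy dependency reports (GraphML only)">
      <!-- Skip the slow, very large HTML report (xsl="false") and keep
           only the fast GraphML output (graph="true"). -->
      <ivy:report todir="${build.dir}/ivy-report"
                  graph="true"
                  xsl="false"
                  dot="false"/>
    </target>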



Re: Review Request 66300: Upgrade to Hadoop 3.0.0

2018-03-27 Thread daniel voros

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66300/#review200037
---



Patch #1 is the minimal set of changes required to upgrade to Hadoop 3.0.0 that 
passes all unit tests. It also updates:
 - Hive to 3.0.0-SNAPSHOT, since the Hive Hadoop shims were unable to handle Hadoop 3.
 - HBase to 2.0.0-beta2, since Hive 3.0.0-SNAPSHOT depends on HBase 2.0.0-alpha4 at 
the moment.

For the list of other changes and some reasoning behind them see 
https://github.com/dvoros/sqoop/pull/4.

- daniel voros
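
As a rough sketch of the dependency bumps described above: the version strings
come from the comment, but the property keys below are hypothetical (the real
keys live in ivy/libraries.properties and may be named differently):

    # Hypothetical property names; see ivy/libraries.properties in the patch.
    hadoop.version=3.0.0
    hive.version=3.0.0-SNAPSHOT
    hbase.version=2.0.0-beta2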


On March 27, 2018, 8:50 a.m., daniel voros wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66300/
> ---
> 
> (Updated March 27, 2018, 8:50 a.m.)
> 
> 
> Review request for Sqoop.
> 
> 
> Bugs: SQOOP-3305
> https://issues.apache.org/jira/browse/SQOOP-3305
> 
> 
> Repository: sqoop-trunk
> 
> 
> Description
> ---
> 
> To be able to eventually support the latest versions of Hive, HBase and 
> Accumulo, we should start by upgrading our Hadoop dependencies to 3.0.0. See 
> https://hadoop.apache.org/docs/r3.0.0/index.html
> 
> 
> Diffs
> -
> 
>   ivy.xml 6be4fa2 
>   ivy/libraries.properties c44b50b 
>   src/java/org/apache/sqoop/config/ConfigurationHelper.java e07a699 
>   src/java/org/apache/sqoop/hive/HiveImport.java c272911 
>   src/java/org/apache/sqoop/mapreduce/JobBase.java 6d1e049 
>   src/java/org/apache/sqoop/mapreduce/hcat/DerbyPolicy.java PRE-CREATION 
>   src/java/org/apache/sqoop/mapreduce/hcat/SqoopHCatUtilities.java 784b5f2 
>   src/java/org/apache/sqoop/util/SqoopJsonUtil.java adf186b 
>   src/test/org/apache/sqoop/TestSqoopOptions.java bb7c20d 
>   testdata/hcatalog/conf/hive-site.xml edac7aa 
> 
> 
> Diff: https://reviews.apache.org/r/66300/diff/1/
> 
> 
> Testing
> ---
> 
> Normal and third-party unit tests.
> 
> 
> Thanks,
> 
> daniel voros
> 
>



[jira] [Commented] (SQOOP-3305) Upgrade to Hadoop 3.0.0

2018-03-27 Thread Daniel Voros (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16415267#comment-16415267
 ] 

Daniel Voros commented on SQOOP-3305:
-

Attached review request.

> Upgrade to Hadoop 3.0.0
> ---
>
> Key: SQOOP-3305
> URL: https://issues.apache.org/jira/browse/SQOOP-3305
> Project: Sqoop
>  Issue Type: Task
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
>
> To be able to eventually support the latest versions of Hive, HBase and 
> Accumulo, we should start by upgrading our Hadoop dependencies to 3.0.0. See 
> https://hadoop.apache.org/docs/r3.0.0/index.html
> In this ticket I'll collect the necessary changes to do the upgrade. I'm not 
> setting a fix version yet, since this might require a major release and should 
> be done together with the upgrade of related components.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3171) Import as parquet jobs failed randomly while multiple jobs concurrently importing into targets with same parent

2018-03-27 Thread Xiaomin Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16415253#comment-16415253
 ] 

Xiaomin Zhang commented on SQOOP-3171:
--

Thank you [~sanysand...@gmail.com] for following up on this :)

> Import as parquet jobs failed randomly while multiple jobs concurrently 
> importing into targets with same parent
> ---
>
> Key: SQOOP-3171
> URL: https://issues.apache.org/jira/browse/SQOOP-3171
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Xiaomin Zhang
>Assignee: Sandish Kumar HN
>Priority: Major
>
> Running multiple Parquet import jobs concurrently into the following target 
> directories:
> hdfs://ns/path/dataset1
> hdfs://ns/path/dataset2
> In some cases, one of the Sqoop jobs fails with the error below:
> 17/03/19 08:21:21 INFO mapreduce.Job: Job job_1488289274600_188649 failed 
> with state FAILED due to: Job commit failed: 
> org.kitesdk.data.DatasetIOException: Could not cleanly delete 
> path:hdfs://ns/path/.temp/job_1488289274600_188649
> at 
> org.kitesdk.data.spi.filesystem.FileSystemUtil.cleanlyDelete(FileSystemUtil.java:239)
> at 
> org.kitesdk.data.spi.filesystem.TemporaryFileSystemDatasetRepository.delete(TemporaryFileSystemDatasetRepository.java:61)
> at 
> org.kitesdk.data.mapreduce.DatasetKeyOutputFormat$MergeOutputCommitter.commitJob(DatasetKeyOutputFormat.java:395)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:274)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: File hdfs://ns/path/.temp does not 
> exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:705)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:106)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:763)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:759)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:759)
> at 
> org.kitesdk.data.spi.filesystem.FileSystemUtil.cleanlyDelete(FileSystemUtil.java:226)
> This is due to:
> https://issues.cloudera.org/browse/KITE-1155



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Review Request 66221: SQOOP-3301 Document SQOOP-3216 - metastore related change

2018-03-27 Thread Szabolcs Vasas

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/66221/#review200035
---



Hi Feró,

Thank you for your effort in improving our documentation! Please see my comments 
in-line.


src/docs/man/sqoop-metastore.txt
Lines 29 (patched)


I think there is some confusion here. The sqoop-metastore command is only 
for starting a shared metastore; the meta-connect/meta-user/meta-password 
options are parameters for sqoop-job.
Even though sqoop-job supports connecting to many different types of RDBMSs, 
sqoop-metastore can only start an HSQLDB database.
It would be great if you could clarify this in the docs.
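
To make the distinction concrete, here is a hedged example of how the two
commands fit together; the host names, port and job details are made up for
illustration:

    # On the metastore host: start the shared HSQLDB-backed metastore.
    sqoop metastore

    # On a client: define a saved job against that metastore.
    # --meta-connect (and, where applicable, the meta-user/meta-password
    # options) belong to sqoop-job, not to sqoop-metastore.
    sqoop job \
      --meta-connect jdbc:hsqldb:hsql://metastore.example.com:16000/sqoop \
      --create import-employees \
      -- import --connect jdbc:mysql://db.example.com/corp --table employees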



src/docs/man/sqoop-metastore.txt
Lines 38 (patched)


These examples are great, but they should probably go to the sqoop-job man 
page and/or sqoop-job user guide page.



src/docs/man/sqoop-metastore.txt
Lines 41 (patched)


Typo: metastore



src/docs/man/sqoop-metastore.txt
Lines 45 (patched)


Typo: metastore



src/docs/user/metastore-purpose.txt
Line 20 (original), 20 (patched)


sqoop-metastore supports HSQLDB only.



src/docs/user/saved-jobs.txt
Line 231 (original), 231 (patched)


sqoop-metastore supports HSQLDB only.



src/docs/user/saved-jobs.txt
Line 247 (original), 247 (patched)


I would not delete this piece of information; it could be helpful for some 
users.



src/docs/user/saved-jobs.txt
Lines 250 (patched)


I think this information is really useful, but I suggest putting it in the 
sqoop-job part of the documentation.


- Szabolcs Vasas


On March 22, 2018, 5:46 p.m., Fero Szabo wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/66221/
> ---
> 
> (Updated March 22, 2018, 5:46 p.m.)
> 
> 
> Review request for Sqoop, Boglarka Egyed and Szabolcs Vasas.
> 
> 
> Bugs: SQOOP-3301
> https://issues.apache.org/jira/browse/SQOOP-3301
> 
> 
> Repository: sqoop-trunk
> 
> 
> Description
> ---
> 
> This is the documentation for the metastore-related patch implemented by Zach 
> Berkowitz.
> 
> 
> Diffs
> -
> 
>   src/docs/man/sqoop-metastore.txt c10cc08d 
>   src/docs/user/metastore-purpose.txt 95c2d774 
>   src/docs/user/saved-jobs.txt 6885079f 
> 
> 
> Diff: https://reviews.apache.org/r/66221/diff/1/
> 
> 
> Testing
> ---
> 
> ant docs ran successfully
> 
> 
> Thanks,
> 
> Fero Szabo
> 
>



[jira] [Commented] (SQOOP-3171) Import as parquet jobs failed randomly while multiple jobs concurrently importing into targets with same parent

2018-03-27 Thread Sandish Kumar HN (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16415172#comment-16415172
 ] 

Sandish Kumar HN commented on SQOOP-3171:
-

https://issues.cloudera.org/browse/KITE-1176

> Import as parquet jobs failed randomly while multiple jobs concurrently 
> importing into targets with same parent
> ---
>
> Key: SQOOP-3171
> URL: https://issues.apache.org/jira/browse/SQOOP-3171
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Xiaomin Zhang
>Assignee: Sandish Kumar HN
>Priority: Major
>
> Running multiple Parquet import jobs concurrently into the following target 
> directories:
> hdfs://ns/path/dataset1
> hdfs://ns/path/dataset2
> In some cases, one of the Sqoop jobs fails with the error below:
> 17/03/19 08:21:21 INFO mapreduce.Job: Job job_1488289274600_188649 failed 
> with state FAILED due to: Job commit failed: 
> org.kitesdk.data.DatasetIOException: Could not cleanly delete 
> path:hdfs://ns/path/.temp/job_1488289274600_188649
> at 
> org.kitesdk.data.spi.filesystem.FileSystemUtil.cleanlyDelete(FileSystemUtil.java:239)
> at 
> org.kitesdk.data.spi.filesystem.TemporaryFileSystemDatasetRepository.delete(TemporaryFileSystemDatasetRepository.java:61)
> at 
> org.kitesdk.data.mapreduce.DatasetKeyOutputFormat$MergeOutputCommitter.commitJob(DatasetKeyOutputFormat.java:395)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:274)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: File hdfs://ns/path/.temp does not 
> exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:705)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:106)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:763)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:759)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:759)
> at 
> org.kitesdk.data.spi.filesystem.FileSystemUtil.cleanlyDelete(FileSystemUtil.java:226)
> This is due to:
> https://issues.cloudera.org/browse/KITE-1155



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (SQOOP-3171) Import as parquet jobs failed randomly while multiple jobs concurrently importing into targets with same parent

2018-03-27 Thread Sandish Kumar HN (JIRA)

[ 
https://issues.apache.org/jira/browse/SQOOP-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16415157#comment-16415157
 ] 

Sandish Kumar HN commented on SQOOP-3171:
-

[~ximz] Yes, I'm thinking of getting a new Kite release with the current fixes. 
There are two other Kite issues which are blocking me in Sqoop development; let 
me ask the Kite community.

> Import as parquet jobs failed randomly while multiple jobs concurrently 
> importing into targets with same parent
> ---
>
> Key: SQOOP-3171
> URL: https://issues.apache.org/jira/browse/SQOOP-3171
> Project: Sqoop
>  Issue Type: Bug
>Reporter: Xiaomin Zhang
>Assignee: Sandish Kumar HN
>Priority: Major
>
> Running multiple Parquet import jobs concurrently into the following target 
> directories:
> hdfs://ns/path/dataset1
> hdfs://ns/path/dataset2
> In some cases, one of the Sqoop jobs fails with the error below:
> 17/03/19 08:21:21 INFO mapreduce.Job: Job job_1488289274600_188649 failed 
> with state FAILED due to: Job commit failed: 
> org.kitesdk.data.DatasetIOException: Could not cleanly delete 
> path:hdfs://ns/path/.temp/job_1488289274600_188649
> at 
> org.kitesdk.data.spi.filesystem.FileSystemUtil.cleanlyDelete(FileSystemUtil.java:239)
> at 
> org.kitesdk.data.spi.filesystem.TemporaryFileSystemDatasetRepository.delete(TemporaryFileSystemDatasetRepository.java:61)
> at 
> org.kitesdk.data.mapreduce.DatasetKeyOutputFormat$MergeOutputCommitter.commitJob(DatasetKeyOutputFormat.java:395)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:274)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: File hdfs://ns/path/.temp does not 
> exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:705)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:106)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:763)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:759)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:759)
> at 
> org.kitesdk.data.spi.filesystem.FileSystemUtil.cleanlyDelete(FileSystemUtil.java:226)
> This is due to:
> https://issues.cloudera.org/browse/KITE-1155



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)