[jira] [Created] (HIVE-21613) Queries with join condition having timestamp or timestamp with local time zone literal throw SemanticException

2019-04-12 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-21613:
--

 Summary: Queries with join condition having timestamp or timestamp 
with local time zone literal throw SemanticException
 Key: HIVE-21613
 URL: https://issues.apache.org/jira/browse/HIVE-21613
 Project: Hive
  Issue Type: Bug
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez


Similar to HIVE-21540.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HIVE-21612) Upgrade druid to 0.14.0-incubating

2019-04-12 Thread Nishant Bangarwa (JIRA)
Nishant Bangarwa created HIVE-21612:
---

 Summary: Upgrade druid to 0.14.0-incubating
 Key: HIVE-21612
 URL: https://issues.apache.org/jira/browse/HIVE-21612
 Project: Hive
  Issue Type: Task
Reporter: Nishant Bangarwa
Assignee: Nishant Bangarwa


Druid 0.14.0-incubating has been released. 
This task is to upgrade Hive to use the 0.14.0-incubating version of Druid. 





[jira] [Created] (HIVE-21611) Date.getTime() can be changed to System.currentTimeMillis()

2019-04-12 Thread bd2019us (JIRA)
bd2019us created HIVE-21611:
---

 Summary: Date.getTime() can be changed to 
System.currentTimeMillis()
 Key: HIVE-21611
 URL: https://issues.apache.org/jira/browse/HIVE-21611
 Project: Hive
  Issue Type: Bug
Reporter: bd2019us


Hello,
I found that System.currentTimeMillis() can be used here instead of new 
Date().getTime(). Since new Date() is only a thin wrapper around 
System.currentTimeMillis(), each call adds an unnecessary object allocation, 
which hurts performance when the method is invoked many times.
In my local testing (same environment for both), System.currentTimeMillis() 
achieved a speedup of roughly 5x (435 ms vs 2073 ms) when each method was 
invoked 5,000,000 times.
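The comparison described above can be reproduced with a small stand-alone benchmark. This is a rough sketch: the class name is mine, and the timings will vary by JVM and machine, so treat the 5x figure as indicative only.

```java
import java.util.Date;

public class TimeBench {
    public static void main(String[] args) {
        final int n = 5_000_000;
        long sink = 0;

        // Time n direct calls to System.currentTimeMillis().
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sink += System.currentTimeMillis();
        }
        long directMs = (System.nanoTime() - t0) / 1_000_000;

        // Time n calls that allocate a Date object first.
        t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sink += new Date().getTime();
        }
        long viaDateMs = (System.nanoTime() - t0) / 1_000_000;

        // Printing sink prevents the JIT from eliminating the loops entirely.
        System.out.println("currentTimeMillis: " + directMs + " ms, "
                + "new Date().getTime(): " + viaDateMs + " ms (sink=" + sink + ")");
    }
}
```

Both expressions read the same epoch-millisecond clock, so the substitution is behavior-preserving; new Date() merely adds an allocation per call.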






[jira] [Created] (HIVE-21610) Union operator can flow in the wrong stage causing NPE

2019-04-12 Thread Antal Sinkovits (JIRA)
Antal Sinkovits created HIVE-21610:
--

 Summary: Union operator can flow in the wrong stage causing NPE
 Key: HIVE-21610
 URL: https://issues.apache.org/jira/browse/HIVE-21610
 Project: Hive
  Issue Type: Bug
Reporter: Antal Sinkovits
Assignee: Antal Sinkovits


Because of HIVE-16227, a UnionOperator can partially end up in the wrong stage: 
currTask is changed, and the UnionOperator is reinitialized in GenMRFileSink1 
with the wrong task.





Re: Hive Pulsar Integration

2019-04-12 Thread Slim Bouguerra
Hi, great to hear that you want to work on this!
We have done similar work for Kafka; the code and design doc below should help
guide the Pulsar integration:
https://github.com/apache/hive/tree/master/kafka-handler
https://docs.google.com/document/d/1UcXq-rrrc6cBR4MEDLOwazUhGphniJErhrwgrLDa0_I/edit

let me know if you have any questions!
Happy coding!



Hive Pulsar Integration

2019-04-12 Thread 李鹏辉gmail
Hi guys,

I'm working on an integration between Hive and Pulsar, but I have run into some 
problems and hope to get help here.

First, let me briefly describe the motivation.

Pulsar can be used as an infinite stream that keeps both historical and 
streaming data, so we want to use Pulsar as a storage extension for Hive.
That way, Hive can naturally read data from Pulsar and can also write 
data into Pulsar.
We would benefit from the same data providing both interactive query and 
streaming capabilities.

As an improvement, supporting data partitioning can make queries more 
efficient (e.g. partitioning by date or any other field). 

But:

- How do we get the Hive table's partition definition? 
- When a user inserts data into a Hive table, how do we determine which 
partition the data should be stored in? 
- When a user selects data from a Hive table, how do we determine which 
partition the data is in?

If Hive already exposes a mechanism that supports this, please show me how to use it.
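On the second question: Hive lays out partitioned data in key=value directories derived from the partition-column values of each row. A toy sketch of that mapping (the helper below is hypothetical, not a Hive API; values are assumed to be already string-encoded):

```java
import java.util.List;
import java.util.Map;
import java.util.StringJoiner;

public class PartitionPath {

    // Hypothetical helper (not a Hive API): build the Hive-style partition
    // path segment, e.g. "dt=2019-04-12/country=cn", for one row, given the
    // table's partition columns in declaration order.
    static String partitionFor(List<String> partCols, Map<String, String> row) {
        StringJoiner sj = new StringJoiner("/");
        for (String col : partCols) {
            sj.add(col + "=" + row.get(col));
        }
        return sj.toString();
    }

    public static void main(String[] args) {
        Map<String, String> row = Map.of("dt", "2019-04-12", "country", "cn");
        System.out.println(partitionFor(List.of("dt", "country"), row));
        // dt=2019-04-12/country=cn
    }
}
```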

Best regards

Penghui
Beijing, China





Re: Review Request 70453: HIVE-21584 Java 11 preparation: system class loader is not URLClassLoader

2019-04-12 Thread Adam Szita via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70453/#review214635
---




ql/src/java/org/apache/hadoop/hive/ql/exec/AddToClassPathAction.java
Lines 31 (patched)


do the _same_ vs do the _save_



ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
Lines 2104 (patched)


Is it an expected scenario? Should we log a warning in this case or even 
throw an Exception instead?



ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkRecordHandler.java
Lines 81 (patched)


Can we refactor this method into a public static method in hive-common, or 
even inside ql (within a utility class or something), as it is repeated 3 times?
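For illustration, the extracted utility could look roughly like the following. This is a hedged sketch: the class and method names are mine, not the ones in the patch. The idea on Java 9+ is to wrap the current loader in a new URLClassLoader rather than casting it, since the system class loader is no longer a URLClassLoader there.

```java
import java.net.URL;
import java.net.URLClassLoader;

public final class ClassPathUtil {

    // Hypothetical helper: instead of casting the context class loader to
    // URLClassLoader (which fails on Java 9+), wrap it in a fresh child
    // URLClassLoader carrying the extra classpath entries.
    public static URLClassLoader withExtraUrls(ClassLoader parent, URL... urls) {
        return new URLClassLoader(urls, parent);
    }

    public static void main(String[] args) throws Exception {
        ClassLoader parent = Thread.currentThread().getContextClassLoader();
        // The jar path is illustrative; the file does not need to exist
        // for the loader to be constructed.
        try (URLClassLoader child = withExtraUrls(parent, new URL("file:///tmp/extra.jar"))) {
            System.out.println(child.getURLs().length);
        }
    }
}
```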


- Adam Szita


On April 12, 2019, 9:50 a.m., Zoltan Matyus wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/70453/
> ---
> 
> (Updated April 12, 2019, 9:50 a.m.)
> 
> 
> Review request for hive, Zoltan Haindrich, Laszlo Pinter, and Adam Szita.
> 
> 
> Bugs: HIVE-21584
> https://issues.apache.org/jira/browse/HIVE-21584
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-21584 Java 11 preparation: system class loader is not URLClassLoader
> 
> 
> Diffs
> -
> 
>   beeline/src/java/org/apache/hive/beeline/Commands.java 
> f4dd586e1127e3dad56de856b91ba00a0f777ac2 
>   common/src/java/org/apache/hadoop/hive/common/JavaUtils.java 
> c011cd1626d608d5a3c8c950eddf96b46473d796 
>   
> llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/FunctionLocalizer.java
>  2a6ef3a2461fb4e4331da678ac8521135f304018 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/AddToClassPathAction.java 
> PRE-CREATION 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 
> 36bc08f34e0aa24af68377181ccfb91c8635ddc5 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java 
> 01dd93c5273caed17931d057e6d844ce17a511c5 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapper.java 
> 91868a46670f2f27dcd8f944df7c1cfca2faff32 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecReducer.java 
> e106bc9149832d8e7b1f0ecf5a9c0fdc172c7413 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkRecordHandler.java 
> f7ea212cfb9f0f1c2eb1895223fba147acd18cb8 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/RecordProcessor.java 
> 0ec7a04ce7a219f3a48a38f16d4626e8f2fc87b3 
>   ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java 
> de5cd8b992c1d1fcc52611484cd6aa787c469bee 
>   ql/src/test/org/apache/hadoop/hive/ql/exec/TestAddToClassPathAction.java 
> PRE-CREATION 
>   
> spark-client/src/main/java/org/apache/hive/spark/client/SparkClientUtilities.java
>  b434d8f7b7a3b585183cd842bd9893d00a85da1b 
>   
> standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/utils/MetaStoreUtils.java
>  0642b39f58966b15d07e4163900919f1fad360cb 
> 
> 
> Diff: https://reviews.apache.org/r/70453/diff/1/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Zoltan Matyus
> 
>



[jira] [Created] (HIVE-21609) Missing help documentation on linux

2019-04-12 Thread lei_tang (JIRA)
lei_tang created HIVE-21609:
---

 Summary: Missing help documentation on linux
 Key: HIVE-21609
 URL: https://issues.apache.org/jira/browse/HIVE-21609
 Project: Hive
  Issue Type: Wish
  Components: API
Affects Versions: 1.2.1
Reporter: lei_tang
Assignee: lei_tang








[jira] [Created] (HIVE-21608) SQL parsing error

2019-04-12 Thread Ray Hou (JIRA)
Ray Hou created HIVE-21608:
--

 Summary: SQL parsing error
 Key: HIVE-21608
 URL: https://issues.apache.org/jira/browse/HIVE-21608
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.1.0
 Environment: Hive 1.1.0-cdh5.15.0
Subversion 
file:///data/jenkins/workspace/generic-package-centos64-7-0/topdir/BUILD/hive-1.1.0-cdh5.15.0
 -r Unknown
Compiled by jenkins on Thu May 24 04:17:02 PDT 2018
From source with checksum 493255612021cd90286fcf5a3712d24e
Reporter: Ray Hou


This is my first time posting here, so apologies if I get anything wrong.

When I write a SQL query using a subquery and put the FROM clause first, it 
runs successfully, like this:
{code:java}
FROM ( SELECT FCARPLATE,
 MAX(FCARCLASS) AS maxn, MIN(FCARCLASS) AS minn  
FROM receipt2018h2 WHERE (
(FCARCLASS = 9 AND FCARTYPE = 6)
OR
(FCARCLASS = 8 AND FCARTYPE = 6)
)
GROUP BY FCARPLATE
) e SELECT e.FCARPLATE
WHERE e.maxn != e.minn
;
{code}
 
 
But when I add an output instruction, it breaks down:

 
{code:java}
INSERT OVERWRITE DIRECTORY '/sfsj/output'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
FROM ( SELECT FCARPLATE,
 MAX(FCARCLASS) AS maxn, MIN(FCARCLASS) AS minn  
FROM receipt2018h2 WHERE (
(FCARCLASS = 9 AND FCARTYPE = 6)
OR
(FCARCLASS = 8 AND FCARTYPE = 6)
)
GROUP BY FCARPLATE
) e SELECT e.FCARPLATE
WHERE e.maxn != e.minn
;
{code}
 

 

 
{code:java}
NoViableAltException(118@[])
 at 
org.apache.hadoop.hive.ql.parse.HiveParser.regularBody(HiveParser.java:41622)
 at 
org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpressionBody(HiveParser.java:40848)
 at 
org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:40724)
 at 
org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1530)
 at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1066)
 at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:201)
 at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
 at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:524)
 at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1358)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1475)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1287)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1277)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:226)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:175)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:389)
 at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:634)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
FAILED: ParseException line 5:0 cannot recognize input near 'FROM' '(' 'SELECT' 
in statement{code}
 

 

However, once I adjust the order of the SQL above, it works!
{code:java}
INSERT OVERWRITE DIRECTORY '/sfsj/output'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
SELECT e.FCARPLATE
FROM ( SELECT FCARPLATE,
 MAX(FCARCLASS) AS maxn, MIN(FCARCLASS) AS minn  
    FROM receipt2018h2 WHERE (
    (FCARCLASS = 9 AND FCARTYPE = 6)
    OR
    (FCARCLASS = 8 AND FCARTYPE = 6)
    )
    GROUP BY FCARPLATE
    ) e
WHERE e.maxn != e.minn
;
{code}
{code:java}
Query ID = root_20190412173939_55fe9030-8860-4193-bc85-e015def5b75e
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1099
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1554965284978_0367, Tracking URL = 
http://hbltmp01:8088/proxy/application_1554965284978_0367/
Kill Command = 
/opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/bin/hadoop job  
-kill job_1554965284978_0367
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 
1099
2019-04-12 17:39:26,646 Stage-1 map = 0%,  reduce = 0%
2019-04-12 17:39:37,312 Stage-1 map = 0%,  reduce = 1%, Cumulative CPU 23.02 sec
...
{code}





[jira] [Created] (HIVE-21607) NoSuchMethodError: org.apache.hive.common.util.HiveStringUtils.joinIgnoringEmpty

2019-04-12 Thread anass el (JIRA)
anass el  created HIVE-21607:


 Summary: NoSuchMethodError: 
org.apache.hive.common.util.HiveStringUtils.joinIgnoringEmpty
 Key: HIVE-21607
 URL: https://issues.apache.org/jira/browse/HIVE-21607
 Project: Hive
  Issue Type: Bug
Reporter: anass el 


Using Hive 1.2.1000.2.6.5.79-2 with Spark 1.6.3.2.6.5.79-2:

 

{code:java}
10:39:23.252 [Driver] ERROR org.apache.spark.deploy.yarn.ApplicationMaster - User class threw exception: java.lang.NoSuchMethodError: org.apache.hive.common.util.HiveStringUtils.joinIgnoringEmpty([Ljava/lang/String;C)Ljava/lang/String;
java.lang.NoSuchMethodError: org.apache.hive.common.util.HiveStringUtils.joinIgnoringEmpty([Ljava/lang/String;C)Ljava/lang/String;
 at org.apache.hadoop.hive.serde2.ColumnProjectionUtils.appendReadColumns(ColumnProjectionUtils.java:104) ~[hive-serde-1.2.1000.2.6.5.79-2.jar:1.2.1000.2.6.5.79-2]
 at org.apache.spark.sql.hive.HiveShim$.appendReadColumns(HiveShim.scala:78) ~[spark-hive_2.10-1.6.3.2.6.5.79-2.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.hive.execution.HiveTableScan.addColumnMetadataToConf(HiveTableScan.scala:88) ~[spark-hive_2.10-1.6.3.2.6.5.79-2.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.hive.execution.HiveTableScan.<init>(HiveTableScan.scala:74) ~[spark-hive_2.10-1.6.3.2.6.5.79-2.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$3.apply(HiveStrategies.scala:77) ~[spark-hive_2.10-1.6.3.2.6.5.79-2.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$$anonfun$3.apply(HiveStrategies.scala:77) ~[spark-hive_2.10-1.6.3.2.6.5.79-2.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.execution.SparkPlanner.pruneFilterProject(SparkPlanner.scala:82) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.hive.HiveStrategies$HiveTableScans$.apply(HiveStrategies.scala:73) ~[spark-hive_2.10-1.6.3.2.6.5.79-2.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.execution.SparkStrategies$Aggregation$.apply(SparkStrategies.scala:217) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:363) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:47) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:45) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:52) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:52) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
 at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55) ~[spark-hdp-assembly.jar:1.6.3.2.6.5.79-2]
{code}

[jira] [Created] (HIVE-21606) when creating table with hdfs location,should not check permission of all the children dirs

2019-04-12 Thread philipse (JIRA)
philipse created HIVE-21606:
---

 Summary: when creating table with hdfs location,should not check 
permission of all the children dirs
 Key: HIVE-21606
 URL: https://issues.apache.org/jira/browse/HIVE-21606
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 2.3.4
Reporter: philipse
Assignee: philipse
 Attachments: image-2019-04-12-15-31-30-883.png, 
image-2019-04-12-15-34-55-942.png

When we create a table with a specific location:
{code:java}
create table testdb.test6(id int) location '/data/dpdcadmin/test2/test2/test4';
{code}
we get the following error:
{code:java}
Error: Error while compiling statement: FAILED: HiveAccessControlException 
Permission denied: Principal [name=bidcadmin, type=USER] does not have 
following privileges for operation CREATETABLE [[INSERT, DELETE] on Object 
[type=DFS_URI, name=hdfs://hadoopcluster/data/dpdcadmin/test2/test2/test5]] 
(state=42000,code=4)
{code}
 

The HDFS permissions are as follows:

!image-2019-04-12-15-34-55-942.png!





[jira] [Created] (HIVE-21605) show table extended in db_name like 'tb_name' partition (inc_day = 'xxx') will show table not found

2019-04-12 Thread Chen Lantian (JIRA)
Chen Lantian created HIVE-21605:
---

 Summary: show table extended in db_name like 'tb_name' partition 
(inc_day = 'xxx') will show table not found
 Key: HIVE-21605
 URL: https://issues.apache.org/jira/browse/HIVE-21605
 Project: Hive
  Issue Type: Bug
  Components: Parser
Affects Versions: 2.1.1
Reporter: Chen Lantian


execute "show table extended in db_name like 'tb_name' partition (inc_day = 
'xxx') ",  "in db_name" not works, it will find table in current database not 
in db_name (tb_name not has database name qualifier)





[jira] [Created] (HIVE-21604) preCommit job should not be triggered on non-patch attachments

2019-04-12 Thread Laszlo Bodor (JIRA)
Laszlo Bodor created HIVE-21604:
---

 Summary: preCommit job should not be triggered on non-patch 
attachments
 Key: HIVE-21604
 URL: https://issues.apache.org/jira/browse/HIVE-21604
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure
Reporter: Laszlo Bodor





