[jira] [Commented] (HIVE-22942) Replace PTest with an alternative

2020-02-28 Thread Andrew Sherman (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17047974#comment-17047974
 ] 

Andrew Sherman commented on HIVE-22942:
---

See also HIVE-19571

> Replace PTest with an alternative
> -
>
> Key: HIVE-22942
> URL: https://issues.apache.org/jira/browse/HIVE-22942
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> I never opened a jira about this... but it might actually help collect ideas 
> and start going somewhere sooner rather than later :D
> Right now we maintain the ptest2 project inside Hive to be able to run Hive 
> tests in a distributed fashion... the drawback of this solution is that we are 
> putting much effort into maintaining a distributed test execution framework...
> I think it would be better if we could find an off-the-shelf solution for the 
> task and migrate to that instead of putting more effort into the ptest 
> framework.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-20479) Update content/people.mdtext in cms

2019-04-30 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-20479:
--
Affects Version/s: 3.0.0

> Update content/people.mdtext in cms 
> 
>
> Key: HIVE-20479
> URL: https://issues.apache.org/jira/browse/HIVE-20479
> Project: Hive
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> I added myself to the committers list. 
>  
> {code:java}
> <tr>
>   <td>asherman</td>
>   <td>Andrew Sherman</td>
>   <td><a href="http://cloudera.com/">Cloudera</a></td>
> </tr>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18119) show partitions should say whether a partition is stored via EC

2019-04-30 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18119:
--
Fix Version/s: 3.0.0

> show partitions should say whether a partition is stored via EC
> ---
>
> Key: HIVE-18119
> URL: https://issues.apache.org/jira/browse/HIVE-18119
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: 3.0.0
>
>
> Not sure what the criteria should be here because technically any single file 
> in a directory can be stored via EC. So a partition may contain both EC files 
> and regular files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19581) view do not support unicode characters well

2019-04-30 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-19581:
--
Fix Version/s: 3.2.0

> view do not support unicode characters well
> ---
>
> Key: HIVE-19581
> URL: https://issues.apache.org/jira/browse/HIVE-19581
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: kai
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HIVE-19581.1.patch, HIVE-19581.2.patch, 
> HIVE-19581.3.patch, HIVE-19581.4.patch, HIVE-19581.5.patch, 
> HIVE-19581.6.patch, explain.png, metastore.png
>
>
> create table t_test (name string);
>  insert into table t_test VALUES ('李四');
>  create view t_view_test as select * from t_test where name='李四';
> when running select * from t_view_test, no records are returned



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16003) Blobstores should use fs.listFiles(path, recursive=true) rather than FileUtils.listStatusRecursively

2019-03-29 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16805131#comment-16805131
 ] 

Andrew Sherman commented on HIVE-16003:
---

I set assignee to unassigned as [~janulatha] is not working on this.

> Blobstores should use fs.listFiles(path, recursive=true) rather than 
> FileUtils.listStatusRecursively
> 
>
> Key: HIVE-16003
> URL: https://issues.apache.org/jira/browse/HIVE-16003
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Priority: Major
>
> {{FileUtils.listStatusRecursively}} can be slow on blobstores because 
> {{listStatus}} calls are applied recursively to a given directory. This can 
> be especially bad on tables with multiple levels of partitioning.
> The {{FileSystem}} API provides an optimized API called {{listFiles(path, 
> recursive)}} that can be used to invoke an optimized recursive directory 
> listing.
> The problem is that the {{listFiles(path, recursive)}} API doesn't provide an 
> option to pass in a {{PathFilter}}, while {{FileUtils.listStatusRecursively}} 
> uses a custom HIDDEN_FILES_PATH_FILTER.
> To fix this we could either:
> 1: Modify the FileSystem API to provide a {{listFiles(path, recursive, 
> PathFilter)}} method (probably the cleanest solution)
> 2: Add conditional logic so that blobstores invoke {{listFiles(path, 
> recursive)}} and the rest of the code uses the current implementation of 
> {{FileUtils.listStatusRecursively}}
> 3: Replace the implementation of {{FileUtils.listStatusRecursively}} with 
> {{listFiles(path, recursive)}} and apply the {{PathFilter}} on the results 
> (not sure what optimizations can be made if {{PathFilter}} objects are passed 
> into {{FileSystem}} methods - maybe {{PathFilter}} objects are pushed to the 
> NameNode?)
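
A minimal sketch of what option 3 could look like, assuming a client-side filter equivalent to HIDDEN_FILES_PATH_FILTER (the class and method names below are illustrative, not existing Hive code):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.fs.RemoteIterator;

public class ListFilesSketch {
  // Same spirit as HIDDEN_FILES_PATH_FILTER: skip names starting with '.' or '_'.
  private static final PathFilter HIDDEN_FILES_FILTER = p -> {
    String name = p.getName();
    return !name.startsWith("_") && !name.startsWith(".");
  };

  // Option 3: a single optimized listFiles(path, recursive=true) call,
  // with the PathFilter applied to the returned statuses on the client side.
  public static List<LocatedFileStatus> listFilesRecursively(FileSystem fs, Path base)
      throws IOException {
    List<LocatedFileStatus> results = new ArrayList<>();
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(base, true);
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      if (HIDDEN_FILES_FILTER.accept(status.getPath())) {
        results.add(status);
      }
    }
    return results;
  }
}
{code}

Note that, unlike the recursive listStatus walk, this filters only the leaf file name; excluding files under hidden directories would need an additional check on the parent path components.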



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-16003) Blobstores should use fs.listFiles(path, recursive=true) rather than FileUtils.listStatusRecursively

2019-03-29 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-16003:
-

Assignee: (was: Janaki Lahorani)

> Blobstores should use fs.listFiles(path, recursive=true) rather than 
> FileUtils.listStatusRecursively
> 
>
> Key: HIVE-16003
> URL: https://issues.apache.org/jira/browse/HIVE-16003
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Priority: Major
>
> {{FileUtils.listStatusRecursively}} can be slow on blobstores because 
> {{listStatus}} calls are applied recursively to a given directory. This can 
> be especially bad on tables with multiple levels of partitioning.
> The {{FileSystem}} API provides an optimized API called {{listFiles(path, 
> recursive)}} that can be used to invoke an optimized recursive directory 
> listing.
> The problem is that the {{listFiles(path, recursive)}} API doesn't provide an 
> option to pass in a {{PathFilter}}, while {{FileUtils.listStatusRecursively}} 
> uses a custom HIDDEN_FILES_PATH_FILTER.
> To fix this we could either:
> 1: Modify the FileSystem API to provide a {{listFiles(path, recursive, 
> PathFilter)}} method (probably the cleanest solution)
> 2: Add conditional logic so that blobstores invoke {{listFiles(path, 
> recursive)}} and the rest of the code uses the current implementation of 
> {{FileUtils.listStatusRecursively}}
> 3: Replace the implementation of {{FileUtils.listStatusRecursively}} with 
> {{listFiles(path, recursive)}} and apply the {{PathFilter}} on the results 
> (not sure what optimizations can be made if {{PathFilter}} objects are passed 
> into {{FileSystem}} methods - maybe {{PathFilter}} objects are pushed to the 
> NameNode?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18890) Lower Logging for "Table not found" Error

2019-02-15 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769512#comment-16769512
 ] 

Andrew Sherman commented on HIVE-18890:
---

Pushed to master. Thanks [~mnarayanan2018] for your contribution.

> Lower Logging for "Table not found" Error
> -
>
> Key: HIVE-18890
> URL: https://issues.apache.org/jira/browse/HIVE-18890
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: Manoj Narayanan
>Priority: Minor
> Attachments: HIVE-18890.1.patch
>
>
> https://github.com/apache/hive/blob/7cb31c03052b815665b3231f2e513b9e65d3ff8c/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L1105
> {code:java}
> // Get the table from metastore
> org.apache.hadoop.hive.metastore.api.Table tTable = null;
> try {
>   tTable = getMSC().getTable(dbName, tableName);
> } catch (NoSuchObjectException e) {
>   if (throwException) {
> LOG.error("Table " + tableName + " not found: " + e.getMessage());
> throw new InvalidTableException(tableName);
>   }
>   return null;
> } catch (Exception e) {
>   throw new HiveException("Unable to fetch table " + tableName + ". " + 
> e.getMessage(), e);
> }
> {code}
> We should throw an exception or log it, but not both. Right [~mdrob] ? ;)
> And in this case, we are generating scary ERROR level logging in the 
> HiveServer2 logs needlessly.  This should not be reported as an application 
> error.  It is a simple user error, indicated by catching the 
> _NoSuchObjectException_ Throwable, that can always be ignored by the service. 
>  It is most likely a simple user typo of the table name.  However, the more 
> serious general _Exception_ is not logged.  This is backwards.
> Please remove the _error_ level logging for the user error... or lower it to 
> _debug_ level logging.
> Please include an _error_ level logging to the general Exception case, unless 
> this Exception is being captured up the stack, somewhere else, and is being 
> logged there at ERROR level logging.
> {code}
> -- Sample log messages found in HS2 logs
> 2018-03-02 10:26:40,363  ERROR hive.ql.metadata.Hive: 
> [HiveServer2-Handler-Pool: Thread-4467]: Table default not found: 
> default.default table not found
> 2018-03-02 10:26:40,367  ERROR hive.ql.metadata.Hive: 
> [HiveServer2-Handler-Pool: Thread-4467]: Table default not found: 
> default.default table not found
> {code}
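
A sketch of the change the description asks for, assuming the same surrounding code as quoted above (lower the user-error path, log the unexpected failure):

{code:java}
} catch (NoSuchObjectException e) {
  if (throwException) {
    // A missing table is usually a user typo, not an application error.
    LOG.debug("Table " + tableName + " not found: " + e.getMessage());
    throw new InvalidTableException(tableName);
  }
  return null;
} catch (Exception e) {
  // The genuinely unexpected failure is the case worth surfacing,
  // unless it is already logged at ERROR level further up the stack.
  LOG.error("Unable to fetch table " + tableName, e);
  throw new HiveException("Unable to fetch table " + tableName + ". " + e.getMessage(), e);
}
{code}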



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18890) Lower Logging for "Table not found" Error

2019-02-13 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767522#comment-16767522
 ] 

Andrew Sherman commented on HIVE-18890:
---

+1 LGTM

> Lower Logging for "Table not found" Error
> -
>
> Key: HIVE-18890
> URL: https://issues.apache.org/jira/browse/HIVE-18890
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: Manoj Narayanan
>Priority: Minor
> Attachments: HIVE-18890.1.patch
>
>
> https://github.com/apache/hive/blob/7cb31c03052b815665b3231f2e513b9e65d3ff8c/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L1105
> {code:java}
> // Get the table from metastore
> org.apache.hadoop.hive.metastore.api.Table tTable = null;
> try {
>   tTable = getMSC().getTable(dbName, tableName);
> } catch (NoSuchObjectException e) {
>   if (throwException) {
> LOG.error("Table " + tableName + " not found: " + e.getMessage());
> throw new InvalidTableException(tableName);
>   }
>   return null;
> } catch (Exception e) {
>   throw new HiveException("Unable to fetch table " + tableName + ". " + 
> e.getMessage(), e);
> }
> {code}
> We should throw an exception or log it, but not both. Right [~mdrob] ? ;)
> And in this case, we are generating scary ERROR level logging in the 
> HiveServer2 logs needlessly.  This should not be reported as an application 
> error.  It is a simple user error, indicated by catching the 
> _NoSuchObjectException_ Throwable, that can always be ignored by the service. 
>  It is most likely a simple user typo of the table name.  However, the more 
> serious general _Exception_ is not logged.  This is backwards.
> Please remove the _error_ level logging for the user error... or lower it to 
> _debug_ level logging.
> Please include an _error_ level logging to the general Exception case, unless 
> this Exception is being captured up the stack, somewhere else, and is being 
> logged there at ERROR level logging.
> {code}
> -- Sample log messages found in HS2 logs
> 2018-03-02 10:26:40,363  ERROR hive.ql.metadata.Hive: 
> [HiveServer2-Handler-Pool: Thread-4467]: Table default not found: 
> default.default table not found
> 2018-03-02 10:26:40,367  ERROR hive.ql.metadata.Hive: 
> [HiveServer2-Handler-Pool: Thread-4467]: Table default not found: 
> default.default table not found
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21073) Remove Extra String Object

2018-12-27 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16729759#comment-16729759
 ] 

Andrew Sherman commented on HIVE-21073:
---

I see, thanks [~belugabehr]

> Remove Extra String Object
> --
>
> Key: HIVE-21073
> URL: https://issues.apache.org/jira/browse/HIVE-21073
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.1.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HIVE-21073.1.patch
>
>
> {code}
>   public static String generatePath(Path baseURI, String filename) {
> String path = new String(baseURI + Path.SEPARATOR + filename);
> return path;
>   }
>   public static String generateFileName(Byte tag, String bigBucketFileName) {
> String fileName = new String("MapJoin-" + tag + "-" + bigBucketFileName + 
> suffix);
> return fileName;
>   }
> {code}
> It's a bit odd to be performing string concatenation and then wrapping the 
> results in a new string.  This is creating superfluous String objects. 
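
A sketch of the straightforward fix, returning the concatenation directly instead of copying it into a new String:

{code:java}
public static String generatePath(Path baseURI, String filename) {
  // String concatenation already yields a new String; no extra copy needed.
  return baseURI + Path.SEPARATOR + filename;
}

public static String generateFileName(Byte tag, String bigBucketFileName) {
  return "MapJoin-" + tag + "-" + bigBucketFileName + suffix;
}
{code}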



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21073) Remove Extra String Object

2018-12-27 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16729732#comment-16729732
 ] 

Andrew Sherman commented on HIVE-21073:
---

From Java 8 those string concatenations [are actually done with a StringBuffer 
under the 
covers|http://www.pellegrino.link/2015/08/22/string-concatenation-with-java-8.html].

 

> Remove Extra String Object
> --
>
> Key: HIVE-21073
> URL: https://issues.apache.org/jira/browse/HIVE-21073
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.1.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HIVE-21073.1.patch
>
>
> {code}
>   public static String generatePath(Path baseURI, String filename) {
> String path = new String(baseURI + Path.SEPARATOR + filename);
> return path;
>   }
>   public static String generateFileName(Byte tag, String bigBucketFileName) {
> String fileName = new String("MapJoin-" + tag + "-" + bigBucketFileName + 
> suffix);
> return fileName;
>   }
> {code}
> It's a bit odd to be performing string concatenation and then wrapping the 
> results in a new string.  This is creating superfluous String objects. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18358) from_unixtime returns wrong year for Dec 31 timestamps with format 'YYYY'

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-18358:
-

Assignee: (was: Andrew Sherman)

> from_unixtime returns wrong year for Dec 31 timestamps with format 'YYYY'
> -
>
> Key: HIVE-18358
> URL: https://issues.apache.org/jira/browse/HIVE-18358
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
> Environment: AWS EMR with Hive 2.1.0-amzn-0
>Reporter: Nick Orka
>Priority: Major
>  Labels: timezone
>
> If you use capital Ys as a year format in from_unixtime() it returns next 
> year for Dec 31 only. All other days work as intended.
> Here is reproduction code:
> {code:sql}
> hive> select from_unixtime(1514754599, 'YYYY-MM-dd HH-mm-ss'), 
> from_unixtime(1514754599, 'yyyy-MM-dd HH-mm-ss');
> OK
> 2018-12-31 21-09-59   2017-12-31 21-09-59
> Time taken: 0.025 seconds, Fetched: 1 row(s)
> hive>
> {code}
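
The underlying cause is that 'Y' denotes the week-based year in Java date formats, which can roll over early for the last days of December. A minimal Java illustration (the exact output depends on the JVM's default locale and time zone):

{code:java}
import java.text.SimpleDateFormat;
import java.util.Date;

public class WeekYearDemo {
  public static void main(String[] args) {
    Date d = new Date(1514754599L * 1000); // 2017-12-31 21:09:59 UTC
    // 'YYYY' is the week-based year; in locales like en_US, Dec 31 2017
    // already falls in week 1 of 2018, so this prints 2018-12-31.
    System.out.println(new SimpleDateFormat("YYYY-MM-dd").format(d));
    // 'yyyy' is the calendar year and prints 2017-12-31 as expected.
    System.out.println(new SimpleDateFormat("yyyy-MM-dd").format(d));
  }
}
{code}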



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20030) Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java and AnnotateRunTimeStatsOptimizer.java

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-20030:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java 
> and AnnotateRunTimeStatsOptimizer.java
> 
>
> Key: HIVE-20030
> URL: https://issues.apache.org/jira/browse/HIVE-20030
> Project: Hive
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-20030.1.patch
>
>
> For some reason the Java compiler in IntelliJ is more strict than the Oracle 
> jdk compiler. Maybe this is something that can be configured away, but as it 
> is simple I propose to make the code more type correct. 
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
> Error:(613, 24) java: no suitable method found for 
> findOperatorsUpstream(java.util.List<org.apache.hadoop.hive.ql.exec.Operator<? extends 
> org.apache.hadoop.hive.ql.plan.OperatorDesc>>,java.lang.Class<T>)
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator<?>,java.lang.Class<T>)
>  is not applicable
>   (cannot infer type-variable(s) T
> (argument mismatch; 
> java.util.List<org.apache.hadoop.hive.ql.exec.Operator<? extends 
> org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to 
> org.apache.hadoop.hive.ql.exec.Operator<?>))
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(java.util.Collection<org.apache.hadoop.hive.ql.exec.Operator<?>>,java.lang.Class<T>)
>  is not applicable
>   (cannot infer type-variable(s) T
> (argument mismatch; 
> java.util.List<org.apache.hadoop.hive.ql.exec.Operator<? extends 
> org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to 
> java.util.Collection<org.apache.hadoop.hive.ql.exec.Operator<?>>))
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator<?>,java.lang.Class<T>,java.util.Set<T>)
>  is not applicable
>   (cannot infer type-variable(s) T
> (actual and formal argument lists differ in length))
> {code}
> and
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/AnnotateRunTimeStatsOptimizer.java
> Error:(76, 12) java: no suitable method found for 
> addAll(java.util.List<org.apache.hadoop.hive.ql.exec.Operator<?>>)
> method java.util.Collection.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.List<org.apache.hadoop.hive.ql.exec.Operator<?>> cannot be 
> converted to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.List<org.apache.hadoop.hive.ql.exec.Operator<?>> cannot be 
> converted to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> Error:(80, 14) java: no suitable method found for 
> addAll(java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<?>>)
> method java.util.Collection.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<?>> cannot be converted 
> to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<?>> cannot be converted 
> to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> Error:(85, 14) java: no suitable method found for 
> addAll(java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<?>>)
> method java.util.Collection.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<?>> cannot be converted 
> to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set<org.apache.hadoop.hive.ql.exec.Operator<?>> cannot be converted 
> to java.util.Collection<? extends org.apache.hadoop.hive.ql.exec.Operator<? extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> /Users/asherman/git/asf/hive2/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen/IntervalYearMonthScalarAddTimestampColumn.java
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19841) Upgrade commons-collections to commons-collections4

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-19841:
-

Assignee: (was: Andrew Sherman)

> Upgrade commons-collections to commons-collections4
> ---
>
> Key: HIVE-19841
> URL: https://issues.apache.org/jira/browse/HIVE-19841
> Project: Hive
>  Issue Type: Task
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
>
> Perhaps time to drink the Apache champagne (eat the Apache dog food) and 
> upgrade the commons-collections library from 3.x to 4.x.
> {code}
> <commons-collections.version>3.2.2</commons-collections.version>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-17935) Turn on hive.optimize.sort.dynamic.partition by default

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-17935:
-

Assignee: (was: Andrew Sherman)

> Turn on hive.optimize.sort.dynamic.partition by default
> ---
>
> Key: HIVE-17935
> URL: https://issues.apache.org/jira/browse/HIVE-17935
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Priority: Major
> Attachments: HIVE-17935.1.patch, HIVE-17935.2.patch, 
> HIVE-17935.3.patch, HIVE-17935.4.patch, HIVE-17935.5.patch, 
> HIVE-17935.6.patch, HIVE-17935.7.patch, HIVE-17935.8.patch
>
>
> The config option hive.optimize.sort.dynamic.partition is an optimization for 
> Hive’s dynamic partitioning feature. It was originally implemented in 
> [HIVE-6455|https://issues.apache.org/jira/browse/HIVE-6455]. With this 
> optimization, the dynamic partition columns and bucketing columns (in case of 
> bucketed tables) are sorted before being fed to the reducers. Since the 
> partitioning and bucketing columns are sorted, each reducer can keep only one 
> record writer open at any time thereby reducing the memory pressure on the 
> reducers. There were some early problems with this optimization and it was 
> disabled by default in HiveConf in 
> [HIVE-8151|https://issues.apache.org/jira/browse/HIVE-8151]. Since then 
> setting hive.optimize.sort.dynamic.partition=true has been used to solve 
> problems where dynamic partitioning produces (1) too many small files on 
> HDFS, which is bad for the cluster and can increase overhead for future Hive 
> queries over those partitions, and (2) OOM issues in the map tasks because 
> they try to simultaneously write to 100 different files. 
> It now seems that the feature is probably mature enough that it can be 
> enabled by default.
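
For reference, the per-query workaround mentioned above is simply enabling the flag, for example when building a configuration programmatically (a sketch; users typically just run "set hive.optimize.sort.dynamic.partition=true" in their session):

{code:java}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// The setting this jira proposes to turn on by default.
conf.setBoolean("hive.optimize.sort.dynamic.partition", true);
{code}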



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-18116) Hive + HDFS EC Supportability and Testing Improvements

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved HIVE-18116.
---
Resolution: Fixed

> Hive + HDFS EC Supportability and Testing Improvements
> --
>
> Key: HIVE-18116
> URL: https://issues.apache.org/jira/browse/HIVE-18116
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
>
> Now that we are on Hadoop 3.x, we can start integrating with HDFS Erasure 
> Coding (see 
> https://hadoop.apache.org/docs/r3.0.0-alpha2/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html
>  for details).
> First step is to add some tests using a custom CliDriver - we can do 
> something similar to what we did for encryption.
> Next step will be some supportability improvements - like printing out in the 
> explain plan when a query is reading an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18884) Simplify Logging in Hive Metastore Client

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-18884:
-

Assignee: (was: Andrew Sherman)

> Simplify Logging in Hive Metastore Client
> -
>
> Key: HIVE-18884
> URL: https://issues.apache.org/jira/browse/HIVE-18884
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
>  Labels: noob
>
> https://github.com/apache/hive/blob/4047befe48c8f762c58d8854e058385c1df151c6/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
> The current logging is:
> {code}
> 2018-02-26 07:02:44,883  INFO  hive.metastore: [HiveServer2-Handler-Pool: 
> Thread-65]: Trying to connect to metastore with URI 
> thrift://host.company.com:9083
> 2018-02-26 07:02:44,892  INFO  hive.metastore: [HiveServer2-Handler-Pool: 
> Thread-65]: Connected to metastore.
> 2018-02-26 07:02:44,892  INFO  hive.metastore: [HiveServer2-Handler-Pool: 
> Thread-65]: Opened a connection to metastore, current connections: 2
> {code}
> Please simplify to something like:
> {code}
> 2018-02-26 07:02:44,892  INFO  hive.metastore: [HiveServer2-Handler-Pool: 
> Thread-65]: Opened a connection to the Metastore Server (URI 
> thrift://host.company.com:9083), current connections: 2
> ... or ...
> 2018-02-26 07:02:44,892  ERROR  hive.metastore: [HiveServer2-Handler-Pool: 
> Thread-65]: Failed to connect to the Metastore Server (URI 
> thrift://host.company.com:9083)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-14615) Temp table leaves behind insert command

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-14615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-14615:
-

Assignee: (was: Andrew Sherman)

> Temp table leaves behind insert command
> ---
>
> Key: HIVE-14615
> URL: https://issues.apache.org/jira/browse/HIVE-14615
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Chaoyu Tang
>Priority: Major
> Attachments: HIVE-14615.1.patch, HIVE-14615.2.patch, 
> HIVE-14615.3.patch, HIVE-14615.4.patch
>
>
> {code}
> create table test (key int, value string);
> insert into test values (1, 'val1');
> show tables;
> test
> values__tmp__table__1
> {code}
> the temp table values__tmp__table__1 resulted from insert into ... values
> and exists until the session is closed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17760) Create a unit test which validates HIVE-9423 does not regress

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-17760:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> Create a unit test which validates HIVE-9423 does not regress 
> --
>
> Key: HIVE-17760
> URL: https://issues.apache.org/jira/browse/HIVE-17760
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-17760.1.patch, HIVE-17760.2.patch, 
> HIVE-17760.3.patch, HIVE-17760.4.patch
>
>
> During [HIVE-9423] we verified that when the Thrift server pool is exhausted, 
> the Beeline connection times out and provides a meaningful error message.
> Create a unit test which verifies this, and helps to keep this feature working.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-17572) Warnings from SparkCrossProductCheck for MapJoins are confusing

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-17572:
-

Assignee: (was: Andrew Sherman)

> Warnings from SparkCrossProductCheck for MapJoins are confusing
> ---
>
> Key: HIVE-17572
> URL: https://issues.apache.org/jira/browse/HIVE-17572
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Priority: Major
>
> When the {{SparkCrossProductCheck}} detects a cross-product in a map-join, it 
> prints out a confusing warning - e.g. {{Map Join MAPJOIN\[9\]\[bigTable=?\] 
> in task 'Stage-1:MAPRED' is a cross product}}
> I see a few ways this can be improved:
> * {{bigTable}} should actually specify the big table
> * I'm not sure why the stage id is printed instead of the work id, when a 
> cross product is detected in a shuffle join the work id is shown (e.g. 
> {{Warning: Shuffle Join JOIN\[13\]\[tables = \[$hdt$_1, $hdt$_2, $hdt$_0\]\] 
> in Work 'Reducer 3' is a cross product}})
> * It shouldn't say {{MAPRED}}, as that can be confusing to users
> * The {{MAPJOIN}} id doesn't need to be printed, it doesn't have any meaning 
> to the user and the value just keeps on going up and up the longer a session 
> lives
> On a somewhat related note, could we just stick this warning in the explain 
> plan? Otherwise users may not even notice it



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-17727) HoS Queries Print "Starting task [Stage-x:MAPRED] in serial mode"

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-17727:
-

Assignee: (was: Andrew Sherman)

> HoS Queries Print "Starting task [Stage-x:MAPRED] in serial mode"
> -
>
> Key: HIVE-17727
> URL: https://issues.apache.org/jira/browse/HIVE-17727
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Priority: Major
>
> Whenever a HoS query is run, something like "Starting task [Stage-3:MAPRED] in 
> serial mode" is printed out for each {{SparkTask}}, which is confusing 
> because this isn't a MAPRED job. We should change {{StageType}} to include a 
> {{SPARK}} type.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-17677) Investigate using hive statistics information to optimize HoS parallel order by

2018-12-11 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-17677:
-

Assignee: (was: Andrew Sherman)

> Investigate using hive statistics information to optimize HoS parallel order 
> by
> ---
>
> Key: HIVE-17677
> URL: https://issues.apache.org/jira/browse/HIVE-17677
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Sherman
>Priority: Major
>
> I think Spark's native parallel order by works in a similar way to what we do 
> for Hive-on-MR.  That is, it scans the RDD once and samples the data to 
> determine what ranges the data should be partitioned into, and then scans the 
> RDD again to do the actual order by (with multiple reducers). 
> One optimization suggested by [~stakiar] is that if we have column stats 
> about the col we are ordering by, then the first scan on the RDD is not 
> necessary. If we have histogram data about the RDD, we already know what the 
> ranges of the order by should be. This should work when running parallel 
> order by on simple tables, will be harder when we run it on derived datasets 
> (although not impossible). 
> To do this we would have to understand more about the internals of 
> JavaPairRDD. 
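
A rough sketch of the idea, assuming equi-depth histogram boundaries are already available from column statistics (all names here are illustrative, not Hive or Spark APIs):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class RangeBoundarySketch {
  /**
   * Pick (numReducers - 1) split points from pre-computed equi-depth
   * histogram boundaries, so the extra sampling scan of the RDD that
   * Spark's range partitioner normally performs can be skipped.
   */
  static List<Double> splitPoints(double[] histogramBoundaries, int numReducers) {
    List<Double> splits = new ArrayList<>();
    int buckets = histogramBoundaries.length;
    for (int r = 1; r < numReducers; r++) {
      // Map the r-th reducer boundary onto the histogram quantiles.
      int idx = Math.min((r * buckets) / numReducers, buckets - 1);
      splits.add(histogramBoundaries[idx]);
    }
    return splits;
  }
}
{code}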



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20740) Remove global lock in ObjectStore.setConf method

2018-11-27 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700701#comment-16700701
 ] 

Andrew Sherman commented on HIVE-20740:
---

+1 LGTM

> Remove global lock in ObjectStore.setConf method
> 
>
> Key: HIVE-20740
> URL: https://issues.apache.org/jira/browse/HIVE-20740
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-20740.01.patch, HIVE-20740.02.patch, 
> HIVE-20740.04.patch, HIVE-20740.05.patch, HIVE-20740.06.patch, 
> HIVE-20740.08.patch, HIVE-20740.09.patch, HIVE-20740.10.patch, 
> HIVE-20740.11.patch, HIVE-20740.12.patch, HIVE-20740.13.patch, 
> HIVE-20740.14.patch
>
>
> The ObjectStore#setConf method has a global lock which can block other 
> clients in concurrent workloads.
> {code}
> @Override
>   @SuppressWarnings("nls")
>   public void setConf(Configuration conf) {
> // Although an instance of ObjectStore is accessed by one thread, there 
> may
> // be many threads with ObjectStore instances. So the static variables
> // pmf and prop need to be protected with locks.
> pmfPropLock.lock();
> try {
>   isInitialized = false;
>   this.conf = conf;
>   this.areTxnStatsSupported = MetastoreConf.getBoolVar(conf, 
> ConfVars.HIVE_TXN_STATS_ENABLED);
>   configureSSL(conf);
>   Properties propsFromConf = getDataSourceProps(conf);
>   boolean propsChanged = !propsFromConf.equals(prop);
>   if (propsChanged) {
> if (pmf != null){
>   clearOutPmfClassLoaderCache(pmf);
>   if (!forTwoMetastoreTesting) {
> // close the underlying connection pool to avoid leaks
> pmf.close();
>   }
> }
> pmf = null;
> prop = null;
>   }
>   assert(!isActiveTransaction());
>   shutdown();
>   // Always want to re-create pm as we don't know if it were created by 
> the
>   // most recent instance of the pmf
>   pm = null;
>   directSql = null;
>   expressionProxy = null;
>   openTrasactionCalls = 0;
>   currentTransaction = null;
>   transactionStatus = TXN_STATUS.NO_STATE;
>   initialize(propsFromConf);
>   String partitionValidationRegex =
>   MetastoreConf.getVar(this.conf, 
> ConfVars.PARTITION_NAME_WHITELIST_PATTERN);
>   if (partitionValidationRegex != null && 
> !partitionValidationRegex.isEmpty()) {
> partitionValidationPattern = 
> Pattern.compile(partitionValidationRegex);
>   } else {
> partitionValidationPattern = null;
>   }
>   // Note, if metrics have not been initialized this will return null, 
> which means we aren't
>   // using metrics.  Thus we should always check whether this is non-null 
> before using.
>   MetricRegistry registry = Metrics.getRegistry();
>   if (registry != null) {
> directSqlErrors = 
> Metrics.getOrCreateCounter(MetricsConstants.DIRECTSQL_ERRORS);
>   }
>   this.batchSize = MetastoreConf.getIntVar(conf, 
> ConfVars.RAWSTORE_PARTITION_BATCH_SIZE);
>   if (!isInitialized) {
> throw new RuntimeException(
> "Unable to create persistence manager. Check dss.log for details");
>   } else {
> LOG.debug("Initialized ObjectStore");
>   }
> } finally {
>   pmfPropLock.unlock();
> }
>   }
> {code}
> The {{pmfPropLock}} is a static object and it disallows any other new 
> connection to HMS which is trying to instantiate ObjectStore. We should 
> either remove the lock or reduce the scope of the lock so that it is held for 
> a very small amount of time.
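
One possible shape of the fix, sketched only (not the committed patch): hold a lock just around the shared static PersistenceManagerFactory state and let the per-instance initialization run without blocking other clients.

{code:java}
public void setConf(Configuration conf) {
  this.conf = conf;
  Properties propsFromConf = getDataSourceProps(conf);

  // Only the static pmf/prop state needs mutual exclusion; keep the
  // critical section as small as possible.
  synchronized (ObjectStore.class) {
    if (!propsFromConf.equals(prop)) {
      if (pmf != null) {
        clearOutPmfClassLoaderCache(pmf);
        pmf.close();
      }
      pmf = null;
      prop = propsFromConf;
    }
  }

  // Per-instance fields (pm, directSql, expressionProxy, ...) can be
  // initialized outside the lock.
  initialize(propsFromConf);
}
{code}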



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20916) Fix typo in JSONCreateDatabaseMessage and add test for alter database

2018-11-16 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16689836#comment-16689836
 ] 

Andrew Sherman commented on HIVE-20916:
---

+1 LGTM

> Fix typo in JSONCreateDatabaseMessage and add test for alter database
> -
>
> Key: HIVE-20916
> URL: https://issues.apache.org/jira/browse/HIVE-20916
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Affects Versions: 4.0.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-20916.01.patch, HIVE-20916.02.patch, 
> HIVE-20916.03.patch
>
>
> {code}
> public JSONCreateDatabaseMessage(String server, String servicePrincipal, 
> Database db,
>   Long timestamp) {
> this.server = server;
> this.servicePrincipal = servicePrincipal;
> this.db = db.getName();
> this.timestamp = timestamp;
> try {
>   this.dbJson = MessageBuilder.createDatabaseObjJson(db);
> } catch (TException ex) {
>   throw new IllegalArgumentException("Could not serialize Function 
> object", ex);
> }
> checkValid();
>   }
> {code}
> The exception message should say Database instead of Function. Also, the 
> {{TestDbNotificationListener#createDatabase}} test should be modified to make sure 
> that the deserialized database object from the dbJson field matches the 
> original database object.
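
The message fix itself is a one-liner in the constructor quoted above:

{code:java}
try {
  this.dbJson = MessageBuilder.createDatabaseObjJson(db);
} catch (TException ex) {
  // Name the object that actually failed to serialize: a Database, not a Function.
  throw new IllegalArgumentException("Could not serialize Database object", ex);
}
{code}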



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20659) Update commons-compress to 1.18 due to security issues

2018-10-17 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16653946#comment-16653946
 ] 

Andrew Sherman commented on HIVE-20659:
---

Pushed to master, thanks [~bharos92]

> Update commons-compress to 1.18 due to security issues
> --
>
> Key: HIVE-20659
> URL: https://issues.apache.org/jira/browse/HIVE-20659
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1, 3.0.0, 2.3.2, 3.1.0
>Reporter: Jörn Franke
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Critical
> Attachments: HIVE-20659.1.patch
>
>
> Currently most Hive versions depend on commons-compress 1.9 or 1.4. Those 
> versions have several security issues: 
> [https://commons.apache.org/proper/commons-compress/security-reports.html]
> I propose to upgrade all commons-compress dependencies in all Hive 
> (sub-)projects to at least 1.18. This will also make it easier for future 
> extensions to Hive (serde, udfs, etc.) that have dependencies to 
> commons-compress (e.g. [https://github.com/zuinnote/hadoopoffice/wiki)] to 
> integrate into Hive without upgrading the commons-compress library manually 
> in the Hive lib folder.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20307) Add support for filterspec to the getPartitions with projection API

2018-10-16 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652438#comment-16652438
 ] 

Andrew Sherman commented on HIVE-20307:
---

+1 LGTM

> Add support for filterspec to the getPartitions with projection API
> ---
>
> Key: HIVE-20307
> URL: https://issues.apache.org/jira/browse/HIVE-20307
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-20307.01.patch, HIVE-20307.02.patch, 
> HIVE-20307.03.patch, HIVE-20307.04.patch, HIVE-20307.05.patch
>
>
> Implement the BY_EXPR, BY_NAMES and BY_VALUES filter modes for the projection 
> API to filter the partitions returned as discussed in the design doc



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20659) Update commons-compress to 1.18 due to security issues

2018-10-16 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652368#comment-16652368
 ] 

Andrew Sherman commented on HIVE-20659:
---

+1 LGTM

> Update commons-compress to 1.18 due to security issues
> --
>
> Key: HIVE-20659
> URL: https://issues.apache.org/jira/browse/HIVE-20659
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1, 3.0.0, 2.3.2, 3.1.0
>Reporter: Jörn Franke
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Critical
> Attachments: HIVE-20659.1.patch
>
>
> Currently most Hive versions depend on commons-compress 1.9 or 1.4. Those 
> versions have several security issues: 
> [https://commons.apache.org/proper/commons-compress/security-reports.html]
> I propose to upgrade all commons-compress dependencies in all Hive 
> (sub-)projects to at least 1.18. This will also make it easier for future 
> extensions to Hive (serde, udfs, etc.) that have dependencies to 
> commons-compress (e.g. [https://github.com/zuinnote/hadoopoffice/wiki)] to 
> integrate into Hive without upgrading the commons-compress library manually 
> in the Hive lib folder.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20699) Query based compactor for full CRUD Acid tables

2018-10-05 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16640317#comment-16640317
 ] 

Andrew Sherman commented on HIVE-20699:
---

Hi [~ekoifman] can you add a description please?

> Query based compactor for full CRUD Acid tables
> ---
>
> Key: HIVE-20699
> URL: https://issues.apache.org/jira/browse/HIVE-20699
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20610) TestDbNotificationListener should not use /tmp directory

2018-10-05 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16640243#comment-16640243
 ] 

Andrew Sherman commented on HIVE-20610:
---

Pushed to master, thanks [~bharos92]

> TestDbNotificationListener should not use /tmp directory
> 
>
> Key: HIVE-20610
> URL: https://issues.apache.org/jira/browse/HIVE-20610
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20610.1.patch, HIVE-20610.2.patch, 
> HIVE-20610.3.patch, HIVE-20610.4.patch
>
>
> Using the /tmp directory creates exceptions for tests like dropTable:
> {code:java}
> 2018-09-19T06:42:04,818  INFO [main] metastore.HiveMetaStore: 0: drop_table : 
> tbl=hive.default.droptbl
> 2018-09-19T06:42:04,819  INFO [main] HiveMetaStore.audit: ugi=hiveptest   
> ip=unknown-ip-addr  cmd=drop_table : tbl=hive.default.droptbl   
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.ICE-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.XIM-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.X11-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/hsperfdata_root]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.font-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.Test-unix]: it still exists.
> 2018-09-19T06:42:05,072 ERROR [main] utils.FileUtils: Failed to delete 
> file:/tmp
> 2018-09-19T06:42:05,072 ERROR [main] utils.MetaStoreUtils: Got exception: 
> org.apache.hadoop.hive.metastore.api.MetaException Unable to delete 
> directory: file:/tmp
> org.apache.hadoop.hive.metastore.api.MetaException: Unable to delete 
> directory: file:/tmp
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl.deleteDir(HiveMetaStoreFsImpl.java:45)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:365) 
> [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:353) 
> [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.deleteTableData(HiveMetaStore.java:2562)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:2523)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:2685)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_102]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_102]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_102]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_102]
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at com.sun.proxy.$Proxy33.drop_table_with_environment_context(Unknown 
> Source) [?:?]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:3204)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1492)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1432)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropTable(TestDbNotificationListener.java:522)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_102]{code}
>  
>  



--

[jira] [Commented] (HIVE-20545) Ability to exclude potentially large parameters in HMS Notifications

2018-10-04 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16638616#comment-16638616
 ] 

Andrew Sherman commented on HIVE-20545:
---

Pushed to master, thanks [~bharos92]

> Ability to exclude potentially large parameters in HMS Notifications
> 
>
> Key: HIVE-20545
> URL: https://issues.apache.org/jira/browse/HIVE-20545
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 3.1.0, 4.0.0
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20545.1.patch, HIVE-20545.2.patch, 
> HIVE-20545.3.branch-3.patch, HIVE-20545.3.patch, HIVE-20545.4.patch, 
> HIVE-20545.6.patch, HIVE-20545.7.patch
>
>
> Clients can add large-sized parameters in Table/Partition objects. So we need 
> to enable adding regex patterns through HiveConf to match parameters to be 
> filtered from table and partition objects before serialization in HMS 
> notifications.
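
A sketch of how such filtering could be applied before serialization; the regex patterns would come from the configuration key added by this change (not named here), and the helper below is illustrative only:

{code:java}
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

public class ParameterFilterSketch {
  // Drop parameters whose keys match any configured exclude pattern
  // before the Table/Partition object goes into a notification message.
  static void filterParameters(Map<String, String> parameters, List<Pattern> excludePatterns) {
    parameters.keySet().removeIf(key ->
        excludePatterns.stream().anyMatch(p -> p.matcher(key).matches()));
  }
}
{code}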



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20688) Update Committer List

2018-10-03 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16637477#comment-16637477
 ] 

Andrew Sherman commented on HIVE-20688:
---

+1 LGTM

> Update Committer List
> -
>
> Key: HIVE-20688
> URL: https://issues.apache.org/jira/browse/HIVE-20688
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Minor
> Attachments: HIVE-20688.1.patch
>
>
> Please update committer list:
> Name: Janaki Lahorani
> Apache ID: janaki
> Organization: Cloudera



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20545) Ability to exclude potentially large parameters in HMS Notifications

2018-10-02 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636399#comment-16636399
 ] 

Andrew Sherman commented on HIVE-20545:
---

+1 LGTM

> Ability to exclude potentially large parameters in HMS Notifications
> 
>
> Key: HIVE-20545
> URL: https://issues.apache.org/jira/browse/HIVE-20545
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 3.1.0, 4.0.0
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20545.1.patch, HIVE-20545.2.patch, 
> HIVE-20545.3.branch-3.patch, HIVE-20545.3.patch, HIVE-20545.4.patch, 
> HIVE-20545.6.patch, HIVE-20545.7.patch
>
>
> Clients can add large-sized parameters in Table/Partition objects. So we need 
> to enable adding regex patterns through HiveConf to match parameters to be 
> filtered from table and partition objects before serialization in HMS 
> notifications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20306) Implement projection spec for fetching only requested fields from partitions

2018-10-02 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636304#comment-16636304
 ] 

Andrew Sherman commented on HIVE-20306:
---

+1 LGTM with small spelling fixes from RB

> Implement projection spec for fetching only requested fields from partitions
> 
>
> Key: HIVE-20306
> URL: https://issues.apache.org/jira/browse/HIVE-20306
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-20306.02.patch, HIVE-20306.03.patch, 
> HIVE-20306.04.patch, HIVE-20306.05.patch, HIVE-20306.06.patch, 
> HIVE-20306.07.patch, HIVE-20306.08.patch, HIVE-20306.09.patch, 
> HIVE-20306.10.patch, HIVE-20306.11.patch, HIVE-20306.12.patch, 
> HIVE-20306.13.patch, HIVE-20306.14.patch, HIVE-20306.15.patch, 
> HIVE-20306.16.patch, HIVE-20306.17.patch, HIVE-20306.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20610) TestDbNotificationListener should not use /tmp directory

2018-10-02 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16636183#comment-16636183
 ] 

Andrew Sherman commented on HIVE-20610:
---

+1 LGTM pending test results

> TestDbNotificationListener should not use /tmp directory
> 
>
> Key: HIVE-20610
> URL: https://issues.apache.org/jira/browse/HIVE-20610
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20610.1.patch, HIVE-20610.2.patch
>
>
> Using the /tmp directory creates exceptions for tests like dropTable:
> {code:java}
> 2018-09-19T06:42:04,818  INFO [main] metastore.HiveMetaStore: 0: drop_table : 
> tbl=hive.default.droptbl
> 2018-09-19T06:42:04,819  INFO [main] HiveMetaStore.audit: ugi=hiveptest   
> ip=unknown-ip-addr  cmd=drop_table : tbl=hive.default.droptbl   
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.ICE-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.XIM-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.X11-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/hsperfdata_root]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.font-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.Test-unix]: it still exists.
> 2018-09-19T06:42:05,072 ERROR [main] utils.FileUtils: Failed to delete 
> file:/tmp
> 2018-09-19T06:42:05,072 ERROR [main] utils.MetaStoreUtils: Got exception: 
> org.apache.hadoop.hive.metastore.api.MetaException Unable to delete 
> directory: file:/tmp
> org.apache.hadoop.hive.metastore.api.MetaException: Unable to delete 
> directory: file:/tmp
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl.deleteDir(HiveMetaStoreFsImpl.java:45)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:365) 
> [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:353) 
> [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.deleteTableData(HiveMetaStore.java:2562)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:2523)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:2685)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_102]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_102]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_102]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_102]
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at com.sun.proxy.$Proxy33.drop_table_with_environment_context(Unknown 
> Source) [?:?]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:3204)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1492)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1432)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropTable(TestDbNotificationListener.java:522)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_102]{code}
>  
>  



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HIVE-20365) Fix warnings when regenerating thrift code

2018-09-28 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16632655#comment-16632655
 ] 

Andrew Sherman commented on HIVE-20365:
---

One day we will have to really deal with wire format compatibility. As St 
Augustine said "Lord, make me pure – but not yet!".

+1 LGTM

> Fix warnings when regenerating thrift code
> --
>
> Key: HIVE-20365
> URL: https://issues.apache.org/jira/browse/HIVE-20365
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-20365.01.patch
>
>
> When you build thrift code you can see thrift warning like below.
> [exec] 
> [WARNING:hive/standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift:2167]
>  No field key specified for rqst, resulting protocol may have conflicts or 
> not be backwards compatible!
>  [exec]
>  [exec] 
> [WARNING:hive/standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift:2235]
>  No field key specified for o2, resulting protocol may have conflicts or not 
> be backwards compatible!
>  [exec]
>  [exec] 
> [WARNING:hive/standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift:2167]
>  No field key specified for rqst, resulting protocol may have conflicts or 
> not be backwards compatible!
>  [exec]
>  [exec] 
> [WARNING:hive/standalone-metastore/metastore-common/src/main/thrift/hive_metastore.thrift:2235]
>  No field key specified for o2, resulting protocol may have conflicts or not 
> be backwards compatible!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20601) EnvironmentContext null in ALTER_PARTITION event in DbNotificationListener

2018-09-25 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16627653#comment-16627653
 ] 

Andrew Sherman commented on HIVE-20601:
---

Pushed to master, thanks [~bharos92]

> EnvironmentContext null in ALTER_PARTITION event in DbNotificationListener
> --
>
> Key: HIVE-20601
> URL: https://issues.apache.org/jira/browse/HIVE-20601
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20601.1.branch-3.patch, HIVE-20601.1.patch, 
> HIVE-20601.2.patch
>
>
> Cause : EnvironmentContext not passed here:
> [https://github.com/apache/hive/blob/36c33ca066c99dfdb21223a711c0c3f33c85b943/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java#L726]
>  
> It will be useful to have the environmentContext passed to 
> DbNotificationListener in this case, to know if the alter happened due to a 
> stat change.
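
For illustration, here is a minimal sketch of how a listener could use that 
EnvironmentContext once it is passed through. This is not the DbNotificationListener 
code; the "STATS_GENERATED" key name and the helper class are assumptions for the 
example (the real constant lives in Hive's StatsSetupConst).

{code:java}
import java.util.Map;

import org.apache.hadoop.hive.metastore.api.EnvironmentContext;

public class AlterPartitionContextCheck {
  // Hypothetical helper: returns true when the (possibly null) EnvironmentContext
  // marks the alter as a statistics-only change, which is exactly the signal the
  // listener cannot see today because the context is not forwarded.
  public static boolean isStatsOnlyAlter(EnvironmentContext ctx) {
    if (ctx == null || ctx.getProperties() == null) {
      return false; // nothing to inspect, treat as a regular alter
    }
    Map<String, String> props = ctx.getProperties();
    return props.containsKey("STATS_GENERATED"); // assumed key name for this sketch
  }
}
{code}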



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20601) EnvironmentContext null in ALTER_PARTITION event in DbNotificationListener

2018-09-24 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626622#comment-16626622
 ] 

Andrew Sherman commented on HIVE-20601:
---

+1 LGTM pending some clean test runs

> EnvironmentContext null in ALTER_PARTITION event in DbNotificationListener
> --
>
> Key: HIVE-20601
> URL: https://issues.apache.org/jira/browse/HIVE-20601
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20601.1.branch-3.patch, HIVE-20601.1.patch
>
>
> Cause : EnvironmentContext not passed here:
> [https://github.com/apache/hive/blob/36c33ca066c99dfdb21223a711c0c3f33c85b943/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java#L726]
>  
> It will be useful to have the environmentContext passed to 
> DbNotificationListener in this case, to know if the alter happened due to a 
> stat change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20570) Union ALL with hive.optimize.union.remove=true has incorrect plan

2018-09-18 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16619920#comment-16619920
 ] 

Andrew Sherman commented on HIVE-20570:
---

+1 LGTM pending test results

> Union ALL with hive.optimize.union.remove=true has incorrect plan
> -
>
> Key: HIVE-20570
> URL: https://issues.apache.org/jira/browse/HIVE-20570
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20570.1.patch, HIVE-20570.2.patch, 
> HIVE-20570.3.patch
>
>
> When hive.optimize.union.remove=true and a select query is run with group by, 
> the final fetch is waiting only for one of the branches and not both.
> Test Case:
> {code}
> create table if not exists test_table(column1 string, column2 int);
> insert into test_table values('a',1),('b',2);
> set hive.optimize.union.remove=true;
> set mapred.input.dir.recursive=true;
> explain
> select column1 from test_table group by column1
> union all
> select column1 from test_table group by column1;
> {code}
> In the below the two stages correspond to the two parts of union all.  But 
> the final fetch operator (Stage 0) only depends on one of the stages, but it 
> should depend on both.
> Plan:
> {code}
> STAGE DEPENDENCIES:
>   Stage-1 is a root stage
>   Stage-2 is a root stage
>   *Stage-0 depends on stages: Stage-1*
> STAGE PLANS:
>   Stage: Stage-1
> Map Reduce
>   Map Operator Tree:
>   TableScan
> alias: test_table
> Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE Column 
> stats: NONE
> Select Operator
>   expressions: column1 (type: string)
>   outputColumnNames: column1
>   Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE 
> Column stats: NONE
>   Group By Operator
> keys: column1 (type: string)
> mode: hash
> outputColumnNames: _col0
> Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE 
> Column stats: NONE
> Reduce Output Operator
>   key expressions: _col0 (type: string)
>   sort order: +
>   Map-reduce partition columns: _col0 (type: string)
>   Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE 
> Column stats: NONE
>   Execution mode: vectorized
>   Reduce Operator Tree:
> Group By Operator
>   keys: KEY._col0 (type: string)
>   mode: mergepartial
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 3 Basic stats: COMPLETE Column 
> stats: NONE
>   File Output Operator
> compressed: false
> Statistics: Num rows: 1 Data size: 3 Basic stats: COMPLETE Column 
> stats: NONE
> table:
> input format: org.apache.hadoop.mapred.SequenceFileInputFormat
> output format: 
> org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
> serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>   Stage: Stage-2
> Map Reduce
>   Map Operator Tree:
>   TableScan
> alias: test_table
> Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE Column 
> stats: NONE
> Select Operator
>   expressions: column1 (type: string)
>   outputColumnNames: column1
>   Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE 
> Column stats: NONE
>   Group By Operator
> keys: column1 (type: string)
> mode: hash
> outputColumnNames: _col0
> Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE 
> Column stats: NONE
> Reduce Output Operator
>   key expressions: _col0 (type: string)
>   sort order: +
>   Map-reduce partition columns: _col0 (type: string)
>   Statistics: Num rows: 2 Data size: 6 Basic stats: COMPLETE 
> Column stats: NONE
>   Execution mode: vectorized
>   Reduce Operator Tree:
> Group By Operator
>   keys: KEY._col0 (type: string)
>   mode: mergepartial
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 3 Basic stats: COMPLETE Column 
> stats: NONE
>   File Output Operator
> compressed: false
> Statistics: Num rows: 1 Data size: 3 Basic stats: COMPLETE Column 
> stats: NONE
> table:
> input format: org.apache.hadoop.mapred.SequenceFileInputFormat
> output format: 
> org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
> serde: 

[jira] [Commented] (HIVE-20527) Intern table descriptors from spark task

2018-09-14 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615170#comment-16615170
 ] 

Andrew Sherman commented on HIVE-20527:
---

Pushed to master, thanks [~janulatha]

> Intern table descriptors from spark task
> 
>
> Key: HIVE-20527
> URL: https://issues.apache.org/jira/browse/HIVE-20527
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20527.1.patch, HIVE-20527.1.patch, 
> HIVE-20527.1.patch
>
>
> Table descriptors from MR tasks and Tez tasks are interned.  This fix is to 
> intern table desc from spark tasks as well.
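
As a rough illustration of what interning buys here, a minimal sketch that 
canonicalizes the string contents of a descriptor's Properties with plain 
String.intern(); the class and method names are made up for the example and this is 
not Hive's actual interning utility.

{code:java}
import java.util.Properties;

public class DescriptorInterning {
  // Illustrative only: many tasks often carry descriptors with identical property
  // strings; interning keys and values makes them share one copy on the heap.
  public static Properties internStrings(Properties props) {
    Properties interned = new Properties();
    for (String key : props.stringPropertyNames()) {
      interned.setProperty(key.intern(), props.getProperty(key).intern());
    }
    return interned;
  }
}
{code}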



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20526) Add test case for HIVE-20489

2018-09-14 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615167#comment-16615167
 ] 

Andrew Sherman commented on HIVE-20526:
---

Pushed to master, thanks [~janulatha]

> Add test case for HIVE-20489
> 
>
> Key: HIVE-20526
> URL: https://issues.apache.org/jira/browse/HIVE-20526
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20526.1.patch, HIVE-20526.1.patch
>
>
> Add a test case for the issue discussed in HIVE-20489.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20527) Intern table descriptors from spark task

2018-09-12 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612572#comment-16612572
 ] 

Andrew Sherman commented on HIVE-20527:
---

+1 LGTM pending clean test run

> Intern table descriptors from spark task
> 
>
> Key: HIVE-20527
> URL: https://issues.apache.org/jira/browse/HIVE-20527
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20527.1.patch
>
>
> Table descriptors from MR tasks and Tez tasks are interned.  This fix is to 
> intern table desc from spark tasks as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20526) Add test case for HIVE-20489

2018-09-12 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612571#comment-16612571
 ] 

Andrew Sherman commented on HIVE-20526:
---

+1 LGTM

> Add test case for HIVE-20489
> 
>
> Key: HIVE-20526
> URL: https://issues.apache.org/jira/browse/HIVE-20526
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20526.1.patch
>
>
> Add a test case for the issue discussed in HIVE-20489.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20437) Handle schema evolution from float, double and decimal

2018-09-07 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-20437:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~janulatha] for the patch

> Handle schema evolution from float, double and decimal
> --
>
> Key: HIVE-20437
> URL: https://issues.apache.org/jira/browse/HIVE-20437
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20437.1.patch, HIVE-20437.2.patch, 
> HIVE-20437.3.patch
>
>
> When data created as float, double or decimal in parquet format is read back 
> using some other type, errors are seen.  Parquet should behave just like any 
> other format.  If the value is valid for the new type, data is returned; 
> otherwise null has to be returned.  
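
A minimal sketch of the read-time behaviour the description asks for, using 
double-to-float as the example; the method is illustrative only and is not the 
actual Parquet reader code.

{code:java}
public class NarrowingRead {
  // Illustrative only: a value written as double and read back through a float
  // schema is returned when it fits, and null when it cannot be represented,
  // mirroring how the other file formats behave.
  public static Float readAsFloat(double stored) {
    float narrowed = (float) stored;
    if (Float.isInfinite(narrowed) && !Double.isInfinite(stored)) {
      return null; // the value overflows the narrower type
    }
    return narrowed;
  }
}
{code}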



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20437) Handle schema evolution from float, double and decimal

2018-09-06 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606254#comment-16606254
 ] 

Andrew Sherman commented on HIVE-20437:
---

+1 LGTM

> Handle schema evolution from float, double and decimal
> --
>
> Key: HIVE-20437
> URL: https://issues.apache.org/jira/browse/HIVE-20437
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20437.1.patch, HIVE-20437.2.patch, 
> HIVE-20437.3.patch
>
>
> When data created as float, double or decimal in parquet format is read back 
> using some other type, errors are seen.  Parquet should behave just like any 
> other format.  If the value is valid for the new type, data is returned; 
> otherwise null has to be returned.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-20479) Update content/people.mdtext in cms

2018-08-28 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved HIVE-20479.
---
Resolution: Fixed

Done already

> Update content/people.mdtext in cms 
> 
>
> Key: HIVE-20479
> URL: https://issues.apache.org/jira/browse/HIVE-20479
> Project: Hive
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> I added myself to the committers list. 
>  
> {code:java}
> <tr>
>   <td>asherman</td>
>   <td>Andrew Sherman</td>
>   <td><a href="http://cloudera.com/">Cloudera</a></td>
> </tr>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20479) Update content/people.mdtext in cms

2018-08-28 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-20479:
-


> Update content/people.mdtext in cms 
> 
>
> Key: HIVE-20479
> URL: https://issues.apache.org/jira/browse/HIVE-20479
> Project: Hive
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> I added myself to the committers list. 
>  
> {code:java}
> <tr>
>   <td>asherman</td>
>   <td>Andrew Sherman</td>
>   <td><a href="http://cloudera.com/">Cloudera</a></td>
> </tr>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20384) Fix flakiness of erasure_commands.q

2018-08-14 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16580369#comment-16580369
 ] 

Andrew Sherman commented on HIVE-20384:
---

LGTM +1 pending test results

> Fix flakiness of erasure_commands.q
> ---
>
> Key: HIVE-20384
> URL: https://issues.apache.org/jira/browse/HIVE-20384
> Project: Hive
>  Issue Type: Bug
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-20384.0.patch
>
>
> Qtest erasure_commands.q might fail if erasure_simple.q precedes it in the 
> same batch



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20156) Printing Stacktrace to STDERR

2018-07-27 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16560278#comment-16560278
 ] 

Andrew Sherman commented on HIVE-20156:
---

Thanks [~ngangam]

> Printing Stacktrace to STDERR
> -
>
> Key: HIVE-20156
> URL: https://issues.apache.org/jira/browse/HIVE-20156
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Andrew Sherman
>Priority: Minor
>  Labels: newbie, noob
> Fix For: 4.0.0
>
> Attachments: HIVE-20156.1.patch
>
>
> Class {{org.apache.hadoop.hive.ql.exec.JoinOperator}} has the following code:
> {code}
> } catch (Exception e) {
>   e.printStackTrace();
>   throw new HiveException(e);
> }
> {code}
> Do not print the stack trace to STDERR with a call to {{printStackTrace()}}.  
> Please remove that line and let the code catching the {{HiveException}} worry 
> about printing any messages through a logger.
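
A minimal sketch of the shape the description is asking for (the surrounding class 
and method are invented for the example): the STDERR write goes away and the 
wrapped cause keeps the stack trace for whatever logger handles the HiveException.

{code:java}
import org.apache.hadoop.hive.ql.metadata.HiveException;

public class WrapDontPrint {
  // Illustrative only: no printStackTrace(); the caller that catches HiveException
  // decides how to log, and the wrapped cause preserves the original stack trace.
  static void closeOrWrap(AutoCloseable resource) throws HiveException {
    try {
      resource.close();
    } catch (Exception e) {
      throw new HiveException(e);
    }
  }
}
{code}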



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18929) The method humanReadableInt in HiveStringUtils.java has a race condition.

2018-07-27 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16560277#comment-16560277
 ] 

Andrew Sherman commented on HIVE-18929:
---

Thanks [~aihuaxu]

> The method humanReadableInt in HiveStringUtils.java has a race condition.
> -
>
> Key: HIVE-18929
> URL: https://issues.apache.org/jira/browse/HIVE-18929
> Project: Hive
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.3.2
>Reporter: Chaiyong Ragkhitwetsagul
>Assignee: Andrew Sherman
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-18929.1.patch
>
>
> I found that the {{humanReadableInt(long number)}} method in the 
> hive/common/src/java/org/apache/hive/common/util/HiveStringUtils.java file 
> contains code which has a race condition as shown in Hadoop (issue tracking 
> ID HADOOP-9252: https://issues.apache.org/jira/browse/HADOOP-9252). The fix 
> can also be seen in the Hadoop code base.
> I couldn't find a call to the method anywhere else in the code. But it might 
> be worth to fix.
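
For context, the race comes from sharing a single DecimalFormat (which is not 
thread-safe) across threads, as in HADOOP-9252. Below is a minimal sketch of one 
race-free variant that gives each thread its own formatter; it is illustrative, not 
the actual Hive/Hadoop fix.

{code:java}
import java.text.DecimalFormat;

public class HumanReadable {
  // DecimalFormat keeps mutable internal state, so one shared static instance can
  // produce corrupted output under concurrent calls. A per-thread copy avoids that.
  private static final ThreadLocal<DecimalFormat> FORMAT =
      ThreadLocal.withInitial(() -> new DecimalFormat("#.#"));

  public static String humanReadableInt(long number) {
    double abs = Math.abs((double) number);
    if (abs < 1024) {
      return String.valueOf(number);
    } else if (abs < 1024 * 1024) {
      return FORMAT.get().format(number / 1024.0) + "k";
    } else if (abs < 1024 * 1024 * 1024) {
      return FORMAT.get().format(number / (1024.0 * 1024)) + "m";
    } else {
      return FORMAT.get().format(number / (1024.0 * 1024 * 1024)) + "g";
    }
  }
}
{code}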



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20156) Printing Stacktrace to STDERR

2018-07-26 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559170#comment-16559170
 ] 

Andrew Sherman commented on HIVE-20156:
---

Thanks for looking at this, [~ngangam], please push to master at your 
convenience

> Printing Stacktrace to STDERR
> -
>
> Key: HIVE-20156
> URL: https://issues.apache.org/jira/browse/HIVE-20156
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Andrew Sherman
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HIVE-20156.1.patch
>
>
> Class {{org.apache.hadoop.hive.ql.exec.JoinOperator}} has the following code:
> {code}
> } catch (Exception e) {
>   e.printStackTrace();
>   throw new HiveException(e);
> }
> {code}
> Do not print the stack trace to STDERR with a call to {{printStackTrace()}}.  
> Please remove that line and let the code catching the {{HiveException}} worry 
> about printing any messages through a logger.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20158) Do Not Print StackTraces to STDERR in Base64TextOutputFormat

2018-07-26 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559020#comment-16559020
 ] 

Andrew Sherman commented on HIVE-20158:
---

Thanks [~vihangk1]

> Do Not Print StackTraces to STDERR in Base64TextOutputFormat
> 
>
> Key: HIVE-20158
> URL: https://issues.apache.org/jira/browse/HIVE-20158
> Project: Hive
>  Issue Type: Improvement
>  Components: Contrib
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Andrew Sherman
>Priority: Trivial
>  Labels: newbie, noob
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20158.1.patch, HIVE-20158.2.patch
>
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/contrib/src/java/org/apache/hadoop/hive/contrib/fileformat/base64/Base64TextOutputFormat.java
> {code}
>   try {
> String signatureString = 
> job.get("base64.text.output.format.signature");
> if (signatureString != null) {
>   signature = signatureString.getBytes("UTF-8");
> } else {
>   signature = new byte[0];
> }
>   } catch (UnsupportedEncodingException e) {
> e.printStackTrace();
>   }
> {code}
> The {{UnsupportedEncodingException}} is coming from the {{getBytes}} method 
> call.  Instead, use the {{CharSet}} version of the method and it doesn't 
> throw this explicit exception so the 'try' block can simply be removed.  
> Every JVM will support UTF-8.
> https://docs.oracle.com/javase/7/docs/api/java/lang/String.html#getBytes(java.nio.charset.Charset)
> https://docs.oracle.com/javase/7/docs/api/java/nio/charset/StandardCharsets.html#UTF_8
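
A minimal sketch of the suggested rewrite (the wrapping class and method here are 
invented for the example); with the Charset overload there is no checked exception, 
so the try/catch disappears.

{code:java}
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.mapred.JobConf;

public class SignatureInit {
  // Illustrative only: same lookup as before, but getBytes(Charset) cannot throw
  // UnsupportedEncodingException, so no try/catch is needed.
  static byte[] readSignature(JobConf job) {
    String signatureString = job.get("base64.text.output.format.signature");
    return signatureString != null
        ? signatureString.getBytes(StandardCharsets.UTF_8)
        : new byte[0];
  }
}
{code}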



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19986) Add logging of runtime statistics indicating when Hdfs Erasure Coding is used by MR

2018-07-25 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16556418#comment-16556418
 ] 

Andrew Sherman commented on HIVE-19986:
---

Thanks [~stakiar]

> Add logging of runtime statistics indicating when Hdfs Erasure Coding is used 
> by MR
> ---
>
> Key: HIVE-19986
> URL: https://issues.apache.org/jira/browse/HIVE-19986
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-19986.1.patch, HIVE-19986.2.patch, 
> HIVE-19986.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18852) Misleading error message in alter table validation

2018-07-25 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16555976#comment-16555976
 ] 

Andrew Sherman commented on HIVE-18852:
---

Thanks [~vgarg]

> Misleading error message in alter table validation
> --
>
> Key: HIVE-18852
> URL: https://issues.apache.org/jira/browse/HIVE-18852
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.4.0
>Reporter: Dan Burkert
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-18852.1.patch, HIVE-18852.2.patch
>
>
> The metastore's validation error message when attempting to rename a table to 
> a non-existent database is wrong.  For instance, attempting to alter table 
> 'db.table' to 'non_existent_database.table' results in the Thrift error:
> {{TException - service has thrown: InvalidOperationException(message=Unable 
> to change partition or table. Database db does not exist Check metastore logs 
> for detailed stack.non_existent_database)}}
> I believe the offending line of code is 
> [here|https://github.com/apache/hive/blob/branch-2/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java?utf8=%E2%9C%93#L331-L333],
>  notice that {{dbname}} is used in the message, not {{newDbName}}.  I don't 
> know if switching that would cause the case of a non-existing {{dbname}} case 
> to regress, though.
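
A minimal sketch of the messaging fix being suggested, with the validation boiled 
down to a plain boolean check; the exception type and signature are simplified here 
and this is not the actual HiveAlterHandler code.

{code:java}
public class RenameValidation {
  // Illustrative only: when the *target* database of a rename does not exist, the
  // error should name newDbName rather than the source database.
  static void checkTargetDatabase(boolean targetDbExists, String newDbName) {
    if (!targetDbExists) {
      throw new IllegalStateException(
          "Unable to change partition or table. Database " + newDbName + " does not exist");
    }
  }
}
{code}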



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19986) Add logging of runtime statistics indicating when Hdfs Erasure Coding is used by MR

2018-07-25 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-19986:
--
Attachment: HIVE-19986.3.patch

> Add logging of runtime statistics indicating when Hdfs Erasure Coding is used 
> by MR
> ---
>
> Key: HIVE-19986
> URL: https://issues.apache.org/jira/browse/HIVE-19986
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-19986.1.patch, HIVE-19986.2.patch, 
> HIVE-19986.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19987) Add logging of runtime statistics indicating when Hdfs Erasure Coding is used by Spark

2018-07-24 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-19987:
-

Assignee: Adam Szita  (was: Andrew Sherman)

> Add logging of runtime statistics indicating when Hdfs Erasure Coding is used 
> by Spark
> --
>
> Key: HIVE-19987
> URL: https://issues.apache.org/jira/browse/HIVE-19987
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Andrew Sherman
>Assignee: Adam Szita
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-18119) show partitions should say whether a partition is stored via EC

2018-07-24 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman resolved HIVE-18119.
---
Resolution: Duplicate

Fixed as part of HIVE-18118

> show partitions should say whether a partition is stored via EC
> ---
>
> Key: HIVE-18119
> URL: https://issues.apache.org/jira/browse/HIVE-18119
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
>
> Not sure what the criteria should be here because technically any single file 
> in a directory can be stored via EC. So a partition may contain both EC files 
> and regular files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18852) Misleading error message in alter table validation

2018-07-24 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18852:
--
Attachment: HIVE-18852.2.patch

> Misleading error message in alter table validation
> --
>
> Key: HIVE-18852
> URL: https://issues.apache.org/jira/browse/HIVE-18852
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.4.0
>Reporter: Dan Burkert
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18852.1.patch, HIVE-18852.2.patch
>
>
> The metastore's validation error message when attempting to rename a table to 
> a non-existent database is wrong.  For instance, attempting to alter table 
> 'db.table' to 'non_existent_database.table' results in the Thrift error:
> {{TException - service has thrown: InvalidOperationException(message=Unable 
> to change partition or table. Database db does not exist Check metastore logs 
> for detailed stack.non_existent_database)}}
> I believe the offending line of code is 
> [here|https://github.com/apache/hive/blob/branch-2/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java?utf8=%E2%9C%93#L331-L333],
>  notice that {{dbname}} is used in the message, not {{newDbName}}.  I don't 
> know if switching that would cause the case of a non-existing {{dbname}} case 
> to regress, though.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19986) Add logging of runtime statistics indicating when Hdfs Erasure Coding is used by MR

2018-07-24 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16554804#comment-16554804
 ] 

Andrew Sherman commented on HIVE-19986:
---

Thanks for looking, [~stakiar]. I updated [https://reviews.apache.org/r/68027/] 
and uploaded a new patch. The equivalent HoS change is HIVE-19987, but this 
relies on Spark changes that are not ready yet.

> Add logging of runtime statistics indicating when Hdfs Erasure Coding is used 
> by MR
> ---
>
> Key: HIVE-19986
> URL: https://issues.apache.org/jira/browse/HIVE-19986
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-19986.1.patch, HIVE-19986.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19986) Add logging of runtime statistics indicating when Hdfs Erasure Coding is used by MR

2018-07-24 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-19986:
--
Attachment: HIVE-19986.2.patch

> Add logging of runtime statistics indicating when Hdfs Erasure Coding is used 
> by MR
> ---
>
> Key: HIVE-19986
> URL: https://issues.apache.org/jira/browse/HIVE-19986
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-19986.1.patch, HIVE-19986.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18852) Misleading error message in alter table validation

2018-07-24 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16554682#comment-16554682
 ] 

Andrew Sherman commented on HIVE-18852:
---

[~vgarg] thanks for trying, I will rebase

> Misleading error message in alter table validation
> --
>
> Key: HIVE-18852
> URL: https://issues.apache.org/jira/browse/HIVE-18852
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.4.0
>Reporter: Dan Burkert
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18852.1.patch
>
>
> The metastore's validation error message when attempting to rename a table to 
> a non-existent database is wrong.  For instance, attempting to alter table 
> 'db.table' to 'non_existent_database.table' results in the Thrift error:
> {{TException - service has thrown: InvalidOperationException(message=Unable 
> to change partition or table. Database db does not exist Check metastore logs 
> for detailed stack.non_existent_database)}}
> I believe the offending line of code is 
> [here|https://github.com/apache/hive/blob/branch-2/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java?utf8=%E2%9C%93#L331-L333],
>  notice that {{dbname}} is used in the message, not {{newDbName}}.  I don't 
> know if switching that would cause the case of a non-existing {{dbname}} case 
> to regress, though.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20167) apostrophe in midline comment fails with ParseException

2018-07-24 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16554642#comment-16554642
 ] 

Andrew Sherman commented on HIVE-20167:
---

OK it is a bit more complex. This code in a .q file does fail:

{{select 'hello'; -- testing hive's stomach for apostrophes}}
{{;}}

but this is partly an artifact of QTestUtil (i.e. test infrastructure). 

I can run that same code from a hue client and it works.

The different clients (BeeLine, Hive cli, qtests) do some work to split 
commands up before executing them and this is super-annoying and complex code. 
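
To make the pitfall concrete, here is a minimal sketch (not the actual 
BeeLine/CLI/QTestUtil code) of a splitter that tracks single quotes but not "--" 
comments: the apostrophe in the comment flips its quote state, so the trailing ";" 
is never recognized as a statement terminator and the comment text leaks into the 
next command.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class NaiveCommandSplitter {
  // Illustrative only: splits on ';' while tracking quotes but ignoring comments.
  // For "select 'hello'; -- testing hive's stomach for apostrophes\n;" the
  // apostrophe inside the comment re-opens a "string", so the final ';' is swallowed.
  public static List<String> split(String script) {
    List<String> commands = new ArrayList<>();
    StringBuilder current = new StringBuilder();
    boolean inQuote = false;
    for (char c : script.toCharArray()) {
      if (c == '\'') {
        inQuote = !inQuote; // comment text incorrectly toggles this state
      }
      if (c == ';' && !inQuote) {
        commands.add(current.toString().trim());
        current.setLength(0);
      } else {
        current.append(c);
      }
    }
    if (current.toString().trim().length() > 0) {
      commands.add(current.toString().trim());
    }
    return commands;
  }
}
{code}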

> apostrophe in midline comment fails with ParseException
> ---
>
> Key: HIVE-20167
> URL: https://issues.apache.org/jira/browse/HIVE-20167
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 2.3.2
> Environment: Observed on an AWS EMR cluster. 
> Hive cli, executing script from bash with "hive -f ..." (not interactive).
>  
>Reporter: Trey Fore
>Priority: Minor
>
> This line causes a ParseException:
> {{    , member_id string                  --  standardizing from client's 
> memberID}}
> When the apostrophe is removed, leaving:
> {{    , member_id string                  --  standardizing from clients 
> memberID}}
> the line is parsed correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20156) Printing Stacktrace to STDERR

2018-07-23 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-20156:
--
Attachment: HIVE-20156.1.patch
Status: Patch Available  (was: Open)

> Printing Stacktrace to STDERR
> -
>
> Key: HIVE-20156
> URL: https://issues.apache.org/jira/browse/HIVE-20156
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Andrew Sherman
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HIVE-20156.1.patch
>
>
> Class {{org.apache.hadoop.hive.ql.exec.JoinOperator}} has the following code:
> {code}
> } catch (Exception e) {
>   e.printStackTrace();
>   throw new HiveException(e);
> }
> {code}
> Do not print the stack trace to STDERR with a call to {{printStackTrace()}}.  
> Please remove that line and let the code catching the {{HiveException}} worry 
> about printing any messages through a logger.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20156) Printing Stacktrace to STDERR

2018-07-23 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-20156:
-

Assignee: Andrew Sherman

> Printing Stacktrace to STDERR
> -
>
> Key: HIVE-20156
> URL: https://issues.apache.org/jira/browse/HIVE-20156
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Andrew Sherman
>Priority: Minor
>  Labels: newbie, noob
>
> Class {{org.apache.hadoop.hive.ql.exec.JoinOperator}} has the following code:
> {code}
> } catch (Exception e) {
>   e.printStackTrace();
>   throw new HiveException(e);
> }
> {code}
> Do not print the stack trace to STDERR with a call to {{printStackTrace()}}.  
> Please remove that line and let the code catching the {{HiveException}} worry 
> about printing any messages through a logger.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20158) Do Not Print StackTraces to STDERR in Base64TextOutputFormat

2018-07-23 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-20158:
--
Attachment: HIVE-20158.1.patch
Status: Patch Available  (was: Open)

> Do Not Print StackTraces to STDERR in Base64TextOutputFormat
> 
>
> Key: HIVE-20158
> URL: https://issues.apache.org/jira/browse/HIVE-20158
> Project: Hive
>  Issue Type: Improvement
>  Components: Contrib
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Andrew Sherman
>Priority: Trivial
>  Labels: newbie, noob
> Attachments: HIVE-20158.1.patch
>
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/contrib/src/java/org/apache/hadoop/hive/contrib/fileformat/base64/Base64TextOutputFormat.java
> {code}
>   try {
> String signatureString = 
> job.get("base64.text.output.format.signature");
> if (signatureString != null) {
>   signature = signatureString.getBytes("UTF-8");
> } else {
>   signature = new byte[0];
> }
>   } catch (UnsupportedEncodingException e) {
> e.printStackTrace();
>   }
> {code}
> The {{UnsupportedEncodingException}} is coming from the {{getBytes}} method 
> call.  Instead, use the {{CharSet}} version of the method and it doesn't 
> throw this explicit exception so the 'try' block can simply be removed.  
> Every JVM will support UTF-8.
> https://docs.oracle.com/javase/7/docs/api/java/lang/String.html#getBytes(java.nio.charset.Charset)
> https://docs.oracle.com/javase/7/docs/api/java/nio/charset/StandardCharsets.html#UTF_8



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20158) Do Not Print StackTraces to STDERR in Base64TextOutputFormat

2018-07-23 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-20158:
-

Assignee: Andrew Sherman

> Do Not Print StackTraces to STDERR in Base64TextOutputFormat
> 
>
> Key: HIVE-20158
> URL: https://issues.apache.org/jira/browse/HIVE-20158
> Project: Hive
>  Issue Type: Improvement
>  Components: Contrib
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Andrew Sherman
>Priority: Trivial
>  Labels: newbie, noob
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/contrib/src/java/org/apache/hadoop/hive/contrib/fileformat/base64/Base64TextOutputFormat.java
> {code}
>   try {
> String signatureString = 
> job.get("base64.text.output.format.signature");
> if (signatureString != null) {
>   signature = signatureString.getBytes("UTF-8");
> } else {
>   signature = new byte[0];
> }
>   } catch (UnsupportedEncodingException e) {
> e.printStackTrace();
>   }
> {code}
> The {{UnsupportedEncodingException}} is coming from the {{getBytes}} method 
> call.  Instead, use the {{CharSet}} version of the method and it doesn't 
> throw this explicit exception so the 'try' block can simply be removed.  
> Every JVM will support UTF-8.
> https://docs.oracle.com/javase/7/docs/api/java/lang/String.html#getBytes(java.nio.charset.Charset)
> https://docs.oracle.com/javase/7/docs/api/java/nio/charset/StandardCharsets.html#UTF_8



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20167) apostrophe in midline comment fails with ParseException

2018-07-23 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553569#comment-16553569
 ] 

Andrew Sherman commented on HIVE-20167:
---

I tried to reproduce with

{{create table andrew (}}
{{a int -- here's a comment}}
{{);}}

but that seems to work OK. 

> apostrophe in midline comment fails with ParseException
> ---
>
> Key: HIVE-20167
> URL: https://issues.apache.org/jira/browse/HIVE-20167
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 2.3.2
> Environment: Observed on an AWS EMR cluster. 
> Hive cli, executing script from bash with "hive -f ..." (not interactive).
>  
>Reporter: Trey Fore
>Priority: Minor
>
> This line causes a ParseException:
> {{    , member_id string                  --  standardizing from client's 
> memberID}}
> When the apostrophe is removed, leaving:
> {{    , member_id string                  --  standardizing from clients 
> memberID}}
> the line is parsed correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19986) Add logging of runtime statistics indicating when Hdfs Erasure Coding is used by MR

2018-07-23 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-19986:
--
Attachment: HIVE-19986.1.patch
Status: Patch Available  (was: Open)

> Add logging of runtime statistics indicating when Hdfs Erasure Coding is used 
> by MR
> ---
>
> Key: HIVE-19986
> URL: https://issues.apache.org/jira/browse/HIVE-19986
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-19986.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18929) The method humanReadableInt in HiveStringUtils.java has a race condition.

2018-07-23 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553414#comment-16553414
 ] 

Andrew Sherman commented on HIVE-18929:
---

Thanks [~aihuaxu], tests passed OK, can you please push to master when you have 
time?

> The method humanReadableInt in HiveStringUtils.java has a race condition.
> -
>
> Key: HIVE-18929
> URL: https://issues.apache.org/jira/browse/HIVE-18929
> Project: Hive
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.3.2
>Reporter: Chaiyong Ragkhitwetsagul
>Assignee: Andrew Sherman
>Priority: Minor
> Attachments: HIVE-18929.1.patch
>
>
> I found that the {{humanReadableInt(long number)}} method in the 
> hive/common/src/java/org/apache/hive/common/util/HiveStringUtils.java file 
> contains code which has a race condition as shown in Hadoop (issue tracking 
> ID HADOOP-9252: https://issues.apache.org/jira/browse/HADOOP-9252). The fix 
> can also be seen in the Hadoop code base.
> I couldn't find a call to the method anywhere else in the code. But it might 
> be worth to fix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18852) Misleading error message in alter table validation

2018-07-23 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553406#comment-16553406
 ] 

Andrew Sherman commented on HIVE-18852:
---

[~vgarg] thanks for the +1, can you push this to master if possible? Thanks.

> Misleading error message in alter table validation
> --
>
> Key: HIVE-18852
> URL: https://issues.apache.org/jira/browse/HIVE-18852
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.4.0
>Reporter: Dan Burkert
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18852.1.patch
>
>
> The metastore's validation error message when attempting to rename a table to 
> a non-existent database is wrong.  For instance, attempting to alter table 
> 'db.table' to 'non_existent_database.table' results in the Thrift error:
> {{TException - service has thrown: InvalidOperationException(message=Unable 
> to change partition or table. Database db does not exist Check metastore logs 
> for detailed stack.non_existent_database)}}
> I believe the offending line of code is 
> [here|https://github.com/apache/hive/blob/branch-2/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java?utf8=%E2%9C%93#L331-L333],
>  notice that {{dbname}} is used in the message, not {{newDbName}}.  I don't 
> know if switching that would cause the case of a non-existing {{dbname}} case 
> to regress, though.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18852) Misleading error message in alter table validation

2018-07-03 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18852:
--
Attachment: HIVE-18852.1.patch
Status: Patch Available  (was: Open)

> Misleading error message in alter table validation
> --
>
> Key: HIVE-18852
> URL: https://issues.apache.org/jira/browse/HIVE-18852
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.4.0
>Reporter: Dan Burkert
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18852.1.patch
>
>
> The metastore's validation error message when attempting to rename a table to 
> a non-existent database is wrong.  For instance, attempting to alter table 
> 'db.table' to 'non_existent_database.table' results in the Thrift error:
> {{TException - service has thrown: InvalidOperationException(message=Unable 
> to change partition or table. Database db does not exist Check metastore logs 
> for detailed stack.non_existent_database)}}
> I believe the offending line of code is 
> [here|https://github.com/apache/hive/blob/branch-2/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java?utf8=%E2%9C%93#L331-L333],
>  notice that {{dbname}} is used in the message, not {{newDbName}}.  I don't 
> know if switching that would cause the case of a non-existing {{dbname}} case 
> to regress, though.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18852) Misleading error message in alter table validation

2018-07-03 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-18852:
-

Assignee: Andrew Sherman

> Misleading error message in alter table validation
> --
>
> Key: HIVE-18852
> URL: https://issues.apache.org/jira/browse/HIVE-18852
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.4.0
>Reporter: Dan Burkert
>Assignee: Andrew Sherman
>Priority: Major
>
> The metastore's validation error message when attempting to rename a table to 
> a non-existent database is wrong.  For instance, attempting to alter table 
> 'db.table' to 'non_existent_database.table' results in the Thrift error:
> {{TException - service has thrown: InvalidOperationException(message=Unable 
> to change partition or table. Database db does not exist Check metastore logs 
> for detailed stack.non_existent_database)}}
> I believe the offending line of code is 
> [here|https://github.com/apache/hive/blob/branch-2/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java?utf8=%E2%9C%93#L331-L333],
>  notice that {{dbname}} is used in the message, not {{newDbName}}.  I don't 
> know if switching that would cause the case of a non-existing {{dbname}} case 
> to regress, though.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file

2018-07-03 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18118:
--
Attachment: HIVE-18118.15.patch

> Explain Extended should indicate if a file being read is an EC file
> ---
>
> Key: HIVE-18118
> URL: https://issues.apache.org/jira/browse/HIVE-18118
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, 
> HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, 
> HIVE-18118.12.patch, HIVE-18118.14.patch, HIVE-18118.15.patch, 
> HIVE-18118.2.patch, HIVE-18118.3.patch, HIVE-18118.4.patch, 
> HIVE-18118.5.patch, HIVE-18118.6.patch, HIVE-18118.7.patch, 
> HIVE-18118.8.patch, HIVE-18118.9.patch
>
>
> We already print out the files Hive will read in the explain extended 
> command; we just have to modify it to say whether or not it's an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20055) SQL injection via metastore ACID APIs (and maybe queries, although that's unlikely)

2018-07-02 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16530404#comment-16530404
 ] 

Andrew Sherman commented on HIVE-20055:
---

I'm not sure I completely understand this jira, but often the best way to 
avoid SQL injection attacks is to code JDBC access using prepared statements.
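
For illustration, a minimal sketch of that parameter-based style applied to the 
delete mentioned below; the table and column names are taken from the JIRA text, 
while the wrapping class and method are invented for the example.

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ParameterizedDelete {
  // Illustrative only: the replication policy is bound as a parameter, so a value
  // like "' OR '1' = '1" is just data and cannot change the shape of the SQL.
  static int deleteReplTxnMapping(Connection conn, long sourceTxnId, String replPolicy)
      throws SQLException {
    String sql = "delete from REPL_TXN_MAP where RTM_SRC_TXN_ID = ? and RTM_REPL_POLICY = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setLong(1, sourceTxnId);
      ps.setString(2, replPolicy);
      return ps.executeUpdate();
    }
  }
}
{code}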

> SQL injection via metastore ACID APIs (and maybe queries, although that's 
> unlikely)
> ---
>
> Key: HIVE-20055
> URL: https://issues.apache.org/jira/browse/HIVE-20055
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Thejas M Nair
>Priority: Major
>
> [~thejas] asked me to create this JIRA based on my earlier email :)
> {noformat}
> This might be doable with a specially crafted query, I’m not sure what APIs 
> calls have what checks (e.g. via Hive parser) that would prevent the below.
> However, for remote metastore (default on many clusters currently, afaik it’s 
> the default for ACID) we expose thrift API that accepts strings e.g. 
> get_valid_write_ids.
> That passes the string table names to TxnHandler::getValidWriteIdsForTable, 
> that inserts them into the query string w/quoteString call; quoteString 
> doesn’t do any validation.
> Some ready made delete statements also exist e.g.  "delete from REPL_TXN_MAP 
> where RTM_SRC_TXN_ID = " + sourceTxnId + " and RTM_REPL_POLICY = " + 
> quoteString(rqst.getReplPolicy());
> I think my replication policy might be {' OR '1' = '1} ;)
> So, SQL injection might be possible thru these APIs.
> I wonder if this class should be switched to parameter based execution? 
> DirectSQL could be used as an example, although that uses DataNucleus direct 
> sql feature… at least we need some checks on these.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20030) Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java and AnnotateRunTimeStatsOptimizer.java

2018-07-02 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16530252#comment-16530252
 ] 

Andrew Sherman commented on HIVE-20030:
---

[~kgyrtkirk] that is interesting, thanks for pointing out HIVE-20008. 
With your change the problem is now worse :-) When I rebase, clean build, and 
then try to run with IntelliJ I see
{code}
/Users/asherman/git/asf/hive/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
Warning:(239, 45) java: org.apache.hadoop.hive.ql.udf.generic.GenericUDFHash in 
org.apache.hadoop.hive.ql.udf.generic has been deprecated
Error:(525, 70) java: incompatible types: 
java.util.List> cannot be converted to 
java.util.List>
Warning:(1819, 48) java: HIVEMAPREDMODE in 
org.apache.hadoop.hive.conf.HiveConf.ConfVars has been deprecated
Warning:(10924, 72) java: org.apache.hadoop.hive.ql.udf.generic.GenericUDFHash 
in org.apache.hadoop.hive.ql.udf.generic has been deprecated
Error:(12207, 52) java: incompatible types: 
java.util.List> cannot be converted to 
java.util.List>
Error:(12307, 30) java: incompatible types: 
java.util.List> cannot be converted to 
java.util.List>
Warning:(13260, 43) java: isDir() in org.apache.hadoop.fs.FileStatus has been 
deprecated
Warning:(13526, 61) java: HIVE_GROUPBY_ORDERBY_POSITION_ALIAS in 
org.apache.hadoop.hive.conf.HiveConf.ConfVars has been deprecated
Error:(14766, 49) java: incompatible types: 
java.util.List> cannot be converted to 
java.util.List>
{code}

In the code there are a lot of places where we use <? extends OperatorDesc>, so 
removing those is a much larger task than converting the few Operator<?> to 
Operator<? extends OperatorDesc>.
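
Here is a small self-contained illustration of the mismatch (stand-in types, not 
Hive classes): a List declared with an unbounded wildcard element type is not 
interchangeable with one declared with the bounded wildcard, and declaring the 
variable with the bound it really has is what makes the call sites line up.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class WildcardMismatch {
  interface Desc { }                    // stands in for OperatorDesc
  static class Op<T extends Desc> { }   // stands in for Operator<T>

  static void consume(List<Op<? extends Desc>> ops) { }

  public static void main(String[] args) {
    List<Op<?>> loose = new ArrayList<>();
    // consume(loose);                  // flagged as incompatible, as in the errors above

    // Declaring the element type with the bound it actually has keeps both
    // javac and the IDE happy, which is essentially what the patch does.
    List<Op<? extends Desc>> bounded = new ArrayList<>();
    consume(bounded);
  }
}
{code}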


> Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java 
> and AnnotateRunTimeStatsOptimizer.java
> 
>
> Key: HIVE-20030
> URL: https://issues.apache.org/jira/browse/HIVE-20030
> Project: Hive
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-20030.1.patch
>
>
> For some reason the Java compiler in IntelliJ is more strict than the Oracle 
> jdk compiler. Maybe this is something that can be configured away, but as it 
> is simple I propose to make the code more type correct. 
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
> Error:(613, 24) java: no suitable method found for 
> findOperatorsUpstream(java.util.List  extends 
> org.apache.hadoop.hive.ql.plan.OperatorDesc>>,java.lang.Class)
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator,java.lang.Class)
>  is not applicable
>   (cannot infer type-variable(s) T
> (argument mismatch; 
> java.util.List org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to 
> org.apache.hadoop.hive.ql.exec.Operator))
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(java.util.Collection>,java.lang.Class)
>  is not applicable
>   (cannot infer type-variable(s) T
> (argument mismatch; 
> java.util.List org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to 
> java.util.Collection>))
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator,java.lang.Class,java.util.Set)
>  is not applicable
>   (cannot infer type-variable(s) T
> (actual and formal argument lists differ in length))
> {code}
> and
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/AnnotateRunTimeStatsOptimizer.java
> Error:(76, 12) java: no suitable method found for 
> addAll(java.util.List>)
> method java.util.Collection.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.List> cannot be 
> converted to java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.List> cannot be 
> converted to java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> Error:(80, 14) java: no suitable method found for 
> addAll(java.util.Set>)
> method java.util.Collection.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends 

[jira] [Updated] (HIVE-19581) view do not support unicode characters well

2018-07-02 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-19581:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ngangam] for the commit. My intention was to prevent future 
regressions in master, so I think this is good enough for now.

> view do not support unicode characters well
> ---
>
> Key: HIVE-19581
> URL: https://issues.apache.org/jira/browse/HIVE-19581
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: kai
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-19581.1.patch, HIVE-19581.2.patch, 
> HIVE-19581.3.patch, HIVE-19581.4.patch, HIVE-19581.5.patch, 
> HIVE-19581.6.patch, explain.png, metastore.png
>
>
> create table t_test (name ,string) ;
>  insert into table t_test VALUES ('李四');
>  create view t_view_test as select * from t_test where name='李四';
> when select  * from t_view_test   no  records return



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file

2018-07-02 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18118:
--
Attachment: HIVE-18118.14.patch

> Explain Extended should indicate if a file being read is an EC file
> ---
>
> Key: HIVE-18118
> URL: https://issues.apache.org/jira/browse/HIVE-18118
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, 
> HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, 
> HIVE-18118.12.patch, HIVE-18118.14.patch, HIVE-18118.2.patch, 
> HIVE-18118.3.patch, HIVE-18118.4.patch, HIVE-18118.5.patch, 
> HIVE-18118.6.patch, HIVE-18118.7.patch, HIVE-18118.8.patch, HIVE-18118.9.patch
>
>
> We already print out the files Hive will read in the explain extended 
> command; we just have to modify it to say whether or not it's an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file

2018-07-02 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18118:
--
Attachment: (was: HIVE-18118.13.patch)

> Explain Extended should indicate if a file being read is an EC file
> ---
>
> Key: HIVE-18118
> URL: https://issues.apache.org/jira/browse/HIVE-18118
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, 
> HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, 
> HIVE-18118.12.patch, HIVE-18118.14.patch, HIVE-18118.2.patch, 
> HIVE-18118.3.patch, HIVE-18118.4.patch, HIVE-18118.5.patch, 
> HIVE-18118.6.patch, HIVE-18118.7.patch, HIVE-18118.8.patch, HIVE-18118.9.patch
>
>
> We already print out the files Hive will read in the explain extended 
> command; we just have to modify it to say whether or not it's an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20030) Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java and AnnotateRunTimeStatsOptimizer.java

2018-06-30 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528838#comment-16528838
 ] 

Andrew Sherman commented on HIVE-20030:
---

[~sershe] this could be user error (by me) but IntelliJ is using jdk8 and the 
language level is 8.
The change does seem to make the types match more accurately so this seems like 
a safe change.
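
For illustration, a self-contained sketch (with simplified stand-in types, not the 
actual Hive classes) of the kind of generics tightening involved: declaring the 
collection's element type with an explicit wildcard lets addAll() resolve under both 
javac and IntelliJ's stricter checker.

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class GenericsTighteningSketch {

  // Hypothetical stand-ins for OperatorDesc and Operator<T> from the Hive codebase.
  interface OperatorDesc {}
  static class Operator<T extends OperatorDesc> {}
  static class MapJoinDesc implements OperatorDesc {}

  public static void main(String[] args) {
    // Declaring the element type as Operator<? extends OperatorDesc> up front
    // means lists and sets built this way can be combined without unchecked casts.
    List<Operator<? extends OperatorDesc>> parents = new ArrayList<>();
    parents.add(new Operator<MapJoinDesc>());

    Set<Operator<? extends OperatorDesc>> visited = new HashSet<>();
    visited.addAll(parents); // type-checks cleanly in both compilers

    System.out.println("visited size = " + visited.size());
  }
}
{code}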


> Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java 
> and AnnotateRunTimeStatsOptimizer.java
> 
>
> Key: HIVE-20030
> URL: https://issues.apache.org/jira/browse/HIVE-20030
> Project: Hive
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-20030.1.patch
>
>
> For some reason the Java compiler in IntelliJ is more strict than the Oracle 
> jdk compiler. Maybe this is something that can be configured away, but as it 
> is simple I propose to make the code more type correct. 
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
> Error:(613, 24) java: no suitable method found for 
> findOperatorsUpstream(java.util.List  extends 
> org.apache.hadoop.hive.ql.plan.OperatorDesc>>,java.lang.Class)
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator,java.lang.Class)
>  is not applicable
>   (cannot infer type-variable(s) T
> (argument mismatch; 
> java.util.List org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to 
> org.apache.hadoop.hive.ql.exec.Operator))
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(java.util.Collection>,java.lang.Class)
>  is not applicable
>   (cannot infer type-variable(s) T
> (argument mismatch; 
> java.util.List org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to 
> java.util.Collection>))
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator,java.lang.Class,java.util.Set)
>  is not applicable
>   (cannot infer type-variable(s) T
> (actual and formal argument lists differ in length))
> {code}
> and
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/AnnotateRunTimeStatsOptimizer.java
> Error:(76, 12) java: no suitable method found for 
> addAll(java.util.List>)
> method java.util.Collection.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.List> cannot be 
> converted to java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.List> cannot be 
> converted to java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> Error:(80, 14) java: no suitable method found for 
> addAll(java.util.Set>)
> method java.util.Collection.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> Error:(85, 14) java: no suitable method found for 
> addAll(java.util.Set>)
> method java.util.Collection.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> /Users/asherman/git/asf/hive2/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen/IntervalYearMonthScalarAddTimestampColumn.java
> {code}



--
This message was sent by Atlassian 

[jira] [Updated] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file

2018-06-29 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18118:
--
Attachment: HIVE-18118.13.patch

> Explain Extended should indicate if a file being read is an EC file
> ---
>
> Key: HIVE-18118
> URL: https://issues.apache.org/jira/browse/HIVE-18118
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, 
> HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, 
> HIVE-18118.12.patch, HIVE-18118.13.patch, HIVE-18118.2.patch, 
> HIVE-18118.3.patch, HIVE-18118.4.patch, HIVE-18118.5.patch, 
> HIVE-18118.6.patch, HIVE-18118.7.patch, HIVE-18118.8.patch, HIVE-18118.9.patch
>
>
> We already print out the files Hive will read in the explain extended 
> command, we just have to modify it to say whether or not it's an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file

2018-06-29 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527986#comment-16527986
 ] 

Andrew Sherman commented on HIVE-18118:
---

Thanks [~stakiar] for the review. Rebasing today I hit merge conflicts, so I think 
I will have to do another patch.

> Explain Extended should indicate if a file being read is an EC file
> ---
>
> Key: HIVE-18118
> URL: https://issues.apache.org/jira/browse/HIVE-18118
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, 
> HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, 
> HIVE-18118.12.patch, HIVE-18118.2.patch, HIVE-18118.3.patch, 
> HIVE-18118.4.patch, HIVE-18118.5.patch, HIVE-18118.6.patch, 
> HIVE-18118.7.patch, HIVE-18118.8.patch, HIVE-18118.9.patch
>
>
> We already print out the files Hive will read in the explain extended 
> command, we just have to modify it to say whether or not it's an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18929) The method humanReadableInt in HiveStringUtils.java has a race condition.

2018-06-28 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18929:
--
Attachment: HIVE-18929.1.patch
Status: Patch Available  (was: Open)

> The method humanReadableInt in HiveStringUtils.java has a race condition.
> -
>
> Key: HIVE-18929
> URL: https://issues.apache.org/jira/browse/HIVE-18929
> Project: Hive
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.3.2
>Reporter: Chaiyong Ragkhitwetsagul
>Assignee: Andrew Sherman
>Priority: Minor
> Attachments: HIVE-18929.1.patch
>
>
> I found that the {{humanReadableInt(long number)}} method in the 
> hive/common/src/java/org/apache/hive/common/util/HiveStringUtils.java file 
> contains code which has a race condition as shown in Hadoop (issue tracking 
> ID HADOOP-9252: https://issues.apache.org/jira/browse/HADOOP-9252). The fix 
> can also be seen in the Hadoop code base.
> I couldn't find a call to the method anywhere else in the code. But it might 
> be worth fixing.
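
A minimal sketch of the hazard, with hypothetical names rather than the actual 
HiveStringUtils code: DecimalFormat keeps mutable state, so a single shared static 
instance can produce corrupted output under concurrent callers; formatting per call 
(in the spirit of the HADOOP-9252 fix) avoids the shared state entirely.

{code:java}
import java.text.DecimalFormat;

public class HumanReadableSketch {

  // Racy pattern: a single DecimalFormat shared by all threads.
  private static final DecimalFormat ONE_DECIMAL = new DecimalFormat("0.0");

  public static String humanReadableRacy(long number) {
    double mb = number / (1024.0 * 1024.0);
    return ONE_DECIMAL.format(mb) + "m"; // DecimalFormat is not thread-safe
  }

  // Thread-safe alternative: no shared mutable formatter.
  public static String humanReadableSafe(long number) {
    double mb = number / (1024.0 * 1024.0);
    return String.format("%.1fm", mb);
  }

  public static void main(String[] args) {
    System.out.println(humanReadableRacy(5242880L)); // 5.0m
    System.out.println(humanReadableSafe(5242880L)); // 5.0m
  }
}
{code}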



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18929) The method humanReadableInt in HiveStringUtils.java has a race condition.

2018-06-28 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-18929:
-

Assignee: Andrew Sherman

> The method humanReadableInt in HiveStringUtils.java has a race condition.
> -
>
> Key: HIVE-18929
> URL: https://issues.apache.org/jira/browse/HIVE-18929
> Project: Hive
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.3.2
>Reporter: Chaiyong Ragkhitwetsagul
>Assignee: Andrew Sherman
>Priority: Minor
>
> I found that the {{humanReadableInt(long number)}} method in the 
> hive/common/src/java/org/apache/hive/common/util/HiveStringUtils.java file 
> contains code which has a race condition as shown in Hadoop (issue tracking 
> ID HADOOP-9252: https://issues.apache.org/jira/browse/HADOOP-9252). The fix 
> can also be seen in the Hadoop code base.
> I couldn't find a call to the method anywhere else in the code. But it might 
> be worth fixing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20030) Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java and AnnotateRunTimeStatsOptimizer.java

2018-06-28 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-20030:
--
Attachment: HIVE-20030.1.patch
Status: Patch Available  (was: Open)

> Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java 
> and AnnotateRunTimeStatsOptimizer.java
> 
>
> Key: HIVE-20030
> URL: https://issues.apache.org/jira/browse/HIVE-20030
> Project: Hive
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-20030.1.patch
>
>
> For some reason the Java compiler in IntelliJ is more strict than the Oracle 
> jdk compiler. Maybe this is something that can be configured away, but as it 
> is simple I propose to make the code more type correct. 
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
> Error:(613, 24) java: no suitable method found for 
> findOperatorsUpstream(java.util.List  extends 
> org.apache.hadoop.hive.ql.plan.OperatorDesc>>,java.lang.Class)
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator,java.lang.Class)
>  is not applicable
>   (cannot infer type-variable(s) T
> (argument mismatch; 
> java.util.List org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to 
> org.apache.hadoop.hive.ql.exec.Operator))
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(java.util.Collection>,java.lang.Class)
>  is not applicable
>   (cannot infer type-variable(s) T
> (argument mismatch; 
> java.util.List org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to 
> java.util.Collection>))
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator,java.lang.Class,java.util.Set)
>  is not applicable
>   (cannot infer type-variable(s) T
> (actual and formal argument lists differ in length))
> {code}
> and
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/AnnotateRunTimeStatsOptimizer.java
> Error:(76, 12) java: no suitable method found for 
> addAll(java.util.List>)
> method java.util.Collection.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.List> cannot be 
> converted to java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.List> cannot be 
> converted to java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> Error:(80, 14) java: no suitable method found for 
> addAll(java.util.Set>)
> method java.util.Collection.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> Error:(85, 14) java: no suitable method found for 
> addAll(java.util.Set>)
> method java.util.Collection.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> /Users/asherman/git/asf/hive2/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen/IntervalYearMonthScalarAddTimestampColumn.java
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20030) Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java and AnnotateRunTimeStatsOptimizer.java

2018-06-28 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman reassigned HIVE-20030:
-


> Fix Java compile errors that show up in IntelliJ from ConvertJoinMapJoin.java 
> and AnnotateRunTimeStatsOptimizer.java
> 
>
> Key: HIVE-20030
> URL: https://issues.apache.org/jira/browse/HIVE-20030
> Project: Hive
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
>
> For some reason the Java compiler in IntelliJ is more strict than the Oracle 
> jdk compiler. Maybe this is something that can be configured away, but as it 
> is simple I propose to make the code more type correct. 
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConvertJoinMapJoin.java
> Error:(613, 24) java: no suitable method found for 
> findOperatorsUpstream(java.util.List  extends 
> org.apache.hadoop.hive.ql.plan.OperatorDesc>>,java.lang.Class)
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator,java.lang.Class)
>  is not applicable
>   (cannot infer type-variable(s) T
> (argument mismatch; 
> java.util.List org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to 
> org.apache.hadoop.hive.ql.exec.Operator))
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(java.util.Collection>,java.lang.Class)
>  is not applicable
>   (cannot infer type-variable(s) T
> (argument mismatch; 
> java.util.List org.apache.hadoop.hive.ql.plan.OperatorDesc>> cannot be converted to 
> java.util.Collection>))
> method 
> org.apache.hadoop.hive.ql.exec.OperatorUtils.findOperatorsUpstream(org.apache.hadoop.hive.ql.exec.Operator,java.lang.Class,java.util.Set)
>  is not applicable
>   (cannot infer type-variable(s) T
> (actual and formal argument lists differ in length))
> {code}
> and
> {code}
> /Users/asherman/git/asf/hive2/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/AnnotateRunTimeStatsOptimizer.java
> Error:(76, 12) java: no suitable method found for 
> addAll(java.util.List>)
> method java.util.Collection.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.List> cannot be 
> converted to java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.List> cannot be 
> converted to java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> Error:(80, 14) java: no suitable method found for 
> addAll(java.util.Set>)
> method java.util.Collection.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> Error:(85, 14) java: no suitable method found for 
> addAll(java.util.Set>)
> method java.util.Collection.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> method java.util.Set.addAll(java.util.Collection org.apache.hadoop.hive.ql.exec.Operator org.apache.hadoop.hive.ql.plan.OperatorDesc>>) is not applicable
>   (argument mismatch; 
> java.util.Set> cannot be converted 
> to java.util.Collection extends org.apache.hadoop.hive.ql.plan.OperatorDesc>>)
> /Users/asherman/git/asf/hive2/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen/IntervalYearMonthScalarAddTimestampColumn.java
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19404) Revise DDL Task Result Logging

2018-06-28 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526767#comment-16526767
 ] 

Andrew Sherman commented on HIVE-19404:
---

Thanks [~ychena]

> Revise DDL Task Result Logging
> --
>
> Key: HIVE-19404
> URL: https://issues.apache.org/jira/browse/HIVE-19404
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Assignee: Andrew Sherman
>Priority: Trivial
>  Labels: noob
> Fix For: 4.0.0
>
> Attachments: HIVE-19404.1.patch
>
>
> There is some logging in {{DDLTask}} that can be made better:
> {code}
> 2018-05-03 03:08:32,524 INFO  hive.ql.exec.DDLTask: 
> [HiveServer2-Background-Pool: Thread-101980]: results : 706
> {code}
> This logging should either be demoted to _debug_ level logging and/or 
> given additional context.
> {code}
> 2018-05-03 03:08:32,524 INFO  hive.ql.exec.DDLTask: 
> [HiveServer2-Background-Pool: Thread-101980]: Found 706 tables that match the 
> SHOW DATABASE statement
> {code}
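
A hedged illustration of the suggested direction (hypothetical class and message, 
not the committed change): demote the bare count to DEBUG and include enough 
context to make it meaningful, using SLF4J parameterized logging.

{code:java}
import java.util.Arrays;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DdlResultLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(DdlResultLoggingSketch.class);

  static void logShowResults(String statementType, List<String> results) {
    // Parameterized logging avoids building the message string when DEBUG is disabled.
    LOG.debug("Found {} objects matching the {} statement", results.size(), statementType);
  }

  public static void main(String[] args) {
    logShowResults("SHOW TABLES", Arrays.asList("t_test", "t_view_test"));
  }
}
{code}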



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19581) view do not support unicode characters well

2018-06-28 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526571#comment-16526571
 ] 

Andrew Sherman commented on HIVE-19581:
---

OK, thanks [~ngangam], I have attached a new patch.

> view do not support unicode characters well
> ---
>
> Key: HIVE-19581
> URL: https://issues.apache.org/jira/browse/HIVE-19581
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: kai
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-19581.1.patch, HIVE-19581.2.patch, 
> HIVE-19581.3.patch, HIVE-19581.4.patch, HIVE-19581.5.patch, 
> HIVE-19581.6.patch, explain.png, metastore.png
>
>
> create table t_test (name ,string) ;
>  insert into table t_test VALUES ('李四');
>  create view t_view_test as select * from t_test where name='李四';
> when select  * from t_view_test   no  records return



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19581) view do not support unicode characters well

2018-06-28 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-19581:
--
Attachment: HIVE-19581.6.patch

> view do not support unicode characters well
> ---
>
> Key: HIVE-19581
> URL: https://issues.apache.org/jira/browse/HIVE-19581
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: kai
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-19581.1.patch, HIVE-19581.2.patch, 
> HIVE-19581.3.patch, HIVE-19581.4.patch, HIVE-19581.5.patch, 
> HIVE-19581.6.patch, explain.png, metastore.png
>
>
> create table t_test (name ,string) ;
>  insert into table t_test VALUES ('李四');
>  create view t_view_test as select * from t_test where name='李四';
> when select  * from t_view_test   no  records return



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19971) TestRuntimeStats.testCleanup() is flaky

2018-06-28 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526452#comment-16526452
 ] 

Andrew Sherman commented on HIVE-19971:
---

Thanks [~kgyrtkirk] and [~pvary] for the review and push!

> TestRuntimeStats.testCleanup() is flaky
> ---
>
> Key: HIVE-19971
> URL: https://issues.apache.org/jira/browse/HIVE-19971
> Project: Hive
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-19971.1.patch, HIVE-19971.2.patch, 
> HIVE-19971.3.patch, HIVE-19971.4.patch
>
>
> This test is timing dependent and sometimes fails. [You can see that it 
> sometimes fails in otherwise clean 
> runs|https://issues.apache.org/jira/issues/?jql=text%20~%20%22TestRuntimeStats%22].
>   The test inserts a stat, sleeps for 2 seconds, inserts another stat, then 
> deletes stats that are older than 1 second. The test asserts that exactly one 
> stat is deleted. If the deletion is slow for some reason (perhaps a GC?) then 
> 2 stats will be deleted and the test will fail. The trouble is that the 1 
> second window is too small to work consistently.
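
A small self-contained sketch of the timing issue (hypothetical helper, not the 
real TestRuntimeStats code): with only a one-second margin between the retention 
window and the sleep, any pause before cleanup can delete both entries, so widening 
the gap relative to the window keeps the assertion stable.

{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class RetentionWindowSketch {

  // Removes entries older than maxAgeMs and returns how many were removed.
  static int deleteOlderThan(List<Long> createTimesMs, long maxAgeMs, long nowMs) {
    int removed = 0;
    for (Iterator<Long> it = createTimesMs.iterator(); it.hasNext(); ) {
      if (nowMs - it.next() > maxAgeMs) {
        it.remove();
        removed++;
      }
    }
    return removed;
  }

  public static void main(String[] args) throws InterruptedException {
    List<Long> stats = new ArrayList<>();
    stats.add(System.currentTimeMillis());     // "old" entry
    TimeUnit.SECONDS.sleep(4);                 // wide gap instead of a 2s sleep
    stats.add(System.currentTimeMillis());     // "new" entry

    // Retain anything younger than 2 seconds: only the first entry should go,
    // even if the cleanup itself is delayed by a second or two.
    int removed = deleteOlderThan(stats, TimeUnit.SECONDS.toMillis(2), System.currentTimeMillis());
    System.out.println("removed=" + removed + ", remaining=" + stats.size());
  }
}
{code}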



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19971) TestRuntimeStats.testCleanup() is flaky

2018-06-27 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-19971:
--
Attachment: HIVE-19971.4.patch

> TestRuntimeStats.testCleanup() is flaky
> ---
>
> Key: HIVE-19971
> URL: https://issues.apache.org/jira/browse/HIVE-19971
> Project: Hive
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-19971.1.patch, HIVE-19971.2.patch, 
> HIVE-19971.3.patch, HIVE-19971.4.patch
>
>
> This test is timing dependent and sometimes fails. [You can see that it 
> sometimes fails in otherwise clean 
> runs|https://issues.apache.org/jira/issues/?jql=text%20~%20%22TestRuntimeStats%22].
>   The test inserts a stat, sleeps for 2 seconds, inserts another stat, then 
> deletes stats that are older than 1 second. The test asserts that exactly one 
> stat is deleted. If the deletion is slow for some reason (perhaps a GC?) then 
> 2 stats will be deleted and the test will fail. The trouble is that the 1 
> second window is too small to work consistently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file

2018-06-26 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524512#comment-16524512
 ] 

Andrew Sherman commented on HIVE-18118:
---

HIVE-18118.11.patch is the same as HIVE-18118.12.patch, so the test failures 
from the latter are just further examples of flakiness.

> Explain Extended should indicate if a file being read is an EC file
> ---
>
> Key: HIVE-18118
> URL: https://issues.apache.org/jira/browse/HIVE-18118
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, 
> HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, 
> HIVE-18118.12.patch, HIVE-18118.2.patch, HIVE-18118.3.patch, 
> HIVE-18118.4.patch, HIVE-18118.5.patch, HIVE-18118.6.patch, 
> HIVE-18118.7.patch, HIVE-18118.8.patch, HIVE-18118.9.patch
>
>
> We already print out the files Hive will read in the explain extended 
> command, we just have to modify it to say whether or not it's an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file

2018-06-26 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524388#comment-16524388
 ] 

Andrew Sherman commented on HIVE-18118:
---

[~stakiar] This has a clean test run and is ready to go. Can you take a look 
and push if you agree? Thanks

> Explain Extended should indicate if a file being read is an EC file
> ---
>
> Key: HIVE-18118
> URL: https://issues.apache.org/jira/browse/HIVE-18118
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, 
> HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, 
> HIVE-18118.12.patch, HIVE-18118.2.patch, HIVE-18118.3.patch, 
> HIVE-18118.4.patch, HIVE-18118.5.patch, HIVE-18118.6.patch, 
> HIVE-18118.7.patch, HIVE-18118.8.patch, HIVE-18118.9.patch
>
>
> We already print out the files Hive will read in the explain extended 
> command, we just have to modify it to say whether or not it's an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20000) woooohoo20000ooooooo

2018-06-26 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524234#comment-16524234
 ] 

Andrew Sherman commented on HIVE-20000:
---

Looks similar to [HIVE-1]

> woooohoo20000ooooooo
> --------------------
>
> Key: HIVE-20000
> URL: https://issues.apache.org/jira/browse/HIVE-20000
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: All Versions
>Reporter: Prasanth Jayachandran
>Assignee: Hive QA
>Priority: Blocker
> Fix For: All Versions
>
>
> {code:java}
>    :::  :::  :::  ::: 
> :+::+::+:   :+::+:   :+::+:   :+::+:   :+:
>   +:+ +:+  :+:++:+  :+:++:+  :+:++:+  :+:+
> +#+   +#+ + +:++#+ + +:++#+ + +:++#+ + +:+
>   +#+ +#+#  +#++#+#  +#++#+#  +#++#+#  +#+
>  #+#  #+#   #+##+#   #+##+#   #+##+#   #+#
> ## ###  ###  ###  ### 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19404) Revise DDL Task Result Logging

2018-06-26 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-19404:
--
Attachment: HIVE-19404.1.patch
Status: Patch Available  (was: Open)

> Revise DDL Task Result Logging
> --
>
> Key: HIVE-19404
> URL: https://issues.apache.org/jira/browse/HIVE-19404
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Assignee: Andrew Sherman
>Priority: Trivial
>  Labels: noob
> Attachments: HIVE-19404.1.patch
>
>
> There is some logging in {{DDLTask}} that can be made better:
> {code}
> 2018-05-03 03:08:32,524 INFO  hive.ql.exec.DDLTask: 
> [HiveServer2-Background-Pool: Thread-101980]: results : 706
> {code}
> This logging should either be demoted to _debug_ level logging and/or 
> given additional context.
> {code}
> 2018-05-03 03:08:32,524 INFO  hive.ql.exec.DDLTask: 
> [HiveServer2-Background-Pool: Thread-101980]: Found 706 tables that match the 
> SHOW DATABASE statement
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20000) woooohoo20000ooooooo

2018-06-26 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524193#comment-16524193
 ] 

Andrew Sherman commented on HIVE-20000:
---

Well done Hive team for 20000 jiras!

> woooohoo20000ooooooo
> --------------------
>
> Key: HIVE-20000
> URL: https://issues.apache.org/jira/browse/HIVE-20000
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: All Versions
>Reporter: Prasanth Jayachandran
>Priority: Blocker
> Fix For: All Versions
>
>
> {code:java}
>    :::  :::  :::  ::: 
> :+::+::+:   :+::+:   :+::+:   :+::+:   :+:
>   +:+ +:+  :+:++:+  :+:++:+  :+:++:+  :+:+
> +#+   +#+ + +:++#+ + +:++#+ + +:++#+ + +:+
>   +#+ +#+#  +#++#+#  +#++#+#  +#++#+#  +#+
>  #+#  #+#   #+##+#   #+##+#   #+##+#   #+#
> ## ###  ###  ###  ### 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19971) TestRuntimeStats.testCleanup() is flaky

2018-06-26 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-19971:
--
Attachment: HIVE-19971.3.patch

> TestRuntimeStats.testCleanup() is flaky
> ---
>
> Key: HIVE-19971
> URL: https://issues.apache.org/jira/browse/HIVE-19971
> Project: Hive
>  Issue Type: Task
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-19971.1.patch, HIVE-19971.2.patch, 
> HIVE-19971.3.patch
>
>
> This test is timing dependent and sometimes fails. [You can see that it 
> sometimes fails in otherwise clean 
> runs|https://issues.apache.org/jira/issues/?jql=text%20~%20%22TestRuntimeStats%22].
>   The test inserts a stat, sleeps for 2 seconds, inserts another stat, then 
> deletes stats that are older than 1 second. The test asserts that exactly one 
> stat is deleted. If the deletion is slow for some reason (perhaps a GC?) then 
> 2 stats will be deleted and the test will fail. The trouble is that the 1 
> second window is too small to work consistently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19581) view do not support unicode characters well

2018-06-26 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-19581:
--
Attachment: HIVE-19581.5.patch

> view do not support unicode characters well
> ---
>
> Key: HIVE-19581
> URL: https://issues.apache.org/jira/browse/HIVE-19581
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: kai
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-19581.1.patch, HIVE-19581.2.patch, 
> HIVE-19581.3.patch, HIVE-19581.4.patch, HIVE-19581.5.patch, explain.png, 
> metastore.png
>
>
> create table t_test (name ,string) ;
>  insert into table t_test VALUES ('李四');
>  create view t_view_test as select * from t_test where name='李四';
> when select  * from t_view_test   no  records return



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file

2018-06-26 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18118:
--
Attachment: HIVE-18118.12.patch

> Explain Extended should indicate if a file being read is an EC file
> ---
>
> Key: HIVE-18118
> URL: https://issues.apache.org/jira/browse/HIVE-18118
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, 
> HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, 
> HIVE-18118.12.patch, HIVE-18118.2.patch, HIVE-18118.3.patch, 
> HIVE-18118.4.patch, HIVE-18118.5.patch, HIVE-18118.6.patch, 
> HIVE-18118.7.patch, HIVE-18118.8.patch, HIVE-18118.9.patch
>
>
> We already print out the files Hive will read in the explain extended 
> command, we just have to modify it to say whether or not it's an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file

2018-06-26 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18118:
--
Attachment: HIVE-18118.11.patch

> Explain Extended should indicate if a file being read is an EC file
> ---
>
> Key: HIVE-18118
> URL: https://issues.apache.org/jira/browse/HIVE-18118
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, 
> HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.11.patch, 
> HIVE-18118.2.patch, HIVE-18118.3.patch, HIVE-18118.4.patch, 
> HIVE-18118.5.patch, HIVE-18118.6.patch, HIVE-18118.7.patch, 
> HIVE-18118.8.patch, HIVE-18118.9.patch
>
>
> We already print out the files Hive will read in the explain extended 
> command, we just have to modify it to say whether or not it's an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file

2018-06-26 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18118:
--
Attachment: HIVE-18118.11.patch

> Explain Extended should indicate if a file being read is an EC file
> ---
>
> Key: HIVE-18118
> URL: https://issues.apache.org/jira/browse/HIVE-18118
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, 
> HIVE-18118.10.patch, HIVE-18118.11.patch, HIVE-18118.2.patch, 
> HIVE-18118.3.patch, HIVE-18118.4.patch, HIVE-18118.5.patch, 
> HIVE-18118.6.patch, HIVE-18118.7.patch, HIVE-18118.8.patch, HIVE-18118.9.patch
>
>
> We already print out the files Hive will read in the explain extended 
> command, we just have to modify it to say whether or not it's an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file

2018-06-26 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-18118:
--
Attachment: HIVE-18118.10.patch

> Explain Extended should indicate if a file being read is an EC file
> ---
>
> Key: HIVE-18118
> URL: https://issues.apache.org/jira/browse/HIVE-18118
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18118.1.patch, HIVE-18118.10.patch, 
> HIVE-18118.10.patch, HIVE-18118.2.patch, HIVE-18118.3.patch, 
> HIVE-18118.4.patch, HIVE-18118.5.patch, HIVE-18118.6.patch, 
> HIVE-18118.7.patch, HIVE-18118.8.patch, HIVE-18118.9.patch
>
>
> We already print out the files Hive will read in the explain extended 
> command, we just have to modify it to say whether or not it's an EC file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19581) view do not support unicode characters well

2018-06-26 Thread Andrew Sherman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-19581:
--
Attachment: HIVE-19581.4.patch

> view do not support unicode characters well
> ---
>
> Key: HIVE-19581
> URL: https://issues.apache.org/jira/browse/HIVE-19581
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: kai
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-19581.1.patch, HIVE-19581.2.patch, 
> HIVE-19581.3.patch, HIVE-19581.4.patch, explain.png, metastore.png
>
>
> create table t_test (name ,string) ;
>  insert into table t_test VALUES ('李四');
>  create view t_view_test as select * from t_test where name='李四';
> when select  * from t_view_test   no  records return



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

