[jira] [Created] (HIVE-14907) Hive Metastore should use repeatable-read consistency level

2016-10-06 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-14907:
-

 Summary: Hive Metastore should use repeatable-read consistency 
level
 Key: HIVE-14907
 URL: https://issues.apache.org/jira/browse/HIVE-14907
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 2.2.0
Reporter: Lenni Kuff


Currently HMS uses the "read-committed" consistency level, which is the default 
for DataNucleus. This can cause problems because each transaction can see updates 
committed by other transactions while it is running, so it is very difficult to 
reason about any code that reads multiple pieces of data.

Instead it should use "repeatable-read" consistency, which guarantees that a 
transaction only sees the state as of the beginning of the transaction plus any 
updates made within that transaction.
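A minimal sketch of the proposed change as a hive-site.xml fragment. The property name {{datanucleus.transactionIsolation}} is the standard DataNucleus connection property (supported values include read-committed and repeatable-read); whether HMS honors it end-to-end also depends on the backing RDBMS supporting that isolation level, so treat this as illustrative rather than the final patch:

```xml
<!-- Illustrative hive-site.xml fragment: ask DataNucleus for
     repeatable-read isolation on metastore transactions. -->
<property>
  <name>datanucleus.transactionIsolation</name>
  <value>repeatable-read</value>
</property>
```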



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-14358) Add metrics for number of queries executed for each execution engine (mr, spark, tez)

2016-07-27 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-14358:
-

 Summary: Add metrics for number of queries executed for each 
execution engine (mr, spark, tez)
 Key: HIVE-14358
 URL: https://issues.apache.org/jira/browse/HIVE-14358
 Project: Hive
  Issue Type: Task
  Components: HiveServer2
Affects Versions: 2.1.0
Reporter: Lenni Kuff


HiveServer2 currently has a metric for the total number of queries run since the 
last restart, but it would also be useful to have a query count per execution 
engine. This would improve supportability by allowing users to get a high-level 
understanding of what workloads have been running on the server. 





Re: [Announce] New Hive Committer - Mohit Sabharwal

2016-07-01 Thread Lenni Kuff
Congrats Mohit!

On Fri, Jul 1, 2016 at 3:27 PM, Peter Vary  wrote:

> Congratulations Mohit!
> On Jul 1, 2016 at 19:10, "Vihang Karajgaonkar" wrote:
>
> > Congratulations Mohit!
> >
> > > On Jul 1, 2016, at 10:05 AM, Chao Sun  wrote:
> > >
> > > Congratulations Mohit! Good job!
> > >
> > > Best,
> > > Chao
> > >
> > > On Fri, Jul 1, 2016 at 9:57 AM, Szehon Ho  > > wrote:
> > > On behalf of the Apache Hive PMC, I'm pleased to announce that Mohit
> > Sabharwal has been voted a committer on the Apache Hive project.
> > >
> > > Please join me in congratulating Mohit !
> > >
> > > Thanks,
> > > Szehon
> > >
> >
> >
>


Re: Review Request 47040: Monitor changes to FairScheduler.xml file and automatically update / validate jobs submitted to fair-scheduler

2016-05-13 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/47040/#review133140
---




shims/common/src/main/java/org/apache/hadoop/fs/FileWatchService.java (line 135)
<https://reviews.apache.org/r/47040/#comment197440>

Add a catch (Exception) so the executor doesn't die if there is an 
unchecked exception thrown for some reason.
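The suggestion above can be sketched with a small wrapper that keeps a thrown unchecked exception from cancelling the executor's schedule (a ScheduledExecutorService silently stops re-running a task once it throws). The class and method names here are illustrative, not anything in the Hive shims:

```java
// Sketch: wrap a periodic task so no exception can escape and kill the
// executor's schedule. Real code would log via the class logger.
public class SafeTask {
    // Returns a Runnable that swallows (and reports) any exception.
    public static Runnable guard(Runnable task) {
        return () -> {
            try {
                task.run();
            } catch (Exception e) {
                System.err.println("Watch task failed: " + e);
            }
        };
    }
}
```

With this, the watcher would schedule `SafeTask.guard(watchRunnable)` instead of the raw runnable.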



shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerShim.java
 (line 48)
<https://reviews.apache.org/r/47040/#comment197464>

Do we really need to cache this information? Comment on key/value for map.



shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerShim.java
 (line 51)
<https://reviews.apache.org/r/47040/#comment197442>

Why do we need to track the last used location? Can't we just read the 
location once and use it the whole time?

nit: don't need to initialize to null


- Lenni Kuff


On May 13, 2016, 3:26 p.m., Reuben Kuhnert wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/47040/
> ---
> 
> (Updated May 13, 2016, 3:26 p.m.)
> 
> 
> Review request for hive, Lenni Kuff, Mohit Sabharwal, and Sergio Pena.
> 
> 
> Bugs: HIVE-13696
> https://issues.apache.org/jira/browse/HIVE-13696
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Ensure that jobs sent to YARN with impersonation off are correctly routed to 
> the proper queue based on fair-scheduler.xml. Monitor this file for changes 
> and validate that jobs can only be sent to queues authorized for the user.
> 
> 
> Diffs
> -
> 
>   shims/common/src/main/java/org/apache/hadoop/fs/FileWatchService.java 
> PRE-CREATION 
>   shims/scheduler/pom.xml b36c12325c588cdb609c6200b1edef73a2f79552 
>   
> shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerQueueAllocator.java
>  PRE-CREATION 
>   
> shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerShim.java
>  372244dc3c989d2a3ae2eb2bfb8cd0a235705e18 
>   
> shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/QueueAllocator.java
>  PRE-CREATION 
>   
> shims/scheduler/src/test/java/org/apache/hadoop/hive/schshim/TestFairScheduler.java
>  PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/47040/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Reuben Kuhnert
> 
>



Re: Review Request 47040: Monitor changes to FairScheduler.xml file and automatically update / validate jobs submitted to fair-scheduler

2016-05-12 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/47040/#review132942
---




common/src/java/org/apache/hive/common/util/HiveStringUtils.java (line 323)
<https://reviews.apache.org/r/47040/#comment197204>

Guava has Strings.isNullOrEmpty(). If there is already a dependency on 
Guava, just use that.



ql/src/java/org/apache/hadoop/hive/ql/session/YarnFairScheduling.java (line 58)
<https://reviews.apache.org/r/47040/#comment197207>

It's unclear what validation is actually happening here.



ql/src/java/org/apache/hadoop/hive/ql/session/YarnFairScheduling.java (line 65)
<https://reviews.apache.org/r/47040/#comment197208>

Would we ever expect a user to hit this case? If not, it should be 
converted to an invariant / precondition check.



service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java (line 
126)
<https://reviews.apache.org/r/47040/#comment197212>

We should continue to use the flag hive.server2.map.fair.scheduler.queue to 
determine whether or not to enable the mapping functionality.



shims/common/src/main/java/org/apache/hadoop/fs/FileSystemWatcher.java (line 42)
<https://reviews.apache.org/r/47040/#comment197215>

You might consider simplifying this a bit since we:

a) don't care about watching multiple files; this should only support 
watching a single file. Can always extend later. 
b) don't care about multiple callbacks. Can always extend later. 
c) don't care about specific events

For reference, a simplified version exists in Impala. You might want to 
consider a similar approach:


https://github.com/cloudera/Impala/blob/cdh5-trunk/fe/src/main/java/com/cloudera/impala/util/FileWatchService.java
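A single-file watcher in that spirit can be quite small: poll the file's modification time and fire one callback when it changes. This is only a sketch of the simplification being suggested; the names (`FileChangeMonitor`, `poll`) are illustrative, not the Impala or Hive API:

```java
import java.io.File;

// Minimal single-file change detector: one file, one callback, no event types.
// poll() is intended to be called from a scheduled executor.
public class FileChangeMonitor {
    private final File file;
    private final Runnable onChange;
    private long lastModified;

    public FileChangeMonitor(File file, Runnable onChange) {
        this.file = file;
        this.onChange = onChange;
        this.lastModified = file.lastModified();
    }

    // Returns true if a change was detected and the callback fired.
    public boolean poll() {
        long current = file.lastModified();
        if (current != lastModified) {
            lastModified = current;
            onChange.run();
            return true;
        }
        return false;
    }
}
```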



shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerShim.java
 (line 135)
<https://reviews.apache.org/r/47040/#comment197218>

I don't think we need to support the case where the config file location 
has changed. Hive doesn't dynamically refresh the configs, so I'm not sure we 
would see this. For now let's keep this scoped to only detecting changes to the 
underlying file, using the same path for the course of the operation.


- Lenni Kuff


On May 12, 2016, 1:16 p.m., Reuben Kuhnert wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/47040/
> ---
> 
> (Updated May 12, 2016, 1:16 p.m.)
> 
> 
> Review request for hive, Lenni Kuff, Mohit Sabharwal, and Sergio Pena.
> 
> 
> Bugs: HIVE-13696
> https://issues.apache.org/jira/browse/HIVE-13696
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Ensure that jobs sent to YARN with impersonation off are correctly routed to 
> the proper queue based on fair-scheduler.xml. Monitor this file for changes 
> and validate that jobs can only be sent to queues authorized for the user.
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hive/common/util/HiveStringUtils.java 
> 6d28396893532302fbbd66eace53ae32b71848c3 
>   ql/src/java/org/apache/hadoop/hive/ql/Driver.java 
> 3fecc5c4ca2a06a031c0c4a711fb49e757c49062 
>   ql/src/java/org/apache/hadoop/hive/ql/session/YarnFairScheduling.java 
> PRE-CREATION 
>   service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 
> a0015ebc655931f241b28c53fbb94cfe172841b1 
>   shims/common/src/main/java/org/apache/hadoop/fs/FileSystemWatcher.java 
> PRE-CREATION 
>   shims/common/src/main/java/org/apache/hadoop/hive/shims/SchedulerShim.java 
> 63803b8b0752745bd2fedaccc5d100befd97093b 
>   shims/scheduler/pom.xml b36c12325c588cdb609c6200b1edef73a2f79552 
>   
> shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerQueueAllocator.java
>  PRE-CREATION 
>   
> shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/FairSchedulerShim.java
>  372244dc3c989d2a3ae2eb2bfb8cd0a235705e18 
>   
> shims/scheduler/src/main/java/org/apache/hadoop/hive/schshim/QueueAllocator.java
>  PRE-CREATION 
>   
> shims/scheduler/src/test/java/org/apache/hadoop/hive/schshim/TestFairScheduler.java
>  PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/47040/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Reuben Kuhnert
> 
>



[jira] [Created] (HIVE-12983) Provide a builtin function to get Hive version

2016-02-02 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12983:
-

 Summary: Provide a builtin function to get Hive version
 Key: HIVE-12983
 URL: https://issues.apache.org/jira/browse/HIVE-12983
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Affects Versions: 2.0.0
Reporter: Lenni Kuff
Assignee: Jason Dere


It would be nice to have a builtin function that would return the Hive version. 
 This would make it easier for users and tests to programmatically check the 
Hive version in a SQL script. It's also useful so a client can check the Hive 
version on a remote cluster.

For example:
{code}
beeline> SELECT version();

2.1.0-SNAPSHOT from 208ab352311a6cbbcd1f7fcd40964da2dbc6703d by lskuff source 
checksum 8e971cda755f6b3fb528c233c40eb50a
{code}





[jira] [Created] (HIVE-12971) Hive Support for Kudu

2016-01-30 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12971:
-

 Summary: Hive Support for Kudu
 Key: HIVE-12971
 URL: https://issues.apache.org/jira/browse/HIVE-12971
 Project: Hive
  Issue Type: New Feature
Affects Versions: 2.0.0
Reporter: Lenni Kuff
Assignee: Lenni Kuff


JIRA for tracking work related to Hive/Kudu integration.

It would be useful to allow Kudu data to be accessible via Hive. This would 
involve creating a Kudu SerDe/StorageHandler and implementing support for QUERY 
and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
Input/OutputFormats classes already exist. The work can be staged to support 
this functionality incrementally.





Re: Review Request 43008: HIVE-12952 : Show query sub-pages on webui

2016-01-29 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43008/#review117098
---



This is awesome! Assume this works for all execution engines? Left some 
comments.


ql/src/java/org/apache/hadoop/hive/ql/Driver.java (line 526)
<https://reviews.apache.org/r/43008/#comment178163>

would it be useful to add a helper function "isWebUIEnabled()" vs checking 
if the port != 0?



ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java (line 46)
<https://reviews.apache.org/r/43008/#comment178168>

should this be private? comment on what the key/values are and maybe rename 
to taskIdToTaskInfo?



ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java (line 49)
<https://reviews.apache.org/r/43008/#comment178165>

Comment that this is set once the task completes.



ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java (line 69)
<https://reviews.apache.org/r/43008/#comment178164>

I would expect updateTask() to take a Task object.

Is this ever called?



ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java (line 93)
<https://reviews.apache.org/r/43008/#comment178171>

Does this call (and a few other the others) need to be synchronized? Seems 
like their vals are all set once in the ctor.



ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java (line 126)
<https://reviews.apache.org/r/43008/#comment178178>

newTask -> addTask()



ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java (line 130)
<https://reviews.apache.org/r/43008/#comment178166>

Maybe setTaskCompleted()?



ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java (line 132)
<https://reviews.apache.org/r/43008/#comment178167>

When would taskInfo be null? returnval wouldn't get set and the query would 
still show up as running.



ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java (line 141)
<https://reviews.apache.org/r/43008/#comment178169>

just make this method synchronized?



ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java (line 169)
<https://reviews.apache.org/r/43008/#comment178180>

for the methods that accept maps, comment on the expected key/value for the 
parameters.



ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java (line 189)
<https://reviews.apache.org/r/43008/#comment178179>

clarify what "times" means. Consider having a single: 
setExecTimes(startTimes, endTimes) or something if you think both should always 
be set at the same time.



ql/src/java/org/apache/hadoop/hive/ql/QueryDisplay.java (line 219)
<https://reviews.apache.org/r/43008/#comment178170>

private static.
"Unknown" -> "UNKNOWN"



service/src/java/org/apache/hive/service/cli/operation/SQLOperationDisplay.java 
(line 26)
<https://reviews.apache.org/r/43008/#comment178172>

Comment on thread safety.



service/src/java/org/apache/hive/service/cli/operation/SQLOperationDisplay.java 
(line 53)
<https://reviews.apache.org/r/43008/#comment178175>

Is this called? Looks like a dupe of close()



service/src/java/org/apache/hive/service/cli/operation/SQLOperationDisplay.java 
(line 69)
<https://reviews.apache.org/r/43008/#comment178173>

Do the calls that return state copied in the ctor need to be synchronized?



service/src/java/org/apache/hive/service/cli/operation/SQLOperationDisplay.java 
(line 90)
<https://reviews.apache.org/r/43008/#comment178174>

check that state != null?



service/src/java/org/apache/hive/service/cli/operation/SQLOperationDisplay.java 
(line 98)
<https://reviews.apache.org/r/43008/#comment178176>

setClosed()?



service/src/java/org/apache/hive/service/cli/operation/SQLOperationDisplayCache.java
 (line 26)
<https://reviews.apache.org/r/43008/#comment178177>

Does this need to extend LinkedHashMap or can you just create an instance 
of one? Are removeEldestEntry and capacity used?
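The composition alternative being asked about can look like this: a bounded LRU cache that holds an anonymous LinkedHashMap instance rather than extending it in a named class. The class name and capacity here are illustrative, not the patch's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded LRU cache built by composing a LinkedHashMap instead of
// subclassing one in a dedicated named class.
public class OperationDisplayCache<K, V> {
    private final Map<K, V> cache;

    public OperationDisplayCache(final int capacity) {
        // accessOrder=true gives LRU eviction rather than insertion order.
        this.cache = new LinkedHashMap<K, V>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public synchronized void put(K key, V value) { cache.put(key, value); }
    public synchronized V get(K key) { return cache.get(key); }
    public synchronized int size() { return cache.size(); }
}
```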


- Lenni Kuff


On Jan. 30, 2016, 2:53 a.m., Szehon Ho wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43008/
> ---
> 
> (Updated Jan. 30, 2016, 2:53 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-12952
> https://issues.apache.org/jira/browse/HIVE-12952
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> This patch shows a query sub-page on WebUI, with detailed information of 
> query on different tabs:
> 
> 1.  Tab1- Base Info, i.e. user, query string, query id, begin time, end time, 
> execution engine, error (if any)
> 2.  Tab2- Query Plan
> 3.  Tab3- Stages (MR jobs), their

[jira] [Created] (HIVE-12603) Add config to block queries that scan > N number of partitions

2015-12-05 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12603:
-

 Summary: Add config to block queries that scan > N number of 
partitions 
 Key: HIVE-12603
 URL: https://issues.apache.org/jira/browse/HIVE-12603
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Query Planning
Affects Versions: 2.0.0
Reporter: Lenni Kuff


Strict mode is useful for blocking queries that load all partitions, but it's 
still possible to put significant load on the HMS with queries that scan a large 
number of partitions. It would be useful to add a config that provides a hard 
limit on the number of partitions scanned by a query.





Re: Review Request 40948: HIVE-12499 : Add HMS metrics for number of tables and partitions

2015-12-03 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/40948/#review108948
---



itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestMetaStoreMetrics.java
 (line 104)
<https://reviews.apache.org/r/40948/#comment168403>

maybe add a partitioned/unpartitioned table that is in a different database 
for extra test coverage



metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java (line 
414)
<https://reviews.apache.org/r/40948/#comment168401>

Would it make sense to use a cached-gauge for these operations? 

https://dropwizard.github.io/metrics/3.1.0/manual/core/#cached-gauges
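What a cached gauge buys is that the (possibly expensive) count query runs at most once per refresh interval, with callers in between getting the memoized value. Dropwizard's CachedGauge works this way; the stdlib version below is only a hand-rolled illustration of the idea, not the metrics API itself:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Illustration of a cached gauge: re-run the expensive loader at most once
// per refresh interval; otherwise return the last loaded value.
public class CachedValue<T> {
    private final Supplier<T> loader;
    private final long refreshNanos;
    private T value;
    private long loadedAt = Long.MIN_VALUE;

    public CachedValue(Supplier<T> loader, long refresh, TimeUnit unit) {
        this.loader = loader;
        this.refreshNanos = unit.toNanos(refresh);
    }

    public synchronized T get() {
        long now = System.nanoTime();
        if (loadedAt == Long.MIN_VALUE || now - loadedAt >= refreshNanos) {
            value = loader.get();   // e.g. a COUNT(*) against the metastore DB
            loadedAt = now;
        }
        return value;
    }
}
```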



metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java (line 
5777)
<https://reviews.apache.org/r/40948/#comment168402>

How expensive is this? What are your thoughts on doing this once at startup, 
then incrementing/decrementing as individual objects are added/removed?


- Lenni Kuff


On Dec. 4, 2015, 1:51 a.m., Szehon Ho wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/40948/
> ---
> 
> (Updated Dec. 4, 2015, 1:51 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-12499
> https://issues.apache.org/jira/browse/HIVE-12499
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Add separate timer thread that polls for count of database, table, partition 
> entries to publish as metrics, the period is configurable.  Delay in getting 
> exact number should be ok as this is for monitoring.
> 
> Implemented for HBase and DB metastores.
> 
> 
> Diffs
> -
> 
>   
> common/src/java/org/apache/hadoop/hive/common/metrics/common/MetricsConstant.java
>  95e2bcf 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 4d881ba 
>   common/src/test/org/apache/hadoop/hive/common/metrics/MetricsTestUtils.java 
> fd420f7 
>   
> itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestMetaStoreMetrics.java
>  f571c7c 
>   
> itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/hbase/TestHBaseMetastoreMetrics.java
>  PRE-CREATION 
>   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
> 00602e1 
>   metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 
> 1c0ab6d 
>   metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java 5b36b03 
>   
> metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseReadWrite.java 
> 2fb3e8f 
>   metastore/src/java/org/apache/hadoop/hive/metastore/hbase/HBaseStore.java 
> 98e6c75 
>   
> metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
>  9a1d159 
>   
> metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
>  8dde0af 
> 
> Diff: https://reviews.apache.org/r/40948/diff/
> 
> 
> Testing
> ---
> 
> Added unit tests for HBase and Db metastores.
> 
> 
> Thanks,
> 
> Szehon Ho
> 
>



Re: Review Request 40898: HIVE-12431: Support timeout for global compile lock

2015-12-03 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/40898/#review108847
---



common/src/java/org/apache/hadoop/hive/conf/HiveConf.java (line 1849)
<https://reviews.apache.org/r/40898/#comment168311>

Curious - does it make sense to only apply this to the "global" compile 
lock? Wouldn't this also be applicable for the session-level compile lock?



ql/src/java/org/apache/hadoop/hive/ql/Driver.java (line 139)
<https://reviews.apache.org/r/40898/#comment168313>

Is there a better way we can add these test hooks (create a mock driver 
or something)?



ql/src/java/org/apache/hadoop/hive/ql/Driver.java (line 1265)
<https://reviews.apache.org/r/40898/#comment168314>

Add a comment to this function to describe what it does and info about the 
return value.

Maybe rename to "tryAcquireCompileLock"?



ql/src/java/org/apache/hadoop/hive/ql/Driver.java (line 1277)
<https://reviews.apache.org/r/40898/#comment168316>

Should this be INFO level? It might be useful to log the query along with 
this message for debug purposes.



ql/src/java/org/apache/hadoop/hive/ql/Driver.java (line 1280)
<https://reviews.apache.org/r/40898/#comment168318>

Can we include the query text here?


- Lenni Kuff


On Dec. 3, 2015, 8:14 a.m., Mohit Sabharwal wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/40898/
> ---
> 
> (Updated Dec. 3, 2015, 8:14 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-12431
> https://issues.apache.org/jira/browse/HIVE-12431
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-12431: Support timeout for global compile lock
> 
> When global (HS2-wide) compile lock is configured, a long-compiling request 
> will block remaining sessions indefinitely. 
> 
> This patch allows the user to configure the maximum time a request will wait
> to acquire the compile lock. Note that this configuration does not apply when
> session scoped compile locking is configured.
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 
> 7f9607129eb1f5f43e8a728cf7d2a56c1ed5af49 
>   ql/src/java/org/apache/hadoop/hive/ql/Driver.java 
> 62b608cbf53c371d1743df40988daf85f76a0867 
>   ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java 
> 8a47605630066e39272f506c6e309b108b8455dd 
>   service/src/test/org/apache/hive/service/cli/CLIServiceTest.java 
> d90002bd16e46b5ce970d4c6c544a9c7605328d1 
> 
> Diff: https://reviews.apache.org/r/40898/diff/
> 
> 
> Testing
> ---
> 
> TestEmbeddedThriftBinaryCLIService#testGlobalCompileLockTimeout
> 
> 
> Thanks,
> 
> Mohit Sabharwal
> 
>



[jira] [Created] (HIVE-12549) Display execution engine in HS2 webui query view

2015-11-30 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12549:
-

 Summary: Display execution engine in HS2 webui query view
 Key: HIVE-12549
 URL: https://issues.apache.org/jira/browse/HIVE-12549
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 2.0.0
Reporter: Lenni Kuff


As part of the query info, it would be useful to show the execution engine for 
the running query.





[jira] [Created] (HIVE-12550) Cache last N completed queries in HS2 WebUI

2015-11-30 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12550:
-

 Summary: Cache last N completed queries in HS2 WebUI
 Key: HIVE-12550
 URL: https://issues.apache.org/jira/browse/HIVE-12550
 Project: Hive
  Issue Type: Sub-task
  Components: HiveServer2
Affects Versions: 2.0.0
Reporter: Lenni Kuff
Assignee: Vaibhav Gumashta


Along with the in-flight queries, it would be nice to see the last N 
(configurable?) completed queries since the last process restart (I don't think 
this information needs to be persisted anywhere). 





[jira] [Created] (HIVE-12431) Cancel queries after configurable timeout waiting on compilation

2015-11-16 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12431:
-

 Summary: Cancel queries after configurable timeout waiting on 
compilation
 Key: HIVE-12431
 URL: https://issues.apache.org/jira/browse/HIVE-12431
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2, Query Processor
Affects Versions: 1.2.1
Reporter: Lenni Kuff
Assignee: Vaibhav Gumashta


To help with HiveServer2 scalability, it would be useful to allow users to 
configure a timeout value for queries waiting to be compiled. If the timeout 
value is reached then the query would abort. One option to achieve this would 
be to update the compile lock to use a try-lock with the timeout value.
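The try-lock option mentioned above can be sketched with a plain ReentrantLock: compilation waits up to the configured timeout for the global lock and the query is aborted if it cannot acquire it. The class and method names here are illustrative, not the actual Driver code:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of a global compile lock with a bounded wait.
public class CompileLock {
    private final ReentrantLock lock = new ReentrantLock();

    // Returns true if the lock was acquired within the timeout; a false
    // return is where the query would be aborted.
    public boolean tryAcquire(long timeout, TimeUnit unit) throws InterruptedException {
        return lock.tryLock(timeout, unit);
    }

    public void release() {
        lock.unlock();
    }
}
```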





[jira] [Created] (HIVE-12414) ALTER TABLE UNSET SERDEPROPERTY does not work

2015-11-14 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12414:
-

 Summary: ALTER TABLE UNSET SERDEPROPERTY does not work
 Key: HIVE-12414
 URL: https://issues.apache.org/jira/browse/HIVE-12414
 Project: Hive
  Issue Type: Bug
  Components: Metastore, SQL
Affects Versions: 1.1.1
Reporter: Lenni Kuff
Assignee: Alan Gates


alter table tablename set tblproperties ('key'='value')  => works as expected
alter table tablename unset tblproperties ('key')  => works as expected

alter table tablename set serdeproperties ('key'='value')  => works as expected
alter table tablename unset serdeproperties ('key')  => not supported

FAILED: ParseException line 1:28 mismatched input 'serdeproperties' expecting 
TBLPROPERTIES near 'unset' in alter properties statement





[jira] [Created] (HIVE-12415) Add command to show all locks in Hive

2015-11-14 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12415:
-

 Summary: Add command to show all locks in Hive 
 Key: HIVE-12415
 URL: https://issues.apache.org/jira/browse/HIVE-12415
 Project: Hive
  Issue Type: New Feature
  Components: Locking, SQL
Affects Versions: 1.2.1
Reporter: Lenni Kuff


Customers often have lock conflicts in Hive. Currently we can use the {{show 
locks extended}} command to show the existing locks on an object (db, table, or 
partition) if it is known. However, some customers want to show all the locks 
across the Hive service for monitoring purposes. It would be useful to add a new 
statement in Hive to support this.
To reduce noise, it might be useful to add an option to only show locks that 
have existed > N minutes. 





[jira] [Created] (HIVE-12416) CTAS fails when location is directory whose parent doesn't exist

2015-11-14 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12416:
-

 Summary: CTAS fails when location is directory whose parent 
doesn't exist
 Key: HIVE-12416
 URL: https://issues.apache.org/jira/browse/HIVE-12416
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Lenni Kuff



Repro:
{code}
0: jdbc:hive2://localhost:1> create table src (i int);
No rows affected (0.04 seconds)
0: jdbc:hive2://localhost:1> insert into table src select 1;

-- Fails
0: jdbc:hive2://localhost:1> create table dest location 
'/user/hive/warehouse/dir1/dir2' as select * from src;

-- Without CTAS, the operation succeeds
0: jdbc:hive2://localhost:1> create table t2 (i int) location 
'/user/hive/warehouse/dir3/dir4';
0: jdbc:hive2://localhost:1> insert into table t2 select 1;
{code}


The failure is:


{code}
ERROR : Failed with exception Unable to move source 
hdfs://HOSTNAME:8020/user/hive/warehouse/.hive-staging_hive_2015-11-14_15-55-54_901_1808963268027473184-5/-ext-10001
 to destination /user/hive/warehouse/test/me
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source 
hdfs://HOSTNAME:8020/user/hive/warehouse/.hive-staging_hive_2015-11-14_15-55-54_901_1808963268027473184-5/-ext-10001
 to destination /user/hive/warehouse/test/me
at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2612)
at org.apache.hadoop.hive.ql.exec.MoveTask.moveFile(MoveTask.java:105)
at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:237)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1669)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1430)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1215)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1077)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1070)
at 
org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:162)
at 
org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
at 
org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:214)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at 
org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:226)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: File does not exist: 
/user/hive/warehouse/test
at 
org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1218)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1210)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1210)
at 
org.apache.hadoop.hive.shims.Hadoop23Shims.getFullFileStatus(Hadoop23Shims.java:728)
at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2556)
... 21 more

{code}





[jira] [Created] (HIVE-12406) HIVE-9500 introduced incompatible change to LazySimpleSerDe public interface

2015-11-12 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12406:
-

 Summary: HIVE-9500 introduced incompatible change to 
LazySimpleSerDe public interface
 Key: HIVE-12406
 URL: https://issues.apache.org/jira/browse/HIVE-12406
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 1.2.0
Reporter: Lenni Kuff
Priority: Blocker


In the process of fixing HIVE-9500, an incompatibility was introduced that will 
break 3rd party code that relies on LazySimpleSerDe. In HIVE-9500, the nested 
class SerDeParameters was removed and the method LazySimpleSerDe.initSerdeParams 
was also removed. They were replaced by a standalone class LazySerDeParameters.

Since this has already been released, I don't think we should revert the change 
since that would mean breaking compatibility again. Instead, the best approach 
would be to support both interfaces, if possible. 





[jira] [Created] (HIVE-12271) Add metrics around HS2 query execution and job submission for Hive

2015-10-27 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12271:
-

 Summary: Add metrics around HS2 query execution and job submission 
for Hive 
 Key: HIVE-12271
 URL: https://issues.apache.org/jira/browse/HIVE-12271
 Project: Hive
  Issue Type: Task
  Components: HiveServer2
Affects Versions: 1.2.1
Reporter: Lenni Kuff
Assignee: Vaibhav Gumashta


We should add more metrics around query execution. Specifically:

* Number of in-use worker threads
* Number of in-use async threads
* Number of queries waiting for compilation
* Stats for query planning / compilation time
* Stats for total job submission time
* Others?





[jira] [Created] (HIVE-12184) DESCRIBE of fully qualified table fails when db and table name match and non-default database is in use

2015-10-15 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12184:
-

 Summary: DESCRIBE of fully qualified table fails when db and table 
name match and non-default database is in use
 Key: HIVE-12184
 URL: https://issues.apache.org/jira/browse/HIVE-12184
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.1
Reporter: Lenni Kuff


DESCRIBE of fully qualified table fails when db and table name match and 
non-default database is in use.

Repro:

{code}
0: jdbc:hive2://localhost:1/default> create database foo;
No rows affected (0.116 seconds)
0: jdbc:hive2://localhost:1/default> create table foo.foo(i int);

0: jdbc:hive2://localhost:1/default> describe foo.foo;
+-----------+------------+----------+--+
| col_name  | data_type  | comment  |
+-----------+------------+----------+--+
| i         | int        |          |
+-----------+------------+----------+--+
1 row selected (0.049 seconds)

0: jdbc:hive2://localhost:1/default> use foo;

0: jdbc:hive2://localhost:1/default> describe foo.foo;
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Error in getting fields from 
serde.Invalid Field foo (state=08S01,code=1)
{code}





[jira] [Created] (HIVE-12182) ALTER TABLE PARTITION COLUMN does not set partition column comments

2015-10-14 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-12182:
-

 Summary: ALTER TABLE PARTITION COLUMN does not set partition 
column comments
 Key: HIVE-12182
 URL: https://issues.apache.org/jira/browse/HIVE-12182
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.2.1
Reporter: Lenni Kuff


ALTER TABLE PARTITION COLUMN does not set partition column comments. The syntax 
is accepted, but the COMMENT for the column is ignored.


{code}
0: jdbc:hive2://localhost:1/default> create table part_test(i int comment 
'HELLO') partitioned by (j int comment 'WORLD');
No rows affected (0.104 seconds)
0: jdbc:hive2://localhost:1/default> describe part_test;
+--+---+---+--+
| col_name |   data_type   |comment|
+--+---+---+--+
| i| int   | HELLO |
| j| int   | WORLD |
|  | NULL  | NULL  |
| # Partition Information  | NULL  | NULL  |
| # col_name   | data_type | comment   |
|  | NULL  | NULL  |
| j| int   | WORLD |
+--+---+---+--+
7 rows selected (0.109 seconds)
0: jdbc:hive2://localhost:1/default> alter table part_test partition column 
(j int comment 'WIDE');
No rows affected (0.121 seconds)
0: jdbc:hive2://localhost:1/default> describe part_test;
+--+---+---+--+
| col_name |   data_type   |comment|
+--+---+---+--+
| i| int   | HELLO |
| j| int   |   |
|  | NULL  | NULL  |
| # Partition Information  | NULL  | NULL  |
| # col_name   | data_type | comment   |
|  | NULL  | NULL  |
| j| int   |   |
+--+---+---+--+
7 rows selected (0.108 seconds)
{code}





Re: Review Request 37156: HIVE-7476 : CTAS does not work properly for s3

2015-08-05 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/37156/#review94354
---

Ship it!



ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java (line 2667)
https://reviews.apache.org/r/37156/#comment148929

Add brief comment on what this function is doing.



ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java (line 2672)
https://reviews.apache.org/r/37156/#comment148931

Might be clearer to move this line to the top of the function and just have 
something like:

// Check if FileSystems are different
if (srcFs.getClass().equals(destFs.getClass())) {
   return false;
}

// Check encryption zones ... 

Also - we might need to do a move not just when the file system type is 
different (that works for fixing S3) but also when the file system authority is 
different. For example, you could have a source on cluster 1 and a destination on 
cluster 2 where both are HDFS. Maybe add a TODO to think about this more?


- Lenni Kuff


On Aug. 6, 2015, 1:32 a.m., Szehon Ho wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/37156/
 ---
 
 (Updated Aug. 6, 2015, 1:32 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-7476
 https://issues.apache.org/jira/browse/HIVE-7476
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Currently, CTAS is broken when target is on S3 and source tables are not, or 
 more generally, where source and target tables are on different file systems. 
  
 
 Mainly the issues was that during the Move operation (last stage of CTAS), it 
 was using the destination FileSystem object to run the operations on both the 
 source/dest files, thus error when running on a source.  The fix is to use 
 the source FileSystem to run operations on the source file, and the dest 
 FileSystem to run operations on the dest File.
 
 
 Diffs
 -
 
   ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java 0a466e4 
   ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 5840802 
 
 Diff: https://reviews.apache.org/r/37156/diff/
 
 
 Testing
 ---
 
 Manually ran CTAS to create a table on S3.
 
 
 Thanks,
 
 Szehon Ho
 




[jira] [Created] (HIVE-11279) Hive should emit lineage information in json compact format

2015-07-16 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-11279:
-

 Summary: Hive should emit lineage information in json compact 
format
 Key: HIVE-11279
 URL: https://issues.apache.org/jira/browse/HIVE-11279
 Project: Hive
  Issue Type: Bug
  Components: Logging
Affects Versions: 1.3.0
Reporter: Lenni Kuff
Assignee: Lenni Kuff


Hive should emit lineage information in json compact format. Currently, Hive 
prints this in human readable format which makes it harder to consume (identify 
record boundaries) and makes the output files very long.





[jira] [Created] (HIVE-11174) Hive does not treat floating point signed zeros as equal (-0.0 should equal 0.0 according to IEEE floating point spec)

2015-07-02 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-11174:
-

 Summary: Hive does not treat floating point signed zeros as equal 
(-0.0 should equal 0.0 according to IEEE floating point spec) 
 Key: HIVE-11174
 URL: https://issues.apache.org/jira/browse/HIVE-11174
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 1.2.0
Reporter: Lenni Kuff
Priority: Critical


Hive does not treat floating point signed zeros as equal (-0.0 should equal 
0.0).  This is because Hive uses Double.compareTo(), which states:
0.0d is considered by this method to be greater than -0.0d

http://docs.oracle.com/javase/7/docs/api/java/lang/Double.html#compareTo(java.lang.Double)

The IEEE 754 floating point spec specifies that signed -0.0 and 0.0 should be 
treated as equal. From the Wikipedia article 
(https://en.wikipedia.org/wiki/Signed_zero#Comparisons):
bq. negative zero and positive zero should compare as equal with the usual 
(numerical) comparison operators


How to reproduce:
{code}

select 1 where 0.0=-0.0;
Returns no results.

select 1 where -0.0 < 0.0;
Returns 1
{code}
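The behavior described in this report comes straight from java.lang.Double: primitive `==` follows IEEE 754 and treats the zeros as equal, while Double.compare imposes a total order in which 0.0d is greater than -0.0d. A minimal demonstration (all real JDK APIs, nothing hypothetical):

```java
// Shows the two comparison semantics for signed zeros that HIVE-11174
// is about: IEEE 754 equality vs. java.lang.Double's total ordering.
public class SignedZero {

    // IEEE 754: primitive comparison treats -0.0 and 0.0 as equal.
    static boolean ieeeEqual(double a, double b) {
        return a == b;
    }

    // Total ordering used via Double.compare/compareTo: 0.0d > -0.0d.
    static int totalOrder(double a, double b) {
        return Double.compare(a, b);
    }

    public static void main(String[] args) {
        System.out.println(ieeeEqual(0.0, -0.0));   // true
        System.out.println(totalOrder(0.0, -0.0));  // 1 (0.0 ordered above -0.0)
    }
}
```

Any Hive code path that routes the comparison through Double.compareTo instead of a primitive `==` will therefore see the two zeros as unequal.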





Re: [ANNOUNCE] New Hive PMC Members - Chao Sun and Gopal Vijayaraghavan

2015-06-10 Thread Lenni Kuff
Congratulation!

On Wed, Jun 10, 2015 at 2:44 PM, Jimmy Xiang jxi...@cloudera.com wrote:

 Congrats!

 On Wed, Jun 10, 2015 at 2:43 PM, Hari Subramaniyan 
 hsubramani...@hortonworks.com wrote:

  Congrats Chao and Gopal!
  
  From: Lefty Leverenz leftylever...@gmail.com
  Sent: Wednesday, June 10, 2015 2:22 PM
  To: dev@hive.apache.org
  Subject: Re: [ANNOUNCE] New Hive PMC Members - Chao Sun and Gopal
  Vijayaraghavan
 
  Kudos, Chao and Gopal!  Thanks for all your contributions.
 
  -- Lefty
 
  On Wed, Jun 10, 2015 at 2:20 PM, Carl Steinbach c...@apache.org wrote:
 
   I am pleased to announce that Chao Sun and Gopal Vijayaraghavan have
 been
   elected to the Hive Project Management Committee. Please join me in
   congratulating Chao and Gopal!
  
   Thanks.
  
   - Carl
  
 



Re: Review Request 35181: HIVE-10944 : Fix HS2 for Metrics

2015-06-09 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35181/#review87169
---

Ship it!


- Lenni Kuff


On June 7, 2015, 11:47 p.m., Szehon Ho wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35181/
 ---
 
 (Updated June 7, 2015, 11:47 p.m.)
 
 
 Review request for hive and Sergey Shelukhin.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Eliminated the redundant conf checks and eliminate synchronization in the 
 code path, by making the static Metrics instance as a static volatile 
 variable.  Achieved this by removing the Metrics init() method and moved 
 directly to the constructor.
 
 Left some of the synchronization in the old LegacyMetrics the same.
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/common/JvmPauseMonitor.java c3949f2 
   common/src/java/org/apache/hadoop/hive/common/metrics/LegacyMetrics.java 
 14f7afb 
   common/src/java/org/apache/hadoop/hive/common/metrics/common/Metrics.java 
 13a5336 
   
 common/src/java/org/apache/hadoop/hive/common/metrics/common/MetricsFactory.java
  12a309d 
   
 common/src/java/org/apache/hadoop/hive/common/metrics/metrics2/CodahaleMetrics.java
  e59da99 
   
 common/src/test/org/apache/hadoop/hive/common/metrics/TestLegacyMetrics.java 
 c14c7ee 
   
 common/src/test/org/apache/hadoop/hive/common/metrics/metrics2/TestCodahaleMetrics.java
  8749349 
   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
 85a734c 
   service/src/java/org/apache/hive/service/server/HiveServer2.java 7820ed5 
 
 Diff: https://reviews.apache.org/r/35181/diff/
 
 
 Testing
 ---
 
 Ran affected tests, ran HS2 with and without metrics enabled.
 
 
 Thanks,
 
 Szehon Ho
 




Re: Review Request 35181: HIVE-10944 : Fix HS2 for Metrics

2015-06-09 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35181/#review87168
---



service/src/java/org/apache/hive/service/server/HiveServer2.java
https://reviews.apache.org/r/35181/#comment139486

nit: do you need the getInstance() == null check here? It seems like 
MetricsFactory.init() handles this.


- Lenni Kuff


On June 7, 2015, 11:47 p.m., Szehon Ho wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35181/
 ---
 
 (Updated June 7, 2015, 11:47 p.m.)
 
 
 Review request for hive and Sergey Shelukhin.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Eliminated the redundant conf checks and eliminate synchronization in the 
 code path, by making the static Metrics instance as a static volatile 
 variable.  Achieved this by removing the Metrics init() method and moved 
 directly to the constructor.
 
 Left some of the synchronization in the old LegacyMetrics the same.
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/common/JvmPauseMonitor.java c3949f2 
   common/src/java/org/apache/hadoop/hive/common/metrics/LegacyMetrics.java 
 14f7afb 
   common/src/java/org/apache/hadoop/hive/common/metrics/common/Metrics.java 
 13a5336 
   
 common/src/java/org/apache/hadoop/hive/common/metrics/common/MetricsFactory.java
  12a309d 
   
 common/src/java/org/apache/hadoop/hive/common/metrics/metrics2/CodahaleMetrics.java
  e59da99 
   
 common/src/test/org/apache/hadoop/hive/common/metrics/TestLegacyMetrics.java 
 c14c7ee 
   
 common/src/test/org/apache/hadoop/hive/common/metrics/metrics2/TestCodahaleMetrics.java
  8749349 
   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
 85a734c 
   service/src/java/org/apache/hive/service/server/HiveServer2.java 7820ed5 
 
 Diff: https://reviews.apache.org/r/35181/diff/
 
 
 Testing
 ---
 
 Ran affected tests, ran HS2 with and without metrics enabled.
 
 
 Thanks,
 
 Szehon Ho
 




Re: Review Request 35181: HIVE-10944 : Fix HS2 for Metrics

2015-06-07 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35181/#review86952
---



common/src/java/org/apache/hadoop/hive/common/metrics/common/Metrics.java
https://reviews.apache.org/r/35181/#comment139167

How so? I think implementations do need to be thread safe. For example, how 
does MetricsFactory help with a thread safe implementation of incCounter?



common/src/java/org/apache/hadoop/hive/common/metrics/common/MetricsFactory.java
https://reviews.apache.org/r/35181/#comment139164

comment why this is volatile.



common/src/java/org/apache/hadoop/hive/common/metrics/common/MetricsFactory.java
https://reviews.apache.org/r/35181/#comment139165

might want to consider renaming this to something shorter like get() or 
getInstance(). Also mention that this returns null if init() hasn't been 
called.



common/src/java/org/apache/hadoop/hive/common/metrics/common/MetricsFactory.java
https://reviews.apache.org/r/35181/#comment139166

while you are here, consider renaming deInit() -> shutdown() or close()?


- Lenni Kuff
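The init-once pattern debated in these comments can be sketched as follows. The class and method names echo the review (Metrics, MetricsFactory, init, getInstance) but this is an illustrative model, not the actual Hive classes:

```java
// Sketch of a static volatile singleton published by an explicit init():
// init() is synchronized so only one instance is ever installed, while
// getInstance() reads the volatile field without locking on the hot path.
public class MetricsFactory {

    interface Metrics {
        void incrementCounter(String name);
    }

    // volatile so that a fully constructed instance is safely published
    // to all reader threads without synchronization in getInstance().
    private static volatile Metrics instance;

    static synchronized void init(Metrics impl) {
        if (instance == null) {
            instance = impl;
        }
    }

    /** Returns null if init() has not been called yet. */
    static Metrics getInstance() {
        return instance;
    }

    public static void main(String[] args) {
        init(name -> System.out.println("inc: " + name));
        getInstance().incrementCounter("submitted_queries");
    }
}
```

This is why the caller-side `getInstance() == null` check flagged in the review is redundant when init() already guards against double initialization.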


On June 6, 2015, 10:18 p.m., Szehon Ho wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35181/
 ---
 
 (Updated June 6, 2015, 10:18 p.m.)
 
 
 Review request for hive and Sergey Shelukhin.
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Eliminated the redundant conf checks and eliminate synchronization in the 
 code path, by making the static Metrics instance as a static volatile 
 variable.  Achieved this by removing the Metrics init() method and moved 
 directly to the constructor.
 
 Left some of the synchronization in the old LegacyMetrics the same.
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/common/JvmPauseMonitor.java c3949f2 
   common/src/java/org/apache/hadoop/hive/common/metrics/LegacyMetrics.java 
 14f7afb 
   common/src/java/org/apache/hadoop/hive/common/metrics/common/Metrics.java 
 13a5336 
   
 common/src/java/org/apache/hadoop/hive/common/metrics/common/MetricsFactory.java
  12a309d 
   
 common/src/java/org/apache/hadoop/hive/common/metrics/metrics2/CodahaleMetrics.java
  e59da99 
   metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
 85a734c 
   service/src/java/org/apache/hive/service/server/HiveServer2.java 7820ed5 
 
 Diff: https://reviews.apache.org/r/35181/diff/
 
 
 Testing
 ---
 
 Ran affected tests, ran HS2 with and without metrics enabled.
 
 
 Thanks,
 
 Szehon Ho
 




Re: Review Request 34393: HIVE-10427 - collect_list() and collect_set() should accept struct types as argument

2015-05-21 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34393/#review84669
---

Ship it!


lgtm - I assume this works with decimal (with scale/precision) and 
char/varchar? Maybe add one test case for those?

- Lenni Kuff


On May 21, 2015, 6:44 a.m., Chao Sun wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34393/
 ---
 
 (Updated May 21, 2015, 6:44 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-10427
 https://issues.apache.org/jira/browse/HIVE-10427
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Currently for collect_list() and collect_set(), only primitive types are 
 supported. This patch adds support for struct, list and map types as well.
 
 It turned out that all I needed was to loosen the type checking.
 
 
 Diffs
 -
 
   data/files/customers.txt PRE-CREATION 
   data/files/nested_orders.txt PRE-CREATION 
   data/files/orders.txt PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCollectList.java 
 536c4a7 
   
 ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCollectSet.java 
 6dc424a 
   
 ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFMkCollectionEvaluator.java
  efcc8f5 
   ql/src/test/queries/clientnegative/udaf_collect_set_unsupported.q 
 PRE-CREATION 
   ql/src/test/queries/clientpositive/udaf_collect_set_2.q PRE-CREATION 
   ql/src/test/results/clientnegative/udaf_collect_set_unsupported.q.out 
 PRE-CREATION 
   ql/src/test/results/clientpositive/udaf_collect_set_2.q.out PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/34393/diff/
 
 
 Testing
 ---
 
 All but one test (which seems unrelated) are passing.
 I also added a test: udaf_collect_list_set_2.q
 
 
 Thanks,
 
 Chao Sun
 




Re: [ANNOUNCE] New Hive Committer - Chaoyu Tang

2015-05-20 Thread Lenni Kuff
Congrats Chaoyu! Well deserved.

On Wed, May 20, 2015 at 4:07 PM, Sushanth Sowmyan khorg...@gmail.com
wrote:

 Congrats Chaoyu, welcome aboard! :)
 On May 20, 2015 3:45 PM, Vaibhav Gumashta vgumas...@hortonworks.com
 wrote:

  Congratulations!
 
  ‹Vaibhav
 
  On 5/20/15, 3:40 PM, Jimmy Xiang jxi...@cloudera.com wrote:
 
  Congrats!!
  
  On Wed, May 20, 2015 at 3:29 PM, Carl Steinbach c...@apache.org wrote:
  
   The Apache Hive PMC has voted to make Chaoyu Tang a committer on the
  Apache
   Hive Project.
  
   Please join me in congratulating Chaoyu!
  
   Thanks.
  
   - Carl
  
 
 



Re: Review Request 34393: HIVE-10427 - collect_list() and collect_set() should accept struct types as argument

2015-05-18 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34393/#review84260
---



ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCollectSet.java
https://reviews.apache.org/r/34393/#comment135437

should we also support arrays and unions?



ql/src/test/queries/clientpositive/udaf_collect_list_set_nested.q
https://reviews.apache.org/r/34393/#comment135438

add a negative test to validate unsupported types?


- Lenni Kuff


On May 19, 2015, 4:47 a.m., Chao Sun wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34393/
 ---
 
 (Updated May 19, 2015, 4:47 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-10427
 https://issues.apache.org/jira/browse/HIVE-10427
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Currently for collect_list() and collect_set(), only primitive types are 
 supported. This patch adds support for struct and map types as well.
 
 It turned out that all I needed was to loosen the type checking.
 
 
 Diffs
 -
 
   data/files/customers.txt PRE-CREATION 
   data/files/nested_orders.txt PRE-CREATION 
   data/files/orders.txt PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCollectList.java 
 536c4a7 
   
 ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCollectSet.java 
 6dc424a 
   
 ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFMkCollectionEvaluator.java
  efcc8f5 
   ql/src/test/queries/clientpositive/udaf_collect_list_set_nested.q 
 PRE-CREATION 
   ql/src/test/results/clientpositive/udaf_collect_list_set_nested.q.out 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/34393/diff/
 
 
 Testing
 ---
 
 All but one test (which seems unrelated) are passing.
 I also added a test: udaf_collect_list_set_nested.q
 
 
 Thanks,
 
 Chao Sun
 




[jira] [Created] (HIVE-10593) Support creating table from a file schema: CREATE TABLE ... LIKE file_format '/path/to/file'

2015-05-04 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-10593:
-

 Summary: Support creating table from a file schema: CREATE TABLE 
... LIKE file_format '/path/to/file'
 Key: HIVE-10593
 URL: https://issues.apache.org/jira/browse/HIVE-10593
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 1.2.0
Reporter: Lenni Kuff


It would be useful if Hive could infer the column definitions in a create table 
statement from the underlying data file. For example:

CREATE TABLE new_tbl LIKE PARQUET '/path/to/file.parquet';

If the targeted file is not the specified file format, the statement should 
fail analysis. In addition to PARQUET, it would be useful to support other 
formats such as AVRO, JSON, and ORC.











Re: Review Request 33816: HIVE-10597: Relative path doesn't work with CREATE TABLE LOCATION 'relative/path'

2015-05-04 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33816/#review82466
---



metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java
https://reviews.apache.org/r/33816/#comment133201

Update comment to explain behavior of absolute vs relative paths.



metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java
https://reviews.apache.org/r/33816/#comment133204

I don't think we want to handle relative paths for this particular bug -  
instead throw an error. Since this function is called in multiple places, be 
sure nothing else is expected to work with relative paths.
Please add test case(s) as well


- Lenni Kuff


On May 4, 2015, 7:05 p.m., Reuben Kuhnert wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33816/
 ---
 
 (Updated May 4, 2015, 7:05 p.m.)
 
 
 Review request for hive and Sergio Pena.
 
 
 Bugs: HIVE-10597
 https://issues.apache.org/jira/browse/HIVE-10597
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Allow warehouse to work with relative locations.
 
 
 Diffs
 -
 
   metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java 
 25119abf97382df7c0615edbaff29ba20624a137 
 
 Diff: https://reviews.apache.org/r/33816/diff/
 
 
 Testing
 ---
 
 Tested locally
 
 
 Thanks,
 
 Reuben Kuhnert
 




Re: Review Request 31497: HIVE-9800 Create scripts to do metastore upgrade tests on Jenkins

2015-02-26 Thread Lenni Kuff

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/31497/#review74420
---



dev-support/tests/metastore-upgrade/jenkins-upgrade-test.sh
https://reviews.apache.org/r/31497/#comment120991

AFAIK, Jenkins does not run as root. Will this still work?


- Lenni Kuff


On Feb. 26, 2015, 11:07 p.m., Sergio Pena wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/31497/
 ---
 
 (Updated Feb. 26, 2015, 11:07 p.m.)
 
 
 Review request for hive and Brock Noland.
 
 
 Bugs: HIVE-9800
 https://issues.apache.org/jira/browse/HIVE-9800
 
 
 Repository: hive-git
 
 
 Description
 ---
 
This script downloads a metastore upgrade script and runs all the upgrade 
tests on a specific DB server.
Another Jenkins script is used to create the LXC containers in which these 
tests run.
 
 
 Diffs
 -
 
   dev-support/tests/metastore-upgrade/jenkins-upgrade-test.sh PRE-CREATION 
   dev-support/tests/metastore-upgrade/metastore-upgrade-test.sh PRE-CREATION 
   dev-support/tests/metastore-upgrade/servers/mysql/execute.sh PRE-CREATION 
   dev-support/tests/metastore-upgrade/servers/mysql/prepare.sh PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/31497/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Sergio Pena
 




[jira] [Created] (HIVE-7641) INSERT ... SELECT with no source table leads to NPE

2014-08-06 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-7641:


 Summary: INSERT ... SELECT with no source table leads to NPE
 Key: HIVE-7641
 URL: https://issues.apache.org/jira/browse/HIVE-7641
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.1
Reporter: Lenni Kuff


When no source table is provided for an INSERT statement Hive fails with NPE. 

{code}
0: jdbc:hive2://localhost:11050/default> create table test_tbl(i int);
No rows affected (0.333 seconds)
0: jdbc:hive2://localhost:11050/default> insert into table test_tbl select 1;
Error: Error while compiling statement: FAILED: NullPointerException null 
(state=42000,code=4)

-- Get a NPE even when using incorrect syntax (no TABLE keyword)
0: jdbc:hive2://localhost:11050/default> insert into test_tbl select 1;
Error: Error while compiling statement: FAILED: NullPointerException null 
(state=42000,code=4)

-- Works when a source table is provided
0: jdbc:hive2://localhost:11050/default> insert into table test_tbl select 1 
from foo;
No rows affected (5.751 seconds)
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6590) Hive does not work properly with boolean partition columns (wrong results and inserts to incorrect HDFS path)

2014-03-07 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-6590:


 Summary: Hive does not work properly with boolean partition 
columns (wrong results and inserts to incorrect HDFS path)
 Key: HIVE-6590
 URL: https://issues.apache.org/jira/browse/HIVE-6590
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema
Affects Versions: 0.10.0
Reporter: Lenni Kuff


Hive does not work properly with boolean partition columns. Queries return 
wrong results and also insert to incorrect HDFS paths.

{code}
create table bool_part(int_col int) partitioned by(bool_col boolean);
# This works, creating 3 unique partitions!
ALTER TABLE bool_table ADD PARTITION (bool_col=FALSE);
ALTER TABLE bool_table ADD PARTITION (bool_col=false);
ALTER TABLE bool_table ADD PARTITION (bool_col=False);
{code}

The first problem is that Hive cannot filter on a bool partition key column. 
select * from bool_part returns the correct results, but if you apply a 
filter on the bool partition key column hive won't return any results.

The second problem is that Hive seems to just call toString() on the boolean 
literal value. This means you can end up with multiple partitions (FALSE, 
false, FaLSE, etc) all mapping to the same logical value 'FALSE'. For example, 
you can add three partitions in Hive for the same logical value false by doing:
ALTER TABLE bool_table ADD PARTITION (bool_col=FALSE) -> 
/test-warehouse/bool_table/bool_col=FALSE/
ALTER TABLE bool_table ADD PARTITION (bool_col=false) -> 
/test-warehouse/bool_table/bool_col=false/
ALTER TABLE bool_table ADD PARTITION (bool_col=False) -> 
/test-warehouse/bool_table/bool_col=False/
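One possible fix direction (a sketch only, not the actual patch) is to canonicalize boolean partition values before building the HDFS path, so every case variant maps to one partition directory:

```java
// Sketch: canonicalize boolean partition values so FALSE/false/False all
// resolve to a single spelling (and thus a single partition directory),
// instead of calling toString() on the raw user input.
public class BoolPartition {

    static String canonicalPartitionValue(String raw) {
        // Boolean.parseBoolean accepts "true" in any case and maps every
        // other string to false; fine for illustration, though a real fix
        // would reject values that are neither "true" nor "false".
        return Boolean.toString(Boolean.parseBoolean(raw.trim()));
    }

    public static void main(String[] args) {
        System.out.println(canonicalPartitionValue("FALSE"));  // false
        System.out.println(canonicalPartitionValue("False"));  // false
        System.out.println(canonicalPartitionValue("true"));   // true
    }
}
```

With this normalization, all three ALTER TABLE statements above would resolve to the single path /test-warehouse/bool_table/bool_col=false/.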





[jira] [Created] (HIVE-5968) Assign (and expose to the client) unique object IDs for each metastore object

2013-12-05 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-5968:


 Summary: Assign (and expose to the client) unique object IDs for 
each metastore object
 Key: HIVE-5968
 URL: https://issues.apache.org/jira/browse/HIVE-5968
 Project: Hive
  Issue Type: New Feature
  Components: Database/Schema, Metastore
Affects Versions: 0.12.0
Reporter: Lenni Kuff


The Hive Metastore should assign a unique ID to every metastore object - 
Database, Table, Partition,  etc. These IDs should also be exposed on each of 
the corresponding thrift structs. There are many cases where this would be 
useful, one simple case is the following:
hive1> CREATE TABLE Foo;
hive2> DROP TABLE Foo;
hive3> CREATE TABLE Foo;

Without object IDs, there is no good way for the client to differentiate the 
table created in step 1 from the table created in step 3. In general, working 
with object IDs is much more robust (especially with concurrent operations) 
than working with object names alone. With an ID the client can call 
get_table(object_id) and ensure the table they get back is exactly the one 
they expect.
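A toy model of why stable IDs disambiguate drop/recreate cycles (purely illustrative — this is not the metastore's schema or API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Toy store: every created table gets a fresh monotonically increasing ID,
// so a handle to the table "Foo" from before a drop/recreate is detectably
// stale even though the name is identical.
public class IdStore {
    private static final AtomicLong NEXT_ID = new AtomicLong(1);
    private final Map<String, Long> byName = new HashMap<>();

    long createTable(String name) {
        long id = NEXT_ID.getAndIncrement();
        byName.put(name, id);
        return id;
    }

    void dropTable(String name) { byName.remove(name); }

    /** Returns null if no table with this name currently exists. */
    Long idOf(String name) { return byName.get(name); }

    public static void main(String[] args) {
        IdStore store = new IdStore();
        long first = store.createTable("Foo");
        store.dropTable("Foo");
        long second = store.createTable("Foo");
        // Same name, different object: the IDs differ.
        System.out.println(first != second);
    }
}
```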









--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5457) Concurrent calls to getTable() result in: MetaException: org.datanucleus.exceptions.NucleusException: Invalid index 1 for DataStoreMapping. NucleusException: Invalid inde

2013-10-05 Thread Lenni Kuff (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lenni Kuff updated HIVE-5457:
-

Description: 
Concurrent calls to getTable() result in: MetaException: 
org.datanucleus.exceptions.NucleusException: Invalid index 1 for 
DataStoreMapping.  NucleusException: Invalid index 1 for DataStoreMapping

This happens when using a Hive Metastore Service directly connecting to the 
backend metastore db. I have been able to hit this with as few as 2 concurrent 
calls. When I updated my app to serialize all calls to getTable(), this problem 
was resolved.

Stack Trace:

{code}
Caused by: org.datanucleus.exceptions.NucleusException: Invalid index 1 for 
DataStoreMapping.
at 
org.datanucleus.store.mapped.mapping.PersistableMapping.getDatastoreMapping(PersistableMapping.java:307)
at 
org.datanucleus.store.rdbms.scostore.RDBMSElementContainerStoreSpecialization.getSizeStmt(RDBMSElementContainerStoreSpecialization.java:407)
at 
org.datanucleus.store.rdbms.scostore.RDBMSElementContainerStoreSpecialization.getSize(RDBMSElementContainerStoreSpecialization.java:257)
at 
org.datanucleus.store.rdbms.scostore.RDBMSJoinListStoreSpecialization.getSize(RDBMSJoinListStoreSpecialization.java:46)
at 
org.datanucleus.store.mapped.scostore.ElementContainerStore.size(ElementContainerStore.java:440)
at org.datanucleus.sco.backed.List.size(List.java:557) 
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToSkewedValues(ObjectStore.java:1029)
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1007)
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1017)
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToTable(ObjectStore.java:872)
at 
org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:743)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
at $Proxy6.getTable(Unknown Source)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1349)
{code}

  was:
Concurrent calls to getTable() result in: MetaException: 
org.datanucleus.exceptions.NucleusException: Invalid index 1 for 
DataStoreMapping.  NucleusException: Invalid index 1 for DataStoreMapping

This happens when using a Hive Metastore Service directly connecting to the 
backend metastore db. I have been able to hit this with as few as 2 concurrent 
calls. 

Stack Trace:

{code}
Caused by: org.datanucleus.exceptions.NucleusException: Invalid index 1 for 
DataStoreMapping.
at 
org.datanucleus.store.mapped.mapping.PersistableMapping.getDatastoreMapping(PersistableMapping.java:307)
at 
org.datanucleus.store.rdbms.scostore.RDBMSElementContainerStoreSpecialization.getSizeStmt(RDBMSElementContainerStoreSpecialization.java:407)
at 
org.datanucleus.store.rdbms.scostore.RDBMSElementContainerStoreSpecialization.getSize(RDBMSElementContainerStoreSpecialization.java:257)
at 
org.datanucleus.store.rdbms.scostore.RDBMSJoinListStoreSpecialization.getSize(RDBMSJoinListStoreSpecialization.java:46)
at 
org.datanucleus.store.mapped.scostore.ElementContainerStore.size(ElementContainerStore.java:440)
at org.datanucleus.sco.backed.List.size(List.java:557) 
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToSkewedValues(ObjectStore.java:1029)
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1007)
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1017)
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToTable(ObjectStore.java:872)
at 
org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:743)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
at $Proxy6.getTable(Unknown Source)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1349)
{code}


 Concurrent calls to getTable() result in: MetaException: 
 org.datanucleus.exceptions.NucleusException: Invalid index 1 for 
 DataStoreMapping

[jira] [Created] (HIVE-5457) Concurrent calls to getTable() result in: MetaException: org.datanucleus.exceptions.NucleusException: Invalid index 1 for DataStoreMapping. NucleusException: Invalid inde

2013-10-05 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-5457:


 Summary: Concurrent calls to getTable() result in: MetaException: 
org.datanucleus.exceptions.NucleusException: Invalid index 1 for 
DataStoreMapping.  NucleusException: Invalid index 1 for DataStoreMapping
 Key: HIVE-5457
 URL: https://issues.apache.org/jira/browse/HIVE-5457
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Lenni Kuff
Priority: Critical


Concurrent calls to getTable() result in: MetaException: 
org.datanucleus.exceptions.NucleusException: Invalid index 1 for 
DataStoreMapping.  NucleusException: Invalid index 1 for DataStoreMapping

This happens when a Hive Metastore Service connects directly to the backend 
metastore db. I have been able to hit this with as few as 2 concurrent 
calls. 
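The usual client-side workaround for this kind of race is to serialize access to the fragile layer. Below is a minimal Python sketch (not Hive code; `NonThreadSafeClient` is a hypothetical stand-in for the DataNucleus-backed store, and the error string is borrowed from the report only for flavor) of wrapping calls in a lock:

```python
import threading

class NonThreadSafeClient:
    """Hypothetical stand-in for a backend (such as a DataNucleus-backed
    store) that errors out if two callers are inside it at once."""
    def __init__(self):
        self._in_use = False

    def get_table(self, name):
        if self._in_use:
            # Mimics the failure mode in the report above.
            raise RuntimeError("Invalid index 1 for DataStoreMapping")
        self._in_use = True
        try:
            return {"name": name}
        finally:
            self._in_use = False

class SerializedClient:
    """Wrapper that serializes calls with a lock, so only one thread
    talks to the fragile backend at a time."""
    def __init__(self, delegate):
        self._delegate = delegate
        self._lock = threading.Lock()

    def get_table(self, name):
        with self._lock:
            return self._delegate.get_table(name)
```

This trades throughput for safety; the real fix belongs in the metastore's persistence layer, but the pattern is a practical stopgap for callers.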

Stack Trace:

{code}
Caused by: org.datanucleus.exceptions.NucleusException: Invalid index 1 for 
DataStoreMapping.
at 
org.datanucleus.store.mapped.mapping.PersistableMapping.getDatastoreMapping(PersistableMapping.java:307)
at 
org.datanucleus.store.rdbms.scostore.RDBMSElementContainerStoreSpecialization.getSizeStmt(RDBMSElementContainerStoreSpecialization.java:407)
at 
org.datanucleus.store.rdbms.scostore.RDBMSElementContainerStoreSpecialization.getSize(RDBMSElementContainerStoreSpecialization.java:257)
at 
org.datanucleus.store.rdbms.scostore.RDBMSJoinListStoreSpecialization.getSize(RDBMSJoinListStoreSpecialization.java:46)
at 
org.datanucleus.store.mapped.scostore.ElementContainerStore.size(ElementContainerStore.java:440)
at org.datanucleus.sco.backed.List.size(List.java:557) 
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToSkewedValues(ObjectStore.java:1029)
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1007)
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1017)
at 
org.apache.hadoop.hive.metastore.ObjectStore.convertToTable(ObjectStore.java:872)
at 
org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:743)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
at $Proxy6.getTable(Unknown Source)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1349)
{code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-4118) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails when using fully qualified table name

2013-03-05 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-4118:


 Summary: ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails 
when using fully qualified table name
 Key: HIVE-4118
 URL: https://issues.apache.org/jira/browse/HIVE-4118
 Project: Hive
  Issue Type: Bug
  Components: Statistics
Affects Versions: 0.10.0
Reporter: Lenni Kuff


Computing column stats fails when using a fully qualified table name. Issuing a 
USE db statement first and referencing the table by its bare name succeeds.
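For illustration, the name resolution the stats task apparently skips is straightforward. A minimal Python sketch (`resolve_table` is a hypothetical helper, not Hive's actual code):

```python
def resolve_table(name, current_db="default"):
    """Resolve an optionally qualified 'db.table' name into (db, table).
    A bare table name falls back to the current database."""
    if "." in name:
        db, _, table = name.partition(".")
        return db, table
    return current_db, name
```

The bug report suggests the column-stats persistence path looks up the table by the unsplit string, so `somedb.some_table` is not found.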


{code}
hive -e "ANALYZE TABLE somedb.some_table COMPUTE STATISTICS FOR COLUMNS int_col"

org.apache.hadoop.hive.ql.metadata.HiveException: 
NoSuchObjectException(message:Table somedb.some_table for which stats is 
gathered doesn't exist.)
at 
org.apache.hadoop.hive.ql.metadata.Hive.updateTableColumnStatistics(Hive.java:2201)
at 
org.apache.hadoop.hive.ql.exec.ColumnStatsTask.persistTableStats(ColumnStatsTask.java:325)
at 
org.apache.hadoop.hive.ql.exec.ColumnStatsTask.execute(ColumnStatsTask.java:336)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:138)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1352)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1138)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:951)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
at $Proxy9.updateTableColumnStatistics(Unknown Source)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.update_table_column_statistics(HiveMetaStore.java:3171)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
at $Proxy10.update_table_column_statistics(Unknown Source)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.updateTableColumnStatistics(HiveMetaStoreClient.java:973)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:74)
at $Proxy11.updateTableColumnStatistics(Unknown Source)
at 
org.apache.hadoop.hive.ql.metadata.Hive.updateTableColumnStatistics(Hive.java:2198)
... 18 more

{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4119) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with NPE if the table is empty

2013-03-05 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-4119:


 Summary: ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails 
with NPE if the table is empty
 Key: HIVE-4119
 URL: https://issues.apache.org/jira/browse/HIVE-4119
 Project: Hive
  Issue Type: Bug
  Components: Statistics
Affects Versions: 0.10.0
Reporter: Lenni Kuff


ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with NPE if the table is 
empty
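A sketch of the fix direction: the aggregator's iterate() should treat a NULL input as a no-op instead of dereferencing it, which is where the NPE below originates. Minimal Python illustration (the class and its fields are hypothetical, not the actual GenericUDAFComputeStats code):

```python
class LongStatsEvaluator:
    """Toy column-stats aggregator whose iterate() treats NULL (None)
    as a no-op, so an empty table yields empty stats instead of an NPE."""
    def __init__(self):
        self.count = 0
        self.min = None
        self.max = None

    def iterate(self, value):
        if value is None:  # NULL cell or empty input: nothing to fold in
            return
        self.count += 1
        self.min = value if self.min is None else min(self.min, value)
        self.max = value if self.max is None else max(self.max, value)

    def terminate(self):
        return {"count": self.count, "min": self.min, "max": self.max}
```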


{code}
hive -e "create table empty_table (i int); select compute_stats(i, 16) from empty_table"


java.lang.NullPointerException
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector.get(WritableIntObjectInspector.java:35)
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getInt(PrimitiveObjectInspectorUtils.java:535)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFComputeStats$GenericUDAFLongStatsEvaluator.iterate(GenericUDAFComputeStats.java:477)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:139)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1099)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:558)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:193)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:231)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1132)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:558)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:193)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:231)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector.get(WritableIntObjectInspector.java:35)
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getInt(PrimitiveObjectInspectorUtils.java:535)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFComputeStats$GenericUDAFLongStatsEvaluator.iterate(GenericUDAFComputeStats.java:477)
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:139)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1099)
... 15 more
{code}

[jira] [Created] (HIVE-4122) Queries fail if timestamp data not in expected format

2013-03-05 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-4122:


 Summary: Queries fail if timestamp data not in expected format
 Key: HIVE-4122
 URL: https://issues.apache.org/jira/browse/HIVE-4122
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
Reporter: Lenni Kuff


Queries will fail if timestamp data is not in the expected format. The expected 
behavior is to return NULL for these invalid values.
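A sketch of the expected lenient behavior in Python (the accepted formats and the function are illustrative assumptions, not Hive's actual timestamp parser):

```python
from datetime import datetime

# Formats accepted for illustration; Hive's actual parser differs.
_FORMATS = ("%Y-%m-%d %H:%M:%S", "%Y-%m-%d")

def parse_timestamp(text):
    """Return a datetime for well-formed input, or None (NULL) for
    malformed values such as '1999-10-10 90:10:10'."""
    for fmt in _FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    return None
```

The point is that a malformed cell degrades to NULL at the row level rather than failing the whole query.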

{code}
# Not all timestamps in correct format:
echo "1999-10-10
1999-10-10 90:10:10
-01-01 00:00:00" > table.data
hive -e "create table timestamp_tbl (t timestamp)"
hadoop fs -put ./table.data HIVE_WAREHOUSE_DIR/timestamp_tbl/
hive -e "select t from timestamp_tbl"

Execution failed with exit status: 2
13/03/05 09:47:05 ERROR exec.Task: Execution failed with exit status: 2
Obtaining error information
13/03/05 09:47:05 ERROR exec.Task: Obtaining error information

Task failed!
Task ID:
  Stage-1

Logs:

13/03/05 09:47:05 ERROR exec.Task: 
Task failed!
Task ID:
  Stage-1

Logs:
{code}




[jira] [Updated] (HIVE-4119) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with NPE if the table is empty

2013-03-05 Thread Lenni Kuff (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lenni Kuff updated HIVE-4119:
-

Priority: Critical  (was: Major)

This is especially bad because, when executed via a Hive Server, it causes the 
service process to crash. 

 ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with NPE if the table 
 is empty
 -

 Key: HIVE-4119
 URL: https://issues.apache.org/jira/browse/HIVE-4119
 Project: Hive
  Issue Type: Bug
  Components: Statistics
Affects Versions: 0.10.0
Reporter: Lenni Kuff
Priority: Critical


[jira] [Created] (HIVE-3460) Simultaneous attempts to initialize the Hive Metastore can fail due to error Table 'metastore_DELETEME1347565995856' doesn't exist

2012-09-13 Thread Lenni Kuff (JIRA)
Lenni Kuff created HIVE-3460:


 Summary: Simultaneous attempts to initialize the Hive Metastore 
can fail due to error Table 'metastore_DELETEME1347565995856' doesn't exist
 Key: HIVE-3460
 URL: https://issues.apache.org/jira/browse/HIVE-3460
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.8.1
Reporter: Lenni Kuff


If multiple clients attempt to access/initialize the Hive Metastore at the same 
time, they can fail with the error Table 'metastore_DELETEME1347565995856' 
doesn't exist. A common scenario is a central MySQL metastore with clients 
from multiple machines reading from it at the same time, i.e., outside of a 
standalone Hive Server install. 

I believe this is not actually a Hive bug, but a DataNucleus issue.
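Until the underlying race is fixed, a common client-side mitigation is to retry initialization with backoff, since a later attempt sees the fully created schema. A minimal Python sketch (`init_with_retry` and the choice of `RuntimeError` are assumptions for illustration, not part of Hive):

```python
import time

def init_with_retry(init_fn, attempts=5, base_delay=0.01):
    """Call init_fn(), retrying with exponential backoff on failure.
    A transient schema-creation race usually resolves on a later try."""
    for attempt in range(attempts):
        try:
            return init_fn()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # exhausted: surface the last error
            time.sleep(base_delay * (2 ** attempt))
```

In practice one would also narrow the caught exception to the specific "table doesn't exist" failure rather than retrying blindly.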


{code}
Exception in thread main javax.jdo.JDODataStoreException: Exception thrown 
obtaining schema column information from datastore
at 
org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:313)
at 
org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4154)
at 
org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411)
at 
org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312)
at 
org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225)
at 
org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1628)
at 
org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499)
at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243)
at 
org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:389)
at 
org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:408)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:485)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$300(HiveMetaStore.java:141)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$5.run(HiveMetaStore.java:507)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$5.run(HiveMetaStore.java:504)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:504)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:266)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:228)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:114)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:98)


NestedThrowablesStackTrace:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 
'metastore_DELETEME1347565995856' doesn't exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
at com.mysql.jdbc.Util.getInstance(Util.java:381)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1030)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:956)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3558)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3490)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1959)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2109)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2637)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2566)
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1464)
at com.mysql.jdbc.DatabaseMetaData$2.forEach(DatabaseMetaData.java:2472)
at com.mysql.jdbc.IterateBlock.doForAll(IterateBlock.java:50)
at 
com.mysql.jdbc.DatabaseMetaData.getColumns(DatabaseMetaData.java:2346)
at 
org.apache.commons.dbcp.DelegatingDatabaseMetaData.getColumns(DelegatingDatabaseMetaData.java:218)
at 
org.datanucleus.store.rdbms.adapter.DatabaseAdapter.getColumns(DatabaseAdapter.java:1460