[jira] Updated: (HIVE-1517) ability to select across a database
[ https://issues.apache.org/jira/browse/HIVE-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siying Dong updated HIVE-1517:
------------------------------
    Attachment: HIVE-1517.7.patch

I fixed multiple tests. Two tests always fail:
  TestHBaseCliDriver
  TestHBaseMinimrCliDriver
They fail even without the patch, so I guess it is unrelated.

> ability to select across a database
> -----------------------------------
>
>                 Key: HIVE-1517
>                 URL: https://issues.apache.org/jira/browse/HIVE-1517
>             Project: Hive
>          Issue Type: Improvement
>          Components: Query Processor
>            Reporter: Namit Jain
>            Assignee: Siying Dong
>            Priority: Blocker
>             Fix For: 0.7.0
>         Attachments: HIVE-1517.1.patch.txt, HIVE-1517.2.patch.txt, HIVE-1517.3.patch, HIVE-1517.4.patch, HIVE-1517.5.patch, HIVE-1517.6.patch, HIVE-1517.7.patch
>
> After https://issues.apache.org/jira/browse/HIVE-675, we need a way to be able to select across a database for this feature to be useful.
> For eg:
>   use db1
>   create table foo();
>   use db2
>   select .. from db1.foo.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
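The cross-database qualification the issue asks for can be sketched in HiveQL roughly as follows; the table and column names are illustrative, not taken from the patch:

```sql
-- Create a table in one database, then reference it from another
-- by qualifying the table name with its database (db_name.table_name).
CREATE DATABASE IF NOT EXISTS db1;
USE db1;
CREATE TABLE foo (bar INT);

USE db2;
-- With HIVE-1517 applied, this qualified reference resolves to db1's table
-- even though the current database is db2:
SELECT f.bar FROM db1.foo f;
```

Before this change, switching databases with USE was the only way to reach another database's tables, which made cross-database joins impractical.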
[jira] Updated: (HIVE-1982) Group by key shall not duplicate with distinct key
[ https://issues.apache.org/jira/browse/HIVE-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Xu updated HIVE-1982:
-------------------------
    Attachment: HIVE-1982-3.patch

Thanks, Yongqiang. I traced the problem again, and it turned out to be quite simple: the error was introduced by the RowResolver not being handled correctly in ReduceSinkOperator. Please review the patch again, thanks.

> Group by key shall not duplicate with distinct key
> --------------------------------------------------
>
>                 Key: HIVE-1982
>                 URL: https://issues.apache.org/jira/browse/HIVE-1982
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.6.0, 0.7.0
>            Reporter: Ted Xu
>            Priority: Minor
>         Attachments: HIVE-1982-3.patch, HIVE-1982-v2.patch, HIVE-1982.patch
>
> Group by key shall not duplicate with distinct key, or there will be an error because RowResolver and ColumnInfo didn't match.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
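A query of the shape this issue describes would look roughly like the following (illustrative only, written against the standard `src` test table used throughout Hive's test suite):

```sql
-- The group-by key (key) also appears as a DISTINCT aggregation key.
-- Before this fix, the duplicated column could make the RowResolver
-- and ColumnInfo lists fall out of sync in the reduce sink, causing
-- a compile-time error.
SELECT key, COUNT(DISTINCT key)
FROM src
GROUP BY key;
```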
[jira] Updated: (HIVE-1982) Group by key shall not duplicate with distinct key
[ https://issues.apache.org/jira/browse/HIVE-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Xu updated HIVE-1982:
-------------------------
    Status: Patch Available  (was: Open)

> Group by key shall not duplicate with distinct key
> --------------------------------------------------
>
>                 Key: HIVE-1982
>                 URL: https://issues.apache.org/jira/browse/HIVE-1982
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.6.0, 0.7.0
>            Reporter: Ted Xu
>            Priority: Minor
>         Attachments: HIVE-1982-3.patch, HIVE-1982-v2.patch, HIVE-1982.patch
>
> Group by key shall not duplicate with distinct key, or there will be an error because RowResolver and ColumnInfo didn't match.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
Build failed in Hudson: Hive-trunk-h0.20 #566
See https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/566/changes

Changes:

[cws] HIVE-1902 create script for the metastore upgrade due to HIVE-78 (Arvind Prabhakar via cws)

--
[...truncated 14118 lines...]
[junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
[junit] 	at junit.framework.TestCase.runTest(TestCase.java:154)
[junit] 	at junit.framework.TestCase.runBare(TestCase.java:127)
[junit] 	at junit.framework.TestResult$1.protect(TestResult.java:106)
[junit] 	at junit.framework.TestResult.runProtected(TestResult.java:124)
[junit] 	at junit.framework.TestResult.run(TestResult.java:109)
[junit] 	at junit.framework.TestCase.run(TestCase.java:118)
[junit] 	at junit.framework.TestSuite.runTest(TestSuite.java:208)
[junit] 	at junit.framework.TestSuite.run(TestSuite.java:203)
[junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
[junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
[junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)
[junit] Begin query: bad_exec_hooks.q
[junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I lastUpdateTime -I lastAccessTime -I [Oo]wner -I CreateTime -I LastAccessTime -I Location -I transient_lastDdlTime -I last_modified_ -I java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I Caused by: -I LOCK_QUERYID: -I grantTime -I [.][.][.] [0-9]* more -I USING 'java -cp https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/ql/test/logs/clientnegative/bad_exec_hooks.q.out https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/ql/src/test/results/clientnegative/bad_exec_hooks.q.out
[junit] Done query: bad_exec_hooks.q
[junit] Begin query: bad_indextype.q
[junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I lastUpdateTime -I lastAccessTime -I [Oo]wner -I CreateTime -I LastAccessTime -I Location -I transient_lastDdlTime -I last_modified_ -I java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I Caused by: -I LOCK_QUERYID: -I grantTime -I [.][.][.] [0-9]* more -I USING 'java -cp https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/ql/test/logs/clientnegative/bad_indextype.q.out https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/ql/src/test/results/clientnegative/bad_indextype.q.out
[junit] Done query: bad_indextype.q
[junit] Begin query: bad_sample_clause.q
[junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I lastUpdateTime -I lastAccessTime -I [Oo]wner -I CreateTime -I LastAccessTime -I Location -I transient_lastDdlTime -I last_modified_ -I java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I Caused by: -I LOCK_QUERYID: -I grantTime -I [.][.][.] [0-9]* more -I USING 'java -cp https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/ql/test/logs/clientnegative/bad_sample_clause.q.out https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/ql/src/test/results/clientnegative/bad_sample_clause.q.out
[junit] Done query: bad_sample_clause.q
[junit] Begin query: clusterbydistributeby.q
[junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I lastUpdateTime -I lastAccessTime -I [Oo]wner -I CreateTime -I LastAccessTime -I Location -I transient_lastDdlTime -I last_modified_ -I java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I Caused by: -I LOCK_QUERYID: -I grantTime -I [.][.][.] [0-9]* more -I USING 'java -cp https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/ql/test/logs/clientnegative/clusterbydistributeby.q.out https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/ql/src/test/results/clientnegative/clusterbydistributeby.q.out
[junit] Done query: clusterbydistributeby.q
[junit] Begin query: clusterbyorderby.q
[junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I lastUpdateTime -I lastAccessTime -I [Oo]wner -I CreateTime -I LastAccessTime -I Location -I transient_lastDdlTime -I last_modified_ -I java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I Caused by: -I LOCK_QUERYID: -I grantTime -I [.][.][.] [0-9]* more -I USING 'java -cp https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/ql/test/logs/clientnegative/clusterbyorderby.q.out https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/ql/src/test/results/clientnegative/clusterbyorderby.q.out
[junit] Done query: clusterbyorderby.q
[junit] Begin query: clusterbysortby.q
[junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I
Build failed in Hudson: Hive-0.7.0-h0.20 #8
See https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/8/changes

Changes:

[cws] HIVE-BUILD update release notes with JIRA information
[cws] HIVE-1902 create script for the metastore upgrade due to HIVE-78 (Arvind Prabhakar via cws)

--
[...truncated 25196 lines...]
[junit] Hive history file=https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/build/service/tmp/hive_job_log_hudson_201102180443_1887561617.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: CREATETABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: CREATETABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: load data local inpath 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt' into table testhivedrivertable
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table testhivedrivertable
[junit] POSTHOOK: query: load data local inpath 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt' into table testhivedrivertable
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: select count(1) as cnt from testhivedrivertable
[junit] PREHOOK: type: QUERY
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: file:/tmp/hudson/hive_2011-02-18_04-43-31_856_2070010335440252363/-mr-1
[junit] Total MapReduce jobs = 1
[junit] Launching Job 1 out of 1
[junit] Number of reduce tasks determined at compile time: 1
[junit] In order to change the average load for a reducer (in bytes):
[junit]   set hive.exec.reducers.bytes.per.reducer=number
[junit] In order to limit the maximum number of reducers:
[junit]   set hive.exec.reducers.max=number
[junit] In order to set a constant number of reducers:
[junit]   set mapred.reduce.tasks=number
[junit] Job running in-process (local Hadoop)
[junit] 2011-02-18 04:43:34,900 null map = 100%, reduce = 100%
[junit] Ended Job = job_local_0001
[junit] POSTHOOK: query: select count(1) as cnt from testhivedrivertable
[junit] POSTHOOK: type: QUERY
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: file:/tmp/hudson/hive_2011-02-18_04-43-31_856_2070010335440252363/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history file=https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/build/service/tmp/hive_job_log_hudson_201102180443_688502033.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: CREATETABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: CREATETABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: load data local inpath 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt' into table testhivedrivertable
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table testhivedrivertable
[junit] POSTHOOK: query: load data local inpath 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt' into table testhivedrivertable
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: select * from testhivedrivertable limit 10
[junit] PREHOOK: type: QUERY
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: file:/tmp/hudson/hive_2011-02-18_04-43-36_376_2611858059428258463/-mr-1
[junit] POSTHOOK: query: select * from testhivedrivertable limit 10
[junit] POSTHOOK: type: QUERY
[junit] POSTHOOK: Input: default@testhivedrivertable
Trying to make sense of build/ql/tmp/hive.log
It seems that each time the tests are run, Hive makes tens or hundreds of...

2011-02-18 07:46:50,243 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(1120)) - Session 0x12e39732268 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
2011-02-18 07:46:50,243 DEBUG zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1167)) - Ignoring exception during shutdown input
java.nio.channels.ClosedChannelException
	at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
	at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
	at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1164)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1129)
2011-02-18 07:46:50,243 DEBUG zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1174)) - Ignoring exception during shutdown output
java.nio.channels.ClosedChannelException
	at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
	at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
	at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1171)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1129)

Turning off locking in a .q test does not seem to help. I guess these are non-destructive, but what is going on with this?

Edward
Build failed in Hudson: Hive-trunk-h0.20 #567
See https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/567/

--
[...truncated 25406 lines...]
[junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket21.txt' INTO TABLE srcbucket2
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket21.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket21.txt' INTO TABLE srcbucket2
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@srcbucket2
[junit] OK
[junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket22.txt' INTO TABLE srcbucket2
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket22.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket22.txt' INTO TABLE srcbucket2
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@srcbucket2
[junit] OK
[junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket23.txt' INTO TABLE srcbucket2
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket23.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket23.txt' INTO TABLE srcbucket2
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@srcbucket2
[junit] OK
[junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt' INTO TABLE src
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table src
[junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt' INTO TABLE src
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@src
[junit] OK
[junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv3.txt' INTO TABLE src1
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv3.txt
[junit] Loading data to table src1
[junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv3.txt' INTO TABLE src1
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@src1
[junit] OK
[junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.seq' INTO TABLE src_sequencefile
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.seq
[junit] Loading data to table src_sequencefile
[junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.seq' INTO TABLE src_sequencefile
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@src_sequencefile
[junit] OK
[junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/complex.seq' INTO TABLE src_thrift
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/complex.seq
[junit] Loading data to table src_thrift
[junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/complex.seq' INTO TABLE src_thrift
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@src_thrift
[junit] OK
[junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/json.txt' INTO TABLE src_json
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/json.txt
[junit] Loading data to table src_json
[junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/json.txt' INTO TABLE src_json
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@src_json
Build failed in Hudson: Hive-0.7.0-h0.20 #9
See https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/9/

--
[...truncated 25198 lines...]
[junit] Hive history file=https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/build/service/tmp/hive_job_log_hudson_201102181146_1084301659.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: CREATETABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: CREATETABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: load data local inpath 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt' into table testhivedrivertable
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table testhivedrivertable
[junit] POSTHOOK: query: load data local inpath 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt' into table testhivedrivertable
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: select count(1) as cnt from testhivedrivertable
[junit] PREHOOK: type: QUERY
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: file:/tmp/hudson/hive_2011-02-18_11-46-53_318_2813286569990644593/-mr-1
[junit] Total MapReduce jobs = 1
[junit] Launching Job 1 out of 1
[junit] Number of reduce tasks determined at compile time: 1
[junit] In order to change the average load for a reducer (in bytes):
[junit]   set hive.exec.reducers.bytes.per.reducer=number
[junit] In order to limit the maximum number of reducers:
[junit]   set hive.exec.reducers.max=number
[junit] In order to set a constant number of reducers:
[junit]   set mapred.reduce.tasks=number
[junit] Job running in-process (local Hadoop)
[junit] 2011-02-18 11:46:56,488 null map = 100%, reduce = 100%
[junit] Ended Job = job_local_0001
[junit] POSTHOOK: query: select count(1) as cnt from testhivedrivertable
[junit] POSTHOOK: type: QUERY
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: file:/tmp/hudson/hive_2011-02-18_11-46-53_318_2813286569990644593/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history file=https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/build/service/tmp/hive_job_log_hudson_201102181146_45296863.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: CREATETABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: CREATETABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: load data local inpath 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt' into table testhivedrivertable
[junit] PREHOOK: type: LOAD
[junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table testhivedrivertable
[junit] POSTHOOK: query: load data local inpath 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt' into table testhivedrivertable
[junit] POSTHOOK: type: LOAD
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: select * from testhivedrivertable limit 10
[junit] PREHOOK: type: QUERY
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: file:/tmp/hudson/hive_2011-02-18_11-46-58_040_4794126788245045458/-mr-1
[junit] POSTHOOK: query: select * from testhivedrivertable limit 10
[junit] POSTHOOK: type: QUERY
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: file:/tmp/hudson/hive_2011-02-18_11-46-58_040_4794126788245045458/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[jira] Commented: (HIVE-1941) support explicit view partitioning
[ https://issues.apache.org/jira/browse/HIVE-1941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996593#comment-12996593 ]

Paul Yang commented on HIVE-1941:
---------------------------------
@John - Yes, that's what I meant. I'll take a look at the whole patch as well.

> support explicit view partitioning
> ----------------------------------
>
>                 Key: HIVE-1941
>                 URL: https://issues.apache.org/jira/browse/HIVE-1941
>             Project: Hive
>          Issue Type: New Feature
>          Components: Query Processor
>    Affects Versions: 0.6.0
>            Reporter: John Sichi
>            Assignee: John Sichi
>         Attachments: HIVE-1941.1.patch, HIVE-1941.2.patch, HIVE-1941.3.patch, HIVE-1941.4.patch
>
> Allow creation of a view with an explicit partitioning definition, and support ALTER VIEW ADD/DROP PARTITION for instantiating partitions.
> For more information, see http://wiki.apache.org/hadoop/Hive/PartitionedViews

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (HIVE-1517) ability to select across a database
[ https://issues.apache.org/jira/browse/HIVE-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siying Dong updated HIVE-1517:
------------------------------
    Attachment: HIVE-1517.8.patch

Sorry for so many updates. I fixed lineage. The lineage info seems to have some issues with TABLESAMPLE even now; I didn't change this behavior and kept it as it is today. Yongqiang verified that the HBase tests also fail on his machine.

> ability to select across a database
> -----------------------------------
>
>                 Key: HIVE-1517
>                 URL: https://issues.apache.org/jira/browse/HIVE-1517
>             Project: Hive
>          Issue Type: Improvement
>          Components: Query Processor
>            Reporter: Namit Jain
>            Assignee: Siying Dong
>            Priority: Blocker
>             Fix For: 0.7.0
>         Attachments: HIVE-1517.1.patch.txt, HIVE-1517.2.patch.txt, HIVE-1517.3.patch, HIVE-1517.4.patch, HIVE-1517.5.patch, HIVE-1517.6.patch, HIVE-1517.7.patch, HIVE-1517.8.patch
>
> After https://issues.apache.org/jira/browse/HIVE-675, we need a way to be able to select across a database for this feature to be useful.
> For eg:
>   use db1
>   create table foo();
>   use db2
>   select .. from db1.foo.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
Re: Hive 0.7.0 Release Candidate 0
Wondering if https://issues.apache.org/jira/browse/HIVE-1995 should also be considered for 0.7?

Ashutosh

On Thu, Feb 17, 2011 at 23:57, Carl Steinbach <c...@cloudera.com> wrote:
> http://people.apache.org/~cws/hive-0.7.0-candidate-0/
>
> Please vote.
[jira] Commented: (HIVE-1517) ability to select across a database
[ https://issues.apache.org/jira/browse/HIVE-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996616#comment-12996616 ]

John Sichi commented on HIVE-1517:
----------------------------------
HBase tests are passing in Hudson. Siying and Yongqiang, can you guys try rebooting your dev boxes? There were issues with port conflicts with FB dev boxes before, so maybe you still have some lingering processes/ports.

> ability to select across a database
> -----------------------------------
>
>                 Key: HIVE-1517
>                 URL: https://issues.apache.org/jira/browse/HIVE-1517
>             Project: Hive
>          Issue Type: Improvement
>          Components: Query Processor
>            Reporter: Namit Jain
>            Assignee: Siying Dong
>            Priority: Blocker
>             Fix For: 0.7.0
>         Attachments: HIVE-1517.1.patch.txt, HIVE-1517.2.patch.txt, HIVE-1517.3.patch, HIVE-1517.4.patch, HIVE-1517.5.patch, HIVE-1517.6.patch, HIVE-1517.7.patch, HIVE-1517.8.patch
>
> After https://issues.apache.org/jira/browse/HIVE-675, we need a way to be able to select across a database for this feature to be useful.
> For eg:
>   use db1
>   create table foo();
>   use db2
>   select .. from db1.foo.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (HIVE-1995) Mismatched open/commit transaction calls when using get_partition()
[ https://issues.apache.org/jira/browse/HIVE-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Carl Steinbach updated HIVE-1995:
---------------------------------
    Fix Version/s: 0.7.0  (was: 0.8.0)

> Mismatched open/commit transaction calls when using get_partition()
> -------------------------------------------------------------------
>
>                 Key: HIVE-1995
>                 URL: https://issues.apache.org/jira/browse/HIVE-1995
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 0.7.0
>            Reporter: Paul Yang
>            Assignee: Paul Yang
>            Priority: Minor
>             Fix For: 0.7.0
>         Attachments: HIVE-1995.1.patch
>
> Nested executeWithRetry() calls caused by using HiveMetaStore.get_partition() can result in mis-matched open/commit calls. Fixes the same issue as described in HIVE-1760.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Commented: (HIVE-1995) Mismatched open/commit transaction calls when using get_partition()
[ https://issues.apache.org/jira/browse/HIVE-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996617#comment-12996617 ]

Carl Steinbach commented on HIVE-1995:
--------------------------------------
Backport to branch-0.7.

> Mismatched open/commit transaction calls when using get_partition()
> -------------------------------------------------------------------
>
>                 Key: HIVE-1995
>                 URL: https://issues.apache.org/jira/browse/HIVE-1995
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 0.7.0
>            Reporter: Paul Yang
>            Assignee: Paul Yang
>            Priority: Minor
>             Fix For: 0.7.0
>         Attachments: HIVE-1995.1.patch
>
> Nested executeWithRetry() calls caused by using HiveMetaStore.get_partition() can result in mis-matched open/commit calls. Fixes the same issue as described in HIVE-1760.

--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
Re: Hive 0.7.0 Release Candidate 0
Hi Ashutosh,

I backported it just now. I'll cut another RC early next week to include this.

Thanks.

Carl

On Fri, Feb 18, 2011 at 1:37 PM, Ashutosh Chauhan <hashut...@apache.org> wrote:
> Wondering if https://issues.apache.org/jira/browse/HIVE-1995 should also be considered for 0.7?
>
> Ashutosh
>
> On Thu, Feb 17, 2011 at 23:57, Carl Steinbach <c...@cloudera.com> wrote:
>> http://people.apache.org/~cws/hive-0.7.0-candidate-0/
>>
>> Please vote.
Re: Trying to make sense of build/ql/tmp/hive.log
Yeah, this noise has been there since we first started firing up the mini ZooKeeper for unit tests (first for just HBase, now for all of QL due to concurrency support). They are harmless, but it would be great if someone could figure out how to squelch them.

JVS

On Feb 18, 2011, at 8:34 AM, Edward Capriolo wrote:

> It seems that each time the tests are run, Hive makes tens or hundreds of...
>
> 2011-02-18 07:46:50,243 WARN  zookeeper.ClientCnxn (ClientCnxn.java:run(1120)) - Session 0x12e39732268 for server null, unexpected error, closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused
> 	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> 	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
> 	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1078)
> 2011-02-18 07:46:50,243 DEBUG zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1167)) - Ignoring exception during shutdown input
> java.nio.channels.ClosedChannelException
> 	at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:638)
> 	at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
> 	at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1164)
> 	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1129)
> 2011-02-18 07:46:50,243 DEBUG zookeeper.ClientCnxn (ClientCnxn.java:cleanup(1174)) - Ignoring exception during shutdown output
> java.nio.channels.ClosedChannelException
> 	at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:649)
> 	at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
> 	at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1171)
> 	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1129)
>
> Turning off locking in a .q test does not seem to help. I guess these are non-destructive, but what is going on with this?
>
> Edward
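One possible way to squelch this noise (an assumption on my part, not something the thread confirms works) would be to raise the log threshold for the ZooKeeper client classes in the log4j configuration used by the tests; `ClientCnxn` is the class emitting the reconnect chatter in the pasted log:

```properties
# Hypothetical addition to the test log4j configuration: suppress the
# WARN/DEBUG reconnect messages from the ZooKeeper client while the
# mini ZooKeeper cluster is starting up or shutting down.
log4j.logger.org.apache.zookeeper.ClientCnxn=ERROR
```

This would hide the benign reconnect messages without touching ZooKeeper's server-side logging, though it would also hide any genuine client-side warnings.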
Re: Hive 0.7.0 Release Candidate 0
Great. Thanks, Carl.

Ashutosh

On Fri, Feb 18, 2011 at 14:20, Carl Steinbach <c...@cloudera.com> wrote:
> Hi Ashutosh,
>
> I backported it just now. I'll cut another RC early next week to include this.
>
> Thanks.
>
> Carl
>
> On Fri, Feb 18, 2011 at 1:37 PM, Ashutosh Chauhan <hashut...@apache.org> wrote:
>> Wondering if https://issues.apache.org/jira/browse/HIVE-1995 should also be considered for 0.7?
>>
>> Ashutosh
>>
>> On Thu, Feb 17, 2011 at 23:57, Carl Steinbach <c...@cloudera.com> wrote:
>>> http://people.apache.org/~cws/hive-0.7.0-candidate-0/
>>>
>>> Please vote.
[jira] Updated: (HIVE-1994) Support new annotation @UDFType(stateful = true)
[ https://issues.apache.org/jira/browse/HIVE-1994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Sichi updated HIVE-1994: - Attachment: HIVE-1994.0.patch Preliminary patch with everything except the fix to prevent short-circuiting. Support new annotation @UDFType(stateful = true) Key: HIVE-1994 URL: https://issues.apache.org/jira/browse/HIVE-1994 Project: Hive Issue Type: Improvement Components: Query Processor, UDF Reporter: John Sichi Assignee: John Sichi Attachments: HIVE-1994.0.patch Because Hive does not yet support window functions from SQL/OLAP, people have started hacking around it by writing stateful UDFs for things like cumulative sum. An example is row_sequence in contrib. To clearly mark these, I think we should add a new annotation (with separate semantics from the existing deterministic annotation). I'm proposing the name stateful for lack of a better idea, but I'm open to suggestions. The semantics are as follows: * A stateful UDF can only be used in the SELECT list, not in other clauses such as WHERE/ON/ORDER/GROUP * When a stateful UDF is present in a query, its SELECT needs to be treated similarly to TRANSFORM, i.e. when there's a DISTRIBUTE/CLUSTER/SORT clause, the SELECT runs inside the corresponding reducer to make sure that the results are as expected. For the first one, an example of why we need this is AND/OR short-circuiting; we don't want these optimizations to cause the invocation to be skipped in a confusing way, so we should just ban it outright (which is what SQL/OLAP does for window functions). For the second one, I'm not entirely certain about the details since some of it is lost in the mists of Hive prehistory, but at least if we have the annotation, we'll be able to preserve backwards compatibility as we start adding new cost-based optimizations which might otherwise break it. A specific example would be inserting a materialization step (e.g.
for global query optimization) in between the DISTRIBUTE/CLUSTER/SORT and the outer SELECT containing the stateful UDF invocation; this could be a problem if the mappers in the second job subdivide the buckets generated by the first job. So we wouldn't do anything immediately, but the presence of the annotation will help us going forward. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
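For illustration, a minimal self-contained sketch of the kind of stateful UDF described above, modeled loosely on contrib's row_sequence. The Hive base class, Writable types, and the @UDFType annotation itself are deliberately replaced with plain Java so the snippet stands alone; a real UDF would extend org.apache.hadoop.hive.ql.exec.UDF and carry @UDFType(deterministic = false, stateful = true).

```java
// Sketch of a stateful cumulative-counter UDF (hypothetical class
// name). State carried across evaluate() calls is exactly what makes
// the UDF "stateful" -- and why AND/OR short-circuiting, which can
// skip an invocation, would silently corrupt the sequence.
public class RowSequenceSketch {
    private long counter = 0; // per-task state across rows

    public long evaluate() {
        return ++counter; // each row's invocation advances the state
    }

    public static void main(String[] args) {
        RowSequenceSketch seq = new RowSequenceSketch();
        for (long expected = 1; expected <= 3; expected++) {
            if (seq.evaluate() != expected) {
                throw new AssertionError("sequence broken at " + expected);
            }
        }
        System.out.println("ok");
    }
}
```

If any clause were allowed to skip an invocation, later rows would receive wrong sequence numbers, which is why the proposal bans stateful UDFs outside the SELECT list.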
[jira] Updated: (HIVE-1517) ability to select across a database
[ https://issues.apache.org/jira/browse/HIVE-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siying Dong updated HIVE-1517: -- Attachment: HIVE-1517.9.patch Resolved the conflict after rebasing. John asked me to try to reboot my machine before running the HBase tests again. I'll try to do that too. ability to select across a database --- Key: HIVE-1517 URL: https://issues.apache.org/jira/browse/HIVE-1517 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Namit Jain Assignee: Siying Dong Priority: Blocker Fix For: 0.7.0 Attachments: HIVE-1517.1.patch.txt, HIVE-1517.2.patch.txt, HIVE-1517.3.patch, HIVE-1517.4.patch, HIVE-1517.5.patch, HIVE-1517.6.patch, HIVE-1517.7.patch, HIVE-1517.8.patch, HIVE-1517.9.patch After https://issues.apache.org/jira/browse/HIVE-675, we need a way to be able to select across a database for this feature to be useful. For eg: use db1 create table foo(); use db2 select .. from db1.foo. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Commented: (HIVE-1517) ability to select across a database
[ https://issues.apache.org/jira/browse/HIVE-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12996643#comment-12996643 ] Siying Dong commented on HIVE-1517: --- John, how about for this JIRA we just drop support for referencing foreign views (I'll modify the code to report an error if a view is referenced as db.view_name) and do it as a follow-up? ability to select across a database --- Key: HIVE-1517 URL: https://issues.apache.org/jira/browse/HIVE-1517 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Namit Jain Assignee: Siying Dong Priority: Blocker Fix For: 0.7.0 Attachments: HIVE-1517.1.patch.txt, HIVE-1517.2.patch.txt, HIVE-1517.3.patch, HIVE-1517.4.patch, HIVE-1517.5.patch, HIVE-1517.6.patch, HIVE-1517.7.patch, HIVE-1517.8.patch, HIVE-1517.9.patch After https://issues.apache.org/jira/browse/HIVE-675, we need a way to be able to select across a database for this feature to be useful. For eg: use db1 create table foo(); use db2 select .. from db1.foo. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (HIVE-1517) ability to select across a database
[ https://issues.apache.org/jira/browse/HIVE-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siying Dong updated HIVE-1517: -- Status: Open (was: Patch Available) I'll figure out the view support now. ability to select across a database --- Key: HIVE-1517 URL: https://issues.apache.org/jira/browse/HIVE-1517 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Namit Jain Assignee: Siying Dong Priority: Blocker Fix For: 0.7.0 Attachments: HIVE-1517.1.patch.txt, HIVE-1517.2.patch.txt, HIVE-1517.3.patch, HIVE-1517.4.patch, HIVE-1517.5.patch, HIVE-1517.6.patch, HIVE-1517.7.patch, HIVE-1517.8.patch, HIVE-1517.9.patch After https://issues.apache.org/jira/browse/HIVE-675, we need a way to be able to select across a database for this feature to be useful. For eg: use db1 create table foo(); use db2 select .. from db1.foo. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Commented: (HIVE-1517) ability to select across a database
[ https://issues.apache.org/jira/browse/HIVE-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12996646#comment-12996646 ] John Sichi commented on HIVE-1517: -- Fine with me. ability to select across a database --- Key: HIVE-1517 URL: https://issues.apache.org/jira/browse/HIVE-1517 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Namit Jain Assignee: Siying Dong Priority: Blocker Fix For: 0.7.0 Attachments: HIVE-1517.1.patch.txt, HIVE-1517.2.patch.txt, HIVE-1517.3.patch, HIVE-1517.4.patch, HIVE-1517.5.patch, HIVE-1517.6.patch, HIVE-1517.7.patch, HIVE-1517.8.patch, HIVE-1517.9.patch After https://issues.apache.org/jira/browse/HIVE-675, we need a way to be able to select across a database for this feature to be useful. For eg: use db1 create table foo(); use db2 select .. from db1.foo. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Resolved: (HIVE-359) Short-circuiting expression evaluation
[ https://issues.apache.org/jira/browse/HIVE-359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Sichi resolved HIVE-359. - Resolution: Fixed Assignee: Zheng Shao I believe this was implemented a long time ago (in whatever release included deferred evaluation). Short-circuiting expression evaluation -- Key: HIVE-359 URL: https://issues.apache.org/jira/browse/HIVE-359 Project: Hive Issue Type: Improvement Reporter: Zheng Shao Assignee: Zheng Shao We don't need to evaluate some sub-expressions for AND, OR, CASE, and IF. We should support this kind of expression operators natively so we can change the evaluation order and do short-circuiting. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
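The short-circuiting HIVE-359 tracks is the same lazy AND/OR evaluation Java itself performs, and it is also what HIVE-1994 above wants to guard stateful UDFs against. A toy Java illustration (class and method names here are hypothetical, not Hive code) of how a skipped right-hand operand changes observable behavior:

```java
// Demonstrates lazy AND: when the left operand is already false,
// the right-hand expression is never evaluated, so any expensive
// or side-effecting work inside it is skipped.
public class ShortCircuitDemo {
    static int rhsCalls = 0; // counts right-hand-side evaluations

    static boolean expensiveRhs() {
        rhsCalls++;
        return true;
    }

    public static void main(String[] args) {
        boolean leftFalse = false;
        boolean r = leftFalse && expensiveRhs(); // rhs skipped
        if (rhsCalls != 0) throw new AssertionError("rhs was evaluated");
        r = true && expensiveRhs();              // rhs evaluated
        if (rhsCalls != 1) throw new AssertionError("rhs not evaluated");
        System.out.println("ok");
    }
}
```

For pure sub-expressions this is purely a performance win; for stateful ones it changes results, which is the tension the two issues sit on either side of.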
[jira] Updated: (HIVE-1788) Add more calls to the metastore thrift interface
[ https://issues.apache.org/jira/browse/HIVE-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashish Thusoo updated HIVE-1788: Attachment: HIVE-1788_2.txt Changed the API so that we have two calls now: List<String> get_tables_by_owner(String owner) - this returns the table names; List<Table> get_tables_by_names(List<String> names) - this returns the table objects corresponding to the names. Am currently testing this. Add more calls to the metastore thrift interface Key: HIVE-1788 URL: https://issues.apache.org/jira/browse/HIVE-1788 Project: Hive Issue Type: New Feature Reporter: Ashish Thusoo Assignee: Ashish Thusoo Attachments: HIVE-1788.txt, HIVE-1788_2.txt For administrative purposes the following calls to the metastore thrift interface would be very useful: 1. Get the table metadata for all the tables owned by a particular user 2. Ability to iterate over this set of tables 3. Ability to change a particular key value property of the table -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
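As a sketch only, the two calls proposed in this patch might behave like the following toy in-memory stand-in. The Table class, method bodies, and camel-case names here are hypothetical illustrations, not the thrift-generated metastore API (the real Table is org.apache.hadoop.hive.metastore.api.Table).

```java
import java.util.*;

// Toy in-memory stand-in for the two proposed metastore calls:
//   List<String> get_tables_by_owner(String owner)  -> table names
//   List<Table>  get_tables_by_names(List<String>)  -> table objects
public class MetastoreSketch {
    static class Table { // hypothetical stand-in for the thrift Table
        final String name; final String owner;
        Table(String name, String owner) { this.name = name; this.owner = owner; }
    }

    private final Map<String, Table> tables = new HashMap<>();

    void add(Table t) { tables.put(t.name, t); }

    // Names only: cheap listing for administrative scans.
    List<String> getTablesByOwner(String owner) {
        List<String> names = new ArrayList<>();
        for (Table t : tables.values())
            if (t.owner.equals(owner)) names.add(t.name);
        return names;
    }

    // Full objects for an explicit list of names (batch lookup).
    List<Table> getTablesByNames(List<String> names) {
        List<Table> out = new ArrayList<>();
        for (String n : names)
            if (tables.containsKey(n)) out.add(tables.get(n));
        return out;
    }

    public static void main(String[] args) {
        MetastoreSketch ms = new MetastoreSketch();
        ms.add(new Table("src", "ashish"));
        ms.add(new Table("dst", "namit"));
        if (!ms.getTablesByOwner("ashish").equals(Arrays.asList("src")))
            throw new AssertionError();
        if (ms.getTablesByNames(Arrays.asList("src", "dst")).size() != 2)
            throw new AssertionError();
        System.out.println("ok");
    }
}
```

The split into a name-listing call plus a batch lookup lets an administrative client page through a user's tables without fetching every table object up front.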
[jira] Updated: (HIVE-1517) ability to select across a database
[ https://issues.apache.org/jira/browse/HIVE-1517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siying Dong updated HIVE-1517: -- Attachment: HIVE-1517.10.patch I tried some simple ways to make it work. It doesn't seem to be such a simple task. Instead, I added this to block it:
// TODO: add support for referencing views in foreign databases.
if (!tab.getDbName().equals(db.getCurrentDatabase())) {
  throw new SemanticException(ErrorMsg.INVALID_TABLE_ALIAS
      .getMsg("Referencing view from foreign databases is not supported."));
}
I'll add the view support as a follow-up patch. ability to select across a database --- Key: HIVE-1517 URL: https://issues.apache.org/jira/browse/HIVE-1517 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Namit Jain Assignee: Siying Dong Priority: Blocker Fix For: 0.7.0 Attachments: HIVE-1517.1.patch.txt, HIVE-1517.10.patch, HIVE-1517.2.patch.txt, HIVE-1517.3.patch, HIVE-1517.4.patch, HIVE-1517.5.patch, HIVE-1517.6.patch, HIVE-1517.7.patch, HIVE-1517.8.patch, HIVE-1517.9.patch After https://issues.apache.org/jira/browse/HIVE-675, we need a way to be able to select across a database for this feature to be useful. For eg: use db1 create table foo(); use db2 select .. from db1.foo. -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Updated: (HIVE-1788) Add more calls to the metastore thrift interface
[ https://issues.apache.org/jira/browse/HIVE-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashish Thusoo updated HIVE-1788: Attachment: HIVE-1788_3.txt There were some errors in the previous patch that I had overlooked. Add more calls to the metastore thrift interface Key: HIVE-1788 URL: https://issues.apache.org/jira/browse/HIVE-1788 Project: Hive Issue Type: New Feature Reporter: Ashish Thusoo Assignee: Ashish Thusoo Attachments: HIVE-1788.txt, HIVE-1788_2.txt, HIVE-1788_3.txt For administrative purposes the following calls to the metastore thrift interface would be very useful: 1. Get the table metadata for all the tables owned by a particular user 2. Ability to iterate over this set of tables 3. Ability to change a particular key value property of the table -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] Commented: (HIVE-1788) Add more calls to the metastore thrift interface
[ https://issues.apache.org/jira/browse/HIVE-1788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12996685#comment-12996685 ] Ashish Thusoo commented on HIVE-1788: - There is still one more error that I need to resolve in the tests before this patch works, but any early review comments are welcome. Specifically there are some issues with the IN operator in the JDO code in this patch. Add more calls to the metastore thrift interface Key: HIVE-1788 URL: https://issues.apache.org/jira/browse/HIVE-1788 Project: Hive Issue Type: New Feature Reporter: Ashish Thusoo Assignee: Ashish Thusoo Attachments: HIVE-1788.txt, HIVE-1788_2.txt, HIVE-1788_3.txt For administrative purposes the following calls to the metastore thrift interface would be very useful: 1. Get the table metadata for all the tables owned by a particular user 2. Ability to iterate over this set of tables 3. Ability to change a particular key value property of the table -- This message is automatically generated by JIRA. - For more information on JIRA, see: http://www.atlassian.com/software/jira
Build failed in Hudson: Hive-0.7.0-h0.20 #10
See https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/10/changes Changes: [cws] HIVE-1995 Mismatched open/commit transaction calls when using get_partition() (Paul Yang via cws) -- [...truncated 25333 lines...] [junit] OK [junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/srcbucket21.txt' INTO TABLE srcbucket2 [junit] PREHOOK: type: LOAD [junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/srcbucket21.txt [junit] Loading data to table srcbucket2 [junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/srcbucket21.txt' INTO TABLE srcbucket2 [junit] POSTHOOK: type: LOAD [junit] POSTHOOK: Output: default@srcbucket2 [junit] OK [junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/srcbucket22.txt' INTO TABLE srcbucket2 [junit] PREHOOK: type: LOAD [junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/srcbucket22.txt [junit] Loading data to table srcbucket2 [junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/srcbucket22.txt' INTO TABLE srcbucket2 [junit] POSTHOOK: type: LOAD [junit] POSTHOOK: Output: default@srcbucket2 [junit] OK [junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/srcbucket23.txt' INTO TABLE srcbucket2 [junit] PREHOOK: type: LOAD [junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/srcbucket23.txt [junit] Loading data to table srcbucket2 [junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/srcbucket23.txt' INTO TABLE srcbucket2 [junit] POSTHOOK: type: LOAD [junit] POSTHOOK: Output: default@srcbucket2 [junit] OK 
[junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt' INTO TABLE src [junit] PREHOOK: type: LOAD [junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt [junit] Loading data to table src [junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.txt' INTO TABLE src [junit] POSTHOOK: type: LOAD [junit] POSTHOOK: Output: default@src [junit] OK [junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv3.txt' INTO TABLE src1 [junit] PREHOOK: type: LOAD [junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv3.txt [junit] Loading data to table src1 [junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv3.txt' INTO TABLE src1 [junit] POSTHOOK: type: LOAD [junit] POSTHOOK: Output: default@src1 [junit] OK [junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.seq' INTO TABLE src_sequencefile [junit] PREHOOK: type: LOAD [junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.seq [junit] Loading data to table src_sequencefile [junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/kv1.seq' INTO TABLE src_sequencefile [junit] POSTHOOK: type: LOAD [junit] POSTHOOK: Output: default@src_sequencefile [junit] OK [junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/complex.seq' INTO TABLE src_thrift [junit] PREHOOK: type: LOAD [junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/complex.seq [junit] Loading data to table 
src_thrift [junit] POSTHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/complex.seq' INTO TABLE src_thrift [junit] POSTHOOK: type: LOAD [junit] POSTHOOK: Output: default@src_thrift [junit] OK [junit] PREHOOK: query: LOAD DATA LOCAL INPATH 'https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/json.txt' INTO TABLE src_json [junit] PREHOOK: type: LOAD [junit] Copying data from https://hudson.apache.org/hudson/job/Hive-0.7.0-h0.20/ws/hive/data/files/json.txt [junit] Loading data to table src_json [junit] POSTHOOK: query: LOAD DATA LOCAL INPATH