[jira] [Commented] (HIVE-2283) Backtracking real column names for EXPLAIN output
[ https://issues.apache.org/jira/browse/HIVE-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070328#comment-13070328 ] Navis commented on HIVE-2283:
-----------------------------

This patch changes the output of EXPLAIN queries and would likely make other test cases fail. Adding an option to switch the EXPLAIN mode would be appropriate, but doing that was not as clean (redundant code, etc.) as it seemed. I've set this aside for now and am waiting for others' opinions.

Backtracking real column names for EXPLAIN output
-------------------------------------------------
Key: HIVE-2283
URL: https://issues.apache.org/jira/browse/HIVE-2283
Project: Hive
Issue Type: Improvement
Components: Query Processor
Affects Versions: 0.8.0
Reporter: Navis
Priority: Minor
Attachments: HIVE-2283.1.patch, HIVE-2283.2.patch, HIVE-2283.test.patch

GUI people suggested that showing real column names in the result of an EXPLAIN statement would make customers feel more comfortable with Hive. I agreed and am working on it.

{code}
a. current EXPLAIN
  Select Operator
    expressions:
          expr: _col10
          type: int
          expr: _col17
          type: string
  Group By Operator
    keys:
          expr: _col0
          type: int
          expr: _col17
          type: int

b. suggested EXPLAIN
  Select Operator
    expressions: _col10=t2.key_int1, _col17=upper(t1.key_int1), _col22=t3.key_string2
  Group By Operator
    keys: _col10=t2.key_int1, _col17=upper(t1.key_int1)
{code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
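The mapping described above (resolving internal `_colN` aliases back to the real source expressions) can be sketched as a chain lookup: each operator records which expression produced each alias, and backtracking follows that chain until it reaches a real column or expression. This is an illustrative toy under assumed names, not the patch's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the idea behind HIVE-2283: record, per alias,
// the expression that produced it, then follow the chain backwards.
public class ExplainBacktrack {
    // alias -> expression that produced it (an earlier alias or a real column)
    private final Map<String, String> producedBy = new HashMap<>();

    public void record(String alias, String sourceExpr) {
        producedBy.put(alias, sourceExpr);
    }

    // Follow internal aliases until we reach an expression that is not
    // itself an alias; that is the "real" name to print in EXPLAIN.
    public String backtrack(String alias) {
        String current = alias;
        while (producedBy.containsKey(current)) {
            current = producedBy.get(current);
        }
        return current;
    }

    public static void main(String[] args) {
        ExplainBacktrack rr = new ExplainBacktrack();
        rr.record("_col10", "t2.key_int1");
        rr.record("_col17", "upper(t1.key_int1)");
        rr.record("_col0", "_col10"); // a later operator renames _col10
        System.out.println(rr.backtrack("_col0"));  // resolves through _col10
        System.out.println(rr.backtrack("_col17"));
    }
}
```

As the later comment in this thread notes, the real resolution is harder than this sketch suggests, since not every column can be traced back through every operator.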
Re: Review Request: HIVE-2246: Dedupe tables' column schemas from partitions in the metastore db
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1183/#review1176 ---

trunk/metastore/scripts/upgrade/mysql/008-HIVE-2246.mysql.sql
https://reviews.apache.org/r/1183/#comment2467
Is the CHARSET (latin1) the same as SDS? This will require users' comments to be in latin1, which prevents UTF characters.

trunk/metastore/scripts/upgrade/mysql/008-HIVE-2246.mysql.sql
https://reviews.apache.org/r/1183/#comment2466
Can you also add a migration script for Derby? We support Derby as the default metastore RDBMS as well.

trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
https://reviews.apache.org/r/1183/#comment2468
Do you check here whether the 'alter table' command changes the schema (column definitions)? If it just sets a table property, then you don't need to create a new ColumnDescriptor, right? Also, if a table's schema is changed, a new CD will be created, but the old partitions will still have the old CDs. When we query an old partition, do we use the old partition's CD or the table's CD? And in that case, when you run 'desc table partition old_partition', do you return the old partition's CD or the table's CD?

- Ning

On 2011-07-22 05:30:29, Sohan Jain wrote:

--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1183/ ---

(Updated 2011-07-22 05:30:29)

Review request for hive, Ning Zhang and Paul Yang.

Summary
-------

This patch tries to make minimal changes to the API while keeping migration short and somewhat easy to revert.

The new schema can be described as follows:
- CDS is a table corresponding to Column Descriptor objects. Currently, it only stores a CD_ID.
- COLUMNS_V2 is a table corresponding to MFieldSchema objects, or columns. A Column Descriptor holds a list of columns. COLUMNS_V2 has a foreign key to the CD_ID to which it belongs.
- SDS was modified to reference a Column Descriptor. So SDS now has a foreign key to a CD_ID which describes its columns.

During migration, we create Column Descriptors for tables in a straightforward manner: their columns are now just wrapped inside a column descriptor. The SDS of partitions use their parent table's column descriptor, since currently a partition and its table share the same list of columns.

When altering or adding a partition, give it its parent table's column descriptor IF the columns they describe are the same. Otherwise, create a new column descriptor for its columns.

When adding or altering a table, create a new column descriptor every time.

Whenever you drop a storage descriptor (e.g., when dropping tables or partitions), check whether the related column descriptor has any other references. That is, check whether any other storage descriptors point to that column descriptor. If none do, delete that column descriptor. This check is in place so we don't have unreferenced column descriptors and columns hanging around after schema evolution for tables.

This addresses bug HIVE-2246.
https://issues.apache.org/jira/browse/HIVE-2246

Diffs
-----

trunk/metastore/scripts/upgrade/mysql/008-HIVE-2246.mysql.sql PRE-CREATION
trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 1148945
trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MColumnDescriptor.java PRE-CREATION
trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MStorageDescriptor.java 1148945
trunk/metastore/src/model/package.jdo 1148945

Diff: https://reviews.apache.org/r/1183/diff

Testing
-------

Passes Facebook's regression testing and all existing test cases. In one instance, before migration, the overhead involved with storage descriptors and columns was ~11 GB. After migration, it was ~1.5 GB.

Thanks,
Sohan
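The drop-time check described above amounts to reference counting per column descriptor: a CD is deleted only when the last storage descriptor pointing at it goes away. A minimal sketch of that idea, with invented class and method names rather than Hive's actual ObjectStore API (which would do this with datastore queries):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the CD cleanup rule: count how many
// storage descriptors (SDs) reference each column descriptor (CD_ID),
// and delete a CD once its count reaches zero.
public class CdRefCounter {
    // CD_ID -> number of SDs currently pointing at it
    private final Map<Long, Integer> refs = new HashMap<>();

    // Called when an SD is created or altered to point at this CD.
    public void attachSd(long cdId) {
        refs.merge(cdId, 1, Integer::sum);
    }

    // Called when an SD is dropped. Returns true if the CD became
    // unreferenced and should therefore be deleted.
    public boolean dropSd(long cdId) {
        int remaining = refs.merge(cdId, -1, Integer::sum);
        if (remaining <= 0) {
            refs.remove(cdId); // no SD references the CD any more
            return true;
        }
        return false;
    }
}
```

Under this scheme a table and its partitions sharing one CD hold several references to the same CD_ID, so dropping one partition never deletes a CD that the table or a sibling partition still uses.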
[jira] [Commented] (HIVE-2246) Dedupe tables' column schemas from partitions in the metastore db
[ https://issues.apache.org/jira/browse/HIVE-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070337#comment-13070337 ] jirapos...@reviews.apache.org commented on HIVE-2246:
-----------------------------------------------------

--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1183/#review1176 ---

trunk/metastore/scripts/upgrade/mysql/008-HIVE-2246.mysql.sql
https://reviews.apache.org/r/1183/#comment2467
Is the CHARSET (latin1) the same as SDS? This will require users' comments to be in latin1, which prevents UTF characters.

trunk/metastore/scripts/upgrade/mysql/008-HIVE-2246.mysql.sql
https://reviews.apache.org/r/1183/#comment2466
Can you also add a migration script for Derby? We support Derby as the default metastore RDBMS as well.

trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
https://reviews.apache.org/r/1183/#comment2468
Do you check here whether the 'alter table' command changes the schema (column definitions)? If it just sets a table property, then you don't need to create a new ColumnDescriptor, right? Also, if a table's schema is changed, a new CD will be created, but the old partitions will still have the old CDs. When we query an old partition, do we use the old partition's CD or the table's CD? And in that case, when you run 'desc table partition old_partition', do you return the old partition's CD or the table's CD?

- Ning

Dedupe tables' column schemas from partitions in the metastore db
-----------------------------------------------------------------
Key: HIVE-2246
URL: https://issues.apache.org/jira/browse/HIVE-2246
Project: Hive
Issue Type: Improvement
[jira] [Commented] (HIVE-2297) Fix NPE in ConditionalResolverSkewJoin
[ https://issues.apache.org/jira/browse/HIVE-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070354#comment-13070354 ] Bennie Schut commented on HIVE-2297:
------------------------------------

Java's && operator short-circuits: if fstatus != null is false, the rest of the expression is simply never evaluated. So the brackets don't add anything, and a lot of code is written with this in mind.

Fix NPE in ConditionalResolverSkewJoin
--------------------------------------
Key: HIVE-2297
URL: https://issues.apache.org/jira/browse/HIVE-2297
Project: Hive
Issue Type: Bug
Reporter: Vaibhav Aggarwal
Assignee: Vaibhav Aggarwal
Attachments: HIVE-2297.patch, fix_npe.patch
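The guarantee relied on here is defined by the Java language itself (the conditional-and operator evaluates its right operand only when the left operand is true), so a null check on the left safely guards a dereference on the right. A self-contained illustration; FileStatus here is a stand-in class, not Hadoop's:

```java
// Demonstrates short-circuit evaluation of &&: when the left-hand
// null check fails, the right-hand dereference is never executed,
// so no NullPointerException can occur.
public class ShortCircuit {
    static class FileStatus { // stand-in for Hadoop's FileStatus
        private final long len;
        FileStatus(long len) { this.len = len; }
        long getLen() { return len; }
    }

    static boolean isNonEmpty(FileStatus fstatus) {
        // If fstatus is null, fstatus.getLen() is never called.
        return fstatus != null && fstatus.getLen() > 0;
    }

    public static void main(String[] args) {
        System.out.println(isNonEmpty(null));              // false, no NPE
        System.out.println(isNonEmpty(new FileStatus(4))); // true
    }
}
```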
[jira] [Commented] (HIVE-2283) Backtracking real column names for EXPLAIN output
[ https://issues.apache.org/jira/browse/HIVE-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070472#comment-13070472 ] Navis commented on HIVE-2283:
-----------------------------

I had misunderstood the source code. This does not work for every column/case; I need to study it further.

Backtracking real column names for EXPLAIN output
-------------------------------------------------
Key: HIVE-2283
URL: https://issues.apache.org/jira/browse/HIVE-2283
Project: Hive
Issue Type: Improvement
Components: Query Processor
Affects Versions: 0.8.0
Reporter: Navis
Priority: Minor
Attachments: HIVE-2283.1.patch, HIVE-2283.2.patch, HIVE-2283.test.patch
[jira] [Commented] (HIVE-2236) Cli: Print Hadoop's CPU milliseconds
[ https://issues.apache.org/jira/browse/HIVE-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070638#comment-13070638 ] He Yongqiang commented on HIVE-2236:
------------------------------------

+1, will commit after tests pass.

Cli: Print Hadoop's CPU milliseconds
------------------------------------
Key: HIVE-2236
URL: https://issues.apache.org/jira/browse/HIVE-2236
Project: Hive
Issue Type: New Feature
Components: CLI
Reporter: Siying Dong
Assignee: Siying Dong
Priority: Minor
Attachments: HIVE-2236.1.patch, HIVE-2236.2.patch, HIVE-2236.3.patch

CPU milliseconds information is available from Hadoop's framework. Printing it to the Hive CLI when executing a job will help users learn more about their jobs.
[jira] [Commented] (HIVE-2236) Cli: Print Hadoop's CPU milliseconds
[ https://issues.apache.org/jira/browse/HIVE-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070640#comment-13070640 ] He Yongqiang commented on HIVE-2236:
------------------------------------

Siying, can you rebase this patch? Thanks!

Cli: Print Hadoop's CPU milliseconds
------------------------------------
Key: HIVE-2236
URL: https://issues.apache.org/jira/browse/HIVE-2236
Project: Hive
Issue Type: New Feature
Components: CLI
Reporter: Siying Dong
Assignee: Siying Dong
Priority: Minor
Attachments: HIVE-2236.1.patch, HIVE-2236.2.patch, HIVE-2236.3.patch
[jira] [Commented] (HIVE-2158) add the HivePreparedStatement implementation based on current HIVE supported data-type
[ https://issues.apache.org/jira/browse/HIVE-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070645#comment-13070645 ] Carl Steinbach commented on HIVE-2158: -- @Ido: please open a new ticket and post your patch there. Thanks! add the HivePreparedStatement implementation based on current HIVE supported data-type -- Key: HIVE-2158 URL: https://issues.apache.org/jira/browse/HIVE-2158 Project: Hive Issue Type: Sub-task Components: JDBC Affects Versions: 0.6.0, 0.7.0, 0.8.0 Reporter: Yuanjun Li Assignee: Yuanjun Li Fix For: 0.7.1, 0.8.0 Attachments: HIVE-0.7.1-PreparedStatement.1.patch.txt, HIVE-0.8-PreparedStatement.1.patch.txt
[jira] [Updated] (HIVE-2128) Automatic Indexing with multiple tables
[ https://issues.apache.org/jira/browse/HIVE-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Sichi updated HIVE-2128: - Status: Open (was: Patch Available) Reran with latest patch and still got failures in these three: testCliDriver_index_auto_mult_tables testCliDriver_index_auto_mult_tables_compact testCliDriver_index_auto_self_join Automatic Indexing with multiple tables --- Key: HIVE-2128 URL: https://issues.apache.org/jira/browse/HIVE-2128 Project: Hive Issue Type: Improvement Components: Indexing Affects Versions: 0.8.0 Reporter: Russell Melick Assignee: Syed S. Albiz Attachments: HIVE-2128.1.patch, HIVE-2128.1.patch, HIVE-2128.2.patch, HIVE-2128.4.patch, HIVE-2128.5.patch, HIVE-2128.6.patch, HIVE-2128.7.patch Make automatic indexing work with jobs which access multiple tables. We'll probably need to modify the way that the index input format works in order to associate index formats/files with specific tables.
[jira] [Updated] (HIVE-2236) Cli: Print Hadoop's CPU milliseconds
[ https://issues.apache.org/jira/browse/HIVE-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siying Dong updated HIVE-2236: -- Attachment: HIVE-2236.4.patch Cli: Print Hadoop's CPU milliseconds Key: HIVE-2236 URL: https://issues.apache.org/jira/browse/HIVE-2236 Project: Hive Issue Type: New Feature Components: CLI Reporter: Siying Dong Assignee: Siying Dong Priority: Minor Attachments: HIVE-2236.1.patch, HIVE-2236.2.patch, HIVE-2236.3.patch, HIVE-2236.4.patch
[jira] [Commented] (HIVE-2297) Fix NPE in ConditionalResolverSkewJoin
[ https://issues.apache.org/jira/browse/HIVE-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070662#comment-13070662 ] Vaibhav Aggarwal commented on HIVE-2297:
----------------------------------------

I generated the patch from the 0.7 branch and then rebased it to 0.8. I didn't realize it had already been fixed in 0.8 when generating the patch. I will resolve this.

Thanks,
Vaibhav

Fix NPE in ConditionalResolverSkewJoin
--------------------------------------
Key: HIVE-2297
URL: https://issues.apache.org/jira/browse/HIVE-2297
Project: Hive
Issue Type: Bug
Reporter: Vaibhav Aggarwal
Assignee: Vaibhav Aggarwal
Attachments: HIVE-2297.patch, fix_npe.patch
[jira] [Updated] (HIVE-2297) Fix NPE in ConditionalResolverSkewJoin
[ https://issues.apache.org/jira/browse/HIVE-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Aggarwal updated HIVE-2297: --- Resolution: Not A Problem Status: Resolved (was: Patch Available) Fix NPE in ConditionalResolverSkewJoin -- Key: HIVE-2297 URL: https://issues.apache.org/jira/browse/HIVE-2297 Project: Hive Issue Type: Bug Reporter: Vaibhav Aggarwal Assignee: Vaibhav Aggarwal Attachments: HIVE-2297.patch, fix_npe.patch
[jira] [Updated] (HIVE-2128) Automatic Indexing with multiple tables
[ https://issues.apache.org/jira/browse/HIVE-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Syed S. Albiz updated HIVE-2128:
--------------------------------

Attachment: HIVE-2128.8.patch

Ah, sorry, I forgot to git rebase before regenerating the patch; some of the recently landed patches introduced changes to the test case output.

Automatic Indexing with multiple tables
---------------------------------------
Key: HIVE-2128
URL: https://issues.apache.org/jira/browse/HIVE-2128
Project: Hive
Issue Type: Improvement
Components: Indexing
Affects Versions: 0.8.0
Reporter: Russell Melick
Assignee: Syed S. Albiz
Attachments: HIVE-2128.1.patch, HIVE-2128.1.patch, HIVE-2128.2.patch, HIVE-2128.4.patch, HIVE-2128.5.patch, HIVE-2128.6.patch, HIVE-2128.7.patch, HIVE-2128.8.patch
[jira] [Updated] (HIVE-2128) Automatic Indexing with multiple tables
[ https://issues.apache.org/jira/browse/HIVE-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Syed S. Albiz updated HIVE-2128: Status: Patch Available (was: Open) Automatic Indexing with multiple tables --- Key: HIVE-2128 URL: https://issues.apache.org/jira/browse/HIVE-2128 Project: Hive Issue Type: Improvement Components: Indexing Affects Versions: 0.8.0 Reporter: Russell Melick Assignee: Syed S. Albiz Attachments: HIVE-2128.1.patch, HIVE-2128.1.patch, HIVE-2128.2.patch, HIVE-2128.4.patch, HIVE-2128.5.patch, HIVE-2128.6.patch, HIVE-2128.7.patch, HIVE-2128.8.patch
Build failed in Jenkins: Hive-trunk-h0.21 #846
See https://builds.apache.org/job/Hive-trunk-h0.21/846/

--
[...truncated 33492 lines...]
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hive/hive-hbase-handler/0.8.0-SNAPSHOT/hive-hbase-handler-0.8.0-20110725.192504-42.jar to repository apache.snapshots.https at https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] Transferring 49K from apache.snapshots.https
[artifact:deploy] Uploaded 49K
[artifact:deploy] [INFO] Uploading project information for hive-hbase-handler 0.8.0-20110725.192504-42
[artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'snapshot org.apache.hive:hive-hbase-handler:0.8.0-SNAPSHOT'
[artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'artifact org.apache.hive:hive-hbase-handler'

ivy-init-dirs:

ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /x1/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/build/ivy/lib/ivy-2.1.0.jar
[get] Not modified - so not downloaded

ivy-probe-antlib:

ivy-init-antlib:

ivy-init:

ivy-resolve-maven-ant-tasks:
[ivy:resolve] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/ivy/ivysettings.xml

ivy-retrieve-maven-ant-tasks:
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/ivy/ivysettings.xml

mvn-taskdef:

maven-publish-artifact:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hive/hive-hwi/0.8.0-SNAPSHOT/hive-hwi-0.8.0-20110725.192506-42.jar to repository apache.snapshots.https at https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] Transferring 23K from apache.snapshots.https
[artifact:deploy] Uploaded 23K
[artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'snapshot org.apache.hive:hive-hwi:0.8.0-SNAPSHOT'
[artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'artifact org.apache.hive:hive-hwi'
[artifact:deploy] [INFO] Uploading project information for hive-hwi 0.8.0-20110725.192506-42

ivy-init-dirs:

ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /x1/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/build/ivy/lib/ivy-2.1.0.jar
[get] Not modified - so not downloaded

ivy-probe-antlib:

ivy-init-antlib:

ivy-init:

ivy-resolve-maven-ant-tasks:
[ivy:resolve] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/ivy/ivysettings.xml

ivy-retrieve-maven-ant-tasks:
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use 'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/ivy/ivysettings.xml

mvn-taskdef:

maven-publish-artifact:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hive/hive-jdbc/0.8.0-SNAPSHOT/hive-jdbc-0.8.0-20110725.192507-42.jar to repository apache.snapshots.https at https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] Transferring 56K from apache.snapshots.https
[artifact:deploy] Uploaded 56K
[artifact:deploy] [INFO] Uploading project information for hive-jdbc 0.8.0-20110725.192507-42
[artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'snapshot org.apache.hive:hive-jdbc:0.8.0-SNAPSHOT'
[artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'artifact org.apache.hive:hive-jdbc'

ivy-init-dirs:

ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To:
Hudson builds
They've been failing consistently for 20 days. Seems like tests are often passing fine, but something else goes wrong, either right after (or during?) maven, where it is trying to spawn ant? See the very end of this log for an example:

https://builds.apache.org/job/Hive-trunk-h0.21/846/console

Did some configuration change on the build machine?

JVS

maven-publish-artifact:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hive/hive-shims/0.8.0-SNAPSHOT/hive-shims-0.8.0-20110725.192514-42.jar to repository apache.snapshots.https at https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] Transferring 76K from apache.snapshots.https
[artifact:deploy] Uploaded 76K
[artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'snapshot org.apache.hive:hive-shims:0.8.0-SNAPSHOT'
[artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'artifact org.apache.hive:hive-shims'
[artifact:deploy] [INFO] Uploading project information for hive-shims 0.8.0-20110725.192514-42

BUILD SUCCESSFUL
Total time: 207 minutes 4 seconds
[hive] $ ant
FATAL: command execution failed. Maybe you need to configure the job to choose one of your Ant installations?
java.io.IOException: Cannot run program "ant" (in directory "/home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive"): java.io.IOException: error=2, No such file or directory
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
	at hudson.Proc$LocalProc.<init>(Proc.java:244)
	at hudson.Proc$LocalProc.<init>(Proc.java:216)
	at hudson.Launcher$LocalLauncher.launch(Launcher.java:698)
	at hudson.Launcher$ProcStarter.start(Launcher.java:329)
Re: Hudson builds
The Build section of the job configuration had two ant invocations listed, and the second one didn't have any targets specified. I removed the second invocation and expect that this will fix the problem.

On Mon, Jul 25, 2011 at 1:18 PM, John Sichi <jsi...@fb.com> wrote:

They've been failing consistently for 20 days. Seems like tests are often passing fine, but something else goes wrong, either right after (or during?) maven, where it is trying to spawn ant? See the very end of this log for an example:

https://builds.apache.org/job/Hive-trunk-h0.21/846/console

Did some configuration change on the build machine?

JVS

maven-publish-artifact:
[artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https
[artifact:deploy] Uploading: org/apache/hive/hive-shims/0.8.0-SNAPSHOT/hive-shims-0.8.0-20110725.192514-42.jar to repository apache.snapshots.https at https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] Transferring 76K from apache.snapshots.https
[artifact:deploy] Uploaded 76K
[artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'snapshot org.apache.hive:hive-shims:0.8.0-SNAPSHOT'
[artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'artifact org.apache.hive:hive-shims'
[artifact:deploy] [INFO] Uploading project information for hive-shims 0.8.0-20110725.192514-42

BUILD SUCCESSFUL
Total time: 207 minutes 4 seconds
[hive] $ ant
FATAL: command execution failed. Maybe you need to configure the job to choose one of your Ant installations?
java.io.IOException: Cannot run program "ant" (in directory "/home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive"): java.io.IOException: error=2, No such file or directory
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
	at hudson.Proc$LocalProc.<init>(Proc.java:244)
	at hudson.Proc$LocalProc.<init>(Proc.java:216)
	at hudson.Launcher$LocalLauncher.launch(Launcher.java:698)
	at hudson.Launcher$ProcStarter.start(Launcher.java:329)
[jira] [Updated] (HIVE-2299) Optimize Hive query startup time for multiple partitions
[ https://issues.apache.org/jira/browse/HIVE-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-2299:
---------------------------------

Component/s: Query Processor

Optimize Hive query startup time for multiple partitions
--------------------------------------------------------
Key: HIVE-2299
URL: https://issues.apache.org/jira/browse/HIVE-2299
Project: Hive
Issue Type: Improvement
Components: Query Processor
Reporter: Vaibhav Aggarwal
Assignee: Vaibhav Aggarwal
Attachments: HIVE-2299.patch

Added an optimization to the way input splits are computed. Reduced an O(n^2) operation to an O(n) operation.
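An O(n^2)-to-O(n) rewrite of this kind typically replaces a nested-loop match (for each split, scan all partitions) with a hash map built once and probed per split. A hypothetical sketch of that pattern, not the actual CombineHiveInputFormat change:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: matching split paths to partitions via one hash
// lookup per split instead of scanning every partition for every split.
public class SplitLookup {
    public static Map<String, String> matchSplits(
            List<String> splitPaths, Map<String, String> partitionByPath) {
        Map<String, String> result = new HashMap<>();
        for (String path : splitPaths) {
            // O(1) expected lookup, so the whole pass is O(n) instead of
            // the O(n^2) nested loop over (splits x partitions).
            String partition = partitionByPath.get(path);
            if (partition != null) {
                result.put(path, partition);
            }
        }
        return result;
    }
}
```

Building `partitionByPath` costs O(n) once, so the total work is linear in the number of splits plus partitions.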
Review Request: HIVE-2299. Optimize Hive query startup time for multiple partitions
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1188/ ---

Review request for hive.

Summary
-------

Review request for HIVE-2299.

This addresses bug HIVE-2299.
https://issues.apache.org/jira/browse/HIVE-2299

Diffs
-----

ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java 421140f

Diff: https://reviews.apache.org/r/1188/diff

Testing
-------

Thanks,
Carl
[jira] [Commented] (HIVE-2299) Optimize Hive query startup time for multiple partitions
[ https://issues.apache.org/jira/browse/HIVE-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070768#comment-13070768 ] jirapos...@reviews.apache.org commented on HIVE-2299:
-----------------------------------------------------

--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1188/ ---

Review request for hive.

Summary
-------

Review request for HIVE-2299.

This addresses bug HIVE-2299.
https://issues.apache.org/jira/browse/HIVE-2299

Diffs
-----

ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java 421140f

Diff: https://reviews.apache.org/r/1188/diff

Testing
-------

Thanks,
Carl

Optimize Hive query startup time for multiple partitions
--------------------------------------------------------
Key: HIVE-2299
URL: https://issues.apache.org/jira/browse/HIVE-2299
Project: Hive
Issue Type: Improvement
Components: Query Processor
Reporter: Vaibhav Aggarwal
Assignee: Vaibhav Aggarwal
Attachments: HIVE-2299.patch
Re: Review Request: HIVE-2299. Optimize Hive query startup time for multiple partitions
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1188/#review1180 --- Ship it! +1. Will commit if tests pass. - Carl On 2011-07-25 21:22:09, Carl Steinbach wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1188/ --- (Updated 2011-07-25 21:22:09) Review request for hive. Summary --- Review request for HIVE-2299. This addresses bug HIVE-2299. https://issues.apache.org/jira/browse/HIVE-2299 Diffs - ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java 421140f Diff: https://reviews.apache.org/r/1188/diff Testing --- Thanks, Carl
[jira] [Commented] (HIVE-2299) Optimize Hive query startup time for multiple partitions
[ https://issues.apache.org/jira/browse/HIVE-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070777#comment-13070777 ] jirapos...@reviews.apache.org commented on HIVE-2299: - --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1188/#review1180 --- Ship it! +1. Will commit if tests pass. - Carl On 2011-07-25 21:22:09, Carl Steinbach wrote: > --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1188/ --- (Updated 2011-07-25 21:22:09) Review request for hive. Summary --- Review request for HIVE-2299. This addresses bug HIVE-2299. https://issues.apache.org/jira/browse/HIVE-2299 Diffs - ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java 421140f Diff: https://reviews.apache.org/r/1188/diff Testing --- Thanks, Carl Optimize Hive query startup time for multiple partitions Key: HIVE-2299 URL: https://issues.apache.org/jira/browse/HIVE-2299 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Vaibhav Aggarwal Assignee: Vaibhav Aggarwal Attachments: HIVE-2299.patch Added an optimization to the way input splits are computed. Reduced an O(n^2) operation to an O(n) operation. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2299) Optimize Hive query startup time for multiple partitions
[ https://issues.apache.org/jira/browse/HIVE-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-2299: - Resolution: Fixed Fix Version/s: 0.8.0 Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) Committed to trunk. Thanks Vaibhav! Optimize Hive query startup time for multiple partitions Key: HIVE-2299 URL: https://issues.apache.org/jira/browse/HIVE-2299 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Vaibhav Aggarwal Assignee: Vaibhav Aggarwal Fix For: 0.8.0 Attachments: HIVE-2299.patch Added an optimization to the way input splits are computed. Reduced an O(n^2) operation to an O(n) operation. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2299) Optimize Hive query startup time for multiple partitions
[ https://issues.apache.org/jira/browse/HIVE-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070800#comment-13070800 ] Vaibhav Aggarwal commented on HIVE-2299: Thanks for looking at this improvement request Carl! Optimize Hive query startup time for multiple partitions Key: HIVE-2299 URL: https://issues.apache.org/jira/browse/HIVE-2299 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Vaibhav Aggarwal Assignee: Vaibhav Aggarwal Fix For: 0.8.0 Attachments: HIVE-2299.patch Added an optimization to the way input splits are computed. Reduced an O(n^2) operation to an O(n) operation. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
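The HIVE-2299 patch itself is not reproduced in this thread, but the O(n^2)-to-O(n) rewrite it describes is a common pattern in split computation. The sketch below is a hypothetical illustration (the class and method names are invented, not from CombineHiveInputFormat): matching each split path against every partition entry is quadratic, while indexing partitions by directory once turns each match into a single hash lookup.

```java
import java.util.*;

public class SplitIndex {

    // O(n * m): for every split path, linearly scan all partition entries.
    static List<String> aliasesQuadratic(List<String> splitPaths,
                                         Map<String, String> partDirToAlias) {
        List<String> out = new ArrayList<>();
        for (String path : splitPaths) {
            for (Map.Entry<String, String> e : partDirToAlias.entrySet()) {
                if (path.startsWith(e.getKey() + "/")) {
                    out.add(e.getValue());
                    break;
                }
            }
        }
        return out;
    }

    // O(n): derive the parent directory once and do a single hash lookup.
    static List<String> aliasesLinear(List<String> splitPaths,
                                      Map<String, String> partDirToAlias) {
        List<String> out = new ArrayList<>();
        for (String path : splitPaths) {
            String parent = path.substring(0, path.lastIndexOf('/'));
            String alias = partDirToAlias.get(parent);
            if (alias != null) {
                out.add(alias);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> parts = new HashMap<>();
        parts.put("/warehouse/t/ds=1", "t");
        parts.put("/warehouse/t/ds=2", "t");
        List<String> splits = Arrays.asList("/warehouse/t/ds=1/f0", "/warehouse/t/ds=2/f1");
        // Both versions produce the same result; only the complexity differs.
        System.out.println(aliasesQuadratic(splits, parts).equals(aliasesLinear(splits, parts)));
    }
}
```

With many partitions per query, replacing the inner scan with a map lookup is what makes startup time scale linearly in the number of input paths.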
[jira] [Updated] (HIVE-2298) Fix UDAFPercentile to tolerate null percentiles
[ https://issues.apache.org/jira/browse/HIVE-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-2298: - Component/s: UDF Affects Version/s: 0.7.0 Fix UDAFPercentile to tolerate null percentiles --- Key: HIVE-2298 URL: https://issues.apache.org/jira/browse/HIVE-2298 Project: Hive Issue Type: Bug Components: UDF Affects Versions: 0.7.0 Reporter: Vaibhav Aggarwal Assignee: Vaibhav Aggarwal Attachments: HIVE-2298.patch UDAFPercentile when passed null percentile list will throw a null pointer exception. Submitting a small fix for that. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
Review Request: HIVE-2298. Fix UDAFPercentile to tolerate null percentiles
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1189/ --- Review request for hive. Summary --- Review request for HIVE-2298. This addresses bug HIVE-2298. https://issues.apache.org/jira/browse/HIVE-2298 Diffs - ql/src/java/org/apache/hadoop/hive/ql/udf/UDAFPercentile.java 92db544 Diff: https://reviews.apache.org/r/1189/diff Testing --- Thanks, Carl
[jira] [Commented] (HIVE-2298) Fix UDAFPercentile to tolerate null percentiles
[ https://issues.apache.org/jira/browse/HIVE-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070803#comment-13070803 ] jirapos...@reviews.apache.org commented on HIVE-2298: - --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1189/ --- Review request for hive. Summary --- Review request for HIVE-2298. This addresses bug HIVE-2298. https://issues.apache.org/jira/browse/HIVE-2298 Diffs - ql/src/java/org/apache/hadoop/hive/ql/udf/UDAFPercentile.java 92db544 Diff: https://reviews.apache.org/r/1189/diff Testing --- Thanks, Carl Fix UDAFPercentile to tolerate null percentiles --- Key: HIVE-2298 URL: https://issues.apache.org/jira/browse/HIVE-2298 Project: Hive Issue Type: Bug Components: UDF Affects Versions: 0.7.0 Reporter: Vaibhav Aggarwal Assignee: Vaibhav Aggarwal Attachments: HIVE-2298.patch UDAFPercentile when passed null percentile list will throw a null pointer exception. Submitting a small fix for that. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
Re: Review Request: HIVE-2298. Fix UDAFPercentile to tolerate null percentiles
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1189/#review1181 --- ql/src/java/org/apache/hadoop/hive/ql/udf/UDAFPercentile.java https://reviews.apache.org/r/1189/#comment2483 Please fix the following checkstyle violations: Line 238: File contains tab characters (this is the first instance). Line 240: Line is longer than 100 characters. Line 245: '}' should be on the same line. - Carl On 2011-07-25 21:53:12, Carl Steinbach wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1189/ --- (Updated 2011-07-25 21:53:12) Review request for hive. Summary --- Review request for HIVE-2298. This addresses bug HIVE-2298. https://issues.apache.org/jira/browse/HIVE-2298 Diffs - ql/src/java/org/apache/hadoop/hive/ql/udf/UDAFPercentile.java 92db544 Diff: https://reviews.apache.org/r/1189/diff Testing --- Thanks, Carl
[jira] [Updated] (HIVE-2298) Fix UDAFPercentile to tolerate null percentiles
[ https://issues.apache.org/jira/browse/HIVE-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Steinbach updated HIVE-2298: - Status: Open (was: Patch Available) Fix UDAFPercentile to tolerate null percentiles --- Key: HIVE-2298 URL: https://issues.apache.org/jira/browse/HIVE-2298 Project: Hive Issue Type: Bug Components: UDF Affects Versions: 0.7.0 Reporter: Vaibhav Aggarwal Assignee: Vaibhav Aggarwal Attachments: HIVE-2298.patch UDAFPercentile when passed null percentile list will throw a null pointer exception. Submitting a small fix for that. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2298) Fix UDAFPercentile to tolerate null percentiles
[ https://issues.apache.org/jira/browse/HIVE-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070807#comment-13070807 ] jirapos...@reviews.apache.org commented on HIVE-2298: - --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1189/#review1181 --- ql/src/java/org/apache/hadoop/hive/ql/udf/UDAFPercentile.java https://reviews.apache.org/r/1189/#comment2483 Please fix the following checkstyle violations: Line 238: File contains tab characters (this is the first instance). Line 240: Line is longer than 100 characters. Line 245: '}' should be on the same line. - Carl On 2011-07-25 21:53:12, Carl Steinbach wrote: > --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1189/ --- (Updated 2011-07-25 21:53:12) Review request for hive. Summary --- Review request for HIVE-2298. This addresses bug HIVE-2298. https://issues.apache.org/jira/browse/HIVE-2298 Diffs - ql/src/java/org/apache/hadoop/hive/ql/udf/UDAFPercentile.java 92db544 Diff: https://reviews.apache.org/r/1189/diff Testing --- Thanks, Carl Fix UDAFPercentile to tolerate null percentiles --- Key: HIVE-2298 URL: https://issues.apache.org/jira/browse/HIVE-2298 Project: Hive Issue Type: Bug Components: UDF Affects Versions: 0.7.0 Reporter: Vaibhav Aggarwal Assignee: Vaibhav Aggarwal Attachments: HIVE-2298.patch UDAFPercentile when passed null percentile list will throw a null pointer exception. Submitting a small fix for that. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2298) Fix UDAFPercentile to tolerate null percentiles
[ https://issues.apache.org/jira/browse/HIVE-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070809#comment-13070809 ] John Sichi commented on HIVE-2298: -- Is it possible to come up with a test case for this one? Fix UDAFPercentile to tolerate null percentiles --- Key: HIVE-2298 URL: https://issues.apache.org/jira/browse/HIVE-2298 Project: Hive Issue Type: Bug Components: UDF Affects Versions: 0.7.0 Reporter: Vaibhav Aggarwal Assignee: Vaibhav Aggarwal Attachments: HIVE-2298.patch UDAFPercentile when passed null percentile list will throw a null pointer exception. Submitting a small fix for that. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HIVE-2304) 5. Support PreparedStatement.setObject
5. Support PreparedStatement.setObject -- Key: HIVE-2304 URL: https://issues.apache.org/jira/browse/HIVE-2304 Project: Hive Issue Type: Sub-task Reporter: Ido Hadanny -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2304) Support PreparedStatement.setObject
[ https://issues.apache.org/jira/browse/HIVE-2304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ido Hadanny updated HIVE-2304: -- Description: PreparedStatement.setObject is important for spring's jdbcTemplate support Priority: Minor (was: Major) Affects Version/s: 0.7.1 Fix Version/s: 0.8.0 Summary: Support PreparedStatement.setObject (was: 5. Support PreparedStatement.setObject) Support PreparedStatement.setObject --- Key: HIVE-2304 URL: https://issues.apache.org/jira/browse/HIVE-2304 Project: Hive Issue Type: Sub-task Components: JDBC Affects Versions: 0.7.1 Reporter: Ido Hadanny Priority: Minor Fix For: 0.8.0 Original Estimate: 1h Remaining Estimate: 1h PreparedStatement.setObject is important for spring's jdbcTemplate support -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2304) Support PreparedStatement.setObject
[ https://issues.apache.org/jira/browse/HIVE-2304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ido Hadanny updated HIVE-2304: -- Status: Patch Available (was: Open) Support PreparedStatement.setObject --- Key: HIVE-2304 URL: https://issues.apache.org/jira/browse/HIVE-2304 Project: Hive Issue Type: Sub-task Components: JDBC Affects Versions: 0.7.1 Reporter: Ido Hadanny Priority: Minor Fix For: 0.8.0 Original Estimate: 1h Remaining Estimate: 1h PreparedStatement.setObject is important for spring's jdbcTemplate support -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2249) When creating constant expression for numbers, try to infer type from another comparison operand, instead of trying to use integer first, and then long and double
[ https://issues.apache.org/jira/browse/HIVE-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070811#comment-13070811 ] Siying Dong commented on HIVE-2249: --- Joseph, can you handle the string case too? When creating constant expression for numbers, try to infer type from another comparison operand, instead of trying to use integer first, and then long and double -- Key: HIVE-2249 URL: https://issues.apache.org/jira/browse/HIVE-2249 Project: Hive Issue Type: Improvement Reporter: Siying Dong Assignee: Joseph Barillari Attachments: HIVE-2249.1.patch.txt The current code that builds a constant expression for numbers is: try { v = Double.valueOf(expr.getText()); v = Long.valueOf(expr.getText()); v = Integer.valueOf(expr.getText()); } catch (NumberFormatException e) { // do nothing here, we will throw an exception in the following block } if (v == null) { throw new SemanticException(ErrorMsg.INVALID_NUMERICAL_CONSTANT .getMsg(expr)); } return new ExprNodeConstantDesc(v); Then, for cases like WHERE BIG_INT_COLUMN = 0 or WHERE DOUBLE_COLUMN = 0, we always have to do a type conversion when comparing, which is unnecessary if the code is slightly smarter about choosing a type when creating the constant expression. We can simply walk one level up the tree, find the other comparison operand, and use the same type as that operand when possible. For a user's mistyped query like 'INT_COLUMN = 1.1', we can do even more. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
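The idea in HIVE-2249 can be sketched as follows. This is a hedged illustration, not the actual patch: the `ColType` enum and method names are invented for the example (Hive's real type-checking code works on `TypeInfo` objects). The point is to consult the other comparison operand's type first, and only fall back to narrowest-fits probing when the literal does not fit that type.

```java
public class ConstantTyping {

    // Toy stand-in for the other operand's column type.
    enum ColType { INT, BIGINT, DOUBLE }

    static Object inferConstant(String literal, ColType otherOperand) {
        try {
            // Prefer the type of the column on the other side of the
            // comparison, so BIG_INT_COLUMN = 0 builds a long constant
            // and no per-row conversion is needed at execution time.
            switch (otherOperand) {
                case INT:    return Integer.valueOf(literal);
                case BIGINT: return Long.valueOf(literal);
                case DOUBLE: return Double.valueOf(literal);
            }
        } catch (NumberFormatException e) {
            // e.g. INT_COLUMN = 1.1: fall through to the generic probing.
        }
        // Fall back to probing narrowest-to-widest, as before.
        try { return Integer.valueOf(literal); } catch (NumberFormatException e) { /* wider */ }
        try { return Long.valueOf(literal);    } catch (NumberFormatException e) { /* wider */ }
        return Double.valueOf(literal);
    }

    public static void main(String[] args) {
        // BIG_INT_COLUMN = 0 now yields a Long constant directly.
        System.out.println(inferConstant("0", ColType.BIGINT).getClass().getSimpleName());
        // INT_COLUMN = 1.1 falls back and yields a Double.
        System.out.println(inferConstant("1.1", ColType.INT).getClass().getSimpleName());
    }
}
```

Walking one level up the expression tree to fetch `otherOperand` is the part the real patch has to do inside the semantic analyzer; the fallback keeps behavior unchanged when no sibling type is available.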
[jira] [Updated] (HIVE-2304) Support PreparedStatement.setObject
[ https://issues.apache.org/jira/browse/HIVE-2304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ido Hadanny updated HIVE-2304: -- Status: Open (was: Patch Available) Support PreparedStatement.setObject --- Key: HIVE-2304 URL: https://issues.apache.org/jira/browse/HIVE-2304 Project: Hive Issue Type: Sub-task Components: JDBC Affects Versions: 0.7.1 Reporter: Ido Hadanny Priority: Minor Fix For: 0.8.0 Original Estimate: 1h Remaining Estimate: 1h PreparedStatement.setObject is important for spring's jdbcTemplate support -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HIVE-2298) Fix UDAFPercentile to tolerate null percentiles
[ https://issues.apache.org/jira/browse/HIVE-2298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070814#comment-13070814 ] Vaibhav Aggarwal commented on HIVE-2298: I will make the style changes and try to add a test case to test this specific case. Fix UDAFPercentile to tolerate null percentiles --- Key: HIVE-2298 URL: https://issues.apache.org/jira/browse/HIVE-2298 Project: Hive Issue Type: Bug Components: UDF Affects Versions: 0.7.0 Reporter: Vaibhav Aggarwal Assignee: Vaibhav Aggarwal Attachments: HIVE-2298.patch UDAFPercentile when passed null percentile list will throw a null pointer exception. Submitting a small fix for that. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
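The null-tolerance fix discussed in this thread can be illustrated with a minimal sketch. This is not the actual UDAFPercentile patch (the method name and interpolation details are invented for the example): the essential change is guarding the null percentile input and returning null instead of dereferencing it and throwing a NullPointerException.

```java
import java.util.*;

public class PercentileSketch {

    // Linear-interpolated percentile over a sorted copy of the values;
    // a null percentile or null/empty value list yields null rather
    // than a NullPointerException.
    static Double percentile(List<Double> values, Double p) {
        if (p == null || values == null || values.isEmpty()) {
            return null;
        }
        List<Double> sorted = new ArrayList<>(values);
        Collections.sort(sorted);
        double pos = p * (sorted.size() - 1);
        int lo = (int) Math.floor(pos);
        int hi = (int) Math.ceil(pos);
        return sorted.get(lo) + (pos - lo) * (sorted.get(hi) - sorted.get(lo));
    }

    public static void main(String[] args) {
        List<Double> vals = Arrays.asList(3.0, 1.0, 2.0);
        System.out.println(percentile(vals, 0.5));  // 2.0
        System.out.println(percentile(vals, null)); // null
    }
}
```

A unit test exercising exactly the null-percentile path, as John Sichi asks for above, is the natural regression test for this class of fix.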
[jira] [Commented] (HIVE-956) Add support of columnar binary serde
[ https://issues.apache.org/jira/browse/HIVE-956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070839#comment-13070839 ] He Yongqiang commented on HIVE-956: --- +1, looks good, will commit after tests pass. Add support of columnar binary serde Key: HIVE-956 URL: https://issues.apache.org/jira/browse/HIVE-956 Project: Hive Issue Type: New Feature Reporter: He Yongqiang Assignee: Krishna Kumar Attachments: HIVE-956v3.patch, HIVE-956v4.patch, HIVE.956.patch.0, HIVE.956.patch.1, HIVE.956.patch.2 -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2236) Cli: Print Hadoop's CPU milliseconds
[ https://issues.apache.org/jira/browse/HIVE-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] He Yongqiang updated HIVE-2236: --- Resolution: Fixed Status: Resolved (was: Patch Available) committed, thanks Siying! Cli: Print Hadoop's CPU milliseconds Key: HIVE-2236 URL: https://issues.apache.org/jira/browse/HIVE-2236 Project: Hive Issue Type: New Feature Components: CLI Reporter: Siying Dong Assignee: Siying Dong Priority: Minor Attachments: HIVE-2236.1.patch, HIVE-2236.2.patch, HIVE-2236.3.patch, HIVE-2236.4.patch CPU Milliseonds information is available from Hadoop's framework. Printing it out to Hive CLI when executing a job will help users to know more about their jobs. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
Re: Review Request: HIVE-2286: ClassCastException when building index with security.authorization turned on
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1137/ --- (Updated 2011-07-25 23:03:22.871042) Review request for hive, John Sichi and Ning Zhang. Changes --- Addressed comments, still need to regenerate a lot of testcase output since this will change the prehook/posthook messages for a lot of testcases. Summary --- Save the original HiveOperation/commandType when we generate the index builder task and restore it after we're done generating the task so that the authorization checks make the right decision when deciding what to do. This addresses bug HIVE-2286. https://issues.apache.org/jira/browse/HIVE-2286 Diffs (updated) - ql/src/java/org/apache/hadoop/hive/ql/Driver.java b278ffe ql/src/test/queries/clientpositive/index_auth.q PRE-CREATION ql/src/test/results/clientnegative/index_compact_entry_limit.q.out fcb2673 ql/src/test/results/clientnegative/index_compact_size_limit.q.out fcb2673 ql/src/test/results/clientpositive/index_auth.q.out PRE-CREATION ql/src/test/results/clientpositive/index_auto.q.out 8d65f98 ql/src/test/results/clientpositive/index_auto_file_format.q.out 194b35e ql/src/test/results/clientpositive/index_auto_multiple.q.out 6b81fc3 ql/src/test/results/clientpositive/index_auto_partitioned.q.out b0635db ql/src/test/results/clientpositive/index_auto_unused.q.out 3631bbc ql/src/test/results/clientpositive/index_bitmap.q.out 8f41ce3 ql/src/test/results/clientpositive/index_bitmap1.q.out 9f638f5 ql/src/test/results/clientpositive/index_bitmap2.q.out e901477 ql/src/test/results/clientpositive/index_bitmap3.q.out 116c973 ql/src/test/results/clientpositive/index_bitmap_auto.q.out cc9d91e ql/src/test/results/clientpositive/index_bitmap_auto_partitioned.q.out 9003eb4 ql/src/test/results/clientpositive/index_bitmap_rc.q.out 9bd3c98 ql/src/test/results/clientpositive/index_compact.q.out c339ec9 ql/src/test/results/clientpositive/index_compact_1.q.out 34ba3ca ql/src/test/results/clientpositive/index_compact_2.q.out 
e8ce238 ql/src/test/results/clientpositive/index_compact_3.q.out d39556d ql/src/test/results/clientpositive/index_creation.q.out 532f07e Diff: https://reviews.apache.org/r/1137/diff Testing --- Added new testcase to TestCliDriver: index_auth.q Thanks, Syed
[jira] [Commented] (HIVE-2286) ClassCastException when building index with security.authorization turned on
[ https://issues.apache.org/jira/browse/HIVE-2286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070847#comment-13070847 ] jirapos...@reviews.apache.org commented on HIVE-2286: - --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/1137/ --- (Updated 2011-07-25 23:03:22.871042) Review request for hive, John Sichi and Ning Zhang. Changes --- Addressed comments, still need to regenerate a lot of testcase output since this will change the prehook/posthook messages for a lot of testcases. Summary --- Save the original HiveOperation/commandType when we generate the index builder task and restore it after we're done generating the task so that the authorization checks make the right decision when deciding what to do. This addresses bug HIVE-2286. https://issues.apache.org/jira/browse/HIVE-2286 Diffs (updated) - ql/src/java/org/apache/hadoop/hive/ql/Driver.java b278ffe ql/src/test/queries/clientpositive/index_auth.q PRE-CREATION ql/src/test/results/clientnegative/index_compact_entry_limit.q.out fcb2673 ql/src/test/results/clientnegative/index_compact_size_limit.q.out fcb2673 ql/src/test/results/clientpositive/index_auth.q.out PRE-CREATION ql/src/test/results/clientpositive/index_auto.q.out 8d65f98 ql/src/test/results/clientpositive/index_auto_file_format.q.out 194b35e ql/src/test/results/clientpositive/index_auto_multiple.q.out 6b81fc3 ql/src/test/results/clientpositive/index_auto_partitioned.q.out b0635db ql/src/test/results/clientpositive/index_auto_unused.q.out 3631bbc ql/src/test/results/clientpositive/index_bitmap.q.out 8f41ce3 ql/src/test/results/clientpositive/index_bitmap1.q.out 9f638f5 ql/src/test/results/clientpositive/index_bitmap2.q.out e901477 ql/src/test/results/clientpositive/index_bitmap3.q.out 116c973 ql/src/test/results/clientpositive/index_bitmap_auto.q.out cc9d91e ql/src/test/results/clientpositive/index_bitmap_auto_partitioned.q.out 9003eb4 
ql/src/test/results/clientpositive/index_bitmap_rc.q.out 9bd3c98 ql/src/test/results/clientpositive/index_compact.q.out c339ec9 ql/src/test/results/clientpositive/index_compact_1.q.out 34ba3ca ql/src/test/results/clientpositive/index_compact_2.q.out e8ce238 ql/src/test/results/clientpositive/index_compact_3.q.out d39556d ql/src/test/results/clientpositive/index_creation.q.out 532f07e Diff: https://reviews.apache.org/r/1137/diff Testing --- Added new testcase to TestCliDriver: index_auth.q Thanks, Syed ClassCastException when building index with security.authorization turned on Key: HIVE-2286 URL: https://issues.apache.org/jira/browse/HIVE-2286 Project: Hive Issue Type: Bug Reporter: Syed S. Albiz Assignee: Syed S. Albiz Attachments: HIVE-2286.1.patch, HIVE-2286.2.patch When trying to build an index with authorization checks turned on, hive issues the following ClassCastException: org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer cannot be cast to org.apache.hadoop.hive.ql.parse.SemanticAnalyzer at org.apache.hadoop.hive.ql.Driver.doAuthorization(Driver.java:540) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:848) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:224) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:358) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:293) at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:385) at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:392) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:567) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.jav a:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessor Impl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at 
org.apache.hadoop.util.RunJar.main(RunJar.java:156) -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
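The save/restore approach described in the HIVE-2286 review can be sketched generically. This is a hedged illustration, not the Driver.java change itself (the class and enum here are invented): the outer operation is stashed before compiling the nested index-builder task and restored in a finally block, so authorization checks that run afterwards see the user's original command type even if the nested compilation fails.

```java
public class OperationContext {

    enum HiveOperation { QUERY, CREATEINDEX, ALTERINDEX_REBUILD }

    private HiveOperation current = HiveOperation.QUERY;

    HiveOperation current() { return current; }

    // Run body with a temporarily swapped-in operation, always restoring
    // the saved one, even when body throws.
    void runNested(HiveOperation nested, Runnable body) {
        HiveOperation saved = current;  // save the outer operation
        current = nested;
        try {
            body.run();
        } finally {
            current = saved;            // restore on both success and failure
        }
    }

    public static void main(String[] args) {
        OperationContext ctx = new OperationContext();
        ctx.runNested(HiveOperation.ALTERINDEX_REBUILD,
                () -> System.out.println(ctx.current())); // ALTERINDEX_REBUILD
        System.out.println(ctx.current()); // QUERY
    }
}
```

The ClassCastException in the stack trace above arose because the authorization path assumed the analyzer type matched the recorded operation; restoring the operation after the nested task keeps that assumption valid.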
[jira] [Commented] (HIVE-2282) Local mode needs to work well with block sampling
[ https://issues.apache.org/jira/browse/HIVE-2282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070865#comment-13070865 ] Siying Dong commented on HIVE-2282: --- I don't know why but I ran the test suites twice and both failed. Can you rebase your codes and try to run the whole test suites and see whether all the tests pass? I'll try again too. Local mode needs to work well with block sampling - Key: HIVE-2282 URL: https://issues.apache.org/jira/browse/HIVE-2282 Project: Hive Issue Type: Improvement Reporter: Siying Dong Assignee: Kevin Wilfong Attachments: HIVE-2282.1.patch.txt, HIVE-2282.2.patch.txt, HIVE-2282.3.patch.txt, HIVE-2282.4.patch.txt Currently, if block sampling is enabled and large set of data are sampled to a small set, local mode needs to be kicked in. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HIVE-2128) Automatic Indexing with multiple tables
[ https://issues.apache.org/jira/browse/HIVE-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Sichi updated HIVE-2128: - Resolution: Fixed Fix Version/s: 0.8.0 Hadoop Flags: [Reviewed] Status: Resolved (was: Patch Available) Committed. Thanks Syed! Automatic Indexing with multiple tables --- Key: HIVE-2128 URL: https://issues.apache.org/jira/browse/HIVE-2128 Project: Hive Issue Type: Improvement Components: Indexing Affects Versions: 0.8.0 Reporter: Russell Melick Assignee: Syed S. Albiz Fix For: 0.8.0 Attachments: HIVE-2128.1.patch, HIVE-2128.1.patch, HIVE-2128.2.patch, HIVE-2128.4.patch, HIVE-2128.5.patch, HIVE-2128.6.patch, HIVE-2128.7.patch, HIVE-2128.8.patch Make automatic indexing work with jobs which access multiple tables. We'll probably need to modify the way that the index input format works in order to associate index formats/files with specific tables. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
Build failed in Jenkins: Hive-trunk-h0.21 #847
See https://builds.apache.org/job/Hive-trunk-h0.21/847/ -- [...truncated 31477 lines...] [junit] OK [junit] PREHOOK: query: select count(1) as cnt from testhivedrivertable [junit] PREHOOK: type: QUERY [junit] PREHOOK: Input: default@testhivedrivertable [junit] PREHOOK: Output: file:/tmp/jenkins/hive_2011-07-25_17-25-03_033_3148356292324665706/-mr-1 [junit] Total MapReduce jobs = 1 [junit] Launching Job 1 out of 1 [junit] Number of reduce tasks determined at compile time: 1 [junit] In order to change the average load for a reducer (in bytes): [junit] set hive.exec.reducers.bytes.per.reducer=number [junit] In order to limit the maximum number of reducers: [junit] set hive.exec.reducers.max=number [junit] In order to set a constant number of reducers: [junit] set mapred.reduce.tasks=number [junit] Job running in-process (local Hadoop) [junit] Hadoop job information for null: number of mappers: 0; number of reducers: 0 [junit] 2011-07-25 17:25:06,188 null map = 100%, reduce = 100% [junit] Ended Job = job_local_0001 [junit] POSTHOOK: query: select count(1) as cnt from testhivedrivertable [junit] POSTHOOK: type: QUERY [junit] POSTHOOK: Input: default@testhivedrivertable [junit] POSTHOOK: Output: file:/tmp/jenkins/hive_2011-07-25_17-25-03_033_3148356292324665706/-mr-1 [junit] OK [junit] PREHOOK: query: drop table testhivedrivertable [junit] PREHOOK: type: DROPTABLE [junit] PREHOOK: Input: default@testhivedrivertable [junit] PREHOOK: Output: default@testhivedrivertable [junit] POSTHOOK: query: drop table testhivedrivertable [junit] POSTHOOK: type: DROPTABLE [junit] POSTHOOK: Input: default@testhivedrivertable [junit] POSTHOOK: Output: default@testhivedrivertable [junit] OK [junit] Hive history file=https://builds.apache.org/job/Hive-trunk-h0.21/ws/hive/build/service/tmp/hive_job_log_jenkins_201107251725_1300676679.txt [junit] PREHOOK: query: drop table testhivedrivertable [junit] PREHOOK: type: DROPTABLE [junit] POSTHOOK: query: drop table testhivedrivertable [junit] 
POSTHOOK: type: DROPTABLE [junit] OK [junit] PREHOOK: query: create table testhivedrivertable (num int) [junit] PREHOOK: type: CREATETABLE [junit] POSTHOOK: query: create table testhivedrivertable (num int) [junit] POSTHOOK: type: CREATETABLE [junit] POSTHOOK: Output: default@testhivedrivertable [junit] OK [junit] PREHOOK: query: load data local inpath 'https://builds.apache.org/job/Hive-trunk-h0.21/ws/hive/data/files/kv1.txt' into table testhivedrivertable [junit] PREHOOK: type: LOAD [junit] PREHOOK: Output: default@testhivedrivertable [junit] Copying data from https://builds.apache.org/job/Hive-trunk-h0.21/ws/hive/data/files/kv1.txt [junit] Loading data to table default.testhivedrivertable [junit] POSTHOOK: query: load data local inpath 'https://builds.apache.org/job/Hive-trunk-h0.21/ws/hive/data/files/kv1.txt' into table testhivedrivertable [junit] POSTHOOK: type: LOAD [junit] POSTHOOK: Output: default@testhivedrivertable [junit] OK [junit] PREHOOK: query: select * from testhivedrivertable limit 10 [junit] PREHOOK: type: QUERY [junit] PREHOOK: Input: default@testhivedrivertable [junit] PREHOOK: Output: file:/tmp/jenkins/hive_2011-07-25_17-25-07_731_8960630023320631916/-mr-1 [junit] POSTHOOK: query: select * from testhivedrivertable limit 10 [junit] POSTHOOK: type: QUERY [junit] POSTHOOK: Input: default@testhivedrivertable [junit] POSTHOOK: Output: file:/tmp/jenkins/hive_2011-07-25_17-25-07_731_8960630023320631916/-mr-1 [junit] OK [junit] PREHOOK: query: drop table testhivedrivertable [junit] PREHOOK: type: DROPTABLE [junit] PREHOOK: Input: default@testhivedrivertable [junit] PREHOOK: Output: default@testhivedrivertable [junit] POSTHOOK: query: drop table testhivedrivertable [junit] POSTHOOK: type: DROPTABLE [junit] POSTHOOK: Input: default@testhivedrivertable [junit] POSTHOOK: Output: default@testhivedrivertable [junit] OK [junit] Hive history 
file=https://builds.apache.org/job/Hive-trunk-h0.21/ws/hive/build/service/tmp/hive_job_log_jenkins_201107251725_1808867213.txt [junit] PREHOOK: query: drop table testhivedrivertable [junit] PREHOOK: type: DROPTABLE [junit] POSTHOOK: query: drop table testhivedrivertable [junit] POSTHOOK: type: DROPTABLE [junit] OK [junit] PREHOOK: query: create table testhivedrivertable (num int) [junit] PREHOOK: type: CREATETABLE [junit] POSTHOOK: query: create table testhivedrivertable (num int) [junit] POSTHOOK: type: CREATETABLE [junit] POSTHOOK: Output: default@testhivedrivertable [junit] OK [junit] PREHOOK: query: drop table testhivedrivertable [junit] PREHOOK: type: DROPTABLE [junit] PREHOOK: Input:
[jira] [Created] (HIVE-2305) UNION ALL on different types throws runtime exception
UNION ALL on different types throws runtime exception - Key: HIVE-2305 URL: https://issues.apache.org/jira/browse/HIVE-2305 Project: Hive Issue Type: Bug Affects Versions: 0.7.1 Reporter: Franklin Hu Assignee: Franklin Hu Ex: SELECT * FROM (SELECT 123 FROM ... UNION ALL SELECT '123' FROM ..) t; Unioning columns of different types currently does not work. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Work started] (HIVE-2305) UNION ALL on different types throws runtime exception
[ https://issues.apache.org/jira/browse/HIVE-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-2305 started by Franklin Hu. UNION ALL on different types throws runtime exception - Key: HIVE-2305 URL: https://issues.apache.org/jira/browse/HIVE-2305 Project: Hive Issue Type: Bug Affects Versions: 0.7.1 Reporter: Franklin Hu Assignee: Franklin Hu Ex: SELECT * (SELECT 123 FROM ... UNION ALL SELECT '123' FROM ..) t; Unioning columns of different types currently does not work.
[jira] [Updated] (HIVE-2305) UNION ALL on different types throws runtime exception
[ https://issues.apache.org/jira/browse/HIVE-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Franklin Hu updated HIVE-2305: -- Attachment: hive-2305.1.patch For multiple input directories, UnionOperator was not correctly setting the object inspector for the new row schema. Also fixed an incorrect call to get a common class. UNION ALL on different types throws runtime exception - Key: HIVE-2305 URL: https://issues.apache.org/jira/browse/HIVE-2305 Project: Hive Issue Type: Bug Affects Versions: 0.7.1 Reporter: Franklin Hu Assignee: Franklin Hu Attachments: hive-2305.1.patch Ex: SELECT * (SELECT 123 FROM ... UNION ALL SELECT '123' FROM ..) t; Unioning columns of different types currently does not work.
[jira] [Updated] (HIVE-2305) UNION ALL on different types throws runtime exception
[ https://issues.apache.org/jira/browse/HIVE-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Franklin Hu updated HIVE-2305: -- Description: Ex: SELECT * (SELECT 123 FROM ... UNION ALL SELECT '123' FROM ..) t; Unioning columns of different types currently throws runtime exceptions. was: Ex: SELECT * (SELECT 123 FROM ... UNION ALL SELECT '123' FROM ..) t; Unioning columns of different types currently does not work. UNION ALL on different types throws runtime exception - Key: HIVE-2305 URL: https://issues.apache.org/jira/browse/HIVE-2305 Project: Hive Issue Type: Bug Affects Versions: 0.7.1 Reporter: Franklin Hu Assignee: Franklin Hu Attachments: hive-2305.1.patch Ex: SELECT * (SELECT 123 FROM ... UNION ALL SELECT '123' FROM ..) t; Unioning columns of different types currently throws runtime exceptions.
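Until branch types are reconciled automatically, a common workaround is to CAST each branch of the union to the same type so the result schema is unambiguous. A minimal HiveQL sketch, using Hive 0.7's subquery-union form; the table `src` and alias `col1` are hypothetical:

```sql
-- Hypothetical workaround: give both UNION ALL branches the same type
-- by casting the int literal to a string explicitly.
SELECT * FROM (
  SELECT CAST(123 AS STRING) AS col1 FROM src
  UNION ALL
  SELECT '123' AS col1 FROM src
) t;
```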
[jira] [Commented] (HIVE-2226) Add API to retrieve table names by an arbitrary filter, e.g., by owner, retention, parameters, etc.
[ https://issues.apache.org/jira/browse/HIVE-2226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13070875#comment-13070875 ] Paul Yang commented on HIVE-2226: - +1 Will test and commit Add API to retrieve table names by an arbitrary filter, e.g., by owner, retention, parameters, etc. --- Key: HIVE-2226 URL: https://issues.apache.org/jira/browse/HIVE-2226 Project: Hive Issue Type: Improvement Components: Metastore Reporter: Sohan Jain Assignee: Sohan Jain Attachments: HIVE-2226.1.patch, HIVE-2226.3.patch Create a function called get_table_names_by_filter that returns a list of table names in a database that match a certain filter. The filter should operate similarly to the one in HIVE-1609. Initially, you should be able to prune the table list based on owner, retention, or table parameter key/values. The filtering should take place at the JDO level for efficiency/speed.
[jira] [Updated] (HIVE-2226) Add API to retrieve table names by an arbitrary filter, e.g., by owner, retention, parameters, etc.
[ https://issues.apache.org/jira/browse/HIVE-2226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sohan Jain updated HIVE-2226: - Attachment: HIVE-2226.4.patch include auto-gen thrift files Add API to retrieve table names by an arbitrary filter, e.g., by owner, retention, parameters, etc. --- Key: HIVE-2226 URL: https://issues.apache.org/jira/browse/HIVE-2226 Project: Hive Issue Type: Improvement Components: Metastore Reporter: Sohan Jain Assignee: Sohan Jain Attachments: HIVE-2226.1.patch, HIVE-2226.3.patch, HIVE-2226.4.patch Create a function called get_table_names_by_filter that returns a list of table names in a database that match a certain filter. The filter should operate similarly to the one in HIVE-1609. Initially, you should be able to prune the table list based on owner, retention, or table parameter key/values. The filtering should take place at the JDO level for efficiency/speed.
[jira] [Created] (HIVE-2306) timestamp attribute to be mapped for read or write, and then import data of timestamp to hbase's table from hive
timestamp attribute to be mapped for read or write, and then import data of timestamp to hbase's table from hive Key: HIVE-2306 URL: https://issues.apache.org/jira/browse/HIVE-2306 Project: Hive Issue Type: Improvement Components: HBase Handler Affects Versions: 0.8.0 Reporter: Jianyi Zhang The current column mapping doesn't support the timestamp column being mapped for read or write, or importing timestamp data into an HBase table from Hive. I found that HIVE-1228 mentioned this issue, but it did not address the :timestamp requirement in the end. And https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration says that there is currently no way to access the HBase timestamp attribute, and queries always access data with the latest timestamp. This would allow the timestamp to be mapped into Hive (just like Get in the HBase API) or INSERT OVERWRITE TABLE hbase_table_1 with a timestamp from Hive (like Put in the HBase API)?
[jira] [Updated] (HIVE-2306) timestamp attribute to be mapped for read or write, and then import data of timestamp to hbase's table from hive
[ https://issues.apache.org/jira/browse/HIVE-2306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jianyi Zhang updated HIVE-2306: --- Description: The current column mapping doesn't support the timestamp column being mapped for read or write, or importing timestamp data into an HBase table from Hive. I found that HIVE-1228 mentioned this issue, but it did not address the :timestamp requirement in the end. And https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration says that there is currently no way to access the HBase timestamp attribute, and queries always access data with the latest timestamp. This would allow the timestamp to be mapped into Hive (just like Get in the HBase API) or INSERT OVERWRITE TABLE hbase_table_1 with a timestamp from Hive (like Put in the HBase API)? was: The current column mapping doesn't support the timestamp column being mapped for read or write, or importing timestamp data into an HBase table from Hive. I found that HIVE-1228 mentioned this issue, but it did not address the :timestamp requirement in the end. And https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration says that there is currently no way to access the HBase timestamp attribute, and queries always access data with the latest timestamp. this would allow the timestamp to be mapped into Hive (just like Get in the HBase API) or INSERT OVERWRITE TABLE hbase_table_1 with a timestamp from Hive (like Put in the HBase API)? timestamp attribute to be mapped for read or write, and then import data of timestamp to hbase's table from hive Key: HIVE-2306 URL: https://issues.apache.org/jira/browse/HIVE-2306 Project: Hive Issue Type: Improvement Components: HBase Handler Affects Versions: 0.8.0 Reporter: Jianyi Zhang Original Estimate: 96h Remaining Estimate: 96h The current column mapping doesn't support the timestamp column being mapped for read or write, or importing timestamp data into an HBase table from Hive. I found that HIVE-1228 mentioned this issue, but it did not address the :timestamp requirement in the end.
And https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration says that there is currently no way to access the HBase timestamp attribute, and queries always access data with the latest timestamp. This would allow the timestamp to be mapped into Hive (just like Get in the HBase API) or INSERT OVERWRITE TABLE hbase_table_1 with a timestamp from Hive (like Put in the HBase API)?
[jira] [Updated] (HIVE-956) Add support of columnar binary serde
[ https://issues.apache.org/jira/browse/HIVE-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] He Yongqiang updated HIVE-956: -- Resolution: Fixed Status: Resolved (was: Patch Available) committed, thanks Krishna Kumar! Add support of columnar binary serde Key: HIVE-956 URL: https://issues.apache.org/jira/browse/HIVE-956 Project: Hive Issue Type: New Feature Reporter: He Yongqiang Assignee: Krishna Kumar Attachments: HIVE-956v3.patch, HIVE-956v4.patch, HIVE.956.patch.0, HIVE.956.patch.1, HIVE.956.patch.2
Jenkins build is back to normal : Hive-trunk-h0.21 #848
See https://builds.apache.org/job/Hive-trunk-h0.21/848/changes
[jira] [Commented] (HIVE-2299) Optimize Hive query startup time for multiple partitions
[ https://issues.apache.org/jira/browse/HIVE-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13070945#comment-13070945 ] Hudson commented on HIVE-2299: -- Integrated in Hive-trunk-h0.21 #848 (See [https://builds.apache.org/job/Hive-trunk-h0.21/848/]) HIVE-2299. Optimize Hive query startup time for multiple partitions (Vaibhav Aggarwal via cws) cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1150928 Files : * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java Optimize Hive query startup time for multiple partitions Key: HIVE-2299 URL: https://issues.apache.org/jira/browse/HIVE-2299 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Vaibhav Aggarwal Assignee: Vaibhav Aggarwal Fix For: 0.8.0 Attachments: HIVE-2299.patch Added an optimization to the way input splits are computed. Reduced an O(n^2) operation to an O(n) operation.
[jira] [Commented] (HIVE-2128) Automatic Indexing with multiple tables
[ https://issues.apache.org/jira/browse/HIVE-2128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13070943#comment-13070943 ] Hudson commented on HIVE-2128: -- Integrated in Hive-trunk-h0.21 #848 (See [https://builds.apache.org/job/Hive-trunk-h0.21/848/]) HIVE-2128. Automatic Indexing with multiple tables. (Syed Albiz via jvs) jvs : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1150962 Files : * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/HiveIndexResult.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/HiveIndexedInputFormat.java * /hive/trunk/ql/src/test/results/clientpositive/index_auto_mult_tables_compact.q.out * /hive/trunk/ql/src/test/queries/clientpositive/index_auto_mult_tables_compact.q * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapredWork.java * /hive/trunk/ql/src/test/queries/clientpositive/index_auto_self_join.q * /hive/trunk/ql/src/test/results/clientpositive/index_bitmap_auto_partitioned.q.out * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/index/IndexWhereTaskDispatcher.java * /hive/trunk/ql/src/test/results/clientpositive/index_auto_self_join.q.out * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/bitmap/BitmapIndexHandler.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/index/IndexWhereProcessor.java * /hive/trunk/ql/src/test/results/clientpositive/index_auto_mult_tables.q.out * /hive/trunk/ql/src/test/queries/clientpositive/index_auto_mult_tables.q Automatic Indexing with multiple tables --- Key: HIVE-2128 URL: https://issues.apache.org/jira/browse/HIVE-2128 Project: Hive Issue Type: Improvement Components: Indexing Affects Versions: 0.8.0 Reporter: Russell Melick Assignee: Syed S. Albiz Fix For: 0.8.0 Attachments: HIVE-2128.1.patch, HIVE-2128.1.patch, HIVE-2128.2.patch, HIVE-2128.4.patch, HIVE-2128.5.patch, HIVE-2128.6.patch, HIVE-2128.7.patch, HIVE-2128.8.patch Make automatic indexing work with jobs which access multiple tables. We'll probably need to modify the way that the index input format works in order to associate index formats/files with specific tables.
[jira] [Commented] (HIVE-2236) Cli: Print Hadoop's CPU milliseconds
[ https://issues.apache.org/jira/browse/HIVE-2236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13070944#comment-13070944 ] Hudson commented on HIVE-2236: -- Integrated in Hive-trunk-h0.21 #848 (See [https://builds.apache.org/job/Hive-trunk-h0.21/848/]) HIVE-2236: Print Hadoop's CPU milliseconds in Cli. (Siying Dong via He Yongqiang) heyongqiang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1150945 Files : * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HadoopJobExecHelper.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/MapRedStats.java * /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java Cli: Print Hadoop's CPU milliseconds Key: HIVE-2236 URL: https://issues.apache.org/jira/browse/HIVE-2236 Project: Hive Issue Type: New Feature Components: CLI Reporter: Siying Dong Assignee: Siying Dong Priority: Minor Attachments: HIVE-2236.1.patch, HIVE-2236.2.patch, HIVE-2236.3.patch, HIVE-2236.4.patch CPU milliseconds information is available from Hadoop's framework. Printing it out to the Hive CLI when executing a job will help users know more about their jobs.
Re: Hudson builds
Thanks, just got a clean build notification, finally! JVS On Jul 25, 2011, at 1:25 PM, Carl Steinbach wrote: The Build section of the job configuration had two ant invocations listed, and the second one didn't have any targets specified. I removed the second invocation and expect that this will fix the problem. On Mon, Jul 25, 2011 at 1:18 PM, John Sichi jsi...@fb.com wrote: They've been failing consistently for 20 days. Seems like tests are often passing fine, but something else goes wrong, either right after (or during?) maven, where it is trying to spawn ant? See the very end of this log for an example: https://builds.apache.org/job/Hive-trunk-h0.21/846/console Did some configuration change on the build machine? JVS maven-publish-artifact : [artifact:install-provider] Installing provider: org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime [artifact:deploy] Deploying to https://repository.apache.org/content/repositories/snapshots [artifact:deploy] [INFO] Retrieving previous build number from apache.snapshots.https [artifact:deploy] Uploading: org/apache/hive/hive-shims/0.8.0-SNAPSHOT/hive-shims-0.8.0-20110725.192514-42.jar to repository apache.snapshots.https at https://repository.apache.org/content/repositories/snapshots [artifact:deploy] Transferring 76K from apache.snapshots.https [artifact:deploy] Uploaded 76K [artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https [artifact:deploy] [INFO] Uploading repository metadata for: 'snapshot org.apache.hive:hive-shims:0.8.0-SNAPSHOT' [artifact:deploy] [INFO] Retrieving previous metadata from apache.snapshots.https [artifact:deploy] [INFO] Uploading repository metadata for: 'artifact org.apache.hive:hive-shims' [artifact:deploy] [INFO] Uploading project information for hive-shims 0.8.0-20110725.192514-42 BUILD SUCCESSFUL Total time: 207 minutes 4 seconds [hive] $ ant FATAL: command execution failed. Maybe you need to configure the job to choose one of your Ant installations?
java.io.IOException: Cannot run program "ant" (in directory "/home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive"): java.io.IOException: error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
    at hudson.Proc$LocalProc.<init>(Proc.java:244)
    at hudson.Proc$LocalProc.<init>(Proc.java:216)
    at hudson.Launcher$LocalLauncher.launch(Launcher.java:698)
    at hudson.Launcher$ProcStarter.start(Launcher.java:329)
[jira] [Created] (HIVE-2307) Schema creation scripts for PostgreSQL use bit(1) instead of boolean
Schema creation scripts for PostgreSQL use bit(1) instead of boolean Key: HIVE-2307 URL: https://issues.apache.org/jira/browse/HIVE-2307 Project: Hive Issue Type: Bug Components: Configuration, Metastore Affects Versions: 0.5.0, 0.6.0, 0.7.0 Reporter: Esteban Gutierrez Assignee: Esteban Gutierrez Fix For: 0.7.1, 0.8.0 When using the DDL SQL scripts to create the Metastore, tables like SEQUENCE_TABLE are missing, forcing the user to change the configuration so that DataNucleus does all the provisioning of the Metastore tables. Adding the missing table definitions to the DDL scripts will allow users to have a functional Hive Metastore without granting additional privileges to the Metastore user and/or enabling the datanucleus.autoCreateSchema property in hive-site.xml. [After running hive-schema-0.7.0.mysql.sql and revoking ALTER and CREATE privileges from 'metastoreuser'] hive> show tables; FAILED: Error in metadata: javax.jdo.JDOException: Exception thrown calling table.exists() for `SEQUENCE_TABLE` NestedThrowables: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: CREATE command denied to user 'metastoreuser'@'localhost' for table 'SEQUENCE_TABLE' FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
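For context, the table that DataNucleus auto-creates here is small; a sketch of the DDL the scripts would need to add (exact column sizes and charset are an assumption and should be checked against what DataNucleus actually generates):

```sql
-- Sketch of the SEQUENCE_TABLE definition missing from the schema scripts
-- (column sizes are assumed; verify against the DataNucleus-generated table).
CREATE TABLE SEQUENCE_TABLE (
  SEQUENCE_NAME VARCHAR(255) NOT NULL,
  NEXT_VAL BIGINT NOT NULL,
  PRIMARY KEY (SEQUENCE_NAME)
);
```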