Hive-trunk-h0.21 - Build # 1517 - Still Failing

2012-06-28 Thread Apache Jenkins Server
Changes for Build #1509

Changes for Build #1510

Changes for Build #1511

Changes for Build #1512

Changes for Build #1513

Changes for Build #1514

Changes for Build #1515
[ecapriolo] HIVE-3180 Fix Eclipse classpath template broken in HIVE-3128. Carl 
Steinbach (via egc)

[hashutosh] HIVE-3048 : Collect_set Aggregate does uneccesary check for value. 
(Ed Capriolo via Ashutosh Chauhan)


Changes for Build #1516

Changes for Build #1517
[ecapriolo] HIVE-3127 Pass hconf values as XML instead of command line 
arguments to child JVM. Kanna Karanam (via egc)




1 tests failed.
FAILED:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1

Error Message:
Unexpected exception See build/ql/tmp/hive.log, or try ant test ... 
-Dtest.silent=false to get more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get 
more logs.
at junit.framework.Assert.fail(Assert.java:50)
at 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:10642)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)




The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1517)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1517/ to 
view the results.

[jira] [Commented] (HIVE-3127) Pass hconf values as XML instead of command line arguments to child JVM

2012-06-28 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-3127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13402884#comment-13402884 ]

Hudson commented on HIVE-3127:
--

Integrated in Hive-trunk-h0.21 #1517 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1517/])
HIVE-3127 Pass hconf values as XML instead of command line arguments to 
child JVM. Kanna Karanam (via egc) (Revision 1354781)

 Result = FAILURE
ecapriolo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1354781
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapRedTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapredLocalTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/merge/BlockMergeTask.java


 Pass hconf values as XML instead of command line arguments to child JVM
 ---

 Key: HIVE-3127
 URL: https://issues.apache.org/jira/browse/HIVE-3127
 Project: Hive
  Issue Type: Bug
  Components: Configuration, Windows
Affects Versions: 0.9.0, 0.10.0, 0.9.1
Reporter: Kanna Karanam
Assignee: Kanna Karanam
  Labels: Windows
 Fix For: 0.10.0

 Attachments: HIVE-3127.1.patch.txt, HIVE-3127.2.patch.txt, 
 HIVE-3127.3.patch.txt


 The maximum length of a DOS command string is 8,191 characters in recent 
 Windows versions (http://support.microsoft.com/kb/830473). This limit is 
 easily exceeded when individual -hconf values are appended to the command 
 string. To work around this problem, write all changed hconf values to a 
 temp file and pass the temp file's path to the child JVM, which reads the 
 file and initializes the -hconf parameters from it.
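The workaround described above (serializing configuration overrides to a temp XML file rather than onto the command line) can be sketched with the JDK's `Properties` XML support. This is only an illustration of the technique; the actual Hive patch works with Hadoop's `Configuration` objects, and the class and method names here are made up.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class HconfTempFileSketch {
    // Parent side: write changed hconf values to a temp XML file
    // instead of appending "-hconf key=value" pairs to the command line,
    // avoiding the 8,191-character limit.
    static Path writeHconf(Properties changed) throws IOException {
        Path tmp = Files.createTempFile("hconf", ".xml");
        try (OutputStream out = Files.newOutputStream(tmp)) {
            changed.storeToXML(out, "changed hconf values");
        }
        return tmp;
    }

    // Child side: read the file back and initialize the configuration.
    static Properties readHconf(Path file) throws IOException {
        Properties p = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            p.loadFromXML(in);
        }
        return p;
    }

    public static void main(String[] args) throws IOException {
        Properties changed = new Properties();
        changed.setProperty("hive.exec.scratchdir", "/tmp/hive");
        Path f = writeHconf(changed);
        Properties roundTrip = readHconf(f);
        System.out.println(roundTrip.getProperty("hive.exec.scratchdir"));
        Files.delete(f);
    }
}
```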

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3126) Generate & build the velocity based Hive tests on windows by fixing the path issues

2012-06-28 Thread Carl Steinbach (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13402890#comment-13402890 ]

Carl Steinbach commented on HIVE-3126:
--

@Kanna: I added some more comments on reviewboard. Thanks.

 Generate & build the velocity based Hive tests on windows by fixing the path 
 issues
 ---

 Key: HIVE-3126
 URL: https://issues.apache.org/jira/browse/HIVE-3126
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.9.0, 0.10.0, 0.9.1
Reporter: Kanna Karanam
Assignee: Kanna Karanam
  Labels: Windows, test
 Fix For: 0.10.0

 Attachments: HIVE-3126.1.patch.txt, HIVE-3126.2.patch.txt, 
 HIVE-3126.3.patch.txt


 1) Escape the backslash in the canonical path if the unit test runs on Windows.
 2) Diff comparison:
  a. Ignore the extra spacing on Windows.
  b. Ignore the different line endings on Windows & Unix.
  c. Convert the file paths to Windows-specific form (handle spaces, etc.).
 3) Set the right file scheme & classpath separators while invoking the junit 
 task from 
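The Windows fixes listed above largely come down to string normalization before paths are substituted into generated tests and before outputs are diffed. A hedged sketch of the two transformations; the method names are mine, not the patch's:

```java
public class TestPathNormalizer {
    // 1) Escape backslashes so a Windows canonical path survives
    //    substitution into a generated test file (where "\" would
    //    otherwise be read as an escape character).
    static String escapeBackslashes(String path) {
        return path.replace("\\", "\\\\");
    }

    // 2a/2b) Normalize trailing whitespace and CRLF line endings so
    //        diff comparison treats Windows and Unix output as equal.
    static String normalizeForDiff(String text) {
        return text.replace("\r\n", "\n").replaceAll("[ \t]+\n", "\n");
    }

    public static void main(String[] args) {
        System.out.println(escapeBackslashes("C:\\hive\\build"));
        System.out.println(normalizeForDiff("a  \r\nb\r\n").equals("a\nb\n"));
    }
}
```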





[jira] [Updated] (HIVE-3126) Generate & build the velocity based Hive tests on windows by fixing the path issues

2012-06-28 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-3126:
-

Status: Open  (was: Patch Available)

 Generate & build the velocity based Hive tests on windows by fixing the path 
 issues
 ---

 Key: HIVE-3126
 URL: https://issues.apache.org/jira/browse/HIVE-3126
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.9.0, 0.10.0, 0.9.1
Reporter: Kanna Karanam
Assignee: Kanna Karanam
  Labels: Windows, test
 Fix For: 0.10.0

 Attachments: HIVE-3126.1.patch.txt, HIVE-3126.2.patch.txt, 
 HIVE-3126.3.patch.txt


 1) Escape the backslash in the canonical path if the unit test runs on Windows.
 2) Diff comparison:
  a. Ignore the extra spacing on Windows.
  b. Ignore the different line endings on Windows & Unix.
  c. Convert the file paths to Windows-specific form (handle spaces, etc.).
 3) Set the right file scheme & classpath separators while invoking the junit 
 task from 





[jira] [Commented] (HIVE-3168) LazyBinaryObjectInspector.getPrimitiveJavaObject copies beyond length of underlying BytesWritable

2012-06-28 Thread Carl Steinbach (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13402918#comment-13402918 ]

Carl Steinbach commented on HIVE-3168:
--

@Thejas: Is the patch ready for review?

 LazyBinaryObjectInspector.getPrimitiveJavaObject copies beyond length of 
 underlying BytesWritable
 -

 Key: HIVE-3168
 URL: https://issues.apache.org/jira/browse/HIVE-3168
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.9.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.10.0, 0.9.1

 Attachments: HIVE-3168.1.patch


 LazyBinaryObjectInspector.getPrimitiveJavaObject copies the full capacity of 
 the LazyBinary's underlying BytesWritable object, which can be greater than 
 the size of the actual contents. 
 This leads to additional characters at the end of the returned ByteArrayRef. 
 When the LazyBinary object gets re-used, there can be remnants of the later 
 portion of the previous entry. 
 This was not seen while reading through Hive queries, which I think is 
 because a copy elsewhere (probably the LazyBinary copy constructor) creates 
 LazyBinary objects with length == capacity. It was seen when MR or Pig used 
 HCatalog to read the data.
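The bug pattern described above (copying a buffer's full backing capacity instead of its logical length) is easy to reproduce with plain arrays. A BytesWritable's backing array can be larger than getLength() after re-use, so a correct copy must take only the first length bytes. A toy illustration with made-up names, not the actual Hive code:

```java
import java.util.Arrays;

public class CapacityVsLength {
    // Simulates a re-used BytesWritable: the backing array (capacity 8)
    // still holds remnants of a previous, longer value.
    static byte[] backing = {'n', 'e', 'w', 'O', 'L', 'D', '!', '!'};
    static int length = 3; // logical contents: "new"

    // Buggy: copies the whole backing array, leaking "OLD!!" remnants
    // of the previous entry into the result.
    static byte[] buggyCopy() {
        return Arrays.copyOf(backing, backing.length);
    }

    // Fixed: copy only the first `length` bytes, the logical contents.
    static byte[] fixedCopy() {
        return Arrays.copyOf(backing, length);
    }

    public static void main(String[] args) {
        System.out.println(new String(buggyCopy())); // remnants visible
        System.out.println(new String(fixedCopy())); // just the contents
    }
}
```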





[jira] [Created] (HIVE-3208) Multiformat tables and partitions

2012-06-28 Thread Carl Steinbach (JIRA)
Carl Steinbach created HIVE-3208:


 Summary: Multiformat tables and partitions
 Key: HIVE-3208
 URL: https://issues.apache.org/jira/browse/HIVE-3208
 Project: Hive
  Issue Type: New Feature
  Components: Metastore
Reporter: Carl Steinbach
Assignee: Carl Steinbach








[jira] [Created] (HIVE-3209) Bucket mapping information from explain extended should not be masked

2012-06-28 Thread Navis (JIRA)
Navis created HIVE-3209:
---

 Summary: Bucket mapping information from explain extended should 
not be masked
 Key: HIVE-3209
 URL: https://issues.apache.org/jira/browse/HIVE-3209
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Affects Versions: 0.10.0
 Environment: ubuntu 10.04
Reporter: Navis
Assignee: Navis
Priority: Trivial


Bucket mapping information is always masked. But it could be crucial for 
checking regressions.





Re: Review Request: Hive JDBC doesn't support TIMESTAMP column

2012-06-28 Thread Prasad Mujumdar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/5504/#review8702
---


Looks fine overall.
Perhaps we should implement setTimestamp() in HivePreparedStatement, which 
already implements setXXX() for the other supported datatypes.
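A minimal sketch of what such a setter could look like. The parameter store below (a map of string-rendered parameters) is an assumption for illustration only, not the actual HivePreparedStatement internals:

```java
import java.sql.Timestamp;
import java.util.HashMap;
import java.util.Map;

public class TimestampParamSketch {
    // Hypothetical parameter store; a real prepared statement would
    // substitute these values into the SQL string before execution.
    final Map<Integer, String> parameters = new HashMap<>();

    // Sketch of a setTimestamp() alongside the existing setXXX() family:
    // render the value as a quoted timestamp literal.
    public void setTimestamp(int parameterIndex, Timestamp x) {
        parameters.put(parameterIndex, "'" + x.toString() + "'");
    }

    public static void main(String[] args) {
        TimestampParamSketch ps = new TimestampParamSketch();
        ps.setTimestamp(1, Timestamp.valueOf("2012-06-28 10:30:00"));
        System.out.println(ps.parameters.get(1));
    }
}
```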

- Prasad Mujumdar


On June 21, 2012, 11:09 p.m., Richard Ding wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/5504/
 ---
 
 (Updated June 21, 2012, 11:09 p.m.)
 
 
 Review request for hive and Carl Steinbach.
 
 
 Description
 ---
 
 See HIVE-2957
 
 
 This addresses bug HIVE-2957.
 https://issues.apache.org/jira/browse/HIVE-2957
 
 
 Diffs
 -
 
   http://svn.apache.org/repos/asf/hive/trunk/data/files/datatypes.txt 1352206 
   
 http://svn.apache.org/repos/asf/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveBaseResultSet.java
  1352206 
   
 http://svn.apache.org/repos/asf/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveResultSetMetaData.java
  1352206 
   
 http://svn.apache.org/repos/asf/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/JdbcColumn.java
  1352206 
   
 http://svn.apache.org/repos/asf/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/Utils.java
  1352206 
   
 http://svn.apache.org/repos/asf/hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
  1352206 
 
 Diff: https://reviews.apache.org/r/5504/diff/
 
 
 Testing
 ---
 
 ant test -Dtestcase=TestJdbcDriver passed.
 
 
 Thanks,
 
 Richard Ding
 




[jira] [Updated] (HIVE-3210) Support Bucketed mapjoin on partitioned table which has two or more partitions

2012-06-28 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-3210:


Status: Patch Available  (was: Open)

https://reviews.facebook.net/D3885

 Support Bucketed mapjoin on partitioned table which has two or more partitions
 --

 Key: HIVE-3210
 URL: https://issues.apache.org/jira/browse/HIVE-3210
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Minor

 There seems to be no reason to prohibit bucketed mapjoin on multiple 
 partitions; it is even safer than doing a simple mapjoin.





Hive-trunk-h0.21 - Build # 1518 - Fixed

2012-06-28 Thread Apache Jenkins Server
Changes for Build #1509

Changes for Build #1510

Changes for Build #1511

Changes for Build #1512

Changes for Build #1513

Changes for Build #1514

Changes for Build #1515
[ecapriolo] HIVE-3180 Fix Eclipse classpath template broken in HIVE-3128. Carl 
Steinbach (via egc)

[hashutosh] HIVE-3048 : Collect_set Aggregate does uneccesary check for value. 
(Ed Capriolo via Ashutosh Chauhan)


Changes for Build #1516

Changes for Build #1517
[ecapriolo] HIVE-3127 Pass hconf values as XML instead of command line 
arguments to child JVM. Kanna Karanam (via egc)


Changes for Build #1518
[ecapriolo] HIVE-3206 FileUtils.tar assumes wrong directory in some cases. 
Navis Ryu (via egc)




All tests passed

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1518)

Status: Fixed

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1518/ to 
view the results.

[jira] [Commented] (HIVE-2703) ResultSetMetaData.getColumnType() always returns VARCHAR(string) for partition columns irrespective of partition column type

2012-06-28 Thread N Campbell (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-2703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403048#comment-13403048 ]

N Campbell commented on HIVE-2703:
--

Is this fix (or any other) addressing similar issues with respect to map, 
array, struct... where the type is always returned and described as a string 
type?


 ResultSetMetaData.getColumnType() always returns VARCHAR(string) for 
 partition columns irrespective of partition column type
 

 Key: HIVE-2703
 URL: https://issues.apache.org/jira/browse/HIVE-2703
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.8.0
Reporter: Mythili Gopalakrishnan
Assignee: tamtam180
Priority: Critical
 Attachments: HIVE-2703.D2829.1.patch


 ResultSetMetaData.getColumnType() always returns VARCHAR (string) as the 
 column type for partition columns, no matter what the column type actually is. 
 However, DatabaseMetadata.getColumnType() returns the correct type. 
 If you create a table with a partition column of a type other than string, 
 you will see that ResultSet.getColumnType() always returns string as the type 
 for int, boolean, or float columns.
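Correct behavior would map each Hive type name to its java.sql.Types constant regardless of whether the column is a partition column. A minimal sketch of such a mapping, written here from scratch rather than copied from Hive's JdbcColumn code:

```java
import java.sql.Types;

public class HiveTypeToSqlType {
    // Map a Hive type name to a java.sql.Types constant. A correct
    // ResultSetMetaData.getColumnType() should consult a mapping like
    // this for partition columns too, instead of defaulting to VARCHAR.
    static int toSqlType(String hiveType) {
        switch (hiveType.toLowerCase()) {
            case "string":  return Types.VARCHAR;
            case "int":     return Types.INTEGER;
            case "boolean": return Types.BOOLEAN;
            case "float":   return Types.FLOAT;
            case "double":  return Types.DOUBLE;
            case "bigint":  return Types.BIGINT;
            default:        return Types.OTHER; // complex types, etc.
        }
    }

    public static void main(String[] args) {
        System.out.println(toSqlType("int") == Types.INTEGER);
        System.out.println(toSqlType("boolean") == Types.BOOLEAN);
    }
}
```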





[jira] [Created] (HIVE-3211) Enhance hbase column mapping strategy to support complicated structures

2012-06-28 Thread Swarnim Kulkarni (JIRA)
Swarnim Kulkarni created HIVE-3211:
--

 Summary: Enhance hbase column mapping strategy to support 
complicated structures
 Key: HIVE-3211
 URL: https://issues.apache.org/jira/browse/HIVE-3211
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Affects Versions: 0.9.0
Reporter: Swarnim Kulkarni


With the current Hive-HBase integration, we need to specify the 
hbase.columns.mapping property that maps the HBase columns to the columns in 
Hive. So for example,

hive> CREATE EXTERNAL TABLE complex(
    key string,
    s struct<col1 : int, col2 : int>)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
    "hbase.columns.mapping" = ":key,cf:a"
) TBLPROPERTIES ("hbase.table.name" = "TEST_TABLE");

The struct definition in the above query can quickly get very ugly if we are 
dealing with very complicated structures stored in HBase columns. We should 
probably enhance the current column mapping strategy to be able to provide a 
custom serializer and let it detect the structure by itself.





Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #62

2012-06-28 Thread Apache Jenkins Server
See 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/62/

--
[...truncated 10116 lines...]
 [echo] Project: odbc
 [copy] Warning: 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/odbc/src/conf
 does not exist.

ivy-resolve-test:
 [echo] Project: odbc

ivy-retrieve-test:
 [echo] Project: odbc

compile-test:
 [echo] Project: odbc

create-dirs:
 [echo] Project: serde
 [copy] Warning: 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/serde/src/test/resources
 does not exist.

init:
 [echo] Project: serde

ivy-init-settings:
 [echo] Project: serde

ivy-resolve:
 [echo] Project: serde
[ivy:resolve] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/ivy/ivysettings.xml
[ivy:report] Processing 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/build/ivy/resolution-cache/org.apache.hive-hive-serde-default.xml
 to 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/build/ivy/report/org.apache.hive-hive-serde-default.html

ivy-retrieve:
 [echo] Project: serde

dynamic-serde:

compile:
 [echo] Project: serde

ivy-resolve-test:
 [echo] Project: serde

ivy-retrieve-test:
 [echo] Project: serde

compile-test:
 [echo] Project: serde
[javac] Compiling 26 source files to 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/build/serde/test/classes
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

create-dirs:
 [echo] Project: service
 [copy] Warning: 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/service/src/test/resources
 does not exist.

init:
 [echo] Project: service

ivy-init-settings:
 [echo] Project: service

ivy-resolve:
 [echo] Project: service
[ivy:resolve] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/ivy/ivysettings.xml
[ivy:report] Processing 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/build/ivy/resolution-cache/org.apache.hive-hive-service-default.xml
 to 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/build/ivy/report/org.apache.hive-hive-service-default.html

ivy-retrieve:
 [echo] Project: service

compile:
 [echo] Project: service

ivy-resolve-test:
 [echo] Project: service

ivy-retrieve-test:
 [echo] Project: service

compile-test:
 [echo] Project: service
[javac] Compiling 2 source files to 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/build/service/test/classes

test:
 [echo] Project: hive

test-shims:
 [echo] Project: hive

test-conditions:
 [echo] Project: shims

gen-test:
 [echo] Project: shims

create-dirs:
 [echo] Project: shims
 [copy] Warning: 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/test/resources
 does not exist.

init:
 [echo] Project: shims

ivy-init-settings:
 [echo] Project: shims

ivy-resolve:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/ivy/ivysettings.xml
[ivy:report] Processing 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/build/ivy/resolution-cache/org.apache.hive-hive-shims-default.xml
 to 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/build/ivy/report/org.apache.hive-hive-shims-default.html

ivy-retrieve:
 [echo] Project: shims

compile:
 [echo] Project: shims
 [echo] Building shims 0.20

build_shims:
 [echo] Project: shims
 [echo] Compiling 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/common/java;/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/shims/src/0.20/java
 against hadoop 0.20.2 
(/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/build/hadoopcore/hadoop-0.20.2)

ivy-init-settings:
 [echo] Project: shims

ivy-resolve-hadoop-shim:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/hive/ivy/ivysettings.xml

ivy-retrieve-hadoop-shim:
 [echo] Project: shims
 [echo] Building shims 0.20S

build_shims:
 [echo] Project: shims
 [echo] Compiling 

[jira] [Updated] (HIVE-3168) LazyBinaryObjectInspector.getPrimitiveJavaObject copies beyond length of underlying BytesWritable

2012-06-28 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-3168:


Status: Patch Available  (was: Reopened)

Yes, the updated patch on phabricator is ready for review. 

 LazyBinaryObjectInspector.getPrimitiveJavaObject copies beyond length of 
 underlying BytesWritable
 -

 Key: HIVE-3168
 URL: https://issues.apache.org/jira/browse/HIVE-3168
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.9.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.10.0, 0.9.1

 Attachments: HIVE-3168.1.patch


 LazyBinaryObjectInspector.getPrimitiveJavaObject copies the full capacity of 
 the LazyBinary's underlying BytesWritable object, which can be greater than 
 the size of the actual contents. 
 This leads to additional characters at the end of the returned ByteArrayRef. 
 When the LazyBinary object gets re-used, there can be remnants of the later 
 portion of the previous entry. 
 This was not seen while reading through Hive queries, which I think is 
 because a copy elsewhere (probably the LazyBinary copy constructor) creates 
 LazyBinary objects with length == capacity. It was seen when MR or Pig used 
 HCatalog to read the data.





[jira] [Assigned] (HIVE-975) Hive ODBC driver for Windows

2012-06-28 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar reassigned HIVE-975:


Assignee: Prasad Mujumdar

 Hive ODBC driver for Windows
 

 Key: HIVE-975
 URL: https://issues.apache.org/jira/browse/HIVE-975
 Project: Hive
  Issue Type: New Feature
  Components: Clients
 Environment: Windows
Reporter: Carl Steinbach
Assignee: Prasad Mujumdar

 The current Hive ODBC driver (HIVE-187) is limited to *NIX systems. We need 
 to provide an ODBC driver that is compatible with Windows and the Windows 
 ODBC driver manager.
 It appears that it may be possible to build the current unixODBC driver on 
 Windows:
 http://mailman.unixodbc.org/pipermail/unixodbc-support/2008-June/001779.html
 The other blocker is THRIFT-591: Windows compatible Thrift C++ runtime.





[jira] [Reopened] (HIVE-1101) Extend Hive ODBC to support more functions

2012-06-28 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar reopened HIVE-1101:
---

  Assignee: Prasad Mujumdar  (was: Ning Zhang)

 Extend Hive ODBC to support more functions
 --

 Key: HIVE-1101
 URL: https://issues.apache.org/jira/browse/HIVE-1101
 Project: Hive
  Issue Type: New Feature
  Components: ODBC
Reporter: Ning Zhang
Assignee: Prasad Mujumdar
 Attachments: HIVE-1101.patch, unixODBC-2.2.14-p2-HIVE-1101.tar.gz


 Currently the Hive ODBC driver only supports a minimal list of functions, 
 enough to make some applications work. Other applications require support 
 for more functions. These include:
 *SQLCancel
 *SQLFetchScroll
 *SQLGetData   
 *SQLGetInfo
 *SQLMoreResults
 *SQLRowCount
 *SQLSetConnectAtt
 *SQLSetStmtAttr
 *SQLEndTran
 *SQLPrepare
 *SQLNumParams
 *SQLDescribeParam
 *SQLBindParameter
 *SQLGetConnectAttr
 *SQLSetEnvAttr
 *SQLPrimaryKeys (not ODBC API? Hive does not support primary keys yet)
 *SQLForeignKeys (not ODBC API? Hive does not support foreign keys yet)
 We should support as many of them as possible. 





[jira] [Updated] (HIVE-1101) Extend Hive ODBC to support more functions

2012-06-28 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-1101:
--

Attachment: odbc-src.patch
odbc-build.patch

Initial patch of the ODBC driver with additional ODBC API support, ported to 
both Linux and Windows.

 Extend Hive ODBC to support more functions
 --

 Key: HIVE-1101
 URL: https://issues.apache.org/jira/browse/HIVE-1101
 Project: Hive
  Issue Type: New Feature
  Components: ODBC
Reporter: Ning Zhang
Assignee: Prasad Mujumdar
 Attachments: HIVE-1101.patch, odbc-build.patch, odbc-src.patch, 
 unixODBC-2.2.14-p2-HIVE-1101.tar.gz


 Currently the Hive ODBC driver only supports a minimal list of functions, 
 enough to make some applications work. Other applications require support 
 for more functions. These include:
 *SQLCancel
 *SQLFetchScroll
 *SQLGetData   
 *SQLGetInfo
 *SQLMoreResults
 *SQLRowCount
 *SQLSetConnectAtt
 *SQLSetStmtAttr
 *SQLEndTran
 *SQLPrepare
 *SQLNumParams
 *SQLDescribeParam
 *SQLBindParameter
 *SQLGetConnectAttr
 *SQLSetEnvAttr
 *SQLPrimaryKeys (not ODBC API? Hive does not support primary keys yet)
 *SQLForeignKeys (not ODBC API? Hive does not support foreign keys yet)
 We should support as many of them as possible. 





[jira] [Created] (HIVE-3212) ODBC build framework on Linux and windows

2012-06-28 Thread Prasad Mujumdar (JIRA)
Prasad Mujumdar created HIVE-3212:
-

 Summary: ODBC build framework on Linux and windows
 Key: HIVE-3212
 URL: https://issues.apache.org/jira/browse/HIVE-3212
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 0.10.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.10.0








[jira] [Created] (HIVE-3213) ODBC API enhancements

2012-06-28 Thread Prasad Mujumdar (JIRA)
Prasad Mujumdar created HIVE-3213:
-

 Summary: ODBC API enhancements
 Key: HIVE-3213
 URL: https://issues.apache.org/jira/browse/HIVE-3213
 Project: Hive
  Issue Type: Sub-task
  Components: ODBC
Affects Versions: 0.10.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.10.0








[jira] [Updated] (HIVE-3212) ODBC build framework on Linux and windows

2012-06-28 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-3212:
--

Component/s: ODBC

 ODBC build framework on Linux and windows
 -

 Key: HIVE-3212
 URL: https://issues.apache.org/jira/browse/HIVE-3212
 Project: Hive
  Issue Type: Sub-task
  Components: ODBC
Affects Versions: 0.10.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.10.0








[jira] [Commented] (HIVE-3098) Memory leak from large number of FileSystem instances in FileSystem.CACHE. (Must cache UGIs.)

2012-06-28 Thread Jitendra Nath Pandey (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403290#comment-13403290 ]

Jitendra Nath Pandey commented on HIVE-3098:


bq. Problem stems from the fact that there is no expiration policy either in fs 
or ugi cache. We need to design for UGI cache eviction policy. There, when we 
are expiring stale ugi's from ugi-cache we can do closeAllForUGI for evicting 
ugi to evict cached FS objects from fs-cache.

+1. It may be more tractable to have a cache expiration policy in the 
ugi-cache based on the semantics of this particular use case. In the FS-cache 
it gets trickier because of the general-purpose nature of the file system.

 Memory leak from large number of FileSystem instances in FileSystem.CACHE. 
 (Must cache UGIs.)
 -

 Key: HIVE-3098
 URL: https://issues.apache.org/jira/browse/HIVE-3098
 Project: Hive
  Issue Type: Bug
  Components: Shims
Affects Versions: 0.9.0
 Environment: Running with Hadoop 20.205.0.3+ / 1.0.x with security 
 turned on.
Reporter: Mithun Radhakrishnan
Assignee: Mithun Radhakrishnan
 Attachments: HIVE-3098.patch


 The problem manifested from stress-testing HCatalog 0.4.1 (as part of testing 
 the Oracle backend).
 The HCatalog server ran out of memory (-Xmx2048m) when pounded by 60-threads, 
 in under 24 hours. The heap-dump indicates that hadoop::FileSystem.CACHE had 
 100 instances of FileSystem, whose combined retained-mem consumed the 
 entire heap.
 It boiled down to hadoop::UserGroupInformation::equals() being implemented 
 such that the Subject member is compared for equality (==), and not 
 equivalence (.equals()). This causes equivalent UGI instances to compare as 
 unequal, and causes a new FileSystem instance to be created and cached.
 The UGI.equals() is so implemented, incidentally, as a fix for yet another 
 problem (HADOOP-6670); so it is unlikely that that implementation can be 
 modified.
 The solution for this is to check for UGI equivalence in HCatalog (i.e. in 
 the Hive metastore), using a cache for UGI instances in the shims.
 I have a patch to fix this. I'll upload it shortly. I just ran an overnight 
 test to confirm that the memory-leak has been arrested.
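The == vs .equals() failure mode described above can be reproduced with a small stand-alone sketch. All class names below are invented stand-ins, not Hadoop's real UserGroupInformation/Subject classes; the point is only how identity-based equals() makes equivalent keys miss each other in a cache:

```java
import java.util.HashMap;
import java.util.Map;

public class IdentityKeyDemo {
    // Stand-in for javax.security.auth.Subject (hypothetical)
    static final class Subject { final String user; Subject(String u) { user = u; } }

    // Stand-in for a UGI-like key whose equals() compares the Subject with ==,
    // mirroring the behavior described above (see HADOOP-6670)
    static final class UgiKey {
        final Subject subject;
        UgiKey(Subject s) { subject = s; }
        @Override public boolean equals(Object o) {
            return o instanceof UgiKey && ((UgiKey) o).subject == subject; // identity, not equivalence
        }
        @Override public int hashCode() { return System.identityHashCode(subject); }
    }

    public static void main(String[] args) {
        Map<UgiKey, String> cache = new HashMap<>();
        // Two equivalent subjects for the same user, created separately:
        cache.put(new UgiKey(new Subject("alice")), "fs-1");
        cache.put(new UgiKey(new Subject("alice")), "fs-2");
        // Identity-based equals() means the second put does NOT replace the
        // first: the cache now holds two entries for one logical user.
        if (cache.size() != 2) throw new AssertionError("expected 2 entries, got " + cache.size());
        System.out.println("entries cached for one logical user: " + cache.size());
    }
}
```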

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-3214) Use datanucleus performance settings in HiveConf

2012-06-28 Thread Esteban Gutierrez (JIRA)
Esteban Gutierrez created HIVE-3214:
---

 Summary: Use datanucleus performance settings in HiveConf
 Key: HIVE-3214
 URL: https://issues.apache.org/jira/browse/HIVE-3214
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema, Metastore
Affects Versions: 0.9.0
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
Priority: Trivial


DataNucleus recommends enabling the following properties for better 
performance:

http://www.datanucleus.org/products/accessplatform/performance_tuning.html

Some of these settings, like datanucleus.validateTables, can improve access 
performance to the metastore database by reducing the number of operations 
against the RDBMS; in some cases, if a bunch of DELETEME* tables get stuck due 
to some misconfiguration, this also keeps them from blocking the ObjectStore 
while DataNucleus validates all the tables.

For backwards compatibility, hive-site.xml should provide the old settings.
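As an illustration only, overriding these in hive-site.xml might look like the fragment below. The property names come from the DataNucleus tuning page linked above; the values are suggestions, not what the eventual patch chose:

```xml
<!-- Hypothetical example; consult the DataNucleus performance-tuning
     page before adopting these values in production. -->
<property>
  <name>datanucleus.validateTables</name>
  <value>false</value>
</property>
<property>
  <name>datanucleus.validateColumns</name>
  <value>false</value>
</property>
<property>
  <name>datanucleus.validateConstraints</name>
  <value>false</value>
</property>
```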





--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hive-trunk-h0.21 - Build # 1519 - Failure

2012-06-28 Thread Apache Jenkins Server
Changes for Build #1519



1 tests failed.
REGRESSION:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1

Error Message:
Unexpected exception See build/ql/tmp/hive.log, or try ant test ... 
-Dtest.silent=false to get more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get 
more logs.
at junit.framework.Assert.fail(Assert.java:50)
at 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:10642)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)




The Apache Jenkins build system has built Hive-trunk-h0.21 (build 
#1519)

Status: Failure

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1519/ to 
view the results.

[jira] [Created] (HIVE-3215) JobDebugger should use RunningJob.getTrackingURL

2012-06-28 Thread Ramkumar Vadali (JIRA)
Ramkumar Vadali created HIVE-3215:
-

 Summary: JobDebugger should use RunningJob.getTrackingURL 
 Key: HIVE-3215
 URL: https://issues.apache.org/jira/browse/HIVE-3215
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Ramkumar Vadali
Assignee: Ramkumar Vadali
Priority: Minor


When a MR job fails, the JobDebugger tries to construct the job tracker URL by 
connecting to the job tracker, but that is better done by using 
RunningJob#getTrackingURL.

Also, it tries to construct URLs to the tasks, which is not reliable, because 
the job could have been retired and the URL would not work.
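A minimal sketch of the proposed direction. The RunningJob interface below is a stand-in for Hadoop's org.apache.hadoop.mapred.RunningJob (which does provide getTrackingURL()); the helper name jobUrl is invented for illustration:

```java
public class TrackingUrlDemo {
    // Stand-in for org.apache.hadoop.mapred.RunningJob (not the real class)
    interface RunningJob {
        String getTrackingURL(); // canonical URL supplied by the framework
    }

    // Instead of reconstructing the job tracker URL by hand, the debugger
    // simply asks the framework for the tracking URL.
    static String jobUrl(RunningJob rj) {
        String url = rj.getTrackingURL();
        return (url == null || url.isEmpty()) ? "Unavailable" : url;
    }

    public static void main(String[] args) {
        RunningJob rj = () -> "http://jt.example.com:50030/jobdetails.jsp?jobid=job_1";
        String url = jobUrl(rj);
        if (!url.startsWith("http://jt.example.com")) throw new AssertionError(url);
        System.out.println(url);
    }
}
```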

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3215) JobDebugger should use RunningJob.getTrackingURL

2012-06-28 Thread Zhenxiao Luo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403530#comment-13403530
 ] 

Zhenxiao Luo commented on HIVE-3215:


@Ramkumar:
This seems related to HIVE-2804, which uses HostUtil.getTaskLogUrl to get 
the URL.

 JobDebugger should use RunningJob.getTrackingURL 
 -

 Key: HIVE-3215
 URL: https://issues.apache.org/jira/browse/HIVE-3215
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Ramkumar Vadali
Assignee: Ramkumar Vadali
Priority: Minor

 When a MR job fails, the JobDebugger tries to construct the job tracker URL 
 by connecting to the job tracker, but that is better done by using 
 RunningJob#getTrackingURL.
 Also, it tries to construct URLs to the tasks, which is not reliable, because 
 the job could have been retired and the URL would not work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21 #62

2012-06-28 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/62/

--
[...truncated 36573 lines...]
[junit] POSTHOOK: query: select count(1) as cnt from testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/tmp/hudson/hive_2012-06-28_15-34-00_930_2961996116194047255/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/62/artifact/hive/build/service/tmp/hive_job_log_hudson_201206281534_1009061302.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: load data local inpath 
'https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt'
 into table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] Copying data from 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt
[junit] Loading data to table default.testhivedrivertable
[junit] Copying file: 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt
[junit] POSTHOOK: query: load data local inpath 
'https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt'
 into table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: select * from testhivedrivertable limit 10
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: 
file:/tmp/hudson/hive_2012-06-28_15-34-05_521_8303691380889137957/-mr-1
[junit] POSTHOOK: query: select * from testhivedrivertable limit 10
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/tmp/hudson/hive_2012-06-28_15-34-05_521_8303691380889137957/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/62/artifact/hive/build/service/tmp/hive_job_log_hudson_201206281534_263499950.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/62/artifact/hive/build/service/tmp/hive_job_log_hudson_201206281534_2076503811.txt
[junit] Hive history 
file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/62/artifact/hive/build/service/tmp/hive_job_log_hudson_201206281534_296256278.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: 

[jira] [Updated] (HIVE-3215) JobDebugger should use RunningJob.getTrackingURL

2012-06-28 Thread Ramkumar Vadali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramkumar Vadali updated HIVE-3215:
--

Attachment: HIVE-3215.patch

 JobDebugger should use RunningJob.getTrackingURL 
 -

 Key: HIVE-3215
 URL: https://issues.apache.org/jira/browse/HIVE-3215
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Ramkumar Vadali
Assignee: Ramkumar Vadali
Priority: Minor
 Attachments: HIVE-3215.patch


 When a MR job fails, the JobDebugger tries to construct the job tracker URL 
 by connecting to the job tracker, but that is better done by using 
 RunningJob#getTrackingURL.
 Also, it tries to construct URLs to the tasks, which is not reliable, because 
 the job could have been retired and the URL would not work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3215) JobDebugger should use RunningJob.getTrackingURL

2012-06-28 Thread Zhenxiao Luo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403536#comment-13403536
 ] 

Zhenxiao Luo commented on HIVE-3215:


@Ramkumar:

The patch is nice. How about also updating the corresponding URL construction in 
HadoopJobExecHelper.java?

 JobDebugger should use RunningJob.getTrackingURL 
 -

 Key: HIVE-3215
 URL: https://issues.apache.org/jira/browse/HIVE-3215
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Ramkumar Vadali
Assignee: Ramkumar Vadali
Priority: Minor
 Attachments: HIVE-3215.patch


 When a MR job fails, the JobDebugger tries to construct the job tracker URL 
 by connecting to the job tracker, but that is better done by using 
 RunningJob#getTrackingURL.
 Also, it tries to construct URLs to the tasks, which is not reliable, because 
 the job could have been retired and the URL would not work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3215) JobDebugger should use RunningJob.getTrackingURL

2012-06-28 Thread Ramkumar Vadali (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403539#comment-13403539
 ] 

Ramkumar Vadali commented on HIVE-3215:
---

Zhenxiao, thanks for the comments. I will incorporate them.

HIVE-2804 is related, but this could be done independent of that jira.


 JobDebugger should use RunningJob.getTrackingURL 
 -

 Key: HIVE-3215
 URL: https://issues.apache.org/jira/browse/HIVE-3215
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Ramkumar Vadali
Assignee: Ramkumar Vadali
Priority: Minor
 Attachments: HIVE-3215.patch


 When a MR job fails, the JobDebugger tries to construct the job tracker URL 
 by connecting to the job tracker, but that is better done by using 
 RunningJob#getTrackingURL.
 Also, it tries to construct URLs to the tasks, which is not reliable, because 
 the job could have been retired and the URL would not work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-3215) JobDebugger should use RunningJob.getTrackingURL

2012-06-28 Thread Ramkumar Vadali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramkumar Vadali updated HIVE-3215:
--

Attachment: HIVE-3215.2.patch

 JobDebugger should use RunningJob.getTrackingURL 
 -

 Key: HIVE-3215
 URL: https://issues.apache.org/jira/browse/HIVE-3215
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Ramkumar Vadali
Assignee: Ramkumar Vadali
Priority: Minor
 Attachments: HIVE-3215.2.patch, HIVE-3215.patch


 When a MR job fails, the JobDebugger tries to construct the job tracker URL 
 by connecting to the job tracker, but that is better done by using 
 RunningJob#getTrackingURL.
 Also, it tries to construct URLs to the tasks, which is not reliable, because 
 the job could have been retired and the URL would not work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3098) Memory leak from large number of FileSystem instances in FileSystem.CACHE. (Must cache UGIs.)

2012-06-28 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403563#comment-13403563
 ] 

Alejandro Abdelnur commented on HIVE-3098:
--

@Daryn,

Your solution of closeAllForUGI means you have to keep the original UGI; if you 
keep recreating them, you are back to square one.

Thanks for the UGI mutability explanation. Still, I'd argue that we could 
achieve UGI immutability if we created a new UGI every time credentials are 
added, by composing the old UGI and the new credentials. But this still would 
not solve the caching problem if we want to do it by user ID.

Leaving UGI alone, it seems that one thing that would help is hadoop-common 
providing a (KEY, FileSystem) ExpirationCache implementation for others to 
use. This cache should return a FileSystemProxy wrapping the original 
filesystem instance, which in turn wraps the IO streams returned by 
open()/create(), so it can detect streams in use and not start the eviction 
timer.

thx
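The expiration-cache idea in the comment above could be sketched roughly as below. Everything here is hypothetical (names and TTL semantics invented), and a real version would also need the stream-wrapping proxy described above so eviction is paused while streams are in use:

```java
import java.util.HashMap;
import java.util.Map;

public class ExpiringCacheDemo {
    static final class Entry<V> {
        final V value; final long expiresAtMs;
        Entry(V v, long e) { value = v; expiresAtMs = e; }
    }

    // Hypothetical (KEY, value) cache with a fixed TTL; time is passed in
    // explicitly so the behavior is deterministic.
    static final class ExpiringCache<K, V> {
        private final Map<K, Entry<V>> map = new HashMap<>();
        private final long ttlMs;
        ExpiringCache(long ttlMs) { this.ttlMs = ttlMs; }
        void put(K k, V v, long nowMs) { map.put(k, new Entry<>(v, nowMs + ttlMs)); }
        V get(K k, long nowMs) {
            Entry<V> e = map.get(k);
            if (e == null) return null;
            if (nowMs >= e.expiresAtMs) { map.remove(k); return null; } // evict stale entry
            return e.value;
        }
    }

    public static void main(String[] args) {
        ExpiringCache<String, String> cache = new ExpiringCache<>(1000);
        cache.put("alice", "fs-alice", 0);
        if (!"fs-alice".equals(cache.get("alice", 500))) throw new AssertionError();
        if (cache.get("alice", 2000) != null) throw new AssertionError("should have expired");
        System.out.println("entry expired as expected");
    }
}
```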

 Memory leak from large number of FileSystem instances in FileSystem.CACHE. 
 (Must cache UGIs.)
 -

 Key: HIVE-3098
 URL: https://issues.apache.org/jira/browse/HIVE-3098
 Project: Hive
  Issue Type: Bug
  Components: Shims
Affects Versions: 0.9.0
 Environment: Running with Hadoop 20.205.0.3+ / 1.0.x with security 
 turned on.
Reporter: Mithun Radhakrishnan
Assignee: Mithun Radhakrishnan
 Attachments: HIVE-3098.patch


 The problem manifested from stress-testing HCatalog 0.4.1 (as part of testing 
 the Oracle backend).
 The HCatalog server ran out of memory (-Xmx2048m) when pounded by 60-threads, 
 in under 24 hours. The heap-dump indicates that hadoop::FileSystem.CACHE had 
 100 instances of FileSystem, whose combined retained-mem consumed the 
 entire heap.
 It boiled down to hadoop::UserGroupInformation::equals() being implemented 
 such that the Subject member is compared for equality (==), and not 
 equivalence (.equals()). This causes equivalent UGI instances to compare as 
 unequal, and causes a new FileSystem instance to be created and cached.
 The UGI.equals() is so implemented, incidentally, as a fix for yet another 
 problem (HADOOP-6670); so it is unlikely that that implementation can be 
 modified.
 The solution for this is to check for UGI equivalence in HCatalog (i.e. in 
 the Hive metastore), using an cache for UGI instances in the shims.
 I have a patch to fix this. I'll upload it shortly. I just ran an overnight 
 test to confirm that the memory-leak has been arrested.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3098) Memory leak from large number of FileSystem instances in FileSystem.CACHE. (Must cache UGIs.)

2012-06-28 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403579#comment-13403579
 ] 

Owen O'Malley commented on HIVE-3098:
-

Alejandro,
  Daryn is absolutely right that we can't make the Subjects immutable. We need 
to be able to update a Subject with updated Kerberos tickets and tokens, and 
changing that would break a lot of other code.

It would probably make sense to make a UGI.doAsAndCleanup that does a doAs and 
then removes all filesystems based on the ugi, since clearly most of the Hadoop 
ecosystem servers have related problems.
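The doAsAndCleanup idea could look roughly like the sketch below. The names here are invented stand-ins; in real Hadoop the cleanup step would be FileSystem.closeAllForUGI(ugi) after a UserGroupInformation.doAs(...):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;

public class DoAsAndCleanupDemo {
    // Stand-in for the per-UGI FileSystem cache (hypothetical)
    static final Map<String, Integer> openFsPerUser = new HashMap<>();

    // Run the action as the given user, then drop whatever filesystems the
    // action cached for that user (analogous to FileSystem.closeAllForUGI).
    static <T> T doAsAndCleanup(String user, Callable<T> action) throws Exception {
        try {
            return action.call();
        } finally {
            openFsPerUser.remove(user); // cleanup happens even if action throws
        }
    }

    public static void main(String[] args) throws Exception {
        String result = doAsAndCleanup("alice", () -> {
            openFsPerUser.put("alice", 1); // action "opens" a filesystem
            return "done";
        });
        if (!"done".equals(result)) throw new AssertionError(result);
        if (openFsPerUser.containsKey("alice")) throw new AssertionError("not cleaned up");
        System.out.println("filesystems cleaned up after doAs");
    }
}
```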

 Memory leak from large number of FileSystem instances in FileSystem.CACHE. 
 (Must cache UGIs.)
 -

 Key: HIVE-3098
 URL: https://issues.apache.org/jira/browse/HIVE-3098
 Project: Hive
  Issue Type: Bug
  Components: Shims
Affects Versions: 0.9.0
 Environment: Running with Hadoop 20.205.0.3+ / 1.0.x with security 
 turned on.
Reporter: Mithun Radhakrishnan
Assignee: Mithun Radhakrishnan
 Attachments: HIVE-3098.patch


 The problem manifested from stress-testing HCatalog 0.4.1 (as part of testing 
 the Oracle backend).
 The HCatalog server ran out of memory (-Xmx2048m) when pounded by 60-threads, 
 in under 24 hours. The heap-dump indicates that hadoop::FileSystem.CACHE had 
 100 instances of FileSystem, whose combined retained-mem consumed the 
 entire heap.
 It boiled down to hadoop::UserGroupInformation::equals() being implemented 
 such that the Subject member is compared for equality (==), and not 
 equivalence (.equals()). This causes equivalent UGI instances to compare as 
 unequal, and causes a new FileSystem instance to be created and cached.
 The UGI.equals() is so implemented, incidentally, as a fix for yet another 
 problem (HADOOP-6670); so it is unlikely that that implementation can be 
 modified.
 The solution for this is to check for UGI equivalence in HCatalog (i.e. in 
 the Hive metastore), using an cache for UGI instances in the shims.
 I have a patch to fix this. I'll upload it shortly. I just ran an overnight 
 test to confirm that the memory-leak has been arrested.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3215) JobDebugger should use RunningJob.getTrackingURL

2012-06-28 Thread Zhenxiao Luo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403587#comment-13403587
 ] 

Zhenxiao Luo commented on HIVE-3215:


@Ramkumar:

Generally good to me, two things:

#1. I think it is OK to keep the following:
-String taskUrl = (jtUrl == null) ? "Unavailable" :
-jtUrl + "/taskdetails.jsp?jobid=" + jobId + "&tipid=" +
task.toString();

-  sb.append("URL:\n   " + taskUrl + "\n");

Is there a reason to get rid of this info?

#2. I just checked, the only usage of JobTrackerURLResolver is:

[~/Code/studyHive]find . -name "*.java" | xargs grep JobTrackerURLResolver
./ql/src/java/org/apache/hadoop/hive/ql/exec/JobDebugger.java:  jtUrl = 
JobTrackerURLResolver.getURL(conf);
./ql/src/java/org/apache/hadoop/hive/ql/exec/JobTrackerURLResolver.java: * 
JobTrackerURLResolver.
./ql/src/java/org/apache/hadoop/hive/ql/exec/JobTrackerURLResolver.java:public 
final class JobTrackerURLResolver {
./ql/src/java/org/apache/hadoop/hive/ql/exec/JobTrackerURLResolver.java:  
private JobTrackerURLResolver() {
./ql/src/java/org/apache/hadoop/hive/ql/exec/HadoopJobExecHelper.java:
String jtUrl = JobTrackerURLResolver.getURL(conf);
./ql/src/java/org/apache/hadoop/hive/ql/exec/Throttle.java:  String tracker 
= JobTrackerURLResolver.getURL(conf) + /gc.jsp?threshold= + threshold;

How about also updating Throttle.java, and retiring JobTrackerURLResolver.java 
altogether?

Thanks,
Zhenxiao

 JobDebugger should use RunningJob.getTrackingURL 
 -

 Key: HIVE-3215
 URL: https://issues.apache.org/jira/browse/HIVE-3215
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Ramkumar Vadali
Assignee: Ramkumar Vadali
Priority: Minor
 Attachments: HIVE-3215.2.patch, HIVE-3215.patch


 When a MR job fails, the JobDebugger tries to construct the job tracker URL 
 by connecting to the job tracker, but that is better done by using 
 RunningJob#getTrackingURL.
 Also, it tries to construct URLs to the tasks, which is not reliable, because 
 the job could have been retired and the URL would not work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3098) Memory leak from large number of FileSystem instances in FileSystem.CACHE. (Must cache UGIs.)

2012-06-28 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403592#comment-13403592
 ] 

Alejandro Abdelnur commented on HIVE-3098:
--

Hi Owen, yeah, I do understand we are already hosed on how UGI works and we 
cannot change things. I'm not sure how your suggestion of doAsAndCleanup() would 
work, as the use case is a server-based system that does not keep state for a 
user, not even the UGI, but wants to benefit from an ExpirationCache of FS 
instances for performance. If the system creates a new UGI for the same user 
every time it needs to do something, then there will be a new FS instance every 
time, and doAsAndCleanup() will miss all the FS instances created before. In 
the case of Oozie and HttpFS we need a cache based on the username; we don't 
play around adding/removing credentials to the Subject.

 Memory leak from large number of FileSystem instances in FileSystem.CACHE. 
 (Must cache UGIs.)
 -

 Key: HIVE-3098
 URL: https://issues.apache.org/jira/browse/HIVE-3098
 Project: Hive
  Issue Type: Bug
  Components: Shims
Affects Versions: 0.9.0
 Environment: Running with Hadoop 20.205.0.3+ / 1.0.x with security 
 turned on.
Reporter: Mithun Radhakrishnan
Assignee: Mithun Radhakrishnan
 Attachments: HIVE-3098.patch


 The problem manifested from stress-testing HCatalog 0.4.1 (as part of testing 
 the Oracle backend).
 The HCatalog server ran out of memory (-Xmx2048m) when pounded by 60-threads, 
 in under 24 hours. The heap-dump indicates that hadoop::FileSystem.CACHE had 
 100 instances of FileSystem, whose combined retained-mem consumed the 
 entire heap.
 It boiled down to hadoop::UserGroupInformation::equals() being implemented 
 such that the Subject member is compared for equality (==), and not 
 equivalence (.equals()). This causes equivalent UGI instances to compare as 
 unequal, and causes a new FileSystem instance to be created and cached.
 The UGI.equals() is so implemented, incidentally, as a fix for yet another 
 problem (HADOOP-6670); so it is unlikely that that implementation can be 
 modified.
 The solution for this is to check for UGI equivalence in HCatalog (i.e. in 
 the Hive metastore), using a cache for UGI instances in the shims.
 I have a patch to fix this. I'll upload it shortly. I just ran an overnight 
 test to confirm that the memory-leak has been arrested.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3068) Add ability to export table metadata as JSON on table drop

2012-06-28 Thread Andrew Chalfant (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403594#comment-13403594
 ] 

Andrew Chalfant commented on HIVE-3068:
---

Can you point me to a diff, issue number, or where the code is located?

 Add ability to export table metadata as JSON on table drop
 --

 Key: HIVE-3068
 URL: https://issues.apache.org/jira/browse/HIVE-3068
 Project: Hive
  Issue Type: New Feature
  Components: Metastore, Serializers/Deserializers
Reporter: Andrew Chalfant
Assignee: Andrew Chalfant
Priority: Minor
  Labels: features, newbie
   Original Estimate: 24h
  Remaining Estimate: 24h

 When a table is dropped, the contents go to the user's trash but the metadata 
 is lost. It would be super neat to be able to save the metadata as well, so 
 that tables could be trivially re-instantiated via Thrift.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3215) JobDebugger should use RunningJob.getTrackingURL

2012-06-28 Thread Ramkumar Vadali (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403607#comment-13403607
 ] 

Ramkumar Vadali commented on HIVE-3215:
---

I can revert the taskUrl changes.
Throttle.java checks with the job tracker whether it is OK to start a new job, 
so there is no RunningJob instance available there; that part cannot be avoided.

 JobDebugger should use RunningJob.getTrackingURL 
 -

 Key: HIVE-3215
 URL: https://issues.apache.org/jira/browse/HIVE-3215
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Ramkumar Vadali
Assignee: Ramkumar Vadali
Priority: Minor
 Attachments: HIVE-3215.2.patch, HIVE-3215.patch


 When a MR job fails, the JobDebugger tries to construct the job tracker URL 
 by connecting to the job tracker, but that is better done by using 
 RunningJob#getTrackingURL.
 Also, it tries to construct URLs to the tasks, which is not reliable, because 
 the job could have been retired and the URL would not work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-3215) JobDebugger should use RunningJob.getTrackingURL

2012-06-28 Thread Ramkumar Vadali (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13403648#comment-13403648
 ] 

Ramkumar Vadali commented on HIVE-3215:
---

The task URL is constructed using a hardcoded URI like taskdetails.jsp. It would 
be better to have the job tracking URL and the task ID instead.

 JobDebugger should use RunningJob.getTrackingURL 
 -

 Key: HIVE-3215
 URL: https://issues.apache.org/jira/browse/HIVE-3215
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Ramkumar Vadali
Assignee: Ramkumar Vadali
Priority: Minor
 Attachments: HIVE-3215.2.patch, HIVE-3215.patch


 When a MR job fails, the JobDebugger tries to construct the job tracker URL 
 by connecting to the job tracker, but that is better done by using 
 RunningJob#getTrackingURL.
 Also, it tries to construct URLs to the tasks, which is not reliable, because 
 the job could have been retired and the URL would not work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-3172) Remove the duplicate JAR entries from the (“test.classpath”) to avoid command line exceeding char limit on windows

2012-06-28 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3172:
---

Status: Open  (was: Patch Available)

Hi Kanna,
I left some review comments on Review Board. Thanks!

 Remove the duplicate JAR entries from the (“test.classpath”) to avoid command 
 line exceeding char limit on windows 
 ---

 Key: HIVE-3172
 URL: https://issues.apache.org/jira/browse/HIVE-3172
 Project: Hive
  Issue Type: Sub-task
  Components: Tests, Windows
Affects Versions: 0.10.0
 Environment: Windows
Reporter: Kanna Karanam
Assignee: Kanna Karanam
  Labels: Windows
 Fix For: 0.10.0

 Attachments: HIVE-3172.1.patch.txt, HIVE-3172.2.patch.txt


 The maximum length of the DOS command string is 8191 characters (in recent 
 Windows versions: http://support.microsoft.com/kb/830473). The following 
 entries in build-common.xml add a lot of duplicate JAR entries to the 
 “test.classpath”, and it exceeds the max character limit on Windows very 
 easily:
 <!-- Include build/dist/lib on the classpath before Ivy and exclude hive jars 
 from Ivy to make sure we get the local changes when we test Hive -->
 <fileset dir="${build.dir.hive}/dist/lib" includes="*.jar" 
 erroronmissingdir="false" 
 excludes="**/hive_contrib*.jar,**/hive-contrib*.jar,**/lib*.jar"/>
 <fileset dir="${hive.root}/build/ivy/lib/test" includes="*.jar" 
 erroronmissingdir="false" excludes="**/hive_*.jar,**/hive-*.jar"/>
 <fileset dir="${hive.root}/build/ivy/lib/default" includes="*.jar" 
 erroronmissingdir="false" excludes="**/hive_*.jar,**/hive-*.jar"/>
 <fileset dir="${hive.root}/testlibs" includes="*.jar"/>
 Proposed solution (workaround):
 1) Include all JARs from dist\lib, excluding 
 **/hive_contrib*.jar,**/hive-contrib*.jar,**/lib*.jar
 2) Select the specific (missing) jars from the test/other folders (that 
 includes the Hadoop-*.jar files)
 Thanks
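The de-duplication the description proposes could be illustrated with a small sketch. This is not the actual patch; the method name and the colon separator are invented for the example (Windows would use ';'):

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class ClasspathDedupDemo {
    // Keep only the first classpath entry carrying each jar file name,
    // shrinking the command line toward Windows' ~8191-char limit.
    static String dedupByJarName(String classpath, String sep) {
        Set<String> seen = new LinkedHashSet<>();
        StringBuilder out = new StringBuilder();
        for (String entry : classpath.split(sep)) {
            String jar = entry.substring(entry.lastIndexOf('/') + 1);
            if (seen.add(jar)) { // first occurrence of this jar name wins
                if (out.length() > 0) out.append(sep);
                out.append(entry);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String cp = "dist/lib/hive-exec.jar:ivy/lib/test/hive-exec.jar:testlibs/junit.jar";
        String deduped = dedupByJarName(cp, ":");
        if (!deduped.equals("dist/lib/hive-exec.jar:testlibs/junit.jar")) {
            throw new AssertionError(deduped);
        }
        System.out.println(deduped);
    }
}
```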

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira