[jira] [Updated] (HIVE-4745) java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop

2013-06-20 Thread Remus Rusanu (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Remus Rusanu updated HIVE-4745:
---

Attachment: HIVE-4745.3.patch

I made a minor modification and changed assertSame back to assertEquals for null checks.
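For context, JUnit's assertEquals compares by value (and treats two null arguments as equal), while assertSame compares by reference identity; for null checks the two coincide. A plain-Java sketch of the distinction (the helper names below are illustrative stand-ins, not code from the patch):

```java
public class EqualsVsSame {
    // assertEquals-style check: value equality; two nulls count as equal.
    static boolean valueEqual(Object a, Object b) {
        return a == null ? b == null : a.equals(b);
    }

    // assertSame-style check: reference identity.
    static boolean sameRef(Object a, Object b) {
        return a == b;
    }

    public static void main(String[] args) {
        String x = new String("q"), y = new String("q"); // distinct objects, equal values
        System.out.println(valueEqual(x, y)); // prints true
        System.out.println(sameRef(x, y));    // prints false
        // For nulls both checks agree, which is why the swap is harmless:
        System.out.println(valueEqual(null, null)); // prints true
        System.out.println(sameRef(null, null));    // prints true
    }
}
```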

 java.lang.RuntimeException: Hive Runtime Error while closing operators: 
 java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be 
 cast to org.apache.hadoop.hive.serde2.io.DoubleWritable
 -

 Key: HIVE-4745
 URL: https://issues.apache.org/jira/browse/HIVE-4745
 Project: Hive
  Issue Type: Sub-task
Affects Versions: vectorization-branch
Reporter: Tony Murphy
Assignee: Jitendra Nath Pandey
 Fix For: vectorization-branch

 Attachments: HIVE-4745.2.patch, HIVE-4745.3.patch



[jira] [Updated] (HIVE-4745) java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop

2013-06-19 Thread Jitendra Nath Pandey (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jitendra Nath Pandey updated HIVE-4745:
---

Attachment: HIVE-4745.2.patch

 

[jira] [Updated] (HIVE-4745) java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop

2013-06-19 Thread Jitendra Nath Pandey (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jitendra Nath Pandey updated HIVE-4745:
---

Attachment: (was: HIVE-4754.2.patch)

 

[jira] [Updated] (HIVE-4745) java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop

2013-06-19 Thread Jitendra Nath Pandey (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jitendra Nath Pandey updated HIVE-4745:
---

Attachment: HIVE-4754.2.patch

 

[jira] [Updated] (HIVE-4745) java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop

2013-06-17 Thread Tony Murphy (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tony Murphy updated HIVE-4745:
--

Summary: java.lang.RuntimeException: Hive Runtime Error while closing 
operators: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable 
cannot be cast to org.apache.hadoop.hive.serde2.io.DoubleWritable  (was: 
java.lang.RuntimeException: Hive Runtime Error while closing operators)


[jira] [Updated] (HIVE-4745) java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop

2013-06-17 Thread Tony Murphy (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tony Murphy updated HIVE-4745:
--

Description: 
{noformat}
SELECT SUM(L_QUANTITY),
       (SUM(L_QUANTITY) + -1.3000E+000),
       (-2.2002E+000 % (SUM(L_QUANTITY) + -1.3000E+000)),
       MIN(L_EXTENDEDPRICE)
FROM   lineitem_orc
WHERE  ((L_EXTENDEDPRICE = L_LINENUMBER)
OR (L_TAX  L_EXTENDEDPRICE));
{noformat}

Executed over the TPC-H lineitem table at scale factor 1 GB.

{noformat}
13/06/15 11:19:17 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in file:/C:/Hadoop/hive-0.9.0/conf/hive-log4j.properties
Hive history file=c:\hadoop\hive-0.9.0\logs\history/hive_job_log_jenkinsuser_5292@SLAVE23-WIN_201306151119_1652846565.txt
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201306142329_0098, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0098
Kill Command = c:\Hadoop\hadoop-1.1.0-SNAPSHOT\bin\hadoop.cmd job  -kill job_201306142329_0098
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-06-15 11:19:47,490 Stage-1 map = 0%,  reduce = 0%
2013-06-15 11:20:29,801 Stage-1 map = 76%,  reduce = 0%
2013-06-15 11:20:32,849 Stage-1 map = 0%,  reduce = 0%
2013-06-15 11:20:35,880 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201306142329_0098 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0098
Examining task ID: task_201306142329_0098_m_02 (and more) from job job_201306142329_0098

Task with the most failures(4): 
-
Task ID:
  task_201306142329_0098_m_00
URL:
  http://localhost:50030/taskdetails.jsp?jobid=job_201306142329_0098&tipid=task_201306142329_0098_m_00
-
Diagnostic Messages for this Task:
java.lang.RuntimeException: Hive Runtime Error while closing operators
    at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:229)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:271)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
    at org.apache.hadoop.mapred.Child.main(Child.java:265)
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.hive.serde2.io.DoubleWritable
    at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableDoubleObjectInspector.get(WritableDoubleObjectInspector.java:35)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:340)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:257)
    at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:204)
    at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:245)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
    at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.flush(VectorGroupByOperator.java:281)
    at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.closeOp(VectorGroupByOperator.java:423)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
    at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:196)
    ... 8 more

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched: 
Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
{noformat}
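The root cause visible in the trace: while flushing the vectorized group-by output, the reduce-sink serializer asks a double object inspector to read an aggregate field, but the aggregate produced a NullWritable rather than a DoubleWritable, so the inspector's unconditional cast throws. A self-contained sketch of that failure pattern (the classes below are local stand-ins mirroring the Hadoop/Hive names, not the actual sources):

```java
// Stand-in for org.apache.hadoop.io.NullWritable (hypothetical, local to this sketch).
class NullWritable {
    static final NullWritable GET = new NullWritable();
    private NullWritable() {}
}

// Stand-in for org.apache.hadoop.hive.serde2.io.DoubleWritable.
class DoubleWritable {
    private final double value;
    DoubleWritable(double v) { value = v; }
    double get() { return value; }
}

public class CastDemo {
    // Mirrors the shape of WritableDoubleObjectInspector.get(Object): an
    // unconditional cast, which throws ClassCastException when the caller
    // hands it a NullWritable for a null aggregate.
    static double inspectorGet(Object o) {
        return ((DoubleWritable) o).get();
    }

    public static void main(String[] args) {
        System.out.println(inspectorGet(new DoubleWritable(1.5))); // prints 1.5
        try {
            inspectorGet(NullWritable.GET); // the failing code path
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in HIVE-4745");
        }
        // Fix direction: the group-by output path must represent a null
        // aggregate as a genuine null (checked before any cast) rather than
        // passing a NullWritable where a DoubleWritable is expected.
    }
}
```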



[jira] [Updated] (HIVE-4745) java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop

2013-06-17 Thread Tony Murphy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tony Murphy updated HIVE-4745:
--

Description: 
{noformat}
SELECT SUM(L_QUANTITY),
   (SUM(L_QUANTITY) + -1.3000E+000),
   (-2.2002E+000 % (SUM(L_QUANTITY) + 
-1.3000E+000)),
   MIN(L_EXTENDEDPRICE)
FROM   lineitem_orc
WHERE  ((L_EXTENDEDPRICE = L_LINENUMBER)
OR (L_TAX  L_EXTENDEDPRICE));
{noformat}

executed over tpch line item with scale factor 1gb

{noformat}
13/06/15 11:19:17 WARN conf.HiveConf: DEPRECATED: Configuration property 
hive.metastore.local no longer has any effect. Make sure to provide a valid 
value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in 
file:/C:/Hadoop/hive-0.9.0/conf/hive-log4j.properties
Hive history 
file=c:\hadoop\hive-0.9.0\logs\history/hive_job_log_jenkinsuser_5292@SLAVE23-WIN_201306151119_1652846565.txt
Total MapReduce jobs = 1

Launching Job 1 out of 1

Number of reduce tasks determined at compile time: 1

In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=number
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=number
In order to set a constant number of reducers:
  set mapred.reduce.tasks=number

Starting Job = job_201306142329_0098, Tracking URL = 
http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0098
Kill Command = c:\Hadoop\hadoop-1.1.0-SNAPSHOT\bin\hadoop.cmd job  -kill 
job_201306142329_0098
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-06-15 11:19:47,490 Stage-1 map = 0%,  reduce = 0%
2013-06-15 11:20:29,801 Stage-1 map = 76%,  reduce = 0%
2013-06-15 11:20:32,849 Stage-1 map = 0%,  reduce = 0%
2013-06-15 11:20:35,880 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201306142329_0098 with errors
Error during job, obtaining debugging information...
Job Tracking URL: 
http://localhost:50030/jobdetails.jsp?jobid=job_201306142329_0098
Examining task ID: task_201306142329_0098_m_02 (and more) from job 
job_201306142329_0098

Task with the most failures(4): 
-
Task ID:
  task_201306142329_0098_m_00

URL:
  
http://localhost:50030/taskdetails.jsp?jobid=job_201306142329_0098tipid=task_201306142329_0098_m_00
-
Diagnostic Messages for this Task:
java.lang.RuntimeException: Hive Runtime Error while closing operators
	at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:229)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:271)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1135)
	at org.apache.hadoop.mapred.Child.main(Child.java:265)
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop.hive.serde2.io.DoubleWritable
	at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableDoubleObjectInspector.get(WritableDoubleObjectInspector.java:35)
	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:340)
	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:257)
	at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:204)
	at org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:245)
	at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:502)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:832)
	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.flush(VectorGroupByOperator.java:281)
	at org.apache.hadoop.hive.ql.exec.vector.VectorGroupByOperator.closeOp(VectorGroupByOperator.java:423)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
	at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
	at org.apache.hadoop.hive.ql.exec.vector.VectorExecMapper.close(VectorExecMapper.java:196)
	... 8 more


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched: 
Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
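The `Caused by` line in the trace suggests the vectorized group-by flushed a null aggregate value as a `NullWritable`, while `WritableDoubleObjectInspector.get` unconditionally casts its argument to `DoubleWritable`. A minimal, self-contained sketch of that failure mode — using simplified hypothetical stand-in classes, not the real `org.apache.hadoop.io` types:

```java
public class CastFailureSketch {
    // Hypothetical stand-ins for the Hadoop Writable types (simplified).
    static final class NullWritable {
        static final NullWritable INSTANCE = new NullWritable();
        private NullWritable() {}
    }
    static final class DoubleWritable {
        private final double value;
        DoubleWritable(double value) { this.value = value; }
        double get() { return value; }
    }

    // Analogue of WritableDoubleObjectInspector.get(Object): it casts
    // unconditionally, so a null aggregate carried as NullWritable fails.
    static double inspectDouble(Object o) {
        return ((DoubleWritable) o).get();
    }

    public static void main(String[] args) {
        // A "row" whose aggregate column came back null (e.g. an aggregate
        // over a group with no qualifying values).
        Object[] row = { new DoubleWritable(1.5), NullWritable.INSTANCE };
        System.out.println(inspectDouble(row[0])); // prints 1.5
        try {
            inspectDouble(row[1]); // NullWritable where DoubleWritable is expected
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the trace above");
        }
    }
}
```

The fix direction implied by the attached patches would be to check for the null case before casting, rather than relying on the cast to succeed.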



[jira] [Updated] (HIVE-4745) java.lang.RuntimeException: Hive Runtime Error while closing operators: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to org.apache.hadoop

2013-06-17 Thread Tony Murphy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tony Murphy updated HIVE-4745:
--
