Cast relation to scalar: ClassCastException: java.lang.Integer cannot be cast to java.lang.String
Hi,

I tried to cast a relation (one row) to a scalar. It works well when the cast field is an Integer, but if the cast field is FLOAT, I get ClassCastException: java.lang.Integer cannot be cast to java.lang.String.

coordinate_cossin_xy = FOREACH join_coordinate_cossin_xy GENERATE
    coordinate_xy::xlong_u as xlong_u,
    coordinate_xy::zone as zone;

rawdata_u = LOAD 'u' USING org.apache.hive.hcatalog.pig.HCatLoader();

-- FAIL!! ClassCastException: java.lang.Integer cannot be cast to java.lang.String
u_filter = FILTER rawdata_u by xlong_u == coordinate_cossin_xy.xlong_u;

-- WORK!!
--u_filter = FILTER rawdata_u by zone == coordinate_cossin_xy.zone;

Any ideas how I can fix it?

Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: u_x: Limit - scope-144 Operator Key: scope-144): org.apache.pig.backend.executionengine.ExecException: ERROR 2067: exception while executing [EqualToExpr (Name: Equal To[boolean] - scope-136 Operator Key: scope-136) children: [[POProject (Name: Project[int][6] - scope-134 Operator Key: scope-134) children: null at []], [ConstantExpression (Name: Constant(2) - scope-135 Operator Key: scope-135) children: null at []]] at []]: java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:316)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLimit.getNextTuple(POLimit.java:122)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
    ... 15 more
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2067: exception while executing [EqualToExpr (Name: Equal To[boolean] - scope-136 Operator Key: scope-136) children: [[POProject (Name: Project[int][6] - scope-134 Operator Key: scope-134) children: null at []], [ConstantExpression (Name: Constant(2) - scope-135 Operator Key: scope-135) children: null at []]] at []]: java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.EqualToExpr.getNextBoolean(EqualToExpr.java:97)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POAnd.getNextBoolean(POAnd.java:67)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POFilter.getNextTuple(POFilter.java:144)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
    ... 17 more
Caused by: java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.String
    at java.lang.String.compareTo(String.java:108)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.EqualToExpr.doComparison(EqualToExpr.java:118)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.EqualToExpr.getNextBoolean(EqualToExpr.java:85)
    ... 20 more

BR,
Patcharee
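One workaround that is sometimes suggested for this kind of type mismatch is to cast both sides of the comparison to the same numeric type explicitly, so the equality is not evaluated as a String comparison. This is only a sketch based on the script above, assuming xlong_u is a numeric column on both sides:

```
rawdata_u = LOAD 'u' USING org.apache.hive.hcatalog.pig.HCatLoader();

-- cast both sides to double so the comparison happens in one numeric type
u_filter = FILTER rawdata_u BY (double)xlong_u == (double)coordinate_cossin_xy.xlong_u;
```

If the types still disagree, running DESCRIBE on rawdata_u and coordinate_cossin_xy should show what schema HCatLoader actually produced for each field.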
Re: datetime data type
On 5/22/15, 1:26 PM, "Michael Howard" wrote:

> I would like to have a discussion about a number of issues/questions
> related to support for the datetime datatype in pig.
>
> main topics:
>
> * ToDate(chararray) accepts ISO-8601 'T' timestamps, but not JDBC space ' ' timestamps ... thereby making it incompatible with hive, impala & JDBC data sources ... I am familiar with https://issues.apache.org/jira/browse/PIG-1430

ToDate uses org.joda.time.format.ISODateTimeFormat to parse the date string. I am open to changing this as long as it does not break backward compatibility. Any suggestions?

> * casting: (datetime)timestampString fails, even though datetime is listed as a primitive data type.

This can be fixed, can you open a Jira?

> * ToDate(chararray) throws an exception (rather than returning null) when given a mal-formed timestamp ... is this the desired behavior?

Probably returning a null is better, but we cannot break backward compatibility. We can create a new UDF to return null.

> It is unclear to me if this is the appropriate forum for a discussion of these topics ... or if this should happen in the context of a JIRA ... or if I should have a side-discussion with an experienced core pig developer.
>
> Please advise.
>
> Michael
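In the meantime, a possible workaround for JDBC-style space-separated timestamps is the two-argument form of ToDate, which takes an explicit pattern instead of relying on the ISO parser. A sketch, where the relation events and its chararray column ts are made-up names:

```
events = LOAD 'events' USING org.apache.hive.hcatalog.pig.HCatLoader();

-- parse a space-separated timestamp like '2015-05-22 13:26:00'
-- using an explicit joda-time pattern
dated = FOREACH events GENERATE ToDate(ts, 'yyyy-MM-dd HH:mm:ss') AS ts_dt;
```

This avoids changing the default behavior of the one-argument ToDate, so backward compatibility is not affected.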
filter by query result
Hi,

I am new to pig. First I queried a hive table (x = LOAD 'x' USING org.apache.hive.hcatalog.pig.HCatLoader();) and got a single record/value. How can I use this single value to filter in another query? I hope to get better performance by filtering as early as possible.

BR,
Patcharee
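One pattern that may fit here is Pig's scalar projection: a relation known to contain exactly one row can be referenced as a scalar inside an expression. A sketch, where the second table, its column some_col, and the aggregate x_max are made-up names:

```
x = LOAD 'x' USING org.apache.hive.hcatalog.pig.HCatLoader();

-- reduce x to a single row/value; Pig fails at runtime if a relation
-- used as a scalar has more than one row
x_max = FOREACH (GROUP x ALL) GENERATE MAX(x.val) AS val;

y = LOAD 'y' USING org.apache.hive.hcatalog.pig.HCatLoader();

-- x_max.val is treated as a scalar, so the filter is applied
-- as early as the load of y allows
y_filtered = FILTER y BY some_col == x_max.val;
```

Since the filter sits directly after the LOAD, Pig can apply it before any joins or groupings downstream, which is usually where the performance win comes from.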