Hi All,

If I modify the code as below, the Hive UDF works in spark-sql, but it gives
different results. Please let me know the difference between the two versions
below:

1) public Object get(Object name) {
       int pos = getPos((String) name);
       if (pos < 0) return null;
       Object obj = list.get(pos);
       if (obj == null) return null;
       // Convert through the field's ObjectInspector, which knows the
       // value's actual runtime representation.
       ObjectInspector ins =
           ((StructField) colnames.get(pos)).getFieldObjectInspector();
       PrimitiveObjectInspector ins2 = (PrimitiveObjectInspector) ins;
       Object result;
       switch (ins2.getPrimitiveCategory()) {
           case DOUBLE:
               result = ((DoubleObjectInspector) ins2).get(obj);
               break;
           case LONG:
               result = ((LongObjectInspector) ins2).get(obj);
               break;
           case STRING:
               result = ((StringObjectInspector) ins2).getPrimitiveJavaObject(obj);
               break;
           default:
               result = obj;
       }
       return result;
   }






2) public Object get(Object name) {
       int pos = getPos((String) name);
       if (pos < 0) return null;
       Object obj = list.get(pos);
       if (obj == null) return null;
       ObjectInspector ins =
           ((StructField) colnames.get(pos)).getFieldObjectInspector();
       String f = "string";
       if (ins != null) f = ins.getTypeName();
       // Hard-cast the value to the Hadoop Writable wrapper for its type.
       switch (f) {
           case "double": return ((DoubleWritable) obj).get();
           case "bigint": return ((LongWritable) obj).get();
           case "string": return ((Text) obj).toString();
           default:       return obj;
       }
   }
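A hedged guess at why the two versions behave differently: Hive hands a UDF its
values wrapped in Hadoop Writables (DoubleWritable, LongWritable, Text), while
Spark's Hive-UDF bridge may pass already-unwrapped Java objects (java.lang.Long,
java.lang.Double, String). Version 2's hard casts then fail with a
ClassCastException, while version 1 converts through the ObjectInspector, which
knows the value's actual runtime representation. The plain-Java sketch below
(no Hive classes on the classpath; `unwrap` is a hypothetical helper, not part
of either version) just illustrates the cast failure and the instanceof guard
that avoids it:

```java
public class CastDemo {
    // Hypothetical defensive unwrap: check the runtime class before casting,
    // instead of blindly casting to one expected wrapper class.
    static Object unwrap(Object obj) {
        if (obj instanceof Long || obj instanceof Double || obj instanceof String) {
            return obj; // already a plain Java value (what Spark may pass)
        }
        // In the real UDF, the Writable wrappers would be checked here, e.g.:
        // if (obj instanceof LongWritable) return ((LongWritable) obj).get();
        return obj;
    }

    public static void main(String[] args) {
        Object value = Long.valueOf(7L); // a value as Spark may pass it
        try {
            // A blind cast to the wrong class compiles but fails at runtime,
            // just like ((LongWritable) obj) does on a java.lang.Long.
            String s = (String) value;
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("blind cast failed");
        }
        System.out.println(unwrap(value)); // prints 7
    }
}
```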

On Tue, Jan 24, 2017 at 5:29 PM, Sirisha Cheruvu <siri8...@gmail.com> wrote:

> Hi Team,
>
> I am trying to keep the code below in a get method, call that get method
> from another Hive UDF, and run the UDF through HiveContext.sql().
>
>
> switch (f) {
>     case "double": return ((DoubleWritable) obj).get();
>     case "bigint": return ((LongWritable) obj).get();
>     case "string": return ((Text) obj).toString();
>     default:       return obj;
>   }
> }
>
> Surprisingly, only the LongWritable and Text conversions throw an error;
> the DoubleWritable one works. So I tried changing the code to:
>
> switch (f) {
>     case "double": return ((DoubleWritable) obj).get();
>     case "bigint": return ((DoubleWritable) obj).get();
>     case "string": return ((Text) obj).toString();
>     default:       return obj;
>   }
> }
>
> It still throws an error saying java.lang.Long cannot be converted to
> org.apache.hadoop.hive.serde2.io.DoubleWritable.
>
>
>
> It works fine in Hive but throws this error in spark-sql.
>
> I am importing the following packages:
> import java.util.*;
> import org.apache.hadoop.hive.serde2.objectinspector.*;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.hive.serde2.io.DoubleWritable;
>
> Please let me know why this causes an issue in Spark when it runs
> perfectly fine in Hive.
>
