roberto sancho rojas created PHOENIX-3403:
---------------------------------------------

             Summary: ERROR 203 (22005): Type mismatch. VARCHAR cannot be coerced to DOUBLE
                 Key: PHOENIX-3403
                 URL: https://issues.apache.org/jira/browse/PHOENIX-3403
             Project: Phoenix
          Issue Type: Bug
    Affects Versions: 4.4.0
         Environment: centos 7.2 64 bits
HBase1.1.2.2.4
PHOENIX 4.4.0.0-1
SPARK 1.6
HDP 2.4.0.0-169

            Reporter: roberto sancho rojas
            Priority: Critical


After creating a table in Phoenix with a 3-column primary key on the row key:

CREATE TABLE IF NOT EXISTS tabla1(
c1 VARCHAR not null,
c2 VARCHAR not null,
c3 VARCHAR not null,
c4 DOUBLE,
c5 VARCHAR,
CONSTRAINT pk PRIMARY KEY (c1,c2,c3) );

and trying to insert data using the example from the official site:
df.write \
  .format("org.apache.phoenix.spark") \
  .mode("overwrite") \
  .option("table", "TABLE1") \
  .option("zkUrl", "localhost:2181") \
  .save()


I receive this error:

Caused by: org.apache.phoenix.schema.ConstraintViolationException: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. VARCHAR cannot be coerced to DOUBLE
        at org.apache.phoenix.schema.types.PDataType.throwConstraintViolationException(PDataType.java:282)
        at org.apache.phoenix.schema.types.PDouble.toObject(PDouble.java:129)
        at org.apache.phoenix.jdbc.PhoenixPreparedStatement.setObject(PhoenixPreparedStatement.java:442)
        at org.apache.phoenix.spark.PhoenixRecordWritable$$anonfun$write$1.apply(PhoenixRecordWritable.scala:53)
        at org.apache.phoenix.spark.PhoenixRecordWritable$$anonfun$write$1.apply(PhoenixRecordWritable.scala:44)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.phoenix.spark.PhoenixRecordWritable.write(PhoenixRecordWritable.scala:44)
        at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:78)
        at org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:39)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1113)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1111)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1111)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1119)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1091)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        ... 1 more
Caused by: org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type mismatch. VARCHAR cannot be coerced to DOUBLE
        at org.apache.phoenix.exception.SQLExceptionCode$1.newException(SQLExceptionCode.java:71)
        at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
        ... 22 more
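The stack trace shows the failure inside PDouble.toObject while PhoenixPreparedStatement.setObject binds the Spark row, which suggests the DataFrame column mapped to the DOUBLE column C4 is string-typed rather than numeric. A minimal, Spark-free sketch of a possible workaround, assuming that is the cause (the prepare_row helper and the column layout are hypothetical, taken from the DDL above):

```python
# Hypothetical helper illustrating the fix: make sure the value destined for
# the DOUBLE column C4 is numeric before it reaches Phoenix. In Spark itself
# this would correspond to casting the column first, e.g.:
#     df = df.withColumn("C4", df["C4"].cast("double"))
# before calling df.write.format("org.apache.phoenix.spark")...

def prepare_row(row):
    """Cast the C4 field of a (C1, C2, C3, C4, C5) tuple to float."""
    c1, c2, c3, c4, c5 = row
    c4 = float(c4) if c4 is not None else None  # mirrors a DoubleType cast
    return (c1, c2, c3, c4, c5)

print(prepare_row(("k1", "k2", "k3", "1.5", "v")))  # C4 becomes 1.5, a float
```

If the cast resolves the error, the underlying question for this issue is whether phoenix-spark should coerce string columns to DOUBLE itself or keep rejecting them, as the JDBC path does.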




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)