zhouhongyu888 opened a new issue #3657:
URL: https://github.com/apache/hudi/issues/3657


   Flink version: 1.11.2
   hadoop version: 3.13
   scala version: 2.11
   hudi version: 0.9
   
   When I test the Flink example below:
   CREATE TABLE t3(
     uuid VARCHAR(20),
     name VARCHAR(10),
     age INT,
     ts TIMESTAMP(3),
     `partition` VARCHAR(20)
   )
   PARTITIONED BY (`partition`)
   WITH (
     'connector' = 'hudi',
     'path' = 'hdfs://hadoop102:8020//flink-hudi/t3',
     'table.type' = 'MERGE_ON_READ'
   );
   
   INSERT INTO t3 VALUES
     ('id1','Danny',23,TIMESTAMP '1970-01-01 00:00:01','par1'),
     ('id2','Stephen',33,TIMESTAMP '1970-01-01 00:00:02','par1'),
     ('id3','Julian',53,TIMESTAMP '1970-01-01 00:00:03','par2'),
     ('id4','Fabian',31,TIMESTAMP '1970-01-01 00:00:04','par2'),
     ('id5','Sophia',18,TIMESTAMP '1970-01-01 00:00:05','par3'),
     ('id6','Emma',20,TIMESTAMP '1970-01-01 00:00:06','par3'),
     ('id7','Bob',44,TIMESTAMP '1970-01-01 00:00:07','par4'),
     ('id8','Han',56,TIMESTAMP '1970-01-01 00:00:08','par4');
   The following error appeared:
   [INFO] Submitting SQL update statement to the cluster...
   [ERROR] Could not execute SQL statement. Reason:
   java.lang.ClassNotFoundException: org.apache.flink.table.connector.sink.DataStreamSinkProvider
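
   For context: `DataStreamSinkProvider` was only introduced in Flink 1.12, while this setup runs Flink 1.11.2, and the hudi 0.9 Flink bundle appears to be built against Flink 1.12.x, which would explain the `ClassNotFoundException`. A quick sketch (not from the original report; `FLINK_HOME` and the jar glob are assumptions, adjust them to your install) to check whether a given Flink distribution ships the class:

   ```shell
   # Sketch: look for DataStreamSinkProvider inside the Flink table jars.
   # FLINK_HOME and the jar name pattern are assumptions -- adjust as needed.
   FLINK_HOME=${FLINK_HOME:-/opt/flink}
   found=no
   for j in "$FLINK_HOME"/lib/flink-table*.jar; do
     [ -f "$j" ] || continue
     # A jar is a zip archive; list its entries and grep for the class path.
     if unzip -l "$j" 2>/dev/null | grep -q 'table/connector/sink/DataStreamSinkProvider'; then
       found=yes
     fi
   done
   echo "DataStreamSinkProvider present: $found"
   ```

   If it prints `no`, upgrading Flink to a 1.12.x release (the line hudi 0.9 targets) rather than 1.11.2 should make the class resolvable.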

