liudu2326526 commented on issue #6297:
URL: https://github.com/apache/hudi/issues/6297#issuecomment-1612608779
I also encountered this problem when reading Hudi tables. The job runs fine locally, but fails when submitted to the cluster:
```
Caused by: java.lang.LinkageError: loader constraint violation: when resolving method 'void org.apache.flink.formats.parquet.vector.reader.BytesColumnReader.<init>(org.apache.parquet.column.ColumnDescriptor, org.apache.parquet.column.page.PageReader)' the class loader org.apache.flink.util.ChildFirstClassLoader @d1be487 of the current class, org/apache/hudi/table/format/cow/ParquetSplitReaderUtil, and the class loader 'app' for the method's defining class, org/apache/flink/formats/parquet/vector/reader/BytesColumnReader, have different Class objects for the type org/apache/parquet/column/ColumnDescriptor used in the signature (org.apache.hudi.table.format.cow.ParquetSplitReaderUtil is in unnamed module of loader org.apache.flink.util.ChildFirstClassLoader @d1be487, parent loader 'app'; org.apache.flink.formats.parquet.vector.reader.BytesColumnReader is in unnamed module of loader 'app')
```
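The message says that the Hudi bundle (loaded by Flink's child-first user-code class loader) and a flink-parquet jar on the cluster classpath (loader 'app') each resolve their own copy of org.apache.parquet.column.ColumnDescriptor. A small probe like the sketch below (illustrative only, not something from my run) can confirm that the two sides see different Class objects for the same type name:

```java
// Diagnostic sketch (illustrative, not from the original report): print which
// class loader serves ColumnDescriptor as seen from the Hudi side and the
// Flink side. Run it inside the job so the user-code class loader is active.
public class LoaderProbe {
    public static void probe() throws ClassNotFoundException {
        ClassLoader hudiSide = Class.forName(
                "org.apache.hudi.table.format.cow.ParquetSplitReaderUtil")
                .getClassLoader();
        ClassLoader flinkSide = Class.forName(
                "org.apache.flink.formats.parquet.vector.reader.BytesColumnReader")
                .getClassLoader();

        // Resolve the conflicting type through each loader without initializing it.
        Class<?> viaHudi = Class.forName(
                "org.apache.parquet.column.ColumnDescriptor", false, hudiSide);
        Class<?> viaFlink = Class.forName(
                "org.apache.parquet.column.ColumnDescriptor", false, flinkSide);

        // Different loaders here mean two distinct Class objects for the same
        // name, which is exactly the constraint violation in the stack trace.
        System.out.println("via Hudi loader : " + viaHudi.getClassLoader());
        System.out.println("via Flink loader: " + viaFlink.getClassLoader());
        System.out.println("same Class object: " + (viaHudi == viaFlink));
    }
}
```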
Hudi version: 0.13.1
Flink version: 1.16.2
Storage (HDFS/S3/GCS..): Huawei Cloud OBS
Running on Docker? (yes/no): no
Flink runs in standalone mode.
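A workaround often suggested for this family of LinkageError (an assumption on my side, not a verified fix for this exact setup) is to make sure only one copy of the parquet classes is visible, e.g. by removing a duplicate flink-parquet jar from the cluster's lib/ directory, or by flipping the class loading order in flink-conf.yaml:

```yaml
# flink-conf.yaml -- workaround sketch, not a confirmed fix for this issue.
# The default is child-first; parent-first makes user code resolve shared
# classes from the cluster classpath, so only one Class object exists per name.
classloader.resolve-order: parent-first
```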
Step 1: Write data

```java
sTableEnv.executeSql("CREATE TABLE t2(\n"
    + "  uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,\n"
    + "  name VARCHAR(10),\n"
    + "  age INT,\n"
    + "  ts TIMESTAMP(3),\n"
    + "  `partition` VARCHAR(20)\n"
    + ")\n"
    + "PARTITIONED BY (`partition`)\n"
    + "WITH (\n"
    + "  'connector' = 'hudi'\n"
    + "  ,'path' = 'obs://donson-mip-data-warehouse/dev/liudu/data/hudi_data'\n"
    // + "  ,'path' = 'file:///Users/macbook/Downloads/obsa-hdfs-flink-obs/flink-hudi/src/test/hudi_data'\n"
    // + "  ,'table.type' = 'MERGE_ON_READ'\n"
    + ")");

// sTableEnv.executeSql("insert into t2 select * from sourceT");
sTableEnv.executeSql("INSERT INTO t2 VALUES\n"
    + "  ('id1','Danny',23,TIMESTAMP '1970-01-01 00:00:01','par1'),\n"
    + "  ('id2','Stephen',33,TIMESTAMP '1970-01-01 00:00:02','par1'),\n"
    + "  ('id3','Julian',53,TIMESTAMP '1970-01-01 00:00:03','par2'),\n"
    + "  ('id4','Fabian',31,TIMESTAMP '1970-01-01 00:00:04','par2'),\n"
    + "  ('id5','Sophia',18,TIMESTAMP '1970-01-01 00:00:05','par3'),\n"
    + "  ('id6','Emma',20,TIMESTAMP '1970-01-01 00:00:06','par3'),\n"
    + "  ('id7','Bob',44,TIMESTAMP '1970-01-01 00:00:07','par4'),\n"
    + "  ('id8','Han',56,TIMESTAMP '1970-01-01 00:00:08','par4')");
```
Step 2: Read data

```java
sTableEnv.executeSql("CREATE TABLE t2(\n"
    + "  uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,\n"
    + "  name VARCHAR(10),\n"
    + "  age INT,\n"
    + "  ts TIMESTAMP(3),\n"
    + "  `partition` VARCHAR(20)\n"
    + ")\n"
    + "PARTITIONED BY (`partition`)\n"
    + "WITH (\n"
    + "  'connector' = 'hudi'\n"
    + "  ,'path' = 'obs://donson-mip-data-warehouse/dev/liudu/data/hudi_data'\n"
    // + "  ,'path' = 'file:///Users/macbook/Downloads/obsa-hdfs-flink-obs/flink-hudi/src/test/hudi_data'\n"
    // + "  ,'table.type' = 'MERGE_ON_READ'\n"
    + ")");

sTableEnv.executeSql("select * from t2").print();
```
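For completeness, the snippets above assume a `sTableEnv` along these lines (a minimal sketch, since the setup was not shown in the original comment):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Minimal sketch of the TableEnvironment assumed by the snippets above
// (the original comment does not show how sTableEnv is created).
EnvironmentSettings settings = EnvironmentSettings.newInstance()
        .inStreamingMode()
        .build();
TableEnvironment sTableEnv = TableEnvironment.create(settings);
```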