xiarixiaoyao opened a new pull request #4421:
URL: https://github.com/apache/hudi/pull/4421


   
   ## *Tips*
   - *Thank you very much for contributing to Apache Hudi.*
   - *Please review https://hudi.apache.org/contribute/how-to-contribute before 
opening a pull request.*
   
   ## What is the purpose of the pull request
   
   Currently, Flink writes `DecimalType` values as `byte[]`, so the column ends up as Parquet `BINARY`.
   
   When Spark reads that decimal column, its vectorized Parquet reader expects a small-precision decimal to be stored as `INT32`/`INT64` (an int/long), so reading the Flink-written file fails with a conversion error.
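   For context (an illustrative sketch, not Spark's or Hudi's actual code): Spark's vectorized Parquet reader derives the expected physical type of a decimal column from its precision, roughly as below; `expectedPhysicalType` is a hypothetical helper name.
   
   ```scala
   // Sketch of the precision-based mapping Spark's vectorized Parquet reader
   // assumes for a requested decimal(precision, scale) column. Illustrative only.
   import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName
   import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName._
   
   def expectedPhysicalType(precision: Int): PrimitiveTypeName =
     if (precision <= 9) INT32            // decimal(<=9, s) fits in an int
     else if (precision <= 18) INT64      // decimal(<=18, s) fits in a long
     else FIXED_LEN_BYTE_ARRAY            // larger decimals stored as raw bytes
   ```
   
   With `c7 decimal(10, 4)` (precision 10), Spark expects `INT64` but the file stores `BINARY`, which matches the stack trace in step 3.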
   
   
   Test steps:
   
   Step 1: create a Hive table that contains a decimal column:
   
   ```sql
   create table hivetb_numeric(
     id int, c1 tinyint, c2 smallint, c3 int, c4 bigint,
     c5 float, c6 double, c7 decimal(10,4), c8 binary) using parquet;
   
   insert into hivetb_numeric values
     (1,10,15,100,1230000,56.32,1123.45678,4545.1234,'5'),
     (2,11,16,101,1230001,57.43,1124.4578,4556.134,'6'),
     (3,12,17,102,1230002,58.54,1125.459,4565.1445,'7');
   ```
   
   Step 2: run a Flink job that copies the Parquet data into a Hudi table:
   
   ```sql
   CREATE TABLE hudi_batch_cow (
     id int, c1 tinyint, c2 smallint, c3 int, c4 bigint,
     c5 float, c6 double, c7 decimal(10, 4), c8 binary
   ) WITH (
     'connector' = 'hudi',
     'path' = 'hdfs://xxxx/xxx/xxx/hudi_batch_cow',
     'table.type' = 'COPY_ON_WRITE',
     'hoodie.datasource.query.type' = 'snapshot',
     'hoodie.datasource.write.recordkey.field' = 'id',
     'write.precombine.field' = 'id'
   );
   
   CREATE TABLE fs_parquet (
     id int, c1 tinyint, c2 smallint, c3 int, c4 bigint,
     c5 float, c6 double, c7 decimal(10, 4), c8 binary
   ) WITH (
     'connector' = 'filesystem',
     'path' = 'hdfs://xxxx/xxx/xxx/xxxxx/hivetb_numeric',
     'format' = 'parquet'
   );
   
   INSERT INTO hudi_batch_cow SELECT * FROM fs_parquet;
   ```
   
   Step 3: start spark-shell and read the Hudi table.
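   A minimal read that reproduces the failure (a sketch; the path is the placeholder Hudi table path from step 2):
   
   ```scala
   // In spark-shell: load the Hudi table written by the Flink job above.
   val df = spark.read.format("hudi").load("hdfs://xxxx/xxx/xxx/hudi_batch_cow")
   df.select("c7").show()  // fails while decoding the decimal column
   ```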
   
   The read fails with:
   
   ```
   Caused by: org.apache.spark.sql.execution.QueryExecutionException: Parquet column cannot be converted in file hdfs://xxxxx/xxx/xxx/xxxxxxxx/46d44c57-aa43-41e2-a8aa-76dcc9dac7e4_0-4-0_20211221201230.parquet. Column: [c7], Expected: decimal(10,4), Found: BINARY
       at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:179)
       at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
       at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:517)
       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
   ```
   
   ## Brief change log
   
   *(for example:)*
     - *Modify AnnotationLocation checkstyle rule in checkstyle.xml*
   
   ## Verify this pull request
   
   *(Please pick either of the following options)*
   
   This pull request is a trivial rework / code cleanup without any test 
coverage.
   
   *(or)*
   
   This pull request is already covered by existing tests, such as *(please 
describe tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
   
     - *Added integration tests for end-to-end.*
     - *Added HoodieClientWriteTest to verify the change.*
     - *Manually verified the change by running a job locally.*
   
   ## Committer checklist
   
    - [ ] Has a corresponding JIRA in PR title & commit
    
    - [ ] Commit message is descriptive of the change
    
    - [ ] CI is green
   
    - [ ] Necessary doc changes done or have another open PR
          
    - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA.
   

