fengjian428 commented on issue #6966:
URL: https://github.com/apache/hudi/issues/6966#issuecomment-1285886893

   > > ```
   > > create table hudi_mor_nonsavepoint (
   > >   id int,
   > >   name string,
   > >   price double,
   > >   ts long,
   > >   par string
   > > ) using hudi
   > > tblproperties (
   > >   type = 'mor',
   > >   primaryKey = 'id',
   > >   preCombineField = 'ts',
   > >   hoodie.datasource.write.operation = 'insert',
   > >   hoodie.datasource.write.drop.partition.columns = true,
   > >   hoodie.index.type = 'BUCKET',
   > >   hoodie.bucket.index.num.buckets = 8,
   > >   hoodie.bucket.index.hash.field = 'id',
   > >   hoodie.storage.layout.partitioner.class = 'org.apache.hudi.table.action.commit.SparkBucketIndexPartitioner',
   > >   hoodie.storage.layout.type = 'BUCKET'
   > > )
   > > partitioned by (par)
   > > location 'hdfs://xxx/hudi_mor_nonsavepoint';
   > > 
   > > set hoodie.datasource.write.operation=insert;
   > > set hoodie.merge.allow.duplicate.on.inserts=true;
   > > set hoodie.index.type=BUCKET;
   > > set hoodie.bucket.index.num.buckets=8;
   > > set hoodie.storage.layout.partitioner.class=org.apache.hudi.table.action.commit.SparkBucketIndexPartitioner;
   > > set hoodie.storage.layout.type=BUCKET;
   > > set hoodie.datasource.write.recordkey.field=id;
   > > set hoodie.bucket.index.hash.field=id;
   > > 
   > > insert into hudi_mor_nonsavepoint select 1, 'a1', 20, 1000, 'p1';
   > > insert into hudi_mor_nonsavepoint select 2, 'b1', 10, 344, 'p1';
   > > insert into hudi_mor_nonsavepoint select 1, 'a2', 20, 1001, 'p1';
   > > 
   > > set hoodie.compact.inline.max.delta.commits=0;
   > > call run_compaction(op => 'schedule', table => 'hudi_mor_nonsavepoint');
   > > call run_compaction(op => 'run', table => 'hudi_mor_nonsavepoint');
   > > 
   > > insert into hudi_mor_nonsavepoint select 2, 'b2', 10, 344, 'p1';
   > > ```
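   As a side note on the repro above: with a BUCKET index, each record is routed to a fixed bucket by hashing the configured hash field (`id` here) modulo `hoodie.bucket.index.num.buckets`, so the second write for `id=1` lands in the same file group as the first. A simplified sketch of that routing idea (this mimics the concept, not Hudi's exact hashing code):

   ```python
   # Simplified illustration of bucket-index routing: records with the same
   # hash-field value always land in the same bucket, so the update for id=1
   # hits the same file group as the original insert.
   # NOTE: this is an illustration, not Hudi's actual hash implementation.

   NUM_BUCKETS = 8  # matches hoodie.bucket.index.num.buckets in the repro

   def bucket_for(hash_field_value) -> int:
       """Route a record to one of NUM_BUCKETS buckets by its hash field (here: id)."""
       return hash(str(hash_field_value)) % NUM_BUCKETS

   # Both inserts for id=1 in the repro resolve to the same bucket,
   # which is why compaction merges them within one file group.
   assert bucket_for(1) == bucket_for(1)
   ```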
   > > I tried to reproduce this, but the commands above run fine. @eric9204, is there anything I missed?
   > 
   > @fengjian428 I wrote to a Hudi table with Spark Structured Streaming; after the first compaction, every deltacommit is rolled back because of the above-mentioned error.
   
   OK, can you also try my reproduction case to check whether it works in your environment?
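
   (For context, the Structured Streaming write @eric9204 describes could look roughly like the sketch below. The table options mirror the SQL repro above; the job shape, function name `start_stream`, and paths are assumptions for illustration, not the reporter's actual code.)

   ```python
   # Hypothetical sketch of a Structured Streaming write into the repro table.
   # The option keys are standard Hudi datasource options; everything else
   # (function name, parameters, paths) is assumed for illustration.

   # Hudi options matching the repro table (MOR, insert, bucket index on id).
   hudi_options = {
       "hoodie.table.name": "hudi_mor_nonsavepoint",
       "hoodie.datasource.write.table.type": "MERGE_ON_READ",
       "hoodie.datasource.write.operation": "insert",
       "hoodie.datasource.write.recordkey.field": "id",
       "hoodie.datasource.write.precombine.field": "ts",
       "hoodie.datasource.write.partitionpath.field": "par",
       "hoodie.index.type": "BUCKET",
       "hoodie.bucket.index.num.buckets": "8",
       "hoodie.bucket.index.hash.field": "id",
   }

   def start_stream(source_df, checkpoint, path):
       """Start a streaming write into the Hudi table.

       Requires pyspark plus the Hudi Spark bundle on the classpath;
       source_df is a streaming DataFrame with columns id, name, price, ts, par.
       """
       return (
           source_df.writeStream
           .format("hudi")
           .options(**hudi_options)
           .option("checkpointLocation", checkpoint)
           .outputMode("append")
           .start(path)
       )
   ```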


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to