wangxianghu edited a comment on pull request #4930:
URL: https://github.com/apache/hudi/pull/4930#issuecomment-1059934053


   > @wangxianghu The changes look good to me. I have a high level query. You 
have mentioned facing issues with binlog json format for example. Do you mean 
to say transformation is not possible with data in binlog json format? Data 
coming from binlogs also has a structure/schema assigned to it as far as I 
remember. Can you post a sample event where you feel this new PostProcessor you 
introduced might be useful? Basically I want to understand the motivation 
behind introducing this PR.
   
   It is possible to deal with data in binlog JSON format, but it is not very 
convenient.
   
   1. For Maxwell (our company uses it to capture changed data):
   ```
   {
       "database": "test",
       "table": "maxwell",
       "type": "update",
       "ts": 1449786341,
       "xid": 940786,
       "commit": true,
       "data": {"id":1, "daemon": "Firebus!  Firebus!","update_time" : 
"2022-02-03 12:22:42"},
       "old":  {"daemon": "Stanislaw Lem"}
     }
   ```
   All we want is just:
   
   ```
   {
       "id": 1, 
       "daemon": "Firebus!  Firebus!", 
       "update_time": "2022-02-03 12:22:42"
   }
   ```
   We can write a processor to extract the `data` payload from the entire JSON 
and maybe do some custom processing, without configuring a huge schema 
file (covering all the fields in the binlog JSON, whether we need them or 
not).
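To illustrate the idea, here is a minimal Python sketch of the per-record transformation such a processor would perform (illustrative only; the actual post-processor in this PR is Java-based, and `extract_maxwell_data` is a hypothetical name):

```python
import json

def extract_maxwell_data(record: str) -> str:
    """Keep only the "data" payload of a Maxwell binlog envelope."""
    envelope = json.loads(record)
    return json.dumps(envelope["data"])

raw = '{"database": "test", "table": "maxwell", "type": "update", "data": {"id": 1, "daemon": "Firebus!  Firebus!"}}'
print(extract_maxwell_data(raw))  # {"id": 1, "daemon": "Firebus!  Firebus!"}
```

The point is that the reader-side schema then only needs to describe the inner `data` record, not the whole Maxwell envelope.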
   
   2. In some scenarios we need to encode some fields for security purposes; the 
processor can help us do that.
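For example, a processor could hash a sensitive field before the data enters the pipeline. A minimal Python sketch (illustrative only; `mask_field` is a hypothetical helper, not part of this PR):

```python
import hashlib
import json

def mask_field(record: str, field: str) -> str:
    """Replace a sensitive field's value with its SHA-256 hex digest."""
    obj = json.loads(record)
    if obj.get(field) is not None:
        obj[field] = hashlib.sha256(str(obj[field]).encode("utf-8")).hexdigest()
    return json.dumps(obj)

masked = mask_field('{"id": 1, "phone": "13800138000"}', "phone")
```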
   
   3. Sometimes our data quality is not very good: a key field, say the 
precombine field, may have a null value; we can use a processor to fix it.
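A sketch of that fix in Python (illustrative only; the default value and `fill_precombine` name are made up for this example):

```python
import json

def fill_precombine(record: str, field: str, default) -> str:
    """Fill a null or missing precombine field with a default value."""
    obj = json.loads(record)
    if obj.get(field) is None:
        obj[field] = default
    return json.dumps(obj)

print(fill_precombine('{"id": 1, "update_time": null}', "update_time", "1970-01-01 00:00:00"))
# {"id": 1, "update_time": "1970-01-01 00:00:00"}
```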
   
   4. When our schema is read from JDBC or Hive, we can use a processor to make 
our Kafka data compatible with it.
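One simple form of that adjustment is projecting each record onto the target schema's fields, dropping extras and null-filling what is missing. A hedged Python sketch (`align_to_schema` is a hypothetical helper, not the PR's API):

```python
import json

def align_to_schema(record: str, target_fields):
    """Project a record onto the target schema: drop extra fields, null-fill missing ones."""
    obj = json.loads(record)
    return json.dumps({f: obj.get(f) for f in target_fields})

print(align_to_schema('{"id": 1, "debug_flag": true}', ["id", "name"]))
# {"id": 1, "name": null}
```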
   
   All in all, with a custom processor we can do anything we want to the incoming 
JSON data before it is converted into a Dataset.
   Of course `Transformer` is a very useful feature too, but it is based on the 
Spark Dataset API and has certain requirements on data quality.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

