n3nash commented on a change in pull request #674: Upgrade to Hive 2.x, MOR read query fixes and performance improvement
URL: https://github.com/apache/incubator-hudi/pull/674#discussion_r292105871
 
 

 ##########
 File path: hoodie-hadoop-mr/src/main/java/com/uber/hoodie/hadoop/realtime/RealtimeCompactedRecordReader.java
 ##########
 @@ -96,26 +78,41 @@ public boolean next(Void aVoid, ArrayWritable arrayWritable) throws IOException
       // return from delta records map if we have some match.
       String key = arrayWritable.get()[HoodieRealtimeInputFormat.HOODIE_RECORD_KEY_COL_POS]
           .toString();
-      if (LOG.isDebugEnabled()) {
-        LOG.debug(String.format("key %s, base values: %s, log values: %s", key,
-            arrayWritableToString(arrayWritable), arrayWritableToString(deltaRecordMap.get(key))));
-      }
       if (deltaRecordMap.containsKey(key)) {
         // TODO(NA): Invoke preCombine here by converting arrayWritable to Avro. This is required since the
         // deltaRecord may not be a full record and needs values of columns from the parquet
-        Writable[] replaceValue = deltaRecordMap.get(key).get();
-        if (replaceValue.length < 1) {
-          // This record has been deleted, move to the next record
+        Optional<GenericRecord> rec;
 Review comment:
   For L76-78: we don't index the log yet, so I left it there. For L82-83, I can take it up as a follow-up task; I started on it and ran into some issues with data types: https://issues.apache.org/jira/browse/HUDI-152.
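For context, the change above replaces the old empty-`Writable[]`-means-deleted convention with an `Optional<GenericRecord>`: an empty Optional marks a record deleted in the log, a present one carries the merged value. The following is a minimal standalone sketch of that merge pattern; the class name `DeltaMergeSketch`, the `String` payload, and the helper names are hypothetical simplifications, not the actual Hudi types.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the merge-on-read record resolution in the diff above:
// for each base (parquet) row, consult the delta-log map keyed by record key.
// Optional.empty() stands in for a delete; a present value wins over the base row.
public class DeltaMergeSketch {

  // Returns the value to emit for this key, or null when the log says "deleted".
  static String mergeWithLog(Map<String, Optional<String>> deltaRecordMap,
                             String key, String baseValue) {
    if (!deltaRecordMap.containsKey(key)) {
      return baseValue;          // no log entry for this key: keep the base row
    }
    Optional<String> rec = deltaRecordMap.get(key);
    if (!rec.isPresent()) {
      return null;               // deleted in the log: caller skips this row
    }
    return rec.get();            // log record supersedes the base row
  }

  // Small fixture used by main() below: k1 updated, k2 deleted, k3 untouched.
  static String demoMerge(String key) {
    Map<String, Optional<String>> deltaRecordMap = new HashMap<>();
    deltaRecordMap.put("k1", Optional.of("updated"));
    deltaRecordMap.put("k2", Optional.empty());
    return mergeWithLog(deltaRecordMap, key, "base");
  }

  public static void main(String[] args) {
    System.out.println(demoMerge("k1")); // updated
    System.out.println(demoMerge("k2")); // null (deleted)
    System.out.println(demoMerge("k3")); // base
  }
}
```

The real reader additionally has to handle partial delta records (hence the preCombine TODO in the diff): the log record may carry only a subset of columns and needs the remaining values filled in from the parquet row before it can be emitted.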

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services