stoty commented on code in PR #1450:
URL: https://github.com/apache/phoenix/pull/1450#discussion_r1420042762


##########
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/MultiHfileOutputFormat.java:
##########
@@ -122,11 +123,11 @@ public RecordWriter<TableRowkeyPair, Cell> getRecordWriter(TaskAttemptContext co
      * @return
      * @throws IOException 
      */
-    static <V extends Cell> RecordWriter<TableRowkeyPair, V> createRecordWriter(final TaskAttemptContext context)
+    static <V extends Cell> RecordWriter<TableRowkeyPair, V> createRecordWriter(
+        final TaskAttemptContext context, final OutputCommitter committer)
             throws IOException {
         // Get the path of the temporary output file
-        final Path outputPath = FileOutputFormat.getOutputPath(context);
-        final Path outputdir = new FileOutputCommitter(outputPath, context).getWorkPath();
+        final Path outputdir = ((PathOutputCommitter) committer).getOutputPath();

Review Comment:
   @ss77892 
   
   This indeed looks incorrect.
   We should use .getWorkPath() here.
   
   This works for the S3A magic committer, where the work path and the output path are the same, but for FileOutputCommitter (i.e. HDFS) and other committers this breaks the commit mechanism by writing directly into the output directory.
   
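   For illustration, a minimal sketch of resolving the task directory via the committer's work path, as suggested above. The class `CommitterPaths` and helper `taskWorkDir` are hypothetical and not part of the PR; the assumption is that the committer handed to createRecordWriter is a PathOutputCommitter (both FileOutputCommitter and the S3A committers are).
   
   ```java
   import java.io.IOException;
   
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.mapreduce.OutputCommitter;
   import org.apache.hadoop.mapreduce.lib.output.PathOutputCommitter;
   
   final class CommitterPaths {
   
       private CommitterPaths() {
       }
   
       /**
        * Resolves the directory a task attempt should write into.
        * getWorkPath() points at the task-attempt working directory, which
        * FileOutputCommitter later promotes to the final output directory on
        * commit; for the S3A magic committer it resolves to the same location
        * as the output path, so both cases keep the commit protocol intact.
        */
       static Path taskWorkDir(OutputCommitter committer) throws IOException {
           if (committer instanceof PathOutputCommitter) {
               return ((PathOutputCommitter) committer).getWorkPath();
           }
           throw new IOException(
               "Expected a PathOutputCommitter but got " + committer.getClass().getName());
       }
   }
   ```
   
   Calling `((PathOutputCommitter) committer).getOutputPath()` instead would skip the work directory entirely, which is exactly the problem described above for HDFS-style committers.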


