Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/16677
If there is some code path (introduced for SQL) that does not update shuffle
write metrics, that would be a bug.
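
For context, the invariant here is that every shuffle write path should bump
the write metrics as it emits records. Below is a minimal, self-contained
sketch of that pattern in Java; the WriteMetrics and SketchWriter classes are
hypothetical stand-ins for illustration, not Spark's actual classes.

import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in for a shuffle write metrics holder.
class WriteMetrics {
    private final AtomicLong recordsWritten = new AtomicLong();
    private final AtomicLong bytesWritten = new AtomicLong();

    void incRecordsWritten(long n) { recordsWritten.addAndGet(n); }
    void incBytesWritten(long n)   { bytesWritten.addAndGet(n); }

    long recordsWritten() { return recordsWritten.get(); }
    long bytesWritten()   { return bytesWritten.get(); }
}

public class SketchWriter {
    private final WriteMetrics metrics = new WriteMetrics();

    // Every write path funnels through this method, so the metrics
    // cannot be skipped by any caller.
    void writeRecord(byte[] serialized) {
        // ... write `serialized` to the partition file ...
        metrics.incRecordsWritten(1);
        metrics.incBytesWritten(serialized.length);
    }

    public static void main(String[] args) {
        SketchWriter w = new SketchWriter();
        for (String s : new String[] {"a", "bb", "ccc"}) {
            w.writeRecord(s.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("records=" + w.metrics.recordsWritten()
            + " bytes=" + w.metrics.bytesWritten());
    }
}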
On Sat, Jun 23, 2018 at 7:27 AM Liang-Chi Hsieh <[email protected]>
wrote:
> *@viirya* commented on this pull request.
> ------------------------------
>
> In
> core/src/main/java/org/apache/spark/shuffle/sort/BypassMergeSortShuffleWriter.java
> <https://github.com/apache/spark/pull/16677#discussion_r197613554>:
>
> >    while (records.hasNext()) {
> >      final Product2<K, V> record = records.next();
> >      final K key = record._1();
> >      partitionWriters[partitioner.getPartition(key)].write(key, record._2());
> > +    numOfRecords += 1;
>
> Hmm, I think it is fine. However, maybe I missed it, but I can't find where
> SortShuffleWriter updates writeMetrics.recordsWritten?
>
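
(For readability outside the diff view, here is a simplified, self-contained
Java sketch of the same routing-and-counting loop. The concrete key/value
types, the hash-based partitioning, and the in-memory buckets standing in for
partitionWriters are illustrative assumptions, not the PR's actual code.)

import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class BypassRoutingSketch {
    public static void main(String[] args) {
        int numPartitions = 4;

        // In-memory buckets stand in for the per-partition disk writers.
        List<List<Map.Entry<String, Integer>>> partitionBuckets = new ArrayList<>();
        for (int i = 0; i < numPartitions; i++) {
            partitionBuckets.add(new ArrayList<>());
        }

        List<Map.Entry<String, Integer>> input = List.of(
            new SimpleEntry<>("a", 1), new SimpleEntry<>("b", 2), new SimpleEntry<>("c", 3));

        long numOfRecords = 0L;
        Iterator<Map.Entry<String, Integer>> records = input.iterator();
        while (records.hasNext()) {
            Map.Entry<String, Integer> record = records.next();
            String key = record.getKey();
            // Route the record to its target partition, mirroring
            // partitioner.getPartition(key) in the quoted hunk.
            int partition = Math.floorMod(key.hashCode(), numPartitions);
            partitionBuckets.get(partition).add(record);
            // Count the record, mirroring the added numOfRecords += 1 line.
            numOfRecords += 1;
        }

        System.out.println("records written: " + numOfRecords);
    }
}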