GitHub user liutang123 opened a pull request:
https://github.com/apache/spark/pull/21772
[SPARK-24809] [SQL] Serializing LongHashedRelation in executor may result
in data error
When the join key is a long or int in a broadcast join, Spark uses
LongHashedRelation as the broadcast value (see SPARK-14419 for details). However,
if the broadcast value is abnormally large, the executor serializes it to disk,
and data is lost during that serialization.
A flow chart is available [here](http://oi67.tinypic.com/2z5pzs7.jpg).
## What changes were proposed in this pull request?
Write the cursor when serializing, and set the cursor value when
deserializing.
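
For illustration only, here is a minimal, self-contained Scala sketch of the idea (the names `PagedBuffer`, `append`, `write`, and `read` are hypothetical, not the actual `LongToUnsafeRowMap` code): a structure backed by a pre-allocated array tracks how much of it is used with a cursor; the cursor is written during serialization and restored during deserialization, so that a later re-serialization on the executor does not compute the used length from a stale cursor and drop data.

```scala
import java.io.{ObjectInput, ObjectOutput}

// Hypothetical sketch of a page-backed buffer whose used length is tracked by `cursor`.
class PagedBuffer(initialWords: Int) {
  // Pre-allocated storage; only the first `cursor` words hold real data.
  var page: Array[Long] = new Array[Long](initialWords)
  var cursor: Int = 0

  def append(value: Long): Unit = {
    if (cursor == page.length) page = java.util.Arrays.copyOf(page, page.length * 2)
    page(cursor) = value
    cursor += 1
  }

  def write(out: ObjectOutput): Unit = {
    // Write the used length (the cursor) explicitly, then only the used prefix of the page.
    out.writeInt(cursor)
    var i = 0
    while (i < cursor) { out.writeLong(page(i)); i += 1 }
  }

  def read(in: ObjectInput): Unit = {
    val used = in.readInt()
    page = new Array[Long](used)
    var i = 0
    while (i < used) { page(i) = in.readLong(); i += 1 }
    // Restoring the cursor is the crucial step: without it, a second
    // serialization (e.g. spilling to disk on the executor) would derive
    // the used length from a stale cursor and lose data.
    cursor = used
  }
}
```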
## How was this patch tested?
Manual test.
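
As a rough illustration of what such a manual check could look like (again using the hypothetical `PagedBuffer` sketch above, not Spark's own test code), a round trip of serialize, deserialize, and re-serialize can verify that the cursor, and therefore the data, survives:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

object RoundTripCheck {
  def main(args: Array[String]): Unit = {
    val buf = new PagedBuffer(4)
    (1L to 100L).foreach(buf.append)

    // First serialization (as the driver would broadcast it).
    val bytes = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(bytes)
    buf.write(out)
    out.flush()

    // Deserialize on the "executor" side.
    val restored = new PagedBuffer(4)
    restored.read(new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray)))

    // The restored cursor must match, otherwise a later re-serialization
    // (e.g. spilling to disk) would drop data.
    assert(restored.cursor == buf.cursor, "cursor lost during round trip")
  }
}
```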
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/liutang123/spark SPARK-24809
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/21772.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #21772
----
commit a72fe61863e119c0e902cef3054d9140b6d04f77
Author: liulijia <liutang123@...>
Date: 2018-07-15T11:24:55Z
[SPARK-24809] [SQL] Serializing LongHashedRelation in executor may result
in data error
----
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]