[
https://issues.apache.org/jira/browse/SPARK-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Wendell resolved SPARK-1916.
------------------------------------
Resolution: Fixed
Fix Version/s: 0.9.2
1.0.1
Issue resolved by pull request 865
[https://github.com/apache/spark/pull/865]
> SparkFlumeEvent with body bigger than 1020 bytes are not read properly
> ----------------------------------------------------------------------
>
> Key: SPARK-1916
> URL: https://issues.apache.org/jira/browse/SPARK-1916
> Project: Spark
> Issue Type: Bug
> Components: Streaming
> Affects Versions: 0.9.0
> Reporter: David Lemieux
> Assignee: David Lemieux
> Fix For: 1.0.1, 0.9.2
>
> Attachments: SPARK-1916.diff
>
>
> The readExternal implementation on SparkFlumeEvent reads only the first
> 1020 bytes of the actual body when streaming data from Flume.
> This means that any event sent to Spark via Flume will be processed properly
> if the body is small, but will fail if the body is bigger than 1020 bytes.
> Considering that the default maximum size of a Flume Avro event is 32K, the
> implementation should be updated to read the full body.
> The following thread is related:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-using-Flume-body-size-limitation-tt6127.html
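The pitfall described above is the classic stream-read contract: a single `read(byte[])` call may return fewer bytes than requested, while `readFully` loops until the buffer is filled. The sketch below is illustrative only (hypothetical class names, not the actual SparkFlumeEvent code); the 1020-byte chunk size simulates the partial reads seen in the report.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.util.Arrays;

public class ReadFullyDemo {
    // Stream that hands back at most 1020 bytes per read() call, simulating
    // the chunked reads that expose the bug (1020 matches the report).
    static class ChunkedStream extends ByteArrayInputStream {
        ChunkedStream(byte[] data) { super(data); }
        @Override
        public int read(byte[] b, int off, int len) {
            return super.read(b, off, Math.min(len, 1020));
        }
    }

    // Buggy pattern: one read() call, which may return a partial buffer.
    static int singleRead(byte[] data) throws IOException {
        byte[] buf = new byte[data.length];
        return new ChunkedStream(data).read(buf, 0, buf.length);
    }

    // Fixed pattern: readFully() loops internally until the buffer is full.
    static byte[] fullRead(byte[] data) throws IOException {
        byte[] buf = new byte[data.length];
        new DataInputStream(new ChunkedStream(data)).readFully(buf);
        return buf;
    }

    public static void main(String[] args) throws IOException {
        byte[] body = new byte[4096];           // a body larger than 1020 bytes
        Arrays.fill(body, (byte) 1);
        System.out.println("single read(): " + singleRead(body) + " bytes"); // 1020
        System.out.println("readFully():   " + fullRead(body).length + " bytes"); // 4096
    }
}
```

With a 4096-byte body, the single `read()` returns only 1020 bytes, while `readFully` fills the whole buffer, which matches the observed 1020-byte truncation.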
--
This message was sent by Atlassian JIRA
(v6.2#6252)