[ https://issues.apache.org/jira/browse/SPARK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112873#comment-16112873 ]

Basile Deustua commented on SPARK-21143:
----------------------------------------

I have the exact same issue with io.grpc, which heavily uses Netty 4.1.x.
It's very disappointing that the Spark community won't upgrade the Netty version, or 
at least shade the 4.0.x artifact inside the Spark jars so that users are free to pick 
the Netty version they want to use.
Being constrained to the 4.0.x line by Spark's dependency is a bit frustrating.
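
In the meantime, one workaround is to relocate Netty inside your own application jar so 
your 4.1.x copy never collides with Spark's 4.0.x. A minimal sketch, assuming the app is 
packaged with sbt-assembly (the {{shaded.io.netty}} prefix is just an example name, not 
anything Spark requires):

{code:scala}
// build.sbt -- relocate the Netty 4.1.x classes bundled in our assembly jar
// so they no longer clash with the Netty 4.0.42.Final that Spark provides.
// "shaded.io.netty" is an arbitrary example prefix; any unused package works.
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("io.netty.**" -> "shaded.io.netty.@1").inAll
)
{code}

Maven users can get the same effect with the maven-shade-plugin's relocation feature; 
either way the shaded jar leaves Spark's own classpath untouched.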

> Fail to fetch blocks >1MB in size in presence of conflicting Netty version
> --------------------------------------------------------------------------
>
>                 Key: SPARK-21143
>                 URL: https://issues.apache.org/jira/browse/SPARK-21143
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.1
>            Reporter: Ryan Williams
>            Priority: Minor
>
> One of my spark libraries inherited a transitive-dependency on Netty 
> 4.1.6.Final (vs. Spark's 4.0.42.Final), and I observed a strange failure I 
> wanted to document: fetches of blocks larger than 1MB (pre-compression, as 
> far as I can tell) seem to trigger a code path that results in 
> {{AbstractMethodError}}s and ultimately stage failures.
> I put a minimal repro in [this github 
> repo|https://github.com/ryan-williams/spark-bugs/tree/netty]: {{collect}} on 
> a 1-partition RDD with 1032 {{Array\[Byte\]}}s of size 1000 works, but at 
> 1033 {{Array}}s it dies in a confusing way.
> Not sure what fixing/mitigating this in Spark would look like, other than 
> defensively shading+renaming netty.
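
For reference, the quoted repro boils down to roughly the following (a minimal sketch 
approximating the linked repo, with {{sc}} being the usual spark-shell {{SparkContext}}; 
the element counts are the ones from the description above, and the failure only shows 
up when a Netty 4.1.x jar, such as the one pulled in by io.grpc, wins on the classpath):

{code:scala}
// Rough approximation of https://github.com/ryan-williams/spark-bugs/tree/netty
// Each record is a 1000-byte array; everything lives in a single partition.
val small = sc.parallelize(Seq.fill(1032)(Array.fill(1000)(0.toByte)), numSlices = 1)
small.collect()   // works: the fetched result stays under the ~1MB threshold

val large = sc.parallelize(Seq.fill(1033)(Array.fill(1000)(0.toByte)), numSlices = 1)
large.collect()   // fails with AbstractMethodError-driven stage failures
{code}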


