Hi All, I am getting the below exception while using Kryo serialization with a
broadcast variable. I am broadcasting a HashMap with the lines below.
Map<Long, MatcherReleventData> matchData = RddForMarch.collectAsMap();
final Broadcast<Map<Long, MatcherReleventData>> dataMatchGlobal =
jsc.broadcast(matchData);
Yes, without Kryo it did work. When I remove the Kryo registration it
works.
On 15 April 2015 at 19:24, Jeetendra Gangele gangele...@gmail.com wrote:
It's not working in combination with Broadcast.
Without Kryo it's also not working.
On 15 April 2015 at 19:20, Akhil Das ak...@sigmoidanalytics.com wrote:
Is it working without kryo?
Thanks
Best Regards
On Wed, Apr 15, 2015 at 6:38 PM, Jeetendra Gangele gangele...@gmail.com
wrote:
Hi All, I am getting the below exception while using Kryo serialization with a
broadcast variable. I am broadcasting a HashMap with the lines below.
Map<Long, MatcherReleventData>
This looks like a known issue. Check this out:
http://apache-spark-user-list.1001560.n3.nabble.com/java-io-InvalidClassException-org-apache-spark-api-java-JavaUtils-SerializableMapWrapper-no-valid-cor-td20034.html
Can you please suggest any workaround? I am broadcasting a HashMap returned
from
This is a really strange exception ... I'm especially surprised that it
doesn't work w/ Java serialization. Do you think you could try to boil it
down to a minimal example?
On Wed, Apr 15, 2015 at 8:58 AM, Jeetendra Gangele gangele...@gmail.com
wrote:
Yes, without Kryo it did work. When I
Oh interesting. The suggested workaround is to wrap the result from
collectAsMap in another HashMap; you should try that:
Map<Long, MatcherReleventData> matchData = RddForMarch.collectAsMap();
Map<Long, MatcherReleventData> tmp = new HashMap<Long, MatcherReleventData>(matchData);
final Broadcast<Map<Long,
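The idea behind the workaround is that collectAsMap returns a Spark wrapper
(org.apache.spark.api.java.JavaUtils.SerializableMapWrapper) that some
serializers choke on, while a plain java.util.HashMap copy serializes fine.
A minimal sketch of that copy-then-serialize step, outside Spark, using a
stand-in map and String values purely for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class BroadcastMapWorkaround {

    // Round-trip a map through Java serialization to confirm the copy
    // is a serializable, standard map.
    @SuppressWarnings("unchecked")
    static Map<Long, String> roundTrip(Map<Long, String> m) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(m);
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (Map<Long, String>) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the map returned by RddForMarch.collectAsMap().
        Map<Long, String> fromCollectAsMap = new TreeMap<>();
        fromCollectAsMap.put(1L, "a");
        fromCollectAsMap.put(2L, "b");

        // The workaround: copy into a plain HashMap before broadcasting,
        // i.e. before the jsc.broadcast(...) call in the thread above.
        Map<Long, String> safe = new HashMap<>(fromCollectAsMap);

        Map<Long, String> back = roundTrip(safe);
        System.out.println(back.get(1L) + back.get(2L)); // prints "ab"
    }
}
```

In the actual job you would pass the HashMap copy to jsc.broadcast(...)
instead of the collectAsMap result directly.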
It's not working in combination with Broadcast.
Without Kryo it's also not working.
On 15 April 2015 at 19:20, Akhil Das ak...@sigmoidanalytics.com wrote:
Is it working without kryo?
Thanks
Best Regards
On Wed, Apr 15, 2015 at 6:38 PM, Jeetendra Gangele gangele...@gmail.com
wrote:
Hi All
This worked with Java serialization. I am using 1.2.0; you are right that if I
use 1.2.1 or 1.3.0 this issue will not occur.
I will test this and let you know.
On 15 April 2015 at 19:48, Imran Rashid iras...@cloudera.com wrote:
oh interesting. The suggested workaround is to wrap the result from