Re: [Dev] [ML/DAS] java.lang.StackOverflowError in Recommendation Algorithm in spark 1.6.2

2016-08-31 Thread Supun Sethunga
Well, no, it's a property set on the Spark context. Basically, what it does
is store a snapshot of an RDD's state in the file system, very similar to
the CEP state persistence. And when we set checkpointing on the Spark
context, it applies to all the RDDs created in that context, AFAIK.
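For reference, a minimal sketch of what setting this property looks like (the app name, master, and path below are placeholders, not our actual configuration):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Checkpointing is configured once, on the shared SparkContext: it tells
// Spark where to persist RDD snapshots so long lineage chains can be
// truncated.
val conf = new SparkConf()
  .setAppName("example-app") // placeholder
  .setMaster("local[*]")     // placeholder
val sc = new SparkContext(conf)

// Applies to every RDD created in this context, i.e. to both ML and DAS
// workloads, since they share the context.
sc.setCheckpointDir("/tmp/spark-checkpoints") // placeholder path
```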


Re: [Dev] [ML/DAS] java.lang.StackOverflowError in Recommendation Algorithm in spark 1.6.2

2016-08-31 Thread Nirmal Fernando
'checkpointing' is an algorithm property, right? Can we add it as a
hyperparameter configuration? What do we specifically need to do on the ML
server side?


Re: [Dev] [ML/DAS] java.lang.StackOverflowError in Recommendation Algorithm in spark 1.6.2

2016-08-31 Thread Supun Sethunga
We can reduce the default one, but a user will usually increase/change that
when tuning hyper-parameters to improve the accuracy. So we need a
solution that works globally (for any value). A typical 'user'
cannot/shouldn't enable checkpointing, as IMO it's a server configuration.

Anyway, the default is 20 iterations, which is still on the lower side :)


Re: [Dev] [ML/DAS] java.lang.StackOverflowError in Recommendation Algorithm in spark 1.6.2

2016-08-31 Thread Nirmal Fernando
Can't we reduce the default number of iterations, and document how to
enable checkpointing?


[Dev] [ML/DAS] java.lang.StackOverflowError in Recommendation Algorithm in spark 1.6.2

2016-08-31 Thread Supun Sethunga
Hi all,

We are encountering $subject in ML with the default hyper-parameter values.
A similar issue has been reported in [1], but with a different algorithm.

This occurs when the number of iterations for model training is large. The
solution suggested in [1] (setting a checkpoint directory) works for our
scenario, and is the only solid solution we have at the moment. But as
mentioned in [2], checkpointing adds some overhead to Spark operations, and
requires some tuning based on the use case. Therefore, I'm not sure whether
it is a good idea to enable checkpointing in ML, as it would affect DAS's
performance. (Checkpointing is set on the Spark context, which is shared by
both ML and DAS.)

The other option would be to set checkpointing at the start of the
Recommendation algorithm and, once the model is trained, unset it. Since we
are encountering this issue only in this particular algorithm, it does not
need to be done for any other algorithm.
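As a rough sketch of that option (the hyper-parameter values and path below are illustrative only; the method names are from the Spark 1.6 MLlib ALS API):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.recommendation.{ALS, MatrixFactorizationModel, Rating}
import org.apache.spark.rdd.RDD

// Illustrative helper: enable checkpointing only for the ALS training run.
def trainRecommender(sc: SparkContext,
                     ratings: RDD[Rating]): MatrixFactorizationModel = {
  // Set the checkpoint directory just before training. Note the open
  // question with this approach: AFAIK, SparkContext exposes no public
  // API to clear the directory again afterwards.
  sc.setCheckpointDir("/tmp/ml-als-checkpoints") // illustrative path

  new ALS()
    .setRank(10)               // illustrative hyper-parameters
    .setIterations(100)        // large values are what trigger the overflow
    .setCheckpointInterval(10) // truncate the lineage every 10 iterations
    .run(ratings)
}
```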

I would like to know which would be the best approach.

[1] https://issues.apache.org/jira/browse/SPARK-13546
[2]
http://spark.apache.org/docs/1.6.2/streaming-programming-guide.html#checkpointing



*Stack Trace:*

Caused by: java.lang.StackOverflowError
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2606)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1506)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
at sun.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1900)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at scala.collection.immutable.$colon$colon.readObject(List.scala:366)
at sun.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1900)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at scala.collection.immutable.$colon$colon.readObje