Exception in saving MatrixFactorizationModel

2015-09-04 Thread Madawa Soysa
Hi All,

I'm getting an error when trying to save a MatrixFactorizationModel. I'm using the following method to save the model:

model.save(sc, outPath)

I'm getting the following exception when saving the model; I have attached the full stack trace. Any help in resolving this issue would be appreciated.

org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.insert(commands.scala:166)
    at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.run(commands.scala:139)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
    at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:336)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:144)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:135)
    at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:281)
    at org.apache.spark.mllib.recommendation.MatrixFactorizationModel$SaveLoadV1_0$.save(MatrixFactorizationModel.scala:284)
    at org.apache.spark.mllib.recommendation.MatrixFactorizationModel.save(MatrixFactorizationModel.scala:141)


Thanks

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 234.0 failed 1 times, most recent failure: Lost task 0.0 in stage 234.0 (TID 141, localhost): java.lang.NullPointerException
    at parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:147)
    at parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
    at parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
    at org.apache.spark.sql.parquet.ParquetOutputWriter.close(newParquet.scala:88)
    at org.apache.spark.sql.sources.DefaultWriterContainer.abortTask(commands.scala:491)
    at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.org$apache$spark$sql$sources$InsertIntoHadoopFsRelation$$writeRows$1(commands.scala:190)
    at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation$$anonfun$insert$1.apply(commands.scala:160)
    at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation$$anonfun$insert$1.apply(commands.scala:160)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
    at org.apache.spark.scheduler.Task.run(Task.scala:70)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobA
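
For context, here is a minimal, self-contained sketch of the save/load path being exercised above. It assumes Spark/MLlib 1.4.x running in local mode; the factor values and the output path are illustrative, not taken from the original post. As the trace shows, save ultimately writes the factors through DataFrameWriter.parquet, which is where the job aborts.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.recommendation.MatrixFactorizationModel

object SaveLoadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("mf-save-sketch").setMaster("local[*]"))

    // Build a tiny rank-2 model directly from factor RDDs, just to exercise save/load.
    val userFeatures    = sc.parallelize(Seq(1 -> Array(0.1, 0.2), 2 -> Array(0.3, 0.4)))
    val productFeatures = sc.parallelize(Seq(10 -> Array(0.5, 0.6), 20 -> Array(0.7, 0.8)))
    val model = new MatrixFactorizationModel(2, userFeatures, productFeatures)

    val outPath = "/tmp/mf-model"   // illustrative path; it should not already exist
    model.save(sc, outPath)         // the call from the original post

    // Round-trip check: load the model back and use it.
    val loaded = MatrixFactorizationModel.load(sc, outPath)
    println(loaded.predict(1, 10))

    sc.stop()
  }
}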

Re: Submitted applications does not run.

2015-09-01 Thread Madawa Soysa
Hi Jeff,

I solved the issue by following the given instructions. Thanks for the help.

Regards,
Madawa.

On 1 September 2015 at 14:12, Jeff Zhang  wrote:

> You need to make yourself able to ssh to localhost without password,
> please check this blog.
>
> http://hortonworks.com/kb/generating-ssh-keys-for-passwordless-login/
>
>
>
> On Tue, Sep 1, 2015 at 4:31 PM, Madawa Soysa 
> wrote:
>
>> I used ./sbin/start-master.sh
>>
>> When I used ./sbin/start-all.sh the start fails. I get the following
>> error.
>>
>> failed to launch org.apache.spark.deploy.master.Master:
>> localhost: ssh: connect to host localhost port 22: Connection refused
>>
>> On 1 September 2015 at 13:41, Jeff Zhang  wrote:
>>
>>> Did you start spark cluster using command sbin/start-all.sh ?
>>> You should have 2 log files under folder if it is single-node cluster.
>>> Like the following
>>>
>>> spark-jzhang-org.apache.spark.deploy.master.Master-1-jzhangMBPr.local.out
>>> spark-jzhang-org.apache.spark.deploy.worker.Worker-1-jzhangMBPr.local.out
>>>
>>>
>>>
>>> On Tue, Sep 1, 2015 at 4:01 PM, Madawa Soysa 
>>> wrote:
>>>
>>>> There are no logs which includes apache.spark.deploy.worker in file
>>>> name in the SPARK_HOME/logs folder.
>>>>
>>>> On 1 September 2015 at 13:00, Jeff Zhang  wrote:
>>>>
>>>>> This is master log. There's no worker registration info in the log.
>>>>> That means the worker may not start properly. Please check the log file
>>>>> with apache.spark.deploy.worker in file name.
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Sep 1, 2015 at 2:55 PM, Madawa Soysa 
>>>>> wrote:
>>>>>
>>>>>> I cannot see anything abnormal in logs. What would be the reason for
>>>>>> not availability of executors?
>>>>>>
>>>>>> On 1 September 2015 at 12:24, Madawa Soysa 
>>>>>> wrote:
>>>>>>
>>>>>>> Following are the logs available. Please find the attached.
>>>>>>>
>>>>>>> On 1 September 2015 at 12:18, Jeff Zhang  wrote:
>>>>>>>
>>>>>>>> It's in SPARK_HOME/logs
>>>>>>>>
>>>>>>>> Or you can check the spark web ui. http://[master-machine]:8080
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Sep 1, 2015 at 2:44 PM, Madawa Soysa <
>>>>>>>> madawa...@cse.mrt.ac.lk> wrote:
>>>>>>>>
>>>>>>>>> How do I check worker logs? SPARK_HOME/work folder does not exist.
>>>>>>>>> I am using the spark standalone mode.
>>>>>>>>>
>>>>>>>>> On 1 September 2015 at 12:05, Jeff Zhang  wrote:
>>>>>>>>>
>>>>>>>>>> No executors ? Please check the worker logs if you are using
>>>>>>>>>> spark standalone mode.
>>>>>>>>>>
>>>>>>>>>> On Tue, Sep 1, 2015 at 2:17 PM, Madawa Soysa <
>>>>>>>>>> madawa...@cse.mrt.ac.lk> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi All,
>>>>>>>>>>>
>>>>>>>>>>> I have successfully submitted some jobs to spark master. But the
>>>>>>>>>>> jobs won't progress and not finishing. Please see the attached 
>>>>>>>>>>> screenshot.
>>>>>>>>>>> These are fairly very small jobs and this shouldn't take more than 
>>>>>>>>>>> a minute
>>>>>>>>>>> to finish.
>>>>>>>>>>>
>>>>>>>>>>> I'm new to spark and any help would be appreciated.
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Madawa.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> -
>>>>>>>>>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>>>>>>>>>

Error when creating an ALS model in spark

2015-09-01 Thread Madawa Soysa
I'm getting an error when I try to build an ALS model in Spark standalone mode. I am new to Spark, and any help in resolving this issue would be appreciated.

Stack Trace:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, 192.168.0.171): java.io.EOFException
    at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:186)
    at org.apache.spark.broadcast.TorrentBroadcast$.unBlockifyObject(TorrentBroadcast.scala:217)
    at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:178)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1254)
    at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:165)
    at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:64)
    at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:64)
    at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:88)
    at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:62)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:70)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)


Re: Submitted applications does not run.

2015-09-01 Thread Madawa Soysa
I used ./sbin/start-master.sh

When I use ./sbin/start-all.sh, the startup fails and I get the following error:

failed to launch org.apache.spark.deploy.master.Master:
localhost: ssh: connect to host localhost port 22: Connection refused

On 1 September 2015 at 13:41, Jeff Zhang  wrote:

> Did you start spark cluster using command sbin/start-all.sh ?
> You should have 2 log files under folder if it is single-node cluster.
> Like the following
>
> spark-jzhang-org.apache.spark.deploy.master.Master-1-jzhangMBPr.local.out
> spark-jzhang-org.apache.spark.deploy.worker.Worker-1-jzhangMBPr.local.out
>
>
>
> On Tue, Sep 1, 2015 at 4:01 PM, Madawa Soysa 
> wrote:
>
>> There are no logs which includes apache.spark.deploy.worker in file name
>> in the SPARK_HOME/logs folder.
>>
>> On 1 September 2015 at 13:00, Jeff Zhang  wrote:
>>
>>> This is master log. There's no worker registration info in the log. That
>>> means the worker may not start properly. Please check the log file
>>> with apache.spark.deploy.worker in file name.
>>>
>>>
>>>
>>> On Tue, Sep 1, 2015 at 2:55 PM, Madawa Soysa 
>>> wrote:
>>>
>>>> I cannot see anything abnormal in logs. What would be the reason for
>>>> not availability of executors?
>>>>
>>>> On 1 September 2015 at 12:24, Madawa Soysa 
>>>> wrote:
>>>>
>>>>> Following are the logs available. Please find the attached.
>>>>>
>>>>> On 1 September 2015 at 12:18, Jeff Zhang  wrote:
>>>>>
>>>>>> It's in SPARK_HOME/logs
>>>>>>
>>>>>> Or you can check the spark web ui. http://[master-machine]:8080
>>>>>>
>>>>>>
>>>>>> On Tue, Sep 1, 2015 at 2:44 PM, Madawa Soysa <madawa...@cse.mrt.ac.lk> wrote:
>>>>>>
>>>>>>> How do I check worker logs? SPARK_HOME/work folder does not exist. I
>>>>>>> am using the spark standalone mode.
>>>>>>>
>>>>>>> On 1 September 2015 at 12:05, Jeff Zhang  wrote:
>>>>>>>
>>>>>>>> No executors ? Please check the worker logs if you are using spark
>>>>>>>> standalone mode.
>>>>>>>>
>>>>>>>> On Tue, Sep 1, 2015 at 2:17 PM, Madawa Soysa <
>>>>>>>> madawa...@cse.mrt.ac.lk> wrote:
>>>>>>>>
>>>>>>>>> Hi All,
>>>>>>>>>
>>>>>>>>> I have successfully submitted some jobs to spark master. But the
>>>>>>>>> jobs won't progress and not finishing. Please see the attached 
>>>>>>>>> screenshot.
>>>>>>>>> These are fairly very small jobs and this shouldn't take more than a 
>>>>>>>>> minute
>>>>>>>>> to finish.
>>>>>>>>>
>>>>>>>>> I'm new to spark and any help would be appreciated.
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Madawa.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> -
>>>>>>>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>>>>>>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Best Regards
>>>>>>>>
>>>>>>>> Jeff Zhang
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> *_________**Madawa Soysa*
>>>>>>>
>>>>>>> Undergraduate,
>>>>>>>
>>>>>>> Department of Computer Science and Engineering,
>>>>>>>
>>>>>>> University of Moratuwa.
>>>>>>>
>>>>>>>
>>>>>>> Mobile: +94 71 461 6050 <%2B94%2075%20812%200726> | Email:
>>>>>>> madawa...@cse.mrt.ac.lk
>>>>>>> LinkedIn <http:

Re: Submitted applications does not run.

2015-09-01 Thread Madawa Soysa
There are no logs whose file names include apache.spark.deploy.worker in the SPARK_HOME/logs folder.

On 1 September 2015 at 13:00, Jeff Zhang  wrote:

> This is master log. There's no worker registration info in the log. That
> means the worker may not start properly. Please check the log file
> with apache.spark.deploy.worker in file name.
>
>
>
> On Tue, Sep 1, 2015 at 2:55 PM, Madawa Soysa 
> wrote:
>
>> I cannot see anything abnormal in logs. What would be the reason for not
>> availability of executors?
>>
>> On 1 September 2015 at 12:24, Madawa Soysa 
>> wrote:
>>
>>> Following are the logs available. Please find the attached.
>>>
>>> On 1 September 2015 at 12:18, Jeff Zhang  wrote:
>>>
>>>> It's in SPARK_HOME/logs
>>>>
>>>> Or you can check the spark web ui. http://[master-machine]:8080
>>>>
>>>>
>>>> On Tue, Sep 1, 2015 at 2:44 PM, Madawa Soysa 
>>>> wrote:
>>>>
>>>>> How do I check worker logs? SPARK_HOME/work folder does not exist. I
>>>>> am using the spark standalone mode.
>>>>>
>>>>> On 1 September 2015 at 12:05, Jeff Zhang  wrote:
>>>>>
>>>>>> No executors ? Please check the worker logs if you are using spark
>>>>>> standalone mode.
>>>>>>
>>>>>> On Tue, Sep 1, 2015 at 2:17 PM, Madawa Soysa <madawa...@cse.mrt.ac.lk> wrote:
>>>>>>
>>>>>>> Hi All,
>>>>>>>
>>>>>>> I have successfully submitted some jobs to spark master. But the
>>>>>>> jobs won't progress and not finishing. Please see the attached 
>>>>>>> screenshot.
>>>>>>> These are fairly very small jobs and this shouldn't take more than a 
>>>>>>> minute
>>>>>>> to finish.
>>>>>>>
>>>>>>> I'm new to spark and any help would be appreciated.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Madawa.
>>>>>>>
>>>>>>>
>>>>>>> -
>>>>>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>>>>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best Regards
>>>>>>
>>>>>> Jeff Zhang
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> *_**Madawa Soysa*
>>>>>
>>>>> Undergraduate,
>>>>>
>>>>> Department of Computer Science and Engineering,
>>>>>
>>>>> University of Moratuwa.
>>>>>
>>>>>
>>>>> Mobile: +94 71 461 6050 <%2B94%2075%20812%200726> | Email:
>>>>> madawa...@cse.mrt.ac.lk
>>>>> LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter
>>>>> <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards
>>>>
>>>> Jeff Zhang
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> *_**Madawa Soysa*
>>>
>>> Undergraduate,
>>>
>>> Department of Computer Science and Engineering,
>>>
>>> University of Moratuwa.
>>>
>>>
>>> Mobile: +94 71 461 6050 <%2B94%2075%20812%200726> | Email:
>>> madawa...@cse.mrt.ac.lk
>>> LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter
>>> <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>
>>>
>>
>>
>>
>> --
>>
>> *_**Madawa Soysa*
>>
>> Undergraduate,
>>
>> Department of Computer Science and Engineering,
>>
>> University of Moratuwa.
>>
>>
>> Mobile: +94 71 461 6050 <%2B94%2075%20812%200726> | Email:
>> madawa...@cse.mrt.ac.lk
>> LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter
>> <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>
>>
>
>
>
> --
> Best Regards
>
> Jeff Zhang
>



-- 

Madawa Soysa
Undergraduate, Department of Computer Science and Engineering, University of Moratuwa.
Mobile: +94 71 461 6050 | Email: madawa...@cse.mrt.ac.lk
LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>


Re: Submitted applications does not run.

2015-08-31 Thread Madawa Soysa
I cannot see anything abnormal in the logs. What could be the reason for executors not being available?

On 1 September 2015 at 12:24, Madawa Soysa  wrote:

> Following are the logs available. Please find the attached.
>
> On 1 September 2015 at 12:18, Jeff Zhang  wrote:
>
>> It's in SPARK_HOME/logs
>>
>> Or you can check the spark web ui. http://[master-machine]:8080
>>
>>
>> On Tue, Sep 1, 2015 at 2:44 PM, Madawa Soysa 
>> wrote:
>>
>>> How do I check worker logs? SPARK_HOME/work folder does not exist. I am
>>> using the spark standalone mode.
>>>
>>> On 1 September 2015 at 12:05, Jeff Zhang  wrote:
>>>
>>>> No executors ? Please check the worker logs if you are using spark
>>>> standalone mode.
>>>>
>>>> On Tue, Sep 1, 2015 at 2:17 PM, Madawa Soysa 
>>>> wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> I have successfully submitted some jobs to spark master. But the jobs
>>>>> won't progress and not finishing. Please see the attached screenshot. 
>>>>> These
>>>>> are fairly very small jobs and this shouldn't take more than a minute to
>>>>> finish.
>>>>>
>>>>> I'm new to spark and any help would be appreciated.
>>>>>
>>>>> Thanks,
>>>>> Madawa.
>>>>>
>>>>>
>>>>> -
>>>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards
>>>>
>>>> Jeff Zhang
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> *_**Madawa Soysa*
>>>
>>> Undergraduate,
>>>
>>> Department of Computer Science and Engineering,
>>>
>>> University of Moratuwa.
>>>
>>>
>>> Mobile: +94 71 461 6050 <%2B94%2075%20812%200726> | Email:
>>> madawa...@cse.mrt.ac.lk
>>> LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter
>>> <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>
>>>
>>
>>
>>
>> --
>> Best Regards
>>
>> Jeff Zhang
>>
>
>
>
> --
>
> *_**Madawa Soysa*
>
> Undergraduate,
>
> Department of Computer Science and Engineering,
>
> University of Moratuwa.
>
>
> Mobile: +94 71 461 6050 <%2B94%2075%20812%200726> | Email:
> madawa...@cse.mrt.ac.lk
> LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter
> <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>
>



-- 

Madawa Soysa
Undergraduate, Department of Computer Science and Engineering, University of Moratuwa.
Mobile: +94 71 461 6050 | Email: madawa...@cse.mrt.ac.lk
LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>


Re: Submitted applications does not run.

2015-08-31 Thread Madawa Soysa
The following logs are available; please find them attached.

On 1 September 2015 at 12:18, Jeff Zhang  wrote:

> It's in SPARK_HOME/logs
>
> Or you can check the spark web ui. http://[master-machine]:8080
>
>
> On Tue, Sep 1, 2015 at 2:44 PM, Madawa Soysa 
> wrote:
>
>> How do I check worker logs? SPARK_HOME/work folder does not exist. I am
>> using the spark standalone mode.
>>
>> On 1 September 2015 at 12:05, Jeff Zhang  wrote:
>>
>>> No executors ? Please check the worker logs if you are using spark
>>> standalone mode.
>>>
>>> On Tue, Sep 1, 2015 at 2:17 PM, Madawa Soysa 
>>> wrote:
>>>
>>>> Hi All,
>>>>
>>>> I have successfully submitted some jobs to spark master. But the jobs
>>>> won't progress and not finishing. Please see the attached screenshot. These
>>>> are fairly very small jobs and this shouldn't take more than a minute to
>>>> finish.
>>>>
>>>> I'm new to spark and any help would be appreciated.
>>>>
>>>> Thanks,
>>>> Madawa.
>>>>
>>>>
>>>> ---------
>>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>>
>>>
>>>
>>>
>>> --
>>> Best Regards
>>>
>>> Jeff Zhang
>>>
>>
>>
>>
>> --
>>
>> *_**Madawa Soysa*
>>
>> Undergraduate,
>>
>> Department of Computer Science and Engineering,
>>
>> University of Moratuwa.
>>
>>
>> Mobile: +94 71 461 6050 <%2B94%2075%20812%200726> | Email:
>> madawa...@cse.mrt.ac.lk
>> LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter
>> <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>
>>
>
>
>
> --
> Best Regards
>
> Jeff Zhang
>



-- 

Madawa Soysa
Undergraduate, Department of Computer Science and Engineering, University of Moratuwa.
Mobile: +94 71 461 6050 | Email: madawa...@cse.mrt.ac.lk
LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>
Spark Command: /usr/lib/jvm/java-1.7.0-openjdk-amd64/bin/java -cp 
/opt/spark-1.4.1-bin-hadoop2.6/ml/org.wso2.carbon.ml.core_1.0.1.SNAPSHOT.jar:/opt/spark-1.4.1-bin-hadoop2.6/ml/org.wso2.carbon.ml.commons_1.0.1.SNAPSHOT.jar:/opt/spark-1.4.1-bin-hadoop2.6/sbin/../conf/:/opt/spark-1.4.1-bin-hadoop2.6/lib/spark-assembly-1.4.1-hadoop2.6.0.jar:/opt/spark-1.4.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/spark-1.4.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.4.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar
 -Xms512m -Xmx512m -XX:MaxPermSize=256m org.apache.spark.deploy.master.Master 
--ip [master-url] --port 7077 --webui-port 8080

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/09/01 11:32:14 INFO Master: Registered signal handlers for [TERM, HUP, INT]
15/09/01 11:32:14 WARN Utils: Your hostname, ubuntu resolves to a loopback 
address: 127.0.1.1; using 10.8.108.92 instead (on interface wlan0)
15/09/01 11:32:14 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another 
address
15/09/01 11:32:15 WARN NativeCodeLoader: Unable to load native-hadoop library 
for your platform... using builtin-java classes where applicable
15/09/01 11:32:15 INFO SecurityManager: Changing view acls to: ubuntu
15/09/01 11:32:15 INFO SecurityManager: Changing modify acls to: ubuntu
15/09/01 11:32:15 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(ubuntu); users 
with modify permissions: Set(ubuntu)
15/09/01 11:32:16 INFO Slf4jLogger: Slf4jLogger started
15/09/01 11:32:16 INFO Remoting: Starting remoting
15/09/01 11:32:16 INFO Remoting: Remoting started; listening on addresses 
:[akka.tcp://sparkMaster@ubuntu:7077]
15/09/01 11:32:16 INFO Utils: Successfully started service 'sparkMaster' on 
port 7077.
15/09/01 11:32:17 INFO Utils: Successfully started service on port 6066.
15/09/01 11:32:17 INFO StandaloneRestServer: Started REST server for submitting 
applications on port 6066
15/09/01 11:32:17 INFO Master: Starting Spark master at spark://ubuntu:7077
15/09/01 11:32:17 INFO Master: Running Spark version 1.4.1
15/09/01 11:32:17 INFO Utils: Successfully started service 'MasterUI' on port 
8080.
15/09/01 11:32:17 INFO MasterWebUI: Started MasterWebUI at 
http://10.8.108.92:8080
15/09/01 11:32:17 INFO Master: I have been elected leader! New state: ALIVE
15/09/01 11:32:57 INFO Master: Registering app 
ML-SPARK-APPLICATION-0.9453989189195968
15/09/01 11:32:57 INFO Master: Registered app 
ML-SPARK-APPLICATION-0.9453989189195968 with ID app-20150901113257-


Re: Submitted applications does not run.

2015-08-31 Thread Madawa Soysa
How do I check the worker logs? The SPARK_HOME/work folder does not exist. I am using Spark standalone mode.

On 1 September 2015 at 12:05, Jeff Zhang  wrote:

> No executors ? Please check the worker logs if you are using spark
> standalone mode.
>
> On Tue, Sep 1, 2015 at 2:17 PM, Madawa Soysa 
> wrote:
>
>> Hi All,
>>
>> I have successfully submitted some jobs to spark master. But the jobs
>> won't progress and not finishing. Please see the attached screenshot. These
>> are fairly very small jobs and this shouldn't take more than a minute to
>> finish.
>>
>> I'm new to spark and any help would be appreciated.
>>
>> Thanks,
>> Madawa.
>>
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>
>
>
> --
> Best Regards
>
> Jeff Zhang
>



-- 

Madawa Soysa
Undergraduate, Department of Computer Science and Engineering, University of Moratuwa.
Mobile: +94 71 461 6050 | Email: madawa...@cse.mrt.ac.lk
LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>


Submitted applications does not run.

2015-08-31 Thread Madawa Soysa
Hi All,

I have successfully submitted some jobs to the Spark master, but the jobs won't progress and never finish. Please see the attached screenshot. These are fairly small jobs and shouldn't take more than a minute to finish.

I'm new to spark and any help would be appreciated.

Thanks,
Madawa.
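
For what it's worth, this is roughly how an application points itself at a standalone master (a minimal sketch; the master URL mirrors the spark://host:7077 form shown in the master log archived above, and the resource setting is illustrative). Submitted jobs only make progress once at least one worker has registered with that master and can offer executors, which is what the replies archived above track down.

import org.apache.spark.{SparkConf, SparkContext}

object SubmitToStandaloneMaster {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("small-test-job")
      .setMaster("spark://ubuntu:7077")        // illustrative standalone master URL
      .set("spark.executor.memory", "512m")    // illustrative resource setting

    val sc = new SparkContext(conf)

    // A trivially small job; it stays pending if no worker/executor is available.
    println(sc.parallelize(1 to 1000).reduce(_ + _))

    sc.stop()
  }
}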


Re: Serializing MLlib MatrixFactorizationModel

2015-08-21 Thread Madawa Soysa
Hi Joseph,

I have used the built-in save as you suggested. The directory gets created, but the complete model doesn't get written; only a part of the model is written out. Please find attached the part that was written when I tested the above method.
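
If it helps with debugging: as far as I can tell, the built-in save in Spark 1.4.x writes a small directory tree rather than a single file — metadata under <path>/metadata plus the user and product factors as Parquet under <path>/data — so a single part-* file is only a fragment of one of those datasets. A quick way to check whether the whole model made it to disk is to load it back and compare the factor counts (a minimal sketch; an existing SparkContext sc and the path used for the save are assumed):

import org.apache.spark.mllib.recommendation.MatrixFactorizationModel

// Assumes model.save(sc, modelPath) has already completed for this path.
val modelPath = "/tmp/mf-model"   // hypothetical path used for the save
val loaded = MatrixFactorizationModel.load(sc, modelPath)

// If the save was complete, these should match the original model's rank and factor counts.
println(s"rank=${loaded.rank}, users=${loaded.userFeatures.count()}, products=${loaded.productFeatures.count()}")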

On 18 August 2015 at 06:22, Joseph Bradley  wrote:

> I'd recommend using the built-in save and load, which will be better for
> cross-version compatibility.  You should be able to call
> myModel.save(path), and load it back with
> MatrixFactorizationModel.load(path).
>
> On Mon, Aug 17, 2015 at 6:31 AM, Madawa Soysa 
> wrote:
>
>> Hi All,
>>
>> I have an issue when i try to serialize a MatrixFactorizationModel object
>> as a java object in a Java application. When I deserialize the object, I
>> get the following exception.
>>
>> Caused by: java.lang.ClassNotFoundException:
>> org.apache.spark.OneToOneDependency cannot be found by
>> org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a39653b
>>
>> Any solution for this?
>>
>> --
>>
>> *_**Madawa Soysa*
>>
>> Undergraduate,
>>
>> Department of Computer Science and Engineering,
>>
>> University of Moratuwa.
>>
>>
>> Mobile: +94 71 461 6050 <%2B94%2075%20812%200726> | Email:
>> madawa...@cse.mrt.ac.lk
>> LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter
>> <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>
>>
>
>


-- 

Madawa Soysa
Undergraduate, Department of Computer Science and Engineering, University of Moratuwa.
Mobile: +94 71 461 6050 | Email: madawa...@cse.mrt.ac.lk
LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>


part-0
Description: Binary data


Serializing MLlib MatrixFactorizationModel

2015-08-17 Thread Madawa Soysa
Hi All,

I have an issue when I try to serialize a MatrixFactorizationModel object as a Java object in a Java application. When I deserialize the object, I get the following exception:

Caused by: java.lang.ClassNotFoundException:
org.apache.spark.OneToOneDependency cannot be found by
org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a39653b

Any solution for this?

-- 

Madawa Soysa
Undergraduate, Department of Computer Science and Engineering, University of Moratuwa.
Mobile: +94 71 461 6050 | Email: madawa...@cse.mrt.ac.lk
LinkedIn <http://lk.linkedin.com/in/madawasoysa> | Twitter <https://twitter.com/madawa_rc> | Tumblr <http://madawas.tumblr.com/>