My apologies for double posting, but I missed the web links that I followed, which are:
1. http://ramhiser.com/2015/02/01/configuring-ipython-notebook-support-for-pyspark/
2. http://blog.cloudera.com/blog/2014/08/how-to-use-ipython-notebook-with-apache-spark/
3.
Hello Sooraj,
I see you are using the IPython notebook. Can you tell me whether you are on
Windows or a Linux-based OS? I am using Windows 7 and I am new to Spark.
I am trying to connect IPython to my local cluster based on CDH 5.4. I
followed the tutorials linked above, but they are written for a Linux
environment.
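For what it's worth, the linked tutorials mostly boil down to pointing PySpark's driver at IPython via environment variables. A minimal sketch (the Spark variable names are real, but the notebook option and the Windows `set` equivalents are my assumptions, to be adapted to your setup):

```shell
# Point the PySpark driver at IPython and have it launch the notebook.
# On Linux (as in the linked tutorials):
export PYSPARK_DRIVER_PYTHON=ipython
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"

# On Windows 7, the cmd.exe equivalents would be:
#   set PYSPARK_DRIVER_PYTHON=ipython
#   set PYSPARK_DRIVER_PYTHON_OPTS=notebook

# Then start PySpark as usual; the notebook server comes up instead of a shell:
#   pyspark --master yarn-client
```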
That turned out to be a silly data type mistake. At one point in the
iterative call, I was passing an integer value for the parameter 'alpha' of
the ALS train API, which was expecting a Double. So py4j complained,
correctly, that it could not find a method accepting an integer value for
that parameter.
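For anyone hitting the same py4j error: the fix was simply making sure 'alpha' (and lambda) are sent as Python floats, so py4j can match them to the Java Double in the method signature. A minimal sketch of the coercion (the helper name is mine, and the commented-out train call is only illustrative):

```python
# py4j maps a Python int to a Java integer type, which fails to match ALS
# methods expecting a Double for alpha/lambda. Coercing to float avoids the
# "method not found" complaint.
def coerce_als_params(rank, iterations, lambda_, alpha):
    # rank and iterations really are ints; lambda and alpha must be floats
    return int(rank), int(iterations), float(lambda_), float(alpha)

rank, iterations, lambda_, alpha = coerce_als_params(10, 20, 0.01, 40)

# Illustrative only -- the real call in the notebook looks like:
# model = ALS.trainImplicit(ratings, rank, iterations,
#                           lambda_=lambda_, alpha=alpha)
```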
Hi Ashish,
I am running the IPython notebook server on one of the nodes of the cluster
(HDP). Setting it up was quite straightforward, and I guess I followed the
same references that you linked to. Then I access the notebook remotely
from my development PC. I have never tried to connect a local IPython (on
Hi,
I am using the MLlib collaborative filtering API on an implicit preference
data set. From a PySpark notebook, I am iteratively creating the matrix
factorization model with the aim of measuring the RMSE for each combination
of parameters for this API, such as rank, lambda, and alpha. After the code
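The parameter sweep described above can be sketched as a loop over all combinations; here `train_and_rmse` is a dummy stand-in I made up for the real ALS.trainImplicit call plus RMSE evaluation against the held-out ratings:

```python
from itertools import product

# Hypothetical placeholder: in the notebook this would train an implicit ALS
# model on the ratings RDD and return the RMSE on a validation set. The dummy
# score below only exists so the loop structure is runnable.
def train_and_rmse(rank, lambda_, alpha):
    return abs(rank - 10) * 0.01 + lambda_ + alpha * 0.001

ranks   = [8, 10, 12]
lambdas = [0.01, 0.1]    # regularization parameter
alphas  = [1.0, 40.0]    # confidence scaling; note: floats, not ints

# RMSE for every (rank, lambda, alpha) combination
results = {
    (rank, lam, alpha): train_and_rmse(rank, lam, alpha)
    for rank, lam, alpha in product(ranks, lambdas, alphas)
}
best = min(results, key=results.get)
```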