Yes, but only when I try to connect from a web service running in Tomcat.

When I try to connect using a stand-alone program with the same parameters,
it works fine.
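
For reference, a minimal version of the stand-alone program that connects
successfully looks roughly like this (a sketch only - the class name and app
name are placeholders, not the real ones):

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ConnectTest {
        public static void main(String[] args) {
            // Master URL copied verbatim from the web UI
            JavaSparkContext sc = new JavaSparkContext(
                "spark://hadoop-s1.oculus.local:7077", "connect-test");
            // Trivial job just to prove the cluster accepted the connection
            long n = sc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
            System.out.println("count = " + n);
            sc.stop();
        }
    }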


On Sat, Feb 22, 2014 at 12:15 PM, Mayur Rustagi <mayur.rust...@gmail.com> wrote:

> So Spark is running on that IP, the web UI is loading on that IP and showing
> workers, and when you connect to that IP with the Java API the cluster appears
> to be down to it?
>
> Mayur Rustagi
> Ph: +919632149971
> http://www.sigmoidanalytics.com
> https://twitter.com/mayur_rustagi
>
>
>
> On Fri, Feb 21, 2014 at 10:22 PM, Nathan Kronenfeld <
> nkronenf...@oculusinfo.com> wrote:
>
>> Netstat gives exactly the expected IP address (not a 127... address, but a
>> 192... one).
>> I tried connecting by the IP anyway, though... exactly the same results, but
>> with a number instead of a name.
>> Oh, and I forgot to mention last time, in case it makes a difference:
>> I'm running 0.8.1, not 0.9.0, at least for now.
>>
>>
>>
>> On Sat, Feb 22, 2014 at 12:50 AM, Mayur Rustagi 
>> <mayur.rust...@gmail.com> wrote:
>>
>>> Most likely the master is binding to one address and you are connecting to
>>> some other internal address. The master can bind to a random internal
>>> address (127.0...) or even to your machine's IP at that time.
>>> The easiest check is:
>>> netstat -an | grep 7077
>>> This will tell you exactly which IP to bind to when launching the Spark
>>> context.
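>>>
>>> If the master does turn out to be bound to a different address than the one
>>> in the driver's master URL, one option (just a sketch - the hostname below
>>> is an example) is to pin the bind address in conf/spark-env.sh on the master
>>> machine and restart it:
>>>
>>> export SPARK_MASTER_IP=hadoop-s1.oculus.local
>>> export SPARK_MASTER_PORT=7077
>>>
>>> and then connect with the matching spark://hadoop-s1.oculus.local:7077 URL.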
>>>
>>> Mayur Rustagi
>>> Ph: +919632149971
>>> http://www.sigmoidanalytics.com
>>> https://twitter.com/mayur_rustagi
>>>
>>>
>>>
>>> On Fri, Feb 21, 2014 at 9:36 PM, Nathan Kronenfeld <
>>> nkronenf...@oculusinfo.com> wrote:
>>>
>>>> Can anyone help me here?
>>>>
>>>> I've got a small Spark cluster running on three machines - hadoop-s1,
>>>> hadoop-s2, and hadoop-s3 - with s1 acting as master and all three acting as
>>>> workers.  It works fine - I can connect with spark-shell, I can run jobs, and
>>>> I can see the web UI.
>>>>
>>>> The web UI says:
>>>> Spark Master at spark://hadoop-s1.oculus.local:7077
>>>> URL: spark://hadoop-s1.oculus.local:7077
>>>>
>>>> I've connected to it fine using both a Scala and a Java SparkContext.
>>>>
>>>> But when I try connecting from within a Tomcat service, I get the
>>>> following messages:
>>>> [INFO] 22 Feb 2014 00:27:38 - org.apache.spark.Logging$class -
>>>> Connecting to master spark://hadoop-s1.oculus.local:7077...
>>>> [INFO] 22 Feb 2014 00:27:58 - org.apache.spark.Logging$class -
>>>> Connecting to master spark://hadoop-s1.oculus.local:7077...
>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - All
>>>> masters are unresponsive! Giving up.
>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Spark
>>>> cluster looks dead, giving up.
>>>> [ERROR] 22 Feb 2014 00:28:18 - org.apache.spark.Logging$class - Exiting
>>>> due to error from cluster scheduler: Spark cluster looks down
>>>>
>>>> When I look at the Spark server logs, there isn't even a sign of an
>>>> attempted connection.
>>>>
>>>> I'm trying to use a JavaSparkContext; I've printed out the parameters I
>>>> pass in, and the same parameters work fine in a stand-alone program.
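>>>>
>>>> For reference, the call is roughly the following (the app name, Spark home,
>>>> and jar path here are placeholders, not the real values):
>>>>
>>>>     JavaSparkContext sc = new JavaSparkContext(
>>>>         "spark://hadoop-s1.oculus.local:7077",      // master, from the web UI
>>>>         "tile-service",                             // app name (placeholder)
>>>>         "/opt/spark",                               // Spark home on the workers (placeholder)
>>>>         new String[] { "/path/to/service.jar" });   // jars to ship (placeholder)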
>>>>
>>>> Anyone have a clue why this fails? Or even how to find out why it fails?
>>>>
>>>>
>>>> --
>>>> Nathan Kronenfeld
>>>> Senior Visualization Developer
>>>> Oculus Info Inc
>>>> 2 Berkeley Street, Suite 600,
>>>> Toronto, Ontario M5A 4J5
>>>> Phone:  +1-416-203-3003 x 238
>>>> Email:  nkronenf...@oculusinfo.com
>>>>
>>>
>>>
>>
>>
>> --
>> Nathan Kronenfeld
>> Senior Visualization Developer
>> Oculus Info Inc
>> 2 Berkeley Street, Suite 600,
>> Toronto, Ontario M5A 4J5
>> Phone:  +1-416-203-3003 x 238
>> Email:  nkronenf...@oculusinfo.com
>>
>
>


-- 
Nathan Kronenfeld
Senior Visualization Developer
Oculus Info Inc
2 Berkeley Street, Suite 600,
Toronto, Ontario M5A 4J5
Phone:  +1-416-203-3003 x 238
Email:  nkronenf...@oculusinfo.com
