Did you use docker or plain lxc specifically?

Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>



On Tue, Jun 3, 2014 at 1:40 PM, MrAsanjar . <[email protected]> wrote:

> thanks guys, that fixed my problem. As you might have noticed, I am VERY
> new to spark. Building a spark cluster using LXC has been a challenge.
>
>
> On Tue, Jun 3, 2014 at 2:49 AM, Akhil Das <[email protected]>
> wrote:
>
>> As Andrew said, your application is running in local mode instead of on
>> your standalone cluster. You need to pass
>>
>> MASTER=spark://sanjar-local-machine-1:7077
>>
>> before running your sparkPi example.
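>>
>> For example (a sketch only, assuming the stock run-example script in the
>> Spark distribution and the master URL shown on your WebUI; adjust the
>> hostname and example arguments to your setup):
>>
>> ```shell
>> # Set MASTER so the example connects to the standalone cluster
>> # rather than falling back to local mode.
>> MASTER=spark://sanjar-local-machine-1:7077 \
>>   ./bin/run-example org.apache.spark.examples.SparkPi
>> ```
>>
>> Once the job connects, the application should appear under "Running
>> Applications" on the master WebUI.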
>>
>>
>> Thanks
>> Best Regards
>>
>>
>> On Tue, Jun 3, 2014 at 1:12 PM, MrAsanjar . <[email protected]> wrote:
>>
>>> Thanks for your reply, Andrew. I am running applications directly on the
>>> master node. My cluster also contains three worker nodes, all visible
>>> in the WebUI.
>>> Spark Master at spark://sanjar-local-machine-1:7077
>>>
>>>    - *URL:* spark://sanjar-local-machine-1:7077
>>>    - *Workers:* 3
>>>    - *Cores:* 24 Total, 0 Used
>>>    - *Memory:* 43.7 GB Total, 0.0 B Used
>>>    - *Applications:* 0 Running, 0 Completed
>>>    - *Drivers:* 0 Running, 0 Completed
>>>    - *Status:* ALIVE
>>>
>>> Workers:
>>> Id | Address | State | Cores | Memory
>>> worker-20140603013834-sanjar-local-machine-2-43334
>>> <http://sanjar-local-machine-2:8081/> | sanjar-local-machine-2:43334 |
>>> ALIVE | 8 (0 Used) | 14.6 GB (0.0 B Used)
>>> worker-20140603015921-sanjar-local-machine-3-51926
>>> <http://sanjar-local-machine-3:8081/> | sanjar-local-machine-3:51926 |
>>> ALIVE | 8 (0 Used) | 14.6 GB (0.0 B Used)
>>> worker-20140603020250-sanjar-local-machine-4-43167
>>> <http://sanjar-local-machine-4:8081/> | sanjar-local-machine-4:43167 |
>>> ALIVE | 8 (0 Used) | 14.6 GB (0.0 B Used)
>>>
>>> Running Applications (ID | Name | Cores | Memory per Node | Submitted
>>> Time | User | State | Duration): none
>>> Completed Applications (same columns): none
>>>
>>>
>>>
>>> On Tue, Jun 3, 2014 at 2:33 AM, Andrew Ash <[email protected]> wrote:
>>>
>>>> Your applications are probably not connecting to your existing cluster,
>>>> and are instead running in local mode.  Are you passing the master URL to
>>>> the
>>>> SparkPi application?
>>>>
>>>> Andrew
>>>>
>>>>
>>>> On Tue, Jun 3, 2014 at 12:30 AM, MrAsanjar . <[email protected]>
>>>> wrote:
>>>>
>>>>>
>>>>> Hi all,
>>>>> The running and completed application counts never get updated; they
>>>>> are always zero. I have run the SparkPi application at least 10 times.
>>>>> Please help.
>>>>>
>>>>>    - *Workers:* 3
>>>>>    - *Cores:* 24 Total, 0 Used
>>>>>    - *Memory:* 43.7 GB Total, 0.0 B Used
>>>>>    - *Applications:* 0 Running, 0 Completed
>>>>>    - *Drivers:* 0 Running, 0 Completed
>>>>>    - *Status:* ALIVE
>>>>>
>>>>>
>>>>
>>>
>>
>
