Re: Setting the heap size

2010-11-01 Thread Patrick Hunt
Actually, if you are going to admin your own ZK, it's probably a good
idea to review that Admin doc fully. There is other good detail in
there (backups and cleaning the dataDir, for example).
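
For instance, the dataDir cleanup the Admin doc describes can be
scripted. A minimal sketch, assuming the PurgeTxnLog helper class that
ships in the ZooKeeper jar and illustrative install/data paths (the jar
names and exact invocation may differ by version, so check the Admin
guide before relying on it):

    #!/bin/sh
    # Purge old snapshots and transaction logs, keeping the 3 newest.
    # Paths and jar names below are illustrative assumptions.
    ZK_HOME=/opt/zookeeper
    DATA_DIR=/var/lib/zookeeper   # dataDir from zoo.cfg
    java -cp "$ZK_HOME/zookeeper.jar:$ZK_HOME/lib/log4j-1.2.15.jar:$ZK_HOME/conf" \
      org.apache.zookeeper.server.PurgeTxnLog "$DATA_DIR" "$DATA_DIR" -n 3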

Regards,

Patrick

On Fri, Oct 29, 2010 at 7:22 AM, Tim Robertson
 wrote:
> Great - thanks Patrick!
>
>
> On Thu, Oct 28, 2010 at 6:13 PM, Patrick Hunt  wrote:
>> Tim, one other thing you might want to be aware of:
>> http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html#sc_supervision
>>
>> Patrick
>>
>> On Thu, Oct 28, 2010 at 9:11 AM, Patrick Hunt  wrote:
>>> On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson
>>>  wrote:
>>>> We are setting up a small 13-node Hadoop cluster running 1 HDFS
>>>> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
>>>> installing ZooKeeper to perform the HBase coordination and to manage a
>>>> few simple process locks for other tasks we run.
>>>>
>>>> Could someone please advise what kind of heap we should give to our
>>>> single ZK node and also (ahem) how does one actually set this? It's
>>>> not immediately obvious in the docs or config.
>>>
>>> The amount of heap necessary depends on the application(s) using ZK;
>>> how you configure the heap depends on what packaging you are using to
>>> start ZK.
>>>
>>> Are you using zkServer.sh from our distribution? If so then you
>>> probably want to set the JVMFLAGS env variable. We pass this through
>>> to the JVM; see -Xmx in the man page
>>> (http://www.manpagez.com/man/1/java/).
>>>
>>> Given this is for HBase (which I'm reasonably familiar with), the
>>> default heap should be fine. However, you might want to check with the
>>> HBase team on that.
>>>
>>> I'd also encourage you to enter a JIRA on the (lack of) doc issue you
>>> highlighted:  https://issues.apache.org/jira/browse/ZOOKEEPER
>>>
>>> Regards,
>>>
>>> Patrick
>>>
>>
>


Re: Setting the heap size

2010-10-29 Thread Tim Robertson
Great - thanks Patrick!


On Thu, Oct 28, 2010 at 6:13 PM, Patrick Hunt  wrote:
> Tim, one other thing you might want to be aware of:
> http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html#sc_supervision
>
> Patrick
>
> On Thu, Oct 28, 2010 at 9:11 AM, Patrick Hunt  wrote:
>> On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson
>>  wrote:
>>> We are setting up a small 13-node Hadoop cluster running 1 HDFS
>>> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
>>> installing ZooKeeper to perform the HBase coordination and to manage a
>>> few simple process locks for other tasks we run.
>>>
>>> Could someone please advise what kind of heap we should give to our
>>> single ZK node and also (ahem) how does one actually set this? It's
>>> not immediately obvious in the docs or config.
>>
>> The amount of heap necessary depends on the application(s) using ZK;
>> how you configure the heap depends on what packaging you are using to
>> start ZK.
>>
>> Are you using zkServer.sh from our distribution? If so then you
>> probably want to set the JVMFLAGS env variable. We pass this through
>> to the JVM; see -Xmx in the man page
>> (http://www.manpagez.com/man/1/java/).
>>
>> Given this is for HBase (which I'm reasonably familiar with), the
>> default heap should be fine. However, you might want to check with the
>> HBase team on that.
>>
>> I'd also encourage you to enter a JIRA on the (lack of) doc issue you
>> highlighted:  https://issues.apache.org/jira/browse/ZOOKEEPER
>>
>> Regards,
>>
>> Patrick
>>
>


Re: Setting the heap size

2010-10-29 Thread Sean Bigdatafun
Why would you run only 9 RS and leave 3 MapReduce-only nodes? I can't
see any benefit in doing that.

Sean

On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson wrote:

> Hi all,
>
> We are setting up a small 13-node Hadoop cluster running 1 HDFS
> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
> installing ZooKeeper to perform the HBase coordination and to manage a
> few simple process locks for other tasks we run.
>
> Could someone please advise what kind of heap we should give to our
> single ZK node and also (ahem) how does one actually set this? It's
> not immediately obvious in the docs or config.
>
> Thanks,
> Tim
>


Re: Setting the heap size

2010-10-28 Thread Patrick Hunt
Tim, one other thing you might want to be aware of:
http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html#sc_supervision

Patrick
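
A minimal sketch of the supervision idea from that doc section, assuming
a daemontools-style run script and illustrative install paths/jar names
(the supervisor just needs the server running in the foreground so it
can restart the JVM if it exits):

    #!/bin/sh
    # daemontools-style run script: exec the ZooKeeper server in the
    # foreground so the supervisor can restart it on failure.
    # Paths and jar names are illustrative assumptions.
    ZK_HOME=/opt/zookeeper
    exec java $JVMFLAGS \
      -cp "$ZK_HOME/zookeeper.jar:$ZK_HOME/lib/log4j-1.2.15.jar:$ZK_HOME/conf" \
      org.apache.zookeeper.server.quorum.QuorumPeerMain "$ZK_HOME/conf/zoo.cfg"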

On Thu, Oct 28, 2010 at 9:11 AM, Patrick Hunt  wrote:
> On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson
>  wrote:
>> We are setting up a small 13-node Hadoop cluster running 1 HDFS
>> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
>> installing ZooKeeper to perform the HBase coordination and to manage a
>> few simple process locks for other tasks we run.
>>
>> Could someone please advise what kind of heap we should give to our
>> single ZK node and also (ahem) how does one actually set this? It's
>> not immediately obvious in the docs or config.
>
> The amount of heap necessary depends on the application(s) using ZK;
> how you configure the heap depends on what packaging you are using to
> start ZK.
>
> Are you using zkServer.sh from our distribution? If so then you
> probably want to set the JVMFLAGS env variable. We pass this through
> to the JVM; see -Xmx in the man page
> (http://www.manpagez.com/man/1/java/).
>
> Given this is for HBase (which I'm reasonably familiar with), the
> default heap should be fine. However, you might want to check with the
> HBase team on that.
>
> I'd also encourage you to enter a JIRA on the (lack of) doc issue you
> highlighted:  https://issues.apache.org/jira/browse/ZOOKEEPER
>
> Regards,
>
> Patrick
>


Re: Setting the heap size

2010-10-28 Thread Patrick Hunt
On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson
 wrote:
> We are setting up a small 13-node Hadoop cluster running 1 HDFS
> master, 9 region servers for HBase and 3 MapReduce nodes, and are just
> installing ZooKeeper to perform the HBase coordination and to manage a
> few simple process locks for other tasks we run.
>
> Could someone please advise what kind of heap we should give to our
> single ZK node and also (ahem) how does one actually set this? It's
> not immediately obvious in the docs or config.

The amount of heap necessary depends on the application(s) using ZK;
how you configure the heap depends on what packaging you are using to
start ZK.

Are you using zkServer.sh from our distribution? If so then you
probably want to set the JVMFLAGS env variable. We pass this through
to the JVM; see -Xmx in the man page
(http://www.manpagez.com/man/1/java/).
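
For example, a minimal sketch of starting the server with a 1 GB heap
via JVMFLAGS, assuming the stock zkServer.sh from the distribution and
an illustrative install path (the 1g value is only an example, not a
sizing recommendation):

    # zkServer.sh passes $JVMFLAGS straight through to the java command.
    export JVMFLAGS="-Xmx1g -Xms1g"   # heap size is an illustrative choice
    /opt/zookeeper/bin/zkServer.sh start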

Given this is for HBase (which I'm reasonably familiar with), the
default heap should be fine. However, you might want to check with the
HBase team on that.

I'd also encourage you to enter a JIRA on the (lack of) doc issue you
highlighted:  https://issues.apache.org/jira/browse/ZOOKEEPER

Regards,

Patrick


Setting the heap size

2010-10-28 Thread Tim Robertson
Hi all,

We are setting up a small 13-node Hadoop cluster running 1 HDFS
master, 9 region servers for HBase and 3 MapReduce nodes, and are just
installing ZooKeeper to perform the HBase coordination and to manage a
few simple process locks for other tasks we run.

Could someone please advise what kind of heap we should give to our
single ZK node and also (ahem) how does one actually set this? It's
not immediately obvious in the docs or config.

Thanks,
Tim