Too many connections

2010-10-05 Thread Avinash Lakshman
I find this happening in my observer nodes' logs. The observers are
running in a different data center from the ZK non-observers. The only
way to fix this seems to be restarting. How can I start addressing this?
Here is the stack trace.

Too many connections from /10.30.84.207 - max is 10
WARN - Session 0x0 for server mybox.mydomain.com/10.30.84.207:5001,
unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
at sun.nio.ch.IOUtil.read(IOUtil.java:200)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
at
org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:817)
at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1089)

Please advise.

Cheers
Avinash


Re: Too many connections

2010-10-05 Thread Avinash Lakshman
Thanks Patrick. But what does this mean? I see the log on server A telling
me "Too many connections from A - default is 10". Too many connections from A
to whom? I do not see who the other end of the connection is.

Cheers
Avinash

On Tue, Oct 5, 2010 at 9:27 AM, Patrick Hunt ph...@apache.org wrote:

 See the maxClientCnxns configuration parameter in the docs:

 http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html#sc_advancedConfiguration
 Patrick

 On Tue, Oct 5, 2010 at 8:10 AM, Avinash Lakshman avinash.laksh...@gmail.com wrote:

  [quoted text elided]



Re: Too many connections

2010-10-05 Thread Avinash Lakshman
So shouldn't all servers in another DC just have one session each? Even if I
have 50 observers in another DC, that should be 50 sessions established, since
the IPs don't change, correct? Am I missing something? In some ZK clients I
see the following exception even though they are in the same DC.

WARN - Session 0x0 for server msgzkapp013.abc.com/10.138.43.219:5001,
unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
at sun.nio.ch.IOUtil.read(IOUtil.java:200)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
at
org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:817)
at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1089)
WARN - Session 0x0 for server msgzkapp012.abc.com/10.138.42.219:5001,
unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcher.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
at sun.nio.ch.IOUtil.write(IOUtil.java:75)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
at
org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:851)
at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1089)
What might be happening in this case?

Cheers
Avinash
On Tue, Oct 5, 2010 at 9:47 AM, Patrick Hunt ph...@apache.org wrote:

 A (/10.30.84.207, a ZooKeeper client) is attempting to establish more
 than 10 sessions to the ZooKeeper server where you got the log. This can be
 caused by a bug in user code (we've seen bugs where incorrectly implemented
 ZK clients attempt to create an infinite number of sessions, which
 essentially DoS the service, so we added the maxClientCnxns default limit of
 10).
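 As a sketch of where that limit lives (the value below is illustrative; the parameter name is from the linked admin docs), zoo.cfg would get:

 ```properties
 # zoo.cfg -- per-IP limit on concurrent client connections (0 disables the check)
 maxClientCnxns=60
 ```

 To see who actually holds the connections, the four-letter `stat` command (e.g. `echo stat | nc server 2181`) lists every active client connection with its remote address.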

 Often users see this problem when they are trying to simulate a real
 environment - they run a set of simulated client sessions (10) from a
 single host (IP) hitting the servers. However, in your case I'm guessing
 that it has something to do with this:

  The observers are running in a different data center from where the ZK
 non-observers are running.

 Could you have a NAT or some other networking configuration that makes all
 the observers seem to be coming from the same IP address?

 Patrick

 On Tue, Oct 5, 2010 at 9:33 AM, Avinash Lakshman avinash.laksh...@gmail.com wrote:

  [quoted text elided]





Zookeeper on 60+Gb mem

2010-10-05 Thread Maarten Koopmans
Hi,

I just wondered: has anybody ever run ZooKeeper to the max on a 68GB
quadruple extra large high-memory EC2 instance? With, say, 60GB allocated or so?

Because EC2 with EBS is a nice way to grow your ZooKeeper cluster (data on the
EBS volumes, upgrade as your memory utilization grows) - I just wonder
what the limits are there, or if I am going where angels fear to tread...

--Maarten

Re: Zookeeper on 60+Gb mem

2010-10-05 Thread Mahadev Konar
Hi Maarten,
  I know of a group which uses around a 3GB heap for ZooKeeper, but I have
never heard of requirements this large. It would definitely be a learning
experience with such high memory, and I think it would be very useful for
others in the community as well.

Thanks
mahadev


On 10/5/10 11:03 AM, Maarten Koopmans maar...@vrijheid.net wrote:

 [quoted text elided]



Re: Zookeeper on 60+Gb mem

2010-10-05 Thread Avinash Lakshman
I have run it with over 5 GB of heap and over 10M znodes. We will definitely run
it with over 64 GB of heap. Technically I do not see any limitation.
However, I will let the experts chime in.

Avinash

On Tue, Oct 5, 2010 at 11:14 AM, Mahadev Konar maha...@yahoo-inc.com wrote:

 [quoted text elided]



Re: Zookeeper on 60+Gb mem

2010-10-05 Thread Ted Dunning
That would be an interesting experiment although it is way outside normal
usage as a coordination store.

I have used ZK as a session store for PHP with OK results. I never
implemented an expiration mechanism, so things had to be cleared out
manually sometimes. It worked pretty well until things filled up.

On Tue, Oct 5, 2010 at 11:03 AM, Maarten Koopmans maar...@vrijheid.net wrote:

 [quoted text elided]


Re: Zookeeper on 60+Gb mem

2010-10-05 Thread Benjamin Reed
you will need to time how long it takes to read all that state back in
and adjust the initLimit accordingly. it will probably take a while to
pull all that data into memory.
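A sketch of the relevant zoo.cfg settings (values illustrative; initLimit is
counted in ticks, so the time allowed is tickTime * initLimit):

```properties
# zoo.cfg -- illustrative values
tickTime=2000    # one tick = 2000 ms
initLimit=300    # allow followers/observers 300 ticks (10 min) to load state and sync
syncLimit=30     # 30 ticks (60 s) of allowed sync lag during normal operation
```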


ben

On 10/05/2010 11:36 AM, Avinash Lakshman wrote:

[quoted text elided]



Re: Zookeeper on 60+Gb mem

2010-10-05 Thread Patrick Hunt
Tuning GC is going to be critical, otherwise all the sessions will time out (and
potentially expire) during GC pauses.
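As a hedged starting point for that tuning (flag names are for HotSpot JVMs of
that era, the heap size is illustrative, and conf/java.env is the hook that
zkServer.sh reads):

```sh
# conf/java.env -- sourced by zkServer.sh; values are illustrative.
# CMS keeps stop-the-world pauses short relative to the session timeout;
# the GC log lets you verify the actual pause lengths.
export JVMFLAGS="-Xms60g -Xmx60g \
 -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled \
 -verbose:gc -XX:+PrintGCApplicationStoppedTime -Xloggc:gc.log"
```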

Patrick

 On Tue, Oct 5, 2010 at 1:18 PM, Maarten Koopmans maar...@vrijheid.net wrote:

 Yes, and syncing after a crash will be interesting as well. Of note: I am
 running it with a 6GB heap now, but it's not filled yet. I do have smoke
 tests though, so maybe I'll give it a try.



 On 5 Oct 2010 at 21:13, Benjamin Reed br...@yahoo-inc.com wrote:

  [quoted text elided]



Re: Zookeeper on 60+Gb mem

2010-10-05 Thread Dave Wright
I think the issue of having to write a full ~60GB snapshot file at
intervals would make this prohibitive, particularly on EC2 via EBS. At
a scale like that I think you'd be better off with a traditional
database or a nosql database like Cassandra, possibly using Zookeeper
for transaction locking/coordination on top.


-Dave Wright

On Tue, Oct 5, 2010 at 5:27 PM, Patrick Hunt ph...@apache.org wrote:
 [quoted text elided]




Re: Zookeeper on 60+Gb mem

2010-10-05 Thread Maarten Koopmans
Yup, and that's ironic, isn't it? The GC tuning is so specialized, as is the
profiling, that automated memory management (to me) hasn't delivered what I hoped
it would. I had some conversations about this topic a few years back with a
well-respected OS designer, and his point was that we (humans) can trace almost
all problems back to our adding complexity instead of reducing it.

Sorry for the slight rant. Anyway, it's one of the things I like about
ZooKeeper (and, e.g., Voldemort): it makes a hard thing doable.

--Maarten


On 5 Oct 2010 at 23:27, Patrick Hunt ph...@apache.org wrote:

 [quoted text elided]


Re: Zookeeper on 60+Gb mem

2010-10-05 Thread Maarten Koopmans
Good point. And Cassandra is a no-go for me for now. I get the model, but I
don't like (check: dislike) things like Thrift.



On 5 Oct 2010 at 23:54, Dave Wright wrig...@gmail.com wrote:

 [quoted text elided]


Create node with ancestors?

2010-10-05 Thread David Rosenstrauch
The ZK create method explicitly states in the documentation: "If the
parent node does not exist in the ZooKeeper, a KeeperException with
error code KeeperException.NoNode will be thrown."
(http://hadoop.apache.org/zookeeper/docs/current/api/org/apache/zookeeper/ZooKeeper.html#create%28java.lang.String,%20byte[],%20java.util.List,%20org.apache.zookeeper.CreateMode%29)
As a result, there doesn't appear to be any single method call that can
create a node along with any missing parent nodes. This would be an
incredibly useful API call, though, akin to HDFS's mkdirs method.
(http://hadoop.apache.org/common/docs/r0.20.1/api/org/apache/hadoop/fs/FileSystem.html#mkdirs%28org.apache.hadoop.fs.Path%29)


Anybody know if there's a call like this available somewhere in the ZK API?
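For illustration, a mkdirs-style helper can be layered on top of the existing
create call. A sketch (the class and method below are hypothetical helpers, not
part of the ZooKeeper API): split the path into its ancestor prefixes, then
create each one in order, treating "node exists" as success:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper, not part of the ZooKeeper API.
public class ZkMkdirs {
    // "/a/b/c" -> ["/a", "/a/b", "/a/b/c"], shortest first.
    public static List<String> parentPaths(String path) {
        List<String> prefixes = new ArrayList<String>();
        int slash = path.indexOf('/', 1);
        while (slash != -1) {
            prefixes.add(path.substring(0, slash));
            slash = path.indexOf('/', slash + 1);
        }
        prefixes.add(path);
        return prefixes;
    }

    // Against a live client, each prefix would then be created in turn,
    // ignoring "already exists", e.g.:
    //
    //   for (String p : ZkMkdirs.parentPaths(path)) {
    //       try {
    //           zk.create(p, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE,
    //                     CreateMode.PERSISTENT);
    //       } catch (KeeperException.NodeExistsException ok) {
    //           // a concurrent creator won the race; that's fine
    //       }
    //   }
}
```

Note this loop is not atomic: a concurrent delete between two create calls can
still make it fail, so callers should be prepared to retry.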

Thanks,

DR


Question on production readiness, deployment, data of BookKeeper / Hedwig

2010-10-05 Thread amit jaiswal
Hi,

In the Hedwig talk (http://vimeo.com/13282102), it was mentioned that the primary
use case for Hedwig comes from the distributed key-value store PNUTS in Yahoo!,
but it was also said that the work is new.

Could you please comment on the following:

Production readiness / Deployment
1. What is the production readiness of Hedwig / BookKeeper? Is it being used
anywhere (like in PNUTS)?
2. Is Hedwig designed for use as a generic message bus, or only for
multi-datacenter operations?
3. Hedwig installation and deployment is done through a script, hw.bash, but that
is difficult to use, especially in a production environment. Are there any other
packages available that can simplify the deployment of Hedwig?
4. How does BK/Hedwig handle ZooKeeper session expiry?

Data Deletion, Handling data loss, Quorum
1. Does BookKeeper support deletion of old log entries which have been consumed?
2. How does Hedwig handle the case when all subscribers have consumed all the
messages? In the talk, it was said that a subscriber can come back after hours,
days, or weeks. Is there any data retention / expiration policy for the data
that is published?
3. How does Hedwig handle data loss? There is a replication factor, and a write
operation must be accepted by a majority of the bookies, but how are data
conflicts handled? Is there any possibility of data conflict at all? Is the
replication only for recovery? When the hub is reading data from bookies, does
it read from all the bookies to satisfy a quorum read?

Code
What is the difference between PubSubServer, HedwigSubscriber, and
HedwigHubSubscriber? Is there any HelloWorld program that simply illustrates how
to instantiate a Hedwig client and publish/consume messages? (The HedwigBenchmark
class is helpful, but I was looking for something like API documentation.)

-regards
Amit