On Tue, Jan 15, 2013 at 09:33:35PM -0800, Sijie Guo wrote:
> > Originally, it was meant to have a number of
> > long lived subscriptions, over which a lot of data travelled. Now the
> > load has flipped to a large number of short lived subscriptions, over
> > which relatively little data travels.
> 
> The topic discussed here doesn't relate to hedwig subscriptions, it is
> just about how hedwig uses ledgers to store its messages. Even if there
> are no subscriptions, the problem is still there. The restart of a hub
> server carrying a large number of topics would hit the metadata storage
> with many accesses. The hit happens when a hub server acquires a topic,
> no matter whether the subscription is long lived or short lived. After
> a topic is acquired, the following accesses are in memory, which
> doesn't cause any performance issue.
I was using topics and subscriptions to mean the same thing here, due
to the usecase we have in Yahoo where they're effectively the same
thing. But yes, I should have said topic. My point still stands,
though. Hedwig was designed to deal with fewer topics, each with a lot
of data passing through it, rather than many topics with very little
data passing through them. This is why zk was considered sufficient at
that point, as tens of thousands of topics being recovered really
isn't an issue for zk. The point I was driving at is that the usecase
has changed in a big way, so it may require a big change to handle it.

> But we should separate the capacity problem from the software problem. A
> high performance and scalable metadata storage would help to resolve the
> capacity problem, but neither implementing a new one nor leveraging a
> high performance one changes the fact that it still needs so many
> metadata accesses to acquire a topic. A bad implementation causing so
> many metadata accesses is a software problem. If we have a chance to
> improve it, why not?
I don't think the implementation is bad, but rather the assumptions,
as I said earlier. The data:metadata ratio has changed completely.
hedwig/bk were designed with a data:metadata ratio of something like
100000:1. What we're talking about now is more like 1:1, and therefore
we need to be able to handle orders of magnitude more metadata than
previously. Bringing down the number of writes by a factor of 2 or 3,
while a nice optimisation, is just putting duct tape on the problem.
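To put rough numbers on the duct-tape point (the entry count below is
made up for illustration; only the ratios come from the discussion):

```python
# Back-of-the-envelope sketch of the ratio argument above. The entry
# count is illustrative, not a measurement.

data_entries = 1_000_000

# Designed-for workload: ~1 metadata op per 100000 data entries.
old_meta_ops = data_entries / 100_000        # 10 ops

# Today's workload: roughly 1 metadata op per data entry.
new_meta_ops = data_entries / 1              # 1,000,000 ops

# Even a factor-of-3 write reduction leaves the metadata load tens of
# thousands of times above what the system was designed for.
optimised_ops = new_meta_ops / 3
print(optimised_ops / old_meta_ops)          # ~33333.3
```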

> 
> > The ledger can still be read many times, but you have removed the
> > guarantee that what is read each time will be the same thing.
> 
> How do we guarantee a reader's behavior when a ledger is removed at the
> same time? We don't guarantee it right now, right? Isn't it a similar
> thing for a 'shrink' operation, which removes part of the entries,
> while a 'delete' operation removes all of the entries?
> 
> And if I remember correctly, readers only see the same thing when a
> ledger is closed. What I proposed doesn't violate this contract. If a
> ledger is closed (state is CLOSED), an application can't re-open it. If
> a ledger isn't closed yet, an application can recover the previous
> state and continue writing entries using this ledger. Applications
> could still use ledgers in the 'create-close-create' style, or evolve
> to the new api for efficiency smoothly, w/o breaking any backward
> compatibility.
Ah, yes, I misread your proposal originally, I thought the reopen was
working with an already closed ledger.
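For what it's worth, the lifecycle as I now understand the proposal
could be modelled like this (a toy sketch in my own names, not actual
BookKeeper code):

```python
# Toy model of the proposed ledger lifecycle: reopen is only legal
# while the ledger has never been closed. Names are illustrative.

OPEN, CLOSED = "OPEN", "CLOSED"

class Ledger:
    def __init__(self):
        self.state = OPEN
        self.entries = []

    def add_entry(self, e):
        if self.state == CLOSED:
            raise RuntimeError("ledger is CLOSED")
        self.entries.append(e)

    def close(self):
        # Once closed, the set of entries is frozen: every reader of a
        # closed ledger sees the same thing, preserving the contract.
        self.state = CLOSED

    def reopen(self):
        # Recover previous state and continue writing; forbidden once
        # the ledger has been closed.
        if self.state == CLOSED:
            raise RuntimeError("cannot reopen a CLOSED ledger")
        return self
```

So the read-consistency guarantee for closed ledgers is untouched; only
the not-yet-closed case gains the resume path.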

On a side note, the reason we have an initial write for fencing is
that when the reading client (RC) fences, the servers in the ensemble
start returning an error to the writing client (WC). At the moment we
don't distinguish between a fencing error and, for example, an i/o
error. So WC will try to rebuild the ensemble by replacing the
erroring servers. Before writing to the new ensemble, it has to update
the metadata, and at this point it will see that it has been
fenced. With a specific FENCED error, we could avoid this write. This
makes me uncomfortable though. What happens if the fenced server fails
between being fenced and WC trying to write? WC will get a normal i/o
error, and will try to replace the server. Since the metadata has not
been changed, nothing will stop it, and it may be able to continue
writing. I think this is also the case for the session fencing solution.

-Ivan
