Hi Stefan, On 5/30/06, Stefan Groschupf <[EMAIL PROTECTED]> wrote:
In my project we do not need multi-row locking at all. I need to write and read many row/column pairs as fast as possible, but each operation only has to be atomic within a single row. I'm a little bit worried that the goals are too big for now, and that such difficult-to-implement functionality will slow down a general Hadoop kind of BigTable implementation. I would really love to start small and maybe grow later with each release.
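A rough sketch of the kind of minimal, per-row-atomic store this describes; all of the names below are hypothetical, just to make the idea concrete:

import java.util.Map;

// Hypothetical minimal row store: every call touches exactly one row,
// so nothing ever has to be atomic across more than a single row.
public interface RowStore {

    // Atomically write a set of column/value pairs into one row.
    void put(String row, Map<String, byte[]> columns);

    // Atomically read the named columns of one row.
    Map<String, byte[]> get(String row, String... columns);

    // Atomically remove one whole row.
    void delete(String row);
}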
A good point. I've scaled down my ambitions for the thing. Locking, query languages, even compression - all that stuff is for version 2+. I might have gotten a little carried away, asking for query language examples before the row store works properly ;)

I personally don't like query languages at all, especially SQL. :-)
From my point of view, they are only really useful if you work with an admin GUI tool and want to check your data. For some years now I have preferred API-based query mechanisms such as Hibernate or db4o provide. Query by example, for instance, is easy to use and can be powerful.
A query is always a string in your code, so you have to maintain the queries separately: you need to scan and update your code as soon as you change just one column name, etc. An API-based query mechanism, in my experience, lets you develop faster. The keywords in my mind are test-driven development, refactoring, and refactoring tools like Eclipse.
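To make the contrast concrete, here is a hedged sketch; neither ExampleStore nor queryByExample() is a real Hibernate or db4o API, they are just stand-ins. With the SQL string, renaming a column means hunting down string literals by hand; with query by example, the column is an ordinary Java field that a refactoring tool like Eclipse can rename everywhere automatically:

public class QueryStyles {

    // Hypothetical mapped class: the "example" in query by example.
    public static class Page {
        public String url;     // column "url"
        public Long fetchTime; // column "fetchTime"
    }

    // Hypothetical store interface, just enough to make the sketch self-contained.
    public interface ExampleStore {
        <T> Iterable<T> queryByExample(T template);
    }

    // SQL style: the column and table names are buried in a string,
    // invisible to the compiler and to refactoring tools.
    static final String SQL_QUERY =
        "SELECT * FROM pages WHERE url = 'http://example.com/'";

    // Query-by-example style: fill in a template object and ask the store
    // for every row whose columns match the template's non-null fields.
    Iterable<Page> findByUrl(ExampleStore store) {
        Page template = new Page();
        template.url = "http://example.com/"; // a plain field, so a rename refactoring catches it
        return store.queryByExample(template);
    }
}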
In theory, a declarative language like SQL should enable all sorts of cool things on top of BigTable. In practice, it might take 10 years before the query optimizer works right ;) So for the moment, no SQL for us. Though there is probably a great query language waiting to be written on top of a BigTable-like store.

--Mike
