Hi Dan

On Sun, Mar 25, 2012 at 7:56 PM, Dan Haywood <[email protected]> wrote:

> Hi Nour,
> By all means raise a JIRA on this, it looks like a viable technology to
> build an object store with.
>

Cool


>
> But I'm not sure that - out-of-the-box - it addresses the paging issue, I
> think that's something that needs to be designed across all object stores.
>

Let me have a look at it and I will come back with feedback on that.
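
In the meantime, here's roughly what I had in mind for "designed across all
object stores": a small range value object that each object store translates
into whatever its backend supports (LIMIT/OFFSET, ROWNUM, or plain ResultSet
skipping). Just a sketch, every name below is hypothetical and nothing of it
exists in the codebase yet:

    // Hypothetical value object describing which slice of a result set is wanted.
    public final class QueryRange {
        private final long startIndex;   // zero-based index of the first row
        private final int count;         // maximum number of rows to return

        public QueryRange(final long startIndex, final int count) {
            if (startIndex < 0 || count <= 0) {
                throw new IllegalArgumentException("invalid range: " + startIndex + "/" + count);
            }
            this.startIndex = startIndex;
            this.count = count;
        }

        public long getStartIndex() {
            return startIndex;
        }

        public int getCount() {
            return count;
        }
    }

A findInstances(...) variant could then accept a QueryRange next to the usual
criteria, and each object store would decide how to honour it.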


>
> Dan
>
> On 25 March 2012 16:27, Mohammad Nour El-Din <[email protected]> wrote:
>
> > Hey, I found this [1]. Maybe it can help; I didn't look into the details
> > yet, but even if paging is not supported there, we could write a patch for
> > that. Thoughts?
> >
> >
> > [1] - http://empire-db.apache.org/
> >
> > On Sun, Mar 25, 2012 at 5:18 PM, Mohammad Nour El-Din <[email protected]> wrote:
> >
> > > Hi Dan...
> > >
> > >    OK, but the use case you are proposing here still needs paging. I mean,
> > > whatever we want to do paging over, whether all rows or only rows that match
> > > a certain criteria, the problem is still that you have a result set and you
> > > want to traverse through it X rows at a time.
> > >
> > > I don't know an answer for that case off the top of my head, but I think
> > > there might be something that can be done with Spring or iBatis, which we
> > > could use to abstract the differences between the various RDBMSs; paging is
> > > one of them. That's my 2 cents.
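
To expand on what I meant by abstracting the differences: I was picturing a
tiny per-database "dialect" that only knows how to append the paging clause,
whether we hand-roll it or lean on Spring/iBatis underneath. Only a sketch,
none of these types exist yet:

    // Hypothetical helper that turns a base query into a paged one for a given RDBMS.
    public interface SqlPagingDialect {
        String withPaging(String baseSql, long offset, int limit);
    }

    // As far as I know, both MySQL and PostgreSQL accept LIMIT ... OFFSET ...,
    // so one implementation might cover both; others (Oracle, SQL Server)
    // would need their own.
    public class LimitOffsetDialect implements SqlPagingDialect {
        @Override
        public String withPaging(final String baseSql, final long offset, final int limit) {
            return baseSql + " LIMIT " + limit + " OFFSET " + offset;
        }
    }

The rest of the object store would then stay free of vendor-specific SQL and
just ask the dialect for the paged statement.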
> > >
> > >
> > > On Sat, Mar 24, 2012 at 4:22 PM, Dan Haywood <[email protected]> wrote:
> > >
> > >> Before answering, just a point that - if the intent is to process all
> > >> rows in the domain model, but in batches - then the general idea of paging
> > >> seems flawed to me.
> > >>
> > >> That is, your domain object could ask for rows 1~100, generate invoices
> > >> for all of them, then move on to rows 101~200; however, in the intervening
> > >> time some other user could have come along and added or removed a record
> > >> that could mess up the paging. I could see that you either forget to
> > >> generate an invoice, or that you end up generating an invoice more than
> > >> once.
> > >>
> > >> I think it would be better to construct your query to say:
> > >>
> > >> "give me the first 100 rows that don't yet have an invoice", and then keep
> > >> looping through that.
> > >>
> > >> To do that, you'll need to add some general-purpose query mechanism to the
> > >> SQL OS; unless I'm mistaken, there doesn't seem to be one yet?
> > >>
> > >> Dan
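
For what it's worth, I read your suggestion as a loop along these lines (only
a sketch: the table, columns and the generateInvoiceFor(...) call are all made
up, and the LIMIT syntax shown is the MySQL/PostgreSQL flavour):

    // Keep asking for "the next 100 customers without an invoice" until none are
    // left. Because each pass only sees rows that still lack an invoice, rows added
    // or removed in between can't make us skip a customer or invoice one twice.
    // Assumes generateInvoiceFor(...) ends up setting invoice_id, otherwise this loops forever.
    void generateMissingInvoices(final javax.sql.DataSource dataSource) throws java.sql.SQLException {
        final String sql = "SELECT id FROM customer WHERE invoice_id IS NULL LIMIT 100";
        try (java.sql.Connection connection = dataSource.getConnection();
             java.sql.PreparedStatement statement = connection.prepareStatement(sql)) {
            while (true) {
                final java.util.List<Long> batch = new java.util.ArrayList<Long>();
                try (java.sql.ResultSet rows = statement.executeQuery()) {
                    while (rows.next()) {
                        batch.add(rows.getLong("id"));
                    }
                }
                if (batch.isEmpty()) {
                    break;                           // nothing left without an invoice
                }
                for (final Long customerId : batch) {
                    generateInvoiceFor(customerId);  // hypothetical domain service call
                }
            }
        }
    }

And yes, that only works once there is some general-purpose query mechanism in
the SQL OS to express the "rows without an invoice" part.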
> > >>
> > >>
> > >> On 24 March 2012 14:54, Kevin Meyer - KMZ <[email protected]> wrote:
> > >>
> > >> > Hi All,
> > >> >
> > >> > I was doing some reading about the general problem of how to
> > >> > implement database paging in JDBC, and since paging is not part of
> > >> > the ANSI SQL standard, JDBC apparently doesn't have a native (API)
> > >> > method of supporting it.
> > >> >
> > >> > One of the suggestions is to manually add a "datarow" column, which
> > >> > is auto-incremented by 1 in every row. Paging queries can then be
> > >> > spoofed via the SQL WHERE clause.
> > >> >
> > >> > This assumes that rows are never deleted, which in most of my cases
> > >> > is a reasonable assumption.
> > >> >
> > >> > Opinions?
> > >> >
> > >> > Otherwise, I'll have to introduce a runtime-specified paging helper
> > >> > class that returns valid SQL for the DB being used (MySQL and
> > >> > PostgreSQL have a completely different, and incompatible, paging
> > >> > syntax, if memory serves).
> > >> >
> > >> > Regards,
> > >> > Kevin
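
Kevin, just to check I follow the "datarow" idea, I picture the spoofed paging
query looking roughly like this (a sketch with invented table/column names,
untested):

    // Paging via an auto-incremented "datarow" column and a plain WHERE clause,
    // instead of vendor-specific LIMIT/OFFSET. Relies on the assumption that rows
    // are never deleted, so the datarow values stay dense and page N really is
    // rows (N-1)*pageSize+1 .. N*pageSize.
    String pageQuery(final String table, final long firstRow, final int pageSize) {
        final long lastRow = firstRow + pageSize - 1;
        return "SELECT * FROM " + table
                + " WHERE datarow BETWEEN " + firstRow + " AND " + lastRow
                + " ORDER BY datarow";
    }

The nice part is that BETWEEN is plain ANSI SQL, so the same statement should
work on MySQL and PostgreSQL alike; the price is maintaining the extra column.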
> > >> >
> > >> >
> > >> >
> > >>
> > >
> > >
> > >
> > > --
> > > Thanks
> > > - Mohammad Nour
> > > ----
> > > "Life is like riding a bicycle. To keep your balance you must keep
> > moving"
> > > - Albert Einstein
> > >
> > >
> >
> >
> > --
> > Thanks
> > - Mohammad Nour
> > ----
> > "Life is like riding a bicycle. To keep your balance you must keep
> moving"
> > - Albert Einstein
> >
>



-- 
Thanks
- Mohammad Nour
----
"Life is like riding a bicycle. To keep your balance you must keep moving"
- Albert Einstein
