Hey Ceri,
thanks for your reply, but as I described in my question, this solution will 
not work: a row with a lower id can simply have a higher published_at — 
published_at is completely random relative to id....
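
For what it's worth, I think what I actually need is to compare (published_at, id) as a *pair*, lexicographically, not each column on its own. Here is a plain-Ruby sketch of the logic I mean (no database, just an in-memory array with made-up sample data):

```ruby
# Sketch: keyset pagination ordered by (published_at, id), advancing past
# the last seen row with a lexicographic (tuple) comparison.
Post = Struct.new(:id, :published_at)

# Sample data: published_at is deliberately uncorrelated with id, with ties.
posts = [
  Post.new(1, 30), Post.new(2, 10), Post.new(3, 10),
  Post.new(4, 20), Post.new(5, 10)
]

# Order ASC by published_at, breaking ties by id.
sorted = posts.sort_by { |p| [p.published_at, p.id] }

page_size = 2
page1 = sorted.first(page_size)   # ids 2, 3
last  = page1.last                # marker pair: (10, 3)

# Next page: rows strictly AFTER the marker, compared as a tuple.
page2 = sorted.select { |p|
  ([p.published_at, p.id] <=> [last.published_at, last.id]) == 1
}.first(page_size)                # ids 5, 4
```

In PostgreSQL that pair comparison can be written directly as a row value, e.g. `WHERE (published_at, id) > (:last_published_at, :last_id) ORDER BY published_at, id LIMIT 30`; what I don't know is how to express that row constructor in Sequel.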

Petr

On Thursday, August 8, 2013 3:46:53 PM UTC+2, Ceri Storey wrote:
>
> (08/08/13 12:20), Petr Kaleta wrote: 
> > Hey everyone, I have an issue with data pagination. It's not a problem 
> > of Sequel, but simply an SQL issue. If you have time to read my 
> > question to the end, maybe you can give me some hints on how to solve 
> > my problem in Sequel. 
> > 
> > I am using PostgreSQL 9.2.4. So imagine, that you have table called 
> > `posts`. To make things simple, `posts` table has following columns: 
> > 
> > - `id` Integer, Primary Key 
> > - `published_at` DateTime 
> > 
> > So that's it, a very simple table. Now let's talk about the rows. Rows 
> > in this table are in random order, meaning a row with a higher `id` can 
> > have a lower `published_at`, etc. There can be rows with exactly the same 
> > `published_at`. Data in the table are not static (rows can be 
> > added/removed; `published_at` is not editable). There can be hundreds of 
> > thousands of rows in the `posts` table. 
> > 
> > Now, I need to paginate my data (let's say 30 per page), but I need to 
> > sort it ASC by `published_at`. 
> > 
> > Two approaches came to mind. The first is to use LIMIT and OFFSET. But 
> > this approach breaks once you add or remove a row during pagination 
> > (the offset of the next set will be wrong). Yes, you can use a timestamp 
> > marker to avoid wrong offsets when adding new rows, but that does not 
> > help when you remove a row from an already-paginated set. 
> > 
> > Another solution is to use just LIMIT with a marker, namely the 
> > `published_at` of the last row in the previous set. But this breaks as 
> > well once there are rows with exactly the same `published_at`. 
> > 
> > Do you have any ideas on how to solve this properly? Performance is 
> > also key here... 
>
> Why not use both the published_at and id columns, using the id column as 
> a discriminator? So in pseudo-code: 
>
> query = db[:table].order_by(:published_at, :id) 
> # Do the query (fetch one extra row so you know whether a next page exists): 
> first_rows = query.limit(pagesize + 1).all 
>
> process_rows first_rows[0...pagesize] 
>
> # Get the next set of rows, keyed off the last row fetched: 
> start_pubtime = first_rows.last[:published_at] 
> start_id = first_rows.last[:id] 
> next_rows = query.where { (published_at >= start_pubtime) & (id >= start_id) } 
>                  .limit(pagesize + 1).all 
>
> process_rows next_rows[0...pagesize] 
>
> Hopefully that'll be useful. 
> > 
> > Thanks a lot! 
> > 
> > Petr 

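To make my objection concrete: with `published_at >= marker AND id >= marker_id` applied as two independent conditions, a row whose `id` is below the marker's but whose `published_at` is later gets filtered out entirely. A small plain-Ruby sketch with made-up data:

```ruby
# Why independent >= filters on both columns lose rows: the row with id 1
# is published last, yet the id condition excludes it from every later page.
Post = Struct.new(:id, :published_at)

posts = [
  Post.new(2, 10), Post.new(3, 10), Post.new(5, 10),
  Post.new(4, 20), Post.new(1, 30)
].sort_by { |p| [p.published_at, p.id] }

last = Post.new(3, 10)  # last row of a first page of 2

# Independent comparisons on each column (the suggested filter):
conjunctive = posts.select { |p|
  p.published_at >= last.published_at && p.id >= last.id
}
# ids 3, 5, 4 -- the marker row repeats, and id 1 is gone for good.

# Lexicographic comparison of the pair:
lexicographic = posts.select { |p|
  ([p.published_at, p.id] <=> [last.published_at, last.id]) == 1
}
# ids 5, 4, 1 -- no repeats, nothing lost.
```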
-- 
You received this message because you are subscribed to the Google Groups 
"sequel-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/sequel-talk.
For more options, visit https://groups.google.com/groups/opt_out.

