On Thursday, June 12, 2014 at 16:58:06, Tom Lane <t...@sss.pgh.pa.us> wrote:
Matt Silverlock <m...@eatsleeprepeat.net> writes:
 > Hi all. This might be tricky in so much as there are a few moving parts
 > (when aren't there?), but I've tried to test the Postgres side as much as
 > possible.
 > Trying to work out a potential database bottleneck with an HTTP application
 > (written in Go):
 > Pages that render HTML templates but don't perform DB queries can hit
 > ~36k+ req/s.
 > Pages that perform a SELECT on a single row net about ~6.6k req/s:
 > db.Get(l, "SELECT * FROM listings WHERE id = $1 AND expiry_date >
 > current_date", l.Id)
 > Pages that SELECT multiple rows with OFFSET and LIMIT conditions struggle
 > to top 1.3k req/s.

 You don't show us exactly what you're doing with OFFSET/LIMIT, but I'm
 going to guess that you're using it to paginate large query results.
 That's basically always going to suck: Postgres has no way to implement
 OFFSET except to generate and then throw away that number of initial rows.
 If you do the same query over again N times with different OFFSETs, it's
 going to cost you N times as much as the base query would.

Are there any plans to make PG implement OFFSET more efficiently, so it doesn't
have to "read and throw away"? I used SQL Server back in 2011 on a project and
seem to remember they implemented OFFSET pretty fast. Paging through a result
set of millions of rows was much faster than in PG.

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com
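
(For context, not part of the original exchange: the usual workaround for the
OFFSET cost described above is keyset pagination, i.e. remembering the last key
seen on the previous page and seeking past it. A minimal sketch against the
listings table from the query above, assuming id is an indexed, monotonically
increasing key and a page size of 50:)

```sql
-- OFFSET pagination: Postgres must generate and discard the first
-- (page_number * page_size) rows on every request, so cost grows
-- linearly with how deep you page.
SELECT * FROM listings
WHERE  expiry_date > current_date
ORDER  BY id
LIMIT  50 OFFSET 5000;

-- Keyset ("seek") pagination: pass the last id from the previous page
-- as $1; with an index on id, Postgres can jump straight to the start
-- of the page, so each page costs roughly the same regardless of depth.
SELECT * FROM listings
WHERE  expiry_date > current_date
  AND  id > $1
ORDER  BY id
LIMIT  50;
```

(The trade-off is that keyset pagination only supports "next page" style
navigation over a stable sort key, not jumping to an arbitrary page number.)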
