Ah, that did make a big difference! It went from taking 10x as long to
taking only 1.5x as long (about what I would have expected, if not
better.) Thank you!
On Fri, Feb 17, 2012 at 9:29 PM, Ants Aasma wrote:
> On Feb 17, 2012 8:35 PM, "Alessandro Gagliardi" wrote:
> > Here is the EXPLAIN: http://explain.depesz.com/s/ley
On Feb 17, 2012 8:35 PM, "Alessandro Gagliardi" wrote:
> Here is the EXPLAIN: http://explain.depesz.com/s/ley
>
> I'm using PostgreSQL 9.0.6 on i486-pc-linux-gnu, compiled by GCC
> gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 32-bit
>
> My random_page_cost is 2 and yet it still insists on using Seq Scans [...]
Your guess about the seen_its table growing is accurate, and it applies to the
blocks table as well. Partitioning on date is probably a good idea and
something I've been meaning to investigate. I'm not surprised that the
JOIN makes it slower; I'm surprised by the magnitude of how much slower
it is.
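The partitioning idea mentioned above could be sketched like this. On PostgreSQL 9.0 (the version in this thread) there is no declarative partitioning, so it is done with table inheritance plus CHECK constraints; the partition name, column, and date range here are assumptions for illustration:

```sql
-- Hypothetical monthly partition of seen_its on PostgreSQL 9.0,
-- using inheritance + CHECK constraints (names assumed).
CREATE TABLE seen_its_2012_02 (
    CHECK (created >= DATE '2012-02-01' AND created < DATE '2012-03-01')
) INHERITS (seen_its);

CREATE INDEX seen_its_2012_02_created_idx ON seen_its_2012_02 (created);

-- With constraint exclusion enabled, a date-range query only touches
-- partitions whose CHECK constraints overlap the requested range.
SET constraint_exclusion = partition;
```

Rows must also be routed into the right child table, typically with a trigger on the parent; that part is omitted here.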
On 02/17/2012 10:34 AM, Alessandro Gagliardi wrote:
Comparing
SELECT DISTINCT(user_id) FROM blocks JOIN seen_its USING (user_id) WHERE
seen_its.created BETWEEN (now()::date - interval '8 days')::timestamp AND
now()::date::timestamp
to
SELECT DISTINCT(user_id) FROM seen_its WHERE created BETWEEN (now()::date -
interval '8 days')::timestamp AND now()::date::timestamp
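One way to see where the difference between the two queries comes from is to compare their plans directly. A sketch, assuming the table and column names in the queries above; the index is a suggestion, not something from the thread:

```sql
-- Hypothetical: an index on seen_its.created lets the date-range filter
-- use an index scan rather than a sequential scan over the whole table.
CREATE INDEX seen_its_created_idx ON seen_its (created);

EXPLAIN ANALYZE
SELECT DISTINCT user_id
FROM seen_its
WHERE created BETWEEN (now()::date - interval '8 days')::timestamp
                  AND now()::date::timestamp;
```

Running EXPLAIN ANALYZE on both the plain query and the JOIN version shows which node accounts for the extra time.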
On Wed, May 03, 2006 at 07:22:21AM -0400, Michael Stone wrote:
> On Tue, May 02, 2006 at 07:28:34PM -0400, Bill Moran wrote:
> > Reindexing is in a different class than vacuuming.
>
> Kinda, but it is in the same class as vacuum full. If vacuum neglect (or
> dramatic change in usage) has gotten you to the point of 10G of overhead
> on a 2G table you can get a dramatic [...]
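For a table that far gone, the usual recovery pair on PostgreSQL versions of this era is a sketch like the following. The table name is taken from later in this thread; note that the old (pre-9.0) VACUUM FULL compacts the heap but tends to bloat indexes further, which is why REINDEX follows it:

```sql
-- Hypothetical recovery from severe bloat (table name assumed).
-- Both commands take exclusive locks, so schedule downtime.
VACUUM FULL job_log;
REINDEX TABLE job_log;
```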
"Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> On Sun, Apr 30, 2006 at 10:03:46AM -0400, Bill Moran wrote:
> > At this point, it seems like you need to do 2 things:
> > 1) Schedule lazy vacuum to run, or configure autovacuum.
> > 2) Schedule some downtime to run "vacuum full" to recover some disk spa
On Sun, Apr 30, 2006 at 10:03:46AM -0400, Bill Moran wrote:
> At this point, it seems like you need to do 2 things:
> 1) Schedule lazy vacuum to run, or configure autovacuum.
> 2) Schedule some downtime to run "vacuum full" to recover some disk space.
>
> #2 only needs done once to get you back on
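Point 1 above amounts to a few settings. A hypothetical postgresql.conf fragment, using the parameter names of modern PostgreSQL (on the 8.0 server in this thread, autovacuum was still the separate contrib pg_autovacuum daemon):

```
# Keep routine lazy vacuums running so bloat never accumulates.
autovacuum = on
autovacuum_vacuum_scale_factor = 0.2    # vacuum after ~20% of a table changes
autovacuum_analyze_scale_factor = 0.1   # analyze after ~10% changes
```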
On Sat, Apr 29, 2006 at 11:18:10AM +0800, K C Lau wrote:
> At 10:39 06/04/29, Tom Lane wrote:
> > K C Lau <[EMAIL PROTECTED]> writes:
> > > Without knowing the internals, I have this simplistic idea: if Postgres
> > > maintains the current lowest transaction ID for all active transactions, it [...]
Hi, Bill,
Bill Moran wrote:
> My understanding is basically that if you vacuum with the correct
> frequency, you'll never need to vacuum full. This is why the
> autovacuum system is so nice: it adjusts the frequency of vacuum according
> to how much use the DB is getting.
Additionally, the "free space map" [...]
"Bealach-na Bo" <[EMAIL PROTECTED]> wrote:
> >If you don't need access to the old data constantly:
> >
> > - copy the live data to a new table
> > - TRUNCATE the old table (which needs an exclusive lock but is very fast)
> > - insert the data back in
> > - for an event log I would imagine this
If you don't need access to the old data constantly:
- copy the live data to a new table
- TRUNCATE the old table (which needs an exclusive lock but is very fast)
- insert the data back in
- for an event log I would imagine this could work
Obtaining exclusive locks on this table is very difficult [...]
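The copy/TRUNCATE/reload recipe above could be sketched as follows. The table name, column, and retention window are assumptions; wrapping it in one transaction holds a single exclusive lock for the whole operation:

```sql
-- Hypothetical: keep only recent event-log rows (names assumed).
BEGIN;
LOCK TABLE job_log IN ACCESS EXCLUSIVE MODE;
CREATE TEMP TABLE job_log_keep AS
    SELECT * FROM job_log
    WHERE created >= now() - interval '30 days';
TRUNCATE job_log;                         -- instant, reclaims all space
INSERT INTO job_log SELECT * FROM job_log_keep;
COMMIT;
```

The trade-off is exactly the one raised in the reply: TRUNCATE needs an exclusive lock, so this only works if writers can be paused briefly.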
On Sat, Apr 29, 2006 at 11:18:10AM +0800, K C Lau wrote:
I apologize for simplistic ideas again. I presume that the equivalent tuple
header information is not maintained for index entries. What if they are,
probably only for the most commonly used index types to allow recycling
where possible?
At 10:39 06/04/29, Tom Lane wrote:
> K C Lau <[EMAIL PROTECTED]> writes:
> > Without knowing the internals, I have this simplistic idea: if Postgres
> > maintains the current lowest transaction ID for all active transactions, it
> > probably could recycle dead tuples on the fly.
> [ yawn... ] Yes, we've heard that before. [...]
K C Lau <[EMAIL PROTECTED]> writes:
> Without knowing the internals, I have this simplistic idea: if Postgres
> maintains the current lowest transaction ID for all active transactions, it
> probably could recycle dead tuples on the fly.
[ yawn... ] Yes, we've heard that before. The hard part is [...]
At 03:00 06/04/29, Bruno Wolff III wrote:
On Fri, Apr 28, 2006 at 17:37:30 +0000,
Bealach-na Bo <[EMAIL PROTECTED]> wrote:
> > The above shows that the indexes contained 10M rows and 160M of dead
> > space each. That means you weren't vacuuming nearly enough.
>
> How is it that a row in the table can grow to a size far exceeding the sum
> of the maximum sizes of the fields it consists of?
On April 28, 2006 10:31 am, "Bealach-na Bo" <[EMAIL PROTECTED]>
wrote:
> The exclusive lock is going to cause problems for me since the table is
> very active. Is there a way of getting around that or do I need to
> schedule the application that accesses this table?
If you don't need access to the old data constantly: [...]
The above shows that the indexes contained 10M rows and 160M of dead
space each. That means you weren't vacuuming nearly enough.
How is it that a row in the table can grow to a size far exceeding the sum
of the maximum sizes of the fields it consists of?
13M dead rows, and the table is 1.4M [...]
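A row itself does not grow; the space is taken by dead row versions left behind by updates and deletes until vacuum reclaims them. One way to watch this, sketched with a statistics view whose dead/live counters only exist in PostgreSQL 8.3 and later (newer than the servers in this thread):

```sql
-- Hypothetical bloat check: live vs. dead row estimates per table.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

On the older versions discussed here, the same numbers come from reading VACUUM VERBOSE output, as the INFO lines quoted below show.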
> INFO: index "job_log_id_pkey" now contains 10496152 row versions in
> 59665 pages
See the 10496152 above? That means you have 10496152 rows of data in your
table. Of those, only 365000 are alive. That means you have basically
never vacuumed this table before, correct?
Almost correct :| I [...]
On April 28, 2006 04:41 am, "Bealach-na Bo" <[EMAIL PROTECTED]>
wrote:
> INFO: index "job_log_id_pkey" now contains 10496152 row versions in
> 59665 pages
See the 10496152 above? That means you have 10496152 rows of data in your
table. Of those, only 365000 are alive. That means you have basically
never vacuumed this table before, correct?
> Cpu(s): 0.7% us, 0.8% sy, 0.0% ni, 49.7% id, 48.5% wa, 0.3% hi, 0.0% si
> Mem:  1554788k total, 1538580k used,   16208k free,    2872k buffers
> Swap: 1020024k total,     176k used, 1019848k free, 1414908k cached
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> [...] 9 6:25.10 postmaster
----------top output end----------
I know my database needs a major redesign, but even so I'm having a hard
time explaining this poor performance.
Regards,
Bealach
From: Andreas Kretschmer <[EMAIL PROTECTED]>
To: pgsql-performance@postgresql.org
Bealach-na Bo <[EMAIL PROTECTED]> schrieb:
> The node table is tiny (2500 records). What I'm pulling my hair out
> over is that ANY query, even something as simple as select count(*)
> from job_log, takes on the order of tens of minutes to complete. Just
> now I'm trying to run an explain analyze [...]
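The symptom described above fits heap bloat: SELECT count(*) has to scan every page of the table, dead rows included, so a 365k-row table carrying ~10M dead row versions reads like a 10M-row table. A minimal way to confirm where the time goes, using the table name from the thread:

```sql
-- Hypothetical check: the actual time and row counts of the slow query.
EXPLAIN ANALYZE SELECT count(*) FROM job_log;
```

If the sequential scan node reports vastly more pages/time than the live row count justifies, vacuuming (or the VACUUM FULL + REINDEX discussed earlier) is the fix.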
Hi folks,
Sorry to be bringing this up again, but I'm stumped by this problem
and hope you can shed some light on it.
I'm running PostgreSQL 8.0 on a RHEL 4 server with 1.5 GB of RAM and a
2 GHz Xeon CPU. The OS is bog standard and I've not done any kernel
tuning on it. The file system is also bog standard [...]