Re: [PERFORM] pg_repack solves alter table set tablespace lock

2014-01-24 Thread Josh Kupershmidt
On Fri, Jan 24, 2014 at 3:48 PM, Ying He wrote: > I looked at the pg_repack usage and in release 1.2 > http://reorg.github.io/pg_repack/. there is -s tablespace that claims to > be an online version of ALTER TABLE ... SET TABLESPACE > > is this the functionality that solves the alter table set ta
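(Not part of the original message — a hedged sketch of how pg_repack 1.2's tablespace option is typically invoked, assuming the tool is installed and the target tablespace `new_tblspc` already exists; table and database names here are hypothetical:)

```shell
# Rewrite big_table into the new tablespace; unlike a plain
# ALTER TABLE ... SET TABLESPACE, pg_repack takes the ACCESS EXCLUSIVE
# lock only briefly at the end of the copy, not for its duration.
pg_repack --table=big_table --tablespace=new_tblspc mydb

# -S / --moveidx additionally moves the table's indexes to the
# same tablespace.
pg_repack --table=big_table --tablespace=new_tblspc --moveidx mydb
```

These commands need a running cluster and the pg_repack extension installed in `mydb`, so they are illustrative rather than directly runnable here.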

Re: [PERFORM] COMMIT stuck for days after bulk delete

2014-01-14 Thread Josh Kupershmidt
On Tue, Jan 14, 2014 at 12:36 PM, Tom Lane wrote: > Josh Kupershmidt writes: >> We have a 9.1.11 backend (Ubuntu 12.04 x86_64, m1.medium EC2 instance) >> which seems to be stuck at COMMIT for 2 days now: >> ... >> The transaction behind that COMMIT has been the on

[PERFORM] COMMIT stuck for days after bulk delete

2014-01-14 Thread Josh Kupershmidt
We have a 9.1.11 backend (Ubuntu 12.04 x86_64, m1.medium EC2 instance) which seems to be stuck at COMMIT for 2 days now: mydb=# SELECT procpid, waiting, current_query, CURRENT_TIMESTAMP - query_start AS query_elapsed, CURRENT_TIMESTAMP - xact_start AS xact_elapsed FROM pg_stat_activity WHERE procp
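The quoted query is cut off above; a plausible full form of that kind of diagnostic query, using the 9.1-era column names, would be:

```sql
-- PostgreSQL 9.1 column names (procpid, waiting, current_query);
-- on 9.2+ these became pid and query, and "waiting" was later
-- replaced by wait_event / wait_event_type.
SELECT procpid,
       waiting,
       current_query,
       CURRENT_TIMESTAMP - query_start AS query_elapsed,
       CURRENT_TIMESTAMP - xact_start  AS xact_elapsed
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY xact_elapsed DESC;
```

This requires a live server, so it is a sketch of the query shape rather than the exact statement from the original message.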

Re: [PERFORM] get/set priority of PostgreSQL backends

2012-04-07 Thread Josh Kupershmidt
On Sat, Apr 7, 2012 at 11:05 AM, Scott Marlowe wrote: > On Sat, Apr 7, 2012 at 11:06 AM, Josh Kupershmidt wrote: >> The wiki says nice_backend_super() might be able to "renice any >> backend pid and set any priority, but is usable only by the [database] >> superuser"

[PERFORM] get/set priority of PostgreSQL backends

2012-04-07 Thread Josh Kupershmidt
Hi all, I noticed a note on the 'Priorities' wiki page[1], which talked about the need for having "a C-language function 'nice_backend(prio)' that renices the calling backend to 'prio'", and suggests posting a link to this list. Well, here you go: http://pgxn.org/dist/prioritize/ The API is a
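(An aside, not from the original message: the extension's exact function names are best checked against its documentation on PGXN. The OS-level equivalent of a `nice_backend(10)` call can be sketched with standard tools; the pid below is hypothetical:)

```shell
# From psql, find the pid of the backend you want to renice:
#   SELECT pg_backend_pid();
#
# Then lower its CPU priority from the shell. Raising niceness
# needs no special privileges; lowering it back requires root.
renice -n 10 -p 12345   # 12345 = backend pid (hypothetical)
```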

Re: [PERFORM] Update problem on large table

2010-12-07 Thread Josh Kupershmidt
On Mon, Dec 6, 2010 at 4:31 PM, felix wrote: > > thanks for the replies! > but actually I did figure out how to kill it > but pg_cancel_backend didn't work.  here's some notes: > this has been hung for 5 days: > ns      |   32681 | nssql   | in transaction | f       | 2010-12-01 > 15 Right, pg

Re: [PERFORM] Update problem on large table

2010-12-06 Thread Josh Kupershmidt
On Mon, Dec 6, 2010 at 2:48 PM, Jon Nelson wrote: > On Mon, Dec 6, 2010 at 1:46 PM, bricklen wrote: >> Not sure if anyone replied about killing your query, but you can do it like >> so: >> >> select pg_cancel_backend(5902);  -- assuming 5902 is the pid of the >> query you want canceled. > > How
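(For context, not part of the original message — the two built-in signalling functions discussed in this thread, with a hypothetical pid:)

```sql
-- Ask the backend with pid 5902 to cancel its current query:
SELECT pg_cancel_backend(5902);

-- A backend that is "idle in transaction" has no query to cancel;
-- terminating the whole connection is the heavier option
-- (superuser-only on these older releases):
SELECT pg_terminate_backend(5902);
```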

Re: [PERFORM] Postgres insert performance and storage requirement compared to Oracle

2010-10-25 Thread Josh Kupershmidt
On Mon, Oct 25, 2010 at 2:12 PM, Divakar Singh wrote: > 1. How does PostgreSQL perform when inserting data into an indexed (type: > btree) table? Is it true that as you add the indexes on a table, the > performance deteriorates significantly whereas Oracle does not show that > much performance dec
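(A hedged sketch, not from the original message, of how one might measure the per-index insert overhead being asked about, using psql's \timing and a throwaway table:)

```sql
-- Compare bulk-insert time with zero vs. three btree indexes.
\timing on
CREATE TABLE t (a int, b int, c int);
INSERT INTO t SELECT i, i, i FROM generate_series(1, 100000) i;

TRUNCATE t;
CREATE INDEX ON t (a);
CREATE INDEX ON t (b);
CREATE INDEX ON t (c);
INSERT INTO t SELECT i, i, i FROM generate_series(1, 100000) i;
```

The difference between the two INSERT timings approximates the index-maintenance cost; actual numbers depend heavily on hardware, shared_buffers, and maintenance settings.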

Re: [PERFORM] how to get the total number of records in report

2010-10-18 Thread Josh Kupershmidt
On Mon, Oct 18, 2010 at 1:16 AM, AI Rumman wrote: > At present for reporting I use following types of query: > select crm.*, crm_cnt.cnt > from crm, > (select count(*) as cnt from crm) crm_cnt; > Here count query is used to find the total number of records. > Same FROM clause is copied in both the
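(The usual suggestion in this situation, added here as a sketch rather than a quote from the thread: a window-function count attaches the total to every row without repeating the FROM clause, available since 8.4:)

```sql
-- One FROM clause instead of two; count(*) OVER () yields the
-- total row count on every output row.
SELECT crm.*, count(*) OVER () AS cnt
FROM crm;
```

Whether this is actually faster than the subquery form depends on the plan; it still has to produce the full row set before the count is known.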

Re: [PERFORM] cleanup on pg_ system tables?

2010-09-20 Thread Josh Kupershmidt
On Mon, Sep 20, 2010 at 1:25 PM, mark wrote: > Hi All, > > (pg 8.3.7 on RHEL  2.6.18-92.el5 ) > > I ran the query below (copied from > http://pgsql.tapoueh.org/site/html/news/20080131.bloat.html ) on a > production DB we have and I am looking at some pretty nasty looking > numbers for tables in th
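(An aside, not from the original message: the linked bloat query only estimates. The pgstattuple contrib module reports exact dead-tuple and free-space figures, at the cost of scanning the whole relation. On 8.3, as in this thread, it is installed via the contrib SQL script rather than the 9.1+ syntax shown:)

```sql
-- 9.1+ installation syntax; on 8.3 run the contrib pgstattuple.sql
-- script instead.
CREATE EXTENSION pgstattuple;

-- Exact tuple/dead-tuple/free-space accounting for one catalog table:
SELECT * FROM pgstattuple('pg_catalog.pg_attribute');
```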

Re: [PERFORM] stats collector suddenly causing lots of IO

2010-04-16 Thread Josh Kupershmidt
On Fri, Apr 16, 2010 at 3:22 PM, Tom Lane wrote: > Josh Kupershmidt writes: >>        name          | current_setting |       source >> ----------------------+-----------------+-------------------- >>  vacuum_cost_delay    | 200ms           | configuration file >>

Re: [PERFORM] stats collector suddenly causing lots of IO

2010-04-16 Thread Josh Kupershmidt
On Fri, Apr 16, 2010 at 2:14 PM, Greg Smith wrote: > Josh Kupershmidt wrote: >> >> SELECT name, current_setting(name), source FROM pg_settings WHERE >> source != 'default' AND name ILIKE '%vacuum%'; >>      

Re: [PERFORM] stats collector suddenly causing lots of IO

2010-04-16 Thread Josh Kupershmidt
On Fri, Apr 16, 2010 at 12:48 PM, Tom Lane wrote: > Josh Kupershmidt writes: >> Hrm, well autovacuum is at least trying to do work: it's currently >> stuck on those bloated pg_catalog tables, of course. Another developer >> killed an autovacuum of pg_attribute (

Re: [PERFORM] stats collector suddenly causing lots of IO

2010-04-16 Thread Josh Kupershmidt
On Fri, Apr 16, 2010 at 11:41 AM, Tom Lane wrote: > Wow.  Well, we have a smoking gun here: for some reason, autovacuum > isn't running, or isn't doing its job if it is.  If it's not running > at all, that would explain failure to prune the stats collector's file > too. Hrm, well autovacuum is at
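(A sketch, not from the original message, of one way to check whether autovacuum has been doing its job on the bloated catalogs:)

```sql
-- Per-table vacuum/analyze timestamps for the system catalogs;
-- NULLs sorted first surface tables autovacuum has never touched.
SELECT relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_all_tables
WHERE schemaname = 'pg_catalog'
ORDER BY last_autovacuum NULLS FIRST;
```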

Re: [PERFORM] stats collector suddenly causing lots of IO

2010-04-16 Thread Josh Kupershmidt
On Fri, Apr 16, 2010 at 11:23 AM, Tom Lane wrote: > Josh Kupershmidt writes: >> I'm not sure whether this is related to the stats collector problems >> on this machine, but I noticed alarming table bloat in the catalog >> tables pg_attribute, pg_attrdef, pg_depend, a

Re: [PERFORM] stats collector suddenly causing lots of IO

2010-04-16 Thread Josh Kupershmidt
On Thu, Apr 15, 2010 at 6:31 PM, Tom Lane wrote: > Chris writes: >> I have a lot of centos servers which are running postgres.  Postgres isn't >> used >> that heavily on any of them, but lately, the stats collector process keeps >> causing tons of IO load.  It seems to happen only on servers wit

[PERFORM] PG-related ACM Article: "The Pathologies of Big Data"

2009-08-07 Thread Josh Kupershmidt
Just stumbled across this recent article published in the Communications of the ACM: http://cacm.acm.org/magazines/2009/8/34493-the-pathologies-of-big-data/fulltext The author shares some insights relating to difficulties processing a 6.75 billion-row table, a dummy table representing census-type