[GENERAL] CPU Load 100% when pg_dump start in Postgresql 8.4

2012-04-03 Thread Prashant Bharucha
Hello Everyone

I am facing a big problem: when pg_dump starts, CPU load becomes 100%.

DB size is 35 GB, running an e-commerce web site. Insert transactions complete
successfully, but Update transactions are not going through.

Could you please help me figure out where the problem is?


Thanks
Prashant


Re: [GENERAL] CPU Load 100% when pg_dump start in Postgresql 8.4

2012-04-03 Thread Brent Wood
Can you not nice the dump process to free up resources during the dump? Of 
course this will not free up any locks, and will make them hang around longer 
as the dump is slowed down.
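Brent's suggestion can be sketched as a small wrapper; a minimal Python example (assuming a POSIX system) that lowers a child's CPU scheduling priority before exec. The pg_dump command line is a stand-in, and `ionice -c 3`, where available, would additionally lower I/O priority:

```python
import os
import subprocess
import sys

def run_niced(cmd, niceness=19):
    """Run cmd in a child whose CPU scheduling priority is lowered first."""
    return subprocess.run(
        cmd,
        preexec_fn=lambda: os.nice(niceness),  # runs in the child, before exec
        capture_output=True,
        text=True,
    )

# Demonstrate with a trivial child that reports its own niceness; for a
# real dump this would be something like ["pg_dump", "-Fc", "mydb"].
result = run_niced([sys.executable, "-c", "import os; print(os.nice(0))"])
print(result.stdout.strip())
```

As noted above, this only slows the dump down; it does nothing about the locks pg_dump holds while it runs.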

Brent Wood

GIS/DBA consultant
NIWA
+64 (4) 4 386-0300


--
Please consider the environment before printing this email.
NIWA is the trading name of the National Institute of Water & Atmospheric
Research Ltd.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] CPU move

2011-11-27 Thread Frank Lanitz
On 26.11.2011 19:18, Carlos Henrique Reimer wrote:
 Hi,
 
 We're planning to move our postgreSQL database from one CPU box to
 another box.
 
 I'm considering an alternative procedure for the move as the standard
 one (pg_dump from the old, copy dump to the new box, psql to restore in
 the new) will take about 10 hours to complete. The idea is installing
 the same Linux and PostgreSQL versions in the new box and copy the
 entire database cluster directory from the old to the new one using the
 scp Linux command.

If you are using the same architecture you could use the warm-standby
procedure to do the sync and then switch over to the new system. pg_standby
is available for this.

Cheers,
Frank





Re: [GENERAL] CPU move

2011-11-27 Thread frank


[GENERAL] CPU move

2011-11-26 Thread Carlos Henrique Reimer
Hi,

We're planning to move our postgreSQL database from one CPU box to another
box.

I'm considering an alternative procedure for the move, as the standard one
(pg_dump from the old box, copy the dump to the new box, psql to restore on
the new one) will take about 10 hours to complete. The idea is to install
the same Linux and PostgreSQL versions on the new box and copy the entire
database cluster directory from the old box to the new one using the scp
Linux command.

I checked the locale files; they were not customized and are the same on
both boxes.

Which risks am I exposed to with this approach?

Thank you!

-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br


Re: [GENERAL] CPU move

2011-11-26 Thread Alan Hodgson
On Saturday, November 26, 2011 10:18:56 AM Carlos Henrique Reimer wrote:
 Hi,
 
 We're planning to move our postgreSQL database from one CPU box to another
 box.
 
 I'm considering an alternative procedure for the move as the standard one
 (pg_dump from the old, copy dump to the new box, psql to restore in the
 new) will take about 10 hours to complete. The idea is installing the same
 Linux and PostgreSQL versions in the new box and copy the entire database
 cluster directory from the old to the new one using the scp Linux command.
 
 Checked the locale files; they were not customized and are the same on
 both boxes.
 
 Which risks am I exposed to with this approach?

This will work, if the versions are the same and the source database is 
stopped before you start the copy. Make sure you fix up the permissions 
afterwards.

It would be faster to use rsync - you can run a first pass with the source 
database running, then stop it and run a last rsync -ar --delete to pick up 
any changes. Your only downtime need be during the last pass.

Make sure you've copied over any sysctl.conf settings.
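Alan's two-pass procedure can be sketched as command construction; the paths and host below are hypothetical, and the `--delete` pass must only run after the old postmaster has been stopped:

```python
def rsync_passes(data_dir, dest, excludes=("postmaster.pid",)):
    """Build the two rsync invocations: a first pass with the old server
    still running, then a final catch-up pass run only after it is stopped."""
    base = ["rsync", "-a"] + [f"--exclude={e}" for e in excludes]
    first = base + [data_dir, dest]              # old server still up
    final = base + ["--delete", data_dir, dest]  # old server stopped
    return first, final

first, final = rsync_passes("/var/lib/pgsql/data/", "newbox:/var/lib/pgsql/")
print(" ".join(final))
```

`-a` preserves ownership and permissions; `--delete` on the final pass removes anything the old server dropped between the two passes, so the copy converges on the stopped cluster's state.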



[GENERAL] CPU choice for postgreSQL

2011-11-25 Thread Allan Kamau
Hello,

I have a read-only table with a long varbit field (length 6000), and a
function that performs various aggregate bitAND and bitOR operations on
this field and other fields of the table. This function does not
explicitly write any data to disk (there is hardly any disk activity)
and consumes 100% of a single core during its execution. I can open
several connections and invoke this function repeatedly on each
connection, but with different parameters. Currently I have copies of
the read-only table on various physical servers; I open several
connections to each of these servers and repeatedly execute the above
function on each of these connections in parallel.

Now I would like to procure new hardware to complement my efforts.
There are substantial amounts of data moving from RAM to CPU, and it
seems the most important hardware items would be CPU and RAM speed.


These are my current options, all with fast RAM.

1) Build 4 separate PCs based on AMD Phenom II X6 1100T CPU with
mini-ITX motherboard each.
2) Build 4 separate PCs based on Intel Core i7-2500K CPU with mini-ITX
motherboard each.
3) Build 1 server having 4 AMD 12 core Magny-cours CPUs (on one Tyan
motherboard).
4) Build 1 server having 4 AMD 16 core Interlagos CPUs (on one motherboard).


Which of these options (or possibly others) should I consider?


Regards,
Allan.
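For what it's worth, the aggregate bitAND/bitOR workload described above is purely CPU-bound and fans out cleanly across cores; a rough sketch using Python integers as stand-ins for the 6000-bit varbit values (in reality the work runs inside the server function, one backend per connection):

```python
import functools
import multiprocessing
import random

BITS = 6000  # length of the varbit column in the post

def aggregate_bits(seed, rows=200):
    """Stand-in for the server-side function: fold bitAND and bitOR
    over a batch of random 6000-bit values."""
    rng = random.Random(seed)
    values = [rng.getrandbits(BITS) for _ in range(rows)]
    all_and = functools.reduce(lambda a, b: a & b, values)
    all_or = functools.reduce(lambda a, b: a | b, values)
    return all_and, all_or

if __name__ == "__main__":
    # One task per "connection"; each lands on its own core.
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(aggregate_bits, range(4))
    print(len(results))
```

Since each task is independent and touches no shared state, throughput should scale close to linearly with core count, which is what makes the many-core options attractive for this particular workload.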



Re: [GENERAL] CPU

2007-12-04 Thread Harald Armin Massa
Josh,

However, the two extra cores (even if slower), will greatly help if you
 have any kind of concurrency.


As much as I understand, with Postgres running in the default configuration
there *will* be concurrency, without any "if"?

I am thinking of the background writer, the autovacuum process, the log
writer and finally the connection-serving processes. ... I am quite sure of
that default concurrency because I had to explain those 5 basic
postgres.exe processes to at least 8 Windows admins...

My non-benchmarked experience is that multicore and Postgres go well
together (experience mainly drawn from Windows).

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!


[GENERAL] CPU

2007-12-03 Thread Tom Allison
Is there much of a difference in performance between a dual-core Intel
Xeon and a dual-core AMD64 CPU?


I need a bit of an upgrade and am not sure which, if either, has a
significant advantage for Postgres databases.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] CPU

2007-12-03 Thread Uwe C. Schroeder
On Monday 03 December 2007, Tom Allison wrote:
 is there much of a difference in performance between a XEON, dual
 core from intel and a dual core AMD 64 CPU?

 I need a bit of an upgrade and am not sure which, if any, have a
 significant advantage for postgres databases.


Personally I've never seen PostgreSQL suck majorly on CPU performance. I guess 
the biggest speed increase lies in ultra-fast I/O, i.e. fast-spinning disks 
and battery-backed hardware RAID. Databases tend to be bound more by I/O than 
processor unless you do a lot of sorting, distinct selects etc.
Multi or single processor is just a matter of how many clients connect. AFAIK 
postgresql is not really multi-threaded, but runs each connection (master 
process) on one processor at a time. So if you have a quad core (or 4-
processor machine), you'll have 4 postmasters processing at any given time - 
the bottleneck again is I/O, because usually all processors share the same 
resources (memory and disks).

So basically I would invest in fast I/O and care less about the 
processors. More memory at hand may also be beneficial.


U.C.




Re: [GENERAL] CPU

2007-12-03 Thread Joshua D. Drake

Uwe C. Schroeder wrote:

On Monday 03 December 2007, Tom Allison wrote:

is there much of a difference in performance between a XEON, dual
core from intel and a dual core AMD 64 CPU?


Well honestly, with how cheap you can get a quad core from Intel... I 
say do that :). The general difference between a dual core opteron and a 
dual core xeon will likely not be noticeable to a PostgreSQL 
installation (generally speaking).


However, the two extra cores (even if slower), will greatly help if you 
have any kind of concurrency.


Sincerely,

Joshua D. Drake





Re: [GENERAL] CPU

2007-12-03 Thread Ron Johnson

On 12/03/07 21:27, Joshua D. Drake wrote:
 Uwe C. Schroeder wrote:
 On Monday 03 December 2007, Tom Allison wrote:
 is there much of a difference in performance between a XEON, dual
 core from intel and a dual core AMD 64 CPU?
 
 Well honestly, with how cheap you can get a quad core from Intel... I
 say do that :). The general difference between a dual core opteron and a
 dual core xeon will likely not be noticeable to a PostgreSQL
 installation (generally speaking).
 
 However, the two extra cores (even if slower), will greatly help if you
 have any kind of concurrency.

Are there any heat/power considerations?  An Opteron will most
likely draw less power, generate less heat, be easier to cool and
thus generate less noise.

Of course, the heat and whine from those 10K and 15K SCSI drives
will override any possible Opteron CPU fan quietness.

--
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] CPU

2007-12-03 Thread Joshua D. Drake

On Mon, 03 Dec 2007 22:30:58 -0600
Ron Johnson [EMAIL PROTECTED] wrote:

 
 On 12/03/07 21:27, Joshua D. Drake wrote:
  Uwe C. Schroeder wrote:
  On Monday 03 December 2007, Tom Allison wrote:
  is there much of a difference in performance between a XEON, dual
  core from intel and a dual core AMD 64 CPU?
  
  Well honestly, with how cheap you can get a quad core from Intel...
  I say do that :). The general difference between a dual core
  opteron and a dual core xeon will likely not be noticeable to a
  PostgreSQL installation (generally speaking).
  
  However, the two extra cores (even if slower), will greatly help if
  you have any kind of concurrency.
 
 Are there any heat/power considerations?  An Opteron will most
 likely draw less power, generate less heat, be easier to cool and
 thus generate less noise.

It's a server... why are we worried about noise? Heat... well yes but
what you say below :)

 
 Of course, the heat and whine from those 10K and 15K SCSI drives
 will override any possible Opteron CPU fan quietness.
 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] CPU

2007-12-03 Thread Greg Smith

On Mon, 3 Dec 2007, Joshua D. Drake wrote:

Well honestly, with how cheap you can get a quad core from Intel... I 
say do that


Exactly, the budget single processor configuration to beat in server land 
right now is the Xeon X3210.  The frequency of the cores is a little on 
the low side, so individual queries won't run quite as fast as some of the 
dual-core alternatives, but when you get twice as many of them it's hard 
to complain.  The X3220 is a little faster and more expensive.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



[GENERAL] CPU spike when doing PARSE (what is this?)

2007-09-18 Thread Ow Mun Heng
Just a short background.

using Perl-DBI to pull data from mssql into PG and \copy into a temp
table where the following is done.


my $query1 = "DELETE FROM $table_name
              WHERE $unique_id IN
              (SELECT $unique_id FROM $table_name_loading)";
my $query2 = "INSERT INTO $table_name SELECT * FROM $table_name_loading";
my $query3 = "UPDATE sync_log SET last_sync = ?,
              record_update_date_time = current_timestamp
              WHERE table_name = ?
              AND db_name = ?";
my $query4 = "TRUNCATE TABLE $table_name_loading";
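If the DELETE turns out to be the bottleneck, one common rework is to phrase it as PostgreSQL's DELETE ... USING join rather than an IN sub-select; the sketch below only assembles the two string forms the Perl above would build (table and column names are hypothetical stand-ins for the script's variables):

```python
# Hypothetical names standing in for the Perl script's $table_name etc.
table, loading, uid = "transactions", "transactions_loading", "txn_id"

# The shape the Perl above builds: DELETE with an IN sub-select.
delete_in = (
    f"DELETE FROM {table} "
    f"WHERE {uid} IN (SELECT {uid} FROM {loading})"
)

# Alternative shape: DELETE ... USING, a join form the planner may handle better.
delete_using = (
    f"DELETE FROM {table} "
    f"USING {loading} l "
    f"WHERE {table}.{uid} = l.{uid}"
)
print(delete_using)
```

Comparing EXPLAIN output for the two forms on the real tables would show whether the sub-select is what the backend spends its time on.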


I constantly see an operation in htop (an alternative to top):

postgres: username databasename 127.0.0.1(37833) PARSE

which sucks up huge blobs of my CPU time, and I would like to know what it
is exactly. I would not be surprised if the DELETE is the bottleneck, as
it's DELETING from a huge table (> 6 million rows) based on the
loading_temp_table.





Re: [GENERAL] CPU load high

2007-08-23 Thread Patrick Lindeman
Hi Max,

To find out what is causing the big load you could also try to use 'ATOP'
which can be found at http://www.atcomputing.nl/atop. This tool shows more
(accurate) information than the regular TOP.

There are also some kernel patches available which, when applied to your
kernel, even show more information which might come in handy.

Good Luck,

-
Patrick Lindeman

 Hello.

 I have a web-server with php 5.2 connected to postgres 8.0 backend. Most
 of the queries the users are doing are SELECTs (100-150 in a second for
 100 concurrent users), with a 5-10 INSERTs/UPDATEs at the same time. There
 is also a demon running in the background doing some work once every
 100ms. The problem is that after the number of concurrent users rises to
 100, CPU becomes almost 100% loaded. How do I find out what's hogging the
 CPU?

 'top' shows demon using 8% cpu on top, and some amount of postgres
 processes each using 2% cpu with some apache processes occassionally
 rising with 2% cpu also. Often the writer process is at the top using 10%
 cpu.

 And the second question is that over time demon and writer processes use
 more and more shared memory - is it normal?

 Thanks in advance.



Re: [GENERAL] CPU load high

2007-08-23 Thread Max Zorloff

On Thu, 23 Aug 2007 08:29:03 +0400, Tom Lane [EMAIL PROTECTED] wrote:

 Max Zorloff [EMAIL PROTECTED] writes:
  ... The problem is that after the number of concurrent users rises to
  100, CPU becomes almost 100% loaded. How do I find out what's hogging
  the CPU?

  'top' shows demon using 8% cpu on top, and some amount of postgres
  processes each using 2% cpu with some apache processes occasionally
  rising with 2% cpu also. Often the writer process is at the top using
  10% cpu.

 IOW there's nothing particular hogging the CPU?  Maybe you need more
 hardware than you've got, or maybe you could fix it by trying to
 optimize your most common queries.  It doesn't sound like there'll be
 any quick single-point fix though.

There's no one big process chugging everything, yes, but all these 2%
postgres processes look like they're having their hand in overall cpu
consumption. I looked through every query; they all use indexes and,
what's more, return 1-20 rows at most. Yes, I think there won't be any
single fix, but I wanted to know: are there some tools or techniques for
finding where the problem lies?

I've looked into query time statistics - they all grow with cpu usage,
but it doesn't really mean anything - cpu usage grows, queries get
slower.

When one postgres process waits for a lock to release, does it use any cpu?
And also, when apache waits for a query to finish, does it use cpu?




Re: [GENERAL] CPU load high

2007-08-23 Thread Hannes Dorbath

On 23.08.2007 11:04, Max Zorloff wrote:

When one postgres process waits for lock to release does it use any cpu?
And also, when apache waits for query to finish, does it use cpu?


No, but are you sure what you see is not i/o wait? What values does top 
display in the %wa column in the CPU rows? What does iostat -dm 1 say 
under load?



--
Regards,
Hannes Dorbath



Re: [GENERAL] CPU load high

2007-08-23 Thread Max Zorloff
On Thu, 23 Aug 2007 12:24:32 +0400, Hannes Dorbath
[EMAIL PROTECTED] wrote:

 On 23.08.2007 11:04, Max Zorloff wrote:

  When one postgres process waits for lock to release does it use any cpu?
  And also, when apache waits for query to finish, does it use cpu?

 No, but are you sure what you see is not i/o wait? What values does top
 display in the %wa column in the CPU rows? What does iostat -dm 1 say
 under load?

Well, vmstat 1 says this with 64 users (the last column is the wa):

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  0  12336 289880 331224 3473804    0    4     8    25   91     0 31 13 54  3
13  0  12336 288012 331224 3473872    0    0     0   288 1054  3237 59 17 24  0
 3  0  12336 284044 331224 3473872    0    0     0   480  908  3922 71 18 11  0
 4  0  12336 291500 331224 3473872    0    0     0   248  654  2913 63 13 23  0
 6  0  12336 297220 331224 3473940    0    0     0   240  678  3232 44 12 44  0
 6  0  12336 304312 331224 3473940    0    0     0  1708 1166  3303 50 17 17 16
 9  0  12336 304080 331224 3473940    0    0     0   480  779  4856 61 13 25  0
10  0  12336 309172 331224 3474008    0    0     0   304  697  3094 62 16 21  0
 2  0  12336 308180 331224 3474008    0    0     0   272  681  3370 56 12 32  0
 0  0  12336 307684 331224 3474076    0    0     0   112  689  3212 44 11 44  0
 0  1  12336 312280 331224 3474076    0    0     0  1472  863  3121 51 13 29  7
 7  0  12336 310544 331224 3474076    0    0     0   916 1023  3383 59 14 18  9
 3  0  12336 309428 331224 3474076    0    0     0   224  731  2974 55 14 30  0
 6  0  12336 306444 331224 3474144    0    0     0   392  796  3513 60 14 25  0
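The last (wa) column is what Hannes asked about. A short script, with a few rows paraphrased from the output above inlined, that averages it — high wa alongside moderate us/sy would point at I/O wait rather than CPU saturation:

```python
VMSTAT_SAMPLE = """\
13  0  12336 288012 331224 3473872 0 0 0  288 1054 3237 59 17 24  0
 3  0  12336 284044 331224 3473872 0 0 0  480  908 3922 71 18 11  0
 6  0  12336 304312 331224 3473940 0 0 0 1708 1166 3303 50 17 17 16
"""

def avg_iowait(text):
    """Average the trailing 'wa' column of vmstat rows."""
    rows = [line.split() for line in text.splitlines() if line.strip()]
    waits = [int(r[-1]) for r in rows]
    return sum(waits) / len(waits)

print(round(avg_iowait(VMSTAT_SAMPLE), 2))  # → 5.33
```

In this trace wa spikes only when bo (blocks written out) spikes, while us+sy stays around 70-90%, which is consistent with Max's CPU-bound reading rather than an I/O bottleneck.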





[GENERAL] CPU load high

2007-08-22 Thread Max Zorloff

Hello.

I have a web-server with php 5.2 connected to postgres 8.0 backend. Most  
of the queries the users are doing are SELECTs (100-150 in a second for  
100 concurrent users), with 5-10 INSERTs/UPDATEs at the same time. There  
is also a demon running in the background doing some work once every  
100ms. The problem is that after the number of concurrent users rises to  
100, CPU becomes almost 100% loaded. How do I find out what's hogging the  
CPU?


'top' shows demon using 8% cpu on top, and some amount of postgres  
processes each using 2% cpu with some apache processes occasionally  
rising with 2% cpu also. Often the writer process is at the top using 10%  
cpu.


And the second question is that over time demon and writer processes use  
more and more shared memory - is it normal?


Thanks in advance.



Re: [GENERAL] CPU load high

2007-08-22 Thread Tom Lane
Max Zorloff [EMAIL PROTECTED] writes:
 ... The problem is that after the number of concurrent users rises to  
 100, CPU becomes almost 100% loaded. How do I find out what's hogging the  
 CPU?

 'top' shows demon using 8% cpu on top, and some amount of postgres  
 processes each using 2% cpu with some apache processes occasionally  
 rising with 2% cpu also. Often the writer process is at the top using 10%  
 cpu.

IOW there's nothing particular hogging the CPU?  Maybe you need more
hardware than you've got, or maybe you could fix it by trying to
optimize your most common queries.  It doesn't sound like there'll be
any quick single-point fix though.

 And the second question is that over time demon and writer processes use  
 more and more shared memory - is it normal?

This is probably an artifact.  Many versions of top report a process
as having used as many pages of shared memory as it's actually touched
in its lifetime.  So if you have lots of shared buffers, then any one
Postgres process will show growth of reported memory usage as it
randomly happens to access one buffer or another, eventually maxing out
at whatever you've got the PG shared memory segment size set to.

regards, tom lane



Re: [GENERAL] CPU-intensive autovacuuming

2005-06-11 Thread Tom Lane
Thomas F. O'Connell [EMAIL PROTECTED] writes:
 Honestly, I'd prefer to see pg_autovacuum improved to do O(n) rather  
 than O(n^2) table activity. At this point, though, I'm probably not  
 too likely to have much time to hack pg_autovacuum before 8.1 is  
 released, although if it doesn't become integrated by beta feature  
 freeze, I might give it a shot.

This would be vastly easier to fix if the code were integrated into the
backend first.  In the backend environment you could just keep the info
in a dynahash.c hashtable instead of in a linear list.  On the client
side, you have to roll your own hashing (or adapt dynahash to life
outside the backend environment).
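Tom's point is essentially: replace the per-row linear scan with a keyed lookup. A sketch with a Python dict standing in for a dynahash table (the TableInfo fields are hypothetical, not autovacuum's real struct):

```python
from dataclasses import dataclass

@dataclass
class TableInfo:  # fields are illustrative only
    relid: int
    n_tuples: int

# Linear list: finding each of n query rows costs O(n), O(n^2) overall.
table_list = [TableInfo(relid=i, n_tuples=0) for i in range(1000)]

# Keyed map (dict standing in for dynahash): O(1) expected per lookup.
table_map = {t.relid: t for t in table_list}

def lookup(relid):
    return table_map.get(relid)  # None when the table is not tracked

print(lookup(42).relid)  # → 42
```

Building the map is a one-time O(n) pass, after which each of the n per-row lookups is constant expected time, which is exactly the O(n^2) → O(n) improvement discussed in the thread.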

regards, tom lane



Re: [GENERAL] CPU-intensive autovacuuming

2005-06-10 Thread Shelby Cain


--- Thomas F. O'Connell [EMAIL PROTECTED] wrote:

 Phil,
 
 If you complete this patch, I'm very interested to see it.
 
 I think I'm the person Matthew is talking about who inserted a sleep 
 
 value. Because of the sheer number of tables involved, even small  
 values of sleep caused pg_autovacuum to iterate too slowly over its  
 table lists to be of use in a production environment (where I still  
 find its behavior to be preferable to a complicated list of manual  
 vacuums performed in cron).
 
 --
 Thomas F. O'Connell
 Co-Founder, Information Architect
 Sitening, LLC
 

Were you sleeping every time through the loop?  How about something
like:

if (j % 500 == 1) usleep(10);

Regards,

Shelby Cain
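Shelby's modulo-sleep idea in runnable form: sleep briefly only once per N iterations, so the loop yields the CPU without crawling. The interval and pause length below are illustrative:

```python
import time

def scan_tables(n_tables, interval=500, pause=0.0001):
    """Walk the table list, sleeping briefly once per `interval` iterations
    so the loop yields the CPU without crawling. Returns the sleep count."""
    sleeps = 0
    for j in range(n_tables):
        if j % interval == 1:
            time.sleep(pause)
            sleeps += 1
    return sleeps

print(scan_tables(5000))  # → 10
```

With 5000 tables this sleeps only 10 times, versus 5000 under a sleep-every-iteration scheme, which is why the per-iteration sleep made the loop too slow for large table lists.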






Re: [GENERAL] CPU-intensive autovacuuming

2005-06-10 Thread Thomas F. O'Connell
I was usleeping in tiny increments in each iteration of the loop. I  
didn't try breaking it into iterative groups like this.


Honestly, I'd prefer to see pg_autovacuum improved to do O(n) rather  
than O(n^2) table activity. At this point, though, I'm probably not  
too likely to have much time to hack pg_autovacuum before 8.1 is  
released, although if it doesn't become integrated by beta feature  
freeze, I might give it a shot.


But I hope if anyone completes the linear improvement, they'll post  
to the lists.


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jun 10, 2005, at 9:12 AM, Shelby Cain wrote:


--- Thomas F. O'Connell [EMAIL PROTECTED] wrote:


Were you sleeping every time through the loop?  How about something
like:

if (j%500 == 1) usleep(10)

Regards,

Shelby Cain




Re: [GENERAL] CPU-intensive autovacuuming

2005-06-09 Thread Thomas F. O'Connell

Phil,

If you complete this patch, I'm very interested to see it.

I think I'm the person Matthew is talking about who inserted a sleep  
value. Because of the sheer number of tables involved, even small  
values of sleep caused pg_autovacuum to iterate too slowly over its  
table lists to be of use in a production environment (where I still  
find its behavior to be preferable to a complicated list of manual  
vacuums performed in cron).


--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC

Strategic Open Source: Open Your i

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005

On Jun 7, 2005, at 6:16 AM, Phil Endecott wrote:


Matthew T. O'Connor wrote:


Phil Endecott wrote:

 Could it be that there is some code in autovacuum that is O(n^2) in
 the number of tables?

Browsing the code using webcvs, I have found this:

for (j = 0; j < PQntuples(res); j++)
{
    tbl_elem = DLGetHead(dbs->table_list);
    while (tbl_elem != NULL)
    {

Have I correctly understood what is going on here?




Indeed you have.  I have heard a few similar reports, but perhaps  
none as bad as yours.  One person put in a small sleep value so that  
it doesn't spin so tight.  You could also just up the sleep delay  
so that it doesn't do this work quite so often.  No other quick  
suggestions.




I do wonder why autovacuum is keeping its table list in memory  
rather than in the database.


But given that it is keeping it in memory, I think the real fix is  
to sort that list (or keep it ordered when building or updating  
it).  It is trivial to also get the query results ordered, and they  
can then be compared in O(n) time.


I notice various other places where there seem to be nested loops,  
e.g. in the update_table_list function.  I'm not sure if they can  
be fixed by similar means.


--Phil.




Re: [GENERAL] CPU-intensive autovacuuming

2005-06-07 Thread Phil Endecott

Matthew T. O'Connor wrote:

Phil Endecott wrote:

 Could it be that there is some code in autovacuum that is O(n^2) in
 the number of tables?

Browsing the code using webcvs, I have found this:

for (j = 0; j < PQntuples(res); j++)
{
    tbl_elem = DLGetHead(dbs->table_list);
    while (tbl_elem != NULL)
    {

Have I correctly understood what is going on here?


Indeed you have.  I have heard a few similar reports, but perhaps none as 
bad as yours.  One person put in a small sleep value so that it doesn't 
spin so tight.  You could also just up the sleep delay so that it 
doesn't do this work quite so often.  No other quick suggestions.


I do wonder why autovacuum is keeping its table list in memory rather 
than in the database.


But given that it is keeping it in memory, I think the real fix is to 
sort that list (or keep it ordered when building or updating it).  It is 
trivial to also get the query results ordered, and they can then be 
compared in O(n) time.


I notice various other places where there seem to be nested loops, e.g. 
in the update_table_list function.  I'm not sure if they can be fixed by 
similar means.


--Phil.
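The sorted-list comparison proposed above is the classic one-pass merge; a sketch with relation IDs as the sort keys (names are illustrative):

```python
def diff_sorted(current, remembered):
    """One-pass compare of two sorted key lists: O(n) instead of O(n^2)."""
    added, dropped = [], []
    i = j = 0
    while i < len(current) and j < len(remembered):
        if current[i] == remembered[j]:
            i += 1; j += 1
        elif current[i] < remembered[j]:
            added.append(current[i]); i += 1
        else:
            dropped.append(remembered[j]); j += 1
    added.extend(current[i:])
    dropped.extend(remembered[j:])
    return added, dropped

print(diff_sorted([1, 2, 4, 5], [2, 3, 4]))  # → ([1, 5], [3])
```

Since the query results can be fetched with ORDER BY at no real extra cost, only the in-memory list needs to be kept sorted for this to work.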










Re: [GENERAL] CPU-intensive autovacuuming

2005-06-07 Thread Matthew T. O'Connor

Phil Endecott wrote:


Matthew T. O'Connor wrote:

Indeed you have.  I have heard a few similar reports, but perhaps none 
as bad as yours.  One person put in a small sleep value so that it 
doesn't spin so tight.  You could also just up the sleep delay so 
that it doesn't do this work quite so often.  No other quick 
suggestions.



I do wonder why autovacuum is keeping its table list in memory rather 
than in the database.



For better or worse, this was a conscious design decision that the 
contrib version of autovacuum be non-invasive to your database. 

But given that it is keeping it in memory, I think the real fix is to 
sort that list (or keep it ordered when building or updating it).  It 
is trivial to also get the query results ordered, and they can then be 
compared in O(n) time.



I'm quite sure there is a better way, please submit a patch if you can.  
This was never a real concern for most people since the number of tables 
is typically small enough not to be a problem.  The integrated version 
of autovacuum that didn't make the cut before 8.0 avoids this problem 
since the autovacuum data is stored in the database.


I notice various other places where there seem to be nested loops, 
e.g. in the update_table_list function.  I'm not sure if they can be 
fixed by similar means. 



I would think so, they all basically do the same type of loop.


---(end of broadcast)---
TIP 6: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] CPU-intensive autovacuuming

2005-06-07 Thread Matthew T. O'Connor

Phil Endecott wrote:


Matthew T. O'Connor wrote:

The integrated version of autovacuum that didn't make the cut before 
8.0 avoids this problem since the autovacuum data is stored in the 
database.



What is the status of this?  Is it something that will be included in 
8.1 or 8.0.n?  I might be able to patch the current code but that 
doesn't seem like a useful thing to do if a better solution will 
arrive eventually.  I am currently running vacuums from a cron job and 
I think I will be happy with that for the time being.



This is a good question :-)  I have been so busy with work lately that I 
have not been able to work on it.  I am currently trying to resurrect 
the patch I sent in for 8.0 and update it so that it applies against 
HEAD.  Once that is done, I will need help from someone with the 
portions of the work that I'm not comfortable / capable of.   The main 
issue with the version I created during the 8.0 devel cycle is that it used 
libpq to connect, query and issue commands against the databases.  This 
was deemed bad, and I need help setting up the infrastructure to make 
this happen without libpq.  I hope to have my patch applying against 
HEAD sometime this week but it probably won't happen till next week.


So the summary of the autovacuum integration status is that we are fast 
running out of time (feature freeze July 1), and I have very little time 
to devote to this task.  So you might want to submit your O(n) patch 
cause unfortunately it looks like integrated autovacuum might slip 
another release unless someone else steps up to work on it.



(Incidentally, I have also found that the indexes on my pg_attributes 
table were taking up over half a gigabyte, which came down to less 
than 40 megs after reindexing them.  Is there a case for having 
autovacuum also call reindex?) 



Yes there is certainly some merit to having autovacuum or something 
similar perform other system maintenance tasks such as reindexing.  I 
just haven't taken it there yet.  It does seem strange that your 
pg_attributes table got that big; anyone have any insight here?  You did 
say you are using 7.4.2; I forget if that has the index reclaiming code 
in vacuum, and there are also some autovacuum bugs in the early 7.4.x 
release.



Matthew O'Connor



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] CPU-intensive autovacuuming

2005-06-07 Thread Bruce Momjian
Phil Endecott wrote:
> Matthew T. O'Connor wrote:
> > The integrated version 
> > of autovacuum that didn't make the cut before 8.0 avoids this problem 
> > since the autovacuum data is stored in the database.
> 
> What is the status of this?  Is it something that will be included in 
> 8.1 or 8.0.n?  I might be able to patch the current code but that 
> doesn't seem like a useful thing to do if a better solution will arrive 
> eventually.  I am currently running vacuums from a cron job and I think 
> I will be happy with that for the time being.

I will post about integrating pg_autovacuum into the backend for 8.1 in
a few minutes.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] CPU-intensive autovacuuming

2005-06-07 Thread Phil Endecott

Matthew T. O'Connor wrote:
The integrated version 
of autovacuum that didn't make the cut before 8.0 avoids this problem 
since the autovacuum data is stored in the database.


What is the status of this?  Is it something that will be included in 
8.1 or 8.0.n?  I might be able to patch the current code but that 
doesn't seem like a useful thing to do if a better solution will arrive 
eventually.  I am currently running vacuums from a cron job and I think 
I will be happy with that for the time being.


(Incidentally, I have also found that the indexes on my pg_attributes 
table were taking up over half a gigabyte, which came down to less than 
40 megs after reindexing them.  Is there a case for having autovacuum 
also call reindex?)


--Phil.


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] CPU-intensive autovacuuming

2005-06-07 Thread Tom Lane
Phil Endecott [EMAIL PROTECTED] writes:
> (Incidentally, I have also found that the indexes on my pg_attributes 
> table were taking up over half a gigabyte, which came down to less than 
> 40 megs after reindexing them.  Is there a case for having autovacuum 
> also call reindex?)

Lots of temp tables I suppose?  If so that's not autovacuum's fault;
it wasn't getting told about the activity in pg_attribute until this
patch:

2005-03-31 18:20  tgl

* src/backend/postmaster/: pgstat.c (REL7_4_STABLE), pgstat.c
(REL8_0_STABLE), pgstat.c: Flush any remaining statistics counts
out to the collector at process exit.  Without this, operations
triggered during backend exit (such as temp table deletions) won't
be counted ... which given heavy usage of temp tables can lead to
pg_autovacuum falling way behind on the need to vacuum pg_class and
pg_attribute.  Per reports from Steve Crawford and others.

Unless the bloat occurred after you updated to 8.0.2, there's no issue.

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] CPU-intensive autovacuuming

2005-06-06 Thread Phil Endecott

Following up on my own post from last night:

> Could it be that there is some code in autovacuum that is O(n^2) in
> the number of tables?

Browsing the code using webcvs, I have found this:

for (j = 0; j < PQntuples(res); j++)
{
    tbl_elem = DLGetHead(dbs->table_list);
    while (tbl_elem != NULL)
    {

I haven't really tried to understand what is going on in here, but it 
does look like it is getting the result of the pg_class join stats 
query and then matching it up against its internal list of tables using 
nested loops, which is undoubtedly O(n^2) in the number of tables.


Have I correctly understood what is going on here?

--Phil.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] CPU-intensive autovacuuming

2005-06-06 Thread Matthew T. O'Connor

Phil Endecott wrote:


Following up on my own post from last night:

> Could it be that there is some code in autovacuum that is O(n^2) in
> the number of tables?

Browsing the code using webcvs, I have found this:

for (j = 0; j < PQntuples(res); j++)
{
    tbl_elem = DLGetHead(dbs->table_list);
    while (tbl_elem != NULL)
    {

I haven't really tried to understand what is going on in here, but it 
does look like it is getting the result of the pg_class join stats 
query and then matching it up against its internal list of tables 
using nested loops, which is undoubtedly O(n^2) in the number of tables.


Have I correctly understood what is going on here? 



Indeed you have.  I have heard a few similar reports but perhaps none as 
bad as yours.  One person put a small sleep value so that it doesn't 
spin so tight.  You could also just up the sleep delay so that it 
doesn't do this work quite so often.  No other quick suggestions. 



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] CPU-intensive autovacuuming

2005-06-05 Thread Phil Endecott

Dear Postgresql experts,

I'm trying to work out why my system spends so much time autovacuuming.

It looks like pg_autovacuum wakes up every few minutes and does a query 
on some system tables, and then spins doing no more queries but burning 
all available CPU cycles for a couple of minutes, before sleeping again.


I'm logging all queries to syslog, and the only queries that autovacuum 
seems to run are about 3 selecting from pg_class and some stats tables. 
  They complete in a couple of seconds.  I would see VACUUM commands in 
there as well (wouldn't I?) if it was actually doing anything, but I 
don't. Since not much is happening on the system I'm not surprised that 
it decides that nothing needs vacuuming.  But instead I think it 
spins; and it's the pg_autovacuum process, not a postmaster, that is 
taking all the CPU.


I wonder if this is because I have a lot of tables (about 50,000 I think 
- I have one schema per user, and each schema has a couple of dozen 
tables).  Could it be that there is some code in autovacuum that is 
O(n^2) in the number of tables, or something like that?


Has anyone seen anything like this before?  Any debugging suggestions?

This is with the Debian package of 7.4.2, and all the default autovacuum 
settings.


Cheers,  Phil.


---(end of broadcast)---
TIP 6: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] CPU Usage

2004-02-12 Thread Christopher Browne
Quoth [EMAIL PROTECTED] (Mark Cubitt):
> is there a way of finding out which table is using a certain amount of CPU
> power?

That doesn't entirely make sense.

Tables normally use up disk space.

What consumes CPU power is the computation of queries, and, for big,
expensive queries, it is pretty common for there to be several tables
involved.

Could you describe the problem you are trying to solve, and try to
_avoid_ framing it in terms of how it might be answered?
-- 
(reverse (concatenate 'string gro.gultn @ enworbbc))
http://www.ntlug.org/~cbbrowne/lsf.html
Applicants must  have *at least*  five years experience  with Windows
XCVIII...

---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] CPU killer

2000-10-27 Thread Vilson farias

Greetings,

I've been using Postgres in a Pentium 75Mhz, Linux RedHat 6.2, 32Mb.

   Every big query I execute uses too much cpu (more than 90%).

   I start postgres with these params: su -l postgres -c
'/usr/bin/postmaster -B 2048  -i -D "/home/postgres/data"' .

What should I do for avoid postgres extreme cpu allocation? I know sometimes
non-indexed tables or huge size tables can be slow, but here I don't care
about execution speed, I just want less cpu allocation no matter how slow.

Regards from Brazil,

José Vilson de Mello de Farias
Dígitro Tecnologia Ltda.






Re: [GENERAL] CPU killer

2000-10-27 Thread Alfred Perlstein

* Vilson farias [EMAIL PROTECTED] [001027 21:38] wrote:
> Greetings,
> 
> I've been using Postgres in a Pentium 75Mhz, Linux RedHat 6.2, 32Mb.
> 
>    Every big query I execute uses too much cpu (more than 90%).
> 
>    I start postgres with these params: su -l postgres -c
> '/usr/bin/postmaster -B 2048  -i -D "/home/postgres/data"' .
> 
> What should I do for avoid postgres extreme cpu allocation? I know sometimes
> non-indexed tables or huge size tables can be slow, but here I don't care
> about execution speed, I just want less cpu allocation no matter how slow.

Unix is a timesharing system, if you want an application on unix
to use less CPU then put it on a box with a slower CPU.  If you
want to limit its priority against other processes so that it
shares CPU in a more friendly manner, then you want to read the
manpage for nice(1).
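[Editor's note: a minimal sketch of the nice(1) approach, using a harmless echo in place of the postmaster so it can be run anywhere; the commented-out postmaster invocation mirrors the original post and is not executed here.]

```shell
# Run a command at a lower scheduling priority (niceness 10).  The kernel
# still gives it all otherwise-idle CPU, but other processes win when
# there is contention for the CPU.
nice -n 10 sh -c 'echo "running at nice $(nice)"'

# For the postmaster from the original post, the same idea would be:
#   nice -n 10 /usr/bin/postmaster -B 2048 -i -D /home/postgres/data &
```

Note that backends inherit the postmaster's niceness at fork time, so the setting must be in place before the server starts.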

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [GENERAL] CPU killer

2000-10-27 Thread Igor Roboul

On Fri, Oct 27, 2000 at 07:12:57PM -0200, Vilson farias wrote:
> I start postgres with these params: su -l postgres -c
> '/usr/bin/postmaster -B 2048  -i -D "/home/postgres/data"' .
Try starting postmaster with 'nice':
nice /usr/bin/postmaster -B 2048  -i -D /home/postgres/data  

-- 
Igor Roboul, Unix System Administrator & Programmer @ sanatorium "Raduga", 
Sochi, Russia
http://www.brainbench.com/transcript.jsp?pid=304744



Re: [GENERAL] CPU killer

2000-10-27 Thread Tod McQuillin

On Fri, 27 Oct 2000, Vilson farias wrote:

> I've been using Postgres in a Pentium 75Mhz, Linux RedHat 6.2, 32Mb.
> 
>    Every big query I execute uses too much cpu (more than 90%).
> 
>    I start postgres with these params: su -l postgres -c
> '/usr/bin/postmaster -B 2048  -i -D "/home/postgres/data"' .
> 
> What should I do for avoid postgres extreme cpu allocation? I know
> sometimes non-indexed tables or huge size tables can be slow, but here
> I don't care about execution speed, I just want less cpu allocation no
> matter how slow.

Well, assuming you've already optimised the queries to run as quickly as
possible, if you don't want the backend stealing cpu from other processes
then start postmaster with the nice command (man nice) to give it a lower
priority.  It will still use just as much CPU time, but not at the expense
of other processes.
-- 
Tod McQuillin