Re: [PERFORM] [Npgsql-general] index out of range

2005-06-09 Thread Josh Close
On 6/8/05, Francisco Figueiredo Jr. [EMAIL PROTECTED] wrote:
 
 --- Josh Close [EMAIL PROTECTED] escreveu:
 
  Well, that would make total sense. I was kinda curious how the data
  provider differentiated between :a and casting like now()::text.
 
 
 Hi Josh!
 
 Npgsql uses the info found in the NpgsqlCommand.Parameters collection. We do
 check whether a parameter in the Parameters collection isn't found in the query
 string; the other way around isn't done yet. So you can safely put something
 like :a::text in the query and it will send the text 5::text, for example.
 
 I hope it helps.

Yes, that does help. Thanks.
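
For concreteness, a minimal sketch of what that substitution looks like on
the wire, assuming a hypothetical command text with a parameter named a
bound to the integer 5:

-- command text as written, with "a" added to NpgsqlCommand.Parameters
SELECT :a::text;

-- what Npgsql sends after substituting the parameter value; the cast is
-- left alone because only :a matches a declared parameter
SELECT 5::text;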

-Josh



Re: [PERFORM] slow queries, possibly disk io

2005-05-31 Thread Josh Close
I didn't see iostat available to install, but I'm using dstat to see this.

The server has constant disk reads averaging around 50M and quite a
few in the 60M range. This is when selects are being done, which is
almost always. I would think if postgres is grabbing everything from
memory that this wouldn't happen. This is why I think there must be
some way to allocate more mem to postgres.

There is 2 gigs of mem in this server. Here are my current settings.

max_connections = 100
shared_buffers = 50000
sort_mem = 4096
vacuum_mem = 32768
effective_cache_size = 450000

Shared buffers is set to 10% of total mem. Effective cache size is 90% of mem.
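
As a sanity check, both of those settings are counted in 8 kB buffers, so
the conversion to megabytes can be done right in psql (a rough sketch using
the values above):

SELECT 50000  * 8 / 1024 AS shared_buffers_mb,       -- about 390 MB
       450000 * 8 / 1024 AS effective_cache_size_mb; -- about 3515 MB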

Is there anything that can be done to have postgres grab more from
memory rather than disk?


On 5/30/05, Rudi Starcevic [EMAIL PROTECTED] wrote:
 Hi,
 
 I had some disk io issues recently with NFS, I found the command 'iostat
 -x 5' to be a great help when using Linux.
 
 For example here is the output when I do a 10GB file transfer onto hdc
 Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
 hdc          0.00 875.95  0.00 29.66    0.00 7244.89     0.00  3622.44   244.27     3.07  103.52   1.78   5.27
 
 The last field shows the disk is 5.27% busy.
 
 I have seen this value at 100%; adding more servers brought it under 100%.
 It seems that if you hit 100%, problems sort of cascade all over the
 place. For example, Apache connections went right up and hit their max.
 
 I am not sure how accurate the % is, but it has worked pretty well for me.
 
 Perhaps use this command in another window while you run your SQL and see
 what it shows.
 
 HTH.
 Kind regards,
 Rudi.
 
 


-- 
-Josh



Re: [PERFORM] slow queries, possibly disk io

2005-05-31 Thread Josh Close
On 5/31/05, Martin Fandel [EMAIL PROTECTED] wrote:
 In the documentation of
 http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
 is the shared_buffers set to 1/3 of the available RAM. You've set
 50000*8/1024 = 391 MB of SHMEM. The effective_cache_size in your
 configuration is 450000*8/1024 = 3516 MB. That's 3907 MB
 of RAM, but you have less than 2048 MB available.


I wrote that wrong; there are actually 4 gigs of RAM available.


 
 What value do you have in /proc/sys/kernel/shmmax?
 
 I'm really new at using postgres and I don't have much experience, but
 maybe you can try using 1/3 (682 MB / 87424) for shared_buffers and
 2/3 (1365 MB / 174720) for the effective_cache_size? But maybe these
 settings are too high, too.
 
 best regards
 Martin



Re: [PERFORM] slow queries, possibly disk io

2005-05-27 Thread Josh Close
On 5/26/05, Christopher Kings-Lynne [EMAIL PROTECTED] wrote:
  I have some queries that have significantly slowed down in the last
  couple of days. They've gone from 10 seconds to over 2 mins.
 
  The cpu has never gone over 35% in the server's lifetime, but the load
  average is over 8.0 right now. I'm assuming this is probably due to
  disk io.
 
 You sure it's not a severe lack of vacuuming that's the problem?
 

It's vacuumed hourly. If it needs to be more often than that, I guess I
could do it. But from everything I've been told, hourly should be enough.

-Josh



[PERFORM] slow queries, possibly disk io

2005-05-27 Thread Josh Close
 Few mandatory questions:

 1. Do you vacuum your db on regular basis? :)

It's vacuumed once every hour. The table sizes and data are constantly changing.


 2. Perhaps statistics for tables in question are out of date, did you
 try alter table set statistics?

No I haven't. What would that do for me?
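
For reference, raising a column's statistics target makes ANALYZE collect a
finer-grained histogram, which helps the planner estimate how many rows a
filter like the one in the function below will match. A hedged sketch, using
the table and column from that function (100 is just an illustrative target;
the 7.4 default is 10):

ALTER TABLE adaption.tblBatchHistory_1 ALTER COLUMN tStamp SET STATISTICS 100;
ANALYZE adaption.tblBatchHistory_1;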


 3. explain analyze of the slow query?

Here is the function that is run:

CREATE OR REPLACE FUNCTION adaption.funmsgspermin()
  RETURNS int4 AS
'
DECLARE
this_rServerIds  RECORD;
this_sQuery  TEXT;
this_iMsgsPerMin INT;
this_rNumSentRECORD;

BEGIN
this_iMsgsPerMin := 0;
FOR this_rServerIds IN
SELECT iId
FROM adaption.tblServers
LOOP
this_sQuery := \'
SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent
FROM adaption.tblBatchHistory_\' || this_rServerIds.iId || \'
        WHERE tStamp > now() - interval \'\'5 mins\'\';
\';
FOR this_rNumSent IN EXECUTE this_sQuery LOOP
this_iMsgsPerMin := this_iMsgsPerMin + this_rNumSent.iNumSent;
END LOOP;
END LOOP;

this_iMsgsPerMin := this_iMsgsPerMin / 5;

RETURN this_iMsgsPerMin;
END;
'
LANGUAGE 'plpgsql' VOLATILE;

Here is the explain analyze of one loop of the sum:

Aggregate  (cost=31038.04..31038.04 rows=1 width=4) (actual
time=14649.602..14649.604 rows=1 loops=1)
  ->  Seq Scan on tblbatchhistory_1  (cost=0.00..30907.03 rows=52401
width=4) (actual time=6339.223..14648.433 rows=919 loops=1)
        Filter: (tstamp > (now() - '00:05:00'::interval))
Total runtime: 14649.709 ms


 4. if you for some reason cannot give explain analyze, please try to
 describe the type of query (what kind of join(s)) and amount of data
 found in the tables.

 2 minutes from 10 seconds is a huge leap, and it may mean that
 PostgreSQL for some reason is not planning as well as it could.
 Throwing more RAM at the problem can help, but it would be better
 to hint the planner to do the right thing.  It may be a good time to
 play with planner variables. :)

Is there any documentation on planner vars? And how would I throw more
ram at it? It has 2 gigs right now. How do I know if postgres is using
that?
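
For what it's worth, the main planner knobs can be inspected and tweaked per
session from psql; a small sketch of the usual suspects (values shown are
whatever the server is currently configured with):

SHOW effective_cache_size;   -- how much OS cache the planner assumes exists
SHOW random_page_cost;       -- cost of a random page fetch vs. a sequential one
SET random_page_cost = 2;    -- experiment for this session only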

-Josh



Re: [PERFORM] slow queries, possibly disk io

2005-05-27 Thread Josh Close
 I think you really want that seqscan to be an indexscan, instead.
 I'm betting this is PG 7.4.something?  If so, probably the only
 way to make it happen is to simplify the now() expression to a constant:
 
 SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent
 FROM adaption.tblBatchHistory_\' || this_rServerIds.iId || \'
 WHERE tStamp > \\\'' || (now() - interval \'5 mins\')::text ||
 \'\\\'\';

The dollar sign thing would be a lot easier, but I can't get this to work.
I'm using a db manager where I can just use ' instead of \'. How would
it look for that? In other words, it doesn't have the create or
replace function as ' --stuff ' language 'plpgsql' wrapper, it just has the
actual function body. Makes things a little easier. I'm getting an error at
or near "5".
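
For what it's worth, with plain single quotes (as in a tool that supplies the
outer quoting for you, or inside an 8.0 dollar-quoted body) Tom's suggestion
comes down to something like the sketch below; quote_literal() takes care of
wrapping the timestamp text in quotes, and the rest of the function is
unchanged. This is an untested sketch and the $$ quoting needs 8.0:

CREATE OR REPLACE FUNCTION adaption.funMsgsPerMin()
  RETURNS int4 AS $$
DECLARE
    this_rServerIds  RECORD;
    this_sQuery      TEXT;
    this_iMsgsPerMin INT;
    this_rNumSent    RECORD;
BEGIN
    this_iMsgsPerMin := 0;
    FOR this_rServerIds IN
        SELECT iId FROM adaption.tblServers
    LOOP
        -- build the comparison value once, as a plain quoted literal, so the
        -- planner sees a constant and can consider an index on tStamp
        this_sQuery := 'SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent
            FROM adaption.tblBatchHistory_' || this_rServerIds.iId || '
            WHERE tStamp > ' || quote_literal( (now() - interval '5 mins')::text ) || ';';
        FOR this_rNumSent IN EXECUTE this_sQuery LOOP
            this_iMsgsPerMin := this_iMsgsPerMin + this_rNumSent.iNumSent;
        END LOOP;
    END LOOP;

    this_iMsgsPerMin := this_iMsgsPerMin / 5;
    RETURN this_iMsgsPerMin;
END;
$$ LANGUAGE plpgsql VOLATILE;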

 
 because pre-8.0 the planner won't realize that the inequality is
 selective enough to favor an indexscan, unless it's comparing to
 a simple constant.
 
 (BTW, 8.0's dollar quoting makes this sort of thing a lot less painful)
 
 regards, tom lane
 


-- 
-Josh



Re: [PERFORM] slow queries, possibly disk io

2005-05-27 Thread Josh Close
Doing the query

explain
SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent
FROM adaption.tblBatchHistory_1
WHERE tStamp > ( now() - interval '5 mins' )::text

gives me this:

Aggregate  (cost=32138.33..32138.33 rows=1 width=4)
  ->  Seq Scan on tblbatchhistory_1  (cost=0.00..31996.10 rows=56891 width=4)
        Filter: ((tstamp)::text > ((now() - '00:05:00'::interval))::text)

Still not an index scan.
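
The reason that test still seq scans is visible in the Filter line: with
( now() - interval '5 mins' )::text on the right-hand side, the comparison is
resolved as a text comparison, so the column itself gets cast to text and any
index on tStamp is useless. The point of Tom's rewrite is that by the time the
dynamic SQL is planned it should already contain a plain literal, roughly like
this hedged example (the timestamp is made up, and an index on tStamp is
assumed to exist):

SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent
FROM adaption.tblBatchHistory_1
WHERE tStamp > '2005-05-27 10:55:00';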

On 5/27/05, Tom Lane [EMAIL PROTECTED] wrote:
 Josh Close [EMAIL PROTECTED] writes:
  this_sQuery := \'
  SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent
  FROM adaption.tblBatchHistory_\' || this_rServerIds.iId || \'
   WHERE tStamp > now() - interval \'\'5 mins\'\';
  \';
 
  Here is the explain analyze of one loop of the sum:
 
  Aggregate  (cost=31038.04..31038.04 rows=1 width=4) (actual
  time=14649.602..14649.604 rows=1 loops=1)
    ->  Seq Scan on tblbatchhistory_1  (cost=0.00..30907.03 rows=52401
  width=4) (actual time=6339.223..14648.433 rows=919 loops=1)
  Filter: (tstamp > (now() - '00:05:00'::interval))
  Total runtime: 14649.709 ms
 
 I think you really want that seqscan to be an indexscan, instead.
 I'm betting this is PG 7.4.something?  If so, probably the only
 way to make it happen is to simplify the now() expression to a constant:
 
 SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent
 FROM adaption.tblBatchHistory_\' || this_rServerIds.iId || \'
  WHERE tStamp > \\\'' || (now() - interval \'5 mins\')::text ||
 \'\\\'\';
 
 because pre-8.0 the planner won't realize that the inequality is
 selective enough to favor an indexscan, unless it's comparing to
 a simple constant.
 
 (BTW, 8.0's dollar quoting makes this sort of thing a lot less painful)
 
 regards, tom lane
 


-- 
-Josh



[PERFORM] slow queries, possibly disk io

2005-05-26 Thread Josh Close
I have some queries that have significantly slowed down in the last
couple of days. They've gone from 10 seconds to over 2 mins.

The cpu has never gone over 35% in the server's lifetime, but the load
average is over 8.0 right now. I'm assuming this is probably due to
disk io.

I need some help setting up postgres so that it doesn't need to go to
disk. I think the shared_buffers and effective_cache_size values are
the ones I need to look at.

Would setting shmmax and shmall to 90% or so of available mem and
giving postgres a lot of it be helpful?

Effective cache size says this:
Sets the planner's assumption about the effective size of the disk
cache (that is, the portion of the kernel's disk cache that will be
used for PostgreSQL data files).

Does that mean the total available ram? Or what's left over from shared_buffers?

I've tried different things and not much has been working. Is there a
good way to ensure that most of the tables accessed in postgres will
be cached in mem and not have to go to disk?

If I'm joining a lot of tables, should the sort_mem be set high also?
Do shared_buffers, effective_cache_size, and sort_mem all share the
same memory, or are they separate?
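
For what it's worth, those three settings don't draw from one pool:
shared_buffers is real shared memory allocated at startup, sort_mem is
per-sort memory used inside each backend (so several sorts at once multiply
it), and effective_cache_size allocates nothing at all; it is only the
planner's estimate of how much the kernel is caching. The current values are
easy to check from psql:

SHOW shared_buffers;
SHOW sort_mem;
SHOW effective_cache_size;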

I've looked for information and haven't found any useful pages about this.

Any help would be greatly appreciated.

Thanks.

-Josh



Re: [PERFORM] how much mem to give postgres?

2004-10-20 Thread Josh Close
On Wed, 20 Oct 2004 00:35:31 -0400, Tom Lane [EMAIL PROTECTED] wrote:
 I suspect that fooling with shared_buffers is entirely the wrong tree
 for you to be barking up.  My suggestion is to be looking at individual
 queries that are slow, and seeing how to speed those up.  This might
 involve adding indexes, or tweaking the query source, or adjusting
 planner parameters, or several other things.  EXPLAIN ANALYZE is your
 friend ...
 
 regards, tom lane

Only problem is, a select count(1) is taking a long time. Indexes
shouldn't matter with this since it's counting every row, right? The
tables are fairly well indexed also, I could probably add a few more.
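
Indexes indeed don't help here: because of MVCC, PostgreSQL has to visit
every row to decide whether it is visible, so an unqualified count always
does a full scan. A hedged sketch of how to see where the time actually goes
(table name borrowed from the other thread purely as a placeholder):

EXPLAIN ANALYZE SELECT count(1) FROM adaption.tblBatchHistory_1;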

If shared_buffers isn't the way to go ( you said 10k is the sweet spot
), then what about the effective_cache_size? It was suggested on the
general list that I possibly set that to 75% of ram.

Thanks.

-Josh



Re: [PERFORM] how much mem to give postgres?

2004-10-20 Thread Josh Close
On Tue, 19 Oct 2004 22:23:24 -0700, Josh Berkus [EMAIL PROTECTED] wrote:
 There have been issues with Postgres+HT, especially on Linux 2.4.   Try
 turning HT off if other tuning doesn't solve things.
 
 Otherwise, see:
 http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html

How would I turn that off? In the kernel config? Not too familiar with
that. I have a 2 proc xeon with 4 gigs of mem on the way for postgres,
so I hope HT isn't a problem. If HT is turned off, does it just not
use the other half of the processor? Or does the processor just work
as one unit?

Also, I'm taking a look at that site right now :)

-Josh



[PERFORM] how much mem to give postgres?

2004-10-19 Thread Josh Close
I'm trying to figure out what I need to do to get my postgres server
moving faster. It's just crawling right now. It's on a p4 HT with 2
gigs of mem.

I was thinking I need to increase the amount of shared buffers, but
I've been told the sweet spot for shared_buffers is usually on the
order of 10,000 buffers. I already have it set at 21,078. If you have,
say 100 gigs of ram, are you supposed to still only give postgres
10,000?

Also, do I need to up the shmmax at all? I've used the formula 250 kB
+ 8.2 kB * shared_buffers + 14.2 kB * max_connections up to infinity
at http://www.postgresql.org/docs/7.4/interactive/kernel-resources.html#SYSVIPC
but it's never quite high enough, so I just make sure it's above the
amount that the postgres log says it needs.
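
As a rough worked example of that formula (assuming max_connections = 100, as
in the other thread, and shared_buffers = 21,078 as above), the arithmetic
comes out at a bit over 170 MB, and shmmax itself is specified in bytes:

SELECT 250 + 8.2 * 21078 + 14.2 * 100 AS approx_shmem_kb;
-- roughly 174,510 kB, i.e. about 179 million bytes for shmmax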

What else can I do to speed this server up? I'm running vacuum analyze
on the heavily updated/inserted/deleted db's once an hour, and doing a
full vacuum once a night. Should I change the vacuum mem setting at
all?

Are there any other settings I should be concerned with? I've heard
about the effective_cache_size setting, but I haven't seen anything on
what the size should be.

Any help would be great. This server is very very slow at the moment.

Also, I'm using a 2.6.8.1 kernel with high mem enabled, so all the ram
is recognized.

Thanks.

-Josh



Re: [PERFORM] how much mem to give postgres?

2004-10-19 Thread Josh Close
On Wed, 20 Oct 2004 01:33:16 +0100, Simon Riggs [EMAIL PROTECTED] wrote:
 and what version of PostgreSQL are you using? 8.0beta, I hope?

I'm using version 7.4.5.

  I was thinking I need to increase the amount of shared buffers, but
  I've been told the sweet spot for shared_buffers is usually on the
  order of 10,000 buffers. I already have it set at 21,078. If you have,
  say 100 gigs of ram, are you supposed to still only give postgres
  10,000?
 
 That's under test currently. My answer would be, clearly not; others
 differ, for varying reasons.

Should I stick to the rule of around 15% of mem then? I haven't found any
information on why you should use certain settings at all. I read
somewhere on the postgres site about using as much memory as possible,
but leaving a little room for other processes. Would that be an ok
theory? I'd kinda like to know why I should or shouldn't do something
like this.
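
Just to put a number on that rule of thumb: 15% of the 2 gigs in this box
works out to roughly 39,000 8 kB buffers, which is a quick calculation in
psql (a sketch, not a recommendation):

SELECT round(0.15 * 2 * 1024 * 1024 / 8) AS buffers_at_15_percent;
-- about 39,322 buffers, i.e. roughly 307 MB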

-Josh
