Re: [GENERAL] Declaring multidimensional arrays in pl/pgsql

2007-11-29 Thread Max Zorloff
On Thu, 29 Nov 2007 21:15:50 +0400, Rodrigo De León <[EMAIL PROTECTED]>  
wrote:



On Nov 29, 2007 9:33 AM, Max Zorloff <[EMAIL PROTECTED]> wrote:

I don't think that works.

ponline=# select ('{1,2,3}'::text)::int[];
ERROR:  cannot cast type text to integer[]


Can you try:

select ('{1,2,3}'::unknown)::int[];


Thanks, that works fine with 2-dim arrays too.
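For example, the same cast works for a two-dimensional literal (a minimal check; the value is illustrative):

select ('{{1,2,3},{4,5,6}}'::unknown)::int[][];
-- yields {{1,2,3},{4,5,6}}, a 2x3 integer array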



Re: [GENERAL] Declaring multidimensional arrays in pl/pgsql

2007-11-29 Thread Max Zorloff
On Thu, 29 Nov 2007 19:21:03 +0400, Pavel Stehule  
<[EMAIL PROTECTED]> wrote:



Hello

Arrays in PostgreSQL always have to be regular. And before 8.3 an array
cannot contain NULLs, so you cannot simply resize a two-dimensional array
:(. But your function doesn't work in 8.3 either. So you can

a) use a 1D array and access it like myarray[10*(n1-1)+n2]

b) init the array with a string, like

create or replace function testfunc()
returns void as $$
DECLARE
  myArray int[][];
BEGIN
  myArray :=  ('{'||array_to_string(array(select
'{0,0,0,0,0,0,0,0,0,0}'::text from
generate_series(1,10)),',')||'}')::int[][];
  FOR i IN 1..10 LOOP
FOR j IN 1..10 LOOP
  RAISE NOTICE '% %', i, j;
  myArray[i][j] := 1;
END LOOP;
  END LOOP;
  RETURN;
END
$$ language plpgsql;


Thanks for the info, but running the above gives me this:

ponline=# select testfunc();
ERROR:  cannot cast type text to integer[]
CONTEXT:  SQL statement "SELECT  ('{'||array_to_string(array(select
'{0,0,0,0,0,0,0,0,0,0}'::text from
generate_series(1,10)),',')||'}')::int[][]"
PL/pgSQL function "testfunc" line 4 at assignment

I think 8.1 does not have the text -> int[] cast available. I think I'm
stuck with option (a).
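For reference, a minimal sketch of option (a) on 8.1 (the function name is illustrative; a 1-D array grows as long as you assign adjacent indexes):

create or replace function testfunc1d()
returns void as $$
DECLARE
  myArray int[];
BEGIN
  -- emulate a 10x10 array: element (i,j) lives at index 10*(i-1)+j
  FOR i IN 1..10 LOOP
    FOR j IN 1..10 LOOP
      myArray[10*(i-1)+j] := 1;
    END LOOP;
  END LOOP;
  RETURN;
END
$$ language plpgsql;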




Re: [GENERAL] Declaring multidimensional arrays in pl/pgsql

2007-11-29 Thread Max Zorloff
On Thu, 29 Nov 2007 18:11:22 +0400, Rodrigo De León <[EMAIL PROTECTED]>  
wrote:



On Nov 29, 2007 3:34 AM, Max Zorloff <[EMAIL PROTECTED]> wrote:

According to the docs it seems that the only way would be to declare it
as something like:
myArray := ARRAY[[1,2], [3,4], [5,6]];


You can declare arbitrary-sized, n-dimensional arrays:
...
DECLARE
  myArray integer[][]; -- two-dimensional integer array
BEGIN
...
END;
...

See:
http://www.postgresql.org/docs/8.1/static/arrays.html


I can. But unfortunately:

create or replace function testfunc()
returns setof record as $$
DECLARE
  myArray int[][];
BEGIN
  FOR i IN 1..10 LOOP
    FOR j IN 1..10 LOOP
      RAISE NOTICE '% %', i, j;
      myArray[i][j] := 1;
    END LOOP;
  END LOOP;

  RETURN;
END
$$ language plpgsql;


ponline=# select testfunc();
NOTICE:  1 1
NOTICE:  1 2
ERROR:  invalid array subscripts
CONTEXT:  PL/pgSQL function "testfunc" line 7 at assignment

2-dimensional arrays do not grow like 1-dimensional ones do (it says so
in the docs); the initial array is 1x1. I suppose I'm stuck with
emulating 2-dim arrays through 1-dim arrays, because I also need them to
grow later.
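For what it's worth, growing the emulated array later is a one-liner on 8.1 (a sketch; assumes 10 columns per row, matching the index formula above):

-- append one more "row" of 10 zeroes to the 1-D array
myArray := myArray || '{0,0,0,0,0,0,0,0,0,0}'::int[];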



[GENERAL] Declaring multidimensional arrays in pl/pgsql

2007-11-29 Thread Max Zorloff

Hi, all.

I was wondering, can I really declare a 2-dimensional array of arbitrary
size in pl/pgsql? According to the docs it seems that the only way would
be to declare it as something like:

myArray := ARRAY[[1,2], [3,4], [5,6]];

But what if I pass the dimensions as function parameters?

My postgresql version is 8.1.

Thanks.



Re: [GENERAL] Connection pooling

2007-09-09 Thread Max Zorloff
On Sat, 08 Sep 2007 19:28:52 +0400, Scott Marlowe  
<[EMAIL PROTECTED]> wrote:



On 9/7/07, Max Zorloff <[EMAIL PROTECTED]> wrote:
On Fri, 07 Sep 2007 10:58:36 +0400, Marko Kreen <[EMAIL PROTECTED]>  
wrote:


>> The pgpool (I tried 3.1, 3.4 and pgpool-II 1.2) works fine but has the
>> following problem - after some time it just "hangs", and if I try to
>> connect to it with psql it just hangs indefinitely. After restart it
>> works fine again. I turned off persistent connections in php so it's
>> not that. Does anybody have the same problem?
>
> All symptoms point to the same problem - your app fails to
> release server connections for reuse.
>
> If the problem is that PHP fails to disconnect the connection even
> though the transaction is finished, you could run pgbouncer
> in a more relaxed mode - pool_mode=transaction.  Also, setting
> client_idle_timeout to something may help debugging.
>
> If the problem is uncommitted transactions, you could set
> query_timeout to some small number (1-3) to see where
> errors appear.
>
> Both timeouts are not something I would put into a production
> config, so the code should be fixed still...

pgbouncer does not have this problem, only pgpool does.
pgbouncer has the problem of being very slow.

I thought PHP released connections at the end of the script?
And also, if I had this problem, pgpool would hang within a few seconds,
because the server has some load.


It does if you're running without pg_pconnect (i.e. using regular
pg_connects) and if the script doesn't crash the apache/php backend
it's running in.

Are you using pg_pconnect and / or having crashing apache backends?


I specifically turn off pconnects, and I don't think I crash backends;
I never saw errors of that type.




Re: [GENERAL] Connection pooling

2007-09-07 Thread Max Zorloff

On Fri, 07 Sep 2007 10:58:36 +0400, Marko Kreen <[EMAIL PROTECTED]> wrote:


The pgpool (I tried 3.1, 3.4 and pgpool-II 1.2) works fine but has the
following problem - after some time it just "hangs", and if I try to
connect to it with psql it just hangs indefinitely. After restart it
works fine again. I turned off persistent connections in php so it's
not that. Does anybody have the same problem?


All symptoms point to the same problem - your app fails to
release server connections for reuse.

If the problem is that PHP fails to disconnect the connection even
though the transaction is finished, you could run pgbouncer
in a more relaxed mode - pool_mode=transaction.  Also, setting
client_idle_timeout to something may help debugging.



If the problem is uncommitted transactions, you could set
query_timeout to some small number (1-3) to see where
errors appear.

Both timeouts are not something I would put into a production
config, so the code should be fixed still...
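A minimal pgbouncer.ini sketch of that relaxed setup (values are illustrative, and both timeouts are debugging aids only, per the advice above):

[pgbouncer]
pool_mode = transaction
; seconds; queries running longer get cancelled - debugging only
query_timeout = 3
; seconds; idle client connections get closed - debugging only
client_idle_timeout = 60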


pgbouncer does not have this problem, only pgpool does.
pgbouncer has the problem of being very slow.

I thought PHP released connections at the end of the script?
And also, if I had this problem, pgpool would hang within a few seconds,
because the server has some load.



[GENERAL] Connection pooling

2007-09-06 Thread Max Zorloff
Hello. I'm using Apache + PHP + Postgres for my project. I've tried the
two poolers people usually recommend here - pgbouncer and pgpool.

I have a problem with pgbouncer - under load the query execution becomes
~10 times slower than it should be. Basically, to test it, I connect
with psql (establishing the connection becomes somewhat slow under load)
and use \timing to measure execution time. The basic query
"select * from aaa where id = 1" runs 10 times slower than it should.
If I connect with psql directly to postgres, the execution time is
acceptable. Does anyone know what the problem is?
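For reference, the test amounts to something like this (the pooler port is illustrative):

$ psql -h 127.0.0.1 -p 6432 mydb
mydb=# \timing
mydb=# select * from aaa where id = 1;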


The pgpool (I tried 3.1, 3.4 and pgpool-II 1.2) works fine but has the
following problem - after some time it just "hangs", and if I try to
connect to it with psql it just hangs indefinitely. After restart it
works fine again. I turned off persistent connections in php so it's
not that. Does anybody have the same problem?



Re: [GENERAL] Shared memory usage

2007-08-25 Thread Max Zorloff
On Sun, 26 Aug 2007 00:39:52 +0400, Martijn van Oosterhout  
<[EMAIL PROTECTED]> wrote:



On Sun, Aug 26, 2007 at 01:22:58AM +0400, Max Zorloff wrote:

Hello.


shared_memory is used for caching. It is filled as stuff is used. If
you're not using all of it, that means it isn't needed. Remember, it is
not the only cache. Since your database is only 400MB, it will fit
entirely inside the OS disk cache, so you really don't need much shared
memory at all.

Loading stuff into memory for the hell of it is a waste; let the system
manage the memory itself. If it needs it, it'll use it.



Where do I find my OS disk cache settings? I'm using Linux.



Re: [GENERAL] Shared memory usage

2007-08-25 Thread Max Zorloff
On Sun, 26 Aug 2007 00:39:52 +0400, Martijn van Oosterhout  
<[EMAIL PROTECTED]> wrote:



On Sun, Aug 26, 2007 at 01:22:58AM +0400, Max Zorloff wrote:

Hello.

I have a postgres 8.0 and ~400mb database with lots of simple selects
using indexes. I've installed pgpool on the system. I've set
num_init_children to 5 and here is the top output. One of the
postmasters is my daemon running some insert/update tasks. I see that
they all use the cpu heavily, but do not use the shared memory.
shared_buffers is set to 60000, yet they use a minimal part of that.
I'd like to know why they won't use more. All the indexes and half of
the database should be in shared memory, shouldn't they? Or am I
completely missing what shared_buffers is for? If so, then how do I put
my indexes and at least a part of the data into memory?


shared_memory is used for caching. It is filled as stuff is used. If
you're not using all of it, that means it isn't needed. Remember, it is
not the only cache. Since your database is only 400MB, it will fit
entirely inside the OS disk cache, so you really don't need much shared
memory at all.

Loading stuff into memory for the hell of it is a waste; let the system
manage the memory itself. If it needs it, it'll use it.

Have a nice day,


Could it be that most of the cpu usage is from lots of fast indexed SQL
queries wrapped in SQL functions?



[GENERAL] Shared memory usage

2007-08-25 Thread Max Zorloff

Hello.

I have a postgres 8.0 and ~400mb database with lots of simple selects
using indexes. I've installed pgpool on the system. I've set
num_init_children to 5 and here is the top output. One of the
postmasters is my daemon running some insert/update tasks. I see that
they all use the cpu heavily, but do not use the shared memory.
shared_buffers is set to 60000, yet they use a minimal part of that.
I'd like to know why they won't use more. All the indexes and half of
the database should be in shared memory, shouldn't they? Or am I
completely missing what shared_buffers is for? If so, then how do I put
my indexes and at least a part of the data into memory?


top - 00:12:35 up 50 days, 13:22,  8 users,  load average: 4.84, 9.71, 13.22
Tasks: 279 total,  10 running, 268 sleeping,   1 stopped,   0 zombie
Cpu(s): 50.0% us, 12.9% sy,  0.0% ni, 33.2% id,  1.8% wa,  0.0% hi,  2.1% si
Mem:   6102304k total,  4206948k used,  1895356k free,   159436k buffers
Swap:  1959888k total,    12304k used,  1947584k free,  2919816k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
11492 postgres  16   0  530m  72m  60m S   14  1.2   0:50.91 postmaster
11493 postgres  16   0  531m  72m  60m R   14  1.2   0:48.78 postmaster
11490 postgres  15   0  530m  71m  59m S   13  1.2   0:50.26 postmaster
11491 postgres  15   0  531m  75m  62m S   11  1.3   0:50.67 postmaster
11495 postgres  16   0  530m  71m  59m R   10  1.2   0:50.71 postmaster
10195 postgres  15   0  536m  84m  66m S    6  1.4   1:11.72 postmaster

postgresql.conf:

shared_buffers = 60000
work_mem = 2048
maintenance_work_mem = 256000

The rest are basically default values
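(For scale, assuming shared_buffers of 60000 and the default 8 kB block
size: 60000 x 8 kB is roughly 470 MB of shared memory, more than the
entire ~400 MB database.)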

Thank you in advance.



Re: [GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Max Zorloff
On Thu, 23 Aug 2007 21:29:46 +0400, Bill Moran <[EMAIL PROTECTED]>  
wrote:



In response to "Joshua D. Drake" <[EMAIL PROTECTED]>:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Max Zorloff wrote:
> Hello.
>
> I have a subject setup and a few questions.
>
> The first one is this. PHP establishes a connection to the Postgres
> database through pg_pconnect().

Don't use pconnect. Use pgbouncer or pgpool.

> Then it runs some query, then the script returns, leaving the
> persistent connection hanging. But the trouble is that in this case
> any query takes significantly more time to execute than in the case of
> one PHP script running the same query with different parameters N
> times. How can I achieve the same performance in the first case?
> Persistent connections help but not enough - the queries are still 10
> times slower than they would be the 2nd time.

Well, you haven't given us any indication of the data set or what you
are trying to do. However, I can tell you: don't use pconnect, it's
broke ;)


Broke?  How do you figure?

I'm not trying to argue the advantages of a connection pooler such as
pgpool, but, in my tests, pconnect() does exactly what it's supposed
to do: reuse existing connections.  In our tests, we saw a 2x speed
improvement over connect().  Again, I understand that pgpool will do
even better ...

Also, I'm curious as to whether he's timing the actual _query_ or the
entire script execution.  If you're running a script multiple times
to get multiple queries, most of your time is going to be tied up in
PHP's parsing and startup -- unless I misunderstood the question.



I'm timing it with the php gettimeofday(). And I'm timing the actual
pg_query() run time, excluding db connection and everything else.



Re: [GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Max Zorloff
On Thu, 23 Aug 2007 21:16:48 +0400, Joshua D. Drake <[EMAIL PROTECTED]>  
wrote:



-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Max Zorloff wrote:

Hello.

I have a subject setup and a few questions.

The first one is this. PHP establishes a connection to the Postgres
database through pg_pconnect().


Don't use pconnect. Use pgbouncer or pgpool.


Then it runs some query, then the script returns, leaving the persistent
connection hanging. But the trouble is that in this case any query takes
significantly more time to execute than in the case of one PHP script
running the same query with different parameters N times. How can I
achieve the same performance in the first case? Persistent connections
help, but not enough - the queries are still 10 times slower than they
would be the 2nd time.


Well, you haven't given us any indication of the data set or what you
are trying to do. However, I can tell you: don't use pconnect, it's
broke ;)


The data set is a ~400mb database with ~100 SELECT queries running per
second, and some 7-10 pl/pgsql functions doing select checks and then
2-3 insert/updates.




The second one is that the machine with this setup is a dual core Xeon
2.8GHz. I've read somewhere about the context switching problem and bad
postgres performance. What are the effects? What are the symptoms?


You likely do not have this problem if you are running anywhere near a
current PostgreSQL release but you can check it with vmstat.


I have postgres 8.0.13. How do I check that with vmstat?
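For reference, a minimal check (the cs column counts context switches per second; the classic storm shows sustained values in the tens of thousands or more, together with high sy time):

vmstat 1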


And what will be the performance gain if I change the machine to an
equivalent Athlon?


Depends on the work load.


Right now 100 concurrent users completely saturate the cpu, so I'm
trying to find out where the problem lies.




[GENERAL] Apache + PHP + Postgres Interaction

2007-08-23 Thread Max Zorloff

Hello.

I have a subject setup and a few questions.

The first one is this. PHP establishes a connection to the Postgres
database through pg_pconnect(). Then it runs some query, then the script
returns, leaving the persistent connection hanging. But the trouble is
that in this case any query takes significantly more time to execute
than in the case of one PHP script running the same query with different
parameters N times. How can I achieve the same performance in the first
case? Persistent connections help, but not enough - the queries are
still 10 times slower than they would be the 2nd time.


The second one is that the machine with this setup is a dual core Xeon
2.8GHz. I've read somewhere about the context switching problem and bad
postgres performance. What are the effects? What are the symptoms? And
what will be the performance gain if I change the machine to an
equivalent Athlon?


Thank you in advance.



Re: [GENERAL] CPU load high

2007-08-23 Thread Max Zorloff
On Thu, 23 Aug 2007 12:24:32 +0400, Hannes Dorbath  
<[EMAIL PROTECTED]> wrote:



On 23.08.2007 11:04, Max Zorloff wrote:

When one postgres process waits for lock to release does it use any cpu?
And also, when apache waits for query to finish, does it use cpu?


No, but are you sure what you see is not i/o wait? What values does top
display in the %wa column of the CPU rows? What does iostat -dm 1 say
under load?





Well, vmstat 1 shows this with 64 users (the last column is that same wa):

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  0  12336 289880 331224 3473804    0    4     8  2591    0    31 13 54  3
13  0  12336 288012 331224 3473872    0    0     0   288 1054  3237 59 17 24  0
 3  0  12336 284044 331224 3473872    0    0     0   480  908  3922 71 18 11  0
 4  0  12336 291500 331224 3473872    0    0     0   248  654  2913 63 13 23  0
 6  0  12336 297220 331224 3473940    0    0     0   240  678  3232 44 12 44  0
 6  0  12336 304312 331224 3473940    0    0     0  1708 1166  3303 50 17 17 16
 9  0  12336 304080 331224 3473940    0    0     0   480  779  4856 61 13 25  0
10  0  12336 309172 331224 3474008    0    0     0   304  697  3094 62 16 21  0
 2  0  12336 308180 331224 3474008    0    0     0   272  681  3370 56 12 32  0
 0  0  12336 307684 331224 3474076    0    0     0   112  689  3212 44 11 44  0
 0  1  12336 312280 331224 3474076    0    0     0  1472  863  3121 51 13 29  7
 7  0  12336 310544 331224 3474076    0    0     0   916 1023  3383 59 14 18  9
 3  0  12336 309428 331224 3474076    0    0     0   224  731  2974 55 14 30  0
 6  0  12336 306444 331224 3474144    0    0     0   392  796  3513 60 14 25  0



Re: [GENERAL] CPU load high

2007-08-23 Thread Max Zorloff

On Thu, 23 Aug 2007 08:29:03 +0400, Tom Lane <[EMAIL PROTECTED]> wrote:


"Max Zorloff" <[EMAIL PROTECTED]> writes:

... The problem is that after the number of concurrent users rises to
100, CPU becomes almost 100% loaded. How do I find out what's hogging
the CPU?



'top' shows the daemon using 8% cpu on top, and some number of postgres
processes each using 2% cpu, with some apache processes occasionally
rising to 2% cpu also. Often the writer process is at the top using 10%
cpu.


IOW there's nothing particular hogging the CPU?  Maybe you need more
hardware than you've got, or maybe you could fix it by trying to
optimize your most common queries.  It doesn't sound like there'll be
any quick single-point fix though.


There's no one big process chugging everything, yes, but all these 2%
postgres processes look like they're having their hand in the overall
cpu consumption. I looked through every query and they all use indexes
and, what's more, return 1-20 rows at most. Yes, I think there won't be
any single fix, but I wanted to know: are there some tools or techniques
for finding where the problem lies?

I've looked into query time statistics - they all grow with cpu usage,
but it doesn't really mean anything - cpu usage grows, queries get
slower.

When one postgres process waits for a lock to release, does it use any
cpu? And also, when apache waits for a query to finish, does it use cpu?




[GENERAL] CPU load high

2007-08-22 Thread Max Zorloff

Hello.

I have a web-server with php 5.2 connected to a postgres 8.0 backend.
Most of the queries the users are doing are SELECTs (100-150 per second
for 100 concurrent users), with 5-10 INSERTs/UPDATEs at the same time.
There is also a daemon running in the background doing some work once
every 100ms. The problem is that after the number of concurrent users
rises to 100, the CPU becomes almost 100% loaded. How do I find out
what's hogging the CPU?

'top' shows the daemon using 8% cpu on top, and some number of postgres
processes each using 2% cpu, with some apache processes occasionally
rising to 2% cpu also. Often the writer process is at the top using 10%
cpu.


And the second question: over time the daemon and writer processes use
more and more shared memory - is that normal?


Thanks in advance.
