Re: [GENERAL] ERROR: table row type and query-specified row type do not match

2007-08-31 Thread Patryk Kordylewski

Patryk Kordylewski wrote:

Hi Tom,

we are running PostgreSQL 8.2.4.

Tom Lane wrote:

Patryk Kordylewski [EMAIL PROTECTED] writes:
can someone help/explain why I get this error when I try to make
an update on that view? It seems that something is wrong with the
subselect.

http://fooby.imosnet.de/error.txt


AFAICT, this works for me, so either you left something important out of
your example or you're looking at an already-fixed bug.  What PG version
is that?

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq



I made a test case which is not working for me. Perhaps something is
wrong with the SELECT DISTINCT ON () syntax when I specify more than
1 column?


At first this worked when I specified only one column in the SELECT
DISTINCT ON clause; then I inserted a dummy column second_distinct,
put it into the clause, and it stopped working.


PG Version: 8.2.4

http://fooby.imosnet.de/not_working.txt

Thanks for help,
Patryk

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Removing pollution from log files

2007-08-31 Thread Rainer Bauer
Andrus wrote:

 Apart from that: there was a bug in the ODBC driver prior to 08.02.0402 which
 resulted in this error message whenever binary data of type
 SQL_LONGVARBINARY
 was sent.

Where is the 0402 driver?

The snapshot drivers can be found here:
http://www.geocities.jp/inocchichichi/psqlodbc/index.html

Rainer

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] ERROR: table row type and query-specified row type do not match

2007-08-31 Thread Patryk Kordylewski

Patryk Kordylewski wrote:
I made a test case which is not working for me. Perhaps something is
wrong with the SELECT DISTINCT ON () syntax when I specify more than
1 column?


At first this worked when I specified only one column in the SELECT
DISTINCT ON clause; then I inserted a dummy column second_distinct,
put it into the clause, and it stopped working.


PG Version: 8.2.4

http://fooby.imosnet.de/not_working.txt

Thanks for help,
Patryk


I think I found the problem and made 2 SQL files to reproduce it. The
only difference between these 2 files is the ORDER BY clause of the view...


http://fooby.imosnet.de/order_by_working.txt
http://fooby.imosnet.de/order_by_not_working.txt

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] E_BAD_ACCESS with palloc/pfree in base type

2007-08-31 Thread Gregory Stark
Michael Glaesemann [EMAIL PROTECTED] writes:

 char * theDigits = malloc(sizeof(XTypeMaximumStringLength));
...
 char * subscriberNumber = malloc(XTypeMaximumStringLength);

One of those two is wrong, I suspect the first one. I wonder how you define
XTypeMaximumStringLength, as sizeof(integer constant) ought to give a
compile-time error.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Query the catalog

2007-08-31 Thread Ottó Havasvölgyi
Hi,

How can I query the following information from the catalog:
 - max number of index columns;
 - max number of function arguments;

As far as I know these are compile-time constants, default is 32.

Thanks,
Otto

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Out of memory error, FreeBSD 6.2, PostgreSQL 8.2.4

2007-08-31 Thread Joost Kraaijeveld
Hi,

I have a query that has run fine on 3 other *identical* machines
(hardware, software, postgresql.conf identical, just other data in the
database) but that gives me an out of memory error every time I try it
(see below).

Anyone any idea of where or how to look for the problem or the
solution? 

From the logfile:


TopMemoryContext: 81920 total in 9 blocks; 8856 free (12 chunks); 73064 used
SPI Plan: 39936 total in 7 blocks; 7808 free (2 chunks); 32128 used
Type information cache: 8192 total in 1 blocks; 1800 free (0 chunks); 6392 used
SPI Plan: 1024 total in 1 blocks; 192 free (0 chunks); 832 used
SPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
SPI Plan: 3072 total in 2 blocks; 1152 free (0 chunks); 1920 used
SPI Plan: 1024 total in 1 blocks; 240 free (0 chunks); 784 used
SPI Plan: 1024 total in 1 blocks; 240 free (0 chunks); 784 used
SPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
PL/PgSQL function context: 24576 total in 2 blocks; 15192 free (11 chunks); 
9384 used
CFuncHash: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
Rendezvous variable hash: 8192 total in 1 blocks; 3848 free (0 chunks); 4344 
used
PLpgSQL function cache: 24596 total in 2 blocks; 5904 free (0 chunks); 18692 
used
TopTransactionContext: 8192 total in 1 blocks; 6792 free (0 chunks); 1400 used
ExecutorState: 8192 total in 1 blocks; 7784 free (0 chunks); 408 used
ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
SPI Exec: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
SPI Proc: 8192 total in 1 blocks; 7640 free (2 chunks); 552 used
SPI TupTable: 8192 total in 1 blocks; 6584 free (0 chunks); 1608 used
Operator class cache: 8192 total in 1 blocks; 4872 free (0 chunks); 3320 used
MessageContext: 40960 total in 3 blocks; 21448 free (11 chunks); 19512 used
smgr relation table: 24576 total in 2 blocks; 16080 free (4 chunks); 8496 used
TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0 chunks); 16 used
Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used
PortalMemory: 8192 total in 1 blocks; 7904 free (0 chunks); 288 used
PortalHeapMemory: 31744 total in 6 blocks; 632 free (0 chunks); 31112 used
ExecutorState: 139376 total in 6 blocks; 59800 free (9 chunks); 79576 used
HashTableContext: 24576 total in 2 blocks; 16336 free (9 chunks); 8240 used
HashBatchContext: 533741652 total in 76 blocks; 1376 free (74 chunks); 
533740276 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 32768 total in 2 blocks; 13808 free (1 chunks); 18960 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 65536 total in 3 blocks; 21648 free (2 chunks); 43888 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 131072 total in 4 blocks; 7824 free (4 chunks); 123248 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 16384 total in 1 blocks; 4808 free (0 chunks); 11576 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: 524288 total in 6 blocks; 163376 free (6 chunks); 360912 used
TupleSort: 24600 total in 2 blocks; 6960 free (8 chunks); 17640 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
AggContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
PortalHeapMemory: 1024 total in 1 blocks; 880 free (0 chunks); 144 used
ExecutorState: 8192 total in 1 blocks; 6048 free (1 chunks); 2144 used
ExprContext: 8192 total in 1 blocks; 8176 free (6 chunks); 16 used
Relcache by OID: 8192 total in 1 blocks; 2336 free (0 chunks); 5856 used
CacheMemoryContext: 659000 total in 19 blocks; 18368 free (1 chunks); 640632 
used
idx_components_status: 1024 total in 1 blocks; 392 free (0 chunks); 632 used
idx_components7: 1024 total in 1 

[GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Phoenix Kiula
I am getting this message when I start the DB:


psql: FATAL:  could not access status of transaction 0
DETAIL:  Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.


What is this about and how do I solve this? A df -h on my system shows this:


Filesystem            Type   Size  Used Avail Use% Mounted on
/dev/sda5             ext3   9.9G  2.5G  6.9G  27% /
/dev/sda1             ext3    99M   17M   78M  18% /boot
none                  tmpfs  2.0G     0  2.0G   0% /dev/shm
/dev/sda7             ext3   197G   17G  171G   9% /home
/dev/sda8             ext3  1012M   34M  927M   4% /tmp
/dev/sda3             ext3   9.9G  4.4G  5.0G  47% /usr
/dev/sda2             ext3   9.9G  9.5G     0 100% /var
/tmp                  none  1012M   34M  927M   4% /var/tmp


Please help!

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Tommy Gildseth

Phoenix Kiula wrote:

I am getting this message when I start the DB:


psql: FATAL:  could not access status of transaction 0
DETAIL:  Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.


What is this about and how do I solve this? A df -h on my system shows this:

/dev/sda2             ext3   9.9G  9.5G     0 100% /var


Well, the error message is pretty clear, and assuming you don't keep
your database in any non-standard location, your /var partition is indeed
full.


--
Tommy Gildseth


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] PostgreSQL with Kerberos and Active Directory

2007-08-31 Thread Magnus Hagander
Ok. I'd try locally from the machine first, so you know the krb
configurations are absolutely identical all the way. Just change your
pg_hba so it uses krb5 (and don't forget to use -h; krb5 only works over
TCP/IP sockets).
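
A hedged example of what that pg_hba.conf line could look like (the
address range and scope are illustrative, not a recommendation):

# TYPE  DATABASE  USER  CIDR-ADDRESS      METHOD
host    all       all   192.168.0.0/24    krb5

Then connect over TCP/IP, e.g. psql -h yourserver yourdb.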

That said, I think your problem is that you use postgres as your SPN.
It has to be uppercase POSTGRES to work with Active Directory.

//Magnus


On Thu, Aug 30, 2007 at 03:34:18PM +0300, Idan Miller wrote:
 We tried to connect from a different gentoo machine.
 both client and server are running version 8.2.4 of postgresql.
 right now, we are trying to connect from gentoo, but we want to connect from
 windows as well
 
 Idan
 
 
 On 8/30/07, Magnus Hagander [EMAIL PROTECTED] wrote:
 
  On Thu, Aug 30, 2007 at 02:07:13PM +0300, Idan Miller wrote:
   Hi everyone,
  
   I'm trying to configure PostgreSQL version 8.2.4 with Kerberos and
  Active
   Directory.
   The AD is run on a windows 2003 server, and the postgre on gentoo.
   The gentoo computer name is postgre and it's added to the windows 2003
   server AD domain.
  
   I did the following:
   - I compiled postgre with kerberos support and installed it on the
  gentoo
   machine.
   - I created a keytab for the user postgres/postgre on the windows 2003
   server machine and copied it to the gentoo machine.
   - I configured the postgresql.conf to point to the keytab.
   - I configured pg_hba.conf to authenticate remote users by kerberos.
   - I followed additional configurations from the howto in the mailing
  list
   archives.
  
   Now, when trying to log in with an AD user to postgre I get:
   psql: krb5_sendauth: Bad application version was sent (via sendauth)
  
   Any help will be appreciated.
 
  Are you sure you have postgresql 8.2 on both ends of the connection? Are
  your clients on Windows or Unix?
 
  //Magnus
 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Bruce McAlister
Looks like you're out of disk space on:

/dev/sda2             ext3   9.9G  9.5G     0 100% /var


is this where your database resides?



Phoenix Kiula wrote:
 I am getting this message when I start the DB:


 psql: FATAL:  could not access status of transaction 0
 DETAIL:  Could not write to file pg_subtrans/01F8 at offset 221184:
 No space left on device.


 What is this about and how do I solve this? A df -h on my system shows this:


 Filesystem            Type   Size  Used Avail Use% Mounted on
 /dev/sda5             ext3   9.9G  2.5G  6.9G  27% /
 /dev/sda1             ext3    99M   17M   78M  18% /boot
 none                  tmpfs  2.0G     0  2.0G   0% /dev/shm
 /dev/sda7             ext3   197G   17G  171G   9% /home
 /dev/sda8             ext3  1012M   34M  927M   4% /tmp
 /dev/sda3             ext3   9.9G  4.4G  5.0G  47% /usr
 /dev/sda2             ext3   9.9G  9.5G     0 100% /var
 /tmp                  none  1012M   34M  927M   4% /var/tmp


 Please help!

 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster


   

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Fwd: [GENERAL] Query the catalog

2007-08-31 Thread Sibte Abbas
-- Forwarded message --
From: Sibte Abbas [EMAIL PROTECTED]
Date: Aug 31, 2007 8:32 AM
Subject: Re: [GENERAL] Query the catalog
To: Ottó Havasvölgyi [EMAIL PROTECTED]

On 8/31/07, Ottó Havasvölgyi [EMAIL PROTECTED] wrote:

 Hi,

 How can I query the following information from the catalog:


There are GUC variables exposing both of these compile-time constants:


- max number of index columns;


show max_index_keys;

- max number of function arguments;


show max_function_args;


As far as I know these are compile-time constants, default is 32.

 Thanks,
 Otto
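
A hedged aside: the same values can also be read from plain SQL (not
just psql) via current_setting(), e.g.:

SELECT current_setting('max_index_keys')::int    AS max_index_keys,
       current_setting('max_function_args')::int AS max_function_args;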



regards,
-- Sibte


[GENERAL] Obtaining random rows from a result set

2007-08-31 Thread Alban Hertroys
Hello,

I've recently been busy improving a query that yields a fixed number of
random records matching certain conditions. I have tried all the usual
approaches, and although they do work, they're all limited in some way
and don't translate really well to what you want. They're kludges, IMHO.

The methods I've tried are explained quite well on
http://people.planetpostgresql.org/greg/index.php?/archives/40-Getting-random-rows-from-a-database-table.html

All these methods involve calculating a random number for every record
in the result set at some point in time, which is really not what I'm
trying to model. I think the database should provide some means to get
those records, so...

Dear Santa,

I'd like my database to have functionality analogous to how LIMIT works,
but for other - non-sequential - algorithms.

I was thinking along the lines of:

SELECT *
  FROM table
 WHERE condition = true
 RANDOM 5;

Which would return (up to) 5 random rows from the result set, just as
LIMIT 5 returns (up to) the first 5 records in the result set.


Or maybe even with a custom function, so that you could get non-linear
distributions:

SELECT *
  FROM table
 WHERE condition = true
 LIMIT 5 USING my_func();

Where my_func() could be a user-definable function accepting a number
that should be (an estimate of?) the number of results being returned,
so that it can provide pointers to which rows in the result set will be
returned from the query.

Examples:
* random(maxrows) would return random rows from the result set.
* median() would return the rows in the middle of the result set (this
would require ordering to be meaningful).

What do people think, is this feasible? Desirable? Necessary?

If I'd have time I'd volunteer for at least looking into this, but I'm
working on three projects simultaneously already. Alas...

Regards,
Alban Hertroys.

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Merlin Moncure
On 8/31/07, Phoenix Kiula [EMAIL PROTECTED] wrote:
 I am getting this message when I start the DB:


 psql: FATAL:  could not access status of transaction 0
 DETAIL:  Could not write to file pg_subtrans/01F8 at offset 221184:
 No space left on device.


 What is this about and how do I solve this? A df -h on my system shows this:

You should take the database down if it is not already and immediately
take a file system backup of the database and move it to a secure
location.  You may temporarily make some space by symlinking database
folders to partitions that have space (/home)...such as pg_xlog, or
folders inside the database proper.

After having freed up at least a few hundred MB of space, start the
database and make sure it comes up properly.  If it does, take a
proper backup and investigate a long-term solution to the storage
problem...buy a drive :-)

merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Josh Tolley
On 8/31/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
 Phoenix Kiula írta:
  I am getting this message when I start the DB:
 
 
  psql: FATAL:  could not access status of transaction 0
  DETAIL:  Could not write to file pg_subtrans/01F8 at offset 221184:
  No space left on device.
 
 
  What is this about and how do I solve this? A df -h on my system shows 
  this:
 
 
  Filesystem            Type   Size  Used Avail Use% Mounted on
  ...
  /dev/sda2             ext3   9.9G  9.5G     0 100% /var
 

In addition to what others have already said, when things calm down
you should consider implementing some sort of monitoring system that
is configured to start screaming before you run into problems like
this. At my place of work, we've set up Nagios to monitor the space
left on various partitions, and email us when a partition gets above
90% full.

- eggyknap

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Zoltan Boszormenyi

Phoenix Kiula írta:

I am getting this message when I start the DB:


psql: FATAL:  could not access status of transaction 0
DETAIL:  Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.


What is this about and how do I solve this? A df -h on my system shows this:


Filesystem            Type   Size  Used Avail Use% Mounted on
...
/dev/sda2             ext3   9.9G  9.5G     0 100% /var


This is the problem. Free up some space under /var or move either
the whole partition or PostgreSQL's data directory to a new disk.
The data directory lives under /var/lib/postgresql (mainstream) or
/var/lib/pgsql (RedHat speciality).

--
--
Zoltán Böszörményi
Cybertec Geschwinde & Schönig GmbH
http://www.postgresql.at/



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Obtaining random rows from a result set

2007-08-31 Thread Josh Tolley
On 8/31/07, Alban Hertroys [EMAIL PROTECTED] wrote:
 Hello,

 I've recently been busy improving a query that yields a fixed number of
 random records matching certain conditions. I have tried all the usual
 approaches, and although they do work, they're all limited in some way
 and don't translate really well to what you want. They're kludges, IMHO.

 The methods I've tried are explained quite well on
 http://people.planetpostgresql.org/greg/index.php?/archives/40-Getting-random-rows-from-a-database-table.html

 All these methods involve calculating a random number for every record
 in the result set at some point in time, which is really not what I'm
 trying to model. I think the database should provide some means to get
 those records, so...

 Dear Santa,

 I'd like my database to have functionality analogous to how LIMIT works,
 but for other - non-sequential - algorithms.

 I was thinking along the lines of:

 SELECT *
   FROM table
  WHERE condition = true
  RANDOM 5;

 Which would return (up to) 5 random rows from the result set, just as
 LIMIT 5 returns (up to) the first 5 records in the result set.


 Or maybe even with a custom function, so that you could get non-linear
 distributions:

 SELECT *
   FROM table
  WHERE condition = true
  LIMIT 5 USING my_func();

 Where my_func() could be a user-definable function accepting a number
 that should be (an estimate of?) the number of results being returned,
 so that it can provide pointers to which rows in the result set will be
 returned from the query.

 Examples:
 * random(maxrows) would return random rows from the result set.
 * median() would return the rows in the middle of the result set (this
 would require ordering to be meaningful).

 What do people think, is this feasible? Desirable? Necessary?

 If I'd have time I'd volunteer for at least looking into this, but I'm
 working on three projects simultaneously already. Alas...

 Regards,
 Alban Hertroys.

 --
 Alban Hertroys
 [EMAIL PROTECTED]

 magproductions b.v.

 T: ++31(0)534346874
 F: ++31(0)534346876
 M:
 I: www.magproductions.nl
 A: Postbus 416
7500 AK Enschede

 // Integrate Your World //

 ---(end of broadcast)---
 TIP 6: explain analyze is your friend


It seems to me that anything that wants to return a random set of rows
will need to calculate a random number for all the rows it processes,
unless you change how the database scans rows in indexes or tables,
which if at all possible will probably make things *really* slow. If
it's a given that the database will always sequentially scan whatever
it is the query plan tells it to scan, you're pretty much stuck with
the rows in your result set being in the same order unless you start
picking random numbers.

One possible alternative not mentioned on the site you linked to is
as follows:

select [whatever] from [table] where random() < [some number between 0
and 1] limit [limit value]

That doesn't require assigning a random number for *every* row in the
table, nor does it require sorting everything. It does mean that
numbers encountered earlier in the query processing have a higher
likelihood of being returned, and it also means that there's some
chance you won't actually get as many as [limit value] rows returned.

jtolley=# create table a (i integer);
CREATE TABLE
jtolley=# insert into a (i) select * from generate_series(1, 100);
INSERT 0 100
jtolley=# select * from a where random() < .1 limit 3;
 i

 22
 23
 25
(3 rows)

Hope this helps...

-Josh

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Phoenix Kiula
On 31/08/2007, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
 Phoenix Kiula írta:
  I am getting this message when I start the DB:
 
 
  psql: FATAL:  could not access status of transaction 0
  DETAIL:  Could not write to file pg_subtrans/01F8 at offset 221184:
  No space left on device.
 
 
  What is this about and how do I solve this? A df -h on my system shows 
  this:
 
 
  Filesystem            Type   Size  Used Avail Use% Mounted on
  ...
  /dev/sda2             ext3   9.9G  9.5G     0 100% /var
 

 This is the problem. Free up some space under /var or move either
 the whole partition or PostgreSQL's data directory to a new disk.
 The data directory lives under /var/lib/postgresql (mainstream) or
 /var/lib/pgsql (RedHat speciality).




Thanks everyone. Yes, /var was full because of the backups that're going there.

Database is back working.

It was my backup script. It is set to save a daily backup to the /var/
folder, which is not clever. I'll change it to use the backup folder,
which is a separate mount.

On that note, is it recommended to store the data of the database on a
different hard disk than the one on which the database is running? How
can I change the data folder for a live database?

Many thanks!

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Obtaining random rows from a result set

2007-08-31 Thread Albe Laurenz
Alban Hertroys wrote:
 I've recently been busy improving a query that yields a fixed
 number of random records matching certain conditions.

 Dear Santa,
 
 I'd like my database to have functionality analogous to how
 LIMIT works,
 but for other - non-sequential - algorithms.
 
 I was thinking along the lines of:
 
   SELECT *
 FROM table
WHERE condition = true
RANDOM 5;

Ho, ho, ho.

SELECT *
FROM table
WHERE condition = true
ORDER BY hashfloat8(random())
LIMIT 5;
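
A hedged variation: a plain ORDER BY random() should work as well,
assuming random() alone is random enough for the purpose (I take
hashfloat8() to be there just to mix the bits further):

SELECT *
FROM table
WHERE condition = true
ORDER BY random()
LIMIT 5;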

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Obtaining random rows from a result set

2007-08-31 Thread Csaba Nagy
 Dear Santa,
 
 I'd like my database to have functionality analogous to how LIMIT works,
 but for other - non-sequential - algorithms.

There was some discussion before about possibly reusing the algorithm
ANALYZE uses for sampling a given percentage of the table data, and
providing it as some kind of "SELECT SAMPLE x%" style of functionality.
This would be the fastest you can get for a sample big enough to be
statistically significant, but it would not be repeatable. I'm not sure
if this is the same as what you were asking for though; I would like
something like this for statistical stuff, not for randomly selecting
rows.

Cheers,
Csaba.



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Obtaining random rows from a result set

2007-08-31 Thread Martijn van Oosterhout
On Fri, Aug 31, 2007 at 02:42:18PM +0200, Alban Hertroys wrote:
 Examples:
 * random(maxrows) would return random rows from the resultset.
 * median() would return the rows in the middle of the result set (this
 would require ordering to be meaningful).

It would be possible to write an aggregate that returns a single random
value from a set. The algorithm is something like:

n = 1
v = null
for each row
  if random() < 1/n:
     v = value of row
  n = n + 1

return v

It does require a seqscan though. If you're asking for 5 random rows
you probably mean 5 random but distinct rows, which is different from
just running the above five times in parallel.

I don't know if there's a similar method for median...
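
A hedged PL/pgSQL sketch of that aggregate; all names are illustrative,
and it returns a single random value per group, so getting 5 distinct
rows still needs extra machinery:

CREATE FUNCTION reservoir_step(state integer[], val integer)
RETURNS integer[] AS $$
BEGIN
    -- state[1] counts rows seen so far, state[2] holds the current pick
    state[1] := state[1] + 1;
    IF random() < 1.0 / state[1] THEN
        state[2] := val;
    END IF;
    RETURN state;
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION reservoir_final(state integer[])
RETURNS integer AS $$
    SELECT CASE WHEN $1[1] = 0 THEN NULL ELSE $1[2] END;
$$ LANGUAGE sql;

CREATE AGGREGATE random_pick(integer) (
    SFUNC = reservoir_step,
    STYPE = integer[],
    FINALFUNC = reservoir_final,
    INITCOND = '{0,0}'
);

-- usage: SELECT random_pick(i) FROM some_table WHERE condition;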

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] Obtaining random rows from a result set

2007-08-31 Thread Kaloyan Iliev

Hi,
Why not generate a random number in your application and then:

SELECT *
FROM table_x
WHERE condition = true
OFFSET generated_random_number
LIMIT xx

Kaloyan Iliev

Alban Hertroys wrote:


Hello,

I've recently been busy improving a query that yields a fixed number of
random records matching certain conditions. I have tried all the usual
approaches, and although they do work, they're all limited in some way
and don't translate really well to what you want. They're kludges, IMHO.

The methods I've tried are explained quite well on
http://people.planetpostgresql.org/greg/index.php?/archives/40-Getting-random-rows-from-a-database-table.html

All these methods involve calculating a random number for every record
in the result set at some point in time, which is really not what I'm
trying to model. I think the database should provide some means to get
those records, so...

Dear Santa,

I'd like my database to have functionality analogous to how LIMIT works,
but for other - non-sequential - algorithms.

I was thinking along the lines of:

SELECT *
  FROM table
 WHERE condition = true
 RANDOM 5;

Which would return (up to) 5 random rows from the result set, just as
LIMIT 5 returns (up to) the first 5 records in the result set.


Or maybe even with a custom function, so that you could get non-linear
distributions:

SELECT *
  FROM table
 WHERE condition = true
 LIMIT 5 USING my_func();

Where my_func() could be a user-definable function accepting a number
that should be (an estimate of?) the number of results being returned,
so that it can provide pointers to which rows in the result set will be
returned from the query.

Examples:
* random(maxrows) would return random rows from the result set.
* median() would return the rows in the middle of the result set (this
would require ordering to be meaningful).

What do people think, is this feasible? Desirable? Necessary?

If I'd have time I'd volunteer for at least looking into this, but I'm
working on three projects simultaneously already. Alas...

Regards,
Alban Hertroys.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Out of memory error, FreeBSD 6.2, PostgreSQL 8.2.4

2007-08-31 Thread Tom Lane
Joost Kraaijeveld [EMAIL PROTECTED] writes:
 I have a query that has run fine on 3 other *identical* machines
 (hardware, software, postgresql.conf identical, just other data in the
 database) but that gives me an out of memory error every time I try it
 (see below).

 Anyone any idea of where or how to look for the problem or the
 solution? 

What have you got work_mem set to?

The problem evidently is that a hash join table has gotten too large:

 HashBatchContext: 533741652 total in 76 blocks; 1376 free (74 chunks); 
 533740276 used

Now that's supposed to not get bigger than work_mem (plus or minus some
slop), so either you're trying to run with work_mem of half a gig or
more (answer: don't do that) or you've found some kind of memory leak
(answer: send a reproducible test case to pgsql-bugs).
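
(A hedged illustration, values made up: the setting can be inspected and
capped per session with

SHOW work_mem;
SET work_mem = '32MB';   -- session-level; postgresql.conf holds the default

if the first answer turns out to be the problem.)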

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Phoenix Kiula
On 31/08/2007, Josh Tolley [EMAIL PROTECTED] wrote:
 On 8/31/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
  Phoenix Kiula írta:

 In addition to what others have already said, when things calm down
 you should consider implementing some sort of monitoring system that
 is configured to start screaming before you run into problems like
 this. At my place of work, we've set up Nagios to monitor the space
 left on various partitions, and email us when a partition gets above
 90% full.



Wow, Nagios seems like a superb tool. Thanks for the recommendation!

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] E_BAD_ACCESS with palloc/pfree in base type

2007-08-31 Thread Michael Glaesemann


On Aug 31, 2007, at 3:02 , Gregory Stark wrote:


Michael Glaesemann [EMAIL PROTECTED] writes:


char * theDigits = malloc(sizeof(XTypeMaximumStringLength));

...

char * subscriberNumber = malloc(XTypeMaximumStringLength);


One of those two is wrong, I suspect the first one.


Thanks, Greg. It looks like that was it. Always good to have a fresh  
pair of eyes.



I wonder how you define
XTypeMaximumStringLength, as sizeof(integer constant) ought to give a
compile-time error.


It's an enum value. I do have a few compiler warnings (no errors)  
left to clean up, but that doesn't appear to be one of them.


Thanks again for your help!

Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Out of shared memory (locks per process) using table-inheritance style partitioning

2007-08-31 Thread John Prevost
 So what's the problem?  Increase max_locks_per_transaction.  The reason
 we have that as a tunable is mainly to support systems with very large
 numbers of tables.

So increasing this value into the thousands is a reasonable approach?
If it is reasonable, that's fine.  I'll certainly be increasing it
somewhat in any case.

It just feels more than a little extreme to be tweaking a parameter
whose comment says "32 has historically been enough" up by a factor
of 300 or more; extreme enough to make me wonder if there shouldn't be
some other solution for partitioning.

Are there any drawbacks one should be aware of when increasing
max_locks_per_transaction to such a huge value, besides the obvious
increase in shared memory requirements?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Out of shared memory (locks per process) using table-inheritance style partitioning

2007-08-31 Thread Tom Lane
John Prevost [EMAIL PROTECTED] writes:
 So what's the problem?  Increase max_locks_per_transaction.  The reason
 we have that as a tunable is mainly to support systems with very large
 numbers of tables.

 So increasing this value into the thousands is a reasonable approach?
 If it is reasonable, that's fine.  I'll certainly be increasing it
 somewhat in any case.

 It just feels more than a little extreme to be tweaking a parameter
 whose comment says "32 has historically been enough" up by a factor
 of 300 or more; extreme enough to make me wonder if there shouldn't be
 some other solution for partitioning.

Unless you expect all of your sessions to be touching all of the tables,
you probably don't need to be so extreme as that.  The parameter is a
bit misnamed, as it is not a hard limit per-session.  The total number
of locks allowed in the system is max_locks_per_transaction times
max_connections, but we don't limit any one process to any particular
fraction of that.
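
A hedged worked example (settings illustrative): with

max_connections = 100
max_locks_per_transaction = 128

the shared lock table holds 128 * 100 = 12800 locks in total, any of
which a single backend may consume.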

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Erik Jones


On Aug 31, 2007, at 8:35 AM, Phoenix Kiula wrote:

Thanks everyone. Yes, /var was full because of the backups that're  
going there.


Database is back working.

It was my backup script. It is set to save a daily backup to the /var/
folder, which is not clever. I'll change it to use the backup folder,
which is a separate mount.

On that note, is it recommended to store the data of the database on a
different hard disk than the one on which the database is running? How
can I change the data folder for a live database?

Many thanks!


The data directory is where the database is running.  If you're
referring to where the postgres binaries are, it doesn't matter, as
they are loaded into memory when the server starts.  As far as moving
the data directory goes, you can't move it for a running database.
All of the options to move a server's data directory involve, at some
point, shutting down the db.  Alternatively, if you're running out of
space on the disk currently holding the data, you can add another
drive and put a new tablespace on it, as sketched below.
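
A hedged sketch of that last option (path and names illustrative; the
directory must already exist and be owned by the postgres user):

CREATE TABLESPACE spillover LOCATION '/home/pgdata_extra';
ALTER TABLE some_big_table SET TABLESPACE spillover;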


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate  market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Obtaining random rows from a result set

2007-08-31 Thread Erik Jones

On Aug 31, 2007, at 8:34 AM, Kaloyan Iliev wrote:


Alban Hertroys wrote:


Hello,

I've recently been busy improving a query that yields a fixed number of
random records matching certain conditions. I have tried all the usual
approaches, and although they do work, they're all limited in some way
and don't translate really well to what you want. They're kludges, IMHO.

The methods I've tried are explained quite well on
http://people.planetpostgresql.org/greg/index.php?/archives/40-Getting-random-rows-from-a-database-table.html

All these methods involve calculating a random number for every record
in the result set at some point in time, which is really not what I'm
trying to model. I think the database should provide some means to get
those records, so...

Dear Santa,

I'd like my database to have functionality analogous to how LIMIT works,
but for other - non-sequential - algorithms.

I was thinking along the lines of:

SELECT *
  FROM table
 WHERE condition = true
 RANDOM 5;

Which would return (up to) 5 random rows from the result set, just as
LIMIT 5 returns (up to) the first 5 records in the result set.

Or maybe even with a custom function, so that you could get non-linear
distributions:

SELECT *
  FROM table
 WHERE condition = true
 LIMIT 5 USING my_func();

Where my_func() could be a user-definable function accepting a number
that should be (an estimate of?) the number of results being returned,
so that it can provide pointers to which rows in the result set will be
returned from the query.

Examples:
* random(maxrows) would return random rows from the result set.
* median() would return the rows in the middle of the result set (this
would require ordering to be meaningful).

What do people think, is this feasible? Desirable? Necessary?

If I'd have time I'd volunteer for at least looking into this, but I'm
working on three projects simultaneously already. Alas...

Regards,
Alban Hertroys.



Hi,
Why not generate a random number in your application and then:

SELECT *
FROM table_x
WHERE condition = true
OFFSET generated_random_number
LIMIT xx

Kaloyan Iliev



That won't work without some kind of a priori knowledge of how many
rows the query would return without the offset and limit.
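
A hedged sketch of how it could work once that count is known (two
separate, non-atomic queries; names illustrative):

SELECT count(*) FROM table_x WHERE condition = true;  -- application keeps N
-- the application draws k uniformly from 0 .. N-1, then binds it:
SELECT * FROM table_x WHERE condition = true OFFSET $1 LIMIT 1;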


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate  market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] computing and updating the size of a table with large objects

2007-08-31 Thread Marco Bizzarri
Hi all.

I've a table with large objects, here is the definition:


PAFlow-emmebi=# \d documenti
       Table "public.documenti"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 id_doc | character varying(50) | not null
 file   | oid                   |
 kind   | integer               | not null
 size   | bigint                |
Indexes:
    "documenti_pkey" PRIMARY KEY, btree (id_doc, kind)

I need to populate the size attribute with the size of the large object
referenced by file.

My first attempt was:

SELECT  lo_lseek(lo_open(file, 262144), 0, 2) AS size FROM documenti ;

This opens the large object and passes the resulting descriptor to
lo_lseek, which seeks to the end of the large object and thus returns
the size.

Now I could prepare an update which does the job. My worry is that the
update could use up all the resources (the documenti table is quite
large).

I thought I could use something like this:

SELECT  lo_lseek(lo_open(file, 262144), 0, 2) AS size, lo_close(0)
FROM documenti ;

which is quite nasty, and relies on side effects happening in the
proper order, but uses just one file descriptor for the whole query.

Does anyone have any other suggestion?

Regards
Marco

-- 
Marco Bizzarri
http://iliveinpisa.blogspot.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] auditing in postgresql

2007-08-31 Thread Jeff Davis
On Thu, 2007-08-30 at 21:43 -0400, Merlin Moncure wrote:
 Well, a SRF may be unsuitable for various reasons, but maybe views are
 better. I really like views more and more lately (better than
 functions as a rule, I think).
 
 you have some query, select yadda
 create view log_yadda as
   select yadda
 union all select null, null, null from log_func();
 

Interesting idea, I hadn't thought of that. Not perfect, but
interesting.

The RETURN QUERY feature might help reduce the penalty of using a SRF.
Maybe I'll look into that.

Regards,
Jeff Davis


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Select question

2007-08-31 Thread Madison Kelly

Merlin Moncure wrote:

I seem to recall giving out a query about that in the IRC channel a
while back...so if you got it from me, now I'll attempt to finish the
job :-).

If you can get postfix to look at a view, maybe you could
CREATE VIEW email_v AS
SELECT
   usr_email, dom_name,
   b.dom_name||'/'||a.usr_email||'/inbox' AS email_file
FROM users a, domains b
WHERE
 a.usr_dom_id=b.dom_id;

and just
select * from email_v where usr_email = 'mkelly' and dom_name = 'test.com';

merlin


  Hiya,

Nope, wasn't me, but I was indeed able to solve the problem with a few
tweaks (I posted the details in a follow-up). It was pretty similar to
your suggestion, so you were certainly onto something. :)


Madi

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] ERROR: table row type and query-specified row type do not match

2007-08-31 Thread Tom Lane
Patryk Kordylewski [EMAIL PROTECTED] writes:
 I think I found the problem and made 2 SQL files to reproduce it. The
 only difference between these 2 files is the ORDER BY clause of the view...

 http://fooby.imosnet.de/order_by_working.txt
 http://fooby.imosnet.de/order_by_not_working.txt

Hmm, actually you don't even need the rule, any use of a whole-row Var
for that view will fail:

regression=# select v_baz from v_baz;
ERROR:  table row type and query-specified row type do not match
DETAIL:  Table row contains 3 attributes, but query expects 2.

The problem is that the ORDER BY on a value that's not part of the
output list generates a hidden resjunk column in the view, which
confuses ExecEvalVar.  I've applied a patch that will appear in 8.2.5.
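
A hedged reconstruction of the failing case, since the linked files may
not stay available (table and column names illustrative):

CREATE TABLE baz (a int, b int, c int);
CREATE VIEW v_baz AS
    SELECT a, b FROM baz ORDER BY c;  -- c survives as a hidden resjunk column
SELECT v_baz FROM v_baz;              -- raises the above error on 8.2.4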

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] computing and updating the size of a table with large objects

2007-08-31 Thread Daniel Verite
Marco Bizzarri wrote:

        Table "public.documenti"
  Column |         Type          | Modifiers
 --------+-----------------------+-----------
  id_doc | character varying(50) | not null
  file   | oid                   |
  kind   | integer               | not null
  size   | bigint                |
 Indexes:
     "documenti_pkey" PRIMARY KEY, btree (id_doc, kind)

 I need to populate the size attribute with the size of the large object
 referenced by file.

You can get the sizes from pg_largeobject, this way:

SELECT id_doc, sum(length(data)) as filesize
FROM documenti, pg_largeobject
WHERE documenti.file = pg_largeobject.loid
GROUP BY id_doc;
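
The corresponding update would then be something like this hedged
sketch (untested; it rewrites every row of documenti, and rows whose
file is NULL end up with a NULL size):

UPDATE documenti
SET size = (SELECT sum(length(data))
            FROM pg_largeobject
            WHERE loid = documenti.file);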

-- 
 Daniel
 PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] auditing in postgresql

2007-08-31 Thread Jeff Davis
On Fri, 2007-08-31 at 13:51 -0400, Merlin Moncure wrote:
  how about this:
  create view log_yadda as
select yadda where (select func());
 
  the parens around the function call force it to be evaluated as a scalar.
 
 if you declare func() immutable, you can (maybe) remove the parens
 because the planner folds the call to a constant.  This is faster in
 some cases because
 

I like this approach. I'm a little concerned about PREPARE, however. I
think that might only execute the function once and then not on
subsequent calls, which would then not be audited.

Regards,
Jeff Davis


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] auditing in postgresql

2007-08-31 Thread Merlin Moncure
On 8/31/07, Jeff Davis [EMAIL PROTECTED] wrote:
 On Fri, 2007-08-31 at 13:51 -0400, Merlin Moncure wrote:
   how about this:
   create view log_yadda as
 select yadda where (select func());
  
   the parens around the function call force it to be evaluated as a scalar.
 
  if you declare func() immutable, you can (maybe) remove the parens
  because the planner folds the call to a constant.  This is faster in
  some cases because
 

 I like this approach. I'm a little concerned about PREPARE, however. I
 think that might only execute the function once and then not on
 subsequent calls, which would then not be audited.

At present, immutable functions are only treated as constants during a
query, which is what we want (no problems with prepare).  I am a
little bit concerned about interactions with queries wrapping the
view...you'll just have to try and see.

merlin

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] auditing in postgresql

2007-08-31 Thread Tom Lane
Merlin Moncure [EMAIL PROTECTED] writes:
 At present, immutable functions are only treated as constants during a
 query, which is what we want (no problems with prepare).

Uh, no, they'd be folded to constants at plan time, which is exactly
what Jeff doesn't want AFAICS.

The subselect trick is a bit of a hack, but at present it'll work to
guarantee that the function is called only once per plan execution.
(That's because we'll treat an uncorrelated subquery as an InitPlan
even if it contains volatile functions, which strictly speaking we
should not; but it's a sufficiently useful behavior that I wouldn't want
to get rid of it without providing a more principled substitute ...)

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] auditing in postgresql

2007-08-31 Thread Merlin Moncure
On 8/31/07, Tom Lane [EMAIL PROTECTED] wrote:
 Merlin Moncure [EMAIL PROTECTED] writes:
  At present, immutable functions are only treated as constants during a
  query, which is what we want (no problems with prepare).

 Uh, no, they'd be folded to constants at plan time, which is exactly
 what Jeff doesn't want AFAICS.

yikes! I did test this before I posted that, but I oversimplified it:
I didn't move the func() to the where clause...so the subselect
version defined as volatile seems the way to go.  Unfortunately this
means you pay a small extra price for large result sets.

merlin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] auditing in postgresql

2007-08-31 Thread Merlin Moncure
On 8/31/07, Jeff Davis [EMAIL PROTECTED] wrote:
 On Thu, 2007-08-30 at 21:43 -0400, Merlin Moncure wrote:
  Well, a SRF may be unsuitable for various reasons, but maybe views are
  better. I really like views more and more lately (better than
  functions as a rule, I think).
 
  you have some query, select yadda
  create view log_yadda as
select yadda
  union all select null, null, null from log_func();
 

 Interesting idea, I hadn't thought of that. Not perfect, but
 interesting.

create function func() returns bool as
$$
  begin
raise notice 'log!';
return true;
  end;
$$ language plpgsql;

how about this:
create view log_yadda as
  select yadda where (select func());

the parens around the function call force it to be evaluated as a scalar.

:-)
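
A hedged end-to-end sketch (table and contents illustrative):

create table yadda (id int);
insert into yadda values (1), (2);
create view log_yadda as
  select * from yadda where (select func());
select * from log_yadda;  -- raises NOTICE 'log!' once, then returns the rows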

merlin

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] auditing in postgresql

2007-08-31 Thread Merlin Moncure
On 8/31/07, Merlin Moncure [EMAIL PROTECTED] wrote:
 On 8/31/07, Jeff Davis [EMAIL PROTECTED] wrote:
  On Thu, 2007-08-30 at 21:43 -0400, Merlin Moncure wrote:
   Well, a SRF may be unsuitable for various reasons, but maybe views are
   better. I really like views more and more lately (better than
   functions as a rule, I think).
  
   you have some query, select yadda
   create view log_yadda as
 select yadda
   union all select null, null, null from log_func();
  
 
  Interesting idea, I hadn't thought of that. Not perfect, but
  interesting.

 create function func() returns bool as
 $$
   begin
 raise notice 'log!';
 return true;
   end;
 $$ language plpgsql;

 how about this:
  create view log_yadda as
   select yadda where (select func());

 the parens around the function call force it to be evaluated as a scalar.

if you declare func() immutable, you can (maybe) remove the parens
because the planner folds the call to a constant.  This is faster in
some cases because

select yadda where true
is slightly faster than
select yadda where (select true)

I'm not completely sure this will prevent multiple executions of func
in some cases however.

merlin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] auditing in postgresql

2007-08-31 Thread Jeff Davis
On Fri, 2007-08-31 at 16:42 -0400, Merlin Moncure wrote:
 On 8/31/07, Tom Lane [EMAIL PROTECTED] wrote:
  Merlin Moncure [EMAIL PROTECTED] writes:
   At present, immutable functions are only treated as constants during a
   query, which is what we want (no problems with prepare).
 
  Uh, no, they'd be folded to constants at plan time, which is exactly
  what Jeff doesn't want AFAICS.
 
 yikes! I did test this before I posted that, but I oversimplified it:
 I didn't move the func() to the where clause...so the subselect
 version defined as volatile seems the way to go.  Unfortunately this
 means you pay a small extra price for large result sets.
 

That sounds like a good solution to me. It looks like the planner is
able to optimize the queries, and the audit function is only called
once. It sounds like I may need to beware of future changes, however.

What is the small extra price for large tables though?

Thanks for the help!

Regards,
Jeff Davis


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Jeff Davis
On Fri, 2007-08-31 at 22:34 +0800, Phoenix Kiula wrote:
 On 31/08/2007, Josh Tolley [EMAIL PROTECTED] wrote:
  On 8/31/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
   Phoenix Kiula írta:
 
  In addition to what others have already said, when things calm down
  you should consider implementing some sort of monitoring system that
  is configured to start screaming before you run into problems like
  this. At my place of work, we've set up Nagios to monitor the space
  left on various partitions, and email us when a partition gets above
  90% full.
 
 
 
 Wow, Nagios seems like a superb tool. Thanks for the recommendation!
 

You might also consider OpenNMS.

Regards,
Jeff Davis


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] URGENT: Whole DB down (no space left on device)

2007-08-31 Thread Anton Melser
On 31/08/2007, Jeff Davis [EMAIL PROTECTED] wrote:
 On Fri, 2007-08-31 at 22:34 +0800, Phoenix Kiula wrote:
  On 31/08/2007, Josh Tolley [EMAIL PROTECTED] wrote:
   On 8/31/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Phoenix Kiula írta:
 
   In addition to what others have already said, when things calm down
   you should consider implementing some sort of monitoring system that
   is configured to start screaming before you run into problems like
   this. At my place of work, we've set up Nagios to monitor the space
   left on various partitions, and email us when a partition gets above
   90% full.
 
 
 
  Wow, Nagios seems like a superb tool. Thanks for the recommendation!
 

 You might also consider OpenNMS.

I spent about 3 hours trying to get it running and said: I'm at eval
stage, and nagios/centreon is installed and working... (even if not as
theoretically nice)... there are lots of very promising systems out
there (hyperic, zenoss, etc) but if it ain't an apt-get or yum away
then... why not just go with what *is* there? Surely it must be being
used by more people; if not, why aren't the others in the repos?
Random ramblings!
Cheers
Anton


-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] WAL to RAW devices ?

2007-08-31 Thread Alex Vinogradovs
Hi guys,


I've got a bunch of PostgreSQL servers connected to external storage,
where a single server needs to serve as a WO database dealing with
INSERTs only, and a bunch of other servers need to obtain a copy of that
data for RO serving, without taking resources on the WO server.
 The idea is to have say 2 raw devices which would be used as 2 WAL
segments (round-robin). RO servers will go after the one that's not used
at a given time with something like xlogdump utility and produce INSERT
statements to be then executed locally. After that import is done, a
command will be issued to the WO server to switch to the other segment
so that the cycle can repeat.
 The objective of that replication model is to ensure that SELECT
queries won't ever affect the performance of the WO server,
which may experience uneven loads.

Is that possible with the 8.2.4 or 8.3 capabilities, or maybe with
minor modifications ?

Thanks!

Best regards,
Alex Vinogradovs


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] WAL to RAW devices ?

2007-08-31 Thread Tom Lane
Alex Vinogradovs [EMAIL PROTECTED] writes:
  The idea is to have say 2 raw devices which would be used as 2 WAL
 segments (round-robin). RO servers will go after the one that's not used
 at a given time with something like xlogdump utility and produce INSERT
 statements to be then executed locally. After that import is done, a
 command will be issued to the WO server to switch to the other segment
 so that the cycle can repeat.

Why would you insist on these being raw devices?  Do you enjoy writing
filesystems from scratch?

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] WAL to RAW devices ?

2007-08-31 Thread Alex Vinogradovs
WAL segments already have their structure. Filesystem would be an
overhead, plus I mentioned access to the same storage from
multiple hosts - no filesystem mounting, synchronization and
other problems.

I figured PG folks aren't interested in adding enterprise-level storage
functionality (movable tablespaces, raw devices for tablespaces, etc),
thus I foresee the model described as the only way to achieve somewhat
decent performance in a stressed environment.


On Fri, 2007-08-31 at 19:21 -0400, Tom Lane wrote:
 Alex Vinogradovs [EMAIL PROTECTED] writes:
   The idea is to have say 2 raw devices which would be used as 2 WAL
  segments (round-robin). RO servers will go after the one that's not used
  at a given time with something like xlogdump utility and produce INSERT
  statements to be then executed locally. After that import is done, a
  command will be issued to the WO server to switch to the other segment
  so that the cycle can repeat.
 
 Why would you insist on these being raw devices?  Do you enjoy writing
 filesystems from scratch?
 
   regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] WAL to RAW devices ?

2007-08-31 Thread Tom Lane
Alex Vinogradovs [EMAIL PROTECTED] writes:
 WAL segments already have their structure. Filesystem would be an
 overhead,

Just because you'd like that to be true doesn't make it true.  We have
to manage a variable number of active segments; track whether a given
segment is waiting for future use, active, waiting to be archived, etc;
manage status signaling to the archiver process; and so on.  Now I'll
freely admit that using a filesystem is only one of the ways that those
problems could be attacked, but that's how they've been attacked in
Postgres.  If you want to not have that functionality present then
you'd need to rewrite all that code and provide some other
infrastructure for it to use.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] WAL to RAW devices ?

2007-08-31 Thread Alex Vinogradovs
But would it be a problem to have only one active segment at all times?
My inspiration pretty much comes from Oracle, where redo logs are
pre-configured and can be switched by a command issued to the instance.


 Just because you'd like that to be true doesn't make it true.  We have
 to manage a variable number of active segments; track whether a given
 segment is waiting for future use, active, waiting to be archived, etc;
 manage status signaling to the archiver process; and so on.  Now I'll
 freely admit that using a filesystem is only one of the ways that those
 problems could be attacked, but that's how they've been attacked in
 Postgres.  If you want to not have that functionality present then
 you'd need to rewrite all that code and provide some other
 infrastructure for it to use.
 
   regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] WAL to RAW devices ?

2007-08-31 Thread Alvaro Herrera
Alex Vinogradovs wrote:
 WAL segments already have their structure. Filesystem would be an
 overhead,

In this case you can choose a filesystem with lower overhead.  For
example, with WAL you don't need a journalling filesystem at all, so
using ext2 is not a bad idea.  For Pg data files, you need journalling
of metadata only, not of data; the latter is provided by WAL.  So you
can mount the data filesystem with the option data=writeback.
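
For example (a sketch; device names and mount points are illustrative):

    # data area: ext3 journalling metadata only, since WAL already
    # protects the table data itself
    mount -t ext3 -o data=writeback /dev/sdc1 /var/lib/pgsql/data

    # WAL area: plain ext2, no journal to pay for
    mount -t ext2 /dev/sdb1 /var/lib/pgsql/data/pg_xlog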

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1, W 73º 13' 56.4
All rings of power are equal,
But some rings of power are more equal than others.
 (George Orwell's The Lord of the Rings)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] WAL to RAW devices ?

2007-08-31 Thread Alex Vinogradovs
You probably missed that part... In my setup I need at least
two boxes reading those files while a third box keeps writing
to them... I can't mount ext2 even read-only while it's being
written to by another host, and I can't unmount it to mount it
exclusively on any of them either, since PG will be writing to
that location the whole time. The only way left is WAL shipping, which
probably wouldn't be that bad since the copying would be done
via DMA, but it still isn't as good as it could be, since it would
hit the same spindles...
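
In 8.2 terms the shipping itself is just the standard archive_command
setup (a sketch; the shared-storage path is illustrative):

    # postgresql.conf on the WO server
    archive_command = 'cp %p /mnt/shared/wal_archive/%f'
    archive_timeout = 60   # force a segment switch at least once a minute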


On Fri, 2007-08-31 at 20:23 -0400, Alvaro Herrera wrote:
 Alex Vinogradovs wrote:
  WAL segments already have their structure. Filesystem would be an
  overhead,
 
 In this case you can choose a filesystem with lower overhead.  For
 example with WAL you don't need a journalling filesystem at all, so
 using ext2 is not a bad idea.  For Pg data files, you need journalling
 of metadata only, not of data; the latter is provided by WAL.  So you
 can mount the data filesystem with the option data=writeback.
 

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] WAL to RAW devices ?

2007-08-31 Thread Joshua D. Drake

Alex Vinogradovs wrote:
 Hi guys,
 
 
 I've got a bunch of PostgreSQL servers connected to external storage,
  where a single server needs to be serving as WO database dealing with
 INSERTs only, and bunch of other guys need to obtain a copy of that
 data for RO serving, without taking resources on WO server.

You can't do that with PostgreSQL without replication, unless you are
willing to take outages on your RO servers while they apply the logs.

Further, you are considering the wrong logs: it is not the WAL logs
but the archive logs that you need.

Sincerely,

Joshua D. Drake


  The idea is to have say 2 raw devices which would be used as 2 WAL
 segments (round-robin). RO servers will go after the one that's not used
 at a given time with something like xlogdump utility and produce INSERT
 statements to be then executed locally. After that import is done, a
 command will be issued to the WO server to switch to the other segment
 so that the cycle can repeat.
  The objective of that replication model is to ensure that SELECT
 queries won't ever affect the performance of the WO server,
 which may experience uneven loads.
 
 Is that possible with the 8.2.4 or 8.3 capabilities, or maybe with
 minor modifications ?
 
 Thanks!
 
 Best regards,
 Alex Vinogradovs
 
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org/
 


--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] WAL to RAW devices ?

2007-08-31 Thread Alex Vinogradovs
Oh well, I guess I will just use a trigger to invoke a C
function and store the statements on a raw device in some
proprietary format, while the actual inserts don't take place
at all.
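
Roughly like this (a sketch; the names are illustrative, and the
interesting part, log_to_raw(), would have to be written in C):

    CREATE FUNCTION log_to_raw() RETURNS trigger
        AS 'mymodule', 'log_to_raw' LANGUAGE C;

    CREATE TRIGGER capture_insert BEFORE INSERT ON my_table
        FOR EACH ROW EXECUTE PROCEDURE log_to_raw();

The C function would write the row out to the raw device and return
NULL, which is what makes a BEFORE ROW trigger suppress the actual
insert.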

In case anyone has more ideas, please let me know.


On Fri, 2007-08-31 at 17:45 -0700, Joshua D. Drake wrote:
 
 Alex Vinogradovs wrote:
  Hi guys,
  
  
  I've got a bunch of PostgreSQL servers connected to external storage,
   where a single server needs to be serving as WO database dealing with
  INSERTs only, and bunch of other guys need to obtain a copy of that
  data for RO serving, without taking resources on WO server.
 
 You can't do that with PostgreSQL without replication. Unless you are
 willing to have outages with your RO servers to apply the logs.
 
 Further you are considering the wrong logs. It is not the WAL logs, but
 the archive logs that you need.
 
 Sincerely,
 
 Joshua D. Drake
 


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] WAL to RAW devices ?

2007-08-31 Thread Alvaro Herrera
Alex Vinogradovs wrote:
 Probably you missed that part... In my setup, I need at least
 2 boxes going after those files, while 3rd box keeps on writing
 to them... I can't mount ext2 even in R/O mode while it's being
 written to by another guy. I can't unmount it before mounting
 exclusively on any of them either, since PG will be writing to
 that location. The only way is to do the WAL shipping, which
 probably wouldn't be that bad since the copying would be done
 via DMA, but still isn't as good as it could be since that would
 utilize the same spindles...

Oh, I see.

What I've seen described is to put a PITR slave on a filesystem with
snapshotting ability, like ZFS on Solaris.

You can then have two copies of the PITR logs.  One gets a postmaster
running in warm standby mode, i.e. recovering logs in a loop.  The
other one, in a sort of jail (I don't know the Solaris terminology for
this), stops the recovery and enters normal mode.  You can query it all
you like at that point.

Periodically you stop the server in normal mode, resync the snapshot
(which basically resets the modified block list in the filesystem),
take a new snapshot, create the jail, and stop recovery there again.
So you have a fresher postmaster for queries.

It's not as good as having a true hot standby, for sure.  But it seems
it's good enough while we wait.
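
(The warm-standby half is the usual recovery loop; with 8.3's
contrib/pg_standby it is a one-line recovery.conf -- a sketch, with an
illustrative archive path:

    restore_command = 'pg_standby /mnt/snap/wal_archive %f %p'

on 8.2 you would need an equivalent wait-and-copy script of your own.)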

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1, W 73º 13' 56.4
Those who use electric razors are infidels destined to burn in hell while
we drink from rivers of beer, download free vids and mingle with naked
well shaved babes. (http://slashdot.org/comments.pl?sid=44793cid=4647152)

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] WAL to RAW devices ?

2007-08-31 Thread Alex Vinogradovs
Yeah, that's the trick... I need high availability with
high performance and near-real-time synchronization ;-)
Also, I've got FreeBSD here... ZFS will be out with the 7.0
release, and UFS2 has snapshotting capability too. But
the whole method isn't good enough anyway.


 Oh, I see.
 
 What I've seen described is to put a PITR slave on a filesystem with
 snapshotting ability, like ZFS on Solaris.
 
 You can then have two copies of the PITR logs.  One gets a postmaster
 running in warm standby mode, i.e. recovering logs in a loop.  The
 other one, in a sort of jail (I don't know the Solaris terminology for
 this) stops the recovery and enters normal mode.  You can query it all
 you like at that point.
 
 Periodically you stop the server in normal mode, resync the snapshot
 (which basically resets the modified block list in the filesystem),
 take a new snapshot, create the jail and stop the recovery mode again.
 So you have a fresher postmaster for queries.
 
 It's not as good as having a true hot standby, for sure.  But it seems
 it's good enough while we wait.
 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Out of Memory - 8.2.4

2007-08-31 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 It continues with the next table if interrupted (SIGINT), but the worker
 exits on any other error.  I would ask you to review that code -- it's
 in do_autovacuum, the PG_TRY block at the end.  It was committed in rev
 1.52 of autovacuum.c.

While looking at this I came across something I didn't like at all:

 * We somewhat ignore the risk that the launcher changes its PID
 * between we reading it and the actual kill; we expect ProcKill to be
 * called shortly after us, and we assume that PIDs are not reused too
 * quickly after a process exits.

I'm fairly sure that Windows has a bad habit of recycling PIDs almost
immediately.  I didn't actually read the code to see what the assumption
is needed for --- I just noticed this comment and it set off alarm bells.  Can
you rework the logic to not depend on PIDs at all?  (Perhaps the
session IDs that Florian's patch will create would serve instead?
I imagine those will be assigned during InitProcess, so they should
be available to identify individual autovac workers.)

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Out of Memory - 8.2.4

2007-08-31 Thread Alvaro Herrera
Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
  It continues with the next table if interrupted (SIGINT), but the worker
  exits on any other error.  I would ask you to review that code -- it's
  in do_autovacuum, the PG_TRY block at the end.  It was committed in rev
  1.52 of autovacuum.c.
 
 While looking at this I came across something I didn't like at all:
 
  * We somewhat ignore the risk that the launcher changes its PID
  * between we reading it and the actual kill; we expect ProcKill to be
  * called shortly after us, and we assume that PIDs are not reused too
  * quickly after a process exits.
 
 I'm fairly sure that Windows has a bad habit of recycling PIDs almost
 immediately.  I didn't actually read the code to see what the assumption
 is for --- I just noticed this comment and it set off alarm bells.

Well, this is not much of a risk, because what's going on is that the
worker wants to signal the launcher, so the launcher would have to shut
down at just the wrong moment for this to happen, which would be pretty
rare on its own.  Also, note that the window we're talking about is
between the time one proc_exit handler fires and the next.

Also, note that the worst thing that can happen is that the wrong
process gets a SIGUSR1 signal, and the launcher misses an opportunity
for starting another worker and rebalancing the vacuum cost parameters.
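
In outline, the window in question is just this (the shared field name
below is illustrative, not the actual struct member):

    #include <signal.h>
    #include <sys/types.h>

    /* illustrative stand-in for the autovacuum shared-memory area */
    extern struct AvShmem { volatile pid_t launcher_pid; } *AutoVacShmem;

    static void
    signal_launcher(void)
    {
        pid_t launcher = AutoVacShmem->launcher_pid;  /* read the PID... */
        /* ...the launcher may exit right here, and on platforms that
         * recycle PIDs quickly its PID may already belong to someone
         * else by the time we call kill()... */
        if (launcher != 0)
            kill(launcher, SIGUSR1);  /* ...so this could hit a stranger */
    }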

-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
I don't remember, but it's not true.  It's not true, and if it were
 true, I wouldn't remember. (Augusto Pinochet to a court of justice)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Out of Memory - 8.2.4

2007-08-31 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Also, note that the worst thing that can happen is that the wrong
 process gets a SIGUSR1 signal, and the launcher misses an opportunity
 for starting another worker and rebalancing the vacuum cost parameters.

Hmmm ... okay, but I note that part of that assumption is that every
postgres-owned process either ignores SIGUSR1 or handles it in a fashion
such that an extra signal won't cause any Bad Things.  This is not
obvious, especially considering that the Unix default action for SIGUSR1
is abnormal process termination.  I'm starting to think that we need a
README somewhere collecting all the system's assumptions about signal
handling.
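
To make the hazard concrete, here is a minimal standalone illustration
(not Postgres code):

    #include <signal.h>
    #include <unistd.h>

    static void
    usr1_handler(int signo)
    {
        (void) signo;   /* just swallow it; a real handler sets a flag */
    }

    int
    main(void)
    {
        /* Without this line, a stray "kill -USR1" terminates the
         * process -- that is the Unix default action. */
        signal(SIGUSR1, usr1_handler);
        pause();        /* wait for a signal to arrive */
        return 0;
    }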

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster