[GENERAL] PostgreSQL trap, and assertion failed

2011-04-14 Thread Radosław Smogura
Hello,

I have some small crash-reporting code which I use while mmap-ing the database.
After my last merge with master I got:

TRAP: FailedAssertion("!(slot > 0 && slot <= PMSignalState->num_child_flags)",
File: "pmsignal.c", Line: 227)
LOG:  server process (PID 5128) was terminated by signal 6: Aborted
LOG:  terminating any other active server processes
TRAP: FailedAssertion("!(slot > 0 && slot <= PMSignalState->num_child_flags)",
File: "pmsignal.c", Line: 189)

I've installed crash handlers for the SIGBUS and SIGSEGV signals. May I ask
what may be wrong, and how to prevent this?

Regards,
Radek.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread John R Pierce

On 04/13/11 9:19 PM, Benjamin Smith wrote:


The speed benefits of SSDs as benchmarked would seem incredible. Can 
anybody comment on SSD benefits and problems in real life use?



I maintain some 100 databases on 3 servers, with 32 GB of RAM each and 
an extremely rich, complex schema. (300+ normalized tables)



I was wondering if anybody here could comment on the benefits of SSD 
in similar, high-demand rich schema situations?





consumer-grade MLC SSDs will crash and burn in short order under a 
heavy transactional workload characterized by sustained small-block 
random writes.


The enterprise-grade SLC SSDs will perform very nicely, but they are 
very, very expensive, and found in high-end enterprise database servers 
like Oracle's Exadata machines.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication: slave is in permanent startup 'recovery'

2011-04-14 Thread Henry C.
On Wed, April 13, 2011 20:15, Henry C. wrote:
 If I try and execute a long-lived SQL query on the slave, it eventually fails
 with "canceling statement due to conflict with recovery".  Replication is
 definitely working (DML actions are propagated to the slave), but something
 is amiss.

Let me ask the question another way.

Following the hints at http://wiki.postgresql.org/wiki/Streaming_Replication:

Between the primary and standby hosts, pg_current_xlog_location() on the
primary and pg_last_xlog_receive_location()/pg_last_xlog_replay_location()
on the standby show what appears to be normal activity (i.e., replication
is happening successfully, and the numbers indicate health).
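
For reference, the checks are plain SQL (these functions are as of 9.0):

-- on the primary:
SELECT pg_current_xlog_location();
-- on the standby:
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();
-- healthy replication: the standby values trail the primary's only slightly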

primary% ps -ef | grep sender
standby% ps -ef | grep receiver

...show things talking nicely and the numbers match.

ps also shows a startup process recovering logs on the standby (which I assume
is normal behaviour).

However, a SELECT eventually fails with "canceling statement due to conflict
with recovery".

Where else can I check, or what else can I do to determine what the problem is?

Thanks


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Cannot start Postgres : invalid data in PID file

2011-04-14 Thread Daron Ryan

 Problem: Cannot start Postgres
Platform: Postgres 8.3, Windows Vista Home
Error Message: pg_ctl: invalid data in PID file "C:/Program Files/PostgreSQL/8.3/data/postmaster.pid"


I run Postgres on Windows Vista Home Edition. It normally runs as a 
service but is not starting. I created a command-line invocation by 
copying the data from the Windows service GUI and changing "runservice" to 
"start". On the command line it produces the following:

C:\Users\Daron>C:/Program Files/PostgreSQL/8.3/bin/pg_ctl.exe start -N
postgresql-8.3 -D "C:/Program Files/PostgreSQL/8.3/data" -w
pg_ctl: invalid data in PID file "C:/Program Files/PostgreSQL/8.3/data/postmaster.pid"

Entering "type postmaster.pid" in the appropriate directory produces a 
couple of blank lines.


I thought of trying to delete the PID file but I am not sure if it may 
cause further problems.


Can anyone help?

Regards,
Daron.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication: slave is in permanent startup 'recovery'

2011-04-14 Thread Craig Ringer

On 14/04/2011 2:15 AM, Henry C. wrote:

Greets,

Pg 9.0.3

This must be due to my own misconfiguration, so apologies if I'm not seeing
the obvious - I've noticed that my slave seems to be stuck in a permanent
startup/recovery state.


That's what warm- and hot-standby slaves are. They're continuously 
replaying WAL files from the master, essentially the same thing as 
during recovery from a bad shutdown. The advantage is that it's 
*extremely* well tested code.



If I try and execute a long-lived SQL query on the slave, it eventually fails
with "canceling statement due to conflict with recovery".


That's a limitation of streaming replication. It's a lot like the issue 
Oracle has with running out of undo or redo log space. Essentially, my 
understanding is that the hot standby server cannot replay WAL archives 
to keep up with the master's changes at the same time as running 
queries. To avoid getting too far behind the master because of a huge or 
stuck query, it'll cancel very long-running queries.


Again from my limited understanding, the reason it can't replay WAL is 
because the WAL records include overwrites of pages VACUUMed and re-used 
on the master. HS is block-level replication; it cannot keep a page 
in-place on the slave when the master has erased or overwritten it.


It's theoretically possible for the slave to copy blocks that're about 
to be written out-of-line into a slave-side-only store of blocks that've 
been erased on the master but are still needed by transactions on the 
slave. The discussion I've read suggests that that'd be ... complicated 
... to make work well especially with log replay happening concurrently.



Replication is
definitely working (DML actions are propagated to the slave), but something is
amiss.


Nope, it's working as designed I'm afraid.

There are params you can tune to control how far slaves are allowed to 
get behind the master before cancelling queries. I don't remember what 
they are, but the manual will cover them. Do consider though that the 
more behind the slave is, the more log files the master has to have 
space to keep... and if the master runs out of space, things get ugly.
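
On 9.0 they appear to be max_standby_archive_delay and
max_standby_streaming_delay; a quick way to inspect them on the standby:

SHOW max_standby_archive_delay;    -- how long replay may wait on conflicting queries (archived WAL)
SHOW max_standby_streaming_delay;  -- same for streamed WAL; -1 means wait forever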


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot start Postgres : invalid data in PID file

2011-04-14 Thread John R Pierce

On 04/14/11 1:01 AM, Daron Ryan wrote:

 Problem: Cannot start Postgres
Platform: Postgres 8.3, Windows Vista Home
Error Message: pg_ctl: invalid data in PID file "C:/Program Files/PostgreSQL/8.3/data/postmaster.pid"


I run Postgres on Windows Vista Home Edition. It normally runs as a 
service but is not starting. I created a command-line invocation by 
copying the data from the Windows service GUI and changing "runservice" to 
"start". On the command line it produces the following:


C:\Users\Daron>C:/Program Files/PostgreSQL/8.3/bin/pg_ctl.exe start -N
postgresql-8.3 -D "C:/Program Files/PostgreSQL/8.3/data" -w
pg_ctl: invalid data in PID file "C:/Program Files/PostgreSQL/8.3/data/postmaster.pid"

Entering "type postmaster.pid" in the appropriate directory produces a 
couple of blank lines.


I thought of trying to delete the PID file but I am not sure if it may 
cause further problems.


as long as you can ensure that the server is not running, it's perfectly 
safe to delete the PID file.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot start Postgres : invalid data in PID file

2011-04-14 Thread Peter Geoghegan
The postmaster.pid file holds the PID of the postmaster. The file
shouldn't exist when the postmaster isn't running, so it should be
safe to delete. Its presence does indicate that Postgres was shut
down improperly, though.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication: slave is in permanent startup 'recovery'

2011-04-14 Thread Henry C.
 However, a SELECT eventually fails with canceling statement due to conflict
 with recovery.

 Where else can I check, or what else can I do to determine what the problem
 is?

...or maybe there _is_ no problem.

select count(*) from big_table; -- will fail because it's long-lived and rows
are changing (deletes, inserts, updates) underneath it.

select * from big_table where id=nnn; -- succeeds because the window for
something in that particular row to change during the select is too small.

All's good!

Sorry about the noise.

h


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Henry C.

On Thu, April 14, 2011 06:19, Benjamin Smith wrote:
 The speed benefits of SSDs as benchmarked would seem incredible. Can anybody
 comment on SSD benefits and problems in real life use?

 I maintain some 100 databases on 3 servers, with 32 GB of RAM each and an
 extremely rich, complex schema. (300+ normalized tables)

 I was wondering if anybody here could comment on the benefits of SSD in
 similar, high-demand rich schema situations?

Even if you only use SSDs for your indexes, the gains are staggering.  We use
them on several servers, one of which is extremely busy (xid wraparound stuff)
and the performance gains are game-changing.

There is no going back.  Hint: don't use cheap SSDs - cough up and use Intel.

Cheers
Henry


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Bug in PGSQL 9.0 with handling chr(1..255) in Win1250 context?

2011-04-14 Thread Durumdara
Hi!

Windows 7x64, PG9.0, PGAdmin 1.12.1.

First I sent this problem to PGADMIN support list, you can see it, but
I repeat the description.

http://archives.postgresql.org/pgadmin-support/2011-04/msg00012.php

I tried to fill up a text field with all ANSI characters,
chr(1)..chr(255), except 0, because it is the string terminator in C.

I did this because I wanted to check how the row is encoded in a backup
(how COPY handles it).

But when I tried to see the row in PGADMIN, I got this error:

ERROR:  character 0x81 of encoding "WIN1250" has no equivalent in "UTF8"

** Error **

ERROR: character 0x81 of encoding "WIN1250" has no equivalent in "UTF8"
SQL state: 22P05


I reported this to PGADMIN list, but Dave said:

That error comes from PostgreSQL, and is entirely expected if you try
to store non-printable characters in a text column, and then convert
it to UTF-8 (which is what happens when pgAdmin requests data).
PostgreSQL is extremely strict about enforcing the correctness of
unicode data.

The interesting thing is that I have a Win1250 client context with a
Win1250 database.

I think it is unacceptable that the row cannot be viewed at all.

EMS PG Manager can show the row, but it may do something in a different
way; I don't know.
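
For what it's worth, the failure can be reproduced without pgAdmin; a
minimal sketch, assuming a WIN1250 database:

SET client_encoding = 'UTF8';  -- what pgAdmin effectively requests
SELECT chr(129);               -- 0x81 is undefined in WIN1250, so the conversion fails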

Thanks:
dd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] updating rows which have a common value for consecutive dates

2011-04-14 Thread Lonni J Friedman
Hi David,
I had just figured out the sub-query requirement when you replied.  So
now I've got this working:
SELECT * FROM (
  SELECT testname, os, arch, build_type, branch, current_status, last_update,
         rank() OVER (PARTITION BY testname, os, arch, branch, build_type
                      ORDER BY last_update DESC) AS myrank
  FROM mytable
  WHERE current_status='FAILED' AND age(now(), last_update) <= INTERVAL '15 days'
) blah
WHERE myrank=2;

However, I'm getting a bit lost at this point as to what I need to do
next.  I've been googling WITH CTEs and it's mostly going over my
head.  Could you give me more of a hint about this third layer of
sub-query?
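
Is something like this roughly the shape you mean?  An untested sketch
(8.4 has both CTEs and window functions); note the FAILED filter moves out
of the inner WHERE, so the ranking also sees non-failed runs:

WITH ranked AS (
  SELECT testname, os, arch, build_type, branch, current_status,
         rank() OVER (PARTITION BY testname, os, arch, branch, build_type
                      ORDER BY last_update DESC) AS myrank
  FROM mytable
  WHERE age(now(), last_update) <= INTERVAL '15 days'
)
SELECT testname, os, arch, build_type, branch
FROM ranked
WHERE myrank <= 2
GROUP BY testname, os, arch, build_type, branch
HAVING count(*) = 2 AND bool_and(current_status = 'FAILED');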

thanks


On Wed, Apr 13, 2011 at 10:17 PM, David Johnston pol...@yahoo.com wrote:
 You need to turn the query with the window function into a sub-query and then 
 in the outer query you can refer to the inner-query's rank() column.  The 
 inner query should effectively get you the last two test results for each 
 context and then you can check to see if any of those failed.

 I have a hunch you might need a third layer of sub-queries to handle the 
 failure aspect of the requirement properly, possibly as part of a WITH CTE. 
 You for sure need one in order to properly utilize the rank() limiting.

 Dave

 On Apr 14, 2011, at 0:52, Lonni J Friedman netll...@gmail.com wrote:

 Hi David,
 Thanks for your reply.  I'm using 8.4.7, so window functions are
 certainly an option, although I've admittedly never used them before.
 I've spent the past few hours reading the docs, and I now have a
 rudimentary understanding of window functions.  I tried to compose a
 query based on your suggestion, but I think I'm running up against my
 lack of experience.  This query seems to give me all failures, but not
 necessarily when there are two in a row for a unique group (although
 I'm still not 100% certain it's actually returning only last_update
 consecutive rows):
 SELECT testname,os,arch,build_type,branch,current_status,last_update,rank()
 OVER (PARTITION BY testname,os,arch,branch,build_type ORDER BY
 last_update DESC) FROM mytable WHERE current_status='FAILED' AND
 age(now(),last_update) <= INTERVAL '15 days'

 However, if I'm understanding how this works, what I really care about
 is when a rank=2 exists, as that's truly when something failed for two
 consecutive last_update's.  I thought this might do it, but apparently
 I'm doing it wrong:

 SELECT testname,os,arch,build_type,branch,current_status,last_update,rank()
 OVER (PARTITION BY testname,os,arch,branch,build_type ORDER BY
 last_update DESC) FROM mytable WHERE current_status='FAILED' AND
 age(now(),last_update) <= INTERVAL '15 days' AND rank()=2 ;
 ERROR:  window function call requires an OVER clause
 LINE 1: ... age(now(),last_update) <= INTERVAL '15 days' AND rank()=2 ;

 How do I restrict the results to only show when rank=2 ?

 thanks!


 On Wed, Apr 13, 2011 at 9:57 AM, David Johnston pol...@yahoo.com wrote:
 If you have the ability to use Window functions you can group (as 
 necessary), order by last_update, and then use rank() to number each test 
 run sequentially.  Then you can limit the results to ( rank() <= 2 AND 
 current_status = 'FAILED' ).

 David J.

 -Original Message-
 From: pgsql-general-ow...@postgresql.org 
 [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Lonni J Friedman
 Sent: Wednesday, April 13, 2011 3:34 AM
 To: pgsql-general
 Subject: [GENERAL] updating rows which have a common value for consecutive 
 dates

 Greetings,
 I have a table full of automated test data, which continuously has new 
 unique data inserted:

      Column     |            Type             |                       Modifiers
 ----------------+-----------------------------+--------------------------------------------------------
  id             | integer                     | not null default nextval('dbltests_id_seq'::regclass)
  testname       | text                        | not null
  last_update    | timestamp without time zone | not null default now()
  current_status | text                        | not null
  os             | text                        | not null
  arch           | text                        | not null
  build_type     | text                        | not null
  branch         | text                        | not null

 The 'testname' column contains many different tests, and each unique 
 'testname' has numerous different associated os, arch, build_type & branch 
 values.  For example, testname='foo' will run on 
 os='Linux',arch='i686',build_type='debug',branch='t38', and also on 
 os='Windows7',arch='x86_64',build_type='release',branch='r40', etc., and 
 there will be many other different testnames with similar permutations of 
 the os, arch, build_type & branch columns.  So for example, there will also 
 be testname='bar' or testname='omega' or testname='sigma' for the other 
 assorted os, arch, build_type & branch permutations.

 The 

Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Craig Ringer

On 14/04/2011 4:35 PM, Henry C. wrote:


There is no going back.  Hint: don't use cheap SSDs - cough up and use Intel.


The server-grade SLC stuff with a supercap, I hope, not the scary 
consumer-oriented MLC "pray you weren't writing anything during 
power-loss" devices?


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread John R Pierce

On 04/14/11 1:35 AM, Henry C. wrote:

Hint: don't use cheap SSDs - cough up and use Intel.


aren't most of the Intel SSDs still MLC, and don't they still have 
performance and reliability issues with sustained small-block random writes 
such as are generated by database servers?   the enterprise-grade SLC SSD 
drives are things like STEC ZeusIOPS and Seagate Pulsar, and the majority 
of them end up in EMC and other big-iron SAN systems.






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL trap, and assertion failed

2011-04-14 Thread Craig Ringer

On 14/04/2011 2:06 PM, Radosław Smogura wrote:

Hello,

I have small crash reporting code, which I use during mmap-ing database. After
last merge with master I got

TRAP: FailedAssertion("!(slot > 0 && slot <= PMSignalState->num_child_flags)",
File: "pmsignal.c", Line: 227)
LOG:  server process (PID 5128) was terminated by signal 6: Aborted
LOG:  terminating any other active server processes
TRAP: FailedAssertion("!(slot > 0 && slot <= PMSignalState->num_child_flags)",
File: "pmsignal.c", Line: 189)

I've installed crash reports on sigbus and sigseg signals. May I ask what may
be wrong, and how to prevent this?


The fact that you mention merging with master, and the fact that you're 
trapping signals, suggests that you're running a modified Pg. Without 
knowing what your changes are, it's probably going to be hard for anyone 
to help. Thanks to the joy of C, it's easy to stamp all over the stack 
or the heap, so all sorts of weird and wonderful things can happen in 
code unrelated to the real source of the problem...


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL trap, and assertion failed

2011-04-14 Thread rsmogura

On Thu, 14 Apr 2011 16:57:01 +0800, Craig Ringer wrote:

On 14/04/2011 2:06 PM, Radosław Smogura wrote:

Hello,

I have small crash reporting code, which I use during mmap-ing database.
After last merge with master I got

TRAP: FailedAssertion("!(slot > 0 && slot <= PMSignalState->num_child_flags)",
File: "pmsignal.c", Line: 227)
LOG:  server process (PID 5128) was terminated by signal 6: Aborted
LOG:  terminating any other active server processes
TRAP: FailedAssertion("!(slot > 0 && slot <= PMSignalState->num_child_flags)",
File: "pmsignal.c", Line: 189)

I've installed crash reports on sigbus and sigseg signals. May I ask what
may be wrong, and how to prevent this?


The fact that you mention merging with master, and the fact that
you're trapping signals, suggests that you're running a modified Pg.
Without knowing what your changes are, it's probably going to be hard
for anyone to help. Thanks to the joy of C, it's easy to stamp all
over the stack or the heap, so all sorts of weird and wonderful things
can happen in code unrelated to the real source of the problem...


Yes, indeed, I am dealing with mmappings and playing with memory a little,
so there is some probability that I changed memory somewhere. I asked
because someone might have this assert in mind in connection with such a
change. At this time this trap does not seriously affect my work. I would
actually like to show my changes, but they are too big (140 kB of bz2),
and I still have some small problems with vacuum.


Regards,
Radek

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Leonardo Francalanci
have a look at 

 
http://postgresql.1045698.n5.nabble.com/Intel-SSDs-that-may-not-suck-td4268261.html


It looks like those are safe to use with a db, and aren't that expensive.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Henry C.

On Thu, April 14, 2011 10:51, Craig Ringer wrote:
 On 14/04/2011 4:35 PM, Henry C. wrote:


 There is no going back.  Hint: don't use cheap SSDs - cough up and use
 Intel.


 The server-grade SLC stuff with a supercap, I hope, not the scary
 consumer-oriented MLC pray you weren't writing anything during power-loss
 devices?

That's what a UPS and genset are for.  Who writes critical stuff to *any*
drive without power backup?

You have a valid point about using SLC if that's what you need though. 
However, MLC works just fine provided you stick them into RAID1.  In fact, we
use a bunch of them in RAID0 on top of RAID1.

In our environment (clusters) it's all about using cheap consumer-grade
commodity hardware with lots of redundancy to cater for the inevitable
failures.  The trade-off is huge:  performance with low cost.

We've been using MLC intel drives since they came out and have never had a
failure.  Other SSDs we've tried have failed, and so have hard drives.  The
point though, is that there are tremendous performance gains to be had with
commodity h/w if you factor in failure rates and make *sure* you have
redundancy.

h


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Arnaud Lesauvage

On 14/04/2011 10:54, John R Pierce wrote:

On 04/14/11 1:35 AM, Henry C. wrote:

 Hint: don't use cheap SSDs - cough up and use Intel.


aren't most of the Intel SSDs still MLC, and don't they still have
performance and reliability issues with sustained small-block random writes
such as are generated by database servers?   the enterprise-grade SLC SSD
drives are things like STEC ZeusIOPS and Seagate Pulsar, and the majority
of them end up in EMC and other big-iron SAN systems.



I think Henry is referring to Intel's X25-E line. They are SLC, 
enterprise grade.

Quite expensive, though: ~700 euros for the 64GB version.
We have one of them in a production server (light load); it works very 
well so far.
The performance gain versus a WD Raptor RAID array is huge, though I 
never tried to quantify it.


Arnaud

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Henry C.


On Thu, April 14, 2011 11:30, Leonardo Francalanci wrote:
 have a look at

 http://postgresql.1045698.n5.nabble.com/Intel-SSDs-that-may-not-suck-td4268261.html



 It looks like those are safe to use with a db, and aren't that expensive.

The new SSDs look great.  From our experience, we trust SSDs (even MLC) far
more than mechanical hard drives.

I believe this perception that SSDs are less safe than failure-prone
mechanical hard drives will eventually change.

In the meantime, we've embraced them and the advantages are compelling.

h





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Arnaud Lesauvage

On 14/04/2011 11:40, Henry C. wrote:

You have a valid point about using SLC if that's what you need though.
However, MLC works just fine provided you stick them into RAID1.  In fact, we
use a bunch of them in RAID0 on top of RAID1.


AFAIK, you won't have TRIM support on RAID-arrayed SSDs.
That might change soon, but I think that RAID boards supporting TRIM are 
still a work in progress.


Arnaud

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Radosław Smogura

On Thu, 14 Apr 2011 11:46:12 +0200, Henry C. wrote:

On Thu, April 14, 2011 11:30, Leonardo Francalanci wrote:
 have a look at
 http://postgresql.1045698.n5.nabble.com/Intel-SSDs-that-may-not-suck-td4268261.html

 It looks like those are safe to use with a db, and aren't that expensive.

The new SSDs look great.  From our experience, we trust SSDs (even MLC) far
more than mechanical hard drives.

I believe this perception that SSDs are less safe than failure-prone
mechanical hard drives will eventually change.

In the meantime, we've embraced them and the advantages are compelling.

h


One thing you should care about is so-called write endurance: the number
of writes to one memory region before it is destroyed. If your SSD drive
does not have transparent allocation, you may destroy it really fast,
because every write of a given block lands in the same memory segment;
the clog/xlog area may fail within 10k-100k writes. But if your SSD has
transparent allocation (wear levelling), the internal controller counts
the writes to each memory cell, and when the lifetime of a cell nears its
end, it associates the block with a different cell. With transparent
allocation you usually need not worry about journaling file systems, or
about storing logs or other frequently updated data there. You may
estimate the lifetime of your SSD with:

WritesToDestroyCells = write_endurance * disk_size
AvgLifeTime = WritesToDestroyCells / writes_per_sec

Those are high numbers; even with a simple disk at 10,000 cycles * 60 GB,
you would need to send 600 TB of data to one SSD (not completely true, as
you can't write a single byte, only full blocks). Of course, in order to
extend the lifetime of an SSD you should provide a file-system cache, or
an SSD with a cache, and turn off FS journaling.
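
As a quick sanity check of those numbers (the 50 MB/s sustained write rate
below is an assumed figure, purely for illustration):

SELECT 10000.0 * 60 * 1024^3 / 1024^4                AS tib_to_wear_out,       -- ~586 TiB, the "600 TB"
       10000.0 * 60 * 1024^3 / (50 * 1024^2) / 86400 AS days_at_50_mb_per_sec; -- ~142 days of nonstop writes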


Regards,
Radek

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication: slave is in permanent startup 'recovery'

2011-04-14 Thread Henry C.
 On 14/04/2011 2:15 AM, Henry C. wrote:
 Nope, it's working as designed I'm afraid.

 There are params you can tune to control how far slaves are allowed to
 get behind the master before cancelling queries...

Thanks Craig - this dawned on me eventually.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Leonardo Francalanci
 I believe this perception that SSDs are less safe than failure-prone
 mechanical hard drives will eventually change.


By "safe" I mean they won't corrupt data in case the machine crashes.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Craig Ringer

On 14/04/2011 5:40 PM, Henry C. wrote:


The server-grade SLC stuff with a supercap, I hope, not the scary
consumer-oriented MLC "pray you weren't writing anything during power-loss"
devices?


That's what a UPS and genset are for.  Who writes critical stuff to *any*
drive without power backup?


Even a server with redundant PSUs on a UPS backed by generators can go 
down hard and unexpectedly. I'd be extremely nervous unless I could 
afford to lose data since the last backup, or unless I had a really 
trustworthy replication setup going.


Of course, it's wise to have one or both of those conditions be true 
anyway, because no redundant storage system will save you from file 
system corruption caused by an OS bug, data corruption caused by a Pg 
bug, or a DELETE FROM critical_table; by a careless superuser. So I 
guess it doesn't cost you more than the risk of some downtime to use 
potentially corruption-prone non-supercap MLC, and it's probably worth 
it for the performance in your clustered environment.


All I meant with my post was to raise the concern that the OP needs to 
be aware of the untrustworthy nature of even the low-end Intel SSDs. 
They're usable, you just have to compensate for their deficiencies.



You have a valid point about using SLC if that's what you need though.
However, MLC works just fine provided you stick them into RAID1.  In fact, we
use a bunch of them in RAID0 on top of RAID1.


RAID won't help you if they all drop their caches if the power supply 
throws a wobbly. That said, it's certainly good for the lifetime issues 
especially if the units are upgraded or rotated out in phases.


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Vacuumdb error

2011-04-14 Thread Gipsz Jakab
Dear List,

This morning at 01:00 AM a routine maintenance script
(vacuumdb --all --full --analyze) started on our PostgreSQL 9.0.3 server,
and stopped with this error:

sqlstate=23505 ERROR:  duplicate key value violates unique constraint
"pg_index_indexrelid_index"
sqlstate=23505 DETAIL:  Key (indexrelid)=(2678) already exists.

We didn't do or change anything in the PostgreSQL environment (except that
we raised max_locks_per_transaction from 64 to 128).

I searched the internet for this error, but didn't find anything I could
use. Any ideas? I've checked the postgres DB / catalogs / pg_catalog /
tables / pg_index, and there is a record where indexrelid=2678, but only
one such record...

I saw that somebody had the same problem (with the same ID!), and Tom Lane
and Robert Haas requested the output of (select ctid,xmin,xmax,* from
pg_index;) in a file. See attached.

PostgreSQL: PostgreSQL 9.0.3 on i386-portbld-freebsd8.2, compiled by GCC cc
(GCC) 4.2.1 20070719 [FreeBSD], 32-bit

Thanks, and regards,
Carl
ctid;xmin;xmax;indexrelid;indrelid;indnatts;indisunique;indisprimary;indimmediate;indisclustered;indisvalid;indcheckxmin;indisready
(0,1);1;0;2831;2830;2;true;true;true;false;true;false;true
(0,2);1;0;2833;2832;2;true;true;true;false;true;false;true
(0,3);1;0;2835;2834;2;true;true;true;false;true;false;true
(0,4);1;0;2837;2836;2;true;true;true;false;true;false;true
(0,5);1;0;2839;2838;2;true;true;true;false;true;false;true
(0,6);1;0;2841;2840;2;true;true;true;false;true;false;true
(0,7);1;0;2337;2336;2;true;true;true;false;true;false;true
(0,8);1;0;2845;2844;2;true;true;true;false;true;false;true
(0,9);1;0;2847;2846;2;true;true;true;false;true;false;true
(0,10);1;0;2967;2966;2;true;true;true;false;true;false;true
(0,11);1;0;2650;2600;1;true;false;true;false;true;false;true
(0,12);1;0;2651;2601;1;true;false;true;false;true;false;true
(0,13);1;0;2652;2601;1;true;false;true;false;true;false;true
(0,14);1;0;2653;2602;4;true;false;true;false;true;false;true
(0,15);1;0;2654;2602;2;true;false;true;false;true;false;true
(0,16);1;0;2756;2602;1;true;false;true;false;true;false;true
(0,17);1;0;2655;2603;4;true;false;true;false;true;false;true
(0,18);1;0;2757;2603;1;true;false;true;false;true;false;true
(0,19);1;0;2656;2604;2;true;false;true;false;true;false;true
(0,20);1;0;2657;2604;1;true;false;true;false;true;false;true
(0,21);1;0;2658;1249;2;true;false;true;false;true;false;true
(0,22);1;0;2659;1249;2;true;false;true;false;true;false;true
(0,23);1;0;2676;1260;1;true;false;true;false;true;false;true
(0,24);1;0;2677;1260;1;true;false;true;false;true;false;true
(0,25);1;0;2694;1261;2;true;false;true;false;true;false;true
(0,26);1;0;2695;1261;2;true;false;true;false;true;false;true
(0,27);1;0;2660;2605;1;true;false;true;false;true;false;true
(0,28);1;0;2661;2605;2;true;false;true;false;true;false;true
(0,29);1;0;2664;2606;2;false;false;true;false;true;false;true
(0,30);1;0;2665;2606;1;false;false;true;false;true;false;true
(0,31);1;0;2666;2606;1;false;false;true;false;true;false;true
(0,32);1;0;2667;2606;1;true;false;true;false;true;false;true
(0,33);1;0;2668;2607;4;true;false;true;false;true;false;true
(0,34);1;0;2669;2607;2;true;false;true;false;true;false;true
(0,35);1;0;2670;2607;1;true;false;true;false;true;false;true
(0,36);1;0;2671;1262;1;true;false;true;false;true;false;true
(0,37);1;0;2672;1262;1;true;false;true;false;true;false;true
(0,38);1;0;2673;2608;3;false;false;true;false;true;false;true
(0,39);1;0;2674;2608;3;false;false;true;false;true;false;true
(0,40);1;0;2675;2609;3;true;false;true;false;true;false;true
(0,41);1;0;2397;2396;2;true;false;true;false;true;false;true
(0,42);1;0;3502;3501;1;true;false;true;false;true;false;true
(0,43);1;0;3503;3501;2;true;false;true;false;true;false;true
(0,44);1;0;2680;2611;2;true;false;true;false;true;false;true
(0,45);1;0;2187;2611;1;false;false;true;false;true;false;true
(0,46);1;0;2681;2612;1;true;false;true;false;true;false;true
(0,47);1;0;2682;2612;1;true;false;true;false;true;false;true
(0,48);1;0;2683;2613;2;true;false;true;false;true;false;true
(0,49);1;0;2996;2995;1;true;false;true;false;true;false;true
(0,50);1;0;2684;2615;1;true;false;true;false;true;false;true
(0,51);1;0;2685;2615;1;true;false;true;false;true;false;true
(0,52);1;0;2686;2616;3;true;false;true;false;true;false;true
(0,53);1;0;2687;2616;1;true;false;true;false;true;false;true
(0,54);1;0;2688;2617;1;true;false;true;false;true;false;true
(0,55);1;0;2689;2617;4;true;false;true;false;true;false;true
(0,56);1;0;2754;2753;3;true;false;true;false;true;false;true
(0,57);1;0;2755;2753;1;true;false;true;false;true;false;true
(0,58);1;0;1137;1136;1;true;false;true;false;true;false;true
(1,1);1;0;2690;1255;1;true;false;true;false;true;false;true
(1,2);1;0;2691;1255;3;true;false;true;false;true;false;true
(1,3);1;0;2692;2618;1;true;false;true;false;true;false;true
(1,4);1;0;2693;2618;2;true;false;true;false;true;false;true

Re: [GENERAL] Vacuumdb error

2011-04-14 Thread Carl von Clausewitz
OK, thanks, I'll try it tonight.
Regards,
Carl

2011/4/14 Vidhya Bondre vbon...@zedo.com

 Gipsz,

 We got this error too. What we did is run VACUUM ANALYZE VERBOSE and
 after that reindexed the db, and we didn't see the error cropping up again.

 Regards
 Vidhya

 On Thu, Apr 14, 2011 at 5:26 PM, Gipsz Jakab clausewit...@gmail.com wrote:

 Dear List,

 Today morning at 01:00 AM in our PostgreSQL 9.0.3 server a routine
 maintenance script has started (vacuumdb --all --full --analyze), and
 stopped with this error:

 sqlstate=23505 ERROR:  duplicate key value violates unique constraint
 "pg_index_indexrelid_index"
 sqlstate=23505 DETAIL:  Key (indexrelid)=(2678) already exists.

 We didn't do or change anything in the PostgreSQL environment (except we
 raised max_locks_per_transaction from 64 to 128)

 I've started a search on the internet, because of this error, but I didn't
 find anything, that I can use. Any idea? I've checked the postgres DB /
 catalogs / pg_catalog / tables / pg_index, and there is the record where
 indexrelid=2678, but there is only one record there...

 I saw, that somebody has the same problem (with the same ID!), and Tom
 Lane and Robert Haas requested the (select ctid,xmin,xmax,* from pg_index;)
 query output in a file. See attached.

  PostgreSQL: PostgreSQL 9.0.3 on i386-portbld-freebsd8.2, compiled by GCC
 cc (GCC) 4.2.1 20070719 [FreeBSD], 32-bit

 Thanks, and regards,
 Carl


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general





Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Vick Khera
On Thu, Apr 14, 2011 at 12:19 AM, Benjamin Smith
li...@benjamindsmith.com wrote:

 I was wondering if anybody here could comment on the benefits of SSD in
 similar, high-demand rich schema situations?


For the last several months I've been using Texas Memory Systems RamSan-620
drives on my main DB servers.  Having near-zero seek times has been a
tremendous boon to our performance, and they will have pretty much paid for
themselves within the next couple of months.  I.e., the "throw hardware at
it" solution worked really well :)


Re: [GENERAL] Vacuumdb error

2011-04-14 Thread Vidhya Bondre
Gipsz,

We got this error too. What we did is run VACUUM ANALYZE VERBOSE and
after that reindexed the db, and we didn't see the error cropping up again.
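
In SQL terms that was roughly (a sketch; your_db_name is a placeholder):

VACUUM VERBOSE ANALYZE;
REINDEX DATABASE your_db_name;  -- on 9.0 this also rebuilds the system-catalog indexes, pg_index's included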

Regards
Vidhya

On Thu, Apr 14, 2011 at 5:26 PM, Gipsz Jakab clausewit...@gmail.com wrote:

 Dear List,

 Today morning at 01:00 AM in our PostgreSQL 9.0.3 server a routine
 maintenance script has started (vacuumdb --all --full --analyze), and
 stopped with this error:

 sqlstate=23505 ERROR:  duplicate key value violates unique constraint
 "pg_index_indexrelid_index"
 sqlstate=23505 DETAIL:  Key (indexrelid)=(2678) already exists.

 We didn't do or change anything in the PostgreSQL environment (except we
 raised max_locks_per_transaction from 64 to 128)

 I've started a search on the internet, because of this error, but I didn't
 find anything, that I can use. Any idea? I've checked the postgres DB /
 catalogs / pg_catalog / tables / pg_index, and there is the record where
 indexrelid=2678, but there is only one record there...

 I saw, that somebody has the same problem (with the same ID!), and Tom Lane
 and Robert Haas requested the (select ctid,xmin,xmax,* from pg_index;) query
 output in a file. See attached.

  PostgreSQL: PostgreSQL 9.0.3 on i386-portbld-freebsd8.2, compiled by GCC
 cc (GCC) 4.2.1 20070719 [FreeBSD], 32-bit

 Thanks, and regards,
 Carl


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




Re: [BUGS] [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-14 Thread Shianmiin

Merlin Moncure-2 wrote:
 
 
  ... I've coded a
 lot of multi schema designs and they tend to either go the one
 session/schema route or the connection pooling route.  Either way,
 cache memory usage tends to work itself out pretty well (it's never
 been a problem for me before at least).  I can't recall anyone ever
 even complaining about it in a non synthetic test.
 
 

It's good to know that. Many thanks for your help. :)

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/PostgreSQL-backend-process-high-memory-usage-issue-tp4289407p4301555.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [BUGS] [GENERAL] PostgreSQL backend process high memory usage issue

2011-04-14 Thread Shianmiin

Tom Lane-2 wrote:
 
 
 I don't think it's a leak, exactly: it's just that the relcache entry
 for each one of these views occupies about 100K.  A backend that touches
 N of the views is going to need about N*100K in relcache space.  I can't
 get terribly excited about that.  Trying to reduce the size of the
 relcache would be a net loss for most usage patterns (ie, we'd end up
 increasing the amount of re-fetching from the system catalogs that
 backends would have to do).  And I don't think that this test case has
 much of anything to do with sane application design, anyway.  Do you
 really need that many complex views?  Do you really need to have most
 sessions touching all of them?
 
 

Thanks for the clarification, that answers our question and gives us a good
direction where to look for further information. We have gained more
confidence on moving toward using PostgreSQL as our multitenant database
backend. 

The tests were designed to show the high memory usage findings while we were
evaluating PostgreSQL, and yes, it's far from a real-world scenario. However,
the concern does not come from nothing: our current system runs on
Microsoft SQL Server with a one-db-per-tenant multitenancy model. We have one
db server that has 5000 tenant databases, each with 200 tables and 500
views. There are quite a few views that are much more complex than the one
shown in the test. When a request comes in, the application servers will
randomly pick a connection from the pool to query the db, so theoretically
every connection could eventually hit all views; in the real world it may
take quite a while to fill up the memory until it reaches an unacceptable
size. However, it just feels a little weird that there isn't a convenient
way for PostgreSQL to control the cache memory usage of a backend process.  :)
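
Back-of-envelope, using the ~100K-per-view relcache figure from above
against our catalog (200 tables + 500 views; purely illustrative):

SELECT (200 + 500) * 100 / 1024.0 AS approx_mb_per_backend;  -- ~68 MB if a backend touches everything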

We are still at early staging of moving to a different multitenant db model
and there are plenty of options that we can go or get around issues like
this. Thanks again.


--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/PostgreSQL-backend-process-high-memory-usage-issue-tp4289407p4301552.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Lincoln Yeoh

At 06:07 PM 4/14/2011, Radosław Smogura wrote:

One thing you should care about is so-called write endurance: the number
of writes to one memory region before it is destroyed. If your SSD drive
does not have transparent allocation, you may destroy it really fast,
because every write of a given block lands in the same memory segment;
the clog/xlog area may fail within 10k-100k writes. But if your SSD has
transparent allocation (wear levelling), the internal controller counts
the writes to each memory cell, and when the lifetime of a cell nears its
end, it associates the block with a different cell. With transparent
allocation you usually need not worry about journaling file systems, or
about storing logs or other frequently updated data there. You may
estimate the lifetime of your SSD with:

WritesToDestroyCells = write_endurance * disk_size
AvgLifeTime = WritesToDestroyCells / writes_per_sec

Those are high numbers; even with a simple disk at 10,000 cycles * 60 GB,
you would need to send 600 TB of data to one SSD (not completely true, as
you can't write a single byte, only full blocks). Of course, in order to
extend the lifetime of an SSD you should provide a file-system cache, or
an SSD with a cache, and turn off FS journaling.


I'm not an expert on SSDs, but I believe modern SSDs are supposed to
automatically spread the writes across the entire disk where possible,
even to the extent of moving already-written stuff.

So if the drives are full or near full, the tradeoff is between lower
performance (because you have to keep moving stuff about) and lower
lifespan (one area gets overused).

If the drives are mostly empty, the SSD's controller has an easier job:
it doesn't have to move as much data around.


Regards,
Link.









--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding a default value to a column after it exists

2011-04-14 Thread Harald Fuchs
In article 20110413163120.gu24...@shinkuro.com,
Andrew Sullivan a...@crankycanuck.ca writes:

 On Wed, Apr 13, 2011 at 09:21:20AM -0700, Gauthier, Dave wrote:
 Is there a way to add a default value definition to an existing column?  
 Something like an alter table... alter column... default 'foo'.

 ALTER TABLE table ALTER [ COLUMN ] column SET DEFAULT expression

 (see http://www.postgresql.org/docs/9.0/interactive/sql-altertable.html)

 Note that this doesn't actually update the fields that are NULL in the
 column already.  For that, once you had the default in place, you
 could do

 UPDATE table SET column = DEFAULT WHERE column IS NULL

And you probably want to do
  ALTER TABLE table ALTER [ COLUMN ] column SET NOT NULL
after that.
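
Putting the three steps together (a sketch; table and column names are
placeholders):

ALTER TABLE mytable ALTER COLUMN mycol SET DEFAULT 'foo';
UPDATE mytable SET mycol = DEFAULT WHERE mycol IS NULL;
ALTER TABLE mytable ALTER COLUMN mycol SET NOT NULL;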


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pgsql 9.0.1 table corruption

2011-04-14 Thread Dan Biagini
I have a 9.0.1 database with two corrupted tables (one table has 20
rows, the other 140).  The tables *seem* fine for read/select
operations, but updating certain rows in the table produce error
messages:

update media set updated_at = now() at time zone 'UTC';
ERROR:  could not read block 2 in file "base/16384/16485": read only 0
of 8192 bytes

update media_status set updated_at = now() at time zone 'UTC';
2011-04-14 00:15:15 UTC ERROR:  could not read block 3 in file
"base/16384/16543": read only 0 of 8192 bytes
2011-04-14 00:15:15 UTC STATEMENT:  update media_status set updated_at
= now() at time zone 'UTC';

Examining the corrupted files in the filesystem (linux), they aren't
zero bytes:

ll base/16384/16485
-rwx-- 1 postgres postgres 16384 2011-04-07 09:43 base/16384/16485

I ran a VACUUM (FULL, VERBOSE) on the table and the corruption (or
at least the errors on update) has disappeared.  Is it expected that a
VACUUM (FULL) command would/could fix table corruption?  Does that
provide any clues as to what may have happened?

Is there any way to determine how/when this corruption may have occurred?

I suspect that it may have occurred during a filesystem level backup
(ie pg_start_backup(), tar -czf..., pg_stop_backup()), as I performed
a backup and moved the database to a different system.  After
restoring the files and starting postgres I began getting these
errors.  I have tried restoring multiple times with the same tar
archive with the same results (on different systems).

Thanks,
Dan

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgsql 9.0.1 table corruption

2011-04-14 Thread Alan Hodgson
On April 14, 2011 08:10:47 am Dan Biagini wrote:

 I suspect that it may have occurred during a filesystem level backup
 (ie pg_start_backup(), tar -czf..., pg_stop_backup()), as I performed
 a backup and moved the database to a different system.  After
 restoring the files and starting postgres I began getting these
 errors.  I have tried restoring multiple times with the same tar
 archive with the same results (on different systems).

Did you perform a PITR restore using that tar as a base backup? Do any errors 
occur?  The tar file alone will not be an intact backup, as I'm sure you're 
aware.

http://www.postgresql.org/docs/9.0/static/continuous-archiving.html#BACKUP-PITR-RECOVERY
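
For reference, the bracketing around the filesystem copy looks like this
(a sketch; the backup label is arbitrary):

SELECT pg_start_backup('nightly');
-- tar the data directory at the filesystem level here
SELECT pg_stop_backup();
-- a PITR restore then needs recovery.conf plus the archived WAL segments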

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuumdb error

2011-04-14 Thread Tom Lane
Gipsz Jakab clausewit...@gmail.com writes:
 Today morning at 01:00 AM in our PostgreSQL 9.0.3 server a routine
 maintenance script has started (vacuumdb --all --full --analyze), and
 stopped with this error:

 sqlstate=23505 ERROR:  duplicate key value violates unique constraint
 "pg_index_indexrelid_index"
 sqlstate=23505 DETAIL:  Key (indexrelid)=(2678) already exists.

Maybe you had better show us *all* of your routine maintenance steps,
because there are some fairly unexplainable things in that pg_index dump.
VACUUM FULL alone wouldn't have resulted in the pg_index and pg_class
indexes' entries having updated xmins:

 (1,40);48741;0;2678;2610;1;false;false;true;false;true;false;true
 (1,41);48741;0;2679;2610;1;true;false;true;false;true;true;true
 (1,42);49791;0;2662;1259;1;true;false;true;false;true;true;true
 (1,43);49791;0;2663;1259;2;true;false;true;false;true;true;true

and I'm wondering how come three of them have indcheckxmin true too.
I think you must be whacking the system catalogs around harder than
you've admitted to.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuumdb error

2011-04-14 Thread Carl von Clausewitz
Hi,

see the two scripts attached. The first one is postgres_maintenance.sh and
the second is postgres_backup.sh. I've attached them and also copied them
inline, because of the antivirus filters :-)

regards,
Carl

Maintenance:
#!/bin/sh
date >> /var/log/postgresql_maintenance.log
/usr/local/bin/reindexdb --all --username=cvc >> /var/log/postgresql_maintenance.log
echo "Reindex done" >> /var/log/postgresql_maintenance.log
/usr/local/bin/vacuumdb --all --full --analyze --username=cvc >> /var/log/postgresql_maintenance.log
echo "Vacuum done" >> /var/log/postgresql_maintenance.log

Backup:
#!/bin/sh
date >> /var/log/postgresql_backup.log
/usr/local/bin/pg_dump --create --file=/usr/NAS/mentesek/postgres/$(date +%Y%m%d).backup --format=c --no-owner --username=cvc ktv_migracio_access
echo "Backup done" >> /var/log/postgresql_backup.log


2011/4/14 Tom Lane t...@sss.pgh.pa.us

 Gipsz Jakab clausewit...@gmail.com writes:
  Today morning at 01:00 AM in our PostgreSQL 9.0.3 server a routine
  maintenance script has started (vacuumdb --all --full --analyze), and
  stopped with this error:

  sqlstate=23505 ERROR:  duplicate key value violates unique constraint
  "pg_index_indexrelid_index"
  sqlstate=23505 DETAIL:  Key (indexrelid)=(2678) already exists.

 Maybe you had better show us *all* of your routine maintenance steps,
 because there are some fairly unexplainable things in that pg_index dump.
 VACUUM FULL alone wouldn't have resulted in the pg_index and pg_class
 indexes' entries having updated xmins:

  (1,40);48741;0;2678;2610;1;false;false;true;false;true;false;true
  (1,41);48741;0;2679;2610;1;true;false;true;false;true;true;true
  (1,42);49791;0;2662;1259;1;true;false;true;false;true;true;true
  (1,43);49791;0;2663;1259;2;true;false;true;false;true;true;true

 and I'm wondering how come three of them have indcheckxmin true too.
 I think you must be whacking the system catalogs around harder than
 you've admitted to.

regards, tom lane



postgre_backup.sh
Description: Bourne shell script


postgre_maintenance.sh
Description: Bourne shell script

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuumdb error

2011-04-14 Thread Tom Lane
Carl von Clausewitz clausewit...@gmail.com writes:
 Maintenance:
 #!/bin/sh
 date >> /var/log/postgresql_maintenance.log
 /usr/local/bin/reindexdb --all --username=cvc >> /var/log/postgresql_maintenance.log
 echo "Reindex done" >> /var/log/postgresql_maintenance.log
 /usr/local/bin/vacuumdb --all --full --analyze --username=cvc >> /var/log/postgresql_maintenance.log
 echo "Vacuum done" >> /var/log/postgresql_maintenance.log

Just FYI, the reindex step is 100% useless if you're going to do a
vacuum full afterwards.

Before 9.0 there was some value in doing vacuum full and then reindex,
but none whatsoever in the other ordering.  As of 9.0 there's just no
point at all in doing both.  VACUUM FULL rebuilds the indexes anyway.
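
In 9.0 syntax, that vacuumdb invocation boils down to running, in each
database:

VACUUM (FULL, ANALYZE);

which rebuilds every index as a side effect, so the earlier REINDEX pass
adds nothing.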

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuumdb error

2011-04-14 Thread Carl von Clausewitz
OK, thanks for the information. I made a mistake; I will change the script,
but first I will try what Vidhya suggested. Let's see what happens.

Regards,
Carl

2011/4/14 Tom Lane t...@sss.pgh.pa.us

 Carl von Clausewitz clausewit...@gmail.com writes:
  Maintenance:
  #!/bin/sh
  date >> /var/log/postgresql_maintenance.log
  /usr/local/bin/reindexdb --all --username=cvc >> /var/log/postgresql_maintenance.log
  echo "Reindex done" >> /var/log/postgresql_maintenance.log
  /usr/local/bin/vacuumdb --all --full --analyze --username=cvc >> /var/log/postgresql_maintenance.log
  echo "Vacuum done" >> /var/log/postgresql_maintenance.log

 Just FYI, the reindex step is 100% useless if you're going to do a
 vacuum full afterwards.

 Before 9.0 there was some value in doing vacuum full and then reindex,
 but none whatsoever in the other ordering.  As of 9.0 there's just no
 point at all in doing both.  VACUUM FULL rebuilds the indexes anyway.

regards, tom lane



Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Benjamin Smith
After a glowing review at AnandTech (including DB benchmarks!) I decided to 
spring for an OCZ Vertex 3 Pro 120 for evaluation purposes. It cost about $300 
with shipping, etc., and at this point, won't be putting any 

Considering that I sprang for 96 GB of ECC RAM last spring for around $5000, 
even if I put the OCZ drives in pairs w/RAID1, I'd still come out well ahead 
if it allows me to put off buying more servers for a year or two. 

-Ben 

On Thursday, April 14, 2011 02:30:06 AM Leonardo Francalanci wrote:
 have a look at
 
  http://postgresql.1045698.n5.nabble.com/Intel-SSDs-that-may-not-suck-td4268261.html
 
 
 It looks like those are safe to use with a db, and aren't that expensive.




[GENERAL] function to filter out tokens (sql syntax lexical)?

2011-04-14 Thread Wim Bertels
Hello,

According to
http://www.postgresql.org/docs/8.4/interactive/sql-syntax-lexical.htm
"A token can be a key word, an identifier, a quoted identifier, a literal
(or constant), or a special character symbol."

I suppose these different tokens are used by the internal parser?
So my question is:
is there a function one could use to filter out the specific tokens?

Example:

Given the statement: INSERT INTO MY_TABLE VALUES (3, 'hi there');

We can do something like:

select filter_tokens($$INSERT INTO MY_TABLE VALUES (3, 'hi there');$$);

With output:

token      | type
-----------+------------
Insert     | keyword
into       | keyword
my_table   | identifier
values     | keyword
..         |
'hi there' | literal

or in another way with similar functionality?

Kind regards,
Wim


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Scott Marlowe
On Thu, Apr 14, 2011 at 3:40 AM, Henry C. he...@cityweb.co.za wrote:

 On Thu, April 14, 2011 10:51, Craig Ringer wrote:
 On 14/04/2011 4:35 PM, Henry C. wrote:


 There is no going back.  Hint: don't use cheap SSDs - cough up and use
 Intel.


 The server-grade SLC stuff with a supercap, I hope, not the scary
 consumer-oriented MLC pray you weren't writing anything during power-loss
 devices?

 That's what a UPS and genset are for.  Who writes critical stuff to *any*
 drive without power backup?

Because power supply systems with UPS never fail.

(hint, I've seen them fail, more than once)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Calculating memory allocaiton per process

2011-04-14 Thread David Kerr
Howdy, 

Is there a doc somewhere that has a formula for how much memory a PG backend 
process will use?

I'm looking to get something like total_mem = max_connections * ( work_mem + 
temp_buffers )
// I know it's more complicated than that, which is why I'm asking =)

Something similar to Table 17-2 here: 
http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html
would be awesome.

Dave

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication: slave is in permanent startup 'recovery'

2011-04-14 Thread Tomas Vondra
Dne 14.4.2011 10:01, Craig Ringer napsal(a):
 That's a limitation of streaming replication. It's a lot like the issue
 Oracle has with running out of undo or redo log space. Essentially, my
 understanding is that the hot standby server cannot replay WAL archives
 to keep up with the master's changes at the same time as running
 queries. To avoid getting too far behind the master because of a huge or
 stuck query, it'll cancel very long-running queries.
 
 Again from my limited understanding, the reason it can't replay WAL is
 because the WAL records include overwrites of pages VACUUMed and re-used
 on the master. HS is block-level replication; it cannot keep a page
 in-place on the slave when the master has erased or overwritten it.

I guess it's worth mentioning there's another solution to this problem,
implemented in 9.1 - it's called feedback. That means the standby asks
the primary not to overwrite the blocks - see this

http://developer.postgresql.org/pgdocs/postgres/hot-standby.html#HOT-STANDBY-CONFLICT

But obviously this is not for free - you have to pay a price, in this
case bloat on the primary (as it can't do VACUUM properly).
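
For the standby, a minimal sketch (hot_standby_feedback is the 9.1
parameter name; verify against your release):

    # postgresql.conf on the standby, 9.1+
    hot_standby_feedback = on

On 9.0, the closest knobs are max_standby_streaming_delay and
max_standby_archive_delay, which trade replay lag for fewer cancelled
queries rather than avoiding the conflicts.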

regards
Tomas

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Andrew Sullivan
On Thu, Apr 14, 2011 at 12:27:34PM -0600, Scott Marlowe wrote:
  That's what a UPS and genset are for.  Who writes critical stuff to *any*
  drive without power backup?
 
 Because power supply systems with UPS never fail.

Right, there's obviously a trade-off here.  Some of this has to do
with how much your data is worth vs. how much the speed is worth.
There's also the issue of whether you can stand to lose a few rows,
and whether you can stand to lose them for a short time.  For
instance, collecting user comments might be a matter of great value,
but if you write them to more than one system, you might not care
whether one of the systems fails briefly.  In that case, maybe big
redundancy of cheap disks with power backup is good enough to meet the
price:value ratio.  On stock trades worth maybe millions of dollars,
not so much: you miss your teeny window of opportunity to do a trade
and suddenly you're out in the street wearing a barrel.

I can think of lots of different points to be along that continuum,
and surely nobody is suggesting that there is one right answer for
everything.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Calculating memory allocaiton per process

2011-04-14 Thread Jerry Sievers
David Kerr d...@mr-paradox.net writes:

 Howdy, 

 Is there a doc somewhere that has a formula for how much memory PG
 backend process will use?

 I'm looking to get something like total_mem = max_connections * (
 work_mem + temp_buffers ) // I know it's more complicated than that,
 which is why I'm asking =)

Depends on your query complexity, load distribution across concurrent
sessions and session lifetime.

work_mem will, in cases of queries having multiple sort nodes, have to
be counted multiple times on behalf of a single backend.

Some observation of the running system can be your best bet.
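
As a hedged starting point only (a guess at an upper bound, not an
official formula), something like this can be computed from pg_settings.
The factor of 3 sort/hash nodes per backend is an assumption; adjust it
after inspecting your own EXPLAIN plans:

-- Rough upper-bound estimate in MB, assuming the default 8kB block size
-- (work_mem is reported in kB, temp_buffers in 8kB buffers):
SELECT max_conn * (3 * work_mem_kb + temp_buf_kb) / 1024 AS est_total_mb
  FROM (SELECT
          (SELECT setting::bigint FROM pg_settings
            WHERE name = 'max_connections')  AS max_conn,
          (SELECT setting::bigint FROM pg_settings
            WHERE name = 'work_mem')         AS work_mem_kb,
          (SELECT setting::bigint FROM pg_settings
            WHERE name = 'temp_buffers') * 8 AS temp_buf_kb) s;

Add shared_buffers and wal_buffers on top of that for a server-wide total.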

HTH

 Something similar to Table 17-2 here:
 http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html
 would be awesome.

 Dave

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: gsiever...@comcast.net
p: 305.321.1144

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Greg Smith

Henry C. wrote:

I believe this perception that SSDs are less safe than failure-prone
mechanical hard drives will eventually change.
  


Only because the manufacturers are starting to care about write 
durability enough to include the right hardware for it.  Many of them 
are less safe right now on some common database tasks.  Intel's gen 1 
and gen 2 drives are garbage for database use.  I've had customers lose 
terabytes of data due to them.  Yes, every system can fail, but these 
*will* fail and corrupt your database the first time there's a serious 
power problem of some sort.  And the idea that a UPS is sufficient to 
protect against that even happening in wildly optimistic.


See http://wiki.postgresql.org/wiki/Reliable_Writes for more background 
here, and links to reading on the older Intel drives.  I summarized the 
situation with their newer 320 series drives at 
http://blog.2ndquadrant.com/en/2011/04/intel-ssd-now-off-the-sherr-sh.html  
Those finally get the write flushing right.  But the random-seek IOPS 
are wildly lower than you might expect on read/write workloads.  My own 
tests and other sources have all come up with around 3500 IOPS as being 
a real-world expectation for the larger sizes of these drives.  Also, it 
is cheap flash, so durability in a server environment won't be great.  
Don't put your WAL on them if you have a high transaction rate.  Put 
some indexes there instead.
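
For example, a sketch of that split (tablespace path, table, and index
names are illustrative, not from this thread):

CREATE TABLESPACE ssd LOCATION '/mnt/ssd/pg_tblspc';
CREATE INDEX orders_created_idx ON orders (created_at) TABLESPACE ssd;

If the SSD then dies, you are looking at a REINDEX rather than lost WAL.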


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Calculating memory allocaiton per process

2011-04-14 Thread David Kerr
On Thu, Apr 14, 2011 at 03:00:07PM -0400, Jerry Sievers wrote:
- David Kerr d...@mr-paradox.net writes:
- 
-  Howdy, 
- 
-  Is there a doc somewhere that has a formula for how much memory PG
-  backend process will use?
- 
-  I'm looking to get something like total_mem = max_connections * (
-  work_mem + temp_buffers ) // I know it's more complicated than that,
-  which is why I'm asking =)
- 
- Depends on your query complexity, load distribution across concurrent
- sessions and session lifetime.
- 
- work_mem will, in cases of queries having multiple sort nodes, have to
- be counted multiple times on behalf of a single backend.
- 
- Some observation of the running system can be your best bet.
- 
- HTH

Yeah, that's the complication that I knew about (but am still not able to
fully 'get', let alone vocalize). 

Are there no rules of thumb or upper bounds to help estimate total memory usage?

Thanks

Dave

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuumdb error

2011-04-14 Thread Carl von Clausewitz
Everything is fine now; the reordered script fixed everything. Thanks, all.

Regards,
Carl

2011/4/14 Carl von Clausewitz clausewit...@gmail.com

 OK, thanks for the information. I've made a mistake; I will change the
 script, but first I will try what Vidhya told me. Let me see what
 happens.

 Regards,
 Carl

 2011/4/14 Tom Lane t...@sss.pgh.pa.us

 Carl von Clausewitz clausewit...@gmail.com writes:
  Maintenance:
  #!/bin/sh
  date >> /var/log/postgresql_maintenance.log
  /usr/local/bin/reindexdb --all --username=cvc >> /var/log/postgresql_maintenance.log
  echo Reindex done >> /var/log/postgresql_maintenance.log
  /usr/local/bin/vacuumdb --all --full --analyze --username=cvc >> /var/log/postgresql_maintenance.log
  echo Vacuum done >> /var/log/postgresql_maintenance.log

 Just FYI, the reindex step is 100% useless if you're going to do a
 vacuum full afterwards.

 Before 9.0 there was some value in doing vacuum full and then reindex,
 but none whatsoever in the other ordering.  As of 9.0 there's just no
 point at all in doing both.  VACUUM FULL rebuilds the indexes anyway.

regards, tom lane





Re: [GENERAL] SSDs with Postgresql?

2011-04-14 Thread Scott Marlowe
On Thu, Apr 14, 2011 at 1:14 PM, Greg Smith g...@2ndquadrant.com wrote:

  And the idea that a UPS is sufficient to protect against that even
 happening is wildly optimistic.

Note that the real danger in relying on a UPS is that most power
conditioning / UPS setups tend to fail in total, not in parts.  The
two times I've seen it happen, the whole grid shut down completely for
a few hours.  The first time we had Oracle, Ingres, Sybase,
SQL-Server, etc. etc. database servers across the company corrupted.
DAYS of recovery time, and since they all failed at once, the machines
in replication got corrupted as well.  Restoring production dbs from
backups took days.

The only machine to survive was the corporate intranet running pgsql
on twin 15k SCSI drives with a proven reliable battery backed
controller on it.  It was mine.  This was a company that lost
something like $10k a minute for downtime.  And the downtime was
measured not in seconds, minutes or hours, but days because everyone
had said the same thing: "The UPS and power conditioners make power
plug pull survivability a non-issue."  When the only machine with an
uncorrupted database is the corporate intranet server the 24/7
production guys look pretty stupid.  They also suddenly decided to
start doing power plug pull tests on all database servers.

To make matters worse, the kind of system that NEEDs the higher
throughput from SSDs is likely the worst kind of system to suffer
downtime due to corruption.  OTOH, restores from backups
should run pretty fast. :)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] function to filter out tokens (sql syntax lexical)?

2011-04-14 Thread Tom Lane
Wim Bertels wim.bert...@khleuven.be writes:
 is there a function one could use to filter out the specific tokes?

In 8.4 and up, pg_get_keywords() might help.
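
For example (a sketch; pg_get_keywords() lists the server's key words
with a category code, so it can classify key words, but it is not a full
lexer - identifiers and literals still need separate handling):

SELECT word, catcode, catdesc
  FROM pg_get_keywords()
 WHERE word IN ('insert', 'into', 'values');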

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple foreign keys with the same name and information_schema

2011-04-14 Thread nothing
Check out this link. I think it will be what you are looking for:
http://errorbank.blogspot.com/2011/03/list-all-foreign-keys-references-for.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Compression

2011-04-14 Thread Yang Zhang
Is there any effort to add compression into PG, a la MySQL's
row_format=compressed or HBase's LZO block compression?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Compression

2011-04-14 Thread Adrian Klaver
On Thursday, April 14, 2011 4:01:54 pm Yang Zhang wrote:
 Is there any effort to add compression into PG, a la MySQL's
 row_format=compressed or HBase's LZO block compression?

TOAST?
http://www.postgresql.org/docs/9.0/interactive/storage-toast.html
-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Normalize or not?

2011-04-14 Thread Perry Smith
Based upon my reading of wikipedia, the solution I think I want to implement is 
not in 2nd normal form.  I'm wondering if anyone on this list has suggestions, 
etc.

I have a table called containers where object A will contain object B.  There 
is a middle column that will describe the type of association.  For example, 
the account team will have Joe as a member and Fred will be the lead.  This 
starts off with:

Containers:
Left     How     Right
Account  member  Joe
Account  lead    Fred

There is another table (which I've talked about before) which tells me that 
Account is a Team.  I call this the "names" table; it has two columns: name 
and type.  So:

Names:
Name     Type
Account  Team
Joe      User
Fred     User

in this case.  I want to restrict the types of association: e.g. I want teams 
to contain users but not allow users to contain teams.  And I want some types 
of association not to apply to others; for example, a Group can contain a Team 
but a Team cannot contain a Group.  Thus, I have a third table with entries 
like:

Types of Associations:
LType  How      RType
Team   member   User
Team   lead     User
Group  contain  Team

There is a foreign key constraint so that Containers(Left) and 
Containers(Right) must be in Names(Name) (I hope that syntax makes sense).  But 
now comes the hard part.

For each row in Containers, I want to take the Type from Names associated with 
the Left + Containers(How) + the Type from Names associated with the Right to 
be an entry in the Types of Associations table.  For example, "Account member 
Joe" would translate to "Team member User", which is in the Types of 
Associations table, so it is a valid entry.  But I don't believe I can do this with foreign 
key constraints.  If I can, then stop me here and tell me how.

My solution to solving this that I am considering is to add in the LType and 
RType into the Containers so now Containers becomes:

Containers:
Left     LType  How     Right  RType
Account  Team   member  Joe    User
Account  Team   lead    Fred   User

and now my foreign key constraint is simply that 
Containers(Ltype),Containers(How),Containers(RType) be in Types of Association. 
 I would also change my constraint so that Left,LType of Containers must be in 
Names as well as Right,RType be in Names.
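
A hedged DDL sketch of those constraints (table and column names are
illustrative guesses at the schema described above):

CREATE TABLE names (
    name text,
    type text,
    PRIMARY KEY (name, type)
);

CREATE TABLE types_of_associations (
    ltype text,
    how   text,
    rtype text,
    PRIMARY KEY (ltype, how, rtype)
);

CREATE TABLE containers (
    lname text,
    ltype text,
    how   text,
    rname text,
    rtype text,
    FOREIGN KEY (lname, ltype) REFERENCES names (name, type),
    FOREIGN KEY (rname, rtype) REFERENCES names (name, type),
    FOREIGN KEY (ltype, how, rtype)
        REFERENCES types_of_associations (ltype, how, rtype)
);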

According to my interpretation, the RType and LType columns of Containers are 
non-prime but are functionally dependent upon either Left or Right so this 
table is not even second normal form.  But, if I add in the constraint that 
both the Name and Type must be in Names, does it really matter?  I know that I 
will not be able to change just LType or just RType and create an update 
anomaly.  I guess I should mention that changing the Type of a name is not even 
being considered.  I think of the Names and Types of Associations tables as 
a kind of constants, which will be defined once and rarely change.


I hope this is reasonably easy to follow.  I'm looking forward to your thoughts 
and comments.

Thank you,
Perry Smith



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [ADMIN] Streaming Replication limitations

2011-04-14 Thread Jeff Davis
On Wed, 2011-04-13 at 14:42 -0400, Tom Lane wrote:
 Simon Riggs si...@2ndquadrant.com writes:
  2011/4/13 Tom Lane t...@sss.pgh.pa.us:
  Short answer is to test the case you have in mind and see.
 
  That's the long answer, not least because the absence of a failure in
  a test is not conclusive proof that it won't fail at some point in the
  future while in production.
 
 Not really.  Every known source of incompatibility (endianness,
 alignment, float format, etc) is checked at postmaster startup via
 entries in pg_control.

I seem to remember that Mac and Linux have a different notion of what
en_US collation means (I couldn't find any standard anywhere to say that
one was right and the other was wrong). So, that risks index corruption.

Regards,
Jeff Davis


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Compression

2011-04-14 Thread Craig Ringer

On 15/04/2011 7:01 AM, Yang Zhang wrote:

Is there any effort to add compression into PG, a la MySQL's
row_format=compressed or HBase's LZO block compression?


There's no row compression, but as mentioned by others there is 
out-of-line compression of large values using TOAST.


Row compression would be interesting, but I can't imagine it not having 
been investigated already.


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Normalize or not?

2011-04-14 Thread John R Pierce

On 04/14/11 4:28 PM, Perry Smith wrote:

I hope this is reasonably easy to follow.  I'm looking forward to your thoughts 
and comments.


at least on first glance, that looks like object oriented methodology, 
not relational.






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Normalize or not?

2011-04-14 Thread David Johnston
It is not easy to follow...but I'll toss out some thoughts anyway.

I would generally not de-normalize the data model in order to make
referential integrity easier.  If your requirements are that complex then
writing a wrapper around the insert/update layer for the tables in question
is a better idea.  You can perform a higher level of validation without
sacrificing data integrity.

I would suggest looking at both the kinds of selection queries you plan to
run as well as what kinds of data (i.e., relationships between containers
and objects) to see how readily the final model will work.  Two things that
SQL does have challenges with are indefinite hierarchies and many-to-many
relationships with constraints.  You seem to be using both.  It isn't that
SQL cannot handle them but as a programmer you need to be extra careful to
introduce appropriate constraints on data entry and to spend extra time
figuring out what valid combinations are likely to result in incorrect
results given the kinds of queries you are using.

There isn't anything wrong with the normal form violation but you probably
want to understand where your model is vulnerable to update anomalies.  If
you can add some additional tables and turn the de-normalized data into
foreign-keys you can use referential integrity to at least limit your
exposure but it may not remove it completely.

In the end sometimes the only real question of import is whether the
solution works.  If you are concerned then anything built upon the schema
should be designed such that you can refactor the underlying schema if you
desire.  But it isn't necessarily worth it to exhaust every possible avenue
to find the better solution if you have one that works.  The question
becomes whether you have the resources (time and/or money) to search for the
better solution now or whether you should just use the immediately viable
solution until it breaks.

David J.


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Perry Smith
Sent: Thursday, April 14, 2011 7:29 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Normalize or not?

I hope this is reasonably easy to follow.  I'm looking forward to your
thoughts and comments.

Thank you,
Perry Smith



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Compression

2011-04-14 Thread Adrian Klaver
On Thursday, April 14, 2011 4:50:44 pm Craig Ringer wrote:
 On 15/04/2011 7:01 AM, Yang Zhang wrote:
  Is there any effort to add compression into PG, a la MySQL's
  row_format=compressed or HBase's LZO block compression?
 
 There's no row compression, but as mentioned by others there is
 out-of-line compression of large values using TOAST.

I could be misunderstanding but I thought compression happened in the row as 
well. From the docs:

"EXTENDED allows both compression and out-of-line storage. This is the default 
for most TOAST-able data types. Compression will be attempted first, then 
out-of-line storage if the row is still too big."
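
A quick way to see that in action (a sketch; pg_column_size() reports
the stored size of a value, after any compression):

CREATE TEMP TABLE toast_demo (t text);
INSERT INTO toast_demo VALUES (repeat('abc', 100000));
SELECT octet_length(t)   AS raw_bytes,
       pg_column_size(t) AS stored_bytes
  FROM toast_demo;

With repetitive data like this, stored_bytes comes out far smaller than
the 300000-byte raw value.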

 
 Row compression would be interesting, but I can't imagine it not having
 been investigated already.

-- 
Adrian Klaver
adrian.kla...@gmail.com


Re: [GENERAL] Compression

2011-04-14 Thread Yang Zhang
On Thu, Apr 14, 2011 at 5:07 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
 On Thursday, April 14, 2011 4:50:44 pm Craig Ringer wrote:
  On 15/04/2011 7:01 AM, Yang Zhang wrote:
   Is there any effort to add compression into PG, a la MySQL's
   row_format=compressed or HBase's LZO block compression?

  There's no row compression, but as mentioned by others there is
  out-of-line compression of large values using TOAST.

 I could be misunderstanding but I thought compression happened in the row as
 well. From the docs:

 "EXTENDED allows both compression and out-of-line storage. This is the
 default for most TOAST-able data types. Compression will be attempted first,
 then out-of-line storage if the row is still too big."

  Row compression would be interesting, but I can't imagine it not having
  been investigated already.

 --
 Adrian Klaver
 adrian.kla...@gmail.com

Already know about TOAST.  I could've been clearer, but that's not the
same as the block-/page-level compression I was referring to.

-- 
Yang Zhang
http://yz.mit.edu/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How can you have an Exclusive Lock on nothing? please help me understand this lock monitoring query output

2011-04-14 Thread Aleksey Tsalolikhin
Hi.  I use the following query (from
http://wiki.postgresql.org/wiki/Lock_Monitoring)
to monitor locks; and I've got an ExclusiveLock that does not have a relation
name associated with it.  What is locked with the Exclusive Lock in this case,
please?  (it's between d and e tables below)


psql -U postgres -d ddcKeyGen -c 'select
  pg_class.relname, pg_locks.transactionid, pg_locks.mode, pg_locks.granted,
  pg_stat_activity.usename, substr(pg_stat_activity.current_query,1,50),
  pg_stat_activity.query_start,
  age(now(), pg_stat_activity.query_start) as age, pg_stat_activity.procpid
from pg_stat_activity, pg_locks
  left outer join pg_class on (pg_locks.relation = pg_class.oid)
where pg_locks.pid = pg_stat_activity.procpid
order by query_start;'



Output:

 relname | transactionid |      mode       | granted | usename | substr |          query_start          |       age       | procpid
---------+---------------+-----------------+---------+---------+--------+-------------------------------+-----------------+---------
 a_index |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
 b_index |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
 c_index |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
 d       |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
         |               | ExclusiveLock   | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
 e       |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
 f_index |               | ShareLock       | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
 g_index |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798


Thanks,
Aleksey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] UUID can be written to row when passed as parameter/argument to a PL/pgSQL function. Feature or bug?

2011-04-14 Thread Basil Bourque
If I pass the hex string representation of a UUID to a PL/pgSQL function as a 
varchar, that value cannot be used directly when writing to a row whose column 
data type is uuid, in Postgres 9.0.x. Normally Postgres automatically 
converts a hex string to a 128-bit UUID value and back again. 

Is not doing so in a function a bug?

Example follows below. 

(1) Create a simple table with one column of type uuid.

---
CREATE TABLE uuid_tbl_
(
  uuid_col_ uuid NOT NULL
)
WITH (
  OIDS=FALSE
);
---

(2) Create this function.

---
CREATE OR REPLACE FUNCTION uuid_write_( varchar ) RETURNS boolean AS $$

BEGIN
INSERT INTO uuid_tbl_ ( uuid_col_ )
VALUES ( $1 );
RETURN True;
END;

$$ LANGUAGE plpgsql;


(3) Call this function:
select uuid_write_('34A94C40-453A-4A30-9404-128121E76570');

Note the error:

ERROR:  column uuid_col_ is of type uuid but expression is of type character 
varying
LINE 2:  VALUES ( $1 )
  ^
HINT:  You will need to rewrite or cast the expression.
QUERY:  INSERT INTO uuid_tbl_ ( uuid_col_ )
VALUES ( $1 )
CONTEXT:  PL/pgSQL function uuid_write_ line 3 at SQL statement

** Error **

ERROR: column uuid_col_ is of type uuid but expression is of type character 
varying
SQL state: 42804
Hint: You will need to rewrite or cast the expression.
Context: PL/pgSQL function uuid_write_ line 3 at SQL statement
--

(4) Change the function by assigning the passed hex string to a variable named 
'uuid_arg' and declared to be of type uuid, then write that variable to the 
row instead of the argument. 

-
CREATE OR REPLACE FUNCTION uuid_write_( varchar ) RETURNS boolean AS $$

DECLARE
uuid_arg uuid;
BEGIN
uuid_arg := $1;
INSERT INTO uuid_tbl_ ( uuid_col_ )
VALUES ( uuid_arg );
RETURN True;
END;

$$ LANGUAGE plpgsql;
-

(5) Run the same line calling this function:

select uuid_write_('34A94C40-453A-4A30-9404-128121E76570');

Note the success of this workaround.

My blog post on this issue:
http://crafted-software.blogspot.com/2011/04/passing-uuid-value-to-function-in.html

--Basil Bourque

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Compression

2011-04-14 Thread mark


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Yang Zhang
 Sent: Thursday, April 14, 2011 6:51 PM
 To: Adrian Klaver
 Cc: pgsql-general@postgresql.org; Craig Ringer
 Subject: Re: [GENERAL] Compression
 
 On Thu, Apr 14, 2011 at 5:07 PM, Adrian Klaver
 adrian.kla...@gmail.com wrote:
  On Thursday, April 14, 2011 4:50:44 pm Craig Ringer wrote:
 
  On 15/04/2011 7:01 AM, Yang Zhang wrote:
 
   Is there any effort to add compression into PG, a la MySQL's
 
   row_format=compressed or HBase's LZO block compression?
 
 
 
  There's no row compression, but as mentioned by others there is
 
  out-of-line compression of large values using TOAST.
 
  I could be misunderstanding but I thought compression happened in the
 row as
  well. From the docs:
 
  EXTENDED allows both compression and out-of-line storage. This is
 the
  default for most TOAST-able data types. Compression will be attempted
 first,
  then out-of-
 
  line storage if the row is still too big. 
 
 
 
  Row compression would be interesting, but I can't imagine it not
 having
 
  been investigated already.
 
  --
 
  Adrian Klaver
 
  adrian.kla...@gmail.com
 
 Already know about TOAST.  I could've been clearer, but that's not the
 same as the block-/page-level compression I was referring to.

There is a (closed source) PG fork that has row (or column) oriented storage
that can have compression applied to it, if you are willing to give up
updates and deletes on the table, that is.


I haven't seen a lot of people talking about wanting that in the Postgres
core tho. 


-M

 
 --
 Yang Zhang
 http://yz.mit.edu/
 
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UUID can be written to row when passed as parameter/argument to a PL/pgSQL function. Feature or bug?

2011-04-14 Thread Basil Bourque
Whoops… Typo in the Subject line. Should have been "UUID cannot" rather than 
"UUID can".

UUID cannot be written to row when passed as parameter/argument to a PL/pgSQL 
function. Feature or bug?
^^^

--Basil Bourque
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UUID can be written to row when passed as parameter/argument to a PL/pgSQL function. Feature or bug?

2011-04-14 Thread Adrian Klaver
On Thursday, April 14, 2011 6:43:21 pm Basil Bourque wrote:
 If I pass the hex string representation of a UUID to a PL/pgSQL function as
 a varchar, that value cannot be used directly when writing to a row whose
 column data type is uuid, in Postgres 9.0.x. Normally Postgres
 automatically converts a hex string to a 128-bit UUID value and back
 again.
 
 Is not doing so in a function a bug?

How about:
CREATE OR REPLACE FUNCTION public.uuid_write_(character varying)
 RETURNS boolean
 LANGUAGE plpgsql
AS $function$

BEGIN
INSERT INTO uuid_tbl_ ( uuid_col_ )
VALUES ( $1::uuid );
RETURN True;
END;

$function$

test(5432)aklaver=select uuid_write_('34A94C40-453A-4A30-9404-128121E76570');
 uuid_write_ 
-
 t
(1 row)


 
 Note the success of this workaround.
 
 My blog post on this issue:
 http://crafted-software.blogspot.com/2011/04/passing-uuid-value-to-function
 -in.html
 
 --Basil Bourque

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How can you have an Exclusive Lock on nothing? please help me understand this lock monitoring query output

2011-04-14 Thread Jerry Sievers
Aleksey Tsalolikhin atsaloli.t...@gmail.com writes:

 Hi.  I use the following query (from
 http://wiki.postgresql.org/wiki/Lock_Monitoring)
 to monitor locks; and I've got an ExlusiveLock that does not have a relation
 name associated with it.  What is locked with the Exclusive Lock in this case,
 please?  (it's between d and e tables below)

Try printing all fields from the pg_locks view and you may be
enlightened.

Transaction IDs are one example of a lock that is not on a relation.

Also be aware that if you are viewing a lock that is for some other DB
besides the one you're sitting in, the pg_class join will give nothing.
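
For example, a hedged probe of your own backend's locks; the locktype
column (not shown by the wiki query) is what distinguishes relation locks
from transactionid and virtualxid locks:

SELECT locktype, relation::regclass, virtualxid, transactionid,
       mode, granted
  FROM pg_locks
 WHERE pid = pg_backend_pid();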

Also be aware that 

 psql -U postgres -d ddcKeyGen -c 'select
  pg_class.relname,pg_locks.transactionid, pg_locks.mode, pg_locks.granted,
  pg_stat_activity.usename,substr(pg_stat_activity.current_query,1,50),
 pg_stat_activity.query_start,
  age(now(),pg_stat_activity.query_start) as age, 
 pg_stat_activity.procpid
from pg_stat_activity,pg_locks left
  outer join pg_class on (pg_locks.relation = pg_class.oid)
where pg_locks.pid=pg_stat_activity.procpid order by query_start;'



 Output:

  relname | transactionid |      mode       | granted | usename | substr |          query_start          |       age       | procpid
 ---------+---------------+-----------------+---------+---------+--------+-------------------------------+-----------------+---------
  a_index |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
  b_index |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
  c_index |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
  d       |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
          |               | ExclusiveLock   | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
  e       |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
  f_index |               | ShareLock       | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798
  g_index |               | AccessShareLock | t       | user    |        | 2011-04-14 17:36:01.257669-07 | 00:21:28.847825 |   16798


 Thanks,
 Aleksey

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: gsiever...@comcast.net
p: 305.321.1144

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Cross-schema view issue/question

2011-04-14 Thread Bosco Rama
Hi folks,

We have a current DB on PG 8.4.7 on Ubuntu Server 10.04 LTS.
This DB used to only have one schema and that schema was replaced
on a regular schedule using a pg_dump/pg_restore process.  The old
schema was renamed to another name and the incoming schema and data
replaced it in the DB.  If an old renamed schema was present at the
time it was dropped prior to the renaming above. This schema is only
writable by the owning user.  There are other users that read this
schema/data for reports and other tasks.

Let's call that schema 'abc'.

This was all well and good until a user (quite rightly) decided to
create their own views of the 'abc' schema in their own schema which
we'll call 'xyz'.

The issue that has arisen is that we can no longer simply rename/drop
the 'abc' schema as the other user's schema objects in 'xyz' now refer
to objects in 'abc'.  At least, not without dropping their objects as
well.  Not a good thing.

My question is:  Is there any way to reduce/eliminate the tight coupling
of the views, et al. in schema 'xyz' to those in 'abc'?  My thoughts have
brought me to:
1) a function that removes/creates the views, etc in the 'xyz' schema
   that gets called as part of the replacement process for schema 'abc'

2) replacing the views, etc. with functions that return tables or
   the results of dynamic queries.

3) have the user create the views, etc. as 'temp' items in their
   session each time.  Though this may still conflict with the
   replacement since there will still be a tight coupling between
   the temp objects and the 'abc' schema and the replacement occurs
   regardless of the current state of user connections.

None of these is very appealing.
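
For reference, option 1 might look something like this sketch (view and
table names are illustrative only):

CREATE OR REPLACE FUNCTION xyz.rebuild_views() RETURNS void AS $$
BEGIN
    DROP VIEW IF EXISTS xyz.report_view;
    CREATE VIEW xyz.report_view AS
        SELECT id, name FROM abc.some_table;
END;
$$ LANGUAGE plpgsql;

-- called at the end of the 'abc' replacement job:
-- SELECT xyz.rebuild_views();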

Anyone have any thoughts or suggestions?  Or even an RTFM reference. :-)

TIA,
Bosco.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Compression

2011-04-14 Thread Adrian Klaver
On Thursday, April 14, 2011 5:51:21 pm Yang Zhang wrote:

  
  adrian.kla...@gmail.com
 
 Already know about TOAST.  I could've been clearer, but that's not the
 same as the block-/page-level compression I was referring to.

I am obviously missing something. The TOAST mechanism is designed to keep tuple 
data below the default 8KB page size. In fact it kicks in at a lower level than 
that:

"The TOAST code is triggered only when a row value to be stored in a table is 
wider than TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). The TOAST code will 
compress and/or move field values out-of-line until the row value is shorter 
than TOAST_TUPLE_TARGET bytes (also normally 2 kB) or no more gains can be had. 
During an UPDATE operation, values of unchanged fields are normally preserved 
as-is; so an UPDATE of a row with out-of-line values incurs no TOAST costs if 
none of the out-of-line values change."

Granted, not all data types are TOASTable.  Are you looking for something more 
aggressive than that?

-- 
Adrian Klaver
adrian.kla...@gmail.com


Re: [GENERAL] Compression

2011-04-14 Thread Yang Zhang
On Thu, Apr 14, 2011 at 7:42 PM, Adrian Klaver adrian.kla...@gmail.com wrote:
 On Thursday, April 14, 2011 5:51:21 pm Yang Zhang wrote:
  Already know about TOAST. I could've been clearer, but that's not the
  same as the block-/page-level compression I was referring to.

 I am obviously missing something. The TOAST mechanism is designed to keep
 tuple data below the default 8KB page size. In fact it kicks in at a lower
 level than that:

 "The TOAST code is triggered only when a row value to be stored in a table
 is wider than TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). The TOAST code
 will compress and/or move field values out-of-line until the row value is
 shorter than TOAST_TUPLE_TARGET bytes (also normally 2 kB) or no more gains
 can be had. During an UPDATE operation, values of unchanged fields are
 normally preserved as-is; so an UPDATE of a row with out-of-line values
 incurs no TOAST costs if none of the out-of-line values change."

 Granted, not all data types are TOASTable. Are you looking for something
 more aggressive than that?

Yes.

http://blog.oskarsson.nu/2009/03/hadoop-feat-lzo-save-disk-space-and.html

http://wiki.apache.org/hadoop/UsingLzoCompression

http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-compression-internals-algorithms.html


 --

 Adrian Klaver

 adrian.kla...@gmail.com



-- 
Yang Zhang
http://yz.mit.edu/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.0 Out of memory

2011-04-14 Thread Jeremy Palmer

 Ok I removed the geometry column from the cursor query within the function 
 and the session still runs out of memory. I'm still seeing the same error 
 message as well:

 PortalHeapMemory: 16384 total in 4 blocks; 5944 free (0 chunks); 10440 
 used
   ExecutorState: 122880 total in 4 blocks; 63984 free (8 chunks); 58896 
 used
 ExprContext: 2496819768 total in 9 blocks; 21080 free (15 
 chunks); 2496798688 used

 So I guess it's not likely to be the PostGIS geometry to text cast that is 
 leaking the memory.

OK, so that was a wrong guess.

Hi Tom,

I finally tracked down the issue! The query that was generating the temp table 
which was used as input into the 2 cursor queries was generating an invalid, 
very large geometry (like 200 MB). 

It turned out I had a bug in the previous testing I was doing, and I didn't 
actually remove the geometry column from the second cursor. So I guess a 200mb 
geometry being cast to text used too much memory. Not sure if there is still a 
leak... But I guess that depends on weather the geometry expands to over 1 GB 
when converted to text.

Anyway I would like to personally thank you for you time in helping me with 
this issue.

Regards,
Jeremy 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Compression

2011-04-14 Thread Adrian Klaver
On Thursday, April 14, 2011 7:46:34 pm Yang Zhang wrote:
 On Thu, Apr 14, 2011 at 7:42 PM, Adrian Klaver adrian.kla...@gmail.com 
wrote:

  Granted no all data types are TOASTable. Are you looking for something
  more aggressive than that?
 
 Yes.
 
 http://blog.oskarsson.nu/2009/03/hadoop-feat-lzo-save-disk-space-and.html
 
 http://wiki.apache.org/hadoop/UsingLzoCompression
 
 http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-compression-internals-
 algorithms.html

I can see that as another use case for SQL/MED in 9.1+. 

 
  --
  
  Adrian Klaver
  
  adrian.kla...@gmail.com

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Compression

2011-04-14 Thread Yang Zhang
On Thu, Apr 14, 2011 at 6:46 PM, mark dvlh...@gmail.com wrote:


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Yang Zhang
 Sent: Thursday, April 14, 2011 6:51 PM
 To: Adrian Klaver
 Cc: pgsql-general@postgresql.org; Craig Ringer
 Subject: Re: [GENERAL] Compression

 On Thu, Apr 14, 2011 at 5:07 PM, Adrian Klaver
 adrian.kla...@gmail.com wrote:
  On Thursday, April 14, 2011 4:50:44 pm Craig Ringer wrote:
 
  On 15/04/2011 7:01 AM, Yang Zhang wrote:
 
   Is there any effort to add compression into PG, a la MySQL's
 
   row_format=compressed or HBase's LZO block compression?
 
 
 
  There's no row compression, but as mentioned by others there is
 
  out-of-line compression of large values using TOAST.
 
  I could be misunderstanding but I thought compression happened in the
 row as
  well. From the docs:
 
  EXTENDED allows both compression and out-of-line storage. This is
 the
  default for most TOAST-able data types. Compression will be attempted
 first,
  then out-of-
 
  line storage if the row is still too big. 
 
 
 
  Row compression would be interesting, but I can't imagine it not
 having
 
  been investigated already.
 
  --
 
  Adrian Klaver
 
  adrian.kla...@gmail.com

 Already know about TOAST.  I could've been clearer, but that's not the
 same as the block-/page-level compression I was referring to.

 There is a (closed source) PG fork that has row (or column) oriented storage
 that can have compression applied to it, if you are willing to give up
 updates and deletes on the table, that is.

Greenplum and Aster?

We *are* mainly doing analytical (non-updating/deleting) processing.
But it's not a critical pain point - we're mainly interested in FOSS
for now.



 I haven't seen a lot of people talking about wanting that in the Postgres
 core tho.


 -M


 --
 Yang Zhang
 http://yz.mit.edu/

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general





-- 
Yang Zhang
http://yz.mit.edu/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Compression

2011-04-14 Thread Craig Ringer

On 15/04/2011 8:07 AM, Adrian Klaver wrote:


"EXTENDED allows both compression and out-of-line storage. This is the
default for most TOAST-able data types. Compression will be attempted
first, then out-of-line storage if the row is still too big."


Good point. I was unclear; thanks for pointing it out.

What I was trying to say is that there's no whole-row compression, ie 
compression of the whole tuple except for minimal headers. A value in a 
field may be compressed, but you can't (say) compress a 100-column row 
of integers in Pg, because the individual fields don't support compression.



--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UUID can be written to row when passed as parameter/argument to a PL/pgSQL function. Feature or bug?

2011-04-14 Thread Basil Bourque
Thanks for the suggestion of casting the hex string to uuid. That works. 

I tried the standard syntax using CAST:

VALUES ( CAST( $1 AS uuid) )

--Basil Bourque

 How about:
 CREATE OR REPLACE FUNCTION public.uuid_write_(character varying)
 RETURNS boolean
 LANGUAGE plpgsql
 AS $function$
 
 BEGIN
INSERT INTO uuid_tbl_ ( uuid_col_ )
VALUES ( $1::uuid );
RETURN True;
 END;

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general