[GENERAL] Why grantor is owner in this case?

2009-12-25 Thread donniehan
Hi guys,

I have a question about the grantor. Why is the grantor the owner in the following 
case? I think it should be postgres (the dba). 

postgres=# create user u1;
CREATE ROLE
postgres=# create user u2;
CREATE ROLE
postgres=# set session authorization u1;
SET
postgres=> create table u1_tb1(a int);
CREATE TABLE
postgres=> reset session authorization;
RESET
postgres=# grant update(a) on u1_tb1 to u2;
GRANT
postgres=# select attacl from pg_attribute where attname='a';
  attacl
-----------
 {u2=w/u1}
(4 rows)

From attacl "u2=w/u1", we can see the grantor is u1, but in fact the grant 
was done by postgres, the dba of the database.
Does anyone know why? Or is that a bug?


--Dongni

Re: [GENERAL] Updating from 8.2 to 8.4

2009-12-25 Thread Mark Morgan Lloyd

Adrian Klaver wrote:

On Thursday 24 December 2009 11:20:35 am Mark Morgan Lloyd wrote:

I was hoping to finally get the servers updated from 8.2 to 8.4 over the
festive season, but by now I think I've left things too tight.

Is it necessary to update the (Windows) ODBC driver as well? I've got a
couple of app servers still on NT4 with ODBC 8.02.0100 which I'm trying
to be careful with lest I put something on them which requires a later
OS and can't back out.

The apps are written in a rather old version of Delphi with BDE which is
fine with 8.2. Trying to run against 8.4 I get "Couldn't perform the
edit because another user changed the record." - looking back through
this and other MLs I see suggestion that this could be caused by an
excessive number of decimal places in the data (current case only
contains integers, timestamps and text) or possibly by a transaction
isolation issue.


My experience with this is that it's related to timestamps with fractional second 
precision. The other thing to note is that in 8.4 the default for datetimes is 
now 64-bit integer datetimes, not the previous float datetimes.


Many thanks for that Adrian. I notice this in the ODBC release notes 
which could be relevant:


-8-
psqlODBC 08.04.0100 Release
..
8.) Remove a spurious '.' with no trailing digits in timestamp 
representation (bug report [#1010540] from Brian Feldman).

-8-

although I don't know why it didn't bite on 8.2 unless it's specifically 
when 64-bit timestamps are processed.


I think that I'll try an ODBC update on a non-critical system, if that 
doesn't work I'll dig into my code. The table giving problems is part of 
a scheduling program where I can probably truncate timestamps, elsewhere 
it might not be so convenient. I'll report back if I find anything 
interesting.


The sooner I can get this code off Delphi+BDE+ODBC+Windows to 
Lazarus+Linux the better.


Happy Christmas everybody, whatever country you're in and whatever your 
tradition of celebration. We run 24x365 here but the caterers are hard 
at work :-)


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Session based transaction!!

2009-12-25 Thread Bill Moran
S Arvind arvindw...@gmail.com wrote:

 Hi Everyone,
 
 In a web application running on Tomcat with Postgres 8.3 as the DB, I need to
 know whether the following task is feasible or not.
 All the DB operations done by that web application must
 be rolled back at the end (when the session dies), and the DB data must be the
 same as at the start of the session. It's like staging the data for that
 session alone and rolling the DB back to the original template. So whenever a
 user enters the web application, the initial data must be
 the template data only; he can perform any operation, with the changed data
 visible to that user alone, and when the user leaves (session destroyed) all
 the data changed in that time must be rolled back.
 
 I thought of doing it this way: when the session is created in the application,
 a transaction is started and all the activity is done on the DB, but the
 transaction is not committed or rolled back per request; instead it is
 done per session, by not destroying the connection and using it for
 all the work done by that session. When the session is destroyed we can roll
 back the entire transaction.
 
 Is there any other way to achieve the requirements in postgres?
 Thanks in advance..

Would be easy except for one factor that I don't know about in Tomcat.

In most web applications, the database connection is not maintained between
page loads.  Each new page view may (and usually does) get a different DB
connection than the previous one.  If Tomcat maintains a single DB
connection for a session across all page views, then you should be able
to implement this.  However, if Tomcat runs like most of the other web
systems I've seen, you'll have no way to ensure that a particular page
view will have the same DB connection as a previous page view.  It will
require some sort of middleware that keeps the DB connections open and
associates HTTP sessions with DB connections.

Past that, however, I expect it will be a maintenance nightmare.  Each
rolled back DB session is going to generate a lot of dead rows that
vacuum will have to reclaim.  Whether or not this is feasible overall
depends on a lot of questions that I don't know the answers to.  Partly,
it's going to depend on the amount of change and amount of concurrency
that occurs.

Personally, I would recommend coming up with a different approach, but
I might be wrong.

-- 
Bill Moran
http://www.potentialtech.com



[GENERAL] Out of space making backup

2009-12-25 Thread Farhan Malik
When making a backup of my database I run out of space.  I tell it to
put the backup file on my K: drive, which has tons of free space, but
during the backup postgresql creates a temporary folder on my C: Drive
where it initially writes data.  I don't have enough space on my C:
drive for all the data and so I get an out of space error.  Is there a
way to have postgres use one of my drives with lots of free space for the
temporary folder?  I'm using version 8.3.9-1.

Thanks.



Re: [GENERAL] date_trunc on date is immutable?

2009-12-25 Thread Greg Stark
On Fri, Dec 25, 2009 at 1:58 AM, Scott Marlowe scott.marl...@gmail.com wrote:
 Isn't it the client timezone and not the system timezone that actually
 sets the tz the tstz is set to on retrieval?

It's the GUC:

stark=> set timezone = 'America/Los_Angeles';
SET
stark=> select now();
              now
-------------------------------
 2009-12-25 06:44:33.238187-08
(1 row)


I'm not sure if and how we initialize the GUC on a new connection
though. It might vary depending on the driver you use too.


-- 
greg



Re: [GENERAL] Session based transaction!!

2009-12-25 Thread John R Pierce

S Arvind wrote:

Hi Everyone,

In a web application running on Tomcat with Postgres 8.3 as the DB, I need 
to know whether the following task is feasible or not.
All the DB operations done by that web application 
must be rolled back at the end (when the session dies), and the DB data must 
be the same as at the start of the session. It's like staging the data 
for that session alone and rolling the DB back to the original 
template. So whenever a user enters the web application, the 
initial data must be the template data only; he can perform any 
operation, with the changed data visible to that user alone, and when the 
user leaves (session destroyed) all the data changed in that time must 
be rolled back.


I thought of doing it this way: when the session is created in the 
application, a transaction is started and all the activity is done on the 
DB, but the transaction is not committed or rolled back per 
request; instead it is done per session, by not destroying the 
connection and using it for all the work done by that session. When the 
session is destroyed we can roll back the entire transaction.


when exactly might the session be destroyed if, for instance, the user 
wanders off bored or abruptly closes his browser ?   http sessions are 
independent of http sockets, at least in http/1.1


eventually, presumably, you time out the idle session, but isn't that 
typically in several hours?


transactions that run many hours are painful for postgres; they interfere 
with vacuum processing, and can lead to bloated tables, especially on 
databases that have high transactional rates.


you likely will need to manage your own database connections, and keep 
them with your session tracking data.  you may need to implement your 
own specialized version of a connection pool, this one would return a 
specific session upon request rather than the next available one, and be 
closely tied into your session manager.




Re: [GENERAL] Out of space making backup

2009-12-25 Thread John R Pierce

Farhan Malik wrote:

When making a backup of my database I run out of space.  I tell it to
put the backup file on my K: drive, which has tons of free space, but
during the backup postgresql creates a temporary folder on my C: Drive
where it initially writes data.  I don't have enough space on my C:
drive for all the data and so I get an out of space error.  Is there a
way to have postgres use one of my drives with lots of free space for the
temporary folder?  I'm using version 8.3.9-1.
  




what tool are you using for this backup process?   





Re: [GENERAL] date_trunc on date is immutable?

2009-12-25 Thread Tom Lane
Greg Stark gsst...@mit.edu writes:
 On Fri, Dec 25, 2009 at 12:56 AM, Scott Marlowe scott.marl...@gmail.com 
 wrote:
 On Thu, Dec 24, 2009 at 4:36 PM, Kian Wright
 kian.wri...@senioreducators.com wrote:
 I'm trying to create an index on the month and year of a date field (in
 8.3), and I'm getting the "functions in index expression must be marked
 IMMUTABLE" error message.
 
 If applied to a timestamp, it is immutable.  If it's a timestamp with
 timezone it's not, because the timezone can change, which would change
 the index.

 Put another way, a given point in time doesn't necessarily lie in a
 particular month or on a particular day because it depends what time
 zone the system is set to. So right now it's a day earlier or later in
 part of the globe.

 To do what you want define the index on date_trunc('month',
 appl_recvd_date at time zone 'America/Los_Angeles') or something like that.

Given the way the question was phrased, I think the real situation is
that the OP has a column of type date.  There isn't any date_trunc()
function on date; there are such functions for timestamp with and
without timezone.  The parser prefers the former because with-tz is a
preferred type, so what he's really got is
date_trunc('month', datecol::timestamptz)
which is doubly not immutable: both the cast and the trunc function
are timezone-sensitive.

So one possible answer is to make sure the cast is to without-tz:
date_trunc('month', datecol::timestamp)
which in fact is indexable.  However:

 You'll have to make sure your queries have the same expression in them
 though :( It won't work if you just happen to have the system time
 zone set to the matching time zone.

This point is still a problem, because he'd need the same explicit
cast in the queries he wants to use the index.  It might be worth making
a special-purpose function monthof(date) or something like that to
reduce the notational burden.

(More generally, I wonder if it is worth creating a built-in date_trunc
for date input, just to avoid this gotcha.  At least in this context,
it seems like date->timestamp ought to be a preferable promotion over
date->timestamptz.)

regards, tom lane



Re: [GENERAL] Why grantor is owner in this case?

2009-12-25 Thread Tom Lane
donniehan donnie...@126.com writes:
 I have a question about the grantor. Why is the grantor the owner in the 
 following case? I think it should be postgres (the dba). 

Grants done by a superuser on an object he doesn't own are treated as
being done by the object owner instead.  Otherwise you end up with
grants that don't have a clear chain of traceability to the owner,
which causes all sorts of un-fun issues for REVOKE.  (I'm too lazy
to come up with the details right now, but if you care you can look
back in the pgsql-hackers archives to find the discussions where this
behavior was agreed on.)

regards, tom lane



Re: [GENERAL] Out of space making backup

2009-12-25 Thread Tom Lane
Farhan Malik malikpi...@gmail.com writes:
 When making a backup of my database I run out of space.  I tell it to
 put the backup file on my K: drive, which has tons of free space, but
 during the backup postgresql creates a temporary folder on my C: Drive
 where it initially writes data.  I don't have enough space on my C:
 drive for all the data and so I get an out of space error.  Is there a
 way to have postgres use one of my drives with lots of free space for the
 temporary folder?  I'm using version 8.3.9-1.

Reading between the lines, I suspect you are trying to use 'tar' output
format, which does have a need to make temp files that can be large.
If I guessed right, I'd suggest using 'custom' format instead.  There
really is no advantage to tar format, and several disadvantages besides
this one.
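For anyone hitting the same thing, a sketch of the custom-format invocation (the database name and paths here are made up): -Fc writes the compressed archive straight to the target file, so no large temp file is needed, and pg_restore reads it back.

```
pg_dump -Fc -f K:\backup.dump mydb
pg_restore -d mydb K:\backup.dump
```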

regards, tom lane



Re: [GENERAL] Out of space making backup

2009-12-25 Thread Farhan Malik
 Reading between the lines, I suspect you are trying to use 'tar' output
 format, which does have a need to make temp files that can be large.
 If I guessed right, I'd suggest using 'custom' format instead.  There
 really is no advantage to tar format, and several disadvantages besides
 this one.

                        regards, tom lane

That sounds right.  The error I get from the software is "2009/12/25
10:21:40.812: [1EA8][ThreadBackupRestore] Restore Error: pg_dump:
[tar archiver] could not write to output file: No space left on device"

Is there a way to have postgres put those large temp files on a
different drive?  I only have 4GB free on my C: drive and once the
temp files go over that I get an out of space error.  I have tons of
free space on other drives, including the one where I am asking that
the final backup.zip file goes.

As for changing the backup to a custom format, I will pass that on to
the developer of the software.



Re: [GENERAL] Out of space making backup

2009-12-25 Thread Brian Modra
2009/12/25 Farhan Malik malikpi...@gmail.com:
 Reading between the lines, I suspect you are trying to use 'tar' output
 format, which does have a need to make temp files that can be large.
 If I guessed right, I'd suggest using 'custom' format instead.  There
 really is no advantage to tar format, and several disadvantages besides
 this one.

                        regards, tom lane

 That sounds right.  The error I get from the software is "2009/12/25
 10:21:40.812: [1EA8][ThreadBackupRestore] Restore Error: pg_dump:
 [tar archiver] could not write to output file: No space left on device"

 Is there a way to have postgres put those large temp files on a
 different drive?  I only have 4GB free on my C: drive and once the
 temp files go over that I get an out of space error.  I have tons of
 free space on other drives, including the one where I am asking that
 the final backup.zip file goes.

 As for changing the backup to a custom format, I will pass that on to
 the developer of the software.

I do backups semi-manually:

use "select pg_start_backup('some-name')" (in psql, logged in as postgres),
then start a tar of /var/lib/pgsql/data/ to stdout and pipe this to
tar on another server using ssh,
then finally "select pg_stop_backup()".

e.g. my two scripts (backup.sh calls back1.sh):

[r...@www pgsql]# cat back1.sh
#!/bin/bash
cd /var/lib/pgsql
ssh lead touch /var/lib/postgresql/backups/start_backup
tar zcf - data | ssh lead "cat > /var/lib/postgresql/backups/20091223.tgz"
echo DONE
[r...@www pgsql]# cat backup.sh
#!/bin/bash
cd /var/lib/pgsql
./back1.sh > backups/backup.log 2>&1 < /dev/null &
[r...@www pgsql]#







-- 
Brian Modra   Land line: +27 23 5411 462
Mobile: +27 79 69 77 082
5 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930
South Africa
http://www.zwartberg.com/



Re: [GENERAL] Updating from 8.2 to 8.4

2009-12-25 Thread Adrian Klaver
On Friday 25 December 2009 5:06:28 am Mark Morgan Lloyd wrote:
 Adrian Klaver wrote:
  On Thursday 24 December 2009 11:20:35 am Mark Morgan Lloyd wrote:
  I was hoping to finally get the servers updated from 8.2 to 8.4 over the
  festive season, but by now I think I've left things too tight.
 
  Is it necessary to update the (Windows) ODBC driver as well? I've got a
  couple of app servers still on NT4 with ODBC 8.02.0100 which I'm trying
  to be careful with lest I put something on them which requires a later
  OS and can't back out.
 
  The apps are written in a rather old version of Delphi with BDE which is
  fine with 8.2. Trying to run against 8.4 I get "Couldn't perform the
  edit because another user changed the record." - looking back through
  this and other MLs I see suggestion that this could be caused by an
  excessive number of decimal places in the data (current case only
  contains integers, timestamps and text) or possibly by a transaction
  isolation issue.
 
  My experience with this is that it's related to timestamps with fractional
  second precision. The other thing to note is that in 8.4 the default for
  datetimes is now 64-bit integer datetimes, not the previous float
  datetimes.

 Many thanks for that Adrian. I notice this in the ODBC release notes
 which could be relevant:

 -8-
 psqlODBC 08.04.0100 Release
 ..
 8.) Remove a spurious '.' with no trailing digits in timestamp
 representation (bug report [#1010540] from Brian Feldman).
 -8-

 although I don't know why it didn't bite on 8.2 unless it's specifically
 when 64-bit timestamps are processed.

You might want to confirm your 8.4 installation is using integer datetimes as 
there is some variability in its usage among packagers. Follow this thread for 
one such discussion:
http://archives.postgresql.org/pgsql-general/2009-07/msg01119.php
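A quick way to check from psql: the integer_datetimes setting is read-only and reports how the server binary was built (the output below is illustrative; it will show "on" or "off" depending on the packager's choice).

```
postgres=# SHOW integer_datetimes;
 integer_datetimes
-------------------
 on
(1 row)
```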





-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] Out of space making backup

2009-12-25 Thread John R Pierce

Farhan Malik wrote:

That sounds right.  The error I get from the software is 2009/12/25
10:21:40.812: [1EA8][ThreadBackupRestore] Restore Error: pg_dump:
[tar archiver] could not write to output file: No space left on device

Is there a way to have postgres put those large temp files on a
different drive?  I only have 4GB free on my C: drive and once the
temp files go over that I get an out of space error.  I have tons of
free space on other drives, including the one where I am asking that
the final backup.zip file goes.

As for changing the backup to a custom format, I will pass that on to
the developer of the software.
  


wild guess says, the value of the TEMP environment variable when the 
backup software is started will determine the path of where that 
temporary file is written.  In MS Windows this usually defaults to 
%USERPROFILE%\Local Settings\Temp\
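If that guess is right, something along these lines before launching the backup tool should redirect the temp files (the path and tool name here are invented for illustration):

```
C:\> set TEMP=K:\pgtemp
C:\> set TMP=K:\pgtemp
C:\> backup_tool.exe ...
```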





Re: [GENERAL] Updating from 8.2 to 8.4

2009-12-25 Thread Mark Morgan Lloyd

Adrian Klaver wrote:

although I don't know why it didn't bite on 8.2 unless it's specifically
when 64-bit timestamps are processed.


You might want to confirm your 8.4 installation is using integer datetimes as 
there is some variability in its usage among packagers. Follow this thread for 
one such discussion:

http://archives.postgresql.org/pgsql-general/2009-07/msg01119.php


Thanks Adrian, noted. I'm hoping to get onto 8.4 before too long because 
of the analytic functions but that thread suggests that the best course 
of action would be to make sure that my code is robust against the new 
server and then decide on an upgrade schedule.


I'm building from scratch here so all options are as default except for 
added Perl.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]



Re: [GENERAL] Out of space making backup

2009-12-25 Thread Farhan Malik
Thanks.  Changing the environment variable has solved that issue.
It ultimately required 13GB of free space to create a 2GB backup file.

On Fri, Dec 25, 2009 at 1:19 PM, John R Pierce pie...@hogranch.com wrote:
 Farhan Malik wrote:

 That sounds right.  The error I get from the software is 2009/12/25
 10:21:40.812: [1EA8][ThreadBackupRestore] Restore Error: pg_dump:
 [tar archiver] could not write to output file: No space left on device

 Is there a way to have postgres put those large temp files on a
 different drive?  I only have 4GB free on my C: drive and once the
 temp files go over that I get an out of space error.  I have tons of
 free space on other drives, including the one where I am asking that
 the final backup.zip file goes.

 As for changing the backup to a custom format, I will pass that on to
 the developer of the software.


 wild guess says, the value of the TEMP environment variable when the backup
 software is started will determine the path of where that temporary file is
 written.  In MS Windows this usually defaults to %USERPROFILE%\Local
 Settings\Temp\





-- 
Please note my new email address  malikpi...@gmail.com





[GENERAL] Finding the bin path

2009-12-25 Thread Rob Jaeger
Is there a command or reliable method of finding the location of the
PostgreSQL bin path?

I'm currently using SHOW hba_file; which gives me the data path. For
default installs I can find the bin relative to this, but it fails under
custom installs.

My apologies if this has been addressed before, but I could not find it in a
search.

- Rob


Re: [GENERAL] Finding the bin path

2009-12-25 Thread John R Pierce

Rob Jaeger wrote:
Is there a command or reliable method of finding the location of the 
PostgreSQL bin path?


I'm currently using SHOW hba_file; which gives me the data path. For 
default installs I can find the bin relative to this, but it fails 
under custom installs.


My apologies if this has been addressed before, but I could not find 
it in a search.


actually in some configurations (debian/ubuntu for instance) the .conf 
files like pg_hba.conf aren't stored in the $PGDATA directory either; they 
are in /etc/postgresql/ver/ or something.







Re: [GENERAL] Finding the bin path

2009-12-25 Thread Guillaume Lelarge
Le 25/12/2009 18:02, Rob Jaeger a écrit :
 Is there a command or reliable method of finding the location of the
 PostgreSQL bin path?
 

Nope.

 I'm currently using SHOW hba_file; which gives me the data path. For
 default installs I can find the bin relative to this, but it fails under
 custom installs.
 

It doesn't give you the path to the data directory. It gives you the path
to the pg_hba.conf file. If you want the data directory path, use SHOW
data_directory.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [GENERAL] Finding the bin path

2009-12-25 Thread Tom Lane
Rob Jaeger yogi...@gmail.com writes:
 Is there a command or reliable method of finding the location of the
 PostgreSQL bin path?

pg_config --bindir

Although I think not all packagers install this in the base package,
which might limit its usefulness.

regards, tom lane



Re: [GENERAL] Updating from 8.2 to 8.4

2009-12-25 Thread Adrian Klaver
On Friday 25 December 2009 10:27:09 am Mark Morgan Lloyd wrote:
 Adrian Klaver wrote:
  although I don't know why it didn't bite on 8.2 unless it's specifically
  when 64-bit timestamps are processed.
 
  You might want to confirm your 8.4 installation is using integer
  datetimes as there is some variability in its useage among packagers.
  Follow this thread for one such discussion:
  http://archives.postgresql.org/pgsql-general/2009-07/msg01119.php

 Thanks Adrian, noted. I'm hoping to get onto 8.4 before too long because
 of the analytic functions but that thread suggests that the best course
 of action would be to make sure that my code is robust against the new
 server and then decide on an upgrade schedule.

 I'm building from scratch here so all options are as default except for
 added Perl.

 --

Alright then, nothing worse than trying to fix a problem that does not exist :) 
One other thing that came to mind in regards to the 'changed record' 
problem is whether Row Versioning has been checked in the ODBC connection 
parameters?



-- 
Adrian Klaver
akla...@comcast.net



[GENERAL] Clarification regarding array columns usage?

2009-12-25 Thread m. hvostinski
Hi,

I would appreciate if someone could clarify the aspects of using array
columns.

We need to store up to ten related integer values per row and currently it
is implemented as a varchar column that holds a string that is concatenated
by a trigger function. Something like this:

FOR var IN (SELECT id FROM support_table WHERE...) LOOP
   str := concatenate string...
END LOOP;
UPDATE main_table SET id_string = str WHERE...

So we have a string like "1201,1202,1203,201" in the main_table varchar
column that is parsed by the client app. Recently I realized that it would
be more natural to use an array column in the main table to store the
values - no looping, concatenation, or parsing, and it should take less space.
After implementing it I ran EXPLAIN on selects from the main table to compare
string vs. array, and the results are somewhat confusing.

EXPLAIN SELECT id_string FROM main_table WHERE...
returns row width 3, where the actual value of id_string is
"1201,1202,1203,201".

EXPLAIN SELECT id_array FROM main_table WHERE...
returns row width 26 for the same values.

It looks like the array takes more space than a string containing the same
values. Another strange thing is that for the varchar column EXPLAIN shows
width 3, which is too low. I thought that it might be related to TOAST, but I
understand that TOAST kicks in only if the row size is more than 2kb, and

EXPLAIN SELECT * FROM main_table WHERE...
returns row width 251.


Hence the questions:

-- Could someone help me to interpret the explain readings above?

-- Is storing integers in an array really a more space-efficient approach?

-- Is there a noticeable performance difference in reading array vs. varchar
columns? Creating the id string is a relatively rare operation in our case, and
if reading strings is faster maybe it makes sense to keep the overhead of
string concatenation in the trigger.

-- Is it possible to estimate how much slower the string concatenation
trigger function would be in comparison to one that inserts into an array
column, at up to 10 values per string/array? The trigger will still be executed
fairly often.


Thanks.


Re: [GENERAL] Out of space making backup

2009-12-25 Thread Craig Ringer

On 26/12/2009 12:44 AM, Brian Modra wrote:


use "select pg_start_backup('some-name')" (in psql, logged in as postgres),
then start a tar of /var/lib/pgsql/data/ to stdout and pipe this to
tar on another server using ssh


This won't work on a Windows machine. Windows does not permit files that 
are open for write by one process to be opened by another, unless the 
first process makes special efforts to permit it.


In general, people get around this by using the Volume Shadow Copy 
Service (VSS) via dedicated backup software. This takes a consistent 
snapshot of the file system and permits the backup software to access 
that snapshot.


If you were going to take filesystem-level backups on Windows, that'd be 
how you'd want to do it - have a pre hook in your backup software that 
called pg_start_backup() and a post hook that called pg_stop_backup(), 
letting the backup software handle the snapshot and filesystem copy.
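Sketched as hooks (the label is arbitrary; the two calls are what 8.3's online-backup API expects, run via psql as a superuser):

```sql
-- pre-backup hook:
SELECT pg_start_backup('nightly');

-- ... backup software takes the VSS snapshot and copies the data directory ...

-- post-backup hook:
SELECT pg_stop_backup();
```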


--
Craig Ringer



Re: [GENERAL] Clarification regarding array columns usage?

2009-12-25 Thread Randal L. Schwartz
 m == m hvostinski makhv...@gmail.com writes:

m> I would appreciate if someone could clarify the aspects of using array
m> columns.

In general, bad idea.

m> We need to store up to ten related integer values per row and currently it
m> is implemented as a varchar column that holds a string that is concatenated
m> by a trigger function. Something like this:

Why?

If you were storing these as a daughter table, then you get
easy parsing, easy concurrent updating, easy access to aggregate
functions.  Just like SQL was meant to be used.

Stop thinking of tables as Excel sheets.
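
A minimal sketch of such a daughter table (table and column names here are
made up for illustration):

    CREATE TABLE main_table (
        id serial PRIMARY KEY
    );

    CREATE TABLE main_table_values (
        main_id integer NOT NULL REFERENCES main_table(id),
        val     integer NOT NULL
    );

    -- aggregates come for free, with no string parsing
    SELECT main_id, count(*) AS n, sum(val) AS total
    FROM main_table_values
    GROUP BY main_id;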

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
mer...@stonehenge.com URL:http://www.stonehenge.com/merlyn/
Smalltalk/Perl/Unix consulting, Technical writing, Comedy, etc. etc.
See http://methodsandmessages.vox.com/ for Smalltalk and Seaside discussion



Re: [GENERAL] Clarification regarding array columns usage?

2009-12-25 Thread Pavel Stehule
Hello

2009/12/25 m. hvostinski makhv...@gmail.com:
 Hi,

 I would appreciate if someone could clarify the aspects of using array
 columns.


a) don't use an array column as storage for a list of foreign keys. That is
a basic and significant rule: the planner and optimizer cannot work well
with keys stored in arrays, so your queries will be suboptimal.

b) arrays are good for storing time series and similar values.
c) they are good for storing some polymorphic logs.

d) they are usable for storing EAV data, with some risks.
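
For example, point b) could look like this (a hypothetical table, one row
per sensor and day):

    CREATE TABLE readings (
        sensor_id integer NOT NULL,
        day       date    NOT NULL,
        temps     integer[]  -- hourly values, e.g. '{12,11,11,10}'
    );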

 We need to store up to ten related integer values per row and currently it
 is implemented as a varchar column that holds a string that is concatenated
 by a trigger function. Something like this:

 FOR var IN (SELECT id FROM support_table WHERE...) LOOP
        str := concatenate string...
 END LOOP;
 UPDATE main_table SET id_string = str WHERE...

 So we have a string like '1201,1202,1203,201' in the main_table varchar
 column that is parsed by the client app. Recently I realized that it would
 be more natural to use an array column in the main table to store the
 values - no looping, concatenation, or parsing, and it should take less
 space. After implementing it I ran EXPLAIN on selects from the main table
 to compare string vs. array, and the results are somewhat confusing.

sure - an array of integers is better than a list of integers stored in a varchar


 EXPLAIN SELECT id_string FROM main_table WHERE...
 returns row width: 3 where actual value of the id_string =
 1201,1202,1203,201

 EXPLAIN SELECT id_array FROM main_table WHERE...
 returns row width: 26 for the same values

 It looks like the array takes more space than a string containing the same
 values. Another strange thing is that for the varchar column EXPLAIN shows
 width 3, which is too low. I thought that it might be related to TOAST but I


width in this case isn't important. The length of an integer array is: header
+ sizeof(int) * number of elements; the length of a varchar is: header +
number of chars. If you store small values, like 1,20,19, then the list of
values will be smaller than the array. But processing an array is faster.
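
You can check the stored sizes directly with pg_column_size (the values here
are just an illustration):

    SELECT pg_column_size('1,20,19'::varchar) AS string_size,
           pg_column_size(ARRAY[1,20,19])     AS array_size;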

 EXPLAIN SELECT * FROM main_table WHERE...
 returns row width: 251


 Hence the questions:

 -- Could someone help me to interpret the explain readings above?

 -- Is storing integers in array is really more space efficient approach?

the primary goal of storing values in arrays is to avoid slow parsing. An
integer takes 4 bytes; storing a number in a string is dynamic - it depends
on the number of digits plus separators. But a string is slower to process
than an array - it is good for reports only. For searching in a string list
you have to use LIKE '%val%', which is probably the slowest operation - and
then your application is slow too.
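
With an integer array the search can use the containment operator instead,
and that can be indexed; a sketch, assuming a main_table with an integer[]
column named id_array:

    -- GIN index over the array elements
    CREATE INDEX main_table_id_array_idx ON main_table USING gin (id_array);

    -- rows whose array contains the value 1202
    SELECT * FROM main_table WHERE id_array @> ARRAY[1202];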


 -- Is there a noticeable performance difference in reading array vs varchar
 columns? Creating id string is relatively rare operation in our case and if
 reading strings is faster may be it makes sense to have the overhead of
 string concatenation in the trigger.

no - varchar and array are both stored as a stream of bytes, so reading
performance is +/- the same.


 -- Is it possible to estimate how much slower the string concatenation
 trigger function would be in comparison to one that inserts into an array
 column for up to 10 values per string/array? The trigger will still be
 executed fairly often.


probably there is no significant difference for writes. With arrays you can
write more sophisticated and faster queries.
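
For example, since 8.4 an array can be expanded back into rows with
unnest(), so no client-side parsing is needed (again assuming a
hypothetical id_array column):

    -- one output row per array element
    SELECT id, unnest(id_array) AS val FROM main_table;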

Regards
Pavel Stehule


 Thanks.
