Re: [GENERAL] Numbering rows

2008-10-15 Thread David Rowley
Andreas Kretschmer wrote:
> Can you show an example for 8.4?

It's not 100% certain that this will make it into 8.4, but it probably will.

select row_number() over (order by employeeid) as nrow, *
from employee
order by employeeid;

It's important to have both ORDER BY clauses: the one inside OVER()
determines the numbering order, while the outer one determines the order of
the returned rows.

There is more information on windowing functions here
http://en.wikipedia.org/wiki/SELECT

David.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No select permission on a table but can query it

2008-10-15 Thread Michele Petrazzo - Unipex srl

Scott Marlowe wrote:

> > I think this is a strange question, but: I need to revoke the
> > select permission on a table, while still letting a user query a column
> > through a function.
> > A real case: a user "test" must not have permission to do a
> > "select * from articles", but must be able to do a
> > "select has_article('an_article')", where has_article is:
> 
> Look up "security definer" for functions.




It was!
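
For the record, a minimal sketch of that approach, assuming the has_article
function and the "test" user from the original post (the function must be
created by a role that does have SELECT on articles):

CREATE OR REPLACE FUNCTION has_article (text) RETURNS articles AS $$
    SELECT * FROM articles WHERE id_article = $1;
$$ LANGUAGE SQL SECURITY DEFINER;

REVOKE ALL ON articles FROM test;
GRANT EXECUTE ON FUNCTION has_article(text) TO test;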

Thanks,
Michele

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problems with Timezones in Australia

2008-10-15 Thread Craig Ayliffe
Hi,

I have several Postgres DBs that are not showing the correct daylight
savings time.

From the mailing list etc. I believe these are patched up to the right
levels to have the correct time zones - but they still don't seem to be
working.

The Unix date command shows the correct date.

Server 1: postgresql-8.2.4 (Ubuntu 7.04)

dbtest=> show timezone;
 TimeZone
-----------
 localtime
(1 row)

dbtest=> select localtime;
      time
-----------------
 16:16:32.853566
(1 row)

[EMAIL PROTECTED]:~$ date
Thu Oct 16 17:16:36 EST 2008


And another:

Server2: Postgresql-8.1.9 (on Ubuntu Dapper 6.06.1)
btest=> show timezone ;
   TimeZone
---------------
 Australia/ACT
(1 row)

btest=> select localtime;
      time
-----------------
 16:17:45.227342
(1 row)

[EMAIL PROTECTED]:~$ date
Thu Oct 16 17:18:31 EST 2008
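
For completeness, the same checks expressed as a minimal diagnostic sketch
(pg_timezone_names is only available from 8.2 on, and the zone name below is
just an example):

SHOW timezone;
SELECT name, utc_offset, is_dst
FROM pg_timezone_names
WHERE name LIKE 'Australia/%';
SET timezone TO 'Australia/Sydney';
SELECT now(), localtime;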






-- 
Craig Ayliffe


Re: [GENERAL] Numbering rows

2008-10-15 Thread A. Kretschmer
On Wed, 15.10.2008, at 12:23:42 -0700, Richard Broersma wrote the following:
> On Wed, Oct 15, 2008 at 12:08 PM, ries van Twisk <[EMAIL PROTECTED]> wrote:
> > May be this function can help :
> >
> > http://www.postgresql.org/docs/8.3/static/functions-srf.html
> 
> Using generate series won't number the rows that way that you would
> want.  You basically will end up with a cross join between the
> generated series and the requested set.   There are three ways that I
> know of to get a row number:
> 
> 1) IIRC use a pl-pgsql function that returns an incremented number
> 2) use SQL by joining using the operator ">=" and Group by aggregate count(*)
> 3) 8.4 has some new analytic functions that will do this nicely.

Can you show an example for 8.4?

Andreas
-- 
Andreas Kretschmer
Contact:  Heynitz: 035242/47150,   D1: 0160/7141639 (more: -> header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UPDATE and Indexes and Performance

2008-10-15 Thread Craig Ringer

Bill Thoen wrote:

> Does PG (8.1) ever use existing indexes when executing an UPDATE?
> 
> I've got some tables with millions of records and whenever I update a
> column that involves most or all the records the EXPLAIN command seems
> to indicate that it isn't using the pre-existing indexes. This results in
> a slow update, which is further slowed by the presence of indexes. So
> when doing a large update should I just drop the indexes first, or is
> there some good reason to keep them?


Joshua Tolley explained why it's doing a sequential scan, and why that's 
a good thing.


As for the added cost of maintaining indexes when doing the UPDATE - 
yes, you might want to consider dropping the index(es) before issuing 
the UPDATE and then recreating it/them afterwards. That can be 
considerably faster.


I have the feeling you'd need to drop the index then COMMIT before you 
ran the update and recreated the index, though, since Pg probably can't 
really get rid of the index if it's still visible to other transactions 
and might be restored by a ROLLBACK anyway. I'm not sure, though - 
explicit locking might be used to handle that, I haven't looked into it.
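
A minimal sketch of that sequence (table, column and index names are
hypothetical):

BEGIN;
DROP INDEX big_table_col_idx;
COMMIT;

UPDATE big_table SET col = col * 2;    -- the large update

CREATE INDEX big_table_col_idx ON big_table (col);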


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restoring a database

2008-10-15 Thread Joshua D. Drake

Chris Henderson wrote:

> > pg_dumpall archive.  If you look at the backup file, you'll find that
> > it's just straight SQL.  If you want to restore a particular database
> > out of it and not all of them, then you will need to edit the sql file
> > to include only what you want to restore.  Then you simply pass it
> > through psql like so:
> 
> Thank you all for your help.
> 
> Could anyone please share his or her backup and restore script? I want
> to backup all databases plus schemas in one server via crontab every
> night and restore one or all of the backed up databases to another
> server when I need to.
> 
> Thanks again.



Here ya go.
[DEFAULT]
; List of files to tar
backup_files: /var/spool/mail /etc/ /home
pg_dump: /usr/bin/pg_dump
psql: /usr/bin/psql
pg_dump_user: postgres
pg_dumpall: /usr/bin/pg_dumpall
pg_port: 5432
svn: /usr/bin/svn
text_archive: /backups/full
binary_archive: /backups
; the path to the repos you want to backup
svn_repo_paths: /home/lacey/test/test1 /home/lacey/test/test2 
/home/lacey/test/test3
; the owner and group that will own the archive files
archive_owner: root
archive_group: admin
; the location of the backup directory
backup_dir: /backups
; Notify messages
notify_critical:
notify_warning:
notify_ok:
#!/usr/bin/python

""" LICENSE

Copyright Command Prompt, Inc.

Permission to use, copy, modify, and distribute this software and its
documentation for any purpose, without fee, and without a written agreement
is hereby granted, provided that the above copyright notice and this
paragraph and the following two paragraphs appear in all copies.

IN NO EVENT SHALL THE COMMAND PROMPT, INC. BE LIABLE TO ANY PARTY FOR
DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION,
EVEN IF THE COMMAND PROMPT, INC. HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.

THE COMMAND PROMPT, INC. SPECIFICALLY DISCLAIMS ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN
"AS IS" BASIS, AND THE COMMAND PROMPT, INC. HAS NO OBLIGATIONS TO
PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

"""

import os
import sys
import pwd
import grp
import stat

from stat import *
from ConfigParser import *
from os import *
from sys import *
from optparse import OptionParser
from time import gmtime, strftime

# Initiate command line switches

usage = "usage: %prog [options] arg1 arg2"
parser = OptionParser(usage=usage)

parser.add_option("-B", "--backup", dest="backup", action="store_true", 
help="Whether or not to run a backup")
parser.add_option("-P", "--pgbackup", dest="pgbackup", action="store_true", 
help="Backup postgresql")
parser.add_option("-F", "--files", dest="files", action="store_true",  
help="File system backup")
parser.add_option("-G", "--globals", dest="globals", action="store_true", 
help="PostgreSQL users and groups")
parser.add_option("-C", "--config", dest="configfile", action="store", 
help="Location of config file", metavar="FILE")
parser.add_option("-T", "--text", dest="text", action="store_true", 
help="Create a text pg_dumpall")
parser.add_option("-X", "--debug", dest="debug", action="store_true", 
help="Lots of Debug output")

(options, args) = parser.parse_args()

backup = options.backup
files = options.files
configfile = options.configfile
pgbackup = options.pgbackup
debug = options.debug
globals = options.globals
text = options.text

if not configfile:
   print "You must pass -C or --config"
   exit(1)

# initiate config parser
config = ConfigParser()
config.read(configfile)

# What day is it
day = strftime("%a")

# Set up our keys
backup_files = config.defaults()['backup_files']
pg_dump = config.defaults()['pg_dump']
pg_dump_user = config.defaults()['pg_dump_user']
psql = config.defaults()['psql']
pg_dumpall = config.defaults()['pg_dumpall']
text_archive = config.defaults()['text_archive']
binary_archive = config.defaults()['binary_archive']
archive_owner = config.defaults()['archive_owner']
archive_group = config.defaults()['archive_group']
backup_dir = config.defaults()['backup_dir']
pg_port = config.defaults()['pg_port']


# Verify that a usable combination of command line options was supplied.
def syntax_check_func():
    if not configfile:
        print "ERROR: --config is required"
        notify_critical_func()
        exit(1)

    if not backup:
        if not pgbackup:
            if not files:
                print "ERROR: You need one of --backup, --pgbackup or --files"
                notify_critical_func()
                exit(1)


# Make sure the backup directories exist, creating them if necessary.
def stat_backup_dir_func():
    exists = os.path.exists(str(binary_archive))
    if not exists:
        print "WARNING: Directory doesn't exist: %s" % (str(binary_archive))
        print "INFO: Attempting to create backup directory"
        notify_warning_func()
        try:
            os.mkdir("%s" % (str(binary_archive)))
            if text_archive:
                try:
                    os.mkdir("%s" % (str(text_archive

Re: [GENERAL] Restoring a database

2008-10-15 Thread Chris Henderson
> pg_dumpall archive.  If you look at the backup file, you'll find that
> it's just straight SQL.  If you want to restore a particular database
> out of it and not all of them, then you will need to edit the sql file
> to include only what you want to restore.  Then you simply pass it
> through psql like so:

Thank you all for your help.

Could anyone please share his or her backup and restore script? I want
to backup all databases plus schemas in one server via crontab every
night and restore one or all of the backed up databases to another
server when I need to.

Thanks again.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: multi recordset and data type check was: Re: PL/pgSQL stored procedure returning multiple result sets (SELECTs)?

2008-10-15 Thread Pavel Stehule
Hello

2008/10/15 Bruce Momjian <[EMAIL PROTECTED]>:
> Vladimir Dzhuvinov wrote:
> -- Start of PGP signed section.
>> Hi Ivan,
>>
>> > It seems anyway that the usefulness of this feature largely depends
>> > on the language library.
>> > eg. I can't see a way to support it with php right now but it is
>> > supported by python.
>> > Am I missing something?
>>
>> Yes, the client library will have to support multiple result sets too.
>>
>> For example, the PHP MySQLi lib does that by providing functions to
>> check for and retrieve outstanding result sets in the buffer:
>>
>> bool mysqli_more_results  ($link)
>>
>> bool mysqli_next_result  ($link)
>>
>>
>> It seems like the PHP PG binding does allow (?) retrieval of multiple
>> result sets through pg_get_result(), but only for requests issued
>> asynchronously:
>>
>> http://bg2.php.net/manual/en/function.pg-get-result.php
>>
>>
>> > Out of curiosity, what language are you using?
>>
>> For MySQL I've been mostly using PHP, occasionally Java, Python and C.
>
> Interesting.  I think we need to decide if we want a TODO for this.
>

A year ago I wrote a prototype:
http://okbob.blogspot.com/2007/11/stacked-recordset-multirecordset.html
http://okbob.blogspot.com/2007/11/first-real-procedures-on-postgresql.html

regards
Pavel Stehule


> --
>  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
>  EnterpriseDB http://enterprisedb.com
>
>  + If your life is a hard drive, Christ can be your backup. +
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ADO TO ODBC

2008-10-15 Thread salman Sheikh


 

Hi friends,
I have an application that connects to MS Access through DAO. I want to use
the same software, without any changes, with my new PostgreSQL database
through ODBC.
I don't think I can just change it; it seems I must redevelop the whole
application with ODBC for PostgreSQL.
Can anybody help me?
Thanks,
Sheikh





Re: [GENERAL] Re: multi recordset and data type check was: Re: PL/pgSQL stored procedure returning multiple result sets (SELECTs)?

2008-10-15 Thread Bruce Momjian
Vladimir Dzhuvinov wrote:
-- Start of PGP signed section.
> Hi Ivan,
> 
> > It seems anyway that the usefulness of this feature largely depends
> > on the language library.
> > eg. I can't see a way to support it with php right now but it is
> > supported by python.
> > Am I missing something?
> 
> Yes, the client library will have to support multiple result sets too.
> 
> For example, the PHP MySQLi lib does that by providing functions to
> check for and retrieve outstanding result sets in the buffer:
> 
> bool mysqli_more_results  ($link)
> 
> bool mysqli_next_result  ($link)
> 
> 
> It seems like the PHP PG binding does allow (?) retrieval of multiple
> result sets through pg_get_result(), but only for requests issued
> asynchronously:
> 
> http://bg2.php.net/manual/en/function.pg-get-result.php
> 
> 
> > Out of curiosity, what language are you using?
> 
> For MySQL I've been mostly using PHP, occasionally Java, Python and C.

Interesting.  I think we need to decide if we want a TODO for this.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL stored procedure returning multiple result sets (SELECTs)?

2008-10-15 Thread Bruce Momjian

Below is a very good summary of the limitations of our function
capabilities compared to procedures, e.g.:

o  no transaction control in functions
o  no multi-query return values without using special syntax

I don't think we can cleanly enable the second capability, but could we
allow transaction control for functions that are not called inside a
multi-statement transaction?
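
For context, the "special syntax" that exists today is the refcursor
approach; a minimal sketch (table and cursor names are hypothetical):

CREATE OR REPLACE FUNCTION two_result_sets() RETURNS SETOF refcursor AS $$
DECLARE
    c1 refcursor := 'cur_one';
    c2 refcursor := 'cur_two';
BEGIN
    OPEN c1 FOR SELECT * FROM employees;
    RETURN NEXT c1;
    OPEN c2 FOR SELECT * FROM departments;
    RETURN NEXT c2;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- Caller, inside a single transaction:
--   BEGIN;
--   SELECT two_result_sets();
--   FETCH ALL FROM cur_one;
--   FETCH ALL FROM cur_two;
--   COMMIT;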

FYI, right now when you call a function all statements are assumed to be
in a single transaction, and allowing transaction control inside a
function would mean that each statement in a function is its own
transaction _unless_ transaction control is specified.  There would
certainly need to be special syntax to enable this.

Is there a TODO here?

---

Vladimir Dzhuvinov wrote:
-- Start of PGP signed section.
> Hi Merlin,
> 
> >> A function is... hmm, a function, a mapping: given a set of arguments it
> >> returns a single and well defined value: f(x,y) -> z
> >>
> >> The purpose of stored procedures, on the other hand, is to encapsulate
> >> an (arbitrary) bunch of SQL commands, a mini-program of sort.
> 
> > I think your understanding is off here.  Functions can encapsulate
> > arbitrary collection of statements...as I said previously, there are
> > two principle differences:
> > *) functions have implicit created transaction, procedures do not
> > *) how you pass data to/from the procedure body.  (functions return a
> > scalar, record, or a set)
> > 
> > Functions are limited in the sense that it is awkward to return
> > multiple sets, but are much more flexible how they can be integrated
> > into queries -- you can call a function anywhere a scalar or a set is
> > allowed -- in addition to the monolithic procedure style.
> 
> I was speaking about how it's done in MySQL. And I liked their approach
> of clear separation of responsibility between functions and stored
> procedures. At first I didn't quite understand their point, but then,
> during the development of my app, I gradually began to appreciate it.
> 
> To sum up how it's done in MySQL:
> 
> Functions are meant for tasks such as string operations, date/calendar
> functions, maths, encryption. They are allowed to operate only on their
> arguments. And they are stackable, just as functions in other languages
> like C.
> 
> Stored procedures are meant to be programs that work on the data.
> Hence they allowed to access tables, they can start explicit
> transactions and they can execute plain arbitrary SELECTs that pass
> their rows straight to the client. And stored procedures are "nestable"
> - akin to include() in PHP.
> 
> 
> I suspect that the present situation with Postgres reflects the way the
> software developed over the years. Perhaps in the very beginning the
> Postgres developers introduced functions which more or less resembled
> the "plain" functions of MySQL today. But then users might have pressed
> for a method to store their table manipulation logic on the server, and
> then for some reason it had been decided to overload functions with this
> extra responsibility, rather than create a separate clean "stored
> procedure" class.
> 
> So today Postgres has got functions which are very feature-full
> (compared with functions in MySQL), but still fall short of what
> traditional stored procedures can provide.
> 
> Yes, I was very much pleased with a number of Postgres features, such as
> the ability to do a tighter data definition using checks and
> constraints. Postgres allows for a much richer data model when I compare
> it with MySQL. I decided to put Postgres aside simply because it doesn't
> allow the definition of *clean* stored procedures (as I'm used to them
> in MySQL). And I didn't like the idea of twisting the PG function model
> around to accommodate my existing MySQL stored procedure logic. I abhor
> doing ugly things with code :)
> 
> 
> Pavel stated interest to work on the addition of stored procedures to
> Postgres provided he finds sponsorship. Right now I don't see much
> benefit investing money into such a venture, besides I've got my hands
> full with the day-to-day management of my own project. So far MySQL has
> been doing its job well and for the near future it looks like I'm
> staying on it.
> 
> 
> 
> > Just as a 'for example', look how you can trap errors and do some
> > recovery inside a pl/pgsql routine:
> > http://www.postgresql.org/docs/8.3/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
> 
> Well, MySQL does allow for exception handling within SPs, although there
> are some shortcomings (if you define a generic handler you cannot obtain
> precise info on the error type).
> 
> 
> > That feature alone can help you enormously.  Lest you think I'm
> > biased, I dba a mysql box professionally...every time I pop into the
> > mysql shell I feel like I'm stepping backwards in time about 5 years.
> Don't let the inability to return multiple sets trip you up...you are
> missing the big picture.

Re: [GENERAL] Restoring a database

2008-10-15 Thread Jeff Frost


Jeff Frost wrote:
> Chris Henderson wrote:
>   
>> I backup all my databases by using pg_dumpall - pg_dumpall >
>> /tmp/postgres.backup.`hostname`. It should backup four DBs: analyze,
>> postgres,  template0 and template1
>> I guess this backs up the schemas as well.
>>
>> Now I want to restore one of the databases and schema from this backup
>> dump file onto a different server. The databases is call "analyze".
>> Does anyone know how to do that with pg_restore? Thanks.
>>   
>> 
> Chris,
>
> pg_restore is used to restore backups that were saved in the custom
> format by pg_dump (option -Fc).  You don't need pg_restore to restore a
>   
Whoops, that should say, in the custom or tar format (options -Fc or
-Ft).

> pg_dumpall archive.  If you look at the backup file, you'll find that
> it's just straight SQL.  If you want to restore a particular database
> out of it and not all of them, then you will need to edit the sql file
> to include only what you want to restore.  Then you simply pass it
> through psql like so:
>
> psql -f /tmp/postgres.backup.`hostname` postgres
>
> Hint: the above will require that the CREATE DATABASE and \connect
> commands for the database you want to restore are still in the file.
>
>   

-- 
Jeff Frost, Owner   <[EMAIL PROTECTED]>
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032



Re: [GENERAL] Restoring a database

2008-10-15 Thread Jeff Frost
Chris Henderson wrote:
> I backup all my databases by using pg_dumpall - pg_dumpall >
> /tmp/postgres.backup.`hostname`. It should backup four DBs: analyze,
> postgres,  template0 and template1
> I guess this backs up the schemas as well.
>
> Now I want to restore one of the databases and schema from this backup
> dump file onto a different server. The databases is call "analyze".
> Does anyone know how to do that with pg_restore? Thanks.
>   
Chris,

pg_restore is used to restore backups that were saved in the custom
format by pg_dump (option -Fc).  You don't need pg_restore to restore a
pg_dumpall archive.  If you look at the backup file, you'll find that
it's just straight SQL.  If you want to restore a particular database
out of it and not all of them, then you will need to edit the sql file
to include only what you want to restore.  Then you simply pass it
through psql like so:

psql -f /tmp/postgres.backup.`hostname` postgres

Hint: the above will require that the CREATE DATABASE and \connect
commands for the database you want to restore are still in the file.

-- 
Jeff Frost, Owner   <[EMAIL PROTECTED]>
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restoring a database

2008-10-15 Thread Raymond O'Donnell
On 15/10/2008 22:19, Chris Henderson wrote:
> I backup all my databases by using pg_dumpall - pg_dumpall >
> /tmp/postgres.backup.`hostname`. It should backup four DBs: analyze,
> postgres,  template0 and template1
> I guess this backs up the schemas as well.
> 
> Now I want to restore one of the databases and schema from this backup
> dump file onto a different server. The databases is call "analyze".
> Does anyone know how to do that with pg_restore? Thanks.

Off the top of my head, I don't think you can do this with pg_restore.
pg_dumpall produces a plain-text dump, which means that you need to use
it as input to psql to restore from it; pg_restore uses the non-text
formats.

However, if the only other databases in the dump are the built-in ones,
they will add very little time to the restore process.

In any case, there's no need to back up databases postgres, template0
and template1, unless you've added custom stuff to template1 that you
need to keep.

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Restoring a database

2008-10-15 Thread Scott Marlowe
On Wed, Oct 15, 2008 at 3:19 PM, Chris Henderson <[EMAIL PROTECTED]> wrote:
> I backup all my databases by using pg_dumpall - pg_dumpall >
> /tmp/postgres.backup.`hostname`. It should backup four DBs: analyze,
> postgres,  template0 and template1
> I guess this backs up the schemas as well.
>
> Now I want to restore one of the databases and schema from this backup
> dump file onto a different server. The databases is call "analyze".
> Does anyone know how to do that with pg_restore? Thanks.

This is a common mistake.  Regular pg_dump / pg_dumpall without custom
format are just plain text dumps and are designed to be restored with
psql.  You can take a text editor  (or tail / head etc) to the text
file and chop out the parts you want, or you can make a compressed
backup and use pg_restore to do things like what you want.

I generally do a pg_dumpall --globals-only, and then a pg_dump for each
database into a separate file.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Restoring a database

2008-10-15 Thread Chris Henderson
I back up all my databases using pg_dumpall - pg_dumpall >
/tmp/postgres.backup.`hostname`. It should back up four DBs: analyze,
postgres, template0 and template1.
I guess this backs up the schemas as well.

Now I want to restore one of the databases and its schema from this backup
dump file onto a different server. The database is called "analyze".
Does anyone know how to do that with pg_restore? Thanks.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Get PG version using JDBC?

2008-10-15 Thread Thomas Kellerer

David Wall wrote on 15.10.2008 23:01:

Is there a way to get the PG version string from JDBC?  I'm using PG 8.3.

Thanks,
David


In a portable manner:

http://java.sun.com/j2se/1.5.0/docs/api/java/sql/DatabaseMetaData.html#getDatabaseProductName()
http://java.sun.com/j2se/1.5.0/docs/api/java/sql/DatabaseMetaData.html#getDatabaseProductVersion()

Although that will not return the "compiled by..." information that the
suggested "select version()" does.


Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Numbering rows

2008-10-15 Thread Mark Morgan Lloyd

D. Dante Lorenso wrote:

PERL can remember variables in your session.  Here's a function I wrote 
that sets a "global" variable in PL/PERL:


Perl can do anything- that's cheating :-)

Actually, I use Perl heavily but the advantage of being able to do the 
sort of analysis being discussed in a single query is that the query can 
be easily shipped with the results as a description of the method. 
Having to set up ad-hoc extra functions (in addition to those that are 
normally in the workflow) can be problematic, particularly if a 
recipient of the query only has restricted access.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Get PG version using JDBC?

2008-10-15 Thread Rodrigo Gonzalez

David Wall wrote:

Is there a way to get the PG version string from JDBC?  I'm using PG 8.3.

Thanks,
David


SELECT version() ?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Get PG version using JDBC?

2008-10-15 Thread Richard Broersma
Select version();
                        version
------------------------------------------------------
 PostgreSQL 8.3.3, compiled by Visual C++ build 1400
(1 row)

On Wed, Oct 15, 2008 at 2:01 PM, David Wall <[EMAIL PROTECTED]> wrote:
> Is there a way to get the PG version string from JDBC?  I'm using PG 8.3.
>
> Thanks,
> David
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Get PG version using JDBC?

2008-10-15 Thread David Wall

Is there a way to get the PG version string from JDBC?  I'm using PG 8.3.

Thanks,
David

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Numbering rows

2008-10-15 Thread D. Dante Lorenso

Mark Morgan Lloyd wrote:
Is there an easy way to assign a sequential number, possibly based on an 
arbitrary minimum (typically 0 or 1) to each row of an ordered result 
set, or do I have to work with explicit sequences?


I need to do quite a lot of maths on successive rows, extracting numeric 
and timestamp differences hence rates of change. I've typically been 
doing it manually or in a spreadsheet but there has to be a better way 
e.g. by a join on offset row numbers.


PERL can remember variables in your session.  Here's a function I wrote 
that sets a "global" variable in PL/PERL:


--
CREATE OR REPLACE FUNCTION "public"."global_var_set"
  (in_key varchar, in_value bigint)
RETURNS bigint AS
$body$
my ($key, $value) = @_;
$_SHARED{$key} = $value;
return $value;
$body$
LANGUAGE 'plperl' VOLATILE RETURNS NULL ON NULL INPUT;
--

Then, later you can read that global variable with another function like 
this:


--
CREATE OR REPLACE FUNCTION "public"."global_var_get"
(in_key varchar)
RETURNS bigint AS
$body$
my ($key) = @_;
return $_SHARED{$key} ? $_SHARED{$key} : 0;
$body$
LANGUAGE 'plperl' VOLATILE RETURNS NULL ON NULL INPUT;
--
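
The increment helper used in the example further below isn't part of the
original pair of functions; a minimal sketch built on top of the two
functions above could be:

--
CREATE OR REPLACE FUNCTION "public"."global_var_inc"
  (in_key varchar)
RETURNS bigint AS
$body$
   -- bump the shared value by one and return it
   SELECT global_var_set($1, global_var_get($1) + 1);
$body$
LANGUAGE SQL VOLATILE;
--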

Perhaps you can use PL/PERL and a function like these to modify "global" 
variables that you can increment as you do your select.  Something like:


   SELECT global_var_set('rownum', 0);

   SELECT global_var_inc('rownum') AS row_counter, *
   FROM datatable
   ORDER BY whatever;

Just an idea.

-- Dante

--
D. Dante Lorenso
[EMAIL PROTECTED]

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Numbering rows

2008-10-15 Thread Mark Morgan Lloyd
Thanks everybody- I'm watching with a lot of interest. I was worried 
that I was asking something stupid with an obvious answer...


ries van Twisk wrote:

May be this function can help :

http://www.postgresql.org/docs/8.3/static/functions-srf.html


Thanks, that's already turning out to be useful for something else I was 
working on today:


select percent,
to_char(nedcar_tonnes_001(percent), '0.99') AS nedcar
from generate_series(0,110,10) as percent;

The one thing I'd say about generate_series() is that the description 
suggests that one has to put an explicit count() as the second parameter 
if using it to number rows, i.e. it doesn't have an "as many as 
necessary" option.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No select permission on a table but can query it

2008-10-15 Thread Scott Marlowe
On Wed, Oct 15, 2008 at 1:33 PM, Michele Petrazzo - Unipex srl
<[EMAIL PROTECTED]> wrote:
> I think that this is a strange question, but: I need to revoke the
> select permission on a table, but I also need to leave, with a function,
> a user do a query on column.
> A real case can be that a user "test" cannot have the permissions for do
> a "select * from articles", but for do a "select has_article('an_article')"
> where has_article are:

Look up "security definer" for functions.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Numbering rows

2008-10-15 Thread Richard Broersma
On Wed, Oct 15, 2008 at 12:32 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:

> Can't you put the query into a subselect with an offset 0 and join to
> that to get the generate_series to work correctly?


I've never heard of doing it that way, but I'm very interested in
seeing how it is done.  This is what I've tried so far, but am still
getting the cross join:

postgres=#
select * from generate_series(1,3)
CROSS JOIN  ( values('a'),('b'),('c') ) as myvals( letter )
OFFSET 0;

 generate_series | letter
-----------------+--------
               1 | a
               2 | a
               3 | a
               1 | b
               2 | b
               3 | b
               1 | c
               2 | c
               3 | c
(9 rows)


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] No select permission on a table but can query it

2008-10-15 Thread Michele Petrazzo - Unipex srl

I think this is a strange question, but: I need to revoke the
select permission on a table, while still letting a user query a column
through a function.
A real case: a user "test" must not have permission to do a
"select * from articles", but must be able to do a
"select has_article('an_article')", where has_article is:

CREATE FUNCTION has_article (text) RETURNS articles AS $$
SELECT * FROM articles WHERE id_article = $1;
$$ LANGUAGE SQL;

In my tries I always receive:
ERROR:  permission denied for relation articles

Is there a solution?

Thanks,
Michele

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Numbering rows

2008-10-15 Thread Scott Marlowe
On Wed, Oct 15, 2008 at 1:23 PM, Richard Broersma
<[EMAIL PROTECTED]> wrote:
> On Wed, Oct 15, 2008 at 12:08 PM, ries van Twisk <[EMAIL PROTECTED]> wrote:
>> May be this function can help :
>>
>> http://www.postgresql.org/docs/8.3/static/functions-srf.html
>
> Using generate series won't number the rows that way that you would
> want.  You basically will end up with a cross join between the
> generated series and the requested set.   There are three ways that I
> know of to get a row number:

Can't you put the query into a subselect with an offset 0 and join to
that to get the generate_series to work correctly?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Drupal and PostgreSQL - performance issues?

2008-10-15 Thread Bruce Momjian
Tom Lane wrote:
> "Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> Mind you, I find the SysV APIs uselessly baroque too, but there is one
> >> feature that we have to have that is not in mmap(): the ability to
> >> detect other processes attached to a shmem block.
> 
> > Didn't we solve this problem on Windows?
> 
> Not terribly well --- see active thread on -hackers.

We could allocate a small shared memory area to solve this and use
mmap() for the other shared memory usage.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Numbering rows

2008-10-15 Thread Richard Broersma
On Wed, Oct 15, 2008 at 12:08 PM, ries van Twisk <[EMAIL PROTECTED]> wrote:
> May be this function can help :
>
> http://www.postgresql.org/docs/8.3/static/functions-srf.html

Using generate series won't number the rows that way that you would
want.  You basically will end up with a cross join between the
generated series and the requested set.   There are three ways that I
know of to get a row number:

1) IIRC use a pl-pgsql function that returns an incremented number
2) use SQL by joining with the ">=" operator and grouping with the
   aggregate count(*) (see the sketch below)
3) 8.4 has some new analytic functions that will do this nicely.
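
For reference, a minimal sketch of method 2 (assuming a hypothetical
employee table with a unique employeeid):

SELECT count(*) AS nrow, e1.employeeid
FROM employee e1
JOIN employee e2 ON e1.employeeid >= e2.employeeid
GROUP BY e1.employeeid
ORDER BY nrow;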


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Numbering rows

2008-10-15 Thread ries van Twisk

May be this function can help :

http://www.postgresql.org/docs/8.3/static/functions-srf.html

Ries
On Oct 15, 2008, at 1:44 PM, Mark Morgan Lloyd wrote:

Is there an easy way to assign a sequential number, possibly based  
on an arbitrary minimum (typically 0 or 1) to each row of an ordered  
result set, or do I have to work with explicit sequences?


I need to do quite a lot of maths on successive rows, extracting  
numeric and timestamp differences hence rates of change. I've  
typically been doing it manually or in a spreadsheet but there has  
to be a better way e.g. by a join on offset row numbers.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or  
colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general









--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Numbering rows

2008-10-15 Thread Mark Morgan Lloyd
Is there an easy way to assign a sequential number, possibly based on an 
arbitrary minimum (typically 0 or 1) to each row of an ordered result 
set, or do I have to work with explicit sequences?


I need to do quite a lot of maths on successive rows, extracting numeric 
and timestamp differences hence rates of change. I've typically been 
doing it manually or in a spreadsheet but there has to be a better way 
e.g. by a join on offset row numbers.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] slony and fill factor

2008-10-15 Thread Scott Marlowe
I was wondering if I can set fill factor without breaking slony
replication.  It's technically DDL, but it's not really altering the
table in the way I'd expect to be an issue for slony.  Anyone know
before I set up a replication set and experiment on it?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] NATURAL JOINs

2008-10-15 Thread Scott Marlowe
On Wed, Oct 15, 2008 at 10:17 AM, Richard Broersma
<[EMAIL PROTECTED]> wrote:
> On Wed, Oct 15, 2008 at 9:03 AM, Reg Me Please <[EMAIL PROTECTED]> wrote:
>
>> Both are perfectly right, indeed.
>> Nonetheless, in my opinion a NATURAL JOIN  exploiting the FKs
>> instead of the column names would be much more helpful and much less error
>> prone!
>>
>> As far as I know there is no way to exploit FKs in JOINs, right?
>
> Yes AFAIK, this would make postgresql's implementation of natural join
> violate the SQL standard.  Perhaps you could propose an "UNNATURAL
> JOIN" syntax extension. ;)

Or a "VERY VERY NATURAL JOIN" syntax?  :)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UPDATE and Indexes and Performance

2008-10-15 Thread Scott Marlowe
On Wed, Oct 15, 2008 at 10:42 AM, Bill Thoen <[EMAIL PROTECTED]> wrote:
> Does PG (8.1) ever use existing indexes when executing an UPDATE?
>
> I've got some tables with millions of records and whenever I update a column
> that involves most or all the records the EXPLAIN command seems to indicate
> that it isn't using the pre-existing indexes. This result in a slow update,
> which is further slowed by the presence of indexes. So when doing a large
> update should I just drop the indexes first, or is there some good reason to
> keep them?

You're assuming that seq scan is making it slow.  You can always use
the enable_xxx settings to turn off sequential scan etc to see if it
runs faster with an index.  Also, you might have a tuning issue going
on and indexed lookups would be faster.

If you're hitting every record, it's probably best to do a seq scan as
index scans, as previously mentioned hit both the index and the table.

What's your work_mem set to?  What about random_page_cost,
effective_cache_size, and shared_buffers?
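
A quick hedged sketch of that kind of experiment (table, column and values
are hypothetical):

BEGIN;
SET enable_seqscan = off;       -- push the planner towards an index scan
EXPLAIN ANALYZE                 -- note: EXPLAIN ANALYZE really runs the UPDATE
UPDATE big_table SET flag = true WHERE updated_at < '2008-01-01';
ROLLBACK;                       -- throw the test update away

SHOW work_mem;
SHOW random_page_cost;
SHOW effective_cache_size;
SHOW shared_buffers;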

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Drupal and PostgreSQL - performance issues?

2008-10-15 Thread Matthew T. O'Connor

Tom Lane wrote:

I think the subtext there is that the Linux kernel hackers hate the SysV
IPC APIs and wish they'd go away.  They are presently constrained from
removing 'em by their desire for POSIX compliance, but you won't get
them to make any changes that might result in those APIs becoming more
widely used :-(

Mind you, I find the SysV APIs uselessly baroque too, but there is one
feature that we have to have that is not in mmap(): the ability to
detect other processes attached to a shmem block.


Didn't we solve this problem on Windows?  Can we do a similar thing in 
Unix and get ride of the SysV stuff?



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Drupal and PostgreSQL - performance issues?

2008-10-15 Thread Tom Lane
"Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Mind you, I find the SysV APIs uselessly baroque too, but there is one
>> feature that we have to have that is not in mmap(): the ability to
>> detect other processes attached to a shmem block.

> Didn't we solve this problem on Windows?

Not terribly well --- see active thread on -hackers.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Optimizing projections containing unused columns

2008-10-15 Thread Andrus

Tom,


This question is too vague to be answerable --- especially if you want
an answer that applies across all 8.x releases.  I'd suggest
experimenting a bit using EXPLAIN ANALYZE to see what happens in your
actual application.


Thank you very much.
I cannot experiment with application currently since this part is at 
planning state.

I asked this to get right designing direction.

OK, I will create the application so that the inner WHERE clauses contain as
many restrictions as possible without adding additional tables to the inner
SELECTs. Hopefully this cannot be slower than moving those restrictions to
the outer select.


A third related issue:

Is it OK to use 5 levels of nested queries like

SELECT ...
FROM ...
  (SELECT ... FROM ...
    (SELECT ... FROM ...
      ...
    ) p1
  ) p2

or is it better to create queries with as few levels of nesting as possible?
In some cases we have the option to use UNIONs, JOINs and CREATE TEMP TABLE x
ON COMMIT DROP clauses instead of nested queries, but nested queries look
more natural to write.

Currently we are planning to support servers starting at 8.0, so we are not
planning to use the WITH statement.


Andrus 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Opptimizing projections containing unused columns

2008-10-15 Thread Tom Lane
"Andrus" <[EMAIL PROTECTED]> writes:
> Is it OK to put all filters to outer query WHERE clase?
> Or should I move as many filter conditions as possible to inner query so 
> that inner query returns 1 records instead of 50 records.

> Is there difference in perfomance if inner query returs large number of rows 
> which are filtered only by outer query ?

This question is too vague to be answerable --- especially if you want
an answer that applies across all 8.x releases.  I'd suggest
experimenting a bit using EXPLAIN ANALYZE to see what happens in your
actual application.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Opptimizing projections containing unused columns

2008-10-15 Thread Andrus

Tom,


Don't worry about it.  All modern Postgres versions ignore columns that
aren't actually used in the query --- at least for examples as simple as
this one.  In cases where you intentionally defeat optimization (eg via
OFFSET 0 in a sub-select) it's possible that the sub-select will compute
all its output columns even though the upper query doesn't use 'em all.

The width field in EXPLAIN output offers a good hint as to how many
columns the query is actually fetching.


Thank you very much.
I don't use the OFFSET clause in those queries, and I have servers starting
at version 8.0.

So I hope this works.

I have also a related question.
Inner query returns about 50 records, outer query returns about 1 
records.


Is it OK to put all filters in the outer query's WHERE clause?
Or should I move as many filter conditions as possible to inner query so 
that inner query returns 1 records instead of 50 records.


Is there a difference in performance if the inner query returns a large
number of rows which are filtered only by the outer query?


Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UPDATE and Indexes and Performance

2008-10-15 Thread Joshua Tolley
On Wed, Oct 15, 2008 at 10:42 AM, Bill Thoen <[EMAIL PROTECTED]> wrote:
> Does PG (8.1) ever use existing indexes when executing an UPDATE?
>
> I've got some tables with millions of records and whenever I update a column
> that involves most or all the records the EXPLAIN command seems to indicate
> that it isn't using the pre-existing indexes. This result in a slow update,
> which is further slowed by the presence of indexes. So when doing a large
> update should I just drop the indexes first, or is there some good reason to
> keep them?
>
> Thanks,
> - Bill Thoen

You have to hit the disk twice if you use an index -- once to read the
index and once to read the table itself. If the planner guesses that
an operation involves most of the records in a table, it will avoid
indexes and just seqscan, because it's faster.

- Josh / eggyknap

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] UPDATE and Indexes and Performance

2008-10-15 Thread Bill Thoen

Does PG (8.1) ever use existing indexes when executing an UPDATE?

I've got some tables with millions of records and whenever I update a 
column that involves most or all the records the EXPLAIN command seems 
to indicate that it isn't using the pre-existing indexes. This result in 
a slow update, which is further slowed by the presence of indexes. So 
when doing a large update should I just drop the indexes first, or is 
there some good reason to keep them?


Thanks,
- Bill Thoen


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Column level triggers

2008-10-15 Thread Tom Lane
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Scott Marlowe wrote:
>> Since you can check which columns have changed, it's pretty easy to
>> write a trigger that just skips its logic when none of the trigger
>> columns have changed.

> I think column-level triggers actually fire when one of the columns is 
> written to, not only when the value there is distinct from the previous 
> one.  This small difference is not easy to emulate by comparing OLD and 
> NEW in the trigger body.

So what happens when one of the target columns is modified by another
trigger, rather than by the SQL query?  (For extra credit, what if it's
a trigger that comes after the column trigger in the firing order?)

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] NATURAL JOINs

2008-10-15 Thread Richard Broersma
On Wed, Oct 15, 2008 at 9:03 AM, Reg Me Please <[EMAIL PROTECTED]> wrote:

> Both are perfectly right, indeed.
> Nonetheless, in my opinion a NATURAL JOIN  exploiting the FKs
> instead of the column names would be much more helpful and much less error
> prone!
>
> As far as I know there is no way to exploit FKs in JOINs, right?

Yes AFAIK, this would make postgresql's implementation of natural join
violate the SQL standard.  Perhaps you could propose an "UNNATURAL
JOIN" syntax extension. ;)


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Drupal and PostgreSQL - performance issues?

2008-10-15 Thread Tom Lane
Greg Smith <[EMAIL PROTECTED]> writes:
> If there were ever any Linux distributions that increased this value from 
> the tiny default, you might have a defensible position here (maybe 
> Oracle's RHEL fork does, they might do something here).  I've certainly 
> never seen anything besides Solaris ship with a sensible SHMMAX setting 
> for database use on 2008 hardware out of the box.  It's really quite odd, 
> but as numerous probes in this area (from the above in 2000 to Peter's 
> recent Linux bugzilla jaunt) show the resistance to making the OS default 
> to any higher is considerable.

I think the subtext there is that the Linux kernel hackers hate the SysV
IPC APIs and wish they'd go away.  They are presently constrained from
removing 'em by their desire for POSIX compliance, but you won't get
them to make any changes that might result in those APIs becoming more
widely used :-(

Mind you, I find the SysV APIs uselessly baroque too, but there is one
feature that we have to have that is not in mmap(): the ability to
detect other processes attached to a shmem block.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] NATURAL JOINs

2008-10-15 Thread Reg Me Please
On Wednesday 15 October 2008 17:55:03 Tom Lane wrote:
> "Richard Broersma" <[EMAIL PROTECTED]> writes:
> > For this reason, clients passing natural joins to the server can have
> > dangerous result sets returned with no warning.
>
> Yeah.  A lot of people consider that NATURAL JOIN is simply a bad idea
> and shouldn't be used ever --- it's too easy to shoot yourself in the
> foot with a careless column addition or rename.  Explicitly spelling out
> the join columns with ON or USING is a lot less prone to silent breakage
> after a schema change.
>
>   regards, tom lane

Both are perfectly right, indeed.
Nonetheless, in my opinion a NATURAL JOIN  exploiting the FKs
instead of the column names would be much more helpful and much less error
prone!

As far as I know there is no way to exploit FKs in JOINs, right?

THANKS

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Opptimizing projections containing unused columns

2008-10-15 Thread Grzegorz Jaśkiewicz
It looks like most people nowadays have two simple problems, and try to work
around them. Instead they should all focus on getting their data organized
properly, and their queries written for the project, before they start to
code other stuff.

The problems are: trying to outsmart the db, and still believing that you can
pull all the data into memory and work it out in software.

I see it time and time again :/


Re: [GENERAL] NATURAL JOINs

2008-10-15 Thread Tom Lane
"Richard Broersma" <[EMAIL PROTECTED]> writes:
> For this reason, clients passing natural joins to the server can have
> dangerous result sets returned with no warning.

Yeah.  A lot of people consider that NATURAL JOIN is simply a bad idea
and shouldn't be used ever --- it's too easy to shoot yourself in the
foot with a careless column addition or rename.  Explicitly spelling out
the join columns with ON or USING is a lot less prone to silent breakage
after a schema change.
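
A minimal sketch of that failure mode (hypothetical tables):

CREATE TABLE customers (customer_id int, name text);
CREATE TABLE orders (order_id int, customer_id int, created date);

-- Joins on customer_id, as intended:
SELECT * FROM orders NATURAL JOIN customers;

-- A careless column addition silently changes the join key:
ALTER TABLE customers ADD COLUMN created date;
-- The NATURAL JOIN now matches on (customer_id, created) and quietly
-- returns fewer (or no) rows, while the explicit form keeps working:
SELECT * FROM orders JOIN customers USING (customer_id);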

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Opptimizing projections containing unused columns

2008-10-15 Thread Tom Lane
"Andrus" <[EMAIL PROTECTED]> writes:
> I have lot of autogenerated from projection queries in form
> SELECT source.c1, source.c2, t1.col1, t1.col2, ...
> FROM (SELECT c1, c2, c3, . , c20 FROM ... WHERE ... ) source
> LEFT JOIN t2 USING (somecolumn)
> Main SELECT uses only few columns (source.c1 and source.c2 in this sample)
> from source projection.

> 1. Does selecting of unused columns ( c3 ... c20) affect to query prefomance 

Don't worry about it.  All modern Postgres versions ignore columns that
aren't actually used in the query --- at least for examples as simple as
this one.  In cases where you intentionally defeat optimization (eg via
OFFSET 0 in a sub-select) it's possible that the sub-select will compute
all its output columns even though the upper query doesn't use 'em all.

The width field in EXPLAIN output offers a good hint as to how many
columns the query is actually fetching.
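
A hedged illustration (hypothetical table t with columns c1 .. c4): compare
the width figures reported for the scan nodes.

EXPLAIN SELECT c1, c2
FROM (SELECT c1, c2, c3, c4 FROM t) AS source;           -- unused columns dropped

EXPLAIN SELECT c1, c2
FROM (SELECT c1, c2, c3, c4 FROM t OFFSET 0) AS source;  -- may fetch all four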

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL stored procedure returning multiple result sets (SELECTs)?

2008-10-15 Thread Jaime Casanova
On 10/15/08, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Vladimir Dzhuvinov wrote:
>
> > > That feature alone can help you enormously.  Lest you think I'm
> > > biased, I dba a mysql box professionally...every time I pop into the
> > > mysql shell I feel like I'm stepping backwards in time about 5 years.
> > > Don't let the inability to return multiple sets trip you up...you are
> > > missing the big picture.
> >
> > Oh, I am not missing the big picture: Quit programming and take up the
> > job of a lazy millionaire :)
>
> I don't quite understand you here.  I'm sure we all crave the lazy
> millionaire bit, but what would a lazy millionaire do other than
> programming for fun?
>

read dozens of mails from a forum?

-- 
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Chart of Accounts

2008-10-15 Thread Robert Parker
On Tue, Oct 14, 2008 at 3:50 PM, Isak Hansen <[EMAIL PROTECTED]> wrote:
> On Mon, Oct 13, 2008 at 2:57 AM, justin <[EMAIL PROTECTED]> wrote:
>> [...]  Also you want to split out the debit and credits instead of
>> using one column.  Example one column accounting table to track values
>> entered how do you handle Crediting a Credit Account Type.  is it a negative
>> or positive entry???
>
> How is crediting a credit account different from crediting any other account?
>
> YMMV, but I think a single amount column makes for a more consistent design.

Absolutely. I worked on computer accounting systems many years ago,
not exactly BC but BPC (Before PC) and such systems had a flag in the
Chart of Accounts records to indicate how to display negative numbers.
The assets and cost records were displayed as recorded and the revenue
and liabilities were negated for display purposes. Naturally offsets
such as credit notes against revenue displayed in the way that humans
expected to read them without any difficulty.
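
A hedged sketch of the single-amount-column design being described
(hypothetical tables; debits stored positive, credits negative, with a
per-account sign used only for display):

CREATE TABLE account (
    account_id   serial PRIMARY KEY,
    name         text NOT NULL,
    display_sign int  NOT NULL DEFAULT 1     -- -1 for revenue/liabilities
);

CREATE TABLE journal_line (
    line_id    serial PRIMARY KEY,
    account_id int NOT NULL REFERENCES account,
    amount     numeric(14,2) NOT NULL        -- a single signed column
);

SELECT a.name, sum(l.amount) * a.display_sign AS balance_for_display
FROM account a
JOIN journal_line l USING (account_id)
GROUP BY a.name, a.display_sign;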
-- 
In a world without walls who needs Windows (or Gates)? Try Linux instead!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Column level triggers

2008-10-15 Thread Robert Treat
On Wednesday 15 October 2008 04:19:59 Laurent Wandrebeck wrote:
> 2008/10/15 Scott Marlowe <[EMAIL PROTECTED]>:
> > You'll probably have to ask that in -hackers.  I'm guessing it's one
> > of those things that if one wrote a sufficiently large check one could
> > find a hacker to implement it.  But I can't imagine it being a weekend
> > project, and if it's not already in 8.4 beta it wouldn't make it to
> > 8.4, but you'd have to shoot for 8.5.
>

Actually, the final commitfest for 8.4 isn't until November 1st, so if you 
did have something you wanted to get into 8.4, you have 2 weeks to make it 
into the last commitfest; after that you're probably looking at 8.5. 

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL stored procedure returning multiple result sets (SELECTs)?

2008-10-15 Thread Alvaro Herrera
Vladimir Dzhuvinov wrote:

> > That feature alone can help you enormously.  Lest you think I'm
> > biased, I dba a mysql box professionally...every time I pop into the
> > mysql shell I feel like I'm stepping backwards in time about 5 years.
> > Don't let the inability to return multiple sets trip you up...you are
> > missing the big picture.
> 
> Oh, I am not missing the big picture: Quit programming and take up the
> job of a lazy millionaire :)

I don't quite understand you here.  I'm sure we all crave the lazy
millionaire bit, but what would a lazy millionaire do other than
programming for fun?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] NATURAL JOINs

2008-10-15 Thread Richard Broersma
On Tue, Oct 14, 2008 at 11:17 PM, regme please <[EMAIL PROTECTED]> wrote:

> Well, it could make some sense to extend the semantics when you have
> explicit "REFERENCES" to tables in the JOINs.
> Or at least warn or notify the user that the "NATURAL (INNER) JOIN" has
> actually been converted into a CROSS one.
> It would not be standard but helpful for developers.

For this reason, clients passing natural joins to the server can have
dangerous result sets returned with no warning.  If one of the joined
columns were to have its name altered, clients issuing these queries
wouldn't immediately know that there was a problem.  On the other hand,
a client application that joins using the ON clause will get an error
when PostgreSQL notices the nonexistent column reference.
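
A quick illustration of the hazard, with made-up tables:

CREATE TABLE employee   (emp_id int PRIMARY KEY, dept_id int, name text);
CREATE TABLE department (dept_id int PRIMARY KEY, dept_name text);

-- Joins on the shared column dept_id, as intended:
SELECT * FROM employee NATURAL JOIN department;

-- Someone later renames the column on one side:
ALTER TABLE department RENAME COLUMN dept_id TO id;

-- Now no common column names remain, so the NATURAL JOIN silently turns
-- into a cross join; no error, just a surprising result set:
SELECT * FROM employee NATURAL JOIN department;

-- The explicit ON form instead fails with an error about the missing
-- d.dept_id column:
SELECT *
FROM employee e
JOIN department d ON e.dept_id = d.dept_id;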

-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Optimizing projections containing unused columns

2008-10-15 Thread Andrus

I have a lot of autogenerated projection queries of the form

SELECT source.c1, source.c2, t1.col1, t1.col2, ...
FROM (SELECT c1, c2, c3, ..., c20 FROM ... WHERE ... ) source
LEFT JOIN t2 USING (somecolumn)

The main SELECT uses only a few columns (source.c1 and source.c2 in this
sample) from the source projection.
The inner table may have up to 50 rows.

I have read the 8.4 WITH documentation, which suggests that PostgreSQL can
discard unused columns automatically, but this is not clear. I haven't found
exact documentation about this, and I haven't found a way to verify it with
EXPLAIN or a similar command.

Questions:

1. Does selecting unused columns (c3 ... c20) affect query performance?
2. Will PostgreSQL 8.0+ automatically exclude columns c3 .. c20 while
executing the query?
3. Or should I make the SQL builder more sophisticated so that it does not
generate columns c3 .. c20 when it creates the SQL?

Andrus. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL stored procedure returning multiple result sets (SELECTs)?

2008-10-15 Thread Vladimir Dzhuvinov
Hi Merlin,

>> A function is... hmm, a function, a mapping: given a set of arguments it
>> returns a single and well defined value: f(x,y) -> z
>>
>> The purpose of stored procedures, on the other hand, is to encapsulate
>> an (arbitrary) bunch of SQL commands, a mini-program of sort.

> I think your understanding is off here.  Functions can encapsulate
> arbitrary collection of statements...as I said previously, there are
> two principle differences:
> *) functions have implicit created transaction, procedures do not
> *) how you pass data to/from the procedure body.  (functions return a
> scalar, record, or a set)
> 
> Functions are limited in the sense that it is awkward to return
> multiple sets, but are much more flexible how they can be integrated
> into queries -- you can call a function anywhere a scalar or a set is
> allowed -- in addition to the monolithic procedure style.

I was speaking about how it's done in MySQL. And I liked their approach
of clear separation of responsibility between functions and stored
procedures. At first I didn't quite understand their point, but then,
during the development of my app, I gradually began to appreciate it.

To sum up how it's done in MySQL:

Functions are meant for tasks such as string operations, date/calendar
functions, maths, encryption. They are allowed to operate only on their
arguments. And they are stackable, just as functions in other languages
like C.

Stored procedures are meant to be programs that work on the data.
Hence they are allowed to access tables, they can start explicit
transactions, and they can execute plain arbitrary SELECTs that pass
their rows straight to the client. And stored procedures are "nestable"
- akin to include() in PHP.


I suspect that the present situation with Postgres reflects the way the
software developed over the years. Perhaps in the very beginning the
Postgres developers introduced functions which more or less resembled
the "plain" functions of MySQL today. But then users might have pressed
for a method to store their table manipulation logic on the server, and
then for some reason it had been decided to overload functions with this
extra responsibility, rather than create a separate clean "stored
procedure" class.

So today Postgres has got functions which are very feature-full
(compared with functions in MySQL), but still fall short of what
traditional stored procedures can provide.

Yes, I was very much pleased with a number of Postgres features, such as
the ability to do a tighter data definition using checks and
constraints. Postgres allows for a much richer data model when I compare
it with MySQL. I decided to put Postgres aside simply because it doesn't
allow the definition of *clean* stored procedures (as I'm used to them
in MySQL). And I didn't like the idea of twisting the PG function model
around to accommodate my existing MySQL stored procedure logic. I abhor
doing ugly things with code :)


Pavel stated interest to work on the addition of stored procedures to
Postgres provided he finds sponsorship. Right now I don't see much
benefit investing money into such a venture, besides I've got my hands
full with the day-to-day management of my own project. So far MySQL has
been doing its job well and for the near future it looks like I'm
staying on it.



> Just as a 'for example', look how you can trap errors and do some
> recovery inside a pl/pgsql routine:
> http://www.postgresql.org/docs/8.3/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

Well, MySQL does allow for exception handling within SPs, although there
are some shortcomings (if you define a generic handler you cannot obtain
precise info on the error type).


> That feature alone can help you enormously.  Lest you think I'm
> biased, I dba a mysql box professionally...every time I pop into the
> mysql shell I feel like I'm stepping backwards in time about 5 years.
> Don't let the inability to return multiple sets trip you up...you are
> missing the big picture.

Oh, I am not missing the big picture: Quit programming and take up the
job of a lazy millionaire :)

> ok :-) enough advocacy...
> merlin

Cheers,

Vladimir
-- 
Vladimir Dzhuvinov * www.valan.net * PGP key ID AC9A5C6C





[GENERAL] user and default schema

2008-10-15 Thread Alain Roger
Hi,

Where can I find which user account has which default schema?
Thanks a lot,

-- 
Alain

Windows XP SP3
PostgreSQL 8.2.4 / MS SQL server 2005
Apache 2.2.4
PHP 5.2.4
C# 2005-2008


Re: [GENERAL] run postgres 8.3

2008-10-15 Thread Joao Ferreira gmail
Hello Eduardo


On Tue, 2008-10-14 at 15:40 -0500, Eduardo Arévalo wrote:
> I installed PostgreSQL 8.3,
> and when giving the command:
> bash-3.2$ /usr/local/postgres_8.3/bin/initdb -D /base/data
> 

that command only initializes the underlying filesystem database files,
directories and configurations

the command that starts the database system (if you do not wish to use
init scripts, as seems to be the case) is something like:


 su - postgres -c '/usr/bin/pg_ctl start -w -D /var/pgsql/data \
     -l /var/pgsql/data/logfile -o "-i"'




> the result is:
> 
> 
> The files belonging to this database system will be owned by user
> "postgres".
> This user must also own the server process.
> 
> The database cluster will be initialized with locale en_US.UTF-8.
> The default database encoding has accordingly been set to UTF8.
> The default text search configuration will be set to "english".
> 
> initdb: directory "/base/data" exists but is not empty
> If you want to create a new database system, either remove or empty
> the directory "/base/data" or run initdb
> with an argument other than "/base/data".

you must remove that directory (or its contents) first:

try one of these:

rm -rf /base/*
rm -rf /base/data/*

only after this removal retry the initdb command.

after that try the start command
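
Putting the steps together for the paths you mentioned (just a sketch;
double-check the data directory before removing anything):

rm -rf /base/data/*
su - postgres -c '/usr/local/postgres_8.3/bin/initdb -D /base/data'
su - postgres -c '/usr/local/postgres_8.3/bin/pg_ctl start -w -D /base/data -l /base/data/logfile'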

 BTW, use something like

/usr/bin/pg_ctl stop

to stop your DB


BTW: why aren't you using your distribution's packages? They should
work just fine :)

cheers
joao


> 
> 
> but the service does not start
> 
> 
> 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] making trigger on delete, set 'affected rows' correctly

2008-10-15 Thread Grzegorz Jaśkiewicz
Hey list,

So I was wondering: since many people depend on 'affected rows', we have a
trigger here that runs on delete. It updates the table, sets certain fields
to false on delete, and returns NULL - so it looks like:

CREATE OR REPLACE FUNCTION ondelete_update() RETURNS trigger AS
$_$
BEGIN
  IF OLD.foo <> false THEN
    -- run the UPDATE dynamically against whichever table the trigger fired on
    EXECUTE 'UPDATE ' || TG_RELNAME || ' SET foo = true WHERE rid = ' || OLD.rid;
  END IF;
  RETURN NULL;
END;
$_$
LANGUAGE 'plpgsql';
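
The function is attached as a row-level BEFORE DELETE trigger, along these
lines (a sketch; "sometable" stands in for whichever table carries the rid
and foo columns):

CREATE TRIGGER ondelete_update_trg
  BEFORE DELETE ON sometable
  FOR EACH ROW EXECUTE PROCEDURE ondelete_update();

Returning NULL from the row-level BEFORE trigger suppresses the actual DELETE
of that row.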

Now, this on every delete will return 'affected rows count = 0'. C

-- 
GJ


[GENERAL] Querying on partitioned tables

2008-10-15 Thread Peter Vanderborght
Hi,

I've recently split my log table into time-based partitions, which really
improves insert speed and query times for certain queries.
However, I can't help thinking the query optimizer is really suboptimal
here.

My partitions look like this:

CREATE TABLE log_cdf
(
  id serial NOT NULL,
  tstamp timestamp without time zone,
  peopleid integer,
  room character varying(50),
  event character varying(50),
  "type" character varying(50),
  contentid integer,
  CONSTRAINT pk_log_cdf PRIMARY KEY (id)
)
WITH (OIDS=TRUE);
ALTER TABLE log_cdf OWNER TO postgres;
GRANT ALL ON TABLE log_cdf TO postgres;
GRANT SELECT ON TABLE log_cdf TO public;

CREATE TABLE log_cdf_200810 (
CHECK ( tstamp >= DATE '2008-10-01' AND tstamp < DATE '2008-11-01' )
) INHERITS (log_cdf);
CREATE TABLE log_cdf_200811 (
CHECK ( tstamp >= DATE '2008-11-01' AND tstamp < DATE '2008-12-01' )
) INHERITS (log_cdf);
CREATE TABLE log_cdf_200812 (
CHECK ( tstamp >= DATE '2008-12-01' AND tstamp < DATE '2009-01-01' )
) INHERITS (log_cdf);
CREATE TABLE log_cdf_200901 (
CHECK ( tstamp >= DATE '2009-01-01' AND tstamp < DATE '2009-02-01' )
) INHERITS (log_cdf);

CREATE INDEX idx_log_cdf_200810_tstamp ON log_cdf_200810 USING btree
(tstamp);
CREATE INDEX idx_log_cdf_200811_tstamp ON log_cdf_200811 USING btree
(tstamp);
CREATE INDEX idx_log_cdf_200812_tstamp ON log_cdf_200812 USING btree
(tstamp);
CREATE INDEX idx_log_cdf_200901_tstamp ON log_cdf_200901 USING btree
(tstamp);

And of course I've added the matching trigger function as well.
On top of that, I have my old table, which I've renamed to log_cdf_old, added
a check constraint for tstamp < DATE '2008-10-01', and set to inherit
log_cdf.
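
The trigger function follows the usual INSERT-routing pattern, roughly like
this (a sketch only; the actual function may differ in detail):

CREATE OR REPLACE FUNCTION log_cdf_insert_trigger() RETURNS trigger AS
$$
BEGIN
    IF NEW.tstamp >= DATE '2008-10-01' AND NEW.tstamp < DATE '2008-11-01' THEN
        INSERT INTO log_cdf_200810 VALUES (NEW.*);
    ELSIF NEW.tstamp >= DATE '2008-11-01' AND NEW.tstamp < DATE '2008-12-01' THEN
        INSERT INTO log_cdf_200811 VALUES (NEW.*);
    ELSIF NEW.tstamp >= DATE '2008-12-01' AND NEW.tstamp < DATE '2009-01-01' THEN
        INSERT INTO log_cdf_200812 VALUES (NEW.*);
    ELSIF NEW.tstamp >= DATE '2009-01-01' AND NEW.tstamp < DATE '2009-02-01' THEN
        INSERT INTO log_cdf_200901 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'tstamp % falls outside the defined partitions', NEW.tstamp;
    END IF;
    RETURN NULL;  -- the row has already been routed to a child table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_log_cdf_insert
    BEFORE INSERT ON log_cdf
    FOR EACH ROW EXECUTE PROCEDURE log_cdf_insert_trigger();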

--
  PROBLEM 1
--
But now, simple queries like:

Select * 
from log_cdf 
Order by tstamp desc
Limit 100

All of a sudden take ages to complete and the query plan looks absolutely
awful:

Limit  (cost=8333060.45..8333060.70 rows=100 width=374)
  ->  Sort  (cost=8333060.45..8676529.57 rows=137387645 width=374)
Sort Key: public.log_cdf.tstamp
->  Result  (cost=0.00..3082203.45 rows=137387645 width=374)
  ->  Append  (cost=0.00..3082203.45 rows=137387645 width=374)
->  Seq Scan on log_cdf  (cost=0.00..12.00 rows=200
width=374)
->  Seq Scan on log_cdf_old log_cdf
(cost=0.00..2915379.36 rows=129675136 width=57)
->  Seq Scan on log_cdf_200810 log_cdf
(cost=0.00..166776.09 rows=7711709 width=58)
->  Seq Scan on log_cdf_200811 log_cdf
(cost=0.00..12.00 rows=200 width=374)
->  Seq Scan on log_cdf_200812 log_cdf
(cost=0.00..12.00 rows=200 width=374)
->  Seq Scan on log_cdf_200901 log_cdf
(cost=0.00..12.00 rows=200 width=374)

This while

analyse select * from log_cdf_200810 order by tstamp desc limit 100 

Limit  (cost=0.00..7.51 rows=100 width=58)
  ->  Index Scan Backward using idx_log_cdf_200810_tstamp on
log_cdf_200810  (cost=0.00..579351.47 rows=7711021 width=58)


Is much, much better.


--
  PROBLEM 2
--
I also notice that the query planner doesn't take time variables (like
LOCALTIMESTAMP or now() ) into account at all.
Look at this:

select * 
from log_cdf 
where tstamp > '2008-10-10' -- 5 days ago
  and tstamp < '2008-10-15' -- today

Gives the plan:

Result  (cost=0.00..205388.86 rows=3144503 width=374)
  ->  Append  (cost=0.00..205388.86 rows=3144503 width=374)
->  Seq Scan on log_cdf  (cost=0.00..13.00 rows=1 width=374)
  Filter: ((tstamp > '2008-10-10 00:00:00'::timestamp without
time zone) AND (tstamp < '2008-10-15 00:00:00'::timestamp without time
zone))
->  Seq Scan on log_cdf_200810 log_cdf  (cost=0.00..205375.86
rows=3144502 width=58)
  Filter: ((tstamp > '2008-10-10 00:00:00'::timestamp without
time zone) AND (tstamp < '2008-10-15 00:00:00'::timestamp without time
zone))

Which for all intents and purposes is exactly what I'd expect.

But now

select * 
from log_cdf 
where tstamp > LOCALTIMESTAMP - interval '5 days'
  and tstamp < LOCALTIMESTAMP

Gives me the following plan:

Result  (cost=0.00..1161067.27 rows=3476371 width=374)
  ->  Append  (cost=0.00..1161067.27 rows=3476371 width=374)
        ->  Seq Scan on log_cdf  (cost=0.00..15.50 rows=1 width=374)
  Filter: ((tstamp < ('now'::text)::timestamp without time zone)
AND (tstamp > (('now'::text)::timestamp without time zone - '5
days'::interval)))
->  Bitmap Heap Scan on log_cdf_old log_cdf
(cost=8054.10..909421.75 rows=378889 width=57)
  Recheck Cond: ((tstamp > (('now'::text)::timestamp without
time zone - '5 days'::interval)) AND (tstamp < ('now'::text)::timestamp
without time zone))
  ->  Bitmap Index Scan on idx_log_cdf_old_tstamp
(cost=0.00..7959.37 rows=378889 width=0)
In

Re: [GENERAL] Column level triggers

2008-10-15 Thread Laurent Wandrebeck
2008/10/15 Scott Marlowe <[EMAIL PROTECTED]>:
>
> You'll probably have to ask that in -hackers.  I'm guessing it's one
> of those things that if one wrote a sufficiently large check one could
> find a hacker to implement it.  But I can't imagine it being a weekend
> project, and if it's not already in 8.4 beta it wouldn't make it to
> 8.4, but you'd have to shoot for 8.5.
OK, will forward that to the more appropriate ml.
>
> Since you can check which columns have changed, it's pretty easy to
> write a trigger that just skips its logic when none of the trigger
> columns have changed.
Agreed, but it's not the cleanest way (well, actually, it is, right now:).
Laurent.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Column level triggers

2008-10-15 Thread Peter Eisentraut

Scott Marlowe wrote:

Since you can check which columns have changed, it's pretty easy to
write a trigger that just skips its logic when none of the trigger
columns have changed.


I think column-level triggers actually fire when one of the columns is 
written to, not only when the value there is distinct from the previous 
one.  This small difference is not easy to emulate by comparing OLD and 
NEW in the trigger body.  Details might need to be checked in the 
standard and existing implementations.
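
For illustration, the OLD/NEW comparison workaround looks roughly like this
(a sketch with made-up table and column names):

CREATE TABLE accounts (
    id      integer PRIMARY KEY,
    balance numeric NOT NULL,
    note    text
);

CREATE OR REPLACE FUNCTION on_balance_change() RETURNS trigger AS
$$
BEGIN
    -- Fires only when the value actually changes; an assignment such as
    --   UPDATE accounts SET balance = balance
    -- writes to the column but is skipped here, whereas a true column-level
    -- trigger would fire for it.
    IF NEW.balance IS DISTINCT FROM OLD.balance THEN
        RAISE NOTICE 'balance of account % changed from % to %',
                     OLD.id, OLD.balance, NEW.balance;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_balance_change
    BEFORE UPDATE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE on_balance_change();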


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general