Re: [GENERAL] B+ versus hash maps

2006-06-16 Thread surabhi.ahuja
In that case, should I set the enable_seqscan parameter to off when
starting the postmaster?

I ask because I have seen that even though the index exists, it still
goes for a sequential scan.

Thanks,
Surabhi


From: Jim Nasby [mailto:[EMAIL PROTECTED]]
Sent: Fri 6/16/2006 1:49 AM
To: surabhi.ahuja
Cc: Michael Fuhr; pgsql-general@postgresql.org
Subject: Re: [GENERAL] B+ versus hash maps

On Jun 15, 2006, at 8:07 AM, surabhi.ahuja wrote:
> is there any way of specifying what type of index i want, say hash maps
> instead of the B+ trees? someone told me that in the case where
> duplicates occur (on the indexed field), hash maps are better than B+ trees.

http://www.postgresql.org/docs/8.1/interactive/sql-createindex.html

> and also please tell if i can assume that it will use index only and not
> go for sequential scan; again i was told for that i'll have to set the
> random page cost parameter to 1.

The database will use whatever it thinks is optimal. Use explain analyze
to see what it's doing.

Unless your database fits (and stays) entirely in memory, you'll
probably be pretty unhappy with random_page_cost=1.
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com  work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf  cell: 512-569-9461
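[Editor's sketch] Jim's two suggestions can be tried as follows; the table
and column names here are hypothetical, but CREATE INDEX ... USING hash and
EXPLAIN ANALYZE are as documented for PostgreSQL 8.1:

```sql
-- Ask for a hash index instead of the default B-tree
CREATE INDEX orders_customer_hash ON orders USING hash (customer_id);

-- See what the planner actually chooses, with real run times
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
```

If the plan still shows a Seq Scan, the planner has estimated the
sequential scan to be cheaper for that query; it does not mean the index
is broken.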




Re: [GENERAL] pgadmin window closes abnormally

2006-06-16 Thread Richard Huxton

Jasbinder Bali wrote:

Hi,
I'm using pgadmin 3 on Fedora Core 4.
Whenever I try to open the PostgreSQL help, pgadmin shuts down abnormally.
The same thing happens when I try to view the postgresql.conf or pg_hba.conf
files.
I don't know what's going wrong.


If you start pgadmin from a terminal, you might see an error message. 
The other place to check is your system logs; perhaps SELinux is interfering.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] need help to recover database

2006-06-16 Thread Richard Huxton

Haroon Sayyad wrote:

Dear Sir, please help us and guide us to solve the following error while
using postgres 8.3 version. The error showing is:

'invalid page header in block 102 of relation pg_proc'


There is no PostgreSQL version 8.3 - please check again.

The error message suggests on-disk corruption. Have you had crashes? Do 
you have backups readily available? What operating system are you 
running on?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Searching BLOB - Lucene setup problem

2006-06-16 Thread John Sidney-Woollett

This is a bit off topic for the Postgres list... ;)

Make sure you explicitly include the name of the Lucene jar file in your 
command line invocation, along with any other directories that are required 
(normally your current working directory). On Windows you'd use 
something like:


java -cp .;{pathto}\lucene-1.4.3.jar YourJavaApp
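[Editor's note] On Unix-like systems the classpath separator is ':' rather
than ';', so the equivalent invocation (jar path left as a placeholder, as
above) would be:

```shell
java -cp .:{pathto}/lucene-1.4.3.jar YourJavaApp
```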

When you use Lucene in your webapp, include the Lucene jar file in 
{tomcat_home}\common\lib or the WEB-INF\lib directory under your webapp.


Hope that helps.

John

[EMAIL PROTECTED] wrote:

Hi John,
I have had a read through the lucene website
(http://lucene.apache.org/java/docs/index.html) and it sounds pretty
good to me.  I should be able to use this in conjunction with my JSP
pages.

This may sound quite dumb to anyone who develops in java, but I need a
little help setting up the demo on my Windows XP machine.  I have
installed JDK 1.5.0_07, I have installed Tomcat, and I can confirm that it
is all up and running correctly, as I have already written a few simple
JSP pages.

I have downloaded the lucene package, extracted the package to my C:\
and followed the steps of the demo page:
http://lucene.apache.org/java/docs/demo.html

But, when I try to run 'java org.apache.lucene.demo.IndexFiles
c:\lucene-2.0.0\src' from the cmd prompt, I get the following error:

Exception in thread 'main' java.lang.NoClassDefFoundError:
org/apache/lucene/analysis/Analyser

I am not sure why this is coming up.  I have followed the instructions
on the demo page on the web.

The only thing I can think of is I may have my CLASSPATH incorrect.
Can someone help me out with a basic description of what the classpath
is and where I should point the classpath environment variable to?

Once I have that correct, I think that I may be able to run the demo.

thanks for any help you can provide.

James

John Sidney-Woollett wrote:


Save yourself some effort and use Lucene to index a directory of your 300
word documents. I'm pretty sure that Lucene includes an extension to read
Word documents, and you can use PDFBox to read/write PDF files. Marrying
the searching and displaying of results to your web application should be
trivial since you're wanting to use java anyway. Lucene has full character
set support and is blindingly fast.

If you're looking for a solution to this problem using Postgres, then
you'll be creating a ton of extra work for yourself. If you're wanting to
learn more about postgres, then maybe it'll be worthwhile.

John





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




Re: [GENERAL] pgadmin window closes abnormally

2006-06-16 Thread Richard Huxton

Jasbinder Bali wrote:

Well, I start it from anywhere, and it closes abnormally.


And you don't get an error message in the terminal? Not even "process 
exited on signal XXX"?



SELinux is interfering? like how?


It sets policies for what applications can access what files/other 
resources. It might not know about pgadmin at all, or if it does it might 
not be set up correctly.


How did you install pgadmin - was it from a Fedora repository, another 
RPM, source, static binary?


Oh, and don't forget to CC: the list - I don't read this mailbox 
directly much, and I'm not a Fedora user so you'll want others to see this.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread Richard Huxton

snacktime wrote:

Anyone have any tips for minimizing downtime when upgrading?  So far
we have done upgrades during scheduled downtimes.  Now we are getting
to the point where the time required for a standard dump/restore is
just too long.  What have others done when downtime is critical?  The
only solution we have been able to come up with is to migrate the data
on a per user basis to a new database server.  Each user is a
merchant, and the data in the database is order data.  Migrating one
merchant at a time will keep the downtime per merchant limited to just
the time it takes to migrate the data for that merchant, which is
acceptable.


The other option would be to run replication, e.g. Slony, to migrate from 
one version to another. I've done it and it works fine, but it will mean 
Slony adding its own tables to each database. I'd still do it one 
merchant at a time, but that should reduce your downtime to seconds.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] VACUUMing sometimes increasing database size / sometimes

2006-06-16 Thread Nitin Verma


Will a 7.3.2 dump made up of copies using pg_dump import into 8.0+ without
any migration? What I need isn't a one-time process; it will run as an
automated script, in a way that the user will not even get to know (unless
he is reading the logs) that the database version changed. Even a remote
problem in export and import across versions may hit us, so please let me
know all the do's and don'ts, or pointers to those.


-Original Message-
From: Jim Nasby [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 16, 2006 1:54 AM
To: Florian G.Pflug
Cc: Nitin Verma; pgsql-general@postgresql.org
Subject: Re: [GENERAL] VACUUMing sometimes increasing database size /
sometimes

On Jun 15, 2006, at 1:16 PM, Florian G. Pflug wrote:

 Nitin Verma wrote:
 Were these bugs fixed by 7.3.2, if not what version should I look  
 for?
 http://archives.postgresql.org/pgsql-admin/2001-06/msg5.php  
 http://archives.postgresql.org/pgsql-hackers/2000-04/msg00083.php
 Ahm... 7.3.2 is *very* outdated. The current version of postgresql is
 8.1.4.

 The mails you linked are from the year 2001 (!), and concern 6.5  
 (!!) - A lot of things have changed in postgres since then ;-)

 None of the problems discussed there should trouble postgres anymore, if
 you use an at least remotely recent version (say, >= 8.0, or 7.4
 *at* *the* *very* *least*).

And if you are going to stick with 7.3, at least  get the latest  
version of it.

As  for searching for bugs... http://archives.postgresql.org/pgsql-bugs/
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com  work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] postgres and ldap

2006-06-16 Thread Magnus Hagander
 Hi
 
 I have started to use ldap for user authentication on my systems.
 
 1 is it possible to get postgres to authenticate against ldap

Yes. With current releases you can do this with PAM, assuming you're on
a platform that can do PAM. If your platform can't do PAM (for example,
Windows), you can't do LDAP auth.

8.2 will have direct LDAP authentication without PAM.


 2 is it advisable to do this ?

Sure, I see no reason why not. Beware of insecure password transports
though - you'll need to use SSL/TLS or similar to secure the connection
if you're going across insecure networks.
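[Editor's sketch] PAM-based authentication is selected per connection line
in pg_hba.conf. Something like the following illustrates the idea (the
network range and PAM service name are placeholders, not from the thread):

```
# TYPE   DATABASE  USER  CIDR-ADDRESS  METHOD
hostssl  all       all   10.0.0.0/8    pam   postgresql
```

Here 'postgresql' names a PAM service that would be configured on the OS
side to check LDAP (e.g. via pam_ldap); using hostssl keeps passwords off
the wire, as Magnus advises.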

//Magnus

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] A slow query - Help please?

2006-06-16 Thread Alban Hertroys

Alban Hertroys wrote:

Hi all,

We're using some 3rd party product that uses inheritance, and the 
following query is rather slow on PostgreSQL 7.4.7 (Debian stable). Any 
suggestions how to speed it up?


We really need this solved. Isn't anybody able to shed some light on 
this? Is it possible to make this query use an index scan, preferably 
w/o disabling sequential scanning?


Regards,
--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Forcing backslash to be treated as ordinary string character

2006-06-16 Thread John Gunther

Doc section 4.1.2.1 says that in future, backslashes will be treated as
ordinary string characters. It says to use the E string prefix to
ensure that backslashes are permanently treated as escape characters.

Question: Aside from dollar quoting (Section 4.1.2.2), is there any
way NOW to force backslashes in string constants to be treated as
ORDINARY characters?

Thanks
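[Editor's sketch] The behaviors the question contrasts, against an 8.1-era
server (before standard_conforming_strings existed), look like this:

```sql
SELECT 'a\nb';    -- backslash acts as an escape: the string holds a newline
SELECT E'a\nb';   -- explicit escape-string syntax: same result, future-proof
SELECT $$a\nb$$;  -- dollar quoting: the backslash is an ordinary character
```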


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] postgres password

2006-06-16 Thread DataIntellect
You cannot use the postgres account without some tweaks. Sorry, I
don't remember where I found this info - somewhere in the pgadmin
install docs.

However, this works:

Create a new superuser with a password, add connectivity to the pg_hba.conf file if needed, and pgadmin should work fine.


/Kevin



- Original Message -

Change the following in pg_hba.conf:

host all all 127.0.0.1/32 md5

to

host all all 127.0.0.1/32 trust

and that will let you connect without a password; then reload your db server settings using 'pg_ctl reload'. You can then set the password once connected to the database using:

alter user username password 'password'

Once you are done with that, you can switch back to the original authentication mode.

/Shoaib


On 6/15/06, Jasbinder Bali [EMAIL PROTECTED] wrote:

> Hello,
> I'm trying to connect to the local postgres database using pgadmin. It
> asks for a password that i never set and hence i'm not able to connect
> to my local postgres database.
>
> Also, when i click postgresql help option in the Help tab of pgadmin,
> the pgadmin screen would close down. Don't know whats going on wrong.
>
> Thanks,
> ~Jas


Re: [GENERAL] postgres password

2006-06-16 Thread Dave Page





  
  
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of DataIntellect
Sent: 15 June 2006 20:56
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres password

> You cannot use the postgres account without some tweaks, Sorry I don't
> remember where I found this info - somewhere in the pgadmin install
> docs.

If it does say that anywhere, I'd love to know where so I can remove it.

> However this works:
> Create a new superuser with a password, add connectivity to
> the pg_hba.conf file if needed and pgadmin should work fine

The postgres account should be quite usable for this. You might need to 
edit pg_hba.conf as well, of course.

Regards, Dave


Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread Kenneth Downs

snacktime wrote:


Anyone have any tips for minimizing downtime when upgrading?  So far
we have done upgrades during scheduled downtimes.  Now we are getting
to the point where the time required for a standard dump/restore is
just too long.  What have others done when downtime is critical?  The
only solution we have been able to come up with is to migrate the data
on a per user basis to a new database server.  Each user is a
merchant, and the data in the database is order data.  Migrating one
merchant at a time will keep the downtime per merchant limited to just
the time it takes to migrate the data for that merchant, which is
acceptable.



AFAIK it has always been the case that you should expect to have to dump 
out your databases and reload them for version upgrades.


Is anybody over at the dev team considering what an onerous burden this 
is?  Is anyone considering doing away with it?





---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] VACUUMing sometimes increasing database size /

2006-06-16 Thread Douglas McNaught
Nitin Verma [EMAIL PROTECTED] writes:

 Will 7.3.2 Dump made up of copies using pg_dump import without any migration
 to 8.0+? What I need isn't a once process and will go as a automated script,
 in a way that user will not even get to know (if he isn't reading that logs) 
 Database version changed. Considering that even a remote problem in export
 and import across versions may hit. So please let me know all the do's and
 don'ts... or the pointers to those.

It will very likely have problems.  The usual recommended procedure is
to use the version of pg_dump that comes with the PG that you're
upgrading *to* against the old database; e.g. you'd use the 8.0+
pg_dump and tell it to connect to the 7.3.2 database.

You should really upgrade from 7.3.2, at least to the latest point
release in the 7.3 series, and have a plan to go to 8.0 or 8.1,
because 7.3 won't be supported for that much longer (if it even is
right now).
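[Editor's sketch] The procedure Doug describes might look like this; host
names, paths, and versions are placeholders:

```shell
# Run the *new* version's pg_dump against the old 7.3.x server,
# then restore the result into the new server.
/usr/local/pgsql-8.1/bin/pg_dump -h oldhost mydb > mydb.sql
psql -h newhost -f mydb.sql mydb
```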

-Doug

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] VACUUMing sometimes increasing database size / sometimes

2006-06-16 Thread Florian G. Pflug

Douglas McNaught wrote:

Nitin Verma [EMAIL PROTECTED] writes:


Will 7.3.2 Dump made up of copies using pg_dump import without any migration
to 8.0+? What I need isn't a once process and will go as a automated script,
in a way that user will not even get to know (if he isn't reading that logs) 
Database version changed. Considering that even a remote problem in export

and import across versions may hit. So please let me know all the do's and
don'ts... or the pointers to those.


It will very likely have problems.  The usual recommended procedure is
to use the version of pg_dump that comes with the PG that you're
upgrading *to* against the old database; e.g. you'd use the 8.0+
pg_dump and tell it to connect to the 7.3.2 database.


Note that even if your 7.3 dump restores fine on 8.1 (How likely that is
depends on the complexity of your schema), you might still experience 
problems, if your application depends on things that changed between 7.3

and 8.1. Postgres tends to become more strict with every release, so
there are things you got away with in 7.3 which now cause an error message.

So, you shouldn't upgrade the database version behind a user's back. You'll 
need to test his applications against the new version, or at least tell
him that there might be problems.


You should really upgrade from 7.3.2, at least to the latest point
release in the 7.3 series, and have a plan to go to 8.0 or 8.1,
because 7.3 won't be supported for that much longer (if it even is
right now).


If 8.0 or 8.1 is too big a step for you, you could consider moving to 
7.4. I don't know if 7.3 already supported schemas, but if it did, then

the chance of breakage is a lot smaller if you switch to 7.4 compared to
switching to 8.1. OTOH, one day 7.4 will be unsupported too, and then 
you'll need to switch anyway.


greetings, Florian Pflug


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread Oleg Bartunov

On Thu, 15 Jun 2006, snacktime wrote:


Anyone have any tips for minimizing downtime when upgrading?  So far
we have done upgrades during scheduled downtimes.  Now we are getting
to the point where the time required for a standard dump/restore is
just too long.  What have others done when downtime is critical?  The
only solution we have been able to come up with is to migrate the data
on a per user basis to a new database server.  Each user is a
merchant, and the data in the database is order data.  Migrating one
merchant at a time will keep the downtime per merchant limited to just
the time it takes to migrate the data for that merchant, which is
acceptable.

Any other ideas?


We use the replication package Slony for upgrading.






Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] VACUUMing sometimes increasing database size / sometimes

2006-06-16 Thread Nitin Verma
 if your application depends on things that changed between 7.3 and 8.1. 
Postgres tends to become more strict with every release, so
 there are things you got away with in 7.3 which now cause an error
 message.

Do we have change lists where I can see all the changes between 7.3 and 8.1,
maybe release by release?

-Original Message-
From: Florian G. Pflug [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 16, 2006 4:48 PM
To: Douglas McNaught
Cc: Nitin Verma; Jim Nasby; pgsql-general@postgresql.org
Subject: Re: [GENERAL] VACUUMing sometimes increasing database size /
sometimes

Douglas McNaught wrote:
 Nitin Verma [EMAIL PROTECTED] writes:
 
 Will 7.3.2 Dump made up of copies using pg_dump import without any
migration
 to 8.0+? What I need isn't a once process and will go as a automated
script,
 in a way that user will not even get to know (if he isn't reading that
logs) 
 Database version changed. Considering that even a remote problem in export
 and import across versions may hit. So please let me know all the do's and
 don'ts... or the pointers to those.
 
 It will very likely have problems.  The usual recommended procedure is
 to use the version of pg_dump that comes with the PG that you're
 upgrading *to* against the old database; e.g. you'd use the 8.0+
 pg_dump and tell it to connect to the 7.3.2 database.

Note that even if your 7.3 dump restores fine on 8.1 (How likely that is
depends on the complexity of your schema), you might still experience 
problems, if your application depends on things that changed between 7.3
and 8.1. Postgres tends to become more strict with every release, so
there are things you got away with in 7.3 which now cause an error message.

So, you shouldn't upgrade database version behind a users back. You'll 
need to test his applikations against the new version, or at least tell
him that there might be problems.

 You should really upgrade from 7.3.2, at least to the latest point
 release in the 7.3 series, and have a plan to go to 8.0 or 8.1,
 because 7.3 won't be supported for that much longer (if it even is
 right now).

If 8.0 or 8.1 is too big a step for you, you could consider moving to 
7.4. I don't know if 7.3 already supported schemas, but if it did, then
the chance of breakage is a lot smaller if you switch to 7.4 compared to
switching to 8.1. OTOH, one day 7.4 will be unsupported too, and then 
you'll need to switch anyway.

greetings, Florian Pflug


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread Richard Huxton

Kenneth Downs wrote:
AFAIK it has always been the case that you should expect to have to dump 
out your databases and reload them for version upgrades.


Is anybody over at the dev team considering what an onerous burden this 
is?  Is anyone considering doing away with it?


Far from trivial. You have changes in on-disk formats and actual 
functionality between major version numbers. For instance - what would 
you do to deal with the recent changes in unicode validation?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] VACUUMing sometimes increasing database size / sometimes

2006-06-16 Thread Richard Huxton

Nitin Verma wrote:

if your application depends on things that changed between 7.3 and 8.1. 

Postgres tends to become more strict with every release, so

there are things you got away with in 7.3 which now cause an error
message.


Do we have change lists where I can see all the changes between 7.3 and 8.1,
may be release by release?


Try the manuals where there are version-by-version details of changes in 
the release-notes.


http://www.postgresql.org/docs/8.1/static/release.html

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Omitting tablespace creation from pg_dumpall...

2006-06-16 Thread Florian G. Pflug

Chander Ganesan wrote:

Tom Lane wrote:

Chander Ganesan [EMAIL PROTECTED] writes:
 
I'd like to suggest that a feature be added to pg_dumpall to remove 
tablespace definitions/creation from the output.  While the inclusion 
is important for backups - it's equally painful when attempting to 
migrate data from a development to production database.  Since 
PostgreSQL won't create the directory that will contain the 
tablespace, the tablespace creation will fail.  Following that, any 
objects that are to be created in that tablespace will fail (since 
the tablespace doesn't exist).


If the above statements were actually true, it'd be a problem, but they
are not true.  The dump only contains SET default_tablespace = foo
commands, which may themselves fail, but they won't prevent subsequent
CREATE TABLE commands from succeeding.

  

With PostgreSQL 8.1.4, if I do the following:

create tablespace test location '/srv/tblspc';
create database test with tablespace = test;

The pg_dumpall result will contain:
CREATE TABLESPACE test OWNER postgres LOCATION '/srv/tblspc';
CREATE DATABASE test WITH TEMPLATE=template0 OWNER=postgres 
ENCODING='utf8' TABLESPACE=test;


Hm.. I guess pg_dumpall is meant to create an identical clone of a 
postgres cluster (note that the term cluster here refers to one 
postgres instance serving multiple databases, and _not_ to a cluster
in the high-availability sense). For moving a single database from one
machine to another, pg_dump might suit you better. With pg_dump, you
normally create the new database manually, and _afterwards_ restore
your dump into this database.

I'd say that pg_dumpall not supporting restoring into a different 
tablespace is comparable to not supporting database renaming. Think
of pg_dumpall as equivalent to copying the data directory - only that
it works while the database is online, and supports differing 
architectures on source and destination machines.
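[Editor's sketch] The single-database pg_dump workflow Florian describes,
with placeholder names, would be roughly:

```shell
# On the destination cluster: create the database manually, choosing
# owner and tablespace as appropriate, then restore the dump into it.
createdb -T template0 -O appuser test
pg_dump -h sourcehost test | psql -h desthost test
```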


greetings, Florian Pflug


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] postgres and ldap

2006-06-16 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 8.2 will have direct LDAP authentication without PAM.

That code's going to go away real soon if some documentation doesn't
show up.  I can't believe Bruce was sloppy enough to accept a feature
patch with zero documentation.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] postgres and ldap

2006-06-16 Thread Magnus Hagander
  8.2 will have direct LDAP authentication without PAM.
 
 That code's going to go away real soon if some documentation 
 doesn't show up.  I can't believe Bruce was sloppy enough to 
 accept a feature patch with zero documentation.

?? I thought I had sent in the docs for that. Will dig through my notes
when I get home to resubmit. (I have certainly written it, because my
local docs copy has it!)

//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] postgres and ldap

2006-06-16 Thread Magnus Hagander
   8.2 will have direct LDAP authentication without PAM.
  
  That code's going to go away real soon if some 
 documentation doesn't 
  show up.  I can't believe Bruce was sloppy enough to accept 
 a feature 
  patch with zero documentation.
 
 ?? I thought I had sent in the docs for that. Will dig 
 through my notes when I get home to resubmit. (I have 
 certainly written it, because my local docs copy has it!)

Actually, the docs *were* submitted. See
http://archives.postgresql.org/pgsql-patches/2005-12/msg00375.php.
Applied per
http://archives.postgresql.org/pgsql-patches/2006-03/msg00080.php, I
think it's just the docs that were missed.

I don't recall any feedback about updating them, so I think that patch
still stands.

//Magnus

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] [HACKERS] Fabian Pascal and RDBMS deficiencies in fully

2006-06-16 Thread Simon Riggs
On Wed, 2006-06-14 at 18:34 -0400, Chris Browne wrote:
 kleptog@svana.org (Martijn van Oosterhout) writes:
 
  On Tue, Jun 13, 2006 at 05:23:56PM -0400, Christopher Browne wrote:
   [3] 
   http://www.intelligententerprise.com/010327/celko_online.jhtml;jsessionid=NDIHEWXGL4TNKQSNDBNSKHSCJUMEKJVN
  
  The sample problem in [3] is one that shows pretty nicely a
  significant SQL weakness; it's very painful to build SQL to do complex
  things surrounding cumulative statistics.
 
  I havn't managed to wrap my brain around them yet, but this seems like
  something that SQL WINDOW functions would be able to do. For each row
  define the window frame to be all the preceding rows, do a SUM() and
  divide that over the total. Or perhaps the PERCENT_RANK() function does
  this already, not sure.
 
  Mind you, postgres doesn't support them yet, but it's interesting that
  it may be possible at all...
 
 Yes, you are exactly right; I have seen a couple references to OVER
 and PARTITION BY which look as though they are the relevant SQL
 additions...
 
 http://blogs.ittoolbox.com/database/technology/archives/olap-sql-part-5-windowing-aggregates-8373
 http://www.sqljunkies.com/HowTo/4E65FA2D-F1FE-4C29-BF4F-543AB384AFBB.scuk
 http://sqljunkies.com/Article/4E65FA2D-F1FE-4C29-BF4F-543AB384AFBB.scuk
 http://www.experts-exchange.com/Databases/Oracle/Q_21793507.html
 
 I'm not sure the degree to which these are standardized, but they are
 available in some form or another in late-breaking versions of Oracle,
 DB2, and Microsoft SQL Server.
 
 I'm not quite sure how to frame this so as to produce something that
 should go on the TODO list, but it looks like there's a possible TODO
 here...

Yes, SQL Window Functions should be explicitly part of the TODO. They
are already described in detail as part of SQL:2003.

Window functions allow you to work with ordered result sets, moving
averages etc.
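[Editor's sketch] As an illustration of the SQL:2003 syntax under
discussion (not runnable on PostgreSQL at the time of this thread; table
and column names are hypothetical), a running total and a three-row moving
average might be written:

```sql
SELECT day,
       amount,
       SUM(amount) OVER (ORDER BY day
                         ROWS UNBOUNDED PRECEDING) AS running_total,
       AVG(amount) OVER (ORDER BY day
                         ROWS 2 PRECEDING)         AS moving_avg_3
FROM sales;
```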

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] postgres and ldap

2006-06-16 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Actually, the docs *were* submitted. See
 http://archives.postgresql.org/pgsql-patches/2005-12/msg00375.php.
 Applied per
 http://archives.postgresql.org/pgsql-patches/2006-03/msg00080.php, I
 think it's just the docs that were missed.

Ah.  I had found the docs-less commit but didn't go looking through patches.
Will gather up the docs and apply unless Bruce beats me to it.  Thanks.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] VACUUMing sometimes increasing database size / sometimes

2006-06-16 Thread Nitin Verma

Thanks so much - that really helps.

-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 16, 2006 6:29 PM
To: Nitin Verma
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] VACUUMing sometimes increasing database size /
sometimes

Nitin Verma wrote:
 if your application depends on things that changed between 7.3 and 8.1.

 Postgres tends to become more strict with every release, so
 there are things you got away with in 7.3 which now cause an error
 message.
 
 Do we have change lists where I can see all the changes between 7.3 and
8.1,
 may be release by release?

Try the manuals where there are version-by-version details of changes in 
the release-notes.

http://www.postgresql.org/docs/8.1/static/release.html

-- 
   Richard Huxton
   Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] VACUUMing sometimes increasing database size / sometimes

2006-06-16 Thread Nitin Verma
$ ls -al pgsqldb/pg_xlog
total 32816
drwx------  2 nitinverma root     4096 Jun 16 19:53 .
drwx------  6 nitinverma root     4096 Jun 16 19:33 ..
-rw-------  1 nitinverma root 16777216 Jun 16 20:08 0001
-rw-------  1 nitinverma root 16777216 Jun 16 19:45 0002

It looks like once a WAL file is created, even vacuum can't reclaim the
space. Is that the root cause behind the DB bloating with 7.3.2?

-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 16, 2006 6:29 PM
To: Nitin Verma
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] VACUUMing sometimes increasing database size /
sometimes

Nitin Verma wrote:
 if your application depends on things that changed between 7.3 and 8.1.

 Postgres tends to become more strict with every release, so
 there are things you got away with in 7.3 which now cause an error
 message.
 
 Do we have change lists where I can see all the changes between 7.3 and
8.1,
 may be release by release?

Try the manuals where there are version-by-version details of changes in 
the release-notes.

http://www.postgresql.org/docs/8.1/static/release.html

-- 
   Richard Huxton
   Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] VACUUMing sometimes increasing database size /

2006-06-16 Thread Bill Moran
In response to Nitin Verma [EMAIL PROTECTED]:

 $ ls -al pgsqldb/pg_xlog
 total 32816
 drwx------  2 nitinverma root     4096 Jun 16 19:53 .
 drwx------  6 nitinverma root     4096 Jun 16 19:33 ..
 -rw-------  1 nitinverma root 16777216 Jun 16 20:08 0001
 -rw-------  1 nitinverma root 16777216 Jun 16 19:45 0002
 
 Looks like once a WAL file is created, even vacuum can't reclaim the space. Is
 that the root cause behind the DB bloating with 7.3.2?

All versions of Postgresql generate WAL logs.  This is not bloat, this is
space required for normal operation of the database system.

I believe the defaults are to create 4 files, 16M each, and then rotate
through them.  If you've only got two files so far, this must be a
fairly new installation.

http://www.postgresql.org/docs/8.1/interactive/wal-configuration.html
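For context, the number of WAL segments kept around is driven mainly by
these postgresql.conf settings (values below are the 8.1 defaults, shown
for illustration only):

```
# postgresql.conf -- WAL sizing knobs (8.1 defaults, illustrative)
checkpoint_segments = 3      # max distance between automatic checkpoints,
                             # in 16MB log segments
checkpoint_timeout = 300     # max time between checkpoints, in seconds
```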

-- 
Bill Moran
Collaborative Fusion Inc.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] B+ versus hash maps

2006-06-16 Thread Ragnar
On Fri, 2006-06-16 at 11:39 +0530, surabhi.ahuja wrote:
[in response to Jim advising not to set random_page_cost=1]

 in that case, should i set 
 enable_seqscan parameter to off at the time of starting postmaster?

that is unlikely to be a good strategy.
 
 because i have seen that even thou the index exists it still goes for
 seq scan

there can be many situations where a sequential scan
is the correct thing to do.

it is not clear whether you have a case that needs
to be optimized, or if you are just assuming that
a sequential scan must be wrong.

things that may be causing a wrong choice of seqscan include:
  table has not been ANALYZEd lately
  some columns need a higher statistics target
  table contains few rows
  table is not correctly indexed
  search using non-indexable operators
  query phrased in a way that prevents use of indexes

so before jumping to setting wild global settings, you
should make sure you understand what your problem really
is.

the best way to do that is by looking at the output of
EXPLAIN ANALYZE. 

show us the output of EXPLAIN ANALYZE for your query, along
with details about the relevant columns and indexes, so that we
can give more concrete advice.
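For example, a hypothetical session (table, column, and index names are
invented here for illustration) might go:

```sql
-- refresh statistics first, then see what the planner actually does
ANALYZE orders;
EXPLAIN ANALYZE
  SELECT * FROM orders WHERE customer_id = 42;

-- if a seqscan is chosen where you expected an index scan,
-- check that a suitable index actually exists:
CREATE INDEX orders_customer_idx ON orders (customer_id);
```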

gnari

 


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] postgres and ldap

2006-06-16 Thread Bruce Momjian

Doc patch applied. I must have missed the second attachment.

---

Tom Lane wrote:
 Magnus Hagander [EMAIL PROTECTED] writes:
  Actually, the docs *were* submitted. See
  http://archives.postgresql.org/pgsql-patches/2005-12/msg00375.php.
  Applied per
  http://archives.postgresql.org/pgsql-patches/2006-03/msg00080.php, I
  think it's just the docs that were missed.
 
 Ah.  I had found the docs-less commit but didn't go looking through patches.
 Will gather up the docs and apply unless Bruce beats me to it.  Thanks.
 
   regards, tom lane
 
 ---(end of broadcast)---
 TIP 6: explain analyze is your friend
 

-- 
  Bruce Momjian   http://candle.pha.pa.us
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Omitting tablespace creation from pg_dumpall...

2006-06-16 Thread Bruce Momjian

Should pg_dumpall be using the SET default_tablespace = foo method as
well?

---

Florian G. Pflug wrote:
 Chander Ganesan wrote:
  Tom Lane wrote:
  Chander Ganesan [EMAIL PROTECTED] writes:
   
  I'd like to suggest that a feature be added to pg_dumpall to remove 
  tablespace definitions/creation from the output.  While the inclusion 
  is important for backups - it's equally painful when attempting to 
  migrate data from a development to production database.  Since 
  PostgreSQL won't create the directory that will contain the 
  tablespace, the tablespace creation will fail.  Following that, any 
  objects that are to be created in that tablespace will fail (since 
  the tablespace doesn't exist).
 
  If the above statements were actually true, it'd be a problem, but they
  are not true.  The dump only contains SET default_tablespace = foo
  commands, which may themselves fail, but they won't prevent subsequent
  CREATE TABLE commands from succeeding.
 

  With PostgreSQL 8.1.4, if I do the following:
  
  create tablespace test location '/srv/tblspc';
  create database test with tablespace = test;
  
  The pg_dumpall result will contain:
  *
  CREATE TABLESPACE test OWNER postgres LOCATION '/srv/tblspc';
  CREATE DATABASE test WITH TEMPLATE=template0 OWNER=postgres 
  ENCODING='utf8' TABLESPACE=test;
 
 Hm.. I guess pg_dumpall is meant to create an identical clone of a 
 postgres cluster (Note that the term cluster refers to one 
 postgres-instance serving multiple databases, and _not_ to a cluster
 in the high-availability sense). For moving a single database from one
 machine to another, pg_dump might suit you more. With pg_dump, you
 normally create the new database manually, and _afterwards_ restore
 your dump into this database.
 
 I'd say that pg_dumpall not supporting restoring into a different 
 tablespace is comparable to not supporting database renaming. Think
 of pg_dumpall as equivalent to copying the data directory - only that
 it works while the database is online, and supports differing 
 architectures on source and destination machine.
 
 greetings, Florian Pflug
 
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
 

-- 
  Bruce Momjian   http://candle.pha.pa.us
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Question about clustering multiple columns

2006-06-16 Thread Bruno Wolff III
On Tue, Jun 13, 2006 at 09:04:15 -0700,
  Benjamin Arai [EMAIL PROTECTED] wrote:
 Hi,
  
 I have a database where there are three columns (name,date,data).  The
 queries are almost always something like SELECT date,data FROM table WHERE
 name=blah AND date > 1/1/2005 AND date < 1/1/2006;.  I currently have three
 B-tree indexes, one for each of the columns.  Is clustering on date index
 going to be what I want, or do I need a index that contains both name and
 date?

I would expect that clustering on the name would be better for the above
query.
You probably want an index on name and date combined.
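A sketch of that (assuming the table is literally called t; the original
post just said "table"):

```sql
-- combined index on (name, date); hypothetical table name "t"
CREATE INDEX t_name_date_idx ON t (name, date);

-- cluster the table on that index so rows for one name are contiguous
-- (8.1 syntax: CLUSTER indexname ON tablename)
CLUSTER t_name_date_idx ON t;
```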

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] PL/Perl questions...

2006-06-16 Thread LLC
Hi List;

I have 2 questions...

1) I want to create a perl function which connects to an external non-Postgres database and retrieves data. Is the best way to do this to return a set or an array? I assume returning a set is the better way to go. I want to be able to run something like this:

"select a,b,c from my_new_perl_func('odbc-conn', 'sql-string') as t1(a,b,c)";

2) can anyone provide me with an example of how to return data from a perl function in this manner based on a DBI query?

Thanks in advance for your help...




Re: [GENERAL] Omitting tablespace creation from pg_dumpall...

2006-06-16 Thread Chander Ganesan

Florian G. Pflug wrote:

Chander Ganesan wrote:

Tom Lane wrote:

Chander Ganesan [EMAIL PROTECTED] writes:
 
I'd like to suggest that a feature be added to pg_dumpall to remove 
tablespace definitions/creation from the output.  While the 
inclusion is important for backups - it's equally painful when 
attempting to migrate data from a development to production 
database.  Since PostgreSQL won't create the directory that will 
contain the tablespace, the tablespace creation will fail.  
Following that, any objects that are to be created in that 
tablespace will fail (since the tablespace doesn't exist).


If the above statements were actually true, it'd be a problem, but they
are not true.  The dump only contains SET default_tablespace = foo
commands, which may themselves fail, but they won't prevent subsequent
CREATE TABLE commands from succeeding.

  

With PostgreSQL 8.1.4, if I do the following:

create tablespace test location '/srv/tblspc';
create database test with tablespace = test;

The pg_dumpall result will contain:
*
CREATE TABLESPACE test OWNER postgres LOCATION '/srv/tblspc';
CREATE DATABASE test WITH TEMPLATE=template0 OWNER=postgres 
ENCODING='utf8' TABLESPACE=test;


Hm.. I guess pg_dumpall is meant to create an identical clone of a 
postgres cluster (Note that the term cluster refers to one 
postgres-instance serving multiple databases, and _not_ to a cluster

in the high-availability sense). For moving a single database from one
machine to another, pg_dump might suit you more. With pg_dump, you
normally create the new database manually, and _afterwards_ restore
your dump into this database.

I'd say that pg_dumpall not supporting restoring into a different 
tablespace is comparable to not supporting database renaming. Think

of pg_dumpall as equivalent to copying the data directory - only that
it works while the database is online, and supports differing 
architectures on source and destination machine.


greetings, Florian Pflug
I understand why it's doing what it's doing - and I'm not disputing the 
usefulness of it.  I just think it might be good to have a flag that 
allows the omission of the alternate tablespace usage (or set the 
default instead of including it in the create db statement), since I can 
see how the failures might become problematic in some environments.


--
Chander Ganesan
The Open Technology Group
One Copley Parkway, Suite 210
Morrisville, NC  27560
Phone: 877-258-8987/919-463-0999
http://www.otg-nc.com


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread snacktime

On 6/16/06, Richard Huxton dev@archonet.com wrote:


The other option would be to run replication, e.g. slony to migrate from
one version to another. I've done it and it works fine, but it will mean
slony adding its own tables to each database. I'd still do it one
merchant at a time, but that should reduce your downtime to seconds.



I'll have to take another look at slony, it's been a while.  Our
database structure is a bit non standard.  Being a payment gateway, we
are required to have a separation of data between merchants, which
means not mixing data from different merchants in the same table.
So what we do is every user has their own schema, with their own set
of tables.  Yes I know that's not considered the best practice design
wise, but separate databases would have caused even more issues, and
as it turns out there are some advantages to the separate schema
approach that we never thought of.  Last time I looked at slony you
have to configure it for each individual table you want replicated.
We have around 50,000 tables, and more are added on a daily basis.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] table has many to many relationship with itself - how

2006-06-16 Thread Bruno Wolff III
On Wed, Jun 14, 2006 at 13:51:50 -0700,
  [EMAIL PROTECTED] wrote:
 Starting with this:
 
 create sequence languages_seq increment by 1;
 create table languages (
   id integer primary key default nextval('languages_seq'),
   language_name varchar(100)
 );
 insert into languages (id, language_name) values (1, 'English');
 insert into languages (id, language_name) values (2, 'French');
 insert into languages (id, language_name) values (3, 'Spanish');
 insert into languages (id, language_name) values (4, 'Italian');
 
 create table phrases(
  id serial primary key,
  language integer references languages(id),
  content text
 );
 insert into phrases (language, content) values (1, 'the book');
 insert into phrases (language, content) values (2, 'le livre');
 insert into phrases (language, content) values (3, 'el libro');
 insert into phrases (language, content) values (4, 'il libro');
 insert into phrases (language, content) values (1, 'the room');
 insert into phrases (language, content) values (4, 'la stanza');
 insert into phrases (language, content) values (4, 'la camera');
 
 
 For your translations table, I would go with something like this:
 
 
 create sequence translations_seq increment by 1;
 create table translations (
  translation_id integer primary key default nextval('translations_seq'),
  lang1_id integer references phrases(id),
  lang2_id integer references phrases(id)
 );

I think you are better off putting the equivalence information in the phrases
table. (This assumes that treating translations of a phrase into various
languages forms an equivalence class.) Under this model each phrase will
be in exactly one equivalence class, so that adding an equivalence class
column to the phrase table seems like a good solution.
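A sketch of that model, reusing the sample data above (the equiv_class
column name is invented here):

```sql
-- one equivalence class per meaning
ALTER TABLE phrases ADD COLUMN equiv_class integer;

-- "the book" and its translations all share one class, e.g. class 1
UPDATE phrases SET equiv_class = 1
 WHERE content IN ('the book', 'le livre', 'el libro', 'il libro');

-- all translations of a given phrase:
SELECT p2.content
  FROM phrases p1
  JOIN phrases p2 ON p2.equiv_class = p1.equiv_class
 WHERE p1.content = 'the book' AND p2.id <> p1.id;
```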

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] minimizing downtime when upgrading

2006-06-16 Thread Bill Moran
In response to snacktime [EMAIL PROTECTED]:

 On 6/16/06, Richard Huxton dev@archonet.com wrote:
 
  The other option would be to run replication, e.g. slony to migrate from
  one version to another. I've done it and it works fine, but it will mean
  slony adding its own tables to each database. I'd still do it one
  merchant at a time, but that should reduce your downtime to seconds.
 
 I'll have to take another look at slony, it's been a while.  Our
 database structure is a bit non standard.  Being a payment gateway, we
 are required to have a separation of data between merchants, which
 means not mixing data from different merchants in the same table.
 So what we do is every user has their own schema, with their own set
 of tables.  Yes I know that's not considered the best practice design
 wise, but separate databases would have caused even more issues, and
 as it turns out there are some advantages to the separate schema
 approach that we never thought of.  Last time I looked at slony you
 have to configure it for each individual table you want replicated.
 We have around 50,000 tables, and more are added on a daily basis.

We've got a script here that takes a pg_dump and automatically generates
a slony config that adds all tables and sequences.  I've got to check with
the Powers That Be, but i suspect we'll be opening up the code.

Does this duplicate any work that anyone else is doing?

-- 
Bill Moran
Collaborative Fusion Inc.


IMPORTANT: This message contains confidential information and is
intended only for the individual named. If the reader of this
message is not an intended recipient (or the individual
responsible for the delivery of this message to an intended
recipient), please be advised that any re-use, dissemination,
distribution or copying of this message is prohibited. Please
notify the sender immediately by e-mail if you have received
this e-mail by mistake and delete this e-mail from your system.
E-mail transmission cannot be guaranteed to be secure or
error-free as information could be intercepted, corrupted, lost,
destroyed, arrive late or incomplete, or contain viruses. The
sender therefore does not accept liability for any errors or
omissions in the contents of this message, which arise as a
result of e-mail transmission.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] How to install PL/perlU (perl untrusted)

2006-06-16 Thread LLC
Hi List;

I found documentation that talks about untrusted perl here:
http://www.postgresql.org/docs/8.1/interactive/plperl-trusted.html

However I do not know how to install/use untrusted perl.

Can someone point me in the right direction?

Thanks in advance...



Re: [GENERAL] How to install PL/perlU (perl untrusted)

2006-06-16 Thread Tony Caduto

LLC wrote:


However I do not know how to install/use untrusted perl.
 


the easiest way is to use createlang from the command line.

for PLperlu just do the following:

(run the command as the postgres user)
su postgres <enter>
Then:
createlang plperlu yourdatabasename

It's pretty much the same on win32, except createlang may not be in the
path, so you should actually be in the bin dir when you run it, or use
the full path name.
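Alternatively — if I remember right, 8.1 knows the standard PLs via
pg_pltemplate — you can install it from psql with plain SQL (run as a
superuser, connected to the target database):

```sql
CREATE LANGUAGE plperlu;
```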

Hope this helps,

--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] (no subject)

2006-06-16 Thread LLC
Hi List;

I've created a plperlu (perl untrusted) function with the ability to use
DBI. I can use the code listing below to return data if I specify the
query as follows:

select * from sybase_get2() as (f1 varchar(100), f2 varchar(100));

However, I have to specify the column names as f1 and f2.
I want to turn this into a generic solution where I can eventually pass in
the SQL. For now I just want to be able to specify any column names I want
in the AS part of the above query.

Any Ideas?

Thanks in advance for your help...


### Code Listing  Start ###
CREATE OR REPLACE FUNCTION sybase_get2()
RETURNS SETOF record
AS $$
use DBI;

my $dbh = DBI->connect("dbi:Sybase:server=CXS100", 'pguser1', 'pg70093');

if (!$dbh) {
    return;
}

my $qry = $dbh->prepare("select parent_id, Parent_key from
csx1db.dbo.parentkeys");

$qry->execute();

while ( @data = $qry->fetchrow_array() ) {
    return_next({ f1 => $data[0], f2 => $data[1] });
}

return;
$$ LANGUAGE plperlu;


### Code Listing  End ###

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PL/Perl questions...

2006-06-16 Thread David Fetter
On Fri, Jun 16, 2006 at 11:42:51AM -0500, LLC wrote:
 Hi List;
  
 I have 2 questions...
  
 1) I want to create a perl function which connects to an external
 non-Postgres database and retrieves data. I wonder is the best way
 to do this to return a set or an array? I assume returning a data
 set is a better way to go, I want to be able to run something like
 this:
  
 select a,b,c from my_new_perl_func('odbc-conn', 'sql-string') as t1(a,b,c);

DBI-Link does something a lot like this.
http://pgfoundry.org/projects/dbi-link/

 2) can anyone provide me with an example of how to return data from
 a perl function in this manner based on a DBI query?

See above :)

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778AIM: dfetter666
  Skype: davidfetter

Remember to vote!

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] How to install PL/perlU (perl untrusted)

2006-06-16 Thread Scott Marlowe
On Fri, 2006-06-16 at 12:57, LLC wrote:
 Hi List;
  
 I found documentation that talks about untrusted perl here:
 http://www.postgresql.org/docs/8.1/interactive/plperl-trusted.html
  
 However I do not know how to install/use untrusted perl.
  
 Can someone point me in the right direction?
  
 Thanks in advance...
 

Here's a dirt simple shell script to make a set for everything in the
public schema.  It's not pretty, but it seems to work on 7.4.x

echo "create temp sequence tc;SELECT 'set add table (set id=1, origin=1,
id='||nextval('tc')||', fully qualified name =
''public.'||c.relname||'\', comment=\'\');' as \"Name\" FROM
pg_catalog.pg_class c LEFT JOIN pg_catalog.pg_user u ON u.usesysid =
c.relowner LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname='public' and c.relkind IN ('r') AND n.nspname NOT IN
('pg_catalog', 'pg_toast') AND pg_catalog.pg_table_is_visible(c.oid)
order by 1;drop sequence tc;" | psql stage_reporting

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] A real currency type

2006-06-16 Thread Bruce Momjian

I have added this URL to the TODO list in case people want to move
forward on this.

---

Martijn van Oosterhout wrote:
-- Start of PGP signed section.
 For a while I've been wondering about making a type that was really a
 shell around a base type that tagged the type in some way. For example,
 associating a currency with a numeric and complaining about additions
 between mismatches.
 
 Well, I did it and it's available here:
 http://svana.org/kleptog/pgsql/taggedtypes.html
 
 Below some examples of it in action. Yes, that's a timestamp that
 remembers the timezone. Neat huh?
 
 Tested on 7.4 and a recent 8.1devel so it should work for most people.
 Installation reports welcome. Note, this is beta software, don't run it
 on your production server. Thanks.
 
 Have a nice day,
 
 
 test=# select '5.6 USD'::currency + '4.5 USD'::currency;;
  ?column?  
 ---
  10.10 USD
 (1 row)
 
 test=# select '5.6 USD'::currency + '4.5 AUD'::currency;;
 ERROR:  Using operator +(currency,currency) with incompatable tags (USD,AUD)
 test=# select c1, print_currency(c1) from c;
  c1 | print_currency 
 +
  232.44 USD | US$232.44
  21.20 EUR  | €   21.20
  -13.44 AUD | AU$-13.44
  0.01 USD   | US$  0.01
  14.00 AUD  | AU$ 14.00
 (5 rows)
 
 test=# select 5.4*c1 from c where tag(c1) = 'AUD';
   ?column?  
 
  -72.58 AUD
  75.60 AUD
 (2 rows)
 
 test=# select t, timestamp(t), date_part('hour',t) from c;
 t|  timestamp  | date_part 
 -+-+---
  2005-08-14 02:00:00+02 Europe/Amsterdam | 2005-08-14 02:00:00 | 2
  2005-08-14 02:00:00+02 Australia/Sydney | 2005-08-14 10:00:00 |10
  2005-08-14 02:00:00+02 Asia/Hong_Kong   | 2005-08-14 08:00:00 | 8
  2005-08-14 02:00:00+02 America/New_York | 2005-08-13 20:00:00 |20
  2005-08-14 02:00:00+02 Asia/Kuwait  | 2005-08-14 03:00:00 | 3
 (5 rows)
 
 -- 
 Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
  Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
  tool for doing 5% of the work and then sitting around waiting for someone
  else to do the other 95% so you can sue them.
-- End of PGP section, PGP failed!

-- 
  Bruce Momjian   http://candle.pha.pa.us
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] (no subject)

2006-06-16 Thread Tony Caduto

LLC wrote:

Hi List;

I've created a plperlu (perl untrusted) function with the ability to use
dbi. I can use the code listing below to return data if I specify the
query as follows:

select * from sybase_get2() as (f1 varchar(100), f2 varchar(100));

However, I have to specify the column names as f1 and f2.
I want to turn this into a generic solution where I can eventually pass in
the SQL. For now I just want to be able to specify any column names I want
in the as part of the above query.

Any Ideas?

  

Use a type to define the output columns, then you can call the function
like this:

select * from sybase_get2()

no need to use AS

See the examples on this page on how to use a type with a plperl function:

http://www.commandprompt.com/community/pgdocs81/plperl-database
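A sketch of that approach (the type name and column definitions are
invented here; adjust to the real Sybase column types):

```sql
-- define the row shape once...
CREATE TYPE parentkey_row AS (f1 varchar(100), f2 varchar(100));

-- ...then declare the function as SETOF that type
CREATE OR REPLACE FUNCTION sybase_get2()
RETURNS SETOF parentkey_row AS $$
    # same DBI body as before; return_next({f1 => ..., f2 => ...})
    return;
$$ LANGUAGE plperlu;

-- now no column list is needed at call time:
SELECT * FROM sybase_get2();
```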


--
Tony Caduto
AM Software Design
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql
Your best bet for Postgresql Administration



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Omitting tablespace creation from pg_dumpall...

2006-06-16 Thread Tom Lane
Bruce Momjian pgman@candle.pha.pa.us writes:
 Should pg_dumpall be using the SET default_tablespace = foo method as
 well?

That would mean changing the semantics of CREATE DATABASE; currently it
copies the default tablespace from the template database, rather than
looking at default_tablespace.  I'm unsure if that's a good idea or not.
None of the other properties of a database are handled that way.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Online backups and tar

2006-06-16 Thread Wes
1. Section 23.3.2 (Making a Base backup) of the 8.1.0 doc says:

Also, some versions of GNU tar consider it an error if a file is changed
while tar is copying it. There does not seem to be any very convenient way
to distinguish this error from other types of errors, other than manual
inspection of tar¹s messages. GNU tar is therefore not the best tool for
making base backups.

What then, on linux (Red Hat 3.0 ES), is the recommended backup program for
online backups?  It appears cpio does the same thing.
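One common workaround, if memory serves, is to rely on GNU tar's
exit-status convention (1 means some files changed or differed while
being read, 2 or more means a fatal error) and only abort on status >= 2.
A self-contained sketch, with a scratch directory standing in for the
real $PGDATA:

```shell
#!/bin/sh
# Sketch: a scratch directory stands in for the real $PGDATA; the point
# is the exit-status handling around GNU tar, not the paths.
workdir=$(mktemp -d)
mkdir -p "$workdir/data"
echo "dummy" > "$workdir/data/somefile"

tar czf "$workdir/base.tar.gz" -C "$workdir" data
rc=$?

# GNU tar exits 1 for "file changed as we read it"; treat that as a
# warning, but anything >= 2 is a real failure.
if [ "$rc" -ge 2 ]; then
    echo "base backup failed (tar exit $rc)" >&2
    exit 1
fi
echo "base backup ok (tar exit $rc)"
```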

2. During a restore, postmaster tries to restore a file '0001.history'.
Neither that file, nor any *.history file, is anywhere to be found.  I can
not find this documented anywhere.  What is this file?  Does it have to do
with timelines?


Wes



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully implementing

2006-06-16 Thread Florian G. Pflug

Trent Shipley wrote:

On Tuesday 2006-06-13 09:26, David Fetter wrote:

On Tue, Jun 13, 2006 at 09:18:17AM -0600, Scott Ribe wrote:

To hold it up as any kind of paradigm is really misinformed.

SQL had something that relational algebra/relational calculus did not
have, which is that somebody without a math degree can stare at it a
short while and *do* something with it right away.  That it also has
other properties that are extremely useful and powerful (the ability
to specify states of ignorance using NULL, do arithmetic, use
aggregates, etc.) is what has made it such a smashing success.

Now, there's another thing that makes it amazingly hard to displace:
imagining what would be better *enough* to justify the many millions
of people-years and even more billions of dollars needed to move away
from it.  Despite Date's many whines over the decades, his
still-vaporware Relational Model doesn't even vaguely approximate that
criterion.



COBOL and VisualBasic are better than Haskell by the same argument.

Well, VisualBasic really sucks IMHO, but if I had to choose between
taking over a 100,000-line VB project, or a 10,000-line Haskell
project, I'm not sure if I wouldn't choose the VB one.


Haskell has very nice properties, but there are Haskell one-liners
which I can stare at for hours, and am still not exactly sure what they
do ;-)

I normally prefer languages with a terse syntax, but Haskell is
sometimes too much even for me ;-)


greetings, Florian Pflug


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Regarding data recovery

2006-06-16 Thread Haroon Sayyad
Dear Sir,

We are using postgres 8.0.3 on Windows XP and we don't have a recent database backup. The database shows tables but is missing all the procedures written in it, and shows the following error. We are using the front end PgAdmin III for creating procedures etc.

The error is:

invalid page header in block 102 of relation pg_proc

Re: [GENERAL] Performance Question

2006-06-16 Thread Patrick TJ McPhee
In article [EMAIL PROTECTED],
Terry Lee Tucker [EMAIL PROTECTED] wrote:

% elements of 50 thousand records on 8 structurally identical databases. We 
% threw together the script and decided to just delete the record and re-insert 
% it with the data that was brought into sync. Now the question: Is it just as 
% fast to do it this way, or is there some hidden advantage to performing an 
% update?

If you have foreign key relationships to the table being updated, then
deleting from that table will often be slower than updating.


-- 

Patrick TJ McPhee
North York  Canada
[EMAIL PROTECTED]

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] problem in loading a jar file

2006-06-16 Thread Umut Sargut



Hi,

I'm getting an exception when I'm trying to load a jar file on Windows XP.

The command I'm using is:

statement.executeUpdate("SELECT sqlj.install_jar('file:///"+jarLocation+"','"+transformName+"',false)");

where jarLocation is: C:/temp/transform.jar

The error is:

java.sql.SQLException: I/O exception reading jar file: C:\temp\transform.jar (The system cannot find the file specified)

The file is actually located in that path.

Please help with any ideas,

Thanks,

Umut Sargut