Re: [GENERAL] Unknown winsock error 10061 while dumping a big database

2007-11-28 Thread Richard Huxton

Cyril VELTER wrote:


	I recently upgraded an 8.0 database to 8.2 (the server is running Windows 2003 
Server) and now I cannot dump the database any more.


	At some point in the dump, pg_dump (run on a CentOS 3 Linux machine) returns: 


pg_dump: Dumping the contents of table c2674 failed: PQgetCopyData() failed.
pg_dump: Error message from server: out of memory


That's the problem. Do you have maintenance_work_mem or work_mem set too 
high perhaps?
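For anyone following along, both settings can be inspected and adjusted per session; a quick sketch (the values shown are the stock 8.2 defaults, not recommendations):

```sql
-- Inspect the current values:
SHOW work_mem;                      -- default: 1MB
SHOW maintenance_work_mem;          -- default: 16MB

-- Override them for the current session only, to test whether
-- memory settings are implicated:
SET work_mem = '1MB';
SET maintenance_work_mem = '16MB';
```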


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Greg Smith

On Tue, 27 Nov 2007, Scott Ribe wrote:

IIRC, it was later established that during those tests they had fsync 
enabled on OS X and disabled on Linux.


You recall correctly but I'm guessing you didn't keep up with the 
investigation there; I was tempted to bring this up in that last message 
but was already running too long.


Presumably you're talking about http://ridiculousfish.com/blog/?p=17 . The 
fsync theory was suggested by them and possibly others after Anandtech's 
first benchmarking test of this type.


The second test that I linked to rebutted that at 
http://www.anandtech.com/mac/showdoc.aspx?i=2520&p=6 .  This specific 
issue is also addressed by a comment from Johan of Anandtech on the 
ridiculousfish site.  The short version is that the MySQL they were using 
had a MyISAM configuration that doesn't do fsyncs, period, so it's 
impossible that fsyncs were to blame.  You only get fsync if you're running 
InnoDB.  I think the reason for this confusion is that at the time of the 
initial review, working MySQL fsync under OS X was pretty new (January 
2005 I think, http://dev.mysql.com/doc/refman/4.1/en/news-4-1-9.html )


Ultimately, the exact cause doesn't change how to clear the air here. 
As I suggested, the only way to refute benchmarks showing awful 
performance is not to theorize about the cause, but to produce new 
benchmarks showing the first ones are no longer accurate.  If you can 
point me to one of those, I'd love to see it--this is actually one of the 
items on the relatively short list of reasons why I'm typing this on a 
Thinkpad running Linux instead of a Macbook (the other big one is Apple's 
string of Eclipse issues, which I already ranted about recently at 
http://slashdot.org/comments.pl?sid=342667&cid=21154137 )


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] I have a select statement on the issue.

2007-11-28 Thread Rodrigo De León
On Nov 28, 2007 1:58 AM,  [EMAIL PROTECTED] wrote:
 1. Why does the default output order change after I execute the update statement?
 2. How do I sort by the main key when querying?

See:
http://www.postgresql.org/docs/8.2/static/queries-order.html

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] I have a select statement on the issue.

2007-11-28 Thread Thomas Pundt
Hi,

On Mittwoch, 28. November 2007, [EMAIL PROTECTED] wrote:
| Step 4: Update data
| update test set name='name' where code='1002'

Simplified, when you perform an update, PostgreSQL internally marks 
the affected row as deleted and inserts a new row in the table. For details
look at the MVCC documentation, e.g.

  http://www.postgresql.org/docs/8.2/interactive/mvcc-intro.html

| Results:
| code name     qty
| 1001 1001name  1
| 1003 1003name  3
| 1002 name  2
|
| Question:
| 1. Why does the default output change after I execute the update statement?

See above; the output order is not guaranteed without an ORDER BY clause.
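One way to see the delete-and-insert behaviour directly is the hidden ctid column, which exposes each row version's physical position (a sketch; exact ctid values will vary):

```sql
CREATE TEMP TABLE t (code text, name text);
INSERT INTO t VALUES ('1001', 'a'), ('1002', 'b');
SELECT ctid, * FROM t;                      -- note where code 1002 sits

UPDATE t SET name = 'b2' WHERE code = '1002';
SELECT ctid, * FROM t;                      -- the updated row has a new ctid
                                            -- and can move in an unordered scan
```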

| 2. How do I sort by the main key when querying?

You mean: select * from test order by code ?


Ciao,
Thomas

-- 
Thomas Pundt [EMAIL PROTECTED]  http://rp-online.de/ 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Greg Smith

On Tue, 27 Nov 2007, Ron Johnson wrote:


There was a benchmark in Feb 2007 which demonstrated that FBSD 7.0
scaled *better* than Linux 2.6 after 4 CPUs.
http://jeffr-tech.livejournal.com/5705.html
Turns out that there was/is a bug in glibc's malloc().  Don't know
if it's been fixed yet.


Last I heard it was actually glibc combined with a kernel problem, and 
changes to both would be required to resolve:


http://kerneltrap.org/mailarchive/linux-kernel/2007/4/3/73000

I'm not aware of any resolution there but I haven't been following this 
one closely.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] System Load analyze

2007-11-28 Thread Greg Smith

On Sat, 24 Nov 2007, Peter Bauer wrote:


top shows that the CPUs are at least 80% idle most of the time so I
think there is an I/O bottleneck.


top also shows that you're never waiting for I/O, which is usually evidence 
that there isn't an I/O bottleneck.  You passed along most of the right data, 
but some useful additional things to know are:


-Actual brand/model of SCSI controller
-Operating system
-What time interval the vmstat and iostat information you gave were 
produced at.


I agree with Scott that checkpoints should be considered as a possibility 
here.  I'd suggest you set checkpoint_warning to a high value so you get a 
note in the logs every time one happens, then see if those happen at the 
same time as your high load average.  More on that topic and how to adjust 
the background writer if that proves to be the cause of your slowdown is 
at http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm
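As a sketch, the relevant postgresql.conf line could look like this (the value is an illustration, not a tuned recommendation):

```
# postgresql.conf -- warn about any checkpoint occurring within an hour of
# the previous one, i.e. effectively log every checkpoint:
checkpoint_warning = 3600        # seconds
```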


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] PostgreSQL DB split

2007-11-28 Thread [EMAIL PROTECTED]
Hi,

I'm currently running a PostgreSQL DB on disk, but now I need to 
split this DB in this way:

a. the first DB (or first DB partition) will be stored in flash 
memory and will contain the system configuration values;
b. the second DB (or second DB partition) will be stored in RAM 
and will contain the state values.
   These values will be modified many times and will be deleted 
when the system goes down.

I have read that I can create a PostgreSQL DB on a RAM-disk partition, but I'm 
wondering whether it is possible to create
one DB with two schemas in two different memory locations (RAM and flash)?

Otherwise, I can create two DBs in two different locations (RAM and flash), but 
is it possible to execute a SQL query
where the join relationship concerns two tables located in the different DBs? 
Can I use dblink in order to create
only one DB connection?


Can someone help me? Thanks in advance.

--Tomas


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] WAL Recovery

2007-11-28 Thread Simon Riggs
On Wed, 2007-11-28 at 06:05 +, Ashish Karalkar wrote:

 I have PostgreSQL 8.2.4 on Suse Linux
 
 When I am doing recovery from WAL archives I see in the log that
 the 01.history file was not found; in fact the server has not created such a
 file, instead it created a 01.backup file which contains the history,
 which the documentation also suggests is the history file. 
 Should I ignore this message in the log?

Yes. I will add a note to the docs to mention this.

 Secondly, the log says e.g. that file 57 was not found in the archive area;
 it is present in pg_xlog and was not archived when the server went down.
  
 Files up to 56 are there in the archive.
 
 Can anybody please explain why the server gives a message about file 57 not
 being found in the archive area?

http://www.postgresql.org/docs/8.2/static/continuous-archiving.html#BACKUP-PITR-RECOVERY

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] client_encoding error

2007-11-28 Thread Albe Laurenz
Glyn Astill wrote:
 Whilst trying to migrate one of our tables to postgres we get the
 following error:

 invalid byte sequence for encoding "EUC_JP": 0x9c32
 HINT:  This error can also happen if the byte sequence does not match
 the encoding expected by the server, which is controlled by client_encoding.

 Does anyone have any idea why this might be? The data coming into the
 table should be plain ASCII

It means that there is a string in your data which is incorrect
in your client encoding, which is EUC_JP.

You can only enter correctly encoded strings, everything else
will produce an error.

Your data is not plain ASCII, because the byte 0x9c is not an ASCII
character (0x9c > 0x7f).

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] WAL Recovery

2007-11-28 Thread Ashish Karalkar
Thanks Simon for your reply.
Yes, I have followed the same instructions given at the link, but it is still
happening. Should I ignore this message too?

With Regards
Ashish


Simon Riggs [EMAIL PROTECTED] wrote: On Wed, 2007-11-28 at 06:05 +, 
Ashish Karalkar wrote:

 I have PostgreSQL 8.2.4 on Suse Linux
 
 When I am doing recovery from WAL archives I see in the log that
 the 01.history file was not found; in fact the server has not created such a
 file, instead it created a 01.backup file which contains the history,
 which the documentation also suggests is the history file. 
 Should I ignore this message in the log?

Yes. I will add a note to the docs to mention this.

 Secondly, the log says e.g. that file 57 was not found in the archive area;
 it is present in pg_xlog and was not archived when the server went down.
  
 Files up to 56 are there in the archive.
 
 Can anybody please explain why the server gives a message about file 57 not
 being found in the archive area?

http://www.postgresql.org/docs/8.2/static/continuous-archiving.html#BACKUP-PITR-RECOVERY

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com



   

Re: [GENERAL] Config settings for large restore

2007-11-28 Thread Tomasz Ostrowski
On Tue, 27 Nov 2007, Erik Jones wrote:

 I'm just wondering what is considered the general wisdom on config setting 
 for large pg_restore runs.

I think the first thing you can do is to set fsync=off temporarily. But
do remember to turn it back on when you're done restoring.
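A sketch of the temporary change in postgresql.conf (note that a crash while fsync is off can corrupt the cluster, which is tolerable here only because you would simply restart the restore):

```
# postgresql.conf -- during the restore only:
fsync = off

# ... run pg_restore, then switch back and reload/restart:
fsync = on
```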

Regards
Tometzky
-- 
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
  Winnie the Pooh

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Trevor Talbot
On 11/27/07, Tom Lane [EMAIL PROTECTED] wrote:
 Doug McNaught [EMAIL PROTECTED] writes:

  Kind of.  Mach is still running underneath (and a lot of the app APIs
  use it directly) but there is a BSD 'personality' above it which
  (AIUI) is big parts of FreeBSD ported to run on Mach.  So when you use
  the Unix APIs you're going through that.

 The one bit of the OSX userland code that I've really had my nose rubbed
 in is libedit, and they definitely took that from NetBSD not FreeBSD.
 You sure you got your BSDen straight?

 Some random poking around at
 http://www.opensource.apple.com/darwinsource/10.5/
 finds a whole lot of different-looking license headers.  But it seems
 pretty clear that their userland is BSD-derived, whereas I've always
 heard that their kernel is Mach-based.  I've not gone looking at the
 kernel though.

The majority of the BSDness in the kernel is from FreeBSD, but it is
very much a hybrid, Mach being the other parent.  Userland is a mixed
bag; FreeBSD, NetBSD, OpenBSD are all visible in different places.  In
older versions I've also seen 4.4BSD credited directly (as in not even
caught up with FreeBSD), but I believe most of that has been updated
in newer versions of the OS.  Apple also has employees who are major
developers for both FreeBSD and NetBSD at least, though I haven't kept
up with who is doing what.

http://developer.apple.com/documentation/Darwin/Conceptual/KernelProgramming/Architecture/chapter_3_section_3.html

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] WAL Recovery

2007-11-28 Thread Simon Riggs
On Wed, 2007-11-28 at 11:00 +, Ashish Karalkar wrote:
 Thanks Simon for your reply.
 Yes, I have followed the same instructions given at the link, but it is
 still happening. Should I ignore this message too?

Yes.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[Re] Re: [Re] Re: [GENERAL] Unknown winsock error 10061 while dumping a big database

2007-11-28 Thread Cyril VELTER

De : mailto:[EMAIL PROTECTED]
 Cyril VELTER wrote:
  
  maintenance_work_mem and work_mem are set to their default values (16MB / 
1MB). Does COPY use either of these values?  The docs only mention sort 
operations for work_mem, and vacuum / create index / alter table add foreign 
key for maintenance_work_mem.
 
 You'll probably want to increase both of those (assuming you have a few 
 gigs of RAM).

	The server has 4GB of RAM. But from a search of the source code it does not 
seem that these values are used during a COPY operation. I will try increasing 
these values anyway.

 
  BTW, just dumping this table fail with the same error.
 
 Hmm - what version of 8.2 are you running on Windows?
 
 It's just that 8.2.4 has a fix that says:
 
 Allow pg_dump to do binary backups larger than two gigabytes on Windows 
 (Magnus)
 
 Now, whether that will affect you I'm not sure, since you said you were 
 dumping from Linux, with the server on Windows.

	I don't think so. The dump stops at 75GB (so the 2GB limit is not the 
problem), I'm running 8.2.5 on both the client and the server, and the dump is 
done on a Linux machine.


Cyril VELTER


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [Re] Re: [Re] Re: [GENERAL] Unknown winsock error 10061 while dumping a big database

2007-11-28 Thread Richard Huxton

Cyril VELTER wrote:

It's just that 8.2.4 has a fix that says:

Allow pg_dump to do binary backups larger than two gigabytes on Windows 
(Magnus)


Now, whether that will affect you I'm not sure, since you said you were 
dumping from Linux, with the server on Windows.


	I don't think so. The dump stops at 75GB (so the 2GB limit is not the 
problem), I'm running 8.2.5 on both the client and the server, and the dump is 
done on a Linux machine.


Can you select all the rows in that table, or does that cause an error too?

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL DB split

2007-11-28 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

Hi,

I'm currently running a PostgreSQL DB on disk, but now I need to 
split this DB in this way:

a. the first DB (or first DB partition) will be stored in flash 
memory and will contain the system configuration values;
b. the second DB (or second DB partition) will be stored in RAM 
and will contain the state values.
   These values will be modified many times and will be deleted 
when the system goes down.

I have read that I can create a PostgreSQL DB on a RAM-disk partition, but I'm 
wondering whether it is possible to create
one DB with two schemas in two different memory locations (RAM and flash)?


See the manuals for tablespaces.
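A minimal sketch of the tablespace approach (mount points and table definitions are hypothetical; the directories must already exist and be owned by the postgres user):

```sql
CREATE TABLESPACE flash_ts LOCATION '/mnt/flash/pgdata';
CREATE TABLESPACE ram_ts   LOCATION '/mnt/ramdisk/pgdata';

-- Each table is placed on the medium it needs:
CREATE TABLE config_values (key text PRIMARY KEY, value text) TABLESPACE flash_ts;
CREATE TABLE state_values  (key text PRIMARY KEY, value text) TABLESPACE ram_ts;
```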


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Press contacts

2007-11-28 Thread Simon Riggs
There are still many countries/languages/regions not represented on the
Press Contact list.

http://www.postgresql.org/about/press/

http://www.postgresql.org/about/press/contact

Please can supporters of PostgreSQL check the above links to ensure that
we have coverage in as many countries and languages as possible?

It is likely that many people are already active, just not mentioned on
those lists. If that is the case, please register on the advocacy list
and let everybody know about your involvement.

Josh Berkus has published this guide to being a Regional Contact:
http://pgfoundry.org/docman/view.php/147/2800/regional.contact.brief.txt

Release time is soon...

So please register on the pgsql-advocacy list and help get the
PostgreSQL message across in your local area. No need to reply to me,
nor to this message.

-- 
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Error compiling Slony I

2007-11-28 Thread Glyn Astill
Hi people,

When I try to compile Slony 1.2 I get the following error:

parser.y:1090:18: error: scan.c: No such file or directory
make[2]: *** [parser.o] Error 1
make[2]: Leaving directory `/tmp/slony1-1.2.12/src/slony_logshipper'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/tmp/slony1-1.2.12/src'

I've installed bison.

Anyone got any idea what I may be doing wrong?


Glyn Astill



  ___ 
Want ideas for reducing your carbon footprint? Visit Yahoo! For Good  
http://uk.promotions.yahoo.com/forgood/environment.html

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Crosstab limitation...

2007-11-28 Thread ronald tabada
Good day... I'm new to PostgreSQL and I'm using version 8.2. I tried
writing a query that converts rows into columns, and I was
happy when I discovered the crosstab function. But when I tried
producing reports with more than 17 columns it displayed the following
error message:

ERROR: invalid return type
SQL state: 42601
Detail: Query-specified return tuple has 39 columns but crosstab returns 17.
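The column definition list in the calling query has to match the number of columns crosstab actually returns (one per category, plus the leading row-name column). With the two-argument form from contrib/tablefunc you control the category set explicitly; a hedged sketch with hypothetical table and column names:

```sql
SELECT * FROM crosstab(
    'SELECT rowid, attribute, value FROM raw ORDER BY 1',
    'SELECT DISTINCT attribute FROM raw ORDER BY 1'
) AS ct(rowid text, attr1 text, attr2 text, attr3 text);
-- the AS list must declare exactly one column per category returned by the
-- second query, plus the leading row-name column
```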

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Scott Ribe
 Yes, very much so. Windows lacks the fork() concept, which is what makes
 PostgreSQL much slower there.

So grossly slower process creation would kill postgres connection times. But
what about the cases where persistent connections are used? Is it the case
also that Windows has a performance bottleneck for interprocess
communication?

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] System Load analyze

2007-11-28 Thread Peter Bauer
Hi Greg,

Am Mittwoch 28 November 2007 schrieb Greg Smith:
 On Sat, 24 Nov 2007, Peter Bauer wrote:
  top shows that the CPUs are at least 80% idle most of the time so I
  think there is an I/O bottleneck.

 top also shows that you're never waiting for I/O which is usually evidence
 there isn't an I/O bottleneck.  You passed along most of the right data,
 but some useful additional things to know are:

 -Actual brand/model of SCSI controller
 -Operating system
 -What time interval the vmstat and iostat information you gave were
 produced at.

here are the hardware specs:
2x POWEREDGE 2850 - XEON 3.0GHZ/2MB, 800FSB
2048MB SINGLE RANK DDR2
73 GB SCSI-Disk , 15.000 rpm, UL
PERC4E/DI DC ULTRA320 SCSI RAID, 256MB

It's Debian sarge with kernel 2.4.26.050719-686 #1 SMP.

vmstat and iostat were running with 1 second intervals.

 I agree with Scott that checkpoints should be considered as a possibility
 here.  I'd suggest you set checkpoint_warning to a high value so you get a
 note in the logs every time one happens, then see if those happen at the
 same time as your high load average.  More on that topic and how to adjust
 the background writer if that proves to be the cause of your slowdown is
 at http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm


thank you,
Peter




-- 
Peter Bauer
APUS Software G.m.b.H.
A-8074 Raaba, Bahnhofstrasse 1/1
Email: [EMAIL PROTECTED]
Tel: +43 316 401629 24
Fax: +43 316 401629 9

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Streaming LIBPQ? Sliding Window in LIBPQ?

2007-11-28 Thread Martijn van Oosterhout
On Wed, Nov 28, 2007 at 09:11:07AM -0600, Abraham, Danny wrote:
 Is there a way to break the PGresult array into chunks, like Oracle? 
 
 I mean, without changing the text of given queries, 
 
 can we somehow limit the memory consumption of a client that is using
 LIBPQ?

Use non-blocking mode to get the data in chunks. You can't easily control
the number of rows you get each time though...

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [Re] Re: [Re] Re: [GENERAL] Unknown winsock error 10061 while dumping a big database

2007-11-28 Thread Tom Lane
Cyril VELTER [EMAIL PROTECTED] writes:
   The server have 4G of ram. But doing a search in the source code it 
 does not 
 seem that these values are used during a copy operation.

They aren't.  The out of memory complaint sounds more like it might be
due to corrupt data, ie the olde scenario where a variable-width field's
length word contains a ridiculously large value.  I don't know how that
would lead to a winsock error, though.
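If corruption is the cause, one way to hunt for the damaged row is to force detoasting of the suspect column over key ranges; the names below (id, payload) are hypothetical placeholders for the table's actual key and variable-width column:

```sql
-- Ranges that contain the damaged row should reproduce the error;
-- narrow the failing range until the bad row is isolated:
SELECT max(length(payload)) FROM c2674 WHERE id BETWEEN 1 AND 1000000;
SELECT max(length(payload)) FROM c2674 WHERE id BETWEEN 1000001 AND 2000000;
```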

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Streaming LIBPQ? Sliding Window in LIBPQ?

2007-11-28 Thread Gregory Stark
Martijn van Oosterhout [EMAIL PROTECTED] writes:

 On Wed, Nov 28, 2007 at 09:11:07AM -0600, Abraham, Danny wrote:
 Is there a way to break the PGresult array into chunks, like Oracle? 
 
 I mean, without changing the text of given queries, 
 
 can we somehow limit the memory consumption of a client that is using
 LIBPQ?

 Use non-blocking mode to get the data in chunks. You can't easily control
 the number of rows you get each time though...

This doesn't really help. It's nonblocking but you still can't get libpq to
actually give you a result data structure until the entire results have
accumulated.

The only interface like this libpq supports is to use cursors in your SQL and
then FETCH n for each chunk. There's no libpq support for receiving results
incrementally. 
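The cursor approach looks like this on the SQL side; each FETCH comes back as its own small PGresult (the table name is a placeholder):

```sql
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM bigtable;
FETCH 1000 FROM big_cur;    -- repeat until it returns zero rows
CLOSE big_cur;
COMMIT;
```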

If you're writing a driver implementing the protocol from scratch you could
expose chunks of results to the application but there's no protocol-level
support for it so you can't directly control the rate at which results arrive
or the chunk size or anything like that.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL 
training!

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] PostgreSQL vs. Informix

2007-11-28 Thread Chad Hendren

PostgreSQL Team,

I have a large OEM customer (one of the top 8 for Sun worldwide) that is 
considering embedding PostgreSQL into the core of their primary product 
instead of Informix.  He is trying to build his case for this change.  
Do we have anything that I can forward to him (something like the 
presentation you did at CEC?) to help him build his case?


This is an urgent request from my customer given that his timeline is 
relatively short.  Any help you can give me will be very appreciated.


Thanks,

Chad Hendren

Original question:

Have you seen any studies (either by Sun or others) that compare 
PostgreSQL to other commercial database software (Informix, Oracle, 
Sybase, etc.)? I am interested in seeing a feature-by-feature comparison of 
PostgreSQL and Informix.



--
Chad Hendren
Solutions Architect
Sun Microsystems, Inc.
3655 North Point Parkway, Suite 600
Alpharetta, GA 30005 US
Phone 770-360-6402
Mobile 770-596-4778
Email [EMAIL PROTECTED] 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Streaming LIBPQ? Sliding Window in LIBPQ?

2007-11-28 Thread Abraham, Danny
Is there a way to break the PGresult array into chunks, like Oracle? 

I mean, without changing the text of given queries, 

can we somehow limit the memory consumption of a client that is using
LIBPQ?

 

 The API is ...

 

 extern PGresult *PQexec(PGconn *conn, const char *query);

 

 Is there any sliding-window layer that breaks the PGresult into
chunks?

 

 Big queries take a lot more memory on the client side compared to other
DB  clients.

 

 Thanks

 

 Danny

 

 

 

Danny Abraham

BMC Software

CTMD Business Unit

972-52-4286-513

[EMAIL PROTECTED]

 



Re: [GENERAL] Config settings for large restore

2007-11-28 Thread Vivek Khera


On Nov 27, 2007, at 3:30 PM, Erik Jones wrote:

I'm just wondering what is considered the general wisdom on config  
settings for large pg_restore runs.  I know to increase  
maintenance_work_mem and turn off autovacuum and stats collection.   
Should checkpoint_segments and checkpoint_timeout be  
increased?  Would twiddling shared_buffers help?  What about


At least with 8.0 testing I did a while back, I found that bumping  
checkpoint segments was the biggest benefit.  I use 256 segments as a  
matter of course now, even for normal operations.
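For reference, that is a one-line postgresql.conf change; with 16 MB segments, 256 of them means a checkpoint is forced only after roughly 4 GB of WAL has been written, at the cost of more disk space under pg_xlog and longer crash recovery:

```
# postgresql.conf
checkpoint_segments = 256
```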




=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.MailerMailer, LLC Rockville, MD
http://www.MailerMailer.com/ +1-301-869-4449 x806



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Select all fields except one

2007-11-28 Thread Matt Doughty
Hi,

Is there a way of selecting all fields except for one in particular? I'd
like to create a query that says something like: 

 

select * except fieldx 

 

What I'm hoping to achieve is a query that can be valid for a number of
different tables, although the field I don't want to select is always
called the same (field_not_wanted, for example). I've been told that an
array could work in this case but I'm not really sure about how to go
about this.  

 
Cheers,

Matt

 

 

 

 

Matt Doughty

 

GEOGRAMA S.L.

Tel.:  +34 945 13 13 72652 77 14 15

Fax: +34 945 23 03 40 

www.geograma.com

 

 


Re: [GENERAL] PostgreSQL DB split

2007-11-28 Thread Vivek Khera


On Nov 28, 2007, at 8:18 AM, Richard Huxton wrote:

I can read that I can create a PostgreSQL DB on the RAMDisk  
partion, but I'm wondering if is it possible to create
one DB with two schemas in two different memory location (RAM and  
flash)?


See the manuals for tablespaces.



But note that Postgres will not like it when you restart after a failure and  
the RAM-disk tablespace is missing the data Postgres expects to be there.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Error compiling Slony I

2007-11-28 Thread Vivek Khera


On Nov 28, 2007, at 8:50 AM, Glyn Astill wrote:


Hi people,

When I try to compile Slony 1.2 I get the following error:P

parser.y:1090:18: error: scan.c: No such file or directory
make[2]: *** [parser.o] Error 1
make[2]: Leaving directory `/tmp/slony1-1.2.12/src/slony_logshipper'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/tmp/slony1-1.2.12/src'

I've installed bison.

Anyone got any idea what I may be doing wrong?


The Slony mailing list will be more helpful... but I ran into this.  The  
solution for me was to uninstall bison and re-build.  For some reason,  
if the Slony configure script finds bison, it forces it to rebuild the  
parser.  I think the parser requires a specific version of bison.



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Streaming LIBPQ? Sliding Window in LIBPQ?

2007-11-28 Thread Martijn van Oosterhout
On Wed, Nov 28, 2007 at 03:27:56PM +, Gregory Stark wrote:
   Use non-blocking mode to get the data in chunks. You can't easily control
   the number of rows you get each time though...
 
 This doesn't really help. It's nonblocking but you still can't get libpq to
 actually give you a result data structure until the entire results have
 accumulated.

It certainly used to work. You usually get a whole PGresult structure for
each few rows, so it's not terribly efficient. I posted an example in
Perl a while back...

The trick was to set non-blocking mode and send an async query. Then
PQisBusy() would return false when any data had been received, not just
when all data had been received. At that point you could call
PQgetResult to get those rows. You would get a zero-length result when
you reached the end of data.

Admittedly, I haven't tested it on recent versions. The program I posted
a while back that tested whether the locks blocked as documented drove two
connections simultaneously this way.

http://archives.postgresql.org/pgsql-hackers/2005-08/msg01073.php
http://archives.postgresql.org/pgsql-general/2006-07/msg00806.php

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Magnus Hagander

On Wed, 2007-11-28 at 07:29 -0700, Scott Ribe wrote:
  Yes, very much so. Windows lacks the fork() concept, which is what makes
  PostgreSQL much slower there.
 
 So grossly slower process creation would kill postgres connection times. But
 what about the cases where persistent connections are used? Is it the case
 also that Windows has a performance bottleneck for interprocess
 communication?

There is at least one other bottleneck, probably more than one. Context
switching between processes is a lot more expensive than on Unix (given
that win32 is optimized towards context switching between threads). NTFS
isn't optimized for having 100+ processes reading and writing to the
same file. Probably others..

//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Select all fields except one

2007-11-28 Thread Vivek Khera


On Nov 28, 2007, at 11:06 AM, Matt Doughty wrote:

Is there a way of selecting all fields except for one in particular?  
I'd like to create a query that says something like:


select * except fieldx




For best practices, you should never use select * in your queries.   
You will inevitably end up with code that cannot deal with a schema  
change, and for any live system, you will have a schema change at some  
point...


It is best to explicitly list the field names your code is expecting.   
Besides, I don't think you can do what you want to do with just SQL.





Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Ron Johnson

On 11/28/07 11:13, Magnus Hagander wrote:
 On Wed, 2007-11-28 at 07:29 -0700, Scott Ribe wrote:
 Yes, very much so. Windows lacks the fork() concept, which is what makes
 PostgreSQL much slower there.
 So grossly slower process creation would kill postgres connection times. But
 what about the cases where persistent connections are used? Is it the case
 also that Windows has a performance bottleneck for interprocess
 communication?
 
 There is at least one other bottleneck, probably more than one. Context
 switching between processes is a lot more expensive than on Unix (given
 that win32 is optimized towards context switching between threads). NTFS

Isn't that why Apache2 has separate thread mode and 1.x-style
pre-forked mode?

 isn't optimized for having 100+ processes reading and writing to the
 same file. Probably others..

- --
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] Config settings for large restore

2007-11-28 Thread Erik Jones

Thanks, we're at 128 now but I'll see how bumping that up goes.

On Nov 28, 2007, at 9:46 AM, Vivek Khera wrote:



On Nov 27, 2007, at 3:30 PM, Erik Jones wrote:

I'm just wondering what is considered the general wisdom on config
settings for large pg_restore runs.  I know to increase
maintenance_work_mem and turn off autovacuum and stats collection.
Should checkpoint_segments and checkpoint_timeout be increased?
Would twiddling shared_buffers help?  What about


At least with 8.0 testing I did a while back, I found that bumping  
checkpoint segments was the biggest benefit.  I use 256 segments as  
a matter of course now, even for normal operations.
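
For concreteness, a hedged postgresql.conf sketch of the kind of settings discussed in this thread. The numbers are illustrative starting points, not recommendations, and the MB/min unit suffixes require 8.2 or later:

```
# postgresql.conf -- illustrative bulk-restore settings (8.2-era)
maintenance_work_mem = 512MB   # faster index builds; size to available RAM
checkpoint_segments = 256      # fewer, larger checkpoints during the load
checkpoint_timeout = 30min
autovacuum = off               # turn back on once the restore finishes
```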




=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.MailerMailer, LLC Rockville, MD
http://www.MailerMailer.com/ +1-301-869-4449 x806




Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] Select all fields except one

2007-11-28 Thread Ron Johnson

On 11/28/07 11:18, Vivek Khera wrote:
 
 On Nov 28, 2007, at 11:06 AM, Matt Doughty wrote:
 
 Is there a way of selecting all fields except for one in particular?
 I'd like to create a query that says something like:

 select * except fieldx


 
 For best practices, you should never use select * in your queries.  You
 will inevitably end up with code that cannot deal with a schema change,
 and for any live system, you will have a schema change at some point...

Remember back in the late-80s when Data Dictionaries were the rage?
(Some legacy systems still have these.)  3GL structs/copybooks can be
auto-generated from the CDD, so any added columns are auto-added to
your record structure.

Of course, you still have to rebuild your apps.

 It is best to explicitly list the field names your code is expecting. 
 Besides, I don't think you can do what you want to do with just SQL.

- --
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels



Re: [GENERAL] PostgresSQL vs. Informix

2007-11-28 Thread mgainty
Hi Chad

I had a devil of a time finding any usable doc on Informix,
whereas Postgres, MySQL and Oracle all had documentation aplenty.
I'm still looking for a whitepaper or any benchmarks:
http://www.iiug.org/forums/ids/index.cgi/noframes/read/3517

Anyone?
M--
- Original Message -
From: Chad Hendren
To: pgsql-general@postgresql.org
Sent: Wednesday, November 28, 2007 10:32 AM
Subject: [GENERAL] PostgresSQL vs. Informix


 PostgreSQL Team,
 
 I have a large OEM customer (one of the top 8 for Sun worldwide) that is 
 considering embedding PostgreSQL into the core of their primary product 
 instead of Informix.  He is trying to build his case for this change.  
 Do we have anything that I can forward to him (something like the 
 presentation you did at CEC?) to help him build his case?
 
 This is an urgent request from my customer given that his timeline is 
 relatively short.  Any help you can give me will be very appreciated.
 
 Thanks,
 
 Chad Hendren
 
 Original question:
  
 Have you seen any studies (either by Sun or others) that compare
 PostgresSQL to other commercial database software (Informix, Oracle,
 Sybase, etc.)? I am interested in seeing a feature-by-feature comparison of
 PostgresSQL and Informix.
  
 
 -- 
 Chad Hendren
 Solutions Architect
 Sun Microsystems, Inc.
 3655 North Point Parkway, Suite 600
 Alpharetta, GA 30005 US
 Phone 770-360-6402
 Mobile 770-596-4778
 Email [EMAIL PROTECTED] 
 
 
 



Re: [GENERAL] Streaming LIBPQ? Sliding Window in LIBPQ?

2007-11-28 Thread Merlin Moncure
On Nov 28, 2007 10:51 AM, Martijn van Oosterhout [EMAIL PROTECTED] wrote:
 The trick was to set non-blocking mode and send an async query. Then
 PQisBusy() would return false when any data had been received, not just
 when all data had been received. At that point you could call
 PQgetResult to get those rows. You would get a zero-length result when
 you reached the end of data.

 Admittedly, I havn't tested it on recent versions. The program I posted
 a while back that tested if the locks blocked as documented drove two
 connections simultaneously this way.

 http://archives.postgresql.org/pgsql-hackers/2005-08/msg01073.php
 http://archives.postgresql.org/pgsql-general/2006-07/msg00806.php

aiui, PQgetResult does not allow streaming of partial results. it does
however allow returning multiple results for multiple queries sent in
a batch...so, a 'kinda sorta' streaming could be rigged in certain
cases if the big query could be split into multiple queries and chained.
 pulling back and using a piece of a single result is not possible,
and never has been.  also, there is always the cursor technique which
i only find appealing in very special circumstances.

merlin



Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Magnus Hagander
Ron Johnson wrote:
 On 11/28/07 11:13, Magnus Hagander wrote:
 On Wed, 2007-11-28 at 07:29 -0700, Scott Ribe wrote:
 Yes, very much so. Windows lacks the fork() concept, which is what makes
 PostgreSQL much slower there.
 So grossly slower process creation would kill postgres connection times. But
 what about the cases where persistent connections are used? Is it the case
 also that Windows has a performance bottleneck for interprocess
 communication?
 There is at least one other bottleneck, probably more than one. Context
 switching between processes is a lot more expensive than on Unix (given
 that win32 is optimized towards context switching between threads). NTFS
 
 Isn't that why Apache2 has separate thread mode and 1.x-style
 pre-forked mode?

I think it was a contributing reason for getting it in the first place,
but it's certainly not the only reason...

//Magnus



Re: [GENERAL] Select all fields except one

2007-11-28 Thread Gauthier, Dave
There's probably some way to pull all the field names from the metadata
tables and build a query on-the-fly that selects all but the offensive
one.
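
For illustration, here is a minimal Python sketch of that metadata-driven approach. The table and column names below are made up; in a live system the column list would come from information_schema.columns rather than a hard-coded list:

```python
def build_select_excluding(table, columns, excluded="field_not_wanted"):
    """Build a SELECT naming every column except `excluded`.

    `columns` stands in for the result of a catalog query such as:
      SELECT column_name FROM information_schema.columns
      WHERE table_name = 'mytable' ORDER BY ordinal_position
    """
    wanted = [c for c in columns if c != excluded]
    if not wanted:
        raise ValueError("no columns left to select")
    return "SELECT %s FROM %s" % (", ".join(wanted), table)

# Hypothetical table with one column we want to skip:
print(build_select_excluding("mytable", ["id", "name", "field_not_wanted"]))
# SELECT id, name FROM mytable
```

The same idea can be pushed server-side into a PL/pgSQL function, as suggested elsewhere in this thread, but the string-building logic is identical.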

 



From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Matt Doughty
Sent: Wednesday, November 28, 2007 11:07 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Select all fields except one

 

Hi,

Is there a way of selecting all fields except for one in particular? I'd
like to create a query that says something like: 

 

select * except fieldx 

 

What I'm hoping to achieve is a query that can be valid for a number of
different tables, although the field I don't want to select is always
called the same (field_not_wanted, for example). I've been told that an
array could work in this case but I'm not reallly sure about how to go
about this.  

 
Cheers,

Matt

 

 

 

 

Matt Doughty

 

GEOGRAMA S.L.

Tel.: +34 945 13 13 72 / 652 77 14 15

Fax: +34 945 23 03 40 

www.geograma.com

 

 


Re: [GENERAL] Select all fields except one

2007-11-28 Thread David Fetter
On Wed, Nov 28, 2007 at 05:06:45PM +0100, Matt Doughty wrote:
 Hi,
 
 Is there a way of selecting all fields except for one in particular?
 I'd like to create a query that says something like: 
 
 select * except fieldx 
 
 What I'm hoping to achieve is a query that can be valid for a number
 of different tables, although the field I don't want to select is
 always called the same (field_not_wanted, for example).

This sounds like self-modifying code.  I'd submit that you're trying
to do the wrong thing in the first place, and that you should look
over your design for flaws like this and re-do that design.

Cheers,
David.
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] PostgresSQL vs. Informix

2007-11-28 Thread Merlin Moncure
On Nov 28, 2007 10:32 AM, Chad Hendren [EMAIL PROTECTED] wrote:
 PostgreSQL Team,

 I have a large OEM customer (one of the top 8 for Sun worldwide) that is
 considering embedding PostgreSQL into the core of their primary product
 instead of Informix.  He is trying to build his case for this change.
 Do we have anything that I can forward to him (something like the
 presentation you did at CEC?) to help him build his case?

 This is an urgent request from my customer given that his timeline is
 relatively short.  Any help you can give me will be very appreciated.

 Thanks,

 Chad Hendren

 Original question:

 Have you seen any studies (either by Sun or others) that compare
 PostgresSQL to other commercial database software (Informix, Oracle,
 Sybase, etc.)? I am interested in seeing a feature-by-feature comparison of
 PostgresSQL and Informix.

there was an excellent case study, written by the cc'd person, that is
unhappily not available at the moment.  see:
http://archives.postgresql.org/pgsql-advocacy/2005-08/msg00147.php
(maybe try contacting author or the open source db migration people)

see also:
http://www.spec.org/jAppServer2004/results/res2007q3/

also consider that sun is backing postgresql, and that informix is
looking more and more like a legacy platform.  not to bash informix
though...but if postgresql meets your requirements it is a great db to
ship out for licensing reasons, etc.

merlin



Re: [GENERAL] Select all fields except one

2007-11-28 Thread Joshua D. Drake

On Wed, 28 Nov 2007 09:37:43 -0800
David Fetter [EMAIL PROTECTED] wrote:

 On Wed, Nov 28, 2007 at 05:06:45PM +0100, Matt Doughty wrote:
  Hi,
  
  Is there a way of selecting all fields except for one in particular?
  I'd like to create a query that says something like: 
  
  select * except fieldx 
  
  What I'm hoping to achieve is a query that can be valid for a number
  of different tables, although the field I don't want to select is
  always called the same (field_not_wanted, for example).
 
 This sounds like self-modifying code.  I'd submit that you're trying
 to do the wrong thing in the first place, and that you should look
 over your design for flaws like this and re-do that design.

Or to actually try to be helpful, you could consider using a server
side function instead:

select * from global_query({},TEXT);

Where the array is the list of fields you don't want selected
and the second parameter is the table you are going to use. Then you
just have to build the logic inside the function.

Sincerely,

Joshua D. Drake


 
 Cheers,
 David.


- -- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Trevor Talbot
On 11/28/07, Magnus Hagander [EMAIL PROTECTED] wrote:

 On Wed, 2007-11-28 at 07:29 -0700, Scott Ribe wrote:
   Yes, very much so. Windows lacks the fork() concept, which is what makes
   PostgreSQL much slower there.
 
  So grossly slower process creation would kill postgres connection times. But
  what about the cases where persistent connections are used? Is it the case
  also that Windows has a performance bottleneck for interprocess
  communication?

 There is at least one other bottleneck, probably more than one. Context
 switching between processes is a lot more expensive than on Unix (given
 that win32 is optimized towards context switching between threads). NTFS
 isn't optimized for having 100+ processes reading and writing to the
 same file. Probably others..

I'd be interested to know what this info is based on.  The only
fundamental difference between a process and a thread context switch
is VM mapping (extra TLB flush, possible pagetable mapping tweaks).
And why would NTFS care about anything other than handles?

I mean, I can understand NT having bottlenecks in various areas
compared to Unix, but this threads are specially optimized thing is
seeming a bit overblown.  Just how often do you see threads from a
single process get contiguous access to the CPU?



Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Magnus Hagander
Trevor Talbot wrote:
 On 11/28/07, Magnus Hagander [EMAIL PROTECTED] wrote:
 
 On Wed, 2007-11-28 at 07:29 -0700, Scott Ribe wrote:
 Yes, very much so. Windows lacks the fork() concept, which is what makes
 PostgreSQL much slower there.
 So grossly slower process creation would kill postgres connection times. But
 what about the cases where persistent connections are used? Is it the case
 also that Windows has a performance bottleneck for interprocess
 communication?
 There is at least one other bottleneck, probably more than one. Context
 switching between processes is a lot more expensive than on Unix (given
 that win32 is optimized towards context switching between threads). NTFS
 isn't optimized for having 100+ processes reading and writing to the
 same file. Probably others..
 
 I'd be interested to know what this info is based on.  The only
 fundamental difference between a process and a thread context switch
 is VM mapping (extra TLB flush, possible pagetable mapping tweaks).

Generally, lots of references I've seen around the net and elsewhere. If
I'm not mistaken, the use of threads over processes was listed as one of
the main reasons why SQL Server got such good performance on Windows
compared to its competitors. But I don't have my Inside SQL Server
around to check for an actual reference.


 And why would NTFS care about anything other than handles?

Not sure, again it's just something I've picked up from what others have
been saying. I should perhaps have been clearer that I don't have any
direct proof of that one.


 I mean, I can understand NT having bottlenecks in various areas
 compared to Unix, but this threads are specially optimized thing is
 seeming a bit overblown.  Just how often do you see threads from a
 single process get contiguous access to the CPU?

On a CPU loaded SQL server, fairly often I'd say. But certainly not always.

//Magnus



Re: [GENERAL] PostgresSQL vs. Informix

2007-11-28 Thread Jutta Horstmann
Hi Merlin, Chad,

Merlin Moncure schrieb:
 there was a very excellent case study, written by cc'd person, that is
 unhappily not available at the moment.  see:
 http://archives.postgresql.org/pgsql-advocacy/2005-08/msg00147.php

Thank you :-) The link in the post is not up to date.

The linked case study made it into my thesis on Open Source Database
Migration.

This is its Informix to PostgreSQL chapter:
http://www.osdbmigration.de/thesis/node98.html

The full thesis is available for download here:
http://www.osdbmigration.org/misc/migrating_OSDB_jh.pdf

There is some more, probably helpful, stuff on my osdbmigration site,
e.g. the feature matrix:
http://www.osdbmigration.org:8080/osdb/osdb-features

Unfortunately most of the stuff there is not up to date anymore (written
Dec 2005).

Feel free to contact me personally if you have any more questions.

Jutta

-- 


Jutta Horstmann (Dipl. Inform., Dipl. Pol.)
data in transit  email: [EMAIL PROTECTED]
Ellerstr. 38 fon:+49.228.2401295
53119 Bonn   mob:   +49.176.61188960
Germany  web:  www.dataintransit.com




Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Joshua D. Drake

On Wed, 28 Nov 2007 09:53:34 -0800
Trevor Talbot [EMAIL PROTECTED] wrote:

 On 11/28/07, Magnus Hagander [EMAIL PROTECTED] wrote:
 
  On Wed, 2007-11-28 at 07:29 -0700, Scott Ribe wrote:
Yes, very much so. Windows lacks the fork() concept, which is
what makes PostgreSQL much slower there.

 I mean, I can understand NT having bottlenecks in various areas
 compared to Unix, but this threads are specially optimized thing is
 seeming a bit overblown.  Just how often do you see threads from a
 single process get contiguous access to the CPU?

I thought it was more about the cost to fork() a process in win32? 

Sincerely,

Joshua D. Drake



- -- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] Select all fields except one

2007-11-28 Thread Merlin Moncure
On Nov 28, 2007 12:18 PM, Vivek Khera [EMAIL PROTECTED] wrote:
 For best practices, you should never use select * in your queries.  You will
 inevitably end up with code that cannot deal with a schema change, and for
 any live system, you will have a schema change at some point...

 It is best to explicitly list the field names your code is expecting.
 Besides, I don't think you can do what you want to do with just SQL.

sometimes this is desirable.  for example, you may have a view defined
as 'select * from foo' which you want to track changes to foo...and it
is not too difficult to write application code that is tolerant of
extra fields being added.  also it is often useful to expand row and
type variables with '*', although this is kind of a postgresql
peculiarity.

also consider 'insert...select' or 'create table as select' statements
that copy data from one table to another and by definition take
all the fields...so wouldn't a * be preferable to adjusting the field
list whenever it changes?

you are generally right though...i'm just laying out some exceptions.

merlin



Re: [GENERAL] Recheck condition

2007-11-28 Thread Martijn van Oosterhout
On Wed, Nov 28, 2007 at 01:18:56PM -0500, Josh Harrison wrote:
 Hi,
 Sorry if my question is silly.
 When I use explain analyze command I come across 'recheck condition' in some
 places.
 I googled for this  but didn't get any solid answers.

Some indexes are inexact, i.e. they may sometimes return tuples that
don't actually match the index condition. This also happens with bitmap
scans, because it'll return anything in the bitmap which will probably
be more than what you asked for. The recheck just means that the
executor retests the index condition on the result to make sure you
only get the rows you actually wanted.
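
As a rough analogy (plain Python, not PostgreSQL internals): a lossy scan hands back a superset of the truly matching rows, and the recheck step re-applies the real condition to each candidate:

```python
# Toy model of a lossy scan followed by a recheck. The "index" only
# remembers which 10-wide bucket of x a row falls in, so it can return
# false positives that the recheck then filters out.
rows = [{"id": 1, "x": 5}, {"id": 2, "x": 12}, {"id": 3, "x": 15}]
pred = lambda r: r["x"] > 13          # the actual query condition

def lossy_scan(rows, pred):
    # buckets touched by any true match; everything in them comes back
    buckets = {r["x"] // 10 for r in rows if pred(r)}
    return [r for r in rows if r["x"] // 10 in buckets]

candidates = lossy_scan(rows, pred)            # includes x=12, a false positive
matches = [r for r in candidates if pred(r)]   # the recheck
print([r["id"] for r in matches])              # [3]
```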

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] PostgresSQL vs. Informix

2007-11-28 Thread Erik Jones

On Nov 28, 2007, at 9:32 AM, Chad Hendren wrote:


PostgreSQL Team,

I have a large OEM customer (one of the top 8 for Sun worldwide)  
that is considering embedding PostgreSQL into the core of their  
primary product instead of Informix.  He is trying to build his  
case for this change.  Do we have anything that I can forward to  
him (something like the presentation you did at CEC?) to help him  
build his case?


This is an urgent request from my customer given that his timeline  
is relatively short.  Any help you can give me will be very  
appreciated.


Thanks,

Chad Hendren

Original question:
Have you seen any studies (either by Sun or others) that compare
PostgresSQL to other commercial database software (Informix,
Oracle, Sybase, etc.)? I am interested in seeing a feature-by-feature
comparison of PostgresSQL and Informix.


You should speak with Josh Berkus
(http://www.ittoolbox.com/profiles/josh_berkus) and Jignesh Shah
(http://blogs.sun.com/jkshah/), both of whom also work for Sun.  Josh
has been a core Postgres team member for years now and heads up the
community relations stuff, and Jignesh has been heading up lots of
Postgres benchmarking on Solaris.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Trevor Talbot
On 11/28/07, Magnus Hagander [EMAIL PROTECTED] wrote:
 Trevor Talbot wrote:
  On 11/28/07, Magnus Hagander [EMAIL PROTECTED] wrote:

  There is at least one other bottleneck, probably more than one. Context
  switching between processes is a lot more expensive than on Unix (given
  that win32 is optimized towards context switching between threads). NTFS
  isn't optimized for having 100+ processes reading and writing to the
  same file. Probably others..

  I'd be interested to know what this info is based on.  The only
  fundamental difference between a process and a thread context switch
  is VM mapping (extra TLB flush, possible pagetable mapping tweaks).

 Generally, lots of references I've seen around the net and elsewhere. If
 I'm not mistaken, the use of threads over processes was listed as one of
 the main reasons why SQL Server got such good performance on Windows
 compared to its competitors. But I don't have my Inside SQL Server
 around to check for an actual reference.

Well, yes, in general using multiple threads instead of multiple
processes is going to be a gain on any common OS for several reasons,
but context switching is a very minor part of that. Threads let you
share state much more efficiently than processes do, and in complex
servers of this type there tends to be a lot to be shared.

SQL Server is somewhat unique in that it doesn't simply throw threads
at the problem; it has a small pool and uses its own internal task
scheduler for actual SQL work. There's no OS thread per user or
anything. Think continuations or pure userspace threading. That design
also lets it reduce context switches in general.

  I mean, I can understand NT having bottlenecks in various areas
  compared to Unix, but this threads are specially optimized thing is
  seeming a bit overblown.  Just how often do you see threads from a
  single process get contiguous access to the CPU?

 On a CPU loaded SQL server, fairly often I'd say. But certainly not always.

I meant as a design point for a general-purpose OS. If you consider
how Windows does GUIs, ignoring the expense of process context
switching would be fatal, since it forces so much app involvement in
window painting. Having a system dedicated to a single process with
multiple threads running full-bore is not particularly common in this
sense.



[GENERAL] Recheck condition

2007-11-28 Thread Josh Harrison
Hi,
Sorry if my question is silly.
When I use explain analyze command I come across 'recheck condition' in some
places.
I googled for this  but didn't get any solid answers.

What is a recheck condition, and when does the query planner choose this?
Thanks
josh


Re: [GENERAL] Select all fields except one

2007-11-28 Thread David Fetter
On Wed, Nov 28, 2007 at 09:47:19AM -0800, Joshua D. Drake wrote:
 
 On Wed, 28 Nov 2007 09:37:43 -0800
 David Fetter [EMAIL PROTECTED] wrote:
 
  On Wed, Nov 28, 2007 at 05:06:45PM +0100, Matt Doughty wrote:
   Hi,
   
   Is there a way of selecting all fields except for one in
   particular?  I'd like to create a query that says something
   like: 
   
   select * except fieldx 
   
   What I'm hoping to achieve is a query that can be valid for a
   number of different tables, although the field I don't want to
   select is always called the same (field_not_wanted, for
   example).
  
  This sounds like self-modifying code.  I'd submit that you're
  trying to do the wrong thing in the first place, and that you
  should look over your design for flaws like this and re-do that
  design.
 
 Or to actually try to be helpful, you could consider using a server
 side function instead:
 
 select * from global_query({},TEXT);

Helping somebody shoot themselves in the foot isn't helping them.

Cheers,
David.
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] Select all fields except one

2007-11-28 Thread Joshua D. Drake

On Wed, 28 Nov 2007 10:14:52 -0800
David Fetter [EMAIL PROTECTED] wrote:


   This sounds like self-modifying code.  I'd submit that you're
   trying to do the wrong thing in the first place, and that you
   should look over your design for flaws like this and re-do that
   design.
  
  Or to actually try to be helpful, you could consider using a server
  side function instead:
  
  select * from global_query({},TEXT);
 
 Helping somebody shoot themselves in the foot isn't helping them.

You have zero idea of his business requirements. He may have a valid
reason for this. I will grant that in this particular case it is
unlikely and that he is probably over thinking the automation of his
solution but still...

I would note that I use self modifying code all the time with
partitioning and there is *nothing* wrong with that. It's called being
dynamic.

Sincerely,

Joshua D. Drake

 
 Cheers,
 David.


- -- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] How to automate password requests?

2007-11-28 Thread Tom Hart

Marten Lehmann wrote:

Hello,

I'm trying to automate some postgresql scripts, but I cannot find a
way to pass a password directly to commands like pg_dump, psql, and so on.

Even a

echo password | psql

doesn't work, the password prompt of psql is still waiting.

mysql has the -p option. What would be the postgresql equivalent? I
don't want to enter passwords dozens of times.


Regards
Marten

Read about something called the .pgpass file. There's a way to set up a
file that contains the password (pgAdmin will create one automatically)
that psql will look for before it asks for your password. It's stored in ~/


The solution I use is a bat file that redefines an environment variable 
(PGPASSFILE) that points to the file, then runs pg_dumpall.
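
For reference, a .pgpass entry is one line of hostname:port:database:username:password, where '*' matches anything in the first four fields and a ':' or '\' inside a field must be backslash-escaped; on Unix, libpq ignores the file unless its mode is 0600. A small sketch (paths and credentials below are made up, and escaping is left out for simplicity):

```python
import os
import stat
import tempfile

def write_pgpass(path, host, port, database, user, password):
    # One entry per line; fields here are assumed not to need escaping.
    line = ":".join([host, str(port), database, user, password])
    with open(path, "w") as f:
        f.write(line + "\n")
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600, required by libpq
    return line

path = os.path.join(tempfile.mkdtemp(), "pgpass")
print(write_pgpass(path, "localhost", 5432, "*", "marten", "secret"))
# localhost:5432:*:marten:secret
```

Then point libpq at it before running the tool, e.g. `PGPASSFILE=/path/to/pgpass pg_dump mydb`, and no password prompt appears.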


--
Tom Hart
IT Specialist
Cooperative Federal
723 Westcott St.
Syracuse, NY 13210
(315) 471-1116 ext. 202
(315) 476-0567 (fax)




Re: [HACKERS] [GENERAL] plperl and regexps with accented characters - incompatible?

2007-11-28 Thread Greg Sabino Mullane



Just as a followup, I reported this as a bug and it is 
being looked at and discussed:

http://rt.perl.org/rt3//Public/Bug/Display.html?id=47576

Appears there is no easy resolution yet.

- --
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200711281358
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8






Re: [GENERAL] Select all fields except one

2007-11-28 Thread Gauthier, Dave
Seems odd that you'd know specifically which column you don't want, but
not know which columns you do want. And then not care what order those
desired columns happen to be returned in.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of David Fetter
Sent: Wednesday, November 28, 2007 1:15 PM
To: Joshua D. Drake
Cc: Matt Doughty; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Select all fields except one

On Wed, Nov 28, 2007 at 09:47:19AM -0800, Joshua D. Drake wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On Wed, 28 Nov 2007 09:37:43 -0800
 David Fetter [EMAIL PROTECTED] wrote:
 
  On Wed, Nov 28, 2007 at 05:06:45PM +0100, Matt Doughty wrote:
   Hi,
   
   Is there a way of selecting all fields except for one in
   particular?  I'd like to create a query that says something
   like: 
   
   select * except fieldx 
   
   What I'm hoping to achieve is a query that can be valid for a
   number of different tables, although the field I don't want to
   select is always called the same (field_not_wanted, for
   example).
  
  This sounds like self-modifying code.  I'd submit that you're
  trying to do the wrong thing in the first place, and that you
  should look over your design for flaws like this and re-do that
  design.
 
 Or to actually try to be helpful, you could consider using a server
 side function instead:
 
 select * from global_query({},TEXT);

Helping somebody shoot themselves in the foot isn't helping them.
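(For completeness: if one goes down this road anyway, the column list can be generated from the catalog rather than hard-coded. A hedged sketch only; the table and column names are placeholders, and this builds the query text rather than running it:)

```sql
-- Generate "SELECT col1, col2, ... FROM mytable" omitting one column.
SELECT 'SELECT '
       || array_to_string(ARRAY(
            SELECT quote_ident(column_name::text)
            FROM information_schema.columns
            WHERE table_name = 'mytable'
              AND column_name <> 'field_not_wanted'
            ORDER BY ordinal_position), ', ')
       || ' FROM mytable' AS generated_query;
```

The generated text can then be EXECUTEd from a pl/pgsql function, with all the maintainability caveats raised above.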

Cheers,
David.
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] How to automate password requests?

2007-11-28 Thread Marten Lehmann

Hello,

I'm trying to automate some postgresql scripts, but I cannot find a way
to pass a password directly to commands like pg_dump, psql and so on.

Even a

echo password | psql

doesn't work, the password prompt of psql is still waiting.

mysql has the -p option. What would be the postgresql equivalent? I
don't want to enter passwords dozens of times.


Regards
Marten

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Linux v.s. Mac OS-X Performance

2007-11-28 Thread Trevor Talbot
On 11/28/07, Joshua D. Drake [EMAIL PROTECTED] wrote:

 On Wed, 28 Nov 2007 09:53:34 -0800
 Trevor Talbot [EMAIL PROTECTED] wrote:

   On Wed, 2007-11-28 at 07:29 -0700, Scott Ribe wrote:
 Yes, very much so. Windows lacks the fork() concept, which is
 what makes PostgreSQL much slower there.

  I mean, I can understand NT having bottlenecks in various areas
  compared to Unix, but this threads are specially optimized thing is
  seeming a bit overblown.  Just how often do you see threads from a
  single process get contiguous access to the CPU?

 I thought it was more about the cost to fork() a process in win32?

Creating a process is indeed expensive on Windows, but a followup
question was about the performance when using persistent connections,
and therefore not creating processes. That's where the conversation
got more interesting :)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] How to automate password requests?

2007-11-28 Thread Alan Hodgson
On Wednesday 28 November 2007, Marten Lehmann [EMAIL PROTECTED] wrote:
 Hello,

 I'm trying to automate some postgresql scripts, but I cannot find a way
 to pass a password directly to commands like pg_dump, psql and so on.
 Even a

 echo password | psql

 doesn't work, the password prompt of psql is still waiting.

 mysql has the -p option. What would be the postgresql equivalent? I
 don't want to enter passwords dozens of times.


create a .pgpass file.
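For reference, each line of ~/.pgpass is host:port:database:username:password, with * allowed as a wildcard in any field. The values below are invented for illustration:

```
# ~/.pgpass -- must be chmod 600 or libpq ignores it
localhost:5432:mydb:marten:s3cret
*:*:*:backup_user:another_secret
```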

-- 
The global consumer economy can best be described as the most efficient way  
to convert natural resources into waste.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Recheck condition

2007-11-28 Thread Josh Harrison
Some indexes are inexact, i.e. they may sometimes return tuples that
don't actually match the index condition.

What causes an index to be inexact? When you create an index and vacuum it
regularly, it is supposed to be correct, right?

This also happens with bitmap
scans, because it'll return anything in the bitmap, which will probably
be more than what you asked for. The recheck just means that the
planner retests the index condition on the result to make sure you only
get the rows you wanted.

So does the recheck condition affect the performance of queries, since it
basically rechecks the condition?
Also, does it go to the heap to retest?

For example for this query
explain analyze select count(*) from foo where foo_id=1 I get the following
plan

                                    QUERY PLAN
------------------------------------------------------------------------------
 Aggregate  (cost=1949.84..1949.85 rows=1 width=0) (actual time=7.996..7.996 rows=1 loops=1)
   ->  Bitmap Heap Scan on foo  (cost=277.45..1924.94 rows=9959 width=0) (actual time=1.903..5.270 rows=10020 loops=1)
         Recheck Cond: (foo_id = 1::numeric)
         ->  Bitmap Index Scan on foo_pk  (cost=0.00..274.96 rows=9959 width=0) (actual time=1.864..1.864 rows=10020 loops=1)
               Index Cond: (foo_id = 1::numeric)
 Total runtime: 8.062 ms

Can you please explain to me, with respect to this example, what is happening
here? This is a small table, but for big tables the performance is not very
good. Does the recheck condition bring down query performance?

Thanks
josh


[Re] Re: [Re] Re: [Re] Re: [GENERAL] Unknown winsock error 10061while dumping a big database

2007-11-28 Thread Cyril VELTER

De : mailto:[EMAIL PROTECTED]
 Cyril VELTER [EMAIL PROTECTED] writes:
  The server has 4G of RAM. But doing a search in the source code, it does
  not seem that these values are used during a copy operation.
 
 They aren't.  The out of memory complaint sounds more like it might be
 due to corrupt data, ie the olde scenario where a variable-width field's
 length word contains a ridiculously large value.  I don't know how that
 would lead to a winsock error, though.

	Yes, this is very strange. The corruption scenario is plausible, as the
dump stops at nearly the same place each time (it's a live database so the
place is not exactly the same).

	Also, the database was upgraded recently (2007-11-09) from 8.0.13 to
8.2.5, so I suppose the corruption occurred between that date and now?

	I have run the following query to get all records with one field over
10MB:

select p2, length(p18155), length(p18154), length(p4065), length(p4083),
       length(p4020), length(p4074), length(p3857), length(p32402),
       length(p5512), length(p18175)
from c2674
where length(p18155) > 1000 or length(p18154) > 1000 or
      length(p4065) > 1000 or length(p4083) > 1000 or length(p4020) > 1000 or
      length(p4074) > 1000 or length(p3857) > 1000 or length(p32402) > 1000 or
      length(p5512) > 1000 or length(p18175) > 1000;

The biggest value is 28034379.

	Is length() supposed to return such a high length in case of
corruption?

	Is there anything else I can do?

Thanks,


Cyril VELTER


Table definition :


 Column |Type | Modifiers
+-+---
 p2 | bigint  |
 p9 | boolean |
 p3337  | integer |
 p18155 | text|
 p18154 | text|
 p17561 | bigint  |
 p4065  | text|
 p689   | bigint  |
 p43711 | integer |
 p4083  | text|
 p4020  | text|
 p4029  | text|
 p4218  | timestamp without time zone |
 p4074  | text|
 p45127 | bigint  |
 p3857  | text|
 p7 | timestamp without time zone |
 p6 | bigint  |
 p5 | timestamp without time zone |
 p32402 | text|
 p5512  | bytea   |
 p18175 | bytea   |
 p42542 | bigint  |
 p17562 | integer |
 p4 | bigint  |
Indexes:
idx_2674_p2 UNIQUE, btree (p2)
idx_2674_p17562 btree (p17562)
idx_2674_p4 btree (p4)
idx_2674_p4029 btree (p4029)
idx_2674_p4218 btree (p4218)
idx_2674_p42542 btree (p42542)
idx_2674_p45127 btree (p45127)
idx_2674_p5 btree (p5)
idx_2674_p6 btree (p6)
idx_2674_p689 btree (p689)
idx_2674_p7 btree (p7)

Row count=1320365


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] [GENERAL] plperl and regexps with accented characters - incompatible?

2007-11-28 Thread Andrew Dunstan



Greg Sabino Mullane wrote:
Just as a followup, I reported this as a bug and it is 
being looked at and discussed:


http://rt.perl.org/rt3//Public/Bug/Display.html?id=47576

Appears there is no easy resolution yet.


  


We might be able to do something with the suggested workaround. I will 
see what I can do, unless you have already tried.


cheers

andrew

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Another question about partitioning

2007-11-28 Thread Alex Vinogradovs
Yes, I enter the query manually while testing. Here are the explain plans:

for select count(*) from poll_3 where eid = 72333

Aggregate  (cost=34697.64..34697.65 rows=1 width=0)
  ->  Seq Scan on poll_3  (cost=0.00..34650.40 rows=18893 width=0)
        Filter: (eid = 72333)


for select count(*) from poll where eid = 72333

Aggregate  (cost=320001.59..320001.60 rows=1 width=0)
  ->  Append  (cost=0.00..319570.78 rows=172323 width=0)
        ->  Seq Scan on poll  (cost=0.00..27.50 rows=17 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_0 poll  (cost=0.00..14348.85 rows=9014 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_1 poll  (cost=0.00..34796.82 rows=18735 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_2 poll  (cost=0.00..34993.84 rows=18527 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_3 poll  (cost=0.00..34650.40 rows=18893 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_4 poll  (cost=0.00..34230.55 rows=18099 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_5 poll  (cost=0.00..34267.64 rows=17543 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_6 poll  (cost=0.00..34469.73 rows=18719 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_7 poll  (cost=0.00..33642.98 rows=17968 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_8 poll  (cost=0.00..32199.15 rows=16480 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_9 poll  (cost=0.00..31943.33 rows=18328 width=0)
              Filter: (eid = 72333)



On Tue, 2007-11-27 at 17:40 -0800, paul rivers wrote:
 Alex Vinogradovs wrote:
  Hello all,
 
 
  I have a table which is partitioned by range into 10 pieces with
  constraint exclusion. Constraint exclusion is enabled in the server
  configuration too. For some reason, queries to the master table
  are still slower than direct queries against partitions. Is there
  any real reason for that, or should I look into misconfiguration?
 
  Thanks!
 
 
  Best regards,
  Alex Vinogradovs

 Is that true even if you type the query yourself in psql and ensure that 
 the values for the partitioned columns are constants in the where 
 clause?  Can you post an explain of the sql?
 
 Paul
 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [Re] Re: [Re] Re: [Re] Re: [GENERAL] Unknown winsock error 10061while dumping a big database

2007-11-28 Thread Richard Huxton

Cyril VELTER wrote:


I have run the following query to get all records with one field over
10MB:

select p2, length(p18155), length(p18154), length(p4065), length(p4083),
       length(p4020), length(p4074), length(p3857), length(p32402),
       length(p5512), length(p18175)
from c2674
where length(p18155) > 1000 or length(p18154) > 1000 or
      length(p4065) > 1000 or length(p4083) > 1000 or length(p4020) > 1000 or
      length(p4074) > 1000 or length(p3857) > 1000 or length(p32402) > 1000 or
      length(p5512) > 1000 or length(p18175) > 1000;


The biggest value is 28034379.

Is length() supposed to return such a high length in case of
corruption?


You'd have thought it would. The odd thing (if it is data corruption) is 
that you would expect to see something in the server logs about a 
failure to allocate 12345412234124 bytes of memory or some such. Whereas 
all you get is this winsock error.



Is there anything else I can do?


Could you try CREATE TABLE some_table AS SELECT * FROM c2674 WHERE ... to copy
the large rows within the database. If that fails, the table is
corrupted, but you can identify the problem rows and work around them
while you dump the data.
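In the same spirit, a rough sketch of how that in-server copy could be used to bisect for bad rows (the range values are illustrative, not from the thread):

```sql
-- Copy a slice of the suspect table entirely server-side; no client
-- transfer is involved, so a failure here points at the table itself.
CREATE TABLE c2674_check AS
  SELECT * FROM c2674 WHERE p2 BETWEEN 0 AND 100000;   -- made-up range
-- Shrink or shift the BETWEEN range (a binary search on the unique
-- key p2) until the failing rows are pinned down, then exclude them
-- from the dump with a matching WHERE clause.
DROP TABLE c2674_check;
```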


If it doesn't fail, that suggests (to me, anyway) you've found a bug 
somewhere in the communication between server and client (which would 
explain the winsock error).


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Recheck condition

2007-11-28 Thread Martijn van Oosterhout
On Wed, Nov 28, 2007 at 02:20:11PM -0500, Josh Harrison wrote:
 Some indexes are inexact, i.e. they may sometimes return tuples that
 don't actually match the index condition.
 
 What causes an index to be inexact? When you create an index and vacuum it
 regularly, it is supposed to be correct, right?

The nature of the beast. For example, if you create an index on large
integer arrays it doesn't store the actual array in the index, but a
hashed version thereof. When we scan the index, because of this hashing
it might match other arrays that it shouldn't. Hence the recheck.

Similarly for geometry indexes. The index only stores bounding boxes
and an intersection test might hit the bounding box but not match the
actual query.
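A minimal sketch of the geometry case (illustrative only; the exact plan shape depends on version and statistics):

```sql
CREATE TABLE shapes (b box);
CREATE INDEX shapes_b_idx ON shapes USING gist (b);
-- The GiST index stores bounding-box approximations, so the executor
-- must re-test the real operator on every candidate row it returns.
EXPLAIN SELECT * FROM shapes WHERE b && box '((0,0),(1,1))';
-- A bitmap plan here would show a line like:
--   Recheck Cond: (b && '(1,1),(0,0)'::box)
```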

 So does the recheck condition affect the performance of queries, since it
 basically rechecks the condition?
 Also, does it go to the heap to retest?

At the time of the recheck the data is already in memory. So no, it
doesn't go back to the heap.

 For example for this query
 explain analyze select count(*) from foo where foo_id=1 I get the following
 plan

It isn't the recheck that's costing you, it's probably just that you're
matching a lot of rows. A bitmap scan classically needs a recheck
because, if a lot of rows need to be stored, it might remember only
blocks 2044-2060. It then needs to recheck each row as it comes through
to make sure it really matches the conditions.

This query is 8ms, I imagine when it takes a long time it's matching
lots of rows?

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy


signature.asc
Description: Digital signature


Re: [HACKERS] [GENERAL] plperl and regexps with accented characters - incompatible?

2007-11-28 Thread Andrew Dunstan



Andrew Dunstan wrote:



Greg Sabino Mullane wrote:
Just as a followup, I reported this as a bug and it is being looked 
at and discussed:


http://rt.perl.org/rt3//Public/Bug/Display.html?id=47576

Appears there is no easy resolution yet.


  


We might be able to do something with the suggested workaround. I will 
see what I can do, unless you have already tried.





OK, I have a fairly ugly manual workaround, that I don't yet understand, 
but seems to work for me.


In your session, run the following code before you do anything else:

CREATE OR REPLACE FUNCTION test(text) RETURNS bool LANGUAGE plperl as $$
return shift =~ /\xa9/i ? 'true' : 'false';
$$;
SELECT test('a');
DROP FUNCTION test(text);

After that we seem to be good to go with any old UTF8 chars.

I'm looking at automating this so the workaround can be hidden, but I'd 
rather understand it first.


(Core guys: If we can hold RC1 for a bit while I get this fixed that 
would be good.)


cheers

andrew



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Cluster using tablespaces?

2007-11-28 Thread Rainer Bauer
Hello,

is there a way to instruct cluster to store the temporarily created data on a
different tablespace (i.e. drive)? If not, wouldn't that have a decent
performance impact, or is most of the time spent retrieving the data in index
order?

Rainer

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Cluster using tablespaces?

2007-11-28 Thread Erik Jones


On Nov 28, 2007, at 4:47 PM, Rainer Bauer wrote:


Hello,

is there a way to instruct cluster to store the temporarily created
data on a different tablespace (i.e. drive)? If not, wouldn't that have a
decent performance impact or is most of the time spent retrieving the
data in index order?

Rainer


What temporarily created data are you referring to?  Do you mean the
contents of the other cluster directories such as pg_xlog?  If so,
there's no need.  Just make it a symlink to a directory on other disks
and you're done.  If that's not what you mean, can you explain further?
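The symlink approach sketched above looks roughly like this. Paths here are temp-dir stand-ins for the real $PGDATA and target disk, and on a real cluster the server must be stopped first:

```shell
# Hedged sketch: relocate pg_xlog onto another disk by moving the
# directory and leaving a symlink in its place.
DATADIR="$(mktemp -d)"     # stands in for the real $PGDATA
FASTDISK="$(mktemp -d)"    # stands in for a directory on the other disk
mkdir "$DATADIR/pg_xlog"   # pretend this is the existing WAL directory
mv "$DATADIR/pg_xlog" "$FASTDISK/pg_xlog"
ln -s "$FASTDISK/pg_xlog" "$DATADIR/pg_xlog"
# The server now follows the symlink and writes WAL on the other disk;
# restart postgres after doing this for real.
```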


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] PostgresSQL vs. Informix

2007-11-28 Thread Jeff Larsen
On Nov 28, 2007 9:32 AM, Chad Hendren [EMAIL PROTECTED] wrote:
 I have a large OEM customer (one of the top 8 for Sun worldwide) that is
 considering embedding PostgreSQL into the core of their primary product
 instead of Informix.  He is trying to build his case for this change.
 Do we have anything that I can forward to him (something like the
 presentation you did at CEC?) to help him build his case?

I can't offer much in the way of a case study, but I am an Informix
customer that has been doing an informal feasibility study on
switching to PostgreSQL. I've learned a ton from the high quality PG
docs and from the PG mailing lists.

Both have their pros and cons. I haven't had equal hardware to compare
PG performance, but my impression is that Informix is faster and
has a more robust query planner. And by that, I mean that the Informix
planner does a great job without things like data type casts and other
help from the SQL developer. If I had to put a number on it, I would
say that PG is 75 to 80% as fast as Informix. Please, don't anyone
start a flame war over this, I'm just guessing here. With equal
hardware and some more tuning, I'm sure I could do better with PG.

Another edge for Informix is the fully synchronous replication feature
where a committed transaction on the primary is guaranteed to be
committed to disk on the secondary if you configure things
appropriately. High availability is critical for us and that's been a
weak spot for PG as far as I'm concerned.  Yes, PG has replication,
but in my opinion, it's not 'enterprise' class just yet.

Lastly on the Informix side, they have more advanced online
backup/restore tools. It's similar to PG PITR backup but does not
depend on file-system level backup tools. The option I use (called
ontape) stores data in a proprietary format to disk or tape. It also
has an incremental backup option so you don't have to do a full dump
every time. There is a more advanced Informix backup tool called
onbar, but I haven't worked with it enough to comment on it.

What does PG have going for it? Price, obviously. I'd love to have
that $100k that I just spent back. PG has better conformance to SQL
language standards, so portability of code would be easier. PG has
some better built in functions and indexing features. I prefer the
PLPGSQL language for stored procedures to Informix's. PG has more options
for stored procedure languages (python, perl). PG has table
inheritance, Informix does not.

One of the most impressive things about PG has been these mailing
lists. Informix support is OK, but the front-line support drones just
don't have the same access to developers who really know what's going
on that you can get directly on this list. Heck, PG developers have
answered my questions here on the weekend! I don't know if you can
even put a price on such direct access to high-level gurus.

I wish I had a better feature comparison list for you. I'm sure I've
failed to mention a lot of great things about PG here, but like I
said, my evaluation has been pretty informal. However at this time, I
have concluded that we could move our company from Informix to PG
without having to give up too much, other than the big licensing fees.
We use a lot of open source software at our company and I would love
to add PostgreSQL to the list.

Jeff

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Cluster using tablespaces?

2007-11-28 Thread Martin Gainty
start here
http://www.postgresql.org/docs/8.1/static/creating-cluster.html

M-
- Original Message -
From: Erik Jones [EMAIL PROTECTED]
To: Rainer Bauer [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Wednesday, November 28, 2007 5:59 PM
Subject: Re: [GENERAL] Cluster using tablespaces?



On Nov 28, 2007, at 4:47 PM, Rainer Bauer wrote:

 Hello,

  is there a way to instruct cluster to store the temporarily created
  data on a different tablespace (i.e. drive)? If not, wouldn't that have a
  decent performance impact or is most of the time spent retrieving the
  data in index order?

 Rainer

What temporarily created data are you referring to?  Do you mean the
contents of the other cluster directories such as pg_xlog?  If
so, there's no need.  Just make it a symlink to a directory on other
disks and you're done.  If that's not what you mean, can you explain
further?

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Cluster using tablespaces?

2007-11-28 Thread Rainer Bauer
Erik Jones wrote:

 is there a way to instruct cluster to store the temporarily created
 data on a different tablespace (i.e. drive)? If not, wouldn't that have a
 decent performance impact or is most of the time spent retrieving the
 data in index order?

What temporary created data are you referring to?

The one described in the manual
http://www.postgresql.org/docs/8.2/static/sql-cluster.html:

During the cluster operation, a temporary copy of the table is created that
contains the table data in the index order. Temporary copies of each index on
the table are created as well.

Rainer

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Cluster using tablespaces?

2007-11-28 Thread Tom Lane
Rainer Bauer [EMAIL PROTECTED] writes:
 Erik Jones wrote:
 What temporary created data are you referring to?

 The one described in the manual
 http://www.postgresql.org/docs/8.2/static/sql-cluster.html:

 During the cluster operation, a temporary copy of the table is created that
 contains the table data in the index order. Temporary copies of each index on
 the table are created as well.

That's probably a bit misleading.  There is no temporary copy of the
table, just the new permanent copy.  The document is trying to point out
to you that the transient disk space requirement will be 2X the table
size, but maybe we could phrase it better.

For btree indexes, there is a temporary copy of the index data, which
will go wherever you have arranged for temp files to go.  (I think that
easy user control of this may be new for 8.3, though.)

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Cluster using tablespaces?

2007-11-28 Thread Rainer Bauer
Tom Lane wrote:

Rainer Bauer [EMAIL PROTECTED] writes:

 During the cluster operation, a temporary copy of the table is created that
 contains the table data in the index order. Temporary copies of each index on
 the table are created as well.

That's probably a bit misleading.  There is no temporary copy of the
table, just the new permanent copy.  The document is trying to point out
to you that the transient disk space requirement will be 2X the table
size, but maybe we could phrase it better.

Ok, I expected that. Does this work:
ALTER TABLE foo CLUSTER ON index_name SET TABLESPACE new_tablespace

I.e. is the table moved to the other tablespace and clustered at the same time,
or are these independent operations?

What I am trying to achieve is cutting down the time the cluster command
takes. I thought the most promising way would be if the new data is written to
a different drive.

For btree indexes, there is a temporary copy of the index data, which
will go wherever you have arranged for temp files to go.  (I think that
easy user control of this may be new for 8.3, though.)

Could you give me a hint where that would be on Windows? I guess this might be
worth a try since there are a couple of btree indexes in the database.

Rainer

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgresSQL vs. Informix

2007-11-28 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 11/28/07 17:11, Jeff Larsen wrote:
[snip]
 
 Lastly on the Informix side, they have more advanced online
 backup/restore tools. It's similar to PG PITR backup but does not
 depend on file-system level backup tools. The option I use (called
 ontape) stores data in a proprietary format to disk or tape. It also
 has an incremental backup option so you don't have to do a full dump
 every time. There is a more advanced Informix backup tool called
 onbar, but I haven't worked with it enough to comment on it.
 
 What does PG have going for it? Price, obviously. I'd love to have
 that $100k that I just spent back. PG has better conformance to SQL
 language standards, so portability of code would be easier. PG has
 some better built in functions and indexing features. I prefer the
 PLPGSQL language for stored procedures to Informix.PG has more options
 for stored procedure languages (python, perl). PG has table
 inheritance, Informix does not.

That's similar to how I'd compare PG to the niche legacy database
(Rdb/VMS) that we use.

 One of the most impressive things about PG has been these mailing
 lists. Informix support is OK, but the front-line support drones just
 don't have the same access to developers who really know what's going
 on that you can get directly on this list. Heck, PG developers have
 answered my questions here on the weekend! I don't know if you can
 even put a price on such direct access to high-level gurus.

Rdb has Informix beat there.  Greybeard engineers are always on the
Rdb mailing list, and the support staff are long-timers who have
access to a continuously updated 20+year VAX NOTES database that
came along when Oracle purchased Rdb from DEC.

 I wish I had a better feature comparison list for you. I'm sure I've
 failed to mention a lot of great things about PG here, but like I
 said, my evaluation has been pretty informal. However at this time, I
 have concluded that we could move our company from Informix to PG
 without having to give up too much, other than the big licensing fees.
 We use a lot of open source software at our company and I would love
 to add PostgreSQL to the list.

We couldn't do it because our databases are too big for single-threaded
backups.  The proprietary backup scheme is transactionally consistent,
and its format lets us easily restore to systems with wildly different
disk layouts than the original database.

- --
Ron Johnson, Jr.
Jefferson LA  USA

%SYSTEM-F-FISH, my hovercraft is full of eels
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHTgd3S9HxQb37XmcRAs5kAKCSuOLOguqhpf/DT0OxbA6ew33CWQCfaVf1
KBzM2RxA91WQEa7MM02SKZg=
=lvNg
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Cluster using tablespaces?

2007-11-28 Thread Alvaro Herrera
Rainer Bauer wrote:
 Tom Lane wrote:
 
 Rainer Bauer [EMAIL PROTECTED] writes:
 
  During the cluster operation, a temporary copy of the table is created 
  that
  contains the table data in the index order. Temporary copies of each index 
  on
  the table are created as well.
 
 That's probably a bit misleading.  There is no temporary copy of the
 table, just the new permanent copy.  The document is trying to point out
 to you that the transient disk space requirement will be 2X the table
 size, but maybe we could phrase it better.
 
 Ok, I expected that. Does this work:
 ALTER TABLE foo CLUSTER ON index_name SET TABLESPACE new_tablespace
 
 I.e. is the table moved to the other tablespace and clustered at the same time
 or are these independent operations?

No, the ALTER TABLE ... CLUSTER ON only defines which index the table will
be clustered on in the future; it doesn't cluster it at that time.
Perhaps it could be improved so that if a table rewrite is going to be
done anyway for some other reason, the rewrite would use the cluster
order.  I think it's far from trivial though.

 What I am trying to achieve is cutting down the time the cluster command
 takes. I thought the most promising way would be if the new data is written to
 a different drive.

It has been theorized that cluster would be faster in general if,
instead of doing an indexscan, we used a seqscan + sort step.  It
would be good to measure it.
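That idea can be approximated by hand today. A hedged sketch only: names are made up, and any indexes, constraints, triggers, and permissions would have to be recreated on the copy:

```sql
BEGIN;
CREATE TABLE foo_sorted AS
  SELECT * FROM foo ORDER BY foo_id;   -- seqscan + sort, no indexscan
DROP TABLE foo;
ALTER TABLE foo_sorted RENAME TO foo;
-- recreate indexes here, e.g.:
--   CREATE UNIQUE INDEX foo_pk ON foo (foo_id);
COMMIT;
```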

 For btree indexes, there is a temporary copy of the index data, which
 will go wherever you have arranged for temp files to go.  (I think that
 easy user control of this may be new for 8.3, though.)
 
 Could you give me a hint where that would be on Windows? I guess this might be
 worth a try since there are a couple of btree indexes in the database.

I think Tom is referring to the new temp_tablespaces config variable.
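If so, under 8.3 it would presumably look something like this (the tablespace name and path are made up):

```sql
-- Put temp files (sorts, temporary btree copies during index builds)
-- on a dedicated drive for this session.
CREATE TABLESPACE fasttemp LOCATION '/mnt/fastdisk/pgtemp';
SET temp_tablespaces = 'fasttemp';   -- or set it in postgresql.conf
```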

-- 
Alvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J
We are who we choose to be, sang the goldfinch
when the sun is high (Sandman)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Another question about partitioning

2007-11-28 Thread paul rivers

Alex Vinogradovs wrote:

Yes, I enter query manually while testing. Here are explain plans :

for select count(*) from poll_3 where eid = 72333

Aggregate  (cost=34697.64..34697.65 rows=1 width=0)
  ->  Seq Scan on poll_3  (cost=0.00..34650.40 rows=18893 width=0)
        Filter: (eid = 72333)


for select count(*) from poll where eid = 72333

Aggregate  (cost=320001.59..320001.60 rows=1 width=0)
  ->  Append  (cost=0.00..319570.78 rows=172323 width=0)
        ->  Seq Scan on poll  (cost=0.00..27.50 rows=17 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_0 poll  (cost=0.00..14348.85 rows=9014 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_1 poll  (cost=0.00..34796.82 rows=18735 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_2 poll  (cost=0.00..34993.84 rows=18527 width=0)
  
Do you have appropriate check constraints defined on table poll?  Can 
you include a \d poll?


Also, what version is this?

Paul



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Another question about partitioning

2007-11-28 Thread paul rivers

paul rivers wrote:

Alex Vinogradovs wrote:

Yes, I enter query manually while testing. Here are explain plans :

for select count(*) from poll_3 where eid = 72333

Aggregate  (cost=34697.64..34697.65 rows=1 width=0)
  ->  Seq Scan on poll_3  (cost=0.00..34650.40 rows=18893 width=0)
        Filter: (eid = 72333)


for select count(*) from poll where eid = 72333

Aggregate  (cost=320001.59..320001.60 rows=1 width=0)
  ->  Append  (cost=0.00..319570.78 rows=172323 width=0)
        ->  Seq Scan on poll  (cost=0.00..27.50 rows=17 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_0 poll  (cost=0.00..14348.85 rows=9014 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_1 poll  (cost=0.00..34796.82 rows=18735 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_2 poll  (cost=0.00..34993.84 rows=18527 width=0)
  
Do you have appropriate check constraints defined on table poll?  Can 
you include a \d poll?


Also, what version is this?

Paul



Sorry, I should have asked: do you have check constraints defined on all 
the child poll tables?  So, what's \d poll_3 look like, etc?  You've 
already said you're sure constraint exclusion is on, but you're also 
sure postmaster was restarted too?
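For anyone following along: constraint exclusion can only prune children that carry a CHECK constraint the planner can compare against the WHERE clause. A hedged sketch of what that setup needs to look like (table names match the thread, but the eid range is invented for illustration):

```sql
-- Sketch: minimal partition setup for constraint exclusion to prune poll_3.
CREATE TABLE poll (eid int, answer text);
CREATE TABLE poll_3 (CHECK (eid >= 60000 AND eid < 80000)) INHERITS (poll);

-- Must be enabled (postgresql.conf, or per session for testing):
SET constraint_exclusion = on;

-- With the CHECK in place, EXPLAIN SELECT count(*) FROM poll WHERE eid = 72333
-- should visit only poll and poll_3, not every child table.
```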


Paul




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Btree indexes temp copy Was [Re: [GENERAL] Cluster using tablespaces?]

2007-11-28 Thread Ow Mun Heng

On Wed, 2007-11-28 at 18:30 -0500, Tom Lane wrote:

 For btree indexes, there is a temporary copy of the index data, which
 will go wherever you have arranged for temp files to go.  (I think that
 easy user control of this may be new for 8.3, though.)

In 8.2.5 is there a way to control this? I noticed that when I create a
new index etc., it will hit the OS disk (that's where the main tablespace
is located, but no data is actually being stored there except for PG's
template1/0 etc. tables)

I would rather it hit the Raid Array. I looked at the postgres.conf file
but didn't see anything obvious to point out the temp location..

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Error compiling Slony I

2007-11-28 Thread Glyn Astill
Hi Vivek,

Ah thanks, I have posted this to the slony list too, but it doesn't
seem to have appeared yet so I thought I'd ask here.

I've already tried removing and re-installing bison, but I shall try
again as you suggest.

You can't remember any other detail can you?

Thanks
Glyn
--- Vivek Khera [EMAIL PROTECTED] wrote:

 
 On Nov 28, 2007, at 8:50 AM, Glyn Astill wrote:
 
  Hi people,
 
  When I try to compile Slony 1.2 I get the following error:
 
  parser.y:1090:18: error: scan.c: No such file or directory
  make[2]: *** [parser.o] Error 1
  make[2]: Leaving directory
 `/tmp/slony1-1.2.12/src/slony_logshipper'
  make[1]: *** [all] Error 2
  make[1]: Leaving directory `/tmp/slony1-1.2.12/src'
 
  I've installed bison.
 
  Anyone got any idea what I may be doing wrong?
 
 Slony mailing list will be more helpful... but I ran into this. 
 the  
 solution for me was to uninstall bison and re-build.  for some
 reason  
 if the slony configure script finds bison, it forces it to rebuild
 the  
 parser.  i think the parser requires a certain version of bison.
 
 
 ---(end of
 broadcast)---
 TIP 3: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faq
 



Glyn Astill




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Error compiling Slony I

2007-11-28 Thread Alvaro Herrera
Glyn Astill wrote:
 Hi Vivek,
 
 Ah thanks, I have posted this to the slony list too, but it doesn't
 seem to have appeared yet so I thought I'd ask here.
 
 I've already tried removing and re-installing bison, but I shall try
 again as you suggest.

Actually I think the suggestion is to remove bison and recompile Slony
without reinstalling it.



-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1, W 73º 13' 56.4
"Virtue is the just mean between two defects" (Aristotle)

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Create index syntax error

2007-11-28 Thread Chris Velevitch
create index job_approver1_idx on jobs (approver1_id)

I'm using 7.4 and pgAdmin 1.8 and I don't see why this simple command
has a syntax error:-

Query result with 0 rows will be returned.

ERROR:  syntax error at or near "index"
LINE 1: EXPLAIN create index job_approver1_idx on jobs (approver1_id)
   ^

** Error **

ERROR: syntax error at or near "index"
SQL state: 42601
Character: 16


Chris
-- 
Chris Velevitch
Manager - Sydney Flash Platform Developers Group
m: 0415 469 095
www.flashdev.org.au

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: Btree indexes temp copy Was [Re: [GENERAL] Cluster using tablespaces?]

2007-11-28 Thread Tom Lane
Ow Mun Heng [EMAIL PROTECTED] writes:
 On Wed, 2007-11-28 at 18:30 -0500, Tom Lane wrote:
 For btree indexes, there is a temporary copy of the index data, which
 will go wherever you have arranged for temp files to go.  (I think that
 easy user control of this may be new for 8.3, though.)

 In 8.2.5 is there a way to control this?

You can replace the pgsql_tmp subdirectory with a symlink to someplace.

This is outside the purview of the database, so for instance it won't
survive a dump/reload, but it can be a workable hack ...
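In script form, the hack looks roughly like this. All paths here are placeholders; substitute your real data directory, stop the postmaster first, and note that in 8.2 pgsql_tmp lives under $PGDATA/base:

```shell
# Sketch of the symlink hack; every path below is an invented example.
PGDATA=/tmp/pgdata_demo                       # assumption: your real data directory
mkdir -p "$PGDATA/base" /tmp/raid_temp_demo/pgsql_tmp

# Move any existing temp dir aside, then point a symlink at the fast disk.
mv "$PGDATA/base/pgsql_tmp" /tmp/raid_temp_demo/ 2>/dev/null || true
ln -sfn /tmp/raid_temp_demo/pgsql_tmp "$PGDATA/base/pgsql_tmp"
ls -ld "$PGDATA/base/pgsql_tmp"
```

As Tom notes, the link lives outside the database's knowledge, so remember to recreate it after any dump/reload.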

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Create index syntax error

2007-11-28 Thread Chris



ERROR:  syntax error at or near "index"
LINE 1: EXPLAIN create index job_approver1_idx on jobs (approver1_id)


You can't explain creating an index.

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Create index syntax error

2007-11-28 Thread Tom Lane
Chris Velevitch [EMAIL PROTECTED] writes:
 I'm using 7.4 and pgAdmin 1.8 and I don't see why this simple command
 has a syntax error:-

 LINE 1: EXPLAIN create index job_approver1_idx on jobs (approver1_id)

CREATE INDEX isn't an explain-able operation, because it doesn't
have a plan.
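In other words, EXPLAIN only accepts plannable statements (SELECT, INSERT, UPDATE, DELETE); run the DDL bare:

```sql
-- EXPLAIN works on optimizable statements only:
EXPLAIN SELECT * FROM jobs WHERE approver1_id = 1;

-- DDL has no query plan, so issue it without EXPLAIN:
CREATE INDEX job_approver1_idx ON jobs (approver1_id);
```

(pgAdmin's "explain query" button prepends EXPLAIN automatically, which is why the command failed there but would succeed in plain psql.)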

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] PostgresSQL vs. Informix

2007-11-28 Thread Gregory Williamson
Chad --
...
  
 Have you seen any studies (either by Sun or others) that compare 
 PostgreSQL to other commercial database software (Informix, Oracle, 
 Sybase, etc.)? I am interested in seeing a feature-by-feature comparison of 
 PostgreSQL and Informix.
 

Hope this ain't too late!

I can't provide a feature by feature comparison and I've never seen benchmarks 
comparing Informix to PostgreSQL.

I can however speak from the experience of having migrated a moderate sized set 
of databases from Informix 9.x to PostgreSQL. Alas, we went from Sun Solaris 
with Informix to Linux with PostgreSQL so a direct comparison is not realistic. 
But in most things we have found the two roughly the same; when there are 
differences Informix has usually been faster but not always.

A few almost random points in no particular order, in addition to what other 
posters have said:

* SQL is fairly similar with some exceptions
 DISTINCT vs UNIQUE; see also DISTINCT ON; Informix handles NULLs in
 concatenations differently (see COALESCE in PostgreSQL SQL); LIMIT
 is handled differently. UPDATE STATISTICS is roughly like ANALYZE.
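A hedged illustration of two of those SQL differences (the table and column names are invented):

```sql
-- Informix's SELECT UNIQUE becomes SELECT DISTINCT in PostgreSQL:
SELECT DISTINCT customer_id FROM orders;

-- NULLs in concatenation: in PostgreSQL a NULL operand nullifies the whole
-- string, so wrap optional columns in COALESCE:
SELECT first_name || ' ' || COALESCE(middle_name, '') || ' ' || last_name
       AS full_name
FROM people
LIMIT 10;   -- Informix spells this FIRST 10 instead
```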

* MVCC has some real differences; in particular SELECT COUNT(*) FROM foo;
 is noticeably slower in PostgreSQL (but much less so in 8.3beta).
 Read the manual sections on autovacuum/vacuum carefully. Logging differs
 as well. Rollbacks are cheaper in PostgreSQL.

* Beware the default settings on PostgreSQL configs -- they are very
 conservative and can lead to performance issues if you don't
 tweak them.

* High speed loader is slightly faster than COPY FROM, but not by a lot.

* SPL conversion is a pain and I've basically recoded everything, although
 it gets easier once you get used to the differences (returning set
 values for instance). Only a few dozen procedures had to get done.

* We use the PostGIS spatial extension (akin to the Spatial Blade in Informix)
 Informix had better documentation, and a bit more functionality, but
 if a spatial user defined function went bad it could (and would) bring
 the whole instance down with a hard crash; rebooting the server was often
 necessary. The very few times we've seen PostGIS slay a PostgreSQL instance,
 the PostgreSQL recovery was far easier -- sometimes not even needing a recycle.
 YMMV

* Informix's replication and backups are superior. But for most purposes I think
 PostgreSQL has a usable variant. But certainly not for all needs. 

* PostgreSQL spawns a process for each connection (make sure you tweak shared
 memory!); this leads to higher apparent loads and lots more processes in
 a ps or top listing than you'd see on an Informix database server since
 Informix uses internal threads. But throughput is the metric ...

* There is no such thing as raw disk space in PostgreSQL; the claim is it doesn't
 gain them much since modern OSs have spent a lot of time on regular file
 system speed; the argument goes on that when Informix and Oracle were young
 so were the servers they were on, and they had to invent everything for
 themselves.

* There is no equivalent of a synonym; a view can be used to fake this sometimes,
 but where Informix lets you create a synonym to a table in another database /
 instance, PostgreSQL doesn't. dblink can be used to poke a hole to other
 databases though, including non-postgres ones.
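A sketch of both workarounds (the database, schema, and table names are invented examples):

```sql
-- Poor-man's synonym: a view over a table in another schema.
CREATE VIEW invoices AS SELECT * FROM accounting.invoices;

-- Cross-database access via the contrib dblink module; note that the
-- caller must declare the remote row type:
SELECT *
FROM dblink('dbname=otherdb',
            'SELECT id, total FROM invoices')
     AS t(id int, total numeric);
```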

* Locking differs some -- no such thing as a page level lock.

HTH -- might add more if I think of anything.

Greg Williamson
Senior DBA
GlobeXplorer LLC, a DigitalGlobe company

Confidentiality Notice: This e-mail message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and must be protected in accordance with those 
provisions. Any unauthorized review, use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the sender by 
reply e-mail and destroy all copies of the original message.

(My corporate masters made me say this.)


[GENERAL] Why upgrade?

2007-11-28 Thread Chris Velevitch
I'm currently using 7.4 and I trying find out what the value/advantage
of upgrading to a more recent version and to which version.


Chris
-- 
Chris Velevitch
Manager - Sydney Flash Platform Developers Group
m: 0415 469 095
www.flashdev.org.au

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] System Load analyze

2007-11-28 Thread Greg Smith

On Wed, 28 Nov 2007, Peter Bauer wrote:


PERC4E/DI DC ULTRA320 SCSI RAID, 256MB
Its Debian sarge with kernel 2.4.26.050719-686 #1 SMP.


OK, so I'd expect you're using the megaraid2 driver.  That and kernel 
2.4.26 are a few years behind current at this point, and there have been 
plenty of performance improvements in Linux and that driver since then. 
Since you're asking about this before it's a serious problem and don't 
need an immediate fix, you may want to consider whether upgrading to a 
more modern 2.6 kernel is in your future; that change alone may resolve 
some of the possibly too high load you're having.  I doubt any 2.4 kernel 
is really getting the best from your fairly modern server hardware.


Also, the PERC4E and similar Megaraid cards are known to be generally 
sluggish on write throughput compared with some of the competing products 
out there.  I don't know that I'd replace it though, as spending the same 
amount of money adding disks would probably be more useful.


There's actually a way to combine these two ideas and get an upgrade 
transition plan.  Buy a second disk, find a downtime window, install a 
newer Linux onto it and test.  If that works well switch to it.  If it 
doesn't you still have the original unmodified system around and you can 
at least split the disk load between the two drives.



vmstat and iostat were running with 1 second intervals.


Good, that means the responses you've already gotten made the right 
assumptions.  There's some additional fun statistics you can collect with 
the Linux iostat, but sadly that needs a newer kernel as well.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Why upgrade?

2007-11-28 Thread Josh Trutwin
On Thu, 29 Nov 2007 17:08:41 +1100
Chris Velevitch [EMAIL PROTECTED] wrote:

 I'm currently using 7.4 and I trying find out what the
 value/advantage of upgrading to a more recent version and to which
 version.

The release notes - esp. for the major versions - are the best source
of finding out what changed. We're running 8.1 and I'm very anxious
for 8.3 (probably 8.3.1) after reading some of the new features and
performance improvements.

http://www.postgresql.org/docs/8.3/static/release.html

Josh

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] 60 Seconds to connected to Postgresql using ODBC or PGAdmin

2007-11-28 Thread Richard Broersma Jr
--- On Wed, 11/28/07, Finn Lassen [EMAIL PROTECTED] wrote:

Can anyone else comment on what could be causing these slow 
connection problems?


 Meanwhile, I don't understand why it now takes exactly
 60 seconds to connect to the database whether I use pgAdmin or my
 connection string from within VB.

This is very odd.  Since you are having a problem with your connection time on 
both pgAdmin (which doesn't use ODBC to connect) and ODBC connections, I would 
assume that the issue is not ODBC-related.  

 I thought I had seen a comment about this somewhere, but can't find it 
 now. I've tried changing Connection Pooling in the OBDC Data Source 
 Administrator for the PostgreSQL ANSI driver, but doesn't have any 
 effect (or maybe just reloading the server configuration is not enough?

If this were an ODBC connection Issue, i would first make sure that all of ODBC 
logging was disabled on your client computer.  ODBC logging can really kill 
performance.

The windows ODBC tracing is found in the ODBC Datasource Administrator form - 
tracing - [Stop tracing now]  [Stop Visual Studio Analyzer now].

I guess it is impossible for postgresql ODBC logging to be taking place, since 
you are using a DSN-less connection and haven't set any parameters to start the logging.

If all of the logging is already off, try turning on the Mylog and 
CommLog options by setting the appropriate settings in your DSN-less connection 
string.  If you post these logs, it will help others on the ODBC mailing list 
to troubleshoot where your problem is coming from.
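For reference, the psqlODBC logging switches can be flipped from a DSN-less connection string along these lines. The server/database/credential values below are placeholders, and the exact option names should be double-checked against the documentation for your driver version:

```
Driver={PostgreSQL ANSI};Server=localhost;Port=5432;
Database=mydb;Uid=myuser;Pwd=secret;
Debug=1;CommLog=1
```

Debug enables the driver-side mylog trace and CommLog records the client/server protocol traffic; both should be turned back off once the logs are captured, since they hurt performance.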

Also on a side note, it is important to remember that many of the subscribers 
to the Postgresql mailing list are bombarded with countless emails on a daily 
basis.  

Due to the voluminous amount of emails, I am pretty sure that most subscribers 
pick and choose which emails they will read purely based on the interest 
generated by the email's subject heading.  

So to help encourage more subscribers to participate, it is important to make 
your subject headings very specific (to the point) and to make them as eye 
catching as possible.  You'll notice that I've altered your email subject a bit.  
Hopefully it will help get a few more people to join in on this thread.

There is nothing wrong with tackling a very difficult but general problem with 
postgresql by sending separate emails with different subject headings that 
specifically address only the individual facets of the overall problem.  
Different people will probably respond to different facets of your overall 
problem.

Regarding the test case I sent to you, how many columns should I try to create 
in a table in order to reproduce the problem you were having with needing OIDs 
created in your tables?

Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] 60 Seconds to connected to Postgresql using ODBC or PGAdmin

2007-11-28 Thread Richard Broersma Jr
--- On Wed, 11/28/07, Richard Huxton [EMAIL PROTECTED] wrote:
 Name lookups. Something is trying to look up a name,
 failing and it's 
 timing out after 60 seconds.

It seems the OP's connection string was set to localhost.  Would this still 
indicate a Name Lookup problem?

Regards,
Richard

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] 60 Seconds to connected to Postgresql using ODBC or PGAdmin

2007-11-28 Thread Richard Huxton

Richard Broersma Jr wrote:

--- On Wed, 11/28/07, Finn Lassen [EMAIL PROTECTED] wrote:

Can anyone else comment on what could be causing these slow 
connection problems?



Meanwhile, I don't understand why it now takes exactly
60 seconds to connect to the database whether I use pgAdmin or my
connection string from within VB.


Name lookups. Something is trying to look up a name, failing and it's 
timing out after 60 seconds.


Could be DNS, or WINS (or whatever MS' current name resolution system is 
called). It could be either end of the connection too, if for example 
something on the server is logging the names of connecting clients.


Try a couple of name lookups (forward and reverse) from each end and see 
what happens.
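On the Linux end, that check can be as simple as the following (the IP here is just the loopback address; substitute the client's and server's real names/addresses):

```shell
# Forward lookup: name -> address; reverse: address -> name.
# getent consults the same resolver path (hosts file, then DNS)
# that client applications use, so it mirrors what libpq sees.
getent hosts localhost       # forward lookup
getent hosts 127.0.0.1       # reverse lookup via the hosts database
# On Windows, "nslookup <name>" and "nslookup <ip>" do the equivalent,
# though nslookup bypasses the hosts file and queries DNS directly.
```

If either direction hangs for tens of seconds before answering, that's consistent with the 60-second connect delay described above.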


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq