Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

2006-03-24 Thread Gábor Farkas

Scott Marlowe wrote:

On Thu, 2006-03-23 at 12:17, Jim Nasby wrote:

On Mar 22, 2006, at 10:08 PM, Scott Marlowe wrote:
Now, I shouldn't be able to insert anything in b that's not  
referencing

an entry in a.  and I used innodb tables.  and I used ansi SQL, and I
got no errors.  So how come my data's incoherent three seconds after
creating the tables the way the spec says should work?  Simple.  MySQL
only implements foreign keys if you do them this way:


Yep.  I filed the bug report on it.

http://bugs.mysql.com/bug.php?id=13301
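For readers following along, the distinction Scott is describing is presumably MySQL's long-standing behaviour of parsing but ignoring column-level REFERENCES clauses; a hedged sketch (table and column names illustrative):

```sql
-- Silently ignored: InnoDB parses the inline REFERENCES clause but
-- creates no constraint, so orphan rows in b are accepted.
CREATE TABLE b (
  a_id INT REFERENCES a (id)
) ENGINE=InnoDB;

-- Enforced: only the table-level FOREIGN KEY syntax creates a real
-- constraint in InnoDB.
CREATE TABLE b2 (
  a_id INT,
  FOREIGN KEY (a_id) REFERENCES a (id)
) ENGINE=InnoDB;
```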



from the response:

 Years ago, to help porting applications from other database brands to 
 MySQL, MySQL was made to accept the syntax even though no real

 constraints were created.


i hope postgresql will never help me this way.

gabor

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] COPY command documentation

2006-03-24 Thread Dave Page



On 23/3/06 20:12, David Fetter [EMAIL PROTECTED] wrote:

 On Thu, Mar 23, 2006 at 07:00:13PM +0100, Jim Nasby wrote:
 On Mar 23, 2006, at 4:08 PM, Oisin Glynn wrote:
 
 I just discovered that the comments from 8.0 had the answer I was
 looking for but these comments are not in the 8.1 docs. Should the
 comments be rolled forward as new versions are created? Or if valid
 comments added to the docs themselves?
 
 http://www.postgresql.org/docs/8.1/interactive/sql-copy.html
 
 http://www.postgresql.org/docs/8.0/interactive/sql-copy.html
 
 No, comments don't roll forward.
 
 ...and it's unlikely that they will, now or later, without somebody
 whose whole job is to monitor those comments and make patches.
 
 I'd like to make a Modest Proposal™:  Let's take down the interactive
 documents and, in their place, put up a request that doc patches be
 sent to -docs.
 
 What say?

I say no, because whilst some comments should (and do) end up in the docs,
many are simply useful real-world code examples and related information that
people post. It's useful stuff, but would clutter the docs.

Regards, Dave.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Some pgbench results

2006-03-24 Thread Just Someone
Hi Magnus,

 It might seem that I'm selling ext3 or something :) but it's the linux
 filesystem I know best.
 If you want ext3 to perform with large directories, there is an mkfs
 option that enables directory hashing that you can try: -O dir_index.

Not at all (sell ext3 ;-) ). It's great to get this kind of info! I'd
rather use ext3 as it's VERY stable, and the default in Fedora
anyway. So thanks for the tip!
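For reference, directory hashing can also be switched on for an existing ext3 filesystem rather than only at mkfs time; a hedged sketch (device name hypothetical, run against an unmounted filesystem):

```shell
# Enable the dir_index feature on an existing ext3 filesystem
tune2fs -O dir_index /dev/sda1
# Rehash existing directories so old ones benefit too
e2fsck -fD /dev/sda1
```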

Bye,

Guy.

--
Family management on rails: http://www.famundo.com - coming soon!
My development related blog: http://devblog.famundo.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Retrieve large object

2006-03-24 Thread Carmen Wai
Hello:

My database table contains a large object column, and I can only use
lo_export to export one large object file at a time. As my application
needs to retrieve all the large object files from the table, is there
any way to retrieve them all quickly? Is there any command similar to
COPY TO for a table with large objects?

Thanks a lot!
Carmen
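One hedged workaround, assuming the table stores large-object OIDs: the server-side lo_export() function can be driven from a single query, exporting every object in one pass. Table and column names here (docs, blob_oid) are hypothetical, and the paths are written by the server process, which must have write access:

```sql
-- Export each large object referenced by docs.blob_oid to a file
-- named after its OID (illustrative names; adjust to the real schema).
SELECT lo_export(blob_oid, '/tmp/blob_' || blob_oid::text)
FROM docs;
```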


[GENERAL] Practical limit on number of tables ina single database

2006-03-24 Thread Just Someone
Hi,

I am creating a hosted solution that I want to base on separation by
schemas, so that each hosted family we have will have a schema (and a
user) assigned to it. On login I will set the search path, and so each
family will see its own tables. This is all tested and works fine.
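The per-family setup described above might look like this (names and password are illustrative):

```sql
-- One schema plus one user per hosted family (hypothetical names)
CREATE USER family1_user PASSWORD 'secret';
CREATE SCHEMA family1 AUTHORIZATION family1_user;
-- At login, restrict what the session sees:
SET search_path TO family1;
```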

But I would like to know if there's a practical limit to the number of
schemas and tables I can have. Please note that I'm using table spaces
to make sure the directories are manageable.

I tested it so far with 13000 schemas and users, with 26 tables in
each schema (a total of more than 330,000 tables). It works perfectly,
but I would like to know if someone has experience with this number of
tables/schemas, and if there's a limit I should be careful of.

Bye,

Guy.

--
Family management on rails: http://www.famundo.com - coming soon!
My development related blog: http://devblog.famundo.com



Re: [GENERAL] Practical limit on number of tables ina single database

2006-03-24 Thread chris smith
On 3/24/06, Just Someone [EMAIL PROTECTED] wrote:
 Hi,

 I am creating a hosted solution that I want to base on separation by
 schemas, so that each hosted family we have will have a schema (and a
 user) assigned to it. On login I will set the search path, and so each
 family will see its own tables. This is all tested and works fine.

 But I would like to know if there's a practical limit to the number of
 schemas and tables I can have. Please note that I'm using table spaces
 to make sure the directories are manageable.

 I tested it so far with 13000 schemas and users, with 26 tables in
 each schema (a total of more than 330,000 tables). It works perfectly,
 but I would like to know if someone has experience with this number of
 tables/schemas, and if there's a limit I should be careful of.

There's no real limit. See this recent thread for details:

http://archives.postgresql.org/pgsql-advocacy/2006-03/msg00082.php

and

http://people.planetpostgresql.org/greg/index.php?/archives/37-The-million-table-challenge.html

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

2006-03-24 Thread Jim C. Nasby
On Thu, Mar 23, 2006 at 12:24:18PM -0600, Scott Marlowe wrote:
 On Thu, 2006-03-23 at 12:17, Jim Nasby wrote:
  On Mar 22, 2006, at 10:08 PM, Scott Marlowe wrote:
   Now, I shouldn't be able to insert anything in b that's not  
   referencing
   an entry in a.  and I used innodb tables.  and I used ansi SQL, and I
   got no errors.  So how come my data's incoherent three seconds after
   creating the tables the way the spec says should work?  Simple.  MySQL
   only implements foreign keys if you do them this way:
 
 Yep.  I filed the bug report on it.
 
 http://bugs.mysql.com/bug.php?id=13301

Submitted to the gotchas page...
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] COPY command documentation

2006-03-24 Thread Jim C. Nasby
On Fri, Mar 24, 2006 at 08:58:55AM +, Dave Page wrote:
 On 23/3/06 20:12, David Fetter [EMAIL PROTECTED] wrote:
 
  On Thu, Mar 23, 2006 at 07:00:13PM +0100, Jim Nasby wrote:
  On Mar 23, 2006, at 4:08 PM, Oisin Glynn wrote:
  
  I just discovered that the comments from 8.0 had the answer I was
  looking for but these comments are not in the 8.1 docs. Should the
  comments be rolled forward as new versions are created? Or if valid
  comments added to the docs themselves?
  
  http://www.postgresql.org/docs/8.1/interactive/sql-copy.html
  
  http://www.postgresql.org/docs/8.0/interactive/sql-copy.html
  
  No, comments don't roll forward.
  
  ...and it's unlikely that they will, now or later, without somebody
  whose whole job is to monitor those comments and make patches.
  
  I'd like to make a Modest Proposal™:  Let's take down the interactive
  documents and, in their place, put up a request that doc patches be
  sent to -docs.
  
  What say?
 
 I say no, because whilst some comments should (and do) end up in the docs,
 many are simply useful real-world code examples and related information that
 people post. It's useful stuff, but would clutter the docs.

But now that stuff gets 'lost' with every new major version. It'd
probably be better if it was posted somewhere like
http://www.pervasivepostgres.com/instantkb13/
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] Parallel sequential scans

2006-03-24 Thread Jim C. Nasby
On Fri, Mar 24, 2006 at 01:44:19AM -0500, Tom Lane wrote:
  Or I could fire off all the queries simultaneously and hope
  they stay in close-enough lockstep through a single pass
  through the table to be able to share most of the IO.
 
 I have not tried this sort of thing, but right offhand I like the second
 alternative.  The hope is more well-founded than you seem to think:
 whichever process is currently ahead will be slowed by requesting I/O,
 while processes that are behind will find the pages they need already in
 shared buffers.  You should definitely see just one read of each table
 page as the parallel scans advance, assuming you don't have an
 unreasonably small number of buffers.
 
 Another reason, if you have more than one CPU in your machine, is that
 multiple processes can make use of multiple CPUs, whereas the
 one-fancy-query approach doesn't parallelize (at least not without
 Bizgres or some such).
 
 And lastly, you can just try it without sweating hard to convert the
 queries ;-).  So try it and let us know how it goes.

There was also a patch floating around that allowed seqscans to start in
the middle of a table if it was detected that a seqscan on that table
was already in progress. Not sure if that made it in, but it might be
relevant here.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] Practical limit on number of tables ina single database

2006-03-24 Thread Jim C. Nasby
On Fri, Mar 24, 2006 at 02:33:39AM -0800, Just Someone wrote:
 Hi,
 
 I am creating a hosted solution that I want to base on separation by
 schemas, so that each hosted family we have will have a schema (and a
 user) assigned to it. On login I will set the search path, and so each
 family will see its own tables. This is all tested and works fine.
 
 But I would like to know if there's a practical limit to the number of
 schemas and tables I can have. Please note that I'm using table spaces
 to make sure the directories are manageable.
 
 I tested it so far with 13000 schemas and users, with 26 tables in
 each schema (a total of more than 330,000 tables). It works perfectly,
 but I would like to know if someone has experience with this number of
 tables/schemas, and if there's a limit I should be careful of.

Just make sure you increase max_fsm_relations, and that max_fsm_pages is
at least as large as max_fsm_relations, because each relation must get at
least one page.
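As a hedged illustration of that advice (the values below are invented; size them to your actual relation count):

```
# postgresql.conf
max_fsm_relations = 350000   # above the number of relations tracked
max_fsm_pages = 500000       # at least one page per tracked relation
```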
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] another seemingly simple encoding question

2006-03-24 Thread joseph
maybe a routine question here ...  i hope i can understand the
answer.

[EMAIL PROTECTED] ~]$ pg_ctl --version
pg_ctl (PostgreSQL) 8.0.0beta3
[EMAIL PROTECTED] ~]$


i have a problem matching a utf8 string with a field in a database
encoded in utf8.
i read the documentation, checked the following, and don't know where i
went astray, trying to match ...
1) i am almost 100% sure the data is correctly utf8. i just dumped and
loaded into postgres. 
2) 
utf8db=> \l
        List of databases
  Name  |  Owner   | Encoding
--------+----------+----------
 utf8db | postgres | UNICODE
3) postgresql.conf

# These settings are initialized by initdb -- they might be changed
lc_messages = 'en_US.utf8'  # locale for system error message strings
lc_monetary = 'en_US.utf8'  # locale for monetary formatting
lc_numeric = 'en_US.utf8'   # locale for number formatting
lc_time = 'en_US.utf8'  # locale for time formatting

# - Other Defaults -

4) set client encoding in client (psql or php, either one, both same
mismatch)

LOG:  statement: select wordid,word from korean_english where word='기르다' limit 10;
LOG:  statement: show client_encoding;
LOG:  statement: set client_encoding to  'utf8';
LOG:  statement: select wordid,word from korean_english where word='기르다' limit 10;
LOG:  statement: show client_encoding;

5) locale -a | grep en
<snip>
en_US.utf8
</snip>


ohhh,  where is my mistake, please!








[GENERAL] URGENT! could not access status of transaction

2006-03-24 Thread Reimer



Hello,

We had some disk problems here and now pg_dump is complaining with:

pg_dump: ERROR: could not access status of transaction 1768711534
DETAIL: could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado
pg_dump: SQL command to dump the contents of table "tt_fin" failed: PQendcopy() failed.
pg_dump: Error message from server: ERROR: could not access status of transaction 1768711534
DETAIL: could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado
pg_dump: The command was: COPY "REIMER".tt_fin (codfil, sequen, tipger, tipnum, codusu, filger, filnum, filusu, seqger, seqnum, datope, vlrmov, seqtrn, codctr, codcus, numdoc, observ, flgfin, chvtrn, atu_em) TO stdout;

I know this is related to the hardware and not to PostgreSQL, but I don't want to restore the whole database.

What is the best way to fix this? pg_resetxlog?

Thanks in advance!

Reimer


Re: [GENERAL] URGENT! could not access status of transaction

2006-03-24 Thread Reimer



I forgot to mention the PostgreSQL version: 7.4.6

Thanks!

  - Original Message - 
  From: Reimer 
  To: pgsql-general@postgresql.org 
  Sent: Friday, March 24, 2006 10:41 AM
  Subject: [GENERAL] URGENT! could not access status of transaction

  Hello,

  We had some disk problems here and now pg_dump is complaining with:

  pg_dump: ERROR: could not access status of transaction 1768711534
  DETAIL: could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado
  pg_dump: SQL command to dump the contents of table "tt_fin" failed: PQendcopy() failed.
  pg_dump: Error message from server: ERROR: could not access status of transaction 1768711534
  DETAIL: could not open file "/usr/local/pgsql/data/pg_clog/0696": Arquivo ou diretório não encontrado
  pg_dump: The command was: COPY "REIMER".tt_fin (codfil, sequen, tipger, tipnum, codusu, filger, filnum, filusu, seqger, seqnum, datope, vlrmov, seqtrn, codctr, codcus, numdoc, observ, flgfin, chvtrn, atu_em) TO stdout;

  I know this is related to the hardware and not to PostgreSQL, but I don't want to restore the whole database.

  What is the best way to fix this? pg_resetxlog?

  Thanks in advance!

  Reimer


Re: [GENERAL] another seemingly simple encoding question

2006-03-24 Thread John D. Burger
i have a problem matching a utf8 string with a field in a database 
encoded in utf8.


You seem to give all the details of your configuration, but unless I 
misread your message, you don't say what the actual problem is.  Can 
you provide more details?  What exactly doesn't work?


This may not be the issue, but many people don't realize that there are 
sometimes multiple ways to encode what is conceptually the same string 
in UTF8 (or any of the Unicode encodings).  If you do not canonicalize 
your strings using one of the Unicode normalization forms, then 
seemingly identical strings may not match, because they are not 
byte-for-byte identical.


- John D. Burger
  MITRE




Re: [GENERAL] URGENT! could not access status of transaction

2006-03-24 Thread Jim C. Nasby
On Fri, Mar 24, 2006 at 10:51:25AM -0300, Reimer wrote:
 I forgot to mention postgreSQL version: 7.4.6

Had you been running the latest 7.4 release, this probably wouldn't have
happened. There's been a number of data-loss bugs fixed since 7.4.6.

 Thanks!
   - Original Message - 
   From: Reimer 
   To: pgsql-general@postgresql.org 
   Sent: Friday, March 24, 2006 10:41 AM
   Subject: [GENERAL] URGENT! could not access status of transaction
 
 
   Hello,
 
   We had some disk problems here and now pg_dump is complaining with:
 
   pg_dump: ERROR:  could not access status of transaction 1768711534
   DETAIL:  could not open file /usr/local/pgsql/data/pg_clog/0696: Arquivo ou diretório não encontrado
   pg_dump: SQL command to dump the contents of table tt_fin failed: PQendcopy() failed.
   pg_dump: Error message from server: ERROR:  could not access status of transaction 1768711534
   DETAIL:  could not open file /usr/local/pgsql/data/pg_clog/0696: Arquivo ou diretório não encontrado
   pg_dump: The command was: COPY REIMER.tt_fin (codfil, sequen, tipger, tipnum, codusu, filger, filnum, filusu, seqger, seqnum, datope, vlrmov, seqtrn, codctr, codcus, numdoc, observ, flgfin, chvtrn, atu_em) TO stdout;
 
   I know this is related to the hardware and not to PostgreSQL, but I don't want to restore the whole database.
 
   What is the best way to fix this? pg_resetxlog?
 
   Thanks in advance!
 
   Reimer

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] another seemingly simple encoding question

2006-03-24 Thread joseph
The problem is that my string is in UTF-8, because all input is first
converted in PHP with
   $str_out = mb_convert_encoding($str_in, "UTF-8");
and the query, which is like
select wordid from korean_english where word='utf8string';
is returning wordids for words which are not = utf8string.

(In debug mode the above is output as UTF-8 by PHP / the browser
encoding over the web, and then exit; is called, so I just grab the
whole query string from the browser by cutting and pasting.
Running the query from PHP and from psql returns the same bad wordids,
indicating that the encoding stays consistent through the cut-and-paste
operation.)

I don't understand what a unicode normalization form is. The postgres
docs http://www.postgresql.org/docs/8.0/interactive/multibyte.html
say:

Table 20-1. Server Character Sets

  Name    | Description
  --------+-----------------
  UNICODE | Unicode (UTF-8)

so I thought they were the same, and I don't know about unicode
normalization forms.


My question is: why isn't the utf8string in the query returning only
matching, corresponding wordids from the database?

thx.
 
On 2006-03-24 (Fri) at 08:56 -0500, John D. Burger wrote:
  i have a problem matching a utf8 string with a field in a database 
  encoded in utf8.
 
 You seem to give all the details of your configuration, but unless I 
 misread your message, you don't say what the actual problem is.  Can 
 you provide more details?  What exactly doesn't work?
 
 This may not be the issue, but many people don't realize that there
are 
 sometimes multiple ways to encode what is conceptually the same
string 
 in UTF8 (or any of the Unicode encodings).  If you do not
canonicalize 
 your strings using one of the Unicode normalization forms, then 
 seemingly identical strings may not match, because they are not 
 byte-for-byte identical.
 
 - John D. Burger
MITRE
 









Re: [GENERAL] another seemingly simple encoding question

2006-03-24 Thread John D. Burger
This doesn't sound like your problem, but I'll explain the 
normalization issue using Korean as an example, since that seems to be 
your data:  There are codepoints in Unicode both for Hangul and Jamo, 
so a Hangul glyph can be represented either with the single 
corresponding codepoint, or as two or three Jamo codepoints.  A Unicode 
font would display these two alternatives identically.  In any Unicode 
encoding, including UTF8, these two strings would not be byte-for-byte 
identical.  The Unicode normalization forms are four algorithms for 
normalizing the strings in such a way that they do compare identically.
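To make the Hangul/Jamo point concrete, here is a small Python sketch (Python and the specific codepoints are the editor's illustration, not part of the original thread):

```python
import unicodedata

# U+AC00 is the precomposed Hangul syllable GA; U+1100 + U+1161 are the
# conjoining Jamo (choseong kiyeok + jungseong a) that render identically.
precomposed = "\uac00"
decomposed = "\u1100\u1161"

# Codepoint-for-codepoint (and byte-for-byte in UTF-8) the strings differ...
print(precomposed == decomposed)

# ...but normalizing to NFC makes them compare equal.
print(unicodedata.normalize("NFC", decomposed) == precomposed)
```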


Anyway, it sounds like you have the opposite problem, two strings that 
are comparing equal when you think they shouldn't.  I don't know that 
anyone can help you unless you post an actual example of two such 
strings.


- John D. Burger
  MITRE




Re: [GENERAL] Relation 'pg_largeobject' does not exist

2006-03-24 Thread Brandon Keepers
Tom,

On Fri, 2006-03-17 at 23:48 -0500, Tom Lane wrote:
 Brandon Keepers [EMAIL PROTECTED] writes:
  If it makes a difference, when I un-tar the dump file from each failed
  dump, it always has 2937 files in it.
 
 That's pretty consistent with the idea that you're running out of
 locktable space --- you'd expect that to fail at a certain number of
 dumped blobs.
 
 However, this makes me wonder whether you really did successfully
 increase max_connections as I suggested; that should have had at least
 some impact.  7.0 is far enough back that I don't remember exactly where
 one sets the maximum number of backends in that version --- better
 double-check the docs for 7.0.
 
   regards, tom lane


The 7.0 docs revealed that I needed to start up postmaster with -N
300.  I tried to get the admin of the box to do that, but it kept
failing at exactly the same spot.  I ended up just having the admin of
the server send me a copy of the data directory, and I compiled 7.0.3
myself and it worked beautifully.

Thanks a lot for your help!

Brandon




[GENERAL] PostgreSQL support on Redhat Advance Server 2.1

2006-03-24 Thread Leon Pu
Hi all,

I found the newest PostgreSQL binary download on Redhat Advance Server
2.1 is version 7.4.7.

http://www.postgresql.org/ftp/binary/OLD/v7.4.7/rpms/redhat/rhel-es-2.1/

But I want to install newer PostgreSQL in Redhat Advance Server 2.1, at
least version 8.0.4.

Any possibility to install 8.0.4 or newer PostgreSQL?


Best regards,
Leon

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 



Re: [GENERAL] PostgreSQL support on Redhat Advance Server 2.1

2006-03-24 Thread Devrim GUNDUZ
Hi,

On Fri, 2006-03-24 at 07:52 -0800, Leon Pu wrote:

 I found the newest PostgreSQL binary download on Redhat Advance Server
 2.1 is version 7.4.7.
 
 http://www.postgresql.org/ftp/binary/OLD/v7.4.7/rpms/redhat/rhel-es-2.1/
 
 But I want to install newer PostgreSQL in Redhat Advance Server 2.1, at
 least version 8.0.4.
 
 Any possibility to install 8.0.4 or newer PostgreSQL?

None of the RPM package builders have a box running RHEL 2.1, and we
abandoned support for that. In my buildfarm I have a sata disk that RHEL
2.1 cannot recognize:(

Anyway, if you want, I can send you the instructions about how to build
RPMs on your platform.

Regards,

-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PL/php, plPerlNG - http://www.commandprompt.com/




Re: [GENERAL] URGENT! could not access status of transaction

2006-03-24 Thread Tom Lane
Reimer [EMAIL PROTECTED] writes:
 pg_dump: ERROR:  could not access status of transaction 1768711534
 DETAIL:  could not open file /usr/local/pgsql/data/pg_clog/0696: Arquivo ou diretório não encontrado

 How is the best way to fix? pg_resetxlog?

resetxlog won't help: you've got corrupt data in that table.  See the
archives for previous threads about finding and removing corrupt rows.

I concur with the nearby suggestion that you should be running something
newer than 7.4.6, but even with an update you'll still be needing to
clean up this corruption manually :-(
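For the archives: the workaround past threads usually describe for a missing pg_clog segment is to recreate it filled with zeroes so the dump can proceed. This is a hedged, last-resort sketch only: take a filesystem-level backup of the data directory first, and note that the rows touching that segment may still be garbage afterwards.

```shell
# Recreate the missing 256 kB clog segment named in the error, all zeroes,
# so the affected transactions read as never committed.
dd if=/dev/zero of=/usr/local/pgsql/data/pg_clog/0696 bs=256k count=1
```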

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

2006-03-24 Thread Scott Marlowe
On Fri, 2006-03-24 at 02:51, Gábor Farkas wrote:
 Scott Marlowe wrote:
  On Thu, 2006-03-23 at 12:17, Jim Nasby wrote:
  On Mar 22, 2006, at 10:08 PM, Scott Marlowe wrote:
  Now, I shouldn't be able to insert anything in b that's not  
  referencing
  an entry in a.  and I used innodb tables.  and I used ansi SQL, and I
  got no errors.  So how come my data's incoherent three seconds after
  creating the tables the way the spec says should work?  Simple.  MySQL
  only implements foreign keys if you do them this way:
  
  Yep.  I filed the bug report on it.
  
  http://bugs.mysql.com/bug.php?id=13301
  
 
 from the response:
 
   Years ago, to help porting applications from other database brands to 
   MySQL, MySQL was made to accept the syntax even though no real
   constraints were created.

 i hope postgresql will never help me this way.

No kidding.  What bothers me so much about this failure is that there's
no way in the current version to change this behaviour.  Every time
there's a MySQL versus PostgreSQL flamefest on Slashdot or elsewhere, I
see the MySQL folks chiming in with "but the -ansi switch fixes all
those problems".

It doesn't, and there are many other things I've found that the -ansi
switch doesn't fix.

I really really really wish they'd make a version that followed the ANSI
standard more closely, then had a -compatv4 and -compatv3 switch to
make it behave like the older MySQL flavors.

This defaulting to running like an old version, with all its issues, is
one thing that makes MySQL so unattractive to use.  That and the fact
that if you've got a problem, the standard answer nowadays is "buy a
support contract".  Ugh.



[GENERAL] Hibernate Query Question - Get most recent unique rows

2006-03-24 Thread Julie Robinson
Given the two classes at the bottom of this email, I'm having trouble 
coming up with a Hibernate query statement that returns, for each 
distinct quality_control_range, only the most recent row from the 
quality_control_reset table.  Help?


Example:

In table quality_control_reset, there are the following rows:

id   |   timestamp   |   qualitycontrolrange
-
1|  02/23/2006   |   20
2|  02/23/2006   |   6
3|  02/28/2006   |   18
4|  03/01/2006   |   18
5|  03/23/2006   |   12
6|  03/23/2006   |   20

I want the results of the following from the query:

id   |   timestamp   |   qualitycontrolrange
-
2|  02/23/2006   |   6
4|  03/01/2006   |   18
5|  03/23/2006   |   12
6|  03/23/2006   |   20
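In plain SQL, one formulation that returns exactly the four rows above is a correlated subquery keeping the newest row per range. Hedged: it assumes a larger id means a more recent row, as in the sample data; Hibernate can run it as a native query or an equivalent HQL subquery.

```sql
SELECT r.*
FROM quality_control_reset r
WHERE r.id = (SELECT max(r2.id)
              FROM quality_control_reset r2
              WHERE r2.qualitycontrolrange = r.qualitycontrolrange)
ORDER BY r.id;
-- For the sample data this keeps ids 2, 4, 5, and 6.
```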



QualityControlReset.java:

package test;

import java.util.Date;

/**
 * @hibernate.class table="quality_control_reset"
 */
public class QualityControlReset {
    private Long id;   // field was missing from the posted snippet
    private Date date;
    private QualityControlRange qualityControlRange;

    public QualityControlReset() {
        // noop
    }

    /**
     * @hibernate.id generator-class="native" column="id"
     */
    public Long getId() {
        return id;
    }

    public void setId(Long long1) {
        id = long1;
    }

    /**
     * @hibernate.property not-null="true"
     */
    public Date getDate() {
        return date;
    }

    public void setDate(Date date) {
        this.date = date;
    }

    /**
     * @hibernate.many-to-one class="test.QualityControlRange"
     * cascade="none" not-null="true"
     */
    public QualityControlRange getQualityControlRange() {
        return qualityControlRange;
    }

    public void setQualityControlRange(QualityControlRange qualityControlRange) {
        this.qualityControlRange = qualityControlRange;
    }
}


QualityControlRange.java:

package test;

/**
 * @hibernate.class table="quality_control_range"
 */
public class QualityControlRange {
    private Long id;   // field was missing from the posted snippet
    private String code;

    public QualityControlRange() {
    }

    /**
     * @hibernate.id generator-class="native" column="id"
     */
    public Long getId() {
        return id;
    }

    public void setId(Long long1) {
        id = long1;
    }

    /**
     * @hibernate.property
     */
    public String getCode() {
        return code;
    }

    public void setCode(String code) {
        this.code = code;
    }
}



[GENERAL] pgadmin

2006-03-24 Thread Mary Adel
I am trying to set up PgAdmin so that I can connect from my PC to the
server, but I get an error saying:

Error connecting to the server: could not connect to server:
Connection refused
Is the server running on host xxx.xxx.xxx.xxx and accepting
TCP/IP connections on port 5432? 

How can I solve this? Can anyone help?

Thanks,
mary




Re: [GENERAL] pgadmin

2006-03-24 Thread Adrian Klaver
On Friday 24 March 2006 11:13 am, Mary Adel wrote:
 I am trying to set up PgAdmin so that I can connect from my PC to the
 server, but I get an error saying:

 Error connecting to the server: could not connect to server:
 Connection refused
 Is the server running on host xxx.xxx.xxx.xxx and accepting
 TCP/IP connections on port 5432?

 How can I solve this? Can anyone help?

 Thanks,
 mary


The answer depends on what version you are running. You need to look at the 
postgresql.conf file for the server.

On versions prior to 8.0, look for the parameter tcpip_socket and set it to 
true; by default it is set to false.

For 8.0 and above the parameter is listen_addresses, and it can be set to * 
to listen on all IP interfaces on the server; by default it is set to 
localhost, which only allows connections from the machine the server 
resides on. 
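Putting that into concrete (8.0+) terms, a hedged example of the two files involved (the subnet address is illustrative):

```
# postgresql.conf
listen_addresses = '*'          # or a comma-separated list of addresses

# pg_hba.conf -- allow md5-authenticated connections from one subnet
# TYPE  DATABASE  USER  CIDR-ADDRESS      METHOD
host    all       all   192.168.1.0/24    md5
```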
-- 
Adrian Klaver   
[EMAIL PROTECTED]



[GENERAL] basic stored proc/transaction question

2006-03-24 Thread Ben
My understanding is that a stored procedure does an implicit begin/commit when 
it executes. Maybe my brain isn't working so well this morning, because I can't 
figure out how I would do:


begin;
call stored proc;
call another stored proc;
commit;

It seems that the transaction would be committed after the first call.



Re: [GENERAL] Practical limit on number of tables ina single database

2006-03-24 Thread Just Someone
Hi Jim,

 Just make sure you increase max_fsm_relations, and that max_fsm_pages is
 at least  max_fsm_relations, because each relation must get at least
 one page.

I increased it to 4 relations, should I go even higher?


--
Family management on rails: http://www.famundo.com - coming soon!
My development related blog: http://devblog.famundo.com

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] pgadmin

2006-03-24 Thread A. Kretschmer
On 24.03.2006, at 19:13:56, Mary Adel wrote:
 Is the server running on host xxx.xxx.xxx.xxx and accepting
 TCP/IP connections on port 5432? 
 
 how i can solve this 
 can anyone help??

You have a good error-message ;-)

My guess: you should check your pg_hba.conf.
http://www.postgresql.org/docs/current/static/client-authentication.html


HTH, Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] pgadmin

2006-03-24 Thread Tony Caduto

Mary Adel wrote:

I am trying to set PgAdmin so that i can connect from my pc to the
server  but i have error saying :

Error connecting to the server: could not connect to server:
Connection refused
Is the server running on host xxx.xxx.xxx.xxx and accepting
TCP/IP connections on port 5432? 

  
You need to edit postgresql.conf and set listen_addresses from 'localhost' to 
'*' to allow connections on all interfaces, then 
you need to edit the pg_hba.conf file and allow access from all hosts or 
specific hosts.


both of these files can be found in the postgresql data directory.


Hope this helps,

Tony Caduto
http://www.amsoftwaredesign.com
Home of PG Lightning Admin
There is an alternative to pgAdmin III for win32 users.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Hibernate Query Question - Get most recent unique rows

2006-03-24 Thread Julie Robinson

OK.  The following shows how to do this in SQL.  How does this convert into HQL?

select distinct on (qualitycontrolrange) qualitycontrolrange, date, id
from quality_control_reset
order by qualitycontrolrange, date desc;





Julie Robinson wrote:
Given the two classes at the bottom of this email, I'm having trouble 
coming up with a Hibernate query statement that returns a list 
representing all rows in the quality_control_reset table where there is 
only one row for the most recent quality_control_range.  Help?


Example:

In table quality_control_reset, there are the following rows:

id   |   date|   qualitycontrolrange
-
1|  02/23/2006   |   20
2|  02/23/2006   |   6
3|  02/28/2006   |   18
4|  03/01/2006   |   18
5|  03/23/2006   |   12
6|  03/23/2006   |   20

I want the results of the following from the query:

id   |   date|   qualitycontrolrange
-
2|  02/23/2006   |   6
4|  03/01/2006   |   18
5|  03/23/2006   |   12
6|  03/23/2006   |   20



QualityControlReset.java:

package test;

import java.util.Date;

/**
 * @hibernate.class table="quality_control_reset"
 */
public class QualityControlReset {
   private Long id;
   private Date date;
   private QualityControlRange qualityControlRange;

   public QualityControlReset() {
  // noop
   }

   /**
* @hibernate.id generator-class="native" column="id"
*/
   public Long getId() {
  return id;
   }

   public void setId(Long long1) {
  id = long1;
   }

   /**
* @hibernate.property not-null="true"
*/
   public Date getDate() {
  return date;
   }

   public void setDate(Date date) {
  this.date = date;
   }

   /**
* @hibernate.many-to-one class="test.QualityControlRange"
* cascade="none" not-null="true"
*/
   public QualityControlRange getQualityControlRange() {
  return qualityControlRange;
   }

   public void setQualityControlRange(QualityControlRange 
qualityControlRange) {

  this.qualityControlRange = qualityControlRange;
   }
}


QualityControlRange.java:

package test;

/**
 * @hibernate.class table="quality_control_range"
 */
public class QualityControlRange {
   private Long id;
   private String code;

   public QualityControlRange() {
   }

   /**
* @hibernate.id generator-class="native" column="id"
*/
   public Long getId() {
  return id;
   }

   public void setId(Long long1) {
  id = long1;
   }

   /**
* @hibernate.property
*/
   public String getCode() {
  return code;
   }

   public void setCode(String code) {
  this.code = code;
   }
}

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly






---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] basic stored proc/transaction question

2006-03-24 Thread Ian Harding
On 3/24/06, Ben [EMAIL PROTECTED] wrote:
 My understanding is that a stored procedure does an implicit begin/commit when
 it executes. Maybe my brain isn't working so well this morning, because I 
 can't
 figure out how I would do:

 begin;
 call stored proc;
 call another stored proc;
 commit;

 It seems that the transaction would be committed after the first call.

Nope.  Unless you use the new SAVEPOINT stuff, the explicit
transaction is the transaction.  Any error in any function will
rollback the whole thing.  The commit happens at the explicit commit.

Every SQL statement (such as calling a function) runs in an implicit
transaction.  Explicit transactions effectively group these implicit
transactions such that any one failure causes them all to fail.
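As a psql-style sketch (hypothetical functions proc_a() and proc_b(); note that in 8.x a "stored procedure" is a function invoked with SELECT, and SAVEPOINT requires 8.0 or later):

```sql
BEGIN;
SELECT proc_a();          -- runs inside the explicit transaction
SAVEPOINT after_a;        -- optional: mark a partial-rollback point
SELECT proc_b();          -- on error, ROLLBACK TO after_a keeps proc_a's work
COMMIT;                   -- nothing becomes visible to others until here
```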

- Ian

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] basic stored proc/transaction question

2006-03-24 Thread Ben

Well, that's awesome. Thanks!

On Fri, 24 Mar 2006, Ian Harding wrote:


On 3/24/06, Ben [EMAIL PROTECTED] wrote:

My understanding is that a stored procedure does an implicit begin/commit when
it executes. Maybe my brain isn't working so well this morning, because I can't
figure out how I would do:

begin;
call stored proc;
call another stored proc;
commit;

It seems that the transaction would be committed after the first call.


Nope.  Unless you use the new SAVEPOINT stuff, the explicit
transaction is the transaction.  Any error in any function will
rollback the whole thing.  The commit happens at the explicit commit.

Every SQL statement (such as calling a function) runs in an implicit
transaction.  Explicit transactions effectively group these implicit
transactions such that any one failure causes them all to fail.

- Ian



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] libpq vs pqxx

2006-03-24 Thread Seloua Seloua

Hello all,

I am writing a simple C++ program to access my postgresql 8.0 database. 
What is the difference between using the interface of libpq versus the pqxx 
library (use of PGconn, PGresult provided in the /usr/include/libpq-fe.h 
file versus connection, result etc. provided in the /usr/include/pqxx 
directory)?


Thanks for your help!
Jana

_
Powerful Parental Controls Let your child discover the best the Internet has 
to offer.  
http://join.msn.com/?pgmarket=en-capage=byoa/premxAPID=1994DI=1034SU=http://hotmail.com/encaHL=Market_MSNIS_Taglines 
 Start enjoying all the benefits of MSN® Premium right now and get the 
first two months FREE*.



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Build only clients

2006-03-24 Thread Steve Crawford

Chris wrote:

Steve Crawford wrote:
We have a dedicated PostgreSQL server but a variety of client machines 
ranging from soon to be retired SCO 5.0.x to SuSE 10.


What is the recommended method to build/install/deploy only the 
client-side libraries and utilities (psql, pg_dump, etc.) and not the 
server?


We have a development box for each supported client OS which would 
also need headers but none of them need the server.


If I overlooked something in the docs just point me the right direction.

Thanks,
Steve


The INSTALL file in the tar file suggests:

Client-only installation: If you want to install only the client 
applications and interface libraries, then you can use these commands:

gmake -C src/bin install
gmake -C src/include install
gmake -C src/interfaces install
gmake -C doc install



Doh! Sometimes it's better to go home and start the next morning. :) I 
was too busy looking through the makefile for make libraries or make 
utilities, etc. to step back and reread INSTALL.


Of course adding make interfaces etc. would be a convenient and quick 
addition to the makefile...


Cheers,
Steve

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Practical limit on number of tables ina single database

2006-03-24 Thread Just Someone
Hi Jim,

On 3/24/06, Jim Nasby [EMAIL PROTECTED] wrote:
 You want max_fsm_relations to be greater than select count(*) from pg_class 
 where
 relkind in ('i','t') *across all databases*. And you want max_fsm_pages to be 
 bigger than
 that. That's the only way you can be assured that you'll be tracking free 
 space info for
 every table.

So I need something like 2,000,000 in max_fsm_relations. So
max_fsm_pages now needs to be 16 times that? That's how I read the
postgresql.conf comment.

 Taking a look at the last few lines of a vacuumdb -av would also be 
 insightful, but I'm
 pretty certain that will only show what you need right *now*, not what you 
 might need in
 the future.

Thanks!

Bye,

Guy


--
Family management on rails: http://www.famundo.com - coming soon!
My development related blog: http://devblog.famundo.com

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Some pgbench results

2006-03-24 Thread Douglas McNaught
Magnus Naeslund(f) [EMAIL PROTECTED] writes:

 It might seem that I'm selling ext3 or something :) but it's the linux
 filesystem I know best.
 If you want ext3 to perform with large directories, there is an mkfs
 option that enables directory hashing that you can try: -O dir_index.

You can also turn it on for an existing filesystem using 'tune2fs' and
a remount, but it won't hash already-existing large directories--those
will have to be recreated to take advantage of hashing.
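As a sketch (the device and mount point are hypothetical; run as root, and per the above, pre-existing large directories still need to be recreated before they get hashed):

```shell
tune2fs -O dir_index /dev/sdb1   # enable hashed b-tree directories on an existing ext3 fs
mount -o remount /mnt/data       # remount so the feature flag takes effect
```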

-Doug

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Practical limit on number of tables ina single database

2006-03-24 Thread Jim C. Nasby
On Fri, Mar 24, 2006 at 11:15:56AM -0800, Just Someone wrote:
 Hi Jim,
 
 On 3/24/06, Jim Nasby [EMAIL PROTECTED] wrote:
  You want max_fsm_relations to be greater than select count(*) from pg_class 
  where
  relkind in ('i','t') *across all databases*. And you want max_fsm_pages to 
  be bigger than
  that. That's the only way you can be assured that you'll be tracking free 
  space info for
  every table.
 
 So I need something like 2,000,000 in max_fsm_relations. So
 max_fsm_pages now need to be 16 times that? That's how I read the
 postgresql.conf comment.

Hrm... I forgot that FSM allocation is done in terms of CHUNKPAGES,
which is #defined at 16. So yes, you'd need 32M pages to track freespace
for all tables. Given that that's 250GB, I guess it won't work terribly
well... :)

Will any of these tables be very small and not see any real update
activity? If so, you could possibly do without being able to store FSM
info for them. Keeping a close eye on the last few lines of vacuumdb -av
would be key here.

Another possibility is to change CHUNKPAGES in
include/storage/freespace.h to 1. That means you could get by with 2M
pages, which is 'only' 16GB.

Perhaps it would be worth considering some alternatives to how the FSM
works. In particular, it might be worth it to be able to store free
space info for multiple relations on a single page. Or perhaps allow the
backend to tablescan very small tables to look for free space.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] COPY command documentation

2006-03-24 Thread Dave Page
 

 -Original Message-
 From: Jim C. Nasby [mailto:[EMAIL PROTECTED] 
 Sent: 24 March 2006 12:28
 To: Dave Page
 Cc: David Fetter; Oisin Glynn; pgsql general; PostgreSQL Docs
 Subject: Re: [GENERAL] COPY command documentation
 
 But now that stuff gets 'lost' with every new major version. It'd
 probably be better if it was posted somewhere like
 http://www.pervasivepostgres.com/instantkb13/

That's the way it has worked, well, forever.

But yes, more formal user documentation should be stored elsewhere -
that's why the new user documentation system on the main website has
been developed and will be going live soon.

Regards, Dave.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

2006-03-24 Thread Jim C. Nasby
On Fri, Mar 24, 2006 at 10:32:42AM -0600, Scott Marlowe wrote:
   http://bugs.mysql.com/bug.php?id=13301
   
  
  from the response:
  
Years ago, to help porting applications from other database brands to 
MySQL, MySQL was made to accept the syntax even though no real
constraints were created.
 
  i hope postgresql will never help me this way.
 
 No kidding.  What bothers me so much about this failure is that there's
 no way in the current version to change this behaviour.  Every time
 there's a MySQL versus PostgreSQL flamefest on Slashdot or elsewhere, I
 see the MySQL folks chiming in with "but the -ansi switch fixes all
 those problems"
 
 It doesn't, and there are many other things I've found that the -ansi
 switch doesn't fix.
 
Got a list? I'd love to have it as ammo, and I'm sure that Ian at MySQL
Gotchas would love to have it too.

 I really really really wish they'd make a version that followed the ANSI
 standard more closely, then had a -compatv4 and -compatv3 switch to
 make it behave like the older MySQL flavors.
 
 This defaulting to running like an old version, with all its issues is
 one thing that makes MySQL so unattractive to use.  That and the fact
 that if you've got a problem, the standard answer nowadays is "buy a
 support contract".  ugh.

Happen to have any examples of that as well?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Hibernate Query Question - Get most recent unique rows

2006-03-24 Thread Jim C. Nasby
On Fri, Mar 24, 2006 at 12:13:24PM -0600, Julie Robinson wrote:
 OK.  The following shows how to do this in SQL.  How does this convert into HQL?
 
 select distinct on (qualitycontrolrange) qualitycontrolrange, date, id
 from quality_control_reset
 order by qualitycontrolrange, date desc;

That won't do what you described below. I think what you want is this:

SELECT qualitycontrolrange, max(date) FROM quality_control_reset GROUP BY
qualitycontrolrange;

If you need id, just wrap the above in a subquery and join against it.
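Spelled out, that subquery join might look like the following (a sketch using the column names from the example below; it assumes each (qualitycontrolrange, date) pair is unique):

```sql
SELECT r.id, r.date, r.qualitycontrolrange
FROM quality_control_reset r
JOIN (SELECT qualitycontrolrange, max(date) AS date
      FROM quality_control_reset
      GROUP BY qualitycontrolrange) latest
  ON latest.qualitycontrolrange = r.qualitycontrolrange
 AND latest.date = r.date;
```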

 Julie Robinson wrote:
 Given the two classes at the bottom of this email, I'm having trouble 
 coming up with a Hibernate query statement that returns a list 
 representing all rows in the quality_control_reset table where there is 
 only one row for the most recent quality_control_range.  Help?
 
 Example:
 
 In table quality_control_reset, there are the following rows:
 
 id   |   date|   qualitycontrolrange
 -
 1|  02/23/2006   |   20
 2|  02/23/2006   |   6
 3|  02/28/2006   |   18
 4|  03/01/2006   |   18
 5|  03/23/2006   |   12
 6|  03/23/2006   |   20
 
 I want the results of the following from the query:
 
 id   |   date|   qualitycontrolrange
 -
 2|  02/23/2006   |   6
 4|  03/01/2006   |   18
 5|  03/23/2006   |   12
 6|  03/23/2006   |   20
 
 
 
 QualityControlReset.java:
 
 package test;
 
 import java.util.Date;
 
 /**
  * @hibernate.class table=quality_control_reset
  */
 public class QualityControlReset {
private Date date;
private QualityControlRange qualityControlRange;
 
public QualityControlReset() {
   // noop
}
 
/**
 * @hibernate.id generator-class=native column=id
 */
public Long getId() {
   return id;
}
 
public void setId(Long long1) {
   id = long1;
}
 
/**
 * @hibernate.property not-null=true
 */
public Date getDate() {
   return date;
}
 
public void setDate(Date date) {
   this.date = date;
}
 
/**
 * @hibernate.many-to-one class=test.QualityControlRange 
 cascade=none not-null=true
 */
public QualityControlRange getQualityControlRange() {
   return qualityControlRange;
}
 
public void setQualityControlRange(QualityControlRange 
 qualityControlRange) {
   this.qualityControlRange = qualityControlRange;
}
 }
 
 
 QualityControlRange.java:
 
 package test;
 
 /**
  * @hibernate.class table=quality_control_range
  */
 public class QualityControlRange {
private String code;
 
public QualityControlRange() {
}
 
/**
 * @hibernate.id generator-class=native column=id
 */
public Long getId() {
   return id;
}
 
public void setId(Long long1) {
   id = long1;
}
 
/**
 * @hibernate.property
 */
public String getCode() {
   return code;
}
 
public void setCode(String code) {
   this.code = code;
}
 }
 
 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly
 
 
 
 
 
 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?
 
   http://www.postgresql.org/docs/faq
 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] libpq vs pqxx

2006-03-24 Thread Peter Eisentraut
Seloua Seloua wrote:
 I am writing a simple C++ program to access my postgresql 8.0
 database. What is the difference between using the interface of libpq
 versus the pqxx library (use of PGconn, PGresult provided in the
 /usr/include/libpq-fe.h file versus connection, result etc. provided
 in the /usr/include/pqxx directory)?

They are similar in that they are programming interfaces to access 
PostgreSQL databases, but other than that they are completely 
different.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

2006-03-24 Thread Scott Marlowe
On Fri, 2006-03-24 at 13:55, Jim C. Nasby wrote:
 On Fri, Mar 24, 2006 at 10:32:42AM -0600, Scott Marlowe wrote:
http://bugs.mysql.com/bug.php?id=13301

   
   from the response:
   
 Years ago, to help porting applications from other database brands to 
 MySQL, MySQL was made to accept the syntax even though no real
 constraints were created.
  
   i hope postgresql will never help me this way.
  
  No kidding.  What bothers me so much about this failure is that there's
  no way in the current version to change this behaviour.  Every time
  there's a MySQL versus PostgreSQL flamefest on Slashdot or elsewhere, I
  see the MySQL folks chiming in with "but the -ansi switch fixes all
  those problems"
  
  It doesn't, and there are many other things I've found that the -ansi
  switch doesn't fix.
  
 Got a list? I'd love to have it as ammo, and I'm sure that Ian at MySQL
 Gotchas would love to have it too.

Actually, it's probably true for more than half the things on the mysql
gotchas page.  I haven't looked them over in a while, as after
discovering 3 or 4 things you just couldn't fix with the -ansi switch I
kinda gave up on MySQL as anything other than a simple text storage
engine.  While I think it's a pretty decent storage system for text
documents with minimum needs for ref integrity, for anything else it's
the most frustrating database in the world, so it's hard to get
motivated.

  I really really really wish they'd make a version that followed the ANSI
  standard more closely, then had a -compatv4 and -compatv3 switch to
  make it behave like the older MySQL flavors.
  
  This defaulting to running like an old version, with all its issues is
  one thing that makes MySQL so unattractive to use.  That and the fact
  that if you've got a problem, the standard answer nowadays is "buy a
  support contract".  ugh.
 
 Happen to have any examples of that as well?

Only my most recent personal experience, when I was explaining to the
guy from MySQL how frustrating it was that installing MySQL broke my
build of PHP and meant I had to use the mysqli libs, not the mysql
ones.  The answer from the guy at MySQL was that the standard fix was to
buy the commercial version, which is generally an older, stabler
version.

But I'm not going to pay money to see if MAYBE, just maybe, that version
is better.  They certainly haven't won me over with the GPL / Free
version of the database, so why should I have any confidence in them
doing it for money?

MySQL feels less and less like Free Software every time I find a problem
with it, and more and more like dealing with Oracle's morass of tech
support layers to get an answer or a fix.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Some pgbench results

2006-03-24 Thread Magnus Naeslund(k)
Just Someone wrote:
 
 Initialized the data with: pgbench -i -s 100
 Test runs: pgbench -s 100 -t 1 -c 20
 I did 20 runs, removed the first 3 runs from each sample to account
 for stabilization.

Did you re-initialize the test pgbench database between runs?
I get weird results otherwise since some integers gets overflowed in the
test (it doesn't complete the full 1 transactions after the first run).

 Here are the results in tps without connection
 establishing:
 
 FS:     JFS  XFS  EXT3
 Avg:    462  425  319
 Stdev:  104  74   106
 

Could you please tell me what stripe size you have on the raid system?
Could you also share the mkfs and mount options on each filesystem you
tried?

I ran some tests on an somewhat similar system:
A supermicro H8SSL-i-B motherboard with one dual core opteron 165 with
4gb of memory, debian sarge amd64 (current stable) but with a pristine
kernel.org 2.6.16 kernel (there's no debian patches or packages yet).

It has a 3ware 9550 + BBU sata raid card with 6 disks in a raid 10
configuration with 256kb stripe size. I think this results in about
200mb/s raw read performance and about 155mb/s raw write performance (as
in tested with dd:ing a 10gb file back and forth).
I had no separate WAL device/partition, only tweaked postgresql.conf.

I get about 520-530 tps with your pgbench parameters on ext3 but very
poor (order of magnitude) performance on xfs (that's why I ask of your
mkfs parameters).

A hint on using a raided ext3 system is to use the whole block device
instead of partitions to align the data better, and use data=journal with
a big journal. This might seem counter-productive at first (it did to
me) but I increased my throughput a lot when using this.

My filesystem parameters are calculated like this:
stripe=256 # - 256k raid stripe size
bsize=4 # 4k blocksize
bsizeb=$(( $bsize * 1024 )) # in bytes
stride=$(( $stripe / $bsize ))

mke2fs -b $bsizeb -j -J size=400 -m 1 -O sparse_super \
  -T largefile4 -E stride=$stride /dev/sdb

Mounted with: mount -t ext3 -o data=journal,noatime /dev/sdb /mnt/test8

I'm a little surprised that I can get more pgbench performance out of my
system since you're using 10K scsi disks. Please try the above settings
and see if it helps you...

I've not run so many tests yet, I'll do some more after the weekend...

Regards,
Magnus


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] How Using new created DB

2006-03-24 Thread LJJGIS

Connect to postgres with

c:\psql.exe  -h localhost -p 5432 postgres postgres

And then 
   postgres=# create database mydb;

Then I try to connect to mydb with

  C:\ psql mydb
After trying many passwords, including the postgres password and the OS user
password, it reports ** psql: FATAL:  password authentication failed for user
Administrator **
OR 
  C:\psql -h localhost -p 5432 mydb mydb
After trying many passwords, it again reports ** psql: FATAL:  password
authentication failed for user mydb **

Could someone here help to illustrate how to connect to a newly created
database?




--
View this message in context: 
http://www.nabble.com/How-Using-new-created-DB-t1334225.html#a3564598
Sent from the PostgreSQL - general forum at Nabble.com.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] ADO.NET ExecuteReader returns no rows

2006-03-24 Thread roman . motyka
Hello,

Sometimes ExecuteReader doesn't return rows; if I repeat the method, it
does!

I've got code like this:

using (NpgsqlDataReader readFrames = command.ExecuteReader())
{
    if (!readFrames.HasRows)
    {
        System.Diagnostics.Debugger.Break();
    }

    while (readFrames.Read())
    {
        currentMaxDate = (DateTime)readFrames["idvideodate"];
        mediaData.Add((byte[])readFrames["mediabinary"]);
    }

    readFrames.Close();
}

That part of the code executes multiple times and, sooner or later, I
get a reader without rows. When I execute the same statement again with
the same parameters, ExecuteReader() returns rows (as it should).

The problem occurs in both versions: 0.71 and 1.0 beta 2.
I've also tried the commercial driver from Core Lab, with the same
result.
PostgreSQL version 8.1.2.

The field idvideodate is Timestamp and mediabinary is bytea.

Probably it's a problem not with the ADO.NET driver but with
PostgreSQL, but how to cope with that?

Roman.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] ADO.NET ExecuteReader returns no rows

2006-03-24 Thread roman . motyka
I've compared Debug log between correct and wrong activity:
Good activity:
2006-03-24 16:30:36 2632Debug   String written: select * from
getvideodata(2::int4, 1::int4, '2006-03-24 16:30:31.40'::timestamp,
1::int4) AS (idvideodate timestamp, mediabinary bytea).
2006-03-24 16:30:36 2632Debug   Entering
NpgsqlState.ProcessBackendResponses()
2006-03-24 16:30:36 2632Debug   RowDescription message received from
server.
2006-03-24 16:30:36 2632Debug   Entering
NpgsqlRowDescription.ReadFromStream_Ver_3()
2006-03-24 16:30:36 2632Debug   Entering PGUtil.ReadString()
2006-03-24 16:30:36 2632Debug   String read: idvideodate.
2006-03-24 16:30:36 2632Debug   Entering PGUtil.ReadString()
2006-03-24 16:30:36 2632Debug   String read: mediabinary.
2006-03-24 16:30:36 2632Debug   DataRow message received from server.
2006-03-24 16:30:36 2632Debug   Entering NpgsqlAsciiRow.NpgsqlAsciiRow()
2006-03-24 16:30:36 2632Debug   Entering
NpgsqlAsciiRow.ReadFromStream_Ver_3()
2006-03-24 16:30:36 2632Debug   Entering
NpgsqlTypesHelper.ConvertBackendStringToSystemType()
2006-03-24 16:30:37 2632Debug   Entering
NpgsqlTypesHelper.ConvertBackendStringToSystemType()
2006-03-24 16:30:38 2632Debug   Entering PGUtil.ReadString()
2006-03-24 16:30:38 2632Debug   String read: SELECT.
2006-03-24 16:30:38 2632Debug   CompletedResponse message from Server:
SELECT.
2006-03-24 16:30:38 2632Debug   ReadyForQuery message received from
server.
2006-03-24 16:30:38 2632Debug   Entering PGUtil.ReadString()
2006-03-24 16:30:38 2632Debug   String read: I.
...
Wrong activity:
2006-03-24 16:30:38 2632Debug   String written: select * from
getvideodata(2::int4, 1::int4, '2006-03-24 16:30:35.40'::timestamp,
1::int4) AS (idvideodate timestamp, mediabinary bytea).
2006-03-24 16:30:38 2632Debug   Entering
NpgsqlState.ProcessBackendResponses()
2006-03-24 16:30:38 2632Debug   RowDescription message received from
server.
2006-03-24 16:30:38 2632Debug   Entering
NpgsqlRowDescription.ReadFromStream_Ver_3()
2006-03-24 16:30:38 2632Debug   Entering PGUtil.ReadString()
2006-03-24 16:30:38 2632Debug   String read: idvideodate.
2006-03-24 16:30:38 2632Debug   Entering PGUtil.ReadString()
2006-03-24 16:30:38 2632Debug   String read: mediabinary.
2006-03-24 16:30:38 2632Debug   Entering PGUtil.ReadString()
2006-03-24 16:30:38 2632Debug   String read: SELECT.
2006-03-24 16:30:38 2632Debug   CompletedResponse message from Server:
SELECT.
2006-03-24 16:30:38 2632Debug   ReadyForQuery message received from
server.
2006-03-24 16:30:38 2632Debug   Entering PGUtil.ReadString()
2006-03-24 16:30:38 2632Debug   String read: I.
2006-03-24 16:30:38 2632Debug   Entering NpgsqlState.ChangeState()
2006-03-24 16:30:38 2632Debug   Entering
NpgsqlCommand.UpdateOutputParameters()
2006-03-24 16:30:38 2632Debug   Get NpgsqlCommand.Connection
2006-03-24 16:30:48 2632Debug   Entering NpgsqlDataReader.Read()
2006-03-24 16:30:48 2632Debug   Entering NpgsqlDataReader.Dispose()
2006-03-24 16:30:48 2632Debug   Entering NpgsqlConnection.Close()


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] When does a check constraint execute?

2006-03-24 Thread Edmund.Bacon
I've been working with a time travel table as discussed in "Developing
Time-Oriented Database Applications in SQL" by Richard Snodgrass.

I attempted to build a constraint for a table that disallowed
overlapping time intervals as follows:

test=# create table test_dates (from_date date, to_date date);
CREATE TABLE
test=# create or replace function no_overlaps()
test-# returns boolean
test-# language sql
test-# as
test-# $$ select not exists
test$# (select * from test_dates t1
test$#   where 1 < ( select count(*) from test_dates t2
test$#   where (t1.from_date, t1.to_date) overlaps
test$#(t2.from_date, t2.to_date)) )$$;
CREATE FUNCTION

Now lets test our function:

test=# insert into test_dates values (date 'Jan 1, 2006', date 'Jan 31, 2006');
INSERT 0 1
test=# insert into test_dates values (date 'Feb 1, 2006', date 'Mar 1, 2006');
INSERT 0 1
test=# 

This should return true

test=# select no_overlaps();
 no_overlaps 
-
 t
(1 row)

Good, the two intervals do not overlap.
Let's add an overlapping interval:

test=# insert into test_dates values (date 'Jan 15, 2006', date 'Feb 15, 2006');
INSERT 0 1

test=# select no_overlaps();
 no_overlaps 
-
 f
(1 row)

Good. Now let's remove the overlapping interval and add a check
constraint to test_dates:

test=# delete from test_dates where from_date = date 'Jan 15, 2006';
DELETE 1 
test=# alter table test_dates add constraint check_overlaps 
check(no_overlaps());
ALTER TABLE
test=# insert into test_dates values (date 'Jan 15, 2006', date 'Feb 15, 2006');
INSERT 0 1

What? The check constraint should have prevented that insert, shouldn't it?

Let's make sure we haven't messed up.
 
test=# select no_overlaps();
 no_overlaps 
-
 f
(1 row)

Nope. It looks like the check() constraint is being fired before the
insert occurs.  I'm wondering if that's correct?  I have tried this on
SQL Server 2000/2005 and both of those seem to fire the check constraint
after the insert.

I do not know whether this is a bug or not.  I suspect that what's
really happening is that I'm abusing CHECK constraints, and that this
should more properly be handled by a trigger.
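For what it's worth, a rough trigger-based sketch of the same rule (assumes the test_dates table above and that plpgsql is installed; INSERT only, and it does not protect against concurrent inserts without extra locking):

```sql
CREATE OR REPLACE FUNCTION trg_no_overlaps() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- In a BEFORE INSERT trigger the new row is not yet in the table,
    -- so any overlap found is with a previously existing interval.
    IF EXISTS (SELECT 1 FROM test_dates t
               WHERE (t.from_date, t.to_date) OVERLAPS
                     (NEW.from_date, NEW.to_date)) THEN
        RAISE EXCEPTION 'interval overlaps an existing row';
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER no_overlaps_trg BEFORE INSERT ON test_dates
    FOR EACH ROW EXECUTE PROCEDURE trg_no_overlaps();
```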

Edmund



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] inherit tables from db

2006-03-24 Thread nik600
hi

i am considering porting an important web application to postgres.
this application is written in php, and i reuse the same code for many
customers, in order to keep a clean structure and simple updates...

now i have one code base and many databases in mysql...

i know that with postgres i can inherit some properties... can i have
a main database, called A, and then many databases, Customer1,
Customer2, Customer3, such that if i make a change in A the change is
replicated to Customer1, 2 and 3?

for example:

MAIN DB
- table 1
- table 2
- table 3
-- -- CUSTOMER DB 1
-- -- - table 1
-- -- - table 2
-- -- - table 3
-- -- CUSTOMER DB 2
-- -- - table 1
-- -- - table 2
-- -- - table 3
-- -- CUSTOMER DB 3
-- -- - table 1
-- -- - table 2
-- -- - table 3

if i add table4 to MAIN DB automatically i got:


MAIN DB
- table 1
- table 2
- table 3
- table 4
-- -- CUSTOMER DB 1
-- -- - table 1
-- -- - table 2
-- -- - table 3
-- -- - table 4
-- -- CUSTOMER DB 2
-- -- - table 1
-- -- - table 2
-- -- - table 3
-- -- - table 4
-- -- CUSTOMER DB 3
-- -- - table 1
-- -- - table 2
-- -- - table 3
-- -- - table 4

if i add a table to CUSTOMER DB 2 it is added only at db2...


is it possible? thanks
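For what it's worth, PostgreSQL's inheritance does not quite do this: it works between tables inside a single database (schemas can stand in for the per-customer databases), and adding a *column* to a parent propagates to its children, but adding a whole new table does not. A rough sketch of what is and isn't automatic (schema and table names here are hypothetical):

```sql
-- Sketch only: inheritance works within ONE database,
-- e.g. one schema per customer.
CREATE SCHEMA main;
CREATE SCHEMA customer1;

CREATE TABLE main.table1 (id integer, payload text);

-- customer1.table1 gets all of main.table1's columns:
CREATE TABLE customer1.table1 () INHERITS (main.table1);

-- A column added to the parent appears in the child as well:
ALTER TABLE main.table1 ADD COLUMN created date;

-- But a brand-new table (table4) must still be created explicitly
-- in each customer schema; inheritance does not do that for you.
```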

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] How Using new created DB

2006-03-24 Thread Scott Marlowe
On Thu, 2006-03-23 at 20:18, LJJGIS wrote:
 Connect to postgres with
 
 c:\psql.exe  -h localhost -p 5432 postgres postgres
 
 And then 
postgres=# create database mydb;
 
 Then I try to connect to mydb with
 
   C:\ psql mydb
 After trying many passwords, including the postgres password and the OS
 user password, it reports ** psql: FATAL:  password authentication failed
 for user Administrator **
 OR 
   C:\psql -h localhost -p 5432 mydb mydb
 After trying many passwords, including the postgres password and the OS
 user password, it reports ** psql: FATAL:  password authentication failed for user mydb  **
 
 Could someone here help to illustrate how to connect to a newly created
 database

Is this on the same machine(s)?  I.e., are you running psql on the same
machine (client) against the same machine (server) for both examples?

Also, note that in the first example, you're connecting via tcp/ip with
-h localhost -p 5432, but in the second example you're not specifying
those things.  Have you tried changing ONLY the database name and
specifying the username postgres?

PostgreSQL has an internal set of names it uses.  They don't come from
the OS.  You need to use the create user and alter user commands to
create and change those.
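For example, a minimal sketch (the role name and passwords are placeholders):

```sql
-- Create a database user matching the OS account you connect as,
-- so "psql mydb" (which defaults to your OS user name) can log in:
CREATE USER administrator WITH PASSWORD 'choose-a-password';

-- Or change an existing user's password:
ALTER USER administrator WITH PASSWORD 'new-password';

-- Optionally make that user the owner of the new database:
ALTER DATABASE mydb OWNER TO administrator;
```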

It's all in the docs at:
http://www.postgresql.org/docs/8.1/static/sql-createuser.html

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] UPDATE

2006-03-24 Thread German Raul Hoyos Parravicino
Hello everybody,

I have a problem... I am writing a program using ECPG. I would like to know the number of rows updated by the statement I am writing:

 EXEC SQL UPDATE TABLE1 SET STATUS = :t_status WHERE STATUS = :t_status_pendiente;

Maybe using sqlca.??? Where can I find the answer?

Thanks a lot for your help!!!

--
Raul Hoyos Parravicino


Re: [GENERAL] When does a check constraint execute?

2006-03-24 Thread Tom Lane
[EMAIL PROTECTED], [EMAIL PROTECTED] writes:
 what? The check constraint should have prevented that insert, shouldn't it?

No, because a check constraint is only designed to check values within a
row.  It's applied to the proposed row before insertion.

 I do not know whether this is a bug or not?  I suspect that what's
 really happening is that I'm abusing CHECK constraints,

Yup.  The main problem with a CHECK constraint that does sub-selects is
that alterations to other rows could make it no-longer-true, but the
check won't be reapplied to discover that.  It's only semantically
consistent if you restrict the CHECK to checking the values of the row
itself.

SQL99 considers this an optional feature:

 1) Without Feature F671, Subqueries in CHECK constraints, the
search condition contained in a check constraint definition
shall not contain a subquery.

We don't support this, and in fact disallow you from writing such a
thing.  You can make an end run around the error check by hiding the
subquery in a function (as you did), but that isn't going to result
in correct behavior.  At least not without a lot more logic than this
function has got.
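A trigger is the more conventional tool for this kind of cross-row rule. A rough sketch (using 8.0+ dollar quoting; note that without additional locking this is still racy under concurrent inserts, and an UPDATE version would also need to exclude the row being updated from the comparison):

```sql
-- Sketch of a trigger-based alternative to the CHECK constraint.
-- Unlike the CHECK, this sees each proposed row (NEW) and can
-- compare it against the rows already in the table.
CREATE OR REPLACE FUNCTION reject_overlaps() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  IF EXISTS (SELECT 1 FROM test_dates t
             WHERE (t.from_date, t.to_date) OVERLAPS
                   (NEW.from_date, NEW.to_date)) THEN
    RAISE EXCEPTION 'interval (%, %) overlaps an existing row',
                    NEW.from_date, NEW.to_date;
  END IF;
  RETURN NEW;
END;
$$;

CREATE TRIGGER test_dates_no_overlap
  BEFORE INSERT ON test_dates
  FOR EACH ROW EXECUTE PROCEDURE reject_overlaps();
```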

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Visual Fox Pro clients using MySQL or PostgreSQL back-end wanted for book feedback

2006-03-24 Thread MargaretGillon

Publisher Whil Hentzen has been publishing
Visual FoxPro books for many years. Whil mentors developers who want to
move from Windows to open-source platforms. He has published several books
to help users and administrators transition to Linux. Whil has also fought
with Microsoft about developers' right to develop projects in Visual FoxPro
that run on Linux using WINE. Whil is looking for programmers who use Visual
FoxPro with MySql, PostgreSQL, and other open source databases to give
him feedback on a new book he is publishing. Details and contact information
for Whil are below.

Thank you,
Margaret Gillon, IS Dept., Chromalloy
Los Angeles, ext. 297

*** *** *** *** *** *** *** *** ***
*** *** *** *** *** *** *** *** *** *** ***

From Whil's email:


2. VFP and MySQL

Just about all my VFP development over the past seven years has been with
a SQL backend, and over the past four, that backend has usually been MySQL.
It's the perfect complement to VFP - inexpensive, fast, and with 5.0's
introduction of stored procedures, triggers, views and transactions,
there are precious few VFP applications out there that need to spend the
'big' bucks for the 'big' backends. Even better, it'll run on darn near
any OS that you have laying around - Windows, Linux, Mac, and others.
It's worth a look, to say the least.

Anyway, I'm doing a book on VFP and MySQL this spring. I'm putting
the finishing touches on the first six chapters, covering
installation and initial connect from VFP, and still have room for
a couple more folks who want to provide pre-release feedback.

I'm particularly interested in hearing from folks who have used either
MySQL or another back-end, in order to provide a 'reality check' and
alternative points of view.

If you're interested in previewing it as it's being written, drop me a
line: [EMAIL PROTECTED], with 'MySQL' in the subject line, and a note about
what experience, if any, you've got in the C/S or MySQL arenas.

Re: [GENERAL] Advantages of PostgreSQL over MySQL 5.0

2006-03-24 Thread Guy Fraser
On Fri, 2006-24-03 at 14:53 -0600, Scott Marlowe wrote:
...snip...
 Only my most recent personal experience, when I was explaining to the
 guy from MySQL how frustrating it was that installing MySQL broke my
 build of PHP and meant I had to use the mysqli libs, not the mysql
 ones.  The answer from the guy at MySQL was that the standard fix was to
 buy the commercial version, which is generally an older, stabler
 version.

So is this older, more stable version ANSI compliant, or is it broken
like the free version?


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Between !

2006-03-24 Thread Carlos Rivas
I need to know if anyone can tell me whether I can use the BETWEEN
construct in a SELECT in PostgreSQL, since I have
tried it and it returns nothing.

Example:

SELECT * FROM NOMBRES WHERE APELLIDO BETWEEN 'R' AND 'RO';


Thanks.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Practical limit on number of tables ina single database

2006-03-24 Thread Just Someone
Hi Jim,

Actually, most tables are VERY small. On each schema I would expect 4 -
5 tables to go over 1000 records. The rest will be much smaller, with
most at under 100. And aside from 2-3 tables, the activity will be
pretty low (few records a day at the most extreme).

Can I use this to optimize the fsm part in a different way?

Bye,

Guy.

On 3/24/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
 On Fri, Mar 24, 2006 at 11:15:56AM -0800, Just Someone wrote:
  Hi Jim,
 
  On 3/24/06, Jim Nasby [EMAIL PROTECTED] wrote:
   You want max_fsm_relations to be greater than select count(*) from 
   pg_class where
   relkind in ('i','t') *across all databases*. And you want max_fsm_pages 
   to be bigger than
   that. That's the only way you can be assured that you'll be tracking free 
   space info for
   every table.
 
  So I need something like 2,000,000 in max_fsm_relations. So
  max_fsm_pages now need to be 16 times that? That's how I read the
  postgresql.conf comment.

 Hrm... I forgot that FSM allocation is done in terms of CHUNKPAGES,
 which is #defined at 16. So yes, you'd need 32M pages to track freespace
 for all tables. Given that that's 250GB, I guess it won't work terribly
 well... :)

 Will any of these tables be very small and not see any real update
 activity? If so, you could possibly do without being able to store FSM
 info for them. Keeping a close eye on the last few lines of vacuumdb -av
 would be key here.

 Another possibility is to change CHUNKPAGES in
 include/storage/freespace.h to 1. That means you could get by with 2M
 pages, which is 'only' 16GB.

 Perhaps it would be worth considering some alternatives to how the FSM
 works. In particular, it might be worth it to be able to store free
 space info for multiple relations on a single page. Or perhaps allow the
 backend to tablescan very small tables to look for free space.
 --
 Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
 Pervasive Software  http://pervasive.com    work: 512-231-6117
 vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



--
Family management on rails: http://www.famundo.com - coming soon!
My development related blog: http://devblog.famundo.com

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Practical limit on number of tables ina single database

2006-03-24 Thread Tom Lane
Just Someone [EMAIL PROTECTED] writes:
 Actually, most table are VERY small. On each schema I would expect 4 -
 5 tables to go over 1000 records. The rest will be much smaller, with
 most at under 100. And aside from 2-3 tables, the activity will be
 pretty low (few records a day at the most extreme).

 Can I use this to optimize the fsm part in a different way?

Seems like a one-page table hardly needs an FSM entry at all: there's
only one place to look for free space anyway.  Maybe we should have a
threshold for the minimum size a table must be before it gets an FSM
entry.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Between !

2006-03-24 Thread Dann Corbit
Yes, you can use the BETWEEN predicate in PostgreSQL. If you are not
getting the results you expect, please explain why you believe so. There
are no known functionality problems with BETWEEN.
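One thing worth checking is the inclusive, order-sensitive semantics of BETWEEN; a quick sketch with hypothetical values:

```sql
-- x BETWEEN 'R' AND 'RO' is just x >= 'R' AND x <= 'RO' (inclusive).
-- Under simple byte-wise ordering, 'RIVAS' sorts before 'RO' and so
-- matches, while 'RODRIGUEZ' sorts after 'RO' and does not.  Results
-- can differ under other locales/collations, which is a common reason
-- for "no rows" surprises.
SELECT 'RIVAS'     BETWEEN 'R' AND 'RO';   -- true
SELECT 'RODRIGUEZ' BETWEEN 'R' AND 'RO';   -- false
SELECT 'Q'         BETWEEN 'R' AND 'RO';   -- false: sorts before 'R'
```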

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Carlos Rivas
 Sent: Friday, March 24, 2006 3:34 PM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Between !
 
 I need to know if anyone can tell me whether I can use the BETWEEN
 construct in a SELECT in PostgreSQL, since I have tried it and it
 returns nothing.
 
 Example:
 
 SELECT * FROM NOMBRES WHERE APELLIDO BETWEEN 'R' AND 'RO';
 
 
 Thanks.
 
 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Problems running regression tests

2006-03-24 Thread Karl O. Pinc

FYI,

Postgres 8.0.7
OS Centos 4.2 (Final)

Rebuilt and installed the rpms.

To test, I copied /usr/lib/pgsql/test/regress/
to my home directory.  However,

gmake check

always produces

/usr/bin/chcon: can't apply partial context to unlabeled file  
testtablespace

/usr/bin/chcon: can't apply partial context to unlabeled file results

The Makefile has

[ -x /usr/bin/chcon ] && /usr/bin/chcon -t postgresql_db_t \
testtablespace results


This does not seem to be a good way to test for SELinux.  SELinux
is not enabled in my kernel, but chcon seems to be included in
the coreutils rpm.  As a result, AFAIK, chcon will never work
and the make will always fail.

Karl [EMAIL PROTECTED]
Free Software:  You don't pay back, you pay forward.
 -- Robert A. Heinlein


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] URGENT! could not access status of transaction

2006-03-24 Thread Reimer

ok,

I fixed it using the dd command, and pg_dump is not complaining anymore,
but can I trust my database now?


Thanks in advance!

- Original Message - 
From: Tom Lane [EMAIL PROTECTED]

To: Reimer [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Friday, March 24, 2006 1:11 PM
Subject: Re: [GENERAL] URGENT! could not access status of transaction



Reimer [EMAIL PROTECTED] writes:

pg_dump: ERROR:  could not access status of transaction 1768711534
DETAIL:  could not open file /usr/local/pgsql/data/pg_clog/0696:
Arquivo ou diretório não encontrado (file or directory not found)



How is the best way to fix? pg_resetxlog?


resetxlog won't help: you've got corrupt data in that table.  See the
archives for previous threads about finding and removing corrupt rows.

I concur with the nearby suggestion that you should be running something
newer than 7.4.6, but even with an update you'll still be needing to
clean up this corruption manually :-(

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] PostgreSQL support on Redhat Advance Server 2.1

2006-03-24 Thread Leon Pu
--- Devrim GUNDUZ [EMAIL PROTECTED] wrote:

 Hi,
 
 On Fri, 2006-03-24 at 07:52 -0800, Leon Pu wrote:
 
  I found the newest PostgreSQL binary download on Redhat Advance
 Server
  2.1 is version 7.4.7.
  
 

http://www.postgresql.org/ftp/binary/OLD/v7.4.7/rpms/redhat/rhel-es-2.1/
  
  But I want to install newer PostgreSQL in Redhat Advance Server
 2.1, at
  least version 8.0.4.
  
  Any possibility to install 8.0.4 or newer PostgreSQL?
 
 None of the RPM package builders have a box running RHEL 2.1, and we
 abandoned support for that. In my buildfarm I have a sata disk that
 RHEL
 2.1 cannot recognize:(
 
 Anyway, if you want, I can send you the instructions about how to
 build
 RPMs on your platform.

Hi Devrim,

is there any article or announcement about abandoning support on RHEL 2.1
which I can refer to?

Please send me the instructions to build RPMs on RHEL 2.1.

Anyway, is there any potential problem with installing an unofficial
PostgreSQL build?

Thanks a lot for your reply.


Best regards,
Leon

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] PostgreSQL support on Redhat Advance Server 2.1

2006-03-24 Thread Joshua D. Drake


Hi Devrim,

is there any article or annoucement about abandon support on REEL2.1
which I can refer to?

Please send me the instructions to build RPMs on RHEL 2.1.

Anyway, is there any potential problem with installing an unofficial
PostgreSQL build?


No, there is not. Considering your ancient version of
RedHat, I actually suggest that you install from source.


Sincerely,

Joshua D. Drake


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL support on Redhat Advance Server 2.1

2006-03-24 Thread Tom Lane
Joshua D. Drake [EMAIL PROTECTED] writes:
 No there is not. I actually suggest, considering your ancient version of 
 RedHat that you install from source.

Better yet, install a more modern Red Hat release ;-)

AS2.1 is very far into the legacy part of its support cycle.  You
shouldn't even be thinking of installing new applications on top of it
--- the only reason to be running it at all is this server works and
we don't want to fix what's not broken.  You won't get any support from
Red Hat for running PG 8.anything on AS2.1, so why would you choose that
combination of releases?

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Problems running regression tests

2006-03-24 Thread Tom Lane
Karl O. Pinc [EMAIL PROTECTED] writes:
 /usr/bin/chcon: can't apply partial context to unlabeled file testtablespace
 /usr/bin/chcon: can't apply partial context to unlabeled file results

 The Makefile has
 [ -x /usr/bin/chcon ] && /usr/bin/chcon -t postgresql_db_t testtablespace 
 results

Does it work better if you change that to

[ -x /usr/bin/chcon ] && /usr/bin/chcon -u user_u -r object_r -t 
postgresql_db_t testtablespace results

?  I recall we had a similar issue in the initscript ...

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Query plans for plpgsql triggers

2006-03-24 Thread Eric B. Ridge
I've found a few performance issues with an internal database  
application and I'm sure it's related to my misunderstanding of how  
and when queries are planned when used in a plpgsql function.  This  
is against Postgres 7.4.


For example, suppose this function is defined as a per-statement  
update trigger on some_table:


CREATE OR REPLACE FUNCTION foo() RETURNS TRIGGER LANGUAGE 'plpgsql' AS
'DECLARE
   my_id int8;
BEGIN
   my_id := 12;  -- some arbitrary value
   UPDATE some_other_table SET field = ''value'' WHERE id = my_id::int8;
   RETURN NULL;
END;'

The above function is (obviously) a stripped down version of  
something real, but it accurately represents the basics.


When is the UPDATE statement inside foo() planned?  When the trigger  
is first created, or when it's first used per backend, or every time  
it's used per backend?  It's gotta be one of the former, because it  
sure ain't the latter.


I dunno what plan is being generated, but it's gotta be using a  
sequential scan.  some_other_table contains roughly 2 million rows  
and the id column is uniquely indexed, yet the UPDATE takes 35-40  
seconds when run via the trigger, but only milliseconds if the  
equivalent UPDATE statement is played into psql.  The database is  
freshly vacuumed and analyzed.  And of course an EXPLAIN via psql  
shows an index scan.


If I turn on statement logging I can plainly see that the embedded  
UPDATE statement is the query taking a long time.  Additionally, a  
gdb stacktrace of the backend shows Postgres is somewhere inside the  
pl_pgsql call handlers.


Changing the UPDATE to be:
	EXECUTE ''UPDATE some_other_table SET field = ''''value'''' WHERE id  
= '' || my_id || ''::int8'';


seems to solve the bad planning problem, but this sure is ugly.

Maybe I missed it in the 7.4 docs, but I can't find any information  
on query planning for plpgsql functions.  Any insight into how this  
works would be greatly appreciated.  Also, any mention of how PG  
8.1.3 differs in this regard would also be handy.


thanks!

eric

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Query plans for plpgsql triggers

2006-03-24 Thread Tom Lane
Eric B. Ridge [EMAIL PROTECTED] writes:
 When is the UPDATE statement inside foo() planned?  When the trigger  
 is first created, or when it's first used per backend, or every time  
 it's used per backend?

First use per backend, ignoring corner cases such as replacing the
function definition.

 I dunno what plan is being generated, but it's gotta be using a  
 sequential scan.

The issue is probably that the planner is seeing a parameterized
query.  Try this:

prepare foo(int8) as update some_other_table SET field = 'value' WHERE id = $1;
explain execute foo(42);

and see what plan you get.  If the id field has sufficiently
discouraging statistics then the planner may think that a seqscan
is the safest plan.  In a normal query where you're comparing id
to a constant, the planner can see whether the constant matches any
of the most common values for the column --- if it doesn't then an
indexscan is a good plan.

If you really want a replan every time, you can get it by using
EXECUTE.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Query plans for plpgsql triggers

2006-03-24 Thread Eric B. Ridge

On Mar 24, 2006, at 11:39 PM, Tom Lane wrote:


The issue is probably that the planner is seeing a parameterized
query.  Try this:

prepare foo(int8) as update some_other_table SET field = 'value'  
WHERE id = $1;

explain execute foo(42);


I should have mentioned that while the UPDATE statement in the
trigger function really is as simple as the above, some_other_table
is actually a view with the requisite ON UPDATE DO INSTEAD rule:

CREATE OR REPLACE RULE some_other_table_update AS ON UPDATE TO
some_other_table DO INSTEAD
(
   UPDATE some_other_real_table_1 SET field = NEW.field WHERE id = OLD.id::int8;
   UPDATE some_other_real_table_2 SET field = NEW.field WHERE id = OLD.id::int8;
   ...
   UPDATE some_other_real_table_39 SET field = NEW.field WHERE id = OLD.id::int8;
   UPDATE some_other_real_table SET field = NEW.field WHERE id = OLD.id::int8;
);

The explain for your prepare foo(42) suggestion shows the correct  
index scans for each of the 40 actual tables being updated by the RULE.



and see what plan you get.  If the id field has sufficiently
discouraging statistics then the planner may think that a seqscan
is the safest plan. In a normal query where you're comparing id
to a constant, the planner can see whether the constant matches any
of the most common values for the column --- if it doesn't then an
indexscan is a good plan.


the id column, for *each* of the tables referenced in the RULE is  
defined as

   id int8 NOT NULL PRIMARY KEY

No value should be any more common than the other.

Could the fact that some_other_table is a view influence the  
planner in some way?


If you really want a replan every time, you can get it by using  
EXECUTE.


Indeed.  If big-ugly-updateable-views can't influence the planner,  
what positive impact would changing the statistics threshold have on  
a primary key column?


As an aside, has there ever been any discussion/thought about some
ability to force all plpgsql queries to be dynamically planned without
the need to explicitly wrap them in EXECUTE?  Maybe something like:


	CREATE OR REPLACE FUNCTION foo() LANGUAGE 'plpgsql' OPTIONS  
'dynamic_plans=on' AS '';


or maybe a plpgsql variant named 'plpgsql_dont_preplan_my_queries'?

Something like the above would at least make for prettier function  
sources.


thanks for your time.

eric

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Query plans for plpgsql triggers

2006-03-24 Thread Tom Lane
Eric B. Ridge [EMAIL PROTECTED] writes:
 I should have mentioned that while the UPDATE statement in the  
 trigger function really is as simple as the above, some_other_table
 is actually a view with the requisite ON UPDATE DO INSTEAD rule:

This is the sort of detail that you really should not omit.

If you were using something newer than 7.4 then I'd ask for a complete
test case so I could look into improving the behavior --- but as it is,
I'd first suggest upgrading and seeing if the problem is already fixed.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Query plans for plpgsql triggers

2006-03-24 Thread Eric B. Ridge

On Mar 25, 2006, at 12:24 AM, Tom Lane wrote:


This is the sort of detail that you really should not omit.


Yeah, it didn't even occur to me until I ran the explain execute foo(42)
thing you suggested.  We've been using these update rules for
so long that I just think of the views as regular tables (rules are
great, btw).



If you were using something newer than 7.4 then I'd ask for a complete
test case so I could look into improving the behavior --- but as it  
is,
I'd first suggest upgrading and seeing if the problem is already  
fixed.


We're working towards an upgrade to 8.1.3, and a new schema.  Both of  
which will likely provide all sorts of new behaviors.


I'm now curious if complex rules can influence the planner in  
negative ways.  I don't see how they could -- I've never seen  
unexpected EXPLAIN output via psql.  However, I can try to work up a  
test case against 7.4.12 if you think it'll be beneficial.  It'll
take a few days and, if you want 2 million-ish sample rows, it will
be very large.


eric

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Query plans for plpgsql triggers

2006-03-24 Thread Tom Lane
Eric B. Ridge [EMAIL PROTECTED] writes:
 On Mar 25, 2006, at 12:24 AM, Tom Lane wrote:
 This is the sort of detail that you really should not omit.

 I'm now curious if complex rules can influence the planner in  
 negative ways.

It's possible.  I'm not certain that that is really what you are
seeing, but it could be.

 ... However, I can try to work up a  
 test case against 7.4.12 if you think it'll be beneficial.

I doubt we'd consider patching such a problem in 7.4.  At this point
the only issues that will get patched in 7.4 are security and data-loss
risks, not performance problems.

Again: demonstrating the issue in 8.1 or CVS HEAD would be a good step
towards getting people motivated to fix it.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] regarding join

2006-03-24 Thread AKHILESH GUPTA
hi all,
below I have created two tables in pgsql, each with fields 'name' and 'id' of datatypes 'varchar(15)' and 'integer'.

One of the tables is:

chemical=> select * from test1;
 name  | id
-------+----
 akhil |  1
 b     |  2
 c     |  3
 d     |  4
 e     |  5
 f     |  6
(6 rows)

The other table is:

chemical=> select * from test3;
 name | id
------+----
 ab   |  1
 cd   |  2
 ef   |  3
 gh   |  4
(4 rows)

i want the output as:

 name  | id
-------+----
 akhil |  1   -- from test1
 ab    |  1   -- from test3
 b     |  2   -- from test1
 cd    |  2   -- from test3
 c     |  3   -- from test1
 ef    |  3   -- from test3
 d     |  4   -- from test1
 gh    |  4   -- from test3
 e     |  5   -- from test1
 f     |  6   -- from test1

i have tried all the joins, but they put the two tables' rows in
different columns.
is there any way to get this kind of output?
(please reply asap) urgent.
THANKS IN ADVANCE

--
Thanks & Regards,
Akhilesh
S/W Trainee (EDP),
NUCHEM Pvt. Ltd.,
Faridabad (Haryana)
GSM: +91 9891606064
FAILURES CAN BE FORGIVEN BUT AIMING LOW IS A CRIME
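The interleaved result described above is not a join at all; it can be produced with UNION ALL plus an ORDER BY. A sketch, using an extra column so each id's test1 row sorts ahead of the test3 row:

```sql
-- Stack the two tables' rows into one (name, id) result set and
-- order by id; "src" keeps the test1 row ahead of the test3 row
-- when both share the same id.
SELECT name, id, 1 AS src FROM test1
UNION ALL
SELECT name, id, 2 AS src FROM test3
ORDER BY id, src;
```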


Re: [GENERAL] PostgreSQL support on Redhat Advance Server 2.1

2006-03-24 Thread Leon Pu
--- Tom Lane [EMAIL PROTECTED] wrote:

 Joshua D. Drake [EMAIL PROTECTED] writes:
  No there is not. I actually suggest, considering your ancient
 version of 
  RedHat that you install from source.
 
 Better yet, install a more modern Red Hat release ;-)
 
 AS2.1 is very far into the legacy part of its support cycle.  You
 shouldn't even be thinking of installing new applications on top of
 it
 --- the only reason to be running it at all is this server works and
 we don't want to fix what's not broken.  You won't get any support
 from
 Red Hat for running PG 8.anything on AS2.1, so why would you choose
 that
 combination of releases?

Hi Tom,

you are right, it's unreasonable to run PostgreSQL 8.x on RHEL 2.1.

Since there is no support from either the PostgreSQL or the RHEL side for
PostgreSQL 8.x on RHEL 2.1, and I cannot upgrade the operating system,
I think I have to use PostgreSQL 7.4.7.


Best regards,
Leon


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly