Re: [GENERAL] Problem with postgres installation

2006-02-17 Thread Dave Page
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Jim McMaster
 Sent: 17 February 2006 02:32
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Problem with postgres installation
 
 I am trying to use the Windows installer for postgres-8.0.3, 
 for Windows XP
 SP 2.  I had installed and de-installed it previously.  The 
 install fails,
 because the postgres user is defined on the system, and I do 
 not know the
 password.
 
 The postgres user is not visible on the Windows User Accounts 
 panel, so I
 cannot delete it that way.  How can I get rid of the user so 
 I can reinstall
 postgres?

From a command prompt:

net user username /delete

Regards, Dave

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] How do I use the backend APIs

2006-02-17 Thread Martijn van Oosterhout
On Thu, Feb 16, 2006 at 07:41:09AM -0800, Chad wrote:
 Hi,
 
 In Postgres, is there a C language API which would give me access to
 BTrees like Berkeley DB does? eg to seek to a particular key/value pair
 and iterate forward from there? If not, what's the nearest thing to this
 in Postgres?

Well, in the backend you can do things like open a btree index, set up
a ScanKey to indicate which values you want, and then keep calling
getnext(). If you set your scankey to (col1 >= 'A') it will start at
'A' and go up from there...

Most of the time though you just create a query and use SPI_exec. Then
you don't actually have to worry about details like names of the
indexes, OIDs, types, comparison functions, etc...
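The seek-then-iterate access pattern described above (position on a key, then walk forward calling getnext()) can be illustrated outside the backend. A minimal Python sketch, using a sorted list as a stand-in for a btree index; the `index_scan` helper and sample keys are illustrative, not part of any PostgreSQL API:

```python
import bisect

# A sorted list stands in for a btree index.  A ScanKey such as
# (col1 >= 'B') becomes a bisect seek, and each loop iteration plays
# the role of a getnext() call walking forward through the index.
def index_scan(sorted_keys, start_key):
    pos = bisect.bisect_left(sorted_keys, start_key)  # seek to first key >= start_key
    for key in sorted_keys[pos:]:                     # iterate forward from there
        yield key

keys = ["Aardvark", "Badger", "Crow", "Deer"]
print(list(index_scan(keys, "B")))  # ['Badger', 'Crow', 'Deer']
```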

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




[GENERAL] Implicit conversion from string to timestamp

2006-02-17 Thread David Goodenough
I have some generic code to which I pass a series of values to be inserted
into a PostgreSQL table which includes a field which is defined as a timestamp
and which I wish to populate with a string of the form
yyyy-MM-dd hh:mm:ss.SSS.  Under pg 8.0 and before this worked fine,
but now with 8.1 I seem to be getting an exception which reads:

ERROR: column "created" is of type timestamp without time zone but expression
is of type character varying

All this is done using JDBC (so I suppose it might be a JDBC error).

I know that a number of things were tightened up in 8.1; is this one of
them?  Or should I be asking this on the JDBC list?

I had thought that passing strings into timestamps was acceptable.

David

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] return setof and Temp tables

2006-02-17 Thread William Leite Araújo
2006/2/17, Justin B. Kay [EMAIL PROTECTED]:

 I have looked around and found that you can use return setof
 in a function to return a result set, but can you use a temp table as the
 setof target?

Yes, you can (PostgreSQL 8.0.6 on i686-pc-linux-gnu, compiled by GCC gcc
(GCC) 3.3.5 (Debian 1:3.3.5-13))!

 I build a temp table using various select statements and then try to
 return the result as a recordset. I get an error: type t1 does not exist.

Please send the sample. I tested CREATE TEMP TABLE AS SELECT * FROM a setof
function and it works on 8.0.6.

--
William Leite Araújo


[GENERAL] Take advantage of PREPARE (Pool of Conections)

2006-02-17 Thread Marcos

Hi, 

I will have an application on the web that will have many connections
because it will be looking for data. 

I'm doing the main functions that will be used in PL/PGSQL, they use the
PREPARE resource, but in order to use it, I need the connection to be
permanent because the PLAN generated by PREPARE only exists while the
connection is opened. 

This means, if each visitor of the website opens and closes the
connection, PREPARE is useless. 

I need a connection pool for PostgreSQL on Linux, but I don't know
how to do this. 

Thanks in advance. 

Marcos.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Take advantage of PREPARE (Pool of Conections)

2006-02-17 Thread Martijn van Oosterhout
On Fri, Feb 17, 2006 at 10:47:16AM +, Marcos wrote:
 
 Hi, 
 
 I will have an application on the web that will have many connections
 because it will be looking for data. 

snip

 This means, if each visitor of the website opens and closes the
 connection, PREPARE is useless. 
 
 I need a connection pool for PostgreSQL on Linux, but I don't know
 how to do this. 

It's called pgpool.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Problem with postgres installation

2006-02-17 Thread Jim McMaster
On Friday, February 17, 2006 12:50 AM, Richard Huxton
mailto:dev@archonet.com wrote:

 Jim McMaster wrote:
 I am trying to use the Windows installer for postgres-8.0.3, for
 Windows XP SP 2.  I had installed and de-installed it previously. 
 The install fails, because the postgres user is defined on the
 system, and I do not know the password. 
 
 The postgres user is not visible on the Windows User Accounts panel,
 so I cannot delete it that way.  How can I get rid of the user so I
 can reinstall postgres?
 
  From the command-prompt the net user command should let you reset the
 password/delete the user (you'll need to log in as an administrator to
 do so). A bit of googling should find some step-by-step instructions.


Thank you.  That was the solution.

-- 
Jim McMaster
mailto:[EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Fixing up a corrupted toast table

2006-02-17 Thread Tom Lane
Steve Atkins [EMAIL PROTECTED] writes:
 pg_dump: ERROR:  missing chunk number 0 for toast value 25923965

 I'd like to make the current problem go away, though, perhaps by
 deleting the relevant row in the element table. I'm not quite sure how
 to go about that, though. Could anyone point me in the right direction?

First thing you should try is REINDEXing the toast table.  (I think in
7.4, reindexing the owning table will do this too; try that if reindex
won't let you hit the toast table directly.)

If that doesn't work, the standard technique for locating damaged data
should help: find the bad row by identifying the largest N for which
SELECT * FROM table LIMIT n doesn't fail, then SELECT ctid FROM table
OFFSET n LIMIT 1.  You may be able to delete the bad row with DELETE
FROM table WHERE ctid = 'value gotten above', but I wouldn't be too
surprised if the DELETE gives the same error.  If so, you can probably
make it happy by inserting a dummy row into the toast table (chunk ID
as specified in the error, chunk sequence 0, any old data value).
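The largest-N probe described above can be mechanized as a binary search. A Python sketch of the search strategy only; the `fetch` stub merely simulates a query failing once the damaged row falls inside the LIMIT — in practice each probe would be a real query against the table:

```python
# Binary-search for the largest n such that fetch(n) -- think
# "SELECT * FROM table LIMIT n" -- succeeds.  The damaged row is then
# reachable with OFFSET n LIMIT 1.  fetch() raises on failure.
def largest_good_n(fetch, total_rows):
    lo, hi = 0, total_rows
    while lo < hi:
        mid = (lo + hi + 1) // 2
        try:
            fetch(mid)          # probe: does LIMIT mid still succeed?
            lo = mid
        except Exception:
            hi = mid - 1
    return lo

# Simulated table whose 8th row (0-based index 7) has a broken TOAST pointer:
def fetch(limit, bad_row=7):
    if limit > bad_row:
        raise RuntimeError("missing chunk number 0 for toast value ...")

print(largest_good_n(fetch, 100))  # 7 -> bad row is at OFFSET 7 LIMIT 1
```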

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Take advantage of PREPARE (Pool of Conections)

2006-02-17 Thread Marcos
 It's called pgpool.

I've installed it on my computer; that's o.k.

Now, how do I know if it's working with PostgreSQL?

Thanks :o)


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] TSearch2 / German compound words / UTF-8

2006-02-17 Thread Alexander Presber

Hello,

Thanks for your efforts; I still can't get it to work.
I have now tried the Norwegian example. My encoding is ISO-8859 (I never
used UTF-8, because I thought it would be slower; the thread name is
a bit misleading).


So I am using an ISO-8859-9 database:

  ~/cvs/ssd% psql -l

     Name    | Eigentümer | Kodierung
  -----------+------------+-----------
   postgres  | postgres   | LATIN9
   tstest    | aljoscha   | LATIN9

and a Norwegian, ISO-8859-encoded dictionary and aff-file:

  ~% file tsearch/dict/ispell_no/norwegian.dict
  tsearch/dict/ispell_no/norwegian.dict: ISO-8859 C program text
  ~% file tsearch/dict/ispell_no/norwegian.aff
  tsearch/dict/ispell_no/norwegian.aff: ISO-8859 English text

the aff-file contains the lines:

  compoundwords controlled z
  ...
  #to compounds only:
  flag ~\\:
 [^S] S

and the dictionary contains:

  overtrekk/BCW\z

  (meaning: word can be compound part, intermediary s is allowed)

My configuration is:

  tstest=# SELECT * FROM tsearch2.pg_ts_cfg;

   ts_name   | prs_name |      locale
  -----------+----------+------------------
   simple    | default  | [EMAIL PROTECTED]
   german    | default  | [EMAIL PROTECTED]
   norwegian | default  | [EMAIL PROTECTED]


Now the test:

  tstest=# SELECT tsearch2.lexize('ispell_no','overtrekksgrill');
   lexize
  --------

  (1 Zeile)

BUT:

  tstest=# SELECT tsearch2.lexize('ispell_no','overtrekkgrill');
                lexize
  ------------------------------------
   {over,trekk,grill,overtrekk,grill}
  (1 Zeile)


It simply doesn't work. No UTF-8 is involved.

Sincerely yours,

Alexander Presber

P.S.: Henning: Sorry for bothering you with the CC; just ignore it
if you like.



Am 27.01.2006 um 18:17 schrieb Teodor Sigaev:


contrib_regression=# insert into pg_ts_dict values (
    'norwegian_ispell',
    (select dict_init from pg_ts_dict where dict_name='ispell_template'),
    'DictFile=/usr/local/share/ispell/norsk.dict ,'
    'AffFile =/usr/local/share/ispell/norsk.aff',
    (select dict_lexize from pg_ts_dict where dict_name='ispell_template'),
    'Norwegian ISpell dictionary'
);
INSERT 16681 1
contrib_regression=# select lexize('norwegian_ispell','politimester');
                  lexize
------------------------------------------
 {politimester,politi,mester,politi,mest}
(1 row)

contrib_regression=# select lexize('norwegian_ispell','sjokoladefabrikk');
                lexize
--------------------------------------
 {sjokoladefabrikk,sjokolade,fabrikk}
(1 row)

contrib_regression=# select lexize('norwegian_ispell','overtrekksgrilldresser');
         lexize
-------------------------
 {overtrekk,grill,dress}
(1 row)
% psql -l
          List of databases
        Name        | Owner  | Encoding
--------------------+--------+----------
 contrib_regression | teodor | KOI8
 postgres           | pgsql  | KOI8
 template0          | pgsql  | KOI8
 template1          | pgsql  | KOI8
(4 rows)


I'm afraid that's the UTF-8 problem. We just committed multibyte support
for tsearch2 to CVS HEAD, so you can try it.


Please notice: the dict, aff and stopword files should be in the server
encoding. Snowball sources for German (and other languages) in UTF-8 can be
found at http://snowball.tartarus.org/dist/libstemmer_c.tgz


To all: Maybe we should put all of Snowball's stemmers (for all
available languages and encodings) into the tsearch2 directory?


--
Teodor Sigaev                   E-mail: [EMAIL PROTECTED]
                                WWW: http://www.sigaev.ru/



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Implicit conversion from string to timestamp

2006-02-17 Thread Tom Lane
David Goodenough [EMAIL PROTECTED] writes:
 I have some generic code to which I pass a series of values to be inserted 
 into a PostgreSQL table which includes a field which is defined as a timestamp
 and which I wish to populate with a string of the form
 yyyy-MM-dd hh:mm:ss.SSS.  Under pg 8.0 and before this worked fine,
 but now with 8.1 I seem to be getting an exception which reads:

 ERROR: column "created" is of type timestamp without time zone but expression
 is of type character varying

 All this is done using JDBC (so I suppose it might be a JDBC error).

Check the JDBC archives --- I think you're probably getting burnt by
some side effect of the changes they made to use parameterized queries.
You have to be a lot more honest now about informing the JDBC driver
what datatype your statement parameters really are.
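The fix, in other words, is to hand the driver a typed value rather than a raw string. The thread is about JDBC, but the idea is language-neutral; here is a Python sketch (the `to_timestamp` helper is illustrative) of parsing the yyyy-MM-dd style string into a real timestamp value before binding it as a parameter:

```python
from datetime import datetime

# Parse the string into a typed value first, then bind the datetime
# object -- not the string -- as the statement parameter.
# Java's "yyyy-MM-dd hh:mm:ss.SSS" corresponds to the strptime
# pattern below (%f accepts the 3-digit millisecond field).
def to_timestamp(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

ts = to_timestamp("2006-02-17 10:47:16.123")
print(ts.year, ts.microsecond)  # 2006 123000
```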

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Btrieve to SQL

2006-02-17 Thread Adam Alkins
Hey,

Been looking around to do this for a while, but haven't gotten concrete
information. I'm interested in taking data from Peachtree Accounting
2003 (which is stored in Btrieve DAT files) and importing it into a
PostgreSQL database. I have looked around on the net about this, but
haven't gotten any concrete methods on doing this. Figured with
Pervasive's involvement in PostgreSQL, might have some luck on this
list.

Peachtree also stores the DDL (Dictionary) files for the Btrieve
database in the directory. I'm not sure what version of Btrieve the
files are. Does anyone have any experience in doing this? I read
somewhere about Pervasive's SDK for their DB server which can convert
Btrieve files to SQL (or something like that), but looking through the
SDK and documentation, haven't found any real information. This is
just an experimental project, so any commercial solutions to do this
really don't apply.

Regards,

--
Adam Alkins
http://www.rasadam.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Fixing up a corrupted toast table

2006-02-17 Thread Steve Atkins


On Feb 17, 2006, at 6:29 AM, Tom Lane wrote:

 Steve Atkins [EMAIL PROTECTED] writes:
 pg_dump: ERROR:  missing chunk number 0 for toast value 25923965

 I'd like to make the current problem go away, though, perhaps by
 deleting the relevant row in the element table. I'm not quite sure how
 to go about that, though. Could anyone point me in the right direction?

 First thing you should try is REINDEXing the toast table.  (I think in
 7.4, reindexing the owning table will do this too; try that if reindex
 won't let you hit the toast table directly.)

Yes, forgot to mention I'd already tried that. Sorry.

 If that doesn't work, the standard technique for locating damaged data
 should help: find the bad row by identifying the largest N for which
 SELECT * FROM table LIMIT n doesn't fail, then SELECT ctid FROM table
 OFFSET n LIMIT 1.  You may be able to delete the bad row with DELETE
 FROM table WHERE ctid = 'value gotten above', but I wouldn't be too
 surprised if the DELETE gives the same error.  If so, you can probably
 make it happy by inserting a dummy row into the toast table (chunk ID
 as specified in the error, chunk sequence 0, any old data value).

OK, that's what I was looking for. Thanks!

Cheers,
  Steve

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Btrieve to SQL

2006-02-17 Thread Joshua D. Drake

Adam Alkins wrote:

Hey,

Been looking around to do this for a while, but haven't gotten concrete
information. I'm interested in taking data from Peachtree Accounting
2003 (which is stored in Btrieve DAT files) and importing it into a
PostgreSQL database. I have looked around on the net about this, but
haven't gotten any concrete methods on doing this. Figured with
Pervasive's involvement in PostgreSQL, might have some luck on this
list.
  

Use Perl :)

http://search.cpan.org/~dlane/

Which will get you access to the files, and then Perl DBI/DBD to
push the data into PostgreSQL.

Sincerely,

Joshua D. Drake



Peachtree also stores the DDL (Dictionary) files for the Btrieve
database in the directory. I'm not sure what version of Btrieve the
files are. Does anyone have any experience in doing this? I read
somewhere about Pervasive's SDK for their DB server which can convert
Btrieve files to SQL (or something like that), but looking through the
SDK and documentation, haven't found any real information. This is
just an experimental project, so any commercial solutions to do this
really don't apply.

Regards,

--
Adam Alkins
http://www.rasadam.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly
  



--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl - http://www.commandprompt.com/


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] A question about Vacuum analyze

2006-02-17 Thread Emi Lu


 In another way, whenever we delete/truncate and then insert data into
 a table, it is better to vacuum analyze?

 You shouldn't need a VACUUM if you haven't yet done any updates or
 deletes since the TRUNCATE.  An ANALYZE seems like a good idea, though.
 (You could get away without ANALYZE if the new data has essentially the
 same statistics as the old, but if you're making only minor changes, why
 are you using this technique at all ...)

 After truncating table A, around 60,000 rows will be inserted. Then a
 comparison will be done between table A and table B.  After that, table
 B will be updated according to the comparison result. The number of
 records inserted into table A is increasing every day.

 So, your suggestion is that after the population of table A, the query
 planner should be able to find the most efficient query plan because we
 do truncate but not delete, and we do not need to do vacuum analyze at
 all, right?

 no. the suggestion was that a VACUUM is not needed, but
 that an ANALYZE might be.

Thank you gnari for your answer. But I am a bit confused about not running
vacuum but only analyze. Can I separate these two operations? I guess
vacuum analyze does both vacuum and analyze. Or can EXPLAIN ANALYZE do it
for me?

Emi








---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] A question about Vacuum analyze

2006-02-17 Thread Scott Marlowe
On Fri, 2006-02-17 at 11:06, Emi Lu wrote:
 In another way, whenever we delete/truncate and then insert data into
 a table, it is better to vacuum analyze?

 You shouldn't need a VACUUM if you haven't yet done any updates or
 deletes since the TRUNCATE.  An ANALYZE seems like a good idea, though.
 (You could get away without ANALYZE if the new data has essentially the
 same statistics as the old, but if you're making only minor changes, why
 are you using this technique at all ...)

 After truncating table A, around 60,000 rows will be inserted. Then a
 comparison will be done between table A and table B.  After that, table
 B will be updated according to the comparison result. The number of
 records inserted into table A is increasing every day.

 So, your suggestion is that after the population of table A, the query
 planner should be able to find the most efficient query plan because we
 do truncate but not delete, and we do not need to do vacuum analyze at
 all, right?

 no. the suggestion was that a VACUUM is not needed, but
 that an ANALYZE might be.

 Thank you gnari for your answer. But I am a bit confused about not running
 vacuum but only analyze. Can I separate these two operations? I guess
 vacuum analyze does both vacuum and analyze. Or can EXPLAIN ANALYZE do it
 for me?


Yeah, vacuum analyze is kind of a leftover from the olden days when you
could only run an analyze as part of a vacuum command.  analyze has been
its own command for quite some time now.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Basic problems using plpythonu - bug?

2006-02-17 Thread Curt Schwaderer



I'm trying to do something pretty basic per the documentation using
plpython triggers. When I change the "cfgCmd" field in the SysConfig
table below, the trigger works fine, and if I have a null function that
does nothing but write to the pipe, that all works fine. The trouble is
when the function attempts to read the "cfgCmd" field from the SysConfig
table, which tells the function what to do. When I write a value to the
cfgCmd field of the SysConfig table, the server log says:

ERROR: plpython: function "send_ctl_msg" failed
DETAIL: exceptions.KeyError: 'cfgCmd'

What's a key error? I can't find any documentation on it. Also, when I
try to reference TD["old"][column name] this gives me an error too. Does
this have something to do with the untrusted language part, or am I
doing something wrong? Thanks much for any help,
Curt

Here is the trigger function:

-- Send Control Message (send_ctl_msg)
CREATE FUNCTION send_ctl_msg() RETURNS trigger AS $$
    import os
    import os.path

    pipe_loc = TD["args"][0]

    old = TD["old"]
    new = TD["new"]

    rv = plpy.execute("SELECT * from ucfg.SysConfig", 1)
    ens_cmd = rv[0]["cfgCmd"]
    plpy.log(cmd)

    if os.path.exists(pipe_loc):
        pipeout = open(pipe_loc, "w")
        print >>pipeout, ens_cmd
    else:
        plpy.error("Build System cmd FIFO not found. Make sure VMD is running")
$$ LANGUAGE plpythonu;

Here is the table it's trying to access:

CREATE TABLE ucfg.SysConfig (
    -- System map selection and running
    startupSysMapName VARCHAR(64) REFERENCES ucfg.SysMaps(sysMapName),
        -- System map to load and run at system boot.
    activeSysMapName VARCHAR(64) REFERENCES ucfg.SysMaps(sysMapName),
        -- System map that is currently loaded.
    checkSysMapName VARCHAR(64) REFERENCES ucfg.SysMaps(sysMapName),
        -- System map to be checked to see if it builds properly
    cfgCmd VARCHAR(16)
        -- ENS configuration control command:
        -- "NONE", "CHECK", "LOAD", "RUN", "STOP"
);

And here is the trigger:

CREATE TRIGGER tr_exec AFTER UPDATE ON ucfg.SysConfig FOR
EACH ROW EXECUTE PROCEDURE public.send_ctl_msg("/var/ens/cmdfifo");


Re: [GENERAL] TSearch2 / German compound words / UTF-8

2006-02-17 Thread Teodor Sigaev

Very strange...


  ~% file tsearch/dict/ispell_no/norwegian.dict
  tsearch/dict/ispell_no/norwegian.dict: ISO-8859 C program text
  ~% file tsearch/dict/ispell_no/norwegian.aff
  tsearch/dict/ispell_no/norwegian.aff: ISO-8859 English text


Can you place those files anywhere where I can download them (or mail them
directly to me)?



--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] TSearch2 / German compound words / UTF-8

2006-02-17 Thread Teodor Sigaev
BTW, if you took the Norwegian dictionary from
http://folk.uio.no/runekl/dictionary.html, then try building it from the
OpenOffice sources (http://lingucomponent.openoffice.org/spell_dic.html,
tsearch2/my2ispell).

I found mails in my archive which say that Norwegian people prefer the
OpenOffice one.

--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Basic problems using plpythonu - bug?

2006-02-17 Thread Michael Fuhr
On Fri, Feb 17, 2006 at 11:38:21AM -0600, Curt Schwaderer wrote:
 ERROR: plpython: function "send_ctl_msg" failed
 DETAIL: exceptions.KeyError: 'cfgCmd'

You're getting bit by case folding of identifiers.  You created
the column as:

    cfgCmd VARCHAR(16),  -- ENS configuration control command

Since you didn't quote the identifier it's folded to lowercase, so
when you refer to it as

    ens_cmd = rv[0]["cfgCmd"]

you get a KeyError exception.  Try using rv[0]["cfgcmd"].  You might
want to read the documentation about quoted identifiers:

http://www.postgresql.org/docs/8.1/interactive/sql-syntax.html#SQL-SYNTAX-IDENTIFIERS
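The folding rule can be mimicked in plain Python to see why the lookup fails. A minimal sketch; the `fold_identifier` helper is illustrative, not a PostgreSQL API:

```python
# Mimic PostgreSQL's identifier handling: unquoted identifiers are
# folded to lower case, double-quoted identifiers keep their spelling.
def fold_identifier(ident):
    if ident.startswith('"') and ident.endswith('"'):
        return ident[1:-1]
    return ident.lower()

# The column was declared unquoted, so the result dictionary key is
# all lower case -- which is why rv[0]["cfgCmd"] raises KeyError:
row = {fold_identifier("cfgCmd"): "RUN"}
print("cfgCmd" in row, "cfgcmd" in row)  # False True
```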

-- 
Michael Fuhr

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] A question about Vacuum analyze

2006-02-17 Thread Ragnar
On fös, 2006-02-17 at 12:06 -0500, Emi Lu wrote:
 In another way, whenever we delete/truncate and then insert data into
 a table, it is better to vacuum analyze?

 ...
 
 So, your suggestion is that after the population of table A, the query 
 planner should be able to find the most efficient query plan because we 
 do truncate but not delete, and we do not need to do vacuum analyze at 
 all, right?
 
 
 ...
 
 Thank you gnari for your answer. But I am a bit confused about not running
 vacuum but only analyze. Can I separate these two operations? I guess
 vacuum analyze does both vacuum and analyze.
 Or can EXPLAIN ANALYZE do it for me?

VACUUM ensures that dead rows can be reused. Dead rows
are created by DELETE and UPDATE.
If you have done a significant number of DELETEs
or UPDATEs, you might want to VACUUM.

ANALYZE collects statistical information about
your tables. This helps the planner make good plans.
After having changed your data significantly, you
might want to ANALYZE, for example after lots of
INSERTs, UPDATEs or DELETEs.

TRUNCATE does not create dead rows, so you do
not need to VACUUM just because of that, but
you still might have to ANALYZE.

If you TRUNCATE a table and then repopulate it
with similar data as before, you do not have to
ANALYZE, as plans based on the old statistics
would presumably be just as good.

EXPLAIN displays the plan that will be chosen
for a query, along with some estimated cost
information.

EXPLAIN ANALYZE actually executes the query and
shows the same info as EXPLAIN, plus actual
cost information.
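As a toy model of the point above (not PostgreSQL code, just illustrative bookkeeping of when dead rows appear), a Python sketch:

```python
# Toy bookkeeping of live vs. dead rows: DELETE and UPDATE leave dead
# rows behind (VACUUM's job to reclaim), TRUNCATE does not -- though
# statistics may still be stale (ANALYZE's job).
class Table:
    def __init__(self):
        self.live, self.dead = 0, 0
    def insert(self, n):
        self.live += n
    def delete(self, n):
        self.live -= n
        self.dead += n          # deleted rows become dead rows
    def update(self, n):
        self.dead += n          # an UPDATE is a delete plus an insert
    def truncate(self):
        self.live, self.dead = 0, 0  # nothing left for VACUUM to reclaim

t = Table()
t.insert(60000)
t.delete(60000)    # DELETE leaves dead rows -> VACUUM is useful
print(t.dead)      # 60000

t2 = Table()
t2.insert(60000)
t2.truncate()      # TRUNCATE leaves none -> no VACUUM needed
print(t2.dead)     # 0
```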

Hope this makes it more clear

gnari



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Performance Tuning

2006-02-17 Thread Darryl W. DeLao Jr.
I wanted to check my settings in my postgresql.conf file for a dual Xeon
2.4 GHz server with 6 GB of RAM.

I have max_connections set to 512, with shared_buffers set to 1024. If I
set this any higher, postgres will not start. But it seems that this
setting is not enough. Though the server runs fine, certain queries for
reports are taking anywhere from 40 to 55 seconds, and the CPU is only
topping out at 25%. Is there a way to make this better? My thinking is
that if more CPU were utilized then the query would run faster.


Thank you!


Re: [GENERAL] Basic problems using plpythonu - bug?

2006-02-17 Thread Curt Schwaderer

Thanks a bunch - that cleaned everything up!

Curt

- Original Message - 
From: Michael Fuhr [EMAIL PROTECTED]

To: Curt Schwaderer [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Friday, February 17, 2006 11:57 AM
Subject: Re: [GENERAL] Basic problems using plpythonu - bug?



On Fri, Feb 17, 2006 at 11:38:21AM -0600, Curt Schwaderer wrote:

ERROR: plpython: function "send_ctl_msg" failed
DETAIL: exceptions.KeyError: 'cfgCmd'


You're getting bit by case folding of identifiers.  You created
the column as:


   cfgCmd VARCHAR(16),  -- ENS configuration control command


Since you didn't quote the identifier it's folded to lowercase, so
when you refer to it as


   ens_cmd = rv[0]["cfgCmd"]


you get a KeyError exception.  Try using rv[0]["cfgcmd"].  You might
want to read the documentation about quoted identifiers:

http://www.postgresql.org/docs/8.1/interactive/sql-syntax.html#SQL-SYNTAX-IDENTIFIERS

--
Michael Fuhr 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] connecting to pgsql

2006-02-17 Thread mmaclennan
I have installed postgresql with postgis on a server at my home and I
have loaded some data into it but I can not access this data from
another computer. I have downloaded the ODBC driver from the postgresql
web site and have installed it on another computer but when I try to
establish a connection to postgres through Microsoft Access or
ArcCatalog I can not connect.

The error I am receiving says that the connection was refused. I am
positive I have the password right and the firewall is disabled but I
still cannot connect. Has anyone experienced this problem before? Is
there some default setting I have to change in postgres to allow other
computers to access the db? 

Any help would be greatly appreciated.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Ident authentication failed without su to user

2006-02-17 Thread jim_bowery
Yes.  It turns out the problem was that my pg_hba.conf file was
replaced by another file during a yum install of the new postgresql but
in a different path.  When I used locate pg_hba.conf it returned the
path of the old file which didn't have the 'ident sameuser' rule.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] How do I use the backend APIs

2006-02-17 Thread Chad
In a word: The kind of problems people use Berkeley DB for.

People use BDB for more fine-grained cursor access to BTrees. Stuff you
CANNOT do with SQL. There is a market for this. See their website. I'd
like something similar from Postgres so that the data would be stored
in a full-fledged RDBMS but I could use the cursor methods for
searching more efficiently than SQL. Best of both worlds.

I've had a quick browse around the Postgres code and found some
functions like _bt_first() but no sample code using it. BTW, it's for
developing an alternative server-based access path to the underlying
relational data.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Converting an ASCII database to an UTF-8 database

2006-02-17 Thread kishore . sainath
Hi All,

I have a database in PostgreSQL which is ASCII.
Due to some internationalization issues, I need to convert the database
to the UTF-8 format.

So my question is:
How do I convert a database in the ASCII format into one of the UTF-8
format?

Thanks in advance
- Kishore


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] EnterpriseDB

2006-02-17 Thread Benjamin Arai



Is the PL support in EnterpriseDB worth the money? Are there any
specific benefits that I should be aware of?


Benjamin Arai
[EMAIL PROTECTED]
http://www.benjaminarai.com



Re: [GENERAL] Performance Tuning

2006-02-17 Thread Tom Lane
Darryl W. DeLao Jr. [EMAIL PROTECTED] writes:
 I have max_connections set to 512, with shared buffers set to 1024.  If I
 set this any higher, postgres will not start.  But, it seems that this
 setting is not enough.  Though the server runs fine, certain queries for
 reports are taking anywhere from 40 to 55 seconds, and the CPU is only
 topping out at 25%.  Is there a way to make this better?

You really really want shared_buffers higher --- 10000 or so would be
reasonable.  (Which PG version are you running?  If 8.1 you might want
higher than that.)

Fix the kernel's SHMMAX setting to let you do this.

After that, you probably want to read the archives of the
pgsql-performance list a bit.  You likely have a standard query-tuning
problem, but you've not said enough to let anyone help you.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] connecting to pgsql

2006-02-17 Thread Chandra Sekhar Surapaneni
Did you make the required changes to pg_hba.conf?
Maybe you forgot to do those.

-Chandra Sekhar Surapaneni 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of mmaclennan
Sent: Thursday, February 16, 2006 11:38 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] connecting to pgsql

I have installed postgresql with postgis on a server at my home and I
have loaded some data into it but I can not access this data from
another computer. I have downloaded the ODBC driver from the postgresql
web site and have installed it on another computer but when I try to
establish a connection to postgres through microsoft access or
arccatalogue I can not connect.

The error I am receiving says that the connection was refused. I am
positive I have the password right and the firewall is disabled but I
still cannot connect. Has anyone experienced this problem before? Is
there some default setting I have to change in postgres to allow other
computers to access the db? 

Any help would be greatly appreciated.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org
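A "connection refused" from a remote client usually means the server is not listening on TCP at all, before pg_hba.conf is even consulted. A hedged sketch of both pieces (the subnet below is an example; adjust to your network):

```conf
# postgresql.conf -- 8.0.x syntax; on 7.x use tcpip_socket = true instead
listen_addresses = '*'          # listen on all interfaces, not just localhost

# pg_hba.conf -- then allow the client's address range:
# TYPE  DATABASE  USER  CIDR-ADDRESS     METHOD
host    all       all   192.168.1.0/24   md5
```

Restart the server after changing postgresql.conf (a reload suffices for pg_hba.conf).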


Re: [GENERAL] Performance Tuning

2006-02-17 Thread Darryl W. DeLao Jr.
Running ver 7.3.10 in RHEL 3.0 ES.  If I change shared buffers, don't I have to change max connections as well?

On 2/17/06, Tom Lane [EMAIL PROTECTED] wrote:
 Darryl W. DeLao Jr. [EMAIL PROTECTED] writes:
  I have max_connections set to 512, with shared buffers set to 1024.  If I
  set this any higher, postgres will not start.  But, it seems that this
  setting is not enough.  Though the server runs fine, certain queries for
  reports are taking anywhere from 40 to 55 seconds, and the CPU is only
  topping out at 25%.  Is there a way to make this better?

 You really really want shared_buffers higher --- 1 or so would be
 reasonable.  (Which PG version are you running?  If 8.1 you might want
 higher than that.)

 Fix the kernel's SHMMAX setting to let you do this.

 After that, you probably want to read the archives of the
 pgsql-performance list a bit.  You likely have a standard query-tuning
 problem, but you've not said enough to let anyone help you.

 regards, tom lane


Re: [GENERAL] How do I use the backend APIs

2006-02-17 Thread Tom Lane
Chad [EMAIL PROTECTED] writes:
 This is exactly what I am looking for. I'm wondering how easy it is to
 sit on top of this backend.

You can't, and you'll get exactly zero community support for trying.
We don't believe in embedded databases --- or at least, we don't
believe in trying to use Postgres as one.  We like a hard-and-fast
separation between client and database server, so that client
programming mistakes can't corrupt the database.

You could possibly do what you are thinking of in the form of
user-defined functions executing in the backend, but the communication
overhead to the client side is likely more than you want, and you'll
be relying on APIs that we consider backend-internal and feel free to
whack around at the drop of a hat.

I'd suggest looking for something that's actually intended to be an
embedded database.  sqlite maybe, though I'm no expert on the subject.
For that matter, have you looked at good old dbm?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] primary index permits duplicates

2006-02-17 Thread H Hale
Hello,

I am using Postgres 8.0.3 on Fedora Core 4. I may have found a bug in Postgres.

I have a table as follows:

     Table ".master"
  Column  |  Type  | Modifiers
----------+--------+-----------
 objectid | dsuuid | not null
 classid  | dsuuid | not null
Indexes:
    "master_pkey" PRIMARY KEY, btree (objectid)

dsuuid is a custom data type for uuids, implemented in an external library with comparison functions.

CREATE TYPE dsuuid (
    INTERNALLENGTH = 16,
    INPUT = dsuuid_in,
    OUTPUT = dsuuid_out,
    RECEIVE = dsuuid_recv,
    SEND = dsuuid_send,
    alignment = CHAR
);

CREATE OPERATOR CLASS _uuid_ops DEFAULT FOR TYPE dsuuid USING btree AS
    OPERATOR 1 < ,
    OPERATOR 2 <= ,
    OPERATOR 3 = ,
    OPERATOR 4 >= ,
    OPERATOR 5 > ,
    FUNCTION 1 dsuuid_cmp(dsuuid, dsuuid);

Inserts to this table are done via triggers on other tables. I have found duplicate objectid column entries. I have reproduced the problem by inserting directly into the table using psql as follows:

capsa=# insert into master values('3021----','3001----');
INSERT 21633 1
capsa=# insert into master values('3021----','3001----');
ERROR:  duplicate key violates unique constraint "master_pkey"
capsa=# insert into master values('3022----','4001----');
INSERT 21635 1
capsa=# insert into master values('3021----','3001----');
INSERT 21636 1

Note that the last insert permits a duplicate objectid to be inserted. The uuid custom data type's compare functions have been confirmed to be correct. I am logging the calls to the lib's compare functions. For the last insert, what I have found is that Postgres finds a match but continues checking. The compare returns 0 if equal, otherwise non-zero.

uuid_cmp : 3021---- 3021---- 0 - match found
uuid_cmp : 3022---- 3021---- 1 - but one more is checked

Re: [GENERAL] primary index permits duplicates

2006-02-17 Thread Tom Lane
H Hale [EMAIL PROTECTED] writes:
  dsuuid is a custom data type for uuids with an external library with 
 comparison functions.

Unless you can reproduce this with a standard datatype, you should
probably file this report with the developer(s) of dsuuid.  It sounds
like an incorrect comparison function to me.

  The compare returns 0 if equal otherwise non-zero.

In fact, if that's what the code actually thinks the API is, that's
the problem right there ... it's supposed to be a three-way result.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
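Tom's "three-way result" requirement can be shown concretely. A sketch with a made-up name, over raw 16-byte uuids (a real PostgreSQL btree support function would additionally use the fmgr calling convention):

```c
#include <string.h>

/* A btree FUNCTION 1 support routine must be a three-way comparator:
 * return a value < 0, == 0, or > 0, like memcmp/strcmp.  The index
 * descent uses the *sign* to decide which way to walk; "0 if equal,
 * otherwise non-zero" loses that direction and breaks uniqueness
 * checks exactly the way described in this thread. */
static int
dsuuid_cmp_raw(const unsigned char *a, const unsigned char *b)
{
    return memcmp(a, b, 16);    /* memcmp is already three-way */
}
```

The sign convention, not merely zero/non-zero, is what the btree code relies on.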


Re: [GENERAL] Converting an ASCII database to an UTF-8 database

2006-02-17 Thread Ragnar
On fös, 2006-02-17 at 05:21 -0800, [EMAIL PROTECTED] wrote:
 Hi All,
 
 I have a database in PostgreSQL which is ASCII.
 Due to some internationalization issues, I need to convert the database
 to the UTF-8 format.
 
 So my question is:
 How do I convert a database in the ASCII format into one of the UTF-8
 format?

using pg_dump ?

gnari



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq
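The pg_dump route gnari alludes to, sketched with hypothetical database names (8.0-era spelling of the encoding name):

```shell
pg_dump olddb > olddb.sql                         # dump the SQL_ASCII database
createdb -E UNICODE newdb                         # "UNICODE" is how 8.0 spells UTF-8
PGCLIENTENCODING=LATIN1 psql newdb -f olddb.sql   # declare what the bytes really are
```

If the old data truly is pure 7-bit ASCII, the PGCLIENTENCODING override is unnecessary; it matters only when 8-bit legacy bytes are present.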


Re: [GENERAL] sqlite speed comparison

2006-02-17 Thread Merlin Moncure
On 2/14/06, Neil Conway [EMAIL PROTECTED] wrote:
 On Tue, 2006-02-14 at 13:53 -0500, Kevin Murphy wrote:
  A Windows PostgreSQL guru who cares (;-)) might help this guy with his
  benchmark of mysql, firebird, sqlite, and postgresql:
 
  http://www.sqlite.org/cvstrac/wiki?p=SpeedComparison

 Yeah, see recent discussion:

 http://archives.postgresql.org/pgsql-hackers/2006-02/msg00465.php

He turned on row-level statistics in his postgresql.conf, which is
either an accidental goof or borderline cheating.  This would have a
measurable impact on all rapid-fire query tests.  Editing his wiki atm.

merlin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] return setof and Temp tables

2006-02-17 Thread Robert Treat
On Friday 17 February 2006 00:13, Justin B. Kay wrote:
 I have looked around and found that you can use return setof in a
 function to return a result set, but can you use a temp table as the
 setof target?  I build a temp table using various select statements and
 then try to return the result as a recordset.  I get an error: type t1
 does not exist.
 If this is not possible, is there some alternative way to do this?  I am
 trying to translate what was done in a ms sql database.


If I were doing this, I would create a separate permanent type whose 
structure matches that of the temp table, or use OUT parameters to mimic it. 

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
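Robert's suggestion, sketched with invented names (8.0 syntax; note that in plpgsql of this era, statements touching a re-created temp table need EXECUTE to dodge cached plans referencing the old table's OID):

```sql
-- A permanent composite type whose shape matches the temp table:
CREATE TYPE report_row AS (id integer, total numeric);

CREATE FUNCTION build_report() RETURNS SETOF report_row AS $$
DECLARE
    r report_row;
BEGIN
    -- build the temp table from your various SELECTs (placeholder query here)
    EXECUTE 'CREATE TEMP TABLE t1 AS SELECT 1 AS id, 2.0 AS total';
    FOR r IN EXECUTE 'SELECT id, total FROM t1' LOOP
        RETURN NEXT r;
    END LOOP;
    EXECUTE 'DROP TABLE t1';
    RETURN;
END;
$$ LANGUAGE plpgsql;

SELECT * FROM build_report();
```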


Re: [GENERAL] Converting an ASCII database to an UTF-8 database

2006-02-17 Thread Peter Eisentraut
[EMAIL PROTECTED] wrote:
 How do I convert a database in the ASCII format into one of the UTF-8
 format?

ASCII is a subset of UTF-8, so you don't need to do anything.  Just 
change the encoding entry in the pg_database table.  Of course, using 
pg_dump would be the official way to convert a database between any two 
encodings.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] How do I use the backend APIs

2006-02-17 Thread Martijn van Oosterhout
On Fri, Feb 17, 2006 at 01:06:16AM -0800, Chad wrote:
 In a word: The kind of problems people use Berkeley DB for.
 
 People use BDB for more fine grained cursor access to BTrees. Stuff you
 CANNOT do with SQL. There is a market for this. See their website. I'd
 like something similar from Postgres so that the data would be stored
 in a full fledged RDBMS but I could use the cursor methods for
 searching more efficient than SQL. Best of both worlds.

Well, a brief look at the docs doesn't immediately reveal
anything that couldn't be done with straight SQL and server-side
functions. It would be helpful if you could give an example of what you
actually want to do.

 I've had a quick browse around the Postgres code and found some
 functions like _bt_first() but no sample code to use it. BTW its for
 developing an alternative server based access to the underlying
 relational data.

Well, that function is several levels below where you need to be
looking. Using it directly will probably get you into a world of hurt.
BTW, what does "alternative server based access to the underlying
relational data" mean?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.
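As one example of "seek to a key and iterate forward" in straight SQL (table and column names invented), a cursor over an indexed ORDER BY gets close to the BDB pattern:

```sql
BEGIN;
DECLARE seek CURSOR FOR
    SELECT k, v FROM items WHERE k >= 'A' ORDER BY k;
FETCH 10 FROM seek;   -- start at 'A' and walk forward, a batch at a time
FETCH 10 FROM seek;   -- resume where the previous fetch stopped
CLOSE seek;
COMMIT;
```

With a btree index on k, the planner can satisfy this with an index scan, so the cursor is effectively positioned within the tree without touching any backend internals.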




Re: [GENERAL] Converting an ASCII database to an UTF-8 database

2006-02-17 Thread Ragnar
On fös, 2006-02-17 at 22:38 +0100, Peter Eisentraut wrote:
 [EMAIL PROTECTED] wrote:
  How do I convert a database in the ASCII format into one of the UTF-8
  format?
 
 ASCII is a subset of UTF-8, so you don't need to do anything.  Just 
 change the encoding entry in the pg_database table.  Of course, using 
 pg_dump would be the official way to convert a database between any two 
 encodings.

This will only work correctly if the database
definitely does not contain non-ASCII characters.

Assuming that by ASCII format we mean the database was
created SQL_ASCII, then it is possible that it contains
byte sequences that are invalid UTF-8, as SQL_ASCII is an
8-bit encoding.

consider:

template1=# create database test with encoding='SQL_ASCII';
CREATE DATABASE
template1=# \connect test
You are now connected to database test.
test=# create table a (x text);
CREATE TABLE
test=# insert into a values ('á');
INSERT 33304378 1
test=# select * from a;
 x
---
 á
(1 row)

test=# update pg_database set encoding =
pg_catalog.pg_char_to_encoding('UTF8') where datname='test';
UPDATE 1
test=# select * from a;
 x
---
 á
(1 row)

test=# \connect template1
You are now connected to database template1.
template1=# \connect test
You are now connected to database test.
test=# select * from a;
 x
---

(1 row)

test=#


gnari



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Performance Tuning

2006-02-17 Thread Christopher Browne
The world rejoiced as [EMAIL PROTECTED] (Darryl W. DeLao Jr.) wrote:
 Running ver 7.3.10 in RHEL 3.0 ES.  If I change shared buffers, dont i have 
 to change max connections as well? 

If you have enough connections, then that seems unnecessary.

The *opposite* would be true; if you change max connections, you might
need to change the size of the shared buffer...

At any rate, if you actually want substantive improvements in
performance, I would *highly* advise moving to a version of PostgreSQL
that is *way* newer, like version 8.1.  There are *enormous* numbers
of improvements that affect performance between 7.3 and 8.1.
-- 
output = reverse(gro.gultn @ enworbbc)
http://cbbrowne.com/info/lsf.html
In the case of CAPP, an EAL4 evaluation tells you everything you need
to   know.  It tells you   that  Microsoft spent  millions  of dollars
producing documentation that   shows   that  Windows 2000 meets an
inadequate  set of  requirements,  and  that  you can have  reasonably
strong confidence that this is the case. -- Jonathan S. Shapiro

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] EnterpriseDB

2006-02-17 Thread Christopher Browne
[EMAIL PROTECTED] (Benjamin Arai) wrote:
 Is the PL support in EnterpriseDB worth the money?  Are there any
 specific benefits that I should specifically be aware of?

I dunno; this is a PostgreSQL list, and many (most?) of us have never
used EnterpriseDB.

The people that can answer your *second* question are the folks from
EnterpriseDB.  Whether they are necessarily totally objective about
the *first* question is another matter...
-- 
let name=cbbrowne and tld=gmail.com in String.concat @ [name;tld];;
http://linuxdatabases.info/info/slony.html
Ever stop to think and forget to start again? 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] How do I use the backend APIs

2006-02-17 Thread Christopher Browne
A long time ago, in a galaxy far, far away, Chad [EMAIL PROTECTED] wrote:
 In a word: The kind of problems people use Berkeley DB for.

 People use BDB for more fine grained cursor access to BTrees. Stuff you
 CANNOT do with SQL. There is a market for this. See their website. I'd
 like something similar from Postgres so that the data would be stored
 in a full fledged RDBMS but I could use the cursor methods for
 searching more efficient than SQL. Best of both worlds.

I daresay we get enough challenges to fill the day when we use the
coarse graining of SQL.

I'm generally keener on getting aggregate results that let me not
bother needing to search in fantastical ways...

As far as I'm concerned, you're not pointing at a better world; you're
pointing at a worse one.  I've seen far too many bugs falling out of
the navigational complexities of navigation-oriented data structures.

The sheer scope of bugginess of that is why my ears perk up when
mention of languages like R and APL and such come up; I don't want to
navigate through data; I want to parallel process it :-).

 I've had a quick browse around the Postgres code and found some
 functions like _bt_first() but no sample code to use it. BTW its
 for developing an alternative server based access to the underlying
 relational data.

Those sorts of functions are intended as internals, and public usage
can be expected to break gloriously badly as changing them is fair
game as PostgreSQL progresses to new versions.

For things for public use, you should look at what is offered in
libpq.

If you could outline some usage that might make it more powerful, it
is not implausible that people would listen.  There are doubtless ways
that cursors could be enhanced, and that might be the direction you
would want to go.  But you're not too likely to see PostgreSQL
rewritten for the sake of attracting the market of people who need
to manipulate the fine semantics of B-tree navigation.
-- 
If this was helpful, http://svcs.affero.net/rm.php?r=cbbrowne rate me
http://cbbrowne.com/info/rdbms.html
For those  of you who are  into writing programs that  are as obscure
and complicated  as possible, there are opportunities  for... real fun
here -- Arthur Norman

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] A question about Vacuum analyze

2006-02-17 Thread Christopher Browne
Quoth [EMAIL PROTECTED] (Emi Lu):
 no. the suggestion was that a VACUUM is not needed, but that an
 ANALYZE might be.

 Thank you gnari for your answer. But I am a bit confused about not
 running vacuum but only analyze. Can I separate these two
 operations? I guess vacuum analyze does both vacuum and analyze. Or
 EXPLAIN ANALYZE can do it for me?

EXPLAIN, ANALYZE, and VACUUM are different things; ANALYZE gets used
in two different contexts...

1.  VACUUM is what cleans dead tuples out of tables.

  e.g. VACUUM my_table;

2.  VACUUM ANALYZE cleans out dead tuples and recalculates data
distributions

  e.g. VACUUM ANALYZE my_table;

3.  EXPLAIN describes query plans

  e.g. EXPLAIN select * from my_table;

4.  EXPLAIN ANALYZE compares query plan estimates to real results

  e.g. EXPLAIN ANALYZE select * from my_table;

5.  ANALYZE recalculates data distributions (as in 2, but without
cleaning out dead tuples).

  e.g. ANALYZE my_table;

Pointedly, EXPLAIN ANALYZE is entirely distinct from ANALYZE and
VACUUM ANALYZE...
-- 
let name=cbbrowne and tld=cbbrowne.com in name ^ @ ^ tld;;
http://linuxdatabases.info/info/slony.html
Rules of the Evil Overlord #218. I will not pick up a glowing ancient
artifact and shout Its power is now mine!!! Instead I will grab some
tongs, transfer  it to a hazardous materials  container, and transport
it back to my lab for study. http://www.eviloverlord.com/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] EnterpriseDB

2006-02-17 Thread Joshua D. Drake

Benjamin Arai wrote:
Is the PL support in EnterpriseDB worth the money?  Are there any 
specific benefits that I should specifically be aware of?

It depends... do you want PL/SQL (Oracle) compatibility? If so, then probably.
If not, then the base PostgreSQL PL support is more than enough, considering
you can use pljava, plperl, plpython, plphp, plruby, pl/r etc...

Sincerely,

Joshua D. Drake


 
*Benjamin Arai*

[EMAIL PROTECTED]
http://www.benjaminarai.com
 



--
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PLphp, PLperl - http://www.commandprompt.com/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Updating a sequential range of unique values?

2006-02-17 Thread Benjamin Smith
How can I update a range of constrained values in order, without having to 
resubmit a query for every single possibility? 

I'm trying to create a customer-specific sequence number, so that, for each 
customer, the number starts at one, and continues, 1, 2, 3, 4, 5... etc. with 
no values skipped. (This is necessary, as the record is used to sort values, 
and the order can be changed by the customer) 

Here's sample code that demonstrates my question: 

create table snark (custid integer not null, custseq integer not null, 
unique(custid, custseq));

insert into snark (custid, custseq) VALUES (1, 2);
insert into snark (custid, custseq) VALUES (1, 4);
insert into snark (custid, custseq) VALUES (1, 3);
insert into snark (custid, custseq) VALUES (1, 1);

begin transaction; 
DELETE FROM snark WHERE custid=1 AND custseq=2; 
UPDATE snark SET custseq=custseq-1 WHERE custid=1 and custseq>2; 

This generates an error! 
ERROR: duplicate key violates unique constraint snark_custid_key

I've tried putting an order by clause on the query: 

UPDATE snark SET custseq=custseq-1 WHERE custid=1 and custseq>2 
ORDER BY custseq ASC; 

But that got me nowhere. Also, I can't defer the enforcement of the 
constraint, as, according to the manual, this only works for foreign keys. 

Any ideas where to go from here? 

-Ben 
-- 
The best way to predict the future is to invent it.
- XEROX PARC slogan, circa 1978

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
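Since the unique constraint is checked per row and cannot be deferred in this version, one workaround (a sketch against the sample table from the post) is to park the shifted rows on values that cannot collide, then flip them back:

```sql
BEGIN;
DELETE FROM snark WHERE custid=1 AND custseq=2;
-- Step 1: move the affected rows to negative values, which cannot
-- collide with any remaining positive custseq:
UPDATE snark SET custseq = -(custseq-1) WHERE custid=1 AND custseq>2;
-- Step 2: flip them back into place:
UPDATE snark SET custseq = -custseq WHERE custid=1 AND custseq<0;
COMMIT;
```

Each intermediate row state is unique, so the per-row constraint check passes at every step.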


Re: [GENERAL] primary index permits duplicates

2006-02-17 Thread H Hale
dsuuid is my library. The library uses the standard uuid comparison functions provided with Linux. To clarify, the compare returns the same value as the uuid compare functions. From the man page: "Returns an integer less than, equal to, or greater than zero if uu1 is found, respectively, to be lexigraphically less than, equal, or greater than uu2." Is this not what Postgres expects?

As I mentioned, what I have seen is that if Postgres finds a match it normally stops, but in the case I described it does one more comparison and adds a duplicate primary key. This problem has appeared a few times over the last several months during normal use. If I clear all the data from the db, I can no longer reproduce it. Once a duplicate key is found, I can reproduce it again as I described, so this will most likely not be easy to find. I find it only because of checks for rowcount==1 after a query in the application code. Postgres never complains as far as I can tell.

Let's assume for a moment the dsuuid lib is correct: how then is it possible to get non-unique values for a primary index? Is there anything else I could do to track down the cause of this? Logging?

Tom Lane [EMAIL PROTECTED] wrote:
 H Hale [EMAIL PROTECTED] writes:
   dsuuid is a custom data type for uuids with an external library with
  comparison functions.

 Unless you can reproduce this with a standard datatype, you should
 probably file this report with the developer(s) of dsuuid.  It sounds
 like an incorrect comparison function to me.

   The compare returns 0 if equal otherwise non-zero.

 In fact, if that's what the code actually thinks the API is, that's
 the problem right there ... it's supposed to be a three-way result.

 regards, tom lane

 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings

Re: [GENERAL] primary index permits duplicates

2006-02-17 Thread Tom Lane
H Hale [EMAIL PROTECTED] writes:
   Let's assume for a moment the dsuuid lib is correct,  how then is it 
 possible to get non-unique values for a primary index?  

Well, you're assuming a fact not in evidence as far as I'm concerned.
But feel free to send in a reproducible test case, and we'll take a look.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Converting an ASCII database to an UTF-8 database

2006-02-17 Thread Rick Gigger
I have this exact problem.  I have dumped and reloaded other databases
and set the client encoding to convert them to UTF-8, but I have one
database with values that still cause it to fail, even if I specify
that the client encoding is SQL_ASCII.  How do I fix that?


On Feb 17, 2006, at 4:08 PM, Ragnar wrote:


On fös, 2006-02-17 at 22:38 +0100, Peter Eisentraut wrote:

[EMAIL PROTECTED] wrote:
How do I convert a database in the ASCII format into one of the
UTF-8 format?

ASCII is a subset of UTF-8, so you don't need to do anything.  Just
change the encoding entry in the pg_database table.  Of course, using
pg_dump would be the official way to convert a database between
any two encodings.


This will only work correctly if the database
definitely does not contain non-ASCII characters.

Assuming by ASCII format we mean that the database was
created SQL_ASCII, then it is possible that it contains
invalid UTF-8 characters, as SQL_ASCII is a 8 bit
encoding.

consider:

template1=# create database test with encoding='SQL_ASCII';
CREATE DATABASE
template1=# \connect test
You are now connected to database test.
test=# create table a (x text);
CREATE TABLE
test=# insert into a values ('á');
INSERT 33304378 1
test=# select * from a;
 x
---
 á
(1 row)

test=# update pg_database set encoding =
pg_catalog.pg_char_to_encoding('UTF8') where datname='test';
UPDATE 1
test=# select * from a;
 x
---
 á
(1 row)

test=# \connect template1
You are now connected to database template1.
template1=# \connect test
You are now connected to database test.
test=# select * from a;
 x
---

(1 row)

test=#


gnari



---(end of broadcast)---

TIP 5: don't forget to increase your free space map settings




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match
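For Rick's case, where the SQL_ASCII data contains bytes that are not valid UTF-8 and the restore fails, one hedged approach is to transcode the dump itself before restoring, assuming you know (or can guess) the real legacy encoding; Latin-1 and the names below are illustrative:

```shell
# Hypothetical names.  If the stray bytes in the SQL_ASCII dump are really
# Latin-1, transcode the dump so every byte becomes valid UTF-8, then
# restore into a UTF-8 database:
#
#   pg_dump baddb > baddb.sql
#   iconv -f LATIN1 -t UTF-8 baddb.sql > baddb.utf8.sql
#   createdb -E UNICODE fixeddb
#   psql fixeddb -f baddb.utf8.sql
#
# What iconv does to such a byte (0xe1 is 'á' in Latin-1, C3 A1 in UTF-8):
printf 'a\341b' | iconv -f LATIN1 -t UTF-8 | od -An -tx1
```

Guessing the wrong source encoding produces mojibake rather than an error, so spot-check a few known rows afterwards.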