Re: [GENERAL] optimizing postgres

2007-07-13 Thread Zlatko Matić

Hello, Tom.
I don't understand the relation between constraints and indexes.
Using EMS PostgreSQL Manager Lite, I created indexes on columns, some of
them unique.
But when I open it in PgAdmin, all such unique indexes are listed as
constraints and there are no indexes in the Indexes section. When I open it
again in EMS PostgreSQL Manager, they are listed as indexes.

Does that mean I need to create additional indexes on the same columns?
Is a constraint an index as well?
Thanks,

Zlatko


- Original Message - 
From: Tom Lane [EMAIL PROTECTED]

To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Friday, July 13, 2007 3:39 AM
Subject: Re: [GENERAL] optimizing postgres



[EMAIL PROTECTED] writes:

It turned out he was right for our current set up. When I needed to
empty the project table to re-parse data, doing a cascading delete
could take up to 10 minutes!


You mean ON DELETE CASCADE foreign keys?  Usually the reason that's
slow is you forgot to put an index on the referencing column.  PG
doesn't force you to have such an index, but unless the referenced
table is nearly static you'll want one.

I too am fairly suspicious of the N-tables-are-faster-than-another-
key-column mindset, but you'd need to do some actual experimentation
(with correctly optimized table definitions ;-)) to be sure.

regards, tom lane






Re: [GENERAL] optimizing postgres

2007-07-13 Thread Richard Huxton

Zlatko Matić wrote:

Hello, Tom.
I don't understand the relation between constraints and indexes.
Using EMS PostgreSQL Manager Lite, I created indexes on columns, some
of them unique.
But when I open it in PgAdmin, all such unique indexes are listed as
constraints and there are no indexes in the Indexes section. When I open it
again in EMS PostgreSQL Manager, they are listed as indexes.

Does that mean I need to create additional indexes on the same columns?
Is a constraint an index as well?


The default tool is psql, the command-line client. If in doubt, refer 
to that (and the manuals).


In this case, a quick look at the manuals covers it:
  http://www.postgresql.org/docs/8.2/static/indexes-unique.html
PostgreSQL automatically creates a unique index when a unique 
constraint or a primary key is defined for a table. The index covers the 
columns that make up the primary key or unique columns (a multicolumn 
index, if appropriate), and is the mechanism that enforces the constraint.
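
For illustration (table and column names invented), this is what that means in
practice; no extra CREATE INDEX is needed:

  CREATE TABLE demo (
      id   integer PRIMARY KEY,   -- creates a unique index behind the scenes
      code text UNIQUE            -- likewise
  );
  -- \d demo in psql lists both indexes under "Indexes:",
  -- even though you only declared constraints.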


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] optimizing postgres

2007-07-13 Thread PFC



The parsing has turned out to be pretty intense. It takes about 10-20
minutes for any project. When we are parsing data, it really slows
down the site's  response. I tested serving static webpages from
apache, endless loops in php, but the choke point seems to be doing
any other query on postgres when constructing a php page during
parsing.


	Do you do lots of INSERTs without explicitly using transactions?
You also need to run EXPLAIN ANALYZE on your most frequent queries.
It is very possible the slowdown is just from a forgotten index.
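
A minimal sketch of the transaction point (table and column names are made up):

  -- each INSERT outside a transaction is its own commit (slow):
  INSERT INTO parsed_rows (project_id, payload) VALUES (1, 'line 1');

  -- batching them in one transaction pays the commit cost only once:
  BEGIN;
  INSERT INTO parsed_rows (project_id, payload) VALUES (1, 'line 1');
  INSERT INTO parsed_rows (project_id, payload) VALUES (1, 'line 2');
  -- ... many more rows ...
  COMMIT;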


As an example, the original designer specified separate tables for
each project. Since they were all the same structure, I suggested
combining them into a single table with a project_id column, but he
said it would take too long to query. I was suspicious, but I went
with his design anyway.


From the small size of the dataset I don't see a justification for 
this...


It turned out he was right for our current set up. When I needed to
empty the project table to re-parse data, doing a cascading delete
could take up to 10 minutes! I cut re-parsing time in half by just
dropping the table and creating a new one. Which was an okay thing to
do, since the table only belonged to one project anyway. But I hate to
think how long it would have taken to do a delete, cascading to child
tables, if all the project data was in a single table.


	That's probably because you forgot to create an index on the referencing
column (the foreign key column in the child table). Such indexes are not
created automatically.
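
For example (names invented), if a child table references projects with
ON DELETE CASCADE, the delete has to find the child rows for every parent row
it removes:

  CREATE TABLE items (
      project_id integer REFERENCES projects(id) ON DELETE CASCADE,
      data       text
  );
  -- without this, each cascaded delete scans items sequentially
  CREATE INDEX items_project_id_idx ON items (project_id);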



Since I'm not an expert in Postgres database design, I'm assuming I've
done something sub-optimal. Are there some common techniques for
tuning postgres performance? Do we need beefier hardware?

Or is it a problem with how PHP or apache pools connections to the
database?


	It depends on a lot of stuff, but the first thing is to run EXPLAIN  
ANALYZE on your queries and post the results here.
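
That is, prefix one of the real slow queries with EXPLAIN ANALYZE and post the
plan it prints, e.g. (the query itself is only a placeholder):

  EXPLAIN ANALYZE
  SELECT * FROM items WHERE project_id = 42;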




Re: [GENERAL] Limit number connections by IP

2007-07-13 Thread Stefan Kaltenbrunner
tabai wrote:
 Hi
 
 I know that I can limit the total number of connections in
 postgresql.conf with max_connections, but... can I limit the max
 number of connections from a specific IP?
 
 For example, I'd like a total max of 40 connections, but the same IP
 shouldn't be able to have more than 5 connections open.
 
 Is it possible?

no - you can limit the maximum number of connections on a per-database
and also a per-role basis. If you really need a per-source-address
limitation, look into using whatever firewall solution is available on
your OS.
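
The per-database and per-role limits look like this (names are examples only):

  ALTER DATABASE mydb CONNECTION LIMIT 40;
  ALTER ROLE webuser CONNECTION LIMIT 5;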


Stefan



Re: [GENERAL] Limit connections per username

2007-07-13 Thread Stefan Kaltenbrunner
Nik wrote:
 PostgreSQL 8.2 on Windows 2003 Server.
 
 Is it possible to limit number of connections per username?

yes - look for CONNECTION LIMIT on:

http://www.postgresql.org/docs/8.2/static/sql-createrole.html
and
http://www.postgresql.org/docs/8.2/static/sql-alterrole.html
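
For example (role name invented):

  CREATE ROLE appuser LOGIN CONNECTION LIMIT 5;
  -- or, for an existing role:
  ALTER ROLE appuser CONNECTION LIMIT 5;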


Stefan



[GENERAL] how to measure performance slony

2007-07-13 Thread angga erwina
Hi all, can anybody tell me how to measure
Slony replication performance? Where can I
find a reference for this topic?



 



Re: [GENERAL] Updates/Changes to a database

2007-07-13 Thread imageguy
On Jul 12, 4:03 pm, [EMAIL PROTECTED] (Steve Crawford)
wrote:
 imageguy wrote:
  I am building an application with Postgres as the backend foundation.
  This is my first application, and it has struck me that as we add
  features/functionality to the application and database with each new
  version, we will need some method of obtaining the current structure
  of the customer's database and then modifying/updating the structure so
  that it matches the application revision standard.

  Are there pre-existing tools out there that do this sort of thing?

  My present direction is to create a small SQLite db that holds the
  expected structure, compare each table against the SQL
  information_schema.columns, and then create a series of SQL commands
  to be executed that would add columns and/or tables as needed.

  -- any thoughts or comments ?

 Not sure why you need SQLite when you, *ahem*, have and are modifying
 PostgreSQL. All the info you seek is in the system tables. To get a
 jump-start, try running psql with the -E option to see the backend
 queries that generate the displays of tables and table layouts.

 How you go about performing the updates will depend on many things:

 Are other apps running against the DB - especially the tables your app uses?

 Will the app be running on various versions of PG or will you control that?

 Will you allow any version to any version updates or only updates to the
 next version?

 What about the ability to downgrade to prior versions?

 Will the client-side be updated simultaneously with the database schema?

 What permissions will be required to perform the update?

 Updates in a sophisticated system will not be as simple as just matching
 table structures. You need to consider alterations to constraints -
 especially foreign-key constraints. Also the effect on views. It is
 likely that any version-to-version updates will need to be done in a
 specific and tested order. As a simple example, you would need to update
 a table to add a column before updating a view that refers to that column.

 One thing that might be useful is to create a simple function that just
 returns a version number:

 create or replace function my_app_version()
  returns text
  language sql
  as 'select ''1.01''::text;';

 You can use this as needed. The client application can check the
 database-side version and either modify its behavior appropriately (ie.
 hide unavailable features) or refuse to start if there is an
 irreconcilable mismatch.

 You could also create scripts to verify your database setup against the
 returned version and report errors, and you can base your update
 activity on the returned value. For example:

 1. Test that existing tables/views/indexes/etc. match the returned
 version number - exit if not

 2. If yes, check for availability of handler to change existing version
 to desired version - exit if one isn't available.

 3. Perform backup.

 4. Perform update including update of version-number function. As
 appropriate to your situation, you could change the version-number
 function at the start of your operation, say from '1.01' to '1.01-1.15'
 and program the clients to display an appropriate message if they try to
 connect during the upgrade. You will, of course, need to use
 transactions, locks, etc. to prevent access during the upgrade.

 5. Verify database against new value of my_app_version()
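
 A bare-bones sketch of steps 4 and 5 (the version numbers and the ALTER are
 only placeholders):

  -- mark the upgrade as in progress; commit this separately so clients see it
  create or replace function my_app_version()
   returns text language sql as 'select ''1.01-1.15''::text;';

  begin;
  alter table customer add column email text;   -- the actual schema changes
  create or replace function my_app_version()
   returns text language sql as 'select ''1.15''::text;';
  commit;

  select my_app_version();   -- verify: should now report 1.15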

 Cheers,
 Steve


Thanks very much for these thoughtful questions/hints/suggestions.
Much food for thought.

FWIW I was planning to use SQLite to store the new schema that the PG
database should be upgraded to.  SQLite being simple, fast and
portable so it can easily be distributed with the next version upgrade
of the client programs.










Re: [GENERAL] Force SSL / username combination

2007-07-13 Thread Robert Treat
On Friday 13 July 2007 05:21, Gregory Stark wrote:
 Robert Treat [EMAIL PROTECTED] writes:
  I'm guessing the lack of response is due to a lack of knowledge on the
  topic. Personally I've never quite understood how you'd make use of the
  sslinfo functions to manage connections without something like on commit
  triggers, so I hope you'll consider submitting some documentation once
  you figure it out.

 Well if you do the popular technique of doing everything through stored
 procedures (in our case plpgsql functions) then you can have those
 functions check. I don't like that approach myself though.


Right. This approach always seemed too late to me, since the user was 
already connected at that point. 

 You could also have a column with a default value which uses the sslinfo to
 retrieve the common name. Or you could have a trigger which throws an error
 if that function doesn't return a valid value. Either way you would be doing
 a lot more work than necessary since it would be checking every row, not
 once per session. And it wouldn't stop selects.

 I think what you really want is an ON CONNECT trigger for this.

lol! I surely meant ON CONNECT triggers above! I'm pretty sure PostgreSQL can 
do on commit triggers right now. :-D

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [GENERAL] how to measure performance slony

2007-07-13 Thread Richard Huxton

angga erwina wrote:

hi all, anybody can you tell me how to measure
performance slony to replicate?? where site is i can
get reference for this topik??


What performance are you trying to measure?

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Big table with UNION ALL or partitioning with Tsearch2

2007-07-13 Thread Simon Riggs
On Thu, 2007-07-12 at 11:19 -0700, Benjamin Arai wrote:

 I am running the latest 8.2.4.  I am using GIN.  The data is static.  I do
 a batch upload every week of about 500GB and the data is never touched
 again; it is always adds and never deletes or updates.

Partitioning will certainly help the index build times, even if it
doesn't specifically help with the queries.

 From your slides you state:
 
 GIN_FUZZY_SEARCH_LIMIT - maximum number of returned rows
 - GIN_FUZZY_SEARCH_LIMIT=0, disabled by default
 
 When I do a search with say LIMIT 100 isn't this essentially the same thing?

Both restrict the number of matches found, but they're not the same
thing. One is for the query as a whole, the other is for one index scan
on a GIN index.
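
Roughly (table, column and numbers are placeholders; assumes a tsearch2 GIN
index on idxfti):

  SET gin_fuzzy_search_limit = 10000;  -- soft cap per GIN index scan
  SELECT * FROM docs
   WHERE idxfti @@ to_tsquery('foo & bar')
   LIMIT 100;                          -- hard cap on the query result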

-- 
  Simon Riggs
  EnterpriseDB  http://www.enterprisedb.com




Re: [GENERAL] Force SSL / username combination

2007-07-13 Thread Gregory Stark
Robert Treat [EMAIL PROTECTED] writes:

 I'm guessing the lack of response is due to a lack of knowledge on the topic. 
 Personally I've never quite understood how you'd make use of the sslinfo 
 functions to manage connections without something like on commit triggers, so 
 I hope you'll consider submitting some documentation once you figure it out.

Well if you do the popular technique of doing everything through stored
procedures (in our case plpgsql functions) then you can have those functions
check. I don't like that approach myself though.

You could also have a column with a default value which uses the sslinfo to
retrieve the common name. Or you could have a trigger which throws an error if
that function doesn't return a valid value. Either way you would be doing a lot
more work than necessary since it would be checking every row, not once per
session. And it wouldn't stop selects.
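
A rough sketch of the column-default and trigger variants (assumes the sslinfo
contrib module is installed; table and function names are invented):

CREATE TABLE audit_log (
    who  text DEFAULT ssl_client_dn_field('commonName'),  -- record the cert CN
    note text
);

CREATE OR REPLACE FUNCTION require_ssl_cn() RETURNS trigger AS $$
BEGIN
    IF ssl_client_dn_field('commonName') IS NULL THEN
        RAISE EXCEPTION 'no SSL client certificate CN for this session';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_log_require_cn
    BEFORE INSERT OR UPDATE ON audit_log
    FOR EACH ROW EXECUTE PROCEDURE require_ssl_cn();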

I think what you really want is an ON CONNECT trigger for this.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com




Re: [GENERAL] order is preserved by outer select?

2007-07-13 Thread Richard Huxton

Joshua N Pritikin wrote:

Here's another easy (stupid?) question:

  SELECT data.* FROM (SELECT * FROM foo ORDER BY bar) AS data

Will the returned rows still be ordered by bar?


In practice, for this precise query, yes.

However, embedding this into a larger query or adding a WHERE clause 
could re-order the results. For example, if PG uses a hash anywhere 
during the query you'll lose ordering.


More importantly, the specifications say that without an ORDER BY the 
order of a result-set is undefined. So if the developers find it's more 
efficient to destroy order they are free to do so. That means you might 
upgrade and have your application stop working.


So - good practice says don't rely on ORDER BY except in the outermost 
level of your query.
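
In other words, if the final result needs to be ordered, write it as:

  SELECT data.* FROM (SELECT * FROM foo) AS data ORDER BY data.bar;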


--
  Richard Huxton
  Archonet Ltd



[GENERAL] order is preserved by outer select?

2007-07-13 Thread Joshua N Pritikin
Here's another easy (stupid?) question:

  SELECT data.* FROM (SELECT * FROM foo ORDER BY bar) AS data

Will the returned rows still be ordered by bar?

-- 
Make April 15 just another day, visit http://fairtax.org



Re: [GENERAL] Updates/Changes to a database

2007-07-13 Thread Alexander Staubo

On 7/12/07, imageguy [EMAIL PROTECTED] wrote:

Are there pre-existing tools out there that does this sort of thing ??


Rails and Django -- two popular web development frameworks -- support
a simple mechanism for doing schema migrations.

In Rails, in particular, each schema change is encapsulated as a
class. Each such change is called a migration, and implements two
methods for effecting and rolling back the migration, respectively.

Since these are Ruby classes, they can do anything at all -- execute
SQL, flush caches, restart daemons, etc. Transactions ensure that each
migration is executed atomically.

All the migrations are then collected in a directory, and numbered:

$ ls -l db/migrate
...
-rw-r--r--   1 alex  alex  1691 Jun 28 15:21
163_send_dns_message_to_domain_owners.rb
-rw-r--r--   1 alex  alex   711 Jun 28 20:56 164_create_image_batches.rb
-rw-r--r--   1 alex  alex  1087 Jun 28 17:12 165_delete_some_dns_messages.rb
-rw-r--r--   1 alex  alex   970 Jul  2 14:39
166_add_reader_to_visitor_transistion.rb
-rw-r--r--   1 alex  alex  1267 Jul  2 15:33 170_create_indexes3.rb

In the database, a dedicated table is used to store the last applied
migration number. Rails itself provides a command that sets the
database to a specific migration number, allowing you to roll forward
and backward in the schema evolution.
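
On the database side that bookkeeping can be as small as (a sketch, not Rails'
actual DDL):

CREATE TABLE schema_info (version integer NOT NULL);
INSERT INTO schema_info VALUES (0);
-- after successfully applying migration 170:
UPDATE schema_info SET version = 170;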

I know somebody has released an independent schema migration tool
based on numbered SQL scripts, but I don't remember the name. Might be
of use to you. I recommend looking on SourceForge or FreshMeat.

Alexander.



Re: [GENERAL] One Large Table or Multiple DBs?

2007-07-13 Thread Gregory Stark
Mike [EMAIL PROTECTED] writes:

 Hi,

 What is an efficient approach to storing a web application's user
 data?

I would recommend one large DB with large tables.

You might consider using partitioning to actually store the data separately.
But even then I would not consider it until it was actually a problem.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com




Re: [GENERAL] Restoring the DB from others user DB Backup.

2007-07-13 Thread Jim Nasby

On Jul 4, 2007, at 5:22 AM, Mavinakuli, Prasanna (STSD) wrote:

We have two separate databases owned by two different users. The
requirement is: using a backup dump of the database owned by one
user, we need to restore the other DB, which is owned entirely
by the other user.

The problem we are facing is that if we try to use the pg_restore
command, it complains that the user doesn't exist. Could you please
direct me to a way to achieve this? Unfortunately I have to use
PostgreSQL 7.4.2, where there is no support for the -O option.


pg_restore will just send SQL commands to STDOUT if you don't give it  
a database to connect to. You can then use something like 'grep -v'  
to strip out all the ownership commands for the affected user.

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Accent-insensitive search

2007-07-13 Thread Alvaro Herrera
turbovince wrote:
 Hello, I would like to perform some accent-insensitive searches on my
 database, which means that a select query with condition, say, WHERE
 NAME = 'HELLÔ' would return records where name is 'HELLO' as well.
 
 My data is encoded in Unicode (UTF8) and therefore I cannot use
 Postgres's to_ascii() trick to achieve accent-insensitive searches.

Use the convert() function to turn it into Latin1 (or whatever encoding
you prefer), then to_ascii.
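
For example (table name is made up, and the exact convert() signature depends
on your PostgreSQL version):

SELECT * FROM people
 WHERE to_ascii(convert(name, 'UTF8', 'LATIN1'), 'LATIN1')
     = to_ascii(convert('HELLÔ', 'UTF8', 'LATIN1'), 'LATIN1');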

Note that if you have strange chars it will fail anyway (for example
there was this guy not long ago complaining in pgsql-es-ayuda that it
failed when he had the mu greek letter in a product description).

-- 
Alvaro Herrera http://www.flickr.com/photos/alvherre/
"The wise man speaks because he has something to say; the fool, because
he has to say something." (Plato)



Re: [GENERAL] Strange Problem

2007-07-13 Thread Jim Nasby

On Jul 9, 2007, at 9:22 AM, Gustavo Ces wrote:

Hi all,

I've got a strange problem, hope you can help. I've got a
table (a) with n tuples and two fields, birthplace and birth
date. There's another table (say b) with m tuples, where I've got
a lot of polygons (with their code, same as birthplace). Now I want
to make a join, to obtain how many people correspond to each place
polygon. I write:

create table personsbyplace as
select a.birthplacecode as code, count(*) as peoplecount
from a, b
where a.birthplacecode = b.polygoncode
group by birthplacecode;

Well, what I can't understand is why SELECT sum(peoplecount) FROM
personsbyplace is greater than the number of rows (n)!

In b I've got all the places, and table a is a list of people, with
their place and birth date. The sum(peoplecount) can't be greater than
the number of persons! Where is the mistake?


Probably because there's a duplicated polygoncode (might want to  
stick some underscores in the field names so they're easier to read,  
btw).


My question is: if you just want a count of people grouped by  
birth_place_code, why join to b at all?
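
i.e. simply:

SELECT a.birthplacecode AS code, count(*) AS peoplecount
  FROM a
 GROUP BY a.birthplacecode;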

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Fastest way to join materalized view to child tables

2007-07-13 Thread Jim Nasby

On Jul 9, 2007, at 10:07 PM, Postgres User wrote:

materialized view -  view_a
 child tables   -  table_a, table_b, table_c

Here's my question - what's the fastest way to retrieve rows from each
of the child tables after I get results from view_a ?
I don't like using temp tables in Postgres (too much pain in the
past), so first selecting into a temp table which could subsequently
be joined against the child tables isn't appealing to me.

The result set from materialized view_a will never exceed 60 rows, so
I'm thinking about this:
a) LOOP on a SELECT FROM view_a
b) for each record, add the row id to one of 3 comma delimited strings
(one per child table)
c) perform a SELECT WHERE IN (delimited_string) from each child table


Build an array of IDs and then use that in your 3 queries with the  
ANY operator.
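
Something like this (the id/row_id column names are assumed from your
description):

-- collect the ids from the view once, e.g. in plpgsql:
--   ids := ARRAY(SELECT row_id FROM view_a);
-- then one query per child table instead of a delimited string:
SELECT * FROM table_a WHERE id = ANY (ARRAY[1, 2, 3]);
SELECT * FROM table_b WHERE id = ANY (ARRAY[1, 2, 3]);
SELECT * FROM table_c WHERE id = ANY (ARRAY[1, 2, 3]);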

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] Function to ADD a value into each column of real[]

2007-07-13 Thread Jim Nasby

On Jul 10, 2007, at 2:38 PM, orehon wrote:

Hello,
 if I have this table:
 CREATE TABLE teste (
    id      integer,
    picos   real[]
 );

and insert these values:
INSERT INTO teste VALUES(1, '{{1,2,3},{1,2,3},{1,2,3},{1,2,3},
{1,2,3}}');
INSERT INTO teste VALUES(2, '{{1,2,3},{1,2,3},{1,2,3},{1,2,3},{1,2,3},
{1,2,3},{1,2,3}}');
INSERT INTO teste VALUES(3, '{{1,2,3},{1,2,3},{1,2,3},{1,2,3},{1,2,3},
{1,2,3},{1,2,3},{1,2,3}}');
INSERT INTO teste VALUES(4, '{{1,2,3},{1,2,3},{1,2,3}}');

I need to write a function to return all the records of this table and
add a 0 at the beginning of each array.
Ex.:

1, {{0,1,2,3},{0,1,2,3},{0,1,2,3},{0,1,2,3},{0,1,2,3}}
.


You'll need something like...
FOR i IN array_lower(arr, 1) .. array_upper(arr, 1) LOOP
    arr[i] := 0 || arr[i];   -- prepend a 0
END LOOP;

You might need some special logic to handle either the second pass  
through the loop if this ends up giving you a single-dimension array.

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] TOAST, large objects, and ACIDity

2007-07-13 Thread Jim Nasby

On Jul 10, 2007, at 3:47 AM, Benoit Mathieu wrote:
I want to use postgres to store data and large files, typically
audio files from 100 KB to 20 MB. For those files, I just need to
store and retrieve them, in an ACID way. (I don't need search,
substring, or other functionality.)
I saw postgres offers at least 2 methods: a bytea column with TOAST,
or the large objects API.


I wonder what the differences between the 2 methods are.
* I found that large objects need a vacuum after delete to really
release space. Can that be handled by a trigger or automatic
vacuum?
* Large objects are used via a special API available in the libpq C
client library.
* I really care about keeping my transactions fully ACID. The documentation
on large objects doesn't explicitly say whether lo_import and lo_export (and
other primitives) are fully ACID. Any ideas?


AFAIK large objects are ACID, at least as ACID as you can be when  
dealing directly with the filesystem (lo_export). Bytea is fully ACID.



* I am going to benchmark insertion and reads with these 2 methods.


I'd be interested to know what you find. Unless there's a notable  
speed difference, I'd probably just go with bytea.

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





Re: [GENERAL] how to resolve invalid primary checkpoint

2007-07-13 Thread Jim Nasby

On Jul 10, 2007, at 4:08 AM, lim siang min wrote:

I'm new to PostgreSQL and need to provide support on any IT-related problem.
One of my customers is not able to start the PostgreSQL service.
The log says:
record with zero length at 0/2E16910
invalid primary checkpoint record
record with zero length at 0/2E168D0
invalid secondary checkpoint record
panic: could not locate a valid checkpoint record.

How should I resolve this issue?
I'd really appreciate any input. Thanks.


Sounds like your WAL files have been corrupted, which indicates  
faulty hardware (assuming that you haven't set fsync=off). I don't  
really know if there's any way to recover from this; you may have to  
restore from a backup. In any case, I'd question the hardware that  
you're running on.

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





[GENERAL] multirow insert

2007-07-13 Thread Zlatko Matić
When using a multirow INSERT INTO ... VALUES command, are all rows inserted in a
batch, or row by row?
Regards,

Zlatko

Re: [GENERAL] multirow insert

2007-07-13 Thread A. Kretschmer
On Fri, 13.07.2007, at 18:50:26 +0200, Zlatko Matić wrote the following:
 When using a multirow INSERT INTO ... VALUES command, are all rows inserted in a
 batch, or row by row?

Within one transaction, yes.
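
For example (table and values invented):

INSERT INTO t (a, b) VALUES
    (1, 'x'),
    (2, 'y'),
    (3, 'z');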

Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



[GENERAL] restore dump to 8.1.9

2007-07-13 Thread mljv
Hi 

I tried to restore a dump from version 8.1.8 to 8.1.9, and in one table I had
a value of 1.7383389519587511e-310.

I got the following error message:

pg_restore: ERROR:  type double precision value out of range: underflow
CONTEXT:  COPY gesamtpunktecache, line 925001, column 
gc_gesamtsiege: 1.7383389519587511e-310
pg_restore: [archiver (db)] error returned by PQendcopy: ERROR:  type double 
precision value out of range: underflow
CONTEXT:  COPY gesamtpunktecache, line 925001, column 
gc_gesamtsiege: 1.7383389519587511e-310

I tried to insert a value of 1.7383389519587511e-310 into an 8.1.8 database, but
I didn't manage it (I got the same error message). But in one of my databases
I can select a value like this.

Very strange. I just wanted to tell you, in case it is of any interest, and
maybe I can help with further information.
 
kind regards,
janning




Re: [GENERAL] restore dump to 8.1.9

2007-07-13 Thread Jim Nasby

On Jul 13, 2007, at 2:11 PM, [EMAIL PROTECTED] wrote:
I tried to restore a dump from version 8.1.8 to 8.1.9, and in one table I had
a value of 1.7383389519587511e-310.

I got the following error message:

pg_restore: ERROR:  type double precision value out of range: underflow
CONTEXT:  COPY gesamtpunktecache, line 925001, column
gc_gesamtsiege: 1.7383389519587511e-310
pg_restore: [archiver (db)] error returned by PQendcopy: ERROR:  type double
precision value out of range: underflow
CONTEXT:  COPY gesamtpunktecache, line 925001, column
gc_gesamtsiege: 1.7383389519587511e-310

I tried to insert a value of 1.7383389519587511e-310 into an 8.1.8
database, but I didn't manage it (I got the same error message). But in
one of my databases I can select a value like this.

Very strange. I just wanted to tell you, in case it is of any interest,
and maybe I can help with further information.


Is this on *identical* hardware, and were the binaries built the  
same? Floating point stuff is very hardware dependent, and may have  
some dependencies on compiler, etc as well.

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)





[GENERAL] pg_dump vs schemas

2007-07-13 Thread Francisco Reyes
pg_dump by default puts at the top 
SET search_path = public,pg_catalog;


This is assuming a plain vanilla setup where no schemas other than public 
have been created.


I however noticed that pg_dump also does this:
ALTER TABLE public.mytable OWNER TO pgsql;

Shouldn't the public. be left out?
I verified that even if multiple tables exist with the same name only the 
table in the first referenced schema in the path will be deleted.


By the same token shouldn't all references to schemas be left out?
In the case there are reasons why the schema is referenced, perhaps create a 
parameter in pg_dump to omit the schema.


The rationale is to be able to easily move schemas in the target restore.
Especially if one was doing an entire database.

Alternatively is there any easy way to take all data in one schema and load 
it into a target DB and a different schema?
The default produced by pg_dump would be a problem because of the
schema-qualified references.



As for why I am doing this schema move..
From what I can tell it may be best to have tsearch in its own schema, so
I either move tsearch out of public, or my data out of public. I figure,
since public is what tsearch and other utilities like it target, it may be
easier to move the data out of public.


Currently I'm trying a small data set to see how this works and whether it is
better to move the data or tsearch out of public.





Re: [GENERAL] multirow insert

2007-07-13 Thread Tom Allison


On Jul 13, 2007, at 2:11 PM, A. Kretschmer wrote:



On Fri, 13.07.2007, at 18:50:26 +0200, Zlatko Matić wrote the following:
When using a multirow INSERT INTO ... VALUES command, are all rows
inserted in a batch, or row by row?


Within one transaction, yes.


Trust me... It's MUCH faster than doing each insert separately.

I have a little program that got a 100x improvement in performance by
changing my SQL to use a multirow INSERT INTO ... VALUES.





Re: [GENERAL] pg_dump vs schemas

2007-07-13 Thread Michael Glaesemann


On Jul 13, 2007, at 19:10 , Francisco Reyes wrote:

Alternatively is there any easy way to take all data in one schema  
and load it into a target DB and a different schema?


You might try using the -n flag with pg_dump, replacing schema1 with  
schema2 in the dump file, and loading the altered dump into the new  
database. There may also be some tricks you can play with pg_restore  
(on a dump file created with pg_dump -Fc), though I don't know  
specifically what offhand.



Michael Glaesemann
grzm seespotcode net





[GENERAL] PostgreSQL equivelant of this MySQL query

2007-07-13 Thread Madison Kelly

Hi all,

  I am reading through some docs on switching to Postfix with a SQL 
backend. The docs use MySQL but I want to use PgSQL so I am trying to 
adapt as I go. I am stuck though; can anyone help give me the PgSQL 
equiv. of:


SELECT 
CONCAT(SUBSTRING_INDEX(usr_email,'@',-1),'/',SUBSTRING_INDEX(usr_email,'@',1),'/') 
FROM users WHERE usr_id=1;


  If the 'usr_email' value is '[EMAIL PROTECTED]' this should return 
'domain.com/person'.


  Thanks for the help!

Madison



Re: [GENERAL] PostgreSQL equivelant of this MySQL query

2007-07-13 Thread Madison Kelly

Madison Kelly wrote:

Hi all,

  I am reading through some docs on switching to Postfix with a SQL 
backend. The docs use MySQL but I want to use PgSQL so I am trying to 
adapt as I go. I am stuck though; can anyone help give me the PgSQL 
equiv. of:


SELECT 
CONCAT(SUBSTRING_INDEX(usr_email,'@',-1),'/',SUBSTRING_INDEX(usr_email,'@',1),'/') 
FROM users WHERE usr_id=1;


  If the 'usr_email' value is '[EMAIL PROTECTED]' this should return 
'domain.com/person'.


  Thanks for the help!

Madison


Bah, answered my own question after posting. :)

For the record:

SELECT substring(usr_email FROM '@(.*)')||'/'||substring(usr_email FROM 
'(.*)@') FROM users WHERE usr_id=1;


Sorry for the line noise!

Madi



Re: [GENERAL] PostgreSQL equivelant of this MySQL query

2007-07-13 Thread Steve Atkins


On Jul 13, 2007, at 6:39 PM, Madison Kelly wrote:


Hi all,

  I am reading through some docs on switching to Postfix with a SQL  
backend. The docs use MySQL but I want to use PgSQL so I am trying  
to adapt as I go. I am stuck though; can anyone help give me the  
PgSQL equiv. of:


SELECT CONCAT(SUBSTRING_INDEX(usr_email,'@',-1),'/',SUBSTRING_INDEX 
(usr_email,'@',1),'/') FROM users WHERE usr_id=1;


  If the 'usr_email' value is '[EMAIL PROTECTED]' this should  
return 'domain.com/person'.


A direct conversion would be something like:

select split_part(usr_email, '@', 2) || '/' || split_part(usr_email,  
'@', 1) from users where usr_id=1;


You could also do this:

select regexp_replace(usr_email, '(.*)@(.*)', '\2/\1') from users  
where usr_id=1;


http://www.postgresql.org/docs/8.2/static/functions-string.html and  
http://www.postgresql.org/docs/8.2/static/functions-matching.html are  
the bits of the docs that cover these functions.


Cheers,
  Steve




Re: [GENERAL] pg_dump vs schemas

2007-07-13 Thread Tom Lane
Francisco Reyes [EMAIL PROTECTED] writes:
 I however noticed that pg_dump also does this:
 ALTER TABLE public.mytable OWNER TO pgsql;

 Shouldn't the public. be left out?

Probably in an ideal world, but that code is far from ideal --- it's
dealing with a bunch of considerations including compatibility with dump
files from old pg_dump versions with assorted bugs.  I'm hesitant to
remove the schema spec for fear we'd end up with underspecified output
in some corner case or other.

regards, tom lane



Re: [GENERAL] PostgreSQL equivelant of this MySQL query

2007-07-13 Thread Madison Kelly

Steve Atkins wrote:


On Jul 13, 2007, at 6:39 PM, Madison Kelly wrote:


Hi all,

  I am reading through some docs on switching to Postfix with a SQL 
backend. The docs use MySQL but I want to use PgSQL so I am trying to 
adapt as I go. I am stuck though; can anyone help give me the PgSQL 
equiv. of:


SELECT 
CONCAT(SUBSTRING_INDEX(usr_email,'@',-1),'/',SUBSTRING_INDEX(usr_email,'@',1),'/') 
FROM users WHERE usr_id=1;


  If the 'usr_email' value is '[EMAIL PROTECTED]' this should return 
'domain.com/person'.


A direct conversion would be something like:

select split_part(usr_email, '@', 2) || '/' || split_part(usr_email, 
'@', 1) from users where usr_id=1;


You could also do this:

select regexp_replace(usr_email, '(.*)@(.*)', '\2/\1') from users where 
usr_id=1;


http://www.postgresql.org/docs/8.2/static/functions-string.html and 
http://www.postgresql.org/docs/8.2/static/functions-matching.html are 
the bits of the docs that cover these functions.


Cheers,
  Steve


Thanks Steve!

  Those look more elegant than what I hobbled together. :)

Madi
