Re: [GENERAL] Query string is too long

2004-01-09 Thread Doug McNaught
Dino Nardini <[EMAIL PROTECTED]> writes:

> "Error while executing the query; Query string is too long"
>
> The entry fields are set to "text" datatype, so should easily handle
> the web page content.  I'm thinking that the problem may be related to
> the driver.  The content management system manages pages of text on
> client websites.  A query string limitation wouldn't go over well with
> the clients, heh :)
>
> I'm wondering if the problem is related to the outdated unixODBC
> driver that was included with ColdFusion 5, or perhaps there is a
> setting that I am overlooking in PG?  The fact that the EMS DataPump
> truncated the large text causes some concern.

There is (pretty much) no query size limit in modern versions of PG.
It used to be 8k or so, and some old drivers still try to enforce that
limit in order to avoid database errors.  

> I've also posted a similar message in the ColdFusion Forum at the
> Macromedia site and I'm just now compiling the latest unixODBC
> driver.  I have a test box that I can play around with. I'm hoping to
> replace the current unixODBC driver that was included with CF 5 with
> the latest driver.  Hope for the best :)

Sounds like the best approach--good luck.

> Sadly, my Friday nights have been reduced to this... ha :)

I feel your pain--I just got back from 12+ hours at work doing a
software upgrade...

-Doug



[GENERAL] Query string is too long

2004-01-09 Thread Dino Nardini
Hey folks,

I've been migrating a content management system from MS SQL Server 7 over 
to PostgreSQL.  I used the EMS DataPump utility 
(http://www.ems-hitech.com/index.phtml) to migrate the database, but 
noticed that in the process it truncated a few large text blocks.   I then 
plugged the ColdFusion web application into the PG database and ran into 
problems trying to insert large text blocks into the database.  Error as 
follows:

"Error while executing the query; Query string is too long"

The entry fields are set to "text" datatype, so should easily handle the 
web page content.  I'm thinking that the problem may be related to the 
driver.  The content management system manages pages of text on client 
websites.  A query string limitation wouldn't go over well with the 
clients, heh :)

I'm wondering if the problem is related to the outdated unixODBC driver 
that was included with ColdFusion 5, or perhaps there is a setting that I 
am overlooking in PG?  The fact that the EMS DataPump truncated the large 
text causes some concern.

I've also posted a similar message in the ColdFusion Forum at the 
Macromedia site and I'm just now compiling the latest unixODBC 
driver.  I have a test box that I can play around with. I'm hoping to 
replace the current unixODBC driver that was included with CF 5 with the 
latest driver.  Hope for the best :)

For the record, I'm running ColdFusion 5 and PostgreSQL 7.4.1 on a (heavily 
patched) RH 7.2 box.

Any suggestions would be greatly appreciated.

Sadly, my Friday nights have been reduced to this... ha :)

Cheers... Dino

_
Rivendell Software - Dynamic Web Solutions
http://www.rivendellsoftware.com
Tel 902.461.1697
Fax 902.461.3765


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Postgres planner bug in 7.3.x and 7.4.1 ?

2004-01-09 Thread Tom Lane
Laurent Perez <[EMAIL PROTECTED]> writes:
> We're experiencing weird behaviours with both Postgres 7.3.x and 7.4.1 
> versions, relating to views based on views including fields based upon 
> plpgsql functions.

There are a couple of things going on here.  The principal one is a
limitation that subqueries can't be flattened into the parent query
if they are below an outer join and have non-nullable output columns
(which is presently defined as "any column that's not a simple Var").
An output column that isn't nullable wouldn't be guaranteed to go to
null in null-extended rows, thus breaking the outer join semantics.

Since the subquery doesn't get flattened, all its output columns will
still get evaluated for each row demanded from the subquery.  The
optimization that discards unused output columns (including your
expensive function call) is part of the flattening process.

That applies to your problem because f1(t1.num_person) isn't nullable
--- that is, it might produce a non-null output even when num_person
is null.  Had you declared f1 as strict, then in principle the system
could recognize the column as nullable.  At the moment it won't, but
I'll see what I can do about fixing that for 7.5.

The other thing that's going on is a plain old bug.  Given the above
restriction, the system should not have flattened the example involving
v4, but it did so because of an oversight in handling nested views.
I have fixed that here:
http://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/optimizer/prep/prepjointree.c
That's probably not really what you wanted to hear, since that will
guarantee the inefficient behavior in both cases :-(

The easiest workaround I can think of is to avoid putting the expensive
function call below an outer join, though that may not be real practical
for you.  Alternatively, since you are evidently concerned with cases
where the function-call column won't be referenced at all, maybe you
could make a variant version of the view that doesn't contain that
column in the first place.
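
For illustration, a rough sketch of both workarounds in SQL (the names f1, t1,
and the columns are stand-ins based on the report, and the function body is a
placeholder):

CREATE FUNCTION f1(int4) RETURNS int4 AS '
BEGIN
    RETURN $1;   -- stand-in for the real, expensive body
END;
' LANGUAGE plpgsql STRICT;   -- STRICT: returns NULL on NULL input, so the
                             -- output column could be recognized as nullable
                             -- (per the above, not before 7.5)

-- a variant view without the expensive column, for callers that never use it:
CREATE VIEW v1_cheap AS
    SELECT num_adr, num_person FROM t1;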

regards, tom lane



Re: [GENERAL] ERROR: Cannot insert a duplicate key into unique index pg_class_relname_nsp_index

2004-01-09 Thread Tom Lane
Kragen Sitaker <[EMAIL PROTECTED]> writes:
> We'll run the experiment again.  Should we try 7.3.3 too?

No, I don't think 7.3.3 is likely to behave differently from 7.3.4
as far as this goes.  What would actually be interesting is whether
you can make 7.4 fail.

> Well, it's possible the daemon could have gotten killed while it was
> inside the transaction, followed shortly by a shutdown of postgres ---
> a dozen times or more --- and during development, we frequently kill
> the daemon so that it will restart with new code.

But you're seeing these errors in production, on a machine where you're
not doing that, no?  In any case there is code in place to clean out
a temp schema of any pre-existing junk when a new backend starts to use
it ... perhaps there's a bug in that, but that code was not changed
since 7.3.2 ...

Another question: are you fairly confident that if the same bug had been
in 7.3.2, you would have found it?  Were there any changes in your usage
patterns around the time you adopted 7.3.4?

> For our application, we shut down and restart Postgres every night
> because it seems to make VACUUM FULL work better.

[ itch... ]  Let's not discuss the wisdom of that just now, but ...

> I wonder why those old namespaces are left around?

They're supposed to be; there's no point in deleting the pg_namespace
entry only to recreate it the next time someone needs it.  The real
question is whether you see any tables belonging to those namespaces.
The count(*) query on pg_class looked like a fine way to watch that.

> BTW, we're using the 7.3.4 PGDG RPMs with an extra patch to add
> pg_autovacuum.

If you're not planning to go to 7.4 soon, you might want to think about
an update to 7.3.5, just on general principles.

regards, tom lane



Re: [GENERAL] ERROR: Cannot insert a duplicate key into unique index

2004-01-09 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> Digging in the 7.3.2-to-7.3.4 change logs, I see one potentially
> >> relevant change:
> 
> > The only thing I can think of is the fix for splitting the first btree
> > page.
> 
> I paused on that too, but I don't see how it could apply, unless they
> were dropping and rebuilding their database every few hours.  Besides,
> that bug is fixed in 7.3.5, which is still showing the problem.

I didn't know we got that into 7.3.5, but now I remember it wasn't
serious enough to force a new 7.3.X release but it was in 7.3.X CVS.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] ERROR: Cannot insert a duplicate key into unique index pg_class_relname_nsp_index

2004-01-09 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Digging in the 7.3.2-to-7.3.4 change logs, I see one potentially
>> relevant change:

> The only thing I can think of is the fix for splitting the first btree
> page.

I paused on that too, but I don't see how it could apply, unless they
were dropping and rebuilding their database every few hours.  Besides,
that bug is fixed in 7.3.5, which is still showing the problem.

regards, tom lane



[GENERAL] unsubscribe

2004-01-09 Thread sibu xolo
unsubscribe




Re: [GENERAL] ERROR: Cannot insert a duplicate key into unique index

2004-01-09 Thread Bruce Momjian
Tom Lane wrote:
> Hmm.  I'm not aware of any 7.4 bug fix that would affect such a thing,
> so I wouldn't want to bet that 7.4 has really solved the issue.
> 
> Digging in the 7.3.2-to-7.3.4 change logs, I see one potentially
> relevant change:

The only thing I can think of is the fix for splitting the first btree
page.  We fixed that in 7.4.  I remember it happened mostly on SMP
machines.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] ERROR: Cannot insert a duplicate key into unique index pg_class_relname_nsp_index

2004-01-09 Thread Kragen Sitaker
On Sat, Jan 10, 2004 at 11:20:11AM +1100, Martijn van Oosterhout wrote:
> Not really related to your problem, but given you're in a transaction, why
> do you need to lock anything? What's wrong with:
> 
> > The daemon that gets this error does the following every 15 seconds:
> > - start a transaction
> > - delete the contents of the other table
> > - execute a complex and sometimes slow SELECT INTO query, creating a
> >   temporary table
> > - copy the contents of the temporary table into the other table
> > - drop the temporary table (again, embarrassing, sorry)
> > - commit
> 
> Maybe I'm missing something?

We don't need to lock anything.  We just thought we did.  We'd observed
that accessing a table inside a transaction (at the default READ COMMITTED
isolation level) could show us records created by other transactions since
this transaction started (i.e. it doesn't guarantee repeatable reads),
even if we'd already accessed the table.

So, lacking a thorough understanding of section 12.2 (or transaction
isolation levels in general), we thought we might have to lock the table
to keep someone else from accessing it while it was partly empty.
We were wrong, but I didn't know that until this afternoon.
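
For the archives, a minimal sketch of the lock-free cycle (table names made
up): because the transaction's changes are invisible to other sessions until
COMMIT, readers never see the table partly empty, so no explicit lock is
needed.

BEGIN;
DELETE FROM summary;                           -- not yet visible to others
SELECT a_id, count(*) AS n                     -- stand-in for the slow query
    INTO TEMP tmp_summary
    FROM detail GROUP BY a_id;
INSERT INTO summary SELECT * FROM tmp_summary;
DROP TABLE tmp_summary;
COMMIT;                                        -- readers switch to the new rows atomically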

Thank you very much for your help!

-Kragen



Re: [GENERAL] ERROR: Cannot insert a duplicate key into unique index pg_class_relname_nsp_index

2004-01-09 Thread Kragen Sitaker
On Fri, Jan 09, 2004 at 06:19:00PM -0500, Tom Lane wrote:
> Kragen Sitaker <[EMAIL PROTECTED]> writes:
> > ERROR:  Cannot insert a duplicate key into unique index pg_class_relname_nsp_index
> > We've been getting this error in our application every once in a while
> > --- typically once an hour to once a day, although it varies over time.
> 
> This seems to me that it must indicate a collision on name+schema of the
> temp table.  Now that ought to be impossible :-(

Those were my first thoughts too :)

> --- you should get errors earlier than this if you were actually
> creating a duplicately named temp table, and the design for selecting
> nonconflicting temp schemas seems pretty bulletproof to me too.

Sure.  We thought maybe we had two instances of the daemons running at
once, but we tried that, and couldn't make the error happen every time.

It's worth mentioning that the daemon will often run for hours before
dying with this error.  Then, when it comes back up a few seconds later,
it's likely to fail again immediately, but it's even more likely to run
without a problem for hours more.

> > We started seeing this error on November 22, three days after we migrated
> > from Postgres 7.2.3 and 7.3.2 to 7.3.4.  We still see the error with
> > 7.3.5, but not with 7.4.0.
> 
> Hmm.  I'm not aware of any 7.4 bug fix that would affect such a thing,
> so I wouldn't want to bet that 7.4 has really solved the issue.

I'm glad to know that.

> Digging in the 7.3.2-to-7.3.4 change logs, I see one potentially
> relevant change:
> 
> 2003-02-06 20:33  tgl
> 
>   * src/: backend/catalog/dependency.c, backend/catalog/namespace.c,
>   include/catalog/dependency.h (REL7_3_STABLE): Revise mechanism for
>   getting rid of temp tables at backend shutdown.  Instead of
>   grovelling through pg_class to find them, make use of the handy
>   dandy dependency mechanism: just delete everything that depends on
>   our temp schema.  Unlike the pg_class scan, the dependency
>   mechanism is smart enough to delete things in an order that doesn't
>   fall foul of any dependency restrictions.  Fixes problem reported
>   by David Heggie: a temp table with a serial column may cause a
>   backend FATAL exit at shutdown time, if it chances to try to delete
>   the temp sequence first.
> 
> Now this change also exists in 7.4, but perhaps it is malfunctioning in
> 7.3.*.  Or maybe you just haven't stressed the 7.4 installation enough
> to reproduce the problem there --- what do you think are the odds of
> that?

It's possible.  We've re-downgraded that development machine to 7.3.4
to experiment with other ways of solving the problem, and it looks like
our nightly backup script didn't work last night, so I can't inspect
the state of the database that didn't manifest the problems with 7.4.
It's possible it might have had less stuff in it :(

We'll run the experiment again.  Should we try 7.3.3 too?

> Given that you're explicitly dropping the temp table before exit, it's
> not clear how a problem in this code could cause you grief anyway.

Well, it's possible the daemon could have gotten killed while it was
inside the transaction, followed shortly by a shutdown of postgres ---
a dozen times or more --- and during development, we frequently kill
the daemon so that it will restart with new code.  For our application,
we shut down and restart Postgres every night because it seems to make
VACUUM FULL work better.

> But it's the only starting point I can see.  You might try adding some
> monitoring code to see if you can detect temp tables being left around
> by exited backends.

Something like this?
foobar=> select count(*), relnamespace from pg_class group by relnamespace;
 count | relnamespace
-------+--------------
   106 |           11
    70 |           99
   147 |         2200
(3 rows)
foobar=> select oid, * from pg_namespace;
   oid   |  nspname   | nspowner | nspacl
---------+------------+----------+--------
      11 | pg_catalog |        1 | {=U}
      99 | pg_toast   |        1 | {=}
    2200 | public     |        1 | {=UC}
   16765 | pg_temp_1  |        1 |
   17593 | pg_temp_2  |        1 |
   17647 | pg_temp_15 |        1 |
   20278 | pg_temp_16 |        1 |
 1570284 | pg_temp_32 |        1 |
(8 rows)

I wonder why those old namespaces are left around?  A new one shows up
whenever I kill and restart the daemon that creates the temporary tables.

We could run this code periodically to see when new namespaces pop up.
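
For instance, a sketch of such a periodic check (using only the standard
system catalogs): it counts the relations still owned by each temp schema, so
tables left behind by exited backends would show up as non-zero counts.

SELECT n.nspname, count(c.oid) AS relations
FROM pg_namespace n
LEFT JOIN pg_class c ON c.relnamespace = n.oid
WHERE n.nspname LIKE 'pg\\_temp\\_%'
GROUP BY n.nspname
ORDER BY n.nspname;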

> > Creating and destroying a temporary table with a lot of attributes every
> > second is causing other problems, too; the indices on the pg_attribute
> > table grow to hundreds of megabytes in size,
> 
> Right.  7.4 should fix that though.

Great!

> > and for some reason,
> > reindexing our system tables corrupts our database.
> 
> That seems suspicious as well.  What happens exactly?  How did you get
> out of it??

I don't remember what happens exactly.  One of us will try 

Re: [GENERAL] no space left on device

2004-01-09 Thread Nigel J. Andrews

On Sat, 10 Jan 2004, Nigel J. Andrews wrote:
> And a common culprit is whatever is being used for usenet caching/serving...or
> ordinary mail which is just accumulating in /var/mail (or wherever).

Sheesh. Did I really put ordinary mailbox mail in the uses-up-inodes category?
I should be taken out and whipped... on the other hand, better not; it might be
too exciting for some and spark off a whole new xxx web site.


Nigel Andrews




Re: [GENERAL] no space left on device

2004-01-09 Thread Nigel J. Andrews
On Fri, 9 Jan 2004, scott.marlowe wrote:

> On Fri, 9 Jan 2004, Aurangzeb M. Agha wrote:
> 
> > Right!  Thus my quandary.
> > 
> > Re inodes, how can I check this?  But why would this be?  Is Postgres
> > sucking up inodes just sitting there as a read-only DB?
> 
> If you are out of inodes, I seriously doubt it is PostgreSQL's fault.  Since 
> you seem to be running everything on the root partition here, any other 
> process is more likely than PostgreSQL to be using all the inodes.  
> Basically, when you make a lot of small files you can run out of inodes.  

And a common culprit is whatever is being used for usenet caching/serving...or
ordinary mail which is just accumulating in /var/mail (or wherever).


> Since postgresql tends to make a few rather large files, it's usually not 
> a concern.
> 
> df -i shows inode usage.
> 
> On linux, you can change the % reserved for root to 1% with tune2fs:
> 
> tune2fs -m 1

-- 
Nigel J. Andrews




Re: [GENERAL] ERROR: Cannot insert a duplicate key into unique index pg_class_relname_nsp_index

2004-01-09 Thread Martijn van Oosterhout
On Fri, Jan 09, 2004 at 12:07:25PM -0800, Kragen Sitaker wrote:


Not really related to your problem, but given you're in a transaction, why
do you need to lock anything? What's wrong with:

> The daemon that gets this error does the following every 15 seconds:
> - start a transaction
> - delete the contents of the other table
> - execute a complex and sometimes slow SELECT INTO query, creating a
>   temporary table
> - copy the contents of the temporary table into the other table
> - drop the temporary table (again, embarrassing, sorry)
> - commit

Maybe I'm missing something?
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> (... have gone from d-i being barely usable even by its developers
> anywhere, to being about 20% done. Sweet. And the last 80% usually takes
> 20% of the time, too, right?) -- Anthony Towns, debian-devel-announce




Re: [GENERAL] no space left on device

2004-01-09 Thread Mark Kirkwood
I would suspect some *other* service is using the 4G for transient 
storage every now and again, and it just so happens that Pg is getting 
tripped up.

What else does this machine run ?

regards

Mark



Nigel J. Andrews wrote:

On Fri, 9 Jan 2004, Aurangzeb M. Agha wrote:

 

Here's the output of "df -m":

[postgres - DB]$ df -m .
Filesystem   1M-blocks   Used  Available  Use%  Mounted on
-                63328  55308       4803   93%  /
   

But your du, below, of the postgres data directory shows 53MB in use. That's
three orders of magnitude smaller than the 55GB the above appears to be saying
is used in the db.
 





Re: [GENERAL] no space left on device

2004-01-09 Thread scott.marlowe
On Fri, 9 Jan 2004, Aurangzeb M. Agha wrote:

> Right!  Thus my quandary.
> 
> Re inodes, how can I check this?  But why would this be?  Is Postgres
> sucking up inodes just sitting there as a read-only DB?

If you are out of inodes, I seriously doubt it is PostgreSQL's fault.  Since 
you seem to be running everything on the root partition here, any other 
process is more likely than PostgreSQL to be using all the inodes.  
Basically, when you make a lot of small files you can run out of inodes.  
Since postgresql tends to make a few rather large files, it's usually not 
a concern.

df -i shows inode usage.

On linux, you can change the % reserved for root to 1% with tune2fs:

tune2fs -m 1





Re: [GENERAL] Using indices with long unique IDs.

2004-01-09 Thread Greg Stark
"Sergey Olefir" <[EMAIL PROTECTED]> writes:

> Unfortunately neither of them seems to be portable (for example, the one with
> single quotes fails if I create PreparedStatement in Java:
> con.prepareStatement("SELECT * FROM table WHERE id='?'"); apparently Java
> doesn't parse question mark inside quotes).

That's wrong. You should not quote the ? in the query.

If the driver is textually substituting the argument (as was necessary in 7.3
and prior) then it probably ought to quote it. If it's not there should be an
option to tell it to quote even integer arguments.

If it's any consolation 7.5 will use the index even if you put a plain integer
in the query. If production is a long way off you could consider developing
against a CVS build today and plan to use 7.5 for production when it comes
out.
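
A server-side illustration of the same point (a sketch assuming PostgreSQL 7.3
or later, where SQL-level PREPARE exists; the table name is hypothetical): a
parameter declared int8 matches the column's type, so no quoting or casting is
needed for the index to be considered.

PREPARE byid(int8) AS
    SELECT * FROM mytable WHERE id = $1;   -- $1 is int8, matching the indexed column
EXECUTE byid(1);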

-- 
greg




[GENERAL] no space left on device

2004-01-09 Thread Aurangzeb M. Agha
I'm running Postgres 7.1.3, and just started having a problem where my
dynamic site is going down (read-only DB, with no writes happening to the
DB) regularly (every other day).  I have no idea why this is happening,
and my searches of the FAQs and mailing lists don't bring up anything.  I've
attached the error from the log file at the end of this message.

Here's an output of the disk usage from within the DB dir

[postgres - DB]$ du -k .
1716    ./base/1
1716    ./base/16555
5192    ./base/56048
8628    ./base
116     ./global
32812   ./pg_xlog
11380   ./pg_clog
53192   .

Note that the pg_xlog dir is huge!  Here's its contents:

[postgres - DB/pg_xlog]$ ls -al
total 32816
drwx--2 postgres admin4096 Mar 29  2003 .
drwx--6 postgres admin4096 Jan  9 15:04 ..
-rwx--1 postgres admin16777216 Jan  9 15:09 0001
-rwx--1 postgres admin16777216 Mar 29  2003 0002

What are these files, and what can I do to resolve this issue?

Thx,

Zeb


--
DEBUG:  statistics collector process (pid 2523) exited with exit code 1
PGSTAT: Error closing temp stats file
PGSTAT: /usr/local/G101/App/DB/./global/pgstat.tmp.7823: No space left on
device
PGSTAT: Ab
DEBUG:  statistics collector process (pid 2979) exited with exit code 1
FATAL 2:  write of clog file 43, offset 188416 failed: No space left on
device
DEBUG:  server process (pid 3741) exited with exit code 2
DEBUG:  terminating any other active server processes
NOTICE:  Message from PostgreSQL backend:
The Postmaster has informed me that some other backend
died abnormally and possibly corrupted shared memory.
I have rolled back the current transaction and am
going to terminate your database system connection and exit.
Please reconnect to the database system and repeat your query.
NOTICE:  Message from PostgreSQL backend:
The Postmaster has informed me that some other backend
died abnormally and possibly corrupted shared memory.
I have rolled back the current transaction and am
going to terminate your database system connection and exit.
Please reconnect to the database system and repeat your query.
NOTICE:  Message from PostgreSQL backend:
The Postmaster has informed me that some other backend
died abnormally and possibly corrupted shared memory.
I have rolled back the current transaction and am
going to terminate your database system connection and exit.
Please reconnect to the database system and repeat your query.
DEBUG:  all server processes terminated; reinitializing shared memory and
semaphores
DEBUG:  database system was interrupted at 2004-01-09 05:22:52 EST
DEBUG:  checkpoint record is at 0/138CFD4
DEBUG:  redo record is at 0/138CFD4; undo record is at 0/0; shutdown FALSE
DEBUG:  next transaction id: 45811837; next oid: 65205
DEBUG:  database system was not properly shut down; automatic recovery in
progress
DEBUG:  redo starts at 0/138D014
FATAL 2:  write of clog file 43, offset 188416 failed: No space left on
device
DEBUG:  startup process (pid 3785) exited with exit code 2
DEBUG:  aborting startup due to startup process failure



Re: [GENERAL] Hierarchical queries

2004-01-09 Thread Richard Huxton
On Friday 09 January 2004 19:16, Andrew Rawnsley wrote:
> There's a patch to mimic Oracle's CONNECT BY queries. You can get it
> at the Postgres Cookbook site:
>
> http://www.brasileiro.net/postgres/cookbook.

I believe I saw an announcement on freshmeat about a patch for the source to 
allow Oracle-style connect by. Yep:
http://gppl.terminal.ru/index.eng.html

I could have sworn there was something in contrib/ too, but I can't see it 
now.

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Hierarchical queries

2004-01-09 Thread Andrew Rawnsley
There's a patch to mimic Oracle's CONNECT BY queries. You can get it
at the Postgres Cookbook site:
http://www.brasileiro.net/postgres/cookbook.

(although it seems to be down at the moment...)

On Jan 9, 2004, at 2:05 PM, [EMAIL PROTECTED] wrote:

Hello everybody!

Does someone know how to build hierarchical queries in PostgreSQL?

I have a table with a tree in it (id, parent)
and need to find a way from any point of the tree to any other point.
And I would like to have a list of all steps from point A to point B
to make some changes on each step (this is required by the algorithm).
Here is an example:
treetable (where the tree is stored):
id      parent   data
int4    int4     varchar(255)
0       0        root
1       0        root's child 1
2       0        root's child 2
3       1        root's child 1 child 1
4       1        root's child 1 child 2
5       2        root's child 2 child 1
6       2        root's child 2 child 2
And I want to get something like this:
start point "root's child 2 child 2"
finish "root's child 1 child 1"
And the result I need:
id      parent   data
6       2        root's child 2 child 2
2       0        root's child 2
0       0        root
1       0        root's child 1
4       1        root's child 1 child 2
I know that it is possible in Oracle, but what about Postgres?

Best regards,
Anton Nikiforov



Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com


[GENERAL] Hierarchical queries

2004-01-09 Thread Anton . Nikiforov
Hello everybody!

Does someone know how to build hierarchical queries in PostgreSQL?

I have a table with a tree in it (id, parent)
and need to find a way from any point of the tree to any other point.
And I would like to have a list of all steps from point A to point B
to make some changes on each step (this is required by the algorithm).

Here is an example:
treetable (where the tree is stored):
id      parent   data
int4    int4     varchar(255)
0       0        root
1       0        root's child 1
2       0        root's child 2
3       1        root's child 1 child 1
4       1        root's child 1 child 2
5       2        root's child 2 child 1
6       2        root's child 2 child 2

And I want to get something like this:
start point "root's child 2 child 2"
finish "root's child 1 child 1"

And the result I need:
id      parent   data
6       2        root's child 2 child 2
2       0        root's child 2
0       0        root
1       0        root's child 1
4       1        root's child 1 child 2

I know that it is possible in Oracle, but what about Postgres?
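
For what it's worth, half of the problem can be sketched as a set-returning
plpgsql function that walks from a node up to the root (a sketch assuming
PostgreSQL 7.3+, where RETURN NEXT is available); the A-to-B path could then
be assembled from the two root walks:

CREATE FUNCTION path_to_root(int4) RETURNS SETOF treetable AS '
DECLARE
    cur  treetable%ROWTYPE;
    node int4 := $1;
BEGIN
    LOOP
        SELECT INTO cur * FROM treetable WHERE id = node;
        RETURN NEXT cur;
        EXIT WHEN cur.id = cur.parent;   -- the root points at itself
        node := cur.parent;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;

SELECT * FROM path_to_root(6);   -- yields rows 6, 2, 0 for the example tree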

Best regards,
Anton Nikiforov




Re: [GENERAL] TSearch2 ... ignore word repetition for rank

2004-01-09 Thread Teodor Sigaev


Chris Gamache wrote:
For my particular case, word repetition shouldn't be relevant in determining
the rank of a document. If I strip() the vector, I lose what relevance
proximity and weight add to the rank. It seems impossible, yet I ask anyway: Is
it possible to eliminate the second (third, fourth, fifth, etc.) occurrence of
any given word when its presence in the document is being scored, yet keep it in
the equation for modifications to the score when proximity is being considered?
I don't see a way to do it except by modifying the strip or rank functions...
--
Teodor Sigaev  E-mail: [EMAIL PROTECTED]


Re: [GENERAL] deferring/disabling unique index

2004-01-09 Thread Oleg Lebedev
I see that it works for this simple case.
Check my previous email for a more complex example.

Thanks.

Oleg

-Original Message-
From: Bruce Momjian [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 09, 2004 10:45 AM
To: Joshua D. Drake
Cc: Oleg Lebedev; [EMAIL PROTECTED]
Subject: Re: [GENERAL] deferring/disabling unique index


Joshua D. Drake wrote:
> 
> >So, does it mean that the only way to disable the index is to drop 
> >and recreate it? What about setting indisunique to false temporarily?
> >
> >  
> >
> I am just curious... why would you want to defer a unique constraint?

I remember now --- if you do:

UPDATE tab SET col = col + 1;

you hit a unique constraint when you shouldn't need to.  I think the
workaround was:

UPDATE tab SET col = -col + -1;

then:

UPDATE tab SET col = -col;

This assumes all the values are positive, of course.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073





Re: [GENERAL] deferring/disabling unique index

2004-01-09 Thread Bruce Momjian
Joshua D. Drake wrote:
> 
> >So, does it mean that the only way to disable the index is to drop and
> >recreate it? What about setting indisunique to false temporarily?
> >
> >  
> >
> I am just curious... why would you want to defer a unique constraint?

I remember now --- if you do:

UPDATE tab SET col = col + 1;

you hit a unique constraint when you shouldn't need to.  I think the
workaround was:

UPDATE tab SET col = -col + -1;

then:

UPDATE tab SET col = -col;

This assumes all the values are positive, of course.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] deferring/disabling unique index

2004-01-09 Thread Oleg Lebedev

Basically, swapping values of columns involved in a unique index causes
the problem.

Example:

I wrote a synchronization script that syncs data between multiple
databases. It retrieves primary key information from the system tables,
joins remote tables and updates corresponding values.

Suppose I have a table:

Employee (FirstName, LastName, id)
PrimaryKey: id
UniqueIndex: FirstName, LastName

Suppose on each database instance this table contains two records:
Jane Doe   1
Jane Smith 2

Now, suppose we swap the last names between the two employees on one
instance, so we end up with:
Jane Smith 1
Jane Doe   2

Now, I want to propagate this data to another database instance and run
this query:

UPDATE Employee1
SET LastName=e2.LastName
FROM  Employee2 e2
WHERE Employee1.id = e2.id;

In the above query Employee1 is the Employee table from the first DB
instance and Employee2 - from the second DB instance. 

The query will throw an error saying that UniqueIndex is violated
when assigning the last name Doe to the employee with id 1.
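
One possible workaround (a sketch, not from the thread): inside one
transaction, first park the colliding values on temporary, non-conflicting
names, then apply the real update (this assumes LastName has room for the
suffix):

BEGIN;
UPDATE Employee1 SET LastName = LastName || '~swap'
    WHERE id IN (SELECT id FROM Employee2);        -- step 1: move out of the way
UPDATE Employee1 SET LastName = e2.LastName
    FROM Employee2 e2 WHERE Employee1.id = e2.id;  -- step 2: the original update
COMMIT;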

Thanks.

Oleg

-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 09, 2004 10:37 AM
To: Oleg Lebedev
Cc: Bruce Momjian; [EMAIL PROTECTED]
Subject: Re: [GENERAL] deferring/disabling unique index



>So, does it mean that the only way to disable the index is to drop and 
>recreate it? What about setting indisunique to false temporarily?
>
>  
>
I am just curious... why would you want to defer a unique constraint?

Sincerely,

Joshua Drake






-- 
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL





Re: [GENERAL] deferring/disabling unique index

2004-01-09 Thread Bruce Momjian
Oleg Lebedev wrote:
> So, does it mean that the only way to disable the index is to drop and
> recreate it? What about setting indisunique to false temporarily?

Not sure.  I seem to remember a way someone got around this, but can't
remember the details.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



[GENERAL] TSearch2 ... ignore word repetition for rank

2004-01-09 Thread Chris Gamache
For my particular case, word repetition shouldn't be relevant in determining
the rank of a document. If I strip() the vector, I lose what relevance
proximity and weight add to the rank. It seems impossible, yet I ask anyway: Is
it possible to eliminate the second (third, fourth, fifth, etc.) occurrence of
any given word when its presence in the document is being scored, yet keep it in
the equation for modifications to the score when proximity is being considered?
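
For reference, the two ranking calls being compared, sketched against a
hypothetical docs table whose tsvector column is named vector:

SELECT rank(vector, to_tsquery('word')) FROM docs;         -- repetition and proximity both count
SELECT rank(strip(vector), to_tsquery('word')) FROM docs;  -- positions stripped: repetition
                                                           -- ignored, but proximity lost too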

CG





Re: [GENERAL] deferring/disabling unique index

2004-01-09 Thread Joshua D. Drake

So, does it mean that the only way to disable the index is to drop and
recreate it? What about setting indisunique to false temporarily?
 

I am just curious... why would you want to defer a unique constraint?

Sincerely,

Joshua Drake





--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL


Re: [GENERAL] start/stop a database

2004-01-09 Thread Erwan DUROSELLE
Scott,

Though there is no exact match, you may consider that:
Oracle instance <=> Postgres cluster
Oracle schema <=> Postgres database.
But there is a schema concept in Postgres that is pretty similar to the one in Oracle.
(Actually, I think Oracle lacks a "database" concept.)

That's why you cannot "start" or "stop" a db. You start or stop a cluster.
But you can deny access to a database with pg_hba.conf, as tatyana.krasnokutsky 
explained.

Other gotchas for those who come from the Oracle world:
  - PostgreSQL = pg = postgres
  - autocommit is the default in pg. If you want to be able to roll back, you must 
issue a BEGIN first (see the sketch after this list).
  - no Toad on pg ;-), though there are several graphical admin tools. I like Pgadmin 
III: http://www.pgadmin.org/pgadmin3/index.php 
  - don't forget to VACUUM your db VERY often. Usually: more than once a day on active 
databases. Carefully read the docs on that subject.
  - no tablespace, no storage clause, no initial, next, ...  Tablespaces should come 
up soon, however (pg 7.5?)
  - no archive log, a.k.a. Point in Time Recovery. So there are only full backups.
  - the equivalent of initSID.ora is postgresql.conf
  - in postgresql.conf, always set fsync=TRUE  unless you are ready to lose your db 
after a power failure.
  - in postgresql.conf, shared_buffers is the equivalent of db_block_buffers in 
initSID.ora. Don't set it too high; it is not useful. Let the file system cache the 
data.
  - COPY is (roughly) the equivalent of SQL*LOAD
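
For instance, a minimal sketch of the autocommit point above (table name
arbitrary):

BEGIN;                 -- without this, the UPDATE would commit immediately
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
ROLLBACK;              -- possible only because we opened the transaction ourselves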

HTH,
Erwan

>>> "scott.marlowe" <[EMAIL PROTECTED]> 09/01/2004 15:46:16 >>>
On Mon, 5 Jan 2004 [EMAIL PROTECTED] wrote:

> Hello,
> I am new in PostgreSQL world coming from Oracle.
> I have created two databases using command line as manual has suggested.
> For instance I have done like the following:
> createdb tanya
> createdb eps
> Now  I would like to stop my "tanya"  database and have "eps" database
> running.
> How can I do this?






Re: [GENERAL] deferring/disabling unique index

2004-01-09 Thread Oleg Lebedev
So, does it mean that the only way to disable the index is to drop and
recreate it? What about setting indisunique to false temporarily?


-Original Message-
From: Bruce Momjian [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 09, 2004 10:19 AM
To: Oleg Lebedev
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] deferring/disabling unique index


Oleg Lebedev wrote:
> Hi,
>  
> I need to know if there is a way to defer or disable a unique index on

> a table during an update. One way would be to set indisunique to 
> false, perform update and then set to true. But, this seems to be an 
> ugly solution.
>  
> I've posted a similar message 6 months ago and at that time deferring 
> unique constraints was on a todo list. I wonder if this has been added

> to 7.4.1 release. If not, what is the best way to disable an index on 
> a table?

It is still not done, and no one is working on it.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073





Re: [GENERAL] deferring/disabling unique index

2004-01-09 Thread Bruce Momjian
Oleg Lebedev wrote:
> Hi,
>  
> I need to know if there is a way to defer or disable a unique index on a
> table during an update. One way would be to set indisunique to false,
> perform update and then set to true. But, this seems to be an ugly
> solution.
>  
> I've posted a similar message 6 months ago and at that time deferring
> unique constraints was on a todo list. I wonder if this has been added
> to 7.4.1 release. If not, what is the best way to disable an index on a
> table?

It is still not done, and no one is working on it.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] array faults?

2004-01-09 Thread Tom Lane
David Helgason <[EMAIL PROTECTED]> writes:
> EXAMPLE 1:
> maint=# select ('{{1,2,3},{4,5,6}}'::int[])[1][1:3];
> int4
> ---
>   {{1,2,3}}
> (1 row)

> Shouldn't this have been just {1,2,3} ?

Nope.  It's equivalent to (...)[1:1][1:3].  See section 8.10.3 "Accessing
Arrays" in the current documentation.  Note in particular where it says

 An array subscripting operation is always taken to represent an array
 slice if any of the subscripts are written in the form lower:upper. A
 lower bound of 1 is assumed for any subscript where only one value is
 specified, ...

> Shouldn't this have been just {4,5,6} (or maybe {{4,5,6}} accepting the 
> result of example 1) ?

See above.  You need to write [2:2] not [2].
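
For example, a quick sketch of the corrected subscripting:

maint=# select ('{{1,2,3},{4,5,6}}'::int[])[2:2][1:3];
   int4
-----------
 {{4,5,6}}
(1 row)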

regards, tom lane



Re: [GENERAL] start/stop a database

2004-01-09 Thread scott.marlowe
On Mon, 5 Jan 2004 [EMAIL PROTECTED] wrote:

> Hello,
> I am new in PostgreSQL world coming from Oracle.
> I have created two databases using command line as manual has suggested.
> For instance I have done like the following:
> createdb tanya
> createdb eps
> Now  I would like to stop my "tanya"  database and have "eps" database
> running.
> How can I do this?

I'm not sure I understand.  Do you want to be able to arbitrarily have one 
or more databases up, while one or more are down, or do you think you HAVE 
to shut down one database to start another?

If you need to have certain databases up / down, you can either use 
pg_hba.conf to make them unavailable, or you can actually set up seperate 
clusters on different ports and bring them up and down individually.

It's much easier to do this kinda thing via the pg_hba.conf file than it 
is to do it via multiple clusters.

On the other hand, if you're thinking that one must go down for another to 
come up, don't worry, you can run more than one database at a time just 
fine under pgsql.




[GENERAL] deferring/disabling unique index

2004-01-09 Thread Oleg Lebedev

Hi,
 
I need to know if there is a way to defer or disable a unique index on a table
during an update. One way would be to set indisunique to false, perform the
update, and then set it back to true. But this seems to be an ugly solution.

I've posted a similar message 6 months ago, and at that time deferring unique
constraints was on a todo list. I wonder if this has been added to the 7.4.1
release. If not, what is the best way to disable an index on a table?
 
Thanks.
 
Oleg







[GENERAL] Invalid UNICODE character sequence found(0xc000)

2004-01-09 Thread Antonio Gallardo
psql returns:

Invalid UNICODE character sequence found(0xc000) in a valid query.

Steps to reproduce:

1. createdb -E UNICODE mydbname.
2. create a table with some varchar inside, we will query on this field.
Example:

CREATE TABLE auth_role
  (
rol_id  int4  not null default
nextval('auth_rol_rol_id_seq'),
rol_namevarchar(50)   unique not null,
rol_enable  boolean   default true,

primary key(rol_id)
  );

INSERT INTO auth_role(rol_name,rol_enable) VALUES ('admin',true);
INSERT INTO auth_role(rol_name,rol_enable) VALUES ('zorro',true);

3. run psql and write a select like this:

SELECT * FROM AUTH_ROLE WHERE ROL_NAME LIKE 'z%';

4. I got the error.

Comments:

If the LIKE pattern does not contain 'z%', the query runs OK. Examples:

LIKE 'zo%'    OK, returns 1 row.
LIKE 'r%'     OK, returns 0 rows.
LIKE 'az%'    ERROR again (contains z%).

Version:
the "SELECT version();" returns:
PostgreSQL 7.3.4-RH on i386-redhat-linux-gnu, compiled by GCC
i386-redhat-linux-gcc (GCC) 3.3.2 20031022 (Red Hat Linux 3.3.2-1)

OS: Fedora Core 1


Please help. I think this is a bug, but I am not sure.

Best Regards,

Antonio Gallardo









Re: [GENERAL] Natural upgrade path for RedHat 9?

2004-01-09 Thread jeffrey rivero
Hello,
I have been using RH for over 5 yrs, and some of our servers are going to 
RH AS while most of our workstations are moving to Fedora.
I have Fedora servers in testing right now (PG 7.4 and 7.3) and have not 
seen any major problems.
As for extended RH 9/7.x support, you can check out 
http://www.tummy.com/Software/krud
I have used them and I love the CD-based idea.
Jeff

Richard Huxton wrote:

On Friday 09 January 2004 03:13, D. Dante Lorenso wrote:
 

Is there going to be a RedHat 10?  Or are we all supposed
to choose a path of RH Enterprise vs Fedora Core?  I have
about 10 to 20 Redhat 9 machines in dev/qa/production that
I'm trying to plan the futures for.
   

It's RH-Enterprise/Fedora from here on; the bonus should be that you know 
where you stand with RH-Ent. I can't see availability being a problem, Fedora 
is going to have all the standard packages available and even in the worst 
case scenario will be around for a few years. RedHat can't afford *not* to 
support their Enterprise product, so that's about as safe a choice as you can 
get.

The question is whether you want free but rapidly changing with no corporate 
support, 350 USD/EUR per year with regular patches, or 1500 USD/EUR per year 
with support too. RedHat have some documents on their site describing the 
differences.

There has also been talk about third parties providing security-only patches 
to older RedHat versions, but I don't know whether any of these has happened or will.

In your case, I'm guessing it depends on your budget. If your machines cost 
5000 each then I'm guessing 350 p.a. isn't too bad. On the other hand if they 
are cheap 700 EUR white-boxes, the price might not look so good.

They seem to be your options - the beauty is, if you don't like them you can 
always switch to another distribution.

 





[GENERAL] problems dumping from one server 2 another

2004-01-09 Thread Victor Spång Arthursson
Hi!

Can't get the

sudo -u user1 /usr/local/bin/pg_dump db1 | /usr/local/bin/psql -U user2 
-h host2 db2

to work.

The only thing that happens is that I get multiple password prompts, and 
then I get told that the password is incorrect…

Would appreciate quick help,

sincerely

Victor Spång Arthursson

Copenhagen, Denmark and Malmoe, Sweden



Re: [GENERAL] Optimize query: time of "single * IN(many)" > time of "many *

2004-01-09 Thread Paul Janssen
Tom Lane writes:
Paul Janssen writes:
Can anyone help me out with the following situation:
   (a) a single query with 550 id's in the IN-clause resulting into 800+ 
seconds;
   (b) 550 queries with a single id in the IN-clause resulting into 
overall time of <60 seconds;
The table consists of 950.000 records, and the resultset consists of 
205.000 records.

Why is there such an extreme difference in time?
Most likely the planner is opting not to use an indexscan in the first
case.  Could we see EXPLAIN ANALYZE results for both cases?  Also, try
"SET enable_seqscan TO OFF" and then repeat EXPLAIN ANALYZE for case (a).
			regards, tom lane
Thanks all for your suggestions!

As suggested, I ran EXPLAIN ANALYZE for the initial situation and for the
situation you suggested ("seqscan OFF"). This change alone already yields a
78% decrease in total runtime. Very nice!
With respect to the total runtime of "many * IN(single)" there was a
difference of 54x; that's now down to 11x. But still >10 times slower... Hope
you can use the details below to help me close the gap. Thx!

__ SITUATION 0 : INITIAL__
EXPLAIN ANALYZE "single * IN(many)"
Unique  (cost=2120494.74..2139985.87 rows=129941 width=24) (actual time=818313.20..820605.09 rows=335311 loops=1)
  ->  Sort  (cost=2120494.74..2123743.26 rows=1299409 width=24) (actual time=818313.19..819327.09 rows=335311 loops=1)
        Sort Key: a_id, questions_id, answer, order_id, uid
        ->  Append  (cost=0.00..1916403.49 rows=1299409 width=24) (actual time=421673.91..795948.40 rows=335311 loops=1)
              ->  Subquery Scan "*SELECT* 1"  (cost=0.00..1344006.15 rows=911298 width=24) (actual time=421673.91..556486.08 rows=207809 loops=1)
                    ->  Seq Scan on tbl_scores  (cost=0.00..1344006.15 rows=911298 width=24) (actual time=421673.88..554797.94 rows=207809 loops=1)
                          Filter: ((a_id = 1) OR (a_id = 2) OR ... OR (a_id = 550))
              ->  Subquery Scan "*SELECT* 2"  (cost=0.00..572397.34 rows=388111 width=24) (actual time=157088.99..238498.40 rows=127502 loops=1)
                    ->  Seq Scan on tbl_scores_extra  (cost=0.00..572397.34 rows=388111 width=24) (actual time=157088.96..237474.23 rows=127502 loops=1)
                          Filter: ((a_id = 1) OR (a_id = 2) OR ... OR (a_id = 550))
Total runtime: 821062.84 msec

EXPLAIN ANALYZE "many * IN(single)"
Unique  (cost=24501.02..24605.30 rows=695 width=24) (actual time=51.20..57.93 rows=1349 loops=1)
  ->  Sort  (cost=24501.02..24518.40 rows=6952 width=24) (actual time=51.20..52.95 rows=1349 loops=1)
        Sort Key: a_id, questions_id, answer, order_id, uid
        ->  Append  (cost=0.00..24057.38 rows=6952 width=24) (actual time=0.62..38.04 rows=1349 loops=1)
              ->  Subquery Scan "*SELECT* 1"  (cost=0.00..17442.04 rows=4875 width=24) (actual time=0.61..22.39 rows=931 loops=1)
                    ->  Index Scan using tbl_scores_idx_a on tbl_scores  (cost=0.00..17442.04 rows=4875 width=24) (actual time=0.60..16.46 rows=931 loops=1)
                          Index Cond: (a_id = 1233)
              ->  Subquery Scan "*SELECT* 2"  (cost=0.00..6615.34 rows=2076 width=24) (actual time=0.67..12.10 rows=418 loops=1)
                    ->  Index Scan using tbl_scores_extra_idx_a on tbl_scores  (cost=0.00..6615.34 rows=2076 width=24) (actual time=0.65..9.42 rows=418 loops=1)
                          Index Cond: (a_id = 1233)
Total runtime: 60.59 msec

The subsequent 550 executions of the query lead to...
 * total runtime varies from 0.93..163.62 msec;
 * total runtime sums up to 15107 msec.
__SITUATION 1 : SET enable_seqscan TO OFF__
EXPLAIN ANALYZE "single * IN(many)"
Unique  (cost=18425133.86..18444624.99 rows=129941 width=24) (actual time=174020.84..176301.67 rows=335311 loops=1)
  ->  Sort  (cost=18425133.86..18428382.38 rows=1299409 width=24) (actual time=174020.82..175090.09 rows=335311 loops=1)
        Sort Key: a_id, questions_id, answer, order_id, uid
        ->  Append  (cost=0.00..18221042.61 rows=1299409 width=24) (actual time=191.87..159763.68 rows=335311 loops=1)
              ->  Subquery Scan "*SELECT* 1"  (cost=0.00..13088911.15 rows=911298 width=24) (actual time=191.86..97162.20 rows=207809 loops=1)
                    ->  Index Scan using tbl_scores_idx_a, .(548x).. , tbl_scores_idx_a on tbl_scores  (cost=0.00..13088911.15 rows=911298 width=24) (actual time=191.84..95416.34 rows=207809 loops=1)
                          Index Cond: ((a_id = 1) OR (a_id = 2) OR ... OR (a_id = 550))
              ->  Subquery Scan "*SELECT* 2"  (cost=0.00..5132131.46 rows=388111 width=24) (actual time=59.60..61179.24 rows=127502 loops=1)
                    ->  Index Scan using tbl_scores_extra_idx_a, .(548x).. , tbl_scores_extra_idx_a on tbl_scores_extra  (cost=0.00..5132131.46 rows=388111 width=24) (actual time=59.58..59871.50 rows=127502 loops=1)
                          Index Cond: ((a_id = 1) OR (a_id = 2) OR ... OR (a_id = 550))
Total runtime

[GENERAL] array faults?

2004-01-09 Thread David Helgason
Using arrays I came across some strangenesses. Probably this is well 
known, but I couldn't find mentions of it.

I am under the impression that this should be an error. Am I wrong?

EXAMPLE 1:
maint=# select ('{{1,2,3},{4,5,6}}'::int[])[1][1:3];
   int4
---
 {{1,2,3}}
(1 row)
Shouldn't this have been just {1,2,3} ?

EXAMPLE 2:
maint=# select ('{{1,2,3},{4,5,6}}'::int[])[2][1:3];
   int4
---
 {{1,2,3},{4,5,6}}
(1 row)
Shouldn't this have been just {4,5,6} (or maybe {{4,5,6}} accepting the 
result of example 1) ?

I am trying to do something like:
	maint=# select 1 = any ('{{1,2,3},{4,5,6}}'::int[])[2][1:3];
But that is not working for obvious reasons. This makes arrays pretty 
broken for me.

Am I missing anything obvious?

Regards,

David Helgason
Over the Edge Entertainments


Re: [GENERAL] Using indices with long unique IDs.

2004-01-09 Thread Alvaro Herrera
On Fri, Jan 09, 2004 at 04:11:08AM -0600, D. Dante Lorenso wrote:
> Sergey Olefir wrote:
> 
> >So the logical choice would be int8, right? Unfortunately quite wrong.
> >Statement of the form: "SELECT * FROM table WHERE id=1"
> >will never use the index on id (assuming id is int8) since '1' is of type
> >int4. This is confirmed both by documentation and SQL EXPLAIN (after set
> >enable_seqscan TO 'off').

I think you can use WHERE id=cast(1 AS bigint)


> Well, that just plain sucks.  That means I've gotta go back and
> add casts to all my queries?
> 
> Tell me it isn't so!

Yeah, it is :-(  AFAIU it is fixed in the current development version,
but who knows if it will be done in time before your tables grow too big ...

-- 
Alvaro Herrera ()
"Porque francamente, si para saber manejarse a uno mismo hubiera que
rendir examen... ¿Quién es el machito que tendría carnet?"  (Mafalda)



Re: [GENERAL] Using indices with long unique IDs.

2004-01-09 Thread D. Dante Lorenso
Sergey Olefir wrote:

So the logical choice would be int8, right? Unfortunately quite wrong.
Statement of the form: "SELECT * FROM table WHERE id=1"
will never use index for id (assumming id is int8) since '1' is of type
int4. This is confirmed both by documentation and SQL EXPLAIN (after set
enable_seqscan TO 'off').
 

I'm using BIGSERIAL as the primary key for all my tables.  Please tell
me that what is described above will not be true for me as well!
When I say:
   SELECT x, y, z
   FROM mytable
   WHERE pk_mybigint = 1;
That had better be using an index, or in a few months, OMG!  Let me check:

   leads=> EXPLAIN ANALYSE SELECT * FROM leads WHERE lead_id = 555300;
                                   QUERY PLAN
   ----------------------------------------------------------------------------
    Seq Scan on leads  (cost=0.00..334.66 rows=1 width=263) (actual time=21.35..21.46 rows=1 loops=1)
      Filter: (lead_id = 555300)
    Total runtime: 21.53 msec
   (3 rows)

   leads=> EXPLAIN ANALYSE SELECT * FROM leads WHERE lead_id = 555300::bigint;
                                   QUERY PLAN
   ----------------------------------------------------------------------------
    Index Scan using pk_leads on leads  (cost=0.00..5.36 rows=1 width=263) (actual time=0.18..0.18 rows=1 loops=1)
      Index Cond: (lead_id = 555300::bigint)
    Total runtime: 0.24 msec
   (3 rows)

Well, that just plain sucks.  That means I've gotta go back and
add casts to all my queries?
Tell me it isn't so!

Dante







[GENERAL] Postgres planner bug in 7.3.x and 7.4.1 ?

2004-01-09 Thread Laurent Perez
Hello

We're experiencing weird behaviour with both the PostgreSQL 7.3.x and 7.4.1 
versions, relating to views built on other views whose fields are computed by 
plpgsql functions. Attached is a .sql file showing the problem; it comes from 
a co-worker who doesn't have immediate access to this mailing list.

Here's the psql output on a 7.4.1 server (t1, t2 already created), showing 
the function being called:

[EMAIL PROTECTED]:/$ psql -p5433 -d t2g_3 -f /tmp/t1.sql
You are now connected as new user postgres.
psql:/tmp/t1.sql:32: ERROR:  function "plpgsql_call_handler" already exists 
with same argument types
psql:/tmp/t1.sql:35: ERROR:  language "plpgsql" already exists
CREATE FUNCTION
CREATE VIEW
CREATE VIEW
CREATE VIEW
CREATE VIEW
psql:/tmp/t1.sql:284: NOTICE:  f1  8980
psql:/tmp/t1.sql:284: NOTICE:  f1  8981
psql:/tmp/t1.sql:284: NOTICE:  f1  8982
 num_adr | num_person | firstname | name
---------+------------+-----------+------
    6231 |       8982 | Chico     | Marx
(1 row)

 num_adr | num_person | firstname | name
---------+------------+-----------+------
    6231 |       8982 | Chico     | Marx
(1 row)
Thanks for any support

laurent


t1.sql
Description: Binary data


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Using indices with long unique IDs.

2004-01-09 Thread Sergey Olefir

Hello!

I am planning to use unique IDs in the little system I am building. Now
being more than a little paranoid (and having no idea about expected loads),
I am wary of using int4 as a basis for uids (for fear of ever running out
of them).

So the logical choice would be int8, right? Unfortunately quite wrong.
Statement of the form:
"SELECT * FROM table WHERE id=1"
will never use index for id (assuming id is int8) since '1' is of type
int4. This is confirmed both by documentation and SQL EXPLAIN (after set
enable_seqscan TO 'off').

There are two suggested work-arounds:
"SELECT * FROM table WHERE id='1'"
"SELECT * FROM table WHERE id=1::int8"

Unfortunately neither of them seems to be portable (for example, the one with
single quotes fails if I create a PreparedStatement in Java:
con.prepareStatement("SELECT * FROM table WHERE id='?'"); apparently JDBC
doesn't recognize a question-mark placeholder inside quotes).
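
One spelling that looks both SQL-standard and placeholder-friendly is to cast
the parameter itself, so the question mark stays outside any quotes (a sketch,
untested from here, assuming the driver passes the cast through unchanged;
"mytable" stands in for a real table name, since "table" wouldn't parse):

	-- SQL-standard CAST around the JDBC placeholder; the driver still
	-- sees the ? marker, and the comparison becomes int8 = int8
	SELECT * FROM mytable WHERE id = CAST(? AS BIGINT);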

I cannot have non-portable SQL, as I am not yet convinced that PostgreSQL is the
right choice for my system. Oh, and by the way, I am using Java + JDBC for
my DB needs.


So what other options do I have? One seems to be NUMERIC( 13, 0 ) [or
something to that effect]. But what impact does it have performance-wise?
(code-wise it makes no difference through JDBC whether I use int4, int8, or
NUMERIC; I still represent that as 'long' in Java)

One thing I am definitely noticing is that NUMERIC( 13, 0 ) never seems to
use a sequential scan for queries (verified with EXPLAIN); it always goes
for the index. Is that bad (performance-wise)? Are there any other issues I
need to be aware of before settling on an uid type?
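
(For concreteness, what I am testing is roughly the following; the table and
column names are made up:)

	CREATE TABLE accounts (id NUMERIC(13,0) PRIMARY KEY);
	-- the int4 literal is implicitly coerced to numeric, so the comparison
	-- is numeric = numeric and the primary-key index can be considered
	EXPLAIN SELECT * FROM accounts WHERE id = 1;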

Thanks in advance,
-
Sergey Olefir
Exigen Latvia, system analyst

Honesty is a virtue.
That is, if you manage to survive.


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Natural upgrade path for RedHat 9?

2004-01-09 Thread Richard Huxton
On Friday 09 January 2004 03:13, D. Dante Lorenso wrote:
>
> Is there going to be a RedHat 10?  Or are we all supposed
> to choose a path of RH Enterprise vs Fedora Core?  I have
> about 10 to 20 Redhat 9 machines in dev/qa/production that
> I'm trying to plan the futures for.

It's RH-Enterprise/Fedora from here on; the bonus should be that you know 
where you stand with RH-Ent. I can't see availability being a problem: Fedora 
is going to have all the standard packages available, and even in the worst 
case scenario it will be around for a few years. RedHat can't afford *not* to 
support their Enterprise product, so that's about as safe a choice as you can 
get.

The question is whether you want free but rapidly changing with no corporate 
support, 350 USD/EUR per year with regular patches, or 1500 USD/EUR per year 
with support too. RedHat have some documents on their site describing the 
differences.

There has also been talk of third parties providing security-only patches 
for older RedHat versions, but I don't know whether any of that has happened 
or will.

In your case, I'm guessing it depends on your budget. If your machines cost 
5000 each then 350 p.a. isn't too bad. On the other hand, if they are cheap 
700 EUR white-boxes, the price might not look so good.

Those seem to be your options - the beauty is, if you don't like them you can 
always switch to another distribution.

-- 
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] RedHat install question.

2004-01-09 Thread Peter Eisentraut
On Thursday, 8 January 2004 at 16:32, Earnshaw, Peter J wrote:
> I currently have postgresql-7.3.4-3.rh19 installed and need to compile with
> options: --enable-multibyte and --enable-unicode.

These options do not exist in the 7.3 series.  (They are the default 
behavior.)

> I also need to compile up
> the JDBC drivers --with-java to create the postgresql.jar.

There should be an RPM available for the JDBC driver.  Otherwise, it might be 
easier to download the binary driver from jdbc.postgresql.org.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] RedHat install question.

2004-01-09 Thread Richard Huxton
On Thursday 08 January 2004 15:32, Earnshaw, Peter J wrote:
> I currently have postgresql-7.3.4-3.rh19 installed and need to compile with
> options: --enable-multibyte and --enable-unicode. I also need to compile up
> the JDBC drivers --with-java to create the postgresql.jar. Do I need to
> un-install the current version and re-install from source?  I am currently
> running Red Hat Linux release 9.
>
> Any help or comments will be greatly appreciated.

Well, you can compile from source without interfering with the RPM install. 
With the right ./configure options you could overwrite your existing RPM 
installation, but that's messy.

Your best bet might be to get the source RPM and alter the build settings in that.

-- 
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] [GENERAL] Announce: Search PostgreSQL related resources

2004-01-09 Thread Oleg Bartunov
On Tue, 6 Jan 2004, Rajesh Kumar Mallah wrote:

>
> Hi,
>
> Could you please tell us how the "did you mean" feature
> was implemented when the search term has a typo.

it's based on trigram similarity and word statistics.
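
For illustration, contrib/pg_trgm exposes the trigram machinery directly; a
rough sketch of the idea, assuming pg_trgm is installed and a hypothetical
"words" table holds the indexed vocabulary (the real system also weighs word
statistics, which this sketch ignores):

	-- rank known words by trigram similarity to a misspelled query term
	SELECT word, similarity(word, 'serach') AS sml
	FROM words
	WHERE word % 'serach'   -- % is pg_trgm's similarity-match operator
	ORDER BY sml DESC
	LIMIT 5;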

>
> The search engine is good.
>
> Regds
> mallah.
>
> Oleg Bartunov wrote:
>
> >Hi there,
> >
> >I'm pleased to present a pilot version of http://www.pgsql.ru - a search
> >system for PostgreSQL-related resources. Currently we have crawled 27 sites;
> >new resources are welcome. It has a multi-language interface (Russian,
> >English), and more languages could be added. We plan to add a searchable
> >archive of mailing lists (a la fts.postgresql.org), Russian documentation,
> >and a WIKI for online documentation, tips, etc.
> >
> >We welcome your feedback and comments. We need design solutions and icons.
> >
> >This project is hosted at the
> >Sternberg Astronomical Institute, Moscow University, and supported
> >by the Russian Foundation for Basic Research and Delta-Soft LLC.
> >
> > Regards,
> > Oleg

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org