Welcome to the pgsql-hackers mailing list!

2002-03-14 Thread pgsql-hackers-owner

Welcome to the pgsql-hackers mailing list!
Your password at PostgreSQL User Support Lists is

tsXRQ6

To leave this mailing list, send the following command in the body
of a message to [EMAIL PROTECTED]:

approve tsXRQ6 unsubscribe pgsql-hackers archive@jab.org

This command will work even if your address changes.  For that reason,
among others, it is important that you keep a copy of this message.

If you need help or have questions about the mailing list, please
contact the people who manage the list by sending a message to
  [EMAIL PROTECTED]

If the web features of Majordomo are supported at this site, you
can manage your subscription by visiting the following location:
  URL:http://webmail.postgresql.org/cgi-bin/mj_wwwusr/domain=postgresql.org



  Put the text of the welcome message here.



Re: [HACKERS] 'Following' the Primary key

2002-03-14 Thread Oliver Elphick

On Thu, 2002-03-14 at 13:00, Turbo Fredriksson wrote:
 With '\d table' I get the columns, types and modifiers. Also
 the Primary key, Indexes etc are shown.
 
 But if I want to know WHAT the primary key 'is pointing to',
 how would I do that (ie, what is the primary key)?

Just do \d again on the key index name:

bray=# \d org_contact
     Table "org_contact"
 Column  | Type  | Modifiers 
-+---+---
 org | character varying(10) | not null
 contact | character varying(10) | not null
 role| text  | not null
 address | integer   | 
Primary key: org_contact_pkey
Triggers: RI_ConstraintTrigger_6933120,
  RI_ConstraintTrigger_6933114,
  RI_ConstraintTrigger_6933108

bray=# \d org_contact_pkey
Index "org_contact_pkey"
 Column  | Type  
-+---
 org | character varying(10)
 contact | character varying(10)
unique btree (primary key)



-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C

 Let your light so shine before men, that they may see 
  your good works, and glorify your Father which is in 
  heaven. Matthew 5:16 


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] 'Following' the Primary key

2002-03-14 Thread Turbo Fredriksson

>>>>> "Oliver" == Oliver Elphick [EMAIL PROTECTED] writes:

Oliver On Thu, 2002-03-14 at 13:00, Turbo Fredriksson wrote:
 With '\d table' I get the columns, types and modifiers. Also
 the Primary key, Indexes etc are shown.
 
 But if I want to know WHAT the primary key 'is pointing to',
 how would I do that (ie, what is the primary key)?

Oliver Just do \d again on the key index name:

Oliver bray=# \d org_contact
Oliver bray=# \d org_contact_pkey

Cool. Works fine in 7.2, but not 7.1.3 (which we're running on our
production systems)...

Any idea how to do this on 7.1.3?
-- 
jihad iodine subway arrangements 767 Cocaine 747 Waco, Texas [Hello to
all my fans in domestic surveillance] terrorist security radar North
Korea plutonium Semtex
[See http://www.aclu.org/echelonwatch/index.html for more about this]

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Survey results on Oracle/M$NT4 to PG72/RH72 migration

2002-03-14 Thread Hannu Krosing

On Thu, 2002-03-14 at 02:30, Zeugswetter Andreas SB SD wrote:
 
  So this finally makes the batch work take 300% of the time Oracle needs.
  We clearly see our ECPG programs wait for PostgreSQL in the functions
  where CURSORs are opened. So we know the problem is not in ECPG but in
  the PG backend.
 
  This is unacceptable for our customer. Many batches are launched during
  the night and have to be completed in 5h (between 0h and 5h). With a
  ratio of 3, it is not worth thinking about migration anymore :-(
 
 So why exactly can you not simply do the whole batch in one transaction ?
 
 Unless you need to run concurrent vacuums,

I ran some tests based on their earlier description and concurrent
vacuums (the new, non-locking ones) are a must, best run every few
seconds, as without them the ratio of dead/live tuples will be huge and
that will bog down the whole process.

 or are low on disk space, or need 
 to concurrently update the affected rows (thus fear deadlocks or locking out
 interactive clients that update), there is no need to do frequent commits in 
 PostgreSQL for batch work.

I also suspect (from reading their description) that the main problem of
parsing/optimising each and every similar query will remain even if they
do run in one transaction.

In my tests of simple updates I got a 3/2 speed increase (from 1050 to
1500 updates/sec) by using prepared statements inside a stored procedure.
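
A minimal sketch of the kind of function meant here (the table, column and
function names are made up, and the plpgsql language must be installed in
the database); PL/pgSQL prepares the plan for the UPDATE on the first call
and reuses it on every later call, skipping re-parse/re-plan:

```sql
-- Hypothetical example: the UPDATE's plan is cached after the first call.
CREATE FUNCTION bump_total(integer, numeric) RETURNS integer AS '
BEGIN
    UPDATE totals SET amount = amount + $2 WHERE id = $1;
    RETURN 1;
END;
' LANGUAGE 'plpgsql';

-- Each call reuses the cached plan:
-- SELECT bump_total(1, 10.0);
```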

Hannu


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Survey results on Oracle/M$NT4 to PG72/RH72 migration

2002-03-14 Thread Hannu Krosing

On Wed, 2002-03-13 at 21:18, Jean-Paul ARGUDO wrote:
 Hi all,
 
 
 Here are the results of our survey on a migration from Oracle 8.0 / W$
 NT4 SP5 to PostgreSQL 7.2 / Red Hat 7.2.
 
 You'll probably remember a thread I initiated on this list a couple
 of weeks ago; this is the same survey for the same customer. Now the
 survey is finished.
 
 So, we migrated all of Oracle's specific syntaxes successfully, including
 CONNECT BY statements (thanks to all the hackers at the OpenACS project
 (visit http://www.openacs.org) for the good code!).

Could you elaborate here ?

I know they do some of it using triggers and bitmap indexes; do you mean
this?

 We successfully migrated Oracle Pro*C thanks to the fantastic ECPG
 (thanks, Michael).
 
 The overall performance of PostgreSQL is 33% slower than the Oracle/NT
 solution. One must say we faced a well-tuned Oracle, tuned for best
 performance. Even the SQL queries were very well tuned, using Oracle
 pragmas for example (e.g. /*+ USE_HASH(foobar) */).
 
 Since our customer accepted up to 50%, this is a success for us,
 technically, on this point.
 
 BUT, we faced a real problem. In some batches, the ECPG/Pro*C
 code makes intensive use of CURSOR loops. In Oracle, CURSORs
 can be PREPAREd. Thus, it seems Oracle computes the query plan
 for the cursor only once, even if it is closed and re-opened. Maybe some
 kind of stored query plan / caching / whatever makes it possible.

What kind of work do you do in these cursors ?

Is it inserts, updates, deletes, complicated selects ...

 This seems not to be the case in ECPG. At each COMMIT, the cursors are
 closed (they don't even need to close cursors in Oracle!). And at each
 BEGIN TRANSACTION, PostgreSQL seems to compute the parsing and query
 plan again.
 
 So this finally makes the batch work take 300% of the time Oracle needs.
 We clearly see our ECPG programs wait for PostgreSQL in the functions
 where CURSORs are opened. So we know the problem is not in ECPG but in
 the PG backend.

Could you make one sample test case with a minimal schema/data that
demonstrates this behaviour, so I can try to optimise it?

 This is unacceptable for our customer. Many batches are launched during
 the night and have to be completed in 5h (between 0h and 5h). With a
 ratio of 3, it is not worth thinking about migration anymore :-(
 
 We know we could get much better performance with something other than
 ECPG, for example using C or TCL stored procedures, placing the
 SQL work much closer to the PG backend, using SPI, etc...

Did you do any tests ?

How much faster did it get ?

 But this is
 not possible. We have to make it under ECPG; there are tons of Pro*C
 code to migrate, and we must make it the same way. With ECPG/Pro*C compiled
 programs we can stop executions, renice programs, etc., which we would
 lose by putting the work in stored procedures.

AFAIK some SQL/C type precompilers and other frontend tools for other
databases do generate stored procedures for PREPAREd CURSORs.

I'm afraid ECPG does not :(

But making ECPG do it might be one way to fix this until real prepared
queries are available to the frontend.
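
For reference, the cursor pattern under discussion looks roughly like this
in ECPG (a hypothetical fragment: the table and host-variable names are
invented, and it must go through the ecpg preprocessor, not plain cc). As
things stand, every OPEN re-parses and re-plans the SELECT:

```c
/* Hypothetical ECPG fragment -- run through the ecpg preprocessor first.
 * Each OPEN below re-plans the query; the idea above is to let ECPG push
 * the prepared plan into a backend procedure instead. */
EXEC SQL BEGIN DECLARE SECTION;
int    sector;
double total;
EXEC SQL END DECLARE SECTION;

EXEC SQL DECLARE cur CURSOR FOR
    SELECT sum(amount) FROM sales WHERE sector_id = :sector;

for (sector = 0; sector < 100; sector++)
{
    EXEC SQL OPEN cur;          /* plan computed again on every iteration */
    EXEC SQL FETCH cur INTO :total;
    EXEC SQL CLOSE cur;
    EXEC SQL COMMIT;            /* in PostgreSQL, COMMIT also closes cursors */
}
```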

 So, I'd really like some of you to validate this thing about cursors. We
 have a strange feeling blending pride, for only a 1.33 ratio against
 the giant Oracle, with a feeling of something unfinished, because of
 only one feature not yet implemented...
 
 I have read the current TODO list many times. I think our problem is
 somewhere between the CURSOR items and the CACHE items in the TODO.
 
 We would really appreciate some of you validating this behaviour with
 CURSORs; this would confirm we didn't spend 40 man-days for nothing, and
 that we have reached a good explanation of the problem, and have not
 dug just next to the treasure.

The treasure is currently locked up in the backend behind the FE/BE protocol.

--
Hannu




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] insert statements

2002-03-14 Thread Vince Vielhaber

On Wed, 13 Mar 2002, Peter Eisentraut wrote:

 Vince Vielhaber writes:

  For example:
 
  insert into foo(foo.a) values(1);
 
  fails because the table name is used.  Update statements also include the
  table name.  Both fail.  Does anyone know of a workaround?

 Complain loudly to whomever wrote that SQL.  It's completely
 non-standard.

 (The implication I'm trying to make is that there's no way to make
 PostgreSQL accept that statement.  Adding this as an extension has been
 rejected in the past.)

I'm now wondering why it was rejected.  I couldn't try this last nite
so I just tried it now.  Here's with Sybase 11.0.3.3 :

1> create table foo(a int)
2> go
1> insert into foo(a) values(1)
2> go
(1 row affected)
1> insert into foo(foo.a) values(2)
2> go
(1 row affected)
1>

And I suspect more than just MySQL and Sybase accept either syntax.
Right now I'm modifying PostNuke, but that's only a short-term solution,
and I wouldn't want to add it to PostgreSQL either, 'cuze if it remains
rejected that would hamper upgrades.  ROCK -- ME -- HARD PLACE   :)
There are really no other decent CMSs available that support PostgreSQL.

Vince.
-- 
==
Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directoryhttp://www.camping-usa.com
   Online Giftshop Superstorehttp://www.cloudninegifts.com
==




---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] [JDBC] implementing query timeout

2002-03-14 Thread Jessica Perry Hekman

On Wed, 13 Mar 2002, Bruce Momjian wrote:

 You bet, but it would be done in the backend, not in jdbc.  Is that OK?

Theoretically this is okay. I am more comfortable in Java than in C and I
hadn't looked at the backend code at all, but I'll take a peek and see if
it looks like something I'd feel comfortable doing.

 I have some ideas that should make it pretty easy.  If you set an
 alarm() in the backend on transaction start, then call the query
 cancel() code if the alarm() goes off, that should do it.  Of course,
 you reset the alarm if the query finishes before the timeout.

Sounds straightforward enough. Hopefully I'll get a chance to look at this
before the end of this week.

Thanks!

Jessica



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Survey results on Oracle/M$NT4 to PG72/RH72 migration

2002-03-14 Thread Hannu Krosing

On Thu, 2002-03-14 at 11:20, Jean-Paul ARGUDO wrote:
   Unless you need to run concurrent vacuums,
 
 Forgot to say too that the x3 ratio is based only on batch mode. Daemon
 mode is as fast as Oracle (wow!).
 
 Forgot to say too that in batch mode we launch concurrent vacuum analyze
 on the 2 tables constantly accessed (updates/inserts only: updating the
 total sale by sector / sub-sector / sub-sub-sector, etc.; the total sales
 have a tree structure).
 
 The vacuum analyze on those 2 tables has a sleep of 10 s, in a 
 while [ 1 ] loop in a .sh

If the general distribution of values does not drastically change in
these tables then you can save some time by running just VACUUM, not
VACUUM ANALYZE.

VACUUM does all the old-tuple removal work.

VACUUM ANALYZE does that and also gathers statistics, which makes it slower.
  
  I ran some tests based on their earlier description and concurrent
  vacuums (the new, non-locking ones) are a must, best run every few
  seconds, as without them the ratio of dead/live tuples will be huge and
  that will bog down the whole process.
 
 Yes, concurrent vacuum is really *GREAT*; without it, the batch work
 goes slower and slower with time. Concurrent vacuum allows constant
 performance.
  
  I also suspect (from reading their description) that the main problem of
  parsing/optimising each and every similar query will remain even if they
  do run in one transaction.
 
 Exactly.
  
 To answer a question in this thread: the batch has really basic SQL
 statements! The CURSORs are really simple too, based on 1 or 2 bind
 variables that unfortunately are not processed the same way as in Oracle.
 :-(

Can you give me a small made-up example, and then tell me what
performance you get on Oracle/NT and what on PostgreSQL/Linux?

I'd like to try to move cursor -> backend proc and see 

1) if it is big enough gain to warrant further work

2) if it can be done automatically, either by preprocessing ECPG or just
   changing it

--
Hannu



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] 'Following' the Primary key

2002-03-14 Thread Jean-Paul ARGUDO

 Cool. Works fine in 7.2, but not 7.1.3 (which we're running on our
 production systems)...
 Any idea how to do this on 7.1.3?

contact=# \d t_operation
                              Table "t_operation"
 Attribute |          Type          |                      Modifier
-----------+------------------------+----------------------------------------------------
 op_id     | integer                | not null default nextval('operation_id_seq'::text)
 op_date   | date                   | not null
 op_dpt    | character varying(50)  | 
 op_typ    | character varying(50)  | 
 op_dsc    | character varying(500) | 
 cnx_id    | integer                | not null
Index: t_operation_pkey
       ^^^^^^^^^^^^^^^^
       Default primary key index


contact=# \d t_operation_pkey
Index "t_operation_pkey"
 Attribute |  Type   
-----------+---------
 op_id     | integer
unique btree (primary key)
^^^^^^       ^^^^^^^^^^^^^

Watch for unique indices created with CREATE UNIQUE INDEX ...

Cheers,

-- 
Jean-Paul ARGUDO

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[HACKERS] Client/Server compression?

2002-03-14 Thread Greg Copeland

Just curious, and honestly I haven't looked, but is there any form of
compression between clients and servers?  Has this been looked at?

Greg







Re: [HACKERS] 'Following' the Primary key

2002-03-14 Thread Turbo Fredriksson

Quoting Oliver Elphick [EMAIL PROTECTED]:

 On Thu, 2002-03-14 at 13:28, Turbo Fredriksson wrote:
  Oliver Just do \d again on the key index name:
  
  Oliver bray=# \d org_contact
  Oliver bray=# \d org_contact_pkey
  
  Cool. Works fine in 7.2, but not 7.1.3 (which we're running on our
  production systems)...
  
  Any idea how to do this on 7.1.3?
 
 psql -E tells me that the queries include this:

I thought it was '-e', and that didn't give any output,
so I never figured this out myself...

  SELECT a.attname, format_type(a.atttypid, a.atttypmod),
         a.attnotnull, a.atthasdef, a.attnum
  FROM pg_class c, pg_attribute a
  WHERE c.relname = 'org_contact_pkey'
    AND a.attnum > 0 AND a.attrelid = c.oid
  ORDER BY a.attnum;

Works like a charm, thanx!!
-- 
$400 million in gold bullion Soviet Saddam Hussein supercomputer Waco,
Texas Iran munitions PLO explosion Cuba congress Semtex BATF Treasury
NSA
[See http://www.aclu.org/echelonwatch/index.html for more about this]

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[HACKERS] problems with Tomcat and postgres

2002-03-14 Thread Jose Javier Gutierrez

Hi friends,
I have problems with postgres.jar and Tomcat. I get the following exception:

- Persistence exception:
com.kristinaIbs.persistence.ExceptionPersistence: ManagerPersistencePool
(getConnection). Connection refused. Check that the hostname and port is
correct, and that the postmaster is running with the -i flag, which enables
TCP/IP networking.
        at com.kristinaIbs.persistence.ManagerPersistencePool.getConnection(ManagerPersistencePool.java:112)
        at com.kristinaIbs.user.UserManager.getUserByLogin(UserManager.java:314)

I have the following parameters:
 driver  = org.postgresql.Driver
url  = jdbc:postgresql://192.168.0.7:5432/easysite
user = postgres
password =

Can you help, please?
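
The exception itself points at the usual cause: the server is not listening
on TCP/IP. A sketch of the era-appropriate fix (the data directory path and
subnet are assumptions for illustration):

```shell
# Start the postmaster with TCP/IP enabled (or set tcpip_socket = true
# in postgresql.conf), then allow the Tomcat machine in pg_hba.conf.
postmaster -i -D /usr/local/pgsql/data &

# Example pg_hba.conf line permitting the 192.168.0.x hosts:
#   host  easysite  192.168.0.0  255.255.255.0  trust
```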


-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On behalf of Bruce Momjian
Sent: Wednesday, March 13, 2002 21:23
To: Jessica Perry Hekman
Cc: [EMAIL PROTECTED]; PostgreSQL-development
Subject: Re: [JDBC] implementing query timeout


Jessica Perry Hekman wrote:
 As has been noted on this list before, query timeouts are not implemented
 in pgsql-jdbc (see

   http://archives.postgresql.org/pgsql-bugs/2000-12/msg00093.php

 ). This is currently causing a problem for me, and I might (no
 promises) be interested in implementing it. So I'm testing the waters. If
 I did submit a patch for this, would the developers here be interested?

Let me also add that Cancel now works in the CVS copy of the jdbc
driver.

--
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] insert statements

2002-03-14 Thread Rod Taylor

Why not send in your changes to PostNuke along with the appropriate
section from the SQL specs?

Surely they'll apply a reasoned patch which improves conformance to
the SQL standard and doesn't break anything in the process.  I'd
suspect both Sybase and MySQL can also take insert into foo (a) as
well.
--
Rod Taylor

This message represents the official view of the voices in my head

- Original Message -
From: Vince Vielhaber [EMAIL PROTECTED]
To: Peter Eisentraut [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, March 14, 2002 8:29 AM
Subject: Re: [HACKERS] insert statements


 On Wed, 13 Mar 2002, Peter Eisentraut wrote:

  Vince Vielhaber writes:
 
   For example:
  
   insert into foo(foo.a) values(1);
  
   fails because the table name is used.  Update statements also
include the
   table name.  Both fail.  Does anyone know of a workaround?
 
  Complain loudly to whomever wrote that SQL.  It's completely
  non-standard.
 
  (The implication I'm trying to make is that there's no way to make
  PostgreSQL accept that statement.  Adding this as an extension has
been
  rejected in the past.)

 I'm now wondering why it was rejected.  I couldn't try this last
nite
 so I just tried it now.  Here's with Sybase 11.0.3.3 :

 1> create table foo(a int)
 2> go
 1> insert into foo(a) values(1)
 2> go
 (1 row affected)
 1> insert into foo(foo.a) values(2)
 2> go
 (1 row affected)
 1>

 And I suspect more than just mysql and sybase accept either syntax.
 Right now I'm modifying postnuke but that's only a short term
solution,
 and I wouldn't want to add it to PostgreSQL either 'cuze if it
remains
 rejected that would hamper upgrades.  ROCK -- ME -- HARD PLACE
:)
 There are really no other decent CMSs available that support
PostgreSQL.

 Vince.
 --

==

 Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]
http://www.pop4.net
  56K Nationwide Dialup from $16.00/mo at Pop4 Networking
 Online Campground Directoryhttp://www.camping-usa.com
Online Giftshop Superstorehttp://www.cloudninegifts.com

==





 ---(end of
broadcast)---
 TIP 6: Have you searched our list archives?

 http://archives.postgresql.org



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] insert statements

2002-03-14 Thread Vince Vielhaber

On Thu, 14 Mar 2002, Rod Taylor wrote:

 Why not send in your changes to PostNuke along with the appropriate
 section from the SQL specs?

 Surely they'll apply a reasoned patch which improves conformance to
 the SQL standard and doesn't break anything in the process.  I'd
 suspect both SyBase, and MySQL can also take insert into foo (a) as
 well.

Look below, I showed both syntaxes with Sybase.  Since I don't have a
copy of the SQL specs I can't send them the appropriate section or I
would have already.  Care to forward that appropriate section?


 --
 Rod Taylor

 This message represents the official view of the voices in my head

 - Original Message -
 From: Vince Vielhaber [EMAIL PROTECTED]
 To: Peter Eisentraut [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Thursday, March 14, 2002 8:29 AM
 Subject: Re: [HACKERS] insert statements


  On Wed, 13 Mar 2002, Peter Eisentraut wrote:
 
   Vince Vielhaber writes:
  
For example:
   
insert into foo(foo.a) values(1);
   
fails because the table name is used.  Update statements also
 include the
table name.  Both fail.  Does anyone know of a workaround?
  
   Complain loudly to whomever wrote that SQL.  It's completely
   non-standard.
  
   (The implication I'm trying to make is that there's no way to make
   PostgreSQL accept that statement.  Adding this as an extension has
 been
   rejected in the past.)
 
  I'm now wondering why it was rejected.  I couldn't try this last
 nite
  so I just tried it now.  Here's with Sybase 11.0.3.3 :
 
  1> create table foo(a int)
  2> go
  1> insert into foo(a) values(1)
  2> go
  (1 row affected)
  1> insert into foo(foo.a) values(2)
  2> go
  (1 row affected)
  1>
 
  And I suspect more than just mysql and sybase accept either syntax.
  Right now I'm modifying postnuke but that's only a short term
 solution,
  and I wouldn't want to add it to PostgreSQL either 'cuze if it
 remains
  rejected that would hamper upgrades.  ROCK -- ME -- HARD PLACE
 :)
  There are really no other decent CMSs available that support
 PostgreSQL.
 
  Vince.
  --
 
 ==
 
  Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]
 http://www.pop4.net
   56K Nationwide Dialup from $16.00/mo at Pop4 Networking
  Online Campground Directoryhttp://www.camping-usa.com
 Online Giftshop Superstorehttp://www.cloudninegifts.com
 
 ==
 
 
 
 
 
  ---(end of
 broadcast)---
  TIP 6: Have you searched our list archives?
 
  http://archives.postgresql.org
 




Vince.
-- 
==
Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directoryhttp://www.camping-usa.com
   Online Giftshop Superstorehttp://www.cloudninegifts.com
==




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] insert statements

2002-03-14 Thread Rod Taylor

 As snipped from:
http://archives.postgresql.org/pgsql-bugs/2000-10/msg00030.php (all
my stuff is in paper form):

What's your definition of "other dbs"?  The above statement is quite
clearly in violation of the SQL92 and SQL99 specifications:

  <insert statement> ::=
       INSERT INTO <table name>
         <insert columns and source>

  <insert columns and source> ::=
     [ <left paren> <insert column list> <right paren> ]
       <query expression>
       | DEFAULT VALUES

  <insert column list> ::= <column name list>

  <column name list> ::=
       <column name> [ { <comma> <column name> }... ]

  <column name> ::= <identifier>

I'm not particularly excited about supporting non-SQL variant syntaxes
that add no functionality.

regards, tom lane
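
To make the dispute concrete, the two spellings at issue (using the foo
table from the thread) are:

```sql
-- Standard SQL92/SQL99: the insert column list holds bare column names.
INSERT INTO foo (a) VALUES (1);

-- The variant under discussion: the column qualified with the table name.
-- Sybase, Oracle, MS-SQL and MySQL accept it; PostgreSQL rejects it.
INSERT INTO foo (foo.a) VALUES (2);
```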
--
Rod Taylor

This message represents the official view of the voices in my head

- Original Message -
From: Vince Vielhaber [EMAIL PROTECTED]
To: Rod Taylor [EMAIL PROTECTED]
Cc: Peter Eisentraut [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Thursday, March 14, 2002 9:08 AM
Subject: Re: [HACKERS] insert statements


 On Thu, 14 Mar 2002, Rod Taylor wrote:

  Why not send in your changes to PostNuke along with the
appropriate
  section from the SQL specs?
 
  Surely they'll apply a reasoned patch which improves conformance
to
  the SQL standard and doesn't break anything in the process.  I'd
  suspect both SyBase, and MySQL can also take insert into foo (a)
as
  well.

 Look below, I showed both syntaxes with Sybase.  Since I don't have
a
 copy of the SQL specs I can't send them the appropriate section or I
 would have already.  Care to forward that appropriate section?


  --
  Rod Taylor
 
  This message represents the official view of the voices in my head
 
  - Original Message -
  From: Vince Vielhaber [EMAIL PROTECTED]
  To: Peter Eisentraut [EMAIL PROTECTED]
  Cc: [EMAIL PROTECTED]
  Sent: Thursday, March 14, 2002 8:29 AM
  Subject: Re: [HACKERS] insert statements
 
 
   On Wed, 13 Mar 2002, Peter Eisentraut wrote:
  
Vince Vielhaber writes:
   
 For example:

 insert into foo(foo.a) values(1);

 fails because the table name is used.  Update statements
also
  include the
 table name.  Both fail.  Does anyone know of a workaround?
   
Complain loudly to whomever wrote that SQL.  It's completely
non-standard.
   
(The implication I'm trying to make is that there's no way to
make
PostgreSQL accept that statement.  Adding this as an extension
has
  been
rejected in the past.)
  
   I'm now wondering why it was rejected.  I couldn't try this last
  nite
   so I just tried it now.  Here's with Sybase 11.0.3.3 :
  
   1> create table foo(a int)
   2> go
   1> insert into foo(a) values(1)
   2> go
   (1 row affected)
   1> insert into foo(foo.a) values(2)
   2> go
   (1 row affected)
   1>
  
   And I suspect more than just mysql and sybase accept either
syntax.
   Right now I'm modifying postnuke but that's only a short term
  solution,
   and I wouldn't want to add it to PostgreSQL either 'cuze if it
  remains
   rejected that would hamper upgrades.  ROCK -- ME -- HARD PLACE
  :)
   There are really no other decent CMSs available that support
  PostgreSQL.
  
   Vince.
   --
  
 
==
  
   Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]
  http://www.pop4.net
56K Nationwide Dialup from $16.00/mo at Pop4 Networking
   Online Campground Directory
http://www.camping-usa.com
  Online Giftshop Superstore
http://www.cloudninegifts.com
  
 
==
  
  
  
  
  
   ---(end of
  broadcast)---
   TIP 6: Have you searched our list archives?
  
   http://archives.postgresql.org
  
 
 


 Vince.
 --

==

 Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]
http://www.pop4.net
  56K Nationwide Dialup from $16.00/mo at Pop4 Networking
 Online Campground Directoryhttp://www.camping-usa.com
Online Giftshop Superstorehttp://www.cloudninegifts.com

==







---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] insert statements

2002-03-14 Thread Vince Vielhaber

On Thu, 14 Mar 2002, Rod Taylor wrote:

  As snipped from:
 http://archives.postgresql.org/pgsql-bugs/2000-10/msg00030.php  (All
 my stuff is in paper form)
 What's your definition of other dbs?  The above statement is quite
 clearly in violation of the SQL92 and SQL99 specifications:

And nowhere does it say that <column name> cannot be qualified with
the table name in front of it.  Looking at the entire message noted
above, the list of other dbs that support it is now Oracle, Sybase,
MS-SQL and MySQL.  If "other dbs" ends up being the equivalent of
everything but PostgreSQL, then which one is non-standard?





   <insert statement> ::=
        INSERT INTO <table name>
          <insert columns and source>

   <insert columns and source> ::=
      [ <left paren> <insert column list> <right paren> ]
        <query expression>
        | DEFAULT VALUES

   <insert column list> ::= <column name list>

   <column name list> ::=
        <column name> [ { <comma> <column name> }... ]

   <column name> ::= <identifier>

 I'm not particularly excited about supporting non-SQL variant syntaxes
 that add no functionality.

   regards, tom lane
 --
 Rod Taylor

 This message represents the official view of the voices in my head

 - Original Message -
 From: Vince Vielhaber [EMAIL PROTECTED]
 To: Rod Taylor [EMAIL PROTECTED]
 Cc: Peter Eisentraut [EMAIL PROTECTED];
 [EMAIL PROTECTED]
 Sent: Thursday, March 14, 2002 9:08 AM
 Subject: Re: [HACKERS] insert statements


  On Thu, 14 Mar 2002, Rod Taylor wrote:
 
   Why not send in your changes to PostNuke along with the
 appropriate
   section from the SQL specs?
  
   Surely they'll apply a reasoned patch which improves conformance
 to
   the SQL standard and doesn't break anything in the process.  I'd
   suspect both SyBase, and MySQL can also take insert into foo (a)
 as
   well.
 
  Look below, I showed both syntaxes with Sybase.  Since I don't have
 a
  copy of the SQL specs I can't send them the appropriate section or I
  would have already.  Care to forward that appropriate section?
 
 
   --
   Rod Taylor
  
   This message represents the official view of the voices in my head
  
   - Original Message -
   From: Vince Vielhaber [EMAIL PROTECTED]
   To: Peter Eisentraut [EMAIL PROTECTED]
   Cc: [EMAIL PROTECTED]
   Sent: Thursday, March 14, 2002 8:29 AM
   Subject: Re: [HACKERS] insert statements
  
  
On Wed, 13 Mar 2002, Peter Eisentraut wrote:
   
 Vince Vielhaber writes:

  For example:
 
  insert into foo(foo.a) values(1);
 
  fails because the table name is used.  Update statements
 also
   include the
  table name.  Both fail.  Does anyone know of a workaround?

 Complain loudly to whomever wrote that SQL.  It's completely
 non-standard.

 (The implication I'm trying to make is that there's no way to
 make
 PostgreSQL accept that statement.  Adding this as an extension
 has
   been
 rejected in the past.)
   
I'm now wondering why it was rejected.  I couldn't try this last
   nite
so I just tried it now.  Here's with Sybase 11.0.3.3 :
   
1> create table foo(a int)
2> go
1> insert into foo(a) values(1)
2> go
(1 row affected)
1> insert into foo(foo.a) values(2)
2> go
(1 row affected)
1>
   
And I suspect more than just mysql and sybase accept either
 syntax.
Right now I'm modifying postnuke but that's only a short term
   solution,
and I wouldn't want to add it to PostgreSQL either 'cuze if it
   remains
rejected that would hamper upgrades.  ROCK -- ME -- HARD PLACE
   :)
There are really no other decent CMSs available that support
   PostgreSQL.
   
Vince.
--
   
  
 ==
   
Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]
   http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directory
 http://www.camping-usa.com
   Online Giftshop Superstore
 http://www.cloudninegifts.com
   
  
 ==
   
   
   
   
   
---(end of
   broadcast)---
TIP 6: Have you searched our list archives?
   
http://archives.postgresql.org
   
  
  
 
 
  Vince.
  --
 
 ==
 
  Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]
 http://www.pop4.net
   56K Nationwide Dialup from $16.00/mo at Pop4 Networking
  Online Campground Directoryhttp://www.camping-usa.com
 Online Giftshop Superstorehttp://www.cloudninegifts.com
 
 ==
 
 
 
 
 




Vince.
-- 

Re: [HACKERS] insert statements

2002-03-14 Thread Rod Taylor

Out of curiosity, does Sybase allow you to qualify it with
schema.table.column?
--
Rod Taylor

This message represents the official view of the voices in my head

- Original Message -
From: Vince Vielhaber [EMAIL PROTECTED]
To: Rod Taylor [EMAIL PROTECTED]
Cc: Peter Eisentraut [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Thursday, March 14, 2002 9:39 AM
Subject: Re: [HACKERS] insert statements


 On Thu, 14 Mar 2002, Rod Taylor wrote:

   As snipped from:
  http://archives.postgresql.org/pgsql-bugs/2000-10/msg00030.php  (All
  my stuff is in paper form)
  What's your definition of other dbs?  The above statement is quite
  clearly in violation of the SQL92 and SQL99 specifications:

 And nowhere does it say that column name cannot be qualified with
 the table name in front of it.  Looking at the entire message noted
 above the list of other dbs that support it is now Oracle, Sybase,
 MS-SQL and mysql.  If other dbs ends up the equivalent of everything
 but PostgreSQL then which one is non-standard?




 
<insert statement> ::=
    INSERT INTO <table name>
      <insert columns and source>

<insert columns and source> ::=
      [ <left paren> <insert column list> <right paren> ]
        <query expression>
    | DEFAULT VALUES

<insert column list> ::= <column name list>

<column name list> ::=
    <column name> [ { <comma> <column name> }... ]

<column name> ::= <identifier>
 
  I'm not particularly excited about supporting non-SQL variant syntaxes
  that add no functionality.
 
  regards, tom lane
  --
  Rod Taylor
 
  This message represents the official view of the voices in my head
 
  - Original Message -
  From: Vince Vielhaber [EMAIL PROTECTED]
  To: Rod Taylor [EMAIL PROTECTED]
  Cc: Peter Eisentraut [EMAIL PROTECTED];
  [EMAIL PROTECTED]
  Sent: Thursday, March 14, 2002 9:08 AM
  Subject: Re: [HACKERS] insert statements
 
 
   On Thu, 14 Mar 2002, Rod Taylor wrote:
  
Why not send in your changes to PostNuke along with the appropriate
section from the SQL specs?

Surely they'll apply a reasoned patch which improves conformance to
the SQL standard and doesn't break anything in the process.  I'd
suspect both SyBase, and MySQL can also take insert into foo (a) as
well.

   Look below, I showed both syntaxes with Sybase.  Since I don't have a
   copy of the SQL specs I can't send them the appropriate section or I
   would have already.  Care to forward that appropriate section?
  
  
--
Rod Taylor
   
This message represents the official view of the voices in my
head
   
- Original Message -
From: Vince Vielhaber [EMAIL PROTECTED]
To: Peter Eisentraut [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, March 14, 2002 8:29 AM
Subject: Re: [HACKERS] insert statements
   
   
 On Wed, 13 Mar 2002, Peter Eisentraut wrote:

  Vince Vielhaber writes:
 
   For example:
  
   insert into foo(foo.a) values(1);
  
   fails because the table name is used.  Update statements also
   include the table name.  Both fail.  Does anyone know of a
   workaround?
 
  Complain loudly to whomever wrote that SQL.  It's completely
  non-standard.

  (The implication I'm trying to make is that there's no way to make
  PostgreSQL accept that statement.  Adding this as an extension has
  been rejected in the past.)

 I'm now wondering why it was rejected.  I couldn't try this last nite
 so I just tried it now.  Here's with Sybase 11.0.3.3 :

 1 create table foo(a int)
 2 go
 1 insert into foo(a) values(1)
 2 go
 (1 row affected)
 1 insert into foo(foo.a) values(2)
 2 go
 (1 row affected)
 1

 And I suspect more than just mysql and sybase accept either syntax.
 Right now I'm modifying postnuke but that's only a short term solution,
 and I wouldn't want to add it to PostgreSQL either 'cuze if it remains
 rejected that would hamper upgrades.  ROCK -- ME -- HARD PLACE :)
 There are really no other decent CMSs available that support PostgreSQL.

 Vince.
 --

   
 
 ==
 Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]http://www.pop4.net
  56K Nationwide Dialup from $16.00/mo at Pop4 Networking
 Online Campground Directoryhttp://www.camping-usa.com
    Online Giftshop Superstorehttp://www.cloudninegifts.com
 ==





 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?

 http://archives.postgresql.org

   
   
  
  
   Vince.
   

Re: [HACKERS] insert statements

2002-03-14 Thread Vince Vielhaber

On Thu, 14 Mar 2002, Rod Taylor wrote:

 Out of curiosity, does SyBase allow you to qualify it with
 schema.table.column?

Just tried it...  Yes.

Vince.
-- 
==
Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directoryhttp://www.camping-usa.com
   Online Giftshop Superstorehttp://www.cloudninegifts.com
==




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] about BufferPoolBlowaway()

2002-03-14 Thread Tom Lane

Seung Hyun Jeong [EMAIL PROTECTED] writes:
 I am experimenting on performance evaluation for some queries based on
 PostgreSQL.  To give fair conditions to each query, I try to clear
 PostgreSQL's buffers before running each query.  I think the following
 function in .../backend/storage/buffer/bufmgr.c seems to be designed
 for such a purpose.  But the function seems to have a logical error,
 in my opinion.

Actually, BufferPoolBlowaway is so completely wrong-headed that it
should be removed entirely.  You can't go around arbitrarily releasing
pins on buffers.  The holder of the pin is going to crash or corrupt
data if you do.

I'm not convinced that starting from an empty disk cache is a
particularly interesting performance measurement, but if you insist
on it: reboot and start the postmaster for each measurement.  (Anything
less than a reboot is an exercise in self-deception, since Postgres
relies on the kernel's disk cache quite as much as its own buffers.)

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] problems with Tomcat and postgres

2002-03-14 Thread Doug McNaught

Jose Javier Gutierrez [EMAIL PROTECTED] writes:

 com.kristinaIbs.persistence.ExceptionPersistence: ManagerPersistencePool
 (getConnection).Connection refused. Check that the hostname and port is
 correct, and that the postmaster is running with the -i flag, which enables
 TCP/IP networking.

Is the postmaster indeed listening on a TCP/IP port, (usually 5432) or
just on the Unix-domain socket?  You have to specifically turn on
TCP/IP for security reasons--it's not enabled by default.

-Doug
-- 
Doug McNaught   Wireboard Industries  http://www.wireboard.com/

  Custom software development, systems and network consulting.
  Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Survey results on Oracle/M$NT4 to PG72/RH72 migration

2002-03-14 Thread Matthew Kirkwood

On Thu, 14 Mar 2002, Jean-Paul ARGUDO wrote:

 This daemon wakes up every 5 seconds.  It scans (SELECT...) for new
 inserts in a table (like a trigger).  When new tuples are found, it
 launches the work.  The work consists of computing total sales of a
 big store...

You might find it worthwhile to investigate LISTEN and
NOTIFY -- combined with a rule or trigger, you can get
this effect in near-real-time.

You'll probably still want a sleep(5) at the end of the
loop so you can batch a reasonable number of updates if
there's a lot going on.
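Matthew's rule-plus-notification idea can be sketched as follows (the
table, rule, and channel names here are hypothetical, not from the
original thread):

```sql
-- Hypothetical schema: each new sale lands in this table.
CREATE TABLE sales_queue (id serial PRIMARY KEY, amount numeric);

-- A rule turns every insert into a notification on the new_sale channel.
CREATE RULE sales_queue_notify AS
    ON INSERT TO sales_queue DO NOTIFY new_sale;

-- The daemon's session registers interest once...
LISTEN new_sale;
-- ...and its client library then waits for notifications instead of
-- re-issuing a SELECT every 5 seconds.
```

On wakeup the daemon can still batch: process everything currently
queued, sleep briefly, then wait on the notification again.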

Matthew.


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] 'Following' the Primary key

2002-03-14 Thread Joe Conway

Turbo Fredriksson wrote:
>>>>> "Oliver" == Oliver Elphick <[EMAIL PROTECTED]> writes:

 
 Oliver On Thu, 2002-03-14 at 13:00, Turbo Fredriksson wrote:
  With '\d table' I get the columns, types and modifiers. Also
  the Primary key, Indexes etc are shown.
  
  But if I want to know WHAT the primary key 'is pointing to',
  how would I do that (ie, what is the primary key)?
 
 Oliver Just do \d again on the key index name:
 
 Oliver bray=# \d org_contact
 Oliver bray=# \d org_contact_pkey
 
 Cool. Works fine in 7.2, but not 7.1.3 (which we're running on our
 production systems)...
 
 Any idea how to do this on 7.1.3?
 

See:
http://www.brasileiro.net/postgres/cookbook/view-one-recipe.adp?recipe_id=36

for one possible way.

Joe


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] insert statements

2002-03-14 Thread Stephan Szabo


On Thu, 14 Mar 2002, Vince Vielhaber wrote:

 On Thu, 14 Mar 2002, Rod Taylor wrote:

   As snipped from:
  http://archives.postgresql.org/pgsql-bugs/2000-10/msg00030.php  (All
  my stuff is in paper form)
  What's your definition of other dbs?  The above statement is quite
  clearly in violation of the SQL92 and SQL99 specifications:

 And nowhere does it say that column name cannot be qualified with
 the table name in front of it.  Looking at the entire message noted

AFAICS  periods are not valid in identifiers that are not double
quoted (section 5.2 has the rules on regular identifiers and delimited
ones)

 <regular identifier> ::= <identifier body>

 <identifier body> ::=
     <identifier start> [ { <underscore> | <identifier part> }... ]

 <identifier start> ::= !! See the Syntax Rules

 <identifier part> ::=
       <identifier start>
     | <digit>

<identifier start> is a simple latin letter, a letter in the character
repertoire that's in use, a syllable in the repertoire or an ideograph in
the repertoire.

<identifier> is defined as either a regular identifier or a delimited one
(ie double quoted).  So <column name> cannot contain periods.

That being said, is this something that's worth adding due to general
usage by other systems?
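Stephan's reading of the grammar can be illustrated with a toy check (a
simplified sketch: ASCII letters only, ignoring delimited identifiers and
non-Latin repertoires): a <regular identifier> admits letters, digits,
and underscores, but never a period, so foo.a cannot be a single
<column name>.

```python
import re

# Simplified sketch of SQL92 sec. 5.2 <regular identifier>: an
# <identifier start> (a letter) followed by letters, digits, or
# underscores.  Periods are not in the grammar, so a qualified name
# is not a single <identifier>.
REGULAR_IDENTIFIER = re.compile(r"[A-Za-z][A-Za-z0-9_]*\Z")

def is_regular_identifier(s):
    return REGULAR_IDENTIFIER.match(s) is not None

print(is_regular_identifier("a"))      # True: plain column name
print(is_regular_identifier("foo_a"))  # True: underscore is allowed
print(is_regular_identifier("foo.a"))  # False: period is not
```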



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] insert statements

2002-03-14 Thread Vince Vielhaber

On Thu, 14 Mar 2002, Stephan Szabo wrote:


   <identifier start> ::= !! See the Syntax Rules

   <identifier part> ::=
         <identifier start>
       | <digit>
  <identifier start> is a simple latin letter, a letter in the character
  repertoire that's in use, a syllable in the repertoire or an ideograph in
  the repertoire.

  <identifier> is defined as either a regular identifier or a delimited one
  (ie double quoted).  So <column name> cannot contain periods.

 That being said, is this something that's worth adding due to general
 usage by other systems?

In an odd way, I guess that's what I'm asking.  At what point is it us
that's non-standard?

Vince.
-- 
==
Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directoryhttp://www.camping-usa.com
   Online Giftshop Superstorehttp://www.cloudninegifts.com
==




---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] insert statements

2002-03-14 Thread Tom Lane

Vince Vielhaber [EMAIL PROTECTED] writes:
 What's your definition of other dbs?  The above statement is quite
 clearly in violation of the SQL92 and SQL99 specifications:

 And nowhere does it say that column name cannot be qualified with
 the table name in front of it.

Au contraire, that is EXACTLY what that bit of BNF is saying.  If
they'd meant to allow this construction then the BNF would refer to
<qualified name>, not just <identifier>.

 Looking at the entire message noted
 above the list of other dbs that support it is now Oracle, Sybase,
 MS-SQL and mysql.  If other dbs ends up the equivalent of everything
 but PostgreSQL then which one is non-standard?

Out of curiosity, what do these guys do if I try the obvious

insert into foo (bar.col) ...

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-14 Thread Tom Lane

Tom Pfau [EMAIL PROTECTED] writes:
 I'm concerned that the discussion here has been of the opinion that
 since no records were written to the database using the value retrieved
 from the sequence that no damage has been done.

Um, you certainly didn't hear me saying that ;-)

There are two different bugs involved here.  One is the no-WAL-flush-
if-transaction-is-only-nextval problem.  AFAIK everyone agrees we must
fix that.  The other issue is this business about logging ahead
(to reduce the number of WAL records written) not interacting correctly
with checkpoints.  What we're arguing about is exactly how to fix that
part.

 We are using database
 sequences to get unique identifiers for things outside the database.  If
 a sequence could ever under any circumstances reissue a value, this
 could be damaging to the integrity of our software.

If you do a SELECT nextval() and then use the returned value externally
*without waiting for a commit acknowledgement*, then I think you are
risking trouble; there's no guarantee that the WAL record (if one is
needed) has hit disk yet, and so a crash could roll back the sequence.

This isn't an issue for a SELECT nextval() standing on its own ---
AFAIK the result will not be transmitted to the client until after the
commit happens.  But it would be an issue for a select executed inside
a transaction block (begin/commit).

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-14 Thread Dave Cramer

I noticed a message asking if this scenario was consistent with the
other reports, and yes it is. We have seen this occurring on our system
with versions as old as 7.0.

Glad to see someone has finally nailed this one.

Dave


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] insert statements

2002-03-14 Thread Vince Vielhaber

On Thu, 14 Mar 2002, Tom Lane wrote:

 Vince Vielhaber [EMAIL PROTECTED] writes:
  What's your definition of other dbs?  The above statement is quite
  clearly in violation of the SQL92 and SQL99 specifications:

  And nowhere does it say that column name cannot be qualified with
  the table name in front of it.

 Au contraire, that is EXACTLY what that bit of BNF is saying.  If
 they'd meant to allow this construction then the BNF would refer to
  <qualified name>, not just <identifier>.

  Looking at the entire message noted
  above the list of other dbs that support it is now Oracle, Sybase,
  MS-SQL and mysql.  If other dbs ends up the equivalent of everything
  but PostgreSQL then which one is non-standard?

 Out of curiosity, what do these guys do if I try the obvious

   insert into foo (bar.col) ...

Looks like Sybase ignores the bar:

1 create table foo(a int)
2 go
1 insert into foo(bar.a) values(1)
2 go
(1 row affected)
1 select * from foo
2 go
 a
 ---
   1

(1 row affected)
1



Vince.
-- 
==
Vince Vielhaber -- KA8CSHemail: [EMAIL PROTECTED]http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directoryhttp://www.camping-usa.com
   Online Giftshop Superstorehttp://www.cloudninegifts.com
==




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Bruce Momjian

Greg Copeland wrote:
 Just curious, and honestly I haven't looked, but is there any form of
 compression between clients and servers?  Has this been looked at?

This issue has never come up before.  It is sort of like compressing an
FTP session.  No one really does that.  Is there value in trying it with
PostgreSQL?


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Arguile

Bruce Momjian wrote:

 Greg Copeland wrote:
  Well, it occurred to me that if a large result set were to be identified
  before transport between a client and server, a significant amount of
  bandwidth may be saved by using a moderate level of compression.
  Especially with something like result sets, which I tend to believe may
  lend it self well toward compression.

 I should have said compressing the HTTP protocol, not FTP.

  This may be of value for users with low bandwidth connectivity to their
  servers or where bandwidth may already be at a premium.

 But don't slow links do the compression themselves, like PPP over a
 modem?

Yes, but that's packet level compression. You'll never get even close to the
result you can achieve compressing the set as a whole.

Speaking of HTTP, it's fairly common for web servers (Apache has mod_gzip)
to gzip content before sending it to the client (which unzips it silently);
especially when dealing with somewhat static content (so it can be cached
zipped). This can provide great bandwidth savings.

I'm sceptical of the benefit such compression would provide in this setting
though. We're dealing with sets that would have to be compressed every time
(no caching) which might be a bit expensive on a database server. Having it
as a default off option for psql might be nice, but I wonder if it's worth
the time, effort, and cpu cycles.



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Greg Copeland

Well, it occurred to me that if a large result set were to be identified
before transport between a client and server, a significant amount of
bandwidth may be saved by using a moderate level of compression. 
Especially with something like result sets, which I tend to believe may
lend itself well toward compression.

Unlike FTP which may be transferring (and often is) previously
compressed data, raw result sets being transferred between the server and
a remote client, IMHO, would tend to compress rather well as I doubt
much of it would be true random data.
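That intuition is easy to sanity-check with a small sketch (the data is
hypothetical, and zlib merely stands in for whatever codec a
protocol-level option would use): repetitive, text-like rows compress to
a fraction of their size, while random bytes do not compress at all.

```python
import os
import zlib

# Hypothetical result set: repetitive, text-like, pipe-delimited rows.
rows = "\n".join(
    "%d|customer_%d|2002-03-14|%.2f" % (i, i % 100, i * 0.25)
    for i in range(5000)
).encode("ascii")
noise = os.urandom(len(rows))  # same size, but true random data

rows_ratio = len(zlib.compress(rows, 6)) / float(len(rows))
noise_ratio = len(zlib.compress(noise, 6)) / float(len(noise))

# Text-like data shrinks dramatically; random data does not compress.
print("text-like: %.2f  random: %.2f" % (rows_ratio, noise_ratio))
```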

This may be of value for users with low bandwidth connectivity to their
servers or where bandwidth may already be at a premium.

The zlib exploit posting got me thinking about this.

Greg


On Thu, 2002-03-14 at 12:20, Bruce Momjian wrote:
 Greg Copeland wrote:
  Just curious, and honestly I haven't looked, but is there any form of
  compression between clients and servers?  Has this been looked at?
 
 This issue has never come up before.  It is sort of like compressing an
 FTP session.  No one really does that.  Is there value in trying it with
 PostgreSQL?
 
 
 -- 
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 853-3000
   +  If your life is a hard drive, |  830 Blythe Avenue
   +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026




signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Bruce Momjian

Greg Copeland wrote:

Checking application/pgp-signature: FAILURE
-- Start of PGP signed section.
 Well, it occurred to me that if a large result set were to be identified
 before transport between a client and server, a significant amount of
 bandwidth may be saved by using a moderate level of compression. 
 Especially with something like result sets, which I tend to believe may
 lend it self well toward compression.
 
 Unlike FTP which may be transferring (and often is) previously
 compressed data, raw result sets being transfered between the server and
 a remote client, IMOHO, would tend to compress rather well as I doubt
 much of it would be true random data.
 

I should have said compressing the HTTP protocol, not FTP.

 This may be of value for users with low bandwidth connectivity to their
 servers or where bandwidth may already be at a premium.

But don't slow links do the compression themselves, like PPP over a
modem?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] libpq usage question

2002-03-14 Thread Joe Conway

I'm working on an update to contrib/dblink which would allow 
INSERT/UPDATE/DELETE statements in addition to SELECT statements against 
a remote database.

In the current version, only SELECT is possible because the SQL 
statement passed to the function gets DECLARE mycursor CURSOR FOR  
appended to the front of it, and the result set is obtained with res = 
PQexec(conn, FETCH ALL in mycursor);.

My question is, what is the downside (if any) of eliminating the use of 
a cursor in this context? I have locally made the changes, and don't see 
any negative impact. I'd appreciate any thoughts.

Thanks,

Joe


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Neil Conway

On Thu, 2002-03-14 at 14:35, Bruce Momjian wrote:
 Greg Copeland wrote:
 
  Well, it occurred to me that if a large result set were to be identified
  before transport between a client and server, a significant amount of
  bandwidth may be saved by using a moderate level of compression. 
  Especially with something like result sets, which I tend to believe may
  lend it self well toward compression.
  
  Unlike FTP which may be transferring (and often is) previously
  compressed data, raw result sets being transfered between the server and
  a remote client, IMOHO, would tend to compress rather well as I doubt
  much of it would be true random data.
 
 I should have said compressing the HTTP protocol, not FTP.

Except that lots of people compress HTTP traffic (or rather should, if
they were smart). Bandwidth is much more expensive than CPU time, and
most browsers have built-in support for gzip-encoded data. Take a look
at mod_gzip or mod_deflate (2 Apache modules) for more info on this.

IMHO, compressing data would be valuable iff there are lots of people
with a low-bandwidth link between Postgres and their database clients.
In my experience, that is rarely the case. For example, people using
Postgres as a backend for a dynamically generated website usually have
their database on the same server (for a low-end site), or on a separate
server connected via 100mbit ethernet to a bunch of webservers. In this
situation, compressing the data between the database and the webservers
will just add more latency and increase the load on the database.

Perhaps I'm incorrect though -- are there lots of people using Postgres
with a slow link between the database server and the clients?

Cheers,

Neil

-- 
Neil Conway [EMAIL PROTECTED]
PGP Key ID: DB3C29FC


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Mark Pritchard

You can get some tremendous gains by compressing HTTP sessions - mod_gzip
for Apache does this very well.

I believe SlashDot saves in the order of 30% of their bandwidth by using
compression, as do sites like http://www.whitepages.com.au/ and
http://www.yellowpages.com.au/

The mod_gzip trick is effectively very similar to what Greg is proposing. Of
course, how often would you connect to your database over anything less than
a fast (100mbit+) LAN connection?

In any case the conversation regarding FE/BE protocol changes occurs
frequently, and this thread would certainly impact that protocol. Has any
thought ever been put into using an existing standard such as HTTP instead
of the current postgres proprietary protocol? There are a lot of advantages:

* You could leverage the existing client libraries (java.net.URL etc) to
make writing PG clients (JDBC/ODBC/custom) an absolute breeze.

* Results sets / server responses could be returned in XML.

* The protocol handles extensions well (X-* headers)

* Load balancing across a postgres cluster would be trivial with any number
of software/hardware http load balancers.

* The prepared statement work needs to hit the FE/BE protocol anyway...

If the project gurus thought this was worthwhile, I could certainly like to
have a crack at it.

Regards,

Mark

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED]]On Behalf Of Bruce Momjian
 Sent: Friday, 15 March 2002 6:36 AM
 To: Greg Copeland
 Cc: PostgresSQL Hackers Mailing List
 Subject: Re: [HACKERS] Client/Server compression?


 Greg Copeland wrote:

  Well, it occurred to me that if a large result set were to be identified
  before transport between a client and server, a significant amount of
  bandwidth may be saved by using a moderate level of compression.
  Especially with something like result sets, which I tend to believe may
  lend it self well toward compression.
 
  Unlike FTP which may be transferring (and often is) previously
  compressed data, raw result sets being transfered between the server and
  a remote client, IMOHO, would tend to compress rather well as I doubt
  much of it would be true random data.
 

 I should have said compressing the HTTP protocol, not FTP.

  This may be of value for users with low bandwidth connectivity to their
  servers or where bandwidth may already be at a premium.

 But don't slow links do the compression themselves, like PPP over a
 modem?

 --
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 853-3000
   +  If your life is a hard drive, |  830 Blythe Avenue
   +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
 (send unregister YourEmailAddressHere to [EMAIL PROTECTED])



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 This may be of value for users with low bandwidth connectivity to their
 servers or where bandwidth may already be at a premium.

 But don't slow links do the compression themselves, like PPP over a
 modem?

Even if the link doesn't compress, shoving the feature into PG itself
isn't necessarily the answer.  I'd suggest running such a connection
through an ssh tunnel, which would give you encryption as well as
compression.
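Tom's tunnel suggestion looks roughly like this (host, user, and database
names are hypothetical; -C turns on compression, -N runs no remote
command, -L forwards a local port to the remote postmaster):

```shell
# Compressed, encrypted tunnel from local port 6432 to the remote postmaster.
ssh -C -N -L 6432:localhost:5432 user@db.example.com &

# Point the client at the local end of the tunnel.
psql -h localhost -p 6432 mydb
```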

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Greg Copeland

On Thu, 2002-03-14 at 13:35, Bruce Momjian wrote:
 Greg Copeland wrote:
 
  Well, it occurred to me that if a large result set were to be identified
  before transport between a client and server, a significant amount of
  bandwidth may be saved by using a moderate level of compression. 
  Especially with something like result sets, which I tend to believe may
  lend it self well toward compression.
  
  Unlike FTP which may be transferring (and often is) previously
  compressed data, raw result sets being transfered between the server and
  a remote client, IMOHO, would tend to compress rather well as I doubt
  much of it would be true random data.
  
 
 I should have said compressing the HTTP protocol, not FTP.
 
  This may be of value for users with low bandwidth connectivity to their
  servers or where bandwidth may already be at a premium.
 
 But don't slow links do the compression themselves, like PPP over a
 modem?


Yes and no.  Modem compression doesn't understand the nature of the data
that is actually flowing through it.  As a result, a modem is going to
spend an equal amount of time trying to compress the PPP/IP/NETBEUI
protocols as it does trying to compress the data contained within those
protocol envelopes.  Furthermore, modems tend to have a very limited
amount of time to even attempt to compress, which, combined with the fact
that they have very limited buffer space, usually limits their ability to
provide effective compression.  Because of these issues, it's not uncommon
for a modem to actually yield a larger compressed block than was the
input.

I'd also like to point out that there are other low speed connections
in use which do not involve modems at all, as well as modems which do
not support compression (long haul modems, for example).

As for your specific example of HTTP versus FTP, I would also like to
point out that it is becoming more and more common for gzip'd data to be
transported within the HTTP protocol whereby each end is explicitly
aware of the compression taking place on the link with knowledge of what
to do with it.

Also, believe it or not, one of the common uses of SSH is to provide
session compression.  It is not unheard of for people to disable the
encryption to simply use it for a compression tunnel which also provides
for modest session obscurantism.

Greg




signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Greg Copeland

On Thu, 2002-03-14 at 14:14, Neil Conway wrote:
 On Thu, 2002-03-14 at 14:35, Bruce Momjian wrote:
  Greg Copeland wrote:
  
   Well, it occurred to me that if a large result set were to be identified
   before transport between a client and server, a significant amount of
   bandwidth may be saved by using a moderate level of compression. 
   Especially with something like result sets, which I tend to believe may
   lend it self well toward compression.
   
   Unlike FTP which may be transferring (and often is) previously
   compressed data, raw result sets being transfered between the server and
   a remote client, IMOHO, would tend to compress rather well as I doubt
   much of it would be true random data.
  
  I should have said compressing the HTTP protocol, not FTP.
 
 Except that lots of people compress HTTP traffic (or rather should, if
 they were smart). Bandwidth is much more expensive than CPU time, and
 most browsers have built-in support for gzip-encoded data. Take a look
 at mod_gzip or mod_deflate (2 Apache modules) for more info on this.
 
 IMHO, compressing data would be valuable iff there are lots of people
 with a low-bandwidth link between Postgres and their database clients.
 In my experience, that is rarely the case. For example, people using
 Postgres as a backend for a dynamically generated website usually have
 their database on the same server (for a low-end site), or on a separate
 server connected via 100mbit ethernet to a bunch of webservers. In this
 situation, compressing the data between the database and the webservers
 will just add more latency and increase the load on the database.
 
 Perhaps I'm incorrect though -- are there lots of people using Postgres
 with a slow link between the database server and the clients?
 


What about remote support of these databases where a VPN may not be
available?  In my past experience, this was very common as many
companies do not want to expose their database, even via a VPN, to the
outside world, while allowing only modem access.  Not to mention, road
warriors that may need to remotely support their databases may find
value here too.  Would they not?

...I think I'm pretty well coming to the conclusion that it may be of
some value...even if only for a limited number of users.


Greg




signature.asc
Description: This is a digitally signed message part
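[Editorial aside: Greg's claim above, that raw result sets tend to compress rather well, is easy to sanity-check with zlib. The sketch below is illustrative only and not part of the original thread; the sample rows are invented.]

```python
import zlib

# Fake a text-format result set: many rows of tab-separated values.
# The row contents are invented purely to illustrate the point.
rows = "\n".join(f"{i}\tuser{i}\t2002-03-14" for i in range(1000))
data = rows.encode()

compressed = zlib.compress(data, 6)

# Typical tabular text compresses far better than "true random data".
ratio = len(compressed) / len(data)
print(f"{len(data)} -> {len(compressed)} bytes (ratio {ratio:.2f})")
```

On data like this the compressed size comes out at a small fraction of the original, which is the property that would make compressing large result sets worthwhile on a slow link.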


Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Greg Copeland

On Thu, 2002-03-14 at 14:29, Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  This may be of value for users with low bandwidth connectivity to their
  servers or where bandwidth may already be at a premium.
 
  But don't slow links do the compression themselves, like PPP over a
  modem?
 
 Even if the link doesn't compress, shoving the feature into PG itself
 isn't necessarily the answer.  I'd suggest running such a connection
 through an ssh tunnel, which would give you encryption as well as
 compression.
 
   regards, tom lane

Couldn't the same be said for SSL support?

I'd also like to point out that it's *possible* that this could also be
a speed boost under certain workloads where extra CPU is available, as
less data would have to be transferred through the OS, networking layers,
and device drivers.  Until zero-copy transfers become common on all
platforms for all devices, I would think that it's certainly *possible*
that this *could* offer an improvement...well, perhaps a break
even at any rate...

Similar gains are not unheard of for compressed file systems under
specific workloads, as less device I/O has to take place.

Greg





signature.asc
Description: This is a digitally signed message part


Re: [HACKERS] privileges regression problem on freebsd/alpha

2002-03-14 Thread Bruce Momjian


Christopher, is this problem fixed now?

---

Christopher Kings-Lynne wrote:
 Hi all,
 
 Just tested latest CVS on my freebsd/alpha.  Only one test failed, and
 that's privileges related...
 
 *** ./expected/privileges.out Thu Mar  7 09:53:51 2002
 --- ./results/privileges.out  Fri Mar  8 11:03:36 2002
 ***
 *** 201,218 
   CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 * $1;' LANGUAGE
 sql;
   CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 * $1;' LANGUAGE
 sql;
   GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO regressuser2;
   GRANT USAGE ON FUNCTION testfunc1(int) TO regressuser3; -- semantic error
 ! ERROR:  invalid privilege type USAGE for function object
   GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regressuser4;
   GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regressuser4;
 ! ERROR:  Function 'testfunc_nosuch(int4)' does not exist
   SET SESSION AUTHORIZATION regressuser2;
   SELECT testfunc1(5), testfunc2(5); -- ok
 !  testfunc1 | testfunc2
 ! ---+---
 ! 10 |15
 ! (1 row)
 !
   CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 * $1;' LANGUAGE
 sql; -- fail
   ERROR:  permission denied
   SET SESSION AUTHORIZATION regressuser3;
 --- 201,216 
   CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 * $1;' LANGUAGE
 sql;
   CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 * $1;' LANGUAGE
 sql;
   GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO regressuser2;
 + ERROR:  bogus GrantStmt.objtype 458
   GRANT USAGE ON FUNCTION testfunc1(int) TO regressuser3; -- semantic error
 ! ERROR:  bogus GrantStmt.objtype 458
   GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regressuser4;
 + ERROR:  bogus GrantStmt.objtype 458
   GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regressuser4;
 ! ERROR:  bogus GrantStmt.objtype 458
   SET SESSION AUTHORIZATION regressuser2;
   SELECT testfunc1(5), testfunc2(5); -- ok
 ! ERROR:  permission denied
   CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 * $1;' LANGUAGE
 sql; -- fail
   ERROR:  permission denied
   SET SESSION AUTHORIZATION regressuser3;
 ***
 *** 220,230 
   ERROR:  permission denied
   SET SESSION AUTHORIZATION regressuser4;
   SELECT testfunc1(5); -- ok
 !  testfunc1
 ! ---
 ! 10
 ! (1 row)
 !
   DROP FUNCTION testfunc1(int); -- fail
   ERROR:  RemoveFunction: function 'testfunc1': permission denied
   \c -
 --- 218,224 
   ERROR:  permission denied
   SET SESSION AUTHORIZATION regressuser4;
   SELECT testfunc1(5); -- ok
 ! ERROR:  permission denied
   DROP FUNCTION testfunc1(int); -- fail
   ERROR:  RemoveFunction: function 'testfunc1': permission denied
   \c -
 
 ==

[ Attachment, skipping... ]

[ Attachment, skipping... ]

 
 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
 (send unregister YourEmailAddressHere to [EMAIL PROTECTED])

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Allowing usernames in pg_hba.conf

2002-03-14 Thread Bruce Momjian


OK, no one seems to have come up with an improved file format for
pg_hba.conf, so I am going to continue in the direction outlined in this
email ---  basically remove the auth_argument column, make it
'auth_type=auth_arg', add a username column, plus add the ability for
the username and database columns to use a secondary file if the column
value starts with @.

---

pgman wrote:
  This is definitely stressing pg_hba past its design limits --- heck, the
  name of the file isn't even appropriate anymore, if usernames are part
  of the match criteria.  Rather than contorting things to maintain a
  pretense of backwards compatibility, it's time to abandon the current
  file format, change the name, and start over.  (I believe there are
  traces in the code of this having been done before.)  We could probably
  arrange to read and convert the existing pg_hba format if we don't see
  a new-style authentication config file out there.
  
  My first thoughts are (a) add a column outright for matching username;
  (b) for both database and username columns, allow a filename reference
  so that a bunch of names can be stored separately from the master
  authentication file.  I don't much care for sticking large lists of
  names into the auth file itself.
 
 OK, I have an idea.  I was never happy with the AUTH_ARGUMENT column. 
 What I propose is adding an optional auth_type=val capability to the
 file, so an AUTH_ARGUMENT column isn't needed.  If a username column
 starts with @, it is a file name containing user names.  The same can be
 done with the database column.  Seems very backward compatible.  If you
 don't use auth_argument, it is totally compatible.  If you do, you need
 to use the new format auth_type=val:
 
 TYPE DATABASEIP_ADDRESSMASK   AUTH_TYPE  USERNAMES
 local  all  trust  fred
 host   all 127.0.0.1 255.255.255.255trust  @staff
 host   all 127.0.0.1 255.255.255.255ident=sales jimmy
 
 I have thought about a redesign of the file, but I can't come up with
 something that is as powerful, and cleaner.  Do others have ideas?
 
 As far as missing features, I can't think of other things people have
 asked for in pg_hba.conf except usernames.
 
 -- 
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 853-3000
   +  If your life is a hard drive, |  830 Blythe Avenue
   +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly
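[Editorial aside: the proposed pg_hba.conf format above --- optional `auth_type=arg` instead of a separate AUTH_ARGUMENT column, plus `@file` indirection for names --- can be sketched as a tiny parser. This is hypothetical illustration, not the actual PostgreSQL implementation; the field names are assumptions.]

```python
def parse_hba_line(line):
    # Sketch of the proposed format: an optional "auth_type=arg" pair
    # replaces the AUTH_ARGUMENT column, and a username starting with
    # "@" names a secondary file of user names.  Purely illustrative.
    fields = line.split()
    rec = {"type": fields[0], "database": fields[1]}
    if rec["type"] == "local":
        rest = fields[2:]          # local entries have no address/mask
    else:
        rec["ip_address"], rec["mask"] = fields[2], fields[3]
        rest = fields[4:]
    auth, _, arg = rest[0].partition("=")
    rec["auth_type"] = auth
    rec["auth_arg"] = arg or None  # None when no "=arg" suffix given
    rec["usernames"] = rest[1:]
    return rec

print(parse_hba_line("host all 127.0.0.1 255.255.255.255 ident=sales jimmy"))
```

Note how `ident=sales` carries what used to live in the separate argument column, while a plain `trust` parses the same as before, which is the backward compatibility Bruce describes.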



Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Kyle

On the subject of client/server compression, does the server
decompress toast data before sending it to the client?  If so, why
(other than requiring modifications to the protocol)?

On the flip side, does/could the client toast insert/update data
before sending it to the server?

-Kyle

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] psql and output from \?

2002-03-14 Thread Bruce Momjian

  I guess some of these weren't introduced by you, but if someone is going
  to fix this, he might as well take care of these.
 
 Will submit another patch in the morning (it's late here).

Ian, do you have another version of this patch ready?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] help with a patch

2002-03-14 Thread Bruce Momjian

Neil Conway wrote:
 Hi all,
 
 I'm working on implementing unique hash indexes. I've got most of the
 code finished, but I'm stumped on how to implement the remainder. I'm
 still a newbie to the Postgres code, so any pointers or help would
 be much appreciated.
 
 I've been able to borrow a fair amount of code from the btree unique
 index implementation (where possible, I've tried to share code between
 hash and btree, I'll do this more in the final patch). The problem I'm
 having is the implementation of the _hash_check_unique() function. This
 is passed the Buffer which corresponds to the first page in the bucket
 chain for the key, the hash item itself, the ScanKey, as well as the
 index Relation and the heap Relation. Given this, how does one scan
 through the hash bucket to determine if a matching key is present?
 
 I can probably figure out the MVCC related code (ensuring that the
 tuples we find aren't dead, etc); what I can't figure out is the basic
 methodology required to search for matching tuples in the hash bucket.
 
 Any help would be appreciated. I've attached the current development
 version of the patch, if that is of any help.

I am not totally sure of the question, but for hash, don't you have to
spin through the entire bucket and test each entry?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])
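[Editorial aside: Bruce's suggestion --- scan the whole bucket chain and test each entry --- can be sketched in miniature. This is an illustrative Python analogy, not the actual _hash_check_unique() code, and ignores MVCC visibility checks.]

```python
def hash_check_unique(buckets, key):
    # Walk every entry in the bucket chain that `key` hashes to and
    # compare keys one by one; unlike a btree, where equal keys sit
    # adjacent in sort order, a hash bucket must be scanned in full.
    chain = buckets[hash(key) % len(buckets)]
    for existing_key, _heap_tid in chain:
        if existing_key == key:
            return False  # duplicate found: uniqueness violated
    return True

# Toy index: 8 buckets holding (key, heap tuple id) pairs.
buckets = [[] for _ in range(8)]
buckets[hash("alice") % 8].append(("alice", 1))

print(hash_check_unique(buckets, "alice"))  # False: key already present
print(hash_check_unique(buckets, "bob"))    # True: no duplicate
```

The real function would additionally check that each matching tuple is still live in the heap before declaring a conflict, which is the MVCC part Neil mentions.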



Re: [HACKERS] psql and output from \?

2002-03-14 Thread Ian Barwick

On Thursday 14 March 2002 22:40, Bruce Momjian wrote:
   I guess some of these weren't introduced by you, but if someone is
   going to fix this, he might as well take care of these.
 
  Will submit another patch in the morning (it's late here).

 Ian, do you have another version of this patch ready?

Patch attached (diff against CVS, replacing previous patch).

Ian Barwick

Index: help.c
===
RCS file: /projects/cvsroot/pgsql/src/bin/psql/help.c,v
retrieving revision 1.48
diff -c -r1.48 help.c
*** help.c	2002/03/11 18:26:20	1.48
--- help.c	2002/03/14 22:25:22
***
*** 177,183 
  	if (pset.notty == 0 &&
  		(pagerenv = getenv("PAGER")) &&
  		(pagerenv[0] != '\0') &&
! 		screen_size.ws_row <= 39 &&
  		(fout = popen(pagerenv, "w")))
  	{
  		usePipe = true;
--- 177,183 
  	if (pset.notty == 0 &&
  		(pagerenv = getenv("PAGER")) &&
  		(pagerenv[0] != '\0') &&
! 		screen_size.ws_row <= 46 &&
  		(fout = popen(pagerenv, "w")))
  	{
  		usePipe = true;
***
*** 189,242 
  		fout = stdout;
  
  	/* if you add/remove a line here, change the row test above */
  	fprintf(fout, _(" \\a toggle between unaligned and aligned output mode\n"));
  	fprintf(fout, _(" \\c[onnect] [DBNAME|- [USER]]\n"
! 		 "connect to new database (currently \"%s\")\n"),
  			PQdb(pset.db));
! 	fprintf(fout, _(" \\C [TITLE] set table title, or unset with no title\n"));
  	fprintf(fout, _(" \\cd [DIR]  change the current working directory\n"));
  	fprintf(fout, _(" \\copy ...  perform SQL COPY with data stream to the client host\n"));
  	fprintf(fout, _(" \\copyright show PostgreSQL usage and distribution terms\n"));
! 	fprintf(fout, _(" \\d [TABLE] describe table (or view, index, sequence)\n"));
! 	fprintf(fout, _(" \\d{t|i|s|v}... list tables/indexes/sequences/views\n"));
! 	fprintf(fout, _(" \\d{p|S|l}  list access privileges, system tables, or large objects\n"));
! 	fprintf(fout, _(" \\da            list aggregate functions\n"));
! 	fprintf(fout, _(" \\dd [NAME] show comment for table, type, function, or operator\n"));
! 	fprintf(fout, _(" \\df            list functions\n"));
! 	fprintf(fout, _(" \\do            list operators\n"));
! 	fprintf(fout, _(" \\dT            list data types\n"));
! 	fprintf(fout, _(" \\du [PATTERN]  lists all configured users or only those that match pattern\n"));
! 	fprintf(fout, _(" \\e [FILE]  edit the query buffer or file with external editor\n"));
! 	fprintf(fout, _(" \\echo TEXT write text to standard output\n"));
! 	fprintf(fout, _(" \\encoding ENCODING  set client encoding\n"));
! 	fprintf(fout, _(" \\f [SEPARATOR] set field separator, or unset if none\n"));
! 	fprintf(fout, _(" \\g [FILE]  send SQL command to server (and write results to file or |pipe)\n"));
! 	fprintf(fout, _(" \\h NAME        help on syntax of SQL commands, * for all commands\n"));
  	fprintf(fout, _(" \\H toggle HTML output mode (currently %s)\n"),
  			ON(pset.popt.topt.format == PRINT_HTML));
  	fprintf(fout, _(" \\i FILE        execute commands from file\n"));
  	fprintf(fout, _(" \\l list all databases\n"));
  	fprintf(fout, _(" \\lo_export, \\lo_import, \\lo_list, \\lo_unlink\n"
! 	" large object operations\n"));
  	fprintf(fout, _(" \\o FILE        send all query results to file or |pipe\n"));
! 	fprintf(fout, _(" \\p show the content of the query buffer\n"));
! 	fprintf(fout, _(" \\pset VAR  set table output option (VAR := {format|border|expanded|\n"
! 	" fieldsep|null|recordsep|tuples_only|title|tableattr|pager})\n"));
  	fprintf(fout, _(" \\q quit psql\n"));
! 	fprintf(fout, _(" \\qecho TEXT    write text to query output stream (see \\o)\n"));
  	fprintf(fout, _(" \\r reset (clear) the query buffer\n"));
! 	fprintf(fout, _(" \\s [FILE]  print history or save it to file\n"));
! 	fprintf(fout, _(" \\set NAME VALUE  set internal variable\n"));
  	fprintf(fout, _(" \\t show only rows (currently %s)\n"),
  			ON(pset.popt.topt.tuples_only));
! 	fprintf(fout, _(" \\T [TAG_ATTR]  set HTML table tag attributes, or unset if none\n"));
  	fprintf(fout, _(" \\timing        toggle timing of queries (currently %s)\n"),
  			ON(pset.timing));
  	fprintf(fout, _(" \\unset NAME    unset (delete) internal variable\n"));
! 	fprintf(fout, _(" \\w FILE        write query buffer to file\n"));
  	fprintf(fout, _(" \\x toggle expanded output (currently %s)\n"),
  			ON(pset.popt.topt.expanded));
! 	fprintf(fout, _(" \\z list table access privileges\n"));
  	fprintf(fout, _(" \\! [COMMAND]   execute command in shell or start interactive shell\n"));
  
  	if (usePipe)
--- 189,245 
  		fout = stdout;
  
  	/* if you add/remove a line here, change the row test above */
+ /*  if this " is the start of the string then it ought to end there to fit in 

Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-14 Thread Tom Lane

Dave Cramer [EMAIL PROTECTED] writes:
 I noticed a message asking if this scenario was consistent with the
 other reports, and yes it is. We have seen this occurring on our system
 with versions as old as 7.0.

Given that these are WAL bugs, they could not predate 7.1.

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-14 Thread Tom Lane

Ben Grimm [EMAIL PROTECTED] writes:
 The behavior of SELECT nextval() should not be conditional on being in or 
 out of a transaction block.

Nonsense.  The behavior of INSERT or UPDATE is conditional in exactly
the same way: you should not rely on the reported result until it's
committed.

Given Vadim's performance concerns, I doubt he'll hold still for forcing
an XLogFlush immediately every time a sequence XLOG record is written
-- but AFAICS that'd be the only way to guarantee durability of a
nextval result in advance of commit.  Since I don't think that's an
appropriate goal for the system to have, I don't care for it either.

I'm planning to try coding up Vadim's approach (pay attention to page's
old LSN to see if a WAL record must be generated) tonight or tomorrow
and see if it seems reasonable.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] [BUGS] Bug #613: Sequence values fall back to previously chec

2002-03-14 Thread Mikheev, Vadim

  This isn't an issue for a SELECT nextval() standing on
  its own AFAIK the result will not be transmitted to the
  client until after the commit happens. But it would be
  an issue for a select executed inside a transaction
  block (begin/commit).
 
 The behavior of SELECT nextval() should not be conditional
 on being in or out of a transaction block.

And it's not. But the behaviour of the application *must* be
conditional on whether the transaction committed or not.

What's the problem for an application that needs nextval() for
external (out-of-database) purposes to use sequence values
only after transaction commit? What's *wrong* with such an application
behaving the same way as when dealing with other database objects
which are under transaction control (e.g. only after commit can you
report to the user that $100 was successfully added to his/her account)?

---

I agree that if nextval-s were the only write actions in a transaction
and they made some XLogInsert-s, then WAL must be flushed at commit
time. But that's it. Was this fixed? Very easy.

Vadim

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Client/Server compression?

2002-03-14 Thread Bruce Momjian

Kyle wrote:
 On the subject of client/server compression, does the server
 decompress toast data before sending it to the client?  If so, why
 (other than requiring modifications to the protocol)?
 
 On the flip side, does/could the client toast insert/update data
 before sending it to the server?

It has to decompress it so that server functions can process it.  Hard
to avoid that.  Of course, in some cases, it doesn't need to be
processed on the server, just passed through, so decompression would
have to be done conditionally.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] insert statements

2002-03-14 Thread Michael Alan Dorman

Vince Vielhaber [EMAIL PROTECTED] writes:
 There are really no other decent CMSs available that support
 PostgreSQL.

bricolage.thepirtgroup.com/

Mike.

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] privileges regression problem on freebsd/alpha

2002-03-14 Thread Christopher Kings-Lynne

Yep

 -Original Message-
 From: Bruce Momjian [mailto:[EMAIL PROTECTED]]
 Sent: Friday, 15 March 2002 5:20 AM
 To: Christopher Kings-Lynne
 Cc: Hackers
 Subject: Re: [HACKERS] privileges regression problem on freebsd/alpha



 Christopher, is this problem fixed now?

 --
 -

 Christopher Kings-Lynne wrote:
  Hi all,
 
  Just tested latest CVS on my freebsd/alpha.  Only one test failed, and
  that's privileges related...
 
  *** ./expected/privileges.out   Thu Mar  7 09:53:51 2002
  --- ./results/privileges.outFri Mar  8 11:03:36 2002
  ***
  *** 201,218 
CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 *
 $1;' LANGUAGE
  sql;
CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 *
 $1;' LANGUAGE
  sql;
GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO
 regressuser2;
GRANT USAGE ON FUNCTION testfunc1(int) TO regressuser3; --
 semantic error
  ! ERROR:  invalid privilege type USAGE for function object
GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regressuser4;
GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regressuser4;
  ! ERROR:  Function 'testfunc_nosuch(int4)' does not exist
SET SESSION AUTHORIZATION regressuser2;
SELECT testfunc1(5), testfunc2(5); -- ok
  !  testfunc1 | testfunc2
  ! ---+---
  ! 10 |15
  ! (1 row)
  !
CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 *
 $1;' LANGUAGE
  sql; -- fail
ERROR:  permission denied
SET SESSION AUTHORIZATION regressuser3;
  --- 201,216 
CREATE FUNCTION testfunc1(int) RETURNS int AS 'select 2 *
 $1;' LANGUAGE
  sql;
CREATE FUNCTION testfunc2(int) RETURNS int AS 'select 3 *
 $1;' LANGUAGE
  sql;
GRANT EXECUTE ON FUNCTION testfunc1(int), testfunc2(int) TO
 regressuser2;
  + ERROR:  bogus GrantStmt.objtype 458
GRANT USAGE ON FUNCTION testfunc1(int) TO regressuser3; --
 semantic error
  ! ERROR:  bogus GrantStmt.objtype 458
GRANT ALL PRIVILEGES ON FUNCTION testfunc1(int) TO regressuser4;
  + ERROR:  bogus GrantStmt.objtype 458
GRANT ALL PRIVILEGES ON FUNCTION testfunc_nosuch(int) TO regressuser4;
  ! ERROR:  bogus GrantStmt.objtype 458
SET SESSION AUTHORIZATION regressuser2;
SELECT testfunc1(5), testfunc2(5); -- ok
  ! ERROR:  permission denied
CREATE FUNCTION testfunc3(int) RETURNS int AS 'select 2 *
 $1;' LANGUAGE
  sql; -- fail
ERROR:  permission denied
SET SESSION AUTHORIZATION regressuser3;
  ***
  *** 220,230 
ERROR:  permission denied
SET SESSION AUTHORIZATION regressuser4;
SELECT testfunc1(5); -- ok
  !  testfunc1
  ! ---
  ! 10
  ! (1 row)
  !
DROP FUNCTION testfunc1(int); -- fail
ERROR:  RemoveFunction: function 'testfunc1': permission denied
\c -
  --- 218,224 
ERROR:  permission denied
SET SESSION AUTHORIZATION regressuser4;
SELECT testfunc1(5); -- ok
  ! ERROR:  permission denied
DROP FUNCTION testfunc1(int); -- fail
ERROR:  RemoveFunction: function 'testfunc1': permission denied
\c -
 
  ==

 [ Attachment, skipping... ]

 [ Attachment, skipping... ]

 
  ---(end of broadcast)---
  TIP 2: you can get off all lists at once with the unregister command
  (send unregister YourEmailAddressHere to [EMAIL PROTECTED])

 --
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 853-3000
   +  If your life is a hard drive, |  830 Blythe Avenue
   +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] User Level Lock question

2002-03-14 Thread Lance Ellinghaus

Is there an easy way to test the state of a user level lock without
actually acquiring the lock?

I would like to use them, but there are only LockAcquire() and
LockRelease()... There is no LockTest()...

I guess I could do:

    IF LockAcquire() == 0:
        locked: do whatever if it is locked...
    ELSE:
        LockRelease()
        unlocked: do whatever since it was not locked in the first place...

This just seems to be an inefficient way of doing this...

Thanks,
Lance Ellinghaus


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html
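[Editorial aside: the acquire-then-release probe Lance describes can be illustrated with ordinary thread locks. This is a generic Python analogy, not PostgreSQL's LockAcquire()/LockRelease() API.]

```python
import threading

def is_locked(lk):
    # Non-blocking probe: try to acquire; on success, release at once
    # and report "was unlocked".  This mirrors the LockAcquire()/
    # LockRelease() dance from the message above, needed because no
    # separate LockTest() primitive exists.
    if lk.acquire(blocking=False):
        lk.release()
        return False
    return True

lock = threading.Lock()
print(is_locked(lock))   # False: nobody holds it
lock.acquire()
print(is_locked(lock))   # True: held, probe could not acquire
```

The inefficiency Lance points out is real: a successful probe briefly takes and drops the lock, and the answer can be stale the instant the probe returns, which is why a pure test primitive is of limited value anyway.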



Re: [HACKERS] [SQL] [ADMIN] Syslog

2002-03-14 Thread Larry Rosenman

On 14 Mar 2002 21:17:05 +
Oliver Elphick [EMAIL PROTECTED] wrote:

 On Thu, 2002-03-14 at 20:13, Jie Liang wrote:
  I did everything as you did, however, when start the postmaster,
  I got following:
  FATAL 1:'syslog' is not a valid option name.
 
 Then you haven't configured postgresql with --enable-syslog.  (That
 message comes from src/backend/utils/misc/guc.c, if you want
 confirmation.)
Hackers: Is there any reason to NOT make --enable-syslog the default
any more? 

I.e., can we change the sense of it to be --disable-syslog and have
USE_SYSLOG defined by default? 


 
 -- 
 Oliver Elphick[EMAIL PROTECTED]
 Isle of Wight  http://www.lfix.co.uk/oliver
 GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
  Let your light so shine before men, that they may see 
   your good works, and glorify your Father which is in 
   heaven. Matthew 5:16 
 
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/users-lounge/docs/faq.html
 


-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Survey results on Oracle/M$NT4 to PG72/RH72 migration

2002-03-14 Thread Michael Meskes

On Thu, Mar 14, 2002 at 09:08:55AM +0500, Hannu Krosing wrote:
 AFAIK some SQL/C type precompilers and other frontend tools for other
 databases do generate stored procedures for PREPAREd CURSORs.

You mean ECPG should/could replace a PREPARE statement with a CREATE
FUNCTION and then the usage of the cursor with the usage of that
function?

Should be possible, but needs some work.

 I'm afraid ECPG does not :(

That's correct of course.

Michael

-- 
Michael Meskes
[EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire!
Use Debian GNU/Linux! Use PostgreSQL!

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]