[HACKERS] XML support?

2001-09-14 Thread Marius Andreiana

Hi

I saw in TODO
CLIENTS
*  Add XML interface: psql, pg_dump, COPY, separate server
and there's some code for JDBC in contrib/retep directory.

Are there any plans to add XML support to PostgreSQL, to return
rows formatted in XML for a start? Not only from psql, but
from everywhere (e.g. PHP)

Thanks!
-- 
Marius Andreiana
--
You don't have to go to jail for helping your neighbour
http://www.gnu.org/philosophy/


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] syslog by default?

2001-09-14 Thread Joel W. Reed

On Sep 12, [EMAIL PROTECTED] contorted a few electrons to say...
Bruce> OK, that makes sense.  My only question is how many platforms _don't_
Bruce> have syslog.  If it is only NT and QNX, I think we can live with using
Bruce> it by default if it exists.

perhaps you could take some code from

http://freshmeat.net/projects/cpslapi/

which implements a syslog-api that writes to NT's eventlog.

i'd be glad to change the license if it is useful.

jr

-- 

Joel W. Reed                    412-257-3881
--All the simple programs have been written.








[HACKERS] Trigger - Editing Procedures

2001-09-14 Thread francis




I have written some triggers which will call some
procedures.

I am looking for some way to edit
these procedures.

Is there any way to do so?

regards,
joseph



Re: [HACKERS] Index location patch for review

2001-09-14 Thread Darren King

> > Attached is a patch that adds support for specifying a location for
> > indexes via the create database command.
> >
> > I believe this patch is complete, but it is my first.
>
> This patch allows index locations to be specified as
> different from data locations.  Is this a feature direction
> we want to go in?  Comments?

Having the table and index on separate drives can do wonders for i/o
performance. :)

darrenk





[HACKERS] System Tables

2001-09-14 Thread Peter Harvey

Are the table structures of the System Tables changed often? Have they
changed between v7.1.1 and v7.1.2?

Peter
-- 
+---
| Data Architect
| your data; how you want it
| http://www.codebydesign.com
+---




Re: [HACKERS] querying system catalogs to extract foreign keys

2001-09-14 Thread Rene Pijlman

On 13 Sep 2001 22:56:16 -0700, you wrote:
> I tried to use the getImportedKeys and getExportedKeys of
> java.sql.DatabaseMetaData... But it didn't give any expected
> results...

This is probably a limitation or bug in the JDBC driver. Please
post details of your problem on [EMAIL PROTECTED] E.g.
what results did you get, and what did you not get?

> So can anyone tell me how to query the system
> catalogs to extract this info??

The system catalogs are documented on
http://www.postgresql.org/idocs/index.php?catalogs.html

Regards,
René Pijlman [EMAIL PROTECTED]




[HACKERS] Status of index location patch

2001-09-14 Thread Jim Buttafuoco

All,

Just wondering what the status of this patch is.  It seems from comments
that people like the idea.  I have also looked in the archives for other
people looking for this kind of feature and have found a lot of interest.

If you think it is a good idea for 7.2, let me know what needs to be
changed and I will work on it this weekend.

Thanks
Jim







Re: [HACKERS] XML support?

2001-09-14 Thread Peter T Mount

> Hi
>
> I saw in TODO
> CLIENTS
> *  Add XML interface: psql, pg_dump, COPY, separate server
> and there's some code for JDBC in contrib/retep directory.
>
> Are there any plans to add xml support to postgresql, to return
> rows formatted in xml for a start? not only from psql, but
> from everywhere (e.g. php)

I am (as time allows) adding to the xml under contrib/retep, but I don't know 
of anyone else working on it.

Adding xml support to psql shouldn't be too difficult (it has html support 
already), and there is the ResultSet-XML stuff under contrib/retep.
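The rows-to-XML conversion being asked about can also be sketched outside the driver; here is a minimal illustration in Python (the element names and sample data are invented for the example, not taken from contrib/retep):

```python
import xml.etree.ElementTree as ET

def rows_to_xml(columns, rows):
    """Render query results as a simple <resultset> document."""
    root = ET.Element("resultset")
    for row in rows:
        rec = ET.SubElement(root, "row")
        for name, value in zip(columns, row):
            field = ET.SubElement(rec, name)
            field.text = "" if value is None else str(value)
    return ET.tostring(root, encoding="unicode")

# Example: two rows from a hypothetical user query
print(rows_to_xml(["usename", "usesysid"],
                  [("postgres", 1), ("marius", 100)]))
```

The same shape would apply whether the rows come from JDBC, libpq, or PHP; only the result-fetching layer differs.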

Peter

-- 
Peter Mount [EMAIL PROTECTED]
PostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/
RetepPDF PDF library for Java: http://www.retep.org.uk/pdf/





[HACKERS] querying system catalogs to extract foreign keys

2001-09-14 Thread jiby george

I wanted to extract foreign keys from the postgresql database related
to each of the tables. I tried to use the getImportedKeys and
getExportedKeys of java.sql.DatabaseMetaData... But it didn't give any
expected results... So can anyone tell me how to query the system
catalogs to extract this info??
Thanx
Jiby




Re: [HACKERS] [BUGS] Build problem with CVS version

2001-09-14 Thread John Summerfield

> John Summerfield writes:
>
> > I'd point out this from the INSTALL document:
> >   --prefix=PREFIX
> >
> >     Install all files under the directory PREFIX instead of
> >     /usr/local/pgsql. The actual files will be installed into various
> >     subdirectories; no files will ever be installed directly into the
> >     PREFIX directory.
> >
> >     If you have special needs, you can also customize the individual
> >     subdirectories with the following options.
>
> But there are also exceptions listed at --with-perl and --with-python.
>
> > This is entirely consistent with the way other software that uses the
> > same configuration procedure behaves.
>
> I am not aware of a package that installs general-purpose Perl/Python
> modules as well as items from outside of those environments.
>
> > I contend that if a user wants different behaviour the onus is on the
> > user to specify that.
>
> You're probably right, but I suspect that the majority of users rightly
> expect the Perl module to go where Perl modules usually go.  This wouldn't

That isn't reasonable if the installer's not root, and I wasn't, for
the precise reason that I didn't want to update the system.

> be such an interesting question if Perl provided a way to add to the
> module search path (cf. LD_LIBRARY_PATH and such), but to my knowledge
> there isn't a generally accepted one, so this issue would introduce an
> annoyance for users.

From 'man perlrun':

    PERL5LIB    A colon-separated list of directories in which to look for
                Perl library files before looking in the standard library
                and the current directory.  Any architecture-specific
                directories under the specified locations are automatically
                included if they exist.  If PERL5LIB is not defined,
                PERLLIB is used.

                When running taint checks (either because the program was
                running setuid or setgid, or the -T switch was used),
                neither variable is used.  The program should instead say:

There are several other environment variables.




 
> Surely the current behaviour is an annoyance too, making the whole issue a
> rather unpleasant subject. ;-)
>
> Well, as soon as I come up with a name for the install-where-I-tell-you-to
> option, I'll implement it.



The current behaviour makes it difficult to have two versions on one 
computer.
-- 
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my 
disposition.







Re: [HACKERS] System Tables

2001-09-14 Thread Peter Eisentraut

Peter Harvey writes:

> Are the table structures of the System Tables changed often?

Only between major releases (if necessary).

> Have they changed between v7.1.1 and v7.1.2?

No.

-- 
Peter Eisentraut   [EMAIL PROTECTED]   http://funkturm.homeip.net/~peter





Re: [HACKERS] syslog by default?

2001-09-14 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
> OK, that makes sense.  My only question is how many platforms _don't_
> have syslog.  If it is only NT and QNX, I think we can live with using
> it by default if it exists.

There seems to be a certain amount of confusion here.  The proposal at
hand was to make configure set up to *compile* the syslog support
whenever possible.  Not to *use* syslog by default.  Unless we change
the default postgresql.conf --- which I would be against --- we will
still log to stderr by default.

Given that, I'm not sure that Peter's argument about losing
functionality is right; the analogy to readline support isn't exact.
Perhaps what we should do is (a) always build syslog support if
possible, and (b) at runtime, complain if syslog logging is requested
but we don't have it available.
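Point (b) amounts to a runtime check rather than a configure-time refusal. A toy sketch of the idea (in Python rather than the backend's C; the names and the HAVE_SYSLOG flag, standing in for the configure-time test, are invented for illustration):

```python
# Illustration only: "build syslog support when possible, and at
# runtime complain if syslog logging is requested but unavailable".
HAVE_SYSLOG = False  # stands in for configure's compile-time detection

def choose_log_destination(requested):
    """Accept a log destination, rejecting syslog when support is absent."""
    if requested == "syslog" and not HAVE_SYSLOG:
        raise RuntimeError("syslog logging requested but not supported on this platform")
    return requested

print(choose_log_destination("stderr"))  # the default still works everywhere
```

With this shape, stderr logging stays the default on every platform, and only a configuration that explicitly asks for syslog can fail.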

regards, tom lane




[HACKERS] chunk size problem

2001-09-14 Thread Martín Marqués

I started getting these error messages

webunl=> \dt
NOTICE:  AllocSetFree: detected write past chunk end in TransactionCommandContext 3a4608
pqReadData() -- backend closed the channel unexpectedly.
        This probably means the backend terminated abnormally
        before or while processing the request.
The connection to the server was lost.
Attempting reset: Failed.
!>

The first time I had these problems today, the logs said this:

Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-1] DEBUG:  
query: SELECT c.relname as Name, 'table'::text as Type, u.usename as 
Owner
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-2] FROM 
pg_class c, pg_user u
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-3] WHERE 
c.relowner = u.usesysid AND c.relkind = 'r'
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-4]   AND 
c.relname !~ '^pg_'
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-5] UNION
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-6] SELECT 
c.relname as Name, 'table'::text as Type, NULL as Owner
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-7] FROM 
pg_class c
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-8] WHERE 
c.relkind = 'r'
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-9]   AND 
not exists (select 1 from pg_user where usesysid = c.relowner)
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-10]   AND 
c.relname !~ '^pg_'
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-11] ORDER 
BY Name
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [13] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3a4608
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [14] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3a4608
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [15] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3a4608
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [16] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3a4608
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [17] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3aadf0
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [18] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3aadf0
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [19] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3aadf0
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [20] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3aadf0

Any idea? Some databases are screwed up

Regards... :-)

-- 
Why use just any relational database,
when you can use PostgreSQL?
-
Martín Marqués                 | [EMAIL PROTECTED]
Programmer, Administrator, DBA | Centro de Telematica,
                                 Universidad Nacional del Litoral
-




Re: [HACKERS] [GENERAL] Where do they find the time??? Great Bridge closed now!!!??

2001-09-14 Thread Matthew Rice

peace_flower alavoor[AT]@yahoo.com writes:
> I hope the MySQL team will drop the development and Jump into PostgreSQL
> development. Pgsql going to be the only sql server  to
> run the WORLD ECONOMY smoothly.. There is no time to support and develop
> two duplicate products!! PostgreSQL is very advanced SQL server
> more advanced than mysql.

What a coincidence.  I was about to say the exact opposite.  Obviously,
PostgreSQL isn't the one true database and everyone should Jump into MySQL.
It is easier to type.

I hope your post was meant as a joke because it was hilarious.
-- 
matthew rice [EMAIL PROTECTED]   starnix inc.
tollfree: 1-87-pro-linuxthornhill, ontario, canada
http://www.starnix.com  professional linux services  products




Re: [HACKERS] count of occurences PLUS optimisation

2001-09-14 Thread Thurstan R. McDougle

Sorry about the size of this message! It covers several optimisation
areas.

Yes we are talking about a limited situation of ORDER BY (that does not
match the GROUP BY order) plus LIMIT, but one that is easy to identify.

It also has the advantage that the number to be LIMITed will 9 times out
of 10 be known at query plan time (as LIMIT seems to mostly be used with
a constant), so making it an optimization that rarely needs to estimate.

It could even be tested for at the query run stage rather than the query
plan stage in those cases where the limit is not known in advance,
although that would make the explain less accurate.  Probably for
planning we should just assume that if a LIMIT is present that it is
likely to be for a smallish number.  The planner currently estimates
that 10% of the tuples will be returned in these cases.

The level up to which building a shorter list is better than a sort and
keep/discard should be evaluated.  It would perhaps depend on what
proportion the LIMIT is of the estimated set returned by the GROUP BY.
One point to note is that, IIRC, an ordered list's efficiency drops
faster than that of most decent sorts once the data must be paged
to/from disk.

For larger LIMITs we should still get some benefits if we get the first
LIMIT items, then sort just them and compare each new item against the
lowest item in this list.  Maybe form a batch of new items, then merge
the new batch in to produce a new LIMIT-n-long sorted list and repeat.
One major advantage of this is that as the new batch is being fetched we
no longer need to keep the existing list in ram.  I should think that
each new batch should be no longer than we can fit in ram, or the amount
of ram that is best to enable an efficient list merge phase.

We could kick into this second mode when the LIMIT exceeds the cutoff
or available ram.
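The "short ordered list" approach described above is essentially bounded top-k selection. A small illustrative sketch in Python (not PostgreSQL code; the function name and sample data are invented), using a size-k heap so memory stays proportional to the LIMIT rather than the input:

```python
import heapq

def top_k(items, k, key=lambda x: x):
    """Keep only the k largest items while scanning the input,
    instead of sorting everything and discarding most of it."""
    heap = []  # min-heap holding the current k best, smallest on top
    for item in items:
        if len(heap) < k:
            heapq.heappush(heap, (key(item), item))
        elif key(item) > heap[0][0]:
            heapq.heapreplace(heap, (key(item), item))
    # Final sort of just k items, descending: the ORDER BY ... LIMIT k result
    return [item for _, item in sorted(heap, reverse=True)]

# e.g. the 3 most frequent ids out of many (id, count) pairs
counts = [("a", 40), ("b", 7), ("c", 102), ("d", 55), ("e", 1)]
print(top_k(counts, 3, key=lambda t: t[1]))  # -> [('c', 102), ('d', 55), ('a', 40)]
```

Each of the N input rows costs at most O(log k) to process, which is the win over sorting all N rows when k is a small fraction of N.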

I have just noticed while looking through the planner and executor that
the 'unique' node (SELECT DISTINCT [ON]) comes between the sort and
limit nodes and is run separately.  Would it not be more efficient, in
the normal case of distinct on ORDER BY order (or the start of ORDER BY),
for uniqueness to be handled within the sorting, as these routines
are already comparing the tuples?  Also, if the unique node is separate
then it makes the merging of sort and limit impossible if DISTINCT is
present.
However, there is still the case of distinct where a sort is not
requested, needed (index scan instead?) or is not suitable for the
distinct, so a separate distinct node executor is still required.

Taking all these into account, it seems that quite a lot of code would
need changing to implement this optimisation.  Specifically, the SORT,
UNIQUE and LIMIT nodes (and their planners) and the sort utils would
need a separate variant, and the current nodes would need altering.
It is a pity, as I would expect fairly large benefits in those cases of
LIMITing to a small subset of a large dataset.

Martijn van Oosterhout wrote:
 
> On Thu, Sep 13, 2001 at 05:38:56PM +0100, Thurstan R. McDougle wrote:
> > What I am talking about is WHEN the sort is required we could make the
> > sort more efficient, as inserting into a SHORT ordered list should be
> > better than building a BIG list and sorting it, then only keeping a
> > small part of the list.
>
> For a plain SORT, it would be possible. Anything to avoid materialising the
> entire table in memory. Unfortunately it won't help if there is a GROUP
> afterwards, because the group can't really know when to stop.
>
> But yes, if you had LIMIT<->SORT... you could do that. I can't imagine it
> would be too hard to arrange.
>
> > In the example in question there would be perhaps 400 records, but only
> > 10 are needed.  From the questions on these lists it seems quite common
> > for only a very low proportion of the records to be required (less than
> > 10%/up to 100 typically), in these cases it would seem to be a useful
> > optimisation.
>
> Say you have a query:
>
> select id, count(*) from test group by id order by count desc limit 10;
>
> This becomes:
>
> LIMIT <- SORT <- GROUP <- SORT <- test
>
> The inner sort would still have to scan the whole table, unless you have an
> index on id. In that case your optimisation would be cool.
>
> Have I got it right now?
>
> --
> Martijn van Oosterhout [EMAIL PROTECTED]
> http://svana.org/kleptog/
> > Magnetism, electricity and motion are like a three-for-two special offer:
> > if you have two of them, the third one comes free.

-- 
This is the identity that I use for NewsGroups. Email to 
this will just sit there. If you wish to email me replace
the domain with knightpiesold . co . uk (no spaces).




[HACKERS] Where do they find the time??? Great Bridge closed now!!!??

2001-09-14 Thread peace_flower

Great Bridge has ceased operations and is not going to support PostgreSQL
(because of a lack of investors).

In these days of economic downturn, recession and world-wide  economic
depression...(and even the looming war) I am wondering
how the MySQL team is finding time to support and develop duplicate SQL
server products...

I am NOT FINDING time even to fully understand every line of postgreSQL
source code and use all the capabilities of POstgreSQL!!!

I hope the MySQL team will drop the development and Jump into PostgreSQL
development. Pgsql going to be the only sql server  to
run the WORLD ECONOMY smoothly.. There is no time to support and develop
two duplicate products!! PostgreSQL is very advanced SQL server
more advanced than mysql.

If they (mysql developers) have lot of time to waste, I can give them
plenty of work at my home!!

