[HACKERS] indexes and big tables

2001-07-27 Thread Robert Vojta

Hi,
  I'm fighting a problem with indexes. I read the documentation about
performance tips and about the internal logic that decides whether or not to
use an index, etc., and I still fail. I'm no SQL guru and I don't know what
to do now. My tables and indexes look like
...

CREATE TABLE counters (
  line         VARCHAR(64) NOT NULL,
  counterfrom  INT8 NOT NULL,
  counterto    INT8 NOT NULL,
  counterstamp TIMESTAMP NOT NULL,
  stamp        TIMESTAMP NOT NULL DEFAULT 'now');

CREATE INDEX counters_line_stamp ON counters (line, counterstamp); 

  I have several other tables too, named static_counters_(hour|day|month).
Why? It's only for fast summarization, so ...

  in counters - 5-minute counters for the last hour; rows are moved into the
static_counters table after the hourly summarization into counters_hour

  in counters_hour - hourly sums for the last day; rows are moved into the
static_counters_hour table after the daily summarization into counters_day

  in counters_day - daily sums for the last month; rows are moved into the
static_counters_day table after the monthly summarization into counters_month

  I'm inserting about 300 rows into the counters table every 5 minutes
(fetching info from routers). Summarization is done every hour, with some
internal logic; decisions are made from the hourly info. There are about
3 million rows in the static_counters table, and they cover only the last
month. That means that when the next month begins, we move the old data into
tables counters__MM, etc. I'm running VACUUM ANALYZE twice a day. Everything
works fine, but I'm drawing graphs from the static_counters and counters
tables. For the first graph I need data about 33 hours old, and for the
second graph I need data about a week old. I know there is more data in this
table than I need right now, but even when I create a table with only the
needed values, no indexes are used either. The select for the graphs looks
like this:

netacc=> EXPLAIN (SELECT SUM(counterfrom) AS from, SUM(counterto) AS to,
floor((985098900 - date_part('epoch', counterstamp)) / 300) AS sequence
FROM counters WHERE line='absolonll' AND date_part('epoch', counterstamp)
> 984978900 GROUP BY sequence, line) UNION (SELECT SUM(counterfrom) AS
from, SUM(counterto) AS to, floor((985098900 - date_part('epoch',
counterstamp)) / 300) AS sequence FROM static_counters WHERE
line='absolonll' AND date_part('epoch', counterstamp) > 984978900 GROUP BY
sequence, line);
NOTICE:  QUERY PLAN:
 
Unique  (cost=67518.73..67525.44 rows=89 width=36)
  ->  Sort  (cost=67518.73..67518.73 rows=895 width=36)
        ->  Append  (cost=1860.01..67474.87 rows=895 width=36)
              ->  Aggregate  (cost=1860.01..1870.90 rows=109 width=36)
                    ->  Group  (cost=1860.01..1865.46 rows=1089 width=36)
                          ->  Sort  (cost=1860.01..1860.01 rows=1089 width=36)
                                ->  Seq Scan on counters  (cost=0.00..1805.10 rows=1089 width=36)
              ->  Aggregate  (cost=65525.38..65603.97 rows=786 width=36)
                    ->  Group  (cost=65525.38..65564.67 rows=7858 width=36)
                          ->  Sort  (cost=65525.38..65525.38 rows=7858 width=36)
                                ->  Seq Scan on static_counters  (cost=0.00..65016.95 rows=7858 width=36)
 
EXPLAIN
netacc=>

  ... Indexes are used only when there are just a few rows in the table :(
The result of this select is about ~105 rows either way. Now I don't know
what to do, because drawing these two graphs takes about 30 seconds, and
that's too much.
  Please, how can I change my solution to get fast graph drawing? Should I
split this table? Make a table for each line? Upgrade the HW?

  I'm running PostgreSQL 7.0.3 on a RedHat 6.2 Linux box. The HW of this box
is a Duron 700MHz, 384MB RAM, and a SCSI disk. Should I upgrade PostgreSQL?
Should I upgrade the RAM or CPU? I don't know what to do now, and any help
will be very much appreciated.

  Thank you very much,
kind regards,
  Robert Vojta






AW: [HACKERS] indexes and big tables

2001-07-27 Thread Zeugswetter Andreas SB


> netacc=> EXPLAIN (SELECT SUM(counterfrom) AS from, SUM(counterto) AS to,
> floor((985098900 - date_part('epoch', counterstamp)) / 300) AS sequence
> FROM counters WHERE line='absolonll' AND date_part('epoch', counterstamp)
> > 984978900 GROUP BY sequence, line) ...

I would guess the problem is the restriction on counterstamp, because
written like that it probably can't use the index.

Try something that avoids the date_part function, e.g.:
    AND counterstamp > '2001-07-26 00:00:00.0'
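
A sketch of the first branch of the query rewritten along those lines; the
timestamp literal is illustrative only (epoch 984978900 is 2001-03-19
05:15:00 UTC, so the exact literal depends on the server's timezone):

-- Sketch: comparing counterstamp to a plain timestamp literal keeps the
-- restriction in a form the (line, counterstamp) index can match.
SELECT SUM(counterfrom) AS from, SUM(counterto) AS to,
       floor((985098900 - date_part('epoch', counterstamp)) / 300) AS sequence
FROM counters
WHERE line = 'absolonll'
  AND counterstamp > '2001-03-19 05:15:00'  -- illustrative; epoch 984978900
GROUP BY sequence, line;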

Andreas




Re: [HACKERS] indexes and big tables

2001-07-27 Thread Robert Vojta

> I would guess the problem is the restriction on counterstamp, because
> written like that it probably can't use the index.
> 
> Try something that avoids the date_part function, e.g.:
>     AND counterstamp > '2001-07-26 00:00:00.0'

  I will try it, but the index is used when there is only a small number of
rows. When I insert a lot of rows (millions), the index isn't used. I don't
know the number of rows that marks the border between using and not using
the index, but I can find it out if you want. Going to try your
suggestions ...
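
One way to see where that border lies - a sketch, assuming 7.0's
enable_seqscan run-time switch, set through SET - is to forbid sequential
scans temporarily; if EXPLAIN then shows an index scan, the planner can use
the index and is simply estimating the sequential scan as cheaper:

-- Sketch: with seqscans disabled, an index scan appearing in EXPLAIN
-- means the index is usable and the earlier choice was cost-based.
SET enable_seqscan TO 'off';
EXPLAIN SELECT * FROM counters
WHERE line = 'absolonll' AND counterstamp > '2001-07-26 00:00:00.0';
SET enable_seqscan TO 'on';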

Best regards,
  Robert






Re: [HACKERS] indexes and big tables

2001-07-27 Thread Robert Vojta

> The index is only used for the line= part of the where clause
> with your query. With many rows the "line=" is not selective enough 
> to justify the index.

Hi,
  I tried your suggestion with AND counterstamp > '2001-07-26 00:00:00.0',
and it works - the index is used :) But the whole query runs for 10 seconds
(it was 30s) and that's still too much; I need about 1 second. Can I
optimize my tables or queries, or should I upgrade part of my HW (Duron 700,
384MB RAM, slow SCSI disk :( )? I don't want a ready-made solution, just a
hint about which part to focus on, and I will go through the documentation
again. Thank you very much.

Best regards,
  Robert






Re: [HACKERS] indexes and big tables

2001-07-27 Thread Robert Vojta

> The index is only used for the line= part of the where clause
> with your query. With many rows the "line=" is not selective enough 
> to justify the index.

  I tried moving only the needed data into a new table and changing the
query to ...

netacc=> EXPLAIN SELECT counterfrom AS from, counterto AS to,
floor((98000 - date_part('epoch', counterstamp)) / 300) AS sequence
FROM graphs_5m WHERE line='absolonll';
NOTICE:  QUERY PLAN:

Index Scan using graphs_5m_idx on graphs_5m  (cost=0.00..58.38 rows=29 width=24)

EXPLAIN

  and the query runs for 3-5 seconds. Any idea how to make it faster? I
think it now comes down to a HW upgrade for faster results ...

Best regards,
  Robert






[HACKERS] PostgreSQL 7.1 on AIX5L is running with very poor performance

2001-07-27 Thread Leslie

Hi

PostgreSQL 7.1 is now running on AIX5L (S85, 6GB memory, 6 CPUs); it was
running on Linux before (Pentium III, 2 CPUs, as far as I
remember... sorry).
The performance on AIX5L is just half as good as it was on Linux.
I compiled PostgreSQL on AIX5L, of course.
I haven't tuned it at all since migrating to AIX5L, though.
Are there any problems with not tuning it when migrating to AIX5L?
What should I check first?
I can't make head or tail of it :(
Help!!

Best regards,
Shuichi










Re: [HACKERS] indexes and big tables

2001-07-27 Thread Stephan Szabo


On Fri, 27 Jul 2001, Robert Vojta wrote:
> netacc=> EXPLAIN (SELECT SUM(counterfrom) AS from, SUM(counterto) AS to,
> floor((985098900 - date_part('epoch', counterstamp)) / 300) AS sequence
> FROM counters WHERE line='absolonll' AND date_part('epoch', counterstamp)
> > 984978900 GROUP BY sequence, line) UNION (SELECT SUM(counterfrom) AS
> from, SUM(counterto) AS to, floor((985098900 - date_part('epoch',
> counterstamp)) / 300) AS sequence FROM static_counters WHERE
> line='absolonll' AND date_part('epoch', counterstamp) > 984978900 GROUP BY
> sequence, line);
> NOTICE:  QUERY PLAN:

Is there any possibility of overlapping rows between the parts of the
union?  If not, I'd suggest UNION ALL, since that might get rid of the
top-level Unique and Sort steps (probably not a huge gain, but it might
help).
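
A sketch of that rewrite - identical to the query above except for the set
operation, which lets the planner skip the duplicate-elimination pass:

(SELECT SUM(counterfrom) AS from, SUM(counterto) AS to,
        floor((985098900 - date_part('epoch', counterstamp)) / 300) AS sequence
 FROM counters WHERE line='absolonll'
   AND date_part('epoch', counterstamp) > 984978900
 GROUP BY sequence, line)
UNION ALL  -- no Unique/Sort over the Append, unlike plain UNION
(SELECT SUM(counterfrom) AS from, SUM(counterto) AS to,
        floor((985098900 - date_part('epoch', counterstamp)) / 300) AS sequence
 FROM static_counters WHERE line='absolonll'
   AND date_part('epoch', counterstamp) > 984978900
 GROUP BY sequence, line);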

> Unique  (cost=67518.73..67525.44 rows=89 width=36)
>   ->  Sort  (cost=67518.73..67518.73 rows=895 width=36)
>         ->  Append  (cost=1860.01..67474.87 rows=895 width=36)
>               ->  Aggregate  (cost=1860.01..1870.90 rows=109 width=36)
>                     ->  Group  (cost=1860.01..1865.46 rows=1089 width=36)
>                           ->  Sort  (cost=1860.01..1860.01 rows=1089 width=36)
>                                 ->  Seq Scan on counters  (cost=0.00..1805.10 rows=1089 width=36)
>               ->  Aggregate  (cost=65525.38..65603.97 rows=786 width=36)
>                     ->  Group  (cost=65525.38..65564.67 rows=7858 width=36)
>                           ->  Sort  (cost=65525.38..65525.38 rows=7858 width=36)
>                                 ->  Seq Scan on static_counters  (cost=0.00..65016.95 rows=7858 width=36)
>  
> EXPLAIN
> netacc=>





RE: [HACKERS] Question about porting the PostgreSQL

2001-07-27 Thread Hsin Lee


>Impossible to tell, since you haven't said word one about what this
>box is or what it can do.  If it were plain storage hardware, why do

Thanks for your reply. Yes, I know it's hard to explain why we plan to do
what I described without explaining more about the hardware we have. So it
sounds like it's possible, but very difficult, because of the sensitive
code that would have to be changed.

Hsin H. Lee






[HACKERS] (forw) Caldera OpenUNIX 8

2001-07-27 Thread Larry Rosenman

I got a mailbox-full bounce sending this to Peter, so here is the
information.

Larry Rosenman
- Forwarded message from Larry Rosenman <[EMAIL PROTECTED]> -

From: Larry Rosenman <[EMAIL PROTECTED]>
Subject: Caldera OpenUNIX 8
Date: Fri, 27 Jul 2001 11:58:01 -0500
Message-ID: <[EMAIL PROTECTED]>
User-Agent: Mutt/1.3.19i
X-Mailer: Mutt http://www.mutt.org/
To: [EMAIL PROTECTED]

lerami.lerctr.org is now running Caldera's OpenUNIX 8 operating system
which is UnixWare 7.1.1+fixes+Linux Kernel Personality. 

The tools (cc, et al) have just bug fixes. 

There is also a LINUX mode, which is the full OpenLinux 3.1 userland
on top of the UnixWare kernel with call mapping. 

If you have questions, let me know.

LER



-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749

- End forwarded message -

-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749




[HACKERS] Re: From TODO, XML?

2001-07-27 Thread jgray

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (mlw)
wrote:
> I was looking over the todo list and saw that someone wanted to support
> XML. I have some quick and dirty stuff that could be used.
> 

I'm not clear from the TODO what that "XML support" might involve. The
reference to pg_dump suggests an XML dump format for databases. That only
makes sense if we build an XML frontend that can load XML-based pg_dump
files.

I can't see any very useful application, though, unless someone has a
standard for database dumps using XML - I'd have thought that our current
"list of SQL statements" dump is fine (and useful if you speak SQL).

> OK, what should the feature look like?
> 

What's the feature for? The things I've been working on are trying to make
an XML parser available in the backend, and to build some XML document
manipulation functions/operators. This is useful for what I'm doing (using
XML documents as short packets of human and machine-readable descriptive
data) and may be useful to other people. This work hasn't progressed very
far (I did only spend an afternoon or so writing it though):
(available at http://www.cabbage.uklinux.net/pgxml.tar.gz)

One obvious (and current) topic is XQuery, and we might ask whether PG
could/should implement it. I think some thinking would be needed on that,
because a) it involves having a second, non-SQL parser on the front end,
which could be quite a large undertaking, and b) there's probably (from my
initial reading) some discrepancy between the PG (and indeed SQL) data
model and the XQuery one. If we could work around that, XQuery *might* be
an attraction to people. Certainly the ability to form one XML document
out of another via a query may be good for some projects.

Perhaps if people interested in XML "stuff" could add their thoughts here,
we might flesh out a little more of what's desired.

> Should it be grafted onto pg_dump or should a new utility pg_xml be
> created?
> 
> How strict should it be? A stricter parser is easier to write - one can
> use a library - but unfortunately most XML is crap, and for the utility
> to be useful it has to be really fuzzy.
> 

I don't think you really can write a non-strict XML parser. At least, not
if you want the resulting DOM to be useful - violations of well-formedness
probably result in logical difficulties with the document structure, e.g.

<a>
<b>
text
</a>
more text
</b>

Is <b> within <a>? Are <a> and <b> siblings? These questions are
answerable with well-formed XML - and they're very relevant questions to
ask for many XML processing tasks.

> Any input would be appreciated.
> 

Likewise - I'd be very interested to know what sort of things people are
interested in, as I've found an area where I have a need which others
might share. I'd like to contribute some effort to it.

Regards

John





[HACKERS] Re: Re: Re: Storing XML in PostgreSQL

2001-07-27 Thread John Gray

In article <9jrn78$pbv$[EMAIL PROTECTED]>, "Colin 't Hart" <[EMAIL PROTECTED]>
wrote:
>> Should we add this to /contrib?
> 
> I think so, at least until we get something better.
> 

I'm happy for you to add it, if you're willing to have it (It is meant to
be under the PostgreSQL license). I agree that there's still much to be
done... note that another thread (From TODO, XML?) has started up on this
subject as well.

No threads on XML for months, and then along come two at once :)

Regards

John





[HACKERS] config.guess on OpenUnix 8...

2001-07-27 Thread Larry Rosenman

I believe Caldera has submitted changes to the autoconf people to
update config/config.guess to support OpenUNIX 8. 

Our current stuff BREAKS unless you use the SCOMPAT magic to look like
a UnixWare 7.1.1 box. 

Who needs to pick up an update? 

Larry 

-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749




[HACKERS] OU8...

2001-07-27 Thread Larry Rosenman

Skip the patch for configure.in in that last one; use this in its place
(I missed one sysv5uw).



-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749


Index: configure.in
===================================================================
RCS file: /home/projects/pgsql/cvsroot/pgsql/configure.in,v
retrieving revision 1.129
diff -c -r1.129 configure.in
*** configure.in    2001/06/11 22:12:48    1.129
--- configure.in    2001/07/27 21:45:58
***************
*** 81,86 ****
--- 81,87 ----
  esac ;;
 sysv4*) template=svr4 ;;
   sysv5uw*) template=unixware ;;
+  sysv5*) template=unixware ;;
ultrix*) template=ultrix4 ;;
  esac
  
***************
*** 637,643 ****
  AC_PROG_LD
  AC_SUBST(LD)
  AC_SUBST(with_gnu_ld)
! case $host_os in sysv5uw*)
AC_CACHE_CHECK([whether ld -R works], [pgac_cv_prog_ld_R],
[
  pgac_save_LDFLAGS=$LDFLAGS; LDFLAGS="$LDFLAGS -Wl,-R/usr/lib"
--- 638,644 ----
  AC_PROG_LD
  AC_SUBST(LD)
  AC_SUBST(with_gnu_ld)
! case $host_os in sysv5*)
AC_CACHE_CHECK([whether ld -R works], [pgac_cv_prog_ld_R],
[
  pgac_save_LDFLAGS=$LDFLAGS; LDFLAGS="$LDFLAGS -Wl,-R/usr/lib"






[HACKERS] Re: From TODO, XML?

2001-07-27 Thread mlw

[EMAIL PROTECTED] wrote:
> 
> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (mlw)
> wrote:
> > I was looking over the todo list and saw that someone wanted to support
> > XML. I have some quick and dirty stuff that could be used.
> >
> 
> I'm not clear from the TODO what that "XML support" might involve. The
> reference to pg_dump suggests an XML dump format for databases. That only
> makes sense if we build an XML frontend that can load XML-based pg_dump
> files.
> 
> I can't see any very useful application, though, unless someone has a
> standard for database dumps using XML - I'd have thought that our current
> "list of SQL statements" dump is fine (and useful if you speak SQL).

Actually, I have been thinking about a couple of projects I have done.
Vendors like to think XML is a way to distribute databases.

So a parser that can scan a DTD and produce a usable CREATE TABLE (...)
statement would be very helpful. So would one that could compare a DTD to an
existing SQL table and map the XML data correctly (or report an error if
conversion from the data to the SQL types fails).
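
For illustration, a flat element such as <!ELEMENT item (name, price)>,
where both children are #PCDATA, might map to something like the following
(a hypothetical sketch of a converter's output, not an existing tool):

-- #PCDATA carries no type information, so TEXT is the safest default;
-- a converter could only narrow the types by inspecting the data itself.
CREATE TABLE item (
    name  TEXT,
    price TEXT
);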

During a database export, a SQL table could be used to create a DTD.

I was thinking along the lines of using XML as a fairly portable
import/export feature. Having this ability, as a generic solution, would
have made several tasks MUCH easier.

I would also like the XML parser to be fuzzy enough to take some bad XML
(because ALL XML is bad), because a lot of vendors like to distribute data in
bad XML.


-- 
5-4-3-2-1 Thunderbirds are GO!

http://www.mohawksoft.com




[HACKERS] Re: From TODO, XML?

2001-07-27 Thread Frank Ch. Eigler


markw wrote:

: [...]  Actually I have been thinking about a couple projects I have
: done. Vendors like to think XML is a way to distribute databases.

I would find it very helpful to see a table of what sorts of XML
functionality each major vendor supports.


: So a parser that can scan a DTD and make a usable create table (...) 
: line would be very helpful. [...]

Hmm, but hierarchically structured documents such as XML don't map well
to a relational model. The former tend to be recursive, i.e. to have more
levels of containment than the one or two that might be mappable to tables
and columns.
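
For example, a DTD rule like <!ELEMENT part (name, part*)> - a hypothetical
rule, just to illustrate - permits arbitrarily deep nesting, so there is no
fixed set of columns it could be flattened into.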


: During a database export, a SQL table could be used to create a DTD.
: [...]

This mapping (relational model -> XML) is more straightforward.


- FChE




Re: [HACKERS] LIBPQ on Windows and large Queries

2001-07-27 Thread Joerg Hessdoerfer

At 01:36 27.07.2001 -0400, you wrote:
>"Steve Howe" <[EMAIL PROTECTED]> writes:
> > Is anybody trying to solve the 8191 bytes query limit from libpq
> > windows port ???
>
>I think it's *your* responsibility, at least to identify what's going
>on.  This is an open source project, and that means you can and should
>fix problems that affect you.
>
>FWIW, if the problem is real (which I still misdoubt), it seems like
>it would have to be related to needing to flush multiple output
>bufferloads during a single PQsendQuery.  This works just fine on
>everything but Windows --- why would it fail there?  And why would
>no one but you have complained of it before?  I have no ideas.

[...]

To go on about this, I use psql 7.1.2 for toying around on Win2K, and have
not had this problem. I just evaluated using TOASTED blobs (written in a
text column as base64 encoded).

I did use the cygwin libpq, though. Probably I'll get around to compiling
the native libpq and try that with my test cases.

Greetings,
 Joerg





Re: [HACKERS] LIBPQ on Windows and large Queries

2001-07-27 Thread Hiroshi Inoue
Tom Lane wrote:
> 
> "Steve Howe" <[EMAIL PROTECTED]> writes:
> > Is anybody trying to solve the 8191 bytes query limit from libpq
> > windows port ???
> 
> FWIW, if the problem is real (which I still misdoubt), 

Yes, it's real.
Error handling seems to be the cause. When "Error: pqReadData() --
read() failed: errno=0 No error" occurs, WSAGetLastError() returns
WSAEWOULDBLOCK. If EWOULDBLOCK exists and errno == EWOULDBLOCK,
pqReadData() returns 0 or 1, not -1. So I added the code
errno = WSAGetLastError(); and #define EWOULDBLOCK WSAEWOULDBLOCK.
After that I encountered another error, "pqFlush() -- couldn't send
data: errno=0", where WSAGetLastError() also returns WSAEWOULDBLOCK.
After adding errno = WSAGetLastError(); there as well, the insertion
was successful.

regards,
Hiroshi Inoue



[HACKERS] Re: Slow Performance in PostgreSQL

2001-07-27 Thread Hiroshi Inoue
Michael Rudolph wrote:
> 
> To let all of you know the status of my problem: I took one large step
> forward when I realized that I had forgotten a "vacuum analyze" after
> copying the data. When I did this, my performance gain was huge - but it
> still didn't reach the performance of Centura. It is still a bit slower.
> Maybe I can optimize the ODBC connection in some way, because there is
> still one unsolved question from watching the postgres logfile: every
> query is executed twice (I don't know why), and both queries of one type
> need the same execution time. So I think, if I manage to reduce that
> load, I can get acceptable performance.
> 

Could you turn on mylog debug, even though it generates a lot of debug
output? To turn it on, please set the Windows registry entry
HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI\PostgreSQL\Debug
to 1. To turn mylog debug off, please set the entry to 0 or remove it.

regards,
Hiroshi Inoue



[HACKERS] From TODO, XML?

2001-07-27 Thread mlw

I was looking over the todo list and saw that someone wanted to support XML. I
have some quick and dirty stuff that could be used.

OK, what should the feature look like?

Should it be grafted onto pg_dump or should a new utility pg_xml be created?

How strict should it be? A stricter parser is easier to write - one can use
a library - but unfortunately most XML is crap, and for the utility to be
useful it has to be really fuzzy.

Any input would be appreciated.


-- 
5-4-3-2-1 Thunderbirds are GO!

http://www.mohawksoft.com




[HACKERS] Re: Re: Re: Storing XML in PostgreSQL

2001-07-27 Thread Colin 't Hart

> Should we add this to /contrib?

I think so, at least until we get something better.

Cheers,

Colin


