[HACKERS] CREATE INDEX CONCURRENTLY?

2014-10-31 Thread Mark Woodward
I have not kept up with PostgreSQL changes and have just been using it. A co-worker recently told me that you need the word "CONCURRENTLY" in "CREATE INDEX" to avoid table locking. I called BS on this because to my knowledge PostgreSQL does not lock tables. I referenced this page in the documentatio
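For reference, a minimal sketch of the two forms under discussion (table and column names are hypothetical). A plain CREATE INDEX holds a lock that blocks writes to the table for the duration of the build; the CONCURRENTLY form allows concurrent inserts, updates, and deletes, at the cost of a slower, multi-pass build:

    -- blocks INSERT/UPDATE/DELETE on "accounts" until the build finishes
    CREATE INDEX accounts_owner_idx ON accounts (owner_id);

    -- allows concurrent writes; cannot be run inside a transaction block
    CREATE INDEX CONCURRENTLY accounts_owner_idx ON accounts (owner_id);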

Re: [HACKERS] Permanent settings

2008-02-21 Thread Mark Woodward
I have been looking at this thread for a bit and want to interject an idea. A couple years ago, I offered a patch to the GUC system that added a number of abilities, two left out were: (1) Specify a configuration file on the command line. (2) Allow the inclusion of a configuration file from withi

Re: [HACKERS] Permanent settings

2008-02-21 Thread Mark Woodward
> > > Mark Woodward wrote: >> I have been looking at this thread for a bit and want to interject an >> idea. >> >> A couple years ago, I offered a patch to the GUC system that added a >> number of abilities, two left out were: >> >> (1) Spec

[HACKERS] SSL and USER_CERT_FILE

2008-05-15 Thread Mark Woodward
I am using PostgreSQL's SSL support and the conventions for the key and certificates don't make sense from the client perspective. Especially under Windows. I am proposing a few simple changes: Adding two APIs: void PQsetSSLUserCertFileName(char *filename) { user_crt_filename = strdup(filenam

[HACKERS] Syntax bug? Group by?

2006-10-17 Thread Mark Woodward
Shouldn't this work? select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15; ERROR: column "y.ycis_id" must appear in the GROUP BY clause or be used in an aggregate function If I am asking for a specific column value, should I, technically speaking, need to group by that column? -
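For illustration, the two standard ways to make this query acceptable to the parser, using the names from the message; both rely on ycis_id being fixed to a single value by the WHERE clause:

    -- either group by the constrained column...
    SELECT ycis_id, min(tindex), avg(tindex)
      FROM y
     WHERE ycis_id = 15
     GROUP BY ycis_id;

    -- ...or fold it into an aggregate, since it is constant over the matched rows
    SELECT min(ycis_id), min(tindex), avg(tindex)
      FROM y
     WHERE ycis_id = 15;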

Re: [HACKERS] Syntax bug? Group by?

2006-10-17 Thread Mark Woodward
> Stephen Frost wrote: > >> select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15; > > But back to the query the issue comes in that the ycis_id value is > included with the return values requested (a single row value with > aggregate values that isn't grouped) - if ycis_id is not uniq

Re: [HACKERS] Syntax bug? Group by?

2006-10-17 Thread Mark Woodward
> Hi, Mark, > > Mark Woodward wrote: >>> Stephen Frost wrote: >>> >>>> select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15; >>> But back to the query the issue comes in that the ycis_id value is >>> included with the retur

Re: [HACKERS] Syntax bug? Group by?

2006-10-17 Thread Mark Woodward
> Hi, Mark, > > Mark Woodward wrote: >> Shouldn't this work? >> >> select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15; >> >> ERROR: column "y.ycis_id" must appear in the GROUP BY clause or be used >> in an aggregate funct

Re: [HACKERS] Syntax bug? Group by?

2006-10-17 Thread Mark Woodward
> Mark Woodward wrote: >>> Stephen Frost wrote: >>> >>> >>>> select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15; >>>> >>> But back to the query the issue comes in that the ycis_id value is >>> included with t

Re: [HACKERS] Syntax bug? Group by?

2006-10-17 Thread Mark Woodward
> Mark Woodward wrote: >>>> select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15; >>>> >> >> I still assert that there will always only be one row to this query. >> This >> is an aggregate query, so all the rows with ycis_i

Re: [HACKERS] Syntax bug? Group by?

2006-10-17 Thread Mark Woodward
> On Tue, Oct 17, 2006 at 02:41:25PM -0400, Mark Woodward wrote: > >> The output column "ycis_id" is unambiguously a single value with regards >> to >> the query. Shouldn't PostgreSQL "know" this? AFAIR, I think I've used >> this >>

Re: [HACKERS] Syntax bug? Group by?

2006-10-17 Thread Mark Woodward
> Mark Woodward wrote: >> Shouldn't this work? >> >> select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15; >> >> ERROR: column "y.ycis_id" must appear in the GROUP BY clause or be >> used in an aggregate function > > This

Re: [HACKERS] Syntax bug? Group by?

2006-10-17 Thread Mark Woodward
> On Oct 17, 2006, at 15:19, Peter Eisentraut wrote: > >> Mark Woodward wrote: >>> Shouldn't this work? >>> >>> select ycis_id, min(tindex), avg(tindex) from y where ycis_id = 15; >>> >>> ERROR: column "y.ycis_id" mu

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-22 Thread Mark Woodward
> >> As you can see, in about a minute at high load, this very simple table >> lost about 10% of its performance, and I've seen worse based on update >> frequency. Before you say this is an obscure problem, I can tell you it >> isn't. I have worked with more than a few projects that had to switch

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-22 Thread Mark Woodward
> >> What you seem not to grasp at this point is a large web-farm, about 10 >> or >> more servers running PHP, Java, ASP, or even perl. The database is >> usually >> the most convenient and, aside from the particular issue we are talking >> about, best suited. > > The answer is sticky session

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-23 Thread Mark Woodward
>> The example is a very active web site, the flow is this: >> >> query for session information >> process HTTP request >> update session information >> >> This happens for EVERY http request. Chances are that you won't have >> concurrent requests for the same row, but you may have well over 100 >>
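A rough sketch of the access pattern being described, with a hypothetical sessions table; under MVCC every such request leaves behind one dead row version, which is where the vacuum pressure comes from:

    -- hypothetical session store
    CREATE TABLE sessions (
        sid     text PRIMARY KEY,
        data    text,
        touched timestamptz NOT NULL DEFAULT now()
    );

    -- per HTTP request: read the row, then rewrite it
    SELECT data FROM sessions WHERE sid = 'abc123';
    UPDATE sessions SET data = '...', touched = now() WHERE sid = 'abc123';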

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-23 Thread Mark Woodward
>> I suppose you have a table memberships (user_id, group_id) or something >> like it ; it should have as few columns as possible ; then try regularly >> clustering on group_id (maybe once a week) so that all the records for a >> particular group are close together. Getting the members of a gr

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-23 Thread Mark Woodward
>> Let me ask a question, you have this hundred million row table. OK, how >> much of that table is "read/write?" Would it be posible to divide the >> table into two (or more) tables where one is basically static, only >> infrequent inserts and deletes, and the other is highly updated? > > Well, al

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-23 Thread Mark Woodward
> Ühel kenal päeval, N, 2006-06-22 kell 12:41, kirjutas Mark Woodward: > >> > Depending on exact details and optimisations done, this can be either >> > slower or faster than postgresql's way, but they still need to do >> > something to get transactional

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-23 Thread Mark Woodward
> Mark Woodward wrote: > >> > In case of the number of actively modified rows being in only tens or >> > low hundreds of thousands of rows, (i.e. the modified set fits in >> > memory) the continuous vacuum process shows up as just another >> backend, >>

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-23 Thread Mark Woodward
> > Bottom line: there's still lots of low-hanging fruit. Why are people > feeling that we need to abandon or massively complicate our basic > architecture to make progress? > > regards, tom lane I, for one, see a particularly nasty unscalable behavior in the implementation

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-23 Thread Mark Woodward
> On 6/23/06, Mark Woodward <[EMAIL PROTECTED]> wrote: >> I, for one, see a particularly nasty unscalable behavior in the >> implementation of MVCC with regards to updates. > > I think this is a fairly common acceptance. The overhead required to > perform an UPDATE

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-23 Thread Mark Woodward
> Tom Lane wrote: >> If you're doing heavy updates of a big table then it's likely to end up >> visiting most of the table anyway, no? There is talk of keeping a map >> of dirty pages, but I think it'd be a win for infrequently-updated >> tables, not ones that need constant vacuuming. >> >> I thin

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-23 Thread Mark Woodward
l that would be a show stopper. > > Rick > > On Jun 22, 2006, at 7:59 AM, Mark Woodward wrote: > >>> After a long battle with technology, [EMAIL PROTECTED] ("Mark >>> Woodward"), an earthling, wrote: >>>>> Clinging to sanity, [EMAIL PROTECTED]

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-24 Thread Mark Woodward
> On 6/23/2006 3:10 PM, Mark Woodward wrote: > >> This is NOT an "in-place" update. The whole MVCC strategy of keeping old >> versions around doesn't change. The only thing that does change is one >> level of indirection. Rather than keep references to all v

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-24 Thread Mark Woodward
> On Sat, 24 Jun 2006, Mark Woodward wrote: > >> I'm probably mistaken, but aren't there already forward references in >> tuples to later versions? If so, I'm only suggesting reversing the order >> and referencing the latest version. > > I thought I un

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-24 Thread Mark Woodward
> On 6/24/06, Mark Woodward <[EMAIL PROTECTED]> wrote: >> Currently it looks like this: >> >> ver001->ver002->ver003->...-verN >> >> That's what t_ctid does now, right? Well, that's sort of stupid. Why not >> have it do this: >>
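The chain being argued about can be glimpsed from SQL through the system columns; a small sketch with a hypothetical table (only the newest, visible version shows up, but its ctid moves with every update, because each UPDATE writes a new tuple and links the old one forward):

    CREATE TABLE chain_demo (id int PRIMARY KEY, val int);
    INSERT INTO chain_demo VALUES (1, 0);

    SELECT ctid, xmin, xmax, val FROM chain_demo WHERE id = 1;
    UPDATE chain_demo SET val = val + 1 WHERE id = 1;
    SELECT ctid, xmin, xmax, val FROM chain_demo WHERE id = 1;  -- ctid has changed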

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-24 Thread Mark Woodward
> On 6/24/06, Mark Woodward <[EMAIL PROTECTED]> wrote: >> In the scenario, as previously outlined: >> >> ver001->verN->...->ver003->ver2->| >> ^-/ > > So you want to always keep an old version around? Prior to

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-24 Thread Mark Woodward
> On 6/24/06, Mark Woodward <[EMAIL PROTECTED]> wrote: >> > On 6/24/06, Mark Woodward <[EMAIL PROTECTED]> wrote: >> >> In the scenario, as previously outlined: >> >> >> >> ver001->verN->...->ver003->ver2->| >> >>

[HACKERS] vacuum row?

2006-06-24 Thread Mark Woodward
I originally suggested a methodology for preserving MVCC and everyone is confusing it with an update "in place"; this is not what I intended. How about a form of vacuum that targets a particular row? Is this possible? Would it have to be by transaction? ---(end of broadcast)

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-25 Thread Mark Woodward
> On 6/24/2006 9:23 AM, Mark Woodward wrote: > >>> On Sat, 24 Jun 2006, Mark Woodward wrote: >>> >>>> I'm probably mistaken, but aren't there already forward references in >>>> tuples to later versions? If so, I'm only suggesting r

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-26 Thread Mark Woodward
> Ühel kenal päeval, R, 2006-06-23 kell 17:27, kirjutas Bruce Momjian: >> Jonah H. Harris wrote: >> > On 6/23/06, Tom Lane <[EMAIL PROTECTED]> wrote: >> > > What I see in this discussion is a huge amount of "the grass must be >> > > greener on the other side" syndrome, and hardly any recognition

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-26 Thread Mark Woodward
> Heikki Linnakangas wrote: >> On Mon, 26 Jun 2006, Jan Wieck wrote: >> >> > On 6/25/2006 10:12 PM, Bruce Momjian wrote: >> >> When you are using the update chaining, you can't mark that index row >> as >> >> dead because it actually points to more than one row on the page, >> some >> >> are non-vi

Re: [HACKERS] vacuum row?

2006-06-26 Thread Mark Woodward
> On 6/24/06, Mark Woodward <[EMAIL PROTECTED]> wrote: >> I originally suggested a methodology for preserving MVCC and everyone is >> confusing it as update "in place," this is not what I intended. > > Actually, you should've presented your idea as perf

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-27 Thread Mark Woodward
> Ühel kenal päeval, E, 2006-06-26 kell 09:10, kirjutas Mark Woodward: >> > Ühel kenal päeval, R, 2006-06-23 kell 17:27, kirjutas Bruce >> Momjian: >> >> Jonah H. Harris wrote: >> >> > On 6/23/06, Tom Lane <[EMAIL PROTECTED]> wrote: >&

Re: [HACKERS] vacuum, performance, and MVCC

2006-06-27 Thread Mark Woodward
> On Fri, Jun 23, 2006 at 06:37:01AM -0400, Mark Woodward wrote: >> While we all know session data is, at best, ephemeral, people still want >> some sort of persistence, thus, you need a database. For mcache I have a >> couple plugins that have a wide range of opition

Re: [HACKERS] SO_SNDBUF size is small on win32?

2006-06-27 Thread Mark Woodward
I would set the SO_SNDBUF to 32768. > Hi, > > I see a performance issue on win32. This problem is caused by the > following URL. > > http://support.microsoft.com/kb/823764/EN-US/ > > On win32, default SO_SNDBUF value is 8192 bytes. And libpq's buffer is > 8192 too. > > pqcomm.c:117 > #define PQ

Re: [HACKERS] SO_SNDBUF size is small on win32?

2006-06-27 Thread Mark Woodward
> We have definitly seen weird timing issues sometimes when both client > and server were on Windows, but have been unable to pin it exactly on > what. From Yoshiykis other mail it looks like this could possibly be it, > since he did experience a speedup in the range we've been looking for in > tho

[HACKERS] update/insert, delete/insert efficiency WRT vacuum and MVCC

2006-07-03 Thread Mark Woodward
Is there a difference in PostgreSQL performance between these two different strategies: if(!exec("update foo set bar='blahblah' where name = 'xx'")) exec("insert into foo(name, bar) values('xx','blahblah')"); or exec("delete from foo where name = 'xx'"); exec("insert into foo(name, bar) values
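Spelled out as plain SQL (names and values taken from the message), the two strategies are below. In PostgreSQL an UPDATE already writes a new row version, so for an existing row both paths leave one dead tuple behind for vacuum to reclaim. (Much later releases added INSERT ... ON CONFLICT DO UPDATE, which folds the first strategy into a single statement, but that did not exist at the time of this thread.)

    -- strategy 1: try the update, insert only if it matched no rows
    UPDATE foo SET bar = 'blahblah' WHERE name = 'xx';
    INSERT INTO foo (name, bar) VALUES ('xx', 'blahblah');  -- issued only when the UPDATE reports 0 rows

    -- strategy 2: unconditionally delete, then insert
    DELETE FROM foo WHERE name = 'xx';
    INSERT INTO foo (name, bar) VALUES ('xx', 'blahblah');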

Re: [HACKERS] update/insert,

2006-07-05 Thread Mark Woodward
> On Tue, Jul 04, 2006 at 11:59:27AM +0200, Zdenek Kotala wrote: >> Mark, >> I don't know how it will exactly works in postgres but my expectations >> are: >> >> Mark Woodward wrote: >> >Is there a difference in PostgreSQL performance between these

[HACKERS] Mapping arbitrary and hierarchical XML to tuple

2006-09-08 Thread Mark Woodward
I have a system by which I store complex data in PostgreSQL as an XML string. I have a simple function that can return a single value. I would like to return sets and sets of rows from the data. This is not a huge problem, as I've written a few of these functions. The question I'd like to put out

[HACKERS] Netflix Prize data

2006-10-04 Thread Mark Woodward
I signed up for the Netflix Prize. (www.netflixprize.com) and downloaded their data and have imported it into PostgreSQL. Here is how I created the table: Table "public.ratings" Column | Type | Modifiers +-+--- item | integer | client | integer | rating | intege

Re: [HACKERS] Netflix Prize data

2006-10-04 Thread Mark Woodward
>> I signed up for the Netflix Prize. (www.netflixprize.com) >> and downloaded their data and have imported it into PostgreSQL. >> Here is how I created the table: > > I signed up as well, but have the table as follows: > > CREATE TABLE rating ( > movie SMALLINT NOT NULL, > person INTEGER NOT

Re: [HACKERS] Netflix Prize data

2006-10-04 Thread Mark Woodward
> "Mark Woodward" <[EMAIL PROTECTED]> writes: >> The one thing I notice is that it is REAL slow. > > How fast is your disk? Counting on my fingers, I estimate you are > scanning the table at about 47MB/sec, which might or might not be > disk-limited... &g

Re: [HACKERS] Netflix Prize data

2006-10-04 Thread Mark Woodward
> > "Greg Sabino Mullane" <[EMAIL PROTECTED]> writes: > >> CREATE TABLE rating ( >> movie SMALLINT NOT NULL, >> person INTEGER NOT NULL, >> rating SMALLINT NOT NULL, >> viewed DATE NOT NULL >> ); > > You would probably be better off putting the two smallints first followed > by > the

Re: [HACKERS] Netflix Prize data

2006-10-04 Thread Mark Woodward
> "Mark Woodward" <[EMAIL PROTECTED]> writes: >> The rating, however, is one char 1~9. Would making it a char(1) buy >> anything? > > No, that would actually hurt because of the length word for the char > field. Even if you used the "char" type,

[HACKERS] Query Failed, out of memory

2006-10-05 Thread Mark Woodward
I am using the netflix database: Table "public.ratings" Column | Type | Modifiers +--+--- item | integer | client | integer | day| smallint | rating | smallint | The query was executed as: psql -p 5435 -U pgsql -t -A -c "select client, item, rating, da

Re: [HACKERS] Query Failed, out of memory

2006-10-05 Thread Mark Woodward
> "Mark Woodward" <[EMAIL PROTECTED]> writes: >> psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from >> ratings order by client" netflix > netflix.txt > >> My question, it looks like the kernel killed psql, and not postmaster.

Re: [HACKERS] Query Failed, out of memory

2006-10-05 Thread Mark Woodward
> On Thu, Oct 05, 2006 at 11:56:43AM -0400, Mark Woodward wrote: >> The query was executed as: >> psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from >> ratings order by client" netflix > netflix.txt >> >> >> My question, it l

Re: [HACKERS] Query Failed, out of memory

2006-10-05 Thread Mark Woodward
> >> > FWIW, there's a feature in CVS HEAD to instruct psql to try to use a >> > cursor to break up huge query results like this. For the moment I'd >> > suggest using COPY instead. >> >> >> That's sort of what I was afraid off. I am trying to get 100 million >> records into a text file in a speci

Re: [HACKERS] Query Failed, out of memory

2006-10-05 Thread Mark Woodward
> Tom Lane wrote: >> "Mark Woodward" <[EMAIL PROTECTED]> writes: >> >>> psql -p 5435 -U pgsql -t -A -c "select client, item, rating, day from >>> ratings order by client" netflix > netflix.txt >>> >> >> FWIW, there&

Re: [HACKERS] Query Failed, out of memory

2006-10-05 Thread Mark Woodward
> On Thu, 2006-10-05 at 14:53 -0400, Luke Lonergan wrote: >> Is that in the release notes? > > Yes: "Allow COPY to dump a SELECT query (Zoltan Boszormenyi, Karel Zak)" I remember this discussion, it is cool when great features get added. ---(end of broadcast)--
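The feature being referenced is the ability to feed an arbitrary SELECT to COPY, which streams the result from the server instead of having psql buffer the whole thing; a sketch of how the earlier 100-million-row export could use it (output path hypothetical; the file is written on the server side, so it needs appropriate privileges):

    COPY (SELECT client, item, rating, day
            FROM ratings
           ORDER BY client)
      TO '/tmp/netflix.txt';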

[HACKERS] Upgrading a database dump/restore

2006-10-05 Thread Mark Woodward
Not to cause any arguments, but this is sort a standard discussion that gets brought up periodically and I was wondering if there has been any "softening" of the attitudes against an "in place" upgrade, or movement to not having to dump and restore for upgrades. I am aware that this is a difficult

Re: [HACKERS] Upgrading a database dump/restore

2006-10-05 Thread Mark Woodward
> Mark Woodward wrote: >> I am currently building a project that will have a huge number of >> records, >> 1/2tb of data. I can't see how I would ever be able to upgrade >> PostgreSQL >> on this system. >> >> > > Slony will help you upgrad

Re: [HACKERS] Upgrading a database dump/restore

2006-10-05 Thread Mark Woodward
> > Indeed. The main issue for me is that the dumping and replication > setups require at least 2x the space of one db. That's 2x the > hardware which equals 2x $$$. If there were some tool which modified > the storage while postgres is down, that would save lots of people > lots of money. Its tim

Re: [HACKERS] Upgrading a database dump/restore

2006-10-08 Thread Mark Woodward
> "Mark Woodward" <[EMAIL PROTECTED]> writes: >> Not to cause any arguments, but this is sort a standard discussion that >> gets brought up periodically and I was wondering if there has been any >> "softening" of the attitudes against an "in pla

Re: [HACKERS] Upgrading a database dump/restore

2006-10-09 Thread Mark Woodward
> "Mark Woodward" <[EMAIL PROTECTED]> writes: >>> Whenever someone actually writes a pg_upgrade, we'll institute a policy >>> to restrict changes it can't handle. > >> IMHO, *before* any such tool *can* be written, a set of rules must be >

Re: [HACKERS] Upgrading a database dump/restore

2006-10-09 Thread Mark Woodward
> On Mon, Oct 09, 2006 at 11:50:10AM -0400, Mark Woodward wrote: >> > That one is easy: there are no rules. We already know how to deal >> with >> > catalog restructurings --- you do the equivalent of a pg_dump -s and >> > reload. Any proposed pg_upgra

Re: [HACKERS] Upgrading a database dump/restore

2006-10-09 Thread Mark Woodward
> Mark, > >> No one could expect that this could happen by 8.2, or the release after >> that, but as a direction for the project, the "directors" of the >> PostgreSQL project must realize that the dump/restore is becoming like >> the old locking vacuum problem. It is a *serious* issue for PostgreS

Re: [HACKERS] Index Tuning Features

2006-10-10 Thread Mark Woodward
> Simon Riggs <[EMAIL PROTECTED]> writes: >> - RECOMMEND command > >> Similar in usage to an EXPLAIN, the RECOMMEND command would return a >> list of indexes that need to be added to get the cheapest plan for a >> particular query (no explain plan result though). > > Both of these seem to assume t

Re: [HACKERS] Index Tuning Features

2006-10-11 Thread Mark Woodward
> On 10/10/06, Mark Woodward <[EMAIL PROTECTED]> wrote: >> I think the idea of "virtual indexes" is pretty interesting, but >> ultimately a lesser solution to a more fundamental issue, and that would >> be "hands on" control over the planner. Est

Re: [HACKERS] Index Tuning Features

2006-10-11 Thread Mark Woodward
> > "Mark Woodward" <[EMAIL PROTECTED]> writes: > >> The analyzer, at least the last time I checked, does not recognize these >> relationships. > > The analyzer is imperfect but arguing from any particular imperfection is > weak > because someone w

Re: [HACKERS] Index Tuning Features

2006-10-11 Thread Mark Woodward
> "Mark Woodward" <[EMAIL PROTECTED]> writes: >> I would say that a "simpler" planner with better hints >> will always be capable of creating a better query plan. > > This is demonstrably false: all you need is an out-of-date hint, and > you can

Re: [HACKERS] Hints WAS: Index Tuning Features

2006-10-11 Thread Mark Woodward
> Mark, > > First off, I'm going to request that you (and other people) stop hijacking > Simon's thread on hypothetical indexes. Hijacking threads is an > effective way to get your ideas rejected out of hand, just because the > people whose thread you hijacked are angry with you. > > So please ob

Re: [HACKERS] Hints WAS: Index Tuning Features

2006-10-11 Thread Mark Woodward
>>> >>> Since you're the one who wants hints, that's kind of up to you to >>> define. >>> Write a specification and make a proposal. >>> >> >> What is the point of writing a proposal if there is a threat of "will be >> rejected" if one of the people who would do the rejection doesn't at >> least >>

Re: [HACKERS] Hints WAS: Index Tuning Features

2006-10-12 Thread Mark Woodward
> Clinging to sanity, [EMAIL PROTECTED] ("Mark Woodward") mumbled into > her beard: >> What is the point of writing a proposal if there is a threat of >> "will be rejected" if one of the people who would do the rejection >> doesn't at least outline

Re: [HACKERS] signed short fd

2005-03-14 Thread Mark Woodward
> Christopher Kings-Lynne wrote: >> > I really don't intend to do that, and it does seem to happen a lot. I >> am >> > the first to admit I lack tact, but often times I view the decisions >> made >> > as rather arbitrary and lacking a larger perspective, but that is a >> rant I >> > don't want to g

Re: [HACKERS] PHP stuff

2005-03-15 Thread Mark Woodward
> I'm currently adding support for the v3 protocol in PHP pgsql extension. > I'm wondering if anyone minds if I lift documentation wholesale from > the PostgreSQL docs for the PHP docs for these functions. For instance, > the fieldcodes allowed for PQresultErrorField, docs on > PQtransactionStat

Re: [HACKERS] signed short fd

2005-03-15 Thread Mark Woodward
> Mark Woodward wrote: >> > Christopher Kings-Lynne wrote: >> >> > I really don't intend to do that, and it does seem to happen a lot. >> I >> >> am >> >> > the first to admit I lack tact, but often times I view the >>

Re: [HACKERS] PHP stuff

2005-03-16 Thread Mark Woodward
>>>Uh, but that's what the BSD license allows --- relicensing as any other >>>license, including commercial. >> >> The point remains that Chris, by himself, does not hold the copyright on >> the PG docs and therefore cannot assign it to anyone. >> >> ISTM the PHP guys are essentially saying that th

Re: [HACKERS] WIN1252 patch broke my database

2005-03-16 Thread Mark Woodward
> Tom Lane wrote: >> You can't just randomly rearrange the pg_enc enum without forcing an >> initdb, because the numeric values of the encodings appear in system >> catalogs (eg pg_conversion). > > Oh, those numbers appear in the catalogs? I didn't realize that. > > I will force an initdb. > Doe

Re: [HACKERS] PHP stuff

2005-03-17 Thread Mark Woodward
> Mark Woodward wrote: >> I would say that "The PostgreSQL Global Development Group" or its >> representatives (I'm assuming Tom, Bruce, and/or Marc Fournier) just >> has to give something written, that says Christopher Kings-Lynne of >> "your ad

Re: [HACKERS] PHP stuff

2005-03-17 Thread Mark Woodward
> Peter Eisentraut wrote: >> Mark Woodward wrote: >> > I would say that "The PostgreSQL Global Development Group" or its >> > representatives (I'm assuming Tom, Bruce, and/or Marc Fournier) just >> > has to give something written, that says

Re: [HACKERS] PHP stuff

2005-03-17 Thread Mark Woodward
> Mark Woodward wrote: >> As the copyright owner, "The PostgreSQL Global Development Group," >> has the right to license the documentation any way they see fit. For >> PHP to sub-license the documentation, it legally has to be transferred >> in writing. Ver

Re: [HACKERS] PHP stuff

2005-03-17 Thread Mark Woodward
> "Mark Woodward" <[EMAIL PROTECTED]> writes: >> Sorry, that's not true. At least in the USA, any entity that can be >> identified can own and control copyright. While it is true, however, >> that >> there can be ambiguity, an informal body, say

Re: [HACKERS] postgreSQL and history of relational databases

2005-03-28 Thread Mark Woodward
> Hi there, > > while learning inkscape I did a sketch of picture describing > history of relational databases. It's available from > http://mira.sai.msu.su/~megera/pgsql/ Is there a direct line from INGRES to Postgres? I was under the impression that Postgres is a "new" lineage started after INGR

Re: [HACKERS] postgreSQL and history of relational databases

2005-03-28 Thread Mark Woodward
> On Mon, 28 Mar 2005, Mark Woodward wrote: > >>> Hi there, >>> >>> while learning inkscape I did a sketch of picture describing >>> history of relational databases. It's available from >>> http://mira.sai.msu.su/~megera/pgsql/ >> >

Re: [HACKERS] New FLOSS survey

2005-04-01 Thread Mark Woodward
> There is an updated survey of open source developers: > > http://flosspols.org/survey/survey_part.php?groupid=sd > It was very long, it says "45" questions, but many of those questions have many parts with drop-down menus. Tedious!! Also, it seems to be looking for sexual harassment issues as

Re: [HACKERS] ARC patent

2005-04-01 Thread Mark Woodward
>> -Original Message- >> From: Marian POPESCU [mailto:[EMAIL PROTECTED] >> Sent: Friday, April 01, 2005 8:06 AM >> To: pgsql-hackers@postgresql.org >> Subject: Re: [HACKERS] ARC patent >> >> >>>Neil Conway <[EMAIL PROTECTED]> writes: >> >>> >> >>> >> FYI, IBM has applied for a patent on

[HACKERS] US Census database (Tiger 2004FE)

2005-08-03 Thread Mark Woodward
I just finished converting and loading the US census data into PostgreSQL would anyone be interested in it for testing purposes? It's a *LOT* of data (about 40+ Gig in PostgreSQL) ---(end of broadcast)--- TIP 6: explain analyze is your friend

Re: [HACKERS] US Census database (Tiger 2004FE)

2005-08-04 Thread Mark Woodward
, 2005 at 05:00:16PM -0400, Mark Woodward wrote: >> >> >>>I just finished converting and loading the US census data into >>> PostgreSQL >>>would anyone be interested in it for testing purposes? >>> >>>It's a *LOT* of data (about 40+ Gig i

Re: [HACKERS] Solving the OID-collision problem

2005-08-04 Thread Mark Woodward
> I was reminded again today of the problem that once a database has been > in existence long enough for the OID counter to wrap around, people will > get occasional errors due to OID collisions, eg > > http://archives.postgresql.org/pgsql-general/2005-08/msg00172.php > > Getting rid of OID usage i

Re: [HACKERS] US Census database (Tiger 2004FE)

2005-08-04 Thread Mark Woodward
> * Mark Woodward ([EMAIL PROTECTED]) wrote: >> I just finished converting and loading the US census data into >> PostgreSQL >> would anyone be interested in it for testing purposes? >> >> It's a *LOT* of data (about 40+ Gig in PostgreSQL) > > How big du

Re: [HACKERS] US Census database (Tiger 2004FE)

2005-08-04 Thread Mark Woodward
> * Mark Woodward ([EMAIL PROTECTED]) wrote: >> > How big dumped & compressed? I may be able to host it depending on >> how >> > big it ends up being... >> >> It's been running for about an hour now, and it is up to 3.3G. > > Not too bad.

Re: [HACKERS] US Census database (Tiger 2004FE)

2005-08-04 Thread Mark Woodward
>> It's been running for about an hour now, and it is up to 3.3G. >> >> pg_dump tiger | gzip > tiger.pgz > > | bzip2 > tiger.sql.bz2 :) > I find bzip2 FAR SLOWER than the extra compression is worth. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postm

[HACKERS] pg_dump -- data and schema only?

2005-08-04 Thread Mark Woodward
I haven't seen this option, and does anyone think it is a good idea? An option to pg_dump, and maybe pg_dumpall, that dumps only the table declarations and the data. No owners, tablespaces, nothing. This, I think, would allow more generic PostgreSQL data transfers. ---(end

Re: [HACKERS] Solving the OID-collision problem

2005-08-04 Thread Mark Woodward
> "Mark Woodward" <[EMAIL PROTECTED]> writes: >> Why is there collision? It is because the number range of an OID is >> currently smaller than the possible usage. > > Expanding OIDs to 64 bits is not really an attractive answer, on several > grounds: > &

Re: [HACKERS] Solving the OID-collision problem

2005-08-04 Thread Mark Woodward
> "Mark Woodward" <[EMAIL PROTECTED]> writes: >>> 2. Performance. Doing this would require widening Datum to 64 bits, >>> which is a system-wide performance hit on 32-bit machines. > >> Do you really think it would make a measurable difference

[HACKERS] US Census database (Tiger 2004FE) - 4.4G

2005-08-04 Thread Mark Woodward
It is 4.4G in space in a gzip package. I'll mail a DVD to two people who promise to host it for Hackers. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org

Re: [HACKERS] pg_dump -- data and schema only?

2005-08-04 Thread Mark Woodward
> Am Donnerstag, den 04.08.2005, 10:26 -0400 schrieb Mark Woodward: >> I haven't seen this option, and does anyone thing it is a good idea? >> >> A option to pg_dump and maybe pg_dump all, that dumps only the table >> declarations and the data. No owners, tablespa

Re: [HACKERS] Solving the OID-collision problem

2005-08-04 Thread Mark Woodward
> "Mark Woodward" <[EMAIL PROTECTED]> writes: >>> I'm too lazy to run an experiment, but I believe it would. Datum is >>> involved in almost every function-call API in the backend. In >>> particular this means that it would affect performance-cr

Re: [HACKERS] pg_dump -- data and schema only?

2005-08-04 Thread Mark Woodward
> "Mark Woodward" <[EMAIL PROTECTED]> writes: >> Actually, there isn't a setting to just dump the able definitions and >> the >> data. When you dump the schema, it includes all the tablespaces, >> namespaces, owners, etc. > >> Just

Re: [HACKERS] US Census database (Tiger 2004FE) - 4.4G

2005-08-04 Thread Mark Woodward
e pre-formatted database? I would say the pre-formatted database is easier to manage. There are hundreds of individual zip files, each containing 10 or so data files. > Mark Woodward wrote: >> It is 4.4G in space in a gzip package. >> >> I'll mail a DVD to two pe

Re: [HACKERS] shrinking the postgresql.conf

2005-08-08 Thread Mark Woodward
> Hello, > > As I have been laboring over the documentation of the postgresql.conf > file for 8.1dev it seems that it may be useful to rip out most of the > options in this file? > > Considering many of the options can already be altered using SET why > not make it the default for many of them? > >
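For context, many of the settings in question can already be changed without editing postgresql.conf at all, per session or persistently per database or role; a small sketch (database and role names hypothetical):

    -- per session
    SET work_mem = 65536;  -- value in kB

    -- applied to every new connection to this database
    ALTER DATABASE mydb SET work_mem = 65536;

    -- applied to one role
    ALTER ROLE report_user SET work_mem = 32768;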

Re: [HACKERS] shrinking the postgresql.conf

2005-08-08 Thread Mark Woodward
>> Well, if you want PostgreSQL to act a specific way, then you are going >> to >> have to set up the defaults somehow, right? > > Of course, which is why we could use a global table for most of it. What if you wish to start the same database cluster with different settings? > >> >> Which is clea

[HACKERS] Want to add to contrib.... xmldbx

2006-01-29 Thread Mark Woodward
I have a fairly simple extension I want to add to contrib. It is an XML parser that is designed to work with a specific dialect. I have a PHP extension called xmldbx, it allows the PHP system to serialize its web session data to an XML stream. (or just serialize variables) PHP's normal serializer

Re: [HACKERS] Want to add to contrib.... xmldbx

2006-01-29 Thread Mark Woodward
> > [removing -patches since no patch was attached] > This sounds highly specialised, and probably more appropriate for a > pgfoundry project. > > In any case, surely the whole point about XML is that you shouldn't need > to construct custom parsers. Should we include a specialised parser for > evey

Re: [HACKERS] Want to add to contrib.... xmldbx

2006-01-29 Thread Mark Woodward
> > > Mark Woodward wrote: > >>XML is not really much more than a language, it says virtually nothing >>about content. Content requires custom parsers. >> >> > > Really? Strange I've been dealing with it all this time without having > to contr

Re: [HACKERS] Want to add to contrib.... xmldbx

2006-01-29 Thread Mark Woodward
> David Fetter <[EMAIL PROTECTED]> writes: >> I also think this would make a great pgfoundry project :) > > Yeah ... unless there's some reason that it needs to be tied to PG > server releases, it's better to put it on pgfoundry where you can > have your own release cycle. > I don't need pgfoundry,

Re: [HACKERS] Want to add to contrib.... xmldbx

2006-01-30 Thread Mark Woodward
> On Sun, Jan 29, 2006 at 03:15:06PM -0500, Mark Woodward wrote: >> > Postgres generally seems to favor extensibility over integration, and >> I >> > generally agree with that approach. >> >> I generally agree as well, but. >> >> I think th
