[HACKERS] Using 7.1rc1 under RH 6.2

2001-05-21 Thread Arsalan Zaidi
Hi. We tried the RPM under RH 6.2 and it would unpack, saying we needed this or that or the other... A whole bunch of dependencies. Anyway, one of us downloaded the source and compiled it and it worked... I just want to be sure... there's no compelling reason to use RH 7.x (basically any new

[HACKERS] RE: Plans for solving the VACUUM problem

2001-05-21 Thread Henshall, Stuart - WCP
Apologies if I've missed something, but isn't that the same xmin that ODBC uses for row versioning? - Stuart Snip Currently, the XMIN/XMAX command counters are used only by the current transaction, and they are useless once the transaction finishes and take up 8 bytes on disk.
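
For context, a minimal sketch of the optimistic-locking pattern that ODBC-style row versioning builds on xmin; the table t, its columns, and the literal xid value are hypothetical, not from the thread:

    -- Read the row together with its xmin and remember the value.
    SELECT xmin, id, val FROM t WHERE id = 1;
    -- ... the application edits the row ...
    -- Write back only if nobody else has updated the row since we read it.
    UPDATE t SET val = 'new value'
     WHERE id = 1
       AND xmin = 12345;   -- the xmin remembered from the SELECT above
    -- 0 rows updated means some other transaction changed the row
    -- (its xmin moved on), so the application must re-read and retry.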

[HACKERS] shared library strangeness?

2001-05-21 Thread Patrick Welche
I just upgraded PostgreSQL from 21 March CVS (rc1?) to May 19 16:21 GMT CVS. I found that all my cgi/fcg scripts which use libpq++ stopped working, in the vague sense of Apache mentioning an internal server error. Relinking them cured the problem (had to do this in haste, unfortunately no more

RE: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Mikheev, Vadim
We could keep the shared buffer lock (or add some other kind of lock) until the tuple is projected - after projection we need not read data for the fetched tuple from the shared buffer, and the time between fetching the tuple and projection is very short, so keeping a lock on the buffer will not impact concurrency

[HACKERS] Re: Detecting readline in configure

2001-05-21 Thread Tony Reina
[EMAIL PROTECTED] (Peter Eisentraut) wrote in message news:[EMAIL PROTECTED]... Every once in a while some user complains that the cursor keys don't work anymore in psql. There has even been one case where a major vendor has shipped binaries without readline support. While we keep telling

Re: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Tom Lane
Mikheev, Vadim [EMAIL PROTECTED] writes: I'm not sure that the time to do projection is short though --- what if there are arbitrary user-defined functions in the quals or the projection targetlist? Well, while we are on this subject I finally should say something about an issue that has bothered me for long

[HACKERS] Prevent CREATE TABLE

2001-05-21 Thread Tulio Oliveira
Hi, I need to grant access on my DB to many users so that they can SELECT, UPDATE, INSERT or DELETE on tables and views. That part is all OK. But I don't want these users to be able to create new tables in my DB. How can I do that? Regards, Tulio Oliveira -- A wise old man once said:
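
For reference, a hedged sketch of the per-table grants being asked about; appuser, accounts and orders are hypothetical names, and these grants only control access to existing objects - on their own they do not stop a connecting user from creating new tables:

    REVOKE ALL ON accounts, orders FROM PUBLIC;
    GRANT SELECT, INSERT, UPDATE, DELETE ON accounts, orders TO appuser;
    -- Restricting table creation itself needs a separate mechanism
    -- (in later releases with schemas: REVOKE CREATE ON SCHEMA public FROM PUBLIC).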

Re: [HACKERS] cvs snapshot compile problems

2001-05-21 Thread Patrick Welche
On Sat, May 19, 2001 at 08:03:50PM -0400, bpalmer wrote: On OBSD from cvs source, clean checkout: gcc -O2 -pipe -Wall -Wmissing-prototypes -Wmissing-declarations -I../../../../src/include -DLIBDIR=\/usr/local/pgsql/lib\ -DDLSUFFIX=\.so\ -c -o dfmgr.o dfmgr.c dfmgr.c: In function

Re: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Tom Lane
Vadim Mikheev [EMAIL PROTECTED] writes: It probably will not cause more IO than vacuum does right now. But unfortunately it will not reduce that IO. Uh ... what? Certainly it will reduce the total cost of vacuum, because it won't bother to try to move tuples to fill holes. The index cleanup

AW: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Zeugswetter Andreas SB
Vadim, can you remind me what UNDO is used for? 4. Split pg_log into small files with the ability to remove old ones (which do not hold statuses for any running transactions). They are already small (16Mb). Or do you mean even smaller? This imposes one huge risk, that is already a pain in

AW: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Zeugswetter Andreas SB
Vadim, can you remind me what UNDO is used for? 4. Split pg_log into small files with the ability to remove old ones (which do not hold statuses for any running transactions). and I wrote: They are already small (16Mb). Or do you mean even smaller? Sorry for the above little confusion of

AW: [HACKERS] Fix for tablename in targetlist

2001-05-21 Thread Zeugswetter Andreas SB
I tend to agree that we should not change the code to make select tab work, on the grounds of error-proneness. OK, here is another patch that does this:
    test= select test from test;
     test
    ------
     1
    (1 row)
I object also. It is not a feature, and
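
To make the two forms under debate concrete, a small sketch (foo is a hypothetical function taking a whole pg_class row):

    -- Accepted: pass the whole row to a function.
    SELECT foo(pg_class) FROM pg_class;
    -- Disputed: use the bare table name as a targetlist entry,
    -- which would require an output function for the row type.
    SELECT pg_class FROM pg_class;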

AW: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Zeugswetter Andreas SB
Really?! Once again: WAL records give you the *physical* address of tuples (both heap and index ones!) to be removed, and the size of the log to read records from is not comparable with the size of the data files. So how about a background vacuum-like process that reads the WAL and does the cleanup? Seems that

AW: [HACKERS] Fix for tablename in targetlist

2001-05-21 Thread Zeugswetter Andreas SB
True, although there's a certain inconsistency in allowing a whole row to be passed to a function by select foo(pg_class) from pg_class; and not allowing the same row to be output by Imho there is a big difference between the two. The foo(pg_class) form calls a function with an argument

AW: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Zeugswetter Andreas SB
Would it be possible to split the WAL traffic into two sets of files, Sure, the downside is two fsyncs :-( When I first suggested the physical log I had a separate file in mind, but that is imho only a small issue. Of course people with more than 3 disks could benefit from a split. Tom: If your

Re: AW: [HACKERS] Fix for tablename in targetlist

2001-05-21 Thread Tom Lane
Zeugswetter Andreas SB [EMAIL PROTECTED] writes: select pg_class from pg_class; Probably a valid interpretation would be if type pg_class or opaque had an output function. Hmm, good point. We shouldn't foreclose the possibility of handling things that way. Okay, I'm convinced: allowing

AW: [HACKERS] Re: External search engine, advice

2001-05-21 Thread Zeugswetter Andreas SB
Tom Lane wrote: begin; select * from foo where x = functhatreadsbar(); I thought that the per-statement way to do it with a non-cacheable function was: select * from foo where x = (select functhatreadsbar()); ?? Andreas PS: an iscacheable function without
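
A short sketch of the evaluation difference being discussed, using the thread's hypothetical foo table and functhatreadsbar() function:

    -- May call the function again for every row examined:
    SELECT * FROM foo WHERE x = functhatreadsbar();
    -- Wrapping it in a sub-select evaluates it once per statement:
    SELECT * FROM foo WHERE x = (SELECT functhatreadsbar());
    -- A function declared cacheable can instead be folded to a constant at plan time.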

Re: AW: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Tom Lane
Zeugswetter Andreas SB [EMAIL PROTECTED] writes: Tom: If your ratio of physical pages vs WAL records is so bad, the config should simply be changed to do fewer checkpoints (say every 20 min like a typical Informix setup). I was using the default configuration. What caused the problem was

AW: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Zeugswetter Andreas SB
My point is that we'll need dynamic cleanup anyway and UNDO is what should be implemented for dynamic cleanup of aborted changes. I do not yet understand why you want to handle aborts differently than outdated tuples. The ratio in a well-tuned system should well favor outdated tuples. If

RE: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Mikheev, Vadim
It probably will not cause more IO than vacuum does right now. But unfortunately it will not reduce that IO. Uh ... what? Certainly it will reduce the total cost of vacuum, because it won't bother to try to move tuples to fill holes. Oh, you're right here, but the daemon will most likely

[HACKERS] Is stats update during COPY IN really a good idea?

2001-05-21 Thread Tom Lane
We have a TODO item * Update reltuples in COPY I was just about to go do this when I realized that it may not be such a hot idea after all. The problem is that updating pg_class.reltuples means that concurrent COPY operations will block each other, because they want to update the same
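
A hedged illustration of the contention concern (mytab and the file names are hypothetical):

    -- Two sessions loading the same table concurrently:
    COPY mytab FROM '/tmp/a.dat';   -- session 1
    COPY mytab FROM '/tmp/b.dat';   -- session 2
    -- If COPY also updated the statistics, effectively doing something like
    --   UPDATE pg_class SET reltuples = ... WHERE relname = 'mytab';
    -- both sessions would fight over the same pg_class row and one would
    -- block until the other's transaction finished.  As things stand, the
    -- count is refreshed by a later VACUUM (or VACUUM ANALYZE):
    VACUUM ANALYZE mytab;
    SELECT relname, reltuples FROM pg_class WHERE relname = 'mytab';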

AW: AW: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Zeugswetter Andreas SB
Tom: If your ratio of physical pages vs WAL records is so bad, the config should simply be changed to do fewer checkpoints (say every 20 min like a typical Informix setup). I was using the default configuration. What caused the problem was probably not so much the standard 5-minute

RE: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Mikheev, Vadim
I hope we can avoid on-disk FSM. Seems to me that that would create problems both for performance (lots of extra disk I/O) and reliability (what happens if FSM is corrupted? A restart won't fix it). We can use WAL for FSM. Vadim ---(end of

Re: [HACKERS] Detecting readline in configure

2001-05-21 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes: I think we should add a --with-readline option to configure, and make configure die with an error if the option is used and no readline is found. If the option is not used, readline would still be used if found. This would not help, unless the user

RE: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Mikheev, Vadim
Really?! Once again: WAL records give you the *physical* address of tuples (both heap and index ones!) to be removed, and the size of the log to read records from is not comparable with the size of the data files. So how about a background vacuum-like process that reads the WAL and does the cleanup?

Re: [HACKERS] Detecting readline in configure

2001-05-21 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes: This may be useful as well, but it doesn't help those doing unattended builds, such as RPMs and *BSD ports. In that case you need to abort to notify the user that things didn't go the way the package maker had planned. Oh, I see: you are thinking

Re: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Tom Lane
Jan Wieck [EMAIL PROTECTED] writes: I think the in-shared-mem FSM could have some max-per-table limit and the background VACUUM just skips the entire table as long as nobody reused any space. I was toying with the notion of trying to use Vadim's MNMB idea (see his

RE: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Mikheev, Vadim
My point is that we'll need dynamic cleanup anyway and UNDO is what should be implemented for dynamic cleanup of aborted changes. I do not yet understand why you want to handle aborts differently than outdated tuples. Maybe because aborted tuples have a shorter Time-To-Live. And

Re: [HACKERS] Using 7.1rc1 under RH 6.2

2001-05-21 Thread Trond Eivind Glomsrød
Lamar Owen [EMAIL PROTECTED] writes: On Monday 21 May 2001 07:46, Arsalan Zaidi wrote: We tried the RPM under RH 6.2 and it would unpack, saying we needed this or that or the other... A whole bunch of dependencies. I just want to be sure... there's no compelling reason to use RH 7.x

[HACKERS] Re: Is stats update during COPY IN really a good idea?

2001-05-21 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes: People are using COPY into the same table at the same time? Yes --- we had a message from someone who was doing that (and running into unrelated performance issues) just last week. My vote is to update pg_class. The VACUUM takes much more time than

RE: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Mikheev, Vadim
From: Mikheev, Vadim Sent: Monday, May 21, 2001 10:23 AM To: 'Jan Wieck'; Tom Lane Cc: The Hermit Hacker; 'Bruce Momjian'; [EMAIL PROTECTED] Strange address, Jan? Subject: RE: [HACKERS] Plans for solving the VACUUM problem I think the in-shared-mem FSM could have some

[HACKERS] Re: Is stats update during COPY IN really a good idea?

2001-05-21 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes: My vote is to update pg_class. The VACUUM takes much more time than the update, and we are only updating the pg_class row, right? What? What does VACUUM have to do with this? You have to VACUUM to get pg_class updated after COPY, right? But doing

RE: AW: [HACKERS] Plans for solving the VACUUM problem

2001-05-21 Thread Mikheev, Vadim
Correct me if I am wrong, but both cases do present a problem currently in 7.1. The WAL log will not remove any WAL files for transactions that are still open (even after a checkpoint occurs). Thus if you do a bulk insert of gigabyte size you will require a gigabyte-sized WAL directory.