On Thu, Nov 20, 2003 at 04:08:28PM -0500, Tom Lane wrote:
Kurt Roeckx [EMAIL PROTECTED] writes:
I just installed a 7.4 on windows/cygwin. I restored a dump but
ran out of disk space during the creation of an index. In psql I
saw the ERROR: could not extend relation .
From that point
Kurt Roeckx [EMAIL PROTECTED] writes:
It's still logging the recycled transaction log file. Is that
sent to stdout instead of stderr maybe?
No, it all goes to stderr. But that output comes from a different
subprocess. Not sure why that subprocess would still have working
stderr if others
Neil Conway [EMAIL PROTECTED] writes:
Under what circumstances do we convert a relation to a view? Is this
functionality exposed to the user?
This is a backwards-compatibility hangover. pg_dump scripts from
somewhere back in the Dark Ages (6.something) would represent a view
as
CREATE
No problem, a dictionary with support for compounds will be available as a separate
contrib module from our site until 7.5.
Hannu Krosing wrote:
Tom Lane kirjutas N, 20.11.2003 kell 17:18:
Oleg Bartunov [EMAIL PROTECTED] writes:
we have a patch for contrib/tsearch2 we'd like to commit for 7.4.1.
Is
On Fri, 21 Nov 2003, Teodor Sigaev wrote:
No problem, a dictionary with support for compounds will be available as a separate
contrib module from our site until 7.5.
Hmm, I think it's better not to introduce another dictionary, which would require
additional effort to configure tsearch2, but to maintain the whole
On Friday 21 November 2003 09:42, Oleg Bartunov wrote:
On Fri, 21 Nov 2003, Teodor Sigaev wrote:
No problem, a dictionary with support for compounds will be available as a
separate contrib module from our site until 7.5.
Hmm, I think it's better not to
On Fri, 21 Nov 2003, Andreas Joseph Krogh wrote:
On Friday 21 November 2003 09:42, Oleg Bartunov wrote:
On Fri, 21 Nov 2003, Teodor Sigaev wrote:
No problem, a dictionary with support for compounds will be available as a
separate contrib module
Tom Lane wrote:
It's completely fallacious to imagine that we could make this change be
transparent to external applications. To take two examples:
1. How many places do you think know that pg_attribute.attnum links to
pg_attrdef.adnum? pg_dump, psql, and the JDBC driver all appear to
know
On Fri, Nov 21, 2003 at 09:38:50AM +0800, Christopher Kings-Lynne wrote:
Yeah, I think the main issue in all this is that for real production
sites, upgrading Postgres across major releases is *painful*. We have
to find a solution to that before it makes sense to speed up the
major-release
On Thu, 2003-11-20 at 19:40, Matthew T. O'Connor wrote:
I'm open to discussion on changing the defaults. Perhaps it would
be better to use some non-linear (perhaps logarithmic) scaling factor,
so that you wind up with something roughly like this:
#tuples activity% for vacuum
1k
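The sliding scale floated above can be sketched as a toy model. Everything here is illustrative: the function shape, the base/floor percentages, and the function names are invented for the example, not pg_autovacuum code.

```python
import math

def vacuum_activity_pct(n_tuples, base_pct=40.0, floor_pct=5.0):
    """Hypothetical logarithmic scaling: small tables should accumulate a
    large fraction of dead tuples before vacuum pays off, huge tables a
    small one. Starts at base_pct for tables up to 1k tuples and halves
    the required activity % for every factor of 10 in table size,
    bottoming out at floor_pct."""
    if n_tuples <= 1000:
        return base_pct
    decades = math.log10(n_tuples / 1000)
    return max(floor_pct, base_pct / (2 ** decades))

def vacuum_threshold(n_tuples):
    """Dead-tuple count that would trigger a vacuum for a table."""
    return int(n_tuples * vacuum_activity_pct(n_tuples) / 100)
```

With these made-up constants, a 1k-row table needs 40% activity but a billion-row table only 5%, which is the non-linear behavior described above.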
On Thu, 2003-11-20 at 23:27, Tom Lane wrote:
Neil Conway [EMAIL PROTECTED] writes:
Actually, I deliberately chose attpos rather than attlognum (which is
what some people had been calling this feature earlier). My reasoning
was that the logical number is really a nonsensical idea: we just
Andreas Pflug [EMAIL PROTECTED] writes:
I don't quite understand your argumentation.
My point is that to change attnum into a logical position without
breaking client apps (which is the ostensible reason for doing it
that way), we would need to redefine all system catalog entries that
reference
Hello:
I'm having a little problem with my .net data provider for postgresql 7.4.
I'm executing a little sample that does:
1. Connect to the server.
2. Start transaction.
3. Execution of an invalid SQL command.
4. Catch exception and rollback transaction.
After sending the rollback transaction
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
I don't quite understand your argumentation.
My point is that to change attnum into a logical position without
breaking client apps (which is the ostensible reason for doing it
that way), we would need to redefine all system catalog
Andreas Pflug [EMAIL PROTECTED] writes:
Maybe my proposal wasn't clear enough:
Just as an index references a pg_class entry by its OID, not some value
identifying its physical storage, all objects might continue
referencing columns by attnum.
That's exactly the same thing I am saying.
Carlos Guzmán Álvarez [EMAIL PROTECTED] writes:
After sending the rollback transaction command I'm not receiving any
response from the server; instead, if the SQL command is a valid SQL
command, all runs fine. Any idea what the problem can be?
Are you using the
I'm thinking about attacking pg_dump's lack of knowledge about using
dependencies to determine a safe dump order. But if there's someone
out there actively working on the problem, I don't want to tread on
your toes ... anyone?
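For what it's worth, the core of the dump-order problem is an ordinary topological sort over the dependency graph. A toy sketch (the object names are invented, and real pg_dump also has to cope with dependency loops, which this does not):

```python
from graphlib import TopologicalSorter

# Hypothetical miniature of the pg_dump problem: each object maps to the
# objects it depends on, and a safe dump order must emit every
# dependency before its dependents.
deps = {
    "table_a":  [],
    "sequence": [],
    "table_b":  ["table_a", "sequence"],   # FK + serial default
    "view_v":   ["table_b"],
    "index_i":  ["table_b"],
}

def safe_dump_order(deps):
    # static_order() yields each node only after all of its dependencies
    return list(TopologicalSorter(deps).static_order())
```

A cycle (say, two views referencing each other) raises CycleError, which is exactly the case where pg_dump would have to break the loop, e.g. by dumping one object in a reduced form first.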
Also, if you've got uncommitted patches for pg_dump, please let me
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Maybe my proposal wasn't clear enough:
Just as an index references a pg_class entry by its OID, not some value
identifying its physical storage, all objects might continue
referencing columns by attnum.
That's exactly the same
On Fri, Nov 21, 2003 at 02:49:28AM -0500, Tom Lane wrote:
Kurt Roeckx [EMAIL PROTECTED] writes:
It's still logging the recycled transaction log file. Is that
sent to stdout instead of stderr maybe?
No, it all goes to stderr. But that output comes from a different
subprocess. Not sure
On Thu, 2003-11-20 at 22:20, Tom Lane wrote:
It should be noted that because Oracle does it that way is a
guaranteed nonstarter as a rationale for any Postgres feature proposal.
A method of doing something is not a feature; making something
possible that couldn't be done before is a feature.
Max Jacob [EMAIL PROTECTED] writes:
I'm trying to call plpgsql functions from c functions directly through
the Oid, but i have a problem: it seems that the plpgsql interpreter
calls SPI_connect and fails even if the caller has already
spi-connected.
This is a safety check. If you are
Hello:
Are you using the extended query protocol? If so you probably have
forgotten the need for a Sync message.
You are right, thanks very much, it's working well now.
--
Best regards
Carlos Guzmán Álvarez
Vigo-España
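For anyone hitting the same symptom: in the v3 extended query protocol the backend discards messages after an error until it sees Sync, and only Sync produces the ReadyForQuery response, hence the "no response" behavior when Sync is forgotten. The Sync message itself is trivial to frame; a sketch of the wire bytes (not code from any particular driver):

```python
import struct

def sync_message():
    """Frame a v3 protocol Sync message: a one-byte type tag 'S'
    followed by an Int32 length that counts itself. Sync has no
    payload, so the length field is 4."""
    return b"S" + struct.pack("!i", 4)
```

Sending these five bytes after the Parse/Bind/Execute sequence (valid or failed) closes the implicit error-recovery unit and elicits ReadyForQuery.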
Alvaro Herrera wrote:
On Fri, Nov 21, 2003 at 09:38:50AM +0800, Christopher Kings-Lynne wrote:
Yeah, I think the main issue in all this is that for real production
sites, upgrading Postgres across major releases is *painful*. We have
to find a solution to that before it makes sense to speed up
strk [EMAIL PROTECTED] writes:
It seems that the build system is missing something
(make distclean made it work)
If you aren't using configure --enable-depend, you should count on doing
at least make clean, preferably make distclean anytime you do a CVS
update. The default behavior is not to
Peter Eisentraut [EMAIL PROTECTED] writes:
Andrew Dunstan writes:
Maybe it wouldn't be of great value to PostgreSQL. And maybe it would. I
have an open mind about it. I don't think incompleteness is an argument
against it, though.
If you want to do it, by all means go for it. I'm sure it
Andreas Pflug [EMAIL PROTECTED] writes:
To put it differently: an ALTER COLUMN command may never-ever change the
identifier of the column, i.e. attrelid/attnum.
If the ALTER is changing the column type, it's not really the same
column anymore; I see nothing wrong with assigning a new attnum in
Tom Lane wrote:
If the ALTER is changing the column type, it's not really the same
column anymore;
This doesn't convince me. "If the ALTER is changing the number of columns,
it's not really the same table anymore" is just as true as your statement.
Still, pg_class.oid remains the same for ADD and DROP
Jan Wieck [EMAIL PROTECTED] writes:
Alvaro Herrera wrote:
One of the most complex would be to avoid the need of pg_dump for
upgrades ...
We don't need a simple way, we need a way to create some sort of catalog
diff and a safe way to apply that to an existing installation during
the
Tom Lane wrote:
I think the main value of a build farm is that we'd get nearly immediate
feedback about the majority of simple porting problems. Your previous
arguments that it wouldn't smoke everything out are certainly valid ---
but we wouldn't abandon the regression tests just because they
Hello hackers
Apologies in advance, since I'm addressing the gurus...
There is a database system which has a concept called Transportable
Tablespaces (TTS). Would it not be a very easy and fast solution to
just do this with the tables, indexes, and all non-catalog-related files?
- You create a new db cluster (e.g
I was at the ObjectWeb Conference today; ObjectWeb
(http://www.objectweb.org) being a consortium that has amassed quite an
impressive array of open-source, Java-based middleware under their
umbrella, including for instance our old friend Enhydra. And they
regularly kept mentioning PostgreSQL in
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
When you get around to it, can you commit the patch I submitted that
dumps conversions in pg_dump. I need that in to complete my COMMENT ON
patch.
Just for the record, this is committed as part of the COMMENT ON patch.
Tom Lane [EMAIL PROTECTED] writes:
Josh Berkus [EMAIL PROTECTED] writes:
BTW, do we have any provisions to avoid overlapping vacuums? That is, to
prevent a second vacuum on a table if an earlier one is still running?
Yes, VACUUM takes a lock that prevents another VACUUM on the same
I'm thinking about attacking pg_dump's lack of knowledge about using
dependencies to determine a safe dump order. But if there's someone
out there actively working on the problem, I don't want to tread on
your toes ... anyone?
I've done a whole lot of _thinking_, but basically no _doing_, so go
Robert Treat wrote:
Just thinking out loud here, so disregard if you think it's chaff but...
if we had a system table pg_avd_defaults
[snip]
As long as pg_autovacuum remains a contrib module, I don't think any
changes to the system catalogs will be made. If pg_autovacuum is
deemed ready to
Josh Berkus wrote:
Matthew,
True, but I think it would be one hour once, rather than 30 minutes 4
times.
Well, generally it would be about 6-8 times at 2-4 minutes each.
Are you saying that you can vacuum a 1 million row table in 2-4
minutes? While a vacuum of the same table with an
Matthew T. O'Connor wrote:
But we track tuples because we can compare against the count given by
the stats system. I don't know of a way (other than looking at the FSM,
or contrib/pgstattuple ) to see how many dead pages exist.
I think making pg_autovacuum dependent on pgstattuple is very good
Matthew,
As long as pg_autovacuum remains a contrib module, I don't think any
changes to the system catalogs will be made. If pg_autovacuum is
deemed ready to move out of contrib, then we can talk about the above.
But we could create a config file that would store stuff in a flatfile table,
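A flat file along those lines could be as simple as whitespace-separated per-table thresholds. A hypothetical sketch; the file format, column names, and table names are all invented for illustration, not an actual pg_autovacuum format:

```python
# Hypothetical flat-file store for per-table pg_autovacuum settings,
# one line per table: dbname.table, vacuum threshold, analyze threshold.
SAMPLE = """\
# db.table          vac_thresh  anl_thresh
mydb.big_table      10000       5000
mydb.small_table    400         200
"""

def load_thresholds(text):
    """Parse the flat file into {qualified_name: (vac, anl)},
    skipping blank lines and '#' comments."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, vac, anl = line.split()
        out[name] = (int(vac), int(anl))
    return out
```

Such a file would also give pg_autovacuum a cheap place to persist state across restarts, without touching the system catalogs or users' databases.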
Josh Berkus wrote:
Matthew,
But we could create a config file that would store stuff in a flatfile table,
OR we could add our own system table that would be created when one
initializes pg_avd.
I don't want to add tables to existing databases, as I consider that
clutter and I never like
Matthew,
Actually, this might be a necessary addition as pg_autovacuum currently
suffers from the startup transients that the FSM used to suffer from,
that is, it doesn't remember anything that happened the last time it
ran. A pg_autovacuum database could also be used to store thresholds
Josh Berkus wrote:
Matthew,
I don't see how a separate database is better than a table in the databases,
except that it means scanning only one table and not one per database. For
one thing, making it a separate database could make it hard to back up and
move your database+pg_avd
Matthew,
Basically, I don't like the idea of modifying users' databases; besides,
in the long run most of what needs to be tracked will be moved to the
system catalogs. I kind of consider the pg_autovacuum database to be
equivalent to the changes that will need to be made to the system
Josh Berkus wrote:
Matthew,
I certainly agree that less than 10% would be excessive, I still feel
that 10% may not be high enough though. That's why I kinda liked the
sliding scale I mentioned earlier, because I agree that for very large
tables, something as low as 10% might be useful,
Josh Berkus [EMAIL PROTECTED] writes:
BTW, do we have any provisions to avoid overlapping vacuums? That is, to
prevent a second vacuum on a table if an earlier one is still running?
Yes, VACUUM takes a lock that prevents another VACUUM on the same table.
regards, tom
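The same "second vacuum is refused" behavior can be mimicked at the scheduler level with a non-blocking per-table lock. This is only a client-side sketch of the idea (the names are invented); server-side, VACUUM itself takes the table lock, so this would be belt and braces, not a substitute:

```python
import threading

# One lock per table; a scheduler that cannot take the lock knows a
# vacuum of that table is already running and skips it.
table_locks = {}

def try_vacuum(table, run):
    """Run `run(table)` unless a vacuum of `table` is already in
    progress; returns True if it ran, False if it was skipped."""
    lock = table_locks.setdefault(table, threading.Lock())
    if not lock.acquire(blocking=False):
        return False          # an earlier vacuum is still running
    try:
        run(table)
        return True
    finally:
        lock.release()
```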
strk [EMAIL PROTECTED] writes:
Does with(isStrict) still work?
regression=# create function foo(int) returns int as
regression-# 'select $1' language sql with(isStrict);
CREATE FUNCTION
regression=# select version();
version
I have uploaded a first cut at the RPMs to ftp.postgresql.org. While I am
not 100% convinced of the need to do so, I have restructured the directories,
and await comment on that.
Currently the upload is for Fedora Core 1 only. The source RPM should compile
on most recent Red Hat releases and close
On Friday 21 November 2003 01:13 pm, Lamar Owen wrote:
I have uploaded a first cut at the RPMs to ftp.postgresql.org. While I am
not 100% convinced of the need to do so, I have restructured the
directories, and await comment on that.
I expect RH 7.3, RH9, and RH 6.2 packages shortly from