Josh Berkus wrote:
Well, it eliminates transaction isolation. Completely.
If the data is read-only, you presumably don't need that.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
Heikki Linnakangas wrote:
I've been thinking about running postgres from read-only media. It's
handy for creating demo CDs etc.
I think that a read-only installation of PostgreSQL would be a very poor
demonstration of its capabilities. Better put the data in a RAM disk.
--
Peter Eisentraut
Hi,
does anyone have experiences about putting a tablespace on ramdisk? Does it
work (and keep working after a restart of the server)?
Thanks in advance for any insight.
Paolo
[EMAIL PROTECTED] wrote:
Hi,
does anyone have experiences about putting a tablespace on ramdisk? Does it
work (and keep working after a restart of the server)?
Thanks in advance for any insight.
Yes, it does work as long as you don't restart your server.
Postgres does not appreciate
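For reference, the setup being discussed would look roughly like this. This is a sketch only: the tablespace name, table, and tmpfs mount point are illustrative, and, as noted, the files vanish on reboot.

```sql
-- Assumes a tmpfs mount such as:  mount -t tmpfs -o size=512M tmpfs /mnt/pgram
-- owned by the postgres user.
CREATE TABLESPACE ramspace LOCATION '/mnt/pgram';
CREATE TABLE hot_data (id int, payload text) TABLESPACE ramspace;
-- After a host reboot the tmpfs is empty, so PostgreSQL cannot find
-- the relation files for hot_data -- hence "don't restart".
```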
On Mon, 2005-11-21 at 08:37 +0100, Tino Wildenhain wrote:
On Monday, 2005-11-21 at 02:11 +0200, Hannu Krosing wrote:
Hi
It seems that plpython is unable to return a bytea string when it
contains NUL bytes:
...
Did you also try:
I'd like to mimic MySQL's in-memory tables (engine=memory), whose
structure survives a server restart (data lost, of course).
I suspected that a server restart would be a problem in this case.
Thank you anyway.
Paolo
Tino Wildenhain [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
Hi,
Hannu Krosing wrote:
On Mon, 2005-11-21 at 08:37 +0100, Tino Wildenhain wrote:
On Monday, 2005-11-21 at 02:11 +0200, Hannu Krosing wrote:
Hi
It seems that plpython is unable to return a bytea string when it
contains NUL bytes:
...
Did you also try:
[EMAIL PROTECTED] wrote:
I'd like to mimic MySQL's in-memory tables (engine=memory), whose
structure survives a server restart (data lost, of course).
I suspected that a server restart would be a problem in this case.
Thank you anyway.
You could use temp tables... but usually it isn't worth
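A minimal sketch of the temp-table alternative (table and column names are illustrative):

```sql
-- Temporary tables are session-local and dropped automatically at
-- session end, which loosely matches MySQL's engine=MEMORY behavior:
CREATE TEMP TABLE session_cache (k int PRIMARY KEY, v text);
INSERT INTO session_cache VALUES (1, 'transient');
-- The definition does not survive the session, so the structure must
-- be recreated each time -- one reason it usually isn't worth it.
```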
On 11/20/05, Heikki Linnakangas [EMAIL PROTECTED] wrote:
On Sat, 19 Nov 2005, Nicolas Barbier wrote:
You might want to take a look at the pages that I set up to track the
progress on my master's thesis:
url:http://www.nicolas.barbier.easynet.be/itsme/thesis/
especially the
here's a patch for drop database if exists. Barring objections I will
apply it in a day or two.
Should we use the IF EXISTS syntax in pg_dump output? For all DROP
commands in clean mode?
Might make it easier to wrap pg_dump output in a transaction?
Chris
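The proposed behavior, sketched below; the database name is illustrative, and at the time only the patch under discussion added this syntax:

```sql
-- Without IF EXISTS, this aborts a restore script when demo is absent:
DROP DATABASE demo;
-- With the patch, the same statement becomes a no-op plus a NOTICE:
DROP DATABASE IF EXISTS demo;
```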
I see Alvaro and Andrew have landed the patches they were working on
last week, so maybe today is a good time to do that re-pgindent we
were discussing.
regards, tom lane
Hi folks
my conquests with the GiST index for a custom type are nearly finished.
It is working as it is now, but there are a few problems here and there.
One of them is the amount of disk space the index itself takes. The type
structure itself takes 160 bytes. Adding 100,000 rows into the table -
CREATE
On Mon, 2005-11-21 at 15:18 +0200, Hannu Krosing wrote:
The project seems quite interesting, will surely take a deeper look
It looks quite alpha, seems to mess too deeply with the backend, and the
CVS checkout of the 'be' module does not build, so I will probably not
be able to use it in production for
On 2005-11-21, at 19:32, Martijn van Oosterhout wrote:
On Mon, Nov 21, 2005 at 04:58:25PM +0100, Grzegorz Jaskiewicz wrote:
my conquests with the GiST index for a custom type are nearly finished.
It is working as it is now, but there are a few problems here and there.
One of them is the amount of disc
I wonder if this idea might be taken a little further, to allow
read-only tablespaces?
This would allow old partitions in very large databases to be kept on
read-only media, and would allow normal backups to ignore this
unchanging set of data.
It also allows for certain specific optimisations
Take the query:
select a,b from dupa where b::text in (select b::text from dupa group by
b::text having count(b) > 2);
This is acceptable for creating a unique constraint; however, we cannot
mark the column unique without defining btree operators, which clearly
are not possible for sorting. Is
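One way to express the intended constraint without defining btree operators for the column's own type is a unique index on the text cast. This is a hypothetical sketch; the table and column names follow the quoted query, and the index name is made up:

```sql
-- Expression indexes only need the *expression's* result type (text)
-- to be btree-sortable, not the underlying column type:
CREATE UNIQUE INDEX dupa_b_text_uniq ON dupa ((b::text));
```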
Tom Lane wrote:
I see Alvaro and Andrew have landed the patches they were working on
last week, so maybe today is a good time to do that re-pgindent we
were discussing.
I return home at 9pm EST. If that is a good time I will run it tonight.
--
Bruce Momjian|
So final question, what should I do to make that index much smaller on
the disc.
Tune your penalty and picksplit functions. The Gevel module can help you
look inside the index ( http://www.sai.msu.su/~megera/postgres/gist/gevel ).
Usually, an index becomes big when picksplit works badly: during
Jim C. Nasby wrote:
It might be more useful to look at caching only planning and not
parsing. I'm not familiar with the output of the parsing stage, but
perhaps that could be hashed to use as a lookup into a cache of planned
queries. I suspect that would remove issues of different search_paths.
I've been running 8.1.0 on a test box since the FreeBSD port has been
available, and it appears that the catalog has become corrupted.
There's plenty of free space on all volumes, so it hasn't run out of
space or anything.
$ uname -a
FreeBSD shi.mochibot.com 6.0-RELEASE FreeBSD
On Nov 21, 2005, at 12:44 PM, Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
mochi=# drop table ping_1132387200;
ERROR: table ping_1132387200 does not exist
mochi=# create table ping_1132387200();
ERROR: type ping_1132387200 already exists
I'm not sure what to do about this..
Bob Ippolito wrote:
I've been running 8.1.0 on a test box since the FreeBSD port has been
available, and it appears that the catalog has become corrupted.
There's plenty of free space on all volumes, so it hasn't run out of
space or anything.
$ uname -a
FreeBSD shi.mochibot.com
On Nov 21, 2005, at 11:31 AM, Bob Ippolito wrote:
I've been running 8.1.0 on a test box since the FreeBSD port has
been available, and it appears that the catalog has become
corrupted. There's plenty of free space on all volumes, so it
hasn't run out of space or anything.
$ uname -a
Bob Ippolito [EMAIL PROTECTED] writes:
mochi=# drop table ping_1132387200;
ERROR: table ping_1132387200 does not exist
mochi=# create table ping_1132387200();
ERROR: type ping_1132387200 already exists
I'm not sure what to do about this..
Try dropping the type.
We've seen at least one
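The cleanup Tom suggests can be sketched like this (catalog queries only; the type name comes from the quoted session):

```sql
-- Locate the orphaned composite type left behind in pg_type:
SELECT oid, typname, typrelid
  FROM pg_type
 WHERE typname = 'ping_1132387200';
-- Then attempt to drop it; as the thread shows, this can itself fail
-- with "cache lookup failed" if the pg_class row is already gone:
DROP TYPE ping_1132387200;
```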
Are NULLs in Arrays compressed?
Just as NULLs are with normal unary datatypes.
Thanks,
Best Regards, Simon Riggs
Tom Lane [EMAIL PROTECTED] wrote
Would a simple retry loop actually help? It's not clear to me how
persistent such a failure would be.
[with reply to all followup threads] Yeah, this is the key and we definitely
have no 100% guarantee that several retries will solve the problem - just as
On Mon, 21 Nov 2005, Peter Eisentraut wrote:
Heikki Linnakangas wrote:
I've been thinking about running postgres from read-only media. It's
handy for creating demo CDs etc.
I think that a read-only installation of PostgreSQL would be a very poor
demonstration of its capabilities. Better put
Bob Ippolito [EMAIL PROTECTED] writes:
On Nov 21, 2005, at 12:44 PM, Tom Lane wrote:
Try dropping the type.
I did try that, I guess it didn't make it to the list yet:
mochi=# drop type ping_1132387200;
ERROR: cache lookup failed for relation 211174567
Hmm, apparently there are still
On Sun, 20 Nov 2005, Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
5. Don't try to write buffers with commit hint modifications. Just discard
them.
The performance costs of that alone are astonishing (ie, repeated
verifications of commit status).
You'd vacuum first to avoid
On Nov 21, 2005, at 1:14 PM, Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
On Nov 21, 2005, at 12:44 PM, Tom Lane wrote:
Try dropping the type.
I did try that, I guess it didn't make it to the list yet:
mochi=# drop type ping_1132387200;
ERROR: cache lookup failed for
Heikki Linnakangas wrote:
I think that a read-only installation of PostgreSQL would be a very
poor demonstration of its capabilities. Better put the data in a
RAM disk.
RAM space is limited.
Nowadays, CD space is often more limited than RAM. You could of course
now talk about a DVD
Bob Ippolito [EMAIL PROTECTED] writes:
I don't know how to get the oid of a type.. but there are certainly
entries in pg_depend with the other criteria:
Hmph, looks like you still have a pretty full set of dependencies for
the table. What about attributes --- try
select attname from
On Mon, Nov 21, 2005 at 04:58:25PM +0100, Grzegorz Jaskiewicz wrote:
my conquests with the GiST index for a custom type are nearly finished.
It is working as it is now, but there are a few problems here and there.
One of them is the amount of disk space the index itself takes. The type
structure itself
On Thu, Nov 17, 2005 at 09:30:43PM -0500, Bruce Momjian wrote:
Is the requirement for predicate locking, over and above a unique
constraint on an index that involves the record key, to deal with
the scenario of two inserts executing at the same time, both before
commit?
No. If you have
On Thu, Nov 17, 2005 at 10:15:30AM -0500, Stephen Frost wrote:
I don't think MERGE can really be made to be both though, in which case
it should really be the SQL2003 MERGE and we can make REPLACE/INSERT ON
DUPLICATE UPDATE something else. Perhaps a special form of MERGE where
you know it's
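For context, the SQL2003 MERGE shape being contrasted with REPLACE/INSERT ON DUPLICATE UPDATE looks roughly like this (PostgreSQL did not implement it at the time; table and column names are illustrative):

```sql
MERGE INTO target t
USING source s
   ON (t.k = s.k)
 WHEN MATCHED THEN
      UPDATE SET v = s.v
 WHEN NOT MATCHED THEN
      INSERT (k, v) VALUES (s.k, s.v);
```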
On Mon, Nov 21, 2005 at 08:51:32PM +, Simon Riggs wrote:
Are NULLs in Arrays compressed?
Just as NULLs are with normal unary datatypes.
I thought NULLs don't work in arrays yet? :-)
mark
--
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED]
Heikki Linnakangas [EMAIL PROTECTED] writes:
There's also this TODO:
Allow a warm standby system to also allow read-only queries [pitr]
In fact, I was originally thinking of that. I should've mentioned it.
It has the same issues with transactions and WAL as running from
read-only media, so
On Nov 21, 2005, at 1:43 PM, Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
I don't know how to get the oid of a type.. but there are certainly
entries in pg_depend with the other criteria:
Hmph, looks like you still have a pretty full set of dependencies for
the table. What about
Bob Ippolito [EMAIL PROTECTED] writes:
The attributes look like the names of all the columns in the table,
and reindexing didn't help.
So at this point it seems that the pg_class row disappeared, but there
probably wasn't any actual DROP operation --- you'd think at least some
of those other
On Wed, Nov 16, 2005 at 11:05:11PM -0300, Alvaro Herrera wrote:
Christopher Kings-Lynne wrote:
I've never been a fan of regression tests in the narrow sense of
let's test for this specific mistake we made once. If you can devise
a test that catches a class of errors including the one you
On Nov 21, 2005, at 1:59 PM, Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
The attributes look like the names of all the columns in the table,
and reindexing didn't help.
So at this point it seems that the pg_class row disappeared, but there
probably wasn't any actual DROP
Jim C. Nasby wrote:
isn't a test for a specific
case better than none at all? Is the concern how long make check takes?
It shouldn't be, since we can (and do) have multiple regression test
sets. If there are tests that take too long for normal use, let's make a
takes a long time set
It would certainly seem so on AIX.
In tracking down why postgres 8.x would segfault on AIX 5.3, it became
apparent to me that libreadline.a is a problematic library to link
against and that libedit.a is preferable (and for reasons other than
that readline is GPL while postgres is BSD-licensed).
On Wed, Nov 16, 2005 at 06:29:42PM +, Simon Riggs wrote:
On Wed, 2005-11-16 at 13:09 -0500, Bruce Momjian wrote:
I leaving for Mexico in a few hours to speak at a conference. I return
on Monday.
Is it helpful to tell hackers when I am not around? I was in NYC last
week for four
If you've ever loaded 100 million rows, you'll know just how annoying it
is to find that you have a duplicate row somewhere in there. Experience
shows that there is always one, whatever oath the analyst swears
beforehand.
It's hard to find out which row is the duplicate, plus you've just
screwed
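Absent an ERRORTABLE clause, the usual after-the-fact cleanup is a self-join on ctid. This is a sketch only; the table and key names are illustrative:

```sql
-- Keep the physically-first copy of each duplicated key and delete
-- the rest; ctid distinguishes rows that are otherwise identical:
DELETE FROM big_load a
 WHERE EXISTS (SELECT 1
                 FROM big_load b
                WHERE b.key = a.key
                  AND b.ctid < a.ctid);
```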
Bob Ippolito [EMAIL PROTECTED] writes:
Ok, so how do I figure out which file(s) are associated with pg_class
so I can feed this thing?
See contrib/oid2name and/or read
http://www.postgresql.org/docs/8.1/static/storage.html
regards, tom lane
On Monday, 2005-11-21 at 09:08 -0700, James William Pye wrote:
On Mon, 2005-11-21 at 15:18 +0200, Hannu Krosing wrote:
The project seems quite interesting, will surely take a deeper look
...
The 'layout' package needs to be installed first.
See this quick start section:
On Mon, 21 Nov 2005, Marc Munro wrote:
I wonder if this idea might be taken a little further, to allow
read-only tablespaces?
This would allow old partitions in very large databases to be kept on
read-only media, and would allow normal backups to ignore this
unchanging set of data.
I guess
On Mon, Nov 21, 2005 at 08:14:44PM +0100, Grzegorz Jaskiewicz wrote:
You mean you sometimes put the same elements in the two halves? You
shouldn't do that. The whole point is that the search will descend any
node that matches consistent, but any single key should only appear
once in each
On Thu, Nov 17, 2005 at 12:51:47AM -0500, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
http://pgbuildfarm.org/cgi-bin/show_log.pl?nm=cuckoo&dt=2005-11-15%2023:56:22
I took a closer look at this, and noticed something interesting:
ccache gcc -no-cpp-precomp -O2
On Nov 21, 2005, at 2:42 PM, Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
Ok, so how do I figure out which file(s) are associated with pg_class
so I can feed this thing?
See contrib/oid2name and/or read
http://www.postgresql.org/docs/8.1/static/storage.html
Ok, here's the
On Wed, Nov 16, 2005 at 10:19:21AM +0100, Marcus Engene wrote:
Martijn van Oosterhout wrote:
This sql cache I think is a really good thing. Is there a reason
Postgres hasn't got it? Would it be very hard to implement? From
a naive perspective; make a hashvalue from the sql-string to
quickly
On Thu, Nov 17, 2005 at 07:56:21PM +0100, Magnus Hagander wrote:
The way I read it, a delay should help. It's basically running out of
kernel buffers, and we just delay, somebody else (another process, or an
IRQ handler, or whatever) should get finished with their I/O, free up
the buffer, and
Bob Ippolito [EMAIL PROTECTED] writes:
Ok, here's the pg_filedump for the pg_class table in the mochi
database that is having the issue:
Thanks. I don't see any live tuples that look like they could have been
the one we want, but there's a whole lot of deleted rows, which
pg_filedump won't
+1, and I know Sybase had this in 11.0.3, which IIRC is over 10 years
old now.
BTW,
http://archives.postgresql.org/pgsql-performance/2004-08/msg00492.php is
one discussion about this from the past. I seem to recall that there was
an objection to true Index Organized Tables because it would be too
Magnus Hagander [EMAIL PROTECTED] wrote
The way I read it, a delay should help. It's basically running out of
kernel buffers, and we just delay, somebody else (another process, or an
IRQ handler, or whatever) should get finished with their I/O, free up
the buffer, and let us have it. Looking
On Nov 21, 2005, at 3:12 PM, Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
Ok, here's the pg_filedump for the pg_class table in the mochi
database that is having the issue:
Thanks. I don't see any live tuples that look like they could have
been
the one we want, but there's a
On Fri, Nov 18, 2005 at 12:08:03AM +, Simon Riggs wrote:
The trouble is, people moan and constantly. Perhaps we should stick to
our guns and say, why do you care? From here, I think we should say,
show me an application package that needs this so badly we'll change
PostgreSQL just for
On Fri, Nov 18, 2005 at 02:56:52PM -0500, Gregory Maxwell wrote:
However, some great ideas have been proposed here which would not only
help in that case but would otherwise be quite useful.
*Inclusion of a 'MVCC inflight' bit in indexes which would allow
skipping MVCC checks in clumps of an
Bob Ippolito [EMAIL PROTECTED] writes:
Sure, here it is:
http://undefined.org/mochi.pg_class-1.filedump.gz
Well, I count at least a couple hundred deleted versions of that table
row :-(. What the heck were you doing with it? As far as I can think
offhand, only a schema modification would
While replying to the Improving count(*) thread, the following occured
to me:
Vacuuming a table is only useful if we're nearing xid-wrap or if new
tuples are being created in the table. One way to detect the later case
is to monitor how many pages that table has in the FSM. Of course
there's
Simon Riggs [EMAIL PROTECTED] writes:
What I'd like to do is add an ERRORTABLE clause to COPY. The main
problem is how we detect a duplicate row violation, yet prevent it from
aborting the transaction.
If this only solves the problem of duplicate keys, and not any other
kind of COPY error,
On 11/21/05, Jim C. Nasby [EMAIL PROTECTED] wrote:
What about Greg Stark's idea of combining Simon's idea of storing
per-heap-block xmin/xmax with using that information in an index scan?
ISTM that's the best of everything that's been presented: it allows for
faster index scans without adding
On Nov 21, 2005, at 3:56 PM, Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
Sure, here it is:
http://undefined.org/mochi.pg_class-1.filedump.gz
Well, I count at least a couple hundred deleted versions of that table
row :-(. What the heck were you doing with it? As far as I can
Bob Ippolito [EMAIL PROTECTED] writes:
On Nov 21, 2005, at 3:56 PM, Tom Lane wrote:
Well, I count at least a couple hundred deleted versions of that table
row :-(. What the heck were you doing with it?
The ETL process keeps trying until it succeeds or someone stops it,
so I guess that's
On Mon, 21 Nov 2005, Martijn van Oosterhout wrote:
On Mon, Nov 21, 2005 at 08:14:44PM +0100, Grzegorz Jaskiewicz wrote:
You mean you sometimes put the same elements in the two halves? You
shouldn't do that. The whole point is that the search will descend any
node that matches consistent, but
Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
On Nov 21, 2005, at 3:56 PM, Tom Lane wrote:
Well, I count at least a couple hundred deleted versions of that table
row :-(. What the heck were you doing with it?
The ETL process keeps trying until it succeeds or someone stops it,
Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
What I'd like to do is add an ERRORTABLE clause to COPY. The main
problem is how we detect a duplicate row violation, yet prevent it from
aborting the transaction.
If this only solves the problem of duplicate keys, and not any
Nice analysis, but we can't hack configure like that. It has to be able
to be fully generated from its sources. I think the other source file
you would need to look at is config/programs.m4. (Not sure about quoting
$ac_popdir - why only that one?)
Also, I suspect we'd want to enable the
On Fri, Nov 18, 2005 at 09:58:24AM -0500, Tom Lane wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
It appears that predicate indexes won't uses indexes on int8 columns
unless they are casted:
This is a known (although perhaps not well documented) limitation of the
predicate testing
On Nov 21, 2005, at 4:33 PM, Alvaro Herrera wrote:
Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
On Nov 21, 2005, at 3:56 PM, Tom Lane wrote:
Well, I count at least a couple hundred deleted versions of that
table
row :-(. What the heck were you doing with it?
The ETL process
Andrew Dunstan [EMAIL PROTECTED] writes:
Also, I suspect we'd want to enable the libedit preference with a switch
rather than just force it, if we want to go this way.
Quite. My recollection is that there are other platforms on which
readline works and libedit is broken. (Readline used to
Jim C. Nasby [EMAIL PROTECTED] writes:
On Fri, Nov 18, 2005 at 09:58:24AM -0500, Tom Lane wrote:
This is a known (although perhaps not well documented) limitation of the
predicate testing logic. You do not need a cast in the query, though,
only in the index's WHERE condition.
I'm working on
Seems similar to the pgloader project on pgfoundry.org.
Chris
Simon Riggs wrote:
If you've ever loaded 100 million rows, you'll know just how annoying it
is to find that you have a duplicate row somewhere in there. Experience
shows that there is always one, whatever oath the analyst swears
Bob Ippolito [EMAIL PROTECTED] writes:
I don't touch pg_class at all... this is what I'm doing (over and
over again).
-- clone_table is almost always a no-op, but once a day it creates a
new table
SELECT clone_table('ping', 'ping_%s', '')
SELECT
In two of the sections covered by #ifdef WAL_DEBUG there are
declarations like:
char buf[8192];
It seems to me that these should be:
char buf[BLCKSZ];
- or have I misunderstood what is going on here?
I realize that it's probably not terribly significant, as most people
will
Mark Kirkwood [EMAIL PROTECTED] wrote
In two of the sections covered by #ifdef WAL_DEBUG there are declarations
like:
char buf[8192];
It seems to me that these should be:
char buf[BLCKSZ];
Those two 8192 have nothing to do with BLCKSZ, it is just an arbitrary
buffer size
On Nov 22, 2005, at 11:44, Qingqing Zhou wrote:
Mark Kirkwood [EMAIL PROTECTED] wrote
In two of the sections covered by #ifdef WAL_DEBUG there are
declarations
like:
char buf[8192];
Those two 8192 have nothing to do with BLCKSZ, it is just an arbitrary
buffer size as long as it
On Mon, 21 Nov 2005, Michael Glaesemann wrote:
Would it make sense to abstract that out so it's clear that it's
*not* related to BLCKSZ? Or maybe just a comment would be enough.
In the spirit of incremental improvement, I think renaming buf to str
would work.
Regards,
Qingqing
On Mon, Nov 21, 2005 at 04:46:40PM -0500, [EMAIL PROTECTED] wrote:
On Mon, Nov 21, 2005 at 08:51:32PM +, Simon Riggs wrote:
Are NULLs in Arrays compressed?
Just as NULLs are with normal unary datatypes.
I thought NULLs don't work in arrays yet? :-)
Jim C. Nasby wrote:
On Thu, Nov 17, 2005 at 10:15:30AM -0500, Stephen Frost wrote:
I don't think MERGE can really be made to be both though, in which case
it should really be the SQL2003 MERGE and we can make REPLACE/INSERT ON
DUPLICATE UPDATE something else. Perhaps a special form of MERGE
On Mon, Nov 21, 2005 at 08:12:10PM -0700, Michael Fuhr wrote:
On Mon, Nov 21, 2005 at 04:46:40PM -0500, [EMAIL PROTECTED] wrote:
On Mon, Nov 21, 2005 at 08:51:32PM +, Simon Riggs wrote:
Are NULLs in Arrays compressed?
Just as NULLs are with normal unary datatypes.
I thought NULLs
(integer,bigint)
(bigint,integer)
(smallint,integer)
(integer,smallint)
(real,double precision)
(double precision,real)
(smallint,bigint)
(bigint,smallint)
(date,timestamp without time zone)
(date,timestamp with time zone)
(timestamp without time zone,date)
(timestamp with time zone,date)
I thought NULLs don't work in arrays yet? :-)
http://archives.postgresql.org/pgsql-committers/2005-11/msg00385.php
http://developer.postgresql.org/docs/postgres/arrays.html
Someone's checked that this NULLs in arrays stuff doesn't affect indexes
over array elements, etc.? Or indexes that
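Per the committers message referenced above, arrays with NULL elements look like this (a sketch of the new behavior):

```sql
SELECT ARRAY[1, NULL, 3];
-- {1,NULL,3}
```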
Qingqing Zhou wrote:
Mark Kirkwood [EMAIL PROTECTED] wrote
In two of the sections covered by #ifdef WAL_DEBUG there are declarations
like:
char buf[8192];
Those two 8192 have nothing to do with BLCKSZ, it is just an arbitrary
buffer size as long as it is big enough to hold debug
On Nov 21, 2005, at 5:50 PM, Tom Lane wrote:
Bob Ippolito [EMAIL PROTECTED] writes:
I don't touch pg_class at all... this is what I'm doing (over and
over again).
-- clone_table is almost always a no-op, but once a day it creates a
new table
SELECT clone_table('ping',
And yes, MERGE CAN be used to do REPLACE (Oracle uses their dummy table
for this; we can use the fact that the FROM clause isn't required in Postgres).
The FROM clause is required by default (starting with 8.1) unless you
change a postgresql.conf parameter.
And I don't think that idea will have