Tom Lane said:
Bruce Momjian pgman@candle.pha.pa.us writes:
Christopher Kings-Lynne wrote:
Is there any way to force COPY to accept that there will be lines of
different length in a data file?
I suppose we could have a TRAILINGNULL flag to COPY but because few
ask for this feature, it
On Dec 9, 2005, at 18:10 , David Fetter wrote:
Please find enclosed a patch that lets you use \c to connect
(optionally) to a new host and port without exiting psql.
I'm not familiar enough with the psql code to be able to tell, but is
this secure? The pg_hba.conf on the new server is
Until recently, pg_relation_size used SearchSysCache to locate the
relation to examine, and calculated the file location from that
information. Starting with dbsize.c V1.5 (committed after Beta2),
relation_open(.., AccessShareLock) is used. This is very unfortunate
because it will not allow to
Michael Glaesemann wrote:
I'm not familiar enough with the psql code to be able to tell, but is
this secure? The pg_hba.conf on the new server is enforced, I assume?
You don't need to be familiar with the psql code to know that it would
be pretty stupid if client programs could override the
On Dec 12, 2005, at 20:33 , Peter Eisentraut wrote:
Michael Glaesemann wrote:
I'm not familiar enough with the psql code to be able to tell, but is
this secure? The pg_hba.conf on the new server is enforced, I assume?
You don't need to be familiar with the psql code to know that it would
be
Michael Glaesemann said:
On Dec 12, 2005, at 20:33 , Peter Eisentraut wrote:
Michael Glaesemann wrote:
I'm not familiar enough with the psql code to be able to tell, but is
this secure? The pg_hba.conf on the new server is enforced, I assume?
You don't need to be familiar with the psql
On Dec 12, 2005, at 21:32 , Andrew Dunstan wrote:
The patch is to the client only, not even to libpq, so of course no auth bypass is involved.
Cool. Thanks for the explanation, Andrew.
Michael Glaesemann
grzm myrealbox com
Default values of various parameters in PostgreSQL:
#random_page_cost = 4			# units are one sequential page fetch cost
#cpu_tuple_cost = 0.01			# (same)
#cpu_index_tuple_cost = 0.001		# (same)
#cpu_operator_cost = 0.0025		# (same)
Dear friends,
I thank you for all the replies.
If you permit, I'd like to present my modest view of the problem.
I agree with Tom when he says:
... if the user wishes the password to be secure, he
needs to encrypt it on the client side. Anything else is
just the illusion of security.
and with
On 12/9/05, Peter Eisentraut [EMAIL PROTECTED] wrote:
Tom Lane wrote:
As I said already, if the user wishes the password to be secure, he
needs to encrypt it on the client side.
Maybe we should provide a backslash command in psql for secure password
entry, say, \password [username]. This
The discussion was a bit inconclusive last time, so I would like to
renew my proposal for more generous default resource limits.
Nearly everyone seems to agree that the default for max_fsm_pages is
woefully low, so I would like to have the default for this set
unconditionally to 200,000
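As a back-of-the-envelope check of that figure, assuming the roughly 6 bytes of shared memory per FSM page slot that the 8.x documentation cites (the per-slot constant is an assumption here):

```python
# Rough cost of raising max_fsm_pages, assuming ~6 bytes of shared
# memory per free-space-map page slot (figure taken from the 8.x docs).
BYTES_PER_FSM_PAGE = 6

def fsm_shared_memory_mb(max_fsm_pages: int) -> float:
    """Approximate shared memory used by the FSM, in mebibytes."""
    return max_fsm_pages * BYTES_PER_FSM_PAGE / (1024 * 1024)

old = fsm_shared_memory_mb(20_000)    # current default: ~0.11 MB
new = fsm_shared_memory_mb(200_000)   # proposed default: ~1.14 MB
print(f"default: {old:.2f} MB, proposed: {new:.2f} MB")
```

which agrees with the "just over 1Mb" cost quoted below.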
On Sat, Dec 10, 2005 at 8:01 pm, in message
[EMAIL PROTECTED], Bruce Momjian
pgman@candle.pha.pa.us wrote:
Kevin Grittner wrote:
Since the non-standard behavior is in the lexer, I couldn't see any reasonable way to base it on a runtime switch. I'm curious what is intended here. Can
On Mon, Dec 12, 2005 at 09:20:57PM +0900, Michael Glaesemann wrote:
On Dec 12, 2005, at 21:32 , Andrew Dunstan wrote:
The patch is to the client only, not even to libpq, so of course no
auth bypass is involved.
Cool. Thanks for the explanation, Andrew.
Is the patch suitable for a re-send
Andrew Dunstan [EMAIL PROTECTED] writes:
The COPY code is probably on the edge of maintainability now.
Our CSV routines accept a wide variety of import formats, but a fixed
number of columns is required. Maybe we need a pgfoundry project with some
general perl CSV munging utilities - this
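A standalone munging utility of the kind floated here could, for instance, pad ragged rows out to a fixed column count before COPY ever sees the file. A minimal sketch (Python rather than the Perl suggested above; the column count and empty-string null marker are assumptions):

```python
import csv
import io

def pad_rows(src, n_cols, null=""):
    """Yield CSV rows padded (or truncated) to exactly n_cols columns,
    so a ragged file can be fed to a fixed-column COPY."""
    for row in csv.reader(src):
        yield (row + [null] * n_cols)[:n_cols]

raw = io.StringIO("a,b,c\n1,2\n3,4,5,6\n")
print(list(pad_rows(raw, 3)))
# [['a', 'b', 'c'], ['1', '2', ''], ['3', '4', '5']]
```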
Andrew Dunstan [EMAIL PROTECTED] writes:
Nearly everyone seems to agree that the default for max_fsm_pages is
woefully low, so I would like to have the default for this set
unconditionally to 200,000 rather than 20,000. The cost would be just
over 1Mb of shared memory, if the docs are
Kevin Grittner wrote:
Between those and the release notes, I don't know what additional
information you want. In the future you will set
standard_conforming_strings to on and backslashes will be treated
literally.
Perhaps my language was ambiguous. I'm not curious about the intended
Tom Lane wrote:
What's been suggested in the past is some sort of standalone file-format-conversion utility, which could deal with this sort of stuff without having to also deal with all the backend-internal considerations that COPY must handle. So (at least in theory) it'd be simpler and
Michael Glaesemann [EMAIL PROTECTED] writes:
On Dec 9, 2005, at 18:10 , David Fetter wrote:
Please find enclosed a patch that lets you use \c to connect
(optionally) to a new host and port without exiting psql.
I'm not familiar enough with the psql code to be able to tell, but is
this
Andreas Pflug [EMAIL PROTECTED] writes:
Until recently, pg_relation_size used SearchSysCache to locate the
relation to examine, and calculated the file location from that
information. Starting with dbsize.c V1.5 (committed after Beta2),
relation_open(.., AccessShareLock) is used. This is
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Until recently, pg_relation_size used SearchSysCache to locate the
relation to examine, and calculated the file location from that
information. Starting with dbsize.c V1.5 (committed after Beta2),
relation_open(.., AccessShareLock) is
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
Nonsense.
Ahem.
I'm running Slony against a big replication set. While slon runs COPY
foo(colnamelist) FROM STDIN, I can't execute pg_relation_size(foo_oid).
pg_locks will show that the AccessShareLock on foo is not granted.
That's
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
Nonsense.
Ahem.
I'm running Slony against a big replication set. While slon runs COPY
foo(colnamelist) FROM STDIN, I can't execute pg_relation_size(foo_oid).
pg_locks will show that the AccessShareLock on foo is
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
The COPY code is probably on the edge of maintainability now.
Our CSV routines accept a wide variety of import formats, but a fixed
number of columns is required. Maybe we need a pgfoundry project with some
general perl CSV munging
On Mon, Dec 12, 2005 at 10:19:00AM -0500, Tom Lane wrote:
Michael Glaesemann [EMAIL PROTECTED] writes:
On Dec 9, 2005, at 18:10 , David Fetter wrote:
Please find enclosed a patch that lets you use \c to connect
(optionally) to a new host and port without exiting psql.
I'm not familiar
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
That's only possible if Slony is taking AccessExclusive lock; if so,
your gripe is properly directed to the Slony folks, not to
pg_relation_size which is acting as a good database citizen should.
More precisely, it executes
Tom Lane wrote:
In the above example, the contrib code would have not done the right
thing at all --- if I'm not mistaken, it would have kept handing back
the size of the original, pre-TRUNCATE file, since the new pg_class
row with the new relfilenode isn't committed yet. So it wouldn't have
Andrew Dunstan [EMAIL PROTECTED] writes:
Tom Lane wrote:
What's been suggested in the past is some sort of standalone
file-format-conversion utility,
Anyway, that's history now. Where would you want this file conversion
utility? bin? contrib? pgfoundry?
I'd say pgfoundry for starters ---
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Nearly everyone seems to agree that the default for max_fsm_pages is
woefully low, so I would like to have the default for this set
unconditionally to 200,000 rather than 20,000. The cost would be just
over 1Mb of shared memory,
Alvaro Herrera wrote:
The problem with the original coding was that it used the table Oid to
look up the file name, which is wrong. (Test it with a table that has
been clustered or an index that has been reindexed.)
Um, can't test at the moment. The old code used pg_class->relfilenode, which
Seems we don't link against the port/qsort.c - is there any reason for
that? My tests indicate our qsort is much faster than the libc's.
Regards,
Qingqing
Qingqing Zhou wrote:
Seems we don't link against the port/qsort.c - is there any reason for
that? My tests indicate our qsort is much faster than the libc's.
We haven't been able to determine if the OS's qsort or pgport's is
faster. Right now we only force pgport qsort on Solaris (from
Bruce Momjian pgman@candle.pha.pa.us wrote
Are you willing to say that we should always prefer pgport over glibc's
qsort()?
At least for Linux and Windows. My test is performed on datasets ranging from 10 to 1500 elements. Each element contains a 64-byte garbage character area and an
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
That's only possible if Slony is taking AccessExclusive lock; if so,
your gripe is properly directed to the Slony folks, not to
pg_relation_size which is acting as a good database citizen should.
More precisely, it
On Mon, 2005-12-12 at 11:50 -0500, Bruce Momjian wrote:
Are you willing to say that we should always prefer pgport over glibc's
qsort()?
glibc's qsort is actually implemented via merge sort. I'm not sure why
the glibc folks chose to do that, but as a result, it's not surprising
that BSD qsort
On Mon, 12 Dec 2005, Neil Conway wrote:
Whether we should go to the trouble of second-guessing glibc is a
separate question, though: it would be good to see some performance
figures for real-world queries.
For qsort, due to its simple usage, I think a simulation test should be enough. But we
I prepared a patch for the "Have COPY return the number of rows loaded/unloaded?" TODO item. (Sorry for disturbing the list with such a simple topic, but per a warning from Bruce Momjian, I'm sending my proposal to -hackers first.)
I used the approach of appending the related information to the commandTag, which is used for
Hello,
I did some tests and I can see that DISTINCT works well on indexed columns, but is slow on derived tables without indexes. If I use GROUP BY instead of DISTINCT, I get much better times:
SELECT DISTINCT a, b FROM tab
SELECT a, b FROM tab GROUP BY a, b
Can you explain it?
Thank you,
Pavel
On 12/9/2005 8:27 PM, Stephan Szabo wrote:
On Fri, 9 Dec 2005, Jan Wieck wrote:
On 12/8/2005 8:53 PM, Tom Lane wrote:
Stephan Szabo [EMAIL PROTECTED] writes:
Yeah. I really don't understand it, but it appears to me to be explicitly
different in the spec for on delete cascade even compared
Andreas Pflug wrote:
Until recently, pg_relation_size used SearchSysCache to locate the
relation to examine, and calculated the file location from that
information. Starting with dbsize.c V1.5 (committed after Beta2),
relation_open(.., AccessShareLock) is used. This is very unfortunate
Hannu Krosing wrote:
On one fine day, Sat, 2005-12-10 at 21:07, Tom Lane wrote:
In any case the design idea here
seems to be we don't care how long REINDEX takes as long as it's not
blocking anyone.
Yes, that's the general idea.
Within reason of course, for example making a
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
You have to have a lock to ensure that the table even exists, let alone
that you are looking at the right set of disk files.
This would require a lock on pg_class, not table foo, no?
No, the convention is that you take a lock on the
On 12/9/2005 8:27 PM, Stephan Szabo wrote:
On Fri, 9 Dec 2005, Jan Wieck wrote:
On 12/8/2005 8:53 PM, Tom Lane wrote:
Stephan Szabo [EMAIL PROTECTED] writes:
Yeah. I really don't understand it, but it appears to me to be
explicitly
different in the spec for on delete
On Mon, Dec 12, 2005 at 10:15:03AM -0500, Pollard, Mike wrote:
Tom Lane wrote:
What's been suggested in the past is some sort of standalone
file-format-conversion utility, which could deal with this sort of
stuff without having to also deal with all the backend-internal
considerations
[Resend: apparently there's a problem with my mail server]
Andreas Pflug wrote:
Until recently, pg_relation_size used SearchSysCache to locate the
relation to examine, and calculated the file location from that
information. Starting with dbsize.c V1.5 (committed after Beta2),
I too have to deal with this issue daily since I deal with Windows daily. It sounds like the source of the CSV files was originally Excel files. I have never been able to figure out how Excel decides to quit putting null values in empty columns after X rows and then resume again later on.
If
I am running XP Pro with SP2.
Postgres 8.1 installed with MSI but later on I did upgrade the ODBC driver.
Shortly after selecting upgrade.bat I received an error: "This error may indicate a problem with the installation package. Number 2803".
After clicking OK, the installer continued anyway and
On Saturday, December 10, 2005 at 17:43, you wrote:
Guillaume LELARGE [EMAIL PROTECTED] writes:
Apparently, I can rename all schemas, even system schemas !
metier=# alter schema pg_catalog rename to foobar;
ALTER SCHEMA
If you are superuser, you can do anything you want, up to and
On Sat, Dec 10, 2005 at 09:18:32AM -0800, Joshua D. Drake wrote:
However there is an effort to get rid of root in some Unix lands,
separating its responsibilities with more granularity. Maybe there
could be an effort, not to hand-hold the true superusers, but to
delegate some of its
Tom Lane wrote:
Where would you want this file conversion
utility? bin? contrib? pgfoundry?
I'd say pgfoundry for starters --- there's no reason to tie it down
to server release cycles. Maybe when the thing is fairly mature and
doesn't need frequent releases, we could think about
On Mon, 12 Dec 2005 14:05:03 -0600
Jim C. Nasby [EMAIL PROTECTED] wrote:
I think it would be a huge benefit to have something equivalent to
sudo for psql (though maybe it could be generalized more). Having to
change to a different connection/authorization anytime you need to do
something
On Monday, 2005-12-12 at 15:08 -0500, Andrew Dunstan wrote:
Tom Lane wrote:
...
You are probably right. The biggest wrinkle will be dealing with various
encodings, I suspect. That at least is one thing that doing CSV within
the backend bought us fairly painlessly. Perl's Text::CSV_XS
On Mon, Dec 12, 2005 at 06:39:42PM +0530, Anjan Kumar. A. wrote:
Through googling, I found that a normal disk has an external data transfer rate of around 40MBps, whereas main memory has a data transfer rate ranging from 1.6GBps to 2.8GBps.
I think 40MB/s is a burst speed. You should do some
On Mon, Dec 12, 2005 at 09:30:12PM +0100, Tino Wildenhain wrote:
On Monday, 2005-12-12 at 15:08 -0500, Andrew Dunstan wrote:
You are probably right. The biggest wrinkle will be dealing with various
encodings, I suspect. That at least is one thing that doing CSV within
the backend
Hi all,
while playing with Npgsql I faced a strange behavior of the PostgreSQL server.
I have all the details of it and I thought it could be a severe security exploit, so I'm not sending it in the clear to this mailing list directly as, I think, anybody
It seems that gcc is up to some creative reinterpretation of basic C
semantics again; specifically, you can no longer trust that traditional
C semantics of integer overflow hold:
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=175462
While I don't think we are anywhere using exactly the same
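The defensive pattern this implies is to range-check the operands before the arithmetic, rather than computing the sum and testing the wrapped result afterward (the post-hoc check is exactly what the compiler may now delete as unreachable). A sketch modelling 32-bit signed ints, in Python for brevity:

```python
# Pre-check whether a 32-bit signed addition would overflow, instead of
# computing a + b and inspecting the wrapped result (undefined in C).
INT32_MAX = 2**31 - 1
INT32_MIN = -2**31

def add_would_overflow(a: int, b: int) -> bool:
    """True if a + b falls outside the 32-bit signed range."""
    if b > 0:
        return a > INT32_MAX - b
    return a < INT32_MIN - b

print(add_would_overflow(INT32_MAX, 1))   # True
print(add_would_overflow(INT32_MAX, 0))   # False
```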
I have a few stored procedures I wrote in C in PostgreSQL 8.1; they mainly do SPI calls such as SELECT field1, field2 FROM table1 and INSERT INTO table2 VALUES (val1, val2), and to minimize the execution time I do something like this:
static void *splan_insertstuff = NULL;
On Mon, Dec 12, 2005 at 18:35:07 +0100,
Pavel Stehule [EMAIL PROTECTED] wrote:
Hello
I did some tests and I can see that DISTINCT works well on indexed columns, but is slow on derived tables without indexes. If I use GROUP BY instead of DISTINCT, I get much better times.
SELECT DISTINCT a, b
due to a piece of monumental carelessness by me, buildfarm web app is
off the air. I will advise when I have managed to recover. It's a good
thing we are not coming up to release :-)
cheers
andrew
Jim C. Nasby [EMAIL PROTECTED] writes:
I'd love to see something like SUDO ALTER USER ... SUDO REINDEX ... etc.
That would make it easy to do 'normal' work with a non-superuser
account.
You can already do most of this with SET/RESET ROLE:
regression=# create user tgl;
CREATE ROLE
regression=#
On Mon, Dec 12, 2005 at 06:26:25PM -0200, Francisco Figueiredo Jr. wrote:
Hi all,
while playing with Npgsql I faced a strange behavior of the PostgreSQL server.
I have all the details of it and I thought it could be a severe security exploit, so I'm not sending it in the clear to this
On Mon, Dec 12, 2005 at 05:00:45PM -0500, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
I'd love to see something like SUDO ALTER USER ... SUDO REINDEX ... etc.
That would make it easy to do 'normal' work with a non-superuser
account.
You can already do most of this with
Jim C. Nasby wrote:
On Mon, Dec 12, 2005 at 05:00:45PM -0500, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
I'd love to see something like SUDO ALTER USER ... SUDO REINDEX ... etc.
That would make it easy to do 'normal' work with a non-superuser
account.
You can
On Mon, Dec 12, 2005 at 05:27:33PM -0500, Andrew Dunstan wrote:
On Mon, Dec 12, 2005 at 05:00:45PM -0500, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
I'd love to see something like SUDO ALTER USER ... SUDO REINDEX ... etc.
That would make it easy to do 'normal' work with
On Mon, 2005-12-12 at 16:35 -0600, Jim C. Nasby wrote:
On Mon, Dec 12, 2005 at 05:27:33PM -0500, Andrew Dunstan wrote:
On Mon, Dec 12, 2005 at 05:00:45PM -0500, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
I'd love to see something like SUDO ALTER USER ... SUDO
Neil Conway [EMAIL PROTECTED] writes:
BTW, Luke Lonergan recently posted some performance results for a fairly
efficient public domain implementation of qsort to the bizgres list:
http://lists.pgfoundry.org/pipermail/bizgres-general/2005-December/000294.html
As those results suggest, there can
On Mon, Dec 12, 2005 at 05:43:47PM -0500, Rod Taylor wrote:
I'd love to see something like SUDO ALTER USER ... SUDO REINDEX ...
etc.
That would make it easy to do 'normal' work with a non-superuser
account.
A sudo equivalent would be a version of psql that always connected to
the
Bruno Wolff III [EMAIL PROTECTED] writes:
DISTINCT will require a sort step to remove duplicates. GROUP BY can use
either a sort or hash aggregate plan. If there are few distinct values,
the hash aggregate plan can be much faster.
The DISTINCT code hasn't been revisited in a long time. One
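Bruno's point — DISTINCT deduplicates via a sort, while GROUP BY may use a hash aggregate — can be mimicked in miniature (a conceptual sketch, not PostgreSQL's actual executor code):

```python
def distinct_via_sort(rows):
    """Sort, then skip adjacent duplicates: the sort-based DISTINCT plan."""
    out, prev = [], object()
    for r in sorted(rows):
        if r != prev:
            out.append(r)
            prev = r
    return out

def distinct_via_hash(rows):
    """Hash-aggregate strategy available to GROUP BY: one pass, no sort."""
    seen, out = set(), []
    for r in rows:
        if r not in seen:
            seen.add(r)
            out.append(r)
    return out

rows = [(1, 'a'), (2, 'b'), (1, 'a'), (2, 'b'), (3, 'c')]
print(distinct_via_sort(rows))
print(distinct_via_hash(rows))
```

With few distinct values the hash pass is O(n) while the sort-based plan is O(n log n), which matches the timings Pavel reports.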
Tom Lane wrote:
Andreas Pflug [EMAIL PROTECTED] writes:
Tom Lane wrote:
You have to have a lock to ensure that the table even exists, let alone
that you are looking at the right set of disk files.
This would require a lock on pg_class, not table foo, no?
No, the convention is that you
Andrew Dunstan wrote:
due to a piece of monumental carelessness by me, buildfarm web app is
off the air. I will advise when I have managed to recover. It's a good
thing we are not coming up to release :-)
We are back again, thanks to Joshua Drake and CommandPrompt's excellent
On Mon, 2005-12-12 at 18:00 -0500, Andrew Dunstan wrote:
Andrew Dunstan wrote:
due to a piece of monumental carelessness by me, buildfarm web app is
off the air. I will advise when I have managed to recover. It's a good
thing we are not coming up to release :-)
We are back
Guillaume LELARGE [EMAIL PROTECTED] writes:
I've made more tests on schemas. I'm able to drop information_schema and
public schemas but I can't drop pg_catalog and pg_toast. It makes me think
that only pg_* are system schemas and that public and information_schema are
public schemas. Am I
Platypus (http://lnk.nu/pgbuildfarm.org/6yt.pl) started failing about 12
days ago with the following:
ccache gcc -O3 -pipe -pipe -fno-strict-aliasing -g -Wall -Wmissing-prototypes
-Wmissing-declarations -fpic -DPIC -I. -I../../../src/include
-I/usr/local/include -DFRONTEND
Jim C. Nasby [EMAIL PROTECTED] writes:
On Mon, Dec 12, 2005 at 05:43:47PM -0500, Rod Taylor wrote:
A sudo equivalent would be a version of psql that always connected to
the database using super-user and allowed command execution based on a
regular expression. Bit of a hack to say the least.
On Tuesday, December 13, 2005 at 00:13, Tom Lane wrote:
Guillaume LELARGE [EMAIL PROTECTED] writes:
I've made more tests on schemas. I'm able to drop information_schema and
public schemas but I can't drop pg_catalog and pg_toast. It makes me
think that only pg_* are system schemas and that
On Mon, Dec 12, 2005 at 06:37:03PM -0500, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
On Mon, Dec 12, 2005 at 05:43:47PM -0500, Rod Taylor wrote:
A sudo equivalent would be a version of psql that always connected to
the database using super-user and allowed command execution
Jim C. Nasby [EMAIL PROTECTED] writes:
Platypus (http://lnk.nu/pgbuildfarm.org/6yt.pl) started failing about 12
days ago with the following:
fe-auth.c: In function `pg_local_sendauth':
fe-auth.c:466: error: conflicting types for 'cmsgmem'
fe-auth.c:459: error: previous declaration of
On Mon, Dec 12, 2005 at 06:44:23PM -0500, Tom Lane wrote:
The PG 7.3 branch is definitely showing its age. I'm not sure how
interesting it is to keep updating it for newer platforms; is anyone
very likely to run 7.3 on a new machine, rather than some later PG?
Probably no one... I'll just
Tom,
On 12/12/05 2:47 PM, Tom Lane [EMAIL PROTECTED] wrote:
As those results suggest, there can be huge differences in sort
performance depending on whether the input is random, nearly sorted,
nearly reverse sorted, possesses many equal keys, etc. It would be very
dangerous to say
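The input shapes Tom lists are easy to generate for such a benchmark; a sketch of the data-generation side only (the sizes, the seed, and the four-distinct-keys choice are arbitrary, and fully sorted/reversed data stands in for "nearly" sorted):

```python
import random

def make_dataset(kind, n, seed=0):
    """Generate input shapes that stress a sort implementation:
    random, sorted, reverse sorted, and many-equal-keys."""
    rng = random.Random(seed)
    data = [rng.randrange(n) for _ in range(n)]
    if kind == "sorted":
        data.sort()
    elif kind == "reversed":
        data.sort(reverse=True)
    elif kind == "few_distinct":
        data = [rng.randrange(4) for _ in range(n)]  # many equal keys
    return data

for kind in ("random", "sorted", "reversed", "few_distinct"):
    d = make_dataset(kind, 10_000)
    print(kind, len(d))
```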
On Mon, 2005-12-12 at 16:19 -0500, Tom Lane wrote:
It seems that gcc is up to some creative reinterpretation of basic C
semantics again; specifically, you can no longer trust that traditional
C semantics of integer overflow hold:
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=175462
Tom Lane said:
Jim C. Nasby [EMAIL PROTECTED] writes:
Platypus (http://lnk.nu/pgbuildfarm.org/6yt.pl) started failing about
12 days ago with the following:
fe-auth.c: In function `pg_local_sendauth':
fe-auth.c:466: error: conflicting types for 'cmsgmem'
fe-auth.c:459: error: previous
On Mon, 12 Dec 2005, Luke Lonergan wrote:
Do you have a test suite you can recommend with those edge cases?
I have at least these factors in mind:
+ f1: number of elements
- in {10^3, 10^4, 10^5, 10^6, 10^7}
+ f2: key comparators measured by cpu cost
- in {1, 10, 100+};
+ f3: data
Neil Conway [EMAIL PROTECTED] writes:
IMHO code that makes assumptions about overflow behavior beyond what is
defined by the standard is asking for trouble, whether those assumptions
are traditional C semantics or not. Given that -fwrapv apparently
hurts performance *and* you've presented no
Hi,
This might be a newbie question but I'm wondering how most of you are
debugging PostgreSQL on Windows (XP) ? Are you all using gdb? I want to
develop a new data type and I'd like to trace code in a graphical interface.
I have been trying very hard with Eclipse without success. Insight
Andrew Dunstan [EMAIL PROTECTED] writes:
I don't care that much if 7.3 fails to build on fbsd 6. The flipside is that
the fix for this particular problem appears to be very simple and very low
risk, unless I'm going blind.
Possibly --- if you've gone to the trouble of identifying the relevant
A vaguely interesting interview with IBM and MS guys about cost-based
optimizers.
http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=297
Chris
Andreas Pflug wrote:
Alvaro Herrera wrote:
The problem with the original coding was that it used the table Oid to
look up the file name, which is wrong. (Test it with a table that has
been clustered or an index that has been reindexed.)
Um, can't test at the moment. The old code used
Anyway, that's history now. Where would you want this file conversion
utility? bin? contrib? pgfoundry?
How about a full SQL*Loader clone? :D
Tom Lane said:
Andrew Dunstan [EMAIL PROTECTED] writes:
I don't care that much if 7.3 fails to build on fbsd 6. The flipside
is that the fix for this particular problem appears to be very simple
and very low risk, unless I'm going blind.
Possibly --- if you've gone to the trouble of
On Mon, Dec 12, 2005 at 12:34:45PM -0600, Bruno Wolff III wrote:
You are better off doing a multiply. Something like:
area=> select '2005-12-01'::date + 456.5 * '1 second'::interval;
        ?column?
------------------------
 2005-12-01 00:07:36.50
(1 row)
This comes up often enough that
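The same arithmetic checked in plain datetime terms (456.5 seconds is 7 minutes 36.5 seconds):

```python
from datetime import datetime, timedelta

# Mirror of: select '2005-12-01'::date + 456.5 * '1 second'::interval;
start = datetime(2005, 12, 1)
result = start + 456.5 * timedelta(seconds=1)
print(result)  # 2005-12-01 00:07:36.500000
```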
Andrew Dunstan [EMAIL PROTECTED] writes:
I think I'd just delete lines 464-470 in
http://developer.postgresql.org/cvsweb.cgi/pgsql/src/interfaces/libpq/fe-auth.c?annotate=1.71
I confirmed that this is the patch appearing in 7.4 (fe-auth.c rev
1.84), so went ahead and applied it to 7.3.
Jim C. Nasby [EMAIL PROTECTED] writes:
On Mon, Dec 12, 2005 at 12:34:45PM -0600, Bruno Wolff III wrote:
You are better off doing a multiply. Something like:
area=> select '2005-12-01'::date + 456.5 * '1 second'::interval;
        ?column?
------------------------
 2005-12-01 00:07:36.50
(1 row)
http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=297
I saw it in print; the only thing that seemed interesting about it was
the recommendation that query optimization be biased towards the
notion of stable plans, query plans that may not be the most
aggressively fast, but which
Qingqing,
On 12/12/05 5:08 PM, Qingqing Zhou [EMAIL PROTECTED] wrote:
This will give us 5*3*4*4 = 240 tests ...
Looks good - I'm not going to be able to implement this matrix of tests
quickly, but each dimension seems right.
Might you have time to implement these within the testing
I saw it in print; the only thing that seemed interesting about it was
the recommendation that query optimization be biased towards the
notion of stable plans, query plans that may not be the most
aggressively fast, but which don't fall apart into hideous
performance if the estimates are a little
Alvaro Herrera wrote:
Andreas Pflug wrote:
Alvaro Herrera wrote:
The problem with the original coding was that it used the table Oid to
look up the file name, which is wrong. (Test it with a table that has
been clustered or an index that has been reindexed.)
Um, can't test at the moment.
Chris,
On 12/12/05 8:44 PM, Christopher Kings-Lynne [EMAIL PROTECTED]
wrote:
assumption of five. Thus, being able to correct mid-course is an area of
enhancement for query optimizers that IBM is pursuing.
Hmmm dynamic re-planning!
I recently interviewed someone who is in the research group
Anjan,
But, in PostgreSQL all costs are scaled relative to a page fetch. If we set both the sequential page fetch cost and random_page_cost to 1, then we need to increase the various cpu_* parameters by multiplying the default values by an appropriate scaling factor. Now, we need to determine
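The rescaling described amounts to multiplying each cpu_* default by a single factor, so their weight relative to a unit page fetch is preserved. A sketch (the factor of 4.0 is purely illustrative — it is exactly the quantity still to be determined):

```python
# Hypothetical rescaling: if sequential and random page fetches are both
# assigned cost 1.0 (e.g., a fully memory-resident database), the cpu_*
# parameters keep their relative weight by scaling them up uniformly.
defaults = {
    "cpu_tuple_cost": 0.01,
    "cpu_index_tuple_cost": 0.001,
    "cpu_operator_cost": 0.0025,
}

def rescale(params, scale):
    """Multiply every cost parameter by one common scaling factor."""
    return {k: v * scale for k, v in params.items()}

scaled = rescale(defaults, 4.0)  # 4.0 is an assumed, illustrative factor
print(scaled)
```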
Tom,
IIRC, the reason we reject
Solaris' qsort is not that it is so bad in the typical case, but that it
has some horrible corner-case behaviors.
Sun claims to have fixed these. Hopefully they'll do some testing which will
prove it.
--
Josh Berkus
Aglio Database Solutions
San Francisco
On Mon, Dec 12, 2005 at 10:39:47PM -0500, Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
I think I'd just delete lines 464-470 in
http://developer.postgresql.org/cvsweb.cgi/pgsql/src/interfaces/libpq/fe-auth.c?annotate=1.71
I confirmed that this is the patch appearing in 7.4
Jim C. Nasby [EMAIL PROTECTED] writes:
The error talks about SEMMNI and SEMMNS, but both look ok...
kern.ipc.semmns: 100
That's not enough to run two postmasters ...
regards, tom lane