Hi All,
I was running the run_Build.pl script that is specific
to the buildfarm and encountered errors. I am listing
the names of the log files and the errors that I have
seen.
Can anyone give me some clarity on these errors?
Even though these errors exist, at the end the
latest version is
Does anyone have a solution for pljava.dll not loading? I see the same issue
listed for 8.3-rc1, supposedly fixed in 8.3-rc3. But I need something that
works for 8.2.
C:\Program Files\PostgreSQL\8.2\bin> %JAVA_HOME%/bin/java -cp
../../../../pljav
On Mon, Jan 21, 2008 at 04:13:32PM -0500, Tom Lane wrote:
Sam Mason [EMAIL PROTECTED] writes:
It's really up to you to find answers to these questions, especially
the first one. Once you've designed an efficient algorithm then the
second point (which I'm interpreting as how you'd go about
On Monday, 21 January 2008, Tom Lane wrote:
%_topdir /home/tgl/rpmwork
then create the directory ~/rpmwork and under it the five directories
BUILD/ RPMS/ SOURCES/ SPECS/ SRPMS/
Some distributions set up their standard build areas under /usr/src to be
world-writable (sticky bit
I suggest making a distinction between DDL and DML locks. A DDL lock would be
required for TRUNCATE, CREATE, ALTER, DROP, REPLACE, etc., while a DML lock
would cover just INSERT, UPDATE, and DELETE.
A TRUNCATE (or any DDL activity) should wait until all DML activity is
committed before it can acquire an exclusive lock.
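A minimal sketch of that behavior using PostgreSQL's existing lock modes
(the table name t is hypothetical): plain DML takes ROW EXCLUSIVE, while
TRUNCATE needs ACCESS EXCLUSIVE, so it already waits for uncommitted DML:

    -- session 1: ordinary DML, takes a ROW EXCLUSIVE lock on t
    BEGIN;
    INSERT INTO t VALUES (1);
    -- (transaction left open)

    -- session 2: TRUNCATE needs ACCESS EXCLUSIVE, so it blocks here
    -- until session 1 commits or rolls back
    TRUNCATE t;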
cinu wrote:
Hi All,
I was running the run_Build.pl script that is specific
to the buildfarm and encountered errors. I am listing
the names of the log files and the errors that I have
seen.
Can anyone give me some clarity on these errors?
Even though these errors exist, at the end the
Tom Lane wrote:
What I propose we do about this is put the same check into TRUNCATE,
CLUSTER, and REINDEX that is already in ALTER TABLE, namely that we
reject the command if the current transaction is already holding
the table open.
+1.
The issue Steven directly complained of is a
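For context, a minimal sketch of the situation such a check would reject
(the table name t is hypothetical): the transaction already holds t open
through a cursor when the TRUNCATE is attempted:

    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM t;
    FETCH 1 FROM c;   -- the transaction is now holding t open
    TRUNCATE t;       -- with the proposed check, this errors out
                      -- instead of misbehaving
    COMMIT;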
I really needed this functionality in PostgreSQL. A common use for
autonomous transactions is error logging. I want to log sqlerrm in a
function and then raise an exception, so that the calling application knows
there is an error while I still have it logged to a table.
I figured out a way to hack an
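The dblink workaround discussed below looks roughly like this (the function
and table names are hypothetical; it requires the dblink contrib module):
the INSERT goes over a second connection, so it commits independently of the
outer transaction that the subsequent RAISE rolls back:

    CREATE OR REPLACE FUNCTION log_error(msg text) RETURNS void AS $$
    BEGIN
        PERFORM dblink_exec('dbname=' || current_database(),
                            'INSERT INTO error_log(msg) VALUES ('
                            || quote_literal(msg) || ')');
    END;
    $$ LANGUAGE plpgsql;

    -- in the failing function:
    --   EXCEPTION WHEN others THEN
    --       PERFORM log_error(sqlerrm);
    --       RAISE;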
On Wednesday 16 January 2008 08:32, Andrew Dunstan wrote:
(Slightly OT: chkpass uses crypt(). Maybe that should be upgraded to
use MD5 or some more modern hashing function.)
Some versions of crypt() will generate MD5 hashes if you start the salt with
$1$salt$. I know this to work on
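On the PostgreSQL side, the pgcrypto contrib module exposes the same
convention; a sketch (pgcrypto stands in for chkpass here, and the
accounts/stored_hash names are hypothetical):

    SELECT crypt('secret', gen_salt('md5'));   -- yields a '$1$...' MD5-crypt hash
    -- verification: feed the stored hash back in as the salt
    SELECT crypt('secret', stored_hash) = stored_hash AS ok FROM accounts;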
--
From: Tom Lane [EMAIL PROTECTED]
Sent: Monday, January 21, 2008 10:13 PM
To: Sam Mason [EMAIL PROTECTED]
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Polyphase Merge
I agree --- having to read the run back from external storage,
Hi -hackers,
While testing RC2 on the new servers of one of our customers, I found
a query that is really slow on the new server but quite fast on the old
box currently in production (production is 8.1 at the moment).
If I set enable_nestloop to off, the query is fast (the plan is
different from the
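For reference, the standard way to make that comparison (the actual query
is elided above, so a hypothetical join stands in for it):

    SET enable_nestloop = off;   -- session-local planner toggle, diagnosis only
    EXPLAIN ANALYZE
        SELECT count(*) FROM a JOIN b USING (id);  -- hypothetical stand-in
    RESET enable_nestloop;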
On Tue, 2008-01-22 at 10:02 -0600, Roberts, Jon wrote:
Maybe someone could enhance this concept to include it with the core
database to provide autonomous transactions.
I agree that autonomous transactions would be useful, but doing them via
dblink is a kludge. If we're going to include
Neil Conway wrote:
On Tue, 2008-01-22 at 10:02 -0600, Roberts, Jon wrote:
Maybe someone could enhance this concept to include it with the core
database to provide autonomous transactions.
I agree that autonomous transactions would be useful, but doing them via
dblink is a kludge. If we're
On Tue, 2008-01-22 at 10:02 -0600, Roberts, Jon wrote:
Maybe someone could enhance this concept to include it with the core
database to provide autonomous transactions.
I agree that autonomous transactions would be useful, but doing them via
dblink is a kludge.
Kludge or hack but I
Guillaume Smet [EMAIL PROTECTED] writes:
While testing RC2 on the new servers of one of our customers, I found
a query that is really slow on the new server but quite fast on the old
box currently in production (production is 8.1 at the moment).
Have you ANALYZEd these tables lately? It looks
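For reference, refreshing the stats is a one-liner ("a" is taken from the
plan discussed below, assuming it is a table name rather than an alias):

    ANALYZE;     -- whole database
    ANALYZE a;   -- or just the table in question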
Agreed. I think Pavel Stehule was doing some experiments with them, I
don't know if he got anywhere.
I did only some initial research. A hack is possible - you can stack the
current transaction - but a real implementation needs work similar to
nested transactions :( and it is too low-level for me. And
On Jan 22, 2008 8:28 PM, Tom Lane [EMAIL PROTECTED] wrote:
The only way the merge join could have an estimated cost that's barely
1% of the estimate for one of its inputs is if the planner thinks the
merge will stop after reading only 1% of that input, ie, the largest
a.numasso value is only
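One way to check what the planner believes about the top of that column
(pg_stats is the standard view; the table and column names are taken from
the plan, assuming "a" is the table name and not just an alias):

    SELECT attname, n_distinct, histogram_bounds
    FROM pg_stats
    WHERE tablename = 'a' AND attname = 'numasso';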
I added TimeOffset and DateOffset typedefs to get rid of the places where
the HAVE_INT64_TIMESTAMP define was being used to determine the types of
variables or functions in timestamp.c.
---
src/backend/utils/adt/timestamp.c | 77 +++--
Guillaume Smet wrote:
On Jan 22, 2008 8:28 PM, Tom Lane [EMAIL PROTECTED] wrote:
I'd expect 8.1 to make about the same estimate given the same stats,
so I think it's not looking at the same stats.
Yep, the statistics were the problem, sorry for the noise. The query
performs in 50ms
Here is a fixed patch for the timestamp refactor. I apologize for sending the
wrong patch earlier.
I would like to get some comments on whether this patch is heading in the
right direction. Here's the proposal for what I am doing:
Proposal for Refactoring of Timestamp Datatype
Goal:
The primary
I added TimeOffset and DateOffset typedefs to get rid of the places where
the HAVE_INT64_TIMESTAMP define was being used to determine the types of
variables or functions in timestamp.c.
---
src/backend/utils/adt/timestamp.c | 77 +++-
src/include/utils/timestamp.h
This patch is in error. Please ignore. I have already sent the corrected patch.
Thanks,
wt
On Jan 22, 2008 12:50 PM, Warren Turkal [EMAIL PROTECTED] wrote:
I added TimeOffset and DateOffset typedefs to get rid of the places where
the HAVE_INT64_TIMESTAMP define was being used to determine the
On Jan 22, 2008 9:52 PM, Alvaro Herrera [EMAIL PROTECTED] wrote:
No, autovacuum should have taken care of it. I would be interested in
knowing why it didn't.
I just dropped the database on Monday morning and imported it again.
Nothing really fancy.
It seems that autovacuum took care of them
Guillaume Smet wrote:
It seems that autovacuum took care of them just after the import, which
is what I expected:
[...]
So I wonder why the stats were so bad... I didn't update the data at
all after the initial import. Any idea?
Hmm, perhaps the analyze was done with the default
On Jan 22, 2008 11:22 PM, Alvaro Herrera [EMAIL PROTECTED] wrote:
So I wonder why the stats were so bad... I didn't update the data at
all after the initial import. Any idea?
Hmm, perhaps the analyze was done with the default statistics target (10)?
It's set to 30 in the postgresql.conf from
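In case it helps anyone hitting the same thing: the per-column target
overrides the GUC, and a fresh ANALYZE then rebuilds the histogram (the
table and column names are assumed from the plan discussed earlier):

    SHOW default_statistics_target;    -- 10 by default at the time
    ALTER TABLE a ALTER COLUMN numasso SET STATISTICS 30;   -- per-column override
    ANALYZE a;                         -- rebuild stats at the new target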