Don Y [EMAIL PROTECTED] writes:
Jim C. Nasby wrote:
This is a very minor reason why you should be running the most recent
8.0.x release and not 8.0.3. A much bigger reason is that there are
data-loss bugs that have been fixed.
The folks watching the Postgres releases haven't yet said
we
Tom Lane wrote:
Don Y [EMAIL PROTECTED] writes:
Jim C. Nasby wrote:
This is a very minor reason why you should be running the most recent
8.0.x release and not 8.0.3. A much bigger reason is that there are
data-loss bugs that have been fixed.
The folks watching the Postgres releases haven't
On Thu, May 18, 2006 at 10:54:02PM -0700, Don Y wrote:
I assumed that the contents of ./contrib have NOT been
thoroughly tested/reviewed by the Postgres team (though
that is just an impression I have... i.e. why have those
features not been INTEGRATED into the codebase?)
contrib has been
-Original Message-
From: Florian G. Pflug [mailto:[EMAIL PROTECTED]
Sent: 18 May 2006 21:15
Cc: A.M.; Dave Page; pgsql-general@postgresql.org
Subject: Re: [GENERAL] GUI Interface
Florian G. Pflug wrote:
A.M. wrote:
Excellent catch! I was wondering why I couldn't get it to
Oliver A. Rojo wrote:
Hi!
I've just recently upgraded my database from 7.4.8 to 8.0.1. I'm dumping
data I got from my old db to my new db, but eventually an error occurred:
ERROR: invalid byte sequence for encoding UNICODE: 0xd141
I tried setting the client encoding to UNICODE, but to no avail.
Nis Jorgensen wrote:
Oliver A. Rojo wrote:
Hi!
I've just recently upgraded my database from 7.4.8 to 8.0.1. I'm
dumping data I got from my old db to my new db, but eventually an
error occurred:
ERROR: invalid byte sequence for encoding UNICODE: 0xd141
I tried setting the client encoding to
Oliver A. Rojo wrote:
Nis Jorgensen wrote:
Oliver A. Rojo wrote:
Hi!
I've just recently upgraded my database from 7.4.8 to 8.0.1. I'm
dumping data I got from my old db to my new db, but eventually an
error occurred:
I fixed it by fixing the original db and dumping again. If this is not
Nik [EMAIL PROTECTED] wrote
I have an application that executes an INSERT statement every second or
two. In my old design this data was inserted into one table. However, I
changed the design to have one table with multiple partitions. Now when
I run the application it runs for some random
Markus Wollny wrote:
Nis Jorgensen wrote:
Oliver A. Rojo wrote:
how do you fix your original db?
Since I had only 3 occurrences of the error, I used
hand-crafted update statements. The fact that the replacement
for the invalid characters was constant and plain ascii made
this
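A hand-crafted fix of that sort might look like the sketch below (the table, column, and row id are all made up for illustration). Note that 0xd1 is 'Ñ' in LATIN1, so the invalid sequence 0xd141 is plausibly a LATIN1 "ÑA" that leaked into a UNICODE database; the update would be run against the original database, where the bytes are still stored verbatim:

```sql
-- Hypothetical repair: replace the stray LATIN1 character with plain ASCII
-- ('articles', 'body', and the row id are illustrative names)
UPDATE articles
SET body = replace(body, 'Ñ', 'N')
WHERE id = 42;
```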
Hi all,
Currently the LIMIT clause is not allowed in UPDATE or DELETE
statements. I wonder how easy it would be to allow it, and what people
think about it ? For our application it would help a lot when processing
things chunk-wise to avoid long running queries.
The fact that the actual rows
We've been using EMS PostgreSQL admin.
Pros: It has great visual tools for building views, like in Enterprise Manager or MS Access; pgAdmin doesn't have that. Great import/export abilities; pgAdmin doesn't have much flexibility.
Cons: Too bad it doesn't support UNICODE, and it's not known if it will. Problems with
Hello,
I have a production database which has not been full-vacuumed for ~2
months (autovacuum worked fine with default settings all this time).
When I run VACUUM FULL VERBOSE ANALYZE it processes several
tables/indexes and then hangs (at least I tried to wait for 30 mins
and nothing happened)
On Wednesday 17 May 2006 18:12, Jim C. Nasby wrote:
In any case, there's extensive discussion about this in the -hackers
archives. IIRC, there is consensus that this would be nice to have but
no one has cared enough to actually make it happen. There are some
non-trivial issues since this would
Hello,
I need to grant all privileges on all objects in database. Without
using SUPERUSER.
It's strange that GRANT ALL PRIVILEGES ON DATABASE is useless; it
doesn't grant privileges on tables.
I've found out this best practice, (more like ugly workaround):
select 'grant all on
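The generate-and-execute workaround being hinted at can be sketched like this (the role name is illustrative); the generated statements would then be saved to a file and fed back through psql:

```sql
-- Emit one GRANT statement per table in the public schema,
-- for a hypothetical role 'some_user'
SELECT 'GRANT ALL ON ' || schemaname || '.' || tablename || ' TO some_user;'
FROM pg_tables
WHERE schemaname = 'public';
```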
On Friday 19 May 2006 01:54, Don Y wrote:
Tom Lane wrote:
Tim Allen [EMAIL PROTECTED] writes:
Don Y wrote:
So, I'll deploy them and get feedback on which features I
may need to add (some of the data types are probably a bit
too exotic for most users). I figure I can always contribute
On 5/19/06, Csaba Nagy [EMAIL PROTECTED] wrote:
Hi all,
Currently the LIMIT clause is not allowed in UPDATE or DELETE
statements. I wonder how easy it would be to allow it, and what people
think about it ? For our application it would help a lot when processing
things chunk-wise to avoid long
On 5/19/06, Csaba Nagy [EMAIL PROTECTED] wrote:
Hi all,
Currently the LIMIT clause is not allowed in UPDATE or DELETE
statements. I wonder how easy it would be to allow it, and what people
think about it ? For our application it would help a lot when processing
things chunk-wise to avoid long
I asked that question a while ago..
http://archives.postgresql.org/pgsql-general/2005-04/msg00386.php
and got this response:
http://archives.postgresql.org/pgsql-general/2005-04/msg00387.php
Works quite well :)
I don't doubt that it works well, in fact that's what I plan to use
I have an application that executes an INSERT statement every second or
two. In my old design this data was inserted into one table. However, I
changed the design to have one table with multiple partitions. Now when
I run the application it runs for some random amount of time (random
number of
Hi,
I see people use postgreSQL for storing their binary data including
images, etc. Here are a few questions I would very much like to have
expert opinions on:
- Is it not faster / more efficient to store binary data using file
system and let DB keep the path?
- I know of only one way to
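For the inline-storage side of that trade-off, a minimal bytea-based sketch (table and column names are made up) would look like:

```sql
-- Binary data stored inline in the database; the alternative discussed
-- above is keeping the bytes on the filesystem and storing only a path
CREATE TABLE images (
    id   serial PRIMARY KEY,
    name text  NOT NULL,
    data bytea NOT NULL
);
```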
I am working with postgres 8.0.3 on SuSE Linux 9.3. I want to restore data
from a pg_dump backup. Previously it was working fine with the same dump,
but now it is giving the following error...
pg_restore: [archiver (db)] could not execute query: ERROR: relation
sw_4904.operatorjob does not exist
Nis Jorgensen wrote:
Oliver A. Rojo wrote:
how do you fix your original db?
Since I had only 3 occurrences of the error, I used
hand-crafted update statements. The fact that the replacement
for the invalid characters was constant and plain ascii made
this very easy.
If you have many
-- sample data
CREATE TEMP TABLE tab (id serial primary key, n int, t text);
INSERT INTO tab(n) SELECT * FROM generate_series(1,1000);
-- say, you want such an update:
UPDATE tab SET t = 'aqq' WHERE n < 10;
-- but with limit:
UPDATE tab SET t = 'aqq' WHERE id in (SELECT id FROM tab WHERE n
That would spare a HashAggregate and an index scan. The index scan would
very likely not be a problem, as the same index entries are visited in
the subquery and likely are cached, and the HashAggregate should be also
fast for the max 1000 rows it has to handle, but they are still
completely
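Put together, the chunk-wise workaround the sample above is driving at would be run repeatedly until zero rows are affected (the predicate and chunk size are illustrative):

```sql
-- Delete at most 100 matching rows per statement; the application
-- loops, re-issuing this until it reports 0 rows affected
DELETE FROM tab
WHERE id IN (SELECT id FROM tab WHERE n < 10 LIMIT 100);
```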
Csaba Nagy [EMAIL PROTECTED] writes:
Currently the LIMIT clause is not allowed in UPDATE or DELETE
statements. I wonder how easy it would be to allow it, and what people
think about it ?
This has been proposed before, and rejected before, and the arguments
against are just as strong as they
Ivan Zolotukhin [EMAIL PROTECTED] writes:
When I run VACUUM FULL VERBOSE ANALYZE it processes several
tables/indexes and then hangs (at least I tried to wait for 30 mins
and nothing happened) on one particular table, "education", which is
quite ordinary I think. When it hangs I see in `ps auxww`
On Fri, 2006-05-19 at 15:51, Tom Lane wrote:
Csaba Nagy [EMAIL PROTECTED] writes:
Currently the LIMIT clause is not allowed in UPDATE or DELETE
statements. I wonder how easy it would be to allow it, and what people
think about it ?
This has been proposed before, and rejected before, and
Siah [EMAIL PROTECTED] writes:
Is there any better method of pushing binary data to
postgres besides parsing it into ascii sql which seems extremely
inefficient to me.
Yeah, send it as an out-of-line binary parameter. Dunno whether you can
get at that from Python though :-(
Csaba Nagy [EMAIL PROTECTED] writes:
I DO NOT CARE about which rows are deleted.
You can't possibly think that that holds true in general.
The fact that it is
nondeterministic can be very clearly specified in the documentation if
you think it is such a bad thing, but nondeterministic is
On Fri, 2006-05-19 at 16:31, Tom Lane wrote:
Csaba Nagy [EMAIL PROTECTED] writes:
I DO NOT CARE about which rows are deleted.
You can't possibly think that that holds true in general.
I agree that it is not true in the general case, but then I also don't
want to use DELETE with LIMIT in the
- Original Message
From: Tom Lane [EMAIL PROTECTED]
To: Csaba Nagy [EMAIL PROTECTED]
Cc: Postgres general mailing list pgsql-general@postgresql.org
Sent: Friday, May 19, 2006 9:31:24 AM
Subject: Re: [GENERAL] allow LIMIT in UPDATE and DELETE
You can't possibly think that that holds true
Now the question... why would you do that instead of doing everything in one
big transaction on Oracle? I guess performance is one reason. Oracle's (at
least with 8/8i) performance seems to tank very quickly on deletes as you
increase the number of records you delete in a single
On Fri, May 19, 2006 at 04:22:42PM +0200, Csaba Nagy wrote:
Then use ctid.
For the problem at hand in your post it is a good solution, except
that it will cause a full table scan, because I guess few people have
indexes on ctid. Or you have to write your queries really contrived,
by
Err, you don't need an index on ctid because the ctid represents the
physical location of the tuple on disk. ctids are what indexes use to
refer to tuples...
OK, then how you explain this:
db=# prepare test_001(bigint, bigint, smallint) as
db-# DELETE FROM big_table
db-# WHERE ctid IN
db-#
PG_VERSION 8.1 - windows XP - 1GB RAM -- Desktop workstation, PG used
for test/dev.
This installation has been working fine for quite a while. I've used it
minimally.
I rebooted yesterday - log file from reboot:
2006-05-18 12:24:32 LOG: database system was shut down at 2006-05-18
12:23:08
Regarding Oracle's ROWNUM - since they have to use that instead of OFFSET
and LIMIT, that isn't much of an argument for the Oracle way. When
converting queries into Oracle SQL, I always _really_ miss OFFSET and
LIMIT. They are much easier to use than ROWNUM, especially with ORDER BY.
I think
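To illustrate the difference being described (table and column names are hypothetical): fetching rows 11-20 of an ordered result is a single clause in Postgres, but needs nested subqueries in Oracle of that era, because ROWNUM is assigned before ORDER BY is applied within a query block:

```sql
-- PostgreSQL
SELECT * FROM events ORDER BY created_at LIMIT 10 OFFSET 10;

-- Oracle: order in an inner subquery, then number and filter outside it
SELECT * FROM (
    SELECT s.*, ROWNUM rn
    FROM (SELECT * FROM events ORDER BY created_at) s
    WHERE ROWNUM <= 20
) WHERE rn > 10;
```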
On Fri, 2006-05-19 at 17:43, [EMAIL PROTECTED] wrote:
Regarding Oracle's ROWNUM - since they have to use that instead of OFFSET
and LIMIT, that isn't much of an argument for the Oracle way. When
converting queries into Oracle SQL, I always _really_ miss OFFSET and
LIMIT. They are much easier
- Original Message
From: [EMAIL PROTECTED]
To: Csaba Nagy [EMAIL PROTECTED]
Cc: Postgres general mailing list pgsql-general@postgresql.org; [EMAIL
PROTECTED]
Sent: Friday, May 19, 2006 10:43:43 AM
Subject: Re: [GENERAL] allow LIMIT in UPDATE and DELETE
Personally, I have never wanted a
Personally, I have never wanted a DELETE or UPDATE with LIMIT. The one
time I did something similar in Oracle, I used partitions, and just dropped
or truncated the partition containing the old data.
Yeah, that’s the proper way to handle the issue assuming that sufficient
forethought was
Martijn van Oosterhout kleptog@svana.org writes:
Err, you don't need an index on ctid because the ctid represents the
physical location of the tuple on disk. ctids are what indexes use to
refer to tuples...
# explain DELETE FROM t WHERE ctid=(SELECT ctid FROM t WHERE pronargs=1 LIMIT 1);
On Fri, May 19, 2006 at 03:58:19PM +0400, Ivan Zolotukhin wrote:
There are no clients connected to the
database except me at that moment, since I switched the application to
another DB.
Did you check for not yet committed prepared transactions?
Check the pg_prepared_xacts system view.
Joachim
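The check being suggested is a single query against that view (available since 8.1):

```sql
-- Any rows here are pending two-phase (prepared) transactions, which
-- hold locks and can block VACUUM FULL indefinitely
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;
```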
- Original Message
From: Csaba Nagy [EMAIL PROTECTED]
To: Shelby Cain [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; Postgres general mailing list
pgsql-general@postgresql.org; [EMAIL PROTECTED]
Sent: Friday, May 19, 2006 11:46:42 AM
Subject: Re: [GENERAL] allow LIMIT in UPDATE and DELETE