hi,
when i create a unique constraint on a varchar field, how exactly
does postgresql compare the texts? i'm asking because in UNICODE there
are a lot of complexities about this..
or in other words, when are two varchars equal in postgres? when their
bytes are? or is some algorithm applied?
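as far as i know, postgres compares text with the C library's collation routines for the database's lc_collate, but for *equality* (which is what a unique constraint cares about) it effectively comes down to exact code-point comparison: postgres does not normalize unicode for you. a minimal python sketch of the consequence, using the composed and decomposed forms of 'á':

```python
import unicodedata

# 'á' as one precomposed code point vs. 'a' + a combining acute accent
composed = "\u00e1"      # á (U+00E1)
decomposed = "a\u0301"   # 'a' + U+0301 COMBINING ACUTE ACCENT

# they render identically, but the code points (and utf-8 bytes) differ,
# so a unique constraint would happily store both as distinct values
assert composed != decomposed
assert composed.encode("utf-8") != decomposed.encode("utf-8")

# normalizing to NFC makes them compare equal; that normalization has
# to happen in the application, the database won't do it
assert unicodedata.normalize("NFC", decomposed) == composed
```

so if your data can contain both normalization forms, normalize before insert, or the unique constraint won't catch the "duplicates".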
hi,
i have a table where i need to update 7 million rows.
i'm trying to do this without downtime, but no matter what i do,
i get massive slowdowns every 5 minutes.
details:
the table's schema contains 6 integers, 2 timestamps, 1 varchar, and 1 text.
i added a new text field (currently null),
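one common way to avoid the periodic stalls (a slowdown every 5 minutes smells like checkpoints, whose default timeout is 5 minutes) is to update in small batches with a commit between each, so no single transaction touches millions of rows. a sketch using sqlite as a stand-in for postgres; the table and column names here are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, extra TEXT)")
conn.executemany("INSERT INTO items (extra) VALUES (?)", [(None,)] * 10000)
conn.commit()

# walk the table in primary-key order, updating 1000 rows per
# transaction; between commits other sessions (and vacuum) can work
batch = 1000
last_id = 0
while True:
    rows = conn.execute(
        "SELECT id FROM items WHERE id > ? AND extra IS NULL "
        "ORDER BY id LIMIT ?", (last_id, batch)).fetchall()
    if not rows:
        break
    ids = [r[0] for r in rows]
    conn.executemany("UPDATE items SET extra = 'filled' WHERE id = ?",
                     [(i,) for i in ids])
    conn.commit()
    last_id = ids[-1]

remaining = conn.execute(
    "SELECT count(*) FROM items WHERE extra IS NULL").fetchone()[0]
assert remaining == 0
```

in postgres you would additionally pause between batches so autovacuum can keep up with the dead tuples, and look at the checkpoint settings if the 5-minute rhythm persists.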
hi,
postgresql 8.4.7 here.
i checked the pg_stat_user_tables view, and it has a lot of rows
there where last_autovacuum and/or last_autoanalyze are null.
does this mean that autovacuum never worked on those tables?
roughly 70% of all the tables have null in those fields..
in those
2011/6/23 Thom Brown t...@linux.com:
2011/6/23 Gábor Farkas ga...@nekomancer.net:
hi,
postgresql 8.4.7 here.
i checked the pg_stat_user_tables view, and it has a lot of rows
there where last_autovacuum and/or last_autoanalyze are null.
does this mean that autovacuum never worked
hi,
i have a postgresql-8.2 database instance,
which in the past contained a latin9 database.
this database was later converted to utf8 (by dumping and reloading it).
everything works fine, except that
SELECT lower('Gábor')
returns 'gbor' (the 'á' is dropped).
the problem seems to be that
SHOW
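for comparison, a unicode-aware lower() keeps the accented character; when lc_ctype doesn't match the database encoding, postgres's lower() falls back to byte-wise C-library behavior on the utf-8 bytes, which mishandles multibyte characters. a small python illustration of the difference (byte-wise ascii lowercasing stands in for C-locale tolower()):

```python
# unicode-aware lowercasing handles the accented character correctly
assert "GÁBOR".lower() == "gábor"

# ascii-only byte-wise lowercasing leaves the multibyte 'Á' untouched,
# so the result is wrong; a mismatched lc_ctype can do even worse and
# corrupt or drop such characters, as seen above
raw = "GÁBOR".encode("utf-8")
assert raw.lower().decode("utf-8") == "gÁbor"
```

the usual check is SHOW lc_ctype (and lc_collate): on a utf8 database they should be a utf-8 locale, not C or a latin one.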
Erik Jones wrote:
On Jan 10, 2008, at 6:01 AM, Simon Riggs wrote:
On Thu, 2008-01-10 at 11:18 +0100, Gábor Farkas wrote:
Simon Riggs wrote:
also, even if it is wrong, can an 'idle-in-transaction' connection that
was opened today block the vacuuming of rows that were deleted yesterday?
Simon Riggs wrote:
also, even if it is wrong, can an 'idle-in-transaction' connection that
was opened today block the vacuuming of rows that were deleted yesterday?
Yes, if the rows were deleted after the connection started.
to avoid any potential misunderstandings, i will summarize the
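the rule in the answer can be modeled very simply: vacuum may reclaim a dead row only if the deletion happened before every still-open transaction started. a toy sketch (the increasing integers just stand in for transaction ids):

```python
# a dead row is removable only if it was deleted before the oldest
# still-open transaction started
def removable(deleted_at, open_transaction_starts):
    return all(deleted_at < start for start in open_transaction_starts)

# idle transaction opened yesterday (t=100), row deleted today (t=200):
# the idle connection pins the row, vacuum cannot reclaim it
assert not removable(200, [100])

# row deleted yesterday (t=100), idle transaction opened today (t=200):
# the deletion predates every open transaction, so vacuum can reclaim it
assert removable(100, [200])
```

which matches the answer above: an idle-in-transaction session only blocks cleanup of rows deleted *after* it started.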
hi,
i have a postgresql-8.2.4 db,
and vacuuming it does not remove the dead rows
basically, the problem is this part of the vacuum-output:
HINT: Close open transactions soon to avoid wraparound problems.
INFO: vacuuming public.sessions
INFO: scanned index sessions_pkey to remove 2 row
Tom Lane wrote:
Gábor Farkas [EMAIL PROTECTED] writes:
basically, the problem is this part of the vacuum-output:
INFO: sessions: found 2 removable, 6157654 nonremovable row versions
in 478069 pages
DETAIL: 6155746 dead row versions cannot be removed yet.
The problem is
Joshua D. Drake wrote:
Gábor Farkas wrote:
hi,
i have a postgresql-8.2.4 db,
and vacuuming it does not remove the dead rows
basically, the problem is this part of the vacuum-output:
on the db-server, 4 postgres processes are idle in transaction, but
none is older than 2 days.
If you
hi,
i got the following error-message:
ERROR: deadlock detected
DETAIL: Process 32618 waits for ShareLock on transaction 1137032034;
blocked by process 16136.
Process 16136 waits for ShareLock on transaction 1137045910;
blocked by process 32618.
(postgres 7.4 here)
i checked the
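the error message describes a classic lock cycle: each transaction holds something the other wants. a toy reproduction with two python threads (the pid names are taken from the message above for illustration only); postgres detects the cycle and aborts one transaction, while here timeouts are used so the demo doesn't hang:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
barrier = threading.Barrier(2)
results = {}

def worker(name, first, second):
    with first:
        barrier.wait()                 # both workers now hold their first lock
        # a real deadlock would wait forever here; the timeout lets us
        # observe the cycle instead
        results[name] = second.acquire(timeout=0.5)
        if results[name]:
            second.release()
        barrier.wait()                 # keep the first lock held until both tried

threads = [
    threading.Thread(target=worker, args=("pid-32618", lock_a, lock_b)),
    threading.Thread(target=worker, args=("pid-16136", lock_b, lock_a)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# neither side could get its second lock: that is the deadlock cycle
assert results == {"pid-32618": False, "pid-16136": False}
```

the standard cure is to make all transactions take their locks (i.e. touch the shared rows) in the same order, so no cycle can form.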
Scott Marlowe wrote:
On Thu, 2006-03-23 at 12:17, Jim Nasby wrote:
On Mar 22, 2006, at 10:08 PM, Scott Marlowe wrote:
Now, I shouldn't be able to insert anything into b that's not
referencing an entry in a. And I used InnoDB tables, and I used ANSI SQL, and I
got no errors. So how come my
hi,
i'd like to delete the postgresql log file
(resides in /var/log/pgsql/postgres),
because it has become too big.
can i simply delete the file while postgres is running?
or do i have to stop postgres first, and only delete the logfile after that?
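one caveat worth knowing: on unix, unlinking a file that a running process holds open does *not* free the space; the server keeps writing to the invisible inode until it is restarted. truncating the file in place is the safer move. a small python simulation of the situation (the writer stands in for the running postmaster):

```python
import os
import tempfile

# a long-running process holds the log file open in append mode
path = os.path.join(tempfile.mkdtemp(), "postgres.log")
writer = open(path, "a")
writer.write("old log line\n")
writer.flush()

# truncate in place instead of deleting: the writer's file descriptor
# stays valid and new lines land in the now-empty file
with open(path, "w"):
    pass
assert os.path.getsize(path) == 0

writer.write("new log line\n")
writer.flush()
assert os.path.getsize(path) == len("new log line\n")
writer.close()
```

note this assumes the server's log descriptor is in append mode; if it isn't, truncation can leave a sparse file. logrotate's copytruncate mode does essentially the same thing and is the usual long-term fix.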
thanks,
gabor
hi,
we have a database, which was not vacuumed for a long time.
right now its size is 30GB. it only contains a simple table with 90 rows.
it seems that it's so big because it was not vacuumed for a long time.
is this a reasonable assumption?
now we'd like to somehow 'compact' it.
it
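yes, that's a reasonable assumption: plain deletes and updates leave dead tuples behind, and an ordinary vacuum only marks the space reusable, it doesn't shrink the file. the same phenomenon can be shown with sqlite as a stand-in, where VACUUM rewrites the file compactly (in postgres the closest equivalents are VACUUM FULL or CLUSTER, which take an exclusive lock and need free disk for the rewrite):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "bloat.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("x" * 1000,)] * 5000)
conn.commit()

# deleting the rows does NOT shrink the file, the pages stay allocated
conn.execute("DELETE FROM t")
conn.commit()
before = os.path.getsize(path)

# VACUUM rewrites the database compactly and the file shrinks
conn.execute("VACUUM")
after = os.path.getsize(path)
conn.close()

assert after < before
```

for a 30GB table with 90 live rows, dumping the table (or copying the live rows to a fresh table) and dropping the old one is often faster than VACUUM FULL on 8.x.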
Alban Hertroys wrote:
Gábor Farkas wrote:
i'm only afraid that maybe if we issue the drop-db command, it will
take for example 30 minutes...
Wouldn't it be more effective to create a new table by selecting your
session table and switch their names? You can drop the troublesome table
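the swap suggested above can be sketched end to end; here with sqlite syntax as a stand-in (in postgres it would be CREATE TABLE ... AS SELECT, DROP TABLE, ALTER TABLE ... RENAME TO, ideally in one transaction; the table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER, live INTEGER)")
conn.executemany("INSERT INTO sessions VALUES (?, ?)",
                 [(1, 1), (2, 0), (3, 1)])

# copy only the rows worth keeping into a fresh, bloat-free table,
# then swap it into place under the old name
conn.execute("CREATE TABLE sessions_new AS "
             "SELECT * FROM sessions WHERE live = 1")
conn.execute("DROP TABLE sessions")
conn.execute("ALTER TABLE sessions_new RENAME TO sessions")
conn.commit()

rows = conn.execute("SELECT id FROM sessions ORDER BY id").fetchall()
assert rows == [(1,), (3,)]
```

one caveat: CREATE TABLE AS copies only the data, so indexes, constraints, and foreign keys on the old table have to be recreated on the new one.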
hi,
for historical reasons ;) (are there any other reasons?),
we have a postgres db,
where the data are in iso-8859-15 encoding,
but the database encoding is iso-8859-1.
question(s):
1. is it possible to change the db encoding?
2. if it stays as it is, when can there be problems?
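on question 2: iso-8859-1 and iso-8859-15 agree on all but eight byte positions, so the mislabeling stays invisible until one of those characters shows up. the classic example is 0xA4, which is the euro sign in iso-8859-15 but the currency sign in iso-8859-1:

```python
# the same byte decodes to different characters under the two encodings
assert b"\xa4".decode("iso-8859-15") == "€"
assert b"\xa4".decode("iso-8859-1") == "¤"

# count exactly where the two encodings disagree
differing = [v for v in range(256)
             if bytes([v]).decode("iso-8859-1")
             != bytes([v]).decode("iso-8859-15")]
assert len(differing) == 8
```

so any 'é' or plain ascii survives the mismatch, but euro signs (and the other seven differing code points) come out as the wrong character in every client that trusts the declared encoding.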
Peter Eisentraut wrote:
On Tuesday, 27 September 2005 10:15, Gábor Farkas wrote:
for historical reasons ;) (are there any other reasons),
we have a postgres db,
where the data are in iso-8859-15 encoding,
but the database encoding is iso-8859-1.
2. if it remains like it is currently, when
Yonatan Ben-Nes wrote:
Dawid Kuroczko wrote:
Hmm, JOIN on a Huge table with LIMIT. You may be suffering from
the same problem I had:
http://archives.postgresql.org/pgsql-performance/2005-07/msg00345.php
Tom came up with a patch which worked marvellous in my case: