Hi,
Can anyone who has started using 8.3.1 list the pros and cons?
Thanks in advance
~ Gauri
I upgraded to 8.3.1 yesterday from 8.3.0. I've used 8.3.0 since it was
released and it's working fine. I upgraded from 7.4 (dump/restore) and
it worked out of the box. We have fairly simple SQL queries, so
there was no
What version of Postgres are you running?
If you are using 8.3, you can use pg_stat_all_tables. If not, you can use
http://www.postgresql.org/docs/current/static/pgstattuple.html
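A minimal sketch of the pg_stat_all_tables approach against a live server (the table name `mytable` is just a placeholder):

```sql
-- Row-level activity counters for one table (available in 8.3+):
SELECT relname,
       n_tup_ins,   -- rows inserted
       n_tup_upd,   -- rows updated
       n_tup_del,   -- rows deleted
       n_live_tup,  -- estimated live rows
       n_dead_tup   -- estimated dead rows awaiting vacuum
FROM pg_stat_all_tables
WHERE relname = 'mytable';
```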
Chirag
On Tue, Apr 29, 2008 at 8:14 AM, Francisco Reyes [EMAIL PROTECTED]
wrote:
I recall reading posts in the past
Marc G. Fournier wrote:
Someone on this list has one of those 'confirm your email' filters on their
mailbox, which is bouncing back messages ... this is an attempt to
narrow down the address that is causing this ...
Did you find out?
--
Alvaro Herrera
Gauri Kanekar wrote:
Hi,
Can anyone who has started using 8.3.1 list the pros and cons?
Thanks in advance
~ Gauri
Don't know for sure if it is Windows-to-Linux, but we moved from 8.2,
which was installed on Windows, to 8.3.1 on Ubuntu using the compiled
version from Ubuntu.
We
On Fri, May 2, 2008 at 2:31 AM, Gauri Kanekar [EMAIL PROTECTED] wrote:
Can anyone who has started using 8.3.1 list the pros and cons?
There are bugs in the 8.3.1 release which bit us when we started using
it; however, these are fixed in the 8.3 stable branch.
Attempting to resend. My first attempt was rejected with this
explanation:
Your message to the pgsql-performance list has been denied
for the following reason(s):
A message was previous posted with this Message-ID
Duplicate Message-ID - [EMAIL PROTECTED] (Fri May 2
13:36:52 2008)
Duplicate
On Fri, 02 May 2008 12:11:34 -0500, Justin [EMAIL PROTECTED] wrote:
Don't know for sure if it is Windows-to-Linux, but we moved from 8.2,
which was installed on Windows, to 8.3.1 on Ubuntu using the compiled
version from Ubuntu
We had a minor annoying problem with implicit data conversion
Greetings -- I have an UPDATE query updating a 100 million row table,
and I allocated enough memory via shared_buffers=1500MB. However, I see
two processes in top: the UPDATE process eating about 850 MB and the
writer process eating about 750 MB. The box starts paging. Why is
there the
On Fri, May 2, 2008 at 1:24 PM, Alexy Khrabrov [EMAIL PROTECTED] wrote:
Greetings -- I have an UPDATE query updating a 100 million row table, and
allocate enough memory via shared_buffers=1500MB. However, I see two
processes in top, the UPDATE process eating about 850 MB and the writer
On May 2, 2008, at 12:30 PM, Scott Marlowe wrote:
On Fri, May 2, 2008 at 1:24 PM, Alexy Khrabrov
[EMAIL PROTECTED] wrote:
Greetings -- I have an UPDATE query updating a 100 million row
table, and
allocate enough memory via shared_buffers=1500MB. However, I see two
processes in top, the
On Fri, May 2, 2008 at 1:38 PM, Alexy Khrabrov [EMAIL PROTECTED] wrote:
On May 2, 2008, at 12:30 PM, Scott Marlowe wrote:
On Fri, May 2, 2008 at 1:24 PM, Alexy Khrabrov [EMAIL PROTECTED]
wrote:
Greetings -- I have an UPDATE query updating a 100 million row table,
and
allocate
Scott Marlowe [EMAIL PROTECTED] writes:
On Fri, May 2, 2008 at 1:38 PM, Alexy Khrabrov [EMAIL PROTECTED] wrote:
I randomly increased values in postgresql.conf to
shared_buffers = 1500MB
max_fsm_pages = 200
max_fsm_relations = 1
On a laptop with 2GB of RAM, 1.5GB of shared buffers is too much.
On Fri, 2 May 2008, Alexy Khrabrov wrote:
I have an UPDATE query updating a 100 million row table, and
allocate enough memory via shared_buffers=1500MB.
In addition to reducing that as you've been advised, you'll probably need
to increase checkpoint_segments significantly from the default of 3.
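Putting the suggestions so far together, a postgresql.conf sketch for a 2 GB machine doing a bulk UPDATE might look like this (the values are illustrative, not from the thread):

```
shared_buffers = 512MB       # well under the ~1GB usable ceiling suggested for this box
checkpoint_segments = 32     # up from the default of 3; fewer checkpoints during bulk writes
```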
On May 2, 2008, at 1:13 PM, Tom Lane wrote:
I don't think you should figure on more than 1GB being
usefully available to Postgres, and you can't give all or even most of
that space to shared_buffers.
So how should I divide say a 512 MB between shared_buffers and, um,
what else? (new to pg tuning :)
Interestingly, after shutting down the server with
shared_buffers=1500MB in the middle of that UPDATE, I see this:
bash-3.2$ /opt/bin/pg_ctl -D /data/pgsql/ stop
waiting for server to shut down... LOG: received smart shutdown request
LOG: autovacuum launcher shutting down
On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov [EMAIL PROTECTED] wrote:
So how should I divide say a 512 MB between shared_buffers and, um, what
else? (new to pg tuning :)
Don't worry so much about the rest of the settings. Maybe increase
sort_mem (aka work_mem) to something like 16M or so.
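For a single session doing one big operation, work_mem can also be raised just for that connection instead of globally (a sketch, not from the thread):

```sql
-- Raise per-sort/per-hash memory for this session only
SET work_mem = '16MB';
-- ... run the big UPDATE here ...
RESET work_mem;  -- back to the server default
```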
On May 2, 2008, at 1:40 PM, Scott Marlowe wrote:
Again, a database protects your data from getting scrambled should the
program updating it quit halfway through etc...
Right -- but this is data-mining work: I add a derived column to a
row, and it's computed from that very row and a small
On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov [EMAIL PROTECTED] wrote:
I naively thought that if I have a 100,000,000 row table, of the form
(integer,integer,smallint,date), and add a real column to it, it will scroll
through the memory reasonably fast.
In Postgres, an update is the same as a delete plus an insert.
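Because of MVCC, each UPDATE writes a whole new row version rather than changing the row in place. This can be observed via the hidden ctid column; a sketch against a hypothetical table `t`:

```sql
-- Hypothetical table to illustrate row versioning
CREATE TABLE t (id integer PRIMARY KEY, val real);
INSERT INTO t VALUES (1, 0.5);

SELECT ctid FROM t WHERE id = 1;   -- physical location of the original version
UPDATE t SET val = 0.7 WHERE id = 1;
SELECT ctid FROM t WHERE id = 1;   -- a new location: the old version is dead until VACUUM
```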
On May 2, 2008, at 2:02 PM, Craig James wrote:
On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov
[EMAIL PROTECTED] wrote:
I naively thought that if I have a 100,000,000 row table, of the form
(integer,integer,smallint,date), and add a real column to it, it
will scroll
through the memory
On Fri, 2 May 2008, Alexy Khrabrov wrote:
I created several indices for the primary table, yes.
That may be part of your problem. All of the indexes are being
updated along with the main data in the row each time you touch a record.
There's some optimization there in 8.3 but it doesn't
I created several indices for the primary table, yes. Sure, I can do a
table for a volatile column, but then I'll have to create a new such
table for each derived column -- that's why I tried to add a column to
the existing table. Yet seeing this is really slow, and I need to do
many
Hi,
I'm porting an application written with pretty portable SQL, but tested
almost exclusively on MySQL.
I'm wondering why this query would take about 90 seconds to return 74 rows.
SELECT INFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_NAME,
On May 2, 2008, at 2:23 PM, Greg Smith wrote:
On Fri, 2 May 2008, Alexy Khrabrov wrote:
I created several indices for the primary table, yes.
That may be part of your problem. All of the indexes are being
updated along with the main data in the row each time you touch a
record.
Alexy Khrabrov wrote:
Since I don't index on that
new column, I'd assume my old indices would do -- do they change
because of rows deletions/insertions, with the effective new rows
addresses?
Every update is a delete and insert. The new version of the row must
be added to the table, and each index needs an entry pointing to the
new version.
Ernesto [EMAIL PROTECTED] writes:
I'm wondering why this query would take about 90 seconds to return 74 rows.
EXPLAIN ANALYZE might tell you something.
Is this really the query you're running? Because these two columns
don't exist:
INFORMATION_SCHEMA.KEY_COLUMN_USAGE.REFERENCED_TABLE_NAME,
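Tom Lane's suggestion in practice: prefix the slow query with EXPLAIN ANALYZE to see the actual plan and per-node timings. A sketch (the query body below is a placeholder, not the poster's full query):

```sql
-- EXPLAIN ANALYZE runs the query and reports, for each plan node,
-- the estimated vs. actual row counts and the time spent.
EXPLAIN ANALYZE
SELECT tc.constraint_name, tc.table_name
FROM information_schema.table_constraints AS tc
WHERE tc.constraint_type = 'FOREIGN KEY';
```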
On May 2, 2008, at 2:43 PM, Kevin Grittner wrote:
Alexy Khrabrov wrote:
Since I don't index on that
new column, I'd assume my old indices would do -- do they change
because of rows deletions/insertions, with the effective new rows
addresses?
Every update is a delete and insert. The new
Alexy Khrabrov wrote:
OK. I've cancelled all previous attempts at UPDATE and will now
create some derived tables. See no changes in the previous huge table
-- the added column was completely empty. Dropped it. Should I
vacuum just in case, or am I guaranteed not to have any extra
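The derived-table approach can be sketched like this (the names `base` and `derived` and the computation are hypothetical):

```sql
-- Instead of ALTER TABLE ... ADD COLUMN plus a 100M-row UPDATE,
-- materialize the computed value into a separate table keyed by the same id.
CREATE TABLE derived AS
SELECT id,
       (a::real / NULLIF(b, 0)) AS score   -- some per-row computation
FROM base;

-- Join back only when the derived value is needed:
-- SELECT base.*, derived.score FROM base JOIN derived USING (id);
```

This avoids rewriting every row version (and every index entry) of the huge table.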