Hi all,
In a previous post, Ron Peacetree suggested checking how much work_mem
a query needs. How can that be done?
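One way to probe this (a sketch only; the table and column names are hypothetical, and the "Sort Method" line in EXPLAIN ANALYZE output only appears in sufficiently recent PostgreSQL releases) is to run the query under EXPLAIN ANALYZE at different work_mem settings and watch whether its sorts and hashes spill to disk:

```sql
-- Hypothetical example: tighten work_mem, run the query, and check the plan.
-- A sort that fits in memory reports something like "Sort Method: quicksort
-- Memory: NNNkB"; one that spills reports "external merge  Disk: NNNkB".
SET work_mem = '1MB';
EXPLAIN ANALYZE SELECT * FROM some_big_table ORDER BY some_col;

-- If the sort went to disk, raise work_mem for this session and retry until
-- it stays in memory (or until the cost of more RAM outweighs the benefit):
SET work_mem = '32MB';
EXPLAIN ANALYZE SELECT * FROM some_big_table ORDER BY some_col;
```

Since SET only affects the current session, this lets you experiment per-query without touching postgresql.conf.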
Thanks all
--
Arnau
Alvaro Herrera wrote:
Ron wrote:
C= What file system are you using? Unlike BigDBMS, pg does not have
its own native one, so you have to choose the one that best suits
your needs. For update heavy applications involving lots of small
updates jfs and XFS should both be seriously considered.
On þri, 2007-01-02 at 09:04 -0500, Geoffrey wrote:
Alvaro Herrera wrote:
Actually it has been suggested that a combination of ext2 (for WAL) and
ext3 (for data, with data journalling disabled) is a good performer.
AFAIK you don't want the overhead of journalling for the WAL partition.
On 2 Jan 2007, at 14:54, Ragnar wrote:
On þri, 2007-01-02 at 09:04 -0500, Geoffrey wrote:
Alvaro Herrera wrote:
Actually it has been suggested that a combination of ext2 (for WAL) and
ext3 (for data, with data journalling disabled) is a good performer.
More specifically, you should set the noatime,data=writeback options in
fstab on ext3 partitions for best performance. Correct?
It doesn't really belong here, but ext3 has three journalling modes:
journal (both data and metadata are journalled),
ordered (metadata is journalled, but data is written before its metadata; the default), and
writeback (metadata is journalled, with no ordering between data and metadata writes).
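The noatime,data=writeback suggestion above could look like this in /etc/fstab (a sketch only; the device names and mount points are hypothetical, and data=writeback is only safe for the data partition if you accept its weaker crash guarantees for non-PostgreSQL files):

```
# Hypothetical /etc/fstab entries: ext2 for the WAL partition (no journal
# overhead), ext3 with writeback journalling and no atime updates for data.
/dev/sda5  /var/lib/pgsql/wal   ext2  noatime                  0 2
/dev/sda6  /var/lib/pgsql/data  ext3  noatime,data=writeback   0 2
```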
Rolf Østvik [EMAIL PROTECTED] writes:
If you (Tom) still want me to do the following steps
then please tell me.
Please --- I'm still curious why the estimated cost changed so much from
7.4 to 8.2. I can believe a marginal change in cost leading to a plan
switch, but
I'm curious what parameters you guys typically *always* adjust on new
PostgreSQL installs.
I am working with a database that contains several large tables (10-20
million rows) and many smaller tables (hundreds of rows). My system has 2 GB
of RAM currently, although I will be upping it to 4GB soon.
Jeremy Haile wrote:
I'm curious what parameters you guys typically *always* adjust on new
PostgreSQL installs.
The parameters that I almost always change when installing a new system
are shared_buffers, max_fsm_pages, checkpoint_segments, and
effective_cache_size.
Always: work_mem,
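For a concrete picture, the parameters named above might look like this in postgresql.conf (illustrative values only, assuming a dedicated 2 GB machine and a release recent enough to accept unit suffixes; the right numbers depend entirely on your workload):

```
# Sketch of a starting postgresql.conf for a dedicated 2 GB server:
shared_buffers = 400MB          # PostgreSQL's own buffer cache
effective_cache_size = 1400MB   # rough size of the OS filesystem cache
max_fsm_pages = 1000000         # raise if VACUUM VERBOSE reports overflow
checkpoint_segments = 16        # more WAL between checkpoints, fewer spikes
```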
Hello, we recently migrated our system from 8.1.x to 8.2 and when
running dumps have noticed an extreme decrease in speed where the dump
is concerned (by more than a factor of 2). I was wondering if someone
might offer some suggestions as to what may be causing the problem. How
important
What is a decent default setting for work_mem and maintenance_work_mem,
considering I am regularly querying tables that are tens of millions of
rows and have 2-4 GB of RAM?
Also - what is the best way to determine decent settings for
temp_buffers and random_page_cost?
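As a rough, hedged sketch (these are illustrative starting points for testing, not recommendations; all four must be validated against your actual queries):

```
# Starting points to experiment from on a 2-4 GB machine:
work_mem = 8MB                 # per sort/hash operation, per backend
maintenance_work_mem = 256MB   # used by VACUUM, CREATE INDEX, etc.
temp_buffers = 16MB            # per-session buffers for temporary tables
random_page_cost = 3.0         # try lower (toward 2.0) if data is mostly cached
```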
On Tue, 02 Jan 2007
Erik Jones [EMAIL PROTECTED] writes:
Hello, we recently migrated our system from 8.1.x to 8.2 and when
running dumps have noticed an extreme decrease in speed where the dump
is concerned (by more than a factor of 2).
That's odd. pg_dump is normally pretty much I/O bound, at least assuming
Tom Lane wrote:
Erik Jones [EMAIL PROTECTED] writes:
Hello, we recently migrated our system from 8.1.x to 8.2 and when
running dumps have noticed an extreme decrease in speed where the dump
is concerned (by more than a factor of 2).
That's odd. pg_dump is normally pretty much I/O
On Fri, 2006-12-29 at 07:52 -0500, Ron wrote:
A= go through each query and see what work_mem needs to be for that
query to be as RAM resident as possible. If you have enough RAM, set
work_mem for that query that large. Remember that work_mem is =per
query=, so queries running in parallel
Jeremy Haile wrote:
What is a decent default setting for work_mem and maintenance_work_mem,
considering I am regularly querying tables that are tens of millions of
rows and have 2-4 GB of RAM?
Well, work_mem will depend on your query-load. Queries that do a lot of
sorting should benefit from
Thanks for the information!
Are there any rule-of-thumb starting points for these values that you
use when setting up servers? I'd at least like a starting point for
testing different values.
For example, I'm sure setting a default work_mem of 100MB is usually
overkill - but is 5MB usually a
On Tue, 2007-01-02 at 13:19, Jeremy Haile wrote:
Thanks for the information!
Are there any rule-of-thumb starting points for these values that you
use when setting up servers? I'd at least like a starting point for
testing different values.
For example, I'm sure setting a default
So, on a 4 Gig machine you could divide 1G (25%) by the total possible
connections, then again by the average number of sorts you'd expect per
query / connection to get an idea.
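The arithmetic above (25% of RAM, divided by total connections, divided again by average sorts per query) can be sketched as follows; the 100-connection / 2-sorts figures are made up for illustration:

```python
def work_mem_starting_point_mb(ram_gb, max_connections, avg_sorts_per_query,
                               fraction=0.25):
    """Rule-of-thumb starting point for work_mem, in MB.

    Takes a fraction of total RAM (25% here), then divides by the number of
    possible connections and the average sorts per query, since work_mem can
    be allocated once per sort, per concurrent query.
    """
    budget_mb = ram_gb * 1024 * fraction
    return budget_mb / (max_connections * avg_sorts_per_query)

# A 4 GB machine with 100 connections averaging 2 sorts per query:
print(work_mem_starting_point_mb(4, 100, 2))  # 5.12 (MB)
```

So on that hypothetical box, something around 5 MB is a defensible default, with room to raise it for sessions known to run big sorts.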
Thanks for the advice. I'll experiment with higher work_mem settings,
as I am regularly doing sorts on large
Hmm... This gets stranger and stranger. When connecting to the
database with the psql client in 8.2's bin directory and using commands
such as \d the client hangs, or takes an extremely long time. If we
connect to the same 8.2 database with a psql client from 8.1.4, both
remotely and
Erik Jones wrote:
Hmm... This gets stranger and stranger. When connecting to the
database with the psql client in 8.2's bin directory and using commands
such as \d the client hangs, or takes an extremely long time. If we
connect to the same 8.2 database with a psql client from 8.1.4, both
Richard Huxton wrote:
Erik Jones wrote:
Hmm... This gets stranger and stranger. When connecting to the
database with the psql client in 8.2's bin directory and using
commands such as \d the client hangs, or takes an extremely long
time. If we connect to the same 8.2 database with a psql
Erik Jones [EMAIL PROTECTED] writes:
Hmm... This gets stranger and stranger. When connecting to the
database with the psql client in 8.2's bin directory and using commands
such as \d the client hangs, or takes an extremely long time.
Hangs at what point? During connection? Try strace'ing
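Tracing the hanging client might look like this (a sketch only; the host, database, and output file names are placeholders):

```shell
# Attach strace to a fresh psql invocation, following forks and
# timestamping each syscall, to see where it blocks:
strace -f -tt -o psql.trace psql -h db.example.com mydb
# Then inspect the tail of psql.trace for the last syscall before the hang.
```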