On Fri, Oct 13, 2006 at 03:57:23PM -0700, Josh Berkus wrote:
Jim,
Well, that's not what I said (my point being that until the planner and
stats are perfect you need a way to over-ride them)... but I've also
never said hints would be faster or easier than stats modification (I
said I
I think there's 2 things that would help this case. First, partition on
country. You can either do this on a table level or on an index level
by putting where clauses on the indexes (index method would be the
fastest one to test, since it's just new indexes). That should shrink
the size of that
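The index-level approach suggested above can be sketched with partial indexes; table and column names here are hypothetical, since the actual schema isn't shown in this thread:

```sql
-- One partial index per high-volume country; the planner can use these
-- whenever the query's WHERE clause implies the index predicate.
CREATE INDEX contacts_us_name_idx ON contacts (last_name)
    WHERE country = 'US';
CREATE INDEX contacts_ca_name_idx ON contacts (last_name)
    WHERE country = 'CA';

-- A query constrained on country then scans the much smaller index:
SELECT * FROM contacts
 WHERE country = 'US' AND last_name = 'Smith';
```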
On Mon, Oct 16, 2006 at 05:56:54PM -0400, Carlo Stonebanks wrote:
I think there's 2 things that would help this case. First, partition on
country. You can either do this on a table level or on an index level
by putting where clauses on the indexes (index method would be the
fastest one to
On Sun, Oct 15, 2006 at 04:52:12PM +0200, Tobias Brox wrote:
Are there any logs that can help me, and eventually, are there any
ready-made scripts for checking when autovacuum is running, and
eventually for how long it keeps its transactions? I'll probably write
up something myself if not.
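Depending on the version, autovacuum workers are visible in pg_stat_activity; a query along these lines (column names as in the 8.x-era view) can show them and how long they have been running:

```sql
-- Show autovacuum activity and how long each such backend has been
-- at work. Whether autovacuum appears here at all depends on the
-- PostgreSQL version; newer releases report it as 'autovacuum: ...'.
SELECT procpid, datname, current_query,
       now() - query_start AS running_for
  FROM pg_stat_activity
 WHERE current_query LIKE 'autovacuum:%'
    OR current_query ILIKE 'VACUUM%';
```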
On Tue, Oct 17, 2006 at 12:51:19PM -0400, Merlin Moncure wrote:
so, imo alexander is correct:
contacto varchar(255)
...is a false constraint, why exactly 255? is that where the dart landed?
BTW, if we get variable-length varlena headers at some point, then
setting certain limits might make
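If there is no real business rule behind 255, an unconstrained type avoids the false constraint entirely; a genuine rule can still be expressed as a CHECK. A sketch (column name borrowed from the snippet above, the rest hypothetical):

```sql
CREATE TABLE contactos (
    contacto text   -- no arbitrary length limit
    -- or, if a real limit actually exists in the business rules:
    -- contacto text CHECK (length(contacto) <= 100)
);
```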
On Tue, Oct 17, 2006 at 12:25:39PM +0200, Ruben Rubio wrote:
First of all I have to say that I know the database is not ok. There were
people before me who didn't do things right. I would like to
normalize the database, but it takes too much time (there are hundreds
of SQLs to change and
On Wed, Oct 18, 2006 at 02:33:49PM -0700, Jeff Davis wrote:
On Wed, 2006-10-18 at 23:19 +0200, Dimitri Fontaine wrote:
On Wednesday 18 October 2006 23:02, Ioana Danes wrote:
I tried the partitioning scenario but I've got into
the same problem. The max function is not using the
On Wed, Oct 18, 2006 at 03:20:19PM -0700, Graham Davis wrote:
I have a question about index growth.
The way I understand it, dead tuples in indexes were not reclaimed by
VACUUM commands in the past. However, I've read in a few forum posts
that this was changed somewhere between 7.4 and
On Wed, Oct 18, 2006 at 03:32:15PM -0700, Jeff Davis wrote:
On Wed, 2006-10-18 at 17:10 -0500, Jim C. Nasby wrote:
Sorry, don't have the earlier part of this thread, but what about...
SELECT greatest(max(a), max(b)) ...
?
To fill you in, we're trying to get the max of a union
And PLEASE do not post something to 3 lists; it's a lot of extra traffic
for no reason.
Moving to -hackers.
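The rewrite Jeff suggests works because each max() on its own can be satisfied from an index endpoint, which the max over a union cannot; a sketch with hypothetical table and column names:

```sql
-- Instead of taking the max over a union of two columns:
--   SELECT max(x) FROM (SELECT a FROM t UNION ALL SELECT b FROM t) u(x);
-- take the max of each column separately and combine the results:
SELECT greatest(max(a), max(b)) FROM t;
-- With indexes on t(a) and t(b), each max() reads one index endpoint
-- instead of scanning the whole union.
```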
On Wed, Oct 18, 2006 at 05:15:13PM -0400, jungmin shin wrote:
Hello all,
I read a paper, which is Query optimization in the presence of Foreign
Functions.
And in the paper, there is a
On Wed, Oct 18, 2006 at 03:39:56PM -0700, Graham Davis wrote:
So I guess any changes that were made to make VACUUM and FSM include
indexes
does not remove the necessity to reindex (as long as we don't want index
sizes to bloat and grow larger than they need be).
Is that correct?
Not in
On Wed, Oct 18, 2006 at 05:15:13PM -0400, jungmin shin wrote:
Hello all,
I read a paper, which is Query optimization in the presence of Foreign
Functions.
And in the paper, there is a paragraph like the one below.
In order to reduce the number of invocations, caching the results of
invocation
On Thu, Oct 19, 2006 at 03:54:28PM +0200, Tobias Brox wrote:
I just came to think about /proc/sys/swappiness ...
When this one is set to a high number (say, 100 - which is maximum), the
kernel will aggressively swap out all memory that is not being
accessed, to allow more memory for caches.
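For a dedicated database server the usual advice is the opposite of 100: keep the backends' memory resident rather than favoring the file cache. A sysctl fragment, with an illustrative value:

```
# /etc/sysctl.conf -- discourage the kernel from swapping out process
# memory (such as shared_buffers) in favor of filesystem cache
vm.swappiness = 10
```

The same value can be applied at runtime with `sysctl -w vm.swappiness=10` or by writing to /proc/sys/vm/swappiness.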
On Thu, Oct 19, 2006 at 06:00:54PM +0200, Tobias Brox wrote:
[Jim C. Nasby - Thu at 10:28:31AM -0500]
I think it'd be much better to experiment with using much larger
shared_buffers settings. The conventional wisdom there is from 7.x days
when you really didn't want a large buffer
On Thu, Oct 19, 2006 at 06:39:22PM +0200, Tobias Brox wrote:
[Jim C. Nasby - Thu at 11:31:26AM -0500]
Yeah, test setups are a good thing to have...
We would need to replicate the production traffic as well to do reliable
tests. Well, we'll get to that one day ...
Marginally reliable
On Thu, Oct 19, 2006 at 06:53:49PM +0200, Tobias Brox wrote:
[Jim C. Nasby - Thu at 11:45:32AM -0500]
The issue with pg_xlog is you don't need bandwidth... you need super-low
latency. The best way to accomplish that is to get a battery-backed RAID
controller that you can enable write
On Thu, Oct 19, 2006 at 06:19:16PM +0100, Richard Huxton wrote:
OK - these plans look about the same, but the time is greatly different.
Both have rows=140247 as the estimated number of rows in tbl_reg. Either
you have many more rows in the second case (in which case you're not
running
On Mon, Oct 23, 2006 at 09:45:59AM +0100, Richard Huxton wrote:
Mike wrote:
Hello friends,
I am responsible for maintaining a high volume website using postgresql
8.1.4. Given the amount of reads and writes, I vacuum full the server a
few times a week around 1, 2 AM shutting down the site
On Sat, Oct 21, 2006 at 12:12:59AM +0930, Shane Ambler wrote:
Generally more disks at slower speed - 2 10K disks in raid 0 is faster
than 1 15K disk. More disks also allow more options.
Not at writing they're not (unless you're using RAID0... ugh).
--
Jim Nasby
On Mon, Oct 23, 2006 at 11:10:19AM -0400, Worky Workerson wrote:
I am most interested in loading two tables, one with about 21 (small)
VARCHARs where each record is about 200 bytes, and another with 7
INTEGERs, 3 TIMESTAMPs, and 1 BYTEA where each record is about 350
bytes.
On Sat, Oct 21, 2006 at 08:43:05AM -0700, John Philips wrote:
I heard some say that the transaction log should be on
it's own array, others say it doesn't hurt to have it
on the same array as the OS. Is it really worthwhile
to put it on it's own array?
It all depends on the controller and
On Mon, Oct 23, 2006 at 03:37:47PM -0700, Craig A. James wrote:
Jim C. Nasby wrote:
http://stats.distributed.net used to use a perl script to do some
transformations before loading data into the database. IIRC, when we
switched to using C we saw 100x improvement in speed, so I suspect
On Tue, Oct 24, 2006 at 09:17:08AM -0400, Worky Workerson wrote:
http://stats.distributed.net used to use a perl script to do some
transformations before loading data into the database. IIRC, when we
switched to using C we saw 100x improvement in speed, so I suspect that
if you want
On Mon, Oct 23, 2006 at 04:54:00PM -0300, Mara Dalponte wrote:
Hello,
I have a query with several join operations and applying the same
filter condition over each involved table. This condition is a complex
predicate over an indexed timestamp field, depending on some
parameters.
To
On Mon, Oct 23, 2006 at 05:51:40PM -0400, Steve wrote:
Hello there;
I've got an application that has to copy an existing database to a new
database on the same machine.
I used to do this with a pg_dump command piped to psql to perform the
copy; however the database is 18 gigs large on
On Tue, Oct 24, 2006 at 10:36:04PM -0700, Craig A. James wrote:
Jim C. Nasby wrote:
Well, given that perl is using an entire CPU, it sounds like you should
start looking either at ways to remove some of the overhead from perl,
or to split that perl into multiple processes.
I use Perl
On Wed, Oct 25, 2006 at 08:03:38AM -0400, Worky Workerson wrote:
I'm just doing CSV style transformations (and calling a lot of
functions along the way), but the end result is a straight bulk load
of data into a blank database. And we've established that Postgres
can do *way* better than what
On Wed, Oct 25, 2006 at 07:55:38AM -0300, Mara Dalponte wrote:
On 10/24/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
On Mon, Oct 23, 2006 at 04:54:00PM -0300, Mara Dalponte wrote:
Hello,
I have a query with several join operations and applying the same
filter condition over each involved
On Wed, Oct 25, 2006 at 11:25:01AM -0400, Worky Workerson wrote:
I'm guessing the high bursts are checkpoints. Can you check your log
files for pg and see if you are getting warnings about checkpoint
frequency? You can get some mileage here by increasing wal files.
Nope, nothing in the
On Wed, Oct 25, 2006 at 04:32:16PM -0400, Carlo Stonebanks wrote:
I have a question for you: did you have a long running query keeping open
a transaction? I've just noticed the same problem here, but things cleaned
up immediately when I aborted the long-running transaction.
No, the only
On Wed, Oct 25, 2006 at 05:31:29PM -0400, Mark Lonsdale wrote:
Set my sort_mem to 8192
You really need to look at what your workload is before trying to tweak
sort_mem. With 8G of memory, sort_mem=400000 (~400MB) with only 10
active connections might be a good setting. It's usually better to get
On Thu, Oct 26, 2006 at 04:06:09PM +0100, Gavin Hamill wrote:
On Thu, 26 Oct 2006 10:47:21 -0400
Tom Lane [EMAIL PROTECTED] wrote:
Gavin Hamill [EMAIL PROTECTED] writes:
Nodes 2 and 3 take only the tables necessary to run our search (10
out of the full 130) and are much lighter (only
On Thu, Oct 26, 2006 at 09:35:56PM +0100, Gavin Hamill wrote:
On Thu, 26 Oct 2006 14:17:29 -0500
Jim C. Nasby [EMAIL PROTECTED] wrote:
Are you sure that there's nothing else happening on the machine that
could affect the vacuum times? Like, say a backup? Or perhaps updates
coming in from
On Thu, Oct 26, 2006 at 03:03:38PM -0700, George Pavlov wrote:
i have wondered myself. i wouldn't do it through pgAdmin (not sure what
the best test is, but i thought psql from the same machine might be
better--see below). anyway, the funny thing is that if you concatenate
them the time drops:
On Tue, Nov 14, 2006 at 09:17:08AM -0500, Merlin Moncure wrote:
On 11/14/06, Cosimo Streppone [EMAIL PROTECTED] wrote:
I must say I lowered shared_buffers to 8192, as it was before.
I tried raising it to 16384, but I can't seem to find a relationship
between shared_buffers and performance
On Sat, Nov 18, 2006 at 08:13:26PM -0700, Brian Wipf wrote:
It certainly is unfortunate if Guido's right and this is an upper
limit for OS X. The performance benefit of having high shared_buffers
on our mostly read database is remarkable.
Got any data about that you can share? People have
On Wed, Nov 22, 2006 at 09:02:04AM -0800, Jeff Frost wrote:
A valid question. Does the caching raid controller negate the desire to
separate pg_xlog from PGDATA?
Theoretically, yes. But I don't think I've seen any hard numbers from
testing.
On Wed, Nov 22, 2006 at 04:35:37PM -0500, Bucky Jordan wrote:
While I'm at it, if I have time I'll run pgbench with pg_log on a
separate RAID1, and one with it on a RAID10x6, but I don't know how
useful those results will be.
Very, but only if the controller has write-caching enabled. For
On Sat, Nov 18, 2006 at 05:28:46PM -0800, Richard Troy wrote:
soapbox ...I read a large number of articles on this subject and am
absolutely dumbfounded by the -ahem- idiots who think killing a random
process is an appropriate action. I'm just taking their word for it that
there's some kind of
On Sun, Nov 26, 2006 at 12:24:17PM +0100, Joost Kraaijeveld wrote:
Hi,
Are there guidelines (or any empirical data) available how to determine
how often a table should be vacuumed for optimum performance or is this
an experience / trial-and-error thing?
Most of the time I just turn autovac
On Mon, Nov 27, 2006 at 07:23:47AM +, Brian Wipf wrote:
On 26-Nov-06, at 11:25 PM, Jim C. Nasby wrote:
On Sat, Nov 18, 2006 at 08:13:26PM -0700, Brian Wipf wrote:
It certainly is unfortunate if Guido's right and this is an upper
limit for OS X. The performance benefit of having high
On Mon, Dec 04, 2006 at 05:41:14PM +0100, Arjen van der Meijden wrote:
Since I'd rather not send the entire list of queries to the entire
world, is it OK to send both explain analyze-files to you off list?
Can you post them on the web somewhere so everyone can look at them?
Also, are you
On Mon, Dec 04, 2006 at 09:42:57AM -0800, Jeff Davis wrote:
fsm_pages = 200,000 ??? Based this on some statistics about the number
of pages freed from a vacuum on older server. Not sure if its fair
to calculate this based on vacuum stats of 7.3.4 server?
Might as well make it a
Remember that as you increase shared_buffers you might need to make the
bgwriter more aggressive too.
On Thu, Dec 07, 2006 at 11:42:39AM -0500, Bill Moran wrote:
I'm gearing up to do some serious investigation into performance for
PostgreSQL with regard to our application. I have two issues
On Thu, Dec 28, 2006 at 02:15:31PM -0800, Jeff Frost wrote:
When benchmarking various options for a new PG server at one of my clients,
I tried ext2 and ext3 (data=writeback) for the WAL and it appeared to be
fastest to have ext2 for the WAL. The winning time was 157m46.713s for
ext2,
On Sun, Jan 07, 2007 at 11:26:01PM -0500, Guy Rouillier wrote:
Ok, I ran with the settings below, but with
shared_buffers=768MB
effective_cache_size=2048MB
fsync=on
This run took 29000 seconds. I'm beginning to think configuration
changes are not going to buy significant additional
On Thu, Dec 28, 2006 at 10:35:29PM -0500, Dave Cramer wrote:
start with 25% of your 12G as shared buffers, and 75% of 12G for
effective cache
I'm curious... why leave 3G for the kernel? Seems like overkill...
Granted, as long as you're in the ballpark on effective_cache_size
that's all that
Are you 100% certain that both builds are using all the same libraries?
And to be an apples-apples comparison, you really need to ensure that
the datadir is on the same filesystem in both cases (that's the first
thing I'd check).
Also, that pg_index... error sounds like the second build has been
On Wed, Jan 03, 2007 at 11:56:20AM -0500, Tom Lane wrote:
Erik Jones [EMAIL PROTECTED] writes:
Tom Lane wrote:
I could see this taking an unreasonable amount of time if you had a huge
number of pg_class rows or a very long search_path --- is your database
at all out of the ordinary in
On Wed, Jan 03, 2007 at 12:24:24PM -0500, Jeremy Haile wrote:
I am sure that this has been discussed before, but I can't seem to find
any recent posts. (I am running PostgreSQL 8.2)
I have always ran PostgreSQL on Linux in the past, but the company I am
currently working for uses Windows on
On Fri, Jan 05, 2007 at 01:15:44PM -0500, Reid Thompson wrote:
On Fri, 2007-01-05 at 04:10 +0100, Grega Bremec wrote:
The main reason I'm writing this mail, though, is to suggest you take a look
at Oryx, http://www.oryx.com/; They used to have this product called
Mailstore, which was
BTW, someone coming up with a set of functions to handle partitioning
for the general 'partition by time' case would make a GREAT project on
pgFoundry.
On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:
Take a look at the set of partitioning functions I wrote shortly after
the
On Mon, Jan 08, 2007 at 03:02:24PM -0500, Steven Flatt wrote:
We use partitioned tables extensively and we have observed linear
performance degradation on inserts as the number of rules on the master
table grows (i.e. number of rules = number of partitions). We had to come
up with a solution
On Tue, Jan 09, 2007 at 09:10:51AM -0800, Jeff Frost wrote:
On Tue, 9 Jan 2007, Jim C. Nasby wrote:
On Thu, Dec 28, 2006 at 02:15:31PM -0800, Jeff Frost wrote:
When benchmarking various options for a new PG server at one of my
clients,
I tried ext2 and ext3 (data=writeback) for the WAL
On Tue, Jan 09, 2007 at 12:26:41PM -0500, Jeremy Haile wrote:
I am developing an application that has very predictable database
operations:
-inserts several thousand rows into 3 tables every 5 minutes. (table
contain around 10 million rows each)
-truncates and rebuilds aggregate tables
On Wed, Jan 10, 2007 at 03:28:00PM -0500, Jeremy Haile wrote:
This seems so much more intuitive and simpler than what is required to
set it up in PostgreSQL. Does PostgreSQL's approach to table
partitioning have any advantage over MySQL? Is a nicer syntax planned
for Postgres?
The focus was
to
ensure there's no long running transactions).
--
Jim C. Nasby, Database Architect [EMAIL PROTECTED]
512.569.9461 (cell) http://jim.nasby.net
On Wed, Jan 10, 2007 at 04:39:06PM -0500, Steven Flatt wrote:
On 1/10/07, Jim C. Nasby [EMAIL PROTECTED] wrote:
Except for the simplest partitioning cases, you'll be much better off
using a trigger on the parent table to direct inserts/updates/deletes to
the children. As a bonus, using
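A minimal trigger-based router of the kind described above, for date-range partitions; table, column, and partition names are hypothetical:

```sql
CREATE OR REPLACE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2007-01-01'
       AND NEW.logdate < DATE '2007-02-01' THEN
        INSERT INTO measurements_200701 VALUES (NEW.*);
    ELSE
        INSERT INTO measurements_overflow VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trg
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();
```

Unlike a stack of rules, only one trigger fires per row, so insert cost doesn't grow with the number of partitions the way the rule-per-partition setup described earlier in this thread does.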
On Wed, Jan 10, 2007 at 04:48:42PM -0500, Jeremy Haile wrote:
BTW, that's the default values for analyze... the defaults for vacuum
are 2x that.
Yeah - I was actually more concerned that tables would need to be
analyzed more often than I was about vacuuming too often, so I used
analyze as
On Thu, Jan 11, 2007 at 12:10:34AM -0300, Alvaro Herrera wrote:
Jim C. Nasby wrote:
Is the best way to do that usually to lower the scale factors? Is it
ever a good approach to lower the scale factor to zero and just set the
thresholds to a pure number of rows? (when setting
On Thu, Jan 11, 2007 at 12:15:50PM +, Simon Riggs wrote:
On Wed, 2007-01-10 at 16:00 -0500, Steven Flatt wrote:
On 1/9/07, Simon Riggs [EMAIL PROTECTED] wrote:
If you are doing date range partitioning it should be fairly
simple to
load data into the latest
On Wed, Jan 24, 2007 at 02:37:44PM +0900, Galy Lee wrote:
1. How do we know if autovacuum is enough for my application, or should
I setup a vacuum manually from cron for my application?
Generally I trust autovac unless there's some tables where it's critical
that they be vacuumed
On Thu, Jan 25, 2007 at 12:52:02AM -0300, Alvaro Herrera wrote:
Jim C. Nasby wrote:
I'll generally start with a cost delay of 20ms and adjust based on IO
utilization.
I've been considering set a default autovacuum cost delay to 10ms; does
this sound reasonable?
For a lightly loaded
-running vacuum transaction (prior to 8.2), but in many systems
you'll still get some use out of other vacuums.
It's not necessarily the join order that's an issue; it could also be
due to the merge join that it does in the first case. I've also run into
situations where the cost estimate for a merge join is way off the mark.
Rather than forcing the join order, you might try setting
enable_mergejoin=false.
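A quick way to test that theory, scoped to the current session; the query shown is a placeholder for the actual slow query:

```sql
-- Disable merge joins for this session only, to see whether the bad
-- merge-join cost estimate is the culprit.
SET enable_mergejoin = off;
EXPLAIN ANALYZE SELECT * FROM a JOIN b USING (id);  -- your slow query here
RESET enable_mergejoin;
```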
On Fri, Feb 23, 2007 at 10:13:31AM +0100, Csaba Nagy wrote:
You likely don't need the nightly full vacuum run... we also do here a
nightly vacuum beside autovacuum, but not a full one, only for tables
which are big enough that we don't want autovacuum to touch them in high
business time but
If you're doing much updating at all you'll also want to bump up
checkpoint_segments. I like setting checkpoint_warning just a bit under
checkpoint_timeout as a way to monitor how often you're checkpointing
due to running out of segments.
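The checkpoint_warning trick described above, as a postgresql.conf fragment; the numbers are illustrative, not recommendations:

```
# postgresql.conf -- illustrative values, tune to your workload
checkpoint_segments = 32    # more WAL segments between checkpoints
checkpoint_timeout  = 300   # seconds
checkpoint_warning  = 290   # just under timeout: any warning logged means
                            # a checkpoint was forced by running out of segments
```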
With a large shared_buffers you'll likely need to make the
On Fri, Feb 23, 2007 at 10:14:29AM -0800, Jeff Davis wrote:
The postgresql.conf says that the maximum checkpoint_timeout is 1 hour.
However, the following messages seem to suggest that it may be useful to
set the value significantly higher to reduce unnecessary WAL volume:
On Fri, Feb 23, 2007 at 12:23:08PM -0800, Jeff Davis wrote:
On Fri, 2007-02-23 at 14:02 -0600, Jim C. Nasby wrote:
say that checkpoints cause extra disk I/O. Is there a good way to
measure how much extra I/O (and WAL volume) is caused by the
checkpoints? Also, it would be good to know
On Fri, Mar 30, 2007 at 11:19:09AM -0500, Erik Jones wrote:
On Fri, Mar 30, 2007 at 04:25:16PM +0200, Dimitri wrote:
The problem is while your goal is to commit as fast as possible -
it's a pity to waste I/O operation speed just keeping common block size...
Let's say if your transaction
On Thu, Apr 05, 2007 at 03:10:43PM -0500, Erik Jones wrote:
Nope. What we never tracked down was the factor of 10 drop in
database transactions, not disk transactions. The write volume was
most definitely due to the direct io setting -- writes are now being
done in terms of the
On Wed, Apr 11, 2007 at 12:50:37AM +0200, Tobias Brox wrote:
We had problems again, caused by long running transactions. I'm
monitoring the pg_stat_activity view, checking the query_start of all
requests that are not idle - but this one slipped under the radar as the
application was running
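The radar gap here is that an idle-in-transaction session has no recent query to report, so watching query_start of non-idle backends misses it. A sketch of a check that catches both cases (xact_start exists from 8.3 on; on older versions, look for sessions whose current_query is '&lt;IDLE&gt; in transaction'):

```sql
-- List open transactions by age, including idle-in-transaction sessions.
SELECT procpid, datname, usename,
       now() - xact_start AS xact_age, current_query
  FROM pg_stat_activity
 WHERE xact_start IS NOT NULL
 ORDER BY xact_start;
```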
On Fri, May 11, 2007 at 01:25:04PM -0400, Alvaro Herrera wrote:
Guillaume Cottenceau wrote:
Guillaume Cottenceau gc 'at' mnc.ch writes:
With that in mind, I've tried to estimate how much benefit would
be brought by running VACUUM FULL, with the output of VACUUM
VERBOSE. However, it
On Sat, May 12, 2007 at 03:28:45PM +0100, Heikki Linnakangas wrote:
In the case of it being disk-block based, my
inclination would be to let the kernel do the buffering. In the case of
the cache being table-row-based, I would expect it to be much more
space-efficient and I would be
Have you either re-loaded the config or restarted the server since
making those changes?
On Mon, May 14, 2007 at 09:16:54AM -0700, Y Sidhu wrote:
I am trying to use them. I have set these values in my conf file:
stats_start_collector = true
stats_reset_on_server_start = false
Please include the list in your replies...
Ok, so you've got stats collection turned on. What's the question then?
And are stats_block_level and stats_row_level also enabled?
On Mon, May 14, 2007 at 09:28:46AM -0700, Y Sidhu wrote:
yes
Yudhvir
===
On 5/14/07, Jim C. Nasby [EMAIL
On Mon, May 14, 2007 at 11:09:21AM -0700, Y Sidhu wrote:
The stats_block_level and stats_row_level are NOT enabled. The question is
how to use pg_stats. Do I access/see them via the ANALYZE command? or using
SQL. I cannot find any document which will get me started on this.
Ok, we're both
On Tue, May 15, 2007 at 11:47:29AM +0100, Richard Huxton wrote:
Tarhon-Onu Victor wrote:
On Mon, 14 May 2007, Richard Huxton wrote:
1. Is this one client making 500 requests, or 500 clients making one
request per second?
Up to 250 clients will make up to 500 requests per second.
On Mon, May 14, 2007 at 08:20:49PM -0400, Tom Lane wrote:
Y Sidhu [EMAIL PROTECTED] writes:
it may be table fragmentation. What kind of tables? We have 2 of them which
experience lots of adds and deletes only. No updates. So a typical day
experiences record adds a few dozen times on the
I'm guessing you're seeing the affect of softupdates. With those enabled
it can take some time before the space freed by a delete will actually
show up as available.
On Tue, May 15, 2007 at 01:18:42PM -0700, Y Sidhu wrote:
Anyone seen PG filling up a 66 GB partition from say 40-ish percentage to
On Wed, May 16, 2007 at 09:41:46AM +0200, Guillaume Cottenceau wrote:
Michael Stone mstone+postgres 'at' mathom.us writes:
On Tue, May 15, 2007 at 06:43:50PM +0200, Guillaume Cottenceau wrote:
patch - basically, I think the documentation under estimates (or
sometimes misses) the benefit
No, it's part of FreeBSD's UFS. google FreeBSD softupdates and you
should get plenty of info.
As I said, it's probably not worth worrying about.
On Wed, May 16, 2007 at 08:21:23AM -0700, Y Sidhu wrote:
What do you mean by softupdates? Is that a parameter in what I am guessing
is the conf file?
On Wed, May 16, 2007 at 12:09:26PM -0400, Alvaro Herrera wrote:
Guillaume Cottenceau wrote:
Jim C. Nasby decibel 'at' decibel.org writes:
On Wed, May 16, 2007 at 09:41:46AM +0200, Guillaume Cottenceau wrote:
[...]
Come on, I don't suggest to remove several bold warnings about
On Fri, May 18, 2007 at 04:26:05PM -0700, Y Sidhu wrote:
To answer your original question, a way to take a look at how bloated
your tables are would be to ANALYZE, divide reltuples by relpages from
pg_class (gives how many rows per page you have) and compare that to 8k
/ average row size. The
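The reltuples/relpages check described above can be run directly against pg_class after an ANALYZE; compare the result to the ideal figure of 8192 divided by your average row width plus per-row overhead:

```sql
-- Rough rows-per-page figure for the largest tables; a value well
-- below 8192 / (avg row width + overhead) suggests bloat.
SELECT relname,
       reltuples::numeric / nullif(relpages, 0) AS rows_per_page,
       relpages
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY relpages DESC
 LIMIT 20;
```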
On Mon, May 21, 2007 at 03:50:27PM -0400, Merlin Moncure wrote:
I work on a system about like you describe... 400tps constant, 24/7.
Major challenges are routine maintenance and locking. Autovacuum is
your friend but you will need to schedule a full vacuum once in a
while because of tps
On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:
I also went into benchmarking mode last night for my own
amusement when I read on the linux-kernel ML that
NCQ support for nForce5 chips was released.
I tried current PostgreSQL 8.3devel CVS.
pgbench over local TCP connection
On Tue, May 22, 2007 at 09:29:00AM +0200, PFC wrote:
This does not run a complete sort on the table. It would be about as
fast as your seq scan disk throughput. Obviously, the end result is
not as
good as a real CLUSTER since the table will be made up of several ordered
On Tue, May 22, 2007 at 12:56:21PM -0400, Chander Ganesan wrote:
Are there any performance improvements that come from using a domain
over a check constraint (aside from the ease of management component)?
No. Plus support for domain constraints isn't universal (plpgsql doesn't
honor them, for
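A domain and the equivalent column-level CHECK, side by side; performance is the same, the domain merely centralizes the rule (names are hypothetical):

```sql
CREATE DOMAIN percentage AS integer
    CHECK (VALUE BETWEEN 0 AND 100);

-- These two tables enforce the same constraint:
CREATE TABLE results_domain (score percentage);
CREATE TABLE results_check  (score integer CHECK (score BETWEEN 0 AND 100));
```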
On Wed, May 23, 2007 at 11:58:06AM -0700, Y Sidhu wrote:
Is there any easy way to take a database and add/delete records to create
fragmentation of the records and indexes. I am trying to recreate high
vacuum times.
Update random rows, then do a vacuum. That will result in free space in
random
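A sketch of that approach, with hypothetical table and column names; each no-op UPDATE still creates a new tuple version, leaving a dead one behind:

```sql
-- Touch a random ~10% of rows to scatter dead tuples through the table.
UPDATE some_table
   SET some_col = some_col
 WHERE random() < 0.1;

-- The vacuum then has scattered free space to record, which should
-- drive up its runtime the way the original poster wants to reproduce.
VACUUM VERBOSE some_table;
```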
What does top report as using the most memory?
On Wed, May 23, 2007 at 11:01:24PM -0300, Leandro Guimarães dos Santos wrote:
Hi all,
I have a 4 CPU, 4GB Ram memory box running PostgreSql 8.2.3 under Win 2003 in
a very high IO intensive insert application.
The application
On Fri, May 25, 2007 at 10:29:30AM +0200, Bastian Voigt wrote:
Hi *,
for caching large autogenerated XML files, I have created a bytea table
in my database so that the cached files can be used by multiple servers.
There are about 500 rows and 10-20 Updates per minute on the table. The
On Tue, Jul 03, 2007 at 04:06:29PM +0100, Heikki Linnakangas wrote:
Dimitri wrote:
I'm very curious to know if we may expect or guarantee any data
consistency with WAL sync=OFF but using file system mounted in Direct
I/O mode (means every write() system call called by PG really writes
to disk
On Mon, Jul 09, 2007 at 01:48:44PM -0400, Jignesh K. Shah wrote:
Hi Heikki,
Heikki Linnakangas wrote:
That's really exciting news!
I'm sure you spent a lot of time tweaking the settings, so let me ask
you something topical:
How did you end up with the bgwriter settings you used?
On Fri, Jul 13, 2007 at 12:30:46PM -0400, Tom Lane wrote:
Adriaan van Os [EMAIL PROTECTED] writes:
I started another test. I copied an existing database (not very large,
35 tables, typically a few hundred up to a few thousand records) with
CREATE DATABASE testdb TEMPLATE mydb and started to
On Sat, Jul 14, 2007 at 12:19:51PM +0200, Hannes Dorbath wrote:
Gregory Stark wrote:
From the DELL site it seems this `PERC 5/i' on board controller
(assuming that's what you have) doesn't even have a BBU. If you don't
plan to post here in a few weeks again about data corruption, go out and
On Fri, Jul 13, 2007 at 09:12:34PM +0200, Pavel Stehule wrote:
Hello,
I tested speed difference between TRUNCATE TABLE and DROP TABLE
(tested on my notebook ext3 and Linux fedora 7):
CREATE OR REPLACE FUNCTION test01() RETURNS SETOF double precision
AS $$
DECLARE t1 timestamp with time
On Tue, Jul 17, 2007 at 04:10:30PM +0200, Marc Mamin wrote:
shared_buffers= 262143
You should at least try some runs with this set far, far larger. At
least 10% of memory, but it'd be nice to see what happens with this set
to 50% or higher as well (though don't set it larger than the database
On Wed, Jul 18, 2007 at 10:03:00AM +0100, Heikki Linnakangas wrote:
Ron Mayer wrote:
Seems Linux has IO scheduling through a program called ionice.
Has anyone here experimented with using it rather than
vacuum sleep settings?
I looked at that briefly for smoothing checkpoints, but it
On Wed, Jul 18, 2007 at 11:30:48AM -0500, Scott Marlowe wrote:
EnterpriseDB, a commercially enhanced version of PostgreSQL can do
query parallelization, but it comes at a cost, and that cost is making
sure you have enough spindles / I/O bandwidth that you won't be
actually slowing your system