On Fri, 20 Feb 2004, Sean Shanny wrote:
max_connections = 100
# - Memory -
shared_buffers = 16000 # min 16, at least max_connections*2, 8KB each
sort_mem = 256000 # min 64, size in KB
You might wanna drop sort_mem somewhat and just set it during your imports to
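The per-import override suggested here can be done at the session level; a minimal sketch, assuming a 7.x-era server (where the setting is still called sort_mem, measured in KB) and an illustrative value:

```sql
-- Keep the global sort_mem modest in postgresql.conf, then raise it only
-- for the session doing the import (65536 KB here is illustrative).
SET sort_mem = 65536;
-- ... run the bulk import and any big sorts/index builds in this session ...
RESET sort_mem;  -- return to the configured default
```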
On Fri, 13 Feb 2004 [EMAIL PROTECTED] wrote:
Josh, the disks in the new system should be substantially faster than
the old. Both are Ultra160 SCSI RAID 5 arrays, but the new system has
15k RPM disks, as opposed to the 10k RPM disks in the old system.
Spindle speed does not correlate with
On Tue, 17 Feb 2004, Craig Thomas wrote:
On Tue, 17 Feb 2004, Konstantin Tokar wrote:
Hi!
Does PostgreSQL allow creating the tables and indices of a single
database on multiple disk drives in order to increase
performance, as Oracle does? If a symbolic reference is the
On Wed, 12 May 2004, Grega Bremec wrote:
...and on Tue, May 11, 2004 at 03:02:24PM -0600, scott.marlowe used the keyboard:
If you get the LSI megaraid, make sure you're running the latest megaraid
2 driver, not the older, slower 1.18 series. If you are running linux,
look
On Tue, 11 May 2004, Paul Tuckfield wrote:
If you are having a write storm or bursty writes that's burying
performance, a SCSI RAID controller with writeback cache will greatly
improve the situation, but I do believe they run around $1-2k. If
it's a write-specific problem, the cache matters
On Tue, 11 May 2004, Paul Tuckfield wrote:
Love that froogle.
It looks like a nice card. One thing I didn't get straight is if
the cache is writethru or write back.
If the original poster's problem is truly a burst write problem (and not
linux caching or virtual memory
On Tue, 11 May 2004, Allan Wind wrote:
On 2004-05-11T15:29:46-0600, scott.marlowe wrote:
The other nice thing about the LSI cards is that you can install more than one
and they act like one big RAID array. i.e. install two cards with a 20 drive
RAID0 then make a RAID1 across them, and if one
On Tue, 11 May 2004, Bjoern Metzdorf wrote:
scott.marlowe wrote:
Well, from what I've read elsewhere on the internet, it would seem the
Opterons scale better to 4 CPUs than the basic Xeons do. Of course, the
exception to this is SGI's altix, which uses their own chipset and runs
On Tue, 11 May 2004, Bjoern Metzdorf wrote:
scott.marlowe wrote:
Sure, adaptec makes one, so does lsi megaraid. Dell resells both of
these, the PERC3DI and the PERC3DC are adaptec, then lsi in that order, I
believe. We run the lsi megaraid with 64 megs battery backed cache.
The LSI
On Mon, 10 May 2004, Anderson Boechat Lopes wrote:
Hmm... now I think I'm beginning to understand.
A plain vacuum analyze is recommended at least once a day, after
adding or deleting a large number of records, and not vacuum full analyze.
I've performed the vacuum full analyze
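The routine being converged on here can be sketched as plain (non-FULL) maintenance; the table name is a placeholder:

```sql
-- Daily, or after large insert/delete batches: plain VACUUM marks dead row
-- space reusable without the exclusive table locks VACUUM FULL takes,
-- and ANALYZE refreshes the planner's statistics.
VACUUM ANALYZE;            -- whole database
VACUUM ANALYZE mytable;    -- or just the heavily-churned table
```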
Sorry about that, I meant kbytes, not megs. My point being it's NOT
measured in 8k blocks, like a lot of other settings. sorry for the mixup.
On Fri, 7 May 2004, Bricklen wrote:
scott.marlowe wrote:
sort_mem might do with a small bump, especially if you're only handling a
few
On Sat, 5 Jun 2004, Carlos Eduardo Smanioto wrote:
Hello all,
What is the biggest PostgreSQL database (in total size and number of
records) that you know of?
http://www.postgresql.org/docs/faqs/FAQ.html#4.5
On Mon, 3 May 2004, Joseph Shraibman wrote:
I have a big table with some int fields. I frequently need to do
queries like:
SELECT if2, count(*) FROM table WHERE if1 = 20 GROUP BY if2;
The problem is that this is slow and frequently requires a seqscan. I'd
like to cache the results in
On Mon, 26 Apr 2004, Stephan Szabo wrote:
On Fri, 23 Apr 2004 [EMAIL PROTECTED] wrote:
I have a query which I think should be using an index all of the time but
postgres only uses the index part of the time. The index
(ticket_crm_map_crm_id_suppid) has the where clause column (crm_id)
On Wed, 28 Apr 2004, Jie Liang wrote:
All,
After I upgraded postgres from 7.3.4 to 7.4.2, one of my programs got the following error:
ERROR: out of memory
DETAIL: Failed on request of size 92.
Any idea?
Is memory management very different between 7.3.4 and 7.4.2?
this
On Mon, 26 Apr 2004, Rob Fielding wrote:
scott.marlowe wrote:
On Fri, 23 Apr 2004, Chris Hoover wrote:
DB's on Powervaults 220S using raid 5 (over 6 disks)
What controller is this, the adaptec? We've found it to be slower than
the LSI megaraid based controller, but YMMV
On Thu, 22 Apr 2004, Sean Shanny wrote:
I should have included this as well:
fsync | on
shared_buffers | 4000
sort_mem | 64000
For purposes of loading only, you can try turning off fsync, assuming this
is a virgin load and
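As a hedged sketch of that load-only setup (a postgresql.conf fragment; only safe when a failed load can simply be re-run from scratch):

```
# Temporary, for a virgin bulk load only: skip the fsync on WAL writes.
# An OS crash mid-load can leave the cluster inconsistent, but with a
# virgin load you just re-initdb and reload.
fsync = off
# Set fsync = on again and restart before going into production.
```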
On Fri, 23 Apr 2004, Chris Hoover wrote:
DB's on Powervaults 220S using raid 5 (over 6 disks)
What controller is this, the adaptec? We've found it to be slower than
the LSI megaraid based controller, but YMMV.
Running RH ES 2.1
Are you running the latest kernel for ES 2.1? Early 2.4
On Fri, 23 Apr 2004, Chris Hoover wrote:
On Friday 23 April 2004 13:21, scott.marlowe wrote:
On Fri, 23 Apr 2004, Chris Hoover wrote:
DB's on Powervaults 220S using raid 5 (over 6 disks)
What controller is this, the adaptec? We've found it to be slower than
the LSI megaraid based
On Thu, 22 Apr 2004, Pallav Kalva wrote:
Hi
We are in the process of building a new machine for our production
database. Below you will see some of the hardware specs for the machine.
I need some help with setting these parameters (shared buffers,
effective cache, sort mem) in the
On Wed, 7 Apr 2004, Andrew McMillan wrote:
On Wed, 2004-04-07 at 20:56, huang yaqin wrote:
Hello, Richard Huxton,
You said turning fsync off may cause losing data, that's terrible.
I use SCSI disks, and the file system is ext3. I tuned postgresql.conf and can't get
any
On Wed, 7 Apr 2004, Nid wrote:
Rather than ask some general, unanswerable question on how to tune my
database...I thought I ask where I might find an introduction to...or
manual/tutorial for the depths of managing a postgres db. Books?
Websites? Assume a basic to intermediate knowledge
On Mon, 5 Apr 2004, Kevin Barnard wrote:
On 5 Apr 2004 at 12:05, Rosser Schwarz wrote:
Just this morning, however, I created a copy of the target table (all
4.7M rows), with absolutely no foreign keys referring to it, and ran
the update against the copy. That update took 2300 seconds.
On Fri, 26 Mar 2004, Fabio Esposito wrote:
On Fri, 26 Mar 2004, scott.marlowe wrote:
It maintains 48 hours of data, so it's not a large database; roughly
600 MB. We do this by running a housekeeping program in a cron job.
It deletes all data older than 48 hours, then vacuum analyzes
On Thu, 25 Mar 2004, Ara Anjargolian wrote:
I've run into this odd planner choice which I don't quite understand.
I have two tables articles, users and
articles.article_id and users.user_id are primary keys.
Inside articles there are two optional fields author_id1, author_id2
which all
On Tue, 23 Mar 2004, Fabio Esposito wrote:
Hello fellow PostgreSQL users.
We've been working on this interesting issue for some time now, and we're
hoping that someone can help.
We've recently integrated postgres into an existing mature app. It's a
time-sensitive 24x7 system. It runs
On Wed, 24 Mar 2004, pginfo wrote:
Hi,
I am running pg 7.4.1 on linux box.
I have a middle-sized DB with many updates and after it I try to run
vacuum full analyze.
Is there a reason to not use just regular vacuum / analyze (i.e. NOT
full)?
It takes about 2 h.
Full vacuums, by their
On Wed, 24 Mar 2004, pginfo wrote:
Hi,
scott.marlowe wrote:
On Wed, 24 Mar 2004, pginfo wrote:
Hi,
I am running pg 7.4.1 on linux box.
I have a middle-sized DB with many updates and after it I try to run
vacuum full analyze.
Is there a reason to not use just regular
On Tue, 23 Mar 2004, Josh Berkus wrote:
Matt, Stalin,
As for the compute intensive side (complex joins sorts etc), the Dell will
most likely beat the Sun by some distance, although
what the Sun lacks in CPU power it may make up a bit in memory bandwidth/
latency.
Personally, I've
On Thu, 11 Mar 2004, Christopher Kings-Lynne wrote:
You could also consider not using syslog at all: let the postmaster
output to its stderr, and pipe that into a log-rotation program.
I believe some people use Apache's log rotator for this with good
results.
Not an option I'm afraid.
On Mon, 15 Mar 2004, Bruce Momjian wrote:
scott.marlowe wrote:
On Thu, 11 Mar 2004, Christopher Kings-Lynne wrote:
You could also consider not using syslog at all: let the postmaster
output to its stderr, and pipe that into a log-rotation program.
I believe some people use
On Wed, 10 Mar 2004, Marcus Andree S. Magalhaes wrote:
Guys,
I got a Java program to tune. It connects to a 7.4.1 postgresql server
running Linux using JDBC.
The program needs to update a counter on a somewhat large number of
rows, about 1200 on a ~130k rows table. The query is
On Thu, 4 Mar 2004, Paul Thomas wrote:
On 03/03/2004 18:23 scott.marlowe wrote:
[snip]
There are three factors that affect how fast you can get to the next
sector:
seek time
settle time
rotational latency
Most drives only list the first, and don't bother to mention the other
On Tue, 2 Mar 2004, Anjan Dave wrote:
By lots I mean dozen(s) in a raid 10 array with a good controller.
I believe, for RAID-10, I will need an even number of drives.
Correct.
Currently,
the size of the database is about 13GB, and is not expected to grow
exponentially with thousands of
On Tue, 2 Mar 2004, Anjan Dave wrote:
That was part of my original question - whether it makes sense to go for
a mid-range SunFire machine (64bit HW, 64bit OS), which is scalable to
high amounts of memory, and shouldn't have any issues addressing it all.
I've had that kind of setup once
On Thu, 26 Feb 2004, Dror Matalon wrote:
Hi,
We have postgres running on freebsd 4.9 with 2 Gigs of memory. As per
repeated advice on the mailing lists we configured effective_cache_size
= 25520 which you get by doing `sysctl -n vfs.hibufspace` / 8192
Which results in using 200Megs for
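The arithmetic behind that setting, spelled out as a postgresql.conf fragment (the hibufspace byte count shown is the one implied by the poster's figures, not quoted from the message):

```
# sysctl -n vfs.hibufspace  ->  209059840 bytes (implied by the figures above)
# 209059840 / 8192          ->  25520 eight-KB pages, i.e. roughly 200 MB
effective_cache_size = 25520
```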
On Thu, 26 Feb 2004, David Pradier wrote:
Hi everybody,
i'd like to know if it exists a system of cache for the results of
queries.
I believe there are some external libs that provide this at the
application level. PHP's adodb is purported to do so.
On Thu, 26 Feb 2004, Dror Matalon wrote:
On Thu, Feb 26, 2004 at 11:55:31AM -0700, scott.marlowe wrote:
On Thu, 26 Feb 2004, Dror Matalon wrote:
Hi,
We have postgres running on freebsd 4.9 with 2 Gigs of memory. As per
repeated advice on the mailing lists we configured
On Sun, 22 Feb 2004, Sean Shanny wrote:
Tom,
We have the following setting for random page cost:
random_page_cost = 1 # units are one sequential page fetch cost
Any suggestions on what to bump it up to?
We are waiting to hear back from Apple on the speed issues, so far we
Easy two step procedure for speeding this up:
1: Upgrade to 7.4.1
2: Read this:
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
On Tue, 17 Feb 2004, Konstantin Tokar wrote:
Hi!
Does PostgreSQL allow creating the tables and indices of a single
database on multiple disk drives in order to increase
performance, as Oracle does? If a symbolic reference is the
only method then the next question is: how can it be
On Tue, 17 Feb 2004, Craig Thomas wrote:
On Tue, 17 Feb 2004, Craig Thomas wrote:
On Tue, 17 Feb 2004, Konstantin Tokar wrote:
Hi!
Does PostgreSQL allow creating the tables and indices of a single
database on multiple disk drives in order to increase
performance, as Oracle
On Wed, 18 Feb 2004, Christopher Kings-Lynne wrote:
1- How can I lock a single record so that other users can only read
it?
You cannot do that in PostgreSQL.
How about SELECT ... FOR UPDATE?
No, because users cannot read the locked row in that case.
I just tested it
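For reference, row-locking behavior under PostgreSQL's MVCC can be sketched as follows (table and id are placeholders, not from the thread); note that a plain SELECT is never blocked by FOR UPDATE, only competing writers are:

```sql
-- Session A: take a row lock
BEGIN;
SELECT * FROM accounts WHERE aid = 42 FOR UPDATE;

-- Session B, meanwhile:
SELECT * FROM accounts WHERE aid = 42;             -- succeeds immediately
UPDATE accounts SET abalance = 0 WHERE aid = 42;   -- blocks until A ends

-- Session A:
COMMIT;  -- releases the lock; B's UPDATE proceeds
```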
If my boss came to me and asked me to make my database server run as
slowly as possible, I might come up with the exact same postgresql.conf
file as what you posted.
Just installing the default postgresql.conf that came with postgresql
should make this machine run faster.
Read this:
On Tue, 3 Feb 2004, Christopher Browne wrote:
[EMAIL PROTECTED] (Anjan Dave) writes:
I would like to know whether there are any significant performance
advantages of compiling (say, 7.4) on your platform (being RH7.3, 8,
and 9.0, and Fedora especially) versus getting the relevant binaries
On Mon, 2 Feb 2004, Tomasz Myrta wrote:
On 2004-02-02 15:46, user Rigmor Ukuhe wrote:
Hi,
I have many indexes somehow overlaping like:
... btree (STATUS, VISIBLE, NP_ID);
... btree (STATUS, VISIBLE);
is the performance gained by the more exact index worth the overhead of managing
On Mon, 2 Feb 2004, Tomasz Myrta wrote:
On 2004-02-02 19:30, user scott.marlowe wrote:
Not entirely, since it only has to sort two columns, it will be smaller,
and will therefore be somewhat faster.
Can you say something more about it? Will it be enough faster to keep
them
On Mon, 2 Feb 2004, Qing Zhao wrote:
I am new here. I have a question related to this in some way.
Our web site needs to upload a large volume of data into Postgres at a
time. The performance deteriorates as the number of rows becomes larger.
When it reaches 2500 rows, it never comes back to
On Mon, 2 Feb 2004, Kevin Barnard wrote:
On 2 Feb 2004 at 13:58, scott.marlowe wrote:
what do you mean at 2 GB? Is that how much is in kernel cache plus
buffer, plus used, plus etc??? Could you give us the top of top output to
make sure? If most of that is kernel cache, then that's
On Wed, 14 Jan 2004, Jón Ragnarsson wrote:
I am writing a website that will probably have some traffic.
Right now I wrap every .php page in pg_connect() and pg_close().
Then I read somewhere that Postgres only supports 100 simultaneous
connections (default). Is that a limitation? Should I use
On Wed, 14 Jan 2004, Adam Alkins wrote:
scott.marlowe wrote:
A few tips from an old PHP/Apache/Postgresql developer.
1: Avoid pg_pconnect unless you are certain you have load tested the
system and it will behave properly. pg_pconnect often creates as many
issues as it solves
On Wed, 14 Jan 2004, Rajesh Kumar Mallah wrote:
scott.marlowe wrote:
On Tue, 13 Jan 2004, David Shadovitz wrote:
We avert the subsequent execution of count(*) by passing the
value of count(*) as a query parameter through the link in page
numbers.
Mallah, and others who
On Tue, 13 Jan 2004, David Shadovitz wrote:
We avert the subsequent execution of count(*) by passing the
value of count(*) as a query parameter through the link in page
numbers.
Mallah, and others who mentioned caching the record count:
Yes, I will certainly do this. I can detect
On Wed, 7 Jan 2004, Eric Jain wrote:
Any tips for speeding up index creation?
I need to bulk load a large table with 100M rows and several indexes,
some of which span two columns.
By dropping all indexes prior to issuing the 'copy from' command, the
operation completes 10x as fast (1.5h
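The 10x pattern being described can be sketched like this (index, column, and path names are illustrative, not from the message):

```sql
-- 1. Drop the indexes so COPY doesn't maintain them row by row
DROP INDEX big_t_a_idx;
DROP INDEX big_t_a_b_idx;

-- 2. Bulk load
COPY big_t FROM '/path/to/big_t.dat';

-- 3. Rebuild, including the two-column index, then refresh statistics
CREATE INDEX big_t_a_idx ON big_t (a);
CREATE INDEX big_t_a_b_idx ON big_t (a, b);
ANALYZE big_t;
```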
On Thu, 18 Dec 2003, Conny Thimren wrote:
Hi,
This is a kind of newbie question. I've been using Postgres for a long time in a
low-transaction environment - and it is great.
Now I've got an inquiry for using Postgresql in a heavy-load on-line system. This
system must handle something like 20
On Fri, 12 Dec 2003, Rhaoni Chiu Pereira wrote:
Hi, is there a switch in your pgsql/odbc connector to enable cursors? If
so, try turning that on.
On Thu, 4 Dec 2003, Jack Coates wrote:
Another problem is that performance of the 6 million row job is decent
if I stop the job and run a vacuumdb --analyze before letting it
continue; is this something that 7.4 will help with? vacuumb --analyze
doesn't seem to have much effect on the 18
On Thu, 4 Dec 2003, Jack Coates wrote:
On Thu, 2003-12-04 at 12:27, Richard Huxton wrote:
On Thursday 04 December 2003 19:50, Jack Coates wrote:
I'm trying to set Postgres's shared memory usage in a fashion that
allows it to return requested results quickly. Unfortunately, none of
As another post pointed out, you need to set cflags to get optimization
under Solaris on that flavor of Postgresql.
Also, Postgresql tends to get its best performance from the free unixes,
Linux and BSD. Those are available for Sun SPARCs, but postgresql in 64
bit mode on those boxes is still
On 11 Nov 2003, Greg Stark wrote:
Marc G. Fournier [EMAIL PROTECTED] writes:
On Tue, 11 Nov 2003, Greg Stark wrote:
Actually you might be able to get the same effect using function indexes
like:
create index i on traffic_log (month_trunc(runtime), company_id)
had actually
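For the planner to consider such a functional index, queries must use the identical expression, and month_trunc must be declared IMMUTABLE; a sketch using the names from the thread (the date literal is illustrative):

```sql
-- Can use the index on (month_trunc(runtime), company_id)
SELECT company_id, count(*)
FROM traffic_log
WHERE month_trunc(runtime) = '2003-11-01'
GROUP BY company_id;
```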
On Tue, 11 Nov 2003, Rod Taylor wrote:
On Tue, 2003-11-11 at 18:32, Chris Field wrote:
We are getting ready to spec out a new machine and are wondering about
the wisdom of buying a quad versus a dual processor machine. Seing as
how postgres in not a threaded application, and this server
On Wed, 29 Oct 2003, Rob Nagler wrote:
Greg Stark writes:
SELECT a, (SELECT name FROM t2 WHERE t2.f2 = t1.f2)
FROM t1
GROUP BY f2
This doesn't solve the problem. It's the GROUP BY that is doing the
wrong thing. It's grouping, then aggregating.
But at least in the
On Thu, 30 Oct 2003, Rob Nagler wrote:
scott.marlowe writes:
t2 was 'vacuum full'ed and analyzed, right? Just guessing.
Fresh import. I've been told this includes an ANALYZE.
You should probably run analyze by hand just to be sure. If the planner
is using an index scan on a table
On Thu, 30 Oct 2003 [EMAIL PROTECTED] wrote:
So it's not just PostgreSQL that is suffering from the bad SQL, but MySQL also.
But the
question is: why does PostgreSQL suffer so badly? I think not all developers
write very nice
SQL.
It's really sad to see that a fine piece of work
On Fri, 24 Oct 2003, Vivek Khera wrote:
AL == Allen Landsidel [EMAIL PROTECTED] writes:
AL Well I had the vacuums running every 15 minutes for a while.. via a
AL simple cron script I wrote just to make sure no more than one vacuum
AL ran at once, and to 'nice' the job.. but performance on
On Tue, 21 Oct 2003, Josh Berkus wrote:
Anjan,
Pretty soon, a PowerEdge 6650 with 4 x 2Ghz XEONs, and 8GB Memory, with
internal drives on RAID5 will be delivered. Postgres will be from RH8.0.
How many drives? RAID5 sucks for heavy read-write databases, unless you have
5+ drives. Or
On Tue, 21 Oct 2003, Josh Berkus wrote:
Scott,
Also, if it's a read only environment, RAID5 with n drives equals the
performance of RAID0 with n-1 drives.
True.
Josh, you gotta get out more. IA32 has supported 4 gig ram for a long
time now, and so has the linux kernel. It uses a
On Tue, 21 Oct 2003, Josh Berkus wrote:
Anjan,
From what I know, there is a cache-row-set functionality that doesn't
exist with the newer postgres...
What? PostgreSQL has always used the kernel cache for queries.
Concurrent users will start from 1 to a high of 5000 or more, and
On 21 Oct 2003, Will LaShell wrote:
On Tue, 2003-10-21 at 08:40, scott.marlowe wrote:
SNIP
So that brings up my question, which is better, the Perc4 or Perc3
controllers, and what's the difference between them? I find Dell's
tendency to hide other people's hardware behind their own
On Tue, 21 Oct 2003, Andrew Sullivan wrote:
On Tue, Oct 21, 2003 at 11:51:02AM -0700, Josh Berkus wrote:
Of course, if you have anecdotal evidence to the contrary, then the
only way to work this would be to have OSDL help us sort it out.
Nope. I too have such anecdotal evidence that
On Mon, 20 Oct 2003, Alexander Priem wrote:
Hi all,
Does anyone have any experience with putting PostgreSQL data on a NAS
device?
I am asking this because a NAS device is much cheaper to set up than a
couple of SCSI disks. I would like to use a relatively cheap NAS device
which uses
On Tue, 14 Oct 2003, Tom Lane wrote:
scott.marlowe [EMAIL PROTECTED] writes:
open_sync was WAY faster at this than the other two methods.
Do you not have open_datasync? That's the preferred method if
available.
Nope, when I try to start postgresql with it set to that, I get this error
On Tue, 14 Oct 2003, Wei Weng wrote:
On Sat, 11 Oct 2003, Christopher Kings-Lynne wrote:
I have two very similar queries which I need to execute. They both have
exactly the same from / where conditions. When I execute the first, it takes
about 16 seconds. The second is executed
On Thu, 9 Oct 2003, Bruce Momjian wrote:
scott.marlowe wrote:
I was testing to get some idea of how to speed up the speed of pgbench
with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0
/dev/hdx).
The only parameter that seems to make a noticeable difference
On Fri, 10 Oct 2003, Josh Berkus wrote:
Bruce,
Yes. If you were doing multiple WAL writes before transaction fsync,
you would be fsyncing every write, rather than doing two writes and
fsync'ing them both. I wonder if larger transactions would find
open_sync slower?
Want me to
On Thu, 9 Oct 2003, Shridhar Daithankar wrote:
Kaarel wrote:
http://www.ussg.iu.edu/hypermail/linux/kernel/0310.1/0208.html
Shridhar
I feel incompetent when it comes to file systems. Yet everybody would like to
have the best file system if given the choice...so do I :) Here I am
On Thu, 9 Oct 2003, Jeff wrote:
On Thu, 9 Oct 2003, David Griffiths wrote:
1) the MySQL docs are better (sorry - I found them easier to read, and
more comprehensive; I had an easier time finding the answers I needed)
Huh. I had the opposite experience. Each to his own.
I think
On Mon, 6 Oct 2003, Jason Hihn wrote:
Ok, I asked this on [novice], but I was told it'd be better to post it
here...
I've got some money to spend on a new servers. The biggest concern is the
PostgreSQL database server that will be the company. (*Everyone* uses the
database server in some
hours to 2 minutes. I attached the new query plan to this posting.
Is there any way to optimize it even further?
What should I do to make this query run fast without hurting the
performance of the other queries?
Thanks.
Oleg
-Original Message-
From: scott.marlowe [mailto:[EMAIL
I was testing to get some idea of how to speed up the speed of pgbench
with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0
/dev/hdx).
The only parameter that seems to make a noticeable difference was setting
wal_sync_method = open_sync. With it set to either fsync, or
On Thu, 2 Oct 2003, Oleg Lebedev wrote:
I was trying to get the pg_stats information to Josh and decided to
recreate the indexes on all my tables. After that I ran vacuum full
analyze, re-enabled nestloop and ran explain analyze on the query. It
ran in about 2 minutes.
I attached the new
On Wed, 1 Oct 2003, Dimitri Nagiev wrote:
here goes the EXPLAIN ANALYZE output:
template1=# VACUUM analyze mytable;
VACUUM
template1=# explain analyze select * from mytable where
mydate='2003-09-01';
QUERY PLAN
Oh, to followup on my previously sent post, make sure you've got
effective_cache_size set right BEFORE you go trying to set
random_page_cost, and you might wanna run a select * from table to load
the table into kernel buffer cache before testing, then also test it with
the cache cleared out
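That testing order can be sketched as a session script (values are illustrative, borrowing the mytable/mydate names from the earlier EXPLAIN snippet):

```sql
SET effective_cache_size = 25000;  -- in 8 KB pages: set this FIRST
SELECT count(*) FROM mytable;      -- warm the kernel buffer cache
SET random_page_cost = 2;          -- now experiment with this per-session
EXPLAIN ANALYZE SELECT * FROM mytable WHERE mydate = '2003-09-01';
```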
Index Cond: ("outer".ps_suppkey = supplier.s_suppkey)
Total runtime: 6674724.23 msec
(28 rows)
-Original Message-
From: Oleg Lebedev
Sent: Wednesday, October 01, 2003 12:00 PM
To: Josh Berkus; scott.marlowe
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] TPC-R
I've used the megaraid / LSI cards in the past and they were pretty good
in terms of reliability, but the last one I used was the 328 model, from 4
years ago or so. that one had a battery backup option for the cache, and
could go to 128 Meg. We tested it with 4/16 and 128 meg ram, and it was
On 29 Sep 2003, Will LaShell wrote:
On Mon, 2003-09-29 at 06:48, scott.marlowe wrote:
I've used the megaraid / LSI cards in the past and they were pretty good
in terms of reliability, but the last one I used was the 328 model, from 4
years ago or so. that one had a battery backup option
On Wed, 17 Sep 2003, Matt Clark wrote:
*** THE QUESTION(S) ***
Is there any reason for me not to run continuous sequential vacuum analyzes?
At least for the 6 tables that see a lot of updates?
I hear 10% of tuples updated as a good time to vac-an, but does my typical
count of 3 indexes per
On Mon, 15 Sep 2003, Joseph Bove wrote:
Stephan,
I've run explain analyze a number of times and have gotten results between
5.5 and 7.5 seconds
Attached is a typical output
QUERY PLAN
-
Aggregate (cost=9993.92..9993.92 rows=1 width=0)
On Mon, 15 Sep 2003, scott.marlowe wrote:
On Mon, 15 Sep 2003, Joseph Bove wrote:
Stephan,
I've run explain analyze a number of times and have gotten results between
5.5 and 7.5 seconds
Attached is a typical output
QUERY PLAN
On Thu, 11 Sep 2003, Tom Lane wrote:
Christopher Browne [EMAIL PROTECTED] writes:
The right answer for most use seems likely to involve:
a) Getting an appropriate number of bins (I suspect 10 is a bit
small, but I can't justify that mathematically), and
I suspect that also, but I
On Sun, 7 Sep 2003, Pailloncy Jean-Gérard wrote:
Asking a question about why max(id) is so much slower than select id order
by id desc limit 1, Pailloncy said:
I ask for the same thing.
That's better !
This is a Frequently asked question about something that isn't likely to
change any time
On Mon, 8 Sep 2003, Neil Conway wrote:
On Mon, 2003-09-08 at 11:56, scott.marlowe wrote:
Basically, Postgresql uses an MVCC locking system that makes massively
parallel operation possible, but costs in certain areas, and one of those
areas is aggregate performance over large sets. MVCC
On Wed, 3 Sep 2003, Relaxin wrote:
I have a table with 102,384 records in it, each record is 934 bytes.
Using the follow select statement:
SELECT * from table
PG Info: version 7.3.4 under cygwin on Windows 2000
ODBC: version 7.3.100
Machine: 500 Mhz/ 512MB RAM / IDE HDD
Under
Can you just create an extra serial column and make sure that one is
always in order with no holes in it? (i.e. a nightly process, etc...)?
If so, then something like this truly flies:
select * from accounts where aid = (select cast(floor(random()*10)+1 as int));
My times on it on a
On Thu, 28 Aug 2003, Vivek Khera wrote:
sm == scott marlowe scott.marlowe writes:
sm My experience has been that once you get past 6 disks, RAID5 is faster
sm than RAID1+0.
Any opinion on stripe size for the RAID?
That's more determined by what kind of data you're gonna be handling
On 27 Aug 2003, matt wrote:
I'm wondering if the good people out there could perhaps give me some
pointers on suitable hardware to solve an upcoming performance issue.
I've never really dealt with these kinds of loads before, so any
experience you guys have would be invaluable. Apologies in
On Tue, 26 Aug 2003, JM wrote:
need input on parameter values on confs...
our database is getting 1000 transactions/sec on peak periods..
sitting on RH 7.3
2.4.7-10smp
RAM: 1028400
SWAP: 2040244
1: Upgrade your kernel. 2.4.7 on RH 7.3 was updated to 2.4.18-24 in March,
and the 2.4.18
On 27 Aug 2003, matt wrote:
You probably, more than anything, should look at some kind of
superfast, external storage array
Yeah, I think that's going to be a given. Low end EMC FibreChannel
boxes can do around 20,000 IOs/sec, which is probably close to good
enough.
You mentioned
On Thu, 28 Aug 2003, Sean Chittenden wrote:
What it still leaves quite open is just what happens when the OS has
more than one disk drive or CPU to play with. It's not clear what
happens in such cases, whether FreeBSD would catch up, or be left
further in the dust. The traditional