not a real issue as all my servers have perl
installed.
Thanks for any advice
Alex
---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
Is there a performance difference between the two?
Which of the PLs is most widely used? One problem I have with plpgsql
is that the quoting is really a pain.
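For what it's worth, PostgreSQL 8.0 introduced dollar quoting, which removes most of that pain by letting the function body avoid doubled quotes. A minimal sketch (the greet function is made up for illustration):

```sql
-- Old style: the body is one single-quoted string, so every literal
-- quote inside it has to be doubled.
CREATE FUNCTION greet(text) RETURNS text AS '
BEGIN
    RETURN ''Hello, '' || $1 || ''!'';
END;
' LANGUAGE plpgsql;

-- Dollar-quoted style (PostgreSQL 8.0+): no doubling needed.
CREATE OR REPLACE FUNCTION greet(text) RETURNS text AS $$
BEGIN
    RETURN 'Hello, ' || $1 || '!';
END;
$$ LANGUAGE plpgsql;
```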
Christopher Browne wrote:
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Alex) belched out:
i am thinking
is not faster than a dual P3
1.4GHz, and the hdparm results also don't make much sense.
Has anybody an explanation for that? Is there something I can do to get
more performance out of the SCSI disks?
Thanks for any advice
Alex
would be appreciated.
Thanks
Alex
in the 80-90% the other in
5-10% max.
John A Meinel wrote:
Alex wrote:
Hi,
we just got a new dual processor machine and I wonder if there is a
way to utilize both processors.
Our DB server is basically fully dedicated to Postgres. (It's a dual
AMD with 4GB mem.)
I have a batch job that
.
Alex
John A Meinel wrote:
Alex wrote:
Thanks John.
Well as I mentioned, I have a dual AMD Opteron 64 2.4GHz, 15k RPM
SCSI disks, 4GB of memory.
Disks are pretty fast and memory should be more than enough.
Currently we don't have many concurrent connections.
Well, you didn't mention Opteron b
Forgot to add:
postg...@ec2-75-101-128-4:~$ psql --version
psql (PostgreSQL) 8.3.5
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Below is a query that takes 16 seconds on the first run. I am having
generally poor performance for queries in uncached areas of the data
and often mediocre (500ms-2s+) performance generally, although
sometimes it's very fast. All the queries are pretty similar and use
the indexes this way.
I'v
>
> How is the index sl_city_etc defined?
>
Index "public.sl_city_etc"
    Column    |            Type
--------------+-----------------------------
 city         | text
 listing_type | text
 post_time    | timestamp without time zone
 bedrooms     | integer
 region       | text
 geo_lat      |
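For queries like this, EXPLAIN ANALYZE is the quickest way to confirm whether that index is actually chosen and where the time goes; running it twice also separates cold-cache from warm-cache cost, which matters here since the first run took 16 seconds. A sketch using the columns shown above (the table name and the literal values are placeholders, not from the original post):

```sql
EXPLAIN ANALYZE
SELECT *
FROM sl_listings              -- hypothetical table behind sl_city_etc
WHERE city = 'Boston'
  AND listing_type = 'apartment'
ORDER BY post_time DESC
LIMIT 50;
```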
Thanks. That's very helpful. I'll take your suggestions and see if
things improve.
--
I am using Postgres with Rails. Each Rails application "thread" is
actually a separate process (mongrel) with its own connection.
Normally, the db connection processes (?) look something like this in
top:
15772 postgres  15   0  229m  13m  12m S    0  0.8   0:00.09 postgres: db db [local] idle
The writer process seems to be using inordinate amounts of memory:
  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
11088 postgres 13  -2 3217m 2.9g 2.9g S    0 38.7  0:10.46 postgres: writer process
20190 postgres 13  -2 3219m  71m  68m S    0  0.9  0:52.48 postgres: cribq
RAID 10, database is on RAID 10.
Data is very spread out because database turnover time is very high,
so our performance is about double this with a fresh DB. (the data
half life is probably measurable in days or weeks).
Alex Turner
netEconomist
On Apr 1, 2005 1:06 PM, Marc G. Fournier <[EM
Oh - this is with a separate transaction per command.
fsync is on.
Alex Turner
netEconomist
On Apr 1, 2005 4:17 PM, Alex Turner <[EMAIL PROTECTED]> wrote:
> 1250/sec with average record size of 26 bytes
> 800/sec with average record size of 48 bytes
> 250/sec with record size
ly in
linux), I would be greatly interested.
Alex Turner
netEconomist
On Mar 29, 2005 8:17 AM, Dave Cramer <[EMAIL PROTECTED]> wrote:
> Yeah, 35Mb per sec is slow for a raid controller, the 3ware mirrored is
> about 50Mb/sec, and striped is about 100
>
> Dave
>
> PFC wrote:
Yup, Battery backed, cache enabled. 6 drive RAID 10, and 4 drive RAID
10, and 2xRAID 1.
It's a 3ware 9500S-8MI - not bad for $450 plus BBU.
Alex Turner
netEconomist
On Apr 1, 2005 6:03 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Alex Turner <[EMAIL PROTECTED]> writes:
> &
otocol compared with SCSI. If anyone has a useful
link on that, it would be greatly appreciated.
More drives will give more throughput/sec, but not necessarily more
transactions/sec. For that you will need more RAM on the controller,
and definitely a BBU to keep your data safe.
Alex Turner
netEcono
hough even in some, SATA wins, or draws. I'm
trying to find something more apples to apples. 10k to 10k.
Alex Turner
netEconomist
On Apr 4, 2005 3:23 PM, Vivek Khera <[EMAIL PROTECTED]> wrote:
>
> On Apr 4, 2005, at 3:12 PM, Alex Turner wrote:
>
> > Our system is mos
imple benchmark test database to run, I would be
happy to run it on our hardware here.
Alex Turner
On Apr 6, 2005 3:30 AM, William Yu <[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
> > I'm no drive expert, but it seems to me that our write performance is
> > excellent.
I think everyone was scared off by the 5000 inserts per second number.
I've never seen even Oracle do this on a top end Dell system with
copious SCSI attached storage.
Alex Turner
netEconomist
On Apr 6, 2005 3:17 AM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Unfortun
I guess I was thinking more in the range of 5000 transaction/sec, less
so 5000 rows on bulk import...
Alex
On Apr 6, 2005 12:47 PM, Mohan, Ross <[EMAIL PROTECTED]> wrote:
>
>
> 31Million tuples were loaded in approx 279 seconds, or approx 112k rows per
> second.
>
>
I think his point was that 9 * 4 != 2400
Alex Turner
netEconomist
On Apr 6, 2005 2:23 PM, Rod Taylor <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-04-06 at 19:42 +0200, Steinar H. Gunderson wrote:
> > On Wed, Apr 06, 2005 at 01:18:29PM -0400, Rod Taylor wrote:
> > > Yeah,
cheap RAID controller from
HighPoint. As soon as I put in a decent controller, things went much
better. I think it's unfair to base your opinion of SATA on a test
that had a poor controller.
I know I'm not the only one here running SATA RAID and being very
satisfied with the results.
hannel, but you have to share in SCSI.
A SATA controller typically can do 3Gb/sec (384MB/sec) per drive, but
SCSI can only do 320MB/sec across the entire array.
What am I missing here?
Alex Turner
netEconomist
On Apr 6, 2005 5:41 PM, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> Sorry if I
Ok - so I found this fairly good online review of various SATA cards
out there, with 3ware not doing too hot on RAID 5, but ok on RAID 10.
http://www.tweakers.net/reviews/557/
Very interesting stuff.
Alex Turner
netEconomist
On Apr 6, 2005 7:32 PM, Alex Turner <[EMAIL PROTECTED]> wrot
Ok - I take it back - I'm reading through this now, and realising that
the reviews are pretty clueless in several places...
On Apr 6, 2005 8:12 PM, Alex Turner <[EMAIL PROTECTED]> wrote:
> Ok - so I found this fairly good online review of various SATA cards
> out there, with 3w
ing on the controller and to the drive. *shrug*
This of course is all supposed to go away with SATA II, which has NCQ,
Native Command Queueing. Of course the 3ware controllers don't
support SATA II, but a few other do, and I'm sure 3ware will come out
with a controller that does.
Alex Turner
net
and technology, thereby generating a
cost increase (at least that's what the manufacturers tell us). I know
if you ever held a 15k drive in your hand, you can notice a
considerable weight difference between it and a 7200RPM IDE drive.
Alex Turner
netEconomist
On Apr 7, 2005 11:37 AM, [EMAIL PRO
't
belong there).
Alex Turner
netEconomist
On Apr 12, 2005 10:10 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> hubert lubaczewski <[EMAIL PROTECTED]> writes:
> > and it made me wonder - is there a way to tell how much time of backend
> > was spent on triggers, index update
I have read a large chunk of this, and I would highly recommend it to
anyone who has been participating in the drive discussions. It is
most informative!!
Alex Turner
netEconomist
On 4/14/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Greg,
>
> I posted this link under a d
es=1&devID_0=232&devID_1=40&devID_2=259&devID_3=267&devID_4=261&devID_5=248&devCnt=6
It does illustrate some of the weaknesses of SATA drives, but all in
all the Raptor drives put on a good show.
Alex Turner
netEconomist
On 4/14/05, Alex Turner <[EMAIL PROTECTED]> wrote
Just to clarify these are tests from http://www.storagereview.com, not
my own. I guess they couldn't get numbers for those parts. I think
everyone understands that a 0ms seek time is impossible, and indicates a
missing data point.
Thanks,
Alex Turner
netEconomist
On 4/14/05, Dave Held &l
od as NCQ on the drive).
Alex Turner
netEconomist
On 4/14/05, Dave Held <[EMAIL PROTECTED]> wrote:
> > -----Original Message-----
> > From: Alex Turner [mailto:[EMAIL PROTECTED]
> > Sent: Thursday, April 14, 2005 12:14 PM
> > To: [EMAIL PROTECTED]
> > Cc: Gre
y how
expensive it is to retrieve a given block knowing its linear
increment.
Alex Turner
netEconomist
On 4/14/05, Kevin Brown <[EMAIL PROTECTED]> wrote:
> Tom Lane wrote:
> > Kevin Brown <[EMAIL PROTECTED]> writes:
> > > I really don't see how this is a
ore to set up a good review to be honest.
The 3ware trounces the Areca in all IO/sec tests.
Alex Turner
netEconomist
On 4/15/05, Marinos Yannikos <[EMAIL PROTECTED]> wrote:
> Joshua D. Drake wrote:
> > Well I have never even heard of it. 3ware is the defacto authority of
> >
15k RPM drive config. Our biggest hit is reads, so
we can buy 3xSATA machines and load balance. It's all about the
application, and buying what is appropriate. I don't buy a Corvette
if all I need is a Malibu.
Alex Turner
netEconomist
On 4/15/05, Dave Held <[EMAIL PROTECTED]&
I stand corrected!
Maybe I should re-evaluate our own config!
Alex T
(The dell PERC controllers do pretty much suck on linux)
On 4/15/05, Vivek Khera <[EMAIL PROTECTED]> wrote:
>
> On Apr 15, 2005, at 11:01 AM, Alex Turner wrote:
>
> > You can't fit a 15k RPM SCSI
you start having to factor in the cost of a bigger chassis
to hold more drives, which can be big bucks.
Alex Turner
netEconomist
On 18 Apr 2005 10:59:05 -0400, Greg Stark <[EMAIL PROTECTED]> wrote:
>
> William Yu <[EMAIL PROTECTED]> writes:
>
> > Using the above price
database across multiple tablespaces on
separate partitions.
My assertion therefore is that simply adding more drives to an already
competent* configuration is about as likely to increase your database
effectiveness as swiss cheese is to make your car run faster.
Alex Turner
netEconomist
*Asser
).
Alex Turner
netEconomist
On 4/18/05, John A Meinel <[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
>
> >[snip]
> >
> >
> >>Adding drives will not let you get lower response times than the average
> >>seek
> >>time on your drives*. But
I think the add more disks thing is really from the point of view that
one disk isn't enough ever. You should really have at least four
drives configured into two RAID 1s. Most DBAs will know this, but
most average Joes won't.
Alex Turner
netEconomist
On 4/18/05, Steve Poe <[EM
would only need to read
from one disk.
So my assertion that adding more drives doesn't help is pretty
wrong... particularly with OLTP, because it's always dealing with
blocks that are smaller than the stripe size.
Alex Turner
netEconomist
On 4/18/05, Jacques Caron <[EMAIL PROTECTED]>
a lot since subscribing.
Alex Turner
netEconomist
On 4/18/05, Alex Turner <[EMAIL PROTECTED]> wrote:
> Ok - well - I am partially wrong...
>
> If your stripe size is 64Kb, and you are reading 256k worth of data,
> it will be spread across four drives, so you will need to read
Mistyped... I meant 0+1 in the second instance :(
On 4/18/05, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
> > Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at
> > least I would never recommend 1+0 for anything).
>
> Uhmm I was un
On 4/18/05, Jacques Caron <[EMAIL PROTECTED]> wrote:
> Hi,
>
> At 20:21 18/04/2005, Alex Turner wrote:
> >So I wonder if one could take this stripe size thing further and say
> >that a larger stripe size is more likely to result in requests getting
> >served pa
Does it really matter at which end of the cable the queueing is done
(Assuming both ends know as much about drive geometry etc..)?
Alex Turner
netEconomist
On 4/18/05, Bruce Momjian wrote:
> Kevin Brown wrote:
> > Greg Stark wrote:
> >
> >
> > > I think you
I wonder if that's something to think about adding to PostgreSQL? A
setting for multiblock read count like Oracle (although having said
that, I believe that Oracle natively caches pages much more
aggressively than PostgreSQL, which allows the OS to do the file
caching).
Alex Turner
netEconomist
array, which would
yield better performance to cost ratio. Therefore I would suggest it
is something that should be investigated. After all, why implement
TCQ on each drive, if it can be handled more efficiently at the other
end by the controller for less money?!
Alex Turner
netEconomist
On 4/19
Is:
REINDEX DATABASE blah
supposed to rebuild all indices in the database, or must you specify
each table individually? (I'm asking because I just tried it and it
only did system tables.)
Alex Turner
netEconomist
On 4/21/05, Josh Berkus wrote:
> Bill,
>
> > What about if an ou
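For what it's worth, in the releases current at the time (8.0 and earlier), REINDEX DATABASE rebuilt only the system indexes, which would explain the behaviour described above; user tables had to be reindexed individually. A sketch (my_table is a placeholder name):

```sql
-- Rebuild all indexes on one table:
REINDEX TABLE my_table;

-- Or generate per-table statements for every user table
-- and run the resulting output by hand:
SELECT 'REINDEX TABLE ' || schemaname || '.' || tablename || ';'
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
```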
the index forces the system to physically re-allocate all
that data space, and now you have just 2499 entries, that use 625
blocks.
I'm not sure that 'blocks' is the correct term in postgres, it's
segments in Oracle, but the concept remains the same.
Alex Turner
netEconomist
On
What is the status of Postgres support for any sort of multi-machine
scaling support? What are you meant to do once you've upgraded your
box and tuned the conf files as much as you can? But your query load
is just too high for a single machine?
Upgrading stock Dell boxes (I know we could be
On 10 May 2005, at 15:41, John A Meinel wrote:
Alex Stapleton wrote:
What is the status of Postgres support for any sort of multi-machine
scaling support? What are you meant to do once you've upgraded
your box
and tuned the conf files as much as you can? But your query load is
just too
lto:[EMAIL PROTECTED] On Behalf Of John A
Meinel
Sent: Tuesday, May 10, 2005 7:41 AM
To: Alex Stapleton
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Partitioning / Clustering
Alex Stapleton wrote:
What is the status of Postgres support for any sort of multi-machine
scaling support? Wha
On 11 May 2005, at 08:16, Simon Riggs wrote:
On Tue, 2005-05-10 at 11:03 +0100, Alex Stapleton wrote:
So, when/is PG meant to be getting a decent partitioning system?
ISTM that your question seems to confuse where code comes from.
Without
meaning to pick on you, or reply rudely, I'd li
On 11 May 2005, at 08:57, David Roussel wrote:
For an interesting look at scalability, clustering, caching, etc for a
large site have a look at how livejournal did it.
http://www.danga.com/words/2004_lisa/lisa04.pdf
I have implemented similar systems in the past, it's a pretty good
technique, unf
On 11 May 2005, at 09:50, Alex Stapleton wrote:
On 11 May 2005, at 08:57, David Roussel wrote:
For an interesting look at scalability, clustering, caching, etc
for a
large site have a look at how livejournal did it.
http://www.danga.com/words/2004_lisa/lisa04.pdf
I have implemented similar
On 11 May 2005, at 23:35, PFC wrote:
However, memcached (and for us, pg_memcached) is an excellent way
to improve
horizontal scalability by taking disposable data (like session
information)
out of the database and putting it in protected RAM.
So, what is the advantage of such a system ve
website I've managed
will ever see.
Why solve the complicated clustered sessions problem, when you don't
really need to?
Alex Turner
netEconomist
On 5/11/05, PFC <[EMAIL PROTECTED]> wrote:
>
>
> > However, memcached (and for us, pg_memcached) is an excellent way t
On 12 May 2005, at 15:08, Alex Turner wrote:
Having local sessions is unnecessary, and here is my logic:
Generally most people have less than 100Mb of bandwidth to the
internet.
If you make the assertion that you are transferring equal or less
session data between your session server (let's say an
a couple of mid size RAID
arrays on 10k discs with fsync on for small transactions. I'm sure
that could easily be bettered with a few more dollars.
Maybe my numbers are off, but somehow it doesn't seem like that many
people need a highly complex session solution to me.
Alex Turner
netEc
On 12 May 2005, at 18:33, Josh Berkus wrote:
People,
In general I think your point is valid. Just remember that it
probably
also matters how you count page views. Because technically images
are a
separate page (and this thread did discuss serving up images). So if
there are 20 graphics on a sp
Is using a ramdisk in situations like this entirely ill-advised then?
When data integrity isn't a huge issue and you really need good write
performance it seems like it wouldn't hurt too much. Unless I am
missing something?
On 20 May 2005, at 02:45, Christopher Kings-Lynne wrote:
I'm doing
I am interested in optimising write performance as well, the machine
I am testing on is maxing out around 450 UPDATEs a second which is
quite quick I suppose. I haven't tried turning fsync off yet. The
table has... a lot of indices as well. They are mostly pretty simple
partial indexes though
Until you start worrying about MVC - we have had problems with the
MSSQL implementation of read consistency because of this 'feature'.
Alex Turner
NetEconomist
On 5/24/05, Bruno Wolff III <[EMAIL PROTECTED]> wrote:
On Tue, May 24, 2005 at 08:36:36 -0700, mark durrant <[EMAI
about reiser, I went
straight back to default after that problem (that partition is not on a
DB server though).
Alex Turner
netEconomist
On 6/3/05, Martin Fandel <[EMAIL PROTECTED]> wrote:
Hi @ all, I have only a little question. Which filesystem is preferred for PostgreSQL? I plan to
Is this advisable? The disks are rather fast (15k iirc) but somehow I
don't think they are covered in whatever magic fairy dust it would
require for a sequential read to be as fast as a random one. However
I could be wrong, are there any circumstances when this is actually
going to help per
We have two indexes, like so:
l1_historical=# \d "N_intra_time_idx"
Index "N_intra_time_idx"
 Column |            Type
--------+-----------------------------
 time   | timestamp without time zone
btree
l1_historical=# \d "N_intra_pkey"
Index "N_intra_pkey"
 Column |            Type
-
Oh, we are running 7.4.2 btw. And our random_page_cost = 1
On 13 Jun 2005, at 14:02, Alex Stapleton wrote:
We have two indexes, like so:
l1_historical=# \d "N_intra_time_idx"
Index "N_intra_time_idx"
 Column |            Type
--------+---
On 13 Jun 2005, at 15:47, John A Meinel wrote:
Alex Stapleton wrote:
Oh, we are running 7.4.2 btw. And our random_page_cost = 1
Which is only correct if your entire db fits into memory. Also, try
updating to a later 7.4 version if at all possible.
I am aware of this, I didn
will read from independent halves, but gives
worse redundancy.
Alex Turner
NetEconomist
On 6/18/05, Jacques Caron <[EMAIL PROTECTED]> wrote:
Hi,
At 18:00 18/06/2005, PFC wrote:
> I don't know what I'm talking about, but wouldn't mirroring be faster than stri
Hi, I'm trying to optimise our autovacuum configuration so that it
vacuums / analyzes some of our larger tables better. It has been set
to the default settings for quite some time. We never delete
anything (well, not often, and not much) from the tables, so I am not
so worried about the VAC
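For reference, once autovacuum was integrated into the server (8.1), this kind of per-table responsiveness was tuned through a handful of postgresql.conf settings; the values below are illustrative starting points, not recommendations:

```
# postgresql.conf sketch -- illustrative values only
autovacuum = on
autovacuum_naptime = 60                  # seconds between checks
autovacuum_vacuum_threshold = 1000       # min updated/deleted rows before VACUUM
autovacuum_vacuum_scale_factor = 0.2     # ...plus 20% of the table
autovacuum_analyze_threshold = 500      # min changed rows before ANALYZE
autovacuum_analyze_scale_factor = 0.1    # ...plus 10% of the table
```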
On 20 Jun 2005, at 15:59, Jacques Caron wrote:
Hi,
At 16:44 20/06/2005, Alex Stapleton wrote:
We never delete
anything (well not often, and not much) from the tables, so I am not
so worried about the VACUUM status
DELETEs are not the only reason you might need to VACUUM. UPDATEs
are
On 20 Jun 2005, at 18:46, Josh Berkus wrote:
Alex,
Hi, I'm trying to optimise our autovacuum configuration so that it
vacuums / analyzes some of our larger tables better. It has been set
to the default settings for quite some time. We never delete
anything (well not often, and not
On 21 Jun 2005, at 18:13, Josh Berkus wrote:
Alex,
Downtime is something I'd rather avoid if possible. Do you think we
will need to run VACUUM FULL occasionally? I'd rather not lock tables
up unless I can't avoid it. We can probably squeeze an automated
vacuum tied to our data
On 8 Jul 2005, at 20:21, Merlin Moncure wrote:
Stuart,
I'm putting together a road map on how our systems can scale as our
load
increases. As part of this, I need to look into setting up some fast
read only mirrors of our database. We should have more than enough
RAM
to fit everythin
Also seems pretty silly to put it on a regular SATA connection, when
all that can manage is 150MB/sec. If you made it connect directly
to 66/64-bit PCI then it could actually _use_ the speed of the RAM, not
to mention PCI-X.
Alex Turner
NetEconomist
On 7/26/05, John A Meinel <[EMAIL PROTEC
good ol 2.5 Reg ECC.
Alex Turner
NetEconomist
On 7/26/05, PFC <[EMAIL PROTECTED]> wrote:
>
> > I'm a little leary as it is definitely a version 1.0 product (it is
> > still using an FPGA as the controller, so they were obviously pushing to
> > get the ca
Are you calculating aggregates, and if so, how are you doing it? (I ask
the question from experience of a similar application where I found
that my aggregating PL/pgSQL triggers were bogging the system down, and
changed them to scheduled jobs instead.)
Alex Turner
NetEconomist
On 8/16/05, Ulrich
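The trigger-versus-scheduled-job idea above can be sketched like this (all names are hypothetical; assume a table orders(product_id, amount)): instead of a per-row trigger maintaining running totals, a cron-driven psql job rebuilds a summary table in one pass:

```sql
-- Run periodically, e.g. from cron: psql -f refresh_totals.sql mydb
BEGIN;
TRUNCATE order_totals;
INSERT INTO order_totals (product_id, total_amount, order_count)
SELECT product_id, sum(amount), count(*)
FROM orders
GROUP BY product_id;
COMMIT;
```

Wrapping the truncate-and-rebuild in one transaction means readers always see either the old totals or the new ones, never an empty table.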
ocks to
rebuild just one block where n is the number of drives in the array,
whereas a mirror only requires a read from a single spindle of the
RAID.
I would suggest running some benchmarks at RAID 5 and RAID 10 to see
what the _real_ performance actually is; that's the only way to really
tel
Don't forget that often controllers don't obey fsyncs like a plain
drive does. That's the point of having a BBU ;)
Alex Turner
NetEconomist
On 8/16/05, John A Meinel <[EMAIL PROTECTED]> wrote:
> Anjan Dave wrote:
> > Yes, that's true, though, I am a bit confu
, U320 is only 320MB/channel...
Alex Turner
NetEconomist
On 8/16/05, Anjan Dave <[EMAIL PROTECTED]> wrote:
> Thanks, everyone. I got some excellent replies, including some long
> explanations. Appreciate the time you guys took out for the responses.
>
> The gist of it I take, i
just around $7k. I have two independent controllers on two
independent PCI buses to give max throughput, one with a 6 drive RAID
10 and the other with two 4 drive RAID 10s.
Alex Turner
NetEconomist
On 8/19/05, Mark Cotner <[EMAIL PROTECTED]> wrote:
> Hi all,
> I bet you get tired of the
ck reads, which is sequential reads.
Alex Turner
NetEconomist
P.S. Sorry if I'm a bit punchy, I've been up since yesterday with
server upgrade nightmares that continue ;)
On 8/19/05, Ron <[EMAIL PROTECTED]> wrote:
> Alex mentions a nice setup, but I'm pretty sure I know how to
On 2 Sep 2005, at 10:42, Richard Huxton wrote:
Ricardo Humphreys wrote:
Hi all.
In a cluster, is there any way to use the main memory of the
other nodes instead of the swap? If I have a query with many sub-
queries and a lot of data, I can easily fill all the memory in a
node. The point
On Wed, 7 Sep 2005, Meetesh Karia wrote:
> PG is creating the union of January, February and March tables first and
> that doesn't have an index on it. If you're going to do many queries using
> the union of those three tables, you might want to place their contents into
> one table and create an
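The suggestion above can be sketched as follows, assuming hypothetical monthly tables with identical schemas (all names and the event_time column are placeholders). UNION ALL is used because, unlike plain UNION, it skips the duplicate-elimination sort, which is usually what you want when combining disjoint partitions:

```sql
CREATE TABLE q1_data AS
    SELECT * FROM january_data
    UNION ALL
    SELECT * FROM february_data
    UNION ALL
    SELECT * FROM march_data;

-- The combined table can then carry the index the union lacked:
CREATE INDEX q1_data_ts_idx ON q1_data (event_time);
ANALYZE q1_data;
```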
don't co-operate with linux well.
Alex Turner
NetEconomist
On 9/12/05, Brandon Black <[EMAIL PROTECTED]> wrote:
I'm in the process of developing an application which uses PostgreSQL
for data storage. Our database traffic is very atypical, and as a
result it has been rather chall
that lower stripe sizes impacted performance badly as did overly large stripe sizes.
Alex Turner
NetEconomist
On 16 Sep 2005 04:51:43 -0700, bmmbn <[EMAIL PROTECTED]> wrote:
Hi everyone. The machine is an IBM x345 with ServeRAID 6i 128MB cache and 6 SCSI 15k disks. 2 disks are in RAID 1 and hold the OS
le systems hit 100% usage,
they get all kinds of unhappy, we haven't had the same problem with JFS.
Alex Turner
NetEconomist
On 9/20/05, Welty, Richard <[EMAIL PROTECTED]> wrote:
Alex Turner wrote:
> I would also recommend looking at file system. For us JFS worked significantly f
I have found that while the OS may flush to the controller fast with
fsync=true, the controller does as it pleases (it has a BBU, so I'm not
too worried), so you get great performance because your controller is
determining the read/write sequence outside of what is being demanded by an
fsync.
Alex T
On 28 Sep 2005, at 15:32, Arnau wrote:
Hi all,
I have been "googling" a bit searching info about a way to
monitor postgresql (CPU & Memory, num processes, ... ) and I
haven't found anything relevant. I'm using munin to monitor other
parameters of my servers and I'd like to include po
most people without
resorting to SSD.
Alex Turner
NetEconomist
On 10/4/05, Emil Briggs <[EMAIL PROTECTED]> wrote:
I have an application that has a table that is both read and write intensive. Data from iostat indicates that the write speed of the system is the factor that is limiting performanc
doing an iostat and see how many IOs
and how much throughput is happening. That will rapidly help determine
if you are bound by IOs or by MB/sec.
Worst case I'm wrong, but IMHO it's worth a try.
Alex Turner
NetEconomist
On 10/4/05, Emil Briggs <[EMAIL PROTECTED]> wrote:
> Talk
with cached disk pages. It
looks to me more like either a Java problem, or a kernel problem...
Alex Turner
NetEconomist
On 10/10/05, Jon Brisbin <[EMAIL PROTECTED]> wrote:
Tom Lane wrote:
>> Are you sure it's not cached data pages, rather than cached inodes?
> If so, the above beh
Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64)
but I was more thinking 1.4 which many folks are still using.
Alex
On 10/11/05, Alan Stange <[EMAIL PROTECTED]> wrote:
Alex Turner wrote:
> Realise also that unless you are running the 1.5 x86-64 build, java
Well - to each his own I guess - we did extensive testing on 1.4, and
it refused to allocate much past 1gig on both Linux x86/x86-64 and
Windows.
Alex
On 10/11/05, Alan Stange <[EMAIL PROTECTED]> wrote:
Alex Turner wrote:
> Perhaps this is true for 1.5 on x86-32 (I've only used it on
Oracle uses LRU caching algorithm also, not LFU.
Alex
On 10/21/05, Martin Nickel <[EMAIL PROTECTED]> wrote:
I was reading a comment in another posting and it started me thinking about this. Let's say I start up an Oracle server. All my queries are a little bit (sometimes a lot bit) slow until it get
[snip] to the second processor in my dual Xeon eServer) has got me to the
point that the perpetually high memory usage doesn't affect my
application server.
I'm curious - how does the high memory usage affect your application server?
Alex
Just to play devil's advocate here for a second, but if we have an
algorithm that is substantially better than just plain old LRU, which is
what I believe the kernel is going to use to cache pages (I'm no kernel
hacker), then why don't we apply that and have a significantly larger
page cache a la Or
This is possible with Oracle utilizing the keep pool
alter table t_name storage ( buffer_pool keep);
If Postgres were to implement its own caching system, this seems like
it would be easy to implement (beyond the initial caching effort).
Alex
On 10/24/05, Craig A. James <[EMAIL P
b.order_val>=25 and
b.order_val<50 and a.primary_key_id=b.primary_key_id
If the data updates a lot then this won't work as well though, as the
index table will require frequent updates to a potentially large number
of records (although a small number of pages, so it still won't be
horribl