full parameter. If I look at memory
allocation, it never goes over 250 MB whatever I do with the database. The
kernel shmmax is set to 600 MB. The database size is around 550 MB.
Need some advice.
Thanks.
Andy.
g" service from this system.
Andy.
- Original Message -
From: "Christopher Kings-Lynne" <[EMAIL PROTECTED]>
To: "Andy" <[EMAIL PROTECTED]>
Cc:
Sent: Monday, October 10, 2005 11:55 AM
Subject: Re: [PERFORM] Server misconfiguration???
A lot of them a
gards,
Andy.
- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Andy" <[EMAIL PROTECTED]>
Cc:
Sent: Monday, October 10, 2005 5:18 PM
Subject: Re: [PERFORM] Server misconfiguration???
"Andy" <[EMAIL PROTECTED]> writes:
I get t
t of data to be deleted.
Or is there any other solution for this?
DB -> (replication) RE_DB -> (copy) ->
COPY_DB -> (delete unnecessary data) -> CLIENT_DB -> (ISDN connection)
-> Data to the client.
Regards,
Andy.
ng
any type of vacuum after the whole process? What kind?
Full vacuum. (cmd: vacuumdb -f)
Is there any configuration parameter to speed up deletes?
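For a recurring mass delete, a plain VACUUM ANALYZE is usually the better
follow-up; a minimal sketch (the table name is a placeholder):
  -- after the mass delete:
  VACUUM ANALYZE client_data;
VACUUM FULL rewrites the whole table under an exclusive lock, so it is rarely
needed if plain vacuums run often enough.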
- Original Message -
From: "Sean Davis" <[EMAIL PROTECTED]>
To: "Andy" <[EMAIL PROTECTED]>;
Sent: Tuesday
<[EMAIL PROTECTED]>
To:
Sent: Tuesday, October 11, 2005 3:19 PM
Subject: Re: [PERFORM] Massive delete performance
On Tue, Oct 11, 2005 at 10:47:03AM +0300, Andy wrote:
So, I have a replica with only the tables that I need to send; then I
make a copy of this replica, and from th
".id)
Total runtime: 31952.811 ms
- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Andy" <[EMAIL PROTECTED]>
Cc: "Steinar H. Gunderson" <[EMAIL PROTECTED]>;
Sent: Tuesday, October 11, 2005 5:17 PM
Subject: Re: [PERFORM] Massive d
Yes I did, and it works better (on a test server). I had no time to put it
into production.
I will try small steps and see what happens.
Regards,
Andy.
- Original Message -
From: "Andrew Sullivan" <[EMAIL PROTECTED]>
To:
Sent: Thursday, October 13, 2005 6:0
s puts in some other search fields on the WHERE, then the
query runs faster; but in this form it sometimes takes a lot of
time (sometimes even 2-3 seconds).
Can this be tuned somehow?
Regards,
Andy.
Yes, I have indexes on all join fields.
The tables have around 30 columns each and around 100k rows.
The database is vacuumed every hour.
Andy.
- Original Message -
From: "Frank Wiles" <[EMAIL PROTECTED]>
To: "Andy" <[EMAIL PROTECTED]>
Cc:
Sent: Th
Sorry, I should have been more specific.
VACUUM ANALYZE is performed every hour.
Regards,
Andy.
- Original Message -
From: "Michael Glaesemann" <[EMAIL PROTECTED]>
To: "Andy" <[EMAIL PROTECTED]>
Cc:
Sent: Friday, January 06, 2006 11:45 AM
Subject: Re
the user
can have. I use this to build pages of results.
Andy.
- Original Message -
From: "Pandurangan R S" <[EMAIL PROTECTED]>
To: "Andy" <[EMAIL PROTECTED]>
Cc:
Sent: Friday, January 06, 2006 11:56 AM
Subject: Re: [PERFORM] Improving Inner Join Performance
shared_buffers = 10240
effective_cache_size = 64000
RAM on server: 1 GB.
Andy.
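For scale: shared_buffers is given in 8 kB pages here, so 10240 is only about
80 MB. A commonly quoted starting point (an assumption on my part, not advice
from this thread) is roughly a quarter of RAM for shared_buffers and most of
the rest as effective_cache_size:
  shared_buffers = 32000          # ~256 MB of 8 kB pages (assumed value)
  effective_cache_size = 96000    # ~750 MB, about what the OS can cache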
- Original Message -
From: "Frank Wiles" <[EMAIL PROTECTED]>
To: "Andy" <[EMAIL PROTECTED]>
Sent: Friday, January 06, 2006 7:12 PM
Subject: Re: [PERFORM] Improving Inner J
eq scan on the whole table, and that
takes some time. How can this be optimized, or done another way that is
faster?
I tried creating indexes on the columns, but with no success.
PG 8.2
Regards,
Andy.
Thank you all for the answers.
I will try your suggestions and see what that brings in terms of
performance.
Andy.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Rigmor Ukuhe
> Sent: Wednesday, May 23, 2007 6:52 PM
> Cc:
Hi,
Are there any significant performance improvements or regressions from 8.4 to
9.0? If yes, which areas (inserts, updates, selects, etc.) are they in?
On a related note, is there any public data comparing the performance
of various PostgreSQL versions?
Thanks
Wouldn't a UUID PK cause a significant drop in insert performance? Every
insert is now out of order, which leads to constant re-arranging of the B+
tree. The amount of random I/O that generates would just kill
performance.
--- On Fri, 10/15/10, Craig Ringer wrote:
From:
If you are IO-bound, you might want to consider using SSD.
A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10.
--- On Wed, 12/8/10, Benjamin Krajmalnik wrote:
> From: Benjamin Krajmalnik
> Subject: [PERFORM] Hardware recommendations
> To: pgsql-performance@postgresql.org
> > If you are IO-bound, you might want to consider using
> SSD.
> >
> > A single SSD could easily give you more IOPS than 16
> 15k SAS in RAID 10.
>
> Are there any that don't risk your data on power loss, AND
> are cheaper
> than SAS RAID 10?
>
Vertex 2 Pro has a built-in supercapacitor to s
> We use ZFS and use SSDs for both the log device and
> L2ARC. All disks
> and SSDs are behind a 3ware with BBU in single disk
> mode.
Out of curiosity, why do you put your log on SSD? The log is all sequential
I/O, an area in which SSDs are not any faster than HDDs. So I'd think putting
the log on SSD
> The "common knowledge" you based that comment on, may
> actually not be very up-to-date anymore. Current
> consumer-grade SSDs can achieve up to 200 MB/sec when
> writing sequentially, and they can probably do that a lot
> more consistently than a hard disk.
>
> Have a look here: http://www.anandt
--- On Thu, 12/23/10, John W Strange wrote:
> Typically my problem is that the
> large queries are simply CPU bound.. do you have a
> sar/top output that you see. I'm currently setting up two
> FusionIO DUO @640GB in a lvm stripe to do some testing with,
> I will publish the results after I'm d
--- On Sun, 2/6/11, Linos wrote:
> I am studying too the possibility of use an OCZ Vertex 2
> Pro with Flashcache or Bcache to use it like a second level
> filesystem cache, any comments on that please?
>
OCZ Vertex 2 Pro is a lot more expensive than other SSDs of comparable
performance beca
.
Is there any benchmark measuring the performance of these SSDs (the new Intel
vs. the new SandForce) running database workloads? The benchmarks I've seen so
far are for desktop applications.
Andy
--- On Mon, 3/28/11, Greg Smith wrote:
> From: Greg Smith
> Subject: [PERFORM
--- On Wed, 4/6/11, Scott Carey wrote:
> I could care less about the 'fast' sandforce drives.
> They fail at a high
> rate and the performance improvement is BECAUSE they are
> using a large,
> volatile write cache.
The G1 and G2 Intel MLC also use volatile write cache, just like most SandF
AID 1, would that
be any slower than 2 SSD in HW RAID 1 with BBU? What are the pros and cons?
Thanks.
Andy
--- On Mon, 7/18/11, David Rees wrote:
> >> In this case is BBU still needed? If I put 2 SSD
> in software RAID 1, would
> >> that be any slower than 2 SSD in HW RAID 1 with
> BBU? What are the pros and
> >> cons?
>
> What will perform better will vary greatly depending on the
> exact
> SSDs,
> > I'm not comparing SSD in SW RAID with rotating disks
> in HW RAID with
> > BBU though. I'm just comparing SSDs with or without
> BBU. I'm going to
> > get a couple of Intel 320s, just want to know if BBU
> makes sense for
> > them.
>
> Yes, it certainly does, even if you have a RAID BBU.
"ev
According to the specs for database storage:
"Random 4KB arites: Up to 600 IOPS"
Is that for real? 600 IOPS is *atrociously terrible* for an SSD. Not much
faster than mechanical disks.
Has anyone done any performance benchmark of 320 used as a DB storage? Is it
really that slow?
Do you have an Intel 320? I'd love to see tests comparing 710 to 320 and see
if it's worth the price premium.
From: David Boreham
To: PGSQL Performance
Sent: Saturday, October 1, 2011 10:39 PM
Subject: [PERFORM] Suggestions for Intel 710 SSD test
I have a 71
Your results are consistent with the benchmarks I've seen. Intel SSD have much
worse write performance compared to SSD that uses Sandforce controllers, which
Vertex 2 Pro does.
According to this benchmark, at high queue depth the random write performance
of Sandforce is more than 5 times that o
You could try using a Unix domain socket and see if the performance improves. A
relevant link:
http://stackoverflow.com/questions/257433/postgresql-unix-domain-sockets-vs-tcp-sockets
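What that looks like from psql, as a sketch (the socket directory is an
assumption; /var/run/postgresql is the usual Debian-family default):
  # over TCP:
  psql "host=127.0.0.1 port=5432 dbname=mydb"
  # over the Unix domain socket, point host= at the socket directory:
  psql "host=/var/run/postgresql dbname=mydb"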
From: Ofer Israeli
To: "pgsql-performance@postgresql.org"
Sent: Sunday, Apri
have also become extremely slow. I was expecting a drop off when the
database grew out of memory, but not this much.
Am I really missing the target somewhere?
Any help and or suggestions will be very much appreciated.
Best regards,
Andy.
http://explain.depesz.com/s/cfb
select distinct
Limit the sub-queries to 1, i.e.:
select 1 from Table2 where Table2.ForeignKey = Table1.PrimaryKey
fetch first 1 rows only
Andy.
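The same idea in EXISTS form, a sketch using the placeholder names from the
snippet; the planner can stop at the first matching row instead of running
the whole sub-query:
  SELECT DISTINCT t1.*
  FROM Table1 t1
  WHERE EXISTS (SELECT 1 FROM Table2 t2
                WHERE t2.ForeignKey = t1.PrimaryKey);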
On 19.02.2013 07:34, Bastiaan Olij wrote:
Hi All,
Hope someone can help me a little bit here:
I've got a query like the following:
--
select Column1, Co
t with, easily extensible, allows a close coupling between the apache
server responsible for a region and the database it hits.
Any insights gratefully received!
Andy Ballingall
as the
data.
Yes, I'd prefer things to be that way in any event.
Regards,
Andy
ime required mysql database that is running. It's my MythTV box at
home, and I have to ask permission from my GF before I take the box down to
upgrade anything. And heaven forbid if it crashes or anything. So I do have
experience with the care and feeding of mysql. And no, I'm not kidding.)
And
n")
You need to vacuum way more often than once a week. Just VACUUM ANALYZE two,
three times a day. Or better yet, let autovacuum do its thing. (If you do
have autovacuum enabled, then the only problem is the open-transaction thing.)
Don't "VACUUM FULL"; it's not helping
uot; or "fake
prepare".
It does "real" by default. Try setting:
$dbh->{pg_server_prepare} = 0;
before you prepare/run that statement and see if it makes a difference.
http://search.cpan.org/dist/DBD-Pg/Pg.pm#prepare
-Andy
is different from a transaction. The
output above looks good; that's what you want to see. (If it had said
"idle in transaction", that would be a problem.) I don't think you need
to change anything.
Hopefully just vacuuming more often will help.
-Andy
ew weeks or once a month).
-Andy
could optimize that one statement.
-Andy
a string, then execute it, like:
a := "select junk from aTable where akey = 5";
EXECUE a;
(I dont think that's the exact right syntax, but hopefully gets the idea
across)
-Andy
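For values that change per call, PL/pgSQL's EXECUTE ... USING (available from
8.4) keeps the literal out of the string; a sketch with hypothetical names:
  -- v_junk and p_key are placeholders for this sketch
  EXECUTE 'select junk from aTable where akey = $1'
     INTO v_junk
    USING p_key;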
could try batching them together:
DELETE FROM table1 WHERE table2_id in (11242939, 1,2,3,4,5, 42);
Also, are you preparing the query?
-Andy
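A sketch combining both suggestions, one prepared statement that takes the
whole batch as an array (names follow the example above):
  PREPARE batch_del(int[]) AS
    DELETE FROM table1 WHERE table2_id = ANY($1);
  EXECUTE batch_del(ARRAY[11242939, 1, 2, 3, 4, 5, 42]);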
On 6/3/2010 12:47 PM, Anj Adu wrote:
I can't seem to pinpoint why this query is slow. No full table scans
are being done. The hash join is taking the most time. The table
dev4_act_action has only 3 rows.
The box is a 2-CPU quad-core Intel 5430 with 32 GB RAM... Postgres 8.4.0
1G work_mem
20G effective_
om addentry('2010-06-06 8:00:00', 130);
I do an extra check that if the dates match then the levels match too, but
you wouldn't have to. There is a unique index on adate.
-Andy
'-'||p_day1) d1,
date(extract(YEAR FROM m.taken)||'-'||p_month2||'-'||p_day2) d2
What is a better way to create those dates (without string
concatenation, I presume)?
Dave
I assume you are doing this in a lo
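One way to skip the string round-trip entirely, assuming a release new enough
to have make_date (9.4+); p_month1/p_day1 are the same parameters as above:
  make_date(extract(year FROM m.taken)::int, p_month1::int, p_day1::int) AS d1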
you have indexes on emaildetails(emailid) and vantage_email_track(mailid)?
-Andy
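If either index is missing, a sketch (the index names are arbitrary):
  CREATE INDEX emaildetails_emailid_idx ON emaildetails (emailid);
  CREATE INDEX vantage_email_track_mailid_idx ON vantage_email_track (mailid);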
nk about it. You start two transactions at the same
time. A transaction is defined as "do this set of operations, all of which must succeed or
fail atomicly". One transaction cannot update the exact same row as another transaction
because that would break the second transa
augh) I got about 20. I had to go out of my way (way out) to enable the
disk caching, and even then only got 50 meg a second.
http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm
-Andy
th ints and strings but I couldn't find a way using the &
operator.
-Andy
would help (cuz clustering is
assuming sequential reads),
or if you seq scan a table, it might help (as long as the table is stored
relatively close together).
But if you have a big db that doesn't fit into cache, and you bounce all over
the place doing seeks, I doubt it'll help.
-Andy
FULL is usually bad. Stick to "vacuum analyze" and drop the full.
Do you have indexes on:
test.tid, testresult.fk_tid, questionresult.fk_trid and testresult.trid
-Andy
On 7/15/2010 10:12 AM, Patrick Donlin wrote:
I'll read over that wiki entry, but for now here is the
n it. PG never uses the unique index on id; it
always table scans it... because it's faster.
-Andy
, but when
you fire off "select * from bigtable" pg will create the entire
resultset in memory (and maybe swap?) and then send it all to the client
in one big lump. You might try a cursor and fetch 100-1000 at a time
from the cursor. No idea if it would be faster or slower.
-Andy
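A sketch of the cursor approach in plain SQL (the cursor name is arbitrary;
cursors live inside a transaction):
  BEGIN;
  DECLARE big_cur CURSOR FOR SELECT * FROM bigtable;
  FETCH 1000 FROM big_cur;   -- repeat until it returns no rows
  CLOSE big_cur;
  COMMIT;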
ok. But it's doing a sequential scan.
Are you missing an index?
Also:
http://explain.depesz.com/
is magic.
-Andy
t, etc).
select count(*) for example is always going to be slow... just expect
it; let's not destroy what works well about the database just to make it
fast. Instead, find a better alternative so you don't have to run it.
Just like any database, you have to work within MVCC's good points
and the rest of your data,
including the system catalogs, will still be intact.
If I am reading this right, it means we can run our db safely (with fsync
and full_page_writes enabled) except for tables of our choosing?
If so, I am very +1 for this!
-Andy
re about transactions,
but PG really does. Make sure all your code is properly starting and
committing transactions.
-Andy
On 12/7/2010 1:22 PM, Justin Pitts wrote:
Also, as a fair warning: mssql doesn't really care about transactions, but
PG really does. Make sure all your code is properly starting and committing
transactions.
-Andy
I do not understand that statement. Can you explain it a bit better?
In
On 12/7/2010 2:10 PM, Kenneth Marshall wrote:
On Tue, Dec 07, 2010 at 11:56:51AM -0800, Richard Broersma wrote:
On Tue, Dec 7, 2010 at 11:43 AM, Andy Colson wrote:
In PG the first statement you fire off (an "insert into", for example)
will start a transaction. If you don't com
How'd he get along?
http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov
-Andy
On 12/14/2010 9:41 AM, Jim Nasby wrote:
On Dec 14, 2010, at 9:27 AM, Andy Colson wrote:
Is this the same thing Nick is working on? How'd he get along?
http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov
So it is. The one I replied to stood out because no one had repli
pace for the imagery. The
imagery code uses more CPU than PG does. The database is 98% read,
though, so my setup is different from yours.
My maps get 100K hits a day. The CPUs never go above 20%. I'm
running on a $350 computer, AMD dual core, with 4 IDE disks in softwar
code, one for each database.
In the end, can PG be fast? Yes. Very. But only when you treat it as
PG. If you try to use PG as if it were mssql, you won't be a happy camper.
-Andy
up? *scratches head*
Because it all fit in memory and didn't swap to disk?
-Andy
On 1/13/2011 4:49 PM, Robert Haas wrote:
On Thu, Jan 13, 2011 at 5:47 PM, Andy Colson wrote:
I don't believe there is any case where hashing each individual relation
is a win compared to hashing them all together. If the optimizer were
smart enough to be considering the situation as a
et an "explain analyze"? It give's more info.
(Also, have you seen http://explain.depesz.com/)
Last: If you wanted to force the index usage, for a test, you could drop the
other indexes. I assume this is on a test box, so it should be OK. If it's
live, you could w
-Original Message-
From: Andy Colson [mailto:a...@squeakycode.net]
Sent: Monday, 17 January 2011 5:22 p.m.
To: Jeremy Palmer
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Possible to improve query plan?
First, wow, those are long names... I had a hard time keeping track
ge pattern is (70% read,
small columns, no big blobs (like photos), etc)... and even then we'd
still have to guess.
I can tell you, however, having your readers and writers not block each
other is really nice.
Not only will I not compare apples to oranges, but I really won't compare
app
oops, call them database 'a' and database 'b'.
fashionable non-SQL
databases, but it's pretty well known in wider circles.
--
Craig Ringer
Or... PG is just so good we've never had to use more than one database
server! :-)
-Andy
?
Is the stress package running niced?
-Andy
st and one when it's slow?
Looks to me, in both cases, you are not using much memory at all. (If you
happen to have 'free', its output is a little more readable; if you wouldn't
mind, post that too (only really needed for when the box is slow).)
-Andy
check some of
your sql statements and make sure they are all behaving. You may not
notice a table scan when the user count is low, but you will when it
gets higher.
Have you run each of your queries through explain analyze lately?
Have you checked for bloat?
You are vacuuming/autovacuum
On 1/27/2011 9:09 AM, Michael Kohl wrote:
On Thu, Jan 27, 2011 at 4:06 PM, Andy Colson wrote:
Have you run each of your queries through explain analyze lately?
A code review including checking of queries is on our agenda.
You are vacuuming/autovacuuming, correct?
Sure :-)
Thank you
to the next product
lastprodid = prodid
... etc
> Is there any better way to do it? And how reliable is this?
It makes the SQL really easy, but the code complex... so pick your poison.
-Andy
le cores you
need multiple database connections.
3) If your jobs are I/O-bound, then running multiple jobs may hurt
performance.
Your naive approach is the best. Just spawn off two jobs (or three, or
whatever). I think it's also the only method. (If there is another
method, I don't know what
ead a paper someplace that said shared-cache (L1/L2/etc.) multicore
CPUs would start getting really slow at 16/32 cores, and that message passing
was the way forward past that. If PG started aiming for 128-core support right
now, it should use some kind of message passing with queues thing
On 02/03/2011 10:00 PM, Greg Smith wrote:
Andy Colson wrote:
CPUs won't get faster, but HDs and SSDs will. To have one database
connection, which runs one query, run fast, it's going to need multi-core
support.
My point was that situations where people need to
t in ( select id from details where some set is bad )
and id in ( select anotherid from anothertable where ... )
It's the subselects you need to think about. Find one that gets you a small
set that's interesting somehow. Once you get all your little sets, it's easy
to combine them.
-A
On 03/02/2011 06:12 PM, sverhagen wrote:
Thanks for your help already!
Hope you're up for some more :-)
Andy Colson wrote:
First off, excellent detail.
Second, your explain analyze was hard to read... but since you are not
really interested in your posted query, I won't worry about lo
On 3/3/2011 3:19 AM, sverhagen wrote:
Andy Colson wrote:
For your query, I think a join would be the best bet, can we see its
explain analyze?
Here is a few variations:
SELECT events_events.* FROM events_events WHERE transactionid IN (
SELECT transactionid FROM
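The snippet cuts off there; the join form being suggested would look roughly
like this (the detail table and the filter are assumptions for the sketch):
  SELECT e.*
  FROM events_events e
  JOIN events_eventdetails d ON d.transactionid = e.transactionid
  WHERE d.customerid = 123;   -- placeholder filter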
I have seen really complex geometries cause problems. If you have
thousands of points when 10 would do, try ST_Simplify and see if it
doesn't speed things up.
-Andy
On 3/8/2011 2:42 AM, Andreas Forø Tollefsen wrote:
Hi. Thanks for the comments. My data is right, and the result is exactly
On 3/8/2011 10:58 AM, Andreas Forø Tollefsen wrote:
Andy. Thanks. That is a great tip. I tried it but I get the error:
NOTICE: ptarray_simplify returned a <2 pts array.
Query:
SELECT ST_Intersection(priogrid_land.cell,
ST_Simplify(cshapeswdate.geom,0.1)) AS geom,
priogrid_land.gid AS divi
s quick as it can be?
Thanks.
autovacuum = off
fsync = off
synchronous_commit = off
full_page_writes = off
bgwriter_lru_maxpages = 0
-Andy
On 3/18/2011 9:38 AM, Kevin Grittner wrote:
Andy Colson wrote:
On 03/17/2011 09:25 AM, Michael Andreasen wrote:
I've been looking around for information on doing a pg_restore as
fast as possible.
bgwriter_lru_maxpages = 0
I hadn't thought much about that last one -- d
more than happy to benchmark it and send it back :-)
Or, more seriously, even remote ssh would do.
-Andy
top indicates a significant number of sblock states and
occasional smwai states;
e) ps auxww | grep postgres doesn't show anything abnormal;
f) ESQL applications are very slow.
We VACUUM ANALYZE user databases every four hours. We VACUUM template1
every four hours. We make a copy of the c
Hi Stephen,
Thanks for your input. My follow ups are interleaved below ...
Stephen Frost wrote:
* andy rost ([EMAIL PROTECTED]) wrote:
We're in the process of porting from Informix 9.4 to PostgreSQL 8.1.3.
Our PostgreSQL server is an AMD Opteron Dual Core 275 with two 2.2 Ghz
6
es in excess of 5 may hamper performance).
Thanks again ...
Andy
Mark Kirkwood wrote:
andy rost wrote:
effective_cache_size = 27462# typically 8KB each
This seems like it might be a little low... How much memory do you have
in the system? Then again, with your shared_me
Hi Merlin,
Thanks for the input. Please see below ...
Merlin Moncure wrote:
On 7/5/06, andy rost <[EMAIL PROTECTED]> wrote:
fsync = on # turns forced synchronization
have you tried turning this off and measuring performance?
No, not yet. We'
, with time, be more and more DB applications that
would want to capitalise on the potential speed improvements that come with
not having to work hard to get the right bits in the right bit of memory all
the time?
And finally, am I worrying too much, and actually this problem is common to
all dat
enting with stuff the way it works at
the moment.
Many thanks,
Andy
just a few years. The disk system gets
relegated to a data preload on startup and servicing the writes as the
server does its stuff.
Regards,
Andy
p://www.sgi.com/servers/altix/
(This won lots of awards recently)
The nice thing about the two things above is that they run linux in a single
address space NUMA setup, and in theory you can just bolt on more CPUs and
more RAM as your needs grow.
Thanks,
Andy
- Original Message -
F
io, without breaking the existing usage scenarios of PG in the
traditional 'DB > RAM' scenario?
The answer isn't "undermine the OS". The answer is "make the postmaster able
to build and operate with persistent, query optimised representations of
versystems.com/used-ibm-servers.htm
You could check with them and see what they are selling for. (And maybe what
they'd buy for)
Also, there is always ebay.
-Andy
knees by the sheer number of connections.
check "ps ax|grep http|wc --lines" and make sure its not too big.
(perhaps less than 100)
-Andy
ate && vmstat
Wed Aug 19 10:01:23 CDT 2009
procs ----------memory---------- ---swap-- ----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so   bi   bo   in   cs  us sy id wa
0 0 20920 106376 59220 75310160074 1530 3 10 5 74 12
On Wed, Aug 19, 2