Limit the sub-queries to one row, i.e.:
select 1 from Table2 where Table2.ForeignKey = Table1.PrimaryKey fetch first 1 rows only
Andy.
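The advice above can be sketched out as follows. This is a minimal illustration using Python's sqlite3 as a stand-in for PostgreSQL (SQLite spells the row limit `LIMIT 1` rather than `FETCH FIRST 1 ROWS ONLY`); the Table1/Table2 names come from the post, and the sample data is made up:

```python
import sqlite3

# sqlite3 standing in for PostgreSQL; LIMIT 1 is SQLite's spelling of
# FETCH FIRST 1 ROWS ONLY.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Table1 (PrimaryKey INTEGER PRIMARY KEY);
    CREATE TABLE Table2 (ForeignKey INTEGER);
    INSERT INTO Table1 VALUES (1), (2), (3);
    -- key 1 has several child rows, key 2 has one, key 3 has none
    INSERT INTO Table2 VALUES (1), (1), (1), (2);
""")

# The correlated sub-query stops at its first matching row, so duplicate
# child rows do not multiply the outer result.
rows = conn.execute("""
    SELECT PrimaryKey FROM Table1
    WHERE EXISTS (SELECT 1 FROM Table2
                  WHERE Table2.ForeignKey = Table1.PrimaryKey
                  LIMIT 1)
    ORDER BY PrimaryKey
""").fetchall()
print(rows)  # [(1,), (2,)]
```

Note that `EXISTS` already stops at the first row on its own; the explicit limit just makes the intent unmistakable.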
On 19.02.2013 07:34, Bastiaan Olij wrote:
Hi All,
Hope someone can help me a little bit here:
I've got a query like the following:
--
select Column1, Column2
was expecting a drop off when the
database grew out of memory, but not this much.
Am I really missing the target somewhere?
Any help and/or suggestions will be very much appreciated.
Best regards,
Andy.
http://explain.depesz.com/s/cfb
select distinct tr.nr as tnr
, tr.time_end as tend
know if this is usual and if so where the time's
spent?
Short of getting a faster server, is there anything I can do to influence this?
Thanks,
Andy
the IDX_order_sort_down_2 index for the sort?
Thanks,
Andy
Details..
Version
---
PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2,
64-bit
Tables
--
CREATE TABLE node (
node_id bigint NOT NULL,
node_type int4 NOT NULL,
c_state int4 NOT NULL
of the temp table usage to use
subselect's/views/cte/etc?
-Andy
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
and queries are piling up.
-Andy
?), so
all that traffic still has to go thru some bit of network stack, yes?
-Andy
to make.
-Andy
(maybe it's worth more tests)? That's why we took that path.
No, if you tried it out, I'd stick with what you have. I've never used
them myself, so I was just wondering aloud.
-Andy
each returning one resultset?
Or maybe it would be something you can speed test to see if it would
even make a difference.
-Andy
You could try using Unix domain socket and see if the performance improves. A
relevant link:
http://stackoverflow.com/questions/257433/postgresql-unix-domain-sockets-vs-tcp-sockets
From: Ofer Israeli of...@checkpoint.com
To: pgsql-performance@postgresql.org
for $key (@list)
{
my $q = $db->prepare('select a from b where c = $1');
$q->execute($key);
$result = $q->fetch;
}
This would be right:
my $q = $db->prepare('select a from b where c = $1');
for $key (@list)
{
$q->execute($key);
$result = $q->fetch;
}
-Andy
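For readers not on Perl, the same prepare-once, execute-many pattern looks like this in Python, with sqlite3 standing in for DBD::Pg (the a/b/c names are the placeholders from the post, and the sample rows are made up):

```python
import sqlite3

# Build the statement text once, bind a fresh parameter each iteration.
# (sqlite3 caches prepared statements per connection under the hood.)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE b (a TEXT, c INTEGER)")
db.executemany("INSERT INTO b VALUES (?, ?)", [("x", 1), ("y", 2), ("z", 3)])

query = "SELECT a FROM b WHERE c = ?"   # prepared once, reused below
results = []
for key in [1, 2, 3]:
    row = db.execute(query, (key,)).fetchone()
    results.append(row[0])
print(results)  # ['x', 'y', 'z']
```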
On 03/16/2012 05:30 PM, Kevin Grittner wrote:
Brian Hamlin <mapl...@light42.com> wrote:
On Mar 16, 2012, at 7:17 AM, Kevin Grittner wrote:
Andy Colson <a...@squeakycode.net> wrote:
I tried shared_buffers at both 2400M and 18000M, and it took 4.5
hours both times. ... (weak attempts at humor
! (especially that poetic Dave
Fetter, and that somewhat mean, but helpful, Andy Colson)
Shout outs to my friends Garlynn, Nick and Rush (best band ever!).
Party, my house, next week!
==
(Virtually) Brian Hamlin
GeoCal
OSGeo California Chapter
415-717-4462 cell
-Andy
heard that LVM limits IO, so if you want full speed you
might wanna drop LVM. (And XFS supports increasing fs size, and when are you
ever really gonna want to decrease fs size?).
-Andy
:
SELECT count(*) FROM osm_point WHERE tags @> 'tourism=>junk'
-Andy
On 02/26/2012 01:11 PM, Stefan Keller wrote:
2012/2/26 Andy Colson <a...@squeakycode.net> wrote:
On 02/25/2012 06:16 PM, Stefan Keller wrote:
1. How can I warm up or re-populate shared buffers of Postgres?
2. Are there any hints on how to tell Postgres to read in all table
contents into memory
that I thought should be happening on a db with next
to no usage.
I found setting autovacuum_naptime = 6min made the IO all but vanish.
And if I ever get a wild hair and blow some stuff away, the db will
clean up after me.
-Andy
:
http://archives.postgresql.org/pgsql-hackers/2010-11/msg02028.php
They use it without locks.
-Andy
you watched vmstat and iostat?
Have you read up on synchronous_commit?
-Andy
ANALYZE'?
-Andy
Have you read up on synchronous_commit?
Only a tiny bit. A couple people suggested disabling it since my
database is being hosted on AWS so I did that. It seems a bit risky but
perhaps worth it.
I would think they are running on battery backed IO, with boxes on UPS,
so I'd
should
be able to do HOT updates.
If you have lots of indexes, you should review them, you probably don't
need half of them.
And like Kevin said, try the simple one first. Wont hurt anything, and
if it works, great!
-Andy
-Original Message-
From: Andy Colson [mailto:a...@squeakycode.net]
Sent: Tuesday, February 07, 2012 4:47 PM
To: Ofer Israeli
Cc: pgsql-performance@postgresql.org; Olga Vingurt; Netta Kabala
Subject: Re: [PERFORM] Inserts or Updates
On 2/7/2012 4:18 AM, Ofer Israeli wrote:
Hi all,
We
Oh, I knew I'd seen index usage stats someplace.
Give this a run:
select * from pg_stat_user_indexes where relname = 'SuperBigTable';
http://www.postgresql.org/docs/current/static/monitoring-stats.html
-Andy
index on text column as
well then the performance reduced to 1/8th times. My question is how I
can improve performance when inserting data using index on text
column?
Thanks,
Saurabh
Do it in a single transaction, and use COPY.
-Andy
On 1/27/2012 10:47 AM, Heikki Linnakangas wrote:
On 27.01.2012 15:34, Jayashankar K B wrote:
Hi,
We are having an embedded system with a freescale m68k architecture
based micro-controller, 256MB RAM running a customized version of
Slackware 12 linux.
It's a relatively modest Hardware.
of transaction, look into
commit_delay, it'll help batch commits out to disk (if I remember
correctly).
-Andy
BY m.emailaddress, m.websiteid ';
execute(sql);
Maybe it's that thing where the planner doesn't plan so well with $1
arguments vs actual arguments.
-Andy
into a summary table, and blow away the details. You
wouldn't have to delete the details if you wanted them, just keeping the
summary table updated would be enough.
-Andy
Your results are consistent with the benchmarks I've seen. Intel SSD have much
worse write performance compared to SSD that uses Sandforce controllers, which
Vertex 2 Pro does.
According to this benchmark, at high queue depth the random write performance
of Sandforce is more than 5 times that
is only 5
gig, so you must be doing other things on this box.
-- autovacuum = off
Are you vacuuming by hand!? If not this is a really bad idea (tm)(c)(r)
-Andy
Do you have an Intel 320? I'd love to see tests comparing 710 to 320 and see
if it's worth the price premium.
From: David Boreham david_l...@boreham.org
To: PGSQL Performance pgsql-performance@postgresql.org
Sent: Saturday, October 1, 2011 10:39 PM
Subject:
On Sep 21, 2011, at 8:30 AM, Shaun Thomas wrote:
I wish they would erase that Wiki page, or at least add a disclaimer.
The "they" that you refer to includes you. It's a wiki. You can write your
own disclaimer.
xoa
--
Andy Lester = a...@petdance.com = www.petdance.com = AIM:petdance
enough to drive testing myself.
xoa
--
Andy Lester = a...@petdance.com = www.petdance.com = AIM:petdance
On 9/12/2011 12:15 PM, Robert Klemme wrote:
On 11.09.2011 19:02, Marti Raudsepp wrote:
On Sun, Sep 11, 2011 at 17:23, Andy Colson <a...@squeakycode.net> wrote:
On 09/11/2011 08:59 AM, Igor Chudov wrote:
By the way, does that INSERT UPDATE functionality or something like
this exist in Postgres
1500, 64-bit
Windows 2008 Server Enterprise
Thanks,
Bob
work_mem is not the total a query can use. I believe each step can use
that much, and each backend can use it for multiple bits. So if you had
two backends, each doing 2 sorts, you'd use 2 * 2 = 4 allocations * 2GB = 8GB.
-Andy
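The arithmetic above, spelled out (the 2GB setting and the backend/sort counts are the hypothetical numbers from the post):

```python
# work_mem is per sort/hash step, per backend, not a global cap, so the
# worst case multiplies out.
work_mem_gb = 2
backends = 2
sorts_per_backend = 2
worst_case_gb = work_mem_gb * backends * sorts_per_backend
print(worst_case_gb)  # 8
```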
On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use. I believe each step can
use that much, and each backend can use it for multiple bits. So if
you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB =
8GB
On 9/12/2011 1:22 PM, Robert Schnabel wrote:
On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use. I believe each step can
use that much, and each backend can use it for multiple bits. So if
you had two backends, each
On 9/12/2011 1:22 PM, Robert Schnabel wrote:
On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use. I believe each step can
use that much, and each backend can use it for multiple bits. So if
you had two backends, each
at the expense
of safety and usability. (mysql still cannot do updates with subselects.)
PG is safe and usable at the expense of speed, and you won't be disappointed by
the speed.
-Andy
where not exists(select 1 from realTable where tmp.id
= realTable.id);
-Andy
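The insert-where-not-exists pattern above can be sketched with Python's sqlite3 standing in for PostgreSQL (the tmp/realTable names follow the snippet; the ids are made up):

```python
import sqlite3

# Copy over only the rows whose id is not already in the real table.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE realTable (id INTEGER PRIMARY KEY);
    CREATE TABLE tmp (id INTEGER);
    INSERT INTO realTable VALUES (1), (2);
    INSERT INTO tmp VALUES (2), (3), (4);
""")
db.execute("""
    INSERT INTO realTable
    SELECT id FROM tmp
    WHERE NOT EXISTS (SELECT 1 FROM realTable WHERE tmp.id = realTable.id)
""")
ids = [r[0] for r in db.execute("SELECT id FROM realTable ORDER BY id")]
print(ids)  # [1, 2, 3, 4]
```

The duplicate id 2 is skipped; only the genuinely new ids 3 and 4 land in realTable.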
total space. That's not big
enough, is it?
-Andy
. But, if you ignore the problem for two years, and have super
really bad table bloat, well, maybe backup/restore is best.
-Andy
.
fsync | off
Seriously?
-Andy
There are 30 DBs in total on the server coming in at 226GB. The one that's
used the most is 67GB and there are another 29 that come to 159GB.
I'd really appreciate it if you could review my configurations below and make
any suggestions
884812 4051293200 464 168 353 92 4 2 84
9
Only one line? That does not help much. Can you run it as 'vmstat 2' and let
it run while a few slow queries are performed? Then paste all the lines?
-Andy
, maybe the stats are old and you have bad plans? It
could also be major updates to the data too (as opposed to growth).
Gerhard, have you done an 'explain analyze' on any of your slow queries? Have
you done an analyze lately?
-Andy
On 09/05/2011 08:57 AM, Richard Shaw wrote:
Hi Andy,
It's not a new issue no, It's a legacy system that is in no way ideal but is
also not in a position to be overhauled. Indexes are correct, tables are up to
25 million rows.
On startup, it hits CPU more than IO, I'll provide some
in the
first place :-) )
Good catch, thanks Scott.
-Andy
On 8/30/2011 8:33 PM, Craig Ringer wrote:
On 31/08/2011 4:30 AM, Andy Colson wrote:
Hi all,
I have read things someplace saying not exists was better than not
in... or something like that. Not sure if that was for in/exists and
not in/not exists, and for a lot of records or not.
`EXISTS' may
When you ran it, did it really feel like 30 seconds? Or did it come
right back real quick?
Because your report says:
35.833 ms
That's ms, or milliseconds, or 0.035 seconds.
-Andy
On 8/31/2011 8:04 AM, Kai Otto wrote:
Hi all,
I am running a simple query:
SELECT * FROM public.“Frame
On 8/31/2011 1:51 PM, Alan Hodgson wrote:
On August 31, 2011 11:26:57 AM Andy Colson wrote:
When you ran it, did it really feel like 30 seconds? Or did it come
right back real quick?
Because your report says:
35.833 ms
That's ms, or milliseconds, or 0.035 seconds.
I think
);
delete from public.general a where exists(select 1 from upd.general b
where a.gid=b.gid);
Thanks for any suggestions,
-Andy
According to the specs for database storage:
Random 4KB writes: Up to 600 IOPS
Is that for real? 600 IOPS is *atrociously terrible* for an SSD. Not much
faster than mechanical disks.
Has anyone done any performance benchmark of 320 used as a DB storage? Is it
really that slow?
On 8/19/2011 4:03 AM, Krzysztof Chodak wrote:
Is there any performance benefit of using constant size tuples?
If you are referring to varchar(80) vs text, then no, there is no benefit.
-Andy
, then don't
fix it :-)
You say reporting queries are fast, and the disks should take care of
your slow write problem from before. (Did you test the write
performance?) So, what's wrong?
-Andy
to unix socket /var/run/postgresql/.s.PGSQL.5432,
but your postgresql.conf file probably has:
unix_socket_directory = '/tmp'
Change it to:
unix_socket_directory = '/var/run/postgresql'
and restart PG.
-Andy
cores. Everyone with more than 8 cores and 64 gig of ram is off
my Christmas list! :-) )
-Andy
On 8/17/2011 1:55 PM, Ogden wrote:
On Aug 17, 2011, at 1:48 PM, Andy Colson wrote:
On 8/17/2011 1:35 PM, k...@rice.edu wrote:
On Wed, Aug 17, 2011 at 01:32:41PM -0500, Ogden wrote:
On Aug 17, 2011, at 1:31 PM, k...@rice.edu wrote:
On Wed, Aug 17, 2011 at 01:26:56PM -0500, Ogden wrote:
I
On 8/3/2011 11:03 PM, Craig Ringer wrote:
great gobs of battery backed write cache DRAM.
Now I know what I'm asking Santa for Christmas this year!
-Andy
, absolute, system. Linux on metal is
the way to go.
(This is all speculation and personal opinion, I have no numbers to back
anything up)
-Andy
--- On Mon, 7/18/11, David Rees dree...@gmail.com wrote:
In this case is BBU still needed? If I put 2 SSD
in software RAID 1, would
that be any slower than 2 SSD in HW RAID 1 with
BBU? What are the pros and
cons?
What will perform better will vary greatly depending on the
exact
I'm not comparing SSD in SW RAID with rotating disks
in HW RAID with
BBU though. I'm just comparing SSDs with or without
BBU. I'm going to
get a couple of Intel 320s, just want to know if BBU
makes sense for
them.
Yes, it certainly does, even if you have a RAID BBU.
even if you
, would that
be any slower than 2 SSD in HW RAID 1 with BBU? What are the pros and cons?
Thanks.
Andy
, and the stats from 'select * from pg_stats_something'
We (or, you really) could compare the seq_page_cost and random_page_cost
from the config to the stats collected by PG and determine they are way
off... and you should edit your config a little and restart PG.
-Andy
--- On Wed, 4/6/11, Scott Carey sc...@richrelevance.com wrote:
I could care less about the 'fast' sandforce drives.
They fail at a high
rate and the performance improvement is BECAUSE they are
using a large,
volatile write cache.
The G1 and G2 Intel MLC also use volatile write cache,
than happy to benchmark it and send it back :-)
Or, more seriously, even remote ssh would do.
-Andy
On 3/18/2011 9:38 AM, Kevin Grittner wrote:
Andy Colson <a...@squeakycode.net> wrote:
On 03/17/2011 09:25 AM, Michael Andreasen wrote:
I've been looking around for information on doing a pg_restore as
fast as possible.
bgwriter_lru_maxpages = 0
I hadn't thought much about that last one
be?
Thanks.
autovacuum = off
fsync = off
synchronous_commit = off
full_page_writes = off
bgwriter_lru_maxpages = 0
-Andy
I have seen really complex geometries cause problems. If you have
thousands of points, when 10 would do, try ST_Simplify and see if it
doesn't speed things up.
-Andy
On 3/8/2011 2:42 AM, Andreas Forø Tollefsen wrote:
Hi. Thanks for the comments. My data is right, and the result is exactly
On 3/8/2011 10:58 AM, Andreas Forø Tollefsen wrote:
Andy. Thanks. That is a great tips. I tried it but i get the error:
NOTICE: ptarray_simplify returned a 2 pts array.
Query:
SELECT ST_Intersection(priogrid_land.cell,
ST_Simplify(cshapeswdate.geom,0.1)) AS geom,
priogrid_land.gid AS divider
On 3/3/2011 3:19 AM, sverhagen wrote:
Andy Colson wrote:
For your query, I think a join would be the best bet, can we see its
explain analyze?
Here is a few variations:
SELECT events_events.* FROM events_events WHERE transactionid IN (
SELECT transactionid FROM
On 03/02/2011 06:12 PM, sverhagen wrote:
Thanks for your help already!
Hope you're up for some more :-)
Andy Colson wrote:
First off, excellent detail.
Second, your explain analyze was hard to read... but since you are not
really interested in your posted query, I wont worry about looking
... )
It's the subselects you need to think about. Find one that gets you a small set
that's interesting somehow. Once you get all your little sets, it's easy to
combine them.
-Andy
--- On Sun, 2/6/11, Linos i...@linos.es wrote:
I am studying too the possibility of use an OCZ Vertex 2
Pro with Flashcache or Bcache to use it like a second level
filesystem cache, any comments on that please?
OCZ Vertex 2 Pro is a lot more expensive than other SSD of comparable
, but the code is complex... so pick your poison.
-Andy
are IO bound, then running multiple jobs may hurt
performance.
Your naive approach is the best. Just spawn off two jobs (or three, or
whatever). I think it's also the only method. (If there is another
method, I don't know what it would be.)
-Andy
, and that message passing
was the way forward past that. If PG started aiming for 128 core support right
now, it should use some kinda message passing with queues thing, yes?
-Andy
On 02/03/2011 10:00 PM, Greg Smith wrote:
Andy Colson wrote:
CPUs won't get faster, but HDs and SSDs will. To have one database
connection, which runs one query, run fast, it's going to need multi-core
support.
My point was that situations where people need to run one query on one
of
your sql statements and make sure they are all behaving. You may not
notice a table scan when the user count is low, but you will when it
gets higher.
Have you run each of your queries through explain analyze lately?
Have you checked for bloat?
You are vacuuming/autovacuuming, correct?
-Andy
On 1/27/2011 9:09 AM, Michael Kohl wrote:
On Thu, Jan 27, 2011 at 4:06 PM, Andy Colson <a...@squeakycode.net> wrote:
Have you run each of your queries through explain analyze lately?
A code review including checking of queries is on our agenda.
You are vacuuming/autovacuuming, correct?
Sure
and one when it's slow?
Looks to me, in both cases, you are not using much memory at all. (If
you happen to have 'free', its output is a little more readable; if you
wouldn't mind, post it. Only really need it for when the box is slow.)
-Andy
running niced?
-Andy
well known in wider circles.
--
Craig Ringer
Or... PG is just so good we've never had to use more than one database
server! :-)
-Andy
photos), etc)... and even then we'd
still have to guess.
I can tell you, however, having your readers and writers not block each
other is really nice.
Not only will I not compare apples to oranges, but I really wont compare
apples in Canada to oranges in Japan. :-)
-Andy
oops, call them database 'a' and database 'b'.
it should be ok. If it's
live, you could wrap it in a BEGIN ... ROLLBACK (in theory... never tried it
myself).
-Andy
-Original Message-
From: Andy Colson [mailto:a...@squeakycode.net]
Sent: Monday, 17 January 2011 5:22 p.m.
To: Jeremy Palmer
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Possible to improve query plan?
First, wow, those are long names... I had a hard time keeping track
to re-de-duplicate at the
end. But then why did the OP get a speedup? *scratches head*
Because it all fit in memory and didn't swap to disk?
-Andy
On 1/13/2011 4:49 PM, Robert Haas wrote:
On Thu, Jan 13, 2011 at 5:47 PM, Andy Colson <a...@squeakycode.net> wrote:
I don't believe there is any case where hashing each individual relation
is a win compared to hashing them all together. If the optimizer were
smart enough to be considering
uses more cpu than PG does. The database is 98% read,
though, so my setup is different than yours.
My maps get 100K hits a day. The CPUs never use more than 20%. I'm
running on a $350 computer, AMD dual core, with 4 IDE disks in software
raid-5. On Slackware Linux, of course!
-Andy
different bits of
code, one for each database.
In the end, can PG be fast? Yes. Very. But only when you treat it as
PG. If you try to use PG as if it were mssql, you won't be a happy camper.
-Andy
along?
http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov
-Andy
On 12/14/2010 9:41 AM, Jim Nasby wrote:
On Dec 14, 2010, at 9:27 AM, Andy Colson wrote:
Is this the same thing Nick is working on? How'd he get along?
http://archives.postgresql.org/message-id/4cd1853f.2010...@noaa.gov
So it is. The one I replied to stood out because no one had replied
We use ZFS and use SSDs for both the log device and
L2ARC. All disks
and SSDs are behind a 3ware with BBU in single disk
mode.
Out of curiosity why do you put your log on SSD? Log is all sequential IOs, an
area in which SSD is not any faster than HDD. So I'd think putting log on SSD
The common knowledge you based that comment on, may
actually not be very up-to-date anymore. Current
consumer-grade SSD's can achieve up to 200MB/sec when
writing sequentially and they can probably do that a lot
more consistent than a hard disk.
Have a look here:
If you are IO-bound, you might want to consider using SSD.
A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10.
--- On Wed, 12/8/10, Benjamin Krajmalnik k...@servoyant.com wrote:
From: Benjamin Krajmalnik k...@servoyant.com
Subject: [PERFORM] Hardware recommendations
To:
If you are IO-bound, you might want to consider using
SSD.
A single SSD could easily give you more IOPS than 16
15k SAS in RAID 10.
Are there any that don't risk your data on power loss, AND
are cheaper
than SAS RAID 10?
Vertex 2 Pro has a built-in supercapacitor to save data on
really does. Make sure all your code is properly starting and
committing transactions.
-Andy
On 12/7/2010 1:22 PM, Justin Pitts wrote:
Also, as a fair warning: mssql doesn't really care about transactions, but
PG really does. Make sure all your code is properly starting and committing
transactions.
-Andy
I do not understand that statement. Can you explain it a bit better?
In mssql
On 12/7/2010 2:10 PM, Kenneth Marshall wrote:
On Tue, Dec 07, 2010 at 11:56:51AM -0800, Richard Broersma wrote:
On Tue, Dec 7, 2010 at 11:43 AM, Andy Colson <a...@squeakycode.net> wrote:
In PG the first statement you fire off (like an insert into for example)
will start a transaction. If you
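The commit-or-lose-it behavior being described can be demonstrated with Python's sqlite3 standing in for PostgreSQL (the accounts table and its values are made up for illustration):

```python
import sqlite3

# Work done after the implicit BEGIN is undone unless you commit.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (balance INTEGER)")
db.execute("INSERT INTO accounts VALUES (100)")
db.commit()

db.execute("UPDATE accounts SET balance = balance - 40")
db.rollback()   # never committed, so the update is gone

balance = db.execute("SELECT balance FROM accounts").fetchone()[0]
print(balance)  # 100
```

Code ported from mssql that assumes every statement is durably applied on its own will silently lose work like this under PG's transaction semantics.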