On 2012-10-08 23:45, Craig James wrote:
This is driving me crazy. A new server, virtually identical to an old
one, has 50% of the performance with pgbench. I've checked everything
I can think of.
The setups (call the servers old and new):
old: 2 x 4-core Intel Xeon E5620
new: 4 x 4-core
supplier, which were suggested by OCZ as a replacement for the
Vertex 2 Pro's. They're marketed as safe under power failure, and our
tests with the diskchecker tool confirmed that.
regards,
Yeb Havinga
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On 2012-04-26 04:40, Venki Ramachandran wrote:
Thanks Tom, clock_timestamp() worked. Appreciate it! Sorry, I was
hurrying to get this done at work and hence did not read through.
Can you comment on how you would solve the original problem? Even if I
can get the 11 seconds down to 500 ms
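For measuring elapsed time inside a transaction, the key point behind the fix above is that now() is frozen at transaction start while clock_timestamp() advances; a minimal sketch:

```sql
BEGIN;
SELECT now() AS txn_start, clock_timestamp() AS wall_clock;
SELECT pg_sleep(1);
-- now() still returns the transaction start time; clock_timestamp() has
-- moved on, so elapsed time within a transaction must be measured with
-- clock_timestamp().
SELECT clock_timestamp() - now() AS elapsed_in_txn;
COMMIT;
```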
On 2012-03-23 05:53, Sebastian Melchior wrote:
Hi,
we already used iostat and iotop during times of the slowdown; there is no
sudden drop in I/O workload at those times. Also, iowait does
not spike and stays as before.
So I do not think that this is I/O related. As the disks
On 2012-03-07 01:36, Mark Kirkwood wrote:
On 06/03/12 21:17, Yeb Havinga wrote:
One thing to note is that Linux software RAID with md doesn't support
discard, which might shorten the drive's expected lifetime. To get
some numbers I tested the RAID 1 SSD setup for media wear under
showed a
significant performance regression in the higher thread counts that we
decided to go for the other brand for all servers.
--
Yeb Havinga
http://www.mgrid.net/
Mastering Medical Data
On 2012-03-06 09:34, Andrea Suisani wrote:
On 03/06/2012 09:17 AM, Yeb Havinga wrote:
PS: we applied the same philosophy (different brands) also to
motherboards, IO controllers and memory, but after testing, we liked
one IO controller's software so much more than the other's that we chose
On 2011-11-02 22:08, Merlin Moncure wrote:
On Wed, Nov 2, 2011 at 3:45 PM, Yeb Havinga <yebhavi...@gmail.com> wrote:
Intel latency graph at http://imgur.com/Hh3xI
Ocz latency graph at http://imgur.com/T09LG
curious: what were the pgbench results in terms of tps?
merlin
Both comparable near
On 2011-11-03 15:31, Shaun Thomas wrote:
On 11/03/2011 04:38 AM, Yeb Havinga wrote:
Both comparable near 10K tps.
That's another thing I was wondering about. Why are we talking about
Vertex 2 Pro's, anyway? The Vertex 3 Pros post much better results and
are still capacitor-backed
On 2011-11-02 16:06, Magnus Hagander wrote:
On Wed, Nov 2, 2011 at 16:04, Yeb Havinga <yebhavi...@gmail.com> wrote:
On 2011-11-02 15:06, Kevin Grittner wrote:
Yeb Havinga <yebhavi...@gmail.com> wrote:
I'm now contemplating not using the 710 at all. Why should I not
buy two 6Gbps SSDs without
Hello list,
An OCZ Vertex 2 Pro and an Intel 710 SSD, both 100GB, in a software RAID 1
setup. I was pretty convinced this was the perfect solution to run
PostgreSQL on SSDs without an IO controller with BBU. No worries about
strange firmware bugs because of two different drives, good write
On 2011-11-02 15:06, Kevin Grittner wrote:
Yeb Havinga <yebhavi...@gmail.com> wrote:
I'm now contemplating not using the 710 at all. Why should I not
buy two 6Gbps SSDs without a supercap (e.g. Intel 510 and OCZ Vertex
3 Max IOPS) with an IO controller+BBU?
Wouldn't the data be subject to loss
On 2011-11-02 15:26, Merlin Moncure wrote:
On Wed, Nov 2, 2011 at 8:05 AM, Yeb Havinga <yebhavi...@gmail.com> wrote:
Hello list,
An OCZ Vertex 2 Pro and an Intel 710 SSD, both 100GB, in a software RAID 1
setup. I was pretty convinced this was the perfect solution to run
PostgreSQL on SSDs without a
On 2011-11-02 16:16, Yeb Havinga wrote:
On 2011-11-02 15:26, Merlin Moncure wrote:
I would keep at least 20-30% of both drives unpartitioned to leave the
controller room for wear leveling as well as other stuff. I'd try
wiping the drives, repartitioning, and repeating your test. I would
also
On 2011-07-19 09:56, Florian Weimer wrote:
* Yeb Havinga:
The biggest drawback of 2 SSD's with supercap in hardware raid 1, is
that if they are both new and of the same model/firmware, they'd
probably reach the end of their write cycles at the same time, thereby
failing simultaneously.
I
5.41, whereas e.g.
Scientific Linux 6 has 5.39).
tool in smartmontools-5.4x:
/usr/sbin/update-smart-drivedb :-)
On 2011-07-18 03:43, Andy wrote:
Hi,
Is BBU still needed with SSD?
SSD has its own cache. And in certain models such as Intel 320 that cache is
backed by capacitors. So in a sense that cache acts as a BBU that's backed by
capacitors instead of batteries.
In this case is BBU still needed? If
if there was yet another supercapped SSD brand,
with a modified md software raid that reads all three drives at once and
compares results, instead of the occasional check. If at least two
drives agree on the contents, return the data.
On 2011-06-21 09:51, Yeb Havinga wrote:
On 2011-06-21 08:33, Greg Smith wrote:
On 06/20/2011 11:54 PM, Dan Harris wrote:
I'm exploring the combination of an Areca 1880ix-12 controller with
6x OCZ Vertex 3 V3LT-25SAT3 2.5 240GB SATA III drives in RAID-10.
Has anyone tried this combination
in a
workstation. The raw value of SSD Available Reserved Space seems to be a
good candidate to watch as it approaches 0, since the pgbenched drive has
16GB left and the workstation disk 17GB. It would be cool to graph with e.g.
symon (http://i.imgur.com/T4NAq.png)
On 2011-06-21 22:10, Yeb Havinga wrote:
There's some info buried in
http://archives.postgresql.org/pgsql-performance/2011-03/msg00350.php
where two Vertex 2 pro's are compared; the first has been really
hammered with pgbench, the second had a few months duty in a
workstation. The raw
and OLAP-like IO patterns, see
http://www.oracle.com/technetwork/topics/index-089595.html
selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
as well: an SSD with supercap that acts as battery backup)
maybe another one or two spindled 2.5" drives for archive/backup.
Supermicro 113TQ-563UB chassis
At the time I looked this up, I could buy it for just over €3000,-
regards
Yeb Havinga
PS: I'm in no way involved with either of the manufacturers
Greg Smith wrote:
Yeb Havinga wrote:
model name : AMD Phenom(tm) II X4 940 Processor @ 3.00GHz
cpu cores : 4
stream compiled with -O3
Function Rate (MB/s) Avg time Min time Max time
Triad: 5395.1815 0.0089 0.0089 0.0089
I'm not sure if Yeb's
/Product.aspx?Item=N82E16813182230Tpk=H8SGL-F)
and some memory?
regards,
Yeb Havinga
Scott Marlowe wrote:
On Tue, Aug 31, 2010 at 6:41 AM, Yeb Havinga yebhavi...@gmail.com wrote:
export OMP_NUM_THREADS=4
Then I get the following. The rather weird dip at 5 threads is consistent
over multiple tries:
I get similar dips on my server. Especially as you make
Jose Ildefonso Camargo Tolosa wrote:
Also, nowadays, Intel has better performance than AMD, at least when
comparing Athlon 64 vs Core2, I'm still saving to get a Phenom II
system in order to benchmark them and see how it goes (does anyone
have one of these for testing?).
Scott Marlowe wrote:
On Mon, Aug 30, 2010 at 1:58 AM, Yeb Havinga yebhavi...@gmail.com wrote:
four parallel
r...@p:~/ff/www.cs.virginia.edu/stream/FTP/Code# ./a.out ./a.out ./a.out
./a.out
You know you can just do stream 4 to get 4 parallel streams right?
Which version
Greg Smith wrote:
This comes up every year or so. The ability of GPU offloading to help
with sorting has to overcome the additional latency that comes from
copying everything over to it and then getting all the results back.
If you look at the typical types of sorting people see in
/gregs1104/pgbench-tools - my experience with
it is that it takes less than 10 minutes to set up and run and after some
time you get rewarded with nice pictures! :-)
regards,
Yeb Havinga
.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html
Is this what everyone else is seeing?
I tested a SSD with a capacitor and posted conclusions here
http://archives.postgresql.org/pgsql-performance/2010-07/msg00449.php
regards,
Yeb Havinga
in postgres).
regards,
Yeb Havinga
this question.
Do you have any stats on how much WAL is written for 8kb and 4kb test
cases ?
Would some iostat -xk 1 for each partition suffice?
And for other disk i/o during the tests ?
Not existent.
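For the WAL-volume question, one way to measure it on the server versions discussed in this thread is to snapshot the WAL insert position around each run (pg_current_xlog_location() is the pre-9.2 function name; 9.2 adds pg_xlog_location_diff() for the subtraction):

```sql
-- Snapshot the WAL insert position before and after each pgbench run;
-- the difference between the two positions is the WAL volume written.
SELECT pg_current_xlog_location() AS wal_pos_before;
-- ... run the 8kb or 4kb test ...
SELECT pg_current_xlog_location() AS wal_pos_after;
-- On 9.2+: SELECT pg_xlog_location_diff(wal_pos_after, wal_pos_before);
```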
regards,
Yeb Havinga
Yeb Havinga wrote:
Hannu Krosing wrote:
Did it fit in shared_buffers, or system cache ?
Database was ~5GB, server has 16GB, shared buffers was set to 1920MB.
I first noticed this several years ago, when doing a COPY to a large
table with indexes took noticeably longer (2-3 times longer
Merlin Moncure wrote:
On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga yebhavi...@gmail.com wrote:
Postgres settings:
8.4.4
--with-blocksize=4
I saw about 10% increase in performance compared to 8KB blocksizes.
That's very interesting -- we need more testing in that department
in its buffer and somehow arrange them into sets of 128KB or
256KB writes for the flash chips. See also
http://www.anandtech.com/show/2899/2
But I ran out of ideas to test, so I'm going to test it anyway.
regards,
Yeb Havinga
Yeb Havinga wrote:
Michael Stone wrote:
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
I know I'm talking development now but is there a case for a pg_xlog
block
device to remove the file system overhead and guarantee your data is
written sequentially every time
be improved. Maybe rewriting the top query
to not do bitmap heap scans in subqueries or inner loops?
regards,
Yeb Havinga
Yeb Havinga wrote:
Greg Smith wrote:
Put it on ext3, toggle on noatime, and move on to testing. The
overhead of the metadata writes is the least of the problems when
doing write-heavy stuff on Linux.
I ran a pgbench run and power failure test during pgbench with a 3
year old computer
Matthew Wakeling wrote:
On Sun, 25 Jul 2010, Yeb Havinga wrote:
Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at
http://tinypic.com/r/x5e846/3
Does your latency graph really have milliseconds as the y axis?
Yes
If so, this device is really slow - some requests have a latency
I see now I forgot to mention it was an 8.4.4 Postgres version.
regards,
Yeb Havinga
Greg Smith wrote:
Yeb Havinga wrote:
Please remember that particular graphs are from a read/write pgbench
run on a bigger than RAM database that ran for some time (so with
checkpoints), on a *single* $435 50GB drive without BBU raid controller.
To get similar *average* performance results
Yeb Havinga wrote:
To get similar *average* performance results you'd need to put about
4 drives and a BBU into a server. The
Please forget this question, I now see it in the mail i'm replying to.
Sorry for the spam!
-- Yeb
Yeb Havinga wrote:
8GB DDR2 something..
(lots of details removed)
Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at
http://tinypic.com/r/x5e846/3
Thanks http://www.westnet.com/~gsmith/content/postgresql/pgbench.htm for
the gnuplot and psql scripts!
of not
comparing apples with pears I'd have to go with ext2 on the rotating
data disk as well.
Do you guys have any more ideas to properly put this disk through its paces?
regards,
Yeb Havinga
and compare the amount of data per
x pgbench transactions.
Put it on ext3, toggle on noatime, and move on to testing. The
overhead of the metadata writes is the least of the problems when
doing write-heavy stuff on Linux.
Will surely do and post the results.
thanks,
Yeb Havinga
Yeb Havinga wrote:
diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes; 39/s)
Total errors: 0
:-)
OTOH, I now notice the 39 write /s .. If that means ~ 39 tps... bummer.
Yeb Havinga wrote:
Yeb Havinga wrote:
diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes;
39/s)
Total errors: 0
:-)
OTOH, I now notice the 39 write /s .. If that means ~ 39 tps... bummer.
When playing with it a bit more, I couldn't get the test_file to be
created
=234409763) ;
regards,
Yeb Havinga
PS: the analyze time of the slow query showed 0.007ms?
Zotov wrote:
I have a query:
SELECT d1.ID, d2.ID
FROM DocPrimary d1
JOIN DocPrimary d2 ON d2.BasedOn=d1.ID
WHERE (d1.ID=234409763) or (d2.ID=234409763)
I think the QO (Query Optimizer) can make
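One rewrite the optimizer does not perform automatically here is splitting the OR spanning both sides of the join into a UNION, so that each branch has a single indexable condition on ID; a hand-written sketch:

```sql
-- Each branch can use the index on ID; UNION removes duplicate rows.
SELECT d1.ID, d2.ID
FROM DocPrimary d1
JOIN DocPrimary d2 ON d2.BasedOn = d1.ID
WHERE d1.ID = 234409763
UNION
SELECT d1.ID, d2.ID
FROM DocPrimary d1
JOIN DocPrimary d2 ON d2.BasedOn = d1.ID
WHERE d2.ID = 234409763;
```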
are stored.
Hello Jouanin,
Could you give some more information following the guidelines from
http://wiki.postgresql.org/wiki/SlowQueryQuestions ?
Essential are the contents from both conf files (comments may be removed).
regards,
Yeb Havinga
imported database that was
not ANALYZEd, it would explain the different and likely bad plan. If you
want to know for sure this is the cause, instead of e.g. faulty
hardware, you could verify by redoing the import + query without ANALYZE.
regards,
Yeb Havinga
('manufacturer'::text)))
It looks like seq_scans are disabled, since the index scan has only a
filter expression but not an index cond.
regards,
Yeb Havinga
Regds
Rajesh Kumar Mallah.
explain analyze SELECT * from ( SELECT
a.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords
a good idea to use pgtune
(on pgfoundry) to get some reasonable ballpark settings for your hardware.
regards,
Yeb Havinga
oblivious algorithms match with this
problem to find a suitable virtual page format.
regards,
Yeb Havinga
Kevin Grittner wrote:
Yeb Havinga yebhavi...@gmail.com wrote:
concerning gist indexes:
1) larger block sizes, and hence a larger # of entries per GiST
page, result in more generic keys on those pages. This in turn
results in a greater number of hits when the index is queried, so
?)
regards,
Yeb Havinga
this would still be
possible, if the leap year problem could be 'fixed' by an additional
condition in the where clause that filters the surplus records.
regards,
Yeb Havinga
immutable
strict;
and that is the function that was used with all the other output (it can
be seen inlined in the explain output). I did not catch this until after
the post.
regards,
Yeb Havinga
Matthew Wakeling wrote:
On Fri, 21 May 2010, Yeb Havinga wrote:
For time based data I would for sure go for year based indexing.
On the contrary, most of the queries seem to be over many years, but
rather restricting on the time of year. Therefore, partitioning by
month or some other per
))
-> Bitmap Index Scan on doy_i (cost=0.00..4.85 rows=10 width=0)
Index Cond: ((doy(t) >= 10::double precision) AND (doy(t) <= 20::double precision))
(7 rows)
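A sketch of the kind of setup that produces a plan like the one above (table and column names are guesses; the thread elsewhere defines the function as immutable strict):

```sql
-- Hypothetical day-of-year function; declared IMMUTABLE so it can be
-- indexed (strictly, date_part on timestamptz depends on TimeZone).
CREATE FUNCTION doy(timestamptz) RETURNS double precision
    AS $$ SELECT date_part('doy', $1) $$
    LANGUAGE SQL IMMUTABLE STRICT;

CREATE TABLE events (t timestamptz);
CREATE INDEX doy_i ON events (doy(t));

-- A time-of-year restriction can then use the functional index:
SELECT * FROM events WHERE doy(t) >= 10 AND doy(t) <= 20;
```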
regards,
Yeb Havinga
) AS text)||'-12-31')::date
Either I had too little coffee and completely misunderstood this
expression, or it is always true and can be omitted. Could you explain a
bit what this part tries to do and maybe also show its original
counterpart in the source database?
regards,
Yeb Havinga
but it is an interesting thing to inspect, and perhaps rewrite
the query to use constraint exclusion.
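Constraint exclusion, for reference, lets the planner skip partitions whose CHECK constraint contradicts the WHERE clause; a minimal sketch with hypothetical table names:

```sql
-- Parent table plus one child table per year.
CREATE TABLE measurements (t date, v int);
CREATE TABLE measurements_2009 (CHECK (t >= '2009-01-01' AND t < '2010-01-01'))
    INHERITS (measurements);
CREATE TABLE measurements_2010 (CHECK (t >= '2010-01-01' AND t < '2011-01-01'))
    INHERITS (measurements);

-- With this on, the query below scans only measurements_2010:
SET constraint_exclusion = on;
SELECT * FROM measurements WHERE t >= '2010-03-01' AND t < '2010-04-01';
```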
regards,
Yeb Havinga
? In particular, it would be interesting to know how many
concurrent connections are active running what mix of queries.
It would also be interesting to know how many disks there are in the new
server, and the size of the database (select
pg_size_pretty(pg_database_size('yourdb'))).
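Both numbers can be pulled straight from the catalogs ('yourdb' is a placeholder):

```sql
-- Number of backends currently connected:
SELECT count(*) FROM pg_stat_activity;

-- On-disk size of the database:
SELECT pg_size_pretty(pg_database_size('yourdb'));
```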
regards,
Yeb
artificial, or are
your statistics old, or your histogram/MCVs too small?
regards,
Yeb Havinga
artificial, or are your statistics old, or your histogram/MCVs too small?
Nope, I suppose this is because of limit. If I remove the limit, the
estimations are quite correct. There are ~6 million rows in each table.
Yes, that makes sense.
regards,
Yeb Havinga
(btree) operator where 'Leibniz' would fail in this case.
regards,
Yeb Havinga
?
regards,
Yeb Havinga
to see
how creative people can be in finding ways to not properly use (your) software ;-)
regards,
Yeb Havinga
of not shutting down the
machine properly when e.g. the remote was missing).
regards,
Yeb Havinga
Yeb Havinga wrote:
Greg Smith wrote:
Tom Lane wrote:
So has anyone looked at porting MythTV to PG?
Periodically someone hacks together something that works, last big
effort I'm aware of was in 2006, and then it bit rots away. I'm sure
we'd get some user uptake on the result--MySQL
Matthew Wakeling wrote:
On Sat, 20 Mar 2010, Yeb Havinga wrote:
The GiST virtual pages would then match more closely the original
blocksizes that were used in Guttman's R-tree paper (first google result,
then figure 4.5).
Since the nature/characteristics of the underlying datatypes and keys
Theodor or Oleg could say something about how easy or hard it is to
do?
regards,
Yeb Havinga
Regards,
Ken
On Fri, Mar 19, 2010 at 09:49:30PM +0100, Yeb Havinga wrote:
Yeb Havinga wrote:
Since the gistpagesize is derived from the database blocksize, it might
be
wise to set the blocksize
Yeb Havinga wrote:
Yeb Havinga wrote:
Matthew Wakeling wrote:
Matthew Wakeling wrote:
A second quite distinct issue is the general performance of GiST
indexes
which is also mentioned in the old thread linked from Open Items. For
that, we have a test case at
http://archives.postgresql.org
Yeb Havinga wrote:
Matthew Wakeling wrote:
A second quite distinct issue is the general performance of GiST
indexes
which is also mentioned in the old thread linked from Open Items.
For
that, we have a test case at
http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php
Yeb Havinga wrote:
Since the gistpagesize is derived from the database blocksize, it
might be wise to set the blocksize low for this case, I'm going to
play with this a bit more.
Ok, one last mail before it turns into spam: with a 1KB database
blocksize, the query now runs in 30 seconds
Yeb Havinga wrote:
Matthew Wakeling wrote:
Matthew Wakeling wrote:
A second quite distinct issue is the general performance of GiST
indexes
which is also mentioned in the old thread linked from Open Items. For
that, we have a test case at
http://archives.postgresql.org/pgsql-performance/2009
when scanning the index, because the constants that are queried with
are repeatedly compressed and palloc'ed.
regards,
Yeb Havinga
events_event_types ON
eventType_id=events_event_types.id WHERE severity=70 AND (eventType_id
IN (71, 999)) ORDER BY datetime DESC LIMIT 50;
regards
Yeb Havinga
might arise when the join is changed to inner.
regards
Yeb Havinga
-query.html. And
also to 'help' the planner: I'd just change the query to an inner join
in this case, since there cannot be null tuples in the right hand side here.
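The kind of rewrite meant here, as a sketch with hypothetical tables: when the WHERE clause filters on a column of the right-hand table, the NULL-extended rows a LEFT JOIN adds are discarded anyway, so an INNER JOIN is equivalent and gives the planner more freedom to reorder joins.

```sql
-- Original: the filter on b.status throws away the NULL rows the
-- LEFT JOIN would add, so the outer join buys nothing.
SELECT a.id, b.status
FROM a LEFT JOIN b ON b.a_id = a.id
WHERE b.status = 'active';

-- Equivalent, and easier for the planner:
SELECT a.id, b.status
FROM a JOIN b ON b.a_id = a.id
WHERE b.status = 'active';
```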
regards,
Yeb Havinga
Kenneth Marshall wrote:
EXISTS matches NULLs too and since they are not indexed a
sequential scan is needed to check for them. Try using
IN instead.
This is nonsense in more than one way.
regards
Yeb Havinga
Yeb Havinga wrote:
Kenneth Marshall wrote:
EXISTS matches NULLs too and since they are not indexed a
sequential scan is needed to check for them. Try using
IN instead.
This is nonsense in more than one way.
Hit ctrl-return a bit too quickly - EXISTS does not match NULLs but a set of
records
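The NULL pitfall actually runs the other way around: it is NOT IN that silently returns no rows when the subquery yields a NULL, while NOT EXISTS behaves as expected. A sketch with hypothetical tables:

```sql
-- If b.a_id contains a NULL, 'id NOT IN (...)' evaluates to NULL for
-- every id, so this returns zero rows:
SELECT id FROM a WHERE id NOT IN (SELECT a_id FROM b);

-- NOT EXISTS has no such surprise (and since 8.4 it is planned as an
-- anti-join):
SELECT id FROM a WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id);
```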
should be. E.g. 1 disk gives a
small matrix with small number of concurrent io requests. So I set it to 50.
Another idea: pgbench?
regards,
Yeb Havinga
Greg Smith wrote:
Yeb Havinga wrote:
With 24 drives it'll probably be the controller that is the limiting
factor of bandwidth. Our HP SAN controller with 28 15K drives
delivers 170MB/s at maximum with raid 0 and about 155MB/s with raid 1+0.
You should be able to clear 1GB/s on sequential
Scott Marlowe wrote:
On Tue, Mar 2, 2010 at 1:51 PM, Yeb Havinga yebhavi...@gmail.com wrote:
With 24 drives it'll probably be the controller that is the limiting factor
of bandwidth. Our HP SAN controller with 28 15K drives delivers 170MB/s at
maximum with raid 0 and about 155MB/s with raid
to mix random and sequential io and also the wal has
demands when it comes to write cache).
regards,
Yeb Havinga
Francisco Reyes wrote:
Yeb Havinga writes:
controllers. Also, I am not sure if it is wise to put the WAL on the
same logical disk as the indexes,
If I only have two controllers would it then be better to put WAL on
the first along with all the data and the indexes on the external
and about 155MB/s with raid 1+0. So I'd
go for the 10K drives and put the saved money towards the controller (or
maybe more than one controller).
regards,
Yeb Havinga
. The possible gains
there are way beyond anything you can accomplish with optimizing
recursive functions.
Regards,
Yeb Havinga
=8)
Index Cond: (user_id = $0)
(5 rows)
 id |    high
----+-------------
  1 | {641,896}
  2 | {1757,1167}
  3 | {2765,2168}
  4 | {3209,3674}
  5 | {4479,4993}
regards,
Yeb Havinga
.
Regards,
Yeb Havinga
, you could use WITH RECURSIVE - my
experience is that it is several orders of magnitude faster than
recursive functions.
http://developer.postgresql.org/pgdocs/postgres/queries-with.html
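A minimal sketch of the construct (table and column names are hypothetical):

```sql
-- Walk a parent/child tree from the root in a single query,
-- instead of a recursive function call per level.
WITH RECURSIVE subordinates AS (
    SELECT id, parent_id, name
    FROM employees
    WHERE id = 1                      -- the root of the tree
  UNION ALL
    SELECT e.id, e.parent_id, e.name
    FROM employees e
    JOIN subordinates s ON e.parent_id = s.id
)
SELECT * FROM subordinates;
```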
regards,
Yeb Havinga
setup using the same
technique.
regards,
Yeb Havinga
Glenn Maynard wrote:
On Mon, Feb 1, 2010 at 6:15 AM, Yeb Havinga yhavi...@gmail.com wrote:
Stomp_steps is analyzed to 2902 rows but when you run the query the actual
rows are 0. This means that the highscore function is not called or the
number 0 is incorrect.
This SELECT returns 0
trying out, if you
have the possibility to try out 8.4.
Regards,
Yeb Havinga
The inner function looks like this:
CREATE FUNCTION highscores_for_steps_and_card(steps_id int, card_id
int, count int) RETURNS SETOF INTEGER LANGUAGE SQL AS $$
SELECT r.id FROM stomp_round r
WHERE ($1