Re: [PERFORM] Hardware suggestions for maximum read performance

2013-05-03 Thread Arjen van der Meijden

3x200GB suggests you want to use RAID5?

Perhaps you should just pick 2x200GB and set them to RAID1. With roughly 
200GB of storage, that should still easily house your potentially 
10GB-database with ample room to allow the SSD's to balance the 
writes. But you save the investment and it's probably a bit faster with 
writes (although your raid-card may reduce or remove the differences 
with your workload).


You can then either keep the money or invest in faster cpu's. With few 
concurrent connections the E5-2643 (also a quad core, but with 3.3GHz 
cores rather than 2.4GHz) may be interesting.
It's obviously a bit of speculation whether that would help, but it 
should speed up sorts and other in-memory/cpu-operations (even if 
you're not - and never will be - cpu-bound right now).
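
One quick way to see whether those ORDER BY sorts currently fit in memory 
or spill to disk is to look at the "Sort Method" line in EXPLAIN ANALYZE 
and to experiment with work_mem; a minimal sketch (table name and setting 
are just placeholders, not taken from your workload):

-- look for "Sort Method: quicksort Memory: ..." vs "external merge Disk: ..."
EXPLAIN ANALYZE
SELECT * FROM measurements ORDER BY measured_at DESC;

-- allow a larger in-memory sort for this session and compare
SET work_mem = '256MB';
EXPLAIN ANALYZE
SELECT * FROM measurements ORDER BY measured_at DESC;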


Best regards,

Arjen

On 3-5-2013 1:11 Mike McCann wrote:

Hello,

We are in the fortunate situation of having more money than time to help
solve our PostgreSQL 9.1 performance problem.

Our server hosts databases that are about 1 GB in size with the largest
tables having on the order of 10 million 20-byte indexed records. The data
are loaded once and then read from a web app and other client programs.
Some of the queries execute ORDER BY on the results. There are
typically fewer than a dozen read-only concurrent connections to any one
database.

SELECTs for data are taking 10s of seconds. We'd like to reduce this to
web app acceptable response times (less than 1 second). If this is
successful then the size of the database will grow by a factor of ten -
we will still want sub-second response times.  We are in the process of
going through the excellent suggestions in the PostgreSQL 9.0 High
Performance book to identify the bottleneck (we have reasonable
suspicions that we are I/O bound), but would also like to place an order
soon for the dedicated server which will host the production databases.
Here are the specs of a server that we are considering with a budget of
$13k US:

HP ProLiant DL360p Gen 8
Dual Intel Xeon 2.4GHz 4-core E5-2609 CPUs
64GB RAM
2x146GB 15K SAS hard drives
3x200GB SATA SLC SSDs
+ the usual accessories (optical drive, rail kit, dual power supplies)

Opinions?

Thanks in advance for any suggestions you have.

-Mike

--
Mike McCann
Software Engineer
Monterey Bay Aquarium Research Institute
7700 Sandholdt Road
Moss Landing, CA 95039-9644
Voice: 831.775.1769  Fax: 831.775.1736 http://www.mbari.org




--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] fast read of binary data

2012-11-12 Thread Arjen van der Meijden

On 12-11-2012 11:45, Eildert Groeneveld wrote:

Dear All

I am currently implementing a compressed binary storage scheme for
genotyping data. These are basically vectors of binary data which may be
megabytes in size.

Our current implementation uses the data type bit varying.


Wouldn't 'bytea' be a more logical choice for binary data?
http://www.postgresql.org/docs/9.2/interactive/datatype-binary.html
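
A minimal sketch of what that could look like (table and column names 
are made up for illustration):

create table genotype (
  id_genotype serial primary key,
  sample_name text not null,
  packed_data bytea not null  -- the compressed binary vector, stored as-is
);

-- retrieve the raw bytes for one sample
select packed_data from genotype where sample_name = 'sample_0001';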


What we want to do is very simple: we want to retrieve such records from
the database and transfer them unaltered to the client, which will do
something (uncompressing) with them. As massive amounts of data are to be
moved, speed is of great importance, precluding any to and fro
conversions.

Our current implementation uses Perl DBI; we can retrieve the data ok,
but apparently there is some converting going on.

Further, we would like to use ODBC from Fortran90 (wrapping the
C-library) for such transfers. However, all sorts of funny things happen
here which look like conversion issues.

In old-fashioned network databases some decades ago (in pre-SQL times)
this was no problem. Maybe there is someone here who knows the PG
internals sufficiently well to give advice on how big blocks of memory
(i.e. bit varying records) can be transferred UNALTERED between
backend and clients.


I have no idea, though, whether bytea is treated differently in this 
context. Bit varying should be about as simple as possible (given that 
it only stores 0's and 1's).


Best regards,

Arjen


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] SSD options, small database, ZFS

2011-11-17 Thread Arjen van der Meijden



On 18-11-2011 4:44 CSS wrote:

Resurrecting this long-dormant thread...

Btw, the 5500 and 5600 Xeons are normally more efficient with a multiple of 6 
ram-modules, so you may want to have a look at 24GB (6x4), 36GB (6x4+6x2) or 
48GB (12x4 or 6x8) RAM.


Thanks - I really had a hard time wrapping my head around the rules on 
populating the banks.  If I understand it correctly, this is due to the memory 
controller moving from the south(?)bridge to being integrated in the CPU.


That's not the complete story. A while back Intel introduced an integrated 
memory controller in the Xeon's (I think it was with the 5500). And in 
doing so, they brought NUMA to the mainstream Xeons (Opterons had been 
doing that from the start).
The memory controllers in the 5500/5600 are triple channel, i.e. they can 
distribute their work over three memory channels at the same time. The 
next generation E5 Xeon's will have quad channel, so the optimal module 
count will go up even further then.


With these kinds of cpu's it's normally best to add memory in increments 
of (number of channels) x (number of cpu's) modules for optimal 
performance. I.e. with one triple channel cpu, you'd increment by three 
at a time, with two cpu's you'd go with six.


Having said that, it will work with many different amounts of memory 
modules, just at a (slight?) disadvantage compared to the optimal setting.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] SSD options, small database, ZFS

2011-10-14 Thread Arjen van der Meijden

On 14-10-2011 10:23, CSS wrote:

-I'm calling our combined databases at 133GB small, fair
assumption?  -Is there any chance that a server with dual quad core
xeons, 32GB RAM, and 2 or 4 SSDs (assume mirrored) could be slower
than the 4 old servers described above?  I'm beating those on raw
cpu, quadrupling the amount of RAM (and consolidating said RAM),
and going from disks that top out at 4x300 IOPS with SSDs that
conservatively should provide 2000 IOPS.


Whether 133GB is small or not probably mostly depends on how much of it 
is actually touched during use. But I'd agree that it isn't a terribly 
large database, and I'd guess a few simple SSDs would be plenty to 
achieve 2000 IOPs. For linear writes, they're still not really faster 
than normal disks, but if that's combined with random access (either 
read or write) you ought to be ok.
We went from 15x 15k sas-disks to 6x ssd several years back in our 
MySQL-box, but since we also increased the ram from 16GB to 72GB, the 
io-load dropped so much the ssd's are normally only lightly loaded...


Btw, the 5500 and 5600 Xeons are normally more efficient with a multiple 
of 6 ram-modules, so you may want to have a look at 24GB (6x4), 36GB 
(6x4+6x2) or 48GB (12x4 or 6x8) RAM.


Given the historical questions on the list, there is always a risk of 
getting slower queries with hardware that should be much faster. For 
instance, the huge increase in RAM may trigger a less efficient 
query-plan. Or the disks abide by the flush-policies more correctly.
Assuming the queries are still getting good plans and there are no such 
special differences, I'd agree with the assumption that it's a win on 
every count.
Or your update to a newer OS and PostgreSQL may trigger some worse query 
plan or hardware-usage.



-Should I even be looking at the option of ZFS on SATA or low-end
SAS drives and ZIL and L2ARC on SSDs?  Initially this intrigued me,
but I can't quite get my head around how the SSD-based ZIL can deal
with flushing the metadata out when the whole system is under any
sort of extreme write-heavy load - I mean if the ZIL is absorbing
2000 IOPS of metadata writes, at some point it has to get full as
it's trying to flush this data to much slower spinning drives.


A fail-safe set-up with SSD's in ZFS assumes at least 3 in total, i.e. a 
pair of SSD's for ZIL and as many as you want for L2ARC. Given your 
database size, 4x160GB SSD (in raid10) or 2x 300GB should yield plenty 
of space. So given the same choice, I wouldn't bother with a set of 
large capacity sata disks and ZIL/L2ARC-SSD's, I'd just go with 4x160GB 
or 2x300GB SSD's.
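
If you do want the ZIL/L2ARC variant described above, the ZFS side could 
look roughly like this (pool and device names are purely illustrative):

# mirrored pair of SSD's as the separate intent log (ZIL)
zpool add tank log mirror da1 da2
# one or more SSD's as L2ARC read cache
zpool add tank cache da3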



-Should my standby box be the same configuration or should I look
at actual spinning disks on that?  How rough is replication on the
underlying storage?  Would the total data written on the slave be
less or equal to the master?


How bad is it for you if the performance of your database potentially 
drops a fair bit when your slave becomes the master? If you have a 
read-mostly database, you may not even need SSD's in your master-db 
(given your amount of RAM). But honestly, I don't know the answer to 
this question :)


Good luck with your choices,
Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Adding more memory = hugh cpu load [solved]

2011-10-11 Thread Arjen van der Meijden

On 11-10-2011 20:05 Claudio Freire wrote:

On Tue, Oct 11, 2011 at 3:02 PM, alexandre - aldeia digital
adald...@gmail.com  wrote:

2) Change all memory chips to new others, instead of maintain the old (16
GB) + new (32 GB).


Of course, mixing disables double/triple/whatuple channel, and makes
your memory subsystem correspondingly slower.
By a lot.



That really depends on the chipset/server. The current intel E56xx-chips 
(and previous E55xx) basically just expect groups of 3 modules per 
processor, but it doesn't really matter whether that's 3x2+3x4 or 6x4 in 
terms of performance (unless the linux kernel does some weirdness of 
course). It at least won't disable triple-channel just because you 
added different size modules. Only when you get to too many 'ranks' 
will you see performance degradation. But that's in terms of clock speed, 
not in disabling triple channel.


But as said, that all depends on the memory controller in the server's 
mainboard or processors.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Suggestions for Intel 710 SSD test

2011-10-02 Thread Arjen van der Meijden

Anandtech took the trouble of doing that:
http://www.anandtech.com/show/4902/intel-ssd-710-200gb-review

I think the main advantage of the 710 compared to the 320 is its much 
heavier over-provisioning and better quality MLC-chips. Both the 320 and 
710 use the same controller and offer similar performance. But 320GB of 
raw capacity is sold as a 300GB Intel 320 and as a 200GB Intel 710...


So if you don't need write-endurance, you can probably assume the 320 
will be more capacity and bang for the buck and will be good enough. If 
you're worried about write-endurance, you should have a look at the 
710. You can obviously also only provision about 200GB of that 300GB 
320-ssd and thus increase its expected life span, but you'd still miss 
the higher quality MLC. Given the fact that you can get two 320's for 
the price of one 710, it's probably always a bit difficult to actually 
make the choice (unless you want a fixed amount of disks and the best 
endurance possible for that).


Best regards,

Arjen

On 2-10-2011 5:22 Andy wrote:

Do you have an Intel 320? I'd love to see tests comparing 710 to 320 and
see if it's worth the price premium.


*From:* David Boreham david_l...@boreham.org
*To:* PGSQL Performance pgsql-performance@postgresql.org
*Sent:* Saturday, October 1, 2011 10:39 PM
*Subject:* [PERFORM] Suggestions for Intel 710 SSD test


I have a 710 (Lyndonville) SSD in a test server. Ultimately we'll run
capacity tests using our application (which in turn uses PG), but it'll
take a while to get those set up. In the meantime, I'd be happy to
entertain running whatever tests folks here would like to suggest,
spare time-permitting.

I've already tried bonnie++, sysbench and a simple WAL emulation
test program I wrote more than 10 years ago. The drive tests at
around 160Mbyte/s on bulk data and 4k tps for commit rate writing
small blocks.



-- Sent via pgsql-performance mailing list
(pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance




--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?

2011-09-13 Thread Arjen van der Meijden


On 12-9-2011 0:44 Anthony Presley wrote:

A few weeks back, we purchased two refurb'd HP DL360's G5's, and were
hoping to set them up with PG 9.0.2, running replicated.  These machines
have (2) 5410 Xeon's, 36GB of RAM, (6) 10k SAS drives, and are using the
HP SA P400i with 512MB of BBWC.  PG is running on an ext4 (noatime)
partition, and the drives are configured as RAID 1+0 (seems with this
controller, I cannot do JBOD).


If you really want a JBOD-setup, you can try a RAID0 for each available 
disk, i.e. in your case 6 separate RAID0's. That's how we configured our 
Dell H700 - which doesn't offer JBOD either - for ZFS.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-12 Thread Arjen van der Meijden


On 11-4-2011 22:04 da...@lang.hm wrote:

in your case, try your new servers without hyperthreading. you will end
up with a 4x4 core system, which should handily outperform the 2x4 core
system you are replacing.

the limit isn't 8 cores, it's that the hyperthreaded cores don't work
well with the postgres access patterns.


It would be really weird if disabling HT would turn these 8-core cpu's 
into 4-core cpu's ;) They have 8 physical cores and 16 threads each. So he 
basically has a 32-core machine with 64 threads in total (if HT were 
enabled). Still, HT may or may not improve things. Back when we had time 
to benchmark new systems, we had one of the first HT-Xeon's (a dual 5080, 
with two cores + HT each) available:

http://ic.tweakimg.net/ext/i/1155958729.png

The blue lines are all slightly above the orange/red lines. So back then 
HT slightly improved our read-mostly Postgresql benchmark score.


We also did benchmarks with Sun's UltraSparc T2 back then:
http://ic.tweakimg.net/ext/i/1214930814.png

Adding full cores (including threads) made things much better, but we 
also tested full cores with more threads each:

http://ic.tweakimg.net/ext/i/1214930816.png

As you can see, with that benchmark, it was better to have 4 cores with 
8 threads each, than 8 cores with 2 threads each.


The T2-threads were much heavier duty than the HT-threads back then, 
but afaik Intel has improved its technology quite a bit with this 
re-introduction of them.


So I wouldn't dismiss hyper threading for a read-mostly Postgresql 
workload too easily.


Then again, keeping 32 cores busy, without them contending for every 
resource will already be quite hard. So adding 32 additional threads 
may indeed make matters much worse.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-18 Thread Arjen van der Meijden

On 18-3-2011 4:02 Scott Marlowe wrote:

On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles
postgresql-p...@ocharles.org.uk  wrote:

Another point.  My experience with 1U chassis and cooling is that they
don't move enough air across their cards to make sure they stay cool.
You'd be better off ordering a 2U chassis with 8 3.5" drive bays so
you can add drives later if you need to, and it'll provide more
cooling air across the card.

Our current big 48 core servers are running plain LSI SAS adapters
without HW RAID because the LSIs we were using overheated and
cooked themselves to death after about 3 months.  Those are 1U chassis
machines, and our newer machines are all 2U boxes now.


We have several 1U boxes (mostly Dell and Sun) running and had several 
in the past. And we've never had any heating problems with them. That 
includes machines with more power hungry processors than are currently 
available, all power slurping FB-dimm slots occupied and two raid cards 
installed.


But than again, a 2U box will likely have more cooling capacity, no 
matter how you look at it.


Another tip that may be useful; look at 2.5" drives. Afaik there is no 
really good reason to use 3.5" drives for new servers. The 2.5" drives 
save power and room - and thus may allow more air flowing through the 
enclosure - and offer the same performance and reliability (the first I 
know for sure, the second I'm pretty sure of but haven't seen much proof 
of lately).


You could even have an 8- or 10-disk 1U enclosure in that way or up to 24 
disks in 2U. But those configurations will require some attention to 
cooling again.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-18 Thread Arjen van der Meijden

On 18-3-2011 10:11, Scott Marlowe wrote:

On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden
acmmail...@tweakers.net  wrote:

On 18-3-2011 4:02 Scott Marlowe wrote:
We have several 1U boxes (mostly Dell and Sun) running and had several in
the past. And we've never had any heating problems with them. That includes
machines with more power hungry processors than are currently available, all
power slurping FB-dimm slots occupied and two raid cards installed.


Note I am talking specifically about the ability to cool the RAID
card, not the CPUS etc.  Many 1U boxes have poor air flow across the
expansion slots for PCI / etc cards, while doing a great job cooling
the CPUs and memory.  If you don't use high performance RAID cards
(LSI 9xxx / Areca 16xx, 18xx) then it's not an issue.  Open up your 1U
and look at the air flow for the expansion slots, it's often just not
very much.



I was referring to amongst others two machines that have both a Dell 
Perc 5/i for internal disks and a Perc 5/e for an external disk 
enclosure. Those also had processors that produce quite some heat (2x 
X5160 and 2x X5355) combined with all fb-dimm (8x 2GB) slots filled, 
which also produce a lot of heat. Those Dell Perc's are similar to the 
LSI's from the same period in time.


So the heat produced by the other components was already pretty high. 
Still, I've seen no problems with heat for any component, including all 
four raid controllers. But I agree, there are some 1U servers that skimp 
on fans and thus air flow in the system. We've not had that problem with 
any of our systems. But both Sun and Dell seem to add quite a few fans 
in the middle of the system, where others may do it a bit less 
heavy duty and less over-dimensioned.


Best regards,

Arjen


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Anyone tried Flashcache with PostgreSQL?

2011-03-03 Thread Arjen van der Meijden

On 2-3-2011 16:29 Robert Haas wrote:

On Mon, Feb 28, 2011 at 2:09 PM, Josh Berkusj...@agliodbs.com  wrote:

Does anyone have the hardware to test FlashCache with PostgreSQL?

http://perspectives.mvdirona.com/2010/04/29/FacebookFlashcache.aspx

I'd be interested to hear how it performs ...


It'd be a lot more interesting if it were a write-through cache rather
than a write-back cache, wouldn't it?


That's what bcache tries to accomplish, both read and write cache.
It also appears to aim to be more widely usable, rather than the 
relatively specific requirements the facebook variant is designed for.


http://bcache.evilpiepirate.org/

They seem to try and combine both the dedicated ZIL and L2ARC 
functionality from ZFS in one block device based caching layer.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Hardware recommendations

2010-12-10 Thread Arjen van der Meijden

On 10-12-2010 14:58 Andy wrote:

We use ZFS and use SSDs for both the log device and L2ARC.  All
disks and SSDs are behind a 3ware with BBU in single disk mode.


Out of curiosity why do you put your log on SSD? Log is all
sequential IOs, an area in which SSD is not any faster than HDD. So
I'd think putting log on SSD wouldn't give you any performance
boost.


The common knowledge you based that comment on may actually not be 
very up-to-date anymore. Current consumer-grade SSD's can achieve up to 
200MB/sec when writing sequentially and they can probably do that a lot 
more consistently than a hard disk.


Have a look here: http://www.anandtech.com/show/2829/21
The sequential write graphs consistently put several SSD's at twice the 
performance of the VelociRaptor 300GB 10k rpm disk, and that's a test 
from over a year old. Current SSD's have increased in performance, 
whereas I'm not so sure there was much improvement in platter based 
disks lately?


Apart from that, I'd guess that log-devices benefit from reduced latencies.

It's actually the recommended approach from Sun to add a pair of (small 
SLC-based) ssd log devices to increase performance (especially for 
nfs-scenarios where a lot of synchronous writes occur) and they offer it 
as an option for most of their Unified Storage appliances.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Hardware recommendations

2010-12-10 Thread Arjen van der Meijden


On 10-12-2010 18:57 Arjen van der Meijden wrote:

Have a look here: http://www.anandtech.com/show/2829/21
The sequential writes-graphs consistently put several SSD's at twice the
performance of the VelociRaptor 300GB 10k rpm disk and that's a test
from over a year old, current SSD's have increased in performance,
whereas I'm not so sure there was much improvement in platter based
disks lately?


Here's a more recent test:
http://www.anandtech.com/show/4020/ocz-vertex-plus-preview-introducing-the-indilinx-martini/3

That shows several consumer grade SSD's and a 600GB VelociRaptor; it's 
200+ vs 140MB/sec. I'm not sure how recent 15k rpm sas disks would do, 
nor do I know how recent server grade SSD's would behave. But if we 
assume similar gains for both, it's still in favor of SSD's :-)


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] best db schema for time series data?

2010-11-16 Thread Arjen van der Meijden

On 16-11-2010 11:50, Louis-David Mitterrand wrote:

I have to collect lots of prices from web sites and keep track of their
changes. What is the best option?

1) one 'price' row per price change:

create table price (
id_price serial primary key,
id_product integer references product,
price integer
);

2) a single 'price' row containing all the changes:

create table price (
id_price serial primary key,
id_product integer references product,
price integer[] -- prices are 'pushed' on this array as they 
change
);

Which is bound to give the best performance, knowing I will often need
to access the latest and next-to-latest prices?


If you mostly need the last few prices, I'd definitely go with the first 
approach, it's much cleaner. Besides, you can store a date/time per price, 
so you know when it changed. With the array-approach that's a bit harder 
to do.


If you're concerned with performance, introduce some form of a 
materialized view for the most recent price of a product. Or reverse the 
entire process and make a current price-table and a price history-table.
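
A rough sketch of that first approach with a timestamp, plus a query for 
the most recent price per product (the changed_at column and index are 
my addition, purely for illustration):

create table price (
  id_price serial primary key,
  id_product integer references product,
  price integer,
  changed_at timestamptz not null default now()
);

create index price_product_changed_idx on price (id_product, changed_at desc);

-- latest price for each product
select distinct on (id_product) id_product, price, changed_at
from price
order by id_product, changed_at desc;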


Best regards,

Arjen


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] turn off caching for performance test

2010-08-26 Thread Arjen van der Meijden
Isn't it more fair to just flush the cache before doing each of the 
queries? In real-life, you'll also have disk caching... Flushing the 
buffer pool is easy, just restart PostgreSQL (or perhaps there is an 
admin command for it too?). Flushing the OS-disk cache is obviously 
OS-dependent, for linux it's trivial: http://linux-mm.org/Drop_Caches
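
For example, on Linux (run as root; the exact service name/path for the 
PostgreSQL restart will differ per distribution):

/etc/init.d/postgresql restart      # empties PostgreSQL's own buffer pool
sync                                # write dirty pages out first
echo 3 > /proc/sys/vm/drop_caches   # drop the OS page/dentry/inode caches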


Best regards,

Arjen

On 26-8-2010 12:32 Willy-Bas Loos wrote:

Hi,

I have a colleague that is convinced that the website is faster if
enable_seqscan is turned OFF.
I'm convinced of the opposite (better to leave it ON), but i would like
to show it, prove it to him.
Now the first query we tried, would do a bitmap heap scan instead of a
seqscan when the latter were disabled, to exclude about 50% of the
records (18K of 37K records).
The bitmap heap scan is 3% faster, so that didn't really plead my case.
The thing is that by the time we tried it, the data had been cached, so
there is no penalty for the use of the index (HDD retention on random
access). So it's logical that the index lookup is faster, it looks up
less records.

Now i'm looking for a way to turn off the caching, so that we'll have a
fair test.

It makes no sense to me to set shared_buffers really low. Any tips?

Cheers,

WBL


--
Patriotism is the conviction that your country is superior to all
others because you were born in it. -- George Bernard Shaw


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Completely un-tuned Postgresql benchmark results: SSD vs desktop HDD

2010-08-12 Thread Arjen van der Meijden

On 12-8-2010 2:53 gnuo...@rcn.com wrote:

- The value of SSD in the database world is not as A Faster HDD(tm).
Never was, despite the naïve who assert otherwise.  The value of SSD
is to enable BCNF datastores.  Period.  If you're not going to do
that, don't bother.  Silicon storage will never reach equivalent
volumetric density, ever.  SSD will never be useful in the byte bloat
world of xml and other flat file datastores (resident in databases or
not).  Industrial strength SSD will always be more expensive/GB, and
likely by a lot.  (Re)factoring to high normalization strips out an
order of magnitude of byte bloat, increases native data integrity by
as much, reduces much of the redundant code, and puts the ACID where
it belongs.  All good things, but not effortless.


It is actually quite common to under-utilize (short stroke) hard drives 
in the enterprise world. Simply because 'they' need more IOps per amount 
of data than a completely utilized disk can offer.
As such the expense/GB can be much higher than simply dividing the 
capacity by its price (and if you're looking at fiber channel disks, 
that price is quite high already). And then it is relatively easy to 
find enterprise SSD's with better pricing for the whole system as soon 
as the IOps are more important than the capacity.


So in the current market, you may already be better off, price-wise, 
with (expensive) SSD if you need IOps rather than huge amounts of 
storage. And while you're in both cases not comparing separate disks to 
SSD, you're replacing a 'disk based storage system' with a '(flash) 
memory based storage system' and it basically becomes 'A Faster HDD' ;)
But you're right, that for data-heavy applications, completely replacing 
HDD's with some form of SSD is not going to happen soon, maybe never.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Completely un-tuned Postgresql benchmark results: SSD vs desktop HDD

2010-08-12 Thread Arjen van der Meijden

On 13-8-2010 1:40 Scott Carey wrote:

Agreed.  There is a HUGE gap between ooh ssd's are fast, look! and
engineering a solution that uses them properly with all their
strengths and faults.  And as 'gnuoytr' points out, there is a big
difference between an Intel SSD and say, this thing:
http://www.nimbusdata.com/products/s-class_overview.html


From the description it sounds as if it's either FreeBSD or OpenSolaris 
with ZFS and some webinterface-layer. That's not a bad thing per se, 
but as the site suggests it's 'only' $25k for the smallest (2.5TB?) 
device. That makes it very likely that they are off-the-shelf MLC flash 
drives. Given the design of the device and the pricing, they are probably 
your average 2.5" drives with 100, 200 or 400GB capacity (maybe OCZ 
vertex 2 pro, which do have such a capacitor?), similar to the Intel SSD 
you compared it to.
And then we're basically back to square one, unless the devices have a 
capacitor or ZFS works better with SSD-drives to begin with (it will at 
least know silent data corruption did occur).


There are of course devices that are not built on top of normal disk 
form factor SSD-drives like the Ramsan devices or Sun's F5100.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Using high speed swap to improve performance?

2010-04-02 Thread Arjen van der Meijden
What about FreeBSD with ZFS? I have no idea which features they support 
and which not, but it at least is a bit more free than Solaris and still 
offers that very nice file system.


Best regards,

Arjen

On 2-4-2010 21:15 Christiaan Willemsen wrote:

Hi there,

About a year ago we setup a machine with sixteen 15k disk spindles on
Solaris using ZFS. Now that Oracle has taken Sun, and is closing up
Solaris, we want to move away (we are more familiar with Linux anyway).

So the plan is to move to Linux and put the data on a SAN using iSCSI
(two or four network interfaces). This however leaves us with 16
very nice disks doing nothing. Sounds like a waste. If we were to
use Solaris, ZFS would have a solution: use them as L2ARC. But there is no
Linux filesystem with those features (ZFS on fuse is not really an option).

So I was thinking: Why not make a big fat array using 14 disks (raid 1,
10 or 5), and make this a big and fast swap disk. Latency will be lower
than the SAN can provide, and throughput will also be better, and it
will relieve the SAN from a lot of read iops.
will relief the SAN from a lot of read iops.

So I could create a 1TB swap disk, and put it onto the OS next to the
64GB of memory. Then I can set Postgres to use more than the RAM size so
it will start swapping. It would appear to postgres that the complete
database will fit into memory. The question is: will this do any good?
And if so: what will happen?

Kind regards,

Christiaan



--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] mysql to postgresql, performance questions

2010-03-18 Thread Arjen van der Meijden

On 18-3-2010 16:50 Scott Marlowe wrote:

It's different because it only takes pgsql 5 milliseconds to run the
query, and 40 seconds to transfer the data across to your applicaiton,
which THEN promptly throws it away.  If you run it as

MySQL's client lib doesn't transfer over the whole thing.  This is
more about how each db interface is implemented in those languages.


It's the default behavior of both PostgreSQL and MySQL to transfer the 
whole resultset over to the client. Or is that different for Ruby's 
MySQL-driver? At least in PHP the behavior is similar for both.
And I certainly do hope it's 40ms rather than 40s, otherwise it would be 
a really badly performing network in either case (15s for mysql) or very 
large records (which I doubt).


I'm wondering if a new connection is made between each query. PostgreSQL 
is (afaik, though I haven't compared that recently) a bit slower in 
that department than MySQL.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] SSD + RAID

2010-02-21 Thread Arjen van der Meijden

On 22-2-2010 6:39 Greg Smith wrote:

But the point of this whole testing exercise coming back into vogue
again is that SSDs have returned this negligent behavior to the
mainstream again. See
http://opensolaris.org/jive/thread.jspa?threadID=121424 for a discussion
of this in a ZFS context just last month. There are many documented
cases of Intel SSDs that will fake a cache flush, such that the only way
to get good reliable writes is to totally disable their writes
caches--at which point performance is so bad you might as well have
gotten a RAID10 setup instead (and longevity is toast too).


That's weird. Intel's SSD's didn't have a write cache afaik:
I asked Intel about this and it turns out that the DRAM on the Intel 
drive isn't used for user data because of the risk of data loss, instead 
it is used as memory by the Intel SATA/flash controller for deciding 
exactly where to write data (I'm assuming for the wear 
leveling/reliability algorithms).

http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=10

But that is the old version, perhaps the second generation does have a 
bit of write caching.


I can understand a SSD might do unexpected things when it loses power 
all of a sudden. It will probably try to group writes to fill a single 
block (and those blocks vary in size but are normally way larger than 
those of a normal spinning disk, they are values like 256 or 512KB) and 
it might lose data waiting until a full block can be written, or 
perhaps it just couldn't complete a full block-write due to the power 
failure.
Although that behavior isn't really what you want, it would be incorrect 
to blame write caching for the behavior if the device doesn't even have 
a write cache ;)


Best regards,

Arjen


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] renice on an I/O bound box

2010-01-19 Thread Arjen van der Meijden

On 19-1-2010 13:59 Willy-Bas Loos wrote:

Hi,

I have a query that runs for about 16 hours, it should run at least weekly.
There are also clients connecting via a website, we don't want to keep
them waiting because of long DSS queries.

We use Debian Lenny.
I've noticed that renicing the process really lowers the load (in
top), though I think we are I/O bound. Does that make any sense?


Renicing a postgresql-process can be a very bad thing for throughput, 
as it may hold locks that are required by the processes you think 
should have a higher priority. Those higher priority processes will then 
be blocked by the lower priority one.


Then again, renicing postgresql as a whole can be useful. And if you're 
absolutely certain you want to renice a process, renicing a process 
shouldn't break anything. But it may have some unexpected side effects.


Another command to look at, if you're I/O-bound, is the 'ionice' 
command, which is similar to nice, but obviously intended for I/O.
For some I/O-bound background job, the 'idle' class can be a 
good choice. But for a (single) postgres-process, I'd be careful again 
for the same reasons as with process-nice.
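
For instance (the pid, database name and paths are just placeholders):

# run a background job in the idle I/O scheduling class
ionice -c 3 pg_dump -Fc mydb > /backup/mydb.dump
# or lower the I/O priority of an already running process
ionice -c 2 -n 7 -p 12345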


To see which commands do some I/O, looking at 'iotop' may be useful, 
apart from just examining the output of 'iostat' and similar commands.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Air-traffic benchmark

2010-01-07 Thread Arjen van der Meijden

On 7-1-2010 13:38 Lefteris wrote:

I decided to run the benchmark over postgres to get some more
experience and insights. Unfortunately, the query times I got from
postgres were not the expected ones:


Why were they not expected? In the given scenario, column databases have 
a huge advantage. Especially the given simple example is the type of 
query a column database *should* excel at.

You should, at the very least, compare the queries to MyISAM:
http://www.mysqlperformanceblog.com/2009/11/05/air-traffic-queries-in-myisam-and-tokutek-tokudb/

But unfortunately, that one also beats your postgresql-results.


The hardware characteristics are:
Platform Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz with 8GB RAM and
ample disk space (2x 500 GB SATA disk @ 7200 RPM as SW-RAID-0)


Unfortunately, the blogpost fails to mention the disk-subsystem. So it 
may well be much faster than yours, although it's not a new, big or fast 
server, so unless it has external storage, it shouldn't be too different 
for sequential scans.



SELECT DayOfWeek, count(*) AS c FROM ontime WHERE Year BETWEEN
2000 AND 2008 GROUP BY DayOfWeek ORDER BY c DESC;

Reported query times are (in sec):
MonetDB 7.9s
InfoBright 12.13s
LucidDB 54.8s

For pg-8.4.2  I got with 3 consecutive runs on the server:
5m52.384s
5m55.885s
5m54.309s


Maybe an index of the type 'year, dayofweek' will help for this query. 
But it'll have to scan about half the table anyway, so a seq scan isn't 
a bad idea.
In this case, a partitioned table with partitions per year and 
constraint exclusion enabled would help a bit more.
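
A rough sketch of both ideas, using the 8.x-style inheritance 
partitioning (column names follow the benchmark query, everything else 
is illustrative and would need adjusting to the real schema):

-- composite index covering the WHERE and GROUP BY columns
CREATE INDEX ontime_year_dow_idx ON ontime (year, dayofweek);

-- one child table per year, with a CHECK constraint the planner can use
CREATE TABLE ontime_2000 (CHECK (year = 2000)) INHERITS (ontime);
CREATE TABLE ontime_2001 (CHECK (year = 2001)) INHERITS (ontime);
-- ... and so on, one per year ...

SET constraint_exclusion = on;  -- let the planner skip non-matching years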


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PostgreSQL 8.4 performance tuning questions

2009-07-30 Thread Arjen van der Meijden

On 30-7-2009 20:46 Scott Carey wrote:

Of course Compression has a HUGE effect if your I/O system is half-decent.
Max GZIP compression speed with the newest Intel CPU's is something like
50MB/sec (it is data dependant, obviously -- it is usually closer to
30MB/sec).  Max gzip decompression ranges from 50 to 150MB/sec (it can get
really high only if the ratio is extremely large, like if you compress a
repeating sequence of 256 bytes).


I just ran some quick numbers on our lightly loaded Nehalem X5570 (2.93+ 
Ghz depending on turbo-mode). I compressed a 192MB text file I had at 
hand using gzip -1, -2, -3, -6 and -9 and outputted its results to 
/dev/null. The file was in the kernel's file cache all the time and I did 
the tests 3 times.


Gzip -1 reached 54MB/s, -2 got 47MB/s, -3 got 32MB/s, -6 got 18MB/s and 
-9 got to 12MB/s. Just running cat on the file made it do 6400MB/s (i.e. 
it took 0.030 seconds to copy the file from memory to nowhere).

Those files were respectively 69MB, 66MB, 64MB, 59MB and 58MB.

Gunzip on the -1 file took 1.66 seconds, i.e. it read data at 41MB/s and 
outputted it to /dev/null at 115MB/s. The -9 file took 1.46s, so it read 
40MB/s and wrote 131MB/s.
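
Something along these lines reproduces the measurement (the file name is 
just an example):

cat bigfile.txt > /dev/null                  # warm the file cache first
time gzip -1 -c bigfile.txt > /dev/null      # compression speed at level 1
time gzip -9 -c bigfile.txt > /dev/null      # ... and at level 9
gzip -1 -c bigfile.txt > bigfile.txt.gz
time gunzip -c bigfile.txt.gz > /dev/null    # decompression speed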


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] AMD Shanghai versus Intel Nehalem

2009-05-14 Thread Arjen van der Meijden

On 13-5-2009 20:39 Scott Carey wrote:

Excellent!  That is a pretty huge boost.   I'm curious which aspects of this
new architecture helped the most.  For Postgres, the following would seem
the most relevant:
1.  Shared L3 cache per processors -- more efficient shared datastructure
access.
2.  Faster atomic operations -- CompareAndSwap, etc are much faster.
3.  Faster cache coherency.
4.  Lower latency RAM with more overall bandwidth (Opteron style).


Apart from that, it has a newer debian (and thus kernel/glibc) and a 
slightly less constraining IO which may help as well.



Can you do a quick and dirty memory bandwidth test? (assuming linux)
On the older X5355 machine and the newer E5540, try:
/sbin/hdparm -T /dev/sd<device>


It is in use, so the results may not be so good, this is the best I got 
on our dual X5355:

 Timing cached reads:   6314 MB in  2.00 seconds = 3159.08 MB/sec

But this is the best I got for a (also in use) Dual E5450 we have:
 Timing cached reads:   13158 MB in  2.00 seconds = 6587.11 MB/sec

And here the best for the (idle) E5540:
 Timing cached reads:   16494 MB in  2.00 seconds = 8256.27 MB/sec

These numbers are with hdparm v8.9

Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] AMD Shanghai versus Intel Nehalem

2009-05-13 Thread Arjen van der Meijden
We have a dual E5540 with 16GB (I think 1066Mhz) memory here, but no AMD 
Shanghai. We haven't done PostgreSQL benchmarks yet, but given the 
previous experiences, PostgreSQL should show a similar speed-up to mysql.


Our database benchmark is actually mostly a cpu/memory-benchmark. 
Comparing the results of the dual E5540 (2.53Ghz with HT enabled) to a 
dual Intel X5355 (2.6Ghz quad core from 2007), the peak load has 
increased from somewhere between 7 and 10 concurrent clients to 
somewhere around 25, suggesting better scalable hardware. With the 25 
concurrent clients we handled 2.5 times the amount of queries/second 
compared to the 7 concurrent client-score for the X5355, both in MySQL 
5.0.41. At 7 CC we still had 1.7 times the previous result.


I'm not really sure how the shanghai cpu's compare to those older 
X5355's, the AMD's should be faster, but how much?


I've no idea if we'll get a Shanghai to compare it with, but we will get a 
dual X5570 soon on which we'll repeat some of the tests, so that should 
at least help a bit with putting the X5570-results from around the world 
into perspective.


Best regards,

Arjen

On 12-5-2009 20:47 Scott Marlowe wrote:

Anyone on the list had a chance to benchmark the Nehalem's yet?  I'm
primarily wondering if their promise of performance from 3 memory
channels holds up under typical pgsql workloads.  I've been really
happy with the behavior of my AMD shanghai based server under heavy
loads, but if the Nehalems much touted performance increase translates
to pgsql, I'd like to know.



--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] linux deadline i/o elevator tuning

2009-04-09 Thread Arjen van der Meijden

On 9-4-2009 16:09 Kevin Grittner wrote:

I haven't benchmarked it, but when one of our new machines seemed a
little sluggish, I found this hadn't been set.  Setting this and
rebooting Linux got us back to our normal level of performance.


Why would you reboot after changing the elevator? For 2.6-kernels, it 
can be adjusted on-the-fly for each device separately (echo 'deadline' > 
/sys/block/sda/queue/scheduler).


I saw a nice reduction in load and slowness too after adjusting the cfq 
to deadline for a machine that was at its maximum I/O-capacity on a 
raid-array.
Apart from deadline, 'noop' should also be interesting for RAID and 
SSD-owners, as it basically just forwards the I/O-request to the device 
and doesn't do much (if any?) scheduling.
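
For example (sda is just a placeholder device):

cat /sys/block/sda/queue/scheduler       # active scheduler shown in [brackets]
echo noop > /sys/block/sda/queue/scheduler
# to make it the default at boot, add elevator=deadline (or elevator=noop)
# to the kernel command line in the bootloader configuration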


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller

2009-02-06 Thread Arjen van der Meijden

On 4-2-2009 22:36 Scott Marlowe wrote:

We purhcased the Perc 5E, which dell wanted $728 for last fall with 8
SATA disks in an MD-1000 and the performance is just terrible.  No
matter what we do the best throughput on any RAID setup was about 30
megs/second write and 60 Megs/second read.  I can get that from a
mirror set of the same drives under linux kernel software RAID.  This
was with battery backed cache enabled. Could be an interaction issue
with the MD-1000, or something, but the numbers are just awful.  We
have a Perc 6(i or e not sure) on a 6 disk SAS array and it's a little
better, getting into the hundred meg/second range, but nothing
spectacular.  They're stable, which is more than I can say for a lot
of older PERCs and the servers they came in (x600 series with Perc 3i
for instance).


When we purchased our Perc 5/e with MD1000 filled with 15 15k rpm sas 
disks, my colleague actually spent some time benchmarking the PERC and an 
ICP Vortex (basically an overclocked Adaptec) on those drives. 
Unfortunately he doesn't have too many comparable results, but it 
basically boiled down to quite good scores for the PERC and a bit less 
for the ICP Vortex.
IOMeter sequential reads are above 300MB/s for the RAID5 and above 
240MB/s for a RAID10 (and winbench99 versions range from 400+ to 
600+MB/s). The results for a 10, 12 and 14 disk configuration also 
showed nice increments in performance.


So we've based our purchase on my colleague's earlier bad experience 
with Adaptec (much worse results than LSI) and weren't disappointed by 
Dell's scores. I have no idea whether Adaptec's results have increased 
over time, unfortunately we haven't had a larger scale disk IO-benchmark 
for quite some time.


If you're able to understand Dutch, you can click around here:
http://tweakers.net/benchdb/test/90

Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller

2009-02-06 Thread Arjen van der Meijden

On 6-2-2009 16:27 Bruce Momjian wrote:

The experiences I have heard is that Dell looks at server hardware in
the same way they look at their consumer gear, If I put in a cheaper
part, how much will it cost Dell to warranty replace it.  Sorry, but I
don't look at my performance or downtime in the same way Dell does.  ;-)


I'm pretty sure all major server-suppliers will have some form of 
risk-analysis for their servers, especially in the high volume x86 
market where most servers are replaced in three years time anyway.
And although Dell's image for quality hardware isn't too good, the 
servers we have from them all reached high uptimes before we did 
hardware unrelated reboots.
Our Dell-desktops/workstations have seen a few more support technicians 
though, so we're not becoming fanboys any time soon ;-) They seem to be 
much more serious on quality for their servers compared to the other stuff.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller

2009-02-04 Thread Arjen van der Meijden

On 4-2-2009 21:09 Scott Marlowe wrote:

I have little experience with the 6i.  I do have experience with all
the Percs from the 3i/3c series to the 5e series.  My experience has
taught me that a brand new, latest model $700 Dell RAID controller is
about as good as a $150 LSI, Areca, or Escalade/3Ware controller.
I.e. a four or five year old design.  And that's being generous.


Afaik the Perc 5/i and /e are more or less rebranded LSI-cards (they're 
not identical in layout etc), so it would be a bit weird if they 
performed much worse than the similar LSI's, wouldn't you think?
And as far as I can remember, our Perc 5/e actually performed similar to 
a LSI with similar specs (external sas, 256MB ram, etc) we had at the 
time of testing.
Areca may be the fastest around right now, but if you'd like to get it 
all from one supplier, it's not too bad to be stuck with Dell's perc 5 or 
6 series.


Best regards,

Arjen

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] large tables and simple = constant queries using indexes

2008-04-10 Thread Arjen van der Meijden

Hi John,

You don't use the same 'gene_ref ='-value, so it's not a perfect 
comparison. And obviously, there is the fact that the data can be in the 
disk cache, the second time you run it, which would explain the almost 
instantaneous result for the second query.


If repeating the query a few times with 200 still makes it do its work 
in 15 seconds and with 800 in less than 100ms, then you might have found 
a bug, or it is at least something I don't know how to fix.


I doubt upping the default for all tables to 1000 is a good idea. The 
data collected is used in the query-planning-stage, where more data 
means more processing time. Obviously there is a tradeoff somewhere 
between having more statistics and thus being able to plan the query 
better versus requiring more time to process those statistics.


Best regards,

Arjen

On 10-4-2008 0:24 John Beaver wrote:
Perfect - thanks Arjen. Using your value of 200 decreased the time to 15 
seconds, and using a value of 800 makes it almost instantaneous. I'm 
really not concerned about space usage; if having more statistics 
increases performance this much, maybe I'll just default it to 1000?


Strangely, the steps taken in the explain analyze are all the same. The 
only differences are the predicted costs (and execution times).


explain analyze for a statistics of 200:
Aggregate  (cost=8831.27..8831.28 rows=1 width=0) (actual 
time=15198.407..15198.408 rows=1 loops=1)
  ->  Bitmap Heap Scan on gene_prediction_view  (cost=44.16..8825.29 
rows=2392 width=0) (actual time=19.719..15191.875 rows=2455 loops=1)
        Recheck Cond: (gene_ref = 500)
        ->  Bitmap Index Scan on ix_gene_prediction_view_gene_ref  
(cost=0.00..43.56 rows=2392 width=0) (actual time=18.871..18.871 
rows=2455 loops=1)
              Index Cond: (gene_ref = 500)
Total runtime: 15198.651 ms

explain analyze for a statistics of 800:
Aggregate  (cost=8873.75..8873.76 rows=1 width=0) (actual 
time=94.473..94.473 rows=1 loops=1)
  ->  Bitmap Heap Scan on gene_prediction_view  (cost=44.25..8867.74 
rows=2404 width=0) (actual time=39.358..93.733 rows=2455 loops=1)
        Recheck Cond: (gene_ref = 301)
        ->  Bitmap Index Scan on ix_gene_prediction_view_gene_ref  
(cost=0.00..43.65 rows=2404 width=0) (actual time=38.472..38.472 
rows=2455 loops=1)
              Index Cond: (gene_ref = 301)
Total runtime: 94.622 ms




Arjen van der Meijden wrote:
First of all, there is the 'explain analyze' output, which is pretty 
helpful in postgresql.


My guess is, postgresql decides to do a table scan for some reason. It 
might not have enough statistics for this particular table or column, 
to make a sound decision. What you can try is to increase the 
statistics target, which works pretty easy:

ALTER TABLE gene_prediction_view ALTER gene_ref SET STATISTICS 200;

Valid ranges are from 1(0?) - 1000, the default is 10, the default on 
my systems is usually 100. For such a large table, I'd go with 200.


After that, you'll need to re-analyze your table and you can try again.

Perhaps analyze should try to establish its own best guess to how many 
samples it should take? The default of 10 is rather limited for large 
tables.


Best regards,

Arjen

On 9-4-2008 22:58 John Beaver wrote:
Hi, I've started my first project with Postgres (after several years 
of using Mysql), and I'm having an odd performance problem that I was 
hoping someone might be able to explain the cause of.


My query
   - select count(*) from gene_prediction_view where gene_ref = 523
   - takes 26 seconds to execute, and returns 2400 (out of a total of 
15 million records in the table)


---My problem---
   Using a single-column index to count 2400 records which are 
exactly one constant value doesn't sound like something that would 
take 26 seconds. What's the slowdown? Any silver bullets that might 
fix this?


Steps I've taken
   - I ran vacuum and analyze
   - I upped the shared_buffers to 58384, and I upped some of the 
other postgresql.conf values as well. Nothing seemed to help 
significantly, but maybe I missed something that would help 
specifically for this query type?
   - I tried to create a hash index, but gave up after more than 4 
hours of waiting for it to finish indexing


Table stats
   - 15 million rows; I'm expecting to have four or five times this 
number eventually.

   - 1.5 gigs of hard drive usage

My development environment---
   - 2.6ghz dual-core MacBook Pro with 4 gigs of ram and a 7200 rpm 
hard drive

   - OS X 10.5.2
   - Postgres 8.3 (installed via MacPorts)

My table

CREATE TABLE gene_prediction_view
(
 id serial NOT NULL,
 gene_ref integer NOT NULL,
 go_id integer NOT NULL,
 go_description character varying(200) NOT NULL,
 go_category character varying(50) NOT NULL,
 function_verified_exactly boolean NOT NULL,
 function_verified_with_parent_go boolean NOT NULL,
 function_verified_with_child_go boolean NOT NULL,
 score numeric(10,2) NOT NULL,
 precision_score

Re: [PERFORM] large tables and simple = constant queries using indexes

2008-04-09 Thread Arjen van der Meijden
First of all, there is the 'explain analyze' output, which is pretty 
helpful in postgresql.


My guess is, postgresql decides to do a table scan for some reason. It 
might not have enough statistics for this particular table or column, to 
make a sound decision. What you can try is to increase the statistics 
target, which works pretty easy:

ALTER TABLE gene_prediction_view ALTER gene_ref SET STATISTICS 200;

Valid ranges are from 1(0?) - 1000, the default is 10, the default on my 
systems is usually 100. For such a large table, I'd go with 200.


After that, you'll need to re-analyze your table and you can try again.
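
Concretely, for the table from the question (the statistics target of 
200 is just the suggested starting point):

ALTER TABLE gene_prediction_view ALTER gene_ref SET STATISTICS 200;
ANALYZE gene_prediction_view;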

Perhaps analyze should try to establish its own best guess to how many 
samples it should take? The default of 10 is rather limited for large 
tables.


Best regards,

Arjen

On 9-4-2008 22:58 John Beaver wrote:
Hi, I've started my first project with Postgres (after several years of 
using Mysql), and I'm having an odd performance problem that I was 
hoping someone might be able to explain the cause of.


My query
   - select count(*) from gene_prediction_view where gene_ref = 523
   - takes 26 seconds to execute, and returns 2400 (out of a total of 15 
million records in the table)


---My problem---
   Using a single-column index to count 2400 records which are exactly 
one constant value doesn't sound like something that would take 26 
seconds. What's the slowdown? Any silver bullets that might fix this?


Steps I've taken
   - I ran vacuum and analyze
   - I upped the shared_buffers to 58384, and I upped some of the other 
postgresql.conf values as well. Nothing seemed to help significantly, 
but maybe I missed something that would help specifically for this query 
type?
   - I tried to create a hash index, but gave up after more than 4 hours 
of waiting for it to finish indexing


Table stats
   - 15 million rows; I'm expecting to have four or five times this 
number eventually.

   - 1.5 gigs of hard drive usage

My development environment---
   - 2.6ghz dual-core MacBook Pro with 4 gigs of ram and a 7200 rpm hard 
drive

   - OS X 10.5.2
   - Postgres 8.3 (installed via MacPorts)

My table

CREATE TABLE gene_prediction_view
(
 id serial NOT NULL,
 gene_ref integer NOT NULL,
 go_id integer NOT NULL,
 go_description character varying(200) NOT NULL,
 go_category character varying(50) NOT NULL,
 function_verified_exactly boolean NOT NULL,
 function_verified_with_parent_go boolean NOT NULL,
 function_verified_with_child_go boolean NOT NULL,
 score numeric(10,2) NOT NULL,
 precision_score numeric(10,2) NOT NULL,
 CONSTRAINT gene_prediction_view_pkey PRIMARY KEY (id),
 CONSTRAINT gene_prediction_view_gene_ref_fkey FOREIGN KEY (gene_ref)
 REFERENCES sgd_annotations (id) MATCH SIMPLE
 ON UPDATE NO ACTION ON DELETE NO ACTION,
 CONSTRAINT gene_prediction_view_go_id_fkey FOREIGN KEY (go_id)
 REFERENCES go_terms (term) MATCH SIMPLE
 ON UPDATE NO ACTION ON DELETE NO ACTION,
 CONSTRAINT gene_prediction_view_gene_ref_key UNIQUE (gene_ref, go_id)
)
WITH (OIDS=FALSE);
ALTER TABLE gene_prediction_view OWNER TO postgres;

CREATE INDEX ix_gene_prediction_view_gene_ref
 ON gene_prediction_view
 USING btree
 (gene_ref);






--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] SSDs

2008-04-02 Thread Arjen van der Meijden
My colleague has tested a single Mtron Mobo's and a set of 4. He also 
mentioned the write performance was pretty bad compared to a Western 
Digital Raptor. He had a solution for that however, just plug the SSD in 
a raid-controller with decent cache performance (his favorites are the 
Areca controllers) and the bad write performance is masked by the 
controller's cache. It would probably be really nice if you could get tuned 
controllers for SSDs so they use less cache for reads and more for writes.


Best regards,

Arjen

On 2-4-2008 8:16, James Mansion wrote:
Tried harder to find info on the write cycles: found some CFs that claim 
2 million

cycles, and found the Mtron SSDs which claim to have very advanced wear
levelling and a suitably long lifetime as a result even with an 
assumption that

the underlying flash can do 100k writes only.

The 'consumer' MTrons are not shabby on the face of it and not too 
expensive,

and the pro models even faster.

But ... the spec pdf shows really high performance for average access, 
stream
read *and* write, random read ... and absolutely pants performance for 
random

write.  Like 130/s, for .5k and 4k writes.

It's so pants it looks like a misprint, and it doesn't seem to square with 
the

review on tomshardware:
http://www.tomshardware.com/2007/11/21/mtron_ssd_32_gb/page7.html

Even there, the database IO rate does seem lower than you might hope,
and this *might* be because the random reads are very very fast and the
random writes ... aren't. Which is a shame, because that's exactly the
bit I'd hope was fast.

So, more work to do somewhere.





--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Anyone using a SAN?

2008-02-13 Thread Arjen van der Meijden

On 13-2-2008 22:06 Tobias Brox wrote:

What I'm told is that the state-of-the-art SAN allows for
an insane amount of hard disks to be installed, much more than what
would fit into any decent database server.  We've ended up buying a SAN,
the physical installation was done last week, and I will be able to tell
in some months if it was a good idea after all, or not.


Your SAN-pusher should have a look at the HP-submissions for TPC-C... 
The recent Xeon systems are all without SAN's and still able to connect 
hundreds of SAS-disks.


This one has 2+28+600 hard drives connected to it:
http://tpc.org/results/individual_results/HP/hp_ml370g5_2p_X5460_tpcc_080107_es.pdf

Long story short, using SAS you can theoretically connect up to 64k 
disks to a single system. And with the HP-example they connected 26 
external enclosures (MSA70) to 8 internal with external SAS-ports. I.e. 
they ended up with 28+600 harddrives spread out over 16 external 4-port 
SAS-connectors with a bandwidth of 12Gbit per connector...


Obviously its a bit difficult to share those 628 harddrives amongst 
several systems, but the argument your colleagues have for SAN isn't a 
very good one. All major hardware vendors nowadays have external 
SAS-enclosures which can hold 12-25 external harddrives (and can often 
be stacked to two or three enclosures) and can be connected to normal 
internal PCI-e SAS-raid-cards. Those controllers have commonly two 
external ports and can be used with other controllers in the system to 
combine all those connected enclosures to one or more virtual images, or 
you could have your software LVM/raid on top of those controllers.


Anyway, the common physical limit of 6-16 disks in a single 
server-enclosure isn't very relevant anymore in an argument against SAN.


Best regards,

Arjen

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [PERFORM] 8x2.5 or 6x3.5 disks

2008-01-29 Thread Arjen van der Meijden
There are several suppliers who offer Seagate's 2.5 15k rpm disks, I 
know HP and Dell are amongst those. So I was actually referring to those, 
rather than to the 10k ones.


Best regards,

Arjen

[EMAIL PROTECTED] wrote:

On Mon, 28 Jan 2008, Arjen van der Meijden wrote:


On 28-1-2008 20:25 Christian Nicolaisen wrote:
So, my question is: should I go for the 2.5 disk setup or 3.5 disk 
setup, and does the raid setup in either case look correct?


Afaik they are about equal in speed. With the smaller ones being a bit 
faster in random access and the larger ones a bit faster for 
sequential reads/writes.


I missed the initial post in this thread, but I haven't seen any 15K rpm 
2.5 drives, so if you compare 10K rpm 2.5 drives with 15K rpm 3.5 
drives you will see differences (depending on your workload and 
controller cache)


David Lang


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] 8x2.5 or 6x3.5 disks

2008-01-28 Thread Arjen van der Meijden

On 28-1-2008 20:25 Christian Nicolaisen wrote:
So, my question is: should I go for the 2.5 disk setup or 3.5 disk 
setup, and does the raid setup in either case look correct?


Afaik they are about equal in speed. With the smaller ones being a bit 
faster in random access and the larger ones a bit faster for sequential 
reads/writes.


My guess is that the 8x 2.5 configuration will be faster than the 6x 
3.5: even if the 3.5-drives happen to be faster, they probably aren't 
50% faster... So since you don't need the larger storage capacities that 
3.5 offers, I'd go with the 8x 2.5-setup.


Best regards,

Arjen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Hardware for PostgreSQL

2007-10-31 Thread Arjen van der Meijden

On 31-10-2007 17:45 Ketema wrote:

I understand query tuning and table design play a large role in
performance, but taking that factor away
and focusing on just hardware, what is the best hardware to get for Pg
to work at the highest level
(meaning speed at returning results)?


It really depends on your budget and workload. Will it be read-heavy or 
write-heavy? How large will the database be? Are those concurrent users 
actively executing queries or is the actual concurrent query load lower 
(it normally is)?
You should probably also try to estimate the amount of concurrently 
executed queries and how heavy those queries are, as that is normally 
more important as a performance measure. And normally its much less than 
the amount of concurrently connected users.



How does pg utilize multiple processors?  The more the better?
Are queries spread across multiple processors?


It forks a process for a new connection and leaves the multi-cpu 
scheduling to the OS. It does not spread a single query across multiple 
cpu's. But with many concurrent users, you normally don't want or need 
that anyway, it would mainly add extra stress to the scheduling of your 
operating system.



Is Pg 64 bit?

It can be compiled 64-bit and is available pre-compiled as 64-bits as well.


If so what processors are recommended?


I think the x86-class cpu's deliver the most bang for buck and are the 
best tested with postgres. Both AMD and Intel cpu's are pretty good, but 
I think currently a system with two intel quad core cpus is in a very 
good price/performance-point. Obviously you'll need to match the cpus to 
your load, you may need more cpu-cores.



Its pretty old (2003) but is it still accurate?  if this statement is
accurate how would it affect connection pooling software like pg_pool?


It just keeps the process alive as long as the connection isn't closed, 
nothing fancy or worrisome going on there. That's just the behavior I'd 
expect at the connection pool-level.



RAM?  The more the merrier right? Understanding shmmax and the pg
config file parameters for shared mem has to be adjusted to use it.


More is better, but don't waste your money on it if you don't need it, 
if your (the active part of your) database is smaller than the RAM, 
increasing it doesn't do that much. I would be especially careful with 
configurations that require those very expensive 4GB-modules.



Disks?  standard Raid rules right?  1 for safety 5 for best mix of
performance and safety?


Make sure you have a battery backed controller (or multiple), but you 
should consider raid 10 if you have many writes and raid 5 or 50 if you 
have a read-heavy environment. There are also people reporting that it's 
faster to actually build several raid 1's and use the OS to combine them 
to a raid 10.
Be careful with the amount of disks, in performance terms you're likely 
better off with 16x 73GB than with 8x 146GB



Any preference of SCSI over SATA? What about using a High speed (fibre
channel) mass storage device?


I'd consider only SAS (serial attached scsi, the successor of scsi) for 
a relatively small high performance storage array. Fibre channel is so 
much more expensive, that you'll likely get much less performance for 
the same amount of money. And I'd only use sata in such an environment 
if the amount of storage, not its performance, is the main metric. I.e. 
for file storage and backups.


Best regards,

Arjen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Problems with + 1 million record table

2007-10-05 Thread Arjen van der Meijden

On 5-10-2007 16:34 Cláudia Macedo Amorim wrote:

[13236.470] statement_type=0, statement='select
a_teste_nestle.CODCLI,
a_teste_nestle.CODFAB,
a_teste_nestle.CODFAMILIANESTLE,
a_teste_nestle.CODFILIAL,
a_teste_nestle.CODGRUPONESTLE,
a_teste_nestle.CODSUBGRUPONESTLE,
a_teste_nestle.CONDVENDA,
a_teste_nestle.DATA,
a_teste_nestle.DESCRICAO,
a_teste_nestle.PESO,
a_teste_nestle.PRACA,
a_teste_nestle.PUNIT,
a_teste_nestle.PVENDA,
a_teste_nestle.QT,
a_teste_nestle.QTITVENDIDOS,
a_teste_nestle.QTPESOPREV,
a_teste_nestle.QTVENDAPREV,
a_teste_nestle.SUPERVISOR,
a_teste_nestle.VENDEDOR,
a_teste_nestle.VLVENDAPREV
from a_teste_nestle
 
'


Is that the entire query? Are you sure you really want to select the 
entire table without having a where-clause? That's normally not a very 
scalable approach...


Best regards,

Arjen

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] SAN vs Internal Disks

2007-09-06 Thread Arjen van der Meijden

On 6-9-2007 14:35 Harsh Azad wrote:

2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)


I don't understand this sentence. You seem to imply you might be able to 
fit more processors in your system?
Currently the only Quad Core's you can buy are dual-processor 
processors, unless you already got a quote for a system that yields the 
new Intel Tigerton processors.
I.e. if they are clovertown's they are indeed Intel Core-architecture 
processors, but you won't be able to fit more than 2 in the system and 
get 8 cores in a system.
If they are Tigerton, I'm a bit surprised you got a quote for that, 
although HP seems to offer a system for those. If they are the old 
dual-core MP's (70xx or 71xx), you don't want those...



32 GB RAM
OS Only storage - 2x SCSI 146 GB 15k RPM on RAID-1
(Data Storage mentioned below)


I doubt you need 15k-rpm drives for OS... But that won't matter much on 
the total cost.


HELP 1: Does something look wrong with above configuration, I know there 
will be small differences b/w opteron/xeon. But do you think there is 
something against going for 2.4Ghz Quad Xeons (clovertown i think)?


Apart from your implication that you may be able to stick more 
processors in it: no, not to me. Two Quad Core Xeons were even faster 
than 8 dual core opterons in our benchmarks, although that might also 
indicate limited OS-, postgres or underlying I/O-scaling.
Obviously the new AMD Barcelona-line of processors (coming next week 
orso) and the new Intel Quad Core's DP (Penryn?) and MP (Tigerton) may 
be interesting to look at, I don't know how soon systems will be 
available with those processors (HP seems to offer a tigerton-server).


B: Go for Internal of DAS based storage. Here for each server we should 
be able to have: 2x disks on RAID-1 for logs, 6x disks on RAID-10 for 
tablespace1 and 6x disks on RAID-10 for tablespace2. Or maybe 12x disks 
on RAID-10 single table-space.


You don't necessarily need to use internal disks for DAS, since you can 
also link an external SAS-enclosure either with or without an integrated 
raid-controller (IBM, Sun, Dell, HP and others have options for that), 
and those are able to be expanded to either multiple enclosures tied to 
each other or to a controller in the server.
Those may also be usable in a warm-standby-scenario and may be quite a 
bit cheaper than FC-hardware.


But for a moment keeping these aside, i wanted to discuss, purely on 
performance side which one is a winner? It feels like internal-disks 
will perform better, but need to understand a rough magnitude of 
difference in performance to see if its worth loosing the manageability 
features.


As said, you don't necessarily need real internal disks, since SAS can 
be used with external enclosures as well, still being DAS. I have no 
idea what difference you will or may see between those in terms of 
performance. It probably largely depends on the raid-controller 
available, afaik the disks will be mostly the same. And it might depend 
on your available bandwidth, external SAS offers you a 4port-connection 
allowing for a 12Gbit-connection between a disk-enclosure and a 
controller. While - as I understand it - even expensive SAN-controllers 
only offer dual-ported, 8Gbit connections?
What's more important is probably the amount of disks and raid-cache you 
can buy in the SAN vs DAS-scenario. If you can buy 24 disks when going 
for DAS vs only 12 whith SAN...


But then again, I'm no real storage expert, we only have two Dell MD1000 
DAS-units at our site.


Best regards and good luck,

Arjen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] SAN vs Internal Disks

2007-09-06 Thread Arjen van der Meijden

On 6-9-2007 20:42 Scott Marlowe wrote:

On 9/6/07, Harsh Azad [EMAIL PROTECTED] wrote:

Hi,

How about the Dell Perc 5/i card, 512MB battery backed cache or IBM
ServeRAID-8k Adapter?


All Dell Percs have so far been based on either adaptec or LSI
controllers, and have ranged from really bad to fairly decent
performers.  There were some recent posts on this list where someone
was benchmarking one, I believe.  searching the list archives might
prove useful.


The Dell PERC5-cards are based on LSI-chips and perform quite well. 
Afaik Dell hasn't used adaptecs for a while now, but even recent 
(non-cheap ;) ) adaptecs aren't that bad afaik.


The disadvantage of using Areca or 3Ware is obviously the lack of 
support in A-brand servers and the lack of support for SAS-disks. Only 
recently Areca has stepped into the SAS-market, but I have no idea how 
easily those controllers are integrated in standard servers (they tend 
to be quite large, which will not fit in 2U and maybe not even in 3U or 
4U-servers).


Arjen

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] SAN vs Internal Disks

2007-09-06 Thread Arjen van der Meijden

On 6-9-2007 20:29 Mark Lewis wrote:

Maybe I'm jaded by past experiences, but the only real use case I can
see to justify a SAN for a database would be something like Oracle RAC,
but I'm not aware of any PG equivalent to that.


PG Cluster II seems to be able to do that, but I don't know whether 
that's production quality already...


Arjen

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] Dell Hardware Recommendations

2007-08-09 Thread Arjen van der Meijden

On 9-8-2007 23:50 Merlin Moncure wrote:

Where the extra controller especially pays off is if you have to
expand to a second tray.  It's easy to add trays but installing
controllers on a production server is scary.


For connectivity-sake that's not a necessity. You can either connect 
(two?) extra MD1000's to your first MD1000 or you can use the second 
external SAS-port on your controller. Obviously it depends on the 
controller whether its good enough to just add the disks to it, rather 
than adding another controller for the second tray. Whether the perc5/e 
is good enough for that, I don't know, we've only equipped ours with a 
single MD1000 holding 15x 15k rpm drives, but in our benchmarks it 
scaled pretty well going from a few to all 14 disks (+1 hotspare).


Best regards,

Arjen

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] disable archiving

2007-07-23 Thread Arjen van der Meijden

Perhaps you should've read the configuration-manual-page more carefully. ;)
Besides, WAL-archiving is turned off by default, so if you see them 
being archived, you actually enabled it earlier.


The archive_command is empty by default: If this is an empty string 
(the default), WAL archiving is disabled.


http://www.postgresql.org/docs/8.2/interactive/runtime-config-wal.html
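A quick way to double-check this on a running 8.2 server is simply:

SHOW archive_command;

If that comes back empty, no archive_command is set and WAL archiving is off.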

Best regards,

Arjen

On 23-7-2007 19:24 Paul van den Bogaard wrote:
the manual somewhere states ... if archiving is enabled... To me this 
implies that archiving can be disabled. However I cannot find the 
parameter to use to get this result. Or should I enable archiving and 
use a backup script like


#!/usr/bin/bash
exit 0



Would appreciate a hint. And yes I know I put my database in danger etc. 
This is for some benchmarks where I do not want the overhead of 
archiving. Jus a file system that will not fill with zillions of these 
16MB WAL files ;^)


Thanks
Paul.



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] insert vs select into performance

2007-07-17 Thread Arjen van der Meijden
Have you also tried the COPY-statement? Afaik select into is similar to 
what happens in there.
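For example, a rough sketch (the table and file names are made up here, 
and CSV is just one of the possible formats):

COPY testtable FROM '/tmp/testdata.csv' WITH CSV;
-- or, reading the file on the client side through psql:
-- \copy testtable from 'testdata.csv' with csv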


Best regards,

Arjen

On 17-7-2007 21:38 Thomas Finneid wrote:

Hi

I was doing some testing on insert compared to select into. I 
inserted 100 000 rows (with 8 column values) into a table, which took 14 
seconds, compared to a select into, which took 0.8 seconds.
(fyi, the inserts where batched, autocommit was turned off and it all 
happend on the local machine)


Now I am wondering why the select into is that much faster?
Does the select into translate into a specially optimised function in c 
that can cut corners which a insert can not do (e.g. lazy copying), or 
is it some other reason?


The reason I am asking is that select into shows that a number of rows 
can be inserted into a table quite a lot faster than one would think was 
possible with ordinary sql. If that is the case, it means that if I 
write an pl-pgsql insert function in C instead of sql, then I can have 
my db perform order of magnitude faster.


Any comments?

regards

thomas

---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Append table

2007-06-02 Thread Arjen van der Meijden

There are two solutions:
You can insert all data from tableB in tableA using a simple insert 
select-statement like so:

INSERT INTO tabelA SELECT EmpId, EmpName FROM tabelB;

Or you can visually combine them without actually putting the records in 
a single table. That can be with a normal select-union statement or with 
a view, something like this:

SELECT EmpId, EmpName FROM tabelA UNION SELECT EmpId, EmpName FROM tabelB;

You can use this query as a table-generating subquery in a FROM-clause, 
like so:


SELECT * FROM (SELECT EmpId, EmpName FROM tabelA UNION SELECT EmpId, EmpName 
FROM tabelB) as emps WHERE EmpId = 1;


Or with the view:
CREATE VIEW tabelC AS SELECT EmpId, EmpName FROM tabelA UNION SELECT EmpId, 
EmpName FROM tabelB;


And then you can use the view as if it were a normal table (although 
inserts are not possible without applying rules to them, see the manual 
for that).


SELECT * FROM tabelC WHERE EmpId = 1;

Best regards,

Arjen

On 2-6-2007 17:52 Hanu Kurubar wrote:

Any luck on appending two table in PostgreSQL.
Below are two table with same schema that have different values. In this 
case EmpID is unique value.
 
tabelA


EmpId (Int) EmpName (String)
1   Hanu
2   Alvaro
 
 
tabelB


EmpId (Int) EmpName (String)
3   Michal
4   Tom
 
 
I would be looking below output after appending tableA with tableB. Is 
this possible in PostgreSQL?


 
tabelA


EmpId (Int) EmpName (String)
1   Hanu
2   Alvaro
3   Michal
4   Tom

 


Thanks,
Hanu

 
On 5/30/07, *Hanu Kurubar* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Can you help me appending two table values into single table without
performing INSERT?
Note that these tables are of same schema.
 
Is there any sql command is supported?
 
Thanks,

Hanu

 
On 5/29/07, *Alvaro Herrera* [EMAIL PROTECTED]

mailto:[EMAIL PROTECTED] wrote:

Michal Szymanski wrote:
  There is another strange thing. We have two versions of our test
  environment one with production DB copy and second
genereated with
  minimal data set and it is odd that update presented above
on copy of
  production is executing 170ms but on small DB it executing
6s 
  
  How are you vacuuming the tables?
  
  Using pgAdmin (DB is installed on my laptop) and I use this
tool for
  vaccuminh, I do not think that vaccuming can help because
I've tested on
  both database just after importing.

I think you are misunderstanding the importance of vacuuming the
table.
Try this: on a different terminal from the one running the test,
run a
VACUUM on the updated table with vacuum_cost_delay set to 20, on an
infinite loop.  Keep this running while you do your update
test.  Vary
the vacuum_cost_delay and measure the average/min/max UPDATE times.
Also try putting a short sleep on the infinite VACUUM loop and
see how
its length affects the UPDATE times.

One thing not clear to me is if your table is in a clean
state.  Before
running this test, do a TRUNCATE and import the data
again.  This will
get rid of any dead space that may be hurting your measurements.

--
Alvaro
Herrerahttp://www.advogato.org/person/alvherre
The Postgresql hackers have what I call a NASA space shot
mentality.
Quite refreshing in a world of weekend drag racer developers.
(Scott Marlowe)

---(end of
broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org
http://archives.postgresql.org/




-- 
With best regards,

Hanumanthappa Kurubar
Mobile: 98 801 800 65 





--
With best regards,
Hanumanthappa Kurubar
Mobile: 98 801 800 65


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Postgres Benchmark Results

2007-05-20 Thread Arjen van der Meijden
I assume red is PostgreSQL and green is MySQL. That reflects my own 
benchmarks with those two.


But I don't fully understand what the graph displays. Does it reflect 
the ability of the underlying database to support a certain amount of 
users per second given a certain database size? Or is the growth of the 
database part of the benchmark?


Btw, did you consider that older topics are normally read much less and 
almost never get new postings? I think the size of the active data set 
is more dependent on the amount of active members than on the actual 
amount of data available.
That can reduce the impact of the size of the database greatly, although 
we saw very nice gains in performance on our forum (over 22GB of 
messages) when replacing the database server with one with twice the 
memory, cpu's and I/O.


Best regards,

Arjen

On 20-5-2007 16:58 PFC wrote:


I felt the world needed a new benchmark ;)
So : Forum style benchmark with simulation of many users posting and 
viewing forums and topics on a PHP website.


http://home.peufeu.com/ftsbench/forum1.png

One of those curves is a very popular open-source database which 
claims to offer unparallelled speed.
The other one is of course Postgres 8.2.3 which by popular belief is 
full-featured but slow


What is your guess ?

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Diminishing bandwidth performance with multiple quad core X5355s

2007-05-20 Thread Arjen van der Meijden

On 14-5-2007 0:00 jlmarin wrote:

I wanted to post this even if it's a bit late on the thread because
right now I have exactly this kind of problem.
We're trying to figure out if a dual-Quadcore (Xeon) will be better
(cost/benefit wise) than a 4-way Opteron dualcore, for *our* program.


We've benchmarked the Sun Fire x4600 (with the older socket 939 cpu's) 
and compared it to a much cheaper dual quad core xeon X5355.


As you can see on the end of this page:
http://tweakers.net/reviews/674/8

The 4-way dual core opteron performs worse (in our benchmark) than the 
2-way quad core xeon. Our benchmark does not consume a lot of memory, 
but I don't know which of the two profits most from that. Obviously it may 
well be that the Socket F opterons with support for DDR2 memory perform 
better, but we haven't seen much proof of that.
Given the cost of a 4-way dual core opteron vs a 2-way quad core xeon, 
I'd go for the latter for now. The savings can be used to build a system 
with heavier I/O and/or more memory, which normally yield bigger gains 
in database land.
For example a Dell 2900 with 2x X5355 + 16GB of memory costs about 7000 
euros less than a Dell 6950 with 4x 8220 + 16GB. You can buy an 
additional MD1000 with 15x 15k rpm disks for that... And I doubt you'll 
find any real-world database benchmark that will favour the 
opteron-system if you look at the price/performance-picture.


Of course this picture might very well change as soon as the new 
'Barcelona' quad core opterons are finally available.



As you say, Opterons do definitely have a much better memory system.
But then a 4-way mobo is WAY more expensive that a dual-socket one...


And it might be limited by NUMA and the relatively simple broadcast 
architecture for cache coherency.


Best regards,

Arjen van der Meijden

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Postgres Benchmark Results

2007-05-20 Thread Arjen van der Meijden

On 20-5-2007 19:09 PFC wrote:
Since I use lighttpd, I don't really care about the number of actual 
slow clients (ie. real concurrent HTTP connections). Everything is 
funneled through those 8 PHP processes, so postgres never sees huge 
concurrency.


Well, that would only be in favour of postgres anyway, it scales in our 
benchmarks better to multiple cpu's, multiple clients and apparently in 
yours to larger datasets. MySQL seems to be faster up until a certain 
amount of concurrent clients (close to the amount of cpu's available) 
and beyond that can collapse dramatically.


I'm writing a full report, but I'm having a lot of problems with 
MySQL, I'd like to give it a fair chance, but it shows real obstination 
in NOT working.


Yeah, it displayed very odd behaviour when doing benchmarks here too. If 
you haven't done already, you can try the newest 5.0-verion (5.0.41?) 
which eliminates several scaling issues in InnoDB, but afaik not all of 
them. Besides that, it just can be pretty painful to get a certain query 
fast, although we've not very often seen it failing completely in the 
last few years.


Best regards,

Arjen van der Meijden

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] index structure for 114-dimension vector

2007-04-27 Thread Arjen van der Meijden

On 21-4-2007 1:42 Mark Kirkwood wrote:

I don't think that will work for the vector norm i.e:

|x - y| = sqrt(sum over j ((x[j] - y[j])^2))


I don't know if this is useful here, but I was able to rewrite that 
algorithm for a set of very sparse vectors (i.e. they had very little 
overlapping factors) to something like:

|x - y|^2 = sum over j (x[j]^2) + sum over j (y[j]^2)
   + for each j where x[j] and y[j] are both non-zero: - (x[j]^2 + 
y[j]^2) + (x[j] - y[j])^2


The first two sums can each be calculated only once. So if you have 
very little overlap, this is therefore much more efficient (if there is 
no overlap at all you end up with x[j]^2 + y[j]^2 anyway). Besides, this 
rewritten calculation allows you to store the X and Y vectors using a 
trivial table-layout vector(x,i,value) which is only filled with 
non-zero's and which you can trivially self-join to find the closest 
matches. You don't care about the j's where there is either no x or 
y-value anyway with this rewrite.
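Purely as an illustration, such a layout and self-join could look roughly 
like this in SQL (all table and column names here are invented, and only 
vectors sharing at least one non-zero j with the reference vector show up 
in the result, which is fine when you're only after the closest matches):

CREATE TABLE vec (vec_id integer, j integer, val float8, PRIMARY KEY (vec_id, j));
CREATE TABLE vec_sumsq (vec_id integer PRIMARY KEY, sumsq float8);

-- the per-vector sum of squares only has to be computed once
INSERT INTO vec_sumsq SELECT vec_id, sum(val * val) FROM vec GROUP BY vec_id;

-- squared distance from vector 1 to every vector it overlaps with
SELECT y.vec_id,
       sx.sumsq + sy.sumsq
       + sum((x.val - y.val) * (x.val - y.val) - (x.val * x.val + y.val * y.val)) AS dist_sq
  FROM vec x
  JOIN vec y ON y.j = x.j AND y.vec_id <> x.vec_id
  JOIN vec_sumsq sx ON sx.vec_id = x.vec_id
  JOIN vec_sumsq sy ON sy.vec_id = y.vec_id
 WHERE x.vec_id = 1
 GROUP BY y.vec_id, sx.sumsq, sy.sumsq
 ORDER BY dist_sq
 LIMIT 10;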


I can compare over 1000 y's of on average 100 elements to two x's of 
over 1000 elements on just a single 1.8Ghz amd processor. (I use it for 
a bi-kmeans algorithm, so there are only two buckets to compare to).


So it might be possible to rewrite your algorithm to be less 
calculation-intensive. Obviously, with a dense-matrix this isn't going 
to work, but there may be other ways to circumvent parts of the 
algorithm or to cache large parts of it.
It might also help to extract only the 6 relevant columns into a 
separate temporary table which will have much smaller records and thus 
can fit more records per page.


Best regards,

Arjen

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] fast DISTINCT or EXIST

2007-04-07 Thread Arjen van der Meijden
Can't you use something like this? Or is the distinct on the t.cd_id 
still causing the major slowdown here?


SELECT ... FROM cd
  JOIN tracks ...
WHERE cd.cd_id IN (SELECT DISTINCT t.cd_id FROM tracks t
 WHERE t.tstitle @@ plainto_tsquery('simple','education') LIMIT 10)

If that is your main culprit, you could also use two limits based on the 
fact that there will be at most X songs per cd which would match your 
title (my not very educated guess is 3x). It's a bit ugly... but if that 
is what it takes to make postgresql not scan your entire index, so be it...


SELECT ... FROM cd
  JOIN tracks ...
WHERE cd.cd_id IN (SELECT DISTINCT cd_id FROM (SELECT t.cd_id FROM tracks t
 WHERE t.tstitle @@ plainto_tsquery('simple','education') LIMIT 30) 
as foo LIMIT 10)



Best regards,

Arjen

On 7-4-2007 12:47 Tilo Buschmann wrote:

Hello,

I am trying to build a application to search CDs and their tracks and I
am experiencing some performance difficulties.

The database is very simple at the moment, two tables cd and tracks
contain the CD-information and their respective tracks. A column
cd_id in public.tracks is the foreign key to the cd table.

#v+
  Table public.cd
   Column|   Type| Modifiers
-+---+
 revision| integer   | not null default 0
 disc_length | integer   |
 via | character varying |
 cd_id   | integer   | not null default 
nextval('cd_cd_id_seq'::regclass)
 discid  | integer   | not null
 title   | character varying | not null
 artist  | character varying | not null
 year| smallint  |
 genre   | character varying |
 ext | character varying |
 tstitle | tsvector  |
 tsartist| tsvector  |
Indexes:
cd_id_key PRIMARY KEY, btree (cd_id)
discid_key UNIQUE, btree (discid)
tsartist_cd_idx gist (tsartist)
tstitle_cd_idx gist (tstitle)
Check constraints:
year_check CHECK (year IS NULL OR year = 0 AND year = 1)
Tablespace: d_separate

   Table public.tracks
  Column  |   Type| Modifiers 
--+---+---

 track_id | integer   | not null default 
nextval('tracks_track_id_seq'::regclass)
 cd_id| integer   | not null
 title| character varying | 
 artist   | character varying | 
 ext  | character varying | 
 length   | integer   | 
 number   | smallint  | not null default 0
 tstitle  | tsvector  | 
 tsartist | tsvector  | 
Indexes:

tracks_pkey PRIMARY KEY, btree (track_id)
cdid_tracks_idx btree (cd_id)
tsartist_tracks_idx gist (tsartist)
tstitle_tracks_idx gin (tstitle)
Foreign-key constraints:
tracks_cd_id_fkey FOREIGN KEY (cd_id) REFERENCES cd(cd_id) ON UPDATE 
RESTRICT ON DELETE RESTRICT
Tablespace: d_separate

#v-

I am using tsearch2 to be able to search very fast for CD and track
artists and titles.

The database is created only once and I expect SELECTS to happen very
often, therefore the indexes will not hurt the performance. I also ran
a VACUUM FULL ANALYSE.

The query that I want to optimise at the moment is the Give me all CDs
with their tracks, that contain a track with the Title 'foobar'. The
query is very expensive, so I try to limit it to 10 cds at once.

My first idea was:

#+
cddb=# EXPLAIN ANALYSE SELECT cd.cd_id,cd.title,cd.artist,tracks.title FROM 
tracks JOIN (SELECT cd.cd_id,cd.artist,cd.title FROM cd JOIN tracks USING 
(cd_id) WHERE tracks.tstitle @@ plainto_tsquery('simple','education') LIMIT 10) 
AS cd USING (cd_id);
QUERY PLAN
--

 Nested Loop  (cost=0.00..3852.42 rows=11974 width=91) (actual 
time=310.983..972.739 rows=136 loops=1)
   ->  Limit  (cost=0.00..121.94 rows=10 width=46) (actual 
time=264.797..650.178 rows=10 loops=1)
 ->  Nested Loop  (cost=0.00..227602.43 rows=18665 width=46) (actual 
time=264.793..650.165 rows=10 loops=1)
   ->  Index Scan using tstitle_tracks_idx on tracks  
(cost=0.00..73402.74 rows=18665 width=4) (actual time=155.516..155.578 rows=10 
loops=1)
 Index Cond: (tstitle @@ '''education'''::tsquery)
   ->  Index Scan using cd_id_key on cd  (cost=0.00..8.25 rows=1 
width=46) (actual time=49.452..49.453 rows=1 loops=10)
 Index Cond: (public.cd.cd_id = public.tracks.cd_id)
   ->  Index Scan using cdid_tracks_idx on tracks  (cost=0.00..358.08 rows=1197 
width=27) (actual 

Re: [PERFORM] fast DISTINCT or EXIST

2007-04-07 Thread Arjen van der Meijden

On 7-4-2007 18:24 Tilo Buschmann wrote:

Unfortunately, the query above will definitely not work correctly, if
someone searches for 'a' or 'the'. 


Those are two words you may want to consider not searching on at all.

As Tom said, its not very likely to be fixed in PostgreSQL. But you can 
always consider using application logic (or a pl/pgsql function, you could 
even use a set returning function to replace the double-limit subselects 
in your in-statement) which will automatically fetch more records when 
the initial guess turns out to be wrong, obviously using something like 
a NOT IN to remove the initially returned cd.id's for the next batches.
Then again, even 'a' or 'the' will not likely be in *all* tracks of a 
cd, so you can also use the 'average amount of tracks per cd' (about 10 
or 11?) as your multiplier rather than my initial 3. Obviously you'll 
lose performance with each increment of that value.
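Purely to illustrate the idea, a set returning function could be shaped 
roughly like this; it simply retries with a larger LIMIT instead of using 
NOT IN, the function name and the 3x/doubling heuristics are invented, 
and the table and column names are taken from your earlier mail:

CREATE OR REPLACE FUNCTION matching_cd_ids(search text, want integer)
RETURNS SETOF integer AS $$
DECLARE
    batch   integer := want * 3;  -- initial guess: ~3 matching tracks per cd
    scanned integer;
    got     integer;
    r       record;
BEGIN
    LOOP
        -- note: without an ORDER BY the LIMIT slice is not strictly deterministic
        SELECT count(*), count(DISTINCT cd_id) INTO scanned, got
          FROM (SELECT t.cd_id FROM tracks t
                 WHERE t.tstitle @@ plainto_tsquery('simple', search)
                 LIMIT batch) s;
        -- stop when we have enough cds, or when there are no more tracks to scan
        EXIT WHEN got >= want OR scanned < batch;
        batch := batch * 2;  -- the guess was too low, retry with a larger slice
    END LOOP;
    FOR r IN SELECT DISTINCT cd_id
               FROM (SELECT t.cd_id FROM tracks t
                      WHERE t.tstitle @@ plainto_tsquery('simple', search)
                      LIMIT batch) s
              LIMIT want
    LOOP
        RETURN NEXT r.cd_id;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

It could then be used as WHERE cd.cd_id IN (SELECT * FROM 
matching_cd_ids('education', 10)) in the outer query.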


Best regards,

Arjen

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [PERFORM] SCSI vs SATA

2007-04-05 Thread Arjen van der Meijden
If the 3U case has a SAS-expander in its backplane (which it probably 
has?) you should be able to connect all drives to the Adaptec 
controller, depending on the casing's exact architecture etc. That's 
another two advantages of SAS, you don't need a controller port for each 
harddisk (we have a Dell MD1000 with 15 drives connected to a 4-port 
external sas connection) and you can mix SAS and SATA drives on a 
SAS-controller.


Best regards,

Arjen

On 5-4-2007 1:42 [EMAIL PROTECTED] wrote:
In a perhaps fitting compromise, I have decided to go with a hybrid 
solution:


8*73GB 15k SAS drives hooked up to Adaptec 4800SAS
PLUS
6*150GB SATA II drives hooked up to mobo (for now)

All wrapped in a 16bay 3U server. My reasoning is that the extra SATA 
drives are practically free compared to the rest of the system (since 
the mobo has 6 onboard connectors). I plan on putting the pg_xlog and 
operating system on the sata drives and the tables/indices on the SAS 
drives, although I might not use the sata drives for the xlog if they 
don't pan out perf-wise. I plan on getting the battery backed module for 
the adaptec (72 hours of charge time).


Thanks to everyone for the valuable input. I hope i can do you all proud 
with the setup and postgres.conf optimizations.


-jay


On Apr 4, 2007, at 1:48 PM, Carlos Moreno wrote:




Problem is :), you can purchase SATA Enterprise Drives.


Problem  I would have thought that was a good thing!!!   ;-)

Carlos
--


---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate



---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] SCSI vs SATA

2007-04-05 Thread Arjen van der Meijden

On 5-4-2007 17:58 [EMAIL PROTECTED] wrote:


On Apr 5, 2007, at 4:09 AM, Ron wrote:


BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!


Thanks - I received similar private emails with the same advice. I will 
change the controller to a LSI MegaRAID SAS 8408E -- any feedback on 
this one?


We have the dell-equivalent (PERC 5/e and PERC 5/i) in production and 
have had no issues with it, it also performes very well (compared to a 
ICP Vortex controller). The LSI has been benchmarked by my colleague and 
he was pleased with the controller.


I went w/ Fujitsu. Fortunately these servers are hosted in a very well 
ventilated area so i am not that concerned with heat issues.


We have 15 of the 36GB drives and they are doing great. According to 
that same colleague, the Fujitsu drives are currently the best 
performing drives. Although he hasn't had his hands on the new Savvio 
15k rpm drives yet.



What 16bay 3U server are you using?


supermicro sc836tq-r800
http://www.supermicro.com/products/chassis/3U/836/SC836TQ-R800V.cfm


You could also look at this version of that chassis:
http://www.supermicro.com/products/chassis/3U/836/SC836E1-R800V.cfm

Afaik it sports a 28-port expander, which should (please confirm with 
your vendor) allow you to connect all 16 drives to the 8-ports of your 
controller. Which in turn allows your both sets of disks to be used with 
your BBU-backed controller.


Best regards,

Arjen

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [PERFORM] SCSI vs SATA

2007-04-04 Thread Arjen van der Meijden

On 4-4-2007 0:13 [EMAIL PROTECTED] wrote:
We need to upgrade a postgres server. I'm not tied to these specific 
alternatives, but I'm curious to get feedback on their general qualities.


SCSI
  dual xeon 5120, 8GB ECC
  8*73GB SCSI 15k drives (PERC 5/i)
  (dell poweredge 2900)


This is a SAS set-up, not SCSI. So the cabling, if an issue at all, is 
in SAS' favour rather than SATA's. Normally you don't have to worry 
about that in a hot-swap chassis anyway.



SATA
  dual opteron 275, 8GB ECC
  24*320GB SATA II 7.2k drives (2*12way 3ware cards)
  (generic vendor)

Both boxes are about $8k running ubuntu. We're planning to setup with 
raid10. Our main requirement is highest TPS (focused on a lot of INSERTS).


Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?


I'm not sure this is an entirely fair question given the fact that the 
systems aren't easily comparable. They are likely not the same build 
quality or have the same kind of support, they occupy different amounts 
of space (2U vs probably at least 4U or 5U) and there will probably a be 
difference in energy consumption in favour of the first solution.
If you don't care about such things, it may actually be possible to 
build a similar set-up as your SATA-system with 12 or 16 15k rpm SAS 
disks or 10k WD Raptor disks. For the sata-solution you can also 
consider a 24-port Areca card.



Best regards,

Arjen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] SCSI vs SATA

2007-04-04 Thread Arjen van der Meijden

On 4-4-2007 21:17 [EMAIL PROTECTED] wrote:
fwiw, I've had horrible experiences with areca drivers on linux. I've 
found them to be unreliable when used with dual AMD64 processors and 4+ GB 
of ram. I've tried kernels 2.16 up to 2.19... intermittent yet 
inevitable ext3 corruptions. 3ware cards, on the other hand, have been 
rock solid.


That's the first time I hear such a thing. We have two systems (both are 
previous generation 64bit Xeon systems with 6 and 8GB memory) which run 
perfectly stable, with long uptimes, with an ARC-1130 and 8 WD-raptor disks.


Best regards,

Arjen

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] [EMAIL PROTECTED]: Progress on scaling of FreeBSD on 8 CPU systems]

2007-03-05 Thread Arjen van der Meijden
And here is that latest benchmark we did, using a 8 dual core opteron 
Sun Fire x4600. Unfortunately PostgreSQL seems to have some difficulties 
scaling over 8 cores, but not as bad as MySQL.


http://tweakers.net/reviews/674

Best regards,

Arjen

Arjen van der Meijden wrote:

Alvaro Herrera wrote:

Interesting -- the MySQL/Linux graph is very similar to the graphs from
the .nl magazine posted last year.  I think this suggests that the
MySQL deficiency was rather a performance bug in Linux, not in MySQL
itself ...


The latest benchmark we did was both with Solaris and Linux on the same 
box, and both showed such a drop. So I doubt the problem lies outside 
MySQL itself, although it might be possible to fix the load MySQL's usage 
pattern poses on a system via the OS. And MySQL 5.0.32 is less bad than 
4.1.22 on that system: we didn't have time to test 5.0.25 again, but .32 
scaled better, so at least some of the scaling issues were actually fixed 
in MySQL itself.


Best regards,

Arjen

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] [EMAIL PROTECTED]: Progress on scaling of FreeBSD on 8 CPU systems]

2007-03-05 Thread Arjen van der Meijden

Stefan Kaltenbrunner wrote:
ouch - do I read that right that even after tom's fixes for the 
regressions in 8.2.0 we are still 30% slower then the -HEAD checkout 
from the middle of the 8.2 development cycle ?


Yes, and although I tested about 17 different cvs-checkouts, Tom and I 
weren't really able to figure out where it happened. So it's a bit of a 
mystery why the performance is so much worse.


Best regards,

Arjen

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] Opinions on Raid

2007-03-03 Thread Arjen van der Meijden

On 28-2-2007 0:42 Geoff Tolley wrote:

[2] How do people on this list monitor their hardware raid?  Thus far we
have used Dell and the only way to easily monitor disk status is to use
their openmanage application.  Do other controllers offer easier means
of monitoring individual disks in a raid configuration?  It seems one
advantage software raid has is the ease of monitoring.


Recent Dell raid-controllers are based on LSI chips, although they are 
not exactly the same as similar LSI-controllers (anymore). Our Dell 
Perc5/e and 5/i work with the MegaCLI-tool from LSI. But that tool has 
really limited documentation from LSI itself. Luckily Fujitsu-Siemens 
offers a nice PDF:

http://manuals.fujitsu-siemens.com/serverbooks/content/manuals/english/mr-sas-sw-ug-en.pdf

Besides that, there are several Dell linux resources popping up, 
including on their own site:

http://linux.dell.com/

Personally I use nagios with nrpe for most of the monitoring, and write 
a little wrapper around the cli monitoring tool from the controller 
manufacturer to grok whether it's in a good/degraded/bad state.


If you have a MegaCLI-version, I'd like to see it, if possible? That 
would definitely save us some reinventing the wheel  :-)


Dell PERC controllers I think are mostly just derivatives of Adaptec/LSI 
controllers, so you might be able to get a more convenient monitoring 
tool from one of them that might work. See if you can find your PERC 
version in http://pciids.sourceforge.net/pci.ids, or if you're using 
Linux then which hw raid module is loaded for it, to get an idea of 
which place to start looking for that.


The current ones are afaik all LSI-based. But at least the recent SAS 
controllers (5/i and 5/e) are.


Best regards,

Arjen

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] [EMAIL PROTECTED]: Progress on scaling of FreeBSD on 8 CPU systems]

2007-03-02 Thread Arjen van der Meijden

Alvaro Herrera wrote:

Interesting -- the MySQL/Linux graph is very similar to the graphs from
the .nl magazine posted last year.  I think this suggests that the
MySQL deficiency was rather a performance bug in Linux, not in MySQL
itself ...


The latest benchmark we did was both with Solaris and Linux on the same 
box, and both showed such a drop. So I doubt the problem lies outside 
MySQL itself, although it might be possible to fix the load MySQL's usage 
pattern poses on a system via the OS. And MySQL 5.0.32 is less bad than 
4.1.22 on that system: we didn't have time to test 5.0.25 again, but .32 
scaled better, so at least some of the scaling issues were actually fixed 
in MySQL itself.


Best regards,

Arjen

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

   http://www.postgresql.org/about/donate


Re: [PERFORM] quad or dual core Intel CPUs

2007-02-13 Thread Arjen van der Meijden
 and power 
supply. It was one of the reasons we decided to use separate enclosures, 
separating the processors/memory from the big disk array.


Best regards and good luck,

Arjen van der Meijden

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Configuration Advice

2007-01-18 Thread Arjen van der Meijden

On 18-1-2007 0:37 Adam Rich wrote:

4) Complex queries that might take advantage of the MySQL Query Cache
since the base data never changes


Have you ever compared MySQL's performance with complex queries to 
PostgreSQL's? I once had a query which would operate on a recordlist and 
see whether there were any gaps larger than 1 between consecutive 
primary keys.


Normally that information isn't very useful, but this time it was. 
Since the data was in MySQL I tried several variations of queries in 
MySQL... After ten minutes or so I gave up waiting, but left my last 
version running. In the meantime I dumped the data, reloaded the data 
in PostgreSQL and ran some test queries there. I pretty soon came up with 
a query that took only 0.5 seconds on Postgres. The query on MySQL 
still wasn't finished...
In my experience it is (even with the 5.0 release) easier to get good 
performance from complex queries in postgresql. And postgresql gives you 
more useful information on why a query takes a long time when using 
explain (analyze). There are some drawbacks too of course, but while we 
in our company use mysql I switched to postgresql for some readonly 
complex query stuff just for its performance...


Besides that, mysql rewrites the entire table for most table-altering 
statements you do (including indexes). For small tables that's no issue, 
but if you somehow can't add all your indexes in a single statement to a 
table, you'll be waiting much longer for new indexes than with 
postgresql. And that situation isn't so unusual if you think of a query 
which needs an index that isn't there yet. Apart from the fact that it 
doesn't have functional indexes and such.


Long story short: MySQL still isn't the best performer when looking at 
the more complex queries. I've seen performance which made me assume it 
can't optimise sequential scans (when it is forced to loop using a seq 
scan it appears to do a new seq scan for each round in the loop...) and 
various other cases PostgreSQL can execute much more efficiently.


So unless you run the same queries a lot of times and know of a way to 
get it fast enough the initial time, the query cache is not much of a help.


Best regards,

Arjen

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Configuration Advice

2007-01-18 Thread Arjen van der Meijden

On 18-1-2007 17:20 Scott Marlowe wrote:
Besides that, mysql rewrites the entire table for most table-altering 
statements you do (including indexes). 


Note that this applies to the myisam table type.  innodb works quite
differently.  It is more like pgsql in behaviour, and is an mvcc storage


Afaik this is not engine specific and also applies to InnoDB. Here is 
what the MySQL manual says about it:
In most cases, ALTER TABLE works by making a temporary copy of the 
original table. The alteration is performed on the copy, and then the 
original table is deleted and the new one is renamed. While ALTER TABLE 
 is executing, the original table is readable by other clients. Updates 
and writes to the table are stalled until the new table is ready, and 
then are automatically redirected to the new table without any failed 
updates.


http://dev.mysql.com/doc/refman/5.0/en/alter-table.html

If it were myisam-only they sure would've mentioned that. Besides this 
is the behaviour we've seen on our site as well.


Since 'create index' is also an alter table statement for mysql, this 
also applies for adding indexes.


Best regards,

Arjen


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] Configuration Advice

2007-01-18 Thread Arjen van der Meijden

On 18-1-2007 18:28 Jeremy Haile wrote:
I once had a query which would operate on a recordlist and 
see whether there were any gaps larger than 1 between consecutive 
primary keys.


Would you mind sharing the query you described?  I am attempting to do
something similar now. 



Well it was over a year ago, so I don't know what I did back then. But 
since it was a query adjusted from what I did in MySQL there where no 
subqueries involved, I think it was something like this:

select a.id, min(b.id)
 from
  members a
  join members b on a.id < b.id
  left join members c on a.id +1 = c.id
where c.id IS NULL
group by a.id;

Or rewriting it to this one halves the execution time though:

select a.id, min(b.id)
 from
  members a
  left join members c on a.id +1 = c.id
  join members b on a.id < b.id
where c.id IS NULL
group by a.id;

Although this query seems to be much faster with 150k records:

select aid, bid
from
(select a.id as aid, (select min(b.id) from members b where b.id > a.id) 
as bid

 from
  members a
group by a.id) as foo
where bid > aid+1;

The first one takes about 16 seconds on my system with PG 8.2, the 
second about 1.8 seconds. But back then the list was much shorter, so it 
could have been the first one or a variant on that. On MySQL the first 
takes much more than the 16 seconds PostgreSQL uses, and after editing 
this e-mail it still isn't finished... The second one made EXPLAIN hang 
in my 5.0.32-bk, so I didn't try that for real.


Best regards,

Arjen

PS, In case any of the planner-hackers are reading, here are the plans 
of the first two queries, just to see if something can be done to 
decrease the differences between them. The main difference seems to be 
the groupaggregate vs the hashaggregate?


 GroupAggregate  (cost=34144.16..35144.38 rows=50011 width=8) (actual 
time=17653.401..23881.320 rows=71 loops=1)
   ->  Sort  (cost=34144.16..34269.19 rows=50011 width=8) (actual 
time=17519.274..21423.128 rows=7210521 loops=1)

 Sort Key: a.id
 ->  Nested Loop  (cost=11011.41..30240.81 rows=50011 width=8) 
(actual time=184.412..10945.189 rows=7210521 loops=1)
   ->  Hash Left Join  (cost=11011.41..28739.98 rows=1 
width=4) (actual time=184.384..1452.467 rows=72 loops=1)

 Hash Cond: ((a.id + 1) = c.id)
 Filter: (c.id IS NULL)
 ->  Seq Scan on members a  (cost=0.00..9903.33 
rows=150033 width=4) (actual time=0.009..71.463 rows=150033 loops=1)
 ->  Hash  (cost=9903.33..9903.33 rows=150033 
width=4) (actual time=146.040..146.040 rows=150033 loops=1)
   ->  Seq Scan on members c 
(cost=0.00..9903.33 rows=150033 width=4) (actual time=0.002..77.066 
rows=150033 loops=1)
   ->  Index Scan using members_pkey on members b 
(cost=0.00..875.69 rows=50011 width=4) (actual time=0.025..78.971 
rows=100146 loops=72)

 Index Cond: (a.id < b.id)
 Total runtime: 23882.511 ms
(13 rows)

 HashAggregate  (cost=30240.82..30240.83 rows=1 width=8) (actual 
time=12870.440..12870.504 rows=71 loops=1)
   ->  Nested Loop  (cost=11011.41..30240.81 rows=1 width=8) (actual 
time=168.658..9466.644 rows=7210521 loops=1)
  ->  Hash Left Join  (cost=11011.41..28739.98 rows=1 width=4) 
(actual time=168.630..865.690 rows=72 loops=1)

   Hash Cond: ((a.id + 1) = c.id)
   Filter: (c.id IS NULL)
    ->  Seq Scan on members a  (cost=0.00..9903.33 
rows=150033 width=4) (actual time=0.012..70.612 rows=150033 loops=1)
    ->  Hash  (cost=9903.33..9903.33 rows=150033 width=4) 
(actual time=140.432..140.432 rows=150033 loops=1)
  ->  Seq Scan on members c  (cost=0.00..9903.33 
rows=150033 width=4) (actual time=0.003..76.709 rows=150033 loops=1)
  ->  Index Scan using members_pkey on members b 
(cost=0.00..875.69 rows=50011 width=4) (actual time=0.023..73.317 
rows=100146 loops=72)

    Index Cond: (a.id < b.id)
 Total runtime: 12870.756 ms
(11 rows)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Configuration Advice

2007-01-18 Thread Arjen van der Meijden

On 18-1-2007 23:11 Tom Lane wrote:

Increase work_mem?  It's not taking the hash because it thinks it won't
fit in memory ...


When I increase it to 128MB in the session (arbitrarily selected 
relatively large value) it indeed has the other plan.
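For the record, that session-level change was simply along the lines of:

SET work_mem = '128MB';

so the server-wide default stays untouched.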


Best regards,

Arjen

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] opportunity to benchmark a quad core Xeon

2006-12-16 Thread Arjen van der Meijden

On 16-12-2006 4:24 Jeff Frost wrote:
We can add more RAM and drives for testing purposes.  Can someone 
suggest what benchmarks with what settings would be desirable to see how 
this system performs.  I don't believe I've seen any postgres benchmarks 
done on a quad xeon yet.


We've done our standard benchmark on a dual X5355:
http://tweakers.net/reviews/661

Verdict is that for a price/performance-ratio you're better off with a 
5160, but in absolute performance it does win.


Best regards,

Arjen

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] 8.2rc1 (much) slower than 8.2dev?

2006-12-07 Thread Arjen van der Meijden

On 7-12-2006 7:01 Jim C. Nasby wrote:

Can you post them on the web somewhere so everyone can look at them?
No, it's not (only) the size that matters, it's the confidentiality I'm 
not allowed to just break by myself. Well, at least not on a scale like 
that. I've been mailing off-list with Tom and we found at least one 
query that in some circumstances takes a lot more time than it should, 
due to it mistakenly chosing to do a bitmap index scan rather than a 
normal index scan.



Also, are you looking at EXPLAIN or EXPLAIN ANALYZE?
Explain analyze and normal query execution times of several million 
queries executed on both versions of postgresql, so we can say something 
about them statistically.


Best regards,

Arjen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Core 2 or Opteron

2006-12-07 Thread Arjen van der Meijden

These benchmarks are all done using 64 bit linux:
http://tweakers.net/reviews/646

Best regards,

Arjen

On 7-12-2006 11:18 Mindaugas wrote:


 Hello,

 We're planning new server or two for PostgreSQL and I'm wondering Intel
Core 2 (Woodcrest for servers?) or Opteron is faster for PostgreSQL now?

 When I look through hardware sites Core 2 wins. But I believe those tests
mostly are being done in 32 bits. Does the picture change in 64 bits?

 And I also remember that in PostgreSQL Opteron earlier had huge advantage
over older Xeons. But did Intel manage to change picture now?

 Thanks,

 Mindaugas


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] Core 2 or Opteron

2006-12-07 Thread Arjen van der Meijden

On 7-12-2006 12:05 Mindaugas wrote:
 Now about 2 core vs 4 core Woodcrest. For HP DL360 I see similarly 
priced dual core [EMAIL PROTECTED] and four core [EMAIL PROTECTED] According to 
article's scaling data PostgreSQL performance should be similar (1.86GHz 
* 2 * 80% = ~3GHz). And quad core has slightly slower FSB (1066 vs 1333).


 So it looks like the dual core 5160 Woodcrest is more likely the way to go 
if I want ultimate performance on two sockets?

 Besides that I think it should consume a bit less power!?


I think that's the better choice, yes. I've seen the X5355 (quad core 
2.66Ghz) at work and that one is faster than the 5160 we tested. But it's 
not as much faster as the extra GHz could imply, so the 5320 would very 
likely not outperform the 5160. At least not in our postgresql benchmark.
Besides that you end up with a slower FSB for more cores (1333 / 2 = 666 
per core, 1066 / 4 = 266 per core!) while there will be more traffic 
since the separate dual cores on the quad core communicate via the bus 
and there are more cores, so there is also in an absolute sense more 
cache coherency traffic...


So I'd definitely go with the 5160 or perhaps just the 5150 if the 
savings can allow for better I/O or more memory.


Best regards,

Arjen

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] 8.2rc1 (much) slower than 8.2dev?

2006-12-04 Thread Arjen van der Meijden

Tom Lane wrote:

Arjen van der Meijden [EMAIL PROTECTED] writes:
I'll run another analyze on the database to see if that makes any 
difference, but after that I'm not sure what to check first to figure 
out where things go wrong?


Look for changes in plans?


Yeah, there are a number of small changes in plans and estimated costs. 
I have a large list of queries executed against both databases, and I 
haven't seen any differences in row-estimates, so the analyzes have 
yielded similar enough results.


I'm not sure whether some of the changes are for better or worse; you 
can probably spot that a bit faster than I can. I saw a few index scans 
replaced by seq scans (on small tables), all index scans seem to have 
doubled in cost? And I saw a few bitmap scans instead of normal index 
scans and more such small changes. But not so small if you execute 
hundreds of thousands of those queries.


Since I'd rather not send the entire list of queries to the entire 
world, is it OK to send both explain analyze-files to you off list?


Best regards,

Arjen

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[PERFORM] 8.2rc1 (much) slower than 8.2dev?

2006-12-03 Thread Arjen van der Meijden

Hi List,

We've been doing some benchmarks lately (one of them made it to the 
PostgreSQL frontpage) with postgresql 8.2 dev (cvs checkout of 3 june 
2006). But we prefer of course to run a more production-like version and 
installed postgresql 8.2rc1.


As it turns out, after a dump/restore (to go from 820 to 822), copying 
the configuration files and doing a fresh 'vacuumdb -z' (-z is analyze) and 
'clusterdb', the RC1 processes about 50% *fewer* (webpage) requests than 
the 8.2dev we had, on the same machine/linux kernel/etc. On all 
cpu-configurations and loads we throw at it. Since it's a read-mostly 
database the location on disk should matter only very slightly.


For instance, with the system currently at hand it peaks at about 20 
concurrent clients in pg8.2 dev with 465407 requests processed in a 10 
minute timeframe. 8.2rc1 can only achieve 332499 requests in that same 
time frame with the same load and has a peak of 335995 with 35 
concurrent clients (but with 30 it only reached 287624). And we see the 
same for all loads we throw at them.


So either I'm missing something, there is a (significant enough) 
difference in how the tables were analyzed, or there have been some 
code-changes since then that change the behaviour and thereby decrease 
performance in our set-up.


Preferably I'd load the statistics from the 8.2-dev database into the 
8.2-rc1 one, but a simple insert or copy-statement won't work due to the 
'anyarray'-fields of pg_statistic, will it?


I'll run another analyze on the database to see if that makes any 
difference, but after that I'm not sure what to check first to figure 
out where things go wrong?


Best regards,

Arjen van der Meijden
Tweakers.net

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] availability of SATA vendors

2006-11-23 Thread Arjen van der Meijden

Hi Luke,

I forgot about that article, thanks for that link. That's indeed a nice 
overview of recent controllers (as of August). The Areca 1280 in that test 
(and the results I linked to earlier) is a pre-production model, so it 
might actually perform even better than in that test.


We've gotten samples from AMCC in the past, so a 96xx should be 
possible. I've pointed it out to the author of the previous 
raid-articles. Thanks for pointing that out to me.


Best regards,

Arjen

On 22-11-2006 22:47 Luke Lonergan wrote:

Arjen,

As usual, your articles are excellent!

Your results show again that the 3Ware 9550SX is really poor at random I/O
with RAID5 and all of the Arecas are really good.  3Ware/AMCC have designed
the 96xx to do much better for RAID5, but I've not seen results - can you
get a card and test it?

We now run the 3Ware controllers in RAID10 with 8 disks each and they have
been excellent.  Here (on your site) are results that bear this out:
  http://tweakers.net/reviews/639/9

- Luke


On 11/22/06 11:07 AM, Arjen van der Meijden [EMAIL PROTECTED]
wrote:


Jeff,

You can find some (Dutch) results here on our website:
http://tweakers.net/reviews/647/5

You'll find the AMCC/3ware 9550SX-12 with up to 12 disks, Areca 1280 and
1160 with up to 14 disks and a Promise and LSI sata-raid controller with
each up to 8 disks. Btw, that Dell Perc5 (sas) is afaik not the same
card as the LSI MegaRAID SATA 300-8X, but I have no idea whether they
share the same controllerchip.
In most of the graphs you also see an Areca 1160 with 1GB instead of its
default 256MB. Hover over the labels to see only that specific line;
that makes the graphs quite readable.

You'll also see a Dell Perc5/e in the results, but that was done using
Fujitsu SAS 15k rpm drives, not the WD Raptor 10k rpm's

If you dive deeper in our (still Dutch) benchmark database you may
find some results of several disk-configurations on several controllers
in various storage related tests, like here:
http://tweakers.net/benchdb/test/193

If you want to filter some results, look for "Resultaatfilter &
tabelgenerator" and press the "Toon filteropties" text. I think
you'll be able to understand the selection-overview there, even if you
don't understand Dutch ;)
"Filter resultaten" below means the same as in English (filter [the]
results)

Best regards,

Arjen

On 22-11-2006 17:36 Jeff Frost wrote:

On Wed, 22 Nov 2006, Bucky Jordan wrote:


Dells (at least the 1950 and 2950) come with the Perc5, which is
basically just the LSI MegaRAID. The units I have come with a 256MB BBU,
I'm not sure if it's upgradeable, but it looks like a standard DIMM in
there...

I posted some dd and bonnie++ benchmarks of a 6-disk setup a while back
on a 2950, so you might search the archive for those numbers if you're
interested- you should be able to get the same or better from a
similarly equipped LSI setup. I don't recall if I posted pgbench
numbers, but I can if that's of interest.

I could only find the 6 disk RAID5 numbers in the archives that were run
with bonnie++1.03.  Have you run the RAID10 tests since?  Did you settle
on 6 disk RAID5 or 2xRAID1 + 4XRAID10?





---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] availability of SATA vendors

2006-11-22 Thread Arjen van der Meijden

Jeff,

You can find some (Dutch) results here on our website:
http://tweakers.net/reviews/647/5

You'll find the AMCC/3ware 9550SX-12 with up to 12 disks, Areca 1280 and 
1160 with up to 14 disks and a Promise and LSI sata-raid controller with 
each up to 8 disks. Btw, that Dell Perc5 (sas) is afaik not the same 
card as the LSI MegaRAID SATA 300-8X, but I have no idea whether they 
share the same controllerchip.
In most of the graphs you also see an Areca 1160 with 1GB instead of its 
default 256MB. Hover over the labels to see only that specific line; 
that makes the graphs quite readable.


You'll also see a Dell Perc5/e in the results, but that was done using 
Fujitsu SAS 15k rpm drives, not the WD Raptor 10k rpm's


If you dive deeper in our (still Dutch) benchmark database you may 
find some results of several disk-configurations on several controllers 
in various storage related tests, like here:

http://tweakers.net/benchdb/test/193

If you want to filter some results, look for "Resultaatfilter & 
tabelgenerator" and press the "Toon filteropties" text. I think 
you'll be able to understand the selection-overview there, even if you 
don't understand Dutch ;)
"Filter resultaten" below means the same as in English (filter [the] 
results)


Best regards,

Arjen

On 22-11-2006 17:36 Jeff Frost wrote:

On Wed, 22 Nov 2006, Bucky Jordan wrote:


Dells (at least the 1950 and 2950) come with the Perc5, which is
basically just the LSI MegaRAID. The units I have come with a 256MB BBU,
I'm not sure if it's upgradeable, but it looks like a standard DIMM in
there...

I posted some dd and bonnie++ benchmarks of a 6-disk setup a while back
on a 2950, so you might search the archive for those numbers if you're
interested- you should be able to get the same or better from a
similarly equipped LSI setup. I don't recall if I posted pgbench
numbers, but I can if that's of interest.


I could only find the 6 disk RAID5 numbers in the archives that were run 
with bonnie++1.03.  Have you run the RAID10 tests since?  Did you settle 
on 6 disk RAID5 or 2xRAID1 + 4XRAID10?




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [PERFORM] availability of SATA vendors

2006-11-17 Thread Arjen van der Meijden

On 17-11-2006 18:45 Jeff Frost wrote:
I see many of you folks singing the praises of the Areca and 3ware SATA 
controllers, but I've been trying to price some systems and am having 
trouble finding a vendor who ships these controllers with their 
systems.  Are you rolling your own white boxes or am I just looking in 
the wrong places?


In Holland it is indeed the smaller companies that supply such cards. 
But luckily there is a very simple solution: all those big suppliers do 
supply SAS-controllers. And as you may know, SATA disks can be used 
without any problem on a SAS controller. Of course they are less 
advanced and normally slower than a SAS disk.


So you can have a nice SAS raid card and insert SATA disks in it. And 
then you can shop at any major server vendor I know of.


Good luck,

Arjen

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] Query plan for heavy SELECT with lite sub-SELECTs

2006-11-03 Thread Arjen van der Meijden

Alvaro Herrera wrote:

Performance analysis of strange queries is useful, but the input queries
have to be meaningful as well.  Otherwise you end up optimizing bizarre
and useless cases.



I had a similar one a few weeks ago. I did some batch-processing over a 
bunch of documents and discovered postgresql was faster if I let it 
process just 1000 documents at a time, instead of all 45000 at once. But 
processing 1000 at a time was still faster than 1000 times one document.


So I started with a query like:
SELECT docid, (SELECT work to be done for each document)
FROM documents
ORDER BY docid
LIMIT 1000
OFFSET ?

And I noticed the 44th iteration was much slower than the first.

Rewriting it to something like this made the last iteration about as 
fast as the first:

SELECT docid, (SELECT work to be done for each document)
FROM documents
WHERE docid IN (SELECT docid FROM documents
ORDER BY docid
LIMIT 1000
OFFSET ?
)

I know something like that isn't very set-based thinking, but then again 
the query's structure did come from an iterative algorithm, and it turned 
out to be faster (less query-overhead) and easier to scale in PostgreSQL. 
I've tried a few more set-like structures, but those were all slower 
than this approach, probably because they were a little more 
complex. Some of them took more than 10x the amount of time...


Another real-life example would be to display the number of replies to a 
topic in a topic listing of a forum, or the name of the author of the 
last message. You probably don't want to count all the replies for each 
topic if you're only going to display headings 100 - 200.
And there are a few more examples to think of where a join+group by 
isn't going to work, but a subquery in the selectlist just does what you 
want.

Of course most of the time you won't be using an OFFSET then.
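
As an illustration only, here is a sketch of what such a topic listing 
could look like (the topics/messages tables and all column names are 
made up for the example):

SELECT t.id, t.title,
       (SELECT count(*)
          FROM messages m
         WHERE m.topic_id = t.id) AS reply_count,
       (SELECT m.author
          FROM messages m
         WHERE m.topic_id = t.id
         ORDER BY m.posted_at DESC
         LIMIT 1) AS last_author
  FROM topics t
 ORDER BY t.last_post_at DESC
 LIMIT 100 OFFSET 100;

The subqueries in the select list are only evaluated for the 100 rows 
that actually end up in the listing, which is the point made above.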

Best regards,

Arjen

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] New hardware thoughts

2006-10-23 Thread Arjen van der Meijden

On 20-10-2006 22:33 Ben Suffolk wrote:
How about the Fujitsu Siemens Sun Clones? I have not really looked at 
them but have heard the odd good thing about them.


Fujitsu doesn't build Sun clones! That really is insulting for them ;-) 
They do offer Sparc-hardware, but that's a bit higher up the market.


On the other hand, they also offer nice x86-server hardware. We've had 
our hands on a RX300 (2U, dual woodcrest, six 3.5 sas-bays, integraded 
lsi-logic raid-controller) and found it to be a very nice machine.


But again, they also offer (the same?) Broadcom networking on board, 
just like Dell and HP. And it is an LSI Logic sas-controller on board, so 
if FBSD has trouble with either of those, it's hard to find anything 
suitable at all in the market.


Best regards,

Arjen

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] New hardware thoughts

2006-10-22 Thread Arjen van der Meijden

On 20-10-2006 16:58 Dave Cramer wrote:

Ben,

My option in disks is either 5 x 15K rpm disks or 8 x 10K rpm disks 
(all SAS), or if I pick a different server I can have 6 x 15K rpm or 8 
x 10K rpm (again SAS). In each case controlled by a PERC 5/i (which I 
think is an LSI Mega Raid SAS 8408E card).



You mentioned a Perc controller, so I'll assume this is a Dell.

My advice is to find another supplier. check the archives for Dell.

Basically you have no idea what the Perc controller is since it is 
whatever Dell decides to ship that day.


As far as I know, the later Dell PERC's have all been LSI 
Logic-controllers; to my knowledge Dell has been a major contributor to 
the LSI-Linux drivers...
At least the 5/i and 5/e have LSI-logic controller chips. The 5/e is not 
an exact copy of the LSI MegaRAID 8480E: its board layout 
and BBU-memory module are quite different. It does share its 
functionality however and has afaik the same controller-chip on it.


Currently we're using a Dell 1950 with PERC 5/e connecting a MD1000 
SAS-enclosure, filled with 15 36GB 15k rpm disks. And the Dell-card 
easily beats an ICP Vortex-card we also connected to that enclosure.


Oh, and we do get much more than, say, 8-50 MB/sec out of it. WinBench99 
gets about 644MB/sec in sequential reading tops from a 14-disk raid10 
and although IOmeter is a bit less dramatic it still gets over 
240MB/sec. I have no idea how fast a simple dd would be and have no 
bonnie++ results (at hand) either.
At least in our benchmarks, we're convinced enough that it is a good 
set-up. There will be faster set-ups, but at this price-point it won't 
surprise me if it's the fastest disk-set you can get.


By the way, as far as I know, HP offers the exact same broadcom network 
chip in their systems as Dell does... So if that broadcom chip is 
unstable on a Dell in FreeBSD, it might very well be unstable in a HP too.


Best regards,

Arjen

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] Hints proposal

2006-10-12 Thread Arjen van der Meijden

On 12-10-2006 21:07 Jeff Davis wrote:

On Thu, 2006-10-12 at 19:15 +0200, Csaba Nagy wrote:

To formalize the proposal a litte, you could have syntax like:

CREATE HINT [FOR USER username] MATCHES regex APPLY HINT some_hint;

Where some_hint would be a hinting language perhaps like Jim's, except
not guaranteed to be compatible between versions of PostgreSQL. The
developers could change the hinting language at every release and people
can just re-write the hints without changing their application.


There are some disadvantages to not writing the hints in a query. But of 
course there are disadvantages to doing so as well ;)


One I can think of is that it can be very hard to define which hint 
should apply where. Especially in complex queries, defining at which 
point exactly you'd like your hint to work is not a simple matter, 
unless you can just place a comment right at that position.


Say you have a complex query with several joins of the same table. And 
in all but one of those joins postgresql actually chooses the best 
option, but somehow you keep getting some form of join while a nested 
loop would be best. How would you pinpoint just that specific clause, 
while the others remain unhinted ?


Your approach seems to be a bit similar to aspect oriented programming 
(in java for instance). You may need a large amount of information about 
the queries, and it is likely a general regexp with a general hint will 
not do much good (at least I expect a hinting-system to be only usable 
in corner cases and at very specific points in a query).


By the way, wouldn't it be possible for the planner to learn from a query 
execution, so it would know for future reference whether a choice for a 
specific plan or estimate was actually correct or not? Or is that in the 
line of DB2's complexity and a very hard problem and/or would it add too 
much overhead?


Best regards,

Arjen

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] Opteron vs. Xeon benchmark

2006-09-22 Thread Arjen van der Meijden

Try the translation ;)

http://tweakers.net/reviews/646/13

On 22-9-2006 10:32 Hannes Dorbath wrote:

A colleague pointed me to this site tomorrow:

http://tweakers.net/reviews/642/13

I can't read the language, so can't get a grip on what exactly the 
benchmark was about.


Their diagrams show `Request per seconds'. What should that mean? How 
many connections PG accepted per second? So they measured the OS fork 
performance? Should that value be of any interrest? Anyone with heavy 
OLTP workload will use persistent connections or a connection pool in 
front.


Do they mean TPS? That woulnd't make much sense in a CPU benchmark, as 
OLTP workload is typically limited by the disc subsystem.


Can someone enlighten me what this site is about?




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] Opteron vs. Xeon benchmark

2006-09-22 Thread Arjen van der Meijden

On 22-9-2006 22:34 Vivek Khera wrote:
so you think AMD is just sitting around twiddling their thumbs and 
saying well, time to give up since Intel is faster today.  no.  there 
will be back-and forth between these two vendors to our benefit.  I 
would expect next-gen AMD chips to be faster than the intels.  If not, 
then perhaps they *should* give up :-)


Please read the english translation of that article I gave earlier 
today. Then you can see the set-up and that it's a bit childish to quote 
"benchmark" as you did in the title of this thread.
All the questions in your initial mail are answered in the article, and as 
said, there is an english translation of the dutch article you posted.


What you conclude from that translation is not the conclusion of the 
article, just that AMD has *no* answer at this time and won't have one 
until at least somewhere in 2007 when their K8L will hit the market.
But the K8L is not likely to beat these chips by as much as the Opteron 
beat the first Xeon's, if it is faster at all...


If you're an AMD-fan, by all means, buy their products, those processors 
are indeed fast and you can build decent servers with them. But don't 
rule out Intel, just because with previous processors they were the 
slower player ;)


Best regards,

Arjen van der Meijden

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] PostgreSQL and sql-bench

2006-09-21 Thread Arjen van der Meijden

On 21-9-2006 23:49 Jim C. Nasby wrote:

Even with fsync = off, there's still a non-trivial amount of overhead
brought on by MVCC that's missing in myisam. If you don't care about
concurrency or ACIDity, but performance is critical (the case that the
MySQL benchmark favors), then PostgreSQL probably isn't for you.


That depends on the required scalability (both in number of cpu's and in 
number of concurrent clients). In our benchmarks MySQL is beaten by 
PostgreSQL in a read-mostly environment with queries that are designed 
for MySQL, but slightly adjusted to work on PostgreSQL (for MySQL 5.0 
and 5.1, about the same adjustments were needed).

But for very low numbers of concurrent users, MySQL outperforms PostgreSQL.

Have a look here:
http://tweakers.net/reviews/646/10
and here:
http://tweakers.net/reviews/638/4

As you can see, both MySQL 5.0 and 4.1 start much higher for a few 
clients, but when you add more clients or more cpu's, MySQL scales less 
well, its performance even starts dropping, and it soon falls far behind 
PostgreSQL.


So for a web-application, PostgreSQL may be much better, since generally 
the only situation where you need maximum performance is when you have 
to service a lot of concurrent visitors.
But if you benchmark only with a single thread or do benchmarks that are 
nowhere near a real-life environment, it may show very different 
results of course.


Best regards,

Arjen van der Meijden

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Optimize SQL

2006-09-15 Thread Arjen van der Meijden

On 15-9-2006 17:53 Tom Lane wrote:

If that WHERE logic is actually what you need, then getting this query
to run quickly seems pretty hopeless.  The database must form the full
outer join result: it cannot discard any listing0_ rows, even if they
have lastupdate outside the given range, because they might join to
addressval2_ rows within the given createdate range.  And conversely
it can't discard any addressval2_ rows early.  Is there any chance
that you wanted AND not OR there?


Couldn't it also help to do something like this?

SELECT ..., (SELECT MAX(createdate) FROM addressval ...)
FROM listing l
  LEFT JOIN address ...
WHERE l.id IN (SELECT id FROM listing WHERE lastupdate ...
  UNION
SELECT id FROM listing JOIN addressval a ON ... WHERE 
a.createdate ...)



It's not pretty, but looking at the explain only a small number of 
records match both clauses. So this should allow the use of indexes for 
both the createdate-clause and the lastupdate-clause.


Best regards,

Arjen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Xeon Woodcrest/Dempsey vs Opteron Socket F/940 with

2006-09-08 Thread Arjen van der Meijden

Dave Cramer wrote:

Hi, Arjen,


The Woodcrest is quite a bit faster than the Opterons. Actually... 
With Hyperthreading *enabled* the older Dempsey-processor is also 
faster than the Opterons with PostgreSQL. But then again, it is the 
top-model Dempsey and not a top-model Opteron so that isn't a clear win.
Of course its clear that even a top-Opteron wouldn't beat the 
Dempsey's as easily as it would have beaten the older Xeon's before that.


Why wouldn't you use a top of the line Opteron ?


What do you mean by this question? Why we didn't test the Opteron 285 
instead of the 280?


Well, it's not that you can just go up to a hardware supplier and pick 
exactly the system you want to review/benchmark... especially not with 
pre-production hardware that (at the time) wasn't very widely available.
Normally, you just get whatever system they have available at their 
marketing or pre-sales department.


The Opteron 280 was from an earlier review and was fitted in the Try 
and Buy-version of the Sun Fire x4200. In that system; you only have a 
few options where the 280 was the fastest at the time.


But then again, systems with the Woodcrest 5150 (the subtop one) and 
Opteron 280 (also the subtop one) are about equal in price, so it's not a 
bad comparison from a bang-for-buck point of view. The Dempsey was added 
to show how both the Opteron and the newer Woodcrest would compete 
against that one.


Best regards,

Arjen

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Xeon Woodcrest/Dempsey vs Opteron Socket F/940 with

2006-09-08 Thread Arjen van der Meijden

On 8-9-2006 15:01 Dave Cramer wrote:


But then again, systems with the Woodcrest 5150 (the subtop one) and 
Opteron 280 (also the subtop one) are about equal in price, so its not 
a bad comparison in a bang-for-bucks point of view. The Dempsey was 
added to show how both the Opteron and the newer Woodcrest would 
compete against that one.


Did I read this correctly that one of the Opterons in the test only had 
4G of ram vs 7 G in the Intel boxes ? If so this is a severely limiting 
factor for postgresql at least?


Actually, it's not in this benchmark. It's not a large enough dataset to 
put any pressure on IO, not even with just 2GB of memory.


But, to display it more accurately, have a look here: 
http://tweakers.net/reviews/638/2 and then scroll down to the bottom graph.
As you can see, the 8GB-version was faster, but not by enough to call it 
'severely' limiting. Unfortunately, the system just wasn't very stable 
with that 8GB of memory (it was other memory, not just more). So we 
couldn't finish many benchmarks with it and decided, partially based on 
this graph, to just go for the 4GB.


Anyway, you can always compare the results of the Woodcrest with the Sun 
Fire x4200 results (called 'Opteron DDR' or 'Opteron 940' in the latest 
article) to see how an Opteron with 8GB of memory compares to the Woodcrest.


More of those results can be found in this english article:
http://tweakers.net/reviews/638
And in this Dutch one:
http://tweakers.net/reviews/633

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Xeon Woodcrest/Dempsey vs Opteron Socket F/940 with

2006-09-08 Thread Arjen van der Meijden

On 8-9-2006 18:18 Stefan Kaltenbrunner wrote:


interesting - so this is a mostly CPU-bound benchmark ?
Out of curiousity have you done any profiling on the databases under
test to see where they are spending their time ?


Yeah, it is.

We didn't do any profiling.
We had a Sun-engineer visit us to see why MySQL performed so bad on the 
T2000 and he has done some profiling, but that is of course just a small 
and specific part of our total set of benchmarks.
Postgresql was mostly left out of that picture since it performed pretty 
well (although it may even do better with more tuning and profiling).


We are/were not interested enough in the profiling-part, since we just 
run the benchmark to see how fast each system is. Not really to see how 
fast each database is or why a database is faster on X or Y.


The latter is of course pretty interesting, but also requires quite a 
bit of knowledge of the internals and a bit of time to analyze the 
results...


Best regards,

Arjen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[PERFORM] Xeon Woodcrest/Dempsey vs Opteron Socket F/940 with postgresql and some SAS raid-figures

2006-09-07 Thread Arjen van der Meijden

Hi,

We've been running our webapp database-benchmark again on mysql and 
postgresql. This time using a Fujitsu-Siemens RX300 S3 machine equipped 
with a 2.66Ghz Woodcrest (5150) and a 3.73Ghz Dempsey (5080). And we 
compared those results to our earlier Opteron benchmarks on the 2.4GHz 
Socket F and Socket 940 versions (2216, 280).


You can see the english translation here:
http://tweakers.net/reviews/646

The Woodcrest is quite a bit faster than the Opterons. Actually... With 
Hyperthreading *enabled* the older Dempsey-processor is also faster than 
the Opterons with PostgreSQL. But then again, it is the top-model 
Dempsey and not a top-model Opteron so that isn't a clear win.
Of course its clear that even a top-Opteron wouldn't beat the Dempsey's 
as easily as it would have beaten the older Xeon's before that.


Again PostgreSQL shows very good scalability, so good that even 
HyperThreading adds extra performance to it with 4 cores enabled... 
while MySQL in every version we tested (5.1.9 is not displayed, but 
showed similar performance) was slower with HT enabled.


Further more we received our ordered Dell MD1000 SAS-enclosure which has 
15 SAS Fujitsu MAX3036RC disks and that unit is controlled using a Dell 
PERC 5/e.

We've done some benchmarks (unfortunately everything is in Dutch for this).

We tested varying numbers of disks in RAID10 (sets of 4, 5, 6 and 7 
2-disk mirrors, striped), RAID50 and RAID5. The interfaces to display 
the results are in a Google-style beta state, but here is a list of all 
benchmarks done:

http://tweakers.net/benchdb/search?query=md1000&ColcomboID=5

Hover over the left titles to see how many disks and what raid-level 
were used. Here is a comparison of 14-disk RAID5/50/10's:

http://tweakers.net/benchdb/testcombo/wide/?TestcomboIDs%5B1156%5D=1&TestcomboIDs%5B1178%5D=1&TestcomboIDs%5B1176%5D=1&DB=Nieuws&Query=Keyword

For raid5 we have some graphs:
http://tweakers.net/benchdb/testcombo/1156
Scroll down to see how adding disks improves performance on it. The 
Areca 1280 with WD Raptors is a very good alternative (or even better), 
as you can see for most benchmarks, but it is beaten as soon as the 
relative weight of random-IO increases (I/O-meter fileserver and 
database benchmarks); the processor on the 1280 is faster than the one 
on the Dell-controller so it's faster in sequential IO.
These benchmarks were not done using postgresql, so you shouldn't read 
them as absolute for all your situations ;-) But you can get a good 
impression I think.


Best regards,

Arjen van der Meijden
Tweakers.net

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] most bang for buck with ~ $20,000

2006-08-18 Thread Arjen van der Meijden

Hi Kenji,

I'm not sure what you mean by 'something newer'? The intel 
woodcrest-cpu's are brand-new compared to the amd opterons. But if you 
need a 4-cpu config (I take it you want 8-cores in that case), Dell 
doesn't offer much. Whether something new will come, I don't know. I'm 
not sure when (or if?) a MP-Woodcrest will arrive and/or when Dell will 
start offering Opteron-servers.


SAS has been designed as the successor to SCSI.

As I see it, SAS has currently one major disadvantage. Lots of new 
servers are equipped with SAS-drives, a few nice SAS-raidcontrollers 
exist, but the availability of external enclosures for SAS is not 
widespread yet. So your options of going beyond (say) 8 disks per system 
are a bit limited.


There are of course advantages as well. The bus is much wider (you can 
have 4 lanes of 3Gbps each to an enclosure). You can mix sas and sata 
disks, so you could have two arrays in the same enclosure, one big 
storage bin and a very fast array, or just use only sata disks on a sas 
controller. The cabling itself is also much simpler/more flexible 
(although using a hot-plug enclosure of course shields you mostly from 
that).
But whether it's the right choice to make now? I'm not sure. We weren't 
too fond of investing a lot of money in an end-of-life system. And since 
we're a tech-website, we also had to worry about our "being modern" 
image, of course ;)

The main disadvantage I see in this case is, as said, the limited 
availability of external enclosures in comparison to SCSI and Fibre 
Channel. HP currently only offers their MSA50 (for the rather expensive 
SFF disks) while their MSA60 (normal disks) will not be available until 
somewhere in 2007 and Dell also only offers one enclosure, the MD1000.
The other big players offer nothing yet, as far as I know, while they 
normally offer several SCSI and/or FC-enclosures.
There are also some third-party enclosures (adaptec and promise for 
instance) available of course.


Best regards,

Arjen

On 18-8-2006 21:07, Kenji Morishige wrote:
Thanks Arjen, 
I have unlimited rack space if I really need it.  Is serial/SAS really the

better route to go than SCSI these days? I'm so used to ordering SCSI that
I've been out of the loop with new disk enclosures and disk tech.  I been
trying to price out a HP DL585, but those are considerably more than the
Dells.  Is it worth waiting a few more weeks/months for Dell to release
something newer?

-Kenji

On Wed, Aug 09, 2006 at 07:35:22AM +0200, Arjen van der Meijden wrote:

With such a budget you should easily be able to get something like:
- A 1U high-performance server (for instance the Dell 1950 with 2x 
Woodcrest 5160, 16GB of FB-Dimm memory, one 5i and one 5e perc raid 
controller and some disks internally)
- An external SAS direct attached disks storage enclosure full with 15k 
rpm 36GB disks (for instance the MD1000, with 15x 36GB 15k disks)


Going for the dell-solution would set you back only (including 
savings) about $13-$14k. HP offers a similar solutions (a HP DL360G5 or 
a DL380G5/DL385 with two MSA50's for instance) which also fit in your 
budget afaik. The other players tend to be (a bit) more expensive, force 
you to go with Fibre Channel or ancient SCSI external storage ;)


If you'd like to have a product by a generic vendor, have a look at the 
Adaptec JS50 SAS Jbod enclosure or Promise's Vtrak 300 (both offer 12 
sas/sata bays in 2U) for storage.


If you're limited to only 2U of rack space, its a bit more difficult to 
get maximum I/O in your budget (you have basically space for about 8 or 
12 3.5 disks (with generic suppliers) or 16 2.5 sff disks (with HP)).
But you should still be able to have two top-off-the-line x86 cpu's (amd 
opteron 285 or intel woodcrest 5160) and 16GB of memory (even FB Dimm, 
which is pretty expensive).


Best regards,

Arjen van der Meijden


On 8-8-2006 22:43, Kenji Morishige wrote:

I've asked for some help here a few months ago and got some really helpfull
answers regarding RAID controllers and server configuration.  Up until
recently I've been running PostgreSQL on a two year old Dual Xeon 3.06Ghz
machine with a single channel RAID controller (previously Adaptec 2200S, 
but
now changed to LSI MegaRAID). The 2U unit is from a generic vendor using 
what
I believe is a SuperMicro motherboard.  In the last week after upgrading 
the

RAID controller, the machine has had disk failure and some other issues. I
would like to build a very reliable dedicated postgreSQL server that has 
the
ultimate possible performance and reliabily for around $20,000.  The data 
set

size is only currently about 4GB, but is increasing by approximately 50MB
daily.  The server also requires about 500 connections and I have been
monitoring about 100-200 queries per second at the moment.  I am planning 
to

run FreeBSD 6.1 if possible, but I am open to any other suggestions if it
improves performance.

I am considering a setup such as this:
 - At least dual cpu (possibly

Re: [PERFORM] most bang for buck with ~ $20,000

2006-08-18 Thread Arjen van der Meijden
Well, that's of course really hard to tell. From personal experience in 
a read-mostly environment, the subtop woodcrest 5150 (2.6Ghz) 
outperforms the top dempsey 5080 (3.7Ghz, in the same system) by quite a 
nice margin. But that dempsey already has the faster FB-Dimm memory and 
a much wider FSB compared to your 3.06Ghz Xeons.
But if we assume that the 3.7Ghz 5080 is just the extra mhz faster (~ 
25%), for a single (dual core) 3Ghz Woodcrest you might already be 
talking about a 50% improvement in terms of cpu-power over your current 
set-up. Of course depending on workload and its scalability etc etc.


In a perfect world, with linear scalability (note: a read-mostly 
postgresql can actually do that on a Sun Fire T2000 with Solaris) that 
would yield a 200% improvement when going from 2 to 4 cores. A 70-80% 
scaling is more reasonable and would still imply you'd improve more than 
150% over your current set-up.
Please note that this is partially based on internal testing and partially 
on assumptions, and would at least require more real-world testing for an 
app more similar to yours.
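
Spelled out, that rough estimate reads roughly as follows (all figures 
are the assumptions from the paragraphs above, not measurements):

  one dual core Woodcrest ~ 1.5x the current dual Xeon 3.06GHz
  4 cores, linear scaling : 1.5 x 2.00 = 3.0x  (a +200% improvement)
  4 cores, 70-80% scaling : 1.5 x 1.75 ~ 2.6x  (still more than +150%)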


As soon as we publish some numbers on this (and if I don't forget), 
I'll let you know on the list. That will include postgresql and recent 
x86 cpu's on linux and should be ready soon.


Best regards,

Arjen


On 18-8-2006 21:51, Kenji Morishige wrote:

Thanks Arjen for your reply, this is definitely something to consider. I
think in our case, we are not too concerned with the tech image as much as if
the machine will allow us to scale the loads we need. I'm not sure if we
should worry so much about the IO bandwidth as we are not even close to
saturating 320MB/s.  I think stability, reliability, and ease-of-use and
recovery is our main concern at the moment.  I currently am runing a load
average of about .5 on a dual Xeon 3.06Ghz P4 setup.  How much CPU
performance improvement do you think the new woodcrest cpus are over these?

-Kenji

On Fri, Aug 18, 2006 at 09:41:55PM +0200, Arjen van der Meijden wrote:

Hi Kenji,

I'm not sure what you mean by 'something newer'? The intel 
woodcrest-cpu's are brand-new compared to the amd opterons. But if you 
need a 4-cpu config (I take it you want 8-cores in that case), Dell 
doesn't offer much. Whether something new will come, I don't know. I'm 
not sure when (or if?) a MP-Woodcrest will arrive and/or when Dell will 
start offering Opteron-servers.


Sas has been designed as the successor to SCSI.

As I see it, SAS has currently one major disadvantage. Lots of new 
servers are equipped with SAS-drives, a few nice SAS-raidcontrollers 
exist, but the availability of external enclosures for SAS is not 
widespread yet. So your options of going beyond (say) 8 disks per system 
are a bit limited.


There are of course advantages as well. The bus is much wider (you can 
have 4 lanes of 3Gbps each to an enclosure). You can mix sas and sata 
disks, so you could have two arrays in the same enclosure, one big 
storage bin and a very fast array or just use only sata disks on a sas 
controller. The cabling itself is also much simpler/more flexible 
(although using a hot-plug enclosure of course shields you mostly from 
that).
But whether its the right choice to make now? I'm not sure. We weren't 
to fond of investing a lot of money in an end-of-life system. And since 
we're a tech-website, we also had to worry about our being modern 
image, of course ;)


The main disadvantage I see in this case is, as said, the limited 
availability of external enclosures in comparison to SCSI and Fibre 
Channel. HP currently only offers their MSA50 (for the rather expensive 
SFF disks) while their MSA60 (normal disks) will not be available until 
somewhere in 2007 and Dell also only offers one enclosure, the MD1000.
The other big players offer nothing yet, as far as I know, while they 
normally offer several SCSI and/or FC-enclosures.
There are also some third-party enclosures (adaptec and promise for 
instance) available of course.


Best regards,

Arjen

On 18-8-2006 21:07, Kenji Morishige wrote:
Thanks Arjen, 
I have unlimited rack space if I really need it.  Is serial/SAS really the

better route to go than SCSI these days? I'm so used to ordering SCSI that
I've been out of the loop with new disk enclosures and disk tech.  I been
trying to price out a HP DL585, but those are considerably more than the
Dells.  Is it worth waiting a few more weeks/months for Dell to release
something newer?

-Kenji

On Wed, Aug 09, 2006 at 07:35:22AM +0200, Arjen van der Meijden wrote:

With such a budget you should easily be able to get something like:
- A 1U high-performance server (for instance the Dell 1950 with 2x 
Woodcrest 5160, 16GB of FB-Dimm memory, one 5i and one 5e perc raid 
controller and some disks internally)
- An external SAS direct attached disks storage enclosure full with 15k 
rpm 36GB disks (for instance the MD1000, with 15x 36GB 15k disks)


Going for the dell-solution would set you back only

Re: [PERFORM] PostgreSQL runs a query much slower than BDE and MySQL

2006-08-16 Thread Arjen van der Meijden

On 16-8-2006 18:48, Peter Hardman wrote:
Using identically structured tables and the same primary key, if I run this on 
Paradox/BDE it takes about 120ms, on MySQL (5.0.24, local server) about 3ms, 
and on PostgresSQL (8.1.3, local server) about 1290ms). All on the same 
Windows XP Pro machine with 512MB ram of which nearly half is free.  


Is that with or without query caching? I.e. can you test it with SELECT 
SQL_NO_CACHE ... ?
In a read-only environment it will still beat PostgreSQL, but as soon as 
you'd get a read-write environment, MySQL's query cache is of less use. 
So you should compare both the cached and non-cached version, if applicable.
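
As a sketch, bypassing the MySQL query cache for such a test is just a 
matter of adding the keyword to the SELECT (the column list is 
abbreviated here):

SELECT SQL_NO_CACHE f.regn_no, f.transfer_date
  FROM SHEEP_FLOCK f
 WHERE f.flock_no = '1359';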


Besides that, most advice on this list is impossible without the 
result of 'explain analyze', so you should probably get that as well.


I'm not sure whether this is the same query, but you might want to try:
SELECT DISTINCT f1.regn_no, f1.transfer_date as date_in
FROM SHEEP_FLOCK f1
WHERE
f1.flock_no = '1359'
AND f1.transfer_date = (SELECT MAX(f.transfer_date) FROM SHEEP_FLOCK f 
WHERE regn_no = f1.regn_no)


And you might need an index on (regn_no, transfer_date) and/or one 
combined with that flock_no.
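
A minimal sketch of those two suggestions (the index names are made up):

CREATE INDEX sheep_flock_regn_transfer
    ON sheep_flock (regn_no, transfer_date);
CREATE INDEX sheep_flock_flock_regn_transfer
    ON sheep_flock (flock_no, regn_no, transfer_date);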


Best regards,

Arjen

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [PERFORM] most bang for buck with ~ $20,000

2006-08-09 Thread Arjen van der Meijden
We were in a similar situation with a similar budget. But we had two 
requirements: no deprecated scsi while the successor SAS is available, 
and preferably only 3 or 4U of rack space. And it had to have a 
reasonable number of disks (at least 12).


The two options we finally chose between were a Dell 1U 1950 with two 
Woodcrest 5160s (I don't think the older Dempsey 50x0's are a good idea) 
and 16GB of memory combined with a PowerVault MD1000 external SAS JBOD 
storage unit with 15 36GB 15k rpm disks, and from HP a similarly 
configured DL360G5 (also 1U) combined with two MSA50 SFF SAS JBOD 
enclosures with 20 36GB 10k rpm SFF disks.


Both enclosures offer SAS-connectivity (serial attached scsi), i.e. 
the next generation scsi. Which is supposed to be the successor to 
scsi, but unfortunately it's not yet as widely available.


The Dell MD1000 is 3U high and can be fitted with 15 3.5" disks, the 
MSA50 is 1U and can be fitted with 10 2.5" disks.
In terms of performance you'll likely need two MSA50's to be up to par 
with one MD1000. The SFF disks are about as expensive as the 15k 3.5" 
disks... so they are mostly interesting for packing a lot of I/O in a 
small enclosure. HP is going to offer a 3.5" SAS-enclosure (MSA60) but 
that one won't be available until Q1 2007 or something like that.
As said, Promise and Adaptec also offer SAS enclosures; both are 2U and 
can be fitted with 12 disks. There are more available, but they are 
generally quite a bit harder to find.


Good luck with your search.

Best regards,

Arjen


Kenji Morishige wrote:

I have unlimited rack space, so 2U is not the issue. The boxes are stored in
our lab for internal software tools.  I'm going to research those boxes you
mention.  Regarding the JBOD enclosures, are these generally just 2U or 4U
units with SCSI interface connectors?  I didn't see these types of boxes
availble on Dell website, I'll look again.
-Kenji

On Wed, Aug 09, 2006 at 07:35:22AM +0200, Arjen van der Meijden wrote:

With such a budget you should easily be able to get something like:
- A 1U high-performance server (for instance the Dell 1950 with 2x 
Woodcrest 5160, 16GB of FB-Dimm memory, one 5i and one 5e perc raid 
controller and some disks internally)
- An external SAS direct attached disks storage enclosure full with 15k 
rpm 36GB disks (for instance the MD1000, with 15x 36GB 15k disks)


Going for the dell-solution would set you back only (including 
savings) about $13-$14k. HP offers a similar solutions (a HP DL360G5 or 
a DL380G5/DL385 with two MSA50's for instance) which also fit in your 
budget afaik. The other players tend to be (a bit) more expensive, force 
you to go with Fibre Channel or ancient SCSI external storage ;)


If you'd like to have a product by a generic vendor, have a look at the 
Adaptec JS50 SAS Jbod enclosure or Promise's Vtrak 300 (both offer 12 
sas/sata bays in 2U) for storage.


If you're limited to only 2U of rack space, its a bit more difficult to 
get maximum I/O in your budget (you have basically space for about 8 or 
12 3.5 disks (with generic suppliers) or 16 2.5 sff disks (with HP)).
But you should still be able to have two top-off-the-line x86 cpu's (amd 
opteron 285 or intel woodcrest 5160) and 16GB of memory (even FB Dimm, 
which is pretty expensive).


Best regards,

Arjen van der Meijden


On 8-8-2006 22:43, Kenji Morishige wrote:

I've asked for some help here a few months ago and got some really helpfull
answers regarding RAID controllers and server configuration.  Up until
recently I've been running PostgreSQL on a two year old Dual Xeon 3.06Ghz
machine with a single channel RAID controller (previously Adaptec 2200S, 
but
now changed to LSI MegaRAID). The 2U unit is from a generic vendor using 
what
I believe is a SuperMicro motherboard.  In the last week after upgrading 
the

RAID controller, the machine has had disk failure and some other issues. I
would like to build a very reliable dedicated postgreSQL server that has 
the
ultimate possible performance and reliabily for around $20,000.  The data 
set

size is only currently about 4GB, but is increasing by approximately 50MB
daily.  The server also requires about 500 connections and I have been
monitoring about 100-200 queries per second at the moment.  I am planning 
to

run FreeBSD 6.1 if possible, but I am open to any other suggestions if it
improves performance.

I am considering a setup such as this:
 - At least dual cpu (possibly with 2 cores each)
 - 4GB of RAM
 - 2 disk RAID 1 array for root disk
 - 4 disk RAID 1+0 array for PGDATA
 - 2 disk RAID 1 array for pg_xlog

Does anyone know a vendor that might be able provide such setup?  Any
critique in this design? I'm thinking having a 2 channel RAID controller to
seperate the PGDATA, root and pg_xlog.

Sincerely,
Kenji

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster





---(end of broadcast)---
TIP 5

Re: [PERFORM] most bang for buck with ~ $20,000

2006-08-08 Thread Arjen van der Meijden

With such a budget you should easily be able to get something like:
- A 1U high-performance server (for instance the Dell 1950 with 2x 
Woodcrest 5160, 16GB of FB-Dimm memory, one 5i and one 5e perc raid 
controller and some disks internally)
- An external SAS direct attached disks storage enclosure full with 15k 
rpm 36GB disks (for instance the MD1000, with 15x 36GB 15k disks)


Going for the dell-solution would set you back only (including 
savings) about $13-$14k. HP offers a similar solutions (a HP DL360G5 or 
a DL380G5/DL385 with two MSA50's for instance) which also fit in your 
budget afaik. The other players tend to be (a bit) more expensive, force 
you to go with Fibre Channel or ancient SCSI external storage ;)


If you'd like to have a product by a generic vendor, have a look at the 
Adaptec JS50 SAS Jbod enclosure or Promise's Vtrak 300 (both offer 12 
sas/sata bays in 2U) for storage.


If you're limited to only 2U of rack space, its a bit more difficult to 
get maximum I/O in your budget (you have basically space for about 8 or 
12 3.5 disks (with generic suppliers) or 16 2.5 sff disks (with HP)).
But you should still be able to have two top-off-the-line x86 cpu's (amd 
opteron 285 or intel woodcrest 5160) and 16GB of memory (even FB Dimm, 
which is pretty expensive).


Best regards,

Arjen van der Meijden


On 8-8-2006 22:43, Kenji Morishige wrote:

I've asked for some help here a few months ago and got some really helpfull
answers regarding RAID controllers and server configuration.  Up until
recently I've been running PostgreSQL on a two year old Dual Xeon 3.06Ghz
machine with a single channel RAID controller (previously Adaptec 2200S, but
now changed to LSI MegaRAID). The 2U unit is from a generic vendor using what
I believe is a SuperMicro motherboard.  In the last week after upgrading the
RAID controller, the machine has had disk failure and some other issues. I
would like to build a very reliable dedicated postgreSQL server that has the
ultimate possible performance and reliabily for around $20,000.  The data set
size is only currently about 4GB, but is increasing by approximately 50MB
daily.  The server also requires about 500 connections and I have been
monitoring about 100-200 queries per second at the moment.  I am planning to
run FreeBSD 6.1 if possible, but I am open to any other suggestions if it
improves performance.

I am considering a setup such as this:
  - At least dual cpu (possibly with 2 cores each)
  - 4GB of RAM
  - 2 disk RAID 1 array for root disk
  - 4 disk RAID 1+0 array for PGDATA
  - 2 disk RAID 1 array for pg_xlog

Does anyone know a vendor that might be able provide such setup?  Any
critique in this design? I'm thinking having a 2 channel RAID controller to
seperate the PGDATA, root and pg_xlog.

Sincerely,
Kenji

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] PostgreSQL scalability on Sun UltraSparc T1

2006-08-07 Thread Arjen van der Meijden

Hi Markus,

As said, our environment really was a read-mostly one. So we didn't do 
much inserts/updates and thus spent no time tuning those values and left 
them as default settings.


Best regards,

Arjen

Markus Schaber wrote:

Hi, Arjen,

Arjen van der Meijden wrote:


It was the 8core version with 16GB memory... but actually that's just
overkill, the active portions of the database easily fits in 8GB and a
test on another machine with just 2GB didn't even show that much
improvements when going to 7GB (6x1G, 2x 512M), it was mostly in the
range of 10% improvement or less.


I'd be interested in the commit_siblings and commit_delay settings,
tuning them could give a high increase on throughput for highly
concurrent insert/update workloads, at the cost of latency (and thus
worse results for low concurrency situations).

Different fsync method settings can also make a difference (I presume
that syncing was enabled).

HTH,
Markus




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] PostgreSQL scalability on Sun UltraSparc T1

2006-08-01 Thread Arjen van der Meijden

On 1-8-2006 19:26, Jim C. Nasby wrote:

On Sat, Jul 29, 2006 at 08:43:49AM -0700, Joshua D. Drake wrote:

I'd love to get an english translation that we could use for PR.


Actually, we have an english version of the Socket F follow-up: 
http://tweakers.net/reviews/638 which basically displays the same 
results for Postgres vs MySQL.
Whether and when a translation of the other article will arrive, I don't 
know. Other follow-up stories will follow as well; whether and how soon 
those will be translated, I also don't know. We are actually pretty 
interested in doing so, but it's a lot of work to translate correctly :)


Best regards,

Arjen

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] PostgreSQL scalability on Sun UltraSparc T1

2006-07-31 Thread Arjen van der Meijden

Hi Jignesh,

It was a cvs-checkout of 8.2 devel, compiled using:
CPPFLAGS=-fast -xtarget=ultraT1 -xnolibmopt CC=/opt/SUNWspro/bin/cc 
./configure --without-readline


We'd gotten a specially adjusted Solaris version from Sun Holland for 
the T2000. It was a dvd with a Solaris flar archive from 11 april 2006 
and patches from 25 april 2006. It also had the preferred Solaris System 
settings already applied. If you need more details about that dvd, I 
think your best option is to contact Hans Nijbacker or Bart Muijzer, 
since we're no Solaris-experts :)


Apart from that, we did no extra tuning of the OS, nor did Hans for the 
MySQL-optimizations (afaik, but then again, he knows best).


Best regards,

Arjen van der Meijden

Jignesh Shah wrote:

Hi Arjen,

I am curious about your Sun Studio compiler options also.
Can you send that too ?

Any other tweakings that you did on Solaris?

Thanks.

Regards,
Jignesh


Arjen van der Meijden wrote:

On 29-7-2006 19:01, Joshua D. Drake wrote:
Well I would be curious about the postgresql.conf and how much ram 
etc... it had.


It was the 8core version with 16GB memory... but actually that's just 
overkill, the active portions of the database easily fits in 8GB and a 
test on another machine with just 2GB didn't even show that much 
improvements when going to 7GB (6x1G, 2x 512M), it was mostly in the 
range of 10% improvement or less.


Anyway, the differences to the default postgresql.conf:
shared_buffers = 3
Tests with 40k, 50k en 60k didn't really show improvements.

work_mem = 2048
This probably could've been set higher with the sheer amount of 
not-really-used memory.


maintenance_work_mem = 65535
Not really important of course

max_fsm_pages = 5
Somehow it needed to be set quite high, probably because we only 
cleaned up after doing over 200k requests.


effective_cache_size = 35
As said, the database fitted in 8GB of memory, so I didn't see a need 
to set this higher than for the 8GB machines (x4200 and another T2000 
we had).


default_statistics_target = 200
For a few columns on the largest tables I manually raised it to 1000

log_min_duration_statement = 1000
I'm not sure if this has much overhead? Stats logging was turned/left 
on as well.

Turning that off improved it a few percent.

I understand, I just have a feeling that we could do even better :) I 
do appreciate all your efforts.


Well, I'll keep that in mind :)
What it makes even worse for MySQL is that it had (on another machine) 
about 8M hits on the query cache for 4M inserts, i.e. half of the 
queries weren't even executed on it.


Best regards,

Arjen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Performances with new Intel Core* processors

2006-07-31 Thread Arjen van der Meijden

On 31-7-2006 17:52, Merlin Moncure wrote:

On 7/31/06, Jonathan Ballet [EMAIL PROTECTED] wrote:

Hello,

I've read a lot of mails here saying how good is the Opteron with 
PostgreSQL,

and a lot of people seems to recommend it (instead of Xeon).


I am a huge fan of the opteron but intel certainly seems to have a
winner for workstations. from my research on a per core basis the c2d
is a stronger chip with the 4mb cache version but it is unclear which
is a better choice for pg on 4 and 8 core platforms.  I have direct
personal experience with pg on dual (4 core) and quad (8 core) opteron
and the performance is fantastic, especially on 64 bit o/s with  2gb
memory (vs 32 bit xeon).


As far as I know there is no support for more than two Woodcrest
processors (the Core 2 version of the Xeon) in a system. So when you're
using a scalable application (like postgresql) and need more than four
cores, Opteron is still the only option in the x86 world.


The Woodcrest however is faster than a comparably priced Opteron running
PostgreSQL. In a benchmark we did (and have yet to publish) a Woodcrest
system outperforms a comparable Sun Fire x4200. And even if you'd adjust
it to a clock-by-clock comparison, Woodcrest would still beat the
Opteron. If you'd adjust it to a price/performance comparison (I
configured an HP DL380 G5 system on their website, similar to what we
tested), the x4200 would lose as well. Mind you, an Opteron 280
2.4GHz or 285 2.6GHz costs more than a Woodcrest 5150 2.66GHz or 5160
3GHz (respectively), but the FB-DIMM memory for the Xeons is more expensive
than the DDR or DDR2 ECC REG memory you need in an Opteron.



also opteron is 64 bit and mature so i think is a better choice for
server platform at the moment, especially for databases.  my mind
could be changed but it is too soon right now.  consider how long it
took for the opteron to prove itself in the server world.


Intel Woodcrest can do 64-bit as well, as can all recent Xeons. Whether
Opteron does a better job at 64-bit than a Xeon, I don't know (our test
was in 64-bit though). I have not seen our 64-bit Xeon production
servers be any less stable than our 64-bit Opteron servers.
For a database system, however, processors are hardly ever the main
bottleneck, are they? So you should probably go for a set of fast
processors from your favorite supplier and focus mainly on lots of
memory and fast disks. Whether that means Opterons or Xeon Woodcrests
(no other Xeons are up to that competition, imho) doesn't really matter.


We'll be publishing the article in the near future, and I'll give a 
pointer to it (even though it will be in Dutch, you can still read the 
graphs).


Best regards,

Arjen van der Meijden
Tweakers.net

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] PostgreSQL scalability on Sun UltraSparc T1

2006-07-29 Thread Arjen van der Meijden

On 29-7-2006 17:43, Joshua D. Drake wrote:


I would love to get my hands on that postgresql version and see how much 
farther it could be optimized.


You probably mean the entire installation? As said in my reply to
Jochem, I've spent a few days testing all queries to improve their
performance. I'm not sure what kind of improvements that yielded, but if
I remember correctly it's in the order of 3-5 times for the entire
benchmark, compared to the initial MySQL layout and queries.


If you mean the configuration and which version it was, I can look that 
up for you if you'd like. Including the compilation switches used on the 
T2000.


If we get to keep the machine (which we're going to try, although with
its performance being worse than the x4200's that's a bit doubtful), I'm
sure we can work something out.
Then again, we regularly have other server hardware on which the same
database is used, so even without the T2000 we could still put in some
effort to further improve postgresql's performance.
It might be interesting to have some Postgres experts do some more
tuning and allow MySQL AB to do the same... But I'm not sure whether we're
willing to spend that much extra time on a benchmark (just testing one
database costs us about a day and a half...)


Best regards,

Arjen

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL

2006-06-22 Thread Arjen van der Meijden

On 22-6-2006 15:03, David Roussel wrote:
Surely the 'perfect' line ought to be linear? If the performance was
perfectly linear, then the 'pages generated' ought to be G times the
number of (virtual) processors, where G is the gradient of the graph. In
such a case the graph will go through the origin (0,0), but your graph
does not show this.


I'm a bit confused, what is the 'perfect' line supposed to be?


First of all, this graph has no origin. It's a bit difficult to test with
less than one CPU.


Anyway, the line actually is linear and would've gone through the
origin, if there was one. What I did was take the level of the
'max' line at 1 core and then multiply it by 2, 4, 6 and 8. So if at 1 core
the level would've been 22000, at 2 it would be 44000 and at 8 it would be 176000.


Please do notice the distance between 1 and 2 on the x-axis is the same 
as between 2 and 4, which makes the graph a bit harder to read.


Best regards,

Arjen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL

2006-06-18 Thread Arjen van der Meijden

On 17-6-2006 1:24, Josh Berkus wrote:

Arjen,


I can already confirm very good scalability (with our workload) on
postgresql on that machine. We've been testing a 32thread/16G-version
and it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores
(with all four threads enabled).


Keen.   We're trying to keep the linear scaling going up to 32 cores of 
course (which doesn't happen, presently).  Would you be interested in 
helping us troubleshoot some of the performance issues?


You can ask your questions; if I happen to know the answer, you're a
step further in the right direction.


But actually, I didn't do much to get this scalability... So I won't be
of much help to you; it's not that I spent hours on getting this performance.
I just started out with the normal attempts to get a good config.
Currently shared_buffers is set to 30k. Larger settings didn't seem
to differ much on our previous 4-core version, so I didn't even check it
out on this one. I noticed I forgot to set the effective cache size to
more than 6G for this one too, but since our database is smaller than
that, it shouldn't make any difference. work_mem was increased
a bit, to 2K. So there are no magic tricks here.
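
(For reference, in postgresql.conf terms that amounts to roughly the sketch
below. The effective_cache_size number is nothing more than my own conversion
of 6GB into 8KB pages; the other values are the ones mentioned above.)

  # Approximate, illustrative settings only.
  shared_buffers = 30000          # 8KB pages, roughly 234MB
  work_mem = 2048                 # KB
  effective_cache_size = 786432   # 8KB pages, roughly 6GB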


I do have to add it's a recent checkout of 8.2devel compiled using Sun
Studio 11. It was compiled using this as CPPFLAGS: -xtarget=ultraT1
-fast -xnolibmopt


The -xnolibmopt was added because we couldn't figure out why it yielded 
several linking errors at the end of the compilation when the -xlibmopt 
from -fast was enabled, so we disabled that particular setting from the 
-fast macro.
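
(Spelled out, the build was along these lines; the checkout directory is just
an example, the flags and compiler path are the ones mentioned above and in my
earlier mail.)

  # Sketch of the 8.2devel build with Sun Studio 11; paths are examples only.
  export CC=/opt/SUNWspro/bin/cc
  export CPPFLAGS="-xtarget=ultraT1 -fast -xnolibmopt"
  cd ~/pgsql-head            # cvs checkout of 8.2devel
  ./configure --without-readline
  gmake && gmake install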



The workload generated is an abstraction and simplification of our
website's workload, used for benchmarking. It's basically a news and
price comparison site and it runs on LAMP (with the M of MySQL), i.e. a
lot of light queries, many primary-key or indexed foreign-key lookups
for small numbers of records. Some aggregations for summaries, etc.
There are few writes, and hardly any on the most-read tables.
The database easily fits in memory; the total size of the actively read
tables is about 3G.
This PostgreSQL version is not a direct copy of the queries and tables;
I made an effort to make it as PostgreSQL-minded as possible. I.e., I
combined a few queries, changed boolean enums in MySQL to real booleans
in Postgres, added specific indexes (including partial ones), etc.
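
(A made-up example of the kind of rewrite meant here, with hypothetical table
and column names, would be a partial index covering only the rows the site
actually reads:)

  # Hypothetical names; 'is_published' is a real boolean rather than a MySQL enum.
  psql -d benchdb -c "CREATE INDEX news_published_pubdate_idx ON news (pub_date) WHERE is_published;"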


We use apache+php as clients and just run X apache processes at the same
time using 'ab' to generate various levels of concurrent workload.
Solaris scales really well to higher concurrencies and PostgreSQL
doesn't seem to have problems with it either in our workload.
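
(Each load step per client machine is not much more than the sketch below;
the URL and the numbers are illustrative.)

  # 25 concurrent requests from this client box for 600 seconds, hypothetical URL.
  ab -c 25 -t 600 http://frontend.test/page.php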


So it's not really a real-life scenario, but it's not a synthetic
benchmark either.


Here is a graph of our performance measured on PostgreSQL:
http://achelois.tweakers.net/~acm/pgsql-t2000/T2000-schaling-postgresql.png

What you see are three lines. Each represents the total number of page
views processed in 600 seconds for a specific number of Niagara cores
(i.e. 1, 2, 4, 6 and 8). Each core had all its threads enabled, so it's
actually 4, 8, 16, 24 and 32 virtual CPUs you're looking at.
The Max line displays the maximum generated page views for a specific
core count at any concurrency, respectively: 5, 13, 35, 45 and 60.
The 'Bij 50' line (Dutch for 'at 50') is the number of page views generated
with 50 apache processes working at the same time (on two dual Xeon machines,
so 25 each). I took 50 a bit arbitrarily, but all core configs seemed to do
pretty well under that workload.


The perfect line is based on the Max value for 1 core, then just
multiplied by the number of cores to give a linear reference. The 'Bij
50' line and the perfect line don't differ much in color, but the
top one is the perfect line.


In the near future we'll be presenting an article on this on our
website; although that will be in Dutch, the graphs should still be easy
to read for you guys.
And because of that I can't promise too much detailed information until
then.


I hope I clarified things a bit now; if not, ask me about it.
Best regards,

Arjen

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL community

2006-06-16 Thread Arjen van der Meijden

On 16-6-2006 17:18, Robert Lor wrote:


I think this system is well suited for PG scalability testing, among 
others. We did an informal test using an internal OLTP benchmark and 
noticed that PG can scale to around 8 CPUs. Would be really cool if all 
32 virtual CPUs can be utilized!!!


I can already confirm very good scalability (with our workload) on 
postgresql on that machine. We've been testing a 32thread/16G-version 
and it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores 
(with all four threads enabled).


The threads are a bit less scalable, but still pretty good. Going from 1
to 2 or 4 threads per core yields respectively 60% and 130% extra performance.


Best regards,

Arjen

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[PERFORM] Pgsql (and mysql) benchmark on T2000/Solaris and some profiling

2006-05-16 Thread Arjen van der Meijden

Hi List,

In the past few weeks we have been developing a read-heavy
mysql benchmark to have an alternative take on CPU/platform performance;
not really to look at how fast mysql itself can be.


This benchmark runs on mysql 4.1.x, 5.0.x and 5.1.x and is modelled
after our website's production database; the load generated on it is
modelled after a simplified version of our visitor behaviour.


Long story short, we think the test is a nice example of the relatively
lightweight, read-heavy web applications out there and therefore decided
to have a go at postgresql as well.
Of course the queries and indexes have been adjusted to (to the best of
our knowledge) best suit postgresql, while maintaining the same output to
the application/interface layer. While the initial structure only got
postgresql to about half the performance of mysql 4.1.x, the current
version of our postgresql benchmark has quite similar results to mysql
4.1.x, but both are quite a bit slower than 5.0.x (I think it's about
30-40% faster).


Since the results from those benchmarks are not yet public (they will be
put together in a story at our website), I won't go into too much
detail about this benchmark.


Currently we're having a look at a Sun T2000 and will be looking at
other machines as well in the future. We are running the sun-release of
postgresql 8.1.3 on that T2000 now, but are looking at compiling the
cvs-head version (for its index-root-cache) somewhere this week.


My guess is there are a few people on this list who are interested in
some dtrace results taken during our benchmarks on that T2000.
Although my knowledge of both Solaris and DTrace is very limited, I
already took some samples of the system and user calls. I used Jignesh
Shah's scripts for that:
http://blogs.sun.com/roller/page/jkshah?entry=profiling_postgresql_using_dtrace_on


You can find the samples here:
http://achelois.tweakers.net/~acm/pgsql-t2000/syscall.log
http://achelois.tweakers.net/~acm/pgsql-t2000/usrcall.log

And I also did the memcpy-scripts, here:
http://achelois.tweakers.net/~acm/pgsql-t2000/memcpysize.log
http://achelois.tweakers.net/~acm/pgsql-t2000/memcpystack.log
(this last log is 3.5MB)

If anyone is interested in some more dtrace results, let me know (and 
tell me what commands to run ;-) ).
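
(If you just want a quick taste without the full scripts, even a generic
one-liner like the sketch below already says a lot; this is not one of
Jignesh's scripts, just plain dtrace.)

  # Count system calls made by the postgres backends during a run.
  dtrace -n 'syscall:::entry /execname == "postgres"/ { @[probefunc] = count(); }'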


Best regards,

Arjen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Pgsql (and mysql) benchmark on T2000/Solaris and some

2006-05-16 Thread Arjen van der Meijden

Qingqing Zhou wrote:

Arjen van der Meijden [EMAIL PROTECTED] wrote
Some sort of web query behavior is quite optimized in MySQL. For example,
the query below is running very fast due to the query result cache
implementation in MySQL.

Loop N times
SELECT * FROM A WHERE i = 1;
End loop.


Yeah, I know. But our queries get random parameters for
identifiers and the like, so it's not just a few queries being executed
a lot of times. In a run for which I just logged all queries,
almost 42k distinct queries were executed out of 128k in total (it may
actually be more random than real life).
Besides that, the queries are not as extremely simple as your example.
Most join at least two tables, while the rest often join three to five.
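
(Such distinct-versus-total numbers can be pulled out of a statement log with
something like the sketch below, assuming log_statement = 'all' and a log
format that keeps each statement on one line; the file name is an example.)

  # Distinct statements, then the total number of logged statements.
  grep 'LOG:  statement:' postgresql.log | sed 's/.*statement: //' | sort -u | wc -l
  grep -c 'LOG:  statement:' postgresql.log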


But I agree, MySQL has a big advantage with its query result cache. That 
makes the current performance of postgresql even more impressive in this 
situation, since the query cache of the 4.1.x run was enabled as well.
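
(For anyone wanting to check this on their own setup: the cache behaviour can
be read from MySQL's standard status counters, roughly as below. Queries
answered from the cache bump Qcache_hits and don't show up in Com_select.)

  # Query cache counters versus normally executed SELECTs.
  mysql -e "SHOW STATUS LIKE 'Qcache%'; SHOW STATUS LIKE 'Com_select';"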



IMHO, without knowing the exact queries you sent, these logs are not very
useful :-(. I would suggest you compare the queries in pair and then post
their dtrace/timing results here (just like the previous Firebird vs.
PostgreSQL comparison did).


Well, I'm bound to some privacy and copyright laws, but I'll see if I
can show some example plans of at least the top few queries later today
(the top two are respectively 27% and 21% of the total time).
But those top queries aren't the only ones run during the benchmarks or
in the production environment, nor are they run exclusively at any given
time. So the overall load picture should be useful too, shouldn't it?


Best regards,

Arjen

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [PERFORM] Pgsql (and mysql) benchmark on T2000/Solaris and some

2006-05-16 Thread Arjen van der Meijden

Hi Jignesh,

Jignesh K. Shah wrote:

Hi Arjen,

Looking at your outputs...of syscall and usrcall it looks like

* Spending too much time in semsys  which means you have too many 
connections and they are contending to get a lock.. which is potentially 
the WAL log lock


* llseek is high which means you can obviously gain a bit with the right 
file system/files tuning by caching them right.


Have you set the values for Solaris for T2000 tuned for Postgresql?


Not particularly; we got a special T2000 Solaris dvd from your
colleagues here in the Netherlands and installed that (actually one of
your colleagues did). That way, the better default /etc/system settings
are supposed to be set already. I haven't really checked that they are,
since two of your colleagues have been working on it for the mysql
version of the benchmark and I assumed they'd have verified that.



Check out the tunables from the following URL

http://www.sun.com/servers/coolthreads/tnb/applications_postgresql.jsp

Try specially the /etc/system and postgresql.conf changes  and see if it 
changes/improves your performance.


I will see that those tunables are verified to be set.

I am a bit surprised though about your remarks, since they'd point at
the I/O being in the way? But we only have about 600k/sec I/O according
to vmstat. The database easily fits in memory.
In total I logged about 500k queries of which only 70k were altering
queries, and of those almost all were inserts into log tables which aren't
actively read in this benchmark.
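
(Verifying will mostly be a matter of comparing the 'set' lines in /etc/system
against that page and keeping an eye on the I/O while the benchmark runs; a
rough sketch, with nothing system-specific in it:)

  # Active tunables in /etc/system (comment lines start with '*').
  grep '^set ' /etc/system
  # Disk and memory activity during a run.
  vmstat 5
  iostat -xnz 5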


But I'll give it a try.

Best regards,

Arjen




---(end of broadcast)---
TIP 6: explain analyze is your friend




