Re: [PERFORM] Having I/O problems in simple virtualized environment

2012-01-31 Thread Jose Ildefonso Camargo Tolosa
On Mon, Jan 30, 2012 at 3:11 AM, Ron Arts ron.a...@gmail.com wrote:
 On 30-01-12 02:52, Jose Ildefonso Camargo Tolosa wrote:
 On Sun, Jan 29, 2012 at 6:18 PM, Ron Arts ron.a...@gmail.com wrote:
 Hi list,

 I am running PostgreSQL 8.1 (CentOS 5.7) on a VM on a single XCP 
 (Xenserver) host.
 This is a HP server with 8GB, Dual Quad Core, and 2 SATA in RAID-1.

 The problem is: it's running very slow compared to running it on bare
 metal, and the VM is starving for I/O bandwidth, so other processes slow
 to a crawl. This does not happen on bare metal.

 I had to replace the server with a bare-metal one, since I could not
 troubleshoot in production. Also, it was hard to emulate the workload for
 that VM in a test environment, so I concentrated on PostgreSQL and why it
 apparently generated so much I/O.

 Before I start I should confess having only spotty experience with Xen and 
 PostgreSQL
 performance testing.

 I set up a test Xen server, created a CentOS 5.7 VM with out-of-the-box
 PostgreSQL, and ran:
 pgbench -i  pgbench ; time pgbench -t 10 pgbench
 This ran for 3:28. Then I replaced the SATA HD with an SSD disk and reran
 the test. It ran for 2:46. This seemed strange, as I expected the run to
 finish much faster.

 I reran the first test on the SATA, and looked at CPU and I/O use. The CPU
 was not heavily used in either the VM (30%) or dom0 (10%). The I/O use was
 not high either, around 8MB/sec in the VM. (Couldn't use iotop in dom0,
 because of missing kernel support in XCP 1.1.)

 I reran the second test on the SSD, and saw almost the same CPU and I/O
 load.

 (I probably need to run the same test on bare metal now, but I didn't get
 to that yet; all this has already ruined my weekend.)

 Now that I've come this far, can anybody give me some pointers? Why doesn't
 pgbench saturate either the CPU or the I/O? Why does using an SSD change
 the performance only this much?

 OK, one question: which I/O scheduler are you using (on dom0 and on the VM)?

 Ok, first dom0:

 For the SSD (hda):
 # cat /sys/block/sda/queue/scheduler
 [noop] anticipatory deadline cfq

Use deadline.


 For the SATA:
 # cat /sys/block/sdb/queue/scheduler
 noop anticipatory deadline [cfq]

Use deadline too (this is especially true if sdb is a RAID array).
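
For reference, a minimal sketch of switching those devices to deadline at
runtime (device names match the ones shown above; adjust to your own setup):

# echo deadline > /sys/block/sda/queue/scheduler
# echo deadline > /sys/block/sdb/queue/scheduler
# cat /sys/block/sdb/queue/scheduler
noop anticipatory [deadline] cfq

To make it the default at boot, add elevator=deadline to the dom0 kernel
parameters in your boot loader configuration (or set it per device from an
init script).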


 Then in the VM:

 # cat /sys/block/xvda/queue/scheduler
 [noop] anticipatory deadline cfq

Should be ok for the VM.



Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.

2012-01-29 Thread Jose Ildefonso Camargo Tolosa
Greetings,

On Sat, Jan 28, 2012 at 12:51 PM, Jayashankar K B
jayashankar...@lnties.com wrote:
 Hi,

 I downloaded the source code and cross compiled it into a relocatable package 
 and copied it to the device.
 LTIB was the cross-compile tool chain that was used. The controller is a
 ColdFire MCF54418 CPU.
 Here are the configure options I used.

Ok, no floating point, and just ~250MHz... small.  Anyway, let's not
talk about hardware options, because you already have the hardware.

About the kernel: I'm not sure whether this arch gives you the option, but
did you enable the PREEMPT kernel config option (in menuconfig:
Preemptible Kernel (Low-Latency Desktop))? Or is it an RT kernel?

With such a small CPU, almost any DB engine you put there will be
CPU-hungry. If your CPU usage is under 95%, you know you still have some
CPU to spare; on the other hand, if you are at 100% CPU, you have to
evaluate the required response times and set priorities accordingly.
However, I have found that even with nice level 19 processes using 100%
CPU, other nice level 0 processes will slow down unless I enable the
PREEMPT option in the kernel compile options (the other issue is I/O wait
time, which, at least in my application that uses CF, can get quite high).
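
For illustration, a small sketch of the user-space side of this (the batch
job name is made up; nice and ionice are standard tools, and the idle I/O
class only has an effect under the CFQ scheduler):

# run a non-critical batch job at the lowest CPU priority
nice -n 19 ./nightly_export.sh &

# and push its I/O priority down to the idle class as well
ionice -c3 -p $!

# check the nice level of the running PostgreSQL backends
ps -o pid,ni,args -C postgres

(On older releases the server process may be named postmaster rather than
postgres.)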

Sincerely,

Ildefonso Camargo



Re: [PERFORM] Having I/O problems in simple virtualized environment

2012-01-29 Thread Jose Ildefonso Camargo Tolosa
On Sun, Jan 29, 2012 at 6:18 PM, Ron Arts ron.a...@gmail.com wrote:
 Hi list,

 I am running PostgreSQL 8.1 (CentOS 5.7) on a VM on a single XCP (Xenserver) 
 host.
 This is a HP server with 8GB, Dual Quad Core, and 2 SATA in RAID-1.

 The problem is: it's running very slow compared to running it on bare
 metal, and the VM is starving for I/O bandwidth, so other processes slow
 to a crawl. This does not happen on bare metal.

 I had to replace the server with a bare-metal one, since I could not
 troubleshoot in production. Also, it was hard to emulate the workload for
 that VM in a test environment, so I concentrated on PostgreSQL and why it
 apparently generated so much I/O.

 Before I start I should confess having only spotty experience with Xen and 
 PostgreSQL
 performance testing.

 I set up a test Xen server, created a CentOS 5.7 VM with out-of-the-box
 PostgreSQL, and ran:
 pgbench -i  pgbench ; time pgbench -t 10 pgbench
 This ran for 3:28. Then I replaced the SATA HD with an SSD disk and reran
 the test. It ran for 2:46. This seemed strange, as I expected the run to
 finish much faster.

 I reran the first test on the SATA, and looked at CPU and I/O use. The CPU
 was not heavily used in either the VM (30%) or dom0 (10%). The I/O use was
 not high either, around 8MB/sec in the VM. (Couldn't use iotop in dom0,
 because of missing kernel support in XCP 1.1.)

 I reran the second test on the SSD, and saw almost the same CPU and I/O
 load.

 (I probably need to run the same test on bare metal, but I didn't get
 to that yet; all this has already ruined my weekend.)

 Now that I've come this far, can anybody give me some pointers? Why doesn't
 pgbench saturate either the CPU or the I/O? Why does using an SSD change
 the performance only this much?

OK, one question: which I/O scheduler are you using (on dom0 and on the VM)?



Re: [PERFORM] Large rows number, and large objects

2011-07-21 Thread Jose Ildefonso Camargo Tolosa
On Wed, Jul 20, 2011 at 3:03 PM, Andrzej Nakonieczny
dzemik-pgsql-performa...@e-list.pingwin.eu.org wrote:
 On 20.07.2011 17:57, Jose Ildefonso Camargo Tolosa wrote:

 [...]

    Many of the advantages of partitioning have to do with maintenance
    tasks.  For example, if you gather data on a daily basis, it's faster
    to drop the partition that contains Thursday's data than it is to do a
    DELETE that finds the rows and deletes them one at a time.  And VACUUM
    can be a problem on very large tables as well, because only one VACUUM
    can run on a table at any given time.  If the frequency with which the
    table needs to be vacuumed is less than the time it takes for VACUUM
    to complete, then you've got a problem.


 And the pg_largeobject table doesn't get vacuumed? I mean, isn't that
 table just like any other table?

 Vacuum is a real problem on a big pg_largeobject table. I have a 1.6 TB
 database, mostly large objects, and vacuuming that table on a fast SAN
 takes about 4 hours:

          now          |        start        |   time   | datname |                current_query
 ---------------------+---------------------+----------+---------+----------------------------------------------
  2011-07-20 20:12:03 | 2011-07-20 16:21:20 | 03:50:43 | bigdb   | autovacuum: VACUUM pg_catalog.pg_largeobject
 (1 row)


 LO generates a lot of dead tuples when objects are added:

      relname     | n_dead_tup
 -----------------+------------
  pg_largeobject  |     246731

 Adding LOs is very fast when the table is vacuumed, but when there are a
 lot of dead tuples, adding an LO is very slow (50-100 times slower) and
 eats 100% of the CPU.

 It looks like a better way is to write objects directly as bytea in
 partitioned tables, although it's a bit slower than the LO interface on a
 vacuumed table.
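
As an aside, a minimal sketch of how to check this yourself
(pg_stat_all_tables covers system catalogs too; n_dead_tup and
last_autovacuum are available in recent releases, and a manual VACUUM of
pg_largeobject needs superuser rights):

-- dead-tuple count and last vacuum times for the large object table
SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_all_tables
 WHERE relname = 'pg_largeobject';

-- force a manual cleanup instead of waiting for autovacuum
VACUUM VERBOSE pg_largeobject;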

Well... yes, I thought about that, but then, what happens when you need to
fetch the file from the DB? Will it be fetched completely at once? I'm
thinking about large files here: say (hypothetically speaking) you have 1GB
files stored. If the system fetches the whole 1GB at once, it would take
1GB of RAM (or not?), and that's what I wanted to avoid by dividing the
file into 2kB chunks (bytea chunks, actually). I don't quite remember where
I got the 2kB size from... but I also decided I wanted to avoid using TOAST.
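
For what it's worth, a minimal sketch of pulling a file back in pieces
rather than in one go, assuming the files/files_chunks layout described
elsewhere in this thread (the file id is hypothetical; a cursor lets the
client fetch a few chunks at a time, so the server never has to materialize
the whole 1GB for one request, though how much the client buffers is up to
the driver):

BEGIN;

DECLARE file_cur CURSOR FOR
    SELECT chunk_number, data
      FROM files_chunks
     WHERE file_id = 42              -- hypothetical file id
     ORDER BY chunk_number;

FETCH 100 FROM file_cur;             -- ~200kB per round trip with 2kB chunks
FETCH 100 FROM file_cur;             -- repeat until no rows are returned

CLOSE file_cur;
COMMIT;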



 Regards,
 Andrzej




Re: [PERFORM] Large rows number, and large objects

2011-07-20 Thread Jose Ildefonso Camargo Tolosa
On Tue, Jul 19, 2011 at 3:57 PM, Robert Haas robertmh...@gmail.com wrote:

 On Sun, Jun 19, 2011 at 10:19 PM, Jose Ildefonso Camargo Tolosa
 ildefonso.cama...@gmail.com wrote:
  So, the question is: if I were to store 8TB worth of data in the large
  objects system, it would actually make the pg_largeobject table slow,
  unless it was automatically partitioned.

 I think it's a bit of an oversimplification to say that large,
 unpartitioned tables are automatically going to be slow.  Suppose you
 had 100 tables that were each 80GB instead of one table that is 8TB.
 The index lookups would be a bit faster on the smaller tables, but it
 would take you some non-zero amount of time to figure out which index
 to read in the first place.  It's not clear that you are really
 gaining all that much.


Certainly, but it is still very blurry to me *when* it is better to
partition and when not.



 Many of the advantages of partitioning have to do with maintenance
 tasks.  For example, if you gather data on a daily basis, it's faster
 to drop the partition that contains Thursday's data than it is to do a
 DELETE that finds the rows and deletes them one at a time.  And VACUUM
 can be a problem on very large tables as well, because only one VACUUM
 can run on a table at any given time.  If the frequency with which the
 table needs to be vacuumed is less than the time it takes for VACUUM
 to complete, then you've got a problem.
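
For concreteness, a minimal sketch of that maintenance difference using the
inheritance-style partitioning available in these versions (table and
column names are invented for the example):

-- daily partitions hanging off an (empty) parent table
CREATE TABLE measurements (logdate date NOT NULL, payload bytea);

CREATE TABLE measurements_2011_07_21 (
    CHECK (logdate = DATE '2011-07-21')
) INHERITS (measurements);

-- retiring Thursday's data: drop the child, no dead tuples, nothing to vacuum
DROP TABLE measurements_2011_07_21;

-- the unpartitioned alternative: delete row by row, then VACUUM away the bloat
DELETE FROM measurements WHERE logdate = DATE '2011-07-21';
VACUUM measurements;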


And the pg_largeobject table doesn't get vacuumed? I mean, isn't that table
just like any other table?



 But I think that if we want to optimize pg_largeobject, we'd probably
 gain a lot more by switching to a different storage format than we
 could ever gain by partitioning the table.  For example, we might
 decide that any object larger than 16MB should be stored in its own
 file.  Even somewhat smaller objects would likely benefit from being
 stored in larger chunks - say, a bunch of 64kB chunks, with any
 overage stored in the 2kB chunks we use now.  While this might be an
 interesting project, it's probably not going to be anyone's top
 priority, because it would be a lot of work for the amount of benefit
 you'd get.  There's an easy workaround: store the files in the
 filesystem, and a path to those files in the database.


OK, one reason for storing a file *in* the DB is to be able to do PITR of
wrongly deleted (or overwritten, and that kind of stuff) files; at the
filesystem level you would need a versioning filesystem (and I don't yet
know of any that is stable in the Linux world).

Also, you can use streaming replication: at the same time your data is
streamed, your files are also streamed to a secondary server (yes, at the
FS level you could use DRBD or similar).

Ildefonso.


Re: [PERFORM] [GENERAL] DELETE taking too much memory

2011-07-08 Thread Jose Ildefonso Camargo Tolosa
On Fri, Jul 8, 2011 at 4:35 AM, Dean Rasheed dean.a.rash...@gmail.com wrote:

  On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
  Hi,
 
  I have a delete query taking 7.2G of RAM (and counting) but I do not
  understand why so much memory is necessary. The server has 12G, and
  I'm afraid it'll go into swap. Using postgres 8.3.14.
 
  I'm purging some old data from table t1, which should cascade-delete
  referencing rows in t2. Here's an anonymized rundown:
 
  # explain delete from t1 where t1id in (select t1id from t2 where
  foo=0 and bar < '20101101');

 It looks as though you're hitting one of the known issues with
 PostgreSQL and FKs. The FK constraint checks and CASCADE actions are
 implemented using AFTER triggers, which are queued up during the query
 to be executed at the end. For very large queries, this queue of
 pending triggers can become very large, using up all available memory.

 There's a TODO item to try to fix this for a future version of
 PostgreSQL (maybe I'll have another go at it for 9.2), but at the
 moment all versions of PostgreSQL suffer from this problem.

 The simplest work-around for you might be to break your deletes up
 into smaller chunks, say 100k or 1M rows at a time, eg:

 delete from t1 where t1id in (select t1id from t2 where foo=0 and bar
  < '20101101' limit 10);
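
A rough sketch of wrapping that in a loop (the function and parameter names
are made up; each DELETE fires its FK triggers at the end of that statement,
so the pending-trigger queue should stay bounded by the batch size — running
the batches as separate transactions from a client script gives the same
effect plus the ability to stop part-way):

CREATE OR REPLACE FUNCTION purge_old_t1(batch_size integer) RETURNS bigint AS $$
DECLARE
    n     bigint;
    total bigint := 0;
BEGIN
    LOOP
        -- delete one batch of old rows, cascading to t2 as before
        DELETE FROM t1
         WHERE t1id IN (SELECT t1id FROM t2
                         WHERE foo = 0 AND bar < '20101101'
                         LIMIT batch_size);
        GET DIAGNOSTICS n = ROW_COUNT;
        total := total + n;
        EXIT WHEN n = 0;
    END LOOP;
    RETURN total;
END;
$$ LANGUAGE plpgsql;

SELECT purge_old_t1(100000);   -- batch size per the 100k suggestion above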


I'd like to comment here that I had serious performance issues with a
similar query (the planner did horrible things); not sure if the planner
will do the same dumb thing it did for me, since my query was against the
same table (i.e., t1=t2). I had this query:

delete from t1 where ctid in (select ctid from t1 where
created_at < '20101231' limit 1);   --- this was slow.  Changed to:

delete from t1 where ctid = any(array(select ctid from t1 where
created_at < '20101231' limit 1));   --- a lot faster.

So... will the same principle work here? Doing this:

delete from t1 where t1id = any(array(select t1id from t2 where foo=0 and
bar < '20101101' limit 10));  -- would this query be faster than the
original one?




 Regards,
 Dean




Re: [PERFORM] Large rows number, and large objects

2011-06-19 Thread Jose Ildefonso Camargo Tolosa
Hi!

Thanks (you both, Samuel and Craig) for your answers!

On Sun, Jun 19, 2011 at 11:19 AM, Craig James
craig_ja...@emolecules.com wrote:
 On 6/19/11 4:37 AM, Samuel Gendler wrote:

 On Sat, Jun 18, 2011 at 9:06 PM, Jose Ildefonso Camargo Tolosa
 ildefonso.cama...@gmail.com wrote:

 Greetings,

 I have been thinking a lot about pgsql performance when it is dealing
 with tables with lots of rows in one table (several million, maybe
 thousands of millions).  Say, the Large Object use case:

 one table references large objects (it has a pointer to each object).
 The large object table stores the large objects in 2000-byte chunks
 (IIRC), so if we have something like 1TB of data stored in large
 objects, the large object table would have something like 550M rows;
 if we get to 8TB, we will have 4400M rows (or so).

 I have read in several places that huge tables should be partitioned to
 improve performance... now, my first question: does the large objects
 system automatically partition itself? If not, will large objects
 performance degrade as we add more data? (I guess it would.)

 You should consider partitioning your data in a different way: Separate
 the relational/searchable data from the bulk data that is merely being
 stored.

 Relational databases are just that: relational.  The thing they do well is
 to store relationships between various objects, and they are very good at
 finding objects using relational queries and logical operators.

 But when it comes to storing bulk data, a relational database is no better
 than a file system.

 In our system, each object is represented by a big text object of a few
 kilobytes.  Searching that text file is essentially useless -- the only reason
 it's there is for visualization and to pass on to other applications.  So
 it's separated out into its own table, which only has the text record and a
 primary key.

Well, my original schema does exactly that (I mimic the LO schema):

files (searchable): id, name, size, hash, mime_type, number_chunks
files_chunks : id, file_id, hash, chunk_number, data (bytea)

So, my bulk data is in the files_chunks table, but because the data is
restricted (by me) to 2000 bytes per chunk, the total number of rows in the
files_chunks table can get *huge*.

So, the system would search the files table, and then search the
files_chunks table (to get each of the chunks and, maybe, send them
out to the web client).

So, with a prospect of ~4500M rows for that table, I really thought it
could be a good idea to partition the files_chunks table.  Because I'm
thinking of relatively small files (100MB), table partitioning should
work great here, since I could arrange for all of the chunks of a file
to be contained in the same partition.  Now, even if the system were to
get larger files (5GB), this approach should still work.
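
A minimal sketch of what that could look like with inheritance
partitioning, routing chunks by ranges of file_id so all chunks of one file
share a partition (names, column types and range boundaries here are
hypothetical):

CREATE TABLE files_chunks (
    id           bigserial,
    file_id      bigint  NOT NULL,
    chunk_number integer NOT NULL,
    hash         text,
    data         bytea
);

-- one child table per range of file ids
CREATE TABLE files_chunks_p0 (
    CHECK (file_id >= 0 AND file_id < 1000000)
) INHERITS (files_chunks);

CREATE INDEX files_chunks_p0_idx ON files_chunks_p0 (file_id, chunk_number);

-- with constraint_exclusion enabled, this only scans the matching child
SET constraint_exclusion = on;
SELECT chunk_number, data
  FROM files_chunks
 WHERE file_id = 123
 ORDER BY chunk_number;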

The original question was about Large Objects, and partitioning...
see, according to the documentation:
http://www.postgresql.org/docs/9.0/static/lo-intro.html

All large objects are placed in a single system table called pg_largeobject.

So, the question is: if I were to store 8TB worth of data in the large
objects system, it would actually make the pg_largeobject table slow,
unless it was automatically partitioned.

Thanks for taking the time to discuss this matter with me!

Sincerely,

Ildefonso Camargo



[PERFORM] Large rows number, and large objects

2011-06-18 Thread Jose Ildefonso Camargo Tolosa
Greetings,

I have been thinking a lot about pgsql performance when it is dealing
with tables with lots of rows in one table (several million, maybe
thousands of millions).  Say, the Large Object use case:

one table references large objects (it has a pointer to each object).
The large object table stores the large objects in 2000-byte chunks
(IIRC), so if we have something like 1TB of data stored in large
objects, the large object table would have something like 550M rows;
if we get to 8TB, we will have 4400M rows (or so).

I have read in several places that huge tables should be partitioned to
improve performance... now, my first question: does the large objects
system automatically partition itself? If not, will large objects
performance degrade as we add more data? (I guess it would.)

Now... I can't fully understand this: why does the performance
actually go lower? I mean, when we do partitioning, we take a
certain parameter to divide the data, and then use the same parameter
to route the request to the correct table... shouldn't the DB
actually do something similar with the indexes? I mean, I have always
thought about indexes exactly like that: an approximation
search.  I know I'm looking for, say, a date that is less than
2010-03-02, and the system should just position itself on the index
around that date, and scan from that point backward... As far as my
understanding goes, partitioning only adds a kind of auxiliary
index, making the system, for example, go to a certain table if the
query targets one particular year (assuming we partitioned by
year).  What if the DB actually implemented something like an index
for the index (so that the first search on huge tables scans a smaller
index that points to a position in the larger index, thus avoiding the
initial scan of the large index)?

Well, I'm writing all of this half-asleep now, so... I'll re-read it
tomorrow... in the meantime, just ignore anything that doesn't make a
lot of sense :) .

Thanks!

Ildefonso Camargo



Re: [PERFORM] Performance on new 64bit server compared to my 32bit desktop

2010-08-31 Thread Jose Ildefonso Camargo Tolosa
Hi!

On Tue, Aug 31, 2010 at 8:11 AM, Yeb Havinga yebhavi...@gmail.com wrote:
 Greg Smith wrote:

 Yeb Havinga wrote:

 model name      : AMD Phenom(tm) II X4 940 Processor @ 3.00GHz
 cpu cores         : 4
 stream compiled with -O3
 Function      Rate (MB/s)   Avg time     Min time     Max time
 Triad:       5395.1815       0.0089       0.0089       0.0089

 I'm not sure if Yeb's stream was compiled to use MPI correctly though,
 because I'm not seeing Number of Threads in his results.  Here's what
 works for me:

  gcc -O3 -fopenmp stream.c -o stream

 And then you can set:

 export OMP_NUM_THREADS=4

 Then I get the following. The rather weird dip at 5 threads is consistent
 over multiple tries:

 Number of Threads requested = 1
 Function      Rate (MB/s)   Avg time     Min time     Max time
 Triad:       5378.7495       0.0089       0.0089       0.0090

 Number of Threads requested = 2
 Function      Rate (MB/s)   Avg time     Min time     Max time
 Triad:       6596.1140       0.0073       0.0073       0.0073

 Number of Threads requested = 3
 Function      Rate (MB/s)   Avg time     Min time     Max time
 Triad:       7033.9806       0.0069       0.0068       0.0069

 Number of Threads requested = 4
 Function      Rate (MB/s)   Avg time     Min time     Max time
 Triad:       7007.2950       0.0069       0.0069       0.0069

 Number of Threads requested = 5
 Function      Rate (MB/s)   Avg time     Min time     Max time
 Triad:       6553.8133       0.0074       0.0073       0.0074

 Number of Threads requested = 6
 Function      Rate (MB/s)   Avg time     Min time     Max time
 Triad:       6803.6427       0.0071       0.0071       0.0071

 Number of Threads requested = 7
 Function      Rate (MB/s)   Avg time     Min time     Max time
 Triad:       6895.6909       0.0070       0.0070       0.0071

 Number of Threads requested = 8
 Function      Rate (MB/s)   Avg time     Min time     Max time
 Triad:       6931.3018       0.0069       0.0069       0.0070

 Other info: DDR2 800MHz ECC memory

OK, this could explain the huge difference.  I was planning on getting a
Gigabyte GA-890GPA-UD3H, with a Phenom II X6 and this RAM: Crucial
CT2KIT25664BA1339, Crucial BL2KIT25664FN1608, or something better I
find when I get enough money (depending on my budget at the moment).

 MB: 790FX chipset (Asus m4a78-e)

 regards,
 Yeb Havinga



Thanks for the extra info!

Ildefonso.



Re: [PERFORM] Performance on new 64bit server compared to my 32bit desktop

2010-08-31 Thread Jose Ildefonso Camargo Tolosa
Hi!

On Tue, Aug 31, 2010 at 11:13 AM, Greg Smith g...@2ndquadrant.com wrote:
 Yeb Havinga wrote:

 The rather weird dip at 5 threads is consistent over multiple tries

 I've seen that twice on 4 core systems now.  The spot where there's just one
 more thread than cores seems to be the worst case for cache thrashing on a
 lot of these servers.

 How much total RAM is in this server?  Are all the slots filled?  Just
 filling in a spreadsheet I have here with sample configs of various
 hardware.

 Yeb's results look right to me now.  That's what an AMD Phenom II X4 940 @
 3.00GHz should look like.  It's a little faster, memory-wise, than my older
 Intel Q6600 @ 2.4GHz.  So they've finally caught up with that generation of
 Intel's stuff.  But my current desktop quad-core i860 with hyperthreading is
 nearly twice as fast in terms of memory access at every thread size.  That's
 why I own one of them instead of a Phenom II X4.

your i860? http://en.wikipedia.org/wiki/Intel_i860  wow! :D

Now, seriously: what memory (brand/model) do the Q6600 and your
newer desktop have?

I'm just too curious; the last time I was able to run benchmarks myself
was with a Core 2 Duo and an Athlon 64 X2, and back then the Core 2 Duo
beat the Athlon at almost anything.

Nowadays, it looks like AMD is playing the more-cores-for-the-money
game, but I think that sooner or later they will catch up again, and
when that happens Intel will just get another ET chip and put it on
the market, and so on! :D

This is a game where the winners are: us!



Re: [PERFORM] Performance on new 64bit server compared to my 32bit desktop

2010-08-30 Thread Jose Ildefonso Camargo Tolosa
Hi!

Thanks you all for this great amount of information!

What memory/motherboard (i.e., chipset) is installed in the Phenom II one?

It looks like it peaks at ~6.2GB/s with 4 threads.

Also, what kernel is on it? (uname -a would be nice.)

Now, this looks like sustained memory speed; what about random memory
access (where latency plays an important role)?
http://icl.cs.utk.edu/projectsfiles/hpcc/RandomAccess/

I don't have any of these systems to test, but it would be interesting
to get the random access benchmarks too. What do you think? Will the
results be the same?

Once again, thanks!

Sincerely,

Ildefonso Camargo



Re: [PERFORM] Performance on new 64bit server compared to my 32bit desktop

2010-08-30 Thread Jose Ildefonso Camargo Tolosa
Hi!

Thanks for the review link!

Ildefonso.

On Mon, Aug 30, 2010 at 6:01 PM, Greg Smith g...@2ndquadrant.com wrote:
 Clemens Eisserer wrote:

 Hi,



 This isn't an older Opteron, it's a 6-core, 6MB L3 cache Istanbul.  It's not
 the newer stuff either.


 Everything before Magny Cours is now an older Opteron from my perspective.


 The 6-cores are identical to Magny Cours (except that Magny Cours has
 two of those beasts in one package).


 In some ways, but not in regards to memory issues.
 http://www.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/2
 has a good intro.  While the inside is like two 6-core models stuck
 together, the external memory interface was completely reworked.

 Original report here involved an Opteron 2427, correctly identified as being
 from the 6-core Istanbul architecture.  All Istanbul processors use DDR2
 and are quite slow at memory access compared to similar Intel Nehalem
 systems.  The Magny-Cours architecture is available in 8 and 12 core
 variants, and the memory controller has been completely redesigned to take
 advantage of many banks of DDR3 at the same time; it is far faster than two
 of the older 6 cores working together.

 http://en.wikipedia.org/wiki/List_of_AMD_Opteron_microprocessors has a good
 summary of the models; it's confusing.  Quick chart showing the three
 generations compared demonstrates what I just said above using the same
 STREAM benchmarking that a few results have popped out here using already:

 http://www.anandtech.com/show/2978/amd-s-12-core-magny-cours-opteron-6174-vs-intel-s-6-core-xeon/5

 Istanbul Opteron 2435 in this case, 21GB/s.  The two Nehalem Intel Xeons,
 31GB/s.  New Magny-Cours, 49GB/s.

 --
 Greg Smith  2ndQuadrant US  Baltimore, MD
 PostgreSQL Training, Services and Support
 g...@2ndquadrant.com   www.2ndQuadrant.us

