On 06/22/2017 08:21 PM, Kilian Cavalotti wrote:
> Oh, and at least the higher core-count SKUs like the 32-core 7251 are
> actually 4 8-core dies linked together with a new "Infinity Fabric"
> interconnect, not a single 32-core die. I completely missed that. And
> it's fine, it probably makes sense from a yield perspective, but
> behold the intra-socket
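The intra-socket topology Kilian alludes to can be inspected on Linux, where each NUMA node advertises its CPUs under sysfs. A minimal sketch (assuming the standard Linux `/sys/devices/system/node` layout; the helper just parses the kernel's `cpulist` format):

```python
import os
import re

def parse_cpulist(s):
    """Parse a Linux sysfs cpulist string like '0-3,8-11' into a sorted list of CPU ids."""
    cpus = []
    for part in s.strip().split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return sorted(cpus)

def numa_nodes(sysfs="/sys/devices/system/node"):
    """Map NUMA node id -> CPU ids by reading sysfs (Linux only)."""
    nodes = {}
    if not os.path.isdir(sysfs):
        return nodes  # non-Linux, or no NUMA information exposed
    for entry in os.listdir(sysfs):
        m = re.fullmatch(r"node(\d+)", entry)
        if m:
            with open(os.path.join(sysfs, entry, "cpulist")) as f:
                nodes[int(m.group(1))] = parse_cpulist(f.read())
    return nodes

if __name__ == "__main__":
    for node, cpus in sorted(numa_nodes().items()):
        print(f"node {node}: cpus {cpus}")
```

On a 4-die EPYC socket this would show multiple nodes within one physical socket; `numactl --hardware` reports the same information with inter-node distances.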
On 06/22/2017 04:41 PM, mathog wrote:
> On 22-Jun-2017 15:05, Greg Lindahl wrote:
>> I don't think it hurt AMD that much in the end.
>
> I disagree.
It's hard to say. I agree that AMD very slowly managed to claw some small
market share from Intel with the Opteron. I believe it was on the order
On 06/22/2017 05:04 PM, John Hearns wrote:
On Thu, Jun 22, 2017 at 12:27:30PM -0700, mathog wrote:
> Recall that when the Opterons first came out the major manufacturers
> did not ship any systems with it for what, a year, maybe longer? I
> vaguely recall SuperMicro going in quickly and Dell, HP, and IBM
> whistling in a corner.
David, you recall correctly. I recall working with Clustervision. We
installed the first 64-bit x86 cluster in the UK, at the chemistry
department in Manchester University. AMD CPUs, 1U pizza boxes.
For the life of me I cannot recall the manufacturer... but it was a white
box.
We had a Sun Linux cluster with Opteron 244 (?)
Remember that SuSE
On Thu, Jun 22, 2017 at 10:04:34 -0600, Brian Dobbins wrote:
On Wed, Jun 21, 2017 at 6:29 PM, Christopher Samuel wrote:
> I thought it interesting that the only performance info in that article
> for Epyc were SpecINT and (the only mention for SpecFP was for Radeon).

As did I, but a little digging shows a STREAM benchmark (on
Echoing what Joe says, "The Network is the Computer" - now who said that
(h)
We know this anyway - more attention is being paid to memory bandwidth
and memory access patterns than to hero numbers and core counts.
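The STREAM benchmark Brian mentions is exactly this kind of bandwidth measurement. A rough pure-Python sketch of its "triad" kernel (illustrative only: real STREAM is compiled C/Fortran over much larger arrays, and an interpreted loop measures interpreter overhead, not true memory bandwidth):

```python
import time
from array import array

def stream_triad(n=1_000_000, scalar=3.0):
    """STREAM 'triad' kernel: a[i] = b[i] + scalar * c[i].
    Returns (result array, approximate rate in MB/s).
    Pure Python, so the rate reflects interpreter overhead, not DRAM bandwidth."""
    b = array("d", (1.0 for _ in range(n)))
    c = array("d", (2.0 for _ in range(n)))
    a = array("d", bytes(8 * n))  # n zero-valued doubles
    t0 = time.perf_counter()
    for i in range(n):
        a[i] = b[i] + scalar * c[i]
    dt = time.perf_counter() - t0
    mbytes = 3 * 8 * n / 1e6  # triad moves three 8-byte words per element
    return a, mbytes / dt

if __name__ == "__main__":
    a, rate = stream_triad(100_000)
    print(f"triad ~{rate:.0f} MB/s (interpreter-bound)")
```

The accounting (three 8-byte accesses per element) follows the usual STREAM convention; the kernel itself is the standard triad.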
Perhaps I'm replaying a long running LP record, but going back to
Hi Mark,
I agree that these are slightly noticeable, but they are far smaller than
the cost of accessing a NIC on the "wrong" socket, etc.
Scott
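Scott's "wrong socket" NIC point can be checked on Linux: each PCI device reports the NUMA node it hangs off in sysfs. A small sketch (the sysfs path is the standard Linux convention; interface names vary by machine):

```python
import os

def nic_numa_node(ifname, sysfs="/sys/class/net"):
    """Return the NUMA node a NIC is attached to, or None if unknown.
    Linux exposes /sys/class/net/<if>/device/numa_node, where -1 means
    'no NUMA affinity reported' (e.g. single-node systems or virtual NICs)."""
    path = os.path.join(sysfs, ifname, "device", "numa_node")
    try:
        with open(path) as f:
            node = int(f.read().strip())
    except (OSError, ValueError):
        return None
    return None if node < 0 else node

if __name__ == "__main__":
    for ifname in sorted(os.listdir("/sys/class/net")):
        print(ifname, nic_numa_node(ifname))
```

Pinning the communicating ranks (and interrupt handlers) to that node is what avoids the cross-socket hop Scott describes.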
On Thu, Jun 22, 2017 at 9:26 AM, Mark Hahn wrote:
>> But now, with 20+ core CPUs, does it still really make sense to have
>> dual socket systems everywhere, with NUMA effects all over the place
>> that typical users are blissfully unaware of?
>
> I claim single-socket systems already have NUMA effects, since multiple
> layers (differently-shared) of cache
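Mark's "differently-shared" cache layers are also visible in sysfs: each cache level of a CPU lists which CPUs share it. A sketch listing the hierarchy for one CPU (assuming the standard Linux `/sys/devices/system/cpu/cpuN/cache` layout):

```python
import os

def caches_of(cpu=0, base="/sys/devices/system/cpu"):
    """List (level, type, shared_cpu_list) for each cache of one CPU (Linux sysfs)."""
    cdir = os.path.join(base, f"cpu{cpu}", "cache")
    out = []
    if not os.path.isdir(cdir):
        return out  # non-Linux, or cache topology not exposed
    for idx in sorted(os.listdir(cdir)):
        if not idx.startswith("index"):
            continue
        def read(name):
            with open(os.path.join(cdir, idx, name)) as f:
                return f.read().strip()
        out.append((read("level"), read("type"), read("shared_cpu_list")))
    return out

if __name__ == "__main__":
    for level, ctype, shared in caches_of(0):
        print(f"L{level} {ctype}: shared with CPUs {shared}")
```

Typically L1/L2 are private (or shared between SMT siblings) while L3 is shared across a core complex, which is exactly the "NUMA-like" non-uniformity within a single socket that Mark is claiming.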