> On Jan 9, 2019, at 2:54 PM, dwight via cctalk wrote:
>
> ...
> Of course in an embedded processor you can run in kernel mode and busy wait
> if you want.
Yes, and that has a number of advantages. You get well-defined latencies, and
everything that the program does gets done within bounded time. ...
unrolling the loops. Before speculative
execution, unrolling had a clear advantage.
Dwight
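A minimal sketch of the unrolling Dwight mentions (a hypothetical example, not code from the thread): summing a byte buffer, rolled versus unrolled by four. On an in-order core without speculative execution, the unrolled version pays one loop-control branch per four elements instead of one per element, which is where the old advantage came from.

```c
#include <stddef.h>
#include <stdint.h>

/* Rolled version: one loop-control branch per element. */
uint32_t sum_rolled(const uint8_t *buf, size_t n) {
    uint32_t s = 0;
    for (size_t i = 0; i < n; i++)
        s += buf[i];
    return s;
}

/* Unrolled by four: one branch per four elements, plus a
 * scalar tail loop for the remainder. */
uint32_t sum_unrolled(const uint8_t *buf, size_t n) {
    uint32_t s = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        s += buf[i] + buf[i + 1] + buf[i + 2] + buf[i + 3];
    for (; i < n; i++)
        s += buf[i];
    return s;
}
```

Modern compilers will do this transformation themselves when asked (e.g. GCC's -funroll-loops), which is one reason hand-unrolling has fallen out of fashion.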
From: cctalk on behalf of Eric Korpela via
cctalk
Sent: Wednesday, January 9, 2019 11:06 AM
To: ben; General Discussion: On-Topic and Off-Topic Posts
Subject: Re: OT? Upper limits
On Tue, Jan 8, 2019 at 3:01 PM ben via cctalk wrote:
> I bet I/O loops throw every thing off.
>
Even worse than you might think. For user mode code you've got at least
two context switches which are typically thousands of CPU cycles. On the
plus side when you start waiting for I/O the CPU will
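The kernel-mode busy-wait Dwight describes avoids those context switches entirely. A bare-metal poll of a memory-mapped status register might look like the sketch below; the register layout and READY bit are invented for illustration.

```c
#include <stdint.h>

#define STATUS_READY 0x01u  /* hypothetical "device ready" bit */

/* Spin until the device sets its READY bit.  volatile forces a
 * fresh read of the register on every iteration.  Latency is tight
 * and predictable, at the cost of burning the CPU while waiting. */
static inline void wait_ready(volatile uint32_t *status_reg) {
    while ((*status_reg & STATUS_READY) == 0)
        ; /* busy wait */
}
```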
On 1/8/2019 3:51 PM, Guy Sotomayor Jr via cctalk wrote:
Some architectures (I’m thinking of the latest Intel CPUs) have a small loop
cache
whose aim is to keep a loop entirely within that cache. That cache operates at
the
full speed of the instruction fetch/execute (actually I think it keeps the
decoded uOps)
cycles (e.g. you can’t go faster). L1 c
On 1/8/19 1:23 PM, Tapley, Mark via cctalk wrote:
> Why so (why surprising, I mean)? Understood an unrolled loop executes
> faster...
That can't always be true, can it?
I'm thinking of an architecture where the instruction cache is slow to
fill and multiple overlapping operations are involved an
> On Jan 6, 2019, at 1:31 PM, dwight via cctalk wrote:
>
> Surprisingly, this is actually good for older languages like Forth that are
> frugal with RAM.
Why so (why surprising, I mean)? Understood that an unrolled loop executes faster,
and RISC instruction sets have lower information density than CISC
Sent: Saturday, January 5, 2019 9:40 PM
To: Jeffrey S. Worley; General Discussion: On-Topic and Off-Topic Posts
Subject: Re: OT? Upper limits of FSB
Interconnects at 28Gb/s/lane have been out for a while now, supported by quite
a few chips. 56Gb/s PAM4 is around the corner, and we run 100Gb/s in the lab
right now. Just sayin’ ;-). That said, we throw in about every equalization
trick we know of, PCB materials are getting quite exotic and con
On Sat, Jan 05, 2019 at 02:02:35AM -0500, Jeffrey S. Worley via cctalk wrote:
> [...] So here's the question. Is maximum FSB on standard, non-optical bus
> still limited to a maximum of a couple of hundred megahertz, or did something
> happen in the last decade or two that changed things dramatically?
I'll assume you've read:
https://en.wikipedia.org/wiki/Front-side_bus
Even though synchronization base clocks have remained low, parallel
buses can run in the low GHz range (sub-4) in terms of data-line
transitions per second, with as many as 128 parallel wires in sync. It's
not just FSB
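To put rough numbers on that (the figures below are assumed for illustration, not quoted from the thread): 128 data wires toggling at 2 GT/s move 128 × 2×10^9 bits per second, i.e. 32 GB/s of raw bandwidth.

```c
/* Peak raw bandwidth of a parallel bus in GB/s.
 * transfers_per_sec: data-line transitions per second;
 * width_bits: number of parallel data wires.
 * Example figures are assumptions, not measured values. */
double bus_gbytes_per_sec(double transfers_per_sec, int width_bits) {
    return transfers_per_sec * (double)width_bits / 8.0 / 1e9;
}
```

With 128 wires at 2 GT/s this works out to 32 GB/s, comfortably beyond what a "couple of hundred megahertz" base clock suggests.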
Apropos of nothing, I've been confused for some time regarding maximum
clock rates for local bus.
My admittedly old information, which comes from the 3rd ed. of "High
Performance Computer Architecture", a course I audited, indicates a
maximum speed on the order of 1 GHz for very, very short trace len
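The 1 GHz / short-trace pairing follows from propagation delay. Assuming a rough velocity factor of 0.5 for FR4 (a ballpark number, not from the textbook), a signal covers about 15 cm in one 1 ns clock period, so keeping skew to a small fraction of a period forces traces down to a few centimetres:

```c
/* Distance (in cm) a signal travels in one clock period on a PCB,
 * given the bus frequency in Hz and an assumed velocity factor
 * (fraction of the speed of light; ~0.5 is a rough FR4 figure). */
double cm_per_period(double freq_hz, double velocity_factor) {
    const double c_cm_per_s = 3.0e10; /* speed of light in cm/s */
    return velocity_factor * c_cm_per_s / freq_hz;
}
```

At 1 GHz that gives 15 cm per period; budgeting skew at, say, a tenth of a period leaves roughly 1.5 cm of allowable trace mismatch, consistent with the "very, very short traces" figure above.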