Replying to Yakov Rekhter: Mobility for billions of users.
Whether the current system is "acceptable".
RLDRAM (Reduced Latency Dynamic RAM) is not much faster than DDR SDRAM.
Eliot Lear: RLDRAM costs.
Hi Yakov,
You wrote:
>> The current system is unacceptable since it doesn't fit the needs
>> of many end-user networks.
>
> The current system is *acceptable* to a very large number of users
> - for number of users for whom the current system is acceptable
> just count the number of users in the Internet.
OK - everyone who uses the Internet accepts it as it is, because
there is no alternative.
We are attempting to redesign the Net so that in the decades to come
the needs and desires of end-users will be better met. I think one
important thing we need to provide is much easier access to
multihoming than is currently possible. In that sense, I meant that
"the current system" is unacceptable - *we* shouldn't accept the
current difficulties with multihoming as something which should
remain in the decades to come.
This is a restatement of:
>> In the long term, we need to enhance the architecture of the Net
>> to cope happily with a much larger growth in the numbers of
>> end-user networks which have multihoming, TE and portability.
By "portability" I meant keeping the network's address range when
selecting another ISP. Renumbering can never be anywhere near as
inexpensive and risk-free as not having to renumber - in part because
network addresses appear in all sorts of places, including DNS files
and ACLs in other networks.
> Some folks think that the (large) majority of users will be mobile
> devices.
I do, since I think there is a huge demand and that it will be
perfectly feasible to achieve this - with both IPv4 and IPv6. IPv6
or at least something other than IPv4 is needed for billions of
cell-phones / media-players / PCs.
The desire, demand, need or whatever for physical mobility -
including with session continuity during all physical movements and
access network changes - is very strong indeed. I think the demand
is worth hundreds of billions of dollars (to whoever satisfies it) -
just like in the cellphone industry.
> If that is indeed the case, then one should ask whether
> the current solution for supporting host mobility (Mobile IP) is
> viewed as adequate for this scenario.
I don't think it is adequate for supporting global mobility with each
host retaining its IP address(es) year after year, while achieving
generally optimal path lengths for all packets.
> As if not, then enhancing the architecture without taking into
> account the need to provide an adequate support for a (very) large
> number of mobile hosts would be rather myopic.
I agree. Steve Russert and I wrote:
TTR Mobility Extensions for Core-Edge Separation Solutions to the
Internet's Routing Scaling Problem (2008-08-25)
http://www.firstpr.com.au/ip/ivip/TTR-Mobility.pdf
This imposes no architectural complications on the basic Ivip
core-edge separation scalable routing solution. It extends it with a
type of ETR called a Translating Tunnel Router (TTR). Mobile hosts,
connecting by any means whatsoever - including Mobile IP - choose a
TTR which is physically and topologically nearby - typically 1000km
or less.
They create one or more tunnels to the TTR, which acts as an ETR and
also sends outgoing packets to the rest of the Net.
There would be extra mapping changes in such a system, but the
mapping changes are only needed infrequently - such as when the
mobile node moves far enough to benefit from a closer TTR. Mapping
changes are not required when the MN changes its access networks, as
long as they are all close enough to the current TTR.
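The "only change mapping when a much closer TTR appears" rule above
can be sketched in a few lines. This is my own toy illustration, not
anything from the TTR-Mobility paper - the `factor=0.5` threshold for
"substantially closer" and the helper names are made up for the
example.

```python
# Toy sketch of the TTR-selection rule: the mobile node (MN) keeps its
# current TTR across access-network changes, and only triggers a mapping
# change when a substantially closer TTR is available.  The 0.5 factor
# is an illustrative assumption, not part of any spec.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class TTR:
    name: str
    lat: float
    lon: float

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance (haversine), in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def needs_mapping_change(mn_lat, mn_lon, current, candidates, factor=0.5):
    """Return a closer TTR only if it beats the current one by `factor`,
    otherwise None - i.e. stay with the current TTR, no mapping change."""
    cur_km = km_between(mn_lat, mn_lon, current.lat, current.lon)
    best = min(candidates,
               key=lambda t: km_between(mn_lat, mn_lon, t.lat, t.lon))
    best_km = km_between(mn_lat, mn_lon, best.lat, best.lon)
    return best if best_km < cur_km * factor else None
```

So an MN that roams from Sydney to Melbourne while tunnelling to a
Sydney TTR would make one mapping change (to a Melbourne TTR), but an
MN hopping between access networks within Sydney would make none.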
In Ivip, each mapping change is paid for by the end-user - so the
mapping changes of mobile users will pay their way, and indeed help
make the overall Ivip system profitable for its operators.
Quoting Tony Li:
http://www.irtf.org/pipermail/rrg/2009-January/000674.html
>> The growth rate of prefixes that are circulating within the DFZ exceeds the
>> rate of speed improvement in the underlying DRAM that we use to implement
>> the control plane. This will undoubtedly result in architectural changes
>> and cost increases. The concern then becomes that over the long term, as
>> this continues, the cost increases must be absorbed somewhere in the system,
>> and it is most likely to propagate to the end users, with a net detriment to
>> the growth of the network.
>
> Few points:
>
> - The above assumes that DRAM will continue to be used to implement control
> plane memory. As had been discussed before on this list, DRAM is not the
> only option - another alternative is RLDRAM.
A 2006-02 article comparing Reduced Latency DRAM and Quad Data Rate
SRAM is:
http://www.networksystemsdesignline.com/shared/article/showArticle.jhtml?articleId=180200010
RLDRAM provides only marginal improvements. There is still a long
cycle time due to the need to precharge the sense amplifiers.
Splitting the chip into different independent segments helps
somewhat, but the cycle time is still a limitation when reading or
writing to the same segment in quick succession.
QDR SRAM has no such problem - reads and writes can follow one after
the other every 3nsec or so, as I wrote:
http://www.irtf.org/pipermail/rrg/2008-December/000632.html
Looking at the latest Micron parts
http://www.micron.com/products/dram/rldram/partlist
There is a 576M bit RLDRAM II chip with less than 2ns "cycle times" -
but that is the interface cycle time. Looking at the datasheet:
http://download.micron.com/pdf/datasheets/rldram/MT49H16M36A.pdf
the cycle time between successive reads is 15ns.
This is only marginally less than for the latest PC-style DDR3 SDRAMs:
http://www.micron.com/products/dram/ddr3/partlist.aspx
The fanciest of these is the 4G bit (twin chip) MT41J1G4. The timing
information is found in the top link at the right from:
http://www.micron.com/products/partdetail?part=MT41J512M8THU-15E
A 170 page data sheet for a memory chip . . .
I spent 15 minutes looking at this and couldn't easily find the cycle
time of interest. The read latency (CL) figures on page 1 are 13ns
to 15ns. This is the delay in producing the first read data. The
total cycle time before another read or write could be performed
would be longer than this. I guess 20 to 25 nsec.
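To put these cycle times in terms of routing-table lookups, here is a
back-of-the-envelope calculation. The 3ns and 15ns figures are the
datasheet numbers quoted above; the 22ns DDR3 figure is simply the
mid-point of my 20-25ns guess.

```python
# Rough worst-case random-read throughput implied by the cycle times
# discussed above, for a single memory channel.  The DDR3 figure is a
# guess (mid-point of 20-25ns), not a datasheet number.
CYCLE_NS = {
    "QDR SRAM":   3.0,   # successive reads/writes every ~3ns
    "RLDRAM II": 15.0,   # read-to-read cycle time from the datasheet
    "DDR3 SDRAM": 22.0,  # guessed total read-to-read cycle time
}

def random_reads_per_sec(cycle_ns):
    """Worst-case random lookups per second at the given cycle time."""
    return 1e9 / cycle_ns

for name, ns in CYCLE_NS.items():
    print(f"{name:10s} ~{random_reads_per_sec(ns) / 1e6:.0f}M reads/sec")
```

On these assumptions QDR SRAM manages roughly 333M random reads a
second against RLDRAM II's ~67M - a factor of 5, where RLDRAM over
DDR3 is well under a factor of 2.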
RLDRAMs are less dense (a factor of 4 is evident in the above chips)
than DDR3 SDRAM. This is partly a function of the massive demand for
the latter, which is likely to remain, since SDRAMs are highly
optimised for filling the on-chip caches of Pentium, Core, AMD etc.
mass market 32/64 bit CPUs.
I agree with the rest of your message.
However, I think our target shouldn't be keeping up with the current
rate of growth in multihomed end-user networks, but meeting the real
demand - which is much higher than that of the small subset of
networks which overcome the currently very steep cost and
administrative barriers to get their own PI prefixes advertised in
the DFZ.
Eliot wrote:
> Now I know nothing about RLDRAM, but I'll hazard a guess that it is
> at least for now more expensive than DRAM.
You can bet your sweet bippy on that - now and in the future.
As noted above, the latest Micron RLDRAM chips have 1/4 the capacity
of their DDR3 SDRAM contemporaries.
RLDRAM only provides marginal speed increases, but the chips have
fancier interfaces, since they allow all address bits to be presented
in a single cycle, in contrast to the decades-old DRAM tradition of
clocking the address bits in two cycles on half the number of pins.
Digikey has Micron 288M bit RLDRAM (MT49H8M36FM-25 TR in a 144 "pin"
Ball Grid Array package) for USD$52 in 1k quantities.
A Micron 1G bit DDR3 DRAM (MT41J256M4HX-187E:D TR in a 78 "pin" BGA
package) costs USD$9.40 in 2k quantities.
On this basis, the raw per-bit cost ratio between RLDRAM and DDR3
SDRAM is:
(52 / 288) / (9.4 / 1024) = 19.7.
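For anyone who wants to check the arithmetic, the per-bit cost
comparison from the Digikey prices quoted above works out as follows
(part numbers and prices as in the text; quantities 1k and 2k
respectively, so the comparison is only approximate):

```python
# Price-per-megabit comparison from the Digikey figures quoted above.
rldram_usd, rldram_mbit = 52.00, 288    # MT49H8M36FM-25 TR, 1k qty
ddr3_usd, ddr3_mbit = 9.40, 1024        # MT41J256M4HX-187E:D TR, 2k qty

rldram_per_mbit = rldram_usd / rldram_mbit   # ~0.181 USD / Mbit
ddr3_per_mbit = ddr3_usd / ddr3_mbit         # ~0.0092 USD / Mbit
ratio = rldram_per_mbit / ddr3_per_mbit

print(f"RLDRAM / DDR3 cost-per-bit ratio: {ratio:.1f}")
```

That ~20x per-bit price gap is before counting the extra board area,
power and cooling of using four times as many packages.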
However, the larger size of the RLDRAM packages, and the fact that
four times as many packages are needed for a given amount of data,
means the real cost of using RLDRAM relative to DDR3 DRAM is higher
still. Furthermore, RLDRAM would involve greater power supply costs,
heat dissipation, cooling costs etc.
Since RLDRAM provides only a marginally faster total cycle time
between reads or writes, I would say these costs are prohibitive.
A better option would be to go straight to QDR SRAM, which is
probably the same or somewhat higher price than RLDRAM, and has a
much shorter cycle time (3ns) between successive reads and/or writes.
- Robin
_______________________________________________
rrg mailing list
[email protected]
https://www.irtf.org/mailman/listinfo/rrg