On 2023-04-27 16:05, Dobbins, Roland via NANOG wrote:
> There isn’t a standard for rack depth, AFAIK, but one typically sees
> anywhere from 27in/69cm – 50in/127cm, in my experience. 42in/106.7cm
> & 48in/122cm are quite common depth dimensions.
You are talking about the depth of the entire
On 2023-01-23 19:08, I wrote:
> I get that for 1310 nm light, the Doppler shift would be just under
> 0.07 nm, or 12.2 GHz:
> [...]
> In the ITU C band, I get the Doppler shift to be about 10.5 GHz (at
> channel 72, 197200 GHz or 1520.25 nm).
> [...]
> These shifts are noticeably less than typical
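The quoted figures can be sanity-checked with the first-order Doppler formula Δf = f·v/c = v/λ. The closing speed below is an assumption (two LEO satellites at roughly 550 km altitude, orbital speed ~7.6 km/s each, approaching head-on); the quoted 12.2 GHz corresponds to a slightly higher closing speed of about 16 km/s.

```python
# Sanity check of the quoted Doppler numbers (first-order approximation).
# V_REL is an assumption: two LEO birds closing head-on at ~7.6 km/s each.
C = 299_792_458.0          # speed of light, m/s
V_REL = 2 * 7_600.0        # assumed worst-case closing speed, m/s

def doppler_shift_hz(wavelength_m, v_rel=V_REL):
    """First-order Doppler shift for a carrier at the given wavelength."""
    return v_rel / wavelength_m   # == (C / wavelength_m) * v_rel / C

shift_1310 = doppler_shift_hz(1310e-9)      # ~11.6 GHz
shift_c72  = doppler_shift_hz(1520.25e-9)   # ITU ch. 72, ~10.0 GHz

# Corresponding wavelength shift: dlambda = lambda^2 * df / c  (~0.066 nm,
# i.e. "just under 0.07 nm" as quoted above)
dlam_1310 = (1310e-9) ** 2 * shift_1310 / C
```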
On 2023-01-23 17:27, Tom Beecher wrote:
> What I didn't think was adequately solved was what Starlink shows in
> marketing snippets, that is birds in completely different orbital
> inclinations (sometimes close to 90 degrees off) shooting messages to each
> other. Last I had read the Doppler
On 2019-12-18 22:14 CET, Rod Beck wrote:
> Well, the fact that a data center generates a lot of heat means it is
> consuming a lot of electricity.
Indeed, they are consuming lots of electricity. But it's easier to
measure that by putting an electricity meter on the incoming power
line (and the power
On 2019-12-18 20:06 CET, Rod Beck wrote:
> I was reasoning from the analogy that an incandescent bulb is less
> efficient than an LED bulb because it generates more heat - more
> of the electricity goes into the infrared spectrum than the useful
> visible spectrum. Similar to the way that an
On 2019-12-18 15:57, Rod Beck wrote:
> This led me to wonder what is the inefficiency of these servers in data
> centers. Every time I am in a data center I am impressed by how much heat
> comes off these semiconductor chips. Looks to me may be 60% of the
> electricity ends up as heat.
What
On 2019-10-22 22:38 -0700, Stephen Satchell wrote:
> So, to the reason for the comment request, you are telling me not to
> blackhole 100.64/10 in the edge router downstream from an ISP as a
> general rule, and to accept source addresses from this netblock. Do I
> understand you correctly?
On 2019-05-31 01:18 +, Mel Beckman wrote:
> No, that's not the situation being discussed.
Actually, that *was* the example I was trying to give, where I
suspect many are *not* following the rules of RFC 1930.
> As I've pointed out, a multi homed AS without an IGP connecting all
> prefixes
On 2019-05-30 20:00 +, Mel Beckman wrote:
> I’m sure we can find corner cases, but it’s clear that the vast
> majority of BGP users are following the standard.
"Citation needed". :-) How is it clear that the vast majority are
following
On 2019-05-27 18:18 +, Mel Beckman wrote:
> Before the trigger temperature is reached, the NMS would have sent
> various escalating alarms to on call staffers, who hopefully would
> intervene before this point.
Would they actually have time to react and do something? In our
datacenters, we
On 2019-03-23 12:41 -0700, Mehmet Akcin wrote:
> I am trying to get my hands on some QFX5000s and I have a rather quick
> question.
First, there is no model named QFX5000. There are the QFX5100, QFX5110,
QFX5120, QFX5200 and QFX5210 (and some of them have several submodels,
e.g. QFX5100-48T,
On 2019-03-05 07:26 CET, Mark Andrews wrote:
> It does work as designed except when crap middleware is added. ECMP
> should be using the flow label with IPv6. It has the advantage that
> it works for non-0-offset fragments as well as 0-offset fragments and
> also works for transports other than
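Mark's point is that the flow label sits in the fixed IPv6 header, so it is present in every fragment, whereas TCP/UDP ports exist only in the 0-offset fragment. A minimal sketch of 3-tuple hashing (the hash function and link count here are arbitrary illustrations, not any vendor's actual hardware hash):

```python
# Illustrative ECMP next-hop selection keyed on the IPv6 3-tuple
# (source, destination, flow label).
import zlib

def pick_link(src: str, dst: str, flow_label: int, n_links: int) -> int:
    # Flow label is 20 bits, hence 5 hex digits in the key.
    key = f"{src}|{dst}|{flow_label:05x}".encode()
    return zlib.crc32(key) % n_links

# Every fragment of a flow carries the same 3-tuple, so all fragments
# deterministically hash to the same link.
link = pick_link("2001:db8::1", "2001:db8::2", 0x12345, 4)
assert all(pick_link("2001:db8::1", "2001:db8::2", 0x12345, 4) == link
           for _ in range(10))
```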
On 2019-02-11 04:57 CET, Mark Tinka wrote:
> On 10/Feb/19 17:46, Baldur Norddahl wrote:
[...]
>> In any case, we are now building out our own fiber to cover the gaps
>> left by TDC. Here the end user has to pay DKK 12,000 (USD 1,824 / EUR
>> 1,608) one time fee and with that he gets everything
On 2019-02-09 18:59 CET, Mikael Abrahamsson wrote:
> For anyone saying it's "impossible" to do AE, they're welcome here to
> the Nordic region and especially Sweden where PON is basically unheard
> of. We have millions of AE connected households. I live in one of them.
However, large parts
On 2018-12-19 21:28 MET, William Herrin wrote:
> Easy: .97 matches neither one because 64 & 97 != 0 and 32 & 97 != 0.
> That's a 0 that has to match at the end of the 10.20.30.
D'oh! Sorry, I got that wrong. (Trying to battle 10+% packet loss at
home and a just upgraded Thunderbird at the same
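Bill's bit arithmetic can be checked mechanically. The general rule for classic (possibly non-contiguous) netmasks is `(addr & mask) == (network & mask)`; the specific masks below are reconstructed for illustration, not necessarily the exact ones from the thread:

```python
def matches(addr: int, network: int, mask: int) -> bool:
    """Classic netmask match; works for non-contiguous masks too."""
    return addr & mask == network & mask

# Last octet only: .97 = 0b0110_0001.  A network whose mask covers
# bit 64 (or bit 32), with that bit 0 in the network part, cannot
# match .97 -- exactly the 64 & 97 != 0 / 32 & 97 != 0 arithmetic
# quoted above.
assert 64 & 97 != 0 and 32 & 97 != 0
assert not matches(97, 0b0000_0000, 0b0100_0000)   # bit-64 mask
assert not matches(97, 0b0000_0000, 0b0010_0000)   # bit-32 mask
assert matches(97, 0b0000_0001, 0b0000_0001)       # low bit agrees
```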
On 2018-12-19 20:47 MET, valdis.kletni...@vt.edu wrote:
> There was indeed a fairly long stretch of time (until the CIDR RFC came
> out and specifically said it wasn't at all canon) where we didn't have
> an RFC that specifically said that netmask bits had to be contiguous.
How did routers
On 2018-08-08 23:36, na...@jack.fr.eu.org wrote:
> Let me fix that for you.
> Using multicast on IPv6 grants us the ability to do more.
> Today, this is worthless.
> Will it be the same tomorrow ?
Problem is, to handle the Neighbour Discovery design (16M multicast
groups), we need hardware that
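The 16M figure is the 2^24 solicited-node multicast range ff02::1:ff00:0/104 from RFC 4291: each unicast address maps to a group by its low 24 bits. A small sketch of that mapping:

```python
import ipaddress

SOLICITED_NODE_BASE = int(ipaddress.IPv6Address("ff02::1:ff00:0"))

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    """Solicited-node multicast group for a unicast address
    (RFC 4291 sec. 2.7.1): ff02::1:ffXX:XXXX, low 24 bits copied."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFF_FFFF
    return ipaddress.IPv6Address(SOLICITED_NODE_BASE | low24)

# 2^24 = 16,777,216 possible groups -- the "16M" above.
print(solicited_node("2001:db8::2aa:ff:fe28:9c5a"))  # ff02::1:ff28:9c5a
```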
On 2018-05-16 15:22, Adam Kajtar wrote:
> I wasn't using per-packet load balancing. I believe the Juniper default is per
> IP.
The Juniper default is to not do ECMP at all. Only a single route is
programmed into the FIB for each prefix in your RIB. If you e.g. have
routes to 198.51.100.0/24
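To actually get ECMP into the FIB, Junos wants a load-balancing policy exported to the forwarding table. A commonly shown sketch (the policy name is arbitrary; despite the historical "per-packet" keyword, modern Junos PFEs hash per flow):

```
policy-options {
    policy-statement ecmp-lb {
        then load-balance per-packet;   /* per-flow on modern PFEs */
    }
}
routing-options {
    forwarding-table {
        export ecmp-lb;
    }
}
```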
ier for that wavelength.
(This was using 120km CWDM gigabit transceivers directly in the routers
at each end. We have since retired those and use 10 gigabit DWDM with
transponders and EDFA amplifiers.)
Yes, it was a duct-tape solution, but it was cheap and got the work
done. :-)
/Thomas Bell
On 2017-12-28 22:31, Owen DeLong wrote:
> Sure, but that’s intended in the design of IPv6. There’s really no need
> to think beyond 2^64 because the intent is that a /64 is a single subnet
> no matter how many or how few machines you want to put on it.
> Before anyone rolls out the argument
y to get the manufacturer
to tell you what the most power-efficient inlet temperature is, they
will just tell you "oh, we support anything between 5°C and 40°C" (or
whatever their actual limits are), and absolutely refuse to answer your
actual question.
--
Thomas Bellman
National Superco
On 2017-09-10 00:09, Baldur Norddahl wrote:
> You want to configure point to point interfaces as /127 or /126 even if you
> allocate a full /64 for the link. This prevents an NDP exhaustion attack
> with no downside.
An alternative is to just have link-local addresses on your point-to-
point
y
initially wanted to give us only a /56. Of course, they can only
give out a few /52s; other departments will have less structured
address plans than us.
--
Thomas Bellman, National Supercomputer Centre, Linköping Univ., Sweden
"Life IS pain, highness. Anyone who tells ! b
On 2017-06-29 17:06, Job Snijders wrote:
> On Wed, Jun 28, 2017 at 11:09:25PM +0200, Thomas Bellman wrote:
>> I know that many devices allow you to configure any subnet size, but
>> is there any RFC allowing you to use e.
On 2017-06-28 17:03, William Herrin wrote:
> The common recommendations for IPv6 point to point interface numbering are:
>
> /64
> /124
> /126
> /127
I thought the only allowed subnet prefix lengths for IPv6 were /64 and
/127. RFC 4291 states: