Re: Google and Coronavirus Tech Handbook

2020-03-20 Thread Rob Pickering
On Fri, 20 Mar 2020 at 21:20, Alexandre Petrescu <
alexandre.petre...@gmail.com> wrote:

> 1. I did not understand why you call it "_Google_ and... Handbook"
>

For goodness' sake, I posted here looking for an AS15169 contact for a useful
project that needs some of their help.

What I seem to be getting is a bunch of critique, from folks who
don't understand the difference between the Internet and a corporate VPN
that is MITMing their SSL traffic, about the merits of the technology
choices the project made and the country it originates in (in case you
haven't noticed, all of our governments are screwing this up).

NANOG has gone to the dogs; it wasn't like this after 9/11!

Thanks folks.


Re: Google and Coronavirus Tech Handbook

2020-03-20 Thread Rob Pickering
On Fri, 20 Mar 2020, 20:08 Alexandre Petrescu, 
wrote:

> Rob,
>
> You told me in private a few moments ago that if I cant help with fixin an
> AS-number issue critical to you, then I should drop from this thread.
>

I actually said "help reaching someone from AS15169" but, apart from that,
yes good paraphrase.

Please don't be offended; I'm just trying to help what I think is a super
important resource stay accessible by connecting them to someone at Google
who can help with a Google Docs access capacity issue they are having.
Conversations about root CAs are noise in that context.

Thank you.


Re: Google and Coronavirus Tech Handbook

2020-03-20 Thread Rob Pickering
On Fri, 20 Mar 2020 at 18:11, Alexandre Petrescu <
alexandre.petre...@gmail.com> wrote:

> CA==Certificate Authority
>
> the browser makes me questions before allowing me to see the content,
> after I click the indicated URL
>
> LF/HF
>
> What root CA list are you using?

I'm not at all involved in their hosting, but it looks like they are
sitting behind Cloudflare SSL, which is trusted by the default CA list of
the browser vendor on my desktop.
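
If you want to check for yourself which issuer the default trust store is
actually validating against, a minimal sketch along these lines works
(assumes Python with its standard ssl module and the platform CA bundle;
the hostname is just the site under discussion):

    import socket
    import ssl

    # Connect and print the certificate issuer, i.e. which CA chain the
    # default trust store accepted for this site.
    host = "coronavirustechhandbook.com"
    context = ssl.create_default_context()  # platform/default CA bundle
    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.getpeercert()["issuer"])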

--
Rob Pickering, r...@pickering.org


Re: Google and Coronavirus Tech Handbook

2020-03-20 Thread Rob Pickering
CA?

On Fri, 20 Mar 2020 at 18:07, Alexandre Petrescu <
alexandre.petre...@gmail.com> wrote:

> can I trust its CA?
>
>
> Alex, LF/HF 2
>
> Le 20/03/2020 à 18:54, Rob Pickering a écrit :
>
> This: https://coronavirustechhandbook.com/home is a super useful resource
> in my opinion.
>
> They are using Google Docs because it provides a really accessible way of
> doing content creation, but they are hitting capacity issues.
>
> Are there any Google contacts here who can get them talking to the right
> people please?
>
> Message me offlist and I will update here when sorted.
>
> --
> Rob Pickering, r...@pickering.org
>
>

-- 
--
Rob Pickering, r...@pickering.org


Google and Coronavirus Tech Handbook

2020-03-20 Thread Rob Pickering
This: https://coronavirustechhandbook.com/home is a super useful resource
in my opinion.

They are using Google Docs because it provides a really accessible way of
doing content creation, but they are hitting capacity issues.

Are there any Google contacts here who can get them talking to the right
people please?

Message me offlist and I will update here when sorted.

--
Rob Pickering, r...@pickering.org


Re: Reminiscing our first internet connections (WAS) Re: akamai yesterday - what in the world was that

2020-01-27 Thread Rob Pickering
Wasn't the 56/64k thing a result of CAS (robbed-bit) signalling, which was a
fudge AT&T did to transport signalling information in-band on T1s by
stealing the low order bit? (It wasn't actually every low order bit, but it
meant you had to throw away every low order bit, as the CPE didn't know
which ones were "corrupted" by the carrier.)
Proper ISDN was always 64kbit/s clear path, with separate D channels carried
OOB end to end, away from the B channel data.
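
For the arithmetic behind those two figures, a quick back-of-the-envelope
sketch in Python (the frame-level detail of exactly which bits get robbed
is simplified away here):

    # A DS0 is 8-bit PCM sampled at 8 kHz.
    samples_per_second = 8000
    bits_per_sample = 8

    clear_channel = samples_per_second * bits_per_sample        # 64000 bit/s
    # Robbed-bit CAS means the CPE has to discard the low order bit entirely.
    robbed_bit = samples_per_second * (bits_per_sample - 1)     # 56000 bit/s
    print(clear_channel, robbed_bit)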

On Mon, 27 Jan 2020 at 11:57, Mark Andrews  wrote:

> The hardware support was 2B+D but you could definitely just use a single
> B.   56k vs 64k depended on where you where is the world and which style of
> ISDN the telco offered.
>
>
> --
> Mark Andrews
>
> > On 27 Jan 2020, at 22:32, Bryan Holloway  wrote:
> >
> > I didn't think one could get a single 'B' channel over ISDN ... but I
> > could be mistaken.
> >
> > In my early ISP days, ISDN was 2 x 64k (full-rate) 'B' channels and a
> > 16k 'D' channel for signaling.
> >
> >
> >> On 1/26/20 5:58 AM, Joly MacFie wrote:
> >> IIRC that 64k was in fact 56k with 8k for overhead.
> >> I had one, and it would kick in a second channel if you pushed it, for
> >> a whopping 112k. Metered, came out to about $500/mo.
> >> Joly
> >> On Fri, Jan 24, 2020 at 6:26 PM Ben Cannon <b...@6by7.net> wrote:
> >>I started what became 6x7 with a 64k ISDN line.   And 9600 baud
> >>modems…
> >>in ’93 or so.  (I was a child, in Jr High…)
> >>-Ben.
> >>-Ben Cannon
> >>CEO 6x7 Networks & 6x7 Telecom, LLC
> >>b...@6by7.net
> >>>On Jan 24, 2020, at 3:21 PM, b...@theworld.com wrote:
> >>>
> >>>
> >>>On January 24, 2020 at 08:55 aar...@gvtc.com (Aaron Gould) wrote:
> >>>>Thanks Jared, When I reminisce with my boss he reminds me that
> >>>>this telco/ISP here initially started with a 56kbps internet
> >>>>uplink , lol
> >>>
> >>>Point of History:
> >>>
> >>>When we, The World, first began allowing the general public onto the
> >>>internet in October 1989 we actually had a (mildly shared*) T1
> >>>(1.544mbps) UUNET link. So not so bad for the time. Dial-up
> >>>customers
> >>>shared a handful of 2400bps modems, we still have them.
> >>>
> >>>* It was also fanned out of our office to a handful of Boston-area
> >>>customers who had 56kbps or 9600bps leased lines, not many.
> >>>
> >>>---Barry Shein
> >>>
> >>>Software Tool & Die | b...@theworld.com | http://www.TheWorld.com
> >>>Purveyors to the Trade | Voice: +1 617-STD-WRLD   | 800-THE-WRLD
> >>>The World: Since 1989  | A Public Information Utility | *oo*
> >> --
> >> ---
> >> Joly MacFie 218 565 9365 Skype:punkcast
> >> --
> >> -
>
>

-- 
--
Rob Pickering, r...@pickering.org


Re: Protecting 1Gb Ethernet From Lightning Strikes

2019-08-13 Thread Rob Pickering
On Tue, 13 Aug 2019 at 19:23, Javier J  wrote:

> I'm working with a client site that has been hit twice, very close by
> lightening.
>
> I did lots of electrical work/upgrades/grounding but now I want to focus
> on protecting Ethernet connections between core switching/other devices
> that can't be migrated to fiber optic.
>
> I was looking for surge protection devices for Ethernet but have never
> shopped for anything like this before. Was wondering if anyone has deployed
> a solution?
> They don't have a large presence on site (I have been moving all of their
> core stuff to AWS) but they still have core networking / connectivity and
> PoE cameras / APs around the property.
> Since migrating their onsite servers/infra to the cloud, now their
> connectivity is even more important.
>

The correct answer is to use fiber.

If you really, really can't, then APC make a single-port transient arrestor,
p/n PNET1GB.

I've used these in the past for a PoE phone in a wooden gatehouse hut, right
on the 100m max cable length, with no power for active kit, and they seem to
work fine. I'm using one at the moment for a PoE access point in my garden
shed. Not sure I would bring an inter-building link in copper onto an
expensive core switch though.

Don't know of anything in higher density than "one port".

--
Rob Pickering, r...@pickering.org


Re: IP Dslams

2019-01-04 Thread Rob Pickering
Just a thought: would a two-wire Ethernet extender technology (e.g.
Phybridge) provide you with a simpler solution? xDSL needs a lot of
infrastructure for a low port count (and budget) application.

I have no idea if you can split the baseband out to provide POTS over the
same pair, but even if you can't, Ethernet plus a VoIP phone or ATA to each
unit may end up cheaper than a shed load of carrier-oriented xDSL infra?

On Mon, 31 Dec 2018 at 19:15, Nick Edwards  wrote:

> Howdy,
> We have a requirement for an aged care facility to provide voice and data,
> we have the voice worked out, but data, WiFi is out of the question, so are
> looking for IP-Dslams, preferably a system that is all-in-one, or self
> contained, as in contains its own BBRAS/LNS/PPP server/Radius, such as has
> a property managment API, or even just a webpage manager where admin can
> add in new residents when they arive, or delete when they depart I know
> these used to be available  many years ago, but that vendor has like many
> vanished, only requirement is for ADSL2+, prefer units with either 48 ports
> or multiples of (192 etc) and have filtered voice out ports (telco50/rj21
> etc)
>
> If anyone knows of such units, would appreciate some details on them,
> brand/model suppliers if known, etc, we can try get out google fu back if
> we have some steering:)
>
> Thank Y'all
>
> (resent - original never made it to the list for some gremlin reason)
>


-- 
--
Rob Pickering, r...@pickering.org


Re: Email Portability Approved by Knesset Committee

2010-02-23 Thread Rob Pickering
--On 23 February 2010 09:06 -0600 Larry Sheldon 
larryshel...@cox.net wrote:

> No kidding--something like making airlines do something railroads
> can do.


I guess that depends on whether you are talking about issuing flexible
tickets or cruising at zero feet.


--
Rob.



Re: DNS and potential energy

2008-07-01 Thread Rob Pickering


> Maybe it's not that bad. The eventual result is instead of having
> a billion .COM SLDs, there are a billion TLDs: all eggs in one


There are simply not going to be billions, millions, or even probably
tens of thousands of TLDs as a result of this. It's still a complex,
several-months-long administrative process that costs some multiple
of $100,000.


As far as I can work out, minus the press noise, the difference is 
that creating a TLD will take half a year rather than half a decade 
or more.



> basket, the root zone -- there will be so many gTLD servers, no DNS
> resolver can cache the gTLD server lookups, so almost every DNS
> query will now involve an additional request to the root, instead
> of (usually) a request to a TLD server (where in the past the TLD
> servers' IP would still be cached for most lookups).


Maybe, maybe not.


> Ultimately that is a 1/3 increase in number of DNS requests, say
> to lookup www.example.com
> if there wasn't a cache hit. In that case, I would expect the
> increase in traffic seen by root servers to be massive.


There will probably be a significant increase if there is a very wide 
takeup of new TLDs, yes.


Conversely load on some of the existing gTLD servers may decrease if 
the number of domains in active use is spread across a larger number 
of independent TLDs.
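
To make the quoted 1/3 figure concrete: a cold-cache lookup of
www.example.com crosses three zones, root, then com, then example.com. A
rough sketch of listing that delegation chain (assumes the dnspython
package; it just asks a stub resolver for the NS set at each zone cut):

    import dns.resolver

    # Print the nameservers serving each zone in the delegation chain
    # behind www.example.com: root -> com -> example.com.
    for zone in (".", "com.", "example.com."):
        answer = dns.resolver.resolve(zone, "NS")
        names = sorted(str(rdata.target) for rdata in answer)
        print(zone, "->", ", ".join(names))

If the TLD servers' addresses stop being cacheable, it is the first of
those three steps that gets repeated far more often, which is where the
extra root-server load would come from.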



> Possible technical ramifications that haven't been considered with
> the proper weight,
> and ICANN rushing ahead towards implementation in 2009 without
> having provided opportunity for internet ops community input
> before developing such drastic plans?



> Massive further sell-out of the root zone (a public resource) for
> profit? Further
> commercialization of the DNS? Potentially giving some registrants
> advantageous treatment at the TLD level, which has usually been
> available to registrants on more equal terms??
> [access to TLDs merely first-come, first-served]


Don't think that is operational, and in any case the current system is
weighted towards entities who have had domains for eons, since they
were able to be the first comers; it's very unfair and unequal in the
sense that it works against the interests of newer registrants.
Definitely not operational though.



> Vanity TLD space may make .COM seem boring. Visitors will expect
> names like
> MYSITE.SHOES, and consider other sites like myshoestore1234.com
> not-legitimate
> or not secure


> The lucky organization who won the ICANN auction and got to run the
> SHOES TLD may price subdomains at $1 minimum for a 1-year
> registration (annual auction-based renewal/registration in case of
> requests to register X.TLD by multiple entities) and registrants
> under vanity TLD to sign non-compete agreements and other
> pernicious EULAs and contracts of adhesion merely to be able to put
> up their web site,
> As a subdomain of what _LOOKS_ like a generic name.


> And, of course, http://shoes/ reserved for the TLD registrant's
> billion-$ shoe store,
> with DNS registration a side-business (outsourced to some DNS
> registrar using some domain SLD resale service).


The operational issue is?

Actually your shoe shop now has a greater number of choices
(.com or .shoes), and I can bet that if your scenario comes to pass
with a very aggressive and restrictive registrar of .shoes, some
enterprising soul will register .boots, .sneakers or .shoeshop etc. to
make their living on those parts of the market that don't like .shoes
policies.



> The possibilities that vanity TLD registry opens are more insidious
> than it was for someone to bag a good second-level domain.


Questionable and certainly not operational.






>> Sure, nefarious use of say .local could cause a few problems but
>> this is


> I'd be more concerned about nefarious use of a TLD like .DLL,
> .EXE, .TXT or other domains that look like filenames.


Or .com. Oddly enough I just now found a Windows box and typed
command.com in a browser URL bar and it did what I expected; when I
typed the same thing at a cmd prompt it did something different, and I
expected that too.



> Seeing as a certain popular operating system confounds local file
> access via Explorer with internet access...
> You may think abcd.png is an image on your computer... but if you
> type that into your
> address, er, location bar, it may be a website too!


To the extent that possibility already exists, there is a reason that
web URIs have both a host and a path component. I don't see why new
TLDs substantially change this. If applications insist on confusing
the two then bad things will always happen, but that is an app issue.


--
Rob.





Re: DNS and potential energy

2008-06-30 Thread Rob Pickering

--On 29 June 2008 23:59 + [EMAIL PROTECTED] wrote:

> one might legitimately argue that ICANN is in need of
> some serious regulation
>
> that can happen at that national level or on the international
> level.


It is very likely that serious regulation, particularly at an
international level, would have a way more degenerate effect on DNS
operations than adding a bunch of new entries into the root.


Be careful about what you legitimately argue for...

I'm still having a hard time seeing what everyone is getting worked 
up about.


Can anyone point to an example of a reasonably plausible bad thing
that could happen as a result of doubling, tripling, or even
increasing by an order of magnitude the size of the root zone?


Sure, nefarious use of say .local could cause a few problems but this 
is pretty inconceivable given that:
1) most estimates I've seen of the cost of setting up a TLD start at 
around $500,000 (probably a bit over the credit limit on a stolen 
credit card #).
2) These are easily fixed by adding known large uses like this to
the formal reserved list.
3) I'm sure that these will in any case be caught well before 
deployment under the proposed filtering process.


So, other than a change in the number of various DNS related money 
chutes and their net recipients, what are the actual operational 
issues here?


--
Rob.