Re: 44/8

2019-07-23 Thread Jimmy Hess
On Tue, Jul 23, 2019 at 9:57 AM Naslund, Steve  wrote:
> How about this?  If you guys think your organization (club, group of friends,
> neighborhood association, whatever...) got screwed over by the ARDC, then
> why not apply for your own v6 allocation.  You would then have complete

They could likely just use Link-Local V6 space if they wanted.
Digital linking using space from the 44/8 block would very likely
often be at 1200 or 9600 baud for many uses.  Each bit of overhead
expensive,  and IPv6 with its much greater overhead would seem
uniquely Unsuitable and not a viable replacement for IPv4 usage in cases.

I'm curious how does a "Point of Contact"  change from a Point of Contact
to the general organization, to "Owner of a resource"?
My general assumption is one does not follow from the other --- for
example, Amazon might designate an Admin POC for their /10,  But
by no means does that confer a right to that individual to auction
Amazon's /10,  sell the block,  and decide how the sales proceeds will be used.

Its not even that the registry should allow this and say "Well, Amazon,
tough.. if you didn't want it sold by $POC or their successor against your
wishes,  then you should have appointed a better POC."
I would anticipate the registry requiring legal documents from $OrgName
signed by however many people to verify complete agency over $OrgName
or someone making a representation;  not just sending an e-mail
or pushing a button.

And if there is no organization name,   then it may just be that
there isn't a single person in the world  who has been vested
with authorization to represent an item registered "for use by a community"
or "the public in general" in matters like that.


And why should any one organization get to monetize AMPRnet and
decide the use of any funds for monetization?   They may be a public
benefit, but how do you establish they are the _right_ and _only_
public benefit,  that the public deems the most proper for advancing
development for the greatest public good in IP/digital networking
communications?

The mention of  "Scholarships" and "Grants" to be decided by the
board of the entity that seemed to unilaterally decide to  "Sell" a
shared resource that was provided for free -  Sounds like an
idea biased towards "academics" and certain kinds of researchers
-- as in more most likely university academics --- sounds suspect.
Perhaps  Scholarships mostly benefit an individual,  and Grants could
be decided by an entity more well-known and reputable to the
community such as one vetted by IARU or ARRL, anyways.

Usage from the 44/8 space chosen is not necessarily co-ordinated with nor
were AMPR networks created within 44/8 ever  required to be approved or
co-ordinated by any central registry contacts that were shown for the block,
and the AMPR users can simply continue ignoring any IANA changes to 44/8;
just like you probably would if  some random contact on a registry record
decided they were owner, and auctioned off   "192.168.0.0/17"  reducing
the shared 192.168 allocation to 192.168.128.0/17  only.

They may simply go by the decisions of whichever user, vendor, or
experimenter makes the linking technology in question for deciding the
IP address co-ordination ---   For example,  the Icom or Yaesu network
may designate their own addressing authority for users of their digital
linking system,  and there is a good chance they already do.

I think there is a false belief here in the first place that the community
in question which is separate from the internet relies upon IANA or ARIN
registry information to continue existing or using address space;  Or that the
contact has any "ownership",  "resource holdership",  or  "network management"
purpose,  for anything related to 44/8  other than a purpose of
co-ordination  for
a SUBSET of the likely AMPRnet  44/8  users  when considering
CERTAIN applications of AMPRnet  where interoperability with internet was
a goal.

And 44/8 commonly for discrete isolated networks;  similar to RFC1918,
But predating RFC1918  by almost two decades.Consider that
10.0.0.0/8   COULD have been a substitute for many 44/8 applications.

My understanding is this 44/8 allocation predates the public internet;
and its normal everyday usage is completely separate from public internet
IP having been actually utilized on this space first.   People sought an
allocation from IANA originally,  but that does not give IANA nor
any contact listed by IANA "ownership" or  "management" authority
over usage of this IP address space  outside of their registry which
is supposed to accurately cover the internet: but the AMPRnet is Not
a block of networks on the internet,  and not under the purview
of IETF or IANA, anyways  ---  its just a community that uses
TCP/IP mostly in isolated discrete networks which can be neither
allocated,  nor managed,  nor get their individual assignments
within 44/8 from any central authority.

Although ARDC provides an option to do so --- these 

Re: Apple devices spoofing default gateway?

2019-03-14 Thread Jimmy Hess
On Thu, Mar 14, 2019 at 7:29 AM Simon Lockhart  wrote:

> Apple devices, but what's more strange is that we're only seeing it where
> those Apple devices are connected to Cisco 1810 and 1815 APs, and where those
> APs are connected to a Cisco WLC running v8.5 software. If we downgrade the
> WLC to v8.2 the problem goes away (but v8.2 doesn't support 1815 APs, so we

Apple's Bonjour protocols include something called Apple Bonjour Sleep Proxy
for Wake on Demand ---  When a device goes to sleep,  the Proxy that
runs on various
Apple devices is supposed to seize all the IP and MAC addresses that
device had registered,
so it can wait for an incoming TCP SYN, (and if one's received,  then
signal the
sleeping device to wake up and process the connection.)

Bonjour and the related mDNS protocols used for AirPlay/AirPrint/etc
are built on Link-Local
multicast.   I wonder what would happen if  some random Wireless LAN
controllers malfunctioned,
and decided that it would like to ignore that Link-Local restriction
and proxy those packets b/w
subnets anyways, as if they had been unrestricted multicast or
Unicast,   Possibly with the
source IP address on registration Mangled to or  "gateway'd"  from the
 router's  IP address.

(Or perhaps they wanted to have a feature to let someone  AirPlay from
a different VLAN than
another device?)

Either way  playing around with the source IP address on the
Link-local m/c packets
might accidentally  get them a  Default Gateway IP address  registered
with other workstations
as a mobile device that's gone to sleep and needs a proxy.

-- 
-JH


Re: A Zero Spam Mail System [Feedback Request]

2019-02-18 Thread Jimmy Hess
On Sun, Feb 17, 2019 at 8:05 PM Viruthagiri Thirumavalavan
 wrote:

> These guys always are the know-it-all assholes. They don't listen. They don't 
> want to listen.

I would simply like to remind everyone that NANOG is a Mailing list
that has some rules.
They could be found here:   https://www.nanog.org/list
I would like to request for the Communications Committee to look at
this thread,  except the list  of
NANOG Committee members seems to be secret -  https://nanog.org/governance/cc

The message I reply to seems to be a long string of major deviations
from AUP, specifically:

:  4.  Postings that are defamatory, abusive, profane, threatening, or
include foul language,
:  character assassination, and lack of respect for other participants
are prohibited.

We get really tired about people bringing up disputes and personal
issues -- this is not the right place.
> Long story short I solved the email spam problem. Well... Actually I 
> solved it long time back. I'm just ready to disclose it today. Again...

While you have some technical solution and some ideas that appear to
have merit such as  "Isolated
mailboxes"  (Isolated Mailboxes are not really a new idea -- but there
is currently no agreed
protocol/standard to make such function convenient enough for people to use);

Prompting e-mail senders to answer "CAPTCHA"  sounds too burdensome to
use;  This is also
not Spam-Free,   as  spammers will find ways to answer CAPTCHAs.

The overall claims to have "zero" spam or completely "solved" the spam
problem technically
are way too bold and are close to "Product Marketing" in my view,
since some Dombox dot com
service is being advertised.

Most Network Operators like those that subscribe to NANOG usually have
little to say about
technical details involved in developing standards regarding e-mail
protocols;  please see
IETF for on-topic discussion groups.

--
-JH


Re: DNS Flag Day, Friday, Feb 1st, 2019

2019-01-31 Thread Jimmy Hess
On Thu, Jan 31, 2019 at 10:33 AM James Stahr  wrote:
[snip]
> So is the tool right in saying that TCP/53 is a absolute requirement of
> ENDS0 support for "DNS Flag Day"?  If so, do we expect a dramatic
> increases in TCP/53 requests over UDP/53 queries?  Or is the tool flawed
[snip]

Their test tool will obviously alert on more error conditions than
what the Flag Day is
specifically about --   One or more of your DNS servers not responding
[OR] TCP/53 not
working are still broken configurations,   But  the brokenness is
unrelated to what the flag
day is about -  In the first case,  better safe than sorry, I suppose:
 Inability to complete
one or more of the tests because of brokenness definitely means that
things are broken.

TCP/53 is a fairly strong requirement,  except if you are supporting
an authoritative-only
server with  no record labels that could result in a >512byte
response, plus no DNSSEC-secured zones,
and even then the AXFR protocol for replication to secondary servers
calls for TCP.

EDNS support is not required.   Authoritative servers that don't support EDNS
and are also compliant with the original DNS standard continue to work
after the workarounds are removed.

The relevant standard does not allow for silently ignoring requests
that contain the EDNS option;
patching the bug in a broken server does not necessarily entail the
larger task of adding EDNS support
-- achieving consistence compliance with either the DNS standard
before EDNS, or the DNS standard after
EDNS, is the requirement.

There are two ways for a DNS server to relay the DNS responses larger
than 512 bytes
1. The server replies with a truncated message with the truncate bit
set, and the client connects
to you over TCP to make the DNS request,OR   The client provided
the EDNS option with a larger packet size,
and you support that, so you send a large UDP reply instead.

A DNS server must support the first of these methods  (The second is
preferable but optional,  and support
for the first method over TCP is mandatory)  if you could ever be
required to handle
a DNS message whose reply is larger than 512 bytes:

All  recursive resolvers fall into this category, and with DNSSEC +
IPv6,   many common queries
will result in an answer longer than the original 512 byte limit of UDP.

--
-JH


Re: DNS Flag Day, Friday, Feb 1st, 2019

2019-01-31 Thread Jimmy Hess
On Thu, Jan 31, 2019 at 6:01 AM Matthew Petach  wrote:

> Google, Cloudflare, Quad9 all changing their codebase/response behaviour on a 
> Friday before a major sporting and advertising event?
> Not sounding like a really great idea from this side of the table.

If your DNS zone is hosted on Google or Cloudflare's servers, then you
should have nothing to worry about,  other than your end users having
a broken firewall in between their DNS resolver and the internet, and then
updating their resolver software

Actually, none of those providers announced detailed plans, at least yet;
and maybe they won't even bother announcing.
they could update their software yesterday if they wanted,  or they could
wait until next week,  and it would still be  "On or Around Feb 1, 2019."
The 'Flag Day' is not a specific moment at which all providers
necessarily push a big red button at the same instant to remove
their workaround for broken DNS servers discarding queries.

> Are we certain that the changes on the part of the big four recursive DNS 
> operators won't cause downstream issues?

Not downstream issues.   They will break resolution of  the
domains which have authoritative DNS servers that
discard or ignore DNS queries which comply with all the
original DNS standards but contain EDNS attributes.

The common cause for this was Authoritative DNS servers placed
behind 3rd party Firewalls that tried to "inspect" DNS traffic and
discard queries and responses with "unknown" properties or sizes
larger than 512  ---  there may also be an esoteric DNS proxy/
balancer implementation with bugs, or some broken authoritative
server implementations running software that was more than 10 years
past End of Support at this point.

If your authoritative DNS service sits behind such a broken device or
on such broken DNS server,
then clients already have troubles resolving your domains,  and some time
after the DNS Flag day,  it will likely stop working altogether.

> As someone noted earlier, this mainly affects products from a specific 
> company, Microsoft, and L7 load balancers like A10s.  I'm going to hope legal 
> teams from each of the major recursive providers were consulted ahead of time 
> to vet the effort, and ensure there were no concerns about collusion or 
> anticompetitive practices, right?

I wouldn't be too concerned.The operators of a recursive DNS service
very likely won't have an agreement giving them a duty to  Microsoft,
A10, etc;
If  you have a software or service that you expect to interoperate with DNS
resolvers,  then its on you to make sure your  implementation of the software
or service complies with the agreed standards regarding its processing
of compliant messages.

-- 
-JH


Re: DNS Flag Day, Friday, Feb 1st, 2019

2019-01-30 Thread Jimmy Hess
On Wed, Jan 30, 2019 at 9:51 PM Christopher Morrow
 wrote:
> you do realize you are proposing to make a breaking change (breaking change to
> a global system) on a friday.  delaying until the following monday would not 
> have

There are reasons to prefer a Friday over other days as well, but the
internet doesn't
schedule around random participant's personal preferences.

Besides,  its a substantial misrepresentation of what the DNS Flag day
is to describe it
as a "breaking change"  made on a certain date  - changes won't finish
in a week, changes
won't finish in two weeks  --- every day of the week may be affected
until the gradual process
of every OS and DNS vendor releasing and every end user upgrading finishes.

Each software vendor and service provider will have their very own
update schedules  regarding
on what exact date the next  version release and every manager of a system with
a DNS Resolver software installation  will have their own  choice on when they
actually install the next update at their site.

Just because all the major maintainers of DNS resolver software agree
all releases after
tomorrow  will remove the workarounds for broken DNS servers/firewalls
that silently discard
queries does not mean every software vendor is shipping their new code
to release on Feb 1,
_and_  every end user is  rushing to upgrade their DNS resolver to
remove the workarounds.

--
-JH


Re: SMTP Over TLS on Port 26 - Implicit TLS Proposal [Feedback Request]

2019-01-13 Thread Jimmy Hess
On Fri, Jan 11, 2019 at 6:23 PM Viruthagiri Thirumavalavan
 wrote:

> I'm trying to propose two things to the Internet Standard and it's related to 
> SMTP.
> (1) STARTTLS downgrade protection in a dead simple way
> (2) SMTPS (Implicit TLS) on a new port (26). This is totally optional.

A new Well-Known Port allocation should come from IANA if required, not
by random cherrypicking;   Port 26 has not been assigned for transport,  and
might be required for a different application. 465 is already allocated for
SMTP over TLS.

If you are using DNS Records to prevent downgrades anyways,  then there
should be no need nor valid justification for using an extra port number;  the
client SMTP sender can be required to inspect the DNS Record and find in
the record a signal that TLS is mandatory,  and the smtp client must not proceed
past EHLO  other than to STARTTLS immediately.

> e.g. mx1.example.com should be prefixed like starttls-mx1.example.com.

This is a layering violation/improper way of encoding information in the DNS.
The RFCs that specify the MX RR data have already been written. Special names
in the LABEL portion of a record cannot have special significance for
MX records,
Because it would be  backwards-incompatible with current MX records.

For example,  I may very well have a host named
"starttls-mx1.example.com"  today,
based on current standards which is not used solely for TLS SMTP,   Or
it might not
even support TLS SMTP ---  Significance cannot be added to strings in
the DNS that
did not exist in the original standard,  due to potential conflicts
with existing implementations.

If you want a DNS format that behaves differently, then you should
either get a new RR type, or
utilize a TXT record  ala  DKIM, SPF, DMARC.

Also,  using a DNS Record prefix, TXT,  new RR,  or whatever still suffers from
the same downgrade attacks you are concerned about  (DNS Response
Mangling/Stripping),
unless DNSSEC is a mandatory  requirement  in order to use the facility.

The DANE Facilities and other IETF drafts address this much more adequately.
See RFC8461 -- https://tools.ietf.org/html/rfc8461
RFC 8461 seems to solve the same problem and does a better job.

> Where "starttls-" says "Our port 25 supports Opportunistic TLS. So if 
> STARTTLS command not found in the EHLO
> response or certificate is invalid, then drop the connection".

Wait... what you are suggesting is not Opportunistic TLS, but  Strict TLS;
or a SMTP equivalent to  HTTP's  HSTS.

You could equally suggest a  SMTP  Banner Pattern for such a feature;
instead of trying to overload
the meaning of some DNS label substring.

220   smtp.example.com "Welcome to the example.com SMTP Server"
strict-tls=*.example.com;  max-age=604800; includeSubDomains

> (2) SMTPS Prefix
> Use this prefix if you wanna support Implicit TLS on port 26 and 
> Opportunistic TLS on port 25.
> e.g. mx1.example.com should be prefixed like smtps-mx1.example.com.

Again, this is not useful ---  vulnerable to downgrade attacks which
are equivalent to Port 25 SMTP TLS stripping.
The TLS stripper  simply  intercepts  outgoing TCP Port 26 SYN Packets
and responds with TCP RESET.

Rewrites the MX response to DNS queries   if the record begins with
"smtps-XXX"  to   "-XXX"
with the same IP addresses in the additional section  and caches the A
response  for the  generated hostnames.

--
-JH


Re: Should ISP block child pornography?

2018-12-09 Thread Jimmy Hess
On Fri, Dec 7, 2018 at 12:08 AM Lotia, Pratik M
 wrote:
> Hello all, was curious to know the community’s opinion on whether an ISP 
> should block
> domains hosting CPE (child pornography exploitation) content?

"Whether an ISP should block" ?!

Probably not in most cases,  except may be required in some jurisdictions
mostly outside our region that are under authoritarian regime requiring ISPs
block any resource banned at the whim of any blanket order from their executive
(without due process);  this is within the same vein as a phone company
hearing a rumor that a certain payphone is being used for illegal activity and
banning all calls from their customers to/from the number,  under the
presumption that  _all_ calls from that phone are for criminal acts.

Assuming: said hosting IP address is on a remote network:
the ISP does not provide authoritative name service for that domain,   and
the customer accesses the resource over the network not through a cache or
application proxy/other service provided by the ISP  the customer expects
their ISP merely routes packets and does not participate in content,  and an
ISP deliberately interfering with expected connectivity jeapordizes stability
of the network and the ISP's business relationship with their customers;
the best possible affect on the ISP is neutral.

Notable exception is emergencies where blocking an IP address or domain
actually stops behavior such as DoS that directly disrupts the network,
and blocking mitigates a negative affect on the network.

For example,  let's say we receive a report that
www.twitter.com[104.244.42.65]
hosts CPE.In that example, the report should be sent to law enforcement and
Twitter: no action by anyone else should be required UNLESS  Law Enforcement
produces to the public a court order to disconnect/block Twitter's
communication
services, that would normally come after a hearing,  and same principle applies
regardless of if the domain name is a top1000 domain or not.

If each ISP wants to be extra helpful, then perhaps they would like to
log all their
traffic to Twitter (in that example) and forward to law enforcement as suspected
CPE trafficking activity  -- although that is a risky invasion of
customer privacy;
at least reporting suspected potential of access to CPE doesn't deliberately
lobomitze IPs from the network or disrupt traffic:  not all of which traffic is
necessarily CPE-related.

In case the ISP oversteps and blocks Twitter traffic that includes legitimate
non-CPE traffic  (It may even affect e-mail traffic where people are
communicating
with the site to try to identify the CPE for removal); the ISP may
face a loss of
subscribers,  and in that example Twitter would hopefully pursue
various lawsuits
or regulatory complaints against the ISPs blocking their IPs for
deliberately creating
an unreasonable disruption to the network.

Possible negatives for the ISP are the risk of those repercussions
PLUS the ongoing
maintenance costs,  personnel time,  and  other resources required for
the ISP to maintain the blocking policy --  and service the extra blocklist, any
removals or exceptions needed ---  helpdesk hours for all the
additional customer
complaints that will occur;  potential loss of good will and negative
reputational
affects on the ISP.

It begins to seem fairly difficult to business justify the policy and likely
fiscally irresponsible for an ISP to start opening this can of worms.

> On one side we want the ISP to not do any kind of censorship or inspection of 
> customer traffic
[snip]

Blocking domains or IP resources is not MERELY censorship.
Censorship, which is itself
far less objectionable:  is  selective blocking or removing  content,
for example,
redacting a chapter from a book.

Blocking domains or IPs is disconnecting infrastructure, for example: seeking to
block  twitter  due to alleged CPE  has an impact that affects much more
than the CPE --- its like blocking an entire publisher;  it doesn't
matter they have
printed mostly books that don't contain the content you've objected to
-  since you
(ISPs)  lack a censorship system --- censorship is not even an option,
and the measures you're talking about are much more drastic than
censoring content.

Also,  when the domain holder eventually responds and works with law enforcement
to remove the found example of CPE,   the domain block does not go away
on its own -- therefore evidencing it is MUCH more than censorship.

Furthermore, if the domain is then unblocked any other examples of CPE
that had been overlooked (not detected by anybody yet) may become
accessible again.

Its fair to say a domain block is not technically related to content
at all --- its in effect an
"Independent ban"  of access to a generic host identifier registered
to a remote network.

(Generic host identifiers aren't content,  don't refer to content, and
don't have a 1:1 relationship to content)

> Pratik Lotia
--
-JH


Re: using expect to log into devices

2018-07-24 Thread Jimmy Hess
On Tue, Jul 24, 2018 at 9:55 PM, Scott Weeks  wrote:
>
> --- valdis.kletni...@vt.edu wrote:
> From: valdis.kletni...@vt.edu
>
> On Sun, 22 Jul 2018 00:43:35 +0200, Niels Bakker said:
> > Fine as a personal exercise, of course.  The inability to download
> > modules seems sadistic to me, though.
>

Yeah... just download RANCID and check the command line options.
Expect is mainly of historical interest,  and  the code already exists in
several forms, so no need to completely re-invent the wheel (as a square)
here.

I call shenanigans about the avoidance of Perl modules.No real-world
system
has such constraints.

Besides,  Expect itself is a module / extension of the Tcl language and
requires the
use of dynamically-loaded extension libraries for pattern matching and
various functions,
so using Expect would break the  "No modules rule".

If you're not allowed the use of modules,  then your implementation option
is pretty much
to write in something that compiles to straight machine language.

--
-JH


Re: BGP in a containers

2018-06-15 Thread Jimmy Hess
On Thu, Jun 14, 2018 at 7:22 PM, Michael Thomas  wrote:
> So I have to ask, why is it advantageous to put this in a container rather
> than just run it directly  > on the container's host?

There is no real reason not to run it in a container, and  all the
advantages of running ALL applications in standardized containers
(whether the choice be the likes of  vSphere,XEN,KVM,Virtuozzo, LXC, or Docker).

Assuming the host runs containers:  running one app.  outside the
container (BGP) would put the other applications at risk,  since there
could be a security vulnerability in the BGP implementation allowing
runaway resource usage or remote code exploitation,  or in theory,
the install process for that app could "spoil"  the host or introduce
incompatibilities or divergence from expected  host configuration.

One of the major purposes of containers is to mitigate such problems.
For example the BGP application could be exploited but the container
boundary prevents access to sensitive data of other apps. sharing the
hardware; the application installer running in a container cannot
introduce conflicts or impact  operating settings of the host  platform.

Also,  the common model of virtualizing the compute resource calls for
treating hosts as a shared compute farm ---  no one host is special:
any container can  run equally  on other hosts in the same pod,  and
you hardly ever even check which host a particular container has been
scheduled to run on.

Users of the resource are presented an interface for running their
application: containers.No other option is offered... there is no such
thing as "run my program  (or run this container) directly on host X"
option.  
no host runs directly any programs or  services which have configurations
different from any other host,  and also every host config is about
identical  other than hostname & ip address; Simply put: being
able to run a program outside a container would violate the
service model  for   datacenter compute services  that is
most commonly used these days.


Running the BGP application in a container on a shared storage system managed by
a host cluster would also make it easier to start the service up on a
different host when
the first host fails or requires maintenance.

On the other hand, running directly on a host,  suggests that
individual hosts need
to be backed up again,   and  some sort of manual restore of  local
files from the lost host
will be required to copy the non-containerized application to a new host.

> Mike
--
-JH


Re: Application or Software to detect or Block unmanaged swicthes

2018-06-07 Thread Jimmy Hess
On Thu, Jun 7, 2018 at 3:57 AM, segs  wrote:
[snip]
> Please I have a very interesting scenario that I am on the lookout for a
> solution for, We have instances where the network team of my company bypass
> controls and processes when adding new switches to the network.

The  NETWORK management team of your own company?

The answer is adequate change controls, policy, procedures,
technical auditing (Such as logging of all CLI commands),  and
mandatory training with clearly-communicated in advance severe
consequences for violators of the compulsory security policy that
all switches must be of X type and configured according to Y process
before being connected to the network, signed off  by management.

There are technical controls that can be implemented to help prevent/
mitigate end users  from attaching an unauthorized switch to a normal
access port,

But as you mention...  clearly an employee on the NETWORKING team
can likely just configure a port as  Trunk and  circumvent any technical
protections.

Two methods that could effectively prevent End Users (not Network/IT team) from
connecting unmanaged switches would be:

*  Port-security feature common on many managed switches  that allow you to
   limit the number of MAC Addresses that can use a port to 1 or given
number of MAC addresses.
   (Use a short MAC address aging time  such as 30 seconds to allow
people to unplug
and plug a different device in, but a low MAC address account and
Err-Disable violation
to  kill the port if a Switch is connected)

 * 802.1x Wired Port Security -   More detailed system that requires a
   PKI + RADIUS server infrastructure and authentication by every
client to every port.


--
-JH


Re: Whois vs GDPR, latest news

2018-05-22 Thread Jimmy Hess
Perhaps it's time that some would consider  new RBLs  and  Blackhole
feeds  based on :
Domains with deliberately unavailable WHOIS data.

Including  domains whose  registrant has failed to cause their domain
registrar and/or registry to
list personally identifiable details for registrant and contacts   on
servers available to
the public using the TCP port 43 WHOIS service.

For any reason,  whether use of a privacy service,  or by a  Default
"Opt-to-Privacy Rule" enforced
by a  local / country-specific regulation such as GPDR.

Stance

* Ultimate burden goes to the REGISTRANT of any Internet Domain to take the
  steps to ensure their domain or IP address registry makes public
contacts appear
  in WHOIS at all times for  their Domain and/or IP address(es) --- including
  a traceable registrant name AND direct Telephone and E-mail contacts
 to a responsible
  party specific to the domain from which a timely response is available and
  are not through a re-mailer or proxy service.

People may have in their country a legal right to secure control of
a domain on a registry
And anonymize  their registration:"Choose not to have personal
information listed in WHOIS".

HOWEVER, Making this choice might then result in adverse consequences
towards connectivity AND accessibility to your resources from others
during such times
as you exercise your option to have no identifiable WHOIS data.

The registration of a domain with hidden or anonymous data only ensures
exclusivity of control.  Registration of a domain  with
questionable or unverifiable personal
registrant or contact information does not guarantee that  ISPs  or
other sites connected to the
internet will choose to allow their own users and DNS infrastructure
access to   un-WHOISable domains.

Then have:
---

* Right-hand sided BLs for Internet domains with no direct
WHOIS-listed registrant address and  real-person contacts
including  name, address, direct e-mail and phone number valid for
contact during the domain's operational hours.

* Addons/Extensions for Common Web Browsers  to check the BLs  before
allowing access to a HTTP or HTTPS  URL.  Then display a prominent
"Anonymized Domain:
Probable  Scam/Phishing Site"   within the Web Browser MUA;

And limit or disable high-risk functions for anonymous sites:  such as
 Web Form Submissions,
Scripting,  Cookies,  Etc   to  Non-WHOIS'd domains.

if   the domain's  WHOIS  listingis  missing  or showed a privacy
service, or had appeared  t
runcated or anonymized.

* IP Address DNSBL for IP Address allocations  with no direct
WHOIS-listed  holder address real-person contacts.
including name, address, direct e-mail and phone number valid for
contact during the hours when that IP address
is connected to the internet.

* DNS response policy zones (for resolver blacklists)  for internet
domains with no WHOIS-listed registrant &
real-person contacts  including name, address, direct e-mail and phone
number valid for contact.


The EU  GDPR   _might_  require  your  registrar to offer you the
ability Opt by default to mask your
personal information and e-mail from domain or IP  WHOIS data,

But  should you  choose  to Not opt to have identifiable contacts and
ownership published:

There may be networks and resources that will refuse access,  Or whose
users  will not be allowed
to resolve your DNS names,  due to your refusal to identify
yourself/provide contacts   for   vetting,
identifying and reporting technical issues, abuse, etc.

Real-Life equivalent  would beDirectories/Listings of
Recommended businesses that
refuse to accept listings from businesses whose  Owner  wants to stay Anonymous.

Or  people who don't want to buy their groceries from random shady
buildings  that don't even
have a proper sign out.

--
-JH

On Wed, May 16, 2018 at 4:10 PM, Constantine A. Murenin
 wrote:
> I think this is the worst of both worlds.  The data is basically still
> public, but you cannot access it unless someone marks you as a
> "friend".
>
> This policy is basically what Facebook is.  And how well it played out
> once folks realised that their shared data wasn't actually private?
>
> C.
>
> On 16 May 2018 at 16:02, Brian Kantor  wrote:
>> A draft of the new ICANN Whois policy was published a few days ago.
>>
>> https://www.icann.org/en/system/files/files/proposed-gtld-registration-data-temp-specs-14may18-en.pdf
>>
>> From that document:
>>
>> "This Temporary Specification for gTLD Registration Data (Temporary
>> Specification) establishes temporary requirements to allow ICANN
>> and gTLD registry operators and registrars to continue to comply
>> with existing ICANN contractual requirements and community-developed
>> policies in light of the GDPR. Consistent with ICANN’s stated
>> objective to comply with the GDPR, while maintaining the existing
>> WHOIS system to the greatest extent possible, the Temporary
>> Specification maintains robust 

Re: Yet another Quadruple DNS?

2018-03-31 Thread Jimmy Hess
On Sat, Mar 31, 2018 at 7:08 PM,   wrote:

> From what I can tell, this has not been "allocated" (probably closer to a 
> LOA)?
> All contacts and maintainers on the inetnum object are still APNIC's, 
> Cloudflare
> does not have free access to do whatever they want here.

Did you ask WHOIS?Looks like the  /24  isPortable-Assigned to
a joint project.
I don't know that APNIC is necessarily required to make a public consultation;.

If it was from an ARIN block; ARIN wouldn't have to "ask the public either"...
the  Number Resource Policy allows for /24 micro-allocations for
critical infrastructure,which exactly describes the nature of an anycasted
/24  for  the service IP of a shared open DNS recursive resolver service,
and the RIR could potentially allocate from any block under their control that
were deemed most suitable for the critical infrastructure.

Then again,  maybe APNIC made a consultation at their February meeting
in Nepal?
One thing i'm sure is they wouldn't have to ask NANOG's permission.

$ whois 1.1.1.1
% [whois.apnic.net]
% Whois data copyright termshttp://www.apnic.net/db/dbcopyright.html

% Information related to '1.1.1.0 - 1.1.1.255'
% Abuse contact for '1.1.1.0 - 1.1.1.255' is 'ab...@apnic.net'

inetnum:1.1.1.0 - 1.1.1.255
netname:APNIC-LABS
descr:  APNIC and Cloudflare DNS Resolver project
descr:  Routed globally by AS13335/Cloudflare
descr:  Research prefix for APNIC Labs
country:AU
org:ORG-ARAD1-AP
admin-c:AR302-AP
tech-c: AR302-AP
mnt-by: APNIC-HM
mnt-routes: MAINT-AU-APNIC-GM85-AP
mnt-irt:IRT-APNICRANDNET-AU
status: ASSIGNED PORTABLE
remarks:---
remarks:All Cloudflare abuse reporting can be done via
remarks:resolver-ab...@cloudflare.com
remarks:---
last-modified:  2018-03-30T01:51:28Z
source: APNIC

.



-- 
-JH


Re: Proof of ownership; when someone demands you remove a prefix

2018-03-13 Thread Jimmy Hess
On Tue, Mar 13, 2018 at 1:58 PM, Naslund, Steve  wrote:

I would consider that the RIR WHOIS records are currently the network's
authoritative source of truth about  IP number management.

For 99% of situations there's no such proper thing as "delaying
addressing abuse"
so someone claims they can go dispute the RIR record.   The rare exception
would be  you have  documented  the original contacts and LOAs,  and a stranger
who is a new WHOIS POC sends a request that you disrupt what has now been
a long-established operational network,  and  your customer is
objecting/claiming
the WHOIS record has been hijacked.

In that case:  avoid disrupting the long-established announcement:  to allow the
customer 5 to 10 days  to get it fixed with the RIR  or show you a
court order against
the false WHOIS contacts.

If you started announcing a newly setup prefix,  and it immediately
resulted in a phone call
or e-mail  within a few weeks  from   the resource holder
organization's   RIR-listed
WHOIS contact, then obviously corrective actions are in order to pull that
announcement quickly,  after confirming with the org. listed in WHOIS

That would mean your new announcement is credibly reported as abuse,  AND
"claim of dispute in progress with the RIR" does not hold water  as
any kind of basis
to continue your AS  causing harm to this resource holder.



I would  not blame a legitimate WHOIS contact for immediately escalating to
upstreams and ARIN for  emergency assistance: if they don't  receive an
adequate resolution and removal of the rogue announcement within 15
minutes or so...



While ARIN cannot do anything about the routing issues;  they might be
able to confirm the history of the resource  the Rogue announcement
might include the IP space of 1 or more DNS  or SMTP Servers related to one
or more domain names  that are also  listed WHOIS  E-mail contacts.

You know because ARIN stopped supporting using PGP/GPG keys with POCs
and digitally signed e-mail templates  to formally authorize modifications :


"Wait while we dispute with the RIR" could very well  truly mean:  -

"Please wait while we try to use our rogue IP space announcement  to
quickly setup some

fake SMTP servers on hijacked IPs while we gear up our spamming
campaign to maximum
effectiveness and misuse ARIN's  single-factor  Email-based

password recovery process to fraudulently gain account access and
modify resource
WHOIS POC details  to make it look more like we're the plausible
resource holder."


> The fact that it is a newer customer would make me talk to the RIR direct and 
> verify
> that a dispute is really in progress.
[snip]
> Steven Naslund
> Chicago IL
--
-JH


Re: Proof of ownership; when someone demands you remove a prefix

2018-03-13 Thread Jimmy Hess
On Tue, Mar 13, 2018 at 9:23 AM, Sean Pedersen
 wrote:

> In this case we defaulted to trusting our customer and their LOA over a 
> stranger on the
> Internet and asked our customer to review the request. Unfortunately, that 
> doesn't
> necessarily mean a stranger on the Internet isn't the actual assignee.  
> [..]

I believe the suggested process would be   submit the stranger's request to
the administrative & technical contacts listed for the organization
and IP resource
in public WHOIS  at the time the request is received,  and in order to
confirm:

Request whether their organization approves that the announcements must be
withdrawn,  and if so:  that they also submit to you a signed official
form to either
revise,  rescind, or repudiate  the existing LOA provided by that WHOIS contact.


Then reply to the  "stranger"  that official documentation is required
to cancel the
announcement, and you are unable to verify you have the right to make
the request,
and you will forward their message to the IP Address registry and
officially listed WHOIS and customer technical contacts  who must
approve of the request,
before any further actions can be taken.

--
-JH


Re: Blockchain and Networking

2018-01-23 Thread Jimmy Hess
On Tue, Jan 23, 2018 at 9:39 AM, John R. Levine  wrote:

>
> the problem isn't keeping the database, it's figuring out who can make
> authoritative statements about each block of IP addresses.

That is an inherently hierarchical question since all IP blocks originally
> trace back to allocations from IANA.
>

Well;  It's a hierarchical question only because the current registration
scheme is defined in
a hierarchical manner.  If  BGP were being designed today,  we could
choose  256-Bit  AS numbers,
and allow  each mined or staked block to yield a block of AS numbers
prepended by some
random previously-unused 128-bit GUID.

However,  a blockchain could also be used to allow an authority to make a
statement representing
a resource that can be made a non-withdrawable statement ---  in other
words,  the authority's role
or job in the registration process is to originate the registration,  and
after that is done:
their authoritative statement is accepted ONCE per resource.

The registration is permanent ---  the authority has no ongoing role and no
ability to later make
a new conflicting statement about that same resource,   and   the
authority  has  no operational role
except to originate new registrations.

This would mean that a foreign government could not coerce the authority
to  "cancel"  or mangle
a registration to meet a political or adversarial objective of disrupting
the ability to co-ordinate networks,
since the  number registry is an authority of  limited power:  not an
authority of complete power.


We can have arguments about the best way to document the chain of
> ownership, and conspiracy theories about how the evil RIRs are planning to
> steal our precious bodily flu^W^WIPs, but "put it in a blockchain!"
> Puhleeze.
> R's,
> John
>

--
-JH


Re: Blockchain and Networking

2018-01-23 Thread Jimmy Hess
On Tue, Jan 9, 2018 at 10:22 AM, William Herrin  wrote:

> On Tue, Jan 9, 2018 at 1:07 AM, John R. Levine  wrote:
>
>
The promise of blockchain is fraud-resistant recordkeeping, database
management,  AND
resource management maintained by a distributed decentralized network which
eliminates or reduces the extent to which there are central points of trust
involved in
the recordkeeping,

AND can implement resource-management rules or policies programmatically
and
in an unbiased way  (E.G.  "Smart Contracts").

For example:  A decentralized internet number registry could use a
blockchain
as the means of making the public records showing the transferrence of the
ownership of a particular  internet number from the originator to the
registrant.

The potential is there to go a step beyond replacing RPKI,   as a blockchain
could be the AS number authority itself,  thus eliminating the need to
have any centralized organizations for  tracking and managing
number resource assignments.

> How about validating whether a given AS is an acceptable origin for a set
> >> of prefixes?
> That's a job for ordinary PKI. Any time you have a trusted central
> authority to serve as an anchor, ordinary PKI works fine. The RIRs serve as
>

See:  That's the problem.   Ordinary PKI  DEPENDS on centralized trust --
that is, with PKI there  are  corruptible or potentially corruptible or
compromisable entities in your system  that you assign an unwarranted or
unnecessary level of trust to.

That would include organizations such AS Number and IP Address registries.
Under the current system,  they retain an Unwarranted level of trust,  for
example:  ARIN  Could  Delete an IP address allocation or an AS number
allocation  after it was assigned,because  someone else told them to,
or  maybe someone didn't like the content on your website and
coerced/tricked
someone who manipulated or legally forced the central figure to do so.

This would include whatever entities can be signing authorities of your PKI.
This includes any organization with unsecured resource management
capabilities,
such as the DNS Root server, TLD Server operators,  and Domain registrars.

Which includes the risks:
(1)  The signing authority could be breached by an outsider or insider
attack
(2)   The signing authority could prove untrustworthy or later change
the rules.
(3)   The signing authority could be covertly corrupted by a government
authority
or foreign power: to support nefarious goals or surveilance or
censorship.

For example:  A DNS Registrar or TLD Registry could make a change to the DS
Key or remove
the DS Key and confiscate a domain to intercept traffic, without even the
permission
of the original registrant.


-- 
-JH


Re: AS Numbers unused/sitting for long periods of time

2018-01-02 Thread Jimmy Hess
On Tue, Jan 2, 2018 at 4:46 PM, James Breeden  wrote:


> I.e. some form of ARIN or global policy that basically says "If AS number
> not routed or whois updated or used in 24 months, said AS number can be
> public noticed via mailing list and website and then revoked and reissued
> to a pending, approved AS request"
>

Why?   What is the justification for a  reclamation project?
Besides this is  Outside the purview, scope, or powers that RIRs/
ARIN in particular have put into their public policy development process.
of.

Number resource policies govern management regarding
number resources:  allocation, assignment,  and transfer.

Policies are not able to set fees or conditions on any existing services.
Revoking an unused resource would require a condition on
existing services that cannot be defined by a number resource policy.

EXISTING  number resources in ARIN region in particular are serviced under
the RSA contract that include terms specifically informs the end user that
ARIN is disclaiming itself from having any ability  or authority to
revoke any unused resources or cancel any services for lack of use.

> "ARIN will take no action to reduce the Services currently provided for
Included Number Resources due to lack of utilization by the Holder, and
(ii) ARIN has no right to revoke any included Number Resources under
this Agreement due to lack of utilization by Holder.

However, ARIN may refuse to permit transfers or additional allocations of
number resources to Holder if Holder’s included Number Resources are not
utilized in accordance with Policy."


I'm amazed at the number of AS numbers that are assigned, but not actively
> being used.

"Actively being used"   is determined only by the resource holder.

And before you come back with "Well they may be using it internally

where it doesn't need to be in the GRT" - that's why we have Private AS
> numbers.
>

It is a valid technical decision to use AS numbers internally,  and
there are reasons  Not to use the small pool of available  Private AS
numbers,

Even  if the private AS numbers  might be available for some legitimate use
cases;
there is no reason to favor them when privately interconnecting networks
across multiple organizations or policy domains, and it is perfectly valid
to
maintain uniquely-registered AS numbers for such internal purposes.


--
-JH


Re: Waste will kill ipv6 too

2017-12-21 Thread Jimmy Hess
On Wed, Dec 20, 2017 at 3:57 PM, Mark Andrews  wrote:
[SNIP]

25B  estimate for earth's carrying capacity for humans is likely on the
high side,
but sure: IPv6  should suffice  until  we have a few planets' worth of
humans,
and require an  interstellar   IP network  with end-to-end  comms between
every
remote device in our galaxy cluster  ---   and may have to fallback to
planetary NAT or LISP  for some applications.

Something  should probably go into some FAQ at some point.

"Q:   IPv6 could still run out of addresses / Waste  will kill ipv6 too  /
Etc."

"A:   No,  although there is an occasional point of confusion regarding
IPv6 that we
will still run out of addresses:  that is a highly-improbable event:
please see list archives.

>From evaluation of the arithmetic, there is not a reasonable forecast model
that can be
made that would start from realistic assumptions  about network growth and
could come to the conclusion that depletion of IPv6  would be a possibility
under  current regional registry allocation policies based on justified
need,
even allocating up to a couple  dedicated  /48s  per  person  up to the
expected maximum population capacities of earth


--
-JH

When the IETF decided on 128 bit addresses it was taking into consideration
> /80 sized subnet.  Prior to that it was looking at a 64 bit address size
> and allocating addresses the IPv4 way with lots of variable sized
> networks.  This was changed to /64 subnets to accomodate 64 bit MAC.  After
> that there was discussion about how many subnet should be enough for 99.99%
> of sites which gave /48 per site using /64 sized network.  That
> 281474976710656 sites or 35184372088832 out of the /3 we are currently
> allocating from.
>
> Now there are very few sites that need 65536 subnets and those that do can
> request additional /48’s.
>
> Now if you assume the earth’s population will get to 25B, and every person
> is a site, that still leaves 35159372088832 sites.
> And if each of those people also has a home and a vehicle, that still
> leaves 35109372088832 sites.
>
> Handing out /48’s to homes was never ever going to cause us to run out of
> IPv6 space.  Even if the homes are are connected to multiple providers
> there isn’t a issue.
>
> Mark
>
> > On 21 Dec 2017, at 7:57 am, William Herrin  wrote:
> >
> > On Wed, Dec 20, 2017 at 1:48 PM, Mel Beckman  wrote:
> >
> >> I won’t do the math for you, but you’re circumcising the mosquito here.
> We
> >> didn’t just increase our usable space by 2 orders of magnitude. It’s
> >> increased more than 35 orders of magnitude.
> >>
> >
> > Hi Mel,
> >
> > The gain is just shy of 29 orders of magnitude. 2^128 / 2^32 = 7.9*10^28.
> >
> > There are 2^128 = 3.4*10^38 IPv6 addresses, but that isn't 38 "orders of
> > magnitude." Orders of magnitude describes a difference between one thing
> > and another, in this case the IPv4 and IPv6 address spaces.
> >
> >
> > Using a /64 for P2P links is no problem, really. Worrying about that is
> >> like a scuba diver worrying about how many air molecules are surrounding
> >> the boat on the way out to sea.
> >>
> >
> > It's not a problem, exactly, but it cuts the gain vs. IPv4 from ~29
> orders
> > of magnitude to just 9 orders of magnitude. Your link which needed at
> most
> > 2 bits of IPv4 address space now consumes 64 bits of IPv6 address space.
> >
> > Then we do /48s from which the /64s are assigned and we lose another 3 or
> > so orders of magnitude... Sparsely allocate those /48s for another order
> of
> > magnitude. From sparsely allocated ISP blocks for another order of
> > magnitude. It slips away faster than you might think.
> >
> > Regards,
> > Bill Herrin
> >
> >
> > --
> > William Herrin  her...@dirtside.com  b...@herrin.us
> > Dirtside Systems . Web: 
>
> --
> Mark Andrews, ISC
> 1 Seymour St., Dundas Valley, NSW 2117, Australia
> PHONE: +61 2 9871 4742  INTERNET: ma...@isc.org
>
>


Re: WiFi - login page redirection not working

2017-11-29 Thread Jimmy Hess
On Wed, Nov 29, 2017 at 10:34 PM, Ramy Hashish 
wrote:


> Two points with this problem: 1)Is there a "non client" solution to the
> problem of the WiFi login notification not showing up on the clients after
> connecting to the WiFi network?
>

A  Captive portal  embedding WispR  XML data
for connections from browsers/OSes that request a test page upon network
access.
https://stackoverflow.com/questions/3615147/how-to-create-wifi-popup-login-page

However if WPA2 authentication is not method used for access,  then network
traffic is
vulnerable and not secured.

AP solutions that are non-standard being a "Non client" solution and using
"Open Wireless" mode SSIDs are likely so deficient in security as to be
an unreasonable risk for users to actually connect to.


> Second, anything to be done from the AP to show the landing page even if
> the page requested is HTTPs?
>

If TLS  would somehow allow you to redirect or create a HTTPS connection
from
a domain name that is not yours, then this could obviously be exploited for
attacks.

--
-JH


Re: Templating/automating configuration

2017-06-15 Thread Jimmy Hess
On Wed, Jun 14, 2017 at 3:55 PM, Job Snijders  wrote:
> On Wed, Jun 14, 2017 at 09:35:59PM +0100, Nick Hilliard wrote:
>> Graham Johnston wrote:
>> > Would you be able to provide any further insight into your Don’t #5 –
>> > “Don’t agree to change management. Managers are rarely engineers and
>> > should not be making technical decisions. (nor should sales)“.
>>
>> What do you think the purpose of change control / management is?
>
> well, http://dilbert.com/strip/1995-05-29

> On Wed, Jun 14, 2017 at 09:35:59PM +0100, Nick Hilliard wrote:
>> What do you think the purpose of change control / management is?

Bureaucratic change control implementations using the ITIL view
of change control with a formal CAB are likely an (over)reaction
to human mistakes causing outages,  most of which could probably
be avoided  with a simpler  less-formal process such as peer or
team review.

Change control functions as a risk transfer away from operations teams to
CAB board members,   since if things go wrong b/c of a change: it is now
the CAB's fault.There may also be bias towards change-aversity
if the CAB cannot be held accountable for issues that come from
delaying or rejecting  important maintenance.

Overall purpose for change control / management,  when applied to  substantial
modifications to an operating environment or configuration of
business-critical network/applications is

To mitigate possibility of damage/outages from high-impact / high-risk
changes made by humans to systems and network-devices by
requiring standards of formal written documentation and  planning,
combined with  peer review And approvals by business and technical
stakeholders for the maintenance time,  including evaluation  of
exact proposed configuration changes,   implementation plans,
and backout/contingency plan:   for  possible errors or omissions.


But as with most things
can be taken to an unreasonable extreme.

The use of change management procedures has a high
associated cost, b/c the time and labor to implement
even simple relatively low-risk changes can be dramatically
increased with an unreasonable delay,  and extensive test labs
may be necessary.There may actually be increases in
various risks,  if any kind of maintenance is delayed or
lost in the paperwork.

-- 
-JH


Re: Government agency renting or selling IP space

2017-03-16 Thread Jimmy Hess
On Thu, Mar 16, 2017 at 7:39 PM, Mel Beckman  wrote:
> Bill,
> Is there a technically a restriction preventing swiping of this IP space when 
> it's being rented? How is that different from an ISP swiping  its customers 
> that are renting bandwidth?

This is a difference between an "Allocated" block of addresses to an
ISP and an "Assigned" network prefix belonging to a end-user.

End-User Orgs typically lack technical ability to create re-assignment
records showing a different
organization,  b/c  they have IPs assigned for a specific network

 ISPs / Co-location providers who are ARIN members with Allocated addresses
can Re-Allocate to a downstream ISP or Assign a network prefix from allocated
space to a downstream End-user organization.

An End user can likely show they're an ISP, join ARIN as an ISP member,   &
request  Direct Assignments from ARIN be combined into new Allocations;

If the character of the network changes,  I would expect the new ISP
may have to show information to ARIN establishing that the change to
an ISP Allocation  will be consistent with the NRPM requirements.

(Seeing as Assignments to End-Users and Allocations to ISPs have
different  policies  for creation described in the NRPM,  and there's
no mention in the Policy they can be directly converted  without a
Transfer or Renumber/Consolidate  or Return & renumber)


> -mel via cell
--
-JH


Re: Serious Cloudflare bug exposed a potpourri of secret customer data

2017-03-02 Thread Jimmy Hess
On Thu, Mar 2, 2017 at 5:15 PM, Matt Palmer  wrote:
> On Sat, Feb 25, 2017 at 07:21:48AM +, Mike Goodwin wrote:
>> Useful information on potentially compromised sites due to this:
>> https://github.com/pirate/sites-using-cloudflare
> "This list contains all domains that use Cloudflare DNS"

> That's only marginally more useful than saying "any domain matching /^.*$/";

IIRC, it's quite easy to use the Proxy service without the DNS
service, as long as you are using a paid CF account for the domain
and not a free account.

Also, querying after the fact is not very scientific, because there
may be domains that _were_ using the CF proxy service during the
incident which no longer use CF DNS or proxy servers, for whatever
reason.

If you're going to scrape DNS records to decide, you should probably
be scraping A records for www, and then checking reverse DNS or
matching against possible CF IP addresses, not NS records.
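
For example, a rough sketch in Python (the prefixes below are just an
illustrative sample, not the full set; Cloudflare publishes the
authoritative list of its ranges):

import socket
from ipaddress import ip_address, ip_network

# Illustrative subset only -- fetch CF's published ranges for real use.
CF_RANGES = [ip_network(n) for n in ("104.16.0.0/13", "172.64.0.0/13")]

def behind_cf_proxy(domain: str) -> bool:
    # Resolve the site's "www" A record; test against known CF prefixes.
    addr = ip_address(socket.gethostbyname("www." + domain))
    return any(addr in net for net in CF_RANGES)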

> - Matt
--
-JH


Re: SHA1 collisions proven possisble

2017-03-02 Thread Jimmy Hess
On Wed, Mar 1, 2017 at 10:57 PM, James DeVincentis via NANOG
 wrote:
> Let me add some context to the discussion.

> With specific regard to SSL certificates: "Are TLS/SSL certificates at risk? 
> Any Certification
> Authority abiding by the CA/Browser Forum regulations is not allowed to issue 
> SHA-1 certificates
> anymore. Furthermore, it is required that certificate authorities insert at 
> least 64 bits of randomness
> inside the serial number field. If properly implemented this helps preventing 
> a practical exploitation.”

Yes, they are at risk, of course.  This latest technique does not
appear able to be used to attack certificates, however.  Subscribers
to a CA don't have sufficient control over the contents of the signed
certificates a CA will issue; even if they did, the computational
requirement of the described attack is likely to be slightly out of
reach.

The attack is not such that certs can be directly spoofed, *YET*;
however, that is almost surely a likely possibility in the
foreseeable future.  The existence of this attack serves as valid
evidence that the risk is now very high, for crypto applications of
the SHA1 digest, of further collision attacks against SHA1 being
made practical in the foreseeable future with very little or no
further warning.

If you are still using SHA1 and were expecting to use it forever,
this attack is what gives you your fair warning that in 6 or 7
years, brute-forcing your SHA1 will likely be within reach of the
average script kiddie.

This does not fundamentally change security expectations for SHA1;
such an attack now being feasible with Google's resources is well
within expectations.  However, the "Hype" should be a wake-up call to
some developers who continue to write new software relying upon SHA1
for security, under a false belief that the SHA1 digest is still
almost certain to be fundamentally sound crypto for many years to
come.

If you are writing a program expected to be in use 5 years from now,
and believe SHA1 will continue to meet your existing security
requirements: time to rethink that, and realize the risk is very high
of SHA1 becoming insecure within a decade.

If the "Hype"  behind this Google thing is the catalyst that makes
some developers
think about the future of their choice of crypto algorithms more carefully
before relying upon them,   then that is a good thing.

> - Hardened SHA1 exists to prevent this exact type of attack.

I suppose hardened SHA1 is a non-standard kludge of questionable
durability.  Sure, implement it as a workaround for the current
attack; but the going-forward risk of continuing to use SHA1 remains
qualitatively high.

> - Every hash function will eventually have collisions. It’s literally 
> impossible
>  to create a hashing function that will never have a collision.   [snip]

There may be hashing functions which are likely to never have a
practical collision discovered by humans, because of their size and
the improbability of finding one.

It's mostly a matter of the computing power currently available vs.
the size and qualities of the hash.
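
The generic birthday bound makes that concrete: finding any collision
in an n-bit digest takes roughly 2^(n/2) hash computations, absent
structural weaknesses.  A quick sketch in Python:

for name, bits in (("MD5", 128), ("SHA1", 160), ("SHA-256", 256)):
    print(f"{name}: generic collision work ~ 2^{bits // 2}")

# The SHAttered SHA1 collision took roughly 2^63 computations --
# far below the generic 2^80 bound; that gap is what "broken" means.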

> - Google created a weak example. The difference in the document they 
> generated was a background color. They didn’t even go a full RGBA difference. 
> They went from Red to Blue. That’s a difference of 4 bytes (R and B values). 
> It took them nine quintillion computations to generate the

With some applications, you'd be surprised what evil things you can
do if you change 4 bytes to a malicious value.

For example, if you're digitally signing a binary, 4 bytes is maybe
enough to overwrite a machine language instruction, introducing an
exploitable bug into the machine code.

The latest attack on SHA1 will not, however, allow code signing
schemes following typical code-signing algorithms to be attacked.

--
-JH


Re: SHA1 collisions proven possisble

2017-02-25 Thread Jimmy Hess
On Thu, Feb 23, 2017 at 2:03 PM, Patrick W. Gilmore  wrote:

> For instance, someone cannot take Verisign’s root cert and create a cert 
> which collides
> on SHA-1. Or at least we do not think they can. We’ll know in 90 days when
> Google releases the code.

Maybe.  If you assume that no SHA attack was known to anybody at the
time the Verisign cert was originally created, and that the process
used to originally create Verisign's root cert was not tainted to
leverage such an attack.

If it was tainted, then maybe there's another version of the
certificate that was constructed with a different Subject name and
Subject public key, but the same SHA1 hash, and the same Issuer Name
and Issuer public key.

> --
> TTFN,
--
-JH


Re: Should abuse mailboxes have quotas?

2016-10-27 Thread Jimmy Hess
On Thu, Oct 27, 2016 at 1:35 PM, Dan Hollis  wrote:
> not so much malice as gross incompetence.
> running spamfilters on your abuse@ mailbox, really? that is, for those which
> actually have an abuse mailbox that doesn't bounce outright.

Sorry about that; many networks do perform standard filtering on
messages to Abuse contacts based on DNS RBLs, SPF/DMARC policy
enforcement, virus scans, etc., and do send an SMTP Reject on
detected spam or malware.

If your own mail server's IP appears on Spamhaus, then, yes, you
should expect that any abuse reports you attempt to submit have a
likelihood of being rejected as spam.
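
A minimal sketch of that standard check, in Python with the dnspython
package (zen.spamhaus.org shown as the example list):

import dns.resolver

def listed_on(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    # DNSBL convention: reverse the octets, query under the list zone.
    name = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        dns.resolver.resolve(name, "A")   # any A answer means "listed"
        return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False

print(listed_on("127.0.0.2"))   # the DNSBL test address; prints True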

Abuse/contact mailboxes are not special in this regard, and it would
not be a good practice to leave them unprotected.  If anything:

For many networks, mail sent to abuse mailboxes is likely aliased to
the normal mailbox of sysadmins who have access to high privileges.
As such, these mailboxes may require even stronger protection than
other accounts, because of the increased risk when a mistake is made.

There is a reason that phone numbers, and not just e-mail addresses,
are listed in the WHOIS records.

If you get an SMTP reject, then call the Abuse POC of the
organization you need to report abuse from.

> -Dan
--
-JH


Re: nested prefixes in Internet

2016-10-11 Thread Jimmy Hess
On Mon, Oct 10, 2016 at 12:24 PM, Niels Bakker 

Re: Domain renawals

2016-09-22 Thread Jimmy Hess
On Thu, Sep 22, 2016 at 9:37 AM, Doug Barton  wrote:
> On 09/21/2016 01:44 PM, Richard Holbo wrote:
>> FWIW, as I'm in the middle of this right now. It would appear that many of
> What do you think glue records are, and why do you think you need them? :)
> (Those are serious questions, btw)

Glue records are also called "Host records", or alternatively
"Nameserver" records.  These are A and AAAA records for your domain
name which appear in the parent TLD zone, instead of the child zone.

Host records also typically appear in WHOIS, for example:   "$ whois
ns5.yahoo.com"

If you think your registrar does not support them, then you're
probably having trouble with your registrar's user interface, and
just don't have the right procedure, because the use of host records
is essential for at least one domain to self-host DNS.


These records are non-authoritative and belong in the reply
delegating nameservers for your domain to your servers, and you need
to duplicate a copy of all your NS, A, and AAAA records in your child
zone, which must be identical to the parent's version of the records.

For example, suppose your domain name is "Example.com"
And you want your nameservers to be  NS1.example.com,
NS2.example.com,  NS3.example.com.

Because the nameservers exist in the same domain name which references them,
the required DNS lookup graph is circular,  and your DNS zone becomes an island!

In order for a client to figure out what NS1.example.com resolves
to, it first needs to be able to find a nameserver for Example.com,
which is NS1.example.com.

This is what is circular without a hint in the Additional section of
the DNS reply from the parent nameserver.

The glue record in the parent zone is used to tell the parent TLD
server to include the IP address of your nameserver in the Additional
section of the DNS reply, so you can bootstrap DNS resolution for
Example.com.
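
You can watch this happen with a sketch like the following (Python,
with the dnspython package), which queries one of the .COM authority
servers and prints the glue from the Additional section:

import dns.message
import dns.query
import dns.resolver

# Ask a .COM authority server for example.com's NS records.
tld_server = str(dns.resolver.resolve("a.gtld-servers.net", "A")[0])
referral = dns.query.udp(dns.message.make_query("example.com", "NS"),
                         tld_server)

for rrset in referral.additional:
    print(rrset)   # the glue: A/AAAA records for the delegated NS names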



> Doug
--
-JH


Re: Domain renawals

2016-09-21 Thread Jimmy Hess
On Wed, Sep 21, 2016 at 8:35 PM, John Levine  wrote:
>>For domain registration I found that joining the GoDaddy Domain Club
>>( $120/year or less if you pay ahead for multiple years [1] ) ...
> There's a lot of registrars with prepay discounts.  Gandi's domains
> are cheaper if you prepay $600, a lot cheaper if you prepay $2000.

Prepayment makes no sense unless you are planning on maintaining
more than 10 domains, which warrants much more due diligence than
registering one or two domains.

Also, if you're maintaining one or two domains, then it is sensible
to pay more for a registrar that provides better support, or a more
intuitive web interface.  For maintaining a larger number of domains,
perhaps more powerful management tools are more useful, and possibly
ease-of-use is a lower priority.

Therefore, it depends on what you are doing with domains.
I know of registrars that are $8.99 per year and $8.39 per year for a
.COM, with no prepayment necessary for those rates, and small
discounts for prepay.

* They say "cheap, secure, reliable: pick two".  But that's not
really how it is.

It's really more like "inexpensive, good support, feature-complete:
pick two".

Because no registrar is totally "secure"; phishing is conceivable
with any registrar.  That includes ne'er-do-wells pretexting you and
tricking registrar support personnel into changing your e-mail
address plus password and giving it to a cracker.

You can't give up reliability to get security, so the original three
don't work.  Every registrar known to offer advanced security
mitigations charges a boatload, or part of a boatload, to add them.

If you want security, then the closest you get is what's called a
Registry lock, with telephone-based confirmation of domain changes,
and two-factor login to the website.


Last I checked, the registry lock service is only available on
certain TLDs, and adds between $500 and $1000 per domain name to the
cost.

Also, there is a bit of inconvenience, since you are setting a lock
which your domain registrar is unable to override on their own, so
routine maintenance such as updating DNS servers or renewing becomes
a potentially drawn-out process.



Various registrars offer two-factor website login and 'Max Lock'
features of their own, providing their own confirmation, and just a
Client/Registrar-Lock on the domain.

But again, you can't see the registrar's IT systems, so blindly
assuming they are secure would be silly.  Certainly price can't tell
you that.

None of the registrars are going to be totally secure.

It's just a question of how long they have been around, how much
business the registrar does, and how many times they have been
hacked badly enough that the internet community discovered it.

--
-JH


Re: Zayo Extortion

2016-08-15 Thread Jimmy Hess
On Sat, Aug 13, 2016 at 11:50 AM, HonorFirst Name Ethics via NANOG
 wrote:
> to say "our accounting system does not track invoice details -- it only shows 
> the total amount due so your numbers mean nothing to us."
>  All the while they relentlessly levied disconnect threats with short 
> timelines such as: "if you don't pay us $128,000 by this Friday,
> we will shut your operation down."
[...]
>At one point their lawyers and accounting people had the nerve to say "our 
>accounting system does not track invoice details

Are you talking with your SP's lawyers without a legal team of your
own present and advising you?  I think one of the first things they
should tell you is not to discuss pending disputes in public.  Time
to get a consultation with your own lawyers to assist with billing
dispute resolution, ASAP.

Provided there is a reasonable agreement in place, I think you ought
to be able to at least temporarily delay your SP from turning off
services while you work out your billing dispute.

The service provider could be subject to liability by turning off
services which you have not agreed to disconnect.

Your lawyers should be able to refute an SP's claim that, because
their records system does not track the actual amounts due under
specific agreements, the output from that records system is therefore
inscrutable and infallible.


--
-JH


Re: number of characters in a domain?

2016-07-23 Thread Jimmy Hess
On Sat, Jul 23, 2016 at 7:31 AM, Ryan Finnesey  wrote:
> I was hoping someone can help me confirm my research.  I am correct that 
> domains are now limited to 67 characters in length including the extension?
>

Per RFC 1035, a hostname / FQDN cannot exceed 255 octets in
totality.  This includes all the label-length fields; therefore, the
limit on the total human-readable FQDN string is less.

(Subtract one octet initially, then subtract more octets for every
DNS path component added, including the null label at the end of
every FQDN.)

In addition, the string component of each DNS label is limited to 63
octets.
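
A small illustration of that arithmetic (a sketch in Python): each
label costs its length plus one length octet, and the name ends with
the one-octet null root label.

def wire_length(fqdn: str) -> int:
    labels = fqdn.rstrip(".").split(".")
    return sum(1 + len(label) for label in labels) + 1  # +1: root label

print(wire_length("www.example.com"))   # 17 octets; the hard cap is 255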

-- 
-JH


Re: Leap Second planned for 2016

2016-07-10 Thread Jimmy Hess
On Sun, Jul 10, 2016 at 3:27 AM, Saku Ytti  wrote:

[snip]
> a) use UTC or unix time, and accept that code is broken
[snip]

The Unix time format might be an unsuitable time representation for
applications which require clock or time precision within a few
seconds, for the purposes of timestamping or synchronizing events
down to a per-second or subsecond resolution.

Suggest revising the Unix/POSIX time implementation to use a 3-tuple
representation of calendar time, instead of a single integer:

typedef int64_t time_t[3];

  [ Delta from Epoch in seconds,  Delta in microseconds,
    Cumulative leap adjustment from the Epoch in microseconds ]

Thus, to compare two timestamps A and B:

long long difference_in_seconds(time_t A, time_t B) {
    /* whole-second delta, plus the microsecond deltas (fractional part
       and cumulative leap adjustment) converted to whole seconds */
    return (B[0] - A[0]) + (B[1] - A[1] + B[2] - A[2]) / 1000000;
}

-- 
-JH


Re: Bitcoin mining reward halved

2016-07-09 Thread Jimmy Hess
On Sat, Jul 9, 2016 at 2:04 PM,   wrote:
> Hi,

A blockchain-based replacement for RPKI, involving encoding of IP
address registry registrations assigned to a network operator's
specified Org ID wallet, and LOAs for propagating the announcement of
a prefix by using colored coins automatically distributed to a
specified ASN by the BGP daemon on your routers?

>> This is pretty O/T for this list, isn't it?
>
> not if he's using his routers ASICs to do it! ;-)
> (or maybe its related to the bitcoin network traffic volumes...but
> thats too logical...)
>
> alan

-- 
-JH


Re: Cisco 2 factor authentication

2016-06-25 Thread Jimmy Hess
On Wed, Jun 22, 2016 at 9:38 PM, Chris Lawrence
 wrote:
> Any radius based auth works well I've used a solution by secure envoy I the 
> past which seems to work well they also have soft token apps, hard tokens 
> plus SMS based.

However, a cautionary note there is that the RADIUS protocol itself
uses only weak cryptography and is not secure on the wire.

That is, in the absence of the proprietary AES Keywrap extension, or
when the credential method is not authentication using a client-side
certificate (PKI) as in *EAP.

Specifically: if RADIUS is used for the Authentication stage of AAA
with a code sent by SMS or an OATH token [user types normal password
+ one-time password], then when traffic between the RADIUS server and
the VPN device is captured, the user credentials may be exposed, with
only the extremely weak crypto protection RADIUS or NTLM provides for
the user password.

If a user re-uses the same password somewhere else on a device not
requiring 2FA, then capturing RADIUS traffic could be an effective
privilege escalation, by copying the victim's password from a sniffed
RADIUS exchange.
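
For illustration, a sketch of the RFC 2865 User-Password hiding in
Python -- MD5 of the shared secret plus the previous block, XORed
against the password -- which shows why a sniffed exchange plus a
recovered shared secret gives up the password:

import hashlib

def radius_hide_password(password: bytes, secret: bytes,
                         authenticator: bytes) -> bytes:
    # Pad the password with NULs to a multiple of 16 octets (RFC 2865 5.2)
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator   # 16-octet Request Authenticator
    for i in range(0, len(padded), 16):
        keystream = hashlib.md5(secret + prev).digest()
        block = bytes(p ^ k for p, k in zip(padded[i:i + 16], keystream))
        out += block
        prev = block
    return out

# Hiding is XOR-based, so anyone who captures the packet and knows (or
# brute-forces) the shared secret can run the same steps in reverse.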

--
-JH


Re: Netflix VPN detection - actual engineer needed

2016-06-03 Thread Jimmy Hess
On Fri, Jun 3, 2016 at 3:05 PM, Spencer Ryan  wrote:
> There is no way for Netflix to know the difference between you being in NY
> and using the tunnel, and you living in Hong Kong and using the tunnel.

No way, really?  Come now.
The latencies from New York and from Hong Kong are very different.

If your minimum/bottomed-out RTT to a Netflix server is less than
100ms, which can be measured using TCP protocol-based metrics, then
you are not using a VPN.  This could be used as a filter to reduce
false positives.
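
One hypothetical way to gather such a floor-RTT metric (a sketch in
Python; the hostname is made up):

import socket
import time

def min_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    best = float("inf")
    for _ in range(samples):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)
        t0 = time.monotonic()
        s.connect((host, port))   # roughly one SYN/SYN-ACK round trip
        best = min(best, (time.monotonic() - t0) * 1000.0)
        s.close()
    return best

# e.g. a floor of min_rtt_ms("nearest.netflix.example.com") under ~100ms
# is inconsistent with a client actually sitting in Hong Kong.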

Also, if you are using a tunnel service, then it is unlikely your
only connectivity is IPv6.  Therefore, when they suspect an IPv6 VPN,
they could use methods of figuring out your IPv4 address; one option
would be simply doing something along the lines of a background HTTP
request:

$.ajax({ type: "GET",
         url: "http://ipv4onlyhostname.netflix.example.com/x.cgi",
         data: { timestamp: blah,
                 action: 'get_proof_of_IPv4_address',
                 blahblah_sessionid: blah } });

Then analyze the IPv4 connection before returning a proof of IP
address as a signed token.

Within the main page or system, allow the connection.  This method
proves your device is not merely circumventing region controls
through a simple VPN: you at least have access to a computer in the
allowed region a few seconds before initiating the connection.

Or, you know, just redirect the IPv6 tunnel-provider connections at
Netflix's end to an IPv4-only hostname, period, so v6 is not used for
these users.


Furthermore, they could make a USB dongle with a GPS receiver on it
that will answer a location-based challenge request, which you'd be
expected to hook up to your computer, fed from an outside antenna.
I don't let them off the hook too easily.

> *Spencer Ryan* | Senior Systems Administrator | sr...@arbor.net
> *Arbor Networks*
> +1.734.794.5033 (d) | +1.734.846.2053 (m)
> www.arbornetworks.com
--
-JH


Re: announcement of freerouter

2015-12-30 Thread Jimmy Hess
On Tue, Dec 29, 2015 at 1:29 PM, Mel Beckman  wrote:
> Amazing what the proprietary appropriation of a single Word can do :)

Yes, I'm quite bothered by that.  As far as I'm concerned, "Router
OS" refers to whatever operating system drives a router.

Just like "Computer OS"  is not referring to a specific piece of
software,  but it's a description of a software's role within a
system.

To suggest "Router OS"  refers to a specific product,  is like
suggesting  "Bottled Water"  refers to  a specific brand of packaged
liquid.


>  -mel
--
-JH


Re: Rack Locks

2015-11-20 Thread Jimmy Hess
On Fri, Nov 20, 2015 at 2:37 PM, Kevin Burke
 wrote:
> What kind of experience do people have with rack access control systems
> (electronic locks)?  Anything I should pay attention to with the

Overpriced, overkill for most real-world uses?
High-tech technology for technology's sake?

Avoid them if you can.  Within six months or so, at least once, there
will probably be some glitch delaying or denying required prompt access.
[snip]

> Background
> We have half a dozen racks, mostly ours.  Mostly I want something to log
> who opened what door when.  Cooling overhaul is next on the list but one

It probably makes sense if there are more than a handful of people
with unobserved physical access and a high frequency of access, or
there's a trust issue or high-risk consideration.  Or you have to
satisfy a "Checkbox Auditor".

You're not going to be able to look at a log and see Joe opened it at 2:45AM
12 months ago,  and ever since then,  the servers are not quite right.

Consider manual procedures.

Example: electronic access control to the actual rooms.
A Robo-Key system (RKS), key vault, or Realtor lockboxes on
each server rack ^_^

Physical locks on cabinets.  A key vault that supports multiple
combinations.  Then you don't need exotic hardware, just a good lock
and sound key-control procedures.

I am imagining, if you need to automate control of individual keys,
that there will be more competing solutions for that than specialty
rack locks.

Logging procedures for key access...
Send an e-mail when someone opens the vault.

Simple magnetic reed switches on all cabinet doors.
Send an e-mail when a cabinet door is opened.
Quite a few standard alarm panels can do those types of things.

Assign someone to periodically check  handwritten logs and check for
discrepancies. ^_^

> at a time.  Even with cameras those janky make nobody happy.
-- 
-JH


Re: Why is NANOG not being blacklisted like any other provider that sent 500 spam messages in 3 days?

2015-10-26 Thread Jimmy Hess
On Mon, Oct 26, 2015 at 3:56 PM, Andrew Kirch  wrote:

> > Why is NANOG not being blacklisted like any other provider that sent 500 
> > spam messages in 3 days?

Because NANOG is an opt-in list, and they're not the origin of the
abuse.  Their software might have inadvertently forwarded junk to the
membership, but members essentially take that chance by joining an
open list.

Adding NANOG itself to spam blacklists would neither solve the
problem nor be beneficial; it would definitely do more harm than
good.  Neither would it be a proper or correct resolution.

> Myth: NANOG supposed to be the gold standard for best practices.
> Fact: 500 spam messages over the weekend.

Wrong industry.  NANOG is a network operators' list, not a general
IT or e-mail operators' list.  Also, there is no gold standard for
e-mail list best practices, other than the IETF Standards documents
and Standards-track RFCs, since different professionals have
well-reasoned, legitimate differences of opinion regarding most
subjects.

Also, adhering to practices deemed good does not ensure there will be
no incidents or attacks, because there is no such thing as a perfect,
non-attackable implementation.

> Myth: blah blah blah social media is a bad way to get ahold of netops/abuse.
> Fact: Social media is an acceptable way to report abuse.  My marketing  
> ...[snip]

Abuse is not reported until submitted through proper channels.
Those are set out by the organization providing abuse contact points.

In case of emergency, though, all points should be contacted until a
definitive answer is received; a social media post certainly doesn't
seem adequate.

The reporter's communication preferences don't dictate what exactly
those channels are.  Whether social media is a proper channel or not
depends on the organization.

In many cases, it's unreliable at best, and e-mail to all contact
points and the like is a better idea.

> occurs. It's 2015, and if you and everyone you know isn't watching twitter

I wouldn't be watching Twitter.
Not everyone is.

I think it is a bit snobby to speak as if *everyone* were watching
Twitter, which is clearly not the case.

> Fact: I reached out to several people at ARIN and elsewhere trying to get a
> live person at NANOG to no avail.

ARIN is a completely different org, however.
YMMV.

> Fact: If I was still running the AHBL, NANOG would be it's own private
> intranet right now.

This is not necessary, when you can just reverse your subscription
by cancelling it.

Just follow the link from the List-Unsubscribe header.

If you were running the AHBL, then you know how to look at an e-mail
message and see its full headers, right?

-- 
-JH


Re: leap second outage

2015-07-01 Thread Jimmy Hess
On Wed, Jul 1, 2015 at 12:38 AM, Mikael Abrahamsson swm...@swm.pp.se wrote:
 quickly. Either we should abolish the leap second or we should make leap
 second adjustments (back and forth) on a monthly basis to exercise the code.

See, maybe there should some day be building codes for commercially
marketed software, providing for minimum independent formal testing
to be done by licensed independent testers, including leap seconds
and such. ^_^

The leap second issues are possibly rare and intermittent;
therefore, having a few per month is not necessarily giving adequate
exposure to code paths that may go wrong during an insert/delete
event.

There's never been a negative leap second, only insertions, but how
deletions are implemented might expose new bugs, since there hasn't
been one before.  And you can only have one leap per 24 hours,
positive or negative, pick one.

 Shouldn't this kind of 'exercise'  be done  during the QA process
before releasing new system software,   rather than mucking with clock
accuracy?

There is a recent article with some Leap Second  'stress testing' code:
  https://access.redhat.com/articles/199563


Test methods are readily available; there ought to be little
legitimate excuse for anyone writing serious software that has
long-running processes or threads not to include evaluation of
possible leap second issues, and other possible clock-related issues
such as clock stepping, DST, and Year 2038, in their standard smoke
tests.

 --
 Mikael Abrahamssonemail: swm...@swm.pp.se
--
-JH


Re: REMINDER: LEAP SECOND

2015-06-21 Thread Jimmy Hess
On Sun, Jun 21, 2015 at 1:06 AM,  valdis.kletni...@vt.edu wrote:
 On Sat, 20 Jun 2015 19:06:29 -0400, Jay Ashworth said:
[snip]
 I'll let the perpetrator, Richard Stallman, explain. It was a kerfluffle
 regarding whether /bin/du should use units of 1,000 or 1024.

 http://karmak.org/archive/2003/01/12-14-99.epl.html

It's not 1024 vs 1000; it's 1024 vs 512.

For du or df, the display is supposed to be the number of 512-byte
blocks.

Thankfully, the -k and -g options were added to display in kilobyte
or gigabyte units, which are more humanly understandable and
familiar.

Some of the GNU utilities play fast and loose with the spec and
default to 1024-byte blocks.

If you set POSIXLY_CORRECT in the environment, they will show
512-byte blocks, or the disk sector size in bytes, instead, like
they are supposed to.

-- 
-JH


Re: eBay is looking for network heavies...

2015-06-07 Thread Jimmy Hess
On Sun, Jun 7, 2015 at 7:28 AM, Stephen Satchell l...@satchell.net wrote:
 On 06/07/2015 01:10 AM, Joshua Riesenweber wrote:  [snip]

What the industry could probably use most for entry-level certs is a
technical reading comprehension requirement, or a requirement of GRE
scores (e.g. 145 Verbal, 160 Math) before being able to obtain the
certs: to demonstrate an ability to read and understand
documentation, including BNF, and the ability to look something up in
a technical manual, read it, understand it, and apply it properly,
using qualified background knowledge at the level being certified.

Too often, certs concentrate on trivial minutiae that are trivially
tested, but also not frequently used, so the population has a bunch
of people who just paid copious $$$ for in-person coaching on _just
the specifics of the exam_, or people who memorized answers from
stolen copies of exams.

So even in that, many of the tests lose their usefulness, due to the
intervention of 3rd-party learning providers who are just making a
quick buck training candidates directly to exams, instead of teaching
the subject.

In short: in regards to the use of certifications when hiring ---
they can be used by non-technical reviewers to help filter
candidates, where there are more applicants than desired.  Consider
it a bulk filtering criterion that can be applied instantly without
wasting as much time, where the final filter might be an internal
quiz and human interviewers.


Certs are no definitive measure, but candidates with both experience
and industry certs to help confirm the quality of that experience are
more likely to be applicants worth committing serious time to
evaluate.  And they can be used to help break ties between otherwise
equal applicants, in favor of those certified.


As to whether it matters that the certification is for Cisco
equipment and you use vendor X equipment instead, I would refer to
the semi-relevant link here:
http://www.jasonbock.net/jb/News/Item/7c334037d1a9437d9fa6506e2f35eaac


If carpenters were hired like engineers:
'I see here you have experience cutting timber with Makita and
Milwaukee brand Skillsaws.  Unfortunately, we need someone with 25
years' experience using the DeWalts.'

Certifications can also be used by consultants/contractors to market
services, or to assure end customers that their services are
performed by people qualified by the vendor of their equipment.



 The RS CCIE lab exame is a timed practical exam, and as certification tests
 goes it does a fair job measuring the ability of the candidate to implement
 routers and switches to obtain certain results, ON CISCO EQUIPMENT.  (This
 is also true of the other Cisco certification tracks.)

Correct.  However, earning a certification such as the CCIE
demonstrates that you are not one of those clueless folks who
completely lack the understanding and ability to learn basic config
and troubleshooting.  Earning the cert requires a great deal of
practice, due to the time limits; therefore, the candidate that holds
one shows proof of a certain level of dedication to advancement or
learning within the field.

And sufficient technical aptitude and ability to learn to deal with
other vendors' equipment is implied by the certificate, even though
Cisco's certifications only address Cisco equipment directly; there
are many vendor-neutral concepts which must have been understood for
success.


The specifics of configuration language and hardware are
implementation details.  No certification measures a candidate's
ability to quickly learn the novel configuration syntax or special
rules of an arbitrary $new_vendor's equipment.

 One can learn how to do almost anything.  The real trick is being able to
 finish tasks quickly, and that's damn hard to do without practice, practice,

The ability to finish tasks *accurately* is what matters.
But very simple things should be done quickly.

The results of non-repetitive tasks should always be looked at
carefully to help validate accuracy.

And the practice required to do any tasks that are frequent and
repetitive should be gained fairly quickly on the job by anyone
qualified.

 That said, certifications show that the candidate can turn a wrench.  It
 shows nothing about the candidate's ability to handle ARIN, to troubleshoot
 political snafus, how to deal with management that is severely

All of these are things that can be learned without a large amount
of grief, provided you have reading comprehension; ARIN's policies
and tools are fairly well documented in writing.

The candidate who can't even learn and pass a cert test might
actually be incapable of learning what is required on their own.

It's not cost-effective to buy in-person training or certify for
*every little thing* that comes up later.

 clue-deficient, and most important play nice with colleagues at other

--
-JH


Re: certification (was: eBay is looking for network heavies...)

2015-06-07 Thread Jimmy Hess
On Sun, Jun 7, 2015 at 11:31 PM, Tony Hain alh-i...@tndh.net wrote:
 Randy Bush wrote:
  but you can't move packets on pieces of paper.
 Or can you?  RFC's 6214 2549 1149

Sure, but RFC 1149 needs some work before it could be a viable way
of moving packets.  For example, the RFC calls for printing a diagram
on paper, which is error-prone, and the ink is expensive.

Instead, they should be using a hole punch to encode the message on
paper tape, bit by bit, with check bits for error correction.

Transport of the tapes by truck or car would be more suitable for the
bandwidth requirements of bulk transfer.

Also, the paper can be recycled more easily, by gluing back the
punched-out holes from previous message exchanges, than by attempting
to rewet and bottle ink.

 ;)
--
-JH


Re: gmail security is a joke

2015-05-29 Thread Jimmy Hess
On Wed, May 27, 2015 at 8:42 AM, Joel Maslak jmas...@antelope.net wrote:
 I also suspect not every telco validates number porting requests against
 social engineering properly.

What national wireless provider _does_ validate porting requests
against social engineering?

As far as I knew, as soon as the gaining provider receives the
filled-out online or written form with the billing address, or a copy
of a bill from the old provider printed off from the losing
provider's web portal and signed off with a forged signature from the
scammer (all information that can be derived through pretexting or
social engineering), the gaining wireless carrier can proceed, and
will proceed, with a simple port without having to call the number
for approval.

The sufficiently evil scammer will have the wireless number ported
to their pre-paid cell phone within 48 hours, and be ready to receive
the insecure SMS message from the target's online banking service to
confirm the second factor for login.

--
-JH


Re: gmail security is a joke

2015-05-29 Thread Jimmy Hess
On Fri, May 29, 2015 at 1:42 AM, Joe Abley jab...@hopcount.ca wrote:
 That's what I should do. Instead, I pull down the list of candidate
 questions and think to myself...
...
  - I don't have a favourite colour

My favourite color is Red,  but the answer is rejected because it's
less than 6 characters long;   it turns out your favorite color can be
Yellow, Orange, or Purple,  but not Blue, Green, Gray, or Pink.

 and around this point, I start to think
  - I am going to look for amusing cats on youtube

After finding one, now you have a favorite pet.


I suggest generating a random string for secret answer questions,
just as if it were another password.

Write down the answers; stick them in a lockbox.

Some websites will prompt for the answers during normal login later,
as if answering personal questions were some legitimate way to
confirm a login from an untrusted computer... in that case, save a
copy as secure notes in the password vault, or put the answers in a
.txt file and encrypt it using GPG.
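
A minimal sketch of treating the answers as secondary passwords
(Python):

import secrets

def random_answer(nbytes: int = 12) -> str:
    # Unguessable, un-Googleable "favourite colour".
    return secrets.token_urlsafe(nbytes)

print("Mother's maiden name:", random_answer())
print("First pet's name:    ", random_answer())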


It is a bit bogus: the whole notion of asking, for authentication
purposes, in a format where the response can easily be automatically
entered, the sort of questions about you that could easily be looked
up using public records, or that distant acquaintances and former
schoolmates would know the answers to...


There is an improvement in use cases where the traditional response
is just to accept the request and e-mail a new temporary password.

In cases where the answer is used as if it were a second factor,
that's fairly obnoxious, generating a false sense of security in the
process.

In cases where it can be used to reset a password directly, or to
call in over the phone and reset a password or change the account ---
the strength of the password is weakened to the strength of the
weakest security answer.


 Joe
--
-JH


Re: gmail security is a joke

2015-05-27 Thread Jimmy Hess
On Wed, May 27, 2015 at 6:04 PM, Peter Beckman beck...@angryox.com wrote:
[snip]

 I was thinking about using the last 2 digits of the year as the cost
 factor, but that might not scale with hardware linearly.

It is strongly recommended that, when used for password storage, the
work factor for bcrypt, scrypt, or PBKDF2 be hand-tuned based on the
best currently available consumer desktop computing hardware.

Whenever it is manually adjusted, it should be tuned so that one
password hash generation on a newly generated hash takes a minimum of
500 milliseconds on average, at full throughput, on the best current
generally available consumer hardware.

Or, for an application where performance is more critical than
security, no less than 100ms on the server hardware.
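
A sketch of that hand-tuning with the Python bcrypt package (the
target time and starting cost here are illustrative):

import time
import bcrypt

def tune_cost(target_ms: float = 500.0, start: int = 10) -> int:
    cost = start
    while True:
        t0 = time.perf_counter()
        bcrypt.hashpw(b"benchmark-password", bcrypt.gensalt(rounds=cost))
        if (time.perf_counter() - t0) * 1000.0 >= target_ms:
            return cost
        cost += 1   # each increment roughly doubles the work

print("use bcrypt cost factor:", tune_cost())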

Today, I believe the baseline would be a workstation with four
5th-generation Intel i7 3.1GHz quad-core procs.


And I would suggest scrypt(), with a hefty selection for the
required amount of RAM to compute the hash, in order to help foil
attempts to accelerate a hash-breaking process using GPU or FPGA
technology.


 Bcrypt or PBKDF2 with random salts per password is really what anyone
 storing passwords should be using today.

 Beckman
--
-JH


[no subject]

2015-05-07 Thread Jimmy Hess via NANOG
---BeginMessage---
On Thu, May 7, 2015 at 7:12 AM, Rich Kulawiec r...@gsp.org wrote:
 Ah...got it, this was sloppy phrasing on my part.  I meant first
 in the sense of first rule that one should write.  Depending on

It is security best practice to always have an active cleanup rule,
for every traffic direction applicable to every pair of zones (or
interfaces), with a default DROP, to catch traffic matching no
accept rule.

In practice, however, in the real world, many firewalls get
configured with this only in the INBOUND direction (default deny for
packets written to a higher-integrity zone from a lower-security
zone), and default accept for packets from a more secure zone to a
less secure zone, since this has superior usability and is lower
maintenance.

And for client devices in a low-security environment, with just a
simple Layer-4 stateful inspection firewall, this is probably the
right solution.

"Permit only traffic that is necessary"

only works out if you are able to rigidly define, in advance, what
exactly that traffic is.

Which is feasible to do for servers and other single-purpose devices,
but very expensive to do for clients, at least without a firewall
aware of the communications at the application layer that can look at
those UDP connections and say "OKAY, this is Skype... allow it,"

Or... "This connection going out on port 80... it's not a valid HTTP
request.  Drop the connection now, and cache a rule to deny further
connections to that IP:port pair."


 the firewall type/implementation, that might be the rule that's
 lexically first or last (or maybe somewhere else).
 ---rsk
-- 
-JH
---End Message---


Re: link avoidance

2015-05-06 Thread Jimmy Hess
On Wed, May 6, 2015 at 6:41 PM, Matthew Kaufman matt...@matthew.at wrote:
 On 5/6/2015 3:56 PM, Randy Bush wrote:

 I don't think it is common, but I have a microwave network made up of a
 combination of license-free links and amateur radio band links (where no
 commercial traffic is permitted). For now the ham-band links are stubs, so

Are such ham links actually of any real use, since encrypted traffic
such as SSH/SSL would be verboten, due to Part 97 rules against
transmitting any message encoded in order to obscure its meaning?

Also, with general network traffic:

If someone wants to request a Google search, there is no way for a
router to know if the requestor is sending the packet for a
commercial purpose or for a non-pecuniary, allowed usage, until TCP
gets some new packet fields...

You can be visiting somepizzaplace.example.com, and it's
non-commercial allowed use if you're ordering a pizza for personal
consumption; but those same packets are prohibited pecuniary use if
you're sending them to order a pizza to share with a business client.

 that's easy. But we're looking at using MPLS with link coloring so that as

Perhaps a browser plugin to add a 'Selection' dropdown to each web
browser tab, with a RESTful API to send connection information from
the client to an OpenFlow controller for deciding which forwarding
label to push at ingress.


 Matthew Kaufman
-- 
-JH


Re: Network Segmentation Approaches

2015-05-05 Thread Jimmy Hess
On Mon, May 4, 2015 at 9:55 PM,  nan...@roadrunner.com wrote:

 There's quite a bit of literature out there on this, so have been
 considering an approach with zones based on the types of data or
 processes within them.  General thoughts:

It depends on the users and tasks on the network.  Different
segmentation strategies / tradeoffs get selected by people depending
upon what there is to be protected, or where needed to control
broadcast domain size, and on value tradeoffs.

Segmenting certain systems, or segmenting certain data, or more
likely both, is called for to mitigate selected risks: both security
risks as well as network risks, or to facilitate certain networks
being moved independently to maintain continuity after DR.

 - Business Zone - This would be where workstations live,
 but I should []
   generally be OK letting anything in this zone talk fairly unfettered
   to anything else in this zone

Since you imply all workstations would live in the same zone as each
other, instead of being isolated or placed into job-role-specific
access segments, what you have here is a non-segmented network; that
is:

This begins to look like the generic "non-segmented zone with a small
number of exceptions" strategy; you wind up with a few huge business
zones which tend to become larger and larger over time -- and are
really still at highest risk -- plus a small number of tiny exception
zones, such as a 'PCI Card Environment' zone, which are okay, until
some users inevitably develop a requirement to connect workstations
from the massive insecure zone to the tiny zone.

Workstations talking to other workstations directly is an example of
one of the higher-risk things that is probably not necessary, but
remains unrestricted when there is one single large 'Business'
segment.

A stronger segmentation model would be that workstations don't get to
talk to other workstations directly; only to remote devices serving
data that the user of a given workstation is authorized to be using,
with every flow being validated by a security device.


   I'd probably have VoIP media servers in this zone, AD, DNS, etc.

AD + DNS are definitely applications that should be at a high
integrity-protection level compared to a generic segment, from a
security standpoint; especially if higher-security zones are
dependent on those services.

An AD group policy configuration change can cause arbitrary code
execution on a domain-joined server in any segment attached to a
domain using that AD server.

 Presumably I should never allow *outbound* connectivity from a more
 secure zone to a less secure zone, and inbound connectivity should be
 carefully monitored for unusual access patterns.

Never?  No internet access?
Never say never, but there should be policies established based on
needs / requirements, dependent on the characteristics of a zone and
the assumed risk level of other zones.

An example for some high-risk zone might be that outbound connections
go to A, B, and C only, through a designated application-layer proxy,
itself residing in a security services zone.

 be built off of?  I'm especially interested to hear how VoIP/RTP
 traffic is handled between subnets/remote sites within a Business
 Zone.  I'm loathe to put a FW between these segments as it will
 put VoIP performance at risk (maybe QoS on FW's can be pretty good),

The ideal scenario is to have segments dedicated to primary VoIP
use, so VoIP traffic should stay in-segment, except when
interconnecting to a provider; and the firewalls do not necessarily
have to be stateful firewalls.  If VoIP traffic leaves a segment,
some may use a simple packet filter or an application-aware proxy
designed to maintain performance.

If the security requirements of the org implementing the network are
met, then very specific firewall devices can be used for certain
zones, whichever are the most suitable for that zone's traffic.

 but maybe some sort of passive monitoring would make sense.

--
-JH


Re: content regulation, was Verizon Policy Statement on Net Neutrality

2015-02-28 Thread Jimmy Hess
On Sat, Feb 28, 2015 at 8:34 AM, John R. Levine jo...@iecc.com wrote:
 With the legal content rule, I expect some bottom feeding bulk
 mailers to sue claiming that their CAN SPAM compliant spam is legal,
[...]
 Until yesterday, there were no network neutrality rules, not for spam or for
 anything else.

There still aren't any network neutrality rules, until the FCC makes
the documents public, which they haven't yet.  Until the FCC publish
the documents, it's kind of pointless to speculate what the
unintended consequences might be.

However, I believe e-mail is definitely an internet application, not
a broadband service, so filtering incoming e-mail on the provider's
servers should definitely be unaffected.

So long as the broadband service provider's e-mail filtering is
performed only on their e-mail server, and does not involve blocking
IP traffic on consumers' connections.


What *might* happen is that spammers could sue if the broadband
provider terminates a _subscriber's_ broadband service for sending
outgoing spam, or the provider attempts to block outgoing port 25
traffic from their IP addresses for the purpose of preventing the
operation of SMTP server applications, in order to reduce spamming
attempts.

(Now the service provider is blocking lawful traffic: outgoing SMTP!)


My preferred resolution would be for the internet IP connectivity
provider and the last-mile broadband / Layer 1 media connectivity
carriers to be completely separate companies, with IP providers
allowed to manage their Internet Protocol network however they see
fit, and broadband carriers required to provide equal connectivity to
all competing local IP carriers.

The broadband carrier need not have an IP network, and the IP carrier
might not even connect to the internet, or they might use
communication protocols besides IP.

 R's,
 John
--
-JH


Re: What is lawful content? [was VZ...]

2015-02-27 Thread Jimmy Hess
On Fri, Feb 27, 2015 at 4:23 PM, Patrick W. Gilmore patr...@ianai.net wrote:
 Things like KP are obvious. Things like adult content here in the US are, 
 for better or worse, also obvious (legal, in case you were wondering).

I would prefer they replace use of the phrase "lawful internet
traffic" with "internet traffic not prohibited by law, and not
related to a source, destination, or type of traffic prohibited
specifically by the provider's conspicuously published terms of
service."

The use of the phrase LAWFUL introduces ambiguity, since any traffic
not specifically authorized by law could be said to be unlawful.

Something neither prohibited nor stated to be allowed by law is, by
definition, unlawful as well.


 Things like gambling are the question, as that changes per location.
--
-JH


Re: Wisdom of using 100.64/10 (RFC6598) space in an Amazon VPC deployment

2015-02-23 Thread Jimmy Hess
On Mon, Feb 23, 2015 at 9:02 AM, Eric Germann ekgerm...@cctec.com wrote:

 In spitballing, the boat hasn’t sailed too far to say “Why not use 100.64/10 
 in the VPC?”

Read RFC 6598.
If you can assure that the conditions listed in section 4, "Use of
Shared CGN Space", are met, then usage of the 100.64/10 shared space
may be applicable; under other conditions it may be risky.  The
proper usage of IP addresses is in accordance with the standards, or
by the registrant under the right assignment agreements.

If you are just needing space to squat on regardless of the
standardized usage, then you might do anything you want --- you may
as well use 25/8 or 11.0.0.0/8 internally, after taking steps to
ensure you will not be leaking reverse DNS queries, routes, or
anything like that; that space is larger than a /10 and would provide
more expansion flexibility.


 Then, the customer would be allocated a /28 or larger (depending on needs) to 
 NAT on their side and NAT it once.  After that, no more NAT for the VPC and 
 it boils down to firewall rules.  Their device needs to NAT outbound before 
 it fires it down the tunnel which pfSense and ASA’s appear to be able to do.


--
-JH


Re: Intrusion Detection recommendations

2015-02-14 Thread Jimmy Hess
On Sat, Feb 14, 2015 at 2:38 AM, Randy Bush ra...@psg.com wrote:

Bro, SNORT, SGUIL, Tcpdump, and Wireshark are some nice tools.

By itself, a single install of Snort/Bro is not necessarily a
complete IDS, as it cannot inspect the contents of outgoing SSL
sessions, so there can still be Javascript attacks against the
browser, or SQL injection attempts, encapsulated in the encrypted
tunnels.  I am not aware of an open source tool to help you with
SSH/SSL interception and SSL decryption for implementation of a
network-based IDS.

You also need a hand-crafted rule for each threat that you want Snort
to identify...  Most likely this entails making decisions about what
commercial ruleset(s) you want to use and then buying the appropriate
subscriptions.


 if you were comfortable enough with freebsd to use it as a firewall, you
 can run your traffic through, or mirror it to, a freebsd box running
https://www.bro.org/ or
https://www.snort.org/
 two quite reasonable and powerful open source systems

 randy
--
-JH


Re: Intrusion Detection recommendations

2015-02-14 Thread Jimmy Hess
On Sat, Feb 14, 2015 at 12:04 PM, BPNoC Group bpnoc.li...@gmail.com wrote:

The thing to note about ipfw is that it only provides you with
essentially 5-tuple-based access lists based on source and
destination, as it functions strictly by looking at packet headers.
There's no ipfw rule you can make that will tell ipfw to "Allow
outgoing port 80 connections, but only if the protocol is HTTP.
Don't allow outgoing SMTP or SSH connections over port 80."

Often, for a network with endpoints, almost everything outbound you
want to allow will be going out on port 80 or 443, and almost
everything outbound you need to reject will also be going out on port
80 or 443.

If the syntax is a challenge for you at all, there are tools such as
fwbuilder, or a pfSense appliance with a web GUI, that can be used to
construct the configuration.

The sticking point with pf, or iptables, or whatever you use, should
not be the syntax or the command language.

Rather, it is the question of *what* to allow, and how to
appropriately structure that choice/requirement of what to allow, in
order to ensure the applications work correctly and minimize the
exposure.

This is not strictly a matter of coming up with rules or language
syntax, but if done right includes analysis and reconfiguration of
applications in order to ensure that legitimate traffic is as
predictable and well-understood as possible.


For example... since 80 and 443 are such trouble, you might structure
the allow by setting up a suitable proxy server on the LAN, requiring
all clients to use it; then, on the ipfw device, it is strictly a
deny-all.



 Are we really talking ipfw add deny udp from any to any 123 not in via
 $lan where?

 Or are we talking iptables -A INPUT -s 0/0 -p udp -m udp --dport 123 -j
 DROP?
--
-JH


Re: Intrusion Detection recommendations

2015-02-13 Thread Jimmy Hess
On Fri, Feb 13, 2015 at 11:40 AM, Andy Ringsmuth a...@newslink.com wrote:
 NANOG'ers,
 I've been tasked by our company president to learn about, investigate and 
 recommend an intrusion detection system for our company.

An important thing to realize is that an Intrusion Detection System
is not a product you can buy.  And if your org. is under 100 people,
you should probably think about engaging some professional security
services firms to help, starting with a basic information security
and physical security audit from an independent third party.

An intrusion detection system consists of an infrastructure stack
containing vigilant, dedicated human beings; devices; various
software for instrumenting the network in different ways and
analyzing collected data; documentation; and business and security
processes within the organization.

There are plenty of off-the-shelf IPS offerings, BUT without enough
of all those pieces, using one could very well instill a false sense
of security, because you have no idea if the product is actually
doing a good job at what it is supposed to do, and not just
presenting a perception of security, mostly by tackling whatever bugs
or malware are appearing in the news headlines of the day.

Also, there is the matter of being equipped with suitable analysis
and response plans, to be prepared for the time the IDS alarm
actually goes off, and to be able to determine whether it's a false
alarm, something meriting investigation, or an emergency.


 We're a smaller outfit, less than 100 employees, entirely Apple-based. Macs, 
 iPhones, some Mac Mini servers, etc.
[snip]

--
-JH


Re: Office 365 Expert - I am not. I have a customer that...

2015-01-12 Thread Jimmy Hess
Dave Pooser dave-na...@pooserville.com wrote:
 then they are currently gaining from customers who would *not* move away
 from on-prem if they understood all the costs including increased
 bandwidth?

The extra bandwidth needed to utilize most SaaS-based applications
is not significant.  I would say the larger problem in some cases
would be the increase in end-to-end latency.

SaaS apps seem most sensible where rapid deployment is desired, and
where the number of users and the amount of data are low.

In other cases, there are concerns about the additional vendor
lock-in and the loss of strong control of the data.  You cannot
assure that it is encrypted and secure against access via social
engineering attacks against the SaaS provider.  Vendors can increase
monthly rates later, after it becomes much harder to switch to an
on-prem option.  The list of security hazards expands.  You cannot
mitigate application downtime caused by problems with vendor
infrastructure, or failure of the vendor to back up data like they
promised.

On Mon, Jan 12, 2015 at 9:07 AM, Bob Evans b...@fiberinternetcenter.com wrote:

 In the meantime, I can tell you sitting here in silicon valley that all
 these sharp VCs don't see the hole in many of these basic business plans
 called Cloud, Rack of servers in multiple locations.

Well, I cannot fault those folks for trying, or VCs for dabbling in
buzzword-driven financing and risky ventures.  Even if there actually
are gaping holes in the respective plans they are accepting, they are
playing a high-rewards game, and likely have their odds all
calculated.

2 or 3% of those 'cloud, rack of servers' people may very well manage
to pull off some tricky in-flight maneuvers to escape whichever
perceived hole, or come to realize the fix after starting with
otherwise inherently flawed plans... just having a flawed-enough plan
was still good enough, in theory, to show a starting point.

Any plan will essentially have holes of varying sizes, with varying
amounts of camouflage.

But the results of following a plan with holes are not necessarily
disastrous... so long as what is actually done is adapted later, in
place of the original plan, as required, in order to accommodate
realities.


 Bob Evans
 CTO
--
-JH


Re: Office 365 Expert - I am not. I have a customer that...

2015-01-07 Thread Jimmy Hess
On Tue, Jan 6, 2015 at 2:37 PM, Bob Evans b...@fiberinternetcenter.com wrote:
[snip]
 Does anyone have any experience with Office 365 hosted that can tell me
 the practical bandwidth allocation (NOT in KB per month, but in

Most likely, in the real world where packets don't line up
neatly... O365 is most probably not the largest bandwidth user, when
there are Pandora and Youtube.  It depends on factors that may be
specific to the organization, which are truly unpredictable for each
individual user, but you could gather data for your specific
population of users.

I believe I would just ignore O365, since the bandwidth usage is not
much, and pick a standard rule of thumb for the amount of bandwidth
your typical Office user actually needs to get work done, one that
includes more than sufficient 'slack' for O365.

My suggested rule of thumb, if you can't actually measure the
traffic in advance for your population: count the number of
workstation devices that will be on your network, and figure at least
0.5 Megabit of WAN for each typical business-user workstation or
laptop.

This assumes equal numbers of active users and workstations, all
operating 8 hours a day (if there are many more devices than users,
or many more users than devices, then adjust in proportion).

* Each internal workgroup server or office manager's workstation
  counts as 300% of a workstation.
  (In other words: better figure 1.5 Megabits for each of those,
  instead of 0.5.)

* Each wireless tablet or phone connected by WiFi = 33% of a
  workstation, so add 0.17 Megabits for each staff person that may
  connect a smartphone.

* Designer / engineer workstations are 500% (so figure 2.5 Mbit for
  each of those).

Add an extra safety margin of either 2 Megabits or 30%, whichever is
greater.

So for 100 standard workstations, 100 tablets, 2 internal servers, 1
Office manager desktop, and 2 designers, I would say get a 100 Megabit
WAN.
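
As a quick sanity check on that arithmetic, here is a minimal sketch
in Python (the multipliers are just the rule-of-thumb numbers above,
nothing authoritative):

    # Rule-of-thumb WAN sizing; all figures are assumptions from this post.
    counts = {"workstation": 100, "tablet": 100, "server_or_mgr": 3, "designer": 2}
    mbit   = {"workstation": 0.5, "tablet": 0.17, "server_or_mgr": 1.5, "designer": 2.5}

    base  = sum(counts[k] * mbit[k] for k in counts)   # 76.5 Mbit
    total = base + max(2.0, 0.30 * base)               # safety margin: 2 Mbit or 30%
    print(round(total, 1))                             # ~99.5 -> buy a 100 Megabit WAN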



 megabits/sec) for 100 users (during normal work hours) needs to be
 available ?

 Thank You in advance,
 Bob Evans
 CTO Fiber Internet Center

--
-JH


Re: Got a call at 4am - RAID Gurus Please Read

2014-12-11 Thread Jimmy Hess
As for conversion between RAID levels: usually dump and restore are
your best bet.  Even if your controller HBA supports a RAID level
migration, for a small array hosted in a server, dump and restore is
your least risky path to successful execution.  You really need to
dump anyways: even on a controller that supports clever RAID level
migrations (the ServeRAID does not fall into this category), there is
the possibility that the operation fails, leading to data loss, so
back up first.

On Wed, Dec 10, 2014 at 2:49 AM, Seth Mos seth@dds.nl wrote:
 symack wrote on 9-12-2014 22:03:
[snip]
 Raid10 is the only valid raid format these days. With the disks as big
 as they get these days it's possible for silent corruption.

No!  Mistake.   It depends.

RAID6, RAID60, RAID-DP, RAIDZ3, and a few others are perfectly valid
RAID formats, with sufficient sparing.  You get fewer extra average
random write IOPS per spindle, but better survivability, particularly
in the event of simultaneous double failures, or even a simultaneous
triple or quadruple failure (with appropriate RAID group sizing),
which are not necessarily as rare as one might intuitively expect.

And silent corruption can be addressed partially via surface scanning
and the built-in ECC on the hard drives.  Beyond that (for non-SATA
SAS/FC drives), decent array subsystems low-level format the disks
with a larger sector size at initialization time and slip additional
error-correction data into each chunk's metadata, so silent corruption
or bit-flipping isn't necessarily so silent on a decent piece of
storage equipment.

If you need a configuration of fewer than 12 disk drives, where you
require good performance for many small random reads and writes, and
only cheap controllers are an option, then yeah, you probably need
RAID10; but not always.

If you have a storage chassis with 16 disk drives, an integrated RAID
controller, a solid 1 to 2 GB NVRAM write cache, and a few gigabytes
of read cache, then RAID6 or RAID60, or (maybe) even RAID50, could be
a solid option for a wide number of use cases.

You really just need to calculate an upper bound on the right number
of spindles, spread over the right number of host ports, for the
workload, adjusted based on which RAID level you pick and with
sufficient cache (taking into account the caching policy, and
including a sufficiently large safety factor to encompass the inherent
uncertainties in spindle performance and the level of variability in
your specific overall workload).
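
A back-of-the-envelope version of that calculation, as a sketch (the
write penalties are the classic textbook figures; the workload and
per-spindle numbers are pure assumptions):

    # Backend-IOPS sizing: RAID10 costs ~2 disk writes per host write;
    # RAID6 costs ~6 (read-modify-write of data plus two parity blocks).
    def spindles_needed(read_iops, write_iops, write_penalty,
                        iops_per_spindle=150, safety=1.3):
        backend = read_iops + write_iops * write_penalty
        return backend * safety / iops_per_spindle

    print(spindles_needed(2000, 1000, 2))   # RAID10: ~35 spindles
    print(spindles_needed(2000, 1000, 6))   # RAID6:  ~70 spindles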


--
-JH


Re: Got a call at 4am - RAID Gurus Please Read

2014-12-11 Thread Jimmy Hess
On Thu, Dec 11, 2014 at 9:05 PM, Barry Shein b...@world.std.com wrote:
[snip]
 From my reading the closest you can get to disk space quotas in ZFS is
 by limiting on a per directory (dataset, mount) basis which is similar
 but different.

This is the normal type of quota within ZFS.  It is applied to a
dataset and limits the size of that dataset, such as home/username.
You can have as many datasets (filesystems) as you like (within
practical limits), which is probably the way to go in regards to home
directories.

But another option is

zfs set groupquota@groupname=100GB   example1/blah
zfs set userquota@user1=200MB   example1/blah

This would be available on the  Solaris implementation.


I am not 100% certain that this is available under the BSD implementations,
even if QUOTA is enabled in your kernel config.

In the past the BSD implementation of ZFS never seemed to be as
stable, functional, or performant as the OpenSolaris/Illumos version.

--
-JH


Re: Multi-homing with multiple ASNs

2014-11-23 Thread Jimmy Hess
On Fri, Nov 21, 2014 at 8:49 AM, Curtis L. Parish
curtis.par...@mtsu.edu wrote:
 I believe the state will modify their advertisements to add our ASN to the 
 path
 but changes to advertising via the state network has to go through a design
 and change management process and then be scheduled into maintenance
 windows.Any attempts to balance the traffic via prepending will take 
 weeks.
[snip]
In other words, you are in effect not in control of the advertisement
of your prefix; therefore, you practically don't have an autonomous
system.  You have the number technically, but not the administrative
division that is intended to exist.

An appropriate amount of time to push out any needed change to an
announcement should be no more than 1 business day, and less than 2
hours in an emergency, to add extra prepending or pull an
announcement.  I would call a change management process that requires
any longer unacceptable, or not reflecting the reality of the
importance of well-maintained, optimal, properly functioning network
connectivity.


You have what seems to be something very fragile, and you have very
low configuration agility, since you cannot change your announcements
out through the state as you need to.

A stateful firewall has no correct place outside the border of a
multihomed network; by definition, to have a stateful firewall, there
must be a single point of failure (on the stateful firewall element),
at least for each unique load-balancing tuple.

So I would call (in this case) the origination of your prefix by
multiple ASes a bad thing.  The protocol allows this, but the other
constraints in this situation are serious impediments that make the
soundness of the multihoming seem questionable or potentially
precarious, in terms of the true originating AS's ability to function
as an AS and manage its network.
--
-JH


Re: abuse reporting tools

2014-11-21 Thread Jimmy Hess
On Tue, Nov 18, 2014 at 7:41 PM, Robert Drake rdr...@direcpath.com wrote:
 On 11/18/2014 8:11 PM, Michael Brown wrote:
[snip]
 amelioration.  So I'm left with a very unsatisfactory feeling of either
 shutting down a possibly innocent customer based on a machines word, or
 attempting to start a dialog with random_script_user...@hotmail.com.

Under those circumstances, how do you know it's not a
social-engineering based DoS being attempted?  Preferably, take no
action to shut down services without decent confirmation, as malicious
reports of a fraudulent, bogus, dramatized, or otherwise misleading
nature are sometimes used by malicious actors to target a legitimate
user.

My suggestion would be to table the report of a single SSH connection
and really do nothing with it.  If there is actually abuse being
conducted, you should either be able to independently verify the
actual abuse, e.g. by checking packet-level data or netflow data, or
you should begin to receive a pattern of complaints: more unique
contacts, from unique networks, that you can investigate and verify
are legitimate.

If neither occurs, then just keep a log entry as an unconfirmed abuse
report, which, if still unconfirmed after a few days, may be forwarded
to the end user for their information/records.

-- 
-JH


Re: Route Science

2014-11-15 Thread Jimmy Hess
On Sat, Nov 15, 2014 at 4:44 PM, Clayton Zekelman clay...@mnsi.net wrote:

I would also wonder if someone has more details about how useful the
Avaya/RouteScience boxes are in practice, after significant time
deployed in the real world on a large network.  Were they worth
whatever the price tag was to get and maintain?

Oh, and how about Border6?  I believe they have marketing language
claiming to achieve some similar things with regards to automatic path
optimization and rerouting.  :)


 http://www.computerweekly.com/news/2240046663/Google-chooses-RouteScience-Internet-technology

Yeah, there are always great news stories.  But media tends to
exaggerate things, and when it comes to enterprise products, I think
it's strictly promotional.  When was the last time you heard a
followup news story on one of those sorts of things a year later:
BigCo dropped Vendor X's product because they felt it was no longer
worth it, the savings were less than expected and did not exceed the
cost of the product, the actual thing fell short of marketing claims
or didn't actually work out so well, etc., etc.




--
-JH


Re: Is it unusual to remove defunct rr objects?

2014-11-01 Thread Jimmy Hess
On Sat, Nov 1, 2014 at 9:04 AM, Jared Mauch ja...@puck.nether.net wrote:
 On Sat, Nov 01, 2014 at 02:30:06PM +0900, Randy Bush wrote:
  So who do we ask about making IRRs expire defunct objects
 you might start with a rigorous definition of defunct

 I have my own ideas on this topic, including routes that have
 not been seen for over 1 year.  You may always miss the routes that
 are not 'seen' on the public internet though.  I'm still reminded of
 the question on the internic forms in early 90s about will you be
 connecting to the internet when asking for address space.

Do the internet route registries exist to track routes that are never
to appear on the public internet?  I think not.

There should probably be an attribute provided for such objects,
however, that would indicate: "This route does not appear on the
public internet."

If not tagged like that in some manner, and a matching route has not
appeared on the public internet at any time during the past 6 to 12
months, then I would consider the registry object to be defunct.



 - Jared
--
-JH


Re: Is it unusual to remove defunct rr objects?

2014-10-31 Thread Jimmy Hess
On Fri, Oct 31, 2014 at 12:39 PM, Jared Mauch ja...@puck.nether.net wrote:
[snip]
 People tend to treat things like IRR (eg: RADB, etc) as a
 garbage pit you toss things into and never remove from.

So who do we ask about making IRRs expire defunct objects, and/or
changing their system design to ensure that the legitimate resource
holder can remove references to their prefix from ASes they no longer
authorize to carry them, without requiring all sorts of assistance,
cooperation, and willingness from the AS maintainer?  :)


   Is this a non-issue that I shouldn't worry about?  Doesn't the
 quality of this data effect Origin Validation efforts?

 Yes it does.  This has a fairly severe impact for those that build
 off the IRR data for filters.  We have seen customers end up including
 AS7018 in their AS-SET or as you noticed have other legacy routes appear.



--
-JH


Re: Industry standard bandwidth guarantee?

2014-10-30 Thread Jimmy Hess
On Wed, Oct 29, 2014 at 7:04 PM, Ben Sjoberg bensjob...@gmail.com wrote:

 That 3Mb difference is probably just packet overhead + congestion

Yes... however, that's actually an industry standard of implying
higher performance than reality, because end users don't care about
the datagram overhead their applications never see; they just want X
megabits of real-world performance.  This industry would perhaps be
better off if we called a link that can reliably deliver at best 17
Megabits of goodput a "15 Megabit goodput +5" service, instead of
calling it a "20 Megabit" service.

Or at least appended a disclaimer: "Real-world best case download
performance: approximately 1.8 Megabytes per second" -- subtracting
overhead and quoting that, instead of raw link speeds.
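
For a sense of where the gap comes from, a minimal sketch (the frame
sizes are standard Ethernet/TCP figures; the congestion haircut is a
pure assumption):

    # Protocol overhead on a nominal 20 Mbit/s link, assuming full-size
    # TCP segments over Ethernet: 1460 payload bytes per 1538 wire bytes
    # (counting preamble and inter-frame gap).
    raw_mbit = 20.0
    payload, wire = 1460, 1538
    goodput_mbit = raw_mbit * payload / wire      # ~19.0 Mbit/s theoretical
    # TCP congestion control and retransmits eat more in practice:
    real_world_MBps = goodput_mbit / 8 * 0.75     # assumed ~25% haircut
    print(round(goodput_mbit, 1), round(real_world_MBps, 2))  # 19.0, ~1.78 MB/s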
But that's not the industry standard.  I believe the industry
standard is to provide the numerically highest performance number
possible through best-case theoretical testing; let the end user
experience disappointment, and explain the misunderstanding later.

End users are also more concerned about their individual download rate
on actual file transfers, not the total averaged aggregate throughput
of a network of 10 users or 10 streams downloading data
simultaneously, nor characteristics that transport protocols care
about, such as fairness.


 control. Goodput on a single TCP flow is always less than link
 bandwidth, regardless of the link.

---
-JH


Re: Linux: concerns over systemd adoption and Debian's decision to switch [OT]

2014-10-25 Thread Jimmy Hess
On Sat, Oct 25, 2014 at 12:22 PM, Stephen Satchell l...@satchell.net wrote:
 The whole rc script thing strikes me as an interim solution that
 required a minimum of code changes (graduate student project?) that went
 viral.  Bad as it was, it worked.  Duct tape and bailing wire
[snip]
 Systemd is not a re-factoring.  It's a from-scratch do-over.  What it
 does that is good is that it provides dependency capability not
 available in the original inittab.  It makes dependency resolution
[snip]

The trouble is not  that systemd is a re-factoring or that it is a do-over.

The problem is that the scope of the systemd project is way too large
and is ever-expanding!

The next thing you know, SystemD will add package management and ISO
building, and eliminate the need for Debian, Ubuntu, SuSE, Redhat,
etc. to even exist.

In the extreme case, at that point, we can rename GNU/Linux to
GNU/SystemD, because hey, the Linux kernel is really just a little
wrapper around the hardware to support the SystemD userland.

The introduction of dependency support solves real problems with SysV
init; I will give you that.  But copy-and-paste init scripts and
sequence-based dependencies are largely an aesthetic issue.

SysV init also has the advantage of more deterministic system startup
behavior.  What do you think happens when you have SystemD, but one of
the critical packages has a service dependency incorrectly defined?


If the scope of SystemD were appropriately constrained, it would not
also be trying to replace the standard SYSLOG facilities with a
program-specific logging format for everything.

I'm not saying that Syslog is great; perhaps there should be new
binary logging formats, or libc built-in logging to an RDBMS, Redis
database, or ElasticSearch cluster.  But a distribution's choice of
init program should not, in itself, be forcing a logging choice on
you.

Also, since there are NTP daemons, DHCP, etc., all shipping with
SystemD, chances are that using something different will be along the
path of greatest resistance.

If history is any guide, having all these extra services under the
same package as init will likely mean that the maintenance will leave
much to be desired, and you will be forced to upgrade other things,
and probably reboot, just to get a bug fix or security patch for your
NTP client daemon.


It doesn't really matter that they are not all running as PID 1.  The
problem is really that these services have been lumped into the scope
of the same project.

Proper integration into a software system does not mean expanding your
scope by duplicating other programs' functionality into your program,
while introducing your own quirks.


--
-JH


Re: Linux: concerns over systemd [OT]

2014-10-22 Thread Jimmy Hess
On Wed, Oct 22, 2014 at 6:12 AM, Joe Greco jgr...@ns.sol.net wrote:
 when so much effort has been put into that very issue, specifically so
 that we could gain the advantages of a BSD hypervisor that supported
 ZFS natively...
[snip]

If you want native ZFS support, then Solaris x86-64 + Zones + KVM, or
SmartOS.  Now, Solaris/Illumos' process supervision and fault
management systems, SMF and FMA, are pretty complex, but they aren't
stuffing 1000 random tasks into the init program.   ^_^



 ... JG

--
-JH


Re: Linux: concerns over systemd [OT]

2014-10-22 Thread Jimmy Hess
On Wed, Oct 22, 2014 at 1:31 PM, Barry Shein b...@world.std.com wrote:
[snip]
 The unix community has exerted great amounts of effort over the
 decades to speed up reboot, particularly after crashes but also
 planned. Perhaps you don't remember the days when an fsck was
 basically mandatory and could take 15-20 minutes on a large disk.

 Then we added the clean bit (disk unmounted cleanly, no need for
[snip]
 And you whisk all that away with it's not really clear to me that
 'reboots in seconds' is a think to be optimized

False dilemma.
Optimizing reboot time down from 20 minutes to 1 minute is a
significantly meaningful improvement; it's literally a 95% reduction
in the time spent during each boot process.

Reducing boot time from 20 minutes to 10 seconds is not significantly
better than reducing it to 1 minute.


A different choice of tradeoffs is appropriate to different kinds of
systems, depending on their use case (Desktop vs Server)!

Especially when the method of reduction is subject to diminishing
returns and increasing fragility or complexity: a greater risk that
something breaks, or more potential for unreliability, is introduced
into the startup process.

Also, you may very well spend more time booting your system in order
to troubleshoot the fact that some applications are starting up in an
unexpected order, resulting in some issue.

 To me that's like saying it's not important to try to design so one
 can recover from a network outage in seconds.

If you need to ensure that a service is not disrupted for more than
seconds,  then reboot is not the answer. It is some form of
clustering.

Reboot as a troubleshooting procedure is for desktops.
10 seconds from power on to user interface for desktops, will
meaningfully improve the user experience,  but not for servers.


For servers, you ideally want to take the misbehaving node out of
service and let its failover partner takeover.

-- 
-JH


Re: Linux: concerns over systemd adoption and Debian's decision to switch

2014-10-21 Thread Jimmy Hess
On Tue, Oct 21, 2014 at 8:40 AM,  valdis.kletni...@vt.edu wrote:
[snip]
 It started as a replacement init system.  I suspected it had jumped
 the shark when it sprouted an entirely new DHCP and NTP service.  And this

Yikes.  What's next?  Built-in DNS server + LDAP/Hesiod + Kerberos +
SMB/Active Directory client and server + Solitaire + Network
Neighborhood functionality built into the program?

I would like to note that I prefer Upstart, as in RHEL 6.

The all-in-one approach of systemd might have a place on some
specialized desktop distros, but outside that niche it's, IMO, a
terrible idea.

The proper fix is probably to go back to Upstart or SysVInit and
rewrite systemd, so all the pieces are separated and exist as a higher
layer on top of init.

Nothing wrong with having a concept such as a
systemd-desktop-program-launcher application that the real init
system runs.

 was confirmed when I saw this:

 Leading up to this has been cursor rendering support, keyboard mapping
 support, screen renderer, DRM back-end, input interface, and dozens of other
 commits.
--
-JH


Re: Why is .gov only for US government agencies?

2014-10-19 Thread Jimmy Hess
On Sun, Oct 19, 2014 at 7:12 AM, Joe Greco jgr...@ns.sol.net wrote:

 But to make a long story short, and my memory's perhaps a bit rusty
 now, but my recollection is that shorter URL's looked nicer and there
 was significant money to be had running the registry, so there was
 some heavy lobbying against retiring .GOV in favor of .FED.US (and
 other .US locality domains).
[snip]

The same problem exists with .EDU capriciously adopting new criteria
that exclude non-US-based institutions from being eligible.  I believe
the major issue is that if a TLD is in the global namespace, then it
should NOT be allowed to restrict registrations based on country; the
internet is global, and .GOV and .EDU are in the global namespace.

So then, why aren't .EDU and .GOV allowed to continue to exist, but
with a community decision made to require whichever registry is
contracted to manage .GOV to accept registrations from _all_
government entities, regardless of nationality?

In other words: rejection of the idea that a registry operating gTLD
namespace can be allowed to impose overly exclusive eligibility
criteria.


 ... JG

-- 
-JH


Re: wifi blocking [was Re: Marriott wifi blocking]

2014-10-07 Thread Jimmy Hess
On Tue, Oct 7, 2014 at 7:43 PM, Keenan Tims kt...@stargate.ca wrote:
 I don't think it changes much. Passive methods (ie. Faraday cage) would
 likely be fine, as would layer 8 through 10 methods.

Well... actually... passive methods are probably fine, as long as
they are not breaking reception on nearby properties.  BUT it might
result in some proceedings or investigations regarding anticompetitive
behavior, and if there are other businesses nearby, it could lead to
some objections when you go seeking permits to build this giant
faraday cage.  The local authorities might eventually require some
modifications.  :)

 Actively interfering with the RF would probably garner them an even
 bigger smackdown than they got here, as these are licensed bands where

It's even worse: these frequencies are licensed, and willfully
transmitting into them with enough power to block cell calls from an
unauthorized station carries severe penalties, even if it never
interferes with a single phone or the licensee's use of the restricted
frequencies.

If it DOES interfere,  then you have two potential violations
(Unauthorized emission PLUS Interference) and there are likely more
stations they would be interfering with than WiFi APs,  so there are
more violations and more complaints likely to be generated.

And these violations are more severe, since they can interfere with
emergency communications (E911);   I think it's fair to say penalties
would likely be larger.


The only way to legally block cell phone RF would likely be on behalf
of the licensee.  In other words: possibly persuade the cell phone
companies to allow this, then create an approved special local cell
tower that all their phones in the same building will by default
connect to, in preference to any other, and which will also not
receive any calls or messages, or allow any to be sent.

--
-JH


Re: Marriott wifi blocking

2014-10-06 Thread Jimmy Hess
On Mon, Oct 6, 2014 at 5:03 PM, Clay Fiske c...@bloomcounty.org wrote:

legitimate right to claim that other wifi networks were impacting their own
network’s performance, specifically based on the FCC’s position that a new
 transmitter should not disrupt existing operations. I was not in any way
intending to say that their -response- was legitimate.

Hi.  The FCC's position about a new transmitter not disrupting
existing operations applies to various licensed frequencies, but not
to low-powered unlicensed transmitters.

Please don't imagine that Part 15 devices have any regulatory
protection against interference from any other Part 15 devices being
operated, no matter which device is new,  except for the prohibition
against Malicious/Willful interference.

Of course, it is within the FCC's power to regulate,  there just isn't
this regulation in Part 15.

-- 
-JH


Re: Marriott wifi blocking

2014-10-05 Thread Jimmy Hess
On Sun, Oct 5, 2014 at 6:13 PM, Brett Frankenberger rbf+na...@panix.com wrote:
 For example, you've asserted that if I've been using ABCD as my SSID
 for two years, and then I move, and my new neighbor is already using
 that, that I have to change.  But that if, instead of duplicating my
[snip]

Actually... I would suggest that it is not entirely clear whether you
have to change or not.  Your conflicting SSID in no way impedes the
use of the spectrum; one of you just has to change your SSID.  This is
different from setting up a WIPS "Rogue AP containment" feature to
completely block an AP from ever being used.  If your SSID happens to
conflict with your neighbor's SSID by coincidence, and the SSID is a
common name such as Linksys, then this conflict alone probably does
not qualify as willful or malicious interference.

As the spectrum is unlicensed, neither of you is a licensed station,
and neither of you has priority; neither of your stations is a primary
or secondary user.  Both of your stations have to accept the
unintended interference in the unlicensed frequencies; it is
essentially up to the two of you to either take it upon yourselves to
change your own SSID, or to negotiate with your neighbor.

On the other hand, if you chose an SSID of STARBUCKS for your AP and
set it up in proximity to a Starbucks location, or selected
[YOURNEIGHBORSCOMPANYNAME] as your SSID, it would seem more evident
that any interference occurring to their wireless station operation
was willful, and possibly a malicious attempt to compromise client
security.

--
-JH


Re: large BCP38 compliance testing

2014-10-05 Thread Jimmy Hess
On Thu, Oct 2, 2014 at 10:54 AM,  valdis.kletni...@vt.edu wrote:
 The *real* problem isn't the testing.
 It's the assumption that you can actually *do* anything useful with this data.
 Name-n-shame probably won't get us far - and the way the US works, if there's 
 a

At least name and shame is something more useful than doing nothing.

Ideally you would have transit providers and peering exchanges placing
"Must implement BCP38" into their peering policies, and then they
could use the data to help enforce those policies.

--
-JH


Re: Marriott wifi blocking

2014-10-04 Thread Jimmy Hess
On Sat, Oct 4, 2014 at 12:48 PM, SML s...@lordsargon.com wrote:
 On 4 Oct 2014, at 12:35, Michael Thomas wrote:
 On 10/04/2014 10:23 AM, Jay Ashworth wrote:
 So I work in a small office in a building that has many enterprise
 whether I like it or not. What if one of them decided that our wifi was
 rogue and started trying to stamp it out?
 It happens daily. We have 22 offices around the world, each in downtown
 towers. We use Cisco WLCs, and those controllers see constant deauth frames
 coming from people above us, below us, and from the four sides around us. It
 is a real battle. The only thing to do is use lots of APs in the office so
 as to keep the power levels down.

Well,  based on the Marriott incident,  it seems that what you need to
do is figure out where the Deauths are coming from via direction
finding and  start sending written notices to your neighbors,   and if
the behavior persists --- follow them up with some FCC interference
complaints.

https://esupport.fcc.gov/ccmsforms/form2000.action

--
-JH


Re: update

2014-09-28 Thread Jimmy Hess
On Sat, Sep 27, 2014 at 11:57 PM, Keith Medcalf kmedc...@dessus.com wrote:
 This is another case where a change was made.
 If the change had not been made (implement the new kernel) then the 
 vulnerability would not have been introduced.
[...]
 The more examples people think they find, the more it proves my proposition.  
 Vulnerabilities can only be introduced or removed through change.  If there 
 is no change, then the vulnerability profile is fixed.

I see what you did there... you expanded the boundaries of "the
system" to include not just the application code but more and more of
the environment: CPU, kernel, ...

The problem is that before it becomes an entirely correct statement to
assert that a zero-entropy system never develops new vulnerabilities,
you have to expand the boundaries of the system to include the entire
planet.

Suppose you have a vulnerability that can only be exposed if port 1234
is open.   That's no problem,  you blocked port 1234 on the external
firewall, therefore the application cannot be considered to be
vulnerable during testing.

A few years later you replace the firewall with a NAT router that
doesn't block port 1234.

Oops!  Now you have to consider the entire network and the Firewall to
be part of the application  / internal part of the system.

And it doesn't end there.  Eventually, for the statement to remain
true, the boundaries of the system which 'cannot develop a
vulnerability unless it changes' have to expand to include the
attackers' brains.

If an attacker discovers a new trick or kind of attack they did not
know before, then a change to the system has occurred.


--
-JH


Re: update

2014-09-27 Thread Jimmy Hess
On Sat, Sep 27, 2014 at 8:10 PM, Jay Ashworth j...@baylink.com wrote:
 I haven't an example case, but it is theoretically possible.

Qmail-smtpd has a buffer overflow vulnerability, related to integer
overflow, which can only be reached when the code is compiled on a
64-bit platform.  x86_64 did not exist when the code was originally
written.

If memory serves, the author never acknowledged the vulnerability, and
declined to pay the bounty or fix the bug, stating that nobody allows
gigabytes of RAM per smtp process.

However, you see, there you have a lingering bug that can be exposed
under the right environment.  (Year 2030... computers have petabytes
of RAM... why would you seriously limit any one process to less than a
terabyte?)

- http://www.guninski.com/where_do_you_want_billg_to_go_today_4.html

 Cheers,
 -- jra
--
-JH


Re: AWS EC2 us-west-2 reboot

2014-09-24 Thread Jimmy Hess
On Wed, Sep 24, 2014 at 3:56 PM, Grant Ridder shortdudey...@gmail.com wrote:
 Doubt it since a bash patch shouldn't require a reboot

Unless you have a long-running bash script in the background providing
a vital system service, and that service is so important in your
environment that you might as well reboot  rather than kill and
respawn it.

--
-JH


Re: update

2014-09-24 Thread Jimmy Hess
On Wed, Sep 24, 2014 at 7:41 PM, Chris Adams c...@cmadams.net wrote:
 Has anybody looked to see if the popular web software the users install
 and don't maintain (e.g. Wordpress, phpBB, Joomla, Drupal) use system()

Wouldn't it be great if it was JUST system()?  It's also popen(),
shell_exec(), passthru(), exec(), and the backtick operator.

I am pretty sure ALL of them use at least one of these in various
places out of the box, and many plugins use these as well, such that a
shell could be invoked; popen() on $sendmail is particularly common.
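
For reference, a minimal sketch of how the bug is reached through an
environment variable (a harmless self-test against your own bash; the
variable name is arbitrary, mimicking how CGI exports client headers):

    import os, subprocess

    # A vulnerable bash parses the function definition in the environment
    # and executes the trailing command; a patched bash prints nothing.
    env = dict(os.environ, HTTP_USER_AGENT="() { :; }; echo VULNERABLE")
    subprocess.call(["bash", "-c", "true"], env=env)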


 or the like to call out to external programs?  What about service
 provider type stuff like RT?  I know Nagios calls out to shell scripts

--
-JH


Re: update

2014-09-24 Thread Jimmy Hess
On Wed, Sep 24, 2014 at 9:43 PM, Jim Popovitch jim...@gmail.com wrote:
 You have done something wrong/different than what appears on a
 relatively clean install:

 $ cat /etc/debian_version
 7.6
 $ ls -laF /bin/sh
 lrwxrwxrwx 1 root root 4 Mar  1  2012 /bin/sh -> dash*

What is this fabled 7.6 that you speak of?  :)


:~# cat /etc/debian_version
4.0

:~# ls -ld /bin/sh

lrwxrwxrwx 1 root root 4 2014-02-22 11:52 /bin/sh -> bash



--
-JH


Re: update

2014-09-24 Thread Jimmy Hess
On Wed, Sep 24, 2014 at 10:03 PM, William Herrin b...@herrin.us wrote:
 lrwxrwxrwx 1 root root 4 2014-02-22 11:52 /bin/sh -> bash

 ROFL. Jimmy, please tell me you had to start up a VM to check that. :)

Not a live system, but aside from honeypots, there really are
embedded appliances and companies with websites still in production
based on LAMP installations on Etch and Lenny.

I understand dash didn't become the default /bin/sh until Debian
Squeeze (6.0), which is within the past 4 years.

 -Bill
--
-JH


Re: Bare TLD resolutions

2014-09-17 Thread Jimmy Hess
On Wed, Sep 17, 2014 at 11:09 AM, Jay Ashworth j...@baylink.com wrote:

 The latter would seem to be avoidable by making sure that *DNS resolution
 of bare TLDs always returns NXDOMAIN*.
[snip]

Not NXDOMAIN.  When "TLD." is looked up, the servers should always
return NOERROR, and yield either (1) the NS records for the TLD, for
QTYPE NS or ANY; or (2), for other query types, NOERROR with zero RRs
in the answer section (an empty response).

A NXDOMAIN response would be declaring that the TLD itself does not
exist.
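
A quick way to see the distinction, sketched with dnspython (assumed
to be installed; this is the dnspython 2.x API):

    import dns.resolver

    try:
        dns.resolver.resolve("com.", "A")
    except dns.resolver.NoAnswer:
        print("NOERROR with an empty answer section -- correct for a bare TLD")
    except dns.resolver.NXDOMAIN:
        print("NXDOMAIN -- would wrongly assert the TLD does not exist")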


--
-JH


Re: 2000::/6

2014-09-14 Thread Jimmy Hess
On Sat, Sep 13, 2014 at 5:33 AM, Tarko Tikan ta...@lanparty.ee wrote:
 2000::/64 has nothing to do with it.

 Any address between 2000:: and
 23ff:ffff:ffff:ffff:ffff:ffff:ffff:ffff together with misconfigured prefix
 length (6 instead of 64) becomes a 2000::/6 prefix.

It should be rejected for the same reason that  192.168.10.0/16 is
invalid in a prefix list  or access list.

Any decent router won't allow you to enter just anything in that
range into the export rules with a /6, except 2000:: itself, and will
even show you a failure response instead of silently ignoring the
invalid input, for the very purpose of helping you avoid such errors.
2001::1/6 would be an example of an invalid input: there are one or
more non-zero bits outside the prefix, where the bits in the mask are
zero.

Only 2000::/6 properly conforms; not just any IP in that range can
have a /6 appended to the end.
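
Python's ipaddress module applies the same check, as a quick
illustration:

    import ipaddress

    ipaddress.ip_network("2000::/6")    # accepted: no host bits set
    ipaddress.ip_network("2001::1/6")   # raises ValueError ("has host bits
                                        # set"), the same rejection a decent
                                        # router gives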


-- 
-JH


Re: Upgrade Path Options from 6500 SUP720-3BXL for Edge Routing

2014-07-30 Thread Jimmy Hess
On Tue, Jul 29, 2014 at 5:56 PM, Simon Lockhart si...@slimey.org wrote:
 On Tue Jul 29, 2014 at 02:21:32AM +, Corey Touchet wrote:
 Right now my thinking are MX480 or ASR9k platforms.  Opinions on those are
 Or, protect your existing investment in 6500 and replace the SUP720 with the
 SUP2T. You can then deploy the WS-X6904-40G-XL blades which give you 4 * 40G

I would generally suggest you look at it as a long term decision, at
least before jumping to the next incremental (modest increase) step on
the upgrade treadmill.  It depends on whether the 6500 is still a
perfect match for your network, other than the prefix limit.  Your
vendor would like you to think of your equipment as an "investment"
to be protected, exploiting your feelings of loss aversion, but the
upgrade treadmill is a trap: next thing you know, you will have to
replace the chassis, then you will need new linecards...

Keep in mind most of the MX series makes the 6500 look like a 5 port
Linksys home router when it comes to carrying around and managing
large BGP tables, both in terms of prefix capacity, speed, the
policy/filtering/configuration management functionality of the OS, and
how they take the route update beating during setup of multiple new
BGP sessions...

The SUP2T is about a 100% increase in TCAM size, but still pretty
limited in terms of system resources.

You can also protect your investment, if appropriate, by taking this
late-1990s gear off your BGP edge, or otherwise recruiting it for a
role it is more suited for in this day and age: one where it is not
handling full tables, and thus its feeble FIB size, CPU, and memory
are no potential hindrance now or over the next 10 years.

The ability to link up 40G ports does not seem terribly useful when it
would all be unsafely oversubscribed.


 You can then look to migrate onto the 6880 chassis which gives you a faster
 backplane, whilst retaining compatibility with existing linecards.

 Simon


-- 
-JH


Re: Netflix To Cogent To World

2014-07-23 Thread Jimmy Hess
On Wed, Jul 23, 2014 at 9:48 AM, Jay Ashworth j...@baylink.com wrote:
[snip]
 Who's gonna depeer Cogent *now*?

Probably no one... at least not without compromising and first
peering with Netflix.

It would be interesting if Google, Wikimedia, CBS/ABC, CNN, Walmart,
Espn, Salesforce, BoFa, Weather.com, Dropbox, Paypal, Netflix,
Microsoft, Facebook, Twitter, Amazon, Yahoo, Ebay, Wordpress.com,
Pinterest, Instagram, Tumblr, Reddit, Forbes, and Zillow formed a
little club and said:

"OK, 'Tier 1' providers... we're not paying you guys for transit
anymore; your customers want our stuff and will consider their
internet service DOWN if they can't get it.  You are going to pay us
for a fast lane to our content now.  If you want it, please start
sending us your bids, now."



 Cheers,
 -- jra
--
-JH


Re: Verizon Public Policy on Netflix

2014-07-13 Thread Jimmy Hess
On Sun, Jul 13, 2014 at 12:43 PM, Matthew Petach mpet...@netflight.com wrote:
 On Sun, Jul 13, 2014 at 10:17 AM, Todd Lyons tly...@ivenue.com wrote:
 On Sun, Jul 13, 2014 at 9:53 AM, Matthew Petach mpet...@netflight.com
 wrote:
 Because that Netflix box is not an on-demand cache, it gets a bunch of
 shows pushed to it that may or may not be watched by any of Brett's
 customers.  Then the bandwidth he must use to preload that box is
 large, much larger than the sum of the streams his customers do watch.

However: (1) There are other considerations besides bandwidth saved;
there is a customer experience improvement if latency, and therefore
load times, decrease.

(2) Neither you nor a cache box knows in advance which streams your
customers will watch, and the cache units preload popular content, not
necessarily the entire catalog.  But your users are most likely to
watch during peak hours, which is when additional bandwidth is most
expensive...  at most other times, additional bandwidth usage costs
$0, so it doesn't strictly matter if more total transfer is required
with a cache box than without.

(3) If you don't have at least a couple of gigabits of Netflix
traffic, you are unlikely to even meet the traffic minimums required
to get the free cache boxes, let alone consider undertaking the SLA
requirements, electricity, and space needed to run one in the first
place.

And (4): the pushing of shows to the units occurs during a configured
fill window, which their guides say will be defined by the provider's
network planning team, in a manner and with a maximum bandwidth demand
over that time suited to your traffic profile, so as to not increase
the 95th-percentile traffic from your upstream.

For example, the fill window can occur during the hours of the day
when there is little interactive customer traffic.  They recommend a
10 to 12 hour fill window, with a maximum rate of 1.2 Gigabits.

http://oc.nflxvideo.net/docs/OpenConnect-Deployment-Guide.pdf
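
For a rough sense of scale, using those numbers (a sketch; the window
length and rate are simply the guide's recommended figures):

    # 1.2 Gbit/s sustained over a 12-hour fill window:
    gbit_per_s, hours = 1.2, 12
    terabytes = gbit_per_s * 1e9 / 8 * hours * 3600 / 1e12
    print(round(terabytes, 1))   # ~6.5 TB of content refreshed per fill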


Therefore, in any of the cases where cache boxes have actually been
implemented properly,  they are still likely to be a net benefit  for
both provider and customers.

 Thank you for clarifying that; I thought what
 Brett was concerned about was traffic in
 the downstream direction, not traffic for
 populating the appliance.
--
-JH


Re: Verizon Public Policy on Netflix

2014-07-11 Thread Jimmy Hess
On Fri, Jul 11, 2014 at 5:05 PM, Naslund, Steve snasl...@medline.com wrote:
 Here we go down the rabbit hole again.  This is not difficult.  An Internet 
 Service Provider is an entity that provides Internet connectivity to its 
 customers for some consideration.

 If you are looking for a legal definition of an ISP you are not going to find 
 (a satisfactory) one.  The FCC does have specific rules that define carriers
such as ILEC, CLEC, RLEC, and those have definitions.  ISP is really a term
 that describes a line of business.  There is no engineering definition of an
 ISP that is defined by any regulatory body that I am aware of.

Correct.  ISP is not a specific technology or business; it is based
on what is being sold.  You can be selling customers a dial-up service
where your customers are presented with a shell prompt over a dial-in
terminal connected to a hosted Unix server you are renting, with
connectivity from a 56K leased line, and you are still an ISP.

By some common definitions, by the way, Youtube has been referred to
as an ISP: a company that generates revenue by providing connectivity
to internet resources (in this case, streaming video).

Usually, though, ISP refers to providers selling complete internet
connectivity, not organizations that merely run one website providing
entertainment or e-commerce.

You can subdivide the idea of ISP into various related ideas such as
"Online Service Provider", "Network Service Provider", "Broadband
Service Provider", "E-mail Service Provider", "Mobile Data Provider",
etc., which are more informative, but generally equally vague and
informal.

--
-JH


Re: Verizon Public Policy on Netflix

2014-07-10 Thread Jimmy Hess
On Thu, Jul 10, 2014 at 8:12 PM, Miles Fidelman
mfidel...@meetinghouse.net wrote:
 Randy Bush wrote:
[snip]
 At the ISPs expense, including connectivity to a peering point. Most content
 providers pay Akamai, Netflix wants ISPs to pay them. Hmmm

Netflix's own website indicates otherwise:
https://www.netflix.com/openconnect

"ISPs can directly connect their networks to Open Connect for free.
ISPs can do this either by free peering with us at common Internet
exchanges, or can save even more transit costs by putting our free
storage appliances in or near their network."


-- 
-JH


Re: Ars Technica on IPv4 exhaustion

2014-06-23 Thread Jimmy Hess
On Sun, Jun 22, 2014 at 10:41 PM, Laszlo Hanyecz las...@heliacal.net wrote:
 The Comcast business SMC gateway speaks RIP to make the
 routed /29 work.. in theory it could be put into bridge mode and you can do
 the RIP yourself but they don't support that configuration (you'd need the
 key to configure it successfully and they didn't want to do when I asked).  If

It begins to sound like a job for a packet capture tool: grab a copy
of the SMC's outgoing broadcast, and then replay the last 30-second
broadcast ad infinitum.  Even with MD5 auth, the RIPv2 protocol
basically has nothing preventing message replay.  So, as long as your
original router is offline, such that the sequence number does not
increase, and you can continuously replay your router's last RIP
broadcast, you may not even need to know any keys...
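
A minimal sketch of that idea with scapy (assumed to be installed; the
interface name is illustrative, and this presumes the original gateway
stays offline):

    from scapy.all import sniff, sendp

    # Capture one RIPv2 broadcast (UDP port 520) from the gateway:
    pkt = sniff(iface="eth0", filter="udp port 520", count=1)[0]

    # Replay the captured announcement every 30 seconds, indefinitely:
    sendp(pkt, iface="eth0", inter=30, loop=1)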

you poke around in the web UI, it does support IPv6 in some form, but it

--
-JH


Re: Getting pretty close to default IPv4 route maximum for 6500/7600 routers.

2014-06-11 Thread Jimmy Hess
On Wed, Jun 11, 2014 at 1:28 AM, Saku Ytti s...@ytti.fi wrote:
 On (2014-06-10 12:39 -0500), Blake Hudson wrote:
 There is nothing to summarize away from global BGP table, if you have number
 showing less, it's probably counter bug or misinterpretation.
 Global BGP table, single BGP feed, will take same amount of RIB and FIB.
[snip]

That depends on what is meant by "summarize".  If it means filtering
prefixes longer than, say, /22, or otherwise discarding extraneous
prefixes from networks that were allocated a /16 but chose to
deaggregate and advertise every /24 -- accepting only the /16
advertisement instead of installing the extra /24 routes in the FIB --
then there are plenty of entries to summarize away.
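
As a sketch of that idea (a hypothetical miniature table; real
filtering would be done in router policy, not Python):

    import ipaddress

    table = [ipaddress.ip_network(p) for p in
             ("10.20.0.0/16", "10.20.1.0/24", "10.20.2.0/24", "192.0.2.0/24")]

    # Keep aggregates (/16 or shorter); drop more-specifics they cover.
    aggregates = [n for n in table if n.prefixlen <= 16]
    fib = [n for n in table
           if n.prefixlen <= 16
           or not any(n.subnet_of(a) for a in aggregates)]
    print(fib)   # only 10.20.0.0/16 and 192.0.2.0/24 remain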


--
-JH


Re: ipmi access

2014-06-02 Thread Jimmy Hess
On Mon, Jun 2, 2014 at 8:21 AM, shawn wilson ag4ve...@gmail.com wrote:  [snip]
 So, kinda the same idea - just put IPMI on another network and use ssh
 forwards to it. You can have multiple boxes connected in this fashion
 but the point is to keep it simple and as secure as possible (and IPMI
 security doesn't really count here :) ).

About that "as secure as possible" bit: if just one server that
happens to have its IPMI port plugged into this private network gets
compromised, the attacker may be able to pivot into the IPMI network
and start unloading IPMI exploits.

So caution is definitely advised about security boundaries in case a
shared IPMI network is used.  This is a case where a Private VLAN
(PVLAN-Isolated) could be considered, to ensure devices on the IPMI
LAN cannot communicate with one another, and only devices on a
separate, dedicated IPMI management station subnet can interact with
the IPMI LAN.

-- 
-JH


Re: A simple proposal

2014-05-16 Thread Jimmy Hess
On Fri, May 16, 2014 at 12:26 AM, Matthew Petach mpet...@netflight.com wrote:

 You want to stream a movie?  No problem;
 the video player opens up a second data
 port back to a server next to the streaming
 box; its only purpose is to accept a socket,
 and send all bits received on it to /dev/null.
 The video player sends back an equivalent
 stream of data to what is being received in.

1. Take the understanding that the media player will return the
stream it received.  For the sake of expediency and avoiding
unnecessary waste ("Enhanced efficiency"), I suggest the introduction
of a new frame format: the "Null reduced frame" and "Null reduced IP
packet".

This is an IP packet which logically contains N bytes of payload, but
is transmitted without its payload; it is to be understood as having
contained those N octets of payload data for administrative and
billing purposes, where N is some number between 1 octet and (2^32 -
1) octets.

The media player can then emit these Null-reduced IP datagrams that
contain no physical payload.  A flag will be set in the return packet
and frame to indicate that, although the IP datagram physically
contains no actual data, it MUST be counted on all device interface
counters and Netflow reports as N octets, and treated as having
contained N octets for the purposes of billing and peering
negotiations.

--

2. Excellent.  Especially if the video player receives streams over
UDP and doesn't verify the source IP address before sending the stream
back; what could possibly go wrong?

3. On second thought, why not send the return stream to another
subscriber?  Stream the thing only once, buffering the content on a
subset of the users' media players.  The users' media players then
shape the return stream in order to distribute the content; they could
even SEND more data back to the content provider than they receive, if
this benefits the content provider in peering negotiations.



-- 
-JH


Re: Residential CPE suggestions

2014-05-06 Thread Jimmy Hess
On Tue, May 6, 2014 at 2:31 PM, Scott Weeks sur...@mauigateway.com wrote:

I wouldn't worry.  A fancy GUI without intelligent engineering and
design behind it is just more rope for everyone to hang themselves
with, especially when something in the GUI inevitably doesn't work
quite like it's supposed to.

Network vendor GUIs never work 100% like they are supposed to; there's
always, eventually, some bug or another, or a limitation requiring
some workaround.

And IPv6 is a game-changer.

 It looks like everyone here should start looking for a new
 career: Next-generation user experience allows anyone to
 quickly become a routing expert.

 ;-)
 scott
-- 
-JH


Re: The FCC is planning new net neutrality rules. And they could enshrine pay-for-play. - The Washington Post

2014-04-24 Thread Jimmy Hess
On Thu, Apr 24, 2014 at 10:02 AM, Chris Boyd cb...@gizmopartners.com wrote:
 I'd like to propose a new ICMP message type 3 code --
 Communication with Destination Network is Financially Prohibited

Wait; it should be a new ICMP type, not just a new code, so the
header format/data section can be different.  And include in the error
response the 160-bit Bitcoin address the user should send the ransom,
err, I mean payment to for various drop rates, and a declaration of
the total payment that needs to be confirmed on the blockchain per
kilobyte for successful delivery of the payload at the offered service
levels :)

 --Chris
--
-JH



Re: Requirements for IPv6 Firewalls

2014-04-19 Thread Jimmy Hess
On Sat, Apr 19, 2014 at 1:08 PM, George William Herbert
george.herb...@gmail.com wrote:
 On Apr 18, 2014, at 9:10 PM, Dobbins, Roland rdobb...@arbor.net wrote:
 I don't know where you find ideas like this.

 There are stateful firewalls in the security packages in front of all the 
 internet facing servers in all the major service providers I've worked at.  
 Not *just* stateful firewalls, but they're in there.
 That one company is trying something different does not mean there isn't 
 widespread standardized use of the technology.


There is no widespread use of stateful firewall units, with the
stateful element as a single point of failure, in front of large
public web farms.

This is different from security package software on the individual web
servers.

There is plenty of one-off usage in small web farms, where DDoS is not
a concern.



 -george william herbert
--
-JH



Re: Requirements for IPv6 Firewalls

2014-04-18 Thread Jimmy Hess
On Fri, Apr 18, 2014 at 10:02 AM, William Herrin b...@herrin.us wrote:

It would appear point (5) in favor of NAT with IPv6 is the only point
that has any merit there.  (1) to (4) are just rationalizations: none
of them are the reasons IPv4 got NAT, none are valid, and none are
good reasons to bring NAT to IPv6 or to use NAT in designs of IPv6
networks.

You could also add, as "good" reasons: (6) a requirement for NAT based
on personal preference, and...

(7) "You don't need this NAT function anymore" is not a good reason
to withhold the feature as a design and implementation option --
"IPv6 is clearly not as mature as IPv4, when my IPv4 router has
greater flexibility in translation options."
---

(1) to (4) are just excuses for people who want NAT, to avoid
admitting the real reasons, which are illogical BUT still important.

(5) is quite valid.  Also, you cannot fight it.  You may have
customers that demand NAT, even though there is absolutely no sound
reason for NAT anymore.  The users will still buy whatever gives them
the feature they deemed important, based on their experience with
IPv4.

The fact of the matter is, the demand has irrational sources
contributing: comfort, change aversion, and loss aversion.

People want to keep, and not lose, their IPv4 or their IPv4 features.
They expect to cherry-pick IPv6's advantages without losing any
functionality from v4, and without any extra work or re-thinking of
their understanding of IP networking.

So if you are building IPv6 firewall software, you should definitely
include any NAT functionality you believe many of your potential
customers will demand.

The fact is, as a product vendor, to move the maximal number of people
to the IPv6 paradigm, you are still going to have to cater to people
with IPv4-style thinking.

Therefore, I fully expect all the NAT features of IPv4 to have an IPv6
equivalent appearing.


 1. Easier to manage the network if the IPv4 and IPv6 versions are [...]
 2. Risk management - developing a new operating posture for a new [...]
 3. Renumbering - works about as well in IPv6 as in IPv4, which is to [...]
 4. Defense in depth is a core principle of all security, network and [...]
5.
 Feel free to refute all four points. No doubt you have arguments you
 personally find compelling. Your arguments will fall on deaf ears. At
 best the arguments propose theory that runs contrary to decades of
 many folks' experience. More likely the arguments are simply wrong.

--
-JH



Re: ATT / Verizon DNS Flush?

2014-04-16 Thread Jimmy Hess
On Wed, Apr 16, 2014 at 11:56 AM,  valdis.kletni...@vt.edu wrote:
 On Wed, 16 Apr 2014 10:21:34 -0600, Steven Briggs said:
 Yeah...I know.  Unfortunately, the domain was mishandled by our
 registrar, who imposed their own TTLs on our zone, THEN turned it back over
 to us with a 48HR TTL.  Which is very bad.

 That's almost calling for a name-and-shame.

It's not hard to use WHOIS to look up the registrar of each of the
nameservers for proofpoint.com (ns1.proofpoint.us, ns3.proofpoint.us).

Long TTLs are appropriate for a production zone, but in my estimation
it is improper for a registrar to impose, or select by default, a TTL
longer than 1 hour for a newly published or newly changed zone.

The TTL can and should be reasonably low initially, and automatically
increased gradually over time, only after the zone has aged with no
record changes and confidence has increased that the newly published
zone is correct.
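
A sketch of that ramp, with illustrative thresholds only:

    def ttl_for_zone(hours_since_last_change):
        """Start low after any change; grow the TTL as the zone ages untouched."""
        if hours_since_last_change < 24:
            return 300       # 5 minutes right after a change
        if hours_since_last_change < 24 * 7:
            return 3600      # 1 hour once the zone has survived a day
        return 172800        # 48 hours only for long-stable zones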

-- 
-JH



  1   2   3   4   5   6   >