Re: [j-nsp] DDoS to core interface - mitigation

2018-03-08 Thread Roland Dobbins


On 9 Mar 2018, at 3:35, Saku Ytti wrote:


a) have an edge ACL which polices ICMP and UDP high ports destined to your links
and drops the rest
b) don't advertise your links in IGP or iBGP


This. iACL plus no link advertisement (you need a sound addressing plan to
make both practical at scale).


Here's a link to a .pdf preso which talks about network infrastructure 
self-protection.  It's Cisco-centric because that's my background, but 
the concepts are universal:




---
Roland Dobbins 


Re: [j-nsp] Publish API data over SNMP

2018-03-08 Thread Saku Ytti
On 8 March 2018 at 22:43, Phil Shafer  wrote:

> Unfortunately not, since MIBs use numeric identifiers for fields,
> so if a developer inserts "leaf foxtrot" between "leaf echo" and
> "leaf geronimo" (where it really belongs) then the MIB ordering has
> changed and the numbers are all off.  The MIB generator would need
> the numbers from the previous release to generate useful numbers.

To me it seems like you'd need three OID levels: one for key=>value,
one for the key's depth and one for the key's position. Even though the
position changes, the original key=>value stays the same. The position
isn't relevant for an SNMP consumer, but it is important if you want to
map it back.

I see no reason why arbitrary JSON couldn't be machine-translated to
SNMP and, crucially, back.

ASN.1 <-> XML/JSON is probably a solved problem.
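
Something along these lines, as a toy Python sketch (the OID branch
1.3.6.1.4.1.99999 is made up, hash collisions are ignored, and the split
differs slightly from the three levels above: the value OID's suffix is
the path of per-key identifiers, so depth falls out of the suffix
length, and a name sub-tree maps identifiers back to key strings so the
structure can be rebuilt):

import hashlib

BASE = "1.3.6.1.4.1.99999.1"   # made-up branch, not any real MIB
NAME, VALUE = 1, 2             # two sub-trees: id->key name, id-path->value

def key_id(key):
    # stable 31-bit identifier for a key name (collisions ignored here)
    return int(hashlib.sha1(key.encode()).hexdigest()[:8], 16) & 0x7FFFFFFF

def flatten(obj, prefix="", out=None):
    # flatten nested dicts into {oid: value} rows
    out = {} if out is None else out
    for key, val in obj.items():
        kid = key_id(key)
        out[f"{BASE}.{NAME}.{kid}"] = key               # id -> key string
        suffix = f"{prefix}.{kid}"                      # starts with a dot
        if isinstance(val, dict):
            flatten(val, suffix, out)
        else:
            out[f"{BASE}.{VALUE}{suffix}"] = str(val)   # path of ids -> leaf
    return out

def unflatten(rows):
    # rebuild the nested dict from the OID rows
    names = {oid.split(".")[-1]: val for oid, val in rows.items()
             if oid.startswith(f"{BASE}.{NAME}.")}
    tree = {}
    for oid, val in rows.items():
        if not oid.startswith(f"{BASE}.{VALUE}."):
            continue
        path = [names[i] for i in oid[len(f"{BASE}.{VALUE}."):].split(".")]
        node = tree
        for name in path[:-1]:
            node = node.setdefault(name, {})
        node[path[-1]] = val
    return tree

doc = {"fabric-statistics": {"tx-cells": "123", "rx-cells": "456"}}
rows = flatten(doc)
assert unflatten(rows) == doc

Because the identifier is derived from the key name rather than its
position, key=>value stays stable even when siblings are added or
reordered between releases.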

> The XML content is generated by the component, so the "show route"
> is forwarded to RPD, who emits the XML response.  MGD forwards that
> to the API client or to the CLI, who turns it into text.  The JSON
> generation (and the "format='text'") is done as a "filter" (in the
> unix sense of the word) to the output stream coming back to the API
> client.

This is all quite beautiful: a single source of truth. Unlike C (not
dissing them, every vendor has high points and low points), where
programmatic presentation support has to be written individually for
everything, which will never work: it has to be guaranteed 100%
coverage or it's useless.
I think it is exactly because of this infrastructure that you (and NOK)
have in place that adding new ways to emit data, JSON or SNMP, is a
rather cheap proposal.

And in the context of SNMP, who hasn't found themselves wanting this or
that CLI counter in SNMP? This would be one-off work which would pay
dividends continuously. It is also a strong marketing message: if you
can see it on the CLI, you can poll it from your NMS. Compare that to
how much ad-hoc work has already been spent adding OIDs that customers
request.

> Well, if this is a "one-off", we have two potential solutions.  First
> is the utility MIB, which holds simple name/value pairs, with the name
> being the key.  An intermittent event script can record a value into
> the utility MIB and an SNMP client can retrieve it.  More info is at:

Thank you for the proposals, but this certainly isn't the first OID
I've found missing, nor the last. And these proposed solutions don't
contribute much towards reducing the work. With every CLI counter
guaranteed an SNMP OID, people could ship more turn-key NMS packages to
Juniper users.

> My concern was with ensuring that a PDU containing requests for the
> value of "foxtrot" and "geronimo" gets these values from the same
> RPC output, instead of issuing the RPC twice, which makes more work
> and may give inconsistent results.  Sorry if my previous explanation
> (and perhaps this one as well ;^) wasn't clear.

Aah yes, I get it now. The implication is that either SNMP becomes a
potentially costly proposal, if walking, say, fabric statistics
executes the RPC every time a value is asked for, or the implementation
has to be stateful or at least caching, which increases complexity and
bug surface.
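
The caching variant is not much code in principle. A rough sketch
(hypothetical, nothing Junos-specific) where every value fetched within
a short window is answered from one cached RPC result, so a walk over
e.g. fabric statistics triggers one RPC rather than one per varbind,
and all returned values come from the same snapshot:

import time

class CachedRpcTable:
    # answer many OID gets from one RPC result, refreshed after a TTL
    def __init__(self, run_rpc, ttl=5.0):
        self.run_rpc = run_rpc    # callable returning a dict of counters
        self.ttl = ttl
        self._data = None
        self._stamp = 0.0

    def _snapshot(self):
        if self._data is None or time.monotonic() - self._stamp > self.ttl:
            self._data = self.run_rpc()     # the single, possibly costly RPC
            self._stamp = time.monotonic()
        return self._data

    def get(self, column):
        return self._snapshot().get(column)

# stand-in for something like "show class-of-service fabric statistics"
table = CachedRpcTable(lambda: {"tx-cells": 123, "rx-cells": 456})
print(table.get("tx-cells"), table.get("rx-cells"))   # one RPC, same snapshot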

-- 
  ++ytti


Re: [j-nsp] Publish API data over SNMP

2018-03-08 Thread Phil Shafer
Saku Ytti writes:
>As a user I'd be comfortable with stability that matches the display
>XML/JSON stability, and I think that level of stability would be
>implied.

Unfortunately not, since MIBs use numeric identifiers for fields,
so if a developer inserts "leaf foxtrot" between "leaf echo" and
"leaf geronimo" (where it really belongs) then the MIB ordering has
changed and the numbers are all off.  The MIB generator would need
the numbers from the previous release to generate useful numbers.
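
That workaround can be sketched in a few lines (hypothetical tooling,
not our actual MIB generator): persist the leaf-to-number map across
releases and only ever append, so an inserted "leaf foxtrot" gets a
fresh number instead of shifting "geronimo":

import json, pathlib

def assign_numbers(leaves, registry="mib-numbers.json"):
    # leaves: leaf names in schema order; numbers persist across releases
    path = pathlib.Path(registry)
    numbers = json.loads(path.read_text()) if path.exists() else {}
    next_num = max(numbers.values(), default=0) + 1
    for leaf in leaves:
        if leaf not in numbers:      # existing leaves keep their numbers
            numbers[leaf] = next_num
            next_num += 1
    path.write_text(json.dumps(numbers, indent=2))
    return numbers

# release N ships echo and geronimo:
print(assign_numbers(["echo", "geronimo"]))              # echo=1, geronimo=2
# release N+1 inserts foxtrot between them; nothing is renumbered:
print(assign_numbers(["echo", "foxtrot", "geronimo"]))   # foxtrot=3

In practice the registry would be seeded from the previous release's
shipping MIB rather than a JSON file, but the idea is the same.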

>All this relies on my assumption that JNPR is machine-generating the
>JSON/XML output automatically for new commands, and that 100% coverage
>is guaranteed. If this is true, this CLI-MIB could be machine-generated
>the same way.

The XML content is generated by the component, so the "show route"
is forwarded to RPD, who emits the XML response.  MGD forwards that
to the API client or to the CLI, who turns it into text.  The JSON
generation (and the "format='text'") is done as a "filter" (in the
unix sense of the word) to the output stream coming back to the API
client.
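
Purely to illustrate the "filter" idea, here is a toy downstream
converter (nothing like MGD's actual code; namespaces, attributes and
mixed content are ignored): the daemon keeps emitting XML, and JSON is
produced by a filter over that output stream:

import json, sys
import xml.etree.ElementTree as ET

def to_obj(elem):
    # leaf elements become strings, children become nested keys,
    # repeated tags become lists
    children = list(elem)
    if not children:
        return (elem.text or "").strip()
    obj = {}
    for child in children:
        val = to_obj(child)
        if child.tag in obj:
            if not isinstance(obj[child.tag], list):
                obj[child.tag] = [obj[child.tag]]
            obj[child.tag].append(val)
        else:
            obj[child.tag] = val
    return obj

# usage, in the unix sense:  some-xml-rpc | python3 xml2json.py
if __name__ == "__main__":
    root = ET.fromstring(sys.stdin.read())
    print(json.dumps({root.tag: to_obj(root)}, indent=2))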

>In practice I rarely use MIBs, and if the CLI offered 'show
>class-of-service fabric statistics | display oid', that would be good
>enough for me. So not having a MIB at all wouldn't bother me. My only
>use case for MIBs is finding which OID to poll; the actual tooling
>does not load MIBs.

Well, if this is a "one-off", we have two potential solutions.  First
is the utility MIB, which holds simple name/value pairs, with the name
being the key.  An intermittent event script can record a value into
the utility MIB and an SNMP client can retrieve it.  More info is at:


https://www.juniper.net/documentation/en_US/junos/topics/task/operational/snmp-best-practices-utility-mib-using.html

The other path is "snmp scripts" where one can associate an OID
with a script that is run to generate data for that OID.  More info
is at:


https://www.juniper.net/documentation/en_US/junos/topics/concept/junos-script-automation-snmp-script-overview.html

>> The second issue (which is mostly my lack of SNMP depth) is how
>> to ensure the results of multiple queries are drawn from the same
>> RPC result.  Looking at "show chassis environment", the results
>> should give consistent output.
>
>I'm not sure I understand the problem or question.

My concern was with ensuring that a PDU containing requests for the
value of "foxtrot" and "geronimo" gets these values from the same
RPC output, instead of issuing the RPC twice, which makes more work
and may give inconsistent results.  Sorry if my previous explanation
(and perhaps this one as well ;^) wasn't clear.

Thanks,
 Phil


Re: [j-nsp] DDoS to core interface - mitigation

2018-03-08 Thread Saku Ytti
Hey Daniel,

Apologies for not answering your question, but generally this is not a
problem, because:

a) have an edge ACL which polices ICMP and UDP high ports destined to your links
and drops the rest
b) don't advertise your links in IGP or iBGP
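
To make (a) a bit more concrete, here is a small sketch that just
prints illustrative Junos-style set commands (all names, prefixes and
rates are made-up placeholders; check the syntax and tune the numbers
before trusting anything like this on a real box):

# prints an illustrative infrastructure ACL; placeholder names and rates
LINK_PREFIXES = ["192.0.2.0/24", "198.51.100.0/25"]   # your p2p link space

def infra_acl(prefixes):
    t = "set firewall family inet filter EDGE-INFRA-IN term"
    cmds = [
        "set firewall policer INFRA-1M if-exceeding bandwidth-limit 1m burst-size-limit 15k",
        "set firewall policer INFRA-1M then discard",
    ]
    cmds += [f"set policy-options prefix-list INFRA-LINKS {p}" for p in prefixes]
    cmds += [
        # police ICMP destined to link addresses
        f"{t} police-icmp from destination-prefix-list INFRA-LINKS",
        f"{t} police-icmp from protocol icmp",
        f"{t} police-icmp then policer INFRA-1M",
        f"{t} police-icmp then accept",
        # police UDP high ports destined to link addresses (keeps traceroute working)
        f"{t} police-udp-high from destination-prefix-list INFRA-LINKS",
        f"{t} police-udp-high from protocol udp",
        f"{t} police-udp-high from destination-port 1024-65535",
        f"{t} police-udp-high then policer INFRA-1M",
        f"{t} police-udp-high then accept",
        # drop everything else aimed at the link addresses
        f"{t} drop-to-links from destination-prefix-list INFRA-LINKS",
        f"{t} drop-to-links then discard",
        # and let all other (transit) traffic through
        f"{t} transit then accept",
        # applied ingress on edge-facing interfaces:
        "set interfaces xe-0/0/0 unit 0 family inet filter input EDGE-INFRA-IN",
    ]
    return cmds

print("\n".join(infra_acl(LINK_PREFIXES)))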



On 8 March 2018 at 22:17, Dan Římal  wrote:
> Hi all,
>
> I would like to discuss how you handle a DDoS attack pointed at the IP
> address of a router core interface, if your uplink ISP supports RTBH and
> you would like to drop the traffic at the ISP level because of congested links.



-- 
  ++ytti

[j-nsp] DDoS to core interface - mitigation

2018-03-08 Thread Dan Římal
Hi all,

I would like to discuss how you handle a DDoS attack pointed at the IP address
of a router core interface, if your uplink ISP supports RTBH and you would like
to drop the traffic at the ISP level because of congested links.

I have tried to implement "classic" BGP-signalled RTBH by changing the next-hop
to a discard route. It works well for customer IPs, but applied to a
core-interface IP address it drops the routing protocols running on the
interfaces between routers (because the /32 discard route is more specific than
the /31 or shorter p2p route). I then tried to implement an export filter
between the RIB and the FIB (routing-options forwarding-table export) so that
these routes are not installed in the FIB. That looks better; it doesn't drop
BGP/BFD/... anymore, but it only half works. Let me try to explain:

I have two routers, each with a transit operator (UPLINK-A, UPLINK-B), and they
are connected to each other. The router interconnect is, let's say,
192.168.72.248/31 (.248 on router-A, .249 on router-B). I start to propagate
the discard route 192.168.72.248/32 via iBGP from the DDoS detection appliance
to both routers. Router-B receives the RTBH route as the best route, skips
installing it in the FIB because of the RIB-to-FIB export filter, and starts to
propagate the route with the blackhole community to UPLINK-B. UPLINK-B drops
the destination at their edge. Good.

But router-A receives the same blackhole route, though not as the best route,
because it has the same /32 as a Local route with a lower route preference:

192.168.72.248/32  *[Local/0] 34w1d 07:59:10
  Local via ae2.3900
[BGP/170] 07:43:20, localpref 2000
  AS path: I, validation-state: unverified
> to 10.110.0.12 via ae1.405

So router-A doesn't start propagating the blackhole route to UPLINK-A (because
it is not the best route, I guess) and the DDoS still comes in from UPLINK-A.

How can I handle this situation? Maybe set a lower route preference from the
detection appliance than the default 170? But a directly connected network has
preference 0, so I cannot go lower, and I cannot get more specific than the
Local /32. Or maybe use BGP advertise-inactive toward my UPLINKs? Will that help?
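
The selection problem boils down to a toy comparison like this (not
Junos code; whether advertise-inactive is the right fix here is exactly
the open question):

# toy model of router-A's view of 192.168.72.248/32
routes = [
    {"proto": "Local", "preference": 0,   "via": "ae2.3900"},
    {"proto": "BGP",   "preference": 170, "community": "blackhole"},
]

active = min(routes, key=lambda r: r["preference"])
best_bgp = next(r for r in routes if r["proto"] == "BGP")

print("active route:", active["proto"])                              # Local
print("exported to UPLINK-A by default:", active["proto"] == "BGP")  # False
# advertise-inactive would make best_bgp the export candidate instead
print("export candidate with advertise-inactive:", best_bgp["proto"])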

Thanks!

Daniel


Re: [j-nsp] Publish API data over SNMP

2018-03-08 Thread Saku Ytti
Hey,

> I'm not an snmp-head, but something could certainly be done here.
> I see two issues, one being the need for a formal MIB where our
> content evolves release-to-release.  Making per-release MIBs would
> be a pain, and I'm not sure how well tools would handle those.  A
> "generic" MIB might be suitable, where the key is the RPC (and its
> arguments?) and the fields are the results of the RPC.

As a user I'd be comfortable with stability that matches the display
XML/JSON stability, and I think that level of stability would be
implied.

All this relies on my assumption that JNPR is machine-generating the
JSON/XML output automatically for new commands, and that 100% coverage
is guaranteed. If this is true, this CLI-MIB could be machine-generated
the same way.

In practice I rarely use MIBs, and if the CLI offered 'show
class-of-service fabric statistics | display oid', that would be good
enough for me. So not having a MIB at all wouldn't bother me. My only
use case for MIBs is finding which OID to poll; the actual tooling
does not load MIBs.

> The second issue (which is mostly my lack of SNMP depth) is how
> to ensure the results of multiple queries are drawn from the same
> RPC result.  Looking at "show chassis environment", the results
> should give consistent output.

I'm not sure I understand the problem or question.

-- 
  ++ytti


Re: [j-nsp] certain commands executed on CLI provide additional information over corresponding RPCs

2018-03-08 Thread Saku Ytti
Hey Phil,

I'm hijacking this for a bit.

You have | display json and xml. I assume json was a relatively modest
amount of work, as you have a formal source of data, so someone only
needed to write a translator, without being aware of all the context,
to support | display json. This also means no one needs to do any work
to get display xml or json working on a newly introduced command?

If this is even remotely true, shouldn't it be equally possible to
present over SNMP all data which is presentable as JSON and XML? There
are a bunch of gaps on relatively important stuff which I'd love to see
available in SNMP. This week in particular I was frustrated to find
that 'show class-of-service fabric statistics' is not available over
SNMP. Perhaps introduce some CLI-MIB where OIDs are generated for all
json/xml-supporting commands, plus a CLI command to ask for the OID of
a particular command?




On 8 March 2018 at 07:26, Phil Shafer  wrote:
> Martin T writes:
>>I have noticed that certain commands executed on CLI provide some
>>additional information over corresponding RPCs. For example "show ipv6
>>neighbors" or "show system storage" on CLI show column names while XML
>>output does not contain this data. Why is that so?
>
> Both the CLI and RPC content contain the same information, but the
> CLI takes the data supplied by the RPC and displays it using rules
> specified by the developer.  These rules include column headers,
> field titles, and other gritty little details.
>
> But these are "display" features.  The API is meant to allow access
> to the data, and to make that data the same data used by the CLI,
> so the API is complete, up-to-date, well-tested, and useful.
>
> If you want to use the API to get pure text data, we do have the
> 'format="text"' attribute that can be put on an RPC.
>
> Thanks,
>  Phil



-- 
  ++ytti