RE: HTTPS redirects to HTTP for monitoring

2015-01-18 Thread Teleric Team
Honestly, don't do this. Neither option. You can still have some control over 
SSL access with ordinary domain-based filtering through a proxy, via the CONNECT 
method or similar. You don't need filtering capabilities over the full set of 
POST/PUT/DELETE HTTP methods, and if you believe you need them, you have a 
bigger problem that MITMing won't solve at all. It's just like believing a 
data leak prevention system will really prevent data from leaking.
Or believing a Palo Alto NGFW policy that allows Gmail but blocks Gmail 
attachments of MIME type XYZ will be effective. If someone is really 
determined, there are ways to bypass it more clever than your options 
to filter it.
Forcing HTTP fallback for HTTPS communication is not only wrong, it's a general 
regression in security policy and best practices. You are risking 
privacy, or confidentiality and integrity if you prefer ISO 27002 buzzwords. 
Not to mention the availability breakage, since many applications simply won't 
work (i.e., you will break them).
On the other hand, adding a MITM strategy, whether using Squid, Fortinet, pfSense, 
Palo Alto, SonicWall, or Endian Firewall, is even worse. You are adding your own 
attack vector to your company. You are doing the difficult part of the attack for 
the attacker by installing a custom root cert on your client stations. So you will 
have much more to worry about: who has access to it, how vulnerable it is, how it 
is deployed and what is deployed, what is revoked, how renegotiation is handled, 
CRIME, and so on. You will have more root causes and attack vectors to care about: 
not only how safe the remote destination SSL server is, but how safe the path is 
from client to the local MITM proxy, and from the local MITM proxy to the remote 
SSL server.
You are attacking, cracking, and breaking your own network. If someone raises 
your Squid log levels, you will have to answer for that, and answer for whatever 
was copied before you noticed it. The same goes for Fortinet, Websense, SonicWall, 
or whatever open source or proprietary solution you pick. You are still 
breaking confidentiality and integrity, but now without letting 
ordinary users or applications notice it.
Back to the beginning: you can still do some level of HTTPS filtering and per-domain 
control without having to fully MITM and inspect the payload. Don't 
add OWASP Top 10 / SANS Top 25 facilitation vectors to your company. Do the 
usual limited but still safe filtering (not counting that unknown OpenSSL 
zero-day the NSA and people on IRC know about but the industry still ignores, or any 
other conspiracy theory/fact): do just whatever can be filtered 
without MITMing HTTPS or redirecting it to HTTP. And don't be seduced by the 
possibility of filtering more than that. It's a trap, for both your users and 
your responsibilities as an organization regarding users' privacy, not to mention 
applicable acts and other laws in your state/country.
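For what it's worth, here is a minimal sketch of the explicit-proxy, domain-only 
approach I mean, using Squid (the domain is a placeholder and the directives are 
illustrative; there is no ssl_bump, so the payload stays encrypted end to end and 
the proxy only ever sees the CONNECT host):

  acl SSL_ports port 443
  acl CONNECT method CONNECT
  acl blocked_https dstdomain .blocked-example.com
  # refuse tunnels to the listed domains; everything else keeps the stock rules
  http_access deny CONNECT blocked_https
  http_access deny CONNECT !SSL_ports

You get per-domain control and logging of which sites were reached, without ever 
touching the TLS payload.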


 Date: Sun, 18 Jan 2015 04:29:56 -0800
 Subject: HTTPS redirects to HTTP for monitoring
 From: shortdudey...@gmail.com
 To: nanog@nanog.org
 
 Hi Everyone,
 
 I wanted to see what opinions and thoughts were out there.  What software,
 appliances, or services are being used to monitor web traffic for
 inappropriate content on the SSL side of things?  Personal use?
 Enterprise use?
 
 It looks like Websense might do decryption (
 http://community.websense.com/forums/t/3146.aspx) while Covenant Eyes does
 some sort of session hijack to redirect to non-ssl (atleast for Google) (
 https://twitter.com/CovenantEyes/status/451382865914105856).
 
 Thoughts on having a product that decrypts SSL traffic internally vs one
 that doesn't allow SSL to start with?
 
 -Grant
  

RE: Level3 to Savvis/CenturyLink problems?

2014-12-18 Thread Teleric Team


 From: leah.revelio...@virginamerica.com
 Date: Wed, 17 Dec 2014 13:47:33 -0800
 Subject: Level3 to Savvis/CenturyLink problems?
 To: nanog@nanog.org
 
 We have an MPLS backbone from Level3 and are experiencing issues between
 two of our San Francisco Bay Area locations – namely our HQ in Burlingame,
 CA and our Savvis/CenturyLink Data Center in Santa Clara, CA.  Ping
 response times between these 2 sites are within the normal range, but our
 Applications are timing out. We’ve obviously done a ton of troubleshooting
 on the Application side of things, but everything points back to a circuit
 issue and after a lot of testing we are scratching our heads trying to
 narrow down this issue.  So I wanted to post to this list to see if anyone
 else has noticed issues with their Level3 circuits in the Bay Area over the
 last 2 days as well?
Hello Leah,
Yes and no, not anymore.
From Monday to Wednesday we had bad connectivity from LA to Sacramento, but unlike 
your scenario, ping latency was good while packet loss was noticeable. 
After two days on the phone we got someone to skip San Jose, San 
Francisco, and the whole SF Bay Area, routing us via Fresno-Modesto to reach 
Sacramento. Latency increased a bit, but overall quality was restored.
So yes, I can somewhat confirm there's something going on, with apparently isolated 
issues in the Bay Area, especially near San Jose. It seems isolated because, from 
what I gathered on the phone, it was only me. And now you.

 
 
 
 Thanks for your feedback!
 
 Regards, Leah
  

RE: possible twtelecom routing issue

2014-12-07 Thread Teleric Team


 Date: Fri, 5 Dec 2014 02:19:46 -1000
 From: t...@lavanauts.org
 To: nanog@nanog.org
 Subject: possible twtelecom routing issue
 
 Trying to gather information on a connectivity issue between TW Telecom 
 and a specific government web server.  If one of your upstream providers 
 is TW Telecom, could you report back whether you have connectivity to 
 https://safe.amrdec.army.mil.  Thanks.
I can reach it through Level3. Is your TW Telecom traffic routed via L3 hops 
already, or still the legacy path? What's your AS path / hop list to the destination?
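For instance (assuming a Linux traceroute or mtr built with AS lookup support), 
something like this from your side would show both:

  traceroute -A safe.amrdec.army.mil
  # or, in report mode with the AS number shown per hop
  mtr -z -r -c 10 safe.amrdec.army.mil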
 
 Antonio Querubin
 e-mail:  t...@lavanauts.org
 xmpp:  antonioqueru...@gmail.com
  

RE: 10Gb iPerf kit?

2014-12-07 Thread Teleric Team
 From: p...@fiberphone.co.nz
 Subject: Re: 10Gb iPerf kit?
 Date: Sun, 7 Dec 2014 09:24:41 +1300
 To: nanog@nanog.org
 
 On 11/11/2014, at 1:35 PM, Randy Carpenter rcar...@network1.net wrote:
 
  I have not tried doing that myself, but the only thing that would even be 
  possible that I know of is thunderbolt.
  A new MacBook Pro and one of these maybe: 
  http://www.sonnettech.com/product/echoexpresssel_10gbeadapter.html
 
 Or one of these ones for dual-10Gbit links (one for out of band management or 
 internet?):
 
   http://www.sonnettech.com/product/twin10g.html
 
 I haven't tried one myself, but they're relatively cheap (for 10gig) so not 
 that much outlay to grab one and try it (esp if you already have an Apple 
 laptop you can test with).
 
How would you use it? With iperf still? I don't think you will get anywhere close 
to 14.8 Mpps per port this way, unless you are talking about bandwidth testing 
with full-sized frames and a low pps rate.
I personally tested a 1 Gbit/s port on a MacBook Pro Retina 15 Thunderbolt GbE 
adapter with a BCM5701 chipset and got only 220 kpps on a single TX flow. Later I 
tried another adapter with a Marvell Yukon mini port; it had a better pps rate, but 
nothing beyond 260 kpps.
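For reference, the kind of small-packet UDP test I mean looks roughly like this 
(a sketch only; 192.0.2.10 is a placeholder, and 64-byte payloads give roughly 
110-byte frames on the wire, so the bottleneck is pps rather than throughput):

  # receiver
  iperf -s -u -i 1
  # sender: UDP, small datagrams, try to push toward line rate
  iperf -c 192.0.2.10 -u -b 1000m -l 64 -t 30 -i 1

Watch the packet counts and loss reported on the receiver rather than the 
Mbit/s figure.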

 I've done loads of 1Gbit testing using the entry-level MacBook Air and a 
 Thunderbolt Gigabit Ethernet adapter though, and I disagree with Saku's 
 statement of 'You cannot use UDPSocket like iperf does, it just does not 
 work, you are lucky if you reliably test 1Gbps'. I find iperf testing at 
 1Gbit on Mac Air with Thunderbolt Eth extremely reliable (always 950+mbit/sec 
 TCP on a good network, and easy to push right to the 1gbit limit with UDP.
Again, with 64-byte packets? Or are you talking MTU-sized?
At MTU size you can try whatever you want and it will seem reliable. A 
wget/FTP download of a 1 GB file will give similar results, but I don't think 
this is useful anyway, since it won't test anything close to RFC 2544, or even 
an ordinary internet traffic profile with a mix of ~600-byte packets combined 
with a lower rate of smaller packets (ICMP/UDP: ping/DNS/NTP/voice/video).
I am also interested in a cheap and reliable method to test 10GbE connections. 
So far I haven't found something I trust.
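Two concurrent UDP streams can at least approximate that kind of mix (a rough 
sketch; addresses, ports, and rates are placeholders):

  # receiver: one iperf instance per port
  iperf -s -u -p 5001 -i 1 &
  iperf -s -u -p 5002 -i 1 &
  # sender: bulk of the load as ~600-byte datagrams, plus a lower-rate small-packet stream
  iperf -c 192.0.2.10 -u -p 5001 -b 800m -l 600 -t 60 &
  iperf -c 192.0.2.10 -u -p 5002 -b 50m -l 64 -t 60 &

It is still nothing like RFC 2544, but it is closer to real traffic than a single 
full-MTU stream.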
 
 Pete
 
  

RE: Anybody at Amazon AWS?

2014-12-04 Thread Teleric Team


 From: amitch...@isipp.com
 Subject: Anybody at Amazon AWS?
 Date: Thu, 4 Dec 2014 09:15:36 -0700
 To: nanog@nanog.org
 
 Anybody have a contact at Amazon AWS?
 
 I sent in a spam complaint, and got back the below response - while I give 
 them kudos for actually, you know, responding, I'm pretty sure that we can 
 all agree that sending the same canned message to email addresses scraped 
 off websites is the very definition of spam, yet somehow the EC2 abuse team 
 seems to consider it a perfectly acceptable explanation  - I'd sure love to 
 discuss this with someone with a clue at Amazon AWS
Did you try their abuse telephone number, +1 (206) 266-2187?
Once when I needed it, I got proper service on that number.
Anyway, I am not sure a contact will make a difference. As I see the case, 
honestly, it's you complaining against their customer, and Amazon is profiting 
from that customer. If you and only you are complaining, I don't believe you 
will be heard.
Anyway, the customer admitted they sent UCE, but won't admit it was spam. As I 
see it, the customer states that an e-mail was sent to an address you have 
published as a contact address, and therefore they have contacted you. In 
a canned way, sure, but if it were a personal e-mail offering you something you 
don't care about, would you file an abuse report, or just ignore/decline the 
offer?
If I write you a polite message right from my MUA, don't mention your name, 
treat you pretty much like a generic person I don't know, and offer my 
services, my résumé, or try to show you a product I created myself 
and believe might be of interest to you, it's certainly UCE, but will you 
complain to my provider stating I was spamming you?
Well, if it's true that the sender used Gmail (you can check your e-mail 
headers), pasted your address into their MUA or webmail as a Bcc or something 
like that, Gmail didn't block the outgoing message, and you (and maybe two or 
three other individuals) didn't like it, I don't think Amazon or Google will 
consider it abuse of their services.
Certainly it's not good practice. It's not something nice to do, or to receive. 
But is it abuse? I don't think so, especially if a minimum of good practice is 
in place, such as an opt-out mechanism or similar.
Good luck with that phone call. You will find someone to talk to, but I'm not 
sure you will find someone who agrees with you that it's abuse.

 ---
 
 Our customer has responded to your abuse report and provided the following 
 information
 
 The below emails were sent individually to the recipient using a canned 
 message. There is no automation or mass emailing at all. Our publisher 
 representative personally visited each of the below websites, decided they 
 were right for our service and emailed them individually. The emails are sent 
 through gmail using a web interface to their API.
 
 Let me know if you require any additional information.
 
 Dwayne
 
 If you are satisfied with the above information, there is no need to respond 
 to this notice. If you are not satisfied, please respond with a clear, 
 succinct reason for dissatisfaction and what results you desire from our 
 customer. We will make every reasonable attempt to work with you and our 
 customer to resolve this matter.  
 
 Thank you,
 The EC2 Abuse team
 
 ---
 
 Anne
 
 Anne P. Mitchell, Esq.
 CEO/President
 ISIPP SuretyMail Email Accreditation & Certification
 Your mail system + SuretyMail accreditation = delivered to their inbox!
 http://www.SuretyMail.com/
 http://www.SuretyMail.eu/
 
 Author: Section 6 of the Federal CAN-SPAM Act of 2003
 Member, California Bar Cyberspace Law Committee
 Ret. Professor of Law, Lincoln Law School of San Jose
 https://www.linkedin.com/in/annemitchell
 303-731-2121 | amitch...@isipp.com | @AnnePMitchell | Facebook/AnnePMitchell 
 
 
  

RE: How to track DNS resolution sources

2014-12-03 Thread teleric team


 Date: Wed, 3 Dec 2014 17:56:23 +0100
 From: bortzme...@nic.fr
 To: notify.s...@gmail.com
 Subject: Re: How to track DNS resolution sources
 CC: nanog@nanog.org
 
 On Wed, Dec 03, 2014 at 05:22:58PM +0100,
  Notify Me notify.s...@gmail.com wrote 
  a message of 13 lines which said:
 
  I hope I'm wording this correctly.
 
 Not really :-)
 
  I had a incident at a client site where a DNS record was being
  spoofed.
 
 How do you know? What steps did you use to assert this? Answers to
 these questions would help to understand your problem.
 
  How does one track down the IP address that's returning the false
  records ?
 
 If it's real DNS spoofing (which I doubt), the source IP address of
 the poisoner is forged, so it would not help.
 
 The main tool to use is dig. Let's assume the name that bothers you is
 foobar.example.com. Query your local resolver:
 
 dig A foobar.example.com
 
 Query an external resolver, here Google Public DNS:
 
 dig @8.8.4.4 A foobar.example.com
 
 Query the authoritative name servers of example.com. First, to find them:
 
 dig NS example.com
 
 Second, query them (replace the server name by the real one):
 
 dig @a.iana-servers.net. A foobar.example.com

I didn't understand how this will help him identify the poisoner.
What an IDS rule can do is check for authoritative responses whose query IDs 
correspond to DNS queries never sent to that responder, but sent to the authoritative 
server identified as above (by direct NS inquiry).
If no IDS is present, BIND logging would allow identification of 
authoritative responses and their query IDs. 
In summary, whatever is answered authoritatively by a server other than the NS 
set tracked by dig +trace foobar.example.com is the potential poisoner. But 
if the poisoning is done from a spoofed IP address (spoofing the authoritative 
server's IP), well, good luck with that if the spoofed domain is not DNSSEC-signed.