Google IPv6 geo location problem
Hello, At the beginning of this year we started to roll out IPv6 for a large part of our customer base. Everything was working perfectly fine until mid-February, when Google decided to geolocate our entire 2a00:e60::/32 IPv6 net to Iran. We expected that it would be done for /48 to /64 blocks (on IPv4 it's done for each IP). Being an ISP with mostly satellite internet customers from all over the world, we're used to problems caused by IP-based geolocation. Usually the complaints are "Please change the language of google.com to English, I don't understand the currently configured language" (this happens if the previous user of the IP address trained Google to some other language). Most of our customers (mostly military, oil & gas, government, and international companies) expect an English internet. Usually we can fix their problems by changing their IP address. But with our entire /32 being handled as Iran by Google, this is a problem. Google apparently blocks Iran from using any remotely commercial service (I assume due to the sanctions against Iran). We got a lot of customers complaining about not being able to log in to Google Apps ("Unable to sign in from this country - You appear to be signing in from a country where Google Apps accounts are not supported."). But plenty of other purely informational Google websites (e.g. http://www.html5rocks.com/) returned a 403 "We're sorry, but this service is not available in your country." forbidden error too. We had to revert our IPv6 rollout again due to this problem. We tried submitting a correction request via https://support.google.com/websearch/contact/ip back in February, but that apparently wasn't processed. Is there anyone from Google around who can help with this? This is currently blocking our IPv6 deployment. Best Regards, Freddy AS62023 / NYNEX satellite OHG
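[Editor's note: a minimal sketch of the granularity gap the poster describes, using Python's standard `ipaddress` module. The prefix is the one from the post; everything else is illustrative.]

```python
import ipaddress

# A single IPv6 allocation like 2a00:e60::/32 contains 65,536 /48
# end-site blocks. Geolocating the whole /32 as one country paints
# every customer with the same brush, whereas per-/48 (or finer)
# entries could reflect where each end site actually is.
allocation = ipaddress.ip_network("2a00:e60::/32")

num_48s = 2 ** (48 - allocation.prefixlen)
print(num_48s)  # 65536

# First few /48 blocks that could each carry their own geo entry:
first_sites = [str(net) for _, net in zip(range(3), allocation.subnets(new_prefix=48))]
print(first_sites)  # ['2a00:e60::/48', '2a00:e60:1::/48', '2a00:e60:2::/48']
```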
Re: Network Segmentation Approaches
On Mon, May 04, 2015 at 07:55:43PM -0700, nan...@roadrunner.com wrote: Possibly a bit off-topic, but curious how all of you out there segment your networks. [snip] I break them up by function and (when necessary) by the topology enforced by geography. The first rule in every firewall is of course deny all and subsequent rulesets permit only the traffic that is necessary. Determining what's necessary is done via a number of tools: tcpdump, ntop, argus, nmap, etc. When possible, rate-limiting is imposed based on a multiplier of observed maxima. Performance tuning is done after functionality and is usually pretty limited: modern efficient firewalls (e.g., pf/OpenBSD) can shovel a lot of traffic even on modest hardware. ---rsk
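[Editor's note: a minimal sketch of the "deny all first, then permit only what's necessary" policy described above, modelled on pf-style last-match-wins evaluation with a default of deny. The rule fields and policy are hypothetical, not rsk's actual ruleset.]

```python
from dataclasses import dataclass

@dataclass
class Rule:
    action: str            # "block" or "pass"
    proto: str = None      # None matches any protocol
    dst_port: int = None   # None matches any destination port

def evaluate(rules, proto, dst_port):
    """pf-style evaluation: the last matching rule wins; start from block."""
    action = "block"
    for r in rules:
        if r.proto in (None, proto) and r.dst_port in (None, dst_port):
            action = r.action
    return action

policy = [
    Rule("block"),              # first rule: block all
    Rule("pass", "tcp", 443),   # subsequent rules pass only necessary traffic
    Rule("pass", "udp", 53),
]

print(evaluate(policy, "tcp", 443))  # pass
print(evaluate(policy, "tcp", 23))   # block (telnet never explicitly allowed)
```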
Re: Network Segmentation Approaches
On Mon, May 4, 2015 at 9:55 PM, nan...@roadrunner.com wrote: There's quite a bit of literature out there on this, so have been considering an approach with zones based on the types of data or processes within them. General thoughts: It depends on the users and tasks on the network. Different segmentation strategies and tradeoffs get selected depending on what there is to be protected, where broadcast domain size needs to be controlled, and the value tradeoffs involved. Segmenting certain systems, segmenting certain data, or more likely both are called for to mitigate selected risks: security risks as well as network risks, or to allow certain networks to be moved independently to maintain continuity after DR. - Business Zone - This would be where workstations live, but I should [...] generally be OK letting anything in this zone talk fairly unfettered to anything else in this zone Since you imply all workstations would live in the same zone as each other, instead of being isolated or placed into job-role-specific access segments, what you have here is a non-segmented network. It sounds like this begins to look like the generic "non-segmented zone with a small number of exceptions" strategy: you wind up with a few huge business zones, which tend to become larger and larger over time and are really still at highest risk, plus a small number of tiny exception zones, such as a 'PCI Card Environment' zone, which are okay until some users inevitably develop a requirement to connect workstations from the massive insecure zone to the tiny zone. Workstations talking to other workstations directly is an example of one of the higher-risk things that is probably not necessary, but it remains unrestricted when you have one single large 'Business' segment. 
A stronger segmentation model would be that workstations don't get to talk to other workstations directly; only to remote devices serving data that the user of a given workstation is authorized to be using, with every flow being validated by a security device. I'd probably have VoIP media servers in this zone, AD, DNS, etc. AD and DNS are definitely applications that should be at a high integrity protection level compared to the generic segment from a security standpoint, especially if higher-security zones are dependent on those services. An AD group policy configuration change can cause arbitrary code execution on a domain-joined server in any segment attached to a domain using that AD server. Presumably I should never allow *outbound* connectivity from a more secure zone to a less secure zone, and inbound connectivity should be carefully monitored for unusual access patterns. Never? No internet access? Never say never, but there should be policies established based on needs and requirements, dependent on the characteristics of a zone and the assumed risk level of other zones. An example for some high-risk zone might be that outbound connections to A, B, and C are allowed only through a designated application-layer proxy, itself residing in a security-services zone. Perhaps some of you have some fairly simple rules of thumb that could be built off of? I'm especially interested to hear how VoIP/RTP traffic is handled between subnets/remote sites within a Business Zone. I'm loath to put a FW between these segments as it will put VoIP performance at risk (maybe QoS on FW's can be pretty good), The ideal scenario is to have segments dedicated to primary VoIP use, so VoIP traffic stays in-segment except when interconnecting to a provider, and the firewalls do not necessarily have to be stateful firewalls; if VoIP traffic leaves a segment, some may use a simple packet filter or an application-aware proxy designed to maintain performance. 
If the security requirements of the org implementing the network are met, then very specific firewall devices - whichever are the most suitable for that zone's traffic - can be used for certain zones. but maybe some sort of passive monitoring would make sense. -- -JH
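[Editor's note: a minimal sketch of the stronger model described above, where workstations never talk to each other directly and only reach servers their user is authorized for. The host names, roles, and authorization table are all hypothetical.]

```python
# Hypothetical role and authorization tables; in practice these would
# be derived from directory group membership and enforced by a
# security device validating every flow.
ROLES = {
    "ws-alice": "workstation",
    "ws-bob": "workstation",
    "fileserver-hr": "server",
    "fileserver-eng": "server",
}

AUTHORIZED = {
    ("ws-alice", "fileserver-hr"),
    ("ws-bob", "fileserver-eng"),
}

def flow_allowed(src, dst):
    # Rule 1: no direct workstation-to-workstation traffic.
    if ROLES[src] == "workstation" and ROLES[dst] == "workstation":
        return False
    # Rule 2: workstation-to-server only if explicitly authorized.
    if ROLES[src] == "workstation":
        return (src, dst) in AUTHORIZED
    # Default deny for anything not modelled here.
    return False

print(flow_allowed("ws-alice", "ws-bob"))         # False: peer traffic blocked
print(flow_allowed("ws-alice", "fileserver-hr"))  # True: authorized data access
```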
RE: Network Segmentation Approaches
It is called the Purdue Enterprise Reference Architecture ... -Original Message- From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of nan...@roadrunner.com Sent: Monday, 4 May, 2015 20:56 To: nanog@nanog.org Subject: Network Segmentation Approaches Possibly a bit off-topic, but curious how all of you out there segment your networks. Corporate/business users, dependent services, etc. from critical data and/or processes with remote locations thrown in the mix which could be mini-versions of your primary network. There's quite a bit of literature out there on this, so have been considering an approach with zones based on the types of data or processes within them. General thoughts: - Business Zone - This would be where workstations live, web browsing occurs, VoIP and authentication services live too. Probably consider this a somewhat dirty zone, but I should generally be OK letting anything in this zone talk fairly unfettered to anything else in this zone (for example a business network at my HQ location should be able to talk unfettered to an equivalent network at a remote site). I'd probably have VoIP media servers in this zone, AD, DNS, etc. - Some sort of management zone(s) - Maybe accessible only via jump host -- this zone gives control access into key resources (most likely IT resources like network devices, storage devices, etc.). Should have sound logging/auditing here to establish access patterns outside the norm and perhaps multi-factor authentication (and of course FW's). - Secure Zone(s) - Important data sets or services can be isolated from untrusted zones here. May need separate services (DNS, AD, etc.) - I should think carefully about where I stick stateful FW's -- especially on my internal networks. Risk of DoS'ing myself is high. Presumably I should never allow *outbound* connectivity from a more secure zone to a less secure zone, and inbound connectivity should be carefully monitored for unusual access patterns. 
Perhaps some of you have some fairly simple rules of thumb that could be built off of? I'm especially interested to hear how VoIP/RTP traffic is handled between subnets/remote sites within a Business Zone. I'm loath to put a FW between these segments as it will put VoIP performance at risk (maybe QoS on FW's can be pretty good), but maybe some sort of passive monitoring would make sense. (Yes, I've also read the famous thread on stateful firewalls[1]). Thanks! [1] http://markmail.org/thread/fvordsbnuc74fuu2#query:+page:1+mid:fvordsbnuc74fuu2+state:results
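[Editor's note: a minimal sketch of the zone model from the post (Business, Management, Secure) as a direction-sensitive policy matrix. The trust levels are assumed; the rules follow the thread's suggestion that a more secure zone never initiates traffic toward a less secure one, while inbound traffic to secure zones is permitted but watched.]

```python
# Higher number = more secure zone. Zone names follow the post.
TRUST = {"business": 1, "management": 2, "secure": 3}

def policy(src_zone, dst_zone):
    if src_zone == dst_zone:
        return "permit"              # fairly unfettered within a zone
    if TRUST[src_zone] > TRUST[dst_zone]:
        return "deny"                # more secure must not initiate outward
    return "permit-and-log"          # less secure -> more secure: monitored

print(policy("business", "business"))    # permit
print(policy("secure", "business"))      # deny
print(policy("business", "secure"))      # permit-and-log
```

One design note: the "permit-and-log" arm is where the post's call for sound logging/auditing and anomaly detection would attach, rather than a blanket deny that forces exceptions.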
Re: Network Segmentation Approaches
On 05/04/2015 07:55 PM, nan...@roadrunner.com wrote: Possibly a bit off-topic, but curious how all of you out there segment your networks. Corporate/business users, dependent services, etc. from critical data and/or processes with remote locations thrown in the mix which could be mini-versions of your primary network. Add a management zone or infrastructure zone: Consider setting up a separate zone or zones (via VLAN) for devices with embedded TCP/IP stacks. I have worked in several shops using switched power units from APC, SynAccess, and TrippLite, and find that the TCP/IP stacks in those units are a bit fragile when confronted with a lot of traffic, even when the traffic is not addressed to the embedded devices. Separately, an ISP discovered that a consumer-grade NAS has the same problem. These should be on a separate subnet anyway, with unfettered access from the outside disallowed at the edge. To access the infrastructure equipment, you would use VPN to bypass your edge router access lists. If you have a lot of inside equipment not under your direct control, consider locking it out of the infrastructure subnet, too. Needless to say, watch the load you direct at these embedded devices. My current day job installed SolarWinds to monitor everything. The probes from the software knocked out SNMP access to all too many of the PDU devices on the network.
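[Editor's note: a minimal sketch of the "watch the load you direct at embedded devices" advice above: pace monitoring probes so fragile PDU/NAS TCP/IP stacks aren't overwhelmed. `poll_device` is a hypothetical stand-in for a real SNMP GET.]

```python
import time

def poll_all(devices, poll_device, min_interval=5.0):
    """Poll devices one at a time, spending at least min_interval
    seconds per device, instead of blasting them all at once the way
    an aggressive monitoring platform might."""
    results = {}
    for dev in devices:
        start = time.monotonic()
        results[dev] = poll_device(dev)
        elapsed = time.monotonic() - start
        if elapsed < min_interval:
            time.sleep(min_interval - elapsed)  # back off before the next probe
    return results
```

Usage might look like `poll_all(pdus, snmp_get_load, min_interval=10.0)`; the interval would be tuned to what the weakest embedded stack tolerates.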
Re: Network Segmentation Approaches
On 5/5/2015 4:34 PM, Mark Andrews wrote: In message 20150505113445.gb24...@gsp.org, Rich Kulawiec writes: I break them up by function and (when necessary) by the topology enforced by geography. The first rule in every firewall is of course deny all and subsequent rulesets permit only the traffic that is necessary. Deny all really isn't needed with modern machines but that is a matter of policy. The firewalls I've worked with don't log denies if they are due to an implicit deny-all at the end of the policy. I always put one in at the end to make sure that the attempt is logged. Gene
Re: IP DSCP across the Internet
On 5 May 2015, at 17:27, Ramy Hashish wrote: Assume two ASs connected through two tier 1 networks, will the tier one networks trust any DSCP markings done from an AS to the other? The BCP is to re-color on ingress. --- Roland Dobbins rdobb...@arbor.net
Re: Network Segmentation Approaches
In message 20150505113445.gb24...@gsp.org, Rich Kulawiec writes: On Mon, May 04, 2015 at 07:55:43PM -0700, nan...@roadrunner.com wrote: Possibly a bit off-topic, but curious how all of you out there segment your networks. [snip] I break them up by function and (when necessary) by the topology enforced by geography. The first rule in every firewall is of course deny all and subsequent rulesets permit only the traffic that is necessary. The first rule of every firewall should be to enforce BCP 38 outbound. Deny all really isn't needed with modern machines but that is a matter of policy. Determining what's necessary is done via a number of tools: tcpdump, ntop, argus, nmap, etc. When possible, rate-limiting is imposed based on a multiplier of observed maxima. Performance tuning is done after functionality and is usually pretty limited: modern efficient firewalls (e.g., pf/OpenBSD) can shovel a lot of traffic even on modest hardware. ---rsk -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
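[Editor's note: a minimal sketch of Mark's point about enforcing BCP 38 outbound: drop any packet leaving your network whose source address isn't in your own address space. The prefixes below are documentation ranges (RFC 5737 / RFC 3849), not real allocations.]

```python
import ipaddress

# Hypothetical: the prefixes actually assigned to this network.
OUR_PREFIXES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("2001:db8::/32"),
]

def egress_permitted(src_ip):
    """BCP 38 egress check: permit only packets sourced from our own
    prefixes; everything else is spoofed and should be dropped."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in OUR_PREFIXES)

print(egress_permitted("198.51.100.7"))  # True: our address space
print(egress_permitted("2001:db8::1"))   # True: our v6 space
print(egress_permitted("203.0.113.9"))   # False: spoofed source, drop it
```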
Re: IP DSCP across the Internet
In general there are very few bad actors here in regards to trusting/accepting/using DSCP across the internet. Apple has a tendency to mark some traffic with EF that shouldn't be EF on PNIs, and Cogent leaks a lot of their internal markings into customers, but it's generally unmarked traffic from certain customers/peers. Other than that IMHO it's totally valid to accept, and nobody abuses it (other than those 2). We accept DSCP from the internet and do queue a few things higher towards customers for things like OTT VoIP etc. Remarking DSCP is bad IMHO, trusting it is another thing. You just have to be careful, and I suggest good netflow tools to keep an eye on it. On May 5, 2015 5:30 PM, Ramy Hashish ramy.ihash...@gmail.com wrote: Good day all, A simple question, does Internet trust IP DSCP marking? Assume two ASs connected through two tier 1 networks, will the tier one networks trust any DSCP markings done from an AS to the other? Thanks, Ramy
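[Editor's note: a minimal sketch of the bit-level mechanics behind "re-color on ingress". DSCP is the top 6 bits of the IP ToS/Traffic Class byte (EF is DSCP 46); the low 2 bits are ECN. A conservative edge policy zeroes DSCP while leaving ECN untouched. The helper names are illustrative.]

```python
def dscp(tos_byte):
    """Extract the 6-bit DSCP from the ToS/Traffic Class byte."""
    return tos_byte >> 2

def remark_to_zero(tos_byte):
    """Clear DSCP to best-effort (0), preserving the 2 ECN bits."""
    return tos_byte & 0b11

# An EF-marked packet (DSCP 46) carrying ECT(1) in the ECN field:
tos_ef = (46 << 2) | 0b01
print(dscp(tos_ef))            # 46
print(remark_to_zero(tos_ef))  # 1 -> DSCP now 0, ECN codepoint preserved
```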
Re: Google IPv6 geo location problem
I would first fix the RIPE whois data. s/UNITED STATES/GERMANY/ All the contacts are wrong. GI-GO e.g. person: Andreas Buxbaum address:NYNEX satellite OHG address:Robert-Bosch-Str. 20 address:64293 Darmstadt address:UNITED STATES phone: +49 6151 50074-0 nic-hdl:NX911 mnt-by: MNT-NY source: RIPE # Filtered Mark In message CAKCUjRVWEE5O=foegss7j-yg3l+uxcnmjz+ihx1lhzmo6_y...@mail.gmail.com , Frederik Kriewitz writes: Hello, At the beginning of this year we started to roll out IPv6 for a large part of our customer base. Everything was working perfectly fine until mid February when google decided to geo locate our entire 2a00:e60::/32 IPv6 net to Iran. We expected that it would be done for /48 to /64 blocks (On IPv4 it's done for each IP). Being a ISP with mostly satellite internet customers from all over the world we're used to problems caused by IP based geo locating. Usual the complains are Please change the language of google.com to English, I don't understand the currently configured language (happens if the previous user of the IP address trained google to some other language). Most of our customers (Mostly military, oil gas, government and international companies) are expecting an English internet. Usually we can fix their problems by changing their IP address. But with our entire /32 being handled as Iran by Google this is a problem. The problem is that google apparently blocks Iran from using any remotely commercial service (I assume due to the sanctions against Iran). We got a lot of customers complaining about not being able to login into Google Apps (Unable to sign in from this country - You appear to be signing in from a country where Google Apps accounts are not supported.). But plenty of other purely informational Google websites (e.g. http://www.html5rocks.com/) returned a 403 - We're sorry, but this service is not available in your country. forbidden error too. We had to revert our IPv6 roll out again due to this problem. 
We tried submitting a correction request via https://support.google.com/websearch/contact/ip back in February but that apparently wasn't processed. Is there anyone from Google around who can help with this? This is currently blocking our IPv6 deployment. Best Regards, Freddy AS62023 / NYNEX satellite OHG -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
Re: IP DSCP across the Internet
If there isn't a specific peering agreement which sets up DSCP marks with your Z side, you're going to have a bad time doing anything other than remarking to 0. -Blake On Tue, May 5, 2015 at 6:35 PM, Tim Jackson jackson@gmail.com wrote: In general there are very few bad actors here in regards to trusting/accepting/using DSCP across the internet. Apple has a tendency to mark some traffic with EF that shouldn't be EF on PNIs, and Cogent leaks a lot of their internal markings into customers, but it's generally unmarked traffic from certain customers/peers. Other than that IMHO it's totally valid to accept, and nobody abuses it (other than those 2). We accept DSCP from the internet and do queue a few things higher towards customers for things like OTT VoIP etc. Remarking DSCP is bad IMHO, trusting it is another thing. You just have to be careful, and I suggest good netflow tools to keep an eye on it. On May 5, 2015 5:30 PM, Ramy Hashish ramy.ihash...@gmail.com wrote: Good day all, A simple question, does Internet trust IP DSCP marking? Assume two ASs connected through two tier 1 networks, will the tier one networks trust any DSCP markings done from an AS to the other? Thanks, Ramy
Re: Fixing Google geolocation screwups
In message 20150505210746.gh22...@hezmatt.org, Matt Palmer writes: On Tue, May 05, 2015 at 12:03:23PM -0400, Luan Nguyen wrote: There's a form here - https://support.google.com/websearch/contact/ip But google is pretty smart, its systems will learn the correct geolocation over time... That'd be quite a trick, given that the netblock practically can't be used at all with Google services. - Matt One would expect support.google.com not to be geo-blocked, just like postmaster@ should not be filtered. That said, they can always disable IPv6 temporarily (or just firewall off the IPv6 instance of support.google.com and have the browser fall back to IPv4) and reach support.google.com over IPv4 to lodge the complaint. Mark -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
Re: Fixing Google geolocation screwups
On Wed, May 06, 2015 at 10:56:22AM +1000, Mark Andrews wrote: In message 20150505210746.gh22...@hezmatt.org, Matt Palmer writes: On Tue, May 05, 2015 at 12:03:23PM -0400, Luan Nguyen wrote: There's a form here - https://support.google.com/websearch/contact/ip But google is pretty smart, its systems will learn the correct geolocation over time... That'd be quite a trick, given that the netblock practically can't be used at all with Google services. One would expect support.google.com not to be geo-blocked, just like postmaster@ should not be filtered. That said, they can always disable IPv6 temporarily (or just firewall off the IPv6 instance of support.google.com and have the browser fall back to IPv4) and reach support.google.com over IPv4 to lodge the complaint. I was specifically responding to the suggestion that Google would automagically learn the correct location of the netblock, presumably based on the characteristics of requests coming from the range. Being explicitly told that a given netblock is in a given location (as effective, or otherwise, as that may be) doesn't really fit the description of systems [learning] the correct geolocation over time. - Matt -- Skippy was a wallaby. ... Wallabies are dumb and not very trainable... The *good* thing...is that one Skippy looks very much like all the rest, hence...one-shot Skippy and plug-compatible Skippy. I don't think they ever had to go as far as belt-fed Skippy -- Robert Sneddon, ASR
IP DSCP across the Internet
Good day all, A simple question, does Internet trust IP DSCP marking? Assume two ASs connected through two tier 1 networks, will the tier one networks trust any DSCP markings done from an AS to the other? Thanks, Ramy
Re: IP DSCP across the Internet
On 6/May/15 03:35, Tim Jackson wrote: In general there are very few bad actors here in regards to trusting/accepting/using DSCP across the internet. Apple has a tendency to mark some traffic with EF that shouldn't be EF on PNIs, and Cogent leaks a lot of their internal markings into customers, but it's generally unmarked traffic from certain customers/peers. Other than that IMHO it's totally valid to accept, and nobody abuses it (other than those 2). We accept DSCP from the internet and do queue a few things higher towards customers for things like OTT VoIP etc. Remarking DSCP is bad IMHO, trusting it is another thing. You just have to be careful, and I suggest good netflow tools to keep an eye on it. We had an odd experience, once, where - due to old hardware - we could not remark traffic we were picking up from a peer in South Africa. With color-aware policing toward a customer in Uganda, any traffic coming from that peer in South Africa was getting dropped toward that customer in Uganda. After a very odd sequence of troubleshooting events, we found that the AF DSCP values being set by the peer in South Africa (and us passing them due to the old kit not being able to remark on ingress) were causing the color-aware policer in Uganda to drop traffic toward the customer there. Re-configuring the policer to be color-blind fixed the issue, but you can imagine what a corner case this was. Naturally, with new kit in now, our global QoS policy is in effect. We don't honor DSCP values that come in via best-effort circuits (i.e., the Internet). Although not a very strong reason, this particular experience is one reason why. Mark.
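[Editor's note: a deliberately simplified sketch of the failure mode described above. A color-aware policer treats packets arriving pre-marked with a "yellow" AF drop precedence as already out-of-profile and discards them, while a color-blind policer ignores incoming marks and meters every packet itself. The byte budget and two-color model are toy assumptions; real srTCM/trTCM behavior (RFC 2697/2698) is richer.]

```python
def police(packets, color_aware):
    """packets: list of (size_bytes, premark) where premark is
    'green' or 'yellow'. A fixed budget admits up to 120 bytes of
    green traffic; yellow is dropped outright in this toy model."""
    admitted, budget = [], 120
    for size, premark in packets:
        # Color-aware: trust the incoming mark. Color-blind: meter fresh.
        color = premark if color_aware else "green"
        if color == "green" and size <= budget:
            budget -= size
            admitted.append((size, premark))
    return admitted

# Traffic arriving already AF-marked "yellow" by an upstream peer:
flow = [(40, "yellow"), (40, "green"), (40, "green")]
print(len(police(flow, color_aware=True)))   # 2: pre-marked packet dropped
print(len(police(flow, color_aware=False)))  # 3: incoming marks ignored
```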
Re: IP DSCP across the Internet
On 5/May/15 12:27, Ramy Hashish wrote: Good day all, A simple question, does Internet trust IP DSCP marking? Assume two ASs connected through two tier 1 networks, will the tier one networks trust any DSCP markings done from an AS to the other? I wouldn't bet on it. Some providers honor, most remark. We remark. We can only honor DSCP values on private circuits (l2vpn, l3vpn, that sort o' thing). Mark.
Re: IP DSCP across the Internet
We don't honor DSCP values that comes in via best-effort circuits (i.e., the Internet). Although not a very strong reason, this particular experience is one reason why. trusting markings of any sort which you do not need is an increase in attack, game playing, and/or bug surface. the only thing i would pass is ecn. randy
Re: yarr - Yet Another Route Server Implementation [WAS: Euro-IX quagga stable download and implementation]
My experience tells me Martin's direction is a good one. You would be surprised to learn how much time has already gone into what's out there that people trust now. Besides - it has very limited marketing appeal. The number of IXs is small. The big ones already have something working well. I wouldn't implement something new. When I chose, I went for something a big network had run for years. As a result it was reliable and easy to maintain, and had few and simple problems. We simply ran 2 and had people get a session with both. No one ever lost routes when I took one down to upgrade - or when we had a hardware failure. Thank You Bob Evans CTO On Mon, 4 May 2015, Sebastian Spies wrote: sorry for the double post. dmarc fuckup... Hey there, considering the state of this discussion, BIRD seems to be the only scalable solution to be used as a route server at IXPs. I have built a large code base around BGP for the hoofprints project [1] and BRITE [2] and would enjoy building another state-of-the-art open-source route-server implementation for IXPs. Would you be so kind to send me your feedback on this idea? Do you think it makes sense to pursue such a project, or is it not relevant enough for you? How about (instead of another implementation) helping one of the existing projects? Writing another implementation is easy. Keeping it up to date, testing it, and supporting it over multiple years is what I would worry about. I would *strongly* suggest solving that issue first before starting on another implementation. - Martin
RE: Fixing Google geolocation screwups
Pedro Cavaca suggests: https://support.google.com/websearch/answer/873?hl=en Correct me if I'm wrong, that looks like Google simply saves location data in a browser cookie. A location helps Google find more relevant information when you use Search, Maps, and other Google products. Learn how Google saves location information on this computer. matthew black california state university, long beach -Original Message- From: NANOG [mailto:nanog-bounces+matthew.black=csulb@nanog.org] On Behalf Of Pedro Cavaca Sent: Tuesday, April 07, 2015 3:41 PM To: John Levine Cc: NANOG Mailing List Subject: Re: Fixing Google geolocation screwups https://support.google.com/websearch/answer/873?hl=en On 7 April 2015 at 23:26, John Levine jo...@iecc.com wrote: A friend of mine lives in Alabama and has business service from att. But Google thinks he's in France. We've checked for various possibilities of VPNs and proxies and such, and it's pretty clear that the Goog's geolocation for addresses around 99.106.185.0/24 is screwed up. Bing and other services correctly find him in Alabama. Poking around I see lots of advice about how to use Google's geolocation data, but nothing on how to update it. Anyone know the secret? TIA Regards, John Levine, jo...@iecc.com, Primary Perpetrator of The Internet for Dummies, Please consider the environment before reading this e-mail. http://jl.ly
Re: Network Segmentation Approaches
I'd certainly forget anything with service provider in the name. Different problem, different architecture. Last time I built this, I built a core network (WAN links, routers, etc) that enforced anti-spoofing rules, so I knew that if I saw an internal IP address (either public space assigned to me or RFC 1918) on a given device inside this core, it came from the network segment it claimed to have come from. Then I built building-specific firewalls using proper firewall products. These might have anywhere from two interfaces (branch office) to thousands of interfaces (datacenters) with lots of VLANs. Checkpoint is a good product for this. The firewalls' job was to protect the building/segments behind them, not to protect things upstream (towards the core). There was obviously an edge firewall. Users were segmented by job role. Workstations were typically considered to be *MORE* secure from a network perspective: AD servers need to be contacted by everything in your Windows domain, while most workstations don't, and your Windows domain nowadays probably includes cloud services over the internet. So it's hard to say AD servers are secure from a purely "how many open network ports are there?" standpoint. Servers were likewise segmented. I'd consider putting department file servers on the same LAN as the users, but only if performance required it - otherwise I'd put them on their own segments too. The benefit of this in a large organization is that a subdivision could put a firewall behind one of my anti-spoofing interfaces (so I validate packets come from them) and run it themselves without ruining everyone else's security. I second the thoughts about embedded management stacks. As for management, I put workstations used by IT management on their own segment (and give them a "stand up the infected workstation you're working on" LAN separate from that segment). I put servers used for management on yet another segment. 
I've never had a problem with giving those workstations and servers access to management segments in the wild, but I trusted the skilled admins I worked with. Think mesh of connections, not tiers or levels or DMZs. Because you'll find that super-secure accounting server needs access from some random vendor across the internet, and stuff like that, such that everything eventually ends up in the DMZ anyhow (except MAYBE workstations). You can use separate firewalls for particularly sensitive segments - for instance, your management stuff might not be behind your main firewall - that way when Joe User gets a virus and fills the connection table on his firewall(s), you can still manage things. One more thing: guest network access. When it was needed, I built a virtual network on top of the real corporate LAN that tunneled this around. I terminated it on a DSL modem (which was sufficient for my needs). Just about every building with a conference room these days will need a guest network. It also helps if your employees can use their cell phones for checking work email and such - do you really want them on your main LAN? On Tue, May 5, 2015 at 7:01 AM, Keith Medcalf kmedc...@dessus.com wrote: It is called the Purdue Enterprise Reference Architecture ... -Original Message- From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of nan...@roadrunner.com Sent: Monday, 4 May, 2015 20:56 To: nanog@nanog.org Subject: Network Segmentation Approaches Possibly a bit off-topic, but curious how all of you out there segment your networks. Corporate/business users, dependent services, etc. from critical data and/or processes with remote locations thrown in the mix which could be mini-versions of your primary network. There's quite a bit of literature out there on this, so have been considering an approach with zones based on the types of data or processes within them. 
General thoughts: - Business Zone - This would be where workstations live, web browsing occurs, VoIP and authentication services live too. Probably consider this a somewhat dirty zone, but I should generally be OK letting anything in this zone talk fairly unfettered to anything else in this zone (for example a business network at my HQ location should be able to talk unfettered to an equivalent network at a remote site). I'd probably have VoIP media servers in this zone, AD, DNS, etc. - Some sort of management zone(s) - Maybe accessible only via jump host -- this zone gives control access into key resources (most likely IT resources like network devices, storage devices, etc.). Should have sound logging/auditing here to establish access patterns outside the norm and perhaps multi-factor authentication (and of course FW's). - Secure Zone(s) - Important data sets or services can be isolated from untrusted zones here. May need separate services (DNS, AD, etc.) - I should think carefully about where I stick stateful FW's -- especially on my internal networks.
Re: Fixing Google geolocation screwups
There's a form here - https://support.google.com/websearch/contact/ip But google is pretty smart, its systems will learn the correct geolocation over time... On Tue, May 5, 2015 at 11:22 AM, Matthew Black matthew.bl...@csulb.edu wrote: Pedro Cavaca suggests: https://support.google.com/websearch/answer/873?hl=en Correct me if I'm wrong, that looks like Google simply saves location data in a browser cookie. A location helps Google find more relevant information when you use Search, Maps, and other Google products. Learn how Google saves location information on this computer. matthew black california state university, long beach -Original Message- From: NANOG [mailto:nanog-bounces+matthew.black=csulb@nanog.org] On Behalf Of Pedro Cavaca Sent: Tuesday, April 07, 2015 3:41 PM To: John Levine Cc: NANOG Mailing List Subject: Re: Fixing Google geolocation screwups https://support.google.com/websearch/answer/873?hl=en On 7 April 2015 at 23:26, John Levine jo...@iecc.com wrote: A friend of mine lives in Alabama and has business service from att. But Google thinks he's in France. We've checked for various possibilities of VPNs and proxies and such, and it's pretty clear that the Goog's geolocation for addresses around 99.106.185.0/24 is screwed up. Bing and other services correctly find him in Alabama. Poking around I see lots of advice about how to use Google's geolocation data, but nothing on how to update it. Anyone know the secret? TIA Regards, John Levine, jo...@iecc.com, Primary Perpetrator of The Internet for Dummies, Please consider the environment before reading this e-mail. http://jl.ly
Re: Fixing Google geolocation screwups
On 5 May 2015 at 16:22, Matthew Black matthew.bl...@csulb.edu wrote: Pedro Cavaca suggests: https://support.google.com/websearch/answer/873?hl=en Correct me if I'm wrong, that looks like Google simply saves location data in a browser cookie. A location helps Google find more relevant information when you use Search, Maps, and other Google products. Learn how Google saves location information on this computer. I don't see the text you quoted on the URL I provided. I do see a report the problem clickable, which was the point I was trying to make on my original answer. matthew black california state university, long beach -Original Message- From: NANOG [mailto:nanog-bounces+matthew.black=csulb@nanog.org] On Behalf Of Pedro Cavaca Sent: Tuesday, April 07, 2015 3:41 PM To: John Levine Cc: NANOG Mailing List Subject: Re: Fixing Google geolocation screwups https://support.google.com/websearch/answer/873?hl=en On 7 April 2015 at 23:26, John Levine jo...@iecc.com wrote: A friend of mine lives in Alabama and has business service from att. But Google thinks he's in France. We've checked for various possibilities of VPNs and proxies and such, and it's pretty clear that the Goog's geolocation for addresses around 99.106.185.0/24 is screwed up. Bing and other services correctly find him in Alabama. Poking around I see lots of advice about how to use Google's geolocation data, but nothing on how to update it. Anyone know the secret? TIA Regards, John Levine, jo...@iecc.com, Primary Perpetrator of The Internet for Dummies, Please consider the environment before reading this e-mail. http://jl.ly
Network Segmentation Approaches
Possibly a bit off-topic, but curious how all of you out there segment your networks: corporate/business users, dependent services, etc. from critical data and/or processes, with remote locations thrown into the mix, which could be mini-versions of your primary network. There's quite a bit of literature out there on this, so I have been considering an approach with zones based on the types of data or processes within them. General thoughts:

- Business Zone - This is where workstations live, web browsing occurs, and VoIP and authentication services live too. I'd probably consider this a somewhat dirty zone, but I should generally be OK letting anything in this zone talk fairly unfettered to anything else in this zone (for example, a business network at my HQ location should be able to talk unfettered to an equivalent network at a remote site). I'd probably have VoIP media servers in this zone, plus AD, DNS, etc.

- Management zone(s) - Maybe accessible only via jump host -- this zone gives control access into key resources (most likely IT resources like network devices, storage devices, etc.). Should have sound logging/auditing here to establish access patterns outside the norm, and perhaps multi-factor authentication (and of course FWs).

- Secure Zone(s) - Important data sets or services can be isolated from untrusted zones here. May need separate services (DNS, AD, etc.).

- I should think carefully about where I stick stateful FWs -- especially on my internal networks. The risk of DoS'ing myself is high.

Presumably I should never allow *outbound* connectivity from a more secure zone to a less secure zone, and inbound connectivity should be carefully monitored for unusual access patterns.

Perhaps some of you have some fairly simple rules of thumb that could be built off of? I'm especially interested to hear how VoIP/RTP traffic is handled between subnets/remote sites within a Business Zone.
I'm loath to put a FW between these segments, as it will put VoIP performance at risk (maybe QoS on FWs can be pretty good), but maybe some sort of passive monitoring would make sense. (Yes, I've also read the famous thread on stateful firewalls [1].)

Thanks!

[1] http://markmail.org/thread/fvordsbnuc74fuu2#query:+page:1+mid:fvordsbnuc74fuu2+state:results
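The default-deny, zone-based policy sketched above can be expressed compactly in OpenBSD pf. This is a minimal illustrative sketch, not a reviewed ruleset; the interface names, subnets, and jump-host address are all invented for the example:

```
# pf.conf sketch: default-deny between zones (illustrative names/subnets)
biz_if   = "em0"            # Business Zone
mgmt_if  = "em1"            # Management Zone, reachable via jump host only
sec_if   = "em2"            # Secure Zone
jumphost = "10.20.0.10"

set skip on lo
block log all               # first rule: deny everything, log what hits it

# Business Zone talks fairly unfettered to equivalent nets at remote sites
pass on $biz_if from 10.10.0.0/16 to 10.10.0.0/16 keep state

# Management Zone accepts SSH only from the jump host
pass in on $mgmt_if proto tcp from $jumphost to any port 22 keep state

# Secure Zone: inbound service access (LDAP/LDAPS here), never outbound
# connections initiated from the secure side toward less-trusted zones
pass in on $sec_if proto tcp from 10.10.0.0/16 to 10.30.0.0/24 port { 389, 636 } keep state
```

Because pf rules are last-match by default, the `block log all` at the top is overridden only by the explicit `pass` rules that follow, which matches the "deny all, then permit only what's necessary" approach.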
Re: yarr - Yet Another Route Server Implementation [WAS: Euro-IX quagga stable download and implementation]
On Mon, 4 May 2015, Sebastian Spies wrote:

> sorry for the double post. dmarc fuckup...
>
> Hey there, considering the state of this discussion, BIRD seems to be the only scalable solution to be used as a route server at IXPs. I have built a large code base around BGP for the hoofprints project [1] and BRITE [2] and would enjoy building another state-of-the-art open-source route-server implementation for IXPs. Would you be so kind as to send me your feedback on this idea? Do you think it makes sense to pursue such a project, or is it not relevant enough for you?

How about (instead of another implementation) helping one of the existing projects? Writing another implementation is easy. Keeping it up to date, testing it and supporting it over multiple years is what I would worry about. I would *strongly* suggest solving that issue first before starting on another implementation.

- Martin
Re: yarr - Yet Another Route Server Implementation [WAS: Euro-IX quagga stable download and implementation]
http://xkcd.com/927/

On Mon, May 4, 2015 at 7:05 AM, Sebastian Spies s+mailinglisten.na...@sloc.de wrote:

> sorry for the double post. dmarc fuckup...
>
> Hey there, considering the state of this discussion, BIRD seems to be the only scalable solution to be used as a route server at IXPs. I have built a large code base around BGP for the hoofprints project [1] and BRITE [2] and would enjoy building another state-of-the-art open-source route-server implementation for IXPs. Would you be so kind as to send me your feedback on this idea? Do you think it makes sense to pursue such a project, or is it not relevant enough for you?
>
> Best regards, Sebastian
>
> 1: https://github.com/sspies8684/hoofprints/
> 2: https://brite.antd.nist.gov/statics/about
>
> On 25.04.2015 at 22:06, Goran Slavić wrote:
>> Andy,
>>
>> Believe me when I say: I would never even think of attempting to generate configurations for this two-route-server / two-different-programs solution without IXP Manager :-) I am familiar with the work INEX has been doing with IXP Manager and have for some time been trying to find time away from regular SOX operations to implement it at our IX. This migration gives me an excellent opportunity, and the arguments, to finally allocate time, resources and manpower for the installation and implementation of IXP Manager as the route-server configuration generator at SOX.
>> Regards,
>> G. Slavić
>>
>> -----Original Message-----
>> From: Andy Davidson [mailto:a...@nosignal.org]
>> Sent: Saturday, 25 April 2015 21:34
>> To: Goran Slavić
>> Cc: nanog@nanog.org
>> Subject: Re: Euro-IX quagga stable download and implementation
>>
>> On 25 Apr 2015, at 15:16, Goran Slavić gsla...@sox.rs wrote:
>>> Considering what I have learned from your posts (and from other places where I have informed myself), I will definitely suggest to SOX management that we go a way similar to what LINX did (one BIRD + one Quagga as route servers), for the simple reason that two different solutions provide more security against new-update/new-bug problems and incidents, and prevent other potential problems.
>>
>> Goran - glad to have helped. One last piece of advice which might be useful: to help guarantee consistency of performance between the two route servers, you should consider a configuration generator so that your route-server configs stay in sync. The best way to implement this at your exchange is to use IXP Manager, maintained by the awesome folks at the Irish exchange point, INEX. https://github.com/inex/IXP-Manager
>>
>> IXP Manager will get you lots of other features as well as good route-server hygiene. There's also a historic perl script that does this on my personal github. Both of these solutions allow you to filter route-server participants based on IRR data, which has proved to be a life-saver at all of the exchanges I help to operate. Having my horrible historic thing is maybe better than no thing at all, but I deliberately won't link to it, as you should really use IXP Manager. :-)
>>
>> Andy
Re: Fixing Google geolocation screwups
On Tue, May 05, 2015 at 12:03:23PM -0400, Luan Nguyen wrote:

> There's a form here - https://support.google.com/websearch/contact/ip
> But google is pretty smart; its systems will learn the correct geolocation over time...

That'd be quite a trick, given that the netblock practically can't be used at all with Google services.

- Matt
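When collecting evidence for a correction request like this, it helps to script which customer addresses actually fall inside the suspect block. A small sketch using only Python's standard-library ipaddress module (the sample addresses are invented for illustration):

```python
import ipaddress

# The prefix John reported as mis-geolocated to France
block = ipaddress.ip_network("99.106.185.0/24")

def in_block(addr: str) -> bool:
    """Return True if addr falls inside the suspect prefix."""
    return ipaddress.ip_address(addr) in block

print(in_block("99.106.185.42"))   # → True
print(in_block("8.8.8.8"))         # → False
```

The same membership test works for IPv6, e.g. checking customer assignments against a /32 such as the one in the IPv6 geolocation thread, since ip_network and ip_address handle both address families.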
Re: yarr - Yet Another Route Server Implementation [WAS: Euro-IX quagga stable download and implementation]
Hi Sebastian,

I highly support your idea! Almost all large IXPs have switched to BIRD. As we know from various route-server studies, almost 1/3 of the traffic handled by IXPs is managed by route servers. This shows that route servers play an important role in the IXP ecosystem, so depending on only one implementation comes with operational risks, at the very least.

The beauty of your suggestion is a route server that is just that, and not a routing daemon with many unneeded features like BIRD or Quagga. This could limit the complexity of the source code, which means the effort needed to maintain the code should be a lot smaller compared to BIRD/Quagga. I see no reason why there is not enough room for at least one or two more route-server implementations besides BIRD.

Best regards,
Thomas (with no hat on - my personal opinion)
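For readers unfamiliar with why BIRD dominates this niche: a basic route-server session needs only a few lines of BIRD (1.x-style) configuration. This is an illustrative sketch with invented ASNs and addresses; production IXP configs are normally generated by a tool such as IXP Manager rather than written by hand:

```
# bird.conf sketch: minimal IXP route-server peering (illustrative values)
router id 192.0.2.1;

protocol bgp participant_a {
    local as 64496;               # the route server's ASN
    neighbor 192.0.2.10 as 64500; # a participant on the peering LAN
    rs client;                    # route-server mode: don't prepend our ASN
    import all;                   # real deployments filter on IRR data here
    export all;
}
```

The `rs client` knob is the route-server-specific part; everything else is a general-purpose BGP daemon, which is exactly the "many unneeded features" point above - a purpose-built route server could drop most of the surrounding machinery.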