Hi,
One idea could be that "small" tests can be non-public, but when the
setup exceeds certain limits it has to be public. I understand your
/business needs/ versus the public interest.
The definition of a "small" test could be something like:
- maximum 20 probes
- runs for maximum 60 minutes
- is repeated maximum 10 times
(if you run every 300 seconds you have to limit the end time to 50 minutes)
With such limits you can troubleshoot things (non-publicly) but you
can't build your monitoring system on top of that.
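The limits above could be sketched as a simple check; the exact numbers and the helper name are illustrative, taken from the proposal rather than any actual Atlas policy:

```python
# Hypothetical check for the proposed "small test" limits:
# max 20 probes, max 60 minutes runtime, max 10 repetitions.
MAX_PROBES = 20
MAX_RUNTIME_SECONDS = 60 * 60
MAX_REPETITIONS = 10

def may_be_private(probes: int, interval_seconds: int, repetitions: int) -> bool:
    """Return True if a measurement is 'small' enough to stay non-public."""
    # Total span, matching the example: 300 s x 10 repeats = 50 minutes.
    runtime = interval_seconds * repetitions
    return (
        probes <= MAX_PROBES
        and repetitions <= MAX_REPETITIONS
        and runtime <= MAX_RUNTIME_SECONDS
    )
```

Under these limits a 20-probe measurement every 300 seconds, repeated 10 times, would still qualify as non-public, while anything larger or longer-running would not.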
Regards,
// mem
On 2022-12-15 at 19:41, Steve Gibbard wrote:
The “one specific user” is www.globaltraceroute.com, an Atlas front end I
created a few years ago that does instant one-off traceroutes, pings, and DNS
lookups. It has become a well-used operational troubleshooting tool. It
operates as a free service, with operational costs slowly draining the bank
account of my mostly defunct consulting firm, and Atlas credits coming from a
couple probes I operate plus some other donors.
What I think is worth considering here:
Atlas, and the RIPE NCC, have two fairly separate constituencies: researchers
and operators.
In the research community, there’s an expectation that data be made available
so that others can review it, judge the validity of research results, do their
own work based on it, etc. Atlas, in its native form with long running
repeatable measurements, gets a lot of research use. It is useful to have
those datasets available to the public.
The operations use case for Global Traceroute is different. It’s generally
either “I set up a new CDN POP, and want to make sure the networks it’s
supposed to serve are getting there on a direct path,” or “some other
source is telling me my performance is bad from ISP X, and I want to know why.”
Instead of calling the other ISP’s NOC or trying to track down a random
customer to help troubleshoot, they can do a one-off traceroute, see where the
traffic is going, and hopefully figure out what to adjust or who to talk to to
fix the situation.
Making those operational troubleshooting results public may not be worthwhile.
The results themselves, being one-offs, are not something it would be all that
interesting to track over time. If anybody does want a one-off traceroute to a
particular target, they can go get it themselves. It is pretty obvious who is
doing the traceroutes. If there are a bunch of traceroutes to a certain CDN
operator’s services, they almost always come from that CDN operator’s corporate
network, so it does show who is concerned about network performance issues in
certain regions at certain times. That info might be potentially interesting —
color for a news story about an outage saying “as the outage unfolded,
engineers from company X unleashed a series of measurements to see paths into
their network from region Y.” Given the fear a lot of companies have about
releasing internal information to the public, I worry that that would have a
“chilling effect” on use of the service.
So, I think Global Traceroute and Atlas are together a useful operational
tool. I think it’s made more useful by setting is_public to false in the
query. I’d really like to be able to continue proxying non-public one-off
measurements into Atlas.
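For context, a non-public one-off of the kind described here would look roughly like the following request body for the Atlas v2 measurement API. This is only a sketch: the target, description, and probe selection are placeholders, and the exact field layout should be checked against the current API documentation.

```python
import json

def build_oneoff(target: str, description: str) -> dict:
    """Sketch of a non-public one-off traceroute request for the Atlas v2 API.

    Field names follow the Atlas measurement-creation API; values here are
    illustrative placeholders, not Global Traceroute's actual settings.
    """
    return {
        "definitions": [{
            "type": "traceroute",
            "af": 4,
            "target": target,
            "description": description,
            "is_public": False,   # keep the one-off result non-public
        }],
        "probes": [{
            "type": "area",
            "value": "WW",        # worldwide probe selection (placeholder)
            "requested": 5,
        }],
        "is_oneoff": True,        # run once, not as a recurring measurement
    }

body = json.dumps(build_oneoff("example.net", "one-off troubleshooting trace"))
```

The body would then be POSTed to the measurements endpoint with an API key; the point is simply that `is_public` is set per definition at creation time.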
Thanks,
Steve
On Dec 14, 2022, at 9:57 PM, Alexander Burke via ripe-atlas
<[email protected]> wrote:
Hello,
From the linked page:
A total of 173 users scheduled at least one, 81 users have at least two, one
specific user scheduled 91.5% of all of these.
That is surprising. What do those numbers look like if you zoom out to the past
6/12/24 months?
If you can count on one hand the number of users using >90% of the private
measurements over a longer timeframe than two weeks, then I submit that the choice
is clear.
Cheers,
Alex
--
ripe-atlas mailing list
[email protected]
https://lists.ripe.net/mailman/listinfo/ripe-atlas