Communication is very important to the health of an open source
community, and there are varying levels of communication between
sending a quick note to the mailing list explaining your idea in 3-5
sentences and coming to the mailing list with a fully completed
blueprint -- anything between those two.
> I do think large changes like this might be better first discussed and
> requirements understood before we write code.
A phrase I like from another public community is "Rough Consensus and
Running Code" (IETF). The idea is that for a practical design, you
need a general agreement from the participants, and working code.
On Fri, Aug 2, 2019 at 4:28 PM Dave Neuman wrote:
> I do think large changes like this might be better first discussed and
> requirements understood before we write code. I realize that it's hard to
> actually accomplish that with so many different opinions, but I do think
> it's the right thing
Ok, it sounds like there's general consensus. So unless anyone objects,
I'll move forward with the PR.
> we need to figure out how to make it easy for normal end-users without
> ssh access to be able to see our config files.
I agree, that's a good goal. But I'm having trouble coming up with a simple way to do that.
The original intention of this thread was cache-side config generation; we
should take other conversations to other threads if we think now is the
time to have them.
First of all, thanks for putting the thought and time into this. I do
think large changes like this might be better first discussed and
requirements understood before we write code.
JvD, you're spoiling the surprise ending! :D
But I do think one of the goals is to transmit subsets of the data,
not single enormous data objects. And we have to get from here to
there in a sane, supportable, and testable way.
It's been a couple of days and I haven't heard any opposition to
cache-side config generation.
> On Aug 1, 2019, at 18:10, Evan Zelkowitz wrote:
>
> Also a +1 on defining a standard common language that this library
> will use instead of direct dependencies on TO data. I know it has been
> bandied about for years that instead of a direct TOdata->ATS config we
> have TOdata->cache side library->generic CDN language->$LocalCacheSoftware.
Also a +1 on defining a standard common language that this library
will use instead of direct dependencies on TO data. I know it has been
bandied about for years that instead of a direct TOdata->ATS config we
have TOdata->cache side library->generic CDN language->$LocalCacheSoftware.
It sounds like:
(A) everyone is +1 on cache-side config generation
(B) most people are -1 on caches connecting directly to the TO DB
(C) most people are +1 on TO pushing data to ORT instead of the other way around
(D) most people are -1 on using Kafka for cache configs
For (A) I'm +1 on the approach.
Well, in that spirit:
- Cache-side processing: +1. Given that we wouldn't want to rewrite the
entire configuration generation logic at once, there's no reason to prefer
this being part of ORT immediately versus separate. Either way, there's no
real "extra step". Though I must add
This is true, but you can also run the cache-side config generator
yourself: that makes it easy to visually inspect the files, and to pipe
them to diff and inspect them mechanically. So we don't lose the ability
entirely; we just move it from one place to another.
On Wed, Jul 3
It looks like we've got two, maybe three, possibly separate ideas going on here.
The first is cache-side processing of the data. For this, TO's role
would shift from providing config files to providing data that is used
to build config files. Rob's suggestion is to implement this a bit at
a time.
A small point, but TO currently allows one to visually inspect/validate the
generated configuration files. I don't know how critical that functionality is
(I personally found it invaluable when testing logging configuration changes),
but it seems like we'd either have the generation logic in two places or lose
that ability.
My feedback:
1. I like the idea of slimming down TO; it's gotten way too fat. Basically,
deprecating these API endpoints at some point and letting "something else"
do the job of config generation:
GET /api/$version/servers/#id/configfiles/ats
GET /api/$version/profiles/#id/configfiles/ats/#filenam
I've accidentally been replying to Rob directly :P
Forwarded Message
Subject: Re: Cache-Side Config Generation
Date: Wed, 31 Jul 2019 09:56:37 -0600
From: ocket
To: Robert Butts
The "extra step" is just asking a local app instead of TO.
Smaller, simpler pieces closer to the cache that do one job are far simpler to
maintain, triage, and build. I'm not a fan of trying to inject a message bus
in the middle of everything.
Jonathan G
On 7/31/19, 8:48 AM, "Genz, Geoffrey" wrote:
To throw a completely different idea out there . . .
To throw a completely different idea out there . . . some time ago Matt Mills
was talking about using Kafka as the configuration transport mechanism for
Traffic Control. The idea is to use a Kafka compacted topic as the
configuration source: TO would write database updates to Kafka, and ORT
would read the compacted topic to get its configuration.
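For reference, the compacted-topic behavior that makes this work is ordinary Kafka topic configuration. The settings below are real Kafka topic-level configs; the values are illustrative only:

```
# Compaction keeps only the latest record per key, so a cache replaying
# the topic from the beginning sees exactly the current config set,
# not the full history of updates.
cleanup.policy=compact
min.cleanable.dirty.ratio=0.5
delete.retention.ms=86400000
```

The per-key retention is what would let a newly provisioned cache bootstrap its entire config by consuming the topic from offset zero.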
+1 on this
On Jul 30, 2019, at 6:01 PM, Rawlin Peters wrote:
I've been thinking for a while now that ORT's current pull-based model
of checking for queued updates is not really ideal, and I was hoping
with "ORT 2.0" that we would switch that paradigm around to where TO
itself would push updates out to queued caches.
Seems to me the client could download a matrix of what-goes-where just as
easily as traffic ops could use that info.
> On Jul 31, 2019, at 9:48 AM, Nir Sopher wrote:
>
> Hi,
>
> Architecture wise, I'm in favor of the traffic ops sending the specific
> configuration to the cache.
> Main reason
Hi,
Architecture wise, I'm in favor of the traffic ops sending the specific
configuration to the cache.
Main reason is taking features like "DS *individual* automatic deployment"
into account, where we would like to be able to control "which server gets
which configuration and when" - e.g. edge caches.
> Sure, but I think that's missing the point a bit. There's still the extra
> step of fetching the configs from a local source, which is the redundancy
> that concerns me. Not in the short-term, but as a long-term solution.

I'm not sure I understand the concern. The "extra step" is just asking a
local app instead of TO.
From a security perspective, allowing access to your Postgres database from
uncontrolled locations, or from an excessive number of locations, exposes you
to many new problems. A read-only replica of part of the database (not the
full thing) might work for a while, but would be very easy to run po
>is there any reason we can't hit the DB from ORT
Technically, it's possible. But we really, really shouldn't. The API is a
guaranteed interface. The database has no such guarantees. TC users would
then be required to deploy ORT with TO, in order; or else implement some
sort of backwards compatibi
I've been thinking for a while now that ORT's current pull-based model
of checking for queued updates is not really ideal, and I was hoping
with "ORT 2.0" that we would switch that paradigm around to where TO
itself would push updates out to queued caches. That way TO would
never get overloaded, because it would control the rate at which it pushes.
This is probably a stupid question, but is there any reason we can't hit the DB
from ORT, thus saving us the expense of writing any new scripting? My
understanding is that the biggest hit on traffic ops isn't the DB so much as
the perl processing for thousands of hosts at once. I assume that t
>I'm confused why this is separate from ORT.
Because ORT does a lot more than just fetching config files. Rewriting all
of ORT in Go would be considerably more work. Contrariwise, if we were to put
the config generation in the ORT script itself, we would have to write it
all from scratch in Perl (th