This feels extremely heavy-handed to me. I don't think we should build out a new table for each server type when they will mostly have all the same columns. I could maybe see a total of three tables - caches, origins (which already exists), and everything else - but even then I would hesitate to call it a great idea. Even with a caches table, we still have to put some sort of typing in place to distinguish edges and mids, and with the addition of flexible topologies even that is muddy; it might be better to call them forward and reverse proxies instead, but that's a different conversation. While this may seem to solve a lot of problems on the surface, I think some of the things you're trying to address will remain, and we will have new problems on top of them.
I think we should instead address this problem with a better way of identifying server types that can be accounted for in code instead of by searching for strings, with validation in our API based on server type (e.g. only requiring some fields for caches), and also by rethinking the way we do our API - maybe moving away from "based on database tables" toward "based on use cases".

--Dave

On Tue, Aug 25, 2020 at 10:49 AM ocket 8888 <[email protected]> wrote:

> Hello everyone, I'd like to discuss something that the Traffic Ops Working
> Group has been working on: splitting servers apart.
>
> Servers have a lot of properties, and most are specifically important to
> Cache Servers - made all the more clear by the recent addition of multiple
> network interfaces. We propose they be split up into different objects
> based on type - which will also help reduce (if not totally eliminate) the
> use of custom Types for servers. This will also eliminate the need for
> hacky ways of searching for certain kinds of servers - e.g. checking for a
> profile name that matches "ATS_.*" to determine if something is a cache
> server and searching for a Type that matches ".*EDGE.*" to determine if
> something is an edge-tier or mid-tier Cache Server (both of which are real
> checks in place today).
>
> The new objects would be:
>
> - Cache Servers - exactly what it sounds like
> - Infrastructure Servers - catch-all for anything that doesn't fit in a
>   different category, e.g. Grafana
> - Origins - This should ideally eat the concept of "ORG"-type servers so
>   that we ONLY have Origins to express the concept of an Origin server.
> - Traffic Monitors - exactly what it sounds like
> - Traffic Ops Servers - exactly what it sounds like
> - Traffic Portals - exactly what it sounds like
> - Traffic Routers - exactly what it sounds like
> - Traffic Stats Servers - exactly what it sounds like - but InfluxDB
>   servers would be Infrastructure Servers; this is just whatever machine
>   is running the actual Traffic Stats program.
> - Traffic Vaults - exactly what it sounds like
>
> I have a Draft PR (https://github.com/apache/trafficcontrol/pull/4986)
> ready for a blueprint to split out Traffic Portals already, to give you a
> sort of idea of what that would look like. I don't want to get too bogged
> down in what properties each one will have exactly, since that's best
> decided on a case-by-case basis and each should have its own blueprint;
> I'm more looking for feedback on the concept of splitting apart servers
> in general.
>
> If you do have questions about what properties each is semi-planned to
> have, though, I can answer them or point you at the current draft of the
> API design document, which contains all those answers.
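
For what it's worth, the "typed in code instead of string matching" idea could be sketched roughly like the Go below. Everything here is hypothetical - the `ServerType` constants, the `Server` struct, and the `Validate` method are illustrative names, not anything that exists in Traffic Ops today - but it shows the shape of replacing regex checks like `ATS_.*` / `.*EDGE.*` with a real type plus per-type API validation:

```go
package main

import "fmt"

// ServerType is a hypothetical enumeration of server kinds, standing in
// for today's string/regex checks against profile and Type names.
type ServerType int

const (
	TypeCache ServerType = iota
	TypeOrigin
	TypeInfrastructure
)

// Server is a minimal stand-in for a row in the servers table; only the
// fields needed for this sketch are included.
type Server struct {
	Type       ServerType
	Interfaces []string // per Dave's example, only required for caches
}

// Validate applies type-specific rules instead of one-size-fits-all
// column constraints - e.g. only cache servers need interfaces.
func (s Server) Validate() error {
	if s.Type == TypeCache && len(s.Interfaces) == 0 {
		return fmt.Errorf("cache servers require at least one network interface")
	}
	return nil
}

func main() {
	cache := Server{Type: TypeCache}
	fmt.Println(cache.Validate()) // non-nil: caches need an interface

	origin := Server{Type: TypeOrigin}
	fmt.Println(origin.Validate()) // <nil>: origins don't need one
}
```

The point is that code switching on `ServerType` is checkable by the compiler, whereas a regex against a profile name silently breaks the moment someone names a profile differently.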
