+1 for splitting cache servers and infra servers. Currently, each server
must be associated with a CDN and a cache group.

While that requirement may seem logical on its own, queueing updates on a
CDN sets the upd_pending column on *all* of that CDN's servers, including
servers of RASCAL type, servers of CCR type, etc. Although this doesn't hurt
anything, as Jeremy has said, side effects like these make database-level
validation difficult, so a table split of some kind seems like a step in the
right direction.
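
To make the side effect concrete, here is a rough sketch of the difference a
split would make for queueing updates. The table names (server,
cache_server), the queueUpdates* functions, and the exact SQL are
illustrative assumptions, not the actual Traffic Ops code:

package example

import "database/sql"

// Today: queueing updates on a CDN flips upd_pending on every row of the
// one big server table for that CDN, whatever its type (EDGE, MID, RASCAL,
// CCR, ...).
func queueUpdatesToday(db *sql.DB, cdnID int) error {
    _, err := db.Exec(
        `UPDATE server SET upd_pending = TRUE WHERE cdn_id = $1`, cdnID)
    return err
}

// After a split: the flag would live only on a cache-server table, so the
// statement could not touch rows it was never meant for.
func queueUpdatesAfterSplit(db *sql.DB, cdnID int) error {
    _, err := db.Exec(
        `UPDATE cache_server SET upd_pending = TRUE WHERE cdn_id = $1`, cdnID)
    return err
}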

-Zach

On Tue, Aug 25, 2020 at 2:22 PM Jeremy Mitchell <[email protected]>
wrote:

> If you look at the columns of the servers table, you'll see that most are
> specific to "cache servers", so I definitely think that should be
> addressed. Overloaded tables make it hard (impossible?) to do any
> database-level validation and I thought we wanted to move in that direction
> where possible.
>
> At the very least I think we should have these tables to capture all our
> "server objects":
>
> - cache_servers (formerly known as servers)
> - infra_servers
> - origins
>
> Now, whether the API mirrors the tables is another discussion. I don't think
> we should strive for that, but sometimes GET /api/cache_servers just seems
> to make sense.
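>
> For what it's worth, here is a rough sketch of how those three kinds could
> look as Go types, just to show where the cache-only columns would end up.
> The field sets (ServerCommon, CacheServer, InfraServer, Origin) are
> hypothetical, not the real TO structs or a settled schema:
>
> package example
>
> // Fields any kind of server object would plausibly share.
> type ServerCommon struct {
>     ID         int
>     HostName   string
>     DomainName string
> }
>
> // cache_servers: the cache-only columns (CDN, cache group, upd_pending,
> // the new interface data, ...) move here instead of sitting on every
> // server row.
> type CacheServer struct {
>     ServerCommon
>     CDN          string
>     CacheGroup   string
>     UpdPending   bool
>     RevalPending bool
>     Interfaces   []string // stand-in for the multiple-interface structure
> }
>
> // infra_servers: the catch-all, with no CDN or cache group required.
> type InfraServer struct {
>     ServerCommon
>     Description string
> }
>
> // origins: matches the existing origins concept.
> type Origin struct {
>     ServerCommon
>     FQDN     string
>     Protocol string
>     Port     int
> }
>
> If the API ever does mirror the tables, types like these would also be the
> natural GET /api/cache_servers payloads.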
>
> Jeremy
>
> On Tue, Aug 25, 2020 at 12:19 PM Gray, Jonathan <[email protected]>
> wrote:
>
> > I agree with Dave here. Instead of trying to make our database and API
> > identical, we should focus on doing better relational data modeling inside
> > the database and letting that roll upward into TO with more specific
> > queries and stronger data integrity inside the database.
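> >
> > To make "more specific queries" concrete, here is a hedged sketch; the
> > table and column names (cache_server, tier, host_name) are hypothetical
> > and this is not actual TO code:
> >
> > package example
> >
> > import "database/sql"
> >
> > // Instead of selecting from one overloaded servers table and filtering
> > // by type in Go, TO could ask the database for exactly the rows it means.
> > func edgeCacheHostnames(db *sql.DB, cdn string) ([]string, error) {
> >     rows, err := db.Query(
> >         `SELECT host_name FROM cache_server WHERE cdn = $1 AND tier = 'EDGE'`,
> >         cdn)
> >     if err != nil {
> >         return nil, err
> >     }
> >     defer rows.Close()
> >
> >     var names []string
> >     for rows.Next() {
> >         var name string
> >         if err := rows.Scan(&name); err != nil {
> >             return nil, err
> >         }
> >         names = append(names, name)
> >     }
> >     return names, rows.Err()
> > }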
> >
> > Jonathan G
> >
> > On 8/25/20, 11:20 AM, "Dave Neuman" <[email protected]> wrote:
> >
> >     This feels extremely heavy-handed to me. I don't think we should try
> >     to build out a new table for different server types which will mostly
> >     have all the same columns. I could maybe see a total of 3 tables for
> >     caches, origins (which already exists), and other things, but even
> >     then I would be hesitant to think it was a great idea. Even if we have
> >     a caches table, we still have to put some sort of typing in place to
> >     distinguish edges and mids, and with the addition of flexible
> >     topologies even that is muddy; it might be better to call them forward
> >     and reverse proxies instead, but that is a different conversation.
> >     While it may seem like this solves a lot of problems on the surface, I
> >     think some of the things you are trying to address will remain and we
> >     will have new problems on top of that.
> >
> >     I think we should address this problem with a better way of
> >     identifying server types that can be accounted for in code instead of
> >     searching for strings, by adding some validation to our API based on
> >     the server type (e.g. only require some fields for caches), and by
> >     rethinking the way we do our API, maybe moving away from "based on
> >     database tables" toward "based on use cases".
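> >
> >     To make the first two points concrete, here is a hedged sketch; the
> >     ServerKind values and the Validate function are made up for
> >     illustration, not actual TO code:
> >
> >     package example
> >
> >     import "errors"
> >
> >     // ServerKind is an explicit value the code can switch on, instead of
> >     // pattern-matching on names.
> >     type ServerKind string
> >
> >     const (
> >         KindEdgeCache ServerKind = "EDGE_CACHE"
> >         KindMidCache  ServerKind = "MID_CACHE"
> >         KindOrigin    ServerKind = "ORIGIN"
> >         KindInfra     ServerKind = "INFRA"
> >     )
> >
> >     // Server is a stand-in for whatever an API request decodes into.
> >     type Server struct {
> >         Kind       ServerKind
> >         HostName   string
> >         CDN        string
> >         CacheGroup string
> >     }
> >
> >     // Validate requires the cache-specific fields only when the object
> >     // really is a cache.
> >     func Validate(s Server) error {
> >         if s.HostName == "" {
> >             return errors.New("hostName is required")
> >         }
> >         if s.Kind == KindEdgeCache || s.Kind == KindMidCache {
> >             if s.CDN == "" || s.CacheGroup == "" {
> >                 return errors.New("caches require a cdn and a cacheGroup")
> >             }
> >         }
> >         return nil
> >     }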
> >
> >     --Dave
> >
> >     On Tue, Aug 25, 2020 at 10:49 AM ocket 8888 <[email protected]>
> > wrote:
> >
> >     > Hello everyone, I'd like to discuss something that the Traffic Ops
> >     > Working Group has been working on: splitting servers apart.
> >     >
> >     > Servers have a lot of properties, and most are specifically important
> >     > to Cache Servers - made all the more clear by the recent addition of
> >     > multiple network interfaces. We propose they be split up into
> >     > different objects based on type - which will also help reduce (if not
> >     > totally eliminate) the use of custom Types for servers. This will
> >     > also eliminate the need for hacky ways of searching for certain kinds
> >     > of servers - e.g. checking for a profile name that matches "ATS_.*"
> >     > to determine if something is a cache server and searching for a Type
> >     > that matches ".*EDGE.*" to determine if something is an edge-tier or
> >     > mid-tier Cache Server (both of which are real checks in place today).
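> >     >
> >     > To illustrate, a hypothetical sketch (not the real checks verbatim):
> >     > today the code has to pattern-match on profile and Type names, while
> >     > after a split the kind and tier would be explicit fields:
> >     >
> >     > package example
> >     >
> >     > import "regexp"
> >     >
> >     > // Today: what kind of server something is gets inferred from name
> >     > // patterns like these.
> >     > var (
> >     >     atsProfile = regexp.MustCompile("ATS_.*")
> >     >     edgeType   = regexp.MustCompile(".*EDGE.*")
> >     > )
> >     >
> >     > func isCacheToday(profileName string) bool {
> >     >     return atsProfile.MatchString(profileName)
> >     > }
> >     >
> >     > func isEdgeToday(typeName string) bool {
> >     >     return edgeType.MatchString(typeName)
> >     > }
> >     >
> >     > // After the split: being a Cache Server is a matter of which object
> >     > // you have, and the tier is an explicit field rather than a substring.
> >     > type CacheServer struct {
> >     >     Tier string // e.g. "EDGE" or "MID"
> >     > }
> >     >
> >     > func isEdge(cs CacheServer) bool { return cs.Tier == "EDGE" }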
> >     >
> >     > The new objects would be:
> >     >
> >     > - Cache Servers - exactly what it sounds like
> >     > - Infrastructure Servers - catch-all for anything that doesn't fit in
> >     >   a different category, e.g. Grafana
> >     > - Origins - This should ideally eat the concept of "ORG"-type servers
> >     >   so that we ONLY have Origins to express the concept of an Origin
> >     >   server.
> >     > - Traffic Monitors - exactly what it sounds like
> >     > - Traffic Ops Servers - exactly what it sounds like
> >     > - Traffic Portals - exactly what it sounds like
> >     > - Traffic Routers - exactly what it sounds like
> >     > - Traffic Stats Servers - exactly what it sounds like - but InfluxDB
> >     >   servers would be Infrastructure Servers; this is just whatever
> >     >   machine is running the actual Traffic Stats program.
> >     > - Traffic Vaults - exactly what it sounds like
> >     >
> >     > I have a Draft PR
> >     > (https://github.com/apache/trafficcontrol/pull/4986) ready for a
> >     > blueprint to split out Traffic Portals already, to give you a sort of
> >     > idea of what that would look like. I don't want to get too bogged
> >     > down in what properties each one will have exactly, since that's best
> >     > decided on a case-by-case basis and each should have its own
> >     > blueprint, but I'm more looking for feedback on the concept of
> >     > splitting apart servers in general.
> >     >
> >     > If you do have questions about what properties each is semi-planned
> >     > to have, though, I can answer them or point you at the current draft
> >     > of the API design document, which contains all those answers.
> >     >
> >
> >
>
