Re: Removing 'internal' from TO API
I think we should do as Dave mentioned: assess and rename.

> On Mar 15, 2017, at 2:18 PM, Jeremy Mitchell wrote:
>
> I don't like duplicating routes either but I thought it would ease the
> transition rather than just changing the route. So no code duplication,
> just 2 routes that go to the same place:
>
> $r->get("/internal/api/$version/steering")->over( authenticated => 1 )->to(
>     'Steering#index', namespace => 'API::DeliveryService' );
> $r->get("/api/$version/steering")->over( authenticated => 1 )->to(
>     'Steering#index', namespace => 'API::DeliveryService' );
>
> And then we circle back and delete
>
> $r->get("/internal/api/$version/steering")->over( authenticated => 1 )->to(
>     'Steering#index', namespace => 'API::DeliveryService' );
>
> at some point.
>
> And yes, this internal namespace was introduced for Comcast-specific
> reasons that I believe no longer exist.
>
> Jeremy
>
> On Wed, Mar 15, 2017 at 2:13 PM, David Neuman wrote:
>
> > At least a few of those (Steering, federations) were put in the
> > "internal" namespace to work around Comcast-specific issues. I don't
> > know that I like the idea of duplicating routes; if anything we should
> > see what is impacted by moving them out of the internal namespace.
> >
> > On Wed, Mar 15, 2017 at 1:30 PM, Jeremy Mitchell wrote:
> >
> > > Currently, we have a number of API routes scoped as "internal". Here
> > > are a few examples:
> > >
> > > https://github.com/apache/incubator-trafficcontrol/blob/master/traffic_ops/app/lib/TrafficOpsRoutes.pm#L516
> > >
> > > I believe this is going to make it more difficult as we try to
> > > implement more granular roles / capabilities coupled with tenancy.
> > >
> > > So I'm proposing that we create a duplicate non-internal route like
> > > this, for example:
> > >
> > > $r->get("/api/$version/steering")->over( authenticated => 1 )->to(
> > >     'Steering#index', namespace => 'API::DeliveryService' );
> > >
> > > that way we can slowly move away from the "internal" routes and
> > > eventually deprecate them.
> > >
> > > I think with our upcoming more robust role / tenancy model, there is
> > > no longer a need for "internal".
> > >
> > > Thoughts?
> > >
> > > Jeremy
Re: Removing 'internal' from TO API
I don't like duplicating routes either but I thought it would ease the
transition rather than just changing the route. So no code duplication,
just 2 routes that go to the same place:

$r->get("/internal/api/$version/steering")->over( authenticated => 1 )->to(
    'Steering#index', namespace => 'API::DeliveryService' );
$r->get("/api/$version/steering")->over( authenticated => 1 )->to(
    'Steering#index', namespace => 'API::DeliveryService' );

And then we circle back and delete

$r->get("/internal/api/$version/steering")->over( authenticated => 1 )->to(
    'Steering#index', namespace => 'API::DeliveryService' );

at some point.

And yes, this internal namespace was introduced for Comcast-specific
reasons that I believe no longer exist.

Jeremy

On Wed, Mar 15, 2017 at 2:13 PM, David Neuman wrote:

> At least a few of those (Steering, federations) were put in the "internal"
> namespace to work around Comcast-specific issues. I don't know that I like
> the idea of duplicating routes; if anything we should see what is impacted
> by moving them out of the internal namespace.
>
> On Wed, Mar 15, 2017 at 1:30 PM, Jeremy Mitchell wrote:
>
> > Currently, we have a number of API routes scoped as "internal". Here
> > are a few examples:
> >
> > https://github.com/apache/incubator-trafficcontrol/blob/master/traffic_ops/app/lib/TrafficOpsRoutes.pm#L516
> >
> > I believe this is going to make it more difficult as we try to implement
> > more granular roles / capabilities coupled with tenancy.
> >
> > So I'm proposing that we create a duplicate non-internal route like
> > this, for example:
> >
> > $r->get("/api/$version/steering")->over( authenticated => 1 )->to(
> >     'Steering#index', namespace => 'API::DeliveryService' );
> >
> > that way we can slowly move away from the "internal" routes and
> > eventually deprecate them.
> >
> > I think with our upcoming more robust role / tenancy model, there is no
> > longer a need for "internal".
> >
> > Thoughts?
> >
> > Jeremy
Re: Removing 'internal' from TO API
At least a few of those (Steering, federations) were put in the "internal"
namespace to work around Comcast-specific issues. I don't know that I like
the idea of duplicating routes; if anything we should see what is impacted
by moving them out of the internal namespace.

On Wed, Mar 15, 2017 at 1:30 PM, Jeremy Mitchell wrote:

> Currently, we have a number of API routes scoped as "internal". Here are a
> few examples:
>
> https://github.com/apache/incubator-trafficcontrol/blob/master/traffic_ops/app/lib/TrafficOpsRoutes.pm#L516
>
> I believe this is going to make it more difficult as we try to implement
> more granular roles / capabilities coupled with tenancy.
>
> So I'm proposing that we create a duplicate non-internal route like this,
> for example:
>
> $r->get("/api/$version/steering")->over( authenticated => 1 )->to(
>     'Steering#index', namespace => 'API::DeliveryService' );
>
> that way we can slowly move away from the "internal" routes and eventually
> deprecate them.
>
> I think with our upcoming more robust role / tenancy model, there is no
> longer a need for "internal".
>
> Thoughts?
>
> Jeremy
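[Editor's note: the transition pattern proposed in this thread, a legacy "/internal" path and a new path dispatching to the same handler so the alias can be deleted later, can be sketched in miniature. This is an illustrative Python sketch of the idea only, not actual Traffic Ops or Mojolicious code; the handler name and response values are made up.]

```python
# Illustrative sketch of the dual-route transition pattern discussed above:
# register one handler under both the legacy "internal" path and the new
# path, so clients can migrate before the legacy alias is deleted.
# Names and payloads are examples, not actual Traffic Ops code.

def steering_index():
    return {"response": "steering data"}

routes = {}

def register(path, handler):
    routes[path] = handler

version = "1.2"
# New canonical route and the legacy alias point at the same handler,
# so there is no code duplication, just two entries in the route table.
register(f"/api/{version}/steering", steering_index)
register(f"/internal/api/{version}/steering", steering_index)  # delete later

def dispatch(path):
    handler = routes.get(path)
    if handler is None:
        return {"error": "not found"}
    return handler()
```

The point of the pattern is that `dispatch` returns identical results for both paths, so clients can move to the non-internal route at their own pace; removing the alias later is a one-line deletion.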
Re: Public CI Builds for Traffic Control
So, after some investigation, I've circled back on the idea of mounting
docker-compose sideways and letting it manage sibling containers. It appears
that the Docker folks have already tamed the maddest of the madness. There's
a reasonably well-supported script for doing it that I was able to
incorporate into the packaging script I had already put together. I've
updated the PR to include this:

https://github.com/apache/incubator-trafficcontrol/pull/347

If you have docker-compose, it will use it. If you don't, it will run it
inside a container. Caveats apply, but none that are likely in practice, I
think. This reduces the requirements to:

- git
- bash
- docker

All of which are satisfied by the ubuntu hosts on the ASF build
infrastructure.

On Wed, Mar 15, 2017 at 8:57 AM Jeff Elsloo wrote:

Docker isn't required to build the software; it's just another option.
There's a build script, `build/build.sh`, that works just fine so long as
you have the dependencies required to successfully build all components. I
only mention this because if Docker is going to gate our ability to perform
CI out in the open, we still have the `build.sh` option.

I was able to use the build script to successfully build all components
from master yesterday.
--
Thanks,
Jeff

On Tue, Mar 14, 2017 at 8:27 PM, Chris Lemmons wrote:

> Yeah, there're unfortunately good reasons not to have any accounts with
> write permission in the GitHub repo. It can cause all sorts of problems if
> anything were actually pushed. It also allows lots of other things, like
> editing other people's comments. GitHub should really separate that out
> for the purpose of bots with minimum required access anyway. But yeah,
> without write, it's a no-go. Comments are a very reasonable alternative,
> though.
>
> It's definitely worth a few minutes to get things set up on the ASF
> Jenkins if those ubuntu slaves have the requirements. As it stands, the
> only requirements for a build are:
>
> - git
> - bash
> - docker
> - docker-compose
>   - which requires python
>
> I believe the first three are satisfied by the build hosts. I don't know
> that docker-compose is available. It's 100% worth finding out, though.
>
> If it's not, we can do one of:
>
> - Run docker-compose inside a docker instance with the docker port
>   forwarded into the container to allow it to manage sibling containers.
> - Re-create the subset of docker-compose behaviour that we actually use
>   in a build script.
> - Give up.
>
> I mention the first option only because People on the Internet seem to
> keep suggesting it. I believe madness that way lies. I dislike giving up,
> so for lack of a better option, perhaps we might need to ditch
> docker-compose. I've got a PR open that wraps docker-compose in a unified
> script. It wouldn't be entirely unreasonable to shift a bit more logic
> into it.
>
> Another possibility is to see if the infra folks would mind adding python
> and docker-compose. I'm not sure adding python to the mix on those boxen
> is a good idea, though, even if they're willing.
>
> On Tue, Mar 14, 2017 at 6:03 PM Leif Hedstrom wrote:
>
>> On Mar 14, 2017, at 6:15 PM, Chris Lemmons wrote:
>>
>> Honestly, the key is hosting. If we have a host for CI that runs the
>> basic build steps, we can configure any solution to build all the
>> changes on branches of a collection of repos on GitHub. Pretty much all
>> the reasonable options have a status update script on GitHub, which
>> integrates it quite nicely. (And therein might lie the rub. I think
>> GitHub ties status updates to "push permission", which may be false for
>> everyone on the main repo, since it's just a mirror.) But direct
>> integration or no, we'd be able to go look at the results and even
>> download the binary, install it on a test system and watch it go.
>
> So, we do not have the Jenkins master have “write permission” into the
> GitHub repo. I asked Infra before, and they said no, but I’ll try again.
>
> However, things can still work reasonably well, since any registered
> GitHub account is able to comment on a PR / issue. So, no, we can’t set
> labels etc. automatically from the Jenkins master, but we get pretty good
> feedback on what happens with the builds. See e.g.
>
> https://github.com/apache/trafficserver/pull/1581
>
> Cheers,
>
> — leif
>
>> Now, that doesn't get us automatic builds from first-time or probably
>> even very occasional contributors. But stick builds on the most frequent
>> contributors' clones and we get 95% of the benefit without solving any
>> of the actually hard problems.
>>
>> We'd need a host, though.
>>
>> On Tue, Mar 14, 2017 at 5:06 PM Leif Hedstrom wrote:
>>
>>> On Mar 13, 2017, at 8:44 AM, Chris Lemmons wrote:
>>>
>>> To me, the key features of
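[Editor's note: the fallback Chris describes, use a native docker-compose when present and otherwise run compose itself inside a container with the Docker socket mounted so it manages sibling rather than nested containers, might look roughly like the sketch below. This is a hedged illustration, not the actual packaging script from PR 347; the `docker/compose` image tag and mount layout are assumptions.]

```python
# Rough sketch of the docker-compose fallback described above. If a native
# docker-compose is on PATH, use it; otherwise run compose itself in a
# container, mounting the Docker socket so the containerized compose talks
# to the host daemon and starts sibling (not nested) containers. The image
# tag and mounts are illustrative assumptions, not taken from the real PR.
import shutil

def compose_command(workdir):
    if shutil.which("docker-compose"):
        return ["docker-compose"]
    return [
        "docker", "run", "--rm",
        # Forward the host Docker daemon into the container.
        "-v", "/var/run/docker.sock:/var/run/docker.sock",
        # Mount the build tree at the same path so relative volumes resolve.
        "-v", f"{workdir}:{workdir}", "-w", workdir,
        "docker/compose:1.29.2",
    ]

def build_command(workdir):
    # Either way, the caller sees one uniform compose invocation.
    return compose_command(workdir) + ["up", "--build"]
```

The design point is that the wrapper picks the backend once and every caller gets the same `compose ... up --build` interface, which is what lets the host requirements shrink to git, bash, and docker.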
Re: Public CI Builds for Traffic Control
Docker isn't required to build the software; it's just another option.
There's a build script, `build/build.sh`, that works just fine so long as
you have the dependencies required to successfully build all components. I
only mention this because if Docker is going to gate our ability to perform
CI out in the open, we still have the `build.sh` option.

I was able to use the build script to successfully build all components
from master yesterday.
--
Thanks,
Jeff

On Tue, Mar 14, 2017 at 8:27 PM, Chris Lemmons wrote:

> Yeah, there're unfortunately good reasons not to have any accounts with
> write permission in the GitHub repo. It can cause all sorts of problems if
> anything were actually pushed. It also allows lots of other things, like
> editing other people's comments. GitHub should really separate that out
> for the purpose of bots with minimum required access anyway. But yeah,
> without write, it's a no-go. Comments are a very reasonable alternative,
> though.
>
> It's definitely worth a few minutes to get things set up on the ASF
> Jenkins if those ubuntu slaves have the requirements. As it stands, the
> only requirements for a build are:
>
> - git
> - bash
> - docker
> - docker-compose
>   - which requires python
>
> I believe the first three are satisfied by the build hosts. I don't know
> that docker-compose is available. It's 100% worth finding out, though.
>
> If it's not, we can do one of:
>
> - Run docker-compose inside a docker instance with the docker port
>   forwarded into the container to allow it to manage sibling containers.
> - Re-create the subset of docker-compose behaviour that we actually use
>   in a build script.
> - Give up.
>
> I mention the first option only because People on the Internet seem to
> keep suggesting it. I believe madness that way lies. I dislike giving up,
> so for lack of a better option, perhaps we might need to ditch
> docker-compose. I've got a PR open that wraps docker-compose in a unified
> script. It wouldn't be entirely unreasonable to shift a bit more logic
> into it.
>
> Another possibility is to see if the infra folks would mind adding python
> and docker-compose. I'm not sure adding python to the mix on those boxen
> is a good idea, though, even if they're willing.
>
> On Tue, Mar 14, 2017 at 6:03 PM Leif Hedstrom wrote:
>
>> On Mar 14, 2017, at 6:15 PM, Chris Lemmons wrote:
>>
>> Honestly, the key is hosting. If we have a host for CI that runs the
>> basic build steps, we can configure any solution to build all the
>> changes on branches of a collection of repos on GitHub. Pretty much all
>> the reasonable options have a status update script on GitHub, which
>> integrates it quite nicely. (And therein might lie the rub. I think
>> GitHub ties status updates to "push permission", which may be false for
>> everyone on the main repo, since it's just a mirror.) But direct
>> integration or no, we'd be able to go look at the results and even
>> download the binary, install it on a test system and watch it go.
>
> So, we do not have the Jenkins master have “write permission” into the
> GitHub repo. I asked Infra before, and they said no, but I’ll try again.
>
> However, things can still work reasonably well, since any registered
> GitHub account is able to comment on a PR / issue. So, no, we can’t set
> labels etc. automatically from the Jenkins master, but we get pretty good
> feedback on what happens with the builds. See e.g.
>
> https://github.com/apache/trafficserver/pull/1581
>
> Cheers,
>
> — leif
>
>> Now, that doesn't get us automatic builds from first-time or probably
>> even very occasional contributors. But stick builds on the most frequent
>> contributors' clones and we get 95% of the benefit without solving any
>> of the actually hard problems.
>>
>> We'd need a host, though.
>>
>> On Tue, Mar 14, 2017 at 5:06 PM Leif Hedstrom wrote:
>>
>>> On Mar 13, 2017, at 8:44 AM, Chris Lemmons wrote:
>>>
>>>> To me, the key features of CI are that a) it builds each branch
>>>> automatically, b) notifies affected parties when all is not well, and
>>>> c) manages the artefacts in a reasonable way. Additionally, we're a
>>>> lot more useful when we're writing neat software and not spending our
>>>> time managing CI, so it should be as automatic as reasonable. We're
>>>> using GitHub for PRs, so if it's at all possible to get automatic PR
>>>> tagging with build information, that is greatly desirable. Knowing
>>>> that the PR breaks the build prior to merging it can save quite a bit
>>>> of time. :)
>>>
>>> My $0.25: My experience is that making as much of the CI build / test
>>> run on pull requests, *before* they are landed, gives the most bang for
>>> the buck. But that might not work well for you, since you can’t use
>>> GitHub, right?
>>>
>>> — leif
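[Editor's note: the "push permission" constraint discussed in this thread maps to GitHub's commit status API: setting a build status on a commit is a write operation, while commenting on a PR is open to any authenticated account. Below is a hedged sketch of building such a status request; the repository, SHA, and token are placeholders, and this is not part of any existing Traffic Control tooling.]

```python
# Sketch of a GitHub commit-status update as discussed above. Posting to
# /repos/{owner}/{repo}/statuses/{sha} requires write (repo:status) access,
# which is the permission a read-only mirror account lacks; a CI bot
# without it falls back to commenting on the PR instead. All identifiers
# here are placeholders for illustration.
import json
import urllib.request

def status_request(owner, repo, sha, state, token, context="ci/build"):
    payload = {
        "state": state,  # "pending", "success", "failure", or "error"
        "context": context,
        "description": f"Build {state}",
    }
    # Build (but do not send) the authenticated POST request.
    return urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/statuses/{sha}",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github.v3+json",
        },
        method="POST",
    )
```

A Jenkins job with a suitably scoped token would send this with `urllib.request.urlopen`; without write access, the equivalent fallback is a POST to the issue-comments endpoint, which is what the trafficserver PR feedback Leif links to relies on.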