jonpspri commented on issue #184: Create tarball for Nginx container in 
OpenWhisk as part of release deploy script
URL: 
https://github.com/apache/incubator-openwhisk-cli/pull/184#issuecomment-355691228
 
 
   Good questions.  I'm going to come at them sideways first, because I want to be sure there is clarity in the discussion.
   
   TL;DR:  I hadn't intended to work on the Gradle build because it isn't used to build the release files.  But all the build processes should probably be the same, and I'm open to suggestions on what that unified build should look like (though I'm leaning toward a Makefile).
   
   Context:  I started down this path because I was introducing OpenWhisk to the Linux on Z ('linux s390x') architecture.  The immediate problem I ran into was that 's390x' isn't a valid architecture for the mac/darwin or windows OSs, which broke the Cartesian-product logic that had been used to build the CLI.  I made some corrections in the 'incubator-openwhisk' and 'incubator-openwhisk-cli' repositories to address that.
   
   What became clear in the longer term is that process and control were flowing back and forth between the two projects.  Adding an os/architecture required:
   1.  Adding it to the CLI compile and production repository,
   2.  Adding it to the list of os/architectures to be downloaded/redirected to, and
   3.  Generating a 'content.json' file as part of the OpenWhisk ansible deployment scripts to advertise what was available from a particular OpenWhisk build.
   
   That led me to my axiom for this PR:
   
   **Axiom:**  The 'incubator-openwhisk-cli' project should have the master 
list of what CLI builds are available and package them along with content.json. 
 Then the CLI process becomes one-way -- the ansible Nginx deployment process 
in 'incubator-openwhisk' downloads a tarball and we're off to the races.
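   To make that one-way flow concrete, here is a minimal sketch of the packaging side.  The file names, the os/arch list, and the shape of content.json are all hypothetical stand-ins for illustration, not the project's actual layout:

```shell
#!/bin/sh
# Sketch: bundle the cross-compiled CLI binaries plus content.json into
# a single tarball for the Nginx deployment to download.  All names and
# the content.json shape here are illustrative assumptions.
set -e

BUILD_DIR=$(mktemp -d)

# Stand-ins for the real cross-compiled wsk binaries.
for pair in linux-amd64 linux-s390x darwin-amd64 windows-amd64; do
    touch "$BUILD_DIR/wsk-$pair"
done

# content.json travels with the binaries so the downloader knows which
# os/arch combinations are present.
cat > "$BUILD_DIR/content.json" <<'EOF'
{"cli": ["linux-amd64", "linux-s390x", "darwin-amd64", "windows-amd64"]}
EOF

tar -czf openwhisk-cli.tgz -C "$BUILD_DIR" .
tar -tzf openwhisk-cli.tgz
```

   The deployment side then reduces to a single download-and-unpack step instead of per-binary redirects.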
   
   So I did what I'd been doing: I fiddled with the Travis scripts to build an extra release artifact -- a tarball of content.json and the various builds of wsk, to be downloaded directly by Nginx.  As I thought this through, I came to a conclusion:
   
   **Conclusion:**  "Hey, content.json is metadata that could be produced manually and used to drive the build."  Note that it doesn't have to be this way, but the list of os/architecture pairs has to live somewhere -- if not in content.json then in some other build or configuration file.  I grant that hand-coding JSON is not the most romantic endeavor, but it may be less resource-intensive than maintaining something fancier.
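   A rough sketch of that idea: a hand-maintained content.json holds the os/arch list, and the build script derives its work from it.  Everything here (the file shape, the sed-based parsing, the output paths) is an illustrative assumption; jq would be the natural parser in a real script.  The loop only echoes the go build commands it would run:

```shell
#!/bin/sh
# Sketch: content.json as the single source of truth for the build.
# The JSON shape, paths, and parsing approach are all hypothetical.
set -e

cat > content.json <<'EOF'
{
  "cli": [
    "linux/amd64",
    "linux/s390x",
    "darwin/amd64",
    "windows/amd64"
  ]
}
EOF

# Extract the "os/arch" strings and print the go build commands a real
# script would run (dry-run here for illustration).
sed -n 's/^ *"\([a-z0-9]*\/[a-z0-9]*\)".*/\1/p' content.json |
while IFS=/ read -r os arch; do
    echo "GOOS=$os GOARCH=$arch go build -o build/${os}-${arch}/wsk"
done
```

   The trade-off is exactly the one described above: the JSON is hand-coded, but the pair list lives in one place.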
   
   Okay, so here I was with a beautiful PR that did what I wanted, when my new friend @dubeejw asked, "Hey, what about the Gradle build?", to which my initial response was "WTF -- isn't the Gradle build being discouraged anyway?"  On re-reading, though, I realized it wasn't, and that even my 's390x' patch hadn't made it there -- which gets us to where we are.
   
   So: what are the eventual goals here?  This is what I think we want:
   1.  Travis builds and releases each individual binary.
   2.  Travis builds and releases a tarball suitable for Nginx to download (including content.json).
   3.  A developer can build the local os/architecture combination.
   4.  A developer can build a _single_ specified os/architecture combination.
   5.  A developer can build all supported cross-compile combinations (and get a bonus content.json).
   
   And ideally, we would do this:
   1.  With a minimum of duplicated code between the Travis and developer approaches, and
   2.  Using a local Go installation (instead of resorting to Docker).
   
   My gut says the way to achieve this is with gradle/make plus the existing build.sh script, though I may move some logic back and forth between them.  Travis would invoke gradle/make, we'd rip out all the gradle and docker stuff, and the build would target a '.gitignore'd build directory.
   
   I'm open to opinions about the build tool.  In the longer run I think Make is a lighter-weight yet very capable choice, but if we think OpenWhisk devs are more likely to speak Gradle, we can go that way.
   
   So, to answer your questions (and this is the vision, not how the code looks today):
   1.  Configuration is part of the repository/source code and so should never be empty.  The build options are: current architecture (plain 'go build' with no options), a specified architecture (via GOOS/GOARCH), or cross-compile everything.
   2.  The cross-compile build becomes a target in the overall build script.
   3.  The architecture(s) to build are a build-script parameter.
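   Those three answers could collapse into a single build.sh dispatch along these lines.  This is a dry-run sketch that echoes the go build invocations instead of running them; the mode names and the supported pair list are hypothetical, not the current code:

```shell
#!/bin/sh
# Hypothetical build.sh dispatch covering the three build modes.
# Dry-run: echoes the go build commands rather than executing them.
set -e

MODE="${1:-local}"   # local | single | all

build_one() {
    echo "GOOS=$1 GOARCH=$2 go build -o build/$1-$2/wsk"
}

case "$MODE" in
    local)   # current os/architecture: plain go build, no options
        echo "go build -o build/wsk"
        ;;
    single)  # one specified pair, e.g. ./build.sh single linux s390x
        build_one "$2" "$3"
        ;;
    all)     # every supported pair (a real script would also emit content.json)
        for pair in linux/amd64 linux/s390x darwin/amd64 windows/amd64; do
            build_one "${pair%/*}" "${pair#*/}"
        done
        ;;
    *)
        echo "usage: build.sh [local | single <os> <arch> | all]" >&2
        exit 1
        ;;
esac
```

   With a dispatch like this, Travis and a developer would invoke the identical entry point, which is the "minimum of duplicated code" goal above.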
   
   I think the key point is to reduce everything to a single processing flow, which is where I want to focus for now.  I'll let you know whether that becomes another PR.
   
   Sorry for the _very_ long reply, but it's as much to get my thinking clear 
as anything.  Kudos if you made it this far!
