The vast majority of projects use a DB to solve this, but that requires a DB and a lot more API, which we will get to when we move to Glare. In the meantime, I suspect we're in fairly uncharted waters.
Thanks,
Kevin
________________________________________
From: Christopher Aedo [d...@aedo.net]
Sent: Thursday, January 14, 2016 5:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [app-catalog] Automating some aspects of catalog maintenance

While we are looking forward to implementing an API based on Glare, I think it would be nice to automate a few aspects of catalog maintenance: for instance, discovering and removing/tagging assets with dead links, updating the hash for assets that change frequently, or exposing when an entry was last modified.

Initially I thought the best approach would be to create a very simple API service using Flask on top of a DB, providing output identical to the current "v1" API. But of course that "simple" idea starts to look too complicated for something that would eventually be abandoned wholesale.

Someone on the infra team suggested a dead-link checker that would run as a periodic job similar to other proposal-bot jobs, so I took a first pass at that [1]. As expected, that resulted in a VERY large initial change [2] due to "normalizing" the existing human-edited assets.yaml file. I think the feedback that this is un-reviewable without some external tools is reasonable (though it's possible to verify the 86 assets are unmolested, only slightly reformatted). One thing that would help would be forcing all entries to meet a specific format which would not need adjustment by proposal-bot, but even that change would require a near-complete rewrite of the assets file, so I don't think it would help in this case. I'm generally in favor of this approach because it keeps all the information on the assets in one place (the assets.yaml file), which makes it easy for humans to read and understand.

An alternate proposed direction is to merge machine-generated information with the human-generated assets.yaml during the creation of the JSON file [3] that is used by the website and the Horizon plugin. The start of that work is this script to discover last-modified times for assets based on git history [4]. While I think the approach of merging machine-generated and human-generated files could work, it feels a lot like creating a relational database out of YAML files glued together with a bash script. If it works, though, maybe it's the best short-term approach?

Ultimately my goal is to make sure the assets in the catalog are kept up to date without introducing a great deal of administrative overhead or obfuscating how the display version of the catalog is created. How are other projects handling concerns like this? Would love to hear feedback on how you've seen something like this handled - thanks!

[1]: https://review.openstack.org/#/c/264978/
[2]: https://review.openstack.org/#/c/266218/
[3]: https://apps.openstack.org/api/v1/assets
[4]: https://review.openstack.org/#/c/267087/

-Christopher
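
To make the dead-link-checker idea concrete: the actual proposal-bot job is the one under review in [1], which is not reproduced here. The following is only a minimal sketch of what such a periodic checker might look like; the assets.yaml path and the "attributes.url" field name are assumptions, not the real schema.

#!/usr/bin/env python
"""Minimal sketch of a periodic dead-link checker for assets.yaml.

NOTE: illustration only. The real job is in review [1]; the file
path and the 'attributes.url' field name below are assumptions.
"""
import requests
import yaml

ASSETS_FILE = 'openstack_catalog/web/static/assets.yaml'  # assumed path


def check_assets(path=ASSETS_FILE):
    with open(path) as f:
        data = yaml.safe_load(f)
    dead = []
    for asset in data.get('assets', []):
        url = asset.get('attributes', {}).get('url')  # assumed field
        if not url:
            continue
        try:
            # HEAD keeps the periodic job cheap; fall back to GET for
            # servers that reject HEAD requests.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                resp = requests.get(url, stream=True, timeout=10)
            if resp.status_code >= 400:
                dead.append((asset.get('name'), url, resp.status_code))
        except requests.RequestException as exc:
            dead.append((asset.get('name'), url, str(exc)))
    return dead


if __name__ == '__main__':
    for name, url, status in check_assets():
        print('DEAD: %s %s (%s)' % (name, url, status))

A job like this would then tag or remove the offending entries (or, proposal-bot style, push the resulting diff up for review rather than merging it blindly).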
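The git-history approach in [4] boils down to asking git for the newest commit that touched a path. A sketch of just that core step follows; the real script has the harder job of attributing changes within the single assets.yaml file to individual entries, which this sketch does not attempt.

#!/usr/bin/env python
"""Sketch of deriving a last-modified time from git history.

Illustrates the core idea behind [4] only: per-entry attribution
within one shared assets.yaml file is not handled here.
"""
import subprocess


def last_modified(path, repo='.'):
    # %ct prints the committer timestamp (unix epoch) of the most
    # recent commit that touched the given path.
    out = subprocess.check_output(
        ['git', '-C', repo, 'log', '-1', '--format=%ct', '--', path])
    return int(out.strip())


if __name__ == '__main__':
    print(last_modified('openstack_catalog/web/static/assets.yaml'))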
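The merge-at-build-time direction could also stay in Python rather than becoming "YAML files glued together with a bash script". A rough sketch of that merge step, with the file names and the shape of the machine-generated data invented purely for illustration:

#!/usr/bin/env python
"""Rough sketch of merging machine-generated data into the published
JSON. File names and the generated-data layout are invented for
illustration; the real build may differ."""
import json
import yaml


def build_catalog(assets_path, generated_path, out_path):
    with open(assets_path) as f:
        catalog = yaml.safe_load(f)
    with open(generated_path) as f:
        # Assumed layout, e.g. {"asset name": {"last_modified": 1452812345}}
        generated = yaml.safe_load(f)
    for asset in catalog.get('assets', []):
        extra = generated.get(asset.get('name'), {})
        # Machine-generated keys never override human-edited ones.
        for key, value in extra.items():
            asset.setdefault(key, value)
    with open(out_path, 'w') as f:
        json.dump(catalog, f, indent=2, sort_keys=True)

Keeping the human-edited file authoritative and only filling in missing keys would preserve the property Christopher wants: assets.yaml stays the single place humans read and edit.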