Hi Tony! This all sounds cool... and I might just assign GPII-111 over to you, if you think it's more or less the actual work you're doing right now. You can take a look, and if you think it is, feel free to take it.
Just one comment below.

> On Apr 12, 2017, at 5:43 AM, Tony Atkins <[email protected]> wrote:
>
> Hi, All:
>
> As we have long discussed, currently the solutions and settings used within
> the GPII are stored in massive JSON files in the "universal" repository. I
> have been tasked with helping move us towards the kind of granularity,
> inheritance, and testability we discussed in Toronto. I have been sketching
> out initial documentation and a loading/validation harness
> <https://github.com/the-t-in-rtf/gpii-live-registries>, and wanted to
> summarize for wider discussion.
>
> First, as discussed in Toronto, the idea is that the "live" registries would
> be a separate repo that contains the data that currently lives in universal,
> more finely broken down. Changes to the data would be submitted as pull
> requests against this repo. The platform-specific repos would use a
> versioned release of the "live" data (more on that in a bit).
>
> Each solution and setting would be a distinct grade, saved to a single
> JSON(5) file. We would use the effective path and filename to create an
> implicit and unique grade name for each options file. This accomplishes two
> things:

I think having a default convention for the repo where each solution lives in
its own file is probably good, but I would hope that each setting for that
solution wouldn't have to be in its own file... maybe it could be if you want,
but hopefully it's optional. Having barely survived that decade in the early
2000s when J2EE was cool and every single public Java class had to go in its
own file has left some anxiety in my stomach thinking about this.

Mostly, I want to make sure that we future-proof ourselves, and that this
validation and solutions registry tooling works well in any situation where
you have some JSON data that is described by a schema, regardless of whether
it's in a file, a CouchDB document, a node in another JSON document, or
dynamically and temporarily stored in the local storage of an awesome
web-based authoring tool (which is most likely just over the horizon for us).

Out of paranoia [1], I did start reading through the spec and looking at some
of the validation libraries, and everything seems like it should be OK. And
even though I don't have the schema directive in yet (although it's only one
five-minute vim macro away ;) ), they do actually seem to validate fine.

As I mentioned on the APCP call, I am actually hoping that we can start with
the metadata I've created for the Generic Preferences and then fill in some of
the application settings metadata (JAWS, mostly), since that is a good half
day's worth of typing. I'm happy to spend the ten minutes with a vim macro to
make them look however we need them to, and to split them up into files.
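For example, a single solution file with its settings nested inline might look
something like this. Everything here (the implicit grade name, the gradeNames,
the whole shape of the settings block) is made up just to show what I mean,
not a concrete proposal:

    // solutions/win32/jaws.json5 -> implicit grade name "gpii.solutions.win32.jaws"
    {
        gradeNames: ["gpii.solution"],
        name: "JAWS",
        settings: {
            // Each setting is a node in this document, not a separate
            // 5-10 line file of its own.
            speechRate: {
                type: "number",
                minimum: 40,
                maximum: 400,
                default: 180
            },
            punctuationVerbosity: {
                type: "string",
                enum: ["none", "some", "most", "all"],
                default: "most"
            }
        }
    }

The exact same object could live in a CouchDB document, in a node of a larger
JSON document, or in an authoring tool's local storage; all the loader needs
is the JSON itself and a namespace to register it under.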
I guess my point is: breaking these up so they aren't just one file per OS
sounds great, but I would recommend against requiring a 5-10 line file for
every single setting. And while these are in files now, I hope we can
remember, with any API we encounter, that the JSON data is what's important;
whatever physical storage it sits in is mostly accidental complexity.

Cheers,

Steve

[1] ...and the left-over stress/nightmares from when I was still working on
J2EE projects.

> - We will have an easier time detecting namespace collisions with this
>   model.
> - We can detect the existence of and perform standard tests against each
>   grade in isolation (see below).
>
> So, what do I mean by "grades" in this context? Basically, anything you can
> do in an options block without writing code can be stored in one of these
> JSON(5) files. Settings and solutions derive from concrete gpii.setting and
> gpii.solution grades. Abstract grades are also possible, such as platform
> and platform-version mix-ins.
>
> A new loader would scan through an "options file hierarchy" and associate
> each block of options with its namespace, as though the user had called
> fluid.defaults(namespace, options). Once all grades have their defaults
> defined, we can search for any grades that extend gpii.solution or
> gpii.setting, and do things like:
>
> - Confirm that each component can be safely instantiated.
> - Confirm that the component satisfies the contract defined for the base
>   grade, for example, that it provides an "isInstalled" invoker.
> - For "abstract" grades, we would not attempt to instantiate them, only
>   confirm that each is extended by at least one "concrete" grade that has
>   been tested.
>
> Platform-specific tests would take place within the platform-specific repos,
> which would test their version of the "live" data, for example by calling
> each solution's "isInstalled" method to confirm that nothing breaks. As with
> any versioned dependency change, we would submit a PR against a platform
> repo and confirm that the new version of the "live" data does not break
> anything before merging and releasing a new version of the platform repo.
>
> So, that's the proposed workflow and test harness, which are independent of
> the data format. Please comment. Once we have even "lazy consensus"
> agreement on that, we will immediately need to move forward with discussions
> about how we represent each solution/setting and the relationships between
> settings.
>
> Cheers,
>
> Tony
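P.S. Tony, since you asked for comments on the workflow: to make sure I'm
picturing the same loader you are, here is a rough sketch of the "scan and
register" step. The registry root, the path-to-grade-name scheme, and the
file extensions are all my own assumptions, not anything you've specified:

    // loader-sketch.js: walk an options file hierarchy and register each
    // file's contents as grade defaults, exactly as though someone had
    // called fluid.defaults(gradeName, options) by hand.
    "use strict";
    var fs = require("fs");
    var path = require("path");
    var fluid = require("infusion");
    var JSON5 = require("json5");

    // e.g. <root>/solutions/win32/jaws.json5 -> "gpii.solutions.win32.jaws"
    var gradeNameFromPath = function (root, filePath) {
        var relativePath = path.relative(root, filePath).replace(/\.json5?$/, "");
        return "gpii." + relativePath.split(path.sep).join(".");
    };

    // Returns the grade names it registered, so that later test stages know
    // what to inspect.
    var loadRegistry = function (root, dir) {
        dir = dir || root;
        var gradeNames = [];
        fs.readdirSync(dir).forEach(function (entry) {
            var fullPath = path.join(dir, entry);
            if (fs.statSync(fullPath).isDirectory()) {
                gradeNames = gradeNames.concat(loadRegistry(root, fullPath));
            } else if (/\.json5?$/.test(entry)) {
                var gradeName = gradeNameFromPath(root, fullPath);
                fluid.defaults(gradeName, JSON5.parse(fs.readFileSync(fullPath, "utf8")));
                gradeNames.push(gradeName);
            }
        });
        return gradeNames;
    };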
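And the "find everything that extends gpii.solution and test it" step might
look roughly like this, fed with the grade names the loader collected. The
isInstalled contract and the abstract/concrete distinction are from your
mail; how I check them here is just a guess at one possible harness:

    // contract-check-sketch.js: basic sanity checks over registered grades.
    "use strict";
    var fluid = require("infusion");

    var checkSolutionGrades = function (gradeNames) {
        gradeNames.forEach(function (gradeName) {
            var defaults = fluid.defaults(gradeName); // resolved defaults
            // Only test grades that (directly or indirectly) extend
            // gpii.solution.
            if (defaults.gradeNames.indexOf("gpii.solution") === -1) {
                return;
            }
            // Contract check: each solution should supply an "isInstalled"
            // invoker. A real harness would exempt "abstract" grades here
            // instead of flagging them.
            if (!defaults.invokers || !defaults.invokers.isInstalled) {
                fluid.log(gradeName + " is missing an isInstalled invoker");
                return;
            }
            // Instantiation check: confirm the component can be created and
            // torn down without blowing up.
            var component = fluid.invokeGlobalFunction(gradeName, []);
            component.destroy();
        });
    };

Note that nothing in either sketch cares whether the options came from a
file: swapping fs out for a CouchDB view would only change loadRegistry,
which is exactly the property I'd like us to keep.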
_______________________________________________ Architecture mailing list [email protected] http://lists.gpii.net/mailman/listinfo/architecture
