Hi Tony,
Sorry for the very late answer! This looks really good to me - and for
those who (like me) have been a bit slow on reading up on this, I
recommend also peeking at this documentation file
https://github.com/the-t-in-rtf/gpii-live-registries/blob/master/docs/options-files.md
which helps give a clearer idea of the file and grade structure that
would be used.
~K
On 12/04/17 14:43, Tony Atkins wrote:
Hi, All:
As we have long discussed, currently the solutions and settings used
within the GPII are stored in massive JSON files in the "universal"
repository. I have been tasked with helping move us towards the kind
of granularity, inheritance, and testability we discussed in Toronto.
I have been sketching out initial documentation and a
loading/validation harness
<https://github.com/the-t-in-rtf/gpii-live-registries>, and wanted to
summarize for wider discussion.
First, as discussed in Toronto, the idea is that the "live" registries
would be a separate repo that contains the data that currently lives
in universal, more finely broken down. Changes to the data would be
submitted as pull requests against this repo. The platform-specific
repos would use a versioned release of the "live" data (more on that
in a bit).
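To make "versioned release" concrete: a platform repo would depend on
a tagged release of the live data just like any other npm dependency,
something like the following (the package name and version number here
are invented for illustration):

    "dependencies": {
        "gpii-live-registries": "1.0.0"
    }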
Each solution and setting would be a distinct grade, saved to a single
JSON(5) file. We would use the effective path and filename to create
an implicit and unique grade name for each options file (see the
example after this list). This accomplishes two things:
1. We will have an easier time detecting namespace collisions with
this model.
2. We can detect the existence of and perform standard tests against
each grade in isolation (see below).
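To illustrate the implicit naming (the exact scheme is still up for
discussion, so treat this as a sketch): an options file stored at

    solutions/windows/nvda.json5

would implicitly define the grade

    gpii.solutions.windows.nvda

and any second file resolving to the same path would be an immediately
detectable collision rather than a silent override.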
So, what do I mean by "grades" in this context? Basically, anything
you can do in an options block without writing code can be stored in
one of these JSON(5) files. Concrete settings and solutions derive
from the base /gpii.setting/ and /gpii.solution/ grades. "Abstract"
grades are also possible, such as platform and platform version
mix-ins.
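As a sketch of what one of these files might contain (everything below
"gradeNames" is invented metadata, since the data format itself is
exactly what we still need to discuss):

    // solutions/windows/nvda.json5
    {
        gradeNames: ["gpii.solution", "gpii.solutions.windows"],
        // Hypothetical static metadata, just to show the shape.
        name: "NVDA Screen Reader",
        contexts: {
            OS: { id: "win32" }
        }
    }

Note that the file does not declare its own grade name; that comes
implicitly from its path, as above.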
A new loader would scan through an "options file hierarchy" and
associate each block of options with its namespace, as though the user
had called /fluid.defaults(namespace, options)/. Once all grades have
their defaults defined, we can search for any grades that extend
/gpii.solution/ or /gpii.setting/, and do things like the following (a
rough sketch of the harness appears after the list):
1. Confirm that each component can be safely instantiated.
2. Confirm that the component satisfies the contract defined for the
base grade, for example, that it provides an "isInstalled" invoker.
3. For "abstract" grades, we would not attempt to instantiate them; we
would only confirm that each is extended by at least one "concrete"
grade that has been tested.
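To make the loader/validation pass concrete, here is a rough sketch of
how the harness might work. The directory layout, the "gpii.solutions"
prefix, and the checks are guesses on my part; /fluid.defaults/,
/fluid.hasGrade/, and /fluid.getGlobalValue/ are real Infusion APIs,
but the overall flow is only an illustration:

    "use strict";
    var fluid = require("infusion");
    var fs    = require("fs");
    var path  = require("path");
    var JSON5 = require("json5");

    // Walk the options file hierarchy, registering one grade per file
    // and returning the list of grade names created along the way.
    var loadOptionsFiles = function (rootDir, prefix) {
        var gradeNames = [];
        fs.readdirSync(rootDir).forEach(function (entry) {
            var fullPath = path.join(rootDir, entry);
            if (fs.statSync(fullPath).isDirectory()) {
                gradeNames = gradeNames.concat(
                    loadOptionsFiles(fullPath, prefix + "." + entry));
            }
            else if (/\.json5?$/.test(entry)) {
                // e.g. windows/nvda.json5 under "solutions" becomes
                // gpii.solutions.windows.nvda
                var gradeName = prefix + "." + entry.replace(/\.json5?$/, "");
                var options = JSON5.parse(fs.readFileSync(fullPath, "utf8"));
                fluid.defaults(gradeName, options);
                gradeNames.push(gradeName);
            }
        });
        return gradeNames;
    };

    // Instantiate each concrete solution and check its contract.
    // Assumes gpii.solution ultimately derives from fluid.component,
    // so that each grade gets a global creator function.
    var validateGrades = function (gradeNames) {
        gradeNames.forEach(function (gradeName) {
            var defaults = fluid.defaults(gradeName);
            if (fluid.hasGrade(defaults, "gpii.solution")) {
                var that = fluid.getGlobalValue(gradeName)(); // 1. safe to instantiate?
                if (typeof that.isInstalled !== "function") { // 2. contract satisfied?
                    fluid.fail(gradeName + " is missing its isInstalled invoker.");
                }
            }
        });
    };

    validateGrades(loadOptionsFiles("solutions", "gpii.solutions"));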
Platform-specific tests would take place within the platform-specific
repos, each of which would test its own version of the "live" data,
for example by calling each solution's "isInstalled" invoker to
confirm that nothing breaks (see the sketch below). As with any
versioned dependency change, we would submit a PR against a platform
repo and confirm that the new version of the "live" data does not
break anything before merging and releasing a new version of the
platform repo.
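On the platform side, the corresponding smoke test could be as simple
as the following node-jqunit sketch (the grade name in the list is
invented, and in practice the list would be derived from the loaded
registry rather than hard-coded):

    "use strict";
    var fluid  = require("infusion");
    var jqUnit = require("node-jqunit");

    // Hypothetical concrete solution grades shipped for this platform.
    var solutionGrades = ["gpii.solutions.windows.nvda"];

    jqUnit.test("Each solution's isInstalled invoker runs cleanly.", function () {
        solutionGrades.forEach(function (gradeName) {
            var that = fluid.getGlobalValue(gradeName)();
            jqUnit.assertEquals("isInstalled should return a boolean for " + gradeName,
                "boolean", typeof that.isInstalled());
        });
    });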
So, that's the proposed workflow and test harness, which are
independent of the data format. Please comment. Once we have even
"lazy consensus" agreement on that, we will immediately need to move
forward with discussions about how we represent each solution/setting
and the relationships between settings.
Cheers,
Tony
--
Kasper Galschiot Markus
Lead Research Engineer,
Raising the Floor - International,
www.raisingthefloor.org
_______________________________________________
Architecture mailing list
[email protected]
http://lists.gpii.net/mailman/listinfo/architecture