On Tue, Apr 23, 2019 at 9:47 AM Pierre Labastie <[email protected]> wrote:
>
> Forgotten point above:
> - Create an organization on github with name (tentative) automate-lfs:
> it will contain the jhalfs repo, of course, but could also host
> experimental versions of the books, with xml added (or modified) in
> order to make the scriptlet generation easier (LFS is already good in
> this respect, BLFS is not, CLFS is intermediate...). All this will
> depend on manpower (hmm, don't want to exclude women, of course, maybe
> should use "humanpower"?). Note that the original idea of ALFS (creating
> a DTD for xml based GNU-linux distribution builds) is not what I have in
> mind, hence the proposal for a different name.

Sounds great!

Your description brings up a few thoughts I have been
contemplating for a while:

1. Do you have in mind supporting more advanced/alternative build methods?

For example, in Mere Linux, I've replaced all chroot actions with
containers. Having a generic automated system that can help support or
drive ideas like that would be interesting.
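To make the idea concrete, here is a minimal sketch of what "containers instead of chroot" could look like from the driver's side. The image name, mount point, and script path are all made up for illustration, and I'm assuming a podman/docker-style runtime; this is not how Mere Linux actually does it, just the shape of the interface a generic tool could offer:

```python
import subprocess

def container_build_cmd(image, sysroot, script, runtime="podman"):
    """Build the argv for running one build step in a container
    instead of a chroot.  All names here are illustrative."""
    return [
        runtime, "run", "--rm",
        "-v", f"{sysroot}:/mnt/lfs",  # bind-mount the target root
        image,
        "/bin/sh", "-e", script,      # run one scriptlet, stop on first error
    ]

# e.g.:
# subprocess.run(container_build_cmd("lfs-builder", "/mnt/lfs",
#                                    "/scripts/gcc-pass1.sh"), check=True)
```

The point being: if the tool builds an argv like this in one place, swapping chroot for a container runtime (or vice versa) becomes a one-function change rather than a rewrite of every scriptlet.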

2. Moving away from shell scripts?

Shell scripts are great, and I admit I enjoy writing them. However,
they are hard to test; in particular, they're really hard to unit
test in a meaningful way. And, of course, as the requirements and
complexity of the tool grow, a more modern and flexible language
becomes nicer and nicer to work with.

So I've started exploring the idea of gradually swapping out the shell
bits for Python, working in a Test-Driven Development style. I
have a little boilerplate setup that looks promising. Specifically,
there are a few places where I think a move to Python might really
help:

 * Using a library like https://github.com/ulfalizer/Kconfiglib to
avoid in-source menu code. All you would need is the Config.in file.
 * Using either built-in or third-party XML libraries, I believe we
could make the command extraction easier and avoid XSL stylesheets
entirely.
 * Better download management, caching, verification of sources. e.g.,
https://2.python-requests.org//en/v0.10.6/user/advanced/#asynchronous-requests
 * Packaging, versioning and distribution become easier via pip and
setuptools. The host prerequisites would also shrink a little, or at
least become easier to manage, because pip tracks and installs the
tool's required dependencies.
 * Testing, of course. The boilerplate I have uses flake8
(http://flake8.pycqa.org), pytest (https://docs.pytest.org) and
coverage (https://coverage.readthedocs.io) all driven by tox
(https://tox.readthedocs.io).
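On the XML point, here's a rough sketch of what stylesheet-free command extraction could look like with just the standard library. The toy fragment mimics the book's DocBook markup (commands live in <userinput> inside <screen>); whether the real books mark non-executable screens with role="nodump" or something else would need checking, so treat that attribute as an assumption:

```python
import xml.etree.ElementTree as ET

# Toy fragment in the spirit of the book's DocBook markup.
PAGE = """\
<sect1>
  <screen><userinput>./configure --prefix=/usr</userinput></screen>
  <screen><userinput>make
make install</userinput></screen>
  <screen role="nodump"><userinput>make check</userinput></screen>
</sect1>
"""

def extract_commands(xml_text):
    """Pull the commands out of every <screen> not marked as
    non-executable -- no XSL stylesheet required."""
    root = ET.fromstring(xml_text)
    cmds = []
    for screen in root.iter("screen"):
        if screen.get("role") == "nodump":  # attribute name is an assumption
            continue
        for ui in screen.iter("userinput"):
            cmds.append(ui.text)
    return cmds
```

A real version would also have to resolve entities and handle nested inline elements, but even so it stays ordinary, unit-testable Python instead of XSLT.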
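And on download management, even without a third-party library the standard library gets you caching plus verification in a few lines. A hedged sketch (function names are mine, and requests/async could replace urllib later):

```python
import hashlib
import urllib.request
from pathlib import Path

def fetch(url, dest_dir):
    """Download url into dest_dir, skipping files already cached."""
    dest = Path(dest_dir) / url.rsplit("/", 1)[-1]
    if not dest.exists():
        urllib.request.urlretrieve(url, dest)
    return dest

def sha256_ok(path, expected):
    """Verify a cached tarball against its published SHA-256 sum."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected
```

That's already better than re-downloading on every run, and a bad checksum can abort the build before anything gets compiled against a corrupt tarball.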

3. CI/CD?

The move to Github opens up some doors around Continuous Integration
and Continuous Delivery that may be really interesting to explore.
Have you thought about this at all, even if in a limited way? For
example, having pull requests trigger automated syntax/linting checks
or even unit tests if we get that far. Or, periodically triggering
some automated builds for testing/reporting purposes.
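For what it's worth, with the tox setup from point 2 the CI config stays tiny. A purely illustrative sketch in Travis-style YAML (service choice, Python versions, and the tox-travis glue are all assumptions, not a worked-out proposal):

```yaml
# .travis.yml -- illustrative only
language: python
python:
  - "3.6"
  - "3.7"
install:
  - pip install tox-travis
script:
  - tox   # runs flake8, pytest and coverage as configured in tox.ini
```

Every pull request would then get lint and unit-test results automatically before anyone reviews it.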

I could write more, I tend to have a lot of ideas. That can be both a
good and bad thing. :) But I'll leave it there for now.

JH
-- 
http://lists.linuxfromscratch.org/listinfo/alfs-discuss
FAQ: http://www.linuxfromscratch.org/faq/
Unsubscribe: See the above information page
