Jon Turney via Cygwin-apps writes:
> So, to follow on from the points raised in the thread ending [1], and
> perhaps start a discussion:
>
> Having developers build executable packages locally and then upload
> them doesn't really meet contemporary standards.

Based on my current observations at work, with people absolutely
misusing CI/CD, I'm not sure "contemporary standards" are actually an
improvement.

> Given my druthers, I'd just disable sftp package uploads right now,
> and make you all use the janky "build service" I hacked together in a
> few spare weekends.
>
> Amongst the advantages this has:
>
> * Ensures that the packaging git repo is updated when the package is.
>
> * Ensures that the source package is complete; and that the build
>   works and is repeatable.
>
> * Ensures that the package is built in a sterile environment.

That's an assertion that would seem to need a few more qualifications to
become true.  Fundamentally, the underlying VM is not actually under your
control.  More specifically, cygport already pulls in many dependencies
that you often can't ignore.  I've been thinking about doing the Cygwin
install in a way that lets me quickly set up a targeted installation
that really has just the packages required for the build, another one
for testing, and then of course the one that cygport runs in.  That
would at least start to untangle that knot…
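The per-role installs above could be driven by Cygwin's setup program in
unattended mode, one install root per role.  This is only a sketch: the
mirror URL, cache path, roots, and package lists are placeholders, and the
real invocation is left as an echo so nothing is installed by accident.

```shell
#!/bin/sh
# Sketch: separate Cygwin roots for build, test, and the cygport host,
# each installed with only the packages that role needs.
# SETUP_EXE, MIRROR, the roots and the package lists are placeholders.
SETUP_EXE=./setup-x86_64.exe
MIRROR=https://mirrors.kernel.org/sourceware/cygwin/
CACHE="$PWD/pkgcache"          # shared download cache across roots

install_root () {
    root=$1; pkgs=$2
    # --quiet-mode: unattended; --root: per-role install root;
    # --packages: only what this role needs; --site / --local-package-dir:
    # mirror and download cache.  Echoed as a dry run; drop the 'echo'
    # once the package lists are settled.
    echo "$SETUP_EXE" --quiet-mode --root "$root" \
         --site "$MIRROR" --local-package-dir "$CACHE" \
         --packages "$pkgs"
}

install_root /cygwin-build "gcc-core,make,libssl-devel"
install_root /cygwin-test  "perl,perl_base"
install_root /cygwin-host  "cygport,git"
```

Keeping the download cache shared while the roots stay disjoint is what
makes the three installs cheap enough to recreate per package.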

Oh and btw, all my build machines that I have used since the Perl 5.14
update are or have been dedicated machines that I don't use for anything
else.

> * Ensures that the package is serially developed in the git main
>   branch (so when someone does a NMU, they don't also need to remember
>   to explicitly communicate any changes made to the maintainer so they
>   don't drop off again in the next release...)
>
> * Transparency about, and some degree of auditability on how, the
>   contents of the install package are generated.
>
> * The way we provide a chrooted sftp session for uploads is weird and
>   non-standard and this means sourceware no longer has to support it.

This is actually artifact storage / deployment, not build.  Why do you
(want to?) conflate these?  I think that makes the discussion more
complicated.  We're still creating a classical binary package repo and
setup.ini for users to install Cygwin from, not giving them some Git
repo that they can materialize their installation from.

> (A related consideration is that this probably also helps if/when we
> want to start providing arm64 packages (which will otherwise entail
> all the problems we had when x86_64 started being a thing - but at
> least in that case, most maintainers had the necessary hardware, if
> not the needed OS) - The alternative seems to be providing numerous
> cross-build packages, which doesn't seem like a good use of anyone's
> time...)
>
>
> Unfortunately, there are some issues with my half-assed replacement:
>
> * It relies on free CI services (GitHub and AppVeyor) to do the actual
>   builds, which might be withdrawn or start charging.

Also, there are limits to what you can do on these free services that my
builds frequently run into.  AppVeyor has a one-hour job limit, and
either the testing or sometimes even the build itself does not finish in
that time.

> * In particular, github requires an account to view the logs, which is
>   a sticking point for some people.

It very much is.  I made that decision based on how GitHub is perverting
Git back into a centralized development model, and that was before
GitHub was even bought by Microsoft.

Also, as an SFC supporter: https://sfconservancy.org/GiveUpGitHub/

> * There's a number of problems with the current implementation: For a
>   start it's a synchronous daisy-chain of actions, which isn't
>   tolerant of intermittent connectivity or other transient problems.
>
> * The "tokens" which can be specified to control options in it are an
>   ad-hoc mess. Idk what the ideal solution is, but the names of those
>   options all need rethinking for a start...
>
> * If you want to rebuild A and then B which requires the new A, you
>   have to guess when to submit the build for B. (perhaps it needs
>   private access to the repo rather than using a mirror, to ensure the
>   copy it's looking at is up to date, but even then...)

This is the actual showstopper.  If I can't stage builds and use those
pre-builds to do the final ones, then a new Perl release would become
more or less impossible: it would either take an extremely long time or
leave a window during which setup.ini is broken for unsuspecting users.

> * Some packages have circular dependencies, requiring some
>   bootstrapping build of package A, using that to build B, before
>   returning to rebuild package A properly with B available. This is
>   completely unaccounted for in the current model of doing things.

Same as above.  As I said, you will need at least one staging repo of
some sort that a build can access in addition to the actual release
repo, and the builds need to be sequenced accordingly.
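The sequencing both points are asking for can be sketched as a
topological sort over the package dependency graph, with a bootstrap
pass when a cycle is detected.  The package names and the dependency
graph below are made-up examples, not the real Cygwin dependency data.

```python
# Sketch: serialize package rebuilds so each build can pull its
# dependencies from a staging repo populated by the builds before it.
from graphlib import TopologicalSorter, CycleError

# Example graph: each package maps to the set of packages it needs
# built first (placeholder data).
deps = {
    "perl": set(),
    "perl-XML-Parser": {"perl"},
    "irssi": {"perl"},
}

def build_order(deps):
    """Return a serial build order.  On a dependency cycle, fall back
    to the bootstrap scheme: build one member of the cycle without the
    others, build the rest against that, then rebuild it properly."""
    try:
        return list(TopologicalSorter(deps).static_order())
    except CycleError as e:
        cycle = e.args[1]          # e.g. ['A', 'B', 'A']
        first = cycle[0]
        # Drop 'first' from the graph so the remainder is acyclic.
        reduced = {p: d - {first} for p, d in deps.items() if p != first}
        return ([first + " (bootstrap)"]
                + list(TopologicalSorter(reduced).static_order())
                + [first + " (rebuild)"])

print(build_order(deps))
```

Each entry in the returned order would correspond to one build job whose
output is pushed to the staging repo before the next job starts, which
is exactly the guessing game the current daisy-chain makes you play by
hand.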

> * There are a couple of packagers using their own handcrafted
>   packaging rather than cygport.
>
> I'll be reaching out to those soon to discuss what can (reasonably)
> be done to accommodate them.
>
>
> Any questions/thoughts/concerns?

As said before, I don't really care much about using sftp for deployment
and can adapt to different artifact upload/deployment strategies if
necessary.  I just don't see how to (quickly) fix the problems using the
current builders would create on my side.

>
> [1] https://cygwin.com/pipermail/cygwin-apps/2025-June/044388.html


Regards,
Achim.
-- 
+<[Q+ Matrix-12 WAVE#46+305 Neuron microQkb Andromeda XTk Blofeld]>+

Wavetables for the Terratec KOMPLEXER:
http://Synth.Stromeko.net/Downloads.html#KomplexerWaves
