Re: Packaging problems

2022-06-12 Thread madmurphy
I have added the Arch packages to the repo, as other distros might find them useful.

--madmurphy

On Mon, Jun 6, 2022 at 7:55 PM Maxime Devos  wrote:

> Martin Schanzenbach schreef op ma 06-06-2022 om 16:52 [+]:
> > > As Maxime says, GNUnet takes a long time to compile (when it
> > > actually does - I'm having problems with that right now), and
> > > presumably quite a while to test too. The obvious way to reduce
> > > those times is to simply *reduce the amount of code being compiled
> > > and tested*. Breaking up the big repo would achieve that quite
> > > nicely.
> >
> > It really does not (on modern hardware).
> > See:
> > https://copr.fedorainfracloud.org/coprs/schanzen/gnunet/build/4501586/
> >
> > It takes around 7mins to install & compile from scratch (this
> > includes installing all dependencies!).
> >
> > IMO right now "make check" is kind of annoying because it takes too
> > long and fails because of bad test design. It needs some love.
> > Maybe a high-level, quick "make check" and an optional
> > "make check-thorough", idk.
>
> FWIW, I included "make check" time in compilation time (from Guix'
> perspective, running tests is just yet another compilation).  And the
> computer I use for compilation isn't exactly modern -- according to the
> timestamp on /etc/host.conf, it's ~16 years old ... wait no, cannot be
> right ...  I'm not sure about the exact year, but at least ~5 years I'd
> say?  And that's the current computer (*) which has an SSD, whereas the
> computer(^) I used back then for GNUnet was older and had an old
> spinning disk.  I guess that isn't representative.
>
> (*) currently held together by duct tape, missing a few screws, keys,
> part of the frame and has a crack in the screen -- I cannot recommend
> Lenovo Yoga computers.
>
> (^) screen is broken unless a sea star is inserted between the keyboard
> and the screen in the right place at the right depth and angle.
>
> Greetings,
> Maxime.
>


Re: Packaging problems

2022-06-06 Thread Maxime Devos
Martin Schanzenbach schreef op ma 06-06-2022 om 16:52 [+]:
> > As Maxime says, GNUnet takes a long time to compile (when it
> > actually does - I'm having problems with that right now), and
> > presumably quite a while to test too. The obvious way to reduce
> > those times is to simply *reduce the amount of code being compiled
> > and tested*. Breaking up the big repo would achieve that quite
> > nicely.
> 
> It really does not (on modern hardware).
> See:
> https://copr.fedorainfracloud.org/coprs/schanzen/gnunet/build/4501586/
> 
> It takes around 7mins to install & compile from scratch (this
> includes installing all dependencies!).
> 
> IMO right now "make check" is kind of annoying because it takes too
> long and fails because of bad test design. It needs some love.
> Maybe a high-level, quick "make check" and an optional
> "make check-thorough", idk.

FWIW, I included "make check" time in compilation time (from Guix'
perspective, running tests is just yet another compilation).  And the
computer I use for compilation isn't exactly modern -- according to the
timestamp on /etc/host.conf, it's ~16 years old ... wait no, cannot be
right ...  I'm not sure about the exact year, but at least ~5 years I'd
say?  And that's the current computer (*) which has an SSD, whereas the
computer(^) I used back then for GNUnet was older and had an old
spinning disk.  I guess that isn't representative.

(*) currently held together by duct tape, missing a few screws, keys,
part of the frame and has a crack in the screen -- I cannot recommend
Lenovo Yoga computers.

(^) screen is broken unless a sea star is inserted between the keyboard
and the screen in the right place at the right depth and angle.

Greetings,
Maxime.




Re: Packaging problems

2022-06-06 Thread Martin Schanzenbach
On Thu, 2022-06-02 at 16:29 +0100, Willow Liquorice wrote:
> Right. Perhaps the onus is on the developers (i.e. us) to make things
> a bit easier, then?


Well, the reason the "lax" distros have outdated packages is exactly
that: somebody decided to contribute once or twice and then dropped the
package. I would assume this is less likely in less laxly managed
distros.

> 
> To be honest, I barely understand how the GNUnet project is put
> together on a source code level, let alone how packaging is done. One
> of the things I'm going to do with the Sphinx docs is provide a
> high-level overview of how the main repo is structured.

I think I am going to keep this copr semi up-to-date from now on:
https://copr.fedorainfracloud.org/coprs/schanzen/gnunet/

This can serve as a basis on how GNUnet could be packaged.
But, packaging is very specific to distros. It really does not make
sense for us to do that for any tiny little distro.

Note that functional composition of GNUnet and packaging are not
necessarily the same.
For example, it makes sense to make mariadb/postgres plugins optional
so that dependencies do not explode further. But why make tiny packages
for the components of GNUnet? You may as well just enable/disable what
you do not need from the installed package.
The package/binaries are not that big (~15 MB installed from the RPM),
so size cannot be the argument.
Severely constrained OSes/hardware do need customized packages, yes,
but those devs know better than we do what is needed.

> 
> On the subject of complexity, I attempted to disentangle that awful 
> internal dependency graph a while ago, to get a better idea of how 
> GNUnet works. I noticed that it's possible to divide the subsystems
> up into closely-related groups:
> * a "backbone" (CADET, DHT, CORE, and friends),
> * a VPN suite,
> * a GNS suite,
> * and a set operations suite (SET, SETI, SETU).
> 
> A bunch of smaller "application layer" things (psyc+social+secushare,
> conversation, fs, secretsharing+consensus+voting) then rest on top of
> one or more of those suites.
> 
> I seem to recall that breaking up the main repo has been discussed 
> before, and I think it got nowhere because no agreement was reached
> on where the breaks should be made. My position is that those
> "applications" (which, IIRC, are in various states of "barely 
> maintained") should be moved to their own repos, and the main repo be
> broken up into those four software suites.
> 
> As Maxime says, GNUnet takes a long time to compile (when it actually
> does - I'm having problems with that right now), and presumably quite
> a while to test too. The obvious way to reduce those times is to
> simply *reduce the amount of code being compiled and tested*.
> Breaking up the big repo would achieve that quite nicely.

It really does not (on modern hardware).
See:
https://copr.fedorainfracloud.org/coprs/schanzen/gnunet/build/4501586/

It takes around 7mins to install & compile from scratch (this includes
installing all dependencies!).

IMO right now "make check" is kind of annoying because it takes too
long and fails because of bad test design. It needs some love.
Maybe a high-level, quick "make check" and an optional
"make check-thorough", idk.

> 
> More specifically related to packaging, would it be a good idea to
> look into CD (Continuous Delivery) to complement our current CI setup?
> It could make things easier on package maintainers. Looks like Debian
> has a CI system we might be able to make use of, and all we'd need to
> do is point out the test suite in the package that goes to the Debian
> archive.

We build tar.gz every night: https://buildbot.gnunet.org/artifacts/

Packages can be built from that.
But it does not really make sense to integrate distro packaging with
our releases ("CD"): packages have a disjoint release and update cycle.
Usually #packageReleases >= #upstreamReleases.

My 2 cents




Re: Packaging problems

2022-06-04 Thread Willow Liquorice
Still, using the proposed high-level categorisation of GNUnet components 
would be handy for documentation purposes, even if the repo structure 
only vaguely reflects it, as a lot of service names aren't very 
indicative of their function or importance to the uninitiated.


I'm taking after the Python standard library docs in that respect, where 
modules are categorised into broad domains first, before being sorted 
within those categories.


On 03/06/2022 20:44, Christian Grothoff wrote:
Having many packages doesn't usually make it easier for packagers, it 
just means that now they have to deal with even more sources, and create 
more package specifications. Moreover, build times go up, as you now 
need to run configure many times. Worse, you then need to find out in 
which order to build things, and what are dependencies. It basically 
makes it worse in all aspects.


Another big issue is that right now, I at least notice if I break the 
build of an application and can fix it. Same if I run analysis tools: 
they at least get to see the entire codebase, and can warn us if 
something breaks. If we move those out-of-tree, they'll be even more 
neglected. What we can (and do) do is mark really badly broken 
applications as 'experimental' and require --with-experimental to build 
those. That's IMO better than moving stuff out of tree.


Also, you probably don't want to split things as you proposed: GNS 
depends on VPN and SETU! SET is supposed to become obsolete, but 
consensus still needs it until SETU is extended to match the SET 
capabilities.


Finally, as for build times, have you even tried 'make -j 16' or 
something like that? Multicore rules ;-).


Happy hacking!

Christian


On 6/2/22 17:29, Willow Liquorice wrote:
Right. Perhaps the onus is on the developers (i.e. us) to make things 
a bit easier, then?


To be honest, I barely understand how the GNUnet project is put 
together on a source code level, let alone how packaging is done. One 
of the things I'm going to do with the Sphinx docs is provide a 
high-level overview of how the main repo is structured.


On the subject of complexity, I attempted to disentangle that awful 
internal dependency graph a while ago, to get a better idea of how 
GNUnet works. I noticed that it's possible to divide the subsystems up 
into closely-related groups:

 * a "backbone" (CADET, DHT, CORE, and friends),
 * a VPN suite,
 * a GNS suite,
 * and a set operations suite (SET, SETI, SETU).

A bunch of smaller "application layer" things (psyc+social+secushare, 
conversation, fs, secretsharing+consensus+voting) then rest on top of 
one or more of those suites.


I seem to recall that breaking up the main repo has been discussed 
before, and I think it got nowhere because no agreement was reached on 
where the breaks should be made. My position is that those 
"applications" (which, IIRC, are in various states of "barely 
maintained") should be moved to their own repos, and the main repo be 
broken up into those four software suites.


As Maxime says, GNUnet takes a long time to compile (when it actually 
does - I'm having problems with that right now), and presumably quite 
a while to test too. The obvious way to reduce those times is to 
simply *reduce the amount of code being compiled and tested*. Breaking 
up the big repo would achieve that quite nicely.


More specifically related to packaging, would it be a good idea to 
look into CD (Continuous Delivery) to complement our current CI setup? 
It could make things easier on package maintainers. Looks like Debian 
has a CI system we might be able to make use of, and all we'd need to 
do is point out the test suite in the package that goes to the Debian 
archive.


Re: Packaging problems

2022-06-04 Thread Christian Grothoff
On 6/3/22 22:37, Willow Liquorice wrote:
> Alright, fair point.
> 
> Still, more automated testing/packaging isn't a bad thing. What exactly
> does the CI do, right now? I looked in .buildbot in the main repo, and I
> guess it just tries to build and install GNUnet from source on whatever
> OS hosts Buildbot? Couldn't see that much automated testing/packaging.
> 
> I'll say again that not having GNUnet running on Debian's CI is a big
> missed opportunity. Being able to deploy and test on Debian Unstable
> automatically would surely make it easier to keep the Debian package up
> to date.

Absolutely. Having the CI do more would be great.

> I'm not sure about the exact process, but I get the impression from
> reading about the subject that it could just be a matter of creating a
> new version, which could trigger building Debian packages, which then go
> to Debian Unstable, and are then used with autopkgtest on ci.debian.net.

Who can do this? Anyone, or only a DD? Our DD has been somewhat, eh,
distracted recently...

Cheers!

Christian

> Best wishes,
> Willow
> 
> On 03/06/2022 20:44, Christian Grothoff wrote:
>> Having many packages doesn't usually make it easier for packagers, it
>> just means that now they have to deal with even more sources, and
>> create more package specifications. Moreover, build times go up, as
>> you now need to run configure many times. Worse, you then need to find
>> out in which order to build things, and what are dependencies. It
>> basically makes it worse in all aspects.
>>
>> Another big issue is that right now, I at least notice if I break the
>> build of an application and can fix it. Same if I run analysis tools:
>> they at least get to see the entire codebase, and can warn us if
>> something breaks. If we move those out-of-tree, they'll be even more
>> neglected. What we can (and do) do is mark really badly broken
>> applications as 'experimental' and require --with-experimental to
>> build those. That's IMO better than moving stuff out of tree.
>>
>> Also, you probably don't want to split things as you proposed: GNS
>> depends on VPN and SETU! SET is supposed to become obsolete, but
>> consensus still needs it until SETU is extended to match the SET
>> capabilities.
>>
>> Finally, as for build times, have you even tried 'make -j 16' or
>> something like that? Multicore rules ;-).
>>
>> Happy hacking!
>>
>> Christian
>>
>>
>> On 6/2/22 17:29, Willow Liquorice wrote:
>>> Right. Perhaps the onus is on the developers (i.e. us) to make things
>>> a bit easier, then?
>>>
>>> To be honest, I barely understand how the GNUnet project is put
>>> together on a source code level, let alone how packaging is done. One
>>> of the things I'm going to do with the Sphinx docs is provide a
>>> high-level overview of how the main repo is structured.
>>>
>>> On the subject of complexity, I attempted to disentangle that awful
>>> internal dependency graph a while ago, to get a better idea of how
>>> GNUnet works. I noticed that it's possible to divide the subsystems
>>> up into closely-related groups:
>>>  * a "backbone" (CADET, DHT, CORE, and friends),
>>>  * a VPN suite,
>>>  * a GNS suite,
>>>  * and a set operations suite (SET, SETI, SETU).
>>>
>>> A bunch of smaller "application layer" things (psyc+social+secushare,
>>> conversation, fs, secretsharing+consensus+voting) then rest on top of
>>> one or more of those suites.
>>>
>>> I seem to recall that breaking up the main repo has been discussed
>>> before, and I think it got nowhere because no agreement was reached
>>> on where the breaks should be made. My position is that those
>>> "applications" (which, IIRC, are in various states of "barely
>>> maintained") should be moved to their own repos, and the main repo be
>>> broken up into those four software suites.
>>>
>>> As Maxime says, GNUnet takes a long time to compile (when it actually
>>> does - I'm having problems with that right now), and presumably quite
>>> a while to test too. The obvious way to reduce those times is to
>>> simply *reduce the amount of code being compiled and tested*.
>>> Breaking up the big repo would achieve that quite nicely.
>>>
>>> More specifically related to packaging, would it be a good idea to
>>> look into CD (Continuous Delivery) to complement our current CI
>>> setup? It could make things easier on package maintainers. Looks like
>>> Debian has a CI system we might be able to make use of, and all we'd
>>> need to do is point out the test suite in the package that goes to
>>> the Debian archive.
>>>
>>>
>>
> 



Re: Packaging problems

2022-06-04 Thread Christian Grothoff
On 6/4/22 09:43, Schanzenbach, Martin wrote:
> 
> 
>> On 3. Jun 2022, at 21:44, Christian Grothoff  wrote:
>>
>> Having many packages doesn't usually make it easier for packagers, it just 
>> means that now they have to deal with even more sources, and create more 
>> package specifications. Moreover, build times go up, as you now need to run 
>> configure many times. Worse, you then need to find out in which order to 
>> build things, and what are dependencies. It basically makes it worse in all 
>> aspects.
> 
> Well, I would think this suggests a very badly designed packaging tool.
> Even the extremely old RPM format allows you to build once and then make
> packages from any subset of the built binaries.
> That is how our gnunet rpm in the tree works.

Oh, I'm talking about having many TGZ, not about generating multiple
binary packages from one master source. We do that already for the
GNUnet Debian packages for which we have rules in-tree.




Re: Packaging problems

2022-06-04 Thread Schanzenbach, Martin


> On 3. Jun 2022, at 21:44, Christian Grothoff  wrote:
> 
> Having many packages doesn't usually make it easier for packagers, it just 
> means that now they have to deal with even more sources, and create more 
> package specifications. Moreover, build times go up, as you now need to run 
> configure many times. Worse, you then need to find out in which order to 
> build things, and what are dependencies. It basically makes it worse in all 
> aspects.

Well, I would think this suggests a very badly designed packaging tool.
Even the extremely old RPM format allows you to build once and then make
packages from any subset of the built binaries.
That is how our gnunet rpm in the tree works.
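
As an illustration of that build-once, package-many approach (the
subpackage layout below is hypothetical, not the actual in-tree
gnunet.spec):

```
# Hypothetical gnunet.spec fragment: a single %build, with the optional
# PostgreSQL plugins split out into their own binary RPM.
%package postgres
Summary:  PostgreSQL backend plugins for GNUnet
Requires: %{name}%{?_isa} = %{version}-%{release}

%description postgres
Optional plugins that let GNUnet services use PostgreSQL for storage.

%files postgres
%{_libdir}/gnunet/libgnunet_plugin_*_postgres.so
```

rpmbuild then emits both `gnunet` and `gnunet-postgres` from the same
compile.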

> 
> Another big issue is that right now, I at least notice if I break the build 
> of an application and can fix it. Same if I run analysis tools: they at least 
> get to see the entire codebase, and can warn us if something breaks. If we 
> move those out-of-tree, they'll be even more neglected. What we can (and do)
> do is mark really badly broken applications as 'experimental' and require 
> --with-experimental to build those. That's IMO better than moving stuff out 
> of tree.
> 

That is the big issue I agree.

BR

> Also, you probably don't want to split things as you proposed: GNS depends on 
> VPN and SETU! SET is supposed to become obsolete, but consensus still needs 
> it until SETU is extended to match the SET capabilities.
> 
> Finally, as for build times, have you even tried 'make -j 16' or something 
> like that? Multicore rules ;-).
> 
> Happy hacking!
> 
> Christian
> 
> 
> On 6/2/22 17:29, Willow Liquorice wrote:
>> Right. Perhaps the onus is on the developers (i.e. us) to make things a bit 
>> easier, then?
>> To be honest, I barely understand how the GNUnet project is put together on 
>> a source code level, let alone how packaging is done. One of the things I'm 
>> going to do with the Sphinx docs is provide a high-level overview of how the 
>> main repo is structured.
>> On the subject of complexity, I attempted to disentangle that awful internal 
>> dependency graph a while ago, to get a better idea of how GNUnet works. I 
>> noticed that it's possible to divide the subsystems up into closely-related 
>> groups:
>> * a "backbone" (CADET, DHT, CORE, and friends),
>> * a VPN suite,
>> * a GNS suite,
>> * and a set operations suite (SET, SETI, SETU).
>> A bunch of smaller "application layer" things (psyc+social+secushare, 
>> conversation, fs, secretsharing+consensus+voting) then rest on top of one or 
>> more of those suites.
>> I seem to recall that breaking up the main repo has been discussed before, 
>> and I think it got nowhere because no agreement was reached on where the 
>> breaks should be made. My position is that those "applications" (which, 
>> IIRC, are in various states of "barely maintained") should be moved to their 
>> own repos, and the main repo be broken up into those four software suites.
>> As Maxime says, GNUnet takes a long time to compile (when it actually does - 
>> I'm having problems with that right now), and presumably quite a while to 
>> test too. The obvious way to reduce those times is to simply *reduce the 
>> amount of code being compiled and tested*. Breaking up the big repo would 
>> achieve that quite nicely.
>> More specifically related to packaging, would it be a good idea to look into 
>> CD (Continuous Delivery) to complement our current CI setup? It could make 
>> things easier on package maintainers. Looks like Debian has a CI system we 
>> might be able to make use of, and all we'd need to do is point out the test 
>> suite in the package that goes to the Debian archive.
> 





Re: Packaging problems

2022-06-03 Thread Willow Liquorice

Alright, fair point.

Still, more automated testing/packaging isn't a bad thing. What exactly 
does the CI do, right now? I looked in .buildbot in the main repo, and I 
guess it just tries to build and install GNUnet from source on whatever 
OS hosts Buildbot? Couldn't see that much automated testing/packaging.


I'll say again that not having GNUnet running on Debian's CI is a big 
missed opportunity. Being able to deploy and test on Debian Unstable 
automatically would surely make it easier to keep the Debian package up 
to date.


I'm not sure about the exact process, but I get the impression from 
reading about the subject that it could just be a matter of creating a 
new version, which could trigger building Debian packages, which then go 
to Debian Unstable, and are then used with autopkgtest on ci.debian.net.
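
From my reading of the autopkgtest docs, the glue is mostly a
`debian/tests/control` file in the source package; the test name and
script below are hypothetical:

```
# Hypothetical debian/tests/control: ci.debian.net would run this
# against the *installed* gnunet package on Debian Unstable.
Tests: smoke
Depends: gnunet
Restrictions: allow-stderr
```

with `debian/tests/smoke` being an executable script that, say, starts
a peer and checks it comes up; autopkgtest fails the package if the
script exits non-zero.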


Best wishes,
Willow

On 03/06/2022 20:44, Christian Grothoff wrote:
Having many packages doesn't usually make it easier for packagers, it 
just means that now they have to deal with even more sources, and create 
more package specifications. Moreover, build times go up, as you now 
need to run configure many times. Worse, you then need to find out in 
which order to build things, and what are dependencies. It basically 
makes it worse in all aspects.


Another big issue is that right now, I at least notice if I break the 
build of an application and can fix it. Same if I run analysis tools: 
they at least get to see the entire codebase, and can warn us if 
something breaks. If we move those out-of-tree, they'll be even more 
neglected. What we can (and do) do is mark really badly broken 
applications as 'experimental' and require --with-experimental to build 
those. That's IMO better than moving stuff out of tree.


Also, you probably don't want to split things as you proposed: GNS 
depends on VPN and SETU! SET is supposed to become obsolete, but 
consensus still needs it until SETU is extended to match the SET 
capabilities.


Finally, as for build times, have you even tried 'make -j 16' or 
something like that? Multicore rules ;-).


Happy hacking!

Christian


On 6/2/22 17:29, Willow Liquorice wrote:
Right. Perhaps the onus is on the developers (i.e. us) to make things 
a bit easier, then?


To be honest, I barely understand how the GNUnet project is put 
together on a source code level, let alone how packaging is done. One 
of the things I'm going to do with the Sphinx docs is provide a 
high-level overview of how the main repo is structured.


On the subject of complexity, I attempted to disentangle that awful 
internal dependency graph a while ago, to get a better idea of how 
GNUnet works. I noticed that it's possible to divide the subsystems up 
into closely-related groups:

 * a "backbone" (CADET, DHT, CORE, and friends),
 * a VPN suite,
 * a GNS suite,
 * and a set operations suite (SET, SETI, SETU).

A bunch of smaller "application layer" things (psyc+social+secushare, 
conversation, fs, secretsharing+consensus+voting) then rest on top of 
one or more of those suites.


I seem to recall that breaking up the main repo has been discussed 
before, and I think it got nowhere because no agreement was reached on 
where the breaks should be made. My position is that those 
"applications" (which, IIRC, are in various states of "barely 
maintained") should be moved to their own repos, and the main repo be 
broken up into those four software suites.


As Maxime says, GNUnet takes a long time to compile (when it actually 
does - I'm having problems with that right now), and presumably quite 
a while to test too. The obvious way to reduce those times is to 
simply *reduce the amount of code being compiled and tested*. Breaking 
up the big repo would achieve that quite nicely.


More specifically related to packaging, would it be a good idea to 
look into CD (Continuous Delivery) to complement our current CI setup? 
It could make things easier on package maintainers. Looks like Debian 
has a CI system we might be able to make use of, and all we'd need to 
do is point out the test suite in the package that goes to the Debian 
archive.









Re: Packaging problems

2022-06-03 Thread Christian Grothoff
Having many packages doesn't usually make it easier for packagers, it 
just means that now they have to deal with even more sources, and create 
more package specifications. Moreover, build times go up, as you now 
need to run configure many times. Worse, you then need to find out in 
which order to build things, and what are dependencies. It basically 
makes it worse in all aspects.


Another big issue is that right now, I at least notice if I break the 
build of an application and can fix it. Same if I run analysis tools: 
they at least get to see the entire codebase, and can warn us if 
something breaks. If we move those out-of-tree, they'll be even more 
neglected. What we can (and do) do is mark really badly broken 
applications as 'experimental' and require --with-experimental to build 
those. That's IMO better than moving stuff out of tree.


Also, you probably don't want to split things as you proposed: GNS 
depends on VPN and SETU! SET is supposed to become obsolete, but 
consensus still needs it until SETU is extended to match the SET 
capabilities.


Finally, as for build times, have you even tried 'make -j 16' or 
something like that? Multicore rules ;-).
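
For anyone who hasn't tried it: rather than hard-coding 16, the job
count can be derived from the machine (this assumes GNU coreutils'
`nproc`; the fallback is for systems without it):

```shell
# Pick a parallel job count: all available cores, or 1 if nproc is
# unavailable (e.g. on systems without GNU coreutils).
JOBS=$(nproc 2>/dev/null || echo 1)
echo "would run: make -j$JOBS"
```

Note that a bare `make -j` with no number spawns unlimited jobs, which
can exhaust RAM on a big tree; a bounded `-j` is usually safer.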


Happy hacking!

Christian


On 6/2/22 17:29, Willow Liquorice wrote:
Right. Perhaps the onus is on the developers (i.e. us) to make things a 
bit easier, then?


To be honest, I barely understand how the GNUnet project is put together 
on a source code level, let alone how packaging is done. One of the 
things I'm going to do with the Sphinx docs is provide a high-level 
overview of how the main repo is structured.


On the subject of complexity, I attempted to disentangle that awful 
internal dependency graph a while ago, to get a better idea of how 
GNUnet works. I noticed that it's possible to divide the subsystems up 
into closely-related groups:

 * a "backbone" (CADET, DHT, CORE, and friends),
 * a VPN suite,
 * a GNS suite,
 * and a set operations suite (SET, SETI, SETU).

A bunch of smaller "application layer" things (psyc+social+secushare, 
conversation, fs, secretsharing+consensus+voting) then rest on top of 
one or more of those suites.


I seem to recall that breaking up the main repo has been discussed 
before, and I think it got nowhere because no agreement was reached on 
where the breaks should be made. My position is that those 
"applications" (which, IIRC, are in various states of "barely 
maintained") should be moved to their own repos, and the main repo be 
broken up into those four software suites.


As Maxime says, GNUnet takes a long time to compile (when it actually 
does - I'm having problems with that right now), and presumably quite a 
while to test too. The obvious way to reduce those times is to simply 
*reduce the amount of code being compiled and tested*. Breaking up the 
big repo would achieve that quite nicely.


More specifically related to packaging, would it be a good idea to look 
into CD (Continuous Delivery) to complement our current CI setup? It 
could make things easier on package maintainers. Looks like Debian has a 
CI system we might be able to make use of, and all we'd need to do is 
point out the test suite in the package that goes to the Debian archive.


Re: Packaging problems

2022-06-02 Thread Willow Liquorice
Right. Perhaps the onus is on the developers (i.e. us) to make things a 
bit easier, then?


To be honest, I barely understand how the GNUnet project is put together 
on a source code level, let alone how packaging is done. One of the 
things I'm going to do with the Sphinx docs is provide a high-level 
overview of how the main repo is structured.


On the subject of complexity, I attempted to disentangle that awful 
internal dependency graph a while ago, to get a better idea of how 
GNUnet works. I noticed that it's possible to divide the subsystems up 
into closely-related groups:

* a "backbone" (CADET, DHT, CORE, and friends),
* a VPN suite,
* a GNS suite,
* and a set operations suite (SET, SETI, SETU).

A bunch of smaller "application layer" things (psyc+social+secushare, 
conversation, fs, secretsharing+consensus+voting) then rest on top of 
one or more of those suites.


I seem to recall that breaking up the main repo has been discussed 
before, and I think it got nowhere because no agreement was reached on 
where the breaks should be made. My position is that those 
"applications" (which, IIRC, are in various states of "barely 
maintained") should be moved to their own repos, and the main repo be 
broken up into those four software suites.


As Maxime says, GNUnet takes a long time to compile (when it actually 
does - I'm having problems with that right now), and presumably quite a 
while to test too. The obvious way to reduce those times is to simply 
*reduce the amount of code being compiled and tested*. Breaking up the 
big repo would achieve that quite nicely.


More specifically related to packaging, would it be a good idea to look 
into CD (Continuous Delivery) to complement our current CI setup? It 
could make things easier on package maintainers. Looks like Debian has a 
CI system we might be able to make use of, and all we'd need to do is 
point out the test suite in the package that goes to the Debian archive.


Re: Packaging problems

2022-06-02 Thread Maxime Devos
Willow Liquorice schreef op do 02-06-2022 om 14:44 [+0100]:
> While I was working on the documentation, I decided to put a Repology 
> badge at the top level of the install page, and then I saw the extent of 
> GNUnet's packaging woes. There are more distros with severely outdated 
> versions than ones that are up to date, even the rolling release ones 
> that have comparatively lax stability requirements.
> 
> What's the deal with that? This sort of thing only hurts the project.
> 
> Best wishes,
>   Willow
> 

In case of Guix:

  * lots of failing tests (has been fixed recently, current version
should now be up to date)

  * takes long to compile (not unique to gnunet), which makes testing
changes to the packaging of gnunet require much more time.

Greetings,
Maxime




Packaging problems

2022-06-02 Thread Nikita Ronja Gillmann
More or less, from my work back then and my observations still today:
complexity (and complex scenarios) is to blame. Only people involved in
GNUnet at some point, or close to the project, have managed to create
correct packages.
With my pkgsrc hat on: almost nobody gets paid to do packaging work.
Updates happen whenever someone cares or has the time; we're all just
human with limited time for this.
I can understand the view on this; I gave up at some point, and it is
solely a responsibility of packagers. Emails help in some cases.



Packaging problems

2022-06-02 Thread Willow Liquorice
While I was working on the documentation, I decided to put a Repology 
badge at the top level of the install page, and then I saw the extent of 
GNUnet's packaging woes. There are more distros with severely outdated 
versions than ones that are up to date, even the rolling release ones 
that have comparatively lax stability requirements.


What's the deal with that? This sort of thing only hurts the project.

Best wishes,
Willow