Re: continuous integrations

2024-05-06 Thread Bruno Haible
Hi Simon,

> Right.  I also had trouble with Savannah git mirrors in the past, but
> for the past year or so it has worked well.

Interesting...

> One of the few disadvantages with this approach that I've discovered is
> that you don't get tight coupling of ci/cd script and the rest of the
> repository.  This means that if you for some reason want to redo the
> pipeline on commit X in say 5 years, you may have to find whatever old
> commit of the CI pipeline job definition was used at the time and then
> set that up to be able to run the pipeline.  If the pipeline definition
> can be written to work with both current master git and 5 years old git,
> then it will work fine, but it means more work to keep it tested.  I've
> found this pattern useful once in a while, but it is not a strong
> reason.

I haven't been in this situation yet. And if I were, it would be easy to
look over the changes in the CI script, since it hardly changes more often
than once per year.

> >> Then we can apply that group for free CI/CD minutes
> >
> > What do you mean by that? I've found GitLab's limit of 400 minutes per
> > month, applied per top-level group, and see that GitHub does not have
> > such a limit.
> 
> I have applied to this program for a couple of projects, and while it is
> a manual process and takes some time, it will give you 50,000 compute
> minutes per month:
> 
> https://about.gitlab.com/solutions/open-source/join/

This changes things, indeed. 50,000 minutes per month would be comfortable
for a set of 20 to 50 GNU packages (on the order of 1,000 to 2,500 minutes
per package per month). But you need one person who will do the necessary
renewal paperwork once a year.

> By using a single project it would also be possible to purchase compute
> minutes in bulk and have them apply to all sub-projects.  I've found
> this to be fairly cheap compared to alternative cost of setting up and
> maintaining runners on my own hardware.

Sure: The electricity prices in the US are significantly lower than they
are in Sweden. [1] On that basis, runners at your location can't compete
on cost.

> I've found it to only be cost
> effective to setup my own runners for platforms that gitlab doesn't
> support natively, such as arm64 or ppc64el.

Yes, it would be quite a waste of energy to run a QEMU-emulated CI job.

Bruno

[1] 
https://worldpopulationreview.com/country-rankings/cost-of-electricity-by-country






Re: continuous integrations pipeline frameworks

2024-05-06 Thread Simon Josefsson via Gnulib discussion list
Bruno Haible  writes:

> Simon Josefsson wrote:
>> I forgot to mention: the pattern to provide re-usable GitLab CI/CD
>> definitions that I'm inspired by is Debian's pipeline project:
>> 
>> https://salsa.debian.org/salsa-ci-team/pipeline/
>> 
>> It is easy to setup a new project to use their reusable pipeline -- just
>> add the CI/CD configuration file setting pointing to their job file --
>> and gives a broad configurable and tweakable pipeline.
>
> Sorry if this sounds negative, but
>
>   - So far, I've loved to adapt my CIs as needed. For example, one package
> has a number of --with options, so my CI first builds without these
> --with options, then installs the extra Debian packages and builds a
> second time with these --with options. I don't think that any
> pipeline framework can give me this possibility without causing
> massive hurdles.
>
>   - With such frameworks, documentation is key.

Yes, any reusable system will need to support additional system packages
and ./configure flags and so on.

>> I'm thinking we could do the same but for any project using gnulib.
>> Within some reasonable limit and assumptions, but the majority of
>> projects mentioned already are similar enough for this to be possible
>> relatively easily.
>> 
>> I'm thinking it should be sufficient to add gnu-ci.yml@gnulib/pipeline
>> (or similar) as a CI/CD configuration file setting to achieve this.
>
> It's quite possible that with this approach, you can bring more GNU packages
> into the "we have CI" camp.
>
> I wouldn't like to switch to such a framework, though, because I'm already
> too much of an expert in GitLab CI.

Right -- the key to this working well is that no switch should be
necessary.  Written properly, you add one 'include' to your existing job
definition file, and that enables the functionality on an opt-in basis.
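
A sketch of what such an opt-in could look like in a project's existing
.gitlab-ci.yml (the project path, file name and variable names below are
invented for illustration, not an existing interface):

# Hypothetical: pull in shared gnulib job templates and tweak them with
# per-project knobs such as extra distro packages and ./configure flags.
include:
  - project: 'gnulib/pipeline'
    file: 'gnu-ci.yml'

variables:
  GNU_CI_EXTRA_PACKAGES: "libgnutls28-dev"    # extra system packages
  GNU_CI_CONFIGURE_OPTIONS: "--with-gnutls"   # extra ./configure flags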

I'm also quite married to the job definition files I have, so it will
take time to surrender them, but I also realize that large chunks of those
files repeat a lot of the same code patterns.

It's an experiment; I'm not sure how well it will work out, having
started on this a couple of times before and failed...

/Simon




Re: continuous integrations

2024-05-06 Thread Simon Josefsson via Gnulib discussion list
Bruno Haible  writes:

>> I think the pattern of having the .gitlab-ci.yml outside of the core
>> Savannah project is a good one for those GNU projects who are not
>> embracing the GitLab platform.  Then GitLab-related stuff stays on the
>> GitLab platform and doesn't invade the core project.
>
> Yes, that's one reason I put the CI outside the main repository. The
> other reasons are:
>   - CIs will come and go over time. Whereas the source code is meant to
> be stable for > 20 years.
>   - Maintaining CIs is a different business than developing. It can be
> handled by different persons, with different skills.
>   - I had problems creating a git repository's mirror from Savannah at
> GitLab. If we can't have a GNU package's mirror at GitLab, and of
> course don't want to move the main repository away from Savannah,
> that was the only option.
>   - There is also the possibility of having CIs on other clouds, such as
> GitHub, Travis, etc. This is simpler if there is no mirroring
> in-between.

Right.  I also had trouble with Savannah git mirrors in the past, but
for the past year or so it has worked well.  So I like this pattern.

One of the few disadvantages with this approach that I've discovered is
that you don't get tight coupling of ci/cd script and the rest of the
repository.  This means that if you for some reason want to redo the
pipeline on commit X in say 5 years, you may have to find whatever old
commit of the CI pipeline job definition was used at the time and then
set that up to be able to run the pipeline.  If the pipeline definition
can be written to work with both current master git and 5 years old git,
then it will work fine, but it means more work to keep it tested.  I've
found this pattern useful once in a while, but it is not a strong
reason.

>> Then we can apply that group for free CI/CD minutes
>
> What do you mean by that? I've found GitLab's limit of 400 minutes per
> month, applied per top-level group, and see that GitHub does not have
> such a limit.

I have applied to this program for a couple of projects, and while it is
a manual process and takes some time, it will give you 50,000 compute
minutes per month:

https://about.gitlab.com/solutions/open-source/join/

By using a single project it would also be possible to purchase compute
minutes in bulk and have them apply to all sub-projects.  I've found
this to be fairly cheap compared to alternative cost of setting up and
maintaining runners on my own hardware.  I've found it to only be cost
effective to setup my own runners for platforms that gitlab doesn't
support natively, such as arm64 or ppc64el.

>> How about using https://gitlab.com/gnulib/ as a playground for these
>> ideas?  Then we can add sub-projects there for pipeline definitions, and
>> Savannah mirrors of other projects too.
>
> On GitLab, the 400 minutes limit is per top-level group. Therefore, it's
> better if, for each GNU package, we have a separate top-level group.

Now I understand why you went through that effort to create new projects!

>> If you can add 'jas' as
>> maintainer of the 'gnulib' group on GitLab
>
> Done.

Thank you.

>> I could add one project to
>> start work on writing re-usable pipeline definitions, and one example
>> project maybe for GNU InetUtils that would use these new re-usable
>> pipeline components to provide a CI/CD pipeline definition file.  I
>> could add some arm64/ppc64el builds of gnulib too.
>
> The usefulness of this step depends on how much it would reduce the
> frequency of the x86_64 runs (which currently are at 1/week). Most
> parts of Gnulib are not arch-specific; therefore I think the minutes
> are better invested in testing Alpine Linux, FreeBSD, OpenBSD, than
> arm64/ppc64el.

Yes, having more OSes is a good first step, but then having more
architectures than amd64 becomes relevant.

/Simon




Re: continuous integrations pipeline frameworks

2024-05-06 Thread Bruno Haible
Simon Josefsson wrote:
> I forgot to mention: the pattern to provide re-usable GitLab CI/CD
> definitions that I'm inspired by is Debian's pipeline project:
> 
> https://salsa.debian.org/salsa-ci-team/pipeline/
> 
> It is easy to setup a new project to use their reusable pipeline -- just
> add the CI/CD configuration file setting pointing to their job file --
> and gives a broad configurable and tweakable pipeline.

Sorry if this sounds negative, but

  - So far, I've loved to adapt my CIs as needed. For example, one package
has a number of --with options, so my CI first builds without these
--with options, then installs the extra Debian packages and builds a
second time with these --with options (see the sketch after this list).
I don't think that any pipeline framework can give me this possibility
without causing massive hurdles.

  - With such frameworks, documentation is key.
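
A minimal sketch of the two-pass job described in the first item (not the
actual CI; the package name "libfoo" and its --with-libfoo option are
invented):

# Build once without the optional feature, then install the extra Debian
# package and build a second time with it enabled.
build-with-and-without:
  image: debian:stable
  script:
    - apt-get update && apt-get -y install build-essential autoconf automake
    - ./configure && make && make check
    - apt-get -y install libfoo-dev
    - make distclean
    - ./configure --with-libfoo && make && make check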

> I'm thinking we could do the same but for any project using gnulib.
> Within some reasonable limit and assumptions, but the majority of
> projects mentioned already are similar enough for this to be possible
> relatively easily.
> 
> I'm thinking it should be sufficient to add gnu-ci.yml@gnulib/pipeline
> (or similar) as a CI/CD configuration file setting to achieve this.

It's quite possible that with this approach, you can bring more GNU packages
into the "we have CI" camp.

I wouldn't like to switch to such a framework, though, because I'm already
too much of an expert in GitLab CI.

Bruno






Re: continuous integrations

2024-05-06 Thread Bruno Haible
Hi Simon,

> These are useful pipelines with basic build testing!

My main uses of these CI pipelines are to
  - find regressions caused by commits in the respective packages,
  - find regressions caused by gnulib (despite upstream having gnulib
as a git submodule),
  - create snapshot tarballs for GNU m4 (a sketch of such a job follows this list).
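
A hypothetical sketch of that last use case (not the actual GNU m4 job; it
assumes a runner image that already has git and the autotools toolchain):

# Build from git and keep the generated tarball as a pipeline artifact.
snapshot-tarball:
  script:
    - ./bootstrap
    - ./configure
    - make distcheck
  artifacts:
    paths:
      - "*.tar.*"
    expire_in: 4 weeks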

> I think the pattern of having the .gitlab-ci.yml outside of the core
> Savannah project is a good one for those GNU projects who are not
> embracing the GitLab platform.  Then GitLab-related stuff stays on the
> GitLab platform and doesn't invade the core project.

Yes, that's one reason I put the CI outside the main repository. The
other reasons are:
  - CIs will come and go over time. Whereas the source code is meant to
be stable for > 20 years.
  - Maintaining CIs is a different business than developing. It can be
handled by different persons, with different skills.
  - I had problems creating a git repository's mirror from Savannah at
GitLab. If we can't have a GNU package's mirror at GitLab, and of
course don't want to move the main repository away from Savannah,
that was the only option.
  - There is also the possibility of having CIs on other clouds, such as
GitHub, Travis, etc. This is simpler if there is no mirroring
in-between.

> Would it make sense to collaborate on re-usable GitLab CI/CD pipeline
> definitions in a single GitLab project?  Then we can apply that group
> for free CI/CD minutes and get testing on macOS/Windows too. I have a
> shared GitLab runner for native arm64 and ppc64el building, and have
> wanted to setup NetBSD/OpenBSD/FreeBSD/etc GitLab runners too.

I am currently experimenting with CI on GitHub, with the immediate goal
of testing Gnulib's testdirs on various macOS versions. (The macOS machine
in the compilefarm is not up-to-date.) This should also give FreeBSD and
Solaris 11 testing.
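
A minimal sketch of such a workflow (illustrative only; the macOS runner
labels, Homebrew packages and gnulib modules below are just examples):

# Build a gnulib testdir on several macOS versions with GitHub Actions.
name: gnulib-macos-testdir
on: [push]
jobs:
  testdir:
    strategy:
      matrix:
        os: [macos-12, macos-13, macos-14]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - run: brew install autoconf automake gettext
      - run: ./gnulib-tool --create-testdir --dir=testdir1 getopt-gnu regex
      - run: cd testdir1 && ./configure && make && make check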

> Then we can apply that group for free CI/CD minutes

What do you mean by that? I've found GitLab's limit of 400 minutes per
month, applied per top-level group, and see that GitHub does not have such
a limit.

> How about using https://gitlab.com/gnulib/ as a playground for these
> ideas?  Then we can add sub-projects there for pipeline definitions, and
> Savannah mirrors of other projects too.

On GitLab, the 400 minutes limit is per top-level group. Therefore, it's
better if, for each GNU package, we have a separate top-level group.

> If you can add 'jas' as
> maintainer of the 'gnulib' group on GitLab

Done.

> I could add one project to
> start work on writing re-usable pipeline definitions, and one example
> project maybe for GNU InetUtils that would use these new re-usable
> pipeline components to provide a CI/CD pipeline definition file.  I
> could add some arm64/ppc64el builds of gnulib too.

The usefulness of this step depends on how much it would reduce the
frequency of the x86_64 runs (which currently are at 1/week). Most
parts of Gnulib are not arch-specific; therefore I think the minutes
are better invested in testing Alpine Linux, FreeBSD, OpenBSD, than
arm64/ppc64el.

Bruno






Re: gnulib-tool.py speeds up continuous integrations

2024-05-06 Thread Simon Josefsson via Gnulib discussion list
I forgot to mention: the pattern to provide re-usable GitLab CI/CD
definitions that I'm inspired by is Debian's pipeline project:

https://salsa.debian.org/salsa-ci-team/pipeline/

It is easy to set up a new project to use their reusable pipeline -- just
add the CI/CD configuration file setting pointing to their job file --
and it gives you a broad, configurable and tweakable pipeline.  Of course,
this is only for building Debian packages, so it has a narrow focus.
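
From the consuming project's side, the setup looks roughly like this (a
sketch only; the exact file names and the recommended recipe are in the
pipeline project's README):

# Either point the project's "CI/CD configuration file" setting at the
# shared definition (something like recipes/debian.yml@salsa-ci-team/pipeline),
# or include the shared job definitions from your own .gitlab-ci.yml:
include:
  - project: 'salsa-ci-team/pipeline'
    file: 'salsa-ci.yml'
  - project: 'salsa-ci-team/pipeline'
    file: 'pipeline-jobs.yml'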

I'm thinking we could do the same but for any project using gnulib.
Within some reasonable limits and assumptions, of course -- but the
majority of the projects mentioned are already similar enough for this to
be possible relatively easily.

I'm thinking it should be sufficient to add gnu-ci.yml@gnulib/pipeline
(or similar) as a CI/CD configuration file setting to achieve this.

/Simon




Re: gnulib-tool.py speeds up continuous integrations

2024-05-06 Thread Simon Josefsson via Gnulib discussion list
Bruno Haible  writes:

> gnulib-tool is used in many CI jobs. Just adding 'python3' to the
> prerequisites of such a job makes it run faster. Here are the execution
> times for a single run, before and after adding 'python3', for those
> CIs that I maintain or co-maintain. In minutes and seconds.
>
>                                                               Before   After
>
> https://gitlab.com/gnulib/gnulib-ci/-/pipelines                30:      11:
> https://gitlab.com/gnu-gettext/ci-distcheck/-/pipelines        36:      32:
> https://gitlab.com/gnu-poke/ci-distcheck/-/pipelines           18:40    18:24
> https://gitlab.com/gnu-libunistring/ci-distcheck/-/pipelines   11:25    09:16
> https://gitlab.com/gnu-diffutils/ci-distcheck/-/pipelines      07:21    06:27
> https://gitlab.com/gnu-grep/ci-distcheck/-/pipelines           06:51    06:08
> https://gitlab.com/gnu-m4/ci-distcheck/-/pipelines             06:46    05:44
> https://gitlab.com/gnu-sed/ci-distcheck/-/pipelines            05:28    04:39
> https://gitlab.com/gnu-gzip/ci-distcheck/-/pipelines           04:16    03:58
> https://gitlab.com/gnu-libffcall/ci-distcheck/-/pipelines      01:50    01:42
> https://gitlab.com/gnu-libsigsegv/ci-distcheck/-/pipelines     00:45    00:42
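
A minimal sketch of the kind of change behind these timings (not one of
the actual jobs above): installing python3 lets gnulib-tool use its much
faster Python implementation.  Package names assume a Debian-based image.

distcheck:
  image: debian:stable
  before_script:
    - apt-get update
    - apt-get -y install git autoconf automake make gcc texinfo python3
  script:
    - ./bootstrap
    - ./configure
    - make distcheck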

These are useful pipelines with basic build testing!  I help with a bunch
of others, listed below, to get broader OS/architecture-compatibility
testing.

https://gitlab.com/gsasl/inetutils/-/pipelines
https://gitlab.com/gsasl/gsasl/-/pipelines
https://gitlab.com/gsasl/shishi/-/pipelines
https://gitlab.com/gsasl/gss/-/pipelines
https://gitlab.com/libidn/libidn2/-/pipelines
https://gitlab.com/libidn/libidn/-/pipelines
https://gitlab.com/gnutls/libtasn1/-/pipelines

I think the pattern of having the .gitlab-ci.yml outside of the core
Savannah project is a good one for those GNU projects who are not
embracing the GitLab platform.  Then GitLab-related stuff stays on the
GitLab platform and doesn't invade the core project.

Would it make sense to collaborate on re-usable GitLab CI/CD pipeline
definitions in a single GitLab project?  Then we can apply that group
for free CI/CD minutes and get testing on macOS/Windows too.  I have a
shared GitLab runner for native arm64 and ppc64el building, and have
wanted to set up NetBSD/OpenBSD/FreeBSD/etc GitLab runners too.  Adding
a runner to a group is easy; adding it to multiple groups requires some
manual work and added resources on the runner.
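
For reference, jobs reach such a runner through tags, roughly like this
(the tag name is whatever the runner was registered with; "arm64" here is
just an example):

check-arm64:
  tags:
    - arm64
  script:
    - ./configure && make && make check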

How about using https://gitlab.com/gnulib/ as a playground for these
ideas?  Then we can add sub-projects there for pipeline definitions, and
Savannah mirrors of other projects too.  If you can add 'jas' as
maintainer of the 'gnulib' group on GitLab I could add one project to
start work on writing re-usable pipeline definitions, and one example
project maybe for GNU InetUtils that would use these new re-usable
pipeline components to provide a CI/CD pipeline definition file.  I
could add some arm64/ppc64el builds of gnulib too.

/Simon




Re: syntax-check reject u_char u_short u_int u_long

2024-05-06 Thread Simon Josefsson via Gnulib discussion list
Thanks for the +1, Bruno; I have pushed the commits below.  More history or
insight on how to think about use of these types would be great.  My
recollection was that these types were preferred for compatibility with
ancient C tools that didn't parse 'unsigned char' etc.

/Simon
From 2adbe3be9e278cfc66289bbd9c8c433db84d5ce4 Mon Sep 17 00:00:00 2001
From: Simon Josefsson 
Date: Mon, 6 May 2024 14:56:08 +0200
Subject: [PATCH 1/2] inet-ntop, inet-pton: Avoid obsolete u_char type.

* lib/inet_pton.c (inet_pton6): Use unsigned char instead of u_char.
* lib/inet_ntop.c: Doc fix.
---
 ChangeLog       | 6 ++++++
 lib/inet_ntop.c | 2 +-
 lib/inet_pton.c | 8 ++++----
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/ChangeLog b/ChangeLog
index 02ecbd341d..6b969dddbe 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,9 @@
+2024-05-06  Simon Josefsson  
+
+	inet-ntop, inet-pton: Avoid obsolete u_char type.
+	* lib/inet_pton.c (inet_pton6): Use unsigned char instead of u_char.
+	* lib/inet_ntop.c: Doc fix.
+
 2024-05-05  Bruno Haible  
 
 	gnulib-tool.py: Regenerate aclocal.m4 before using 'autoconf -t ...'.
diff --git a/lib/inet_ntop.c b/lib/inet_ntop.c
index 0a4ba20e0d..26089959da 100644
--- a/lib/inet_ntop.c
+++ b/lib/inet_ntop.c
@@ -117,7 +117,7 @@ inet_ntop (int af, const void *restrict src,
  *  'dst' (as a const)
  * notes:
  *  (1) uses no statics
- *  (2) takes a u_char* not an in_addr as input
+ *  (2) takes a 'unsigned char *' not an in_addr as input
  * author:
  *  Paul Vixie, 1996.
  */
diff --git a/lib/inet_pton.c b/lib/inet_pton.c
index 2d29608d47..3d35f37adf 100644
--- a/lib/inet_pton.c
+++ b/lib/inet_pton.c
@@ -217,8 +217,8 @@ inet_pton6 (const char *restrict src, unsigned char *restrict dst)
 }
   if (tp + NS_INT16SZ > endp)
 return (0);
-  *tp++ = (u_char) (val >> 8) & 0xff;
-  *tp++ = (u_char) val & 0xff;
+  *tp++ = (unsigned char) (val >> 8) & 0xff;
+  *tp++ = (unsigned char) val & 0xff;
   saw_xdigit = 0;
   val = 0;
   continue;
@@ -236,8 +236,8 @@ inet_pton6 (const char *restrict src, unsigned char *restrict dst)
 {
   if (tp + NS_INT16SZ > endp)
 return (0);
-  *tp++ = (u_char) (val >> 8) & 0xff;
-  *tp++ = (u_char) val & 0xff;
+  *tp++ = (unsigned char) (val >> 8) & 0xff;
+  *tp++ = (unsigned char) val & 0xff;
 }
   if (colonp != NULL)
 {
-- 
2.34.1

From aacceb6eff58eba91290d930ea9b8275699057cf Mon Sep 17 00:00:00 2001
From: Simon Josefsson 
Date: Mon, 6 May 2024 15:01:10 +0200
Subject: [PATCH 2/2] maintainer-makefile: Prohibit BSD4.3/SysV u_char etc
 types.

* top/maint.mk (sc_unsigned_char, sc_unsigned_short)
(sc_unsigned_int, sc_unsigned_long): Add.
---
 ChangeLog    |  6 ++++++
 top/maint.mk | 18 ++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/ChangeLog b/ChangeLog
index 6b969dddbe..54ac701a98 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,9 @@
+2024-05-06  Simon Josefsson  
+
+	maintainer-makefile: Prohibit BSD4.3/SysV u_char etc types.
+	* top/maint.mk (sc_unsigned_char, sc_unsigned_short)
+	(sc_unsigned_int, sc_unsigned_long): Add.
+
 2024-05-06  Simon Josefsson  
 
 	inet-ntop, inet-pton: Avoid obsolete u_char type.
diff --git a/top/maint.mk b/top/maint.mk
index af865717c4..32228f4366 100644
--- a/top/maint.mk
+++ b/top/maint.mk
@@ -854,6 +854,24 @@ sc_obsolete_symbols:
 	halt='do not use HAVE''_FCNTL_H or O'_NDELAY			\
 	  $(_sc_search_regexp)
 
+# Prohibit BSD4.3/SysV u_char, u_short, u_int and u_long usage.
+sc_unsigned_char:
+	@prohibit=u''_char \
+	halt='don'\''t use u''_char; instead use unsigned char'	\
+	  $(_sc_search_regexp)
+sc_unsigned_short:
+	@prohibit=u''_short \
+	halt='don'\''t use u''_short; instead use unsigned short' \
+	  $(_sc_search_regexp)
+sc_unsigned_int:
+	@prohibit=u''_int \
+	halt='don'\''t use u''_int; instead use unsigned int' \
+	  $(_sc_search_regexp)
+sc_unsigned_long:
+	@prohibit=u''_long \
+	halt='don'\''t use u''_long; instead use unsigned long'	\
+	  $(_sc_search_regexp)
+
 # FIXME: warn about definitions of EXIT_FAILURE, EXIT_SUCCESS, STREQ
 
 # Each nonempty ChangeLog line must start with a year number, or a TAB.
-- 
2.34.1





Re: syntax-check reject u_char u_short u_int u_long

2024-05-06 Thread Collin Funk
On 5/6/24 5:38 AM, Bruno Haible wrote:
>> sc_unsigned_int:
>> @prohibit=u''_int \
>> halt='don'\''t use u''_int; instead use unsigned int' \
>>   $(_sc_search_regexp)
> Sounds good to me. My only suggestion is to move the sc_unsigned_long
> rule after the sc_unsigned_int rule.
> 
>> The u_char/u_long/u_short/u_int idiom used to be common but today I
>> don't think any reasonable code should use it.
>
> I agree. For some time, Linux kernel headers used these types heavily,
> IIRC. But nowadays, they are nearly gone there as well.

I was trying to get rid of all of the old BSD types from Inetutils.
They seem somewhat portable, but since they are not standardized who
knows how many more years until they cause breakage. :)

Maybe it would be best to have a list of types to prohibit and then
construct a regular expression from it?  Something like the list of headers
for which we suggest changing #include "..." to #include <...>.

You would lose the suggested alternative (without doing extra work),
but you could configure the types as you wish without creating
multiple rules.

Off the top of my head, here are some old types that I can think of
that should generally be avoided, but exist in some old code:

 # BSD
 u_char, u_int, etc.
 u_int8_t, u_int16_t, etc.
 quad_t, u_quad_t
 qaddr_t
 # SysV maybe?
 ushort, ulong
 # Mach or BSD, both? Not sure.
 caddr_t, daddr_t

I think the *addr_t types were because prestandard C didn't do
implicit void conversions or something.

Collin



Re: syntax-check reject u_char u_short u_int u_long

2024-05-06 Thread Bruno Haible
Hi Simon,

> How about adding inetutils u_* syntax-checks to gnulib's maint.mk?
> 
> sc_unsigned_char:
> @prohibit=u''_char \
> halt='don'\''t use u''_char; instead use unsigned char' \
>   $(_sc_search_regexp)
> 
> sc_unsigned_long:
> @prohibit=u''_long \
> halt='don'\''t use u''_long; instead use unsigned long' \
>   $(_sc_search_regexp)
> 
> sc_unsigned_short:
> @prohibit=u''_short \
> halt='don'\''t use u''_short; instead use unsigned short' \
>   $(_sc_search_regexp)
> 
> sc_unsigned_int:
> @prohibit=u''_int \
> halt='don'\''t use u''_int; instead use unsigned int' \
>   $(_sc_search_regexp)

Sounds good to me. My only suggestion is to move the sc_unsigned_long
rule after the sc_unsigned_int rule.

> The u_char/u_long/u_short/u_int idiom used to be common but today I
> don't think any reasonable code should use it.

I agree. For some time, Linux kernel headers used these types heavily,
IIRC. But nowadays, they are nearly gone there as well.

> The only usage in gnulib is lib/inet_ntop.c and lib/inet_pton.c.  It
> seems u_char was removed in most places of the code except a few
> remaining type casts/comments:
> 
> lib/inet_ntop.c: *  (2) takes a u_char* not an in_addr as input
> lib/inet_pton.c:  *tp++ = (u_char) (val >> 8) & 0xff;
> lib/inet_pton.c:  *tp++ = (u_char) val & 0xff;
> lib/inet_pton.c:  *tp++ = (u_char) (val >> 8) & 0xff;
> lib/inet_pton.c:  *tp++ = (u_char) val & 0xff;

Feel free to modernize.

Bruno






syntax-check reject u_char u_short u_int u_long

2024-05-06 Thread Simon Josefsson via Gnulib discussion list
How about adding inetutils u_* syntax-checks to gnulib's maint.mk?

sc_unsigned_char:
@prohibit=u''_char \
halt='don'\''t use u''_char; instead use unsigned char' \
  $(_sc_search_regexp)

sc_unsigned_long:
@prohibit=u''_long \
halt='don'\''t use u''_long; instead use unsigned long' \
  $(_sc_search_regexp)

sc_unsigned_short:
@prohibit=u''_short \
halt='don'\''t use u''_short; instead use unsigned short' \
  $(_sc_search_regexp)

sc_unsigned_int:
@prohibit=u''_int \
halt='don'\''t use u''_int; instead use unsigned int' \
  $(_sc_search_regexp)


The u_char/u_long/u_short/u_int idiom used to be common but today I
don't think any reasonable code should use it.  Does anyone have more
background or opinions on this?

Glibc definitions:

/usr/include/features.h:
   __USE_MISC   Define things from 4.3BSD or System V Unix.
/usr/include/x86_64-linux-gnu/sys/types.h:
#ifdef  __USE_MISC
# ifndef __u_char_defined
typedef __u_char u_char;
typedef __u_short u_short;
typedef __u_int u_int;
typedef __u_long u_long;
typedef __quad_t quad_t;
typedef __u_quad_t u_quad_t;
typedef __fsid_t fsid_t;
#  define __u_char_defined
# endif
typedef __loff_t loff_t;
#endif
/usr/include/x86_64-linux-gnu/bits/types.h:
/* Convenience types.  */
typedef unsigned char __u_char;
typedef unsigned short int __u_short;
typedef unsigned int __u_int;
typedef unsigned long int __u_long;

The only usage in gnulib is lib/inet_ntop.c and lib/inet_pton.c.  It
seems u_char was removed in most places of the code except a few
remaining type casts/comments:

lib/inet_ntop.c: *  (2) takes a u_char* not an in_addr as input
lib/inet_pton.c:  *tp++ = (u_char) (val >> 8) & 0xff;
lib/inet_pton.c:  *tp++ = (u_char) val & 0xff;
lib/inet_pton.c:  *tp++ = (u_char) (val >> 8) & 0xff;
lib/inet_pton.c:  *tp++ = (u_char) val & 0xff;

/Simon

