Re: arm64 builds?

2020-01-24 Thread Willy Tarreau
On Fri, Jan 24, 2020 at 10:31:14AM +0100, Willy Tarreau wrote:
> This is really annoying, so much time spent dealing with bogus warnings
> is really irritating. I'm going to work around it with a cast to unsigned
> short.
> 
> The worst part of it is that if it emits such a warning, who knows how valid
> is the code that it will produce given that it doesn't even seem to figure
> the code path :-(

So I could isolate the minimum code to reproduce the bug:

#include <stdlib.h>

int nbt = 64;

struct srv {
	void *a[5];
	int **b;
};

void check_config_validity4(struct srv *srv, unsigned int *v)
{
	int i;

	for (i = 0; i < nbt; i++)
		(*v)++;

	srv->b = calloc(/*(unsigned int)*/nbt, sizeof(*srv->b));
	for (i = 0; i < nbt; i++)
		srv->b[i] = 0;
}
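
A hedged guess at how to reproduce it from the command line, assuming the
excerpt above is saved as repro.c (the warning needs optimization enabled
since it relies on value-range propagation, and a 32-bit target):

  gcc-8 -m32 -O2 -Wall -c repro.c
  # or with the arm cross-compiler used earlier in this thread:
  arm-linux-gnueabihf-gcc -O2 -Wall -c repro.c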

Just change the field ordering in "struct srv" or the size of field "a"
and the bug disappears. Remove either of the loops or make it not depend on
nbt and the bug disappears. Casting nbt to unsigned has no effect. It
always emits the same warning as shown, and only on 32-bit targets.

When I have a bit of spare time I may file a bug against gcc with this
excerpt, unless someone wants to handle it, of course. Even once fixed it
will probably not trickle down to distros given that it's not critical, so
we'll have to work around it as best we can.
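
For the record, here is a sketch of what that workaround looks like at the
call site discussed in this thread (not necessarily the exact committed
patch); the unsigned short cast merely narrows the range gcc assumes for
argument 1, at the cost of an artificial 65535 ceiling:

  /* shut up -Walloc-size-larger-than= by bounding the assumed range */
  newsrv->idle_orphan_conns = calloc((unsigned short)global.nbthread,
                                     sizeof(*newsrv->idle_orphan_conns));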

Willy



Re: arm64 builds?

2020-01-24 Thread Willy Tarreau
On Fri, Jan 24, 2020 at 07:12:49AM +0500, Илья Шипицин wrote:
> > it was ubuntu 18.04 + gcc8, I'll try 19.10 + gcc9
> >
> 
> gcc9 produces the same warning

OK, I've put my hands on an ubuntu-18.04 + gcc8 32-bit setup, and now I can
confirm it definitely is a gcc bug. The code is the same as a few lines
above. The cast was added once already to shut it up, but the warning
re-appears. And if you just change the condition in the "for" loop a few
lines above that *walks* through global.nbthread, the warning disappears.
Gcc seems to believe that we cannot reach that point if we enter the loop,
which is obviously wrong.

This is really annoying; so much time spent dealing with bogus warnings
is irritating. I'm going to work around it with a cast to unsigned
short.

The worst part is that if it emits such a warning, who knows how valid
the code it produces will be, given that it doesn't even seem to figure
out the code path :-(

Willy



Re: arm64 builds?

2020-01-23 Thread Willy Tarreau
On Fri, Jan 24, 2020 at 07:12:49AM +0500, Илья Шипицин wrote:
> Fri, Jan 24, 2020 at 01:04, Илья Шипицин :
> 
> >
> >
> > ??, 24 ???. 2020 ?. ? 00:54, Willy Tarreau :
> >
> >> On Fri, Jan 24, 2020 at 12:46:12AM +0500, Илья Шипицин wrote:
> >> > > diff --git a/Makefile b/Makefile
> >> > > index 8399f6ca35..4757bc77e6 100644
> >> > > --- a/Makefile
> >> > > +++ b/Makefile
> >> > > @@ -199,6 +199,7 @@ SPEC_CFLAGS += $(call
> >> cc-opt,-Wshift-negative-value)
> >> > >  SPEC_CFLAGS += $(call cc-opt,-Wshift-overflow=2)
> >> > >  SPEC_CFLAGS += $(call cc-opt,-Wduplicated-cond)
> >> > >  SPEC_CFLAGS += $(call cc-opt,-Wnull-dereference)
> >> > > +SPEC_CFLAGS += $(call cc-opt,-Walloc-size-larger-than=-1)
> >> > >
> >> >
> >> >
> >> >   CC  src/cfgparse.o
> >> > src/cfgparse.c: In function 'check_config_validity':
> >> > src/cfgparse.c:3642:33: warning: product '2147483648 * 8' of arguments 1
> >> > and 2 exceeds 'SIZE_MAX' [-Walloc-size-larger-than=]
> >>
> >> Pfff The only good news is that it takes -1 as SIZE_MAX.
> >>
> >> >  newsrv->idle_orphan_conns = calloc((unsigned int)global.nbthread,
> >> > sizeof(*newsrv->idle_orphan_conns));
> >> >
> >> >
> >> ^
> >> > src/cfgparse.c:3642:33: note: argument 1 in the range [2147483648,
> >> > 4294967295]
> >> (...)
> >> > why is it complaining about "product '2147483648 * 8" ?
> >>
> >> because calloc multiplies the two fields and gcc decided that the largest
> >> value we could possibly pass to the first one if we were as stupid as it
> >> is is 2147483648. Interestingly it took the largest negative value turned
> >> to positive and ignored the positive ones that can be turned to the second
> >> half that are negative if nbthread was negative.
> >>
> >> I really think this test is totally bogus and that there is no way to
> >> express it correctly. I mean, gcc only lets us use 8, 16, 32 or 64 bits.
> >> If you need to calloc a few megabytes, you'll be forced to apply a mask
> >> to the value just to shut it up, and *create* the overflow problem
> >> yourself
> >> when it didn't exist.
> >>
> >> Let's give up on this one if it doesn't cause too much trouble to you.
> >> Otherwise we might cheat doing this :
> >>
> >> calloc((unsigned short)global.nbthread, ...)
> >>
> >> But I really despise this given that we have to make the code wrong just
> >> to please this shitty compiler.
> >>
> >
> >
> > it was ubuntu 18.04 + gcc8, I'll try 19.10 + gcc9
> >
> 
> gcc9 produces the same warning

OK thanks.

Willy



Re: arm64 builds?

2020-01-23 Thread Илья Шипицин
Fri, Jan 24, 2020 at 01:04, Илья Шипицин :

>
>
> Fri, Jan 24, 2020 at 00:54, Willy Tarreau :
>
>> On Fri, Jan 24, 2020 at 12:46:12AM +0500, Илья Шипицин wrote:
>> > > diff --git a/Makefile b/Makefile
>> > > index 8399f6ca35..4757bc77e6 100644
>> > > --- a/Makefile
>> > > +++ b/Makefile
>> > > @@ -199,6 +199,7 @@ SPEC_CFLAGS += $(call
>> cc-opt,-Wshift-negative-value)
>> > >  SPEC_CFLAGS += $(call cc-opt,-Wshift-overflow=2)
>> > >  SPEC_CFLAGS += $(call cc-opt,-Wduplicated-cond)
>> > >  SPEC_CFLAGS += $(call cc-opt,-Wnull-dereference)
>> > > +SPEC_CFLAGS += $(call cc-opt,-Walloc-size-larger-than=-1)
>> > >
>> >
>> >
>> >   CC  src/cfgparse.o
>> > src/cfgparse.c: In function 'check_config_validity':
>> > src/cfgparse.c:3642:33: warning: product '2147483648 * 8' of arguments 1
>> > and 2 exceeds 'SIZE_MAX' [-Walloc-size-larger-than=]
>>
>> Pfff The only good news is that it takes -1 as SIZE_MAX.
>>
>> >  newsrv->idle_orphan_conns = calloc((unsigned int)global.nbthread,
>> > sizeof(*newsrv->idle_orphan_conns));
>> >
>> >
>> ^
>> > src/cfgparse.c:3642:33: note: argument 1 in the range [2147483648,
>> > 4294967295]
>> (...)
>> > why is it complaining about "product '2147483648 * 8" ?
>>
>> because calloc multiplies the two fields and gcc decided that the largest
>> value we could possibly pass to the first one if we were as stupid as it
>> is is 2147483648. Interestingly it took the largest negative value turned
>> to positive and ignored the positive ones that can be turned to the second
>> half that are negative if nbthread was negative.
>>
>> I really think this test is totally bogus and that there is no way to
>> express it correctly. I mean, gcc only lets us use 8, 16, 32 or 64 bits.
>> If you need to calloc a few megabytes, you'll be forced to apply a mask
>> to the value just to shut it up, and *create* the overflow problem
>> yourself
>> when it didn't exist.
>>
>> Let's give up on this one if it doesn't cause too much trouble to you.
>> Otherwise we might cheat doing this :
>>
>> calloc((unsigned short)global.nbthread, ...)
>>
>> But I really despise this given that we have to make the code wrong just
>> to please this shitty compiler.
>>
>
>
> it was ubuntu 18.04 + gcc8, I'll try 19.10 + gcc9
>

gcc9 produces the same warning


>
>
>>
>> Willy
>>
>


Re: arm64 builds?

2020-01-23 Thread Илья Шипицин
Fri, Jan 24, 2020 at 00:54, Willy Tarreau :

> On Fri, Jan 24, 2020 at 12:46:12AM +0500, Илья Шипицин wrote:
> > > diff --git a/Makefile b/Makefile
> > > index 8399f6ca35..4757bc77e6 100644
> > > --- a/Makefile
> > > +++ b/Makefile
> > > @@ -199,6 +199,7 @@ SPEC_CFLAGS += $(call
> cc-opt,-Wshift-negative-value)
> > >  SPEC_CFLAGS += $(call cc-opt,-Wshift-overflow=2)
> > >  SPEC_CFLAGS += $(call cc-opt,-Wduplicated-cond)
> > >  SPEC_CFLAGS += $(call cc-opt,-Wnull-dereference)
> > > +SPEC_CFLAGS += $(call cc-opt,-Walloc-size-larger-than=-1)
> > >
> >
> >
> >   CC  src/cfgparse.o
> > src/cfgparse.c: In function 'check_config_validity':
> > src/cfgparse.c:3642:33: warning: product '2147483648 * 8' of arguments 1
> > and 2 exceeds 'SIZE_MAX' [-Walloc-size-larger-than=]
>
> Pfff The only good news is that it takes -1 as SIZE_MAX.
>
> >  newsrv->idle_orphan_conns = calloc((unsigned int)global.nbthread,
> > sizeof(*newsrv->idle_orphan_conns));
> >
> >
> ^
> > src/cfgparse.c:3642:33: note: argument 1 in the range [2147483648,
> > 4294967295]
> (...)
> > why is it complaining about "product '2147483648 * 8" ?
>
> because calloc multiplies the two fields and gcc decided that the largest
> value we could possibly pass to the first one if we were as stupid as it
> is is 2147483648. Interestingly it took the largest negative value turned
> to positive and ignored the positive ones that can be turned to the second
> half that are negative if nbthread was negative.
>
> I really think this test is totally bogus and that there is no way to
> express it correctly. I mean, gcc only lets us use 8, 16, 32 or 64 bits.
> If you need to calloc a few megabytes, you'll be forced to apply a mask
> to the value just to shut it up, and *create* the overflow problem yourself
> when it didn't exist.
>
> Let's give up on this one if it doesn't cause too much trouble to you.
> Otherwise we might cheat doing this :
>
> calloc((unsigned short)global.nbthread, ...)
>
> But I really despise this given that we have to make the code wrong just
> to please this shitty compiler.
>


it was ubuntu 18.04 + gcc8, I'll try 19.10 + gcc9


>
> Willy
>


Re: arm64 builds?

2020-01-23 Thread Willy Tarreau
On Fri, Jan 24, 2020 at 12:46:12AM +0500, Илья Шипицин wrote:
> > diff --git a/Makefile b/Makefile
> > index 8399f6ca35..4757bc77e6 100644
> > --- a/Makefile
> > +++ b/Makefile
> > @@ -199,6 +199,7 @@ SPEC_CFLAGS += $(call cc-opt,-Wshift-negative-value)
> >  SPEC_CFLAGS += $(call cc-opt,-Wshift-overflow=2)
> >  SPEC_CFLAGS += $(call cc-opt,-Wduplicated-cond)
> >  SPEC_CFLAGS += $(call cc-opt,-Wnull-dereference)
> > +SPEC_CFLAGS += $(call cc-opt,-Walloc-size-larger-than=-1)
> >
> 
> 
>   CC  src/cfgparse.o
> src/cfgparse.c: In function 'check_config_validity':
> src/cfgparse.c:3642:33: warning: product '2147483648 * 8' of arguments 1
> and 2 exceeds 'SIZE_MAX' [-Walloc-size-larger-than=]

Pfff The only good news is that it takes -1 as SIZE_MAX.

>  newsrv->idle_orphan_conns = calloc((unsigned int)global.nbthread,
> sizeof(*newsrv->idle_orphan_conns));
> 
>  ^
> src/cfgparse.c:3642:33: note: argument 1 in the range [2147483648,
> 4294967295]
(...)
> why is it complaining about "product '2147483648 * 8" ?

because calloc multiplies the two fields and gcc decided that the largest
value we could possibly pass to the first one, if we were as stupid as it
is, is 2147483648. Interestingly, it took the largest negative value turned
positive and ignored the positive ones, keeping only the second half of
the range, i.e. the values that are negative if nbthread is negative.

I really think this test is totally bogus and that there is no way to
express it correctly. I mean, gcc only lets us use 8, 16, 32 or 64 bits.
If you need to calloc a few megabytes, you'll be forced to apply a mask
to the value just to shut it up, and *create* the overflow problem yourself
when it didn't exist.
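
As an aside, here is a small standalone sketch (illustrative only, not
HAProxy code) of the overflow that calloc() itself already guards against,
which is what this warning is clumsily trying to diagnose:

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      size_t nmemb = (size_t)-1 / 4 + 1;  /* deliberately huge element count */

      void *p = calloc(nmemb, 8);   /* calloc detects nmemb * 8 overflowing */
      printf("calloc: %p\n", p);    /* and returns NULL */

      void *q = malloc(nmemb * 8);  /* an open-coded product silently wraps */
      printf("malloc: %p\n", q);    /* and may "succeed" with a tiny block */

      free(p);
      free(q);
      return 0;
  }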

Let's give up on this one if it doesn't cause too much trouble to you.
Otherwise we might cheat doing this :

calloc((unsigned short)global.nbthread, ...)

But I really despise this given that we have to make the code wrong just
to please this shitty compiler.

Willy



Re: arm64 builds?

2020-01-23 Thread Илья Шипицин
Thu, Jan 23, 2020 at 23:17, Willy Tarreau :

> On Thu, Jan 23, 2020 at 10:59:00PM +0500, Илья Шипицин wrote:
> > > Could you please try to use (size_t) instead of (unsigned int) ? If
> it's
> > > enough to shut it up, I'm fine with doing that change. Otherwise we'll
> > > probably get rid of that stupid warning.
> > >
> >
> >   CC  src/server.o
> >   CC  src/cfgparse.o
> > src/cfgparse.c: In function 'check_config_validity':
> > src/cfgparse.c:3642:33: warning: argument 1 range [2147483648,
> 4294967295]
> > exceeds maximum object size 2147483647 [-Walloc-size-larger-than=]
> >  newsrv->idle_orphan_conns = calloc((size_t)global.nbthread,
> > sizeof(*newsrv->idle_orphan_conns));
> >
> >  ^~~
> > In file included from src/cfgparse.c:24:
> > /usr/arm-linux-gnueabihf/include/stdlib.h:541:14: note: in a call to
> > allocation function 'calloc' declared here
> >  extern void *calloc (size_t __nmemb, size_t __size)
> >   ^~
> >   CC  src/checks.o
> > ^CMakefile:846: recipe for target 'src/checks.o' failed
> > make: *** [src/checks.o] Interrupt
>
> Thanks for the test! So basically this clearly proves we respect the
> calling convention but the compiler still complains. OK I'm seeing in
> the mad that it's for functions "decorated" with the "alloc_size"
> attribute. Thus in short they enforce constraints that cannot be
> expressed with available types. This is becoming totally ridiculous.
>
> We're getting a huge collection of stupid warnings to shut up. I
> suspect that the sum of all the code written to shut them is larger
> than what haproxy 1.0 used to be :-/
>
> The man page says we can disable this crap using
> -Walloc-size-larger-than=SIZE_MAX but on my compiler (8.2) it simply
> fails to build. However when passing explicit values not even that
> large, I don't get any such warning, so I'm starting to think that
> it's a real bug of GCC 9.2, because quite frankly, aside calling
> calloc() with a char or short in argument, there's almost no way out
> of this absurdity.
>
> For me, calling it with -Walloc-size-larger-than=-1 makes it stay
> silent. is it also the case for you ? You can patch your Makefile this
> way:
>
> diff --git a/Makefile b/Makefile
> index 8399f6ca35..4757bc77e6 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -199,6 +199,7 @@ SPEC_CFLAGS += $(call cc-opt,-Wshift-negative-value)
>  SPEC_CFLAGS += $(call cc-opt,-Wshift-overflow=2)
>  SPEC_CFLAGS += $(call cc-opt,-Wduplicated-cond)
>  SPEC_CFLAGS += $(call cc-opt,-Wnull-dereference)
> +SPEC_CFLAGS += $(call cc-opt,-Walloc-size-larger-than=-1)
>


  CC  src/cfgparse.o
src/cfgparse.c: In function ‘check_config_validity’:
src/cfgparse.c:3642:33: warning: product ‘2147483648 * 8’ of arguments 1
and 2 exceeds ‘SIZE_MAX’ [-Walloc-size-larger-than=]
 newsrv->idle_orphan_conns = calloc((unsigned int)global.nbthread,
sizeof(*newsrv->idle_orphan_conns));

 ^
src/cfgparse.c:3642:33: note: argument 1 in the range [2147483648,
4294967295]
In file included from src/cfgparse.c:24:
/usr/arm-linux-gnueabihf/include/stdlib.h:541:14: note: in a call to
allocation function ‘calloc’ declared here
 extern void *calloc (size_t __nmemb, size_t __size)
  ^~
  CC  src/checks.o
  CC  src/backend.o




why is it complaining about "product ‘2147483648 * 8" ?




>
>   Memory usage tuning
>  # If small memory footprint is required, you can reduce the buffer size.
> There
>
> If it still fails, we'll have to ignore it and wait for this monstrosity
> to be fixed by their authors :-/
>
> Willy
>


Re: arm64 builds?

2020-01-23 Thread Willy Tarreau
On Thu, Jan 23, 2020 at 07:41:11PM +0100, Tim Düsterhus wrote:
> Willy,
> 
> On 23.01.20 at 19:17, Willy Tarreau wrote:
> > Thanks for the test! So basically this clearly proves we respect the
> > calling convention but the compiler still complains. OK I'm seeing in
> > the mad that it's for functions "decorated" with the "alloc_size"
> > attribute. Thus in short they enforce constraints that cannot be
> > expressed with available types. This is becoming totally ridiculous.
> 
> Googling for that error message says something about the *result*
> needing to fit into ptrdiff_t and also something about negative numbers.
> 
> Maybe it is sufficient to change `nbthread` from `int` to `unsigned
> int`?

That's essentially what the cast did. Also, we don't do it because I seem
to remember that we used to use the value -1 to mean "unspecified" during
parsing.

> Otherwise the compiler assumes that a negative nbthread casted to
> an unsigned value becomes too large.

No, the reality is only about the possible extent of the value. By
turning it to unsigned, it would still be allowed to cover 0..2^32-1.
It turns out the compiler knows nothing about this value and just says
"if you did something wrong, it could theoretically do that". That's
exactly the problem with this type of warning: it's utter shit
because it assumes the compiler knows better than the developer what
the possible range of his numbers is, but the compiler doesn't even
look at where the values are assigned and knows nothing, so it only
relies on types.

Anyway it's obvious that stupid gcc warnings started to appear after
gcc developers stopped using it for themselves by rewriting it in C++,
so they don't care if they piss off their users. Possibly some of them
don't even read C anymore since they don't need to :-/

Willy



Re: arm64 builds?

2020-01-23 Thread Tim Düsterhus
Willy,

On 23.01.20 at 19:17, Willy Tarreau wrote:
> Thanks for the test! So basically this clearly proves we respect the
> calling convention but the compiler still complains. OK I'm seeing in
> the mad that it's for functions "decorated" with the "alloc_size"
> attribute. Thus in short they enforce constraints that cannot be
> expressed with available types. This is becoming totally ridiculous.

Googling for that error message says something about the *result*
needing to fit into ptrdiff_t and also something about negative numbers.

Maybe it is sufficient to change `nbthread` from `int` to `unsigned
int`? Otherwise the compiler assumes that a negative nbthread cast to
an unsigned value becomes too large. I didn't check whether the code
assumes that nbthread can be negative anywhere.

Best regards
Tim Düsterhus



Re: arm64 builds?

2020-01-23 Thread Willy Tarreau
On Thu, Jan 23, 2020 at 10:59:00PM +0500, Илья Шипицин wrote:
> > Could you please try to use (size_t) instead of (unsigned int) ? If it's
> > enough to shut it up, I'm fine with doing that change. Otherwise we'll
> > probably get rid of that stupid warning.
> >
> 
>   CC  src/server.o
>   CC  src/cfgparse.o
> src/cfgparse.c: In function 'check_config_validity':
> src/cfgparse.c:3642:33: warning: argument 1 range [2147483648, 4294967295]
> exceeds maximum object size 2147483647 [-Walloc-size-larger-than=]
>  newsrv->idle_orphan_conns = calloc((size_t)global.nbthread,
> sizeof(*newsrv->idle_orphan_conns));
> 
>  ^~~
> In file included from src/cfgparse.c:24:
> /usr/arm-linux-gnueabihf/include/stdlib.h:541:14: note: in a call to
> allocation function 'calloc' declared here
>  extern void *calloc (size_t __nmemb, size_t __size)
>   ^~
>   CC  src/checks.o
> ^CMakefile:846: recipe for target 'src/checks.o' failed
> make: *** [src/checks.o] Interrupt

Thanks for the test! So basically this clearly proves we respect the
calling convention but the compiler still complains. OK I'm seeing in
the man page that it's for functions "decorated" with the "alloc_size"
attribute. Thus in short they enforce constraints that cannot be
expressed with available types. This is becoming totally ridiculous.
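
For context, this is roughly how glibc decorates the allocator (a
simplified sketch, not the verbatim header, which goes through internal
macros):

  extern void *calloc(size_t __nmemb, size_t __size)
          __attribute__((__malloc__, __alloc_size__(1, 2)));

It is this alloc_size(1, 2) annotation that lets gcc reason about the
product of the two arguments and emit -Walloc-size-larger-than=.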

We're getting a huge collection of stupid warnings to shut up. I
suspect that the sum of all the code written to shut them is larger
than what haproxy 1.0 used to be :-/

The man page says we can disable this crap using
-Walloc-size-larger-than=SIZE_MAX but on my compiler (8.2) it simply
fails to build. However when passing explicit values not even that
large, I don't get any such warning, so I'm starting to think that
it's a real bug of GCC 9.2, because quite frankly, aside from calling
calloc() with a char or short argument, there's almost no way out
of this absurdity.

For me, calling it with -Walloc-size-larger-than=-1 makes it stay
silent. Is it also the case for you? You can patch your Makefile this
way:

diff --git a/Makefile b/Makefile
index 8399f6ca35..4757bc77e6 100644
--- a/Makefile
+++ b/Makefile
@@ -199,6 +199,7 @@ SPEC_CFLAGS += $(call cc-opt,-Wshift-negative-value)
 SPEC_CFLAGS += $(call cc-opt,-Wshift-overflow=2)
 SPEC_CFLAGS += $(call cc-opt,-Wduplicated-cond)
 SPEC_CFLAGS += $(call cc-opt,-Wnull-dereference)
+SPEC_CFLAGS += $(call cc-opt,-Walloc-size-larger-than=-1)
 
  Memory usage tuning
 # If small memory footprint is required, you can reduce the buffer size. There

If it still fails, we'll have to ignore it and wait for this monstrosity
to be fixed by its authors :-/

Willy



Re: arm64 builds?

2020-01-23 Thread Илья Шипицин
Thu, Jan 23, 2020 at 15:14, Willy Tarreau :

> On Thu, Jan 23, 2020 at 01:09:22PM +0500, Илья Шипицин wrote:
> > hello,
> >
> > I tried to build using cross compiler (arm32 on amd64).  sorry for
> > screenshot.
> > Willy, do you mean errors like that ?
>
> So for those not seeing the screenshot, it says:
> warning: argument 1 range [2147483648, 4294967295] exceeds maximum object
> size 2147483647 :
>
>  new->idle_orphan_conns = calloc((unsigned int)global.nbthread,
> sizeof(*new->idle_orphan_conns));
>
> Thus the cast to unsigned int was likely placed there to shut it up
> once. Looking at the man page, it says:
>
>void *calloc(size_t nmemb, size_t size);
>
> size_t is unsigned (unless you're using an older gcc < 2.4 on a unusual
> OS). I don't see how it can be said to have a maximum size of 2147483647.
>
> This is exactly the type of stupid warnings that causes us to add casts
> that hide real bugs :-/
>
> Could you please try to use (size_t) instead of (unsigned int) ? If it's
> enough to shut it up, I'm fine with doing that change. Otherwise we'll
> probably get rid of that stupid warning.
>

  CC  src/server.o
  CC  src/cfgparse.o
src/cfgparse.c: In function ‘check_config_validity’:
src/cfgparse.c:3642:33: warning: argument 1 range [2147483648, 4294967295]
exceeds maximum object size 2147483647 [-Walloc-size-larger-than=]
 newsrv->idle_orphan_conns = calloc((size_t)global.nbthread,
sizeof(*newsrv->idle_orphan_conns));

 ^~~
In file included from src/cfgparse.c:24:
/usr/arm-linux-gnueabihf/include/stdlib.h:541:14: note: in a call to
allocation function ‘calloc’ declared here
 extern void *calloc (size_t __nmemb, size_t __size)
  ^~
  CC  src/checks.o
^CMakefile:846: recipe for target 'src/checks.o' failed
make: *** [src/checks.o] Interrupt



>
> Thanks!
> Willy
>


Re: arm64 builds?

2020-01-23 Thread Willy Tarreau
On Thu, Jan 23, 2020 at 03:21:40PM +0500, Илья Шипицин wrote:
> also, can we treat warnings as errors for CI builds ? it would save a bunch
> of time, instead of
> looking at build log, we can watch for build status.

Yes definitely. You just have to add "ERR=1" to your "make" command line
and it will append -Werror. I do this on my builds as well. And I agree
it will save some time because quite frankly, analysing why a regtest
failed when we already have a build warning is pointless!
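
For example, with the cross-compiler used earlier in this thread, the CI
invocation would simply become something like:

  make TARGET=linux-glibc CC=arm-linux-gnueabihf-gcc ERR=1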

Willy



Re: arm64 builds?

2020-01-23 Thread Илья Шипицин
Thu, Jan 23, 2020 at 15:14, Willy Tarreau :

> On Thu, Jan 23, 2020 at 01:09:22PM +0500, Илья Шипицин wrote:
> > hello,
> >
> > I tried to build using cross compiler (arm32 on amd64).  sorry for
> > screenshot.
> > Willy, do you mean errors like that ?
>
> So for those not seeing the screenshot, it says:
> warning: argument 1 range [2147483648, 4294967295] exceeds maximum object
> size 2147483647 :
>
>  new->idle_orphan_conns = calloc((unsigned int)global.nbthread,
> sizeof(*new->idle_orphan_conns));
>
> Thus the cast to unsigned int was likely placed there to shut it up
> once. Looking at the man page, it says:
>
>void *calloc(size_t nmemb, size_t size);
>
> size_t is unsigned (unless you're using an older gcc < 2.4 on a unusual
> OS). I don't see how it can be said to have a maximum size of 2147483647.
>
> This is exactly the type of stupid warnings that causes us to add casts
> that hide real bugs :-/
>
> Could you please try to use (size_t) instead of (unsigned int) ? If it's
> enough to shut it up, I'm fine with doing that change. Otherwise we'll
> probably get rid of that stupid warning.
>

Yep.
I'll play with it a little bit, and hopefully we will add cross builds to
GitHub Actions.


Also, can we treat warnings as errors for CI builds? It would save a bunch
of time: instead of looking at the build log, we can watch the build status.


>
> Thanks!
> Willy
>


Re: arm64 builds?

2020-01-23 Thread Willy Tarreau
On Thu, Jan 23, 2020 at 01:09:22PM +0500, Илья Шипицин wrote:
> hello,
> 
> I tried to build using cross compiler (arm32 on amd64).  sorry for
> screenshot.
> Willy, do you mean errors like that ?

So for those not seeing the screenshot, it says:
warning: argument 1 range [2147483648, 4294967295] exceeds maximum object
size 2147483647 :

 new->idle_orphan_conns = calloc((unsigned int)global.nbthread, 
sizeof(*new->idle_orphan_conns));

Thus the cast to unsigned int was likely placed there to shut it up
once. Looking at the man page, it says:

   void *calloc(size_t nmemb, size_t size);

size_t is unsigned (unless you're using an older gcc < 2.4 on an unusual
OS). I don't see how it can be said to have a maximum size of 2147483647.

This is exactly the type of stupid warnings that causes us to add casts
that hide real bugs :-/

Could you please try to use (size_t) instead of (unsigned int) ? If it's
enough to shut it up, I'm fine with doing that change. Otherwise we'll
probably get rid of that stupid warning.

Thanks!
Willy



Re: arm64 builds?

2020-01-23 Thread Илья Шипицин
Hello,

I tried to build using a cross compiler (arm32 on amd64). Sorry for the
screenshot.
Willy, do you mean errors like that?


[image: Screenshot from 2020-01-23 13-03-49.png]


Wed, Nov 6, 2019 at 12:26, Willy Tarreau :

> Hi Ilya,
>
> On Tue, Nov 05, 2019 at 07:20:43PM +0500, Илья Шипицин wrote:
> > only arm64 are available.
> > we can try arm using cross build, for example.
>
> Then don't bother with this, it's likely then that they will not
> have all the environment available. Maybe even their hardware does
> not support arm32 at all. It was just a suggestion to try to optimize
> the solution but even arm64 is already nice to have.
>
> > is it common way to use cross build for building haproxy ?
>
> Yes it is. Actually I don't think I've built it natively for a very
> long time now, as even on my laptop I'm used to stick to the cross-
> compilers which are distributed on the distcc build farm running on
> the lab load generators :-)
>
> In practice just pass "CC=/path/to/gcc" and let it do its job. It will
> automatically use LD=$(CC) if you don't override it. You may have to
> pass PCRE_INC/PCRE_LIB, SSL_INC/SSL_LIB, LUA_INC/LUA_LIB but that's
> about all.
>
> Just for reference here's the command line I'm using when building
> (and a variant with -O0 which builds in 3.5 seconds). It may look
> large because I force all debugging options but it's in a script so
> I don't have to type it :-)
>
>PATH=/f/tc/x86_64-gcc47_glibc218-linux-gnu/bin:$PATH make -j 120
> TMPDIR=/dev/shm DISTCC_FALLBACK=0 DISTCC_SKIP_LOCAL_RETRY=0
> DISTCC_BACKOFF_PERIOD=0 DISTCC_PAUSE_TIME_MSEC=50
> DISTCC_HOSTS="--localslots_cpp=120 10.0.0.235/120,lzo"
> CC=/g/public/linux/master/x86_64-gcc47_glibc218-linux-gnu-gcc
> TARGET=linux-glibc DEP= USE_PCRE=1 PCREDIR=
> DEFINE="-DDEBUG_DONT_SHARE_POOLS -DDEBUG_MEMORY_POOLS -DDEBUG_UAF
> -DDEBUG_EXPR -DDEBUG_STRICT -DDEBUG_DEV" USE_OPENSSL=1 USE_ZLIB=1 USE_LUA=1
> LUA_LIB_NAME=lua EXTRA= USE_SLZ=1 SLZ_INC=/g/public/slz/src
> SLZ_LIB=/g/public/slz USE_ZLIB= USE_DEVICEATLAS=1
> DEVICEATLAS_SRC=contrib/deviceatlas USE_51DEGREES=1
> 51DEGREES_SRC=contrib/51d/src/trie USE_WURFL=1 WURFL_INC=contrib/wurfl
> WURFL_LIB=contrib/wurfl CPU_CFLAGS="-O2" "$@"
>
> When testing with various local openssl branches I do have another variant
> of this which uses the local, native pre-processor and linkers, and remote
> compilers. It's a bit ugly since they're not on the same version but in
> practice it works fine.
>
> Cheers,
> Willy
>


Re: arm64 builds?

2019-11-06 Thread Willy Tarreau
Hi Ionel,

On Mon, Nov 04, 2019 at 02:10:56PM +0100, GARDAIS Ionel wrote:
> FWIW, when I build for armhf (RaspberryPi 3b with Raspbian buster), I have to 
> add -latomic to the LD_FLAGS.

It's not the case for me but I had to add it for another one, I think
it was for the MIPS machine running in my build farm's controller. It's
usually easy to detect: if the linking fails on atomic operations, you
know you need it.

Willy



Re: arm64 builds?

2019-11-05 Thread Willy Tarreau
Hi Ilya,

On Tue, Nov 05, 2019 at 07:20:43PM +0500, Илья Шипицин wrote:
> only arm64 are available.
> we can try arm using cross build, for example.

Then don't bother with this; it's likely that they will not have the
whole environment available. Maybe their hardware does not even support
arm32 at all. It was just a suggestion to try to optimize the solution,
but even arm64 is already nice to have.

> is it common way to use cross build for building haproxy ?

Yes it is. Actually I don't think I've built it natively for a very
long time now, as even on my laptop I'm used to sticking to the cross-
compilers which are distributed on the distcc build farm running on
the lab load generators :-)

In practice just pass "CC=/path/to/gcc" and let it do its job. It will
automatically use LD=$(CC) if you don't override it. You may have to
pass PCRE_INC/PCRE_LIB, SSL_INC/SSL_LIB, LUA_INC/LUA_LIB but that's
about all.

Just for reference here's the command line I'm using when building
(and a variant with -O0 which builds in 3.5 seconds). It may look
large because I force all debugging options but it's in a script so
I don't have to type it :-)

   PATH=/f/tc/x86_64-gcc47_glibc218-linux-gnu/bin:$PATH make -j 120 
TMPDIR=/dev/shm DISTCC_FALLBACK=0 DISTCC_SKIP_LOCAL_RETRY=0 
DISTCC_BACKOFF_PERIOD=0 DISTCC_PAUSE_TIME_MSEC=50 
DISTCC_HOSTS="--localslots_cpp=120 10.0.0.235/120,lzo" 
CC=/g/public/linux/master/x86_64-gcc47_glibc218-linux-gnu-gcc 
TARGET=linux-glibc DEP= USE_PCRE=1 PCREDIR= DEFINE="-DDEBUG_DONT_SHARE_POOLS 
-DDEBUG_MEMORY_POOLS -DDEBUG_UAF -DDEBUG_EXPR -DDEBUG_STRICT -DDEBUG_DEV" 
USE_OPENSSL=1 USE_ZLIB=1 USE_LUA=1 LUA_LIB_NAME=lua EXTRA= USE_SLZ=1 
SLZ_INC=/g/public/slz/src SLZ_LIB=/g/public/slz USE_ZLIB= USE_DEVICEATLAS=1 
DEVICEATLAS_SRC=contrib/deviceatlas USE_51DEGREES=1 
51DEGREES_SRC=contrib/51d/src/trie USE_WURFL=1 WURFL_INC=contrib/wurfl 
WURFL_LIB=contrib/wurfl CPU_CFLAGS="-O2" "$@"

When testing with various local openssl branches I do have another variant
of this which uses the local, native pre-processor and linkers, and remote
compilers. It's a bit ugly since they're not on the same version but in
practice it works fine.

Cheers,
Willy



Re: arm64 builds?

2019-11-05 Thread Илья Шипицин
Mon, Nov 4, 2019 at 17:49, Willy Tarreau :

> Hi Ilya,
>
> On Mon, Nov 04, 2019 at 12:18:49AM +0500, Илья Шипицин wrote:
> > hello,
> >
> > should we switch some builds to arm64?
> >
> > https://blog.travis-ci.com/2019-10-07-multi-cpu-architecture-support
>
> Ah that's interesting! I frequently build for arm/arm64 but I agree
> it would be nice to have this done more frequently. The two main points
> I'm seeing are :
>   - unsigned chars, which occasionally trigger a warning somewhere
>   - non-x86 build, which can occasionally trigger a build error if
> we accidently rely on an arch-specific function
>
> In fact I would even suggest that we build for arm instead of arm64 so
> that we switch to 32 bits at the same time and have an opportunity to
> detect long vs int vs uint64t vs pointer vs size_t issues (typically
> places where printf uses "%lu" without a cast for uint64_t).
>


Only arm64 is available.
We can try arm using a cross build, for example. Is it a common way to
build haproxy?


>
> Thanks,
> Willy
>


Re: arm64 builds?

2019-11-04 Thread GARDAIS Ionel
Hi,

FWIW, when I build for armhf (RaspberryPi 3b with Raspbian buster), I have to 
add -latomic to the LD_FLAGS.
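
In case it helps others, a hedged sketch of such a build command (ADDLIB is
assumed here to be the Makefile hook for appending extra libraries;
appending -latomic to LDFLAGS as described above works as well):

  make TARGET=linux-glibc ADDLIB="-latomic"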

Ionel


- Original Message -
From: "Willy Tarreau" 
To: "Илья Шипицин" 
Cc: "haproxy" 
Sent: Monday, November 4, 2019 13:48:57
Subject: Re: arm64 builds?

Hi Ilya,

On Mon, Nov 04, 2019 at 12:18:49AM +0500, Илья Шипицин wrote:
> hello,
> 
> should we switch some builds to arm64?
> 
> https://blog.travis-ci.com/2019-10-07-multi-cpu-architecture-support

Ah that's interesting! I frequently build for arm/arm64 but I agree
it would be nice to have this done more frequently. The two main points
I'm seeing are:
  - unsigned chars, which occasionally trigger a warning somewhere
  - non-x86 build, which can occasionally trigger a build error if
we accidentally rely on an arch-specific function

In fact I would even suggest that we build for arm instead of arm64 so
that we switch to 32 bits at the same time and have an opportunity to
detect long vs int vs uint64_t vs pointer vs size_t issues (typically
places where printf uses "%lu" without a cast for uint64_t).
 
Thanks,
Willy
--
232 avenue Napoleon BONAPARTE 92500 RUEIL MALMAISON
Capital EUR 219 300,00 - RCS Nanterre B 408 832 301 - TVA FR 09 408 832 301




Re: arm64 builds?

2019-11-04 Thread Willy Tarreau
Hi Ilya,

On Mon, Nov 04, 2019 at 12:18:49AM +0500, Илья Шипицин wrote:
> hello,
> 
> should we switch some builds to arm64?
> 
> https://blog.travis-ci.com/2019-10-07-multi-cpu-architecture-support

Ah that's interesting! I frequently build for arm/arm64 but I agree
it would be nice to have this done more frequently. The two main points
I'm seeing are:
  - unsigned chars, which occasionally trigger a warning somewhere
  - non-x86 build, which can occasionally trigger a build error if
we accidentally rely on an arch-specific function

In fact I would even suggest that we build for arm instead of arm64 so
that we switch to 32 bits at the same time and have an opportunity to
detect long vs int vs uint64_t vs pointer vs size_t issues (typically
places where printf uses "%lu" without a cast for uint64_t).
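
A tiny illustrative sketch of that last point (not HAProxy code): on 32-bit
arm, long is 32 bits, so "%lu" applied to a uint64_t is wrong, whereas the
PRIu64 macro or an explicit cast stays portable:

  #include <stdio.h>
  #include <inttypes.h>

  int main(void)
  {
      uint64_t counter = 1ULL << 40;

      /* printf("%lu\n", counter);  breaks where long is 32 bits */
      printf("%" PRIu64 "\n", counter);               /* portable macro */
      printf("%llu\n", (unsigned long long)counter);  /* explicit cast */
      return 0;
  }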
 
Thanks,
Willy