On 2017-05-23 10:11:35 +0700, Robert Elz wrote:
>     Date:        Tue, 23 May 2017 02:10:23 +0200
>     From:        Vincent Lefevre <vincent-o...@vinc17.net>
>     Message-ID:  <20170523001023.ga19...@zira.vinc17.org>
> 
>   | If the intent were to have "int" everywhere related to sizes in the
>   | printf context, then why is the second argument of snprintf of type
>   | size_t instead of int?
> 
> I doubt anyone really ever had a specific intent, the implementations just
> use "int" because anything bigger than 32000 is absurd anyway...

I don't see why it's absurd. I haven't mentioned it yet, but the
context in which I asked this is the generalization in GNU MPFR
(mpfr_printf...). GNU MPFR can be used to do computations on numbers
with billions of digits, so output sizes over INT_MAX may be
regarded as normal behavior. The question is whether we should
support this in MPFR (some users might want such sizes) or not. This
is directly related to the standard printf family functions because
the MPFR ones are based on them. For instance, if in some mpfr_printf
function a size larger than INT_MAX worked for the MPFR types (a
floating-point number with a huge precision) but not for %s, this
would be disturbing.

> It would be something of a surprise though if the size expected for
> the value obtained from a '*' and the values that could be handled
> via inline coding, were different - if larger values can be placed
> directly in the string, we'd have people going back to dynamically
> building format strings again, and that is really not something to
> be encouraged.

I agree, this is awkward.

> snprintf was invented later, its 2nd arg is typically the result from
> sizeof (sometimes strlen), and hence is a size_t - not because anyone
> expected to have sizes that might not fit an int but would in a size_t.

I disagree here. I would say that one of the possible uses of
snprintf is to call it first with 0 as the second argument (where
the literal 0 is an int) to measure the output, then, after
allocating a buffer, to call it a second time with the result of
the first call, which is also an int. So an int would really make
sense... unless one wants some type that is typically bigger
(such as size_t).
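To make the type mismatch concrete, here is a minimal sketch of that two-call idiom. The helper name `format_double` is mine, purely for illustration; note how the measuring call returns an int while the second argument of snprintf is a size_t.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: format a double into a freshly allocated
 * string using the two-call snprintf idiom.  The first call, with
 * a null buffer and size 0, only computes the needed length; its
 * result is an int, which then feeds the size_t second argument
 * of the real call. */
char *format_double(double x)
{
    int needed = snprintf(NULL, 0, "%.17g", x);  /* measure only */
    if (needed < 0)
        return NULL;                             /* encoding error */

    char *buf = malloc((size_t)needed + 1);      /* +1 for '\0' */
    if (buf == NULL)
        return NULL;

    snprintf(buf, (size_t)needed + 1, "%.17g", x);
    return buf;
}
```

Usage would be `char *s = format_double(0.5);` followed by `free(s);`. The int/size_t mix is exactly the asymmetry under discussion: the measured length can never exceed INT_MAX, yet the buffer-size parameter can.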

> I have no opinion on whether processing should stop when an overflow
> occurs (however big the inline field width is supposed to be, some
> user can always write a bigger number, so it is always possible), or
> whether it should just continue (seems like a C std issue, rather
> than one for here.)

If I understand the C standard correctly, an overflow on the return
value is just undefined behavior, so there isn't much to discuss.
But POSIX specifies part of the behavior: a negative value is
returned and errno is set to EOVERFLOW. That's why I posted the
question here.
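A caller relying on the POSIX-specified behavior would check both the sign of the result and errno. This is only a sketch of that pattern (the wrapper name `checked_length` is mine); actually triggering EOVERFLOW requires output longer than INT_MAX bytes, which this example does not attempt.

```c
#include <errno.h>
#include <stdio.h>

/* Hypothetical wrapper: compute the formatted length of a string
 * argument, distinguishing the POSIX EOVERFLOW case (the value to
 * be returned would not fit in an int) from other errors.
 * Returns the length, or -1 on any failure. */
int checked_length(const char *fmt, const char *arg)
{
    errno = 0;
    int len = snprintf(NULL, 0, fmt, arg);
    if (len < 0) {
        if (errno == EOVERFLOW)
            fprintf(stderr, "formatted output exceeds INT_MAX\n");
        return -1;
    }
    return len;
}
```

Under ISO C alone, the `len < 0 && errno == EOVERFLOW` branch is not guaranteed to be reachable in that form, which is the difference between the two specifications being discussed.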

-- 
Vincent Lefèvre <vinc...@vinc17.net> - Web: <https://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)
