On 29 Dec 2025, at 16:05, Robin Watts <[email protected]> wrote:
> 
> On 29/12/2025 11:08, Roger Leigh via Tiff wrote:
>> I mentioned this as a possibility during discussion
>> in mid December,
> 
> Please be aware that moving libtiff to C++ instead of C would cause problems 
> for various pieces of software that depend upon it.
> 
> The most obvious case, from my point of view, is Ghostscript.
> 
> We are careful to avoid any compulsory dependencies that depend upon
> C++, as we have to run on lots of systems where C++ presents a problem.

Please could you provide, for context, a list of the platforms and compiler versions which would be affected.

>> As a very brief bit of context and summary.  I’ve long been
>> unsatisfied with the base C API of libtiff.  It’s unsafe,
>> doesn’t support multi-threading well, is hard to use correctly,
>> is stateful where it doesn’t need to be, and has inconsistent
>> and hard to use error handling.
> 
> All of which can be solved without resorting to C++.

This isn’t the case.  Both libertiff and the ome-files TIFF wrappers were written to solve problems which could not otherwise be solved in plain C, and both add a lot of additional safety checking at compile time and at run time.  They weren’t written just for the sake of it; I personally spent several years working on this.  This is a summary of the parts of the proposal which couldn’t be done in C, with a short illustrative sketch after each group to make it concrete:

Type Safety

Field Access:
- Type-safe field access at compile-time (get<tag::ImageWidth>() → uint32_t)
- Type-safe field access at runtime (variant-based FieldValue)
- Compile-time tag-to-type mapping via FieldTraits<Tag>
- Type-safe enum classes for Compression, Photometric, etc.
- Runtime tag registration with type traits
- User-defined FieldTraits<> specializations
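
To make the compile-time mapping concrete, here is a minimal sketch of the idea (FieldTraits, tag::ImageWidth and get<>() are used as illustrative names, not as the final API):

    // Sketch only; all names are illustrative.
    #include <cstdint>
    #include <string>

    namespace tag {
    struct ImageWidth {};
    struct Artist {};
    }

    // Primary template intentionally left undefined: asking for an unknown
    // tag is a compile-time error rather than a runtime surprise.
    template <typename Tag> struct FieldTraits;

    template <> struct FieldTraits<tag::ImageWidth> { using value_type = std::uint32_t; };
    template <> struct FieldTraits<tag::Artist>     { using value_type = std::string;   };

    class Image {
    public:
        // The return type is derived from the tag at compile time, so
        // get<tag::ImageWidth>() can only ever yield a uint32_t.
        template <typename Tag>
        typename FieldTraits<Tag>::value_type get() const;
    };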

Pixel Data:
- Type-safe pixel spans (std::span<uint16_t>, std::span<float>)
- Automatic type-safe unpacking of packed formats (1-bit, 12-bit → byte-aligned)
- PixelTraits<T> for compile-time pixel format validation
- Typed rational pixel types
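
On the pixel side, a sketch along these lines (again with made-up names; std::span is C++20, so a C++17 baseline would need a small span-like type instead):

    #include <cstdint>
    #include <span>

    // The element type of the caller's span is validated at compile time;
    // packed 1-bit/12-bit data would be unpacked into the byte-aligned
    // element type T.
    template <typename T> struct PixelTraits;   // undefined for unsupported types
    template <> struct PixelTraits<std::uint8_t>  { static constexpr unsigned bits = 8;  };
    template <> struct PixelTraits<std::uint16_t> { static constexpr unsigned bits = 16; };
    template <> struct PixelTraits<float>         { static constexpr unsigned bits = 32; };

    template <typename T>
    void read_strip(std::uint32_t strip, std::span<T> out)
    {
        static_assert(PixelTraits<T>::bits >= 8, "unsupported pixel element type");
        // ... decode the strip directly into 'out' ...
        (void)strip; (void)out;
    }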

Error Handling

- Unified std::expected<T, Error> returns (vs 6 different C patterns)
- Typed error codes (enum) instead of string-only errors
- Monadic chaining (.and_then(), .transform())
- All functions can now directly report errors
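
For the error handling, something of this shape (std::expected is C++23; a tl::expected-style type would stand in on older baselines, and the Error enum here is a placeholder):

    #include <cstdint>
    #include <expected>

    enum class Error { io_failure, bad_tag, unsupported };

    // Every operation returns either a value or a typed error: no global
    // error handler, and no mixture of 0/1, NULL and negative returns.
    std::expected<std::uint32_t, Error> image_width()
    {
        return 640u;   // placeholder value for the sketch
    }

    std::expected<std::uint64_t, Error> pixel_count()
    {
        // Monadic chaining: a failure short-circuits the whole pipeline.
        return image_width().transform([](std::uint32_t w) {
            return static_cast<std::uint64_t>(w) * 480u;
        });
    }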

Resource Management

- RAII file handles (automatic cleanup)
- Move-only semantics preventing accidental copies
- std::unique_ptr-based codec/buffer ownership
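
Resource management is the usual RAII story; a stripped-down sketch of the shape of it (not the proposed class):

    #include <cstdio>
    #include <utility>

    // Move-only handle: the file is closed exactly once, automatically,
    // however the scope is left, and copies are a compile-time error.
    class File {
    public:
        static File open(const char* path) { return File{std::fopen(path, "rb")}; }

        File(const File&) = delete;
        File& operator=(const File&) = delete;
        File(File&& other) noexcept : fp_(std::exchange(other.fp_, nullptr)) {}
        File& operator=(File&& other) noexcept { std::swap(fp_, other.fp_); return *this; }
        ~File() { if (fp_) std::fclose(fp_); }   // no TIFFClose() to forget

    private:
        explicit File(std::FILE* fp) : fp_(fp) {}
        std::FILE* fp_ = nullptr;
    };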

Thread Safety

- Per-file std::recursive_mutex for I/O operations
- Thread-safe field caching with std::shared_mutex
- Lock RAII guard for compound operations
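
For the cached-field part of the thread-safety work, a sketch using std::shared_mutex (the per-file recursive mutex for I/O would be a separate member, used the same way):

    #include <cstdint>
    #include <map>
    #include <optional>
    #include <shared_mutex>

    // Many readers may consult cached directory fields concurrently; the
    // exclusive lock is only taken when a field is (re)loaded.
    class FieldCache {
    public:
        std::optional<std::uint64_t> get(std::uint16_t tag) const
        {
            std::shared_lock lock(mutex_);
            auto it = cache_.find(tag);
            if (it == cache_.end())
                return std::nullopt;
            return it->second;
        }

        void put(std::uint16_t tag, std::uint64_t value)
        {
            std::unique_lock lock(mutex_);
            cache_[tag] = value;
        }

    private:
        mutable std::shared_mutex mutex_;
        std::map<std::uint16_t, std::uint64_t> cache_;
    };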

Zero-Allocation I/O

- Caller-managed buffers via std::span<T>
- No internal allocations for pixel data
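
Zero-allocation I/O falls out of the same span-based interface; the calling pattern would look roughly like this (read_raw_strip is a hypothetical entry point, stubbed out here):

    #include <cstddef>
    #include <cstdint>
    #include <span>
    #include <vector>

    // Fills the caller's buffer and never allocates internally (stub).
    std::size_t read_raw_strip(std::uint32_t /*strip*/, std::span<std::byte> out)
    {
        return out.size();
    }

    void copy_all_strips(std::uint32_t strip_count)
    {
        std::vector<std::byte> buffer(64 * 1024);        // sized and owned by the caller,
        for (std::uint32_t s = 0; s < strip_count; ++s)  // reused for every strip
            read_raw_strip(s, buffer);
    }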

Rational Types

- Multi-precision rationals (Rational16, Rational32, Rational64)
- Exact fractional values without float rounding
- Lossless round-trip for metadata (DPI, GPS, exposure)
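
And the rational types are thin exact-value wrappers; a sketch:

    #include <cstdint>

    // Exact numerator/denominator pairs rather than lossy doubles.
    template <typename T>
    struct Rational {
        T numerator{};
        T denominator{1};
    };

    using Rational16 = Rational<std::uint16_t>;
    using Rational32 = Rational<std::uint32_t>;
    using Rational64 = Rational<std::uint64_t>;

    // 300/1 DPI or a 1/3 s exposure round-trips through the file exactly,
    // with no float rounding on the way in or out.
    constexpr Rational32 dpi{300, 1};
    constexpr Rational32 exposure{1, 3};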

Sure, some aspects of these things could be approximated in C, but for the most part they can’t.  C isn’t capable of the type-safety guarantees brought in here, nor of safe automatic resource management, nor of portable, safe locking.  All of this can be done both portably and safely in C++.

>> The proposal includes elements from all three of these
>> implementations, and is really asking the question: what would
>> libtiff look like if we implemented it wholly in C++, but also
>> used extern "C" linkage to completely retain the C API and ABI as they
>> are today.
> 
> Writing it in C++, and offering a C API (unchanged or not) does NOT address 
> the issues I'm concerned about.
> 
> Any use of C++ (in particular the standard library) is a bridge too far for 
> at least some of your target audience.
> 
> Now, I imagine that some people will argue that losing support for such 
> "legacy" code is just too bad. So be it, but be aware that this means the old 
> version of the lib will likely live on as a source of security bugs for years 
> to come.

Making decisions about the scope of what libtiff will and will not support would be useful, as would deciding who our target audience is.  This is the current state of things:

If you take a look at the current test matrix (https://gitlab.com/libtiff/libtiff/-/pipelines/2234658359) you’ll see that we actively test on a number of platforms.  They are all contemporary: current versions of operating systems and compilers, with an “older” set in there as well (Ubuntu 22.04 LTS, which was 20.04 until 24.04 was released, and Visual Studio 2022, which was 2019 until 2026 was released).  In general, this is the current and previous major release of each platform, and current only for rolling releases like Homebrew.  This is because the primary customers of libtiff are the people who distribute it (Linux package managers, BSD ports, Microsoft vcpkg, MacPorts/Homebrew etc.).  It will of course also be used by application developers who link it into their applications; some of those applications will also be distributed via these package managers, or linked in directly and distributed independently as they see fit.

I have always tried to strike a balance and be practical about what can reasonably be supported and tested, and to look at where time and effort are best spent for the most value.  I’ve tried to do the same with this proposal.  Retaining the existing C API and ABI and maintaining full compatibility is, I think, a reasonable and achievable commitment to make, while also allowing for the creation of a safer and more capable C++ interface, and improving the overall safety and quality of the core library internals, as well as the tools (which sorely need it, given the number of defects being continually reported).  Requiring a C++ compiler to build does raise the minimum bar for which systems and compilers can build libtiff, but that comes back to who the target audience is, and how old is “too old”.  If, for example, we were to use C++17 as the minimum baseline (it’s been the default for GCC and Clang for some time now), it would work on most systems.  It has effectively been available since GCC 8, which is about 8 years old now, and likewise for Clang 6/7 and VS2019.  I have even been using embedded toolchains with C++17 support (IAR) for years at this point.  Most users develop on contemporary platforms for end users on contemporary platforms, and that’s where the CI effort is spent for the greatest positive effect.  When it comes to a safe baseline, we have had over half a decade of system releases with suitable support at this point.

https://en.cppreference.com/w/cpp/compiler_support/17
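
Coming back to the commitment to retain the C API and ABI: mechanically the layering is just extern "C" over a C++ core.  A rough sketch (TIFFGetVersion is an existing libtiff entry point used purely as an example; the impl namespace and the string are made up):

    // The public symbol keeps C linkage, so its name, signature and ABI are
    // exactly what C callers link against today; the body is C++.
    namespace impl {                        // invented name for the sketch
    inline const char* version() { return "LIBTIFF, Version 4.x (sketch)"; }
    }

    extern "C" const char* TIFFGetVersion(void)
    {
        return impl::version();             // C++ behind an unchanged C symbol
    }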

For context it would be useful to know which systems you support which would 
not meet this baseline, and what the needs are for those systems.

Supporting older platforms is *not* cost-free.  It comes at the expense of not being able to use newer platforms effectively, and those newer platforms are where most of the users and developers are.  It also comes at the expense of not being able to advance the state of the art and make meaningful improvements.  For example, we had a merge request to better support use on iOS just last week.  I don’t use it myself, but hundreds of millions of others do.  Is that more or less important than supporting ancient systems with a handful of users?  Where do we draw the line on what we support?  This isn’t specifically about C++; it would apply equally to using newer C language features and newer library dependencies.


Kind regards,
Roger
_______________________________________________
Tiff mailing list
[email protected]
https://lists.osgeo.org/mailman/listinfo/tiff
