https://gcc.gnu.org/bugzilla/show_bug.cgi?id=122567

--- Comment #3 from GCC Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Jonathan Wakely <[email protected]>:

https://gcc.gnu.org/g:20a6ff7a4877a25ba78461a19417e956bd6c0095

commit r16-8190-g20a6ff7a4877a25ba78461a19417e956bd6c0095
Author: Jonathan Wakely <[email protected]>
Date:   Fri Jan 9 13:39:49 2026 +0000

    libstdc++: Fix chrono::current_zone() for three-level names [PR122567]

    chrono::current_zone() fails if /etc/localtime is a symlink to a zone
    with three components, like "America/Indiana/Indianapolis", because we
    only try to find "Indianapolis" and "Indiana/Indianapolis" but neither
    of those is a valid zone name.

    We need to try up to three components to handle all valid cases, such as
    "UTC", "America/Indianapolis", and "America/Indiana/Indianapolis". It's
    also possible that users could provide a custom tzdata.zi file which
    includes zones with names using more than three levels, so loop over all
    filename components of the path that /etc/localtime points to.
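    The component loop described above can be sketched as follows. This is
    not the actual libstdc++ code; the function name zone_candidates is
    hypothetical, and the real implementation checks each candidate against
    the tzdb zone list rather than collecting them into a vector:

    ```cpp
    #include <string>
    #include <string_view>
    #include <vector>

    // Sketch: generate candidate zone names from the trailing components
    // of a symlink target such as
    // "/usr/share/zoneinfo/America/Indiana/Indianapolis". Each iteration
    // prepends one more path component to the candidate name.
    std::vector<std::string> zone_candidates(std::string_view target)
    {
        std::vector<std::string> candidates;
        for (auto pos = target.rfind('/'); pos != std::string_view::npos;
             pos = target.rfind('/', pos - 1))
        {
            candidates.emplace_back(target.substr(pos + 1));
            if (pos == 0)
                break; // reached the leading '/' of an absolute path
        }
        return candidates;
    }
    ```

    For the Indianapolis example this yields "Indianapolis",
    "Indiana/Indianapolis", "America/Indiana/Indianapolis", and so on up
    the path; a caller would stop at the first candidate that names a
    valid zone.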

    This also replaces std::filesystem::read_symlink with a plain readlink
    call and find+substr operations on a std::string_view, which is
    approximately twice as fast as using std::filesystem::path and
    std::string.

    By default we use a fixed char[128] buffer for readlink to write into,
    but if that doesn't fit we use a std::string as a dynamic buffer that
    grows as needed. We could use ::stat to find the exact length of the
    symlink and avoid looping with an increasingly large std::string
    capacity, but it's already expected to be rare for the char[128] buffer
    to be exceeded, so needing to double the std::string capacity more than
    once (i.e. to 512 or more) should be exceedingly rare. Adding a call to
    ::stat would perform a third filesystem operation when two readlink
    calls should be sufficient for the vast majority of realistic cases.
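    The buffering strategy described above might look like the following
    sketch. It is not the committed code; the function name
    read_link_target and the error handling (returning an empty string)
    are simplifications for illustration:

    ```cpp
    #include <string>
    #include <unistd.h>

    // Sketch: try a fixed 128-byte stack buffer first, and fall back to a
    // std::string whose size doubles until readlink's result fits.
    std::string read_link_target(const char* path)
    {
        char buf[128];
        ssize_t n = ::readlink(path, buf, sizeof(buf));
        if (n < 0)
            return {}; // error; the real code reports failure differently
        if (static_cast<size_t>(n) < sizeof(buf))
            return std::string(buf, n); // common case: one syscall
        // Target didn't fit: grow a dynamic buffer until it does.
        std::string s(2 * sizeof(buf), '\0');
        while (true)
        {
            n = ::readlink(path, s.data(), s.size());
            if (n < 0)
                return {};
            if (static_cast<size_t>(n) < s.size())
            {
                s.resize(n);
                return s;
            }
            s.resize(2 * s.size());
        }
    }
    ```

    Note that readlink does not null-terminate its output, which is why
    the returned length n is used explicitly when constructing the string.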

    One consequence of not using filesystem::path is that redundant
    consecutive slashes in the pathname aren't automatically ignored, e.g.
    /usr/share/zoneinfo/Europe//London worked fine with the old
    implementation because we manually concatenated the path components,
    i.e. "Europe" + '/' + "London". So that this continues to work there is
    a new loop to remove redundant slashes from the string being processed.
    That adds a slower, allocating path, but is unlikely to be needed in
    practice (the systemd spec for /etc/localtime explicitly says it should
    end with a time zone name, so "Europe//London" would be invalid anyway,
    even if it points to a valid file). Again, this loop is expected to be
    rare so optimizing this case further isn't important.
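    The redundant-slash removal could be sketched like this (again a
    simplified illustration, with a hypothetical name collapse_slashes,
    not the committed loop):

    ```cpp
    #include <string>

    // Sketch: collapse runs of consecutive slashes so that a target like
    // "Europe//London" is processed as "Europe/London".
    std::string collapse_slashes(std::string s)
    {
        for (auto pos = s.find("//"); pos != std::string::npos;
             pos = s.find("//", pos))
            s.erase(pos, 1); // drop one slash, then re-check same position
        return s;
    }
    ```

    Because erase allocates nothing but may shift the tail of the string
    on every hit, this is the slower path the message refers to; it only
    runs when a "//" is actually present.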

    While manually testing this I noticed that we will interpret a bogus
    symlink such as /usr/share/zoneinfo/America/Europe/London as a valid
    timezone, even though it's a dangling symlink. We find a name match for
    "Europe/London" before we get to the "America" component. This seems
    unlikely to matter in practice, and was a pre-existing problem.

    There's no testcase for current_zone() correctly handling three-level
    names or symlinks with unusual targets. It cannot be tested without
    changing the target of /etc/localtime, which requires root access.

    I'm still considering whether we want to cache the result of
    current_zone(), either globally or in the tzdb object. Just returning a
    cached variable takes 20-30ns instead of more than 700ns to access the
    filesystem and read the symlink. Using ::lstat to check the symlink's
    mtime would add some overhead though.

    libstdc++-v3/ChangeLog:

            PR libstdc++/122567
            * src/c++20/tzdb.cc (tzdb::current_zone): Loop over all trailing
            components of /etc/localtime path. Use readlink instead of
            std::filesystem::read_symlink.

    Reviewed-by: Tomasz Kamiński <[email protected]>