Hello tzdata maintainers,

Following up on the issue I reported earlier, I would like to share some 
additional findings and ask for your guidance on a possible approach.

After further investigation, we found that zdump can detect a wide range of 
invalid tzfiles, including cases that would otherwise trigger aborts in 
applications at runtime. In particular, when zdump loads certain malformed 
tzfiles, it hits assertions in tzfile.c and aborts with exit code 134 
(128 + SIGABRT), effectively acting as a strict validator of tzfile 
correctness.

For example, with the following rules:

Rule NOS 1970 2037 - Jan 27 23:43:01 2:0 SDST
Rule NOS 1970 2037 - Jan 28 0:51:01 0 S
Zone OS/NOS +7:0:0 NOS NO%s

Although zic generates a tzfile without complaint, loading that file with 
zdump can abort because the DST offset transition creates unreachable local 
times. This suggests that zdump already exercises a more thorough validation 
path than zic itself.
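
For reference, the behavior can be reproduced along the following lines. The 
file and directory names here are illustrative assumptions, and the status 
capture is written defensively; this is a sketch of the reproduction, not a 
test we expect to be stable across tzcode versions:

```shell
#!/bin/sh
# Write the rules quoted above to an input file (name is illustrative).
cat > nos.rules <<'EOF'
Rule NOS 1970 2037 - Jan 27 23:43:01 2:0 SDST
Rule NOS 1970 2037 - Jan 28 0:51:01 0 S
Zone OS/NOS +7:0:0 NOS NO%s
EOF

mkdir -p ./zoneinfo
if zic -d ./zoneinfo nos.rules; then
    # zdump -v walks all transitions, exercising the tzfile loader.
    status=0
    zdump -v "$PWD/zoneinfo/OS/NOS" || status=$?
    echo "zdump exit status: $status"   # 134 would indicate SIGABRT
else
    echo "zic rejected the input"
fi
```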

Based on this, we are considering whether it would be reasonable to:

1. Reuse the zdump loading path (or equivalent internal logic) to validate
   tzfiles immediately after zic generates them.

2. Detect invalid tzfiles early, before they are installed or used by
   applications.

3. Avoid adding rule-specific checks (e.g. DST-only validation), since zdump
   appears capable of catching a broader class of invalid timezone data.
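
As a rough illustration of the first point, a post-build hook could run zdump 
over each freshly generated tzfile and treat an abort as a validation failure. 
This is only a sketch; the function name and the exit-code handling are our 
assumptions, not existing zic or tzcode behavior:

```shell
#!/bin/sh
# Hypothetical post-build check (not part of zic today): run zdump over
# a tzfile that zic just generated and treat an abort
# (exit status 134 = 128 + SIGABRT) as a validation failure.
validate_tzfile() {
    f=$1
    # "zdump -v" walks all transitions, exercising the loader thoroughly.
    zdump -v "$f" >/dev/null 2>&1
    status=$?
    if [ "$status" -eq 134 ]; then
        echo "INVALID: $f (zdump aborted while loading)" >&2
        return 1
    elif [ "$status" -ne 0 ]; then
        echo "SUSPECT: $f (zdump exited with status $status)" >&2
        return 1
    fi
    echo "OK: $f"
}

# Example use against everything zic wrote under ./zoneinfo:
# find ./zoneinfo -type f | while read -r f; do validate_tzfile "$f"; done
```

In practice this logic would presumably live inside zic itself (or be run 
from the Makefile's check targets) rather than as an external wrapper.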

Before proceeding further, I would like to ask for your opinion:

1. Do you think it makes sense to add an optional or internal validation step
   in zic based on zdump-style loading?

2. Is this approach acceptable from the tzdata design perspective, or is zic
   intentionally permissive while zdump is expected to catch such issues?

3. Are there any existing constraints or concerns (performance, portability,
   semantics) that would argue against this approach?

Any guidance or feedback would be greatly appreciated before we explore a 
concrete implementation.

Thank you for your time and for maintaining tzdata.

Best regards,
Renchunhui
