samboy <sam.lia...@global360.com.au> writes:
> I have a storage application to which I can record or retrieve files. It's
> multi-process C/C++ compiled to run on the SPARC platform. The problem is
> that the maximum file size which can be stored is 4GB, because the size is
> internally conveyed using an unsigned integer. I need to extend the range so
> it accepts much larger files, i.e. convert all uints to long longs. The
> approach at first was to work through the code line-by-line to locate and
> upsize all variables as required. The problem is there are almost a million
> lines of code and no guarantee every instance of filesize usage will be
> located and upsized. Another, more elegant approach which came to mind was
> to somehow 'trap' the occurrence of *unsigned* integer overflow... or
> should that be wrapping - yes, I know it's completely acceptable behaviour.
> If I could get gcc to do this for me, I could simply run a full suite of
> end-to-end functional testing, then wait for the trap to fire and abort.
> After each abort I can identify the undersized variable (via the backtrace),
> then apply a fix and repeat the process. This approach guarantees I'll
> target all variables restricting >4GB file records.
That seems doable but somewhat difficult. You may get some traction with the
-Wconversion option in mainline gcc. It will emit a warning every time you
convert from, e.g., 'unsigned long long' to 'unsigned int'. If the file size
is coming in as 'unsigned long long' somewhere, then you can use -Wconversion
to trace it as it goes through the code. This assumes there are no explicit
casts along the way: -Wconversion will only warn about implicit conversions.

Ian