Hi, I am using cut in an awkward situation: I have huge files that, for whatever reason, show larger file sizes than they actually occupy. For instance:

  # stat boot_image.clone2fs
    File: `boot_image.clone2fs'
    Size: 1077411840      Blocks: 113480     IO Block: 4096   regular file
  Device: 902h/2306d      Inode: 2845250     Links: 1
  Access: (0600/-rw-------)  Uid: (    0/    root)   Gid: (    0/    root)
  Access: 2008-10-08 16:29:55.066038214 +0200
  Modify: 2008-10-08 16:43:10.000000000 +0200
  Change: 2008-10-08 16:43:10.000000000 +0200
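[Editorial note: the symptom above, an apparent size far larger than Blocks * 512, is characteristic of a sparse file, where unwritten "holes" count toward the size but occupy no disk blocks. A minimal sketch reproducing that mismatch, using a hypothetical filename sparse.img and GNU stat:]

```shell
# Create a sparse 1 MiB file: seek past 1 MiB and write zero bytes,
# so the file has an apparent size of 1048576 but allocates no data blocks.
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=1048576 2>/dev/null

# Apparent size in bytes (what ls -l and the stat "Size" field report):
stat -c %s sparse.img

# Actual allocation in 512-byte blocks (the stat "Blocks" field);
# for a sparse file this is far smaller than size / 512.
stat -c %b sparse.img

rm -f sparse.img
```

du, which counts allocated blocks rather than the apparent size, reports the smaller number for the same reason.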
If you compare the apparent size (also shown by ls) with the size that results from the number of blocks times 512 bytes, you see what I mean. 'du' reports the correct size, by the way:

  # du -k boot_image.clone2fs
  56740   boot_image.clone2fs

Now I found a hint on the Web (http://www.programmersheaven.com/mb/linux/187697/245244/re-how-to-change-filesize-in-linux/?S=B20000) for how to change the incorrect file size by using cut to carry only a given number of bytes over into a new file:

  cut -b 1-500 oldFile > newFile

I never tried it on short files, but when I use it on the above file I get a very different result than expected:

  # cut -b 1-58101760 boot_image.clone2fs > boot_image.clone2fs_correct
  # stat boot_image.clone2fs_correct
    File: `boot_image.clone2fs_correct'
    Size: 309987280       Blocks: 606048     IO Block: 4096   regular file
  Device: 902h/2306d      Inode: 2469155     Links: 1
  Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
  Access: 2008-10-08 16:31:20.000000000 +0200
  Modify: 2008-10-08 16:31:38.000000000 +0200
  Change: 2008-10-08 16:31:38.000000000 +0200

Neither the number of blocks nor the apparent size is correct now. To me this looks like a typical overflow problem. Could you please investigate this?

Regards,
Roger Klein

Finanzrechenzentrum der OFD Magdeburg
Otto-von-Guericke-Str. 4
39104 Magdeburg
Tel: +49 391 545-3868
Fax: +49 391 545-3873
!!NEU!! E-Mail: [EMAIL PROTECTED]

_______________________________________________
Bug-coreutils mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/bug-coreutils
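[Editorial note: the unexpected result is not an overflow. cut is line-oriented: `cut -b 1-N` selects bytes 1..N of each line, so every newline byte in the binary image restarts the count, and cut also emits its own trailing newline. For byte-exact truncation, head -c or dd (or GNU truncate, in place) are the usual tools. A sketch with hypothetical file names oldFile/newFile:]

```shell
# Make a 1000-byte test file (all zero bytes, so it contains no newlines).
head -c 1000 /dev/zero > oldFile

# Byte-exact copies of the first 500 bytes:
head -c 500 oldFile > newFile                         # GNU head
dd if=oldFile of=newFile2 bs=1 count=500 2>/dev/null  # portable dd

# In-place truncation to 500 bytes (GNU coreutils):
truncate -s 500 oldFile
```

With data that does contain newlines, `cut -b 1-500` would instead keep up to 500 bytes of every line, which explains output both larger and differently sized than the requested byte count.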
