Rodolfo,
On Sunday August 24, 2003 11:28, Rodolfo J. Paiz wrote:
> Is there someone out there with some coding expertise, who can maybe
> explain why "ls -l" and "ls -sh" give different results? Like this:
Not that I've read the code, but it isn't too hard to derive the answer from the
info page for ls.
`-s'
`--size'
     Print the disk allocation of each file to the left of the file
     name.  This is the amount of disk space used by the file, which is
     usually a bit more than the file's size, but it can be less if the
     file has holes.

     Normally the disk allocation is printed in units of 1024 bytes,
     but this can be overridden (*note Block size::).

`-l'
`--format=long'
`--format=verbose'
     In addition to the name of each file, print the file type,
     permissions, number of hard links, owner name, group name, size in
     bytes, and timestamp (*note Formatting file timestamps::), normally
     the modification time.
So in this case it appears you have some VERY "holey" (sparse) files. The -s flag
is telling you the space actually allocated on disk (the block count), while -l
reports the file's logical size in bytes straight from the inode, regardless of
how many blocks are really allocated behind it.
Perhaps try running "stat" on one of the files and see if this holds up.
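As an aside, if you want a known "holey" file to compare against, you can make
one yourself. This is just a sketch (the file name is made up, and it assumes
GNU dd on a filesystem that supports sparse files, like ext2/ext3):
dd if=/dev/zero of=holeyfile bs=1k count=1 seek=102400
ls -ls holeyfile
The seek= skips over 100MB without writing anything, so ls -l will report a
~100MB file while ls -s shows only a couple of KB actually allocated. That's
the same pattern you're seeing, just deliberate.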
I'll give some examples to try to illustrate this.
Here is an example file of ~40MB. You will see the raw block count in the first
listing and the "human-readable" interpretation of those blocks in the second.
By default, ls uses 1K blocks for the output of the -s flag.
[EMAIL PROTECTED] test]$ ls -ls testfile
39376 -rw-r--r-- 1 brian brian 40274705 Jan 20 2003 testfile
[EMAIL PROTECTED] test]$ ls -lsh testfile
39M -rw-r--r-- 1 brian brian 38M Jan 20 2003 testfile
Note how what the info page told us holds true here: the two numbers don't quite
match. The allocated size (blocks * block size; 39376K here, which works out to
40,321,024 bytes) is a bit larger than the 40,274,705 bytes of actual data,
because allocation is rounded up to whole blocks (plus any indirect blocks).
Now let's see what the system _really_ thinks about this file.
[EMAIL PROTECTED] test]$ stat testfile
File: `testfile'
Size: 40274705 Blocks: 78752 IO Block: 4096 Regular File
Device: 305h/773d Inode: 109 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 500/ brian) Gid: ( 500/ brian)
Access: 2003-08-25 02:24:01.000000000 -0400
Modify: 2003-01-20 10:34:12.000000000 -0500
Change: 2003-01-20 10:34:12.000000000 -0500
Now we see the filesystem actually uses a 4K block size, but stat's block count
is still reported in the traditional 512-byte units.
So let's tell ls what the block size actually is and see what it tells us.
[EMAIL PROTECTED] test]$ ls --block-size=4K -ls testfile
9844 -rw-r--r-- 1 brian brian 40274705 Jan 20 2003 testfile
Notice that the number of blocks changes but the file size doesn't. So now we can
see that the block size plays a large part in how even the same command
interprets what it is seeing.
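Just to double-check the arithmetic with the numbers stat handed us:
78752 blocks * 512 bytes = 40321024 bytes allocated
40321024 / 1024 = 39376 (what plain ls -s reported in 1K units)
40321024 / 4096 = 9844 (what ls -s reported with --block-size=4K)
So all three listings agree once you account for the unit each one is counting
in; only the logical size of 40274705 bytes is measured differently.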
However this doesn't explain what's going on, just what you see.
Everything from here on is either pure speculation or WAG. ;)
> ls -l:
> -rwxr--r-- 1 rpaiz rpaiz 1177207676 Aug 3 18:09 Kansas ~ Best of
> Kansas ~ 04 ~ Dust in the Wind ~ 890B500A.wav
> ls -sh:
> 35M Kansas ~ Best of Kansas ~ 04 ~ Dust in the Wind ~ 890B500A.wav
>
The difference here looks like roughly a factor of 32 (35 * 1024 * 1024 * 32
= 1174405120, which is pretty close to the 1177207676 you see above,
considering rounding). The only thing I can guess is that perhaps one of
the two drives has a 32K block size and the other has a 1K block size. Further,
it would seem that somehow the rsync command you used (or something else)
transferred blocks instead of bytes and really messed up the layout of the
filesystem. I believe you mentioned that you had seen a 98% fragmented filesystem,
and that would be consistent with some very holey files. Hence the vast
difference in reported sizes.
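If you want a second opinion from another tool, du reports allocated space the
same way ls -s does. Just a sketch, run against one of the affected files:
du -k "Kansas ~ Best of Kansas ~ 04 ~ Dust in the Wind ~ 890B500A.wav"
ls -l "Kansas ~ Best of Kansas ~ 04 ~ Dust in the Wind ~ 890B500A.wav"
If du agrees with the ~35M while ls -l still claims ~1.1GB, that points at a
genuinely sparse (or mangled) file rather than ls misreading the block size.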
You really need to take a look inside these files to see what's going on. That
way you can see if there are real file issues or some sort of filesystem
confusion. You had mentioned earlier that you can't read a file in a hex
editor due to memory constraints. You should try something like
head -c 64k <filename> | od -c
to get a feel for what's in the file (look for long runs of zeros; od collapses
repeated lines into a single "*").
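Another quick-and-dirty check, if you'd rather have a number than eyeball od
output (just a sketch; it strips the NUL bytes out of the first megabyte and
counts what's left):
head -c 1m <filename> | tr -d '\0' | wc -c
If that comes back much smaller than 1048576, that chunk of the file is mostly
zeros.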
The only thing I can think to recommend that would make sense is to grab a
defrag tool and see if it can fix it. Or, copy a file back to the old drive
using the same method, then use a different method to get it back onto the
desired drive, and see if that restores sanity to the files.
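One more experiment that might be worth a try (again just a sketch, with a
made-up destination path, and it assumes GNU cp): cp's --sparse=never option
forces every block of the copy to be written out, holes and all, so if the
originals are merely sparse the copy's ls -l and ls -s should come out in
agreement.
cp --sparse=never "Kansas ~ Best of Kansas ~ 04 ~ Dust in the Wind ~ 890B500A.wav" /tmp/dust-test.wav
ls -ls /tmp/dust-test.wav
Of course, if the data really did get lost in transfer, you'll just end up with
a 1.1GB file full of zeros, but even that tells you something.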
Anyway, it's late here and there could be some inaccuracies, but perhaps you
can either get it fixed or provide some new info. I left out using debugfs
for now, since it can get very confusing if you're not used to it.
--
Brian Ashe CTO
Dee-Web Software Services, LLC. [EMAIL PROTECTED]
http://www.dee-web.com/