The ext2 file system will fragment over time, but unlike Windows the file /
data management structure keeps fragmentation under control on the fly. Keep
in mind this OS was designed with long periods of uptime in mind, so it
needs to be efficient; UNIX was designed with the same concept in mind. I
can't remember the exact details of the on-disk layout, but the allocation
of blocks on the hard drive is more efficient than in the FAT system. Here
is what I found with Google:

From: [EMAIL PROTECTED] (Kristian Köhntopp) 
Date: Sat Sep 28 16:50:44 DST 1996 
Subject: Re: file fragmentation in Linux (EXT2) 
Newsgroups: comp.os.linux.misc 


[EMAIL PROTECTED] (Steve Tupy) writes:
>>Your DOS roots are showing. Ext2 doesn't fragment much, and there
>>really isn't the need for it that there was in DOS.

>       Knowing a little about OS's, I must ask how this is possible? Don't
>get me wrong, I am open to this concept, but I really am curious how
>they achieve this?

By not cramming every single byte into the first half of the
hard disk. Instead, ext2 (and BSD FFS/ufs, Windows NT NTFS and
other more sophisticated filesystems) lays out its data
carefully, organizing different directories in different parts
of the disk and allocating space for growing files in advance to
provide contiguous space for growth.

Another key concept to understand is that it is not desirable
to store every file in one contiguous chunk of data.  It is
enough to make sure that the chunks of a file are of a
reasonable size to allow the operating system to access the data
fast.

Kristian

3.4 Some facts about file systems and fragmentation 

Disk space is administered by the operating system in units of blocks and
fragments of blocks. In ext2, fragments and blocks have to be of the same
size, so we can limit our discussion to blocks. 
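
If you are curious what block size an existing ext2 file system uses,
tune2fs -l will report it. The device name and values below are just an
illustration:

----------------------------------------------------------------------------

# tune2fs -l /dev/hda3 | grep -E '^(Block|Fragment) size'
Block size:               1024
Fragment size:            1024

----------------------------------------------------------------------------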

Files come in any size and don't end on block boundaries, so part of the
last block of every file is wasted. Assuming that file sizes are random,
there is approximately half a block of waste for each file on your disk.
Tanenbaum calls this "internal fragmentation" in his book "Operating
Systems". 

You can estimate the number of files on your disk from the number of
allocated inodes. On my disk 



----------------------------------------------------------------------------

# df -i
Filesystem           Inodes   IUsed   IFree  %IUsed Mounted on
/dev/hda3              64256   12234   52022    19%  /
/dev/hda5              96000   43058   52942    45%  /var


----------------------------------------------------------------------------

there are about 12000 files on / and about 44000 files on /var. At a block
size of 1 KB, about 6 + 22 = 28 MB of disk space are lost in the tail blocks
of files. Had I chosen a block size of 4 KB, I would have lost four times
this space. 
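
If you want a comparable estimate for your own machine, you can sum the used
inodes reported by df -i and assume half a block of waste per file. This is
only a rough sketch and assumes 1 KB blocks on every file system; with the
numbers above it prints roughly:

----------------------------------------------------------------------------

# df -i | awk 'NR>1 {n+=$3} END {printf "~%.0f MB wasted\n", n/2048}'
~27 MB wasted

----------------------------------------------------------------------------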


Data transfer is faster for large contiguous chunks of data, though. That's
why ext2 tries to preallocate space in units of 8 contiguous blocks for
growing files. Unused preallocation is released when the file is closed, so
no space is wasted. 

Noncontiguous placement of blocks in a file is bad for performance, since
files are often accessed in a sequential manner. It forces the operating
system to split a disk access and the disk to move the head. This is called
"external fragmentation" or simply "fragmentation" and is a common problem
with DOS file systems. 
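
Newer releases of e2fsprogs ship a filefrag tool that reports how many
extents a single file occupies; a contiguous file shows a single extent. The
file name below is just an example:

----------------------------------------------------------------------------

# filefrag /var/spool/news/history
/var/spool/news/history: 1 extent found

----------------------------------------------------------------------------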

ext2 has several strategies to avoid external fragmentation. Normally
fragmentation is not a large problem in ext2, not even on heavily used
partitions such as a USENET news spool. While there is a tool for
defragmenting ext2 file systems, nobody ever uses it and it is not up to
date with the current release of ext2. If you use it anyway, do so at your
own risk. 
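
If you want to know how fragmented a partition really is, e2fsck prints the
percentage of non-contiguous files at the end of a check. Run it with -n
(read-only, answer "no" to everything) on a file system that is not mounted
read-write; the numbers below only illustrate the output format:

----------------------------------------------------------------------------

# e2fsck -fn /dev/hda3
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/hda3: 12234/64256 files (2.1% non-contiguous), 48210/257032 blocks

----------------------------------------------------------------------------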

The MS-DOS file system is well known for its pathological management of
disk space. In conjunction with the abysmal buffer cache used by MS-DOS, the
effects of file fragmentation on performance are very noticeable. DOS users
are accustomed to defragging their disks every few weeks, and some have even
developed ritualistic beliefs regarding defragmentation. None of these
habits should be carried over to Linux and ext2. Linux native file systems
do not need defragmentation under normal use, and that includes any
situation with at least 5% free space on the disk. 
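
Relatedly, ext2 reserves a percentage of blocks (5% by default, mainly for
root) so that ordinary users cannot fill the disk completely. You can see
the reserve with tune2fs -l and change it with tune2fs -m; the device name
and count below are just an example:

----------------------------------------------------------------------------

# tune2fs -l /dev/hda3 | grep 'Reserved block count'
Reserved block count:     12851

----------------------------------------------------------------------------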


The MS-DOS file system is also known to lose large amounts of disk space to
internal fragmentation. For partitions larger than 256 MB, DOS block sizes
grow so large that they are no longer useful (this has been corrected to
some extent with FAT32). 

ext2 does not force you to choose large blocks for large file systems,
except for very large file systems in the 0.5 TB range (that's terabytes,
with 1 TB equaling 1024 GB) and above, where small block sizes become
inefficient. So unlike DOS there is no need to split up large disks into
multiple partitions just to keep the block size down. Use the 1 KB default
block size if possible. You may want to experiment with a block size of 2 KB
for some partitions, but expect to run into some seldom-exercised bugs,
since most people use the default. 
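
For reference, the block size is fixed when the file system is created; with
mke2fs you choose it with the -b option. The device name is just an example,
and mke2fs will of course destroy whatever is on that partition:

----------------------------------------------------------------------------

# mke2fs -b 1024 /dev/hda6    # 1 KB blocks, the default
# mke2fs -b 2048 /dev/hda6    # 2 KB blocks, if you want to experiment

----------------------------------------------------------------------------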

Hope that helps...
Mac

-----Original Message-----
From: Brash, Matthew [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 14, 2000 3:49 PM
To: '[EMAIL PROTECTED]'
Subject: RE: [expert] Defrag counterpart in MDK


Supposedly defragging is unnecessary in Linux.  The Ext2 file system doesn't
get fragmented or something. Can anyone confirm/trash this idea?

-----Original Message-----
From: Thompsson [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 14, 2000 7:38 AM
To: [EMAIL PROTECTED]
Subject: Re: [expert] Defrag counterpart in MDK





> > On Mon, 12 Jun 2000, you wrote:
> > > Dear all:
> > >
> > >  Does anybody know of a utility like Defrag under Windows for
> > > optimizing the hard disk in Mandrake 6.0, or an RPM in a higher
> > > version?
> > >
> > >
> > >  Thanks in advance,
> > >
> > >  Wei Quan Tian
> >

You have defrag on the Mdk 6.1 cd

Tommi
