Re: defrag

2008-08-30 Thread Wojciech Puchar


... In a logical sense yes, in a physical sense no. They are big video files
(from 9 to 40 GB) that I edit, cut, apply video filters to, recompress and so on.


So no. But it's still funny that windoze can't keep fragmentation down on files
that are processed in large chunks on a system with lots of RAM.
Even a stupid allocation algorithm, combined with delayed allocation (searching
for the first available run of blocks as big as the unwritten data in the cache),
would suffice.
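That delayed-allocation idea can be sketched in a few lines. This is a toy model of the concept only, not any real filesystem's allocator; the free-map representation and block numbers are invented for illustration:

```python
# Toy delayed allocation: instead of taking the first free block on every
# write, buffer the file's dirty data and then search the free-space map
# for one contiguous run big enough for the whole cached batch.

def find_contiguous_run(free, length):
    """Return the start index of the first run of `length` free blocks, or None."""
    run_start, run_len = None, 0
    for i, is_free in enumerate(free):
        if is_free:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == length:
                return run_start
        else:
            run_len = 0
    return None

def delayed_alloc(free, nblocks):
    """Allocate `nblocks` contiguously if possible, marking them used."""
    start = find_contiguous_run(free, nblocks)
    if start is None:
        return None  # a real FS would fall back to scattered allocation
    for i in range(start, start + nblocks):
        free[i] = False
    return list(range(start, start + nblocks))

# A free map with a small hole at blocks 0-1 and a big hole from block 5 on.
# Flushing 6 cached blocks at once skips the small hole and lands contiguously:
free_map = [True, True, False, False, False] + [True] * 10
blocks = delayed_alloc(free_map, 6)
```

With per-write first-fit the same data would have started in the 2-block hole and fragmented; batching lets the allocator see the whole request at once.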

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: defrag

2008-08-30 Thread Wojciech Puchar

First of all, I would be careful about who I called an idiot. Secondly, you
obviously have no business knowledge.


I think I do. And I described exactly how it works.
Of course, if I were Microsoft's marketing guy, I wouldn't use the word
idiot :)




Re: defrag

2008-08-29 Thread Wojciech Puchar

 CP/M was single-user and was used on floppies up to 360kB AFAIK,

And MP/M was multi-user, using the same filesystem.  From memory, there
was perhaps one byte that indicated which user owned a file :)


CP/M had users too, but that was just to help keep things organized,
not for security; you could simply type "user n" to switch users, AFAIK.




It wasn't (straight-up) theft; MS cut a deal with IBM to use HPFS and
OS/2, more or less in exchange for letting IBM licence Windows 3.1 as
WINOS/2

When things went sour - google provides days of happy reading if you're
interested - MS morphed it into NTFS for NT, cruelled the deal with IBM
so OS/2 couldn't run NT/Win95 apps (signing OS/2's death warrant, though
it took a long time to die) and stopped distributing OS/2 themselves.


A little better than theft, but... as you said.


 writing a few-paged document or view a webpage

Yeah, yeah :)  I'd be surprised if NTFS isn't as defrag-proof as HPFS,
which as I recall had self-defragging garbage-collecting features built


Exactly like Microsoft. They quickly created a similar filesystem without
even really understanding it - or, if they did, they simply ignored things.



used it for quite a few years to run BBS and Fidonet stuff, not once
losing any data .. HPFS was a very resilient and reliable filesystem.


I never used OS/2 for very long, but a friend of mine did. It was much faster
than FAT, and he never had a filesystem crash.




Re: defrag

2008-08-29 Thread Wojciech Puchar

Well, you can't really say "it's just like FAT" if you've only looked at FAT.


possibly untrue in Win NT,


From what I've read, it's a journalling filesystem based on a B+ tree, with
small files stored directly in the tree and larger files in variable-length
extents.


I meant a FAT partition under NT.


I see that ext4, the successor to ext3, which also has extent
support, has a defragmenter. And it appears to give significant
increases in read speeds.


Still, something is wrong if it needs a defragmenter at all...
UFS does not.


Re: defrag

2008-08-29 Thread Eduardo Morras

At 15:21 28/08/2008, RW wrote:



 On Thu, 28 Aug 2008 10:13:40 +0200
 Eduardo Morras [EMAIL PROTECTED] wrote:

  No, if you check a NTFS disk after some work, it's heavily
  fragmented. As you fill it and work with it, it becomes more and
  more fragmented.

How did you measure it? AFAIK the percentage fragmentation figures given
by windows tools and fsck, aren't measured on the same basis.


I run jkdefrag. It outputs an image of the fragmented files. In practical
work, when I defrag the data disks I get 30-40 (even 50) MB/s when
copying files over Gigabit Ethernet with FTP. This copy speed drops
to 9-10 MB/s after some days of work.




Re: defrag

2008-08-29 Thread Wojciech Puchar


How did you measure it? AFAIK the percentage fragmentation figures given
by windows tools and fsck, aren't measured on the same basis.


I run jkdefrag. It outputs an image of the fragmented files. In practical work,
when I defrag the data disks I get 30-40 (even 50) MB/s when copying files over
Gigabit Ethernet with FTP. This copy speed drops to 9-10 MB/s after some days
of work.

for THE SAME files?


Re: defrag

2008-08-29 Thread Gerard
On Fri, 29 Aug 2008 13:44:20 +1000 (EST)
Ian Smith [EMAIL PROTECTED] wrote:

 On Thu, 28 Aug 2008 13:33:35 +0200 (CEST)
 Wojciech Puchar [EMAIL PROTECTED] wrote:
 
   CP/M was single-user and was used on floppies up to 360kB AFAIK, 
 
 And MP/M was multi-user, using the same filesystem.  From memory,
 there was perhaps one byte that indicated which user owned a file :)
 
  NTFS is a theft of OS/2 HPFS. They didn't even bother to use another
  partition ID :), but they managed to f..k^H^H^H^Hextend its
  functionality, so it's actually even slower than FAT - and it, too,
  does nothing to prevent fragmentation.
 
 It wasn't (straight-up) theft; MS cut a deal with IBM to use HPFS and 
 OS/2, more or less in exchange for letting IBM licence Windows 3.1 as 
 WINOS/2
 
 When things went sour - google provides days of happy reading if
 you're interested - MS morphed it into NTFS for NT, cruelled the deal
 with IBM so OS/2 couldn't run NT/Win95 apps (signing OS/2's death
 warrant, though it took a long time to die) and stopped distributing
 OS/2 themselves.

It might be worth mentioning that things deteriorated swiftly when IBM
insisted that Microsoft, who was writing OS/2 for IBM, write the code
specifically for the 286 processor. Bill Gates personally invaded the
Armonk IBM headquarters and basically told the IBM execs that they were
making a colossal mistake. When IBM refused to back down, Gates gave
them what they wanted. The rest is history. IBM signed their own 'death
warrant'. Remember, Gates once offered to sell DOS to IBM for $10,000,
and IBM turned him down.

 
  This is normal, as Microsoft creates problems so it can fix them
  (creating 3 times as many new ones) in new releases, so idiots
  continue to buy new versions of windoze and new hardware, just to
  do tasks as simple as writing a few-page document or viewing a webpage

First of all, I would be careful about who I called an idiot. Secondly, you
obviously have no business knowledge. Products, whether they are cars,
drugs, etc. are improved and reissued to the general public. That is
just the name of the game.

 
 Yeah, yeah :)  I'd be surprised if NTFS isn't as defrag-proof as
 HPFS, which as I recall had self-defragging garbage-collecting
 features built in; certainly I never felt the need to defrag any HPFS
 volumes, and I used it for quite a few years to run BBS and Fidonet
 stuff, not once losing any data .. HPFS was a very resilient and
 reliable filesystem.
 
 If you compare:
  % find /usr/src -name '*hpfs*'
  with
  % find /usr/src -name '*ntfs*'
 
 you'll go 'hmmm ..' and if you look through the sources you'll see
 whole large slabs of code that are shared between those two
 implementations, by the same author.
 
 I've never tried writing to HPFS volumes, but I did recover many
 years of work and play from a number of HPFS disks and still hope to
 do some more someday, so I was glad to see the code is still there in
 7.0 ..
 
 cheers, Ian




Re: defrag

2008-08-28 Thread Eduardo Morras

At 06:56 28/08/2008, you wrote:

On Wed, 27 Aug 2008 22:08:47 -0400
Mike Jeays [EMAIL PROTECTED] wrote:


 That's true about FAT.  What I have never understood is why Microsoft
 didn't fix the problem when they designed NTFS.  UFS and EXT2 both
 existed at that time, and neither needs periodic defragmentation.

I think they probably did, NTFS took a lot from UNIX filesystems, and
at the time it was released they said that NTFS didn't need any
defragmentation at all.


No, if you check an NTFS disk after some work, it's heavily
fragmented. As you fill it and work with it, it becomes more and more
fragmented.



I suspect that it's mostly a matter of attitude. Windows users have an
irrational obsessive-compulsive attitude to fragmentation, so they
end up with good reliable defragmenters, and so less reason not to use
them. We don't really care, so we end up with no, or poor,
defragmenters, which reinforces our don't-care attitude.


The best way to defragment an NTFS drive is to make a backup to another
device, format the original, and restore the backup. It takes less time
and the device doesn't suffer. I do it monthly with the data disks and
performance grows spectacularly (nearly 4x on sustained file reads).




---
   This document shows my ideas. They are originally mine.
   Thinking the same as I do without prior payment is forbidden.
If you agree with me, PAY ME.  And beware of my lawyer - HE BITES!!




Re: defrag

2008-08-28 Thread Bill Moran
RW [EMAIL PROTECTED] wrote:

 On Wed, 27 Aug 2008 22:08:47 -0400
 Mike Jeays [EMAIL PROTECTED] wrote:
 
  That's true about FAT.  What I have never understood is why Microsoft
  didn't fix the problem when they designed NTFS.  UFS and EXT2 both
  existed at that time, and neither needs periodic defragmentation.
 
 I think they probably did, NTFS took a lot from UNIX filesystems, and
 at the time it was released they said that NTFS didn't need any
 defragmentation at all. 
 
  I suspect that it's mostly a matter of attitude. Windows users have an
  irrational obsessive-compulsive attitude to fragmentation, so they
  end up with good reliable defragmenters, and so less reason not to use
  them. We don't really care, so we end up with no, or poor,
  defragmenters, which reinforces our don't-care attitude.

Companies like Executive Software make money off Diskeeper by running
tests that demonstrate that defragging is worthwhile.  I've seen all sorts
of benchmarks, and I've seen the ones where they demonstrate that NTFS
really does have a fragmentation problem, unlike MS's early claims.  If
UFS or ext2 had fragmentation problems, some company would have jumped
on it by now and be marketing a disk defragmenter for Linux/etc.  Also,
accessing a FS by block device and relocating file data isn't a terribly
difficult thing to do, so I have a hard time believing that nobody's
tried it ... my guess is that they've simply found that it wasn't
worth doing.

FAT had a fragmentation problem because it wasn't really designed, it was
just thrown together.  MS was lucky it didn't have bigger problems.

I think NTFS has a fragmentation problem because it wasn't a priority.
MS was focused on building a filesystem that could store the outrageous
ACLs they wanted, and that was non-trivial (look at how long it took the
BSDs to have native file-level ACLs).  In that desire to get those other
features in place, I think long-term disk performance took a back seat.

-- 
Bill Moran
http://www.potentialtech.com


Re: defrag

2008-08-28 Thread Wojciech Puchar

something that has puzzled me for years (but i've never got around to
asking) is how does *nix get away without regular defrag as with
windoze.


because it doesn't need it.



fsck is equivalent to scandisk, right?


Not exactly - fsck usually fixes errors, unlike scandisk.


Re: defrag

2008-08-28 Thread Wojciech Puchar
Maybe it is because the FAT filesystem wasn't well designed from the beginning
and defrag was a workaround to solve performance problems.


Like everything else Microsoft did, it wasn't designed but stolen, and
possibly slightly changed.


FAT is similar to (mostly the same as) the CP/M filesystem.

CP/M was single-user and was used on floppies up to 360kB AFAIK -
small enough to be able to keep most metadata in memory. Even for small
hard disks FAT was stupid.


The only innovation of Micro$oft was subdirs ;)


FAT does NOTHING to prevent fragmentation; it simply grabs the first
available block when needed.



NTFS is a theft of OS/2 HPFS. They didn't even bother to use another
partition ID :), but they managed to f..k^H^H^H^Hextend its
functionality, so it's actually even slower than FAT - and it, too, does
nothing to prevent fragmentation.



This is normal, as Microsoft creates problems so it can fix them
(creating 3 times as many new ones) in new releases, so idiots continue to buy
new versions of windoze and new hardware, just to do tasks as simple as
writing a few-page document or viewing a webpage.



Re: defrag

2008-08-28 Thread Wojciech Puchar

That's true about FAT.  What I have never understood is why Microsoft didn't
fix the problem when they designed NTFS.  UFS and EXT2 both existed at that
time, and neither needs periodic defragmentation.



Because Microsoft never fixes the real problems, but creates them.
If they fixed most of them, their users wouldn't buy new versions of
windoze and new hardware. For the tasks 99% of users need, a 486 with a
VGA card is enough with the right software. Or even less.



Re: defrag

2008-08-28 Thread Wojciech Puchar

I think they probably did, NTFS took a lot from UNIX filesystems, and

  
what for example?

It took 95% from the OS/2 HPFS filesystem, and the other 5% is what
Microsoft f...ed up.




at the time it was released they said that NTFS didn't need any
defragmentation at all.


SAID is the most important word in your sentence.



Re: defrag

2008-08-28 Thread Wojciech Puchar
No, if you check an NTFS disk after some work, it's heavily fragmented. As you
fill it and work with it, it becomes more and more fragmented.


It's just like FAT, because nothing is done to prevent fragmentation.

If NTFS needs to allocate a block, it simply takes the first free one.

Consider writing to 3 files, one block at a time to each.

You will get blocks arranged like this (where 1 is file 1's data, 2 is data
from file 2 and 3 is data from file 3):


123123123123123123123123123123


On newer systems with lots of memory windoze POSSIBLY delays allocation,
so it may somewhat reduce this interleaving if the files are written within
a short period.


But there is nothing there as simple and efficient as in BSD UFS.
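That first-free-block behaviour is easy to reproduce with a toy allocator. This is an illustration of the claim above only, not NTFS's or FAT's actual code:

```python
# Toy first-free-block allocator: every write takes the lowest-numbered
# free block. Appending to three files round-robin, one block at a time,
# interleaves their data exactly as described.

class FirstFitDisk:
    def __init__(self, nblocks):
        self.blocks = [None] * nblocks   # None = free, else owning file id

    def append(self, file_id):
        idx = self.blocks.index(None)    # first free block wins
        self.blocks[idx] = file_id
        return idx

disk = FirstFitDisk(30)
for _ in range(10):                      # ten rounds of one-block appends
    for f in ("1", "2", "3"):
        disk.append(f)

layout = "".join(disk.blocks)
print(layout)  # 123123123123123123123123123123
```

Batching several blocks per file before allocating (as delayed allocation does) would instead produce longer per-file runs from the same allocator.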

The best way to defragment an NTFS drive is to make a backup to another
device, format the original, and restore the backup. It takes less time and
the device doesn't suffer. I do it monthly with the data disks and performance
grows spectacularly (nearly 4x on sustained file reads).


Did they finally manage to make it possible to back up everything just by
copying files, like in Unix?


Is there any way to restore it without doing a windoze installation on a blank
drive?



Re: defrag

2008-08-28 Thread Wojciech Puchar

MS was focused on building a filesytem that could store the outrageous
ACLs they wanted, and that was non-trival


So - as usual - they quickly implemented the OS/2 filesystem (at best,
assuming no stolen code), and then added their bloat.


Performance is never a priority at Microsoft; exactly the opposite is true.
A high-quality Windows would kill Microsoft - few would buy new
versions then.



(look at how long it took the
BSDs to have native file-level ACLs).


Because in Unix they are not actually needed.

The users/groups system is just fine.

I don't know anyone here who actually uses ACLs under Unix
because he/she needs them.

POSSIBLY they are needed for Samba, to allow their use from windoze
clients.



Re: defrag

2008-08-28 Thread Svein Halvor Halvorsen

Wojciech Puchar wrote:

(look at how long it took the
BSDs to have native file-level ACLs).


Because in Unix they are not actually needed.

The users/groups system is just fine.


That's one man's opinion.


I don't know anyone here who actually uses ACLs under Unix
because he/she needs them.


It depends on your definition of need, I guess. The groups file could 
always be the power set[1] of the passwd file.
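The footnote's point can be made concrete: covering every possible combination of users with plain groups would need the power set of the user list, which doubles with every user added. A quick sketch:

```python
# Plain Unix groups can in principle express any sharing arrangement:
# just create one group per subset of users, i.e. the power set of the
# passwd file. The catch is that the count is 2**n, so it explodes -
# which is the practical argument for ACLs.
from itertools import chain, combinations

def power_set(users):
    """All subsets of `users`, as tuples (including the empty subset)."""
    return list(chain.from_iterable(
        combinations(users, r) for r in range(len(users) + 1)))

groups_needed = power_set(["alice", "bob", "carol"])
print(len(groups_needed))   # 8 == 2**3, counting the empty group
```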




Svein Halvor


[1] http://en.wikipedia.org/wiki/Power_set


Re: defrag

2008-08-28 Thread Wojciech Puchar

Because in Unix they are not actually needed.

The users/groups system is just fine.


That's one man's opinion.


Definitely not just one. All the local Unix users (not Linux fans) I know
share the same opinion.



Re: defrag

2008-08-28 Thread RW


 On Thu, 28 Aug 2008 10:13:40 +0200
 Eduardo Morras [EMAIL PROTECTED] wrote:

  No, if you check a NTFS disk after some work, it's heavily
  fragmented. As you fill it and work with it, it becomes more and
  more fragmented.

How did you measure it? AFAIK the percentage fragmentation figures given
by windows tools and fsck, aren't measured on the same basis.


On Thu, 28 Aug 2008 13:41:22 +0200 (CEST)
Wojciech Puchar [EMAIL PROTECTED] wrote:

 it's just like FAT, because nothing is done to prevent fragmentation.
 
 if NTFS needs to allocate a block, it simply takes the first free one.
 
 consider writing to 3 files, one block at a time to each.
 
 you will get blocks arranged like this (where 1 is file 1's data, 2 is
 data from file 2 and 3 is data from file 3):
 
 123123123123123123123123123123

This is just untrue. I don't much like Microsoft, but I don't think
there's much to be gained by out-fudding them.


Re: defrag

2008-08-28 Thread Bob Johnson
On 8/27/08, prad [EMAIL PROTECTED] wrote:
 something that has puzzled me for years (but i've never got around to
 asking) is how does *nix get away without regular defrag as with
 windoze.


Essentially, the UFS file system (and its close relatives) is
intentionally fragmented in a controlled way as the files are written,
so that the effect of the fragmentation is limited. Files are written
at sort-of-random locations all over the disk, rather than starting at
one end and working toward the other, and there is a limit to how much
sequential disk space a single file can occupy (a large file
essentially gets broken up and stored as if it were a collection of
smaller files). The result is that as long as there is a reasonable
amount of empty disk space available, it will be possible to find
space to store a new file efficiently. This is why the filesystem
wants to have at least 8% empty space. If you have less than 8% empty
space left on the filesystem, it switches from the speed optimizing
mode that I just described to a mode that tries to pack things into
the remaining space as efficiently as possible, at the cost of speed.
FreeBSD also by default reserves some disk space for administrative
use that is not available to normal users.
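A toy model of that allocation policy (a sketch of the idea only - the real UFS allocator is far more involved, and the region count and chunk limit here are invented stand-ins for cylinder groups and the contiguity limit):

```python
# Toy "controlled fragmentation": the disk is divided into regions
# (stand-ins for UFS cylinder groups). A small file stays in one region;
# a file larger than MAX_CONTIG blocks is split into MAX_CONTIG-sized
# chunks, each placed in the currently least-used region, so no single
# region fills up.
MAX_CONTIG = 4   # max consecutive blocks one file may take (toy value)

def place_file(regions, nblocks):
    """Return [(region_index, chunk_size), ...] for a file of nblocks."""
    placement = []
    while nblocks > 0:
        chunk = min(nblocks, MAX_CONTIG)
        # pick the emptiest region for this chunk
        target = min(range(len(regions)), key=lambda r: regions[r])
        regions[target] += chunk          # account for the used space
        placement.append((target, chunk))
        nblocks -= chunk
    return placement

regions = [0, 0, 0, 0]                    # used blocks per region
small = place_file(regions, 3)            # fits entirely in one region
large = place_file(regions, 10)           # deliberately spread across regions
```

Running this, the 3-block file lands in one region while the 10-block file is split across three, which is the "intentionally fragmented in a controlled way" behaviour described above.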

One result of this scheme (and other issues) is that access time for
large files suffers a bit (but not as much as it would if they were
heavily fragmented). If you are setting up a volume mainly for storing
large files, you can adjust some of the parameters (e.g. using
tunefs(8)) so the filesystem will handle large files more efficiently,
at the expense of wasting space on small files.

 fsck is equivalent to scandisk, right?

Pretty much. It looks for errors and tries to fix them. It does not
attempt to defragment the disk. Unless the disk is almost full,
defragmenting probably wouldn't improve things enough to matter.


 so when you delete files and start getting 'holes', how does *nix deal
 with it?


The process of scattering files all over the disk intentionally leaves
holes all over the disk (that's what I mean by controlled
fragmentation). When you add and delete files, those holes get bigger
and smaller, and merge or split apart, but until the disk gets very
full, there should always be holes big enough to efficiently store new
files. The difference between this and what happens in a FAT
filesystem is that the process is designed so that there is a
statistically high likelihood that the holes produced will be large
enough to be used efficiently.

 --
 In friendship,
 prad

I hope that helps. And as usual, if I got any of that wrong, someone
please correct me. If I answer this question often enough, I will
eventually get it right, then perhaps we can make it a FAQ ;-)

- Bob


Re: defrag

2008-08-28 Thread Wojciech Puchar


Essentially, the UFS file system (and its close relatives) is
intentionally fragmented in a controlled way as the files are written,


Exactly - that was invented over 20 years ago and it still works perfectly.


at sort-of-random locations all over the disk, rather than starting at


It's definitely NOT sort-of-random.

It divides the disk into cylinder groups. It puts new files in the same
cylinder group as the other files in the same directory, BUT when a file grows
large (like over 1MB) it FORCES fragmentation by switching to another
cylinder group.


The reason is simple - having a file fragmented every few megs doesn't make
a speed difference, while it keeps any single cylinder group from filling up.


For small files there will almost always be space available in the same
cylinder group. Seek time within a cylinder group is on the order of 2-3ms
at most.


UFS from the beginning optimized for rotational delay too, by dividing
tracks into multiple angular zones, so if it has to fragment within a
cylinder group, it chooses space in the zone that gives the shortest
possible rotational delay after the head movement.

Same for seeking between an inode and its file data.

Unfortunately, modern drives hide their real geometry, so such optimization
doesn't work any more. This is quite a large loss: for a 7200rpm drive one
rotation is about 8.3 ms, so the average rotational delay is about 4.2 ms;
it could be half that or less if such optimization were possible.
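The arithmetic behind those rotational figures, assuming a 7200 rpm drive and a uniformly random rotational position when the head arrives:

```python
# Rotational delay for a 7200 rpm drive: one revolution takes
# 60 s / 7200 ~= 8.33 ms, and with no rotational optimization the head
# waits half a revolution on average before the wanted sector comes around.
RPM = 7200
rotation_ms = 60_000 / RPM          # milliseconds per full rotation
avg_delay_ms = rotation_ms / 2      # average wait, random start position
print(f"rotation: {rotation_ms:.2f} ms, average delay: {avg_delay_ms:.2f} ms")
```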


UFS does not just prevent fragmentation, it tries to manage it (as an
unavoidable thing) to make its effect as small as possible.



All of this worked fine and efficiently on a roughly 1 MIPS computer like a
VAX. UFS has changed a lot since then, but this basic mechanism is still the
same.


Except in extreme cases there is never a need to defragment a UFS
filesystem!!!





the remaining space as efficiently as possible, at the cost of speed.


While it can still keep fragmentation quite low with much less free space
available (unless it's really close to 0%), the lower speed mostly means
higher CPU load when selecting blocks to allocate. On modern machines of
1GHz or more it's difficult to see any difference.


large files, you can adjust some of the parameters (e.g. using
tunefs(8)) so the filesystem will handle large files more efficiently,
at the expense of wasting space on small files.


Rather with newfs, by making huge blocks like -b 65536 -f 8192, and making
MUCH fewer inodes (like -i 1048576).


Still, it will then lose about as much space as FAT32 with 8kB clusters,
which AFAIK is the default for FAT32 on large drives.


With huge files, such settings may not only speed things up a bit, but
actually save space by not reserving as much for inodes and bitmaps.
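A back-of-the-envelope look at that inode-table saving. The 256-byte inode size and the "default" density used below are assumptions for illustration, not exact newfs arithmetic:

```python
# Rough inode-table overhead: newfs's -i flag sets bytes of data per
# inode, so a sparser inode density means a smaller inode table.
# Assumes 256-byte UFS2 inodes and a 1 TiB volume; illustrative only.
INODE_SIZE = 256                     # bytes per inode (assumed)
VOLUME = 1 * 1024**4                 # 1 TiB volume (assumed)

def inode_table_bytes(bytes_per_inode):
    """Total bytes spent on inodes at the given -i density."""
    return (VOLUME // bytes_per_inode) * INODE_SIZE

dense_overhead = inode_table_bytes(4096)        # one inode per 4 KiB of data
sparse_overhead = inode_table_bytes(1048576)    # newfs -i 1048576
print(dense_overhead // 1024**2, "MiB vs", sparse_overhead // 1024**2, "MiB")
```

At one inode per 1 MiB, the inode table shrinks by a factor of 256 relative to the dense case, which is the space saving the post alludes to.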



Re: defrag

2008-08-28 Thread Wojciech Puchar

you will get blocks arranged like this (where 1 is file 1's data, 2 is
data from file 2 and 3 is data from file 3):

123123123123123123123123123123


This is just untrue. I don't much like Microsoft, but I don't think


I AM sure it is like that under DOS up to 6.2 (where I tested it), and
almost sure for windoze 95/98.


Possibly untrue in Win NT, or it just allocates more blocks at a time for
each file, like

111222333111222333 etc.




Re: defrag

2008-08-28 Thread Ian Smith
On Thu, 28 Aug 2008 13:33:35 +0200 (CEST)
Wojciech Puchar [EMAIL PROTECTED] wrote:

  CP/M was single-user and was used on floppies up to 360kB AFAIK, 

And MP/M was multi-user, using the same filesystem.  From memory, there 
was perhaps one byte that indicated which user owned a file :)

  NTFS is a theft of OS/2 HPFS. They didn't even bother to use another
  partition ID :), but they managed to f..k^H^H^H^Hextend its
  functionality, so it's actually even slower than FAT - and it, too, does
  nothing to prevent fragmentation.

It wasn't (straight-up) theft; MS cut a deal with IBM to use HPFS and 
OS/2, more or less in exchange for letting IBM licence Windows 3.1 as 
WINOS/2

When things went sour - google provides days of happy reading if you're 
interested - MS morphed it into NTFS for NT, cruelled the deal with IBM 
so OS/2 couldn't run NT/Win95 apps (signing OS/2's death warrant, though 
it took a long time to die) and stopped distributing OS/2 themselves.

  This is normal, as Microsoft creates problems so it can fix them
  (creating 3 times as many new ones) in new releases, so idiots continue to
  buy new versions of windoze and new hardware, just to do tasks as simple as
  writing a few-page document or viewing a webpage

Yeah, yeah :)  I'd be surprised if NTFS isn't as defrag-proof as HPFS, 
which as I recall had self-defragging garbage-collecting features built 
in; certainly I never felt the need to defrag any HPFS volumes, and I 
used it for quite a few years to run BBS and Fidonet stuff, not once 
losing any data .. HPFS was a very resilient and reliable filesystem.

If you compare:
% find /usr/src -name '*hpfs*'
with
% find /usr/src -name '*ntfs*'

you'll go 'hmmm ..' and if you look through the sources you'll see whole 
large slabs of code that are shared between those two implementations, 
by the same author.

I've never tried writing to HPFS volumes, but I did recover many years 
of work and play from a number of HPFS disks and still hope to do some 
more someday, so I was glad to see the code is still there in 7.0 ..

cheers, Ian


Re: defrag

2008-08-28 Thread RW
On Fri, 29 Aug 2008 02:43:40 +0200 (CEST)
Wojciech Puchar [EMAIL PROTECTED] wrote:

  you will get block arranged like this (where 1 is file 1's data,2
  is data from file 2 and 3 from file 3):
 
  123123123123123123123123123123
 
  This is just untrue. I don't much like Microsoft, but I don't think
 
 I AM sure it is like that under DOS up to 6.2 (where I tested it),
 and almost sure for windoze 95/98.

Well, you can't really say "it's just like FAT" if you've only looked
at FAT.

 possibly untrue in Win NT, 

From what I've read, it's a journalling filesystem based on a
B+ tree, with small files stored directly in the tree and larger files in
variable-length extents. It sounds superficially similar to several
UNIX filesystems.

I see that ext4, the successor to ext3, which also has extent
support, has a defragmenter. And it appears to give significant
increases in read speeds.

http://ols.108.redhat.com/2007/Reprints/sato-Reprint.pdf


Re: defrag

2008-08-27 Thread Michael Powell
prad wrote:

 something that has puzzled me for years (but i've never got around to
 asking) is how does *nix get away without regular defrag as with
 windoze.
 
 fsck is equivalent to scandisk, right?
 
 so when you delete files and start getting 'holes', how does *nix deal
 with it?
 

The short answer is that defragmentation is built in as an integral part of
the filesystem. So you can think of it as always running, as opposed to being
regularly scheduled by some entity/application external to the filesystem.
No third-party Diskeeper-like utilities required.

-Mike




Re: defrag

2008-08-27 Thread Fred C


Maybe it is because the FAT filesystem wasn't well designed from the
beginning and defrag was a workaround to solve performance problems.


-fred-

On Aug 27, 2008, at 5:29 PM, prad wrote:


something that has puzzled me for years (but i've never got around to
asking) is how does *nix get away without regular defrag as with
windoze.

fsck is equivalent to scandisk, right?

so when you delete files and start getting 'holes', how does *nix deal
with it?

--
In friendship,
prad

 ... with you on your journey
Towards Freedom
http://www.towardsfreedom.com (website)
Information, Inspiration, Imagination - truly a site for soaring I's










Re: defrag

2008-08-27 Thread Mike Jeays
On August 27, 2008 09:35:42 pm Fred C wrote:
 Maybe it is because the FAT filesystem wasn't well designed from the
 beginning and defrag was a workaround to solve performance problems.

 -fred-

 On Aug 27, 2008, at 5:29 PM, prad wrote:
  something that has puzzled me for years (but i've never got around to
  asking) is how does *nix get away without regular defrag as with
  windoze.
 
  fsck is equivalent to scandisk, right?
 
  so when you delete files and start getting 'holes', how does *nix deal
  with it?
 
  --
  In friendship,
  prad
 
   ... with you on your journey
  Towards Freedom
  http://www.towardsfreedom.com (website)
  Information, Inspiration, Imagination - truly a site for soaring I's


That's true about FAT.  What I have never understood is why Microsoft didn't 
fix the problem when they designed NTFS.  UFS and EXT2 both existed at that 
time, and neither needs periodic defragmentation.

-- 
Mike Jeays
http://www.jeays.ca


Re: defrag

2008-08-27 Thread RW
On Wed, 27 Aug 2008 22:08:47 -0400
Mike Jeays [EMAIL PROTECTED] wrote:

 
 That's true about FAT.  What I have never understood is why Microsoft
 didn't fix the problem when they designed NTFS.  UFS and EXT2 both
 existed at that time, and neither needs periodic defragmentation.

I think they probably did: NTFS took a lot from UNIX filesystems, and
at the time it was released Microsoft said that NTFS didn't need any
defragmentation at all.

I suspect that it's mostly a matter of attitude. Windows users have an
irrational, obsessive-compulsive attitude to fragmentation, so they
end up with good, reliable defragmenters, and so less reason not to use
them. We don't really care, so we end up with no, or poor,
defragmenters, which reinforces our don't-care attitude.


Re: defrag

2007-03-07 Thread RW
On Sat, 3 Mar 2007 15:01:12 +0100 (CET)
Christian Baer [EMAIL PROTECTED] wrote:
 
 You do know that you can use 'tunefs -m 0'? This will in fact cause
 fragmentation to happen - even on UFS2! UFS2 has methods of avoiding
 fragmentation that work quite well but it is not a 'magical' file
 system, which only means that every gain comes with a price. In this
 case the price is 10-15% of the HD's space.

What happens if you use tunefs -m 0, but don't use the released space?

Or if you only occasionally use it?


Re: defrag

2007-03-05 Thread Kevin Kinsey

Giorgos Keramidas wrote:

On 2007-03-02 11:27, Mario Lobo [EMAIL PROTECTED] wrote:

On Thursday 01 March 2007 17:27, Pietro Cerutti wrote:

On 3/1/07, Kevin Kinsey [EMAIL PROTECTED] wrote:

Kevin Kinsey wrote:

groff /usr/share/doc/smm/05.fastfs/* > ~/ffs.ps

This is what worked for me:

[~]gunzip -c /usr/share/doc/smm/05.fastfs/paper.ascii.gz > paper.ascii
[~]groff paper.ascii > ffs.ps
[~]ps2pdf ffs.ps
[~]acroread ffs.pdf


Actually 'paper.ascii' is a plain ASCII file with some 'escape
sequences' -- like literal backspace and repeated characters, to denote
*bold* text.  It's not valid groff input AFAIK, but you can strip off
the special characters with:

gunzip -c /usr/share/doc/smm/05.fastfs/paper.ascii.gz > 05.fastfs.ascii
col -b < 05.fastfs.ascii > 05.fastfs.txt && rm 05.fastfs.ascii

Then you have a plain text version of 05.fastfs.txt, which can be
converted to PS and/or PDF with tools like a2ps or enscript :)



As *you* know, but maybe some others don't, I'm a (relative) newb and 
was clueless about these old papers; I believe I came across my 
hackage by, err, hacking?  It worked, but not nearly so prettily ;-)


Oh, and I tried *piping* to `col -bx`, but no joy.  Thanks for the 
__real__ magic!


KDK

--
There is one difference between a tax collector and
a taxidermist -- the taxidermist leaves the hide.
-- Mortimer Caplan


Re: defrag

2007-03-04 Thread Wojciech Puchar

As you said, HFS(+) is not a native unix file system, but maybe someone
will know about it. All I know about is that HFS+ is a journaling file
system and that it defragments (in the Windows sense) files smaller than
certain size (20MB?) on the fly.


This may be completely OT here, but I gotta ask: Is Reiser a native Unix
FS?


fortunately not!


Re: defrag

2007-03-03 Thread Christian Baer
On Thu, 1 Mar 2007 13:50:44 -0700 Steve Franks wrote:

 Excellent!  Never had that one answered.  I've gone down the typical
 road of being an MS booster (It doesn't take 10 hours to set up and
 configure) to experiencing glee when I find yet another way FBSD
 kicks the crap out of MS.  Why?  Because I've grown up, and learned
 that 2 hours time spent *reading* and configuring is way better than 2
 days time spent when the system crashes in the middle of the workweek
 - bottom line, BSD is cheaper before, during, and after installation.
 Probably by a factor of 10 for me over the last 10 years.  As I write
 this, I'm on a MS laptop that has degraded to the point where any disk
 access takes 10 seconds before the display updates (but not from the
 shell - so not a defrag issue, just a screwed registry or something).
 I used to reinstall my entire MS server every 6 months, on average...

Hence the three Rs of MS support.

Regards
Chris


Re: defrag

2007-03-03 Thread Christian Baer
On Thu, 1 Mar 2007 17:39:05 -0500 Jerry McAllister wrote:

 Well, it would do some, but for the greatest effect, you would need:
   dump + rm -rf * + restore
 That would get it all.

 Of course, I should have re-emphasized that this is not needed.
 You will not improve performance.   Its only value might be to exercise
 every used file block on the filesystem to make sure it is still
 readable. And for that you don't need to nuke and rewrite things.

You could try changing the above command to 'rm -rfP *'. That would
make sure everything on your file system is still readable. And it would
give you a lot of time to think about it. :-)

 Just doing the backup (which you should do anyway) will read up all
 used file space (except what you might have marked as nodump).

Actually, that way you won't get every sector on the drive - not unless
the drive is full to the brim anyway.

If you really just want to check the drive, use 
  smartctl -t long /dev/whatever

You could also try
  dd if=/dev/whatever of=/dev/null bs=1m

The idea with the backup isn't a bad one either. Cause if your drive
goes up in flames, you don't really care. You still have your data.
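
The dd read-check above can be rehearsed safely on a scratch file first; pointing `if=` at a real device works the same way (note that GNU dd spells the block size `1M`, while BSD dd accepts `1m`):

```shell
# Make 64 KB of scratch data, then read all of it back through dd,
# exactly as one would read a whole device; an unreadable block
# would make the second dd fail instead of printing "readable".
dd if=/dev/zero of=/tmp/ddcheck.bin bs=1k count=64 2>/dev/null
dd if=/tmp/ddcheck.bin of=/dev/null bs=1k 2>/dev/null && echo "readable"
rm -f /tmp/ddcheck.bin
```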

Regards
Chris


Re: defrag

2007-03-03 Thread Christian Baer
On Fri, 2 Mar 2007 11:12:25 -0500 Jerry McAllister wrote:

 On the other hand, doing all this either way wouldn't make any difference 
 in performance for file access in a running system because so-called
 fragmentation is not an issue in the UNIX file system - except in
 the small possibility that it might make a bit of difference in a
 file system filled to capacity, well into the reserve where non-root
 processes are not allowed to write anyway.   I don't know just how 
 close to absolutely full you have to get to see any difference, but it
 is beyond what users would normally get to.

You do know that you can use 'tunefs -m 0'? This will in fact cause
fragmentation to happen - even on UFS2! UFS2 has methods of avoiding
fragmentation that work quite well but it is not a 'magical' file
system, which only means that every gain comes with a price. In this
case the price is 10-15% of the HD's space.

BTW, I have used tunefs to utilize all of the space on some drives.
However, those drives contain only static information that has to be
accessed often and fast; that is the reason it is on a drive at all. If
you know what you are doing, this option is OK. Otherwise, the user
will run into trouble when the drive fills up and the information
stored on it is not static.
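
A quick way to see the numbers behind this (the device name is purely hypothetical, and `tunefs` itself only runs against a real, unmounted UFS filesystem, so only the arithmetic below is meant to be executed):

```shell
# tunefs -p /dev/ada0s1f    # print current tuning, including minfree (hypothetical device)
# tunefs -m 8 /dev/ada0s1f  # restore the traditional 8% reserve

# What an 8% reserve costs on, say, a 500 GB filesystem:
size_gb=500
reserve_gb=$((size_gb * 8 / 100))
echo "${reserve_gb} GB held back from non-root writes"
```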

Regards
Chris


Re: defrag

2007-03-03 Thread Christian Baer
On Thu, 01 Mar 2007 22:56:02 +0100 Ivan Voras wrote:

 For what it's worth, this has been Microsoft's official position since
 NTFS became mainstream.

As usual, it's not worth much if it comes from Microsoft...

Regards
Chris


Re: defrag

2007-03-03 Thread Christian Baer
On Thu, 1 Mar 2007 17:21:57 -0500 Bill Moran wrote:

 But this also makes it _easy_ for the filesystem to avoid causing the type
 of fragmentation that _does_ degrade performance.  For example, when the
 first block is on track 10, then the next block is on track 20, then we're
 back to track 10 again, then over to track 35 ... etc, etc

Fragmentation *this* bad doesn't happen on MS systems either. Although
those systems are much more prone to making a mess of the drive, there
is some logic built in to limit it, such as only letting the track
numbers rise or fall (possibly per file access) rather than jumping
back and forth across the drive.

I can remember experimenting on my Commodore 64 (can anyone remember
that ol' thing?) and the floppy drive. I stored a file all over the
disc, one sector per track. The idea was to find out how much time it
actually took to load a file fragmented like this - and made a really
cool loading sound as well, especially if you had a floppy speeder like
dolphin DOS. :-) I wanted to actually cause the drive to go from track 1
to 40 and then back again while loading a single file. But that didn't
work. So if I started on track a and was now on track c, then jumping
to track b (with a < b < c) resulted in an error from the drive. Mind you,
this was not a load command that I programmed. It's just the way the
file was allocated on the disc.

A certain logic to how files are saved on discs (no matter if hard or
floppy) has been around for a fair while.

Regards
Chris


Re: defrag

2007-03-03 Thread Christian Baer
On Thu, 01 Mar 2007 23:56:30 +0100 Ivan Voras wrote:

   UFS fragmentation refers to dividing blocks (e.g. 16KB in size) into
 block fragments (e.g. 2KB in size) that can be allocated separately in
 special circumstances (which all boil down to: at the end of files).
 This is done to lessen the effect of internal fragmentation.

No, to lessen the loss of disc space.

Regards
Chris


Re: defrag

2007-03-03 Thread Christian Baer
On Fri, 02 Mar 2007 02:14:07 +0100 Ivan Voras wrote:

 As you said, HFS(+) is not a native unix file system, but maybe someone
 will know about it. All I know about is that HFS+ is a journaling file
 system and that it defragments (in the Windows sense) files smaller than
 certain size (20MB?) on the fly.

This may be completely OT here, but I gotta ask: Is Reiser a native Unix
FS?

Regards
Chris


Re: defrag

2007-03-03 Thread Duane Hill

On Sat, 3 Mar 2007, Christian Baer wrote:


On Fri, 02 Mar 2007 02:14:07 +0100 Ivan Voras wrote:


As you said, HFS(+) is not a native unix file system, but maybe someone
will know about it. All I know about is that HFS+ is a journaling file
system and that it defragments (in the Windows sense) files smaller than
certain size (20MB?) on the fly.


This may be completely OT here, but I gotta ask: Is Reiser a native Unix
FS?


http://en.wikipedia.org/wiki/Namesys


Re: defrag

2007-03-03 Thread Jerry McAllister
On Sat, Mar 03, 2007 at 02:53:30PM +0100, Christian Baer wrote:

 On Thu, 1 Mar 2007 17:39:05 -0500 Jerry McAllister wrote:
 
  Well, it would do some, but for the greatest effect, you would need:
dump + rm -rf * + restore
  That would get it all.
 
  Of course, I should have re-emphasized that this is not needed.
  You will not improve performance.   Its only value might be to exercise
  every used file block on the filesystem to make sure it is still
  readable. And for that you don't need to nuke and rewrite things.
 
 You could try changing the above command to 'rm -rfP *'. That would
 make sure everything on your file system is still readable. And it would
 give you a lot of time to think about it. :-)
 
  Just doing the backup (which you should do anyway) will read up all
  used file space (except what you might have marked as nodump).
 
 Actually, that way you won't get every sector on the drive - not unless
 the drive is full to the brim anyway.

Note that I did say all of the _used_ space - i.e. actual files.

 
 If you really just want to check the drive, use 
   smartctl -t long /dev/whatever
 
 You could also try
   dd if=/dev/whatever of=/dev/null bs=1m
 
 The idea with the backup isn't a bad one either. Cause if your drive
 goes up in flames, you don't really care. You still have your data.

Yup, just what I was sort of pointing out.

jerry

 
 Regards
 Chris


Re: defrag

2007-03-02 Thread Wojciech Puchar

shell - so not a defrag issue, just a screwed registry or something).
I used to reinstall my entire MS server every 6 months, on average...


As rarely as that? It looks like you are a very good Windows admin, or
that MS server wasn't used much.



Re: defrag

2007-03-02 Thread Wojciech Puchar

Wojciech Puchar [EMAIL PROTECTED] wrote:


backup+restore will be defrag


you mean : backup, format, (reinstall if needed, depending on method of backup)


s/format/newfs/g
no reinstall, that's not windows.
after restoring all files, bsdlabel -B /dev/your_disk is enough



Re: defrag

2007-03-02 Thread Mario Lobo
On Thursday 01 March 2007 17:27, Pietro Cerutti wrote:
 On 3/1/07, Kevin Kinsey [EMAIL PROTECTED] wrote:
  Kevin Kinsey wrote:
 
  groff /usr/share/doc/smm/05.fastfs/* > ~/ffs.ps

   /\/\

This is what worked for me:

[~]gunzip -c /usr/share/doc/smm/05.fastfs/paper.ascii.gz > paper.ascii
[~]groff paper.ascii > ffs.ps
[~]ps2pdf ffs.ps
[~]acroread ffs.pdf
-- 
*
   //| //| Mario Lobo
  // |// | http://www.ipad.com.br
 //  //  |||  FreeBSD since 2.2.8 - 100% Rwindows-free
*




Re: defrag

2007-03-02 Thread Vince Hoffman
Kevin Kinsey wrote:
 Kevin Kinsey wrote:
 Steve Franks wrote:
 How come I never hear defrag come up as a topic, and can't find
 anything related to defrag in the ports tree?  Is it really not an
  issue on UFS?  Can someone point me to an explanation if so?

 Thanks,
 Steve

 I'm thinking this one's in the FAQ at freebsd.org.

 
 Bah!  HEADS-UP:  Ignore any advice I feel compelled to give today.  Two
 retractions in one hour would seem to demonstrate a cranial
 short-circuit this morning.  Steve, it's not in the FAQ.
 
  Here's a link to a brief mailing-list discussion:
 
 http://lists.freebsd.org/pipermail/freebsd-chat/2003-July/000932.html
 
 Assuming you have Ghostscript installed (which may be a big IF), you
 might be able to take a gander at the document mentioned with something
 like:
 
 groff /usr/share/doc/smm/05.fastfs/* ~/ffs.ps
 ps2pdf ~/ffs.ps
 acroread ~/ffs.pdf
 
 But there's probably a better way --- I'm certainly one offing today.
 

If you don't mind reading in a terminal,
gzcat /usr/share/doc/smm/05.fastfs/* | more
does the trick fine for me. By the way, thanks for the link to the doc.

Vince


 Kevin Kinsey



Re: defrag

2007-03-02 Thread Jerry McAllister
On Fri, Mar 02, 2007 at 02:17:31AM +0100, Ivan Voras wrote:

 Jerry McAllister wrote:
 
  Well, it would do some, but for the greatest effect, you would need:
dump + rm -rf * + restore
 
 This is nitpicking so ignore it: deleting all files on UFS2 volume won't
 restore it to its pristine state because inodes are lazily initialized.
 It doesn't have anything to do with fragmentation, but will make fsck
 run a little longer.
 

True, it wouldn't be quite pristine, because files would have different
inodes assigned when they get reloaded than they might have if it was
newfs-ed before reloading.   That might make fsck run a tiny bit slower,
but it wouldn't make any difference to file access on a running system.

On the other hand, doing all this either way wouldn't make any difference 
in performance for file access in a running system because so-called
fragmentation is not an issue in the UNIX file system - except in
the small possibility that it might make a bit of difference in a
file system filled to capacity, well into the reserve where non-root
processes are not allowed to write anyway.   I don't know just how 
close to absolutely full you have to get to see any difference, but it
is beyond what users would normally get to.

jerry


Re: defrag

2007-03-02 Thread Giorgos Keramidas
On 2007-03-02 11:27, Mario Lobo [EMAIL PROTECTED] wrote:
 On Thursday 01 March 2007 17:27, Pietro Cerutti wrote:
  On 3/1/07, Kevin Kinsey [EMAIL PROTECTED] wrote:
   Kevin Kinsey wrote:
  
   groff /usr/share/doc/smm/05.fastfs/* > ~/ffs.ps
 
 This is what worked for me:
 
 [~]gunzip -c /usr/share/doc/smm/05.fastfs/paper.ascii.gz > paper.ascii
 [~]groff paper.ascii > ffs.ps
 [~]ps2pdf ffs.ps
 [~]acroread ffs.pdf

Actually 'paper.ascii' is a plain ASCII file with some 'escape
sequences' -- like literal backspace and repeated characters, to denote
*bold* text.  It's not valid groff input AFAIK, but you can strip off
the special characters with:

gunzip -c /usr/share/doc/smm/05.fastfs/paper.ascii.gz > 05.fastfs.ascii
col -b < 05.fastfs.ascii > 05.fastfs.txt && rm 05.fastfs.ascii

Then you have a plain text version of 05.fastfs.txt, which can be
converted to PS and/or PDF with tools like a2ps or enscript :)
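
For the curious, here is what `col -b` is undoing: nroff renders bold text as char-backspace-char overstrikes, and `col -b` keeps only the final character written to each column (a tiny demo, assuming the `col` utility from util-linux/BSD is installed):

```shell
# 'b\bb' = print b, backspace, print b again: an overstruck (bold) 'b'.
printf 'b\bbo\bol\bld\bd text\n' | col -b   # prints: bold text
```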



Re: defrag

2007-03-01 Thread Jerry McAllister
On Thu, Mar 01, 2007 at 09:49:09AM -0700, Steve Franks wrote:

 How come I never hear defrag come up as a topic, and can't find
 anything related to defrag in the ports tree?  Is it really not an
  issue on UFS?  Can someone point me to an explanation if so?

It's really not an issue.

jerry

 
 Thanks,
 Steve


Re: defrag

2007-03-01 Thread Kevin Kinsey

Steve Franks wrote:

How come I never hear defrag come up as a topic, and can't find
anything related to defrag in the ports tree?  Is it really not an
issue on UFS?  Can someone point me to an explanation if so?

Thanks,
Steve


I'm thinking this one's in the FAQ at freebsd.org.

Kevin Kinsey

--
HELP!  MY TYPEWRITER IS BROKEN!
-- E. E. CUMMINGS



Re: defrag

2007-03-01 Thread Kevin Kinsey

Kevin Kinsey wrote:

Steve Franks wrote:

How come I never hear defrag come up as a topic, and can't find
anything related to defrag in the ports tree?  Is it really not an
issue on UFS?  Can someone point me to an explanation if so?

Thanks,
Steve


I'm thinking this one's in the FAQ at freebsd.org.



Bah!  HEADS-UP:  Ignore any advice I feel compelled to give today.  Two 
retractions in one hour would seem to demonstrate a cranial 
short-circuit this morning.  Steve, it's not in the FAQ.


Here's a link to a brief mailing-list discussion:

http://lists.freebsd.org/pipermail/freebsd-chat/2003-July/000932.html

Assuming you have Ghostscript installed (which may be a big IF), you 
might be able to take a gander at the document mentioned with something 
like:


groff /usr/share/doc/smm/05.fastfs/* ~/ffs.ps
ps2pdf ~/ffs.ps
acroread ~/ffs.pdf

But there's probably a better way --- I'm certainly one offing today.

Kevin Kinsey
--
I don't care for the Sugar Smacks commercial.  I don't like the idea of
a frog jumping on my Breakfast.
-- Lowell, Chicago Reader 10/15/82



Re: defrag

2007-03-01 Thread Pietro Cerutti

On 3/1/07, Kevin Kinsey [EMAIL PROTECTED] wrote:

Kevin Kinsey wrote:

groff /usr/share/doc/smm/05.fastfs/* > ~/ffs.ps

 /\/\
--
Pietro Cerutti

- ASCII Ribbon Campaign -
against HTML e-mail and
proprietary attachments
  www.asciiribbon.org


Re: defrag

2007-03-01 Thread Wojciech Puchar



How come I never hear defrag come up as a topic, and can't find
anything related to defrag in the ports tree?  Is it really not an
issue on UFS?  Can someone point me to an explanation if so?


Unless you keep your filesystem always near-full, it's not an issue.

POSSIBLY you could gain a few percent improvement with defrag after a
long time of usage, but no more. AFAIK there is no defrag for UFS.


backup+restore will be defrag


Re: defrag

2007-03-01 Thread Steve Franks

Excellent!  Never had that one answered.  I've gone down the typical
road of being an MS booster (It doesn't take 10 hours to set up and
configure) to experiencing glee when I find yet another way FBSD
kicks the crap out of MS.  Why?  Because I've grown up, and learned
that 2 hours time spent *reading* and configuring is way better than 2
days time spent when the system crashes in the middle of the workweek
- bottom line, BSD is cheaper before, during, and after installation.
Probably by a factor of 10 for me over the last 10 years.  As I write
this, I'm on a MS laptop that has degraded to the point where any disk
access takes 10 seconds before the display updates (but not from the
shell - so not a defrag issue, just a screwed registry or something).
I used to reinstall my entire MS server every 6 months, on average...

Steve

On 3/1/07, Kevin Kinsey [EMAIL PROTECTED] wrote:

Kevin Kinsey wrote:
 Steve Franks wrote:
 How come I never hear defrag come up as a topic, and can't find
 anything related to defrag in the ports tree?  Is it really not an
  issue on UFS?  Can someone point me to an explanation if so?

 Thanks,
 Steve

 I'm thinking this one's in the FAQ at freebsd.org.


Bah!  HEADS-UP:  Ignore any advice I feel compelled to give today.  Two
retractions in one hour would seem to demonstrate a cranial
short-circuit this morning.  Steve, it's not in the FAQ.

Here's a link to a brief mailing-list discussion:

http://lists.freebsd.org/pipermail/freebsd-chat/2003-July/000932.html

Assuming you have Ghostscript installed (which may be a big IF), you
might be able to take a gander at the document mentioned with something
like:

groff /usr/share/doc/smm/05.fastfs/* ~/ffs.ps
ps2pdf ~/ffs.ps
acroread ~/ffs.pdf

But there's probably a better way --- I'm certainly one offing today.

Kevin Kinsey
--
I don't care for the Sugar Smacks commercial.  I don't like the idea of
a frog jumping on my Breakfast.
-- Lowell, Chicago Reader 10/15/82





--
Steve Franks, KE7BTE
Staff Engineer
La Palma Devices, LLC
http://www.lapalmadevices.com
(520) 312-0089


Re: defrag

2007-03-01 Thread Jerry McAllister
On Thu, Mar 01, 2007 at 11:21:16AM -0600, Kevin Kinsey wrote:

 Kevin Kinsey wrote:
 Steve Franks wrote:
 How come I never hear defrag come up as a topic, and can't find
 anything related to defrag in the ports tree?  Is it really not an
  issue on UFS?  Can someone point me to an explanation if so?
 
 Thanks,
 Steve
 
 I'm thinking this one's in the FAQ at freebsd.org.
 
 
 Bah!  HEADS-UP:  Ignore any advice I feel compelled to give today.  Two 
 retractions in one hour would seem to demonstrate a cranial 
 short-circuit this morning.  Steve, it's not in the FAQ.

It should be, maybe including a pointer to 
that   /usr/share/doc/smm/05.fastfs   paper.
It is frequently asked.

jerry

 
  Here's a link to a brief mailing-list discussion:
 
 http://lists.freebsd.org/pipermail/freebsd-chat/2003-July/000932.html
 
 Assuming you have Ghostscript installed (which may be a big IF), you 
 might be able to take a gander at the document mentioned with something 
 like:
 
 groff /usr/share/doc/smm/05.fastfs/* ~/ffs.ps
 ps2pdf ~/ffs.ps
 acroread ~/ffs.pdf
 
 But there's probably a better way --- I'm certainly one offing today.
 
 Kevin Kinsey
 -- 
 I don't care for the Sugar Smacks commercial.  I don't like the idea of
 a frog jumping on my Breakfast.
   -- Lowell, Chicago Reader 10/15/82
 


Re: defrag

2007-03-01 Thread Kevin Kinsey

Steve Franks wrote:

Excellent!  Never had that one answered.  I've gone down the typical
road of being an MS booster (It doesn't take 10 hours to set up and
configure) to experiencing glee when I find yet another way FBSD
kicks the crap out of MS.  Why?  Because I've grown up, and learned
that 2 hours time spent *reading* and configuring is way better than 2
days time spent when the system crashes in the middle of the workweek
- bottom line, BSD is cheaper before, during, and after installation.
Probably by a factor of 10 for me over the last 10 years.  As I write
this, I'm on a MS laptop that has degraded to the point where any disk
acess takes 10 seconds before the display updates (but not from the
shell - so not a defrag issue, just a screwed registry or something).
I used to reinstall my entire MS server every 6 months, on average...

Steve


There are some advantages to FBSD, for certain.  Your last sentence is a 
huge example, although I know some Winservers that have been running on 
the same install for quite some time (but some of those have to be 
rebooted fairly often).  A big *BSD argument is uptime - my personal 
server record is ~450 days, but you do kinda worry, because there was 
probably supposed to be a security fix with a new kernel somewhere 
during that time period.


Pietro Cerutti wrote:

groff /usr/share/doc/smm/05.fastfs/* ~/ffs.ps
ps2pdf ~/ffs.ps
acroread ~/ffs.pdf

But there's probably a better way --- I'm certainly one offing today.


 groff /usr/share/doc/smm/05.fastfs/* > ~/ffs.ps

Err, yes; that's today's one off groff ... thanks.  Never type what 
you can copy/paste  B-/


KDK
--
Hear about...
	the guru who refused Novocain while having a tooth pulled because he 
wanted to transcend dental medication?



Re: defrag

2007-03-01 Thread Ivan Voras
Steve Franks wrote:
 How come I never hear defrag come up as a topic, and can't find
 anything related to defrag in the ports tree?  Is it really not an
  issue on UFS?  Can someone point me to an explanation if so?

fsck will tell you the level of fragmentation on the file system:

 fsck /usr
** /dev/ad0s2g (NO WRITE)
** Last Mounted on /usr
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
352462 files, 2525857 used, 875044 free (115156 frags, 94986 blocks,
3.4% fragmentation)

This is from a /usr system that's been in use for years. (note that
frags in the last line refer to file system fragments - subblocks,
not fragmented files).
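
If all you want is that percentage (say, for a nightly report), it can be pulled out of the fsck summary line; the sample below is the line quoted above, and the sed expression is just one way to do it:

```shell
# fsck's closing summary line, as captured from this message.
line='352462 files, 2525857 used, 875044 free (115156 frags, 94986 blocks, 3.4% fragmentation)'
# Keep only the number in front of "% fragmentation".
echo "$line" | sed -n 's/.* \([0-9.]*\)% fragmentation.*/\1/p'   # prints: 3.4
```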






Re: defrag

2007-03-01 Thread Richard Lynch
On Thu, March 1, 2007 3:35 pm, Ivan Voras wrote:
 Steve Franks wrote:
 How come I never hear defrag come up as a topic, and can't find
 anything related to defrag in the ports tree?  Is it really not an
  issue on UFS?  Can someone point me to an explanation if so?

I've been told that most modern file systems have much better
allocation routines and/or automated defragmentation as needed.

So the need to do a defrag is essentially zero for almost all users.

No promises that this answer is correct, but it sure sounded good to me.

-- 
Some people have a gift link here.
Know what I want?
I want you to buy a CD from some starving artist.
http://cdbaby.com/browse/from/lynch
Yeah, I get a buck. So?



Re: defrag

2007-03-01 Thread Bill Moran
In response to Ivan Voras [EMAIL PROTECTED]:

 Steve Franks wrote:
  How come I never hear defrag come up as a topic, and can't find
  anything related to defrag in the ports tree?  Is it really not an
   issue on UFS?  Can someone point me to an explanation if so?
 
 fsck will tell you the level of fragmentation on the file system:
 
  fsck /usr
 ** /dev/ad0s2g (NO WRITE)
 ** Last Mounted on /usr
 ** Phase 1 - Check Blocks and Sizes
 ** Phase 2 - Check Pathnames
 ** Phase 3 - Check Connectivity
 ** Phase 4 - Check Reference Counts
 ** Phase 5 - Check Cyl groups
 352462 files, 2525857 used, 875044 free (115156 frags, 94986 blocks,
 3.4% fragmentation)
 
 This is from a /usr system that's been in use for years. (note that
 frags in the last line refer to file system fragments - subblocks,
 not fragmented files).

Just to reiterate:
Fragmentation on a Windows filesystem is _not_ the same as fragmentation
on a unix file system.  They are not comparable numbers, and do not mean
the same thing.  The only way to avoid fragmentation on a unix file system
is to make every file you create equal to a multiple of the block size.
And unix fragmentation does not degrade performance unless the file system
is close to full.
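
To make Bill's point concrete: with the common UFS defaults of 16 KB blocks and 2 KB fragments, only a file whose size is a multiple of the block size avoids a fragmented tail. A sketch with an arbitrary example size (the sizes are illustrative assumptions, not taken from any particular system):

```shell
bs=16384    # block size (16 KB)
fs=2048     # fragment size (2 KB)
size=41000  # example file size in bytes
full=$((size / bs))                 # whole blocks
rem=$((size % bs))                  # leftover bytes in the tail
frags=$(( (rem + fs - 1) / fs ))    # tail rounded up to fragments
echo "${full} full blocks + ${frags} fragments"   # prints: 2 full blocks + 5 fragments
```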

-- 
Bill Moran
Collaborative Fusion Inc.
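Bill's point about block-size multiples can be illustrated with a toy calculation. The sketch below (Python; the block size, fragment size, and file sizes are made-up assumptions, not taken from any real system) compares the tail waste of whole-block allocation against UFS-style sub-block fragments:

```python
BLOCK = 16 * 1024      # assumed UFS block size (16 KB)
FRAG = 2 * 1024        # assumed fragment size (block / 8)

def tail_waste(size, alloc_unit):
    """Bytes left unused in the last allocation unit of a file."""
    rem = size % alloc_unit
    return 0 if rem == 0 else alloc_unit - rem

# Hypothetical file sizes in bytes.
files = [100, 2048, 5000, 16384, 40000]

# Whole-block allocation can waste up to BLOCK-1 bytes per file;
# allocating the tail in FRAG-sized fragments wastes at most FRAG-1.
waste_blocks = sum(tail_waste(s, BLOCK) for s in files)
waste_frags = sum(tail_waste(s, FRAG) for s in files)
print(waste_blocks, waste_frags)
```

The point is only directional: allocating file tails in fragments bounds per-file internal waste by the fragment size rather than the block size, which is the "fragmentation" fsck counts.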


Re: defrag

2007-03-01 Thread Norberto Meijome
On Thu, 1 Mar 2007 19:22:32 +0100 (CET)
Wojciech Puchar [EMAIL PROTECTED] wrote:

 backup+restore will be defrag

you mean : backup, format, (reinstall if needed, depending on method of backup)
and restore - or simply restore on top (if you backed up everything and can
access your drive offline).

simply overwriting the existing files won't change anything, would it? You'd
need a clean slate to do a defrag this way...

_
{Beto|Norberto|Numard} Meijome

Software isn't released, it escapes.

I speak for myself, not my employer. Contents may be hot. Slippery when wet.
Reading disclaimers makes you go blind. Writing them is worse. You have been
Warned.


Re: defrag

2007-03-01 Thread Ivan Voras
Richard Lynch wrote:
 On Thu, March 1, 2007 3:35 pm, Ivan Voras wrote:
 Steve Franks wrote:
 How come I never hear defrag come up as a topic, and can't find
 anything related to defrag in the ports tree?  Is it really not an
 issue on UFS?  Can someone point me to an explanation if so?
 
 I've been told that most modern file systems have much better
 allocation routines and/or automated defragmentation as needed.
 
 So that the need to do defrag is essentially almost 0 for almost all
 users.

For what it's worth, this has been Microsoft's official position since
NTFS became mainstream.






Re: defrag

2007-03-01 Thread Robert Huff

Richard Lynch writes:

  So that the need to do defrag is essentially almost 0 for
  almost all users.

For one of my boxes, with three filesystems, the frag % has
been (0.8, 0.4, 1.1).
For >5 years.


Robert Huff





Re: defrag

2007-03-01 Thread Ivan Voras
Bill Moran wrote:
 In response to Ivan Voras [EMAIL PROTECTED]:

 352462 files, 2525857 used, 875044 free (115156 frags, 94986 blocks,
 3.4% fragmentation)

 
 Just to reiterate:
 Fragmentation on a Windows filesystem is _not_ the same as fragmentation
 on a unix file system.  They are not comparable numbers, and do not mean
 the same thing.  The only way to avoid fragmentation on a unix file system
 is to make every file you create equal to a multiple of the block size.

Ok, my point was that 3.4% is a low number for a long used system, but,
for education's sake, what is the difference between Windows'
fragmentation and Unix's fragmentation?

I believe that a fragmented file in common usage refers to a file
which is not stored continuously on the drive - i.e. it occupies more
than one continuous region. How is UFS fragmentation different than
fragmentation on other kinds of file systems?

UFS has cylinder groups, blocks and block fragments. Obviously, a file
larger than a cylinder group will get fragmented to spill over to
another cylinder group. Block fragments only occur at the end of files.







Re: defrag

2007-03-01 Thread Jerry McAllister
On Fri, Mar 02, 2007 at 08:51:00AM +1100, Norberto Meijome wrote:

 On Thu, 1 Mar 2007 19:22:32 +0100 (CET)
 Wojciech Puchar [EMAIL PROTECTED] wrote:
 
  backup+restore will be defrag
 
 you mean : backup, format, (reinstall if needed, depending on method of 
 backup)
 and restore - or simply restore on top (if you backed up everything and can
 access your drive offline).
 
 simply overwriting the existing files won't change anything, would it? You'd
 need a clean slate to do a defrag this way...

Well, it would do some, but for the greatest effect, you would need:
  dump + rm -rf * + restore
That would get it all.

You could be really extreme and do:
  dump + newfs + restore 
if you wanted to, to be sure of a clean file system

You would not need a reformat (though some people really mean newfs
when they say reformat) and you would not need any reinstall.  But,
if you did this on the root partition, you would have to do the
restore from some other boot such as on a different disk or from
the fixit utility on the installation CD.   That shouldn't be
necessary for any other file system unless you foolishly put /sbin
in a partition other than /  (root).

jerry



Re: defrag

2007-03-01 Thread Bill Moran
In response to Ivan Voras [EMAIL PROTECTED]:

 Bill Moran wrote:
  In response to Ivan Voras [EMAIL PROTECTED]:
 
  352462 files, 2525857 used, 875044 free (115156 frags, 94986 blocks,
  3.4% fragmentation)
 
  
  Just to reiterate:
  Fragmentation on a Windows filesystem is _not_ the same as fragmentation
  on a unix file system.  They are not comparable numbers, and do not mean
  the same thing.  The only way to avoid fragmentation on a unix file system
  is to make every file you create equal to a multiple of the block size.
 
 Ok, my point was that 3.4% is a low number for a long used system, but,
 for education sake, what is the difference between Windows'
 fragmentation and Unix's fragmentation?
 
 I believe that a fragmented file in common usage refers to a file
 which is not stored continuously on the drive - i.e. it occupies more
 than one continuous region. How is UFS fragmentation different than
 fragmentation on other kinds of file systems?

That common usage refers to Windows filesystems.

In unix filesystems, fragmentation refers to the number of blocks that have
been broken down in to fragments to either hold files smaller than a block,
or (as you mentioned) use the space at the end of a file that doesn't fit
exactly in a block.

 UFS has cylinder groups, blocks and block fragments. Obviously, a file
 larger than a cylinder group will get fragmented to spill over to
 another cylinder group. Block fragments only occur at the end of files.

Yes, and UFS _intentionally_ creates what Windows users would call
fragmentation.  There's no way I know of to measure this, however.

The key to understanding this is that not all fragmentation is bad.
Typically, files are accessed in chunks.  Your OS seldom grabs an
entire 50M file all at once -- it grabs (perhaps) 16 blocks worth, then
sends it to the requesting program, then grabs another 16 blocks worth,
etc, etc.  The time between grabbing a chunk is enough that allowing
the heads time to reposition to a different cylinder group doesn't cause
a significant performance problem.  As a result, the OS _intentionally_
switches to a different cylinder group after a certain number of blocks
have been written (this is tunable with tunefs).  The result is that a
large file will typically be strewn about the disk.

But this also makes it _easy_ for the filesystem to avoid causing the type
of fragmentation that _does_ degrade performance.  For example, when the
first block is on track 10, then the next block is on track 20, then we're
back to track 10 again, then over to track 35 ... etc, etc

Keep in mind, that in the previous 3 paragraphs, I was using the Windows
definition of fragmentation.

-- 
Bill Moran
Collaborative Fusion Inc.
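Bill's amortization argument can be sketched with a back-of-the-envelope model. Every number below is a hypothetical round figure chosen for illustration, not a measurement of any drive:

```python
# Back-of-the-envelope disk model; all constants are assumptions.
SEEK_MS = 8.0            # assumed seek + rotational delay per reposition
XFER_MS_PER_MB = 12.5    # assumed ~80 MB/s sequential transfer rate

def read_time_ms(file_mb, extent_kb):
    """Time to read a file laid out in contiguous extents of extent_kb,
    paying one seek at every extent boundary."""
    xfer = file_mb * XFER_MS_PER_MB
    seeks = (file_mb * 1024) // extent_kb
    return xfer + seeks * SEEK_MS

# A 50 MB file under three hypothetical layouts:
contiguous = read_time_ms(50, 50 * 1024)  # one extent: a single seek
spread = read_time_ms(50, 2048)           # new cylinder group every 2 MB
shattered = read_time_ms(50, 16)          # a seek every 16 KB block
```

Under these assumed numbers, the deliberate cylinder-group switch every couple of MB adds only a handful of seeks to the whole read, while block-level scattering adds thousands; that is the difference between the benign, intentional spreading and the harmful fragmentation described above.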


Re: defrag

2007-03-01 Thread Lowell Gilbert
Norberto Meijome [EMAIL PROTECTED] writes:

 On Thu, 1 Mar 2007 19:22:32 +0100 (CET)
 Wojciech Puchar [EMAIL PROTECTED] wrote:

 backup+restore will be defrag

 you mean : backup, format, (reinstall if needed, depending on method of 
 backup)
 and restore - or simply restore on top (if you backed up everything and can
 access your drive offline).

 simply overwriting the existing files won't change anything, would it? You'd
 need a clean slate to do a defrag this way...

Right.  That's what the -r flag to restore(8) is for.


Re: defrag

2007-03-01 Thread Lowell Gilbert
Ivan Voras [EMAIL PROTECTED] writes:

 Bill Moran wrote:
 In response to Ivan Voras [EMAIL PROTECTED]:

 352462 files, 2525857 used, 875044 free (115156 frags, 94986 blocks,
 3.4% fragmentation)

 
 Just to reiterate:
 Fragmentation on a Windows filesystem is _not_ the same as fragmentation
 on a unix file system.  They are not comparable numbers, and do not mean
 the same thing.  The only way to avoid fragmentation on a unix file system
 is to make every file you create equal to a multiple of the block size.

 Ok, my point was that 3.4% is a low number for a long used system, but,
 for education sake, what is the difference between Windows'
 fragmentation and Unix's fragmentation?

 I believe that a fragmented file in common usage refers to a file
 which is not stored continuously on the drive - i.e. it occupies more
 than one continuous region. How is UFS fragmentation different than
 fragmentation on other kinds of file systems?

 UFS has cylinder groups, blocks and block fragments. Obviously, a file
 larger than a cylinder group will get fragmented to spill over to
 another cylinder group. Block fragments only occur at the end of files.

If you know the standard computer science terminology, it can be
described quite tersely.  UFS fragmentation is a way of avoiding
internal fragmentation from wasting too much space.  MS-DOS-FS
fragmentation is an example of external fragmentation in the storage
space.  They don't really have anything to do with each other.

-- 
Lowell Gilbert, embedded/networking software engineer, Boston area
http://be-well.ilk.org/~lowell/
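Lowell's internal/external distinction can be shown on a toy allocation map. Everything below (the file names, extent lists, sizes, and the 512-byte allocation unit) is invented for illustration:

```python
# Toy allocator state: each file is a list of (start, length) extents,
# with lengths in 512-byte units. All values are hypothetical.
files = {
    "a": [(0, 4), (10, 4)],   # externally fragmented: two extents
    "b": [(4, 3)],            # contiguous: one extent
}
# Logical sizes in bytes, slightly smaller than the space allocated.
logical_sizes = {"a": 8 * 512 - 100, "b": 3 * 512 - 40}

def external_frags(extents):
    """Extra extents beyond the one a contiguous file would need."""
    return len(extents) - 1

def internal_waste(extents, logical, unit=512):
    """Allocated bytes never filled with file data (tail slack)."""
    allocated = sum(length for _, length in extents) * unit
    return allocated - logical

print({n: external_frags(e) for n, e in files.items()})
print({n: internal_waste(files[n], logical_sizes[n]) for n in files})
```

The two metrics are independent, which is the terminological point: file "a" is externally fragmented and also carries internal waste, while "b" is contiguous yet still wastes tail bytes inside its last unit.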


Re: defrag

2007-03-01 Thread Jerry McAllister
On Thu, Mar 01, 2007 at 05:17:38PM -0500, Jerry McAllister wrote:

 On Fri, Mar 02, 2007 at 08:51:00AM +1100, Norberto Meijome wrote:
 
  On Thu, 1 Mar 2007 19:22:32 +0100 (CET)
  Wojciech Puchar [EMAIL PROTECTED] wrote:
  
   backup+restore will be defrag
  
  you mean : backup, format, (reinstall if needed, depending on method of 
  backup)
  and restore - or simply restore on top (if you backed up everything and can
  access your drive offline).
  
  simply overwriting the existing files won't change anything, would it? You'd
  need a clean slate to do a defrag this way...
 
 Well, it would do some, but for the greatest effect, you would need:
   dump + rm -rf * + restore
 That would get it all.
 
 You could be really extreme and do:
   dump + newfs + restore 
 if you wanted to, to be sure of a clean file system
 
 You would not need a reformat (though some people really mean newfs
 when they say reformat) and you would not need any reinstall.  But,
 if you did this on the root partition, you would have to to the
 restore from some other boot such as on a different disk or from
 the fixit utility on the installation CD.   That shouldn't be
 necessary for any other file system unless you foolishly put /sbin
 in a partition other than /  (root).

Of course, I should have re-emphasized that this is not needed.
You will not improve performance.   Its only value might be to exercise
every used file block on the filesystem to make sure it is still
readable.  And for that you don't need to nuke and rewrite things.
Just doing the backup (which you should do anyway) will read up all
used file space (except what you might have marked as nodump).
 -- or the other possible value - to placate mal-informed management.

jerry

 
 jerry
 


Re: defrag

2007-03-01 Thread Ivan Voras
Bill Moran wrote:
 In response to Ivan Voras [EMAIL PROTECTED]:

 I believe that a fragmented file in common usage refers to a file
 which is not stored continuously on the drive - i.e. it occupies more
 than one continuous region. How is UFS fragmentation different than
 fragmentation on other kinds of file systems?
 
 That common usage refers to Windows filesystems.
 
 In unix filesystems, fragmentation refers to the number of blocks that have
 been broken down in to fragments to either hold files smaller than a block,
 or (as you mentioned) use the space at the end of a file that doesn't fit
 exactly in a block.

Ok, so the difference is in the name, not in the semantics :)
Unfortunately, all the world is Windows now and that's why I try to use
block fragments instead of just fragments to try to avoid confusion.

 But this also makes it _easy_ for the filesystem to avoid causing the type
 of fragmentation that _does_ degrade performance.  For example, when the
 first block is on track 10, then the next block is on track 20, then we're
 back to track 10 again, then over to track 35 ... etc, etc
 
 Keep in mind, that in the previous 3 paragraphs, I was using the Windows
 definition of fragmentation.

Agreed.





Re: defrag

2007-03-01 Thread Ivan Voras
Lowell Gilbert wrote:

 If you know the standard computer science terminology, it can be
 described quite tersely.  UFS fragmentation is a way of avoiding
 internal fragmentation from wasting too much space.  MS-DOS-FS
 fragmentation is an example of external fragmentation in the storage
 space.  They don't really have anything to do with each other.

It looks like I actually AM arguing about semantics here:

UFS fragmentation refers to dividing blocks (e.g. 16KB in size) into
block fragments (e.g. 2KB in size) that can be allocated separately in
special circumstances (which all boil down to: at the end of files).
This is done to lessen the effect of internal fragmentation.

Fragmentation without UFS prefix, as mostly used today (and which I
believe it's how the original poster understands it) refers to dividing
files into non-continuous regions, i.e. external fragmentation.

Correct so far?

% fragmentation message from fsck cannot refer to internal
fragmentation as the numbers don't add up, so it almost certainly refers
to external fragmentation.

As I understand it from technical documentation, it is correct that UFS
deliberately does external fragmentation of large files in order to make
file allocation faster and more manageable, except in optimized for
space mode, correct? (the default being optimized for time).







Re: defrag

2007-03-01 Thread jekillen


On Mar 1, 2007, at 2:56 PM, Ivan Voras wrote:


Lowell Gilbert wrote:


If you know the standard computer science terminology, it can be
described quite tersely.  UFS fragmentation is a way of avoiding
internal fragmentation from wasting too much space.  MS-DOS-FS
fragmentation is an example of external fragmentation in the storage
space.  They don't really have anything to do with each other.


It looks like I actually AM arguing about semantics here:

UFS fragmentation refers to dividing blocks (e.g. 16KB in size) into
block fragments (e.g. 2KB in size) that can be allocated separately in
special circumstances (which all boil down to: at the end of files).
This is done to lessen the effect of internal fragmentation.

Fragmentation without UFS prefix, as mostly used today (and which I
believe it's how the original poster understands it) refers to dividing
files into non-continuous regions, i.e. external fragmentation.

Correct so far?

% fragmentation message from fsck cannot refer to internal
fragmentation as the numbers don't add up, so it almost certainly refers
to external fragmentation.


This discussion has been about UFS vs MS file systems. But I have been
using Macs and have run file system utilities, Norton, and watched it
defrag a Mac disc. I am just curious as to how the HFS and HFS+ file
systems fit into this picture. Particularly since OS X is essentially a
Unix-like system but still uses HFS+.
Just for some perspective and idle curiosity.
Thanks
Jeff K



Re: defrag

2007-03-01 Thread Norberto Meijome
On Thu, 01 Mar 2007 22:56:02 +0100
Ivan Voras [EMAIL PROTECTED] wrote:

  So that the need to do defrag is essentially almost 0 for almost all
  users.  
 
 For what it's worth, this has been Microsoft's official position since
 NTFS became mainstream.

Meaning that NTFS is cured of this ?? 

I must be using the FAT-16 version of NTFS because I haven't seen one Win32 box
where fragmentation isn't an issue... It may have a smaller impact on
performance than in the old days (faster buses / disks / CPUs?), but it is
definitely still there, and it definitely affects performance.

_
{Beto|Norberto|Numard} Meijome

...using the internet as it was originally intended... for the further research
of pornography and pipebombs.

I speak for myself, not my employer. Contents may be hot. Slippery when wet.
Reading disclaimers makes you go blind. Writing them is worse. You have been
Warned.


Re: defrag

2007-03-01 Thread Ivan Voras
jekillen wrote:

 a Mac disc. I am just curious as to how the HFS and HFS+ file systems fit
 into this picture. Particularly since OSX is essentially a Unix 'like'
 system
 but still uses HFS+
 Just for some perspective and idle curiosity.

As you said, HFS(+) is not a native unix file system, but maybe someone
will know about it. All I know about is that HFS+ is a journaling file
system and that it defragments (in the Windows sense) files smaller than
certain size (20MB?) on the fly.





Re: defrag

2007-03-01 Thread Ivan Voras
Jerry McAllister wrote:

 Well, it would do some, but for the greatest effect, you would need:
   dump + rm -rf * + restore

This is nitpicking so ignore it: deleting all files on a UFS2 volume won't
restore it to its pristine state because inodes are lazily initialized.
It doesn't have anything to do with fragmentation, but will make fsck
run a little longer.


