Re: How does Sysinstall Mount File Systems?
Polytropon writes:
> If I interpret your question correctly, you are intending to
> ask how sysinstall can install on an already sliced, partitioned
> and formatted disk; is this correct?

Correct.

> You choose "Custom" for the installation. In the partition
> editor, you assign the located partitions to the functional
> subtrees (/, swap, /tmp, /var, /usr, /home - or any layout you
> want) and make sure that they are of the type "UFS+S" (except
> /, which is usually "UFS" without S), and that the format option
> is set to "N", which will cause sysinstall not to format the
> partitions.

I think I failed to do exactly that. I have been using FreeBSD and sysinstall for around 8 years but have never used sysinstall in this manner. When you run it from the CDROM, it quietly mounts everything after formatting the disk, so I thought I should stay away from the partition editor since the drive is already formatted with FreeBSD and swap partitions.

> In other words: It's obvious - you just use the disk. :-)

Thanks for clearing that up. I am discovering a load of details about sysinstall that I didn't know as well as I thought I did. I think I have asked my quota of really dumb questions for the day. Thanks to all for your patience.

Martin McCormick WB5AGZ  Stillwater, OK
Systems Engineer
OSU Information Technology Department Telecommunications Services Group
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"
Re: How does Sysinstall Mount File Systems?
On Tue, Feb 02, 2010 at 03:56:17PM -0600, Martin McCormick wrote:
> How does one tell sysinstall to use an existing disk that is
> already formatted?

It should come up in the list of available drives. Just select it and proceed. It will overwrite the part that you tell it to. The most likely thing is you want to use it all for FreeBSD. If so, select that option. If you want more than one slice, you have to tell it that, either by telling it how much to use for the slice or by indicating which existing slice to use. If it doesn't show up, then the system is having trouble talking to it for some reason.

Note: in all of this, where I use the FreeBSD term "slice", MS uses the term "primary partition". There can be from 1 to 4 primary partitions - slices - on a disk. Slices/primary partitions are essentially identical and are compatible with each other, although MS utilities ignore non-MS slices as if they are not there.

These slices/primary partitions can be further divided. FreeBSD calls those subdivisions "partitions" and MS tends to call them something like "logical partitions". The subdivisions are not compatible between the systems, but generally FreeBSD can read and, except for NTFS, write the MS versions. Because of the weight of MS in the marketplace, most non-FreeBSD utilities use the MS terminology, and it even still shows up in some FreeBSD documentation, which causes newbies all kinds of confusion. That seems to be gradually getting cleaned up, though.

Now, if you mean you want to _share_ an existing disk that is already being used, then you will have to get a utility to shrink the slice that is already in use, create a new slice, and then tell sysinstall to install into it. This is most often done to create a "dual boot" machine. The main utilities for this are GParted and Partition Magic. Partition Magic is commercial - around $70 - and GParted is freely downloadable (last I tried).
There are also several other free ones available. In both cases - GParted or PM - make the bootable media and do not try to run from a copy installed on your hard disk. I found that PM 7 is better quality than PM 8. In fact, I sent my PM 8 back for a refund. But Partition Magic seems to handle MS NTFS-type disks and other oddities better than some of the free ones. It did NOT handle a USB-connected disk - neither PM 7 nor PM 8 did that, even though PM 8 promoted working with USB as one of its features.

Probably you weren't talking about creating a dual-boot disk, so you can probably just ignore that last long paragraph. But just in case that is what you meant, I threw it in.

jerry

> Thank you.
>
> Martin McCormick
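The 1-to-4 limit on slices/primary partitions mentioned above comes from the MBR's fixed four-entry partition table. A small sketch parsing it, under the classic MBR layout (4 entries of 16 bytes at offset 446, 0x55AA signature at offset 510; the 0xA5 type byte is FreeBSD's):

```python
import struct

def parse_mbr(mbr: bytes):
    """Parse the four primary-partition (slice) entries from a 512-byte MBR."""
    assert len(mbr) == 512 and mbr[510:512] == b"\x55\xaa", "missing MBR signature"
    entries = []
    for i in range(4):
        off = 446 + 16 * i
        status, ptype = mbr[off], mbr[off + 4]
        lba_start, num_sectors = struct.unpack_from("<II", mbr, off + 8)
        if ptype != 0:  # type 0 marks an unused table entry
            entries.append({"index": i + 1, "bootable": status == 0x80,
                            "type": ptype, "start": lba_start,
                            "sectors": num_sectors})
    return entries

# Build a fake MBR with one bootable FreeBSD slice (type 0xA5) for demonstration.
mbr = bytearray(512)
struct.pack_into("<B3xB3xII", mbr, 446, 0x80, 0xA5, 63, 1000000)
mbr[510:512] = b"\x55\xaa"
print(parse_mbr(bytes(mbr)))
```

This is why a fifth "primary partition" is impossible without extended/logical partitions: the table simply has no fifth slot.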
Re: How does Sysinstall Mount File Systems?
On Tue, 02 Feb 2010 15:56:17 -0600, Martin McCormick wrote:
> How does one tell sysinstall to use an existing disk that is
> already formatted?

If I interpret your question correctly, you are intending to ask how sysinstall can install on an already sliced, partitioned and formatted disk; is this correct?

You choose "Custom" for the installation. In the partition editor, you assign the located partitions to the functional subtrees (/, swap, /tmp, /var, /usr, /home - or any layout you want) and make sure that they are of the type "UFS+S" (except /, which is usually "UFS" without S), and that the format option is set to "N", which will cause sysinstall not to format the partitions.

In other words: It's obvious - you just use the disk. :-)

-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...
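For readers unfamiliar with the partition editor, the assignments described above end up in /etc/fstab. A sketch of what the resulting file might look like for the layout mentioned (device names such as ad0s1 are assumptions; yours will differ):

```
# Device        Mountpoint  FStype  Options  Dump  Pass#
/dev/ad0s1b     none        swap    sw       0     0
/dev/ad0s1a     /           ufs     rw       1     1
/dev/ad0s1e     /tmp        ufs     rw       2     2
/dev/ad0s1f     /usr        ufs     rw       2     2
/dev/ad0s1d     /var        ufs     rw       2     2
```

With the format flag set to "N", sysinstall writes these entries and mounts the existing filesystems without newfs'ing them.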
How does Sysinstall Mount File Systems?
How does one tell sysinstall to use an existing disk that is already formatted?

Thank you.

Martin McCormick
Re: GELI file systems unusable after "glabel label" operations
On Sat, 23 Jan 2010 02:34:31 +0100 Roland Smith wrote:
> On Fri, Jan 22, 2010 at 03:08:00AM -0600, Scott Bennett wrote:
> >
> > Why is that stored in the last sector of the device, rather
> > than in the key file? What is the purpose of the key file if not
> > to hold that type of information?

The keyfile is user-generated, usually just some bytes from /dev/random.

> All geom(4) providers use their last sector to store metadata; it's a
> design decision. Probably because the first sector(s) are used for
> boot blocks or filesystem metadata etc.
>
> It would have been possible to store the generated key in the
> user-provided keyfile. But since it is not mandatory to have a
> keyfile (you can also use just a passphrase), it makes more sense to
> use the already provided metadata space in the last sector.

Having it in the last sector allows the auto-detection of geli partitions. It would be nice to have the option of keeping the metadata in a separate metadata file instead of the last sector, to allow geli partitions to be indistinguishable from securely erased partitions.
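The auto-detection mentioned above amounts to peeking at a provider's last sector for a recognizable magic string. A toy sketch of the idea against a file-backed fake "device" (the `GEOM::ELI` magic value is an assumption based on the geli sources; the real taste logic lives in the kernel):

```python
import os, tempfile

SECTOR = 512
MAGIC = b"GEOM::ELI"  # assumed on-disk magic, per sys/geom/eli/g_eli.h

def looks_like_geli(path: str) -> bool:
    """Peek at a provider's last sector and check for the geli magic string."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        f.seek(size - SECTOR)
        return f.read(len(MAGIC)) == MAGIC

# Demonstrate on a fake 1 MiB "device" backed by a temporary file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * (1024 * 1024 - SECTOR))
    f.write(MAGIC.ljust(SECTOR, b"\0"))  # metadata lives in the last sector
    fake_dev = f.name

detected = looks_like_geli(fake_dev)
os.unlink(fake_dev)
print(detected)  # True
```

This also illustrates the indistinguishability point: a securely erased partition is all random bytes, so the magic check fails, while a geli partition carries this telltale in its last sector.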
Re: GELI file systems unusable after "glabel label" operations
On 1/14/10, Scott Bennett wrote:
> I used "glabel label" to label each of the file systems I have on external
> disk drives. Unfortunately, afterward I am now unable to "geli attach" any of
> the GELI-encrypted file systems. The system is FreeBSD 7.2-STABLE. Is there
> a way to get this to work? Or have I just lost everything in the encrypted
> file systems?
>
> hellas# geli attach -k work.key /dev/label/work
> geli: Cannot read metadata from /dev/label/work: Invalid argument.
> hellas# ls -lgF /dev/label/
> total 0
> crw-r-----  1 root  operator    0, 192 Jan 14 00:47 archives
> crw-r-----  1 root  operator    0, 191 Jan 14 00:47 backupsi
> crw-r-----  1 root  operator    0, 182 Jan 14 00:47 backupsl
> crw-r-----  1 root  operator    0, 166 Jan 14 00:47 backupss
> crw-r-----  1 root  operator    0, 179 Jan 14 00:47 sec
> crw-r-----  1 root  operator    0, 161 Jan 14 00:47 usrobj
> crw-r-----  1 root  operator    0, 184 Jan 14 00:47 usrports
> crw-r-----  1 root  operator    0, 186 Jan 14 00:47 vboxdisk
> crw-r-----  1 root  operator    0, 181 Jan 14 00:47 work
> hellas#
>
> Any help in recovering the lost data would be deeply appreciated. If
> that cannot be done, then at least knowing that would keep me from wasting
> further time on it. Thanks much.

Are you aware that tunefs -L will label a device? The label is stored as part of the filesystem instead of as GEOM metadata, so you should be able to get both labeling (/dev/ufs/labelname) and GELI, as you are asking for.

As for recovering your data, I defer to the other helpful posts in this thread, as I have no additional information to recommend.
Re: GELI file systems unusable after "glabel label" operations
On Fri, Jan 22, 2010 at 03:08:00AM -0600, Scott Bennett wrote:
> >
> > Why is that stored in the last sector of the device, rather than in the
> > key file? What is the purpose of the key file if not to hold that type of
> > information?

All geom(4) providers use their last sector to store metadata; it's a design decision. Probably because the first sector(s) are used for boot blocks or filesystem metadata etc.

It would have been possible to store the generated key in the user-provided keyfile. But since it is not mandatory to have a keyfile (you can also use just a passphrase), it makes more sense to use the already provided metadata space in the last sector.

> >Well, it should be different, otherwise they overwrite the same sector. Ipso
> >facto you should nest providers...
>
> ...unless, of course, the two had been designed to use different parts
> of the "last sector" for their own purposes, but also to avoid damaging the
> other's data when altering their own.

The geom framework was designed to be _extensible_. It was designed so that it would be possible to combine (nest) different types of geom providers, even if those classes (types of providers) didn't even exist when the framework was designed. Trying to shoehorn all metadata for any combination of geom providers into one 512-byte sector would have severely limited the usability of the geom system. In my opinion the solution of using nested providers, each using their own last sector for metadata, is simple and elegant and avoids that problem rather nicely.

As I've been trying to explain, the 'nesting' of geoms is _precisely_ what avoids the whole issue of damaging each other's data. I've got the feeling that you do not 'get' that concept, which led to your problem. Unfortunately, I don't know how to explain it more clearly.

> Thanks for the explanation. However, if the key information is stored
> in the "last sector" rather than in the key file, then I guess I'm totally
> confused about how GELI works.

The encryption key is _not_ stored in the last sector. That would be unsafe, like locking your front door and leaving the key in the lock. But a part of the information necessary to create the encryption key is.

Your keyfile is just one component of the en-/decryption key used to unlock the data. They are not the same. You can use one or more keyfile(s), a passphrase, or both. You can also have more than one key: a user key and a 'company' or system key. And geli uses a random component when the encryption key is initially created. The metadata sector is the natural place to store some of that info. This is safe because it is in itself not sufficient to create the en-/decryption key. One also needs the keyfile and/or passphrase.

Personally, I would never use only a keyfile; it is not really secure, especially if you leave that key on another unencrypted partition of the same drive! So-called two-factor authentication (something you have [keyfile] and something you know [passphrase]) is much safer.

If you really want to know how geli works, as always with free software, the source code is the ultimate reference. :-)

Roland
-- 
R.F.Smith  http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914 B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)
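Roland's point above - that the stored salt is useless without the user-held components - can be sketched with a toy key derivation. This is illustrative only and is NOT geli's actual scheme (geli uses HMAC/PKCS#5 internally; the function name and iteration count here are assumptions for demonstration):

```python
import hashlib, os

def derive_key(salt: bytes, passphrase: bytes = b"", keyfile: bytes = b"") -> bytes:
    """Toy two-factor derivation: the user components plus the on-disk salt
    feed a KDF. Illustrative only; see the geli(8) sources for the real scheme."""
    material = hashlib.sha512(keyfile + passphrase).digest()
    return hashlib.pbkdf2_hmac("sha512", material, salt, 100_000, dklen=64)

salt = os.urandom(64)          # random component stored in the metadata sector
keyfile = os.urandom(64)       # "something you have"
passphrase = b"correct horse"  # "something you know"

key = derive_key(salt, passphrase, keyfile)
# The salt alone is not sufficient: without keyfile and passphrase
# the derivation produces a completely different key.
print(derive_key(salt) == key)  # False
```

The stored salt ties the key to this particular provider, while the keyfile and passphrase stay with the user - which is exactly why keeping part of the material in the last sector is safe.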
Re: GELI file systems unusable after "glabel label" operations
On Sat, 16 Jan 2010 10:31:22 +0100 Roland Smith wrote:
>On Sat, Jan 16, 2010 at 12:38:14AM -0600, Scott Bennett wrote:
>> >2) Create the geli device /dev/daXsYP.eli, and then create a label on that,
>> > yielding /dev/label/bar. [not sure what the utility of this is, since the
>> > label will only appear after the geli provider has been attached]
>> >
>> The important point here is that one of the above methods must be used
>> *before* the file system is created and the data loaded into it. Attempting
>> either method *after* data are loaded will result in loss of the data.
>
>Maybe not immediately, but since both the filesystem and geom can use the last
>sector, there will be trouble. :-) The examples in the glabel manpage show
>how to set up a label correctly.
>
>> Perhaps this provides a possible recovery method. As you read it,
>> would it be possible to build an altered version of geli(8) that would simply
>> use the existing key file without generating a new one to do a "geli init"
>> operation? If so, it would certainly be worth my trouble to do that.
>
>In theory it is possible, I guess. But the salt is 512 bytes long. So it can
>have 2^512 different values. That is 1.340×10^154 different values, and you'd
>have to test them all. And by testing I mean use the modified 'geli init' to

Why is that stored in the last sector of the device, rather than in the key file? What is the purpose of the key file if not to hold that type of information?

>generate a key, and then try if the key works, i.e. check if the relevant
>sector decrypted with that key yields a valid UFS2 superblock. Suppose you
>wrote a program capable of testing 10^9 keys every second, which sounds like
>quite a lot to me. It would still be running for 2^512/1e9/(3600*24*365) =
>4.25×10^137 years! So in practice, this is a hopeless task.
>
>> >And I think that the proper way to nest geoms is too obvious (at least for
>> >the developers/maintainers) to explicitly list in the handbook. If you know that
>> >geoms store metadata in their last sector, the proper way to nest them is to
>> >use different devices for each geom "stage", so that each has their own
>> >metadata sector.
>>
>> Well, it wasn't at all obvious to me, and reading the parts that mention
>> metadata being written to the last sector suggests, if anything, that labeling
>> and encryption are incompatible because both write to the "last sector", i.e.,
>> to the *same* sector. The idea of the "last sector" being different for the
>> two operations is not at all apparent.
>
>Well, it should be different, otherwise they overwrite the same sector. Ipso
>facto you should nest providers...

...unless, of course, the two had been designed to use different parts of the "last sector" for their own purposes, but also to avoid damaging the other's data when altering their own.

>Say you want to have a labeled, encrypted device on /dev/da0s1d. First, you
>create the label:
>
>  glabel label -v foo /dev/da0s1d
>
>A device /dev/label/foo now appears. This device is one sector smaller than
>/dev/da0s1d, because the last sector of /dev/da0s1d is used for the glabel
>metadata. Now we want to create an encrypted device, so we do:
>
>  geli init -l 256 /dev/label/foo
>  geli attach /dev/label/foo
>
>This will create /dev/label/foo.eli. Again, /dev/label/foo.eli is one sector
>smaller than /dev/label/foo, because the last sector of /dev/label/foo
>contains the geli metadata.
>
>If one uses
>
>  geli init -l 256 /dev/da0s1d
>  geli attach /dev/da0s1d
>
>this will create and attach /dev/da0s1d.eli, but /dev/label/foo will be destroyed,
>because 'geli init' overwrites glabel's metadata!
>
>Below I've tried to sketch the last sectors of the device, with the extents
>of the geom-ed devices and the location of the metadata:
>
>  sectors:             ... | N-2  | N-1  |  N   |
>  /dev/da0s1d          ----------------------->|
>  /dev/label/foo       --------------->|
>  /dev/label/foo.eli   -------->|
>  metadata:                     | geli |glabel |
>
>Nested geom devices are the only way to keep the metadata safe.

Thanks for the explanation. However, if the key information is stored in the "last sector" rather than in the key file, then I guess I'm totally confused about how GELI works.

Scott Bennett, Comm. ASMELG, CFIAG
* Internet: bennett at cs.niu.edu *
* "A well regulated and disciplined militia, is at all times a good *
* objection to the introduction of that bane of all free governments *
* -- a standing army." *
* -- Gov. John Hancock, New York Journal, 28 January 1790 *
Re: GELI file systems unusable after "glabel label" operations
On Sat, Jan 16, 2010 at 12:38:14AM -0600, Scott Bennett wrote:
> >2) Create the geli device /dev/daXsYP.eli, and then create a label on that,
> > yielding /dev/label/bar. [not sure what the utility of this is, since the
> > label will only appear after the geli provider has been attached]
> >
> The important point here is that one of the above methods must be used
> *before* the file system is created and the data loaded into it. Attempting
> either method *after* data are loaded will result in loss of the data.

Maybe not immediately, but since both the filesystem and geom can use the last sector, there will be trouble. :-) The examples in the glabel manpage show how to set up a label correctly.

> Perhaps this provides a possible recovery method. As you read it,
> would it be possible to build an altered version of geli(8) that would simply
> use the existing key file without generating a new one to do a "geli init"
> operation? If so, it would certainly be worth my trouble to do that.

In theory it is possible, I guess. But the salt is 512 bytes long, so it can have 2^512 different values. That is 1.340×10^154 different values, and you'd have to test them all. And by testing I mean: use the modified 'geli init' to generate a key, and then try if the key works, i.e. check if the relevant sector decrypted with that key yields a valid UFS2 superblock. Suppose you wrote a program capable of testing 10^9 keys every second, which sounds like quite a lot to me. It would still be running for 2^512/1e9/(3600*24*365) = 4.25×10^137 years! So in practice, this is a hopeless task.

> >And I think that the proper way to nest geoms is too obvious (at least for
> >the developers/maintainers) to explicitly list in the handbook. If you know that
> >geoms store metadata in their last sector, the proper way to nest them is to
> >use different devices for each geom "stage", so that each has their own
> >metadata sector.
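Roland's back-of-the-envelope estimate above is easy to reproduce:

```python
# A 512-byte salt gives 2**512 possible values; at 10**9 candidate
# keys tested per second, exhausting the search space takes:
candidates = 2 ** 512
rate = 1e9                          # keys tested per second (generous)
years = candidates / rate / (3600 * 24 * 365)
print(f"{years:.3g} years")         # ~4.25e+137 -- hopeless in practice
```

For comparison, the age of the universe is on the order of 1.4e10 years, which is why brute-forcing the salt is not a viable recovery path.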
> Well, it wasn't at all obvious to me, and reading the parts that mention
> metadata being written to the last sector suggests, if anything, that labeling
> and encryption are incompatible because both write to the "last sector", i.e.,
> to the *same* sector. The idea of the "last sector" being different for the
> two operations is not at all apparent.

Well, it should be different, otherwise they would overwrite the same sector. Ipso facto you should nest providers...

Say you want to have a labeled, encrypted device on /dev/da0s1d. First, you create the label:

  glabel label -v foo /dev/da0s1d

A device /dev/label/foo now appears. This device is one sector smaller than /dev/da0s1d, because the last sector of /dev/da0s1d is used for the glabel metadata. Now we want to create an encrypted device, so we do:

  geli init -l 256 /dev/label/foo
  geli attach /dev/label/foo

This will create /dev/label/foo.eli. Again, /dev/label/foo.eli is one sector smaller than /dev/label/foo, because the last sector of /dev/label/foo contains the geli metadata.

If one uses

  geli init -l 256 /dev/da0s1d
  geli attach /dev/da0s1d

this will create and attach /dev/da0s1d.eli, but /dev/label/foo will be destroyed, because 'geli init' overwrites glabel's metadata!

Below I've tried to sketch the last sectors of the device, with the extents of the geom-ed devices and the location of the metadata:

  sectors:             ... | N-2  | N-1  |  N   |
  /dev/da0s1d          ----------------------->|
  /dev/label/foo       --------------->|
  /dev/label/foo.eli   -------->|
  metadata:                     | geli |glabel |

Nested geom devices are the only way to keep the metadata safe.

Hope this helps,

Roland
-- 
R.F.Smith  http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914 B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)
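The "each layer loses one sector" behavior described above can be sketched numerically (a toy model, not the geom implementation):

```python
SECTOR_SIZE = 512

def nest(total_sectors: int, *layers: str) -> dict:
    """Each geom layer hides one trailing metadata sector from the layer above."""
    sizes, remaining = {}, total_sectors
    for layer in layers:
        sizes[layer] = remaining
        remaining -= 1  # this layer's metadata sector is invisible above it
    return sizes

# A 1 GiB slice (2 * 1024 * 1024 sectors of 512 bytes), labeled then encrypted:
sizes = nest(2 * 1024 * 1024, "/dev/da0s1d", "/dev/label/foo", "/dev/label/foo.eli")
for dev, n in sizes.items():
    print(f"{dev}: {n} sectors")
```

Each provider sees a device one sector smaller than the one below it, so the glabel and geli metadata sectors never collide - which is the whole point of nesting.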
Re: GELI file systems unusable after "glabel label" operations
>And I think that the proper way to nest geoms is too obvious (at least for
>the developers/maintainers) to explicitly list in the handbook. If you know that
>geoms store metadata in their last sector, the proper way to nest them is to
>use different devices for each geom "stage", so that each has their own
>metadata sector.

Well, it wasn't at all obvious to me, and reading the parts that mention metadata being written to the last sector suggests, if anything, that labeling and encryption are incompatible because both write to the "last sector", i.e., to the *same* sector. The idea of the "last sector" being different for the two operations is not at all apparent.

>Procedure #1 that I outlined above should be easier to automate, should you
>want to, because you can then just use 'geli init /dev/label/foo'.
>
>As of 7.2, each UFS filesystem automatically has a unique file system id that
>is automatically created during boot in /dev/ufsid. These labels are unique
>and do not change. You can use those to mount these filesystems. See §19.6 in
>the latest version of the Handbook.
>
>> I have a new 1 TB drive that I will soon connect to the system and begin
>> creating file systems. I will make gzipped image files with dd(1) of the
>> damaged partitions and store them on the new drive for a while in case a
>> workable idea turns up.
>
>Since the partitions are encrypted, don't bother with gzip. Encrypted data is
>pretty close to random noise. No compression program can compress that very
>much. With the gzip header, it might even become bigger.

True enough, but sometimes one gets lucky. Given that the storage time might be rather long in hopes of eventually turning up a solution, I was thinking that I would try it both ways. If there were little difference in size between the raw and the compressed versions, then I would probably just keep the raw version. OTOH, if the compressed version were a few percent smaller, then I would keep that one instead.

Scott Bennett, Comm. ASMELG, CFIAG
* Internet: bennett at cs.niu.edu *
* "A well regulated and disciplined militia, is at all times a good *
* objection to the introduction of that bane of all free governments *
* -- a standing army." *
* -- Gov. John Hancock, New York Journal, 28 January 1790 *
Re: GELI file systems unusable after "glabel label" operations
On Fri, Jan 15, 2010 at 01:25:50AM -0600, Scott Bennett wrote:
> >Check /var/backups. There should be *.eli files there. Those are the automatic
>
> No joy. :-(
>
> >metadata backups that 'geli init' makes (at least in 8.0). You can restore
> >those backups with 'geli restore'.
>
> Those must be new in 8.0. I don't see any in 7.2, just {aliases,group,
> master.passwd}.bak{,2} in /var/backups.

[No help here, just a me-too...]

I can confirm this: no metadata of GELI partitions generated on RELENG_7 was saved in /var/backups, but metadata of GELI partitions created since RELENG_8 was! I noticed this by chance with "geli init" on an external disk, and thought that geli init would only create metadata backups automatically for disks other than the one hosting /var/backups (for obvious reasons, i.e. when you want to quickly destroy a key, forgetting to wipe out the metadata backup). Apparently, it was the version bump, and not the different disks. Good to know indeed.

Would a "geli backup" on those old RELENG_7 GELI partitions (or rather provider partitions) have the same effect as a RELENG_8-style "geli init" in producing those metadata files? Maybe /usr/src/UPDATING should contain a little hint for those of us with old GELI partitions without auto-backups of metadata?

> I have a new 1 TB drive that I will soon connect to the system and begin
> creating file systems. I will make gzipped image files with dd(1) of the
> damaged partitions and store them on the new drive for a while in case a
> workable idea turns up.

I feel your pain (having lost some data in a similar scenario while experimenting with glabel on geli partitions, though not as much as you). There should really be a big, obvious warning in the glabel(8) and geli(8) man pages, because this is a big trap waiting to spring on unsuspecting users (POLA violation). :-(

-cpghost.

-- 
Cordula's Web. http://www.cordula.ws/
Re: GELI file systems unusable after "glabel label" operations
On Fri, Jan 15, 2010 at 01:25:50AM -0600, Scott Bennett wrote:
>
> It has been a long time since I created those GELI partitions, but I
> think I used the "geli init -K keyfilename /dev/daXsYP" form, where P is the
> partition identifier in slice Y of drive X. What I did when I screwed the
> pooch on this was of the form "glabel label fsname /dev/daXsYP", which I had
> thought would produce a /dev/label/fsname device, and that doing a "geli attach"
> afterward would produce a /dev/label/fsname.eli device.

You could have done two things to create a nested label/geli configuration:

1) Create a labeled device from /dev/daXsYP, which would yield /dev/label/foo,
   then create a geli device _on the labeled device_, creating
   /dev/label/foo.eli. This works because the labeled device will be one
   sector shorter than the raw da device. The .eli device will be yet another
   sector shorter, leaving two adjacent metadata sectors for the nested
   providers. This is the key point.

2) Create the geli device /dev/daXsYP.eli, and then create a label on that,
   yielding /dev/label/bar. [not sure what the utility of this is, since the
   label will only appear after the geli provider has been attached]

The first one seems most useful for things like automatic mounting.

> >Check /var/backups. There should be *.eli files there. Those are the automatic
>
> No joy. :-(
>
> >metadata backups that 'geli init' makes (at least in 8.0). You can restore
> >those backups with 'geli restore'.
>
> Those must be new in 8.0. I don't see any in 7.2, just {aliases,group,
> master.passwd}.bak{,2} in /var/backups.

Probably. I didn't see them when I was running 7.2, and I only noticed it in the 8.0 manpage.

> >Running 'geli init' again with the same parameters will not work, because
> >'geli init' uses a random component in the key generation. In other words, two
> >inits with the same password will not generate the same key!
>
> Is there some way to recover using the existing key files, which I do
> still have? And of course, I do know the passphrases.

Not as I read the geli source. It _always_ uses arc4rand to generate a random salt for the key during 'init'. Read the function 'static void eli_init(struct gctl_req *req)' in /usr/src/sbin/geom/class/eli/geom_eli.c. This means that subsequent 'geli init' calls with the same password or keyfile will still yield a different key. I'm afraid your data is lost. You should always make a backup before playing with filesystems. Most people learn this the hard way, although I realize that this is small consolation.

> >What you should have done (for future reference) is use geli(8) to create the
> >encrypted device, then create a filesystem on that encrypted device with
> >newfs(8) using the '-L' flag to set the volume name. Or use tunefs(8) to set
> >the volume name later. These names will be automatically recognized next time
> >you attach it and listed in /dev/ufs/.
>
> Thank you for that information. If only it had been laid out that way
> in the man page or the handbook when I read it before starting on the labeling
> procedure...sigh.

It _is_ listed in the glabel manpage, at least in 8.0. And I think that the proper way to nest geoms is too obvious (at least for the developers/maintainers) to explicitly list in the handbook. If you know that geoms store metadata in their last sector, the proper way to nest them is to use different devices for each geom "stage", so that each has its own metadata sector.

Procedure #1 that I outlined above should be easier to automate, should you want to, because you can then just use 'geli init /dev/label/foo'.

As of 7.2, each UFS filesystem automatically has a unique file system id that is automatically created during boot in /dev/ufsid. These labels are unique and do not change. You can use those to mount these filesystems. See §19.6 in the latest version of the Handbook.

> I have a new 1 TB drive that I will soon connect to the system and begin
> creating file systems. I will make gzipped image files with dd(1) of the
> damaged partitions and store them on the new drive for a while in case a
> workable idea turns up.

Since the partitions are encrypted, don't bother with gzip. Encrypted data is pretty close to random noise. No compression program can compress that very much. With the gzip header, it might even become bigger.

Roland
-- 
R.F.Smith  http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914 B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)
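The remark above that compressing encrypted data can make it *bigger* is easy to check with zlib (the same DEFLATE algorithm gzip uses), using random bytes as a stand-in for ciphertext:

```python
import os, zlib

# 1 MiB of random bytes stands in for geli ciphertext.
data = os.urandom(1 << 20)
packed = zlib.compress(data, 9)  # maximum compression effort
print(len(packed) - len(data))   # positive: the "compressed" copy grew
```

Incompressible input forces DEFLATE to fall back to stored blocks, which add a few bytes of framing per 64 KiB block plus the container header, so the output ends up slightly larger than the input.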
Re: GELI file systems unusable after "glabel label" operations
On Thu, 14 Jan 2010 18:42:32 +0100 Roland Smith >On Thu, Jan 14, 2010 at 01:31:55AM -0600, Scott Bennett wrote: >> I used "glabel label" to label each of the file systems I have on ex= >ternal >> disk drives. Unfortunately, afterward I am now unable to "geli attach" a= >ny of >> the GELI-encrypted file systems. The system is FreeBSD 7.2-STABLE. Is t= >here >> a way to get this to work? Or have I just lost everything in the encrypt= >ed >> file systems? > >Did you use 'geli init /dev/daXsY' and 'glabel label /dev/daXsY'? That will >overwrite the geli metadata with the glabel metadata!=20 It has been a long time since I created those GELI partitions, but I think I used the "geli init -K keyfilename /dev/daXsYP", where P is the partition identifier in slice Y of drive X. What I did when I screwed the pooch on this was of the form "glabel label fsname /dev/daXsYP", which I had thought would produce a /dev/label/fsname device and that doing a "geli attach" afterward would produce a /dev/label/fsname.eli device. > >Check /var/backups. There should be *.eli files there. Those are the automa= >tic No joy. :-( >metadata backups that 'geli init' makes (at least in 8.0). You can restore >those backups with 'geli restore'. Those must be new in 8.0. I don't see any in 7.2, just {aliases,group, master.passwd}.bak{,2} in /var/backups. > >Running 'geli init' again with the same parameters will not work, because >'geli init' uses a random component in the key generation. In other words, = >two >inits with the same password will not generate the same key! Is there some way to recover using the existing key files, which I do still have? And of course, I do know the passphrases. > >What you should have done (for future refrence) is use geli(8) to create the >encrypted device, then create a filesystem on that encrypted device with >newfs(8) using the '-L' flag to set the volume name. Or use tunefs(8) to set >the volume name later. 
>These names will be automatically recognized next time
>you attach it and listed in /dev/ufs/.

Thank you for that information. If only it had been laid out that way in the man page or the handbook when I read it before starting on the labeling procedure...sigh. I have a new 1 TB drive that I will soon connect to the system and begin creating file systems. I will make gzipped image files with dd(1) of the damaged partitions and store them on the new drive for a while in case a workable idea turns up.

Scott Bennett, Comm. ASMELG, CFIAG
Internet: bennett at cs.niu.edu
"A well regulated and disciplined militia, is at all times a good
objection to the introduction of that bane of all free governments
-- a standing army."
-- Gov. John Hancock, New York Journal, 28 January 1790

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"
Re: GELI file systems unusable after "glabel label" operations
On Thu, Jan 14, 2010 at 01:31:55AM -0600, Scott Bennett wrote:
> I used "glabel label" to label each of the file systems I have on external
> disk drives. Unfortunately, afterward I am now unable to "geli attach" any of
> the GELI-encrypted file systems. The system is FreeBSD 7.2-STABLE. Is there
> a way to get this to work? Or have I just lost everything in the encrypted
> file systems?

Did you use 'geli init /dev/daXsY' and 'glabel label /dev/daXsY'? That will overwrite the geli metadata with the glabel metadata!

Check /var/backups. There should be *.eli files there. Those are the automatic metadata backups that 'geli init' makes (at least in 8.0). You can restore those backups with 'geli restore'.

Running 'geli init' again with the same parameters will not work, because 'geli init' uses a random component in the key generation. In other words, two inits with the same password will not generate the same key!

What you should have done (for future reference) is use geli(8) to create the encrypted device, then create a filesystem on that encrypted device with newfs(8) using the '-L' flag to set the volume name. Or use tunefs(8) to set the volume name later. These names will be automatically recognized next time you attach it and listed in /dev/ufs/.

Roland
--
R.F.Smith http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated]
pgp: 1A2B 477F 9970 BA3C 2914 B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)
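[Editor's note] The collision Roland describes can be made concrete with a toy demonstration: both geli(8) and glabel(8) store their metadata in the provider's *last* sector, so whichever tool writes second clobbers the first. The sketch below uses an ordinary temp file as a stand-in "provider" (no real devices or GEOM classes are touched; the payload strings are made up):

```sh
# Toy demo: two tools that both claim the provider's LAST sector.
# A plain file stands in for the disk; payloads are illustrative only.
SECTOR=512
disk=$(mktemp)
dd if=/dev/zero of="$disk" bs=$SECTOR count=64 2>/dev/null   # 32 KiB "provider"
last=$((64 - 1))                                             # index of last sector

# "geli init" writes its keys into the last sector...
printf 'GEOM::ELI keys'   | dd of="$disk" bs=$SECTOR seek=$last conv=notrunc 2>/dev/null
# ...then "glabel label" writes its own metadata to the very same sector.
printf 'GEOM::LABEL name' | dd of="$disk" bs=$SECTOR seek=$last conv=notrunc 2>/dev/null

# Read the last sector back: the geli metadata (and thus the keys) is gone.
tail16=$(dd if="$disk" bs=$SECTOR skip=$last 2>/dev/null | head -c 16)
echo "$tail16"
rm -f "$disk"
```

(The real tools write a full sector of metadata, so the overwrite is total; the short strings here are just to make the effect visible.)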
Re: GELI file systems unusable after "glabel label" operations
Scott Bennett wrote:
> As noted above, that would not work because then the label would not be
> readable at boot time.

Yes it would. What you would have is a nested configuration, geli within a label. The label would be read when the device is present, then you would be able to attach the geli device (probably as /dev/label/blah.geli, I didn't try it).
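[Editor's note] The nested ordering Ivan describes (label the raw partition first, then run geli *inside* the label) could look roughly like the following. This is an untested sketch, not a verified recipe: the device name (da0s1d), label name (blah) and key file are hypothetical, and these commands destroy existing data, so consult glabel(8) and geli(8) before attempting anything like it:

```sh
# SKETCH ONLY - hypothetical names, destroys data on da0s1d.
glabel label blah /dev/da0s1d           # label first; creates /dev/label/blah
geli init -K blah.key /dev/label/blah   # encrypt the labeled provider, not the raw one
geli attach -k blah.key /dev/label/blah # attached device appears as /dev/label/blah.eli
newfs -U /dev/label/blah.eli            # filesystem goes inside the geli layer
```

Because glabel takes the last sector of da0s1d and geli then takes the last sector of the (one sector smaller) /dev/label/blah provider, the two metadata blocks no longer collide.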
Re: GELI file systems unusable after "glabel label" operations
On Thu, 14 Jan 2010 10:30:00 +0100 Ivan Voras wrote:
>Scott Bennett wrote:
>> I used "glabel label" to label each of the file systems I have on external
>> disk drives. Unfortunately, afterward I am now unable to "geli attach" any of
>> the GELI-encrypted file systems. The system is FreeBSD 7.2-STABLE.
>
>Hmm, did you say you had geli-encrypted drives, then you have
>overwritten the last sector with glabel, and then you are surprised you
>cannot get to the data any more?

No, I am not surprised, just disappointed that when I asked exactly that question on this list, the only response I got was one that missed the point of my question. So I experimented first with an unencrypted UFS2 file system and saw no problem with it. I then proceeded, but stupidly did it to both the primary encrypted file systems and the encrypted backup file system at the same time, so I can't restore from the backups I had taken because they are also hosed.

Neither the man page nor the handbook covers the combination of a partition labeled by "glabel label" and encryption with GELI. Apparently, though, the two are completely incompatible. The label metadata have to be readable at boot time in order to create the /dev/label/whatever device file, but the metadata apparently occupy the same place as the GELI metadata. What a mess. I have no idea how many hundreds, or perhaps thousands, of hours of work were lost, but it was a *lot* of time and effort.

>> Or have I just lost everything in the encrypted
>> file systems?
>
>I think you did.
>
>From the geli(8) man page:
>
>"init ... The last provider's sector is used to store metadata."
>
>From the glabel(8) man page:
>
>"label ... metadata is stored in a provider's last sector."

That was why I had originally posted my questions. It seemed to me that the usage of that sector might have been designed in such a way as to allow both GELI and labeling to be used together. It seems, however, that that capability was not included in the design.
>If you did "geli init ... da0" and then "glabel label ... da0" then you
>have lost the geli metadata, which contains keys, etc. You might recover
>this, though, by reading geli(8) about the "restore" command.

The "restore" only works if a "backup" operation had been done to produce a file from which to restore the metadata, which I had never done.

>There is no way you can label your devices after you applied geli to
>them (which is one of the points of using geli...). You could destroy
>the geli layer (and the data), apply the label and then apply geli to
>the label.

As noted above, that would not work because then the label would not be readable at boot time. It now looks to me as though the only way the two could be used in combination would require that the label and the GELI metadata be stored in separate places and that the label would have to be applied *after* the GELI data were created, so that the label would be readable at boot time. So the two features are currently unusable in combination. That means that a GELI-encrypted partition cannot be mounted by a /dev/label/whatever device, which means, in effect, that a GELI-encrypted partition cannot be mounted from a drive in a multiple-drive system using a device name given in /etc/fstab. Such a partition has to be mounted manually with the device file name entered explicitly. :-(

Now I have one more question. If I use the same key file to do a "geli init" on one of the damaged partitions, what will happen? Is there a chance that the rest of the data might then be accessible?

Scott Bennett, Comm. ASMELG, CFIAG
Internet: bennett at cs.niu.edu
Re: GELI file systems unusable after "glabel label" operations
Scott Bennett wrote:
> I used "glabel label" to label each of the file systems I have on external
> disk drives. Unfortunately, afterward I am now unable to "geli attach" any of
> the GELI-encrypted file systems. The system is FreeBSD 7.2-STABLE.

Hmm, did you say you had geli-encrypted drives, then you have overwritten the last sector with glabel, and then you are surprised you cannot get to the data any more?

> Or have I just lost everything in the encrypted
> file systems?

I think you did.

From the geli(8) man page:

"init ... The last provider's sector is used to store metadata."

From the glabel(8) man page:

"label ... metadata is stored in a provider's last sector."

If you did "geli init ... da0" and then "glabel label ... da0" then you have lost the geli metadata, which contains keys, etc. You might recover this, though, by reading geli(8) about the "restore" command.

There is no way you can label your devices after you applied geli to them (which is one of the points of using geli...). You could destroy the geli layer (and the data), apply the label and then apply geli to the label.
Re: GELI file systems unusable after "glabel label" operations
On Thu, 14 Jan 2010 10:55:35 +0300 Boris Samorodov wrote:

Thanks so much for responding so fast!

>On Thu, 14 Jan 2010 01:31:55 -0600 (CST) Scott Bennett wrote:
>
>> hellas# geli attach -k work.key /dev/label/work
>> geli: Cannot read metadata from /dev/label/work: Invalid argument.
>
>Did you try to mount it via geom consumer (/dev/daX)?

Um, no, a GELI-encrypted partition must first be attached. The attach operation fails, as shown above, so there's no way to mount it.

>Can you show appropriate "glabel list"?

hellas# geom ELI list
geom: Cannot get GEOM tree: Unknown error: -1
hellas#

I'm afraid I'm clueless here. What should I try next?

Scott Bennett, Comm. ASMELG, CFIAG
Internet: bennett at cs.niu.edu
Re: GELI file systems unusable after "glabel label" operations
On Thu, 14 Jan 2010 01:31:55 -0600 (CST) Scott Bennett wrote:

> hellas# geli attach -k work.key /dev/label/work
> geli: Cannot read metadata from /dev/label/work: Invalid argument.

Did you try to mount it via geom consumer (/dev/daX)?

Can you show appropriate "glabel list"?

--
WBR, bsam
GELI file systems unusable after "glabel label" operations
I used "glabel label" to label each of the file systems I have on external disk drives. Unfortunately, afterward I am now unable to "geli attach" any of the GELI-encrypted file systems. The system is FreeBSD 7.2-STABLE. Is there a way to get this to work? Or have I just lost everything in the encrypted file systems? hellas# geli attach -k work.key /dev/label/work geli: Cannot read metadata from /dev/label/work: Invalid argument. hellas# ls -lgF /dev/label/ total 0 crw-r- 1 root operator0, 192 Jan 14 00:47 archives crw-r- 1 root operator0, 191 Jan 14 00:47 backupsi crw-r- 1 root operator0, 182 Jan 14 00:47 backupsl crw-r- 1 root operator0, 166 Jan 14 00:47 backupss crw-r- 1 root operator0, 179 Jan 14 00:47 sec crw-r- 1 root operator0, 161 Jan 14 00:47 usrobj crw-r- 1 root operator0, 184 Jan 14 00:47 usrports crw-r- 1 root operator0, 186 Jan 14 00:47 vboxdisk crw-r- 1 root operator0, 181 Jan 14 00:47 work hellas# Any help in recovering the lost data would be deeply appreciated. If that cannot be done, then at least knowing that would keep me from wasting further time on it. Thanks much. Scott Bennett, Comm. ASMELG, CFIAG ** * Internet: bennett at cs.niu.edu * ** * "A well regulated and disciplined militia, is at all times a good * * objection to the introduction of that bane of all free governments * * -- a standing army." * *-- Gov. John Hancock, New York Journal, 28 January 1790 * ** ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"
How do I see disk IO for specific ZFS file systems?
I realize I can see per-device iostats with zpool or gstat, but I am interested in specific mount points. Is that possible? What are the reads/writes for "/tank/BBB.monkeybrains.net"? The READs are low and then jump to a higher value, and I'm trying to track it down.

READ/WRITE over past 6 months: http://www.monkeybrains.net/images/IO-past-6months.png

# zfs list
NAME                        USED   AVAIL  REFER  MOUNTPOINT
tank                       73.1G    368G    39K  /tank
tank/AAA.monkeybrains.net  2.54G   7.46G  2.45G  /tank/AAA.monkeybrains.net
tank/BBB.monkeybrains.net  2.64G   97.4G  2.64G  /tank/BBB.monkeybrains.net
tank/CCC.monkeybrains.net  1.39G   98.6G  1.39G  /tank/CCC.monkeybrains.net

# zpool iostat -v
                capacity     operations    bandwidth
pool          used  avail   read  write   read  write
----------   -----  -----  -----  -----  -----  -----
tank         67.5G   380G     12     34   446K   257K
  mirror     67.5G   380G     12     34   446K   257K
    ad0s2        -      -      3     17   276K   258K
    ad8s2        -      -      3     17   256K   258K
----------   -----  -----  -----  -----  -----  -----

# gstat
dT: 1.001s  w: 1.000s
 L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
 0 29 64566.1 223382.3 21.7 | ad0
 0 31 86159.8 223382.3 19.2 | ad8
 0 26 53924.6 213182.53.1 | ad0s1
 0 3 1 64 13.6 1 202.3 19.4 | ad0s2
 0 26 5392 10.5 213182.55.8 | ad8s1
 0 5 32248.7 1 200.3 16.9 | ad8s2

Specifically, I am interested in the file systems on ad0s2 and ad8s2. Thanks if you can point me in the right direction, and thanks if you read this far!

Rudy
Re: Re: gmirror and the UFS file systems
> is your partition size a multiple of the fragment size, without remainder?
> if not (quite a big chance) at least one sector at the end is unused and
> never will be. so go on, but then fix the disklabel, as the c partition is
> 1 sector smaller. of course - boot from a livecd to do this.

Thanks both Mel and Wojciech for the advice.

Andy
Re: gmirror and the UFS file systems
> I'm getting ready to move forward on enabling gmirror on my church's
> website server (FreeBSD 7.0-RELEASE p4). I used defaults during the
> install (most importantly for this, the file system defaults). I've read
> in the manual pages that the data for the mirror is contained in the last
> sector of the drive/partitions. So, I want to mirror the entire drive
> (ad4) to the second drive (ad5). This server doesn't yet have much data at
> all. I'm wondering, if I turn on this mirror, will anything important be
> overwritten in the last sector?

Is your partition size a multiple of the fragment size, without remainder? If not (quite a big chance), at least one sector at the end is unused and never will be. So go on, but then fix the disklabel, as the c partition is 1 sector smaller. Of course - boot from a livecd to do this.
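[Editor's note] The remainder check above can be expressed as a couple of lines of arithmetic: gmirror(8) keeps its metadata in the provider's last sector, and if the partition size is not an exact multiple of the filesystem fragment size, the tail sector(s) are never allocated by UFS. The sketch below uses made-up sizes and assumes the common newfs(8) default fragment size of 2048 bytes:

```sh
# How many whole sectors at the end of the partition does UFS never use?
# (part_bytes % frag_bytes) is the tail UFS cannot allocate; divide by the
# sector size to count sectors. All numbers here are illustrative.
SECTOR=512
FRAG=2048                                   # common newfs default fragment size

part_bytes=$((100 * FRAG + 512))            # one sector past a fragment boundary
tail_sectors=$(( (part_bytes % FRAG) / SECTOR ))
echo "$tail_sectors"                        # 1 -> last sector is safe for gmirror

part_bytes2=$((100 * FRAG))                 # exact multiple of the fragment size
tail_sectors2=$(( (part_bytes2 % FRAG) / SECTOR ))
echo "$tail_sectors2"                       # 0 -> last sector may hold live data
```

When the result is 0, the last sector can contain filesystem data, which is exactly the case where "gmirror label" on a populated partition is risky.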
Re: gmirror and the UFS file systems
On Friday 28 November 2008 18:08:19 Andrew Falanga wrote:
> I'm getting ready to move forward on enabling gmirror on my church's
> website server (FreeBSD 7.0-RELEASE p4). I used defaults during the
> install (most importantly for this, the file system defaults). I've read
> in the manual pages that the data for the mirror is contained in the last
> sector of the drive/partitions. So, I want to mirror the entire drive
> (ad4) to the second drive (ad5). This server doesn't yet have much data at
> all. I'm wondering, if I turn on this mirror, will anything important be
> overwritten in the last sector?

I've converted UFS filesystems with just the base install and a few ports over to gmirror many times with no problem. I guess the question can best be paraphrased as: "If I cross a street in a residential area at 4 a.m., will I get hit by a car?" Also, gmirror create will tell you if the last sector contains data, just like a driver would honk his horn.

--
Mel

Problem with today's modular software: they start with the modules and never get to the software part.
gmirror and the UFS file systems
Hi,

I'm getting ready to move forward on enabling gmirror on my church's website server (FreeBSD 7.0-RELEASE p4). I used defaults during the install (most importantly for this, the file system defaults). I've read in the manual pages that the data for the mirror is contained in the last sector of the drive/partitions. So, I want to mirror the entire drive (ad4) to the second drive (ad5). This server doesn't yet have much data at all. I'm wondering, if I turn on this mirror, will anything important be overwritten in the last sector?

Andy
Re: dumping mounted file systems with insufficient space...
>> I can use dump(8) on an active, mounted file system via the -L flag.
>> According to the manual, this first creates a snapshot of the file
>> system, in the .snap directory of the file system's root. What if the
>> file system to be dumped does not have sufficient free space to store
>> a snapshot? Can I still safely dump(8) a mounted file system?
>
> A snapshot doesn't take any significant /extra/ space itself. Rather it
> consists of marking the state of the system at that time and provides a
> view (via the .snap directory) of that state of the filesystem. Of course,
> subsequent modifications of the filesystem can cause more space than
> otherwise expected to be used up -- as both the snapshot and the latest
> versions of anything have to be kept around -- but how much impact this has
> depends entirely on the IO traffic characteristics of your particular
> filesystem and cannot be predicted in any useful fashion without a great
> deal more information.
>
> If snapshots won't work for you, another trick (if you can swing it) is to
> have the data on a RAID1 mirror. Then you can detach one of the mirrors,
> back it up, and then reattach the mirror. Doing this with gmirror is a
> simple matter of writing about a 10-line shell script. Other mirroring
> hard/soft-ware may be less cooperative. However you do it, this will
> involve an extended period, while the mirrors resynchronise after the
> backup, where your file system won't have the desired level of resilience.
>
> If you can't use snapshots, can't split the mirror, and you can't unmount
> the filesystem, then the next best thing is to make the filesystem as
> quiescent as possible. Basically, shut down any processes using the
> filesystem. That's probably as unacceptable as any of the other
> alternatives -- in which case, you can still go ahead and dump the
> filesystem, but don't expect the generated dump to be 100% consistent.
> It will be 'good enough' for some purposes, but files actively involved
> in IO at the time the dump is made are likely to be corrupted.
>
> Cheers,
>
> Matthew
>
> --
> Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard, Flat 3
> PGP: http://www.infracaninophile.co.uk/pgpkey   Ramsgate, Kent, CT11 9PW

Thank you for the clarification about snapshots, Matthew. I went with a dump to another disk and it worked out without any problems :)

-Modulok-
Re: dumping mounted file systems with insufficient space...
Modulok wrote:
> Before I try this on a live server...
>
> I can use dump(8) on an active, mounted file system via the -L flag.
> According to the manual, this first creates a snapshot of the file
> system, in the .snap directory of the file system's root. What if the
> file system to be dumped does not have sufficient free space to store
> a snapshot? Can I still safely dump(8) a mounted file system?

A snapshot doesn't take any significant /extra/ space itself. Rather it consists of marking the state of the system at that time and provides a view (via the .snap directory) of that state of the filesystem. Of course, subsequent modifications of the filesystem can cause more space than otherwise expected to be used up -- as both the snapshot and the latest versions of anything have to be kept around -- but how much impact this has depends entirely on the IO traffic characteristics of your particular filesystem and cannot be predicted in any useful fashion without a great deal more information.

If snapshots won't work for you, another trick (if you can swing it) is to have the data on a RAID1 mirror. Then you can detach one of the mirrors, back it up, and then reattach the mirror. Doing this with gmirror is a simple matter of writing about a 10-line shell script. Other mirroring hard/soft-ware may be less cooperative. However you do it, this will involve an extended period, while the mirrors resynchronise after the backup, where your file system won't have the desired level of resilience.

If you can't use snapshots, can't split the mirror, and you can't unmount the filesystem, then the next best thing is to make the filesystem as quiescent as possible. Basically, shut down any processes using the filesystem. That's probably as unacceptable as any of the other alternatives -- in which case, you can still go ahead and dump the filesystem, but don't expect the generated dump to be 100% consistent.
It will be 'good enough' for some purposes, but files actively involved in IO at the time the dump is made are likely to be corrupted.

Cheers,

Matthew

--
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard, Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey   Ramsgate, Kent, CT11 9PW
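[Editor's note] The "10-line shell script" Matthew alludes to is not shown in the thread, but its shape might look roughly like this. This is an untested sketch: the mirror name (gm0), component disk (ad6), partition letter and dump target are all hypothetical, and you would want to verify the exact gmirror(8) and dump(8) invocations against the man pages before trusting it with real data:

```sh
#!/bin/sh
# SKETCH of the split-the-mirror backup trick - hypothetical names, untested.
set -e
MIRROR=gm0
DISK=ad6

gmirror remove $MIRROR $DISK                 # detach one half of the mirror
dump -0af /backups/usr.dump /dev/${DISK}s1f  # dump the now-quiescent detached copy
gmirror insert $MIRROR $DISK                 # reattach; resynchronisation begins
gmirror status $MIRROR                       # resilience is reduced until rebuilt
```

As Matthew notes, the filesystem runs without redundancy from the detach until the resynchronisation completes.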
dumping mounted file systems with insufficient space...
Before I try this on a live server...

I can use dump(8) on an active, mounted file system via the -L flag. According to the manual, this first creates a snapshot of the file system, in the .snap directory of the file system's root. What if the file system to be dumped does not have sufficient free space to store a snapshot? Can I still safely dump(8) a mounted file system?

-Modulok-
Re: File Systems
On 6/24/08, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> I would like to contribute my knowledge of several otherwise arcane
> file systems and wanted your take on modifying the FS types with other
> values. Is there a central authority for all file system types that
> these should be registered with first, or should I simply choose values
> and add them in? It would be dumb to add support for a new file system
> only for some other partition utility to not recognize it and want to
> destroy it. Please advise...

If you are referring to the partition type (which indicates the file system type that is on an MS/IBM style disk partition), I don't think there is a central authority, but there is a very extensive list at:

http://www.win.tue.nl/~aeb/partitions/partition_types-1.html

- Bob
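[Editor's note] For quick reference, a handful of the better-known type IDs from the kind of list Bob links to, wrapped in a tiny lookup function (a convenience sketch, not an official registry):

```sh
# A few well-known MBR partition type IDs (hex byte -> name).
mbr_type_name() {
    case "$1" in
        07) echo "NTFS / IFS" ;;
        82) echo "Linux swap" ;;
        83) echo "Linux native" ;;
        a5) echo "FreeBSD" ;;
        a6) echo "OpenBSD" ;;
        a9) echo "NetBSD" ;;
        *)  echo "unknown" ;;
    esac
}

mbr_type_name a5    # prints: FreeBSD
```

This is why picking an arbitrary unused value is risky: other partition tools carry their own copy of (roughly) this table and treat IDs they don't know about unpredictably.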
File Systems
I would like to contribute my knowledge of several otherwise arcane file systems and wanted your take on modifying the FS types with other values. Is there a central authority for all file system types that these should be registered with first, or should I simply choose values and add them in? It would be dumb to add support for a new file system only for some other partition utility to not recognize it and want to destroy it. Please advise...
Re: NFS export subdirs on different file systems?
Pieter de Goeje wrote:
> On Tuesday 21 August 2007, Rakhesh Sasidharan wrote:
>>> I have a directory /net/store. This directory is exported to all machines
>>> on my network. I have a sub-directory /net/store/photos. That too is
>>> exported to all machines on my network.
>>>
>>> What I want is that when I mount /net/store from another machine, the
>>> contents of /net/store/photos too be visible. Is there any way I can do
>>> that? From the manpage and the handbook and Google etc. I get the idea
>>> that it might not be possible. Still, asking just in case there are any
>>> round-about ways ... I would assume a scenario like this is common.
>>
>> Forgot to add: the two directories are imported "dynamically" on the
>> client side. So I can't just make fstab entries on the client side to
>> mount both points. I use AMD to mount /net/store when needed. And I can't
>> for the life of me figure how to make it mount /net/store/photos too when
>> needed -- I don't think that's possible(?) ...
>
> I have that configuration working. In /etc/exports:
>
> /pub        -alldirs  client
> /pub/video  -alldirs  client
>
> On the client side, I didn't change anything in the configuration of AMD.
> Simply cd'ing to /host/server/pub and then 'cd video' does the right thing.
> I don't think -alldirs is really needed, but it's there for convenience.

Thanks Pieter. The default configuration mounts *all* the exported filesystems from host. Which should be fine, just that I don't want it that way (and I like complicating matters, I guess! :p). Using the default way, I can access the exported filesystems as /host/server/net/store[/photos] -- which is not what I want. Rather, I want to access the exported /net/store[/photos] filesystems under the /net/store[/photos] mount points of the client -- and I don't want any other exported file systems in there either. Kind of like the "host" type amd filesystem, but only for a specific branch.
This is something I did come up with:

-8<-
/defaults  host!=obelix;type:=nfs;opts:=rw,intr,grpid,nosuid
store      type:=auto;fs:=${map};pref:=${key}/
store/*    type:=nfs;rhost:=obelix;rfs:=${path}
-8<-

It does what I want -- /net/store/[anything] is mounted from the remote host (obelix) -- the only problem (and the reason why I didn't go ahead with this) being that there's no way to see what directories are available under /net/store. If you do a "cd /net/store/music" it will work well; but if you do an "ls /net/store" it won't mount "/net/store" and show me what subdirs are available. And the "browseable_dirs" option in "amd.conf" does not help either, because it's an "auto" type filesystem.

I had forgotten about the "host" filesystem type (the default). So thanks for pointing it out. Let me see if I can twiddle around and find a workaround. :)

Regards,
- Rakhesh
http://rakhesh.com/
Re: NFS export subdirs on different file systems?
On Tuesday 21 August 2007, Rakhesh Sasidharan wrote:
>> I have a directory /net/store. This directory is exported to all machines
>> on my network.
>>
>> I have a sub-directory /net/store/photos. That too is exported to all
>> machines on my network.
>>
>> What I want is that when I mount /net/store from another machine, the
>> contents of /net/store/photos too be visible. Is there any way I can do
>> that?
>>
>> From the manpage and the handbook and Google etc. I get the idea that it
>> might not be possible. Still, asking just in case there are any
>> round-about ways ... I would assume a scenario like this is common.
>>
>> [I need /net/store/photos to be on a separate partition coz it's encrypted
>> and stuff. And I'd rather have it appear as part of the /net/store
>> namespace ...]
>
> Forgot to add: the two directories are imported "dynamically" on the
> client side. So I can't just make fstab entries on the client side to
> mount both points. I use AMD to mount /net/store when needed. And I can't
> for the life of me figure how to make it mount /net/store/photos too when
> needed -- I don't think that's possible(?) ...

I have that configuration working. In /etc/exports:

/pub        -alldirs  client
/pub/video  -alldirs  client

On the client side, I didn't change anything in the configuration of AMD. Simply cd'ing to /host/server/pub and then 'cd video' does the right thing. I don't think -alldirs is really needed, but it's there for convenience.

Regards,
Pieter de Goeje
Re: NFS export subdirs on different file systems?
> I have a directory /net/store. This directory is exported to all machines
> on my network.
>
> I have a sub-directory /net/store/photos. That too is exported to all
> machines on my network.
>
> What I want is that when I mount /net/store from another machine, the
> contents of /net/store/photos too be visible. Is there any way I can do
> that?
>
> From the manpage and the handbook and Google etc. I get the idea that it
> might not be possible. Still, asking just in case there are any round-about
> ways ... I would assume a scenario like this is common.

Forgot to add: the two directories are imported "dynamically" on the client side. So I can't just make fstab entries on the client side to mount both points. I use AMD to mount /net/store when needed. And I can't for the life of me figure how to make it mount /net/store/photos too when needed -- I don't think that's possible(?) ...
NFS export subdirs on different file systems?
Hi,

I have a directory /net/store. This directory is exported to all machines on my network.

I have a sub-directory /net/store/photos. That too is exported to all machines on my network.

What I want is that when I mount /net/store from another machine, the contents of /net/store/photos too be visible. Is there any way I can do that?

From the manpage and the handbook and Google etc. I get the idea that it might not be possible. Still, asking just in case there are any round-about ways ... I would assume a scenario like this is common.

[I need /net/store/photos to be on a separate partition coz it's encrypted and stuff. And I'd rather have it appear as part of the /net/store namespace ...]

Regards,
- Rakhesh
http://rakhesh.com/
Re: About file systems and formats
On Thu, Mar 29, 2007 at 02:25:57PM -0600, Andrew Falanga wrote: > Yesterday while working on a problem at work, a colleague and I were talking > about the various file systems and something that I have always wondered on > is what are the various file systems doing when a format is being done. For > example, at home, my PC has 2 80gb drives. One for Windows and the other > for FreeBSD. It took Windows nearly an hour (give or take) to format the > 80gb drive. On the other hand, it took FreeBSD little more than 3 - 5 > minutes to format its 80gb drive. > > Both drives are similar in capability. They are both 7200 rpm drives, etc. > So what is so much different about NTFS from FFS? Are the file systems > really that different that MS's system is simply dog slow, or is the format > for FreeBSD skipping some "integrity" checks on the surface of the drive or > whatever (this assumes that the MS install process is actually doing this). > Please understand, I intend only to find the answer to the question with > this. I'm looking for starting a "war" about who's file system rocks more > than the other. The idea of an integrity check was just speculation between > my colleague and I because there such a speed difference in formatting > things (once windows is installed) when choosing between a "Quick Format" or > a "Full Format". Unix systems such as FreeBSD do not usually do an actual 'format' in the way we used to think of Format. When you do a newfs on a FreeBSD partition it is creating a filesystem by writing file system tables and copies of those tables in specific places across the partition. It makes use of the low level format that is already put there by the manufacturer. I don't really know how much of that MS does when it builds an NTFS file system. jerry > > Can someone here offer some in depth information on this for me? Thanks. > > Andy > > P.S. 
on a side note, but related to this, in what directories under the > system sources will I find the source code for the FFS used by FreeBSD, and > how are those modules structured? > ___ > freebsd-questions@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-questions > To unsubscribe, send any mail to "[EMAIL PROTECTED]" ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
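Jerry's description can be made concrete with a sketch (the device name is a placeholder and both commands are destructive, so treat this purely as illustration): newfs writes only filesystem metadata, while a full format touches every sector, which is where the hour goes.

```shell
# "Quick" in Unix terms: write superblocks and cylinder-group tables only.
# Finishes in seconds regardless of drive size.
newfs -U /dev/ad4s1d

# The moral equivalent of a Windows "Full Format": write every sector.
# Time scales with drive capacity, hence the hour-long format.
dd if=/dev/zero of=/dev/ad4s1d bs=1m
```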
Re: About file systems and formats
On 3/29/07, Ivan Voras <[EMAIL PROTECTED]> wrote: Andrew Falanga wrote: > Yesterday while working on a problem at work, a colleague and I were > talking > about the various file systems and something that I have always wondered on > is what are the various file systems doing when a format is being done. > For > example, at home, my PC has 2 80gb drives. One for Windows and the other > for FreeBSD. It took Windows nearly an hour (give or take) to format the > 80gb drive. On the other hand, it took FreeBSD little more than 3 - 5 > minutes to format its 80gb drive. This is too slow for the FreeBSD case. By default, Windows will do a full format - in effect, will write zeroes all over the drive, with the intent of checking if the drive is capable of it. Unix format (newfs) will only initialize file system structures - in effect, will write out (initially empty) file tables to the drive. This takes about 5-10 seconds on 250 GB drives, so the 3-5 minutes you got is way too much. There's no way of making newfs do the "checking" phase; there are separate utilities for that. Wow, I guess so! I did this some time ago and was trying to be conservative in my time table as I actually couldn't remember the exact time. > my colleague and me because there is such a speed difference in formatting > things (once windows is installed) when choosing between a "Quick > Format" or > a "Full Format". Yes, Quick format will just write the file tables (this is simplified, but you'll get the picture) on Windows, too. Sounds like we were on track. The system just creates the appropriate data structures using newfs and one can use the smartmon utilities Chuck mentioned to keep track of the surface, i.e. looking for disk defects. Thanks for the links to the source (Chuck). Andy ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: About file systems and formats
Andrew Falanga wrote: > Yesterday while working on a problem at work, a colleague and I were > talking > about the various file systems and something that I have always wondered on > is what are the various file systems doing when a format is being done. > For > example, at home, my PC has 2 80gb drives. One for Windows and the other > for FreeBSD. It took Windows nearly an hour (give or take) to format the > 80gb drive. On the other hand, it took FreeBSD little more than 3 - 5 > minutes to format its 80gb drive. This is too slow for the FreeBSD case. By default, Windows will do a full format - in effect, will write zeroes all over the drive, with the intent of checking if the drive is capable of it. Unix format (newfs) will only initialize file system structures - in effect, will write out (initially empty) file tables to the drive. This takes about 5-10 seconds on 250 GB drives, so the 3-5 minutes you got is way too much. There's no way of making newfs do the "checking" phase; there are separate utilities for that. > my colleague and me because there is such a speed difference in formatting > things (once windows is installed) when choosing between a "Quick > Format" or > a "Full Format". Yes, Quick format will just write the file tables (this is simplified, but you'll get the picture) on Windows, too.
Re: About file systems and formats
On Mar 29, 2007, at 1:25 PM, Andrew Falanga wrote: Both drives are similar in capability. They are both 7200 rpm drives, etc. So what is so much different about NTFS from FFS? All sorts of things. :-) Are the file systems really that different that MS's system is simply dog slow, or is the format for FreeBSD skipping some "integrity" checks on the surface of the drive or whatever (this assumes that the MS install process is actually doing this). The Windows format is probably doing a bad sector scan and testing each and every sector during the format. The Unix newfs/mkfs doesn't perform bad-sector checking, but you can invoke things like the smartmon utilities to perform disk checking later on. Please understand, I intend only to find the answer to the question with this. I'm not looking to start a "war" about whose file system rocks more than the other. The idea of an integrity check was just speculation between my colleague and me because there is such a speed difference in formatting things (once windows is installed) when choosing between a "Quick Format" or a "Full Format". A "quick format" is the Windows equivalent of what newfs does, yes. P.S. on a side note, but related to this, in what directories under the system sources will I find the source code for the FFS used by FreeBSD, and how are those modules structured? See: http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/ufs/ufs/ ...versus other filesystems found here: http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/fs/ -- -Chuck ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
About file systems and formats
Yesterday while working on a problem at work, a colleague and I were talking about the various file systems, and something that I have always wondered about is what the various file systems are doing when a format is being done. For example, at home, my PC has 2 80gb drives. One for Windows and the other for FreeBSD. It took Windows nearly an hour (give or take) to format the 80gb drive. On the other hand, it took FreeBSD little more than 3 - 5 minutes to format its 80gb drive. Both drives are similar in capability. They are both 7200 rpm drives, etc. So what is so much different about NTFS from FFS? Are the file systems really that different that MS's system is simply dog slow, or is the format for FreeBSD skipping some "integrity" checks on the surface of the drive or whatever (this assumes that the MS install process is actually doing this)? Please understand, I intend only to find the answer to the question with this. I'm not looking to start a "war" about whose file system rocks more than the other. The idea of an integrity check was just speculation between my colleague and me because there is such a speed difference in formatting things (once windows is installed) when choosing between a "Quick Format" or a "Full Format". Can someone here offer some in depth information on this for me? Thanks. Andy P.S. on a side note, but related to this, in what directories under the system sources will I find the source code for the FFS used by FreeBSD, and how are those modules structured? ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: Upgrade from 4.x -> 6.2: Old file systems?
On Fri, 23 Mar 2007 12:24:50 -0600 Brett Glass <[EMAIL PROTECTED]> wrote: >I have a server which I am considering upgrading from 4.11 to 6.2. >Besides the operating system disk (which contains all of the >expected partitions such as /, /usr, /var, and /tmp), There's a >large data disk on the system containing useful data that I'd like >to put back online as soon as the upgrade is completed. I went from FreeBSD 5.4R to 6.2R. On my drive, I had two partitions that I wanted to preserve. I was able to walk through sysinstall and slice up the drive as it was before, and when all was said and done, my two partitions that I was hoping to keep were intact (mind you, I did make backups just in case!). I was very pleased, and the upgrade to 6.2R was painless. (As a side note, I opted to wipe the other slices and do a fresh install of the OS and apps while retaining my two data slices.) Your mileage may vary. -gerry ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: Upgrade from 4.x -> 6.2: Old file systems?
On Mar 23, 2007, at 12:15 PM, Erik Trulsson wrote: On Fri, Mar 23, 2007 at 11:36:21AM -0700, [EMAIL PROTECTED] wrote: On Fri, 23 Mar 2007, Brett Glass wrote: I have a server which I am considering upgrading from 4.11 to 6.2. Besides the operating system disk (which contains all of the expected partitions such as /, /usr, /var, and /tmp), There's a large data disk on the system containing useful data that I'd like to put back online as soon as the upgrade is completed. I'd rather not have to reformat it unless there is a significant advantage to doing so. Does 6.2 work properly with the older disk format? Is there any reason to take the time and effort to back up the data and restore it to the new format? Is there anything I'll need to be careful about if I upgrade just the system disk? --Brett Glass Brett, Yes, 6.2 does but there are features that were added to UFS2 (softupdates, file size limit raised past 2GB?) which make it a much better filesystem infrastructure than UFS1. The things you mention (softupdates, large files) were and are well supported with UFS1 too. There were not really much features added with UFS2 (support for very large disks (> 1 TB) and some support for extra flags and attributes are what I can think of right now.) There is not really any significant gains to be had from converting the existing file systems from UFS1 to UFS2. FreeBSD 6.2 should work just fine with the older disk. Sorry. I meant "snapshots", a feature of softupdates, which according to McKusick (dev author of softupdates?) are available post 5.0. Reference: <http://www.mckusick.com/softdep/>. -Garrett ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: Upgrade from 4.x -> 6.2: Old file systems?
On Fri, Mar 23, 2007 at 11:36:21AM -0700, [EMAIL PROTECTED] wrote: > On Fri, 23 Mar 2007, Brett Glass wrote: > > >I have a server which I am considering upgrading from 4.11 to 6.2. Besides > >the operating system disk (which contains all of the expected partitions > >such as /, /usr, /var, and /tmp), There's a large data disk on the system > >containing useful data that I'd like to put back online as soon as the > >upgrade is completed. I'd rather not have to reformat it unless there is a > >significant advantage to doing so. Does 6.2 work properly with the older > >disk format? Is there any reason to take the time and effort to back up > >the data and restore it to the new format? Is there anything I'll need to > >be careful about if I upgrade just the system disk? > > > >--Brett Glass > > Brett, > Yes, 6.2 does but there are features that were added to UFS2 > (softupdates, file size limit raised past 2GB?) which make it a much > better filesystem infrastructure than UFS1. The things you mention (softupdates, large files) were and are well supported with UFS1 too. There were not really many features added with UFS2 (support for very large disks (> 1 TB) and some support for extra flags and attributes are what I can think of right now.) There are not really any significant gains to be had from converting the existing file systems from UFS1 to UFS2. FreeBSD 6.2 should work just fine with the older disk. -- Erik Trulsson [EMAIL PROTECTED] ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: Upgrade from 4.x -> 6.2: Old file systems?
On Fri, 23 Mar 2007, Brett Glass wrote: I have a server which I am considering upgrading from 4.11 to 6.2. Besides the operating system disk (which contains all of the expected partitions such as /, /usr, /var, and /tmp), There's a large data disk on the system containing useful data that I'd like to put back online as soon as the upgrade is completed. I'd rather not have to reformat it unless there is a significant advantage to doing so. Does 6.2 work properly with the older disk format? Is there any reason to take the time and effort to back up the data and restore it to the new format? Is there anything I'll need to be careful about if I upgrade just the system disk? --Brett Glass Brett, Yes, 6.2 does but there are features that were added to UFS2 (softupdates, file size limit raised past 2GB?) which make it a much better filesystem infrastructure than UFS1. -Garrett ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Upgrade from 4.x -> 6.2: Old file systems?
I have a server which I am considering upgrading from 4.11 to 6.2. Besides the operating system disk (which contains all of the expected partitions such as /, /usr, /var, and /tmp), There's a large data disk on the system containing useful data that I'd like to put back online as soon as the upgrade is completed. I'd rather not have to reformat it unless there is a significant advantage to doing so. Does 6.2 work properly with the older disk format? Is there any reason to take the time and effort to back up the data and restore it to the new format? Is there anything I'll need to be careful about if I upgrade just the system disk? --Brett Glass ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
SMART errors and file systems
I have a drive that gained a bad sector, detected by smartctl. I have the LBA number of the sector. The drive is one large UFS partition. Is it possible to determine where in the filesystem the sector lies? Specifically, which file is corrupted by the bad sector? Thanks, Mike ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
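One way to answer this, sketched under the assumption that the findblk command of fsdb(8) is available on the system (the device name, block number, and inode number below are placeholders): translate the absolute LBA into a partition-relative block, ask fsdb which inode claims it, then map the inode back to a path name.

```shell
# 1. Subtract the partition's starting sector (from bsdlabel/fdisk output)
#    from the LBA that smartctl reported, giving a partition-relative block.
# 2. Open the filesystem read-only in fsdb and ask who owns that block:
#       fsdb> findblk 12345678
#       fsdb> quit
fsdb -r /dev/ad0s1f

# 3. Turn the inode number fsdb printed into a file name
#    (-x keeps find on this one filesystem):
find -x /usr -inum 54321
```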
size of crypto file systems geli/gbde
Hi: I want to create encrypted memory filesystems for backup, and selective data destruction: If I have data from different users, say, each user's backup will be stored as a different encrypted file system. Then I can selectively destroy data from one user by throwing away the key. Now, how do I estimate the actual available space on an encrypted partition? Say, I need to back up 100MB - how big an mfs do I need to create in order that the encrypted file system will be 100MB? Secondly: Which of the two supported crypto file systems is recommended: ELI or BDE? PHK writes in the manpage of BDE that no audit of the code has been made, but no such warning appears on ELI. Which is strongest/fastest/most efficient/reliable? Thanks, Erik -- Ph: +34.666334818 web: http://www.locolomo.org X.509 Certificate: http://www.locolomo.org/crt/8D03551FFCE04F0C.crt Key ID: 69:79:B8:2C:E3:8F:E7:BE:5D:C3:C3:B1:74:62:B8:3F:9F:1F:69:B9
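On the sizing question, a sketch may help (paths, unit numbers, and sizes are placeholders; this needs root): geli reserves only its last sector for metadata, so a 100 MB backing store yields essentially 100 MB of encrypted space, minus the usual newfs overhead. The "throw away the key" step maps to geli kill, which destroys the on-disk metadata.

```shell
# One file-backed encrypted filesystem per user:
dd if=/dev/random of=/backup/user1.img bs=1m count=100
mdconfig -a -t vnode -f /backup/user1.img -u 10
geli init /dev/md10          # prompts for a passphrase, writes metadata
geli attach /dev/md10        # creates /dev/md10.eli
newfs -U /dev/md10.eli
mount /dev/md10.eli /mnt/user1

# Selective destruction: wipe the metadata (and with it the key):
# geli kill /dev/md10
```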
Re: Dump on large file systems
> On 8/14/05, John Pettitt <[EMAIL PROTECTED]> wrote: >> I tried to dump a 600gb file system a few days ago and it didn't >> work. dump went compute bound during phase III and never wrote any >> data to the dump device (this on an up to date RELENG_5 box). - is >> this a known problem? Are there any work arounds? >> >> John >> > > If you are dumping that 600G slice to a file, you will need to split > it up into smaller chunks. > > dump -0auLf - / | split -a4 -b1024m - "path/to/dump/file." > > The above line will create 1G files and append a suffix to the filename (see the > trailing ".") > e.g. 20050815-root.aaaa > 20050815-root.aaab > > You can also gzip it, but this makes the backup take a long time. > dump -0auLf - / | gzip | split -a4 -b1024m - "path/to/dump/file.gz." > > Nope, I'm dumping to an IOMEGA Rev 35GB removable disk - but it doesn't get that far - it hangs before writing any output data. John ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: Dump on large file systems
On 8/14/05, John Pettitt <[EMAIL PROTECTED]> wrote: > I tried to dump a 600gb file system a few days ago and it didn't > work. dump went compute bound during phase III and never wrote any > data to the dump device (this on an up to date RELENG_5 box). - is > this a known problem? Are there any work arounds? > > John If you are dumping that 600G slice to a file, you will need to split it up into smaller chunks. dump -0auLf - / | split -a4 -b1024m - "path/to/dump/file." The above line will create 1G files and append a suffix to the filename (see the trailing ".") e.g. 20050815-root.aaaa 20050815-root.aaab You can also gzip it, but this makes the backup take a long time. dump -0auLf - / | gzip | split -a4 -b1024m - "path/to/dump/file.gz." -Erik- ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
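For completeness, a sketch of the restore side (same placeholder paths as the split example above): concatenating the pieces in order rebuilds the single stream that dump wrote, which restore then reads from stdin.

```shell
# Run inside the freshly newfs'ed and mounted target filesystem:
cat path/to/dump/file.* | restore -rf -

# Gzipped variant:
cat path/to/dump/file.gz.* | gunzip | restore -rf -
```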
Dump on large file systems
-BEGIN PGP SIGNED MESSAGE- Hash: RIPEMD160 I tried to dump a 600gb file system a few days ago and it didn't work. dump went compute bound during phase III and never wrote any data to the dump device (this on an up to date RELENG_5 box). - is this a known problem? Are there any work arounds? John -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.1 (MingW32) iD8DBQFC/1VpaVyA7PElsKkRAwnlAKCiqEJ5BLoKpHIRCOLMbcSjrpNBjgCgyyZp nM+KOXrDZs96+nk7QV6hOCc= =7Kv9 -END PGP SIGNATURE- ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
bg flag for local file-systems
Hello, I'm sorry if this sounds dumb, but I'm looking for a kind of bg flag (from mount_nfs) for my local file-systems. The problem is that I frequently pull hard drives out of my headless file-server, but forget to edit fstab _before_ it. So the system won't boot until I connect a monitor, a keyboard and sort the mess out from the single-user mode. I currently use noauto flag in fstab for the file-systems, but that makes me mount them manually every time I reboot. Is there any magic I can do with fstab to solve my problem, or will I have to write a script? Best wishes, Andrew P. ___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
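Two sketches of the usual answers (device names and mount points are placeholders). Newer FreeBSD releases accept a failok option in fstab, which lets the boot continue when a mount fails; on releases without it, the noauto entries can be mounted from a boot script so absent drives are simply skipped.

```shell
# /etc/fstab on releases that support "failok":
#   /dev/ad2s1d  /store1  ufs  rw,failok  2  2

# Otherwise, keep noauto in fstab and mount from /etc/rc.local,
# ignoring drives that happen to be pulled:
for fs in /store1 /store2; do
    mount "$fs" || echo "warning: $fs not mounted" >&2
done
```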
Re: Re: Isn't there a way to install freebsd from hard disk with only a bootable grub? (all file systems are ext3)
> Date: 05 Dec 2004 12:02:26 -0500 > From: Lowell Gilbert <[EMAIL PROTECTED]> > Subject: Re: Isn't there a way to install freebsd from hard disk with >only a bootable grub? (all file systems are ext3) > [EMAIL PROTECTED] writes: > > > Hi,all, > > > > I really want to know if there is a way to install freebsd from hard > > disk. This old laptop doesn't contain a floppy drive, And I didn't get a > > CD burner either. Certainly, The rubbishy computer couldn't be boot use > > pxe kind of thing. I've searched google and I have done what I can do > > for the result. And still no clue,So anyone here can tell me if there is > > a way to install freebsd from hard disk? Grub is installed in a > > seperated partition. And the grub manual says it can boot the bsd > > kernel. > > > > I am sure that it can be installed with only a bootable hard drive.I > > wonder if someone can make a kernel with "tmpfs" kind of thing > > supported. And a loop file contains a minimum root with bsd installer in > > it.Then It can do the job. > > > > Or maybe there is a solution like this already? > > Well,thanks all for your attention! > > I don't think this has been done, but it should be possible. > Sounds like a quick project for somebody knowledgeable about > GRUB. > I searched Google and found that FreeBSD Remote Install nearly achieves what I had in mind (in fact, I think it would be easy to adapt it for local installation). I don't have another computer, so can anyone else help? Still waiting for an answer... :-S ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: Isn't there a way to install freebsd from hard disk with only a bootable grub? (all file systems are ext3)
[EMAIL PROTECTED] writes: > Hi,all, > > I really want to know if there is a way to install freebsd from hard > disk. This old laptop doesn't contain a floppy drive, And I didn't get a > CD burner either. Certainly, The rubbishy computer couldn't be boot use > pxe kind of thing. I've searched google and I have done what I can do > for the result. And still no clue,So anyone here can tell me if there is > a way to install freebsd from hard disk? Grub is installed in a > seperated partition. And the grub manual says it can boot the bsd > kernel. > > I am sure that it can be installed with only a bootable hard drive.I > wonder if someone can make a kernel with "tmpfs" kind of thing > supported. And a loop file contains a minimum root with bsd installer in > it.Then It can do the job. > > Or maybe there is a solution like this already? > Well,thanks all for your attention! I don't think this has been done, but it should be possible. Sounds like a quick project for somebody knowledgeable about GRUB. -- Lowell Gilbert, embedded/networking software engineer, Boston area http://be-well.ilk.org/~lowell/ ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Isn't there a way to install freebsd from hard disk with only a bootable grub? (all file systems are ext3)
Hi, all, I really want to know if there is a way to install freebsd from hard disk. This old laptop doesn't contain a floppy drive, and I don't have a CD burner either. The old machine can't boot via PXE or anything like that. I've searched google and done what I can, and still have no clue, so can anyone here tell me if there is a way to install freebsd from hard disk? Grub is installed in a separate partition, and the grub manual says it can boot the bsd kernel. I am sure that it can be installed with only a bootable hard drive. I wonder if someone can make a kernel with "tmpfs" kind of thing supported, and a loop file containing a minimal root with the bsd installer in it. Then it could do the job. Or maybe there is a solution like this already? Well, thanks all for your attention! YiyiHu ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: fstab - why different file systems nummers?
Marcel, > and i am stuck with another problem. so far i've read that the md > driver can be used to mount a file as a filesystem. before i could > use mdconfig. but i don't have mdconfig on my branch (4.10) On 4.x you can use md, but it's easier to use mfs. In the fstab, you simply put the swap device in place of "md", eg. /dev/ad0s1b /tmp mfs rw,-s=655360 0 0 See mount_mfs(8), aka. newfs(8). -Andrew- -- ___ | -Andrew J. Caines- Unix Systems Engineer [EMAIL PROTECTED] | | "They that can give up essential liberty to obtain a little temporary | | safety deserve neither liberty nor safety" - Benjamin Franklin, 1759 | ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: fstab - why different file systems nummers?
Good day Dan Nelson, on Friday, 27 August 2004 at 21:37 you wrote: DN> In the last episode (Aug 27), Marcel.lautenbach said: >> well, i'm new to freebsd but i didn't find help in the newbie list, and >> since i got this daily message from the list i think this is the >> right place to go. >> >> i am at the point to change my /etc/fstab file. well, there i can set >> two numbers: 1 for the root file system; 2 for another ufs file system; and >> 0 for everything else. so, in my example here: why is a ms-dos file >> system set to 2 and not to 0? it isn't a ufs file >> system...*wondering* >> >> also, why distinguish between 1, 2 and 0? there is a file system >> declaration in the third column. so, i don't get it with the >> differences and reasons for these three numbers. but i would like to >> understand :-) DN> Run "man fstab", and read the descriptions of the fifth and sixth DN> columns. >> so, can someone help? >> >> and, what does the term "userland" mean for freebsd? DN> Any user programs, headers, libraries, etc (anything that's not the DN> kernel). Hi Dan, thanks for the help. i will check the man page then :-) and i am stuck with another problem. so far i've read that the md driver can be used to mount a file as a filesystem. before i could use mdconfig. but i don't have mdconfig on my branch (4.10), not even a man entry. -- mailto:[EMAIL PROTECTED] ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: fstab - why different file systems nummers?
In the last episode (Aug 27), Marcel.lautenbach said: > well, i new to freebsd but i didn't find help in the newbelist. and > since i got this daily message from the list i think this is the > right place to go. > > i am at the point to change my /etc/fstab file. well, there i can set > two numbers 1 for root file system; 2 for another ufs file system and > 0 for everythin else. so, in my example here: why ist a ms-dos file > system set to 2 and not to 0? it isn't a ufs file > system...*wondering* > > also, why to distinguish between 1,2 and 0. there is a file system > declaration in the third column. so, i don't get it with the > differences and reasons for these three numbers. but i would like to > understand :-) Run "man fstab", and read the descriptions of the fifth and sixth columns. > so, can someone help? > > and, what does the term "userland" mean for freebsd? Any user programs, headers, libraries, etc (anything that's not the kernel). -- Dan Nelson [EMAIL PROTECTED] ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
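An annotated fstab may make the fifth and sixth columns concrete (a sketch; the devices are placeholders): the fifth field is the dump(8) frequency and the sixth is the fsck(8) pass number, which is why it is 1 only for /, 2 for other local UFS filesystems, and 0 for anything fsck should not check at boot - an ms-dos filesystem included.

```shell
# Device       Mountpoint  FStype   Options  Dump  Pass#
/dev/ad0s1a    /           ufs      rw       1     1   # root: checked first, alone
/dev/ad0s1e    /usr        ufs      rw       2     2   # pass 2: checked in parallel
/dev/ad0s1b    none        swap     sw       0     0   # nothing to dump or check
/dev/ad0s5     /dos        msdosfs  rw       0     0   # skip fsck at boot
```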
fstab - why different file systems nummers?
hi folks, well, i new to freebsd but i didn't find help in the newbelist. and since i got this daily message from the list i think this is the right place to go. i am at the point to change my /etc/fstab file. well, there i can set two numbers 1 for root file system; 2 for another ufs file system and 0 for everythin else. so, in my example here: why ist a ms-dos file system set to 2 and not to 0? it isn't a ufs file system...*wondering* also, why to distinguish between 1,2 and 0. there is a file system declaration in the third column. so, i don't get it with the differences and reasons for these three numbers. but i would like to understand :-) so, can someone help? and, what does the term "userland" mean for freebsd? o.k., i am kind of dump but i hope you can clear the fog in my brain :-) thanky -- mailto:[EMAIL PROTECTED] ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: read only system file systems for jail
On May 12, 2004, at 10:15 AM, Kirk Strauser wrote: At 2004-05-12T05:31:41Z, "Chad Leigh -- Shire.Net LLC" <[EMAIL PROTECTED]> writes: Is there a fundamental problem of having the following all be read-only file systems, with the noted exceptions? With the exception of /var (that you mentioned in another post), you should be fine. good deal. I have been running test jails like this for a while and it seemed to work. note that users are not allowed root privilege and hence are not installing stuff into any of these hierarchies and no /usr/ports Out of curiosity, what are you using jails for? Create "virtual servers". Up to now I have been using them as I consolidated real HW onto one more powerful box[1] (since I pay by the rack unit :-), as well as I have a few customers who have their own jails that they run for whatever they want to do. Current production systems are 4.9 (and a 4.7) currently. Currently all jails have their own installs, which is a pita to admin for upgrades. With a single jail install, I can update one instance and restart the jails and get everyone updated. On my test system I am currently using localhost nfs mounting to remount the master jail directories. I am getting ready to deploy 5.x sometime this summer, hopefully 5.3-RELEASE, and want to virtualize all the users. So each virtual web host (with IP) will actually be running in its own jail, with its own instance of Roxen or apache running (out of one install though). No services except ssh should be running on the main HW, with only admin log-in, no customers, and all mail, web, customer, whatever, services will be running in "hardened" jails (hardened through the read only part). Additionally, I create file-backed mdXXX file systems and mount them for each jail, so the jail is self contained in its own file system. (And that enforces a quota by default on the user without having to run quota stuff). The idea is to make it a lot harder for potential hackers to take over the machine. 
Any cracked web or other services land them in a jail that should be hard to break out of and even harder to take advantage of since the main system directories are read only. I have not been hacked so far anyway, that I can tell (and I do regular checks with various utils), and want to make it that much harder. best Chad [1] we run more than one box but multiples that did not need to be separate have been consolidated down ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
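A sketch of the two pieces Chad describes (paths, sizes, and unit numbers are placeholders; this needs root): a file-backed md device gives each jail a self-contained filesystem with an implicit quota, and a nullfs read-only mount shares the single master install into the jail.

```shell
# Self-contained, file-backed filesystem for one jail:
dd if=/dev/zero of=/jails/web1.img bs=1m count=1024    # 1 GB backing file
mdconfig -a -t vnode -f /jails/web1.img -u 20
newfs -U /dev/md20
mount /dev/md20 /jails/web1

# Share the master system tree into the jail, read-only:
mount -t nullfs -o ro /jails/base/usr /jails/web1/usr
```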
Re: read only system file systems for jail
At 2004-05-12T05:31:41Z, "Chad Leigh -- Shire.Net LLC" <[EMAIL PROTECTED]> writes: > Is there a fundamental problem of having the following all be read-only > file systems, with the noted exceptions? With the exception of /var (that you mentioned in another post), you should be fine. > note that users are not allowed root privilege and hence are not > installing stuff into any of these hierarchies and no /usr/ports Out of curiosity, what are you using jails for? -- Kirk Strauser "94 outdated ports on the box, 94 outdated ports. Portupgrade one, an hour 'til done, 82 outdated ports on the box."
Re: read only system file systems for jail
On May 11, 2004, at 11:31 PM, Chad Leigh -- Shire.Net LLC wrote: Hi All I am playing around on 5.2-CURRENT and am setting up a system to run various programs inside of jails. Including allowing the users to ssh in etc. Is there a fundamental problem of having the following all be read-only file systems, with the noted exceptions? /bin /sbin /libexec /lib /usr /var note: /usr/local would not be readonly and /var/tmp would not be readonly Sorry, the whole /var is not readonly. Sorry, I misread my notes... Chad It seems to work in my test jails but I was wondering about hidden problems or non obvious problems. note that users are not allowed root privilege and hence are not installing stuff into any of these hierarchies and no /usr/ports Thanks Chad ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]" ___ [EMAIL PROTECTED] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to "[EMAIL PROTECTED]"
read only system file systems for jail
Hi All I am playing around on 5.2-CURRENT and am setting up a system to run various programs inside of jails, including allowing the users to ssh in, etc. Is there a fundamental problem of having the following all be read-only file systems, with the noted exceptions? /bin /sbin /libexec /lib /usr /var note: /usr/local would not be readonly and /var/tmp would not be readonly It seems to work in my test jails but I was wondering about hidden or non-obvious problems. note that users are not allowed root privilege and hence are not installing stuff into any of these hierarchies and no /usr/ports Thanks Chad
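For anyone setting up something similar: here is a minimal sketch of how such a layout could be expressed in the host's /etc/fstab using read-only null mounts, with a writable exception for /usr/local. The jail root /jails/j1 and the overlay directory are hypothetical names, not anything from this thread:

```
# Read-only null mounts of the host's system trees into one jail.
/bin       /jails/j1/bin        nullfs  ro  0  0
/sbin      /jails/j1/sbin       nullfs  ro  0  0
/libexec   /jails/j1/libexec    nullfs  ro  0  0
/lib       /jails/j1/lib        nullfs  ro  0  0
/usr       /jails/j1/usr        nullfs  ro  0  0
# Writable exceptions layered on top (per-jail storage).
/jails/j1-local  /jails/j1/usr/local  nullfs  rw  0  0
```

Note that /var is deliberately absent from the read-only list, matching Chad's correction later in the thread that the whole /var cannot be read-only.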
Re: file systems
On Tue, 30 Mar 2004 09:08:35 -0700, "John Klein" <[EMAIL PROTECTED]> said: > What does the warning "file system full" mean? That whatever you are doing has caused you to run out of room. Jud
Re: file systems
On Tue, Mar 30, 2004 at 09:08:35AM -0700, John Klein wrote: > What does the warning "file system full" mean? That you have a file system which is full up. Cheers, Matthew -- Dr Matthew J Seaman MA, D.Phil. 26 The Paddocks Savill Way PGP: http://www.infracaninophile.co.uk/pgpkey Marlow Tel: +44 1628 476614 Bucks., SL7 1TH UK
file systems
What does the warning "file system full" mean?
Re: "Live" CD Mounting file systems
On 18 Aug 2003, at 16:48, [EMAIL PROTECTED] wrote: > > Hello all, > I am trying to build a FreeBSD 5.1-RELEASE-i386 based "live" cd and I am having > trouble getting the filesystems mounted. The kernel boots but then tells me > that it cannot find the root file system. From what I have read it appears > that I need to make a memory filesystem to load part of file system on. > > Does anyone have samples on how to actually accomplish this? > Maybe http://www.freesbie.org/ could be of help? Kjell
"Live" CD Mounting file systems
Hello all, I am trying to build a FreeBSD 5.1-RELEASE-i386 based "live" CD and I am having trouble getting the filesystems mounted. The kernel boots but then tells me that it cannot find the root file system. From what I have read it appears that I need to make a memory filesystem to load part of the file system on. Does anyone have samples of how to actually accomplish this? Thanks Tom
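For reference, the install media of that era load a root image into a memory disk from the loader. A sketch of the /boot/loader.conf knobs involved, assuming an image named /boot/mfsroot (that name is an assumption here) and a kernel built with `options MD_ROOT`:

```
# Load a memory-disk root image at boot.
# (The kernel must include: options MD_ROOT)
mfsroot_load="YES"
mfsroot_type="mfs_root"
mfsroot_name="/boot/mfsroot"
```

The kernel then mounts the memory disk as / instead of looking for a root filesystem on the CD device itself.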
Vinum root file systems (was: Difference between Vinum and atacontrol RAID?)
On Sunday, 9 March 2003 at 20:15:32 -0500, Chuck Swiger wrote: > Pete wrote: > > Generally, one cannot boot from a vinum based-device, unless you are > only doing RAID-1 mirroring. You can have a Vinum root file system as long as at least one plex is concatenated from a single subdisk. > I'm familar with something called "encapsulating the root partition" > under Solaris and Veritas; it's not for the faint-of-heart. :-) I suspect this is the same thing. Greg -- When replying to this message, please copy the original recipients. If you don't, I may ignore the reply or reply to the original recipients. For more information, see http://www.lemis.com/questions.html See complete headers for address and phone numbers
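For the archives, a minimal vinum(8) configuration sketch of such a bootable root volume, i.e. one concatenated plex made from a single subdisk. The drive and partition names here are made up for illustration:

```
# vinum create file: root volume with one concat plex, one subdisk
drive d1 device /dev/ad0s1h
volume root
  plex org concat
    sd length 512m drive d1
```

Because the plex is a plain concatenation of one subdisk, the data layout on disk is linear and the boot blocks can read it, which is what makes this configuration bootable where striped or multi-subdisk layouts are not.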
Re: layered file systems ...
On 2002-12-13 13:39, "Gary W. Swearingen" <[EMAIL PROTECTED]> wrote: > "Marc G. Fournier" <[EMAIL PROTECTED]> writes: > > That's kinda what I'm wondering ... is it just that nobody has updated the > > man page since '94 ... from looking at the sources, there have been mods to > > it since then: > > If you're referring to the manpage date which gets displayed with the > manpage, you can fugidaboudit; according to the "mdoc" manpage, it's the > "date of authorship", which most read with an implied "original". I > complained to the doc people that this policy makes FreeBSD stuff look > unmaintained to casual users (at least). You can guess the response. I do remember the thread, but a quick search in my ~/mail archive didn't show anything about it. An old date in a manpage doesn't necessarily mean that its text is old. The misunderstanding you described is valid, but the .Dd tag has been used for 'major revisions' of the manpages for a long time, AFAIK. Furthermore, since the date of last modification is readily available in the source of the manpage itself, and there are tools to find and review it, there is no real reason to use the .Dd information for the same purpose. The `date' field of the manpage is currently different from the `last modification date' of the source, and merging the two would lose us some information; namely the date of the last major revision of each manpage. Both dates are easy to find, using tools that are standard in the system: $ man 1 ls | tail -1 FreeBSD 5.0 May 19, 2002 FreeBSD 5.0 $ zcat `man -w 1 ls` | ident $FreeBSD: src/bin/ls/ls.1,v 1.72 2002/11/25 13:52:57 ru Exp $ "Some users might be confused" is not a good enough reason for merging the two date fields, since losing information is bad, but answering a few questions like "are my manpages old?" once in a while is not really bad :) To Unsubscribe: send mail to [EMAIL PROTECTED] with "unsubscribe freebsd-questions" in the body of the message
Re: layered file systems ...
"Marc G. Fournier" <[EMAIL PROTECTED]> writes: > That's kinda what I'm wondering ... is it just that nobody has updated the > man page since '94 ... from looking at the sources, there have been mods to > it since then: If you're referring to the manpage date which gets displayed with the manpage, you can fugidaboudit; according to the "mdoc" manpage, it's the "date of authorship", which most read with an implied "original". I complained to the doc people that this policy makes FreeBSD stuff look unmaintained to casual users (at least). You can guess the response.
Re: layered file systems ...
On Fri, 13 Dec 2002, Marcus L. Reid wrote: > On Fri, Dec 13, 2002 at 11:32:28AM -0400, Marc G. Fournier wrote: > > On Fri, 13 Dec 2002, Marcus Reid wrote: > > > Sounds like you're looking for something like unionfs. Unfortunately, > > > it doesn't work (even in -CURRENT) and if it did I don't know if it > > > could be made to work across jails. But the manpage for mount_unionfs > > > makes for a good read anyway.. > > > > Actually, I just spent some time playing with it, and figured out what I > > was doing wrong .. haven't tested it full blown yet, but it looks like it > > works fine ... > > > > What is known to be wrong with it? The man page is dated '94, so the 'IT > > DOESN'T WORK' is a weee bit old ... > > Hmm, I didn't notice the date on the manpage, just the loud warnings > of impending doom should one attempt to use it. Is it safe to use under > some circumstances? How does one make it break? That's kinda what I'm wondering ... is it just that nobody has updated the man page since '94 ... from looking at the sources, there have been mods to it since then: total 103 drwxr-xr-x 2 root wheel 512 Dec 30 2001 . -rw-r--r-- 1 root wheel 33588 Dec 30 2001 union_subr.c -rw-r--r-- 1 root wheel 12980 Oct 28 2001 union_vfsops.c drwxr-xr-x 12 root wheel 512 Sep 28 2001 .. -rw-r--r-- 1 root wheel 5916 Dec 29 1999 union.h -rw-r--r-- 1 root wheel 49818 Dec 15 1999 union_vnops.c and it looks like 5.0 has some changes to it: > ls -lta total 106 drwxr-xr-x 2 root wheel 512 Nov 19 10:00 . -rw-r--r-- 1 root wheel 5419 Nov 19 10:00 union.h -rw-r--r-- 1 root wheel 33343 Nov 19 10:00 union_subr.c -rw-r--r-- 1 root wheel 13175 Nov 19 10:00 union_vfsops.c -rw-r--r-- 1 root wheel 47534 Nov 19 10:00 union_vnops.c drwxr-xr-x 19 root wheel 512 Jul 17 09:12 ..
Re: layered file systems ...
On Fri, Dec 13, 2002 at 11:32:28AM -0400, Marc G. Fournier wrote: > On Fri, 13 Dec 2002, Marcus Reid wrote: > > Sounds like you're looking for something like unionfs. Unfortunately, > > it doesn't work (even in -CURRENT) and if it did I don't know if it > > could be made to work across jails. But the manpage for mount_unionfs > > makes for a good read anyway.. > > Actually, I just spent some time playing with it, and figured out what I > was doing wrong .. haven't tested it full blown yet, but it looks like it > works fine ... > > What is known to be wrong with it? The man page is dated '94, so the 'IT > DOESN'T WORK' is a weee bit old ... Hmm, I didn't notice the date on the manpage, just the loud warnings of impending doom should one attempt to use it. Is it safe to use under some circumstances? How does one make it break? Marcus
Re: layered file systems ...
On Fri, 13 Dec 2002, Marcus Reid wrote: > On Fri, Dec 13, 2002 at 10:01:09AM -0400, Marc G. Fournier wrote: > > > > Morning ... > > > >I'm trying to figure out a way of sharing, as an example, /usr/X11R6 > > across several jail'd environments, but in such a way that if one of them > > installs an extra package under that directory structure, it's only visible > > to that jail, and not the others ... > > > > As a better example ... sharing /etc across several jails, but where > > each would have its own /etc/rc.conf ... > > > > Anyone have an idea of how this could be accomplished? > > Sounds like you're looking for something like unionfs. Unfortunately, > it doesn't work (even in -CURRENT) and if it did I don't know if it > could be made to work across jails. But the manpage for mount_unionfs > makes for a good read anyway.. Actually, I just spent some time playing with it, and figured out what I was doing wrong .. haven't tested it full blown yet, but it looks like it works fine ... What is known to be wrong with it? The man page is dated '94, so the 'IT DOESN'T WORK' is a weee bit old ...
Re: layered file systems ...
On Fri, Dec 13, 2002 at 10:01:09AM -0400, Marc G. Fournier wrote: > > Morning ... > >I'm trying to figure out a way of sharing, as an example, /usr/X11R6 > across several jail'd environments, but in such a way that if one of them > installs an extra package under that directory structure, it's only visible > to that jail, and not the others ... > > As a better example ... sharing /etc across several jails, but where > each would have its own /etc/rc.conf ... > > Anyone have an idea of how this could be accomplished? Sounds like you're looking for something like unionfs. Unfortunately, it doesn't work (even in -CURRENT) and if it did I don't know if it could be made to work across jails. But the manpage for mount_unionfs makes for a good read anyway.. Marcus
layered file systems ...
Morning ... I'm trying to figure out a way of sharing, as an example, /usr/X11R6 across several jail'd environments, but in such a way that if one of them installs an extra package under that directory structure, it's only visible to that jail, and not the others ... As a better example ... sharing /etc across several jails, but where each would have its own /etc/rc.conf ... Anyone have an idea of how this could be accomplished?
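A sketch of the unionfs approach discussed in the replies, for a single jail. All paths here are hypothetical, and given the warnings in this thread, test it before relying on it:

```shell
# Share the host's /usr/X11R6 read-only, then stack a per-jail writable
# layer on top of it: writes land in the jail's overlay directory while
# the shared tree underneath stays untouched.  Run as root.
mount_nullfs -o ro /usr/X11R6 /jails/j1/usr/X11R6
mount_unionfs /jails/j1-overlay/X11R6 /jails/j1/usr/X11R6
```

The same pattern would apply to /etc: share the common tree below, and keep each jail's private rc.conf in that jail's upper layer.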
Re: File-systems
On Wed, Oct 16, 2002 at 09:55:31PM +0200, Socketd wrote: > Hi again :-) > > Just some filesystem questions. > What filesystem does FreeBSD use for disks (1.44 MB and 120 MB). In > windows they use ms-dos, but my guess is that we don't use FFS for > disks? and how do I format a disk to our filesystem? You can use whatever filesystem you want. It's just a filesystem; it doesn't care about the underlying disk medium. You can create a FFS filesystem on a floppy disk the same way you create one on a hard disk (using newfs, or newfs_* for other filesystems). > Why is it that we don't have to defragment our hard disks all the time > like I did in windows? The filesystem is designed not to become fragmented under normal use. On the other hand FAT was not designed very well (at all? :). > I also heard that the filesystem for hard-disk > will be updated in FreeBSD 5.0, can you guide me to some info about > this? Check the release notes. > > br > socketd
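For the floppy case specifically, a rough sketch of the usual sequence on a FreeBSD box of this era. Device names assume the first floppy drive, this destroys whatever is on the disk, and it must be run as root:

```shell
fdformat -f 1440 /dev/fd0    # low-level format the 1.44 MB media
disklabel -r -w fd0 fd1440   # write the standard 1.44 MB disklabel
newfs /dev/fd0               # create the FFS filesystem
mount /dev/fd0 /mnt          # and mount it
```

For an MS-DOS floppy that Windows machines can also read, newfs_msdos would replace the disklabel/newfs steps.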
RE: File-systems
> -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED]] On Behalf Of Socketd > Sent: Wednesday, October 16, 2002 12:56 PM > To: [EMAIL PROTECTED] > Subject: File-systems > > > Hi again :-) > > Just some filesystem questions. > What filesystem does FreeBSD use for disks (1.44mb and 120 mb). In > windows they use ms-dos, but my guess is that we don't use FFS for > disks? and how do I format a disk to our filesystem? Just a clarification, but Windows 9x and below use FAT(16/32). DOS sits on FAT. FreeBSD I believe uses UFS for drive access, and it can be set up via /stand/sysinstall. I don't recall what the acronym means. > > Why is it what we don't have to fragment our hard-disks all the time > like I did in windows? I also heard that the filesystem for hard-disk > will be updated in FreeBSD 5.0, can you guide my to some info about > this? You are close: on Windows you need to defragment the hard drive. Fragmentation slows down a system a little bit. Fragmentation usually occurs when you are doing lots of file creation/deletion. My 2 freebsd boxen don't do this on a normal basis, so the drives don't get too fragmented. > > br > socketd
File-systems
Hi again :-) Just some filesystem questions. What filesystem does FreeBSD use for disks (1.44 MB and 120 MB)? In Windows they use ms-dos, but my guess is that we don't use FFS for disks? And how do I format a disk to our filesystem? Why is it that we don't have to defragment our hard disks all the time like I did in Windows? I also heard that the filesystem for hard disks will be updated in FreeBSD 5.0; can you guide me to some info about this? br socketd