Re: SU+J and fsck problem ?
On 10.03.2012 14:01, jb wrote:

Hi,
FreeBSD 9.0-RELEASE; no updates or recompilation.

In multi-user mode:

$ mount
/dev/ada0s2a on / (ufs, local, journaled soft-updates)

The fs was in a normal state (no known problem, clean shutdown). Booted by choice into single-user mode.

# mount
/dev/ada0s2a on / (ufs, local, read-only)

# fsck -F
** /dev/ada0s2a
USE JOURNAL? [yn] y
** SU+J recovering /dev/ada0s2a
** Reading 33554432 byte journal from inode 4.
RECOVER? [yn] y
** ...
** Processing journal entries.
WRITE CHANGES? [yn] y
** 208 journal records in 13312 bytes for 50% utilization
** Freed 0 inodes (0 dirs) 6 blocks, and 0 frags.
***** FILE SYSTEM MARKED CLEAN *****

# fsck -F
** /dev/ada0s2a
USE JOURNAL? [yn] n
** Skipping journal, falling through to full fsck
** Last Mounted on /
** Root file system
** Phase 1 - Check Blocks and Sizes
INCORRECT BLOCK COUNT I=114700 (8 should be 0)
CORRECT? [yn] n
INCORRECT BLOCK COUNT I=196081 (32 should be 8)
CORRECT? [yn] n
INCORRECT BLOCK COUNT I=474381 (32 should be 8)
CORRECT? [yn] n
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
FREE BLK COUNT(S) WRONG IN SUPERBLK
SALVAGE? [yn] n
SUMMARY INFORMATION BAD
SALVAGE? [yn] n
BLK(S) MISSING IN BIT MAPS
SALVAGE? [yn] n
266075 files, 939314 used, 1896628 free (2724 frags, 236738 blocks, 0.1% fragmentation)
***** FILE SYSTEM MARKED DIRTY *****
***** FILE SYSTEM WAS MODIFIED *****
***** PLEASE RERUN FSCK *****

# fsck -F
** /dev/ada0s2a
USE JOURNAL? [yn] y
** SU+J recovering /dev/ada0s2a
Journal timestamp does not match fs mount time
** Skipping journal, falling through to full fsck
** Last Mounted on /
** Root file system
** Phase 1 - Check Blocks and Sizes
INCORRECT BLOCK COUNT I=114700 (8 should be 0)
CORRECT? [yn] y
INCORRECT BLOCK COUNT I=196081 (32 should be 8)
CORRECT? [yn] y
INCORRECT BLOCK COUNT I=474381 (32 should be 8)
CORRECT? [yn] y
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
FREE BLK COUNT(S) WRONG IN SUPERBLK
SALVAGE? [yn] y
SUMMARY INFORMATION BAD
SALVAGE? [yn] y
BLK(S) MISSING IN BIT MAPS
SALVAGE? [yn] y
266075 files, 939314 used, 1896629 free (2725 frags, 236738 blocks, 0.1% fragmentation)
***** FILE SYSTEM MARKED CLEAN *****
***** FILE SYSTEM WAS MODIFIED *****
#

Summary:

1. # fsck -F   ## recovery done with journal
2. # fsck -F   ## no recovery; fs marked dirty; timestamp modified

   Why were incorrect block counts reported during this step if the fs was recovered and marked clean in step 1? And despite the fact that the choice of no recovery was made, the fs was marked dirty (based on the false assumption above, and the timestamp?).

3. # fsck -F   ## journal forcibly skipped

   Same question as in step 2; based on what did it accept the choice of recovery?

Note:
after step 2: 1896628 free and 2724 frags in
  266075 files, 939314 used, 1896628 free (2724 frags, 236738 blocks, ...
after step 3: 1896629 free and 2725 frags in
  266075 files, 939314 used, 1896629 free (2725 frags, 236738 blocks, ...

Questions:
- Is fsck working properly with an SU+J fs?
  Note from fsck(8), under -F ... -B ...: "It is recommended that you perform foreground fsck on your systems periodically and whenever you encounter file-system-related panics."
- Would the fs as it is after step 1, and after steps 1-3 or 1,3, be considered recovered:
  - structurally?
  - identically? does it matter?
  - integrally?

Any comments before I file a PR#?
jb

SU+J works very strangely. It can report the filesystem as OK, but after a full boot and a system crash the file system has errors... I have disabled it on all production hosts and use it only on the desktop. If I manually run fsck after a crash or an unexpected reboot, fsck _always_ finds errors unhandled by SU+J.

___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org
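The one-block/one-frag discrepancy between the step 2 and step 3 summaries can be made explicit by diffing the two Phase 5 summary lines. A minimal sh/awk sketch (the summary strings are copied verbatim from the transcripts above):

```shell
#!/bin/sh
# Compare the fsck(8) Phase 5 summaries from step 2 (no repairs accepted)
# and step 3 (full repair) to highlight the free-block/frag discrepancy.
step2="266075 files, 939314 used, 1896628 free (2724 frags, 236738 blocks, 0.1% fragmentation)"
step3="266075 files, 939314 used, 1896629 free (2725 frags, 236738 blocks, 0.1% fragmentation)"

# Extract the "free" and "frags" counts: strip parens/commas, then the
# free count is field 5 and the frag count is field 7.
extract() {
    echo "$1" | awk '{gsub(/[(),]/, " "); print $5, $7}'
}

set -- $(extract "$step2"); free2=$1; frags2=$2
set -- $(extract "$step3"); free3=$1; frags3=$2

echo "free blocks: step2=$free2 step3=$free3 delta=$((free3 - free2))"
echo "frags:       step2=$frags2 step3=$frags3 delta=$((frags3 - frags2))"
```

Both counters differ by exactly one, which is the core of jb's "identical?" question: the two repair paths converge on slightly different free-space accounting.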
[RFC, RFT] LDM support (aka Windows Dynamic Volumes)
Hi, All

I wrote a GEOM_PART_LDM class. It provides basic support for the Logical Disk Manager partitioning scheme [1]. Since LDM metadata is not documented, I used several articles found on the web and the Linux implementation as references [2].

Only generic volumes are supported. Spanned, striped, and RAID5 configurations aren't implemented. Mirrored volumes are also not shown by default, but they can be accessed when kern.geom.part.ldm.show_mirrors=1 (at your own risk). Currently only LDM on top of MBR is supported. Also, only "gpart destroy" is allowed with the LDM scheme.

You can compile the class without patching; the source code is here:
http://people.freebsd.org/~ae/LDM/

[1] http://en.wikipedia.org/wiki/Logical_Disk_Manager
[2] http://fxr.watson.org/fxr/source/fs/partitions/?v=linux-2.6

Example:

/* da1 and da2 disks without geom_part_ldm module */
# gpart show da1 da2
=>        63  104857537  da1  MBR  (50G)
          63       1985    1  ms-ldm-data  (992k)
        2048     204800    2  ms-ldm-data  [active]  (100M)
      206848  104648704    3  ms-ldm-data  (49G)
   104855552       2048       - free -  (1.0M)

=>      32  2097120  da2  MBR  (1.0G)
        32       31       - free -  (15k)
        63  2095041    1  ms-ldm-data  (1G)
   2095104     2048       - free -  (1.0M)

# kldload ./geom_part_ldm.ko
# gpart show da1 da2
=>        63  104855489  da1  LDM  (50G)
          63       1985       - free -  (992k)
        2048     204800    1  ntfs  (100M)
      206848  104648704    2  ntfs  (49G)

=>      63  2095041  da2  LDM  (1.0G)
        63       65       - free -  (32k)
       128    1024000    1  ntfs  (500M)
   1024128    1067008    2  ntfs  (521M)
   2091136       3968       - free -  (2M)

--
WBR, Andrey V. Elsukov
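For anyone wanting to try the class, the steps from the announcement boil down to the following sequence (a sketch only, to be run on a scratch system; the module path assumes you built it in the current directory, and the sysctl name is the one given above):

```shell
# Load the experimental class, built from http://people.freebsd.org/~ae/LDM/
kldload ./geom_part_ldm.ko

# Disks carrying LDM metadata should now show up with the LDM scheme
# and their NTFS volumes as partitions:
gpart show da1 da2

# Mirrored volumes are hidden by default; expose them at your own risk:
sysctl kern.geom.part.ldm.show_mirrors=1

# The only gpart verb supported on an LDM scheme is destroy, e.g.:
# gpart destroy da1
```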
Re: RFC: FUSE kernel module for the kernel...
On 08/03/2012 22:20, George Neville-Neil wrote:

Howdy, I've taken the GSoC work done on the FUSE kernel module and created a patch against HEAD, which I have now subjected to testing using tools/regression/fsx. The patch is here:
http://people.freebsd.org/~gnn/head-fuse-1.diff
I would like to commit this patch in the next few days, so please, if you care about this, take a look and get back to me.
Thanks, George

Hi,
I'm running HEAD r232383 (as of 2 March) + head-fuse-2.diff on AMD64. I've been able to use some FUSE filesystems, and I ran fsx for a while without problems on some of them (ext4fuse is read-only). The ones working were:
  sshfs
  ntfs-3g
  ext4fuse
Others, like:
  truecrypt
  gvfs (the GNOME fuse daemon)
do fail. I tried fsx with gvfs; this is what I got:

[gus@portgus ~]$ /root/deviant2/tools/regression/fsx/fsx .gvfs/multimedia\ a\ harkserver/prova
no extend on truncate! not posix!

They (truecrypt and gvfs) fail when doing setattr/getattr syscalls. truecrypt complains about not being able to find the recently created encrypted volume (a simple one like $HOME/Desktop/prova). With gvfs, nautilus (or whatever application is trying to use the file) tries to setattr the file, causing gvfs to get an I/O error. This happens with nearly all kinds of files opened through gvfs, although some are usable. With the files that are usable with gvfs, when the application closes them gvfs blocks somewhere, rendering gvfs unusable.

Those two filesystems can be very useful on the desktop; I guess PC-BSD could benefit from them. I would say there is something blocking in fuse_vnop_setattr/fuse_vnop_getattr, but I'm not sure how to debug it.

Thanks for your help.
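One way to start narrowing down where the setattr/getattr path blocks (a sketch, not a definitive procedure; it assumes the gvfs fuse daemon is already running and that truss(1) and procstat(1) are available on the -CURRENT box, and the daemon process name is a guess):

```shell
# Trace the fuse daemon's syscalls while the failing operation runs;
# -f follows forked children, -o writes the trace to a file:
truss -f -o /tmp/gvfs.truss -p "$(pgrep -n gvfs)"

# In another terminal, reproduce the failure, e.g. run fsx against a
# file under ~/.gvfs, then inspect /tmp/gvfs.truss for the syscall
# that returns EIO or never returns.

# If the daemon wedges, show where its threads sleep in the kernel:
procstat -kk "$(pgrep -n gvfs)"
```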
Re: FreeBSD 10.0-CURRENT #0 r232730: buildworld broken with CLANG?
On 03/10/12 19:09, Dimitry Andric wrote:
On 2012-03-10 17:11, Ivan Klymenko wrote:
On Sat, 10 Mar 2012 14:26:42 +0100, Dimitry Andric d...@freebsd.org writes:

...

Unfortunately, you did a -j build, which makes the actual errors difficult to find, and if you show only the last few lines, as you have done here, those errors are not visible at all. Try doing a single-threaded build instead. Save the entire log, using script(1) for example, compress it, and upload it somewhere.

Full buildworld log: http://pazzle.otdux.com.ua/logs/buildworld.log

This is, again, a multithreaded build log, so it is very difficult to see where the actual error is. Moreover, it seems to be using ccache, which will almost certainly result in problems, and non-standard CFLAGS. Try disabling all of these, deleting /usr/obj, and rebuilding.

Sorry for the noise I made. Reporting a parallel make done via make -jX happened through my own stupidity; I'm sorry. Mea culpa!

A plain make buildworld works on the six-core Intel i7-3930X, but it fails when performing make -j24 buildworld, make -j12 buildworld, and make -j6 buildworld. It fails only on this box. So the initial problem has vanished, but a parallel build isn't possible on an even more powerful machine.

It also seems that a make buildworld on a Core2Duo E8500 (2 cores/threads, 3 GHz, 8 GB RAM, P45/ICH10, SATA 3Gb/s hard drive) takes approximately 100 minutes to compile world, while the new box (Sandy-Bridge-E Core i7-3930X, 6 cores/12 threads, 3.2 GHz, 32 GB RAM, 460 GB WD Caviar Black HD attached to the SATA 6Gb/s port) takes more than 125 minutes to compile the same sources. Something is very fishy ...

Regards, Oliver
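Dimitry's suggested debugging procedure can be written out as a short sequence (a sketch; it assumes ccache and custom CFLAGS were enabled via /etc/make.conf and the source tree lives in /usr/src):

```shell
# First comment out CCACHE/CFLAGS overrides in /etc/make.conf, then
# start from a clean object tree:
rm -rf /usr/obj/*

# Capture a complete single-threaded build log with script(1), so the
# first real error is not interleaved with other jobs' output:
script /tmp/buildworld.log make -C /usr/src buildworld

# Compress the log before uploading it somewhere reviewers can fetch it:
xz /tmp/buildworld.log
```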
Re: SU+J and fsck problem ?
Please file a PR and put in as much debugging output as you can. I haven't had it fail for me on any of my test machines that panic _very frequently_. But I only have a single disk with minimal I/O; I haven't had it crash while doing lots of ongoing server-style I/O.

adrian

On 11 March 2012 00:19, Alex Keda ad...@lissyara.su wrote:
> [full fsck transcript, questions, and comments quoted from the original message -- snipped]
Re: SU+J and fsck problem ?
Adrian Chadd adrian at freebsd.org writes:
> Please file a PR and put as much debugging output as you can. ...

Because this is a case of a clean shutdown and going on purpose into single-user mode to see how these things behave, I assume I can go and try again and collect similar info. But is there any debugging method that I, as a user, can utilize to collect specific info that could aid the devs?

jb
Re: SU+J and fsck problem ?
Hi,

I think you've included enough info. I don't know whether fsck is supposed to work that way with journalling, but if it isn't, it'd be nice to get that documented.

I'd also like to see that timestamp mismatch print out the timestamps so we can see what's going on, e.g. whether your clock is somehow skewing.

Adrian
Re: [RFC, RFT] LDM support (aka Windows Dynamic Volumes)
This is awesome! Is it just read-only, or does it allow creation/destruction of LDM volumes?

Adrian

2012/3/11 Andrey V. Elsukov bu7c...@yandex.ru:
> [original announcement and gpart examples quoted -- snipped]
sudo through ssh broken on -current?
I noted something odd when executing the following:

/home/imb> ssh imb@ sudo /sbin/ipfw list
sudo: (malloc) /usr/src/lib/libc/stdlib/malloc.c:2644: Failed assertion: "(run->regs_mask[elm] & (1U << bit)) == 0"
Abort

Adding '-t' as a parameter to ssh runs without the assert.

imb
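The difference between the failing and working invocations is whether sshd allocates a pseudo-terminal for the remote command. A sketch of the two cases (the hostname is a placeholder, elided as in the original report):

```shell
# Fails with the malloc assertion: no pseudo-terminal is allocated
# when ssh is given a remote command:
ssh imb@host sudo /sbin/ipfw list

# Works: -t forces pseudo-terminal allocation even with a remote command:
ssh -t imb@host sudo /sbin/ipfw list

# A shell can check whether its stdin is attached to a terminal:
if [ -t 0 ]; then echo "have a tty"; else echo "no tty"; fi
```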
Re: sudo through ssh broken on -current?
On 03/11/12 20:07, Glen Barber wrote:
> On Sun, Mar 11, 2012 at 07:55:02PM -0400, Michael Butler wrote:
>> I noted something odd when executing the following:
>>
>> /home/imb> ssh imb@ sudo /sbin/ipfw list
>> sudo: (malloc) /usr/src/lib/libc/stdlib/malloc.c:2644: Failed assertion: "(run->regs_mask[elm] & (1U << bit)) == 0"
>> Abort
>>
>> Adding '-t' as a parameter to ssh runs without the assert.
>
> What is 'uname -a'?

client is FreeBSD 7.4-STABLE #11: Fri Mar 2 20:44:44 EST 2012
server is FreeBSD 10.0-CURRENT #23: Sun Mar 11 18:46:14 EDT 2012

Both are i386. Another interesting point: if run as part of a script, with no controlling tty, '-t' (or '-n', for that matter) produces the assertion :-(

imb
Re: sudo through ssh broken on -current?
On Sun, Mar 11, 2012 at 07:55:02PM -0400, Michael Butler wrote:
> I noted something odd when executing the following:
>
> /home/imb> ssh imb@ sudo /sbin/ipfw list
> sudo: (malloc) /usr/src/lib/libc/stdlib/malloc.c:2644: Failed assertion: "(run->regs_mask[elm] & (1U << bit)) == 0"
> Abort
>
> Adding '-t' as a parameter to ssh runs without the assert.

What is 'uname -a'?

Glen
Re: sudo through ssh broken on -current?
On 03/11/12 20:14, Michael Butler wrote:
> On 03/11/12 20:07, Glen Barber wrote:
>> What is 'uname -a'?
>
> client is FreeBSD 7.4-STABLE #11: Fri Mar 2 20:44:44 EST 2012
> server is FreeBSD 10.0-CURRENT #23: Sun Mar 11 18:46:14 EDT 2012
>
> Both are i386. Another interesting point: if run as part of a script, with no controlling tty, '-t' (or '-n', for that matter) produces the assertion :-(

The client version appears not to be relevant; it also occurs when the client runs the same SVN revision of -current.

imb
Re: growfs remove ufs/label and can't reset it with tunefs
2012/3/9 Olivier Cochard-Labbé oliv...@cochard.me

Hi all,
once growfs is run on a partition that had a UFS label, the label is removed and it is no longer possible to re-set it with tunefs. Here is how to reproduce it (tested on 8.3 and 9.0):

mdconfig -a -t malloc -s 10MB
gpart create -s mbr /dev/md0
gpart add -t freebsd -s 5MB /dev/md0
newfs -L THELABEL /dev/md0s1
glabel status | grep THELABEL
=> Label is present; now we resize the slice:
gpart resize -i 1 /dev/md0
glabel status | grep THELABEL
=> Label is still present; now we growfs the slice:
growfs /dev/md0s1
glabel status | grep THELABEL
=> The UFS label disappears! OK, I will try to re-set it:
tunefs -L THELABEL /dev/md0s1
glabel status | grep THELABEL
=> Still no label!?!

Should I create a PR about this problem?

Regards,
Olivier

Yes, it is important to record this problem in the PR system. I suspect that the problem is with growfs, as it needs to be taught not to overwrite the end of the volume where the label information is stored. (It will need to examine the volume to see whether GEOM has metadata stored at the end of the volume, such that the grow should not overwrite it.)

Matthew
Re: [RFC, RFT] LDM support (aka Windows Dynamic Volumes)
On 11.03.2012 23:31, Adrian Chadd wrote:
> This is awesome! Is it just read-only, or does it allow creation/destruction of LDM volumes?

It is read-only, but you can partially destroy the LDM metadata on a given disk. LDM keeps information about all volumes on each disk, and I guess Windows can recover destroyed metadata. It is targeted at getting access to some Windows partitions. Actually, it would be possible to make better LDM support in conjunction with GEOM_RAID, but I think we don't need it :)

--
WBR, Andrey V. Elsukov