You're confusing lofi and lofs, I think. Have a look at man lofs.
Now all _I_ would like is translucent options to that and I'd solve one
of my major headaches.
That I am. I have never used lofs, looks interesting. Thanks.
--
Jorgen Lundman | [EMAIL PROTECTED]
Unix
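A rough sketch of the difference, with made-up paths, in case it helps anyone else who mixes the two up: lofi turns a file into a block device, while lofs loopback-mounts an existing directory at a second place in the namespace.

# lofiadm -a /export/images/sol-dvd.iso
/dev/lofi/1
# mount -F hsfs -o ro /dev/lofi/1 /mnt/dvd
# mount -F lofs /export/data /mnt/view

The first pair is lofi: the file becomes /dev/lofi/1 and is mounted as a filesystem image. The last line is lofs: the same directory tree simply appears in two places.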
Jorgen Lundman wrote:
You're confusing lofi and lofs, I think. Have a look at man lofs.
Now all _I_ would like is translucent options to that and I'd solve one
of my major headaches.
I cannot export lofs over NFS. It just gives invalid path, and:
I cannot export lofs over NFS. It just gives invalid path,
Tell that to our mirror server.
-bash-3.00$ /sbin/mount -p | grep linux
/data/linux - /linux lofs - no ro
/data/linux - /export/ftp/pub/linux lofs - no ro
-bash-3.00$ grep linux /etc/dfs/sharetab
/linux - nfs ro Linux
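For completeness, a setup like that is usually made persistent with an lofs entry in /etc/vfstab plus a share line in /etc/dfs/dfstab; a sketch using the paths from the example above:

# grep linux /etc/vfstab
/data/linux - /linux lofs - yes ro
# grep linux /etc/dfs/dfstab
share -F nfs -o ro -d "Linux" /linux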
Ah, it's a somewhat misleading error message:
bash-3.00# mount -F lofs /zpool1/test /export/test
bash-3.00# share -F nfs -o rw,anon=0 /export/test
Could not share: /export/test: invalid path
bash-3.00# umount /export/test
bash-3.00# zfs set sharenfs=off zpool1/test
bash-3.00# mount -F lofs
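If I'm reading that right, share refuses the path while the backing dataset is still being shared by ZFS itself; the order that seems to work (same dataset and paths as above, listing output illustrative) is roughly:

bash-3.00# zfs set sharenfs=off zpool1/test
bash-3.00# mount -F lofs /zpool1/test /export/test
bash-3.00# share -F nfs -o rw,anon=0 /export/test
bash-3.00# share
-               /export/test   rw,anon=0   ""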
On Wed, Nov 28, 2007 at 05:40:57PM +0900, Jorgen Lundman wrote:
Ah, it's a somewhat misleading error message:
bash-3.00# mount -F lofs /zpool1/test /export/test
bash-3.00# share -F nfs -o rw,anon=0 /export/test
Could not share: /export/test: invalid path
bash-3.00# umount /export/test
Jorgen Lundman wrote:
SXCE is coming out _very_ soon. But all of your clients need
to support NFSv4 mount point crossing to make full use of it,
unless the automounter works out well enough.
Ahh, that's a shame. The automounter works sufficiently at the moment, but
it does not work well
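For what it's worth, on the NFSv4 mount-crossing point above: a quick way to see what a client actually negotiated is nfsstat -m after the mount; a sketch (the mount itself is hypothetical, output abbreviated):

# mount -o vers=4 x4500:/export /mnt
# nfsstat -m /mnt
/mnt from x4500:/export
 Flags: vers=4,proto=tcp,sec=sys,hard,intr,...

If vers comes back as 3, the per-filesystem ZFS mountpoints under the export won't be crossed automatically and you're back to the automounter.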
I made the mistake of running umount -f /net/x4500/export/mail, even when autofs
was disabled, and now all I get is I/O errors.
Is it always this sensitive?
umount -f is a power tool with no guard. If you had local
apps using the filesystem, they would have seen I/O errors
as well. The
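A gentler sequence than -f, for next time, is to see who is holding the mount first; a sketch using the path from the message above:

# fuser -cu /net/x4500/export/mail
# umount /net/x4500/export/mail

fuser -c lists the processes with files open under the mount point; once they are gone, a plain umount succeeds without invalidating anyone's file handles.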
[EMAIL PROTECTED] wrote:
I made the mistake of running umount -f /net/x4500/export/mail, even when autofs
was disabled, and now all I get is I/O errors.
Is it always this sensitive?
umount -f is a power tool with no guard. If you had local
apps using the filesystem, they would have seen I/O
1/ Anchor VNICs, the equivalent of Linux dummy interfaces: we need more
flexibility in the way we set up Xen networking. What is sad is that
the code is already available in the unreleased Crossbow bits... but
it won't appear in Nevada until Q1 2008 :(
This is a real blocker for me as my ISP
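For reference, the anchor-VNIC style configuration in the Crossbow bits looks roughly like the sketch below; the link names are invented and the syntax may still change before it integrates into Nevada:

# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic0
# dladm create-vnic -l stub0 vnic1

The etherstub plays the role of the Linux dummy interface: a virtual switch with no physical NIC behind it, to which the domU VNICs can then be anchored.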
K wrote:
4/ Poor exploit mitigation under Solaris. In comparison, OpenBSD,
grsec Linux and Windows >= XP SP2 have really good exploit
mitigation. It is a shame because Solaris offered a non-exec stack
before nearly everyone else... but it stopped there... no heap
protection, etc...
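For the record, the non-exec stack mentioned there is still just a pair of /etc/system tunables; this is from memory, so double-check against your release:

set noexec_user_stack=1
set noexec_user_stack_log=1

The first makes user stacks non-executable by default; the second logs attempts to execute code on the stack.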
K wrote:
1/ Anchor VNICs, the equivalent of Linux dummy interfaces: we need more
flexibility in the way we set up Xen networking. What is sad is that
the code is already available in the unreleased Crossbow bits... but
it won't appear in Nevada until Q1 2008 :(
This is a real blocker
OK, cc'ing zfs-discuss was probably a mistake.
However, I don't like the way you troll me and single out point 4, while the
other 3 points are directly related to Xen.
Point 1: I can't migrate a Xen domU from a Linux dom0 because it is
impossible to keep the previous network
Hello all,
I am posting the proposal in the subject to this community for comment,
hoping to count on eventual sponsorship.
I already discussed this idea with the security community and the
outcome is that such a project would be an interesting thing to have.
Please find more info on the discussion
Hi there,
Last week I installed 3 Western Digital 250GB disks on ZFS.
The whole time, I was able to put files on it, move, copy... everything worked
fine, cool!
But I needed to reboot the computer, so I did, but then, when the BIOS
detects all the disks, it stops after that. I can't go in
Is ZFS causing all this? Does it write something at the beginning of the
drive that can cause this behavior?
Well, cause is not the correct term here.
We've found that quite a few motherboards have buggy BIOSes; as soon as the
BIOS sees a drive, it tries to read some data from it and in case
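One workaround that tends to come up for this class of BIOS is to avoid the EFI label that ZFS writes when given a whole disk, and hand it a slice on an SMI-labelled disk instead; a sketch with a made-up disk name (note you give up ZFS's whole-disk write-cache handling):

# format -e c1t1d0
(choose an SMI label and carve out a slice 0)
# zpool create tank c1t1d0s0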
IHAC (I have a customer) who would like to understand the following:
We've upgraded a box to sol10-u4 and created a ZFS pool. We notice that,
running zpool iostat 1 or iostat -xnz 1, the data gets written to disk
every 5 seconds, even though the data is being copied to the filesystem
continuously.
This behavior is
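For what it's worth, those 5-second bursts line up with ZFS transaction group syncs, and you can watch them directly with DTrace; a sketch against the spa_sync entry probe (sample output is illustrative):

# dtrace -qn 'fbt::spa_sync:entry { printf("%Y  syncing txg %d\n", walltimestamp, arg1); }'
2007 Nov 29 12:00:05  syncing txg 1234
2007 Nov 29 12:00:10  syncing txg 1235

Writes are staged in memory and committed to disk as a group roughly every 5 seconds, which is why zpool iostat shows them arriving in bursts rather than continuously.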
Ajay Kumar wrote:
IHAC (I have a customer) who would like to understand the following:
We've upgraded a box to sol10-u4 and created a ZFS pool. We notice that,
running zpool iostat 1 or iostat -xnz 1, the data gets written to disk
every 5 seconds, even though the data is being copied to the filesystem
Thanks for the response. I don't know enough about the semantics of the device
IDs; I hope it does not change, and that maybe ZFS will see that the LUN has
grown. Seeing that you can use a filesystem or file as a vdev (and can't
they change sizes?), you'd figure it could do the same with
Hi all,
Did you know that the Solaris ZFS Administration Guide is open source? Download
the latest XML source files and HTML here:
http://dlc.sun.com/osol/docs/downloads/current/
The ZFSADMIN directory contains the ZFS Administration Guide.
Thanks,
Michelle Olson
OpenSolaris Documentation
It is now solved! Thanks to Casper and billm.
This is the mail I received from Casper; I don't know why I didn't see it
here in the forum, but...
Is ZFS causing all this? Does it write something at the beginning of the
drive that can cause this behavior?
Well, cause is not the correct term
I am still having issues with lofs even.
I have created 2329 home directories, each with a mail directory
inside it.
zfs original: /export/mail/
lofs mount: /export/test/
# find /export/test/mail/m/e/0/0/ -name mail | wc -l
2327
NFS client: mount /export/test/
# ls -l
I'm getting ready to test a Thumper (500GB drives / 16GB RAM) as a backup store for
small (avg 2KB) encrypted text files. I'm considering a zpool of 7 x 5+1 raidz1
vdevs to maximize space and provide some level of redundancy, carved into about
10 ZFS filesystems. Since the files are encrypted,
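A sketch of what that layout might look like at creation time; the pool name and device names below are made up, and the real controller/target numbers on a Thumper will differ:

# zpool create backup \
    raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    raidz1 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
    raidz1 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
    raidz1 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
    raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
    raidz1 c3t6d0 c3t7d0 c4t0d0 c4t1d0 c4t2d0 c4t3d0 \
    raidz1 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 c5t1d0
# zfs create backup/store01

Each raidz1 group gives 5 disks of usable space plus one of parity, so 7 of them use 42 of the 48 bays and each group survives a single disk failure.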
Found them. They are all under the second-layer filesystem.
# zfs set mountpoint=/mnt zpool1/mail/m/e/0/0/zfs_without_quota
# cd /export/mail/m/e/0/0/zfs_without_quota
# ls -l
drwxr-xr-x 2 root root 2 Nov 29 12:28 foo
drwxr-xr-x 2 root root 2 Nov 29 16:04 roger
Point of clarification: I meant recordsize. I'm guessing (from what I've read)
that the blocksize is auto-tuned.
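On the recordsize point: it is a per-dataset property you can set at any time, though it only affects files written afterwards, and ZFS already uses smaller blocks for files smaller than the recordsize; whether lowering it helps a ~2KB-file workload is something to measure, so treat this as purely illustrative (dataset name taken from the earlier sketch):

# zfs set recordsize=8k backup/store01
# zfs get recordsize backup/store01
NAME            PROPERTY    VALUE    SOURCE
backup/store01  recordsize  8K       local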