(I don't suppose there is some hack to let me cross file-systems?)
I believe that if you lofs mount the filesystems under, say, /export you
can share that directory and have all the subdirectories appear.
We certainly do that for a single directory at a time.
On the NFS client side, this
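
For reference, a minimal sketch of what I understand the lofs approach to look like (pool and path names are made up):

    # loopback-mount each per-user filesystem under one shared tree
    mount -F lofs /tank/mail/user1 /export/mail/user1
    mount -F lofs /tank/mail/user2 /export/mail/user2
    # share only the top-level directory; the lofs mounts show up as ordinary subdirectories
    share -F nfs -o rw /export/mail
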
Hi,
thank you for your info. The netinstall would fit perfectly.
thank you for your info. The netinstall would fit perfectly.
The following text from the README prevents me from
using it for now:
Although this is not enforced yet, it is likely
the required convention for dataset name will be this.
pool-name/boot-environment-name[/directory in
kugutsum
I tried with just 4 GB in the system and had the same issue. I'll try
2 GB tomorrow and see if it is any better. (P.S. How did you determine
that was the problem in your case?)
Sorry, I wasn't monitoring this list for a while. My machine has 8 GB
of RAM and I remembered that some
Hi,
I read some articles on solarisinternals.com, such as the ZFS_Evil_Tuning_Guide at
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide . They
clearly suggest disabling cache flush:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH .
It seems to be
Anton B. Rang wrote:
Given that it will be some time before NFSv4 support, let alone NFSv4 support
for mount point crossing, in most client operating systems ... what obstacles
are in the way of constructing an NFSv3 server which would 'do the right
thing' transparently to clients so long
Jorgen Lundman wrote:
*** NFS Option
Start:
Since we need a quota per user, I need to create a file system of
size=$quota for each user.
But NFS will not let you cross mount points/file systems, so mounting just
/export/mail/ means I will not see any directory below it.
NFSv4 will
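
A minimal sketch of the per-user layout being described, assuming a pool named tank (names are hypothetical):

    # one filesystem per user, capped at the per-user quota
    zfs create tank/mail
    zfs create tank/mail/user1
    zfs set quota=1g tank/mail/user1
    zfs create tank/mail/user2
    zfs set quota=1g tank/mail/user2
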
[EMAIL PROTECTED] said:
They clearly suggest disabling cache flush:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH .
It seems to be the only serious article on the net about this subject.
Could someone here comment on this tuning suggestion? My customer is running
Customer has a Thumper running:
SunOS x4501 5.10 Generic_120012-14 i86pc i386 i86pc
where running 'zpool detach' on c6t7d0 to detach a mirror causes the zpool
command to hang with the following kernel stack trace:
PC: _resume_from_idle+0xf8
CMD: zpool detach disk1 c6t7d0
stack pointer for thread
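
For what it's worth, a sketch of how such a stack is usually pulled from the live kernel with mdb (the PID below is a placeholder for the hung zpool process):

    echo "0t12345::pid2proc | ::walk thread | ::findstack -v" | mdb -k
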
On Nov 27, 2007, at 1:36 AM, Anton B. Rang wrote:
Given that it will be some time before NFSv4 support, let alone
NFSv4 support for mount point crossing, in most client operating
systems ... what obstacles are in the way of constructing an NFSv3
server which would 'do the right thing'
The info in that tuning guide depends on what Solaris version you are working
with. Last I checked it was not current.
I use Solaris 10u4 and have zfs_nocacheflush set. I haven't played with using
alternate disks for the ZIL yet; not really sure what that does to my HA model. I
have mirrored LUNs
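
For reference, the tunable being discussed is set in /etc/system and takes effect after a reboot; per the Evil Tuning Guide it is only sensible when the array cache is nonvolatile (battery-backed):

    * /etc/system
    set zfs:zfs_nocacheflush = 1
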
On Nov 27, 2007, at 9:48 AM, Richard Elling wrote:
Anton B. Rang wrote:
Given that it will be some time before NFSv4 support, let alone
NFSv4 support for mount point crossing, in most client operating
systems ... what obstacles are in the way of constructing an NFSv3
server which
[EMAIL PROTECTED] said:
Interesting. The HDS folks I talked to said the array no-ops the cache sync.
Which models were you using? Midrange only, right?
HDS modular product -- ours is 9520V, which was the smallest available.
It has a mix of FC and SATA drives (yes, really).
Check the HDS
Thanks for your answers so far.
Yes, the pools are properly dealt with on a reboot.
What makes you think this wouldn't be the case? Do you have a specific
case where you believe it has failed?
Well, I have to admit I only played around a little bit with zfs-fuse so far.
And I don't
I have searched high and low and cannot find the answer. I read about how ZFS
uses a Device ID for identification, usually provided by the firmware of the
device. So if a controller presents an (array) LUN with a unique device ID, what
would happen if I onlined the pool, and suddenly that LUN was
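
One way to see what ZFS recorded for a device is to dump the vdev label, which shows the stored devid and path alongside the pool and vdev GUIDs (device name is hypothetical):

    zdb -l /dev/dsk/c6t7d0s0
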
What do *you* mean by flexible in this context ?
ZFS Crypto is by design extensible to new crypto algorithms and modes.
It is also designed to allow for multiple different key management
strategies and implementations, and in fact is explicitly a multiple
phase project for this reason.
Hello;
Which version of Lustre can run both the server and the client on the same machine?
regards
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax
J.P.King,
Richard Elling,
Robert Thurlow,
Marion Hakanson,
Thank you for replying. My apologies if I was a bit extreme; the local
Sun people do not speak English, it is my fault for not speaking
sufficient Japanese, and the Sunsolve forums appear not to be the place
to post questions to
I believe that if you lofs mount the filesystems under, say, /export you
can share that directory and have all the subdirectories appear.
Wow, that's a neat idea, and crazy at the same time. But mknod's minor
value can be 0-262143, so it probably would be doable with some loss of
memory
NFSv4 will let the client cross mount points transparently;
this is implemented in Nevada build 77, and in Linux and AIX.
Looks like I only have build 70b. I wonder what the chances are of another
release coming out in the two-month trial period.
Does only the x4500 need to run Nevada 77, or would
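
For the client side, a minimal sketch of forcing an NFSv4 mount (hostname and paths are made up):

    # Solaris client
    mount -F nfs -o vers=4 thumper:/export/mail /mnt
    # Linux client
    mount -t nfs4 thumper:/export/mail /mnt
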
The question is, if you *temporarily* migrate your zones to UFS to install the
big bad S10u4 patch, and migrate back to ZFS afterwards, will patches work
after that? A better way to say that is, have we resolved this patch problem
with zoneroot on ZFS for S10u4?
Tommy
Marion Hakanson wrote:
The downside is that you do lose some of the flexibility of ZFS, mainly
that snapshots are now done on whole UFS filesystems (zvols), and access
to snapshots is not available via the .zfs/snapshot/ path. ZFS ACLs at
the individual file level are also not possible
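
A minimal sketch of the UFS-on-zvol arrangement being described, with made-up names:

    # carve a fixed-size zvol, put UFS on it, and mount it where the user data lives
    zfs create -V 10g tank/mailvol1
    newfs /dev/zvol/rdsk/tank/mailvol1
    mount /dev/zvol/dsk/tank/mailvol1 /export/mail/user1
    # snapshots now operate on the whole UFS filesystem inside the zvol
    zfs snapshot tank/mailvol1@nightly
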
/export/www/com/e/p/example/ for example.com. The quota is only at the
example/ level. But the complicated issue is that it could be any depth.
Can automount even do that? Guess my next stop is automount documentation.
I should have played first, then sent the emails. If I use the /net
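
For what it's worth, a plain indirect automount map only keys on a single path component, which is why the arbitrary-depth layout is awkward; a minimal sketch with a made-up server name:

    # /etc/auto_master
    /export/www    auto_www
    # /etc/auto_www
    *    nfsserver:/export/www/&
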
Nicolas Dorfsman wrote:
On Nov 27, 2007, at 4:17 PM, Torrey McMahon wrote:
According to the array vendor the 99xx arrays no-op the cache flush
command. No need to set the /etc/system flag.
http://blogs.sun.com/torrey/entry/zfs_and_99xx_storage_arrays
Perfect!
Thanks Torrey.
Jorgen Lundman wrote:
The software we use is the usual: Postfix with Dovecot, Apache with
double-hash, HTTPS with TLS/SNI, LDAP for provisioning, pure-ftpd, DLZ,
FreeRADIUS. No local config changes are needed for any setup, just LDAP and
the NetApp.
I meant your client operating systems, actually.
Jorgen Lundman wrote:
NFSv4 will let the client cross mount points transparently;
this is implemented in Nevada build 77, and in Linux and AIX.
Looks like I only have build 70b. I wonder what the chances are of another
release coming out in the two-month trial period.
Does only the x4500 need to
On Nov 28, 2007 12:58 AM, Justin Tuttle [EMAIL PROTECTED] wrote:
I have searched high and low and cannot find the answer. I read about how ZFS
uses a Device ID for identification, usually provided by the firmware of the
device. So if a controller presents an (array) LUN with a unique device ID,