Hi,
Did you read the following?
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Currently, pool performance can degrade when a pool is very full and
filesystems are updated frequently, such as on a busy mail server.
Under these circumstances, keep pool space under 80%
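That 80% rule is simple arithmetic over the pool's used and total sizes (which `zpool list` reports). A minimal sketch in C; the helper names and the example figures are mine, only the 80% threshold comes from the guide:

```c
/* Sketch of the guide's 80%-utilization rule. Real used/total numbers
   would come from `zpool list`; the figures here are illustrative. */

/* Integer percentage of the pool in use. */
unsigned pct_used(unsigned long long used, unsigned long long total)
{
    return (unsigned)(used * 100 / total);
}

/* Nonzero once the pool crosses the 80% point where frequent updates
   to a nearly-full pool can start to degrade performance. */
int over_threshold(unsigned long long used, unsigned long long total)
{
    return pct_used(used, total) > 80;
}
```

For example, pct_used(900, 1000) is 90 and over_threshold(900, 1000) is 1: a pool at 90% is past the recommended ceiling, which matches the slowdown reports later in this thread.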
how does one free
I suspect that the bad RAM module might have been the root cause of that
"freeing free segment" ZFS panic, perhaps. I removed the two 2G SIMMs but
left the two 512M SIMMs, and also removed the kernelbase setting, but the
zpool import still crashed the machine.
It's also registered ECC RAM; memtest86 v1.7
On 11/10/2007, Dick Davies [EMAIL PROTECTED] wrote:
No, they aren't (i.e. zoneadm clone on S10u4 doesn't use zfs snapshots).
I have a workaround I'm about to blog.
Here it is - hopefully it will be of some use:
http://number9.hellooperator.net/articles/2007/10/11/fast-zone-cloning-on-solaris-10
--
Hello all, sorry if somebody has already asked this. I was playing today with
iSCSI: I was able to create a zpool and then, via iSCSI, see it on two
other hosts. I was curious whether I could use ZFS to have it shared on those
two hosts, but apparently I was unable to do it for obvious reasons.
Has there been any solution to the problem discussed above in ZFS version 8?
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi,
I am using XFS_IOC_FSGETXATTR in an ioctl() call on Linux running an XFS file
system. I want to use a similar thing on Solaris running ZFS:

struct fsxattr fsx;
ioctl(fd, XFS_IOC_FSGETXATTR, &fsx);   /* note: the call takes a pointer */

The above call gets the additional attributes associated with files on XFS
file systems.
2007/10/12, Krzys [EMAIL PROTECTED]:
Hello all, sorry if somebody has already asked this. I was playing today with
iSCSI: I was able to create a zpool and then, via iSCSI, see it on two
other hosts. I was curious whether I could use ZFS to have it shared on those
two hosts, but apparently
Manoj Nayak wrote:
Hi,
I am using XFS_IOC_FSGETXATTR in an ioctl() call on Linux running an XFS file
system. I want to use a similar thing on Solaris running ZFS.
See openat(2).
--
Darren J Moffat
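For reference, the openat(2) route on Solaris looks roughly like this. This is a hedged sketch, not code from the thread: O_XATTR is the Solaris-specific open flag that reaches a file's hidden extended-attribute directory, the function name open_xattr_dir is mine, and the #ifdef fallback just reports failure on platforms that lack the flag:

```c
/* Sketch: opening the extended-attribute directory of a file on
   Solaris/ZFS via openat(2) with the Solaris-specific O_XATTR flag.
   The helper name open_xattr_dir is hypothetical. */
#include <fcntl.h>
#include <unistd.h>

/* Returns a descriptor for the xattr directory of `path`, or -1. */
int open_xattr_dir(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
#ifdef O_XATTR
    /* On Solaris, "." opened relative to fd with O_XATTR is the file's
       attribute directory; each attribute is a plain file inside it. */
    int xfd = openat(fd, ".", O_RDONLY | O_XATTR);
    close(fd);
    return xfd;
#else
    /* Not Solaris: no O_XATTR, so just report failure. */
    close(fd);
    return -1;
#endif
}
```

Individual attributes can then be opened by name with openat(xfd, name, O_RDONLY) and read like ordinary files; this is the ZFS-side analogue of walking XFS attributes, not an equivalent of the fsxattr flag bits themselves.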
eSX wrote:
We are testing ZFS on OpenSolaris, writing TBs of data to it. When the pool's
capacity gets close to 90%, ZFS slows down badly: ls, rm, and writes all
perform terribly. For example, an ls in a directory holding about 4000
subdirectories takes about 5-10 s!
We've been evaluating ZFS as a possible enterprise file system for our
campus. Initially, we were considering one large cluster, but it doesn't
look like that will scale to meet our needs. So, now we are thinking about
breaking our storage across multiple servers, probably three.
However, I
I was curious if I could use zfs to have it shared on those two hosts
No, that's not possible for now.
but apparently I was unable to do it for obvious reasons.
You will corrupt your data!
On my Linux Oracle RAC I was using OCFS, which works just as I need it
Yes, because OCFS is built for
roland wrote:
Are there any solutions of this kind out there?
I'm not that deep into Solaris, but IIRC there isn't a free one.
Veritas is quite popular, but you need to spend lots of bucks on it.
Maybe SAM-QFS?
We have lots of customers using shared QFS with RAC.
QFS is on the road to open
So what are the failure modes to worry about?
I'm not exactly sure what the implications of this nocache option are for my
configuration.
To take a recent example: I have an overtemp, and first one array shuts down,
then the other one.
I come in after the A/C is restored, then shut down and repower
Jim Mauro wrote:
Hi Neel - Thanks for pushing this out. I've been tripping over this for
a while.
You can instrument zfs_read() and zfs_write() to reliably track filenames:
#!/usr/sbin/dtrace -s

#pragma D option quiet

zfs_read:entry,
zfs_write:entry
{
        /* args[0] is the vnode; v_path is its cached pathname */
        printf("%s of %s\n", probefunc, stringof(args[0]->v_path));
}
Łukasz K wrote:
Now space maps, intent log, spa history are compressed.
All normal metadata (including space maps and spa history) is always
compressed. The intent log is never compressed.
Can you tell me where the space map is compressed? We specify that it should
be compressed in
So the problem with the zfs send/receive approach is: what if your network
glitches out during the transfer?
We have these once a day due to some as-yet-undiagnosed switch problem: a
chop-out of 50 seconds or so, which is enough to trip all our IPMP setups and
enough to abort SSH transfers in
Tim Thomas wrote:
Hi
this may be of interest:
http://blogs.sun.com/timthomas/entry/samba_performance_on_sun_fire
I appreciate that this is not a frightfully clever set of tests, but I
needed some throughput numbers, and the easiest way to share the
results is to blog.
It seems
Michael Kucharski wrote:
We have an x4500 set up as a single 4*(raidz2 9+2) + 2 spares pool and have
the file systems mounted over v5 krb5 NFS and accessed directly. The pool
is a 20TB pool and is using . There are three filesystems: backup, test
and home. Test has about 20 million files and
Hi all,
Forgive me if this is a dumb question: is it possible for a two-disk mirrored
zpool to be seamlessly enlarged by gradually replacing each disk with a
larger one?
Say, in a constrained desktop with space for only two internal disks, could I
just begin with two 160G disks,
Ivan Wang wrote:
Hi all,
Forgive me if this is a dumb question. Is it possible for a two-disk mirrored
zpool to be seamlessly enlarged by gradually replacing previous disk with
larger one?
Say, in a constrained desktop, only space for two internal disks is
available, could I just
Erik Trimble wrote:
Ivan Wang wrote:
Hi all,
Forgive me if this is a dumb question. Is it possible for a two-disk
mirrored zpool to be seamlessly enlarged by gradually replacing previous
disk with larger one?
Say, in a constrained desktop, only space for two internal disks is