Edward Ned Harvey solar...@nedharvey.com writes:
There are legitimate specific reasons to use separate filesystems
in some circumstances. But if you can't name one reason why it's
better ... then it's not better for you.
Having separate filesystems per user lets you create user-specific
Hi,
I would really appreciate it if any of you could help me get the modified mdb and zdb
(in any version of OpenSolaris) for digital forensic research purposes.
Thank you.
Jonathan Cifuentes
Right now I have a machine with a mirrored boot setup. The SAS drives are 43 GB
and the root pool is getting full.
I do a backup of the pool nightly, so I feel confident that I don't need to
mirror the drive and can break the mirror and expand the pool with the detached
drive.
I understand
On Wed, 28 Jul 2010, Gary Gendel wrote:
Right now I have a machine with a mirrored boot setup. The SAS drives are 43 GB
and the root pool is getting full.
I do a backup of the pool nightly, so I feel confident that I don't need to
mirror the drive and can break the mirror and expand the pool
From: Richard Elling [mailto:richard.ell...@gmail.com]
http://arc.opensolaris.org/caselog/PSARC/2010/193/mail
Agreed. This is a better solution, because some configurable parameters
are hidden from 'zfs get all'.
Forgive me for not seeing it ... That link is extremely dense, and 34 pages
On Jul 27, 2010, at 10:37 PM, Jack Kielsmeier wrote:
The only other zfs pool in my system is a mirrored rpool (two 500 GB disks).
This is for my own personal use, so it's not like the data is mission
critical in some sort of production environment.
The advantage I can see with going with
The performance will be similar, but in the non-degraded case, the raidz3
will perform better for small, random reads.
Why is this? The two will have the same number of data drives.
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
Hi Gary,
If your root pool is getting full, you can replace the root pool
disk with a larger disk. My recommendation is to attach the replacement
disk, let the replacement disk resilver, install the boot blocks, and
then detach the smaller disk. The system will see the expanded space
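A sketch of those steps (device names here are placeholders, not from the original post; on SPARC, installboot is used instead of installgrub):

```shell
zpool attach rpool c1t0d0s0 c1t1d0s0    # attach the larger replacement disk
zpool status rpool                      # wait here until resilver completes
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
zpool detach rpool c1t0d0s0             # drop the smaller disk
zpool set autoexpand=on rpool           # let the pool grow into the new space
```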
On Jul 28, 2010, at 8:34 AM, Roy Sigurd Karlsbakk wrote:
The performance will be similar, but in the non-degraded case, the raidz3
will perform better for small, random reads.
Why is this? The two will have the same number of data drives.
The simple small, random read model for
Hi all,
I have two servers in the lab running snv_134, and while doing some
experiments with iscsi volumes and replication I ran into a road-block
that I would like to ask your help with.
So on server A I have a LUN created in COMSTAR without any views attached
to it, and I can zfs send it to server B
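For reference, a minimal sketch of that send (the names 'tank/lun0' and 'serverB' are assumptions, not from the original post):

```shell
# On server A: snapshot the backing zvol and stream it to server B
zfs snapshot tank/lun0@rep1
zfs send tank/lun0@rep1 | ssh serverB zfs receive -F tank/lun0
# On server B, the logical unit and its views must be re-created, e.g.:
# stmfadm create-lu /dev/zvol/rdsk/tank/lun0
```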
Thanks,
Looks like I'll be using raidz3.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I am trying to give a general user permissions to create zfs filesystems in the
rpool.
zpool set delegation=on rpool
zfs allow user create rpool
both run without any issues.
zfs allow rpool reports the user does have create permissions.
zfs create rpool/test
cannot create rpool/test:
Hi Mark,
A couple of things are causing this to fail:
1. The user needs permissions to the underlying mount point.
2. The user needs both create and mount permissions to create ZFS datasets.
See the syntax below, which might vary depending on your Solaris
release.
Thanks,
Cindy
# chmod
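Putting the two requirements together, a sketch of the full sequence (the user name 'mike' and the ACL shown are assumptions; syntax may vary by Solaris release):

```shell
zpool set delegation=on rpool                       # on by default in recent builds
zfs allow mike create,mount rpool                   # both permissions are required
chmod A+user:mike:add_subdirectory:fd:allow /rpool  # NFSv4 ACL on the mount point
# Then, as the delegated user:
zfs create rpool/test
```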
Thanks. Adding mount did allow me to create it, but it does not allow me to create
the mountpoint.
Mike,
Did you also give the user permissions to the underlying mount point:
# chmod A+user:user-name:add_subdirectory:fd:allow /rpool
If so, please let me see the syntax and error messages.
Thanks,
Cindy
On 07/28/10 12:23, Mike DeMarco wrote:
Thanks adding mount did allow me to create it
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of sol
Sent: Wednesday, July 28, 2010 3:12 PM
To: Richard Elling; Gregory Gee
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Tips for ZFS tuning for NFS
Hi
Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror. Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks? However the
zpool status shows checksum errors not I/O errors and I'm not sure what
On Jul 28, 2010, at 12:41 PM, sol wrote:
Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror. Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks? However the
zpool status shows checksum
On 07/29/10 07:41 AM, sol wrote:
Hi
Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror. Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks? However the
zpool status shows checksum errors
On Jul 28, 2010, at 12:11 PM, sol wrote:
Richard Elling wrote:
Gregory Gee wrote:
I am using OpenSolaris to host VM images over NFS for XenServer. I'm looking
for tips on what parameters can be set to help optimize my ZFS pool that holds
my VM images.
There is nothing special about
Hi Sol,
What kind of disks?
You should be able to use the fmdump -eV command to identify when the
checksum errors occurred.
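For example (output format and event class names vary by release):

```shell
fmdump -e     # one line per error event, with a timestamp
fmdump -eV    # full detail per event: class (e.g. ereport.fs.zfs.checksum),
              # pool, vdev guid, and device path
```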
Thanks,
Cindy
On 07/28/10 13:41, sol wrote:
Hi
Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror. Doesn't it
From: Darren J Moffat [mailto:darr...@opensolaris.org]
It basically says that 'zfs send' gets a new '-b' option so send back
properties, and 'zfs recv' gets a '-o' and '-x' option to allow
explicit set/ignore of properties in the stream. It also adds a '-r'
option for 'zfs set'.
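As a sketch of the proposed syntax (this is from a PSARC case, so the option letters may change before integration; dataset names are placeholders):

```shell
# send properties with the stream; on receive, override one and ignore another
zfs send -b pool/fs@snap | zfs receive -o compression=on -x sharenfs pool2/fs
# proposed recursive property set
zfs set -r atime=off pool/fs
```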
If/when
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
This can happen if there is a failure in a common system component
during the write (e.g. main memory, HBA, PCI bus, CPU, bridges, etc.)
I bet that's the cause. Because as sol
fyi
--
Robert Milkowski
http://milek.blogspot.com
Original Message
Subject:zpool import despite missing log [PSARC/2010/292 Self Review]
Date: Mon, 26 Jul 2010 08:38:22 -0600
From: Tim Haley tim.ha...@oracle.com
To: psarc-...@sun.com
CC: zfs-t...@sun.com
I appear to be getting between 2-9 MB/s reads from individual disks in my zpool,
as shown in iostat -v.
I expect upwards of 100 MB/s per disk, or at least aggregate performance on par
with the number of disks that I have.
My configuration is as follows:
Two Quad-core 5520 processors
48GB ECC/REG
How many iops per spindle are you getting?
A rule of thumb I use is to expect no more than 125 IOPS per spindle for
regular HDDs.
SSDs are a different story of course. :)
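That rule of thumb also explains why per-disk MB/s looks low under random I/O. A back-of-the-envelope calculation (the 8 KiB read size is an assumption):

```python
# Random-read throughput ceiling = IOPS x I/O size
iops = 125           # rule-of-thumb random IOPS for one spindle
io_size = 8 * 1024   # bytes per small random read (assumed)
mb_per_s = iops * io_size / 1e6
print(mb_per_s)      # 1.024 -- about 1 MB/s per disk, not 100
```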
Hi r2ch
The operations column shows about 370 read operations per spindle
(between 400 and 900 for writes).
How should I be measuring iops?
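One way to sample per-device operation rates (the pool name 'tank' is a placeholder):

```shell
iostat -xn 5            # r/s and w/s per device; skip the first, since-boot sample
zpool iostat -v tank 5  # per-vdev read/write operations for one pool
```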
+1
On Wed, Jul 28, 2010 at 6:11 PM, Robert Milkowski mi...@task.gda.pl wrote:
fyi
--
Robert Milkowski
http://milek.blogspot.com
Original Message
Subject: zpool import despite missing log [PSARC/2010/292 Self Review]
Date: Mon, 26 Jul 2010 08:38:22 -0600
From: Tim