The NFS client that we're using always uses O_SYNC, which is why it was
critical for us to use the DDRdrive X1 as the ZIL. My apologies, I wasn't
clear about the full system we're using. It is:
OpenSolaris SNV_134
Motherboard: SuperMicro X8DAH
RAM: 72GB
CPU: Dual Intel 5503 @ 2.0GHz
ZIL: DDRdrive
On 7/10/10 03:46 PM, Ramesh Babu wrote:
I am trying to create a zpool using a single Veritas volume. The host goes
down as soon as I issue the zpool create command. It looks like the command is
crashing and bringing the host down. Please let me know what the issue might
be. Below is the command used,
Hi Edward,
well, that was exactly my point when I raised this question. If zfs send is
able to identify corrupted files while it transfers a snapshot, why shouldn't
scrub be able to do the same?
zfs send quit with an I/O error, and zpool status -v showed me the file that
indeed had problems.
On 10/7/10 06:22 PM, Stephan Budach wrote:
Hi Edward,
these are interesting points. I have considered a couple of them, when I
started playing around with ZFS.
I am not sure whether I disagree with all of your points, but I conducted a
couple of tests, where I configured my raids as JBODs
Hi Guys,
We are running a Solaris 10 production server being used for backup
services within our DC. We have 8 500GB drives in a zpool and we wish to
swap them out 1 by 1 for 1TB drives.
I would like to know if it is viable to add larger disks to a zfs pool to grow
the pool size and then remove
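One-at-a-time replacement is the usual approach. A sketch, with hypothetical pool and device names (the pool only grows once every disk in the vdev has been replaced):

```shell
# Hypothetical names: pool "tank", old disk c0t0d0, new 1TB disk c0t8d0.
zpool set autoexpand=on tank       # if your release supports the property;
                                   # otherwise export/import after the last swap
zpool replace tank c0t0d0 c0t8d0   # swap one 500GB disk for a 1TB disk
zpool status tank                  # wait for resilvering to finish before
                                   # replacing the next disk
```

Repeat for each of the 8 drives; the extra capacity appears only after the last member of the vdev has been replaced.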
On 07/10/2010 11:22, Kevin Walker wrote:
We are running a Solaris 10 production server being used for backup
services within our DC. We have 8 500GB drives in a zpool and we wish to
swap them out 1 by 1 for 1TB drives.
I would like to know if it is viable to add larger disks to a zfs pool to
Ian,
I know - and I will address this by upgrading the vdevs to mirrors, but
there are a lot of other SPOFs around. So I started out by reducing the most
common failures, and I have found those to be the disk drives, not the chassis.
The beauty is: one can work their way up until the point of
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
I
conducted a couple of tests, where I configured my raids as JBODs and
mapped each drive out as a separate LUN, and I couldn't notice a
difference in performance in any way.
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
On Wed, Oct 6 at 22:04, Edward Ned Harvey wrote:
* Because ZFS automatically buffers writes in ram in order to
aggregate as previously mentioned, the hardware WB cache is not
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Kevin Walker
We are running a Solaris 10 production server being used for backup
services within our DC. We have 8 500GB drives in a zpool and we wish
to swap them out 1 by 1 for 1TB
In message 201008112022.o7bkmc2j028...@elvis.arl.psu.edu, John D Groenveld writes:
I'm stumbling over BugID 6961707 on build 134.
I see the bug has been stomped in build 150. Awesome!
URL:http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6961707
In which build did it first arrive?
I would not discount the performance issue...
Depending on your workload, you might find that performance increases
with ZFS on your hardware RAID in JBOD mode.
Cindy
On 10/07/10 06:26, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
On 7-Oct-10, at 1:22 AM, Stephan Budach wrote:
Hi Edward,
these are interesting points. I have considered a couple of them,
when I started playing around with ZFS.
I am not sure whether I disagree with all of your points, but I
conducted a couple of tests, where I configured my raids as
I have a 20TB pool on a mount point that is made up of 42 disks from an EMC
SAN. We were running out of space, down to 40GB left (loading 8GB/day), and
have not received disks for our SAN. Using df -h results in:
Filesystem size used avail capacity Mounted on
pool1
any snapshots?
*zfs list -t snapshot*
..Remco
On 10/7/10 7:24 PM, Jim Sloey wrote:
I have a 20TB pool on a mount point that is made up of 42 disks from an EMC
SAN. We were running out of space, down to 40GB left (loading 8GB/day), and
have not received disks for our SAN. Using df -h
Forgive me, but isn't this incorrect:
---
mv /pool1/000 /pool1/000d
---
rm -rf /pool1/000
Shouldn't that last line be
rm -rf /pool1/000d
??
On 8 October 2010 04:32, Remco Lengers re...@lengers.com wrote:
any snapshots?
*zfs list -t snapshot*
..Remco
On 10/7/10 7:24 PM,
Yes, you're correct. There was a typo when I copied to the forum.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
NAME                     USED  AVAIL  REFER  MOUNTPOINT
                            -  19.5T      -
po...@20101006-10:20:00  3.16M      -  19.5T  -
po...@20101006-13:20:00   177M      -  19.5T  -
po...@20101006-16:20:00   396M      -  19.5T  -
po...@20101006-22:20:00   282M      -  19.5T  -
po...@20101007-10:20:00   187M      -  19.5T  -
One of us found the following:
The presence of snapshots can cause some unexpected behavior when you attempt
to free space. Typically, given appropriate permissions, you can remove a file
from a full file system, and this action results in more space becoming
available in the file system.
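In practice that means rm alone may not help. A sketch, with hypothetical pool and snapshot names:

```shell
# Blocks stay allocated as long as any snapshot references them, so
# removing files from a full filesystem may free little or no space.
rm /pool1/bigfile               # space may NOT come back if a snapshot
                                # still references the file's blocks
zfs list -t snapshot -r pool1   # see which snapshots are holding space
zfs destroy pool1@old-snap      # destroying old snapshots (hypothetical
                                # name) is what actually frees the blocks
```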
Figured it out - it was the NFS client. I used snoop and then some dtrace magic
to prove that the client (which was using O_SYNC) was sending very bursty
requests to the system. I tried a number of other NFS clients with O_SYNC as
well and got excellent performance when they were configured
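The poster's exact commands aren't shown; as an illustration only (the interface, client name, and probe are assumptions), the burstiness could be observed with something like:

```shell
# Capture NFS traffic from one client to inspect request timing.
snoop -d e1000g0 -o /tmp/nfs.cap host client1 and rpc nfs
# Count ZIL commits (synchronous writes) per second as they arrive.
dtrace -n 'fbt::zil_commit:entry { @c = count(); }
           tick-1s { printa(@c); clear(@c); }'
```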
Hi Cindy,
Thanks for your mail. I have some further queries here based on your answer.
Once zpool split creates the new pool (mypool_snap in the example below), can I
access mypool_snap just by importing it on the same host, Host1?
What is the current access method of the newly created mypool_snap
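If I understand zpool split correctly, yes. A sketch using the names from the example (the source pool must be made of mirrors):

```shell
zpool split mypool mypool_snap   # detach one side of each mirror into a new pool
zpool import mypool_snap         # the new pool starts out exported; import it
                                 # on the same host (Host1)
zfs list -r mypool_snap          # its datasets then mount and are accessed
                                 # like any other pool's
```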
Hi,
I am able to take a snapshot of an individual file system/volume using zfs
snapshot filesystem|volume
Or
I can call zfs snapshot -r filesys...@snapshot-name to snapshot all descendants.
Is there a way I can specify more than 1 file system/volume of the same pool or
of different pools to call a
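With only the per-dataset and -r recursive forms available, a shell loop is the usual workaround (dataset names are hypothetical; note the snapshots are then not atomic across datasets):

```shell
# One zfs snapshot invocation per dataset, even across pools.
for ds in pool1/fs1 pool1/vol1 pool2/fs2; do
  zfs snapshot "$ds@backup-20101008"
done
```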
Hi all
I'm setting up a couple of 110TB servers and I just want some feedback in case
I have forgotten something.
The servers (two of them) will, as of current plans, be using 11 VDEVs with 7
2TB WD Blacks each, with a couple of Crucial RealSSD 256GB SSDs for the L2ARC
and another couple of
Hi Sridhar,
Most of the answers to your questions are yes.
If I have a mirrored pool mypool, like this:
# zpool status mypool
pool: mypool
state: ONLINE
scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
On 10/8/10 10:54 AM, Roy Sigurd Karlsbakk wrote:
Hi all
I'm setting up a couple of 110TB servers and I just want some feedback in case
I have forgotten something.
The servers (two of them) will, as of current plans, be using 11 VDEVs with 7
2TB WD Blacks each, with a couple of Crucial
- Original Message -
On 10/8/10 10:54 AM, Roy Sigurd Karlsbakk wrote:
Hi all
I'm setting up a couple of 110TB servers and I just want some
feedback in case I have forgotten something.
The servers (two of them) will, as of current plans, be using 11
VDEVs with 7 2TB WD
On 10/8/10 11:06 AM, Roy Sigurd Karlsbakk wrote:
- Original Message -
On 10/8/10 10:54 AM, Roy Sigurd Karlsbakk wrote:
Hi all
I'm setting up a couple of 110TB servers and I just want some
feedback in case I have forgotten something.
The servers (two of them) will, as of
Those must be pretty busy drives. I had a recent failure of a 1.5T disk in a
7-disk raidz2 vdev that took about 16 hours to resilver. There was very little
IO on the array, and it had maybe 3.5T of data to resilver.
On Oct 7, 2010, at 3:17 PM, Ian Collins wrote:
I would seriously consider
On 10/8/10 11:22 AM, Scott Meilicke wrote:
Those must be pretty busy drives. I had a recent failure of a 1.5T disk in a
7-disk raidz2 vdev that took about 16 hours to resilver. There was very little
IO on the array, and it had maybe 3.5T of data to resilver.
On Oct 7, 2010, at 3:17 PM, Ian
Hi,
I've been playing around with ZFS for a few days now, and have ended up with a
faulted raidz (4 disks) with 3 disks still marked as online.
Let's start with the output of zpool import:
pool: tank-1
id: 15108774693087697468
state: FAULTED
status: One or more devices contains corrupted
From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
I would not discount the performance issue...
Depending on your workload, you might find that performance increases
with ZFS on your hardware RAID in JBOD mode.
Depends on the raid card you're comparing to. I've certainly seen
Hi Cindy,
very well - thanks.
I noticed that both the pool you're using and the zpool that is
described in the docs already show a mirror-0 configuration, which
isn't the case for my zpool:
zpool status obelixData
pool: obelixData
state: ONLINE
scrub: none requested
config:
Hi,
The issue here was ZFS's use of DKIOCGMEDIAINFOEXT, introduced in
changeset 12208.
Forcing DKIOCGMEDIAINFO solved it.
On Tue, Sep 7, 2010 at 4:35 PM, Gavin Maltby gavin.mal...@oracle.com wrote:
On 09/07/10 23:26, Piotr Jasiukajtis wrote:
Hi,
After upgrade from snv_138 to snv_142 or
Hi Cindy,
well, actually the two LUNs represent two different raid boxes that are
connected through a FC switch to which the host is attached as well.
I simply added these two FC LUNs to a pool, but from what you all are
telling me, I should be good by adding two equal LUNs as described and
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
I would seriously consider raidz3, given I typically see 80-100 hour
resilver times for 500G drives in raidz2 vdevs. If you haven't
already,
If you're going raidz3, with 7
On 2010-Oct-08 09:07:34 +0800, Edward Ned Harvey sh...@nedharvey.com wrote:
If you're going raidz3, with 7 disks, then you might as well just make
mirrors instead, and eliminate the slow resilver.
There is a difference in reliability: raidzN means _any_ N disks can
fail, whereas mirror means one
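For comparison, with eight disks (hypothetical device names; the thread's seven-disk case doesn't pair up evenly) the two layouts would be created like this:

```shell
# raidz3: any three of the eight disks may fail.
zpool create tank raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
                         c0t4d0 c0t5d0 c0t6d0 c0t7d0
# Four 2-way mirrors: up to four disks may fail, but never
# both disks of the same mirror pair.
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 \
                  mirror c0t4d0 c0t5d0 mirror c0t6d0 c0t7d0
```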
On Tue, 5 Oct 2010, Nicolas Williams wrote:
Right. That only happens from NFSv3 clients [that don't instead edit the
POSIX Draft ACL translated from the ZFS ACL], from non-Windows NFSv4
clients [that don't instead edit the ACL], and from local applications
[that don't instead edit the ZFS