From: Frank Lahm [mailto:frankl...@googlemail.com]
With Netatalk for AFP he _is_ running a database: any AFP server needs
to maintain a consistent mapping between _not reused_ catalog node ids
(CNIDs) and filesystem objects. Luckily for Apple, HFS[+] and their
Cocoa/Carbon APIs provide such
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bill Werner
on a single 60GB SSD drive, use FDISK to create 3 physical partitions: a 20GB
for boot, a 30GB for L2ARC and a 10GB for ZIL? Or are 3 physical Solaris
partitions on a disk not
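For what it's worth, a minimal sketch of handing such slices to an existing
pool, assuming the SSD ends up as c1t1d0 with the L2ARC on slice 1 and the
ZIL on slice 2 (device and pool names are hypothetical):

zpool add tank cache c1t1d0s1   # 30GB slice as L2ARC
zpool add tank log c1t1d0s2     # 10GB slice as dedicated ZIL (slog)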
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
Now, I am wondering if using a mirror of such 15k SAS drives would be a
good-enough fit for a ZIL on a zpool that is mainly used for file services
via
AFP and SMB.
For
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Charles J. Knipe
Some more information about our configuration: We're running OpenSolaris
snv_134. ZFS is at version 22. Our disks are 15k RPM 300GB Seagate Cheetahs,
mounted in Promise
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
On Mon, Dec 20 at 19:19, Edward Ned Harvey wrote:
If there is no correlation between on-disk order of blocks for different
disks within the same vdev, then all hope is lost; it's
From: Richard Elling [mailto:richard.ell...@gmail.com]
Now suppose you have a raidz with 3 disks (disk1, disk2, disk3, where
disk3
is resilvering). You find some way of ordering all the used blocks of
disk1... Which means disk1 will be able to read in optimal order and
speed.
Sounds
From: edmud...@mail.bounceswoosh.org
[mailto:edmud...@mail.bounceswoosh.org] On Behalf Of Eric D. Mudama
Unless your drive is able to queue up a request to read every single used
part of the drive... Which is larger than the command queue for any
reasonable drive in the world... The point
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
I believe Oracle is aware of the problem, but most of
the core ZFS team has left. And of course, a fix for
Oracle Solaris no longer means a fix for the rest of
us.
OK,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
Is there any argument against using the rpool for all data storage as well
as
being the install volume?
Generally speaking, you can't do it.
The rpool is only supported on
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tobias Lauridsen
Sorry to bring the old thread up, but I think it is better than making a new
one. Does anyone have resilver times from a raidz1/2 pool with 5TB+ of data
on it?
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of artiepen
Sure, but it's really straightforward:
0,5,10,15,20,25,30,35,40,45,50,55 * * * * chown -R user1:group1
/zpool1/test/share2/* 2>/dev/null ; chmod -R g+w /zpool1/test/share2/* 2
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
zfs create tank/snapshots
zfs set sharenfs=on tank/snapshots
on by default sets the NFS share parameters to: rw
You can set specific NFS share parameters by using a
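For example (a hedged sketch; the network and host below are made up), you
can pass specific share_nfs options through the sharenfs property instead of
the plain 'on':

zfs set sharenfs=ro=@192.168.1.0/24,root=adminhost tank/snapshots
zfs get sharenfs tank/snapshots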
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Piscuc
        NAME        STATE     READ WRITE CKSUM
        zpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0
From: Erik Trimble [mailto:erik.trim...@oracle.com]
We can either (a) change how ZFS does resilvering or (b) repack the
zpool layouts to avoid the problem in the first place.
In case (a), my vote would be to seriously increase the number of
in-flight resilver slabs, AND allow for
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
In the case of resilvering on a mirrored disk, why not take a snapshot,
and
then
resilver by doing a pure block copy from the snapshot? It would be
sequential,
So, a
ZFS
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Alexander Lesle
at December, 17 2010, 17:48 Lanky Doodle wrote in [1]:
By single drive mirrors, I assume, in a 14 disk setup, you mean 7
sets of 2 disk mirrors - I am thinking of
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Friday, December 17, 2010 9:16 PM
While I agree that smaller vdevs are more reliable, I find your statement
that a failure is more likely to be in the same vdev if you have
only 2 vdevs to be a rather useless statement. The
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Geoff Nordli
I am trying to configure a system where I have two different NFS shares
which point to the same directory. The idea is if you come in via one
path,
you will have read-only
From: Stephan Budach [mailto:stephan.bud...@jvm.de]
Ehh. well. you answered it. sort of. ;)
I think I simply didn't dare to overwrite the root zfs on the destination
zpool with -F, but of course you're right that this is the way to go.
What are you calling the root zfs on the destination?
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
On the subject of where to install ZFS, I was planning to use either
Compact
Flash or USB drive (both of which would be mounted internally); using up 2
of
the drive bays for a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Cusack
Claiming you'd start paying for Solaris if they gave you ZFS for free
in Linux is absolutely ridiculous.
*Start* paying? You clearly have NO idea what it costs to run
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
This is relevant as my final setup was planned to be 15 disks, so only one
more than the example.
So, do I drop one disk and go with two 7-drive vdevs, or stick to three 5-drive
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
Now, I want to use zfs send -R t...@movetank | zfs recv targetTank/...
which would place all zfs fs one level down below targetTank.
Overwriting targetTank is not an option,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of artiepen
I'm using zfs/osol snv_134. I have 2 zfs volumes: /zpool1/test/share1 and
/zpool1/test/share2. share1 is using CIFS, share2: nfs.
I've recently put a cronjob in place that changes
From: Garrett D'Amore [mailto:garr...@nexenta.com]
Sent: Friday, December 10, 2010 10:47 AM
We have ZFS version 28. Whether we ever get another open source update
of ZFS from *Oracle* is at this point doubtful. However, I will point out
that
Forgive me for swinging the conversation back
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
In that case, wouldn't it be better to, as you say, start with a 6 drive
Z2, then
just keep adding drives until the case is full, for a single Z2 zpool?
Doesn't work that way.
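What does work is adding another whole vdev; a hedged sketch with made-up
device names:

# grows the pool by a second top-level raidz2 vdev; an existing raidz2
# vdev cannot be grown one disk at a time
zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0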
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Alexander Lesle
After reading the board manual I found at page 4-9 where I can set
SATA#1 from IDE to AHCI.
Can zfs handle AHCI for rpool?
Can zfs handle AHCI for tank?
Yes, and yes.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Linder, Doug
But it
sure would be nice if they spared everyone a lot of effort and annoyance and
just GPL'd ZFS.
If you just mean it should be open source, under CDDL that it's been
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Van Damme
And if they don't, it will be sad, both in terms of useful code not
being available to a wide community to review and amend, and in terms
of Oracle not really getting the
From: Joerg Schilling [mailto:joerg.schill...@fokus.fraunhofer.de]
Problem is... Oracle is now the only company in the world that's immune
to a NetApp lawsuit over ZFS. Even if IBM and Dell and HP wanted to band
together and fund the open-source development of ZFS and OpenIndiana...
It's a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
But a few days ago, Dell released a new firmware upgrade, from version 5.x
to 4.x. That's right. The new firmware is a downgrade to 4.
I am going to remove my intel add
It's been a while since I last heard anybody say anything about this.
What's the latest version of publicly released ZFS? Has oracle made it
closed-source moving forward?
Nexenta ... openindiana ... etc ... Are they all screwed?
From: Edward Ned Harvey [mailto:sh...@nedharvey.com]
It has been over 3 weeks now, with no crashes, and me doing everything I
can to get it to crash again. So I'm going to call this one resolved...
All I did was disable the built-in Broadcom network cards, and buy an add-
on Intel
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
It's been a while since I last heard anybody say anything about this.
What's
the latest version of publicly
released ZFS? Has oracle made it closed-source moving forward?
Nice troll.
Are you kidding? 6 months ago, and 1
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tony MacDoodle
zpool create tank mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 spare c1t6d0
Is this a 0 then 1 (mirror of stripes)
or
1 then 0 (stripe of mirrors)
Strictly speaking, ZFS
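For illustration (my own sketch, not part of the quoted reply), that command
builds a stripe across two mirror vdevs with a shared hot spare, i.e. the
moral equivalent of RAID 1+0:

  tank
    mirror-0   c1t2d0  c1t3d0
    mirror-1   c1t4d0  c1t5d0
    spares
      c1t6d0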
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Also, if you have a NFS datastore, which is not available at the time of
ESX
bootup, then the NFS datastore doesn't come online, and there seems to be
no
way of telling
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
The best choice is usually to install with zfs root on a mirrored pair
of disks. UFS is going away as a boot option.
UFS is already unavailable as a boot option. It's only
For anyone who cares:
I created an ESXi machine. Installed two guest (centos) machines and
vmware-tools. Connected them to each other via only a virtual switch. Used
rsh to transfer large quantities of data between the two guests,
unencrypted, uncompressed. Have found that ESXi virtual switch
From: Edward Ned Harvey
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com]
In order to test random reads, you have to configure iozone to use a data set
which is much larger than physical ram. Since iozone will write a big file
and
then immediately afterward, start reading
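A hedged sketch of such an iozone run (the size and path are examples only;
-s should be well beyond the physical RAM of the box under test):

# -i 0 write/rewrite first, -i 2 random read/write afterwards,
# -e include fsync in the timing
iozone -i 0 -i 2 -s 64g -r 8k -e -f /tank/iozone.tmp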
From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
Bear a few things in mind:
iops is not iops.
snip/
I am totally aware of these differences, but it seems some people think
RAIDz is nonsense unless you don't need speed at all. My testing shows (so
far) that the speed is quite
From: Ross Walker [mailto:rswwal...@gmail.com]
Well besides databases there are VM datastores, busy email servers, busy
ldap servers, busy web servers, and I'm sure the list goes on and on.
I'm sure it is much harder to list servers that are truly sequential in IO
than random. This is
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Joost Mulders
This tells me that *86,7G* is used by *snapshots* of this filesystem.
However, when I look at the space allocation of the snapshots, I don't
see the 86,7G back!
jo...@onix$
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Don Jackson
# zfs send -R naspool/open...@xfer-11292010 | zfs recv -d
npool/openbsd
cannot receive new filesystem stream: out of space
The destination pool is much larger (by
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Don Jackson
# zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv
npool/openbsd
receiving full stream of naspool/open...@xfer-11292010 into
npool/open...@xfer-11292010 received
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
(1) Unless you are using Zvols for raw disk partitions (for use with
something like a database), the recordsize value is a MAXIMUM value, NOT
an absolute value. Thus, if you
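A small hedged example (dataset name made up): recordsize only caps the block
size, so smaller files still land in single, smaller blocks:

zfs set recordsize=16K tank/db   # cap blocks at 16K for this dataset
zfs get recordsize tank/db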
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Piscuc
looks promising. One element that we cannot determine is the optimum
number of disks in a raid-z pool. In the ZFS best practice guide, 7, 9 and
11
There are several important
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
In fact, I recently got one of these Samsung drives...
http://tinyurl.com/38s3ac3
The spec sheet says sequential read 220MB/s, sequential write 120MB/s...
Which is 2-4 times faster than the best SATA disk out there... And
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Christopher George
Jump to slide 37 for the write IOPS benchmarks:
http://www.ddrdrive.com/zil_accelerator.pdf
Anybody who designs or works with NAND (flash) at a low level knows it can't
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karel Gardas
I'm curious if there is a support for OCZ RevoDrive SSD or any other SSD
hooked directly on PCIe in Solaris. This RevoDrive looks particularly
interesting for its low price and
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Pavel Heimlich
Are there some zfs / OS parameters I could set so that my usb drive with
zfs
on it would meet the expectations one has from a removable drive? (i.e. safe
to remove more or less anytime)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Pavel Heimlich
I tried to transfer some data between two S11 machines via a usb harddrive
with zfs on it, but importing the zpool failed (with some assertion error
I did
not write down)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karel Gardas
Thank you Christopher and Edward for all the detailed information
provided.
Indeed DDRDrive looks like a right tool for fast ZIL, but for my
development
workstation I'm rather
From: Saxon, Will [mailto:will.sa...@sage.com]
What I am wondering is whether this is really worth it. Are you planning
to
share the storage out to other VM hosts, or are all the VMs running on the
host using the 'local' storage? I know we like ZFS vs. traditional RAID
and
volume
Suppose you wanted to boot from an iSCSI target, just to get VMware and a
ZFS server up. And then you could pass-thru the entire local storage
bus(es) to the ZFS server, and you could create other VM's whose storage is
backed by the ZFS server on local disk.
One way you could do this is to buy
From: Edward Ned Harvey [mailto:sh...@nedharvey.com]
I have a Dell R710 which has been flaky for some time. It crashes about
once
per week. I have literally replaced every piece of hardware in it, and
reinstalled Sol 10u9 fresh and clean.
It has been over 3 weeks now, with no crashes
From: Saxon, Will [mailto:will.sa...@sage.com]
In order to do this, you need to configure passthrough for the device at the
host level (host -> configuration -> hardware -> advanced settings). This
Awesome. :-)
The only problem is that once a device is configured to pass-thru to the
guest VM,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of VO
How to accomplish ESXi 4 raw device mapping with SATA at least:
http://www.vm-help.com/forum/viewtopic.php?f=14&t=1025
It says:
You can pass-thru individual disks, if you have SCSI, but
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of VO
This sounds interesting as I have been thinking something similar but
never
implemented it because all the eggs would be in the same basket. If you
don't mind me asking for more
From: Gil Vidals [mailto:gvid...@gmail.com]
connected to my ESXi hosts using 1 gigabit switches and network cards: The
speed is very good as can be seen by IOZONE tests:
      KB  reclen    write  rewrite     read   reread
  512000      32    71789    76155    94382   101022
  512000    1024    75104
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Günther
Disabling the ZIL (Don't)
This is relative. There are indeed situations where it's acceptable to
disable ZIL. To make your choice, you need to understand a few things...
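For reference, and heavily hedged (dataset name made up, and this is exactly
the "Don't" under discussion), the two mechanisms from that era:

# global tunable in /etc/system (requires a reboot):
#   set zfs:zil_disable = 1
# per-dataset, on builds that have the sync property:
zfs set sync=disabled tank/nfsdata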
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
SAS Controller
and all ZFS Disks/ Pools are passed-through to Nexenta to have full
ZFS-Disk
control like on real hardware.
This is precisely the thing I'm interested in. How do you do that? On my
ESXi (test) server, I have a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of VO
The server hardware is pretty ghetto with whitebox components such as
non-ECC RAM (cause of the pool loss). I know the hardware sucks but
sometimes non-technical people don't understand
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bryan Horstmann-Allen
| I am a newbie on Solaris.
| We recently purchased a Sun Sparc M3000 server. It comes with 2 identical
| hard drives. I want to set up a RAID 1. After searching on
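A hedged sketch of the usual ZFS answer on SPARC (device names made up):
attach a second disk to the root pool and install the boot block on it:

zpool attach rpool c0t0d0s0 c0t1d0s0
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0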
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toby Thain
The corruption will at least be detected by a scrub, even in cases where
it
cannot be repaired.
Not necessarily. Let's suppose you have some bad memory, and no ECC. Your
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Chad Leigh -- Shire.Net LLC
1) The ZFS box offers a single iSCSI target that exposes all the
zvols as individual disks. When the FreeBSD initiator finds it, it
creates a separate disk
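One way such a layout is typically built with COMSTAR on OpenSolaris (a
hedged sketch; names and sizes are made up):

zfs create -V 100G tank/vm/disk1
sbdadm create-lu /dev/zvol/rdsk/tank/vm/disk1   # prints the GUID of the new LU
stmfadm add-view <GUID-from-previous-step>      # expose the LU (to all hosts by default)
itadm create-target                             # create the iSCSI target itself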
Since combining a ZFS storage backend, via nfs or iscsi, with ESXi heads, I'm
in love. But for one thing: the interconnect between the head and the storage.
1G Ether is so cheap, but not as fast as desired. 10G ether is fast enough,
but it's overkill, and why is it so bloody expensive? Why is there
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Alexander Skwar
I've got a Solaris 10 10/08 Sparc system and use ZFS pool version 15. I'm
playing around a bit to make it break.
Now I write some garbage to one of the log mirror devices.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Mark Sandrock
I'm working with someone who replaced a failed 1TB drive (50%
utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian D
I get that more cores don't necessarily mean better performance, but I
doubt that both the latest AMD CPUs (the Magny-Cours) and the latest
Intel CPUs (the Beckton) suffer from
From: Markus Kovero [mailto:markus.kov...@nebula.fi]
Any other feasible alternatives for Dell hardware? Wondering, are these
issues mostly related to Nehalem architectural problems, e.g. C-states?
So is there anything to be gained by switching hw vendor? HP anyone?
In googling around etc ... Many
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
Sun hardware? Then you get all your support from one vendor.
+1
Sun hardware costs more, but it's worth it, if you want to simply assume
your stuff will work. In my case, I'd
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
I actually have three Dell R610 boxes running OSol snv134 and since I
switched from the internal Broadcom NICs to Intel ones, I didn't have
any issue with them.
I am still
From: Stephan Budach [mailto:stephan.bud...@jvm.de]
What sort of problems did you have with the bcom NICs in your R610?
Well, basically the boxes would hang themselves up, after a week or so.
And by hanging up, I mean becoming inaccessible by either the network
via ssh or the local
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Kyle McDonald
I'm currently considering purchasing 1 or 2 Dell R515's.
With up to 14 drives, and up to 64GB of RAM, it seems like it's well
suited
for a low-end ZFS server.
I know this
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave
I have a 14 drive pool, in a 2x 7 drive raidz2, with l2arc and slog
devices attached.
I had a port go bad on one of my controllers (both are sat2-mv8), so I
need to replace it (I have
From: Stephan Budach [mailto:stephan.bud...@jvm.de]
Just in case this wasn't already clear.
After scrub sees read or checksum errors, zpool status -v will list
filenames that are affected. At least in my experience.
--
- Tuomas
That didn't do it for me. I used scrub and
From: Edward Ned Harvey [mailto:sh...@nedharvey.com]
Let's crunch some really quick numbers here. Suppose a 6Gbit/sec
sas/sata bus, with 6 disks in a raid-5. Each disk is 1TB, 1000G, and
each disk is capable of sustaining 1 Gbit/sec sequential operations.
These are typical measurements
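A rough version of the arithmetic being set up here (my own figures, not the
poster's):

# 1000 GB is 8000 Gbit, so at a sustained 1 Gbit/s per disk:
echo $(( 1000 * 8 ))          # = 8000 seconds to read one disk sequentially
echo $(( 1000 * 8 / 3600 ))   # = ~2, i.e. a bit over two hours

And a 6 Gbit/s bus can, in principle, feed all six disks at that rate at once.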
In a discussion a few weeks back, it was mentioned that the Best Practices
Guide says something like Don't put more than ___ disks into a single
vdev. At first, I challenged this idea, because I see no reason why a
21-disk raidz3 would be bad. It seems like a good thing.
I was operating on
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
4. Guess what happens if you have 2 or 3 failed disks in your raidz3,
and
they're trying to resilver at the same time. Does the system ignore
subsequently failed disks
From: Stephan Budach [mailto:stephan.bud...@jvm.de]
Although, I have to say that I do have exactly 3 files that are corrupt
in each snapshot until I finally deleted them and restored them from
their original source.
zfs send will abort when trying to send them, while scrub doesn't
notice
From: Darren J Moffat [mailto:darr...@opensolaris.org]
It's one of the big selling points, reasons for ZFS to exist. You
should
always give ZFS JBOD devices to work on, so ZFS is able to scrub both
of the
redundant sides of the data, and when a checksum error
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
Last I checked, you lose the pool if you lose the slog on zpool
versions < 19. I don't think there is a trivial way around this.
You should plan for this to be true when
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Gil Vidals
What would the performance impact be of splitting up a 64 GB SSD into
four partitions of 16 GB each versus having the entire SSD dedicated to
each pool?
This is a common
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Trond Michelsen
Hi.
I think everything you said sounds perfectly right.
As for estimating the time required to zfs send ... I don't know how badly
zfs send gets hurt by the on-disk order or
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
Ignore Edward Ned Harvey's response because he answered the wrong
question.
Indeed.
Although, now that I go back and actually read the question correctly, I
wonder why
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Freddie Cash
If you lose 1 vdev, you lose the pool.
As long as 1 vdev is striped and not mirrored, that's true.
You can only afford to lose a vdev, if your vdev itself is mirrored.
You
From: Richard Elling [mailto:richard.ell...@gmail.com]
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg41998.html
Slabs don't matter. So the rest of this argument is moot.
Tell it to Erik. He might want to know. Or maybe he knows better than you.
2. Each slab is spread
From: Richard Elling [mailto:richard.ell...@gmail.com]
On Oct 17, 2010, at 6:17 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
If scrub is operating at a block-level (and I think
From: Richard Elling [mailto:richard.ell...@gmail.com]
This is one of the reasons the raidzN resilver code is inefficient.
Since you end up waiting for the slowest seek time of any one disk in
the vdev, and when that's done, the amount of data you were able to
process was at most 128K.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Marty Scholes
Would it make sense for scrub/resilver to be more aware of operating in
disk order instead of zfs order?
It would certainly make sense. As mentioned, even if you do the entire
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Simon Breden
So are we all agreed then, that a vdev failure will cause pool loss ?
Yes. When I said you could mirror a raidzN vdev, it was based on nothing
more credible than assumption
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
If scrub is operating at a block-level (and I think it is), then how
can
checksum failures be mapped to file names? For example, this is a
long-requested feature of zfs
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks useful capacity + N disks redundancy) then the
block size on each individual disk will be 128K / M. Right? This is one
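A quick worked instance of that claim (my own example): for a 7-disk raidz2,
M = 5 data disks, so each 128K block contributes roughly 128K / 5 to every
data disk, plus parity on the remaining two:

echo $(( 131072 / 5 ))   # ~26214 bytes (~25.6K) of each 128K block per data disk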
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
raidzN takes a really long time to resilver (code written
inefficiently,
it's a known problem.) If you had a huge raidz3, it would literally
never
finish, because it couldn't resilver as fast as new data appears. A
week
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
The vdev only.
Right on.
Furthermore, as shown in the zpool status, a 7-disk raidz2 is certainly a
reasonable vdev configuration.
scrub: resilver in progress for
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Phil Harman
I'm wondering whether your HBA has a write through or write back cache
enabled? The latter might make things very fast, but could put data at
risk if not sufficiently
From: Stephan Budach [mailto:stephan.bud...@jvm.de]
Point taken!
So, what would you suggest, if I wanted to create really big pools? Say
in the 100 TB range? That would be quite a number of single drives
then, especially when you want to go with zpool raid-1.
You have a lot of disks.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Cassandra Pugh
I would like to know how to replace a failed vdev in a non redundant
pool?
Non redundant ... Failed ... What do you expect? This seems like a really
simple answer... You