2012-10-03 22:03, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
If you are going to be an initiator only, then it makes sense for
svc:/network/iscsi/initiator to be required by svc:/system/filesystem/local
If you are going to be a target only, then it makes sense for
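(A rough sketch of how that initiator-side dependency could be wired up with svccfg; this is not from the thread, and the property-group name "iscsi-initiator" is just an illustrative choice:)

# svccfg -s svc:/system/filesystem/local addpg iscsi-initiator dependency
# svccfg -s svc:/system/filesystem/local setprop iscsi-initiator/grouping = astring: require_all
# svccfg -s svc:/system/filesystem/local setprop iscsi-initiator/restart_on = astring: none
# svccfg -s svc:/system/filesystem/local setprop iscsi-initiator/type = astring: service
# svccfg -s svc:/system/filesystem/local setprop iscsi-initiator/entities = fmri: svc:/network/iscsi/initiator
# svcadm refresh svc:/system/filesystem/local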
Thanks for all the input. It seems information on the performance of the
ZIL is sparse and scattered. I've spent significant time researching this
the past day. I'll summarize what I've found. Please correct me if I'm
wrong.
- The ZIL can have any number of SSDs attached, either mirrored or
From: Andrew Gabriel [mailto:andrew.gabr...@cucumber.demon.co.uk]
Temporarily set sync=disabled
Or, depending on your application, leave it that way permanently. I know,
for the work I do, most systems I support at most locations have
sync=disabled. It all depends on the workload.
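(For example, something along these lines; the dataset name is hypothetical. Keep in mind that with sync=disabled, acknowledged synchronous writes from roughly the last few seconds before a crash or power loss can be silently lost:)

# zfs get sync tank/vmstore
# zfs set sync=disabled tank/vmstore
...and to go back to the default behaviour later:
# zfs set sync=standard tank/vmstore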
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Schweiss, Chip
- The ZIL can have any number of SSDs attached, either mirrored or
individually. ZFS will stripe across these in a raid0 or raid10 fashion
depending on how you configure.
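(A sketch of the two layouts being described; pool and device names are made up. Adding slogs individually spreads log blocks across them, roughly like a stripe, while adding them as mirror vdevs gives the striped-mirror behaviour. Either as independent devices:)

# zpool add tank log c4t0d0 c4t1d0

(or as mirrored pairs:)

# zpool add tank log mirror c4t0d0 c4t1d0 mirror c4t2d0 c4t3d0
# zpool status tank

(the log devices then show up under a separate "logs" section in the status output)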
I'm
From: Jim Klimov [mailto:jimkli...@cos.ru]
Well, on my system that I complained a lot about last year,
I've had a physical pool, a zvol in it, shared and imported
over iscsi on loopback (or sometimes initiated from another
box), and another pool inside that zvol ultimately.
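(Roughly, that kind of setup with COMSTAR looks like the sketch below; this is not Jim's exact recipe, and the size, names and GUID are placeholders:)

# zfs create -V 100g tank/backvol
# svcadm enable -r svc:/network/iscsi/target:default
# stmfadm create-lu /dev/zvol/rdsk/tank/backvol
# stmfadm add-view <GUID-printed-by-create-lu>
# itadm create-target
# svcadm enable svc:/network/iscsi/initiator:default
# iscsiadm add discovery-address 127.0.0.1
# iscsiadm modify discovery --sendtargets enable
# zpool create innerpool <new-LUN-device>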
Ick. And it
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Schweiss, Chip
How can I determine for sure that my ZIL is my bottleneck? If it is the
bottleneck, is it possible to keep adding
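(One way to get a first read on that, assuming a pool named tank with dedicated log devices: watch per-vdev activity and see whether the slog saturates while the data vdevs stay quiet. Richard Elling's zilstat DTrace script, if you have it handy, also shows how much data is actually flowing through the ZIL:)

# zpool iostat -v tank 5

(compare the bandwidth on the log vdev lines against the data vdevs)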
2012-10-04 16:06, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: Jim Klimov [mailto:jimkli...@cos.ru]
Well, on my system that I complained a lot about last year,
I've had a physical pool, a zvol in it, shared and imported
over iscsi on loopback (or sometimes initiated
On 10/04/12 05:30, Schweiss, Chip wrote:
Thanks for all the input. It seems information on the
performance of the ZIL is sparse and scattered. I've spent
significant time researching this the past day. I'll summarize
what I've found. Please correct me if I'm wrong.
This whole thread has been fascinating. I really wish we (OI) had the
following two things that FreeBSD supports:
1. HAST - provides a block-level driver that mirrors a local disk to a
network disk presenting the result as a block device using the GEOM API.
2. CARP.
I have a prototype
Forgot to mention: my interest in doing this was so I could have my ESXi
host point at a CARP-backed IP address for the datastore, and I would
have no single point of failure at the storage level.
Hi,
I have a machine whose zpools are at version 28, and I would like to
keep them at that version for portability between OSes. I understand
that 'zpool status' asks me to upgrade, but so does 'zpool status -x'
(the man page says it should only report errors or unavailability).
This is a problem
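(For what it's worth, staying at 28 is just a matter of never running 'zpool upgrade', and new pools can be pinned at creation time; the pool and device names below are illustrative:)

# zpool get version tank
# zpool create -o version=28 tank2 mirror c5t0d0 c5t1d0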
On 10/4/2012 11:48 AM, Richard Elling wrote:
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber dswa...@druber.com
mailto:dswa...@druber.com wrote:
This whole thread has been fascinating. I really wish we (OI) had
the following two things that FreeBSD supports:
1. HAST - provides a
On Oct 4, 2012, at 8:58 AM, Jan Owoc jso...@gmail.com wrote:
Hi,
I have a machine whose zpools are at version 28, and I would like to
keep them at that version for portability between OSes. I understand
that 'zpool status' asks me to upgrade, but so does 'zpool status -x'
(the man page
On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber dswa...@druber.com wrote:
On 10/4/2012 11:48 AM, Richard Elling wrote:
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber dswa...@druber.com wrote:
This whole thread has been fascinating. I really wish we (OI) had the two
following things
2012-10-04 19:48, Richard Elling wrote:
2. CARP.
This exists as part of the OHAC project.
-- richard
Wikipedia says CARP is the open-source equivalent of VRRP.
And we have that in OI, don't we? Would it suffice?
# pkg info -r vrrp
Name: system/network/routing/vrrp
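(If it does suffice, the rough shape of it would be something like the following; the VRID, link name and router name are illustrative and untested:)

# pkg install system/network/routing/vrrp
# vrrpadm create-router -V 12 -l e1000g0 -A inet vrrp1
# vrrpadm show-router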
On Thu, Oct 4, 2012 at 9:14 AM, Richard Elling richard.ell...@gmail.com wrote:
On Oct 4, 2012, at 8:58 AM, Jan Owoc jso...@gmail.com wrote:
The return code for zpool is ambiguous. Do not rely upon it to determine
if the pool is healthy. You should check the health property instead.
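(For example, with an illustrative pool name:)

# zpool list -H -o health tank
ONLINE

(...or in a script, something like:)

test "$(zpool list -H -o health tank)" = "ONLINE" || echo "pool tank is degraded or faulted"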
Huh. Learn
2012-10-04 20:36, Freddie Cash wrote:
On Thu, Oct 4, 2012 at 9:14 AM, Richard Elling richard.ell...@gmail.com wrote:
On Oct 4, 2012, at 8:58 AM, Jan Owoc jso...@gmail.com wrote:
The return code for zpool is ambiguous. Do not rely upon it to determine
if the pool is healthy. You should check the
On Thu, Oct 4, 2012 at 9:45 AM, Jim Klimov jimkli...@cos.ru wrote:
2012-10-04 20:36, Freddie Cash wrote:
On Thu, Oct 4, 2012 at 9:14 AM, Richard Elling richard.ell...@gmail.com
wrote:
On Oct 4, 2012, at 8:58 AM, Jan Owoc jso...@gmail.com wrote:
The return code for zpool is ambiguous. Do not
On 10/4/2012 12:19 PM, Richard Elling wrote:
On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber dswa...@druber.com
mailto:dswa...@druber.com wrote:
On 10/4/2012 11:48 AM, Richard Elling wrote:
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber dswa...@druber.com
mailto:dswa...@druber.com wrote:
Hey guys,
I've run into another ZFS performance disaster that I was hoping someone might
be able to give me some pointers on resolving. Without any significant change
in workload, write performance has dropped off dramatically. Based on previous
experience, we tried deleting some files to free
2012-10-04 21:19, Dan Swartzendruber writes:
Sorry to be dense here, but I'm not getting how this is a cluster setup,
or what your point wrt authoritative vs replication meant. In the
scenario I was looking at, one host is providing access to clients - on
the backup host, no services are
Thanks Neil, we always appreciate your comments on ZIL implementation.
One additional comment below...
On Oct 4, 2012, at 8:31 AM, Neil Perrin neil.per...@oracle.com wrote:
On 10/04/12 05:30, Schweiss, Chip wrote:
Thanks for all the input. It seems information on the performance of the
Again thanks for the input and clarifications.
I would like to clarify the ZIL performance numbers I was talking about,
which I had seen discussed on other forums. Right now
I'm getting streaming performance of sync writes at about 1 Gbit/s. My
target is closer to 10 Gbit/s. If I
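(One way to sanity-check that number locally, before NFS and the network get involved: a hedged sketch using fio, which is not in the base install, against a hypothetical dataset path:)

# fio --name=syncwrite --directory=/tank/nfsroot --rw=write --bs=128k \
      --size=4g --numjobs=4 --sync=1 --group_reporting

(with --sync=1 every write is issued O_SYNC, so the reported bandwidth is bounded by the slog, much like the NFS sync-write case)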
Hi Charles,
Yes, a faulty or failing disk can kill performance.
I would see if FMA has generated any faults:
# fmadm faulty
Or, if any of the devices are collecting errors:
# fmdump -eV | more
Thanks,
Cindy
On 10/04/12 11:22, Knipe, Charles wrote:
Hey guys,
I’ve run into another ZFS
Sounds similar to the problem discussed here:
http://blogs.everycity.co.uk/alasdair/2011/05/adjusting-drive-timeouts-with-mdb-on-solaris-or-openindiana/
Check 'iostat -xn' and see if one or more disks is stuck at 100%.
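(e.g., with a 5-second interval; the interesting columns are asvc_t, the average service time, and %b, percent busy:)

# iostat -xn 5

(one disk sitting at ~100 %b with a much larger asvc_t than its peers is the classic signature of a dying drive dragging the whole pool down)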
-Chip
On Thu, Oct 4, 2012 at 3:42 PM, Cindy Swearingen
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
There are also loops ;)
# svcs -d filesystem/usr
STATE          STIME    FMRI
online         Aug_27   svc:/system/scheduler:default
...
# svcs -d scheduler
STATE
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Schweiss, Chip
If I get to build this system, it will house a decent-sized VMware
NFS storage setup w/ 200+ VMs, which will be dual-connected via 10GbE. This is all
medical imaging research.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Neil Perrin
The ZIL code chains blocks together, and these are allocated round-robin
among slogs or, if they don't exist, then among the main pool devices.
So, if somebody is doing sync writes as
2012-10-05 1:44, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
There are also loops ;)
# svcs -d filesystem/usr
STATE          STIME    FMRI
online         Aug_27
On Oct 4, 2012, at 1:33 PM, Schweiss, Chip c...@innovates.com wrote:
Again thanks for the input and clarifications.
I would like to clarify the ZIL performance numbers I was talking about, which
I had seen discussed on other forums. Right now I'm getting
streaming performance
It's been a while, but it seems like in the past, you would power the
system down, boot from removable media, import your pool, then destroy or
archive the /etc/zfs/zpool.cache, and possibly your /etc/path_to_inst
file, power down again and re-arrange your hardware, then come up one
final time with a
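(A rough modern equivalent, assuming the pool isn't the root pool and the names are made up: exporting first removes the pool from /etc/zfs/zpool.cache, and an import with -d rescans the devices instead of trusting stale paths:)

# zpool export tank
(power down, re-arrange / re-cable the drives, boot)
# zpool import -d /dev/dsk tank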
On Thu, Oct 04, 2012 at 07:57:34PM -0500, Jerry Kemp wrote:
I remember a similar video that was up on YouTube as done by some of the
Sun guys employed in Germany. They built a big array from USB drives,
then exported the pool. Once the system was down, they re-arranged all
the drives in
Thanks for the link.
This was the YouTube link that I had.
http://www.youtube.com/watch?v=1zw8V8g5eT0
Jerry
On 10/ 4/12 08:07 PM, Jens Elkner wrote:
On Thu, Oct 04, 2012 at 07:57:34PM -0500, Jerry Kemp wrote:
I remember a similar video that was up on YouTube as done by some of the
Sun
On 10/04/12 15:59, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Neil Perrin
The ZIL code chains blocks together, and these are allocated round-robin
among slogs or,
if they