From: Timothy Coalson [mailto:tsc...@mst.edu]
Sent: Friday, October 19, 2012 9:43 PM
A shot in the dark here, but perhaps one of the disks involved is taking a
long time to return from reads, but is returning eventually, so ZFS doesn't
notice the problem? Watching 'iostat -x' for busy
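A minimal sketch of that kind of watching, assuming illumos-style iostat
flags; the interval is arbitrary:

# extended stats with descriptive device names, every 5 seconds; a disk
# with high asvc_t or %b while its mirror twin sits idle is the suspect
iostat -xn 5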
On Sat, Oct 20, 2012 at 7:39 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: Timothy Coalson [mailto:tsc...@mst.edu]
Sent: Friday, October 19, 2012 9:43 PM
A shot in the dark here, but perhaps one of the disks
Yikes, I'm back at it again, and so frustrated.
For about 2-3 weeks now, I've had the iSCSI mirror configuration in production,
as previously described: two disks on system 1 mirrored against two disks on
system 2, everything done via iSCSI, so you could zpool export on machine 1, and then
Several times, I destroyed the pool and recreated it completely from
backup. zfs send and zfs receive both work fine. But strangely - when I
launch a VM, the IO grinds to a halt, and I'm forced to powercycle
(usually) the host.
A shot in the dark here, but perhaps one of the disks involved
2012-10-05 22:53, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
http://nedharvey.com/blog/?p=105
Nice writeup, thanks. Perhaps you could also post/link it on the OI wiki
so the community can find it more easily?
A few comments:
1) For readability I'd use ...| awk '{print $1}'
Hello Ed and all,
Just for the sake of completeness, I dug out my implementation of
SMF services for iscsi-imported pools. As I said, it is kinda ugly
due to hardcoded things which should rather be in SMF properties
or at least in config files, but this was a single-solution POC.
Here is the
2012-10-06 14:49, Jim Klimov wrote:
$ cat /lib/svc/method/iscsi-mount-dcpool
--
#!/bin/sh
DELAY=600
case "$1" in
start)
        if [ -f /etc/zfs/delay.dcpool ]; then
                D=`head -1 /etc/zfs/delay.dcpool`
                [ "$D" -gt 0 ] 2>/dev/null && DELAY="$D"
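A minimal sketch of the "should be SMF properties" idea: the method script
could pull its delay from the service instance instead of a flat file. The
config/delay property name here is hypothetical:

# $SMF_FMRI is set by svc.startd when it runs a method script
D=`svcprop -p config/delay $SMF_FMRI 2>/dev/null` && DELAY="$D"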
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Well, it seems just like a peculiar effect of required vs. optional
dependencies. The loop is in the default installation. Details:
# svcprop filesystem/usr | grep scheduler
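To see which of those dependencies are required and which are optional, the
grouping property on each dependency property group is the thing to check;
a hedged sketch:

# 'require_all' vs 'optional_all' is what makes such a loop survivable
svcprop svc:/system/filesystem/usr | grep grouping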
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
I must be missing something - I don't see anything above that indicates any
required vs optional dependencies.
Ok, I see that now. (Thanks to the SMF FAQ).
A dependency
2012-10-03 22:03, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
If you are going to be an initiator only, then it makes sense for
svc:/network/iscsi/initiator to be required by svc:/system/filesystem/local
If you are going to be a target only, then it makes sense for
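For the initiator-only case, wiring that dependency in by hand would look
roughly like this; an untested sketch, and the 'iscsi-init' property-group
name is made up:

svccfg -s svc:/system/filesystem/local addpg iscsi-init dependency
svccfg -s svc:/system/filesystem/local setprop iscsi-init/grouping = astring: require_all
svccfg -s svc:/system/filesystem/local setprop iscsi-init/restart_on = astring: none
svccfg -s svc:/system/filesystem/local setprop iscsi-init/type = astring: service
svccfg -s svc:/system/filesystem/local setprop iscsi-init/entities = fmri: svc:/network/iscsi/initiator
svcadm refresh svc:/system/filesystem/local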
From: Jim Klimov [mailto:jimkli...@cos.ru]
Well, on my system that I complained a lot about last year,
I've had a physical pool, a zvol in it, shared and imported
over iscsi on loopback (or sometimes initiated from another
box), and another pool inside that zvol ultimately.
Ick. And it
2012-10-04 16:06, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: Jim Klimov [mailto:jimkli...@cos.ru]
Well, on my system that I complained a lot about last year,
I've had a physical pool, a zvol in it, shared and imported
over iscsi on loopback (or sometimes initiated
This whole thread has been fascinating. I really wish we (OI) had the
two following things that freebsd supports:
1. HAST - provides a block-level driver that mirrors a local disk to a
network disk presenting the result as a block device using the GEOM API.
2. CARP.
I have a prototype
Forgot to mention: my interest in doing this was so I could have my ESXi
host point at a CARP-backed IP address for the datastore, and I would
have no single point of failure at the storage level.
On 10/4/2012 11:48 AM, Richard Elling wrote:
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber dswa...@druber.com wrote:
This whole thread has been fascinating. I really wish we (OI) had
the two following things that freebsd supports:
1. HAST - provides a
On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber dswa...@druber.com wrote:
On 10/4/2012 11:48 AM, Richard Elling wrote:
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber dswa...@druber.com wrote:
This whole thread has been fascinating. I really wish we (OI) had the two
following things
2012-10-04 19:48, Richard Elling wrote:
2. CARP.
This exists as part of the OHAC project.
-- richard
Wikipedia says CARP is the open-source equivalent of VRRP.
And we have that in OI, don't we? Would it suffice?
# pkg info -r vrrp
Name: system/network/routing/vrrp
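If VRRP is enough, the OI side would be something like the following; a
hedged sketch, with made-up VRID, link, and router names:

pkg install system/network/routing/vrrp
vrrpadm create-router -V 12 -A inet -l net0 vrrp12   # VRID 12 on link net0
vrrpadm show-router vrrp12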
On 10/4/2012 12:19 PM, Richard Elling wrote:
On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber dswa...@druber.com wrote:
On 10/4/2012 11:48 AM, Richard Elling wrote:
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber dswa...@druber.com wrote:
2012-10-04 21:19, Dan Swartzendruber writes:
Sorry to be dense here, but I'm not getting how this is a cluster setup,
or what your point wrt authoritative vs replication meant. In the
scenario I was looking at, one host is providing access to clients - on
the backup host, no services are
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
There are also loops ;)
# svcs -d filesystem/usr
STATE          STIME    FMRI
online         Aug_27   svc:/system/scheduler:default
...
# svcs -d scheduler
STATE
2012-10-05 1:44, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
There are also loops ;)
# svcs -d filesystem/usr
STATE          STIME    FMRI
online         Aug_27
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
it doesn't work right - it turns out, iSCSI
devices (and I presume SAS devices) are not removable storage. That
means, if the device goes offline and comes back online
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
If they are close enough for crossover cable where the cable is UTP,
then they are
close enough for SAS.
Pardon my ignorance, can a system easily serve its local storage
2012-10-01 17:07, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
Well, now I know why it's stupid. Cuz it doesn't work right - it turns out,
iSCSI devices (and I presume SAS devices) are not removable storage. That
means, if the device goes offline and comes back online
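Which also means that when the device does come back, the pool has to be
kicked by hand, roughly like this (pool and device names are hypothetical):

zpool online tank c4t600144F0DEADBEEFd0   # reattach the returned LUN
zpool clear tank                          # clear the accumulated errors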
From: Tim Cook [mailto:t...@cook.ms]
Sent: Wednesday, September 26, 2012 3:45 PM
I would suggest if you're doing a crossover between systems, you use
infiniband rather than ethernet. You can eBay a 40Gb IB card for under
$300. Quite frankly the performance issues should become almost a
On Thu, Sep 27, 2012 at 12:48 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: Tim Cook [mailto:t...@cook.ms]
Sent: Wednesday, September 26, 2012 3:45 PM
I would suggest if you're doing a crossover between
If you're willing to try FreeBSD, there's HAST (aka high availability
storage) for this very purpose.
You use HAST to create mirror pairs using one disk from each box, thus
creating /dev/hast/* nodes. Then you use those to create the zpool on the
'primary' box.
All writes to the pool on the
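Roughly, and only as an untested sketch with made-up resource, host, and
address names:

# /etc/hast.conf, identical on both boxes
cat > /etc/hast.conf <<'EOF'
resource disk0 {
        on boxa {
                local /dev/ada1
                remote 10.0.0.2
        }
        on boxb {
                local /dev/ada1
                remote 10.0.0.1
        }
}
EOF
hastctl create disk0                # initialize metadata, once per box
service hastd onestart
hastctl role primary disk0          # on the primary box only
zpool create tank /dev/hast/disk0   # more resources extend the pool the same way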
head units crash or do weird things, but disks persist. There are a couple of
HA head-unit solutions out there but most of them have their own separate
storage and they effectively just send transaction groups to each other.
The other way is to connect 2 nodes to an external SAS/FC chassis.
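What "send transaction groups to each other" amounts to in practice is
periodic incremental replication; a hedged sketch with hypothetical dataset
and host names:

zfs snapshot tank/vm@t2
zfs send -i tank/vm@t1 tank/vm@t2 | ssh standby zfs receive tank/vm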
On Wed, Sep 26, 2012 at 12:54 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Here's another one.
Two identical servers are sitting side by side. They could be connected
to each other via anything (presently
On Sep 26, 2012, at 10:54 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Here's another one.
Two identical servers are sitting side by side. They could be connected to
each other via anything (presently using