Instead we've switched to Linux and DRBD. And if that doesn't get me
sympathy I don't know what will.
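(For anyone who hasn't used it: DRBD replicates a block device between two hosts over TCP. A minimal resource stanza, identical on both nodes — hostnames, devices, and addresses here are hypothetical, not from the poster's setup:)

```
resource r0 {
  protocol  C;            # synchronous replication: writes ack only
                          # after both nodes have them
  device    /dev/drbd0;   # replicated device the filesystem sits on
  disk      /dev/sdb1;    # local backing disk
  meta-disk internal;
  on alpha {
    address 10.0.0.1:7789;
  }
  on bravo {
    address 10.0.0.2:7789;
  }
}
```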
SvSAN does something similar and it does it rather well, I think.
http://www.stormagic.com/SvSAN.php
On Thu, Apr 26, 2012 at 12:10 AM, Richard Elling
richard.ell...@gmail.com wrote:
On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote:
Reboot requirement is a lame client implementation.
And lame protocol design. You could possibly migrate read-write NFSv3
on the fly by preserving FHs and somehow
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Nico Williams
Sent: Thursday, April 26, 2012 14:00
To: Richard Elling
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] cluster vs nfs
On 26 April, 2012 - Jim Klimov sent me these 1.6K bytes:
Which reminds me: older Solarises used to have a nifty-looking
(via descriptions) cachefs, apparently to speed up NFS clients
and reduce traffic, which we did not get to really use in real
life. AFAIK Oracle EOLed it for Solaris 11, and
On 2012-04-26 14:47, Ian Collins wrote:
I don't think it even made it into Solaris 10.
Actually, I see the kernel modules available in Solaris 10,
several builds of OpenSolaris SXCE, and illumos-current.
$ find /kernel/ /platform/ /usr/platform/ /usr/kernel/ | grep -i cachefs
On Thu, Apr 26, 2012 at 4:34 AM, Deepak Honnalli
deepak.honna...@oracle.com wrote:
cachefs is present in Solaris 10. It is EOL'd in S11.
And for those who need/want to use Linux, the equivalent is FSCache.
--
Freddie Cash
fjwc...@gmail.com
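(FS-Cache on Linux needs the cachefilesd daemon plus the 'fsc' mount option; a minimal sketch, with paths and the server name made up for illustration:)

```
# /etc/cachefilesd.conf -- where the persistent cache lives
dir /var/cache/fscache
tag mycache

# start the cache daemon, then mount NFS with caching enabled:
#   service cachefilesd start
#   mount -t nfs -o fsc server:/export /mnt/export
```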
On 4/25/12 10:10 PM, Richard Elling wrote:
On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote:
And applications that don't pin the mount points, and can be idled
during the migration. If your migration is due to a dead server, and
you have pending writes, you have no choice but to reboot the client.
Shared storage is evil (in this context). Corrupt the storage, and you have
no DR.
Now I am confused. We're talking about storage which can be used for
failover, aren't we? In which case we are talking about HA, not DR.
That goes for all block-based replication products as well. This is
On 4/26/12 2:17 PM, J.P. King wrote:
Shared storage is evil (in this context). Corrupt the storage, and you
have no DR.
Now I am confused. We're talking about storage which can be used for
failover, aren't we? In which case we are talking about HA, not DR.
Depends on how you define DR - we have shared storage HA in each datacenter
(NetApp cluster), and replication between them in case we lose a datacenter
(all clients on the MAN hit the same cluster unless we do a DR failover). The
latter is what I'm calling DR.
It's what I call HA. DR is
On Thu, Apr 26, 2012 at 5:45 PM, Carson Gaspar car...@taltos.org wrote:
On 4/26/12 2:17 PM, J.P. King wrote:
I don't know SnapMirror, so I may be mistaken, but I don't see how you
can have non-synchronous replication which can allow for seamless client
failover (in the general case).
On Thu, Apr 26, 2012 at 12:37 PM, Richard Elling
richard.ell...@gmail.com wrote:
[...]
NFSv4 had migration in the protocol (excluding protocols between
servers) from the get-go, but it was missing a lot (FedFS) and was not
implemented until recently. I've no idea what clients and servers
I agree, you need something like AFS, Lustre, or pNFS. And/or an NFS
proxy to those.
Nico
--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Subject: Re: [zfs-discuss] cluster vs nfs (was: Re: ZFS on Linux vs FreeBSD)
On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
11:26am, Richard Elling wrote:
On Apr 25, 2012, at 10:59 AM, Paul Archer wrote:
The point of a clustered filesystem was to be able to spread our data
out among all nodes and still have access
from any node without having to run
2:20pm, Richard Elling wrote:
On Apr 25, 2012, at 12:04 PM, Paul Archer wrote:
Interesting, something more complex than NFS to avoid the
complexities of NFS? ;-)
We have data coming in on multiple nodes (with local storage) that is
needed on other multiple nodes. The only
On Wed, 25 Apr 2012, Paul Archer wrote:
Simple. With a distributed FS, all nodes mount from a single DFS. With NFS,
each node would have to mount from each other node. With 16 nodes, that's
what, 240 mounts? Not to mention your data is in 16 different mounts/directory
structures, instead of
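(Paul's 240 is just the full-mesh count: each of the 16 nodes mounts an export from each of the other 15.)

```shell
# cross-mounts in a full mesh of N nodes: N * (N - 1)
nodes=16
echo $(( nodes * (nodes - 1) ))   # 240
```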
On Wed, Apr 25, 2012 at 4:26 PM, Paul Archer p...@paularcher.org wrote:
2:20pm, Richard Elling wrote:
Ignoring lame NFS clients, how is that architecture different than what
you would have
with any other distributed file system? If all nodes share data to all
other nodes, then...?
Simple.
On Wed, 25 Apr 2012, Rich Teer wrote:
Perhaps I'm being overly simplistic, but in this scenario, what would prevent
one from having, on a single file server, /exports/nodes/node[0-15], and then
having each node NFS-mount /exports/nodes from the server? Much simpler
than
your example, and
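(Concretely, Rich's single-server layout is two commands -- Solaris syntax, server name hypothetical:)

```
# on the file server: export the parent directory read-write
share -F nfs -o rw /exports/nodes

# on each node: one mount covers all 16 per-node directories
mount -F nfs server:/exports/nodes /exports/nodes
```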
On Wed, Apr 25, 2012 at 5:22 PM, Richard Elling
richard.ell...@gmail.com wrote:
Unified namespace doesn't relieve you of 240 cross-mounts (or equivalents).
FWIW,
automounters were invented 20+ years ago to handle this in a nearly seamless
manner.
Today, we have DFS from Microsoft and NFS
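(The automounter approach Richard means is an indirect map with a wildcard key, so no per-node entries are needed; map names and paths below are hypothetical, syntax per the classic Solaris automounter:)

```
# /etc/auto_master -- hand the /exports/nodes namespace to an indirect map
/exports/nodes  auto_nodes

# /etc/auto_nodes -- '*' matches any key, '&' substitutes it, so an
# access to /exports/nodes/node3 mounts node3:/export/data on demand
*  &:/export/data
```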
On Wed, Apr 25, 2012 at 5:42 PM, Ian Collins i...@ianshome.com wrote:
Aren't those general considerations when specifying a file server?
There are Lustre clusters with thousands of nodes, hundreds of them
being servers, and high utilization rates. Whatever specs you might
have for one server
On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling
richard.ell...@gmail.com wrote:
On Apr 25, 2012, at 3:36 PM, Nico Williams wrote:
I disagree vehemently. automount is a disaster because you need to
synchronize changes with all those clients. That's not realistic.
Really? I did it with NIS
Tomorrow, Ian Collins wrote:
On 04/26/12 10:34 AM, Paul Archer wrote:
That assumes the data set will fit on one machine, and that machine won't
be a
performance bottleneck.
Aren't those general considerations when specifying a file server?
I suppose. But I meant specifically that our data
On Wed, Apr 25, 2012 at 8:57 PM, Paul Kraus pk1...@gmail.com wrote:
On Wed, Apr 25, 2012 at 9:07 PM, Nico Williams n...@cryptonector.com wrote:
Nothing's changed. Automounter + data migration -> rebooting clients
(or close enough to rebooting). I.e., outage.
Uhhh, not if you design your