Looks great. Any (long-term, distant-future) plans of integrating this with
Trusted Extensions for a really quick setup? With a check to make sure the key's
password != the label, just to be sure :-P
This message posted from opensolaris.org
Can we have a physical topology of:
1. many Web servers running ZFS
2. many File servers with JBODs running ZFS.
I heard that this will be possible in the future, allowing a single write
operation to be locked across the many File servers.
Yes or no?
Multi-master replication is when both computers are read and write, basically.
I meant this in the context of a data center with one ZFS pool, and another
data center with another. The idea being that you can keep the two in sync
using multi-master replication.
Ged wrote:
Multi-master replication is when both computers are read and write, basically.
I meant this in the context of a data center with one ZFS pool, and another
data center with another. The idea being that you can keep the two in sync
using multi-master replication.
Of course you
Ged wrote:
Can we have a physical topology of:
1. many Web servers running ZFS
2. many File servers with JBODs running ZFS.
I heard that this will be possible in the future, allowing a single write
operation to be locked across the many File servers.
Yes or no?
The simplest way to do
Richard,
I assume you mean: multiple ZFS pools mapped to a single File server, which then
publishes the files over NFS to the web servers?
Or do you mean
multiple ZFS pools mapped to MANY File servers, which then expose the files over
NFS to the web servers?
I am trying to work out what I can
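For the first topology, a minimal sketch of one file server publishing a ZFS
dataset over NFS to the web servers; the pool, dataset, hostname, and mount
point below are all hypothetical:

```shell
# On the file server: create a dataset and share it over NFS.
# Pool/dataset names are hypothetical.
zfs create tank/web
zfs set sharenfs=on tank/web

# On each web server: mount the shared dataset.
# "filer" and the mount point are hypothetical.
mount -F nfs filer:/tank/web /var/www
```

The second topology (many file servers) would just repeat this per filer, but
nothing coordinates writes across the pools.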
Hi Ged;
At the moment ZFS is not a shared file system nor a parallel file system.
However, Lustre integration, which will take some time, will provide parallel
file system abilities. I am unsure whether Lustre at the moment supports
redundancy between storage nodes (it was on the roadmap)
But
To my mind ZFS has a serious deficiency for JBOD usage in a high-availability
clustered environment.
Namely, the inability to tie spare drives to a particular storage group.
For example, in clustered HA setups you would want 2 SAS JBOD units and
mirror between them. In this way if a chassis
We had a Sun Engineer on-site recently who said this:
We should set our array controllers to sequential I/O *even* if we are doing
random I/O if we are using ZFS. This is because the ARC is already
grouping requests sequentially, so to the array controller it will appear
like
Yes, we do this currently on some systems where we haven't had time to install
and test cluster software.
Even an old 3310 array can be set up so 2 servers have the storage visible. We
export the pool on one system and import it on the other, move over a virtual
IP, and the service is back up.
You
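The manual failover just described can be sketched as follows; the pool name,
interface, and address are hypothetical, and both hosts must see the same
storage:

```shell
# On the host giving up the service: cleanly release the pool.
zpool export tank

# On the host taking over: import the pool and bring up the
# service's virtual IP (interface/address are hypothetical).
zpool import tank
ifconfig e1000g0 addif 192.168.1.100 netmask 255.255.255.0 up
```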
As the subject says - a quick grovel around didn't say that ZFS boot/root had
made it into SXDE 9/07. Before I download it and try, can anyone save me the
bandwidth?
Thanks!
___
zfs-discuss mailing list
On Sun, 21 Oct 2007, Ian Collins wrote:
Carl Brewer wrote:
As the subject says - a quick grovel around didn't say that ZFS boot/root
had made it into SXDE 9/07. Before I download it and try, can anyone save me
the bandwidth?
Thanks!
It didn't. It still isn't supported by the installer.
Vincent Fox wrote:
To my mind ZFS has a serious deficiency for JBOD usage in a high-availability
clustered environment.
I don't agree.
Namely, inability to tie spare drives to a particular storage group.
For example, in clustered HA setups you would want 2 SAS JBOD units and
mirror