On Thu, Apr 26, 2012 at 12:37 PM, Richard Elling
wrote:
> [...]
NFSv4 had migration in the protocol (excluding the server-to-server
protocols) from the get-go, but a lot was missing (FedFS) and it was not
implemented until recently. I've no idea which clients and servers
support it adequately besides
On Apr 25, 2012, at 11:00 PM, Nico Williams wrote:
> On Thu, Apr 26, 2012 at 12:10 AM, Richard Elling
> wrote:
>> On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote:
>> A reboot requirement is a lame client implementation.
>
> And lame protocol design. You could possibly migrate read-write NFSv3
>
On Apr 26, 2012, at 12:27 AM, Fred Liu wrote:
> The zfs 'userused@' properties and the 'zfs userspace' command are good
> enough to gather usage statistics.
> I think I mixed that up with NetApp. If my memory is correct, we have to set
> quotas to get usage statistics under Data ONTAP.
> Further, if we can add an ILM-like feature to poll the time-related
> info(
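For reference, the commands mentioned above look like this in practice (a
minimal sketch; the dataset and user names are hypothetical):

# space charged to a single user on a dataset
$ zfs get userused@fred tank/home
# per-user usage (and any quotas) for the whole dataset
$ zfs userspace -o name,used,quota tank/home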
On Thu, Apr 26, 2012 at 5:45 PM, Carson Gaspar wrote:
> On 4/26/12 2:17 PM, J.P. King wrote:
>> I don't know SnapMirror, so I may be mistaken, but I don't see how you
>> can have non-synchronous replication that allows for seamless client
>> failover (in the general case). Technically this doe
Depends on how you define DR - we have shared storage HA in each datacenter
(NetApp cluster), and replication between them in case we lose a datacenter
(all clients on the MAN hit the same cluster unless we do a DR failover). The
latter is what I'm calling DR.
It's what I call HA. DR is wha
On 4/26/12 2:17 PM, J.P. King wrote:
Shared storage is evil (in this context). Corrupt the storage, and you
have no DR.
Now I am confused. We're talking about storage which can be used for
failover, aren't we? In which case we are talking about HA, not DR.
Depends on how you define DR - we h
That goes for all block-based replication products as well. This is
no
On 4/25/12 10:10 PM, Richard Elling wrote:
On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote:
And applications that don't pin the mount points and can be idled
during the migration. If your migration is due to a dead server, and
you have pending writes, you have no choice but to reboot the
client.
On Thu, Apr 26, 2012 at 4:34 AM, Deepak Honnalli
wrote:
> cachefs is present in Solaris 10. It is EOL'd in S11.
And for those who need/want to use Linux, the equivalent is FSCache.
--
Freddie Cash
fjwc...@gmail.com
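As a rough illustration on a Linux client (a sketch, assuming the
cachefilesd daemon is installed; the export and mount point are
hypothetical):

# start the local cache daemon that backs FS-Cache
$ sudo service cachefilesd start
# mount with the 'fsc' option so NFS data is cached on local disk
$ sudo mount -t nfs -o fsc server:/export /mnt/export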
2012/4/26 Fred Liu
>
> Currently, dedup/compression is pool-based; it doesn't have
> granularity at the file system or user or group level. There is also a
> lot of room for improvement in this aspect.
>
Compression is not pool-based, you can control it with the 'compression'
property on a
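A minimal illustration of the per-dataset control (dataset names are
hypothetical):

# enable compression on one file system; the rest of the pool is untouched
$ zfs set compression=on tank/home
# or choose a different algorithm per dataset
$ zfs set compression=gzip tank/archive
$ zfs get -r compression tank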
On 2012-04-26 11:27, Fred Liu wrote:
The zfs 'userused@' properties and the 'zfs userspace' command are good
enough to gather usage statistics.
...
Since no one is focusing on enabling default user/group quotas now, a
temporary remedy could be a script that traverses all the users/groups
in the
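Such a script might look roughly like this (a sketch only; the dataset
name and quota value are hypothetical, and it only reaches users who
already own data on the file system):

#!/bin/sh
FS=tank/home    # hypothetical dataset
QUOTA=10G       # hypothetical per-user default
# list every user charged with space on the dataset, then set an explicit quota
zfs userspace -H -o name "$FS" | while read user; do
    zfs set "userquota@$user=$QUOTA" "$FS"
done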
On 2012-04-26 14:47, Ian Collins wrote:
> I don't think it even made it into Solaris 10.
Actually, I see the kernel modules available in Solaris 10,
several builds of OpenSolaris SXCE, and current illumos.
$ find /kernel/ /platform/ /usr/platform/ /usr/kernel/ | grep -i cachefs
/kernel/fs
On 26 April, 2012 - Jim Klimov sent me these 1,6K bytes:
> Which reminds me: older Solaris releases used to have a nifty-looking
> (going by its description) cachefs, apparently to speed up NFS clients
> and reduce traffic, which we never really got to use in real
> life. AFAIK Oracle EOLed it for Solaris 11,
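For reference, using cachefs on those older releases looked roughly like
this (a sketch from memory; the cache directory and export are
hypothetical):

# create a cache directory, then mount an NFS export through it
$ cfsadmin -c /var/cache/cachefs
$ mount -F cachefs -o backfstype=nfs,cachedir=/var/cache/cachefs \
    server:/export /mnt/export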
On 2012-04-26 2:20, Ian Collins wrote:
On 04/26/12 09:54 AM, Bob Friesenhahn wrote:
On Wed, 25 Apr 2012, Rich Teer wrote:
Perhaps I'm being overly simplistic, but in this scenario, what would
prevent one from having, on a single file server, /exports/nodes/node[0-15],
and then having each node
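A sketch of that layout (the pool name and share option are assumptions):

# one file system per node on the single server, each shared over NFS
zfs create -p tank/exports/nodes
i=0
while [ $i -le 15 ]; do
    zfs create -o sharenfs=on tank/exports/nodes/node$i
    i=$((i + 1))
done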