al == Adam Leventhal a...@eng.sun.com writes:
al As always, we welcome feedback (although zfs-discuss is not
al the appropriate forum),
``Please, you criticize our work in private while we compliment it in
public.''
tc == Tim Cook t...@cook.ms writes:
tc I'm betting its more the fact that zfs-discuss is not
Firstly, there's no need for you to respond on anyone's behalf,
especially not by ``betting.''
Secondly, fishworks does run ZFS, and I for one am interested in what
works and what doesn't.
On Mon, Mar 8, 2010 at 5:47 PM, Miles Nordin car...@ivy.net wrote:
Firstly, there's no need for you to respond on anyone's behalf,
especially not by ``betting.''
I'm not betting, I know. It's …
Hi,
I have tried out what dedup does on a test dataset that I filled with 372 GB
of partly redundant data, using snv_133. All in all it was successful: the net
data volume was only 120 GB. Destroying the dataset afterwards took a while,
but without compromising anything else.
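For anyone who wants to reproduce a test like this, a minimal sketch of the
commands involved follows; the pool and dataset names (tank, tank/dduptest)
are placeholders, and dedup only applies to blocks written after the property
is set. Note that the savings show up in pool-level accounting, not in the
per-dataset used figures:

  zfs create -o dedup=on tank/dduptest   # placeholder names; only new writes are deduplicated
  # ... copy the test data into tank/dduptest ...
  zpool list tank                        # the DEDUP column shows the achieved ratio
  zpool get dedupratio tank              # the same ratio exposed as a read-only pool property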
Hi,
so, what would be a critical test size in your opinion? Are there any other
side conditions?
That is, I am not using any snapshots and have also turned off automatic
snapshots, because I was bitten by system hangs while destroying datasets that
still had live snapshots.
I am also aware that Fishworks …
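For reference, turning off the automatic snapshots mentioned above involves a
per-dataset property plus the Time Slider SMF instances; a sketch, with the
dataset name as a placeholder and the service FMRI as I recall it on
OpenSolaris builds of that era (verify with svcs):

  zfs set com.sun:auto-snapshot=false tank/dduptest   # opt this dataset out of automatic snapshots
  svcs -a | grep auto-snapshot                         # list the Time Slider SMF instances
  svcadm disable svc:/system/filesystem/zfs/auto-snapshot:daily   # repeat for hourly, weekly, ...
  zfs list -t snapshot -r tank/dduptest                # see which snapshots still exist
  zfs destroy -r tank/dduptest                         # -r removes the dataset and its snapshots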
On Fri, Mar 5, 2010 at 4:48 PM, Tonmaus sequoiamo...@gmx.net wrote:
Hi,
so, what would be a critical test size in your opinion? Are there any other
side conditions?
When your dedup hash table (a table that holds a checksum of every block
seen on filesystems/zvols after dedup was enabled) …
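To put numbers behind that, the size of the dedup table can be inspected (or
simulated) directly; a sketch, assuming a pool called tank. A rough rule of
thumb discussed on this list is on the order of a few hundred bytes of RAM per
DDT entry, so the entry count gives a ballpark figure for memory needs:

  zdb -S tank            # simulate dedup on an existing pool and print the would-be DDT histogram
  zdb -DD tank           # on a pool that already has dedup enabled, print DDT statistics
  zpool status -D tank   # summary: number of DDT entries plus their on-disk and in-core sizes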
On Thu, Mar 4, 2010 at 8:08 AM, Henrik Johansson henr...@henkis.net wrote:
Hi all,
Now that the Fishworks 2010.Q1 release seems to get deduplication, does
anyone know if bugid 6924824 (destroying a dedup-enabled dataset bricks the
system) is still valid? It has not been fixed in onnv and it is …
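If anyone wants to probe the destroy behaviour without risking a production
pool, a throwaway file-backed pool is one way to do it; a sketch with
arbitrary sizes and paths:

  mkfile 2g /var/tmp/ddtest.img             # backing store for a scratch pool
  zpool create scratch /var/tmp/ddtest.img
  zfs create -o dedup=on scratch/fs
  # ... fill scratch/fs with representative data, then time the destroy ...
  ptime zfs destroy scratch/fs
  zpool destroy scratch
  rm /var/tmp/ddtest.img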
On Thu, Mar 4, 2010 at 4:40 PM, zfs ml zf...@itsbeen.sent.com wrote:
On 3/4/10 9:17 AM, Brent Jones wrote:
My rep says "Use dedupe at your own risk" at this time.
Guess they've been seeing a lot of issues, and regardless of whether it's
'supported' or not, he said not to use it.
So it's not a …
It seems they kind of rushed the appliance into the market. We have a few
7410s, and replication (with zfs send/receive) doesn't work after shares reach
~1 TB (broken pipe error).
While it's the case that the 7000 series is a relatively new product, the
characterization of "rushed to market" is …
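Regarding the broken-pipe symptom with send/receive, a workaround often
suggested on this list is to put a buffer between the two ends so a slow
receive does not stall the transport; a sketch, assuming mbuffer is installed
on both hosts, with host, pool and share names as placeholders:

  zfs snapshot -r tank/share@repl1
  zfs send -R tank/share@repl1 | mbuffer -q -s 128k -m 1G | \
      ssh backuphost 'mbuffer -q -s 128k -m 1G | zfs receive -F backup/share'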