On Mon, May 9, 2011 at 2:11 AM, Evaldas Auryla evaldas.aur...@edqm.eu wrote:
On 05/ 6/11 07:21 PM, Brandon High wrote:
On Fri, May 6, 2011 at 9:15 AM, Ray Van Dolson rvandol...@esri.com
wrote:
We use dedupe on our VMware datastores and typically see 50% savings,
often times more. We do of
On Wed, May 04, 2011 at 08:49:03PM -0700, Edward Ned Harvey wrote:
From: Tim Cook [mailto:t...@cook.ms]
That's patently false. VM images are the absolute best use-case for dedup
outside of backup workloads. I'm not sure who told you/where you got the
idea that VM images are not ripe
On Fri, May 6, 2011 at 9:15 AM, Ray Van Dolson rvandol...@esri.com wrote:
We use dedupe on our VMware datastores and typically see 50% savings,
often times more. We do of course keep like VM's on the same volume
I think NetApp uses 4k blocks by default, so the block size and
alignment should
From: Garrett D'Amore [mailto:garr...@nexenta.com]
We have customers using dedup with lots of vm images... in one extreme
case they are getting dedup ratios of over 200:1!
I assume you're talking about a situation where there is an initial VM image,
and then to clone the machine, the
Hi,
On 05/ 5/11 03:02 PM, Edward Ned Harvey wrote:
From: Garrett D'Amore [mailto:garr...@nexenta.com]
We have customers using dedup with lots of vm images... in one extreme
case they are getting dedup ratios of over 200:1!
I assume you're talking about a situation where there is an initial
On Thu, 2011-05-05 at 09:02 -0400, Edward Ned Harvey wrote:
From: Garrett D'Amore [mailto:garr...@nexenta.com]
We have customers using dedup with lots of vm images... in one extreme
case they are getting dedup ratios of over 200:1!
I assume you're talking about a situation where there
I assume you're talking about a situation where there is an initial VM image,
and then to clone the machine, the customers copy the VM, correct?
If that is correct, have you considered ZFS cloning instead?
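A minimal sketch of the clone approach, with hypothetical pool and dataset names:

  # Snapshot one golden image, then clone it per guest. Clones share
  # blocks with the snapshot, so the space savings need no DDT in RAM.
  zfs snapshot tank/vm/golden@deploy
  zfs clone tank/vm/golden@deploy tank/vm/guest01
  zfs clone tank/vm/golden@deploy tank/vm/guest02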
When I said dedup wasn't good for VM's, what I'm talking about is: If there is data
We have customers using dedup with lots of vm images... in one extreme case
they are getting dedup ratios of over 200:1!
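For anyone who wants to verify a ratio like that on their own pool (pool name hypothetical), the pool-wide figure is exposed directly:

  zpool get dedupratio tank   # deduplication ratio as a pool property
  zpool list tank             # DEDUP column in the pool summary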
You don't need dedup or sparse files for zero filling. Simple zle compression
will eliminate those for you far more efficiently and without needing massive
amounts of ram.
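A quick sketch of that, assuming a hypothetical dataset tank/vmstore:

  # zle compresses only runs of zeroes, so it is nearly free on CPU and
  # collapses zero-filled regions with no dedup table involved.
  zfs set compression=zle tank/vmstore
  zfs get compressratio tank/vmstore   # check the resulting savings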
On Wed, May 4, 2011 at 8:23 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Generally speaking, dedup doesn't work on VM images. (Same is true for ZFS
or netapp or anything else.) Because the VM images are all going to have
their own filesystems internally with
On May 5, 2011, at 2:58 PM, Brandon High wrote:
On Wed, May 4, 2011 at 8:23 PM, Edward Ned Harvey
Or if you're intimately familiar with both the guest and host filesystems, and
you choose blocksizes carefully to make them align. But that seems
complicated and likely to fail.
Using a 4k
On May 5, 2011, at 6:02 AM, Edward Ned Harvey wrote:
Is this a zfs discussion list, or a nexenta sales promotion list?
Obviously, this is a Nexenta sales promotion list. And Oracle. And OSX.
And BSD. And Linux. And anyone who needs help or can offer help with ZFS
technology :-) This list has
From: Brandon High [mailto:bh...@freaks.com]
On Wed, May 4, 2011 at 8:23 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Generally speaking, dedup doesn't work on VM images. (Same is true for
ZFS
or netapp or anything else.) Because the VM images are all
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
If you have to use the 4k recordsize, it is likely to consume 32x more
memory than the default 128k recordsize of ZFS. At this rate, it becomes
increasingly difficult to
On Thu, May 5, 2011 at 8:50 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
If you have to use the 4k recordsize, it is likely to consume 32x more
memory than the default 128k recordsize of ZFS. At this rate, it becomes
increasingly difficult to get a
There are a number of threads (this one[1] for example) that describe
memory requirements for deduplication. They're pretty high.
I'm trying to get a better understanding... on our NetApps we use 4K
block sizes with their post-process deduplication and get pretty good
dedupe ratios for VM
On 5/4/2011 9:57 AM, Ray Van Dolson wrote:
There are a number of threads (this one[1] for example) that describe
memory requirements for deduplication. They're pretty high.
I'm trying to get a better understanding... on our NetApps we use 4K
block sizes with their post-process deduplication
On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote:
On 5/4/2011 9:57 AM, Ray Van Dolson wrote:
There are a number of threads (this one[1] for example) that describe
memory requirements for deduplication. They're pretty high.
I'm trying to get a better understanding... on our
On Wed, May 4, 2011 at 12:29 PM, Erik Trimble erik.trim...@oracle.com wrote:
I suspect that NetApp does the following to limit their resource
usage: they presume the presence of some sort of cache that can be
dedicated to the DDT (and, since they also control the hardware, they can
On 5/4/2011 2:54 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote:
(2) Block size: a 4k block size will yield better dedup than a 128k
block size, presuming reasonable data turnover. This is inherent, as
any single bit change in a block will make it
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High wrote:
On Wed, May 4, 2011 at 12:29 PM, Erik Trimble erik.trim...@oracle.com wrote:
I suspect that NetApp does the following to limit their resource
usage: they presume the presence of some sort of cache that can be
dedicated
On Wed, May 04, 2011 at 03:49:12PM -0700, Erik Trimble wrote:
On 5/4/2011 2:54 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote:
(2) Block size: a 4k block size will yield better dedup than a 128k
block size, presuming reasonable data turnover. This
On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High wrote:
On Wed, May 4, 2011 at 12:29 PM, Erik Trimble erik.trim...@oracle.com wrote:
I suspect that NetApp does the following to limit their resource
usage: they presume the presence of
On Wed, May 4, 2011 at 6:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High wrote:
On Wed, May 4, 2011 at 12:29 PM, Erik Trimble erik.trim...@oracle.com
wrote:
I suspect that NetApp
On 5/4/2011 4:17 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 03:49:12PM -0700, Erik Trimble wrote:
On 5/4/2011 2:54 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote:
(2) Block size: a 4k block size will yield better dedup than a 128k
block size,
On 5/4/2011 4:44 PM, Tim Cook wrote:
On Wed, May 4, 2011 at 6:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High wrote:
On Wed, May 4,
On Wed, May 04, 2011 at 04:51:36PM -0700, Erik Trimble wrote:
On 5/4/2011 4:44 PM, Tim Cook wrote:
On Wed, May 4, 2011 at 6:36 PM, Erik Trimble erik.trim...@oracle.com
wrote:
On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 02:55:55PM
On Wed, May 4, 2011 at 4:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
If so, I'm almost certain NetApp is doing post-write dedup. That way, the
strictly controlled max FlexVol size helps with keeping the resource limits
down, as it will be able to round-robin the post-write dedup to each
On Wed, May 4, 2011 at 6:51 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 5/4/2011 4:44 PM, Tim Cook wrote:
On Wed, May 4, 2011 at 6:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High
On 5/4/2011 5:11 PM, Brandon High wrote:
On Wed, May 4, 2011 at 4:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
If so, I'm almost certain NetApp is doing post-write dedup. That way, the
strictly controlled max FlexVol size helps with keeping the resource limits
down, as it will be able to
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
ZFS's problem is that it needs ALL the resources for EACH pool ALL the
time, and can't really share them well if it expects to keep performance
from tanking... (no pun intended)
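To see what the table actually costs on a given pool (pool name hypothetical), zdb can report the DDT entry counts and sizes:

  zdb -D tank    # DDT summary: entries, on-disk and in-core size
  zdb -DD tank   # adds a histogram of blocks by reference count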
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ray Van Dolson
Are any of you out there using dedupe ZFS file systems to store VMware
VMDK (or any VM tech. really)? Curious what recordsize you use and
what your hardware specs /
On Wed, May 4, 2011 at 10:15 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
ZFS's problem is that it needs ALL the resources for EACH pool ALL
On Wed, May 4, 2011 at 10:23 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ray Van Dolson
Are any of you out there using dedupe ZFS file systems to store
From: Tim Cook [mailto:t...@cook.ms]
ZFS's problem is that it needs ALL the resources for EACH pool ALL the
time, and can't really share them well if it expects to keep performance
from tanking... (no pun intended)
That's true, but on the flipside, if you don't have adequate resources
From: Tim Cook [mailto:t...@cook.ms]
That's patently false. VM images are the absolute best use-case for dedup
outside of backup workloads. I'm not sure who told you/where you got the
idea that VM images are not ripe for dedup, but it's wrong.
Well, I got that idea from this list. I said
Hi guys,
I'm currently running 2 zpools each in a raidz1 configuration, totaling
around 16TB usable data. I'm running it all on an OpenSolaris based box with
2gb memory and an old Athlon 64 3700 CPU, I understand this is very poor and
underpowered for deduplication, so I'm looking at building a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Michael
Core i7 2600 CPU
16gb DDR3 Memory
64GB SSD for ZIL (optional)
Would this produce decent results for deduplication of 16TB worth of pools
or would I need more RAM still?
What
On 2/7/2011 1:06 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Michael
Core i7 2600 CPU
16gb DDR3 Memory
64GB SSD for ZIL (optional)
Would this produce decent results for deduplication of 16TB worth of pools
or
On 6 February 2011 01:34, Michael michael.armstr...@gmail.com wrote:
Hi guys,
I'm currently running 2 zpools each in a raidz1 configuration, totaling
around 16TB usable data. I'm running it all on an OpenSolaris based box with
2gb memory and an old Athlon 64 3700 CPU, I understand this is
Hi,
this has already been the source of a lot of interesting discussions; so
far I haven't found the ultimate conclusion. From some discussion on
this list in February, I learned that an entry in ZFS' deduplication
table takes (in practice) half a KiB of memory. At the moment my data
looks like
- Brandon High bh...@freaks.com wrote:
On Sun, Jun 6, 2010 at 10:46 AM, Brandon High bh...@freaks.com
wrote:
No, that's the number that stuck in my head though.
Here's a reference from Richard Elling:
(http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/038018.html)
Around
On Fri, Jun 04, 2010 at 01:10:44PM -0700, Ray Van Dolson wrote:
On Fri, Jun 04, 2010 at 01:03:32PM -0700, Brandon High wrote:
On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson rvandol...@esri.com wrote:
Makes sense. So, as someone else suggested, decreasing my block size
may improve the
- Ray Van Dolson rvandol...@esri.com wrote:
FYI;
With 4K recordsize, I am seeing 1.26x dedupe ratio between the RHEL 5.4
ISO and the RHEL 5.5 ISO file.
However, it took about 33 minutes to copy the 2.9GB ISO file onto the
filesystem. :) Definitely would need more RAM in this
- Brandon High bh...@freaks.com wrote:
Decreasing the block size increases the size of the dedup table
(DDT).
Every entry in the DDT uses somewhere around 250-270 bytes.
Are you sure it's that high? I was told it's ~150 per block, or ~1.2GB per
terabyte of storage with only 128k blocks
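Both figures are in the same ballpark once the block count is worked out; a rough back-of-the-envelope (shell arithmetic, assuming full 128k blocks):

  echo $(( (1 << 40) / (128 * 1024) ))      # 1 TiB / 128 KiB = 8388608 blocks
  echo $(( 8388608 * 150 / 1024 / 1024 ))   # ~1200 MiB of DDT at 150 B/entry
  echo $(( 8388608 * 270 / 1024 / 1024 ))   # ~2160 MiB of DDT at 270 B/entry
  # The same terabyte in 4k blocks has 32x as many entries, i.e. very
  # roughly 38-68 GiB of table, depending on which per-entry figure holds.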
On Sun, Jun 6, 2010 at 3:26 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
- Brandon High bh...@freaks.com wrote:
Decreasing the block size increases the size of the dedup table
(DDT).
Every entry in the DDT uses somewhere around 250-270 bytes.
Are you sure it's that high? I was told
- Brandon High bh...@freaks.com wrote:
On Sun, Jun 6, 2010 at 3:26 AM, Roy Sigurd Karlsbakk
r...@karlsbakk.net wrote:
- Brandon High bh...@freaks.com wrote:
Decreasing the block size increases the size of the dedup table
(DDT).
Every entry in the DDT uses somewhere around
On Sun, Jun 6, 2010 at 10:46 AM, Brandon High bh...@freaks.com wrote:
No, that's the number that stuck in my head though.
Here's a reference from Richard Elling:
(http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/038018.html)
Around 270 bytes, or one 512 byte sector.
-B
--
Brandon
I'm running zpool version 23 (via ZFS fuse on Linux) and have a zpool
with deduplication turned on.
I am testing how well deduplication will work for the storage of many,
similar ISO files and so far am seeing unexpected results (or perhaps
my expectations are wrong).
The ISO's I'm testing with
On Fri, Jun 4, 2010 at 9:30 AM, Ray Van Dolson rvandol...@esri.com wrote:
The ISO's I'm testing with are the 32-bit and 64-bit versions of the
RHEL5 DVD ISO's. While both have their differences, they do contain a
lot of similar data as well.
Similar != identical.
Dedup works on blocks in
On Fri, Jun 04, 2010 at 11:16:40AM -0700, Brandon High wrote:
On Fri, Jun 4, 2010 at 9:30 AM, Ray Van Dolson rvandol...@esri.com wrote:
The ISO's I'm testing with are the 32-bit and 64-bit versions of the
RHEL5 DVD ISO's. While both have their differences, they do contain a
lot of similar
Makes sense. So, as someone else suggested, decreasing my block size
may improve the deduplication ratio.
recordsize I presume is the value to tweak?
It is, but keep in mind that zfs will need about 150 bytes for each block. 1TB
with 128k blocks will need about 1GB memory for the index to
On Fri, Jun 04, 2010 at 12:37:01PM -0700, Ray Van Dolson wrote:
On Fri, Jun 04, 2010 at 11:16:40AM -0700, Brandon High wrote:
On Fri, Jun 4, 2010 at 9:30 AM, Ray Van Dolson rvandol...@esri.com wrote:
The ISO's I'm testing with are the 32-bit and 64-bit versions of the
RHEL5 DVD ISO's.
On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson rvandol...@esri.com wrote:
Makes sense. So, as someone else suggested, decreasing my block size
may improve the deduplication ratio.
It might. It might make your performance tank, too.
Decreasing the block size increases the size of the dedup
On Fri, Jun 04, 2010 at 01:03:32PM -0700, Brandon High wrote:
On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson rvandol...@esri.com wrote:
Makes sense. So, as someone else suggested, decreasing my block size
may improve the deduplication ratio.
It might. It might make your performance tank,
On 05.06.10 00:10, Ray Van Dolson wrote:
On Fri, Jun 04, 2010 at 01:03:32PM -0700, Brandon High wrote:
On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson rvandol...@esri.com wrote:
Makes sense. So, as someone else suggested, decreasing my block size
may improve the deduplication ratio.
It
In reading this blog post:
http://blogs.sun.com/bobn/entry/taking_zfs_deduplication_for_a
a question came to mind.
To understand the context of the question, consider the opening paragraph
from the above post:
Here is my test case: I have 2 directories of photos, totaling about 90MB
each.
Colin Raven wrote:
What happens if, once dedup is on, I (or someone else with delete
rights) open a photo management app containing that collection, and
start deleting dupes - AND - happen to delete the original that all
other references are pointing to. I know, I know, it doesn't matter -
On Tuesday 08 December 2009 14:00, Colin Raven wrote:
Help in understanding this would be hugely helpful - anyone?
i am no pro in zfs, but to my understanding there is no original.
All the files have pointers to blocks on disk. Even if there is no other file
that shares the same block on the
Colin,
I think you mix up the filesystem layer (where the individual files as
maintained) and the block layer, where actual data is stored.
The analogue of deduplication on the filesystem layer would be to create hard
links of the files, where deleting one file does not remove the other link.
i am no pro in zfs, but to my understanding there is no original.
That is correct. From a semantic perspective, there is no change
in behavior between dedup=off and dedup=on. Even the accounting
remains the same: each reference to a block is charged to the dataset
making the reference. The
On Tue, Dec 8, 2009 at 22:54, Jeff Bonwick jeff.bonw...@sun.com wrote:
i am no pro in zfs, but to my understanding there is no original.
That is correct. From a semantic perspective, there is no change
in behavior between dedup=off and dedup=on. Even the accounting
remains the same: each
On Fri, Jul 17, 2009 at 2:42 PM, Brandon High bh...@freaks.com wrote:
The keynote was given on Wednesday. Any more willingness to discuss
dedup on the list now?
The following video contains a de-duplication overview from Bill and Jeff:
https://slx.sun.com/1179275620
Hope this helps,
- Ryan
Thanks James! I look forward to these - we could really use dedup in my org.
Blake
On Thu, Sep 17, 2009 at 6:02 PM, James C. McPherson
james.mcpher...@sun.com wrote:
On Thu, 17 Sep 2009 11:50:17 -0500
Tim Cook t...@cook.ms wrote:
On Thu, Sep 17, 2009 at 5:27 AM, Thomas Burgess
2009/9/17 Brandon High bh...@freaks.com:
2009/9/11 C. Bergström codest...@osunix.org:
Can we make a FAQ on this somewhere?
1) There is some legal bla bla between Sun and green-bytes that's tying up
the IP around dedup... (someone knock some sense into green-bytes please)
2) there's an
I think you're right, and I also think we'll still see a new post asking
about it once or twice a week.
On Thu, Sep 17, 2009 at 2:20 AM, Cyril Plisko cyril.pli...@mountall.com wrote:
2009/9/17 Brandon High bh...@freaks.com:
2009/9/11 C. Bergström codest...@osunix.org:
Can we make a FAQ on
On Thu, Sep 17, 2009 at 5:27 AM, Thomas Burgess wonsl...@gmail.com wrote:
I think you're right, and I also think we'll still see a new post asking
about it once or twice a week.
On Thu, Sep 17, 2009 at 2:20 AM, Cyril Plisko
cyril.pli...@mountall.com wrote:
2009/9/17 Brandon High
On Thu, 17 Sep 2009 11:50:17 -0500
Tim Cook t...@cook.ms wrote:
On Thu, Sep 17, 2009 at 5:27 AM, Thomas Burgess wonsl...@gmail.com wrote:
I think you're right, and I also think we'll still see a new post asking
about it once or twice a week.
[snip]
As we should. Did the video of the
2009/9/11 C. Bergström codest...@osunix.org:
Can we make a FAQ on this somewhere?
1) There is some legal bla bla between Sun and green-bytes that's tying up
the IP around dedup... (someone knock some sense into green-bytes please)
2) there's an acquisition that's got all sorts of delays..
I'll maintain hope for seeing/hearing the presentation until you guys announce
that you had NASA store the tape for safe-keeping.
Bump'd.
buMP? I watched the stream for several hours and never heard a word about
dedupe. The blogs also all seem to be completely bare of mention. What's the
deal?
On Mon, 27 Jul 2009 15:17:52 -0700 (PDT)
Tim Cook no-re...@opensolaris.org wrote:
buMP? I watched the stream for several hours and never heard a word
about dedupe. The blogs also all seem to be completely bare of mention.
What's the deal?
ZFS Deduplication was most definitely talked about
The keynote was given on Wednesday. Any more willingness to discuss
dedup on the list now?
-B
--
Brandon High : bh...@freaks.com
Do we know if this web article will be discussed at the conference in
Brisbane, Australia this week?
http://www.pcworld.com/article/168428/sun_tussles_with_deduplication_startup.html?tk=rss_news
I do not expect details, but at least Sun's position on this instead of
leaving people to rely on rumors like
On 15/07/2009, at 1:51 PM, Jean Dion wrote:
Do we know if this web article will be discussed at the conference in
Brisbane, Australia this week?
http://www.pcworld.com/article/168428/sun_tussles_with_deduplication_startup.html?tk=rss_news
I do not expect details, but at least Sun's position on this
Richard,
Also, we now know the market value for dedupe intellectual property: $2.1
Billion.
Even though there may be open source, that does not mean there are not IP
barriers. $2.1 Billion attracts a lot of lawyers :-(
Indeed, good point.
--
Regards,
Cyril
jcm == James C McPherson james.mcpher...@sun.com writes:
dm == David Magda dma...@ee.ryerson.ca writes:
jcm What I can say, however, is that open source does not always
jcm equate to requiring open development.
+1
To maintain what draws me to free software, you must
* release
On Sun, Jul 12, 2009 at 7:06 AM, James C.
McPherson james.c.mcpher...@gmail.com wrote:
Anil wrote:
When it comes out, how will it work?
Does it work at the pool level or a zfs file system level? If I
create a zpool called 'zones' and then I create several zones
underneath that, could I
On Sun, 12 Jul 2009, Cyril Plisko wrote:
There is ongoing speculation about what/when/how deduplication will
be in ZFS and I am curious: what is the reason to keep the thing
secret ? I always thought open source assumes open development
process. What exactly people behind deduplication effort
On Sun, Jul 12, 2009 at 12:57 PM, Andre van Eyssen an...@purplecow.org wrote:
On Sun, 12 Jul 2009, Cyril Plisko wrote:
There is ongoing speculation about what/when/how deduplication will
be in ZFS and I am curious: what is the reason to keep the thing
secret ? I always thought open source
On Sun, Jul 12, 2009 at 7:27 PM, Cyril Plisko cyril.pli...@mountall.com wrote:
I am talking about the process, not the announcement.
What's wrong with process?
--
Kind regards, BM
Things, that are stupid at the beginning, rarely ends up wisely.
On Sun, 12 Jul 2009 12:53:59 +0300
Cyril Plisko cyril.pli...@mountall.com wrote:
On Sun, Jul 12, 2009 at 7:06 AM, James C.
McPherson james.c.mcpher...@gmail.com wrote:
Anil wrote:
When it comes out, how will it work?
Does it work at the pool level or a zfs file system level? If I
I don't think this is anything unusual, nor suspicious. Sun have released huge
amounts of code to the open source communities, and the very fact that you can
come on these forums, ask a question like that, and get answers back from some
of the kernel developers shows just how open Sun is.
Hello James,
Hi Cyril,
I don't work with Jeff and Bill, and I cannot speak for them
about this.
What I can say, however, is that open source does not always
equate to requiring open development.
Indeed. However, willingness to openly develop an open source project, or
the lack thereof, is also
On Sun, 12 Jul 2009, Cyril Plisko wrote:
Open source is much more than throwing the code over the wall.
Heck, in the early pilot days I was told by a number of Sun engineers
that the reason things are taking time is exactly that - we do not
want to just throw the code over the wall - we want
On Sun, 12 Jul 2009, Cyril Plisko wrote:
So Jeff, Bill and team (I know you are on this list), is there any
reason the ZFS deduplication project isn't run as an OpenSolaris project?
With code repository, mailing list and all the other things publicly
available. That way the development process becomes
On Sun, 12 Jul 2009, Bob Friesenhahn wrote:
This is the first I have heard about a ZFS deduplication project. Is there a
public announcement (from Sun) somewhere that there is a ZFS deduplication
project or are you just speculating that there might be such a project?
Ahhh, I found some
On Jul 11, 2009, at 21:11, Anil wrote:
When it comes out, how will it work?
I'm more interested in being able to remove devices from a pool, and
perhaps changing a pool from RAID-Z to -Z2 on the fly.
Presumably all of these features are depending on *bp re-write.
On Jul 12, 2009, at 08:05, Cyril Plisko wrote:
Indeed. However, willingness to openly develop an open source project, or
the lack thereof, is also considered by the community.
Open source is much more than throwing the code over the wall.
Heck, in the early pilot days I was told by a number of Sun
Yup, that's one feature I'm eagerly awaiting too; the list of things it could
facilitate is huge.
Ross wrote:
I don't think this is anything unusual, nor suspicious. Sun have released huge
amounts of code to the open source communities, and the very fact that you can
come on these forums, ask a question like that, and get answers back from some
of the kernel developers shows just how
When it comes out, how will it work?
Does it work at the pool level or a zfs file system level? If I create a zpool
called 'zones' and then I create several zones underneath that, could I expect
to see a lot of disk space savings if I enable dedup on the pool?
Just curious as to what's coming
Anil wrote:
Does it work at the pool level or a zfs file system level? If I create a zpool
called 'zones' and then I create several zones underneath that, could I expect
to see a lot of disk space savings if I enable dedup on the pool?
You can get the same savings by cloning your zones.
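On the pool vs. filesystem question: dedup is a per-dataset property, but setting it at the top level lets the children inherit it. A sketch with a hypothetical pool named zones:

  zfs set dedup=on zones    # zone datasets below it inherit the setting
  zfs get -r dedup zones    # confirm what each child actually uses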
On Sat, Jul 11, 2009 at 9:32 PM, Ian Collins i...@ianshome.com wrote:
Anil wrote:
Does it work at the pool level or a zfs file system level? If I create a
zpool called 'zones' and then I create several zones underneath that, could
I expect to see a lot of disk space savings if I enable
Anil wrote:
When it comes out, how will it work?
Does it work at the pool level or a zfs file system level? If I
create a zpool called 'zones' and then I create several zones
underneath that, could I expect to see a lot of disk space savings if
I enable dedup on the pool?
Just curious as to