zpool import done! Back online.
Total downtime for the 4TB pool was about 8 hours; I don't know how much of
this was spent completing the destroy transaction.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Bob Friesenhahn wrote:
On Mon, 7 Dec 2009, Michael DeMan (OA) wrote:
Args for FreeBSD + ZFS:
- Limited budget
- We are familiar with managing FreeBSD.
- We are familiar with tuning FreeBSD.
- Licensing model
Args against OpenSolaris + ZFS:
- Hardware compatibility
- Lack of knowledge for
Hi all,
Is there any way to generate some report related to the de-duplication
feature of ZFS within a zpool/zfs pool?
I mean, it's nice to have the dedup ratio, but I think it would also be
good to have a report where we could see which directories/files have
been found to be duplicates and therefore
On Wed, Dec 9, 2009 at 2:26 PM, Bruno Sousa bso...@epinfante.com wrote:
Hi all,
Is there any way to generate some report related to the de-duplication
feature of ZFS within a zpool/zfs pool?
I mean, it's nice to have the dedup ratio, but I think it would also be
good to have a report where
I'm planning to try out deduplication in the near future, but started
wondering if I can prepare for it on my servers. One thing which struck
me was that I should change the checksum algorithm to sha256 as soon as
possible. But I wonder -- is that sufficient? Will the dedup code know
about old
Hi Andrey,
For instance, I talked about deduplication to my manager and he was
happy because less data = less storage, and therefore lower costs.
However, now the IT group of my company needs to provide the management
board with a report of duplicated data found per share, and in our case one
share
zpool import done! Back online.
Total downtime for 4TB pool was about 8 hours, don't
know how much of this was completing the destroy
transaction.
Lucky You! :)
My box has gone totally unresponsive again :( I cannot even ping it now and I
can't hear the disks thrashing.
On Wed, Dec 9, 2009 at 2:47 PM, Bruno Sousa bso...@epinfante.com wrote:
Hi Andrey,
For instance, I talked about deduplication to my manager and he was
happy because less data = less storage, and therefore lower costs.
However, now the IT group of my company needs to provide to management
Hi There,
does anybody know if there's a roadmap or simply a list of the future
features of ZFS?
It would be interesting to see what will happen in the future.
THX
Henri
Hi ZFS guys,
when playing with one of the recent versions of the OpenSolaris GUI installer,
I tried to restart it after a previous failure.
However, the installer failed when trying to destroy the previously created
ZFS root pool. It was discovered that this is due to the fact that the dump
ZFS volume could
Folks,
I've been seeing this for a while, but never had the urge to ask, until now.
When I take a snapshot of my current root-FS and tell the system to reboot off
that snapshot, I'm faced with an assertion failure (running DEBUG bits) that
looks like this:
r...@codemonkey:~# df -h /
Filesystem
I have disabled all 'non-important' processes (gdm, ssh, vnc, etc). I am now
starting this process locally on the server via the console with about 3.4 GB
free of RAM.
I still have my entries in /etc/system for limiting how much RAM zfs can use.
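For reference, the kind of /etc/system entry being referred to here is a cap on the ZFS ARC; a minimal sketch (the value below is an example, not the poster's actual setting):

```
* Cap the ZFS ARC at 2 GB (example value -- adjust for your system)
set zfs:zfs_arc_max = 0x80000000
```

A reboot is required for /etc/system changes to take effect.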
Hi,
The tools to report storage usage per share are du -h / df -h :), so yes,
these tools could be made deduplication-aware.
I know for instance that microsoft has a feature (in Win2003 R2), called
File Server Resource Manager, and inside there's the possibility to make
Storage Reports, and one of those
Alex, thanks for the info. You made my heart stop a little when reading your
problem with PowerPath, but MPxIO seems like it might be a good option for me.
I'll try that as well, although I have not used it before. Thank you!
On Wed, 9 Dec 2009, James Andrewartha wrote:
There is a huge difference in practice - OpenSolaris has no free security
updates for stable releases, unlike FreeBSD. And I'm sure you don't recommend
running /dev in production.
If OpenSolaris were to do that, then it would be called Solaris.
Hi Joep,
Booting from a snapshot isn't possible because the snapshot is not
writable and the boot operation writes to the BE. Booting from a
clone is successful because the clone is writable.
The second issue is whether the reboot command understands what
a snapshot is. I see from the reboot
Hi Henri,
The slides from the SNIA conference this past fall provide a description
of upcoming features, here:
http://www.snia.org/events/storage-developer2009/presentations/monday/JeffBonwick_zfs-What_Next-SDC09.pdf
Cindy
On 12/09/09 05:25, Henri Maddox wrote:
Hi There,
does anybody
I've just done a fresh install of Solaris 10 u8 (2009.10) onto a Thumper.
Running zfs allow gives the following delightful output:
-bash-3.00$ zfs allow
internal error: /usr/lib/zfs/pyzfs.py not found
I've confirmed it on a second thumper, also running Solaris 10 u8 installed
about 2 months ago.
On Wed, 9 Dec 2009, Markus Kovero wrote:
From what I've noticed, if one destroys a dataset that is, say, 50-70TB and
reboots before the destroy is finished, it can take up to several _days_
before it's back up again.
So, nowadays I'm doing rm -fr BEFORE issuing zfs destroy whenever possible.
It
On 09 December, 2009 - Andrew Robert Nicols sent me these 1,6K bytes:
I've just done a fresh install of Solaris 10 u8 (2009.10) onto a Thumper.
Running zfs allow gives the following delightful output:
-bash-3.00$ zfs allow
internal error: /usr/lib/zfs/pyzfs.py not found
I've confirmed it
Hi Kjetil,
Unfortunately, dedup will only apply to data written after the setting is
enabled. That also means that new blocks cannot dedup against old blocks,
regardless of how they were written. There is therefore no way to prepare
your pool for dedup -- you just have to enable it when you have
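In concrete terms, the setup Adam describes boils down to a couple of commands; a sketch with placeholder pool/dataset names (tank, tank/data), not names from the thread:

```shell
# Set the stronger checksum and enable dedup before writing new data;
# blocks written earlier are unaffected.
zfs set checksum=sha256 tank/data
zfs set dedup=on tank/data

# The built-in reporting is pool-wide only:
zpool get dedupratio tank

# zdb can dump dedup-table statistics (more detail, still not per-file):
zdb -DD tank
```

Per-directory or per-file duplicate reports, as asked for earlier in the thread, have no built-in equivalent.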
On Tue, December 8, 2009 19:23, Andrew Daugherity wrote:
Description/rationale of the script (more detailed comments within the
script):
# This supplements zfs-auto-snapshot, but runs independently. I prefer
that
# snapshots continue to be taken even if the backup fails.
#
# This aims to
Adam Leventhal a...@eng.sun.com writes:
Unfortunately, dedup will only apply to data written after the setting
is enabled. That also means that new blocks cannot dedup against old
blocks, regardless of how they were written. There is therefore no way
to prepare your pool for dedup -- you just
Adam,
So therefore, the best way is to set this at pool creation time. OK, that
makes sense; it operates only on fresh data that's coming over the fence.
BUT
What happens if you snapshot, send, destroy, recreate (with dedup on this
time around) and then write the contents of the cloned snapshot
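The rewrite being proposed can be sketched roughly like this (placeholder names; the stream should be stashed somewhere safe, off the pool, before the destroy):

```shell
zfs snapshot tank/fs@migrate
zfs send tank/fs@migrate > /backup/fs.zs   # keep the stream off the pool
zfs destroy -r tank/fs
zfs set dedup=on tank                      # inherited by the re-created dataset
zfs receive tank/fs < /backup/fs.zs        # blocks are written fresh, so dedup applies
```

This matches the later answer in the thread that only the destination pool's settings matter.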
On Dec 9, 2009, at 3:47 AM, Bruno Sousa wrote:
Hi Andrey,
For instance, I talked about deduplication to my manager and he was
happy because less data = less storage, and therefore lower costs.
However, now the IT group of my company needs to provide the management
board with a report of duplicated
Hi,
Despite the fact that I agree in general with your comments, in reality
it all comes down to money.
So in this case, if I could prove that ZFS was able to find X amount of
duplicated data, and since that X amount of data has a price of Y per
GB, IT could be seen as a business enabler instead of a
What happens if you snapshot, send, destroy, recreate (with dedup on this
time around) and then write the contents of the cloned snapshot to the
various places in the pool - which properties are in the ascendancy here? the
host pool or the contents of the clone? The host pool I assume,
On Wed, 9 Dec 2009, Bruno Sousa wrote:
Despite the fact that I agree in general with your comments, in reality
it all comes down to money.
So in this case, if I could prove that ZFS was able to find X amount of
duplicated data, and since that X amount of data has a price of Y per
GB, IT could be
On Wed, Dec 9, 2009 at 10:43 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Wed, 9 Dec 2009, Bruno Sousa wrote:
Despite the fact that I agree in general with your comments, in reality
it all comes down to money.
So in this case, if I could prove that ZFS was able to find X amount of
Hi,
The data needs to be stored somewhere, and usually we need to have a
server, disk array, and disks; more data means more disks, and more
active disks means more power usage, therefore higher costs and less
green IT :)
So, from my point of view, deduplication is relevant for lowering costs,
On 7 dec 2009, at 18.40, Bob Friesenhahn wrote:
On Mon, 7 Dec 2009, Richard Bruce wrote:
I started copying over all the data from my existing workstation. When
copying files (mostly multi-gigabyte DV video files), network throughput
drops to zero for ~1/2 second every 8-15 seconds. This
I didn't see remove a simple device anywhere in there.
Is it:
too hard to even contemplate doing,
or
too silly a thing to do to even consider letting that happen
or
too stupid a question to even consider
or
too easy and straightforward to do the procedure I see recommended (export the
whole
On Wed, 9 Dec 2009, Andrey Kuzmin wrote:
Um, I thought deduplication had been invented to reduce backup window :).
Unless the backup system also supports deduplication, in what way does
deduplication reduce the backup window?
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us,
What you're talking about is a side-benefit of the BP rewrite section of the
linked slides.
I believe that once BP rewrite is fully baked, we'll soon afterwards see a
device removal feature arrive.
/dale
On Dec 9, 2009, at 3:46 PM, R.G. Keen wrote:
I didn't see remove a simple device
On Wed, 9 Dec 2009, Ragnar Sundblad wrote:
This is expected behavior. From what has been posted here, these
are the current buffering rules:
Is it really?
Shouldn't it start on the next txg and while the previous txg commits,
and just continue writing?
The pause is clearly not during the
R.G. Keen wrote:
I didn't see remove a simple device anywhere in there.
Is it:
too hard to even contemplate doing,
or
too silly a thing to do to even consider letting that happen
or
too stupid a question to even consider
or
too easy and straightforward to do the procedure I see
Dale Ghent wrote:
What you're talking about is a side-benefit of the BP rewrite section of the
linked slides.
I believe that once BP rewrite is fully baked, we'll soon afterwards see a
device removal feature arrive.
/dale
On Dec 9, 2009, at 3:46 PM, R.G. Keen wrote:
I didn't see remove
* R.G. Keen (k...@geofex.com) wrote:
I didn't see remove a simple device anywhere in there.
Is it:
too hard to even contemplate doing,
or
too silly a thing to do to even consider letting that happen
or
too stupid a question to even consider
or
too easy and straightforward to do the
On 12/09/09 13:52, Glenn Lagasse wrote:
* R.G. Keen (k...@geofex.com) wrote:
I didn't see remove a simple device anywhere in there.
Is it:
too hard to even contemplate doing,
or
too silly a thing to do to even consider letting that happen
or
too stupid a question to even consider
or
too
Neil Perrin wrote:
On 12/09/09 13:52, Glenn Lagasse wrote:
* R.G. Keen (k...@geofex.com) wrote:
I didn't see remove a simple device anywhere in there.
Is it:
too hard to even contemplate doing, or
too silly a thing to do to even consider letting that happen
or too stupid a question to even
* Neil Perrin (neil.per...@sun.com) wrote:
On 12/09/09 13:52, Glenn Lagasse wrote:
* R.G. Keen (k...@geofex.com) wrote:
I didn't see remove a simple device anywhere in there.
Is it:
too hard to even contemplate doing, or
too silly a thing to do to even consider letting that happen
or
hi,
I'm re-sending this because I'm hoping that someone has some answers
to the following questions. I'm working a hot escalation on AmberRoad
and am trying to understand what's under ZFS' hood.
thanks
Solaris RPE
/andrew rutz
On 11/25/09 13:55, andrew.r...@sun.com wrote:
I am trying to
On 10/12/2009, at 5:36 AM, Adam Leventhal wrote:
The dedup property applies to all writes so the settings for the pool of
origin don't matter, just those on the destination pool.
Just a quick related question I’ve not seen answered anywhere else:
Is it safe to have dedup running on your
I have disabled all 'non-important' processes (gdm,
ssh, vnc, etc). I am now starting this process
locally on the server via the console with about 3.4
GB free of RAM.
I still have my entries in /etc/system for limiting
how much RAM zfs can use.
Going on 10 hours now, still importing.
I have disabled all 'non-important' processes (gdm,
ssh, vnc, etc). I am now starting this process
locally on the server via the console with about 3.4
GB free of RAM.
I still have my entries in /etc/system for limiting
how much RAM zfs can use.
Going on 10 hours now, still
I wonder if you are hitting this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6905936
Deleting large files or filesystems on a dedup=on filesystem stalls the
whole system
Cindy
On 12/09/09 16:41, Jack Kielsmeier wrote:
I have disabled all 'non-important' processes
(gdm,
Ah that could be it!
This leaves me hopeful, as it looks like that bug says it'll eventually finish!
I replied... maybe I don't count anymore, boo hoo :-)
http://opensolaris.org/jive/thread.jspa?threadID=118667&tstart=15
-- richard
On Dec 9, 2009, at 1:57 PM, andrew.r...@sun.com wrote:
hi,
i'm re-sending this because I'm hoping that someone has some answers
to the following questions. I'm
On Dec 9, 2009, at 11:07 AM, Bruno Sousa wrote:
Hi,
Despite the fact that I agree in general with your comments, in reality
it all comes down to money.
So in this case, if i could prove that ZFS was able to find X amount
of
duplicated data, and since that X amount of data has a price of Y per
I've had a case open for a year or so now regarding the inefficiencies of
having a large number of zfs filesystems, in particular how long it takes
to share/unshare them (resulting in a reboot cycle time on my x4500 with
8000 file systems of over two hours).
I got an update indicating that the
Ditto, and also some estimate of when we can see them in opensolaris.
On Wed, Dec 9, 2009 at 11:02 PM, Paul B. Henson hen...@acm.org wrote:
I've had a case open for a year or so now regarding the inefficiencies of
having a large number of zfs filesystems, in particular how long it takes
to
OK, today I played with a J4400 connected to a Txxx server running S10 10/09.
First off: read the release notes. I spent about 4 hours pulling my hair out, as
I could not get stmsboot to work until we read in the release notes that 500GB
SATA drives do not work!!!
Initial Setup:
A pair of dual port
On Thu, Dec 10, 2009 at 12:37 AM, James Lever j...@jamver.id.au wrote:
On 10/12/2009, at 5:36 AM, Adam Leventhal wrote:
The dedup property applies to all writes so the settings for the pool of
origin don't matter, just those on the destination pool.
Just a quick related question I’ve not
53 matches