Sorry for the late answer.
It's approximately 150 bytes per individual block, so increasing the
blocksize is a good idea.
Also, when the L1 and L2 ARC are not enough, the system will start issuing
disk IOPS, and RAID-Z is not very effective for random IOPS; that is likely
what you will hit when your DRAM is not enough.
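To see why blocksize matters here, a rough back-of-the-envelope sketch of that sizing argument, using the ~150 bytes/block figure above and a hypothetical 1 TiB of unique pool data:

```shell
# Estimate dedup-table RAM per the ~150 bytes/block figure above.
# 1 TiB of unique pool data is an assumption for illustration only.
pool_bytes=$((1024 * 1024 * 1024 * 1024))
for recordsize in 8192 131072; do
    entries=$((pool_bytes / recordsize))
    ddt_mib=$((entries * 150 / 1024 / 1024))
    echo "recordsize=${recordsize}: ${entries} entries, ~${ddt_mib} MiB of table"
done
```

Going from 8K to 128K records cuts the table by 16x, which is why a larger blocksize helps keep it resident in ARC instead of spilling to disk.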
We got 50+ X4500/X4540's running in the same DC happily with ZFS.
Approximately 2500 drives and growing every day...
Br
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email
Hi Simon
I.e. you'll have to manually intervene
if a consumer drive causes the system to hang, and
replace it, whereas the RAID edition drives will
probably report the error quickly and then ZFS will
rewrite the data elsewhere, and thus maybe not kick
the drive.
IMHO the relevant aspects
So do you mean I cannot gather the names and locations of
changed/created/removed files just by analyzing a stream of
(incremental) zfs_send?
Quoting Andrey Kuzmin andrey.v.kuz...@gmail.com:
On Wed, Feb 3, 2010 at 6:11 PM, Ross Walker rswwal...@gmail.com wrote:
On Feb 3, 2010, at 9:53 AM,
Whoa! That is exactly what I've been looking for. Is there any
development version publicly available for testing?
Regards,
Henrik Heino
Quoting Matthew Ahrens matthew.ahr...@sun.com:
This is RFE 6425091 "want 'zfs diff' to list files that have changed
between snapshots", which covers both
Henu wrote:
So do you mean I cannot gather the names and locations of
changed/created/removed files just by analyzing a stream of
(incremental) zfs_send?
That's correct, you can't. Snapshots do not work at the file level.
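That said, since each snapshot is mounted as a directory tree under `.zfs/snapshot`, one workaround until 'zfs diff' exists is to compare the two trees directly. A sketch, demonstrated on plain temporary directories standing in for the hypothetical snapshot paths:

```shell
# List changed/added/removed files by diffing two snapshot directory trees.
# Plain temp dirs stand in for /pool/fs/.zfs/snapshot/{snap1,snap2} here.
old=$(mktemp -d); new=$(mktemp -d)
echo v1 > "$old/kept.txt";    echo v1 > "$new/kept.txt"
echo v1 > "$old/changed.txt"; echo v2 > "$new/changed.txt"
echo v1 > "$old/removed.txt"; echo v1 > "$new/added.txt"
# diff exits non-zero when the trees differ, so swallow the status
changes=$(diff -rq "$old" "$new" || true)
echo "$changes"
rm -rf "$old" "$new"
```

This walks and reads every file, so it is far slower than the metadata-level diff the RFE asks for; it only shows what the interface would report, not how.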
--
Ian.
Frank Cusack wrote:
Is it possible to emulate a unionfs with zfs and zones somehow? My zones
are sparse zones and I want to make part of /usr writable within a zone.
(/usr/perl5/mumble to be exact)
Why don't you just export that directory with NFS (rw) to your sparse zone
and mount it on
Hi All,
I've been using ZFS for a while now - and everything's been going well. I
use it under FreeBSD - but this question almost certainly should be the
same answer, whether it's FreeBSD or Solaris (I think/hope :)...
Imagine if I have a zpool with 2 disks in it, that are mirrored:
NAME
Hi All,
Anyone in the group using ZFS compression on ClearCase VOBs? If so, any issues or
gotchas?
IBM support informs that ZFS compression is not supported. Any views on this?
Rgds
Roshan
On 04/02/2010 11:54, Roshan Perera wrote:
Anyone in the group using ZFS compression on clearcase vobs? If so any issues,
gotchas?
There shouldn't be any issues and I'd be very surprised if there were.
IBM support informs that ZFS compression is not supported. Any views on this?
Need more
On Thu, Feb 4, 2010 at 2:09 AM, Frank Cusack
frank+lists/z...@linetwo.net wrote:
Is it possible to emulate a unionfs with zfs and zones somehow? My zones
are sparse zones and I want to make part of /usr writable within a zone.
(/usr/perl5/mumble to be exact)
I can't just mount a writable
Hi Darren,
Thanks - IBM basically haven't tested ClearCase with ZFS compression, so they
don't currently support it. That may change in future; until then my customer
cannot use compression. I have asked IBM for roadmap info to find out
whether/when it will be supported.
Thanks
Roshan
- Original
Hi Ross,
zdb - f...@snapshot | grep path | nawk '{print $2}'
Enjoy!
Darren Mackay
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 04.02.2010 12:12, dick hoogendijk wrote:
Frank Cusack wrote:
Is it possible to emulate a unionfs with zfs and zones somehow? My zones
are sparse zones and I want to make part of /usr writable within a zone.
(/usr/perl5/mumble to be exact)
Why don't you just export that directory with
On 04/02/2010 12:13, Roshan Perera wrote:
Hi Darren,
Thanks - IBM basically haven't test clearcase with ZFS compression therefore,
they don't support currently. Future may change, as such my customer cannot use
compression. I have asked IBM for roadmap info to find whether/when it will be
On Wed, Feb 03, 2010 at 03:02:21PM -0800, Brandon High wrote:
Another solution, for a true DIY x4500: BackBlaze has schematics for
the 45 drive chassis that they designed available on their website.
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/
Hi Darren,
I totally agree with you and have raised some of the points mentioned but you
have given even more items to pass on.
I will update the alias when I hear further.
Many Thanks
Roshan
- Original Message -
From: Darren J Moffat darr...@opensolaris.org
Date: Thursday, February
Looking through some more code.. I was a bit premature in my last post - been a
long day.
Extracting the GUIDs and querying the metadata seems logical - I think
running a zfs send just to parse the data stream is a lot of overhead, when you
really only need to traverse metadata directly.
On Feb 4, 2010, at 2:00 AM, Tomas Ögren st...@acc.umu.se wrote:
On 03 February, 2010 - Frank Cusack sent me these 0,7K bytes:
On February 3, 2010 12:04:07 PM +0200 Henu henrik.he...@tut.fi
wrote:
Is there a possibility to get a list of changed files between two
snapshots? Currently I do
The delete queue and related blocks need further investigation...
r...@osol-dev:/data/zdb-test# zdb -dd data/zdb-test | more
Dataset data/zdb-test [ZPL], ID 641, cr_txg 529804, 24.5K, 6 objects
    Object  lvl  iblk  dblk  dsize  lsize  %full  type
         0    7   16K   16K  15.0K    16K
--On 04 February 2010 11:31 + Karl Pielorz kpielorz_...@tdx.co.uk
wrote:
What would happen when I tried to 'online' ad2 again?
A reply to my own post... I tried this out, when you make 'ad2' online
again, ZFS immediately logs a 'vdev corrupt' failure, and marks 'ad2'
(which at this
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
I'm kind of stuck at trying to get my aging Netra 240 machine to boot
OpenSolaris. The live CD and installation worked perfectly, but when I
reboot and try to boot from the installed disk, I get:
Rebooting with command: boot disk0
Boot device:
Seems your controller is actually doing only harm here, or am I missing
something?
On Feb 4, 2010 8:46 AM, Karl Pielorz kpielorz_...@tdx.co.uk wrote:
--On 04 February 2010 11:31 + Karl Pielorz kpielorz_...@tdx.co.uk
wrote:
What would happen...
A reply to my own post... I tried this out,
I think you'll do just fine then. And I think the extra platter will
work to your advantage.
-marc
On 2/3/10, Simon Breden sbre...@gmail.com wrote:
Probably 6 in a RAID-Z2 vdev.
Cheers,
Simon
Hi all,
it might not be a ZFS issue (and thus on the wrong list), but maybe there's
someone here who might be able to give us a good hint:
We are operating 13 x4500 and started to play with non-Sun blessed SSDs in
there. As we were running Solaris 10u5 before and wanted to use them as log
Hi Arnaud,
which type of controller is this?
Regards,
Tonmaus
--On 04 February 2010 08:58 -0500 Jacob Ritorto jacob.rito...@gmail.com
wrote:
Seems your controller is actually doing only harm here, or am I missing
something?
The RAID controller presents the drives as both a mirrored pair, and JBOD -
*at the same time*...
The machine boots off the
On 04/02/10 16:57, Tonmaus wrote:
Hi Arnaud,
which type of controller is this?
Regards,
Tonmaus
I use two LSI SAS3081E-R in each server (16 hard disk trays, passive
backplane AFAICT, no expander).
Works very well.
Arnaud
On 04/02/2010 13:45, Karl Pielorz wrote:
--On 04 February 2010 11:31 + Karl Pielorz
kpielorz_...@tdx.co.uk wrote:
What would happen when I tried to 'online' ad2 again?
A reply to my own post... I tried this out, when you make 'ad2' online
again, ZFS immediately logs a 'vdev corrupt'
On 03/02/2010 21:45, Aleksandr Levchuk wrote:
Hardware RAID6 + hot spare worked well for us. So, I wanted to stick with
our SAN for data protection. I understand that the end-to-end checks
of ZFS make it better at detecting corruption.
In my case, I can imagine that ZFS would FREEZE the whole
Hi again,
thanks for the answer. Another thing that came to my mind is that you mentioned
that you mixed the disks among the controllers. Does that mean you mixed them
as well among pools? Unsurprisingly, the WD20EADS is slower than the Hitachi,
which is a fixed 7200 rpm drive. I wonder what
On 4 Feb 2010, at 16:35, Bob Friesenhahn wrote:
On Thu, 4 Feb 2010, Darren J Moffat wrote:
Thanks - IBM basically haven't test clearcase with ZFS compression
therefore, they don't support currently. Future may change, as such my
customer cannot use compression. I have asked IBM for roadmap
On 04/02/2010 12:42, Darren J Moffat wrote:
On 04/02/2010 12:13, Roshan Perera wrote:
Hi Darren,
Thanks - IBM basically haven't test clearcase with ZFS compression
therefore, they don't support currently. Future may change, as such
my customer cannot use compression. I have asked IBM for
On February 4, 2010 12:12:04 PM +0100 dick hoogendijk d...@nagual.nl
wrote:
Why don't you just export that directory with NFS (rw) to your sparse zone
and mount it on /usr/perl5/mumble ? Or is this too simple a thought?
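A loopback (lofs) mount is another option that avoids NFS entirely. A sketch of the zonecfg input, where the zone name and the backing directory in the global zone are hypothetical:

```shell
# Sketch: expose a writable global-zone directory at /usr/perl5/mumble
# inside a sparse zone via lofs (zone name and paths are hypothetical).
zonecfg -z myzone <<'EOF'
add fs
set dir=/usr/perl5/mumble
set special=/export/zones/myzone-perl5
set type=lofs
add options rw
end
commit
EOF
# The mount appears on the zone's next reboot.
```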
On February 4, 2010 1:41:20 PM +0100 Thomas Maier-Komor
BTW, I could just install everything in the global zone and use the
default inheritance of /usr into each local zone to see the data.
But then my zones are not independent portable entities; they would
depend on some non-default software installed in the global zone.
Just wanted to explain why
On 2/4/10 8:00 AM +0100 Tomas Ögren wrote:
rsync by default compares metadata first, and only checks through every
byte if you add the -c (checksum) flag.
I would say rsync is the best tool here.
ah, i didn't know that was the default. no wonder recently when i was
incremental-rsyncing a few
On 2/4/10 8:21 AM -0500 Ross Walker wrote:
Find -newer doesn't catch files added or removed; it assumes identical
trees.
This may be redundant in light of my earlier post, but yes it does.
Directory mtimes are updated when a file is added or removed, and
find -newer will detect that.
-frank
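Frank's point is easy to verify on any POSIX filesystem: creating or removing an entry updates the parent directory's mtime, which find -newer picks up. A small sketch using throwaway directories:

```shell
# Show that adding a file bumps the parent directory's mtime,
# so 'find -newer stamp' reports the directory (and the new file).
d=$(mktemp -d)
mkdir "$d/sub"
echo hi > "$d/sub/old.txt"
sleep 1                      # ensure stamp is strictly newer than old.txt
touch "$d/stamp"
sleep 1                      # ensure the add is strictly newer than stamp
echo hi > "$d/sub/new.txt"
hits=$(find "$d/sub" -newer "$d/stamp")
echo "$hits"                 # lists the sub directory and new.txt, not old.txt
rm -rf "$d"
```

Note you still have to descend into any directory that shows up to work out what changed; the directory mtime only tells you something was added or removed there.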
On Thu, Feb 04, 2010 at 03:19:15PM -0500, Frank Cusack wrote:
BTW, I could just install everything in the global zone and use the
default inheritance of /usr into each local zone to see the data.
But then my zones are not independent portable entities; they would
depend on some non-default
On 2/4/10 2:46 PM -0600 Nicolas Williams wrote:
In Frank's case, IIUC, the better solution is to avoid the need for
unionfs in the first place by not placing pkg content in directories
that one might want to be writable from zones. If there's anything
about Perl5 (or anything else) that causes
On Thu, Feb 04, 2010 at 04:03:19PM -0500, Frank Cusack wrote:
On 2/4/10 2:46 PM -0600 Nicolas Williams wrote:
In Frank's case, IIUC, the better solution is to avoid the need for
unionfs in the first place by not placing pkg content in directories
that one might want to be writable from zones.
Hi all,
I'm trying to replace a broken LUN in the pool using zpool replace -f lun,
but it fails. The physical disk is already replaced, and the new lun has the
same address as the broken one. But zpool detach/attach works.
This is a simple configuration:
pool: mypool
state: DEGRADED
status: One or more devices
Supermicro USAS-L8i controllers.
I agree with you, I'd much rather have the drives respond properly and promptly
than save a little power if that means I'm going to get strange errors from the
array. And these are the green drives, they just don't seem to cause me any
problems. The issues
Hi Ross,
Yes - zdb - is dumping out info in the form of:
    Object  lvl  iblk  dblk  dsize  lsize   %full  type
        19    1   16K   512    512    512  100.00  ZFS plain file
                                             264   bonus  ZFS znode
dnode flags: USED_BYTES
I am starting to put together a home NAS server that will have the following
roles:
(1) Store TV recordings from SageTV over either iSCSI or CIFS. Up to 4 or 5 HD
streams at a time. These will be streamed live to the NAS box during recording.
(2) Playback TV (could be stream being recorded,
* Brian (broco...@vt.edu) wrote:
I am Starting to put together a home NAS server that will have the
following roles:
(1) Store TV recordings from SageTV over either iSCSI or CIFS. Up to
4 or 5 HD streams at a time. These will be streamed live to the NAS
box during recording. (2) Playback
I was interested in the impact the type of an SSD has on the performance of the
ZIL. So I did some benchmarking and just want to share the results.
My test case is simply untarring the latest ON source (528 MB, 53k files) on a
Linux system that has a ZFS file system mounted via NFS over
I would go with cores (threads) rather than clock speed here. My home system
is a 4-core AMD @ 1.8Ghz and performs well.
I wouldn't use drives that big and you should be aware of the overheads of
RaidZ[x].
-marc
On Thu, Feb 4, 2010 at 6:19 PM, Brian broco...@vt.edu wrote:
I am Starting to
On 04/02/10 20:26, Tonmaus wrote:
Hi again,
thanks for the answer. Another thing that came to my mind is that you mentioned that you mixed the disks among the controllers. Does that mean you mixed them as well among pools? Unsurprisingly, the WD20EADS is slower than the Hitachi that is a
Thanks for the reply.
Are cores better because of the compression/deduplication being multi-threaded
or because of multiple streams? It is a pretty big difference in clock speed -
so curious as to why core would be better. Glad to see your 4 core system is
working well for you - so seems like
Very interesting stats -- thanks for taking the time and trouble to share
them!
One thing I found interesting is that the Gen 2 X25-M has higher write IOPS
than the X25-E according to Intel's documentation (6,600 IOPS for 4K writes
versus 3,300 IOPS for 4K writes on the E). I wonder if it'd
Put your money into RAM, especially for dedup.
-- richard
On Feb 4, 2010, at 3:19 PM, Brian wrote:
I am Starting to put together a home NAS server that will have the following
roles:
(1) Store TV recordings from SageTV over either iSCSI or CIFS. Up to 4 or 5
HD streams at a time.
Peter Radig wrote:
I was interested in the impact the type of an SSD has on the performance of the
ZIL. So I did some benchmarking and just want to share the results.
My test case is simply untarring the latest ON source (528 MB, 53k files) on an
Linux system that has a ZFS file system
I have a single zfs volume, shared out using COMSTAR and connected to a Windows
VM. I am taking snapshots of the volume regularly. I now want to mount a
previous snapshot, but when I go through the process, Windows sees the new
volume, but thinks it is blank and wants to initialize it. Any
On 05/02/10 01:00, Brian wrote:
Thanks for the reply.
Are cores better because of the compression/deduplication being mult-threaded or because of multiple streams? It is a pretty big difference in clock speed - so curious as to why core would be better. Glad to see your 4 core system is
Hi Brian,
If you are considering testing dedup, particularly on large datasets,
see the list of known issues, here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
Start with build 132.
Thanks,
Cindy
On 02/04/10 16:19, Brian wrote:
I am Starting to put together a home NAS
It sounds like the consensus is more cores over clock speed. Surprising to me
since the difference in clock speed was over 1 GHz. So, I will go with a quad
core.
I was leaning towards 4GB of ram - which hopefully should be enough for dedup
as I am only planning on dedupping my smaller file
On Thu, Feb 4, 2010 at 7:54 PM, Brian broco...@vt.edu wrote:
It sounds like the consensus is more cores over clock speed. Surprising to
me since the difference in clocks speed was over 1Ghz. So, I will go with a
quad core.
Four cores @ 1.8Ghz = 7.2Ghz of threaded performance ([Open]Solaris
I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2
mirrored boot drives.
You want to use compression and deduplication and raidz2. I hope you didn't
want to get any performance out of this system, because all of those are
compute or IO intensive.
FWIW ... 5 disks in raidz2
Interesting comments..
But I am confused.
Performance for my backups (compression/deduplication) would most likely not be
#1 priority.
I want my VMs to run fast - so is it deduplication that really slows things
down?
Are you saying raidz2 would overwhelm current I/O controllers to where I
On Thu, 4 Feb 2010, Brian wrote:
Was my raidz2 performance comment above correct? That the write
speed is that of the slowest disk? That is what I believe I have
read.
Data in raidz2 is striped so that it is split across multiple disks.
In this (sequential) sense it is faster than a
On Thu, 4 Feb 2010, Marc Nicholas wrote:
Very interesting stats -- thanks for taking the time and trouble to share them!
One thing I found interesting is that the Gen 2 X25-M has higher write IOPS
than the
X25-E according to Intel's documentation (6,600 IOPS for 4K writes versus 3,300
IOPS
On Thu, Feb 4, 2010 at 10:18 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Thu, 4 Feb 2010, Marc Nicholas wrote:
Very interesting stats -- thanks for taking the time and trouble to share
them!
One thing I found interesting is that the Gen 2 X25-M has higher write
IOPS than
On Thu, 4 Feb 2010, Marc Nicholas wrote:
The write IOPS between the X25-M and the X25-E are different since with the
X25-M, much
more of your data gets completely lost. Most of us prefer not to lose our data.
Would you like to qualify your statement further?
Google is your friend. And
On Thu, Feb 4, 2010 at 10:35 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Thu, 4 Feb 2010, Marc Nicholas wrote:
The write IOPS between the X25-M and the X25-E are different since with
the X25-M, much
more of your data gets completely lost. Most of us prefer not to lose our
Data in raidz2 is striped so that it is split across multiple disks.
Partial truth.
Yes, the data is on more than one disk, but it's parity-based, requiring
computation overhead and a write operation on each and every disk. It's not
simply striped. Whenever you read or write, you need to
I want my VMs to run fast - so is it deduplication that really slows
things down?
Are you saying raidz2 would overwhelm current I/O controllers to where
I could not saturate 1 GB network link?
Is the CPU I am looking at not capable of doing dedup and compression?
Or are no CPUs capable
Brian wrote:
Interesting comments..
But I am confused.
Performance for my backups (compression/deduplication) would most likely not be
#1 priority.
I want my VMs to run fast - so is it deduplication that really slows things
down?
Dedup requires a fair amount of CPU, but it really wants a
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 02/05/2010 03:21 AM, Edward Ned Harvey wrote:
FWIW ... 5 disks in raidz2 will have capacity of 3 disks. But if you bought
6 disks in mirrored configuration, you have a small extra cost, and much
better performance.
But the raidz2 can survive
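The trade-off in that comparison is easy to put numbers on; a sketch assuming the 1.5 TB drives mentioned earlier in the thread:

```shell
# Usable capacity: 5-disk raidz2 vs 6 disks as three 2-way mirrors,
# assuming 1500 GB drives (ignoring metadata/formatting overhead).
disk_gb=1500
raidz2_gb=$(( (5 - 2) * disk_gb ))   # raidz2 stores data on n-2 disks
mirror_gb=$(( (6 / 2) * disk_gb ))   # three mirrored pairs
echo "raidz2 (5 disks):  ${raidz2_gb} GB usable, survives any 2 disk failures"
echo "mirrors (6 disks): ${mirror_gb} GB usable, survives 1 failure per pair"
```

Same usable space either way in this case; the extra disk of mirrors buys random-IOPS performance, while the raidz2 buys tolerance of any two simultaneous failures.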
I am leaning towards AMD because of ECC support
well, let's look at Intel's offerings... RAM is faster than AMD's
at 1333 MHz DDR3, and one gets ECC and a thermal sensor for $10 over non-ECC
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139040
This MB has two Intel ethernets and for