When I say fast, I mean I have already done some benchmarks with iozone, and
made some graphs to see what the performance is.
What I can say is that it goes a lot faster than an H700 with 12 x 600 GB 15k rpm disks.
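For reference, an iozone run along those lines would look something like the
following; the flags are standard iozone options, but the exact invocation is
an assumption, not what the poster ran:
  iozone -a -g 4g -i 0 -i 1 -i 2 -R -b results.xls
-a runs the automatic test matrix, -g caps the file size at 4 GB, -i 0/1/2
select the write, read, and random read/write tests, and -R -b writes an
Excel-style report to results.xls.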
I asked if it is faster than a properly made UFS/gmirror/gstripe mix on the
same hardware.
And I do
On the other hand, even on a single-disk pool, ZFS stores two copies of all
metadata, so the chances of actually losing a directory block are extremely
remote. On mirrored or RAIDZ pools, you have at least four copies of all
metadata.
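The metadata ditto blocks happen automatically, but the same mechanism can be
extended to file data via the copies property; a minimal sketch, with a
hypothetical dataset name:
  # zfs set copies=2 tank/important     (tank/important is a made-up name)
  # zfs get copies tank/important
Every data block in that dataset is then written twice, on different disks
where possible, at the cost of doubled space for that dataset.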
I can only wish you luck. Sometimes lack of
I have another storage server named bd3 that has a RAIDz2 array of 2.5T
drives (11 of them, IIRC) but it is presently powered down for maintenance.
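A pool like that is a one-liner; a sketch assuming the host name doubles as
the pool name, with made-up device names:
  # zpool create bd3 raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10
With 11 drives in RAIDZ2, any two can fail without data loss, and usable
capacity is roughly nine drives' worth.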
It seems you don't need performance at all if you use RAIDZ1/2 and ZFS,
unless performance for you means how fast a 1 GB file is read linearly.
This thread confused me. Is the conclusion of this thread that ZFS is slow and
breaks beyond recovery? I keep seeing two sides to this coin. I can't decide
whether to use ZFS or hardware RAID. Why does EMC use hardware RAID?
-Simon
___
This thread confused me. Is the conclusion of this thread that ZFS is slow
and breaks beyond recovery?
I've personally experienced no problems with ZFS. The performance has been on
par with UFS as far as I can tell. Sometimes it's a little faster, sometimes a
little slower depending on the
--As of June 2, 2012 6:32:39 PM -0400, Simon is alleged to have said:
This thread confused me. Is the conclusion of this thread that ZFS is
slow and breaks beyond recovery? I keep seeing two sides to this coin. I
can't decide whether to use ZFS or hardware RAID. Why does EMC use
hardware RAID?
On Sat, Jun 2, 2012 at 7:44 PM, Daniel Staal dst...@usa.net wrote:
I will agree that ZFS could use a good worst-case-scenario 'fsck'-like tool.
Worst-case scenario? That's when fsck doesn't work. Quickly followed
by a sinking feeling.
ZFS can be a complicated beast: It's not the best choice
On 31/05/2012 at 11:32:33 -0400, Oscar Hodgson wrote:
The subject is pretty much the question. Perhaps there's a better
place to be asking this question ...
We have (very briefly) discussed the possibility of using FreeBSD
pizza boxes as storage heads direct-attached to external JBODs,
48 TB each, roughly. There would be a couple of units. The pizza
boxes would be used for computational tasks, and nominally would have
8 cores and 96G+ RAM.
Obvious questions are hardware compatibility and stability. I've set
up small FreeBSD 9 machines with ZFS roots and simple mirrors for
I am also in charge of redesigning one of our virtual SANs into a
FreeBSD ZFS storage system, which will run... well, how many JBODs can
you fit on the system? Probably around ~100 TB or so.
quite a bit more without buying overpriced things
___
I'm not using as huge a dataset, but I was seeing this behavior as well when
I first set my box up. What was happening was that ZFS was caching *lots* of
writes, and then would dump them all to disk at once, during which time the
computer was completely occupied with the disk I/O.
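The knobs usually pointed at for that bursty flush behavior on FreeBSD 9-era
ZFS are sysctls; the names below are from that era and changed in later
versions, so treat this as a sketch to verify against your release:
  # sysctl vfs.zfs.txg.timeout=5                      (seconds between transaction group syncs)
  # sysctl vfs.zfs.write_limit_override=1073741824    (cap dirty data per txg, here ~1 GB)
A lower write limit makes the flushes smaller and more frequent instead of one
huge dump that monopolizes the disks for seconds.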
and definitely do not use it if you will not have regular backups of all
data, as in case of failures (yes, they do happen) you will just have no
chance to repair it.
There is NO fsck_zfs! And ZFS is promoted as not needing one.
Assuming that a filesystem doesn't need an offline filesystem check utility
because it never crashes is funny.
zfs scrub...???
The one that, when started, means a quick crash?
Well.. no.
Certainly, with computers that never have hardware faults, and assuming ZFS
doesn't have any software bugs, you may be right.
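For what it's worth, the online check people keep calling 'zfs scrub' is
actually a zpool subcommand; pool name assumed:
  # zpool scrub tank
  # zpool status tank     (shows scrub progress and per-device read/write/checksum error counts)
A scrub re-reads every allocated block, verifies its checksum, and repairs
from redundancy where it can; it is an online integrity check, not an offline
consistency repair in the fsck sense.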
Additionally, ZFS works directly at the block level of the HD, meaning
that it is slightly different from 'normal' file systems in how it stores
information, and is also self-healing.
Don't other filesystems work at the block level too? If not, then at what
level?
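As an aside, the structures ZFS puts on disk can be inspected directly with
zdb; pool name assumed:
  # zdb -C tank     (prints the pool's cached vdev configuration)
So there is very much a real on-disk format, just one managed by ZFS itself
rather than by a separate volume manager underneath.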
It was my impression that
On Fri, 1 Jun 2012, Wojciech Puchar wrote:
Assuming that a filesystem doesn't need an offline filesystem check utility
because it never crashes is funny.
zfs scrub...???
The one that, when started, means a quick crash?
Well.. no.
Certainly, with computers that never have hardware faults, and assuming ZFS
On Fri, 1 Jun 2012 14:05:57 +0100, Kaya Saman wrote:
It was my impression that ZFS doesn't actually format the disk; it
stores data as raw information on the hard disk directly rather than
using an actual file system structure as such.
In worst... in ultra-worst, abysmal, unexpected, exceptional
level?
It was my impression that ZFS doesn't actually format the disk as
Does any filesystem format a disk?
Disks are nowadays factory-formatted.
The filesystem only writes data and its metadata onto it.
I really recommend you get a basic knowledge of how (any) filesystem
works.
THEN please
and unbelievably narrow cases, when you don't have or can't
access a backup (which you should have even when using ZFS),
and you _need_ to do some forensic analysis on disks, ZFS
seems to be a worse solution than UFS. On ZFS, you never
can predict where the data will go. Add several disks to
On Fri, Jun 1, 2012 at 7:35 AM, Polytropon free...@edvax.de wrote:
I do _not_ want to try to claim a ZFS inferiority due to
missing backups, but there may be occasions where (except
performance), low-level file system aspects of UFS might be
superior to using ZFS.
If you have an operational
As for ZFS being dangerous, we have a score of drive-years with no loss of
data. The lack of fsck is considered in this intelligently written piece
You are just lucky.
Before I would start using anything new in such an important part as the
filesystem, I do extreme tests, simulate hardware faults,
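That kind of fault simulation is easy to reproduce on file-backed vdevs; a
sketch, with all paths and sizes made up:
  # truncate -s 256m /tmp/d0 /tmp/d1
  # zpool create test mirror /tmp/d0 /tmp/d1
  (copy some data in, then corrupt one side, skipping the vdev labels at the front)
  # dd if=/dev/urandom of=/tmp/d1 bs=1m oseek=16 count=8 conv=notrunc
  # zpool scrub test
  # zpool status -v test
The status output should show CKSUM errors on /tmp/d1, silently repaired from
the intact mirror half.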
On Fri, Jun 1, 2012 at 8:16 AM, Wojciech Puchar
woj...@wojtek.tensor.gdynia.pl wrote:
Better = random read performance of a single drive.
What an entirely useless performance measure! Maybe you should
restrict yourself to
using SSDs, which have rather unbeatable random read performance - the
On Fri, Jun 1, 2012 at 8:08 AM, Wojciech Puchar
woj...@wojtek.tensor.gdynia.pl wrote:
ZFS is somewhat similar in that respect to the Amiga Fast File System: when you
overwrite a directory block (by hardware fault, for example), everything
below that directory will disappear. You may not even be aware
Albert,
What are you using for an HBA in the Dell?
On Fri, Jun 1, 2012 at 1:23 AM, Albert Shih albert.s...@obspm.fr wrote:
I have a Dell R610 + 48 GB RAM, 2x 6-core + 4x MD1200 (36x 3 TB + 12x 2 TB)
[root@filer ~]# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
filer 119T
In the last episode (Jun 01), Wojciech Puchar said:
and unbelievably narrow cases, when you don't have or can't access a
backup (which you should have even when using ZFS), and you _need_ to do
some forensic analysis on disks, ZFS seems to be a worse solution than
UFS. On ZFS, you never
Certainly, with computers that never have hardware faults, and assuming ZFS
doesn't have any software bugs, you may be right.
That was part of their assumption. It's based on server-grade hardware and
ECC RAM, and lots of redundancy.
They missed the part about their code not being perfect.
The subject is pretty much the question. Perhaps there's a better
place to be asking this question ...
We have (very briefly) discussed the possibility of using FreeBSD
pizza boxes as storage heads direct-attached to external JBOD arrays
with ZFS. In perusing the list, I haven't stumbled
If this is any consolation, I run a 36 TB cluster using a self-built
server with a Promise DAS (VessJBOD 1840) using ZFS at home, to
support my open-source projects and personal files.
As for the OS, take your pick: NexentaStor, FreeBSD, Solaris 11.
All capable; of course Solaris has the latest version of
That helps. Thank you.
This is an academic departmental instructional / research environment.
We had a great relationship with Sun; they provided great
opportunities to put Solaris in front of students. Oracle, not so
much, and the Oracle single-tier support model simply isn't affordable
for
On Thu, May 31, 2012 at 5:05 PM, Oscar Hodgson oscar.hodg...@gmail.com wrote:
That helps. Thank you.
This is an academic departmental instructional / research environment.
We had a great relationship with Sun; they provided great
opportunities to put Solaris in front of students. Oracle,
As a side note and in case you were considering, I strongly advise against
Linux + FUSE ZFS.
On 31 May 2012, at 18:05, Oscar Hodgson oscar.hodg...@gmail.com wrote:
That helps. Thank you.
This is an academic departmental instructional / research environment.
We had a great relationship
On Thu, May 31, 2012 at 6:28 PM, Damien Fleuriot m...@my.gd wrote:
As a side note and in case you were considering, I strongly advise against
Linux + FUSE ZFS.
Yes, I agree; as far as I understand, ZFS on Linux is still in testing
and in any case not part of the Linux kernel, which means
I'm doing this with HP heads, LSI SAS adapters, and
http://www.dataonstorage.com/ JBODs.
Note: the DataOn JBODs are very, very hard to get right now because these
are really rebadged LSI devices and LSI sold this division to NetApp, who
promptly shut it down to prevent people like us from
On Thu, 31 May 2012, Oscar Hodgson wrote:
The subject is pretty much the question. Perhaps there's a better
place to be asking this question ...
We have (very briefly) discussed the possibility of using FreeBSD
pizza boxes as storage heads direct-attached to external JBOD arrays
with ZFS.
On Thu, 31 May 2012, Oscar Hodgson wrote:
That helps. Thank you.
This is an academic departmental instructional / research environment.
We had a great relationship with Sun; they provided great
opportunities to put Solaris in front of students. Oracle, not so
much, and the Oracle
On Thu, 31 May 2012, Kaya Saman wrote:
On Thu, May 31, 2012 at 5:05 PM, Oscar Hodgson oscar.hodg...@gmail.com wrote:
That helps. Thank you.
This is an academic departmental instructional / research environment.
We had a great relationship with Sun; they provided great
opportunities to put
The thought never crossed my mind.
On Thu, May 31, 2012 at 1:28 PM, Damien Fleuriot m...@my.gd wrote:
As a side note and in case you were considering, I strongly advise against
Linux + FUSE ZFS.
___
--As of May 31, 2012 11:24:41 AM -0700, Dennis Glatting is alleged to have
said:
2) Under heavy I/O my systems freeze for a few seconds. I haven't looked
into why, but they are completely unresponsive. Note I am also using
compressed volumes (gzip), which puts a substantial load on the kernel.
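If the gzip CPU load is the pain point, the compression level is settable per
dataset; a sketch with an assumed dataset name:
  # zfs set compression=gzip-1 tank/data    (gzip-1 is far cheaper than the default gzip-6)
  # zfs get compressratio tank/data         (see what ratio you are actually getting)
Only newly written blocks pick up the new level; existing data stays as it
was compressed.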
On Thu, 2012-05-31 at 19:27 -0400, Daniel Staal wrote:
--As of May 31, 2012 11:24:41 AM -0700, Dennis Glatting is alleged to have
said:
2) Under heavy I/O my systems freeze for a few seconds. I haven't looked
into why, but they are completely unresponsive. Note I am also using
compressed