On 07/29/12 14:52, Bob Friesenhahn wrote:
My opinion is that complete hard drive failure and block-level media
failure are two totally different things.
That would depend on the recovery behavior of the drive for
block-level media failure. A drive whose firmware does excessive
(reports of up …
On 07/19/12 19:27, Jim Klimov wrote:
However, if the test file was written in 128K blocks and then
is rewritten with 64K blocks, then Bob's answer is probably
valid - the block would have to be re-read once for the first
rewrite of its half; it might be taken from cache for the
second half's rewrite …
On 07/10/12 19:56, Sašo Kiselkov wrote:
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized SHA-256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512, and indeed much
slower than many of the SHA-3 candidates …
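(For anyone who wants to sanity-check the relative speed claim, a minimal sketch follows; it times both digests through OpenSSL's EVP interface on one core. OpenSSL, the 1 MiB buffer, and the iteration count are my own illustrative choices, not anything from the proposed patch. Build with something like "cc -O2 hashbench.c -lcrypto".)

/*
 * hashbench.c - rough single-threaded throughput of SHA-256 vs SHA-512.
 * On typical 64-bit CPUs SHA-512 tends to come out ahead because it
 * works on 64-bit words and digests more bytes per compression round.
 */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <openssl/evp.h>

static double mib_per_sec(const EVP_MD *md, const unsigned char *buf,
                          size_t len, int iters)
{
	unsigned char digest[EVP_MAX_MD_SIZE];
	unsigned int dlen;
	clock_t start = clock();

	for (int i = 0; i < iters; i++) {
		EVP_MD_CTX *ctx = EVP_MD_CTX_new();
		EVP_DigestInit_ex(ctx, md, NULL);
		EVP_DigestUpdate(ctx, buf, len);
		EVP_DigestFinal_ex(ctx, digest, &dlen);
		EVP_MD_CTX_free(ctx);
	}
	double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
	return ((double)len * iters / (1024.0 * 1024.0)) / secs;
}

int main(void)
{
	static unsigned char buf[1024 * 1024];	/* 1 MiB of fixed filler */
	memset(buf, 0xab, sizeof(buf));
	printf("sha256: %8.1f MiB/s\n",
	    mib_per_sec(EVP_sha256(), buf, sizeof(buf), 200));
	printf("sha512: %8.1f MiB/s\n",
	    mib_per_sec(EVP_sha512(), buf, sizeof(buf), 200));
	return 0;
}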
On 07/04/12 16:47, Nico Williams wrote:
I don't see that the munmap definition assures that anything is written to
"disk". The system is free to buffer the data in RAM as long as it likes
without writing anything at all.
Oddly enough, the manpages at the Open Group don't make this clear. So I …
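(For reference, a minimal POSIX sketch of the distinction being argued: munmap() by itself carries no durability guarantee, while msync() with MS_SYNC is the call that requests the dirty pages be written to the file before you unmap. "data.bin" and the 4 KiB length are placeholders; most error handling is omitted.)

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4096;
	int fd = open("data.bin", O_RDWR);
	if (fd < 0)
		return 1;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;
	memcpy(p, "hello", 5);	/* dirty the mapped page */
	msync(p, len, MS_SYNC);	/* synchronously write dirty pages to the file */
	munmap(p, len);		/* alone, this may leave the data only in RAM */
	close(fd);
	return 0;
}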
On 06/16/12 12:23, Richard Elling wrote:
On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
by the way
when you format, start with cylinder 1; do not use 0
There is no requirement for skipping cylinder 0 for root on Solaris, and there
never has been.
Maybe not for core Solaris, but it is …
On 06/15/12 15:52, Cindy Swearingen wrote:
It's important to identify your OS release to determine if
booting from a 4k disk is supported.
In addition, it matters whether the drive is really 4096-native (4Kn) or 512e (512-byte logical sectors emulated on 4096-byte physical sectors).
On 05/29/12 07:26, bofh wrote:
ashift:9 - is that standard?
Depends on what the drive reports as physical sector size.
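(If you want to see what a given drive actually reports, here is a hedged illumos/Solaris sketch using the DKIOCGMEDIAINFOEXT ioctl, which I believe is the same information the kernel consults when choosing ashift. The device path is only an example, and you need sufficient privilege to open it; ashift:9 corresponds to 512-byte sectors, ashift:12 to 4096.)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/dkio.h>

int main(void)
{
	int fd = open("/dev/rdsk/c0t0d0s0", O_RDONLY);	/* example path */
	if (fd < 0) {
		perror("open");
		return 1;
	}
	struct dk_minfo_ext mi;
	if (ioctl(fd, DKIOCGMEDIAINFOEXT, &mi) < 0) {
		perror("DKIOCGMEDIAINFOEXT");
		close(fd);
		return 1;
	}
	/* a 512e/4Kp drive reports 512 logical, 4096 physical */
	printf("logical sector:  %u bytes\n", mi.dki_lbsize);
	printf("physical sector: %u bytes\n", mi.dki_pbsize);
	close(fd);
	return 0;
}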
On 05/29/12 08:35, Nathan Kroenert wrote:
Hi John,
Actually, last time I tried the whole AF (4k) thing, its performance
was worse than woeful.
But admittedly, that was a little while ago.
The drives were the Seagate Green Barracudas IIRC, and performance for
just about everything was 20 MB/s per …
On 05/28/12 08:48, Nathan Kroenert wrote:
Looking to get some larger drives for one of my boxes. It runs
exclusively ZFS and has been using Seagate 2TB units up until now (which
are 512 byte sector).
Anyone offer up suggestions of either 3 or preferably 4TB drives that
actually work well with ZFS …
On 01/25/12 09:08, Edward Ned Harvey wrote:
Assuming the failure rate of drives is not linear, but skewed toward higher
failure rate after some period of time (say, 3 yrs) ...
See section 3.1 of the Google study:
http://research.google.com/archive/disk_failures.pdf
although section 4.2 …
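(Aside, since "not linear" is doing a lot of work here: the textbook way to model a failure rate that rises with age is a Weibull hazard, h(t) = (k/lambda) * (t/lambda)^(k-1), which is increasing in t exactly when the shape parameter k > 1, while k = 1 collapses to the constant, memoryless exponential rate. That is a modeling assumption on my part, not something the Google paper claims its data fits.)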
On 01/24/12 17:06, Gregg Wonderly wrote:
What I've noticed is that when I have my drives in a situation of small
airflow, and hence hotter operating temperatures, my disks will drop
quite quickly.
While I *believe* the same thing and thus have over-provisioned
airflow in my cases (for both drives …
On 01/16/12 11:08, David Magda wrote:
The conclusions are hardly unreasonable:
While the reliability mechanisms in ZFS are able to provide reasonable
robustness against disk corruptions, memory corruptions still remain a
serious problem to data integrity.
I've heard the same thing said ("use …
On 01/08/12 10:15, John Martin wrote:
I believe Joerg Moellenkamp published a discussion
several years ago on how the L1ARC attempts to deal with the pollution
of the cache by large streaming reads, but I don't have
a bookmark handy (nor the knowledge of whether the
behavior is still accurate) …
On 01/08/12 20:10, Jim Klimov wrote:
Is it true or false that: ZFS might skip the cache and
go to disks for "streaming" reads?
I don't believe this was ever suggested. Instead, if
data is not already in the file system cache and a
large read is made from disk, should the file system
put this data …
On 01/08/12 11:30, Jim Klimov wrote:
However for smaller servers, such as home NASes which have
about one user overall, pre-reading and caching files even
for a single use might be an objective per se - just to let
the hard-disks spin down. Say, if I sit down to watch a
movie from my NAS, it is …
On 01/08/12 09:30, Edward Ned Harvey wrote:
In the case of your MP3 collection... Probably the only thing you can do is
to write a script which will simply go read all the files you predict will
be read soon. The key here is the prediction - there's no way ZFS,
Solaris, or any other OS in the …
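(In that spirit, a hedged sketch of such a warm-the-cache tool: plain sequential reads are all it takes, since the point is just to pull the blocks into the ARC ahead of time. Everything here, buffer size included, is an arbitrary illustration rather than anything from the thread. Usage: ./warmcache file1 file2 ...)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

static char buf[1 << 20];	/* 1 MiB read buffer; size is arbitrary */

int main(int argc, char **argv)
{
	for (int i = 1; i < argc; i++) {
		int fd = open(argv[i], O_RDONLY);
		if (fd < 0) {
			perror(argv[i]);
			continue;
		}
		while (read(fd, buf, sizeof(buf)) > 0)
			;	/* discard the data; the ARC fill is the point */
		close(fd);
	}
	return 0;
}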
On 09/12/11 10:33, Jens Elkner wrote:
Hmmm, at least if S11x, a ZFS mirror, ICH10 and the cmdk (IDE) driver are involved,
I'm 99.9% confident that "a while" turns out to be some days or weeks only -
no matter what Platinum-Enterprise-HDDs you use ;-)
On Solaris 11 Express with a dual drive mirror:
http://wdc.custhelp.com/app/answers/detail/a_id/1397/~/difference-between-desktop-edition-and-raid-%28enterprise%29-edition-drives
Is there a list of zpool versions for development builds?
I found:
http://blogs.oracle.com/stw/entry/zfs_zpool_and_file_system
where it says Solaris 11 Express is zpool version 31, but my
system has BEs back to build 139 and I have not done a zpool upgrade
since installing this system, yet it …