On 07/29/12 14:52, Bob Friesenhahn wrote:
My opinion is that complete hard drive failure and block-level media
failure are two totally different things.
That would depend on the recovery behavior of the drive for
block-level media failure. A drive whose firmware does excessive
(reports of up
On 07/19/12 19:27, Jim Klimov wrote:
However, if the test file was written in 128K blocks and then
is rewritten with 64K blocks, then Bob's answer is probably
valid - the block would have to be re-read once for the first
rewrite of its half; it might be taken from cache for the
second half's
On 07/10/12 19:56, Sašo Kiselkov wrote:
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512 and indeed much
slower than many of the SHA-3
On 07/04/12 16:47, Nico Williams wrote:
I don't see that the munmap definition assures that anything is written to
disk. The system is free to buffer the data in RAM as long as it likes
without writing anything at all.
Oddly enough the manpages at the Open Group don't make this clear. So
I
On 06/16/12 12:23, Richard Elling wrote:
On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
by the way,
when you format, start with cylinder 1; do not use 0
There is no requirement for skipping cylinder 0 for root on Solaris, and there
never has been.
Maybe not for core Solaris, but it
On 06/15/12 15:52, Cindy Swearingen wrote:
It's important to identify your OS release to determine if
booting from a 4k disk is supported.
In addition, whether the drive is really 4096p or 512e/4096p.
On 05/28/12 08:48, Nathan Kroenert wrote:
Looking to get some larger drives for one of my boxes. It runs
exclusively ZFS and has been using Seagate 2TB units up until now (which
are 512 byte sector).
Can anyone offer up suggestions of either 3 or preferably 4TB drives that
actually work well with
On 05/29/12 08:35, Nathan Kroenert wrote:
Hi John,
Actually, last time I tried the whole AF (4k) thing, its performance
was worse than woeful.
But admittedly, that was a little while ago.
The drives were the Seagate Green Barracuda IIRC, and performance for
just about everything was 20MB/s
On 05/29/12 07:26, bofh wrote:
ashift:9 is that standard?
Depends on what the drive reports as physical sector size.
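(For reference, one way to check what a pool actually got - the pool name "tank" below is just an example - is to grep the cached config that zdb prints; the ashift value appears per top-level vdev, where 9 means 512-byte sectors and 12 means 4 KB:)
zdb -C tank | grep ashift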
On 01/25/12 09:08, Edward Ned Harvey wrote:
Assuming the failure rate of drives is not linear, but skewed toward higher
failure rate after some period of time (say, 3 yrs) ...
See section 3.1 of the Google study:
http://research.google.com/archive/disk_failures.pdf
although section 4.2
On 01/24/12 17:06, Gregg Wonderly wrote:
What I've noticed, is that when I have my drives in a situation of small
airflow, and hence hotter operating temperatures, my disks will drop
quite quickly.
While I *believe* the same thing and thus have over provisioned
airflow in my cases (for both
On 01/16/12 11:08, David Magda wrote:
The conclusions are hardly unreasonable:
While the reliability mechanisms in ZFS are able to provide reasonable
robustness against disk corruptions, memory corruptions still remain a
serious problem to data integrity.
I've heard the same thing said (use
On 01/08/12 20:10, Jim Klimov wrote:
Is it true or false that: ZFS might skip the cache and
go to disks for streaming reads?
I don't believe this was ever suggested. Instead, if
data is not already in the file system cache and a
large read is made from disk should the file system
put this
On 01/08/12 10:15, John Martin wrote:
I believe Joerg Moellenkamp published a discussion
several years ago on how the L1ARC attempts to deal with the pollution
of the cache by large streaming reads, but I don't have
a bookmark handy (nor the knowledge of whether the
behavior is still accurate
On 01/08/12 09:30, Edward Ned Harvey wrote:
In the case of your MP3 collection... Probably the only thing you can do is
to write a script which will simply go read all the files you predict will
be read soon. The key here is the prediction - There's no way ZFS or
Solaris, or any other OS in
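(A minimal sketch of such a warm-up pass, assuming the files live under a hypothetical /tank/media - it simply reads everything once so it ends up in the ARC:)
find /tank/media -type f -exec cat {} \; > /dev/null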
On 01/08/12 11:30, Jim Klimov wrote:
However for smaller servers, such as home NASes which have
about one user overall, pre-reading and caching files even
for a single use might be an objective per se - just to let
the hard-disks spin down. Say, if I sit down to watch a
movie from my NAS, it is
On 09/12/11 10:33, Jens Elkner wrote:
Hmmm, at least if S11x, a ZFS mirror, ICH10 and the cmdk (IDE) driver are involved,
I'm 99.9% confident that "a while" turns out to be only some days or weeks
- no matter what Platinum Enterprise HDDs you use ;-)
On Solaris 11 Express with a dual drive mirror,
http://wdc.custhelp.com/app/answers/detail/a_id/1397/~/difference-between-desktop-edition-and-raid-%28enterprise%29-edition-drives
Is there a list of zpool versions for development builds?
I found:
http://blogs.oracle.com/stw/entry/zfs_zpool_and_file_system
where it says Solaris 11 Express is zpool version 31, but my
system has BEs back to build 139, and I have not done a zpool upgrade
since installing this system, but it
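(For what it's worth, the running bits will list every pool version they support, and the version property shows what a given pool is currently at - "rpool" here is just an example name:)
zpool upgrade -v
zpool get version rpool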
Tim Cook tim at cook.ms writes:
You are not a court of law, and that statement has not been tested. It is
your opinion and nothing more. I'd appreciate it if every time you repeated that
statement, you'd preface it with "in my opinion" so you don't have people
running around believing what they're
the anywhere in the internet. Any hint?
Martin
cannot imagine that NFS
performance was ever no more than 1/3 of the speed of a 10BaseT connection
before...
Martin
which is OK, but on a GBit network more should be
possible, since the server's disk performance reaches up to 120 MB/sec.
Does anyone have a suggestion for how I can at least speed up the writes?
Martin
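(One mitigation that often comes up for slow synchronous NFS writes is a separate ZIL log device; whether it helps depends on whether the writes really are sync-bound. A sketch, assuming a pool named tank and an SSD at the hypothetical device c1t2d0:)
zpool add tank log c1t2d0
zpool status tank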
to be small and cute.
Not reliable.
The MacMini and the disks themselves are just fine. The problem seems to be the
SATA bridges to USB/FW. They just stall when the load gets heavy.
Martin
system without having
an enterprise solution: eSATA, USB, FireWire, FibreChannel?
Martin
After about 62 hours and 90%, the resilvering process got stuck. For 12 hours
nothing has happened. Thus, I cannot detach the spare device. Is there a
way to get the resilvering process running again?
Martin
On 18.08.2010 at 20:11, Mark Musante wrote:
You need to let the resilver
does the system get stuck? Even when a USB plug is unhooked, why
does the spare not go online?
Martin
a device faults?
2. Why does the hot spare not go online? (The manual says that going online
automatically is the default behavior.)
3. Why does the system not boot to the usual run level when a zpool is in a
degraded state at boot time?
Regards,
Martin
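(On question 2: the spare can always be swapped in by hand with zpool replace, and the pool's autoreplace property controls whether a new disk in the same physical slot is used automatically. A sketch with hypothetical names - pool tank, faulted disk c2t3d0, spare c2t4d0:)
zpool get autoreplace tank
zpool set autoreplace=on tank
zpool replace tank c2t3d0 c2t4d0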
The 4 disks attached to the ahci driver should be using NCQ. The two
cmdk disks will not have NCQ capability as they are under control of
the legacy ata driver. What does your pool topology look like? Can you
try removing the cmdk disks from your pool?
You can also verify if your disks are
and if the internal hard drive fails, can I reboot the
system with the detached internal drive but with the degraded mirror half on
the external drive?
The Mac is definitely capable of booting from all kinds of devices. But does
OSOL support it in the way described above?
Regards,
Martin
, might that contribute to the problem?
Perhaps a zfs mount -a bug in connection with the -R parameter?
Greetings, Martin
in the bug report does not exist before
the zfs set mountpoint command.
Greetings, Martin
We are also running into this bug.
Our system is a Solaris 10u4
SunOS sunsystem9 5.10 Generic_127112-10 i86pc i386 i86pc
ZFS version 4
We opened a Support Case (Case ID 71912304) which after some discussion came to
the conclusion that we should not use /etc/reboot for rebooting.
This leads me
especially for the sharing part, but more data is needed
to see what's going on here.
what data do you need?
Greetings, Martin
, is my hardware configuration worthless?
Regards,
Martin
such a mess happen and how do I get it
back straight?
Regards,
Martin
I have written a Python script that makes it possible to get back already deleted files
and pools/partitions. This is highly experimental, but I managed to get back a
month's work when all the partitions were deleted by accident (and of course
backups are for the weak ;-)
I hope someone can pass this
I forgot to add the script
zfs_revert.py
The links work fine if you take the * off from the end... sorry about that
You might want to check out this thread:
http://opensolaris.org/jive/thread.jspa?messageID=435420
I encountered the same problem... like I said in the first post... the zpool command
freezes. Does anyone know how to make it respond again?
I have no idea why this forum just makes files disappear??? I will put a link
tomorrow... a file was attached before...
I already got my files back actually, and the disk already contains new pools, so
I have no idea how it was set.
I have to make a VirtualBox installation and test it.
Can you please tell me how to set the failmode?
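(Setting it is a one-liner - failmode takes wait, continue or panic; "tank" is just an example pool name:)
zpool set failmode=continue tank
zpool get failmode tank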
Did anyone reply to this question?
We have the same issue, and our Windows admins do not see why the iSCSI target
should be disconnected when the underlying storage is extended
You are the 2nd customer I've ever heard of to use shrink.
This attitude seems to be a common theme in ZFS discussions: No enterprise
uses shrink, only grow.
Maybe. The enterprise I work for requires that every change be reversible and
repeatable. Every change requires a backout plan and
C,
I appreciate the feedback and, like you, do not wish to start a side rant, but
rather understand this, because it is completely counter to my experience.
Allow me to respond based on my anecdotal experience.
What's wrong with: make a new pool, safely copy the data, verify the data,
and then
Richard wrote:
Preface: yes, shrink will be cool. But we've been running highly available,
mission critical datacenters for more than 50 years without shrink being
widely available.
I would debate that. I remember batch windows and downtime delaying one's
career movement. Today we are
Bob wrote:
Perhaps the problem is one of educating the customer so that they can
amend their accounting practices. Different business groups can
share the same pool if necessary.
Bob, while I don't mean to pick on you, that statement captures a major
thinking flaw in IT when it comes
With RAID-Z, stripes can be of variable width, meaning that, say, a single row
in a 4+2 configuration might have two stripes of 1+2. In other words, there
might not be enough space in the new parity device.
Wow -- I totally missed that scenario. Excellent point.
I did write up the steps
I don't see much similarity between mirroring and raidz other than
that they both support redundancy.
A single parity device against a single data device is, in essence, mirroring.
For all intents and purposes, raid and mirroring with this configuration are
one and the same.
A RAID system
Don't hear about triple-parity RAID that often:
I agree completely. In fact, I have wondered (probably in these forums) why
we don't bite the bullet and make a generic raidzN, where N is any number >= 0.
In fact, get rid of mirroring, because it clearly is a variant of raidz with
two devices.
Did anyone ever have success with this?
I'm trying to add a usb flash device as rpool cache, and am hitting the same
problem,
even after working through the SMI/EFI label and other issues above.
I played with adding a USB stick as L2ARC a few versions of SXCE ago, pre build 104.
At the time, I
guess that's
something I should wait for.
--
Martin Blom --- [EMAIL PROTECTED]
Eccl 1:18 http://martin.blom.org/
it more than usual
when the procedure is done?
--
Martin Blom --- [EMAIL PROTECTED]
Eccl 1:18 http://martin.blom.org/
doesn't have a driver for the SATA chipset on this card.
It is listed as verified for Sparc Solaris because the USB and FireWire ports
will work on Sparc systems.
--
Martin Winkelman - [EMAIL PROTECTED] - 303-272-3122
http://www.sun.com/solarisready
When I attempt again to import using zdb -e ztank
I still get zdb: can't open ztank: I/O error
and zpool import -f, whilst it starts and seems to
access the disks sequentially, stops at the 3rd
one (not sure which precisely - it spins it up and the
process stops right there, and the system
I will be out of the office starting 09/05/2008 and will not return until
09/08/2008.
I will respond to your message when I return.
I have created a zvol. My client computer (Windows) has the volume connected
fine.
But when I resize the zvol using:
zfs set volsize=20G pool/volumes/v1
.. it disconnects the client. Is this by design?
Hi,
in which OpenSolaris (Nevada) build is this fix included?
thanks,
Martin
On 13 Aug, 2008, at 18:52, Bob Friesenhahn wrote:
I see that a driver patch has now been released for marvell88sx
hardware. I expect that this is the patch that Thumper owners have
been anxiously waiting
I read this (http://blogs.sun.com/roch/entry/when_to_and_not_to) blog regarding
when and when not to use raidz. There is an example of a plain striped
configuration and a mirror configuration. (See below)
M refers to a 2-way mirror and S to a simple dynamic stripe.
Config Blocks Available
Hello! I'm new to ZFS and have some configuration questions.
What's the difference, performance-wise, between the configurations below?
* In the first configuration, can I lose 1 disk? And are the disks striped to
gain performance, as they act as one vdev?
* In the second configuration, can I lose 2
I have a server with a huge number of datasets (around 9000)
When the pool containing the datasets is imported on boot up, a few (10)
datasets are not mounted and thus not exported via nfs. Which dataset is not
mounted is random.
All datasets are exported via nfs. A zfs import takes around 30
Hi everyone,
after using Linux for 12 years, I have now decided to give OpenSolaris a try, using
it as the OS for my new home filer.
I've created a zpool, and multiple ZFS filesystems on there; two of those are
NAME       USED  AVAIL  REFER  MOUNTPOINT
tank/data
According to PerterB in #opensolaris, I'd need NFS4 mirror-mounts for that.
I decided to instead just setup the automounter on the clients and put the
directories in the automount-map :)
somewhere?
Are sharesmb and sharenfs tied together somehow, or can they be separated?
Cheers,
-Martin.
Hi Mark,
Sharemgr output:
-bash-3.2# sharemgr show -vp
default nfs=()
smb smb=()
zfs
    zfs/rpool/export smb=()
          export=/export
    zfs/store/movies smb=()
          Movies=/store/movies
    zfs/store/overlord2 nfs=() smb=()
          overlord2=/store/overlord2
    zfs/store/tv smb=()
running eject
unnamed_rmdisk.
--
Martin Winkelman - [EMAIL PROTECTED] - 303-272-3122
http://www.sun.com/solarisready/
rmdisk
#
--
Martin Winkelman - [EMAIL PROTECTED] - 303-272-3122
http://www.sun.com/solarisready/
Oh, it should say retryable and normal write errors - I have permanent
errors too
/Martin
On 2 apr 2008, at 00:55, Richard Elling wrote:
Martin Englund wrote:
I've got a newly created zpool where I know (from the previous UFS)
that one of the disks has retryable write errors.
What
online z2 c5t4d0
cheers,
/Martin
ONLINE 0 0 0
How do I get this back to normal?
cheers,
/Martin
0 0 0
cheers,
/Martin
Bill Shannon wrote:
Marty Itzkowitz wrote:
Interesting problem. I've used disk rattle as a measurement of I/O activity
before there were such tools for measurement. It's crude, but effective.
To answer your question: you could try er_kernel. It uses DTrace to
do statistical callstack
do statistical callstack
I set /etc/system's zfs:zfs_arc_max = 0x1000 and it seems better now.
I had previously tried setting it to 2 GB rather than 256 MB as above without
success... I should have tried much lower!
It seems that when I perform I/O through a Windows XP HVM, I get a reasonable
I/O rate, but I'm not
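(For anyone searching later: the /etc/system syntax for capping the ARC looks like the line below, and a reboot is needed for it to take effect. 0x10000000 = 256 MB is only an illustrative value:)
set zfs:zfs_arc_max = 0x10000000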
Regarding the following that I also hit, see
http://www.opensolaris.org/jive/thread.jspa?messageID=180995
and if any further details or tests are required, I would be happy to assist.
3/ Problem with DMA under Xen ... e.g. my Areca RAID card works
perfectly on an 8GB box without Xen, but because
I used a USB stick, and the first time I used it, I used something similar to
zpool create black c5t0d0p0 # ie with the p0 pseudo partition
and used it happily for some while.
Some weeks later, I wanted to use the stick again, starting afresh, but this
time used
zpool create black c5t0d0 # ie
and when I re-created it, the duplicate disappeared...
# zpool destroy black
# zpool create -f newblack c5t0d0
# zpool export newblack
# zpool import
pool: newblack
id: 5325813934475784040
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
In this PC, I'm using the PCI card
http://www.intel.com/network/connectivity/products/pro1000gt_desktop_adapter.htm
but, more recently, I'm using the PCI Express card
http://www.intel.com/network/connectivity/products/pro1000pt_desktop_adapter.htm
Note that the latter didn't have PXE and the
kugutsum
I tried with just 4GB in the system, and the same issue. I'll try 2GB
tomorrow and see if it's any better. (PS: how did you determine that was the
problem in your case?)
cheers
Martin
80M, then 209 records of 2.5M (pretty consistent), then the final 11 records
climbing to 2.82, 3.29, 3.05, 3.32, 3.17, 3.20, 3.33, 4.41, 5.44, 8.11
regards
Martin
Hello
I've got Solaris Express Community Edition build 75 (75a) installed on an Asus
P5K-E/WiFi-AP (P35/ICH9R based) board. CPU=Q6700, RAM=8GB, disk=Samsung
HD501LJ and (older) Maxtor 6H500F0.
When the O/S is running on bare metal, i.e. no xVM/Xen hypervisor, then
everything is fine.
When
on the
way...
it might be a FAQ or known problem, but it's rather dangerous. Is this
being worked on? USB stick removal should not panic the kernel, should it?
Martin
Menno Lageman wrote:
Martin Man wrote:
I insert the stick, and how can I figure out what pools are available
for 'zpool import' without knowing their name?
zpool list does not seem to be listing those,
A plain 'zpool import' should do the trick.
yep, works like a charm, that one
via redundancy failure on mirror or raidz(2)) and you
lose a device the system panics. This is a known issue/bug/feature (pick
one depending on your view) that has been discussed multiple times on the
list.
discussed yes, I think I remember that, reported? being worked on?
-Wade
thanx,
Martin
Quoth Steven Sim on Thu, May 17, 2007 at 09:55:37AM
+0800:
Gurus;
I am exceedingly impressed by ZFS, although it is my humble opinion
that Sun is not doing enough evangelizing for it.
What else do you think we should be doing?
David
I'll jump in here. I am a huge
postings about Drobo on the web, including:
http://www.engadget.com/2007/04/09/drobo-the-worlds-first-storage-robot/
---8---
cheers,
/Martin
--
Martin Englund, Java Security Engineer, Java SE, Sun Microsystems Inc.
Email: [EMAIL PROTECTED] Time Zone: GMT+2 PGP: 1024D/AA514677
The question is not if you
Hi,
I have a zpool with only one disk. No mirror.
I have some data in the file system.
Is it possible to make my zpool redundant by adding a new disk to the pool
and making it a mirror with the initial disk?
If yes, how?
Thanks
Martin
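(zpool attach turns a single-disk vdev into a mirror and resilvers the new side onto the added disk; a sketch with hypothetical device names - pool tank, existing disk c0t0d0, new disk c0t1d0:)
zpool attach tank c0t0d0 c0t1d0
zpool status tank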
Jeremy Teo wrote:
On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
nice to have?
This is a pretty high priority. We are working on it.
Good news! Where is the discussion on the best approach to take?
Hello Kyle,
Wednesday, January 10, 2007, 5:33:12 PM, you wrote:
KM> Remember though that it's been mathematically figured that the
KM> disadvantages to RaidZ start to show up after 9 or 10 drives. (That's
Well, nothing like this was proved and definitely not mathematically.
It's
How can I monitor ZFS, or the zpool activity?
I want to know if anything wrong is going on.
If I could receive those warnings by email, it would be great :)
Martin
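(A rough sketch of the usual approach, with "tank" as an example pool name: zpool status -x prints a short "all pools are healthy" line when nothing is wrong, so a cron job can mail its output whenever it says anything else, and zpool iostat shows ongoing activity:)
zpool status -x
zpool iostat tank 5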
Did I miss something on this thread? Was the root cause of the
15-minute fsync ever actually determined?
On Wednesday, June 21, 2006 2:12 PM, eric kustarz wrote: