I have a pool with a zvol (OpenSolaris b134).
When I try zpool destroy tank I get "pool is busy":
# zpool destroy -f tank
cannot destroy 'tank': pool is busy
When I try to destroy the zvol first I get "dataset is busy":
# zfs destroy -f tank/macbook0-data
cannot destroy 'tank/macbook0-data': dataset is busy
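A busy zvol is often still in use as a swap or dump device, or held open by some process. A minimal troubleshooting sketch (the pool and zvol names match the example above; the swap/dump checks are the usual suspects, not a confirmed diagnosis):

```shell
# Check whether the zvol is in use as swap or as the dump device;
# either will hold the dataset busy:
swap -l        # look for /dev/zvol/dsk/tank/macbook0-data
dumpadm        # look for the same zvol as the dump device

# If it is listed, release it first:
swap -d /dev/zvol/dsk/tank/macbook0-data
dumpadm -d swap        # or point the dump device elsewhere

# Otherwise, look for processes holding the device node open:
fuser /dev/zvol/rdsk/tank/macbook0-data

# Then retry:
zfs destroy tank/macbook0-data
zpool destroy tank
```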
Is there any way to run a start-up script before a non-root pool is mounted?
For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
So I need to create the ramdisk before the actual pool is imported, otherwise it
complains that the log device is missing :)
For sure I can manually remove and add it by
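For reference, the manual version of that workaround might look like this (device name and size are examples; the later replies in this thread explain why a volatile ramdisk defeats the ZIL's purpose):

```shell
# Create a 1 GB ramdisk (name and size are arbitrary examples):
ramdiskadm -a zilvol 1g

# Attach it to the pool as a separate log device:
zpool add tank log /dev/ramdisk/zilvol

# ...and before shutdown, remove it again so the pool does not
# come back up complaining about a missing log device:
zpool remove tank /dev/ramdisk/zilvol
ramdiskadm -d zilvol
```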
I have a box running snv_134 that had a little boo-boo.
The problem first started a couple of weeks ago with some corruption on two
filesystems in an 11-disk 10TB raidz2 set. I ran a couple of scrubs that
revealed a handful of corrupt files on my two deduplicated ZFS filesystems. No
biggie.
I
On Wed, 2010-08-18 at 00:16 -0700, Alxen4 wrote:
Is there any way to run a start-up script before a non-root pool is mounted?
For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
So I need to create the ramdisk before the actual pool is imported, otherwise it
complains that the log device is
Alxen4 wrote:
Is there any way to run a start-up script before a non-root pool is mounted?
For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
So I need to create the ramdisk before the actual pool is imported, otherwise it
complains that the log device is missing :)
For sure I can manually
Any argument as to why?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Wed, 2010-08-18 at 00:49 -0700, Alxen4 wrote:
Any argument as to why?
Because a RAMDISK defeats the purpose of a ZIL, which is to provide
fast *stable storage* for data being written. If you are using a
RAMDISK, you are not getting any non-volatility guarantees that the ZIL
is supposed to
Andrew Gabriel wrote:
Alxen4 wrote:
Is there any way to run a start-up script before a non-root pool is mounted?
For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
So I need to create the ramdisk before the actual pool is imported, otherwise
it complains that the log device is missing :)
For
Thanks...Now I think I understand...
Let me summarize it, and let me know if I'm wrong.
Disabling the ZIL converts all synchronous calls to asynchronous, which makes ZFS
report data acknowledgment before it has actually been written to stable storage,
which in turn improves performance but might cause
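For completeness, here is roughly how the ZIL gets disabled in this era of OpenSolaris (a sketch; on b134 the only switch is the system-wide zil_disable tunable, while the per-dataset sync property arrived in later builds):

```shell
# b134 and earlier: system-wide tunable in /etc/system (needs a reboot):
#   set zfs:zil_disable = 1

# Or temporarily via mdb; only affects filesystems mounted afterwards:
echo "zil_disable/W0t1" | mdb -kw

# Later builds (snv_140 onwards): per-dataset, no reboot required:
zfs set sync=disabled tank/macbook0-data
```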
Alxen4 wrote:
Thanks...Now I think I understand...
Let me summarize it, and let me know if I'm wrong.
Disabling the ZIL converts all synchronous calls to asynchronous, which makes ZFS
report data acknowledgment before it has actually been written to stable storage,
which in turn improves performance
On Wed, 2010-08-18 at 01:20 -0700, Alxen4 wrote:
Thanks...Now I think I understand...
Let me summarize it, and let me know if I'm wrong.
Disabling the ZIL converts all synchronous calls to asynchronous, which makes ZFS
report data acknowledgment before it has actually been written to stable
Hi,
We are considering using ZFS-based storage as a staging disk for Networker.
We're aiming at providing enough storage to be able to keep 3 months' worth of
backups on disk before they're moved to tape.
To provide storage for 3 months of backups, we want to utilize the dedup
functionality in
Thanks. Everything is clear now.
On 18 Aug 2010, at 10:20, Alxen4 wrote:
My NFS client is ESXi, so the major question is: is there a risk of corruption
for VMware images if I disable the ZIL?
I make the same use of ZFS. I had a huge improvement in performance by using
mirrors instead of raidz. How is your zpool
Miles Nordin car...@ivy.net wrote:
gd == Garrett D'Amore garr...@nexenta.com writes:
Joerg is correct that CDDL code can legally live right
alongside the GPLv2 kernel code and run in the same program.
gd My understanding is that no, this is not possible.
GPLv2 and CDDL
On Wed, Aug 18, 2010 at 7:51 AM, Peter Tribble peter.trib...@gmail.com wrote:
I tried this with NetBackup, and decided against it pretty rapidly.
Basically, we
got hardly any dedup at all. (Something like 3%; compression gave us
much better results.) Tiny changes in block alignment completely
On Wed, Aug 18, 2010 at 12:16:04AM -0700, Alxen4 wrote:
Is there any way to run a start-up script before a non-root pool is mounted?
For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
So I need to create the ramdisk before the actual pool is imported, otherwise it
complains that the log device
On Wed, 18 Aug 2010, Joerg Schilling wrote:
Linus is right with his primary decision, but this also applies to static
linking. See Lawrence Rosen for more information; the GPL does not distinguish
between static and dynamic linking.
GPLv2 does not address linking at all and only makes vague
Frank wrote:
Have you dealt with RedHat Enterprise support? lol.
Have you dealt with Sun/Oracle support lately? lololol It's a disaster.
We've had a failed disk in a fully supported Sun system for over 3 weeks,
Explorer data turned in, and been given the runaround forever. The 7000
series
It's hard to tell what caused the SMART predictive failure message;
it could be something like a temperature fluctuation. If ZFS noticed that a
disk wasn't available yet, then I would expect a message to that effect.
In any case, I think I would have a replacement disk available.
The important thing is that you continue to
All of this is entirely legal conjecture, by people who aren't lawyers,
for issues that have not been tested in court and are clearly subject to
interpretation. Since it no longer is relevant to the topic of the
list, can we please either take the discussion offline, or agree to just
let the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Alxen4
Disabling the ZIL converts all synchronous calls to asynchronous, which
makes ZFS report data acknowledgment before it has actually been written
to stable storage, which in turn improves
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Alxen4
For example I'm trying to use ramdisk as ZIL device (ramdiskadm )
Other people have already corrected you about using a ramdisk for the log.
It's already been said: use an SSD, or disable the ZIL
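The SSD route amounts to a one-liner; the device names below are hypothetical, and mirroring the log is optional but commonly advised since the slog holds not-yet-committed synchronous writes:

```shell
# Attach an SSD as a dedicated log (slog) device:
zpool add tank log c2t0d0

# Or mirrored across two SSDs for safety:
zpool add tank log mirror c2t0d0 c2t1d0

# Verify the pool layout:
zpool status tank
```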
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ethan Erchinger
We've had a failed disk in a fully supported Sun system for over 3 weeks,
Explorer data turned in, and been given the runaround forever. The
7000
series support is no better,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Garrett D'Amore
interpretation. Since it no longer is relevant to the topic of the
list, can we please either take the discussion offline, or agree to
just
let the topic die (on the basis
Edward wrote:
That is really weird. What are you calling failed? If you're getting
either a red blinking light, or a checksum failure on a device in a
zpool...
You should get your replacement with no trouble.
Yes, failed, with all the normal failed signs, cfgadm not finding it,
FAULTED in
On Aug 18, 2010, at 5:11 AM, Paul Kraus wrote:
On Wed, Aug 18, 2010 at 7:51 AM, Peter Tribble peter.trib...@gmail.com
wrote:
I tried this with NetBackup, and decided against it pretty rapidly.
Basically, we
got hardly any dedup at all. (Something like 3%; compression gave us
much better
What you say is true only on the system itself. On an NFS client system, 30
seconds of lost data in the middle of a file (as per my earlier example) is a
corrupt file.
-----Original Message-----
Subject: Re: [zfs-discuss] Solaris startup script location
From: Edward Ned Harvey sh...@nedharvey.com
Date:
I had a perfectly working 7-drive raidz pool using some on-board SATA
connectors and some on PCI SATA controller cards. My pool was using 500GB
drives. I had the stupid idea to replace my 500GB drives with 2TB (Mitsubishi)
drives. This process resulted in me losing much of my data (see my
Garrett D'Amore garr...@nexenta.com wrote:
All of this is entirely legal conjecture, by people who aren't lawyers,
for issues that have not been tested in court and are clearly subject to
interpretation. Since it no longer is relevant to the topic of the
list, can we please either take the
You need to let the resilver complete before you can detach the spare. This is
a known problem, CR 6909724.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724
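Per the CR 6909724 workaround above, the sequence is to watch the resilver finish and only then detach the spare (pool and device names are placeholders):

```shell
# Wait for "resilver completed" to appear in the status output:
zpool status -v tank

# Once the resilver has finished, detach the spare:
zpool detach tank c1t5d0
```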
On 18 Aug 2010, at 14:02, Dr. Martin Mundschenk wrote:
Hi!
I had trouble with my raidz, in that some of
On Wed, August 18, 2010 15:14, Linder, Doug wrote:
I've noticed that every time someone mentions using NFS with ZFS here, they
always seem to be using NFSv3. Is there a reason for this that I just
don't know about? To me, using NFSv4 is a no-brainer. ZFS supports it
natively, it supports
On 08/19/10 04:56 AM, seth keith wrote:
I had a perfectly working 7-drive raidz pool using some on-board SATA
connectors and some on PCI SATA controller cards. My pool was using 500GB
drives. I had the stupid idea to replace my 500GB drives with 2TB (Mitsubishi)
drives. This process
Hmm, zdb has been running since last night. Does anyone have suggestions or
advice on how to proceed with this issue?
Thanks,
Matthew Ellison
Begin forwarded message:
From: Matthew Ellison m...@mattellison.com
Date: August 18, 2010 3:15:39 AM EDT
To: zfs-discuss@opensolaris.org
Subject: Kernel
This is a 64-bit system, and I already used 2 of these drives in a raidz1 pool
and they worked great, except I needed to use the SATA controller card and not
the motherboard SATA. Any ideas?
On Wed, Aug 18, 2010 at 1:34 PM, Miles Nordin car...@ivy.net wrote:
ee == Ethan Erchinger et...@plaxo.com writes:
ee We've had a failed disk in a fully supported Sun system for over
ee 3 weeks, Explorer data turned in, and been given the runaround
ee forever.
that sucks.
but
In message 4c6c4e30.7060...@ianshome.com, Ian Collins writes:
If you count Monday this week as lately, we have never had to wait more
than 24 hours for replacement drives for our 45x0 or 7000 series
Same here, but two weeks ago for a failed drive in an X4150.
Last week SunSolve was sending my
On 18 Aug 2010, at 21:24, David Magda wrote:
On Wed, August 18, 2010 15:14, Linder, Doug wrote:
I've noticed that every time someone mentions using NFS with ZFS here, they
always seem to be using NFSv3. Is there a reason for this that I just
don't know about?
At $WORK it's
Also the linux NFSv4 client is bugged (as in hang-the-whole-machine bugged).
I am deploying a new osol fileserver for home directories and I'm using NFSv3
+ automounter (because I am also using one dataset per user, and thus I have
to mount each home dir separately).
We are also in the
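The one-dataset-per-user layout described above is typically wired up like this (paths, user name, quota value, and server name are examples; the map entry uses standard Solaris autofs syntax):

```shell
# One dataset per user, shared over NFS:
zfs create -o sharenfs=on -o quota=20g tank/home/alice

# /etc/auto_home wildcard entry so each home directory
# is mounted on demand by the automounter:
#   *   fileserver:/tank/home/&
```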
Well, I seem to have hit on the hot-button topic of NFSv4 (good
thing I didn't mention that we are running IPv4).
To get back to the topic: is anyone running ZFS group quotas on a large
filesystem with lots of smaller files and thousands
of groups per filesystem, or have any quota-related
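Group quotas as asked about above are set per dataset; a sketch with hypothetical group, user, and dataset names:

```shell
# Limit the 'staff' group to 500 GB on this filesystem:
zfs set groupquota@staff=500g tank/projects

# Report per-group usage against the quotas:
zfs groupspace tank/projects

# The per-user equivalents:
zfs set userquota@alice=50g tank/projects
zfs userspace tank/projects
```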
On 8/18/10 3:58 PM -0400 Linder, Doug wrote:
Erik Trimble wrote:
That said, stability vs new features has NOTHING to do with the OSS
development model. It has everything to do with the RELEASE model.
[...]
All that said, using the OSS model for actual *development* of an
Operating System is
On Aug 18, 2010, at 10:43 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Wed, 18 Aug 2010, Joerg Schilling wrote:
Linus is right with his primary decision, but this also applies to static
linking. See Lawrence Rosen for more information; the GPL does not distinguish
between
On 08/18/10 08:40 AM, Joerg Schilling wrote:
Ian Collinsi...@ianshome.com wrote:
Some applications benefit from the extended register set and function
call ABI; others suffer due to increased sizes impacting the cache.
Well, please verify your claims as they do not meet my
On 2010-Aug-18 04:40:21 +0800, Joerg Schilling
joerg.schill...@fokus.fraunhofer.de wrote:
Ian Collins i...@ianshome.com wrote:
Some applications benefit from the extended register set and function
call ABI; others suffer due to increased sizes impacting the cache.
Well, please verify your
Ross Walker wrote:
On Aug 18, 2010, at 10:43 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Wed, 18 Aug 2010, Joerg Schilling wrote:
Linus is right with his primary decision, but this also applies to static
linking. See Lawrence Rosen for more information; the GPL does