> From what I've noticed, if one destroys a dataset that is, say, 50-70 TB and
> reboots before the destroy is finished, it can take up to several _days_ before
> it's back up again.
So, nowadays I'm doing rm -fr BEFORE issuing zfs destroy whenever possible.
Yours
Markus Kovero
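A minimal sketch of that workaround, assuming a hypothetical dataset tank/data
mounted at /tank/data:

    # Empty the dataset first; the eventual destroy then has far less to unwind.
    rm -rf /tank/data/*
    # The destroy itself (and any replay after a reboot) should now be quick.
    zfs destroy tank/data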
-----Original Message-----
Am in the same boat, exactly. Destroyed a large set and rebooted, with a scrub
running on the same pool.
My reboot got stuck on "Reading ZFS Config: *" for several hours (the disks were
active). I cleared the zpool.cache from single-user mode and am doing an import (I
can boot again). I wasn't able to boot m
FYI to everyone: the Asus P5W64 motherboard previously in my OpenSolaris machine
was the culprit, and not the general mpt issues. At the time the motherboard was
originally put in that machine, there was not enough ZFS I/O load to trigger the
problem, which led to the false impression the hardware
On Tue, Dec 8, 2009 at 22:54, Jeff Bonwick wrote:
> > i am no pro in zfs, but to my understanding there is no original.
>
> That is correct. From a semantic perspective, there is no change
> in behavior between dedup=off and dedup=on. Even the accounting
> remains the same: each reference to a
On Wed, Dec 9, 2009 at 10:41 AM, Brent Jones wrote:
> I submitted a bug a while ago about this:
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855208
>
> I'll escalate since I have a support contract. But yes, I see this as
> a serious bug, I thought my machine had locked up entir
> On Tue, Dec 8, 2009 at 6:36 PM, Jack Kielsmeier
> wrote:
> > Ah, good to know! I'm learning all kinds of stuff
> here :)
> >
> > The command (zpool import) is still running and I'm
> still seeing disk activity.
> >
> > Any rough idea as to how long this command should
> last? Looks like each dis
Ok, I have started the zpool import again. Looking at iostat, it looks like I'm
getting comparable read speeds (possibly a little slower):
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0
Yikes,
Posted too soon. I don't want to set my ncsize that high!!! (I was thinking the
entry was an amount of memory, but it's a number of entries.)
set ncsize = 25
set zfs:zfs_arc_max = 0x10000000
Now THIS should hopefully cap the process at around 1GB of RAM.
Upon further research, it appears I need to limit both the ncsize and the
arc_max. I think I'll use:
set ncsize = 0x30000000
set zfs:zfs_arc_max = 0x10000000
That should give me a max of 1GB used between both.
If I should be using different values (or other settings), please let me know :)
Ok,
When searching for how to do that, I see that it requires a modification to
/etc/system.
I'm thinking I'll limit it to 1GB, so the entry (which must be in hex) appears
to be:
set zfs:zfs_arc_max = 0x40000000
Then I'll reboot the server and try the import again.
Thanks for the continued a
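For reference, a sketch of that /etc/system entry under the assumption that the
intended cap is 1GB (0x40000000 bytes), plus one way to verify after the reboot:

    # /etc/system -- cap the ZFS ARC at 1 GB; takes effect on the next reboot.
    set zfs:zfs_arc_max = 0x40000000

    # After reboot, report the current ARC size in bytes:
    kstat -p zfs:0:arcstats:size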
On Tue, Dec 8, 2009 at 10:16 PM, Jack Kielsmeier wrote:
> I just hard-rebooted my server. I'm moving off my VM to my laptop so it can
> continue to run :)
>
> Then, if it "freezes" again I'll just let it sit, as I did hear the disks
> thrashing.
As long as you've already rebooted, you sho
I just hard-rebooted my server. I'm moving off my VM to my laptop so it can
continue to run :)
Then, if it "freezes" again I'll just let it sit, as I did hear the disks
thrashing.
The server just went "almost" totally unresponsive :(
I still hear the disks thrashing. If I press keys on the keyboard, my login
screen will not show up. I had a VNC session hang and can no longer get back in.
If I try to ssh to the server, I get prompted for my username and password,
but it
On Tue, Dec 8, 2009 at 6:36 PM, Jack Kielsmeier wrote:
> Ah, good to know! I'm learning all kinds of stuff here :)
>
> The command (zpool import) is still running and I'm still seeing disk
> activity.
>
> Any rough idea as to how long this command should last? Looks like each disk
> is being rea
Ah, good to know! I'm learning all kinds of stuff here :)
The command (zpool import) is still running and I'm still seeing disk activity.
Any rough idea as to how long this command should last? Looks like each disk is
being read at a rate of 1.5-2 megabytes per second.
Going worst case, assumin
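A back-of-envelope sketch of that worst case, assuming the import has to walk
the entire 1.2 TB dataset at the observed per-disk rate:

    4 disks x 1.5-2.0 MB/s  =  6-8 MB/s aggregate
    1.2 TB / (6-8 MB/s)     =  ~150,000-200,000 seconds  =  roughly 1.7-2.3 days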
After successfully setting up the ZFS auto snapshot package, I was at the point
of implementing its backup functionality when I saw the announcement of the
removal of that, and the associated uproar. I looked deeper and discovered
that the autosnap backup didn't really meet my needs after all,
On Tue, Dec 8, 2009 at 5:38 PM, Jack Kielsmeier wrote:
> It's been about 45 minutes now since I started trying to import the pool.
>
> I see disk activity (see below)
>
> What concerns me is my free memory keeps shrinking as time goes on. Now
> have 185MB free out of 4 gigs (and 2 gigs of swap fr
It's been about 45 minutes now since I started trying to import the pool.
I see disk activity (see below)
What concerns me is that my free memory keeps shrinking as time goes on. I now
have 185MB free out of 4 gigs (and 2 gigs of swap free).
I hope this doesn't exhaust all my memory and freeze my box.
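Two standard Solaris ways to see where the memory is actually going while the
import runs (run as root):

    echo ::memstat | mdb -k        # breakdown of physical memory usage by category
    kstat -p zfs:0:arcstats:size   # current ARC size in bytes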
> i am no pro in zfs, but to my understanding there is no original.
That is correct. From a semantic perspective, there is no change
in behavior between dedup=off and dedup=on. Even the accounting
remains the same: each reference to a block is charged to the dataset
making the reference. The on
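A quick illustration of the knob being discussed, with hypothetical pool and
dataset names:

    # Enable dedup for new writes to this dataset:
    zfs set dedup=on tank/photos
    # The pool-wide dedup ratio shows up in the DEDUP column:
    zpool list tank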
Hi Mike,
while I have not done it with EMC and PowerPath, I have done a similar
thing with mpxio, and have even grabbed the disks
and thrown them into a totally different machine.
The steps are (sketched as commands below):
1) Export the pool.
2) Do your PowerPath stuff and get the new devices seen by Solaris.
3) Import the pool.
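A sketch of those steps as commands, assuming a hypothetical pool named tank:

    # 1) Cleanly export the pool so no devices are held open:
    zpool export tank
    # 2) Install/configure PowerPath so Solaris sees the new device paths.
    # 3) Re-import the pool:
    zpool import tank          # or: zpool import -d /dev/dsk tank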
On Tue, Dec 8, 2009 at 9:32 PM, Richard Elling wrote:
> FYI,
> Seagate has announced a new enterprise SSD. The specs appear
> to be competitive:
> + 2.5" form factor
> + 5 year warranty
> + power loss protection
> + 0.44% annual failure rate (AFR) (2M hours MTBF, IMHO
Hi All,
My question is related to: 6839260 want zfs send with properties.
I'm running "zfs send | mbuffer | network | mbuffer | zfs recv" nightly between
two arrays (2008.11), but my backup script does not do recursive send/recv - it
walks through all datasets, sends the increments one by one a
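For context, a hedged sketch of one leg of such a nightly pipeline; snapshot,
dataset, and host names are hypothetical:

    # Sending side: stream one incremental, buffered, over TCP to port 9090.
    zfs send -i tank/fs@prev tank/fs@today | mbuffer -s 128k -m 1G -O backuphost:9090
    # Receiving side: listen on 9090 and feed the stream into zfs recv.
    mbuffer -s 128k -m 1G -I 9090 | zfs recv -F backup/fs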
On Tue, Dec 8, 2009 at 1:37 PM, Mike wrote:
> Thanks Cindys for your input... I love your fear example too, but lucky
> for me I have 10 years before I have to worry about that and hopefully we'll
> all be in hovering bumper cars by then.
>
> It looks like I'm going to have to create another tes
Bob,
Thanks for your help. I thought that I might have seen something about this in
the past but couldn't remember for sure. Thanks for pointing me in the right
direction.
From the URL below, it states that each TXG will be limited to 1/8th of the
physical memory (this differs from the 7/8
Thanks Cindys for your input... I love your fear example too, but lucky for me
I have 10 years before I have to worry about that, and hopefully we'll all be in
hovering bumper cars by then.
It looks like I'm going to have to create another test system and try the
recommendations given here...and hop
But don't forget that "The unknown is what makes life interesting" :)
Bruno
Cindy Swearingen wrote:
> Hi Mike,
>
> In theory, this should work, but I don't have any experience with this
> particular software; maybe someone else does.
>
> One way to determine if it might work is by using the zd
Hi Mike,
In theory, this should work, but I don't have any experience with this
particular software; maybe someone else does.
One way to determine if it might work is by using the zdb -l command
on each device in the pool and checking for a populated devid= string. If
the devid exists, then ZFS
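A sketch of that check, with a hypothetical device name:

    # Print the ZFS label of one pool device and look for a devid= entry:
    zdb -l /dev/dsk/c1t0d0s0 | grep devid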
FYI,
Seagate has announced a new enterprise SSD. The specs appear
to be competitive:
+ 2.5" form factor
+ 5 year warranty
+ power loss protection
+ 0.44% annual failure rate (AFR) (2M hours MTBF, IMHO too low :-)
+ UER 1e-16 (new), 1e-15 (5 years)
+
The pool is roughly 4.5 TB (raidz1, four 1.5 TB disks).
I didn't attempt to destroy the pool, only a dataset within the pool. The
dataset is/was about 1.2TB.
System Specs
Intel Q6600 (2.4 Ghz Quad Core)
4GB RAM
2x 500 GB drives in zfs mirror (rpool)
4x 1.5 TB drives in zfs raidz1 array (vault)
The 1
I had a system that I was testing ZFS on, using EMC LUNs to create a striped
zpool without using the multi-pathing software PowerPath. Of course a storage
emergency came up, so I lent this storage out for temp storage and we're still
using it. I'd like to add PowerPath to take advantage of the multi
On Tue, Dec 8, 2009 at 8:23 AM, Jack Kielsmeier wrote:
> I waited about 20 minutes or so. I'll try your suggestions tonight.
>
> I didn't look at iostat. I just figured it was hung after waiting that
> long, but now that I know it can take a very long time, I will watch it and
> make sure it's do
Andrey Kuzmin wrote:
> > If you think about it a little bit, you will see that there is no
> > significant difference in the licensing model between FreeBSD+ZFS and
> > OpenSolaris+ZFS. It is not possible to be a "little bit pregnant". Either
> > one is pregnant, or one is not.
> >
>
> Well, Fre
2009/12/8 "C. Bergström"
> Andrey Kuzmin wrote:
>
>> On Tue, Dec 8, 2009 at 7:02 PM, Bob Friesenhahn
>> wrote:
>>
>>
>>> On Mon, 7 Dec 2009, Michael DeMan (OA) wrote:
>>>
>>>
Args for FreeBSD + ZFS:
- Limited budget
- We are familiar with managing FreeBSD.
- We are famil
Hi Matthias,
The process of replacing a disk, and whether you need to unconfigure the
disk first, depends on the hardware. With some hardware, like our x4500 series,
you must unconfigure the disk first by using cfgadm. This process is
described in this section of the ZFS Admin Guide:
http://docs.sun.com
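As a sketch of that flow (the attachment point and device names here are
hypothetical; check cfgadm output for the real ones):

    cfgadm -al                     # list attachment points and their state
    cfgadm -c unconfigure sata1/3  # take the failed disk offline
    # ...physically replace the disk...
    cfgadm -c configure sata1/3
    zpool replace vault c1t3d0     # let ZFS resilver onto the new disk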
Andrey Kuzmin wrote:
> On Tue, Dec 8, 2009 at 7:02 PM, Bob Friesenhahn
> wrote:
>> On Mon, 7 Dec 2009, Michael DeMan (OA) wrote:
>>> Args for FreeBSD + ZFS:
>>> - Limited budget
>>> - We are familiar with managing FreeBSD.
>>> - We are familiar with tuning FreeBSD.
>>> - Licensing model
>>> Args against OpenSolari
On Tue, Dec 8, 2009 at 7:02 PM, Bob Friesenhahn
wrote:
> On Mon, 7 Dec 2009, Michael DeMan (OA) wrote:
>>
>> Args for FreeBSD + ZFS:
>>
>> - Limited budget
>> - We are familiar with managing FreeBSD.
>> - We are familiar with tuning FreeBSD.
>> - Licensing model
>>
>> Args against OpenSolaris + ZF
On Mon, 7 Dec 2009, Michael DeMan (OA) wrote:
> Args for FreeBSD + ZFS:
> - Limited budget
> - We are familiar with managing FreeBSD.
> - We are familiar with tuning FreeBSD.
> - Licensing model
> Args against OpenSolaris + ZFS:
> - Hardware compatibility
> - Lack of knowledge for tuning and associated costs
I waited about 20 minutes or so. I'll try your suggestions tonight.
I didn't look at iostat. I just figured it was hung after waiting that long,
but now that I know it can take a very long time, I will watch it and make sure
it's doing something.
Thanks. I'll post my results either tonight or t
Daniel Carosone writes:
>>> Not if you're trying to make a single disk pool redundant by adding
>>> .. er, attaching .. a mirror; then there won't be such a warning,
>>> however effective that warning might or might not be otherwise.
>>
>> Not a problem because you can then detach the vdev and a
>>> Doesn't the "mismatched replication" message help?
>>
>> Not if you're trying to make a single disk pool redundant by adding ..
>> er, attaching .. a mirror; then there won't be such a warning, however
>> effective that warning might or might not be otherwise.
>
> Not a problem because you ca
Answer from another guru...
nxyyt wrote:
> This question is forwarded from ZFS-discussion. Hope any developer can
> throw some light on it.
>
> I'm a newbie to ZFS. I have a specific question about the COW transaction
> model of ZFS.
>
> Does ZFS keep the sequential consistency of the same file when
Neil,
Thank you. You closed my question. :-)
best regards,
hanzhu
On Mon, Dec 7, 2009 at 3:00 AM, Neil Perrin wrote:
>
> I'll try to find out whether ZFS always binds the same file to the same
>> open transaction group.
>>
>
> Not sure what you mean by this. Transactions (eg writes) wil
Colin,
I think you are mixing up the filesystem layer (where the individual files are
maintained) and the block layer, where the actual data is stored.
The analogue of deduplication at the filesystem layer would be to create hard
links between the files, where deleting one file does not remove the other link.
B
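A concrete version of that analogy, with hypothetical paths:

    # Two directory entries, one set of blocks:
    ln /tank/photos/a.jpg /tank/photos/dup/a.jpg
    # Removing one name leaves the data reachable through the other:
    rm /tank/photos/a.jpg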
I can report I/O errors with Chenbro expanders based on the LSI SASx36 IC,
tested with 111b/121/128a/129. The HBA was LSI 1068 based.
If I bypass the expander by adding more HBA controllers, mpt does not have
I/O errors.
-nola
On Dec 8, 2009, at 6:48 AM, Bruno Sousa wrote:
Hi James,
Thank yo
On Tuesday 08 December 2009 14:00, Colin Raven wrote:
> Help in understanding this would be hugely helpful - anyone?
>
I am no pro in ZFS, but to my understanding there is no original.
All the files have pointers to blocks on disk. Even if there is no other file
that shares the same block on the d
Colin Raven wrote:
> What happens if, once dedup is on, I (or someone else with delete
> rights) open a photo management app containing that collection, and
> start deleting dupes - AND - happen to delete the original that all
> other references are pointing to. I know, I know, it doesn't matter -
> sn
In reading this blog post:
http://blogs.sun.com/bobn/entry/taking_zfs_deduplication_for_a
a question came to mind.
To understand the context of the question, consider the opening paragraph
from the above post:
> Here is my test case: I have 2 directories of photos, totaling about 90MB
> each. An
Hi James,
Thank you for your feedback; I will send the prtconf -v output to
your email.
I also have another system where I can test something if that's the
case, and if you need extra information or even access to the system,
please let me know.
Thank you,
Bruno
James C. McPherson wrote:
Bruno Sousa wrote:
Hi all,
During this problem I did a power-off/power-on of the server, and the bus
reset/SCSI timeout issue persisted. After that I decided to
power off/power on the JBOD array, and after that everything became normal.
No SCSI timeouts, normal performance, everything is okay n
Hi
the live upgrade info doc
http://sunsolve.sun.com/search/document.do?assetkey=1-61-206844-1
has all the relevant patches. If you are on the u6 KU or higher (you are on
u8), then you can just migrate straight to ZFS, so there is no need to
upgrade to u8 UFS in order to move to u8 ZFS. The u6 KU d
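A hedged sketch of the Live Upgrade migration itself, assuming a ZFS root pool
already exists and is named rpool:

    # Create a new boot environment on the ZFS pool from the running UFS BE:
    lucreate -n zfsBE -p rpool
    # Activate it and reboot into it:
    luactivate zfsBE
    init 6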
Hi, are you sure ZFS isn't just working through transactions after the zfs
destroy was forcibly stopped?
Sometimes (always) it seems zfs/zpool commands just hang if you destroy larger
filesets; in reality ZFS is just doing its job. If you reboot the server during a
dataset destroy, it will take some time to come up