Hi Bob,
you are using some non-Sun SCSI HBA. Could you please be more specific
about the HBA model and driver?
You are getting pretty much the same high CPU load with writes to single-disk
UFS and to raid-z. This may mean that the problem is not with ZFS itself.
Victor
Bob Evans wrote:
Robert,
Sorry
Hi All,
I've noticed that the link to dmu_txg.c from the ZFS Source Code tour is
broken. It looks like dmu_txg.c should be changed to dmu_tx.c.
Please take care of this.
- Victor
The next natural question is
Richard Elling - PAE wrote:
Isn't this in a FAQ somewhere? IIRC, if ZFS finds a disk via two paths,
then it will pick one.
Will it (try to) fail over to another one if the picked one fails?
Wbr,
Victor
-- richard
Torrey McMahon wrote:
Robert Milkowski wrote:
Darren J Moffat wrote:
Asif Iqbal wrote:
Hi
I have a X2100 with two 74G disks. I build the OS on the first disk
with slice0 root 10G ufs, slice1 2.5G swap, slice6 25MB ufs and slice7
62G zfs. What is the fastest way to clone it to the second disk. I
have to build 10 of those in 2 days. Once I
Maybe something like the slow parameter of VxVM?
slow[=iodelay]
Reduces the system performance impact of copy
operations. Such operations are usually performed
on small regions of the volume (nor-
Darren J Moffat wrote:
Pawel Jakub Dawidek wrote:
I like the idea, I really do, but it will be so expensive because of
ZFS' COW model. Not only will file removal or truncation trigger bleaching,
but every single file system modification... Heh, well, if privacy of
your data is important
Hi Peter,
Peter Tribble wrote:
I'm being a bit of a dunderhead at the moment and neither the site
search nor google are picking up the information I seek...
There was a thread named Metaslab alignment on RAID-Z on this list;
you may want to look at it.
I'm setting up a thumper and I'm
Richard Elling wrote:
Neil Perrin wrote:
ZFS checksums are at the block level.
This has been causing some confusion lately, so perhaps we could say:
ZFS checksums are at the file system block level, not to be confused with
the disk block level or transport block level.
Saying that ZFS
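A quick way to see the file system block size in question (a sketch; 'tank/fs' is just a placeholder dataset name):
# zfs get recordsize tank/fs
The recordsize property (128K by default) is the file system block size being referred to here, as opposed to the 512-byte sectors of the underlying disks.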
Hi Gino,
What version of Solaris is your server running?
What happens here is that while opening your pool ZFS is trying to process
the ZFS Intent Log of this pool and discovers some inconsistency between
the on-disk state and the ZIL contents.
What was the first panic you referred to?
Wbr,
Victor
Gino
, but crash dumps can still be
there. Have you checked this?
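For example, assuming the default savecore setup (adjust the directory if dumpadm reports something else):
# dumpadm
# ls -l /var/crash/`hostname`
The first command just prints the current dump/savecore configuration; the second lists any saved unix.N/vmcore.N pairs.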
Is there a way to force the mount of the zpool? We have some important
data on it!
This may be impossible to achieve on the version of Solaris this system
is running.
Victor
Thank you,
Gino
From: Victor Latushkin [EMAIL PROTECTED
Gino,
Gino Ruopolo wrote:
Victor,
can we try to mount the zpool on a S10U3 system?
No, this may require using one of the recent Solaris Nevada builds. I'm
trying to check the relevant build number.
What about answers to my other questions?
Wbr,
Victor
From: Victor Latushkin [EMAIL PROTECTED
from
this situation even in the most recent code. I think someone from ZFS
development team may provide additional input here.
Wbr,
Victor
Thank you,
Gino
From: Victor Latushkin [EMAIL PROTECTED]
To: Gino Ruopolo [EMAIL PROTECTED]
CC: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss
Hi Robert,
AL With this, ZFS now supports gzip compression. To enable gzip compression
Great!
AL just set the 'compression' property to 'gzip' (or 'gzip-N' where N=1..9).
AL Existing pools will need to upgrade in order to use this feature, and, yes,
AL this is the second ZFS version number
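For anyone skimming the archive, the steps described above look roughly like this (pool and dataset names are placeholders):
# zpool upgrade tank
# zfs set compression=gzip tank/data
# zfs set compression=gzip-9 tank/archive
'gzip' alone means the default level (gzip-6), and only data written after the property is set is compressed with the new algorithm.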
Hi Gino,
this looks like an instance of bug 6458218 (see
http://bugs.opensolaris.org/view_bug.do?bug_id=6458218)
The fix for this bug is integrated into snv_60.
Kind regards,
Victor
Gino Ruopolo wrote:
Hi All,
Last week we had a panic caused by ZFS and then we had a corrupted zpool!
Hi Roman,
from the provided data I suppose that you are running an unpatched Solaris 10
Update 3.
Since the fault address is 0xc4 and in zio_create we manipulate mostly
zio_t structures, 0xc4 most likely corresponds to the io_child member
of the zio_t structure. If my assumption about Solaris
Robert Milkowski wrote:
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM Hello,
LM I've got some weird problem: ZFS does not seem to be utilizing
LM all disks in my pool properly. For some reason, it's only using 2 of the 3
disks in my pool:
LM capacity
Hi Steven,
Steven Sim wrote:
My confusion is simple. Would this not then give rise also to the
write-hole vulnerability of RAID-5?
Jeff Bonwick states /that there's no way to update two or more disks
atomically, so RAID stripes can become damaged during a crash or power
outage./
If I
If I understand correctly, then the parity blocks for RAID-Z are also
written in two different atomic operations. As per RAID-5. (the only
difference being each can be of a different stripe size).
HL As with Raid-5 on a four disk stripe, there are four independent
HL writes, and they
Hi Ben,
this is a known problem (I don't have the bug id handy); as far as I
remember, it should be fixed in build 64.
Victor.
Ben Rockwood wrote:
May 25 23:32:59 summer unix: [ID 836849 kern.notice]
May 25 23:32:59 summer ^Mpanic[cpu1]/thread=1bf2e740:
May 25 23:32:59 summer genunix:
Hi Michael,
a search on bugs.opensolaris.org for data after EOF shows that this
looks pretty much like bug 6424466:
http://bugs.opensolaris.org/view_bug.do?bug_id=6424466
It is fixed in Nevada build 53. The fix for Solaris 10 is going to be
available with Solaris 10 Update 4, as the second link
Michael Hase wrote:
Hi Victor,
the kernel panic in bug 6424466 resulted from overwriting some areas
of the disks, in this case I would expect at least strange things -
ok, not exactly a panic. In my case there was no messing around
with the underlying disks. The fix only seems to avoid the
Gino wrote:
Same problem here (snv_60).
Robert, did you find any solutions?
A couple of weeks ago I put together an implementation of space maps which
completely eliminates loops and recursion from the space map alloc
operation, and allows different allocation strategies to be implemented quite
easily
Richard Elling wrote:
Rob Logan wrote:
an array of 30 drives in a RaidZ2 configuration with two hot spares
I don't want to mirror 15 drives to 15 drives
ok, so space over speed... and are willing to toss somewhere between 4
and 15 drives for protection.
raidz splits the (up to 128k)
Łukasz wrote:
After a few hours with dtrace and source code browsing I found that in my space
map there are no 128K blocks left.
Actually you may have some 128k or larger free space segments, but
alignment requirements will not allow them to be allocated. Consider the
following example:
1. Space
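(Victor's example is cut off in this preview; a minimal illustration of the alignment point, with made-up offsets, assuming the first-fit allocator of that era which aligns a power-of-two-sized allocation to its own size: a metaslab has a free segment from 0x1f0000 to 0x210000, i.e. exactly 128K of contiguous free space. The only 128K-aligned offset inside that segment is 0x200000, but a 128K block starting there would end at 0x220000, past the end of the segment, so the 128K allocation fails even though 128K is free.)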
Albert Chin wrote:
On Tue, Jul 10, 2007 at 07:12:35AM -0500, Al Hopper wrote:
On Mon, 9 Jul 2007, Albert Chin wrote:
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
It would also be nice for extra hardware (PCI-X,
Hi Lukasz and all,
I just returned from a month-long sick leave, so I need some time to sort
through the pile of emails, do a SPARC build and some testing, and then I'll be
able to provide you with my changes in some form. Hope this will happen next week.
Cheers,
Victor
Łukasz K wrote:
On 26-07-2007 at
Mike Gerdts wrote:
Short question:
Not so short really :-)
Answers to some questions inline. I think others will correct me if I'm
wrong.
I'm curious as to how ZFS manages space (free and used) and how
its usage interacts with thin provisioning provided by HDS
arrays. Is there any effort
Please see the following link:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache
Hth,
Victor
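For reference, the tuning described on that page amounts to a single /etc/system line along the lines of the following (the value is only an example - it caps the ARC at 2 GB and takes effect after a reboot):
set zfs:zfs_arc_max = 0x80000000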
Sergey wrote:
I am running Solaris U4 x86_64.
Seems that something is changed regarding mdb:
# mdb -k
Loading modules: [ unix krtld genunix specfs
I'm proposing a new project for the ZFS community - Block Selection Policy and
Space Map Enhancements.
The space map[1] is a very efficient data structure for keeping track of free
space in the metaslabs, but there is at least one area for improvement - the
space map block selection algorithm, which could be
Gino,
although these messages show some similarity to ones in the Sun Alert
you are referring to, it looks like this is unrelated. Sun Alert 57773
describes symptoms of a problem seen in SAN configurations with specific
switches (Brocade SilkWorm Switch 12000, 24000, 3250, 3850, 3900) with
Roch - PAE wrote:
Pawel Jakub Dawidek writes:
On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote:
Tuning should not be done in general and Best practices
should be followed.
So get very much acquainted with this first :
Tomas Ögren wrote:
On 18 September, 2007 - Gino sent me these 0,3K bytes:
Hello,
upgrade to snv_60 or later if you care about your data :)
If there are known serious data loss bug fixes that have gone into
snv60+, but not into s10u4, then I would like to tell Sun to backport
those into
Roch Bourbonnais wrote:
On 21 Oct 07 at 02:40, Vincent Fox wrote:
We had a Sun Engineer on-site recently who said this:
We should set our array controllers to sequential I/O *even* if we
are doing random I/O if we are using ZFS.
This is because the Arc cache is already grouping
Cindy Swearingen wrote:
Chris,
I agree that your best bet is to replace the 128-mb device with
another device, fix the emcpower2a manually, and then replace it
back. I don't know these drives at all, so I'm unclear about the
fix it manually step.
Because your pool isn't redundant, you
Niksa Franceschi wrote:
Hi,
pool wasn't exported.
server1 was rebooted (with ZFS on it).
During reboot ZFS (pool) was released, and I could import it on server2
(which I have done).
However, when server1 was booting up it imported the pool and mounted ZFS
filesystems even though they were
Jorgen Lundman wrote:
If we were to get two x4500s, with the idea of keeping one as a passive
standby (serious hardware failure) are there any clever solutions in
doing so?
We cannot use ZFS itself, but rather zpool volumes, with UFS on top. I
assume there is no zpool send/recv
J.P. King wrote:
I think I have heard something called dirty time logging being implemented
in ZFS.
Thanks for the pointer. Certainly interesting, but according to the
talks/emails I've found a month or so ago ZFS will offer this, so I am
guessing it isn't there yet, and certainly not in
Looking at the txg numbers, it's clear that labels on the devices that
are unavailable now may be stale:
Krzys wrote:
When I do zdb on emcpower3a which seems to be ok from zpool perspective I get
the following output:
bash-3.00# zdb -lv /dev/dsk/emcpower3a
Jeff Bonwick wrote:
Looking at the txg numbers, it's clear that labels on the devices that
are unavailable now may be stale:
Actually, they look OK. The txg values in the label indicate the
last txg in which the pool configuration changed for devices in that
top-level vdev (e.g. mirror or
Hi,
using VirtualBox I just tried to move an OpenSolaris 2008.05 boot
environment (ZFS) onto a gzip-9 compressed dataset, but I get the
following error from grub:
Error 16: Inconsistent filesystem structure
Googling around I found the same error with ZFS boot and Xen in July 2007:
this seems quite easy to me but I don't know how to go about
actually implementing/proposing the required changes.
To make grub aware of gzip (as it already is of lzjb) the steps should be:
1. create a new
/onnv-gate/usr/src/grub/grub-0.95/stage2/zfs_gzip.c
starting from
Orvar Korvar wrote:
Ok, so I make one vdev out of 8 discs. And I combine all vdevs into
one large zpool? Is that correct?
I think it is easier to provide a couple of examples:
zpool create pool c1t0d0 mirror c1t1d0 c1t2d0
This command would create a storage pool named 'pool' consisting of 2
top-level vdevs: a single disk (c1t0d0) and a two-way mirror (c1t1d0 and c1t2d0).
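A further, hypothetical example in the spirit of the question about 8 discs per vdev:
zpool create pool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0
This would create a pool with a single 8-disk raidz2 vdev; appending another 'raidz2 disk1 ... disk8' group to the same command (or adding it later with 'zpool add') puts a second such vdev into the pool, and ZFS stripes writes across all top-level vdevs.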
Hernan Freschi wrote:
I tried the mkfile and swap, but I get:
[EMAIL PROTECTED]:/]# mkfile -n 4g /export/swap
[EMAIL PROTECTED]:/]# swap -a /export/swap
/export/swap may contain holes - can't swap on it.
You should not use -n for creating files for additional swap. This is
mentioned in the
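If swap files are supported on the filesystem in question at all, the working sequence would simply drop the -n (sizes and paths are the ones from the quoted attempt):
# mkfile 4g /export/swap
# swap -a /export/swap
With ZFS specifically, the more commonly recommended route at the time was a zvol rather than a file, e.g. 'zfs create -V 4g rpool/swap' followed by 'swap -a /dev/zvol/dsk/rpool/swap'.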
Gerard Henry wrote:
hello all,
i have indiana freshly installed on a sun ultra 20 machine. It only does nfs
server. During one night, the kernel had crashed, and i got this messages:
May 22 02:18:57 ultra20 unix: [ID 836849 kern.notice]
May 22 02:18:57 ultra20
Mike Gerdts wrote:
On Sat, May 31, 2008 at 8:48 PM, Mike Gerdts [EMAIL PROTECTED] wrote:
I just experienced a zfs-related crash. I have filed a bug (don't
know number - grumble).
Its number is 6709336. I added your second e-mail to the description.
Wbr,
Victor
I have a crash dump but little
Scott wrote:
Hello,
I have several ~12TB storage servers using Solaris with ZFS. Two of
them have recently developed performance issues where the majority of
time in an spa_sync() will be spent in the space_map_*() functions.
During this time, zpool iostat will show 0 writes to disk, while
,
Victor
Aaron
Well, finally managed to solve my issue, thanks to
the invaluable help of Victor Latushkin, who I can't
thank enough.
I'll post a more detailed step-by-step record of what
he and I did (well, all credit to him actually) to
solve this. Actually, the problem is still
Hi Tom and all,
Tom Bird wrote:
Hi,
Have a problem with a ZFS on a single device, this device is 48 1T SATA
drives presented as a 42T LUN via hardware RAID 6 on a SAS bus which had
a ZFS on it as a single device.
There was a problem with the SAS bus which caused various errors
including the
Would be grateful for any ideas, relevant output here:
[EMAIL PROTECTED]:~# zpool import
pool: content
id: 14205780542041739352
state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active
Miles Nordin wrote:
re == Richard Elling [EMAIL PROTECTED] writes:
tb == Tom Bird [EMAIL PROTECTED] writes:
tb There was a problem with the SAS bus which caused various
tb errors including the inevitable kernel panic, the thing came
tb back up with 3 out of 4 zfs mounted.
Miles Nordin wrote:
r == Ross [EMAIL PROTECTED] writes:
r Tom wrote There was a problem with the SAS bus which caused
r various errors including the inevitable kernel panic. It's
r the various errors part that catches my eye,
yeah, possibly, but there are checksums on the
Hi Erik,
Erik Gulliksson wrote:
Hi,
I have a zfs-pool (unfortunately not setup according to the Best
Practices Guide) that somehow got corrupted after a spontaneous server
reboot. On Solaris 10u4 the machine simply panics when I try to import
the pool.
Panic stack would be useful.
So
Ben Taylor wrote:
I've got an Intel DP35DP Motherboard, Q6600 proc (Intel 2.4G, 4 core), 4GB of
ram and a
couple of SATA disks, running ICH9. S10U5, patched about a week ago or so...
I have a zpool on a single slice (haven't added a mirror yet, was getting to
that) and have
started to
Erik,
could you please provide a little bit more detail.
Erik Gulliksson wrote:
Hi,
I have a zfs-pool (unfortunately not setup according to the Best
Practices Guide) that somehow got corrupted after a spontaneous server
reboot.
Was it totally spontaneous? What was the uptime before
Erik Gulliksson wrote:
Hi Victor,
Thanks for the prompt reply. Here are the results from your suggestions.
Panic stack would be useful.
I'm sorry I don't have this available and I don't want to cause another panic
:)
It should be saved in system messages on your Solaris 10 machine
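Two usual ways to dig the panic stack out afterwards (assuming the default syslog and dumpadm setup):
# grep panic /var/adm/messages*
# cd /var/crash/`hostname` && echo '::msgbuf' | mdb unix.0 vmcore.0
The first finds the panic string and stack that get written to the messages file on the next boot; the second pulls the same message buffer out of a saved crash dump, if savecore captured one.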
Miles Nordin wrote:
cm == Chris Murray [EMAIL PROTECTED] writes:
cm The next issue is that when the pool is actually imported
cm (zpool import -f zp), it too hangs the whole system, albeit
cm after a minute or so of disk activity.
could it be #6573681?
On 28.08.08 15:06, Chris Gerhard wrote:
I have a USB disk with a pool on it called removable. On one laptop
zpool import removable works just fine but on another with the same
disk attached it tells me there is more than one matching pool:
: sigma TS 6 $; pfexec zpool import removable
On 09.09.08 19:32, Richard Elling wrote:
Ralf Ramge wrote:
Richard Elling wrote:
Yes, you're right. But sadly, in the mentioned scenario of having
replaced an entire drive, the entire disk is rewritten by ZFS.
No, this is not true. ZFS only resilvers data.
Okay, I see we have a
On 23.09.08 21:25, Bob Friesenhahn wrote:
Today while reading EE Times I read an article about a startup company
named Greenbytes which will be offering a system called Cypress which
supports deduplication and arrangement of data to minimize power
consumption. It seems that deduplication
Borys Saulyak wrote:
As a follow-up to the whole story, with the fantastic help of
Victor, the failed pool is now imported and functional thanks to
the redundancy in the metadata.
It would be really useful if you could publish the steps to recover
the pools.
Here it is:
Executive summary:
Eric Schrock wrote:
On Fri, Oct 10, 2008 at 06:15:16AM -0700, Marcelo Leal wrote:
- ZFS does not need fsck.
Ok, that's a great statement, but I think ZFS needs one. It really does.
And in my opinion an enhanced zdb would be the solution. Flexibility.
Options.
About 99% of the problems
C. Bergström wrote:
I had to hard power reset the laptop... Now I can't import my pool..
zpool status -x
bucket UNAVAIL 0 0 0 insufficient replicas
c6t0d0 UNAVAIL 0 0 0 cannot open
cfgadm
usb8/4 usb-storage
Richard Elling wrote:
Keep in mind that this is for Solaris 10 not opensolaris.
Keep in mind that any changes required for Solaris 10 will first
be available in OpenSolaris, including any changes which may
have already been implemented.
Indeed. For example, less than a week ago a fix for the
Blake Irvin wrote:
Looks like there is a closed bug for this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
It's been closed as 'not reproducible', but I can reproduce consistently on
Sol 10 5/08. How can I re-open this bug?
Have you tried to reproduce it with Nevada build 94
Armin Ollig wrote:
Hi Venku and all others,
thanks for your suggestions. I wrote a script to do some IO from both
hosts (in non-cluster-mode) to the FC-LUNs in question and check the
md5sums of all files afterwards. As expected there was no corruption.
After recreating the
Armin Ollig wrote:
Hi Victor,
it was initially created from
c4t600D02300088824BC4228807d0s0, then destroyed and recreated
from /dev/did/dsk/d12s0. You are right: it still shows the old
dev.
Your pool is cached in the zpool.cache files of both hosts (with the old device path):
bash-3.2#
Eric Schrock wrote:
These are the symptoms of a shrinking device in a RAID-Z pool. You can
try to run the attached script during the import to see if this is the
case. There's a bug filed on this, but I don't have it handy.
it's
6753869 labeling/shrinking a disk in raid-z vdev makes pool
Hi Ben,
Ben Rockwood wrote:
Is there some hidden way to coax zdb into not just displaying data
based on a given DVA but rather to dump it in raw usable form?
I've got a pool with large amounts of corruption. Several
directories are toast and I get I/O Error when trying to enter or
read
[EMAIL PROTECTED] wrote:
Hi,
Victor Latushkin wrote:
Hi Ben,
Ben Rockwood wrote:
Is there some hidden way to coax zdb into not just displaying data
based on a given DVA but rather to dump it in raw usable form?
I've got a pool with large amounts of corruption. Several
directories
[EMAIL PROTECTED] wrote:
Victor Latushkin wrote:
[EMAIL PROTECTED] wrote:
Hi,
Victor Latushkin wrote:
Hi Ben,
Ben Rockwood wrote:
Is there some hidden way to coax zdb into not just displaying data
based on a given DVA but rather to dump it in raw usable form?
I've got a pool
Hi Jens,
Jens Hamisch wrote:
Hi Erik,
hi Victor,
I have exactly the same problem as you described in your thread.
Exactly the same problem would mean that only the config object in the pool is
corrupted. Are you 100% sure that you have exactly the same problem?
Could you please explain to me what to
David Champion wrote:
I have a feeling I pushed people away with a long message. Let me
reduce my problem to one question.
# zpool import -f z
cannot import 'z': one or more devices is currently unavailable
'zdb -l' shows four valid labels for each of these disks except for the
new one.
Sherwood Glazier wrote:
Let me preface this by admitting that I'm a bonehead.
I had a mirrored zfs filesystem. I needed to use one of the
mirrors temporarily so I did a zpool detach to remove the member
(call it disk1) leaving disk0 in the pool. However, after the detach
I mistakenly
Andrew,
Andrew wrote:
Thanks a lot! Google didn't seem to cooperate as well as I had hoped.
Still no dice on the import. I only have shell access on my
Blackberry Pearl from where I am, so it's kind of hard, but I'm
managing.. I've tried the OP's exact commands, and even trying to
import
Andrew wrote:
hey Victor,
Where would i find that? I'm still somewhat getting used to the
Solaris environment. /var/adm/messages doesn't seem to show any Panic
info.. I only have remote access via SSH, so I hope I can do
something with dtrace to pull it.
Do you have anything in
Henrik Johansson wrote:
Hello,
I have a snv101 machine with a three disk raidz pool which has an allocation
of about 1TB for no obvious reason - no snapshots, no files,
nothing. I tried to run zdb on the pool to see if I got any useful
info, but it has been working for over two hours
Erik Trimble wrote:
I also think I re-started this thread. Mea culpa.
The original comment from me was that I wasn't certain that the bug I
tripped over last year this time (a single-LUN zpool is declared corrupt
if the underlying LUN goes away, usually due to SAN issues) was fixed. I
I do
Dennis Clarke wrote:
I tried to import a zpool and the process just hung there, doing nothing.
It has been ten minutes now so I tried to hit CTRL-C. That did nothing.
It may be because it is blocked in kernel.
Can you do something like this:
echo 0t<pid of zpool import>::pid2proc|::walk
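(The command is cut off in this preview; reconstructed from the follow-up message below and standard mdb usage, the full pipeline would be roughly:
echo '0t<pid of zpool import>::pid2proc|::walk thread|::findstack -v' | mdb -k
i.e. print the kernel stack of every thread belonging to the hung zpool process.)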
Dennis Clarke wrote:
Dennis Clarke wrote:
I tried to import a zpool and the process just hung there, doing nothing.
It has been ten minutes now so I tried to hit CTRL-C. That did
nothing.
It may be because it is blocked in kernel.
Can you do something like this:
echo 0t<pid of zpool
Dennis Clarke wrote:
Dennis Clarke wrote:
It may be because it is blocked in kernel.
Can you do something like this:
echo 0t<pid of zpool import>::pid2proc|::walk thread|::findstack -v
So we see that it cannot complete the import here and is waiting for a
transaction group to sync. So probably
Dennis Clarke wrote:
CTRL+C does nothing and kill -9 pid does nothing to this command.
feels like a bug to me
Yes, it is:
http://bugs.opensolaris.org/view_bug.do?bug_id=6758902
Now I recall why I had to reboot. Seems as if a lot of commands hang now.
Things like :
df -ak
zfs list
pid
Dennis Clarke wrote:
Original Message
Subject: Re: I see you're running zdb -e -bbcsL
From: Victor Latushkin victor.latush...@sun.com
Date: Sun, May 10, 2009 11:17
To: dcla...@blastwave.org
Hi Brad,
Brad Reese wrote:
Hello,
I've run into a problem with zpool import that seems very similar to
the following thread as far as I can tell:
http://opensolaris.org/jive/thread.jspa?threadID=70205tstart=15
The suggested solution was to use a later version of OpenSolaris
(b99 or later)
Brad Reese wrote:
Hi Victor,
Here's the output of 'zdb -e -bcsvL tank' (similar to above but with -c).
Thanks,
Brad
Traversing all blocks to verify checksums ...
zdb_blkptr_cb: Got error 50 reading 0, 11, 0, 0 [L0 packed nvlist] 4000L/4000P
DVA[0]=0:2500014000:4000
Brian Leonard wrote:
Since you did not export the pool, it may be looking for the wrong
devices. Try this:
zpool export vault
zpool import vault
That was the first thing I tried, with no luck.
Above, I used slice 0 as an example, your system may use a
different slice. But you can run
On 08.06.09 15:50, Marius van Vuuren wrote:
A description of the problem
- Description
The j4200 and the x4150 connected to it were powered off
and then moved to another building with the utmost care. When powered on
again 'zpool status' revealed corrupted data on 3 of the disks.
Outputs:
On 16.06.09 07:57, Brad Reese wrote:
Hi Victor,
'zdb -e -bcsv -t 2435913 tank' ran for about a week with no output. We had yet
another brownout and then the computer shut down (I have a UPS on the way). A few
days before that I started the following commands, which also had no output:
zdb -e
On 24.06.09 17:10, Thomas Maier-Komor wrote:
Ben schrieb:
Thomas,
Could you post an example of what you mean (ie commands in the order to use
them)? I've not played with ZFS that much and I don't want to muck my system
up (I have data backed up, but am more concerned about getting myself
On 29.06.09 11:41, Ketan wrote:
I'm having the following issue... I import the zpool and it shows the pool
imported correctly.
'zpool import' only shows what pools are available to import. In order to
actually import the pool you need to do
zpool import emcpool1
but after a few seconds when I issue
On 29.06.09 23:01, Carsten Aulbert wrote:
One question:
Where can I find more about CR 6827199? I logged into sun.com with my
service contract enabled log-in but I cannot find it there (or the
search function does not like me too much).
You can try bugs.opensolaris.org too:
On 19.01.09 12:09, Tom Bird wrote:
Toby Thain wrote:
On 18-Jan-09, at 6:12 PM, Nathan Kroenert wrote:
Hey, Tom -
Correct me if I'm wrong here, but it seems you are not allowing ZFS any
sort of redundancy to manage.
Every other file system out there runs fine on a single LUN, when things
go
On 03.07.09 15:34, James Lever wrote:
Hi Mertol,
On 03/07/2009, at 6:49 PM, Mertol Ozyoney wrote:
ZFS SSD usage behaviour heavily depends on the access pattern, and for
async ops ZFS will not use SSDs. I'd suggest you disable the SSDs,
create a ram disk and use it as a SLOG device to compare
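A rough sketch of that experiment (device, pool and size are placeholders, and a RAM-backed slog is of course only for benchmarking - its contents vanish on power loss):
# ramdiskadm -a slogtest 1g
# zpool add tank log /dev/ramdisk/slogtest
Then rerun the synchronous-write workload and compare. One caveat: on builds of that era 'zpool remove' could not yet take a log device back out, so this is best done on a scratch pool.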
On 02.07.09 22:05, Bob Friesenhahn wrote:
On Thu, 2 Jul 2009, Zhu, Lejun wrote:
Actually it seems to be 3/4:
3/4 is an awful lot. That would be 15 GB on my system, which explains
why the 5 seconds to write rule is dominant.
3/4 is 1/8 * 6, where 6 is the worst-case inflation factor (for
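Spelling the arithmetic out with Bob's numbers (an illustration, assuming roughly 20 GB of RAM): the base limit is 1/8 of memory, i.e. 2.5 GB of dirty data per txg; multiplying by the worst-case inflation factor of 6 gives 15 GB, which is the 3/4-of-memory figure quoted above.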
On 08.07.09 12:30, Darren J Moffat wrote:
Karl Dalen wrote:
I'm a new user of ZFS and I have an external USB drive which contains
a ZFS pool with a file system. It seems that it does not get auto-mounted
when I plug in the drive. I'm running osol-0811.
How can I manually mount this drive? It has
On 22.07.09 10:45, Adam Leventhal wrote:
which gap?
'RAID-Z should mind the gap on writes' ?
I believe this is in reference to the raid 5 write hole, described here:
http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_performance
It's not.
So I'm not
On 28.07.09 20:31, Graeme Clark wrote:
Hi Again,
A bit more futzing around and I notice that output from a plain 'zdb' returns
this:
store
version=14
name='store'
state=0
txg=0
pool_guid=13934602390719084200
hostid=8462299
hostname='store'
vdev_tree
On 29.07.09 13:04, Pavel Kovalenko wrote:
after several errors on the QLogic HBA the pool cache was damaged and zfs cannot
import the pool; there is no disk or cpu activity during the import...
#uname -a
SunOS orion 5.11 snv_111b i86pc i386 i86pc
# zpool import
pool: data1
id: 6305414271646982336
On 29.07.09 14:42, Pavel Kovalenko wrote:
fortunately, after several hours the terminal came back --
# zdb -e data1
Uberblock
magic = 00bab10c
version = 6
txg = 2682808
guid_sum = 14250651627001887594
timestamp = 1247866318 UTC = Sat Jul 18 01:31:58
On 29.07.09 16:59, Mark J Musante wrote:
On Tue, 28 Jul 2009, Glen Gunselman wrote:
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
zpool1 40.8T 176K 40.8T 0% ONLINE -
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
zpool1 364K 32.1T
On 25.07.09 00:30, Rob Logan wrote:
The post I read said OpenSolaris guest crashed, and the guy clicked
the ``power off guest'' button on the virtual machine.
I seem to recall guest hung. 99% of solaris hangs (without
a crash dump) are hardware in nature. (my experience backed by
an uptime