Hello list,
I got a c7000 with BL465c G1 blades to play with and have been trying to get
some form of Solaris to work on it.
However, this is the state:
OpenSolaris 134: Installs with ZFS, but has no bnx NIC drivers.
OpenIndiana 147: Panics on zpool create every time, even from the console. Has no
On Oct 3, 2010, at 7:22 PM, Jorgen Lundman lund...@lundman.net wrote:
One option would be to get 147 NIC drivers for 134.
IIRC, the bnx drivers are closed source and obtained from Broadcom. No
need to upgrade the OS just for a NIC driver.
-- richard
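For anyone stuck at the same point: dropping in a vendor-supplied driver is normally just a package add plus plumbing the interface, no OS upgrade needed. A rough sketch, assuming the Broadcom download unpacks to an SVR4 package; the package name and path below are purely illustrative:
pkgadd -d /tmp/bnx-drv BRCMbnx     # install the driver package (name and path are assumptions)
modinfo | grep bnx                 # confirm the bnx module is loaded
ifconfig bnx0 plumb                # plumb the first bnx interface
ifconfig bnx0 dhcp start           # or assign a static address instead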
On 04/ 2/10 10:25 AM, Ian Collins wrote:
Is this callstack familiar to anyone? It just happened on a Solaris
10 update 8 box:
genunix: [ID 655072 kern.notice] fe8000d1b830 unix:real_mode_end+7f81 ()
genunix: [ID 655072 kern.notice] fe8000d1b910 unix:trap+5e6 ()
genunix: [ID 655072
Is this callstack familiar to anyone? It just happened on a Solaris 10
update 8 box:
genunix: [ID 655072 kern.notice] fe8000d1b830 unix:real_mode_end+7f81 ()
genunix: [ID 655072 kern.notice] fe8000d1b910 unix:trap+5e6 ()
genunix: [ID 655072 kern.notice] fe8000d1b920
Andre van Eyssen an...@purplecow.org wrote:
On Fri, 10 Apr 2009, Rince wrote:
FWIW, I strongly expect live ripping of a SATA device to not panic the disk
layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be
fault-tolerant and a drive dropping away at any time is a
r == Rince rincebr...@gmail.com writes:
r *ZFS* shouldn't panic under those conditions. The disk layer,
r perhaps, but not ZFS.
well, yes, but panicking brings down the whole box anyway so there is
no practical difference, just a difference in blame.
I would rather say, the fact
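One related knob, as a hedged aside: builds that have the per-pool failmode property let you choose whether a catastrophic pool failure panics the box, blocks I/O, or just returns errors. The pool name below is only an example:
zpool get failmode tank            # show the current setting (wait is the default)
zpool set failmode=continue tank   # return EIO on new I/O instead of panicking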
Grant,
Didn't see a response so I'll give it a go.
Ripping a disk away and silently inserting a new one is asking for
trouble, IMHO. I am not sure what you were trying to accomplish, but
generally replacing a drive/LUN would entail commands like
zpool offline tank c1t3d0
cfgadm | grep c1t3d0
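For completeness, a fuller sketch of that replacement sequence; the pool name, device, and cfgadm attachment point below are illustrative and will differ per system:
zpool offline tank c1t3d0               # take the failing disk out of service
cfgadm | grep c1t3d0                    # find its attachment point
cfgadm -c unconfigure c1::dsk/c1t3d0    # detach the device (attachment point is an example)
# ... physically swap the drive ...
cfgadm -c configure c1::dsk/c1t3d0      # bring the new disk into the OS
zpool replace tank c1t3d0               # resilver onto the new disk in the same slot
zpool status tank                       # watch resilver progress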
will be doing this. Thanks for
the feedback. I appreciate it.
- Original Message
From: Remco Lengers re...@lengers.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, April 9, 2009 5:31:42 AM
Subject: Re: [zfs-discuss] ZFS Panic
Grant,
Didn't see
On Fri, 10 Apr 2009, Rince wrote:
FWIW, I strongly expect live ripping of a SATA device to not panic the disk
layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be
fault-tolerant and a drive dropping away at any time is a rather expected
scenario.
Ripping a SATA device
On Fri, Apr 10, 2009 at 12:43 AM, Andre van Eyssen an...@purplecow.org wrote:
On Fri, 10 Apr 2009, Rince wrote:
FWIW, I strongly expect live ripping of a SATA device to not panic the disk
layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be
fault-tolerant and a drive
Hi All,
Don't know if this is worth reporting, as it's human error. Anyway, I had a
panic on my zfs box. Here's the error:
marksburg /usr2/glowe grep panic /var/log/syslog
Apr 8 06:57:17 marksburg savecore: [ID 570001 auth.error] reboot after panic:
assertion failed: 0 ==
I upgraded my 280R system to yesterday's nightly build, and when I
rebooted, this happened:
Boot device:
/p...@8,60/SUNW,q...@4/f...@0,0/d...@w212037e9abe4,0:a File and args:
SunOS Release 5.11 Version snv_108 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Looks like a corrupted pool -- you appear to have a mirror block pointer with
no valid children. From the dump, you could probably determine which file is
bad, but I doubt you could delete it; you might need to recreate your pool.
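A possible alternative route to digging through the crash dump, if the pool can still be opened read-only enough for zdb to walk it (far from guaranteed on a pool this damaged): zdb's dataset dump can map an object number to a path. The dataset name and object number below are made-up placeholders:
zdb -dddd tank/export 12345     # dump that object's dnode; for plain files
                                # the output includes its path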
I'm sorry about the problems. We try to be responsive to fixing bugs and
implementing new features that people are requesting for ZFS.
It's not always possible to get it right. In this instance I don't think the
bug was reproducible, and perhaps that's why it hasn't received the attention
it
On Tue, January 13, 2009 09:51, Neil Perrin wrote:
I'm sorry about the problems. We try to be responsive to fixing bugs and
implementing new features that people are requesting for ZFS.
It's not always possible to get it right. In this instance I don't think the
bug was reproducible, and
Any idea what could cause my system to panic? I get my system rebooted daily at
various times. Very strange, but it's pointing to ZFS. I have U6 with all the latest
patches.
Jan 12 05:47:12 chrysek unix: [ID 836849 kern.notice]
Jan 12 05:47:12 chrysek ^Mpanic[cpu1]/thread=30002c8d4e0:
Jan 12
This is a known bug:
6678070 Panic from vdev_mirror_map_alloc()
http://bugs.opensolaris.org/view_bug.do?bug_id=6678070
Neil.
On 01/12/09 21:12, Krzys wrote:
Any idea what could cause my system to panic? I get my system rebooted daily at
various times. Very strange, but it's pointing to ZFS.
I'm seeing this too. Nothing unusual happened before the panic.
Just a shutdown (init 5) and later startup. I have the crashdump
and a copy of the problem zpool (on swan). Here's the stack trace:
$C
ff0004463680 vpanic()
ff00044636b0 vcmn_err+0x28(3, f792ecf0, ff0004463778)
space_map_add+0xdb(ff014c1a21b8, 472785000, 1000)
space_map_load+0x1fc(ff014c1a21b8, fbd52568, 1,
ff014c1a1e88, ff0149c88c30)
running snv79.
hmm.. did you spend any time in snv_74 or snv_75 that might
have gotten
Hi,
On 09/29/07 22:00, Gavin Maltby wrote:
Hi,
Our ZFS NFS build server running snv_73 (pool created back before
ZFS integrated to ON) panicked, I guess from ZFS, the first time
and now panics on attempted boot every time, as below. Is this
a known issue and, more importantly (2TB of data in the
T3 comment below...
Gavin Maltby wrote:
Hi,
On 09/29/07 22:00, Gavin Maltby wrote:
Hi,
Our ZFS NFS build server running snv_73 (pool created back before
ZFS integrated to ON) panicked, I guess from ZFS, the first time
and now panics on attempted boot every time, as below. Is this
a known
On 10/01/07 17:01, Richard Elling wrote:
T3 comment below...
[cut]
A scrub is only 20% complete, but has found no errors thus far. I checked
the T3 pair and no complaints there either - I did reboot them just for
luck (last reboot was 2 years ago, apparently!).
Living on the edge...
The T3
Hi,
Our ZFS NFS build server running snv_73 (pool created back before
ZFS integrated to ON) panicked, I guess from ZFS, the first time
and now panics on attempted boot every time, as below. Is this
a known issue and, more importantly (2TB of data in the pool),
any suggestions on how to recover
OK, I found the problem with 0x06: one disk was missing. But now I have all my
disks and I get 0x05:
Sep 21 10:25:53 unknown ^Mpanic[cpu0]/thread=ff0001e12c80:
Sep 21 10:25:53 unknown genunix: [ID 603766 kern.notice] assertion failed:
dmu_read(os, smo->smo_object, offset, size, entry_map) == 0
Tomas Ögren wrote:
On 18 September, 2007 - Gino sent me these 0,3K bytes:
Hello,
upgrade to snv_60 or later if you care about your data :)
If there are known serious data loss bug fixes that have gone into
snv60+, but not into s10u4, then I would like to tell Sun to backport
those into
I have a raid-z ZFS filesystem with 3 disks. The disks were starting to have read
and write errors.
The disks were so bad that I started to get trans_err. The server locked up and
was reset. Now when trying to import the pool, the system panics.
I installed the latest Recommended on my
Basically, it is complaining that there aren't enough disks to read
the pool metadata. This would suggest that in your 3-disk RAID-Z
config, either two disks are missing, or one disk is missing *and*
another disk is damaged -- due to prior failed writes, perhaps.
(I know there's at least one
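Before retrying the import, it can help to see exactly which devices the pool thinks are missing or damaged; zpool import with no arguments only scans the labels and reports per-device state without actually importing. A sketch (the alternate device directory is just an example):
zpool import                  # scan labels and list importable pools without importing them
zpool import -d /some/dir     # scan a non-default device directory if devices have moved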
Actually, here are the first panic messages:
Sep 13 23:33:22 netra2 unix: [ID 603766 kern.notice] assertion failed:
dmu_read(os, smo->smo_object, offset, size, entry_map) == 0 (0x5 == 0x0), file:
../../common/fs/zfs/space_map.c, line: 307
Sep 13 23:33:22 netra2 unix: [ID 10 kern.notice]
Sep 13
One of our Solaris 10 update 3 servers panicked today with the following error:
Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is
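For reference, the usual way to pull that backtrace out of a saved dump (savecore writes to /var/crash/<hostname> by default; the instance number 0 below is an example):
cd /var/crash/m2000ef
mdb unix.0 vmcore.0             # open the kernel crash dump in mdb
> ::status                      # panic string and dump summary
> ::stack                       # backtrace of the panicking thread
> ::msgbuf                      # console messages leading up to the panic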
Hi Matty,
From the stack I saw, that is 6454482.
But this defect has been marked as 'Not reproducible'. I have no idea
how to recover
from it, but it looks like a new update will not hit this issue.
Matty wrote:
One of our Solaris 10 update 3 servers panicked today with the following error:
Hi all,
I was extracting an 8GB tar and encountered this panic. The system was
just installed last week with Solaris 10 update 3 and the latest
recommended patches as of June 26. I can provide more output from mdb,
or the crashdump itself if it would be of any use.
Any ideas what's going on
Apr 23 02:02:21 SERVER144 offline or reservation conflict
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/[EMAIL PROTECTED] (sd82):
Apr 23 02:02:21 SERVER144 i/o to invalid geometry
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING:
Folks, before I start delving too deeply into this crashdump, has anyone
seen anything like it?
The background is that I'm running a non-debug open build of b49 and was
in the process of running the zoneadm -z redlx install
After a bit, the machine panics, initially looking at the
I ran some powernow experiments on a dual-core amd64 box, running the
64-bit debug on-20060828 kernel. At some point the kernel seemed to
make no more progress (probably a bug in the multiprocessor powernow
code) and the GUI was stuck, so I typed (blind) F1-A + $<systemdump.
Writing the crashdump
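A footnote on that forced dump: $<systemdump is the kmdb macro being typed there after breaking in with F1-A, and on the next boot savecore extracts the dump; dumpadm shows where it will land. Roughly:
dumpadm                         # show the dump device and savecore directory
# at the kmdb prompt after F1-A:
[0]> $<systemdump               # force a panic and write a crash dump
# after reboot, if savecore did not run automatically:
savecore -v                     # extract unix.N / vmcore.N into the savecore directory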