All;
I am deeply sorry if this topic has been rehashed, checksummed,
de-duplicated and archived before.
But I just need a small clarification.
/etc/sfw/smb.conf is necessary only for smb/server to function properly,
but is the smb/server SMF service necessary for ZFS sharesmb to work?
I am tryin
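As context for the question, a minimal sketch of the usual setup on Solaris/OpenSolaris (the dataset name tank/share is a placeholder, not from the thread):

```shell
# Enable the in-kernel CIFS/SMB server via SMF (with its dependencies).
svcadm enable -r network/smb/server
# Publish a dataset over SMB; "tank/share" is a hypothetical example name.
zfs set sharesmb=on tank/share
# List configured shares to verify.
sharemgr show -vp
```

Setting sharesmb=on without the service running configures the share but nothing serves it, which is what the question is getting at.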
Bruno -
Sorry, I don't have experience with OpenSolaris, but I *do* have experience
running a J4400 with Solaris 10u8.
First off, you need an LSI HBA for the multipath support. As far as I know,
it won't work with any others.
I ran into problems with the multipath support because it wouldn't
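For reference, a sketch of how multipathing is typically toggled and inspected on Solaris 10; this is generic MPxIO usage, not the poster's exact procedure:

```shell
# Enable MPxIO multipathing on supported HBAs; this requires a reboot.
stmsboot -e
# After the reboot, list multipathed logical units and their path counts.
mpathadm list lu
```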
Hi Ross,
On Friday 27 November 2009 21:31:52 Ross Walker wrote:
> I would plan downtime to physically inspect the cabling.
There is not much cabling as the disks are directly connected to a large
backplane (Sun Fire X4500)
Cheers
Carsten
On Nov 27, 2009, at 12:55 PM, Carsten Aulbert wrote:
On Friday 27 November 2009 18:45:36 Carsten Aulbert wrote:
I was too fast, now it looks completely different:
scrub: resilver completed after 4h3m with 0 errors on Fri Nov 27 18:46:33 2009
[...]
s13:~# zpool status
pool: atlashome
state: DEGRADED
On Fri, 27 Nov 2009, Carsten Aulbert wrote:
Now the big question:
(1) zpool clear or
(2) bring in the spare again (or exchange two more disks)?
Opinions?
Since "applications are unaffected" (good sign!), I would save all
notes regarding current status, do 'zpool clear', 'zpool scrub' and
t
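The suggested sequence, spelled out against the pool name from the thread (atlashome):

```shell
# Reset the logged error counters on the pool.
zpool clear atlashome
# Re-read every block to verify checksums end to end.
zpool scrub atlashome
# Check progress; rerun until the scrub reports completion.
zpool status atlashome
```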
On Friday 27 November 2009 18:45:36 Carsten Aulbert wrote:
I was too fast, now it looks completely different:
scrub: resilver completed after 4h3m with 0 errors on Fri Nov 27 18:46:33 2009
[...]
s13:~# zpool status
pool: atlashome
state: DEGRADED
status: One or
Hi Bob
On Friday 27 November 2009 17:19:22 Bob Friesenhahn wrote:
>
> It is interesting that in addition to being in the same vdev, the
> disks encountering serious problems are all target 6. Besides
> something at the zfs level, there could be some issue at the
> device driver, or underlying
On Fri, 27 Nov 2009, Carsten Aulbert wrote:
At the very least, I would consider physically replacing c1t6d0.
That's an option; I'll see if I can let the system repair more of the errors.
Regarding the error with a named disk, there is only one disk named in the
output so far.
Definitely repla
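Option (2), replacing the suspect disk, would look roughly like this; the replacement device name c1t7d0 is a placeholder, not from the thread:

```shell
# Swap the failing disk out of the pool; ZFS resilvers onto the new device.
# c1t7d0 is a hypothetical replacement device name.
zpool replace atlashome c1t6d0 c1t7d0
# Resilvering progress appears in the status output.
zpool status atlashome
```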
Last remark was spot on:
Script started on Wed Oct 28 10:23:11 2009
# zpool get all rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  size      16.9G  -
rpool  capacity  64%    -
rpool  altroot   -      default
rpool  health
Michael Schuster schrieb:
> Thomas Maier-Komor wrote:
>
>>> Script started on Wed Oct 28 09:38:38 2009
>>> # zfs get dedup rpool/export/home
>>> NAME               PROPERTY  VALUE  SOURCE
>>> rpool/export/home  dedup     on     local
>>> # for i in 1 2 3 4 5 ; do mkdir /expor
Thomas Maier-Komor wrote:
Script started on Wed Oct 28 09:38:38 2009
# zfs get dedup rpool/export/home
NAME               PROPERTY  VALUE  SOURCE
rpool/export/home  dedup     on     local
# for i in 1 2 3 4 5 ; do mkdir /export/home/d${i} && df -k
/export/home/d${i} && zfs
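The truncated transcript appears to be an experiment watching space accounting on a dedup-enabled dataset; a hypothetical reconstruction (the trailing zfs subcommand is cut off in the original, so "zfs list" here is a guess):

```shell
# Enable dedup on the home dataset, then create directories while
# watching both df and zfs-reported usage after each step.
zfs set dedup=on rpool/export/home
for i in 1 2 3 4 5 ; do
    mkdir /export/home/d${i} &&
    df -k /export/home/d${i} &&
    zfs list rpool/export/home   # hypothetical; original command is truncated
done
```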
Chavdar Ivanov schrieb:
> Hi,
>
> I BFUd successfully snv_128 over snv_125:
>
> ---
> # cat /etc/release
> Solaris Express Community Edition snv_125 X86
> Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
> Use is subject to license
Hi,
I BFUd successfully snv_128 over snv_125:
---
# cat /etc/release
Solaris Express Community Edition snv_125 X86
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembl
Hi all,
On Thursday 26 November 2009 17:38:42 Cindy Swearingen wrote:
> Did anything about this configuration change before the checksum errors
> occurred?
>
No, this machine has been running in this configuration for a couple of weeks now.
> The errors on c1t6d0 are severe enough that your spare kick