Wilkinson, Alex wrote:
> On Wed, Sep 03, 2008 at 12:57:52PM -0700, Paul B. Henson wrote:
>
> >I tried installing the Sun provided samba source code package to try to do
> >some debugging on my own, but it won't even compile, configure fails with:
>
> Oh, where did you get that from?
I have a similar situation and would love some concise suggestions:
I had a working install of 2008.05 running snv_93 with the updated grub. I did
a pkg image-update to snv_95 and ran the zfs upgrade when it was suggested. The
system ran fine until I did a reboot; now it won't boot, and only the grub
command line shows.
On Wed, Sep 03, 2008 at 12:57:52PM -0700, Paul B. Henson wrote:
>I tried installing the Sun provided samba source code package to try to do
>some debugging on my own, but it won't even compile, configure fails with:
Oh, where did you get that from?
-aW
[EMAIL PROTECTED] said:
> We did ask our vendor, but we were just told that AVS does not support
> x4500.
You might have to use the open-source version of AVS, but it's not
clear whether that requires OpenSolaris or whether it will run on Solaris 10.
Here's a description of how to set it up between two X4500s:
It doesn't really have a write cache, but some of us have been using this
relatively inexpensive card with good, fast results. I've been using it with
SATA rather than SAS.
AOC-USAS-L8i
http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm
Thread:
http://opensolaris.org/jive/thread
If we get two x4500s and look at AVS, would it be possible to:
1) Set up AVS to replicate zfs and zvol (ufs) from 01 -> 02? Is that supported
by Sol 10 5/08?
Assuming 1, suppose we set up a home-made IP fail-over so that, should 01 go
down, all clients are redirected to 02.
2) For fail-back, are there met
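For reference, the SNDR half of (1) might look roughly like the sketch below.
This is only an illustration: the host names (host01/host02) and device paths
are made up, and the exact flags should be checked against the sndradm man
page before use.
  # sndradm -n -e host01 /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t1d0s0 \
        host02 /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t1d0s0 ip async
This enables replication of one volume from host01 to host02, with a bitmap
volume on each side to track changes. Note that SNDR replicates the block
device underneath ZFS, so the pool must only ever be imported on one host at
a time.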
I have a pool on a USB device that I try to import with 'zpool import -f
passport'. I get an error in syslog: "Pool 'passport' has encountered an
uncorrectable I/O error. Manual intervention is required."
The import at this point is hung and unkillable.
I didn't find anything in the man pages to cover this.
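For anyone hitting the same thing, the basic import sequence (before it hangs)
would be something like this; 'passport' is the pool name from the message
above, and the -f is only needed because the pool was last in use elsewhere:
  # zpool import                    (survey importable pools, imports nothing)
  # zpool import -f passport        (force the actual import)
  # zpool status -v passport        (check health and any damaged files)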
On Wed, Sep 3, 2008 at 1:48 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
> I've never heard of a battery that's used for anything but RAID
> features. It's an interesting question, if you use the controller in
> ``JBOD mode'' will it use the write cache or not? I would guess not,
> but it might.
comment at bottom...
Miles Nordin wrote:
>> "mb" == Matt Beebe <[EMAIL PROTECTED]> writes:
>>
>
> mb> Anyone know of a SATA and/or SAS HBA with battery backed write
> mb> cache?
>
> I've never heard of a battery that's used for anything but RAID
> features. It's an interesting question, if you use the controller in
> ``JBOD mode'' will it use the write cache or not? I would guess not,
> but it might.
> "mb" == Matt Beebe <[EMAIL PROTECTED]> writes:
mb> Anyone know of a SATA and/or SAS HBA with battery backed write
mb> cache?
I've never heard of a battery that's used for anything but RAID
features. It's an interesting question, if you use the controller in
``JBOD mode'' will it use the write cache or not? I would guess not,
but it might.
Way back when I first started looking at ZFS, I remember testing the Sun
samba/ZFS ACL integration. I had some problems with the special ACEs at
first, but I thought those were resolved by installing the latest samba
patch. However, after working on other pieces of our developing
infrastructure fo
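(The "special ACEs" here are the NFSv4-style owner@/group@/everyone@ entries
that ZFS exposes. On Solaris they can be inspected with ls -v; the output
below is illustrative rather than verbatim, and the file name is made up:)
  # ls -v file.txt
  -rw-r--r--   1 root  root  0 Sep  3 12:00 file.txt
       0:owner@:read_data/write_data/append_data:allow
       1:group@:read_data:allow
       2:everyone@:read_data:allow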
I have a disk that went 'bad' on an x4500. It came up as UNAVAIL in a zpool
status and was 'unconfigured' in cfgadm. The x4500 has a cute little blue light
that tells you when it's able to be removed. With it on, I replaced the disk
and reconfigured it with cfgadm.
Now cfgadm lists it as configured
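For reference, the sequence after physically swapping the drive is roughly the
following; the attachment point (sata1/7) and device name (c1t7d0) are
hypothetical, so substitute whatever cfgadm and zpool status report:
  # cfgadm -c configure sata1/7     (bring the new drive back online)
  # zpool replace tank c1t7d0       (same name = same slot, new disk)
  # zpool status tank               (watch the resilver progress)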
Anyone know of a SATA and/or SAS HBA with battery backed write cache?
Seems like using a full-blown RAID controller and exporting each individual
drive back to ZFS as a single LUN is a waste of power and $$$. Looking for any
thoughts or ideas.
Thanks.
-Matt
2008/9/3 Jerry K <[EMAIL PROTECTED]>:
> Hello Bob,
>
> Thank you for your reply. Your final sentence is a gem I will keep.
>
> As far as the rest, I have a lot of production servers that are (2) drive
> systems, and I really hope that there is a mechanism to quickly R&R dead
> drives, resilvering aside.
On Fri, Aug 29, 2008 at 10:32 PM, Todd H. Poole <[EMAIL PROTECTED]> wrote:
> I can't agree with you more. I'm beginning to understand what the phrase
> "Sun's software is great - as long as you're running it on Sun's hardware"
> means...
>
> Whether it's deserved or not, I feel like this OS isn't
> "rm" == Robert Milkowski <[EMAIL PROTECTED]> writes:
rm> What bothers me is why I got CKSUM errors.
I think they accumulated latently while you had the pool imported on
Node 2 with half of the mirror missing. ZFS seems to count unexpected
resilvering as CKSUM errors sometimes.
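If the errors really are just an artifact of resilvering with half the mirror
absent, one way to confirm and reset them is sketched below ('tank' is a
placeholder pool name):
  # zpool status -v tank            (confirm there are no persistent data errors)
  # zpool clear tank                (reset the error counters)
  # zpool scrub tank                (verify every block checksums cleanly)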
Hello Bob,
Thank you for your reply. Your final sentence is a gem I will keep.
As far as the rest, I have a lot of production servers that are (2) drive
systems, and I really hope that there is a mechanism to quickly R&R dead
drives, resilvering aside. I guess I need to do some more RTFMing in
On Wed, 3 Sep 2008, Jerry K wrote:
> How would this work for servers that support only (2) drives, or systems
> that are configured to have pools of (2) drives, i.e. mirrors, and
> there is no additional space to have a new disk, as shown in the sample
> below.
You may be able to accomplish what
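For a dead disk in a two-way mirror there is no need for a third bay at all;
assuming a hypothetical pool 'tank' and device c0t1d0, the in-place swap is
roughly:
  # zpool offline tank c0t1d0       (take the dead half of the mirror offline)
  (physically swap the drive in the same bay, reconfigure if needed)
  # zpool replace tank c0t1d0       (one name means: same slot, new disk)
  # zpool status tank               (watch it resilver back to a healthy mirror)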
Brandon High wrote:
> On Tue, Sep 2, 2008 at 2:15 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
>
>> Silly me. It is still Monday, and I am coffee challenged. RAIDoptimizer
>> is still an internal tool. However, for those who are interested in the
>> results
>> of a RAIDoptimizer run for 48 d
Hello zfs-discuss,
S10U5+patches, SPARC, Sun/QLogic 4Gb dual-ported FC cards.
ZFS does mirroring between two LUNs, each coming from a
separate 6540 disk array.
I got a kernel panic while the pool was imported on one of the nodes
(kernel panic - it's my fault).
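For context, the layout described (ZFS mirroring one LUN from each 6540) would
have been created with something like the following; the pool and device names
are placeholders:
  # zpool create tank mirror c4t0d0 c5t0d0   (one LUN from each array)
so ZFS itself holds a full copy of the data on each array.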
How would this work for servers that support only (2) drives, or systems
that are configured to have pools of (2) drives, i.e. mirrors, and
there is no additional space to have a new disk, as shown in the sample
below.
I still support lots of V490's, which hold only (2) drives.
Thanks,
Jerry
Gaah, my command got nerfed by the forum, sorry, should have previewed. What
you want is:
# zpool replace poolname olddisk newdisk
I'm pretty sure you just need the zpool replace command:
# zpool replace
Run that for the disk you want to replace and let it resilver. Once it's done,
you can unconfigure the old disk with cfgadm and remove it.
If you have multiple mirror vdevs, you'll need to run the command a few times.
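Putting that together with the rest of the thread, the whole workflow is
roughly as follows; pool, device, and attachment-point names here are
hypothetical:
  # zpool replace tank c1t2d0 c1t5d0    (old disk, new disk)
  # zpool status tank                   (wait for the resilver to finish)
  # cfgadm -c unconfigure sata1/2       (then the old disk is safe to pull)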
Mark J. Musante wrote:
>
> On 3 Sep 2008, at 05:20, "F. Wessels" <[EMAIL PROTECTED]> wrote:
>
>> Hi,
>>
>> can anybody describe the correct procedure to replace a disk (in a
>> working OK state) with another disk without degrading my pool?
>
> This command ought to do the trick:
>
> zfs replace
On 3 Sep 2008, at 05:20, "F. Wessels" <[EMAIL PROTECTED]> wrote:
> Hi,
>
> can anybody describe the correct procedure to replace a disk (in a
> working OK state) with another disk without degrading my pool?
This command ought to do the trick:
zfs replace
The type of pool doesn't matter
Hi,
can anybody describe the correct procedure to replace a disk (in a working OK
state) with another disk without degrading my pool?
For a mirror, I thought of adding the spare: you'll get a three-device mirror.
Let it resilver. Finally remove the disk I want (a sketch of this sequence
follows below). But what would be the correct
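The attach-then-detach sequence just described would look roughly like this,
with hypothetical pool and device names (c0t0d0 is the disk being retired,
c0t2d0 the new one):
  # zpool attach tank c0t1d0 c0t2d0     (gives a three-way mirror)
  # zpool status tank                   (wait until the resilver completes)
  # zpool detach tank c0t0d0            (drop the disk being retired)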
Yeah, I'm looking at using 10 disks or 16 disks (depending on which
chassis I get), and I would like reasonable redundancy (not HA-crazy
redundancy where I can suffer tons of failures; I can power this down
and replace disks, it's a home server) and to maximize the amount of
usable space.
Putting up
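For what it's worth, with 10 disks a single raidz2 would match that goal: two
disks of parity, eight disks of usable space, and any two drives can fail.
A sketch with hypothetical device names:
  # zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
        c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0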