Kenny [EMAIL PROTECTED] writes:
I have Sun Solaris 5.10 Generic_120011-14 and the zpool version is 4.
I've found references to versions 5-10 on the OpenSolaris site.
Are these versions for OpenSolaris only? I've searched the Sun site
for ZFS patches and found nothing (most likely operator
Well I managed to get my pool back up, unconventionally though...
I got to thinking about how my data was fine before the replace so I
popped the cable off of the new disk and voila! The spare showed back
up and the pool imported in a degraded state.
Something must have gotten botched in the
I am trying to bring up a 3510 JBOD on Solaris 10 and would like to enable
multipathing. I have connected both ports on a dual-port HBA to two loops
(FC0 and FC5). This is an X4100 running Solaris 10. When I run the format
command I only see 12 drives - I was expecting that when
3510 FC JBOD array
On Wed, May 21, 2008 at 9:54 AM, Krutibas Biswal [EMAIL PROTECTED] wrote:
I am trying to bring up a 3510 JBOD on Solaris 10 and would like to enable
multipathing. I have connected both ports on a dual-port HBA to two loops
(FC0 and FC5). This is an X4100 running Solaris 10. When I run the format
On x64 Solaris 10, the default setting of mpxio was :
mpxio-disable=no;
I changed it to
mpxio-disable=yes;
and rebooted the machine and it detected 24 drives.
Thanks,
Krutibas
Peter Tribble wrote:
On Wed, May 21, 2008 at 9:54 AM, Krutibas Biswal [EMAIL PROTECTED] wrote:
I am trying to
Hi All;
Does anyone know the status of support for ZFS on active-active clusters?
Best regards
Mertol
http://www.sun.com/
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile
Hello Krutibas,
Wednesday, May 21, 2008, 10:43:03 AM, you wrote:
KB On x64 Solaris 10, the default setting of mpxio was :
KB mpxio-disable=no;
KB I changed it to
KB mpxio-disable=yes;
KB and rebooted the machine and it detected 24 drives.
Originally you wanted to get it multipathed which
Hello,
I had a 3-disk raidz2 pool. I wanted to increase throughput and available
storage so I added another 2 disks into the pool with:
zpool add -f external c12t0d0p0
zpool add -f external c13t0d0p0 (it wouldn't work without -f, and I believe
that's because the fs was online)
I
1) Am I right in my reasoning?
yes
2) Can I remove the new disks from the pool, and re-add them under the
raidz2 pool
copy the data off the pool, destroy and remake the pool, and copy back
3) How can I check how much zfs data is written on the actual disk (say
c12)?
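A minimal sketch of the copy-off/rebuild approach suggested above, assuming the pool is named external, the original disks were c9t0d0-c11t0d0, and /backup has room for the data (all device names and paths here are placeholders, and zfs send -R needs a reasonably recent ZFS; older releases would have to send each filesystem separately):
# zfs snapshot -r external@move
# zfs send -R external@move > /backup/external.stream
# zpool destroy external
# zpool create external raidz2 c9t0d0 c10t0d0 c11t0d0 c12t0d0 c13t0d0
# zfs receive -dF external < /backup/external.stream
For question 3, zpool iostat -v external lists the space allocated on each top-level vdev (and on each disk inside it).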
On Wed, 21 May 2008, Justin Vassallo wrote:
zpool add -f external c12t0d0p0
zpool add -f external c13t0d0p0 (it wouldn't work without -f, and I believe
that's because the fs was online)
No, it had nothing to do with the pool being online. It was because a
single disk was being added to a
zpool add -f external c12t0d0p0
zpool add -f external c13t0d0p0 (it wouldn't work without -f, and I believe
that's because the fs was online)
No, it had nothing to do with the pool being online. It was because a
single disk was being added to a pool with raidz2. The error message that
On Wed, 21 May 2008, Claus Guttesen wrote:
Isn't one supposed to be able to add more disks to an existing raidz(2)
pool and have the data spread across all disks in the pool automagically?
Alas, that is not yet possible. See Adam's blog for details:
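What you can do today is add a whole new raidz2 vdev to the pool, ideally the same width as the existing one, and ZFS will stripe new writes across the vdevs. A sketch with hypothetical device names:
# zpool add external raidz2 c12t0d0 c13t0d0 c14t0d0
Growing an existing raidz/raidz2 vdev one disk at a time is the part that isn't supported.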
On Wed, May 21, 2008 at 2:54 PM, Claus Guttesen [EMAIL PROTECTED] wrote:
zpool add -f external c12t0d0p0
zpool add -f external c13t0d0p0 (it wouldn't work without -f, and I believe
that's because the fs was online)
No, it had nothing to do with the pool being online. It was because a
single
Robert Milkowski wrote:
Hello Krutibas,
Wednesday, May 21, 2008, 10:43:03 AM, you wrote:
KB On x64 Solaris 10, the default setting of mpxio was :
KB mpxio-disable=no;
KB I changed it to
KB mpxio-disable=yes;
KB and rebooted the machine and it detected 24 drives.
Originally
On Wed, May 21, 2008 at 10:55 PM, Krutibas Biswal
[EMAIL PROTECTED] wrote:
Robert Milkowski wrote:
Originally you wanted to get it multipathed which was the case by
default. Now you have disabled it (well, you still have two paths but
no automatic failover).
Thanks. Can somebody point me to
Krutibas
On x64 Solaris 10, the default setting of mpxio was :
mpxio-disable=no;
I changed it to
mpxio-disable=yes;
and rebooted the machine and it detected 24 drives.
...you have just *disabled* Solaris scsi_vhci(7d) multi-pathing.
You should go back to 'mpxio-disable=no;' and look
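For reference, a sketch of putting it back (assuming the HBA is handled by the fp driver; verify the file and device paths on your own system). Either set, in /kernel/drv/fp.conf:
mpxio-disable="no";
or simply run:
# stmsboot -e
which enables MPxIO and takes care of rewriting the device paths in /etc/vfstab across the reboot. Afterwards, mpathadm list lu should show each LUN once, with two operational paths.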
I encountered an issue that people using OS-X systems as NFS clients
need to be aware of. While not strictly a ZFS issue, it may be
encountered most often by ZFS users since ZFS makes it easy to support
and export per-user filesystems. The problem I encountered was when
using ZFS to create
On May 21, 2008, at 11:15 AM, Bob Friesenhahn wrote:
I encountered an issue that people using OS-X systems as NFS clients
need to be aware of. While not strictly a ZFS issue, it may be
encountered most often by ZFS users since ZFS makes it easy to support
and export per-user filesystems. The
[EMAIL PROTECTED] wrote on 05/21/2008 10:38:10 AM:
On May 21, 2008, at 11:15 AM, Bob Friesenhahn wrote:
I encountered an issue that people using OS-X systems as NFS clients
need to be aware of. While not strictly a ZFS issue, it may be
encountered most often by ZFS users since ZFS makes
On Wed, 21 May 2008, Krutibas Biswal wrote:
Thanks. Can somebody point me to some documentation on this?
I wanted to see 24 drives so that I can use load sharing between
two controllers (C1Disk1, C2Disk2, C1Disk3, C2Disk4...) for
performance.
If I enable multipathing, would the drive do
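With MPxIO enabled you see one device node per drive, but I/O is still spread across both controller paths by the scsi_vhci driver; round-robin is the default policy and can be set explicitly (a sketch, not specific to the 3510):
load-balance="round-robin";    (in /kernel/drv/scsi_vhci.conf)
# mpathadm list lu             (lists each LUN with its operational path count)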
On Wed, 21 May 2008, Andy Lubel wrote:
The simple solution was to simply create a /home/.DS_Store directory
on the server so that the mount request would succeed.
Did you try this?
http://support.apple.com/kb/HT1629
No, I decided not to use that since it has negative impact on OS-X
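For readers following along, the two workarounds under discussion are roughly these (paths are illustrative; the defaults command is the client-side setting HT1629 describes, run per user on the Mac):
On the NFS server:  # mkdir /home/.DS_Store    (so the lookup/mount of /home/.DS_Store succeeds)
On the OS-X client: $ defaults write com.apple.desktopservices DSDontWriteNetworkStores true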
On May 21, 2008, at 11:15, Bob Friesenhahn wrote:
The simple solution was to simply create a /home/.DS_Store directory
on the server so that the mount request would succeed.
What permissions do you have on /home/.DS_Store? I assume the
clients fail quietly on their write attempts?
Does the
I've never understood it.
I've heard that for the Thumper, which uses 48 drives, you should not make them
all into one zpool. Instead you should make them into several vdevs. And then
you combine all vdevs into one zpool? Is it so? Why do you do that? Why not
several zpools?
On Wed, 21 May 2008, Bill McGonigle wrote:
What permissions do you have on /home/.DS_Store? I assume the clients fail
quietly on their write attempts?
The actual permissions do not seem to matter. The directory does not
need to be writeable. As long as the path can be mounted, the problem
On May 21, 2008, at 02:53, Christopher Gibbs wrote:
I got to thinking about how my data was fine before the replace so I
popped the cable off of the new disk and voila! The spare showed back
up and the pool imported in a degraded state.
Good news. I'll be curious to hear if you ultimately
On Wed, 21 May 2008, Orvar Korvar wrote:
I've heard that for the Thumper, which uses 48 drives, you should not
make them all into one zpool. Instead you should make them into several
vdevs. And then you combine all vdevs into one zpool? Is it so? Why do
Right. A vdev is the smallest storage
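To make the usual Thumper layout concrete: one pool built from several raidz2 vdevs, so ZFS stripes across the vdevs while each vdev supplies its own redundancy (the widths and device names below are only an example, not a recommendation for the x4500):
# zpool create tank \
    raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
    raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
    raidz2 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0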
I'm looking at implementing home directories on ZFS. This will be
about 400 users, each with a quota. The ZFS way of doing this AIUI is to
create one filesystem per user, assign them a quota and/or
reservation, and set sharenfs=on. So I tried it:
# zfs create local-space/test
# zfs set sharenfs=on
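A sketch of the full per-user pattern being described, with a hypothetical dataset layout and user name:
# zfs create local-space/home
# zfs create local-space/home/alice
# zfs set quota=10G local-space/home/alice
# zfs set sharenfs=on local-space/home
The sharenfs property is inherited, so each new per-user filesystem is exported automatically.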
On May 21, 2008, at 1:43 PM, Will Murnane wrote:
I'm looking at implementing home directories on ZFS. This will be
about 400 users, each with a quota. The ZFS way of doing this AIUI is to
create one filesystem per user, assign them a quota and/or
reservation, and set sharenfs=on. So I tried
Spencer Shepler wrote:
On May 21, 2008, at 1:43 PM, Will Murnane wrote:
Okay, all is well. Try the same thing on a Solaris client, though,
and it doesn't work:
# mount -o vers=4 ds3:/export/local-space/test /mnt/
# cd mnt
# ls
foo
# ls foo
nothing
This behavior was a recent
Bob Friesenhahn wrote:
I can't speak from a Mac-centric view, but for my purposes NFS in
Leopard works well. The automounter in Leopard is a perfect clone of
the Solaris automounter, and may be based on OpenSolaris code.
It is based on osol code. The implementor worked a long time at Sun
I've always done a disksuite mirror of the boot disk. It's been easy to do
after the install in Solaris. With Linux I had to do it during the install.
OpenSolaris 2008.05 didn't give me an option.
How do I add my 2nd drive to the boot zpool to make it a mirror?
On Wed, 21 May 2008, Will Murnane wrote:
So, my questions are:
* Are there options I can set server- or client-side to make Solaris
child mounts happen automatically (i.e., match the Linux behavior)?
* Will this behave with automounts? What I'd like to do is list
/export/home in the
Hi Tom,
You need to use the zpool attach command, like this:
# zpool attach pool-name disk1 disk2
Cindy
Tom Buskey wrote:
I've always done a disksuite mirror of the boot disk. It's been easy to do
after the install in Solaris. With Linux I had to do it during the install.
OpenSolaris
OK, so this is another "my pool got eaten" problem. Our setup:
Nevada 77 when it happened, now running 87.
9 iSCSI vdevs exported from Linux boxes off of hardware RAID (running Linux for
drivers on the RAID controllers). The pool itself is simply striped.
Our problem:
Power got yanked to 8 of
And another thing we noticed: on test striped pools we've created, all the vdev
labels hold the same txg number, even as vdevs are added later, while the
labels on our primary pool (the dead one) are all different.
[Eric Schrock:]
| Look at alternate cachefiles ('zpool set cachefile', 'zpool import -c
| cachefile', etc). This avoids scanning all devices in the system
| and instead takes the config from the cachefile.
This sounds great.
Is there any information on when this change will make it to Solaris?
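For anyone who hasn't tried it, the mechanism Eric describes looks roughly like this (pool and path names are just examples; the cachefile property only exists in recent Nevada builds, which is exactly why the question of when it reaches Solaris matters):
# zpool set cachefile=/etc/zfs/iscsipools.cache tank
# zpool import -c /etc/zfs/iscsipools.cache tank
The import then reads the pool configuration from the named file instead of scanning every device on the system.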
Hardware: Supermicro server with Adaptec 5405 SAS controller, LSI expander -
24 drives. Currently using 2x 1tb SAS drives striped and 1x750gb SATA as
another pool. I don't think hardware is related though as if I turn off zfs
compression it's fine - I seem to get the same behavior on either pool.
On Wed, May 21, 2008 at 02:43:26PM -0400, Will Murnane wrote:
So, my questions are:
* Are there options I can set server- or client-side to make Solaris
child mounts happen automatically (i.e., match the Linux behavior)?
I think these are known as mirror-mounts in Solaris. They first
It is also necessary to use either installboot (sparc) or installgrub (x86)
to install the boot loader on the attached disk. It is a bug that this
is not done automatically (6668666 - zpool command should put a
bootblock on a disk added as a mirror of a root pool vdev)
Lori
[EMAIL PROTECTED]
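Concretely, after the zpool attach finishes resilvering, something along these lines (device and slice names are illustrative):
x86:   # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
sparc: # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0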
On Wed, May 21, 2008 at 4:13 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
Here is the answer you were looking for:
In /etc/auto_home:
# Home directory map for automounter
#
* server:/home/&
This works on Solaris 9, Solaris 10, and OS-X Leopard.
And Linux, too! Thank you for the
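For this to take effect, /home has to be an autofs mount point; the stock /etc/auto_master already carries the needed entry, and map changes are picked up as sketched below (the -nobrowse option is just the usual default, not something from this thread):
/home  auto_home  -nobrowse     (in /etc/auto_master)
# automount -v                   (or: svcadm restart autofs)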
On Wed, May 21, 2008 at 04:59:54PM -0400, Chris Siebenmann wrote:
[Eric Schrock:]
| Look at alternate cachefiles ('zpool set cachefile', 'zpool import -c
| cachefile', etc). This avoids scanning all devices in the system
| and instead takes the config from the cachefile.
This sounds
Will Murnane wrote:
On Wed, May 21, 2008 at 4:13 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
Here is the answer you were looking for:
In /etc/auto_home:
# Home directory map for automounter
#
* server:/home/&
This works on Solaris 9, Solaris 10, and OS-X Leopard.
And
Hi All,
I wonder if this is something that needs to be looked into further, or a quirk
in my configuration or something. I have an Opensolaris 2008.05 box which I
have configured as a CIFS member server in a Windows 2003 AD. CIFS is running
in domain mode, Windows/Linux/MacOS clients can
Hi All,
Another oddity I have noticed is this, and it sounds close to what is described
here, found after Googling:
http://www.nexenta.com/corp/index.php?option=com_fireboard&func=view&id=202&catid=11
I have a share on Windows fileserver (server1) in the domain my OpenSolaris
ZFS+CIFS box (server2) is