iman habibi wrote:
Dear support,
When I connect my external USB DVD-ROM to a SPARC machine that has Solaris 10u6
installed on a ZFS-based file system, it returns this error:
Your ZFS question is?
DVDs use the HSFS filesystem.
One good place for general Solaris questions is comp.unix.solaris.
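For what it's worth, a manual mount of a data DVD needs the filesystem type
spelled out; a minimal sketch, reusing the device path from the original post
and assuming the /dvd mount point already exists:

# mount -F hsfs -o ro /dev/dsk/c1t0d0s0 /dvd

On most Solaris 10 systems volume management will mount removable media
automatically under /cdrom, so the manual mount is only needed when that
service is disabled.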
--
On Tue, Jan 27, 2009 at 9:49 AM, iman habibi iman.hab...@gmail.com wrote:
Dear support,
When I connect my external USB DVD-ROM to a SPARC machine that has Solaris 10u6
installed on a ZFS-based file system, it returns this error:
bash-3.00# mount /dev/dsk/c1t0d0s0 /dvd/
Jan 27 11:08:41 global
Hi All,
Can we freeze and thaw a ZFS file system, either from userland (as lockfs
does for UFS), from kernel space, or through an ioctl?
Thanks in advance for your help.
Best regards,
ajit
ajit jain wrote:
Hi All,
Can we freeze and thaw a ZFS file system, either from userland (as lockfs
does for UFS), from kernel space, or through an ioctl?
Can you step back a level and explain what you're trying to achieve?
Freezing UFS is there to get around UFS-specific issues which don't apply to
I am having trouble getting ZFS to behave as I would expect.
I am using the HP driver (cpqary3) for the Smart Array P400 (in an HP ProLiant
DL385 G2) with 10k 2.5" 146GB SAS drives. The drives appear correctly; however,
because the controller does not offer JBOD functionality, I had to configure each
On Tue, Jan 27, 2009 at 7:16 PM, Alex a...@pancentric.com wrote:
I am using the HP driver (cpqary3) for the Smart Array P400 (in an HP ProLiant
DL385 G2) with 10k 2.5" 146GB SAS drives. The drives appear correctly;
however, because the controller does not offer JBOD functionality, I had to
I'm testing the same thing on a DL380 G5 with a P400 controller. I set up
individual RAID 0 logical drives for each disk and ended up with the same
result upon drive removal. I'm looking into whether the hpacucli array
command line utility will let me re-enable a logical drive from its
interface.
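For anyone following along, this is roughly where I'm starting; the slot number
is an assumption for my box, and whether a failed logical drive can actually be
re-enabled from here (rather than destroyed and recreated) is exactly what I
still need to verify against the utility's built-in help:

# hpacucli ctrl all show config             (list controllers, arrays and logical drives)
# hpacucli ctrl slot=0 ld all show status   (state of each logical drive on the slot 0 controller)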
--
I forgot that the pool that's having problems was recreated recently, so it's already
at ZFS version 3. I just did a 'zfs upgrade -a' for another pool, but some of
those filesystems failed to upgrade since they are busy and couldn't be unmounted.
# zfs upgrade -a
cannot unmount '/var/mysql': Device busy
cannot
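For what it's worth, 'zfs upgrade' also takes a single dataset, so the busy ones
can be done later once whatever holds them open is stopped; a hedged example,
where the service name and dataset are just placeholders for my setup:

# svcadm disable mysql
# zfs upgrade tank/var/mysql
# svcadm enable mysql

(zfs upgrade needs to unmount and remount the filesystem, which is why the busy
ones failed during the -a pass.)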
Hi Andrew,
I am writing a filtering device which tracks writes to the
file system. I am doing it for UFS, VxFS and ZFS. Sometimes, to get a
consistent point, I need to freeze the file system, which flushes dirty
blocks to disk and blocks every I/O above it. So, for UFS and
VxFS I got
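In case it helps frame the question, this is the sort of thing I mean on the
UFS side; the mount point and dataset names below are just placeholders, and
for ZFS I have not found a documented equivalent, which is why I ask about
kernel space or an ioctl:

# lockfs -w /export/home     (flush dirty data and write-lock the UFS filesystem)
# lockfs -u /export/home     (release the write lock)
# zfs snapshot tank/home@consistent   (ZFS: a crash-consistent point, but no freeze)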
You need to step back and appreciate that the manner in which you are
presenting disks to Solaris is the problem, not necessarily ZFS.
As your storage system is incapable of JBOD operation, you have
decided to present each disk as a 'simple' RAID0 volume. Whilst this
looks like a
Hi Folks,
call me a learner ;-)
I've got a strange problem with zpool list and the size of my pool:
I created it with 'zpool create raidz2 hdd1 hdd2 hdd3' - each hdd is about 1GB.
zpool list shows me a size of 2.95GB - shouldn't this be only 1GB?
After creating a file of about 500MB, capacity is shown as 50%.
Henri Meddox wrote:
Hi Folks,
call me a learner ;-)
I've got a strange problem with zpool list and the size of my pool:
I created it with 'zpool create raidz2 hdd1 hdd2 hdd3' - each hdd is about 1GB.
zpool list shows me a size of 2.95GB - shouldn't this be only 1GB?
After creating a file of about 500MB
Mika Borner wrote:
You're lucky. Ben just wrote about it :-)
http://www.cuddletech.com/blog/pivot/entry.php?id=1013
Oops, I should have read your message completely :-) Anyway, you can
learn something from it...
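In short (as far as I understand it): zpool list reports the raw capacity of
all devices, including the space that will go to parity, while zfs list reports
what is usable after parity. A rough sketch with a placeholder pool name:

# zpool list tank     (raw size of all three devices, roughly 3GB)
# zfs list tank       (usable space after double parity, roughly 1GB)

With three 1GB devices in raidz2, two thirds of every write goes to parity, so
a ~500MB file taking ~1.5GB of raw space and showing up as about 50% capacity
in zpool list is what you'd expect.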
Given that I have lots of ProLiant equipment, are there any recommended
controllers that would work in this situation? Is this an issue unique to
the Smart Array controllers? If I do choose to use some level of hardware
RAID on the existing Smart Array P400, what's the best way to use it with
ZFS
On 26-Jan-09, at 8:15 PM, Miles Nordin wrote:
js == Jakov Sosic jso...@gmail.com writes:
tt == Toby Thain t...@telegraphics.com.au writes:
js> Yes but that will do the complete resilvering, and I just want
js> to fix the corrupted blocks... :)
tt> What you are asking for is
Hi
I took a look at the archives and I have seen a few threads about using
array block-level snapshots with ZFS, and how we face the old issue
that we used to see with logical volumes: unique IDs (quite
correctly) stopping the same volume from being presented twice to the same
server.
IHAC
Hi Tim,
I took a look at the archives and I have seen a few threads about using
array block-level snapshots with ZFS, and how we face the old issue
that we used to see with logical volumes: unique IDs (quite
correctly) stopping the same volume from being presented twice to the same
server.
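For the record, a rough sketch of the mechanics on the ZFS side; the pool id is
a placeholder, and whether a block-level copy that carries the same pool GUID
as the live pool can be imported alongside it at all is exactly the sticking
point being discussed:

# zpool import                        (scan for importable pools and list each with its numeric id)
# zpool import <pool-id> snapcopy     (import one of them, by id, under a different name)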
ajit jain wrote:
Hi Andrew,
I am writing a filtering device which tracks writes to the
file system. I am doing it for UFS, VxFS and ZFS. Sometimes, to get a
consistent point, I need to freeze the file system, which flushes dirty
blocks to disk and blocks every I/O above it. So, for
Henri Meddox wrote:
Hi Folks,
call me a learner ;-)
I've got a strange problem with zpool list and the size of my pool:
I created it with 'zpool create raidz2 hdd1 hdd2 hdd3' - each hdd is about 1GB.
zpool list shows me a size of 2.95GB - shouldn't this be only 1GB?
After creating a file of about 500MB
Any ideas on this? It looks like a potential bug to me, or there is something
that I'm not seeing.
Thanks again!
On 27 Jan 2009, at 17:59, Richard Elling wrote:
ajit jain wrote:
Hi Andrew,
I am writing a filtering device which tracks writes to the
file system. I am doing it for UFS, VxFS and ZFS. Sometimes, to get a
consistent point, I need to freeze the file system, which flushes dirty
blocks to the
Hello all,
Is there a project to integrate Amanda on OpenSolaris, or a howto
for integrating it with ZFS? Are there any use cases (using the open-source version)?
The Amanda site has a few instructions, but I think here we can create
something more specific to OpenSolaris.
Thanks.
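Something that needs nothing Amanda-specific, which I was thinking about, is to
feed Amanda a snapshot so the client reads a stable image; a rough sketch,
where the dataset and snapshot names are just placeholders:

# zfs snapshot tank/data@amanda
  (point the Amanda disklist entry at /tank/data/.zfs/snapshot/amanda)
# zfs destroy tank/data@amanda     (once the run has finished)

Wrapping the snapshot and destroy steps around the run (via cron, or Amanda's
own script hooks if the version in use has them) would make it automatic, and
that is the kind of thing a howto could document.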
Hi!
I have a system with S10, b101, and b104 installed in the same partition
on disk 1. On disks 1 and 2 in different partitions, I also created ZFS
pools from S10 to be imported by b101 and b104. Pool 1 is mirrored.
Pool 2 is not. About every three builds, I replace the oldest build
with
Do you know that the Seagate 7200.11 drives have firmware bugs?
Go to the Seagate website to check.
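If it helps, you can read the drive model and firmware revision straight from
Solaris without pulling the disk:

# iostat -En

and compare the Revision field reported for each disk against the list of
affected firmware on Seagate's site.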
Jakov Sosic wrote:
Be happy, the data is already fixed. The DEGRADED state is used
when too many errors were found in a short period of time, which
one would use as an indicator of a failing device. However, since the
device has not actually failed, it is of no practical use in your test
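If the device is known to be healthy, the error counters can simply be reset so
the vdev returns to ONLINE, without a replace or a full resilver; a hedged
example with placeholder pool and device names:

# zpool clear tank c1t2d0     (clear the error counts for that device)
# zpool status -v tank        (confirm it is ONLINE again and no errors remain)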
I guess you could try 'zpool import -f'. This is a pretty odd status,
I think. I'm pretty sure raidz1 should survive a single disk failure.
Perhaps a more knowledgeable list member can explain.
On Sat, Jan 24, 2009 at 12:48 PM, Brad Hill b...@thosehills.com wrote:
I've seen reports of a
Can you share the output of 'uname -a' and tell us which disk controller you are using?
On Sun, Jan 25, 2009 at 6:24 PM, Ramesh Mudradi rameshm.ku...@gmail.com wrote:
# zpool list
NAME             SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
jira-app-zpool   272G   330K    272G    0%   ONLINE   -
The
In the previous config I had two RAID0 hardware stripes on an LSI1068
that were then mirrored together with ZFS.
I then got a PERC 6/i card (aka LSI1078) to stick in the box and so I
moved the one stripe over to that (and had to re-create the stripe of
course).
The problem is that once the
comment far below...
Brent Jones wrote:
On Mon, Jan 26, 2009 at 10:40 PM, Brent Jones br...@servuhome.net wrote:
While doing some performance testing on a pair of X4540's running
snv_105, I noticed some odd behavior while using CIFS.
I am copying a 6TB database file (yes, a single file)
On Tue, Jan 27, 2009 at 5:47 PM, Richard Elling
richard.ell...@gmail.com wrote:
comment far below...
Brent Jones wrote:
On Mon, Jan 26, 2009 at 10:40 PM, Brent Jones br...@servuhome.net wrote:
--
Brent Jones
br...@servuhome.net
I found some insight into the behavior I saw at this
just installed s10_u6 with a root pool. i'm blown away. so now i want
to attach my external storage via firewire. i could use usb2 but i prefer
firewire as i won't need an external hub.
what firewire cards are supported for x86? the HCL doesn't list any that
i could find. i searched for any
On Tue, 27 Jan 2009, Frank Cusack wrote:
what firewire cards are supported for x86? the HCL doesn't list any that
i could find. i searched for any of the terms 'firewire', '1394', 'ohci',
'uhci', 'ehci'.
The Sun Ultra-40 (recently discontinued) comes with dual 400Mbit
FireWire ports. See
On Tue, 27 Jan 2009 21:01:55 -0600 (CST)
Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
On Tue, 27 Jan 2009, Frank Cusack wrote:
what firewire cards are supported for x86? the HCL doesn't list
any that i could find. i searched for any of the terms 'firewire',
'1394', 'ohci',
I'm not an authority, but on my 'vanilla' filer, using the same
controller chipset as the thumper, I've been in really good shape
since moving to zfs boot in 10/08 and doing 'zpool upgrade' and 'zfs
upgrade' to all my mirrors (3 3-way). I'd been having similar
troubles to yours in the past.
My
Thanks for your reply,
While the savecore is working its way up the chain to (hopefully) Sun,
the vendor asked us not to use it, so we moved x4500-02 to use x4500-04
and x4500-05. But perhaps moving to Sol 10 10/08 on x4500-02 when fixed
is the way to go.
The savecore had the usual info,
What does 'zpool status -xv' show?
On Tue, Jan 27, 2009 at 8:01 AM, Ben Miller mil...@eecis.udel.edu wrote:
I forgot the pool that's having problems was recreated recently so it's
already at zfs version 3. I just did a 'zfs upgrade -a' for another pool,
but some of those filesystems failed
r...@opensolaris:~# zpool import -f tank
internal error: Bad exchange descriptor
Abort (core dumped)
Hoping someone has seen that before... the Google is seriously letting me down
on that one.
I guess you could try 'zpool import -f'. This is a pretty odd status,
I think. I'm pretty sure
On Tue, Jan 27, 2009 at 9:28 PM, Jorgen Lundman lund...@gmo.jp wrote:
Thanks for your reply,
While the savecore is working its way up the chain to (hopefully) Sun,
the vendor asked us not to use it, so we moved x4500-02 to use x4500-04
and x4500-05. But perhaps moving to Sol 10 10/08 on
Frank Cusack wrote:
just installed s10_u6 with a root pool. i'm blown away. so now i want
to attach my external storage via firewire.
I was able to use this cheap thing with good initial results:
http://www.newegg.com/Product/Product.aspx?Item=N82E16815124002
However, I ran into a frequent
This is outside the scope of my knowledge/experience. Maybe there is
now a core file you can examine? That might help you at least see
what's going on?
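For example, a hedged sketch assuming the core file landed in the current
directory:

# mdb core
> ::status     (shows the command that dumped core and the signal that killed it)
> $C           (prints the stack trace at the point of the abort)
> ::quit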
On Tue, Jan 27, 2009 at 10:32 PM, Brad Hill b...@thosehills.com wrote:
r...@opensolaris:~# zpool import -f tank
internal error: Bad exchange
thanks for all the feedback. i guess i'll stick with usb2.
I assume you've changed the failmode to continue already?
http://prefetch.net/blog/index.php/2008/03/01/configuring-zfs-to-gracefully-deal-with-failures/
This appears to be new to 10/08, so that is another vote to upgrade.
Also interesting that the default is wait, since it almost
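For reference, checking and changing it on a live pool looks like this (pool
name is a placeholder):

# zpool get failmode tank
# zpool set failmode=continue tank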
I do, thank you. The disk that went out sounds like it had a head crash or some
such - loud clicking shortly after spin-up then it spins down and gives me
nothing. BIOS doesn't even detect it properly to do a firmware update.
Do you know 7200.11 has firmware bugs?
Go to seagate website to
Just a thought, but have you physically disconnected the bad disk? It's not
unheard of for a bad disk to cause problems with others.
Failing that, it's the corrupted data bit that's worrying me; it sounds like
you may have other corruption on the pool (always a risk with single-parity
RAID),
I was wondering: if you have a ZFS filesystem that mounts in a subdirectory
of another ZFS filesystem, is there any problem with ZFS finding
them in the wrong order and then failing to mount correctly?
Say you have pool1/data which mounts on /data and pool2/foo which
mounts on /data/subdir/foo; what if
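To sketch what I mean with those names (as far as I understand, 'zfs mount -a'
mounts shallower mountpoints before deeper ones, but I'd like to confirm that
this holds reliably across two pools):

# zfs set mountpoint=/data pool1/data
# zfs set mountpoint=/data/subdir/foo pool2/foo
# zfs mount -a     (should mount /data before /data/subdir/foo)
# zfs mount        (list what actually got mounted, and where)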