Andreas Koppenhoefer wrote:
Maybe this is the same as I've described in article
http://www.opensolaris.org/jive/thread.jspa?threadID=81613&tstart=0
I've written a quick&dirty shell script to reproduce a race condition which
forces Update 3&4 to panic and leaves Update 5 with hanging zfs commands.
- Andreas
Hi Marlanne,
Excellent question and thank you for asking...
We have a set of instructions for creating root pool snapshots and
root pool recovery, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery
The zfs send and recv options used in this
Hi, everybody. I started a resilvering with a 150 GB ZFS pool 16 hours
ago. The resilvering is not progressing at all:
[...]
[EMAIL PROTECTED] /]# zpool status
Try to run zpool status as a non-root user and see if the resilver
shows any progress. Sometimes, zpool status as root seems to
Thomas, for long latency fat links, it should be quite
beneficial to set the socket buffer on the receive side
(instead of having users tune tcp_recv_hiwat).
throughput of a tcp connection is gated by
receive socket buffer / round trip time.
Could that be Ross' problem ?
-r
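Roch's rule of thumb above can be sanity-checked numerically; a small sketch (the buffer sizes and RTT below are illustrative, not from the thread):

```python
# Maximum TCP throughput of one connection is bounded by the receive window:
#   throughput <= receive_socket_buffer / round_trip_time

def max_tcp_throughput(recv_buffer_bytes, rtt_seconds):
    """Upper bound on throughput, in bytes per second, for one connection."""
    return recv_buffer_bytes / rtt_seconds

# A 48 KB receive buffer on a 100 ms "long fat" link:
print(max_tcp_throughput(48 * 1024, 0.100))        # 491520.0 B/s, i.e. < 0.5 MB/s
# Raising the buffer to 4 MB lifts the ceiling on the same link:
print(max_tcp_throughput(4 * 1024 * 1024, 0.100))  # 41943040.0 B/s, ~40 MB/s
```

This is why setting the buffer on the receive side (rather than tuning tcp_recv_hiwat globally) matters on high-latency links.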
Ross Smith
On Tue, Nov 11, 2008 at 12:52 PM, Adam Leventhal [EMAIL PROTECTED] wrote:
On Nov 11, 2008, at 9:38 AM, Bryan Cantrill wrote:
Just to throw some ice-cold water on this:
1. It's highly unlikely that we will ever support the x4500 -- only the
x4540 is a real possibility.
And to warm
What date format is that in? We who are used to OpenSolaris are internationalized
to write dates in big endian style like YYYY-MM-DD, but I suppose this is in mixed
up endian style MM/DD/YYYY, or? ;-)
As you might have figured out it was in the format MM/DD/YYYY.
or 20081112 - YYYYMMDD
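For the record, the three formats mentioned, sketched with Python's strftime (using the date from the example above):

```python
from datetime import date

d = date(2008, 11, 12)
print(d.strftime("%Y-%m-%d"))  # big-endian ISO style:    2008-11-12
print(d.strftime("%m/%d/%Y"))  # mixed-endian US style:   11/12/2008
print(d.strftime("%Y%m%d"))    # compact ISO basic style: 20081112
```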
Roch schrieb:
Thomas, for long latency fat links, it should be quite
beneficial to set the socket buffer on the receive side
(instead of having users tune tcp_recv_hiwat).
throughput of a tcp connection is gated by
receive socket buffer / round trip time.
Could that be Ross' problem ?
I had the same problem described by kometen with our Areca ARC-1680 controller
on opensolaris 2008.05. We were using the controller in JBOD mode and allowing
zpool to use entire disks.
Setting the drives in pass-through mode on the Areca controller manager solved
the issue.
Also worthy
On Thu 13/11/08 07:57 , Mark Horstman [EMAIL PROTECTED] sent:
I beg to differ.
With what? Some context would help.
# ludelete beA
ERROR: cannot open 'pool00/zones/global/home': dataset does not exist
ERROR: cannot mount mount point /.alt.tmp.b-QY.mnt/home device
I upgraded my machine to snv_101a_rc1 and now that machine is broken.
I described my problem here
http://www.opensolaris.org/jive/thread.jspa?threadID=81928&tstart=0
and here
http://www.opensolaris.org/jive/thread.jspa?threadID=80625&tstart=0#304281
The problem seems to be some low level zfs
On Thu 13/11/08 08:38 , Richard Elling [EMAIL PROTECTED] sent:
Ian Collins wrote:
Richard Elling wrote:
Ian Collins wrote:
I've been replicating a number of
filesystems from a Solaris 10 update 6 system to an update 5 one. All
of the filesystems receive fine except for one,
Ian Collins:
[EMAIL PROTECTED] # zpool upgrade
This system is currently running ZFS pool version 10.
Update 6 introduced a new feature from Nevada: ZFS *filesystem* versions (as
opposed to pool versions):
[EMAIL PROTECTED]:~# zpool upgrade
This
Ummm, could you name a specific patch number that would apply to a stock
install?
Or suggest a way to search?
I poked at the 10_PatchReport from Nov 1st which has a handful but none of
the ones I picked out were needed on a 10u6 OEM install. I kept seeing patches
that applied to
On 12 Nov 2008, at 20:15, Vincent Fox wrote:
Just wondering if anyone knows of a patch released for 10u6?
Yes. Many patches have been released since u6 was frozen.
Sami
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Does zfs send maintain an internal state or lock?
I attempted to restart a send that was interrupted when the sending system
rebooted and the send hangs. Here's the last few lines of the truss output:
# zfs send -i tue live/[EMAIL PROTECTED] | ssh staging zfs receive -F -v
tank/backup/[EMAIL
Vincent Fox wrote:
Ummm, could you name a specific patch number that would apply to a stock
install?
Or suggest a way to search?
(Google: pca solaris)
On a freshly liveupgraded box:
# pca -l missing
Using /var/tmp/patchdiag.xref from Nov/11/08
Host: hostname (SunOS
[Default] On Wed, 12 Nov 2008 14:22:40 +, Gordon
Johnston [EMAIL PROTECTED] wrote:
David Magda wrote:
Took a bit of digging, but the VMware image is at:
http://www.sun.com/storage/disk_systems/unified_storage/resources.jsp
This is really great, I've had a play with it in VMWare Player
Do you have any info on this upgrade path?
I can't seem to find anything about this...
I would also like to throw in my $0.02 worth that I would like to see the
software offered to existing sun X4540 (or upgraded X4500) customers.
Chris G.
--
This message posted from opensolaris.org
I wrote:
Vincent Fox wrote:
Ummm, could you name a specific patch number that would apply to a stock
install?
Or suggest a way to search?
# pca -l missing
or, of course, smpatch analyze
Ian Collins wrote:
Richard Elling wrote:
Ian Collins wrote:
I've been replicating a number of filesystems from a Solaris 10
update 6 system to an update 5 one. All of the filesystems receive
fine except for one, which fails with
cannot receive: invalid backup stream
Jonathan Loran wrote:
David Evans wrote:
For anyone looking for a cheap home ZFS server...
Dell is having a sale on their PowerEdge SC440 for $199 (regular $598) through
11/12/2008.
http://www.dell.com/content/products/productdetails.aspx/pedge_sc440?c=us&cs=04&l=en&s=bsd
It's got Dual Core
I don't think the Pentium E2180 has the lanes to use ECC RAM.
look at the north bridge, not the cpu. The PowerEdge SC440
uses the intel 3000 MCH, which supports up to 8GB unbuffered ECC
or non-ECC DDR2 667/533 SDRAM. It's been replaced with
the intel 32x0, which uses DDR2 800/667MHz unbuffered ECC /
Will probably have a 10_recommended u6 patch bundle sometime in December...
For now, to get to u6 (and ZFS) you must do LU (ie u5 to u6)
Just FYI
On Wed, Nov 12, 2008 at 12:48 PM, Johan Hartzenberg [EMAIL PROTECTED] wrote:
On Wed, Nov 12, 2008 at 8:15 PM, Vincent Fox [EMAIL PROTECTED] wrote:
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Chris Greer
Sent: Wednesday, November 12, 2008 3:20 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] OpenStorage GUI
Do you have any info on this upgrade path?
I can't seem to find anything
David Evans wrote:
For anyone looking for a cheap home ZFS server...
Dell is having a sale on their PowerEdge SC440 for $199 (regular $598)
through 11/12/2008.
http://www.dell.com/content/products/productdetails.aspx/pedge_sc440?c=us&cs=04&l=en&s=bsd
It's got a Dual Core Intel® Pentium® E2180,
On Wed, Nov 12, 2008 at 8:15 PM, Vincent Fox [EMAIL PROTECTED] wrote:
Just wondering if anyone knows of a patch released for 10u6?
I realize this is OT but want to test my new ability with ZFS root to do
lucreate, patch the alternate BE, and luactivate it.
Send me an explorer and I will run
The word Module makes it sound really easy :) Has anyone ever swapped
this module out, and if so - was it painful?
Since our 4500's went from the pallet to the offsite datacenter I never
did really get a chance to look closely at it. I found a picture of one
and it looks like you could take out
There is no inbox/field upgrade available for the x4500 - x4540. The
upgrades mentioned are in the form of discounted box swaps.
Sorry about that. It would be nice though.
Original Message
Subject: Re: [zfs-discuss] OpenStorage GUI
From: Andy Lubel [EMAIL PROTECTED]
To:
On Thu 13/11/08 09:10 , Ian Collins [EMAIL PROTECTED] sent:
Does zfs send maintain an internal state or lock?
I attempted to restart a send that was interrupted when the sending system
rebooted and the send hangs.
It looks like the problem is at the receiving end, I can send to a local
Never mind, I ran through the cluster_install for 10_Recommended and found
patch 126868-02 is new so I used that one.
Just wondering if anyone knows of a patch released for 10u6?
I realize this is OT but want to test my new ability with ZFS root to do
lucreate, patch the alternate BE, and luactivate it.
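The cycle described can be sketched as Live Upgrade commands; a rough outline only (the BE name and patch directory are placeholders, and 137137-09 is just a patch id mentioned elsewhere in this digest):

```shell
#!/bin/sh
# Sketch of the lucreate / patch-alternate-BE / luactivate cycle on a ZFS root.
# "be-patched" and /var/tmp/patches are made-up names -- adjust for your system.

lucreate -n be-patched                     # clone the current boot environment
luupgrade -t -n be-patched -s /var/tmp/patches 137137-09
                                           # apply patch(es) to the inactive BE
luactivate be-patched                      # mark it as the next boot default
init 6                                     # reboot via init/shutdown, not reboot,
                                           # so luactivate's boot hooks run
```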
I am having a problem with one ZFS pool; the raid controller was moved around.
The OS is Solaris 10 8/07 X86.
All the drives are found by the OS, but ZFS can't find two hot spare drives for
some reason. And the pool is unavailable. Here is what zpool status -x shows:
pool: scrpool
On 13 Nov 2008, at 00:20, Simon Gao wrote:
I am having a problem with one ZFS pool; the raid controller was
moved around. The OS is Solaris 10 8/07 X86.
All the drives are found by the OS, but ZFS can't find two hot spare
drives for some reason. And the pool is unavailable. Here is
I beg to differ.
# cat /etc/release
Solaris 10 10/08 s10s_u6wos_07b SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 27 October 2008
# lustatus
Boot
I figured it out.
Exporting and re-importing fixed the problem.
I have a system running OpenSolaris 2008.05 upgraded to snv_90. Trying to
boot after a hard power cut was failing with a panic. I booted off the
latest 2008.11 livecd (snv 101 I believe) and managed to eventually import
the pool, successfully scrub it, and export again. Since then however,
hi,
are there any RFEs or plans to create a 'continuous' replication mode for ZFS?
i envisage it working something like this: a 'zfs send' on the sending host
monitors the pool/filesystem for changes, and immediately sends them to the
receiving host,
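Short of a true event-driven mode, the usual approximation today is a tight snapshot/send loop. A rough sketch under assumed names (the filesystem, host, and snapshot prefix below are made up):

```shell
#!/bin/sh
# Poll-based "near-continuous" replication sketch -- not an RFE implementation.
# FS and REMOTE are placeholders; adjust for your site.
FS=tank/data
REMOTE=backuphost

prev=""
while :; do
    cur=repl-$(date -u +%Y%m%dT%H%M%S)
    zfs snapshot "$FS@$cur"
    if [ -z "$prev" ]; then
        # first pass: send the full stream
        zfs send "$FS@$cur" | ssh "$REMOTE" zfs receive -F "$FS"
    else
        # later passes: incremental stream since the previous snapshot
        zfs send -i "$FS@$prev" "$FS@$cur" | ssh "$REMOTE" zfs receive "$FS"
        zfs destroy "$FS@$prev"
    fi
    prev=$cur
    sleep 10    # shorten for lower lag; this is still polling, not event-driven
done
```

The receiving filesystem should be left unmounted or read-only between receives, otherwise the incremental receive will fail without -F.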
Hi,
in preparation to try zfs boot on sparc I installed all recent patches
incl. feature patches coming from s10s_u3wos_10 and after reboot
finally 137137-09 (still having everything on UFS).
Now it doesn't boot anymore:
###
Sun Fire V240, No Keyboard
Copyright
On Thu 13/11/08 12:04 , River Tarnell [EMAIL PROTECTED] sent:
are there any RFEs or plans to create a 'continuous' replication mode for
ZFS? i envisage it working something like this: a 'zfs send' on the sending
host monitors the pool/filesystem for changes, and immediately sends them to
Hi,
Can a ZFS snapshot be performed on a zvol of size 100GB?
I have no problem with the zvol snapshot at size of 1GB or 10GB.
Thanks,
Paul
Are you asking whether, if the zvol size is 100GB, you can do a snapshot of it, or whether
the snapshot can grow to over 100GB? I have an x4500 with nightly snapshots being done
on a 7 terabyte filesystem (each nightly snapshot is about 20GB).
I don't believe there is a functional limit to the size of the snapshot
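Either way, snapshotting a large zvol is the same one-liner as for a small one, and the snapshot starts near zero size, growing only as blocks diverge. A sketch with made-up pool and volume names:

```shell
#!/bin/sh
# Create a 100 GB zvol and snapshot it (names are illustrative).
zfs create -V 100G tank/vol100g
zfs snapshot tank/vol100g@before-upgrade

# The snapshot consumes almost no space until the volume is rewritten:
zfs list -t snapshot -o name,used,referenced
```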
Afaik, the drives are pretty much the same; it's the chipset that
changed, which also meant a change of cpu and memory.
-Andy
From: Tim [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 12, 2008 7:24 PM
To: Andy Lubel
Cc: Chris Greer;
[EMAIL PROTECTED] said:
# ludelete beA
ERROR: cannot open 'pool00/zones/global/home': dataset does not exist
ERROR: cannot mount mount point /.alt.tmp.b-QY.mnt/home device
pool00/zones/global/home
ERROR: failed to mount file system pool00/zones/global/home on
/.alt.tmp.b-QY.mnt/home
As an aside, replication has been implemented as part of the new Storage
7000 family. Here's a link to a blog discussing using the 7000
Simulator running in two separate VMs and replicating w/ each other:
http://blogs.sun.com/pgdh/entry/fun_with_replicating_the_sun
I'm not sure of the
Brent Jones:
It sounds like you need either a true clustering file system or to draw back
your plans to see changes read-only instantly on the secondary node.
well, the idea is to have two separate copies of the data, for backup / DR.
being able to
Daryl Doami:
As an aside, replication has been implemented as part of the new Storage
7000 family. Here's a link to a blog discussing using the 7000
Simulator running in two separate VMs and replicating w/ each other:
that's interesting,
Matthew Ahrens wrote:
Andreas Koppenhoefer wrote:
Hello,
occasionally some of our solaris 10 servers panic in zfs code while doing
zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh remote zfs receive
poolname.
The race condition(s) get triggered by a broken data transmission or
Ya. That's what I ended up doing. Re-creating my UFS soft-partition boot meta
devices and all the zfs filesystems (even though they were all empty). Then I
was able to ludelete beA. There ought to be an option to '-f' or '--force' the
ludelete so you can ignore the errors and just delete the
Running zdb on my broken system, one of the things I see is the hostname.
I'm not sure why zfs needs to know about the hostname of the system it's
on, but...
The thing I did that started all my problems was I changed the hostname
of my system. Do I need to do something with zfs to tell it the
On Wed, Nov 12, 2008 at 5:58 PM, River Tarnell
[EMAIL PROTECTED] wrote:
Daryl Doami:
As an aside, replication has been implemented as part of the new Storage
7000 family. Here's a link to a blog discussing using the 7000
Simulator running in
Spoke with customer, David Radden; they have applied update 06. Now they
are having an issue with ZFS.
Customer is attempting to clone a zone. Normally, when source and target
are in the same pool, ZFS uses a snapshot.
Now it's using CPIO to copy instead of a snapshot.
Any suggestions?