On Thu, 27 Aug 2009, Paul B. Henson wrote:
However, I went to create a new boot environment to install the patches
into, and so far that's been running for about an hour and a half :(,
which was not expected or planned for.
[...]
I don't think I'm going to make my downtime window :(, and will
Thanks, Trevor. I understand the RFE/CR distinction. What I don't
understand is how this is not a bug that should be fixed in all solaris
versions.
The related ID 6612830 says it was fixed in Sol 10 U6, which was a while
ago. I am using OpenSolaris, so I would really appreciate confirmation
Randall Badilla wrote:
Hi all:
First, is it possible to modify the boot zpool rpool after OS
installation? I installed the OS on the whole 72GB hard disk. It is
mirrored, so if I want to shrink the rpool, for example resize it to a
36GB slice, can that be done?
As far as I remember, on UFS/SVM I was
Well, so far lucreate took 3.5 hours, lumount took 1.5 hours, applying the
patches took all of 10 minutes, luumount took about 20 minutes, and
luactivate has been running for about 45 minutes. I'm assuming it will
probably take at least the 1.5 hours of the lumount (particularly
considering it
Randall Badilla wrote:
Hi all:
First, is it possible to modify the boot zpool rpool after OS
installation? I installed the OS on the whole 72GB hard disk. It is
mirrored, so if I want to shrink the rpool, for example resize it to a
36GB slice, can that be done?
As far as I remember, on UFS/SVM I
Trevor Pretty wrote:
*Release Fixed*: solaris_10u6 (s10u6_01) (*Bug ID:* 2160894,
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=2160894)
Although, as nothing in life is guaranteed, it looks like another bug,
2160894, has been identified, and that's not yet on bugs.opensolaris.org.
I've reported this in the past, and have seen a related thread but no
resolution so far and I'm still seeing it. Any help really would be very
much appreciated.
We have three thumpers (X4500):
* thumper0 - Running snv_76
* thumper1 - Running snv_121
* thumper2 - Running Solaris 10 update 7
Each
On Thu, Aug 27, 2009 at 10:59:16PM -0700, Paul B. Henson wrote:
On Thu, 27 Aug 2009, Paul B. Henson wrote:
However, I went to create a new boot environment to install the patches
into, and so far that's been running for about an hour and a half :(,
which was not expected or planned for.
Hello,
I have a quick question I can't seem to find a good answer for. I have an
S10U7 server that is running ZFS on a couple of iSCSI shares on one of our
SANs; we use a routed network to connect to our iSCSI shares, with the route
statements in Solaris. During normal operations it works fine
Roman, are you saying you want to install OpenSolaris
on your old servers, or make the servers look like an
external JBOD array, that another server will then
connect to?
No, JBOD is JBOD, just an external enclosure, disks+internal/external connectors
--
Roman
--
This message posted from
This non-RAID SAS controller is $199 and is based on
the LSI SAS 1068.
http://accessories.us.dell.com/sna/products/Networking_Communication/productdetail.aspx?c=us&l=en&s=bsd&cs=04&sku=310-8285&~lt=popup&~ck=TopSellers
Why Dell? Isn't it cheaper to go with LSI itself?
What kind of chassis do
On Fri, 28 Aug 2009, Stephen Stogner wrote:
I have a quick question I can't seem to find a good answer for. I
have an S10U7 server that is running ZFS on a couple of iSCSI shares
on one of our SANs; we use a routed network to connect to our iSCSI
shares, with the route statements in Solaris.
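For context, the route statements mentioned here would typically be persistent static routes toward the iSCSI subnet. A minimal sketch, assuming placeholder addresses (the real network and gateway are not given in the message):

  route -p add -net 192.168.10.0 -netmask 255.255.255.0 192.168.1.1

The -p flag records the route in /etc/inet/static_routes so it is restored automatically at boot.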
I'm using find to run a directory scan to see which files have changed since
the last snapshot was taken. Something like:
zfs snapshot tank/filesystem@snap1
... time passes ...
find /tank/filesystem -newer /tank/filesystem/.zfs/snap1 -print
Initially I assumed the time data on the .zfs/snap1
Hi Grant,
I've had no more luck researching this, mostly because the error message
can mean different things in different scenarios.
I did try to reproduce it and I can't.
I noticed you are booting using boot -s, which I think means the system
will boot from the default boot disk, not the
On Fri, Aug 28, 2009 at 3:51 PM, Chris Bakero...@lhc.me.uk wrote:
I'm using find to run a directory scan to see which files have changed
since the last snapshot was taken. Something like:
zfs snapshot tank/filesystem@snap1
... time passes ...
find /tank/filesystem -newer
Try a: zfs get -pH -o value creation snapshot
-- MikeE
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Baker
Sent: Friday, August 28, 2009 10:52 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss]
Hi
What does boot -L show you?
Enda
On 08/28/09 15:59, cindy.swearin...@sun.com wrote:
Hi Grant,
I've had no more luck researching this, mostly because the error message
can mean different things in different scenarios.
I did try to reproduce it and I can't.
I noticed you are booting
Peter, Mike,
Thank you very much, zfs get -p is exactly what I need (and why I didn't see
it despite having been through the man page dozens of times, I cannot fathom).
Much appreciated.
Chris
--
This message posted from opensolaris.org
Hi Enda,
This is what I get when I do the boot -L:
{1} ok boot -L
Sun Fire V240, No Keyboard
Copyright 1998-2003 Sun Microsystems, Inc. All rights reserved.
OpenBoot 4.13.2, 4096 MB memory installed, Serial #61311259.
Ethernet address 0:3:ba:a7:89:1b, Host ID: 83a7891b.
Rebooting with
I was using OpenSolaris 2009.06 on an IDE drive, and decided to reinstall onto
a mirror (smaller SSDs).
My data pool was a separate pool and before reinstalling onto the new SSDs I
exported the data pool.
After rebooting and installing OpenSolaris 2009.06 onto the first SSD I tried
to import
On Aug 28, 2009, at 12:15 AM, Dave wrote:
Thanks, Trevor. I understand the RFE/CR distinction. What I don't
understand is how this is not a bug that should be fixed in all
solaris versions.
In a former life, I worked at Sun to identify things like this that
affect availability
and
Hello all,
I am running an OpenSolaris server running 2009.06. I installed COMSTAR and
enabled it. I have an ESXi 4.0 server connecting to COMSTAR via iSCSI on its
own switch (there are two ESXi servers, and both do this regardless of
whether one is on or off). The error I see is on ESXi
Hi,
I have a ZFS-root Solaris 10 system running on my home server. I created an iSCSI
target in the following way:
# zfs create -V 1g rpool2/iscsivol
Turned on the shareiscsi property
# zfs set shareiscsi=on rpool2/iscsivol
# zfs list rpool2/iscsivol
NAME USED AVAIL
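Once shareiscsi is on, the target is served by the iscsitgt SMF service, and a quick sanity check looks roughly like this (a sketch, assuming the volume created above):

  # Verify the target daemon is online and the target was created:
  svcs iscsitgt
  iscsitadm list target -v

The IQN reported there is what the initiator side then discovers.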
Richard Elling wrote:
On Aug 28, 2009, at 12:15 AM, Dave wrote:
Thanks, Trevor. I understand the RFE/CR distinction. What I don't
understand is how this is not a bug that should be fixed in all
solaris versions.
In a former life, I worked at Sun to identify things like this that
affect
On Fri, 28 Aug 2009, Dave wrote:
Thanks, Trevor. I understand the RFE/CR distinction. What I don't
understand is how this is not a bug that should be fixed in all solaris
versions.
Just to get the terminology right: CR means Change Request, and can
refer to Defects (bugs) or RFEs. Defects
Well, what I ended up doing was reinstalling Solaris. Fortunately this is a
test box for now. I've repeatedly pulled both the root drive and the mirrored
drive. The system behaved as normal. The trick that worked for me was to
reinstall, but select both drives for zfs. Originally I
Some more info that might help:
I have the old IDE boot drive which I can reconnect if I get no help with this
problem. I just hope it will allow me to import the data pool, as this is not
guaranteed.
Way back, I was using SXCE and the pool was upgraded to the latest ZFS version
at the time.
Andrew Robert Nicols wrote:
I've reported this in the past, and have seen a related thread but no
resolution so far and I'm still seeing it. Any help really would be very
much appreciated.
We have three thumpers (X4500):
* thumper0 - Running snv_76
* thumper1 - Running snv_121
* thumper2 -
Looks like my last IDE-based boot environment may have been pointing to the
/dev package repository, so that might explain how the data pool version got
ahead of the official 2009.06 one.
Will try to fix the problem by pointing the SSD-based BE towards the dev repo
and see if I get success.
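In case it saves someone else the hunt, repointing a 2009.06 image at the dev repository is usually just (URL and publisher name as they were published at the time):

  pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
  pkg image-update

after which a BE built from the dev packages should understand the newer pool version.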
Alan,
Super find. Thanks, I thought I was just going crazy until I rolled back to
110 and the errors disappeared. When you do work out a fix, please ping me to
let me know when I can try an upgrade again.
Gary
--
This message posted from opensolaris.org
On 28/08/2009, at 3:23 AM, Adam Leventhal wrote:
There appears to be a bug in the RAID-Z code that can generate
spurious checksum errors. I'm looking into it now and hope to have
it fixed in build 123 or 124. Apologies for the inconvenience.
Are the errors being generated likely to cause
Hi,
I've read a few articles about the lack of 'simple' raidz pool expansion
capability in ZFS. I am interested in having a go at developing this
functionality. Is anyone working on this at the moment?
I'll explain what I am proposing. As mentioned in many forums, the concept is
really
On Thu, Aug 27, 2009 at 04:05:15PM -0700, Grant Lowe wrote:
I've got a 240z with Solaris 10 Update 7, all the latest patches from
Sunsolve. I've installed a boot drive with ZFS. I mirrored the drive with
zpool. I installed the boot block. The system had been working just fine.
But for
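The procedure being described, attaching a second disk to a SPARC ZFS root pool and installing the boot block, is roughly the following sketch (device names are placeholders, not taken from this system):

  zpool attach rpool c0t0d0s0 c0t1d0s0
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

and the resilver should be allowed to finish (zpool status) before either disk is pulled.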
I've a ZFS pool named 'ppool' with two vdevs (files), file1 and file2, in it.
zdb -l /pak/file1 output:
version=16
name='ppool'
state=0
txg=3080
pool_guid=14408718082181993222
hostid=8884850
hostname='solaris-b119-44'
top_guid=4867536591080553814
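For anyone wanting to reproduce a file-backed pool like this, a minimal sketch (sizes and the /pak paths are illustrative, matching the names above):

  mkfile 128m /pak/file1 /pak/file2
  zpool create ppool /pak/file1 /pak/file2
  zdb -l /pak/file1

zdb -l dumps the four label copies on the vdev, which is where the pool_guid and top_guid values quoted above come from.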
On Fri, 28 Aug 2009 casper@sun.com wrote:
luactivate has been running for about 45 minutes. I'm assuming it will
probably take at least the 1.5 hours of the lumount (particularly
considering it appears to be running a lumount process under the hood) if
not the 3.5 hours of lucreate.
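For readers trying to reconstruct what is being timed here, it is essentially the standard Live Upgrade patch cycle, roughly (the BE name, mount point, and patch directory are placeholders):

  lucreate -n patched-be
  lumount patched-be /a
  patchadd -R /a /var/tmp/patches/<patch-id>
  luumount patched-be
  luactivate patched-be
  init 6

with the long waits above corresponding to the lucreate, lumount, luumount, and luactivate steps.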
On Fri, 28 Aug 2009, Jens Elkner wrote:
More info:
http://iws.cs.uni-magdeburg.de/~elkner/luc/lutrouble.html#luslow
**sweet**!!
This is *exactly* the functionality I was looking for. Thanks much.
Do any Sun people have any idea if Sun has any similar functionality planned
for