any idea what could cause my system to panic? I get my system rebooted daily at
various times. Very strange, but it's pointing to zfs. I have U6 with all the latest
patches.
Jan 12 05:47:12 chrysek unix: [ID 836849 kern.notice]
Jan 12 05:47:12 chrysek ^Mpanic[cpu1]/thread=30002c8d4e0:
Jan 12
I was wondering, if I have a vdev set up and I present it to another box via
iscsi, is there any way to grow that vdev?
for example when I do this:
zfs create -V 100G mypool6/v1
zfs set shareiscsi=on mypool6/v1
can I then expand the 100G volume to, let's say, 150G?
I do not care about the file system on it
at this moment.
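For what it's worth, a minimal sketch of growing a zvol in place, assuming mypool6 has enough free space:

```shell
# Grow the zvol by raising its volsize property (shrinking is not supported).
zfs set volsize=150G mypool6/v1

# Confirm the new size.
zfs get volsize mypool6/v1
```

The iSCSI initiator then has to rescan the target to see the new capacity, and whatever sits on top (filesystem or pool) must be grown separately.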
On Mon, 24 Nov 2008, Krzys wrote:
somehow I have issue replacing my disk.
[20:09:29] [EMAIL PROTECTED]: /root zpool status mypooladas
pool: mypooladas
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach
I was wondering if this ever made to zfs as a fix for bad labels?
On Wed, 7 May 2008, Jeff Bonwick wrote:
Yes, I think that would be useful. Something like 'zpool revive'
or 'zpool undead'. It would not be completely general-purpose --
in a pool with multiple mirror devices, it could only
wrote:
Hi
try and get the stack trace from the core
ie mdb core.vold.24978
::status
$C
$r
also run the same 3 mdb commands on the cpio core dump.
also if you could extract some data from the truss log, ie a few hundred
lines before the first SIGBUS
Enda
On 11/06/08 01:25, Krzys
, 6 Nov 2008, Enda O'Connor wrote:
Hi
Weird, almost like some kind of memory corruption.
Could I see the upgrade logs, that got you to u6
ie
/var/sadm/system/logs/upgrade_log
for the u6 env.
What kind of upgrade did you do, liveupgrade, text based etc?
Enda
On 11/06/08 15:41, Krzys
When the property copies is set to a value greater than 1, how does it work? Will
it store a second copy of the data on a different disk, or does it store it on the same
disk? Also, when this setting is changed at some point on a file system, will it
make copies of existing data or just new data that's
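As I understand it (hedged, not authoritative): copies greater than 1 affects only blocks written after the property is set, and ZFS spreads the copies across vdevs when it can, but on a single-disk pool both copies land on that one disk. A sketch with a hypothetical dataset name:

```shell
# Request two copies of every newly written block on this file system.
zfs set copies=2 mypool/fs

# Existing data keeps one copy until it is rewritten; copying a file
# in place rewrites its blocks and picks up the new copies setting.
zfs get copies mypool/fs
```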
Currently I have the following:
# zpool status
pool: rootpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rootpool ONLINE 0 0 0
c1t1d0s0 ONLINE 0 0 0
errors: No known data errors
#
I
READ WRITE CKSUM
testroot ONLINE 0 0 0
mirror ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0
c1t0d0s1 ONLINE 0 0 0
errors: No known data errors
#
On Thu, 6 Nov 2008, Krzys wrote:
Currently I
What am I doing wrong? I have a sparc V210 and I am having difficulty with boot
-L; I was under the impression that boot -L would give me options as to which zfs
mirror I could boot my root disk from?
Anyway, even aside from that, I am seeing some strange behavior... After
trying boot -L I am unabl
Great, thank you.
Chris
On Fri, 7 Nov 2008, Nathan Kroenert wrote:
A quick google shows that it's not so much about the mirror, but the BE...
http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/
Might help?
Nathan.
On 7/11/08 02:39 PM, Krzys wrote:
What am I doing wrong? I have
, and then started it up and it
did boot without any problems to original disk, I just had to do hard reset on
the box for some reason.
On Wed, 5 Nov 2008, Tomas Ögren wrote:
On 05 November, 2008 - Krzys sent me these 18K bytes:
I am not sure what I did wrong but I did follow up all the steps to get my
Sorry its Solaris 10 U6, not Nevada. I just upgraded to U6 and was hoping I
could take advantage of the zfs boot mirroring.
On Wed, 5 Nov 2008, Enda O'Connor wrote:
On 11/05/08 13:02, Krzys wrote:
I am not sure what I did wrong but I did follow up all the steps to get my
system moved from
On Wed, 5 Nov 2008, Enda O'Connor wrote:
On 11/05/08 13:02, Krzys wrote:
I am not sure what I did wrong but I did follow up all the steps to get my
system moved from ufs to zfs and now I am unable to boot it... can anyone
suggest what I could do to fix it?
here are all my steps:
[00
.
Enda
On 11/05/08 13:46, Krzys wrote:
On Wed, 5 Nov 2008, Enda O'Connor wrote:
On 11/05/08 13:02, Krzys wrote:
I am not sure what I did wrong but I did follow up all the steps to get
my system moved from ufs to zfs and now I am unable to boot it... can
anyone suggest what I could do
, Enda O'Connor wrote:
Hi Krzys
Also some info on the actual system
ie what was it upgraded to u6 from and how.
and an idea of how the filesystems are laid out, ie is usr separate from /
and so on ( maybe a df -k ). Don't appear to have any zones installed, just
to confirm.
Enda
On 11/05/08
an idea of
where things went wrong.
Enda
On 11/05/08 15:38, Krzys wrote:
I did upgrade my U5 to U6 from DVD, went through the upgrade process.
my file system is setup as follow:
[10:11:54] [EMAIL PROTECTED]: /root df -h | egrep -v
platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr
what makes me wonder is why I am not even able to see anything under boot -L ?
and it is just not seeing this disk as a boot device? so strange.
On Wed, 5 Nov 2008, Krzys wrote:
This is so bizarre, I am unable to get past this problem. I thought I had not enough
space on my hard drive (new one) so
I did upgrade my Solaris and wanted to move from ufs to zfs. I did read about it
a little but I am not sure about all the steps...
Anyway I do understand that I cannot use the whole disk as a zpool, so I cannot use
c1t1d0 but I have to use c1t1d0s0 instead; is that correct?
also all documents
compression is not supported for rootpool?
# zpool create rootpool c1t1d0s0
# zfs set compression=gzip-9 rootpool
# lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
ERROR: ZFS pool rootpool does not support boot environments
#
why? are there any plans to have compression on
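If memory serves (treat this as an assumption, not a definitive answer), the restriction at the time was that the boot path could read lzjb- but not gzip-compressed datasets, so lucreate rejects a gzip-compressed root pool; plain compression=on was accepted:

```shell
# Same steps, but with default (lzjb) compression instead of gzip-9.
zpool create rootpool c1t1d0s0
zfs set compression=on rootpool   # lzjb; gzip-* is rejected for root pools
lucreate -c ufsBE -n zfsBE -p rootpool
```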
I was hoping that in U5 at least ZFS version 5 would be included but it was
not,
do you think that will be in U6?
On Fri, 16 May 2008, Robin Guo wrote:
Hi, Paul
Most of the features and bugfixes so far, up to Nevada build 87 (or 88?), will be
backported into s10u6.
It's about the same (I mean from
I just upgraded to Sol 10 U5 and I was hoping that gzip compression would be
there, but after the upgrade it only shows v4
[10:05:36] [EMAIL PROTECTED]: /export/home zpool upgrade
This system is currently running ZFS version 4.
Do you know when Version 5 will be included in Solaris 10? are
Because this system was in production I had to recover fairly quickly, so I was
unable to play much more with it; we had to destroy it and recreate a new pool and
then recover the data from tapes.
It's a mystery as to why in the middle of the night it rebooted; we could not
figure this out, and why the pool
I have a problem on one of my systems with zfs. I used to have zpool created
with 3 luns on SAN. I did not have to put any raid or anything on it since it
was already using raid on the SAN. Anyway the server rebooted and I cannot see my
pools.
When I do try to import it, it fails. I am using EMC
It would be nice to be able to mount a zfs file system by its mountpoint also, and
not just by the pool... For example I have the following:
mypool5 257G 199G 24.5K /mypool5
mypool5/d5 257G 199G 257G /d/d5
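There is no direct mount-by-path, but the dataset owning a mountpoint can be looked up and then mounted by name; a sketch (the awk match is illustrative):

```shell
# Find the dataset whose mountpoint property is /d/d5, then mount it by name.
zfs list -H -o name,mountpoint | awk '$2 == "/d/d5" {print $1}'
zfs mount mypool5/d5
```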
hello folks, I am running Solaris 10 U3 and I have a small problem that I don't
know how to fix...
I had a pool of two drives:
bash-3.00# zpool status
pool: mypool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE
Hello all, sorry if somebody already asked this or not. I was playing today with
iSCSI and I was able to create a zpool and then via iSCSI I can see it on two
other hosts. I was curious if I could use zfs to have it shared on those two
hosts but apparently I was unable to do it for obvious
Hello everyone, I am slowly running out of space in my zpool.. so I wanted to
replace my zpool with a different zpool..
my current zpool is
zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mypool 278G 263G 14.7G 94% ONLINE -
to have.
Regards,
Chris
On Fri, 1 Jun 2007, Will Murnane wrote:
On 5/31/07, Krzys [EMAIL PROTECTED] wrote:
so I do run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small
Try zpool attach mypool
#
On Fri, 1 Jun 2007, Will Murnane wrote:
On 5/31/07, Krzys [EMAIL PROTECTED] wrote:
so I do run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small
Try zpool attach mypool emcpower0a; see
http
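The attach/detach route suggested above can be sketched like this (a workaround sketch, not a guaranteed fix; attach still requires the new device to be large enough to hold the data):

```shell
# Attach the SAN device as an extra mirror side of c1t2d0.
zpool attach mypool c1t2d0 emcpower0a

# Watch the resilver; once it completes, drop the old internal disk.
zpool status mypool
zpool detach mypool c1t2d0
```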
going
to try it now.
Chris
On Fri, 1 Jun 2007, Will Murnane wrote:
On 5/31/07, Krzys [EMAIL PROTECTED] wrote:
so I do run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small
Try zpool attach
that device, the old internal
disk, with this one and let's see how that will work.
thanks so much for the help.
Chris
On Fri, 1 Jun 2007, Will Murnane wrote:
On 6/1/07, Krzys [EMAIL PROTECTED] wrote:
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mypool
On Fri, 1 Jun 2007, Will Murnane wrote:
On 6/1/07, Krzys [EMAIL PROTECTED] wrote:
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mypool 68G 53.1G 14.9G 78% ONLINE -
mypool2 123M 83.5K 123M 0
, Krzys [EMAIL PROTECTED] wrote:
Hello folks, I have a question. Currently I have zfs pool (mirror) on two
internal disks... I wanted to connect that server to SAN, then add more
storage
to this pool (double the space) then start using it. Then what I wanted to
do is
just take out the internal disks
run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small
Any idea what I am doing wrong? Why does it think that emcpower0a is too small?
Regards,
Chris
On Thu, 31 May 2007, Richard Elling wrote:
Krzys
Hello folks, I have a question. Currently I have zfs pool (mirror) on two
internal disks... I wanted to connect that server to SAN, then add more storage
to this pool (double the space) then start using it. Then what I wanted to do is
just take out the internal disks out of that pool and use
Perfect, i will try to play with that...
Regards,
Chris
On Tue, 29 May 2007, Cyril Plisko wrote:
On 5/29/07, Krzys [EMAIL PROTECTED] wrote:
Hello folks, I have a question. Currently I have zfs pool (mirror) on two
internal disks... I wanted to connect that server to SAN, then add more
Hey, that's nothing, I had one zfs file system, then I cloned it, so I
thought that I had two separate file systems. then I was making snaps
of both of them. Then later on I decided I did not need original file
system with its snaps. So I did recursively remove it, all of a sudden
I got a
It does not seem to work unless I am doing it incorrectly.
Chris
On Tue, 17 Apr 2007, Nicholas Lee wrote:
On 4/17/07, Krzys [EMAIL PROTECTED] wrote:
and when I did try to run that last command I got the following error:
[16:26:00] [EMAIL PROTECTED]: /root zfs send -i mypool/[EMAIL PROTECTED
This option is in the current Solaris community release, build 48.
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6f1?a=view#gdsup
Otherwise, you'll have to wait until an upcoming Solaris 10 release.
Cindy
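A hedged sketch of the incremental send/receive being attempted, with hypothetical snapshot and host names; the option referenced above is presumably receive -F, which rolls the target back to its latest snapshot before applying the increment:

```shell
zfs snapshot mypool/fs@snap1
# ... changes accumulate ...
zfs snapshot mypool/fs@snap2

# Send only the delta between the two snapshots to another host.
zfs send -i mypool/fs@snap1 mypool/fs@snap2 | ssh otherhost zfs receive -F mypool/fs
```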
Krzys wrote:
[18:19:00] [EMAIL PROTECTED]: /root zfs send -i mypool/[EMAIL PROTECTED
Ah, perfect then... Thank you so much for letting me know...
Regards,
Chris
On Tue, 17 Apr 2007, Robert Milkowski wrote:
Hello Krzys,
Sunday, April 15, 2007, 4:53:43 AM, you wrote:
K Strange thing, I did try to do zfs send/receive using zfs.
K On the from host I did the following:
K
Hello folks, I have a strange and unusual request...
I have two 300gig drives mirrored:
[11:33:22] [EMAIL PROTECTED]: /d/d2 zpool status
pool: mypool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
Hello folks, I have a small problem, originally I had this setup:
[16:39:40] @zglobix1: /root zpool status -x
pool: mypool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action:
:
Hello Krzys,
Wednesday, March 28, 2007, 10:58:40 PM, you wrote:
K Hello folks, I have a small problem, originally I had this setup:
K [16:39:40] @zglobix1: /root zpool status -x
K pool: mypool
K state: DEGRADED
K status: One or more devices could not be opened. Sufficient replicas exist
for
K
I have Solaris 10 U2 with all the latest patches that started to crash recently
on a regular basis... so I started to dig and see what is causing it, and here is
what I found out:
panic[cpu1]/thread=2a1009a7cc0: really out of space
02a1009a6d70 zfs:zio_write_allocate_gang_members+33c
I guess I need to upgrade this system then... thanks for info...
Chris
On Thu, 1 Feb 2007, James C. McPherson wrote:
Krzys wrote:
I have Solaris 10 U2 with all the latest patches that started to crash
recently on a regular basis... so I started to dig and see what is causing it
and here
.
-- richard
Krzys wrote:
Ok, so here is an update.
I did restart my system; I powered it off and powered it on. Here is a screen
capture of my boot. I certainly do have some hard drive issues and will
need to take a look at them... But I got my disk back visible to the
system and zfs is doing resilvering
ok, two weeks ago I did notice one of my disks in the zpool got problems.
I was getting Corrupt label; wrong magic number messages, then when I looked
in format it did not see that disk... (last disk). I had that setup running for
a few months now and all of a sudden the last disk failed. So I ordered
.
On 12/5/06, Krzys [EMAIL PROTECTED] wrote:
ok, two weeks ago I did notice one of my disks in the zpool got problems.
I was getting Corrupt label; wrong magic number messages, then when I
looked
in format it did not see that disk... (last disk). I had that setup running
for
a few months now and all
, 5 Dec 2006, Torrey McMahon wrote:
Krzys wrote:
Thanks, ah another weird thing is that when I run format on that drive I
get a coredump :(
Run pstack /path/to/core and send the output.
On Tue, 5 Dec 2006, Al Hopper wrote:
On Tue, 5 Dec 2006, Krzys wrote:
Thanks, ah another weird thing is that when I run format on that drive I get
a coredump :(
... snip
Try zeroing out the disk label with something like:
dd if=/dev/zero of=/dev/rdsk/c?t?d?p0 bs=1024k count
6 unassigned wm 0 0 0
8 reserved wm 286733071 8.00MB 286749454
format> q
On Tue, 5 Dec 2006, Krzys wrote:
ok, two weeks ago I did notice one of my disk in zpool got problems.
I was getting
I am having no luck replacing my drive as well. A few days ago I replaced my drive
and it's completely messed up now.
pool: mypool2
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait
Great, thank you, it certainly helped. I did not want to lose data on that disk,
therefore wanted to be safe rather than sorry.
thanks for help.
Chris
On Thu, 30 Nov 2006, Bart Smaalders wrote:
Krzys wrote:
my drive did go bad on me, how do I replace it? I am running solaris 10 U2
this:
# zpool replace mypool2 c3t6d0 c3t7d0
Otherwise, zpool will error...
cs
Bart Smaalders wrote:
Krzys wrote:
my drive did go bad on me, how do I replace it? I am running solaris 10
U2 (by the way, I thought U3 would be out in November, will it be out
soon? does anyone know?
[11:35
Awesome, thanks for your help, will there be any way to convert raidz to
raidz2?
Thanks again for help/
Chris
On Mon, 23 Oct 2006, Robert Milkowski wrote:
Hello Krzys,
Sunday, October 22, 2006, 8:42:06 PM, you wrote:
K I have solaris 10 U2 and I have raidz partition setup on 5 disks, I
I have solaris 10 U2 and I have a raidz partition set up on 5 disks. I just added a
new disk and was wondering, can I add another disk to the raidz? I was able to add
it to the pool but I do not think it added it to the raidz.
[13:38:41] /root zpool status -v mypool2
pool: mypool2
state: ONLINE
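For context (my understanding, hedged, with hypothetical device names): a raidz vdev cannot be widened one disk at a time; zpool add creates a new top-level vdev striped next to the existing raidz, which is likely what happened here:

```shell
# Adds c3t8d0 as a separate single-disk vdev, NOT as a 6th raidz member;
# zpool warns about mismatched replication unless forced with -f.
zpool add mypool2 c3t8d0

# Keeping redundancy would instead mean adding a whole second raidz group:
zpool add mypool2 raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0
```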
yeah disks need to be identical, but why do you need to do prtvtoc and fmthard
to duplicate the disk label (before the dd)? I thought that dd would take care
of all of that... whenever I used dd I used it on slice 2 and I never had to do
prtvtoc and fmthard... Just make sure disks are identical
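The label-copy step under discussion can be sketched as follows (device names hypothetical); the point is that dd of slice 2 only carries the label when s2 actually spans the whole disk, so copying the VTOC explicitly is the safer route:

```shell
# Duplicate the source disk's VTOC onto the target before dd'ing the data.
prtvtoc /dev/rdsk/c3t6d0s2 | fmthard -s - /dev/rdsk/c3t7d0s2
```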
Great, thanks :)
Chris
On Mon, 2 Oct 2006, Mark Shellenbaum wrote:
Krzys wrote:
Hello all,
Is there any way to mount zfs file system from vfstab?
Thanks,
Chris
Set the mountpoint property for the file system to legacy and add the
necessary info to the vfstab
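A sketch of that legacy route, with hypothetical names:

```shell
# Hand mount control back to vfstab for this file system.
zfs set mountpoint=legacy mypool/home

# Then add a line like this to /etc/vfstab
# (device, fsck device, mount point, type, fsck pass, mount-at-boot, options):
# mypool/home  -  /export/home  zfs  -  yes  -
```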
in the man page it does say zpool df home does work; when I type it, it does not
work and I get the following error:
zpool df mypool
unrecognized command 'df'
usage: zpool command args ...
where 'command' is one of the following:
create [-fn] [-R root] [-m mountpoint] pool vdev ...
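For the record (hedged: 'zpool df' seems to survive only in stale docs from early builds), the shipped equivalents would be:

```shell
# Pool-level capacity summary.
zpool list mypool

# Per-dataset space usage, recursively.
zfs list -r mypool
```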
That is bad, such a big time difference... 14 hrs vs less than 2 hrs... did you
have the same hardware setup? I did not follow the thread...
Chris
On Sun, 17 Sep 2006, Gino Ruopolo wrote:
Other test, same setup.
SOLARIS10:
zpool/a filesystem containing over 10 million subdirs
Hello everyone, I just wanted to play with zfs just a bit before I start using
it at my workplace on servers so I did set it up on my Solaris 10 U2 box.
I used to have all my disks mounted as UFS and everything was fine. I had my
/etc/vfstab as such:
#
fd -
and continue with operations rather than have no space and bring my system to a halt.
:)
On Fri, 5 May 2006, Darren J Moffat wrote:
Krzys wrote:
I did not think of it this way and it is a very valid point, but I still
think that most likely you would have a backup already on tape if need be,
and having
I really do like the way NetApp is handling snaps :) that would be an excellent
thing in ZFS :)
On Fri, 5 May 2006, Marion Hakanson wrote:
Interesting discussion. I've often been impressed at how NetApp-like
the overall ZFS feature-set is (implies that I like NetApp's). Is it
verboten to