Re: [zfs-discuss] Question: created non-global zone with ZFS underneath the root filesystem

2006-09-27 Thread Arlina Goce-Capiral


Thank you, Al, for your quick response.
I will forward this info to the customer and let him know.

Arlina-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Question: created non-global zone with ZFS underneath the root filesystem

2006-09-27 Thread Arlina Goce-Capiral

All,


Customer would like to confirm whether this is supported or not.

=
I've created non-global zones with ZFS underneath the root filesystem
for a new SAP environment that's approaching production next week. Then
I read that it's not supported, but many people on the discussion boards
and in Google searches are doing it.
Would like to know the truth, and if I need to back it out, how.

=
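
For reference, a hedged sketch of what checking and backing this out
might look like (the zone name "sapzone" and its paths are made up, and
"zoneadm move" is only available on Solaris 10 updates that support it):

# zonecfg -z sapzone info zonepath
# df -n /zones/sapzone

The df -n output shows whether the zonepath sits on zfs or ufs. If it
does need to move onto UFS, something along these lines:

# zoneadm -z sapzone halt
# zoneadm -z sapzone move /export/zones-ufs/sapzone
# zoneadm -z sapzone boot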

Thank you in advance,
Arlina

NOTE: Please email me directly as I'm not on this alias.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] [Fwd: RESEND: [Fwd: Question: after installing SunMC 3.6.1 ability to view the ZFS gui has disappeared]]

2006-09-25 Thread Arlina Goce-Capiral

All,

Has anyone seen this?
I haven't received any information regarding it. This is my third
attempt, and I would appreciate any info you can send me.

TIA,
Arlina

NOTE: Please email me directly as I'm not on this alias.
--- Begin Message ---


I'm resending this since I haven't received anything from anybody.
I would appreciate any suggestions.

Thanks,
Arlina-
--- Begin Message ---



Customer opened a case about the ZFS GUI disappearing from under the
Storage menu after loading SunMC 3.6.1.

More information from the customer's email below:


Yes, I just installed Solaris 10 6/06 on an Ultra 25 for testing; we'll
be using ZFS on an E2900 very soon.

I was evaluating ZFS and looking into SMC as well for zone management,
and loaded SMC onto the Ultra 25.

When I started up the below link for ZFS, the Java Web Console started
up in its place.

The Storage menu should have ZFS underneath it, but it wasn't there,
only Solaris Container Manager.

As for the SMC installation, I took the defaults except for SNMP.


The disks show up via the command line; however, the problem is this:
the ZFS GUI management tool and the Sun Management Center GUI both use
port 6789 as the Java Web Console.

After I installed SunMC, the ability to view the ZFS GUI disappeared.
I thought you could do all of the above from the web port 6789, but
SunMC seems to have 'overwritten' the ZFS management GUI.

Any ideas?
=
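
For reference, a hedged starting point for checking the console on port
6789 (these commands are assumptions on my part, and the exact tools
vary by Java Web Console version):

# /usr/sbin/smcwebserver status
# netstat -an | grep 6789
# /usr/sbin/smcwebserver restart

The first command shows whether the console itself is running, the
second shows what is actually listening on 6789, and a restart re-reads
the registered applications. The console's registration tool (smreg on
older consoles, wcadmin on newer ones) should be able to list whether
the ZFS application is still registered alongside SunMC.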

TIA,
Arlina

NOTE: Please email me directly as I'm not on this alias.


--- End Message ---
--- End Message ---
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] [Fwd: Question: after installing SunMC 3.6.1 ability to view the ZFS gui has disappeared]

2006-09-21 Thread Arlina Goce-Capiral


I'm forwarding this inquiry to this alias as well, just in case somebody
can suggest or provide any information.


Thank you in advance,
Arlina-
--- Begin Message ---



Customer opened a case about the ZFS GUI disappearing from under the
Storage menu after loading SunMC 3.6.1.

More information from the customer's email below:


Yes, I just installed Solaris 10 6/06 on an Ultra 25 for testing; we'll
be using ZFS on an E2900 very soon.

I was evaluating ZFS and looking into SMC as well for zone management,
and loaded SMC onto the Ultra 25.

When I started up the below link for ZFS, the Java Web Console started
up in its place.

The Storage menu should have ZFS underneath it, but it wasn't there,
only Solaris Container Manager.

As for the SMC installation, I took the defaults except for SNMP.


The disks show up via the command line; however, the problem is this:
the ZFS GUI management tool and the Sun Management Center GUI both use
port 6789 as the Java Web Console.

After I installed SunMC, the ability to view the ZFS GUI disappeared.
I thought you could do all of the above from the web port 6789, but
SunMC seems to have 'overwritten' the ZFS management GUI.

Any ideas?
=

TIA,
Arlina

NOTE: Please email me directly as I'm not on this alias.


--- End Message ---
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help: Getting error "zfs:bad checksum (read on off...)

2006-09-01 Thread Arlina Goce-Capiral

Hello Matthew,

Thanks for your very helpful information.
While waiting for your reply, I checked the ZFS Admin Guide and it was
in there, on page 150. :-)


Have a good weekend.

Thanks again.
Arlina-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Need Help: Getting error "zfs:bad checksum (read on off...)

2006-09-01 Thread Arlina Goce-Capiral

All,

A customer with a 15K running Solaris 10 6/06 and a 3510 array attached
has had a disk array failure; ZFS was mirroring. The system panics and
can't boot in either single-user or multi-user mode.

The panic error shows:

IMPACT:panic[cpu512]/thread=2a1022a7cc0: ZFS: bad checksum (read on 
 off 0: zio 3003ca93900 [L0 SPA space map] 1000L/a00P 
DVA[0]=<1:11128e9400:a00> DVA[1]=<2:111204e400:a00> fletcher4 lzjb BE 
contiguous birth=794148 fill=1 
cksum=71ac8b094c:9a49ad9f14f9:7b1154b0bb95af:49201a336bbfbf6a): error 50


02a1022a7740 zfs:zio_done+284 (3003ca93900, 0, a8, 70785be0, 0, 
3001847d240)

 %l0-3: 030019a1fcc0 03003e8646c0 0032 0032
 %l4-7: 0002 0001  0032
02a1022a7940 zfs:zio_vdev_io_assess+178 (3003ca93900, 8000, 10, 0, 
0, 10)

 %l0-3: 0001 0002  0032
 %l4-7:    030017a34000
02a1022a7a00 genunix:taskq_thread+1a4 (30019a2b670, 30019a2b618, 
50001, d548643cc3, 2a1022a7aca, 2a1022a7ac8)

 %l0-3: 0001 030019a2b640 030019a2b648 030019a2b64a
 %l4-7: 030019b0c9e0 0002  030019a2b638


The customer's main concern right now is to make the system bootable,
but it seems that can't be done since the bad disks are part of the ZFS
filesystems. Is there a way to disable or clear out the bad ZFS
filesystem so the system can be booted?
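
One hedged possibility (this is an assumption on my part, not a
confirmed fix): if the panic happens while boot replays the pool list
in /etc/zfs/zpool.cache, setting that file aside from a CD or net boot
may let the box come up without touching the damaged pool:

# boot cdrom -s
# mount /dev/dsk/c0t0d0s0 /a
# mv /a/etc/zfs/zpool.cache /a/etc/zfs/zpool.cache.bad
# umount /a
# reboot

The root slice c0t0d0s0 is just an example. Once the system is up,
"zpool import" should list the pool so it can be inspected or imported
by hand.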


Also, I searched on the "ZFS: bad checksum" error and found some bugs.


Any assistance on this issue is greatly appreciated. The customer's
domain is still down at this point.


TIA,
Arlina

NOTE: Please email me directly as I'm not on this alias.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Question: Looking for ways on copying zfs filesys between disks

2006-09-01 Thread Arlina Goce-Capiral


Thanks, Matthew, for your quick response, and as always for all of your
assistance with my ZFS inquiries. I really appreciate it.


Have a good weekend.
Arlina-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Question: Looking for ways on copying zfs filesys between disks

2006-09-01 Thread Arlina Goce-Capiral

All,

A system (SunFire T2000) was migrated to Solaris 10 6/06, and the old
lucopy methods no longer work for copying ZFS filesystems between disks
for backups. The customer is looking for a way to do a "ufsdump"-type
backup between c0t0d0s4 and c0t2d0s4.

I checked the ZFS Admin Guide and saw the ZFS backup and restore
commands, "zfs send" and "zfs receive"; more information can be found
on page 96 of the guide. Is this what the customer is looking for?
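
If so, a hedged sketch of what that might look like (the pool and
filesystem names here are made up):

# zfs snapshot pool/fs@backup1
# zfs send pool/fs@backup1 | zfs receive backup/fs

and incrementally after that:

# zfs snapshot pool/fs@backup2
# zfs send -i pool/fs@backup1 pool/fs@backup2 | zfs receive backup/fs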

Any suggestions?

Thank you in advance,
Arlina

NOTE: Please email me directly as i'm not on this alias.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help: didn't create the pool as raidz but stripes

2006-08-24 Thread Arlina Goce-Capiral

Boyd and all,

Just an update on what happened and what the customer found out
regarding the issue.


===
It does appear that the disk is filled up by 140G.

I think I now know what happened. I created a raidz pool and did not
write any data to it before I pulled out a disk, so I believe the ZFS
filesystem had not initialized yet, and that is why it was unusable.
Can you confirm this?

But when I created a ZFS filesystem and wrote data to it, it could then
lose a disk and just be degraded. I tested this part by removing the
disk partition in format.

I will try this same test to re-duplicate my issue, but can you confirm
for me whether my ZFS filesystem as a raidz requires me to write data
to it first before it's really ready?

[EMAIL PROTECTED] df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c1t0d0s0    4136995 2918711 1176915    72%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 5563996     616 5563380     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
/usr/lib/libc/libc_hwcap2.so.1
                     4136995 2918711 1176915    72%    /lib/libc.so.1
fd                         0       0       0     0%    /dev/fd
/dev/dsk/c1t0d0s5    4136995   78182 4017444     2%    /var
/dev/dsk/c1t0d0s7    4136995    4126 4091500     1%    /tmp
swap                 5563400      20 5563380     1%    /var/run
/dev/dsk/c1t0d0s6    4136995   38674 4056952     1%    /opt
pool               210567315 210566773       0   100%    /pool

/
[EMAIL PROTECTED] cd /pool

/pool
[EMAIL PROTECTED] ls -la
total 421133452
drwxr-xr-x   2 root sys3 Aug 23 17:19 .
drwxr-xr-x  25 root root 512 Aug 23 20:34 ..
-rw---   1 root root 171798691840 Aug 23 17:43 nullfile

/pool
[EMAIL PROTECTED]


[EMAIL PROTECTED] zpool status
 pool: pool
state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas
        exist for the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
  see: http://www.sun.com/msg/ZFS-8000-D3
scrub: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   pool        DEGRADED     0     0     0
     raidz     DEGRADED     0     0     0
       c1t2d0  ONLINE       0     0     0
       c1t3d0  ONLINE       0     0     0
       c1t4d0  UNAVAIL  15.12 10.27     0  cannot open

errors: No known data errors
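
For what it's worth, a hedged way to re-run the same pull-a-disk test
without physically removing hardware (device names taken from the
customer's config; offlining a device is only an approximation of a
pulled disk):

# zpool offline pool c1t4d0
# zpool status pool
# zpool online pool c1t4d0
# zpool scrub pool

The status output should show the pool DEGRADED with c1t4d0 offline,
and the scrub verifies the pool after the device comes back.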


AND SECOND EMAIL:

I'm unable to re-duplicate my failed ZFS pool using raidz. As for the
disk size bugs (6288488 and 2140116), I have a few questions. The
developer said that it would be fixed in u3.
When is u3 supposed to be released? U2 just came out. Also, can or will
=

Any idea when Solaris 10 update 3 (11/06) will be released? And would
this be fixed in Solaris 10 update 2 (6/06)?


Thanks to all of you.

Arlina-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help: didn't create the pool as raidz but stripes

2006-08-23 Thread Arlina Goce-Capiral

Hello James,

Thanks for the response.

Yes, I got the bug ID and forwarded it to the customer. But the
customer said that he can create a file as large as the stripe of the
three disks, and if he pulls a disk the whole zpool fails; there are no
degraded pools, it just fails.

Any idea on this?

Thank you,
Arlina-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Need Help: didn't create the pool as raidz but stripes

2006-08-23 Thread Arlina Goce-Capiral

I need help on this and don't know what to tell the customer.

The system is a V40z running Solaris 10 x86, and the customer is trying
to create a raidz pool from three disks. After creating the pool and
looking at the disk space and configuration, he thinks it is not a
raidz pool but rather stripes. This is exactly what he told me, so I'm
not sure whether it makes sense to all of you.


Any assistance and help is greatly appreciated.

THank you in advance,
Arlina

NOTE: Please email me directly as I'm not on this alias.

Below are more informations.
=
Command used:
# zpool create pool raidz c1t2d0 c1t3d0 c1t4d0

From the format command:
   0. c1t0d0
      /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci17c2,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c1t2d0
      /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci17c2,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c1t3d0
      /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci17c2,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   3. c1t4d0
      /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci17c2,[EMAIL PROTECTED]/[EMAIL PROTECTED],0

The pool status:
# zpool status
 pool: pool
state: ONLINE
scrub: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   pool        ONLINE       0     0     0
     raidz     ONLINE       0     0     0
       c1t2d0  ONLINE       0     0     0
       c1t3d0  ONLINE       0     0     0
       c1t4d0  ONLINE       0     0     0

errors: No known data errors


The df -k output of the newly created raidz pool:
# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
pool               210567168      49 210567033     1%    /pool

I can create a file that is as large as the stripe of the 3 disks, so
the information reported is correct. Also, if I pull a disk out, the
whole zpool fails! There are no degraded pools, it just fails.
===
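
For what it's worth, a hedged way to check whether this is raidz with
parity-inclusive space accounting rather than a true stripe (my
assumption is that the disk size bugs make df report raw space, parity
included):

# zpool list pool
# zfs list pool

zpool list reports raw space across all three disks, parity included;
on releases with the accounting fix, zfs list should report the smaller
usable figure. If the two differ by roughly one disk's worth, the pool
really is raidz and only the space reporting makes it look like a
stripe.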
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss