Re: [zfs-discuss] ZFS Usability issue : improve means of finding ZFS-physdevice(s) mapping

2006-10-13 Thread Noel Dellofano
I don't understand why you can't use 'zpool status'. That will show  
the pools and the physical devices in each, and is also a pretty basic  
command. Examples are given in the sysadmin docs and manpages for  
ZFS on the opensolaris ZFS community page.
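
For reference, a 'zpool status' run looks something like this (illustrative
output; the pool and device names here are made up):

  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0

errors: No known data errors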


I realize it's not quite the same command as in UFS, and it's easier  
when things stay the same, but it's a different filesystem, so you  
need some different commands that make more sense for how it's  
structured. Hopefully the zpool and zfs commands will soon become  
just as 'intuitive' for people :)


Noel

(P.S. Not to mention: am I the only person who thinks that 'zpool  
status' (in human speak, not geek) makes more sense than 'df'? wtf )


On Oct 13, 2006, at 1:55 PM, Bruce Chapman wrote:


ZFS is supposed to be much easier to use than UFS.

For creating a filesystem, I agree it is, as I could do that easily  
without a man page.


However, I found it rather surprising that I could not see the  
physical device(s) a zfs filesystem was attached to using either  
the df command (which shows physical device mount points for all  
other file systems) or even the zfs command.
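
For example (illustrative output; the device and dataset names here are
made up), df shows a physical device for a UFS filesystem but only the
dataset name for ZFS:

$ df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    8260839 5066247 3111984    62%    /
tank/home          286949376      26 286949350     1%    /tank/home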


Even after turning to the zpool command, it took a few minutes to  
stumble across the only two subcommands that will give you that  
information, as it is not exactly intuitive.


Ideally, I'd think df should show the physical device connections of  
zfs pools, though I can imagine there may be some circumstances  
where that is not desirable, so perhaps a new argument would be  
needed to control whether that detail is shown.


If this is not done, I think zfs list -v  (-v is not currently an  
option to the zfs list command) should show the physical devices in  
use by the pools.


In any case, I think it is clear that zpool list should have a -v  
argument added to show the device associations, so that people  
don't have to stumble around blindly until they run into the zpool  
iostat -v or zpool status -v commands to finally accomplish this  
rather simple task.
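
For anyone else hunting for the same mapping, here is roughly what one of
those two commands prints (illustrative output; pool and device names are
made up):

$ zpool iostat -v tank
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         100G   178G      0      0      0      0
  mirror     100G   178G      0      0      0      0
    c0t0d0      -      -      0      0      0      0
    c0t1d0      -      -      0      0      0      0
----------  -----  -----  -----  -----  -----  -----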


Any comments on the above?  I'm using S10 06/06, so perhaps I'll  
get lucky and someone has already added one or all of the above  
improvements. :)


Cheers,

   Bruce


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] incorrect link for dmu_tx.c in ZFS Source Code tour

2006-09-27 Thread Noel Dellofano

Fixed.  Thank you for the heads up on that.

Noel
On Sep 27, 2006, at 1:04 AM, Victor Latushkin wrote:


Hi All,

I've noticed that the link to dmu_txg.c from the ZFS Source Code tour  
is broken. It looks like dmu_txg.c should be changed to dmu_tx.c.

Please take care of this.

- Victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] panic during recv

2006-09-26 Thread Noel Dellofano

I can also reproduce this on my test machines and have opened CR
6475506 ("panic in dmu_recvbackup due to NULL pointer dereference")
to track this problem.  This is most likely due to recent changes  
made in the snapshot code for -F.  I'm looking into it...


thanks for testing!
Noel

On Sep 26, 2006, at 6:21 AM, Mark Phalan wrote:


Hi,

I'm using b48 on two machines. When I issued the following, I got a
panic on the recv'ing machine:

$ zfs send -i data/[EMAIL PROTECTED] data/[EMAIL PROTECTED] | ssh machine2 zfs recv -F data

Doing the following caused no problems:

$ zfs send -i data/[EMAIL PROTECTED] data/[EMAIL PROTECTED] | ssh machine2 zfs recv data/[EMAIL PROTECTED]


Is this a known issue? I reproduced it twice. I have core files.

from the log:

Sep 26 14:52:21 dhcp-eprg06-19-134 savecore: [ID 570001 auth.error]
reboot after panic: BAD TRAP: type=e (#pf Page fault) rp=d0965c34 addr=4 occurred in module zfs due to a NULL pointer dereference


from the core:

echo '$C' | mdb 0

d0072ddc dmu_recvbackup+0x85b(d0562400, d05629d0, d0562828, 1, ea5ff9c0, 138)
d0072e18 zfs_ioc_recvbackup+0x4c()
d0072e40 zfsdev_ioctl+0xfc(2d8, 5a1b, 8046c0c, 13, d5478840, d0072f78)
d0072e6c cdev_ioctl+0x2e(2d8, 5a1b, 8046c0c, 13, d5478840, d0072f78)
d0072e94 spec_ioctl+0x65(d256f9c0, 5a1b, 8046c0c, 13, d5478840, d0072f78)
d0072ed4 fop_ioctl+0x27(d256f9c0, 5a1b, 8046c0c, 13, d5478840, d0072f78)
d0072f84 ioctl+0x151()
d0072fac sys_sysenter+0x100()

-Mark


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




[zfs-discuss] new ZFS links page

2006-08-29 Thread Noel Dellofano

Hey everybody,
I'd like to announce the addition of a ZFS Links page on the  
Opensolaris ZFS community page.  If you have links to ZFS-related  
articles that you find useful or that should be shared with the  
community as a whole, please let us know and we'll add them to the page.


http://www.opensolaris.org/os/community/zfs/links/

thanks,
Noel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and very large directories

2006-08-24 Thread Noel Dellofano
ZFS actually uses the ZAP to handle directory lookups.  The ZAP is  
not a btree but a specialized hash table, where a hash for each  
directory entry is generated from that entry's name.  Hence you  
won't be doing any sort of linear search through the entire directory  
for a file: a hash is generated from the file name, and that hash is  
looked up in the ZAP.  This is nice and speedy, even  
with 100,000 files in a directory.
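
To make the idea concrete, here's a minimal C sketch of hash-based
directory lookup.  To be clear, this is not the actual ZAP code (the real
ZAP uses a salted 64-bit hash, handles collisions within its buckets, and
grows its tables on disk); it just shows why the lookup cost doesn't scale
with directory size:

#include <stdint.h>
#include <string.h>

#define TABLE_SIZE 1024               /* toy size; real tables grow */

typedef struct dirent_hash {
        const char *name;             /* entry name, e.g. "foo05" */
        uint64_t object;              /* object number it maps to */
        struct dirent_hash *next;     /* collision chain */
} dirent_hash_t;

/* Toy string hash; the real ZAP uses a salted 64-bit hash. */
static uint64_t
name_hash(const char *name)
{
        uint64_t h = 0;
        while (*name != '\0')
                h = h * 31 + (uint8_t)*name++;
        return (h);
}

/*
 * Hash the name, index the table, walk the (short) collision chain.
 * No linear scan over all 100,000 entries ever happens.
 */
static dirent_hash_t *
dir_lookup(dirent_hash_t *table[], const char *name)
{
        dirent_hash_t *de = table[name_hash(name) % TABLE_SIZE];

        while (de != NULL && strcmp(de->name, name) != 0)
                de = de->next;
        return (de);
}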




Noel

On Aug 24, 2006, at 8:02 AM, Patrick Narkinsky wrote:

Due to legacy constraints, I have a rather complicated system that  
is currently using Sun QFS (actually the SAM portion of it.) For a  
lot of reasons, I'd like to look at moving to ZFS, but would like a  
sanity check to make sure ZFS is suitable to this application.


First of all, we are NOT using the cluster capabilities of SAMFS.   
Instead, we're using it as a way of dealing with one directory that  
contains approximately 100,000 entries.


The question is this: I know from the specs that ZFS can handle a  
directory with this many entries, but what I'm actually wondering  
is how directory lookups are handled. That is, if I do a 'cd  
foo05' in a directory with foo01 through foo10, will  
the filesystem have to scroll through all the directory contents to  
find foo05, or does it use a btree or something to handle this?


This directory is, in turn, shared out over NFS.  Are there any  
issues I should be aware of with this sort of installation?


Thanks for any advice or input!

Patrick Narkinsky
Sr. System Engineer
EDS


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] Re: Issue with zfs snapshot replication from version2 to version3 pool.

2006-08-23 Thread Noel Dellofano

I've filed a bug for the problem Tim mentions below.
6463140 zfs recv with a snapshot name that has 2 @@ in a row succeeds

This is most likely due to the order in which we call  
zfs_validate_name in the zfs recv code; other snapshot commands  
like 'zfs snapshot' validate the name up front, which is why they  
fail out and refuse to create a snapshot with 2 @@ in a row.  I'll  
look into it and update the bug further.
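
The validity check itself is simple. Here is a sketch in C of the kind of
test a name should pass before recv creates anything (illustrative only,
not the actual libzfs validation code):

#include <string.h>

/*
 * Illustrative only; not the actual libzfs code.  A snapshot name
 * must contain exactly one '@' with non-empty dataset and snapshot
 * parts; "t1/fs0@@snap" fails the last check.
 */
static int
snapshot_name_valid(const char *name)
{
        const char *at = strchr(name, '@');

        if (at == NULL)                  /* no '@' at all */
                return (0);
        if (at == name || at[1] == '\0') /* empty dataset or snap part */
                return (0);
        if (strchr(at + 1, '@') != NULL) /* a second '@' is invalid */
                return (0);
        return (1);
}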


Noel

On Aug 22, 2006, at 11:45 AM, Shane Milton wrote:

Just updating the discussion with some email chains.  After more  
digging, this does not appear to be a version 2 or 3 replication  
issue.  I believe it to be an invalidly named snapshot that causes  
zpool and zfs commands to core.


Tim mentioned it may be similar to bug 6450219.
I agree it seems similar to 6450219, but I'm not so sure it's the  
same as the related bug 6446512.  At least the description of  
"...mistakenly trying to copy a file or directory..." does not, I  
believe, apply in this case.  However, I'm still testing things,  
so it very well may produce the same error.


-Shane


--

To: Tim Foster, Eric Schrock
Date: Aug 22, 2006 10:37 AM
Subject: Re: [zfs-discuss] Issue with zfs snapshot replication from  
version2 to version3 pool.



Looks like the problem is that 'zfs receive' will accept invalid  
snapshot names, in this case two @ signs in a row.  This causes  
most other zfs and zpool commands that look up the snapshot object  
type to core dump.


Reproduced on x64 Build44 system with the following command.
zfs send t0/[EMAIL PROTECTED] | zfs recv t1/fs0@@snashot_in


[EMAIL PROTECTED]:/var/tmp/]
$ zfs list -r t1
internal error: Invalid argument
Abort(coredump)


dtrace output

  1  51980   zfs_ioc_objset_stats:entry    t1
  1  51981   zfs_ioc_objset_stats:return   0
  1  51980   zfs_ioc_objset_stats:entry    t1/fs0
  1  51981   zfs_ioc_objset_stats:return   0
  1  51980   zfs_ioc_objset_stats:entry    t1/fs0
  1  51981   zfs_ioc_objset_stats:return   0
  1  51980   zfs_ioc_objset_stats:entry    t1/fs0@@snashot_in
  1  51981   zfs_ioc_objset_stats:return   22



This may need to be filed as a bug against zfs recv.

Thank you for your time,

-Shane




From: Tim Foster
To: shane milton
Cc: Eric Schrock
Date: Aug 22, 2006 10:56 AM
Subject: Re: [zfs-discuss] Issue with zfs snapshot replication from  
version2 to version3 pool.



Hi Shane,

On Tue, 2006-08-22 at 10:37 -0400, shane milton wrote:

Looks like the problem is that 'zfs receive' will accept invalid
snapshot names, in this case two @ signs in a row.  This causes
most other zfs and zpool commands that look up the snapshot object
type to core dump.


Thanks for that! I believe this is the same as
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6450219

(but I'm open to corrections :-)

   cheers,
   tim


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] zpool iostat, scrubbing increases used disk space

2006-08-20 Thread Noel Dellofano
Thanks for the heads up.  I've fixed them to point to the right documents.

Noel

On Aug 20, 2006, at 11:38 AM, Ricardo Correia wrote:

By the way, the manpage links in http://www.opensolaris.org/os/community/zfs/docs/ are not correct; they are linked to the wrong documents.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz - raidz2

2006-08-02 Thread Noel Dellofano
Your suspicions are correct: it's not possible to upgrade an  
existing raidz pool to raidz2.  You'll actually have to create the  
raidz2 pool from scratch.
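
If you have enough spare disks to build the new pool alongside the old
one, the migration is just a snapshot plus send/receive per filesystem.
A rough sketch (the pool, device, and dataset names here are made up):

zpool create newtank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
zfs snapshot tank/fs@migrate
zfs send tank/fs@migrate | zfs recv newtank/fs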


Noel
On Aug 2, 2006, at 10:02 AM, Frank Cusack wrote:

Will it be possible to update an existing raidz to a raidz2?  I  
wouldn't think so, but maybe I'll be pleasantly surprised.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] disk evacuate

2006-06-28 Thread Noel Dellofano

Hey Robert,

Well, not yet.  Right now our top two priorities are improving 
performance in multiple areas of zfs (soon there will be a performance 
page tracking progress on the zfs community page), and also getting zfs 
boot done.  Hence, we're not currently working on heaps of brand new 
features.  So this is definitely on our list, but not currently being 
worked on yet.


Noel

Robert Milkowski wrote:

Hello Noel,

Wednesday, June 28, 2006, 5:59:18 AM, you wrote:

ND a zpool remove/shrink type function is on our list of features we want
ND to add.
ND We have RFE
ND 4852783 reduce pool capacity
ND open to track this.

Is there someone actually working on this right now?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] panic in buf_hash_remove

2006-06-13 Thread Noel Dellofano
Out of curiosity, is this panic reproducible?  A bug should be filed on  
this for more investigation.  Feel free to open one, or I'll open it if  
you forward me info on where the crash dump is and details of the  
I/O stress test you were running.


thanks,
Noel :-)


 
**********
Question all the answers
On Jun 12, 2006, at 3:45 PM, Daniel Rock wrote:


Hi,

had recently this panic during some I/O stress tests:

> $msgbuf
[...]
panic[cpu1]/thread=fe80005c3c80:
BAD TRAP: type=e (#pf Page fault) rp=fe80005c3980 addr=30 occurred in module zfs due to a NULL pointer dereference



sched:
#pf Page fault
Bad kernel fault at addr=0x30
pid=0, pc=0xf3ee322e, sp=0xfe80005c3a70, eflags=0x10206
cr0: 8005003b<pg,wp,ne,et,ts,mp,pe> cr4: 6f0<xmme,fxsr,pge,mce,pae,pse>
cr2: 30 cr3: a49a000 cr8: c
rdi: fe80f0aa2b40 rsi: 89c3a050 rdx: 6352
rcx: 2f  r8: 0  r9: 30
rax: 64f2 rbx: 2 rbp: fe80005c3aa0
r10: fe80f0c979 r11: bd7189449a7087 r12: 89c3a040
r13: 89c3a040 r14: 32790 r15: 0
fsb: 8000 gsb: 8149d800  ds: 43
 es: 43  fs: 0  gs: 1c3
trp: e err: 0 rip: f3ee322e
 cs: 28 rfl: 10206 rsp: fe80005c3a70
 ss: 30

fe80005c3870 unix:die+eb ()
fe80005c3970 unix:trap+14f9 ()
fe80005c3980 unix:cmntrap+140 ()
fe80005c3aa0 zfs:buf_hash_remove+54 ()
fe80005c3b00 zfs:arc_change_state+1bd ()
fe80005c3b70 zfs:arc_evict_ghost+d1 ()
fe80005c3b90 zfs:arc_adjust+10f ()
fe80005c3bb0 zfs:arc_kmem_reclaim+d0 ()
fe80005c3bf0 zfs:arc_kmem_reap_now+30 ()
fe80005c3c60 zfs:arc_reclaim_thread+108 ()
fe80005c3c70 unix:thread_start+8 ()

syncing file systems...
 done
dumping to /dev/md/dsk/swap, offset 644874240, content: kernel
> $c
buf_hash_remove+0x54(89c3a040)
arc_change_state+0x1bd(c0099370, 89c3a040, c0098f30)
arc_evict_ghost+0xd1(c0099470, 14b5c0c4)
arc_adjust+0x10f()
arc_kmem_reclaim+0xd0()
arc_kmem_reap_now+0x30(0)
arc_reclaim_thread+0x108()
thread_start+8()
> ::status
debugging crash dump vmcore.0 (64-bit) from server
operating system: 5.11 snv_39 (i86pc)
panic message:
BAD TRAP: type=e (#pf Page fault) rp=fe80005c3980 addr=30 occurred in module zfs due to a NULL pointer dereference

dump content: kernel pages only



Daniel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

