[zfs-discuss] Re: Porting ZFS file system to FreeBSD.

2006-10-26 Thread Pawel Jakub Dawidek
On Tue, Sep 05, 2006 at 10:49:11AM +0200, Pawel Jakub Dawidek wrote: > On Tue, Aug 22, 2006 at 12:45:16PM +0200, Pawel Jakub Dawidek wrote: > > Hi. > > > > I started porting the ZFS file system to the FreeBSD operating system. > [...] > > Just a quick note about progress in my work. I needed slow

[zfs-discuss] Re: experiences with zpool errors and glm flipouts

2006-10-26 Thread Daniel B. Price
(For some reason I never actually got this as an email; maybe because I'm not subscribed to zfs-discuss?) Thanks, Eric. So do you guys have any suspicions about what is actually failing here? Is it my drives, or the glm chip? Or both? I was wondering whether new drives were going to help. I'l

Re: [zfs-discuss] Planning to use ZFS in production.. some queries...

2006-10-26 Thread Darren Dunham
> 1. How do we get the same 4-way stripe of the LUNs in ZFS (we do not want any redundancy at the ZFS level since it is taken care of at the HDS 9500 level in hardware)? Just add the disks to the pool. They'll be automatically striped. > How do we specify a block size of 6144? Do I nee
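
Both answers reduce to one command each. A minimal sketch of the two steps, assuming hypothetical LUN device names (c2t0d0 through c2t3d0) and an illustrative pool name:

  # Create a pool from the four LUNs; ZFS stripes writes across all
  # top-level vdevs automatically, so no explicit stripe layout is needed.
  zpool create datapool c2t0d0 c2t1d0 c2t2d0 c2t3d0

  # Block size is the per-filesystem recordsize property. It must be a
  # power of two between 512 bytes and 128K, so 6144 is not a legal
  # value; 8K is the nearest power of two above it.
  zfs create datapool/db
  zfs set recordsize=8k datapool/db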

[zfs-discuss] Planning to use ZFS in production.. some queries...

2006-10-26 Thread Binny Raphael
Hi, We are planning to migrate two servers to Solaris 10 6/06 and use ZFS. One of the servers (Production) is currently using VxVM/VxFS and the Development server is using SVM. Production Server: The disks on the production server are from an HDS 9500 SAN (i.e. LUNs which are already RAID 5 at the

[zfs-discuss] Current status of a ZFS root

2006-10-26 Thread Chris Adams
We're looking at replacing a current Linux server with a T1000 + a Fibre Channel enclosure to take advantage of ZFS. Unfortunately, the T1000 only has a single drive bay (!), which makes it impossible to follow our normal practice of mirroring the root file system; naturally the idea of using tha

[zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-10-26 Thread Douglas R. McCallum
>> If you share the file systems the time increases even further, but as I
>> understand it that issue is being worked:
>>
>> [EMAIL PROTECTED] # time zpool import zpool1
>>
>> real 7h6m28.62s
>> user 14m55.28s
>> sys  5h58m13.24s
>> [EMAIL PROTECTED] #
>
> Yes, this is a limitation of the antiquated
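
The share cost can be measured separately from the import itself. A rough sketch of one way to do that, assuming the filesystems inherit their sharenfs setting from the pool's top-level dataset:

  # Turn NFS sharing off at the top so inherited shares are skipped,
  # then time the import alone; turn sharing back on to measure its cost.
  zfs set sharenfs=off zpool1
  zpool export zpool1
  time zpool import zpool1
  zfs set sharenfs=on zpool1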

[zfs-discuss] Re: zpool snapshot fails on unmounted filesystem

2006-10-26 Thread Thomas Maier-Komor
Hi Tim, I just retried to reproduce it to generate a reliable test case. Unfortunately, I cannot reproduce the error message, so I really have no idea what might have caused it. Sorry, Tom

Re: [zfs-discuss] zpool import takes too long with large numbers of file systems

2006-10-26 Thread Eric Schrock
On Thu, Oct 26, 2006 at 02:38:32PM -0700, Chris Gerhard wrote: > To have user quotas with ZFS you have to have a file system per user. > However, this leads to a very large number of file systems on a large > server. I understand there is work already in hand to make sharing a > large number of file

[zfs-discuss] zpool import takes too long with large numbers of file systems

2006-10-26 Thread Chris Gerhard
To have user quotas with ZFS you have to have a file system per user. However, this leads to a very large number of file systems on a large server. I understand there is work already in hand to make sharing a large number of file systems faster; however, even mounting a large number of file system
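
The layout the poster describes costs one zfs create and one quota setting per user. A minimal sketch, with hypothetical pool and user names:

  # One filesystem per user, each with its own quota.
  zfs create tank/home
  zfs create tank/home/alice
  zfs set quota=10g tank/home/alice
  zfs create tank/home/bob
  zfs set quota=10g tank/home/bob

With thousands of users this means thousands of filesystems, which is exactly why the mount and share times matter here.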

Re: [zfs-discuss] Re: ZFS hangs systems during copy

2006-10-26 Thread Matthew Ahrens
Juergen Keil wrote: Sounds familiar. Yes, it is a small system, a Sun Blade 100 with 128MB of memory. Oh, 128MB... Btw, does anyone know if there are any minimum hardware (physical memory) requirements for using ZFS? It seems as if ZFS wasn't tested that much on machines with 256MB (or less)

Re: [zfs-discuss] zfs on removable drive

2006-10-26 Thread Artem Kachitchkine
I thought that using a zfs import/export into its own pool would work. It works for me, at least on recent builds. The only gotcha I'm aware of is that ZFS does not do well with I/O failures in a non-replicated pool - it used to panic when there was a read failure on a single-disk USB pool,
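
The export/import cycle for a removable pool is short. A sketch, assuming a hypothetical USB device name:

  # Create a pool on the external drive.
  zpool create usbpool c4t0d0

  # Export cleanly before unplugging so all state is on disk.
  zpool export usbpool

  # After reattaching (possibly on another machine), import it back.
  zpool import usbpool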

Re: [zfs-discuss] Re: ZFS hangs systems during copy

2006-10-26 Thread Joseph Mocker
I have reported similar issues with ZFS taking most of my 2G in one system and 3G in another. I have been told to add a swap partition, which I normally do not do. It has mostly cleared up the problem; however, I am still bugged by needing to do that in the first place. ZFS memory management seems
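
For reference, adding swap on Solaris is a one-liner; a sketch assuming a hypothetical disk slice:

  # Add a slice as swap and verify it is in use.
  swap -a /dev/dsk/c0t0d0s1
  swap -l

  # An /etc/vfstab entry would be needed to make it persist across reboots.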

Re: [zfs-discuss] zfs on removable drive

2006-10-26 Thread Richard Elling - PAE
It is supposed to work, though I haven't tried it. Gary Gendel wrote: Here is the problem I'm trying to solve... I've been using a SPARC machine as my primary home server for years. A few years back the motherboard died. I did a nightly backup on an external USB drive formatted with UFS.

Re: [zfs-discuss] zpool status - very slow during heavy IOs

2006-10-26 Thread eric kustarz
Robert Milkowski wrote: Hi. When there's a lot of I/O to a pool then zpool status is really slow. These are just statistics and it should be quick. This is: 6430480 grabbing config lock as writer during I/O load can take excessively long eric # truss -ED zpool status [...] 0.000
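
The symptom is easy to reproduce: it only takes sustained writes in the background while zpool status runs. A sketch, assuming the pool from the report is mounted at /nfs-1:

  # Generate heavy write load on the pool in the background...
  dd if=/dev/zero of=/nfs-1/bigfile bs=1024k count=8192 &

  # ...then time zpool status while the writes are in flight.
  time zpool status nfs-1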

Re: [zfs-discuss] zpool status - very slow during heavy IOs

2006-10-26 Thread Eric Schrock
See: 6430480 grabbing config lock as writer during I/O load can take excessively long - Eric On Thu, Oct 26, 2006 at 04:42:00AM -0700, Robert Milkowski wrote: > Hi. > > > When there's a lot of IOs to a pool then zpool status is really slow. > These are just statistics and it should work q

Re: [zfs-discuss] experiences with zpool errors and glm flipouts

2006-10-26 Thread Eric Schrock
On Thu, Oct 26, 2006 at 01:30:46AM -0700, Dan Price wrote:
> scsi: WARNING: /[EMAIL PROTECTED],70/[EMAIL PROTECTED] (glm0):
>   Resetting scsi bus, got incorrect phase from (1,0)
> genunix: NOTICE: glm0: fault detected in device; service still available
> genunix: NOTICE: glm0: Resetti
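
One way to start separating drive faults from controller faults is the per-device error counters, which Solaris keeps independently of the glm driver's console messages. A sketch using standard tools:

  # Per-device soft/hard/transport error counts plus vendor/model info.
  iostat -En

  # Recent driver and SCSI messages, for correlation with the counts above.
  dmesg | grep -i glm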

Re: [zfs-discuss] Re: ZFS ACLs and Samba

2006-10-26 Thread Spencer Shepler
On Thu, Joerg Schilling wrote: > Spencer Shepler <[EMAIL PROTECTED]> wrote: > > > On Wed, Jonathan Edwards wrote: > > > > > > On Oct 25, 2006, at 15:38, Roger Ripley wrote: > > > > > > >IBM has contributed code for NFSv4 ACLs under AIX's JFS; hopefully > > > >Sun will not tarry in following th

Re: [zfs-discuss] Re: ZFS hangs systems during copy

2006-10-26 Thread Juergen Keil
> Sounds familiar. Yes, it is a small system, a Sun Blade 100 with 128MB of > memory. Oh, 128MB... ZFS' *minimum* ARC cache size is fixed at 64MB, so ZFS' ARC cache should already grab slightly more than half of the memory installed in that machine, leaving less than 64MB of free memory on your
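
On a machine this small it is worth confirming where the memory has actually gone. A sketch using mdb's kernel memory summary (run as root):

  # Break physical memory down by consumer; on a 128MB system the
  # kernel line shows how much the ARC and other caches have taken.
  echo ::memstat | mdb -k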

Re: [zfs-discuss] Re: ZFS hangs systems during copy

2006-10-26 Thread Mark Maybee
This is: 6483887 without direct management, arc ghost lists can run amok The fix I have in mind is to control the ghost lists as part of the arc_buf_hdr_t allocations. If you want to test out my fix, I can send you some diffs... -Mark Juergen Keil wrote: Jürgen Keil writes: > > ZFS 11.0 on S

Re: [zfs-discuss] Re: ZFS hangs systems during copy

2006-10-26 Thread Juergen Keil
> Jürgen Keil writes: > > > ZFS 11.0 on Solaris release 06/06, hangs systems when > > > trying to copy files from my VXFS 4.1 file system. > > > any ideas what this problem could be?. > > > > What kind of system is that? How much memory is installed? > > > > I'm able to hang an Ultra 60

Re: [zfs-discuss] Re: ZFS hangs systems during copy

2006-10-26 Thread Edmundo Ocalagan
Sounds familiar. Yes, it is a small system, a Sun Blade 100 with 128MB of memory. I guess I need to install more memory in this baby. I'm just very surprised that ZFS requires so much memory to accomplish a task as simple as a copy. From your experience I can tell it might be a memory-related issue.

[zfs-discuss] zfs on removable drive

2006-10-26 Thread Gary Gendel
Here is the problem I'm trying to solve... I've been using a SPARC machine as my primary home server for years. A few years back the motherboard died. I did a nightly backup on an external USB drive formatted with UFS. I use an rsync-based backup tool called dirvish, so I thought I had all the ba

Re: [zfs-discuss] Re: ZFS hangs systems during copy

2006-10-26 Thread Roch - PAE
Jürgen Keil writes: > > ZFS 11.0 on Solaris release 06/06, hangs systems when > > trying to copy files from my VXFS 4.1 file system. > > any ideas what this problem could be?. > > What kind of system is that? How much memory is installed? > > I'm able to hang an Ultra 60 with 256 MByte

[zfs-discuss] zpool status - very slow during heavy IOs

2006-10-26 Thread Robert Milkowski
Hi. When there's a lot of I/O to a pool then zpool status is really slow. These are just statistics and it should be quick.

# truss -ED zpool status
[...]
0.0008 0.0001 fstat64(1, 0xFFBFAC40) = 0
  pool: nfs-1
0.0015 0.0001 write(1, " p o o l :   n f s -".

[zfs-discuss] Re: ZFS hangs systems during copy

2006-10-26 Thread Jürgen Keil
> ZFS 11.0 on Solaris release 06/06 hangs systems when > trying to copy files from my VXFS 4.1 file system. > Any ideas what this problem could be? What kind of system is that? How much memory is installed? I'm able to hang an Ultra 60 with 256 MByte of main memory, simply by writing big files
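
The reproduction described needs nothing more than sequential writes larger than RAM. A sketch, with a hypothetical pool mountpoint:

  # Write a file several times the size of physical memory; on a
  # 256MB machine, 2GB of sequential writes is enough to put the
  # ARC and the rest of the kernel under memory pressure.
  dd if=/dev/zero of=/tank/bigfile bs=1024k count=2048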

Re: [zfs-discuss] Re: ZFS ACLs and Samba

2006-10-26 Thread Joerg Schilling
Spencer Shepler <[EMAIL PROTECTED]> wrote: > On Wed, Jonathan Edwards wrote: > > > > On Oct 25, 2006, at 15:38, Roger Ripley wrote: > > > > >IBM has contributed code for NFSv4 ACLs under AIX's JFS; hopefully > > >Sun will not tarry in following their lead for ZFS. > > > > > >http://lists.samba

[zfs-discuss] experiences with zpool errors and glm flipouts

2006-10-26 Thread Dan Price
Tonight I've been moving some of my personal data around on my desktop system and have hit some on-disk corruption. As you may know, I'm cursed, and so this had a high probability of ending badly. I have two SCSI disks and use live upgrade, and I have a partition, /aux0, where I tend to keep pers
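
When ZFS reports on-disk corruption, the first stop is the pool's own error accounting. A sketch, with a hypothetical pool name:

  # -v lists checksum errors per device and, where possible, the
  # names of files with permanent errors.
  zpool status -v tank

  # Re-read and verify every block, then check the error counts again.
  zpool scrub tank
  zpool status -v tank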