Re: [zfs-discuss] ZFS percent busy vs zpool iostat

2011-01-12 Thread a . smith

Quoting Bob Friesenhahn:



What function is the system performing when it is so busy?


The workload of the server is SMTP mail, with associated spam and  
virus scanning, plus serving maildir email via POP3 and IMAP.




Wrong conclusion.  I am not sure what the percentages are  
percentages of (total RAM?), but 603MB is a very small ARC.  FreeBSD  
pre-assigns kernel memory for zfs so it is not dynamically shared  
with the kernel as it is with Solaris.


This is the min, max, and actual size of the ARC. ZFS is free to use  
up to the max (2098.08M) if it decides it wants to. Depending on the  
workload it will go up to 2098M (I've seen it reach that size on this  
and other servers); it's just that with its usual daily workload it  
settles at around 600M. I assume it decides it's not worth using any  
more RAM.


The ARC is "adaptive" so you should not assume that its objective is  
to try to absorb your hard drive.  It should not want to cache data  
which is rarely accessed.  Regardless,  your ARC size may actually  
be constrained by default FreeBSD kernel tunings.


I guess then that ZFS is weighing up how useful it would be to use more  
than 600M and deciding that it isn't that useful? Anyway, I've just  
forced the min to 1900M, so we will see how this goes today.
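
For reference, a minimal sketch of how the ARC limits can be pinned on  
FreeBSD (the sizes are just the figures mentioned above, not  
recommendations); the tunables go in /boot/loader.conf and take effect  
at the next boot:

  vfs.zfs.arc_min="1900M"
  vfs.zfs.arc_max="2098M"

The current ARC size can then be watched at runtime with:

  # sysctl kstat.zfs.misc.arcstats.size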




The type of drives you are using have very poor seek performance.  
Higher RPM drives would surely help.  Stuffing lots more memory in  
your system and adjusting the kernel so that zfs can use a lot more  
of it is likely to help dramatically.  Zfs loves memory.


thanks Bob, and also to Matt for your comments...





Re: [zfs-discuss] ZFS percent busy vs zpool iostat

2011-01-12 Thread a . smith
OK, I think I have found the biggest issue: the drives are 4k-sector  
drives, and I wasn't aware of that. My fault, I should have checked  
this. I've had the disks for ages and they are sub-1TB, so I had the  
idea that they wouldn't be 4k drives...


I will obviously have to address this, either by creating a pool with  
4k-aware zfs commands or by replacing the disks.
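
For what it's worth, a commonly described workaround on FreeBSD at  
pool-creation time is to put a gnop(8) provider with a 4k sector size  
under one of the disks so that ZFS chooses ashift=12. A rough sketch  
(disk names here are just examples):

  # gnop create -S 4096 /dev/ada0
  # zpool create tank mirror /dev/ada0.nop /dev/ada1
  # zpool export tank
  # gnop destroy /dev/ada0.nop
  # zpool import tank

The gnop provider only needs to exist at creation time; after the  
export/destroy/import the pool keeps the 4k alignment it was created with.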


Anyway, thanks to all and to Taemun for getting me to check this...





Re: [zfs-discuss] zpool scalability and performance

2011-01-13 Thread a . smith
Basically I think yes, in the circumstances you describe you need to  
add all the vdevs you require up front.


You just have to consider what ZFS is able to do with the disks that  
you give it. If you have 4x mirrors to start with, then all writes will  
be spread across all of the disks and you will get nice performance  
from all 8 spindles/disks. If you fill all of these up and then add one  
more mirror, it is only logical that new data will be written solely to  
the free space on the new mirror, and you will get the performance of  
writing to a single mirrored vdev.


To handle this you would either have to add enough new vdevs to give  
you your required performance, or, if there is a fair amount of data  
turnover in your pool (i.e. you are deleting old data, including from  
snapshots), you might get reasonable performance by adding a new mirror  
at some point before your existing pool is completely full. That is,  
new data will initially be written and spread across all disks, since  
there is free space on all of them, and over time old data will be  
removed from the older vdevs. Most of the time reads and writes would  
then benefit from all vdevs, though I guess there is no guarantee of  
that...
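
As a rough illustration (the device names are made up), adding another  
mirror vdev and then watching how the I/O spreads over the vdevs might  
look like this:

  # zpool add tank mirror da8 da9
  # zpool iostat -v tank 5

zpool iostat -v breaks the statistics down per vdev, so you can see  
whether new writes are landing only on the new mirror or across the pool.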


Anyway, that's what occurred to me on the subject! ;)

cheers Andy.





Re: [zfs-discuss] Drive i/o anomaly

2011-02-08 Thread a . smith

It is a 4k sector drive, but I thought zfs recognised those drives and didn't
need any special configuration...?


4k drives are a big problem for ZFS; much has been posted/written  
about it. Basically, if a 4k drive reports 512-byte sectors, as almost  
all of them do, then ZFS does not detect the sector size and configure  
the pool correctly. If the drive reports its real 4k sector size, ZFS  
handles this very nicely.
So the problem/fault lies with drives misreporting their real sector  
size to maintain compatibility with other OSes etc., and not really with ZFS.
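
If you want to see what ZFS actually chose for an existing pool, the  
ashift value is visible in the cached pool configuration; a quick way  
to check (the pool name is just an example, and plain zdb output also  
shows it on some versions):

  # zdb -C tank | grep ashift

ashift: 9 corresponds to 512-byte alignment, ashift: 12 to 4k.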


cheers Andy.





Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread a . smith

On Feb 16, 2011, at 7:38 AM, whitetr6 at gmail.com wrote:

My question is about the initial "seed" of the data. Is it possible  
to use a portable drive to copy the initial zfs filesystem(s) to the  
remote location and then make the subsequent incrementals over the  
network? If so, what would I need to do to make sure it is an exact  
copy? Thank you,


Yes, you can send the initial seed snapshot to a file on a portable  
disk, for example:


 # zfs send tank/volume@seed > /myexternaldrive/zfssnap.data

If the volume of data is too large to fit on a single disk, you can  
create a new pool spread across as many disks as you require and  
receive the snapshot into that new pool. Then, once the new pool is  
attached to your offsite server, you can run a new zfs send from it.
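
To complete the picture, a sketch of the receiving side and of a later  
incremental over the network (host and dataset names are examples):

On the offsite server, restore the seed from the portable disk:

  # zfs receive tank/volume < /myexternaldrive/zfssnap.data

Later, send only the changes since the seed over the network:

  # zfs send -i tank/volume@seed tank/volume@today | \
      ssh remotehost zfs receive tank/volume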


thanks Andy.





Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread a . smith

Hi,

  I am using FreeBSD 8.2 in production with ZFS. Although I have had  
one issue with it in the past, I would still recommend it and I  
consider it production ready. That said, if you can wait for FreeBSD  
8.3 or 9.0 to come out (a few months away) you will get a better  
system, as those releases will include ZFS v28 (FreeBSD-RELEASE is  
currently at v15).
On the other hand, things can always go wrong; of course RAID is not  
backup, even with snapshots ;)
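
As a quick check on any given system (the pool name is an example):

  # zpool upgrade -v
  # zpool get version tank

The first lists every pool version the running system supports; the  
second shows which version a particular pool is currently at.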


cheers Andy.





Re: [zfs-discuss] Monitoring disk seeks

2011-05-24 Thread a . smith

Hi,

  see the seeksize script on this URL:

http://prefetch.net/articles/solaris.dtracetopten.html

I haven't used it myself, but it looks neat!

cheers Andy.





Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread a . smith

Still i wonder what Gartner means with Oracle monetizing on ZFS..


It simply means that Oracle wants to make money from ZFS (as is normal  
for technology companies with their own technology). The reason this  
might create uncertainty for ZFS is that maintaining, or helping to  
improve, the open source version of ZFS may be seen by Oracle as  
working against their ability to make money from it.
That said, what is already open source cannot be un-open-sourced, as  
others have said...


cheers Andy.





Re: [zfs-discuss] Question on ZFS iSCSI

2011-06-01 Thread a . smith

Disk /dev/zvol/rdsk/pool/dcpool: 4295GB
Sector size (logical/physical): 512B/512B



Just to check, did you already try:

zpool import -d /dev/zvol/rdsk/pool/ poolname

?

thanks Andy.



