RE: Heads Up: default NFS server changing to the new one

2011-06-13 Thread Chris Forgeron
From: Rick Macklem

Well, I doubt you'll find much difference performance-wise. An NFS server
can be looked at as a protocol translator, converting the NFS RPCs into
VFS/VOP calls. Performance is largely defined by how well the network stack
and/or file system perform.

When you set up a server, there are a few things that may help:
[..snip..]

Yes, I'm seeing little difference performance-wise so far (maybe a slight
boost from the new code), although I haven't had time to run all the tests
I'd like, so I can't tell if it's significant. However, that's good - as
long as we're not regressing, I'm happy.

I run ZFS exclusively for my SANs, and I'm familiar with the various tweaks
to make it go faster. ZFS and NFS don't play well under ESX because the ESX
client forces O_SYNC on writes, as I've detailed before, but a quick snip of
a few lines in nfs_nfsdport.c to force ioflags to what I want helps in my
situation. I was the guy who was bugging you for a switch for that a month
ago. :-) I'm seeing around a 10% improvement when I do that, as it doesn't
flog the ZIL as much (I use multiple hardware RAM drives for a ZIL, so they
won't get much faster than that).
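
Here's a rough sketch of the kind of change I mean - this is NOT the actual
nfs_nfsdport.c source or my exact patch, just an illustration (the constant
names are the real kernel ones, but the surrounding logic is approximated,
and NFSD_FORCE_ASYNC is a made-up knob). Obvious warning: telling the client
its sync writes are stable when they aren't risks data loss if the server
dies.

/*
 * Sketch only: pick the ioflags for a write RPC, optionally ignoring
 * the client's stable_how and always treating the write as UNSTABLE,
 * so ZFS doesn't have to commit the ZIL on every ESX write.
 */
static int
choose_write_ioflags(int stable)
{
        int ioflags;

        if (stable == NFSWRITE_UNSTABLE)
                ioflags = IO_NODELOCKED;
        else
                ioflags = (IO_SYNC | IO_NODELOCKED);    /* stock behaviour */
#ifdef NFSD_FORCE_ASYNC
        ioflags = IO_NODELOCKED;        /* the hack: drop IO_SYNC */
#endif
        return (ioflags);
}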

I'm also PXE booting over NFS, and that's working well, both from iPXE and 
gPXE. My linux clients running parted or clonezilla also don't seem to have any 
issues with the new NFS server. 

There's a bit of error/warning chatter between the old FreeBSD NFS servers
I haven't upgraded and the new NFS clients, but it all still seems to work,
and I plan on upgrading everything across the board to my newer
2011.05.28.15.00.00 build of 9-CURRENT by the end of this week or next. I'm
also going to build a clang/llvm version of the new systems for testing on
that front, as it may be time for me to switch.

As for things the nfsd server code can do, I plan on looking at a couple of
things, but I don't think those will be in 9.0:
- Making MAXBSIZE larger. Even if it is larger than the largest block size
  supported by the underlying fs, this may help, because it can reduce the
  # of I/O RPCs.

That's interesting. I wonder what size would be good for ZFS? Possibly
128K, to match the default recordsize. I see your definition in nfsport.h.
I may fiddle a bit with this myself.
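
The win would come from cutting per-RPC overhead (headers, wakeups, context
switches). A trivial back-of-the-envelope illustration - the sizes are my
assumptions, 64K for the stock MAXBSIZE of this era and 128K to match ZFS's
default recordsize:

#include <stdio.h>

/* Fewer, larger I/O RPCs move the same data with half the round trips. */
int
main(void)
{
        const long long transfer = 1LL << 30;          /* 1 GiB of file data */
        const long iosizes[] = { 65536, 131072 };      /* 64K vs. 128K */

        for (int i = 0; i < 2; i++)
                printf("%7ld-byte I/O size -> %lld read/write RPCs\n",
                    iosizes[i], transfer / iosizes[i]);
        return (0);
}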

I've also been wondering about the performance effect of the mallocs in the
nfsvno_write() function. Would it be more efficient to malloc further up the
call chain and pass a pointer down, so we're not constantly allocating and
freeing memory for writes? Possibly malloc the maximum size at startup and
reuse the memory area. I haven't been that in-depth with compiling in a
while, however, so I don't recall how easy this would be, or whether the
overhead of passing the pointer around would be just as bad.
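
What I'm picturing is something like this userland sketch - not the nfsd
code itself; NFS_SRVWRITEMAX and the function names are made up, and a real
version would need a per-thread buffer (or a uma(9) zone) since nfsd is
multithreaded:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NFS_SRVWRITEMAX (128 * 1024)    /* hypothetical max write size */

static char *write_scratch;             /* allocated once, reused forever */

/* One-time setup, replacing a malloc()/free() pair per write RPC. */
static int
writebuf_init(void)
{
        write_scratch = malloc(NFS_SRVWRITEMAX);
        return (write_scratch == NULL ? -1 : 0);
}

/* Per-request path: reuse the scratch buffer instead of allocating. */
static void
handle_write(const char *payload, size_t len)
{
        if (len > NFS_SRVWRITEMAX)
                len = NFS_SRVWRITEMAX;
        memcpy(write_scratch, payload, len);
        /* ...hand write_scratch to the VOP_WRITE() path here... */
}

int
main(void)
{
        if (writebuf_init() != 0)
                return (1);
        handle_write("some rpc payload", 16);
        puts("no per-write malloc needed");
        free(write_scratch);
        return (0);
}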



RE: Heads Up: default NFS server changing to the new one

2011-06-03 Thread Chris Forgeron
BTW,

I've been pounding on the new NFS server with a few test VMs from my ESX
cluster for the last 2 weeks, 24/7. Everything looks solid: no panics, no
errors, no corruption. Memory usage is staying stable, so I haven't found
any leaks. I'm using IOmeter to move a few TB of randomized data a day over
the NFS link from each VM.

However, ESX is only an NFSv3 client, so my tests are not exercising any of
the NFSv4-based protocol paths.

I hope to do a speed-test comparison between the new and old NFS servers
this weekend. At this stage, my feeling is that the new NFS server is at
least as fast. I suspect tests will show it to be faster (at least the code
looks faster :-) ).

I'm going to expand testing by upgrading one of our non-critical SANs to
the latest code as well, so we'll see what effect 20 different workstations
connecting daily and moving small files has on it.

 Good work on the new code. 


GENERIC kernconf file suggestion

2011-03-09 Thread Chris Forgeron
Hi,
I was just caught by a kernel that wouldn't compile when I added USB
double-bulk pipes (device udbp). A quick Google search found that I needed
to add options NETGRAPH to the kernel config.

No biggie, but I see this has been catching people since 2005
(http://lists.freebsd.org/pipermail/freebsd-usb/2005-October/001581.html).

Can we update GENERIC to make this requirement more obvious, say:

#device         udbp            # USB Double Bulk Pipe devices - requires 'options NETGRAPH'

I'm not sure whose attention I should bring this to (davidxu?), so I
thought I'd start here.

--
Christopher Forgeron, B.Sc., CCS, A+, N+
ACSI Consulting, Inc / Aardvark Computer Solutions, Inc.
email: ch...@acsi.ca

2070 Oxford Street, Suite 100, Halifax NS B3L-2T2
Tel: 902-425-2686  Fax: 902-484-7909



RE: missing files in readdir(3) on NFS export of ZFS volume (since v28?)

2011-03-08 Thread Chris Forgeron
Interesting - I'm having the same problem, but I thought it was due to some
tweaking I was doing to NFS to force it into async mode for an ESX mount,
and to remove the last of the Giant locking. Now that someone else is having
the same problem, maybe it's not my tweaks.

I've found that if I create a new folder and copy the files into it, that
often fixes it. Sometimes I need to make the folder via NFS for this to
work. I just did it twice today, in fact.

I see that r219404 fixed it for him - what was the change in r219404 that
made the difference here? I guess I could go diff the code to find out. :-)




RE: HEADS UP: ZFSv28 is in!

2011-02-27 Thread Chris Forgeron
Yay! Thanks for all of your work on ZFS. I was just about to ask when this
was going to happen.

I've been having great success with v28 in the Dec 12 2010 version, other than 
my minor complaints about imports and speed.

Concerning speed, I've been running further speed tests this weekend with
FreeBSD, trying to isolate what effect NFS has on my results. Earlier I
found a Solaris 11 box to be faster than a FreeBSD 9 box on the same
hardware. I notice giant-lock logic in the NFS server code, and while it
shouldn't be in play for an MP-safe fs like ZFS, I'm not sure.

I have a few custom NFS tests, including forcing async in the code even
when VMware opens the file O_SYNC. Judging by the Solaris speed difference,
I think they must be silently doing something similar to get the speed they
do on ZFS with a ZIL enabled.

I also need to look into how the Intel X520 card is set up in Solaris
compared to FreeBSD, as it may be my speed limiter.

This will make further testing much easier now that I no longer need to juggle 
patches. 

Thanks again. 


-Original Message-
From: owner-freebsd...@freebsd.org [mailto:owner-freebsd...@freebsd.org] On 
Behalf Of Pawel Jakub Dawidek
Sent: Sunday, February 27, 2011 4:30 PM
To: freebsd...@freebsd.org
Cc: freebsd-current@FreeBSD.org
Subject: HEADS UP: ZFSv28 is in!

Hi.

I just committed ZFSv28 to HEAD.

New major features:

- Data deduplication.
- Triple parity RAIDZ (RAIDZ3).
- zfs diff.
- zpool split.
- Snapshot holds.
- zpool import -F. Allows rewinding a corrupted pool to an earlier
  transaction group.
- The ability to import a pool in read-only mode.

PS. If you like my work, you can help me promote yomoli.com :)

http://yomoli.com
http://www.facebook.com/pages/Yomolicom/178311095544155

-- 
Pawel Jakub Dawidek   http://www.wheelsystems.com
FreeBSD committer http://www.FreeBSD.org
Am I Evil? Yes, I Am! http://yomoli.com


RE: My ZFS v28 Testing Experience

2011-01-24 Thread Chris Forgeron
Unfortunately, this didn't make a difference. There is no significant
change in the benchmarks with the new compile.

I do have a lot of CPU power at hand, so it doesn't look to be CPU-bound at
all. Possibly that's one of the issues. I'm running 2 new Xeon X5660s, so
FreeBSD sees 6x2 (12) physical and 12x2 (24) virtual cores. How well is
scheduling handled on this processor architecture?

At this stage, I can't say with any confidence that it _is_ ZFS at fault
here, because I'm involving NFS and the ix driver substantially. I just know
that NFS to a ZFS share on Solaris 11 Express is wildly faster than on
FreeBSD 9.0, regardless of tweaks.

Now, there may be some extra debugging within the NFS code that is the
issue here; I'm not sure at this stage. I could also test with iSCSI, or
against the raw ZFS filesystem, but my needs involve NFS+ZFS, so that's my
main test environment. I'm not using the new NFSv4 code.

Let me know if there is something you'd like to test or know more about on my 
setup - I'll be running FreeBSD for about a week on this box (finishing up some 
last bits of work that I need it for), then I'm back to Solaris 11 Express for 
the next few months. 

I may end up having to build a separate box so I can test this
configuration more easily. I'd like to say with more confidence where the
speed is going, because I feel FreeBSD deserves to be top-notch, and right
now I'm only raising issues that aren't precise enough to act on.



RE: My ZFS v28 Testing Experience

2011-01-22 Thread Chris Forgeron
Before we go any further, could you please confirm that you commented out
this line in sys/modules/zfs/Makefile:

   CFLAGS+=-DDEBUG=1

This turns on all kinds of ZFS debugging and slows it down a lot, but it is
invaluable for correctness testing. This will be turned off once we import
ZFSv28 into FreeBSD-CURRENT.

Ah! I did not do this. My bad - I've made the edit and will be recompiling
today to see what difference it makes.

I will also clone my disk, turn witness and full debugging back on, and
then try to find out where my problem importing a pool with multiple
cache/log devices comes from. It's quite possible it's not hanging, just
taking forever, and I'm impatient and haven't let it sit for an hour to see
if it completes.
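
For reference, by "witness and full debug" I mean roughly the standard
kernel debug options - this is my usual set; the exact choices may vary:

options         INVARIANTS              # consistency checks
options         INVARIANT_SUPPORT       # extra support for INVARIANTS
options         WITNESS                 # lock order verification
options         WITNESS_SKIPSPIN        # skip spin locks, for speed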

Will report back once I have numbers. 


My ZFS v28 Testing Experience

2011-01-12 Thread Chris Forgeron
I've been testing the v28 patch code for a month now, and I have yet to hit
any real issues beyond what is mentioned below.

I'll detail some of the things I've tested; hopefully the stability of v28
on FreeBSD will convince others to give it a try, so the final release of
v28 will be as solid as possible.

I've been using FreeBSD 9.0-CURRENT as of Dec 12th, and 8.2-PRERELEASE as of
Dec 16th.

What's worked well:

- I've made and destroyed small raidz pools (3-5 disks), large 26-disk
RAID-10s, and a large 20-disk RAID-50.
- I've upgraded pools from v15 (zfs v4) with no issues on the different
arrays noted above.
- I've confirmed that a v15 or v28 pool will import into Solaris 11 Express,
and vice versa, with the exception of dual log or cache devices, noted below.
- I've run many TB of data through the ZFS storage, from benchmarks run by
my VMs connected via NFS, to simple copies inside the same pool, to copies
from one pool to another.
- I've tested pretty much every compression level, changing them as I tweak
my setup and try to find the best blend.
- I've added and removed many a log and cache device, some in failed states
from hot-removals, and the pools always stayed intact.


Issues:

- Import of pools with multiple cache or log devices. (May be a very minor
point.)

A v28 pool created in Solaris 11 Express with 2 or more log devices, or 2 or
more cache devices, won't import in FreeBSD 9. This also applies to a pool
created in FreeBSD, imported in Solaris to have the 2 log devices added
there, then exported and imported back in FreeBSD. There are no errors;
zpool import just hangs forever. If I reboot into Solaris, import the pool,
remove the dual devices, then reboot into FreeBSD, I can import the pool
without issue. A single cache or log device imports just fine.
Unfortunately I deleted my witness-enabled FreeBSD 9 drive, so I can't
easily fire it back up to give more debug info. I'm hoping some kind soul
will attempt this type of transaction and report more detail to the list.

Note - I just tried adding 2 cache devices to a raidz pool in FreeBSD,
exporting, and then importing, all without rebooting. That seems to work.
BUT - as soon as you try to reboot FreeBSD with this pool still active, it
hangs on boot. Booting into Solaris, removing the 2 cache devices, then
booting back into FreeBSD works. Something kept in memory between the
export and the import must be allowing this to work.



- Speed. (More of an issue, but what do we do?)

Wow, it's much slower than Solaris 11 Express for transactions. I do
understand that Solaris will have a slight advantage over any port of ZFS.
All of my speed tests are made with a kernel without debugging, and yes,
these are -CURRENT and -PRERELEASE builds, but the speed difference is very
large.

At first, I thought it might be more of an issue with the ix0/Intel X520-DA2
10GbE drivers that I'm using, since the bulk of my tests are over NFS (I'm
going to use this as a SAN via NFS, so I test in that environment).

But I did a raw cp of several TB from one pool to another, executing the
same command under FreeBSD as under Solaris 11 Express. When executed in
FreeBSD, the copy took 36 hours. With a fresh destination pool of the same
settings/compression/etc. under Solaris, the copy took 7.5 hours.

Here's a quick breakdown of the difference in speed I'm seeing between
Solaris 11 Express and FreeBSD. The test is Performance Test 6.1 on a
Windows 2003 server, connected via NFS to the FreeBSD or Solaris box. More
details are here:
http://christopher-technicalmusings.blogspot.com/2011/01/solaris-11-express-faster-speed-for-san.html

Solaris 11 Express snv_151a

903 MB/s - Fileserver
466 MB/s - Webserver
53 MB/s - Workstation
201 MB/s - Database

FreeBSD 9.0-CURRENT @ Dec 12th 2010 w/ v28 patch, all debugging off

95 MB/s - Fileserver
60 MB/s - Webserver
30 MB/s - Workstation
32 MB/s - Database

Massive difference, as you can see. Same machine, different boot drives.
That's a real 903 MB/s on the Solaris side as well - no cache devices or
dedicated log device in place, just a basic 5-disk raidz pool. I've tried
many a tweak to get these speeds up higher. The old v15 could hit the
mid-400s for the Fileserver test with zil_disable on, but that's no longer
an option for v28 pools. I should compile my various test results into a
summary and make a separate blog entry for those who care, as I also
fiddled with vfs.nfsrv.async with little luck. I took great care to make
sure the ZFS details were the same across the tests.

9 is a small amount faster than 8.2. Between v28 pools and v15 pools there
is some speed degradation on both 8.2 and 9, but nothing as big as the
difference between Solaris and FreeBSD.

I haven't benchmarked OpenSolaris or anything older than Solaris 11, so I'm
not sure whether this is a recent speed boost from the Solaris camp or it
has always been there.

As always, I'm delighted about the work 

RE: Next ZFSv28 patchset ready for testing.

2010-12-15 Thread Chris Forgeron
I've installed and compiled it on 9-CURRENT 2010.12.12, and it looks to be
running fine.

I've imported a v15 6-disk raidz that was created under 8.1-STABLE, and so
far it's passing all tests.

I'll be doing some serious Send/Receive testing in the next few days, so I'll 
see if I can break it. 

I'll also be upgrading the v15 fs to v28 and testing it further, as well as 
creating a fresh v28 pool, so I'll let you know how it goes. 


-Original Message-
From: owner-freebsd-curr...@freebsd.org 
[mailto:owner-freebsd-curr...@freebsd.org] On Behalf Of Pawel Jakub Dawidek
Sent: December-14-10 11:44 AM
To: Olivier Smedts
Cc: freebsd...@freebsd.org; freebsd-current@freebsd.org; Steven Hartland; 
m...@freebsd.org
Subject: Re: Next ZFSv28 patchset ready for testing.

On Tue, Dec 14, 2010 at 03:20:05PM +0100, Olivier Smedts wrote:
>  make installworld
> 
> That's what I wanted to do, and why I rebooted single-user on the new
> kernel. But isn't the v13-v15 userland supposed to work with the v28
> kernel?

Yes, it is supposed to work, precisely so that the common FreeBSD upgrade
path can be followed. Martin was working on this (CCed).

-- 
Pawel Jakub Dawidek   http://www.wheelsystems.com
p...@freebsd.org   http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!