On Sat, 4 Oct 2008, Danny Braniss wrote:
at the moment, the best I can do is run it on different hardware that has
if_em; the results are in
ftp://ftp.cs.huji.ac.il/users/danny/lock.prof/7.1-1000.em
the benchmark ran better with the Intel NIC, averaged UDP 54MB/s, TCP 53MB/s
On Fri, 3 Oct 2008, Danny Braniss wrote:
gladly, but have no idea how to do LOCK_PROFILING, so some pointers would
be helpful.
The LOCK_PROFILING(9) man page isn't a bad starting point -- I find that
the defaults work fine most of the time.
it more difficult than I expected.
for one, the kernel date was misleading, the actual source update is the
key, so the window of changes is now 28/July to 19/August. I have the
diffs, but nothing yet seems relevant.
on the other hand, I tried NFS/TCP, and there things seem ok,
On Fri, 3 Oct 2008, Danny Braniss wrote:
gladly, but have no idea how to do LOCK_PROFILING, so some pointers would be
helpful.
The LOCK_PROFILING(9) man page isn't a bad starting point -- I find that the
defaults work fine most of the time, so just use them. Turn the enable sysctl
on just
forget about LOCK_PROFILING, I'm RTFM now :-)
though some hints on values might be helpful.
have a nice weekend,
danny
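For the record, the usual LOCK_PROFILING workflow is roughly the following (a sketch based on the LOCK_PROFILING(9) man page; the sysctl names are the stock debug.lock.prof.* ones, adjust if your branch differs):

```shell
# Build a kernel with lock profiling compiled in (kernel config line):
#   options LOCK_PROFILING
# Then, around the benchmark run:
sysctl debug.lock.prof.reset=1    # clear any stale counters
sysctl debug.lock.prof.enable=1   # start collecting
# ... run the NFS benchmark ...
sysctl debug.lock.prof.enable=0   # stop collecting
sysctl debug.lock.prof.stats      # dump per-lock acquisition/contention stats
```

Enabling collection only for the duration of the benchmark keeps unrelated lock traffic out of the stats.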
_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
On Fri, 3 Oct 2008, Danny Braniss wrote:
OK, so it looks like this was almost certainly the rwlock change. What
happens if you pretty much universally substitute the following in
udp_usrreq.c:
Currently        Change to
---------        ---------
INP_RLOCK
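The substitution table above is cut off after its first row; a hypothetical reading, assuming read-lock macros are being swapped for their write-lock counterparts (the right-hand column below is a guess, not quoted from the thread), would be:

```
/* Hypothetical sketch of the suggested udp_usrreq.c substitution;
 * only INP_RLOCK survives in the quoted table, the rest is assumed:
 *
 *   Currently            Change to
 *   ---------            ---------
 *   INP_RLOCK(inp)       INP_WLOCK(inp)
 *   INP_RUNLOCK(inp)     INP_WUNLOCK(inp)
 */
```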
On Fri, 26 Sep 2008, Danny Braniss wrote:
after more testing, it seems it's related to changes made between Aug 4 and
Aug 29, ie a kernel built on Aug 4 works fine, Aug 29 is slow. I'll now try
and close the gap.
I think this is the best way forward -- skimming August changes, there are a
Danny Braniss wrote:
Grr, there goes the binary-search theory out the window.
So far I have managed to pinpoint the day that the changes affect the
throughput: between 18/08/08 00:00:00 and 19/08/08 00:00:00
(I assume cvs's date is GMT).
now would be a good time for some help,
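Closing the window by source date can be done with cvs's -D checkout flag (a sketch; the repository path and branch tag are the stock FreeBSD ones, adjust as needed):

```shell
# Check out 7-STABLE sources as of a given date (cvs treats -D dates
# as local time unless a zone is given, hence the explicit GMT):
cvs -d /home/ncvs co -r RELENG_7 -D "2008-08-18 00:00 GMT" src
# Rebuild and install the kernel from that snapshot:
cd /usr/src
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
```

Repeating this while halving the date window narrows the regression to a single day, and from there to individual commits.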
:-vfs.nfs.realign_test: 22141777
:+vfs.nfs.realign_test: 498351
:
:-vfs.nfsrv.realign_test: 5005908
:+vfs.nfsrv.realign_test: 0
:
:+vfs.nfsrv.commit_miss: 0
:+vfs.nfsrv.commit_blks: 0
:
: changing them did nothing - or at least with respect to nfs throughput
:how can I see the IP fragment reassembly statistics?
:
:thanks,
: danny
netstat -s
Also look for unexpected dropped packets, dropped fragments, and
errors during the test and such, they are counted in the statistics
as well.
-Matt
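To answer the quoted question concretely: the fragment-reassembly and drop counters live in the netstat -s output; the grep pattern below is just one way to narrow it down (counter names vary a little between releases):

```shell
# All protocol counters, filtered for fragments/drops/errors:
netstat -s | grep -i -E 'fragment|dropped|error'
# Or just the IP statistics, where the reassembly counters live:
netstat -s -p ip
```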
Danny Braniss wrote:
I know, but I get about 1Mb/s, which seems somewhat low :-(
If you don't tell iperf how much bandwidth to use for a UDP test, it
defaults to 1Mbps.
See the -b option.
http://dast.nlanr.net/projects/Iperf/iperfdocs_1.7.0.php#bandwidth
--eli
--
Eli Dart
Hi,
There seems to be some serious degradation in NFS performance.
Under 7.0 I get about 90 MB/s (on write), while on the same machine
under 7.1 it drops to 20!
Any ideas?
thanks,
danny
There seems to be some serious degradation in performance.
Under 7.0 I get about 90 MB/s (on write), while, on the same machine
under 7.1 it drops to 20!
Any ideas?
Can you compare performance with TCP?
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
On Fri, Sep 26, 2008 at 10:04:16AM +0300, Danny Braniss wrote:
Hi,
There seems to be some serious degradation in performance.
Under 7.0 I get about 90 MB/s (on write), while, on the same machine
under 7.1 it drops to 20!
Any ideas?
1) Network card driver changes,
2) This could be
On Fri, 2008-09-26 at 10:04 +0300, Danny Braniss wrote:
Hi,
There seems to be some serious degradation in performance.
Under 7.0 I get about 90 MB/s (on write), while, on the same machine
under 7.1 it drops to 20!
Any ideas?
The scheduler has been changed to ULE, and NFS has
On Friday 26 September 2008 03:04:16 am Danny Braniss wrote:
Hi,
There seems to be some serious degradation in performance.
Under 7.0 I get about 90 MB/s (on write), while, on the same machine
under 7.1 it drops to 20!
Any ideas?
thanks,
danny
Perhaps use nfsstat to see if
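The usual nfsstat invocations on FreeBSD look like this (flags per nfsstat(1); zeroing the counters between runs makes before/after comparison easier):

```shell
nfsstat -c   # client-side RPC counters (retries, timeouts, cache hits)
nfsstat -s   # server-side counters
nfsstat -z   # zero the counters, then re-run the benchmark and re-read
```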
On Fri, Sep 26, 2008 at 04:35:17PM +0300, Danny Braniss wrote:
I know, but I get about 1Mb/s, which seems somewhat low :-(
Since UDP has no way to know how fast to send, you need to tell iperf
how fast to send the packets. I think 1Mbps is the default speed.
David.
David,
You beat me to it.
Danny, read the iperf man page:
  -b, --bandwidth n[KM]
      set target bandwidth to n bits/sec (default 1 Mbit/sec). This
      setting requires UDP (-u).
The page needs updating, though. It should read -b, --bandwidth
n[KMG]. It also does NOT
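Putting the advice together, a minimal UDP test at a realistic target rate might look like this (hostname and rate are placeholders; iperf 1.7/2.x option syntax):

```shell
# Server side: listen for UDP test traffic.
iperf -s -u
# Client side: offer 700 Mbit/s of UDP instead of the 1 Mbit/s default.
iperf -c nfs-server -u -b 700M
# For comparison, a TCP test needs no -b; TCP finds its own rate:
iperf -c nfs-server
```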