Hi!
I've done so, with some interesting results. Source on
http://ds9a.nl/tmp/recvtimings.c - be careful to adjust the '3000' divider
to your CPU frequency if you care about absolute numbers!
These are two groups, each consisting of 10 consecutive nonblocking UDP
recvfroms, with 10
On Sun, Feb 25, 2007 at 11:41:54AM +0100, Pavel Machek ([EMAIL PROTECTED])
wrote:
I've done so, with some interesting results. Source on
http://ds9a.nl/tmp/recvtimings.c - be careful to adjust the '3000' divider
to your CPU frequency if you care about absolute numbers!
These are
On Wed, Feb 21, 2007 at 02:06:34PM +0300, Evgeniy Polyakov wrote:
Here is data for 50 bytes reading for essentially idle machine
(core duo 2.4 ghz):
delta for syscall: 3326961404-3326969261: 7857 cycles = 3.273750 us
Can you oprofile it too?
-Andi
Arjan van de Ven wrote:
also.. running vmstat 3 and looking at the cs column is interesting;
it shouldn't be above 50 or so in idle (well not above 10 but our
userland stinks too much for that)
I average 6 or so with my normal configuration.
Chuck kill the daemons Ebbert
On Mon, Feb 19, 2007 at 03:56:23PM -0800, Stephen Hemminger wrote:
Linux 2.6.20-rc4 appears to take 4 microseconds on my P4 3GHz for a
non-blocking UDPv4 recvfrom() call, both on loopback and ethernet.
Linux 2.6.18 on my 64 bit Athlon64 3200+ takes a similar amount of time.
recvfrom
bert hubert [EMAIL PROTECTED] writes:
Hi people,
I'm trying to save people the cost of buying extra servers by making
PowerDNS (GPL) ever faster, but I've hit a rather fundamental problem.
Linux 2.6.20-rc4 appears to take 4 microseconds on my P4 3GHz for a
non-blocking UDPv4 recvfrom()
On Tue, Feb 20, 2007 at 11:50:13AM +0100, Andi Kleen wrote:
P4s are pretty slow at taking locks (or rather doing atomic operations)
and there are several of them in this path. You could try it with a UP
kernel. Actually hotunplugging the other virtual CPU should be sufficient
with recent
On Tue, Feb 20, 2007 at 05:27:14PM +0100, bert hubert ([EMAIL PROTECTED]) wrote:
I've done so, with some interesting results. Source on
http://ds9a.nl/tmp/recvtimings.c - be careful to adjust the '3000' divider
to your CPU frequency if you care about absolute numbers!
These are two groups,
On Tuesday 20 February 2007 17:27, bert hubert wrote:
On Tue, Feb 20, 2007 at 11:50:13AM +0100, Andi Kleen wrote:
P4s are pretty slow at taking locks (or rather doing atomic operations)
and there are several of them in this path. You could try it with a UP
kernel. Actually hotunplugging
On Tue, Feb 20, 2007 at 07:41:25PM +0300, Evgeniy Polyakov wrote:
It can be a recvfrom-only problem - syscall overhead on my p4 (core duo,
debian testing) is about 300 usec - to test I ran read('/dev/zero', data,
0) in a loop.
nsec I assume?
The usec numbers for read(fd, c, 0) where fd is
On Tue, Feb 20, 2007 at 06:02:32PM +0100, bert hubert ([EMAIL PROTECTED]) wrote:
On Tue, Feb 20, 2007 at 07:41:25PM +0300, Evgeniy Polyakov wrote:
It can be a recvfrom-only problem - syscall overhead on my p4 (core duo,
debian testing) is about 300 usec - to test I ran read('/dev/zero', data,
On Tue, Feb 20, 2007 at 08:11:20PM +0300, Evgeniy Polyakov ([EMAIL PROTECTED])
wrote:
I would try it today - but it is a bit late in Moscow already - and
there are some things to complete yet. So, tomorrow I will create a patch
and run it, but I seriously doubt that there is _that_ high
On Tue, Feb 20, 2007 at 07:41:25PM +0300, Evgeniy Polyakov wrote:
On Tue, Feb 20, 2007 at 05:27:14PM +0100, bert hubert ([EMAIL PROTECTED])
wrote:
I've done so, with some interesting results. Source on
http://ds9a.nl/tmp/recvtimings.c - be careful to adjust the '3000' divider
to your CPU
On Tue, Feb 20, 2007 at 09:48:59PM +0300, Evgeniy Polyakov wrote:
Likely the first overhead is related to cache population or gamma-ray radiation.
If it happens only once (it does in my test), then everything is OK, I
think. Bert, how frequently do you get that long recvfrom()?
I have plotted the average
On Tue, Feb 20, 2007 at 08:33:20PM +0100, bert hubert wrote:
I'm investigating this further for other system calls. It might be that my
measurements are off, but it appears that even a slight delay between calls
incurs a large penalty.
Make sure your system is idle. Userspace bloat means that
On Tue, Feb 20, 2007 at 02:40:40PM -0500, Benjamin LaHaise wrote:
Make sure your system is idle. Userspace bloat means that *lots* of idle
activity occurs in between timer ticks on recent distributions -- all those
You hit the nail on the head. I had previously measured with X shut down,
On Tue, 20 Feb 2007 21:45:05 +0100
bert hubert [EMAIL PROTECTED] wrote:
On Tue, Feb 20, 2007 at 02:40:40PM -0500, Benjamin LaHaise wrote:
Make sure your system is idle. Userspace bloat means that *lots* of idle
activity occurs in between timer ticks on recent distributions -- all those
I measure a huge slope, however. Starting at 1usec for back-to-back system
calls, it rises to 2usec after interleaving calls with a count to 20
million.
4usec is hit after 110 million.
The graph, with semi-scientific error-bars is on
http://ds9a.nl/tmp/recvfrom-usec-vs-wait.png
The code to
On Tue, Feb 20, 2007 at 02:02:00PM -0800, Rick Jones wrote:
The slope appears to be flattening out the farther to the right it
goes. Perhaps that is the length of time it takes to take all the
requisite cache misses.
The rate of flattening out appears to correlate with the number of
I'm trying to figure out which processes have the most impact, I had already
killed anything non-essential. But that still leaves 140 pids.
btw if you have systemtap on your system you can see who is doing evil
with
http://www.fenrus.org/cstop.stp
also.. running vmstat 3 and looking at the
On 2/21/07, bert hubert [EMAIL PROTECTED] wrote:
I'm trying to figure out which processes have the most impact, I had already
killed anything non-essential. But that still leaves 140 pids.
Bert
That sounds like way too many pids. I run a script to shut down processes
when I do testing as
Hi people,
I'm trying to save people the cost of buying extra servers by making
PowerDNS (GPL) ever faster, but I've hit a rather fundamental problem.
Linux 2.6.20-rc4 appears to take 4 microseconds on my P4 3GHz for a
non-blocking UDPv4 recvfrom() call, both on loopback and ethernet.
Linux
On Tue, 20 Feb 2007 00:14:47 +0100
bert hubert [EMAIL PROTECTED] wrote:
Hi people,
I'm trying to save people the cost of buying extra servers by making
PowerDNS (GPL) ever faster, but I've hit a rather fundamental problem.
Linux 2.6.20-rc4 appears to take 4 microseconds on my P4 3GHz for