On 22/05/07, Stefano Gambetta <[EMAIL PROTECTED]> wrote:
> Seb, thanks for the reply.
> 1) About 1000 q/s is what I get with JMeter, while with queryperf I can
> saturate the server throughput at about 8000 q/s (with 30% CPU on the
> sampler host).
Still pretty good performance from a single thread...
How many threads does queryperf use, or is it single-threaded?
> 2) It is NOT a network problem, for the following reasons:
I did not say it was a network problem.
I meant that the major contribution to the sample time was likely to
be the network.
> - bandwidth is very far from saturation
> - with queryperf I can reach 8x throughput (and still far from network
> saturation)
> I've tried adding more threads (up to 3); the throughput increases up to a
> point (about 1500-1700 qps) but not further, because the CPU is already
> saturated.
> 3) Variables: yes, in the Java Request I use two of them (name_server and
> host_to_be_resolved). Can that be a source of CPU processing on the sampler?
It will definitely require more CPU, as there is more to do, but I need
to check whether the increase is significant.
> 4) Your test with the Java Request is interesting; did you use variables?
No; but I will try that.
> 5) Sun JDK SE 6 (Debian package)
> 6) Actually I already use sampleStart() and sampleEnd(); sorry, I didn't
> remember that.
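For reference, the sampleStart()/sampleEnd() pattern as a complete sampler might look like the sketch below. It assumes JMeter 2.2's Java Request API (the JMeter jars on the classpath); doQuery() is a hypothetical stand-in for the actual dnsjava lookup.

```java
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

// Sketch of a Java Request sampler using sampleStart()/sampleEnd(),
// so the SampleResult gets its timestamps and elapsed time set properly.
public class DnsSampler extends AbstractJavaSamplerClient {
    public SampleResult runTest(JavaSamplerContext ctx) {
        SampleResult result = new SampleResult();
        result.setSampleLabel("dns-query");
        result.sampleStart();                 // records the start time
        boolean ok = doQuery(ctx.getParameter("name_server"),
                             ctx.getParameter("host_to_be_resolved"));
        result.sampleEnd();                   // records end time and elapsed
        result.setSuccessful(ok);
        return result;
    }

    // Hypothetical placeholder for the actual dnsjava lookup.
    private boolean doQuery(String server, String host) {
        return server != null && host != null;
    }
}
```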
> 7) Interesting test plan; why do you think it would be better?
I don't think the plan would be better.
The idea was to see if you got similar results to mine when using the
Java Request / JavaTest sampler provided with JMeter.
Your host seems similar to mine in CPU power, but the OS is different.
> I hope this evening I'll have the time to test.
> Thanks again!
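As a side note on the direct-socket approach discussed further down (dnsjava sending each query straight to the server, bypassing the resolver cache): a minimal DNS A-query encoder in plain Java looks roughly like this. It is only a sketch per RFC 1035 (no EDNS, no name compression); DnsQuery and encode() are hypothetical names, not the dnsjava API.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Sketch: encode a minimal DNS A-record query packet (RFC 1035).
// This is the kind of packet dnsjava sends over a socket directly to the
// name server, bypassing the system resolver and its cache.
public class DnsQuery {
    public static byte[] encode(int id, String name) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Header: ID, flags (RD set), QDCOUNT=1, AN/NS/ARCOUNT=0
        out.write((id >> 8) & 0xff);
        out.write(id & 0xff);
        out.write(0x01); out.write(0x00);      // flags: recursion desired
        out.write(0x00); out.write(0x01);      // QDCOUNT = 1
        for (int i = 0; i < 6; i++) out.write(0x00);
        // Question: QNAME as length-prefixed labels ending with a 0 byte
        for (String label : name.split("\\.")) {
            byte[] b = label.getBytes(StandardCharsets.US_ASCII);
            out.write(b.length);
            out.write(b, 0, b.length);
        }
        out.write(0x00);                       // root label terminates QNAME
        out.write(0x00); out.write(0x01);      // QTYPE  = A
        out.write(0x00); out.write(0x01);      // QCLASS = IN
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] q = encode(0x1234, "example.com");
        // 12-byte header + 13-byte QNAME + 4 bytes QTYPE/QCLASS = 29 bytes
        System.out.println(q.length + " bytes");
    }
}
```

Sending it is then just a DatagramSocket.send() to port 53 of the server; dnsjava of course does all of this, plus response parsing, for you.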
2007/5/22, sebb <[EMAIL PROTECTED]>:
>
> On 22/05/07, Stefano Gambetta <[EMAIL PROTECTED]> wrote:
> > Hello,
> > first of all thanks for the prompt replies!
> >
> > I'll try to answer to all the asked questions.
> >
> > Seb:
> >
> > 1) I get a max throughput of about 1000 queries sent/sec. I measured the
> > throughput by counting the number of lines in the result file.
> > I also measured it with sar; the figures more or less agree.
>
> Is this with JMeter or the C program?
>
> If JMeter, then I don't think you have a problem.
>
> If not JMeter, then what throughput did you get with JMeter?
>
> > 2) Query response time is very small, in the interval [0,2] ms. The load
> > test scenario regards an authoritative DNS server, which means that all
> > the queries are answered from its cache. This is by design. The same
> > query is sent in every test.
>
> So the main delays will probably be due to network speed.
>
> > 3) what do you mean by functions or variables? The run method of my test
>
> I mean ${variable} or ${__function()} references in the test plan.
>
> > class is a simple:
> >
> > start_time = System.currentTimeMillis();
> > lookup.run(); // send the query
> > stop_time = System.currentTimeMillis();
> > results.setTime(stop_time - start_time);
>
> You should really use
>
> results.sampleStart();
> lookup.run(); // send the query
> results.sampleEnd();
>
> Otherwise the result is not set up properly.
>
> > 4) Java 6, Linux 2.6.18, Jmeter 2.2
>
> Which supplier of Java?
> Some versions of Java seem to be very inefficient compared with Sun
> Java on Windows.
>
> > Ian:
> >
> > 1) I'm using only one thread, since it is enough to saturate the sampler
> > CPU. This machine has a single CPU (AMD Athlon XP 1600+).
> >
> > 2) There is NO think time (neither in queryperf nor in my JMeter
> > sampler), by design. I'm modelling the system as an open queueing center
> > with a transactional load.
> >
> > Seb:
> >
> > 1) What kind of test did you perform? Systems, CPUs, DNS servers,
> > threads?
>
> Windows XP, Sun Java 1.4.2_13, Pentium 1.6GHz 1GB RAM.
>
> No DNS servers, 1 thread
>
> > How, in Java, did you perform the name resolution? Are you sure you are
> > not measuring your resolver's performance, instead of the server's?
> > I have used the dnsjava package just for this purpose, since it sends
> > each query directly to the DNS server using sockets, instead of using
> > system resolver libs, which cache the replies.
>
> I did not use DNS; I just used Java Request.
>
> > 2) The slowdown when writing the result file is interesting; I will try
> > using the Summary Report listener!
>
> > I'm interested to hear how you reached such a high throughput!
>
> > Thanks again; if you have other questions, just let me know :)
>
> I suggest you try the following test on your host:
>
> Thread Group 1 thread, 1 loop
> + Loop Controller 1000
> + + Java Request time=0 mask=0 (name = Java0)
> + Loop Controller 1000
> + + Java Request time=10 mask=0 (name = Java10)
> + Summary Report
>
> and report the results.
>
> [I can e-mail the plan to you privately if you want]
>
> > 2007/5/22, sebb <[EMAIL PROTECTED]>:
> > >
> > > I've just done a test using the Java Request / Java Test sampler in
> > > JMeter 2.2.
> > >
> > > If I set the sleep_time and sleep_mask to 0, I can get a throughput of
> > > 20,000-30,000 per second. Obviously this uses around 100% CPU.
> > >
> > > For a sleep_time of 10, I get an average elapsed time of 15ms, and a
> > > throughput of 64/sec, which equates to 15.625ms per sample. The average
> > > figure of 15ms is rounded down, so this agrees with there being very
> > > little overhead. CPU usage is minimal.
> > >
> > > Note that I was using only the Summary Report listener.
> > >
> > > If the output is written to a file, then the throughput for the 0ms
> > > sleep drops to 3,500-7,000/sec. However, the rate for the 10ms sleep
> > > samplers is only marginally affected, e.g. 63.9 for CSV and 63.8 for
> > > XML with 1000 samples.
> > >
> > > The above suggests that JMeter is behaving well for the limited tests
> > > I performed.
> > >
> > > S
> > > On 22/05/07, sebb <[EMAIL PROTECTED]> wrote:
> > > > There could perhaps be a problem...
> > > >
> > > > What throughput did you get?
> > > > What was the average sampler response time?
> > > > Are you using any functions or variables?
> > > > Which version of Java, and which OS?
> > > >
> > > > S.
> > > > On 22/05/07, Stefano Gambetta <[EMAIL PROTECTED]> wrote:
> > > > > Hello,
> > > > > I'm evaluating JMeter as a load testing framework for my
> > > > > performance analysis.
> > > > >
> > > > > I'm going to load test a DNS server. To do so, I've written my own
> > > > > Java Request sampler, which issues DNS requests using the dnsjava
> > > > > package.
> > > > >
> > > > > My test configuration is very simple: one thread group (one thread
> > > > > configured, for example), one Java Request sampler, one Simple Data
> > > > > Writer as listener.
> > > > >
> > > > > The problem basically is that with that configuration JMeter can
> > > > > only reach a very low throughput (queries sent/sec), since during
> > > > > the tests the CPU gets saturated.
> > > > >
> > > > > The well-known BIND queryperf tool can generate a load about six
> > > > > times higher while using less than 30% CPU. Since this tool is a
> > > > > simple C program, I expected greater performance from it, but that
> > > > > difference seems exaggerated.
> > > > >
> > > > > I assumed that my sampler class was not well optimized, so I did
> > > > > some quick profiling of JMeter during the test, in order to
> > > > > discover areas for improvement.
> > > > >
> > > > > I think I saw an interesting result: most of the sampler thread's
> > > > > time is spent in one method of one particular JMeter class, and
> > > > > only a minor part is spent inside the Java Request sampler. The
> > > > > method in question is:
> > > > >
> > > > > SamplePackage.setRunningVersion()
> > > > >
> > > > >
> > > > > Given those results, I have some questions:
> > > > >
> > > > > - what does that method do?
> > > > > - is it possible to reduce its overhead?
> > > > >
> > > > > Thanks for any replies
> > > > >
> > > >
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: [EMAIL PROTECTED]
> > > For additional commands, e-mail: [EMAIL PROTECTED]
> > >
> > >
> >
>