You really only have one sample on Server A without NextID blocks, so
perhaps a few more samples there would show it's really the same as when
you used NextID blocks on that server. That would indicate you weren't
hitting the bottleneck this feature alleviates. I suspect something is
different on Server B that caused it to be significantly slower, so you
might try several tests there without NextID blocks.

Chad Hall  
(501) 342-2650

________________________________

From: Action Request System discussion list(ARSList)
[mailto:[EMAIL PROTECTED] On Behalf Of Rick Cook
Sent: Wednesday, May 28, 2008 3:51 PM
To: arslist@ARSLIST.ORG
Subject: Re: Next ID Blocking = faster submits?

 

OK, so it seems I wasn't hitting the server hard enough to see a real
performance increase.  But why would I see a performance DECREASE?  Why
wouldn't reducing the number of DB calls and locks show up as a
performance increase regardless of load, or at least be a push at lower
load levels?

Rick

On Wed, May 28, 2008 at 1:39 PM, LJ Longwing <[EMAIL PROTECTED]>
wrote:


Rick,

As Chad mentioned, I've only heard this third-hand and never
experienced it myself, but my understanding is that the next-ID block
feature was implemented because, when you're submitting millions of
records an hour (as sometimes happens in the larger shops), handing out
a single ID at a time isn't enough to ensure all of the IDs get handed
out in time.  So it addresses a contention issue more than it is a
general performance enhancement: if you have 30 threads, each taking a
block of 100 IDs, you make only 30 calls to get new IDs instead of
3,000, and that reduction in update calls against that one table
removes it as a bottleneck.
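The 30-threads arithmetic above can be sketched as a toy simulation. The `IdAllocator` class below is purely illustrative (not actual AR System code): one lock-guarded counter stands in for the row lock on the `arschema` next-ID row, and `db_calls` counts how many times the shared row would be updated.

```python
import threading

class IdAllocator:
    """Toy stand-in for the AR System next-ID counter: a shared value
    guarded by a lock, modeling the row-level lock on arschema."""

    def __init__(self):
        self._next_id = 1
        self._lock = threading.Lock()
        self.db_calls = 0  # how many times the shared counter was updated

    def fetch_block(self, size):
        # One "UPDATE arschema SET nextId = nextId + <size>" per call.
        with self._lock:
            self.db_calls += 1
            start = self._next_id
            self._next_id += size
            return range(start, start + size)

def run(threads, ids_per_thread, block_size):
    """Return the number of shared-counter updates needed for all
    threads to obtain ids_per_thread IDs each."""
    alloc = IdAllocator()

    def worker():
        got = 0
        while got < ids_per_thread:
            got += len(alloc.fetch_block(block_size))

    ts = [threading.Thread(target=worker) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return alloc.db_calls

# 30 threads, each needing 100 IDs:
print(run(30, 100, 1))    # 3000 updates against the shared counter
print(run(30, 100, 100))  # 30 updates
```

The 100x reduction in updates only matters if that one row is actually the point of contention, which is exactly the question being tested in this thread.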

 

________________________________

From: Action Request System discussion list(ARSList)
[mailto:[EMAIL PROTECTED] On Behalf Of Rick Cook

Sent: Wednesday, May 28, 2008 2:02 PM
To: arslist@ARSLIST.ORG
Subject: Next ID Blocking = faster submits?

I've been doing some testing to see how much this really helps
performance, and my preliminary numbers were surprising and
disappointing.  NOTE: I don't think a single sample is enough from
which to draw a global conclusion.  HOWEVER... I am concerned enough to
ask some questions.



I have two new servers: equal hardware, the same OS (RHEL 5) and AR
System 7.1 p2, the same code and DB version, and similar (but separate)
databases.

I ran an Escalation that submits hundreds of records into a relatively
small form (perhaps 25 fields) that previously contained no records.
There was no other load or user on either server.

Server A is set up without the NextId blocking.
Server B is set up WITH the NextId blocking set for 100 at the server
level but NOT on the form itself, threaded escalations, and the Status
History update disabled for the form in question.

I went through the SQL logs and tracked the time difference between each
"UPDATE arschema SET nextId = nextId + <1/100> WHERE schemaId = 475"
entry.  The results?

Server A: Each fetch of a single NextId was separated by an average of
.07 seconds, which is 7 seconds per hundred.

Server B: Each fetch of 100 NextIds was separated by a mean of
12.4 seconds per entry (i.e., per hundred).  A second run showed an
average of 12.8 seconds, so I'm fairly confident that's a good number.
The fastest was 5.3 seconds, the slowest almost 40 seconds.
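The interval measurement described above can be sketched like this. The log-line format below is hypothetical (real AR System SQL logs have their own timestamp layout, so the parsing would need adjusting), but the idea is the same: pull the timestamp off each next-ID UPDATE entry and difference consecutive ones.

```python
from datetime import datetime

# Hypothetical SQL-log excerpt; timestamps and layout are illustrative only.
log = [
    "<SQL> 15:02:01.000 UPDATE arschema SET nextId = nextId + 100 WHERE schemaId = 475",
    "<SQL> 15:02:13.400 UPDATE arschema SET nextId = nextId + 100 WHERE schemaId = 475",
    "<SQL> 15:02:25.800 UPDATE arschema SET nextId = nextId + 100 WHERE schemaId = 475",
]

def intervals(lines, marker="UPDATE arschema SET nextId"):
    """Seconds between consecutive next-ID fetches in a SQL log."""
    stamps = [datetime.strptime(line.split()[1], "%H:%M:%S.%f")
              for line in lines if marker in line]
    return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]

gaps = intervals(log)
print(sum(gaps) / len(gaps))  # mean seconds per 100-ID fetch -> 12.4
```

With block size 100, each gap is the cost of a hundred submits, so the mean gap is directly comparable to "seconds per hundred" on the single-ID server.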

Then, just to eliminate the possibility that the environments were the
issue, I turned on NextId blocking on Server A with the same parameters
I had set for Server B.  The result?  An average of 8 seconds per
hundred, though if I throw out the first two gets (which were 11 sec.
each), the remaining runs average around 7.25 seconds per hundred.
Even in a best-case scenario, it's still slightly slower than doing it
singly.

Across all three sets of blocked runs on the two servers, the median
value was 8 seconds and the mean 11 seconds.  Again, the time it takes
to "get" 100 NextIds one at a time was 7 seconds per hundred.

So the newer, "faster" feature actually appears no faster, and in some
cases slower, than the process it's supposed to have improved.

Maybe it's not hitting the DB as often, but then why isn't the
elimination of 99 out of every 100 DB calls reflected in faster overall
submit times at the AR System level?  Am I doing something wrong?  Are
my expectations unreasonable?  Is there a white paper or other data
that shows empirically what improvement one should expect from
deploying this new functionality?

Is anyone seeing improved performance because of this feature?  I don't
see it.

Rick

__Platinum Sponsor: www.rmsportal.com ARSlist: "Where the Answers Are"
html___ 


