If I had to guess...and this is purely a guess...it's possible that the
additional overhead of managing the in-memory list of 100 ids made each
retrieval slower...but that slowdown should be insignificant under the kind
of load that would make this table a bottleneck in the first place.

  _____  

From: Action Request System discussion list(ARSList)
[mailto:[EMAIL PROTECTED] On Behalf Of Rick Cook
Sent: Wednesday, May 28, 2008 2:51 PM
To: arslist@ARSLIST.ORG
Subject: Re: Next ID Blocking = faster submits?


** OK, so it seems that I wasn't hitting the server hard enough to see a
real performance increase.  Why would I see a performance DECREASE?  Why
wouldn't the fact that I'm reducing the number of DB calls and locks show up
as a performance increase regardless of load, or at least be a push at lower
load levels?

Rick


On Wed, May 28, 2008 at 1:39 PM, LJ Longwing <[EMAIL PROTECTED]> wrote:


** 
Rick,
As Chad mentioned, I have only heard this third-hand and never experienced
it myself, but my understanding is that the next-id block feature was
implemented because, when you are submitting millions of records an hour (as
sometimes happens in the larger shops), handing out a single id at a time
isn't fast enough to keep up...so it's more a contention fix than a
performance enhancement. If you have 30 threads, each needing 100 ids, you
make only 30 calls to get new ids instead of 3000, and that reduction in
UPDATE calls against that one table removes it as a bottleneck.
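The arithmetic LJ describes can be sketched in a toy simulation. This is not AR System source; `NextIdTable` is a hypothetical stand-in for the `arschema` nextId row, and each `reserve()` call simulates one DB UPDATE:

```python
# Sketch: how block allocation cuts contention on a shared next-id counter.
# 30 workers each needing 100 ids cost 30 counter updates with a block size
# of 100, versus 3000 updates when fetching one id at a time.
import threading

BLOCK = 100

class NextIdTable:
    """Hypothetical stand-in for the arschema nextId row."""
    def __init__(self):
        self.next_id = 1
        self.update_calls = 0
        self._lock = threading.Lock()

    def reserve(self, count):
        with self._lock:
            self.update_calls += 1           # one simulated UPDATE per call
            start = self.next_id
            self.next_id += count
            return start

def worker(table, block_size, n_records, out):
    pool = iter(())                          # local pool starts empty
    ids = []
    for _ in range(n_records):
        nxt = next(pool, None)
        if nxt is None:                      # pool exhausted: reserve a block
            start = table.reserve(block_size)
            pool = iter(range(start, start + block_size))
            nxt = next(pool)
        ids.append(nxt)
    out.extend(ids)

table = NextIdTable()
results = []
threads = [threading.Thread(target=worker, args=(table, BLOCK, 100, results))
           for _ in range(30)]
for t in threads: t.start()
for t in threads: t.join()

print(table.update_calls)    # 30 (would be 3000 with block size 1)
print(len(set(results)))     # 3000 unique ids, no duplicates
```

Note the savings only show up when many threads compete for the counter; with one idle thread, block allocation just adds bookkeeping.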

  _____  


From: Action Request System discussion list(ARSList)
[mailto:[EMAIL PROTECTED] On Behalf Of Rick Cook

Sent: Wednesday, May 28, 2008 2:02 PM
To: arslist@ARSLIST.ORG
Subject: Next ID Blocking = faster submits?


** I've been doing some testing to see how much this really helps
performance, and my preliminary numbers were surprising and disappointing.
NOTE:  I don't think a single sample is enough from which to draw a global
conclusion.  HOWEVER...I am concerned enough to ask some questions. 


I have two new servers: equal hardware, same OS (RHEL 5), same AR System 7.1
p2, same code, same DB version, and similar (but separate) databases.

I ran an Escalation that submits hundreds of records into a relatively small
form (perhaps 25 fields) that previously contained no records.  There was no
other load or user on either server.

Server A is set up without the NextId blocking.
Server B is set up WITH the NextId blocking set for 100 at the server level
but NOT on the form itself, threaded escalations, and the Status History
update disabled for the form in question.

I went through the SQL logs and tracked the time difference between each
"UPDATE arschema SET nextId = nextId + <1/100> WHERE schemaId = 475" entry.
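That gap measurement can be reproduced with a short script. This is a hypothetical sketch: the timestamp format (HH:MM:SS.mmm at the start of each line) and the sample lines are assumptions, since actual AR System SQL-log layouts vary; adjust the parsing to your own logs:

```python
# Hypothetical sketch: compute the gaps between successive
# "UPDATE arschema SET nextId" entries in a SQL log.
# Assumes each matching line starts with an HH:MM:SS.mmm timestamp.
import re
from datetime import datetime
from statistics import mean, median

def gaps_between_updates(log_lines, pattern="UPDATE arschema SET nextId"):
    times = []
    for line in log_lines:
        if pattern in line:
            m = re.match(r"(\d{2}:\d{2}:\d{2}\.\d{3})", line)
            if m:
                times.append(datetime.strptime(m.group(1), "%H:%M:%S.%f"))
    # seconds elapsed between each consecutive pair of UPDATE entries
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]

# Made-up sample log lines for illustration
sample = [
    "14:02:00.000 UPDATE arschema SET nextId = nextId + 100 WHERE schemaId = 475",
    "14:02:12.000 UPDATE arschema SET nextId = nextId + 100 WHERE schemaId = 475",
    "14:02:25.000 UPDATE arschema SET nextId = nextId + 100 WHERE schemaId = 475",
]
g = gaps_between_updates(sample)
print(g)                   # [12.0, 13.0]
print(mean(g), median(g))  # 12.5 12.5
```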
The results?

Server A: Each fetch of a single NextId was separated by an average of .07
seconds, which is 7 seconds per hundred.

Server B: Each fetch of 100 NextIds was separated by a mean value of 12.4
seconds per entry (hundred).  A second run showed an average of 12.8
seconds, so I'm fairly confident that's a good number.  The fastest was 5.3
seconds, the slowest almost 40 seconds.  

Then just to eliminate the possibility that the environments were the issue,
I turned on the NextId blocking on Server A to the same parameters I had set
for Server B.  Result?  Average of 8 seconds per hundred, though if I throw
out the first two gets (which were 11 sec. ea), the remaining runs average
around 7.25 seconds per hundred.  Even in a best-case scenario, it's still
slightly slower than doing it singly.

The median across all three data sets from the two servers was 8 seconds,
and the mean was 11 seconds.  Again, getting 100 NextIds one at a time took
7 seconds per hundred.

So the newer, "faster" feature actually appears no faster, and in some cases
slower, than the process it's supposed to have improved.

Maybe it's not hitting the DB as often, but then why are we not seeing the
omission of 99 DB calls reflected in faster overall submit times at the AR
System level?  Am I doing something wrong?  Are my expectations
unreasonable?  Is there some data in a white paper or something that shows
empirically what improvements one should expect from deploying this new
functionality?

Is anyone seeing improved performance because of this feature?  I don't see
it.

Rick

__Platinum Sponsor: www.rmsportal.com ARSlist: "Where the Answers Are"
html___ 

_______________________________________________________________________________
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org