Yes, I have :-)  All those event systems send events to the arserver.

On Oct 12, 2017 7:19 PM, "Dave Shellman" <adshell...@gmail.com> wrote:

> Randeep,
>
> It seems that you have never worked in an environment where HPOV or other
> monitoring applications flood a server with events, generating hundreds or
> thousands of records in a short period of time.
>
> Dave
>
> On Thu, Oct 12, 2017 at 10:11 PM Randeep Atwal <ratwals...@gmail.com>
> wrote:
>
>> Rick, everything that generates Next IDs must come through an AR System
>> server, so I don't know what you mean when you say 'If you have multiple
>> outside entities creating large numbers of records at once'.
>>
>> All requests go through the AR Server.
>>
>> I would theorize that other factors come into play once you remove
>> the Next ID bottleneck by using chunks of 100.  Something to bear in mind is
>> that by chunking Next IDs, you raise the scalability of the system
>> (i.e. more records per second get created) - but the per-record
>> response time under high load is likely to get worse: with the Next ID
>> bottleneck gone, other downstream constraints such as CPU and disk speed
>> start to appear that would otherwise have been masked.  The bottleneck
>> effectively acts as a throttle, so your results could easily be
>> misinterpreted.
>>
>> A way to confirm this would be to generate parallel load from a
>> high number of worker threads and time how long it takes to create the same
>> number of records with a Next ID chunk size of 1 versus 100, then
>> compare the times.  Anyway, we are off on a tangent from the original topic.
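The proposed experiment can be sketched roughly as follows. This is a toy simulation, not AR System code: the allocator class, the latency figure, and the thread counts are all invented for illustration, with a lock-protected counter standing in for the database.

```python
import threading
import time

class NextIdAllocator:
    """Toy stand-in for the server's Next ID logic.

    block_size=1 hits the simulated database (a serialized counter with
    artificial latency) on every request; block_size=100 hits it once
    per hundred IDs handed out.
    """

    def __init__(self, block_size, db_latency=0.0005):
        self.block_size = block_size
        self.db_latency = db_latency
        self._db_counter = 0
        self._db_lock = threading.Lock()
        self._cache = []
        self._cache_lock = threading.Lock()

    def _fetch_block(self):
        with self._db_lock:              # serialized, like the DB row update
            time.sleep(self.db_latency)  # simulated round-trip cost
            start = self._db_counter + 1
            self._db_counter += self.block_size
        return list(range(start, start + self.block_size))

    def next_id(self):
        with self._cache_lock:
            if not self._cache:
                self._cache = self._fetch_block()
            return self._cache.pop(0)

def timed_run(block_size, workers=8, records_per_worker=100):
    """Seconds taken for all workers to 'create' their records."""
    alloc = NextIdAllocator(block_size)

    def worker():
        for _ in range(records_per_worker):
            alloc.next_id()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - t0

if __name__ == "__main__":
    print(f"block size 1:   {timed_run(1):.3f}s")
    print(f"block size 100: {timed_run(100):.3f}s")
```

Comparing `timed_run(1)` against `timed_run(100)` measures throughput the way the experiment suggests; per-request response time would have to be sampled separately to see the throttling effect described earlier in the thread.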
>>
>> Thomas, I am actually curious as to why you want to know a Next ID.  What
>> Abhijeet said explains why it doesn't make sense to ask for it.  In
>> workflow scenarios where you want to generate something unique and
>> refer or tie back to something else, you are better off using
>> Application-Generate-GUID ["GUIDPrefix"].
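Application-Generate-GUID is an AR System Run Process command; the idea behind it, sketched outside of workflow in plain Python (the field names and the "AGG" prefix here are invented for illustration), is to mint the unique key yourself up front instead of asking the server what ID it will assign on submit:

```python
import uuid

# Mint the unique reference client-side -- no need to know the Request ID
# the server will assign when the record is created.
ref = "AGG" + uuid.uuid4().hex.upper()  # roughly analogous to Application-Generate-GUID ["AGG"]

# Stamp the same reference on both records so they can be tied together later.
parent_record = {"Reference GUID": ref, "Summary": "parent"}
child_record = {"Parent GUID": ref, "Summary": "child"}
```

Because the key is generated before either record exists, there is no window in which the reference can go stale, which is exactly the problem a "what's the next ID" call would have.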
>>
>>
>>
>>
>>
>>
>> On Thu, Oct 12, 2017 at 5:27 PM, Rick Cook <remedyr...@gmail.com> wrote:
>>
>>> Abhijeet, my results were repeated and conclusive.  Here is the only
>>> explanation for my findings that makes sense:
>>>
>>> There is a time cost to the DB call for Entry IDs.  The cost increases
>>> incrementally the more that are requested at once.  The thought is that 100
>>> individual requests will take substantially more time than retrieving 100
>>> IDs in one request.  That thought has been proven to be correct - it's why
>>> the feature was added.  However, the difference in time was tested against a
>>> system that had multiple sources (NMS tools) attempting to grab multiple Entry
>>> IDs at the same time, not against a normal system without those outside
>>> requests, because that was the first environment reporting the
>>> performance problem.
>>>
>>> On a system where the volume of IDs is high enough to cause a system
>>> bottleneck at the DB ID request, *which requires multiple simultaneous
>>> requests from different sources*, that bottleneck costs more additional
>>> time than is lost by the increased time it takes to request multiple IDs at
>>> one time.  However, on a system that is not bound by the Next ID calls to
>>> the DB, or has only a single source (AR System) requesting IDs, there is no
>>> advantage gained by the multiple requests, because the time "gained" by
>>> having a cached ID is negligible - too small to even notice.  And, as my
>>> testing proved (to my surprise), there was an increasing net cost to system
>>> performance as the number of IDs requested in a single call grew.  This
>>> must be because the time it takes to gather, say, 100 IDs in one call
>>> is actually *higher* than it is to do 100 individual calls - IF AR System
>>> is the only source of those calls.  The simple correlation is that if AR
>>> System is the only thing generating new Request ID requests to the DB, you
>>> don't need, and will not benefit from, a number larger than 1.  If you have
>>> multiple outside entities creating large numbers of records at once, you
>>> very well may benefit from it.
>>>
>>> My tests that showed decreasing performance as the Next Id block grew
>>> were creating about 1.5 million records, but, and this is important - all
>>> were from the same source - AR System workflow.  There were no other
>>> simultaneous demands on the table, and few on the system.  Therefore, the
>>> system of generating one ID at a time had a cumulatively lower transaction
>>> time than it did when it had to wait for multiples to be retrieved and then
>>> allocated individually.  The BMC engineer, whom I know to be very smart and
>>> experienced with Remedy, had no explanation for my results.  I believe that is
>>> because BMC didn't test against a control (normal) system, and therefore
>>> had no data on the setting's effect on one.
>>>
>>> Why they then chose to recommend that setting to all customers is a
>>> mystery to me.
>>>
>>> Rick Cook
>>>
>>> On Thu, Oct 12, 2017 at 4:56 PM, Gadgil, Abhijeet <
>>> abhijeet_gad...@bmc.com> wrote:
>>>
>>>>
>>>> Rick, I do not think that is accurate.
>>>>
>>>> Logically, if the block size is 100, then the server accesses the DB
>>>> once every 100 records to retrieve the next block of IDs.  If it is 1, then
>>>> the server goes to the DB for every record.
>>>>
>>>> The latter cannot be faster.
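That arithmetic can be written down as a one-liner (the function name is invented):

```python
import math

def db_roundtrips(records, block_size):
    """Next-ID fetches the server must make against the DB
    to create `records` records with a given block size."""
    return math.ceil(records / block_size)
```

For example, `db_roundtrips(1000, 100)` is 10 fetches versus `db_roundtrips(1000, 1)` at 1000, which is the entire point of the block setting.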
>>>>
>>>>
>>>>
>>>> Further, to answer the original question, returning the next ID in an
>>>> API call would mean preventing creation of IDs by anyone other than the
>>>> caller until that ID has actually been used -- otherwise the information
>>>> might be obsolete before the client receives it. That is the reason for not
>>>> exposing it via an API.
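The staleness problem can be shown with a toy simulation (the class and the hypothetical peek call are invented; AR System exposes no such API): two clients that "peek" at the next ID both receive the same value, and only one of them will actually get it on create.

```python
class ToyIdServer:
    """Toy server with the hypothetical peek API argued against above."""

    def __init__(self, start=101):
        self._next_id = start

    def peek_next_id(self):
        # Read-only: reserves nothing, so the answer can go stale
        # the moment anyone else creates a record.
        return self._next_id

    def create_record(self):
        assigned = self._next_id
        self._next_id += 1
        return assigned

server = ToyIdServer()
a_peek = server.peek_next_id()   # client A is told 101
b_peek = server.peek_next_id()   # client B is told 101 as well
a_got = server.create_record()   # A really does get 101...
b_got = server.create_record()   # ...so B's peek was already obsolete
```

Making the peek trustworthy would require reserving the ID for the caller, i.e. blocking everyone else's creates in the meantime, which is exactly the locking cost described above.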
>>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Abhijeet
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *From:* Action Request System discussion list(ARSList) [mailto:
>>>> arslist@ARSLIST.ORG] *On Behalf Of *Rick Cook
>>>> *Sent:* 13 October 2017 02:12
>>>> *To:* arslist@ARSLIST.ORG
>>>> *Subject:* Re: What's the NextId?
>>>>
>>>>
>>>>
>>>>
>>>> Here's the skinny on that.  I got this from the engineer who built that
>>>> feature, btw.
>>>>
>>>>
>>>>
>>>> The problem was that system performance was being constricted around
>>>> the action of getting the NextId for a record when multiple sources (say,
>>>> Netcool and HPOV) were throwing tons of requests at the (Incident) form at
>>>> the same time.  The process of getting each Next Entry ID in individual DB
>>>> calls was bottlenecking the process of creating the records.  So that's why
>>>> BMC came up with a way to pre-allocate those in bulk, so that only every N
>>>> times (whatever the number is set to) would an actual call to the DB to get
>>>> Next IDs be necessary.  The transaction time to retrieve 1 or 100 wasn't
>>>> much different, and those customers with multiple programs requiring many
>>>> simultaneous record creations saw a marked performance increase.  It was,
>>>> and is, a good feature add.
>>>>
>>>>
>>>>
>>>> So (then a miracle occurs) BMC decided that this particular
>>>> corner case had a solution that everyone should benefit from, and announced
>>>> that the number should be 100 for *all* customers.  I tested this back in
>>>> 7.6.04 against a RH server, with settings of 1, 10, 100, and 1000, and
>>>> found that performance was actually NEGATIVELY affected the higher the
>>>> number was set.  It wasn't a huge difference (~10%), but it was a clear,
>>>> repeatable one for which BMC's engineer had no explanation.  That is why I
>>>> have always advocated that unless a customer has the specific set of
>>>> circumstances that caused the feature to be beneficial, there is no real
>>>> benefit to setting the number larger than 1.  And there are minor drawbacks
>>>> to doing so, the current subject being one of them.
>>>>
>>>>
>>>>
>>>> Every time I ask someone from BMC to justify a larger number to the
>>>> average customer, they repeat the party line, unaware of the history behind
>>>> the feature.  I will continue to tilt at this windmill until someone at BMC
>>>> shows me some performance testing numbers that justify this setting for the
>>>> entire customer base.
>>>>
>>>> Rick
>>>>
>>>>
>>>>
>>>> On Oct 12, 2017 13:30, "Thomas Miskiewicz" <tmisk...@gmail.com> wrote:
>>>>
>>>>
>>>> So there is no hack to find out the next ID before the record actually
>>>> gets submitted?
>>>>
>>>>
>>>>
>>>> Apart from that, I really don’t understand why BMC makes such a fuss
>>>> around the nextID. Why can’t they just provide a special command 
>>>> GET-NEXTID?
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Thomas
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Oct 12, 2017, at 10:26 PM, LJ LongWing <lj.longw...@gmail.com>
>>>> wrote:
>>>>
>>>>
>>>>
>>>>
>>>> There is no interface that I'm aware of to ask a specific server what
>>>> the next ID it will hand out for a specific form will be.
>>>>
>>>>
>>>>
>>>> On Thu, Oct 12, 2017 at 2:14 PM, Thomas Miskiewicz <tmisk...@gmail.com>
>>>> wrote:
>>>>
>>>> Hello List,
>>>>
>>>> with NextID Block size being set to 100 —> is it possible to find out
>>>> using the API which will be the next Request-ID that the server will assign
>>>> to a request? Or what other options do I have to find out?
>>>>
>>>>
>>>> --Thomas
>>>>
>>>> _______________________________________________________________________________
>>>> UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
>>>> "Where the Answers Are, and have been for 20 years"
>>>>
>>>>
>>>>
>>>> _ARSlist: "Where the Answers Are" and have been for 20 years_
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>
