Hi Jason,
Well, the 10,000 is not appropriate for Victor. Whatever process he has
running looks like it would have chewed up all available memory before
crashing the server. I'd say that 100 is probably fine - at least now
he has trapped what I think is likely to be an error before it crashed
the complete ARS process.

The snippet of log that Victor has supplied is not really enough to get an
understanding of the issue, beyond the fact that the DMT job either uses
some very deep recursion (100 is huge) or is failing to reach its exit
condition. I suspect the latter. To really see the pattern, the most useful
part of the log would be the start of the loop, where the filter stack is
1, 2, 3, etc. Recent ARS versions display the stack nicely in the log.
It could be that there is some unexpected data, or a bug, or both. I'm not
really familiar with the DM job console myself as yet. Maybe others on the
list can assist when they see the log.
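To illustrate the idea, here is a rough sketch in plain Python - nothing AR System specific, the names and the TooManyLevels error are made-up stand-ins for the filter stack counter and ARERR 299 - of why a modest nesting cap turns a missing exit condition into a trapped error rather than a dead server:

```python
MAX_STACK = 100  # analogous to the "Maximum Stack of Filter" setting

class TooManyLevels(Exception):
    """Stand-in for ARERR 299."""

def run_filter(record, depth=0):
    # Each Push Fields that re-fires a filter deepens the nesting by one.
    if depth >= MAX_STACK:
        # A modest cap trips here, long before the process stack is exhausted.
        raise TooManyLevels(f"Too many levels in filter processing at depth {depth}")
    if record.get("done"):
        return depth  # exit condition reached; recursion unwinds normally
    # Bad data or a bug: "done" is never set, so the filter keeps re-firing.
    return run_filter(record, depth + 1)

try:
    run_filter({"done": False})
except TooManyLevels as err:
    # The server survives and logs an error instead of crashing.
    print("trapped:", err)
```

With the old 10,000 default, the same loop would burn through stack space (or, as in Victor's case, memory) well before the counter ever saved you.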

Rod




On 4 December 2012 05:57, Jason Miller <jason.mil...@gmail.com> wrote:

> ** Looking at our ITSM 7.6.04 install the following are set and I am
> pretty sure these are still the default:
> Maximum Filters for an Operation: 999999999
> Maximum Stack of Filter: 10000
>
> I wonder how appropriate these settings are for a 32-bit AR Server though?
>  I am figuring a 64-bit system with 6 GB plus 2 GB additional per large app,
> as the sizing documentation suggests (more or less what the documentation
> says, don't quote me on that), can handle this type of load.  32-bit, well
> I think certain things might start to have issues.
>
> Jason
>
>
> On Mon, Dec 3, 2012 at 12:07 PM, Victor <vico...@gmail.com> wrote:
>
>> Thank you all for your inputs.
>>
>> I set the filter stack limit to 100 and tried to create the job again.
>> The process aborted with "Too many levels in filter processing (ARERR 299)".
>> The AR Server process did not crash, nor was there any spike in memory or
>> CPU usage.
>> The arfilter.log however displayed a series of errors:
>>
>> <FLTR> <TID: 0000006120> <RPC ID: 0000017180> <Queue: Fast> <Client-RPC: 390620> <USER: supervic> <Overlay-Group: 1> **** Error while performing filter action: Error 299
>> <FLTR> <TID: 0000006120> <RPC ID: 0000017180> <Queue: Fast> <Client-RPC: 390620> <USER: supervic> <Overlay-Group: 1> **** Filter "DMT:CopyStagingFormsToSequencingEngine": No enabled error handler
>> <FLTR> <TID: 0000006120> <RPC ID: 0000017180> <Queue: Fast> <Client-RPC: 390620> <USER: supervic> <Overlay-Group: 1> /* Pn gru 03 2012 20:41:59.6370 */     End of filter processing (phase 2) -- Operation - SET on SHR:SchemaNames - 000000000021536
>> <FLTR> <TID: 0000006120> <RPC ID: 0000017180> <Queue: Fast> <Client-RPC: 390620> <USER: supervic> <Overlay-Group: 1>         4: Push Fields -> "DMT:CopyStagingFormsToSequencingEngine" (ARERR 299)
>> <FLTR> <TID: 0000006120> <RPC ID: 0000017180> <Queue: Fast> <Client-RPC: 390620> <USER: supervic> <Overlay-Group: 1> **** Error while performing filter action: Error 299
>> .
>> .
>> .
>>
>> ARError.log displayed:
>> Mon Dec 03 20:41:59 2012  390620 : Too many levels in filter processing (ARERR 299)
>> Mon Dec 03 20:41:59 2012     "DMT:VIS:SkipSequenceInit" : "DMT:SYS:SequencingEngine"
>>
>> Does anyone have a clue?
>>
>> Sorry for the long post.
>>
>> Victor.
>> On Monday 03 Dec 2012 09:16:22 Rod Harris wrote:
>> > Hi Jason and Victor and others,
>> >
>> > One thing that I noticed about V8 is that I think the default for the
>> > maximum stack of filters is still a very high 10,000. The trouble with
>> > having such a high limit is that you can have your system just run out
>> > of resources, e.g. stack space, before that limit is reached. I would
>> > have thought a limit on nested filters of around 25 or 50 should be
>> > plenty. If you have it set to this limit and you find that a perfectly
>> > well-behaved process is still hitting the limit I'd be surprised, and
>> > you could just up the limit. I suspect in the examples here there is a
>> > bug or some bad data that is causing the system to be a lot more
>> > recursive than was intended. You would probably see a bit of a spike in
>> > resources such as CPU and memory if a process does loop.
>> >
>> > In the old days we didn't have a filter stack limit, just a total
>> > number of filters. The trouble with that is that with the large number
>> > of filters that something like HPD:Help Desk had, you could hit pretty
>> > high limits in just normal use. Alternatively you could have a tightly
>> > recursive set of filters that used up the stack without hitting the
>> > filter limit.
>> >
>> > If you can replicate the problem it would be a good idea to have server
>> > logging on to see what is happening. You may not see the log
>> > immediately prior to the system crash but you will definitely be able
>> > to see a trend beforehand. Your log will be pretty full if you are
>> > running out of stack. If you have a low filter stack limit then you
>> > will see an error thrown and won't have a crash to worry about.
>> >
>> > Rod
>> >
>> > On 3 December 2012 15:06, Jason Miller <jason.mil...@gmail.com> wrote:
>> > > ** Not that I know of.  I didn't notice the error right when it
>> > > happened.  It occurred about 40 minutes after I released the system
>> > > to users (all two at that time of morning).  I was done with
>> > > everything on the server and finishing up some documentation, so I
>> > > don't know what the resources looked like at that moment.
>> > >
>> > > Now that I have had a chance to look again, I was wrong about the
>> > > progression being the same.  It was stack limit -> DB connection error
>> > > (different than Victor's DB update timeout) -> AR terminated.
>> > >
>> > > Wed Nov 28 04:23:19 2012  390635 : Approaching physical stack limit. (ARERR 8749)
>> > > Wed Nov 28 04:23:19 2012  390635 : Unable to connect to the SQL database. (ARERR 551)
>> > > Wed Nov 28 04:23:19 2012     Stop server
>> > > Wed Nov 28 04:23:19 2012 : AR System server terminated — fatal error occurred in ARSERVER (ARNOTE  21)
>> > > Wed Nov 28 04:23:20 2012 : Action Request System(R) Server x64 Version 7.6.04 SP4 201209051922
>> > > (c) Copyright 1991-2011 BMC Software, Inc.
>> > >
>> > > I don't want to steal the thread about his DMT issue.  It just seemed
>> > > odd that this is the first time I have seen the physical stack limit
>> > > error and Victor happens to mention it days later.
>> > >
>> > > Jason
>> > >
>> > > On Sun, Dec 2, 2012 at 2:24 PM, Tauf Chowdhury <taufc...@gmail.com>
>> wrote:
>> > >> **
>> > >> Jason, was there a spike in arserver process memory usage?
>> > >>
>> > >> Sent from my iPhone
>> > >>
>> > >> On Dec 2, 2012, at 5:09 PM, Jason Miller <jason.mil...@gmail.com>
>> wrote:
>> > >>
>> > >> **
>> > >>
>> > >> I saw the "Approaching physical stack limit" error for the first
>> > >> time Tuesday night after upgrading production from 7.5 to 7.6.04
>> > >> 64-bit.  It was the same progression you observed, DB timeout ->
>> > >> stack limit -> AR terminated.  I have only seen it once so far.
>> > >>
>> > >> Jason
>> > >>
>> > >> On Dec 2, 2012 1:30 AM, "Victor" <vico...@gmail.com> wrote:
>> > >>> Hi Adhwari,
>> > >>>
>> > >>> Unfortunately I didn't get past creating the first job. After
>> > >>> logging back in I could not find the created job in the job console.
>> > >>>
>> > >>> Thanks for your time.
>> > >>>
>> > >>> Regards,
>> > >>> Victor.
>> > >>>
>> > >>> On Sunday 02 Dec 2012 08:06:20 Kulkarni, Adhwari wrote:
>> > >>> > Hi Victor,
>> > >>> > This kind of error is usually observed when you are creating the
>> > >>> > first job using the 'job console'. That is the time when the UDM
>> > >>> > engine builds the complete dependency data required for UDM to
>> > >>> > work, which takes a lot of resources. Do you see the same error
>> > >>> > while creating all the jobs? Also, can you please check if the
>> > >>> > job was created by logging back in.
>> > >>> >
>> > >>> > Regards,
>> > >>> > Adhwari
>> > >>>
>> > >>>
>> > >>> _______________________________________________________________________________
>> > >>> UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
>> > >>> "Where the Answers Are, and have been for 20 years"
>> > >>
>> > >> _ARSlist: "Where the Answers Are" and have been for 20 years_
>> > >>
>> > >
>> >
>> >
>>
>>
>>
>
>

