Have you checked whether the process is suffering from lock contention
(e.g. by running the sp_who2 stored procedure from Query Analyzer)?
Regards
Malcolm Burtt
From: [email protected] [mailto:[EMAIL PROTECTED] On Behalf Of Stefan Capiau
Sent: 21 December 2005 08:19
To: [email protected]
Subject: [Axapta-Knowledge-Village] Re: Performance
Are you talking about the SQL transaction log or the commitment control
in Axapta?
We have run tests both with commitment control around the entire process
and with a commit after every 500 records. No big change in
performance.
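
For illustration, the commit-every-500-records variant looks roughly like
the X++ sketch below (the table and the processing step are placeholders,
not the actual job):

static void processCustomersInBlocks(Args _args)
{
    CustTable   custTable;
    CustTable   custTableUpd;
    int         counter;

    ttsbegin;
    while select custTable                       // plain read cursor
    {
        // re-select this record for update inside the open transaction,
        // so the forupdate never spans a commit
        select forupdate custTableUpd
            where custTableUpd.AccountNum == custTable.AccountNum;

        // ... build the balance / write the opening transaction here ...
        custTableUpd.update();

        counter++;
        if (counter mod 500 == 0)
        {
            ttscommit;       // release the locks held so far
            ttsbegin;        // start a new transaction for the next block
        }
    }
    ttscommit;
}

The outer loop only reads; each record is re-selected for update inside
the currently open transaction, so each block of 500 records is committed
and its locks released before the next block starts.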
Regards,
Stefan
--- In [email protected], "Bayliss, Barry"
<[EMAIL PROTECTED]> wrote:
>
> Hello Stefan,
>
>
> I have previously come across a problem like this, where a piece of
> code kept running slower. It turned out the problem was related to the
> size of the transaction log generated by the process being run.
>
>
> Have you placed the transaction log around the entire batch, or each
> block of the batch?
>
>
>
> Barry.
>
>
> -----Original Message-----
> From: [email protected]
> [mailto:[EMAIL PROTECTED] On Behalf Of
Stefan
> Capiau
> Sent: Wednesday, 21 December 2005 7:19 AM
> To: [email protected]
> Subject: [Axapta-Knowledge-Village] Re: Performance
>
> Thanks for the reply!
>
> We are running this process at night, and no other processes are
> active at that moment.
>
> The strange thing is that when we divide the process into batches of
> a couple of thousand records, it doesn't matter which batch we process
> first: the first one launched is always fast (just a couple of
> minutes), while the last one takes hours.
> We also looked at the disk queue time but saw nothing unusual.
> Somehow it looks as if the transactions are kept in memory and not
> released after the process, forcing the system to start swapping to
> disk.
>
> At this time we can manage these runs by carefully planning the
> execution time, but in the near future we are also planning to upload
> the complete customer database (+3.000.000 customers!). As it looks
> now, the system will not be able to process that number of
> transactions.
>
> During the next couple of days we will also closely monitor the SQL
> Server and see whether changing settings affects the process.
> We have also done a lot of work on code improvements for performance.
> I am convinced it can always be better, but I don't think we will gain
> much at this stage by changing the code.
>
> All extra hints are very much appreciated !
>
> Regards,
> Stefan
>
>
>
>
>
>
>
> --- In [email protected], Varden Morris
> <[EMAIL PROTECTED]> wrote:
> >
> > Hi Stefan
> >
> > There are countless ways in which performance may be improved, but
> > it depends on your specific situation.
> >
> > If you are selecting records for update within a transaction, you
> > will hold object (table, page, record) locks on the tables within
> > the transaction. It looks like you are running several clients that
> > compete for the same tables and resources. The first client that
> > gets hold of the tables holds them until its part of the process is
> > finished, then releases them to the next client in the queue, and so
> > on until the entire process is finished. That explains why the jobs
> > are not executed in parallel.
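> >
> > A minimal X++ sketch of that situation (the table and the account
> > range are just illustrative): with one transaction around the whole
> > run, every record selected for update stays locked until the final
> > ttscommit, so a second client touching the same range simply waits.
> >
> > static void lockHoldingSketch(Args _args)
> > {
> >     CustTrans   custTrans;
> >     CustAccount fromAccount = '100000';    // illustrative range
> >     CustAccount toAccount   = '125000';
> >
> >     ttsbegin;
> >     while select forupdate custTrans
> >         where custTrans.AccountNum >= fromAccount
> >            && custTrans.AccountNum <= toAccount
> >     {
> >         // ... process the record ...
> >         custTrans.update();    // this record (or its page) stays locked...
> >     }
> >     ttscommit;                 // ...until here, when all locks are
> >                                // released at once
> > }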
> >
> > In a case like this I would think that using one powerful server (a
> > batch server) to process the entire job would give better
> > performance. Using several clients leads to a lot of network traffic,
> > which reduces performance, and the clients end up waiting in line for
> > each other. With a powerful server close to the database server, with
> > high-speed access to the database, you can gain a lot of performance
> > by using a 2-tier client or a 3-tier fat client, which eliminates the
> > latency between the client and the AOS (if you are using an AOS).
> >
> > You should also note that processing will only be as fast as the
> > slowest processor of the batch and database servers, and the process
> > will also be affected by the speed of the disks in both machines. If
> > the disk queue length on either machine stays above 2, you will have
> > to upgrade to faster disks to gain better performance. If the memory
> > in either server is not adequate you will also end up with a lot of
> > paging, which is another drag on performance.
> >
> > Tuning your code is another way to gain some performance. It may
> > also make it possible to run more than one batch server in parallel
> > with minimal object (table, page, record) locking.
> >
> > You should also ensure that your SQL Server is tuned for optimum
> > performance. If the machine has more than 2 GB of memory you should
> > enable AWE for SQL Server, otherwise it will only see and use 2 GB.
> >
> > Hope this helps in some way.
> >
> >
> > Stefan Capiau <[EMAIL PROTECTED]> wrote:
> > Thanks James,
> >
> > We have tried tweaking the caching, but it had no effect.
> > The process does the following: it reads all customers, looks at the
> > transactions (a new table, a mixture of financial and logistical
> > data), builds a balance from those transactions and generates an
> > opening transaction in a new period.
> >
> > During this process we do a lot of reading and updating/writing on
> > the same table. Can this be a problem?
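> >
> > To give an idea, the structure is roughly like the X++ sketch below
> > (table and field names are illustrative stand-ins, not the real new
> > table):
> >
> > static void monthlyOpeningSketch(Args _args)
> > {
> >     CustTable   custTable;
> >     CustTrans   custTrans;     // stand-in for the new mixed table
> >     CustTrans   opening;
> >     AmountMST   balance;
> >     TransDate   periodStart = str2Date('01-12-2005', 123);
> >     TransDate   periodEnd   = str2Date('31-12-2005', 123);
> >
> >     while select custTable                      // read every customer
> >     {
> >         balance = 0;
> >
> >         // sum the customer's transactions in the closing period
> >         while select custTrans
> >             where custTrans.AccountNum == custTable.AccountNum
> >                && custTrans.TransDate  >= periodStart
> >                && custTrans.TransDate  <= periodEnd
> >         {
> >             balance += custTrans.AmountMST;
> >         }
> >
> >         // write one opening transaction into the new period,
> >         // on the same table we have just been reading from
> >         ttsbegin;
> >         opening.clear();
> >         opening.AccountNum = custTable.AccountNum;
> >         opening.TransDate  = periodEnd + 1;
> >         opening.AmountMST  = balance;
> >         opening.insert();
> >         ttscommit;
> >     }
> > }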
> >
> > Regards,
> > Stefan
> >
> > --- In [email protected], "James Flavell"
> > <[EMAIL PROTECTED]> wrote:
> > >
> > > My first guess would be the SQL locking escalating to a full lock
> > > or something like that, but I am no expert... which tables are you
> > > updating/inserting into?
> > >
> > > Also just a guess, but could it be that at the start the data is
> > > read into RAM, and later on, when RAM is exhausted, disk I/O
> > > increases and so things slow down? Maybe there is some caching
> > > setting you can tone down when running this job?
> > >
> > >
> > >
> > >
> > > -----Original Message-----
> > > From: [email protected]
> > > [mailto:[EMAIL PROTECTED] On Behalf Of Stefan Capiau
> > > Sent: 20 December 2005 20:43
> > > To: [email protected]
> > > Subject: [Axapta-Knowledge-Village] Performance
> > >
> > >
> > > Hi all,
> > >
> > > Each month we run a procedure that goes through all customers and
> > > their transactions. We do updates and inserts in the selected
> > > tables.
> > >
> > > The database currently contains +300.000 customers and over
> > > 2.000.000 transactions on those customers.
> > >
> > > The problem we are facing is that the job is taking more and more
> > > time (exponentially more) as the customer database grows.
> > >
> > > Dividing the job into parts of e.g. 25.000 customers gives the
> > > following results:
> > >
> > > The first 25.000 customers are processed in less than 3-4 minutes.
> > > The last 25.000 customers are processed in over 2 hours!
> > >
> > > We are using a SQL Server 2000 server.
> > >
> > > At this moment the complete process is taking over 12 hours to
> > > complete, whereas it should be done within 30-45 minutes (judging by
> > > the first group of customers).
> > >
> > > We already figured out that this is client independent. If we run 20
> > > batches of customers on 20 clients, the first client takes a couple
> > > of minutes whereas the last takes over 2 hours.
> > >
> > > Does this sound familiar to anybody?
> > >
> > >
> > > Stefan
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> >
> >
> >
> >
> >
> >
> >
> >
> > Varden Morris
> > Senior Developer
> > WellPoint Systems Inc.
> >
> > (403) 444-5848 direct
> > (403) 444-3900 main
> > (403) 615-1604 cell
> > www.wellpoint.ca
> > [EMAIL PROTECTED]
> >
> >
> >
>
>
>
>
>
>
>
>

