Hello,

We had an experience this weekend where we attempted to roll out our
ruote-powered application to a new segment of users. However, we had
to roll back pretty quickly because our work item processing started
taking up to 10 minutes, particularly when creating new workflows (as
opposed to advancing existing workflows, though those slowed to a
crawl as well). Our application serves work items in real time via a
UI, based on users' actions and the workflow definitions, so we are
hoping for response times of a few seconds at most.

On Monday we are going to start picking things apart, trying to figure
out what is wrong with our setup. We threw together the MongoDB
storage late last year and have been using it since, but we haven't
really load tested it, updated it for the latest version of Ruote, or
tried to make it work with multiple workers. We have noticed, however,
that it has pretty high CPU utilization, which has been growing over
time and now sits at over 50%.

Anyway, the first thing I want to try when troubleshooting is swapping
out the MongoDB storage for another one, preferably Redis, for speed.
If that works well, I'll know the culprit is our storage adapter.
Otherwise, I'll have to dig deeper. My main question is this:

* is there a reasonable way to migrate ruote from one storage to
another? I'd like to do our test on a copy of the production database.
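
For what it's worth, here is the naive copy I was imagining. I'm
assuming every ruote storage implements the common get_many/put
interface, and I'm guessing at the document type list and at the
'_rev' handling; both are my assumptions, not something I've verified
against the ruote source, so please correct me if there's a supported
migration path:

```ruby
# Sketch of a storage-to-storage copy. DOC_TYPES is my guess at the
# document types a ruote storage holds.
DOC_TYPES = %w[
  configurations variables msgs schedules errors expressions workitems
]

# Copies every document of every known type from one storage to another.
def copy_storage(source, target)
  DOC_TYPES.each do |type|
    source.get_many(type).each do |doc|
      # drop the source revision so the target can assign its own
      # (assuming the target rejects puts with an unknown '_rev')
      doc = doc.reject { |k, _| k == '_rev' }
      target.put(doc)
    end
  end
end
```

The idea would be to point `source` at a copy of our production
MongoDB storage and `target` at a fresh Redis storage, with all
workers stopped.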

My next question is pretty broad, so I apologize for that, but

* Are there any known performance bottlenecks or hot spots we should
be looking at? We will profile, of course, but if there are some
obvious places to put the tip of the chisel, that would be great to
know.

Also,

* I am guessing we have a number of workflows in the database that are
"dead" or "orphaned": workflows and processes that, due to exceptions
or unclean resets, were never completed or cancelled. Could this
affect performance in a significant way? Should we routinely attempt
to clean out orphans?
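
For context on that last question, the kind of cleanup I'd envision is
roughly the following. Again I'm assuming the common storage interface
(get_many/delete), and I'm guessing at where the wfid lives in each
document type; the helper names are mine and the list of dead wfids
would have to come from somewhere else (e.g. manual inspection):

```ruby
# Hypothetical cleanup: given a list of wfids we've decided are dead,
# remove their documents from the storage.

def wfid_of(doc)
  # guessing at the layout: expressions nest the wfid under 'fei',
  # other document types seem to carry it at the top level
  (doc['fei'] && doc['fei']['wfid']) || doc['wfid']
end

def purge_wfids(storage, dead_wfids)
  %w[expressions schedules errors workitems].each do |type|
    storage.get_many(type).each do |doc|
      storage.delete(doc) if dead_wfids.include?(wfid_of(doc))
    end
  end
end
```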

Currently our ruote database (in MongoDB) is 1.4 GB, with about 3K
schedules and 190K expressions. Our workflows are pretty big, so each
expression is fairly large. Much of this may be cruft, I'm not sure,
but I'm curious how our setup compares to others'. Is this large,
average, very small? Do any of you have experience with DB sizes this
big or bigger? How long should it take to launch a workflow of
substantial size?

Thanks for your time and insight,

Nathan

-- 
you received this message because you are subscribed to the "ruote users" group.
to post : send email to [email protected]
to unsubscribe : send email to [email protected]
more options : http://groups.google.com/group/openwferu-users?hl=en