Hi Nick,

Just thought I'd interject a little of my experience (and I really do
mean a little ;-)) with this stuff. I've been through a very similar
challenge to yours recently: we saw substantial and sudden business
growth on our core business application, which (if I'm honest) wasn't
quite ready for it, and in a very similar scenario to yours we were
struggling regularly with locking issues.

We have a statistics database containing probably 10 million+ rows of
data that is written to on a very regular basis. The data is pulled
somewhat less regularly for reporting purposes involving very large
datasets, and we often found that read/write conflicts on the tables
were causing serious performance issues for users building the
reports.

I wasn't using Transfer in that particular situation, but I played a
fair amount with isolation levels and dirty reads (my business model
means it's not a problem for the data to be a little skewed), and
although I did find performance increases there, I was advised against
the approach by several DBAs with whom I discussed the problem.
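For anyone following along, the two forms of dirty read I experimented
with look roughly like this in SQL Server (the table and column names
here are made up for illustration, not from our actual schema):

```sql
-- Option 1: per-query table hint on a single SELECT
SELECT TOP 100 stat_id, recorded_at, stat_value
FROM dbo.Statistics WITH (NOLOCK)     -- reads uncommitted rows, takes no shared locks
ORDER BY recorded_at DESC;

-- Option 2: switch the whole session to the equivalent isolation level
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.Statistics;  -- same dirty-read semantics, no hint needed
```

The usual objection the DBAs raised is that under NOLOCK a scan can
return rows that are later rolled back, or even miss/double-count rows
while pages are splitting, so it's only safe when approximate data is
genuinely acceptable.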

To tackle the problem we looked at improving the performance of our
data import queries, moving away from temporary tables and into table
variables. This, along with other performance tuning and indexing
tweaks on the database, caused a DRAMATIC drop in lock times thanks to
the lower I/O incurred, and it cured our problem completely (for now
at least). As a rough example, we found that the addition of one
particular index brought the execution time of one of our insert
queries down from around 4 seconds to 0.01 seconds... nice eh?
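In case it helps, here's the shape of the two changes (again with
illustrative names, not our real tables or the exact index we added):

```sql
-- Before: a temp table for the import staging data
CREATE TABLE #ImportBatch (stat_id INT, stat_value DECIMAL(18,4));
-- ... populate, join into dbo.Statistics, then ...
DROP TABLE #ImportBatch;

-- After: a table variable (no statistics, fewer recompiles; works well
-- for the smallish batches our imports deal in)
DECLARE @ImportBatch TABLE (stat_id INT, stat_value DECIMAL(18,4));

-- And the kind of index that produced the 4s -> 0.01s drop: one that
-- matches the lookup performed inside the INSERT ... SELECT, so the
-- write no longer forces a table scan.
CREATE NONCLUSTERED INDEX IX_Statistics_StatId_RecordedAt
    ON dbo.Statistics (stat_id, recorded_at);
```

Your mileage will vary of course; table variables lose out to temp
tables once the row counts get large, so it's worth testing both.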

Obviously I don't know your use case that thoroughly, but if your
situation sounds similar to mine then I would suggest there are more
effective changes you can make to the application than putting NOLOCK
on the selects.

Rob

On Mar 6, 4:08 am, NickHaggmark <[email protected]> wrote:
> Hi Mark!
>
> Well that's really the million dollar question isn't it? =P
>
> The honest answer to that question is that our product grew insanely
> fast and we aren't correctly archiving our data out of these high
> traffic / high volume tables.  We shouldn't have to be doing high
> traffic reads out of tables with fifty to a hundred million records.
> Doing dirty reads should never HAVE to be an absolute necessity, it's
> just really useful under certain circumstances (which is obviously a
> debatable point).  That being said, it's unfortunately the situation
> that I find myself in until I can remedy all of the issues in the
> existing system. I have been making a conscious effort (and great
> strides) to relieve the pressure on these tables by archiving and
> allowing historical access via views, however it's difficult to tackle
> the monster as a whole and still move forward (especially in a fast
> paced environment).  Out of curiosity, does Transfer do any kind of
> caching on objects even if caching is turned off for an entity in the
> config (and global caching is off)?  I know certain frameworks use
> multiple layers of caching, so I'm just curious.
>
> BTW, I apologize for my frequent typos.  I'm a lousy proof-reader.
>
> Thanks again for your feedback and time!
>
> Nick