Ok, I'm a bit slow...
At 03:05 PM 12/12/01 +1100, Rob Mueller (fastmail) wrote:
Just thought people might be interested...
Seems like they were! Thanks again.
I didn't see anyone comment on this, but I was a bit surprised by MySQL's good performance. I suppose caching is key. I wo…
On Sat, Dec 15, 2001 at 08:57:30PM -0500, Perrin Harkins wrote:
> > One place that Rob and I still haven't found a good solution for profiling
> > is trying to work out whether we should be focussing on optimising our
> > mod_perl code, or our IMAP config, or our MySQL DB, or our SMTP setup, or…
On Sun, Dec 16, 2001 at 09:58:09AM +1100, Jeremy Howard wrote:
> Can anyone suggest a way (under Linux 2.4, if it's OS dependent) to get a
> log of CPU (and also IO preferably) usage by process name over some period
> of time?
What about BSD Process Accounting (supported in most *nix systems) and…
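Beyond process accounting, one lightweight option on Linux is to sample /proc yourself. This is a sketch of my own (not from the thread), using the stat fields documented in proc(5); summing utime and stime gives total CPU ticks per process, which a periodic job could log by process name over time.

```perl
#!/usr/bin/perl
# Minimal sketch: sample CPU time for one process from /proc (Linux only).
# Field numbers follow proc(5): comm is field 2, utime/stime are 14/15.
use strict;
use warnings;

sub cpu_ticks {
    my ($pid) = @_;
    open my $fh, '<', "/proc/$pid/stat" or return;
    my $line = <$fh>;
    close $fh;
    # comm may contain spaces/parens, so peel off "pid (comm)" first.
    my ($comm, $rest) = $line =~ /^\d+ \((.*)\) (.*)$/ or return;
    my @f = split ' ', $rest;
    # $rest starts at field 3 (state), so utime (field 14) is index 11
    # and stime (field 15) is index 12.
    return ($comm, $f[11] + $f[12]);    # total ticks, user + system
}

my ($name, $ticks) = cpu_ticks($$);
printf "%s: %d ticks\n", $name, $ticks if defined $name;
```

Looping over all numeric entries in /proc and appending to a log file at intervals would give the per-name CPU history the question asks about; I/O stats are harder to get on a 2.4 kernel without patches.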
> One place that Rob and I still haven't found a good solution for profiling
> is trying to work out whether we should be focussing on optimising our
> mod_perl code, or our IMAP config, or our MySQL DB, or our SMTP setup, or
> our daemons' code, or...
Assuming that the mod_perl app is the front-…
Rob Mueller (fastmail) wrote:
> > And ++ on Paul's comments about Devel::DProf and other profilers.
>
> Ditto again. I've been using Apache::DProf recently and it's been great at
> tracking down exactly where time is spent in my program.
One place that Rob and I still haven't found a good solution…
> The thing you were missing is that on an OS with an aggressively caching
> filesystem (like Linux), frequently read files will end up cached in RAM
> anyway. The kernel can usually do a better job of managing an efficient
> cache than your program can.
>
> For what it's worth, DeWitt Clinton ac…
On December 14, 2001 03:53 pm, Robert Landrum wrote:
> At 6:04 PM -0500 12/14/01, Perrin Harkins wrote:
> > That's actually a bit different. That would fail to notice updates
> > between processes until the in-memory cache was cleared. Still very
> > useful for read-only data or data that can be out of sync for some
> > period though.
At 6:04 PM -0500 12/14/01, Perrin Harkins wrote:
>That's actually a bit different. That would fail to notice updates between
>processes until the in-memory cache was cleared. Still very useful for
>read-only data or data that can be out of sync for some period though.
The primary problem with…
On December 14, 2001 03:04 pm, Perrin Harkins wrote:
> > So our solution was caching in-process with just a hash, and using a
> > DBI/mysql persistent store.
> > in pseudo code
> > sub get_stuff {
> >     if (! $cache{$whatever} ) {
> >         if (!( $cache{$whatever} = dbi_lookup() )) {
> >             $cache{$whatever} = de…
> So our solution was caching in-process with just a hash, and using a
> DBI/mysql persistent store.
> In pseudo code:
> sub get_stuff {
>     if (! $cache{$whatever} ) {
>         if (!( $cache{$whatever} = dbi_lookup($whatever) )) {
>             $cache{$whatever} = derive_data_from_original_source($whatever);
>             dbi_save($whatever, $cache{$whatever});
>         }
>     }
>     return $cache{$whatever};
> }
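Filled out as runnable Perl, the two-tier pattern quoted above looks like this; the dbi_* and derive_* bodies here are stand-in stubs for illustration, not the poster's real code.

```perl
# Two-tier cache sketch: per-process hash in front of a persistent store.
# dbi_lookup/dbi_save/derive_data_from_original_source are fake stubs.
use strict;
use warnings;

my %cache;      # per-process, in-memory tier
my %fake_db;    # stands in for the DBI/MySQL persistent tier

sub dbi_lookup { my ($k) = @_; return $fake_db{$k} }
sub dbi_save   { my ($k, $v) = @_; $fake_db{$k} = $v }
sub derive_data_from_original_source { my ($k) = @_; return "derived($k)" }

sub get_stuff {
    my ($whatever) = @_;
    if (! $cache{$whatever} ) {
        if (!( $cache{$whatever} = dbi_lookup($whatever) )) {
            # Miss in both tiers: derive it, then persist it.
            $cache{$whatever} = derive_data_from_original_source($whatever);
            dbi_save($whatever, $cache{$whatever});
        }
    }
    return $cache{$whatever};
}

print get_stuff('x'), "\n";   # derives and saves on first call
print get_stuff('x'), "\n";   # served from the in-process hash
```

Note the caveat discussed elsewhere in the thread: the in-process hash never sees updates made by other processes until it is cleared.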
On December 14, 2001 12:59 pm, Dave Rolsky wrote:
> On Fri, 14 Dec 2001, Perrin Harkins wrote:
> > The thing you were missing is that on an OS with an aggressively caching
> > filesystem (like Linux), frequently read files will end up cached in RAM
> > anyway. The kernel can usually do a better job…
On Fri, 14 Dec 2001, Perrin Harkins wrote:
> The thing you were missing is that on an OS with an aggressively caching
> filesystem (like Linux), frequently read files will end up cached in RAM
> anyway. The kernel can usually do a better job of managing an efficient
> cache than your program can.
> Another powerful tool for tracking down performance problems is perl's
> profiler combined with Devel::DProf and Apache::DProf. Devel::DProf
> is bundled with perl. Apache::DProf is hidden in the Apache-DB package
> on CPAN.
Ya know the place in my original comment where I was optimizing a dif…
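For anyone wanting to try the profilers mentioned above: if memory serves from the mod_perl guide, Apache::DProf needs little more than a single directive, so treat this as a hedged sketch rather than a verified config. Each child writes a profile under the server's log directory, which dprofpp (bundled with Devel::DProf) can then summarize.

```
# httpd.conf -- load the profiler before other Perl modules so their
# subroutines get registered with the debugger hooks:
PerlModule Apache::DProf

# After serving some requests, analyze a child's profile
# (the PID directory name varies per child):
#   dprofpp logs/dprof/$PID/tmon.out
```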
> I was using Cache::SharedMemoryCache on my system. I figured, "Hey, it's
> RAM, right? It's gonna be WAY faster than anything disk-based."
The thing you were missing is that on an OS with an aggressively caching
filesystem (like Linux), frequently read files will end up cached in RAM
anyway.
On Fri, Dec 14, 2001 at 10:43:02AM -0800, Rob Bloodgood wrote:
> > > Again, thank you, Rob. This is great,
> >
> > > * Cache::FileCache (uses Storable)
> > > * Cache::SharedMemoryCache (uses Storable)
> > - Can specify the maximum cache size (Cache::SizeAwareFileCache) and/or
> > maximum time an object is allowed in the cache…
> > Again, thank you, Rob. This is great,
>
> > * Cache::FileCache (uses Storable)
> > * Cache::SharedMemoryCache (uses Storable)
> - Can specify the maximum cache size (Cache::SizeAwareFileCache) and/or
> maximum time an object is allowed in the cache
> - Follows the Cache::Cache interface syste…
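As a concrete, hedged illustration of the interface being described, here is a minimal Cache::SizeAwareFileCache session; the namespace, sizes, and lifetimes are invented for the example, and the Cache::Cache distribution must be installed from CPAN.

```perl
# Sketch of the Cache::Cache interface (size-aware file variant).
use strict;
use warnings;
use Cache::SizeAwareFileCache;

my $cache = Cache::SizeAwareFileCache->new({
    namespace          => 'Demo',     # invented for this example
    default_expires_in => 600,        # seconds an object may live
    max_size           => 100_000,    # bytes; size-aware variant prunes
});

$cache->set( 'answer', { value => 42 } );   # refs serialized via Storable
my $hit = $cache->get('answer');
print $hit->{value}, "\n" if $hit;
```

The zero-setup behaviour mentioned in the thread shows here: no cache directory or schema is prepared beforehand, which is also where some of the run-time overhead comes from.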
> In general the Cache::* modules were designed for clarity and ease of
> use in mind. For example, the modules tend to require absolutely no
> set-up work on the end user's part and try to be as fail-safe as
> possible. Thus there is run-time overhead involved. That said, I'm
> certainly not a…
> IPC::ShareLite freezes/thaws the whole data structure, rather than just
> the hash element being accessed, IIRC, so is probably going to have
> extremely poor scaling characteristics. Worth adding to check, of course.
No, it's probably not worth it. It would be worth adding IPC::Shareable
though…
Perrin Harkins wrote:
> Also, I'd like to see MLDBM + BerkeleyDB (not DB_File) with BerkeleyDB
> doing automatic locking, and IPC::MM, and IPC::Shareable, and
> IPC::ShareLite (though it doesn't serialize complex data by itself), and
> MySQL with standard tables. Of course I could just do them myself…
On Wed, Dec 12, 2001 at 03:05:33PM +1100, Rob Mueller (fastmail) wrote:
> I sat down the other day and wrote a test script to try out various
> caching implementations. The script is pretty basic at the moment, I
> just wanted to get an idea of the performance of different methods.
Rob, wow! Th…
On Wed, Dec 12, 2001 at 03:05:33PM +1100, Rob Mueller (fastmail) wrote:
> I tried out the following systems.
> * Null reference case (just store in 'in process' hash)
> * Storable reference case (just store in 'in process' hash after 'freeze')
> * Cache::Mmap (uses Storable)
> * Cache::FileCache (uses Storable)…
Some more points. I'd like to point out that I don't think the lack of
actual concurrency testing is a real problem, at least for most single-CPU
installations. If most of the time is spent doing other stuff in a request
(which is most likely the case), then on average when a process goes t…
> One important aspect missing from my tests is the actual concurrency
> testing.
Oh, I guess I should have checked your code. I thought these were
concurrent. That makes a huge difference.
> 2. Lock some part of cache for a request
> (Cache::Mmap buckets, MLDBM pages?)
MLDBM::Sync locks the whole…
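For reference, a minimal MLDBM::Sync session looks roughly like the following; treat it as a hedged sketch — the file name is invented, and the MLDBM and MLDBM::Sync modules must come from CPAN.

```perl
# Sketch of MLDBM::Sync's tie interface: a DBM-backed hash with
# locking around each read/write, serializing values via Storable.
use strict;
use warnings;
use Fcntl qw(O_CREAT O_RDWR);
use MLDBM::Sync;                    # wraps each access in a lock
use MLDBM qw(SDBM_File Storable);   # underlying DBM and serializer

my $file = "/tmp/demo_cache";       # invented path for the example

tie my %cache, 'MLDBM::Sync', $file, O_CREAT | O_RDWR, 0640
    or die "tie failed: $!";

$cache{key} = { nested => 'data' };   # whole value frozen on store
print $cache{key}{nested}, "\n";      # whole value thawed on fetch
untie %cache;
```

Because the lock covers the whole DBM file rather than a bucket or page, it answers option 2 in the quoted message with coarse-grained rather than partial locking.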
Just wanted to add an extra thought that I forgot to include in the
previous post. One important aspect missing from my tests is the actual
concurrency testing. In most real-world programs, multiple applications
will be reading from/writing to the cache at the same time. Depending on
the c…
> I sat down the other day and wrote a test script to try
> out various caching implementations.
Very interesting. Looks like Cache::Mmap deserves more attention (and
maybe a Cache::Cache subclass).
> Have I missed something obvious?
Nothing much, but I'd like to see how these numbers vary with…