Hi Thomas,

On Fri, Sep 14, 2018 at 4:23 PM Thomas Salemy <[email protected]> wrote:
>
> I want to redevelop the shared object cache that is used to filter HTTP
> requests. Specifically, I want to serve requests even faster by replacing the
> module's current structure with this concurrency mechanism. To be clear, the
> module I am talking about is the cache module and the file I am talking about
> is mod_cache_socache.c.
>
> I have been working on this for a while but have not been able to figure out
> how to properly redesign the module to incorporate the new concurrency
> platform and have it work more than 80% of the time. I am hoping that one of
> you might be able to describe to me more in detail how the cache module works
> and the details of the current concurrency mechanism supporting it so that I
> may redesign it and prove the value of transactional memory in my research
> project.
It's not a quick and easy task for us to describe the details of mod_cache
and mod_cache_socache in a general reply like this one; could you tell us
what the blocking points are, so we can talk about those more specifically?

In broad strokes though, mod_cache_socache is where the mod_socache_*
providers (struct ap_socache_provider_t, implemented on different backends
like memcache/dbm/... and used by several httpd modules, not only mod_cache)
are re-interfaced to fit mod_cache's providers (struct cache_provider). A
cache_provider implements the methods mod_cache needs to store/retrieve the
data (HTTP headers/body) according to RFC 7234. Deciding "when" or "what" to
cache is mod_cache's business, while the "how" is left to the cache_provider,
so concurrency is handled at the cache_provider level but is largely dictated
by mod_cache's logic with regard to concurrent requests.

As you can see the scope is quite large, and it wouldn't be efficient to go
through all the details here without specific issues on your side. Maybe you
have some code to share already? That would certainly help the discussion
(and the scope).

Regards,
Yann.
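
PS: in case it helps to map the pieces, below is a rough, purely illustrative
sketch of the two layers I mentioned. The struct and function names
(toy_socache_provider, toy_cache_provider, toy_store_body, "backend") are made
up for illustration only; the real definitions live in ap_socache.h
(ap_socache_provider_t) and mod_cache.h (struct cache_provider) and have more
members and different signatures, so treat this as the shape of the layering,
not the actual API.

#include <stddef.h>

/* Backend-agnostic small-object cache interface (cf. ap_socache_provider_t):
 * the "how to store bytes" side, implemented by mod_socache_shmcb,
 * mod_socache_memcache, mod_socache_dbm, ... */
typedef struct toy_socache_provider {
    const char *name;
    int (*store)(const char *key, const void *data, size_t len);
    int (*retrieve)(const char *key, void *buf, size_t *len);
    int (*remove)(const char *key);
} toy_socache_provider;

/* mod_cache-facing interface (cf. struct cache_provider): the methods
 * mod_cache calls to store/recall HTTP headers and bodies per RFC 7234. */
typedef struct toy_cache_provider {
    int (*store_headers)(const char *url /* , headers... */);
    int (*store_body)(const char *url, const void *body, size_t len);
    int (*recall_headers)(const char *url /* , headers out... */);
    int (*recall_body)(const char *url, void *buf, size_t *len);
} toy_cache_provider;

/* mod_cache_socache's job, roughly: translate cache_provider calls into
 * socache store/retrieve calls on serialized header/body blobs; this is
 * where the backend-level concurrency (locking, atomicity of the stored
 * entry, races between readers and writers) has to be dealt with. */
static const toy_socache_provider *backend;  /* selected by configuration */

static int toy_store_body(const char *url, const void *body, size_t len)
{
    /* the real code serializes headers + metadata + body, enforces size
     * limits, and handles expiry and concurrent updates of the same key */
    return backend->store(url, body, len);
}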
