http://serv2.ist.psu.edu:8080/showciting;jsessionid=6C0ED7DF5585AAB5C68E6F10B2661E9C?doi=10.1.1.115.4918
-
Abstract
We consider the problem of cache management in a demand paging scenario
with uniform page sizes. We propose a new cache management policy,
namely, Adaptive Replacement Cache (ARC), that has several advantages.
In response to evolving and changing access patterns, ARC dynamically,
adaptively, and continually balances between the recency and frequency
components in an online and self-tuning fashion. The policy ARC uses a
learning rule to adaptively and continually revise its assumptions
about the workload. The policy ARC is empirically universal, that is,
it empirically performs as well as a certain fixed replacement policy,
even when the latter uses the best workload-specific tuning parameter
selected in an offline fashion. Consequently, ARC works uniformly well
across varied workloads and cache sizes without any need for
workload-specific a priori knowledge or tuning. Various policies such
as LRU-2, 2Q, LRFU, and LIRS require user-defined parameters, and,
unfortunately, no single choice works uniformly well across different
workloads and cache sizes. The policy ARC is simple to implement and,
like LRU, has constant complexity per request. In comparison, policies
LRU-2 and LRFU both require logarithmic time complexity in the cache
size. The policy ARC is scan-resistant: it allows one-time sequential
requests to pass through without polluting the cache.

ARC: A self-tuning, low overhead replacement cache
- by Nimrod Megiddo, Dharmendra S. Modha ― 2003 ― In Proceedings of
the 2003 Conference on File and Storage Technologies (FAST)
- Cited by 70 (4 self)
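The balancing mechanism this abstract summarizes can be sketched in a few
dozen lines. The following is a compact Python reading of the published ARC
algorithm (LRU lists T1/T2 for cached pages, "ghost" lists B1/B2 that
remember recently evicted keys, and an adaptive target p for the size of
T1); it is illustrative rather than the authors' reference implementation,
and it uses integer division as a simplification of the paper's adaptation
ratio.

```python
from collections import OrderedDict

class ARC:
    """Sketch of the ARC policy: T1 favors recency, T2 favors frequency,
    and p is the learned target size of T1, revised on every ghost hit."""

    def __init__(self, c):
        self.c = c                                 # cache capacity in pages
        self.p = 0                                 # adaptive target for |T1|
        self.t1, self.t2 = OrderedDict(), OrderedDict()
        self.b1, self.b2 = OrderedDict(), OrderedDict()

    def _replace(self, in_b2):
        # Evict from T1 or T2 depending on the adaptive target p.
        if self.t1 and (len(self.t1) > self.p or (in_b2 and len(self.t1) == self.p)):
            old, _ = self.t1.popitem(last=False)   # LRU page of T1
            self.b1[old] = None                    # remember it in ghost list B1
        else:
            old, _ = self.t2.popitem(last=False)   # LRU page of T2
            self.b2[old] = None                    # remember it in ghost list B2

    def access(self, x):
        """Return True on a cache hit, False on a miss."""
        if x in self.t1 or x in self.t2:           # hit: promote to MRU of T2
            self.t1.pop(x, None)
            self.t2.pop(x, None)
            self.t2[x] = None
            return True
        if x in self.b1:                           # ghost hit in B1: grow p (favor recency)
            self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(False)
            del self.b1[x]
            self.t2[x] = None
        elif x in self.b2:                         # ghost hit in B2: shrink p (favor frequency)
            self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(True)
            del self.b2[x]
            self.t2[x] = None
        else:                                      # brand-new page
            if len(self.t1) + len(self.b1) == self.c:
                if len(self.t1) < self.c:
                    self.b1.popitem(last=False)    # drop LRU ghost of B1
                    self._replace(False)
                else:
                    self.t1.popitem(last=False)    # B1 empty: drop LRU page of T1
            elif len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= self.c:
                if len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= 2 * self.c:
                    self.b2.popitem(last=False)    # drop LRU ghost of B2
                self._replace(False)
            self.t1[x] = None                      # new pages enter at MRU of T1
        return False
```

A hit in B1 (a page recently evicted from the recency side) grows p, giving
T1 more room; a hit in B2 shrinks it. A one-time sequential scan only ever
passes through T1, so pages proven frequent in T2 stay resident, which is
the scan resistance the abstract claims.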
-
Abstract
The buffer-cache replacement policy of the OS can have a significant
impact on the performance of I/O-intensive applications. In this paper,
we introduce a simple fingerprinting tool, Dust, which uncovers the
replacement policy of the OS. Specifically, we are able to identify how
initial access order, recency of access, frequency of access, and
long-term history are used to determine which blocks are replaced from
the buffer cache. We show that our fingerprinting tool can identify
popular replacement policies described in the literature (e.g., …

Exploiting Gray-Box Knowledge of Buffer-Cache Contents
- by Nathan C. Burnett, John Bent, Andrea C. Arpaci-Dusseau, Remzi H.
Arpaci-Dusseau ― 2002 ― In Proceedings of the USENIX Annual Technical
Conference (USENIX ’02)
- Cited by 23 (9 self)
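The fingerprinting idea described in this abstract can be illustrated with
a toy simulation (this is not the Dust tool itself; the function names and
trace shape are hypothetical). The probe fills a simulated cache, re-touches
half of the blocks to refresh their recency, forces evictions, and then
checks which blocks survived: under LRU the re-touched blocks remain, under
FIFO they do not, so the surviving set fingerprints the policy.

```python
from collections import OrderedDict

def simulate(policy, capacity, trace):
    """Return the set of resident blocks after replaying `trace`
    under a toy 'lru' or 'fifo' replacement policy."""
    cache = OrderedDict()
    for key in trace:
        if key in cache:
            if policy == "lru":
                cache.move_to_end(key)        # only LRU refreshes recency on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)     # evict the oldest entry
            cache[key] = None
    return set(cache)

def fingerprint(capacity=8):
    # 1) fill the cache, 2) re-touch the first half to refresh recency,
    # 3) force half the cache to be evicted, 4) probe what survived.
    fill = list(range(capacity))
    retouch = fill[: capacity // 2]
    evictors = list(range(capacity, capacity + capacity // 2))
    trace = fill + retouch + evictors
    results = {}
    for policy in ("lru", "fifo"):
        resident = simulate(policy, capacity, trace)
        # Under LRU the re-touched blocks survive; under FIFO they do not.
        results[policy] = all(b in resident for b in retouch)
    return results
```

The real tool runs such probe patterns against a live OS and measures which
reads hit the buffer cache, but the discriminating logic is the same.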
-
Abstract
The self-tuning, low-overhead, scan-resistant adaptive replacement
cache algorithm outperforms the least-recently-used algorithm by
dynamically responding to changing access patterns and continually
balancing between workload recency and frequency features. Caching, a
fundamental metaphor in modern computing, finds wide application in
storage systems, databases, Web servers, middleware, processors, file
systems, disk drives, redundant array of independent disks (RAID)
controllers, operating systems, and other applications such as data
compression and list updating. In a two-level memory hierarchy, a
cache performs faster than auxiliary storage, but is more expensive.
Cost concerns thus usually limit cache size to a fraction of the
auxiliary memory’s …

Outperforming LRU with an adaptive replacement cache algorithm
- by S. Modha ― 2004 ― IEEE Computer
- Cited by 11 (1 self)
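For reference, the least-recently-used baseline this article's title refers
to fits in a few lines; the minimal sketch below (not from the article)
shows why a one-time sequential scan pollutes an LRU cache, the weakness
the adaptive policy addresses.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: on a hit the block moves to the MRU position,
    on a miss the LRU block is evicted to make room."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def access(self, key):
        """Return True on a hit, False on a miss."""
        if key in self.pages:
            self.pages.move_to_end(key)       # refresh recency on a hit
            return True
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)    # evict the least recently used block
        self.pages[key] = None
        return False
```

A single sequential scan of cold blocks evicts the entire hot working set,
since LRU treats each scanned block as the most recent; ARC's ghost lists
let it detect and resist exactly this pattern.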
-
Abstract
RAID
storage arrays often possess gigabytes of RAM for caching disk blocks.
Currently, most RAID systems use LRU or LRU-like policies to manage
these caches. Since these array caches do not recognize the presence of
file system buffer caches, they redundantly retain many of the same
blocks as those cached by the file system, thereby wasting precious
cache space. In this paper, we introduce X-RAY, an exclusive RAID array
caching mechanism. X-RAY achieves a high degree of (but not perfect)
exclusivity through gray-box methods: by observing which files have
been accessed through updates to file system meta-data, X-RAY
constructs an approximate image of the contents of the file system
cache and uses that information to determine the exclusive set of
blocks that should be cached by the array. We use microbenchmarks to
demonstrate that X-RAY’s prediction of the file system buffer cache
contents is highly accurate, and trace-based simulation to show that
X-RAY considerably outperforms LRU and performs as well as other more
invasive approaches. The main strength of the X-RAY approach is that it
is easy to deploy: all performance gains are achieved without changes
to the SCSI protocol or the file system above.

X-RAY: A Non-Invasive Exclusive Caching Mechanism for RAIDs
- by Lakshmi N. Bairavasundaram, Muthian Sivathanu, Andrea C.
Arpaci-Dusseau, Remzi H. Arpaci-Dusseau ― 2004 ― In Proceedings of the
31st Annual International Symposium on Computer Architecture (ISCA ’04)
- Cited by 11 (7 self)
-
Abstract
Multimedia
streaming applications can consume a significant amount of server and
network resources. Periodic broadcast and patching are two approaches
that use multicast transmission and client buffering in innovative ways
to reduce server and network load, while at the same time allowing
asynchronous access to multimedia streams by a large number of clients.
Current research in this area has focused primarily on the algorithmic
aspects of these approaches, with evaluation performed via analysis or
simulation. In this paper, we describe the design and implementation of
a flexible streaming video server and client testbed that implements
both periodic broadcast and patching, and explore the issues that arise
when implementing these algorithms. We present measurements detailing
the overheads associated with the various server components (signaling,
transmission schedule computation, data retrieval and transmission),
the interactions between the various components of the architecture,
and the overall end-to-end performance. We also discuss the importance
of an appropriate server video segment caching policy. We conclude with
a discussion of the insights gained from our implementation and
experimental evaluation.

Periodic broadcast and patching services - implementation, measurement,
and analysis in an internet streaming video testbed
- by Michael K. Bradshaw, Bing Wang,
Subhabrata Sen, Lixin Gao, Jim Kurose, Prashant Shenoy, Don Towsley ―
2001 ― In Proc. of ACM Multimedia’01
- Cited by 10 (1 self)
-
Abstract
Buffer
caches are commonly used in servers to reduce the number of slow disk
accesses or network messages. These buffer caches form a multilevel
buffer cache hierarchy. In such a hierarchy, second-level buffer caches
have different access patterns from first-level buffer caches because
accesses to a second-level are actually misses from a first-level.
Therefore, commonly used cache management algorithms such as the Least
Recently Used (LRU) replacement algorithm that work well for
single-level buffer caches may not work well for second-level caches.
This
paper investigates multiple approaches to effectively manage
second-level buffer caches. In particular, it reports our research
results in 1) second-level buffer cache access pattern
characterization, 2) a new local algorithm called Multi-Queue (MQ) that
performs better than nine tested alternative algorithms for
second-level buffer caches, 3) a set of global algorithms that manage a
multilevel buffer cache hierarchy globally and significantly improve
second-level buffer cache hit ratios over corresponding local
algorithms, and 4) implementation and evaluation of these algorithms in
a real storage system connected with commercial database servers
(Microsoft SQL Server and Oracle) running industrial-strength online
transaction processing benchmarks. Index Terms: Cache memories,
storage hierarchy, storage management.

Second-Level Buffer Cache Management
- by Yuanyuan Zhou, Zhifeng Chen, Kai Li ― 2004 ― IEEE Transactions on
Parallel and Distributed Systems
- Cited by 5 (0 self)
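The Multi-Queue (MQ) local algorithm this abstract introduces can be
sketched as follows. This is a simplified reading: it keeps the paper's
placement of a block with access count f in LRU queue floor(log2(f)) and
the Qout ghost buffer that remembers the counts of evicted blocks, but it
omits the lifetime-based expiration the full algorithm uses to demote idle
blocks; the class and parameter names are mine.

```python
import math
from collections import OrderedDict

class MultiQueue:
    """Simplified sketch of the MQ second-level cache policy:
    multiple LRU queues ordered by access frequency, plus a ghost
    buffer (Qout) so frequency survives eviction."""

    def __init__(self, capacity, num_queues=8, qout_size=None):
        self.capacity = capacity
        self.queues = [OrderedDict() for _ in range(num_queues)]
        self.freq = {}                        # access counts of resident blocks
        self.qout = OrderedDict()             # ghost buffer: evicted key -> count
        self.qout_size = qout_size or 4 * capacity

    def _queue_index(self, f):
        # A block with access count f belongs in queue floor(log2(f)).
        return min(int(math.log2(f)), len(self.queues) - 1)

    def access(self, key):
        """Return True on a hit, False on a miss."""
        if key in self.freq:                          # hit: bump count, maybe promote
            f = self.freq[key]
            del self.queues[self._queue_index(f)][key]
            self.freq[key] = f + 1
            self.queues[self._queue_index(f + 1)][key] = None
            return True
        # miss: restore the remembered count from the ghost buffer, if any
        f = self.qout.pop(key, 0) + 1
        if len(self.freq) >= self.capacity:
            self._evict()
        self.freq[key] = f
        self.queues[self._queue_index(f)][key] = None
        return False

    def _evict(self):
        for q in self.queues:                         # lowest non-empty queue first
            if q:
                victim, _ = q.popitem(last=False)     # its LRU block
                self.qout[victim] = self.freq.pop(victim)
                if len(self.qout) > self.qout_size:
                    self.qout.popitem(last=False)
                return
```

Because second-level accesses are first-level misses, recency alone is a
poor signal there; evicting from the lowest-frequency queue and remembering
counts in Qout is how MQ keeps long-term frequent blocks resident.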
-
Abstract
This
paper introduces MICP, a novel multiversion integrated cache
replacement and prefetching algorithm designed for efficient cache and
transaction management in hybrid data delivery networks. MICP takes
into account the dynamically and sporadically changing cost/benefit
ratios of cached and/or disseminated object versions by making cache
replacement and prefetching decisions sensitive to the objects’ access
probabilities, their position in the broadcast cycle, and their update
frequency. Additionally, to eliminate the issue of a newly created or
outdated, but re-cacheable, object version replacing a version that may
not be reacquired from the server, MICP logically divides the client
cache into two variable-sized partitions, namely the REC and the
NON-REC partitions for maintaining re-cacheable and nonre-cacheable
object versions, respectively. Besides judiciously selecting
replacement victims, MICP selectively prefetches popular object
versions from the broadcast channel in order to further improve
transaction response time. A simulation study compares MICP with one
offline and two online cache replacement and prefetching algorithms.
Performance results for the workloads and system settings considered
demonstrate that MICP improves transaction throughput rates by about
18.9% compared to the best performing online algorithm and it performs …

A multi-version cache replacement and prefetching policy for hybrid
data delivery environments
- by André Seifert, Marc H. Scholl ― 2002 ― In 28th International
Conference on Very Large Data Bases (VLDB)
- Cited by 3 (0 self)