Prefetching: A Markov Decision Process Model
Publisher: IISERM
Abstract
Prefetching is a technique used to boost computer execution performance by fetching instructions or data before they are needed. Constraints on network bandwidth and memory force a choice about which specific data to prefetch. Markov Decision Processes (MDPs) are a valuable tool for modelling stochastic decision making. One can view the prefetching process as a random process of a
surfer moving on a graph and a controller trying to ensure that the surfer lands on
a prefetched vertex. This is analogous to the well-known pursuit-evasion game in
graph theory. Our aim is to find the optimal policy for the controller and explore
the characteristics of such a policy. Throughout this thesis, we analyse and study
the properties of the prefetching process modelled on a tree (a rooted, connected acyclic graph)
as an MDP. Using the Bellman Optimality equations, we solve for the optimal
policy of the prefetching MDP under different criteria. Under the finite-horizon criterion, we obtain a granular greedy optimal policy. For the infinite-horizon average-cost criterion, we implement policy iteration, which converges to the same policy as the finite-horizon MDP. The optimal policy depends on the specific shape
and size of the trees. We structure the state space in a way that eases the search for the optimal value function and the corresponding optimal policy for a given state.
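The finite-horizon setup described above can be illustrated with a small, self-contained sketch. Note that the toy tree, the transition probabilities, the one-vertex-per-step prefetch budget, and the unit miss cost are all illustrative assumptions, not details taken from the thesis: a surfer at a vertex moves to a child according to known probabilities, the controller prefetches one child, and a cost of 1 is incurred on a miss. Backward induction on the Bellman optimality equations then recovers a greedy policy (prefetch the most likely child), since the future-cost term does not depend on the chosen action in this simplified model.

```python
# Toy prefetching MDP on a rooted tree (illustrative example).
# Each vertex maps to a list of (child, transition probability) pairs.
tree = {
    "r": [("a", 0.6), ("b", 0.4)],
    "a": [("c", 0.5), ("d", 0.5)],
    "b": [("e", 1.0)],
    "c": [], "d": [], "e": [],
}

def backward_induction(tree, horizon):
    """Solve the finite-horizon Bellman optimality equations.

    V[v] holds the expected number of cache misses starting from v;
    policy[t][v] is the child to prefetch at time t when the surfer is at v.
    """
    V = {v: 0.0 for v in tree}  # terminal values: no further cost
    policy = {}
    for t in range(horizon - 1, -1, -1):
        newV, pol = {}, {}
        for v, kids in tree.items():
            if not kids:  # leaf: the surfer has nowhere to go
                newV[v] = 0.0
                continue
            # Expected future cost is independent of the prefetch choice.
            future = sum(p * V[c] for c, p in kids)
            # Greedy step: prefetching the most probable child
            # minimises the immediate expected miss cost.
            best = max(kids, key=lambda cp: cp[1])[0]
            miss = sum(p for c, p in kids if c != best)
            newV[v] = miss + future
            pol[v] = best
        V = newV
        policy[t] = pol
    return V, policy

values, policy = backward_induction(tree, horizon=2)
```

In this sketch the greedy choice is optimal because, with a single prefetch per step and action-independent dynamics, only the immediate miss probability varies with the action; richer models (prefetch budgets, eviction, correlated requests) would break this separation and make the optimal policy genuinely depend on the tree's shape and size, as the abstract notes.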