This is a system design problem.

Suppose an HTTP request is sent to a server. The server maintains a cache for
fast retrieval. If the link is present in the cache, the server just takes the data
from the cache and returns it to the user; if not, the server fetches that
HTTP address, stores the result in its cache, and returns it to the user.
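
A minimal sketch of that per-server flow (assuming an in-memory dict as the
cache and Python's urllib for the fetch; the helper name handle_request is
just for illustration):

    import urllib.request

    cache = {}   # in-memory cache: url -> response body

    def handle_request(url):
        if url in cache:                        # cache hit: serve stored data
            return cache[url]
        with urllib.request.urlopen(url) as r:  # cache miss: fetch from origin
            data = r.read()
        cache[url] = data                       # store for future requests
        return data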

The problem is that there are many servers and many global caches, as expected in a
distributed system. Now, when a request is received by a server, how can
we organize the global caches so that the server knows which single cache to query,
instead of querying every global cache, which would be inefficient?

One approach could be: maintain 26 global caches. When a request is
received by a server, it checks the web link, say www.*a*bc.com; here the
server queries cache-1. Similarly, cache-2 takes care of links
starting with "b", e.g. www.*b*bc.com, and so on (see the sketch below).

The above method avoids duplication across caches, but it will not be very
efficient, since some caches may see a much higher query rate than others
(the distribution of links over first letters is skewed).


Any other approaches?
