I'm facing a bit of the same problem, but with PHP. I followed the
php.net example here, but the getMulti example doesn't seem to work.
I'm getting a fatal error: Call to undefined method getMulti().

http://us.php.net/manual/en/memcached.getmulti.php

I've verified via phpinfo() that memcache support is compiled into PHP,
though I'm not sure whether that's the older Memcache extension or the
newer Memcached extension (pecl/memcached) that actually provides the
Memcached class and getMulti().

thanks,
JJ

<?php
$m = new Memcached();
$m->addServer('localhost', 11211);

$data = array(
    'foo' => 'foo-data',
    'bar' => 'bar-data',
    'baz' => 'baz-data',
    'lol' => 'lol-data',
    'kek' => 'kek-data',
);

// Store all five keys in one round trip, with a one-hour TTL.
$m->setMulti($data, 3600);

$null = null; // receives CAS tokens by reference
$keys = array_keys($data);
$keys[] = 'zoo'; // a deliberately missing key
$got = $m->getMulti($keys, $null, Memcached::GET_PRESERVE_ORDER);

foreach ($got as $k => $v) {
    echo "$k $v\n";
}
?>



On Fri, Aug 21, 2009 at 11:25 AM, Jeremy Dunck <[email protected]> wrote:

>
> On Fri, Aug 21, 2009 at 12:10 PM, Bill Moseley<[email protected]> wrote:
> > I'm perhaps more curious how people use multiple gets -- or really the
> > design of an web application that supports this.
>
> I use it for summing a bunch of per-minute keys.  Or getting a bunch
> of object IDs from some other source, then getting all the objects in
> one hop from memcached.
>
> You're right that not all keys are on the same server, but you'll
> still do n/m hops instead of n, and a good client will do the n/m hops
> concurrently.
>
> > So, I'm curious about the design of an application that supports
> > gathering up a number of (perhaps unrelated) cache requests and fetch
> > all at once.  Then fill in the cache where there are cache-misses.
> > Am I misunderstanding how people use this feature?
>
> Yeah, I'm interested in this use case, too.  I haven't actually done
> it yet, but it seems like latency could be cut down by making
> concurrent service-oriented requests, i.e. a callable that returns the
> source object on a cache miss, code to gather all the hit-or-miss
> paths, and a single big multi-get/function call near the end to
> concurrently do all the work.
>
> a = cache.get('x')
> if not a:
>   a = <expensive>
> b = cache.get('y')
> if not b:
>   b = <expensive>
>
> could become:
>
> def miss_x():
>   return <expensive>
> def miss_y():
>   return <expensive>
> mux.reg('x', miss_x)
> mux.reg('y', miss_y)
> ret = mux.get_all()  # actually does the concurrent network hops and
> expensive (threaded?) calls if needed.
>

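The mux pseudocode quoted above could be fleshed out roughly like this. It's a minimal sketch, not a real implementation: `Mux`, `DictCache`, and the `reg`/`get_all` names are hypothetical, and the dict cache stands in for a real client's `get_multi()`/`set_multi()` (python-memcached exposes methods with those names). A real version would also do the miss callables concurrently; this one runs them serially to keep the idea visible.

```python
class Mux:
    """Gather (key, regenerate-callable) pairs, then resolve them all at once."""

    def __init__(self, cache):
        self.cache = cache    # any object with get_multi() / set_multi()
        self.pending = {}     # key -> callable that recomputes the value

    def reg(self, key, miss_fn):
        # Register a key along with the expensive code path for a miss.
        self.pending[key] = miss_fn

    def get_all(self):
        keys = list(self.pending)
        # One multi-get instead of N individual gets.
        hits = self.cache.get_multi(keys)
        # Run the expensive callables only for the keys that missed.
        misses = {k: self.pending[k]() for k in keys if k not in hits}
        if misses:
            # Warm the cache so the next caller hits.
            self.cache.set_multi(misses)
        self.pending.clear()
        return {**hits, **misses}


class DictCache:
    """In-process stand-in for a memcached client, for illustration only."""

    def __init__(self):
        self.store = {}

    def get_multi(self, keys):
        return {k: self.store[k] for k in keys if k in self.store}

    def set_multi(self, mapping):
        self.store.update(mapping)
```

Usage mirrors the quoted sketch: register 'x' and 'y' with their miss functions, call `get_all()`, and read both values out of the returned dict; keys already cached never invoke their callable.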