Take a look at ehcache features here... http://ehcache.org/about/features

On second thoughts, please take a real hard look at your architecture.
- Why do you need to look at 4M keys per second?
- Why does only one server need to look at 4M keys per second? Can't you
have, say, 4 servers each looking at 1M keys, and your actual server looking
only at the "processed results" from those servers? (See the routing sketch
below.)
- Think Map-reduce.
- Think Partitioning.
- If you need to find the highest, lowest or whatever function evaluated
over those 4M keys, then why evaluate it every second? Why not design a data
structure which does the calculation when "inserting" the keys, so that your
"answer" is just a lookup? (See the aggregation sketch below.)

If you can share some details about what you want to accomplish, we can
help you better.

thanks!
rohitk


On Tue, Apr 17, 2012 at 3:25 PM, yunfeng sun <[email protected]> wrote:

> We have 2 applications to access the memory cache
>
> One is a data processing application – it needs 4M qps and can be hosted on
> the same physical server as the memory cache.
>
> Another is a web app – it needs random access to the memory cache,
> with at most a few thousand qps, over the network.
>
> Sorry, we don’t know too much about ehcache. Will it provide a network-based
> client API like memcached does?
>
> Thanks again!
>
> On Tuesday, April 17, 2012 5:28:17 PM UTC+8, Rohit Karlupia wrote:
>>
>> In that case you are better off using some in-memory Java cache... like
>> ehcache, or a simple expiry mechanism over a plain HashMap. It will save
>> you the cost of serialization.
>>
>> rohitk
>> On Apr 17, 2012 2:53 PM, "yunfeng sun" <[email protected]> wrote:
>>
>>> Thanks rohitk!  That’s a very good point.
>>>
>>> Is it possible to put the application and memcached on the same physical
>>> machine and have the application talk to memcached directly (like IPC)
>>> without going through the network stack?
>>>
>>> On Tuesday, April 17, 2012 4:41:49 PM UTC+8, Rohit Karlupia wrote:
>>>>
>>>> As per your calculation you would be transferring 4M * 2K, i.e. about
>>>> 8 GB of data per second. That is approx 64 Gbps of bandwidth. The network
>>>> is going to be your biggest problem, not memcached.
>>>>
>>>> rohitk
>>>> On Apr 17, 2012 7:04 AM, "yunfeng sun" <[email protected]> wrote:
>>>>
>>>>> Dear Dormando,
>>>>> Your reply is very helpful!!
>>>>> The question is just based on our limited knowledge of memcached.
>>>>> We will do more investigation with your guidance above.
>>>>> Big Thanks again!!
>>>>>
>>>>> Yunfeng Sun
>>>>>
>>>>> On Tuesday, April 17, 2012 9:02:19 AM UTC+8, Dormando wrote:
>>>>>>
>>>>>> > The Java application needs to Get() once and set() once for each
>>>>>> changed pair; that will be 50M*40%*2 = 4M qps (queries per second).
>>>>>> >
>>>>>> > We tested memcached - which shows very limited qps.
>>>>>> > Our benchmarking is very similar to the results shown here:
>>>>>> http://xmemcached.googlecode.com/svn/trunk/benchmark/benchmark.html
>>>>>> >
>>>>>> > Around 10,000 qps is the limitation of one memcached server.
>>>>>>
>>>>>> Just to be completely clear: "10,000 qps" in your test is the limit of
>>>>>> *one java thread client*; the limit of the server is nowhere near that.
>>>>>> If you started ten client threads, each doing gets/sets, you will likely
>>>>>> get 100,000 qps.
>>>>>>
>>>>>> If you edit your java code and fetch 100 keys at once with multiget,
>>>>>> then set 100 keys (using binary protocol or ASCII noreply for the sets),
>>>>>> it will get even faster still.
>>>>>>
>>>>>> Then you do all the other stuff I said. I'd be surprised if you found
>>>>>> any daemon that's faster than memcached, though.
>>>>>>
>>>>>>
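
For anyone wanting to try the multiget/batching advice above, a rough,
illustrative sketch using the spymemcached client (the client choice,
address, expiry and key names are assumptions on my part; the advice itself
is client-agnostic):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    import net.spy.memcached.AddrUtil;
    import net.spy.memcached.BinaryConnectionFactory;
    import net.spy.memcached.MemcachedClient;

    public class BatchedAccess {
        public static void main(String[] args) throws Exception {
            // Binary protocol reduces per-operation overhead versus ASCII.
            MemcachedClient client = new MemcachedClient(
                    new BinaryConnectionFactory(),
                    AddrUtil.getAddresses("127.0.0.1:11211"));

            // Fetch 100 keys in one multiget instead of 100 round trips.
            List<String> keys = new ArrayList<>();
            for (int i = 0; i < 100; i++) {
                keys.add("key:" + i);
            }
            Map<String, Object> found = client.getBulk(keys);

            // Sets are asynchronous; issue the whole batch without waiting
            // on each one individually.
            for (String key : keys) {
                client.set(key, 3600, "value-for-" + key);
            }

            System.out.println("fetched " + found.size() + " of " + keys.size() + " keys");
            client.shutdown();
        }
    }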
