The 85 seconds was because of network latency (I was using EC2 from my
computer; the ping time alone was 350 ms).

Perhaps for x items it was taking x * 350 ms to make the calls, one round
trip per item, while with mongo it was sending all the data in one go.
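That latency theory roughly checks out as arithmetic. A quick sketch using the ping time and record count from the earlier numbers (mine, not part of the original benchmark script):

```python
# Rough sanity check: if each memcached set/get is a separate synchronous
# round trip, n operations over a ~350 ms link cost about n * RTT.
RTT_SECONDS = 0.350    # ping time reported above
NUM_RECORDS = 220      # records in the original benchmark

expected_seconds = NUM_RECORDS * RTT_SECONDS
print(f"expected cost: {expected_seconds:.1f} s")  # → expected cost: 77.0 s
```

That predicted ~77 s is within roughly 10% of the 83-85 s measured, which fits the idea that every call paid a full round trip.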

So I ran the script on the server itself:
storing in db  0.108298063278  for  1487  items
storing in memcached  0.208426952362
reading from db  0.0738799571991
reading from memcached  0.145488023758

What am I doing wrong here?

Thanks
Neeraj


On Aug 17, 11:00 pm, Neeraj Agarwal <[email protected]> wrote:
> Yep, I'm printing back the results. Even if MongoDB doesn't verify the
> writes, the data does exist when I read it back.
>
> I have installed both memcached & mongodb on the same machine (Ubuntu
> 9.10) on the network.
>
> And connecting to it with another machine.
>
> On Aug 17, 10:56 pm, Matt Ingenthron <[email protected]> wrote:
>
> > On 8/17/11 8:44 AM, Neeraj Agarwal wrote:
>
> > > I installed memcached on Ubuntu box. Installed MongoDB too on the same
> > > machine to compare the performance for these two.
>
> > By default, mongodb doesn't check for responses at all.  It just sends
> > requests over.  That *could* be playing a role here.
>
> > The "reading from" numbers make even less sense, though.  Are you
> > verifying that you read it back?
>
> > Something seems broken for sure with 220 records in 85 seconds.
>
> > > Storing in MongoDB  0.0940001010895  for  220  records
> > > Storing in memcached  83.2030000687
> > > Reading from MongoDB  0.0309998989105
> > > Reading from memcached  85.3599998951
>
> > > All time in seconds.
>
> > > I'm using the python library
> > > http://www.tummy.com/Community/software/python-memcached/
