Re: multiget_slice

2010-01-14 Thread Jonathan Ellis
How many keys are you fetching? How many columns for each key?

On Thu, Jan 14, 2010 at 1:49 AM, Suhail Doshi suh...@mixpanel.com wrote:
> I've been seeing multiget_slice take an extremely long time:
> 2010-01-14 07:44:00,513 INFO -- Cassandra, delay: 3.64020800591

Re: multiget_slice

2010-01-14 Thread Suhail Doshi
Right now it's ~5-10 keys, with 5 columns per key. Later it will be 64 keys (max) with 200 columns per key, worst case.

Suhail

On Thu, Jan 14, 2010 at 9:45 AM, Jonathan Ellis jbel...@gmail.com wrote:
> How many keys are you fetching? How many columns for each key?
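For illustration, a minimal Python sketch of issuing and timing a multiget of that shape, using the later pycassa client (whose multiget wraps multiget_slice). The keyspace, column family, server, and key names below are placeholders, not the actual setup in this thread:

    import time
    import pycassa

    # Placeholder cluster/schema names -- the real ones are not shown in the thread.
    pool = pycassa.ConnectionPool('Keyspace1', server_list=['localhost:9160'])
    cf = pycassa.ColumnFamily(pool, 'Events')

    # Worst case described above: 64 keys, up to 200 columns per key.
    keys = ['key%d' % i for i in range(64)]
    start = time.time()
    rows = cf.multiget(keys, column_count=200)  # pycassa batches the keys into multiget_slice calls
    print('fetched %d rows in %.3f s' % (len(rows), time.time() - start))

Timing the call on the client side like this helps separate server-side read latency from client-side deserialization, which is the question raised in the next message.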

Re: multiget_slice

2010-01-14 Thread Suhail Doshi
But as you can see, the value per column is just a byte, so the time it takes to de-serialize the columns shouldn't be horrid. Hopefully that's the right thinking.

On Thu, Jan 14, 2010 at 10:06 AM, Suhail Doshi digitalwarf...@gmail.com wrote:
> Right now it's ~5-10 keys, with 5 columns per key. Later it will be 64 keys (max) with 200 columns per key, worst case.

Re: multiget_slice

2010-01-14 Thread Suhail Doshi
Looking at my data directory: 14 GB. Just the Index.db-based files: 4.5 GB. Yes, only one node so far.

vmstat -n 1 -S m
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs  us sy id wa
 0  0  22585  32

Re: multiget_slice

2010-01-14 Thread Jonathan Ellis
It sounds like you just don't have enough RAM for the OS to cache your hot data set, so you are getting killed on disk seeks. iostat -x 5 (for example) during load should verify this.

On Thu, Jan 14, 2010 at 11:19 AM, Suhail Doshi digitalwarf...@gmail.com wrote:
> Looking at my data directory: 14 GB. Just the Index.db-based files: 4.5 GB.
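One rough way to check whether the hot data set can fit in the page cache is to compare the on-disk data size against total RAM and the current "Cached" figure. A small, Linux-only sketch; the data directory path is an assumption, so point it at whatever your storage config actually uses:

    import os

    DATA_DIR = '/var/lib/cassandra/data'  # assumed location; adjust to your data directory

    def dir_size_bytes(path):
        """Sum the sizes of all regular files under path."""
        total = 0
        for root, _, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass  # file removed mid-walk (e.g. during compaction); skip it
        return total

    def meminfo_kb(field):
        """Read a single field, in kB, from /proc/meminfo."""
        with open('/proc/meminfo') as fh:
            for line in fh:
                if line.startswith(field + ':'):
                    return int(line.split()[1])
        return 0

    print('data on disk: %.1f GB' % (dir_size_bytes(DATA_DIR) / 1e9))
    print('total RAM:    %.1f GB' % (meminfo_kb('MemTotal') * 1024 / 1e9))
    print('page cache:   %.1f GB' % (meminfo_kb('Cached') * 1024 / 1e9))

If the data you read regularly is much larger than what the page cache can hold, random reads fall through to disk and iostat will show high await and %util, as in the output below.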

Re: multiget_slice

2010-01-14 Thread Suhail Doshi
Yeah, I think you're right:

Device:  rrqm/s  wrqm/s     r/s    w/s    rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
sda1      34.00   70.00  409.60  11.20  22596.80  649.60     55.24     32.61  77.67   2.38 100.00
sda2       0.00    0.00    0.00   0.00      0.00