network bandwidth question

2011-07-20 Thread Arijit Mukherjee
Hi All We're trying to set up a Cassandra cluster (initially with 3 nodes). Each node will generate data at 32 MB per second. What would be the likely network usage for this (say with a replication factor of 3)? I mean, if I use simple arithmetic, I can say 32 MBps per node, and hence 96 MBps in
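The simple arithmetic can be pushed one step further. A rough model, assuming each coordinator keeps one replica locally and streams the remaining RF−1 copies to other nodes, and ignoring client traffic, gossip, read repair, and compaction overhead:

```java
// Back-of-the-envelope write-bandwidth model for the 3-node, RF=3
// scenario above. Assumes the coordinator stores one replica locally
// and ships RF-1 copies over the network.
public class BandwidthEstimate {

    // Raw data entering the cluster per second (MB/s).
    static double clusterIngest(int nodes, double perNodeMBps) {
        return nodes * perNodeMBps;
    }

    // Total bytes written across all replicas per second (MB/s).
    static double totalWritten(int nodes, double perNodeMBps, int rf) {
        return clusterIngest(nodes, perNodeMBps) * rf;
    }

    // Inter-node replication traffic per second (MB/s).
    static double interNode(int nodes, double perNodeMBps, int rf) {
        return clusterIngest(nodes, perNodeMBps) * (rf - 1);
    }

    public static void main(String[] args) {
        System.out.println(clusterIngest(3, 32.0));   // 96.0
        System.out.println(totalWritten(3, 32.0, 3)); // 288.0
        System.out.println(interNode(3, 32.0, 3));    // 192.0
    }
}
```

So 96 MB/s is only the raw ingest; with RF=3 the replicas add up to 288 MB/s of writes cluster-wide, of which roughly 192 MB/s crosses the network for replication alone.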

Re: performance degradation in cluster

2011-02-03 Thread Arijit Mukherjee
Hi I'll explain a bit. I'm working with Abhinav. We have an application, earlier based on Lucene, which would index a huge volume of data and later use the indices to fetch data and perform a fuzzy matching operation. We wanted to use Cassandra primarily because of the

Re: client threads locked up - JIRA ISSUE 1594

2011-01-26 Thread Arijit Mukherjee
On Fri, Jan 21, 2011 at 3:53 AM, Arijit Mukherjee ariji...@gmail.com wrote: Hi All I'm facing the same issue as this one mentioned here - https://issues.apache.org/jira/browse/CASSANDRA-1594 Is there any solution or work-around for this? Regards Arijit -- And when the night is cloudy

Re: client threads locked up - JIRA ISSUE 1594

2011-01-24 Thread Arijit Mukherjee
Arijit Mukherjee ariji...@gmail.com wrote: Hi All I'm facing the same issue as this one mentioned here - https://issues.apache.org/jira/browse/CASSANDRA-1594 Is there any solution or work-around for this? Regards Arijit -- And when the night is cloudy, There is still a light that shines

client threads locked up - JIRA ISSUE 1594

2011-01-21 Thread Arijit Mukherjee
Hi All I'm facing the same issue as this one mentioned here - https://issues.apache.org/jira/browse/CASSANDRA-1594 Is there any solution or work-around for this? Regards Arijit -- And when the night is cloudy, There is still a light that shines on me, Shine on until tomorrow, let it be.

Re: Why my posts are marked as spam?

2011-01-11 Thread Arijit Mukherjee
I think this happens for RTF. Some of the mails in the post are RTF, and the reply button creates an RTF reply - that's when it happens. Wonder how the mail to which I replied was in RTF... Arijit On 12 January 2011 05:28, Oleg Tsvinev oleg.tsvi...@gmail.com wrote: Whatever I do, it happens :(

Re: how to do a get_range_slices where all keys start with same string

2011-01-11 Thread Arijit Mukherjee
I have a follow-on question on this. I have a super column family like this: <ColumnFamily Name="EventSpace" CompareWith="TimeUUIDType" CompareSubcolumnsWith="BytesType" ColumnType="Super"/> I store some events keyed by a subscriber id, and for each such row, I have a number of super columns which are
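The layout the schema above describes — super columns ordered by TimeUUIDType under each subscriber row — behaves like a sorted map, so a SliceRange over a time window is equivalent to a sub-map lookup. A plain-Java sketch (longs stand in for TimeUUIDs, which also sort by time; the event names are illustrative, not from the thread):

```java
import java.util.*;

// In-memory sketch of one super column row: super column "names" are
// timestamps, each holding a set of subcolumns. A SliceRange over
// TimeUUIDType corresponds to subMap() on this sorted structure.
public class EventSlice {
    static final NavigableMap<Long, Map<String, String>> row = new TreeMap<>();

    static void addEvent(long ts, String type, String detail) {
        row.computeIfAbsent(ts, k -> new HashMap<>()).put(type, detail);
    }

    // All events for this subscriber in [from, to) -- the analogue of
    // get_slice with a SliceRange between two TimeUUID bounds.
    static SortedMap<Long, Map<String, String>> slice(long from, long to) {
        return row.subMap(from, true, to, false);
    }

    public static void main(String[] args) {
        addEvent(100L, "call", "A->B");
        addEvent(200L, "sms", "A->C");
        addEvent(300L, "call", "B->A");
        System.out.println(slice(100L, 300L).size()); // 2
    }
}
```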

TimeUUID question

2011-01-07 Thread Arijit Mukherjee
Hi I'm using the piece of code given in the FAQ (http://wiki.apache.org/cassandra/FAQ#working_with_timeuuid_in_java) to convert a Date to UUID, and then trying to convert it back (using the example code given in Hector TimeUUIDUtils - convert the UUID to long (getTimeFromUUID) and then convert it
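The FAQ code and Hector's TimeUUIDUtils both hinge on the same arithmetic: a version-1 UUID timestamp counts 100-nanosecond intervals since 1582-10-15, not Unix milliseconds. A self-contained sketch of the round trip (the epoch-offset constant is the standard one from RFC 4122; a common reason the conversion "doesn't work" is comparing the raw UUID timestamp directly against System.currentTimeMillis()):

```java
// Round-trip between Unix millis and the version-1 UUID time scale.
// A v1 UUID timestamp counts 100-ns intervals since 1582-10-15; the
// constant below is the gap between that epoch and 1970-01-01.
public class TimeUuidMath {
    static final long UUID_EPOCH_OFFSET = 0x01b21dd213814000L;

    static long toUuidTime(long unixMillis) {
        return unixMillis * 10_000L + UUID_EPOCH_OFFSET;
    }

    static long toUnixMillis(long uuidTime) {
        return (uuidTime - UUID_EPOCH_OFFSET) / 10_000L;
    }

    public static void main(String[] args) {
        long millis = 1294358400000L; // 2011-01-07 00:00:00 UTC
        long uuidTime = toUuidTime(millis);
        System.out.println(toUnixMillis(uuidTime) == millis); // true
    }
}
```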

quick question about super columns

2011-01-06 Thread Arijit Mukherjee
Hi I've a quick question about supercolumns. Say I've a structure like this (based on the super column family structure mentioned in WTF is a SuperColumn): EventRecord = { eventKey1: { e1-ts1: {set of columns}, e1-ts2: {set of columns}, ... e1-tsn: {set of
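The nested braces above map directly onto two levels of maps: row key, then super column name, then column name. A sketch of that shape in plain Java (the subscriber/column names are illustrative):

```java
import java.util.*;

// The EventRecord layout above, modelled as nested maps:
// rowKey -> superColumnName (e.g. a timestamp) -> column -> value.
public class EventRecord {
    static final Map<String, SortedMap<String, Map<String, String>>> store = new HashMap<>();

    static void put(String rowKey, String superCol, String col, String value) {
        store.computeIfAbsent(rowKey, k -> new TreeMap<>())
             .computeIfAbsent(superCol, k -> new HashMap<>())
             .put(col, value);
    }

    static String get(String rowKey, String superCol, String col) {
        return store.getOrDefault(rowKey, Collections.emptySortedMap())
                    .getOrDefault(superCol, Collections.emptyMap())
                    .get(col);
    }

    public static void main(String[] args) {
        put("subscriber-1", "ts-001", "calledParty", "B");
        System.out.println(get("subscriber-1", "ts-001", "calledParty")); // B
    }
}
```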

Re: quick question about super columns

2011-01-06 Thread Arijit Mukherjee
the Thrift APIs. I attempted to use Hector, but got myself into more confusion. Arijit On 7 January 2011 11:44, Roshan Dawrani roshandawr...@gmail.com wrote: On Fri, Jan 7, 2011 at 11:39 AM, Arijit Mukherjee ariji...@gmail.com wrote: Hi I've a quick question about supercolumns. EventRecord

About a drastic change in performance

2010-12-07 Thread Arijit Mukherjee
Hi All I was building an application which stores some telecom call records in a Cassandra store and later performs some analysis on them. I created two versions, (1) - where the key is of the form A|B where A and B are two mobile numbers and A calls B, and (2) - where the key is of the form
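A quick sketch of the two key schemes the thread compares, as pure string handling (the pipe separator is from the post; the helper names are mine). Scheme 1 spreads the same traffic over many more, smaller rows than scheme 2:

```java
// The two key schemes compared above: (1) one row per caller|callee
// pair, (2) one row per caller.
public class KeySchemes {
    static String pairKey(String caller, String callee) {
        return caller + "|" + callee;   // scheme 1: "A|B"
    }

    static String callerKey(String caller) {
        return caller;                  // scheme 2: "A"
    }

    static String[] parsePairKey(String key) {
        return key.split("\\|");
    }

    public static void main(String[] args) {
        String k = pairKey("9198765", "9123456");
        System.out.println(k);                  // 9198765|9123456
        System.out.println(parsePairKey(k)[1]); // 9123456
    }
}
```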

Re: About a drastic change in performance

2010-12-07 Thread Arijit Mukherjee
or try to find a list of all records matching certain criteria? Is the Hadoop approach the only alternative? Arijit On 7 December 2010 15:41, Arijit Mukherjee ariji...@gmail.com wrote: Hi All I was building an application which stores some telecom call records in a Cassandra store and later

partial matching of keys

2010-11-29 Thread Arijit Mukherjee
Hi All I was wondering if it is possible to match keys partially while searching in Cassandra. I have a requirement where I'm storing a large number of records, the key being something like A|B|T where A and B are mobile numbers and T is the time-stamp (the time when A called B). Such format
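With keys like A|B|T, partial matching only works as a range query if the partitioner preserves key order (OrderPreservingPartitioner in 0.6; RandomPartitioner hashes keys and cannot do this). The usual trick is to turn a prefix into a [start, end) key range. A sketch, assuming plain string comparison of keys:

```java
// Turn a key prefix like "A|B|" into a range [prefix, prefix+'\uffff')
// usable as the start/end keys of get_range_slices under an
// order-preserving partitioner: every key starting with the prefix
// sorts inside this range.
public class PrefixRange {
    static String rangeStart(String prefix) {
        return prefix;
    }

    static String rangeEnd(String prefix) {
        return prefix + '\uffff'; // sorts after any key with this prefix
    }

    static boolean inRange(String key, String prefix) {
        return key.compareTo(rangeStart(prefix)) >= 0
            && key.compareTo(rangeEnd(prefix)) < 0;
    }

    public static void main(String[] args) {
        System.out.println(inRange("9198|9123|20101129T1200", "9198|9123|")); // true
        System.out.println(inRange("9198|9999|20101129T1200", "9198|9123|")); // false
    }
}
```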

Re: Cassandra newbie question

2010-10-27 Thread Arijit Mukherjee
gdusba...@gmail.com wrote: On Mon, Oct 11, 2010 at 04:01, Arijit Mukherjee ariji...@gmail.com wrote: Hi All I've just started reading about Cassandra and writing simple tests using Cassandra 0.6.5 to see if we can use it for our product. I have a data store with a set of columns, like C1, C2, C3

Re: Cassandra newbie question

2010-10-27 Thread Arijit Mukherjee
could prove to be a bottleneck. Am I correct in my thinking? Regards Arijit On 27 October 2010 18:49, Gary Dusbabek gdusba...@gmail.com wrote: On Wed, Oct 27, 2010 at 03:24, Arijit Mukherjee ariji...@gmail.com wrote: Hi All I've another related question. I am using a stream of records

Cassandra newbie question

2010-10-11 Thread Arijit Mukherjee
Hi All I've just started reading about Cassandra and writing simple tests using Cassandra 0.6.5 to see if we can use it for our product. I have a data store with a set of columns, like C1, C2, C3, and C4, but the columns aren't mandatory. For example, there can be a list of (k,v) pairs with only
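Cassandra rows are sparse by design: a row stores only the columns actually written, so optional C1..C4 need no NULL placeholders. A sketch of building a per-row column map that simply omits absent values (the helper name is illustrative):

```java
import java.util.*;

// Optional columns C1..C4: build the column map for a row by skipping
// null values entirely -- mirroring how a Cassandra row only stores
// the columns that were actually inserted.
public class SparseRow {
    static Map<String, String> buildRow(String c1, String c2, String c3, String c4) {
        Map<String, String> columns = new LinkedHashMap<>();
        if (c1 != null) columns.put("C1", c1);
        if (c2 != null) columns.put("C2", c2);
        if (c3 != null) columns.put("C3", c3);
        if (c4 != null) columns.put("C4", c4);
        return columns;
    }

    public static void main(String[] args) {
        Map<String, String> row = buildRow("v1", null, "v3", null);
        System.out.println(row.keySet()); // [C1, C3]
    }
}
```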

Re: Cassandra newbie question

2010-10-11 Thread Arijit Mukherjee
Just a follow-on question to this - would PIG be a good fit for such questions? Arijit On 11 October 2010 14:31, Arijit Mukherjee ariji...@gmail.com wrote: Hi All I've just started reading about Cassandra and writing simple tests using Cassandra 0.6.5 to see if we can use it for our product