/hadoop users and sharing my
experience.
Do get in touch with me if any of you would like to host a meetup/user
group meeting.
-Adi
On Mon, Mar 21, 2011 at 9:02 AM, Geek Talks geektalks@gmail.com wrote:
Hi,
Anyone interested in joining an Apache Cassandra hangout/meetup near
Mumbai-Pune, or in some other format?
-Adi
On Wed, Sep 7, 2011 at 1:09 PM, Hefeng Yuan hfy...@rhapsody.com wrote:
Adi,
The reason we're attempting to add more nodes is to address the
long/simultaneous compactions, i.e. the performance issue, not the storage
issue yet.
We have RF 5 and CL QUORUM for read and write, we have
size. Keep adjusting it until the frequency/size of
flushing becomes satisfactory and hopefully reduces the compaction overhead.
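The advice above is truncated, but it appears to concern per-column-family memtable thresholds. As a hedged sketch (the column family name and values are invented, and the syntax assumes a 0.8-era cassandra-cli), lowering them might look like:

```
update column family Events with memtable_throughput=64 and memtable_operations=0.3;
```

Smaller thresholds mean more frequent but smaller flushes, which is the knob being tuned here.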
-Adi
On Sep 7, 2011, at 10:51 AM, Adi wrote:
On Wed, Sep 7, 2011 at 1:09 PM, Hefeng Yuan hfy...@rhapsody.com wrote:
Adi,
The reason we're attempting to add
saw an OOM on one node after 2 weeks. The heap used was close to the
GC threshold and full GC takes around 80 seconds.
-Adi
2011/8/24 Ernst D Schoen-René er...@peoplebrowsr.com:
So, we're on 8, so I don't think there's a key cache setting. Am I wrong?
Here's my newest crash log:
ERROR
?
-Adi
The seedlist of A is localhost.
Seedlist of B is localhost, A_ipaddr and
seedlist of C is localhost,B_ipaddr,A_ipaddr.
Using localhost (or a node's own IP address for non-seed nodes) is not good
practice.
Try:
Seedlist of A: A_ipaddr.
Seedlist of B: A_ipaddr.
Seedlist of C: A_ipaddr.
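A minimal sketch of that seed layout in each node's cassandra.yaml, assuming a 0.8-era config format and using the placeholder address 10.0.0.1 for A_ipaddr:

```yaml
# Same seed list on A, B, and C: every node points at node A.
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.1"   # placeholder for A_ipaddr
```

Keeping one identical seed list on all nodes avoids the split-gossip problems that per-node localhost seeds can cause.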
by setting it to 0.0.
Running nodetool repair will reduce the chance of inconsistent data, but it
does not mean that read repair will not get triggered.
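The quoted fragment does not name the setting being set to 0.0, but if it refers to read_repair_chance (an assumption on my part), turning probabilistic read repair off for a column family in 0.7/0.8-era cassandra-cli could look like:

```
update column family Users with read_repair_chance=0.0;
```

Users is a placeholder column family name. Note this only disables the probabilistic background repair on reads; consistency-level-driven repair still applies.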
-Adi
You are reading with ONE and the
On Sat, Jul 30, 2011 at 5:04 AM, Philippe watche...@gmail.com wrote:
Hello,
I have a 3-node ring at RF=3
Typo in memtable_troughput. Replace:
update column family columnfamily2 memtable_troughput=155;
with:
update column family columnfamily2 memtable_throughput=155;
On Wed, Jul 27, 2011 at 9:59 AM, lebron james lebron.m...@gmail.com wrote:
Hi!
I need to set memtable_troughput for Cassandra.
I tried to do this with the help
and this can be a separate discussion, but that will
make a Cassandra cluster way too costly; we would have to buy 16 systems
for the same amount of data as opposed to the 4 that we have now, and my IT
director will strangle me.
-Adi
)
org.apache.cassandra.db.ColumnFamilyStore.isKeyInRemainingSSTables( )
org.apache.cassandra.utils.BloomFilter.getHashBuckets( )
org.apache.cassandra.io.sstable.SSTableIdentityIterator.echoData()
netstats does not show anything streaming to/from any of the nodes.
-Adi Pandit
of tens, hundreds, thousands, millions?
I am not looking for any tested numbers; a general suggestion/best-practice
recommendation will suffice.
Thanks.
-Adi
That Amazon paper has some good tips on solving the transactional
gotcha :-)
-Adi
On Fri, Apr 8, 2011 at 3:49 PM, Ed Anuff e...@anuff.com wrote:
If you're just indexing on a single column value and the values have
low cardinality in, say, the 10's - I'd have a wide row for each
cardinal value
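As a toy sketch of that layout (all names invented, plain Python dictionaries standing in for the column-family model): the index is one wide row per cardinal value, with the keys of matching entities stored as column names.

```python
from collections import defaultdict

# Simulate an index column family: one wide row per cardinal value.
# Row key = the indexed column's value; column names = keys of matching rows.
index_cf = defaultdict(dict)

def index_insert(value, entity_key):
    """Add entity_key as a column in the wide row for this cardinal value."""
    index_cf[value][entity_key] = ""  # column value unused; presence is the index

def lookup(value):
    """One row read returns every entity with this value."""
    return sorted(index_cf[value].keys())

# With cardinality in the tens, each wide row stays manageable.
index_insert("gold", "user:1")
index_insert("gold", "user:7")
index_insert("silver", "user:3")
print(lookup("gold"))  # -> ['user:1', 'user:7']
```

The design works because low cardinality bounds the number of rows while each row can hold many columns, so a single slice query answers "all entities with value X".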
On Tue, Mar 22, 2011 at 3:44 PM, ruslan usifov ruslan.usi...@gmail.com wrote:
2011/3/22 Adi adi.pan...@gmail.com
I have been going through the mailing list and compiling suggestions to
address the swapping-due-to-mmap issue.
1) Use JNA (done but)
Are these steps also required:
- Start
I might be doing incorrectly either in schema
definition or the way I am sending the values are welcome.
-Adi
That was it. Thanks thobbs :-) The queries work as expected now.
-Adi
On Thu, Mar 10, 2011 at 1:01 PM, Tyler Hobbs ty...@datastax.com wrote:
I looked again at the original
email: http://mail-archives.apache.org/mod_mbox//cassandra-user/201101.mbox/raw/%3CAANLkTik4Z_6OfvT4ByQ8_kpX_=thxyl39
1) So if your node tokens are set as vertexid_, all keys with the same
prefix will be in the same range.
Adding to Aaron's comment -
This will be the case if you use OrderPreservingPartitioner.
RandomPartitioner (the default) will distribute the tokens randomly across
nodes.
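A rough illustration of the contrast, with invented keys: RandomPartitioner derives tokens from an MD5 hash of the key, while OrderPreservingPartitioner uses the key itself, so only the latter keeps prefix-sharing keys in one token range.

```python
import hashlib

def random_partitioner_token(key: str) -> int:
    # RandomPartitioner derives a token from an MD5 hash of the key,
    # so keys sharing a prefix scatter across the ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def order_preserving_token(key: str) -> str:
    # OrderPreservingPartitioner uses the key itself, so a common
    # prefix keeps related keys in the same token range.
    return key

keys = ["vertex42_a", "vertex42_b", "vertex42_c"]

# Under order preservation the tokens sort exactly like the keys...
assert sorted(order_preserving_token(k) for k in keys) == keys

# ...while MD5 tokens for the same keys are effectively unrelated.
tokens = [random_partitioner_token(k) for k in keys]
print(tokens)
```

This is why range scans over a key prefix only work with an order-preserving partitioner, at the cost of the uneven load distribution mentioned elsewhere in the thread.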
On Mon, Nov 15, 2010
) Is there a way to find the files a node should be having (say the ones
that show up in the stream command) and just scp them to the new node?
Thank you for your time.
-Adi
FYI: I ended up restarting the whole cluster, which in effect decommissioned
the dead node and redistributed the data.
of the Operations wiki page. That actually led to a more unbalanced load
distribution (which the doc warned can happen if the key distribution is not
even).
Any suggestions/pointers are welcome. Thanks.
-Adi