code attached. Somehow it is not working with 1.1.5.
-Vivek
On Mon, Oct 22, 2012 at 5:20 AM, aaron morton aa...@thelastpickle.com wrote:
Yes AFAIK.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 20/10/2012, at 12:15 AM, Vivek
If you are using the default settings I would try to correlate the GC activity
with some application activity before tweaking.
If this is happening on one machine out of 4 ensure that client load is
distributed evenly.
See if the rise in GC activity is related to compaction, repair or an
So the groups are a super column with CategoryId as key, GroupId as
superColumnName and then columns for the group members.
If this is a new project please consider not using Super Columns. They have
some limitations (http://wiki.apache.org/cassandra/CassandraLimitations) and
are often slower.
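For intuition, the super-column layout described above (CategoryId as row key, GroupId as super column name, member columns inside) can be modeled with composite column names instead. A minimal Python sketch of the idea — all names here are illustrative, not from the original mail:

```python
# Sketch: replacing a super column family with composite column names.
# Row key = CategoryId; column name = (GroupId, member) tuple.
from collections import defaultdict

rows = defaultdict(dict)  # row_key -> {(group_id, member): value}

def put(category_id, group_id, member, value):
    rows[category_id][(group_id, member)] = value

def get_group(category_id, group_id):
    # Slicing on the first composite component acts like a super column read.
    return {m: v for (g, m), v in rows[category_id].items() if g == group_id}

put("cat1", "g1", "alice", "member")
put("cat1", "g1", "bob", "member")
put("cat1", "g2", "carol", "member")
print(sorted(get_group("cat1", "g1")))  # ['alice', 'bob']
```

The same slice-by-prefix access pattern is what composite columns give you natively, without the super column limitations linked above.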
How is it not working ?
Can you replicate the problem with the CLI ?
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 22/10/2012, at 7:17 PM, Vivek Mishra mishra.v...@gmail.com wrote:
code attached. Somehow it is not working with 1.1.5.
Well, the last 2 lines of code delete 1 record and insert 2 records; the
first is the deleted one and the second a new record. Output from the command line:
[default@unknown] use bigdata;
Authenticated to keyspace: bigdata
[default@bigdata] list test1;
Using default limit of 100
Using default column limit
Anybody in group got into such issues?
-Vivek
On Fri, Oct 19, 2012 at 3:28 PM, Vivek Mishra mishra.v...@gmail.com wrote:
Ok. I did assume the same. Here is what I have tried to fetch composite
columns via thrift and a CQL query as well!
Not sure why thrift API is returning me column name as
Mixing the two isn't really recommended because of just this kind of
difficulty, but if you must, I would develop against 1.2 since it will
actually validate that the CT encoding you've done manually is valid.
1.1 will just fail silently.
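For reference, the manual CT encoding being discussed packs each component as a 2-byte big-endian length, the raw bytes, then one end-of-component byte — this is my reading of the CompositeType wire format, so treat the sketch as an assumption to verify against your version:

```python
import struct

def composite_encode(components, eoc=0):
    # Each component: 2-byte big-endian length, raw bytes, 1 end-of-component
    # byte. EOC is 0 for a concrete column name; -1/1 mark slice range bounds.
    out = b""
    for i, c in enumerate(components):
        b = c if isinstance(c, bytes) else str(c).encode("utf-8")
        last = (i == len(components) - 1)
        out += struct.pack(">H", len(b)) + b + struct.pack("b", eoc if last else 0)
    return out

print(composite_encode(["a", "b"]).hex())  # 0001610000016200
```

Comparing your hand-rolled bytes against output like this can help spot where 1.1 is silently accepting a malformed name that 1.2 would reject.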
On Mon, Oct 22, 2012 at 6:57 AM, Vivek Mishra
Hi!
I am having the same issue on 1.0.8.
I checked the number of SSTables: two nodes have 1 each and one node has
none.
Thanks,
*Tamar Fraenkel *
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
Fax: +972 2
I figured out the problem. The DELETE query only works if the column used in
the WHERE clause is also the first column used to define the PRIMARY KEY.
-Thomas
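The restriction Thomas ran into makes sense if you picture how rows are located: the first primary key column is the partition key, and rows are routed by hashing it, so a DELETE with nothing to hash has nowhere to go. A toy Python model of that routing (names are illustrative):

```python
# Toy model of why DELETE needs the first primary key column (the partition
# key): rows are located by hashing it, so without it there is nothing to route.
store = {}  # hash(partition_key) -> {clustering_key: row}

def insert(pk, ck, row):
    store.setdefault(hash(pk), {})[ck] = row

def delete(pk):
    # Equivalent to: DELETE FROM t WHERE pk = ?  -- pk is the partition key.
    store.pop(hash(pk), None)

insert("book1", 1, {"title": "x"})
delete("book1")
print("book1 deleted:", hash("book1") not in store)  # book1 deleted: True
```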
From: wang liang [mailto:wla...@gmail.com]
Sent: Monday, October 22, 2012 1:31 AM
To: user@cassandra.apache.org
Subject: Re: DELETE
Hi, I'm hoping to get some help on how to tune our 1.0.x cluster w.r.t. row
caching.
We're using the Netflix Priam client, so unfortunately upgrading to 1.1.x is out
of the question for now.. but until we find a way around that, is there any way
to help determine where the 'sweet spot'
Thanks. But it means I may have to re-write all the stuff the CQL way.
Considering CQL as the future interface for Cassandra, I will implement it
without mixing them.
-Vivek
On Mon, Oct 22, 2012 at 6:32 PM, Jonathan Ellis jbel...@gmail.com wrote:
Mixing the two isn't really recommended
The memory usage was correlated with the size of the data set. The nodes
were a bit unbalanced which is normal due to variations in compactions.
The nodes with the most data used the most memory. All nodes are affected
eventually not just one. The GC was on-going even when the nodes were not
Is it through filter.collateColumns(resolved, iters, Integer.MIN_VALUE) and
then MergeIterator.get(toCollate, fcomp, reducer)? I don't know what
happens after that. How exactly does reconcile get called?
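I haven't traced the 1.1 code path line by line, but the flow being asked about reduces to "merge sorted per-SSTable iterators, group cells with equal names, fold each group through reconcile (highest timestamp wins)". A rough Python sketch of that shape — this mirrors the idea, not the actual Cassandra API:

```python
import heapq
from itertools import groupby
from operator import itemgetter

def reconcile(a, b):
    # Like Column.reconcile(): the cell with the highest timestamp wins.
    # Cells are (name, value, timestamp) tuples in this sketch.
    return a if a[2] >= b[2] else b

def collate(iterators):
    # Roughly what collateColumns + MergeIterator.get + the reducer do:
    # merge sorted iterators by column name, then reduce duplicates.
    merged = heapq.merge(*iterators, key=itemgetter(0))
    result = []
    for _, group in groupby(merged, key=itemgetter(0)):
        cell = None
        for c in group:
            cell = c if cell is None else reconcile(cell, c)
        result.append(cell)
    return result
```

So reconcile is invoked by the reducer each time the merge yields two cells with the same name, one pair at a time.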
On Mon, Oct 22, 2012 at 6:49 AM, aaron morton aa...@thelastpickle.com wrote:
There are two
if a node, X, has a tombstone marking deleted data, when can node X
remove the data - not the tombstone, but the data? i understand the
tombstone cannot be removed until GCGraceSeconds has passed, but it
seems the data could be compacted away at any time.
My understanding is any time from that node. Another node may have a
different existing value and tombstone vs. that existing data (most recent
timestamp wins). I.e. the data is not needed on that node, so compaction
should be getting rid of it, but I never confirmed this… I hope you get
The data does get removed as soon as possible (as soon as it is
compacted with the tombstone that is).
--
Sylvain
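Sylvain's point — data shadowed by a tombstone is dropped at the first compaction that sees both, while the tombstone itself must survive GCGraceSeconds — can be sketched as a toy compaction step (the cell layout and names here are illustrative):

```python
import time

GC_GRACE_SECONDS = 864000  # Cassandra's default gc_grace_seconds: 10 days

def compact(cells, now=None):
    # cells: list of (name, value, timestamp); value None marks a tombstone.
    now = now or time.time()
    latest = {}
    for name, value, ts in cells:
        if name not in latest or ts > latest[name][1]:
            latest[name] = (value, ts)
    out = []
    for name, (value, ts) in latest.items():
        if value is None:
            # The shadowed data is already gone (dropped by the merge above);
            # the tombstone itself is kept until gc_grace has passed.
            if now - ts < GC_GRACE_SECONDS:
                out.append((name, None, ts))
        else:
            out.append((name, value, ts))
    return out

# Data shadowed by a newer tombstone: only the tombstone survives compaction.
print(compact([("c", "v", 100), ("c", None, 200)], now=300))
```

The gc_grace window exists so the tombstone can still reach replicas that missed the delete; dropping it early is what risks deleted data "coming back".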
On Mon, Oct 22, 2012 at 7:03 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
My understanding is any time from that node. Another node may have a
different existing value and
excellent, thx
On Mon, Oct 22, 2012 at 10:13 AM, Sylvain Lebresne sylv...@datastax.com wrote:
The data does get removed as soon as possible (as soon as it is
compacted with the tombstone that is).
--
Sylvain
On Mon, Oct 22, 2012 at 7:03 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
My
Hi,
I have a small 2 node cassandra cluster that seems to be constrained by
read throughput. There are about 100 writes/s and 60 reads/s mostly against
a skinny column family. Here's the cfstats for that family:
SSTable count: 13
Space used (live): 231920026568
Space used (total):
does nodetool cleanup perform a major compaction in the process of
removing unwanted data?
I seem to remember this being the case, but I can't find anything definitive.
On Mon, Oct 22, 2012 at 11:05 AM, feedly team feedly...@gmail.com wrote:
Hi,
I have a small 2 node cassandra cluster that seems to be constrained by
read throughput. There are about 100 writes/s and 60 reads/s mostly against
a skinny column family. Here's the cfstats for that family:
For what it's worth, Cassandra 1.2 will support deleting a slice of
columns, allowing you to specify the first N components of the primary key
in a WHERE clause for a DELETE statement:
https://issues.apache.org/jira/browse/CASSANDRA-3708
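Conceptually, the 1.2 slice DELETE described above removes every row whose composite primary key starts with the supplied prefix of components. A small Python sketch of that semantics (the table layout is illustrative):

```python
# Sketch of what a 1.2-style slice DELETE does conceptually: remove every row
# whose composite primary key starts with the given prefix, as in
#   DELETE FROM t WHERE k1 = ? AND k2 = ?   -- first N key components only
def slice_delete(rows, prefix):
    prefix = tuple(prefix)
    return {key: v for key, v in rows.items() if key[:len(prefix)] != prefix}

rows = {("k1", "a", 1): "x", ("k1", "a", 2): "y", ("k1", "b", 1): "z"}
rows = slice_delete(rows, ("k1", "a"))
print(sorted(rows))  # [('k1', 'b', 1)]
```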
On Mon, Oct 22, 2012 at 8:45 AM, Ryabin, Thomas
AFAIK the IP is not logged.
If you want to check the connection, try lsof.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 22/10/2012, at 9:34 PM, Jean Paul Adant jean.paul.ad...@gmail.com wrote:
Hi all,
How can I log on the server the IP of
I'm not aware of how to track the memory usage for the off heap row cache in
1.0. The memory may show up in something like JConsole. What about seeing how
much os memory is allocated to buffers and working backwards from there?
Anyone else ?
(One thing to be aware of is each CF has its own
The GC was on-going even when the nodes were not compacting or running a
heavy application load -- even when the main app was paused, the GC
continued constantly.
If you restart a node is the onset of GC activity correlated to some event?
As a test we dropped the largest CF and the memory
On 10/22/2012 08:24 PM, aaron morton wrote:
I'm not aware of how to track the memory usage for the off heap row
cache in 1.0. The memory may show up in something like JConsole. What
about seeing how much os memory is allocated to buffers and working
backwards from there?
Anyone else ?
(One
On 10/22/2012 09:05 PM, aaron morton wrote:
# GC tuning options
JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
JVM_OPTS=$JVM_OPTS
Hello, I'm seeing Cassandra behavior that I can't explain, on v1.0.12. I'm
trying to test removing rows after all columns have expired. I've read the
following:
http://wiki.apache.org/cassandra/DistributedDeletes
http://wiki.apache.org/cassandra/MemtableSSTable
Hello,
I'm on version 1.0.11.
I'm seeing this in my system log with occasional frequency:
INFO [GossipTasks:1] 2012-10-23 02:26:34,449 Gossiper.java (line 818)
InetAddress /10.50.10.21 is now dead.
INFO [GossipStage:1] 2012-10-23 02:26:34,620 Gossiper.java (line 804)
InetAddress /10.50.10.21 is
On Oct 22, 2012 11:54 AM, B. Todd Burruss bto...@gmail.com wrote:
does nodetool cleanup perform a major compaction in the process of
removing unwanted data?
No.
On 10/23/2012 01:25 AM, Peter Schuller wrote:
On Oct 22, 2012 11:54 AM, B. Todd Burruss bto...@gmail.com
mailto:bto...@gmail.com wrote:
does nodetool cleanup perform a major compaction in the process of
removing unwanted data?
No.
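The distinction behind Peter's "No.": cleanup rewrites each SSTable individually, keeping only keys the node still owns (after a token move, for example), whereas a major compaction merges all SSTables into one. A toy Python sketch of the difference (structures are illustrative):

```python
# cleanup: per-SSTable rewrite dropping keys outside the node's ranges;
# major compaction: merge every SSTable into a single one.
def cleanup(sstables, owned):
    # No merging across files; each table is rewritten on its own.
    return [{k: v for k, v in t.items() if owned(k)} for t in sstables]

def major_compaction(sstables):
    merged = {}
    for t in sstables:  # later tables win for equal keys in this toy
        merged.update(t)
    return [merged]

tables = [{1: "a", 9: "b"}, {2: "c"}]
print(cleanup(tables, lambda k: k < 5))  # [{1: 'a'}, {2: 'c'}]
print(len(major_compaction(tables)))     # 1
```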
what is the internal memory model used? It sounds like