On Sun, Jun 13, 2010 at 12:53 AM, Torsten Curdt tcu...@vafer.org wrote:
So you want to increment those counters per hit? I don't think there is an atomic increment semantic in Cassandra yet. (Someone else to confirm?)
True for Cassandra 0.6.
Progress continued on it a week or so ago:
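The gap being described here can be sketched without any Cassandra at all. Below is a minimal, hypothetical Python sketch (the `read`/`write` helpers stand in for a client's get/insert calls; none of this is real Cassandra API) showing how a read-modify-write increment loses updates when it is not atomic:

```python
# Hypothetical stand-ins for a client's get()/insert() calls; a plain
# dict plays the role of the counter column on the server.
store = {"hits": 0}

def read(key):
    return store[key]

def write(key, value):
    store[key] = value

# Two clients interleave their read-modify-write increments:
a = read("hits")      # client A reads 0
b = read("hits")      # client B also reads 0
write("hits", a + 1)  # client A writes 1
write("hits", b + 1)  # client B writes 1 again -- A's increment is lost

print(store["hits"])  # 1, even though two increments were issued
```

This lost-update window is exactly what a server-side atomic increment would close.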
No, I do not disable compaction during my inserts. It is weird that minor compaction is triggered less often.
If you were just inserting a lot of data fast, it may be that
background compaction was unable to keep up with the insertion rate.
Simply leaving the node(s) for a while after the insert
We've also seen similar problems
https://issues.apache.org/jira/browse/CASSANDRA-1177
To be clear though: un-*flushed* data is very different from un-*compacted* data, and the above seems to be about unflushed data?
In my test case there was no problem at all flushing data. But my test was
On Sun, Jun 13, 2010 at 12:53 AM, Torsten Curdt tcu...@vafer.org wrote:
<rant>
TBH, while we are using super columns, they somehow feel wrong to me. I would be happier if we could move what we do with super columns into the row key space. But in our case that does not seem to be so easy.
</rant>
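One common way to "move super columns into the row key space" is to fold the super column name into a composite row key. A hedged sketch of the idea (all names here are invented for illustration, not taken from the thread):

```python
# Super-column layout: row key = user, super column name = day,
# subcolumns = events. The flattened layout folds the day into the
# row key instead, leaving ordinary columns.

def super_column_path(user_id, day, event):
    return (user_id, day, event)          # (row, super column, column)

def composite_row_key(user_id, day):
    return f"{user_id}:{day}"             # one row per (user, day)

row, sc, col = super_column_path("user42", "2010-06-13", "login")
flat_row = composite_row_key("user42", "2010-06-13")
print(flat_row)  # user42:2010-06-13
```

The trade-off is presumably why it "does not seem to be so easy": with a delimiter-based key you give up reading all of a user's days out of one row, and range-scanning the composite keys depends on your partitioner.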
On Sat, Jun 12, 2010 at 7:46 PM, Anty anty@gmail.com wrote:
Hi all,
I have a 10-node cluster. After inserting many records into the cluster, I compact each node with nodetool compact.
During the compaction process, something went wrong with one of the 10 nodes: when the size of the compacted
Can you open a new ticket, then? Preferably with the Thrift code involved; I'm not sure what find_by_natural_key or find_all_by_service is translating into. (It looks like just one of those is responsible for the leak.)
On Sun, Jun 13, 2010 at 12:11 PM, Matthew Conway m...@backupify.com wrote:
Not so much worried about temporary breakages, but more about design decisions that are made to enhance Cassandra at the cost of a data format change. So long as the policy here is to preserve backwards compatibility with the on-disk storage format (possibly with an automatic conversion), even
What specifically is driving you to use trunk rather than the stable,
0.6 branch?
On Sun, Jun 13, 2010 at 3:08 PM, Mark Robson mar...@gmail.com wrote:
Range queries I think make them less useful,
Not to my knowledge.
but only work if you're using
OrderPreservingPartitioner. The OPP comes with its own caveats: your nodes are likely to become badly unbalanced,
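The imbalance argument is easy to see numerically. A rough sketch (four "nodes" dividing the token space evenly; MD5 here mirrors what RandomPartitioner does, while the OPP places rows by the raw key):

```python
import hashlib

# 1000 sequential keys: a worst case for an order-preserving scheme.
keys = [f"user{i:05d}" for i in range(1000)]

# OPP: the token IS the key, so sequential keys all cluster in the
# same lexicographic range of the ring (one node takes nearly all of them).
print(sorted(keys)[0], sorted(keys)[-1])

# RandomPartitioner-style: the token is a hash of the key. Split the
# hash space into 4 equal ranges ("nodes") and count rows per range.
buckets = [0, 0, 0, 0]
for k in keys:
    first_nibble = int(hashlib.md5(k.encode()).hexdigest()[0], 16)
    buckets[first_nibble // 4] += 1

print(buckets)  # roughly 250 rows per node
```

Under the hash, load spreads close to evenly no matter how skewed the key distribution is, which is the balance you trade away to get range queries under OPP.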
The ability to dynamically add new column families. Our app is currently under
heavy development, and we will be adding new column families at least once a
week after we have shipped the initial production app. From the existing docs,
it seemed to me that the procedure for changing schema in
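For context on why this is painful in 0.6: column families are declared statically in each node's storage-conf.xml and fixed at startup (live schema changes arrived in a later release), so adding one means editing the file on every node and restarting. A fragment along these lines (keyspace and column family names are invented):

```xml
<!-- storage-conf.xml (Cassandra 0.6): schema is read once at startup -->
<Keyspaces>
  <Keyspace Name="MyApp">
    <ColumnFamily Name="ExistingData" CompareWith="BytesType"/>
    <!-- adding this line requires an edit on every node plus a restart -->
    <ColumnFamily Name="NewWeeklyFamily" CompareWith="UTF8Type"/>
  </Keyspace>
</Keyspaces>
```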
I'm not sure about the client you're using, but I've noticed in the past that an incorrect Thrift stack can make things run slow (like 40 times slower). Check that the network stack wraps the socket in a transport, preferably TBufferedTransport. I'm guessing the client you're using is doing the
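The cost of a missing buffered transport can be demonstrated with the standard library alone. A sketch (a counting stream stands in for the socket; the analogous Thrift fix is wrapping the TSocket in TTransport.TBufferedTransport before building the protocol):

```python
import io

class CountingRaw(io.RawIOBase):
    """Raw stream that counts write() calls reaching it (our fake socket)."""
    def __init__(self):
        super().__init__()
        self.writes = 0
    def writable(self):
        return True
    def write(self, b):
        self.writes += 1
        return len(b)

# Unbuffered: every small protocol write becomes its own I/O call,
# which over a real socket means one syscall (and often one packet) each.
unbuffered = CountingRaw()
for _ in range(100):
    unbuffered.write(b"x")

# Buffered: writes coalesce in memory and hit the stream once on flush.
raw = CountingRaw()
buffered = io.BufferedWriter(raw, buffer_size=4096)
for _ in range(100):
    buffered.write(b"x")
buffered.flush()

print(unbuffered.writes, raw.writes)  # 100 vs 1
```

Thrift's binary protocol emits many tiny writes per call, which is why skipping the buffered wrapper can produce the order-of-magnitude slowdowns described above.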