Hi!
I was trying out the truncate command in cassandra-cli.
http://wiki.apache.org/cassandra/CassandraCli08 says: "A snapshot of the
data is created, which is deleted asynchronously during a 'graveyard'
compaction."
When do graveyard compactions happen? Do I have to trigger them somehow?
The Cassandra "graveyard" sounds like a lot of tombstones that will be
compacted away during a normal compaction.
You can trigger that manually using nodetool.
2012/3/28 Erik Forsberg forsb...@opera.com
Hi!
I was trying out the truncate command in cassandra-cli.
Yes, that is one of the possible solutions to your problem.
When you want to retrieve only the skills of a particular row, just get the
columns with "skill:" as the slice start value.
A suggestion for your example might be to use a ~ instead of : as the
separator. A tilde is used less often in standard
If you use the CompositeColumn it does, but it looked to me in your example
you just used the simple utf8-based solution. My apologies for the
confusion.
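The prefix-slice idea above can be sketched without a live cluster. This is a minimal simulation, assuming plain UTF8-named columns with a ":" separator; the column names ("skill:java" and so on) are hypothetical. Cassandra keeps columns within a row sorted by name, so a get_slice with start="skill:" and a finish just past the prefix returns exactly the "skill" group; here binary search emulates that:

```python
from bisect import bisect_left, bisect_right

# Hypothetical sorted column names of one row (Cassandra stores them sorted).
columns = sorted([
    "name",
    "skill:cassandra",
    "skill:java",
    "skill:python",
    "zip",
])

def slice_by_prefix(cols, prefix):
    """Return the contiguous run of column names starting with prefix,
    the way a get_slice(start=prefix, finish=prefix + high-byte) would."""
    lo = bisect_left(cols, prefix)
    hi = bisect_right(cols, prefix + "\x7f")  # '\x7f' sorts after both ':' and '~'
    return cols[lo:hi]

print(slice_by_prefix(columns, "skill:"))
```

Note that the finish bound is why the separator choice matters: the separator and the byte used to cap the range must sort consistently against the rest of the name.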
2012/3/28 Ben McCann b...@benmccann.com
Hmm. I thought that Cassandra would encode the composite column without
the colon and that it was
I'm leaning towards storing serialized JSON at the moment. It's too bad
Cassandra doesn't have some better way of storing collections or
document-oriented data (e.g. a JsonType queryable with CQL).
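The serialized-JSON approach mentioned above can be sketched as follows; the document and field names are hypothetical. The application does all (de)serialization, since Cassandra 1.0 has no JsonType:

```python
import json

# Store a small document as one opaque column value.
profile = {"name": "Ben", "skills": ["java", "cassandra"]}
value = json.dumps(profile, sort_keys=True)  # string/bytes written to the column

# Reading it back: the whole document must be fetched and parsed client-side;
# individual fields are not addressable or queryable server-side, which is
# exactly the limitation discussed above.
restored = json.loads(value)
print(restored["skills"])
```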
On Wed, Mar 28, 2012 at 1:19 AM, R. Verlangen ro...@us2.nl wrote:
If you use the
yes - but anyway in your example you need a key range query, and that
requires OPP, right?
On Tue, Mar 27, 2012 at 5:13 PM, Guy Incognito dnd1...@gmail.com wrote:
multiget does not require OPP.
On 27/03/2012 09:51, Maciej Miklas wrote:
multiget would require Order Preserving Partitioner, and
RAID0 would help me use the total disk space available at each node more
efficiently, but tests have shown that under write load it behaves much
worse than using separate data dirs, one per disk.
there are different strategies for how RAID0 splits reads; also consider
changing the io scheduler and filesystem
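The separate-data-dirs alternative mentioned above is configured in cassandra.yaml via the `data_file_directories` list. A minimal sketch, with hypothetical mount points (one directory per physical disk, commit log on its own spindle):

```yaml
# cassandra.yaml: one data directory per physical disk instead of RAID0
# (paths are illustrative)
data_file_directories:
    - /mnt/disk1/cassandra/data
    - /mnt/disk2/cassandra/data
commitlog_directory: /mnt/disk3/cassandra/commitlog
```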
Radim,
We are only deleting columns. *Rows are never deleted.*
We are continually adding new columns that are then deleted. *Existing
columns (deleted or otherwise) are never updated.*
Ross
On 28 March 2012 13:51, John Laban j...@pagerduty.com wrote:
(Radim: I'm assuming you mean do not
Hi Radim,
I am hunting for what I believe is a bug in Cassandra and tombstone
handling that may be triggered by our particular application usage.
I appreciate your attempt to help, but without you actually knowing what
our application is doing and why, your advice to change our application is
On 03/28/2012 02:04 PM, Radim Kolar wrote:
RAID0 would help me use the total disk space available at each node
more efficiently, but tests have shown that under write load it
behaves much worse than using separate data dirs, one per disk.
there are different strategies for how RAID0 splits reads,
On Wednesday 28 of March 2012, Igor wrote:
I'm also trying to evaluate different strategies for RAID0 as drive for
cassandra data storage. If I need 2T space to keep node tables, which
drive configuration is better: 1T x 2drives or 500G x 4drives?
Having _similar_ family of HDDs 4x smaller
This email was sent to you by Thomson Reuters, the global news and information
company. Any views expressed in this message are those of the individual
sender, except where the sender specifically states them to be the views of
Thomson Reuters.
We upgraded to 1.0.8, and it looks like the problem is gone.
Thanks for your help,
Daning
On Sun, Mar 25, 2012 at 9:54 AM, aaron morton aa...@thelastpickle.comwrote:
Can you go to those nodes and run describe cluster ? Also check the logs
on the machines that are marked as UNREACHABLE .
A node
Hi,
We are trying to estimate the amount of storage we need for a production
cassandra cluster. While I was doing the calculation, I noticed a very
dramatic difference in terms of storage space used by cassandra data files.
Our previous setup consists of a single-node cassandra 0.8.x with no
Actually, after I read an article on cassandra 1.0 compression just now (
http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-compression), I
am more puzzled. In our schema, we didn't specify any compression options
-- does cassandra 1.0 perform some default compression? or is the data
Hey guys,
We have a fresh 4 node 0.8.10 cluster that we want to pump lots of data into.
The data resides on 5 data machines that are different from Cassandra
nodes. Each of these data nodes has 7 disks where the data resides.
In order to get maximum load performance, we are assigning 7 IPs to
each
well, no. my assumption is that he knows what the 5 itemTypes (or
appropriate corresponding ids) are, so he can do a known 5-rowkey
lookup. if he does not know, then agreed, my proposal is not a great fit.
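The known-rowkey lookup described above can be sketched like this. It assumes a row key of the form "userId:itemType" with activity columns per row; the user id, the five item types, and the data are all hypothetical. The point is that fetching several exact keys (a multiget) works on any partitioner, because no range scan is involved:

```python
USER_ID = "user42"                                        # hypothetical
ITEM_TYPES = ["photo", "comment", "like", "follow", "post"]  # assumed 5 types

# In-memory stand-in for a column family keyed by "userId:itemType".
store = {
    "user42:photo":   {"act1": "uploaded IMG_1.jpg"},
    "user42:comment": {"act2": "nice shot!"},
    "user42:like":    {"act3": "liked a post"},
    "user42:follow":  {},
    "user42:post":    {"act4": "hello world"},
}

def multiget(cf, keys):
    """Fetch several rows by exact key, like Thrift's multiget_slice:
    each key is an independent point lookup, so no OPP is needed."""
    return {k: cf[k] for k in keys if k in cf}

keys = ["%s:%s" % (USER_ID, t) for t in ITEM_TYPES]
result = multiget(store, keys)
print(len(result))
```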
could do (as originally suggested)
userId - itemType:activityId
if you want to keep
Hi,
Here is the stack trace that we get from sstableloader
org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
at
Where the F^$% have the packages for 06x gone?
http://www.apache.org/dist/cassandra/debian/dists/06x/main/binary-amd64/
Is empty. What gives?
We are currently using version 1.0.0-2. Do we still need to migrate to the
latest release of 1.0 before migrating to 1.1? It looks like the
incompatibility only affects 1.0.3-1.0.8.
On Tue, Mar 27, 2012 at 6:42 AM, Benoit Perroud ben...@noisette.ch wrote:
Thanks for the quick feedback.
I will
On 03/28/2012 07:45 PM, Ashley Martens wrote:
Where the F^$% have the packages for 06x gone?
Easy there, pardner.
http://www.apache.org/dist/cassandra/debian/dists/06x/main/binary-amd64/
Is empty. What gives?
While the repository Packages list does appear to be empty, the 0.6.13
package
Using this apt source list:
deb http://www.apache.org/dist/cassandra/debian 06x main
deb-src http://www.apache.org/dist/cassandra/debian 06x main
E: Package 'cassandra' has no installation candidate
Has the apt source changed?
On Wed, Mar 28, 2012 at 7:18 PM, Michael Shuler
Hi,
We are using the Cassandra JDBC driver (found in [1]) to call the Cassandra
server using CQL and JDBC calls. One of the main disadvantages is that this
driver is not available in a Maven repository where people can access it
publicly. Currently we have to check out the source and build it ourselves. Is
there
correct - I also see no other solution for this problem
On Thu, Mar 29, 2012 at 1:46 AM, Guy Incognito dnd1...@gmail.com wrote:
well, no. my assumption is that he knows what the 5 itemTypes (or
appropriate corresponding ids) are, so he can do a known 5-rowkey lookup.
if he does not know,