The Cassandra team is pleased to announce the release of the second beta for
the future Apache Cassandra 1.2.0.
Let me first stress that this is beta software and as such is *not* ready for
production use.
This release is still a beta, so it is likely not bug free. However, much has
been fixed since
Here is the thing: I'm modelling a User entity and have run into a problem
with searching through user columns.
CREATE TABLE users (
    user_uuid uuid PRIMARY KEY,
    date_created timestamp,
    password varchar,
    username varchar,
    name varchar,
    first_name varchar,
    last_name varchar,
    email varchar
);
I think there are just a few solutions:
- Secondary index on username
- A CF used as an index (store the username as the row key and all the uuids
of users with that username as columns)
- Get all the data and filter afterwards (really poor performance depending
on the size of the data set)
I can't see any other way.
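A sketch of the second option (a CF used as an index, kept in sync by the application); the table name and sample values here are my own illustration, not from the original schema:

```sql
-- Hypothetical lookup table: one partition per username,
-- holding the uuids of all users with that username.
CREATE TABLE users_by_username (
    username varchar,
    user_uuid uuid,
    PRIMARY KEY (username, user_uuid)
);

-- The application writes to it alongside every users insert:
INSERT INTO users_by_username (username, user_uuid)
VALUES ('jdoe', 550e8400-e29b-41d4-a716-446655440000);

-- Lookup by username:
SELECT user_uuid FROM users_by_username WHERE username = 'jdoe';
```

Keeping the two tables in sync is the application's job, e.g. by issuing both inserts in one batch.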
Hi there,
We have had a crashed node that is currently removed from the rack. However,
when I try a schema upgrade / truncate operation, it complains about the
unreachable node. I tried removetoken, but that didn't resolve it.
Any ideas on how to fix this?
Best regards,
Robin Verlangen
*Software engineer*
On cassandra-cli, if you run describe cluster; I guess you will see an
UNREACHABLE node.
If you do, there is a way to remove this unreachable node:
Go to the JMX management console (ip_of_one_up_node:8081 by default).
Then go to the org.apache.cassandra.net:type=Gossiper link and use the
Hi Alain,
How can I access that? A web browser does not seem to work. Do I need any
software to log in? If so, what is the proper software for Windows?
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
http://goo.gl/Lt7BC
On 2012-11-08, at 1:12 PM, B. Todd Burruss bto...@gmail.com wrote:
We are having the problem where we have huge SSTables with tombstoned data in
them that is not being compacted soon enough (because size-tiered compaction
requires, by default, 4 like-sized SSTables). This is using more
You have to install mx4j-tools.jar.
http://wiki.apache.org/cassandra/Operations#Monitoring_with_MX4J
It's a Java tool, so it is usable on both Windows and Linux.
Here is the link to download mx4j-tools.jar:
http://www.java2s.com/Code/JarDownload/mx4j/mx4j-tools-3.0.2.jar.zip
Unzip it and add it to
Thanks for sharing this. We are also using Cassandra + Storm + Queue
messaging (Kestrel for now) and are always glad to learn.
Alain
2012/11/9 Brian O'Neill b...@alumni.brown.edu
For those looking to index data in Cassandra with Elastic Search, here
is what we decided to do:
The rules for tombstone eviction are as follows (regardless of your
compaction strategy):
1. gc_grace must be expired, and
2. No other row fragments can exist for the row that aren't also participating in the compaction.
For LCS, there is no 'rule' that the tombstones can only be evicted at the
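To make rule 2 concrete, here is a small Python sketch of the eviction decision; this models the rules above for illustration and is not Cassandra's actual code:

```python
import time

def can_evict_tombstone(tombstone_ts, gc_grace_seconds,
                        row_sstables, compacting_sstables, now=None):
    """Illustrative model of the two rules above -- not Cassandra's code.

    tombstone_ts        -- epoch seconds when the deletion was written
    gc_grace_seconds    -- the column family's gc_grace setting
    row_sstables        -- set of SSTables containing any fragment of this row
    compacting_sstables -- set of SSTables participating in this compaction
    """
    now = time.time() if now is None else now
    gc_grace_expired = (now - tombstone_ts) >= gc_grace_seconds   # rule 1
    no_outside_fragments = row_sstables <= compacting_sstables    # rule 2
    return gc_grace_expired and no_outside_fragments

# Older than gc_grace (default 864000 s = 10 days), and every SSTable holding
# a fragment of the row is part of the compaction, so the tombstone can go:
print(can_evict_tombstone(0, 864000, {"a", "b"}, {"a", "b", "c"}, now=10**6))
```

If even one SSTable outside the compaction still holds a fragment of the row, the tombstone must be kept, or the deleted data could resurrect.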
Hi,
Does anyone know if DataStax/Cassandra recommends using HugeTLB on a cluster?
Thank you
James Morantus
Sr. Database Administrator
203-299-8733
Priceline.com
On Thu, Nov 8, 2012 at 10:12 AM, B. Todd Burruss bto...@gmail.com wrote:
my question is: would leveled compaction help to get rid of the tombstoned
data faster than size-tiered, and therefore reduce the disk space usage?
You could also...
1) run a major compaction
2) code up sstablesplit
3)
That must be it. I dumped the SSTables to JSON and there are lots of records,
including ones that are returned to my application, that have the deletedAt
attribute. I think this is because the regular repair job was not running for
some time, surely longer than the grace period, and lots of
Hello,
I am trying to run a Hadoop job that pulls data out of Cassandra via
ColumnFamilyInputFormat. I am getting a frame size exception. To remedy that,
I have set both thrift_framed_transport_size_in_mb and
thrift_max_message_length_in_mb to 10mb on all nodes.
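For reference, both knobs live in cassandra.yaml on every node; the values below are the 1.1/1.2-era defaults, shown only as an illustration (thrift_max_message_length_in_mb must be larger than the frame size, or frames can never fit in a message):

```yaml
# cassandra.yaml (illustrative values, the shipped defaults)
thrift_framed_transport_size_in_mb: 15
thrift_max_message_length_in_mb: 16
```

All nodes (and the Hadoop client side's Thrift settings) need to agree, or the exception will reappear on whichever hop still has the smaller limit.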
On Thu, Nov 8, 2012 at 5:15 PM, Yang tedd...@gmail.com wrote:
some of my colleagues seem to use this method to back up/restore a cluster,
successfully:
on each of the nodes, save the entire /cassandra/data/ dir to S3,
then on a new set of nodes, with exactly the same number of nodes, copy
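A per-node sketch of the save step described above. The paths and bucket name are hypothetical, and the S3 upload is left as a comment so the sketch runs without credentials; it just archives a stand-in data directory:

```shell
#!/bin/sh
# Stand-in for a node's data directory (illustrative path):
DATA_DIR=/tmp/cassandra-demo/data
BACKUP=/tmp/cassandra-demo/node-backup.tar.gz
mkdir -p "$DATA_DIR/ks1/cf1"
echo demo > "$DATA_DIR/ks1/cf1/cf1-1-Data.db"

# Archive the whole data directory, as the procedure above describes:
tar -czf "$BACKUP" -C "$(dirname "$DATA_DIR")" "$(basename "$DATA_DIR")"

# In the real procedure you would then ship it off-box, e.g.:
# aws s3 cp "$BACKUP" s3://my-backup-bucket/$(hostname)/
ls -l "$BACKUP"
```

Taking a nodetool snapshot first (so the SSTables are flushed and immutable while you copy) would make the archive consistent.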
On Thu, Nov 8, 2012 at 4:57 PM, Jeremy McKay
jeremy.mc...@ntrepidcorp.com wrote:
http://wiki.apache.org/cassandra/FAQ#unsubscribe
--
=Robert Coli
AIM/GTALK - rc...@palominodb.com
YAHOO - rcoli.palominob
SKYPE - rcoli_palominodb
I think the rows whose row keys fall into the token range of the high-latency
node are likely to have more columns than those on the other nodes. I have
three nodes with RF = 3, so all the nodes have all the data. And CL = QUORUM,
meaning each request is sent to all three nodes and a response is sent back
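As an aside, the quorum size for a given replication factor is floor(RF / 2) + 1, so with RF = 3 the coordinator only needs replies from two replicas; a tiny sketch:

```python
def quorum(rf: int) -> int:
    # QUORUM requires a majority of replicas: floor(RF / 2) + 1
    return rf // 2 + 1

# With RF = 3 (as above), two replicas must respond:
print(quorum(3))  # 2
```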
Hi! Is there a way to retrieve the columns for all column families on a
given row while fetching range slices? My keyspace has two column families
and when I'm scanning over the rows, I'd like to be able to fetch the
columns in both CFs while iterating over the keys so as to avoid having to
run
HBase is different in this regard. A table is comprised of multiple
column families, and they can be scanned at once. However, last time I
checked, scanning a table with two column families is still two
seeks across the different column families.
A similar thing can be accomplished in Cassandra
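One client-side approach is to run one range scan per column family and stitch the results together by row key. A pure-Python illustration of that stitching (no Cassandra API involved; all names here are mine):

```python
# Illustrative only: merge two column-family scans by row key on the client,
# the way you might combine two get_range() passes from a Thrift client.
def merge_scans(cf1_rows, cf2_rows):
    """cf1_rows / cf2_rows: dicts of row_key -> columns for each column family."""
    merged = {}
    for key in sorted(set(cf1_rows) | set(cf2_rows)):
        merged[key] = {
            "cf1": cf1_rows.get(key, {}),
            "cf2": cf2_rows.get(key, {}),
        }
    return merged

users = {"alice": {"email": "a@x"}, "bob": {"email": "b@x"}}
logins = {"alice": {"last": "2012-11-08"}}
print(merge_scans(users, logins)["alice"])
```

This still costs one pass per CF on the server side; it only saves the per-key round trips.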