Hi,
Using Cassandra 1.2.10, I am trying to load sstable data into a cluster of
6 machines.
The machines are using vnodes, and are configured with
NetworkTopologyStrategy replication=3 and LeveledCompactionStrategy on the
tables being loaded.
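For reference, a minimal sketch of the schema described above (keyspace, table, and DC names are hypothetical; the replication factor and compaction strategy are the ones mentioned):

```cql
-- Hypothetical names; RF=3 NetworkTopologyStrategy and LCS as described.
CREATE KEYSPACE app WITH replication =
    {'class': 'NetworkTopologyStrategy', 'dc1': 3};

CREATE TABLE app.events (
    id uuid PRIMARY KEY,
    payload text
) WITH compaction = {'class': 'LeveledCompactionStrategy'};
```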
The sstable data was generated using
Hi Ross,
Did you try to use CQL2 tables?
Create the CF / table using "cqlsh -2".
We experienced the same but using CQL2 helped us.
Ferenc
From: Ross Black [mailto:ross.w.bl...@gmail.com]
Sent: Wednesday, November 27, 2013 10:12 AM
To: user@cassandra.apache.org
Subject: data dropped when using
Hi,
I need your help extracting column names and values in the trigger augment
method.
*Table Def:*
create table dy_data (
    id timeuuid,
    data_key text,
    time timestamp,
    data text,
    primary key ((id, data_key), time)
) with clustering order by (time desc);
public class ArchiveTrigger implements
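For context on the column-name question: in the trigger's augment method the update arrives as raw cells, so for a CQL3 table each cell name is a composite ByteBuffer. Below is a self-contained sketch of splitting such a name into string components. The class name and sample values are hypothetical; the layout assumed is the CompositeType encoding (per component: 2-byte big-endian length, the bytes, then a 1-byte end-of-component marker).

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: decodes a CQL3 composite cell name into its string
// components, assuming each component is encoded as
// [2-byte length][component bytes][1-byte end-of-component marker].
public class CompositeNameDecoder {
    public static List<String> decode(ByteBuffer name) {
        List<String> parts = new ArrayList<>();
        ByteBuffer bb = name.duplicate();           // don't disturb the caller's position
        while (bb.hasRemaining()) {
            int len = bb.getShort() & 0xFFFF;       // component length
            byte[] component = new byte[len];
            bb.get(component);
            bb.get();                               // skip end-of-component byte
            parts.add(new String(component, StandardCharsets.UTF_8));
        }
        return parts;
    }

    public static void main(String[] args) {
        // Build a two-component composite by hand and decode it.
        ByteBuffer bb = ByteBuffer.allocate(64);
        for (String s : new String[] {"ts1", "data"}) {
            byte[] b = s.getBytes(StandardCharsets.UTF_8);
            bb.putShort((short) b.length).put(b).put((byte) 0);
        }
        bb.flip();
        System.out.println(decode(bb));             // [ts1, data]
    }
}
```

The actual value bytes of each cell would then be interpreted with the column's declared type (text, timestamp, etc.).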
A-yup. I got burned by this too some time ago. If you do accidentally try to
bootstrap a seed node, the solution is to run repair after adding the new node
but before removing the old one. However, during this time the node will
advertise itself as owning a range, but when queried, it'll
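A minimal sketch of the sequence described above (illustrative only; the host names are hypothetical and the commands assume a live cluster):

```shell
# Let the new node bootstrap first, then repair it BEFORE retiring the old one,
# otherwise the new node advertises a range it does not yet fully own.
nodetool -h new-node.example.com repair
nodetool -h old-node.example.com decommission
```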
Thanks Mikhail.
I switched to 2.0.3 and the problem is still there; I will open an issue
with a test case. I have not tested 1.2.12, but I assume it will
have the same problem.
Shahryar
On Mon, Nov 25, 2013 at 5:57 PM, Shahryar Sedghi shsed...@gmail.com wrote:
I did some tests and
On Wed, Nov 27, 2013 at 2:47 AM, Turi, Ferenc (GE Power Water, Non-GE)
ferenc.t...@ge.com wrote:
Did you try to use CQL2 tables?
Create the CF / table using "cqlsh -2".
We experienced the same but using CQL2 helped us.
CQL2 is a historical footnote and is likely to be removed from
Hi all,
We made a decision to use compact storage for a couple of very large tables to
get maximum storage efficiency. We understood that this would limit us to a
single non-primary key column. We did not realize at the time that we would not
be able to add this column using the cql3 alter
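For context, a sketch of the kind of definition involved (table and column names are hypothetical): with COMPACT STORAGE the table is limited to a single non-primary-key column, and that column has to be declared at CREATE time, since adding it later via ALTER is rejected.

```cql
-- Hypothetical names. The single non-key column (value) must be declared
-- up front; ALTER TABLE ... ADD is not allowed on a compact-storage table.
CREATE TABLE big_data (
    id text,
    bucket int,
    value blob,
    PRIMARY KEY (id, bucket)
) WITH COMPACT STORAGE;
```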
Hello,
I’m working on a distributed analytics service that originally used Storm
as an RPC layer. I thought that, since Cassandra is already distributed, I may
not have to use another RPC system to write data to Cassandra.
There are plenty of great ideas in Triggers issue page (
Hi Tyler,
Thanks (somehow I missed that ticket when I searched for sstableloader
bugs).
I will retry with 1.2.12 when we get a chance to upgrade. In the meantime
I have switched to loading data via the normal client API (slower but
reliable).
Ross
On 28 November 2013 03:45, Tyler Hobbs
We have noticed that a cluster we upgraded to 1.1.6 (from 1.0.*) still has a
single large (~4GB) row in system.Migrations on each cluster node.
There is some code in there to drop that CF at startup, but I’m not sure of the
requirements for it to run. If the timestamps have not been updated in
What’s the value of “max_hint_window_in_ms” in your cassandra.yaml?
-M
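For reference, the setting lives in cassandra.yaml; the shipped default is three hours:

```yaml
# cassandra.yaml -- maximum window during which hints are generated
# for a dead host (default: 3 hours)
max_hint_window_in_ms: 10800000
```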
Xavier Fustero xav...@rightscale.com wrote in message
news:cah7zuusuh7s_9vvjaxg75fo5xd2rpwy6rgpw_ubxmwbwd4_...@mail.gmail.com...
Hi all,
I am installing a cassandra 1.2 on Ubuntu. I followed the Debian/Ubuntu
guidelines
We have the following structure in a composite CF, comprising 2 parts
Key=123 -> A:1, A:2, A:3, B:1, B:2, B:3, B:4, C:1, C:2, C:3,
Our application provides the following inputs for querying on the
first-part of composite column
key=123, [(colName=A, range=2), (colName=B, range=3),
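The query shape described above can be sketched client-side as follows. This is a hypothetical illustration (class and method names are made up): for each requested first part, take that component's columns in sorted order up to the given range limit.

```java
import java.util.*;

// Hypothetical sketch of the slice described above: columns are stored as
// "firstPart:secondPart" strings in sorted order; for each requested first
// part we take at most `range` columns.
public class CompositeSlice {
    public static List<String> slice(SortedSet<String> columns,
                                     Map<String, Integer> ranges) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Integer> e : ranges.entrySet()) {
            String prefix = e.getKey() + ":";
            int taken = 0;
            // tailSet jumps to the first column of this component.
            for (String col : columns.tailSet(prefix)) {
                if (!col.startsWith(prefix) || taken++ >= e.getValue()) break;
                out.add(col);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        SortedSet<String> cols = new TreeSet<>(Arrays.asList(
            "A:1","A:2","A:3","B:1","B:2","B:3","B:4","C:1","C:2","C:3"));
        Map<String, Integer> ranges = new LinkedHashMap<>();
        ranges.put("A", 2);
        ranges.put("B", 3);
        System.out.println(slice(cols, ranges)); // [A:1, A:2, B:1, B:2, B:3]
    }
}
```

In a real client this would map onto one column-slice query per first part (or a multiget of slices), rather than filtering in memory.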
Tom,
Here is the definition
List all the endpoints that this node has hints for, and count the number
of hints for each such endpoint.
Returns: map of endpoint -> hint count
I would suggest looking at the gossipinfo to validate whether there are any
nodes which have that token value. If there is (