Hi Yuki,
thanks for your answer. I still don't know whether it is expected behaviour
that Cassandra tries to repair these 1280 ranges every time I run a
nodetool repair on each node?
Regards,
Dennis
On 03.11.2013 03:27, Yuki Morishita wrote:
Hi Dennis,
As you can see in the output,
I tested the same and it seems that you cannot run such queries with
indexed columns. You probably need at least one equality condition in the
WHERE clause, but I am not sure.
You can achieve your goal by defining the primary key as follows:
create table test (
employee_id
Hi,
Is it possible to filter records by using timeuuid column types in case the
column is not part of the primary key?
I tried the followings:
[cqlsh 3.1.2 | Cassandra 1.2.10.1 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
CREATE TABLE timeuuid_test2(
row_key text,
time timeuuid,
time2
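The truncated table definition above illustrates the problem. A sketch of the workaround mentioned earlier in the thread (column names are assumptions, not from the original mail): if the timeuuid column is made a clustering column of the primary key, range filters on it become legal once the partition key is pinned with an equality condition.

```sql
-- Sketch, assuming these column names: 'time' becomes a clustering column.
CREATE TABLE timeuuid_test2 (
    row_key text,
    time timeuuid,
    payload text,
    PRIMARY KEY (row_key, time)   -- row_key = partition key, time = clustering column
);

-- A range on 'time' now works because row_key is fixed by equality.
-- Note: maxTimeuuid() may require a newer Cassandra than the 1.2.10
-- shown in the cqlsh banner above; it is used here only for illustration.
SELECT * FROM timeuuid_test2
 WHERE row_key = 'sensor-1'
   AND time > maxTimeuuid('2013-11-01 00:00:00+0000');
```

Without the clustering-column arrangement, the same range predicate on a merely indexed column is rejected (or requires ALLOW FILTERING).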
What is the best way to manage index tables on update/deletion of the
indexed data?
I have a table containing all kinds of data for a user, i.e. name, address,
contact data, company data etc. Key to this table is the user ID.
I also maintain about a dozen index tables matching my queries, like
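One common pattern for keeping such manual index tables in step with the base table is to update both in a single logged batch, deleting the stale index entry when the indexed value changes. A sketch, with all table and column names assumed (they are not from the original mail):

```sql
-- Hypothetical base table keyed by user ID.
CREATE TABLE users (
    user_id uuid PRIMARY KEY,
    name    text,
    company text
);

-- Hypothetical index table matching a "users by company" query.
CREATE TABLE users_by_company (
    company text,
    user_id uuid,
    PRIMARY KEY (company, user_id)
);

-- On update of the indexed column: change the base row, remove the stale
-- index entry, and write the new one, all in one logged batch.
BEGIN BATCH
    UPDATE users SET company = 'NewCo'
        WHERE user_id = 00000000-0000-0000-0000-000000000001;
    DELETE FROM users_by_company
        WHERE company = 'OldCo'
          AND user_id = 00000000-0000-0000-0000-000000000001;
    INSERT INTO users_by_company (company, user_id)
        VALUES ('NewCo', 00000000-0000-0000-0000-000000000001);
APPLY BATCH;
```

The logged batch guarantees that either all three mutations are eventually applied or none are, at the cost of the batch-log write; it does not give read isolation across the two tables.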
If i do that, wouldn't I need to scrub my sstables ?
Takenori Sato ts...@cloudian.com wrote:
Try increasing column_index_size_in_kb.
A slice query over some ranges (SliceFromReadCommand) requires reading
all the column indexes for the row, and could therefore hit an OOM if you
have a very wide row.
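The setting mentioned above lives in cassandra.yaml; a larger value means fewer index entries per wide row (and thus less index data held in memory per read), at the cost of coarser-grained slice reads. The value below is purely illustrative, not a recommendation:

```yaml
# cassandra.yaml -- default is 64. Raising it reduces the number of
# column index entries kept per row, which can help with very wide rows.
column_index_size_in_kb: 256
```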
Can no one find anything useful in our logs? :(
--
Cyril SCETBON
On 29 Oct 2013, at 16:38, Cyril Scetbon cyril.scet...@free.fr wrote:
Sorry, the link was bad; here is the correct one:
http://www.sendspace.com/file/7p81lz
Cassandra 1.2.9, embedded into the RHQ 4.9 project.
I'm getting the following:
Caused by: java.lang.RuntimeException: Tried to create duplicate hard link to
/data05/rhq/data/system/NodeIdInfo/snapshots/1383587405678/system-NodeIdInfo-ic-1-TOC.txt
at
I have a dual DC setup, 4 nodes, RF=4 in each.
The one that is used as primary has its system keyspace fill up with
200 gigs of data, majority of which is hints.
Why does this happen ?
How can I clean it up ?
--
Regards,
Oleg Dulin
http://www.olegdulin.com
Hi all,
We're pleased to announce the call for participation for the NoSQL devroom,
returning after a great last year.
NoSQL is an encompassing term that covers a multitude of different and
interesting database solutions. As the interest in NoSQL continues to
grow, we are looking for talks on
On Mon, Nov 4, 2013 at 10:08 AM, Elias Ross gen...@noderunner.net wrote:
Cassandra 1.2.9, embedded into the RHQ 4.9 project.
I'm getting the following:
Caused by: java.lang.RuntimeException: Tried to create duplicate hard link
to
On Mon, Nov 4, 2013 at 11:34 AM, Oleg Dulin oleg.du...@gmail.com wrote:
I have a dual DC setup, 4 nodes, RF=4 in each.
The one that is used as primary has its system keyspace fill up with 200
gigs of data, majority of which is hints.
Why does this happen ?
How can I clean it up ?
If you
On Fri, Nov 1, 2013 at 10:29 PM, Krishna Chaitanya
bnsk1990r...@gmail.comwrote:
I am a newbie to the Cassandra world. I am currently using
Cassandra 2.0.0 with thrift 0.8.0 for storing netflow packets using
libQtCassandra library. ... Is this a known issue because it did not occur
Thanks Robert.
CASSANDRA-6298
Is there any way to maybe do a workaround? I guess the thinking I have is
the duplicate hard link is probably pretty harmless and getting rid of the
check would at least get me past this issue.
I would go with cleanup.
Be careful of this bug:
https://issues.apache.org/jira/browse/CASSANDRA-5454
On Mon, Nov 4, 2013 at 9:05 PM, Oleg Dulin oleg.du...@gmail.com wrote:
If i do that, wouldn't I need to scrub my sstables ?
Takenori Sato ts...@cloudian.com wrote:
Try increasing
My understanding of CASSANDRA-4110 is that the file name (not the total path
length) has to be <= 255 chars long.
On non-Windows platforms in 1.1.0+ you should be OK with KS + CF names that
combined go up to about 230 chars, leaving room for the extra few things
Cassandra adds to the SSTable
However, when monitoring the performance of our cluster, we see sustained
periods - especially during repair/compaction/cleanup - of several hours
where there are 2000 IOPS.
If the IOPS are available, compaction / repair / cleanup will use them if the
configuration allows it. If they are not
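The "if the configuration allows it" part refers to the throttles in cassandra.yaml. A sketch of the relevant knobs (the values shown are illustrative assumptions, not recommendations for this cluster):

```yaml
# cassandra.yaml -- throttles that bound background I/O.
compaction_throughput_mb_per_sec: 16          # caps compaction I/O; 0 disables the throttle
stream_throughput_outbound_megabits_per_sec: 200  # caps repair/bootstrap streaming
```

Lowering these trades longer repair/compaction times for less interference with foreground reads and writes.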
When we analyzed the heap, almost all of it was memtables.
What were the top classes ?
I would normally expect an OOM in pre 1.2 days to be the result of bloom
filters, compaction meta data and index samples.
Is there any known issue with 1.1.5 which causes memtable_total_space_in_mb
not
For a while now the binary distribution has included a tool to calculate tokens:
aarons-MBP-2011:apache-cassandra-1.2.11 aaron$ tools/bin/token-generator
Token Generator Interactive Mode
How many datacenters will participate in this Cassandra cluster? 1
How
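For a single data center with RandomPartitioner, the arithmetic the tool performs is simple: space the initial tokens evenly over the 2**127 ring. A minimal sketch (my own re-implementation for illustration, not the tool's actual source):

```python
def generate_tokens(node_count):
    """Evenly spaced initial tokens for a single-DC RandomPartitioner ring."""
    ring_size = 2 ** 127  # RandomPartitioner tokens lie in [0, 2**127)
    return [i * ring_size // node_count for i in range(node_count)]

# Print one initial_token line per node, as you would paste into cassandra.yaml.
for i, token in enumerate(generate_tokens(4)):
    print("Node %d: initial_token: %d" % (i, token))
```

Note that every node must get a distinct token; two nodes configured with the same initial_token (as in the config quoted below) will collide on the ring.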
My First Node details are -
initial_token: 0
seeds: 10.0.0.4
listen_address: 10.0.0.4 #IP of Machine - A (Wireless LAN adapter
Wireless Network Connection)
rpc_address: 10.0.0.4
My Second Node details are -
initial_token: 0
seeds: 10.0.0.4