Hi Joe,
PFB output of system.log
tail -n 100 system.log
INFO [CompactionExecutor:164] 2015-01-06 11:58:28,555
CompactionTask.java:251 - Compacted 4 sstables to
That should be “writing too many bytes” not “waiting too many bytes” just for
clarity’s sake.
On Jan 6, 2015, at 2:03 AM, Joe Ramsey joe.ram...@mac.com wrote:
I’m not an expert. Really just learning this myself but it looks like
according to the stack you’re getting an exception waiting
Thanks Rahul and good luck! I’m really curious to hear what the result is.
On Jan 6, 2015, at 2:10 AM, Rahul Bhardwaj rahul.bhard...@indiamart.com
wrote:
Thanks for your response. I will get back to you with my findings.
On Tue, Jan 6, 2015 at 12:36 PM, Joe Ramsey joe.ram...@mac.com
Hi,
There is a “possible memory leak” issue with C* 2.1.2.
https://issues.apache.org/jira/browse/CASSANDRA-8248
It happened with our C* 2.1.2 cluster. In /proc/{pid}/maps there are a lot of
deleted file mappings.
_
Stephen li
From: Joe Ramsey
Thanks for the input, Rob. Just to make sure, is “older version” the same as
“less than version 2”?
On Mon, Jan 5, 2015 at 8:13 PM, Robert Coli rc...@eventbrite.com wrote:
On Mon, Jan 5, 2015 at 2:52 AM, Jens Rantil jens.ran...@tink.se wrote:
Since repair is a slow and daunting process*, I am
Hi Joe,
PFA heap dump
regards:
Rahul Bhardwaj
On Tue, Jan 6, 2015 at 11:35 AM, Joe Ramsey joe.ram...@mac.com wrote:
Did you try generating a heap dump so you can look through it to see
what’s actually happened?
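If it helps, a heap dump can usually be captured from the running JVM with the standard JDK jmap tool (the pid and output path below are placeholders):
jmap -dump:format=b,file=/tmp/cassandra-heap.hprof <cassandra-pid>
The resulting .hprof file can then be opened in Eclipse MAT or VisualVM.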
On Jan 6, 2015, at 12:58 AM, Rahul Bhardwaj rahul.bhard...@indiamart.com
I’m not an expert. Really just learning this myself but it looks like
according to the stack you’re getting an exception waiting too many bytes to
the commit log.
That’s controlled by the commitlog_segment_size_in_mb setting. The maximum
write size that C* will allow is half of that value.
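As a rough sketch (the value shown is commonly the default, and the half-size rule follows from it), the relevant cassandra.yaml entry is:
# cassandra.yaml
commitlog_segment_size_in_mb: 32   # largest single mutation accepted is half of this (~16 MB)
So a single oversized write, rather than overall load, is what trips this error.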
Thanks for your response. I will get back to you with my findings.
On Tue, Jan 6, 2015 at 12:36 PM, Joe Ramsey joe.ram...@mac.com wrote:
That should be “writing too many bytes” not “waiting too many bytes” just
for clarity’s sake.
On Jan 6, 2015, at 2:03 AM, Joe Ramsey joe.ram...@mac.com
Hi Ajay,
1. You should have at least 2 seed nodes; that will help when Node1 (your
only seed node) is down.
2. Check that you are using the internal IP address in listen_address and
rpc_address (see the cassandra.yaml sketch below).
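For illustration only (the IP addresses are made up), the relevant cassandra.yaml entries on each node would look roughly like this:
# cassandra.yaml (per node; IPs are placeholders)
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.56.101,192.168.56.102"   # list at least two seed nodes
listen_address: 192.168.56.103   # this node's internal IP
rpc_address: 192.168.56.103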
On Mon, Jan 5, 2015 at 2:07 PM, Ajay ajay.ga...@gmail.com wrote:
Hi,
I did the Cassandra cluster set up
Hi,
Since repair is a slow and daunting process*, I am considering increasing
max_hint_window_in_ms from its default value of one (1) hour to something like
24-48 hours. This will give me and my team more time to fix the underlying
problem of a node. I understand that
- repair is the only
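For example (a sketch only; whether holding 24 hours of hints is sensible depends on write volume and disk space), the change in cassandra.yaml would be:
# cassandra.yaml
max_hint_window_in_ms: 86400000   # 24 hours, in milliseconds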
Neha,
This is just for a trial setup. Anyway, thanks for the suggestion (more
than one seed node).
I figured out the problem: Node2 had an incorrect cluster name.
The error seems misleading, though.
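For reference, the setting in question; it must be identical on every node (the value here is only an example):
# cassandra.yaml -- must match on every node in the cluster
cluster_name: 'MyTestCluster'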
Thanks
Ajay Garga
On Mon, Jan 5, 2015 at 4:21 PM, Neha Trivedi
Hi,
I did the Cassandra cluster set up as below:
Node 1 : Seed Node
Node 2
Node 3
Node 4
All 4 nodes are VirtualBox VMs running Ubuntu 14.10. I have set
listen_address and rpc_address to the inet address, with SimpleSnitch.
When I start Node2 after Node1 is started, I get the
Hi All,
I have designed a column family
prodgroup text, prodid int, status int, PRIMARY KEY ((prodgroup), prodid,
status)
The data model is to cater to:
- Get the list of products in a product group
- Get the list of products for a given range of IDs
- Get the details of a specific product
-
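A minimal CQL sketch of how that table and the three reads could look (the table name and example values are assumptions, not from the original post):
CREATE TABLE products (
    prodgroup text,
    prodid int,
    status int,
    PRIMARY KEY ((prodgroup), prodid, status)
);
-- list of products in a product group
SELECT * FROM products WHERE prodgroup = 'electronics';
-- products for a given range of ids
SELECT * FROM products WHERE prodgroup = 'electronics' AND prodid >= 100 AND prodid < 200;
-- details of a specific product
SELECT * FROM products WHERE prodgroup = 'electronics' AND prodid = 150;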
Hi all,
Can anyone explain what the deletedAt and localDeletion fields mean in the
SliceQueryFilter log?
SliceQueryFilter.java (line 225) Read 6 live and 2688 tombstoned cells in
ks.mytable (see tombstone_warn_threshold). 10 columns was requested,
slices=[-], delInfo={deletedAt=-9223372036854775808,
Just a shot in the dark: the CQL for Cassandra 2.x documentation says that
Cassandra allows you to query on a column when it is indexed.
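For example (a sketch with a made-up table and a regular, non-key column):
CREATE INDEX users_city_idx ON users (city);
SELECT * FROM users WHERE city = 'Delhi';   -- allowed because city is indexed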
Regards,
Seenu.
On Mon, Jan 5, 2015 at 5:14 PM, Nagesh nageswara.r...@gmail.com wrote:
Hi All,
I have designed a column family
prodgroup text,
@Robert could you point me to some of those issues?
I would be very grateful for some explanation of why this is semi-expected.
On Fri, Jan 2, 2015 at 8:01 PM, Robert Coli rc...@eventbrite.com wrote:
On Mon, Dec 15, 2014 at 1:51 AM, Michał Łowicki mlowi...@gmail.com
wrote:
We've noticed that
Better yet, if you're using a client where you can pass the time in, you
can validate it is indeed clock skew. Do all your writes with timestamp =
0, all your deletes with timestamp = 1.
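A minimal sketch of that check in plain CQL (the keyspace, table, and values are made up); because the delete's timestamp is explicitly newer than the write's, clock skew between nodes cannot resurrect the row:
INSERT INTO ks.t (id, val) VALUES (1, 'x') USING TIMESTAMP 0;
DELETE FROM ks.t USING TIMESTAMP 1 WHERE id = 1;
SELECT * FROM ks.t WHERE id = 1;   -- should return nothing if deletes behave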
On Wed, Dec 24, 2014 at 7:47 AM, Ryan Svihla rsvi...@datastax.com wrote:
Every time I've heard this but one
Hi, All,
I turned on client_encryption_options like this:
client_encryption_options:
enabled: true
keystore: path-to-my-keystore-file
keystore_password: my-keystore-password
truststore: path-to-my-truststore-file
truststore_password: my-truststore-password
...
I can use following
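For what it's worth, a sketch of connecting with cqlsh once this is on (the paths are placeholders; assumes an [ssl] section in cqlshrc):
# ~/.cassandra/cqlshrc
[ssl]
certfile = /path/to/client-cert.pem
validate = true
# then:
cqlsh --ssl <host>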
Did you try generating a heap dump so you can look through it to see what’s
actually happened?
On Jan 6, 2015, at 12:58 AM, Rahul Bhardwaj rahul.bhard...@indiamart.com
wrote:
Hi,
We are using Cassandra 2.1 in a cluster of three machines, each with
64 GB RAM
The processes
Hi,
We are using Cassandra 2.1 in a cluster of three machines, each with
64 GB RAM.
The processes are being killed by the kernel because they are eating all the
memory (oom-killer). We have set the Java heap to the default (i.e. it is
using 8 GB) because we have 64 GB RAM.
Please help.
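A sketch of pinning the heap explicitly instead of relying on the auto-calculated defaults (the numbers are illustrative only):
# conf/cassandra-env.sh
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"
Keep in mind the resident size also includes off-heap structures and mmapped SSTables, so the kernel can see far more memory in use than the Java heap alone.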
Regards:
Rahul Bhardwaj
Hi guys, I have to work with the following model:
userid : text
categories: [3, 4, 55, 623, ...]
In my use case, the list of values is updated every day, with 100 million
users and a total of 500 categories at most.
Is there a way to assign a TTL to each item in the category list?
Hi, using the following updates I made the different values expire at
different times:
update categories_sync using ttl 60 set category = category + {'2'} where
userid = 'u1';
update categories_sync using ttl 120 set category = category + {'3'}
where userid = 'u1';
update categories_sync using
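For context, a sketch of the table those statements assume (the types are inferred from the updates above, so treat them as an assumption); each set element is stored as its own cell, which is why each addition can carry its own TTL:
CREATE TABLE categories_sync (
    userid text PRIMARY KEY,
    category set<text>
);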
On Mon, Jan 5, 2015 at 2:52 AM, Jens Rantil jens.ran...@tink.se wrote:
Since repair is a slow and daunting process*, I am considering increasing
max_hint_window_in_ms from its default value of one (1) hour to something
like 24-48 hours.
...
Are there any other implications of making this