Re: TWCS and autocompaction

2018-01-16 Thread Cogumelos Maravilha
> Cheers. > On Tue, 16 Jan 2018 at 12:07, Cogumelos Maravilha <cogumelosmaravi...@sapo.pt> wrote: > Hi list, > My settings: > AND compaction = {'class': > 'org.apache.cassandra.db.com

TWCS and autocompaction

2018-01-16 Thread Cogumelos Maravilha
Hi list, My settings: AND compaction = {'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 'compaction_window_size': '4', 'compaction_window_unit': 'HOURS', 'enabled': 'true', 'max_threshold': '64', 'min_threshold': '2', 'tombstone_compaction_interval': '15000',
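
For context, the complete statement implied by this excerpt would look roughly like the sketch below (keyspace and table names are placeholders; the option values are the ones quoted above). Note that with TWCS the usual advice is to leave background compaction on — 'enabled': 'false' would stop autocompaction for the table entirely:

ALTER TABLE mykeyspace.data WITH compaction = {
    'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy',
    'compaction_window_size': '4',
    'compaction_window_unit': 'HOURS',
    'enabled': 'true',
    'max_threshold': '64',
    'min_threshold': '2',
    'tombstone_compaction_interval': '15000'};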

version 3.11.1 number_of_keys_estimate is missing

2017-10-11 Thread Cogumelos Maravilha
Hi list, After upgrading from 3.11.0 to 3.11.1 I've noticed in nodetool tablestats that the number_of_keys_estimate is missing. How can I get this value now? Thanks in advance.
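
One hedged workaround while the tablestats label is in question: the per-range partition-count estimates are also exposed in the system.size_estimates table, so a query like the following (keyspace and table names are placeholders) sums the estimate held by the node you are connected to:

SELECT sum(partitions_count)
FROM system.size_estimates
WHERE keyspace_name = 'mykeyspace' AND table_name = 'data';

Keep in mind these are estimates for the token ranges local to that node, not an exact cluster-wide key count.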

decommission mode with background compactors running

2017-10-04 Thread Cogumelos Maravilha
Hi list, I've decommissioned a node, but checking in the background with nodetool I saw 4 compactions running while the SSTables were simultaneously being sent to other nodes. Is this safe, or should we disable all background processes before decommissioning, like: nodetool disableautocompaction
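
If you do want to quiesce background work first, a conservative sequence might look like the sketch below (all standard nodetool subcommands; whether it is actually necessary is the question this thread raises, since decommission streams from immutable SSTables):

nodetool disableautocompaction    # stop scheduling new compactions
nodetool stop COMPACTION          # abort compactions already running
nodetool decommission             # stream this node's ranges to the others
nodetool netstats                 # from another shell: watch streaming progress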

Re: From SimpleStrategy to DCs approach

2017-09-16 Thread Cogumelos Maravilha
If zone a or b goes dark I want to keep my cluster alive with QUORUM on reading. That's why I imagine solving this using another node in a different location. Thanks On 15-09-2017 22:32, kurt greaves wrote: > You can add a tiny node with 3 tokens. It will own a very small amount > of data and

Re: From SimpleStrategy to DCs approach

2017-09-15 Thread Cogumelos Maravilha
e DC per > rack thing isn't necessary and will make your clients overly complicated. > > On 5 Sep. 2017 21:01, "Cogumelos Maravilha" > <cogumelosmaravi...@sapo.pt> wrote: > > Hi list, > > CREA

Re: truncate table in C* 3.11.0

2017-09-07 Thread Cogumelos Maravilha
On Thu, Sep 7, 2017 at 10:07 AM, Cogumelos Maravilha > <cogumelosmaravi...@sapo.pt> wrote: > > Hi list, > >

truncate table in C* 3.11.0

2017-09-07 Thread Cogumelos Maravilha
Hi list, Using cqlsh: consistency all; select count(*) from table1; 219871 truncate table1; select count(*) from table1; 219947 There is a consumer reading data from Kafka and inserting in C* but the rate is around 50 inserts/minute. Cheers

Re: C* 3 node issue -Urgent

2017-09-06 Thread Cogumelos Maravilha
After inserting a new node we should: ALTER KEYSPACE system_auth WITH REPLICATION = { 'class' : ... 'replication_factor' : x }; x = number of nodes in the dc. The default user and password should work: -u cassandra -p cassandra Cheers. On 23-08-2017 11:14, kurt greaves wrote: > The cassandra user
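
Spelled out, a sketch of that advice for a single-DC cluster (the 3 here stands in for the x above, and the repair step is what actually moves the auth data onto the new replicas):

ALTER KEYSPACE system_auth WITH REPLICATION = {
    'class' : 'SimpleStrategy', 'replication_factor' : 3};
-- then, on every node: nodetool repair system_auth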

From SimpleStrategy to DCs approach

2017-09-05 Thread Cogumelos Maravilha
Hi list, CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '2'} AND durable_writes = true; I'm using C* 3.11.0 with 8 nodes at aws, 4 nodes at zone a and the other 4 nodes at zone b. The idea is to keep the cluster alive if zone a or b goes dark and keep
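
A sketch of the usual fix discussed in this thread: with Ec2Snitch the region shows up as the data center and each availability zone as a rack, so switching the keyspace to NetworkTopologyStrategy lets Cassandra place replicas in different zones. The DC name below ('dc1') is a placeholder — use whatever nodetool status reports for your snitch:

ALTER KEYSPACE test WITH replication = {
    'class': 'NetworkTopologyStrategy', 'dc1': 2}
    AND durable_writes = true;
-- follow with: nodetool repair test (on each node)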

Adding a new node with the double of disk space

2017-08-17 Thread Cogumelos Maravilha
Hi all, I need to add a new node to my cluster but this time the new node will have double the disk space compared to the other nodes. I'm using the default vnodes (num_tokens: 256). To fully use the disk space in the new node I just have to configure num_tokens: 512? Thanks in advance.
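
That is the commonly suggested knob; as a sketch, only the new node's configuration would carry the larger value:

# cassandra.yaml on the new, larger node only (existing nodes keep 256)
num_tokens: 512

One caveat worth hedging: doubling the token count roughly doubles the node's share of data and of request traffic, but its CPU and memory don't scale with it.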

Re: Deflate compressor

2017-07-08 Thread Cogumelos Maravilha
4', 'class': 'org.apache.cassandra.io.compress.DeflateCompressor'} Is this approach enough? Thanks. On 07/06/2017 06:27 PM, Jeff Jirsa wrote: > > On 2017-07-06 01:37 (-0700), Cogumelos Maravilha <cogumelosmaravi...@sapo.pt> > wrote: >> Hi Jeff, >> >> Thanks for your reply. But I've already changed from LZ4 to

Re: Deflate compressor

2017-07-06 Thread Cogumelos Maravilha
'} There are some days when I have exactly 24 SSTables: ls -alFh *Data*|grep 'Jul 3'|wc 24 Other days not: ls -alFh *Data*|grep 'Jul 2'|wc 59 Is this normal? Thanks in advance. On 06-07-2017 06:30, Jeff Jirsa wrote: On 2017-07-01 02:50 (-0700), Cogumelos Maravilha <cogumelosmaravi...@sapo

Deflate compressor

2017-07-01 Thread Cogumelos Maravilha
Hi list, Is there a way to set the Deflate compression level? Brotli sounds good but unstable. I just need a better compression ratio. I'm using C* 3.11.0 Cheers.
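
For reference, switching a table to Deflate is just a compression-options change; a sketch with placeholder names (a larger chunk_length_in_kb generally buys a better ratio at the cost of more read amplification):

ALTER TABLE mykeyspace.data WITH compression = {
    'class': 'org.apache.cassandra.io.compress.DeflateCompressor',
    'chunk_length_in_kb': 64};
-- rewrite existing SSTables with the new codec:
-- nodetool upgradesstables -a mykeyspace data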

Re: Node replacement strategy with AWS EBS

2017-06-13 Thread Cogumelos Maravilha
Simplest way of all: if you are using RF>=2, simply terminate the old instance and create a new one. Cheers. On 13-06-2017 18:01, Rutvij Bhatt wrote: > Nevermind, I misunderstood the first link. In this case, the > replacement would just be leaving the listen_address as is (to >
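
The other common route, instead of a plain terminate-and-bootstrap, is a replacement start so the new instance takes over the old node's tokens. A sketch, assuming the dead node's IP was 10.0.0.12 (a placeholder) and the flag is added to cassandra-env.sh on the replacement instance:

JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.0.0.12"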

Re: Cassandra Server 3.10 unable to Start after crash - commitlog needs to be removed

2017-06-01 Thread Cogumelos Maravilha
You can also manually delete the corrupt log file. Just check its name in the logs. Of course you may lose some data, or not! Cheers On 01-06-2017 20:01, Peter Reilly wrote: > Please, how do you do this? > > Peter > > > On Fri, May 19, 2017 at 7:13 PM, Varun Gupta
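
As a sketch of that manual cleanup — the segment file name below is hypothetical; take the real one from the ERROR line in system.log, and note any writes only in that segment are lost:

sudo systemctl stop cassandra
sudo rm /var/lib/cassandra/commitlog/CommitLog-6-1496337010391.log   # hypothetical name
sudo systemctl start cassandra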

Re: EC2 instance recommendations

2017-05-24 Thread Cogumelos Maravilha
Exactly. On 23-05-2017 23:55, Gopal, Dhruva wrote: > > By that do you mean it's like bootstrapping a node if it fails or is > shutdown and with a RF that is 2 or higher, data will get replicated > when it's brought up? > > > > From: Cogumelos Maravilha <cogumelo

Re: EC2 instance recommendations

2017-05-23 Thread Cogumelos Maravilha
Yes, we can only reboot. But using RF=2 or higher it's only a fresh node restart. EBS is a network-attached disk. Spinning disk or SSD is almost the same. It's better to take the "risk" and use i-series instances. Cheers. On 23-05-2017 21:39, sfesc...@gmail.com wrote: > I think this is overstating

Re: InternalResponseStage low on some nodes

2017-05-23 Thread Cogumelos Maravilha
This is really atypical. What about nodetool compactionstats? Crontab jobs on each node, like nodetool repair, etc.? Also, security: do these 2 nodes have the same ports open? Same configuration, same JVM params? Is nodetool ring normal? Cheers. On 23-05-2017 20:11, Andrew Jorgensen wrote: > Hello,

Re: Slowness in C* cluster after implementing multiple network interface configuration.

2017-05-23 Thread Cogumelos Maravilha
Hi, I never used version 2.0.x but I think port 7000 isn't enough. Try enabling: 7000 inter-node, 7001 SSL inter-node, 9042 CQL, 9160 Thrift (enabled in that version). And: in cassandra.yaml, add the property "broadcast_address" = local ipv4; in cassandra.yaml, change "listen_address" to
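
Putting that together, a cassandra.yaml sketch for a node with a private and a routable interface (all addresses are placeholders; listen_on_broadcast_address is available on newer versions to bind both when they differ):

listen_address: 10.0.0.5           # private interface
broadcast_address: 203.0.113.7     # address other nodes should use to reach this one
listen_on_broadcast_address: true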

Re: Bottleneck for small inserts?

2017-05-23 Thread Cogumelos Maravilha
Hi, Change to durable_writes = false and please post the results. Thanks. On 05/22/2017 10:08 PM, Jonathan Haddad wrote: > How many CPUs are you using for interrupts? > > http://www.alexonlinux.com/smp-affinity-and-proper-interrupt-handling-in-linux > > Have you tried making a flame
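
For completeness, that toggle is a keyspace-level property (keyspace name below is a placeholder). It disables the commit log for the keyspace, so a crash can lose writes that were only in memtables — a throughput experiment, not a safe default:

ALTER KEYSPACE mykeyspace WITH durable_writes = false;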

Re: Is it safe to upgrade 2.2.6 to 3.0.13?

2017-05-20 Thread Cogumelos Maravilha
It's better to wait for 3.0.14: https://issues.apache.org/jira/browse/CASSANDRA/fixforversion/12340362/?selectedTab=com.atlassian.jira.jira-projects-plugin:version-summary-panel Cheers. On 05/20/2017 11:31 AM, Stefano Ortolani wrote: > Hi Varun, > > can you elaborate a bit more? I have seen a

Re: Nodes stopping

2017-05-11 Thread Cogumelos Maravilha
Can you grep for ERROR in system.log? On 11-05-2017 21:52, Daniel Steuernol wrote: > There is nothing in the system log about it being drained or shutdown, > I'm not sure how else it would be pre-empted. No one else on the team > is on the servers and I haven't been shutting them down. There also is >

Try version 3.11

2017-05-06 Thread Cogumelos Maravilha
Hi all, deb http://www.apache.org/dist/cassandra/debian 310x main deb http://www.apache.org/dist/cassandra/debian 311x main deb http://www.apache.org/dist/cassandra/debian sid main deb http://www.apache.org/dist/cassandra/debian unstable main Is there a way to try C* version 3.11 binary before
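
A minimal sketch of trying the 311x series via the repo listed above (standard Debian packaging steps; assumes the Apache Cassandra repository keys are already trusted on the host):

echo "deb http://www.apache.org/dist/cassandra/debian 311x main" | sudo tee /etc/apt/sources.list.d/cassandra.list
sudo apt-get update
sudo apt-get install cassandra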

Re: Totally unbalanced cluster

2017-05-05 Thread Cogumelos Maravilha
could be helpful to know Apache Cassandra internals and > processes a bit more. > > C*heers, > --- > Alain Rodriguez - @arodream - al...@thelastpickle.com > France > > The Last Pickle - Apache Cassan

Re: DTCS to TWCS

2017-05-04 Thread Cogumelos Maravilha
Hi, Take a look at https://issues.apache.org/jira/browse/CASSANDRA-13038 Regards On 04-05-2017 18:22, vasu gunja wrote: > Hi All, > > We are currently on C* 2.1.13 version and we are using DTCS for our > tables. > We are planning to move to TWCS. > > My questions > From which versions TWCS is
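
The switch itself is a single statement once you are on a version that ships TWCS (it landed in 3.0.8 and 3.8 per CASSANDRA-9666; on 2.1 it requires the external jar). A sketch with placeholder names and a daily window:

ALTER TABLE mykeyspace.data WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': 1};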

Re: Totally unbalanced cluster

2017-05-04 Thread Cogumelos Maravilha
nodetool flush > > This will cause tables to be flushed at the same time, no matter > their sizes or any other considerations. It is not to be used unless > you are doing some testing, debugging or on your way to shutting down the > node. > > C*heers

Totally unbalanced cluster

2017-05-04 Thread Cogumelos Maravilha
Hi all, I'm using C* 3.10. CREATE KEYSPACE mykeyspace WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '2'} AND durable_writes = false; CREATE TABLE mykeyspace.data ( id bigint PRIMARY KEY, kafka text ) WITH bloom_filter_fp_chance = 0.5 AND caching = {'keys':
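
When chasing imbalance, it helps to ask nodetool for ownership relative to this keyspace, since the effective percentages depend on the replication settings; a sketch (names are the ones from the excerpt above):

nodetool status mykeyspace        # per-keyspace effective ownership
nodetool cfstats mykeyspace.data  # partition size and key-count estimates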

Re: Node always dieing

2017-04-11 Thread Cogumelos Maravilha
"system_auth" not my table. On 04/11/2017 07:12 AM, Oskar Kjellin wrote: > You changed to 6 nodes because you were running out of disk? But you > still replicate 100% to all so you don't gain anything > > > > On 10 Apr 2017, at 13:48, Cogumelos Maravilha &

Re: Node always dieing

2017-04-10 Thread Cogumelos Maravilha
nodes as seeds? Is it possible that > the last one you added used itself as the seed and is isolated? > > On Thu, Apr 6, 2017 at 6:48 AM, Cogumelos Maravilha > <cogumelosmaravi...@sapo.pt> wrote: > > Yes C* is running as cassandra: > >

Re: Node always dieing

2017-04-07 Thread Cogumelos Maravilha
, Carlos Rolo wrote: > i3 are having those issues more than the other instances it seems. Not > the first report I heard about. > Regards, > Carlos Juzarte Rolo > Cassandra Consultant / Datastax Certified Architect / Cassandra MVP > > Pythian - Love your data > rolo@pythian

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
Yes, but this time I'm going to give lots of time between killing and pickup. Thanks a lot. On 04/06/2017 05:31 PM, Avi Kivity wrote: > > Your disk is bad. Kill that instance and hope someone else gets it. > > > On 04/06/2017 07:27 PM, Cogumelos Maravilha wrote:

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
: > > Is there anything in dmesg? > > > On 04/06/2017 07:25 PM, Cogumelos Maravilha wrote: >> >> Now dies and restart (systemd) without logging why >> >> system.log >> >> INFO [Native-Transport-Requests-2] 2017-04-06 16:06:55,362 >> AuthCache.java:172

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
On 04/06/2017 04:18 PM, Cogumelos Maravilha wrote: > find /mnt/cassandra/ \! -user cassandra > nothing > > I've found some "strange" solutions on the Internet: > chmod -R 2777 /tmp > chmod -R 2775 cassandra folder > > Let's give it some time to see the result > >

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
run. Checking only the > top level directory ownership is insufficient, since root could own > files/dirs created below the top level. Find all files not owned by user > cassandra: `find /mnt/cassandra/ \! -user cassandra` > > Just another thought. > > -- Michael On 04/06/2017 0

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
: > There was some issue with the i3 instances and Cassandra. Did you have > this cluster running always on i3? > > On Apr 6, 2017 13:06, "Cogumelos Maravilha" > <cogumelosmaravi...@sapo.pt> wrote: > > Limit

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
0 Max realtime timeout unlimited unlimited us Please find something wrong there! Thanks. On 04/06/2017 11:50 AM, benjamin roth wrote: > Limits: You should check them in /proc/$pid/limits > > 2017-04-06 12:48 GMT+02:00 Cogumelos

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
you checked the effective limits of a running CS process? > Is CS run as cassandra? Just to rule out missing file perms. > > On 06.04.2017 12:24, "Cogumelos Maravilha" > <cogumelosmaravi...@sapo.pt> wrote: > > From

Re: Node always dieing

2017-04-06 Thread Cogumelos Maravilha
appropriate limit for max open files. Running > out of open files can also be a reason for the IO error. > > 2017-04-06 11:34 GMT+02:00 Cogumelos Maravilha > <cogumelosmaravi...@sapo.pt>: > > Hi list, > > I'm using C* 3.10

Node always dieing

2017-04-06 Thread Cogumelos Maravilha
Hi list, I'm using C* 3.10 in a 6-node cluster with RF=2. All instances are type i3.xlarge (AWS) with 32GB, 2 cores and SSD, LVM, XFS-formatted, 885G. I have one node that is always dying and I don't understand why. Can anyone give me some hints please? All nodes use the same configuration. Thanks in

How to add a node with zero downtime

2017-03-21 Thread Cogumelos Maravilha
Hi list, I'm using C* 3.10; authenticator: PasswordAuthenticator and authorizer: CassandraAuthorizer. When adding a node, and before nodetool repair system_auth has finished, all my clients die with: cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'10.100.100.19':
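
A hedged mitigation drawn from the usual advice in such threads: the default cassandra superuser authenticates at QUORUM, while other users read system_auth at LOCAL_ONE, so raising system_auth replication (as in the ALTER KEYSPACE shown earlier in this digest) and repairing it on every node before the new node bootstraps keeps logins answerable throughout:

nodetool repair system_auth   # on every existing node, before adding the new one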

Re: Count(*) is not working

2017-02-16 Thread Cogumelos Maravilha
Selvam Raman wrote: > I am using cassandra 3.9. > > Primary Key: > id text; > > On Thu, Feb 16, 2017 at 12:25 PM, Cogumelos Maravilha > <cogumelosmaravi...@sapo.pt> wrote: > > C* version please and partition key.

Extract big data to file

2017-02-08 Thread Cogumelos Maravilha
Hi list, My database stores data from Kafka. Using C* 3.0.10. In my cluster I'm using: AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'} The result of extracting one day of data uncompressed is around 360G. I've found these approaches: echo "SELECT kafka
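
Besides piping a SELECT through cqlsh, cqlsh's built-in COPY TO is the other common sketch for this kind of dump (keyspace, table and column names below are placeholders; at this volume expect it to be slow, and a Spark or sstable-level export may be the more realistic tool):

COPY mykeyspace.data (id, kafka) TO '/data/dump.csv' WITH HEADER = true;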

Re: Global TTL vs Insert TTL

2017-02-01 Thread Cogumelos Maravilha
new default replacing TWCS, so no extra jar is needed, you can > enable TWCS as any other default compaction strategy. > > C*heers, > --- > Alain Rodriguez - @arodream - al...@thelastpickle.com > France >

Re: Global TTL vs Insert TTL

2017-01-31 Thread Cogumelos Maravilha
6/12/08/TWCS-part1.html > http://thelastpickle.com/blog/2017/01/10/twcs-part2.html > > C*heers, > --- > Alain Rodriguez - @arodream - al...@thelastpickle.com > France > > The Last Pickle - Apache Cassandra Consulting

Global TTL vs Insert TTL

2017-01-31 Thread Cogumelos Maravilha
Hi, I'm just wondering which option is fastest: Global: CREATE TABLE xxx (... AND default_time_to_live = XXX; and UPDATE xxx USING TTL XXX; Line by line: INSERT INTO xxx (... USING TTL xxx; Is there an overhead using the line-by-line option, or wasted disk
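
Side by side, the two variants discussed (table and column names are placeholders). The table-level default applies only to writes that don't set their own TTL, and a per-write USING TTL overrides it:

-- global default, set once on the table:
ALTER TABLE mykeyspace.data WITH default_time_to_live = 86400;

-- per-write TTL, repeated on every insert:
INSERT INTO mykeyspace.data (id, kafka) VALUES (1, 'payload') USING TTL 86400;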

Kill queries

2017-01-23 Thread Cogumelos Maravilha
Hi, I'm using cqlsh --request-timeout=1 but because I have more than 600,000,000 rows I sometimes get blocked and I kill cqlsh. But what about the query still running in Cassandra? How can I check that? Thanks in advance.
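
There is no cqlsh-level way to kill a running read, but the server bounds them on its own; as a sketch, these are the cassandra.yaml timeouts that abort such queries server-side (the values shown are the usual 3.x defaults):

read_request_timeout_in_ms: 5000      # single-partition reads
range_request_timeout_in_ms: 10000    # range scans, e.g. an unbounded count(*)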

Re: Is this normal!?

2017-01-11 Thread Cogumelos Maravilha
Nodetool repair always lists lots of data and it never stays repaired, I think. Cheers On 01/11/2017 02:15 PM, Hannu Kröger wrote: > Just to understand: > > What exactly is the problem? > > Cheers, > Hannu > >> On 11 Jan 2017, at 16.07, Cogumelos Maravilha <cogumel

Is this normal!?

2017-01-11 Thread Cogumelos Maravilha
Cassandra 3.9.
nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load      Tokens  Owns (effective)  Host ID  Rack
UN  10.0.120.145  1.21 MiB  256     49.5%