Re: Deflate compressor

2017-07-08 Thread Cogumelos Maravilha
*"Decrease tombstone_compaction_interval and decrease
tombstone_threshold, or set unchecked_tombstone_compaction to true to
ignore both conditions and collect based /purely on gc grace/."*

Is this actually true for C* version 3.11.0?

AND compaction = {'class': 
'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 
'compaction_window_size': '1', 'compaction_window_unit': 'HOURS', 
'max_threshold': '128', 'min_threshold': '2', 
'unchecked_tombstone_compaction': 'true'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.DeflateCompressor'}

Is this approach enough?
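For context, a hedged sketch of what explicitly setting the tombstone subproperties from the quote could look like (keyspace/table name is a placeholder; the values shown are the defaults I recall - 0.2 is the droppable-tombstone ratio that triggers a single-SSTable compaction, 86400 is the minimum interval in seconds between such compactions - not tuning advice):

-- illustrative only, assuming a table named my_ks.my_table
ALTER TABLE my_ks.my_table WITH compaction = {
    'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy',
    'compaction_window_size': '1', 'compaction_window_unit': 'HOURS',
    'max_threshold': '128', 'min_threshold': '2',
    'unchecked_tombstone_compaction': 'true',
    'tombstone_threshold': '0.2',
    'tombstone_compaction_interval': '86400'};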

Thanks.

On 07/06/2017 06:27 PM, Jeff Jirsa wrote:
>
> On 2017-07-06 01:37 (-0700), Cogumelos Maravilha  
> wrote: 
>> Hi Jeff,
>>
>> Thanks for your reply. I've already changed from LZ4 to Deflate to get a 
>> higher compression level. Can I do the same with the Deflate compressor, 
>> i.e. set a higher compression level?
>
> Not at this time; if it's important to you, please open a JIRA (as always, 
> patches from the community are welcome)
>
>
>> Another question, to the creator of TimeWindowCompactionStrategy:
>>
>> AND compaction = {'class': 
>> 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 
>> 'compaction_window_size': '1', 'compaction_window_unit': 'HOURS', 
>> 'max_threshold': '128', 'min_threshold': '2'}
>> AND compression = {'chunk_length_in_kb': '64', 'class': 
>> 'org.apache.cassandra.io.compress.DeflateCompressor'}
>>
>> There are some days that I have exactly 24 SSTables:
>> ls -alFh *Data*|grep 'Jul  3'|wc
>>   24
>> Others no:
>> ls -alFh *Data*|grep 'Jul  2'|wc
>>   59
>>
>> Is this normal?
> "Maybe", you could use sstablemetadata to get the maxTimestamp from the 
> table, that's what TWCS will use to group data files together. Keep in mind 
> that repaired sstables won't compact with unrepaired (if you're using 
> incremental repair), and that tombstone compaction subproperties (which I 
> don't see in your options, but maybe you had set before) can cause single 
> sstable compactions that change the timestamp of the FILE, but the data 
> within it may continue to be much older.
>
>
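As an aside, a minimal shell sketch of the sstablemetadata check Jeff suggests above (the data directory and file pattern are placeholders; the exact field names printed vary by Cassandra version):

# list min/max timestamps per data file so they can be grouped by TWCS window
for f in /var/lib/cassandra/data/my_ks/my_table-*/*-Data.db; do
    echo "== $f"
    sstablemetadata "$f" | grep -i 'timestamp'
done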



Re: Starting Cassandrs after restore of Data - get error

2017-07-08 Thread Subroto Barua
Use "dos2unix" utility when editing/moving from windows to Linux -- could be a 
formatting issue 
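A minimal sketch, assuming the file edited on Windows was cassandra.yaml (substitute whichever file was actually changed):

# CRLF line endings show up in the output of file(1)
file /etc/cassandra/cassandra.yaml
# convert the line endings in place
dos2unix /etc/cassandra/cassandra.yaml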

Subroto 

> On Jul 7, 2017, at 9:47 AM, Jonathan Baynes  
> wrote:
> 
> Yes, both clusters match - I've checked 3 times and diff'd it as well. Would 
> the file format have any effect? I'm amending the file on a Windows machine and 
> returning it to Linux. 
> 
> Thanks
> J
> 
> Sent from my iPhone
> 
> On 7 Jul 2017, at 17:43, Nitan Kainth  wrote:
> 
>> Jonathan,
>> 
>> Make sure initial_token has the values from the backup cluster, i.e. 256 tokens. 
>> It is possible there is a typo.
>> 
>>> On Jul 7, 2017, at 9:14 AM, Jonathan Baynes  
>>> wrote:
>>> 
>>> Hi again,
>>>  
>>> Trying to restart my nodes after restoring snapshot data, initial tokens 
>>> have been added in as per the instructions online.
>>>  
>>> In system.log I get this error (the same error appears if I run nodetool cleanup):
>>>  
>>> Exception encountered during startup: The number of initial tokens (by 
>>> initial_token) specified is different from num_tokens value
>>>  
>>>  
>>> On both Cluster A and Cluster B, num_tokens = 256.
>>>  
>>> I've taken the initial tokens from running this script:
>>>  
>>> nodetool ring | grep "$(ifconfig | awk '/inet /{print $2}' | head -1)" | 
>>> awk '{print $NF ","}' | xargs > /tmp/tokens
>>>  
>>> When pasting in the tokens originally I got an error, but this was due 
>>> to the spacing between the tokens. That error has been resolved; I'm just 
>>> left with this one.
>>>  
>>> Any ideas
>>>  
>>> Thanks
>>> J
>>>  
>>> Jonathan Baynes
>>> DBA
>>> Tradeweb Europe Limited
>>> Moor Place  •  1 Fore Street Avenue  •  London EC2Y 9DT
>>> P +44 (0)20 77760988  •  F +44 (0)20 7776 3201  •  M +44 (0) xx
>>> jonathan.bay...@tradeweb.com
>>>  
>>> 
>> 
> 
> 
> 
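For reference, a hedged cassandra.yaml sketch of what the startup error above is checking: the comma-separated list assigned to initial_token must contain exactly num_tokens entries (token values are placeholders here):

# cassandra.yaml -- illustrative only
num_tokens: 256
# initial_token must list exactly num_tokens (here 256) comma-separated tokens
initial_token: <token1>, <token2>, <token3>, ... , <token256>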


Re: Corrupted commit log prevents Cassandra start

2017-07-08 Thread Varun Gupta
If you already have a regular cadence of repair, then you can set
"commit_failure_policy" to ignore in cassandra.yaml so that the C* process
does not crash on a corrupt commit log.
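For reference, a hedged sketch of the cassandra.yaml setting Varun mentions (whether it also covers replay of an already-corrupt segment at startup is exactly the question in this thread):

# cassandra.yaml -- commit log failure policy: die | stop | stop_commit | ignore
commit_failure_policy: ignore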

On Fri, Jul 7, 2017 at 2:10 AM, Hannu Kröger  wrote:

> Hello,
>
> yes, that’s what we do when things like this happen.
>
> My thinking is just that when the commit log is corrupted, you cannot really
> do anything else but exactly those steps: delete the corrupted file and run
> repair after starting. At least I haven't heard of any tools for salvaging
> commit log sections.
>
> The current behaviour gives the DBA control over when to do those things, and
> of course the DBA realizes this way that things didn't go OK, but that's about
> it. There is no alternative way of healing the system.
>
> Hannu
>
> On 7 July 2017 at 12:03:06, benjamin roth (brs...@gmail.com) wrote:
>
> Hi Hannu,
>
> I remember there have been discussions about this in the past. Most
> probably there is already a JIRA for it.
> I roughly remember a consensus along these lines:
> - Default behaviour should remain
> - It should be configurable to the needs and preferences of the DBA
> - It should at least spit out errors in the logs
>
> ... of course it would be even better to have the underlying issue fixed so
> that commit logs do not become corrupt in the first place, but I remember that
> this is not so easy due to some "architectural implications" of Cassandra.
> IIRC Ed Capriolo posted something related to that some months ago.
>
> For a quick fix, I'd recommend:
> - Delete the affected log file
> - Start the node
> - Run a full-range (not -pr) repair on that node
>
> 2017-07-07 10:57 GMT+02:00 Hannu Kröger :
>
>> Hello,
>>
>> We had a test server crash for some reason (probably not related to
>> Cassandra), and now when trying to start Cassandra it gives the following error:
>>
>> ERROR [main] 2017-07-06 09:29:56,140 JVMStabilityInspector.java:82 - Exiting due to error while processing commit log during initialization.
>> org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: Mutation checksum failure at 24240116 in Next section at 24239690 in CommitLog-6-1498576271195.log
>>   at org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:332) [apache-cassandra-3.10.jar:3.10]
>>   at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:201) [apache-cassandra-3.10.jar:3.10]
>>   at org.apache.cassandra.db.commitlog.CommitLogReader.readAllFiles(CommitLogReader.java:84) [apache-cassandra-3.10.jar:3.10]
>>   at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:140) [apache-cassandra-3.10.jar:3.10]
>>   at org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:177) [apache-cassandra-3.10.jar:3.10]
>>   at org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:158) [apache-cassandra-3.10.jar:3.10]
>>   at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:326) [apache-cassandra-3.10.jar:3.10]
>>   at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:601) [apache-cassandra-3.10.jar:3.10]
>>   at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:735) [apache-cassandra-3.10.jar:3.10]
>>
>> Shouldn’t Cassandra tolerate this situation?
>>
>> Of course we can delete commit logs and life goes on. But isn’t this a
>> bug or something?
>>
>> Hannu
>>
>>
>
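A minimal shell sketch of the quick fix Benjamin describes above (service commands, data paths and keyspace are placeholders; the segment name is the one from the startup error):

# stop the node, then remove only the corrupt segment named in the error
sudo systemctl stop cassandra
rm /var/lib/cassandra/commitlog/CommitLog-6-1498576271195.log
sudo systemctl start cassandra
# afterwards, repair the node's full token range (no -pr), as suggested
nodetool repair my_keyspace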


Re: "nodetool repair -dc"

2017-07-08 Thread Varun Gupta
I do not see the need to run repair, as long as the cluster was in a healthy
state when the new nodes were added.

On Fri, Jul 7, 2017 at 8:37 AM, vasu gunja  wrote:

> Hi ,
>
> I have a question regarding the "nodetool repair -dc" option. Recently we
> added multiple nodes to one DC, and we want to perform repair only on the
> current DC.
>
> Here is my question.
>
> Do we need to perform "nodetool repair -dc" on all nodes belonging to that
> DC, or only on one node of that DC?
>
>
>
> thanks,
> V
>
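For reference, a hedged sketch of the command form being asked about (the datacenter name is a placeholder; -dc restricts the repair to replicas in the named DC):

nodetool repair -dc DC1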