Re: SSTableMetadata Util

2018-10-01 Thread kurt greaves
Pranay,

3.11.3 should include all the C* binaries in /usr/bin. Maybe try
reinstalling? Sounds like something got messed up along the way.

Kurt

On Tue, 2 Oct 2018 at 12:45, Pranay akula 
wrote:

> Thanks Christophe,
>
> I have installed using the rpm package. I actually ran the locate command
> to find the sstable utils and I could find only those.
>
> I will probably need to copy them manually.
>
> Regards
> Pranay
>
> On Mon, Oct 1, 2018, 9:01 PM Christophe Schmitz <
> christo...@instaclustr.com> wrote:
>
>> Hi Pranay,
>>
>> The sstablemetadata tool is still available in the tarball distribution
>> ($CASSANDRA_HOME/tools/bin) in 3.11.3. I am not sure why it is not available
>> in your packaged installation; you might want to manually copy the one from
>> the tarball to your /usr/bin/.
>>
>> Additionally, you can have a look at
>> https://github.com/instaclustr/cassandra-sstable-tools which will
>> provide you with the desired info, plus more info you might find useful.
>>
>>
>> Christophe Schmitz - Instaclustr  -
>> Cassandra | Kafka | Spark Consulting
>>
>>
>>
>>
>>
>> On Tue, 2 Oct 2018 at 11:31 Pranay akula 
>> wrote:
>>
>>> Hi,
>>>
>>> I am testing Apache Cassandra 3.11.3 and I couldn't find the sstablemetadata util.
>>>
>>> All I can see are these utilities in /usr/bin:
>>>
>>> -rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstableverify
>>> -rwxr-xr-x.   1 root root2045 Jul 25 06:12 sstableutil
>>> -rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstableupgrade
>>> -rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstablescrub
>>> -rwxr-xr-x.   1 root root2034 Jul 25 06:12 sstableloader
>>>
>>>
>>> If this utility is no longer available, how can I get sstable metadata
>>> like repaired_at and estimated droppable tombstones?
>>>
>>>
>>> Thanks
>>> Pranay
>>>
>>


Re: SSTableMetadata Util

2018-10-01 Thread Pranay akula
Thanks Christophe,

I have installed using the rpm package. I actually ran the locate command
to find the sstable utils and I could find only those.

I will probably need to copy them manually.

Regards
Pranay

On Mon, Oct 1, 2018, 9:01 PM Christophe Schmitz 
wrote:

> Hi Pranay,
>
> The sstablemetadata tool is still available in the tarball distribution
> ($CASSANDRA_HOME/tools/bin) in 3.11.3. I am not sure why it is not available
> in your packaged installation; you might want to manually copy the one from
> the tarball to your /usr/bin/.
>
> Additionally, you can have a look at
> https://github.com/instaclustr/cassandra-sstable-tools which will
> provide you with the desired info, plus more info you might find useful.
>
>
> Christophe Schmitz - Instaclustr  -
> Cassandra | Kafka | Spark Consulting
>
>
>
>
>
> On Tue, 2 Oct 2018 at 11:31 Pranay akula 
> wrote:
>
>> Hi,
>>
>> I am testing Apache Cassandra 3.11.3 and I couldn't find the sstablemetadata util.
>>
>> All I can see are these utilities in /usr/bin:
>>
>> -rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstableverify
>> -rwxr-xr-x.   1 root root2045 Jul 25 06:12 sstableutil
>> -rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstableupgrade
>> -rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstablescrub
>> -rwxr-xr-x.   1 root root2034 Jul 25 06:12 sstableloader
>>
>>
>> If this utility is no longer available, how can I get sstable metadata
>> like repaired_at and estimated droppable tombstones?
>>
>>
>> Thanks
>> Pranay
>>
>


Re: SSTableMetadata Util

2018-10-01 Thread Christophe Schmitz
Hi Pranay,

The sstablemetadata tool is still available in the tarball distribution
($CASSANDRA_HOME/tools/bin) in 3.11.3. I am not sure why it is not available
in your packaged installation; you might want to manually copy the one from
the tarball to your /usr/bin/.

Additionally, you can have a look at
https://github.com/instaclustr/cassandra-sstable-tools which will
provide you with the desired info, plus more info you might find useful.


Christophe Schmitz - Instaclustr  - Cassandra
| Kafka | Spark Consulting





On Tue, 2 Oct 2018 at 11:31 Pranay akula  wrote:

> Hi,
>
> I am testing Apache Cassandra 3.11.3 and I couldn't find the sstablemetadata util.
>
> All I can see are these utilities in /usr/bin:
>
> -rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstableverify
> -rwxr-xr-x.   1 root root2045 Jul 25 06:12 sstableutil
> -rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstableupgrade
> -rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstablescrub
> -rwxr-xr-x.   1 root root2034 Jul 25 06:12 sstableloader
>
>
> If this utility is no longer available, how can I get sstable metadata like
> repaired_at and estimated droppable tombstones?
>
>
> Thanks
> Pranay
>


Re: Cassandra loading data from another table

2018-10-01 Thread Christophe Schmitz
Have a look at using Spark on Cassandra. It's commonly used for data
movement / data migration / reconciliation (on top of analytics). You will
get much better performance.

Christophe Schmitz - Instaclustr  - Cassandra
| Kafka | Spark Consulting





On Tue, 2 Oct 2018 at 09:58 Richard Xin 
wrote:

> Christophe, thanks for your insights,
> Sorry, I forgot to mention that currently both tableA and tableB are being
> updated by the application (all newly inserted/updated records should be
> identical on A and B), so exporting from tableB and COPYing it back later
> will result in older data overwriting newly updated data.
>
> I can only think of using COPY to export tableA to a csv, and then
> iterating over the csv line by line to insert into tableB with the "if not
> exists" clause to avoid downtime, but it's error-prone and slow. Not sure
> whether there is a better way.
> Best,
> Richard
>
> On Monday, October 1, 2018, 4:34:38 PM PDT, Christophe Schmitz <
> christo...@instaclustr.com> wrote:
>
>
> Hi Richard,
>
> You could consider exporting your few thousand records of Table B to a
> file, with *COPY TO*. Then *TRUNCATE* Table B, copy the SSTable files of
> TableA to the data directory of Table B (make sure you *flush* the
> memtables first), then run nodetool *refresh*. The final step is to load
> the few thousand records into Table B with *COPY FROM*. This will
> overwrite the data you loaded from the SSTables of Table A.
> Overall, there is no downtime on your cluster and there is no downtime on
> Table A, yet you need to think about the consequences on Table B if your
> application is writing to Table A or Table B during this process.
> Please test first :)
>
> Cheers,
> Christophe
>
> Christophe Schmitz - Instaclustr  -
> Cassandra | Kafka | Spark Consulting
>
>
>
>
> On Tue, 2 Oct 2018 at 09:18 Richard Xin 
> wrote:
>
> I have a tableA with a few tens of millions of records, and I have a tableB
> with a few thousand records.
> TableA and tableB have the exact same schema (except that tableB doesn't
> have a TTL).
>
> I want to load all data from tableA into tableB EXCEPT for rows already in
> tableB (we don't want data in tableB to be overwritten).
>
> What's the best way to accomplish this?
>
> Thanks,
>
>


SSTableMetadata Util

2018-10-01 Thread Pranay akula
Hi,

I am testing Apache Cassandra 3.11.3 and I couldn't find the sstablemetadata util.

All I can see are these utilities in /usr/bin:

-rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstableverify
-rwxr-xr-x.   1 root root2045 Jul 25 06:12 sstableutil
-rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstableupgrade
-rwxr-xr-x.   1 root root2042 Jul 25 06:12 sstablescrub
-rwxr-xr-x.   1 root root2034 Jul 25 06:12 sstableloader


If this utility is no longer available, how can I get sstable metadata like
repaired_at and estimated droppable tombstones?


Thanks
Pranay


Re: [EXTERNAL] Re: Rolling back Cassandra upgrades (tarball)

2018-10-01 Thread Jeff Jirsa
sstable version alone isn’t sufficient - there can be other surprises that will 
break the lower version (commitlog format change, new types or concepts like 
UDTs that may appear in the schema, etc) 

I think 3.11 to 3.0 still works but I’m not certain of it personally 

-- 
Jeff Jirsa


> On Oct 1, 2018, at 6:13 PM, Christophe Schmitz  
> wrote:
> 
> Adding to the thread:
> SSTable format is identical between 3.0.x and 3.11.x, so your SSTable files 
> are compatible, in this case. BTW an easy way to check that is to look at the 
> SSTables filename convention; first letters ('mc' in this case) indicate the 
> SSTable storage format version.
> In the future, if you really really want rollback when doing a major upgrade 
> with a change of SSTable format, your only option will be to create a 
> secondary data center (same number of nodes, same Cassandra version, please 
> check your keyspaces are using NetworkTopologyStrategy). You will be able to 
> upgrade the Cassandra version of one DC, while keeping the other DC to the 
> current version. You will need to consider carefully the consistency level of 
> your application (probably LOCAL_QUORUM) so that your application is writing 
> to one DC, with automatic replication on the secondary DC. Once you are 
> happy, you can decommission the old version DC (check carefully your 
> application endpoint configuration, local_dc configuration)
> Hope this helps.
> 
> 
> Christophe Schmitz - Instaclustr - Cassandra | Kafka | Spark Consulting
> 
> 
> 
>> On Mon, 1 Oct 2018 at 23:18 Durity, Sean R  
>> wrote:
>> Version choices aside, I am an advocate for forward-only (in most cases). 
>> Here is my reasoning, so that you can evaluate for your situation:
>> - upgrades are done while the application is up and live and writing data 
>> (no app downtime)
>> - the upgrade usually includes a change to the sstable version (which is 
>> unreadable in the older version)
>> - any data written to upgraded nodes will be written in the new sstable 
>> format
>> + this includes any compaction that takes place on upgraded nodes, so even 
>> an app outage doesn't protect you
>> - so, there is no going back, unless you are willing to lose new (or 
>> compacted) data written to any upgraded nodes
>> 
>> As you can tell, if the assumptions don't hold true, a roll back may be 
>> possible. For example, if the sstable version is the same (e.g., for a minor 
>> upgrade), then the risk of lost data is gone. Or, if you are able to stop 
>> your application during the upgrade process and stop compaction. Etc.
>> 
>> You could upgrade a single node to see how it behaves. If there is some 
>> problem, you could wipe out the data, go back to the old version, and 
>> bootstrap it again. Once I get to the 2nd node, though, I am only going 
>> forward.
>> 
>> Sean Durity
>> 
>> 
>> -Original Message-
>> From: Jeff Jirsa 
>> Sent: Sunday, September 30, 2018 8:38 PM
>> To: user@cassandra.apache.org
>> Subject: [EXTERNAL] Re: Rolling back Cassandra upgrades (tarball)
>> 
>> Definitely don’t go to 3.10, go to 3.11.3 or newest 3.0 instead
>> 
>> 
>> --
>> Jeff Jirsa
>> 
>> 
>> On Sep 30, 2018, at 5:29 PM, Nate McCall  wrote:
>> 
>> >> I have a cluster on v3.0.11 I am planning to upgrade this to 3.10.
>> >> Is rolling back the binaries a viable solution?
>> >
>> > What's the goal with moving from 3.0 to 3.x?
>> >
>> > Also, our latest release in 3.x is 3.11.3 and has a couple of
>> > important bug fixes over 3.10 (which is a bit dated at this point).
>> >
>> > -
>> > To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> > For additional commands, e-mail: user-h...@cassandra.apache.org
>> >
>> 
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>> 
>> 
>> 
>> 

Re: [EXTERNAL] Re: Rolling back Cassandra upgrades (tarball)

2018-10-01 Thread Christophe Schmitz
Adding to the thread:

   - SSTable format is identical between 3.0.x and 3.11.x, so your SSTable
   files are compatible in this case. BTW, an easy way to check that is to
   look at the SSTable filename convention; the first letters ('mc' in this
   case) indicate the SSTable storage format version (see the short sketch
   after this list).
   - In the future, if you really really want rollback when doing a major
   upgrade with a change of SSTable format, your only option will be to create
   a secondary data center (same number of nodes, same Cassandra version,
   please check your keyspaces are using NetworkTopologyStrategy). You will be
   able to upgrade the Cassandra version of one DC, while keeping the other DC
   to the current version. You will need to consider carefully the consistency
   level of your application (probably LOCAL_QUORUM) so that your application
   is writing to one DC, with automatic replication on the secondary DC. Once
   you are happy, you can decommission the old version DC (check carefully
   your application endpoint configuration, local_dc configuration)
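
For the filename check mentioned in the first point, a tiny Python sketch
(the data path and the 3.x directory layout are assumptions):

```
import glob
import os

# 3.x layout: <data_dir>/<keyspace>/<table-uuid>/<version>-<generation>-big-Data.db
data_files = glob.glob("/var/lib/cassandra/data/*/*/*-big-Data.db")
versions = {os.path.basename(f).split("-")[0] for f in data_files}
print(versions)  # e.g. {'mc'}, the storage format shared by 3.0.x and 3.11.x
```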

Hope this helps.


Christophe Schmitz - Instaclustr  - Cassandra
| Kafka | Spark Consulting



On Mon, 1 Oct 2018 at 23:18 Durity, Sean R 
wrote:

> Version choices aside, I am an advocate for forward-only (in most cases).
> Here is my reasoning, so that you can evaluate for your situation:
> - upgrades are done while the application is up and live and writing data
> (no app downtime)
> - the upgrade usually includes a change to the sstable version (which is
> unreadable in the older version)
> - any data written to upgraded nodes will be written in the new sstable
> format
> + this includes any compaction that takes place on upgraded nodes, so even
> an app outage doesn't protect you
> - so, there is no going back, unless you are willing to lose new (or
> compacted) data written to any upgraded nodes
>
> As you can tell, if the assumptions don't hold true, a roll back may be
> possible. For example, if the sstable version is the same (e.g., for a
> minor upgrade), then the risk of lost data is gone. Or, if you are able to
> stop your application during the upgrade process and stop compaction. Etc.
>
> You could upgrade a single node to see how it behaves. If there is some
> problem, you could wipe out the data, go back to the old version, and
> bootstrap it again. Once I get to the 2nd node, though, I am only going
> forward.
>
> Sean Durity
>
>
> -Original Message-
> From: Jeff Jirsa 
> Sent: Sunday, September 30, 2018 8:38 PM
> To: user@cassandra.apache.org
> Subject: [EXTERNAL] Re: Rolling back Cassandra upgrades (tarball)
>
> Definitely don’t go to 3.10, go to 3.11.3 or newest 3.0 instead
>
>
> --
> Jeff Jirsa
>
>
> On Sep 30, 2018, at 5:29 PM, Nate McCall  wrote:
>
> >> I have a cluster on v3.0.11 I am planning to upgrade this to 3.10.
> >> Is rolling back the binaries a viable solution?
> >
> > What's the goal with moving from 3.0 to 3.x?
> >
> > Also, our latest release in 3.x is 3.11.3 and has a couple of
> > important bug fixes over 3.10 (which is a bit dated at this point).
> >
> > -
> > To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> > For additional commands, e-mail: user-h...@cassandra.apache.org
> >
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>
> 
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>


Re: Cassandra loading data from another table

2018-10-01 Thread Richard Xin
Christophe, thanks for your insights,
Sorry, I forgot to mention that currently both tableA and tableB are being
updated by the application (all newly inserted/updated records should be
identical on A and B), so exporting from tableB and COPYing it back later
will result in older data overwriting newly updated data.

I can only think of using COPY to export tableA to a csv, and then iterating
over the csv line by line to insert into tableB with the "if not exists"
clause to avoid downtime, but it's error-prone and slow. Not sure whether
there is a better way.

Best,
Richard
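
For what it's worth, a rough sketch of that idea with the DataStax Python
driver (host, keyspace, table and column names below are placeholders for a
two-column schema; the LWT makes it slow, as noted, but nothing already in
tableB gets overwritten):

```
import csv
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("my_ks")
insert = session.prepare(
    "INSERT INTO table_b (key, value) VALUES (?, ?) IF NOT EXISTS")

# table_a.csv produced beforehand, e.g. with: COPY my_ks.table_a TO 'table_a.csv'
with open("table_a.csv", newline="") as f:
    for key, value in csv.reader(f):
        # Lightweight transaction: the write only applies if the row is absent.
        session.execute(insert, (key, value))
```
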
On Monday, October 1, 2018, 4:34:38 PM PDT, Christophe Schmitz 
 wrote:  
 
Hi Richard,

You could consider exporting your few thousand records of Table B to a file,
with COPY TO. Then TRUNCATE Table B, copy the SSTable files of TableA to the
data directory of Table B (make sure you flush the memtables first), then run
nodetool refresh. The final step is to load the few thousand records into
Table B with COPY FROM. This will overwrite the data you loaded from the
SSTables of Table A.
Overall, there is no downtime on your cluster and there is no downtime on
Table A, yet you need to think about the consequences on Table B if your
application is writing to Table A or Table B during this process.
Please test first :)

Cheers,
Christophe

Christophe Schmitz - Instaclustr - Cassandra | Kafka | Spark Consulting




On Tue, 2 Oct 2018 at 09:18 Richard Xin  wrote:

I have a tableA with a few tens of millions of records, and I have a tableB
with a few thousand records. TableA and tableB have the exact same schema
(except that tableB doesn't have a TTL).

I want to load all data from tableA into tableB EXCEPT for rows already in
tableB (we don't want data in tableB to be overwritten).

What's the best way to accomplish this?

Thanks,

Re: Cassandra loading data from another table

2018-10-01 Thread Christophe Schmitz
Hi Richard,

You could consider exporting your few thousand records of Table B to a
file, with *COPY TO*. Then *TRUNCATE* Table B, copy the SSTable files of
TableA to the data directory of Table B (make sure you *flush* the
memtables first), then run nodetool *refresh*. The final step is to load
the few thousand records into Table B with *COPY FROM*. This will overwrite
the data you loaded from the SSTables of Table A.
Overall, there is no downtime on your cluster and there is no downtime on
Table A, yet you need to think about the consequences on Table B if your
application is writing to Table A or Table B during this process.
Please test first :)
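
Purely as an illustration of those steps, a per-node sketch in Python
(keyspace, table names and data paths are assumptions; the cqlsh COPY and
TRUNCATE steps only need to run once for the whole cluster):

```
import glob
import shutil
import subprocess

def run(*cmd):
    subprocess.check_call(list(cmd))

run("cqlsh", "-e", "COPY my_ks.table_b TO '/tmp/table_b.csv'")    # save Table B rows
run("cqlsh", "-e", "TRUNCATE my_ks.table_b")
run("nodetool", "flush", "my_ks", "table_a")                      # flush Table A memtables

# Drop Table A's SSTable components into Table B's data directory.
table_b_dir = glob.glob("/var/lib/cassandra/data/my_ks/table_b-*")[0]
for component in glob.glob("/var/lib/cassandra/data/my_ks/table_a-*/mc-*"):
    shutil.copy(component, table_b_dir)

run("nodetool", "refresh", "my_ks", "table_b")                    # load the copied SSTables
run("cqlsh", "-e", "COPY my_ks.table_b FROM '/tmp/table_b.csv'")  # restore B's rows last
```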

Cheers,
Christophe

Christophe Schmitz - Instaclustr  - Cassandra
| Kafka | Spark Consulting




On Tue, 2 Oct 2018 at 09:18 Richard Xin 
wrote:

> I have a tableA with a few tens of millions of records, and I have a tableB
> with a few thousand records.
> TableA and tableB have the exact same schema (except that tableB doesn't
> have a TTL).
>
> I want to load all data from tableA into tableB EXCEPT for rows already in
> tableB (we don't want data in tableB to be overwritten).
>
> What's the best way to accomplish this?
>
> Thanks,
>


Cassandra loading data from another table

2018-10-01 Thread Richard Xin
I have a tableA with a few tens of millions of records, and I have a tableB
with a few thousand records. TableA and tableB have the exact same schema
(except that tableB doesn't have a TTL).

I want to load all data from tableA into tableB EXCEPT for rows already in
tableB (we don't want data in tableB to be overwritten).

What's the best way to accomplish this?

Thanks,

Re: Metrics matrix: migrate 2.1.x metrics to 2.2.x+

2018-10-01 Thread Carl Mueller
That's great too, thank you.

Datadog's 350-metric limit is a PITA for tables once you get over 10 tables,
but I guess we can use bean_regex to do specific targeted metrics for the
important tables anyway.

On Mon, Oct 1, 2018 at 4:21 AM Alain RODRIGUEZ  wrote:

> Hello Carl,
>
> Here is a message I sent to my team a few months ago. I hope this will be
> helpful to you and more people around :). It might not be exhaustive and we
> were moving from C*2.1 to C*3+ in this case, thus skipping C*2.2, but C*2.2
> is similar to C*3.0 if I remember correctly in terms of metrics. Here it is
> for what it's worth:
>
> Quite a few things changed between metric reporter in C* 2.1 and C*3.0.
> - ColumnFamily --> Table
> - XXpercentile --> pXX
> - 1MinuteRate -->  m1_rate
> - the metric name now comes before the KS and Table names, plus some other
> changes of this kind.
> - ^ aggregation / alias indexes changed because of this (using graphite
> for example) ^
> - '.value' is no longer appended to the metric name for gauges; nothing is
> appended instead.
>
> For example (graphite):
>
> From
> aliasByNode(averageSeriesWithWildcards(cassandra.$env.$dc.$host.org.apache.cassandra.metrics.ColumnFamily.$ks.$table.ReadLatency.95percentile,
> 2, 3), 1, 7, 8, 9)
>
> to
> aliasByNode(averageSeriesWithWildcards(cassandra.$env.$dc.$host.org.apache.cassandra.metrics.Table.ReadLatency.$ks.$table.p95,
> 2, 3), 1, 8, 9, 10)
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France / Spain
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> On Fri, Sep 28, 2018 at 20:38, Carl Mueller
>  wrote:
>
>> VERY NICE! Thank you very much
>>
>> On Fri, Sep 28, 2018 at 1:32 PM Lyuben Todorov <
>> lyuben.todo...@instaclustr.com> wrote:
>>
>>> Nothing as fancy as a matrix but a list of what JMX term can see.
>>> Link to the online diff here: https://www.diffchecker.com/G9FE9swS
>>>
>>> /lyubent
>>>
>>> On Fri, 28 Sep 2018 at 19:04, Carl Mueller
>>>  wrote:
>>>
 It's my understanding that metrics got heavily re-namespaced in JMX for
 2.2 from 2.1

 Did anyone ever make a migration matrix/guide for conversion of old
 metrics to new metrics?





Re: Re: Re: how to configure the Token Allocation Algorithm

2018-10-01 Thread Alain RODRIGUEZ
Hello again :),

I thought a little bit more about this question, and I was actually
wondering if something like this would work:

Imagine a 3-node cluster, and create the nodes using:
For the 3 nodes: `num_tokens: 4`
Node 1: `initial_token: -9223372036854775808, -4611686018427387905, -2,
4611686018427387901`
Node 2: `initial_token: -7686143364045646507, -3074457345618258604,
1537228672809129299, 6148914691236517202`
Node 3: `initial_token: -6148914691236517206, -1537228672809129303,
3074457345618258600, 7686143364045646503`

 If you know the initial size of your cluster, you can calculate the total
number of tokens: number of nodes * vnodes and use the formula/python code
above to get the tokens. Then use the first token for the first node, move
to the second node, use the second token and repeat. In my case there is a
total of 12 tokens (3 nodes, 4 tokens each)
```
>>> number_of_tokens = 12
>>> [str(((2**64 / number_of_tokens) * i) - 2**63) for i in
range(number_of_tokens)]
['-9223372036854775808', '-7686143364045646507', '-6148914691236517206',
'-4611686018427387905', '-3074457345618258604', '-1537228672809129303',
'-2', '1537228672809129299', '3074457345618258600', '4611686018427387901',
'6148914691236517202', '7686143364045646503']
```
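
As a side note, the three `initial_token` lists above can be generated by
dealing the evenly spaced tokens out round-robin; a tiny sketch (Python 3,
hence the floor division):

```
nodes, vnodes = 3, 4
total_tokens = nodes * vnodes
tokens = [((2**64 // total_tokens) * i) - 2**63 for i in range(total_tokens)]
for n in range(nodes):
    # Node n takes every nodes-th token, starting at offset n.
    print("node%d initial_token: %s"
          % (n + 1, ", ".join(str(t) for t in tokens[n::nodes])))
```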

It actually works nicely, apparently. Here is a quick ccm test I have run,
with the configuration above:

```

$ ccm node1 nodetool status tlp_lab


Datacenter: datacenter1

===

Status=Up/Down

|/ State=Normal/Leaving/Joining/Moving

--  Address    Load       Tokens  Owns (effective)  Host ID                               Rack

UN  127.0.0.1  82.47 KiB  4       66.7%             1ed8680b-7250-4088-988b-e4679514322f  rack1

UN  127.0.0.2  99.03 KiB  4       66.7%             ab3655b5-c380-496d-b250-51b53efb4c00  rack1

UN  127.0.0.3  82.36 KiB  4       66.7%             ad2b343e-5f6e-4b0d-b79f-a3dfc3ba3c79  rack1
```

Ownership is perfectly distributed, like it would be without vnodes. Tested
with C* 3.11.1 and CCM.

I followed the procedure we were talking about for my second test, after
wiping out the data in my 3-node ccm cluster:
RF=2 for tlp_lab, the first node with initial_token defined and the other
nodes using 'allocate_tokens_for_keyspace: tlp_lab':

$ ccm node1 nodetool status tlp_lab


Datacenter: datacenter1

===

Status=Up/Down

|/ State=Normal/Leaving/Joining/Moving

--  Address    Load       Tokens  Owns (effective)  Host ID                               Rack

UN  127.0.0.1  86.71 KiB  4       96.2%             6e4c0ce0-2e2e-48ff-b7e0-3653e76366a3  rack1

UN  127.0.0.2  65.63 KiB  4       54.2%             592cda85-5807-4e7a-aa3b-0d9ae54cfaf3  rack1

UN  127.0.0.3  99.04 KiB  4       49.7%             f2c4eccc-31cc-458c-a599-5373c1169d3c  rack1

This is not as good. I guess a fourth node would help, but it still would not
be perfect.

I would still check what happens when you add a few more nodes with
'allocate_tokens_for_keyspace' afterward and without 'initial_token', so
there are no surprises.
I have not seen anyone using this yet. Please take it as an idea to dig
into, and not as a recommendation :).

I also noticed I did not answer the second part of the mail:

> My cluster size won't go beyond 150 nodes; should I still use the
> Allocation Algorithm instead of random with 256 tokens (performance-wise or
> load-balance-wise)?
>

I would say yes. There is talk of changing this default (256 vnodes), which
is now probably always a bad idea since 'allocate_tokens_for_keyspace' was
added.

> Is the Allocation Algorithm widely used and tested by the community, and can
> we migrate all clusters of any size to use this algorithm safely?
>

Here again, I would say yes. I am not sure that it is widely used yet, but
I think so. Also, you can always check the ownership with 'nodetool status
' after adding the nodes, and before adding data or traffic to
this data center, so there is probably no real risk if you check ownership
distribution after adding nodes. If you don't like the distribution, you
can decommission the nodes, clean them, and try again; I tend to call it
'rolling the dice' when I am still using the random algorithm :). I mean,
once the token range ownership is distributed to the nodes, it does not
change anything afterwards. We don't need this 'algorithm' after the
bootstrap, I would say.


> Out of curiosity, I wonder how people (e.g., at Apple) configure and maintain
> token management of clusters with thousands of nodes?
>

I am not sure about Apple, but my understanding is that some of those
companies don't use vnodes and have a 'ring management tool' to perform the
necessary 'nodetool move' around the cluster relatively easily or
automatically. Some others probably use a low number of vnodes (something
between 4 and 32) and 'allocate_tokens_for_keyspace'.

Also, my understanding is that it's very rare to have clusters with
thousands of nodes. You can then start having issues around gossip if I
remember correctly what I read/discussed. I would probably add a second
cluster 

RE: [EXTERNAL] Re: Rolling back Cassandra upgrades (tarball)

2018-10-01 Thread Durity, Sean R
Version choices aside, I am an advocate for forward-only (in most cases). Here 
is my reasoning, so that you can evaluate for your situation:
- upgrades are done while the application is up and live and writing data (no 
app downtime)
- the upgrade usually includes a change to the sstable version (which is 
unreadable in the older version)
- any data written to upgraded nodes will be written in the new sstable format
+ this includes any compaction that takes place on upgraded nodes, so even an 
app outage doesn't protect you
- so, there is no going back, unless you are willing to lose new (or compacted) 
data written to any upgraded nodes

As you can tell, if the assumptions don't hold true, a roll back may be 
possible. For example, if the sstable version is the same (e.g., for a minor 
upgrade), then the risk of lost data is gone. Or, if you are able to stop your 
application during the upgrade process and stop compaction. Etc.

You could upgrade a single node to see how it behaves. If there is some 
problem, you could wipe out the data, go back to the old version, and bootstrap 
it again. Once I get to the 2nd node, though, I am only going forward.

Sean Durity


-Original Message-
From: Jeff Jirsa 
Sent: Sunday, September 30, 2018 8:38 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Rolling back Cassandra upgrades (tarball)

Definitely don’t go to 3.10, go to 3.11.3 or newest 3.0 instead


--
Jeff Jirsa


On Sep 30, 2018, at 5:29 PM, Nate McCall  wrote:

>> I have a cluster on v3.0.11 I am planning to upgrade this to 3.10.
>> Is rolling back the binaries a viable solution?
>
> What's the goal with moving from 3.0 to 3.x?
>
> Also, our latest release in 3.x is 3.11.3 and has a couple of
> important bug fixes over 3.10 (which is a bit dated at this point).
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org




The information in this Internet Email is confidential and may be legally 
privileged. It is intended solely for the addressee. Access to this Email by 
anyone else is unauthorized. If you are not the intended recipient, any 
disclosure, copying, distribution or any action taken or omitted to be taken in 
reliance on it, is prohibited and may be unlawful. When addressed to our 
clients any opinions or advice contained in this Email are subject to the terms 
and conditions expressed in any applicable governing The Home Depot terms of 
business or client engagement letter. The Home Depot disclaims all 
responsibility and liability for the accuracy and content of this attachment 
and for any damages or losses arising from any inaccuracies, errors, viruses, 
e.g., worms, trojan horses, etc., or other items of a destructive nature, which 
may be contained in this attachment and shall not be liable for direct, 
indirect, consequential or special damages in connection with this e-mail 
message or its attachment.

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org


Re: How default_time_to_live would delete rows without tombstones in Cassandra?

2018-10-01 Thread Gabriel Giussi
Hello Alain,
thanks for clarifying this topic. You had alerted that this should be
explored indeed, so there is nothing to apologize for.

I've asked this in stackoverflow too (
https://stackoverflow.com/q/52282517/3517383), so if you want to answer
there I will mark yours as the correct one, if not I will reference this
mail from the mailing list.

Your posts on LastPickle are really great BTW.

Cheers.

On Thu, Sep 27, 2018 at 13:48, Alain RODRIGUEZ ()
wrote:

> Hello Gabriel,
>
> Sorry for not answering earlier. I should have, given that I contributed to
> spreading this wrong idea. I will also try to edit my comment in the post.
> I had been fooled by the piece of documentation you mentioned when
> answering this question on our blog. I probably answered this one too
> quickly, even though I presented it as a thing 'to explore', even saying I
> had not tried it explicitly.
>
> Another clue to explore would be to use the TTL as a default value if
>> that's a good fit. TTLs set at the table level with
>> 'default_time_to_live' **should not generate any tombstone at all in
>> C*3.0+**. Not tested on my hand, but I read about this.
>
>
> So my sentence above is wrong. Basically, the default can be overwritten
> by the TTL at the query level and I do not see how Cassandra could handle
> this without tombstones.
>
> I spent time on the post and it was reviewed. I believe it is reliable.
> The questions, on the other hand, are answered by me alone and, well, they
> only reflect my opinion at the moment I am asked; I sometimes find enough
> time and interest to dig into topics, sometimes a bit less. So this is
> fully on me, my apologies for this inaccuracy. I must say I am always
> afraid, when writing publicly and sharing information, of making this kind
> of mistake and misleading people. I hope the impact of this read was still
> positive for you overall.
>
>> From the example I conclude that it isn't true that `default_time_to_live`
>> does not require tombstones, at least for version 3.0.13.
>>
>
> Also, I am glad to see you did not believe me or Datastax documentation
> but tried it by yourself. This is definitively the right approach.
>
>> But how would C* delete without tombstones? Why should this be a different
>> scenario from using a TTL per insert?
>>
>
> Yes, exactly this,
>
> C*heers.
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France / Spain
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> On Mon, Sep 17, 2018 at 14:58, Gabriel Giussi  wrote:
>
>>
>> From
>> https://docs.datastax.com/en/cassandra/3.0/cassandra/dml/dmlAboutDeletes.html
>>
>> > Cassandra allows you to set a default_time_to_live property for an
>> entire table. Columns and rows marked with regular TTLs are processed as
>> described above; but when a record exceeds the table-level TTL, **Cassandra
>> deletes it immediately, without tombstoning or compaction**.
>>
>> This is also answered in https://stackoverflow.com/a/50060436/3517383
>>
>> >  If a table has default_time_to_live on it then rows that exceed this
>> time limit are **deleted immediately without tombstones being written**.
>>
>> And commented in LastPickle's post About deletes and tombstones (
>> http://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html#comment-3949581514
>> )
>>
>> > Another clue to explore would be to use the TTL as a default value if
>> that's a good fit. TTLs set at the table level with 'default_time_to_live'
>> **should not generate any tombstone at all in C*3.0+**. Not tested on my
>> hand, but I read about this.
>>
>> I've made the simplest test that I could imagine using
>> `LeveledCompactionStrategy`:
>>
>> CREATE KEYSPACE IF NOT EXISTS temp WITH replication = {'class':
>> 'SimpleStrategy', 'replication_factor': '1'};
>>
>> CREATE TABLE IF NOT EXISTS temp.test_ttl (
>> key text,
>> value text,
>> PRIMARY KEY (key)
>> ) WITH  compaction = { 'class': 'LeveledCompactionStrategy'}
>>   AND default_time_to_live = 180;
>>
>>  1. `INSERT INTO temp.test_ttl (key,value) VALUES ('k1','v1');`
>>  2. `nodetool flush temp`
>>  3. `sstabledump mc-1-big-Data.db`
>> [image: cassandra0.png]
>>
>>  4. wait for 180 seconds (default_time_to_live)
>>  5. `sstabledump mc-1-big-Data.db`
>> [image: cassandra1.png]
>>
>> The tombstone isn't created yet
>>  6. `nodetool compact temp`
>>  7. `sstabledump mc-2-big-Data.db`
>> [image: cassandra2.png]
>>
>> The **tombstone is created** (and not dropped on compaction due to
>> gc_grace_seconds)
>>
>> The test was performed using apache cassandra 3.0.13
>>
>> From the example I conclude that it isn't true that `default_time_to_live`
>> does not require tombstones, at least for version 3.0.13.
>> However this is a very simple test and I'm forcing a major compaction
>> with `nodetool compact` so I may not be recreating the scenario where
>> default_time_to_live magic comes into play.
>>
>> But how would C* delete without 

Fwd: Re: Re: how to configure the Token Allocation Algorithm

2018-10-01 Thread onmstester onmstester
Thanks Alex, you are right, that would be a mistake.

Sent using Zoho Mail

 Forwarded message 
From: Oleksandr Shulgin
To: "User"
Date: Mon, 01 Oct 2018 13:53:37 +0330
Subject: Re: Re: how to configure the Token Allocation Algorithm
 Forwarded message 

On Mon, Oct 1, 2018 at 12:18 PM onmstester onmstester  wrote:

> What if instead of running that python and having one node with non-vnode
> config, I remove the first seed node and re-add it after the cluster is
> fully up? So the token ranges of the first seed node would also be assigned
> by the Allocation Alg.

I think this is tricky because the random allocation of the very first tokens
from the first seed affects the choice of tokens made by the algorithm on the
rest of the nodes: it basically tries to divide the token ranges in more or
less equal parts. If your very first 8 tokens resulted in really bad balance,
you are not going to remove that imbalance by removing the node; it would
still have a lasting effect on the rest of your cluster.

--
Alex

Re: Re: how to configure the Token Allocation Algorithm

2018-10-01 Thread Oleksandr Shulgin
On Mon, Oct 1, 2018 at 12:18 PM onmstester onmstester 
wrote:

>
> What if instead of running that python and having one node with non-vnode
> config, I remove the first seed node and re-add it after the cluster is
> fully up? So the token ranges of the first seed node would also be assigned
> by the Allocation Alg.
>

I think this is tricky because the random allocation of the very first
tokens from the first seed affects the choice of tokens made by the
algorithm on the rest of the nodes: it basically tries to divide the token
ranges in more or less equal parts. If your very first 8 tokens resulted
in really bad balance, you are not going to remove that imbalance by
removing the node; it would still have a lasting effect on the rest of
your cluster.

--
Alex


Fwd: Re: how to configure the Token Allocation Algorithm

2018-10-01 Thread onmstester onmstester
Thanks Alain,

What if instead of running that python and having one node with non-vnode
config, I remove the first seed node and re-add it after the cluster is fully
up? So the token ranges of the first seed node would also be assigned by the
Allocation Alg.

 Forwarded message 
From: Alain RODRIGUEZ
To: "user cassandra.apache.org"
Date: Mon, 01 Oct 2018 13:14:21 +0330
Subject: Re: how to configure the Token Allocation Algorithm
 Forwarded message 

Hello,

Your process looks good to me :). Still a couple of comments to make it more
efficient (hopefully).

- Improving step 2:

I believe you can actually get a slightly better distribution picking the
tokens for the (first) seed node. This is to prevent the node from randomly
calculating its token ranges. You can calculate the token ranges using the
following python code:

$ python  # Start the python shell
[...]
>>> number_of_tokens = 8
>>> [str(((2**64 / number_of_tokens) * i) - 2**63) for i in range(number_of_tokens)]
['-9223372036854775808', '-6917529027641081856', '-4611686018427387904',
'-2305843009213693952', '0', '2305843009213693952', '4611686018427387904',
'6917529027641081856']

Set 'initial_token' to the above list (comma-separated) and the number of
vnodes to 'num_tokens: 8'.

This technique proved to be way more efficient (especially for low token
numbers / small number of nodes). Luckily it's also easy to test.

Re: unsubscribe

2018-10-01 Thread Alain RODRIGUEZ
Hello,

You're still subscribed to this mailing list I am afraid :). In case you
missed it:

To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org


C*heers.

On Mon, Oct 1, 2018 at 08:04, Gabriel Lindeborg <
gabriel.lindeb...@svenskaspel.se> wrote:

>
> AB SVENSKA SPEL
> 621 80 Visby
> Norra Hansegatan 17, Visby
> Växel: +4610-120 00 00
> https://svenskaspel.se
>
> Please consider the environment before printing this email
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Re: how to configure the Token Allocation Algorithm

2018-10-01 Thread Alain RODRIGUEZ
Hello,

Your process looks good to me :). Still a couple of comments to make it
more efficient (hopefully).

*- Improving step 2:*

I believe you can actually get a slightly better distribution picking the
tokens for the (first) seed node. This is to prevent the node from randomly
calculating its token ranges. You can calculate the token ranges using the
following python code:

$ python  # Start the python shell
[...]
>>> number_of_tokens = 8
>>> [str(((2**64 / number_of_tokens) * i) - 2**63) for i in range(number_of_tokens)]
['-9223372036854775808', '-6917529027641081856',
'-4611686018427387904', '-2305843009213693952', '0',
'2305843009213693952', '4611686018427387904', '6917529027641081856']


Set 'initial_token' to the above list (comma-separated) and the number of
vnodes to 'num_tokens: 8'.

This technique proved to be way more efficient (especially for low token
numbers / small number of nodes). Luckily it's also easy to test.

- *Step 4 might not be needed*

I don't see the need for stopping/starting the seed. The option
'allocate_tokens_for_keyspace' won't affect this seed node (already
initialized) in any way.

Also, do not forget to have more nodes become 'seeds', either after
bootstrap, or just start a couple more seeds after the first one, for
example.

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com



On Mon, Oct 1, 2018 at 07:16, onmstester onmstester  wrote:

> Since I failed to find a document on how to configure and use the Token
> Allocation Algorithm (to replace the random algorithm), I just wanted to be
> sure about the procedure I've followed:
> 1. Using Apache Cassandra 3.11.2.
> 2. Configured one of the seed nodes with num_tokens=8 and started it.
> 3. Using cqlsh, created keyspace test with NetworkTopologyStrategy and RF=3.
> 4. Stopped the seed node.
> 5. Added this line to cassandra.yaml of all nodes (all have num_tokens=8)
> and started the cluster:
> allocate_tokens_for_keyspace=test
>
> My cluster size won't go beyond 150 nodes; should I still use the
> Allocation Algorithm instead of random with 256 tokens (performance-wise or
> load-balance-wise)?
> Is the Allocation Algorithm widely used and tested by the community, and can
> we migrate all clusters of any size to use this algorithm safely?
> Out of curiosity, I wonder how people (e.g., at Apple) configure and maintain
> token management of clusters with thousands of nodes?
>
>
> Sent using Zoho Mail 
>
>
>


Re: Metrics matrix: migrate 2.1.x metrics to 2.2.x+

2018-10-01 Thread Alain RODRIGUEZ
Hello Carl,

Here is a message I sent to my team a few months ago. I hope this will be
helpful to you and more people around :). It might not be exhaustive and we
were moving from C*2.1 to C*3+ in this case, thus skipping C*2.2, but C*2.2
is similar to C*3.0 if I remember correctly in terms of metrics. Here it is
for what it's worth:

Quite a few things changed between metric reporter in C* 2.1 and C*3.0.
- ColumnFamily --> Table
- XXpercentile --> pXX
- 1MinuteRate -->  m1_rate
- the metric name now comes before the KS and Table names, plus some other
changes of this kind.
- ^ aggregation / alias indexes changed because of this (using graphite
for example) ^
- '.value' is no longer appended to the metric name for gauges; nothing is
appended instead.

For example (graphite):

From
aliasByNode(averageSeriesWithWildcards(cassandra.$env.$dc.$host.org.apache.cassandra.metrics.ColumnFamily.$ks.$table.ReadLatency.95percentile,
2, 3), 1, 7, 8, 9)

to
aliasByNode(averageSeriesWithWildcards(cassandra.$env.$dc.$host.org.apache.cassandra.metrics.Table.ReadLatency.$ks.$table.p95,
2, 3), 1, 8, 9, 10)
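
For bulk-editing dashboards, here is a rough sketch of that rename in Python.
It only covers the table-scoped percentile case shown above plus the
1MinuteRate rename; extend the rules as needed and treat the exact regexes as
assumptions to verify against your own metric names:

```
import re

def migrate_metric(name):
    name = name.replace(".ColumnFamily.", ".Table.")          # scope rename
    # Move keyspace/table after the metric name and turn XXpercentile into pXX.
    name = re.sub(r"\.Table\.([^.]+)\.([^.]+)\.([^.]+)\.(\d+)percentile$",
                  r".Table.\3.\1.\2.p\4", name)
    name = re.sub(r"\.1MinuteRate$", ".m1_rate", name)
    return name

old = ("cassandra.$env.$dc.$host.org.apache.cassandra.metrics."
       "ColumnFamily.$ks.$table.ReadLatency.95percentile")
print(migrate_metric(old))
# -> cassandra.$env.$dc.$host.org.apache.cassandra.metrics.Table.ReadLatency.$ks.$table.p95
```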

C*heers,
---
Alain Rodriguez - @arodream - al...@thelastpickle.com
France / Spain

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

On Fri, Sep 28, 2018 at 20:38, Carl Mueller
 wrote:

> VERY NICE! Thank you very much
>
> On Fri, Sep 28, 2018 at 1:32 PM Lyuben Todorov <
> lyuben.todo...@instaclustr.com> wrote:
>
>> Nothing as fancy as a matrix but a list of what JMX term can see.
>> Link to the online diff here: https://www.diffchecker.com/G9FE9swS
>>
>> /lyubent
>>
>> On Fri, 28 Sep 2018 at 19:04, Carl Mueller
>>  wrote:
>>
>>> It's my understanding that metrics got heavily re-namespaced in JMX for
>>> 2.2 from 2.1
>>>
>>> Did anyone ever make a migration matrix/guide for conversion of old
>>> metrics to new metrics?
>>>
>>>
>>>


RE: Cassandra 2.1.21 ETA?

2018-10-01 Thread Steinmaurer, Thomas
Michael,

can you please elaborate on your SocketServer question? Is this for Thrift only,
or does it also affect the native protocol (CQL)?

Yes, we basically have iptables rules in place disallowing remote access from 
machines outside the cluster.

Thanks again,
Thomas

> -Original Message-
> From: Michael Shuler  On Behalf Of Michael
> Shuler
> Sent: Freitag, 21. September 2018 15:49
> To: user@cassandra.apache.org
> Subject: Re: Cassandra 2.1.21 ETA?
>
> On 9/21/18 3:28 AM, Steinmaurer, Thomas wrote:
> >
> > is there an ETA for 2.1.21 containing the logback update (security
> > vulnerability fix)?
>
> Are you using SocketServer? Is your cluster firewalled?
>
> Feb 2018 2.1->3.11 commits noting this in NEWS.txt:
> https://github.com/apache/cassandra/commit/4bbd28a
>
> Feb 2018 trunk (4.0) commit for the library update:
> https://github.com/apache/cassandra/commit/c0aa79e
>
> --
> Michael
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org

The contents of this e-mail are intended for the named addressee only. It 
contains information that may be confidential. Unless you are the named 
addressee or an authorized designee, you may not copy or use it, or disclose it 
to anyone else. If you received it in error please notify us immediately and 
then destroy it. Dynatrace Austria GmbH (registration number FN 91482h) is a 
company registered in Linz whose registered office is at 4040 Linz, Austria, 
Freistädterstraße 313


Re: Multi dc reaper

2018-10-01 Thread Alexander Dejanovski
Hi Abdul,

The thing with multi-DC Cassandra clusters is usually that JMX is not
accessible over the cross-DC link, which means that a Reaper in one DC cannot
reach the nodes in remote DCs directly.
That's when you need to start Reaper instances in each DC, which will sync
up through the Cassandra backend.
If you want one Reaper instance to control multiple DCs with closed JMX
ports, you'll need to set datacenterAvailability to LOCAL, but that will
disable some safety checks and is not recommended.
You can start multiple Reaper instances in the same DC if you want to
achieve HA.
I recommend checking this page to get all the information about multi-DC
setups with Reaper: http://cassandra-reaper.io/docs/usage/multi_dc/

Cheers,


On Sat, Sep 29, 2018 at 6:47 PM Abdul Patel  wrote:

> Is the multi-DC Reaper for load balancing, i.e. if one goes down another
> node can take care of scheduled repairs, or can we actually schedule repairs
> at the DC level with separate Reaper instances?
> I am planning to have 3 Reaper instances in 3 DCs.
>
>
> On Friday, September 28, 2018, Abdul Patel  wrote:
>
>> Hi
>>
>> I have an 18-node, 3-DC cluster and am trying to use the Reaper multi-DC
>> concept using datacenterAvailability = EACH.
>> But are there different steps? When I start the first instance and add the
>> cluster, it repairs the full cluster rather than the DC.
>> Am I missing any steps?
>> Also, should the contact points in this scenario be only those relevant to
>> that DC?
>>
> --
> You received this message because you are subscribed to the Google Groups
> "TLP Apache Cassandra Reaper users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to tlp-apache-cassandra-reaper-users+unsubscr...@googlegroups.com.
> To post to this group, send email to
> tlp-apache-cassandra-reaper-us...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/tlp-apache-cassandra-reaper-users/CAHEGkNMRpWnU7MvUsiN3xos1D6CqJXvUWxhX%3DS4ahZFQpfNGLQ%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>
-- 
-
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com


unsubscribe

2018-10-01 Thread Gabriel Lindeborg


AB SVENSKA SPEL
621 80 Visby
Norra Hansegatan 17, Visby
Växel: +4610-120 00 00
https://svenskaspel.se

Please consider the environment before printing this email


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org