Golang + Cassandra + Text Search

2017-10-23 Thread Ridley Submission
Hi,

Quick question: I am wondering if anyone here who works with Go has
specific recommendations for a simple framework to add text search on top
of Cassandra?

(Apologies if this is off topic—I am not quite sure what forum in the
cassandra community would be best for this type of question)

Thanks,
Riley
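
One approach that comes up often is to read the rows with gocql and index them
into an embedded Go search library such as bleve, then query the index and fetch
matching rows back from Cassandra by key. A rough sketch, assuming made-up
keyspace, table, and column names:

package main

import (
	"log"

	"github.com/blevesearch/bleve"
	"github.com/gocql/gocql"
)

func main() {
	// Connect to Cassandra (contact point and keyspace are placeholders).
	cluster := gocql.NewCluster("127.0.0.1")
	cluster.Keyspace = "myks"
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Build a local full-text index on disk with bleve.
	index, err := bleve.New("docs.bleve", bleve.NewIndexMapping())
	if err != nil {
		log.Fatal(err)
	}

	// Iterate over a hypothetical table and index each row's text column,
	// keyed by the Cassandra primary key so hits can be looked up again.
	iter := session.Query(`SELECT id, body FROM docs`).Iter()
	var id gocql.UUID
	var body string
	for iter.Scan(&id, &body) {
		if err := index.Index(id.String(), map[string]string{"body": body}); err != nil {
			log.Fatal(err)
		}
	}
	if err := iter.Close(); err != nil {
		log.Fatal(err)
	}

	// Search the index; result hit IDs map back to Cassandra primary keys.
	result, err := index.Search(bleve.NewSearchRequest(bleve.NewMatchQuery("hello world")))
	if err != nil {
		log.Fatal(err)
	}
	log.Println(result)
}

Keeping the index in sync with writes (or rebuilding it periodically) is the part
that needs the most thought; heavier options such as Elasticsearch or Solr next to
Cassandra follow the same read-index-query pattern.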


[no subject]

2017-10-23 Thread vbhang...@gmail.com
It is RF=3, with 12 nodes in each of 3 regions and 6 in each of the other 2, so 48 
nodes in total. Are you suggesting a forced read repair by reading at consistency 
ONE, or by bumping up read_repair_chance? 

We have tried from the command line with ONE, but that times out. 
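
For reference, a rough cqlsh sketch of such a forced read (keyspace, table, and key
are placeholders); a read at ALL compares every replica for the queried key and
writes the newest version back to any stale ones, though the cqlsh request timeout
may need to be raised if it keeps timing out:

    cqlsh> CONSISTENCY ALL;
    cqlsh> SELECT * FROM myks.mytable WHERE key = 'some-key';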
On 2017-10-23 10:18, "Mohapatra, Kishore"  wrote: 
> What is your RF for the keyspace and how many nodes are there in each DC?
> 
> Did you force a read repair to see if you are getting the data or getting an 
> error?
> 
> Thanks
> 
> Kishore Mohapatra
> Principal Operations DBA
> Seattle, WA
> Email : kishore.mohapa...@nuance.com
> 
> 
> -Original Message-
> From: vbhang...@gmail.com [mailto:vbhang...@gmail.com] 
> Sent: Sunday, October 22, 2017 11:31 PM
> To: user@cassandra.apache.org
> Subject: [EXTERNAL] 
> 
> -- Consistency level: LQ (LOCAL_QUORUM)
> -- It started happening approximately a couple of months back.  The issue is very 
> inconsistent and can't be reproduced.  It used to happen only rarely over the 
> last few years.
> -- There are very few GC pauses, but they don't coincide with the issue. 
> -- 99% latency is less than 80ms and 75% is less than 5ms.
> 
> - Vedant
> On 2017-10-22 21:29, Jeff Jirsa  wrote: 
> > What consistency level do you use on writes?
> > Did this just start or has it always happened ?
> > Are you seeing GC pauses at all?
> > 
> > What’s your 99% write latency? 
> > 
> > --
> > Jeff Jirsa
> > 
> > 
> > > On Oct 22, 2017, at 9:21 PM, "vbhang...@gmail.com" 
> > > wrote:
> > > 
> > > This is for Cassandra 2.1.13. At times there are replication delays 
> > > across multiple regions. Data is available (getting queried from command 
> > > line) in 1 region but not seen in other region(s).  This is not 
> > > consistent. It is cluster spanning multiple data centers with total > 30 
> > > nodes. Keyspace is configured to get replicated in all the data centers.
> > > 
> > > Hints are getting piled up in the source region. This happens especially 
> > > for large data payloads (approx. 1 KB to a few MB blobs).  Network-level 
> > > congestion or saturation does not seem to be an issue.  There is no 
> > > memory/cpu pressure on individual nodes.
> > > 
> > > I am sharing Cassandra.yaml below, any pointers on what can be tuned are 
> > > highly appreciated. Let me know if you need any other info.
> > > 
> > > We tried bumping up hinted_handoff_throttle_in_kb to 30720 and 
> > > max_hints_delivery_threads to 12 on one of the nodes to see if it speeds up 
> > > hints delivery; there was some improvement, but not a whole lot.
> > > 
> > > Thanks
> > > 
> > > =
> > > # Cassandra storage config YAML
> > > 
> > > # NOTE:
> > > #   See http://wiki.apache.org/cassandra/StorageConfiguration for
> > > #   full explanations of configuration directives
> > > # /NOTE
> > > 
> > > # The name of the cluster. This is mainly used to prevent machines 
> > > in # one logical cluster from joining another.
> > > cluster_name: "central"
> > > 
> > > # This defines the number of tokens randomly assigned to this node 
> > > on the ring # The more tokens, relative to other nodes, the larger 
> > > the proportion of data # that this node will store. You probably 
> > > want all nodes to have the same number # of tokens assuming they have 
> > > equal hardware capability.
> > > #
> > > # If you leave this unspecified, Cassandra will use the default of 1 
> > > token for legacy compatibility, # and will use the initial_token as 
> > > described below.
> > > #
> > > # Specifying initial_token will override this setting on the node's 
> > > initial start, # on subsequent starts, this setting will apply even if 
> > > initial token is set.
> > > #
> > > # If you already have a cluster with 1 token per node, and wish to 
> > > migrate to # multiple tokens per node, see 
> > > http://wiki.apache.org/cassandra/Operations
> > > #num_tokens: 256
> > > 
> > > # initial_token allows you to specify tokens manually.  While you 
> > > can use # it with # vnodes (num_tokens > 1, above) -- in which case 
> > > you should provide a # comma-separated list -- it's primarily used 
> > > when adding nodes # to legacy clusters # that do not have vnodes enabled.
> > > # initial_token:
> > > 
> > > initial_token: 
> > > 
> > > # See http://wiki.apache.org/cassandra/HintedHandoff

Re: Cassandra 3.10 Bootstrap- Error

2017-10-23 Thread kurt greaves
Looks like you're having SSL issues. Is the new node configured with the
same internode_encryption settings as the existing nodes?

No appropriate protocol (protocol is disabled or cipher suites are
> inappropriate)

Implies the new node is making a connection without SSL or the wrong
ciphers.
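
For comparison, a rough sketch of the cassandra.yaml section that has to line up on
every node (paths and passwords are placeholders):

    server_encryption_options:
        internode_encryption: all           # must match cluster-wide (none/dc/rack/all)
        keystore: /path/to/keystore.jks
        keystore_password: <password>
        truststore: /path/to/truststore.jks
        truststore_password: <password>
        protocol: TLS
        cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA]

Note that AES-256 suites also need to be supported by the JVM on each node (on
older Oracle JDKs that means installing the JCE unlimited strength policy files),
which is another common cause of the "Filtering out ... not supported by the
socket" warning.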


Cassandra 3.10 Bootstrap- Error

2017-10-23 Thread Anumod Mullachery
Hi,

We are using Cassandra 3.10 with NetworkTopologyStrategy and 2 DCs, each
having only 1 node.

We are trying to add new nodes (auto_bootstrap: true in the yaml), but are
getting the error below.

In the seed nodes list, we have provided both of the existing nodes (one from
each DC, 2 in total), and also tried with only 1 node, but no luck.
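
For reference, a rough sketch of the relevant yaml on the joining node (addresses
are placeholders):

    auto_bootstrap: true
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.0.1,10.0.1.1"   # one existing node from each DC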


2017-10-23 20:06:31,739 [MessagingService-Outgoing-/96.115.209.92-Gossip]
WARN   SSLFactory.java:221 - Filtering out [TLS_RSA_WITH_AES_256_CBC_SHA]
as it isn't supported by the socket
2017-10-23 20:06:31,739 [MessagingService-Outgoing-/96.115.209.92-Gossip]
ERROR  OutboundTcpConnection.java:487 - SSL handshake error for outbound
connection to 15454e08[SSL_NULL_WITH_NULL_NULL: Socket[addr=/96.115.209.92
,port=10145,localport=60859]]
javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is
disabled or cipher suites are inappropriate)


2017-10-23 20:06:32,655 [main] ERROR  CassandraDaemon.java:752 - Exception
encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds

2017-10-23 20:06:32,666 [StorageServiceShutdownHook] INFO
HintsService.java:221 - Paused hints dispatch
2017-10-23 20:06:32,667 [StorageServiceShutdownHook] WARN
Gossiper.java:1514 - No local state, state is in silent shutdown, or node
hasn't joined, not announcing shutdown
2017-10-23 20:06:32,667 [StorageServiceShutdownHook] INFO
MessagingService.java:964 - Waiting for messaging service to quiesce
2017-10-23 20:06:32,667 [ACCEPT-/96.115.208.150] INFO
MessagingService.java:1314 - MessagingService has terminated the accept()
thread
2017-10-23 20:06:33,134 [StorageServiceShutdownHook] INFO
HintsService.java:221 - Paused hints dispatch

If someone is able to shed some light on this issue, it would be a great help.

thanks in advance,

- regards

Anumod.


Re: [EXTERNAL] Lot of hints piling up

2017-10-23 Thread Jai Bheemsen Rao Dhanwada
We do not see any errors in the Cassandra or OS logs, and compactions are
happening at regular intervals and look healthy.


The issue is that this is causing replication lag across the datacenters.

On Mon, Oct 23, 2017 at 10:23 AM, Mohapatra, Kishore <
kishore.mohapa...@nuance.com> wrote:

> Do you see any error in the cassandra log ?
>
> Check compactionstats ?
>
> Also check the OS level log messages to see if you are getting hardware
> level error messages.
>
>
>
> Thanks
>
>
>
> *Kishore Mohapatra*
>
> Principal Operations DBA
>
> Seattle, WA
>
> Ph : 425-691-6417 (cell)
>
> Email : kishore.mohapa...@nuance.com
>
>
>
>
>
> *From:* Jai Bheemsen Rao Dhanwada [mailto:jaibheem...@gmail.com]
> *Sent:* Friday, October 20, 2017 9:44 AM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Lot of hints piling up
>
>
>
> Hello,
>
>
>
> We have a Cassandra cluster in 3 regions running version 2.1.13, and all of a
> sudden we started seeing a lot of hints accumulating on the nodes. We are
> pretty sure there is no issue with the network between the regions, and all
> the nodes are up and running all the time.
>
>
>
> Is there any other reason for the hints accumulation other than the n/w?
> eg: wide rows or bigger objects?
>
>
>
> Any pointers here could be very helpful.
>
>
>
> BTW, the hints do get processed after some time.
>


Re: What is a node's "counter ID?"

2017-10-23 Thread Paul Pollack
Makes sense, thanks Blake!

On Fri, Oct 20, 2017 at 9:17 PM, Blake Eggleston 
wrote:

> I believe that’s just referencing a counter implementation detail. If I
> remember correctly, there was a fairly large improvement of the
> implementation of counters in 2.1, and the assignment of the id would
> basically be a format migration.
>
>
> On Oct 20, 2017, at 9:57 AM, Paul Pollack 
> wrote:
>
> Hi,
>
> I was reading the doc page for nodetool cleanup
> https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/
> toolsCleanup.html because I was planning to run it after replacing a node
> in my counter cluster and the sentence "Cassandra assigns a new counter ID
> to the node" gave me pause. I can't find any other reference to a node's
> counter ID in the docs and was wondering if anyone here could shed light on
> what this means, and how it would affect the data being stored on a node
> that had its counter ID changed?
>
> Thanks,
> Paul
>
>


RE: [EXTERNAL] Lot of hints piling up

2017-10-23 Thread Mohapatra, Kishore
Do you see any errors in the Cassandra log?
Check compactionstats?
Also check the OS-level log messages to see if you are getting hardware-level 
error messages.
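
For example, a rough set of commands for those checks (the log path varies by
install):

    $ grep -i -e ERROR -e WARN /var/log/cassandra/system.log | tail -n 50
    $ nodetool compactionstats
    $ nodetool tpstats        # pending/blocked pools and dropped messages
    $ dmesg | tail -n 50      # OS / hardware level messages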

Thanks

Kishore Mohapatra
Principal Operations DBA
Seattle, WA
Ph : 425-691-6417 (cell)
Email : kishore.mohapa...@nuance.com


From: Jai Bheemsen Rao Dhanwada [mailto:jaibheem...@gmail.com]
Sent: Friday, October 20, 2017 9:44 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Lot of hints piling up

Hello,

We have a Cassandra cluster in 3 regions running version 2.1.13, and all of a sudden 
we started seeing a lot of hints accumulating on the nodes. We are pretty sure 
there is no issue with the network between the regions, and all the nodes are up 
and running all the time.

Is there any other reason for the hints accumulation other than the n/w? eg: 
wide rows or bigger objects?

Any pointers here could be very helpful.

BTW, the hints do get processed after some time.


RE: [EXTERNAL]

2017-10-23 Thread Mohapatra, Kishore
What is your RF for the keyspace and how many nodes are there in each DC?

Did you force a read repair to see if you are getting the data or getting an 
error?

Thanks

Kishore Mohapatra
Principal Operations DBA
Seattle, WA
Email : kishore.mohapa...@nuance.com


-Original Message-
From: vbhang...@gmail.com [mailto:vbhang...@gmail.com] 
Sent: Sunday, October 22, 2017 11:31 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] 

-- Consistency level: LQ (LOCAL_QUORUM)
-- It started happening approximately a couple of months back.  The issue is very 
inconsistent and can't be reproduced.  It used to happen only rarely over the last 
few years.
-- There are very few GC pauses, but they don't coincide with the issue. 
-- 99% latency is less than 80ms and 75% is less than 5ms.

- Vedant
On 2017-10-22 21:29, Jeff Jirsa  wrote: 
> What consistency level do you use on writes?
> Did this just start or has it always happened ?
> Are you seeing GC pauses at all?
> 
> What’s your 99% write latency? 
> 
> --
> Jeff Jirsa
> 
> 
> > On Oct 22, 2017, at 9:21 PM, "vbhang...@gmail.com" 
> > wrote:
> > 
> > This is for Cassandra 2.1.13. At times there are replication delays across 
> > multiple regions. Data is available (getting queried from command line) in 
> > 1 region but not seen in other region(s).  This is not consistent. It is 
> > cluster spanning multiple data centers with total > 30 nodes. Keyspace is 
> > configured to get replicated in all the data centers.
> > 
> > Hints are getting piled up in the source region. This happens especially 
> > for large data payloads (approx. 1 KB to a few MB blobs).  Network-level 
> > congestion or saturation does not seem to be an issue.  There is no 
> > memory/cpu pressure on individual nodes.
> > 
> > I am sharing Cassandra.yaml below, any pointers on what can be tuned are 
> > highly appreciated. Let me know if you need any other info.
> > 
> > We tried bumping up hinted_handoff_throttle_in_kb to 30720 and 
> > max_hints_delivery_threads to 12 on one of the nodes to see if it speeds up 
> > hints delivery; there was some improvement, but not a whole lot.
> > 
> > Thanks
> > 
> > =
> > # Cassandra storage config YAML
> > 
> > # NOTE:
> > #   See http://wiki.apache.org/cassandra/StorageConfiguration for
> > #   full explanations of configuration directives
> > # /NOTE
> > 
> > # The name of the cluster. This is mainly used to prevent machines 
> > in # one logical cluster from joining another.
> > cluster_name: "central"
> > 
> > # This defines the number of tokens randomly assigned to this node 
> > on the ring # The more tokens, relative to other nodes, the larger 
> > the proportion of data # that this node will store. You probably 
> > want all nodes to have the same number # of tokens assuming they have equal 
> > hardware capability.
> > #
> > # If you leave this unspecified, Cassandra will use the default of 1 
> > token for legacy compatibility, # and will use the initial_token as 
> > described below.
> > #
> > # Specifying initial_token will override this setting on the node's 
> > initial start, # on subsequent starts, this setting will apply even if 
> > initial token is set.
> > #
> > # If you already have a cluster with 1 token per node, and wish to 
> > migrate to # multiple tokens per node, see 
> > http://wiki.apache.org/cassandra/Operations
> > #num_tokens: 256
> > 
> > # initial_token allows you to specify tokens manually.  While you 
> > can use # it with # vnodes (num_tokens > 1, above) -- in which case 
> > you should provide a # comma-separated list -- it's primarily used 
> > when adding nodes # to legacy clusters # that do not have vnodes enabled.
> > # initial_token:
> > 
> > initial_token: 
> > 
> > # See http://wiki.apache.org/cassandra/HintedHandoff
> > # May either be "true" or "false" to enable globally, or contain a list
> > # of data centers to enable per-datacenter.
> > # hinted_handoff_enabled: DC1,DC2
> > hinted_handoff_enabled: true
> > # this defines the maximum amount of time a dead host will have 
> > hints # generated.  After it has been dead this long, new hints for 
> > it will not be # created until it has been seen alive 

RE: cassandra non-super user login fails but super user works

2017-10-23 Thread Meg Mara
You should probably verify if the ‘can_login’ field of the non-superuser role 
is set to true. You can query the column family system_auth.roles to find out.
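
For example, to check, and to fix it if can_login is false (role name is a
placeholder):

    cqlsh> SELECT role, can_login, is_superuser FROM system_auth.roles;
    cqlsh> ALTER ROLE <role_name> WITH LOGIN = true;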

Thanks,
Meg Mara

From: Justin Cameron [mailto:jus...@instaclustr.com]
Sent: Sunday, October 22, 2017 6:21 PM
To: user@cassandra.apache.org
Subject: Re: cassandra non-super user login fails but super user works

Try setting the replication factor of the system_auth keyspace to the number of 
nodes in your cluster.

ALTER KEYSPACE system_auth WITH replication = {'class': 
'NetworkTopologyStrategy', '<dc_name>': '<number_of_nodes_in_dc>'};
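
For example, assuming a single datacenter named DC1 with 3 nodes (name and count
made up), followed by a repair of that keyspace on each node so the existing
credentials get copied to the new replicas:

    ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};

    $ nodetool repair system_auth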

On Sun, 22 Oct 2017 at 20:06 Who Dadddy 
> wrote:
Anyone seen this before? Pretty basic setup, super user can login fine but 
non-super user can’t?

Any pointers appreciated.



--
Justin Cameron
Senior Software Engineer


This email has been sent on behalf of Instaclustr Pty. Limited (Australia) and 
Instaclustr Inc (USA).

This email and any attachments may contain confidential and legally privileged 
information.  If you are not the intended recipient, do not copy or disclose 
its content, but please reply to this email immediately and highlight the error 
to the sender and then immediately delete the message.


Re: Integrating Cassandra With Hadoop

2017-10-23 Thread Lutaya Shafiq Holmes
Thank you so much

On 10/23/17, Justin Cameron  wrote:
> I'd highly recommend looking at using Spark instead of Hadoop if you need
> to run batch analytics over your Cassandra data - it integrates much
> better, has more flexibility and will be faster/more efficient. You'll save
> yourself a lot of time and hassle.
>
> If you really need to use Hadoop for batch analytics, you should take a
> look at using this approach to ETL your Cassandra backups to HDFS:
> https://www.youtube.com/watch?v=eY5oSZnwmJg
> The main benefit of this approach is that it is fast, scalable and has
> little to no performance impact on your Cassandra cluster. Once the data is
> in HDFS you can run your Hadoop jobs over it. The downside is that it isn't
> open-source (AFAIK), so you'd have to build it yourself.
>
> On Sun, 22 Oct 2017 at 20:41 Lutaya Shafiq Holmes 
> wrote:
>
>> I would like to get some help on integrating Cassandra with Hadoop.
>>
>> How do I get started with this process?
>>
>> --
>> Lutaaya Shafiq
>> Web: www.ronzag.com | i...@ronzag.com
>> Mobile: +256702772721 | +256783564130
>> Twitter: @lutayashafiq
>> Skype: lutaya5
>> Blog: lutayashafiq.com
>> http://www.fourcornersalliancegroup.com/?a=shafiqholmes
>>
>> "The most beautiful people we have known are those who have known defeat,
>> known suffering, known struggle, known loss and have found their way out
>> of
>> the depths. These persons have an appreciation, a sensitivity and an
>> understanding of life that fills them with compassion, gentleness and a
>> deep loving concern. Beautiful people do not just happen." - *Elisabeth
>> Kubler-Ross*
>>
>>
>> --
>
>
> Justin Cameron
> Senior Software Engineer
>
>
> 
>
>
> This email has been sent on behalf of Instaclustr Pty. Limited (Australia)
> and Instaclustr Inc (USA).
>
> This email and any attachments may contain confidential and legally
> privileged information.  If you are not the intended recipient, do not copy
> or disclose its content, but please reply to this email immediately and
> highlight the error to the sender and then immediately delete the message.
>


-- 
Lutaaya Shafiq
Web: www.ronzag.com | i...@ronzag.com
Mobile: +256702772721 | +256783564130
Twitter: @lutayashafiq
Skype: lutaya5
Blog: lutayashafiq.com
http://www.fourcornersalliancegroup.com/?a=shafiqholmes

"The most beautiful people we have known are those who have known defeat,
known suffering, known struggle, known loss and have found their way out of
the depths. These persons have an appreciation, a sensitivity and an
understanding of life that fills them with compassion, gentleness and a
deep loving concern. Beautiful people do not just happen." - *Elisabeth
Kubler-Ross*




[no subject]

2017-10-23 Thread vbhang...@gmail.com
-- Consistency level: LQ (LOCAL_QUORUM)
-- It started happening approximately a couple of months back.  The issue is very 
inconsistent and can't be reproduced.  It used to happen only rarely over the last 
few years.
-- There are very few GC pauses, but they don't coincide with the issue. 
-- 99% latency is less than 80ms and 75% is less than 5ms.

- Vedant
On 2017-10-22 21:29, Jeff Jirsa  wrote: 
> What consistency level do you use on writes?
> Did this just start or has it always happened ?
> Are you seeing GC pauses at all?
> 
> What’s your 99% write latency? 
> 
> -- 
> Jeff Jirsa
> 
> 
> > On Oct 22, 2017, at 9:21 PM, "vbhang...@gmail.com" 
> > wrote:
> > 
> > This is for Cassandra 2.1.13. At times there are replication delays across 
> > multiple regions. Data is available (getting queried from command line) in 
> > 1 region but not seen in other region(s).  This is not consistent. It is 
> > cluster spanning multiple data centers with total > 30 nodes. Keyspace is 
> > configured to get replicated in all the data centers.
> > 
> > Hints are getting piled up in the source region. This happens especially 
> > for large data payloads (approx. 1 KB to a few MB blobs).  Network-level 
> > congestion or saturation does not seem to be an issue.  There is no 
> > memory/cpu pressure on individual nodes.
> > 
> > I am sharing Cassandra.yaml below, any pointers on what can be tuned are 
> > highly appreciated. Let me know if you need any other info.
> > 
> > We tried bumping up hinted_handoff_throttle_in_kb to 30720 and 
> > max_hints_delivery_threads to 12 on one of the nodes to see if it speeds up 
> > hints delivery; there was some improvement, but not a whole lot.
> > 
> > Thanks
> > 
> > =
> > # Cassandra storage config YAML
> > 
> > # NOTE:
> > #   See http://wiki.apache.org/cassandra/StorageConfiguration for
> > #   full explanations of configuration directives
> > # /NOTE
> > 
> > # The name of the cluster. This is mainly used to prevent machines in
> > # one logical cluster from joining another.
> > cluster_name: "central"
> > 
> > # This defines the number of tokens randomly assigned to this node on the 
> > ring
> > # The more tokens, relative to other nodes, the larger the proportion of 
> > data
> > # that this node will store. You probably want all nodes to have the same 
> > number
> > # of tokens assuming they have equal hardware capability.
> > #
> > # If you leave this unspecified, Cassandra will use the default of 1 token 
> > for legacy compatibility,
> > # and will use the initial_token as described below.
> > #
> > # Specifying initial_token will override this setting on the node's initial 
> > start,
> > # on subsequent starts, this setting will apply even if initial token is 
> > set.
> > #
> > # If you already have a cluster with 1 token per node, and wish to migrate 
> > to
> > # multiple tokens per node, see http://wiki.apache.org/cassandra/Operations
> > #num_tokens: 256
> > 
> > # initial_token allows you to specify tokens manually.  While you can use # 
> > it with
> > # vnodes (num_tokens > 1, above) -- in which case you should provide a
> > # comma-separated list -- it's primarily used when adding nodes # to legacy 
> > clusters
> > # that do not have vnodes enabled.
> > # initial_token:
> > 
> > initial_token: 
> > 
> > # See http://wiki.apache.org/cassandra/HintedHandoff
> > # May either be "true" or "false" to enable globally, or contain a list
> > # of data centers to enable per-datacenter.
> > # hinted_handoff_enabled: DC1,DC2
> > hinted_handoff_enabled: true
> > # this defines the maximum amount of time a dead host will have hints
> > # generated.  After it has been dead this long, new hints for it will not be
> > # created until it has been seen alive and gone down again.
> > max_hint_window_in_ms: 10800000 # 3 hours
> > # Maximum throttle in KBs per second, per delivery thread.  This will be
> > # reduced proportionally to the number of nodes in the cluster.  (If there
> > # are two nodes in the cluster, each delivery thread will use the maximum
> > # rate; if there are three, each will throttle to half of the maximum,
> > # since we expect two nodes to be delivering hints simultaneously.)
> > hinted_handoff_throttle_in_kb: 1024
> > # Number of threads with which to deliver hints;
> > # Consider increasing this number when you have multi-dc deployments, since
> > # cross-dc handoff tends to be slower
> > max_hints_delivery_threads: 6
> > 
> > # Maximum throttle in KBs per second, total. This will be
> > # reduced proportionally to the number of nodes in the cluster.
> > batchlog_replay_throttle_in_kb: 1024
> > 
> > # Authentication backend, implementing IAuthenticator; used to identify 
> > users
> > # Out of the box, Cassandra provides 
> > org.apache.cassandra.auth.{AllowAllAuthenticator,
> > # PasswordAuthenticator}.
> > #
> > # - AllowAllAuthenticator performs no checks - set it to disable 
> >