Re: Golang + Cassandra + Text Search

2017-10-24 Thread Justin Cameron
https://github.com/Stratio/cassandra-lucene-index is another option - it
plugs a full Lucene engine into Cassandra's custom secondary index
interface.

If you only need text prefix/suffix/substring matching or basic
tokenization, there is SASI.
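
For the Go side of the question, here is a minimal sketch using the gocql driver
together with a SASI index in CONTAINS mode (needs Cassandra 3.4 or later; the
keyspace, table and column names below are just placeholders, not anything from
this thread):

package main

import (
    "fmt"
    "log"

    "github.com/gocql/gocql"
)

func main() {
    // Assumes a reachable node and an existing table ks.articles(id uuid PRIMARY KEY, body text).
    cluster := gocql.NewCluster("127.0.0.1")
    cluster.Keyspace = "ks"
    session, err := cluster.CreateSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    // A SASI index in CONTAINS mode enables LIKE '%term%' substring matching.
    if err := session.Query(`CREATE CUSTOM INDEX IF NOT EXISTS articles_body_idx ON articles (body)
        USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = {'mode': 'CONTAINS'}`).Exec(); err != nil {
        log.Fatal(err)
    }

    // Substring search served by the SASI index.
    iter := session.Query(`SELECT id, body FROM articles WHERE body LIKE ?`, "%cassandra%").Iter()
    var id gocql.UUID
    var body string
    for iter.Scan(&id, &body) {
        fmt.Println(id, body)
    }
    if err := iter.Close(); err != nil {
        log.Fatal(err)
    }
}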

On Wed, 25 Oct 2017 at 03:50 Who Dadddy  wrote:

> Ridley - have a look at Elassandra
> https://github.com/strapdata/elassandra
>
>
> On 24 Oct 2017, at 06:50, Ridley Submission <
> ridley.submission2...@gmail.com> wrote:
>
> Hi,
>
> Quick question, I am wondering if anyone here who works with Go has
> specific recommendations for a simple framework to add text search on top
> of cassandra?
>
> (Apologies if this is off topic—I am not quite sure what forum in the
> cassandra community would be best for this type of question)
>
> Thanks,
> Riley
>
>
> --


*Justin Cameron*
Senior Software Engineer





This email has been sent on behalf of Instaclustr Pty. Limited (Australia)
and Instaclustr Inc (USA).

This email and any attachments may contain confidential and legally
privileged information.  If you are not the intended recipient, do not copy
or disclose its content, but please reply to this email immediately and
highlight the error to the sender and then immediately delete the message.


Re: [EXTERNAL] Lot of hints piling up

2017-10-24 Thread Jai Bheemsen Rao Dhanwada
No OOM or HEAP errors

On Tue, Oct 24, 2017 at 2:51 PM, Mohapatra, Kishore <
kishore.mohapa...@nuance.com> wrote:

> Check how many sstables are there for the table you are having issues with.
>
> You might be under heap pressure. Check your system.log for any OOM
> or heap errors.
>
>
>
> Thanks
>
>
>
> *Kishore Mohapatra*
>
> Principal Operations DBA
>
> Seattle, WA
>
> Email : kishore.mohapa...@nuance.com
>
>
>
>
>
> *From:* Jai Bheemsen Rao Dhanwada [mailto:jaibheem...@gmail.com]
> *Sent:* Monday, October 23, 2017 11:54 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: [EXTERNAL] Lot of hints piling up
>
>
>
> I do not see any errors in the Cassandra or OS logs, and compactions are
> happening at regular intervals and look good.
>
>
>
>
>
> The issue here is that this is causing replication lag across the datacenters.
>
>
>
> On Mon, Oct 23, 2017 at 10:23 AM, Mohapatra, Kishore <
> kishore.mohapa...@nuance.com> wrote:
>
> Do you see any error in the cassandra log ?
>
> Check compactionstats ?
>
> Also check the OS level log messages to see if you are getting hardware
> level error messages.
>
>
>
> Thanks
>
>
>
> *Kishore Mohapatra*
>
> Principal Operations DBA
>
> Seattle, WA
>
> Ph : 425-691-6417 (cell)
>
> Email : kishore.mohapa...@nuance.com
>
>
>
>
>
> *From:* Jai Bheemsen Rao Dhanwada [mailto:jaibheem...@gmail.com]
> *Sent:* Friday, October 20, 2017 9:44 AM
> *To:* user@cassandra.apache.org
> *Subject:* [EXTERNAL] Lot of hints piling up
>
>
>
> Hello,
>
>
>
> We have a Cassandra cluster in 3 regions with version 2.1.13, and all of a
> sudden we started seeing a lot of hints accumulating on the nodes. We are
> pretty sure there is no issue with the network between the regions and all
> the nodes are up and running all the time.
>
>
>
> Is there any other reason for the hints to accumulate other than the
> network, e.g. wide rows or bigger objects?
>
>
>
> Any pointers here could be very helpful.
>
>
>
> BTW, the hints do get processed after some time.
>
>
>


RE: [EXTERNAL] Lot of hints piling up

2017-10-24 Thread Mohapatra, Kishore
Check how many sstables are there for the table you are having issues with.
You might be under heap pressure. Check your system.log for any OOM
or heap errors.

Thanks

Kishore Mohapatra
Principal Operations DBA
Seattle, WA
Email : kishore.mohapa...@nuance.com


From: Jai Bheemsen Rao Dhanwada [mailto:jaibheem...@gmail.com]
Sent: Monday, October 23, 2017 11:54 AM
To: user@cassandra.apache.org
Subject: Re: [EXTERNAL] Lot of hints piling up

I do not see any errors in the Cassandra or OS logs, and compactions are
happening at regular intervals and look good.


The issue here is that this is causing replication lag across the datacenters.

On Mon, Oct 23, 2017 at 10:23 AM, Mohapatra, Kishore 
> wrote:
Do you see any error in the cassandra log ?
Check compactionstats ?
Also check the OS level log messages to see if you are getting hardware level 
error messages.

Thanks

Kishore Mohapatra
Principal Operations DBA
Seattle, WA
Ph : 425-691-6417 (cell)
Email : kishore.mohapa...@nuance.com


From: Jai Bheemsen Rao Dhanwada 
[mailto:jaibheem...@gmail.com]
Sent: Friday, October 20, 2017 9:44 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Lot of hints piling up

Hello,

We have a Cassandra cluster in 3 regions with version 2.1.13, and all of a sudden
we started seeing a lot of hints accumulating on the nodes. We are pretty sure
there is no issue with the network between the regions and all the nodes are up
and running all the time.

Is there any other reason for the hints to accumulate other than the network,
e.g. wide rows or bigger objects?

Any pointers here could be very helpful.

BTW, the hints do get processed after some time.
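
One way to narrow this down (a rough sketch, assuming 2.1.x where hints live in
the system.hints table, and using the gocql driver; scanning system.hints can be
slow if the backlog is large) is to count stored hints per target host ID and then
map those IDs to endpoints with nodetool status:

package main

import (
    "fmt"
    "log"

    "github.com/gocql/gocql"
)

func main() {
    // Placeholder contact point: a coordinator in the DC that is accumulating hints.
    cluster := gocql.NewCluster("127.0.0.1")
    cluster.Keyspace = "system"
    session, err := cluster.CreateSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    // In 2.1.x each stored hint row carries the host ID of the replica it is waiting for.
    counts := make(map[gocql.UUID]int)
    iter := session.Query(`SELECT target_id FROM hints`).Iter()
    var target gocql.UUID
    for iter.Scan(&target) {
        counts[target]++
    }
    if err := iter.Close(); err != nil {
        log.Fatal(err)
    }
    for host, n := range counts {
        fmt.Printf("host %s has %d hints queued\n", host, n)
    }
}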



RE: [EXTERNAL]

2017-10-24 Thread Mohapatra, Kishore
Hi Vedant,
  I was actually referring to a command line select query
with consistency level = ALL. This will force a read repair in the background.
But as I can see, you have tried with consistency level = ONE and it is
still timing out. So what error do you see in the system.log?
Streaming error?
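
For reference, this forced read repair can be run from cqlsh (CONSISTENCY ALL
followed by the SELECT) or from application code. A rough gocql sketch, with
placeholder keyspace/table/key names:

package main

import (
    "log"

    "github.com/gocql/gocql"
)

func main() {
    cluster := gocql.NewCluster("127.0.0.1") // placeholder contact point
    cluster.Keyspace = "ks"                  // placeholder keyspace
    session, err := cluster.CreateSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    // Reading at ALL requires every replica to answer, so a successful read
    // also triggers read repair on any replica that was missing the data.
    var value string
    err = session.Query(`SELECT some_col FROM some_table WHERE id = ?`, "some-key").
        Consistency(gocql.All).
        Scan(&value)
    if err != nil {
        log.Printf("read at ALL failed (replica down or timeout?): %v", err)
    }
}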

Can you also check how many sstables there are for that table? It seems like your
compaction may not be working.
Is your repair job running fine ?

Thanks

Kishore Mohapatra
Principal Operations DBA
Seattle, WA
Ph : 425-691-6417 (cell)
Email : kishore.mohapa...@nuance.com


-Original Message-
From: vbhang...@gmail.com [mailto:vbhang...@gmail.com] 
Sent: Monday, October 23, 2017 6:59 PM
To: user@cassandra.apache.org
Subject: [EXTERNAL] 

It is RF=3, with 12 nodes in 3 regions and 6 in the other 2, so 48 nodes total. Are
you suggesting a forced read repair by reading at consistency ONE or by bumping
up read_repair_chance?

We have tried from the command line with ONE but that times out.
On 2017-10-23 10:18, "Mohapatra, Kishore"  wrote: 
> What is your RF for the keyspace and how many nodes are there in each DC ?
> 
> Did you force a Read Repair to see, if you are getting the data or getting an 
> error ?
> 
> Thanks
> 
> Kishore Mohapatra
> Principal Operations DBA
> Seattle, WA
> Email : kishore.mohapa...@nuance.com
> 
> 
> -Original Message-
> From: vbhang...@gmail.com [mailto:vbhang...@gmail.com]
> Sent: Sunday, October 22, 2017 11:31 PM
> To: user@cassandra.apache.org
> Subject: [EXTERNAL]
> 
> -- Consistency level  LQ
> -- It started happening approximately a couple of months back. The issue is very
> inconsistent and can't be reproduced. It used to happen only rarely earlier (over
> the last few years).
> -- There are very few GC pauses but  they don't coincide with the issue. 
> -- 99% latency is less than 80ms and 75% is less than 5ms.
> 
> - Vedant
> On 2017-10-22 21:29, Jeff Jirsa  wrote: 
> > What consistency level do you use on writes?
> > Did this just start or has it always happened ?
> > Are you seeing GC pauses at all?
> > 
> > What's your 99% write latency?
> > 
> > --
> > Jeff Jirsa
> > 
> > 
> > > On Oct 22, 2017, at 9:21 PM, "vbhang...@gmail.com" 
> > > wrote:
> > > 
> > > This is for Cassandra 2.1.13. At times there are replication delays
> > > across multiple regions. Data is available (getting queried from the command
> > > line) in 1 region but not seen in other region(s). This is not
> > > consistent. It is a cluster spanning multiple data centers with > 30
> > > nodes in total. The keyspace is configured to be replicated in all the data centers.
> > > 
> > > Hints are getting piled up in the source region. This happens especially
> > > for large data payloads (approx. 1 KB to a few MB blobs). Network level
> > > congestion or saturation does not seem to be an issue. There is no
> > > memory/CPU pressure on individual nodes.
> > > 
> > > I am sharing Cassandra.yaml below, any pointers on what can be tuned are 
> > > highly appreciated. Let me know if you need any other info.
> > > 
> > > We tried bumping up hinted_handoff_throttle_in_kb: 30720 and
> > > max_hints_delivery_threads: 12 on one of the nodes to see if it speeds up
> > > hints delivery (handoff tends to be slower otherwise); there was some
> > > improvement but not a whole lot.
> > > 
> > > Thanks
> > > 
> > > =
> > > # Cassandra storage config YAML
> > > 
> > > # NOTE:
> > > #   See http://wiki.apache.org/cassandra/StorageConfiguration for
> > > #   full explanations of configuration directives
> > > # /NOTE
> > > 
> > > # The name of the cluster. This is mainly used to prevent machines in
> > > # one logical cluster from joining another.
> > > cluster_name: "central"
> > >
> > > # This defines the number of tokens randomly assigned to this node on the ring
> > > # The more tokens, relative to other nodes, the larger the proportion of data
> > > # that this node will store. You probably want all nodes to have the same number
> > > # of tokens assuming they have equal hardware capability.
> > > #
> > > # If you leave this unspecified, Cassandra will use the default of 1 token for legacy compatibility,
> > > # and will use the initial_token as described below.
> > > #
> > > # Specifying initial_token will override this setting on the node's initial start,
> > > # on subsequent starts, this setting will apply even if initial token is set.
> > > #
> > > # If you already have a cluster with 1 token per node, and wish to migrate to
> > > # multiple tokens per node, see http://wiki.apache.org/

Index summary redistribution seems to block all compactions

2017-10-24 Thread Sotirios Delimanolis
On a Cassandra 2.2.11 cluster, I noticed estimated compactions accumulating on 
one node. nodetool compactionstats showed the following:
                compaction type    keyspace         table    completed       total    unit   progress
                     Compaction         ks1    some_table    204.68 MB   204.98 MB   bytes     99.86%
   Index summary redistribution        null          null    457.72 KB      950 MB   bytes      0.05%
                     Compaction         ks1    some_table    461.61 MB   461.95 MB   bytes     99.93%
           Tombstone Compaction         ks1    some_table    618.34 MB   618.47 MB   bytes     99.98%
                     Compaction         ks1    some_table    378.37 MB      380 MB   bytes     99.57%
           Tombstone Compaction         ks1    some_table    326.51 MB   327.63 MB   bytes     99.66%
           Tombstone Compaction         ks2   other_table     29.38 MB    29.38 MB   bytes    100.00%
           Tombstone Compaction         ks1    some_table     503.4 MB   507.28 MB   bytes     99.24%
                     Compaction         ks1    some_table    353.44 MB   353.47 MB   bytes     99.99%

They had been like this for a while (all different tables). A thread dump
showed all 8 CompactionExecutor threads looking like

"CompactionExecutor:6" #84 daemon prio=1 os_prio=4 tid=0x7f5771172000 nid=0x7646 waiting on condition [0x7f578847b000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x0005fe5656e8> (a com.google.common.util.concurrent.AbstractFuture$Sync)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:285)
        at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
        at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:390)
        at org.apache.cassandra.db.SystemKeyspace.forceBlockingFlush(SystemKeyspace.java:593)
        at org.apache.cassandra.db.SystemKeyspace.finishCompaction(SystemKeyspace.java:368)
        at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:205)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
        at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:80)
        at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:257)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
A MemtablePostFlush thread was awaiting some flush count down latch:

"MemtablePostFlush:1" #30 daemon prio=5 os_prio=0 tid=0x7f57705dac00 nid=0x75bf waiting on condition [0x7f578a8fb000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x000573da6c90> (a java.util.concurrent.CountDownLatch$Sync)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
        at org.apache.cassandra.db.ColumnFamilyStore$PostFlush.call(ColumnFamilyStore.java:1073)
        at org.apache.cassandra.db.ColumnFamilyStore$PostFlush.call(ColumnFamilyStore.java:1026)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

The 4 MemtableFlushWriter threads were all RUNNABLE, sorting something in 
IntervalTree. Finally, the IndexSummaryManager thread was 

Re: Best approach to prepare to shutdown a cassandra node

2017-10-24 Thread Javier Canillas
So, just to clarify: a good approach to shut down an instance of Cassandra
should be:

# Capture the Cassandra PID up front, in case stopdaemon fails
cassandra_pid=`ps -ef|grep "java.*apache-cassandra"|grep -v "grep"|awk '{print $2}'`

# Drain all information from commitlog into sstables and stop the daemon
bin/nodetool stopdaemon
if [ "$?" -ne 0 ]; then
    echo "Cassandra stopdaemon failed? Please check logs"
    if [ ! -z "$cassandra_pid" ] && [ "$cassandra_pid" -ne "1" ]; then
        echo "Cassandra is still running, killing it gracefully"
        kill $cassandra_pid
        echo -n "+ Checking it is down. "
        counter=10
        # Wait up to 10 seconds for the process to exit
        while [ "$counter" -ne 0 ] && kill -0 $cassandra_pid > /dev/null 2>&1
        do
            echo -n ". "
            ((counter--))
            sleep 1s
        done
        echo ""
        if ! kill -0 $cassandra_pid > /dev/null 2>&1; then
            echo "+ It's down."
        else
            echo "- Killing Cassandra forcefully."
            kill -9 $cassandra_pid
        fi
    else
        echo "Careful: there was a problem finding the Cassandra PID; it might still be running"
        exit 1
    fi
else
    echo "Cassandra stopped"
fi

2017-10-20 9:04 GMT-03:00 Lutaya Shafiq Holmes :

> Looking at the code in trunk, the stopdaemon command invokes the
> CassandraDaemon.stop() function which does a graceful shutdown by
> stopping jmxServer and drains the node by the shutdown hook.
>
>
> On 10/20/17, Simon Fontana Oscarsson
>  wrote:
> > Yes, drain will always be run when Cassandra exits normally.
> >
> > On 2017-10-20 00:57, Varun Gupta wrote:
> >> Does, nodetool stopdaemon, implicitly drain too? or we should invoke
> >> drain and then stopdaemon?
> >>
> >> On Mon, Oct 16, 2017 at 4:54 AM, Simon Fontana Oscarsson
> >>  >> > wrote:
> >>
> >> Looking at the code in trunk, the stopdaemon command invokes the
> >> CassandraDaemon.stop() function which does a graceful shutdown by
> >> stopping jmxServer and drains the node by the shutdown hook.
> >>
> >> /Simon
> >>
> >>
> >> On 2017-10-13 20:42, Javier Canillas wrote:
> >>> As far as I know, the nodetool stopdaemon is doing a "kill -9".
> >>>
> >>> Or did it change?
> >>>
> >>> 2017-10-12 23:49 GMT-03:00 Anshu Vajpayee
> >>> >:
> >>>
> >>> Why are you killing when we have nodetool stopdaemon ?
> >>>
> >>> On Fri, Oct 13, 2017 at 1:49 AM, Javier Canillas
> >>>  >>> > wrote:
> >>>
> >>> That's what I thought.
> >>>
> >>> Thanks!
> >>>
> >>> 2017-10-12 14:26 GMT-03:00 Hannu Kröger
> >>> >:
> >>>
> >>> Hi,
> >>>
> >>> Drain should be enough.  It stops accepting writes
> >>> and after that cassandra can be safely shut down.
> >>>
> >>> Hannu
> >>>
> >>> On 12 October 2017 at 20:24:41, Javier Canillas
> >>> (javier.canil...@gmail.com
> >>> ) wrote:
> >>>
>  Hello everyone,
> 
>  I have some time working with Cassandra, but every
>  time I need to shutdown a node (for any reason like
>  upgrading version or moving instance to another
>  host) I see several errors on the client
>  applications (yes, I'm using the official java
> driver).
> 
>  By the way, I'm starting C* as a stand-alone process,
>  and C* version is 3.11.0.
> 
>  The way I have implemented the shutdown process is
>  something like the following:
> 
>  # Drain all information from commitlog into sstables
>  bin/nodetool drain
>
>  cassandra_pid=`ps -ef|grep "java.*apache-cassandra"|grep -v "grep"|awk '{print $2}'`
>  if [ ! -z "$cassandra_pid" ] && [ "$cassandra_pid" -ne "1" ]; then
>    echo "Asking Cassandra to shutdown (nodetool drain doesn't stop cassandra)"
>    kill $cassandra_pid
> 

Re: Cassandra 3.10 Bootstrap- Error

2017-10-24 Thread Anumod Mullachery
Hi All,

thanks for all your inputs, appreciate it.

The issue is resolved; the nodes are able to join.

Please find the solution I applied below.


Enable 256-bit encryption:

  a : Copy the jce_policy-8.zip from .. to all nodes.

  b : Unzip jce_policy-8.zip.

  c : Copy local_policy.jar and US_export_policy.jar from
UnlimitedJCEPolicyJDK8/ to /usr/java/jdk1.8.0_102/jre/lib/security/
(or the directory of the latest Java version being used).

Also make sure the cassandra-topology.properties file details are in sync
with the joining nodes.

Then restart the nodes.


regards

Anumod.



On Tue, Oct 24, 2017 at 9:12 AM, Dipan Shah  wrote:

> Hi Anumod,
>
>
> I faced the same issue with 3.11 and I'll suggest you first go through
> this link to check if the new node is able to communicate back and forth on
> the required port with the seed node.
>
>
> https://support.datastax.com/hc/en-us/articles/209691483-Bootstap-fails-with-Unable-to-gossip-with-any-seeds-yet-new-node-can-connect-to-seed-nodes
>
>
> This will most likely be the issue, but even if that does not solve your
> problem, check the following points:
>
>
> 1) Check free disk space on the seed nodes. There should be sufficient
> free space for data migration to the new node.
>
> 2) Check logs of the seed nodes and see if there are any errors. I found
> some gossip file corruption on one of the seed nodes.
>
> 3) Finally, restart server\cassandra services on the seed nodes and see if
> that helps.
>
>
> Do let me know if this solved your problem.
>
>
>
> Thanks,
>
> Dipan Shah
>
>
> --
> *From:* Anumod Mullachery 
> *Sent:* Tuesday, October 24, 2017 2:12 AM
> *To:* user@cassandra.apache.org
> *Subject:* Cassandra 3.10 Bootstrap- Error
>
> Hi,
>
> We are using Cassandra 3.10 with NetworkTopologyStrategy and 2 DCs having
> only 1 node each.
>
> We are trying to add new nodes (auto_bootstrap: true in the yaml), but are
> getting the below error.
>
> In the seed nodes list, we have provided both of the existing nodes from both
> DCs (2 nodes in total), and also tried a different option by keeping only 1
> node, but no luck.
>
>
> 2017-10-23 20:06:31,739 [MessagingService-Outgoing-/96.115.209.92-Gossip]
> WARN   SSLFactory.java:221 - Filtering out [TLS_RSA_WITH_AES_256_CBC_SHA]
> as it isn't supported by the socket
> 2017-10-23 20:06:31,739 [MessagingService-Outgoing-/96.115.209.92-Gossip]
> ERROR  OutboundTcpConnection.java:487 - SSL handshake error for outbound
> connection to 15454e08[SSL_NULL_WITH_NULL_NULL: Socket[addr=/96.115.209.92
> ,port=10145,localport=60859]]
> javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is
> disabled or cipher suites are inappropriate)
>
>
> 2017-10-23 20:06:32,655 [main] ERROR  CassandraDaemon.java:752 - Exception
> encountered during startup
> java.lang.RuntimeException: Unable to gossip with any seeds
>
> 2017-10-23 20:06:32,666 [StorageServiceShutdownHook] INFO
> HintsService.java:221 - Paused hints dispatch
> 2017-10-23 20:06:32,667 [StorageServiceShutdownHook] WARN
> Gossiper.java:1514 - No local state, state is in silent shutdown, or node
> hasn't joined, not announcing shutdown
> 2017-10-23 20:06:32,667 [StorageServiceShutdownHook] INFO
> MessagingService.java:964 - Waiting for messaging service to quiesce
> 2017-10-23 20:06:32,667 [ACCEPT-/96.115.208.150] INFO
> MessagingService.java:1314 - MessagingService has terminated the accept()
> thread
> 2017-10-23 20:06:33,134 [StorageServiceShutdownHook] INFO
> HintsService.java:221 - Paused hints dispatch
>
> If someone is able to shed some light on this issue, it would be a great help.
>
> thanks in advance,
>
> - regards
>
> Anumod.
>


Re: Golang + Cassandra + Text Search

2017-10-24 Thread Who Dadddy
Ridley - have a look at Elassandra
https://github.com/strapdata/elassandra 



> On 24 Oct 2017, at 06:50, Ridley Submission  
> wrote:
> 
> Hi,
> 
> Quick question, I am wondering if anyone here who works with Go has specific 
> recommendations for a simple framework to add text search on top of 
> cassandra? 
> 
> (Apologies if this is off topic—I am not quite sure what forum in the 
> cassandra community would be best for this type of question)
> 
> Thanks,
> Riley



Re: Golang + Cassandra + Text Search

2017-10-24 Thread Jon Haddad
When someone talks about full text search, I usually assume there’s more 
required than keyword search, i.e. simple tokenization and a little stemming.  

* Term vectors, commonly used for a “more like this” feature
* Ranking of search results
* Facets
* More complex tokenization like trigrams

So anyway, I don’t know if the OP had those requirements, but it’s important to 
keep in mind. 
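
As a tiny illustration of the last bullet, trigram tokenization just slides a
3-character window over the text, and the search engine indexes those fragments
so substring and fuzzy matches become index lookups. A quick Go sketch, not tied
to any particular search library:

package main

import "fmt"

// trigrams returns every 3-character substring of s.
// Real analyzers typically lowercase and strip punctuation first.
func trigrams(s string) []string {
    runes := []rune(s)
    var out []string
    for i := 0; i+3 <= len(runes); i++ {
        out = append(out, string(runes[i:i+3]))
    }
    return out
}

func main() {
    fmt.Println(trigrams("cassandra"))
    // Output: [cas ass ssa san and ndr dra]
}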


> On Oct 24, 2017, at 1:33 AM, DuyHai Doan  wrote:
> 
> There is already a full text search index in Cassandra called SASI
> 
> On Tue, Oct 24, 2017 at 6:50 AM, Ridley Submission 
> > 
> wrote:
> Hi,
> 
> Quick question, I am wondering if anyone here who works with Go has specific 
> recommendations for a simple framework to add text search on top of 
> cassandra? 
> 
> (Apologies if this is off topic—I am not quite sure what forum in the 
> cassandra community would be best for this type of question)
> 
> Thanks,
> Riley
> 



RE: Request for Advice about Cassandra on Bitnami AWS

2017-10-24 Thread Lutaya Shafiq Holmes
Greetings

I am new to Cassandra, after having gone through the book Cassandra:
The Definitive Guide, by Eben Hewitt.

I installed Cassandra on AWS EC2, using a Bitnami AMI on Amazon Web Services.

I successfully connected to the EC2 Ubuntu instance, created keyspaces,
and put in some tables and data.

Now, I would like to know what I could do with the Cassandra
installation on Ubuntu Bitnami on the AWS cloud.

How can I integrate it with an application,

for example, a Java application (I am new to Java though)?

Thanks

Regards,

Shafiq Lutaaya,
Web Developer
www.ronzag.com

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Cassandra 3.10 Bootstrap- Error

2017-10-24 Thread Dipan Shah
Hi Anumod,


I faced the same issue with 3.11 and I'll suggest you first go through this 
link to check if the new node is able to communicate back and forth on the 
required port with the seed node.


https://support.datastax.com/hc/en-us/articles/209691483-Bootstap-fails-with-Unable-to-gossip-with-any-seeds-yet-new-node-can-connect-to-seed-nodes



This will most likely be the issue, but even if that does not solve your problem,
check the following points:


1) Check free disk space on the seed nodes. There should be sufficient free 
space for data migration to the new node.

2) Check logs of the seed nodes and see if there are any errors. I found some 
gossip file corruption on one of the seed nodes.

3) Finally, restart server\cassandra services on the seed nodes and see if that 
helps.


Do let me know if this solved your problem.



Thanks,

Dipan Shah



From: Anumod Mullachery 
Sent: Tuesday, October 24, 2017 2:12 AM
To: user@cassandra.apache.org
Subject: Cassandra 3.10 Bootstrap- Error

Hi,

We are using Cassandra 3.10 with NetworkTopologyStrategy and 2 DCs having
only 1 node each.

We are trying to add new nodes (auto_bootstrap: true in the yaml), but are
getting the below error.

In the seed nodes list, we have provided both of the existing nodes from both
DCs (2 nodes in total), and also tried a different option by keeping only 1 node,
but no luck.


2017-10-23 20:06:31,739 [MessagingService-Outgoing-/96.115.209.92-Gossip] WARN  
 SSLFactory.java:221 - Filtering out [TLS_RSA_WITH_AES_256_CBC_SHA] as it isn't 
supported by the socket
2017-10-23 20:06:31,739 [MessagingService-Outgoing-/96.115.209.92-Gossip] ERROR 
 OutboundTcpConnection.java:487 - SSL handshake error for outbound connection 
to 15454e08[SSL_NULL_WITH_NULL_NULL: 
Socket[addr=/96.115.209.92,port=10145,localport=60859]]
javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is 
disabled or cipher suites are inappropriate)

2017-10-23 20:06:32,655 [main] ERROR  CassandraDaemon.java:752 - Exception 
encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds

2017-10-23 20:06:32,666 [StorageServiceShutdownHook] INFO   
HintsService.java:221 - Paused hints dispatch
2017-10-23 20:06:32,667 [StorageServiceShutdownHook] WARN   Gossiper.java:1514 
- No local state, state is in silent shutdown, or node hasn't joined, not 
announcing shutdown
2017-10-23 20:06:32,667 [StorageServiceShutdownHook] INFO   
MessagingService.java:964 - Waiting for messaging service to quiesce
2017-10-23 20:06:32,667 [ACCEPT-/96.115.208.150] INFO   
MessagingService.java:1314 - MessagingService has terminated the accept() thread
2017-10-23 20:06:33,134 [StorageServiceShutdownHook] INFO   
HintsService.java:221 - Paused hints dispatch

If someone is able to shed some light on this issue, it would be a great help.

thanks in advance,

- regards

Anumod.


Re: Adding a New Node

2017-10-24 Thread shalom sagges
Thanks Kurt!

That sorted things in my head. Much appreciated!



On Tue, Oct 24, 2017 at 12:29 PM, kurt greaves  wrote:

> Your node shouldn't show up in DC1 in nodetool status from the other
> nodes, this implies a configuration problem. Sounds like you haven't added
> the new node to all the existing nodes' cassandra-topology.properties file.
> You don't need to do a rolling restart with PropertyFileSnitch, it should
> reload the cassandra-topology.properties file automatically every 5 seconds.
>
> With GPFS each node only needs to know about its own topology settings in
> cassandra-rackdc.properties, so the problem you point out in 2 goes away,
> as when adding a node you only need to specify its configuration and that
> will be propagated to the rest of the cluster through gossip.
>
> On 24 October 2017 at 07:13, shalom sagges  wrote:
>
>> Hi Everyone,
>>
>> I have 2 DCs (v2.0.14) with the following topology.properties:
>>
>> DC1:
>> xxx11=DC1:RAC1
>> xxx12=DC1:RAC1
>> xxx13=DC1:RAC1
>> xxx14=DC1:RAC1
>> xxx15=DC1:RAC1
>>
>>
>> DC2:
>> yyy11=DC2:RAC1
>> yyy12=DC2:RAC1
>> yyy13=DC2:RAC1
>> yyy14=DC2:RAC1
>> yyy15=DC2:RAC1
>>
>>
>> # default for unknown nodes
>> default=DC1:RAC1
>>
>> Now let's say that I want to add a new node yyy16 to DC2, and I've added
>> yyy16 to the topology properties file only on that specific node.
>>
>> What I saw is that during bootstrap, the new node is receiving data only
>> from DC2 nodes (which is what I want), but nodetool status on other nodes
>> shows that it was joining to DC1 (which is the default DC for unknown
>> nodes).
>>
>> So I have a few questions on this matter:
>>
>> 1) What are the implications of such a bootstrap, where the joining node
>> actually gets data from nodes in the right DC, but all nodes see it in the
>> default DC when running nodetool status?
>>
>> 2) I know that I must change the topology.properties file on all nodes to
>> be the same. If I do that, do I need to perform a rolling restart on all of
>> the cluster before each bootstrap (which is a real pain for large clusters)?
>>
>> 3) Regarding the Snitch, the docs say that the recommended snitch in
>> Production is the GossipingPropertyFileSnitch with
>> cassandra-rackdc.properties file.
>> What's the difference between the GossipingPropertyFileSnitch and the
>> PropertyFileSnitch?
>> I currently use PropertyFileSnitch and cassandra-topology.properties.
>>
>>
>> Thanks!
>>
>>
>>
>>
>>
>>
>


can repair and bootstrap run simultaneously

2017-10-24 Thread Peng Xiao
Hi there,


Can we add a new node (bootstrap) and run repair on another DC in the cluster 
or even run repair in the same DC?


Thanks,
Peng Xiao

Re: cassandra non-super user login fails but super user works

2017-10-24 Thread Sam Tunnicliffe
Which version of Cassandra are you running?

My guess is that you're on a version >= 2.2 and that you've created the
non-superuser since upgrading, but haven't yet removed the legacy tables
from the system_auth keyspace. If that's the case, then the new user will
be present in the new tables, but authentication at login time is still
using the old ones.

The schema of the system_auth keyspace was changed in 2.2 with the
introduction of role based access control and requires a little operator
involvement to switch over to using the new tables, see the section on
upgrading to 2.2 in NEWS.txt for the full details.
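
A quick way to check both sides of that migration (a sketch using the gocql
driver, assuming a 2.2+ cluster where the legacy tables have not yet been
dropped; the contact point and credentials are placeholders):

package main

import (
    "fmt"
    "log"

    "github.com/gocql/gocql"
)

func main() {
    cluster := gocql.NewCluster("127.0.0.1") // placeholder contact point
    cluster.Keyspace = "system_auth"
    // Connect as a superuser that can read system_auth (placeholder credentials).
    cluster.Authenticator = gocql.PasswordAuthenticator{Username: "cassandra", Password: "cassandra"}
    session, err := cluster.CreateSession()
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()

    // New (2.2+) role table: freshly created users/roles end up here.
    iter := session.Query(`SELECT role, can_login, is_superuser FROM roles`).Iter()
    var role string
    var canLogin, isSuper bool
    for iter.Scan(&role, &canLogin, &isSuper) {
        fmt.Printf("roles: %s can_login=%t is_superuser=%t\n", role, canLogin, isSuper)
    }
    if err := iter.Close(); err != nil {
        log.Fatal(err)
    }

    // Legacy pre-2.2 table: if it still exists, authentication may still be reading from it.
    iter = session.Query(`SELECT name, super FROM users`).Iter()
    var name string
    var super bool
    for iter.Scan(&name, &super) {
        fmt.Printf("legacy users: %s super=%t\n", name, super)
    }
    if err := iter.Close(); err != nil {
        log.Printf("legacy users table not readable (may already be dropped): %v", err)
    }
}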

Thanks,
Sam


On 23 October 2017 at 16:08, Meg Mara  wrote:

> You should probably verify if the ‘can_login’ field of the non-superuser
> role is set to true. You can query the column family system_auth.roles to
> find out.
>
>
>
> Thanks,
>
> Meg Mara
>
>
>
> *From:* Justin Cameron [mailto:jus...@instaclustr.com]
> *Sent:* Sunday, October 22, 2017 6:21 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: cassandra non-super user login fails but super user works
>
>
>
> Try setting the replication factor of the system_auth keyspace to the
> number of nodes in your cluster.
>
> ALTER KEYSPACE system_auth WITH replication = {'class':
> 'NetworkTopologyStrategy', '<dc_name>': <number_of_nodes_in_dc>};
>
>
>
> On Sun, 22 Oct 2017 at 20:06 Who Dadddy  wrote:
>
> Anyone seen this before? Pretty basic setup, super user can login fine but
> non-super user can’t?
>
> Any pointers appreciated.
>
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
> --
>
> *Justin Cameron*
> Senior Software Engineer
>
>
>
> 
>
>
> This email has been sent on behalf of Instaclustr Pty. Limited (Australia)
> and Instaclustr Inc (USA).
>
> This email and any attachments may contain confidential and legally
> privileged information.  If you are not the intended recipient, do not copy
> or disclose its content, but please reply to this email immediately and
> highlight the error to the sender and then immediately delete the message.
>


Re: Adding a New Node

2017-10-24 Thread kurt greaves
Your node shouldn't show up in DC1 in nodetool status from the other nodes,
this implies a configuration problem. Sounds like you haven't added the new
node to all the existing nodes' cassandra-topology.properties file. You
don't need to do a rolling restart with PropertyFileSnitch, it should
reload the cassandra-topology.properties file automatically every 5 seconds.

With GPFS each node only needs to know about its own topology settings in
cassandra-rackdc.properties, so the problem you point out in 2 goes away,
as when adding a node you only need to specify its configuration and that
will be propagated to the rest of the cluster through gossip.
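
For reference, with GossipingPropertyFileSnitch the per-node configuration is
tiny. A minimal sketch of what cassandra-rackdc.properties might look like on
the new yyy16 node, reusing the DC/rack names from this thread (prefer_local is
optional; see the snitch documentation):

# cassandra-rackdc.properties on yyy16
dc=DC2
rack=RAC1
# prefer_local=true

Every other node keeps its own file with its own dc/rack values, and the
settings propagate to the rest of the cluster through gossip.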

On 24 October 2017 at 07:13, shalom sagges  wrote:

> Hi Everyone,
>
> I have 2 DCs (v2.0.14) with the following topology.properties:
>
> DC1:
> xxx11=DC1:RAC1
> xxx12=DC1:RAC1
> xxx13=DC1:RAC1
> xxx14=DC1:RAC1
> xxx15=DC1:RAC1
>
>
> DC2:
> yyy11=DC2:RAC1
> yyy12=DC2:RAC1
> yyy13=DC2:RAC1
> yyy14=DC2:RAC1
> yyy15=DC2:RAC1
>
>
> # default for unknown nodes
> default=DC1:RAC1
>
> Now let's say that I want to add a new node yyy16 to DC2, and I've added
> yyy16 to the topology properties file only on that specific node.
>
> What I saw is that during bootstrap, the new node is receiving data only
> from DC2 nodes (which is what I want), but nodetool status on other nodes
> shows that it was joining to DC1 (which is the default DC for unknown
> nodes).
>
> So I have a few questions on this matter:
>
> 1) What are the implications of such a bootstrap, where the joining node
> actually gets data from nodes in the right DC, but all nodes see it in the
> default DC when running nodetool status?
>
> 2) I know that I must change the topology.properties file on all nodes to
> be the same. If I do that, do I need to perform a rolling restart on all of
> the cluster before each bootstrap (which is a real pain for large clusters)?
>
> 3) Regarding the Snitch, the docs say that the recommended snitch in
> Production is the GossipingPropertyFileSnitch with
> cassandra-rackdc.properties file.
> What's the difference between the GossipingPropertyFileSnitch and the
> PropertyFileSnitch?
> I currently use PropertyFileSnitch and cassandra-topology.properties.
>
>
> Thanks!
>
>
>
>
>
>


Re: Golang + Cassandra + Text Search

2017-10-24 Thread DuyHai Doan
There is already a full text search index in Cassandra called SASI

On Tue, Oct 24, 2017 at 6:50 AM, Ridley Submission <
ridley.submission2...@gmail.com> wrote:

> Hi,
>
> Quick question, I am wondering if anyone here who works with Go has
> specific recommendations for a simple framework to add text search on top
> of cassandra?
>
> (Apologies if this is off topic—I am not quite sure what forum in the
> cassandra community would be best for this type of question)
>
> Thanks,
> Riley
>


Adding a New Node

2017-10-24 Thread shalom sagges
Hi Everyone,

I have 2 DCs (v2.0.14) with the following topology.properties:

DC1:
xxx11=DC1:RAC1
xxx12=DC1:RAC1
xxx13=DC1:RAC1
xxx14=DC1:RAC1
xxx15=DC1:RAC1


DC2:
yyy11=DC2:RAC1
yyy12=DC2:RAC1
yyy13=DC2:RAC1
yyy14=DC2:RAC1
yyy15=DC2:RAC1


# default for unknown nodes
default=DC1:RAC1

Now let's say that I want to add a new node yyy16 to DC2, and I've added
yyy16 to the topology properties file only on that specific node.

What I saw is that during bootstrap, the new node is receiving data only
from DC2 nodes (which is what I want), but nodetool status on other nodes
shows that it was joining to DC1 (which is the default DC for unknown
nodes).

So I have a few questions on this matter:

1) What are the implications of such a bootstrap, where the joining node
actually gets data from nodes in the right DC, but all nodes see it in the
default DC when running nodetool status?

2) I know that I must change the topology.properties file on all nodes to
be the same. If I do that, do I need to perform a rolling restart on all of
the cluster before each bootstrap (which is a real pain for large clusters)?

3) Regarding the Snitch, the docs say that the recommended snitch in
Production is the GossipingPropertyFileSnitch with
cassandra-rackdc.properties file.
What's the difference between the GossipingPropertyFileSnitch and the
PropertyFileSnitch?
I currently use PropertyFileSnitch and cassandra-topology.properties.


Thanks!