Re: Pagination support on Java Driver Query API

2015-02-10 Thread Ajay
Thanks Alex. But is there any workaround possible? I can't believe that everyone reads and processes all rows at once (without pagination). Thanks Ajay On Feb 10, 2015 11:46 PM, Alex Popescu al...@datastax.com wrote: On Tue, Feb 10, 2015 at 4:59 AM, Ajay ajay.ga...@gmail.com wrote: 1) Java

Re: How to remove obsolete error message in Datastax Opscenter?

2015-02-10 Thread Björn Hachmann
Thank you! I would like to add that OpsCenter is a valuable tool for my work. Thanks for your work! Kind regards Björn -- Björn Hachmann metrigo GmbH NEW ADDRESS: Lagerstraße 36 20357 Hamburg p: +49 40 2093108-88 Managing Directors: Christian Müller, Tobias Schlottke, Philipp Westermeyer Die

Re: Pagination support on Java Driver Query API

2015-02-10 Thread Alex Popescu
On Tue, Feb 10, 2015 at 4:59 AM, Ajay ajay.ga...@gmail.com wrote: 1) The Java driver implicitly supports pagination in the ResultSet (using Iterator), which can be controlled through FetchSize. But it is limited in that we cannot skip ahead or go back to a previous page. The FetchState is not exposed. Cassandra
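
A minimal sketch of the fetch-size driven iteration described above, assuming the DataStax Java driver 2.x; the contact point, keyspace and table names are placeholders. The driver pages through the result transparently as the iterator advances, but only forward:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class FetchSizePaging {
    public static void main(String[] args) {
        // Placeholder contact point and keyspace.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace");

        Statement stmt = new SimpleStatement("SELECT * FROM my_table");
        stmt.setFetchSize(100); // rows fetched per round trip, not a LIMIT

        ResultSet rs = session.execute(stmt);
        for (Row row : rs) {
            // The driver pulls the next chunk of 100 rows on demand as the
            // iterator advances; there is no way to skip or go back a page here.
            System.out.println(row);
        }
        cluster.close();
    }
}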

Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Robert Coli
On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson pgn...@gmail.com wrote: I am getting an out of memory error when I try to start Cassandra on one of my nodes. Cassandra will run for a minute, and then exit without outputting any error in the log file. It is happening while SSTableReader is

Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Paul Nickerson
I am getting an out of memory error when I try to start Cassandra on one of my nodes. Cassandra will run for a minute, and then exit without outputting any error in the log file. It is happening while SSTableReader is opening a couple hundred thousand things. I am running a 6 node cluster using

Re: Question about adding nodes to a cluster

2015-02-10 Thread Robert Coli
On Mon, Feb 9, 2015 at 5:25 PM, Seth Edwards s...@pubnub.com wrote: I see what you are saying. So basically take whatever existing token I have and divide it by 2, give or take a couple of tokens? Yep! bisect the token ranges if you want to be fancy about it. =Rob
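
For illustration, a small sketch of the "bisect the token ranges" arithmetic, assuming single-token nodes; the token values are hypothetical and the wrap-around range is ignored:

import java.math.BigInteger;

public class TokenBisect {
    // Midpoint between two adjacent tokens, computed with BigInteger to
    // avoid overflowing a signed 64-bit long.
    static long midpoint(long left, long right) {
        return BigInteger.valueOf(left)
                .add(BigInteger.valueOf(right))
                .shiftRight(1)
                .longValue();
    }

    public static void main(String[] args) {
        // Hypothetical tokens owned by two existing neighbouring nodes.
        long tokenA = -9223372036854775808L; // Murmur3 minimum token
        long tokenB = 0L;
        System.out.println("Token for the new node: " + midpoint(tokenA, tokenB));
    }
}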

Re: Repairing OpsCenter rollups60 Results in Snapshot Errors

2015-02-10 Thread Paul Nickerson
Thank you Reynald. I have contributed to that issue. But I cannot participate further right now because I'm now having an out of memory issue which may be unrelated. I think I'll start a new thread on this list for that. ~ Paul Nickerson On Thu, Jan 29, 2015 at 11:15 AM, Reynald Bourtembourg

Re: nodetool status shows large numbers of up nodes are down

2015-02-10 Thread Cheng Ren
Hi Carlos, Thanks for your suggestion. We did check the NTP settings and clocks, and they are all working normally. Schema versions are also consistent with peers'. BTW, the only change we made was to set some nodes' request timeouts (read_request_timeout, write_request_timeout,

Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Chris Lohfink
Your cluster is probably having issues with compactions (with STCS you should never have this many). I would probably punt with OpsCenter/rollups60. Turn the node off and move all of the sstables off to a different directory for backup (or just rm if you really don't care about 1 minute metrics),

Re: nodetool status shows large numbers of up nodes are down

2015-02-10 Thread Chris Lohfink
Are you hitting long GCs on your nodes? You can check the GC log or look at the Cassandra log for GCInspector. Chris On Tue, Feb 10, 2015 at 1:28 PM, Cheng Ren cheng@bloomreach.com wrote: Hi Carlos, Thanks for your suggestion. We did check the NTP settings and clocks, and they are all working

Re: Accesing Cassandra Database which uses 'PasswordAuthenticator' in Java

2015-02-10 Thread Chamila Wijayarathna
Hi Deepak, Thanks. Got it working by adding the withCredentials method. cluster = Cluster.builder() .addContactPoint(node) .withCredentials(yourusername, yourpassword) .build(); On Wed, Feb 11, 2015 at 2:03 AM, Deepak Shetty shet...@gmail.com wrote: see the API docs
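
A self-contained version of the snippet above, assuming the DataStax Java driver 2.x and a server configured with PasswordAuthenticator; the contact point and credentials are placeholders that must match the users defined on the server:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class AuthConnect {
    public static void main(String[] args) {
        // Placeholder contact point and credentials.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withCredentials("cassandra", "cassandra")
                .build();
        Session session = cluster.connect();
        System.out.println("Connected to " + cluster.getMetadata().getClusterName());
        cluster.close();
    }
}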

Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Paul Nickerson
I was having trouble with snapshots failing while trying to repair that table (http://www.mail-archive.com/user@cassandra.apache.org/msg40686.html). I have a repair running on it now, and it seems to be going successfully this time. I am going to wait for that to finish, then try a manual nodetool

Re: nodetool status shows large numbers of up nodes are down

2015-02-10 Thread Carlos Rolo
Can you run nodetool tpstats and check if there are pending requests on GossipStage? The timeout should not affect gossip (AFAIK). As for problems you can have in this state: if your nodes are marked down for long and you are using hinted handoff, your hints may not be delivered and your

Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Chris Lohfink
yeah... probably just 2.1.2 things and not compactions. Still probably want to do something about the 1.6 million files though. It may be worth just mv/rm'ing the 60 sec rollup data though, unless you're really attached to it. Chris On Tue, Feb 10, 2015 at 4:04 PM, Paul Nickerson pgn...@gmail.com

Pagination support on Java Driver Query API

2015-02-10 Thread Ajay
Hi, I am working on exposing the Cassandra Query APIs (Java Driver) as REST APIs for our internal project. To support pagination, I looked at the Cassandra documentation, source code and other forums. What I mean by pagination support is the following: 1) Client fires query to REST server 2) Server
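
For reference, later 2.x releases of the DataStax Java driver expose a serializable PagingState, which makes a stateless REST flow like the one sketched above possible. A rough sketch, assuming a driver version that exposes it; the keyspace, table and method names are illustrative only:

import java.util.Iterator;

import com.datastax.driver.core.PagingState;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class RestPagingSketch {
    private final Session session;

    public RestPagingSketch(Session session) {
        this.session = session;
    }

    // Serves one page of rows; pageState is the opaque cursor the REST client
    // received with the previous page, or null for the first request.
    public String fetchPage(String pageState, int pageSize) {
        Statement stmt = new SimpleStatement("SELECT * FROM my_keyspace.my_table");
        stmt.setFetchSize(pageSize);
        if (pageState != null) {
            stmt.setPagingState(PagingState.fromString(pageState));
        }

        ResultSet rs = session.execute(stmt);
        int available = rs.getAvailableWithoutFetching();
        Iterator<Row> it = rs.iterator();
        // Consume only the rows already fetched so we stop at the page boundary
        // instead of letting the iterator pull further pages automatically.
        for (int i = 0; i < available && it.hasNext(); i++) {
            Row row = it.next();
            // ... serialize row into the REST response here ...
        }
        PagingState next = rs.getExecutionInfo().getPagingState();
        return next == null ? null : next.toString(); // cursor for the next request
    }
}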

Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Eric Stevens
This kind of recovery is definitely not my strong point, so feedback on this approach would certainly be welcome. As I understand it, if you really want to keep that data, you ought to be able to mv it out of the way to get your node online, then move those files back in several thousand at a time,

Re: Out of Memory Error While Opening SSTables on Startup

2015-02-10 Thread Flavien Charlon
I already experienced the same problem (hundreds of thousands of SSTables) with Cassandra 2.1.2. It seems to appear when running an incremental repair while there is a medium to high insert load on the cluster. The repair goes into a bad state and starts creating way more SSTables than it should