Nodetool tablehistograms

2017-07-19 Thread Abhinav Solan
Hi Everyone,

Here is the result of my tablehistograms command on one of our tables.

Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
                           (micros)      (micros)         (bytes)
50%             4.00          73.46        545.79          152321        8239
75%            10.00          88.15       2346.80          379022       20501
95%            10.00         152.32       4055.27         1358102       73457
98%            10.00         219.34       4866.32         1955666       88148
99%            10.00         315.85       5839.59         1955666      105778
Min             0.00          17.09         35.43              73           3
Max            10.00       36157.19      52066.35         2816159      152321

What does the SSTables column represent here?
Does it mean how many SSTables the read spans?
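
For reference, a minimal sketch of reading the same per-read SSTable
histogram over JMX, which is where nodetool gets these numbers (assumptions:
default JMX port 7199 with no authentication, and an illustrative
keyspace/table test.reads; the attribute names are the ones exposed by the
standard metrics JMX reporter):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SSTablesPerRead {
    public static void main(String[] args) throws Exception {
        // Default Cassandra JMX port; adjust for your cluster.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Histogram of how many SSTables each read touched
            // (keyspace and table names are illustrative).
            ObjectName histogram = new ObjectName(
                    "org.apache.cassandra.metrics:type=Table,keyspace=test,scope=reads,name=SSTablesPerReadHistogram");
            System.out.println("p50: " + mbs.getAttribute(histogram, "50thPercentile"));
            System.out.println("p99: " + mbs.getAttribute(histogram, "99thPercentile"));
            System.out.println("max: " + mbs.getAttribute(histogram, "Max"));
        }
    }
}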

Thanks,
Abhinav


Re: Row cache not working

2016-10-03 Thread Abhinav Solan
It's Cassandra 3.0.7.
It only works if I set caching = {'keys': 'ALL', 'rows_per_partition':
'ALL'}; I don't know why. If I set 'rows_per_partition': '1' then it does
not work.

Also wanted to ask one thing: if I set row_cache_save_period: 60, would the
cache be refreshed automatically, or is it lazy, caching a row only when a
fetch call for it is made?
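
For reference, a minimal sketch of applying the caching change that worked
here, from the Java driver (assumptions: driver 3.x, the test.reads table
from the thread below, and a placeholder contact point):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class EnableRowCache {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // placeholder contact point
                .build();
             Session session = cluster.connect()) {
            // Cache every row of each partition; with a small
            // 'rows_per_partition' the trace below shows the cache is not
            // populated when the query does not start at the partition head.
            session.execute("ALTER TABLE test.reads WITH caching = "
                    + "{'keys': 'ALL', 'rows_per_partition': 'ALL'}");
        }
    }
}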

On Mon, Oct 3, 2016 at 1:31 PM Jeff Jirsa <jeff.ji...@crowdstrike.com>
wrote:

> Which version of Cassandra are you running (I can tell it’s newer than
> 2.1, but the exact version would be useful)?
>
> From: Abhinav Solan <abhinav.so...@gmail.com>
> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> Date: Monday, October 3, 2016 at 11:35 AM
> To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> Subject: Re: Row cache not working
>
> Hi, can anyone please help me with this
>
> Thanks,
> Abhinav
>
> On Fri, Sep 30, 2016 at 6:20 PM Abhinav Solan <abhinav.so...@gmail.com>
> wrote:
>
> Hi Everyone,
>
> My table looks like this -
> CREATE TABLE test.reads (
> svc_pt_id bigint,
> meas_type_id bigint,
> flags bigint,
> read_time timestamp,
> value double,
> PRIMARY KEY ((svc_pt_id, meas_type_id))
> ) WITH bloom_filter_fp_chance = 0.1
> AND caching = {'keys': 'ALL', 'rows_per_partition': '10'}
> AND comment = ''
> AND compaction = {'class':
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> AND compression = {'chunk_length_in_kb': '64', 'class':
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
>
> Have set up the C* nodes with
> row_cache_size_in_mb: 1024
> row_cache_save_period: 14400
>
> and I am making this query
> select svc_pt_id, meas_type_id, read_time, value FROM
> cts_svc_pt_latest_int_read where svc_pt_id = -9941235 and meas_type_id = 146;
>
> With tracing on, it reports a Row cache miss every time:
>
> activity | timestamp | source | source_elapsed
> ---------+-----------+--------+---------------
> Execute CQL3 query | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 0
> Parsing select svc_pt_id, meas_type_id, read_time, value FROM cts_svc_pt_latest_int_read where svc_pt_id = -9941235 and meas_type_id = 146; [SharedPool-Worker-1] | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 111
> Preparing statement [SharedPool-Worker-1] | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 209
> reading data from /192.168.170.186 [SharedPool-Worker-1] | 2016-09-30 18:15:00.446001 | 192.168.199.75 | 370
> Sending READ message to /192.168.170.186 [MessagingService-Outgoing-/192.168.170.186] | 2016-09-30 18:15:00.446001 | 192.168.199.75 | 450
> REQUEST_RESPONSE message received from /192.168.170.186 [MessagingService-Incoming-/192.168.170.186] | 2016-09-30 18:15:00.448000 | 192.168.199.75 | 2469

Re: Row cache not working

2016-10-03 Thread Abhinav Solan
Hi, can anyone please help me with this

Thanks,
Abhinav

On Fri, Sep 30, 2016 at 6:20 PM Abhinav Solan <abhinav.so...@gmail.com>
wrote:

> Hi Everyone,
>
> My table looks like this -
> CREATE TABLE test.reads (
> svc_pt_id bigint,
> meas_type_id bigint,
> flags bigint,
> read_time timestamp,
> value double,
> PRIMARY KEY ((svc_pt_id, meas_type_id))
> ) WITH bloom_filter_fp_chance = 0.1
> AND caching = {'keys': 'ALL', 'rows_per_partition': '10'}
> AND comment = ''
> AND compaction = {'class':
> 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
> AND compression = {'chunk_length_in_kb': '64', 'class':
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
>
> Have set up the C* nodes with
> row_cache_size_in_mb: 1024
> row_cache_save_period: 14400
>
> and I am making this query
> select svc_pt_id, meas_type_id, read_time, value FROM
> cts_svc_pt_latest_int_read where svc_pt_id = -9941235 and meas_type_id =
> 146;
>
> With tracing on, it reports a Row cache miss every time:
>
> activity | timestamp | source | source_elapsed
> ---------+-----------+--------+---------------
> Execute CQL3 query | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 0
> Parsing select svc_pt_id, meas_type_id, read_time, value FROM cts_svc_pt_latest_int_read where svc_pt_id = -9941235 and meas_type_id = 146; [SharedPool-Worker-1] | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 111
> Preparing statement [SharedPool-Worker-1] | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 209
> reading data from /192.168.170.186 [SharedPool-Worker-1] | 2016-09-30 18:15:00.446001 | 192.168.199.75 | 370
> Sending READ message to /192.168.170.186 [MessagingService-Outgoing-/192.168.170.186] | 2016-09-30 18:15:00.446001 | 192.168.199.75 | 450
> REQUEST_RESPONSE message received from /192.168.170.186 [MessagingService-Incoming-/192.168.170.186] | 2016-09-30 18:15:00.448000 | 192.168.199.75 | 2469
> Processing response from /192.168.170.186 [SharedPool-Worker-8] | 2016-09-30 18:15:00.448000 | 192.168.199.75 | 2609
> READ message received from /192.168.199.75 [MessagingService-Incoming-/192.168.199.75] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 75
> Row cache miss [SharedPool-Worker-2] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 218
> Fetching data but not populating cache as query does not query from the start of the partition [SharedPool-Worker-2] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 246
> Executing single-partition query on cts_svc_pt_latest_int_read [SharedPool-Worker-2] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 259
> Acquiring sstable references [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 281
> Merging memtable contents [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 295
> Merging data from sstable 8 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 326
> Key cache hit for sstable 8 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 351
> Merging data from sstable 7 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 439
> Key cache hit for sstable 7 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 468
> Read 1 live and 0 tombstone cells [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 615
>

Row cache not working

2016-09-30 Thread Abhinav Solan
Hi Everyone,

My table looks like this -
CREATE TABLE test.reads (
svc_pt_id bigint,
meas_type_id bigint,
flags bigint,
read_time timestamp,
value double,
PRIMARY KEY ((svc_pt_id, meas_type_id))
) WITH bloom_filter_fp_chance = 0.1
AND caching = {'keys': 'ALL', 'rows_per_partition': '10'}
AND comment = ''
AND compaction = {'class':
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'chunk_length_in_kb': '64', 'class':
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';

Have set up the C* nodes with
row_cache_size_in_mb: 1024
row_cache_save_period: 14400

and I am making this query
select svc_pt_id, meas_type_id, read_time, value FROM
cts_svc_pt_latest_int_read where svc_pt_id = -9941235 and meas_type_id =
146;
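
For reference, a minimal sketch of issuing this query with tracing enabled
from the Java driver (assumptions: driver 3.x; the contact point and
keyspace are placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.QueryTrace;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class TraceRead {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // placeholder contact point
                .build();
             Session session = cluster.connect("test")) { // placeholder keyspace
            SimpleStatement stmt = new SimpleStatement(
                    "select svc_pt_id, meas_type_id, read_time, value"
                    + " FROM cts_svc_pt_latest_int_read"
                    + " where svc_pt_id = -9941235 and meas_type_id = 146");
            stmt.enableTracing(); // equivalent of TRACING ON in cqlsh
            ResultSet rs = session.execute(stmt);
            // Print the server-side trace events, like the output below.
            QueryTrace trace = rs.getExecutionInfo().getQueryTrace();
            for (QueryTrace.Event e : trace.getEvents()) {
                System.out.printf("%s | %s | %d%n",
                        e.getDescription(), e.getSource(), e.getSourceElapsedMicros());
            }
        }
    }
}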

With tracing on, it reports a Row cache miss every time:

activity | timestamp | source | source_elapsed
---------+-----------+--------+---------------
Execute CQL3 query | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 0
Parsing select svc_pt_id, meas_type_id, read_time, value FROM cts_svc_pt_latest_int_read where svc_pt_id = -9941235 and meas_type_id = 146; [SharedPool-Worker-1] | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 111
Preparing statement [SharedPool-Worker-1] | 2016-09-30 18:15:00.446000 | 192.168.199.75 | 209
reading data from /192.168.170.186 [SharedPool-Worker-1] | 2016-09-30 18:15:00.446001 | 192.168.199.75 | 370
Sending READ message to /192.168.170.186 [MessagingService-Outgoing-/192.168.170.186] | 2016-09-30 18:15:00.446001 | 192.168.199.75 | 450
REQUEST_RESPONSE message received from /192.168.170.186 [MessagingService-Incoming-/192.168.170.186] | 2016-09-30 18:15:00.448000 | 192.168.199.75 | 2469
Processing response from /192.168.170.186 [SharedPool-Worker-8] | 2016-09-30 18:15:00.448000 | 192.168.199.75 | 2609
READ message received from /192.168.199.75 [MessagingService-Incoming-/192.168.199.75] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 75
Row cache miss [SharedPool-Worker-2] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 218
Fetching data but not populating cache as query does not query from the start of the partition [SharedPool-Worker-2] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 246
Executing single-partition query on cts_svc_pt_latest_int_read [SharedPool-Worker-2] | 2016-09-30 18:15:00.449000 | 192.168.170.186 | 259
Acquiring sstable references [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 281
Merging memtable contents [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 295
Merging data from sstable 8 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 326
Key cache hit for sstable 8 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 351
Merging data from sstable 7 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 439
Key cache hit for sstable 7 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 468
Read 1 live and 0 tombstone cells [SharedPool-Worker-2] | 2016-09-30 18:15:00.449001 | 192.168.170.186 | 615
Enqueuing response to /192.168.199.75 [SharedPool-Worker-2] | 2016-09-30 18:15:00.449002 | 192.168.170.186 | 766
Sending REQUEST_RESPONSE message to /192.168.199.75 [MessagingService-Outgoing-/192.168.199.75] | 2016-09-30 18:15:00.449002 | 192.168.170.186 | 897
Request complete | 2016-09-30 18:15:00.44 | 192.168.199.75 | 2888

Can anyone please tell me what I am doing wrong?

Thanks,
Abhinav


Re: NoHostAvailableException coming up on our server

2016-07-13 Thread Abhinav Solan
Thanks a lot for the suggestion, Romain. I have done the setup to see the
driver logs, but haven't seen that error again.
Also thanks for the MaxRequestsPerConnection tip, I will change it to 32K.
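
For reference, a minimal sketch of the pool settings along the lines Romain
suggests below (assumptions: DataStax Java driver 3.x; the contact point
and port are placeholders, and the numbers are a starting point to tune
from, not a recommendation):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;

public class TunedPooling {
    public static void main(String[] args) {
        // Romain's starting point: fewer connections, more requests each,
        // since protocol v3+ allows up to 32K concurrent requests per connection.
        PoolingOptions poolingOptions = new PoolingOptions()
                .setConnectionsPerHost(HostDistance.LOCAL, 2, 2)
                .setMaxRequestsPerConnection(HostDistance.LOCAL, 16384);

        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // placeholder contact point
                .withPoolingOptions(poolingOptions)
                .withPort(9042) // placeholder port
                .build()) {
            cluster.connect();
        }
    }
}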

Regards,
Abhinav

On Wed, Jul 13, 2016 at 1:02 PM Romain Hardouin <romainh...@yahoo.fr> wrote:

> Put the driver logs in debug mode to see what happens.
> Btw I am surprised by the few requests per connection in your setup:
>
> .setConnectionsPerHost(HostDistance.LOCAL, 20, 20)
>  .setMaxRequestsPerConnection(HostDistance.LOCAL, 128)
>
> It looks like a protocol v2 setting (Cassandra 2.0), because that protocol
> was limited to 128 requests per connection. You're using C* 3.3, so
> protocol v4. You can go up to 32K since protocol v3. As a first step I
> would try to open only 2 connections with 16K in MaxRequestsPerConnection.
> Then try to fine tune.
>
> Best,
>
> Romain
>
>
> Le Mardi 12 juillet 2016 23h57, Abhinav Solan <abhinav.so...@gmail.com> a
> écrit :
>
>
> I am using driver version 3.0.0 over apache-cassandra-3.3
>
> On Tue, Jul 12, 2016 at 2:37 PM Riccardo Ferrari <ferra...@gmail.com>
> wrote:
>
> What driver version are you using?
>
> You can look at the LoggingRetryPolicy to have more meaningful messages in
> your logs.
>
> best,
>
> On Tue, Jul 12, 2016 at 9:02 PM, Abhinav Solan <abhinav.so...@gmail.com>
> wrote:
>
> Thanks, Johnny
> Actually, they were running .. it went through a series of reads and writes
> .. and recovered after the error.
> Are there any settings I can specify when preparing the Session at the Java
> client driver level? Here are my current settings -
>
> PoolingOptions poolingOptions = new PoolingOptions()
>  .setConnectionsPerHost(HostDistance.LOCAL, 20, 20)
>  .setMaxRequestsPerConnection(HostDistance.LOCAL, 128)
>  .setNewConnectionThreshold(HostDistance.LOCAL, 100);
>
>  Cluster.Builder builder = Cluster.builder()
>  .addContactPoints(cp)
>  .withPoolingOptions(poolingOptions)
>  .withProtocolVersion(ProtocolVersion.NEWEST_SUPPORTED)
>  .withPort(port);
>
>
>
> On Tue, Jul 12, 2016 at 11:47 AM Johnny Miller <johnny.p.mil...@gmail.com>
> wrote:
>
> Abhinav - you’re getting that as the driver isn’t finding any hosts up for
> your query. You probably need to check if all the nodes in your cluster are
> running.
>
> See:
> http://docs.datastax.com/en/drivers/java/3.0/com/datastax/driver/core/exceptions/NoHostAvailableException.html
>
>
> Johnny
>
> On 12 Jul 2016, at 18:46, Abhinav Solan <abhinav.so...@gmail.com> wrote:
>
> Hi Everyone,
>
> I am getting this error on our server; it comes and goes, and it seems the
> connection drops and comes back after a while -
>
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: :9042 
> (com.datastax.driver.core.exceptions.ConnectionException: [] 
> Pool is CLOSING))
>   at 
> com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:218)
>   at 
> com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
>   at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:284)
>   at 
> com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:115)
>   at 
> com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:91)
>   at 
> com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:129)
>
> Can anyone suggest what can be done to handle this error?
>
>
> Thanks,
>
> Abhinav
>
>
>
>
>
>


Re: NoHostAvailableException coming up on our server

2016-07-12 Thread Abhinav Solan
I am using driver version 3.0.0 over apache-cassandra-3.3
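
For reference, a minimal sketch of the LoggingRetryPolicy Riccardo mentions
below (assumptions: driver 3.x and a placeholder contact point). It wraps
another policy and logs each retry decision:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DefaultRetryPolicy;
import com.datastax.driver.core.policies.LoggingRetryPolicy;

public class RetryLogging {
    public static void main(String[] args) {
        // Wrap the default policy so every retry/ignore/rethrow decision is
        // logged, giving more meaningful messages than the bare exception.
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // placeholder contact point
                .withRetryPolicy(new LoggingRetryPolicy(DefaultRetryPolicy.INSTANCE))
                .build()) {
            cluster.connect();
        }
    }
}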

On Tue, Jul 12, 2016 at 2:37 PM Riccardo Ferrari <ferra...@gmail.com> wrote:

> What driver version are you using?
>
> You can look at the LoggingRetryPolicy to have more meaningful messages in
> your logs.
>
> best,
>
> On Tue, Jul 12, 2016 at 9:02 PM, Abhinav Solan <abhinav.so...@gmail.com>
> wrote:
>
>> Thanks, Johnny
>> Actually, they were running .. it went through a series of reads and
>> writes .. and recovered after the error.
>> Are there any settings I can specify when preparing the Session at the
>> Java client driver level? Here are my current settings -
>>
>> PoolingOptions poolingOptions = new PoolingOptions()
>>  .setConnectionsPerHost(HostDistance.LOCAL, 20, 20)
>>  .setMaxRequestsPerConnection(HostDistance.LOCAL, 128)
>>  .setNewConnectionThreshold(HostDistance.LOCAL, 100);
>>
>>  Cluster.Builder builder = Cluster.builder()
>>  .addContactPoints(cp)
>>  .withPoolingOptions(poolingOptions)
>>  .withProtocolVersion(ProtocolVersion.NEWEST_SUPPORTED)
>>  .withPort(port);
>>
>>
>>
>> On Tue, Jul 12, 2016 at 11:47 AM Johnny Miller <johnny.p.mil...@gmail.com>
>> wrote:
>>
>>> Abhinav - you’re getting that as the driver isn’t finding any hosts up for
>>> your query. You probably need to check if all the nodes in your cluster are
>>> running.
>>>
>>> See:
>>> http://docs.datastax.com/en/drivers/java/3.0/com/datastax/driver/core/exceptions/NoHostAvailableException.html
>>>
>>>
>>> Johnny
>>>
>>> On 12 Jul 2016, at 18:46, Abhinav Solan <abhinav.so...@gmail.com> wrote:
>>>
>>> Hi Everyone,
>>>
>>> I am getting this error on our server; it comes and goes, and it seems the
>>> connection drops and comes back after a while -
>>>
>>> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: 
>>> All host(s) tried for query failed (tried: :9042 
>>> (com.datastax.driver.core.exceptions.ConnectionException: 
>>> [] Pool is CLOSING))
>>> at 
>>> com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:218)
>>> at 
>>> com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
>>> at 
>>> com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:284)
>>> at 
>>> com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:115)
>>> at 
>>> com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:91)
>>> at 
>>> com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:129)
>>>
>>> Can anyone suggest what can be done to handle this error?
>>>
>>>
>>> Thanks,
>>>
>>> Abhinav
>>>
>>>
>>>
>


Re: NoHostAvailableException coming up on our server

2016-07-12 Thread Abhinav Solan
Thanks, Johnny
Actually, they were running .. it went through a series of reads and writes
.. and recovered after the error.
Are there any settings I can specify when preparing the Session at the Java
client driver level? Here are my current settings -

PoolingOptions poolingOptions = new PoolingOptions()
 .setConnectionsPerHost(HostDistance.LOCAL, 20, 20)
 .setMaxRequestsPerConnection(HostDistance.LOCAL, 128)
 .setNewConnectionThreshold(HostDistance.LOCAL, 100);

 Cluster.Builder builder = Cluster.builder()
 .addContactPoints(cp)
 .withPoolingOptions(poolingOptions)
 .withProtocolVersion(ProtocolVersion.NEWEST_SUPPORTED)
 .withPort(port);
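
For completeness, a minimal sketch of finishing this setup, continuing the
snippet above (assuming cp and port are defined as in the surrounding code):

// Build the cluster from the configured builder and open a session.
Cluster cluster = builder.build();
Session session = cluster.connect();
// ... execute queries ...
session.close();
cluster.close();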



On Tue, Jul 12, 2016 at 11:47 AM Johnny Miller <johnny.p.mil...@gmail.com>
wrote:

> Abhinav - you’re getting that as the driver isn’t finding any hosts up for
> your query. You probably need to check if all the nodes in your cluster are
> running.
>
> See:
> http://docs.datastax.com/en/drivers/java/3.0/com/datastax/driver/core/exceptions/NoHostAvailableException.html
>
>
> Johnny
>
> On 12 Jul 2016, at 18:46, Abhinav Solan <abhinav.so...@gmail.com> wrote:
>
> Hi Everyone,
>
> I am getting this error on our server; it comes and goes, and it seems the
> connection drops and comes back after a while -
>
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: :9042 
> (com.datastax.driver.core.exceptions.ConnectionException: [] 
> Pool is CLOSING))
>   at 
> com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:218)
>   at 
> com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
>   at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:284)
>   at 
> com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:115)
>   at 
> com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:91)
>   at 
> com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:129)
>
> Can anyone suggest what can be done to handle this error?
>
>
> Thanks,
>
> Abhinav
>
>
>


NoHostAvailableException coming up on our server

2016-07-12 Thread Abhinav Solan
Hi Everyone,

I am getting this error on our server; it comes and goes, and it seems the
connection drops and comes back after a while -

Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException:
All host(s) tried for query failed (tried: :9042
(com.datastax.driver.core.exceptions.ConnectionException:
[] Pool is CLOSING))
at 
com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:218)
at 
com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:43)
at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:284)
at 
com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:115)
at 
com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:91)
at 
com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:129)

Can anyone suggest what can be done to handle this error?
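
For reference, a minimal sketch of one way to surface the per-host cause
when this exception fires (assumptions: DataStax Java driver 3.x; the
contact point and query are placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

import java.net.InetSocketAddress;
import java.util.Map;

public class QueryWithDiagnostics {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // placeholder contact point
                .build();
             Session session = cluster.connect()) {
            try {
                ResultSet rs = session.execute(
                        "SELECT release_version FROM system.local"); // placeholder query
                System.out.println(rs.one().getString("release_version"));
            } catch (NoHostAvailableException e) {
                // getErrors() maps each tried host to the reason it failed,
                // e.g. "Pool is CLOSING" as in the stack trace above.
                for (Map.Entry<InetSocketAddress, Throwable> entry : e.getErrors().entrySet()) {
                    System.err.println(entry.getKey() + " -> " + entry.getValue());
                }
            }
        }
    }
}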


Thanks,

Abhinav


Blob or columns

2016-06-03 Thread Abhinav Solan
Hi Everyone,

We have a unique situation at my workplace around storing data.
We are using Cassandra as a write-through cache: we keep real-time data in
Cassandra for around 10-20 days, and the rest we archive to another data
store.
The data we are going to store has around 20 columns. Three of them go into
the primary key, and 2 more are read by the systems that query Cassandra;
the remaining columns are used only when we need to reconstruct the data to
be archived in our archive store, which is accessed by our legacy
applications.
The question here is -
Should we store this inconsequential data as a blob or JSON in one column,
or create separate columns for it? Which is the preferred way here?
We are currently using Cassandra 3.x.
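
For reference, a minimal sketch of the single-blob variant (assumptions:
driver 3.x; the table readings_cache, its columns, and the JSON payload are
all illustrative, with names borrowed from the earlier threads; the contact
point and keyspace are placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BlobVariantSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // placeholder contact point
                .build();
             Session session = cluster.connect("test")) { // placeholder keyspace
            // Hypothetical table: 3 primary-key columns, 2 queried columns,
            // and the remaining archive-only columns packed into one blob.
            session.execute("CREATE TABLE IF NOT EXISTS readings_cache ("
                    + " svc_pt_id bigint, meas_type_id bigint, read_time timestamp,"
                    + " value double, flags bigint,"
                    + " archive_payload blob,"
                    + " PRIMARY KEY ((svc_pt_id, meas_type_id), read_time))");
            // Archive-only fields serialized client-side (JSON here for illustration).
            ByteBuffer payload = ByteBuffer.wrap(
                    "{\"unit\":\"kWh\",\"quality\":\"good\"}".getBytes(StandardCharsets.UTF_8));
            session.execute(
                    "INSERT INTO readings_cache (svc_pt_id, meas_type_id, read_time,"
                    + " value, flags, archive_payload)"
                    + " VALUES (?, ?, toTimestamp(now()), ?, ?, ?)",
                    -9941235L, 146L, 3.14, 0L, payload);
        }
    }
}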

Thanks,
Abhinav