Re: IPv6-only host, can't seem to get Cassandra to bind to a public port

2017-04-12 Thread Khaja, Raziuddin (NIH/NLM/NCBI) [C]
Are you specifying both the listen_address and listen_interface, or just one of 
the two?

Here is an example of the following 3 lines. This is what I have on my 2.1.16
cluster, which uses IPv6:

listen_address: ::hhh::h::hhh:h
# listen_interface: eth0
# listen_interface_prefer_ipv6: false

Also, looking at my config, I can confirm that it is unnecessary (and in fact
wrong) to escape the IPv6 address with \ as I suggested before.

-Razi

On 4/12/17, 4:05 PM, "Martijn Pieters"  wrote:

From: "Khaja, Raziuddin (NIH/NLM/NCBI) [C]" 
> Maybe you have to escape the IPV6 addresses in the cassandra.yaml in the 
same way.
> I think it’s worth a try.

Nope, no luck. You get an error instead:

ERROR [main] 2017-04-12 20:03:46,899 CassandraDaemon.java:752 - 
Exception encountered during startup: Unknown listen_address 
'\:\:\:\:\:h\:hh\:h'

(actual address digits replaced with h characters).

Martijn

Re: IPv6-only host, can't seem to get Cassandra to bind to a public port

2017-04-12 Thread Martijn Pieters
From: "Khaja, Raziuddin (NIH/NLM/NCBI) [C]" 
> Maybe you have to escape the IPV6 addresses in the cassandra.yaml in the same 
> way.
> I think it’s worth a try.

Nope, no luck. You get an error instead:

ERROR [main] 2017-04-12 20:03:46,899 CassandraDaemon.java:752 - Exception 
encountered during startup: Unknown listen_address 
'\:\:\:\:\:h\:hh\:h'
    
(actual address digits replaced with h characters).

Martijn

Re: IPv6-only host, can't seem to get Cassandra to bind to a public port

2017-04-12 Thread Khaja, Raziuddin (NIH/NLM/NCBI) [C]
See this note in cassandra-topology.properties:



# Native IPv6 is supported, however you must escape the colon in the IPv6 
Address

# Also be sure to comment out JVM_OPTS="$JVM_OPTS 
-Djava.net.preferIPv4Stack=true"

# in cassandra-env.sh

fe80\:0\:0\:0\:202\:b3ff\:fe1e\:8329=DC1:RAC3



Maybe you have to escape the IPv6 addresses in the cassandra.yaml in the same 
way.

I think it’s worth a try.
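For reference, the escaping in cassandra-topology.properties is needed because ':' is a key/value separator in Java .properties files. A small helper of mine (not a Cassandra tool) to produce the escaped form, using the example address from the comment above:

```python
import ipaddress

def escape_for_properties(ip: str) -> str:
    """Escape the colons of an IPv6 address so it can be used as a key in a
    Java .properties file such as cassandra-topology.properties."""
    ipaddress.IPv6Address(ip)  # validate first so typos fail loudly
    return ip.replace(":", "\\:")

print(escape_for_properties("fe80:0:0:0:202:b3ff:fe1e:8329"))
# fe80\:0\:0\:0\:202\:b3ff\:fe1e\:8329
```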

-Razi





On 4/12/17, 3:08 PM, "Martijn Pieters"  wrote:



From: sai krishnam raju potturi 

> We have included the IPV6 address with scope GLOBAL, and not IPV6 with 
SCOPE LINK in the YAML and TOPOLOGY files.

>

> inet6 addr: 2001: *** : ** : ** : * : * :  :   Scope:Global

> inet6 addr: fe80 :: *** :  :  :  Scope:Link

>

> Not sure if this might be of relevance to the issue you are facing.



I already stated in the initial email that I tried both.



Martijn Pieters




Re: IPv6-only host, can't seem to get Cassandra to bind to a public port

2017-04-12 Thread Martijn Pieters
From: sai krishnam raju potturi 
> We have included the IPV6 address with scope GLOBAL, and not IPV6 with SCOPE 
> LINK in the YAML and TOPOLOGY files.  
>
> inet6 addr: 2001: *** : ** : ** : * : * :  :   Scope:Global
> inet6 addr: fe80 :: *** :  :  :  Scope:Link
>
> Not sure if this might be of relevance to the issue you are facing.

I already stated in the initial email that I tried both.

Martijn Pieters

Re: Multiple nodes decommission

2017-04-12 Thread Vlad
Interesting, there is no such explicit warning for v3.0:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddNodeToCluster.html
It says:
   - Start the bootstrap node.
   - Verify that the node is fully bootstrapped and all other nodes are up (UN).

Does it mean that we should start them one by one? Maybe somebody from the
developers can clarify this issue?

On Wednesday, April 12, 2017 9:16 PM, Jacob Shadix  
wrote:

It's still not recommended to start them at the same time. Staggering by 2
minutes is what the following documentation (for version 2.1) suggests, along
with additional steps:
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_node_to_cluster_t.html

-- Jacob Shadix 

On Wed, Apr 12, 2017 at 1:48 PM, Vlad  wrote:

But it seems OK to add multiple nodes at once, right?
 

On Tuesday, April 11, 2017 8:38 PM, Jacob Shadix  
wrote:
 

 Right! Another reason why I just stick with sequential decommissions. Maybe 
someone here could shed some light on what happens under the covers if parallel 
decommissions are kicked off.
-- Jacob Shadix 

On Tue, Apr 11, 2017 at 12:55 PM, benjamin roth  wrote:

I did not test it but I'd bet that parallel decommission will lead to 
inconsistencies. Each decommission results in range movements and range 
reassignments, which become effective after a successful decommission. If you 
start several decommissions at once, I guess the calculated reassignments are 
invalid for at least one node after the first node finishes the decommission 
process.
I hope someone will correct me if I am wrong.
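The hazard described above can be illustrated with a toy hash ring, where each departing node's range goes to the next surviving node clockwise. This is only an illustration of the general principle, not Cassandra's actual range-movement logic (node names and layout are invented): a plan computed for both removals from one snapshot disagrees with the plan computed one decommission at a time.

```python
# Toy ring: data owned by a departing node moves to the next survivor
# clockwise. Planning two removals from one snapshot disagrees with
# planning them one at a time.

def owner_after_removal(ring, removed):
    """For each removed node, find the next node clockwise that survives."""
    plan = {}
    for node in sorted(removed):
        idx = ring.index(node)
        for step in range(1, len(ring)):
            candidate = ring[(idx + step) % len(ring)]
            if candidate not in removed:
                plan[node] = candidate
                break
    return plan

ring = ["A", "B", "C", "D"]

# Parallel: both removals planned against the same snapshot of the ring.
parallel = owner_after_removal(ring, {"A", "B"})       # {'A': 'C', 'B': 'C'}

# Sequential: remove A first, then plan B against the shrunken ring.
after_a = owner_after_removal(ring, {"A"})             # {'A': 'B'}
seq_b = owner_after_removal([n for n in ring if n != "A"], {"B"})  # {'B': 'C'}

# The parallel plan sends A's range straight to C, while the sequential
# plan first handed it to B -- a node that is itself about to leave.
print(parallel, after_a, seq_b)
```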
2017-04-11 18:43 GMT+02:00 Jacob Shadix :

Are you using vnodes? I typically do one-by-one as the decommission will create 
additional load/network activity streaming data to the other nodes as the token 
ranges are reassigned. 
-- Jacob Shadix 

On Sat, Apr 8, 2017 at 10:55 AM, Vlad  wrote:

Hi,
How should multiple nodes be decommissioned with "nodetool decommission": one
by one, or in parallel?

Thanks.

Re: Multiple nodes decommission

2017-04-12 Thread Jacob Shadix
It's still not recommended to start them at the same time. Staggering by 2
minutes is what the following documentation (for version 2.1) suggests, along
with additional steps:

https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_node_to_cluster_t.html

-- Jacob Shadix

On Wed, Apr 12, 2017 at 1:48 PM, Vlad  wrote:

> But it seems OK to add multiple nodes at once, right?
>
>
> On Tuesday, April 11, 2017 8:38 PM, Jacob Shadix 
> wrote:
>
>
> Right! Another reason why I just stick with sequential decommissions.
> Maybe someone here could shed some light on what happens under the covers
> if parallel decommissions are kicked off.
>
> -- Jacob Shadix
>
> On Tue, Apr 11, 2017 at 12:55 PM, benjamin roth  wrote:
>
> I did not test it but I'd bet that parallel decommission will lead to
> inconsistencies. Each decommission results in range movements and range
> reassignments, which become effective after a successful decommission.
> If you start several decommissions at once, I guess the calculated
> reassignments are invalid for at least one node after the first node
> finishes the decommission process.
>
> I hope someone will correct me if I am wrong.
>
> 2017-04-11 18:43 GMT+02:00 Jacob Shadix :
>
> Are you using vnodes? I typically do one-by-one as the decommission will
> create additional load/network activity streaming data to the other nodes
> as the token ranges are reassigned.
>
> -- Jacob Shadix
>
> On Sat, Apr 8, 2017 at 10:55 AM, Vlad  wrote:
>
> Hi,
>
> How should multiple nodes be decommissioned with "nodetool decommission":
> one by one, or in parallel?
>
> Thanks.
>


Re: Multiple nodes decommission

2017-04-12 Thread Vlad
But it seems OK to add multiple nodes at once, right?
 

On Tuesday, April 11, 2017 8:38 PM, Jacob Shadix  
wrote:
 

 Right! Another reason why I just stick with sequential decommissions. Maybe 
someone here could shed some light on what happens under the covers if parallel 
decommissions are kicked off.
-- Jacob Shadix 

On Tue, Apr 11, 2017 at 12:55 PM, benjamin roth  wrote:

I did not test it but I'd bet that parallel decommission will lead to 
inconsistencies. Each decommission results in range movements and range 
reassignments, which become effective after a successful decommission. If you 
start several decommissions at once, I guess the calculated reassignments are 
invalid for at least one node after the first node finishes the decommission 
process.
I hope someone will correct me if I am wrong.
2017-04-11 18:43 GMT+02:00 Jacob Shadix :

Are you using vnodes? I typically do one-by-one as the decommission will create 
additional load/network activity streaming data to the other nodes as the token 
ranges are reassigned. 
-- Jacob Shadix 

On Sat, Apr 8, 2017 at 10:55 AM, Vlad  wrote:

Hi,
How should multiple nodes be decommissioned with "nodetool decommission": one
by one, or in parallel?

Thanks.


Constant MemtableFlushWriter Messages Following upgrade from 2.2.5 to 2.2.8

2017-04-12 Thread Fd Habash
In the process of upgrading our cluster, nodes that got upgraded are constantly 
emitting these messages. There is no impact, but I wanted to know what they mean 
and why they appear only after the upgrade.

Any feedback will be appreciated. 


17-04-10 20:18:11,580 Memtable.java:352 - Writing Memtable-compactions_in_progress@748675126(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:1] 2017-04-10 20:18:11,588 Memtable.java:352 - Writing Memtable-compactions_in_progress@1129449190(0.195KiB serialized bytes, 12 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:2] 2017-04-10 20:18:14,426 Memtable.java:352 - Writing Memtable-compactions_in_progress@931709037(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:1] 2017-04-10 20:18:44,950 Memtable.java:352 - Writing Memtable-compactions_in_progress@1057180976(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:2] 2017-04-10 20:18:44,963 Memtable.java:352 - Writing Memtable-compactions_in_progress@2110307908(0.195KiB serialized bytes, 12 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:1] 2017-04-10 20:18:45,546 Memtable.java:352 - Writing Memtable-compactions_in_progress@1803704247(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:2] 2017-04-10 20:19:16,196 Memtable.java:352 - Writing Memtable-compactions_in_progress@1692030234(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:1] 2017-04-10 20:19:16,240 Memtable.java:352 - Writing Memtable-compactions_in_progress@12532575(0.098KiB serialized bytes, 6 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:2] 2017-04-10 20:19:16,241 Memtable.java:352 - Writing Memtable-compactions_in_progress@337283565(0.098KiB serialized bytes, 6 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:1] 2017-04-10 20:19:52,322 Memtable.java:352 - Writing Memtable-compactions_in_progress@810846450(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:2] 2017-04-10 20:19:52,561 Memtable.java:352 - Writing Memtable-compactions_in_progress@2010893318(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)


Thank you
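To gauge how often these flushes happen, the log lines can be tallied with a short script. A sketch over two of the sample lines above:

```python
import re
from collections import Counter

# Count "Writing Memtable-<table>@..." flush lines per table from a
# Cassandra system.log excerpt.
flush_re = re.compile(r"Writing Memtable-(\w+)@\d+\((\S+) serialized bytes, (\d+) ops")

log = """\
INFO  [MemtableFlushWriter:1] 2017-04-10 20:18:11,588 Memtable.java:352 - Writing Memtable-compactions_in_progress@1129449190(0.195KiB serialized bytes, 12 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:2] 2017-04-10 20:18:14,426 Memtable.java:352 - Writing Memtable-compactions_in_progress@931709037(0.008KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit)
"""

counts = Counter(m.group(1) for m in flush_re.finditer(log))
print(counts)  # Counter({'compactions_in_progress': 2})
```

Run against the full system.log, this shows whether only compactions_in_progress is flushing this often or other tables are too.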



Re: WriteTimeoutException with LWT after few milliseconds

2017-04-12 Thread benjamin roth
Hi Roland,

LWTs implicitly set the consistency level to SERIAL, which requires at
least QUORUM.
No, no node is/was down. If that happened, the query would fail with "Could
not achieve consistency level QUORUM ..."

2017-04-12 16:48 GMT+02:00 Roland Otta :

> Hi Benjamin,
>
> It's unlikely that I can assist you, but nevertheless I'll give it a try
> ;-)
>
> What's your consistency level for the insert?
> What if one or more nodes are marked down and proper consistency can't be
> achieved?
> Of course the error message does not indicate that problem (as it says it's
> a timeout)... but in that case you would get an instant error for inserts,
> wouldn't you?
>
> br,
> roland
>
>
>
> On Wed, 2017-04-12 at 15:09 +0200, benjamin roth wrote:
>
> Hi folks,
>
> Can someone explain why that occurs?
>
> Write timeout after 0.006s
> Query: 'INSERT INTO log_moment_import ("source", "reference", "user_id",
> "moment_id", "date", "finished") VALUES (3, '1305821272790495', 65675537,
> 0, '2017-04-12 13:00:51', NULL) IF NOT EXISTS
> Primary key and partition key is source + reference
> Message: Operation timed out - received only 1 responses.
>
> This appears every now and then in the log. When I check for the
> record in the table, it is there.
> I could explain that if the WTE occurred after the configured write
> timeout, but it happens within a few milliseconds.
> Is this caused by lock contention? It is possible that there are
> concurrent inserts on the same PK - actually that's the reason why I use
> LWTs.
>
> Thanks!
>
>


Re: WriteTimeoutException with LWT after few milliseconds

2017-04-12 Thread Carlos Rolo
You can try to use TRACING to debug the situation, but for a LWT to fail so
fast, the most probable cause is what you stated: "It is possible that
there are concurrent inserts on the same PK - actually thats the reason why
I use LWTs." AKA, someone inserted first.
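The "someone inserted first" outcome is ordinary compare-and-set behavior. A toy in-memory sketch of IF NOT EXISTS semantics (not the driver API, just the first-writer-wins rule):

```python
import threading

table = {}
lock = threading.Lock()

def insert_if_not_exists(key, value):
    """Toy IF NOT EXISTS: only the first writer for a key wins."""
    with lock:
        if key in table:
            return False, table[key]  # like [applied]=False plus the existing row
        table[key] = value
        return True, value

# Two concurrent writers race on the same partition key.
key = ("3", "1305821272790495")
results = []
threads = [threading.Thread(target=lambda v=v: results.append(insert_if_not_exists(key, v)))
           for v in ("writer-1", "writer-2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

applied = [ok for ok, _ in results]
print(sorted(applied))  # [False, True] -- exactly one writer wins
```

Whichever writer loses sees a failure even though the row exists, which matches the observation that the record is in the table.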

Regards,

Carlos Juzarte Rolo
Cassandra Consultant / Datastax Certified Architect / Cassandra MVP

Pythian - Love your data

rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
*linkedin.com/in/carlosjuzarterolo
*
Mobile: +351 918 918 100
www.pythian.com

On Wed, Apr 12, 2017 at 3:51 PM, Roland Otta 
wrote:

> sorry .. ignore my comment ...
>
> i missed your comment that the record is in the table ...
>
> On Wed, 2017-04-12 at 16:48 +0200, Roland Otta wrote:
>
> Hi Benjamin,
>
> It's unlikely that I can assist you, but nevertheless I'll give it a try
> ;-)
>
> What's your consistency level for the insert?
> What if one or more nodes are marked down and proper consistency can't be
> achieved?
> Of course the error message does not indicate that problem (as it says it's
> a timeout)... but in that case you would get an instant error for inserts,
> wouldn't you?
>
> br,
> roland
>
>
>
> On Wed, 2017-04-12 at 15:09 +0200, benjamin roth wrote:
>
> Hi folks,
>
> Can someone explain why that occurs?
>
> Write timeout after 0.006s
> Query: 'INSERT INTO log_moment_import ("source", "reference", "user_id",
> "moment_id", "date", "finished") VALUES (3, '1305821272790495', 65675537,
> 0, '2017-04-12 13:00:51', NULL) IF NOT EXISTS
> Primary key and partition key is source + reference
> Message: Operation timed out - received only 1 responses.
>
> This appears every now and then in the log. When I check for the
> record in the table, it is there.
> I could explain that if the WTE occurred after the configured write
> timeout, but it happens within a few milliseconds.
> Is this caused by lock contention? It is possible that there are
> concurrent inserts on the same PK - actually that's the reason why I use
> LWTs.
>
> Thanks!
>
>

Re: WriteTimeoutException with LWT after few milliseconds

2017-04-12 Thread Roland Otta
sorry .. ignore my comment ...

i missed your comment that the record is in the table ...

On Wed, 2017-04-12 at 16:48 +0200, Roland Otta wrote:
Hi Benjamin,

It's unlikely that I can assist you, but nevertheless I'll give it a try ;-)

What's your consistency level for the insert?
What if one or more nodes are marked down and proper consistency can't be 
achieved?
Of course the error message does not indicate that problem (as it says it's a 
timeout)... but in that case you would get an instant error for inserts, 
wouldn't you?

br,
roland



On Wed, 2017-04-12 at 15:09 +0200, benjamin roth wrote:
Hi folks,

Can someone explain why that occurs?

Write timeout after 0.006s
Query: 'INSERT INTO log_moment_import ("source", "reference", "user_id", 
"moment_id", "date", "finished") VALUES (3, '1305821272790495', 65675537, 0, 
'2017-04-12 13:00:51', NULL) IF NOT EXISTS
Primary key and partition key is source + reference
Message: Operation timed out - received only 1 responses.

This appears every now and then in the log. When I check for the record in 
the table, it is there.
I could explain that if the WTE occurred after the configured write timeout, 
but it happens within a few milliseconds.
Is this caused by lock contention? It is possible that there are concurrent 
inserts on the same PK - actually that's the reason why I use LWTs.

Thanks!


Re: WriteTimeoutException with LWT after few milliseconds

2017-04-12 Thread Roland Otta
Hi Benjamin,

It's unlikely that I can assist you, but nevertheless I'll give it a try ;-)

What's your consistency level for the insert?
What if one or more nodes are marked down and proper consistency can't be 
achieved?
Of course the error message does not indicate that problem (as it says it's a 
timeout)... but in that case you would get an instant error for inserts, 
wouldn't you?

br,
roland



On Wed, 2017-04-12 at 15:09 +0200, benjamin roth wrote:
Hi folks,

Can someone explain why that occurs?

Write timeout after 0.006s
Query: 'INSERT INTO log_moment_import ("source", "reference", "user_id", 
"moment_id", "date", "finished") VALUES (3, '1305821272790495', 65675537, 0, 
'2017-04-12 13:00:51', NULL) IF NOT EXISTS
Primary key and partition key is source + reference
Message: Operation timed out - received only 1 responses.

This appears every now and then in the log. When I check for the record in 
the table, it is there.
I could explain that if the WTE occurred after the configured write timeout, 
but it happens within a few milliseconds.
Is this caused by lock contention? It is possible that there are concurrent 
inserts on the same PK - actually that's the reason why I use LWTs.

Thanks!


WriteTimeoutException with LWT after few milliseconds

2017-04-12 Thread benjamin roth
Hi folks,

Can someone explain why that occurs?

Write timeout after 0.006s
Query: 'INSERT INTO log_moment_import ("source", "reference", "user_id",
"moment_id", "date", "finished") VALUES (3, '1305821272790495', 65675537,
0, '2017-04-12 13:00:51', NULL) IF NOT EXISTS
Primary key and partition key is source + reference
Message: Operation timed out - received only 1 responses.

This appears every now and then in the log. When I check for the record
in the table, it is there.
I could explain that if the WTE occurred after the configured write timeout,
but it happens within a few milliseconds.
Is this caused by lock contention? It is possible that there are concurrent
inserts on the same PK - actually that's the reason why I use LWTs.

Thanks!


Re: [Marketing Mail] Re: [Marketing Mail] Re: nodetool status high load info

2017-04-12 Thread Osman YOZGATLIOGLU
Actually, there are no deletes right now, only inserts.
I use TWCS, and not much compaction occurs.
It just miscalculates the sstable sizes.


On 12-04-2017 14:58, anuja jain wrote:
Do you perform a lot of deletes or updates on your database?
On restart, it performs major compaction which can reduce the load on your node 
by removing stale data.
Try configuring compaction in your conf to perform minor compactions, i.e. 
compactions at a regular interval.

Thanks,
Anuja

On Wed, Apr 12, 2017 at 3:02 PM, Osman YOZGATLIOGLU 
> wrote:
Hello,

Here are the problem loads: the first node shows 206 TB of data. After a 
Cassandra restart it shows 51 TB, matching what df shows.

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load   Tokens   Owns (effective)  Host ID  Rack
UN  x.x.x.1  206 TB 256  50.6% xx  rack1
UN  x.x.x.2  190.77 TB  256  49.9% yy  rack1
..

--  Address   Load   Tokens   Owns (effective)  Host ID  Rack
UN  x.x.x.1  51.01 TB   256  50.6% xx  rack1
UN  x.x.x.2  49.84 TB   256  49.9% yy  rack1
..


nodetool tpstats;
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
MutationStage                     2         1    75536494778         0                 0
ViewMutationStage                 0         0              0         0                 0
ReadStage                         0         0          41402         0                 0
RequestResponseStage              0         0    35515109625         0                 0
ReadRepairStage                   0         0              3         0                 0
CounterMutationStage              0         0              0         0                 0
MiscStage                         0         0              0         0                 0
CompactionExecutor                5         5         732161         0                 0
MemtableReclaimMemory             0         0         198602         0                 0
PendingRangeCalculator            0         0             11         0                 0
GossipStage                       0         0        3854373         0                 0
SecondaryIndexManagement          0         0              0         0                 0
HintsDispatcher                   1         7              6         0                 0
MigrationStage                    0         0              6         0                 0
MemtablePostFlush                 0         0         200265         0                 0
ValidationExecutor                0         0              0         0                 0
Sampler                           0         0              0         0                 0
MemtableFlushWriter               0         0         198602         0                 0
InternalResponseStage             0         0        5209219         0                 0
AntiEntropyStage                  0         0              0         0                 0
CacheCleanupExecutor              0         0              0         0                 0
Native-Transport-Requests         0         0    15910719923         0         192131887

Message type   Dropped
READ 0
RANGE_SLICE  0
_TRACE   0
HINT 0
MUTATION   185
COUNTER_MUTATION 0
BATCH_STORE  0
BATCH_REMOVE 0
REQUEST_RESPONSE 0
PAGED_RANGE  0
READ_REPAIR  0

sar values;
05:10:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
05:20:01        all     26.96     16.09      3.73      2.23      0.00     50.99
05:30:02        all     26.99     16.83      3.82      2.86      0.00     49.50
05:40:01        all     27.17     18.19      3.83      0.89      0.00     49.91
05:50:01        all     27.16     18.74      3.80      0.28      0.00     50.02
06:00:01        all     26.30     19.88      3.88      0.29      0.00     49.64
06:10:01        all     28.02     21.11      3.91      0.28      0.00     46.68
06:20:01        all     28.37     19.64      3.98      0.40      0.00     47.61
06:30:01        all     29.56     19.51      4.08      0.45      0.00     46.40
06:40:01        all     29.28     20.56      4.08      0.34      0.00     45.74
06:50:01        all     29.46     19.15      3.99      0.19      0.00     47.20
07:00:01        all     29.45     21.09      4.07      0.26      0.00     45.13
07:10:01        all     29.23     21.59      4.18      0.29      0.00     44.71
07:20:01        all     30.78     21.24      4.09      0.48      0.00     43.40
07:30:01        all     29.06     21.63      4.09      0.27      0.00     44.94
07:40:01        all     28.84     21.85      4.13      1.76      0.00     43.41
07:50:01        all     29.22     21.35      4.14

Re: [Marketing Mail] Re: nodetool status high load info

2017-04-12 Thread anuja jain
Do you perform a lot of deletes or updates on your database?
On restart, it performs major compaction which can reduce the load on your
node by removing stale data.
Try configuring compaction in you conf to perform minor compaction i.e.
compactions at a regular interval.

Thanks,
Anuja

On Wed, Apr 12, 2017 at 3:02 PM, Osman YOZGATLIOGLU <
osman.yozgatlio...@krontech.com> wrote:

> Hello,
>
> Here are the problem loads: the first node shows 206 TB of data. After a
> Cassandra restart it shows 51 TB, matching what df shows.
>
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address   Load   Tokens   Owns (effective)  Host ID  Rack
> UN  x.x.x.1  206 TB 256  50.6% xx  rack1
> UN  x.x.x.2  190.77 TB  256  49.9% yy  rack1
> ..
>
> --  Address   Load   Tokens   Owns (effective)  Host ID  Rack
> UN  x.x.x.1  51.01 TB   256  50.6% xx  rack1
> UN  x.x.x.2  49.84 TB   256  49.9% yy  rack1
> ..
>
>
> nodetool tpstats;
> Pool Name                    Active   Pending      Completed   Blocked  All time blocked
> MutationStage                     2         1    75536494778         0                 0
> ViewMutationStage                 0         0              0         0                 0
> ReadStage                         0         0          41402         0                 0
> RequestResponseStage              0         0    35515109625         0                 0
> ReadRepairStage                   0         0              3         0                 0
> CounterMutationStage              0         0              0         0                 0
> MiscStage                         0         0              0         0                 0
> CompactionExecutor                5         5         732161         0                 0
> MemtableReclaimMemory             0         0         198602         0                 0
> PendingRangeCalculator            0         0             11         0                 0
> GossipStage                       0         0        3854373         0                 0
> SecondaryIndexManagement          0         0              0         0                 0
> HintsDispatcher                   1         7              6         0                 0
> MigrationStage                    0         0              6         0                 0
> MemtablePostFlush                 0         0         200265         0                 0
> ValidationExecutor                0         0              0         0                 0
> Sampler                           0         0              0         0                 0
> MemtableFlushWriter               0         0         198602         0                 0
> InternalResponseStage             0         0        5209219         0                 0
> AntiEntropyStage                  0         0              0         0                 0
> CacheCleanupExecutor              0         0              0         0                 0
> Native-Transport-Requests         0         0    15910719923         0         192131887
>
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> HINT 0
> MUTATION   185
> COUNTER_MUTATION 0
> BATCH_STORE  0
> BATCH_REMOVE 0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
>
> sar values;
> 05:10:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
> 05:20:01        all     26.96     16.09      3.73      2.23      0.00     50.99
> 05:30:02        all     26.99     16.83      3.82      2.86      0.00     49.50
> 05:40:01        all     27.17     18.19      3.83      0.89      0.00     49.91
> 05:50:01        all     27.16     18.74      3.80      0.28      0.00     50.02
> 06:00:01        all     26.30     19.88      3.88      0.29      0.00     49.64
> 06:10:01        all     28.02     21.11      3.91      0.28      0.00     46.68
> 06:20:01        all     28.37     19.64      3.98      0.40      0.00     47.61
> 06:30:01        all     29.56     19.51      4.08      0.45      0.00     46.40
> 06:40:01        all     29.28     20.56      4.08      0.34      0.00     45.74
> 06:50:01        all     29.46     19.15      3.99      0.19      0.00     47.20
> 07:00:01        all     29.45     21.09      4.07      0.26      0.00     45.13
> 07:10:01        all     29.23     21.59      4.18      0.29      0.00     44.71
> 07:20:01        all     30.78     21.24      4.09      0.48      0.00     43.40
> 07:30:01        all     29.06     21.63      4.09      0.27      0.00     44.94
> 07:40:01        all     28.84     21.85      4.13      1.76      0.00     43.41
> 07:50:01        all     29.22     21.35      4.14      2.53      0.00     42.76
> 08:00:01        all     30.10     21.66      4.24      2.39      0.00     41.60
> 08:10:01        all 

How to change frozen to non frozen columns in cassandra

2017-04-12 Thread anuja jain
Hi ,
I have a table with columns of type frozen<list>.
I want to convert it to a simple (non-frozen) list.
How can I do that without dropping the existing column? I have data in that
column.
I am using DSE 4.8.11.

Thanks,
Anuja


Re: Can we get username and timestamp in cqlsh_history?

2017-04-12 Thread anuja jain
Thanks Nicolas. That is exactly what I was looking for.

On Tue, Apr 4, 2017 at 12:08 AM, Durity, Sean R  wrote:

> Sounds like you want full auditing of CQL in the cluster. I have not seen
> anything built into the open source version for that (but I could be
> missing something). DataStax Enterprise does have an auditing feature.
>
>
>
>
>
> Sean Durity
>
>
>
> *From:* anuja jain [mailto:anujaja...@gmail.com]
> *Sent:* Wednesday, March 29, 2017 7:37 AM
> *To:* user@cassandra.apache.org
> *Subject:* Can we get username and timestamp in cqlsh_history?
>
>
>
> Hi,
>
> I have a cassandra cluster having a lot of keyspaces and users. I want to
> get the history of cql commands along with the username and the time at
> which the command is run.
>
> Also if we are running some commands from GUI tools like
> Devcenter,dbeaver, can we log those commands too? If yes, how?
>
>
>
> Thanks,
>
> Anuja
>
> --
>
> The information in this Internet Email is confidential and may be legally
> privileged. It is intended solely for the addressee. Access to this Email
> by anyone else is unauthorized. If you are not the intended recipient, any
> disclosure, copying, distribution or any action taken or omitted to be
> taken in reliance on it, is prohibited and may be unlawful. When addressed
> to our clients any opinions or advice contained in this Email are subject
> to the terms and conditions expressed in any applicable governing The Home
> Depot terms of business or client engagement letter. The Home Depot
> disclaims all responsibility and liability for the accuracy and content of
> this attachment and for any damages or losses arising from any
> inaccuracies, errors, viruses, e.g., worms, trojan horses, etc., or other
> items of a destructive nature, which may be contained in this attachment
> and shall not be liable for direct, indirect, consequential or special
> damages in connection with this e-mail message or its attachment.
>


Re: [Marketing Mail] Re: nodetool status high load info

2017-04-12 Thread Osman YOZGATLIOGLU
Hello,

Here are the problem loads: the first node shows 206 TB of data. After a 
Cassandra restart it shows 51 TB, matching what df shows.

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address   Load   Tokens   Owns (effective)  Host ID  Rack
UN  x.x.x.1  206 TB 256  50.6% xx  rack1
UN  x.x.x.2  190.77 TB  256  49.9% yy  rack1
..

--  Address   Load   Tokens   Owns (effective)  Host ID  Rack
UN  x.x.x.1  51.01 TB   256  50.6% xx  rack1
UN  x.x.x.2  49.84 TB   256  49.9% yy  rack1
..


nodetool tpstats;
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
MutationStage                     2         1    75536494778         0                 0
ViewMutationStage                 0         0              0         0                 0
ReadStage                         0         0          41402         0                 0
RequestResponseStage              0         0    35515109625         0                 0
ReadRepairStage                   0         0              3         0                 0
CounterMutationStage              0         0              0         0                 0
MiscStage                         0         0              0         0                 0
CompactionExecutor                5         5         732161         0                 0
MemtableReclaimMemory             0         0         198602         0                 0
PendingRangeCalculator            0         0             11         0                 0
GossipStage                       0         0        3854373         0                 0
SecondaryIndexManagement          0         0              0         0                 0
HintsDispatcher                   1         7              6         0                 0
MigrationStage                    0         0              6         0                 0
MemtablePostFlush                 0         0         200265         0                 0
ValidationExecutor                0         0              0         0                 0
Sampler                           0         0              0         0                 0
MemtableFlushWriter               0         0         198602         0                 0
InternalResponseStage             0         0        5209219         0                 0
AntiEntropyStage                  0         0              0         0                 0
CacheCleanupExecutor              0         0              0         0                 0
Native-Transport-Requests         0         0    15910719923         0         192131887

Message type   Dropped
READ 0
RANGE_SLICE  0
_TRACE   0
HINT 0
MUTATION   185
COUNTER_MUTATION 0
BATCH_STORE  0
BATCH_REMOVE 0
REQUEST_RESPONSE 0
PAGED_RANGE  0
READ_REPAIR  0

sar values;
05:10:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
05:20:01        all     26.96     16.09      3.73      2.23      0.00     50.99
05:30:02        all     26.99     16.83      3.82      2.86      0.00     49.50
05:40:01        all     27.17     18.19      3.83      0.89      0.00     49.91
05:50:01        all     27.16     18.74      3.80      0.28      0.00     50.02
06:00:01        all     26.30     19.88      3.88      0.29      0.00     49.64
06:10:01        all     28.02     21.11      3.91      0.28      0.00     46.68
06:20:01        all     28.37     19.64      3.98      0.40      0.00     47.61
06:30:01        all     29.56     19.51      4.08      0.45      0.00     46.40
06:40:01        all     29.28     20.56      4.08      0.34      0.00     45.74
06:50:01        all     29.46     19.15      3.99      0.19      0.00     47.20
07:00:01        all     29.45     21.09      4.07      0.26      0.00     45.13
07:10:01        all     29.23     21.59      4.18      0.29      0.00     44.71
07:20:01        all     30.78     21.24      4.09      0.48      0.00     43.40
07:30:01        all     29.06     21.63      4.09      0.27      0.00     44.94
07:40:01        all     28.84     21.85      4.13      1.76      0.00     43.41
07:50:01        all     29.22     21.35      4.14      2.53      0.00     42.76
08:00:01        all     30.10     21.66      4.24      2.39      0.00     41.60
08:10:01        all     28.63     21.69      4.22      2.57      0.00     42.88
08:20:01        all     28.63     20.78      4.08      2.61      0.00     43.91
08:30:01        all     30.46     20.08      3.83      2.58      0.00     43.05
08:40:01        all     27.71     21.31      4.06      2.60      0.00     44.33
08:50:01        all     28.87     21.49      4.15      2.58      0.00     42.91
09:00:01        all     29.61     21.38      3.86      2.51      0.00     42.64
09:10:01  

Re: nodetool status high load info

2017-04-12 Thread Bhuvan Rawal
Try nodetool tpstats; it can lead you to where your threads are stuck.
There could be various reasons for the load to go high, like the disk or CPU
getting choked; you'll probably need to check dstat and iostat output along
with the Cassandra thread pool stats to get a decent idea.
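When reading tpstats output, the Pending and Blocked columns are the usual red flags. A small sketch that filters pool lines for them (the sample values are invented to match the format):

```python
def busy_pools(tpstats_text):
    """Return pools whose Pending or Blocked count is non-zero.
    Expects 'nodetool tpstats' pool lines: name plus 5 numeric columns."""
    flagged = {}
    for line in tpstats_text.splitlines():
        parts = line.split()
        if len(parts) < 6 or not parts[-5].isdigit():
            continue  # skip headers, blanks, and dropped-message lines
        name = " ".join(parts[:-5])
        active, pending, completed, blocked, all_time = map(int, parts[-5:])
        if pending or blocked:
            flagged[name] = {"pending": pending, "blocked": blocked}
    return flagged

sample = """\
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
MutationStage                     0         0       12345678         0                 0
CompactionExecutor                5         5         732161         0                 0
Native-Transport-Requests         0         0    15910719923         0         192131887
"""
print(busy_pools(sample))  # {'CompactionExecutor': {'pending': 5, 'blocked': 0}}
```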

On Wed, Apr 12, 2017 at 1:48 PM, Osman YOZGATLIOGLU <
osman.yozgatlio...@krontech.com> wrote:

> Hello,
>
> Nodetool status shows much more than the actual data size.
> When I restart the node, it shows normal for a while, then the load
> increases over time. Where should I look?
>
> Cassandra 3.0.8, jdk 1.8.121
>
> Regards,
> Osman
>
>
> This e-mail message, including any attachments, is for the sole use of the
> person to whom it has been sent, and may contain information that is
> confidential or legally protected. If you are not the intended recipient or
> have received this message in error, you are not authorized to copy,
> distribute, or otherwise use this message or its attachments. Please notify
> the sender immediately by return e-mail and permanently delete this message
> and any attachments. KRON makes no warranty that this e-mail is error or
> virus free.
>


nodetool status high load info

2017-04-12 Thread Osman YOZGATLIOGLU
Hello,

Nodetool status shows much more than the actual data size.
When I restart the node, it shows normal for a while, then the load increases 
over time. Where should I look?

Cassandra 3.0.8, jdk 1.8.121

Regards,
Osman

