[no subject]

2024-02-03 Thread Gavin McDonald
Hello to all users, contributors and Committers!

The Travel Assistance Committee (TAC) are pleased to announce that
travel assistance applications for Community over Code EU 2024 are now
open!

We will be supporting Community over Code EU in Bratislava, Slovakia,
June 3rd - 5th, 2024.

TAC exists to help those that would like to attend Community over Code
events, but are unable to do so for financial reasons. For more info
on this year's applications and qualifying criteria, please visit the
TAC website at < https://tac.apache.org/ >. Applications are already
open on https://tac-apply.apache.org/, so don't delay!

The Apache Travel Assistance Committee will only be accepting
applications from those people who are able to attend the full event.

Important: Applications close on Friday, March 1st, 2024.

Applicants have until the closing date above to submit their
applications (which should contain as much supporting material as
required to efficiently and accurately process their request); this
will enable TAC to announce successful applications shortly
afterwards.

As usual, TAC expects to receive applications from a diverse range of
backgrounds; therefore, we encourage (as always) anyone thinking about
sending in an application to do so ASAP.

For those who will need a visa to enter the country, we advise you to apply
now so that you have enough time in case of interview delays. Do not
wait until you know whether you have been accepted.

We look forward to greeting many of you in Bratislava, Slovakia in June,
2024!

Kind Regards,

Gavin

(On behalf of the Travel Assistance Committee)


[no subject]

2022-05-27 Thread Prachi Rath
unsubscribe


[no subject]

2022-03-14 Thread Patrick McFadin
Hello Cassandra Community!

Data on Kubernetes day will be on Monday, May 16th. This is a virtual and
in-person event in Valencia Spain the day before KubeCon EU. The CFP closes
tomorrow and I'm here to rally the Cassandra community to show off some of
the great things you are doing with Cassandra + Kubernetes. (You know who
you are...)

https://dok.community/dok-day-europe-2022-kubecon/

Reply here or hit me up on the ASF slack if you have questions about this
event. It would be great to see our community representing!

Patrick


[no subject]

2020-01-22 Thread Sowjanya Karangula
stop


[no subject]

2019-01-13 Thread Irtiza Ali
Unsubscribe

On Sun, 13 Jan 2019, 22:11 Osman YOZGATLIOĞLU <
osman.yozgatlio...@krontech.com> wrote:

> Thank you for clarification.
>
> Regards
>
> Osman
>
>
> On 13.01.2019 11:24, Jürgen Albersdorfer wrote:
>
> Just turn it off. There is no persistent change to the cluster until the
> node has finished bootstrap and is in status UN.
>
> Von meinem iPhone gesendet
>
> Am 12.01.2019 um 22:36 schrieb Osman YOZGATLIOĞLU <
> osman.yozgatlio...@krontech.com>:
>
> Hello,
>
> I have one joining node. I decided to change cluster topology and I need
> to move this node to another cluster.
>
> How can I decommission joining node? I can't find exact case at google.
>
>
> Regards,
> Osman
>
>


[no subject]

2018-09-05 Thread sha p
Hi all,
I am new to Cassandra; I was asked to migrate data from Oracle to Cassandra.
Please help me with your valuable guidance.
1) Can it be done using open-source Cassandra?
2) Where should I start the data model from?
3) I should use Java; what kind of jars/libs/tools do I need to use? (See the
copy-loop sketch after this list.)
4) How do I decide the size of the cluster? Please provide some sample
guidelines.
5) This will be in production, so what kind of things should I take care of
for better support and debugging later?
6) Please provide some good books/links which can help me in this task.
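For reference, the core of such a migration is just a copy loop; here is a
minimal sketch, assuming the cx_Oracle and DataStax cassandra-driver
packages (all table and column names below are hypothetical placeholders):

    # Minimal sketch of an Oracle -> Cassandra copy loop. Assumes cx_Oracle
    # and the DataStax cassandra-driver; all names are hypothetical.
    import cx_Oracle
    from cassandra.cluster import Cluster

    src = cx_Oracle.connect("user", "password", "oracle-host/SERVICE")  # hypothetical DSN
    dst = Cluster(["127.0.0.1"]).connect("my_keyspace")                 # hypothetical keyspace

    # Prepared statements use ? placeholders in the cassandra-driver.
    insert = dst.prepare(
        "INSERT INTO orders (order_id, customer_id, total) VALUES (?, ?, ?)")

    cur = src.cursor()
    cur.execute("SELECT order_id, customer_id, total FROM legacy_orders")
    for row in cur:               # cx_Oracle cursors yield one tuple per row
        dst.execute(insert, row)  # synchronous; use execute_async for volume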


Thanks in advance.
Every bit of your help is highly appreciated.

Regards,
Shyam


[no subject]

2018-06-19 Thread Deniz Acay
Hello there,

Let me get straight to the point. Yesterday our three-node Cassandra
production cluster had a problem, and we have not found a solution yet.
Before taking more radical actions, I would like to consult you about the
issue.

We are using Cassandra version 3.11.0. The cluster lives on AWS EC2 nodes
of type m4.2xlarge with 32 GB of RAM. Each node is Dockerized using host
networking mode. Two EBS SSD volumes are attached to each node: 32GB for
commit logs (io1) and 4TB for the data directory (gp2). We have been running
smoothly for 7 months and have filled 55% of the data directory on each node.
Now our C* nodes fail during the bootstrap phase. Let me paste the logs
from the system.log file from start to the time of the error:

INFO  [main] 2018-06-19 09:51:32,726 YamlConfigurationLoader.java:89 -
Configuration location:
file:/opt/apache-cassandra-3.11.0/conf/cassandra.yaml
INFO  [main] 2018-06-19 09:51:32,954 Config.java:481 - Node
configuration:[allocate_tokens_for_keyspace=botanalytics;
authenticator=AllowAllAuthenticator; authorizer=AllowAllAuthorizer;
auto_bootstrap=false; auto_snapshot=true; back_pressure_enabled=false;
back_pressure_strategy=org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9,
factor=5, flow=FAST}; batch_size_fail_threshold_in_kb=50;
batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024;
broadcast_address=null; broadcast_rpc_address=null;
buffer_pool_use_heap_if_exhausted=true; cas_contention_timeout_in_ms=1000;
cdc_enabled=false; cdc_free_space_check_interval_ms=250;
cdc_raw_directory=/var/data/cassandra/cdc_raw; cdc_total_space_in_mb=0;
client_encryption_options=; cluster_name=Botanalytics Production;
column_index_cache_size_in_kb=2; column_index_size_in_kb=64;
commit_failure_policy=stop_commit; commitlog_compression=null;
commitlog_directory=/var/data/cassandra_commitlog;
commitlog_max_compression_buffers_in_pool=3;
commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32;
commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN;
commitlog_sync_period_in_ms=1; commitlog_total_space_in_mb=8192;
compaction_large_partition_warning_threshold_mb=100;
compaction_throughput_mb_per_sec=1600; concurrent_compactors=null;
concurrent_counter_writes=32; concurrent_materialized_view_writes=32;
concurrent_reads=32; concurrent_replicates=null; concurrent_writes=64;
counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200;
counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000;
credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1;
credentials_validity_in_ms=2000; cross_node_timeout=false;
data_file_directories=[Ljava.lang.String;@662b4c69; disk_access_mode=auto;
disk_failure_policy=best_effort;
disk_optimization_estimate_percentile=0.95;
disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd;
dynamic_snitch=true; dynamic_snitch_badness_threshold=0.1;
dynamic_snitch_reset_interval_in_ms=60;
dynamic_snitch_update_interval_in_ms=100;
enable_scripted_user_defined_functions=false;
enable_user_defined_functions=false;
enable_user_defined_functions_threads=true; encryption_options=null;
endpoint_snitch=Ec2Snitch; file_cache_size_in_mb=null;
gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000;
hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true;
hinted_handoff_throttle_in_kb=1024; hints_compression=null;
hints_directory=null; hints_flush_period_in_ms=1;
incremental_backups=false; index_interval=null;
index_summary_capacity_in_mb=null;
index_summary_resize_interval_in_minutes=60; initial_token=null;
inter_dc_stream_throughput_outbound_megabits_per_sec=200;
inter_dc_tcp_nodelay=false; internode_authenticator=null;
internode_compression=dc; internode_recv_buff_size_in_bytes=0;
internode_send_buff_size_in_bytes=0; key_cache_keys_to_save=2147483647;
key_cache_save_period=14400; key_cache_size_in_mb=null;
listen_address=172.31.6.233; listen_interface=null;
listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false;
max_hint_window_in_ms=1080; max_hints_delivery_threads=2;
max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null;
max_streaming_retries=3; max_value_size_in_mb=256;
memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null;
memtable_flush_writers=0; memtable_heap_space_in_mb=null;
memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50;
native_transport_max_concurrent_connections=-1;
native_transport_max_concurrent_connections_per_ip=-1;
native_transport_max_frame_size_in_mb=256;
native_transport_max_threads=128; native_transport_port=9042;
native_transport_port_ssl=null; num_tokens=8;
otc_backlog_expiration_interval_ms=200;
otc_coalescing_enough_coalesced_messages=8;
otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200;
partitioner=org.apache.cassandra.dht.Murmur3Partitioner;
permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1;
permissions_validity_in_ms=2000; phi_convict_threshold=8.0;

[no subject]

2018-06-15 Thread Vsevolod Filaretov
Good time of day everyone,

I've got three questions on Cassandra paging mechanics and cluster usage
regulation.

1) Am I correct to assume that the larger the page size a user session has
set, the larger the portion of cluster/coordinator node resources that will
be hogged by the corresponding session?

2) Do I understand correctly that page size (imagine we have no timeout
settings) is limited only by the RAM and iops which I want to hand down to a
single user session?

3) Am I correct to assume that the page size/read request timeout allowance
I set is a direct representation of the chance of locking some node to a
single user's requests?
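For concreteness, here is a minimal sketch of how a client sets the page size
with the DataStax python driver (the keyspace and table names are
hypothetical); fetch_size caps how many rows the coordinator materializes and
ships per page, which is the resource knob the questions above refer to:

    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")  # hypothetical

    # fetch_size bounds the rows per page; bigger pages mean more memory
    # and work on the coordinator per round trip for this session.
    stmt = SimpleStatement("SELECT * FROM events", fetch_size=500)
    for row in session.execute(stmt):  # next pages are fetched lazily
        print(row)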


Best regards,

Vsevolod.


[no subject]

2017-11-04 Thread vbhang...@gmail.com
Kishore, Here is the table desc and cfstats o/p  ---
===
CREATE TABLE ks1.table1 (
key text,
column1 
'org.apache.cassandra.db.marshal.DynamicCompositeType(org.apache.cassandra.db.marshal.UTF8Type)',
value blob,
PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (column1 ASC)
AND bloom_filter_fp_chance = 0.1
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'sstable_size_in_mb': '256', 'class': 
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 86400
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';
==
SSTable count: 261
SSTables in each level: [0, 6, 40, 215, 0, 0, 0, 0, 0]
Space used (live): 129255873809
Space used (total): 129255873809
Space used by snapshots (total): 0
Off heap memory used (total): 20977830
SSTable Compression Ratio: 0.7879224917729545
Number of keys (estimate): 71810
Memtable cell count: 2010
Memtable data size: 226253
Memtable off heap memory used: 1327192
Memtable switch count: 47
Local read count: 11688546
Local read latency: 0.195 ms
Local write count: 225262
Local write latency: 0.055 ms
Pending flushes: 0
Bloom filter false positives: 146072
Bloom filter false ratio: 0.01543
Bloom filter space used: 35592
Bloom filter off heap memory used: 33504
Index summary off heap memory used: 26686
Compression metadata off heap memory used: 19590448
Compacted partition minimum bytes: 25
Compacted partition maximum bytes: 10299432635
Compacted partition mean bytes: 2334776
Average live cells per slice (last five minutes): 4.346574725773759
Maximum live cells per slice (last five minutes): 2553.0
Average tombstones per slice (last five minutes): 0.3096773382165276
Maximum tombstones per slice (last five minutes): 804.0
=

On 2017-10-24 14:39, "Mohapatra, Kishore" <kishore.mohapa...@nuance.com> wrote: 
> Hi Vedant,
>   I was actually referring to a command line select query
> with consistency level = ALL. This will force a read repair in the background.
> But as I can see, you have tried with consistency level = ONE and it is
> still timing out. So what error do you see in the system.log?
> Streaming error ?
> 
> Can you also check how many sstables there are for that table. It seems like 
> your compaction may not be working.
> Is your repair job running fine ?
> 
> Thanks
> 
> Kishore Mohapatra
> Principal Operations DBA
> Seattle, WA
> Ph : 425-691-6417 (cell)
> Email : kishore.mohapa...@nuance.com
> 
> 
> -Original Message-
> From: vbhang...@gmail.com [mailto:vbhang...@gmail.com] 
> Sent: Monday, October 23, 2017 6:59 PM
> To: user@cassandra.apache.org
> Subject: [EXTERNAL] 
> 
> It is RF=3 and 12 nodes in 3 regions and 6 in other 2, so total 48 nodes. Are 
> you suggesting forced read repair by reading consistency of ONE or by bumping 
> up read_repair_chance? 
> 
> We have tried from command  line with ONE but that times out. 
> On 2017-10-23 10:18, "Mohapatra, Kishore" <kishore.mohapa...@nuance.com> 
> wrote: 
> > What is your RF for the keyspace and how many nodes are there in each DC ?
> > 
> > Did you force a Read Repair to see, if you are getting the data or getting 
> > an error ?
> > 
> > Thanks
> > 
> > Kishore Mohapatra
> > Principal Operations DBA
> > Seattle, WA
> > Email : kishore.mohapa...@nuance.com
> > 
> > 
> > -Original Message-
> > From: vbhang...@gmail.com [mailto:vbhang...@gmail.com]
> > Sent: Sunday, October 22, 2017 11:31 PM
> > To: user@cassandra.apache.org
> > Subject: [EXTERNAL]
> > 
> > -- Consistency level  LQ
> > -- It started happening approximately a couple of months back. The issue is very 
> > inconsistent and can't be reproduced. It used to happen only rarely earlier 
> > (over the last few years).
> > -- Th

[no subject]

2017-10-23 Thread vbhang...@gmail.com
It is RF=3 and 12 nodes in 3 regions and 6 in other 2, so total 48 nodes. Are 
you suggesting forced read repair by reading consistency of ONE or by bumping 
up read_repair_chance? 

We have tried from command  line with ONE but that times out. 
On 2017-10-23 10:18, "Mohapatra, Kishore" <kishore.mohapa...@nuance.com> wrote: 
> What is your RF for the keyspace and how many nodes are there in each DC ?
> 
> Did you force a Read Repair to see, if you are getting the data or getting an 
> error ?
> 
> Thanks
> 
> Kishore Mohapatra
> Principal Operations DBA
> Seattle, WA
> Email : kishore.mohapa...@nuance.com
> 
> 
> -Original Message-
> From: vbhang...@gmail.com [mailto:vbhang...@gmail.com] 
> Sent: Sunday, October 22, 2017 11:31 PM
> To: user@cassandra.apache.org
> Subject: [EXTERNAL] 
> 
> -- Consistency level  LQ
> -- It started happening approximately a couple of months back. The issue is very 
> inconsistent and can't be reproduced. It used to happen only rarely earlier 
> (over the last few years).
> -- There are very few GC pauses, but they don't coincide with the issue. 
> -- 99% latency is less than 80ms and 75% is less than 5ms.
> 
> - Vedant
> On 2017-10-22 21:29, Jeff Jirsa <jji...@gmail.com> wrote: 
> > What consistency level do you use on writes?
> > Did this just start or has it always happened ?
> > Are you seeing GC pauses at all?
> > 
> > What’s your 99% write latency? 
> > 
> > --
> > Jeff Jirsa
> > 
> > 
> > > On Oct 22, 2017, at 9:21 PM, "vbhang...@gmail.com"<vbhang...@gmail.com> 
> > > wrote:
> > > 
> > > This is for Cassandra 2.1.13. At times there are replication delays 
> > > across multiple regions. Data is available (getting queried from command 
> > > line) in 1 region but not seen in other region(s).  This is not 
> > > consistent. It is cluster spanning multiple data centers with total > 30 
> > > nodes. Keyspace is configured to get replicated in all the data centers.
> > > 
> > > Hints are getting piled up in the source region. This happens especially 
> > > for large data payload (appro 1kb to few MB blobs).  Network  level 
> > > congestion or saturation does not seem to be an issue.  There is no 
> > > memory/cpu pressure on individual nodes.
> > > 
> > > I am sharing Cassandra.yaml below, any pointers on what can be tuned are 
> > > highly appreciated. Let me know if you need any other info.
> > > 
> > > We tried bumping up hinted_handoff_throttle_in_kb: 30720 and
> > > max_hints_delivery_threads: 12 on one of the nodes to
> > > see if it speeds up hints delivery; there was some improvement but not a
> > > whole lot.
> > > 
> > > Thanks
> > > 
> > > =
> > > # Cassandra storage config YAML
> > > 
> > > # NOTE:
> > > #   See 
> > > https://urldefense.proofpoint.com/v2/url?u=http-3A__wiki.apache.org_cassandra_StorageConfiguration=DwIBaQ=djjh8EKwHtOepW4Bjau0lKhLlu-DxM1dlgP0rrLsOzY=O20_rcIS1QazTO3_J10I1cPIygxnuBZ4sUCz1TS16XE=n1yhBCTDUhib4RoMH1SWmzcJU1bb-kL6WyTdhDlBL5g=1SQ9gAKWYTFTLEnR1ubZ0zPq_wtBEpY9udxtmNRr6Qg=
> > >   for
> > > #   full explanations of configuration directives
> > > # /NOTE
> > > 
> > > # The name of the cluster. This is mainly used to prevent machines 
> > > in # one logical cluster from joining another.
> > > cluster_name: "central"
> > > 
> > > # This defines the number of tokens randomly assigned to this node 
> > > on the ring # The more tokens, relative to other nodes, the larger 
> > > the proportion of data # that this node will store. You probably 
> > > want all nodes to have the same number # of tokens assuming they have 
> > > equal hardware capability.
> > > #
> > > # If you leave this unspecified, Cassandra will use the default of 1 
> > > token for legacy compatibility, # and will use the initial_token as 
> > > described below.
> > > #
> > > # Specifying initial_token will override this setting on the node's 
> > > initial start, # on subsequent starts, this setting will apply even if 
> > > initial token is set.
> > > #
> > > # If you already have a cluster with 1 token per node, and wish to 
> > > migrate to # multiple tokens per node, see 
> > > https://urldefense.proofpoint.com/v2/url?u=http-3A__wiki.apache.org_
> > > cassandra_Operations=DwIBaQ=djjh8EKwHtOepW4Bjau0lKhLlu-DxM1dlgP0
> >

[no subject]

2017-10-23 Thread vbhang...@gmail.com
umber when you have multi-dc deployments, since
> > # cross-dc handoff tends to be slower
> > max_hints_delivery_threads: 6
> > 
> > # Maximum throttle in KBs per second, total. This will be
> > # reduced proportionally to the number of nodes in the cluster.
> > batchlog_replay_throttle_in_kb: 1024
> > 
> > # Authentication backend, implementing IAuthenticator; used to identify 
> > users
> > # Out of the box, Cassandra provides 
> > org.apache.cassandra.auth.{AllowAllAuthenticator,
> > # PasswordAuthenticator}.
> > #
> > # - AllowAllAuthenticator performs no checks - set it to disable 
> > authentication.
> > # - PasswordAuthenticator relies on username/password pairs to authenticate
> > #   users. It keeps usernames and hashed passwords in 
> > system_auth.credentials table.
> > #   Please increase system_auth keyspace replication factor if you use this 
> > authenticator.
> > authenticator: AllowAllAuthenticator
> > 
> > # Authorization backend, implementing IAuthorizer; used to limit 
> > access/provide permissions
> > # Out of the box, Cassandra provides 
> > org.apache.cassandra.auth.{AllowAllAuthorizer,
> > # CassandraAuthorizer}.
> > #
> > # - AllowAllAuthorizer allows any action to any user - set it to disable 
> > authorization.
> > # - CassandraAuthorizer stores permissions in system_auth.permissions 
> > table. Please
> > #   increase system_auth keyspace replication factor if you use this 
> > authorizer.
> > authorizer: AllowAllAuthorizer
> > 
> > # Validity period for permissions cache (fetching permissions can be an
> > # expensive operation depending on the authorizer, CassandraAuthorizer is
> > # one example). Defaults to 2000, set to 0 to disable.
> > # Will be disabled automatically for AllowAllAuthorizer.
> > permissions_validity_in_ms: 2000
> > 
> > # Refresh interval for permissions cache (if enabled).
> > # After this interval, cache entries become eligible for refresh. Upon next
> > # access, an async reload is scheduled and the old value returned until it
> > # completes. If permissions_validity_in_ms is non-zero, then this must be
> > # also.
> > # Defaults to the same value as permissions_validity_in_ms.
> > # permissions_update_interval_in_ms: 1000
> > 
> > # The partitioner is responsible for distributing groups of rows (by
> > # partition key) across nodes in the cluster.  You should leave this
> > # alone for new clusters.  The partitioner can NOT be changed without
> > # reloading all data, so when upgrading you should set this to the
> > # same partitioner you were already using.
> > #
> > # Besides Murmur3Partitioner, partitioners included for backwards
> > # compatibility include RandomPartitioner, ByteOrderedPartitioner, and
> > # OrderPreservingPartitioner.
> > #
> > partitioner: org.apache.cassandra.dht.RandomPartitioner
> > 
> > # Directories where Cassandra should store data on disk.  Cassandra
> > # will spread data evenly across them, subject to the granularity of
> > # the configured compaction strategy.
> > # If not set, the default directory is $CASSANDRA_HOME/data/data.
> > data_file_directories:
> > - /var/lib/cassandra/data
> > 
> > # commit log.  when running on magnetic HDD, this should be a
> > # separate spindle than the data directories.
> > # If not set, the default directory is $CASSANDRA_HOME/data/commitlog.
> > commitlog_directory: /data/cassandra/commitlog
> > 
> > # policy for data disk failures:
> > # die: shut down gossip and client transports and kill the JVM for any fs 
> > errors or
> > #  single-sstable errors, so the node can be replaced.
> > # stop_paranoid: shut down gossip and client transports even for 
> > single-sstable errors,
> > #kill the JVM for errors during startup.
> > # stop: shut down gossip and client transports, leaving the node 
> > effectively dead, but
> > #   can still be inspected via JMX, kill the JVM for errors during 
> > startup.
> > # best_effort: stop using the failed disk and respond to requests based on
> > #  remaining available sstables.  This means you WILL see 
> > obsolete
> > #  data at CL.ONE!
> > # ignore: ignore fatal errors and let requests fail, as in pre-1.2 Cassandra
> > disk_failure_policy: stop
> > 
> > # policy for commit disk failures:
> > # die: shut down gossip and Thrift and kill the JVM, so the node can be 
> > replaced.
> > # stop: shut down gossip and Thrift, leaving the no

[no subject]

2017-10-01 Thread Bill Walters
Hi All,

I need some help with deploying a monitoring and alerting system for our
new Cassandra 3.0.4 cluster that we are setting up in the AWS East region.
I have good experience with Cassandra, as we are running some 2.0.16
clusters in production on our on-prem servers. We use the Nagios tool to
monitor and alert our on-call people if any of the nodes on our on-prem
servers go down. (Nagios is the default monitoring and alerting system used
by our company.)
Since our leadership started a plan to migrate our infrastructure to the
cloud, we have chosen AWS as our public cloud.
We are planning to use the same old Nagios as our monitoring and alerting
system even for our cloud servers.
But I am not sure if this is the ideal approach; I have seen use cases where
Yelp used Sensu and Netflix wrote their own tool for monitoring their cloud
Cassandra clusters.

Please let me know if there are any cloud-native monitoring systems that
work well with Cassandra; we will review them for our setup.



Thank You,
Bill Walters.


[no subject]

2017-09-28 Thread Dan Kinder
Hi,

I recently upgraded our 16-node cluster from 2.2.6 to 3.11 and see the
following. The cluster does function, for a while, but then some stages
begin to back up and the node does not recover and does not drain the
tasks, even under no load. This happens both to MutationStage and
GossipStage.

I do see the following exception happen in the logs:


ERROR [ReadRepairStage:2328] 2017-09-26 23:07:55,440
CassandraDaemon.java:228 - Exception in thread
Thread[ReadRepairStage:2328,5,main]

org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out -
received only 1 responses.

at
org.apache.cassandra.service.DataResolver$RepairMergeListener.close(DataResolver.java:171)
~[apache-cassandra-3.11.0.jar:3.11.0]

at
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.close(UnfilteredPartitionIterators.java:182)
~[apache-cassandra-3.11.0.jar:3.11.0]

at
org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:82)
~[apache-cassandra-3.11.0.jar:3.11.0]

at
org.apache.cassandra.service.DataResolver.compareResponses(DataResolver.java:89)
~[apache-cassandra-3.11.0.jar:3.11.0]

at
org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:50)
~[apache-cassandra-3.11.0.jar:3.11.0]

at
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
~[apache-cassandra-3.11.0.jar:3.11.0]

at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
~[na:1.8.0_91]

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
~[na:1.8.0_91]

at
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
~[apache-cassandra-3.11.0.jar:3.11.0]

at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_91]


But it's hard to correlate precisely with things going bad. It is also very
strange to me since I have both read_repair_chance and
dclocal_read_repair_chance set to 0.0 for ALL of my tables. So it is
confusing why ReadRepairStage would err.

Anyone have thoughts on this? It's pretty muddling, and causes nodes to
lock up. Once it happens Cassandra can't even shut down, I have to kill -9.
If I can't find a resolution I'm going to need to downgrade and restore to
backup...

The only issue I found that looked similar is
https://issues.apache.org/jira/browse/CASSANDRA-12689 but that appears to
be fixed by 3.10.


$ nodetool tpstats

Pool Name Active   Pending  Completed   Blocked  All time blocked
ReadStage  0 0 582103 0   0
MiscStage  0 0  0 0   0
CompactionExecutor1111   2868 0   0
MutationStage 32   4593678   55057393 0   0
GossipStage1  2818 371487 0   0
RequestResponseStage   0 04345522 0   0
ReadRepairStage0 0 151473 0   0
CounterMutationStage   0 0  0 0   0
MemtableFlushWriter181 76 0   0
MemtablePostFlush  1   382139 0   0
ValidationExecutor 0 0  0 0   0
ViewMutationStage  0 0  0 0   0
CacheCleanupExecutor   0 0  0 0   0
PerDiskMemtableFlushWriter_10  0 0 69 0   0
PerDiskMemtableFlushWriter_11  0 0 69 0   0
MemtableReclaimMemory  0 0 81 0   0
PendingRangeCalculator 0 0 32 0   0
SecondaryIndexManagement   0 0  0 0   0
HintsDispatcher0 0596 0   0
PerDiskMemtableFlushWriter_1   0 0 69 0   0
Native-Transport-Requests 11 04547746 0   67
PerDiskMemtableFlushWriter_2   0 0 69 0   0
MigrationStage 1  1545586 0   0
PerDiskMemtableFlushWriter_0   0 0 80 0   0
Sampler0 0  0 0   0
PerDiskMemtableFlushWriter_5   0 0 69 0

[no subject]

2016-10-19 Thread Anseh Danesharasteh
unsubscribe


[no subject]

2016-03-15 Thread Rami Badran
Hi

i have the following cassandra schema structure:

CREATE TABLE users (
uid TEXT,
loginIds map<text, frozen<loginId>>,
primary key (uid)
);

CREATE TYPE loginId (
emails set<text>,
unverifiedEmails set<text>
);

and I tried to insert a record into my table, but I have a problem with the
loginIds attribute. Could you please advise how I can insert a record?
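For reference, a frozen UDT inside a map is written with CQL's
{ field: value } literal syntax; a minimal sketch with the python driver,
assuming the map key is a provider name such as 'site' and the keyspace
name is hypothetical:

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")  # hypothetical

    # UDT values use { field: value } literals; sets use { ... } as well.
    session.execute("""
        INSERT INTO users (uid, loginIds)
        VALUES ('user-1',
                {'site': {emails: {'a@example.com'},
                          unverifiedEmails: {'b@example.com'}}})
    """)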

-- 

Regards
Rami Badran


[no subject]

2015-11-30 Thread Jay Reddy



Test Subject

2015-09-14 Thread Ajay Garg
Testing simple content, as my previous email bounced :(

-- 
Regards,
Ajay


[no subject]

2015-01-05 Thread Nagesh
Hi All,

I have designed a column family

prodgroup text, prodid int, status int, PRIMARY KEY ((prodgroup), prodid,
status)

The data model is to cater for:

   - Get list of products from the product group
   - get list of products for a given range of ids
   - Get details of a specific product
   - Update status of the product active/inactive
   - Get list of products that are active or inactive (select * from
   product where prodgroup='xyz' and prodid > 0 and status = 0)

The design works fine, except for the last query. Cassandra does not allow
querying on status unless I fix the product id. I think defining a super
column family which has the key PRIMARY KEY ((prodgroup), status,
prodid) should work. I would like to get expert advice on other
alternatives.
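For reference, one common alternative is a second query table that clusters
on status before prodid, so the last query becomes a partition slice, at the
cost of the application writing to both tables; a minimal sketch with the
python driver (keyspace name hypothetical):

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")  # hypothetical

    # Clustering on status first makes "active products in a group" a slice.
    session.execute("""
        CREATE TABLE product_by_status (
            prodgroup text, status int, prodid int,
            PRIMARY KEY ((prodgroup), status, prodid))
    """)
    rows = session.execute(
        "SELECT prodid FROM product_by_status "
        "WHERE prodgroup = 'xyz' AND status = 0")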
-- 
Thanks,
Nageswara Rao.V

*The LORD reigns*


[no subject]

2014-08-25 Thread Sávio S . Teles de Oliveira
We're using Cassandra 2.0.9 with the DataStax Java Cassandra driver 2.0.0 in a
cluster of eight nodes.

We're doing an insert followed by a delete like:

delete from *column_family_name* where *id* = value

and then immediately a select to check whether the DELETE was successful.
Sometimes the value is still there!

Any suggestions?
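One thing worth ruling out is the consistency level: with a replication
factor greater than one, a DELETE at ONE followed immediately by a SELECT at
ONE can hit different replicas before the tombstone propagates. A minimal
sketch of read-your-writes via QUORUM on both statements (python driver shown
for brevity; the Java driver exposes the same setting, and the keyspace here
is hypothetical):

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")  # hypothetical

    # With W + R > RF, the read is guaranteed to see the delete's tombstone.
    delete = SimpleStatement("DELETE FROM column_family_name WHERE id = %s",
                             consistency_level=ConsistencyLevel.QUORUM)
    check = SimpleStatement("SELECT * FROM column_family_name WHERE id = %s",
                            consistency_level=ConsistencyLevel.QUORUM)
    session.execute(delete, [42])
    rows = session.execute(check, [42])
    assert not list(rows)  # the deleted row should no longer be visible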

-- 
Atenciosamente,
Sávio S. Teles de Oliveira
voice: +55 62 9136 6996
http://br.linkedin.com/in/savioteles
Mestrando em Ciências da Computação - UFG
Arquiteto de Software
CUIA Internet Brasil


python fast table copy/transform (subject updated)

2014-06-06 Thread Laing, Michael
Hi Marcelo,

I have updated the prerelease app in this gist:

https://gist.github.com/michaelplaing/37d89c8f5f09ae779e47

I found that it was too easy to overrun my Cassandra clusters so I added a
throttle arg which by default is 1000 rows per second.

Fixed a few bugs too, reworked the args, etc.

I'll be interested to hear if you find it useful and/or have any comments.

ml


On Thu, Jun 5, 2014 at 1:09 PM, Marcelo Elias Del Valle 
marc...@s1mbi0se.com.br wrote:

 Michael,

 I will try to test it up to tomorrow and I will let you know all the
 results.

 Thanks a lot!

 Best regards,
 Marcelo.


 2014-06-04 22:28 GMT-03:00 Laing, Michael michael.la...@nytimes.com:

 BTW you might want to put a LIMIT clause on your SELECT for testing. -ml


 On Wed, Jun 4, 2014 at 6:04 PM, Laing, Michael michael.la...@nytimes.com
  wrote:

 Marcelo,

 Here is a link to the preview of the python fast copy program:

 https://gist.github.com/michaelplaing/37d89c8f5f09ae779e47

 It will copy a table from one cluster to another with some
 transformation- they can be the same cluster.

 It has 3 main throttles to experiment with:

1. fetch_size: size of source pages in rows
2. worker_count: number of worker subprocesses
3. concurrency: number of async callback chains per worker subprocess

 It is easy to overrun Cassandra and the python driver, so I recommend
 starting with the defaults: fetch_size: 1000; worker_count: 2; concurrency:
 10.

 Additionally there are switches to set 'policies' by source and
 destination: retry (downgrade consistency), dc_aware, and token_aware.
 retry is useful if you are getting timeouts. For the others YMMV.

 To use it you need to define the SELECT and UPDATE cql statements as
 well as the 'map_fields' method.

 The worker subprocesses divide up the token range among themselves and
 proceed quasi-independently. Each worker opens a connection to each cluster
 and the driver sets up connection pools to the nodes in the cluster. Anyway
 there are a lot of processes, threads, callbacks going at once so it is fun
 to watch.

 On my regional cluster of small nodes in AWS I got about 3000 rows per
 second transferred after things warmed up a bit - each row about 6kb.

 ml


 On Wed, Jun 4, 2014 at 11:49 AM, Laing, Michael 
 michael.la...@nytimes.com wrote:

 OK Marcelo, I'll work on it today. -ml


 On Tue, Jun 3, 2014 at 8:24 PM, Marcelo Elias Del Valle 
 marc...@s1mbi0se.com.br wrote:

 Hi Michael,

 For sure I would be interested in this program!

 I am new both to python and to cql. I started creating this copier,
 but was having problems with timeouts. Alex solved my problem here on the
 list, but I think I will still have a lot of trouble making the copy to
 work fine.

 I open sourced my version here:
 https://github.com/s1mbi0se/cql_record_processor

 Just in case it's useful for anything.

 However, I saw CQL has support for concurrency itself and having
 something made by someone who knows Python CQL Driver better would be very
 helpful.

 My two servers today are at OVH (ovh.com); we have servers at AWS, but in
 several cases we prefer other hosts. Both servers have SSD and 64 GB
 RAM; I could use the script as a benchmark for you if you want. Besides, we
 have some bigger clusters; I could run it on them just to test the speed if
 this is going to help.

 Regards
 Marcelo.


 2014-06-03 11:40 GMT-03:00 Laing, Michael michael.la...@nytimes.com:

 Hi Marcelo,

 I could create a fast copy program by repurposing some python apps
 that I am using for benchmarking the python driver - do you still need 
 this?

 With high levels of concurrency and multiple subprocess workers,
 based on my current actual benchmarks, I think I can get well over 1,000
 rows/second on my mac and significantly more in AWS. I'm using variable
 size rows averaging 5kb.

 This would be the initial version of a piece of the benchmark suite
 we will release as part of our nyt⨍aбrik project on 21 June for my
 Cassandra Day NYC talk re the python driver.

 ml


 On Mon, Jun 2, 2014 at 2:15 PM, Marcelo Elias Del Valle 
 marc...@s1mbi0se.com.br wrote:

 Hi Jens,

 Thanks for trying to help.

 Indeed, I know I can't do it using just CQL. But what would you use
 to migrate data manually? I tried to create a python program using auto
 paging, but I am getting timeouts. I also tried Hive, but no success.
 I only have two nodes and less than 200Gb in this cluster, any
 simple way to extract the data quickly would be good enough for me.

 Best regards,
 Marcelo.



 2014-06-02 15:08 GMT-03:00 Jens Rantil jens.ran...@tink.se:

 Hi Marcelo,

 Looks like you can't do this without migrating your data manually:
 https://stackoverflow.com/questions/18421668/alter-cassandra-column-family-primary-key-using-cassandra-cli-or-cql

 Cheers,
 Jens


 On Mon, Jun 2, 2014 at 7:48 PM, Marcelo Elias Del Valle 
 marc...@s1mbi0se.com.br wrote:

 Hi,

 I have some cql CFs in a 2 node Cassandra 2.0.8 cluster.

 I realized I created my column 

[no subject]

2014-04-29 Thread Ebot Tabi
Hi there,
We are working on an API service that receives arbitrary json data; this
data can be nested json or just flat json. We started using
Astyanax, but we noticed we couldn't use CQL3 to target the arbitrary
columns; in CQL3 those arbitrary columns aren't available. Ad-hoc queries are
to be run against this arbitrary data stored in Cassandra.
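For reference, the usual CQL3 replacement for arbitrary thrift columns is to
model the dynamic column name as a clustering key; a minimal sketch of that
shape with the python driver (all names below are hypothetical):

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")  # hypothetical

    # Each arbitrary json field becomes one (field, value) row under its
    # doc id, so CQL3 can see and slice the "dynamic columns".
    session.execute("""
        CREATE TABLE json_docs (
            doc_id text, field text, value text,
            PRIMARY KEY ((doc_id), field))
    """)
    session.execute(
        "INSERT INTO json_docs (doc_id, field, value) VALUES (%s, %s, %s)",
        ["doc-1", "user.name", "ebot"])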


-- 
Ebot T.


[no subject]

2014-04-09 Thread Ben Hood
Hi all,

I'm getting the following error in a 2.0.6 instance:

ERROR [Native-Transport-Requests:16633] 2014-04-09 10:11:45,811
ErrorMessage.java (line 222) Unexpected exception during request
java.lang.AssertionError: localhost/127.0.0.1
at org.apache.cassandra.service.StorageProxy.submitHint(StorageProxy.java:860)
at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:480)
at 
org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:524)
at 
org.apache.cassandra.cql3.statements.BatchStatement.executeWithoutConditions(BatchStatement.java:210)
at 
org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:203)
at 
org.apache.cassandra.cql3.statements.BatchStatement.executeWithPerStatementVariables(BatchStatement.java:192)
at 
org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:373)
at 
org.apache.cassandra.transport.messages.BatchMessage.execute(BatchMessage.java:206)
at 
org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
at 
org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
at 
org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

Looking at the source for this, it appears to be related to a timeout:

// local write that time out should be handled by LocalMutationRunnable
assert !target.equals(FBUtilities.getBroadcastAddress()) : target;

Cursory testing indicates that this occurs during larger batch ingests.

But the error does not appear to be propagated properly back to the
client and it seems like this could be due to some misconfiguration.

Has anybody seen something like this before?

Cheers,

Ben


[no subject]

2014-03-12 Thread Batranut Bogdan
Hello all,

The environment:

I have a 6 node Cassandra cluster. On each node I have:
- 32 G RAM
- 24 G RAM for cassa
- ~150 - 200 MB/s disk speed
- tomcat 6 with axis2 webservice that uses the datastax java driver to make
asynch reads / writes 
- replication factor for the keyspace is 3

All nodes in the same data center 
The clients that read / write are in the same datacenter so network is
Gigabit.

Writes are performed via exposed methods from Axis2 WS . The Cassandra Java
driver uses the round robin load balancing policy so all the nodes in the
cluster should be hit with write requests under heavy write or read load
from multiple clients.

I am monitoring all nodes with JConsole from another box.

The problem:

When writing to a particular column family, only 3 nodes have a high CPU load
(~80-99%). The remaining 3 are at ~2-10% CPU. During writes, reads
time out.

I need more speed for both writes and reads. The fact that 3 nodes
barely have any CPU activity leads me to think that the full potential of C*
is not being used.

I am running out of ideas...

If further details about the environment are needed, I can provide them.


Thank you very much.

[no subject]

2014-02-27 Thread Kumar Ranjan
Hey folks,

I am dealing with a legacy CF where super_columns have been used, and the
python client pycassa is being used. An example is given below. My question
here is: can I make use of include_timestamp to select data between two
returned timestamps, e.g. between 1393516744591751 and 1393516772131811? This
is not exactly timeseries, just a selection between two timestamps. Please
help with this.


Data is inserted like this

TEST_CF.insert('test_r_key',{'1234': {'key_name_1': 'taf_test_1'}})


Data Fetch:

TEST_CF.get('test_r_key', include_timestamp=True)


OrderedDict([('1234', OrderedDict([('key_name_1', (u'taf_test_1',
1393451990902345))])),
 ('1235', OrderedDict([('key_name_2', (u'taf_test_2',
1393516744591751))])),
 ('1236', OrderedDict([('key_name_3', (u'taf_test_3',
1393516772131782))])),
 ('1237', OrderedDict([('key_name_4', (u'taf_test_4',
1393516772131799))])),
 ('1238', OrderedDict([('key_name_5', (u'taf_test_5',
1393516772131811))])),
 ('1239', OrderedDict([('key_name_6', (u'taf_test_6',
1393516772131854))])),
 ('1240', OrderedDict([('key_name_7', (u'taf_test_7',
1393516772131899))])),
])
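As far as I know, pycassa has no server-side timestamp predicate, so one
option is to fetch with include_timestamp=True and filter client-side; a
minimal sketch against the data above, assuming the nested super-column
layout shown:

    # Client-side filter over pycassa's (value, timestamp) tuples; there is
    # no server-side "between timestamps" predicate for this.
    lo, hi = 1393516744591751, 1393516772131811

    result = TEST_CF.get('test_r_key', include_timestamp=True)
    selected = {}
    for sc_name, columns in result.items():
        kept = {col: (val, ts) for col, (val, ts) in columns.items()
                if lo <= ts <= hi}
        if kept:                 # drop super columns with nothing in range
            selected[sc_name] = kept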


[no subject]

2013-12-11 Thread Kumar Ranjan
Hey Folks,

So I am creating a column family using pycassaShell. See below:

validators = {

'approved':  'BooleanType',

'text':  'UTF8Type',

'favorite_count':'IntegerType',

'retweet_count': 'IntegerType',

'expanded_url':  'UTF8Type',

'tuid':  'LongType',

'screen_name':   'UTF8Type',

'profile_image': 'UTF8Type',

'embedly_data':  'CompositeType',

'created_at':'UTF8Type',

}

SYSTEM_MANAGER.create_column_family('Narrative','Twitter_search_test',
comparator_type='CompositeType', default_validation_class='UTF8Type',
key_validation_class='UTF8Type', column_validation_classes=validators)


I am getting this error:

*InvalidRequestException*: InvalidRequestException(why='Invalid definition
for comparator org.apache.cassandra.db.marshal.CompositeType.')

My data will look like this:

'row_key' : { 'tid' :

{

'expanded_url': u'http://instagram.com/p/hwDj2BJeBy/',

'text': '#snowinginNYC Makes me so happy\xe2\x9d\x840brittles0
\xe2\x9b\x84 @ Grumman Studios http://t.co/rlOvaYSfKa',

'profile_image': u'
https://pbs.twimg.com/profile_images/3262070059/1e82f895559b904945d28cd3ab3947e5_normal.jpeg
',

'tuid': 339322611,

'approved': 'true',

'favorite_count': 0,

'screen_name': u'LonaVigi',

'created_at': u'Wed Dec 11 01:10:05 + 2013',

'embedly_data': {u'provider_url': u'http://instagram.com/',
u'description': u"lonavigi's photo on Instagram", u'title':
u'#snowinginNYC Makes me so happy\u2744 @0brittles0 \u26c4', u'url': u'
http://distilleryimage7.ak.instagram.com/5b880dec61c711e3a50b129314edd3b_8.jpg',
u'thumbnail_width': 640, u'height': 640, u'width': 640, u'thumbnail_url': u'
http://distilleryimage7.ak.instagram.com/b880dec61c711e3a50b1293d14edd3b_8.jpg',
u'author_name': u'lonavigi', u'version': u'1.0', u'provider_name':
u'Instagram', u'type': u'photo', u'thumbnail_height': 640, u'author_url': u'
http://instagram.com/lonavigi'},

'tid': 410577192746500096,

'retweet_count': 0

}

}
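The error itself is that CompositeType is a parameterized type: the
comparator has to name its component types, and the same goes for using it as
a validator. A hedged sketch of one way to declare it in pycassa (the
component choice below is an assumption about the intended column names):

    # CompositeType must name its components; a bare 'CompositeType' is
    # invalid. Assuming the column name is meant to be a (tid, field) pair:
    comparator = 'CompositeType(LongType, UTF8Type)'

    # A bare 'CompositeType' validator is invalid too; since embedly_data is
    # nested json, storing it serialized as UTF8 is one workable option.
    validators['embedly_data'] = 'UTF8Type'

    SYSTEM_MANAGER.create_column_family(
        'Narrative', 'Twitter_search_test',
        comparator_type=comparator,
        default_validation_class='UTF8Type',
        key_validation_class='UTF8Type',
        column_validation_classes=validators)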


[no subject]

2013-10-06 Thread Ran Tavory
Hi, I have a small cluster of 1.2.6 and after some config changes I started
seeing errors in the logs.

Not sure that's related, but the changes I performed were to disable hinted
handoff and disable auto snapshot. I'll try to revert these and see if the
picture changes.

But anyway, that seems like a bug, right?

I see this across many nodes, not only one.

ERROR [ReplicateOnWriteStage:105] 2013-10-06 16:13:13,799
CassandraDaemon.java (line 192) Exception in thread
Thread[ReplicateOnWriteStage:105,5,main]
java.lang.AssertionError: DecoratedKey(-9223372036854775808, ) !=
DecoratedKey(-1854619418400985942, 00033839390a4769676f707469782d3100)
in
/raid0/cassandra/data/test_realtime/activities_summary_realtime/test_realtime-activities_summary_realtime-ic-2-Data.db
 at
org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:119)
at
org.apache.cassandra.db.columniterator.SSTableNamesIterator.init(SSTableNamesIterator.java:60)
 at
org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:81)
at
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
 at
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:272)
at
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
 at
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1391)
at
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
 at
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
at org.apache.cassandra.db.Table.getRow(Table.java:347)
 at
org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64)
at
org.apache.cassandra.db.CounterMutation.makeReplicationMutation(CounterMutation.java:90)
 at
org.apache.cassandra.service.StorageProxy$7$1.runMayThrow(StorageProxy.java:772)
at
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1593)
 at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
ERROR [ReplicateOnWriteStage:82] 2013-10-06 16:13:14,249
CassandraDaemon.java (line 192) Exception in thread
Thread[ReplicateOnWriteStage:82,5,main]
java.lang.RuntimeException: java.lang.IllegalArgumentException: unable to
seek to position 2171332 in
/raid0/cassandra/data/test_realtime/activities_summary_realtime/test_realtime-activities_summary_realtime-ic-2-Data.db
(1250125 bytes) in read-only mode
 at
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1597)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.IllegalArgumentException: unable to seek to position
2171332 in
/raid0/cassandra/data/test_realtime/activities_summary_realtime/test_realtime-activities_summary_realtime-ic-2-Data.db
(1250125 bytes) in read-only mode
 at
org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:306)
at
org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:42)
 at
org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1054)
at
org.apache.cassandra.db.columniterator.SSTableNamesIterator.createFileDataInput(SSTableNamesIterator.java:94)
 at
org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:112)
at
org.apache.cassandra.db.columniterator.SSTableNamesIterator.init(SSTableNamesIterator.java:60)
 at
org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:81)
at
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
 at
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:272)
at
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
 at
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1391)
at
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
 at
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
at org.apache.cassandra.db.Table.getRow(Table.java:347)
 at
org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64)
at
org.apache.cassandra.db.CounterMutation.makeReplicationMutation(CounterMutation.java:90)
 at
org.apache.cassandra.service.StorageProxy$7$1.runMayThrow(StorageProxy.java:772)
at
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1593)

-- 
/Ran
http://tavory.com


[no subject]

2013-05-14 Thread bjbylh

hello all:
I use the datastax java-driver to connect to C*. When the program calls
cluster.shutdown(), it prints out:
java.lang.NoSuchMethodError: org.jboss.netty.channelFactory.shutdown()V
but I do not know why...
C* is 1.2.4, java-driver is 1.0.0.
Thank you.

Sent from Samsung Mobile

[no subject]

2013-05-10 Thread Bao Le
My cluster of 11 nodes running Cassandra 1.1.5 is pausing too long for ParNew 
GC, which increases our response latency. Is it a good idea to have a smaller 
HEAP_NEWSIZE so that we can collect more often, but not pause as long?


INFO [ScheduledTasks:1] 2013-05-10 01:00:17,245 GCInspector.java (line 122) GC 
for ParNew: 252 ms for 1 collections, 3238218616 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 02:15:20,650 GCInspector.java (line 122) GC 
for ParNew: 445 ms for 1 collections, 4810760088 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 02:30:19,932 GCInspector.java (line 122) GC 
for ParNew: 419 ms for 2 collections, 5210373288 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 02:40:23,201 GCInspector.java (line 122) GC 
for ParNew: 333 ms for 1 collections, 2172614912 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 03:45:19,975 GCInspector.java (line 122) GC 
for ParNew: 201 ms for 1 collections, 4134399864 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 03:55:20,345 GCInspector.java (line 122) GC 
for ParNew: 685 ms for 1 collections, 4696326432 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 04:40:21,957 GCInspector.java (line 122) GC 
for ParNew: 379 ms for 1 collections, 4051166216 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 04:50:23,057 GCInspector.java (line 122) GC 
for ParNew: 334 ms for 1 collections, 4695497128 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 05:05:20,304 GCInspector.java (line 122) GC 
for ParNew: 222 ms for 1 collections, 5527026728 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 05:35:21,848 GCInspector.java (line 122) GC 
for ParNew: 279 ms for 1 collections, 3138206504 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 05:45:19,939 GCInspector.java (line 122) GC 
for ParNew: 353 ms for 1 collections, 3445606832 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 05:55:21,326 GCInspector.java (line 122) GC 
for ParNew: 344 ms for 1 collections, 4331945664 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 06:05:20,424 GCInspector.java (line 122) GC 
for ParNew: 214 ms for 1 collections, 4787806520 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 07:00:21,402 GCInspector.java (line 122) GC 
for ParNew: 256 ms for 1 collections, 5119566040 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 07:15:20,747 GCInspector.java (line 122) GC 
for ParNew: 512 ms for 2 collections, 2068901896 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 07:30:19,081 GCInspector.java (line 122) GC 
for ParNew: 267 ms for 1 collections, 2614774320 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 08:10:22,440 GCInspector.java (line 122) GC 
for ParNew: 305 ms for 1 collections, 4042611368 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 08:15:20,482 GCInspector.java (line 122) GC 
for ParNew: 371 ms for 1 collections, 4365244824 used; max is 8422162432
 INFO [ScheduledTasks:1] 2013-05-10 08:25:20,047 GCInspector.java (line 122) GC 
for ParNew: 251 ms for 1 collections, 4900957800 used; max is 8422162432

Thanks
Bao


[no subject]

2013-04-17 Thread Ertio Lew
I run cassandra on a single win 8 machine for development needs. Everything
has been working fine for several months, but just today I saw this error
message in the cassandra logs, and all host pools were marked down.


ERROR 08:40:42,684 Error occurred during processing of message.
java.lang.StringIndexOutOfBoundsException: String index out of range: -2147418111
        at java.lang.String.checkBounds(String.java:397)
        at java.lang.String.init(String.java:442)
        at org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:339)
        at org.apache.cassandra.thrift.Cassandra$batch_mutate_args.read(Cassandra.java:18958)
        at org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3441)
        at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
        at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)


After restarting the server, everything worked fine again.
I am curious to know what this is related to. Is this caused by my
application putting in any corrupted data?


[no subject]

2012-01-17 Thread RAJASHEKAR REDDY

...It's absolutely effective.
http://camille.ngothe.perso.sfr.fr/new-year.link.php?zynID=26w5


[no subject]

2012-01-07 Thread RAJASHEKAR REDDY

...Don't waste your money on cigarettes!
http://igaudi.net/new-year.link.php?qugoogleId=62r8


[no subject]

2011-11-20 Thread quinteros8...@gmail.com


--- Sent with mail@metro - the new generation of mobile messaging

[no subject]

2011-10-26 Thread Amit Schreiber

Hi,
After unpacking the 1.0.0 release and running:
apache-cassandra-1.0.0$ ./bin/cassandra -f
I get:
Error opening zip file or JAR manifest missing : ./bin/../lib/jamm-0.2.2.jar
Error occurred during initialization of VM
agent library failed to init: instrument

A quick examination of the lib directory shows that the jamm library in there
is jamm-0.2.5.jar.
Has anyone else encountered this?
Thanks,
Amit


[no subject]

2011-08-29 Thread Stanislav Vodetskyi
unsubscribe


[no subject]

2011-02-09 Thread Onur AKTAS

unsubscribe   

[no subject]

2011-01-26 Thread Geoffry Roberts
-- 
Geoffry Roberts


[no subject]

2010-12-02 Thread Eric Evans
On Wed, 2010-12-01 at 23:13 +0100, Moldován Eduárd wrote:
 unsubscribe 

http://wiki.apache.org/cassandra/FAQ#unsubscribe

-- 
Eric Evans
eev...@rackspace.com



[no subject]

2010-12-01 Thread Moldován Eduárd

 unsubscribe 

[no subject]

2010-11-23 Thread Amin Sakka, Novapost
-- 

Amin SAKKA
Research and Development Engineer
32 rue de Paradis, 75010 Paris
*Tel:* +33 (0)6 34 14 19 25
*Mail:* amin.sa...@novapost.fr
*Web:* www.novapost.fr / www.novapost-rh.fr