Re[2]: Test

2014-12-03 Thread Plotnik, Alexey
Alah Akbar -- Original Message -- From: "Servando Muñoz G." <smg...@gmail.com> To: "user@cassandra.apache.org" <user@cassandra.apache.org> Sent: 04.12.2014 16:12:32 Subject: RE: Test Greetings… Who are you From: Castelain, Alain [mailto:alain.castel...@xerox.com

RE: Test

2014-12-03 Thread Servando Muñoz G.
Greetings… Who are you From: Castelain, Alain [mailto:alain.castel...@xerox.com] Sent: Wednesday, December 3, 2014 09:46 AM To: user@cassandra.apache.org Subject: Test Test Cordialement, Regards, Alain Castelain Database Administrator Xerox - Global Document Ou

Re: Performance Difference between Batch Insert and Bulk Load

2014-12-03 Thread Dong Dai
Thanks a lot for the great answers. P.S. I moved this thread here from dev. By checking the source code of java-driver, I noticed that the execute() method is implemented using executeAsync() with an immediate get(): @Override public ResultSet execute(Statement statement) { return ex
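
For context, a minimal sketch of the pattern being discussed: execute() in the DataStax Java driver is essentially executeAsync() plus a blocking get, so a sequential insert loop pays one round trip per statement, while issuing the inserts asynchronously keeps many requests in flight. Keyspace, table, and column names below are illustrative, not from the thread.

    // Illustrative sketch only -- assumes a table "events(id int, payload text)" exists.
    import com.datastax.driver.core.*;
    import java.util.ArrayList;
    import java.util.List;

    public class AsyncInsertSketch {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("test_ks");
            PreparedStatement ps =
                session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)");

            // Keep many requests in flight instead of blocking on each execute().
            List<ResultSetFuture> futures = new ArrayList<>();
            for (int i = 0; i < 10000; i++) {
                futures.add(session.executeAsync(ps.bind(i, "payload-" + i)));
            }
            // Block at the end; a production loop would bound the number in flight.
            for (ResultSetFuture f : futures) {
                f.getUninterruptibly();
            }
            cluster.close();
        }
    }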

Re: Wide rows best practices and GC impact

2014-12-03 Thread Gianluca Borello
Thanks Robert, I really appreciate your help! I'm still unsure why Cassandra 2.1 seems to perform much better in that same scenario (even with the same values for compaction threshold and number of compactors), but I guess we'll revisit when we decide to upgrade to 2.1 in production. On Dec 3, 20

Re: Wide rows best practices and GC impact

2014-12-03 Thread Robert Coli
On Tue, Dec 2, 2014 at 5:01 PM, Gianluca Borello wrote: > We mainly store time series-like data, where each data point is a binary > blob of 5-20KB. We use wide rows, and try to put in the same row all the > data that we usually need in a single query (but not more than that). As a > result, our
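
As an aside, a minimal CQL sketch of the kind of layout being described (one partition per source and time bucket, so a single query reads one wide row without letting it grow unbounded); the table and column names are illustrative, not the poster's actual schema:

    -- Illustrative only.
    CREATE TABLE metrics_by_hour (
        source_id  text,
        hour       timestamp,   -- time bucket bounding the width of each row
        ts         timestamp,
        value      blob,        -- the 5-20KB binary data point
        PRIMARY KEY ((source_id, hour), ts)
    );

    -- All data for one query lives in a single partition:
    SELECT value FROM metrics_by_hour
     WHERE source_id = 'sensor-1' AND hour = '2014-12-03 00:00:00';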

Re: Keyspace and table/cf limits

2014-12-03 Thread Nikolai Grigoriev
We had a similar problem - multi-tenancy and multiple DC support. But we did not really have a strict requirement of one keyspace per tenant. Our row keys allow us to put any number of tenants per keyspace. So, on one side - we could put all data in a single keyspace for all tenants. And size the
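
A minimal sketch of the "tenants share a keyspace via the row key" approach mentioned above, with illustrative names: one table serves every tenant because tenant_id leads the partition key, so a keyspace per tenant is not required.

    -- Illustrative only.
    CREATE TABLE customer_data (
        tenant_id  text,
        object_id  text,
        payload    blob,
        PRIMARY KEY ((tenant_id, object_id))
    );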

Re: Keyspace and table/cf limits

2014-12-03 Thread Raj N
The question is more from a multi-tenancy point of view. We wanted to see if we can have a keyspace per client. Each keyspace may have 50 column families, but if we have 200 clients, that would be 10,000 column families. Do you think that's reasonable to support? I know that key cache capacity is r

Re: Recommissioned node is much smaller

2014-12-03 Thread Eric Stevens
Well, as I understand it, deleting the entire data directory, including system, should have the same effect as if you totally lost a node and were bootstrapping a replacement. And that's an operation you should be able to have confidence in. I wonder what your load does if you run nodetool cleanu
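
For reference, the cleanup being suggested is a per-node operation; an illustrative invocation (defaults assumed):

    # Drops data this node no longer owns after its ranges changed.
    nodetool cleanup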

Re: Cassandra taking snapshots automatically?

2014-12-03 Thread Robert Wille
No. auto_snapshot is turned on, but snapshot_before_compaction is off. Maybe this will shed some light on it. I tried running nodetool repair. I got several messages saying "Lost notification. You should check server log for repair status of keyspace test2_browse". I looked in system.log, and I

Re: Recommissioned node is much smaller

2014-12-03 Thread Robert Wille
Load and ownership didn’t correlate nearly as well as I expected. I have lots and lots of very small records. I would expect very high correlation. I think the moral of the story is that I shouldn’t delete the system directory. If I have issues with a node, I should recommission it properly. Ro

Re: Cassandra taking snapshots automatically?

2014-12-03 Thread Robert Wille
No. auto_snapshot is turned on, but not snapshot_before_compaction. On Dec 3, 2014, at 10:30 AM, Eric Stevens <migh...@gmail.com> wrote: Do you have snapshot_before_compaction enabled? http://datastax.com/documentation/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html#ref

Re: Cassandra taking snapshots automatically?

2014-12-03 Thread Eric Stevens
Do you have snapshot_before_compaction enabled? http://datastax.com/documentation/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html#reference_ds_qfg_n1r_1k__snapshot_before_compaction On Wed Dec 03 2014 at 10:25:12 AM Robert Wille wrote: > I built my first multi-node cluster and
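
For reference, the two cassandra.yaml settings in play here, with the values reported later in this thread (the comments paraphrase the linked documentation):

    # cassandra.yaml
    auto_snapshot: true                 # snapshot before a keyspace/table is truncated or dropped
    snapshot_before_compaction: false   # snapshot before every compaction; off by default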

Cassandra taking snapshots automatically?

2014-12-03 Thread Robert Wille
I built my first multi-node cluster and populated it with a bunch of data, and ran out of space far more quickly than I expected. On one node, I ended up with 76 snapshots, consuming a total of 220 GB of space. I only have 40 GB of data. It took several snapshots per hour, sometimes within a min
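
A hedged sketch of how one might check where the space is going and reclaim it; the path assumes the default data directory, and listsnapshots is only available on newer versions (2.1+):

    # Show existing snapshots (2.1+), then drop them all on this node.
    nodetool listsnapshots
    nodetool clearsnapshot
    # Or measure them directly (default data directory assumed):
    du -sh /var/lib/cassandra/data/*/*/snapshots/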

Re: Recommissioned node is much smaller

2014-12-03 Thread Eric Stevens
How does the difference in load compare to the effective ownership? If you deleted the system directory as well, you should end up with new ranges, so I'm wondering if perhaps you just ended up with a really bad shuffle. Did you run removenode on the old host after you took it down (I assume so si
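
For reference, the two checks being referred to, with an illustrative keyspace name; passing a keyspace makes the Owns (effective) column reflect that keyspace's replication settings:

    nodetool status my_keyspace      # compare Load against Owns (effective) per node
    nodetool removenode <host-id>    # host ID of the old node, as shown by nodetool status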

Re: opscenter: 0 of 0 agents connected, but /nodes/all gives 3 results

2014-12-03 Thread Ian Rose
For anyone following at home, the problem had something to do with the fact that I was accessing opscenter through a proxy. Not sure exactly what went wrong - rather than try to debug it I'm just going to move it and enable authentication. On Tue, Dec 2, 2014 at 6:33 PM, Nick Bailey wrote: > Of

Test

2014-12-03 Thread Castelain, Alain
Test Cordialement, Regards, Alain Castelain Database Administrator Xerox - Global Document Outsourcing 253, avenue du Président Wilson La Plaine Saint Denis Cedex, 93211 France http://www.xerox.fr/services/frfr.html p (+33) 155 847 333 m (+33) 682 999 617 @ alain.castel...@xerox.com smime.

Why doesn't a cluster start after a cassandra.yaml range_timeout parameter change?

2014-12-03 Thread Castelain, Alain
Hi, I had a three-node cluster on Cassandra 1.2.16 running well. I then changed range_request_timeout_in_ms from 1 to 2 on two nodes and those nodes restarted fine. On the last node I got these messages in the output.log file: INFO 19:27:18,691 Cassandra shutting d
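
For reference, the parameter in question as it appears in cassandra.yaml; the value shown below is the stock 1.2/2.0 default, not the change made in this thread, and one common way a single node fails to start after an edit is a malformed or mis-indented YAML entry:

    # cassandra.yaml -- default value shown for illustration
    range_request_timeout_in_ms: 10000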

Re: nodetool repair exception

2014-12-03 Thread Rafał Furmański
I see a “Too many open files” exception in the logs, but I’m sure that my limit is now 150k. Should I increase it? What’s a reasonable open files limit for Cassandra? On 3 Dec 2014, at 15:02, Yuki Morishita wrote: > As the exception indicates, nodetool just lost communication with the > Cassandr
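
For reference, a sketch of the limits typically recommended for production Cassandra in the DataStax documentation of that era; values are illustrative, so verify against the docs for your version:

    # /etc/security/limits.conf, for the user running Cassandra
    cassandra - memlock  unlimited
    cassandra - nofile   100000
    cassandra - nproc    32768
    cassandra - as       unlimited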

Re: nodetool repair exception

2014-12-03 Thread Yuki Morishita
As the exception indicates, nodetool just lost communication with the Cassandra node and cannot print progress any further. Check your system.log on the node, and see if your repair was completed. If there is no error, then it should be fine. On Wed, Dec 3, 2014 at 5:08 AM, Rafał Furmański wrote:

nodetool repair exception

2014-12-03 Thread Rafał Furmański
Hi All! We have an 8-node cluster in 2 DCs (4 per DC, RF=3) running Cassandra 2.1.2 on Linux Debian Wheezy. I executed “nodetool repair” on one of the nodes, and the command returned the following error: Exception occurred during clean-up. java.lang.reflect.UndeclaredThrowableException error: JMX

Re: Nodes get stuck in crazy GC loop after some time, leading to timeouts

2014-12-03 Thread Paulo Ricardo Motta Gomes
Thanks a lot for the help Graham and Robert! Will try increasing heap and see how it goes. Here are my gc settings, if they're still helpful (they're mostly the defaults): -Xms6G -Xmx6G -Xmn400M -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 -XX:MaxTenu
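
A minimal sketch of the "increase heap" change being considered, assuming the stock cassandra-env.sh is used to size the JVM rather than hand-edited flags; the values are an example only, not a recommendation (CMS heaps much beyond ~8G tend to lengthen pause times):

    # conf/cassandra-env.sh -- illustrative values
    MAX_HEAP_SIZE="8G"
    HEAP_NEWSIZE="800M"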