On Mon, Jun 8, 2015 at 6:22 AM, Anton Koshevoy nowa...@gmail.com wrote:
- sudo rm -rf /db/cassandra/cr/data0*/system/*
This removes the schema. You can't load SSTables for column families which
don't exist.
=Rob
It could be the Linux kernel killing Cassandra because of memory usage. When
this happens, nothing is logged by Cassandra. Check the system
logs (/var/log/messages) and look for a message saying "Out of memory: Kill
process ...".
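The check above can be scripted. A minimal sketch follows; the exact kernel message format varies by distribution and kernel version, and the sample log lines below are fabricated for illustration:

```shell
# Simulate a couple of kernel oom-killer lines like those found in
# /var/log/messages, then search them the same way you would the real log.
cat > /tmp/messages.sample <<'EOF'
Jun  8 06:20:01 host kernel: Out of memory: Kill process 12345 (java) score 901 or sacrifice child
Jun  8 06:20:01 host kernel: Killed process 12345 (java) total-vm:9000000kB, anon-rss:8500000kB
EOF
# On a real system you would run: grep -i "out of memory" /var/log/messages
grep -i "out of memory" /tmp/messages.sample
```

If the grep matches, the PID in the message can be compared against the Cassandra process that disappeared.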
On Mon, Jun 8, 2015 at 1:37 PM, Paulo Motta pauloricard...@gmail.com
wrote:
On Mon, Jun 8, 2015 at 6:58 AM, ZeroUno zerozerouno...@gmail.com wrote:
So you mean that refresh needs to be used if the cluster is running, but
if I stopped Cassandra while copying the sstables then refresh is useless?
So the error "No new SSTables were found" during my refresh attempt is due
Try checking your system logs (generally /var/log/syslog) to see if the
Cassandra process was killed by the OS oom-killer.
2015-06-06 15:39 GMT-03:00 Brian Sam-Bodden bsbod...@integrallis.com:
Berk,
1 GB is not enough to run C*; the minimum memory we use on Digital
Ocean is 4 GB.
Cheers,
Rob, thanks for the answer.
I just followed the instructions from
http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_snapshot_restore_new_cluster.html
If I don't remove the system table data, the test cluster starts interfering with the
production cluster. How can I avoid this situation?
On
Yes, you shouldn’t delete the system directory. Next steps are …reconfigure the
test cluster with new IP addresses, clear the gossiping information and then
boot the test cluster.
If you are running Cassandra on VMware, then you may also want to look at this
Hi, Cassandra users:
I have a question about how to deserialize the new collection type data
in Cassandra 2.x (the exact version is Cassandra 2.0.10).
I created the following example table in CQLSH:
CREATE TABLE coupon ( account_id bigint, campaign_id uuid,
,
I'm not sure why sstable2json doesn't work for collections, but if you're
into reading raw sstables we use the following code with good success:
I think you just have to do a DESC KEYSPACE mykeyspace; from one node of
the production cluster, then copy the output and import it into your dev
cluster using cqlsh -f output.cql.
Take care: at the start of the output you might want to change DC names, RF,
or the replication strategy.
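The steps above can be sketched as shell commands. This is a sketch only, assuming cqlsh is on the PATH; the host names and the keyspace name mykeyspace are placeholders, and it must be run against live clusters:

```shell
# On a production node: dump the keyspace schema to a file.
cqlsh prod-node-1 -e "DESC KEYSPACE mykeyspace;" > output.cql

# Edit output.cql here if the dev cluster uses different DC names,
# replication factor, or replication strategy.

# Against the dev cluster: replay the schema.
cqlsh dev-node-1 -f output.cql
```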
Also, if you don't want to
On Mon, Jun 8, 2015 at 2:52 PM, Sanjay Baronia
sanjay.baro...@triliodata.com wrote:
Yes, you shouldn’t delete the system directory. Next steps are
…reconfigure the test cluster with new IP addresses, clear the gossiping
information and then boot the test cluster.
If you don't delete the
Cassandra authorization is at the keyspace and table level. Click on the
GRANT link on the doc page, to get more info:
http://docs.datastax.com/en/cql/3.1/cql/cql_reference/grant_r.html
Which says *Permissions to access all keyspaces, a named keyspace, or a
table can be granted to a user.*
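As an illustration of keyspace- and table-level grants, a sketch in CQL (the role and object names here are made up, not from the thread):

```sql
-- Allow a user to read and write everything in one keyspace.
GRANT SELECT ON KEYSPACE myks TO alice;
GRANT MODIFY ON KEYSPACE myks TO alice;

-- Or restrict the grant to a single table.
GRANT SELECT ON TABLE myks.coupon TO bob;
```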
There
I have C* 2.1.5 and store some data with a TTL. I reduced gc_grace_seconds to
zero, but it seems to have no effect.
Did I miss something?
--
Ranger Tsao
With gc_grace_seconds at zero, tombstones are removed without any delay during
compaction, so it's possible that the SSTables containing tombstones still need
to be compacted. You can either wait for compaction to happen or run a manual
compaction, depending on your compaction strategy. Manual compaction does have
Thank you. I have changed unchecked_tombstone_compaction to true. A major
compaction would produce one big SSTable, so I don't think it is a good choice.
--
Ranger Tsao
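That flag is set per table through the compaction options. A sketch (the table name and compaction class are illustrative, not from the thread; note that ALTER TABLE replaces the whole compaction map, so the class must be restated):

```sql
ALTER TABLE myks.events
  WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'unchecked_tombstone_compaction': 'true'
  };
```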
2015-06-09 11:16 GMT+08:00 Aiman Parvaiz ai...@flipagram.com:
So gc_grace zero will remove tombstones
Hi everyone
I am running C* 2.0.9 and decided to do a rolling upgrade. I added a node
running C* 2.0.15 to the existing cluster and saw this twice:
Jun 9 02:27:20 prod-cass23.localdomain cassandra: 2015-06-09 02:27:20,658
INFO CompactionExecutor:4 CompactionTask.runMayThrow - Compacting
Hi All,
Thanks for all the input. I posted the same question in HBase forum and got
more response.
Posting the consolidated list here.
Our case is that a central team builds and maintains the platform (Cassandra
as a service). We have a couple of use cases which fit Cassandra, like
time-series
Hi Jens,
All the points listed weren't from me. I posted the HBase vs. Cassandra
question in both forums and consolidated the answers here for discussion.
On Mon, Jun 8, 2015 at 2:27 PM, Jens Rantil jens.ran...@tink.se wrote:
Hi,
Some minor comments:
2.terrible!!! Ambari/cloudera manager rulezzz.
Does `nodetool compactionstats` show nothing running as well? Also, for
posterity, what are some details of the setup (C* version, etc.)?
-Tim
--
Tim Heckman
Operations Engineer
PagerDuty, Inc.
On Sun, Jun 7, 2015 at 6:40 PM, Arturas Raizys artu...@noantidot.com
wrote:
Hello,
I'm having
Hello,
I'm having a problem where on one node I have a continuous compaction process
running and consuming CPU. nodetool tpstats shows 1 compaction in
progress, but if I try to query the system.compactions_in_progress table, I
see 0 records. This never-ending compaction slows down the node, and it
becomes
Hi,
Does `nodetool compactionstats` show nothing running as well? Also, for
posterity, what are some details of the setup (C* version, etc.)?
`nodetool compactionstats` does not return anything; it just waits.
If I enable DEBUG logging, I see this line popping up while executing
`nodetool
Hi,
Some minor comments:
2.terrible!!! Ambari/cloudera manager rulezzz. Netflix has its own tool
for Cassandra but it doesn't support vnodes.
Not entirely sure what you mean here, but we ran Cloudera for a while, and
Cloudera Manager was buggy and hard to debug. Overall, our experience
wasn't
Hi,
Is it 2.0.14 or 2.1.4? If you are on 2.1.4 I would recommend an upgrade to
2.1.5 regardless of that issue.
From the data you provide it is difficult to assess what the issue is. If
you are running with RF=2 you can always add another node and kill that one,
if that is the only node that shows
On Mon, Jun 8, 2015 at 11:16 AM, Ajay ajay.ga...@gmail.com wrote:
If I understand correctly, you mean that when we write with QUORUM,
Cassandra writes to a few machines, fails to write to a few others, and
throws an exception if it doesn't satisfy QUORUM, leaving it inconsistent and
doesn't
On 05/06/15 22:40, Robert Coli wrote:
On Fri, Jun 5, 2015 at 7:53 AM, Sebastian Estevez
sebastian.este...@datastax.com wrote:
Since you only restored one dc's sstables, you should be able to
rebuild them on the second DC.
Refresh means
Hello all.
I need to transfer and start the copy of production cluster in a test
environment. My steps:
- nodetool snapshot -t `hostname`-#{cluster_name}-#{timestamp} -p #{jmx_port}
- nodetool ring -p #{jmx_port} | grep `/sbin/ifconfig eth0 | grep 'inet addr' |
awk -F: '{print $2}' | awk
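The snapshot-and-restore flow those steps follow (per the DataStax document linked earlier in the thread) can be sketched roughly as below. This is a hedged sketch, not the poster's exact commands: host names, the JMX port, paths, keyspace, and table are placeholders, and it only works against live clusters:

```shell
# 1. Snapshot on each production node (tag and keyspace are placeholders).
nodetool -p 7199 snapshot -t restore-snap mykeyspace

# 2. Copy the snapshot SSTables to the matching table directories on the
#    test cluster; snapshot files live under the table's snapshots/restore-snap/
#    subdirectory on each node. rsync shown as one possibility.
rsync -av /var/lib/cassandra/data/mykeyspace/ test-node:/var/lib/cassandra/data/mykeyspace/

# 3. On the test node, after the schema exists, load the copied SSTables
#    (only needed if Cassandra was running during the copy).
nodetool -p 7199 refresh mykeyspace mytable
```

As noted elsewhere in the thread, refresh only finds the files if the schema (including the system keyspace) is intact, and the test cluster must be reconfigured with its own IPs and seeds so it does not gossip with production.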
Some options I can think of:
1 - depending on your data size and stime query frequency, you may use
Spark to perform queries filtering by server time in the log table, maybe
within a device time window to reduce the dataset your Spark job will need
to go through. More info on the Spark connector:
The Cassandra team is pleased to announce the release of Apache Cassandra
version 2.2.0-rc1.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of
The Cassandra team is pleased to announce the release of Apache Cassandra
version 2.1.6. We are now calling 2.1 series stable and suitable for
production.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising