To disable auto snapshots, set the property auto_snapshot: false in your
cassandra.yaml file.
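For reference, a minimal cassandra.yaml fragment (setting name as in the stock 2.x config; the shipped default is true):

```yaml
# cassandra.yaml
# When false, TRUNCATE and DROP TABLE no longer snapshot the
# affected table's data first. Default is true.
auto_snapshot: false
```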
Mark
On 21 May 2015 at 08:30, Ali Akhtar ali.rac...@gmail.com wrote:
Is there a config setting where automatic snapshots can be disabled? I
have a use case where a table is truncated quite often, and
Thanks!
They seem to be in place now. Thanks!
On Wed, May 20, 2015 at 5:46 PM, Michael Shuler mich...@pbandjelly.org
wrote:
On 05/20/2015 03:19 AM, Andrej Pregl wrote:
So i was trying to install Cassandra via the Debian repository at
http://www.apache.org/dist/cassandra/debian and for some reason
Is there a config setting where automatic snapshots can be disabled? I have
a use case where a table is truncated quite often, and would like to not
have snapshots. I can't find anything on google.
Thanks.
Hi,
I also advise against running multiple instances on the same hardware. If you
have really big boxes, why not virtualize?
Another option is to experiment with CCM, although it has some limitations
(e.g. JNA is disabled).
If you follow up on this, I would like to hear how it went.
Em 21/05/2015 19:33,
Is there any method to disable this programmatically on a table-by-table
basis?
I'm running into an issue regarding drop table which I'll post in a
separate thread.
On Thu, May 21, 2015 at 3:34 AM, Mark Reddy mark.l.re...@gmail.com wrote:
To disable auto snapshots, set the property
No.
On Thu, May 21, 2015 at 7:07 AM Eax Melanhovich m...@eax.me wrote:
Say I would like to have a replica cluster, which state is a state of
real cluster 12 hours ago. Does Cassandra support such feature?
--
Best regards,
Eax Melanhovich
http://eax.me/
JBOD -- just a bunch of disks, no raid.
All the best,
Sebastián Estévez
Solutions Architect | DataStax
@James Rothering yeah I was thinking of container in a broad sense: either
full virtual machines, docker containers, straight LXC, or whatever else
would allow the Cassandra nodes to have their own IPs and bind to default
ports.
@Jonathan Haddad thanks for the blog post. To ensure the same host
You could use docker but it's not required. You could use LXC if you
wanted.
Shameless self promo:
http://rustyrazorblade.com/2013/08/advanced-devops-with-vagrant-and-lxc/
On Thu, May 21, 2015 at 1:00 PM James Rothering jrother...@codojo.me
wrote:
Hmmm ... Not familiar with JBOD. Is that
Hi,
we operate multiple instances (of possibly different versions) of Cassandra
on rather thick nodes. The only problem we have encountered so far was
sharing the same physical data disk among multiple instances; that proved
not to be the best idea. Sharing commitlog disks has caused no trouble so far.
Hmmm ... Not familiar with JBOD. Is that just RAID-0?
Also ... wrt the container talk, is that a Docker container you're talking
about?
On Thu, May 21, 2015 at 12:48 PM, Jonathan Haddad j...@jonhaddad.com wrote:
If you run it in a container with dedicated IPs it'll work just fine.
Just be
Counters differ significantly between 2.0 and 2.1 (
https://issues.apache.org/jira/browse/CASSANDRA-6405 among others). But in
both scenarios, you will pay more for counter reconciles and compactions
vs. regular updates.
The final counter performance fix will come with CASSANDRA-6506.
For
Yep, that would be one way to handle it.
On Thu, May 21, 2015 at 2:07 PM Dan Kinder dkin...@turnitin.com wrote:
@James Rothering yeah I was thinking of container in a broad sense: either
full virtual machines, docker containers, straight LXC, or whatever else
would allow the Cassandra nodes
Artur,
That's not entirely true. Writes to Cassandra are first written to a
memtable (an in-memory table), which is periodically flushed to disk. If
multiple writes to the same key arrive before the flush, only a single record
will be written to the disk/sstable. If you have writes that aren't coming
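The coalescing behavior described above can be sketched as follows (an illustrative toy model, not Cassandra code; the `Memtable` class and its method names are made up for this example):

```python
# Toy model: a memtable coalesces multiple writes to the same
# partition key before a flush, so only one record per key
# reaches the "sstable" on disk.

class Memtable:
    def __init__(self):
        self.rows = {}          # partition key -> latest value
        self.writes_received = 0

    def write(self, key, value):
        self.writes_received += 1
        self.rows[key] = value  # a later write overwrites an earlier one

    def flush(self):
        # One record per key is "written to disk", regardless of
        # how many writes arrived for that key in between.
        sstable = dict(self.rows)
        self.rows.clear()
        return sstable

mt = Memtable()
for i in range(5):
    mt.write("sensor-1", i)     # five writes, one partition key
mt.write("sensor-2", 99)

sstable = mt.flush()
print(mt.writes_received)       # 6 writes received in total
print(sstable)                  # but only 2 records flushed
```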
Yes, it's a known issue. For more information on the topic see this support
post from DataStax:
https://support.datastax.com/hc/en-us/articles/204226339-How-to-drop-and-recreate-a-table-in-Cassandra-versions-older-than-2-1
Mark
On 21 May 2015 at 15:31, Ken Hancock ken.hanc...@schange.com wrote:
We've been running into the reused key cache issue (CASSANDRA-5202) with
dropping and recreating the same table in Cassandra 1.2.18 so we've been
testing with key caches disabled which does not seem to solve the issue.
In the latest logs it seems that old SSTables metadata gets read after the
Yeah, you can confirm it in the log; see the example below.
WARN [main] 2015-05-22 11:23:25,584 CassandraDaemon.java:81 - JMX is not
enabled to receive remote connections. Please see cassandra-env.sh for more
info.
we are running C* with IPv6; cqlsh works superbly, but not on a local link.
$ nodetool -h
If you run it in a container with dedicated IPs it'll work just fine. Just
be sure you aren't using the same machine to replicate its own data.
On Thu, May 21, 2015 at 12:43 PM Manoj Khangaonkar khangaon...@gmail.com
wrote:
+1.
I agree we need to be able to run multiple server instances on
+1.
I agree we need to be able to run multiple server instances on one physical
machine. This is especially necessary in development and test environments,
where one is experimenting and needs a cluster but does not have access to
multiple physical machines.
If you google, you can find a few
Just wondering if anyone else is seeing this issue on the nodetool after
installing 2.1.5
This works
nodetool -h 127.0.0.1 cfstats keyspace.table
This works
nodetool -h localhost cfstats keyspace.table
This works
nodetool cfstats keyspace.table
This doesn't work
nodetool -h 192.168.1.10
Thanks Mark (though that article doesn't appear publicly accessible for
others).
Truncate would have been the tool of choice, however my understanding is
truncate fails unless all nodes are up and running which makes it a
non-workable choice since we can't determine when failures will occur.
Ken
For security reasons, Cassandra changed JMX to listen on localhost only
as of versions 2.0.14/2.1.4.
From NEWS.txt:
The default JMX config now listens to localhost only. You must enable
the other JMX flags in cassandra-env.sh manually.
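For illustration, the relevant knobs in cassandra-env.sh look roughly like this (a sketch based on the stock 2.1 env file; verify the flag names against your own copy before editing):

```shell
# Default since 2.0.14/2.1.4: bind JMX to localhost only.
LOCAL_JMX=yes

# To accept remote JMX connections, switch this off and enable
# authentication (exposing unauthenticated JMX remotely is dangerous):
# LOCAL_JMX=no
# JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
# JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
```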
On Thu, May 21, 2015 at 11:05 AM, Walsh, Stephen
Hi, I'd just like some clarity and advice regarding running multiple
cassandra instances on a single large machine (big JBOD array, plenty of
CPU/RAM).
First, I am aware this was not Cassandra's original design, and doing this
seems to unreasonably go against the commodity hardware intentions of