It looks like the /cassandra directory is missing from most of the
mirrors right now. The only mirror that I've found to work is
http://www.eu.apache.org
On Fri, Aug 24, 2012 at 2:53 AM, ruslan usifov ruslan.usi...@gmail.com wrote:
Hm, from the European servers the Cassandra packages are present, but from
That library requires you to serialize and deserialize the data
yourself. So to insert a Ruby Float you would:
value = 28.21
[value].pack('G')
@client.insert(:somecf, 'key', {'floatval' => [value].pack('G')})
and to read it back out:
value = @client.get(:somecf, 'key',
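As a minimal sketch of the round trip itself (independent of the client object):

```ruby
# pack('G') encodes a Ruby Float as a big-endian IEEE 754 double;
# unpack('G') decodes those 8 bytes back to the same Float exactly.
value = 28.21
encoded = [value].pack('G')          # 8-byte binary string stored in Cassandra
decoded = encoded.unpack('G').first  # => 28.21
```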
By default, Cassandra is configured to use half the RAM of your
system. That's way overkill for playing around with it on a laptop.
Edit /etc/cassandra/cassandra-env.sh and set MAX_HEAP_SIZE to
something more suited for your environment.
I have it set to 256M for my laptop (with 4G of RAM).
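In cassandra-env.sh that might look like this (sizes are illustrative for a small dev box, not recommendations):

```shell
# cassandra-env.sh fragment -- values are illustrative for a laptop setup.
# MAX_HEAP_SIZE caps the JVM heap; HEAP_NEWSIZE is usually set alongside it.
MAX_HEAP_SIZE="256M"
HEAP_NEWSIZE="64M"
```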
Just rebooting a machine with ephemeral drives is OK (it does an OS-level
reboot), and you will keep the same IP address. If you stop and
then start a machine with ephemeral drives, you will lose data.
See: http://alestic.com/2011/09/ec2-reboot-stop-start
On Wed, Dec 7, 2011 at 6:43 PM,
We're working on upgrading from 1.0.12 to 1.1.12. After upgrading a test
node I ran into CASSANDRA-4157, which restricts the max length of CF names
to <= 48 characters. It looks like CASSANDRA-4110 will allow us to upgrade
and keep our existing long CF names, but we won't be able to create new CFs
I can't tell you why that one-liner isn't working, but you can try
http://www.cassandraring.com for generating balanced tokens.
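If you'd rather compute them yourself, balanced tokens for the RandomPartitioner are just evenly spaced points in its token space of [0, 2**127). A minimal sketch, with a hypothetical `balanced_tokens` helper:

```ruby
# Hypothetical helper: evenly spaced initial_token values for the
# RandomPartitioner, whose token space is [0, 2**127).
def balanced_tokens(node_count)
  (0...node_count).map { |i| i * (2**127) / node_count }
end

balanced_tokens(2)  # => [0, 2**126]
```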
On Thu, Oct 31, 2013 at 11:59 PM, Techy Teck comptechge...@gmail.com wrote:
I am trying to set up a two-node Cassandra cluster on a Windows machine. I
have basically two
The issue you should look at is CASSANDRA-4206.
This is apparently fixed in 2.0, so upgrading is one option. If you are not
ready to upgrade to 2.0, you can try increasing
in_memory_compaction_limit_in_mb. We were hitting this exception on one of
our nodes and increasing in_memory_compaction_limit_in_mb did fix it.
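In cassandra.yaml that setting would look something like the following (the value is illustrative; tune it to your row sizes and available heap):

```yaml
# cassandra.yaml: rows larger than this threshold are compacted via the
# slower two-pass on-disk path; the thread above reports that raising it
# avoided the exception, at the cost of more heap during compaction.
in_memory_compaction_limit_in_mb: 256
```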
Your general assessments of the limitations of the Ec2 snitches seem to
match what we've found. We're currently using the
GossipingPropertyFileSnitch in our VPCs. This is also the snitch to use if
you ever want to have a DC in EC2 and a DC with another hosting provider.
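For reference, GossipingPropertyFileSnitch reads each node's DC and rack from cassandra-rackdc.properties (the names below are illustrative):

```properties
# conf/cassandra-rackdc.properties -- dc/rack names are illustrative
dc=us-east-vpc
rack=rack1
```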
-Peter
On Mon, Jun 9,
though it could be.
Mitchell
*From:* Peter Sanford [mailto:psanf...@retailnext.net]
*Sent:* Monday, June 09, 2014 7:19 AM
*To:* user@cassandra.apache.org
*Subject:* Re: VPC AWS
Your general assessments of the limitations of the Ec2 snitches seem to
match what
On Wed, Jun 11, 2014 at 10:12 AM, Jeremy Jongsma jer...@barchart.com
wrote:
The big problem seems to have been requesting a large number of row keys
combined with a large number of named columns in a query. 20K rows with 20K
columns destroyed my cluster. Splitting it into slices of 100
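The slicing idea can be sketched like this, with a hypothetical `fetch_rows` standing in for whatever client call issues the actual multiget:

```ruby
# Hypothetical sketch: cap the number of row keys per multiget request
# instead of asking for all 20K keys at once. `fetch_rows` stands in for
# the real client call.
def batched_multiget(keys, batch_size = 100)
  keys.each_slice(batch_size).flat_map { |slice| fetch_rows(slice) }
end
```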
On Wed, Jun 11, 2014 at 9:17 PM, Jack Krupansky j...@basetechnology.com
wrote:
Hmmm... that multiple-gets section is not present in the 2.0 doc:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/architecture/architecturePlanningAntiPatterns_c.html
Was that intentional – is that
You should delete the backup files once you have copied them off. Otherwise
they will start to use disk space as the live SSTables diverge from the
snapshots/incrementals.
-psanford
On Sat, Jun 14, 2014 at 10:17 AM, S C as...@outlook.com wrote:
Is it ok to delete files from backups directory
For snapshots, yes. For incremental backups you need to delete the files
yourself.
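As a sketch of the manual pruning (assuming the standard layout of <data_dir>/<keyspace>/<table>/backups; snapshots themselves can be cleared with `nodetool clearsnapshot`):

```shell
# Sketch: delete incremental backup files under every backups/ directory,
# once they have been safely copied off the node. Assumes the layout
# <data_dir>/<keyspace>/<table>/backups.
prune_incremental_backups() {
  find "$1" -type d -name backups -exec sh -c 'rm -f "$1"/*' _ {} \;
}
# e.g. prune_incremental_backups /var/lib/cassandra/data
```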
On Wed, Jun 18, 2014 at 6:28 AM, Marcelo Elias Del Valle
marc...@s1mbi0se.com.br wrote:
Wouldn't it be better to use nodetool clearsnapshot?
[]s
2014-06-14 17:38 GMT-03:00 S C as...@outlook.com:
I am
On Mon, Oct 6, 2014 at 1:56 PM, DuyHai Doan doanduy...@gmail.com wrote:
Isn't there a video from Ooyala at a past Cassandra Summit demonstrating the
use of Cassandra for text search with trigrams? AFAIK they were storing a
kind of bitmap to perform OR/AND operations on trigrams.
That sounds
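The trigram idea can be sketched with plain sets, as a hypothetical in-memory stand-in for the bitmap approach described above:

```ruby
require 'set'

# Hypothetical in-memory sketch of trigram search: index each term under
# its 3-grams, then intersect (AND) the posting sets of the query's
# trigrams. Real implementations store bitmaps instead of Ruby Sets.
def trigrams(s)
  (0..s.length - 3).map { |i| s[i, 3] }
end

def build_index(terms)
  index = Hash.new { |h, k| h[k] = Set.new }
  terms.each { |t| trigrams(t).each { |g| index[g] << t } }
  index
end

def search(index, query)
  sets = trigrams(query).map { |g| index[g] }
  sets.empty? ? Set.new : sets.reduce(:&)
end
```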
Hmm. I was able to reproduce the behavior with your Go program on my dev
machine (C* 2.0.12). I was hoping it was going to just be an unchecked
error from the .Exec() or .Scan(), but that is not the case for me.
The fact that the issue seems to happen on loop iteration 10, 100 and 1000
is pretty
This project implements the Graphite API on top of Cassandra and can be
used from Grafana:
https://github.com/pyr/cyanite
On Wed, May 9, 2018 at 10:39 AM dba newsql wrote:
> Any one use Cassandra as data storage for Grafana as timeseries DB?
>
> Thanks,
> Fay
>