i think what he means is...do you know what day the 'oldest' day is? eg
if you have a rolling window of say 2 weeks, structure your query so
that your slice range only goes back 2 weeks, rather than to the
beginning of time. this would avoid iterating over all the tombstones
from prior to
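The rolling-window idea above can be sketched in a few lines (pure Python; the slice-start computation is my illustration, and the client call that would consume it is assumed, not shown):

```python
from datetime import datetime, timedelta

def slice_start(window_days=14, now=None):
    """Lower bound for a rolling-window slice query: only read back
    window_days, so tombstones older than the window are never iterated."""
    now = now or datetime.utcnow()
    return now - timedelta(days=window_days)

# Pass this as the slice start instead of an empty start (an empty start
# scans the row from the beginning of time, tombstones included).
start = slice_start()
```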
ok great, thanks ed, that's really helpful.
just wanted to make sure i wasn't missing something fundamental.
On 13/11/2011 23:57, Ed Anuff wrote:
Yes, correct, it's not going to clean itself. Using your example with
a little more detail:
1 ) A(T1) reads previous location (T0,L0) from
I have dug a bit more to try to find the root cause of the error, and I have
some more information.
It seems that it all started after I upgraded Cassandra from 0.8.x to 1.0.0
When I do an incr on the CLI I also get a timeout.
row_cache_save_period_in_seconds is set to 60sec.
Could be a problem
Check if Cassandra secondary index meets your requirement.
Thank you,
Jaydeep
From: Aklin_81 asdk...@gmail.com
To: user user@cassandra.apache.org
Sent: Sunday, 13 November 2011 12:32 PM
Subject: Fast lookups for userId to username and vice versa
I need to
Thanks for your feedback Kolar.
Well to be honest I was thinking of using that connection in production,
not for a backup node.
My Cassandra deployment works just like an expensive file caching and
replication setup - I mean, all I use it for is to replicate some 5 million files
of 2M each across a few
Well to be honest I was thinking of using that connection in
production, not for a backup node.
For production, there are several problems: added network latency, which
is inconsistent and varies greatly during the day; sometimes you will face
network lags which will break the cluster for a while
Broadband here is fairly stable; to be honest I don't remember the last time I
had problems such as larger-than-expected latency or downtime - ISP Bethere/UK
My application can cope fine with up to 10 min lag (data
freshness), however taking your input into consideration I agree with you,
so don't
From the log output it seems that during hinted handoff delivery, compaction
is kicked off too soon. There needs to be some delay for the flusher to write
the sstable.
INFO [GossipStage:1] 2011-11-14 13:16:03,933 Gossiper.java (line 745)
InetAddress /***.99.40 is now UP
INFO [HintedHandoff:1] 2011-11-14
I am new to cassandra. I searched for random write examples in the wiki
(http://wiki.apache.org/cassandra/ClientOptions) and the mailing list, but
did not find a similar one. My question is - does cassandra support
random write access? Is there any code example that explains this? Or
any doc that may provide
On Mon, Nov 14, 2011 at 1:21 AM, Michael Vaknine micha...@citypath.com wrote:
Hi,
After configuring the encryption on Cassandra.yaml I get this error when
upgrading from 1.0.0 to 1.0.2
Attached the log file with the errors.
https://issues.apache.org/jira/browse/CASSANDRA-3466
-Brandon
Does this mean that I have to wait for 1.0.3?
-Original Message-
From: Brandon Williams [mailto:dri...@gmail.com]
Sent: Monday, November 14, 2011 3:51 PM
To: user@cassandra.apache.org
Cc: cassandra-u...@incubator.apache.org
Subject: Re: Upgrade Cassandra Cluster to 1.0.2
On Mon, Nov
On Mon, Nov 14, 2011 at 7:53 AM, Michael Vaknine micha...@citypath.com wrote:
Does this mean that I have to wait for 1.0.3?
In the meantime you can just delete the hints and rely on read repair
or antientropy repair if you're concerned about the consistency of
your replicas.
-Brandon
Hi,
As of Nov. 9, 2011 Amazon added us-west-2 (US West Oregon) region:
http://aws.typepad.com/aws/2011/11/now-open-us-west-portland-region.html
In looking at the EC2Snitch code (in the 0.8.x and 1.0.x branches), I see
it determining which data center (which I think is supposed to be
equivalent
Well,
I tried to delete the hints on the failed cluster but I could not start it;
I got other errors such as
ERROR [MutationStage:34] 2011-11-14 15:37:43,813
AbstractCassandraDaemon.java (line 133) Fatal exception in thread
Thread[MutationStage:34,5,main]
java.lang.StackOverflowError
at
I am new to cassandra. I searched for random write examples
you can access cassandra data at any node and keys can be accessed at
random.
It may be the case that your CL is the issue. You are writing it at
ONE, which means that out of the 4 replicas of that key (two in each
data center), you are only putting it on one of them. When you read at
CL ONE, it only looks at a single replica to see if the data is there.
In other words, if
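The consistency arithmetic above can be sketched in a few lines (my own illustration of the standard W + R > N overlap rule, not Cassandra code):

```python
def overlap_guaranteed(n, w, r):
    """A read is guaranteed to see the latest write only when every
    write replica set must intersect every read replica set: W + R > N."""
    return w + r > n

# Two data centers x 2 replicas = 4 total, writing and reading at ONE:
print(overlap_guaranteed(4, 1, 1))   # False: stale reads are possible
# QUORUM both ways (3 of 4) always overlaps:
print(overlap_guaranteed(4, 3, 3))   # True
```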
You should be able to do it as long as you shut down the whole cluster
for it:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Upgrading-to-1-0-tp6954908p6955316.html
On 11/13/2011 02:14 PM, Timothy Smith wrote:
Due to some application dependencies I've been holding off on a
you can access cassandra data at any node and keys can be accessed at
random.
Including individual columns in a row.
--
/ Peter Schuller (@scode, http://worldmodscode.wordpress.com)
Hi all,
Quick note about our Cassandra London 1st birthday party!
We'll be looking at what's changed in Cassandra over the past year,
with talks on feature improvements, performance and Hadoop
integration. Please come along if you're UK-based! It's a great chance
to meet other Cassandra users.
Hi
While testing the proposed 1.0.3 version I got the following exception
while running repair: (StackOverflowError)
http://pastebin.com/raw.php?i=35Rt7ryB
The affected column family is defined like this:
create column family FileStore
with comparator=UTF8Type and key_validation_class =
On Mon, Nov 14, 2011 at 8:06 AM, Michael Vaknine micha...@citypath.com wrote:
Well,
I tried to delete the hints on the failed cluster but I could not start it;
I got other errors such as
ERROR [MutationStage:34] 2011-11-14 15:37:43,813
AbstractCassandraDaemon.java (line 133) Fatal exception
Additionally, you have the barely-documented but nasty behavior of
Hotspot forcing full GCs when allocateDirect reaches
-XX:MaxDirectMemorySize.
On Sun, Nov 13, 2011 at 2:09 PM, Peter Schuller
peter.schul...@infidyne.com wrote:
I would like to know it also - actually it should be similar, plus
Hi,
Sorry for the intrusion.
I was speaking to some of the LinkedIn engineers at ApacheCon last week
to see how to get Cassandra into the linkedin skills page [1].
They claim if more people add Cassandra as a skill in their profile then it
will show up. So my request
is if you use
Hello everyone,
We're using the bulk loader to load data every day to Cassandra. The
machines that use the bulkloader are different every day, so their IP
addresses change. When I do describe cluster I see all the unreachable
nodes that keep piling up for the past few days. Is there a way to remove
Thanks for the note. Ideally I would not like to keep track of what is
the oldest indexed date,
because this means that I'm already creating a bit of infrastructure on
top of my database,
with attendant referential integrity problems.
But I suppose I'll be forced to do that. In addition, I'll
Hello Giannis,
Can you share a little bit on how to use the bulk loader? We're considering
using the bulk loader for a use case.
Thanks,
Mike
From: Giannis Neokleous [mailto:gian...@generalsentiment.com]
Sent: Monday, November 14, 2011 2:50 PM
To: user@cassandra.apache.org
Subject: BulkLoader
Hi
I'm getting this error when I try to run describe cluster:
[] describe cluster;
Cluster Information:
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
Error retrieving data: Internal error processing
I am trying to run the word count example and looking at the source, I
don't see where it knows which job tracker to connect to.
1. Would I be correct in guessing that I have to run hadoop jar ?
(just wondering why I don't see a port/hostname for job tracker in the
WordCount.java file)
This
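If memory serves, `hadoop jar` picks the job tracker up from the client-side Hadoop configuration on the submitting machine, not from anything in WordCount.java; a hypothetical mapred-site.xml fragment (host name is made up):

```xml
<!-- mapred-site.xml on the machine running `hadoop jar`; the client
     configuration, not the example source, names the job tracker -->
<property>
  <name>mapred.job.tracker</name>
  <value>jobtracker.example.com:8021</value>
</property>
```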
- It would be super cool if all of that counter work made it possible
to support other atomic data types (sets? CAS? just pass an
associative/commutative function to apply).
- Again with types, pluggable type specific compression.
- Wishy washy wish: Simpler elasticity. I would like to go from
6 to 8 to 7 nodes
Re Simpler elasticity:
Latest opscenter will now rebalance cluster optimally
http://www.datastax.com/dev/blog/whats-new-in-opscenter-1-3
/plug
-Jake
On Mon, Nov 14, 2011 at 7:27 PM, Chris Burroughs
chris.burrou...@gmail.comwrote:
- It would be super cool if all of that counter work made it
On Mon, Nov 14, 2011 at 4:44 PM, Jake Luciani jak...@gmail.com wrote:
Re Simpler elasticity:
Latest opscenter will now rebalance cluster optimally
http://www.datastax.com/dev/blog/whats-new-in-opscenter-1-3
/plug
Does it cause any impact on reads and writes while re-balance is in
progress?
+1 on coprocessors
On Mon, Nov 14, 2011 at 6:51 PM, Mohit Anchlia mohitanch...@gmail.comwrote:
On Mon, Nov 14, 2011 at 4:44 PM, Jake Luciani jak...@gmail.com wrote:
Re Simpler elasticity:
Latest opscenter will now rebalance cluster optimally
There are 4 jobs submitted by the wordcount cassandra example and the first
one fails and the other 3 all pass and work with results.
The first job I noticed is looking for column name text0 due to i being 0
in the loop. The exception is not going through the wordcount code at all
though, but
Well, by editing
src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
in the version 1.0.2 cassandra src, just before the
totalRead++;
KeySlice ks = rows.get(i++);
SortedMap<ByteBuffer, IColumn> map = new TreeMap<ByteBuffer, IColumn>(comparator);
I added
oh yeah, one more BIG one: in-memory writes with async write-behind to
disk, like cassandra does for speed.
So if you have atomic locking, it writes to the primary node (memory) and
some other node (memory) and returns success to the client, then
asynchronously writes to disk later. This proves to
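The write-behind pattern described above can be sketched as follows (my own illustration, not Cassandra's actual write path; Cassandra gets durability from its commit log rather than pure write-behind):

```python
import queue
import threading

class WriteBehind:
    """Acknowledge a write once it is in memory; persist asynchronously."""
    def __init__(self, flush):
        self.memory = {}
        self.q = queue.Queue()
        self.flush = flush                 # e.g. appends to a log file
        threading.Thread(target=self._drain, daemon=True).start()

    def put(self, key, value):
        self.memory[key] = value           # visible to readers immediately
        self.q.put((key, value))           # made durable later
        return "ok"                        # ack before any disk I/O

    def get(self, key):
        return self.memory.get(key)

    def _drain(self):
        while True:
            k, v = self.q.get()
            self.flush(k, v)
            self.q.task_done()
```

Calling `q.join()` blocks until the background thread has flushed everything queued so far, which is how the "asynch then writes to disk later" step completes.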
+1 on co-processors.
Edward
Giannis,
From here:
http://wiki.apache.org/cassandra/Operations#Removing_nodes_entirely
Have you tried nodetool removetoken ?
Ernie
On Mon, Nov 14, 2011 at 4:20 PM, mike...@thomsonreuters.com wrote:
Hello Giannis,
Can you share a little bit on how to use the bulk loader ?
Can you describe how you did the upgrade on these machines? You may
still have some old jars on the classpath.
On Mon, Nov 14, 2011 at 4:03 PM, Silviu Matei silvma...@gmail.com wrote:
Hi
I'm getting this error when I try to run describe cluster:
[] describe cluster;
Cluster Information:
I am running java version:
java version 1.6.0_20
OpenJDK Runtime Environment (IcedTea6 1.9.7) (6b20-1.9.7-0ubuntu1~10.04.1)
OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)
I have tried to run with -Xss96k or -Xss256k or -Xss320 but it still gives me
the error.
The node has 8GB memory and %GB
It may be the case that your CL is the issue. You are writing it at
ONE, which means that out of the 4 replicas of that key (two in each
data center), you are only putting it on one of them.
cassandra will always try to replicate a key to all available replicas.
Under normal conditions, if you do