Are there any official recommendations, validations/tests done with
Cassandra >= 2.0 on Java 8?
Regards
/Fredrik
Hi,
I'm working on an application using a Cassandra (2.1.0) cluster where
- our entire dataset is around 22GB
- each node has 48GB of memory but only a single (mechanical) hard disk
- in normal operation we have a low level of writes and no reads
- very
I'm having problems understanding how incremental repairs are supposed to
be run.
If I try to do nodetool repair -inc, Cassandra will complain that "It is
not possible to mix sequential repair and incremental repairs." However, it
seems that running nodetool repair -inc -par does the job, but I
On Wed, Oct 22, 2014 at 2:39 PM, Juho Mäkinen juho.maki...@gmail.com
wrote:
I'm having problems understanding how incremental repairs are supposed to
be run.
If I try to do nodetool repair -inc, Cassandra will complain that "It is
not possible to mix sequential repair and incremental repairs."
Hi,
I have a table that I dropped, recreated with two clustering primary keys (only
had a single partition key before), and loaded previous data into the table.
I started noticing that a single node of mine was not able to do `ORDER BY`
executions on the table (while the other nodes were).
I assume that you are restoring snapshot data onto a new ring with the same
topology (i.e. if the old ring has n nodes, your new ring has n nodes
also). I discussed this with a consultant from DataStax, and he told me that I
need to make sure each new node in the new ring has the same token
Hi again,
Follow-up: The incorrect schema propagated to other servers. Luckily this was a
smaller table. I dropped the table and noticed that no sstables were removed. I
then created the table again, and truncated it instead. This removed all the
sstables and things look good now.
If you're using 2.1.0, the row cache has been redesigned. How did you
configure it? There are some new parameters to specify how many CQL rows
you want to keep in the cache:
http://www.datastax.com/dev/blog/row-caching-in-cassandra-2-1
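For illustration, the 2.1-style row cache is configured per table in CQL
(the keyspace, table, and values below are hypothetical, not from this thread):

```sql
-- Hypothetical keyspace/table. In 2.1 the per-table setting limits how
-- many CQL rows per partition are kept in the row cache.
ALTER TABLE myks.mytable
  WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'};
```

The overall cache capacity is still set globally via row_cache_size_in_mb in
cassandra.yaml; the per-table setting only controls which rows are eligible.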
On Wed, Oct 22, 2014 at 1:34 PM, Thomas Whiteway
Sorry, I copy-and-pasted the wrong variable name. I meant to copy and paste
streaming_socket_timeout_in_ms. So my question should be:
streaming_socket_timeout_in_ms is the timeout per operation on the streaming
socket. The docs recommend not setting it too low (because a timeout causes
Question about the read path in Cassandra. If a partition/row is in the
Memtable and is being actively written to by other clients, will a READ of
that partition also have to hit SStables on disk (or in the page cache)? Or
can it be serviced entirely from the Memtable?
If you select all
First, did you run a query trace?
I recommend Al Tobey's pcstat util to determine if your files are in
the buffer cache: https://github.com/tobert/pcstat
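For reference, a minimal pcstat invocation against a node's sstable data files
might look like this (the data directory path is an assumption; adjust for
your install):

```shell
# Report what fraction of each sstable data file is resident in the
# Linux page cache. Path and keyspace/table names are illustrative.
pcstat /var/lib/cassandra/data/myks/mytable-*/*-Data.db
```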
On Wed, Oct 22, 2014 at 4:34 AM, Thomas Whiteway
thomas.white...@metaswitch.com wrote:
Hi,
I’m working on an application using a
No. Consider a scenario where you supply a timestamp a week in the future,
flush it to an sstable, and then do a write with the current timestamp. The
record on disk will have a timestamp greater than the one in the memtable.
On Wed, Oct 22, 2014 at 9:18 AM, Donald Smith
I was using the pre-2.1.0 configuration scheme of setting caching to
‘rows_only’ on the column family. I’ve tried runs with row_cache_size_in_mb
set to both 16384 and 32768.
I don’t think the new settings would have helped in my case. My understanding
of the rows_per_partition setting is
On the Cassandra IRC channel I discussed this question. I learned that the
timestamp in the Memtable may be OLDER than the timestamp in some SSTable
(e.g., due to hints or retries). So there’s no guarantee that the Memtable has
the most recent version.
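A minimal sketch of that reconciliation rule (function and variable names are
mine, not Cassandra's): the read path merges the memtable cell with any
sstable cells for the same column and keeps the one with the highest write
timestamp, so a future-dated cell on disk beats a newer write in the memtable.

```python
# Sketch of last-write-wins reconciliation, as on Cassandra's read path.
# A cell is modeled as a (value, write_timestamp) pair.

def reconcile(*cells):
    """Return the cell with the greatest write timestamp."""
    return max(cells, key=lambda c: c[1])

memtable_cell = ("new-value", 1000)    # the most recent actual write
sstable_cell = ("future-value", 5000)  # flushed with a timestamp a week ahead

winner = reconcile(memtable_cell, sstable_cell)
# The sstable cell wins despite being written first, which is why the
# memtable alone cannot always answer a read correctly.
```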
But there may be cases, they say, in
Hey folks,
I am sure that this is a simple oversight on my part, but I just can not see
the forest for the trees. Any ideas on this one?
copy strevus_data.strevus_metadata_data to
'c:/temp/strevus/export/strevus_data.strevus_metadata_data.csv';
Bad Request: Undefined name
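If the "Undefined name" comes from a column that no longer exists in the
current schema (one possibility after a drop/recreate), a COPY that lists its
columns explicitly would surface the mismatch; the column names here are
hypothetical:

```sql
-- Hypothetical column list: name only columns present in the current
-- schema so COPY cannot reference an undefined name.
COPY strevus_data.strevus_metadata_data (id, name, value)
  TO 'c:/temp/strevus/export/strevus_data.strevus_metadata_data.csv';
```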
Shabab,
Apologies for the late answer.
On Mon, Oct 6, 2014 at 2:38 PM, shahab shahab.mok...@gmail.com wrote:
But do you mean that inserting columns with large size (let's say a text
with 20-30 K) is potentially problematic in Cassandra?
AFAIK, the size _warning_ you are getting relates to
On Wed, Oct 22, 2014 at 4:34 AM, Thomas Whiteway
thomas.white...@metaswitch.com wrote:
I’m working on an application using a Cassandra (2.1.0) cluster where
- our entire dataset is around 22GB
- each node has 48GB of memory but only a single (mechanical)
hard disk
-
What's your schema for that table, and what version of Cassandra are you
using?
On Wed, Oct 22, 2014 at 12:25 PM, Jeremy Franzen jeremy.fran...@strevus.com
wrote:
Hey folks,
I am sure that this is a simple oversight on my part, but I just can not
see the forest for the trees. Any ideas
On Wed, Oct 22, 2014 at 7:58 AM, Li, George guangxing...@pearson.com
wrote:
I assume that you are restoring snapshot data onto a new ring with the
same topology (i.e. if the old ring has n nodes, your new ring has n nodes
also). I discussed this with a consultant from DataStax, and he told me that
On Wed, Oct 22, 2014 at 5:47 AM, Marcus Eriksson krum...@gmail.com wrote:
No, if you get a corrupt sstable for example, you will need to run an
old-style repair on that node (without -inc).
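In other words (2.1 syntax assumed, per the earlier observation in this thread
that -inc needs -par):

```shell
# Day-to-day: incremental, parallel repair.
nodetool repair -par -inc

# After replacing a corrupt sstable: fall back to a full, old-style
# repair on the affected node, without the -inc flag.
nodetool repair
```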
As a general statement, if you get a corrupt SSTable, restoring it from a
backup (with the node down)
On 10/22/2014 02:42 AM, Fredrik wrote:
Are there any official recommendations, validations/tests done with
Cassandra >= 2.0 on Java 8?
We've been running JDK8 dtest jenkins jobs on the cassandra-2.1 branch
for a while; I recently added a trunk_dtest_jdk8 job, and I just now
added unit test
On 10/22/2014 03:14 PM, Michael Shuler wrote:
On 10/22/2014 02:42 AM, Fredrik wrote:
Are there any official recommendations, validations/tests done with
Cassandra >= 2.0 on Java 8?
We've been running JDK8 dtest jenkins jobs on the cassandra-2.1 branch
for a while, I recently added a
We are dropping the COPY command and just going with the sstableloader command
instead. Sorry for the noise on the channel.
Jeremy J. Franzen
VP Operations | Strevus
jeremy.fran...@strevus.com
T: +1.415.649.6234 | M: +1.408.726.4363
Compliance Made Easy.
... . -- .--. . .-. / ..-. ..
For the sake of fixing a potential bug, would you mind sharing your schema
and Cassandra version anyway?
On Wed, Oct 22, 2014 at 4:49 PM, Jeremy Franzen jeremy.fran...@strevus.com
wrote:
We are dropping the COPY command and just going with the sstableloader
command instead. Sorry for the
Hi,
I am hoping to get the word out that we are looking for a Cassandra Developer
(http://careers.choicehotels.com/careers/jobDetails.html?jobTitle=Cassandra+Developer)
for a full time position at our office in Scottsdale, AZ. Please let me know
what I can do to let folks know we are looking :)