We store objects that are a couple of tens of K, sometimes 100K, and we
store quite a few of these per row, sometimes hundreds of thousands.
One problem we encountered early was that these rows would become so big
that C* couldn't compact the rows in-memory and had to revert to slow
two-pass
Hi,
Raised here: https://issues.apache.org/jira/browse/CASSANDRA-5737
Am thinking this might be either a kernel bug, or some strange triggering combo
on the mmap usage
Thanks,
Glyn
From: aaron morton aa...@thelastpickle.com
Reply-To:
Hello everybody,
The thread below makes me wonder: does RF matter when using sstableloader?
My assumption was that sstableloader will take care of RF when the streaming is
done, but I just wanted to cross-check. We are currently moving data from an
RF=1 to an RF=3 cluster by using sstableloader.
Pls explain why and how.
Why and how what?
Not encoding blobs into strings is the preferred way because that's obviously
more efficient (in speed and space), since you don't do any encoding pass.
As for how, "use a prepared statement" was the how. What are the exact
lines of code to use to do
Thanks Aaron
On 7/9/13, aaron morton aa...@thelastpickle.com wrote:
Can I just copy data files for the required keyspaces, create schema
manually and run repair?
If you have something like RF 3 and 3 nodes then yes, you can copy the data
from one node in the source cluster to all nodes in the
Hi,
C*1.2.2.
I have removed 4 nodes with nodetool decommission. Two of them left
with no issue, while the other 2 nodes remained in the LEAVING state even
after streaming their data.
The only thing specific to these 2 nodes is that they had a lot of hints
pending. Hints from a node that couldn't come
Hi,
Using C*1.2.2.
We recently dropped our 18 m1.xlarge (4 CPU, 15 GB RAM, 4 RAID-0 disks)
servers to get 3 hi1.4xlarge (16 CPU, 60 GB RAM, 2 RAID-0 SSDs) servers
instead, for about the same price.
We tried it after reading a benchmark published by Netflix.
It is awesome and I recommend it to
Thank you for your patience. That is what I expected.
PS. Do you know any direct ways in CQL to handle BLOB, just like DataStax
Java driver?
On Tue, Jul 9, 2013 at 4:53 PM, Sylvain Lebresne sylv...@datastax.com wrote:
Pls explain why and how.
Why and how what?
Not encoding blobs into
Has anyone tried binding a prepared statement for an IN query?
For example,
protected PreparedStatement getUserInList =
    prepare(QueryBuilder.select(ADDRESS, USER)
        .from(USER_BY_ADDRESS_COLUMN_FAMILY)
        .where(QueryBuilder.in(ADDRESS, QueryBuilder.bindMarker())));
Object[] addressList
Do you know any direct ways in CQL to handle BLOB, just like DataStax
Java driver?
Well, the CQL3 specification explicitly says that there is no way to encode a
blob in a CQL request other than as a hex string:
http://cassandra.apache.org/doc/cql3/CQL.html#constants
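For illustration, that hex-constant form looks like this in a CQL3 statement (the table and values below are invented, not from the thread):

```cql
-- Hypothetical table; 0x... is the CQL3 blob constant syntax.
CREATE TABLE images (id int PRIMARY KEY, data blob);
INSERT INTO images (id, data) VALUES (1, 0xcafebabe);
```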
On Tue, Jul 9, 2013 at 6:40 PM, Ollif
What you are trying to do is not currently supported:
https://issues.apache.org/jira/browse/CASSANDRA-4210.
That being said, you get that exact error because what you pass to bind()
is an array, but bind() is a variadic method, so this
is equivalent to writing:
ResultSet results =
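The variadic pitfall described above can be demonstrated in plain Java, independent of the Cassandra driver (the method and variable names here are illustrative, not the driver's API):

```java
public class VariadicDemo {
    // Stand-in for a variadic method like Session.execute(String, Object...)
    static int count(Object... values) {
        return values.length;
    }

    public static void main(String[] args) {
        Object[] addressList = { "a@x.com", "b@x.com", "c@x.com" };
        // Passing an Object[] to a variadic parameter spreads it: the
        // method sees three separate arguments, not one list argument.
        System.out.println(count(addressList));          // 3
        // Casting to Object passes the whole array as a single argument.
        System.out.println(count((Object) addressList)); // 1
    }
}
```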
So was the point of breaking into 36 parts to bring each row to the 64 or
128mb threshold?
On Tue, Jul 9, 2013 at 3:18 AM, Theo Hultberg t...@iconara.net wrote:
We store objects that are a couple of tens of K, sometimes 100K, and we
store quite a few of these per row, sometimes hundreds of
Hi all,
I am trying to alter a column family to change gc_grace_seconds, and now,
any of the properties
The sequence:
use ks ;
alter table CF with gc_grace_seconds=864000 ;
When listing the CF, gc_grace_seconds is set to 0; after
running the CLI, gc_grace_seconds is still set to 0.
I tried
On Tue, Jul 9, 2013 at 10:08 AM, Langston, Jim
jim.langs...@compuware.com wrote:
I am trying to alter a column family to change gc_grace_seconds, and
now,
any of the properties
The sequence:
use ks ;
alter table CF with gc_grace_seconds=864000 ;
When listing the CF, gc_grace_seconds
Yes, by splitting the rows into 36 parts it's very rare that any part gets
big enough to impact the cluster's performance. There are still rows that
are bigger than the in-memory compaction limit, but when it's only some it
doesn't matter as much.
T#
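A minimal sketch of that kind of row splitting (the shard count of 36 comes from the thread; the key scheme and names are invented for illustration):

```java
public class RowSharding {
    static final int SHARDS = 36;

    // Derive a deterministic shard suffix from the column name so the
    // parts of one logical row spread evenly over 36 physical rows,
    // keeping each physical row under the in-memory compaction limit.
    static String shardedRowKey(String logicalKey, String columnName) {
        int shard = Math.floorMod(columnName.hashCode(), SHARDS);
        return logicalKey + ":" + shard;
    }

    public static void main(String[] args) {
        // The same column always lands in the same physical row part.
        System.out.println(shardedRowKey("user42", "object-123"));
    }
}
```

Reading the whole logical row back then means querying all 36 physical rows, which is the trade-off the thread alludes to.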
On Tue, Jul 9, 2013 at 5:43 PM, S Ahmed
On Tue, Jul 9, 2013 at 12:36 AM, Ananth Gundabattula
agundabatt...@threatmetrix.com wrote:
The thread below makes me wonder: does RF matter when using
sstableloader? My assumption was that sstableloader will take care of RF when
the streaming is done but just wanted to cross check. We are
I'm on version 1.1.2
The nodetool command by itself
# nodetool netstats -h localhost
Mode: NORMAL
Not sending any streams.
Not receiving any streams.
Pool Name            Active   Pending   Completed
Commands                n/a         0        5909
Responses
Hi Aaron,
Can he not specify all 256 tokens in the YAML of the new
cluster and then copy sstables?
I know it is a bit ugly but should work.
Sankalp
On Tue, Jul 9, 2013 at 3:19 AM, Baskar Duraikannu
baskar.duraikannu...@gmail.com wrote:
Thanks Aaron
On 7/9/13, aaron morton
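A sketch of what Sankalp suggests, assuming the 1.2-era cassandra.yaml options: with vnodes, initial_token accepts a comma-separated list, so each new node could pin the exact tokens the matching source node owned before its sstables are copied (placeholder values below, not real tokens):

```yaml
# cassandra.yaml on the new node
num_tokens: 256
# Comma-separated list of all 256 tokens owned by the source node.
initial_token: <token1>,<token2>,...,<token256>
```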
On Mon, Jul 8, 2013 at 5:58 PM, Faraaz Sareshwala fsareshw...@quantcast.com
wrote:
What does Cassandra do when it is at its data capacity (disk drives and
memtables are full) and writes continue to pour in? My intuition says that
Cassandra won't
be able to handle the new writes (they will
On Tue, Jul 9, 2013 at 10:26 AM, Robert Coli rc...@eventbrite.com wrote:
nodetool -h localhost netstats |grep SCHEMA |sort | uniq -c | sort -n
Sorry, I meant gossipinfo and not netstats.
With the right command, do you see that all nodes in the cluster have the
same schema version?
I'm on
On the command (4 node cluster):
nodetool gossipinfo -h localhost |grep SCHEMA |sort | uniq -c | sort -n
4 SCHEMA:60edeaa8-70a4-3825-90a5-d7746ffa8e4d
On the second part, I have the same Cassandra version in staging and
production, with staging being a smaller cluster. Not sure what you
Hi,
We recently switched from size-tiered compaction to Leveled compaction. We made
this change because our rows are frequently updated. We also have a lot of data.
With size-tiered compaction, we had about 5-10 sstables per CF, so with about
15 CFs we had about 100 sstables.
With a sstable
We run with 128 MB; some run with 256 MB. Leveled compaction creates fixed-size
sstables by design, so this is the only way to lower the file count.
On Tue, Jul 9, 2013 at 2:56 PM, PARASHAR, BHASKARJYA JAY bp1...@att.com wrote:
Hi,
We recently switched from size-tiered compaction to
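For reference, a sketch of raising that per-sstable target size for one table, assuming the CQL3 compaction-map syntax of that era (the table name is invented):

```cql
-- Hypothetical table; raises the Leveled compaction
-- sstable target size to 256 MB to lower the file count.
ALTER TABLE mytable
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 256};
```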
Hi,
In a quorum read, my understanding is that Cassandra gets a digest of the
object to be read from all nodes, and then chooses the fastest node for
retrieving the data (if the quorum is met).
1) Is it possible to log which node provides the real data in a read
operation?
2) Also, is it
Thanks Jake. Guess we will have to increase the size.
From: Jake Luciani [mailto:jak...@gmail.com]
Sent: Tuesday, July 09, 2013 2:05 PM
To: user
Subject: Re: Leveled Compaction, number of SStables growing.
We run with 128mb some run with 256mb. Leveled compaction creates fixed sized
sstables
Blair, thanks for the clarification! My friend actually just told me the
same..
Any idea on how to do logging??
Thanks!
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Logging-Cassandra-Reads-Writes-tp7588893p7588896.html
Sent from the
No idea on the logging, I'm pretty new to Cassandra.
Regards,
Blair
On Jul 9, 2013, at 12:50 PM, hajjat haj...@purdue.edu wrote:
Blair, thanks for the clarification! My friend actually just told me the
same..
Any idea on how to do logging??
Thanks!
There is a new tracing feature in Cassandra 1.2 that might help you with
this.
On Tue, Jul 9, 2013 at 1:31 PM, Blair Zajac bl...@orcaware.com wrote:
No idea on the logging, I'm pretty new to Cassandra.
Regards,
Blair
On Jul 9, 2013, at 12:50 PM, hajjat haj...@purdue.edu wrote:
Blair,
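As a sketch of the suggestion above, tracing can be switched on per session in cqlsh (Cassandra 1.2+), and the printed trace shows which replica actually served the read (the keyspace and table names are invented):

```cql
-- In cqlsh: trace the requests that follow.
TRACING ON;
SELECT * FROM myks.mytable WHERE id = 42;
-- cqlsh then prints the trace events, including which replica
-- served the data read and which replicas answered with digests.
```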
I'm curious because we are experimenting with a very similar configuration,
what basis did you use for expanding the index_interval to that value? Do
you have before and after numbers or was it simply reduction of the heap
pressure warnings that you looked for?
thanks,
Mike
On Tue, Jul 9, 2013
Thanks Sankalp...I will look at these.
From: sankalp kohli [mailto:kohlisank...@gmail.com]
Sent: Tuesday, July 09, 2013 3:22 PM
To: user@cassandra.apache.org
Subject: Re: Leveled Compaction, number of SStables growing.
Do you have a lot of sstables in L0?
Since you moved from size-tiered
Since you moved from size-tiered compaction, all your sstables are in L0.
You might be hitting this. Copied from code.
// LevelDB gives each level a score of how much data it contains vs its
ideal amount, and
// compacts the level with the highest score. But this falls apart
spectacularly
On Tue, Jul 9, 2013 at 11:52 AM, Langston, Jim jim.langs...@compuware.com
wrote:
On the command (4 node cluster):
nodetool gossipinfo -h localhost |grep SCHEMA |sort | uniq -c | sort -n
4 SCHEMA:60edeaa8-70a4-3825-90a5-d7746ffa8e4d
If your schemas actually agree (and given that
Thank you very much for your response. My comments on your email follow.
Att.
*Rodrigo Felix de Almeida*
LSBD - Universidade Federal do Ceará
Project Manager
MBA, CSM, CSPO, SCJP
On Mon, Jul 8, 2013 at 6:05 PM, Robert Coli rc...@eventbrite.com wrote:
On Sat, Jul 6, 2013 at 1:50 PM,
Ok Robert,
I updated the jira issue you have a link to below. It looks like with Cassandra
1.2.5 you cannot use row caching AND column family caching at the same time,
or queries return no rows (when there should be), and I suspect inserts fail
as well, both with no exceptions thrown.
Regards,