I am trying to use CQL3 to create a CF with composite columns:
CREATE TABLE Friends (
    user_id bigint,
    friend_id bigint,
    status int,
    source int,
    created timestamp,
    lastupdated timestamp,
    ...
    relation)
Thanks.
-Wei
From: Tristan Seligmann mithra...@mithrandi.net
To: user@cassandra.apache.org; Wei Zhu wz1...@yahoo.com
Sent: Wednesday, October 31, 2012 10:47 AM
Subject: Re: Create CF with composite column through CQL 3
On Wed, Oct 31, 2012 at 7:14
I heard about virtual nodes, but they won't be out until 1.2. Is it easy to
convert an existing installation to use virtual nodes?
Thanks.
-Wei
From: aaron morton aa...@thelastpickle.com
To: user@cassandra.apache.org
Sent: Wednesday, October 31, 2012
Hi All,
I am trying to design my schema using composite columns. One thing I am a bit
confused about is how to define the validation_class for a composite column, or is
there a way to define it at all?
For a composite column, I might insert different values based on the column
name; for example,
I will insert
Hi All,
I am benchmarking Cassandra. I have a three-node cluster with RF=3. I
generated 6M rows with sequence numbers from 1 to 6M, so the rows should be
evenly distributed among the three nodes, disregarding the replicas.
I am benchmarking with read-only requests; I generate
Any thoughts?
Thanks.
-Wei
From: Wei Zhu wz1...@yahoo.com
To: Cassandr usergroup user@cassandra.apache.org
Sent: Wednesday, November 7, 2012 12:47 PM
Subject: composite column validation_class question
Hi All,
I am trying to design my schema using
: org.apache.cassandra.locator.SimpleStrategy
Durable Writes: true
Options: [replication_factor:3]
I am really confused by the Read Count number from nodetool cfstats
Really appreciate any hints.
-Wei
From: Wei Zhu wz1...@yahoo.com
To: Cassandr usergroup user@cassandra.apache.org
In my cassandra-env.sh for 1.1.6, there is no setting regarding mx4j at all. I
simply dropped the mx4j jar into the lib folder and enabled JMX from
cassandra-env.sh; I can connect to the default mx4j port 8081 with no problem.
I guess without the mx4j setting, it uses the default port. If you need to
given the fact it's in charge of the
wider columns than the other two.
Thanks.
-Wei
From: Tyler Hobbs ty...@datastax.com
To: user@cassandra.apache.org; Wei Zhu wz1...@yahoo.com
Sent: Saturday, November 10, 2012 3:15 PM
Subject: Re: read request distribution
two?
Thanks for the explanation of QUORUM, it clears a lot of confusion.
-Wei
From: Tyler Hobbs ty...@datastax.com
To: user@cassandra.apache.org; Wei Zhu wz1...@yahoo.com
Sent: Monday, November 12, 2012 12:43 PM
Subject: Re: read request distribution
? (If yes, then 8.33% ownership of
a cluster seems wrong to me. If not, 100% ownership for a node cluster seems
wrong to me.) Am I missing something in the calculation?
Regards,
Ananth
On Fri, Nov 9, 2012 at 4:37 PM, Wei Zhu wz1...@yahoo.com wrote:
Hi All,
I am doing a benchmark
Good information Edward.
For my case, we have a good amount of RAM (76G) and the heap is 8G, so I set the
row cache to 800M as recommended. Our columns are kind of big, so the hit
ratio for the row cache is around 20%; according to DataStax, we might as well
turn off the row cache altogether.
Anyway, for
...@gmail.com
To: user@cassandra.apache.org
CC:
OOM at deserializing 747321th row
On Thu, Nov 15, 2012 at 9:08 AM, Manu Zhang owenzhang1...@gmail.com wrote:
oh, as for the number of rows, it's 165. How long would you expect it to
take to read back?
On Thu, Nov 15, 2012 at 3:57 AM, Wei Zhu wz1
Last time I checked, it took about 120 seconds to load up 21125 keys totaling
about 500M in memory (we have a pretty wide row:). So it's about 4 MB/sec.
Just curious Andras, how can you manage such a big row cache (10-15GB
currently)? The recommendation is to have 10% of your heap as row
FYI,
We are using Hector 1.0-5, which comes with cassandra-thrift 1.0.9 and libthrift
0.6.1. It works with Cassandra 1.1.6.
Totally agree it's a pain to deal with different versions of libthrift. We use
scribe for logging; it's a bit messy over there.
Thanks.
-Wei
We are using Hector now. What is the major advantage of astyanax over Hector?
Thanks.
-Wei
From: Andrey Ilinykh ailin...@gmail.com
To: user@cassandra.apache.org
Sent: Wednesday, November 28, 2012 9:37 AM
Subject: Re: Java high-level client
+1
On Tue, Nov
, November 28, 2012 11:49 AM
To: user@cassandra.apache.org user@cassandra.apache.org, Wei Zhu
wz1...@yahoo.com
Subject: Re: Java high-level client
First of all, it is backed by Netflix. They have used it in production for a long
time, so it is pretty solid. Also they have a nice tool (Priam) which makes
Hi,
I am trying to rename a cluster by following the instruction on Wiki:
Cassandra says ClusterName mismatch: oldClusterName != newClusterName and
refuses to start
To prevent
operator errors, Cassandra stores the name of the cluster in its system
table. If you need to rename a cluster for
I think Aaron meant 300-400GB instead of 300-400MB.
Thanks.
-Wei
- Original Message -
From: Wade L Poziombka wade.l.poziom...@intel.com
To: user@cassandra.apache.org
Sent: Thursday, December 6, 2012 6:53:53 AM
Subject: RE: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered
I know it's probably not a good idea to use multiget, but for my use case it's
the only choice.
I have a question regarding the SlicePredicate argument of multiget_slice.
The SlicePredicate takes a slice_range, which takes start, finish and count. I
suppose start and finish will apply to each
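A small sketch can illustrate the per-row behavior being asked about (plain Python standing in for the Thrift API; column names and data are made up): the same SliceRange predicate is applied independently to each row, so count is a per-row limit, not a total.

```python
# Hypothetical stand-in for SlicePredicate/SliceRange semantics:
# start/finish bound the column names, count caps results PER ROW.

def slice_row(columns, start, finish, count):
    """Columns with start <= name <= finish, in sorted (comparator)
    order; empty start/finish mean unbounded."""
    names = sorted(columns)
    picked = [n for n in names
              if (start == "" or n >= start) and (finish == "" or n <= finish)]
    return picked[:count]

def multiget_slice(rows, keys, start, finish, count):
    # The predicate is evaluated per row, so each row may return
    # up to `count` columns -- not `count` columns in total.
    return {k: slice_row(rows[k], start, finish, count) for k in keys}

rows = {
    "alice": {"a": 1, "b": 2, "c": 3, "d": 4},
    "bob":   {"b": 5, "c": 6},
}
result = multiget_slice(rows, ["alice", "bob"], "b", "", 2)
# Each row independently yields its first two columns >= "b".
```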
Cassandra site for
reference.
You are right, the count is for each individual row.
Thanks.
-Wei
From: Hiller, Dean dean.hil...@nrel.gov
To: user@cassandra.apache.org user@cassandra.apache.org; Wei Zhu
wz1...@yahoo.com
Sent: Monday, December 10, 2012 1:13 PM
I tried to register and got the following page, but haven't received the email
yet. I registered 10 minutes ago.
Thank you for registering to attend:
Is My App a Good Fit for Apache Cassandra?
Details about this webinar have also been sent to your email, including a link
to the webinar's URL.
Never mind, the email arrived after 15 minutes or so...
From: Wei Zhu wz1...@yahoo.com
To: user@cassandra.apache.org user@cassandra.apache.org
Sent: Thursday, December 13, 2012 10:06 AM
Subject: Re: Datastax C*ollege Credit Webinar Series : Create your first
Another potential issue is when some failure happens to some of the mutations.
Are atomic batches in 1.2 designed to resolve this?
http://www.datastax.com/dev/blog/atomic-batches-in-cassandra-1-2
-Wei
- Original Message -
From: aaron morton aa...@thelastpickle.com
To:
Hi,
When I run nodetool compactionstats,
I see the number of pending tasks keeps going up steadily.
I tried to increase the compaction throughput by using
nodetool setcompactionthroughput.
I even tried the extreme of setting it to 0 to disable throttling.
I checked iostats and we have SSD for
I agree that Cassandra's cfhistograms is probably the most bizarre metrics output
I have ever come across, although it's extremely useful.
I believe the offset is actually the metric it has tracked (the x-axis on a
traditional histogram), and the number under each column is how many times that
value has
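That reading of the output can be sketched as follows (a rough illustration with made-up bucket values, not nodetool's actual format): treat the Offset column as bucket boundaries and walk the cumulative counts to estimate percentiles.

```python
# Offsets are bucket boundaries (e.g. latency in microseconds);
# counts are how many events fell into each bucket.

def percentile(offsets, counts, pct):
    """Estimate the pct-th percentile from histogram buckets."""
    threshold = sum(counts) * pct / 100.0
    running = 0
    for offset, count in zip(offsets, counts):
        running += count
        if running >= threshold:
            return offset
    return offsets[-1]

# Hypothetical read-latency histogram.
offsets = [103, 124, 149, 179, 215, 258, 310]
counts  = [  0,  10,  40, 300, 500, 140,  10]
p50 = percentile(offsets, counts, 50)   # median falls in the 215us bucket
p95 = percentile(offsets, counts, 95)
```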
it all because the compactions are usually single threaded.
Dstat will help you measure this.
I hope this helps,
jc
From: Wei Zhu wz1...@yahoo.com
Reply-To: user@cassandra.apache.org user@cassandra.apache.org, Wei Zhu
wz1...@yahoo.com
Date: Friday, January 18, 2013 12:10 PM
To: Cassandr
In the yaml, it has the following setting
# Throttles all outbound streaming file transfers on this node to the
# given total throughput in Mbps. This is necessary because Cassandra does
# mostly sequential IO when streaming data during bootstrap or repair, which
# can lead to saturating the
it
overestimate for zero, only for non-zero amounts.
As for timeouts etc, you will need to look at things like nodetool tpstats to
see if you have pending transactions queueing up.
Jc
From: Wei Zhu wz1...@yahoo.com
Reply-To: user@cassandra.apache.org user@cassandra.apache.org , Wei Zhu
- Original Message -
From: Derek Williams de...@fyrie.net
To: user@cassandra.apache.org, Wei Zhu wz1...@yahoo.com
Sent: Thursday, January 24, 2013 11:06:00 PM
Subject: Re: Cassandra pending compaction tasks keeps increasing
Increasing the stack size in cassandra-env.sh should help you get
threaded,
any chance to speed it up?
* We use the default SSTable size of 5M. Will increasing the SSTable size
help? What will happen if I change the setting after the data is loaded?
Any suggestion is very much appreciated.
-Wei
- Original Message -
From: Wei Zhu wz1...@yahoo.com
Any thoughts?
Thanks.
-Wei
- Original Message -
From: Wei Zhu wz1...@yahoo.com
To: user@cassandra.apache.org
Sent: Friday, January 25, 2013 10:09:37 PM
Subject: Re: Cassandra pending compaction tasks keeps increasing
To recap the problem,
1.1.6 on SSD, 5 nodes, RF = 3, one CF only
on SSD? That is insane. Anything I can do to speed it up?
* 1,837,023,925 to 1,836,694,446 (~99% of original) bytes for 1,686,604
keys at 0.717223MB/s. Time: 2,442,208ms.
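The reported rate can be sanity-checked from the byte count and elapsed time in the log line (assuming, as the numbers suggest, that the rate is MiB per second computed from the output size):

```python
# Compacted output bytes and elapsed time from the log line above.
out_bytes = 1836694446
elapsed_ms = 2442208

rate = out_bytes / (1024 * 1024) / (elapsed_ms / 1000.0)  # MiB/s
# rate comes out to roughly 0.7172, matching the logged 0.717223 MB/s
```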
Thanks.
-Wei
- Original Message -
From: Wei Zhu wz1...@yahoo.com
To: user@cassandra.apache.org
Sent
, no idea.
On Mon, Jan 28, 2013 at 12:16 PM, Wei Zhu wz1...@yahoo.com wrote:
Any thoughts?
Thanks.
-Wei
- Original Message -
From: Wei Zhu wz1...@yahoo.com
To: user@cassandra.apache.org
Sent: Friday, January 25, 2013 10:09:37 PM
Subject: Re: Cassandra pending compaction tasks
set it via JMX, and supposedly log4j is configured to watch the config
file.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 29/01/2013, at 9:36 PM, Wei Zhu wz1...@yahoo.com wrote:
Thanks
Hi,
After messing around with my Cassandra cluster recently, I think I need some
basic understanding of how things work behind the scenes regarding data streaming.
Let's say we have a three-node cluster with RF = 3. If node 3 for some reason
dies and I want to replace it with a new node with the same
still not sure about my first question regarding
bootstrap, anyone?
Thanks.
-Wei
From: Wei Zhu wz1...@yahoo.com
To: Cassandr usergroup user@cassandra.apache.org
Sent: Thursday, January 31, 2013 10:50 AM
Subject: General question regarding bootstrap
@cassandra.apache.org
Sent: Thursday, January 31, 2013 1:50 PM
Subject: Re: General question regarding bootstrap and nodetool repair
On Thu, Jan 31, 2013 at 12:19 PM, Wei Zhu wz1...@yahoo.com wrote:
But I am still not sure about my first question regarding
bootstrap, anyone?
As I
be the case. Is it a bug? Or has anyone tried to replace a dead node with the same
IP?
Thanks.
-Wei
- Original Message -
From: Wei Zhu wz1...@yahoo.com
To: user@cassandra.apache.org
Sent: Thursday, January 31, 2013 3:14:59 PM
Subject: Re: General question regarding bootstrap and nodetool
That must be it.
Yes, it happens to be the seed. I should have tried rebuild. Instead I did a
repair, and now I am sitting here waiting for the compaction to finish...
Thanks.
-Wei
From: Derek Williams de...@fyrie.net
To: user@cassandra.apache.org; Wei Zhu wz1
Anyone have first-hand experience with the Zing JVM, which is claimed to be pauseless? How do they charge, per CPU?
Thanks.
-Wei
From: Edward Capriolo edlinuxg...@gmail.com
To: user@cassandra.apache.org
Sent: Wednesday, February 6, 2013 7:07 AM
Subject: Re: Why do Datastax docs recommend Java 6?
I have been struggling with LCS myself. I observed that higher-level
compactions (from level 4 to 5) involve many more SSTables than
compactions from lower levels. One compaction could take an hour or more. By the
way, did you set your SSTable size to 100M?
Thanks.
-Wei
From the exception, it looks like astyanax didn't even try to call Cassandra. My
guess would be that since astyanax is token aware, it detects the node is down and
doesn't even try. If you use Hector, it might try to write since it's not
token aware. But as Bryan said, it will eventually fail. I guess
I haven't tried to switch compaction strategy. We started with LCS.
For us, after massive data imports (5000 w/seconds for 6 days), the first
repair is painful since there is quite some data inconsistency. For 150G nodes,
repair brought in about 30 G and created thousands of pending
We have 250G of data running with an 8GB heap, and one of the nodes OOMs during
repair.
I checked the bloom filter, only 200M. Not sure how the memory is used; maybe
take a memory dump and examine that.
- Original Message -
From: Edward Capriolo edlinuxg...@gmail.com
To:
, and what
made you decide on that number?
Thanks,
-Mike
On 2/14/2013 3:51 PM, Wei Zhu wrote:
I haven't tried to switch compaction strategy. We started with LCS.
For us, after massive data imports (5000 w/seconds for 6 days), the first
repair is painful since there is quite some data
From my limited experience with Mongo, it seems that Mongo only performs when
the whole data set is in memory, which makes me wonder how the 40TB data
works.
- Original Message -
From: Edward Capriolo edlinuxg...@gmail.com
To: user@cassandra.apache.org
Sent: Tuesday, February 19,
It should not take that long. For my 200G node, it takes about an hour to
calculate the Merkle tree, and then data streaming starts.
By the way, how do you know the repair is not done?
If you run nodetool tpstats, it should give you the AntiEntropy session info,
active/pending/completed etc. While
What does rpc_timeout control? Only the reads/writes? How about other
inter-node communication, like data streaming and merkle tree requests? What is a
reasonable value for rpc_timeout? The default value of 10 seconds is way too
long. What is the side effect if it's set to a really small number,
long a request
takes in your system.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 21/02/2013, at 6:56 AM, Wei Zhu wz1...@yahoo.com wrote:
What does rpc_timeout control? Only the reads/writes? How about other
We have 200G and ended going with 10M. The compaction after repair takes a day
to finish. Try to run a repair and see how it goes.
-Wei
- Original Message -
From: Dean Hiller dean.hil...@nrel.gov
To: user@cassandra.apache.org
Sent: Monday, March 4, 2013 10:52:27 AM
Subject: what size
According to this:
https://issues.apache.org/jira/browse/CASSANDRA-5029
Bloom filter is still on by default for LCS in 1.2.X
Thanks.
-Wei
From: Hiller, Dean dean.hil...@nrel.gov
To: user@cassandra.apache.org user@cassandra.apache.org
Sent: Monday, March 4,
It also depends on your SLA; it should work 99% of the time. But one
GC/flush/compaction could screw things up big time if you have a tight SLA.
-Wei
From: Drew Kutcharian d...@venarc.com
To: user@cassandra.apache.org
Sent: Wednesday, March 6, 2013 9:32 AM
It seems to be normal for data size to explode during repair. In our case, we have
nodes around 200G with RF = 3; during repair, a node goes as high as 300G. We
are using LCS; it creates more than 5000 compaction tasks and takes more than a
day to finish. We are on 1.1.6.
There is parallel LCS
If you are tight on your SLA, try setting socketTimeout in Hector to a small
number so that it can retry faster, given the assumption that your writes are
idempotent.
Regarding your write latency, I don't have much insight. We see spikes on
reads due to GC/compaction etc., but not write latency.
Where did you read that bloom filters are off for LCS on 1.1.9?
Those are the two issues I can find regarding this matter:
https://issues.apache.org/jira/browse/CASSANDRA-4876
https://issues.apache.org/jira/browse/CASSANDRA-5029
Looks like in 1.2, it defaults at 0.1, not sure about 1.1.X
-Wei
From: Alain RODRIGUEZ arodr...@gmail.com
To: user@cassandra.apache.org
Cc: Wei Zhu wz1...@yahoo.com
Sent: Friday, March 8, 2013 1:25 AM
Subject: Re: Size Tiered - Leveled Compaction
I'm still wondering about how to choose the size of the sstable under LCS.
Default is 5MB
Hi Dean,
The index_interval controls the sampling of the SSTable index to speed up the
lookup of keys in the SSTable. Here is the code:
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/DataTracker.java#L478
Increasing the interval means taking fewer
It's not the BloomFilter.
Cassandra will read through sstable index files on start-up, doing what is
known as index sampling. This is used to keep a subset (currently and by
default, 1 out of 100) of keys and their on-disk locations in the index, in
memory. See ArchitectureInternals. This
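A toy sketch of index sampling (mine, not Cassandra's actual code): keep every Nth key in memory, binary-search the sample for the nearest preceding sampled key, then scan forward at most N index entries.

```python
import bisect

def build_sample(sorted_keys, interval=100):
    """Keep every `interval`-th key with its position (a stand-in
    for the on-disk index offset)."""
    return [(k, i) for i, k in enumerate(sorted_keys) if i % interval == 0]

def lookup(sample, sorted_keys, key):
    """Find the nearest preceding sampled key in memory, then scan
    forward from there -- at most `interval` index entries are read
    instead of the whole index."""
    names = [k for k, _ in sample]
    i = bisect.bisect_right(names, key) - 1
    start = sample[i][1] if i >= 0 else 0
    for pos in range(start, len(sorted_keys)):
        if sorted_keys[pos] == key:
            return pos
    return None

keys = ["k%05d" % i for i in range(1000)]
sample = build_sample(keys)          # 10 sampled keys for 1000 keys
pos = lookup(sample, keys, "k00642")
```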
Do you see anything related to the merkle tree in your log?
Also run nodetool compactionstats; during merkle tree calculation, you will
see validation there.
-Wei
- Original Message -
From: Dane Miller d...@optimalsocial.com
To: user@cassandra.apache.org
Sent: Wednesday, March 13, 2013
Here is the JIRA I submitted regarding the ancestor.
https://issues.apache.org/jira/browse/CASSANDRA-5342
-Wei
- Original Message -
From: Wei Zhu wz1...@yahoo.com
To: user@cassandra.apache.org
Sent: Wednesday, March 13, 2013 11:35:29 AM
Subject: Re: About the heap
Hi Dean
, it should give you a rough idea of which stage the
repair died in.
-Wei
- Original Message -
From: Dane Miller d...@optimalsocial.com
To: user@cassandra.apache.org, Wei Zhu wz1...@yahoo.com
Sent: Wednesday, March 13, 2013 12:32:20 PM
Subject: Re: repair hangs
On Wed, Mar 13, 2013 at 11:44 AM
Did you restart the node? As far as I can tell, compactions start a few minutes after
restarting. Did you see a file called $CFName.json ($CFName is your CF name) in
your data directory?
-Wei
- Original Message -
From: Dean Hiller dean.hil...@nrel.gov
To: user@cassandra.apache.org
Sent:
No problem. Back to the old trick, doesn't work, restart:)
From: Hiller, Dean dean.hil...@nrel.gov
To: user@cassandra.apache.org user@cassandra.apache.org; Wei Zhu
wz1...@yahoo.com
Sent: Thursday, March 14, 2013 9:53 AM
Subject: Re: 13k pending compaction
Are you looking for something like this
http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-services-chkconfig.html
Thanks.
-Wei
- Original Message -
From: Jason Kushmaul | WDA jason.kushm...@wda.com
To: user@cassandra.apache.org user@cassandra.apache.org
Sent: Tuesday, March
Hi Dean,
If you are not using VNodes and try to replace a node, use the old token - 1 as
the new token, not +1. The reason is that the assignment of tokens is clockwise
along the ring. If you set your new token to the old token - 1, the new node will
take over all the data of the old node except for
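A small sketch of that clockwise rule (illustrative only; tokens are made-up small integers): a key belongs to the first node token at or after it, wrapping around, so a replacement at old token - 1 takes over the dead node's whole range except the single old token value itself.

```python
import bisect

def owner(tokens, key_token):
    """tokens: sorted node tokens; clockwise (successor) assignment."""
    i = bisect.bisect_left(tokens, key_token)
    return tokens[i % len(tokens)]

ring = [100, 200, 300]              # three nodes
assert owner(ring, 150) == 200      # range (100, 200] belongs to 200
assert owner(ring, 350) == 100      # wraps around the ring

# Replace the dead node at 200 with a new node at 199 (= 200 - 1):
new_ring = [100, 199, 300]
assert owner(new_ring, 150) == 199  # new node owns (100, 199]
assert owner(new_ring, 200) == 300  # only token 200 itself moves on
```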
There is setting in the cassandra.yaml file which controls that.
# Whether or not a snapshot is taken of the data before keyspace truncation
# or dropping of column families. The STRONGLY advised default of true
# should be used to provide data safety. If you set this flag to false, you will
#
is some background on what is kept on heap in pre 1.2
http://www.mail-archive.com/user@cassandra.apache.org/msg25762.html
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 13/03/2013, at 12:19 PM, Wei Zhu wz1
According to your cfstats, read latency is over 100 ms, which is really, really
slow. I am seeing less than 3ms reads for my cluster, which is on SSD. Can you
also check nodetool cfhistograms? It tells you more about the number of
SSTables involved and read/write latency. Sometimes the average
It's there:
http://www.datastax.com/docs/1.2/cluster_architecture/cluster_planning#node-init-config
It's a long document. You need to look at cassandra.yaml and
cassandra-env.sh and make sure you understand the settings there.
By the way, did DataStax just facelift their documentation web
compaction needs some disk I/O. Slowing down our compaction will improve
overall system performance. Of course, you don't want to go too slow and fall
behind too much.
-Wei
- Original Message -
From: Dane Miller d...@optimalsocial.com
To: user@cassandra.apache.org
Cc: Wei Zhu wz1
Check nodetool tpstats, looking for AntiEntropySessions/AntiEntropyStage.
Grep the log, looking for repair and merkle tree.
- Original Message -
From: S C as...@outlook.com
To: user@cassandra.apache.org
Sent: Monday, March 25, 2013 2:55:30 PM
Subject: nodetool repair hung?
I am
PM, Hiller, Dean wrote:
+1 (I would love to know this info).
Dean
From: Wei Zhu wz1...@yahoo.commailto:wz1...@yahoo.com
Reply-To: user@cassandra.apache.orgmailto:user@cassandra.apache.org
user@cassandra.apache.orgmailto:user@cassandra.apache.org, Wei Zhu
wz1...@yahoo.commailto:wz1
Hi Ben,
If affordable, just blow away the node and bootstrap a replacement, or
restore from snapshot and repair.
-Wei
- Original Message -
From: Dean Hiller dean.hil...@nrel.gov
To: user@cassandra.apache.org
Sent: Thursday, March 28, 2013 11:40:21 AM
Subject: Re: lots of extra bytes
We have tried very hard to speed up LCS on 1.1.6 with no luck. It seems to be
single threaded, and there is not much parallelism you can achieve. 1.2 does come
with parallel LCS, which should help.
One more thing to try is to enlarge the SSTable size, which will reduce the
number of SSTables. It *might*
Hi,
We are trying to upgrade from 1.1.6 to 1.2.4; it's not really a live upgrade.
We are going to retire the old hardware and bring in a set of new hardware for
1.2.4.
For the old cluster, we have 5 nodes with RF = 3, a total of 1TB of data.
For the new cluster, we will have 10 nodes with RF = 3. We will
.
-Wei
From: Hiller, Dean dean.hil...@nrel.gov
To: user@cassandra.apache.org user@cassandra.apache.org; Wei Zhu
wz1...@yahoo.com
Sent: Tuesday, April 23, 2013 11:17 AM
Subject: Re: move data from Cassandra 1.1.6 to 1.2.4
We went from 1.1.4 to 1.2.2 and in QA
Same here. We disabled the throttling and our disk and CPU usage are both low (<
10%), and it still takes hours for LCS compaction to finish after a repair. For
this cluster we don't delete any data, so we can rule out tombstones. Not sure
what is holding compaction back. My observation is that for the
We have a long-running script which wakes up every minute to get reads/writes
through JMX. It does the calculation to get r/s and w/s and sends them to
ganglia. We are thinking of using graphite, which comes with some of the
intelligence mentioned by Tomàs, but it's just too big a change for
1) 1.1.6 on 5 nodes, 24CPU, 72 RAM
2) local quorum (we only have one DC though). We do delete through TTL
3) yes
4) once a week rolling repairs -pr using cron job
5) it definitely has negative impact on the performance. Our data size is
around 100G per node and during repair it brings in
For us, the biggest killers are repair and the compaction following repair. If you
are running VNodes, you need to test performance while running repair.
- Original Message -
From: Igor i...@4friends.od.ua
To: user@cassandra.apache.org
Sent: Wednesday, May 22, 2013 7:48:34 AM
Subject:
how it goes.
-Wei
- Original Message -
From: Dean Hiller dean.hil...@nrel.gov
To: user@cassandra.apache.org, Wei Zhu wz1...@yahoo.com
Sent: Wednesday, May 22, 2013 12:19:44 PM
Subject: Re: High performance disk io
If you are only running repair on one node, should it not skip
default value of 5MB is way too small in practice. Too many files in one
directory is not a good thing. It's not clear what a good number should be. I
have heard of people using 50MB, 75MB, even 100MB. Do your own tests to find the
right number.
-Wei
- Original Message -
From: Franc
Correction, the largest I heard is 256MB SSTable size.
- Original Message -
From: Wei Zhu wz1...@yahoo.com
To: user@cassandra.apache.org
Sent: Sunday, June 16, 2013 10:28:25 PM
Subject: Re: Large number of files for Leveled Compaction
default value of 5MB is way too small
Cassandra doesn't do async replication like HBase does. You can run nodetool
repair to ensure consistency.
Or you can increase your read or write consistency level. As long as R + W > RF,
you have strong consistency. In your case, you can use CL.TWO for both reads and
writes.
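The overlap rule (R + W > RF guarantees a read quorum and a write quorum share at least one replica) can be checked with trivial arithmetic:

```python
def strongly_consistent(r, w, rf):
    # A read quorum and a write quorum must share at least one replica.
    return r + w > rf

RF = 3
assert strongly_consistent(2, 2, RF)      # CL.TWO reads and writes: 2+2 > 3
assert not strongly_consistent(1, 1, RF)  # CL.ONE both ways can read stale
assert strongly_consistent(1, 3, RF)      # CL.ONE reads + CL.ALL writes
```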
-Wei
-
If you want, you can try to force a GC through JConsole: Memory -> Perform GC.
It theoretically triggers a full GC, but when it happens depends on the JVM.
-Wei
- Original Message -
From: Robert Coli rc...@eventbrite.com
To: user@cassandra.apache.org
Sent: Tuesday, June 18, 2013
) for this latest test.
Thanks,
James
-Original Message-
From: Robert Coli [mailto:rc...@eventbrite.com]
Sent: 18 June 2013 19:45
To: user@cassandra.apache.org
Subject: Re: Data not fully replicated with 2 nodes and replication factor 2
On Tue, Jun 18, 2013 at 11:36 AM, Wei Zhu wz1
Rob,
Thanks.
I was not aware of that. So we can avoid repair if there is no hardware
failure...I found a blog:
http://www.datastax.com/dev/blog/modern-hinted-handoff
-Wei
- Original Message -
From: Robert Coli rc...@eventbrite.com
To: user@cassandra.apache.org, Wei Zhu wz1
, change
your consistency level. Or do a repair.
Thanks.
-Wei
- Original Message -
From: James Lee james@metaswitch.com
To: user@cassandra.apache.org, Wei Zhu wz1...@yahoo.com,
rc...@eventbrite.com
Sent: Thursday, June 20, 2013 3:21:30 AM
Subject: RE: Data not fully replicated
I think the new SSTables will be written at the new size. In order to do that, you
need to trigger a compaction so that new SSTables are generated. For LCS,
there is no major compaction though. You can run a nodetool repair, and
hopefully you will bring in some new SSTables and compactions will kick
I got bitten by this once. At the least there should be a message saying there
is no streaming data since it's a seed node.
I searched the source code; the message was there, but it got removed at some
version.
-Wei
From: Robert Coli rc...@eventbrite.com
/java/org/apache/cassandra/service/StorageService.java#L549
-Wei
- Original Message -
From: Dean Hiller dean.hil...@nrel.gov
To: user@cassandra.apache.org, Wei Zhu wz1...@yahoo.com
Sent: Monday, June 24, 2013 12:04:10 PM
Subject: Re: AssertionError: Unknown keyspace?
Yes, it would
What is the output of show keyspaces from cassandra-cli? Did you see the new value?
Compaction Strategy:
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
Compaction Strategy Options:
sstable_size_in_mb: XXX
From: Keith Wright
Hi,
As we bring more use cases to Cassandra, we have been thinking about the best
way to host it. Let's say we will have 15 physical machines available, we can
use all of them to form a big cluster or divide them into 3 clusters with 5
nodes each. As we will deploy to 1.2, it becomes easier to
?
On Oct 12, 2013, at 9:05 PM, Wei Zhu wz1...@yahoo.com wrote:
Hi,
As we bring more use cases to Cassandra, we have been thinking about the best
way to host it. Let's say we will have 15 physical machines available, we can
use all of them to form a big cluster or divide them into 3 clusters