Thanks, that did the trick!
Tamar Fraenkel
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
Fax: +972 2 5612956
On Thu, Oct 11, 2012 at 3:42 AM, Roshan codeva...@gmail.com wrote:
Hello
You can delete the
It's important to point out the difference between Read Repair, in the context of
the read_repair_chance setting, and Consistent Reads in the context of the CL
setting.
If RR is active on a request, it means the request is sent to ALL UP nodes for
the key, and the RR process is ASYNC to the
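The distinction above can be sketched with a simplified model (this is an illustration of the idea, not Cassandra's actual read path): the consistency level decides how many replicas the coordinator waits on before answering the client, while read_repair_chance only decides whether the remaining UP replicas are also queried asynchronously so stale copies can be repaired in the background.

```python
import random

def plan_read(replicas, cl, read_repair_chance):
    """Return (blocking_replicas, async_replicas) for one read.

    Simplified sketch: CL replicas satisfy the client response;
    read repair, when triggered, sends the read to ALL remaining
    UP replicas asynchronously.
    """
    blocking = replicas[:cl]
    if random.random() < read_repair_chance:
        async_targets = replicas[cl:]   # RR: query every other UP node
    else:
        async_targets = []
    return blocking, async_targets

blocking, background = plan_read(["n1", "n2", "n3"], cl=2, read_repair_chance=1.0)
print(blocking)    # ['n1', 'n2'] -- these block the client response
print(background)  # ['n3'] -- queried asynchronously for repair
```

With read_repair_chance below 1.0, only that fraction of reads fan out to all replicas; the client-visible consistency is governed entirely by CL.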
Regarding memory usage after a repair ... Are the merkle trees kept around?
They should not be.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 24/10/2012, at 4:51 PM, B. Todd Burruss bto...@gmail.com wrote:
Regarding memory usage
This sounds very much like my heap is so consumed by (mostly) bloom
filters that I am in steady state GC thrash.
Yes, I think that was at least part of the issue.
The rough numbers I've used to estimate working set are:
* bloom filter size for 400M rows at 0.00074 fp without java fudge
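The bullet above can be reproduced with the standard bloom filter sizing formula, m = -n * ln(p) / (ln 2)^2 bits. This is a generic back-of-envelope estimate, not Cassandra's exact per-SSTable accounting, and like the post it ignores JVM object overhead ("java fudge"):

```python
import math

def bloom_bits(n_rows, fp_rate):
    """Optimal bloom filter size in bits for n_rows at the given
    false-positive rate (standard formula, no implementation overhead)."""
    return -n_rows * math.log(fp_rate) / (math.log(2) ** 2)

n, p = 400_000_000, 0.00074
bits = bloom_bits(n, p)
print(f"{bits / 8 / 1024**2:.0f} MiB")   # ~715 MiB of heap, before JVM overhead
```

At that scale it is easy to see how bloom filters alone can dominate a heap and push a node into GC thrash.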
People probably saw...
http://www.networkworld.com/cgi-bin/mailto/x.cgi?pagetosend=/news/tech/2012/102212-nosql-263595.html
To clarify things take a look at...
http://brianoneill.blogspot.com/2012/10/solid-nosql-benchmarks-from-ycsb-w-side.html
-brian
--
Brian O'Neill
Lead Architect, Health
Yes, another benchmark with 100,000,000 rows on EC2 machines probably
less powerful than my laptop. The benchmark might as well have been run on
4 VMware instances on the same desktop.
On Thu, Oct 25, 2012 at 7:40 AM, Brian O'Neill b...@alumni.brown.edu wrote:
People probably saw...
I am using the Sun JDK. There are only two issues I have found
unrelated to Cassandra.
1) DateFormat is more liberal: mmDD vs yyyymmdd. If you write an
application with Java 7, the format is forgiving with DD vs dd. Yet if
you deploy that application to some JDK 1.6 JVMs, it fails.
2) Ran into
Kind of an interesting question
I think you are saying: if a client read resolved only the two nodes (as
said in Aaron's email) back to the client, and read-repair was kicked off
because of the inconsistent values, and the write did not complete yet, and
I guess you would have two nodes go down to
Hello all,
Currently we implement wide rows for most of our entities. For example:
user {
event1=x
event2=y
event3=z
...
}
Normally the entries are bounded to fewer than 256 columns, and most
columns are small in size, say 30 bytes. Because of the blind-write nature
of Cassandra it is possible
read quorum doesn't mean we read the newest values from a quorum number of
replicas, but ensures we read at least one newest value, as long as a write at
quorum succeeded beforehand and W + R > N.
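The overlap argument above can be sketched directly (a minimal model, not Cassandra code): with N replicas, any write set of size W and read set of size R must intersect whenever W + R > N, so at least one replica returned by the read holds the newest value, which the coordinator then picks by timestamp.

```python
def quorum_overlaps(n, w, r):
    """Worst case: the write landed on replicas 0..w-1 and the read
    contacts the 'opposite' replicas n-r..n-1. If even these two sets
    intersect, every write/read pair of those sizes must intersect."""
    written = set(range(w))
    read = set(range(n - r, n))
    return bool(written & read)

n = 3
w = r = n // 2 + 1                 # quorum = 2 of 3
print(quorum_overlaps(n, w, r))    # True:  W + R = 4 > N = 3
print(quorum_overlaps(3, 1, 1))    # False: ONE + ONE can miss the write
```

This is why QUORUM reads paired with QUORUM writes give read-your-writes behaviour, while CL.ONE on both sides does not.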
On Fri, Oct 26, 2012 at 12:00 AM, Hiller, Dean dean.hil...@nrel.gov wrote:
Kind of an interesting question
I
We have a 5 node cluster, with a matching 5 nodes for DR in another data
center. With a replication factor of 3, does the node I send a write to
attempt to send it to the 3 servers in the DR also? Or does it send it to 1
and let it replicate locally in the DR environment to save bandwidth?
manuzhang wrote
read quorum doesn't mean we read the newest values from a quorum number of
replicas, but ensures we read at least one newest value, as long as a write at
quorum succeeded beforehand and W + R > N.
I beg to differ here. Any read/write, by definition of quorum, should have
at least n/2 + 1
Use the datacenter replication strategy and try it with that, so you tell
Cassandra about all your data centers, racks, etc.
Dean
From: Bryce Godfrey
bryce.godf...@azaleos.com
Reply-To: user@cassandra.apache.org
Use placement_strategy =
'org.apache.cassandra.locator.NetworkTopologyStrategy' and also fill the
topology.properties file. This will tell cassandra that you have two DCs.
You can verify that by looking at output of the ring command.
If your DCs are set up properly, only one request will go over
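For concreteness, a PropertyFileSnitch topology file maps each node's IP to a DC:rack pair; the addresses and names below are invented for illustration:

```properties
# cassandra-topology.properties (example addresses are assumptions)
175.56.12.105=DC1:RAC1
175.56.12.106=DC1:RAC1
10.20.114.10=DC2:RAC1
10.20.114.11=DC2:RAC1
# default for nodes not listed above
default=DC1:RAC1
```

Once both DCs appear in the ring output, NetworkTopologyStrategy forwards a single copy of each write across the WAN and lets it fan out to the other replicas inside the remote DC.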
I don't have any sample data on this, but read latency will depend on these:
1) Consistency level of the read
2) Disk speed.
Also, you can look at the Netflix client, as it makes the co-ordinator node
the same as the node which holds the data. This will reduce one hop.
On Thu, Oct 25, 2012 at 9:04 AM,
On Thu, Oct 25, 2012 at 4:15 AM, aaron morton aa...@thelastpickle.com wrote:
This sounds very much like my heap is so consumed by (mostly) bloom
filters that I am in steady state GC thrash.
Yes, I think that was at least part of the issue.
The rough numbers I've used to estimate working
For this scenario, remove disk speed from the equation. Assume the row
is completely in the Row Cache. Also, let's assume Read.ONE. With this
information I would be looking to determine response size, maximum
requests/second, and max latency.
I would use this to say You want to do 5,000 reads/sec, on a
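The back-of-envelope capacity math described above can be sketched with Little's law (concurrency = throughput x latency). Every number here is an assumption for illustration, using the ~256 columns x ~30 bytes row shape mentioned earlier in the thread:

```python
# All figures are assumed for illustration, not measurements.
target_rps = 5_000          # target reads/sec from the post
latency_s = 0.002           # assumed: row-cache hit at CL.ONE, ~2 ms
resp_bytes = 30 * 256       # assumed: ~256 columns of ~30 bytes each

# Little's law: requests in flight = throughput * latency
concurrency = target_rps * latency_s
# Response traffic the cluster must push back out
bandwidth_mb_s = target_rps * resp_bytes / 1024**2

print(concurrency)                   # 10.0 requests in flight
print(f"{bandwidth_mb_s:.1f} MB/s")  # ~36.6 MB/s of response traffic
```

The point of the exercise is that once disk is out of the picture, a modest target like 5,000 reads/sec is bounded by response size and per-request latency, not by raw node count.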