Hi Ran, thanks for the compliment. It is true that we benefited enormously
from batch mutate. Without that the Mutator/Selector paradigm would not have
been possible in the same way. It will be interesting to see where Cassandra
takes us next. Best, Dominic
On 12 June 2010 20:05, Ran Tavory
Also think this looks really promising.
The fact that there are so many API wrappers now (3?) doesn't reflect
well on the native API though :)
/me ducks and runs
On Mon, Jun 14, 2010 at 11:55, Dominic Williams
thedwilli...@googlemail.com wrote:
Hi Ran, thanks for the compliment. It is true that
Can I expect batch_mutate to work in what I would think of as an atomic
operation?
That either all the mutations in the batch_mutate call are executed or none of
them are? Or can some of them fail while some of them succeeds?
rant
TBH, while we are using super columns, they somehow feel wrong to me. I
would be happier if we could move what we do with super columns into
the row key space, but in our case that does not seem to be so easy.
/rant
I'd be quite interested to learn what you are doing with super columns
No, it's not atomic. It just shortens the round trip of many update requests;
some may fail and some may succeed.
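Given that, a common coping strategy is to keep column writes idempotent (same name, value, and timestamp) and simply retry the whole batch on error. A minimal sketch, where `client` and its `batch_mutate` signature are hypothetical stand-ins for a real Thrift client:

```python
# Sketch: retrying a non-atomic batch_mutate. The `client` object is a
# hypothetical stand-in for a Thrift client. Retrying the whole batch is
# safe only because re-applying an identical column write is a no-op.

def apply_batch(client, mutation_map, retries=3):
    """mutation_map: {row_key: {column_family: [mutation, ...]}}"""
    last_error = None
    for attempt in range(retries):
        try:
            client.batch_mutate(mutation_map)  # may partially succeed
            return True
        except Exception as exc:   # e.g. a transport or timeout error
            last_error = exc       # resending the whole map is harmless
    raise last_error
```

Because resending mutations that already succeeded just overwrites them with the same data, the caller does not need to track which ones landed.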
On Mon, Jun 14, 2010 at 2:40 PM, Per Olesen p...@trifork.com wrote:
Can I expect batch_mutate to work in what I would think of as an atomic
operation?
That either all the mutations
This question has been coming up quite regularly now. I've added an
entry to the FAQ. Please feel free to expand and clarify.
http://wiki.apache.org/cassandra/FAQ#batch_mutate_atomic
Gary.
On Mon, Jun 14, 2010 at 06:43, Ran Tavory ran...@gmail.com wrote:
no, it's not atomic. it just shortens
Hi,
I have a question that relates to how to best model data. I have some pretty
simple tabular data, which I am to show to a large amount of users, and the
users need to be able to search some of the columns.
Given this tabular data:
Company| Amount|...many more columns here
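A common 0.6-era approach to a question like this is to denormalize: keep one row per record in a main column family, and maintain an extra index column family per searchable column, mapping each value to the keys of the matching records. A toy write-path sketch, with plain dicts standing in for column families (all names here are illustrative, not a real API):

```python
# Toy model of the denormalization pattern: dicts stand in for column
# families. `dashboard` holds the records; `company_index` maps each
# company name to the row keys of the records carrying that value.
import uuid

dashboard = {}       # main CF: row key -> record columns
company_index = {}   # index CF: company -> set of row keys

def insert_record(record):
    key = uuid.uuid1().hex  # time-based uuid as the row key
    dashboard[key] = record
    # maintain the index at write time, once per searchable column
    company_index.setdefault(record["Company"], set()).add(key)
    return key
```

The cost is an extra write per searchable column on every insert; the payoff is that a search becomes a direct key lookup instead of a scan.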
I'll put something together and submit it. Thanks for the help.
Todd
-Original Message-
From: Gary Dusbabek [mailto:gdusba...@gmail.com]
Sent: Friday, June 11, 2010 4:49 AM
To: user@cassandra.apache.org
Subject: Re: Running Cassandra as a Windows Service
Sure. Please create a jira
That's the tradeoff we made to get basic functionality for a dozen or
so languages for free; it's impossible to be idiomatic with Thrift.
The glass-half-full view is, having lots of API wrappers shows that
building on Thrift is far easier than throwing bytes around at the
socket layer the way a
On Jun 13, 2010, at Sun Jun 13, 9:34 PM, Benjamin Black wrote:
On Sun, Jun 13, 2010 at 5:58 PM, Matthew Conway m...@backupify.com wrote:
The ability to dynamically add new column families. Our app is currently
under heavy development, and we will be adding new column families at least
Great API that looks easy and intuitive to use. Regarding your connection pool
implementation, how does it handle failed/crashed nodes? Will the pool
auto-detect failed nodes via a tester thread or will a failed node, and hence
its pooled connection(s), be removed only when they are used?
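For what it's worth, the second of those two strategies (evicting a dead connection only when it is next borrowed) can be sketched as follows; the `is_open` liveness check and the connection factory are hypothetical, not the Pelops API:

```python
import collections

class LazyPool:
    """Minimal pool that validates a connection at checkout time and
    discards dead ones, rather than probing with a background thread."""
    def __init__(self, factory):
        self.factory = factory          # callable returning a new connection
        self.idle = collections.deque()

    def borrow(self):
        while self.idle:
            conn = self.idle.popleft()
            if conn.is_open():          # hypothetical liveness check
                return conn
            # dead connection: drop it on the floor and try the next one
        return self.factory()           # pool exhausted: open a fresh one

    def release(self, conn):
        self.idle.append(conn)
```

The tester-thread variant trades some background load for faster eviction; the lazy variant above only pays the detection cost when a connection is actually used.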
On Mon, Jun 14, 2010 at 7:27 AM, Matthew Conway m...@backupify.com wrote:
On Jun 13, 2010, at Sun Jun 13, 9:34 PM, Benjamin Black wrote:
On Sun, Jun 13, 2010 at 5:58 PM, Matthew Conway m...@backupify.com wrote:
The ability to dynamically add new column families. Our app is currently
under
On Jun 14, 2010, at Mon Jun 14, 10:45 AM, Jonathan Ellis wrote:
I already have automation; what's missing are the details of the exact steps
I need to automate to accomplish the schema modification on a live cluster.
Even the FAQ just points to the feature in 0.7 trunk.
Huh?
Good to have new API wrappers.
I guess there are different APIs because people look at Cassandra from
different angles, with different backgrounds and skills. At this stage, it's
better that the different APIs take the good ideas from each other, and maybe
one day there will be one which is widely accepted. That's good for
Done, https://issues.apache.org/jira/browse/CASSANDRA-1188
It was find_all_by_service_id that was the culprit, and it resolves down to a
multiget_slice on a super column family. The super CF is acting as an index
back into a regular CF, thus I'm providing key, supercolumn name, and getting
On Mon, Jun 14, 2010 at 6:09 AM, Per Olesen p...@trifork.com wrote:
So, in my use case, when searching on e.g. company, I can then access the
DashboardCompanyIndex with a slice on its SC and then grab all the uuids
from the columns, and after this, make a lookup in the Dashboard CF for each
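That lookup can be modeled with nested dicts standing in for the super column family (super column name as the search value, column names as the uuids) and for the main CF; names and data here are illustrative:

```python
# Sketch of the read path: a super CF used as an index. The outer dict is
# the index row, keyed by super column name (the company); the column
# names inside it are the uuid keys of the real records.

dashboard_company_index = {
    "Acme": {"uuid-1": b"", "uuid-2": b""},   # column values unused
    "Initech": {"uuid-3": b""},
}
dashboard = {
    "uuid-1": {"Company": "Acme", "Amount": 10},
    "uuid-2": {"Company": "Acme", "Amount": 25},
    "uuid-3": {"Company": "Initech", "Amount": 7},
}

def find_by_company(company):
    uuids = dashboard_company_index.get(company, {}).keys()  # slice the SC
    return [dashboard[u] for u in uuids]                     # multiget main CF
```

The first step corresponds to a slice on the index row's super column, the second to the multiget_slice back into the Dashboard CF.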
Hi everyone,
I am new to NoSQL databases, and especially to column-oriented databases
like Cassandra.
I am a student of information systems, and I am evaluating a fitting NoSQL
database for a web analytics system. The use case is data like a
web server log file.
In an RDBMS it would be a row for every hit
On Jun 14, 2010, at 6:29 PM, Benjamin Black wrote:
On Mon, Jun 14, 2010 at 6:09 AM, Per Olesen p...@trifork.com wrote:
So, in my use case, when searching on e.g. company, I can then access the
DashboardCompanyIndex with a slice on its SC and then grab all the uuids
from the columns, and
I checked out the source and noticed a few things:
1. You did not include an Ant build file. Not a big deal, but if you happen
to have one it would be nice to have.
2. It appears you built against Cassandra 0.6.0. Have you built and/or run
Pelops against 0.6.2 or trunk?
Todd
Hi,
I was updating to the newer 0.6.3 and happened to remember that back
in 0.6.2 I noticed this change in CHANGES.txt:
* improve default JVM GC options (CASSANDRA-1014)
Looking at that ticket, I don't actually see the options listed or a
reason for why they changed. Also, I'm not
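For reference, and hedged accordingly: this is reconstructed from memory of a 0.6-era bin/cassandra.in.sh rather than taken from the ticket, so verify against your own copy, but the GC-related options looked roughly like this:

```shell
# Reconstruction from memory of the 0.6-era defaults, not authoritative:
# ParNew + CMS collection, with CMS kicked off early and predictably.
JVM_OPTS="$JVM_OPTS \
  -XX:+UseParNewGC \
  -XX:+UseConcMarkSweepGC \
  -XX:+CMSParallelRemarkEnabled \
  -XX:SurvivorRatio=8 \
  -XX:MaxTenuringThreshold=1 \
  -XX:CMSInitiatingOccupancyFraction=75 \
  -XX:+UseCMSInitiatingOccupancyOnly"
```

The last two flags are, I believe, the substance of the change: starting CMS at 75% occupancy (and only then) rather than letting the JVM pick its own trigger.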
Hi,
I wrote 200k records to the db, each record 5MB. I get this error when I use
3 threads (each thread tries to read all 200k records, 100 records at a
time) to read data from the db. The writes are OK; the error comes from reads.
Right now the Xmx of the JVM is 1GB. I changed it to 2GB; still not working.
My guess: you are outrunning your disk I/O. Each of those 5MB rows
gets written to the commitlog, and the memtable is flushed when it
hits the configured limit, which you've probably left at 128MB. Every
25 rows or so you are getting a memtable flushed to disk. Until these
things complete, they
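The "every 25 rows or so" figure follows directly from the numbers above; a quick back-of-envelope check, using the 128MB threshold and 5MB rows as stated in the thread:

```python
# Back-of-envelope check of the flush frequency claimed above.
memtable_threshold_mb = 128  # flush limit quoted in the reply
row_size_mb = 5              # row size reported by the original poster

rows_per_flush = memtable_threshold_mb // row_size_mb
print(rows_per_flush)  # 25
```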
...or does it vary greatly from installation to installation?
Yes.