for a small amount of data under each column name?
- Or is it an issue with the Java Driver?
- Or did I do something wrong?
--
View this message in context: Denormalization leads to terrible, rather than better, Cassandra performance -- I am really puzzled
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Denormalization-leads-to-terrible-rather-than-better-Cassandra-performance-I-am-really-puzzled-tp7600561.html
To: user@cassandra.apache.org
Subject: Re: Denormalization
When I said that writes were cheap, I meant that in a normal case people are making 2-10 inserts for what in a relational database might be one. 30K inserts
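Dean's 2-10x point can be sketched as a single logical write fanning out into one insert per denormalized table. A minimal illustration in Python (the table names are hypothetical, and real code would use prepared statements through the Java Driver rather than string formatting):

```python
# Hypothetical denormalized tables that each carry a copy of the user row.
DENORMALIZED_TABLES = ["users", "users_by_email", "users_by_name"]

def fan_out_user_insert(user_id, name, email):
    """Turn one logical write into one INSERT per denormalized table."""
    return [
        f"INSERT INTO {table} (user_id, name, email) "
        f"VALUES ('{user_id}', '{name}', '{email}')"
        for table in DENORMALIZED_TABLES
    ]

# One relational INSERT becomes len(DENORMALIZED_TABLES) Cassandra inserts.
stmts = fan_out_user_insert("u1", "Ada", "ada@example.org")
```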
Hi.
Since denormalized data is a first-class citizen in Cassandra, how do you handle updating denormalized data?
E.g. if we have a USER CF with name, email etc. and denormalize user data into many other CFs, and then update the information about a user (name, email...), what is the best way to handle this?
There is really a mix of denormalization and normalization. It really depends on specific use-cases. To get better help on the email list, a more specific use case may be appropriate.
Dean
On 1/27/13 2:03 PM, Fredrik Stigbäck fredrik.l.stigb...@sitevision.se wrote:
I don't have a current use-case. I was just curious how applications handle this and how to think when modelling, since I guess denormalization might increase the complexity of the application.
Fredrik
2013/1/27 Hiller, Dean dean.hil...@nrel.gov:
Things like PlayOrm exist to help you with half-and-half of denormalized and normalized data. There are more and more patterns out there for denormalization and normalization while still allowing for scalability. Here is one patterns page:
https://github.com/deanhiller/playorm/wiki/Patterns-Page
Oh, and check out the last pattern, Scalable equals only index, which can allow you to still have normalized data, though the pattern does just enough denormalization that you can
1. Update just two pieces of info (the user's email, for instance, and the Xref table email as well).
2. Allow
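The first point above, updating just two pieces of info, can be illustrated with in-memory dicts standing in for the two column families. This is only a sketch of the idea under assumed names; the actual pattern is described on the PlayOrm patterns page:

```python
# Normalized user rows, keyed by user id.
users = {"u1": {"name": "Ada", "email": "old@example.org"}}
# Xref table: an equals-only index mapping email -> user id.
email_xref = {"old@example.org": "u1"}

def update_email(user_id, new_email):
    """An email change touches exactly two rows: the user and the Xref."""
    old_email = users[user_id]["email"]
    users[user_id]["email"] = new_email   # piece 1: the user's email
    del email_xref[old_email]             # piece 2: the Xref table entry
    email_xref[new_email] = user_id

update_email("u1", "new@example.org")
```

The rest of the user's data stays normalized in one place, so the write amplification is bounded regardless of how many other tables reference the user.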
One technique is, on the client side, you build a tool that takes the event and produces N mutations. In C* writes are cheap, so essentially, re-write everything on all changes.
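That client-side tool can be sketched as a function from one event to N full-row mutations. The view names here are hypothetical; the point is that each denormalized view simply gets the whole row rewritten:

```python
# Hypothetical denormalized views that each hold a full copy of the user row.
VIEWS = ["user_by_id", "user_by_email", "user_by_name"]

def mutations_for(event):
    """Take one event and produce N mutations: a full-row rewrite per view."""
    row = event["row"]
    # Re-write everything on all changes, rather than computing a minimal diff.
    return [(view, dict(row)) for view in VIEWS]

event = {"type": "user_updated", "row": {"id": "u1", "email": "ada@example.org"}}
muts = mutations_for(event)
```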
On Sun, Jan 27, 2013 at 4:03 PM, Fredrik Stigbäck fredrik.l.stigb...@sitevision.se wrote:
Date: Sunday, January 27, 2013 5:50 PM
To: user@cassandra.apache.org
Subject: Re: Denormalization