When I said that writes were cheap, I meant that in the normal case
people are making 2-10 inserts for what in a relational database might be one.
30K inserts is certainly not cheap.
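To make the "2-10 inserts per logical write" idea concrete, here is a minimal sketch of the client-side fan-out technique mentioned later in the thread: one logical event (a user profile update) expands into one mutation per denormalized table. The table names and the in-memory "cluster" dict are hypothetical stand-ins; a real system would send these as mutations through a Cassandra driver.

```python
# Tables that each hold a denormalized copy of user data (assumed names).
DENORMALIZED_TABLES = ["users_by_id", "users_by_email", "posts_by_user"]

# In-memory stand-in for the cluster: table -> row key -> columns.
cluster = {t: {} for t in DENORMALIZED_TABLES}

def mutations_for(event):
    """Expand one update event into one mutation per denormalized table."""
    return [(table, event["user_id"],
             {"name": event["name"], "email": event["email"]})
            for table in DENORMALIZED_TABLES]

def apply(event):
    # "Writes are cheap": just re-write the user's columns everywhere.
    for table, key, columns in mutations_for(event):
        cluster[table].setdefault(key, {}).update(columns)

apply({"user_id": "u1", "name": "Fredrik", "email": "f@example.com"})
print(len(mutations_for({"user_id": "u1", "name": "x", "email": "y"})))  # 3
```

So one logical update costs three physical writes here; with many denormalized views (or 30,000 child objects, as in Dean's ACL case) that multiplier is what makes the write no longer cheap.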

Your use case with 30,000 inserts is probably a special case. Most
directory services that I am aware of (OpenLDAP, Active Directory, Sun
Directory Server) do eventually consistent master/slave and multi-master
replication. So no worries about having to background something. You just
want the replication to be fast enough that when you call the employee
about to be fired into the office, by the time he leaves and gets home
he cannot VPN in and rm -rf / your main file server :)


On Sun, Jan 27, 2013 at 7:57 PM, Hiller, Dean <dean.hil...@nrel.gov> wrote:

> Sometimes this is true, sometimes not… We have a use case with an admin
> tool where we choose to do this denormalization for ACLs to make permission
> checks extremely fast.  That said, we have one issue with an object that has
> too many children (30,000), so when someone gives a user access to this one
> object with 30,000 children, we end up with a bad 60-second wait, and users
> ended up getting frustrated and trying to cancel (our plan, since admin
> activity hardly ever happens, is to do it on our background thread, return
> immediately to the user, and tell him his changes will take effect in 1
> minute).  After all, admin changes are infrequent anyway.  This example
> demonstrates how sometimes it can almost burn you.
>
> I guess my real point is that it really depends on your use cases ;).  In a
> lot of cases denormalization can work, but in some cases it burns you, so
> you have to balance it all.  In 90% of our cases our denormalization is
> working great, and for this one case we need to background the permission
> change, as we still LOVE the performance of our ACL checks.
>
> Ps. 30,000 writes in Cassandra is not cheap when done from one server ;),
> but in general parallelized writes are very fast for something like 500.
>
> Later,
> Dean
>
> From: Edward Capriolo <edlinuxg...@gmail.com>
> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> Date: Sunday, January 27, 2013 5:50 PM
> To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> Subject: Re: Denormalization
>
> One technique is to build a tool on the client side that takes the event
> and produces N mutations. In C* writes are cheap, so essentially you
> re-write everything on all changes.
>
> On Sun, Jan 27, 2013 at 4:03 PM, Fredrik Stigbäck <
> fredrik.l.stigb...@sitevision.se> wrote:
> Hi.
> Since denormalized data is a first-class citizen in Cassandra, how do you
> handle updating denormalized data?
> E.g., if we have a USER CF with name, email, etc., denormalize user
> data into many other CFs, and then update the information about a user
> (name, email...), what is the best way to handle updating those user data
> properties, which might be spread out over many CFs and many rows?
>
> Regards
> /Fredrik
>
>
