You seem to be well aware that you're not looking at using Cassandra for what it is designed for (which obviously implies you should expect sub-optimal behavior), so I'm not going to insist on that point.
As to how you could achieve that, a relatively simple solution (one that does not require writing your own partitioner) would be to use 2 datacenters (which obviously don't have to be real physical datacenters): put the node that should have it all in one datacenter with RF=1, and put all the other nodes in the other datacenter with RF=0. As Janne said, you could still have hints written by the other nodes if the single storage node is dead, but you can set the system property cassandra.maxHintTTL to 0 to disable hints.

--
Sylvain

On Wed, Dec 18, 2013 at 10:20 AM, Colin MacDonald <colin.macdon...@sas.com> wrote:

> Ahoy the list. I am evaluating Cassandra in the context of using it as
> a storage back end for the Titan graph database.
>
> We'll have several nodes in the cluster. However, one of our
> requirements is that data has to be loaded into and stored on a specific
> node and only on that node. Also, it cannot be replicated around the
> system, at least not stored persistently on disk – we will of course make
> copies in memory and on the wire as we access remote nodes. These
> requirements are non-negotiable.
>
> We understand that this is essentially the opposite of what Cassandra is
> designed for, and that we're missing out on all the scalability and
> robustness, but is it technically possible?
>
> First, I would need to create a custom partitioner – is there any
> tutorial on that? I see a few "you don't need to" threads, but I do.
>
> Second, how easy is it to have Cassandra not replicate data between nodes
> in a cluster? I'm not seeing an obvious configuration option for that,
> presumably because it obviates much of the point of using Cassandra, but
> again, we're working within some rather unfortunate constraints.
>
> Any hints or suggestions would be most gratefully received.
>
> Kind regards,
>
> -Colin MacDonald-
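For the archives, a sketch of what the two-datacenter setup described above could look like (the datacenter and keyspace names here are illustrative, not from the thread). With GossipingPropertyFileSnitch, the single storage node would set `dc=DC1` in its cassandra-rackdc.properties and every other node would set `dc=DC2`; the keyspace then assigns replicas only to DC1:

```
-- Illustrative CQL: only DC1 stores data. Leaving DC2 out of the
-- replication map is the usual way to give it zero replicas.
CREATE KEYSPACE titan_storage
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 1
  };
```

Hints could then be disabled by passing -Dcassandra.maxHintTTL=0 to the JVM (e.g. via JVM_OPTS in cassandra-env.sh), as mentioned above.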