One schema I was thinking of was this:

max.row.age: The producer will not write to this row after the max row age,
even if the row is not full (e.g. 10 minutes)
max.number.of.messages: The maximum number of messages that will ever be written to a row
max.expire.time: The TTL every column is written with

set Queue[timeuuid][max.row.age] = date()
set Queue[timeuuid][max.number.of.messages] = 50000
set Queue[timeuuid][messages.written] = 0
set Queue[timeuuid][consumer.id] = ''
set Queue[timeuuid][messages.consumed] = 0
set Queue[timeuuid][uuid-of-message1] = value-of-message1
set Queue[timeuuid][uuid-of-message2] = value-of-message2

So a producer writes these messages.

Fields like messages.consumed and consumer.id are only accessed by
consumers, using CAS to ensure that messages are not double counted. You want
to have either a TTL or a very low gc_grace_seconds. This way N consumers can
get_range_slice to find rather large rows and can use CAS to register
without double consuming. The consumer can update messages.consumed
atomically as it reads, in slice chunks (e.g. 1000 messages at a time). When
the consumer is done reading all the data, one tombstone destroys the entire
row.
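
To make the consumer registration step concrete, here is a minimal CQL sketch
using lightweight transactions (Cassandra 2.0+). The table name queue_meta and
the column names are assumptions for illustration only; the dynamic row above
would need some equivalent CQL mapping:

-- hypothetical CQL mapping of the per-row metadata (names are assumptions)
CREATE TABLE queue_meta (
    row_id timeuuid PRIMARY KEY,
    consumer_id text,
    messages_written int,
    messages_consumed int
);

-- a consumer claims the row with CAS; only one consumer wins the race
UPDATE queue_meta SET consumer_id = 'consumer-1'
WHERE row_id = uuid-of-row IF consumer_id = '';

-- progress is advanced with the same CAS check so messages are not double counted
UPDATE queue_meta SET messages_consumed = 1000
WHERE row_id = uuid-of-row IF messages_consumed = 0;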


On Sat, Feb 22, 2014 at 9:27 PM, Jagan Ranganathan <ja...@zohocorp.com> wrote:

> Thanks Duy Hai for sharing the details. I have a doubt. What if, for some
> reason, there is a network partition, or more than 2 nodes serving the same
> partition/load fail, and you end up writing hinted handoffs?
>
> Is there a possibility of data loss? If yes, how do we avoid that?
>
> Regards,
> Jagan
>
> ---- On Sat, 22 Feb 2014 22:48:19 +0530 *DuyHai Doan
> <doanduy...@gmail.com>* wrote ----
>
>     Jagan
>
> Some time ago I dealt with a similar queuing design for a customer.
>
> *If you never delete messages in the queue*, then it is possible to use
> wide rows with bucketing and monotonically increasing column names to store
> the messages.
>
> CREATE TABLE read_only_queue (
>    bucket_number int,
>    insertion_time timeuuid,
>    message text,
>    PRIMARY KEY(bucket_number,insertion_time)
>   );
>
>  Let's say that you allow only 100 000 messages per partition (physical
> row) to avoid too-wide rows. Then inserting into / reading from the table
> *read_only_queue* is easy:
>
>  For the message producer (a sketch follows these steps):
>
>    1) Start at bucket_number = 1
>    2) Insert messages with column name = a generated timeUUID with
> micro-second precision (depending on whether the insertion rate is high or
> not)
>    3) If the message count reaches 100 000, increment bucket_number by one and go
> to 2)
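>
> A minimal sketch of the producer insert, assuming the timeuuid is generated
> client-side and the per-bucket counter is kept by the producer (bind markers
> stand in for the values):
>
> INSERT INTO read_only_queue (bucket_number, insertion_time, message)
> VALUES (?, ?, ?);
> -- bucket_number starts at 1, insertion_time is the client-generated timeuuid;
> -- when the client-side counter reaches 100 000, increment bucket_number and reset the counter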
>
> For the message reader (a sketch follows these steps):
>
>    1) Start at bucket_number = 1
>    2) Read messages by slices of *N*, and save the *insertion_time* of the
> last read message
>    3) Use the saved *insertion_time* to perform the next slice query
>    4) If the read message count reaches 100 000, increment bucket_number and go to
> 2). Keep the *insertion_time*, do not reset it, since its value is
> monotonically increasing
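>
> A minimal sketch of the reader's slice query (for the first slice of a bucket,
> the insertion_time condition can simply be dropped):
>
> SELECT insertion_time, message
> FROM read_only_queue
> WHERE bucket_number = ? AND insertion_time > ?
> LIMIT 1000;
> -- reuse the insertion_time of the last row returned as the start of the next slice;
> -- after 100 000 rows from a bucket, increment bucket_number but keep the saved insertion_time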
>
> For multiple and concurrent producers & consumers, there is a trick. Let's
> assume you have *P* concurrent producers and *C* concurrent consumers.
>
>   Assign a numerical ID to each producer and consumer. First producer ID
> = 1... last producer ID = *P*. Same for consumers.
>
>   - re-use the above algorithm
>   - each producer/consumer starts at *bucket_number* = its ID
>   - at the end of the row,
>         - next bucket_number = current bucket_number + *P* for producers
>         - next bucket_number = current bucket_number + *C* for consumers
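>
> For example, with *P* = 3 producers, producer 1 fills buckets 1, 4, 7, ...,
> producer 2 fills buckets 2, 5, 8, ... and producer 3 fills buckets 3, 6, 9, ...,
> so together they cover every bucket with no overlap, and the *C* consumers
> stride through the buckets the same way with step *C*.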
>
>
> The last thing to take care of is compaction configuration to reduce the
> number of SSTables on disk.
>
> If you manage to avoid accumulation effects, e.g. the reading rate is
> faster than the writing rate, the messages are likely to be consumed while
> they are still in memory (in the memtable) on the server side. In this
> particular case, you can optimize further by deactivating compaction for the table.
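>
> A sketch of what deactivating compaction could look like; the 'enabled'
> compaction sub-option is an assumption here and is only available in more
> recent Cassandra versions, so verify it before relying on it:
>
> ALTER TABLE read_only_queue
> WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'enabled': 'false'};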
>
> Regards
>
>  Duy Hai
>
> On Sat, Feb 22, 2014 at 5:56 PM, Jagan Ranganathan <ja...@zohocorp.com> wrote:
>
>  Hi,
>
> Thanks for the pointer.
>
> Following are some options given there,
>
>    - If you know where your live data begins, hint Cassandra with a start
>    column, to reduce the scan times and the amount of tombstones to collect.
>    -  A broker will usually have some notion of what's next in the
>    sequence and thus be able to do much more targeted queries, down to a
>    single record if the storage strategy were to choose monotonic sequence
>    numbers.
>
>  What we need to do is have some intelligence in using the system and avoid
> tombstones: either use the suggested column name approach, or use a proper
> start column if a slice query is used.
>
>  Is that right, or am I missing something here?
>
>  Regards,
>  Jagan
>
> ---- On Sat, 22 Feb 2014 20:55:39 +0530 *DuyHai Doan <doanduy...@gmail.com>*
> wrote ----
>
>   Jagan
>
>   Queue-like data structures are known to be one of the worst anti-patterns
> for Cassandra:
> http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets
>
>
>
> On Sat, Feb 22, 2014 at 4:03 PM, Jagan Ranganathan <ja...@zohocorp.com> wrote:
>
>  Hi,
>
>  I need to decouple some of the work being processed from the user thread
> to provide a better user experience. For that I need a queuing system with
> the following needs:
>
>    - High Availability
>    - No Data Loss
>    - Better Performance.
>
> Following are some libraries that were considered, along with the
> limitations I see:
>
>    - Redis - Data Loss
>    - ZooKeeper - Not advised for a queue system.
>    - TokyoCabinet/SQLite/LevelDB - of these, LevelDB seems to perform
>    best. With the replication requirement, I would probably have to look at Apache
>    ActiveMQ+LevelDB.
>
> After checking the third option above, I kind of wonder if Cassandra
> with Leveled Compaction offers a similar system. Do you see any issues with
> such a usage, or are there better solutions available?
>
> Will be great to get insights on this.
>
> Regards,
> Jagan
>
