Hi,

On Wed, Oct 30, 2013 at 2:50 AM, Chetan Mehrotra
<[email protected]> wrote:
> Currently we are storing blobs by breaking them into small chunks and
> then storing those chunks in MongoDB as part of the blobs collection.
> This approach could cause issues, as MongoDB maintains a global
> exclusive write lock at the database level [1]. So even writing
> multiple small chunks of, say, 2 MB each would lead to write lock
> contention.
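For reference, the chunking scheme described above could be sketched roughly as
follows. This is only an illustration; the document shape (`blobId`, `seq`,
`data` fields and the `blobId#seq` id convention) is an assumption, not Oak's
actual schema:

```python
# Illustrative sketch of splitting a blob into chunk documents for a
# "blobs" collection. Field names and the id scheme are assumptions.

CHUNK_SIZE = 2 * 1024 * 1024  # 2 MB chunks, as in the example above

def split_into_chunks(blob_id, data, chunk_size=CHUNK_SIZE):
    """Break a blob into a list of chunk documents."""
    return [
        {"_id": f"{blob_id}#{i}",   # one document per chunk
         "blobId": blob_id,
         "seq": i,
         "data": data[off:off + chunk_size]}
        for i, off in enumerate(range(0, len(data), chunk_size))
    ]
```

Each chunk document would then be inserted as a separate write, which is
what raises the per-database write-lock concern.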

Note that the underlying disk forces the serialization of all writes on
a single shard in any case, so I wouldn't be too worried about this.
MongoDB can still allow concurrent read access to cached content (see
http://docs.mongodb.org/manual/faq/concurrency/#does-a-read-or-write-operation-ever-yield-the-lock),
so AFAICT the worry about a write blocking all concurrent reads is
unfounded unless it shows up in a benchmark.

BR,

Jukka Zitting
