To close this thread:
On Wed, Oct 30, 2013 at 7:52 PM, Jukka Zitting wrote:
> So AFAICT the worry about a write blocking all concurrent reads is
> unfounded unless it shows up in a benchmark.
I tried to measure the effect of such a scenario in OAK-1153 [1], and from
the results obtained there the concern does not appear to be warranted.
Hi,
On Wed, Oct 30, 2013 at 2:50 AM, Chetan Mehrotra
wrote:
> Currently we are storing blobs by breaking them into small chunks and
> then storing those chunks in MongoDB as part of the blobs collection. This
> approach would cause issues as Mongo maintains a global exclusive
> write lock at the per-database level [1].
Hi Chetan,
>
> 3. Bring back the JR2 DataStore implementation and just save metadata
> related to binaries in Mongo. We already have S3 based implementation
> there and they would continue to work with Oak also
>
I think we will need the data store impl for Oak in any case, regardless of the
outcome of this discussion.
>> So even adding a 2
>> MB chunk on a sharded system over a remote connection would block reads
>> for that complete duration. So at a minimum we should be avoiding that.
I guess if there are read replicas in the shard's replica set, it will
mitigate the effect to some extent.
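A sketch of how reads could be routed to those secondaries, assuming a standard replica-set deployment (the hostnames and replica-set name below are placeholders, and whether MongoMK can tolerate the eventual consistency of secondary reads is a separate question):

```
mongodb://host1:27017,host2:27017/?replicaSet=rs0&readPreference=secondaryPreferred
```

With readPreference=secondaryPreferred the driver sends reads to a secondary when one is available, so those reads no longer contend with the per-database write lock held on the primary.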
On Wed, Oct 30, 2013
> sounds reasonable. what is the impact of such a design when it comes
> to map-reduce features? I was thinking that we could use it e.g. for
> garbage collection, but I don't know if this is still an option when data
> is spread across multiple databases.
I will investigate that aspect further.
>
> Open questions are: what is the write throughput for one
> shard, does the write lock also block reads (I guess not), does the write
As Ian mentioned above, write locks block all reads. So even adding a 2
MB chunk on a sharded system over a remote connection would block reads
for that complete duration.
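As a back-of-the-envelope illustration of that worry (the throughput figure below is an assumption for illustration, not a measurement, and this presumes the lock really is held for the whole transfer):

```java
public class LockHoldEstimate {

    // Time the per-database write lock would be held if it were held for
    // the entire transfer, as the thread worries. Inputs are illustrative.
    static double lockHeldMs(double chunkMb, double mbPerSec) {
        return chunkMb / mbPerSec * 1000.0;
    }

    public static void main(String[] args) {
        // A 2 MB chunk over a remote link at an assumed ~10 MB/s
        // effective throughput:
        System.out.println(lockHeldMs(2.0, 10.0) + " ms"); // 200.0 ms
    }
}
```

Even at that optimistic throughput, every concurrent read against the same database would stall for a fifth of a second per chunk written.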
On 30 October 2013 07:55, Thomas Mueller wrote:
> Hi,
>
>> as Mongo maintains a global exclusive write lock at the per-database level
>
> I think this is not necessarily a huge problem. As far as I understand, it
> limits write concurrency within one shard only, so it does not block
> scalability.
Hi,
> as Mongo maintains a global exclusive write lock at the per-database level
I think this is not necessarily a huge problem. As far as I understand, it
limits write concurrency within one shard only, so it does not block
scalability. Open questions are: what is the write throughput for one
shard, does the write lock also block reads (I guess not), does the write
Hi,
> Currently we are storing blobs by breaking them into small chunks and
> then storing those chunks in MongoDB as part of the blobs collection. This
> approach would cause issues as Mongo maintains a global exclusive
> write lock at the per-database level [1]. So even writing multiple
> small chunks of say 2 MB
Hi,
Currently we are storing blobs by breaking them into small chunks and
then storing those chunks in MongoDB as part of the blobs collection. This
approach would cause issues as Mongo maintains a global exclusive
write lock at the per-database level [1]. So even writing multiple
small chunks of say 2 MB
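A minimal sketch of the splitting step described above; the 256 KB chunk size and the class/method names are assumptions for illustration, not necessarily what Oak's blob store actually uses:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkSplitter {

    // Assumed chunk size for illustration only.
    static final int CHUNK_SIZE = 256 * 1024;

    // Split a binary into fixed-size chunks; each chunk would then be
    // stored as one document in the blobs collection.
    static List<byte[]> split(byte[] data) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < data.length; off += CHUNK_SIZE) {
            int end = Math.min(data.length, off + CHUNK_SIZE);
            chunks.add(Arrays.copyOfRange(data, off, end));
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] blob = new byte[2 * 1024 * 1024]; // a 2 MB binary
        System.out.println(split(blob).size());  // 8 chunks of 256 KB
    }
}
```

The point of the sketch is that one logical binary turns into many document inserts, and under a per-database write lock each insert briefly blocks every reader of that database.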