On 25/11/15 13:53, Jan Beulich wrote:
>>>> On 25.11.15 at 14:43, wrote:
>> On 25/11/15 12:35, Jan Beulich wrote:
>>>>> On 20.11.15 at 17:03, wrote:
>>>> @@ -208,8 +210,6 @@ active_entry_acquire(struct grant_table *t, grant_ref_t e)
>>>> {
>>>>     struct active_grant_entry *act;
>>>>
>>>> -    ASSERT(rw_is_locked(&t->lock));
>>>
>>> Even if not covering all cases, I don't think this should be dropped,
>>> just like you don't drop r
The per-domain grant table read lock suffers from significant contention when
performing multi-queue block or network IO, due to the parallel
grant map/unmap/copy operations occurring on the DomU's grant table.
On multi-socket systems, the contention results in the locked compare-and-swap
operation failing fr