Hi Tim,

On Thu, Nov 27, 2014 at 2:33 AM, Tim Small <t...@seoss.co.uk> wrote:
> On 25/11/14 16:30, Jens Axboe wrote:
>
>> do we really need to do paride here?
>
> I did consider this, but I made the change there too on the basis that:
>
> . paride has received a few commits this year (and is listed as being
>     maintained)
> . The change is trivial
> . It fixes a performance regression which was introduced during the BKL
>     removal (mutex being retained by sleeping processes).
>
> I'm happy to drop it, if you prefer.
>
>> Patches 2-4 have identical subjects, and no commit message...
>
> Sorry about that, will fix it with next version.
>
> Having just seen this thread from 2013:
>
> http://permalink.gmane.org/gmane.linux.scsi/79483
>
> I decided to exercise the eject code path a bit more by triggering
> simultaneous eject commands on all 11 optical drives in my test box,
> followed by simultaneous close-tray commands, repeatedly.
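>
> The loop I was using was roughly along these lines (a simplified
> reconstruction from memory; the drive count and /dev/sr* paths are just
> what my test box happens to have):
>
>     /* one child per drive, each hammering eject / close-tray */
>     #include <fcntl.h>
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <sys/ioctl.h>
>     #include <sys/wait.h>
>     #include <unistd.h>
>     #include <linux/cdrom.h>
>
>     int main(void)
>     {
>         int i;
>
>         for (i = 0; i < 11; i++) {
>             if (fork() == 0) {
>                 char dev[32];
>                 int fd;
>
>                 snprintf(dev, sizeof(dev), "/dev/sr%d", i);
>                 for (;;) {
>                     /* O_NONBLOCK so the open succeeds with no medium */
>                     fd = open(dev, O_RDONLY | O_NONBLOCK);
>                     if (fd < 0)
>                         exit(1);
>                     ioctl(fd, CDROMEJECT, 0);
>                     ioctl(fd, CDROMCLOSETRAY, 0);
>                     close(fd);
>                 }
>             }
>         }
>         while (wait(NULL) > 0)
>             ;
>         return 0;
>     }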
>
> I haven't been able to reproduce the error reported in that email, but
> from observing the behaviour of the drives it looks like access to PATA
> drives is being serialised elsewhere, so the issue in that link may
> have been fixed?
>
> Unfortunately running these tests did eventually make all further
> attempts to open /dev/sr* block on my test box.
>
> I've stared at the code for a while, but I'm not making any headway so
> far, except to note that a blocking blk_execute_rq (called by
> test_unit_ready) then causes all other cdrom open/close calls to block
> (because sr_mutex is held by sr_block_open(), which in turn calls
> check_disk_change... scsi_test_unit_ready).
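>
> To lay the chain out explicitly (simplified, pieced together from
> reading the code rather than from a trace):
>
>     sr_block_open()               <- takes sr_mutex (which looks to be
>       check_disk_change()            shared by all the sr devices)
>         ...
>           scsi_test_unit_ready()
>             blk_execute_rq()      <- blocks here, sr_mutex still held
>
> so every other cdrom open/close on the box ends up queued on sr_mutex
> behind that one blocked task.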
>
> How do I work out why blk_execute_rq is blocking?

As you're playing with locks, I assume you're running with LOCKDEP
enabled? If not, that might tell you what's going on.
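
If it's not already in your test kernel config, something like the
following (option names from memory, so double-check them against your
kernel version) should get lockdep going:

    CONFIG_DEBUG_KERNEL=y
    CONFIG_PROVE_LOCKING=y
    CONFIG_DEBUG_LOCK_ALLOC=y
    # optional: per-lock contention stats in /proc/lock_stat
    CONFIG_LOCK_STAT=y
    CONFIG_DEBUG_ATOMIC_SLEEP=y

Any ordering problem it spots should show up as a splat in dmesg.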

Thanks,

-- 
Julian Calaby

Email: julian.cal...@gmail.com
Profile: http://www.google.com/profiles/julian.calaby/