A 3140 reason code does mean 'device is not available'. Typically, though, this is not an RMS problem. The quickest way to find out would be to query the drive and, if it shows 'FREE', try attaching the drive outside of RMS to see if that succeeds. Generally there is a reason why RMS could not attach the drive. If the drive was not available during RMS initialization, RMS will later try to initialize and use the device, so you should not have to restart RMS. This all assumes you are current on RMS service: the latest RMS service does not require the drive to be free in order to query it (with the corresponding CP service applied).
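As a rough sketch only (using the EE6 address from your note, an assumed free virtual address of 181, and a user ID with class B authority), a small REXX exec along these lines would show whether the drive is FREE and whether a plain CP ATTACH works outside of RMS:

/* REXX - illustrative only: query a real tape device and try an     */
/* attach outside of RMS.  EE6 is the address from this thread;      */
/* 181 is just an assumed free virtual device address.               */
devaddr = 'EE6'

/* Ask CP about the real device; the response shows FREE, ATTACHED   */
/* to a user, OFFLINE, and so on.                                     */
'CP QUERY' devaddr

/* If the device shows FREE, try attaching it to this virtual        */
/* machine (requires class B).  A failure here points below RMS.     */
'CP ATTACH' devaddr 'TO * AS 181'
say 'ATTACH return code was' rc

/* Give the device back when finished.                                */
'CP DETACH 181'

If the CP-level attach works while the RMS mount still fails with 3140, that narrows the problem to RMSMASTR's view of the drive rather than the device itself.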
Best Regards,
Les Geer
IBM z/VM and Linux Development

>Yes, I have tried a native mount with DFSMSRM and got the same problem. It
>gives me a RC=8(3140), which suggests a drive problem, but I also get a
>message saying NO SCRATCH80 - although, now I think about it, this was in
>the VMTAPE console.
>
>If I do a DFSMSRM Q LIB DEV EE6 (LIBNAM VTSTPFC2 then I get a message back
>saying the drive is not available.
>
>While VMTAPE may be misunderstanding the reason for the problem, it is
>clear that the problem lies in DFSMSRM and not VMTAPE.
>
>Incidentally, I can successfully query that drive from another system -
>giving further credence to my idea that this is caused by this drive being
>in use when RMSMASTR was initialised on the system where I am seeing the
>problem (I think we had a problem with RACF the weekend before last where
>some service machines needed to be restarted).
>
>
>>What is the RC and REASON returned by RMS when the mount request is issued
>>to the "bad" drive?
>>
>>I assume you've tried the mount outside of VM:Tape using the native
>>DFSMSRM command and that's why you are suspicious of DFSMS in this case?
>>
>>
>>>We have been using DFSMS/RMS on VM for a long time. At the weekend we had
>>>an issue which, I suspect, has happened before but not been identified.
>>>
>>>We have 16 drives per VTS shared between 6 VM systems (VMTAPE STAM
>>>handles the sharing of the drives).
>>>
>>>Yesterday, on only one of these drives, I was getting a message that
>>>there were no scratch volumes available. An identical mount request (for
>>>the same scratch pool) on an adjacent drive in the same VTS was satisfied
>>>without problems.
>>>
>>>I think I know what happened - but I am not sure if this is a bug (or how
>>>to handle it). What I think happened is this:
>>>
>>>When RMSMASTR was started on this system, it went through its
>>>initialisation process, but this particular drive was in use at the time.
>>>As a result, RMS is confused.
>>>
>>>If I am right, then a restart of RMSMASTR would solve the problem for
>>>this drive but create other potential problems with the other drives in
>>>use. Because we have drives in use for long periods of time (by TPF test
>>>guest systems), it would almost take a system outage to have no VTS
>>>drives in use.
>>>
>>>Does anyone have any experience or knowledge of this situation? If so:
>>>a) Does my theory sound correct?
>>>b) Is this a reportable bug (should I open a PMR)?
>>>c) Is there a way to get round this problem (for example: can I get
>>>RMSMASTR to reinitialise a specific drive)?
