Hi Aharon,

Yes, I removed the LUN from the storage side. That problem is already solved, but I'm a little confused about the iSCSI session handling on rhev-m. I created a new LUN and rhev-m recognized it properly, but after I deleted the previous LUN, rhev-m still showed it, marked as unavailable. To work around this "cache" issue on rhev-m, I ran "iscsiadm -m session -u" and logged back in with "iscsiadm -m node -l". The problem is that when I re-log in to the iSCSI target on the rhev-host, the running VMs get paused.
Is there any other method to "clear the cache" for the stale LUN on rhev-m?

On Tue, Mar 3, 2015 at 8:34 PM, Aharon Canan <[email protected]> wrote:
> Hi
>
> How did you remove the lun?
> from the storage side?
>
> Regards,
> __________________________________________________
> *Aharon Canan*
> ------------------------------
>
> *From: *"aditya hilman" <[email protected]>
> *To: *[email protected]
> *Sent: *Tuesday, March 3, 2015 6:03:40 AM
> *Subject: *[ovirt-users] Storage Domain ISCSI Inactive
>
> Hi folks,
>
> I've already searched the internet and the related oVirt documentation, but I'm still facing this problem.
>
> I have 3 hosts and 1 NetApp storage array. After creating a new LUN and then deleting it, the Data Center goes down, although the guest VMs keep running.
>
> Steps to reproduce:
> 1. Create a new LUN.
> 2. Attach the new LUN to the existing iSCSI data storage domain (master).
> 3. The space on the existing iSCSI storage domain increases.
> 4. Remove the new LUN from the existing iSCSI storage domain.
> 5. The storage domain becomes Non-Responsive.
>
> I've already rebooted the rhev-m and also the rhev-hosts, but the Data Center status is still Non-Responsive.
> Below are the related log entries:
>
> engine.log
> 2015-03-02 07:55:47,884 WARN [org.ovirt.engine.core.bll.storage.ExtendSANStorageDomainCommand] (ajp-/127.0.0.1:8702-4) [70555e6d] CanDoAction of action ExtendSANStorageDomain failed. Reasons: VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__EXTEND,ERROR_CANNOT_EXTEND_CONNECTION_FAILED,$lun
>
> 2015-03-02 08:05:35,214 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (DefaultQuartzScheduler_Worker-5) [241bd724] Failed in SpmStatusVDS method
> 2015-03-02 08:05:35,214 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (DefaultQuartzScheduler_Worker-5) [241bd724] Error code GeneralException and error message VDSGenericException: VDSErrorException: Failed to SpmStatusVDS, error = [Errno 19] Could not find dm device named `unknown device`
> 2015-03-02 08:05:35,214 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (DefaultQuartzScheduler_Worker-5) [241bd724] Command SpmStatusVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SpmStatusVDS, error = [Errno 19] Could not find dm device named `unknown device`
>
> 2015-03-02 08:05:49,332 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-92) [65ea552c] SPM Init: could not find reported vds or not up - pool:ISCSI_DATA_CENTER vds_spm_id: 2
>
> Thanks.
>
> _______________________________________________
> Users mailing list
> [email protected]
> http://lists.ovirt.org/mailman/listinfo/users
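A non-disruptive alternative to the full logout/login cycle described above is to rescan the existing sessions and then clean up the stale device state by hand. The sketch below is only illustrative: `mpathX`, `sdc`, and `sdd` are placeholder names, and the exact map/device names must be taken from `multipath -ll` on the affected host.

```shell
#!/bin/sh
# Sketch: refresh iSCSI/multipath state after a LUN was deleted on the
# array, WITHOUT logging the session out (iscsiadm -m session -u pauses
# running VMs). Placeholder names -- replace with your real devices.

STALE_MAP="mpathX"     # placeholder: multipath map of the deleted LUN
STALE_DEVS="sdc sdd"   # placeholder: its underlying SCSI block devices

# 1. Rescan existing sessions so added/removed LUNs are picked up;
#    unlike -u / -l this does not drop the session.
command -v iscsiadm >/dev/null 2>&1 && iscsiadm -m session --rescan

# 2. Flush the stale multipath map so it stops being reported.
command -v multipath >/dev/null 2>&1 && multipath -f "$STALE_MAP"

# 3. Tell the kernel to delete the stale SCSI block devices.
for dev in $STALE_DEVS; do
    [ -w "/sys/block/$dev/device/delete" ] && echo 1 > "/sys/block/$dev/device/delete"
done

echo "rescan requested for map $STALE_MAP"
```

Note the ordering: the multipath map should be flushed before the underlying `sdX` devices are deleted, otherwise device-mapper can be left holding references to devices that no longer exist.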
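The "Could not find dm device named `unknown device`" error in the engine.log above usually points at a stale device-mapper entry left behind after the LUN was removed on the array. A read-only way to inspect the host's view (assuming multipath-tools and device-mapper are installed; the commands are skipped if they are absent) is:

```shell
#!/bin/sh
# Inspect device-mapper / multipath state; a LUN deleted on the array but
# still mapped on the host typically shows paths in "failed faulty" state
# in the multipath -ll output.
command -v multipath >/dev/null 2>&1 && multipath -ll
command -v dmsetup   >/dev/null 2>&1 && dmsetup ls --target multipath

DONE_MSG="dm state inspected"
echo "$DONE_MSG"
```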

