Hi,

well, at this stage it looks very strange:
* it looks like the disks are really not on the storage domain - the LV for the
VM you are attempting to start (from the previous mail's logs) is not visible
at all.

* Did something happen to the storage server itself? Is it possible that things
were deleted from the storage directly (maybe you deleted the VMs' disks?)
* On the problematic node I see that the lvm command gave some errors regarding 
missing devices:
 /dev/mapper/360a9800042415569305d434565795a54: read failed after 0 of 4096 at 10737352704: Input/output error
  /dev/mapper/360a9800042415569305d434565795a54: read failed after 0 of 4096 at 10737410048: Input/output error
  /dev/mapper/360a9800042415569305d434565795a54: read failed after 0 of 4096 at 0: Input/output error
  /dev/mapper/360a9800042415569305d434565795a54: read failed after 0 of 4096 at 4096: Input/output error
* Is the domain a single LUN or multiple LUNs? What is the output of:
# multipath -ll
# vgs
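If paths to the LUN are down, `multipath -ll` marks them `failed faulty`. As a rough illustration of what to look for (the sample output below is invented, not taken from your system - on a real host you would feed in the actual `multipath -ll` output instead):

```shell
# Illustrative sketch only: the multipath -ll output below is made up.
sample='360a9800042415569305d434565795a54 dm-2 NETAPP,LUN
size=10G features="0" hwhandler="0" wp=rw
`-+- policy=round-robin 0 prio=0 status=enabled
  `- 3:0:0:1 sdb 8:16 failed faulty running'

# Count paths reported as failed/faulty.
failed=$(printf '%s\n' "$sample" | grep -c 'failed faulty')
echo "failed paths: $failed"
```

Any nonzero count there means the host has lost at least one path to the LUN, which would explain the read failures above.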

Could you maybe attach the full vdsm log from the problematic host?
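In the meantime, a quick way to check on a host whether LVM can see the domain's VG at all (using the storage domain UUID from my previous mail as the VG name) might be something like this sketch:

```shell
# Sketch: the VG name is the storage domain UUID from the earlier mail;
# run on the host as root. If the VG is not visible, the iSCSI session
# and multipath devices are the first things to check.
VG=ce77262f-8346-42e5-823a-bd321f0814e7
if lvs --noheadings -o lv_name "$VG" >/dev/null 2>&1; then
    echo "VG $VG is visible"
else
    echo "VG $VG is NOT visible - check 'iscsiadm -m session' and 'multipath -ll'"
fi
```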

Gadi Ickowicz

----- Original Message -----
From: "Jakub Bittner" <[email protected]>
To: "Gadi Ickowicz" <[email protected]>
Cc: [email protected]
Sent: Tuesday, February 11, 2014 10:25:33 AM
Subject: Re: [Users] node can not access disks

On 11.2.2014 07:47, Gadi Ickowicz wrote:
> lvs ce77262f-8346-42e5-823a-bd321f0814e7
Hello,

it is an iSCSI domain. All nodes are problematic.

lvs output:

http://fpaste.org/76058/

I restarted the problematic node but it did not help, so I removed it from
the cluster. But many VMs on other nodes are also dead (they can not run, or
can not boot from disk because they do not see it). We were only able to
export or restart about 5% of the VMs to a stable state.
_______________________________________________
Users mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/users
