Hi,

When trying to add a host with a manually installed 3.5-git vdsm on CentOS 7, I get these messages in the vdsm log:

   Detector thread::DEBUG::2014-08-05 08:07:12,445::protocoldetector::160::vds.MultiProtocolAcceptor::(_add_connection) Adding connection from **.***.***.***:34219
   Detector thread::DEBUG::2014-08-05 08:07:12,448::protocoldetector::171::vds.MultiProtocolAcceptor::(_remove_connection) Connection removed from **.***.***.***:34219
   Detector thread::WARNING::2014-08-05 08:07:12,448::protocoldetector::192::vds.MultiProtocolAcceptor::(_handle_connection_read) Unrecognized protocol: '\x16\x03\x00\x00c\x01\x00\x00_\x03\x00'
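
For what it's worth, those bytes look like the start of a plain SSL handshake: 0x16 is the TLS/SSL handshake record type, 0x03 0x00 is protocol version 3.0 (SSLv3), and the byte after the record length is 0x01, a ClientHello. So the connection being dropped seems to be an SSL client talking to a port where the protocol detector doesn't expect it. A minimal Python 3 sketch that just decodes the dumped bytes with the standard record layout (nothing vdsm-specific assumed):

   import struct

   # Exact bytes from the "Unrecognized protocol" warning above.
   data = b'\x16\x03\x00\x00c\x01\x00\x00_\x03\x00'

   # TLS/SSL record header: 1 byte content type, 2 bytes version, 2 bytes length.
   content_type, ver_major, ver_minor, rec_len = struct.unpack('>BBBH', data[:5])
   print('record type 0x%02x (22 = handshake)' % content_type)
   print('record version %d.%d (3.0 = SSLv3)' % (ver_major, ver_minor))
   print('record length %d bytes' % rec_len)

   # Handshake header inside the record: 1 byte type, 3 bytes length.
   hs_type = data[5]
   hs_len = int.from_bytes(data[6:9], 'big')
   print('handshake type %d (1 = ClientHello), length %d' % (hs_type, hs_len))

That decodes to a 99-byte SSLv3 handshake record carrying a 95-byte ClientHello, so it is the beginning of an SSL connection being refused, not garbage on the wire.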


The engine install fails, by the way; I think it's because of LVM:

   storageRefresh::DEBUG::2014-08-05 08:06:32,292::lvm::317::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
   storageRefresh::DEBUG::2014-08-05 08:06:32,293::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 } backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size (cwd None)
   storageRefresh::DEBUG::2014-08-05 08:06:32,319::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
   storageRefresh::DEBUG::2014-08-05 08:06:32,320::lvm::342::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
   storageRefresh::DEBUG::2014-08-05 08:06:32,320::lvm::365::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
   storageRefresh::DEBUG::2014-08-05 08:06:32,321::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 } backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None)
   storageRefresh::DEBUG::2014-08-05 08:06:32,344::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  No volume groups found\n'; <rc> = 0
   storageRefresh::DEBUG::2014-08-05 08:06:32,345::lvm::407::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
   storageRefresh::DEBUG::2014-08-05 08:06:32,345::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 } backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None)
   storageRefresh::DEBUG::2014-08-05 08:06:32,365::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  No volume groups found\n'; <rc> = 0
   storageRefresh::DEBUG::2014-08-05 08:06:32,366::lvm::365::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
   storageRefresh::DEBUG::2014-08-05 08:06:32,366::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 } backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None)
   storageRefresh::DEBUG::2014-08-05 08:06:32,386::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  No volume groups found\n'; <rc> = 0
   storageRefresh::DEBUG::2014-08-05 08:06:32,387::lvm::407::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
   storageRefresh::DEBUG::2014-08-05 08:06:32,387::hsm::387::Storage.HSM::(storageRefresh) HSM is ready
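
For reference, the log shows vdsm already passes use_lvmetad=0 and a reject-everything filter ('r|.*|') in the --config of every lvm call, and all the commands return rc 0; the only noise is the lvmetad warning and "No volume groups found". Below is a minimal sketch (Python, same paths and options as in the log, config string copied after shell unquoting) of re-running the logged pvs command by hand, to compare the output outside vdsm:

   import subprocess

   # Re-run the pvs command exactly as it appears in the vdsm log above,
   # to see whether the lvmetad warning and empty output also show up
   # when the command is run by hand. Nothing vdsm-specific is assumed.
   lvm_config = (
       ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1'
       ' write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0'
       " filter = [ 'r|.*|' ] }  global {  locking_type=1 prioritise_write_locks=1"
       '  wait_for_locks=1  use_lvmetad=0 } backup {  retain_min = 50  retain_days = 0 } '
   )

   cmd = [
       '/usr/bin/sudo', '-n', '/usr/sbin/lvm', 'pvs',
       '--config', lvm_config,
       '--noheadings', '--units', 'b', '--nosuffix',
       '--separator', '|', '--ignoreskippedcluster',
       '-o', 'uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,'
             'pe_alloc_count,mda_count,dev_size',
   ]

   proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
   out, err = proc.communicate()
   print('rc:', proc.returncode)
   print('stdout:', out.decode())
   print('stderr:', err.decode())   # the lvmetad warning ends up here

As far as I can tell, "No volume groups found" is expected with that reject-all filter, and the warning only says that the lvm2-lvmetad service is running on the CentOS 7 host while vdsm asks lvm not to use it.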


Kind regards, Jorick Astrego
Netbulae BV
