Re: [Users] storage domain reactivate not working

2012-04-01 Thread Rene Rosenberger
Hi,

OK, but how can I delete it if nothing works? I want to create a new storage
domain.

-----Original Message-----
From: Saggi Mizrahi [mailto:smizr...@redhat.com]
Sent: Friday, 30 March 2012 21:00
To: rvak...@redhat.com
Cc: users@ovirt.org; Rene Rosenberger
Subject: Re: AW: [Users] storage domain reactivate not working

I am currently working on patches to fix the issues with upgraded domains. I've 
been ill for most of last week, so it is taking a bit more time than it 
should.

- Original Message -
> From: "Rami Vaknin" 
> To: "Saggi Mizrahi" , "Rene Rosenberger"
> 
> Cc: users@ovirt.org
> Sent: Thursday, March 29, 2012 11:57:08 AM
> Subject: Fwd: AW: [Users] storage domain reactivate not working
>
> Rene, VDSM can't read the storage domain's metadata. The problem is
> that vdsm tries to read the metadata using the 'dd' command, which only
> applies to the old storage domain format; in the new format the
> metadata is saved as VG tags. Are you using a storage domain version
> lower than V2? Can you attach the full log?
>
> Saggi, any thoughts on that?
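One quick way to check which format the domain is actually in is to look at the LVM VG tags, since that is where the newer format keeps its metadata. A rough sketch, using the domain UUID from the log below as the VG name (the MDT_ tag naming is from memory, so treat it as an assumption):

  # V2+ block domains store their metadata, including the version, as VG tags
  sudo vgs --noheadings -o vg_name,vg_tags 8ed25a57-f53a-4cf0-bb92-781f3ce36a48

  # Older domains keep key=value metadata at the start of the "metadata" LV,
  # which is what the dd call in the log below is trying to read
  sudo dd if=/dev/8ed25a57-f53a-4cf0-bb92-781f3ce36a48/metadata bs=512 count=8 2>/dev/null | strings | grep -i version

If nothing shows up in the tags, the domain is most likely still in the old format and the dd-based read in the log is indeed the path vdsm has to take.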
>
> -------- Original Message --------
> Subject:  AW: [Users] storage domain reactivate not working
> Date: Thu, 29 Mar 2012 06:33:27 -0400
> From: Rene Rosenberger 
> To:   rvak...@redhat.com  , users@ovirt.org
> 
>
> Hi,
>
> not sure if the logs I posted are what you need. The thing is that the
> iSCSI target is connected, but in the web GUI it is locked. Can I
> unlock it?
>
> Regards, rene
>
> From: Rene Rosenberger
> Sent: Thursday, 29 March 2012 12:00
> To: Rene Rosenberger; rvak...@redhat.com ; users@ovirt.org
> Subject: AW: [Users] storage domain reactivate not working
>
> Hi,
>
> This is the error message:
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:46,310::misc::1032::SamplingMethod::(__call__) Returning last
> result
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:46,313::lvm::349::OperationMutex::(_reloadvgs) Operation 'lvm
> reload operation' got the operation mutex
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:46,322::lvm::284::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
> /sbin/lvm vgs --config " devices { preferred_names =
> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%360014052dd702d2defc8d459adba02dc|360014057fda80efdcae4d414eda829
> d7%\\ ", \\"r%.*%\\ " ] } global { locking_type=1
> prioritise_write_locks=1
> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
> --noheadings --units b --nosuffix --separator | -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_m
> da_size,vg_mda_free 8ed25a57-f53a-4cf0-bb92-781f3ce36a48' (cwd None)
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,096::lvm::284::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '
> /dev/mapper/360014057fda80efdcae4d414eda829d7: read failed after 0 of
> 4096 at 2147483582464: Input/output error\n
> /dev/mapper/360014057fda80efdcae4d414eda829d7: read failed after 0 of
> 4096 at 2147483639808: Input/output error\n
> /dev/mapper/360014057fda80efdcae4d414eda829d7: read failed after 0 of
> 4096 at 0: Input/output error\n WARNING: Error counts reached a limit
> of 3. Device /dev/mapper/360014057fda80efdcae4d414eda829d7 was
> disabled\n'; <rc> = 0
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,105::lvm::376::OperationMutex::(_reloadvgs) Operation 'lvm
> reload operation' released the operation mutex
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,107::persistentDict::175::Storage.PersistentDict::(__init__)
> Created a persistant dict with LvMetadataRW backend
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,110::blockSD::177::Storage.Misc.excCmd::(readlines)
> '/bin/dd iflag=direct skip=0 bs=2048
> if=/dev/8ed25a57-f53a-4cf0-bb92-781f3ce36a48/metadata count=1' (cwd
> None)
>
> Thread-5448::DEBUG::2012-03-29
> 11:57:47,155::blockSD::177::Storage.Misc.excCmd::(readlines) FAILED:
> <err> = "/bin/dd: reading
> `/dev/8ed25a57-f53a-4cf0-bb92-781f3ce36a48/metadata': Input/output
> error\n0+0 records in\n0+0 records out\n0 bytes (0 B) copied,
> 0.000525019 s, 0.0 kB/s\n"; <rc> = 1
>
> Thread-5448::ERROR::2012-03-29
> 11:57:47,158::sdc::113::Storage.StorageDomainCache::(_findDomain)
> Error while looking for domain
> `8ed25a57-f53a-4cf0-bb92-781f3ce36a48`
>
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/sdc.py", line 109, in _findDomain
>     return mod.findDomain(sdUUID)
>   File "/usr/share/vdsm/storage/blockSD.py", line 1051, in findDomain
>     return BlockStorageDomain(BlockStorageDomain.findDomainPath(sdUUID))
>   File "/usr/share/vdsm/storage/blockSD.py", line 241, in __init__
>     metadata = selectMetadata(sdUUID)
>   File "/usr/share/vdsm/storage/blockSD.py", line 210, in selectMetadata
>     if len(mdProvider) > 0:
>   File "/usr/share/vdsm/storage/persistentDict.py", line 51, in __len__
>     return len(self.keys())
>   File "/usr/share/vdsm/storage/persistentDict.py", line 95, in keys
>     return list(self.__iter__())
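The 'read failed ... Input/output error' lines earlier in the log point at the multipath device itself rather than at vdsm, so it is worth checking the path outside of vdsm first. A minimal sketch, reusing the device name from the log (standard multipath/iscsiadm/coreutils commands only):

  # State of the multipath map that LVM gave up on
  sudo multipath -ll 360014057fda80efdcae4d414eda829d7

  # Check whether the iSCSI sessions behind it are still logged in
  sudo iscsiadm -m session -P 3

  # Try a direct (uncached) read from the device, like LVM and vdsm do
  sudo dd if=/dev/mapper/360014057fda80efdcae4d414eda829d7 of=/dev/null bs=4096 count=1 iflag=direct

If that dd also fails with an I/O error, the problem is on the storage or network side, not in the engine or vdsm.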

[Users] Steps to be taken while shutting down ovirt-manager and the nodes.

2012-04-01 Thread Rahul Upadhyaya
Hi folks,

After I shut down my ovirt-manager and the KVM (+vdsm) nodes and restarted
them, my DB in the manager gave me errors... Do we need to follow some
specific procedure when restarting the whole setup?

-- 
Regards,
Rahul
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
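For what it's worth, the usual approach is to stop things in dependency order: the nodes first (ideally after putting the hosts into maintenance in the web admin), then the engine, then its database, and bring them back up in reverse. A rough sketch only, assuming the default service names of that time (ovirt-engine, vdsmd, postgresql); adjust for your distribution:

  # On each KVM node, once its host is in maintenance
  sudo service vdsmd stop

  # On the manager
  sudo service ovirt-engine stop
  sudo service postgresql stop

  # To start again: postgresql, then ovirt-engine on the manager,
  # then vdsmd on the nodes, then activate the hosts in the web admin

Stopping postgresql while the engine is still running might explain DB errors like the ones described above, though that is only a guess without the actual error messages.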