[ovirt-users] Export Domain Lock File

2018-05-15 Thread Nicholas Vaughan
Hi,

Is there a lock file on the Export Domain which stops it being mounted to a
2nd instance of oVirt?  We have a replicated Export Domain in a separate
location that we would like to mount to a backup instance of oVirt for DR
purposes.

Thanks in advance.
Nick
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


Re: [ovirt-users] Export Domain no show

2017-12-12 Thread Simone Tiraboschi
On Tue, Dec 12, 2017 at 8:32 AM, Rajat Patel  wrote:

> Hi Ovirt,
>
> We are using oVirt 4.1 (self-hosted engine) with NFS storage attached as
> data/ISO/export domains. We have one image which we want to import
> (manageiq-ovirt-fine-4.qc2). We copied it to our export location
> (/export/3157c57b-8f6a-4709-862a-713bfa59899a) and changed the ownership
> (chown -R 36:36 manageiq-ovirt-fine-4.qc2). The issue is we are not able to
> see it at oVirt UI -> Storage -> export -> VM Import, nor at Template
> Import. At the same time we see no error in the logs.
>

Hi,
did you try uploading your image from the web UI through oVirt Image I/O?


>
>
> Regards
> Techieim
>


[ovirt-users] Export Domain no show

2017-12-11 Thread Rajat Patel
Hi Ovirt,

We are using oVirt 4.1 (self-hosted engine) with NFS storage attached as
data/ISO/export domains. We have one image which we want to import
(manageiq-ovirt-fine-4.qc2). We copied it to our export location
(/export/3157c57b-8f6a-4709-862a-713bfa59899a) and changed the ownership
(chown -R 36:36 manageiq-ovirt-fine-4.qc2). The issue is we are not able to
see it at oVirt UI -> Storage -> export -> VM Import, nor at Template
Import. At the same time we see no error in the logs.
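For context on why a loose qcow2 copied to the export root never shows up: the VM Import / Template Import tabs only list entries described by OVF files inside the domain, not loose files. A sketch of the layout the import scan expects, demonstrated on a scratch directory (the UUIDs are made up; exact layout can vary by oVirt version):

```shell
# An export domain is scanned for OVF descriptors under master/vms/,
# with the corresponding disk volumes under images/<image-uuid>/.
# A loose image file at the domain root is never listed.
EXPORT=$(mktemp -d)   # stand-in for /export/<domain-uuid>
mkdir -p "$EXPORT/master/vms/1111-aaaa" "$EXPORT/images/2222-bbbb" "$EXPORT/dom_md"
touch "$EXPORT/master/vms/1111-aaaa/1111-aaaa.ovf"   # VM/template descriptor
touch "$EXPORT/images/2222-bbbb/3333-cccc"           # disk volume
touch "$EXPORT/manageiq-ovirt-fine-4.qc2"            # loose file: invisible to import
find "$EXPORT/master/vms" -name '*.ovf'              # what the import tab enumerates
```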

Regards
Techieim


[ovirt-users] Export domain. Same LUN but different NFS server : update SQL table?

2014-04-16 Thread Nicolas Ecarnot

Hi,

Our NFS export domain is stored on a LUN and served via an NFS server.
I have to change this NFS server, but I don't need to change the LUN.

After setting this NFS export domain to maintenance and detaching it, is it 
safe to just update the database table storage_server_connections and 
restart the engine?
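A hedged sketch of the kind of update in question. The table name storage_server_connections is real in the engine database, but the database name, connection strings, and restart command below are placeholders to adapt; take a database backup first.

```shell
# Run on the engine host, with the export domain in maintenance and detached.
# Inspect the existing connection rows first:
sudo -u postgres psql engine -c \
  "SELECT id, connection FROM storage_server_connections;"
# Point the row at the new NFS server (same export path, same LUN behind it):
sudo -u postgres psql engine -c \
  "UPDATE storage_server_connections \
     SET connection = 'new-nfs-server:/path/to/export' \
   WHERE connection = 'old-nfs-server:/path/to/export';"
# Restart the engine so it re-reads the connection:
service ovirt-engine restart   # or: systemctl restart ovirt-engine
```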


Regards,

PS : I read no answer to this :
http://lists.ovirt.org/pipermail/users/2014-March/022610.html

--
Nicolas Ecarnot


Re: [ovirt-users] Export domain. Same LUN but different NFS server : update SQL table?

2014-04-16 Thread Nicolas Ecarnot

On 16/04/2014 11:35, Nicolas Ecarnot wrote:

Hi,

Our NFS storage domain is stored as a LUN, and served via a NFS server.
I have to change this NFS server, but I don't need to change the LUN.

After setting this NFS export domain in maintenance and detached, is it
safe to just update the database table storage_server_connections and
restart the engine?

Regards,

PS : I read no answer to this :
http://lists.ovirt.org/pipermail/users/2014-March/022610.html



Hi,

At present, I have two oVirt setups: the older one on oVirt 3.3.0-4.el6, 
which was used to create the NFS export domain, and one on oVirt 
3.4.0-1.el6.
I used to attach the export domain to one datacenter or the other, 
carefully putting it into maintenance mode and then gracefully detaching 
it, and it worked.


I had to change the NFS server that was serving this LUN, and I finally 
updated the database table in the 3.4 setup.

I was immediately able to work with it: attach, import, etc.

After detaching it, I tried the same from the 3.3.0 setup and it failed, 
with the engine log below.
I tried deleting the row in the 3.3.0 database and importing from 
scratch. The row gets correctly re-created, but the import fails the 
same way.


Does anyone have an idea?
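The AcquireLockFailure in the log complains that the lease file does not exist or is not writeable. On the host, the export domain's lease lives in dom_md/ next to the metadata, and must be writable by vdsm (uid 36). A minimal check, demonstrated here on a scratch directory (the real path would be under /rhev/data-center/mnt/<server>/<domain-uuid>):

```shell
DOM=$(mktemp -d)          # stand-in for the mounted export domain directory
mkdir -p "$DOM/dom_md"
touch "$DOM/dom_md/leases"
chown 36:36 "$DOM/dom_md/leases" 2>/dev/null || true   # vdsm:kvm on a real host
chmod 660 "$DOM/dom_md/leases"
ls -l "$DOM/dom_md/leases"                             # should exist, vdsm-writable
test -f "$DOM/dom_md/leases" && echo "leases file present"
```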

2014-04-16 14:56:03,878 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (pool-6-thread-49) [7a31ca00] START, AttachStorageDomainVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = a7c189dd-4980-49ab-ad86-65a14235cbe4), log id: 62dfd8c8
2014-04-16 14:56:03,945 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (pool-6-thread-49) [7a31ca00] Failed in AttachStorageDomainVDS method
2014-04-16 14:56:03,973 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (pool-6-thread-49) [7a31ca00] Error code AcquireLockFailure and error message IRSGenericException: IRSErrorException: Failed to AttachStorageDomainVDS, error = Cannot obtain lock: id=a7c189dd-4980-49ab-ad86-65a14235cbe4, rc=1, out=['error - lease file does not exist or is not writeable', 'usage: /usr/libexec/vdsm/spmprotect.sh COMMAND PARAMETERS', 'Commands:', '  start { sdUUID hostId renewal_interval_sec lease_path[:offset] lease_time_ms io_op_timeout_ms fail_retries }', 'Parameters:', '  sdUUID - domain uuid', '  hostId - host id in pool', '  renewal_interval_sec - intervals for lease renewals attempts', '  lease_path - path to lease file/volume', '  offset - offset of lease within file', '  lease_time_ms - time limit within which lease must be renewed (at least 2*renewal_interval_sec)', '  io_op_timeout_ms - I/O operation timeout', '  fail_retries - Maximal number of attempts to retry to renew the lease before fencing (= lease_time_ms/renewal_interval_sec)'], err=[]
2014-04-16 14:56:03,977 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (pool-6-thread-49) [7a31ca00] Command AttachStorageDomainVDS execution failed. Exception: IrsOperationFailedNoFailoverException: IRSGenericException: IRSErrorException: Failed to AttachStorageDomainVDS, error = Cannot obtain lock: id=a7c189dd-4980-49ab-ad86-65a14235cbe4, rc=1, out=['error - lease file does not exist or is not writeable', 'usage: /usr/libexec/vdsm/spmprotect.sh COMMAND PARAMETERS', 'Commands:', '  start { sdUUID hostId renewal_interval_sec lease_path[:offset] lease_time_ms io_op_timeout_ms fail_retries }', 'Parameters:', '  sdUUID - domain uuid', '  hostId - host id in pool', '  renewal_interval_sec - intervals for lease renewals attempts', '  lease_path - path to lease file/volume', '  offset - offset of lease within file', '  lease_time_ms - time limit within which lease must be renewed (at least 2*renewal_interval_sec)', '  io_op_timeout_ms - I/O operation timeout', '  fail_retries - Maximal number of attempts to retry to renew the lease before fencing (= lease_time_ms/renewal_interval_sec)'], err=[]
2014-04-16 14:56:03,980 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (pool-6-thread-49) [7a31ca00] FINISH, AttachStorageDomainVDSCommand, log id: 62dfd8c8
2014-04-16 14:56:03,981 ERROR [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (pool-6-thread-49) [7a31ca00] Command org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailoverException: IRSGenericException: IRSErrorException: Failed to AttachStorageDomainVDS, error = Cannot obtain lock: id=a7c189dd-4980-49ab-ad86-65a14235cbe4, rc=1, out=['error - lease file does not exist or is not writeable', 'usage: /usr/libexec/vdsm/spmprotect.sh COMMAND PARAMETERS', 'Commands:', '  start { sdUUID

Re: [Users] Export Domain Upgrade

2013-09-29 Thread Omer Frenkel
- Original Message -

 From: Nicholas Kesick cybertimber2...@hotmail.com
 To: oVirt Mailing List users@ovirt.org
 Sent: Wednesday, September 25, 2013 8:43:47 PM
 Subject: [Users] Export Domain  Upgrade

 This is the first time I've run through an 'upgrade', so I'm very new to
 export domains, and I'm having some trouble getting one connected to 3.3. I
 had hoped to add this to the wiki, but it hasn't been as straightforward as
 I thought.

 On my oVirt 3.2 install, I created a NFS export (/var/lib/exports/DAONE) and
 exported my VMs to it. I created a tarball of everything in DAONE, and then
 formatted the system and installed Fedora 19, and then oVirt 3.3. Created
 the NFS resource (/var/lib/exports/DAONE), extracted the tarball, and
 followed these directions to clear the storage domain. However when I try to
 add it, the webadmin reports the following error:

 Error while executing action New NFS Storage Domain: Error in creating a
 Storage Domain. The selected storage path is not empty (probably contains
 another Storage Domain). Either remove the existing Storage Domain from this
 path, or change the Storage path).

 Any suggestions? I wonder if anything changed for 3.3 that need to be in the
 instructions?

looks like you should have done 'import existing export domain' to this path 
instead of trying to create a new one. 

 I also tried making another export domain (/var/lib/exports/export) and
 dumping everything into the UUID under that, but no VMs showed up to import.

 #showmount -e f19-ovirt.mkesick.net
 Export list for f19-ovirt.mkesick.net:
 /var/lib/exports/storage 0.0.0.0/0.0.0.0
 /var/lib/exports/iso 0.0.0.0/0.0.0.0
 /var/lib/exports/export 0.0.0.0/0.0.0.0
 /var/lib/exports/DAONE 0.0.0.0/0.0.0.0

 [root@f19-ovirt dom_md]# cat metadata
 CLASS=Backup
 DESCRIPTION=DaOne
 IOOPTIMEOUTSEC=1
 LEASERETRIES=3
 LEASETIMESEC=5
 LOCKPOLICY=
 LOCKRENEWALINTERVALSEC=5
 MASTER_VERSION=0
 POOL_UUID=
 REMOTE_PATH=localhost:/var/lib/exports/DAONE
 ROLE=Regular
 SDUUID=8e4f6fbd-b635-4f47-b113-ba146ee1c0cf
 TYPE=NFS
 VERSION=0
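The "clear the storage domain pool config" step referenced above boils down, per the linked wiki procedure, to blanking the POOL_UUID value and dropping the checksum line in dom_md/metadata, then importing the domain as an *existing* export domain rather than creating a new one. A sketch on a scratch copy (key values illustrative):

```shell
DOM=$(mktemp -d)   # stand-in for the export domain directory
mkdir -p "$DOM/dom_md"
cat > "$DOM/dom_md/metadata" <<'EOF'
CLASS=Backup
POOL_UUID=5849b030-626e-47cb-ad90-3ce782d831b3
REMOTE_PATH=localhost:/var/lib/exports/DAONE
ROLE=Regular
_SHA_CKSUM=0123456789abcdef
EOF
# Blank the pool attachment and remove the stale checksum:
sed -i -e 's/^POOL_UUID=.*/POOL_UUID=/' -e '/^_SHA_CKSUM=/d' "$DOM/dom_md/metadata"
cat "$DOM/dom_md/metadata"
```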



Re: [Users] Export Domain Upgrade

2013-09-29 Thread Itamar Heim

On 09/29/2013 12:15 PM, Omer Frenkel wrote:





*From: *Nicholas Kesick cybertimber2...@hotmail.com
*To: *oVirt Mailing List users@ovirt.org
*Sent: *Wednesday, September 25, 2013 8:43:47 PM
*Subject: *[Users] Export Domain  Upgrade

This is the first time I've run through an 'upgrade', so I'm very
new to export domains, and I'm having some trouble getting one
connected to 3.3. I had hoped to add this to the wiki, but it hasn't
been as straightforward as I thought.

On my oVirt 3.2 install, I created a NFS export
(/var/lib/exports/DAONE) and exported my VMs to it. I created a
tarball of everything in DAONE, and then formatted the system and
installed Fedora 19, and then oVirt 3.3. Created the NFS resource
(/var/lib/exports/DAONE), extracted the tarball, and followed these
directions

http://www.ovirt.org/How_to_clear_the_storage_domain_pool_config_of_an_exported_nfs_domain
to clear the storage domain. However when I try to add it, the
webadmin reports the following error:

Error while executing action New NFS Storage Domain: Error in
creating a Storage Domain. The selected storage path is not empty
(probably contains another Storage Domain). Either remove the
existing Storage Domain from this path, or change the Storage path).

Any suggestions? I wonder if anything changed for 3.3 that need to
be in the instructions?

looks like you should have done 'import existing export domain' to this
path instead of trying to create a new one.


indeed.
also note you could have just upgraded rather than export/import all the 
vms:
Bug 1009335 - TestOnly - upgrade path from Fedora 18 with oVirt 3.2 to 
Fedora 19 with oVirt 3.3




I also tried making another export domain (/var/lib/exports/export)
and dumping everything into the UUID under that, but no VMs showed
up to import.

#showmount -e f19-ovirt.mkesick.net
Export list for f19-ovirt.mkesick.net:
/var/lib/exports/storage 0.0.0.0/0.0.0.0
/var/lib/exports/iso 0.0.0.0/0.0.0.0
/var/lib/exports/export  0.0.0.0/0.0.0.0
/var/lib/exports/DAONE   0.0.0.0/0.0.0.0

[root@f19-ovirt dom_md]# cat metadata
CLASS=Backup
DESCRIPTION=DaOne
IOOPTIMEOUTSEC=1
LEASERETRIES=3
LEASETIMESEC=5
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=0
POOL_UUID=
REMOTE_PATH=localhost:/var/lib/exports/DAONE
ROLE=Regular
SDUUID=8e4f6fbd-b635-4f47-b113-ba146ee1c0cf
TYPE=NFS
VERSION=0




Re: [Users] Export Domain Upgrade

2013-09-29 Thread Nicholas Kesick

 
 Date: Mon, 30 Sep 2013 00:33:08 +0300
 From: ih...@redhat.com
 To: ofren...@redhat.com
 CC: cybertimber2...@hotmail.com; users@ovirt.org
 Subject: Re: [Users] Export Domain  Upgrade
 
 On 09/29/2013 12:15 PM, Omer Frenkel wrote:
 
 
  
 
  *From: *Nicholas Kesick cybertimber2...@hotmail.com
  *To: *oVirt Mailing List users@ovirt.org
  *Sent: *Wednesday, September 25, 2013 8:43:47 PM
  *Subject: *[Users] Export Domain  Upgrade
 
  This is the first time I've run through an 'upgrade', so I'm very
  new to export domains, and I'm having some trouble getting one
  connected to 3.3. I had hoped to add this to the wiki, but it hasn't
  been as straightforward as I thought.
 
  On my oVirt 3.2 install, I created a NFS export
  (/var/lib/exports/DAONE) and exported my VMs to it. I created a
  tarball of everything in DAONE, and then formatted the system and
  installed Fedora 19, and then oVirt 3.3. Created the NFS resource
  (/var/lib/exports/DAONE), extracted the tarball, and followed these
  directions
  
  http://www.ovirt.org/How_to_clear_the_storage_domain_pool_config_of_an_exported_nfs_domain
  to clear the storage domain. However when I try to add it, the
  webadmin reports the following error:
 
  Error while executing action New NFS Storage Domain: Error in
  creating a Storage Domain. The selected storage path is not empty
  (probably contains another Storage Domain). Either remove the
  existing Storage Domain from this path, or change the Storage path).
 
  Any suggestions? I wonder if anything changed for 3.3 that need to
  be in the instructions?
 
  looks like you should have done 'import existing export domain' to this
  path instead of trying create a new one.
 
 indeed.
 also note you could have just upgraded rather than export/import all the 
 vms:
 Bug 1009335 - TestOnly - upgrade path from Fedora 18 with oVirt 3.2 to 
 Fedora 19 with oVirt 3.3
 I should have responded to this sooner; I got it working thanks to some help 
 in IRC. I should have been using import domain, but I also found that the 
 path has to match what was in the metadata file. I was trying to use 
 hostname:/var/lib/exports/export, but I had used localhost:/var/lib/exports/export 
 in oVirt 3.2, so that caused some delay.
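The path-mismatch gotcha described above can be checked directly in the domain metadata. A sketch on a scratch copy (the real file sits at <mount>/<domain-uuid>/dom_md/metadata):

```shell
DOM=$(mktemp -d)   # stand-in for the export domain directory
mkdir -p "$DOM/dom_md"
printf 'REMOTE_PATH=localhost:/var/lib/exports/DAONE\n' > "$DOM/dom_md/metadata"
# The path given in the import dialog must match this value exactly:
grep '^REMOTE_PATH=' "$DOM/dom_md/metadata"
```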
 
  I also tried making another export domain (/var/lib/exports/export)
  and dumping everything into the UUID under that, but no VMs showed
  up to import.
 
  #showmount -e f19-ovirt.mkesick.net
  Export list for f19-ovirt.mkesick.net:
  /var/lib/exports/storage 0.0.0.0/0.0.0.0
  /var/lib/exports/iso 0.0.0.0/0.0.0.0
  /var/lib/exports/export  0.0.0.0/0.0.0.0
  /var/lib/exports/DAONE   0.0.0.0/0.0.0.0
 
  [root@f19-ovirt dom_md]# cat metadata
  CLASS=Backup
  DESCRIPTION=DaOne
  IOOPTIMEOUTSEC=1
  LEASERETRIES=3
  LEASETIMESEC=5
  LOCKPOLICY=
  LOCKRENEWALINTERVALSEC=5
  MASTER_VERSION=0
  POOL_UUID=
  REMOTE_PATH=localhost:/var/lib/exports/DAONE
  ROLE=Regular
  SDUUID=8e4f6fbd-b635-4f47-b113-ba146ee1c0cf
  TYPE=NFS
  VERSION=0
 
 
  


Re: [Users] Export domain was working... then... NFS, rpc.statd issues

2013-04-17 Thread Yeela Kaplan
Hi Nicolas,
Please send the full vdsm log. 
Also, what is the name of the previous mount and mount point,
and what is the name of the new mount and mount point?
Also, what are the exact steps you used to switch the two export domains? Have 
you done anything through the engine?

- Original Message -
 From: Nicolas Ecarnot nico...@ecarnot.net
 To: users users@ovirt.org
 Sent: Tuesday, April 16, 2013 1:18:40 PM
 Subject: [Users] Export domain was working... then... NFS, rpc.statd issues
 
 Hi,
 
 [oVirt 3.1, F17]
 My good old NFS export domain was OK, but getting too small for our needs.
 Then I unmounted it, created another bigger one somewhere else, and
 tried to mount the new one.
 
 Long things short, the NFS is not mounted and the relevant error is here:
 
 Thread-142::DEBUG::2013-04-16
 10:08:25,973::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo
 -n /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6
 serv-vm-adm7.xxx:/data/vmex
 /rhev/data-center/mnt/serv-vm-adm7.xxx:_data_vmex' (cwd None)
 Thread-142::ERROR::2013-04-16
 10:08:26,047::hsm::1932::Storage.HSM::(connectStorageServer) Could not
 connect to storageServer
 Traceback (most recent call last):
File /usr/share/vdsm/storage/hsm.py, line 1929, in connectStorageServer
File /usr/share/vdsm/storage/storageServer.py, line 256, in connect
File /usr/share/vdsm/storage/storageServer.py, line 179, in connect
File /usr/share/vdsm/storage/mount.py, line 190, in mount
File /usr/share/vdsm/storage/mount.py, line 206, in _runcmd
 MountError: (32, ;mount.nfs: rpc.statd is not running but is required
 for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks
 local, or start statd.\nmount.nfs: an incorrect mount option was
 specified\n)
 
 I confirm trying to manually mount the same from the node, and using the
 nolock option does work.
 
 While googling, I checked the /etc/services : no problem.
 I don't know what to change, what I did wrong, what to improve?
 
 --
 Nicolas Ecarnot
 [Very rare msg from me using HTML and colors... I'm ready to wear a tie ;)]


[Users] Export domain was working... then... NFS, rpc.statd issues

2013-04-16 Thread Nicolas Ecarnot

Hi,

[oVirt 3.1, F17]
My good old NFS export domain was OK, but getting too small for our needs.
Then I unmounted it, created another bigger one somewhere else, and 
tried to mount the new one.


Long story short, the NFS is not mounted and the relevant error is here:

Thread-142::DEBUG::2013-04-16 
10:08:25,973::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo 
-n /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 
serv-vm-adm7.xxx:/data/vmex 
/rhev/data-center/mnt/serv-vm-adm7.xxx:_data_vmex' (cwd None)
Thread-142::ERROR::2013-04-16 
10:08:26,047::hsm::1932::Storage.HSM::(connectStorageServer) Could not 
connect to storageServer

Traceback (most recent call last):
  File /usr/share/vdsm/storage/hsm.py, line 1929, in connectStorageServer
  File /usr/share/vdsm/storage/storageServer.py, line 256, in connect
  File /usr/share/vdsm/storage/storageServer.py, line 179, in connect
  File /usr/share/vdsm/storage/mount.py, line 190, in mount
  File /usr/share/vdsm/storage/mount.py, line 206, in _runcmd
MountError: (32, "mount.nfs: rpc.statd is not running but is required 
for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks 
local, or start statd.\nmount.nfs: an incorrect mount option was 
specified\n")


I can confirm that manually mounting the same export from the node with 
the nolock option does work.


While googling, I checked /etc/services: no problem there.
I don't know what to change, what I did wrong, or what to improve.
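A minimal fix sketch for the rpc.statd error. The service names below are assumptions for Fedora-era systemd (the unit providing rpc.statd has been called nfs-lock and later rpc-statd); adjust to your release:

```shell
# statd registers with rpcbind, so bring both up and make them persistent:
systemctl start rpcbind.service
systemctl start nfs-lock.service      # provides rpc.statd on Fedora 17-era
                                      # releases; newer ones use rpc-statd.service
systemctl enable rpcbind.service nfs-lock.service
# Then retry attaching the storage domain from the engine.
```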

--
Nicolas Ecarnot
[Very rare msg from me using HTML and colors... I'm ready to wear a tie ;)]


Re: [Users] Export domain

2013-03-15 Thread Dafna Ron
On 03/14/2013 05:55 PM, Shu Ming wrote:
 Dafna Ron:
 ovirt should not react in any way :)
 the size should be updated by queries that are sent to vdsm on domains
 status.
 I think vdsm domains do have their own meta-data except the lvm or
 posix file system meta data.
the export domain is an NFS domain and the size is not listed in its
metadata. Much like POSIX domains, which are based on file storage, if we
extend the size on the storage server itself the domain size will be
updated with no issues and no changes to the metadata.
You are correct regarding iSCSI domain types, which would have a problem
with extending a LUN on the storage side.
Extend for iSCSI is done via oVirt by editing the domain and adding
more LUNs, since extending the used LV on the storage side would cause
issues in the metadata.

 If you want to be extra careful you can always put the export domain in
 maintenance before the extend :)
 If the export domain come back to life, I think the domain meta data
 should be invalidated after the extend.
Since it is not an iSCSI-type domain, the extend done on the server side
to grow the domain should not be a problem.





 On 03/13/2013 09:45 PM, Jonathan Horne wrote:
 What would happen if the export domain (via NFS) was extended (say,
 while the server was doing a reboot?)  id like to extend the space of
 my export domain from 200 to 500GB (with lvm commands).

  
 How will ovirt react to finding the size of the export domain
 changed?  Pleased or unpleased?

  
 Thanks,

 jonathan


 

 This is a PRIVATE message. If you are not the intended recipient,
 please delete without copying and kindly advise us by e-mail of the
 mistake in delivery. NOTE: Regardless of content, this e-mail shall
 not operate to bind SKOPOS to any order or other contract unless
 pursuant to explicit written agreement or government initiative
 expressly permitting the use of e-mail for such purpose.







-- 
Dafna Ron


Re: [Users] Export domain

2013-03-15 Thread Itamar Heim

On 03/14/2013 05:55 PM, Shu Ming wrote:

Dafna Ron:

ovirt should not react in any way :)
the size should be updated by queries that are sent to vdsm on domains
status.

I think vdsm domains do have their own meta-data except the lvm or posix
file system meta data.


IIRC, the metadata no longer cares about storage domain size, so for block 
storage it is a matter of refreshing the hosts.
But in any case, since the export domain is NFS, I don't remember any 
issues with extending it.





If you want to be extra careful you can always put the export domain in
maintenance before the extend :)

If the export domain come back to life, I think the domain meta data
should be invalidated after the extend.





On 03/13/2013 09:45 PM, Jonathan Horne wrote:

What would happen if the export domain (via NFS) was extended (say,
while the server was doing a reboot?)  id like to extend the space of
my export domain from 200 to 500GB (with lvm commands).


How will ovirt react to finding the size of the export domain
changed?  Pleased or unpleased?


Thanks,

jonathan







Re: [Users] Export domain

2013-03-14 Thread Dafna Ron
ovirt should not react in any way :)
the size should be updated by queries that are sent to vdsm on domains
status.
If you want to be extra careful you can always put the export domain in
maintenance before the extend :)



On 03/13/2013 09:45 PM, Jonathan Horne wrote:

 What would happen if the export domain (via NFS) was extended (say,
 while the server was doing a reboot?)  id like to extend the space of
 my export domain from 200 to 500GB (with lvm commands). 

  

 How will ovirt react to finding the size of the export domain
 changed?  Pleased or unpleased?

  

 Thanks,

 jonathan


 




-- 
Dafna Ron


Re: [Users] Export domain

2013-03-14 Thread Shu Ming

Dafna Ron:

ovirt should not react in any way :)
the size should be updated by queries that are sent to vdsm on domains
status.
I think vdsm domains do have their own meta-data except the lvm or posix 
file system meta data.



If you want to be extra careful you can always put the export domain in
maintenance before the extend :)
If the export domain come back to life, I think the domain meta data 
should be invalidated after the extend.






On 03/13/2013 09:45 PM, Jonathan Horne wrote:

What would happen if the export domain (via NFS) was extended (say,
while the server was doing a reboot?)  id like to extend the space of
my export domain from 200 to 500GB (with lvm commands).

  


How will ovirt react to finding the size of the export domain
changed?  Pleased or unpleased?

  


Thanks,

jonathan










--
---
舒明 Shu Ming
Open Virtualization Engineering; CSTL, IBM Corp.
Tel: 86-10-82451626  Tieline: 9051626 E-mail: shum...@cn.ibm.com or 
shum...@linux.vnet.ibm.com
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, 
Beijing 100193, PRC




Re: [Users] Export domain

2013-03-14 Thread Shu Ming
So you have a block device on the NFS server to be extended from 200 to 
500GB? I think you should extend both the block device and the file 
system on the block device on the server side. oVirt doesn't save the 
size in the storage domain's metadata, and I believe the size is only 
stored on the engine server. The only issue is that before the engine 
refreshes its metadata, there is a window where the metadata is out of 
sync. Please let us know if it is successful.
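The server-side grow described above, as a sketch. The VG/LV names and the filesystem type are placeholders; the general point is that both the LV and the filesystem on it must be grown:

```shell
# On the NFS server, grow the LV backing the export, then the filesystem:
lvextend -L 500G /dev/vg_exports/lv_export
resize2fs /dev/vg_exports/lv_export   # ext4; use xfs_growfs <mountpoint> for XFS
# NFS clients pick up the new size on the next statfs; re-exporting is not
# required, though 'exportfs -ra' is harmless if you want to refresh exports.
exportfs -ra
```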


Jonathan Horne:


What would happen if the export domain (via NFS) was extended (say, 
while the server was doing a reboot?) id like to extend the space of 
my export domain from 200 to 500GB (with lvm commands).


How will ovirt react to finding the size of the export domain 
changed?  Pleased or unpleased?


Thanks,

jonathan









--
---
舒明 Shu Ming
Open Virtualization Engineering; CSTL, IBM Corp.
Tel: 86-10-82451626  Tieline: 9051626 E-mail: shum...@cn.ibm.com or 
shum...@linux.vnet.ibm.com
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, 
Beijing 100193, PRC



[Users] Export domain

2013-03-13 Thread Jonathan Horne
What would happen if the export domain (via NFS) was extended, say, while the 
server was doing a reboot? I'd like to extend the space of my export domain 
from 200 to 500GB (with LVM commands).

How will oVirt react to finding the size of the export domain changed? Pleased 
or unpleased?

Thanks,
jonathan




Re: [Users] export domain issues with latest nightly/GIT Master

2012-11-08 Thread Omer Frenkel
Hi, 
Thanks for your input, comments inline. 

- Original Message -

 From: Dead Horse deadhorseconsult...@gmail.com
 To: Eli Mesika emes...@redhat.com
 Cc: users users@ovirt.org
 Sent: Thursday, November 8, 2012 5:47:26 AM
 Subject: Re: [Users] export domain issues with latest nightly/GIT
 Master

 On issue 1 attached are log files narrowed down what was logged
 during the attempted import when the disks are not shown.

 On issue 2 I found this bug (
 https://bugzilla.redhat.com/show_bug.cgi?id=801112 ) which seems to
 have a similar footprint. This was working a few weeks back so I
 would classify this as a regression.

 On Wed, Nov 7, 2012 at 2:59 AM, Eli Mesika  emes...@redhat.com 
 wrote:

  - Original Message -
 
   From: Dead Horse  deadhorseconsult...@gmail.com 
 
   To:  users@ovirt.org   users@ovirt.org 
 
   Sent: Wednesday, November 7, 2012 3:12:07 AM
 
   Subject: [Users] export domain issues with latest nightly/GIT
   Master
 
  
 
  
 
   I have noted some export domain issues with the latest GIT Master.
  
   1) When importing a VM, the disks subtab of the VM to import will not
   show the disks. Instead it shows the blinking progress squares. This
   is persistent and it will not show information on the disks to
   be imported. This stopped working about 2 weeks back. Expected is to
   show the disks to import as well as to set provisioning type and
   destination storage domain.
 

  Hi, can you please attach engine/vdsm logs, as this is occurring
  generally when an exception is thrown in the middle of the import
  operation.
 

I understood from Gilad that this patch should solve the issue: 
http://gerrit.ovirt.org/#/c/9100 

  
 
   2) The import process does not respect thin provisioning.
   Importing a VM without checking the collapse snapshots box (there
   are actually no snapshots to collapse...) results in the imported
   disks being imported thickly provisioned. If the collapse snapshots
   box is checked, thin provisioning is respected and the disks import
   thin provisioned. This may be intertwined with the prior issue.
 

  Is that as regression or a new bug ?
 

  
 
   - DHC
 
  
 


Re: [Users] export domain issues with latest nightly/GIT Master

2012-11-07 Thread Eli Mesika


- Original Message -
 From: Dead Horse deadhorseconsult...@gmail.com
 To: users@ovirt.org users@ovirt.org
 Sent: Wednesday, November 7, 2012 3:12:07 AM
 Subject: [Users] export domain issues with latest nightly/GIT Master
 
 
 I have noted some export domain issues with builds from the latest
 GIT Master.
 
 1) When importing a VM, the disks subtab of the VM to import will not
 show the disks. Instead it shows the blinking progress squares. This
 is persistent and will not show information on the disks to
 be imported. This stopped working about 2 weeks back. Expected is to
 show the disks to import as well as set provisioning type and
 destination storage domain.

Hi, can you please attach engine/vdsm logs, as this occurs generally when an
exception is thrown in the middle of the import operation.
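For anyone else asked for the same thing: a minimal sketch for bundling the two logs Eli mentions. The paths are the stock oVirt locations (engine.log on the engine host, vdsm.log on the hypervisor); override the variables if your install differs.

```shell
# Sketch only: collect engine and vdsm logs into one archive for the list.
ENGINE_LOG=${ENGINE_LOG:-/var/log/ovirt-engine/engine.log}
VDSM_LOG=${VDSM_LOG:-/var/log/vdsm/vdsm.log}
OUT=${OUT:-ovirt-logs.tar.gz}
# --ignore-failed-read keeps tar going if one file is missing or unreadable
tar --ignore-failed-read -czf "$OUT" "$ENGINE_LOG" "$VDSM_LOG" 2>/dev/null || true
tar -tzf "$OUT" 2>/dev/null || true   # list what was actually captured
```

Run it on each host (engine and node) separately if they are different machines.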

 
 2) The import process does not respect thin provisioning.
 Importing a VM without checking the collapse snapshots box (there
 are actually no snapshots to collapse...) results in the imported
 disks being imported thickly provisioned. If the collapse snapshots
 box is checked thin provisioning is respected and the disks import
 thin provisioned. This may be intertwined with the prior issue.

Is that a regression or a new bug?

 
 - DHC
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] export domain issues with latest nightly/GIT Master

2012-11-07 Thread Dead Horse
On issue 1, attached are log files narrowed down to what was logged during the
attempted import when the disks are not shown.

On issue 2, I found this bug (
https://bugzilla.redhat.com/show_bug.cgi?id=801112) which seems to have a
similar footprint. This was working a few weeks back, so I would classify
this as a regression.
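A quick shell check for the thin-vs-thick symptom (a sketch, not oVirt tooling): a thin (sparse) image allocates far fewer blocks on disk than its apparent size. Demonstrated here on a throwaway sparse file; run the same two stat calls against a disk image under the storage domain's images/<disk-id>/ directory (the usual NFS domain layout) to see whether an import came back thin or thick.

```shell
# Compare apparent size with the blocks actually allocated on disk.
img=$(mktemp)
truncate -s 1G "$img"                          # sparse: 1 GiB apparent, almost nothing allocated
apparent=$(stat -c %s "$img")                  # logical (apparent) size in bytes
allocated=$(( $(stat -c %b "$img") * $(stat -c %B "$img") ))  # bytes on disk
echo "apparent=$apparent allocated=$allocated"
rm -f "$img"
```

If "allocated" is close to "apparent" for an image that should be thin, the import thickened it.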



On Wed, Nov 7, 2012 at 2:59 AM, Eli Mesika emes...@redhat.com wrote:



 - Original Message -
  From: Dead Horse deadhorseconsult...@gmail.com
  To: users@ovirt.org users@ovirt.org
  Sent: Wednesday, November 7, 2012 3:12:07 AM
  Subject: [Users] export domain issues with latest nightly/GIT Master
 
 
  I have noted some export domain issues with builds from the latest
  GIT Master.
 
  1) When importing a VM, the disks subtab of the VM to import will not
  show the disks. Instead it shows the blinking progress squares. This
  is persistent and will not show information on the disks to
  be imported. This stopped working about 2 weeks back. Expected is to
  show the disks to import as well as set provisioning type and
  destination storage domain.

 Hi, can you please attach engine/vdsm logs, as this occurs generally
 when an exception is thrown in the middle of the import operation.

 
  2) The import process does not respect thin provisioning.
  Importing a VM without checking the collapse snapshots box (there
  are actually no snapshots to collapse...) results in the imported
  disks being imported thickly provisioned. If the collapse snapshots
  box is checked thin provisioning is respected and the disks import
  thin provisioned. This may be intertwined with the prior issue.

 Is that a regression or a new bug?

 
  - DHC
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 

2012-11-07 21:22:45,978 INFO  [org.ovirt.engine.core.bll.LoginAdminUserCommand] (ajp--127.0.0.1-8702-11) Checking if user admin@internal is an admin, result true
2012-11-07 21:22:45,980 INFO  [org.ovirt.engine.core.bll.LoginAdminUserCommand] (ajp--127.0.0.1-8702-11) Running command: LoginAdminUserCommand internal: false.
2012-11-07 21:22:46,139 ERROR [org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (ajp--127.0.0.1-8702-1) Failed to decrypt Data must start with zero
2012-11-07 21:22:46,140 ERROR [org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (ajp--127.0.0.1-8702-1) Failed to decrypt value for property LocalAdminPassword will be used encrypted value
2012-11-07 21:22:46,162 ERROR [org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (ajp--127.0.0.1-8702-1) Failed to decrypt Data must start with zero
2012-11-07 21:22:46,162 ERROR [org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (ajp--127.0.0.1-8702-1) Failed to decrypt value for property LocalAdminPassword will be used encrypted value
2012-11-07 21:22:46,183 ERROR [org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (ajp--127.0.0.1-8702-1) Failed to decrypt Data must start with zero
2012-11-07 21:22:46,183 ERROR [org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (ajp--127.0.0.1-8702-1) Failed to decrypt value for property LocalAdminPassword will be used encrypted value
2012-11-07 21:22:46,204 ERROR [org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (ajp--127.0.0.1-8702-1) Failed to decrypt Data must start with zero
2012-11-07 21:22:46,205 ERROR [org.ovirt.engine.core.dal.dbbroker.generic.DBConfigUtils] (ajp--127.0.0.1-8702-1) Failed to decrypt value for property LocalAdminPassword will be used encrypted value
2012-11-07 21:22:51,085 ERROR [org.ovirt.engine.core.ServletUtils] (ajp--127.0.0.1-8702-9) Can't read file /usr/share/ovirt-engine/docs/DocumentationPath.csv for request /docs/DocumentationPath.csv, will send a 404 error response.
2012-11-07 21:23:06,285 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand] (ajp--127.0.0.1-8702-11) START, GetVmsInfoVDSCommand( storagePoolId = f90a0d1c-06ca-11e2-a05b-00151712f280, ignoreFailoverLimit = false, compatabilityVersion = null, storageDomainId = 1130b87a-3b34-45d6-8016-d435825c68ef, vmIdList = null), log id: 4863ae13
2012-11-07 21:23:06,336 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetVmsInfoVDSCommand] (ajp--127.0.0.1-8702-11) FINISH, GetVmsInfoVDSCommand, log id: 4863ae13
2012-11-07 21:23:12,622 INFO  [org.ovirt.engine.core.bll.VdsLoadBalancer] (QuartzScheduler_Worker-84) VdsLoadBalancer: Starting load balance for cluster: Horde, algorithm: EvenlyDistribute.
2012-11-07 21:23:12,625 INFO  [org.ovirt.engine.core.bll.VdsLoadBalancer] (QuartzScheduler_Worker-84) VdsLoadBalancer: high util: 75, low util: 0, duration: 2, threashold: 80
2012-11-07 21:23:12,695 INFO  [org.ovirt.engine.core.bll.VdsLoadBalancingAlgorithm] (QuartzScheduler_Worker-84) VdsLoadBalancer: number of relevant vdss (no migration, no pending): 1.
2012-11-07 21:23:12,698 INFO

[Users] export domain issues with latest nightly/GIT Master

2012-11-06 Thread Dead Horse
I have noted some export domain issues with builds from the latest GIT
Master.

1) When importing a VM, the disks subtab of the VM to import will not show
the disks. Instead it shows the blinking progress squares. This is
persistent and will not show information on the disks to be
imported. This stopped working about 2 weeks back. Expected is to show the
disks to import as well as set provisioning type and destination storage
domain.

2) The import process does not respect thin provisioning. Importing a
VM without checking the collapse snapshots box (there are actually no
snapshots to collapse...) results in the imported disks being imported
thickly provisioned. If the collapse snapshots box is checked thin
provisioning is respected and the disks import thin provisioned. This may
be intertwined with the prior issue.
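For reference, the same import can be driven through the REST API instead of the web UI. A hedged sketch of the action body (the domain/cluster names are placeholders, and the collapse_snapshots element is the API counterpart of the UI checkbox in later API versions; check your engine's /api RSDL to confirm it is accepted by your build):

```xml
<!-- POST /api/storagedomains/<export-domain-id>/vms/<vm-id>/import -->
<action>
    <cluster>
        <name>Default</name>
    </cluster>
    <storage_domain>
        <name>data-domain</name>
    </storage_domain>
    <!-- true collapses the snapshot chain into a single volume on import -->
    <collapse_snapshots>true</collapse_snapshots>
</action>
```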

 - DHC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users