Re: [Users] latest vdsm cannot read ib device speeds causing storage attach fail

2013-01-24 Thread Mark Wu

Great work!
The default action for SIGCHLD is to ignore it, so no problems were
reported before the zombie reaper installed a signal handler.
But I still have one question: the Python multiprocessing.manager code
runs in a new thread, and according to the implementation of Python's
signal module, only the main thread can receive the signal.

So how is the signal delivered to the server thread?
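For context, CPython really does restrict signal handling to the main
thread; even installing a handler from a worker thread fails outright. A
minimal sketch (standard library only, Python 2 syntax to match the code
in this thread):

    import signal
    import threading

    def try_install():
        # CPython runs signal handlers only in the main thread, and even
        # installing one from a worker thread raises ValueError.
        try:
            signal.signal(signal.SIGCHLD, lambda signum, frame: None)
        except ValueError, e:
            print 'worker thread:', e

    t = threading.Thread(target=try_install)
    t.start()
    t.join()

The kernel, however, may deliver a process-directed signal to any thread
that has it unblocked, which is how the manager's worker thread sees its
recv() interrupted with EINTR even though the Python-level handler runs
in the main thread.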


On Fri 25 Jan 2013 12:30:39 PM CST, Royce Lv wrote:


Hi,
   I reproduced this issue, and I believe it's a python bug.
   1. How to reproduce:
   with the test case attached, put it under /usr/share/vdsm/tests/,
run #./run_tests.sh superVdsmTests.py
   and this issue will be reproduced.
   2. Log analysis:
   We notice a strange pattern in this log: connectStorageServer is
called twice; the first supervdsm call succeeds, the second fails
because of validateAccess().
   That is because the first validateAccess call returns normally and
leaves a child process behind; when the second validateAccess call
arrives, just as the multiprocessing manager is receiving the method
message, the first child exits and SIGCHLD arrives. This signal
interrupts the manager's receive system call. Python's managers.py
should handle EINTR and retry recv(), as we do in vdsm, but it does
not, so the second call raises an error.
>Thread-18::DEBUG::2013-01-22 
10:41:03,570::misc::85::Storage.Misc.excCmd::() '/usr/bin/sudo -n 
/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 
192.168.0.1:/ovirt/silvermoon /rhev/data-center/mnt/192.168.0.1:_ovirt_silvermoon' (cwd 
None)
>Thread-18::DEBUG::2013-01-22 
10:41:03,607::misc::85::Storage.Misc.excCmd::() '/usr/bin/sudo -n 
/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 
192.168.0.1:/ovirt/undercity /rhev/data-center/mnt/192.168.0.1:_ovirt_undercity' (cwd 
None)
>Thread-18::ERROR::2013-01-22 
10:41:03,627::hsm::2215::Storage.HSM::(connectStorageServer) Could not connect to 
storageServer
>Traceback (most recent call last):
>  File "/usr/share/vdsm/storage/hsm.py", line 2211, in connectStorageServer
>conObj.connect()
>  File "/usr/share/vdsm/storage/storageServer.py", line 303, in connect
>return self._mountCon.connect()
>  File "/usr/share/vdsm/storage/storageServer.py", line 209, in connect
>fileSD.validateDirAccess(self.getMountObj().getRecord().fs_file)
>  File "/usr/share/vdsm/storage/fileSD.py", line 55, in validateDirAccess
>(os.R_OK | os.X_OK))
>  File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
>return callMethod()
>  File "/usr/share/vdsm/supervdsm.py", line 72, in 
>**kwargs)
>  File "", line 2, in validateAccess
>  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in 
_callmethod
>raise convert_to_error(kind, result)
The vdsm side receives a RemoteError because the supervdsm server's
multiprocessing manager raises an error with kind '#TRACEBACK':
  >RemoteError:
The upper part is the traceback from the client side; the following
part is from the server side:
>---
>Traceback (most recent call last):
>  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 214, in 
serve_client
>request = recv()
>IOError: [Errno 4] Interrupted system call
>---

Corresponding Python source code, managers.py (server side):

    def serve_client(self, conn):
        '''
        Handle requests from the proxies in a particular process/thread
        '''
        util.debug('starting server thread to service %r',
                   threading.current_thread().name)
        recv = conn.recv
        send = conn.send
        id_to_obj = self.id_to_obj
        while not self.stop:
            try:
                methodname = obj = None
                request = recv()    # <-- this line is interrupted by SIGCHLD
                ident, methodname, args, kwds = request
                obj, exposed, gettypeid = id_to_obj[ident]
                if methodname not in exposed:
                    raise AttributeError(
                        'method %r of %r object is not in exposed=%r' %
                        (methodname, type(obj), exposed)
                        )
                function = getattr(obj, methodname)
                try:
                    res = function(*args, **kwds)
                except Exception, e:
                    msg = ('#ERROR', e)
                else:
                    typeid = gettypeid and gettypeid.get(methodname, None)
                    if typeid:
                        rident, rexposed = self.create(conn, typeid, res)
                        token = Token(typeid, self.address, rident)
                        msg = ('#PROXY', (rexposed, token))
                    else:
                        msg = ('#RETURN', res)
            except AttributeError:
                if methodname is None:
                    msg = ('#TRACEBACK', format_exc())
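A minimal sketch of the EINTR-retry fix described above, in the spirit of
what vdsm does for its own calls (illustrative only; the helper name is
made up):

    import errno

    def retry_on_eintr(func, *args, **kwargs):
        # Retry a blocking call that a signal interrupted (EINTR)
        # instead of letting IOError/OSError propagate to the caller.
        while True:
            try:
                return func(*args, **kwargs)
            except (IOError, OSError), e:
                if e.errno != errno.EINTR:
                    raise

    # In serve_client's loop this would read:
    # request = retry_on_eintr(recv)    # instead of: request = recv()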

Re: [Users] ovirt node

2013-01-24 Thread Alon Bar-Lev


- Original Message -
> From: "David Michael" 
> To: users@ovirt.org
> Sent: Tuesday, January 22, 2013 11:20:32 PM
> Subject: [Users] ovirt node
> 
> 
> 
> hi
> 
> I cannot add the oVirt node to the oVirt engine and I got this log
> message
> [org.ovirt.engine.core.bll.AddVdsCommand] (http-0.0.0.0-8080-3)
> CanDoAction of action AddVds failed.
> Reasons:VDS_CANNOT_CONNECT_TO_SERVER,VAR__ACTION__ADD,VAR__TYPE__HOST

Full log file please.


Re: [Users] best disk type for WIn XP guests

2013-01-24 Thread Vadim Rozenfeld
On Wednesday, January 23, 2013 06:17:16 PM Gianluca Cecchi wrote:
> Hello,
> I have a Win XP guest configured with one IDE disk.
> I would like to switch to virtio. Is it supported/usable for Win XP as a
> disk type on oVirt?
> What else are other people using, apart from IDE?
> My attempt is to add a second 1 GB disk configured as virtio and then,
> if successful, change the disk type for the first disk too.
> But when powering up the guest it finds new hardware for the second
> disk; I point it to the directory
> WXP\X86 of the virtio-win-1.1.16.vfd floppy image
> 
> It finds the viostor.xxx files but at the end it fails installing the
> driver (see
> https://docs.google.com/file/d/0BwoPbcrMv8mvMUQ2SWxYZWhSV0E/edit
> )
> 
> Any help/suggestion is welcome.
Error code 39 means that the OS cannot load the device driver.
On 32-bit platforms it usually happens with corrupted installation
media or a platform/architecture mismatch.
Vadim.
> 
> Gianluca


[Users] ovirt node

2013-01-24 Thread David Michael
hi

I cannot add the oVirt node to the oVirt engine and I got this log message

[org.ovirt.engine.core.bll.AddVdsCommand] (http-0.0.0.0-8080-3)
CanDoAction of action AddVds failed.
Reasons:VDS_CANNOT_CONNECT_TO_SERVER,VAR__ACTION__ADD,VAR__TYPE__HOST


[Users] Community feedback on the new UI-plugin Framework

2013-01-24 Thread Oved Ourfalli
Hey all,

We had an oVirt workshop this week, which included a few sessions about the new 
oVirt UI Plugin framework, including a Hackathon and a BOF session.

I've gathered some feedback we got from the different participants about the 
framework, and what they would like to see in its future.

1. People liked the fact that it is a simple framework, allowing you to build nice 
extensions rapidly, without the need to know complex technologies (simple 
JavaScript knowledge is all you need).

2. People want the framework to provide tools for adding UI components 
(main/sub tabs, dialogs, etc.) that aren't URL based, but are based on 
components we currently have in oVirt, such as grids, key-value pairs (like 
the General sub-tab), action buttons in these custom tabs, etc.

The main reason for that is to easily develop a plugin with an oVirt-like 
look-and-feel. Chris Morrissey from NetApp showed a very nice plugin he wrote 
that did have an oVirt-like look-and-feel, but it wasn't easy, and it 
required him to develop something specific for the plugin to interact with, 
in the 3rd-party application (something similar to the work we did in the 
oVirt-Foreman UI-plugin).

3. Support adding tasks to the system - plugins may trigger asynchronous tasks 
behind the scenes, both oVirt and external ones. oVirt tasks and their progress 
will be reflected in the task management view, but if the flows contain 
external tasks as well, those would be hard to track through the oVirt UI.

4. Plugin management
* The ability to see which plugins are installed, install new plugins and 
remove existing ones.
* Change the plugin configuration through webadmin.
* Distinguish between public plugin configuration entries (entries the user 
can change) and private ones (entries the user can't).

I guess this point will be relevant for engine plugins as well (once 
support for such plugins is available), so we should consider providing a 
similar solution for both. Also, Chris pointed out that it should be taken into 
consideration when working on supporting HA-oVirt-engine, as plugins 
are a vital part of the oVirt environment.

If you find the feedback above accurate, or you have other comments that weren't 
mentioned here, please share them with us!

Thank you,
Oved

P.S.:
I guess the slides will be uploaded sometime next week (I guess someone would 
have asked soon... so now you have your answer :-) )


Re: [Users] Change locale for VNC console

2013-01-24 Thread Adrian Gibanel
- Mensaje original -

> De: "Nicolas Ecarnot" 
> Para: users@ovirt.org
> Enviados: Lunes, 21 de Enero 2013 8:10:45
> Asunto: Re: [Users] Change locale for VNC console

> Le 19/01/2013 15:19, Adrian Gibanel a écrit :
> > Salut Nicolas,

> Ola Adrian,

> >
> > Can you run:
> >
> > rpm -qa | grep ovirt
> >
> > so that we know which ovirt version you're using?

> [root@xxx etc]# rpm -qa | grep ovirt
> ovirt-engine-restapi-3.1.0-4.fc17.noarch
> ovirt-log-collector-3.1.0-0.git10d719.fc17.noarch
> ovirt-engine-notification-service-3.1.0-4.fc17.noarch
> ovirt-release-fedora-5-2.noarch
> ovirt-iso-uploader-3.1.0-0.git1841d9.fc17.noarch
> ovirt-engine-userportal-3.1.0-4.fc17.noarch
> ovirt-engine-config-3.1.0-4.fc17.noarch
> ovirt-engine-webadmin-portal-3.1.0-4.fc17.noarch
> ovirt-image-uploader-3.1.0-0.git9c42c8.fc17.noarch
> ovirt-engine-setup-3.1.0-4.fc17.noarch
> ovirt-engine-tools-common-3.1.0-4.fc17.noarch
> ovirt-engine-3.1.0-4.fc17.noarch
> ovirt-engine-dbscripts-3.1.0-4.fc17.noarch
> ovirt-engine-backend-3.1.0-4.fc17.noarch
> ovirt-engine-genericapi-3.1.0-4.fc17.noarch
> ovirt-engine-sdk-3.1.0.4-1.fc17.noarch
> [root@xxx etc]#

> Apart from the gtk that I had to downgrade (iso import issue - see
> recent mailing list msgs), they look like the same.

You hit this bug: 

https://bugzilla.redhat.com/show_bug.cgi?id=869457 

Keith, whom I've CC'd, said in another email that this problem was solved in git 
but that no package had been released. 

Keith, can you update the bug with the commit where it gets solved, or something 
similar? 

Thank you. 

-- 

Adrián Gibanel 
I.T. Manager 

+34 675 683 301 
www.btactic.com 



Re: [Users] win 7 vm creation:

2013-01-24 Thread Gianluca Cecchi
On Thu, Jan 24, 2013 at 7:07 PM, Gianluca Cecchi  wrote:
>
> My latest engine-upgrade was run on 14th of January
> If I want to stay at the moment on this release version should I only
> do upgrade of db after patch?

Never mind,
I decided to update to 3.2.0-1.20130123.git2ad65d0, which contains the
fix, and I was able to start the Windows 7 VM.

Gianluca


Re: [Users] latest vdsm cannot read ib device speeds causing storage attach fail

2013-01-24 Thread Dead Horse
Tried some manual edits to SD states in the dbase. The net result was I was
able to get a node active. However, as reconstructing the master storage
domain kicked in, it was unable to do so. It was also not able to recognize
the other SDs, with failure modes similar to the unrecognized master above.
Guessing the newer VDSM version borked things pretty good. So, this being a
test harness whose SD data is not worth saving, I just smoked all the SDs,
ran engine-cleanup and started fresh, and all is well now.

- DHC


On Thu, Jan 24, 2013 at 11:53 AM, Dead Horse
wrote:

> This test harness setup here consists of two servers tied to NFS storage
> via IB (NFS mounts are via IPoIB; NFS over RDMA is disabled). All storage
> domains are NFS. The issue does occur with both servers when attempting
> to bring them out of maintenance mode, with the end result being
> non-operational due to storage attach fail.
>
> The current issue is now that, with a working older commit, the master
> storage domain is "stuck" in state "locked", and I see the secondary issue
> wherein VDSM cannot seem to find or contact the master storage domain even
> though it is there. I can mount the master storage domain manually, and
> all content appears to be accounted for accordingly on either host.
>
> Here is the current contents of the master storage domain metadata:
> CLASS=Data
> DESCRIPTION=orgrimmar
> IOOPTIMEOUTSEC=1
> LEASERETRIES=3
> LEASETIMESEC=5
> LOCKPOLICY=
> LOCKRENEWALINTERVALSEC=5
> MASTER_VERSION=417
> POOL_DESCRIPTION=Azeroth
>
> POOL_DOMAINS=0549ee91-4498-4130-8c23-4c173b5c0959:Active,d8b55105-c90a-465d-9803-8130da9a671e:Active,67534cca-1327-462a-b455-a04464084b31:Active,c331a800-839d-4d23-9059-870a7471240a:Active,f8984825-ff8d-43d9-91db-0d0959f8bae9:Active,c434056e-96be-4702-8beb-82a408a5c8cb:Active,f7da73c7-b5fe-48b6-93a0-0c773018c94f:Active,82e3b34a-6f89-4299-8cd8-2cc8f973a3b4:Active,e615c975-6b00-469f-8fb6-ff58ae3fdb2c:Active,5bc86532-55f7-4a91-a52c-fad261f322d5:Active,1130b87a-3b34-45d6-8016-d435825c68ef:Active
> POOL_SPM_ID=1
> POOL_SPM_LVER=6
> POOL_UUID=f90a0d1c-06ca-11e2-a05b-00151712f280
> REMOTE_PATH=192.168.0.1:/ovirt/orgrimmar
> ROLE=Master
> SDUUID=67534cca-1327-462a-b455-a04464084b31
> TYPE=NFS
> VERSION=3
> _SHA_CKSUM=1442bb078fd8c9468d241ff141e9bf53839f0721
>
> So now, with the older working commit, I get the
> "StoragePoolMasterNotFound: Cannot find master domain" error (prior details
> above from when I worked backwards to that commit)
>
> This is odd as the nodes can definitely reach the master storage domain:
>
> showmount from one of the el6.3 nodes:
> [root@kezan ~]# showmount -e 192.168.0.1
> Export list for 192.168.0.1:
> /ovirt/orgrimmar    192.168.0.0/16
>
> mount/ls from one of the nodes:
> [root@kezan ~]# mount 192.168.0.1:/ovirt/orgrimmar /mnt
> [root@kezan ~]# ls -al /mnt/67534cca-1327-462a-b455-a04464084b31/dom_md/
> total 1100
> drwxr-xr-x 2 vdsm kvm    4096 Jan 24 11:44 .
> drwxr-xr-x 5 vdsm kvm    4096 Oct 19 16:16 ..
> -rw-rw---- 1 vdsm kvm 1048576 Jan 19 22:09 ids
> -rw-rw---- 1 vdsm kvm       0 Sep 25 00:46 inbox
> -rw-rw---- 1 vdsm kvm 2097152 Jan 10 13:33 leases
> -rw-r--r-- 1 vdsm kvm     903 Jan 10 13:39 metadata
> -rw-rw---- 1 vdsm kvm       0 Sep 25 00:46 outbox
>
>
> - DHC
>
>
>
> On Thu, Jan 24, 2013 at 7:51 AM, ybronhei  wrote:
>
>> On 01/24/2013 12:44 AM, Dead Horse wrote:
>>
>>> I narrowed down the commit where the originally reported issue crept
>>> in:
>>> commit fc3a44f71d2ef202cff18d7203b9e4165b546621. Building and testing
>>> with
>>>
>>> this commit or subsequent commits yields the original issue.
>>>
>> Interesting.. it might be related to this commit and we're trying to
>> reproduce it.
>>
>> Did you try to remove that code and run again? Does it work without the
>> addition of the zombie reaper?
>> Does the connectivity to the storage work well? When you run 'ls' on the
>> mounted folder, do you see the files without a long delay? It might be
>> related to too long a timeout when validating access to this mount..
>> we work on that.. any additional info can help
>>
>> Thanks.
>>
>>
>>> - DHC
>>>
>>>
>>> On Wed, Jan 23, 2013 at 3:56 PM, Dead Horse
>>> wrote:
>>>
 Indeed reverting back to an older vdsm clears up the above issue.
 However
 now the issue I see is:
 Thread-18::ERROR::2013-01-23
 15:50:42,885::task::833::TaskManager.Task::(_setError)
 Task=`08709e68-bcbc-40d8-843a-d69d4df40ac6`::Unexpected error

 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 840, in _run
     return fn(*args, **kargs)
   File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
     res = f(*args, **kwargs)
   File "/usr/share/vdsm/storage/hsm.py", line 923, in
 connectStoragePool
     masterVersion, options)
   File "/usr/share/vdsm/storage/hsm.py", line 970, in
 _connectStoragePool
     res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
>>>

Re: [Users] Trouble building Ovirt from source - "No rule to make target `install_tools'. Stop."

2013-01-24 Thread Juan Hernandez

On 01/24/2013 04:20 PM, Yuval M wrote:

Hi,
I'm installing oVirt 3.1 on Fedora using this guide:
http://www.ovirt.org/Building_oVirt_engine#Deploying_engine-config_.26_engine-manage-domains

and I'm getting the error in the subject from make.
There is indeed no rule for install_tools in the Makefile.

What am I missing?


Those instructions are out of date; use "make install". That installs 
the files, but you will still need some changes to make the engine work:


1. Create the ovirt user (the engine runs by default as this user, 
unless you change the /etc/sysconfig/ovirt-engine file and add the 
ENGINE_USER and ENGINE_GROUP parameters):


  useradd ovirt

2. Create (mkdir -p ...) and change the ownership of the directories 
that the engine needs to own to ovirt:ovirt (chown ovirt:ovirt ...):


  /etc/ovirt-engine
  /var/log/ovirt-engine
  /var/lock/ovirt-engine
  /var/lib/ovirt-engine/content
  /var/lib/ovirt-engine/deployments
  /var/tmp/ovirt-engine
  /var/cache/ovirt-engine

3. Enable the HTTP connector in the engine (the default is to enable 
only AJP, and that doesn't work without Apache as frontend) adding the 
following to the /etc/sysconfig/ovirt-engine file:


  ENGINE_PROXY_ENABLED=false
  ENGINE_HTTP_ENABLED=true
  ENGINE_HTTP_PORT=8700
  ENGINE_HTTPS_ENABLED=false
  ENGINE_AJP_ENABLED=false

4. Configure database connection details (the default in development 
environments is to use the postgres user and the trust mode) adding this 
to /etc/sysconfig/ovirt-engine:


  ENGINE_DB_USER=postgres
  ENGINE_DB_PASSWORD=

5. Make sure that you have the PostgreSQL JDBC driver installed (rpm -q 
postgresql-jdbc) and install it if needed (yum install postgresql-jdbc).


6. Now you can start the engine running the engine-service script:

  engine-service start

Look at the system log (the file /var/log/messages) and the engine logs 
(the file /var/log/ovirt-engine/server.log and the other files under 
/var/log/ovirt-engine) for errors.


7. Connect to http://localhost:8700 and you should be able to login with 
user admin and letmein! as password.


Note that I am assuming that you already created the database, and that 
you want to use this installation for development. If you are looking 
for an production installation I suggest using the RPMs.


Also I tested this with the latest source from the repository, it will 
not work with older versions.
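
The directory and package prerequisites above (steps 1 to 5) can be
sanity-checked with a short script before starting the engine; a minimal
sketch, assuming the defaults described in this mail:

    import os
    import pwd
    import subprocess

    DIRS = ['/etc/ovirt-engine', '/var/log/ovirt-engine',
            '/var/lock/ovirt-engine', '/var/lib/ovirt-engine/content',
            '/var/lib/ovirt-engine/deployments', '/var/tmp/ovirt-engine',
            '/var/cache/ovirt-engine']

    for d in DIRS:
        # Steps 1-2: directories must exist and be owned by the ovirt user.
        if not os.path.isdir(d):
            print 'missing directory:', d
        elif pwd.getpwuid(os.stat(d).st_uid).pw_name != 'ovirt':
            print 'not owned by ovirt:', d

    # Step 5: the PostgreSQL JDBC driver must be installed.
    if subprocess.call(['rpm', '-q', 'postgresql-jdbc']) != 0:
        print 'postgresql-jdbc is not installed'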




Re: [Users] win 7 vm creation:

2013-01-24 Thread Gianluca Cecchi
On Sun, Jan 13, 2013 at 1:32 PM, Eli Mesika wrote:
>
>
> - Original Message -
>> From: "Livnat Peer"
>> To: "Gianluca Cecchi"
>> Cc: "users" 
>> Sent: Sunday, January 13, 2013 12:27:57 PM
>> Subject: Re: [Users] win 7 vm creation:
>>
>> Hi Gianluca,
>> From looking in the logs it looks like you had some DB upgrade issue,
>> or
>> DB clean install issue.
>>
>> "Caused by: org.postgresql.util.PSQLException: The column name
>> count_threads_as_cores was not found in this ResultSet"
>>
>> Adding Doron to take a look, and I think Einav also had some db
>> upgrade issue last week...
>
> Yes, she had, there was a bug on upgrade for pg 9.x (f18 defaulted to that)
> Anyway, this was merged already upstream
> http://gerrit.ovirt.org/#/c/10778/
> You can fix and rerun the upgrade again

Hello,
now I have a setup with engine and host on separate servers, both
Fedora 18, and with engine rpm nightly as of 3.2.0-1.20130113.gitc954518
I again get this problem with a W7 VM.

If I select run on any host I get
w7test:
Cannot run VM. There are no available running Hosts in the Host Cluster.

If I select this particular (and currently only) host I get
w7test:

Cannot run VM. VM is pinned to a specific Host but cannot run on it due to:
Invalid Host status or not enough free resources on it.
You may free resources on the Host by migrating other VMs.
Cannot run VM. There are no available running Hosts in the Host Cluster.


My
03_02_0100_add_cpu_thread_columns.sql
contains
"
select fn_db_add_column('vds_groups', 'count_threads_as_cores',
'BOOLEAN NOT NULL DEFAULT FALSE');

select fn_db_add_column('vds_dynamic', 'cpu_threads', 'INTEGER');
"

So what should I do?

My latest engine-upgrade was run on 14th of January
If I want to stay at the moment on this release version should I only
do upgrade of db after patch?

so
- stop engine
# systemctl stop ovirt-engine.service

cd /usr/share/ovirt-engine/dbscripts
- patch drop_old_uuid_functions.sql

- backup DB
./backup.sh

- upgrade DB again
./upgrade.sh

?

Thanks
Gianluca


Re: [Users] latest vdsm cannot read ib device speeds causing storage attach fail

2013-01-24 Thread Dead Horse
This test harness setup here consists of two servers tied to NFS storage
via IB (NFS mounts are via IPoIB; NFS over RDMA is disabled). All storage
domains are NFS. The issue does occur with both servers when attempting
to bring them out of maintenance mode, with the end result being
non-operational due to storage attach fail.

The current issue is now that, with a working older commit, the master
storage domain is "stuck" in state "locked", and I see the secondary issue
wherein VDSM cannot seem to find or contact the master storage domain even
though it is there. I can mount the master storage domain manually, and
all content appears to be accounted for accordingly on either host.

Here is the current contents of the master storage domain metadata:
CLASS=Data
DESCRIPTION=orgrimmar
IOOPTIMEOUTSEC=1
LEASERETRIES=3
LEASETIMESEC=5
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=417
POOL_DESCRIPTION=Azeroth
POOL_DOMAINS=0549ee91-4498-4130-8c23-4c173b5c0959:Active,d8b55105-c90a-465d-9803-8130da9a671e:Active,67534cca-1327-462a-b455-a04464084b31:Active,c331a800-839d-4d23-9059-870a7471240a:Active,f8984825-ff8d-43d9-91db-0d0959f8bae9:Active,c434056e-96be-4702-8beb-82a408a5c8cb:Active,f7da73c7-b5fe-48b6-93a0-0c773018c94f:Active,82e3b34a-6f89-4299-8cd8-2cc8f973a3b4:Active,e615c975-6b00-469f-8fb6-ff58ae3fdb2c:Active,5bc86532-55f7-4a91-a52c-fad261f322d5:Active,1130b87a-3b34-45d6-8016-d435825c68ef:Active
POOL_SPM_ID=1
POOL_SPM_LVER=6
POOL_UUID=f90a0d1c-06ca-11e2-a05b-00151712f280
REMOTE_PATH=192.168.0.1:/ovirt/orgrimmar
ROLE=Master
SDUUID=67534cca-1327-462a-b455-a04464084b31
TYPE=NFS
VERSION=3
_SHA_CKSUM=1442bb078fd8c9468d241ff141e9bf53839f0721
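The metadata is a flat key=value file, so it can be inspected
programmatically; a throwaway sketch (the path is an example):

    # Parse a storage domain's dom_md/metadata into a dict.
    meta = {}
    for line in open('dom_md/metadata'):
        line = line.strip()
        if line and '=' in line:
            key, value = line.split('=', 1)
            meta[key] = value
    print meta.get('ROLE'), meta.get('MASTER_VERSION'), meta.get('POOL_UUID')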

So now, with the older working commit, I get the
"StoragePoolMasterNotFound: Cannot find master domain" error (prior details
above from when I worked backwards to that commit)

This is odd as the nodes can definitely reach the master storage domain:

showmount from one of the el6.3 nodes:
[root@kezan ~]# showmount -e 192.168.0.1
Export list for 192.168.0.1:
/ovirt/orgrimmar    192.168.0.0/16

mount/ls from one of the nodes:
[root@kezan ~]# mount 192.168.0.1:/ovirt/orgrimmar /mnt
[root@kezan ~]# ls -al /mnt/67534cca-1327-462a-b455-a04464084b31/dom_md/
total 1100
drwxr-xr-x 2 vdsm kvm    4096 Jan 24 11:44 .
drwxr-xr-x 5 vdsm kvm    4096 Oct 19 16:16 ..
-rw-rw---- 1 vdsm kvm 1048576 Jan 19 22:09 ids
-rw-rw---- 1 vdsm kvm       0 Sep 25 00:46 inbox
-rw-rw---- 1 vdsm kvm 2097152 Jan 10 13:33 leases
-rw-r--r-- 1 vdsm kvm     903 Jan 10 13:39 metadata
-rw-rw---- 1 vdsm kvm       0 Sep 25 00:46 outbox


- DHC



On Thu, Jan 24, 2013 at 7:51 AM, ybronhei  wrote:

> On 01/24/2013 12:44 AM, Dead Horse wrote:
>
>> I narrowed down the commit where the originally reported issue crept
>> in:
>> commit fc3a44f71d2ef202cff18d7203b9e4165b546621. Building and testing with
>>
>> this commit or subsequent commits yields the original issue.
>>
> Interesting.. it might be related to this commit and we're trying to
> reproduce it.
>
> Did you try to remove that code and run again? Does it work without the
> addition of the zombie reaper?
> Does the connectivity to the storage work well? When you run 'ls' on the
> mounted folder, do you see the files without a long delay? It might be
> related to too long a timeout when validating access to this mount..
> we work on that.. any additional info can help
>
> Thanks.
>
>
>> - DHC
>>
>>
>> On Wed, Jan 23, 2013 at 3:56 PM, Dead Horse
>> wrote:
>>
>>  Indeed reverting back to an older vdsm clears up the above issue. However
>>> now the issue I see is:
>>> Thread-18::ERROR::2013-01-23
>>> 15:50:42,885::task::833::TaskManager.Task::(_setError)
>>> Task=`08709e68-bcbc-40d8-843a-d69d4df40ac6`::Unexpected error
>>>
>>> Traceback (most recent call last):
>>>    File "/usr/share/vdsm/storage/task.py", line 840, in _run
>>>      return fn(*args, **kargs)
>>>    File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
>>>      res = f(*args, **kwargs)
>>>    File "/usr/share/vdsm/storage/hsm.py", line 923, in
>>> connectStoragePool
>>>      masterVersion, options)
>>>    File "/usr/share/vdsm/storage/hsm.py", line 970, in
>>> _connectStoragePool
>>>      res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
>>>    File "/usr/share/vdsm/storage/sp.py", line 643, in connect
>>>      self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
>>>    File "/usr/share/vdsm/storage/sp.py", line 1167, in __rebuild
>>>      self.masterDomain = self.getMasterDomain(msdUUID=msdUUID,
>>> masterVersion=masterVersion)
>>>    File "/usr/share/vdsm/storage/sp.py", line 1506, in getMasterDomain
>>>      raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
>>> StoragePoolMasterNotFound: Cannot find master domain:
>>> 'spUUID=f90a0d1c-06ca-11e2-a05b-00151712f280,
>>> msdUUID=67534cca-1327-462a-b455-a04464084b31'
>>> Thread-18::DEBUG::2013-01-23
>>> 15:50:42,887::task::852::TaskManager.Task::(_run)
>>> Task=`08709e68-bcbc-40d8-843a-d69d4df

Re: [Users] Attaching export domain to dc fails

2013-01-24 Thread Patrick Hurrelmann
On 24.01.2013 18:08, Dafna Ron wrote:
> Before you do this, be sure that the export domain is *really* *not
> attached to* *any* *DC*!
> If you look under the storage main tab it should appear as unattached, or
> it should not be in the setup or under a DC in any other setup at all.
> 
> 1. go to the export domain's metadata located under the domain dom_md
> (example)
> 
> 72ec1321-a114-451f-bee1-6790cbca1bc6/dom_md/metadata
> 
> 2. (backup the metadata before you edit it!) 
> vim the metadata and remove the pool's uuid value from the POOL_UUID field,
> leaving: 'POOL_UUID='
> also remove the SHA_CKSUM (remove the entire entry - not just the value)
> 
> so for example my metadata was this:
> 
> CLASS=Backup
> DESCRIPTION=BlaBla
> IOOPTIMEOUTSEC=1
> LEASERETRIES=3
> LEASETIMESEC=5
> LOCKPOLICY=
> LOCKRENEWALINTERVALSEC=5
> MASTER_VERSION=0
> POOL_UUID=cee3603b-2308-4973-97a8-480f7d6d2132
> REMOTE_PATH=BlaBla.com:/volumes/bla/BlaBla
> ROLE=Regular
> SDUUID=72ec1321-a114-451f-bee1-6790cbca1bc6
> TYPE=NFS
> VERSION=0
> _SHA_CKSUM=95bf1c9b8a75b077fe65d782e86b4c4c331a765d
> 
> 
> it will be this:
> 
> CLASS=Backup
> DESCRIPTION=BlaBla
> IOOPTIMEOUTSEC=1
> LEASERETRIES=3
> LEASETIMESEC=5
> LOCKPOLICY=
> LOCKRENEWALINTERVALSEC=5
> MASTER_VERSION=0
> POOL_UUID=
> REMOTE_PATH=BlaBla.com:/volumes/bla/BlaBla
> ROLE=Regular
> SDUUID=72ec1321-a114-451f-bee1-6790cbca1bc6
> TYPE=NFS
> VERSION=0
> 
> 
> you should be able to attach the domain after this change.
> 

Uh, that was fast! Thank you very much. Problem solved. The export domain
is back to life :)

This mailing list is really incredible and so valuable.

Regards
Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich


Re: [Users] Attaching export domain to dc fails

2013-01-24 Thread Dafna Ron
Before you do this, be sure that the export domain is *really* *not
attached to* *any* *DC*!
If you look under the storage main tab it should appear as unattached, or
it should not be in the setup or under a DC in any other setup at all.

1. go to the export domain's metadata located under the domain dom_md
(example)

72ec1321-a114-451f-bee1-6790cbca1bc6/dom_md/metadata

2. (backup the metadata before you edit it!)
vim the metadata and remove the pool's uuid value from the POOL_UUID field,
leaving: 'POOL_UUID='
also remove the SHA_CKSUM (remove the entire entry - not just the value)

so for example my metadata was this:

CLASS=Backup
DESCRIPTION=BlaBla
IOOPTIMEOUTSEC=1
LEASERETRIES=3
LEASETIMESEC=5
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=0
POOL_UUID=cee3603b-2308-4973-97a8-480f7d6d2132
REMOTE_PATH=BlaBla.com:/volumes/bla/BlaBla
ROLE=Regular
SDUUID=72ec1321-a114-451f-bee1-6790cbca1bc6
TYPE=NFS
VERSION=0
_SHA_CKSUM=95bf1c9b8a75b077fe65d782e86b4c4c331a765d


it will be this:

CLASS=Backup
DESCRIPTION=BlaBla
IOOPTIMEOUTSEC=1
LEASERETRIES=3
LEASETIMESEC=5
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=0
POOL_UUID=
REMOTE_PATH=BlaBla.com:/volumes/bla/BlaBla
ROLE=Regular
SDUUID=72ec1321-a114-451f-bee1-6790cbca1bc6
TYPE=NFS
VERSION=0


you should be able to attach the domain after this change.
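
If this has to be repeated on several domains, the same two changes can be
scripted; a minimal sketch (hypothetical helper; back up the file first,
exactly as advised above):

    import shutil
    import sys

    def clear_pool_reference(path):
        # Backup before editing, as advised above.
        shutil.copy(path, path + '.bak')
        out = []
        for line in open(path):
            line = line.rstrip('\n')
            if line.startswith('POOL_UUID='):
                out.append('POOL_UUID=')     # blank the value only
            elif line.startswith('_SHA_CKSUM='):
                continue                     # drop the entire entry
            else:
                out.append(line)
        open(path, 'w').write('\n'.join(out) + '\n')

    if __name__ == '__main__':
        clear_pool_reference(sys.argv[1])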


On 01/24/2013 06:39 PM, Patrick Hurrelmann wrote:
> Hi list,
>
> in one datacenter I'm facing problems with my export storage. The dc is
> of type single host with local storage. On the host I see that the nfs
> export domain is still connected, but the engine does not show this and
> therefore it cannot be used for exports or detached.
>
> Trying to attach the export domain again fails. The following is
> logged in vdsm:
>
> Thread-1902159::ERROR::2013-01-24
> 17:11:45,474::task::853::TaskManager.Task::(_setError)
> Task=`4bc15024-7917-4599-988f-2784ce43fbe7`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 861, in _run
> return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 960, in attachStorageDomain
> pool.attachSD(sdUUID)
>   File "/usr/share/vdsm/storage/securable.py", line 63, in wrapper
> return f(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 924, in attachSD
> dom.attach(self.spUUID)
>   File "/usr/share/vdsm/storage/sd.py", line 442, in attach
> raise se.StorageDomainAlreadyAttached(pools[0], self.sdUUID)
> StorageDomainAlreadyAttached: Storage domain already attached to pool:
> 'domain=cd23808b-136a-4b33-a80c-f2581eab022d,
> pool=d95c53ca-9cef-4db2-8858-bf4937bd8c14'
>
> It won't let me attach the export domain saying that it is already
> attached. Manually umounting the export domain on the host results in
> the same error on subsequent attach.
>
> This is on CentOS 6.3 using Dreyou's rpms. Installed versions on host:
>
> vdsm.x86_64 4.10.0-0.44.14.el6
> vdsm-cli.noarch 4.10.0-0.44.14.el6
> vdsm-python.x86_64  4.10.0-0.44.14.el6
> vdsm-xmlrpc.noarch  4.10.0-0.44.14.el6
>
> Engine:
>
> ovirt-engine.noarch 3.1.0-3.19.el6
> ovirt-engine-backend.noarch 3.1.0-3.19.el6
> ovirt-engine-cli.noarch 3.1.0.7-1.el6
> ovirt-engine-config.noarch  3.1.0-3.19.el6
> ovirt-engine-dbscripts.noarch   3.1.0-3.19.el6
> ovirt-engine-genericapi.noarch  3.1.0-3.19.el6
> ovirt-engine-jbossas711.x86_64  1-0
> ovirt-engine-notification-service.noarch3.1.0-3.19.el6
> ovirt-engine-restapi.noarch 3.1.0-3.19.el6
> ovirt-engine-sdk.noarch 3.1.0.5-1.el6
> ovirt-engine-setup.noarch   3.1.0-3.19.el6
> ovirt-engine-tools-common.noarch3.1.0-3.19.el6
> ovirt-engine-userportal.noarch  3.1.0-3.19.el6
> ovirt-engine-webadmin-portal.noarch 3.1.0-3.19.el6
> ovirt-image-uploader.noarch 3.1.0-16.el6
> ovirt-iso-uploader.noarch   3.1.0-16.el6
> ovirt-log-collector.noarch  3.1.0-16.el6
>
> How can this be recovered to a sane state? If more information is
> needed, please do not hesitate to request it.
>
> Thanks and regards
> Patrick
>
>
>


-- 
Dafna Ron


Re: [Users] DL380 G5 - Fails to Activate

2013-01-24 Thread Tom Brown

> 
> 
> 
> I have a couple of old DL380 G5's and I am putting them into their own 
> cluster for testing various things out.
> The install of 3.1 from dreyou goes fine onto them, but when they try to 
> activate I get the following
> 
> Host xxx.xxx.net.uk moved to Non-Operational state as host does not meet the 
> cluster's minimum CPU level. Missing CPU features : model_Conroe, nx
> 
> KVM appears to run just fine on these hosts and their CPUs are
> 
> Intel(R) Xeon(R) CPU 5140  @ 2.33GHz
> 
> Is it possible to add these into a 3.1 cluster?

and now I have managed to find a similar post

# vdsClient -s 0 getVdsCaps | grep -i flags
cpuFlags = 
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,lahf_lm,dts,tpr_shadow
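
Note the flags list above indeed lacks nx, which matches the engine's
complaint (on many HP machines nx corresponds to a BIOS "No-Execute
Memory Protection" toggle). A quick illustrative check on the host:

    # Does /proc/cpuinfo report the nx flag the Conroe CPU level needs?
    for line in open('/proc/cpuinfo'):
        if line.startswith('flags'):
            flags = line.split(':', 1)[1].split()
            print 'nx present:', 'nx' in flags
            break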

# virsh -r capabilities
[libvirt capabilities XML; the element tags were stripped by the list
archive. Recoverable details: host UUID 134bd567-da9f-43f9-8a2b-c259ed34f938,
arch x86_64, CPU model kvm32, vendor Intel; hvm guest support for both
32- and 64-bit with emulator /usr/libexec/qemu-kvm and machine types
rhel6.3.0 (pc), rhel6.2.0, rhel6.1.0, rhel6.0.0, rhel5.5.0, rhel5.4.4,
rhel5.4.0.]



[Users] Attaching export domain to dc fails

2013-01-24 Thread Patrick Hurrelmann
Hi list,

in one datacenter I'm facing problems with my export storage. The dc is
of type single host with local storage. On the host I see that the nfs
export domain is still connected, but the engine does not show this and
therefore it cannot be used for exports or detached.

Trying to attach the export domain again fails. The following is
logged in vdsm:

Thread-1902159::ERROR::2013-01-24
17:11:45,474::task::853::TaskManager.Task::(_setError)
Task=`4bc15024-7917-4599-988f-2784ce43fbe7`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 960, in attachStorageDomain
pool.attachSD(sdUUID)
  File "/usr/share/vdsm/storage/securable.py", line 63, in wrapper
return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 924, in attachSD
dom.attach(self.spUUID)
  File "/usr/share/vdsm/storage/sd.py", line 442, in attach
raise se.StorageDomainAlreadyAttached(pools[0], self.sdUUID)
StorageDomainAlreadyAttached: Storage domain already attached to pool:
'domain=cd23808b-136a-4b33-a80c-f2581eab022d,
pool=d95c53ca-9cef-4db2-8858-bf4937bd8c14'

It won't let me attach the export domain saying that it is already
attached. Manually umounting the export domain on the host results in
the same error on subsequent attach.

This is on CentOS 6.3 using Dreyou's rpms. Installed versions on host:

vdsm.x86_64 4.10.0-0.44.14.el6
vdsm-cli.noarch 4.10.0-0.44.14.el6
vdsm-python.x86_64  4.10.0-0.44.14.el6
vdsm-xmlrpc.noarch  4.10.0-0.44.14.el6

Engine:

ovirt-engine.noarch 3.1.0-3.19.el6
ovirt-engine-backend.noarch 3.1.0-3.19.el6
ovirt-engine-cli.noarch 3.1.0.7-1.el6
ovirt-engine-config.noarch  3.1.0-3.19.el6
ovirt-engine-dbscripts.noarch   3.1.0-3.19.el6
ovirt-engine-genericapi.noarch  3.1.0-3.19.el6
ovirt-engine-jbossas711.x86_64  1-0
ovirt-engine-notification-service.noarch3.1.0-3.19.el6
ovirt-engine-restapi.noarch 3.1.0-3.19.el6
ovirt-engine-sdk.noarch 3.1.0.5-1.el6
ovirt-engine-setup.noarch   3.1.0-3.19.el6
ovirt-engine-tools-common.noarch3.1.0-3.19.el6
ovirt-engine-userportal.noarch  3.1.0-3.19.el6
ovirt-engine-webadmin-portal.noarch 3.1.0-3.19.el6
ovirt-image-uploader.noarch 3.1.0-16.el6
ovirt-iso-uploader.noarch   3.1.0-16.el6
ovirt-log-collector.noarch  3.1.0-16.el6

How can this be recovered to a sane state? If more information is
needed, please do not hesitate to request it.

Thanks and regards
Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
Thread-1902157::DEBUG::2013-01-24 17:11:36,039::BindingXMLRPC::156::vds::(wrapper) [xxx.xxx.xxx.190]
Thread-1902157::DEBUG::2013-01-24 17:11:36,039::task::588::TaskManager.Task::(_updateState) Task=`a686738c-ff6f-43ad-8966-8ec158dc7c2e`::moving from state init -> state preparing
Thread-1902157::INFO::2013-01-24 17:11:36,039::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=1, spUUID='----', conList=[{'connection': 'xxx.xxx.xxx.191:/data/ovirt-export-fra', 'iqn': '', 'portal': '', 'user': '', 'password': '**', 'id': '5207d7da-2655-4843-b126-3252e38beafa', 'port': ''}], options=None)
Thread-1902157::INFO::2013-01-24 17:11:36,039::logUtils::39::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': '5207d7da-2655-4843-b126-3252e38beafa'}]}
Thread-1902157::DEBUG::2013-01-24 17:11:36,039::task::1172::TaskManager.Task::(prepare) Task=`a686738c-ff6f-43ad-8966-8ec158dc7c2e`::finished: {'statuslist': [{'status': 0, 'id': '5207d7da-2655-4843-b126-3252e38beafa'}]}
Thread-1902157::DEBUG::2013-01-24 17:11:36,040::task::588::TaskManager.Task::(_updateState) Task=`a686738c-ff6f-43ad-8966-8ec158dc7c2e`::moving from state preparing -> state finished
Thread-1902157::DEBUG::2013-01-24 17:11:36,040::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1902157::DEBUG::2013-01-24 17:11:36,040::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1902157::DEBUG::2013-01-24 17:11:36,040::task::978::TaskManager.Task::(_decref) Task=`a686738c-ff6f-43ad-8966-8ec158dc7c2e`::ref 0 aborting False
Thread-1902158::DEBUG::2013-01-24 17:11:36,057::BindingXMLRPC::156::vds::(wrapper) [xxx.xxx.xxx.190]
Thread-1902158::DEBUG::2013-01-24 17:11:36,057::task::588::TaskManager.Ta

[Users] DL380 G5 - Fails to Activate

2013-01-24 Thread Tom Brown
Hi

I have a couple of old DL380 G5's and I am putting them into their own cluster 
for testing various things out.
The install of 3.1 from dreyou goes fine onto them, but when they try to 
activate I get the following

Host xxx.xxx.net.uk moved to Non-Operational state as host does not meet the 
cluster's minimum CPU level. Missing CPU features : model_Conroe, nx

KVM appears to run just fine on these hosts and their CPUs are

Intel(R) Xeon(R) CPU 5140  @ 2.33GHz

Is it possible to add these into a 3.1 cluster?

thanks


Re: [Users] storage domain auto re-cover

2013-01-24 Thread Itamar Heim

On 24/01/2013 06:25, Alex Leonhardt wrote:

For the DB entry on vdc_options or in which file in /etc/ovirt-engine/ ?


db entry (via ovirt-engine-config -g ...)




On 24 January 2013 14:14, Itamar Heim wrote:

On 24/01/2013 04:34, Alex Leonhardt wrote:

hi,

is it possible to set a storage domain to auto-recover /
auto-reactivate
? e.g. after I restart a host that runs a storage domain, i want
ovirt
engine to make that storage domain active after the host has
come up.

thanks
alex


--

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |





please check your config for this:
http://gerrit.ovirt.org/#/c/10387/





--

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |




Re: [Users] default mutipath.conf config for fedora 18 invalid

2013-01-24 Thread Yeela Kaplan
Hi,
I've tested the new patch on a Fedora 18 vdsm host (created an iSCSI storage 
domain, attached, activated) and it works well.
Even though multipath.conf no longer uses getuid_callout to recognize the 
device's wwid,
it still knows how to deal with the attribute's existence in the conf file when 
running the multipath command (the only output is to stdout, which we don't use 
anyway; stderr is empty and rc=0). 
The relevant patch is: http://gerrit.ovirt.org/#/c/10824/

Yeela

- Original Message -
> From: "Ayal Baron" 
> To: "Gianluca Cecchi" 
> Cc: "users" , "Dan Kenigsberg" , "Yeela 
> Kaplan" 
> Sent: Wednesday, January 23, 2013 7:51:28 PM
> Subject: Re: [Users] default mutipath.conf config for fedora 18 invalid
> 
> 
> 
> - Original Message -
> > On Wed, Jan 23, 2013 at 4:41 PM, Yeela Kaplan  wrote:
> > > Yes, you need a different DC and host for iSCSI SDs.
> > 
> > Possibly I can test tomorrow, adding another host that should go
> > into
> > the same DC, but I can temporarily put it in another newly created
> > iSCSI DC for testing.
> > What is the workflow when I have a host in a DC and then I want to
> > put
> > it into another one, in general, and when the two DCs have
> > configured
> > different SD types?
> > 
> 
> As long as the host has visibility to the target storage domains, all
> you need to do is put the host in maintenance and then edit it and
> change the cluster/dc it belongs to.
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Trouble building Ovirt from source - "No rule to make target `install_tools'. Stop."

2013-01-24 Thread Yuval M
Hi,
I'm installing Ovirt 3.1 on Fedora using this guide:
http://www.ovirt.org/Building_oVirt_engine#Deploying_engine-config_.26_engine-manage-domains

and I'm getting the error in the subject from make.
there is indeed no rule for install_tools in the makefile.

What am I missing?

Thanks,

Yuval
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] custom nfs mount options

2013-01-24 Thread Greg Padgett

On 01/24/2013 08:03 AM, Alex Leonhardt wrote:

same here, ovirt 3.1 from dreyou's repo ...

vdsm-python-4.10.0-0.44.14.el6.x86_64
vdsm-cli-4.10.0-0.44.14.el6.noarch
vdsm-xmlrpc-4.10.0-0.44.14.el6.noarch
vdsm-4.10.0-0.44.14.el6.x86_64

Alex



On 24 January 2013 12:33, Alexandru Vladulescu wrote:

On 01/24/2013 02:25 PM, Itamar Heim wrote:

On 24/01/2013 04:24, Alexandru Vladulescu wrote:


Hi guys,

I remember asking the same thing a couple of weeks ago. Itamar's
answer
was the same: check the vdsm.conf file for NFS mount
options.
Because I did not have time to test this until now, I
return with
the test results.

Well Alex, it seems to be right: on the 3.1 version, if you go
and edit
the /etc/vdsm/vdsm.conf file, on line 146, I uncommented the
nfs_mount_options parameter and changed it to:

nfs_mount_options =
soft,nosharecache,noatime,rsize=8192,wsize=8192

Went into the oVirt interface, put the node from the Hosts tab into
Maintenance,
so that the ISO domain and Master Domain get unmounted
automatically; restarted the vdsm service on the hypervisor
server and
activated the node back from the GUI. Upon running the mount
command, there is no change
or difference between what I added and what was configured
automatically before.


I remember something about how you must not pass any NFS option from
ovirt, or it will override vdsm.conf.
Are you trying to set NFS options from both the engine and vdsm.conf?

Basically, I had 2 questions: one was like Alex asked, and it is the
current topic; the other was a suggestion to add these NFS
configuration parameter changes to the GUI in the Storage tab. I asked
whether, besides retrans, timeo and vers, it is possible to add anything
else in future GUI development.

I must mention, I'm testing the 3.1 version from dreyou's repo.


This is key.  There was a bug where vdsm did not take the vdsm.conf 
nfs_mount_options into consideration [1], which was fixed upstream in 
4.10.1--so after the version you are running.  There was also a 
complementary patch in engine--the two work together to fix this issue.


If you had the fix, you would simply need to be sure to not check the 
"Override Default Options" checkbox for your NFS storage in the UI. 
However, without the fix, I think the most straightforward way to 
accomplish what you want is to configure the storage using a POSIX domain, 
as Itamar suggested earlier in the thread.


I'll leave the question of adding additional, custom parameters in the UI 
for others to answer.  It seems like it could be useful, but it can be 
accomplished in other ways.


[1] http://bugzilla.redhat.com/826921
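
For anyone double-checking which value their vdsm picks up: the file is
plain ini format, readable with the stock ConfigParser. A sketch; the
[irs] section name reflects vdsm's config layout of the time and should
be treated as an assumption:

    from ConfigParser import ConfigParser

    cfg = ConfigParser()
    cfg.read('/etc/vdsm/vdsm.conf')
    # nfs_mount_options lived under the [irs] section (assumption).
    if cfg.has_option('irs', 'nfs_mount_options'):
        print cfg.get('irs', 'nfs_mount_options')
    else:
        print 'nfs_mount_options not set; vdsm default applies'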





   type nfs

(rw,soft,nosharecache,timeo=10,retrans=6,vers=4,addr=x.x.x.x,clientaddr=x.x.x.x)


Might this be a bug in vdsm to be fixed?

Alex.


On 01/24/2013 01:45 PM, Alex Leonhardt wrote:

So I've tried some bits Itamar asked me to - however, I
still get the
same mount options shown.

I tried "service vdsmd reconfigure; service vdsmd restart"
- the mount
for HV03:/vmfs/data should now have the new mount options,
but no luck.

Anyone have time to help?

Alex



On 24 January 2013 10:54, Alex Leonhardt wrote:

 I've got now:

 nfs_mount_options =
soft,nosharecache,rsize=32768,wsize=32768,noatime


 However, when I check the mounts on the host, it does
not show
 these additional options used (only
soft,nosharecache); here is
 the mount output:

 HV02:/vmfs/data.old2 on
/rhev/data-center/mnt/HV02:_vmfs_data.old2
 type nfs

(rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=x.x.x8)
 HV02:/vmfs/data on
/rhev/data-center/mnt/HV02:_vmfs_data type nfs

(rw,soft,nosharecache,timeo=600,retrans=6,vers=3,addr=x.x.x8)
 HV03:/vmfs/data on
/rhev/data-center/mnt/HV03:_vmfs_data type nfs

(rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=127.0.0.1)

 Above is after I restarted HV03, so it really should
have mounted
 HV03:/vmfs/data with the new options

[Users] VM migration failed on oVirt Node Hypervisor release 2.5.5 (0.1.fc17) : empty cacert.pem

2013-01-24 Thread Kevin Maziere Aubry
Hi

My concern is about oVirt Node Hypervisor release 2.5.5
(0.1.fc17), the ISO downloaded from the oVirt site.

I've installed and connected 4 nodes to a manager, and tried to migrate a VM
between hypervisors.
It always fails with the error:
libvirtError: operation failed: Failed to connect to remote libvirt URI
qemu+tls://172.16.6.3/system
where 172.16.6.3 is the IP of a node.
I've checked on the node and port 16514 is open.

I also tested the virsh command to get a better error message:
virsh -c tls://172.16.6.3/system
error: Unable to import client certificate /etc/pki/CA/cacert.pem

I've checked the cert file on the oVirt node and found it was empty, and
on all nodes installed from the oVirt ISO it is empty.
I also checked /config/etc/pki/CA/cacert.pem, which is also empty.

On a vdsm node installed from packages on Fedora 17, it works.
 ls -al /etc/pki/CA/cacert.pem
lrwxrwxrwx. 1 root root 30 18 janv. 14:30 /etc/pki/CA/cacert.pem ->
/etc/pki/vdsm/certs/cacert.pem
And the cert is good.
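
A quick way to compare the two layouts is to check whether each path is a
symlink and whether the file is empty; a diagnostic sketch only:

    import os

    for path in ('/etc/pki/CA/cacert.pem', '/etc/pki/vdsm/certs/cacert.pem'):
        if os.path.islink(path):
            print path, '->', os.readlink(path)
        if os.path.exists(path):
            print path, 'size:', os.path.getsize(path)
        else:
            print path, 'missing'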

I've seen no bug report regarding the feature...

Kevin


-- 

Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
 1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
 http://www.alterway.fr


Re: [Users] 3.1 to 3.2 migration

2013-01-24 Thread Alexandru Vladulescu

On 01/24/2013 04:42 PM, Adrian Gibanel wrote:




*De: *"Alexandru Vladulescu" 
*Para: *users@ovirt.org
*Enviados: *Jueves, 24 de Enero 2013 14:21:22
*Asunto: *Re: [Users] 3.1 to 3.2 migration

Hi,

On 01/24/2013 03:13 PM, Adrian Gibanel wrote:

As far as I know in Fedora you need to upgrade from Fedora 17
to Fedora 18.
As you you're using CentOS I suppose you don't need to upgrade
your CentOS but I'm not sure at all.

This is not the case, it goes right if installed 3.2 from scratch
on a new machine. Don't think CentOS has a problem here.

Well, I remember having asked about Fedora and CentOS oVirt support 
differences on IRC, and I think the answer was that for some 
features you needed extra repositories for a new kernel + kvm and/or 
gluster and/or libvirt. Not sure exactly which features you'd miss 
and which repos you need to enable manually.


But just for upgrading to oVirt 3.2 I think you won't have any problem.

Anyways what I've read in the mailing list about 3.1 to 3.2
update is the following one:

  * Update packages
  * Run: engine-upgrade command

I don't know how easy it is to update packages in your case
(stable to beta) so you might do it in your way like this:
  * Remove 3.1 packages
  * Install 3.2 beta packages
  * Run: engine-upgrade command

I removed 3.1 using yum, installed 3.2 using yum and ran
engine-setup, not the upgrade command. From what I've seen,
engine-upgrade first tries to locate a mirror through yum.

Hmmm. Maybe I was mistaken, and what you have to do is add a yum 
repository and then run engine-upgrade. What I remember without any 
doubt is that you had to run engine-upgrade no matter what.


Hope someone can clarify it.

I will check that and get back to you.



--
*Adrián Gibanel*
I.T. Manager

+34 675 683 301
www.btactic.com 








Re: [Users] 3.1 to 3.2 migration

2013-01-24 Thread Adrian Gibanel
- Mensaje original -

> De: "Alexandru Vladulescu" 
> Para: users@ovirt.org
> Enviados: Jueves, 24 de Enero 2013 14:21:22
> Asunto: Re: [Users] 3.1 to 3.2 migration

> Hi,

> On 01/24/2013 03:13 PM, Adrian Gibanel wrote:

> > As far as I know in Fedora you need to upgrade from Fedora 17 to
> > Fedora 18.
> 
> > As you you're using CentOS I suppose you don't need to upgrade your
> > CentOS but I'm not sure at all.
> 

> This is not the case, it goes right if installed 3.2 from scratch on
> a new machine. Don't think CentOS has a problem here.

Well, I remember having asked about Fedora and CentOS oVirt support differences 
on IRC, and I think the answer was that for some features you needed extra 
repositories for a new kernel + kvm and/or gluster and/or libvirt. Not sure 
exactly which features you'd miss and which repos you need to 
enable manually. 

But just for upgrading to oVirt 3.2 I think you won't have any problem. 

> > Anyways what I've read in the mailing list about 3.1 to 3.2 update
> > is
> > the following one:
> 

> > * Update packages
> 
> > * Run: engine-upgrade command
> 

> > I don't know how easy it is to update packages in your case (stable
> > to beta) so you might do it in your way like this:
> 
> > * Remove 3.1 packages
> 
> > * Install 3.2 beta packages
> 
> > * Run: engine-upgrade command
> 

> I removed 3.1 using yum, installed 3.2 using yum and ran
> engine-setup, not the upgrade command. From what I've seen, engine-upgrade
> first tries to locate a mirror through yum.

Hmmm. Maybe I was mistaken, and what you have to do is add a yum repository 
and then run engine-upgrade. What I remember without any doubt is that you had 
to run engine-upgrade no matter what. 

Hope someone can clarify it. 

-- 

Adrián Gibanel 
I.T. Manager 

+34 675 683 301 
www.btactic.com 



Re: [Users] 3.0 to 3.1 upgrade

2013-01-24 Thread Adrian Gibanel
While checking, for Vladulescu's issue, for a specific 3.1 to 3.2 upgrade page 
in the wiki, I found this one: 

http://www.ovirt.org/OVirt_3.0_to_3.1_upgrade 

I've changed the email subject because the 3.0 to 3.1 upgrade seems to be 
completely different from the 3.1 to 3.2 upgrade. 

- Mensaje original - 

> De: "Sven Knohsalla" 
> Para: "users" 
> Enviados: Jueves, 24 de Enero 2013 14:26:21
> Asunto: Re: [Users] 3.1 to 3.2 migration

> Hi,

> It would be very good to know, also for future versions,
> whether the engine-upgrade command is able to handle engine updates with
> no issues
> (for default ovirt-engine installations).

> For us it's a disaster to move a production environment from 3.0 to
> 3.1.

> Thanks in advance!

> Best,
> Sven.

-- 

-- 
Adrián Gibanel 
I.T. Manager 

+34 675 683 301 
www.btactic.com 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] storage domain auto re-cover

2013-01-24 Thread Alex Leonhardt
Is that a DB entry in vdc_options, or in which file under /etc/ovirt-engine/ ?


On 24 January 2013 14:14, Itamar Heim  wrote:

> On 24/01/2013 04:34, Alex Leonhardt wrote:
>
>> hi,
>>
>> is it possible to set a storage domain to auto-recover / auto-reactivate
>> ? e.g. after I restart a host that runs a storage domain, i want ovirt
>> engine to make that storage domain active after the host has come up.
>>
>> thanks
>> alex
>>
>>
>> --
>>
>> | RHCE | Senior Systems Engineer | www.vcore.co |
>> www.vsearchcloud.com |
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
> please check your config for this:
> http://gerrit.ovirt.org/#/c/10387/
>



-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] storage domain auto re-cover

2013-01-24 Thread Itamar Heim

On 24/01/2013 04:34, Alex Leonhardt wrote:

hi,

is it possible to set a storage domain to auto-recover / auto-reactivate
? e.g. after I restart a host that runs a storage domain, i want ovirt
engine to make that storage domain active after the host has come up.

thanks
alex


--

| RHCE | Senior Systems Engineer | www.vcore.co |
www.vsearchcloud.com |


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



please check your config for this:
http://gerrit.ovirt.org/#/c/10387/
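For reference: engine-side configuration values live in the vdc_options table
of the engine database, so the relevant key can be looked up either with
engine-config or with a direct query. A sketch only -- the ILIKE pattern is a
guess at the key name, since the exact option the gerrit change introduces
isn't named here, and the -l flag is from memory:

  # list the keys engine-config knows about
  engine-config -l | grep -i storage

  # or query the engine DB directly ("engine" is the default DB name)
  su - postgres -c "psql engine -c \"SELECT option_name, option_value FROM vdc_options WHERE option_name ILIKE '%storage%';\""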
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] latest vdsm cannot read ib device speeds causing storage attach fail

2013-01-24 Thread ybronhei

On 01/24/2013 12:44 AM, Dead Horse wrote:

I narrowed down on the commit where the originally reported issue crept in:
commit fc3a44f71d2ef202cff18d7203b9e4165b546621; building and testing with
this commit or subsequent commits yields the original issue.
Interesting.. it might be related to this commit and we're trying to 
reproduce it.


Did you try to remove that code and run again? Does it work without the 
addition of zombieReaper?
Does the connectivity to the storage work well? When you run 'ls' on the 
mounted folder, do you see the files without a long delay? It might be 
related to too long a timeout when validating access to this mount..
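If you want to put a number on that delay, a trivial timing check is enough (a
sketch in Python 2, matching vdsm's environment; the mount point below is a
placeholder built from the engine log's 192.168.0.1:/ovirt/ds export -- adjust
it to your actual directory under /rhev/data-center/mnt):

  import os
  import time

  # hypothetical mount point; replace with your actual NFS mount
  path = '/rhev/data-center/mnt/192.168.0.1:_ovirt_ds'

  start = time.time()
  entries = os.listdir(path)  # slow or hung storage will stall right here
  print '%d entries listed in %.2f seconds' % (len(entries), time.time() - start)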

we work on that.. any additional info can help

Thanks.


- DHC


On Wed, Jan 23, 2013 at 3:56 PM, Dead Horse
wrote:


Indeed reverting back to an older vdsm clears up the above issue. However
now the issue I see is:
Thread-18::ERROR::2013-01-23
15:50:42,885::task::833::TaskManager.Task::(_setError)
Task=`08709e68-bcbc-40d8-843a-d69d4df40ac6`::Unexpected error

Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 840, in _run
 return fn(*args, **kargs)
   File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
 res = f(*args, **kwargs)
   File "/usr/share/vdsm/storage/hsm.py", line 923, in connectStoragePool
 masterVersion, options)
   File "/usr/share/vdsm/storage/hsm.py", line 970, in _connectStoragePool
 res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 643, in connect
 self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 1167, in __rebuild
 self.masterDomain = self.getMasterDomain(msdUUID=msdUUID,
masterVersion=masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 1506, in getMasterDomain
 raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain:
'spUUID=f90a0d1c-06ca-11e2-a05b-00151712f280,
msdUUID=67534cca-1327-462a-b455-a04464084b31'
Thread-18::DEBUG::2013-01-23
15:50:42,887::task::852::TaskManager.Task::(_run)
Task=`08709e68-bcbc-40d8-843a-d69d4df40ac6`::Task._run:
08709e68-bcbc-40d8-843a-d69d4df40ac6
('f90a0d1c-06ca-11e2-a05b-00151712f280', 2,
'f90a0d1c-06ca-11e2-a05b-00151712f280',
'67534cca-1327-462a-b455-a04464084b31', 433) {} failed - stopping task

This is with vdsm built from
commit 25a2d8572ad32352227c98a86631300fbd6523c1
- DHC


On Wed, Jan 23, 2013 at 10:44 AM, Dead Horse <
deadhorseconsult...@gmail.com> wrote:


VDSM was built from:
commit 166138e37e75767b32227746bb671b1dab9cdd5e

Attached is the full vdsm log

I should also note that from engine perspective it sees the master
storage domain as locked and the others as unknown.


On Wed, Jan 23, 2013 at 2:49 AM, Dan Kenigsberg wrote:


On Tue, Jan 22, 2013 at 04:02:24PM -0600, Dead Horse wrote:

Any ideas on this one? (from VDSM log):
Thread-25::DEBUG::2013-01-22
15:35:29,065::BindingXMLRPC::914::vds::(wrapper) client

[3.57.111.30]::call

getCapabilities with () {}
Thread-25::ERROR::2013-01-22 15:35:29,113::netinfo::159::root::(speed)
cannot read ib0 speed
Traceback (most recent call last):
   File "/usr/lib64/python2.6/site-packages/vdsm/netinfo.py", line 155,

in

speed
 s = int(file('/sys/class/net/%s/speed' % dev).read())
IOError: [Errno 22] Invalid argument

Causes VDSM to fail to attach storage


I doubt that this is the cause of the failure, as vdsm has always
reported "0" for ib devices, and still does.
it happens only when you call getCapabilities.. so it isn't related 
to the flow, and it can't affect the storage.

Dan: I guess this is not the issue, but why the IOError?
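(The kernel returns EINVAL for a read of /sys/class/net/<dev>/speed whenever
the driver cannot report a link speed -- the usual case for IPoIB devices and
for links that are down -- so the IOError itself is expected there. A defensive
version of the read would look roughly like this sketch, not necessarily what
vdsm ends up doing:)

  def speed(dev):
      """Return the link speed in Mbps, or 0 where the kernel can't report it."""
      try:
          return int(file('/sys/class/net/%s/speed' % dev).read())
      except IOError:
          # EINVAL: driver has no speed to report (e.g. ib devices, link down)
          return 0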


Does a former version work with your Engine?
Could you share more of your vdsm.log? I suppose the culprit lies in
one of the storage-related commands, not in statistics retrieval.



Engine side sees:
ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper]
(QuartzScheduler_Worker-96) [553ef26e] The connection with details
192.168.0.1:/ovirt/ds failed because of error code 100 and error

message

is: general exception
2013-01-22 15:35:30,160 INFO
[org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
(QuartzScheduler_Worker-96) [1ab78378] Running command:
SetNonOperationalVdsCommand internal: true. Entities affected :  ID:
8970b3fe-1faf-11e2-bc1f-00151712f280 Type: VDS
2013-01-22 15:35:30,200 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(QuartzScheduler_Worker-96) [1ab78378] START,
SetVdsStatusVDSCommand(HostName = kezan, HostId =
8970b3fe-1faf-11e2-bc1f-00151712f280, status=NonOperational,
nonOperationalReason=STORAGE_DOMAIN_UNREACHABLE), log id: 4af5c4cd
2013-01-22 15:35:30,211 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(QuartzScheduler_Worker-96) [1ab78378] FINISH, SetVdsStatusVDSCommand,

log

id: 4af5c4cd
2013-01-22 15:35:30,242 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(QuartzScheduler_Worker-96) [1ab

Re: [Users] 3.1 to 3.2 migration

2013-01-24 Thread Sven Knohsalla
Hi,

would be very good to know, although for future versions,
if the command  engine-upgrade is able to handle engine updates with no issues
(for default ovirt-engine installations) .

For us it’s a disaster to move productive environment from 3.0 to 3.1.

Thanks in advance!

Best,
Sven.


From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Adrian Gibanel
Sent: Thursday, 24 January 2013 14:14
To: users
Subject: Re: [Users] 3.1 to 3.2 migration

As far as I know, in Fedora you need to upgrade from Fedora 17 to Fedora 18.
As you're using CentOS, I suppose you don't need to upgrade your CentOS, but 
I'm not sure at all.

Anyway, what I've read in the mailing list about the 3.1 to 3.2 update is the 
following:

  * Update packages
  * Run: engine-upgrade command

I don't know how easy it is to update packages in your case (stable to beta), so 
you might do it your way, like this:
  * Remove 3.1 packages
  * Install 3.2 beta packages
  * Run: engine-upgrade command

From what I have read (not an expert) there's a difference between the old and  
new database schema (and probably other parts of oVirt)  and the engine-upgrade 
handles that.
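Concretely, the rough sequence would be something like the following (a sketch
only; the exact repo setup and package names depend on where the 3.2 beta comes
from, e.g. dreyou's repo on CentOS):

  yum remove 'ovirt-engine*'   # drop the 3.1 packages; the DB is left in place
  yum install ovirt-engine     # from the 3.2 beta repository
  engine-upgrade               # should migrate the database schema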

Maybe other list contributors can confirm if I'm missing some other steps or 
not.

De: "Alexandru Vladulescu" 
mailto:avladule...@bfproject.ro>>
Para: "Adrian Gibanel" 
mailto:adrian.giba...@btactic.com>>
CC: "users" mailto:users@ovirt.org>>
Enviados: Jueves, 24 de Enero 2013 10:59:14
Asunto: Re: [Users] 3.1 to 3.2 migration

Dear Adrian,


By migration I mean "migration of the product from 3.1 to 3.2 -- or 
upgrade of the oVirt platform". The test upgrade was done, in my case, on a 
machine that only acts as a node controller in the system, therefore there is 
no ISO domain, local storage volume or vdsm daemon for hypervisor purposes 
running.

Using dreyou's repo, I included the 3.2 alpha release, removed the 3.1 version 
cleanly from the system, installed all the 3.2 oVirt packages and ran 
engine-setup on the new installation.

I did not run engine-cleanup, therefore the DB was untouched; running 
engine-setup, I saw that the DB initialization sequence was pushing new table & 
data updates onto the running postgres DB, as its size went from 10MB to 15MB.
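One precaution worth adding before any such attempt: dump the engine database
first, so the pre-upgrade state can be restored if the schema push goes wrong
(this assumes the default database name "engine"):

  su - postgres -c "pg_dump engine > /tmp/engine-pre-upgrade.sql"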

--
Adrián Gibanel
I.T. Manager

+34 675 683 301
www.btactic.com


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] 3.1 to 3.2 migration

2013-01-24 Thread Alexandru Vladulescu


Hi,

On 01/24/2013 03:13 PM, Adrian Gibanel wrote:
As far as I know, in Fedora you need to upgrade from Fedora 17 to 
Fedora 18.
As you're using CentOS, I suppose you don't need to upgrade your 
CentOS, but I'm not sure at all.


This is not the case; it goes right if 3.2 is installed from scratch on a 
new machine. I don't think CentOS has a problem here.
Anyway, what I've read in the mailing list about the 3.1 to 3.2 update is 
the following:


  * Update packages
  * Run: engine-upgrade command

I don't know how easy it is to update packages in your case (stable to 
beta), so you might do it your way, like this:

  * Remove 3.1 packages
  * Install 3.2 beta packages
  * Run: engine-upgrade command

I have removed 3.1 using yum, installed 3.2 using yum and ran 
engine-setup, not the upgrade command. As seen, engine-upgrade tries 
first to locate a mirror through yum.


Has anyone tried a migration from old to new so far?
From what I have read (not an expert) there's a difference between the 
old and new database schema (and probably other parts of oVirt), and 
the engine-upgrade handles that.


Maybe other list contributors can confirm if I'm missing some other 
steps or not.




*De: *"Alexandru Vladulescu" 
*Para: *"Adrian Gibanel" 
*CC: *"users" 
*Enviados: *Jueves, 24 de Enero 2013 10:59:14
*Asunto: *Re: [Users] 3.1 to 3.2 migration


Dear Adrian,


By migration I mean "migration of the product from 3.1 to
3.2 -- or upgrade of the oVirt platform". The test upgrade was
done, in my case, on a machine that only acts as a node controller
in the system, therefore there is no ISO domain, local storage
volume or vdsm daemon for hypervisor purposes running.

Using dreyou's repo, I included the 3.2 alpha release, removed
the 3.1 version cleanly from the system, installed all the 3.2 ovirt
packages and ran engine-setup on the new installation.

I did not run engine-cleanup, therefore the DB was untouched;
running engine-setup, I saw that the DB initialization sequence
was pushing new table & data updates onto the running postgres
DB, as its size went from 10MB to 15MB.


--
*Adrián Gibanel*
I.T. Manager

+34 675 683 301
www.btactic.com 








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Alexandru Vladulescu
System Engineer
-
Bright Future Project Romania
Skype :   avladulescu
Mobile :  +4(0)726.373.098
-

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] 3.1 to 3.2 migration

2013-01-24 Thread Adrian Gibanel
As far as I know, in Fedora you need to upgrade from Fedora 17 to Fedora 18. 
As you're using CentOS, I suppose you don't need to upgrade your CentOS, but 
I'm not sure at all. 

Anyway, what I've read in the mailing list about the 3.1 to 3.2 update is the 
following: 

* Update packages 
* Run: engine-upgrade command 

I don't know how easy it is to update packages in your case (stable to beta), so 
you might do it your way, like this: 
* Remove 3.1 packages 
* Install 3.2 beta packages 
* Run: engine-upgrade command 

From what I have read (not an expert) there's a difference between the old and 
new database schema (and probably other parts of oVirt) and the engine-upgrade 
handles that. 

Maybe other list contributors can confirm if I'm missing some other steps or 
not. 

- Original Message -

> De: "Alexandru Vladulescu" 
> Para: "Adrian Gibanel" 
> CC: "users" 
> Enviados: Jueves, 24 de Enero 2013 10:59:14
> Asunto: Re: [Users] 3.1 to 3.2 migration

> Dear Adrian,

> By migration I mean "migration of the product from 3.1 to 3.2
> -- or upgrade of the oVirt platform". The test upgrade was done, in
> my case, on a machine that only acts as a node controller in the
> system, therefore there is no ISO domain, local storage volume or
> vdsm daemon for hypervisor purposes running.

> Using dreyou's repo, I included the 3.2 alpha release, removed the
> 3.1 version cleanly from the system, installed all the 3.2 ovirt
> packages and ran engine-setup on the new installation.

> I did not run engine-cleanup, therefore the DB was untouched;
> running engine-setup, I saw that the DB initialization sequence was
> pushing new table & data updates onto the running postgres DB, as
> its size went from 10MB to 15MB.

-- 

Adrián Gibanel 
I.T. Manager 

+34 675 683 301 
www.btactic.com 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Problems when trying to delete a snapshot

2013-01-24 Thread Eduardo Warszawski


- Original Message -
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> Hi,
> I recovered from this error by importing my base-image on a new machine
> and restoring the backups.
> 
> But is it possible "by hand" to merge the latest snapshot into a
> base image to get a new VM up and running with the old disk image?
> 
Looking at your vdsm logs, the snapshot should be intact, so it can be
manually restored to the previous state. Please restore the images dirs,
removing the "old" and the "orig" dirs you have. 
You need to change the engine db accordingly too. 

Later you can retry the merge.
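For the manual merge itself: if the snapshot is a qcow2 overlay on top of the
base image, qemu-img can fold it back into its backing file. A sketch (work on
copies first, with the VM down; the paths are placeholders):

  qemu-img commit /path/to/snapshot-volume   # merges the overlay into its backing file
  qemu-img info /path/to/base-volume         # verify the chain afterwards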

Regards.


> I have tried with qemu-img but had no luck with it.
> 
> Regards //Ricky
> 
> 
> On 2012-12-30 16:57, Haim Ateya wrote:
> > Hi Ricky,
> > 
> > from going over your logs, it seems like create snapshot failed;
> > it's logged clearly in both engine and vdsm logs [1]. did you try to
> > delete this snapshot or was it a different one? if so, not sure it's
> > worth debugging.
> > 
> > bee7-78e7d1cbc201, vmId=d41b4ebe-3631-4bc1-805c-d762c636ca5a), log
> > id: 46d21393 2012-12-13 10:40:24,372 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Failed in SnapshotVDS method
> > 2012-12-13 10:40:24,372 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Error code SNAPSHOT_FAILED and error
> > message VDSGenericException: VDSErrorException: Failed to
> > SnapshotVDS, error = Snapshot failed 2012-12-13 10:40:24,372 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Command
> > org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return
> > value Class Name:
> > org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
> >
> > 
> mStatus   Class Name:
> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
> > mCode 48 mMessage
> > Snapshot failed
> > 
> > 
> > 
> > enter/6d91788c-99d9-11e1-b913-78e7d1cbc201/mastersd/master/tasks/21cbcc25-7672-4704-a414-a44f5e9944ed
> > temp
> > /rhev/data-center/6d91788c-99d9-11e1-b913-78e7d1cbc201/mastersd/maste
> >
> > 
> r/tasks/21cbcc25-7672-4704-a414-a44f5e9944ed.temp
> > 21cbcc25-7672-4704-a414-a44f5e9944ed::ERROR::2012-12-14
> > 10:48:41,189::volume::492::Storage.Volume::(create) Unexpected
> > error Traceback (most recent call last): File
> > "/usr/share/vdsm/storage/volume.py", line 475, in create
> > srcVolUUID, imgPath, volPath) File
> > "/usr/share/vdsm/storage/fileVolume.py", line 138, in _create
> > oop.getProcessPool(dom.sdUUID).createSparseFile(volPath,
> > sizeBytes) File "/usr/share/vdsm/storage/remoteFileHandler.py",
> > line 277, in callCrabRPCFunction *args, **kwargs) File
> > "/usr/share/vdsm/storage/remoteFileHandler.py", line 195, in
> > callCrabRPCFunction raise err IOError: [Errno 27] File too large
> > 
> > 2012-12-13 10:40:24,372 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> > (pool-5-thread-50) [12561529] Vds: virthost01 2012-12-13
> > 10:40:24,372 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase]
> > (pool-5-thread-50) [12561529] Command SnapshotVDS execution failed.
> > Exception: VDSErrorException: VDSGenericException:
> > VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed
> > 2012-12-13 10:40:24,373 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
> > (pool-5-thread-50) [12561529] FINISH, SnapshotVDSCommand, log id:
> > 46d21393 2012-12-13 10:40:24,373 ERROR
> > [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
> > (pool-5-thread-50) [12561529] Wasnt able to live snpashot due to
> > error: VdcBLLException: VdcBLLException:
> > org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> > VDSGenericException: VDSErrorException: Failed to SnapshotVDS,
> > error = Snapshot failed, rolling back. 2012-12-13 10:40:24,376
> > ERROR [org.ovirt.engine.core.bll.CreateSnapshotCommand]
> > (pool-5-thread-50) [4fd6c4e4] Ending command with failure:
> > org.ovirt.engine.core.bll.CreateSnapshotCommand 2012-12-13 1
> > 
> > 21cbcc25-7672-4704-a414-a44f5e9944ed::ERROR::2012-12-14
> > 10:48:41,196::task::833::TaskManager.Task::(_setError)
> > Task=`21cbcc25-7672-4704-a414-a44f5e9944ed`::Unexpected error
> > Traceback (most recent call last): File
> > "/usr/share/vdsm/storage/task.py", line 840, in _run return
> > fn(*args, **kargs) File "/usr/share/vdsm/storage/task.py", line
> > 307, in run return self.cmd(*self.argslist, **self.argsdict) File
> > "/usr/share/vdsm/storage/securable.py", line 68, in wrapper return
> > f(self, *args, **kwargs) File "/usr/share/vdsm/storage/sp.py", line
> > 1903, in createVolume srcImgUUID=srcImgUUID,
> > srcVolUUID=srcVolUUID) File "/usr/share/vdsm/storage/fileSD.py",
> > line 258, in createVolume volUUID, desc, srcImgUUID, srcVolUUID)
> > File "/usr/share/vdsm/storage/volume.py", line 494, in create
> > (volUUID, e)

Re: [Users] custom nfs mount options

2013-01-24 Thread Alex Leonhardt
same here, ovirt 3.1 from dreyou's repo ...

vdsm-python-4.10.0-0.44.14.el6.x86_64
vdsm-cli-4.10.0-0.44.14.el6.noarch
vdsm-xmlrpc-4.10.0-0.44.14.el6.noarch
vdsm-4.10.0-0.44.14.el6.x86_64

Alex



On 24 January 2013 12:33, Alexandru Vladulescu wrote:

> On 01/24/2013 02:25 PM, Itamar Heim wrote:
>
>> On 24/01/2013 04:24, Alexandru Vladulescu wrote:
>>
>>>
>>> Hi guys,
>>>
>>> I remember asking the same thing a couple of weeks ago. Itamar answered
>>> to be the same, should check the vdsm.conf file for nfs mount options.
>>> Because I did not had the time to do test this until now, I return with
>>> the test results.
>>>
>>> Well Alex it seems to be right, on the 3.1 version, if you go and edit
>>> the /etc/vdsm/vdsm.conf file, on line 146, I uncommented the
>>> nfs_mount_options parameter and changed it to :
>>>
>>> nfs_mount_options = soft,nosharecache,noatime,rsize=8192,wsize=8192
>>>
>>> Went in Ovirt interface, put the node from Hosts tab into Maintenance,
>>> so that the ISO domain and Master Domain will get unmounted
>>> automatically; restarted the vdsm service on the hypervisior server and
>>> activate the node back from GUI. Upon mount command, there is no change
>>> or difference between what I have added and what was configured
>>> automatically before.
>>>
>>
>> I remember something about you must not pass any nfs option from ovirt,
>> or it will override the vdsm.conf.
>> are you trying to set nfs options from both engine and vdsm.conf?
>>
> Basically, I had 2 questions, one was like Alex asked and it is in the
> current topic, and the other was suggestion to add these nfs configuration
> parameters changes into the GUI in Storage tab. I asked if besides the
> retrans, timeo and vers is it possible to add anything else in the future
> GUI devel.
>
> Must mention, I test on 3.1 version from dreyou's repo.
>
>
>>
>>>   type nfs
>>> (rw,soft,nosharecache,timeo=10,retrans=6,vers=4,addr=x.x.x.x,clientaddr=x.x.x.x)
>>>
>>>
>>> Might this be a bug on vdsm to be fixed ?
>>>
>>> Alex.
>>>
>>>
>>> On 01/24/2013 01:45 PM, Alex Leonhardt wrote:
>>>
 So I've tried some bits Itamar asked me to - however, still get the
 same mount options shown.

 I tried "service vdsmd reconfigure; service vdsmd restart" - the mount
 for HV03:/vmfs/data should now have the new mount options, but no luck.

 Anyone time to help ?

 Alex



 On 24 January 2013 10:54, Alex Leonhardt wrote:
 I've got now :

 nfs_mount_options = soft,nosharecache,rsize=32768,wsize=32768,noatime


 However, when I check the mounts on the host, it does not show
 these addtitional options used ? (only soft,nosharecache), here
 the mount output:

 HV02:/vmfs/data.old2 on /rhev/data-center/mnt/HV02:_vmfs_data.old2
 type nfs
 (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=x.x.x8)
 HV02:/vmfs/data on /rhev/data-center/mnt/HV02:_vmfs_data type nfs
 (rw,soft,nosharecache,timeo=600,retrans=6,vers=3,addr=x.x.x8)
 HV03:/vmfs/data on /rhev/data-center/mnt/HV03:_vmfs_data type nfs
 (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=127.0.0.1)

 Above is after I restarted HV03 so it really should have mounted
 HV03:/vmfs/data with the new options

 another question would be if "nolock" would be a good idea as it
 seems that nfs ops are sometimes being blocked by a lock ? at
 least, it behaves as if .. i havent further investigated yet.

 Alex





 On 23 January 2013 13:27, Itamar Heim wrote:
 On 22/01/2013 11:43, Haim Ateya wrote:

 you can set it manually on each hypervisor by using
 vdsm.conf.

 add the following into /etc/vdsm/vdsm.conf

 [irs]
 nfs_mount_options = soft,nosharecache

 restart vdsmd service on the end.


 you can also set them via a posixfs storage domain, but for
 nfs, an nfs storage domain is recommended over a posixfs one.
 question is what is the use case for them, and should they be
 added for nfs domains as well.



 - Original Message -

 From: "Alex Leonhardt" >>> >
 To: "oVirt Mailing List" >>> >
 Sent: Tuesday, January 22, 2013 1:46:56 AM
 Subject: [Users] custom nfs mount options





 Hi,

 Is it possible set custom nfs mount options,
 specifically : noatime,
 wsize and rsize ? I couldnt see anything when "adding a NFS domain" - only timeout & retry.

[Users] storage domain auto re-cover

2013-01-24 Thread Alex Leonhardt
hi,

is it possible to set a storage domain to auto-recover / auto-reactivate ?
e.g. after I restart a host that runs a storage domain, i want ovirt engine
to make that storage domain active after the host has come up.

thanks
alex


-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] custom nfs mount options

2013-01-24 Thread Alexandru Vladulescu

On 01/24/2013 02:25 PM, Itamar Heim wrote:

On 24/01/2013 04:24, Alexandru Vladulescu wrote:


Hi guys,

I remember asking the same thing a couple of weeks ago. Itamar's answer
was the same: check the vdsm.conf file for nfs mount options.
Because I did not have the time to test this until now, I return with
the test results.

Well Alex, it seems to be right. On the 3.1 version I went and edited
the /etc/vdsm/vdsm.conf file: on line 146 I uncommented the
nfs_mount_options parameter and changed it to:

nfs_mount_options = soft,nosharecache,noatime,rsize=8192,wsize=8192

Went into the oVirt interface, put the node from the Hosts tab into Maintenance,
so that the ISO domain and Master Domain would get unmounted
automatically; restarted the vdsm service on the hypervisor server and
activated the node back from the GUI. Upon the mount command, there is no change
or difference between what I have added and what was configured
automatically before.


I remember something about this: you must not pass any nfs option from 
ovirt, or it will override the vdsm.conf.

are you trying to set nfs options from both engine and vdsm.conf?
Basically, I had 2 questions. One was what Alex asked, and it is in the 
current topic; the other was a suggestion to add these nfs 
configuration parameter changes to the GUI in the Storage tab. I asked if, 
besides retrans, timeo and vers, it is possible to add anything else 
in future GUI development.


Must mention, I test on 3.1 version from dreyou's repo.




  type nfs
(rw,soft,nosharecache,timeo=10,retrans=6,vers=4,addr=x.x.x.x,clientaddr=x.x.x.x) 



Might this be a bug on vdsm to be fixed ?

Alex.


On 01/24/2013 01:45 PM, Alex Leonhardt wrote:

So I've tried some bits Itamar asked me to - however, still get the
same mount options shown.

I tried "service vdsmd reconfigure; service vdsmd restart" - the mount
for HV03:/vmfs/data should now have the new mount options, but no luck.

Anyone time to help ?

Alex



On 24 January 2013 10:54, Alex Leonhardt wrote:

I've got now :

nfs_mount_options = 
soft,nosharecache,rsize=32768,wsize=32768,noatime



However, when I check the mounts on the host, it does not show
these addtitional options used ? (only soft,nosharecache), here
the mount output:

HV02:/vmfs/data.old2 on /rhev/data-center/mnt/HV02:_vmfs_data.old2
type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=x.x.x8)
HV02:/vmfs/data on /rhev/data-center/mnt/HV02:_vmfs_data type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,vers=3,addr=x.x.x8)
HV03:/vmfs/data on /rhev/data-center/mnt/HV03:_vmfs_data type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=127.0.0.1)

Above is after I restarted HV03 so it really should have mounted
HV03:/vmfs/data with the new options

another question would be if "nolock" would be a good idea as it
seems that nfs ops are sometimes being blocked by a lock ? at
least, it behaves as if .. i havent further investigated yet.

Alex





On 23 January 2013 13:27, Itamar Heim wrote:

On 22/01/2013 11:43, Haim Ateya wrote:

you can set it manually on each hypervisor by using 
vdsm.conf.


add the following into /etc/vdsm/vdsm.conf

[irs]
nfs_mount_options = soft,nosharecache

restart vdsmd service on the end.


you can also set them via a posixfs storage domain, but for
nfs, an nfs storage domain is recommended over a posixfs one.
question is what is the use case for them, and should they be
added for nfs domains as well.



- Original Message -

From: "Alex Leonhardt" mailto:alex.t...@gmail.com>>
To: "oVirt Mailing List" mailto:users@ovirt.org>>
Sent: Tuesday, January 22, 2013 1:46:56 AM
Subject: [Users] custom nfs mount options





Hi,

Is it possible set custom nfs mount options,
specifically : noatime,
wsize and rsize ? I couldnt see anything when "adding
a NFS domain"
- only timeout & retry.


Thanks!
Alex





--



| RHCE | Senior Systems Engineer | www.vcore.co |
| www.vsearchcloud.com |

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





--

| RHCE | Senior Systems Engineer | www.vcore.co
 | www.vsearchcloud.com

Re: [Users] custom nfs mount options

2013-01-24 Thread Itamar Heim

On 24/01/2013 04:24, Alexandru Vladulescu wrote:


Hi guys,

I remember asking the same thing a couple of weeks ago. Itamar's answer
was the same: check the vdsm.conf file for nfs mount options.
Because I did not have the time to test this until now, I return with
the test results.

Well Alex, it seems to be right. On the 3.1 version I went and edited
the /etc/vdsm/vdsm.conf file: on line 146 I uncommented the
nfs_mount_options parameter and changed it to:

nfs_mount_options = soft,nosharecache,noatime,rsize=8192,wsize=8192

Went into the oVirt interface, put the node from the Hosts tab into Maintenance,
so that the ISO domain and Master Domain would get unmounted
automatically; restarted the vdsm service on the hypervisor server and
activated the node back from the GUI. Upon the mount command, there is no change
or difference between what I have added and what was configured
automatically before.


I remember something about this: you must not pass any nfs option from ovirt, 
or it will override the vdsm.conf.

are you trying to set nfs options from both engine and vdsm.conf?



  type nfs
(rw,soft,nosharecache,timeo=10,retrans=6,vers=4,addr=x.x.x.x,clientaddr=x.x.x.x)

Might this be a bug on vdsm to be fixed ?

Alex.


On 01/24/2013 01:45 PM, Alex Leonhardt wrote:

So I've tried some bits Itamar asked me to - however, still get the
same mount options shown.

I tried "service vdsmd reconfigure; service vdsmd restart" - the mount
for HV03:/vmfs/data should now have the new mount options, but no luck.

Anyone time to help ?

Alex



On 24 January 2013 10:54, Alex Leonhardt wrote:

I've got now :

nfs_mount_options = soft,nosharecache,rsize=32768,wsize=32768,noatime


However, when I check the mounts on the host, it does not show
these addtitional options used ? (only soft,nosharecache), here
the mount output:

HV02:/vmfs/data.old2 on /rhev/data-center/mnt/HV02:_vmfs_data.old2
type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=x.x.x8)
HV02:/vmfs/data on /rhev/data-center/mnt/HV02:_vmfs_data type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,vers=3,addr=x.x.x8)
HV03:/vmfs/data on /rhev/data-center/mnt/HV03:_vmfs_data type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=127.0.0.1)

Above is after I restarted HV03 so it really should have mounted
HV03:/vmfs/data with the new options

another question would be if "nolock" would be a good idea as it
seems that nfs ops are sometimes being blocked by a lock ? at
least, it behaves as if .. i havent further investigated yet.

Alex





On 23 January 2013 13:27, Itamar Heim wrote:

On 22/01/2013 11:43, Haim Ateya wrote:

you can set it manually on each hypervisor by using vdsm.conf.

add the following into /etc/vdsm/vdsm.conf

[irs]
nfs_mount_options = soft,nosharecache

restart vdsmd service on the end.


you can also set them via a posixfs storage domain, but for
nfs, an nfs storage domain is recommended over a posixfs one.
question is what is the use case for them, and should they be
added for nfs domains as well.



- Original Message -

From: "Alex Leonhardt" mailto:alex.t...@gmail.com>>
To: "oVirt Mailing List" mailto:users@ovirt.org>>
Sent: Tuesday, January 22, 2013 1:46:56 AM
Subject: [Users] custom nfs mount options





Hi,

Is it possible set custom nfs mount options,
specifically : noatime,
wsize and rsize ? I couldnt see anything when "adding
a NFS domain"
- only timeout & retry.


Thanks!
Alex





--



| RHCE | Senior Systems Engineer | www.vcore.co |
| www.vsearchcloud.com |

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





--

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |




--

| RHCE | Senior Systems Engineer | www.vcore.co 
| www.vsearchcloud.com |


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Alexandru Vladulescu
System Engineer
-

Re: [Users] custom nfs mount options

2013-01-24 Thread Alexandru Vladulescu


Hi guys,

I remember asking the same thing a couple of weeks ago. Itamar's answer 
was the same: check the vdsm.conf file for nfs mount options. 
Because I did not have the time to test this until now, I return with 
the test results.


Well Alex, it seems to be right. On the 3.1 version I went and edited 
the /etc/vdsm/vdsm.conf file: on line 146 I uncommented the 
nfs_mount_options parameter and changed it to:


nfs_mount_options = soft,nosharecache,noatime,rsize=8192,wsize=8192

Went into the oVirt interface, put the node from the Hosts tab into Maintenance, 
so that the ISO domain and Master Domain would get unmounted 
automatically; restarted the vdsm service on the hypervisor server and 
activated the node back from the GUI. Upon the mount command, there is no change 
or difference between what I have added and what was configured 
automatically before.


 type nfs 
(rw,soft,nosharecache,timeo=10,retrans=6,vers=4,addr=x.x.x.x,clientaddr=x.x.x.x)


Might this be a bug on vdsm to be fixed ?

Alex.


On 01/24/2013 01:45 PM, Alex Leonhardt wrote:
So I've tried some bits Itamar asked me to - however, still get the 
same mount options shown.


I tried "service vdsmd reconfigure; service vdsmd restart" - the mount 
for HV03:/vmfs/data should now have the new mount options, but no luck.


Anyone time to help ?

Alex



On 24 January 2013 10:54, Alex Leonhardt wrote:


I've got now :

nfs_mount_options = soft,nosharecache,rsize=32768,wsize=32768,noatime


However, when I check the mounts on the host, it does not show
these addtitional options used ? (only soft,nosharecache), here
the mount output:

HV02:/vmfs/data.old2 on /rhev/data-center/mnt/HV02:_vmfs_data.old2
type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=x.x.x8)
HV02:/vmfs/data on /rhev/data-center/mnt/HV02:_vmfs_data type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,vers=3,addr=x.x.x8)
HV03:/vmfs/data on /rhev/data-center/mnt/HV03:_vmfs_data type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=127.0.0.1)

Above is after I restarted HV03 so it really should have mounted
HV03:/vmfs/data with the new options

another question would be if "nolock" would be a good idea as it
seems that nfs ops are sometimes being blocked by a lock ? at
least, it behaves as if .. i havent further investigated yet.

Alex





On 23 January 2013 13:27, Itamar Heim wrote:

On 22/01/2013 11:43, Haim Ateya wrote:

you can set it manually on each hypervisor by using vdsm.conf.

add the following into /etc/vdsm/vdsm.conf

[irs]
nfs_mount_options = soft,nosharecache

restart vdsmd service on the end.


you can also set them via a posixfs storage domain, but for
nfs, an nfs storage domain is recommended over a posixfs one.
question is what is the use case for them, and should they be
added for nfs domains as well.



- Original Message -

From: "Alex Leonhardt" mailto:alex.t...@gmail.com>>
To: "oVirt Mailing List" mailto:users@ovirt.org>>
Sent: Tuesday, January 22, 2013 1:46:56 AM
Subject: [Users] custom nfs mount options





Hi,

Is it possible set custom nfs mount options,
specifically : noatime,
wsize and rsize ? I couldnt see anything when "adding
a NFS domain"
- only timeout & retry.


Thanks!
Alex





--



| RHCE | Senior Systems Engineer | www.vcore.co |
| www.vsearchcloud.com |

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





-- 


| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |




--

| RHCE | Senior Systems Engineer | www.vcore.co 
| www.vsearchcloud.com |



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Alexandru Vladulescu
System Engineer
-
Bright Future Project Romania
Skype :   avladulescu
Mobile :  +4(0)726.373.098
-

___

Re: [Users] Installing a lab setup from scratch using F18

2013-01-24 Thread noc

Sorry for the late reply, see comments inline.


2013-1-22 2:11, Joop:
As promised on IRC (jvandewege) I'll post my findings of setting up 
an ovirt lab environment from scratch using F18.

First some background:
- 2 hosts for testing storage cluster with replicated gluster data 
and iso domains (HP ML110 G5)

gluster data or cluster data?

Gluster



- 2 hosts for VMs (HP DL360 G5?)
- 1 managment server (HP ML110 G5)
All physical servers have atleast 1Gb connection and they also have 2 
10Gb ethernet ports connected to two Arista switches.
Complete setup (except for the managment srv) is redundant. Using 
F18-x64 DVD and using minimal server with extra tools, after install 
the ovirt.repo and the beta gluster repo is activated.

This serves as a proof of concept for a bigger setup.

Problems so far:
- looks like F18 uses a different path to access video since using 
the defaults leads to garbled video, need to use nomodeset as a 
kernel option
Do you mean the VDSM host installed with F18 or the guest in the host 
with FC 18 here? I suppose you don't have much chance to access the 
graphics screen of the VDSM host.

Talking about the hosts.


- upgrading the minimal install (yum upgrade) gives me 
kernel-3.7.2-204 and the boot process halts with soft locks on 
different cpus; reverting to 3.6.10-4.fc18.x86_64 fixes that. 
Management is using the 3.7.2 kernel without problems BUT it doesn't use 
libvirt/qemu-kvm/vdsm; my guess is it's related.
- need to disable NetworkManager and enable network (and ifcfg-xxx) 
to get network going
- adding the storage hosts from the webui works, but after reboot vdsm 
is not starting; the reason seems to be that the network isn't initialised 
until after all interfaces are done with their dhcp requests. There 
are 4 interfaces which use dhcp; setting those to BOOTPROTO=none 
seems to help.
Usually, I use a static IP for each server, because the engine adds 
the VDSM host by IP.
I set a static address too but still NetworkManager runs and tries to do 
its best.
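For reference, the combination that usually keeps NetworkManager out of the way
on F18 is a static ifcfg file plus disabling the service outright; a sketch
(device name and addresses are placeholders):

  # /etc/sysconfig/network-scripts/ifcfg-em1
  DEVICE=em1
  BOOTPROTO=none
  ONBOOT=yes
  NM_CONTROLLED=no
  IPADDR=192.168.1.10
  NETMASK=255.255.255.0

  systemctl disable NetworkManager.service
  systemctl enable network.service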
The DL360 machines run 3.7.2 without problems BUT they don't run the 
latest gluster-3.4; that might be the difference, or else they are immune 
to this specific problem between 3.7.2 and the ML110 hardware.




- during deploy there is a warning about 'cannot set tuned profile'; 
seems harmless, but I hadn't seen that one until now.
- the deployment script discovers during deployment that the ID of 
the second storage server is identical to the first one and aborts the 
deployment (blame HP!). Shouldn't it generate a unique one using 
uuidgen??



Host deployment went OK but for one thing: the storage domain 
wouldn't attach. Putting the host in maintenance mode and activating did 
not help. It turns out there are two problems, of which one is already 
solved:

- ntpd/chronyc SOLVED thanks to Gianluca
- host needs to have SELinux set to Permissive, and then attaching the storage 
domain works.
- on the management host I still need NFSv3 settings in /etc/nfsmount.conf 
or else iso-upload doesn't work (examples for these last two below)
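For the last two items, the settings involved would look roughly like this
(assuming the stock F18 files):

  setenforce 0    # permissive immediately, until reboot
  # persistent: set SELINUX=permissive in /etc/selinux/config

  # /etc/nfsmount.conf -- force NFSv3 so the iso-uploader mounts work
  [ NFSMount_Global_Options ]
  Defaultvers=3
  Nfsvers=3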




Things that are OK so far:
- ovirt-engine setup (no problems with postgresql)
- creating/activating gluster volumes (no more deadlocks)

Adding virt hosts has to wait till tomorrow; got problems getting the 
dvd iso onto a USB stick, will probably burn a DVD to keep going.
Migration works OK, powermanagment through ipmilan/ilo2 works, 
export/import VMs, v2v test from esxi to ovirt, all OK ;-)



Setting up a storage network still needs to be tested

Joop
--
irc:jvandewege
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] custom nfs mount options

2013-01-24 Thread Alex Leonhardt
So I've tried some bits Itamar asked me to - however, still get the same
mount options shown.

I tried "service vdsmd reconfigure; service vdsmd restart" - the mount for
HV03:/vmfs/data should now have the new mount options, but no luck.

Anyone have time to help?

Alex



On 24 January 2013 10:54, Alex Leonhardt  wrote:

> I've got now :
>
> nfs_mount_options = soft,nosharecache,rsize=32768,wsize=32768,noatime
>
>
> However, when I check the mounts on the host, it does not show these
> addtitional options used ? (only soft,nosharecache), here the mount output:
>
> HV02:/vmfs/data.old2 on /rhev/data-center/mnt/HV02:_vmfs_data.old2 type
> nfs (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=x.x.x8)
> HV02:/vmfs/data on /rhev/data-center/mnt/HV02:_vmfs_data type nfs
> (rw,soft,nosharecache,timeo=600,retrans=6,vers=3,addr=x.x.x8)
> HV03:/vmfs/data on /rhev/data-center/mnt/HV03:_vmfs_data type nfs
> (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=127.0.0.1)
>
> Above is after I restarted HV03 so it really should have mounted
> HV03:/vmfs/data with the new options
>
> another question would be if "nolock" would be a good idea as it seems
> that nfs ops are sometimes being blocked by a lock ? at least, it behaves
> as if .. i havent further investigated yet.
>
> Alex
>
>
>
>
>
> On 23 January 2013 13:27, Itamar Heim  wrote:
>
>> On 22/01/2013 11:43, Haim Ateya wrote:
>>
>>> you can set it manually on each hypervisor by using vdsm.conf.
>>>
>>> add the following into /etc/vdsm/vdsm.conf
>>>
>>> [irs]
>>> nfs_mount_options = soft,nosharecache
>>>
>>> restart vdsmd service on the end.
>>>
>>
>> you can also set them via a posixfs storage domain, but for nfs, an nfs
>> storage domain is recommended over a posixfs one.
>> question is what is the use case for them, and should they be added for
>> nfs domains as well.
>>
>>
>>
>>> - Original Message -
>>>
 From: "Alex Leonhardt" 
 To: "oVirt Mailing List" 
 Sent: Tuesday, January 22, 2013 1:46:56 AM
 Subject: [Users] custom nfs mount options





 Hi,

 Is it possible set custom nfs mount options, specifically : noatime,
 wsize and rsize ? I couldnt see anything when "adding a NFS domain"
 - only timeout & retry.


 Thanks!
 Alex





 --



 | RHCE | Senior Systems Engineer | www.vcore.co |
 | www.vsearchcloud.com |

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

  ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
>
> --
>
> | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
>



-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] custom nfs mount options

2013-01-24 Thread Itamar Heim

On 24/01/2013 02:54, Alex Leonhardt wrote:

I've got now :

nfs_mount_options = soft,nosharecache,rsize=32768,wsize=32768,noatime


However, when I check the mounts on the host, it does not show these
addtitional options used ? (only soft,nosharecache), here the mount output:

HV02:/vmfs/data.old2 on /rhev/data-center/mnt/HV02:_vmfs_data.old2 type
nfs (rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=x.x.x8)
HV02:/vmfs/data on /rhev/data-center/mnt/HV02:_vmfs_data type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,vers=3,addr=x.x.x8)
HV03:/vmfs/data on /rhev/data-center/mnt/HV03:_vmfs_data type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=127.0.0.1)

Above is after I restarted HV03 so it really should have mounted
HV03:/vmfs/data with the new options

another question would be if "nolock" would be a good idea as it seems
that nfs ops are sometimes being blocked by a lock ? at least, it
behaves as if .. i havent further investigated yet.

Alex


service vdsm reload/reconfigure or something?
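Either way, the authoritative place to see which options a mount actually got
is /proc/mounts on the host; a quick sketch (Python 2) to compare against
vdsm.conf:

  # print the options the kernel actually applied to each rhev mount
  for line in open('/proc/mounts'):
      fields = line.split()
      if fields[1].startswith('/rhev/data-center'):
          print fields[1], fields[3]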







On 23 January 2013 13:27, Itamar Heim mailto:ih...@redhat.com>> wrote:

On 22/01/2013 11:43, Haim Ateya wrote:

you can set it manually on each hypervisor by using vdsm.conf.

add the following into /etc/vdsm/vdsm.conf

[irs]
nfs_mount_options = soft,nosharecache

restart vdsmd service on the end.


you can also set them via a posixfs storage domain, but for nfs, an
nfs storage domain is recommended over a posixfs one.
question is what is the use case for them, and should they be added
for nfs domains as well.



- Original Message -

From: "Alex Leonhardt" mailto:alex.t...@gmail.com>>
To: "oVirt Mailing List" mailto:users@ovirt.org>>
Sent: Tuesday, January 22, 2013 1:46:56 AM
Subject: [Users] custom nfs mount options





Hi,

Is it possible set custom nfs mount options, specifically :
noatime,
wsize and rsize ? I couldnt see anything when "adding a NFS
domain"
- only timeout & retry.


Thanks!
Alex





--



| RHCE | Senior Systems Engineer | www.vcore.co |
| www.vsearchcloud.com |

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users






--

| RHCE | Senior Systems Engineer | www.vcore.co |
www.vsearchcloud.com |


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] custom nfs mount options

2013-01-24 Thread Alex Leonhardt
I've got now :

nfs_mount_options = soft,nosharecache,rsize=32768,wsize=32768,noatime


However, when I check the mounts on the host, it does not show these
addtitional options used ? (only soft,nosharecache), here the mount output:

HV02:/vmfs/data.old2 on /rhev/data-center/mnt/HV02:_vmfs_data.old2 type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=x.x.x8)
HV02:/vmfs/data on /rhev/data-center/mnt/HV02:_vmfs_data type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,vers=3,addr=x.x.x8)
HV03:/vmfs/data on /rhev/data-center/mnt/HV03:_vmfs_data type nfs
(rw,soft,nosharecache,timeo=600,retrans=6,nfsvers=3,addr=127.0.0.1)

Above is after I restarted HV03 so it really should have mounted
HV03:/vmfs/data with the new options

another question would be if "nolock" would be a good idea as it seems that
nfs ops are sometimes being blocked by a lock ? at least, it behaves as if
.. i havent further investigated yet.

Alex





On 23 January 2013 13:27, Itamar Heim  wrote:

> On 22/01/2013 11:43, Haim Ateya wrote:
>
>> you can set it manually on each hypervisor by using vdsm.conf.
>>
>> add the following into /etc/vdsm/vdsm.conf
>>
>> [irs]
>> nfs_mount_options = soft,nosharecache
>>
>> restart vdsmd service on the end.
>>
>
> you can also set them via a posixfs storage domain, but for nfs, an nfs
> storage domain is recommended over a posixfs one.
> question is what is the use case for them, and should they be added for
> nfs domains as well.
>
>
>
>> - Original Message -
>>
>>> From: "Alex Leonhardt" 
>>> To: "oVirt Mailing List" 
>>> Sent: Tuesday, January 22, 2013 1:46:56 AM
>>> Subject: [Users] custom nfs mount options
>>>
>>>
>>>
>>>
>>>
>>> Hi,
>>>
>>> Is it possible set custom nfs mount options, specifically : noatime,
>>> wsize and rsize ? I couldnt see anything when "adding a NFS domain"
>>> - only timeout & retry.
>>>
>>>
>>> Thanks!
>>> Alex
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>>
>>>
>>> | RHCE | Senior Systems Engineer | www.vcore.co |
>>> | www.vsearchcloud.com |
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>  ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>


-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] 3.1 to 3.2 migration

2013-01-24 Thread Alexandru Vladulescu


Dear Adrian,


By migration I mean "migration of the product from 3.1 to 3.2 -- 
or upgrade of the oVirt platform". The test upgrade was done, in my 
case, on a machine that only acts as a node controller in the system, 
therefore there is no ISO domain, local storage volume or vdsm daemon 
for hypervisor purposes running.


Using dreyou's repo, I included the 3.2 alpha release, removed the 3.1 
version cleanly from the system, installed all the 3.2 oVirt packages 
and ran engine-setup on the new installation.


I did not run engine-cleanup, therefore the DB was untouched; running 
engine-setup, I saw that the DB initialization sequence was pushing new 
table & data updates onto the running postgres DB, as its size 
went from 10MB to 15MB.


After completing the setup, I was only able to log in to the http 
section configured on port 80; going to the admin portal on 443 proved 
to be impossible, as "page could not be displayed" was shown.


In 3.1 I am using port 8080 for http and 8443 for https, but in the 3.2 
version, trying to configure engine-setup to run on the same or other 
ports different from 80 and 443 left the jboss server not 
listening on https connections when I tried to log in.


As a remark, I saw a different integration mechanism between the 3.1 and 3.2 
versions in how jboss interacts with the ovirt-engine process. In 3.1 
jboss was running (if configured) as a -server daemon process (in 
netstat), and if not configured, as java; in 3.2 the jboss process, I 
guess, is started within the engine-service process (seen in 
netstat -tupan).


I might add that I don't strictly want to run jboss with https by 
adding ovirt-engine.conf (where the ProxyPass is added) to apache's 
conf.d files.


Reading the list, I could not find any threads open so far for the 3.1 
to 3.2 migration (platform upgrade/update), therefore I have opened this 
in the hope that someone else has tested it so far. I'm guessing 3.2 as 
we know it is not a stable release yet, so some bugs or incompatibility 
issues might arise during the upgrade/version-swap process.



Any thoughts on this ?


My regards to you all,
Alex.

On 01/24/2013 02:26 AM, Adrian Gibanel wrote:



*De: *"Alexandru Vladulescu" 
*Para: *"users" 
*Enviados: *Martes, 22 de Enero 2013 20:47:55
*Asunto: *[Users] 3.1 to 3.2 migration

Hi everybody,

This might seem to be a stupid question, but I might just give it a
shot and ask if anybody has tried so far to migrate a 3.1 stable to
a 3.2 alpha release. On my side I have had no luck.

What do you mean by migrating? Updating packages on the same machine?
Migrating from one machine to another one?
You mean you want to update/migrate ovirt-engine, isn't it ?

Might have found a bug as well, but that is what you need to
confirm for me. I had the jboss setup running on port 8080 for http
and 8443 for https.
After the upgrade, everything I try besides ports 80 and 443 doesn't
work. If I try to reconfigure the previously used ports, I find java
listening on port 8080 for http, but when I try to log in and
switch to https on the admin portal there is nothing listening out
there and I get "Page cannot be displayed".

How do you try to reconfigure it? What commands do you run? Or what 
files do you edit?


If we cannot consider migration, would it be sufficient to insert the
dump from the 3.1 into 3.2 current alpha release ?

As I said above we don't know what you mean by migration.

--
*Adrián Gibanel*
I.T. Manager

+34 675 683 301
www.btactic.com 









___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] latest vdsm cannot read ib device speeds causing storage attach fail

2013-01-24 Thread Royce Lv

On 01/24/2013 05:21 PM, Dan Kenigsberg wrote:


Hi,
Will you provide the log, or let me access the test env if 
possible (cause we don't have IB in our lab)? I'll look at it immediately.

Sorry for the inconvenience if I have introduced the regression.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] latest vdsm cannot read ib device speeds causing storage attach fail

2013-01-24 Thread Dan Kenigsberg
On Wed, Jan 23, 2013 at 04:44:29PM -0600, Dead Horse wrote:
> I narrowed down on the commit where the originally reported issue crept in:
> commit fc3a44f71d2ef202cff18d7203b9e4165b546621; building and testing with
> this commit or subsequent commits yields the original issue.

Could you provide more info on the failure to attach storage?
Does it happen only with NFS? Only with particular server or options?
It may have been a regression due to

commit fc3a44f71d2ef202cff18d7203b9e4165b546621
Author: Royce Lv 

integrate zombie reaper in supervdsmServer

not that I understand how. Let's ask the relevant parties (Royce and
Yaniv).
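For context: integrating a zombie reaper means a SIGCHLD handler gets
installed, and once that handler exists, blocking calls elsewhere in the
process can start failing with EINTR instead of completing; code that does not
retry then surfaces spurious errors. The usual defence is a small retry
wrapper, sketched here in Python 2 (illustrative only, not vdsm's actual code):

  import errno

  def retry_on_eintr(func, *args, **kwargs):
      """Call func, retrying while it is interrupted by a signal (EINTR)."""
      while True:
          try:
              return func(*args, **kwargs)
          except (IOError, OSError), e:
              if e.errno != errno.EINTR:
                  raise

  # e.g. request = retry_on_eintr(conn.recv)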
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users