Re: [ovirt-users] help, manager hard drive died (3.3.4)

2014-05-23 Thread David Smith
Hey Bob,

Get this... So this process worked great for one of the NFS servers;
however, we had two NFS servers. One was the primary data center (master) and
the other non-master.
When I try to import the non-master data directory with the metadata file
edit method, it imports but then shows no VMs/templates/etc. available for
import.
On the other hand, when I activate the master data center NFS filesystem, I
can see VMs that were stored on the non-master NFS. If I try to import
them, it throws an error because the files aren't there, and when I look at the
log on the HV manager I see the folder that it's looking for, which is on the
non-master NFS.

Right now I'm trying to copy the folders over from the non-master NFS to
the imported master data domain (now export domain) to see if I can import those
VMs.
I wonder if there's a better way to import VMs from a non-master domain.


On Thu, May 22, 2014 at 12:17 PM, Bob Doolittle b...@doolittle.us.com wrote:

  On 05/22/2014 03:08 PM, David Smith wrote:

  I meant ovirt-engine (I was calling it the manager).
 Also I'll need to reinstall version 3.3.4 -- what's the best path for that
 w/restore?


 One thing, if all else fails, is to convert your Export Domain to a Data
 Domain with a few edits to the metadata.
 Then install a fresh engine and import your old VMs.

 Personally, I'd install the same version as you had previously, then make
 sure you can attach your new Export domain (old Data domain), then upgrade.

 Some guides for conversion are in this thread:
 http://lists.ovirt.org/pipermail/users/2012-October/010258.html
 http://lists.ovirt.org/pipermail/users/2012-November/010273.html

 Note that's from a while ago. I am not sure it still works exactly the
 same. Make sure to back up your metadata file first.
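 A rough sketch of the kind of edit those threads describe, for the file
 <domain_uuid>/dom_md/metadata on the NFS export (the key names here are from
 memory and are assumptions -- verify them against your own metadata file and
 the linked threads before touching anything):

   cp metadata metadata.bak
   # typical changes discussed in those threads:
   #   CLASS=Backup   ->  CLASS=Data     (export domain -> data domain)
   #   POOL_UUID=...  ->  POOL_UUID=     (detach from the old data center)
   #   and adjust/remove the checksum line (_SHA_CKSUM) if your version has one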

 I did something a bit similar earlier today, by editing the metadata on an
 Export domain to remove information about the previously-attached Data
 Center.

 -Bob



  On Thu, May 22, 2014 at 11:58 AM, David Smith dsm...@mypchelp.com wrote:

 So the ovirt manager hard disk died, 3.3.4 version, i can't get it to
 spin up to get any data off.
 I have an old copy of the manager database which was living on the
 manager machine.

  What is the best procedure for restoring my ovirt setup? I've not been
 able to import the data store, only export and iso in the past.

  Thanks in advance,
 David




 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot find guest domain /var/tmp/console-9.rdp

2014-05-23 Thread Sven Kieske
For the record:

you can change the default remote
connection used for all VMs via engine-config
since oVirt 3.4, I believe.

the exact value should be documented on the wiki.
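For example, something along these lines (the exact key name here is an
assumption -- check the wiki or the engine-config key list first):

  engine-config -l | grep -i clientmode         # find the relevant key(s)
  engine-config -s ClientModeVncDefault=NoVnc   # hypothetical key/value
  service ovirt-engine restart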

HTH

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Reports Input Parameters

2014-05-23 Thread Daniel Helgenberger
I seem to have exactly the same issue.

HostedEngine 3.4.1; with engine-dwh and engine-reports installed.
I only ever managed to view reports from the same day.

Cheers,

On Di, 2014-05-20 at 12:21 +0300, Mohyedeen Nazzal wrote:
 Thanks Yaniv,
 
 In the following Ovirt-Reports-Log and Screenshots archive
 (http://www.mediafire.com/download/tnslajg9xel8k03/Ovirt-Reports.rar)
 I have added the log file and two screenshots:
 1- when trying to change the period range.
 2- when trying to change the date.
 
 Both, and any other input parameter, show the same behavior:
 * values cannot be changed; instead, at the far left of the screen [in the
 little blue bar], part of the values appears there --
   the first character of every value [like D for Daily, M for Monthly, and
 Y for Yearly].
 
 The same thing happens for all other input parameters.
 
 Thanks again.
 
 
 On Sun, May 18, 2014 at 5:46 PM, Yaniv Dary yd...@redhat.com wrote:
 
  Please attach the jasperserver.log and screenshots.
 
 
 
 
  Yaniv
 
  - Original Message -
   From: MohyedeenN mohyedeen.naz...@gmail.com
   To: users@ovirt.org
   Sent: Thursday, May 8, 2014 11:47:35 AM
   Subject: [ovirt-users] Ovirt Reports Input Parameters
  
    Greetings,
   
    Lately I have installed Jasper Reports for oVirt, following the
    documentation described:
   
    yum install ovirt-engine-reports
    engine-setup
   
    I'm able to generate reports but I cannot modify any input parameter
    value; as an example, I can't select which data center or storage or VM.
   
    Below are the versions I am using:
  
   Engine:
   ovirt-engine-3.4.0-1.el6.noarch
  
   Engine Reports:
   ovirt-engine-reports-setup-3.4.0-2.el6.noarch
  
   DWH:
   ovirt-engine-dwh-3.4.0-2.el6.noarch
  
   Any Help Please...
  
  
  
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

-- 

Daniel Helgenberger 
m box bewegtbild GmbH 

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19 
D-10115 BERLIN 


www.m-box.de  www.monkeymen.tv 

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 3.2.2 -- 3.2.3 Database rename failed (Solved)

2014-05-23 Thread Neil
Hi guys,

I've managed to resolve this problem. Firstly, after doing my fresh
re-install of my original Dreyou 3.2.2 ovirt-engine rollback, I realized I
hadn't run engine-cleanup, so this time I did; and then, when I restored my
DB, I used restore.sh -u postgres -f /root/ovirt.sql instead of
doing a manual db restore. Between the two of them that got rid of the
issue. I'm assuming it was the engine-cleanup that sorted out the db
renaming problem, though.
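For anyone hitting the same thing, the sequence that worked was roughly the
following (a sketch -- the restore.sh path is an assumption for a 3.2-era
layout, adjust to wherever your dbscripts live):

  engine-cleanup
  /usr/share/ovirt-engine/dbscripts/restore.sh -u postgres -f /root/ovirt.sql
  engine-upgrade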

Once that was done I then managed to upgrade to 3.3 and I'll now do
the 3.4 upgrade.

Thanks very much for those who assisted.

Regards.

Neil Wilson.

On Thu, May 22, 2014 at 6:12 AM, Neil nwilson...@gmail.com wrote:
 Hi guys,  sorry to repost but getting a bit desperate. Is anyone able to
 assist?

 Thanks.

 Regards.

 Neil Wilson

 On 21 May 2014 12:06 PM, Neil nwilson...@gmail.com wrote:

 Hi guys,

 Just a little more info on the problem. I've upgraded another oVirt
 system before from Dreyou and it worked perfectly; however, on this
 particular system we had to restore from backups (DB, PKI and
 /etc/ovirt-engine) as the physical machine that was hosting the
 engine died, so perhaps this is the reason we are encountering this problem
 this time around...

 Any help is greatly appreciated.

 Thank you.

 Regards.

 Neil Wilson.



 On Wed, May 21, 2014 at 11:46 AM, Sven Kieske s.kie...@mittwald.de
 wrote:
  Hi,
 
  I don't know the exact resolution for this, but I'll add some people
  who managed to make it work, following this tutorial:
  http://wiki.dreyou.org/dokuwiki/doku.php?id=ovirt_rpm_start33
 
  See this thread on the users ML:
 
  http://lists.ovirt.org/pipermail/users/2013-December/018341.html
 
  HTH
 
 
  Am 20.05.2014 17:00, schrieb Neil:
  Hi guys,
 
  I'm trying to upgrade from Dreyou to the official repo, I've installed
  the official 3.2 repo (I'll do the 3.3 update once this works). I've
  updated to ovirt-engine-setup.noarch 0:3.2.3-1.el6 and when I run
  engine-upgrade it bombs out when trying to rename my database with the
  following error...
 
  [root@engine01 /]#  cat
  /var/log/ovirt-engine/ovirt-engine-upgrade_2014_05_20_16_34_21.log
  2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
  pgpass file /etc/ovirt-engine/.pgpass, fetching DB host value
  2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
  pgpass file /etc/ovirt-engine/.pgpass, fetching DB port value
  2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
  pgpass file /etc/ovirt-engine/.pgpass, fetching DB user value
  2014-05-20 16:34:21::DEBUG::common_utils::332::root:: YUM: VERB:
  Loaded plugins: refresh-packagekit, versionlock
  2014-05-20 16:34:21::INFO::engine-upgrade::969::root:: Info:
  /etc/ovirt-engine/.pgpass file found. Continue.
  2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
  pgpass file /etc/ovirt-engine/.pgpass, fetching DB admin value
  2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
  pgpass file /etc/ovirt-engine/.pgpass, fetching DB host value
  2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
  pgpass file /etc/ovirt-engine/.pgpass, fetching DB port value
  2014-05-20 16:34:21::DEBUG::common_utils::481::root:: running sql
  query 'SELECT pg_database_size('engine')' on db server: 'localhost'.
  2014-05-20 16:34:21::DEBUG::common_utils::434::root:: Executing
  command -- '/usr/bin/psql -h localhost -p 5432 -U postgres -d
  postgres -c SELECT pg_database_size('engine')'
  2014-05-20 16:34:21::DEBUG::common_utils::472::root:: output =
  pg_database_size
  --
   11976708
  (1 row)
 
 
  2014-05-20 16:34:21::DEBUG::common_utils::473::root:: stderr =
  2014-05-20 16:34:21::DEBUG::common_utils::474::root:: retcode = 0
  2014-05-20 16:34:21::DEBUG::common_utils::1567::root:: Found mount
  point of '/var/cache/yum' at '/'
  2014-05-20 16:34:21::DEBUG::common_utils::663::root:: Checking
  available space on /var/cache/yum
  2014-05-20 16:34:21::DEBUG::common_utils::668::root:: Available space
  on /var/cache/yum is 172329
  2014-05-20 16:34:21::DEBUG::common_utils::1567::root:: Found mount
  point of '/var/lib/ovirt-engine/backups' at '/'
  2014-05-20 16:34:21::DEBUG::common_utils::663::root:: Checking
  available space on /var/lib/ovirt-engine/backups
  2014-05-20 16:34:21::DEBUG::common_utils::668::root:: Available space
  on /var/lib/ovirt-engine/backups is 172329
  2014-05-20 16:34:21::DEBUG::common_utils::1567::root:: Found mount
  point of '/usr/share' at '/'
  2014-05-20 16:34:21::DEBUG::common_utils::663::root:: Checking
  available space on /usr/share
  2014-05-20 16:34:21::DEBUG::common_utils::668::root:: Available space
  on /usr/share is 172329
  2014-05-20 16:34:21::DEBUG::common_utils::1590::root:: Mount points
  are: {'/': {'required': 1511, 'free': 172329}}
  2014-05-20 16:34:21::DEBUG::common_utils::1599::root:: Comparing free
  space 172329 MB with required 1511 MB
  2014-05-20 

[ovirt-users] Persisting glusterfs configs on an oVirt node

2014-05-23 Thread Simon Barrett
I am working through the setup of oVirt node for a 3.4.1 deployment.

I set up some glusterfs volumes/bricks on oVirt Node Hypervisor release 3.0.4 
(1.0.201401291204.el6) and created a storage domain. All was working OK until I 
rebooted the node and found that the glusterfs configuration had not been 
retained.

Is there something I should be doing to persist any glusterfs configuration so 
it survives a node reboot?
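For example, I am wondering whether persisting the gluster paths with the
node's persist tool is the right approach (a sketch -- whether persist
accepts these directories on this Node build is an assumption):

  persist /etc/glusterfs /var/lib/glusterd
  persist /var/log/glusterfs    # optional, if logs should survive reboots too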

Many thanks,

Simon

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] selinux on oVirt Node

2014-05-23 Thread Simon Barrett
I set SELINUX=disabled in /etc/selinux/config and ran a persist 
/etc/selinux/config.

After the node reboots, the file has the correct SELINUX=disabled line but I 
see that selinux is still enabled:

# grep ^SELINUX= /etc/selinux/config
SELINUX=disabled
# getenforce
Enforcing
# cat /selinux/enforce
1

It's like the bind mounts for the files in config happen after selinux is set up.

Is there something else I should be doing to make a change to selinux survive a 
node reboot?

Many thanks,

Simon

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] selinux on oVirt Node

2014-05-23 Thread Sven Kieske
AFAIK you need to disable SELinux by passing
the relevant parameter directly via the kernel boot options.

Search the ML or the net if you need the exact command line.
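For example, appending one of these to the kernel line of the PXE/grub entry
(a sketch; which one is appropriate for Node should be verified):

  ... enforcing=0   # boot in permissive mode
  ... selinux=0     # disable SELinux entirely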

HTH

Am 23.05.2014 10:36, schrieb Simon Barrett:
 I set SELINUX=disabled in /etc/selinux/config and ran a persist 
 /etc/selinux/config.
 
 After the node reboots, the file has the correct SELINUX=disabled line but 
 I see that selinux is still enabled:
 
 # grep ^SELINUX= /etc/selinux/config
 SELINUX=disabled
 # getenforce
 Enforcing
 # cat /selinux/enforce
 1
 
 It's like the bind mounts for the files in config happen after selinux is 
 setup.
 
 Is there something else I should be doing to make a change to selinux survive 
 a node reboot?
 
 Many thanks,
 
 Simon

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt China User Group?

2014-05-23 Thread Brian Proffitt
The rental issue is a possibility. As you say, we would need to make sure that 
look and feel was mirrored. Is this something, perhaps, that IBM could assist 
us with setting up?

A question I have is, when you start adding new content, would we be able to 
have some of it translated into English to use on ovirt.org?

Thank you so much for the interest! Please reach out to me to coordinate 
community efforts and let me know what you need in terms of content/assistance 
for local meetings/events.

Peace,
Brian Proffitt
oVirt Community Manager

- Original Message -
 From: Zhou Zheng Sheng zhshz...@linux.vnet.ibm.com
 To: 适兕 lijiangshe...@gmail.com, users@ovirt.org, Mark Wu 
 wu...@linux.vnet.ibm.com, Dan Kenigsberg
 dan...@redhat.com
 Sent: Thursday, May 22, 2014 10:14:20 PM
 Subject: Re: [ovirt-users] oVirt China User Group?
 
 on 2014/05/22 17:44, 适兕 wrote:
  Hello,
  
  I'm writing to the list in reference to helping setting up a user group in
  China.
  
  As the oVirt community develops rapidly, there is a growing number of users
  from China who have deployed oVirt, even in their production environment.
  However, there is hardly any voice/feedback from these users.
  
  For better spreading of oVirt in China, and to help more Chinese users get
  in touch with the whole oVirt community, I am going to help set up an
  oVirt user group in China.
  
  Things that the user group might help:
0. translating articles, discussions, case studies on the oVirt website
1. organizing online(irc, mailing list) and offline discussions about
  oVirt
2. volunteering in test week
3. bug fixing
4. developing features related to localization
  
  By the way, when translating articles, should we set up a new individual
  website or just do the l10n work on http://www.ovirt.org? If the former,
  should we keep the look and feel the same as the oVirt website?
 
 Great! I have some suggestions.
 
 1. If it is possible, we'd better rent a (virtual/cloud) server inside
 China. Sometimes it's very slow to access ovirt.org. Maybe we can also
 have a sub-domain name such as cnuser.ovirt.org, and make the look and feel
 of cnuser the same as the oVirt main site.
 
 2. Considering the content, we can start by translating primary pages
 and slides in ovirt.org. I think in China most of the technical people
 are able to read English, so it is not necessary to translate all the
 pages. We shall then provide new content that is different from the
 oVirt main site. For example, successful customer stories, technical
 experience sharing, and all the things you mentioned in the mail.
 
 --
 Zhou Zheng Sheng / 周征晟
 E-mail: zhshz...@linux.vnet.ibm.com
 Telephone: 86-10-82454397
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] sanlock + gluster recovery -- RFE

2014-05-23 Thread Vijay Bellur

On 05/21/2014 10:22 PM, Federico Simoncelli wrote:

- Original Message -

From: Giuseppe Ragusa giuseppe.rag...@hotmail.com
To: fsimo...@redhat.com
Cc: users@ovirt.org
Sent: Wednesday, May 21, 2014 5:15:30 PM
Subject: sanlock + gluster recovery -- RFE

Hi,


- Original Message -

From: Ted Miller tmiller at hcjb.org
To: users users at ovirt.org
Sent: Tuesday, May 20, 2014 11:31:42 PM
Subject: [ovirt-users] sanlock + gluster recovery -- RFE

As you are aware, there is an ongoing split-brain problem with running
sanlock on replicated gluster storage. Personally, I believe that this is
the 5th time that I have been bitten by this sanlock+gluster problem.

I believe that the following are true (if not, my entire request is
probably
off base).


 * ovirt uses sanlock in such a way that when the sanlock storage is on a
 replicated gluster file system, very small storage disruptions can
 result in a gluster split-brain on the sanlock space


Although this is possible (at the moment) we are working hard to avoid it.
The hardest part here is to ensure that the gluster volume is properly
configured.

The suggested configuration for a volume to be used with ovirt is:

Volume Name: (...)
Type: Replicate
Volume ID: (...)
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
(...three bricks...)
Options Reconfigured:
network.ping-timeout: 10
cluster.quorum-type: auto

The two options ping-timeout and quorum-type are really important.
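For an existing volume these can be set with, for example (the volume name
is a placeholder):

  gluster volume set <volname> network.ping-timeout 10
  gluster volume set <volname> cluster.quorum-type auto
  gluster volume info <volname>   # verify them under 'Options Reconfigured'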

You would also need a build where this bug is fixed in order to avoid any
chance of a split-brain:

https://bugzilla.redhat.com/show_bug.cgi?id=1066996


It seems that the aforementioned bug is peculiar to 3-bricks setups.

I understand that a 3-bricks setup can allow proper quorum formation without
resorting to first-configured-brick-has-more-weight convention used with
only 2 bricks and quorum auto (which makes one node special, so not
properly any-single-fault tolerant).


Correct.


But, since we are on ovirt-users, is there a similar suggested configuration
for a 2-hosts setup oVirt+GlusterFS with oVirt-side power management
properly configured and tested-working?
I mean a configuration where any host can go south and oVirt (through the
other one) fences it (forcibly powering it off with confirmation from IPMI
or similar) then restarts HA-marked vms that were running there, all the
while keeping the underlying GlusterFS-based storage domains responsive and
readable/writeable (maybe apart from a lapse between detected other-node
unresposiveness and confirmed fencing)?


We already had a discussion with gluster asking if it was possible to
add fencing to the replica 2 quorum/consistency mechanism.

The idea is that as soon as you can't replicate a write you have to
freeze all IO until either the connection is re-established or you
know that the other host has been killed.

Adding Vijay.




There is a related thread on gluster-devel [1] to have a better behavior 
in GlusterFS for prevention of split brains with sanlock and 2-way 
replicated gluster volumes.


Please feel free to comment on the proposal there.

Thanks,
Vijay

[1] 
http://supercolony.gluster.org/pipermail/gluster-devel/2014-May/040751.html

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] help, manager hard drive died (3.3.4)

2014-05-23 Thread Bob Doolittle
If you read the whole thread I referenced it says the conversion only works
for Master. It does not say how to handle non-Master. Good luck.

-Bob
On May 23, 2014 2:48 AM, David Smith dsm...@mypchelp.com wrote:

 Hey Bob,

 Get this.. So this process worked great for one of the nfs servers,
 however, we had two nfs servers. One was a primary data center (master) and
 the other non-master.
 When I try to import the non-master data directory with the metadata file
 edit method, it imports but then shows no VMs/templates/etc available for
 import.
 On the other hand, when i activate the master data center nfs filesystem,
 i can see VMs that were stored on the non-master NFS, if I try to import
 them, it blows an error, the files aren't there, and I look at the log on
 the hv manager and I see the folder that it's looking for which is on the
 non-master NFS.

 Right now I'm trying to copy the folders over from the non-master nfs to
 the imported master data (now export domain) to see if I can import those
 VMs.
 I wonder if there's a better way to import VMs from a non-master domain.


  On Thu, May 22, 2014 at 12:17 PM, Bob Doolittle b...@doolittle.us.com wrote:

  On 05/22/2014 03:08 PM, David Smith wrote:

  I meant ovirt-engine (i was calling it the manager)
 also i'll need to reinstall version 3.3.4, whats the best path for that
 w/restore?


 One thing, if all else fails, is to convert your Export Domain to a Data
 Domain with a few edits to the metadata.
 Then install a fresh engine and import your old VMs.

 Personally, I'd install the same version as you had previously, then make
 sure you can attach your new Export domain (old Data domain), then upgrade.

 Some guides for conversion are in this thread:
 http://lists.ovirt.org/pipermail/users/2012-October/010258.html
 http://lists.ovirt.org/pipermail/users/2012-November/010273.html

 Note that's from a while ago. I am not sure it still works exactly the
 same. Make sure to backup your metadata file first.

 I did something a bit similar earlier today, by editing the metadata on
 an Export domain to remove information about the previously-attached Data
 Center.

 -Bob



   On Thu, May 22, 2014 at 11:58 AM, David Smith dsm...@mypchelp.com wrote:

 So the ovirt manager hard disk died, 3.3.4 version, i can't get it to
 spin up to get any data off.
 I have an old copy of the manager database which was living on the
 manager machine.

  What is the best procedure for restoring my ovirt setup? I've not been
 able to import the data store, only export and iso in the past.

  Thanks in advance,
 David




 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users




 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-23 Thread Vijay Bellur

On 05/21/2014 07:22 PM, Kanagaraj wrote:

Ok.

I am not sure deleting the file or re-peer probe would be the right way
to go.

Gluster-users can help you here.


On 05/21/2014 07:08 PM, Gabi C wrote:

Hello!


I haven't changed the IP, nor reinstalled nodes. All nodes are updated
via yum. All I can think of is that, after having some issue with
gluster, from the WebGUI I deleted the VM, deactivated and detached the storage
domains (I have 2), then, _manually_, from one of the nodes, removed the
bricks, detached the peers, probed them again, added the bricks back, brought the
volume up, and re-added the storage domains from the WebGUI.


On Wed, May 21, 2014 at 4:26 PM, Kanagaraj kmayi...@redhat.com
mailto:kmayi...@redhat.com wrote:

What are the steps which led this situation?

Did you re-install one of the nodes after forming the cluster or
reboot which could have changed the ip?



On 05/21/2014 03:43 PM, Gabi C wrote:

On afected node:

gluster peer status

gluster peer status
Number of Peers: 3

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)

Hostname: 10.125.1.196
Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
State: Peer in Cluster (Connected)

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)





ls -la /var/lib/gluster



ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 11:10 .
drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
-rw---. 1 root root   73 May 21 11:10
85c2a08c-a955-47cc-a924-cf66c6814654
-rw---. 1 root root   73 May 21 10:52
c22e41b8-2818-4a96-a6df-a237517836d6
-rw---. 1 root root   73 May 21 11:10
d95558a0-a306-4812-aec2-a361a9ddde3e





Can you please check the output of cat 
/var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e ?


If it does contain information about the duplicated peer and none of the 
other 2 nodes do have this file in /var/lib/glusterd/peers/, the file 
can be moved out of /var/lib/glusterd or deleted.


Regards,
Vijay


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Users] Unable to delete a snapshot

2014-05-23 Thread Nicolas Ecarnot

Le 13/01/2014 08:43, Sandro Bonazzola a écrit :

Il 12/01/2014 10:40, Maor Lipchuk ha scritto:

Adding Sandro to the thread

On 01/12/2014 11:26 AM, Maor Lipchuk wrote:

On 01/09/2014 12:29 PM, Nicolas Ecarnot wrote:

Hi Maor, hi everyone,

Le 07/01/2014 04:09, Maor Lipchuk a écrit :

Looking at bugzilla, it could be related to
https://bugzilla.redhat.com/1029069
(based on the exception described at
https://bugzilla.redhat.com/show_bug.cgi?id=1029069#c1)


In my case, there was nothing live: the VM was shut down when
creating the snapshot as well as when trying to delete it.

I understand, though, that live storage migration sub-flows include creation of
a live snapshot, so it could be related.



The issue there was fixed after an upgrade to 3.3.1 (as Sander mentioned
it before in the mailing list)

Could you give it a try and check if that works for you?


I'm very shy with upgrading my oVirt production framework, but I began
to read some things to upgrade it. Maybe you can lead me to a precise
way to upgrade vdsm?

Sandro, do you aware of any documentation which can help in upgrading
ovirt, specifically VDSM?


No, I'm not aware of any documentation about it.
However since 3.3.1 is in ovirt-stable if you just need to update vdsm on a 
system I think that

yum  --enablerepo=ovirt-stable update vdsm*


on that system should be enough.


Done


Also it will be great if you could open a bug on that with the full
VDSM, engine logs and the list of lvs.


Done :

https://bugzilla.redhat.com/show_bug.cgi?id=1050901


Just to close the bug :

I updated to 3.4.1-1.el6, and I can not reproduce the bug anymore.

--
Nicolas Ecarnot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-23 Thread Gabi C
On problematic node:

[root@virtual5 ~]# ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 16:33 .
drwxr-xr-x. 9 root root 4096 May 21 16:33 ..
-rw---. 1 root root   73 May 21 16:33
85c2a08c-a955-47cc-a924-cf66c6814654
-rw---. 1 root root   73 May 21 16:33
c22e41b8-2818-4a96-a6df-a237517836d6
-rw---. 1 root root   73 May 21 16:33
d95558a0-a306-4812-aec2-a361a9ddde3e
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/85c2a08c-a955-47cc-a924-cf66c6814654
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/c22e41b8-2818-4a96-a6df-a237517836d6
uuid=c22e41b8-2818-4a96-a6df-a237517836d6
state=3
hostname1=10.125.1.196
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194







on other 2 nodes


[root@virtual4 ~]# ls -la /var/lib/glusterd/peers/
total 16
drwxr-xr-x. 2 root root 4096 May 21 16:34 .
drwxr-xr-x. 9 root root 4096 May 21 16:34 ..
-rw---. 1 root root   73 May 21 16:34
bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
-rw---. 1 root root   73 May 21 11:09
c22e41b8-2818-4a96-a6df-a237517836d6
[root@virtual4 ~]# cat
/var/lib/glusterd/peers/bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
uuid=bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
state=3
hostname1=10.125.1.195
[root@virtual4 ~]# cat
/var/lib/glusterd/peers/c22e41b8-2818-4a96-a6df-a237517836d6
uuid=c22e41b8-2818-4a96-a6df-a237517836d6
state=3
hostname1=10.125.1.196





[root@virtual6 ~]# ls -la /var/lib/glusterd/peers/
total 16
drwxr-xr-x. 2 root root 4096 May 21 16:34 .
drwxr-xr-x. 9 root root 4096 May 21 16:34 ..
-rw---. 1 root root   73 May 21 11:10
85c2a08c-a955-47cc-a924-cf66c6814654
-rw---. 1 root root   73 May 21 16:34
bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
[root@virtual6 ~]# cat
/var/lib/glusterd/peers/85c2a08c-a955-47cc-a924-cf66c6814654
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194
[root@virtual6 ~]# cat
/var/lib/glusterd/peers/bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
uuid=bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
state=3
hostname1=10.125.1.195
[root@virtual6 ~]#



On Fri, May 23, 2014 at 2:05 PM, Vijay Bellur vbel...@redhat.com wrote:

 On 05/21/2014 07:22 PM, Kanagaraj wrote:

 Ok.

 I am not sure deleting the file or re-peer probe would be the right way
 to go.

 Gluster-users can help you here.


 On 05/21/2014 07:08 PM, Gabi C wrote:

 Hello!


 I haven't change the IP, nor reinstall nodes. All nodes are updated
 via yum. All I can think of was that after having some issue with
 gluster,from WebGUI I deleted VM, deactivate and detach storage
 domains ( I have 2) , than, _manually_, from one of the nodes , remove

 bricks, then detach peers, probe them, add bricks again, bring the
 volume up, and readd storage domains from the webGUI.


 On Wed, May 21, 2014 at 4:26 PM, Kanagaraj kmayi...@redhat.com
 mailto:kmayi...@redhat.com wrote:

 What are the steps which led this situation?

 Did you re-install one of the nodes after forming the cluster or
 reboot which could have changed the ip?



 On 05/21/2014 03:43 PM, Gabi C wrote:

 On afected node:

 gluster peer status

 gluster peer status
 Number of Peers: 3

 Hostname: 10.125.1.194
 Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
 State: Peer in Cluster (Connected)

 Hostname: 10.125.1.196
 Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
 State: Peer in Cluster (Connected)

 Hostname: 10.125.1.194
 Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
 State: Peer in Cluster (Connected)





 ls -la /var/lib/gluster



 ls -la /var/lib/glusterd/peers/
 total 20
 drwxr-xr-x. 2 root root 4096 May 21 11:10 .
 drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
 -rw---. 1 root root   73 May 21 11:10
 85c2a08c-a955-47cc-a924-cf66c6814654
 -rw---. 1 root root   73 May 21 10:52
 c22e41b8-2818-4a96-a6df-a237517836d6
 -rw---. 1 root root   73 May 21 11:10
 d95558a0-a306-4812-aec2-a361a9ddde3e




 Can you please check the output of cat /var/lib/glusterd/peers/
 d95558a0-a306-4812-aec2-a361a9ddde3e ?

 If it does contain information about the duplicated peer and none of the
 other 2 nodes do have this file in /var/lib/glusterd/peers/, the file can
 be moved out of /var/lib/glusterd or deleted.

 Regards,
 Vijay



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-23 Thread Vijay Bellur

On 05/23/2014 05:25 PM, Gabi C wrote:

On problematic node:

[root@virtual5 ~]# ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 16:33 .
drwxr-xr-x. 9 root root 4096 May 21 16:33 ..
-rw---. 1 root root   73 May 21 16:33
85c2a08c-a955-47cc-a924-cf66c6814654
-rw---. 1 root root   73 May 21 16:33
c22e41b8-2818-4a96-a6df-a237517836d6
-rw---. 1 root root   73 May 21 16:33
d95558a0-a306-4812-aec2-a361a9ddde3e
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/85c2a08c-a955-47cc-a924-cf66c6814654
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/c22e41b8-2818-4a96-a6df-a237517836d6
uuid=c22e41b8-2818-4a96-a6df-a237517836d6
state=3
hostname1=10.125.1.196
[root@virtual5 ~]# cat
/var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194



Looks like this is stale information for 10.125.1.194 that has somehow 
persisted. Deleting this file and then restarting glusterd on this node 
should lead to a consistent state for the peers.
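Roughly (a sketch -- double-check the UUID and keep a copy of the file
before removing it):

  # on the problematic node only
  mv /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e /root/
  service glusterd restart
  gluster peer status    # should now list only the two real peers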


Regards,
Vijay

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] selinux on oVirt Node

2014-05-23 Thread Simon Barrett

I added enforcing=0 to my pxe menu and re-installed the node. All looks 
better now.
 
# sestatus
SELinux status: enabled
SELinuxfs mount:/selinux
Current mode:   permissive
Mode from config file:  disabled
Policy version: 24
Policy from config file:targeted

# cat /selinux/enforce
0

Thanks for the information.

Simon


-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Sven Kieske
Sent: 23 May 2014 09:45
To: users@ovirt.org
Subject: Re: [ovirt-users] selinux on oVirt Node

afaik you need to disable selinux by passing the relevant parameter direct via 
kernel boot options.

search the ML or the net if you need the exact command line.

HTH

Am 23.05.2014 10:36, schrieb Simon Barrett:
 I set SELINUX=disabled in /etc/selinux/config and ran a persist 
 /etc/selinux/config.
 
 After the node reboots, the file has the correct SELINUX=disabled line but 
 I see that selinux is still enabled:
 
 # grep ^SELINUX= /etc/selinux/config
 SELINUX=disabled
 # getenforce
 Enforcing
 # cat /selinux/enforce
 1
 
 It's like the bind mounts for the files in config happen after selinux is 
 setup.
 
 Is there something else I should be doing to make a change to selinux survive 
 a node reboot?
 
 Many thanks,
 
 Simon

--
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VDSM 4.15 cluster compatibility

2014-05-23 Thread Jon Archer

Hi all,

just upgraded to the latest nightly and am faced with a host/cluster 
compatibility issue; it seems my cluster is only compatible with vdsm 
[4.13, 4.14, 4.9, 4.11, 4.12, 4.10] but I'm now running 4.15.


Any ideas on how to upgrade the cluster to be 4.15 compatible?

Thanks

Jon
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs tips/questions

2014-05-23 Thread Gabi C
Just did it and it seems to be OK!

Many thanks!



On Fri, May 23, 2014 at 3:11 PM, Vijay Bellur vbel...@redhat.com wrote:

 On 05/23/2014 05:25 PM, Gabi C wrote:

 On problematic node:

 [root@virtual5 ~]# ls -la /var/lib/glusterd/peers/
 total 20
 drwxr-xr-x. 2 root root 4096 May 21 16:33 .
 drwxr-xr-x. 9 root root 4096 May 21 16:33 ..
 -rw---. 1 root root   73 May 21 16:33
 85c2a08c-a955-47cc-a924-cf66c6814654
 -rw---. 1 root root   73 May 21 16:33
 c22e41b8-2818-4a96-a6df-a237517836d6
 -rw---. 1 root root   73 May 21 16:33
 d95558a0-a306-4812-aec2-a361a9ddde3e
 [root@virtual5 ~]# cat
 /var/lib/glusterd/peers/85c2a08c-a955-47cc-a924-cf66c6814654
 uuid=85c2a08c-a955-47cc-a924-cf66c6814654
 state=3
 hostname1=10.125.1.194
 [root@virtual5 ~]# cat
 /var/lib/glusterd/peers/c22e41b8-2818-4a96-a6df-a237517836d6
 uuid=c22e41b8-2818-4a96-a6df-a237517836d6
 state=3
 hostname1=10.125.1.196
 [root@virtual5 ~]# cat
 /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e
 uuid=85c2a08c-a955-47cc-a924-cf66c6814654
 state=3
 hostname1=10.125.1.194


 Looks like this is stale information for 10.125.1.194 that has somehow
 persisted. Deleting this file and then restarting glusterd on this node
 should lead to a consistent state for the peers.

 Regards,
 Vijay


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Server 2012 R2 no drive found

2014-05-23 Thread Neil
Hi guys,

I've been trying to install 2012 R2 onto my ovirt 3.4 but no matter
what I do, it either doesn't find an IDE drive or a Virtio drive (when
using the virtio ISO).

ovirt-engine-lib-3.4.0-1.el6.noarch
ovirt-engine-restapi-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.4.0-1.el6.noarch
ovirt-engine-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.4.0-1.el6.noarch
ovirt-host-deploy-java-1.2.0-1.el6.noarch
ovirt-engine-cli-3.2.0.10-1.el6.noarch
ovirt-engine-setup-3.4.0-1.el6.noarch
ovirt-host-deploy-1.2.0-1.el6.noarch
ovirt-engine-backend-3.4.0-1.el6.noarch
ovirt-image-uploader-3.4.0-1.el6.noarch
ovirt-engine-tools-3.4.0-1.el6.noarch
ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
ovirt-engine-webadmin-portal-3.4.0-1.el6.noarch
ovirt-engine-setup-base-3.4.0-1.el6.noarch
ovirt-iso-uploader-3.4.0-1.el6.noarch
ovirt-engine-userportal-3.4.0-1.el6.noarch
ovirt-log-collector-3.4.1-1.el6.noarch
ovirt-engine-websocket-proxy-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.4.0-1.el6.noarch
ovirt-engine-dbscripts-3.4.0-1.el6.noarch

vdsm-4.14.6-0.el6.x86_64
vdsm-xmlrpc-4.14.6-0.el6.noarch
vdsm-cli-4.14.6-0.el6.noarch
vdsm-python-zombiereaper-4.14.6-0.el6.noarch
vdsm-python-4.14.6-0.el6.x86_64

qemu-img-0.12.1.2-2.415.el6_5.8.x86_64
qemu-kvm-tools-0.12.1.2-2.415.el6_5.8.x86_64
qemu-kvm-0.12.1.2-2.415.el6_5.8.x86_64
gpxe-roms-qemu-0.9.7-6.9.el6.noarch

Is there a special trick to get this working, or could something be
wrong? When it comes to creating a guest I don't see a Server 2012 R2
64bit in the drop down list?

Thanks.

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] SLA : RAM scheduling

2014-05-23 Thread Nathanaël Blanchet

Hello,
On oVirt 3.4, is it possible to schedule VM distribution depending on 
host RAM availability?
Concretely, I had to manually move all the VMs to the second host of 
the cluster, which led to 90% memory occupation on the 
destination host. When my first host rebooted, none of the VMs on the 
second host automatically migrated back to the first one, which had all of 
its RAM free. How can I make this happen?


--
Nathanaël Blanchet

Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SLA : RAM scheduling

2014-05-23 Thread Nathanaël Blanchet


Le 23/05/2014 17:11, Nathanaël Blanchet a écrit :

Hello,
On ovirt 3.4, is it possible to schedule vms distribution depending on 
host RAM availibility?
Concretly, I had to manually move vms all the vms to the second host 
of the cluster, this lead to reach 90% occupation of memory on the 
destination host. When my first host has rebooted, none vms of the 
second host automatically migrated to the first one which had full 
RAM. How to make this happen?


... so that RAM ends up evenly distributed across both hosts... I hope that's 
clear enough...


--
Nathanaël Blanchet

Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] sanlock + gluster recovery -- RFE

2014-05-23 Thread Ted Miller

Vijay, I am not a member of the developer list, so my comments are at the end.

On 5/23/2014 6:55 AM, Vijay Bellur wrote:

On 05/21/2014 10:22 PM, Federico Simoncelli wrote:

- Original Message -

From: Giuseppe Ragusa giuseppe.rag...@hotmail.com
To: fsimo...@redhat.com
Cc: users@ovirt.org
Sent: Wednesday, May 21, 2014 5:15:30 PM
Subject: sanlock + gluster recovery -- RFE

Hi,


- Original Message -

From: Ted Miller tmiller at hcjb.org
To: users users at ovirt.org
Sent: Tuesday, May 20, 2014 11:31:42 PM
Subject: [ovirt-users] sanlock + gluster recovery -- RFE

As you are aware, there is an ongoing split-brain problem with running
sanlock on replicated gluster storage. Personally, I believe that this is
the 5th time that I have been bitten by this sanlock+gluster problem.

I believe that the following are true (if not, my entire request is
probably
off base).


 * ovirt uses sanlock in such a way that when the sanlock storage is
 on a
 replicated gluster file system, very small storage disruptions can
 result in a gluster split-brain on the sanlock space


Although this is possible (at the moment) we are working hard to avoid it.
The hardest part here is to ensure that the gluster volume is properly
configured.

The suggested configuration for a volume to be used with ovirt is:

Volume Name: (...)
Type: Replicate
Volume ID: (...)
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
(...three bricks...)
Options Reconfigured:
network.ping-timeout: 10
cluster.quorum-type: auto

The two options ping-timeout and quorum-type are really important.

You would also need a build where this bug is fixed in order to avoid any
chance of a split-brain:

https://bugzilla.redhat.com/show_bug.cgi?id=1066996


It seems that the aforementioned bug is peculiar to 3-bricks setups.

I understand that a 3-bricks setup can allow proper quorum formation without
resorting to first-configured-brick-has-more-weight convention used with
only 2 bricks and quorum auto (which makes one node special, so not
properly any-single-fault tolerant).


Correct.


But, since we are on ovirt-users, is there a similar suggested configuration
for a 2-hosts setup oVirt+GlusterFS with oVirt-side power management
properly configured and tested-working?
I mean a configuration where any host can go south and oVirt (through the
other one) fences it (forcibly powering it off with confirmation from IPMI
or similar) then restarts HA-marked vms that were running there, all the
while keeping the underlying GlusterFS-based storage domains responsive and
readable/writeable (maybe apart from a lapse between detected other-node
unresposiveness and confirmed fencing)?


We already had a discussion with gluster asking if it was possible to
add fencing to the replica 2 quorum/consistency mechanism.

The idea is that as soon as you can't replicate a write you have to
freeze all IO until either the connection is re-established or you
know that the other host has been killed.

Adding Vijay.
There is a related thread on gluster-devel [1] to have a better behavior in 
GlusterFS for prevention of split brains with sanlock and 2-way replicated 
gluster volumes.


Please feel free to comment on the proposal there.

Thanks,
Vijay

[1] http://supercolony.gluster.org/pipermail/gluster-devel/2014-May/040751.html

One quick note before my main comment: I see references to quorum being N/2 
+ 1.  Isn't it more accurate to say that quorum is (N + 1)/2 or N/2 + 0.5?


Now to my main comment.

I see a case that is not being addressed.  I have no proof of how often this 
use-case occurs, but I believe that it does occur.  (It could (theoretically) 
occur in any situation where multiple bricks are writing to different parts 
of the same file.)


Use-case: sanlock via fuse client.

Steps to produce originally

   (not tested for reproducibility, because I was unable to recover the
   ovirt cluster after occurrence, had to rebuild from scratch), time frame
   was late 2013 or early 2014

   2 node ovirt cluster using replicated gluster storage
   ovirt cluster up and running VMs
   remove power from network switch
   restore power to network switch after a few minutes

Result

   both copies of .../dom_md/ids file accused the other of being out of sync

Hypothesis of cause

   servers (ovirt nodes and gluster bricks) are called A and B
   At the moment when network communication was lost, or just a moment after
   communication was lost

   A had written to local ids file
   A had started process to send write to B
   A had not received write confirmation from B
   and
   B had written to local ids file
   B had started process to send write to A
   B had not received write confirmation from A

   Thus, each file had a segment that had been written to the local file,
   but had not been confirmed written on the remote file.  Each file
   correctly accused the other file of being out-of-sync.  I did read and
   

Re: [ovirt-users] emulated machine error

2014-05-23 Thread Nathanaël Blanchet

Hi all, thank you for your help.
I found the reason for my issue: the dreyou 3.2 repo installed an old 
external qemu-kvm-rhev rpm, intended to support live snapshots, which was 
still on the hosts after manually upgrading vdsm. This qemu version was 
too old to support a 6.5 guest. I upgraded to the official qemu-kvm, and then 
everything worked. Finally, I decided to install qemu-kvm from the 
jenkins site to get the live snapshot capability back. No more problems!
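(Roughly, the package swap looked like the following -- a sketch, the exact
package names shipped by the dreyou repo are assumptions:)

  yum remove qemu-kvm-rhev
  yum install qemu-kvm qemu-img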


Le 22/05/2014 12:25, Itamar Heim a écrit :

On 05/22/2014 11:59 AM, Karli Sjöberg wrote:

On Wed, 2014-05-21 at 19:14 +0200, Nathanaël Blanchet wrote:

Hello
I was used tout run ovirt 3.2.2 installed from the dreyou repo and 
it has worked like a charm until now. I succeeded to upgrade to 
3.3.5 official repository but I didn't pay attention with the host 
vdsm upgrade and I installed 4.14.


There's your problem. You should always make sure that your hosts are at
the same level of functionality as the engine. Breakage like this isn't
nice, but you should have paid closer attention to the upgrade
process.

Having a gazillion different release packages floating around isn't
helping, and installing them leaves .repo files behind that can mess
it all up, but that's the way it is. That particular issue was just
about to annoy me enough to file a bug report, but then I noticed that it
already was :)
https://bugzilla.redhat.com/show_bug.cgi?id=1097874


actually, upgrading the hosts should always be OK, even if to a newer 
version. It's a bug in engine 3.3 not allowing this.
But that doesn't seem related to the complaint here, which requires 
further investigation with the details Roy asked for.


--
Nathanaël Blanchet

Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SLA : RAM scheduling

2014-05-23 Thread Karli Sjöberg

On 23 May 2014 17:13, Nathanaël Blanchet blanc...@abes.fr wrote:


 Le 23/05/2014 17:11, Nathanaël Blanchet a écrit :
  Hello,
  On ovirt 3.4, is it possible to schedule vms distribution depending on
  host RAM availibility?
  Concretly, I had to manually move vms all the vms to the second host
  of the cluster, this lead to reach 90% occupation of memory on the
  destination host. When my first host has rebooted, none vms of the
  second host automatically migrated to the first one which had full
  RAM. How to make this happen?
 
 ... so as to both hosts be RAM evenly distributed... hope to be enough
 clear...

Sounds like you just want to apply the cluster policy for even distribution. 
Have you assigned any policy for that cluster?

/K


 --
 Nathanaël Blanchet

 Supervision réseau
 Pôle exploitation et maintenance
 Département des systèmes d'information
 227 avenue Professeur-Jean-Louis-Viala
 34193 MONTPELLIER CEDEX 5
 Tél. 33 (0)4 67 54 84 55
 Fax  33 (0)4 67 54 84 14
 blanc...@abes.fr

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM 4.15 cluster compatibility

2014-05-23 Thread Sandro Bonazzola
Il 23/05/2014 15:04, Jon Archer ha scritto:
 Hi all,
 
 just upgraded to the latest nightly and am faced with a host/cluster 
 compatibility issue, seems my cluster is only compatible with vdsm [4.13, 
 4.14,
 4.9,.4.11,4.12,4.10] but i'm now running 4.15.
 
 Any ideas on how to upgrade the cluster to be 4.15 compatible?


As far as I know, latest version released is 4.14.8.1 (with oVirt 3.4.1).
Where did you got 4.15?


 
 Thanks
 
 Jon
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM 4.15 cluster compatibility

2014-05-23 Thread Jon Archer

Hi,
I've got the nightly repo enabled it came from ovirt-master-snapshot

I'm quite prepared for breakages and work involved to fix, just wondered 
if anyone could point me in the right direction.


Jon

On 2014-05-23 16:54, Sandro Bonazzola wrote:

Il 23/05/2014 15:04, Jon Archer ha scritto:

Hi all,

just upgraded to the latest nightly and am faced with a host/cluster 
compatibility issue, seems my cluster is only compatible with vdsm 
[4.13, 4.14,

4.9,.4.11,4.12,4.10] but i'm now running 4.15.

Any ideas on how to upgrade the cluster to be 4.15 compatible?



As far as I know, latest version released is 4.14.8.1 (with oVirt 
3.4.1).

Where did you got 4.15?




Thanks

Jon
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SLA : RAM scheduling

2014-05-23 Thread Nathanaël Blanchet

Even distribution is for CPU only.

Le 23/05/2014 17:48, Karli Sjöberg a écrit :



On 23 May 2014 17:13, Nathanaël Blanchet blanc...@abes.fr wrote:



 Le 23/05/2014 17:11, Nathanaël Blanchet a écrit :
  Hello,
  On ovirt 3.4, is it possible to schedule vms distribution 
depending on

  host RAM availibility?
  Concretly, I had to manually move vms all the vms to the second host
  of the cluster, this lead to reach 90% occupation of memory on the
  destination host. When my first host has rebooted, none vms of the
  second host automatically migrated to the first one which had full
  RAM. How to make this happen?
 
 ... so as to both hosts be RAM evenly distributed... hope to be enough
 clear...

Sounds like you just want to apply the cluster policy for even 
distribution. Have you assigned any policy for that cluster?


/K


 --
 Nathanaël Blanchet

 Supervision réseau
 Pôle exploitation et maintenance
 Département des systèmes d'information
 227 avenue Professeur-Jean-Louis-Viala
 34193 MONTPELLIER CEDEX 5
 Tél. 33 (0)4 67 54 84 55
 Fax  33 (0)4 67 54 84 14
 blanc...@abes.fr

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



--
Nathanaël Blanchet

Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] help, manager hard drive died (3.3.4)

2014-05-23 Thread David Smith
Hey Bob,

Yeah, my method worked. I was able to copy the non-master data center VM data
into the images folder on the master data center (now export domain) and
import everything.
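Roughly, per VM image group, it was something like this (a sketch -- the
mount points and the use of rsync are assumptions for a typical NFS data
domain layout):

  rsync -av /mnt/nonmaster-sd/<sd_uuid>/images/<image_group_uuid>/ \
            /mnt/export-sd/<sd_uuid>/images/<image_group_uuid>/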

This would have been tragic if there hadn't been the hard disk space; it would
have required lots of big data shuffling.  I've split the data center model so
now we have two clusters with two masters, and we will avoid using a
non-master data center entirely due to this issue.

The good news is it looks like we didn't lose any data.


On Fri, May 23, 2014 at 3:56 AM, Bob Doolittle b...@doolittle.us.com wrote:

 If you read the whole thread I referenced it says the conversion only
 works for Master. It does not say how to handle non-Master. Good luck.

 -Bob
 On May 23, 2014 2:48 AM, David Smith dsm...@mypchelp.com wrote:

 Hey Bob,

 Get this.. So this process worked great for one of the nfs servers,
 however, we had two nfs servers. One was a primary data center (master) and
 the other non-master.
 When I try to import the non-master data directory with the metadata file
 edit method, it imports but then shows no VMs/templates/etc available for
 import.
 On the other hand, when i activate the master data center nfs filesystem,
 i can see VMs that were stored on the non-master NFS, if I try to import
 them, it blows an error, the files aren't there, and I look at the log on
 the hv manager and I see the folder that it's looking for which is on the
 non-master NFS.

 Right now I'm trying to copy the folders over from the non-master nfs to
 the imported master data (now export domain) to see if I can import those
 VMs.
 I wonder if there's a better way to import VMs from a non-master domain.


  On Thu, May 22, 2014 at 12:17 PM, Bob Doolittle b...@doolittle.us.com wrote:

  On 05/22/2014 03:08 PM, David Smith wrote:

  I meant ovirt-engine (i was calling it the manager)
 also i'll need to reinstall version 3.3.4, whats the best path for that
 w/restore?


 One thing, if all else fails, is to convert your Export Domain to a Data
 Domain with a few edits to the metadata.
 Then install a fresh engine and import your old VMs.

 Personally, I'd install the same version as you had previously, then
 make sure you can attach your new Export domain (old Data domain), then
 upgrade.

 Some guides for conversion are in this thread:
 http://lists.ovirt.org/pipermail/users/2012-October/010258.html
 http://lists.ovirt.org/pipermail/users/2012-November/010273.html

 Note that's from a while ago. I am not sure it still works exactly the
 same. Make sure to backup your metadata file first.

 I did something a bit similar earlier today, by editing the metadata on
 an Export domain to remove information about the previously-attached Data
 Center.

 -Bob



   On Thu, May 22, 2014 at 11:58 AM, David Smith dsm...@mypchelp.com wrote:

 So the ovirt manager hard disk died, 3.3.4 version, i can't get it to
 spin up to get any data off.
 I have an old copy of the manager database which was living on the
 manager machine.

  What is the best procedure for restoring my ovirt setup? I've not
 been able to import the data store, only export and iso in the past.

  Thanks in advance,
 David




 ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users




 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Users] Migrate cluster 3.3 - 3.4 hosted on existing hosts

2014-05-23 Thread Ted Miller


On 4/2/2014 1:58 AM, Yedidyah Bar David wrote:

- Original Message -

From: Ted Miller tmil...@hcjb.org
To: users users@ovirt.org
Sent: Tuesday, April 1, 2014 10:40:38 PM
Subject: [Users] Migrate cluster 3.3 - 3.4 hosted on existing hosts

Current setup:
 * 3 identical hosts running on HP GL180 g5 servers
 * gluster running 5 volumes in replica 3
 * engine running on VMWare Server on another computer (that computer is
 NOT available to convert to a host)

Where I want to end up:
 * 3 identical hosted-engine hosts running on HP GL180 g5 servers
 * gluster running 6 volumes in replica 3
 * new volume will be nfs storage for engine VM
 * hosted engine in oVirt VM
 * as few changes to current setup as possible

The two pages I found on the wiki are: Hosted Engine Howto and Migrate to
Hosted Engine . Both were written during the testing process, and have not
been updated to reflect production status. I don't know if anything in the
process has changed since they were written.

Basically things remained the same, with some details changing perhaps.


Process outlined in above two pages (as I understand it):

have nfs file store ready to hold VM

Do minimal install (not clear if ovirt node, Centos, or Fedora was used--I am
Centos-based)

Fedora/Centos/RHEL are supposed to work. ovirt node is currently not
supported - iirc it's planned to be supported soon, not sure.


# yum install ovirt-hosted-engine-setup
# hosted-engine --deploy


Install OS on VM


return to host console


at Please install the engine in the VM prompt on host


on VM console
# yum install ovirt-engine


on old engine:
service ovirt-engine stop
chkconfig ovirt-engine off

set up dns for new engine


# engine-backup --mode=backup --file=backup1 --log=backup1.log
scp backup file to new engine VM


on new VM:

Please see [1]. Specifically, if you had a local db, you'll first have
to create it yourself.

[1] http://www.ovirt.org/Ovirt-engine-backup#Howto


# engine-backup --mode=restore --file=backup1 --log=backup1-restore.log \
    --change-db-credentials --db-host=didi-lap --db-user=engine --db-password \
    --db-name=engine

The above assumes a db was already created and ready to use (access etc)
using the supplied credentials. You'll naturally have to provide your own.
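(A minimal sketch of that step, assuming a local PostgreSQL on the new VM and
the default 'engine' user/database names; the howto at [1] also sets
template/encoding/collation options, so check it before relying on this:)

su - postgres -c "psql -d template1 -c \"create user engine password 'engine';\""
su - postgres -c "psql -d template1 -c \"create database engine owner engine;\""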


# engine-setup

on host:
run the script until: "The system will wait until the VM is down."

on new VM:
# reboot

on host: finish the script
My questions:

1. Is the above still the recommended way to do a hosted-engine install?

Yes.


2. Will it blow up at me if I use my existing host (with glusterfs all set
up, etc) as the starting point, instead of a clean install?

a. Probably yes, for now. I did not hear much about testing such a migration
using an existing host - ovirt or gluster or both. I did not test that myself
either.

If at all possible, you should use a new clean host. Do plan well and test.

Also see discussions on the mailing lists, e.g. this one:

http://lists.ovirt.org/pipermail/users/2014-March/thread.html#22441

Good luck, and please report back!

I have good news and bad news.

I migrated the 3-host cluster from 3.4 to 3.4 hosted. The process went
fairly smoothly. The engine ran, I was able to add the three hosts to the
engine's domain, etc. That was all working as of about Thursday. (I did not
get fencing set up.)


Friday, at the end of the day, I shut down the entire system (it is not yet 
in production) because I was leaving for a week's vacation/holiday.  I am 
fairly certain that I put the system into global maintenance mode before 
shutting down.  I know I shut down the engine before shutting down the hosts.
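(Roughly the sequence involved, assuming the standard hosted-engine CLI on one
of the hosts; shown only as a sketch:)

hosted-engine --set-maintenance --mode=global   # stop the HA agents from acting on the engine VM
# cleanly shut down the engine VM, then power off the hosts
hosted-engine --vm-status                       # after power-up, check what the agents report
hosted-engine --set-maintenance --mode=none     # let the agents start the engine again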


Monday (10 days later) I came back from vacation and powered up the three 
machines.  The hosts came up fine, but the engine will not start.  (I found 
some gluster split-brain errors, and chased that for a couple of days, until 
I realized that the split-brain was not the fundamental problem.)


During bootup /var/log/messages shows:

May 21 19:22:00 s2 ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsCapabilities: VDSM initialization timeout
May 21 19:22:00 s2 ovirt-ha-broker mem_free.MemFree ERROR Failed to getVdsStats: VDSM initialization timeout
May 21 19:22:00 s2 ovirt-ha-broker cpu_load_no_engine.EngineHealth ERROR Failed to getVmStats: VDSM initialization timeout
May 21 19:22:00 s2 ovirt-ha-broker engine_health.CpuLoadNoEngine ERROR Failed to getVmStats: VDSM initialization timeout
May 21 19:22:03 s2 vdsm vds WARNING Unable to load the json rpc server module. Please make sure it is installed.


and then /var/log/ovirt-hosted-engine-ha/agent.log shows:

MainThread::ERROR::2014-05-21 19:22:04,198::hosted_engine::414::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) Failed trying to connect storage:
MainThread::CRITICAL::2014-05-21 19:22:04,199::agent::103::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Could not start ha-agent
Traceback (most recent
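(Given the "Failed trying to connect storage" error above, a rough first-pass
check on a 3.4 hosted-engine host; treat the paths and commands as assumptions
for your particular setup:)

service vdsmd status            # the HA agent needs vdsm up and responding first
hosted-engine --vm-status       # what the HA agent/broker currently report
ls /rhev/data-center/mnt/       # the hosted-engine NFS storage domain should be mounted under here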

Re: [ovirt-users] Server 2012 R2 no drive found

2014-05-23 Thread Rene Koch

Hi,

Server 2012 R2 works fine for me when using the virtio-win drivers from the
Red Hat Enterprise Linux Supplementary channel. I don't know if the latest
drivers from linux-kvm.org include Windows Server 2012 R2.

I'm using VirtIO disks and network interfaces, btw.
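(Roughly the procedure, assuming a virtio-win ISO or driver floppy (.vfd) is
available in the ISO domain; the folder and file names depend on the
virtio-win build, so treat them as placeholders:)

1. Run Once the VM with the Windows install ISO attached as CD and, if using
   the floppy drivers, virtio-win_amd64.vfd attached as a floppy.
2. At the "Where do you want to install Windows?" screen choose "Load driver"
   and browse to the viostor folder for your Windows version (e.g. something
   like viostor\2k12R2\amd64 on the ISO).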


Regards,
René



Quoting Neil nwilson...@gmail.com:


Hi guys,

I've been trying to install 2012 R2 onto my oVirt 3.4, but no matter
what I do, it doesn't find either an IDE drive or a VirtIO drive (when
using the virtio ISO).

ovirt-engine-lib-3.4.0-1.el6.noarch
ovirt-engine-restapi-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.4.0-1.el6.noarch
ovirt-engine-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.4.0-1.el6.noarch
ovirt-host-deploy-java-1.2.0-1.el6.noarch
ovirt-engine-cli-3.2.0.10-1.el6.noarch
ovirt-engine-setup-3.4.0-1.el6.noarch
ovirt-host-deploy-1.2.0-1.el6.noarch
ovirt-engine-backend-3.4.0-1.el6.noarch
ovirt-image-uploader-3.4.0-1.el6.noarch
ovirt-engine-tools-3.4.0-1.el6.noarch
ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
ovirt-engine-webadmin-portal-3.4.0-1.el6.noarch
ovirt-engine-setup-base-3.4.0-1.el6.noarch
ovirt-iso-uploader-3.4.0-1.el6.noarch
ovirt-engine-userportal-3.4.0-1.el6.noarch
ovirt-log-collector-3.4.1-1.el6.noarch
ovirt-engine-websocket-proxy-3.4.0-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.4.0-1.el6.noarch
ovirt-engine-dbscripts-3.4.0-1.el6.noarch

vdsm-4.14.6-0.el6.x86_64
vdsm-xmlrpc-4.14.6-0.el6.noarch
vdsm-cli-4.14.6-0.el6.noarch
vdsm-python-zombiereaper-4.14.6-0.el6.noarch
vdsm-python-4.14.6-0.el6.x86_64

qemu-img-0.12.1.2-2.415.el6_5.8.x86_64
qemu-kvm-tools-0.12.1.2-2.415.el6_5.8.x86_64
qemu-kvm-0.12.1.2-2.415.el6_5.8.x86_64
gpxe-roms-qemu-0.9.7-6.9.el6.noarch

Is there a special trick to get this working, or could something be
wrong? When it comes to creating a guest, I don't see a Server 2012 R2
64-bit option in the drop-down list.

Thanks.

Regards.

Neil Wilson.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Server 2012 R2 no drive found

2014-05-23 Thread Paul.LKW
Hi guys:
Please do not just say that yours is working; that is not helpful. In fact,
several people (including me) have already reported issues on the Windows
platform, and there has been no way to get them resolved. Do you think the
paid Red Hat version would be the same, or would the customer already be
stuck?
I noted this seems to occur only on newly installed oVirt; old installations
are fine.

Paul.LKW
On 2014/5/23 at 11:02 PM, Neil nwilson...@gmail.com wrote:

 Hi guys,

 I've been trying to install 2012 R2 onto my oVirt 3.4, but no matter
 what I do, it doesn't find either an IDE drive or a VirtIO drive (when
 using the virtio ISO).

 ovirt-engine-lib-3.4.0-1.el6.noarch
 ovirt-engine-restapi-3.4.0-1.el6.noarch
 ovirt-engine-setup-plugin-ovirt-engine-3.4.0-1.el6.noarch
 ovirt-engine-3.4.0-1.el6.noarch
 ovirt-engine-setup-plugin-websocket-proxy-3.4.0-1.el6.noarch
 ovirt-host-deploy-java-1.2.0-1.el6.noarch
 ovirt-engine-cli-3.2.0.10-1.el6.noarch
 ovirt-engine-setup-3.4.0-1.el6.noarch
 ovirt-host-deploy-1.2.0-1.el6.noarch
 ovirt-engine-backend-3.4.0-1.el6.noarch
 ovirt-image-uploader-3.4.0-1.el6.noarch
 ovirt-engine-tools-3.4.0-1.el6.noarch
 ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
 ovirt-engine-webadmin-portal-3.4.0-1.el6.noarch
 ovirt-engine-setup-base-3.4.0-1.el6.noarch
 ovirt-iso-uploader-3.4.0-1.el6.noarch
 ovirt-engine-userportal-3.4.0-1.el6.noarch
 ovirt-log-collector-3.4.1-1.el6.noarch
 ovirt-engine-websocket-proxy-3.4.0-1.el6.noarch
 ovirt-engine-setup-plugin-ovirt-engine-common-3.4.0-1.el6.noarch
 ovirt-engine-dbscripts-3.4.0-1.el6.noarch

 vdsm-4.14.6-0.el6.x86_64
 vdsm-xmlrpc-4.14.6-0.el6.noarch
 vdsm-cli-4.14.6-0.el6.noarch
 vdsm-python-zombiereaper-4.14.6-0.el6.noarch
 vdsm-python-4.14.6-0.el6.x86_64

 qemu-img-0.12.1.2-2.415.el6_5.8.x86_64
 qemu-kvm-tools-0.12.1.2-2.415.el6_5.8.x86_64
 qemu-kvm-0.12.1.2-2.415.el6_5.8.x86_64
 gpxe-roms-qemu-0.9.7-6.9.el6.noarch

 Is there a special trick to get this working, or could something be
 wrong? When it comes to creating a guest, I don't see a Server 2012 R2
 64-bit option in the drop-down list.

 Thanks.

 Regards.

 Neil Wilson.
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM 4.15 cluster compatibility

2014-05-23 Thread Dan Kenigsberg
On Fri, May 23, 2014 at 02:04:26PM +0100, Jon Archer wrote:
 Hi all,
 
 just upgraded to the latest nightly and am faced with a host/cluster
 compatibility issue; it seems my cluster is only compatible with vdsm [4.13,
 4.14, 4.9, 4.11, 4.12, 4.10], but I'm now running 4.15.
 
 Any ideas on how to upgrade the cluster to be 4.15 compatible?

Which version of Engine are you running?
(I really hope it's not a nightly build; that would mean that

Bug 1016461 - [vdsm] engine fails to add host with vdsm version 4.13.0

was not solved properly)

What is your cluster level?
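(A quick way to see which cluster levels a host's vdsm actually reports,
assuming vdsm-cli is installed on the host; just a sketch:)

vdsClient -s 0 getVdsCaps | grep -i clusterLevels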

Regards,
Dan.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users