Re: [ovirt-users] schedule a VM backup in ovirt 3.5

2015-10-22 Thread Christopher Cox

On 10/22/2015 10:46 PM, Indunil Jayasooriya wrote:
...


Hmm,

How to list the snapshot?

How to backup the VM with snapshot?

Finally, how to remove this snapshot?


Then, I think it will be over. Yesterday, I tried a lot, but had no success.

Hope to hear from you.


Not exactly "help" but AFAIK, even with 3.5, there is no live merging of 
snapshots so they can't be deleted unless the VM is down.  I know for large 
snapshots that have been around for awhile removing them can take some time too.
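Not an official recipe, but for reference the snapshot operations in question are all exposed over the REST API. A minimal sketch with curl, assuming a 3.x engine at a placeholder URL with placeholder credentials and VM/snapshot IDs (and remembering that on 3.5 the delete still requires the VM to be down):

```bash
# Illustrative only -- engine host, credentials, VM and snapshot IDs are
# placeholders.  The /api base path is the 3.x-era one; adjust for your setup.
ENGINE="https://engine.example.com/api"
AUTH="admin@internal:PASSWORD"

# List a VM's snapshots
curl -s -k -u "$AUTH" "$ENGINE/vms/VM_ID/snapshots"

# Create a snapshot (the "backup" step then usually clones or exports from it)
curl -s -k -u "$AUTH" -H 'Content-Type: application/xml' \
     -d '<snapshot><description>backup</description></snapshot>' \
     "$ENGINE/vms/VM_ID/snapshots"

# Remove the snapshot (on 3.5, only while the VM is down)
curl -s -k -u "$AUTH" -X DELETE "$ENGINE/vms/VM_ID/snapshots/SNAPSHOT_ID"
```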


Others feel free to chime in...


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is a dedicated oVirt mgmt VLAN still needed for oVirt host nodes?

2015-09-15 Thread Christopher Cox

On 09/15/2015 03:28 PM, Gianluca Cecchi wrote:


On 15/Sep/2015 05:10, > wrote:
 >
 > We have an oVirt environment that I inherited.  One is running 3.4.0 and
 > one is running 3.5.0.
 >
 > Seem in both cases the prior administrator stated that a dedicated VLAN
 > was necessary for oVirt mgmt.  That is, we could not run multiple tagged
 > VLANs on a nic for a given oVirt host node.
 >
 > Does any of this make sense?  Is this true?  Is it still true for more
 > contemporary versions of oVirt?
 >
 > My problem is that our nodes are blades and I only have two physical nics
 > per blade.  In our network for redundancy we need to have the two nics
 > have the same VLANs so that things failover ok.  Which means we have to
 > share the oVirt mgmt network on the same wire.  That's the ideal.

Hi, my opinion:

ovirt-engine supports configuring ovirtmgmt as a non-VM network, untagged,
and on top of that NIC (a bond, in your possible specific case) configuring
all of the VLANs.
If you configure it as a VM network and want to put other networks on the bond
too, then you have to configure both ovirtmgmt and the other ones as tagged
VLANs; you cannot mix tagged and untagged in this case.
Or at least that is how I remember the configuration.
Valid for 3.4 and 3.5, I think.
Can any of the network maintainers confirm?
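To make the untagged-ovirtmgmt-plus-tagged-VLANs layout concrete, here is a rough sketch of what it tends to look like on the host side. Interface names and VLAN IDs are made up, and on a real host VDSM writes the equivalent files for you when you assign the networks in the engine:

```bash
# Illustrative only -- device names and VLAN IDs are placeholders, and on a
# real host VDSM generates the equivalent config when networks are assigned.

# /etc/sysconfig/network-scripts/ifcfg-bond0 (untagged; carries ovirtmgmt):
#   DEVICE=bond0
#   BONDING_OPTS="mode=802.3ad miimon=100"
#   BRIDGE=ovirtmgmt
#   ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0.100 (tagged VLAN 100, a VM network):
#   DEVICE=bond0.100
#   VLAN=yes
#   BRIDGE=vmnet100
#   ONBOOT=yes

# Quick checks that the pieces exist:
cat /proc/net/bonding/bond0      # bond mode and slave state
ip -d link show bond0.100        # the tagged VLAN sub-interface
ip link show type bridge         # ovirtmgmt and the per-VLAN bridges
```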


The import oVirt setup is 3.4 based, so it's important that we can run all the 
VLANs on the same NIC for that version.

Thanks to any who can verify or confirm.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Need to clear domain Export already exists in an ovirt (3.5)

2015-12-04 Thread Christopher Cox

On 12/03/2015 08:55 AM, Eli Mesika wrote:



- Original Message -

From: c...@endlessnow.com
To: users@ovirt.org
Sent: Wednesday, December 2, 2015 11:02:25 PM
Subject: [ovirt-users] Need to clear domain Export already exists in an ovirt 
(3.5)

Our ovirt 3.5 host thinks it has an export domain, but it's not visible
anywhere, but it's keeping us from importing a domain from a different
datacenter.  What database update do I need to issue to clear the bad
state from the ovirt 3.5 we are trying to Import into?


Hi

Can you please provide the result of the following query

select * from storage_domain_static where storage_domain_type = 3;

Is that the storage domain you wish to remove ?


Correct.

engine=> select * from storage_domain_static where storage_domain_type = 3;

-[ RECORD 1 ]--------------+--------------------------------------
id                         | 57fdd1c6-5c76-4174-8df7-50dbe82bc957
storage                    | d0eebb52-b9f6-4b11-8bc1-f100880519dd
storage_name               | Export
storage_domain_type        | 3
storage_type               | 1
storage_domain_format_type | 0
_create_date               | 2015-12-02 13:57:10.111442-06
_update_date               |
recoverable                | t
last_time_used_as_master   | 0
storage_description        |
storage_comment            |
-[ RECORD 2 ]--------------+--------------------------------------
id                         | c29afe14-0bc5-4fde-ab15-3f98f1ef9ed0
storage                    | 2e0bb17a-5f0c-4a24-9ab7-312de3f718be
storage_name               | MoveIt
storage_domain_type        | 3
storage_type               | 1
storage_domain_format_type | 0
_create_date               | 2014-09-24 15:36:03.066565-05
_update_date               |
recoverable                | t
last_time_used_as_master   | 0
storage_description        |
storage_comment            |

(2 rows)


These don't exist on the datacenter.
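For what it's worth, the shape of the cleanup such a thread usually ends in is sketched below. This is my illustration, not the fix Eli prescribed: the storage_domain_static table name comes from the query above, but the storage_domain_dynamic table and the deletion itself are assumptions. Take a full engine database backup and confirm with the developers before touching anything.

```bash
# Hypothetical cleanup sketch only -- verify the IDs, take a full engine DB
# backup, and expect other referencing tables to need the same treatment.
su - postgres -c "psql engine" <<'SQL'
BEGIN;
-- remove the stale export-domain rows shown above
DELETE FROM storage_domain_dynamic
 WHERE id IN ('57fdd1c6-5c76-4174-8df7-50dbe82bc957',
              'c29afe14-0bc5-4fde-ab15-3f98f1ef9ed0');
DELETE FROM storage_domain_static
 WHERE id IN ('57fdd1c6-5c76-4174-8df7-50dbe82bc957',
              'c29afe14-0bc5-4fde-ab15-3f98f1ef9ed0');
COMMIT;
SQL
```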

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] On 3.6.6, tried doing a live VM storage migration... didn't work

2016-05-25 Thread Christopher Cox
In our old 3.4 ovirt, I know I've migrated storage on live VMs and everything 
seemed to work.


However on 3.6.6, I tried this and I saw the warning about moving storage on a 
live VM (it wasn't doing much of anything) and I went ahead and migrated the 
storage from one storage domain to another.   But when it was through, even 
though the VM was still alive, when I tried to write to a virtual disk that was 
part of the move, it paused the VM saying there wasn't enough storage.


I could unpause the VM, but in a few seconds, with things writing to the virtual 
disk, again it was paused with the same out of space message.  Vdsm logs showed 
the enospc return code... so it made sense, it's just that the VM shows plenty 
of storage there.  Once I rebooted the VM, everything went back to normal.


So is moving storage for a live VM not supported?  I guess we got lucky in our 
3.4 system (?)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt-Engine VM

2016-06-20 Thread Christopher Cox

On 06/20/2016 04:41 AM, Carlos García Gómez wrote:

Hello,
I have just arrived and I have my first question, about the oVirt engine:
+ Can it be a physical server?

Yes (ours is physical)

+ Can it be a virtual machine?

Yes

   - Does it run under oVirt, or can it run under other platforms like VMware
or Hyper-V?
Yes, and it can run on an oVirt node, but that's maybe not the best way to run it 
(some chicken-and-egg scenarios are possible).

I would like to run it under VMware. I know this is a contradiction, but this
is the heart of the solution and I prefer to run it under a known platform.
What do you recommend?

I don't see running under VMware as that much different than running on some 
other out-of-band (with respect to oVirt) platform.

Operating system? CentOS 6 or CentOS 7?

CentOS 7


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NAS for oVirt

2016-12-24 Thread Christopher Cox

On 12/23/2016 04:45 PM, rightkicktech.gmail.com wrote:

Hi Mikhail,

Thank you for your suggestion.
Have you had any performance issues with FreeNAS? It has been mentioned on some
blogs that FreeNAS might have performance issues. Not sure why.
A clean CentOS with NFS sounds OK also. What do you do if you need snapshots of
data? LVM snapshots?


So... if you have a VERY limited network, that is 1Gbit, for example, you'll find 
that to be one of the biggest performance bottlenecks for a NAS.


Assuming you have good 10Gbit, the bottleneck likely switches to storage.  For 
spinning rust, number of spindles can help.  Generally speaking, because this is 
NAS, you might not get much out of rotational speeds and seek times on those 
higher end drives.  That is to say, you won't see huge differences between 7200 
rpm and even 15K rpm.  SSD is a different thing, obviously, it will perform the 
best and allow you to make the most of that 10Gbit (or more) connection to storage.


Memory.  On a NAS, memory is important.  I wouldn't go for anything less than 
32GB.  The more you can cache the better.  With regards to "sync" vs "async"... 
if you have an enterprise setup where power is reliable, going with "async" could 
really help on writes.  If you need lots and lots of NAS storage, NFS 4.1 (which 
can perform better in certain cases anyway) will allow you to offload some 
operations when you have a parallel NFS configuration.
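For illustration, "async" is just an NFS export option. A minimal sketch, assuming a made-up export path and client network (and accepting the data-loss trade-off described above):

```bash
# Illustrative only -- path and client network are placeholders; "async" trades
# crash safety for write speed, per the caveats above.
cat >> /etc/exports <<'EOF'
/exports/nas-data  10.0.0.0/24(rw,async,no_root_squash,no_subtree_check)
EOF
exportfs -ra     # re-export with the new options
exportfs -v      # confirm 'async' shows up in the active export list
```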


I'm running a NAS on CentOS with 10Gbit iSCSI-attached SAN storage with lots of 
spindles in RAID6.  Again, if you have "good" storage... it's just storage, and 
now you can focus on the NAS head itself.  I run NFS 4.1 there and have 32G of 
RAM.  I run "async" since this is in the datacenter, the data is backed up, and 
there is lots of power redundancy.  You can increase write performance by 40-60% 
doing that (btw, this is what the "big boys" do to post the numbers they post).


By using a CentOS-based NAS, I get the flexibility of doing CIFS via Samba4 as 
an option... that isn't oVirt, but it helps for times when I need to expose storage 
to Windows hosts with full Windows permissions support.


Underneath, again, since it's a home-grown thing, I can use a broad mixture of 
storage configurations and filesystems.  If you feel very uncomfortable with Linux 
(which would be odd if you chose oVirt)... then maybe go FreeNAS, but I'm saying 
you can do a whole lot better.


But, I'm guessing this is possibly a low-end network.  In which case, again, 
that's going to be the primary performance bottleneck.  I'd go for space and 
price, realizing that performance will never be all that awesome.  With that 
said, there are lots of ultra-cheap home NAS units that say they are gigabit, 
but often can deliver no more than 300Mbit or so.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cloning VM on a different data domain than master

2017-08-22 Thread Christopher Cox

On 08/22/2017 07:20 AM, Yohan JAROSZ wrote:

Dear list,

When cloning a VM, is it possible to select the data domain to which it will be 
cloned?
I have two data domains attached to my cluster and it automatically clones the 
VM to the master one.



Clones are not copies, but rather built using snapshots, which is why 
they are on the same storage domain(s) as the original disk(s).


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] chrony or ntp ?

2017-06-11 Thread Christopher Cox

On 06/11/2017 04:19 AM, Fabrice Bacchella wrote:



On 10 June 2017 at 22:21, Michal Skrivanek wrote:


On 09 Jun 2017, at 15:48, Fabrice Bacchella  wrote:


People might be surprised. I'm currently trying to understand what chrony did to 
my ntpd setup; it looks like it killed it, and puppet is having a hard time 
reconfiguring it.

And as it's not an "ovirt update" but a vdsm update, which seems to happen more 
frequently, some people might forget to read the release notes and be disappointed.


We do not configure anything, just pull in the dependency. You're free to
disable the service as a common admin task, as long as you replace it
with another time synchronization solution.


Yes, that's what I've done, but beware of users complaining about a broken ntp 
service because their specially crafted ntpd configuration is now lying dead. I 
detected it because my puppet setup tried to uninstall chrony and failed. What 
about other users? Do the default chrony settings always work, for everyone?



Since you mentioned puppet, here's our puppet pp and template erb we use; hope 
it helps.  IMHO, ntp has problems that chrony doesn't have:


chrony/manifests/init.pp:

# This class is really only for CentOS 7 or higher.
#
class chrony (
  $stratumweight      = 0,
  $driftfile          = '/var/lib/chrony/drift',
  $keyfile            = '/etc/chrony.keys',
  $keyfile_commandkey = 1,
  $generatecommandkey = true,
  $logdir             = '/var/log/chrony',
  $noclientlog        = true,
  $logchange          = '0.5',
  $makestep_enable    = true,
  $makestep_threshold = 10,
  $makestep_update    = -1,
  $bindcmdaddress     = '127.0.0.1',
  $servers            = ['ntp1.example.com', 'ntp2.example.com'],
  $iburst_enable      = true,
  $rtcsync_enable     = false,
) {
  if $operatingsystem in ['CentOS', 'RedHat'] and ($::operatingsystemmajrelease + 0) >= 7 {

    ensure_packages(['chrony'])
    # Red Hat, CentOS don't readily have ability to change location of conf
    #  file.
    $conf_file = '/etc/chrony.conf'

    service { 'chronyd':
      ensure  => 'running',
      enable  => true,
      require => Package['chrony'],
    }

    file { $conf_file:
      ensure  => present,
      group   => 'root',
      mode    => '644',
      owner   => 'root',
      content => template('chrony/chrony_conf.erb'),
      notify  => Service['chronyd'],
      require => Package['chrony'],
    }
  } else {
    notify { 'chrony only supported in CentOS/RHEL 7 or greater': }

    exec { '/bin/false': }
  }
}

chrony/templates/chrony_conf.erb

<% @servers.flatten.each do |server| -%>
server <%= server %><% if @iburst_enable == true -%> iburst<% end -%>

<% end -%>

<% if @stratumweight -%>
stratumweight <%= @stratumweight %>
<% end -%>
<% if @driftfile -%>
driftfile <%= @driftfile %>
<% end -%>
<% if @makestep_enable == true -%>
makestep <%= @makestep_threshold %> <%= @makestep_update %>
<% end -%>
<% if @rtcsync_enable == true -%>
rtcsync
<% end -%>
<% if @bindcmdaddress -%>
bindcmdaddress <%= @bindcmdaddress %>
<% end -%>
<% if @keyfile -%>
keyfile <%= @keyfile %>
<%   if @keyfile_commandkey -%>
commandkey <%= @keyfile_commandkey %>
<%   else -%>
commandkey 0
<%   end -%>
<%   if @generatecommandkey == true -%>
generatecommandkey
<%   end -%>
<% end -%>
<% if @noclientlog -%>
noclientlog
<% end -%>
<% if @logchange -%>
logchange <%= @logchange %>
<% end -%>
<% if @logdir -%>
logdir <%= @logdir %>
<% end -%>



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine randomly updated 1 package on all my hosts overnight

2017-09-17 Thread Christopher Cox

On 09/17/2017 02:00 AM, Yedidyah Bar David wrote:

On Fri, Sep 15, 2017 at 5:12 PM, Charles Kozler  wrote:

Hello -

Can anyone just briefly tell me if this is expected behavior or not?

I know you can tell the engine to update hosts, but nobody was using the
engine and I see the engine logging in and the yum command being run so I am
curious if this is expected or not?


My iproute mysteriously updated as well.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt high points

2017-09-07 Thread Christopher Cox

On 09/07/2017 06:33 AM, david caughey wrote:

Hi Folks,

I'm giving a demo of our new 3 node oVirt deployment next week and am 
looking for some high points that I can give to the Managers that will 
be a sell point.


Could be hard to sell.  It's not like VMware (all in all) is deficient 
functionality wise.



If you could help with the below questions I would really appreciate it:

Who are the big users of oVirt??


We use oVirt in production.  We have about 130 VMs on a 9-node cluster 
using Dell blades.  It houses both our test and production VMs.  We have 
a separate oVirt setup for development hosting probably about 20 VMs 
(maybe less); it's a 7-node cluster (but with much lesser blades there).


In both cases they are connected to EqualLogic iSCSI SAN equipment with 
multiple tiers of storage.  Each production blade has 4 x 10Gbit iSCSI 
(multi)paths to storage.  The production blade subsystem uses multiple 
40Gbit links, for iSCSI storage and for LAN.  There are just 10Gbit links 
and 1Gbit paths on the development blades and subsystem.


Both use a dedicated oVirt management host.

The production (and test) blades run oVirt 3.6 and the dev blades are 
oVirt 3.5.


About 2 years ago we migrated our production blades from oVirt 3.4 on 
older blades and older SAN equipment to oVirt 3.6 on new blades and new 
SAN storage.  We used oVirt's export domain to facilitate the move.


We will be migrating off the development cluster and we are setting up a 
new cluster on the same DC as our production area which will be used to 
house both test and development.  Thus we are moving to just the one 
oVirt 3.6 (we're adding 5 extra blades for that cluster).


Btw, our VMs include multiple versions of CentOS, Windows Server and 
Windows desktops (and even some Docker nodes, but we're redoing all of 
that).  Our VMs include about 10 large PostgreSQL database servers, some 
MySQL, several JBoss servers, many web microservice (Spring Boot) 
servers and lots of application infrastructure servers.




Why oVirt and not vMware??
(we are a big vMware house so free doesn't cover it)


Uh, free. And to be honest, that's the best reason to do this, IMHO.



What is the future for oVirt??


Unknown.  But pretty sure Red Hat will want to keep RHEV around, which 
means oVirt probably will be here for quite some time.




Why do you use oVirt??


Free.



Any links or ideas appreciated,


oVirt is NOT VMware.  But if you do things "well" oVirt works quite 
well.  Follow the list to see folks that didn't necessarily do things 
"well" (sad, but true).


I inherited this oVirt... not ideal for blades because it's better to 
have lots of networks.  We just have two blade fabrics, one for SAN and 
one for the rest, and it would be nice to have ovirtmgmt and migration 
networks be isolated.  With that said, with our massively VLAN'd setup, 
it does work and has been very reliable.  For performance reasons, I 
recommend that you attempt to dedicate a host for SPM, or at least keep 
the number of VMs deployed there to a minimum.  There are tweaks in the 
setup to keep VMs off the SPM node (talking mainly if you have a 
massively combined network like I have currently).


We've survived many bad events with regards to SAN and power, which is a 
tribute to oVirt's reliability.  However, you can shoot yourself in the 
foot very easily with oVirt... so just be careful.


Is VMware better?  Yes.  Is it more flexible than oVirt?  Yes. Is it 
more reliable than oVirt? Yes.  In other words, if money is of no 
concern, VMware and VCenter.


We will likely never do VMware here due to cost (noting that the cost 
is in VCenter, and IMHO it's not horrible, but I do not control the 
wallet here, and we tend to prefer FOSS here... and FOSS is my personal 
preference as well).


Companies generally speaking just want something that works.  And oVirt 
does work.  But if money is of no concern and you need the friendliness 
of something VCenter like (noting that not everyone needs VCenter or 
RHEV-M or oVirt Manager), then VMware is still better.


If you don't need something VCenter-like, I can also say that libvirt 
(KVM) and virt-manager are reasonable, and we use that as well.  But 
we also have a (free) ESXi (because we have to, forced requirement).


The ovirtmgmt web ui is gross IMHO.  It's a perfect example of an 
overweight UI where a simplified UI would have been cleaner, faster and 
better.  Just because you know how to write thousands of lines of 
javascript doesn't mean you should.  Not everything needs to act like a 
trading floor application or facebook.  The art of efficient UI design 
has been lost.  With that said, the RESTful i/f part is nice.  Nice to 
the point of not needing the SDK.
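As a trivial illustration of that, listing VMs is one authenticated GET away. The engine hostname and credentials below are placeholders, and the /api base path is the 3.x-era one:

```bash
# Illustrative only -- engine hostname and credentials are placeholders.
# On 3.x the base path was /api; newer releases also answer at /ovirt-engine/api.
curl -s -k -u 'admin@internal:PASSWORD' \
     -H 'Accept: application/xml' \
     'https://engine.example.com/api/vms' | head -n 20
```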


Finally, VMware can be expensive.  It's not a "one time" purchase.  It 
HAS TO BE ongoing.  And it can get very expensive if not understood. 
With that said, if you have anything Microsoft in the enterprise, you 
already understand and are prepared to 

Re: [ovirt-users] ovirt high points

2017-09-07 Thread Christopher Cox

On 09/07/2017 04:06 PM, Dan Yasny wrote:



On Thu, Sep 7, 2017 at 5:04 PM, Christopher Cox <c...@endlessnow.com> wrote:



Not to defend VMware too much, but if you buy certified HW for
VMware, I've never had the purple screen of death.  With that said,
I have had the purple screen of death using non-certified VMware HW.


I've worked for a certified hardware vendor; you would not believe the 
number of calls we received for PSODs every day :)


Hmmm... I guess I was fortunate.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt high points

2017-09-07 Thread Christopher Cox

On 09/07/2017 03:49 PM, Michael Kleinpaste wrote:
...snippity di do da...
really, so purchasing VMware just doesn't make sense.  We've also found 
VMware to not be as stable as oVirt.  Our initial oVirt system has been 
running for about 3-4 years (I'd have to look at the ticket to get the 
exact date) without so much as a reboot.  Our VMware systems kept 
getting kernel errors about every 2 months, further prompting us to look 
elsewhere.


Not to defend VMware too much, but if you buy certified HW for VMware, 
I've never had the purple screen of death.  With that said, I have had 
the purple screen of death using non-certified VMware HW.


Just saying... and again, it does mean that you need to usually spend 
more for an effective VMware setup.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Convert local storage domain to shared

2017-11-29 Thread Christopher Cox

On 11/29/2017 09:39 AM, Demeter Tibor wrote:


Dear Users,

We have an old ovirt 3.5 install with a local and a shared cluster. Meanwhile we
have created a new data center that is based on 4.1 and uses only shared 
infrastructure.
I would like to migrate a big VM from the old local datacenter to our new one, but
I don't have enough downtime.

Is it possible to convert the old local storage to shared (by sharing it via NFS)
and attach it as a new storage domain to the new cluster?
I just want to import the VM and copy it (while running) with the live storage
migration function.

I know the official way to move VMs between oVirt clusters is the export
domain, but it has very big disks.

What can I do?


Just my opinion, but if you don't figure out a way to have occasional downtime, 
you'll probably pay the price with unplanned downtime eventually (and it could 
be painful).


Define "large disks"?  Terabytes?

I know for a fact that if you don't have good network segmentation that live 
migrations of large disks can be very problematic.  And I'm not talking about 
what you're wanting to do.  I'm just talking about storage migration.


We successfully migrated hundreds of VMs from a 3.4 to a 3.6 (on new blades and 
storage) last year over time using the NFS export domain method.


If storage is the same across DC's, you might be able to shortcut this with 
minimal downtime, but I'm pretty sure there will be some downtime.


I've seen large storage migrations render entire nodes offline (not nice) due to 
non-isolated paths or QoS.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practice for iSCSI storage domains

2017-12-07 Thread Christopher Cox

On 12/07/2017 01:10 AM, Richard Chan wrote:

What is the best practice for iSCSI storage domains:

Many small targets vs a few large targets?

Specific example: if you wanted a 8TB storage domain would you prepare a 
single 8TB LUN or (for example) 8 x 1 TB LUNs.


There's no "best" answer for this.  However, if you know you're going to 
be doing a ton of VMs (storage), then less is best.  The problem with 
"less" is that you might end up wasting more space for longer periods of 
time.  Not necessarily an issue for subsystems with thin provisioning, 
but just something to be aware of.


The number of virtual disks can get extreme.  And while that's not 
really related to the iSCSI, just getting all those virtual disks online 
(for example, after node maintenance) can get time consuming.


Took me over 30 minutes the other day.

Obviously that tends to get worse as the number of virtual disks increases.

So, there is something to be said about trying to reduce the number of 
virtual disks.


Our standard for new storage domains is 6TB.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Re: VM interface bonding (LACP)

2018-05-15 Thread Christopher Cox

In the ideal case, what you'd have:

        | Single virtio virtual interface
        |
 VM ------- Host ------- Switch stack
                  |
                  |--- 4x 1Gbit interfaces bonded over LACP

The change: virtio instead of "1 Gbit"

You can't get blood from a stone; that is, you can't manufacture 
bandwidth that isn't there.  If you need more than gigabit speed, you 
need something like 10Gbit.  Realize that usually we're talking about a 
system created to run more than one VM.  If just one, you'll do better 
with dedicated hardware.  If more than one VM, then there is sharing going 
on, though you might be able to use QoS (either in oVirt or outside). 
Even so, with just one VM on 10Gbit, you won't necessarily get full 10Gbit 
out of virtio.  But at the same time, bonding should help in the case of 
multiple VMs.


Now, back to the suggestion at hand: multiple virtual NICs.  If the 
logical networks presented via oVirt are such that each (however many) 
logical network has its own "pipe", then defining a vNIC on each of 
those networks gets you the same sort of "gain" with respect to bonding. 
 That is, no magic bandwidth increase for a particular connection, but 
more pipes available for multiple connections (essentially what you'd 
expect).
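A minimal sketch of the guest side of that approach: two vNICs, each attached (in the engine) to a different logical network, each with its own address, and no bonding inside the guest at all. Device names and addresses are made up:

```bash
# Illustrative only -- device names, networks and addresses are placeholders.
# eth0 sits on one oVirt logical network, eth1 on another; traffic is spread
# by using different addresses/routes, not by bonding inside the guest.
nmcli con add type ethernet ifname eth0 con-name net-a ipv4.method manual \
      ipv4.addresses 10.0.10.25/24 ipv4.gateway 10.0.10.1
nmcli con add type ethernet ifname eth1 con-name net-b ipv4.method manual \
      ipv4.addresses 10.0.20.25/24
nmcli con up net-a && nmcli con up net-b
```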


Obviously it's up to you how you want to do this.  I think you might do 
better to consider a better underlying infrastructure for oVirt rather 
than trying to bond vNICs.  Pretty sure I'm right about that.  I would 
think the idea of bonding at the VM level might be best for simulating 
something, rather than something you do because it's right/best.




On 05/14/2018 03:03 PM, Doug Ingham wrote:
On 14 May 2018 at 15:35, Juan Pablo > wrote:


so you have LACP on your host, and you want LACP also on your VM...
somehow that doesn't sound correct.
there are several LACP modes. which one are you using on the host?


  Correct!

        | Single 1Gbit virtual interface
        |
 VM ------- Host ------- Switch stack
                  |
                  |--- 4x 1Gbit interfaces bonded over LACP

The traffic for all of the VMs is distributed across the host's 4 bonded 
links, however each VM is limited to the 1Gbit of its own virtual 
interface. In the case of my proxy, all web traffic is routed through 
it, so its single Gbit interface has become a bottleneck.


To increase the total bandwidth available to my VM, I presume I will 
need to add multiple Gbit VIFs & bridge them with a bonding mode.
Balance-alb (mode 6) is one option, however I'd prefer to use LACP (mode 
4) if possible.



2018-05-14 16:20 GMT-03:00 Doug Ingham:

On 14 May 2018 at 15:03, Vinícius Ferrão wrote:

You should use better hashing algorithms for LACP.

Take a look at this explanation:

https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en



In general only L2 hashing is made, you can achieve better
throughput with L3 and multiple IPs, or with L4 (ports).

Your switch should support those features too, if you’re
using one.

V.


The problem isn't the LACP connection between the host & the
switch, but setting up LACP between the VM & the host. For
reasons of stability, my 4.1 cluster's switch type is currently
"Linux Bridge", not "OVS". Ergo my question, is LACP on the VM
possible with that, or will I have to use ALB?

Regards,
  Doug



On 14 May 2018, at 15:16, Doug Ingham wrote:

Hi All,
  My hosts have all of their interfaces bonded via LACP to
maximise throughput, however the VMs are still limited to
Gbit virtual interfaces. Is there a way to configure my VMs
to take full advantage of the bonded physical interfaces?

One way might be adding several VIFs to each VM & using ALB
bonding, however I'd rather use LACP if possible...

Cheers,
--
Doug


-- 
Doug





--
Doug


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


[ovirt-users] Re: FC Storage

2018-06-12 Thread Christopher Cox

On 06/12/2018 01:04 PM, Carlos Cesario wrote:

Hi Folks,


Could someone suggest the best way to use FC storage with oVirt?


Is there any way to create a datastore like VMware does with LUNs? Or do I need 
to use the LUNs as VM disks?



Sorry to compare with VMware, but currently I'm using VMware and planning to 
move to an oVirt solution.




You basically carve out LUNs to create Storage Domains in oVirt and 
create virtual disks out of that (the Storage Domain acts like a pool of 
storage for oVirt use).


While there are use cases for direct LUN mapping, it's not the "norm".
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OV4WPCHNEWB7XFSTL5VQWXYQWET53JXL/


[ovirt-users] Re: [ANN] oVirt 4.2.4 Second Release Candidate is now available

2018-05-31 Thread Christopher Cox

On 05/31/2018 11:55 AM, Karli Sjöberg wrote:



On 31 May 2018 at 17:57, Christopher Cox wrote:

I found some humor in the signature... so, my own spin.

TRIED (by a few). TESTED (a bit). TRUSTED (for now).


Hah, that is awesome, LOL! :) Nothing negative towards any devs, I know 
you do your best, but with some self-distance, the above is very apt.


1) Our community isn't that large
2) You test what you can, but certainly not all possible (sometimes 
impossible :)) scenarios
3) A lot of the community, myself included, are hesitant to upgrade 
until the next major release, always staying one major behind.


Not complaining one bit; I have what I paid for (not a dime), yet I've gotten 
so much out of it, it's amazing! Big thanks to everyone involved for the 
work you do!




I second the sentiment to all the devs who work on oVirt.  It's really 
really good stuff considering.


But as a FOSS contributor, I stand behind my modification of the sig 
above.  It's akin to the "Linux sucks" motto (it just sucks less).

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DZ5W37EDV2DW4F6WG7QRRQKPHBCRJETA/


[ovirt-users] Re: [ANN] oVirt 4.2.4 Second Release Candidate is now available

2018-05-31 Thread Christopher Cox

I found some humor in the signature... so, my own spin.

TRIED (by a few). TESTED (a bit). TRUSTED (for now).

(sort of the FOSS mantra, you know?)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KWAQ42RZI2YQZALY4HS3I2D3E74Y4FT2/


[ovirt-users] Re: unable to create iSCSI storage domain

2018-06-22 Thread Christopher Cox

On 06/22/2018 10:20 AM, Bernhard Dick wrote:

Hi,

I have a problem creating an iSCSI storage domain. My hosts are running 
the current ovirt 4.2 engine-ng version. I can detect and log in to the 
iSCSI targets, but I cannot see any LUNs (on the LUNs > Targets page).
That happens with our storage and with a Linux-based iSCSI target which 
I created for testing purposes.
When I log on to the oVirt hosts I see that they are connected to the 
target LUNs (dmesg reports that iSCSI devices are being found 
and they are getting assigned to devices in /dev/sdX). Writing to and 
reading from the devices (also across hosts) works. Do you have some 
advice on how to troubleshoot this?


Stating the obvious... you're not LUN masking them out?  Normally, you'd 
create an access mask that allows the oVirt hypervisors to see the LUNs. 
But without that, maybe your default security policy is to deny all (?).
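A few generic checks from one of the hosts can at least show whether the LUNs are visible to the OS versus just not showing up in the engine UI (nothing oVirt-specific here):

```bash
# Generic checks -- nothing here is oVirt-specific.
iscsiadm -m session -P 3   # sessions plus the attached SCSI devices/LUNs
lsblk                      # do the LUNs show up as block devices?
multipath -ll              # are multipath maps being built on top of them?
```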

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VQWW4QVPFDYN4VIO2NTWBWI5XKKECATF/


Re: [ovirt-users] How is everyone performing backups?

2017-10-27 Thread Christopher Cox

On 10/27/2017 10:27 AM, Wesley Stewart wrote:
Originally, I used a script I found on github, but since updating I 
can't seem to get that to work again.


I was just curious if there were any other more elegant type solutions?  
I am currently running a single host and local storage, but I would love 
to backup VM's automatically once a week or so to an NFS share.


Just curious if anyone had tackled this issue.



We use our normal backup process for VMs.  Why?  Because it's pretty 
simple to put up a templated VM and then copy data over the top of it from 
the backups.


Just an idea (we've been doing this for years now)

At some point I need to push our system out to the world.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Has meltdown impacted glusterFS performance?

2018-01-26 Thread Christopher Cox
Does it matter?  This is just one of those required things.  IMHO, most 
companies know there will be impact, and I would think they would accept 
any informational measurement after the fact.


There are probably only a few cases where timing is so tight that such 
a skew would matter.


Just saying...


On 01/26/2018 11:48 AM, Jayme wrote:
I've been considering hyperconverged oVirt setup VS san/nas but I wonder 
how the meltdown patches have affected glusterFS performance since it is 
CPU intensive.  Has anyone who has applied recent kernel updates noticed 
a performance drop with glusterFS?




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 3.6, we had the ovirt manager go down in a bad way and all VMs for one node marked Unknown and Not Reponding while up

2018-01-25 Thread Christopher Cox

On 01/25/2018 02:25 PM, Douglas Landgraf wrote:

On Wed, Jan 24, 2018 at 10:18 AM, Christopher Cox <c...@endlessnow.com> wrote:

Would restarting vdsm on the node in question help fix this?  Again, all the
VMs are up on the node.  Prior attempts to fix this problem have left the
node in a state where I can issue the "has been rebooted" command to it,
it's confused.

So... node is up.  All VMs are up.  Can't issue "has been rebooted" to the
node, all VMs show Unknown and not responding but they are up.

Chaning the status is the ovirt db to 0 works for a second and then it goes
immediately back to 8 (which is why I'm wondering if I should restart vdsm
on the node).


It's not recommended to change db manually.



Oddly enough, we're running all of this in production.  So, watching it all
go down isn't the best option for us.

Any advice is welcome.



We would need to see the node/engine logs, have you found any error in
the vdsm.log
(from nodes) or engine.log? Could you please share the error?



In short, the "error" is that our ovirt manager lost network (our problem) and 
crashed hard (hardware issue on the server).  On bring-up, we had some 
network changes (that caused the lost-network problem), so our LACP bond 
was down for a bit while we were trying to bring it up (noting the ovirt 
manager is up while we're reestablishing the network on the switch side).


In other words, that's the "error", so to speak, that got us to where we are.

Full DEBUG is enabled on the logs... The error messages seem obvious to 
me... it starts like this (noting the ISO DOMAIN was coming off an NFS 
mount on the ovirt management server... yes... we know... we do have 
plans to move that).


So on the hypervisor node itself, from the vdsm.log (vdsm.log.33.xz):

(hopefully no surprise here)

Thread-2426633::WARNING::2018-01-23 
13:50:56,672::fileSD::749::Storage.scanDomains::(collectMetaFiles) Could 
not collect metadata file for domain path 
/rhev/data-center/mnt/d0lppc129.skopos.me:_var_lib_exports_iso-20160408002844

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/fileSD.py", line 735, in collectMetaFiles
sd.DOMAIN_META_DATA))
  File "/usr/share/vdsm/storage/outOfProcess.py", line 121, in glob
return self._iop.glob(pattern)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 
536, in glob

return self._sendCommand("glob", {"pattern": pattern}, self.timeout)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 
421, in _sendCommand

raise Timeout(os.strerror(errno.ETIMEDOUT))
Timeout: Connection timed out
Thread-27::ERROR::2018-01-23 
13:50:56,672::sdc::145::Storage.StorageDomainCache::(_findDomain) domain 
e5ecae2f-5a06-4743-9a43-e74d83992c35 not found

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain
return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
  File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'e5ecae2f-5a06-4743-9a43-e74d83992c35',)
Thread-27::ERROR::2018-01-23 
13:50:56,673::monitor::276::Storage.Monitor::(_monitorDomain) Error 
monitoring domain e5ecae2f-5a06-4743-9a43-e74d83992c35

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 272, in _monitorDomain
self._performDomainSelftest()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 769, in 
wrapper

value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 339, in 
_performDomainSelftest

self.domain.selftest()
  File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__
return getattr(self.getRealDomain(), attrName)
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 124, in _realProduce
domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain
return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
  File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'e5ecae2f-5a06-4743-9a43-e74d83992c35',)



Again, all the hypervisor nodes will complain about the NFS area for the 
ISO DOMAIN now being gone.  Remember, the ovirt manager node held this, 
and its network went out and the node crashed (note: the ovirt 
node (the

Re: [ovirt-users] ovirt 3.6, we had the ovirt manager go down in a bad way and all VMs for one node marked Unknown and Not Reponding while up

2018-01-25 Thread Christopher Cox



On 01/25/2018 04:57 PM, Douglas Landgraf wrote:

On Thu, Jan 25, 2018 at 5:12 PM, Christopher Cox <c...@endlessnow.com> wrote:

On 01/25/2018 02:25 PM, Douglas Landgraf wrote:


On Wed, Jan 24, 2018 at 10:18 AM, Christopher Cox <c...@endlessnow.com>
wrote:


Would restarting vdsm on the node in question help fix this?  Again, all
the
VMs are up on the node.  Prior attempts to fix this problem have left the
node in a state where I can issue the "has been rebooted" command to it,
it's confused.

So... node is up.  All VMs are up.  Can't issue "has been rebooted" to
the
node, all VMs show Unknown and not responding but they are up.

Chaning the status is the ovirt db to 0 works for a second and then it
goes
immediately back to 8 (which is why I'm wondering if I should restart
vdsm
on the node).



It's not recommended to change db manually.



Oddly enough, we're running all of this in production.  So, watching it
all
go down isn't the best option for us.

Any advice is welcome.




We would need to see the node/engine logs, have you found any error in
the vdsm.log
(from nodes) or engine.log? Could you please share the error?




In short, the error is our ovirt manager lost network (our problem) and
crashed hard (hardware issue on the server)..  On bring up, we had some
network changes (that caused the lost network problem) so our LACP bond was
down for a bit while we were trying to bring it up (noting the ovirt manager
is up while we're reestablishing the network on the switch side).

In other word, that's the "error" so to speak that got us to where we are.

Full DEBUG enabled on the logs... The error messages seem obvious to me..
starts like this (nothing the ISO DOMAIN was coming off an NFS mount off the
ovirt management server... yes... we know... we do have plans to move that).

So on the hypervisor node itself, from the vdsm.log (vdsm.log.33.xz):

(hopefully no surprise here)

Thread-2426633::WARNING::2018-01-23
13:50:56,672::fileSD::749::Storage.scanDomains::(collectMetaFiles) Could not
collect metadata file for domain path
/rhev/data-center/mnt/d0lppc129.skopos.me:_var_lib_exports_iso-20160408002844
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/fileSD.py", line 735, in collectMetaFiles
 sd.DOMAIN_META_DATA))
   File "/usr/share/vdsm/storage/outOfProcess.py", line 121, in glob
 return self._iop.glob(pattern)
   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 536,
in glob
 return self._sendCommand("glob", {"pattern": pattern}, self.timeout)
   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 421,
in _sendCommand
 raise Timeout(os.strerror(errno.ETIMEDOUT))
Timeout: Connection timed out
Thread-27::ERROR::2018-01-23
13:50:56,672::sdc::145::Storage.StorageDomainCache::(_findDomain) domain
e5ecae2f-5a06-4743-9a43-e74d83992c35 not found
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
 dom = findMethod(sdUUID)
   File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain
 return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
   File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath
 raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'e5ecae2f-5a06-4743-9a43-e74d83992c35',)
Thread-27::ERROR::2018-01-23
13:50:56,673::monitor::276::Storage.Monitor::(_monitorDomain) Error
monitoring domain e5ecae2f-5a06-4743-9a43-e74d83992c35
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/monitor.py", line 272, in _monitorDomain
 self._performDomainSelftest()
   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 769, in
wrapper
 value = meth(self, *a, **kw)
   File "/usr/share/vdsm/storage/monitor.py", line 339, in
_performDomainSelftest
 self.domain.selftest()
   File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__
 return getattr(self.getRealDomain(), attrName)
   File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
 return self._cache._realProduce(self._sdUUID)
   File "/usr/share/vdsm/storage/sdc.py", line 124, in _realProduce
 domain = self._findDomain(sdUUID)
   File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
 dom = findMethod(sdUUID)
   File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain
 return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
   File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath
 raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'e5ecae2f-5a06-4743-9a43-e74d83992c35',)


Again, all the hypervisor nodes will complain about having the NFS area for
ISO DOMAIN now gone.

Re: [ovirt-users] Bad hd? or?

2018-01-26 Thread Christopher Cox

checked /etc/mtab ?
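A few other generic ways to see what is actually claiming the device; note the lsblk output below shows a multipath map layered on top of sdb1, which by itself can make a direct mount of /dev/sdb1 report "busy":

```bash
# Generic checks for "already mounted or busy" -- nothing oVirt-specific here.
grep sdb /proc/mounts            # anything actually mounted from it?
lsof /dev/sdb1                   # any process holding it open?
dmsetup ls; multipath -ll        # device-mapper/multipath maps claiming the partition
ls /sys/block/sdb/sdb1/holders/  # kernel's view of who holds sdb1
```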

On 01/26/2018 06:10 PM, Alex Bartonek wrote:

I'm stumped.

I powercycled my server on accident and I cannot mount my data drive.  I 
was getting buffer i/o errors but finally was able to boot up by 
disabling automount in fstab.


I cannot mount my ext4 drive.   Anything else I can check?

root@blitzen t]# dmesg|grep sdb
[    1.714138] sd 2:1:0:1: [sdb] 585871964 512-byte logical blocks: (299 
GB/279 GiB)

[    1.714275] sd 2:1:0:1: [sdb] Write Protect is off
[    1.714279] sd 2:1:0:1: [sdb] Mode Sense: 6b 00 00 08
[    1.714623] sd 2:1:0:1: [sdb] Write cache: disabled, read cache: 
enabled, doesn't support DPO or FUA

[    1.750400]  sdb: sdb1
[    1.750969] sd 2:1:0:1: [sdb] Attached SCSI disk
[  443.936794]  sdb: sdb1
[  452.519482]  sdb: sdb1

sdb   8:16   0 279.4G  0 disk
├─sdb1    8:17   0 279.4G  0 part
└─3600508b10010343132202020202a 253:3    0 279.4G  0 mpath
   └─3600508b10010343132202020202a1  253:4    0 279.4G  0 part

[root@blitzen t]# mount /dev/sdb1 /mnt/ovirt_data/
mount: /dev/sdb1 is already mounted or /mnt/ovirt_data busy
[root@blitzen t]# mount|grep sdb
[root@blitzen t]#


Thanks in advance.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move Export Domain across web via NFS verses Rsync Image

2018-01-30 Thread Christopher Cox

On 01/30/2018 05:10 PM, Matt Simonsen wrote:

Hello all,

We have a several oVirt data centers mostly using oVirt 4.1.9 and NFS 
backed storage.


I'm planning a move for what will eventually be an exported VM, from one 
physical location to another one.


Is there any reason it would be problematic to export the image and then 
use rsync to migrate the image directory to a different export domain?


So, you're saying you export to an Export Domain (NFS), detach, and then 
rsync that somewhere else (a different NFS system), then attach 
that as an Export (import) Domain on a different datacenter and import? 
Sounds like it should work to me.
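Roughly, the copy itself might look like the sketch below (paths and destination host are placeholders; keep the export domain detached on both sides while copying, and preserve sparseness and ownership):

```bash
# Illustrative only -- export-domain paths and destination host are placeholders.
# Detach the export domain first, copy, then attach it at the destination DC.
rsync -aHAX --sparse --numeric-ids --progress \
      /exports/export-domain/ \
      remote-nfs-host:/exports/export-domain/
```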

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 3.6, we had the ovirt manager go down in a bad way and all VMs for one node marked Unknown and Not Reponding while up

2018-02-05 Thread Christopher Cox
Forgive the top post.  I guess what I need to know now is whether there 
is a recovery path that doesn't lead to total loss of the VMs that are 
currently in the "Unknown" "Not responding" state.


We are planning a total oVirt shutdown.  I just would like to know if 
we've effectively lost those VMs or not.  Again, the VMs are currently 
"up".  And we use a file backup process, so in theory they can be 
restored, just somewhat painfully, from scratch.


But if we shut down all the bad VMs and the blade, is 
there some way oVirt can know the VMs are "ok" to start up?  Will 
changing their state directly to "down" in the db stick if the blade is 
down?  That is, will we get to a known state where the VMs can actually 
be started and brought back into a known state?


Right now, we're feeling there's a good chance we will not be able to 
recover these VMs, even though they are "up" right now.  I really need 
some way to force oVirt into an integral state, even if it means we take 
the whole thing down.


Possible?


On 01/25/2018 06:57 PM, Christopher Cox wrote:



On 01/25/2018 04:57 PM, Douglas Landgraf wrote:
On Thu, Jan 25, 2018 at 5:12 PM, Christopher Cox <c...@endlessnow.com> 
wrote:

On 01/25/2018 02:25 PM, Douglas Landgraf wrote:


On Wed, Jan 24, 2018 at 10:18 AM, Christopher Cox <c...@endlessnow.com>
wrote:


Would restarting vdsm on the node in question help fix this?  
Again, all

the
VMs are up on the node.  Prior attempts to fix this problem have 
left the
node in a state where I can issue the "has been rebooted" command 
to it,

it's confused.

So... node is up.  All VMs are up.  Can't issue "has been rebooted" to
the
node, all VMs show Unknown and not responding but they are up.

Chaning the status is the ovirt db to 0 works for a second and then it
goes
immediately back to 8 (which is why I'm wondering if I should restart
vdsm
on the node).



It's not recommended to change db manually.



Oddly enough, we're running all of this in production.  So, 
watching it

all
go down isn't the best option for us.

Any advice is welcome.




We would need to see the node/engine logs, have you found any error in
the vdsm.log
(from nodes) or engine.log? Could you please share the error?




In short, the error is our ovirt manager lost network (our problem) and
crashed hard (hardware issue on the server)..  On bring up, we had some
network changes (that caused the lost network problem) so our LACP 
bond was
down for a bit while we were trying to bring it up (noting the ovirt 
manager

is up while we're reestablishing the network on the switch side).

In other word, that's the "error" so to speak that got us to where we 
are.


Full DEBUG enabled on the logs... The error messages seem obvious to 
me..
starts like this (nothing the ISO DOMAIN was coming off an NFS mount 
off the
ovirt management server... yes... we know... we do have plans to move 
that).


So on the hypervisor node itself, from the vdsm.log (vdsm.log.33.xz):

(hopefully no surprise here)

Thread-2426633::WARNING::2018-01-23
13:50:56,672::fileSD::749::Storage.scanDomains::(collectMetaFiles) 
Could not

collect metadata file for domain path
/rhev/data-center/mnt/d0lppc129.skopos.me:_var_lib_exports_iso-20160408002844 


Traceback (most recent call last):
   File "/usr/share/vdsm/storage/fileSD.py", line 735, in 
collectMetaFiles

 sd.DOMAIN_META_DATA))
   File "/usr/share/vdsm/storage/outOfProcess.py", line 121, in glob
 return self._iop.glob(pattern)
   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", 
line 536,

in glob
 return self._sendCommand("glob", {"pattern": pattern}, 
self.timeout)
   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", 
line 421,

in _sendCommand
 raise Timeout(os.strerror(errno.ETIMEDOUT))
Timeout: Connection timed out
Thread-27::ERROR::2018-01-23
13:50:56,672::sdc::145::Storage.StorageDomainCache::(_findDomain) domain
e5ecae2f-5a06-4743-9a43-e74d83992c35 not found
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
 dom = findMethod(sdUUID)
   File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain
 return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
   File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath
 raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'e5ecae2f-5a06-4743-9a43-e74d83992c35',)
Thread-27::ERROR::2018-01-23
13:50:56,673::monitor::276::Storage.Monitor::(_monitorDomain) Error
monitoring domain e5ecae2f-5a06-4743-9a43-e74d83992c35
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/monitor.py", line 272, in 
_monitorDomain

 self._performDomainSelftest()
   File "/usr

Re: [ovirt-users] ovirt 3.6, we had the ovirt manager go down in a bad way and all VMs for one node marked Unknown and Not Reponding while up

2018-02-05 Thread Christopher Cox
Answering my own post... a restart of vdsmd on the affected blade has 
fixed everything.  Thanks to everyone who helped.
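For anyone landing here later, the fix amounted to restarting VDSM on the affected host; on EL7 that is roughly:

```bash
# Restart VDSM on the affected hypervisor (EL7 / systemd); running guests are
# not touched by this, only the management agent reconnects to the engine.
systemctl restart vdsmd
systemctl status vdsmd
```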



On 02/05/2018 10:02 AM, Christopher Cox wrote:
Forgive the top post.  I guess what I need to know now is whether there 
is a recovery path that doesn't lead to total loss of the VMs that are 
currently in the "Unknown" "Not responding" state.


We are planning a total oVirt shutdown.  I just would like to know if 
we've effectively lot those VMs or not.  Again, the VMs are currently 
"up".  And we use a file backup process, so in theory they can be 
restored, just somewhat painfully, from scratch.


But if somebody knows if we shutdown all the bad VMs and the blade, is 
there someway oVirt can know the VMs are "ok" to start up??  Will 
changing their state directly to "down" in the db stick if the blade is 
down?  That is, will we get to a known state where the VMs can actually 
be started and brought back into a known state?


Right now, we're feeling there's a good chance we will not be able to 
recover these VMs, even though they are "up" right now.  I really need 
some way to force oVirt into an integral state, even if it means we take 
the whole thing down.


Possible?


On 01/25/2018 06:57 PM, Christopher Cox wrote:



On 01/25/2018 04:57 PM, Douglas Landgraf wrote:
On Thu, Jan 25, 2018 at 5:12 PM, Christopher Cox 
<c...@endlessnow.com> wrote:

On 01/25/2018 02:25 PM, Douglas Landgraf wrote:


On Wed, Jan 24, 2018 at 10:18 AM, Christopher Cox 
<c...@endlessnow.com>

wrote:


Would restarting vdsm on the node in question help fix this? 
Again, all

the
VMs are up on the node.  Prior attempts to fix this problem have 
left the
node in a state where I can issue the "has been rebooted" command 
to it,

it's confused.

So... node is up.  All VMs are up.  Can't issue "has been 
rebooted" to

the
node, all VMs show Unknown and not responding but they are up.

Chaning the status is the ovirt db to 0 works for a second and 
then it

goes
immediately back to 8 (which is why I'm wondering if I should restart
vdsm
on the node).



It's not recommended to change db manually.



Oddly enough, we're running all of this in production.  So, 
watching it

all
go down isn't the best option for us.

Any advice is welcome.




We would need to see the node/engine logs, have you found any error in
the vdsm.log
(from nodes) or engine.log? Could you please share the error?




In short, the error is our ovirt manager lost network (our problem) and
crashed hard (hardware issue on the server)..  On bring up, we had some
network changes (that caused the lost network problem) so our LACP 
bond was
down for a bit while we were trying to bring it up (noting the ovirt 
manager

is up while we're reestablishing the network on the switch side).

In other word, that's the "error" so to speak that got us to where 
we are.


Full DEBUG enabled on the logs... The error messages seem obvious to 
me..
starts like this (nothing the ISO DOMAIN was coming off an NFS mount 
off the
ovirt management server... yes... we know... we do have plans to 
move that).


So on the hypervisor node itself, from the vdsm.log (vdsm.log.33.xz):

(hopefully no surprise here)

Thread-2426633::WARNING::2018-01-23
13:50:56,672::fileSD::749::Storage.scanDomains::(collectMetaFiles) 
Could not

collect metadata file for domain path
/rhev/data-center/mnt/d0lppc129.skopos.me:_var_lib_exports_iso-20160408002844 


Traceback (most recent call last):
   File "/usr/share/vdsm/storage/fileSD.py", line 735, in 
collectMetaFiles

 sd.DOMAIN_META_DATA))
   File "/usr/share/vdsm/storage/outOfProcess.py", line 121, in glob
 return self._iop.glob(pattern)
   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", 
line 536,

in glob
 return self._sendCommand("glob", {"pattern": pattern}, 
self.timeout)
   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", 
line 421,

in _sendCommand
 raise Timeout(os.strerror(errno.ETIMEDOUT))
Timeout: Connection timed out
Thread-27::ERROR::2018-01-23
13:50:56,672::sdc::145::Storage.StorageDomainCache::(_findDomain) 
domain

e5ecae2f-5a06-4743-9a43-e74d83992c35 not found
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
 dom = findMethod(sdUUID)
   File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain
 return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
   File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath
 raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'e5ecae2f-5a06-4743-9a43-e74d83992c35',)
Thread-27::ERROR::2018-01-23
13:50:56,673::monitor::276::Storage.Monitor::(_monitorDomain) Error
monitoring domain e5ecae2f-5a06-4743-9a43-e74d83992c35

Re: [ovirt-users] Unable to put Host into Maintenance mode

2018-02-15 Thread Christopher Cox

On 02/15/2018 11:10 AM, Michal Skrivanek wrote:
..snippity... with regards to oVirt 3.5


that’s a really old version….


I know I'll catch heat for this, but by "old" you mean like December of 
2015?  Just trying to put things into perspective.  Thus it goes with the 
ancient and decrepit Red Hat Ent. 7.1 days, right?


I know, I know, FOSS... the only thing worse than running today's code 
is running yesterday's.


We still run a 3.5 oVirt in our dev lab, btw.  But I would not have set 
that up (not that I would have recommended oVirt to begin with), 
preferring 3.4 at the time.  I would have waited for 3.6.


With that said, 3.5 isn't exactly on the "stable line" to Red Hat 
Virtualization, that was 3.4 and then 3.6.


Some people can't afford major (downtime) upgrades every 3-6 months or 
so.  But, arguably, maybe we shouldn't be running oVirt.  Maybe it's not 
designed for "production".


I guess oVirt isn't really for production by definition, but many of us 
are doing so.


So... not really a "ding" against oVirt developers, it's just a rapidly 
moving target with the normal risks that come with that.  People just 
need to understand that.


And with that said, the fact that many of us are running those ancient 
decrepit evil versions of oVirt in production today, is actually a 
testimony to its quality.  Good job devs!



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM with "a lot" of disks : OK ?

2018-02-21 Thread Christopher Cox

On 02/21/2018 08:38 AM, spfma.t...@e.mail.fr wrote:

Hi,
Is there any kind of penalty or risk using something like a dozen of separate
disks for a VM stored on a NFS datastore ?
Regards


I don't use NFS, we use iSCSI SAN, but we have some hosts with that many disks 
(or more).


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt 3.6, we had the ovirt manager go down in a bad way and all VMs for one node marked Unknown and Not Reponding while up

2018-01-24 Thread Christopher Cox
Like the subject says.. I tried to clear the status from the vm_dynamic 
for a VM, but it just goes back to 8.


Any hints on how to get things back to a known state?

I tried marking the node in maint, but it can't move the "Unknown" VMs, 
so that doesn't work.  I tried rebooting a VM, that doesn't work.


The state of the VMs is up and I think they are running on the node 
they say they are running on, we just have the Unknown problem with VMs 
on that one node.  So... can't move them, reboot VMs doens't fix


Any trick to restoring state so that oVirt is ok???

(what a mess)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 3.6, we had the ovirt manager go down in a bad way and all VMs for one node marked Unknown and Not Reponding while up

2018-01-24 Thread Christopher Cox
Would restarting vdsm on the node in question help fix this?  Again, all the VMs 
are up on the node.  Prior attempts to fix this problem have left the node in a 
state where I can't issue the "has been rebooted" command to it; it's confused.


So... node is up.  All VMs are up.  Can't issue "has been rebooted" to the node, 
all VMs show Unknown and not responding but they are up.


Changing the status in the ovirt db to 0 works for a second and then it goes 
immediately back to 8 (which is why I'm wondering if I should restart vdsm on 
the node).
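
For reference, this is roughly the update I was attempting (a sketch 
against our 3.6 engine database; 'myvm' is just a placeholder for the VM 
name):

  -- run as the engine database user against the 'engine' DB
  UPDATE vm_dynamic
     SET status = 0
   WHERE vm_guid = (SELECT vm_guid FROM vm_static WHERE vm_name = 'myvm');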


Oddly enough, we're running all of this in production.  So, watching it all go 
down isn't the best option for us.


Any advice is welcome.

On 01/23/2018 03:58 PM, Christopher Cox wrote:

Like the subject says.. I tried to clear the status from the vm_dynamic for a
VM, but it just goes back to 8.

Any hints on how to get things back to a known state?

I tried marking the node in maint, but it can't move the "Unknown" VMs, so that
doesn't work.  I tried rebooting a VM, that doesn't work.

The state of the VMs is up and I think they are running on the node they say
they are running on, we just have the Unknown problem with VMs on that one
node.  So... can't move them, reboot VMs doens't fix

Any trick to restoring state so that oVirt is ok???

(what a mess)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Quick question about oVirt 3.6 and vdsm log in DEBUG mode (apparently by default??)

2018-03-13 Thread Christopher Cox
We're running oVirt 3.6 on 9 Dell Blades but with just two fairly fat 
fabrics, one for LAN stuff, ovirtmgmt and one for iSCSI to the storage 
domains.


15 VM Storage Domains
iSCSI has 4 paths going through a 40Gbit i/o blade to switch

115 VMs or thereabouts
9 VLANS, sharing an i/o blade with ovirtmgmt 40Gbit to switch
500+ virtual disks

What we are seeing more and more is that if we do an operation like 
expose a new LUN and configure a new storage domain, that all of the 
hypervisors go "red triangle" and "Connecting..." and it takes a very 
long time (all day) to straighten out.


My guess is that there's too much to look at vdsm wise and so it's 
waiting a short(er) period of time for a completed response than what 
vdsm is going to give us, and it just cycles over and over until it just 
happens to work.


I'm thinking that vdsm having DEBUG enabled isn't helping the latency, 
but as far as I know it came this way by default.  Can we safely disable 
DEBUG on the hypervisor hosts for vdsm?  Can we do this while things are 
roughly in a steady state?  Remember, just doing the moves could throw 
everything into vdsm la-la-land (actually, that might not be true, might 
take a new storage thing to do that).


Just thinking out loud... can we safely turn off DEBUG logging on the 
vdsms? Can we do this "live" through bouncing of the vdsm if everything 
is "steady state"?  Do you think this might help the problems we're 
having with storage operations? (I can see all the blades logging in 
iSCSI wise, but ovirt engine does the whole red triangle connecting 
thing, for many, many, many hours).
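
(For context, the change I have in mind is just flipping the log levels 
in vdsm's logger config and bouncing vdsmd on each host -- a sketch, 
assuming the stock /etc/vdsm/logger.conf layout:)

  # change level=DEBUG to level=INFO in the logger/handler sections
  sed -i 's/^level=DEBUG/level=INFO/' /etc/vdsm/logger.conf
  # supervdsmd has its own /etc/vdsm/svdsm.logger.conf if you want that quieter too
  systemctl restart vdsmd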


Thanks,
Christopher


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Tape Library!

2018-03-08 Thread Christopher Cox

On 03/08/2018 12:43 AM, Nasrum Minallah Manzoor wrote:

Hi,

I need help in configuring Amanda backup in virtual machine added to 
ovirt node! How can I assign my FC tape library (TS 3100 in my case) to 
virtual machine!


I know at one time there was an issue created to make this work through 
virtio.  I mean, it was back in the early 3.x days I think.  So this 
might be possible now (??).  Passthrough LUN?


https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Quick question about oVirt 3.6 and vdsm log in DEBUG mode (apparently by default??)

2018-03-14 Thread Christopher Cox

On 03/14/2018 01:34 AM, Yaniv Kaul wrote:



On Mar 13, 2018 11:48 PM, "Christopher Cox" <c...@endlessnow.com
<mailto:c...@endlessnow.com>> wrote:


...snip...


What we are seeing more and more is that if we do an operation like expose a
new LUN and configure a new storage domain, that all of the hyervisors go
"red triangle" and "Connecting..." and it takes a very long time (all day)
to straighten out.

My guess is that there's too much to look at vdsm wise and so it's waiting a
short(er) period of time for a completed response than what vdsm is going to
us, and it just cycles over and over until it just happens to work.


Please upgrade. We have solved issues and improved performance and scale
substantially since 3.6.
You may also wish to apply lvm filters.
Y.


Oh, we know and are looking at what we'll have to do to upgrade.  With that 
said, is there more information on what you mentioned as "lvm filters" posted 
somewhere?


Also, would VM reduction, and IMHO, virtual disk reduction help this problem?

Are there any engine config parameters that might help as well?

Thanks for any help on this.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt KVM Guest Definition: How to read these from within a virtual machine?

2018-04-16 Thread Christopher Cox
If you really have to query the engine, you could always make a 
REST query to the engine (if that's https visible to the VM).
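
Something like this, for example (a sketch -- substitute your engine's 
FQDN, credentials and the VM's id; -k just skips CA verification for the 
sake of the example):

  curl -k -u 'admin@internal:password' \
       -H 'Accept: application/xml' \
       'https://engine.example.com/ovirt-engine/api/vms/<vm-id>'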


On 04/16/2018 05:06 PM, Doug Ingham wrote:
Err...by reading the hardware specs in the standard manner? eg. 
dmidecode, etc.


On 15 April 2018 at 01:28, TomK > wrote:


 From within an oVirt (KVM) guest machine, how can I read the guest
specific definitions such as memory, CPU, disk etc configuration
that the guest was given?

I would like to do this from within the virtual machine guest.

-- 
Cheers,

Tom K.

-

Living on earth is expensive, but it includes a free trip around the
sun.

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





--
Doug


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdi over wan optimation

2018-03-27 Thread Christopher Cox

On 03/27/2018 10:10 AM, Andreas Huser wrote:

Hi, i have a question about vdi over wan. The traffic is very high when i look 
videos or online streams. 100% of my capacity of Internet Bandwidth is used.

Does anyone an idea how can i optimise spice for wan?


You can always look into QoS, but might have to apply that uniformly to 
the guest spice traffic (likely).  And by QoS, that likely means 
something you do outside of oVirt (Internet vs Intranet).
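
Purely as an illustration of the "outside of oVirt" idea, a rough sketch 
that marks the console (SPICE/VNC) port range and caps it with HTB on the 
WAN-facing interface -- interface name, port range and rate are all 
assumptions you'd have to adapt:

  tc qdisc add dev eth0 root handle 1: htb
  tc class add dev eth0 parent 1: classid 1:10 htb rate 20mbit ceil 20mbit
  iptables -t mangle -A POSTROUTING -o eth0 -p tcp --sport 5900:6100 \
           -j MARK --set-mark 10
  tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10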


Of course, applying QoS for "video" may make things horrible.  Pumping 
large amounts of "live" important synchronized data costs a lot, it's 
the nature of the beast.  Restrict it and you often end up with less 
than usable audio/video, especially if the data is unknown (e.g. a full 
remote desktop).


Ideally, from a thin client perspective, the solution is to run as much 
of that as possible outside of the remote desktop.


There's a reason why those very setup specific options exist out there 
for handling these types of VDI things (transparently).  They are very 
restricted and of course have a slew of dependencies and usually a lot 
of cost. (and IMHO, often times go "belly up" within 5 years)


I've seen some of these.  Basically the thin client uses remote desktop, 
but when an embedded video happens, that is offloaded as a direct 
connection handled by the thin client (kind of like "casting" in today's 
world).


If these VDI guests are Windows, RDP is likely going to do better than 
Spice, especially for remote audio and video.  Doesn't mean it won't 
occupy all your bandwidth, just saying it performs better.   With that 
said, remote desktop via other means, be that RDP, or NX, etc.. might be 
"better" than Spice.


PSA: If these are Windows, be aware of Microsoft's VDI tax (the VDA). 
This is an arbitrary invented tax that Microsoft created strictly to get 
people to only use their hypervisor platform.  It can cost a lot and 
it's required annually.


In the past I used NX for my Linux "desktops".  This worked well even 
over very low bandwidth connects, however, it assumed my business was 
not the source of network bottlenecks on the Internet.  Just saying. 
Even so, things that did massive amount of work, be that large AV or 
IntelliJ (which does a gazillion window creates/destroys) were still 
some concern.  We tweaked our IntelliJ profiles to help reduce the load 
there.  Not a whole lot we could do with regards to audio/video but 
educate people.


And no, I do not recommend 10 users playing PUBG via VDI. :-)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Which hardware are you using for oVirt

2018-03-26 Thread Christopher Cox

On 03/24/2018 03:33 AM, Andy Michielsen wrote:

Hi all,

Not sure if this is the place to be asking this but I was wondering which 
hardware you all are using and why in order for me to see what I would be 
needing.

I would like to set up a HA cluster consisting off 3 hosts to be able to run 30 
vm’s.
The engine, I can run on an other server. The hosts can be fitted with the 
storage and share the space through glusterfs. I would think I will be needing 
at least 3 nic’s but would be able to install ovn. (Are 1gb nic’s sufficient ?)


Just because you asked, but not because this is helpful to you

But first, a comment on "3 hosts to be able to run 30 VMs".  The SPM 
node shouldn't run a lot of VMs.  There are settings (the setting slips 
my mind) on the engine to give it a "virtual set" of VMs in order to 
keep VMs off of it.


With that said, CPU wise, it doesn't require a lot to run 30 VM's.  The 
costly thing is memory (in general).  So while a cheap set of 3 machines 
might handle the CPU requirements of 30 VM's, those cheap machines might 
not be able to give you the memory you need (depends).  You might be 
fine.  I mean, there are cheap desktop like machines that do 64G (and 
sometimes more).  Just something to keep in mind.  Memory and storage 
will be the most costly items.  It's simple math.  Linux hosts, of 
course, don't necessarily need much memory (or storage).  But Windows...


1Gbit NIC's are "ok", but again, depends on storage.  Glusterfs is no 
speed demon.  But you might not need "fast" storage.


Lastly, your setup is just for "fun", right?  Otherwise, read on.


Running oVirt 3.6 (this is a production setup)

ovirt engine (manager):
Dell PowerEdge 430, 32G

ovirt cluster nodes:
Dell m1000e 1.1 backplane Blade Enclosure
9 x M630 Blades (2xE5-2669v3, 384GB), 4 iSCSI paths, 4 bonded LAN, all 
10GbE, CentOS 7.2

4 x MXL 10/40GbE (2x40Gbit LAN, 2x40Gbit iSCSI SAN to the S4810's)

120 VM's, CentOS 6, CentOS 7, Windows 10 Ent., Windows Server 2012
We've run on as few as 3 nodes.

Network, SAN and Storage (for ovirt Domains):
2 x S4810 (part is used for SAN, part for LAN)
Equallogic dual controller (note: passive/active) PS6610S (84 x 4TB 7.2K 
SAS)

Equallogic dual controller (note: passive/active) PS6610X (84 x 1TB 10K SAS

ISO and Export Domains are handled by:
Dell PE R620, 32G, 2x10Gbit LAN, 2x10Gbit iSCSI to the SAN (above), 
CentOS 7.4, NFS


What I like:
* Easy setup.
* Relatively good network and storage.

What I don't like:
* 2 "effective" networks, LAN and iSCSI.  All networking uses the same 
effective path.  Would be nice to have more physical isolation for mgmt 
vs motion vs VMs.  QoS is provided in oVirt, but still, would be nice to 
have the full pathways.
* Storage doesn't use active/active controllers, so controller failover 
is VERY slow.
* We have a fast storage system, and somewhat slower storage system 
(matter of IOPS),  neither is SSD, so there isn't a huge difference.  No 
real redundancy or flexibility.
* vdsm can no longer respond fast enough for the number of disks defined 
(in the event of a new Storage Domain add).  We have raised vdsTimeout, 
but have not tested yet.


I inherited the "style" above.  My recommendation of where to start for 
a reasonable production instance, minimum (assumes the S4810's above, 
not priced here):


1 x ovirt manager/engine, approx $1500
4 x Dell R620, 2xE5-2660, 768G, 6x10GbE (LAN, Storage, Motion), approx $42K
3 x Nexsan 18P 108TB, approx $96K

While significantly cheaper (by 6 figures), it provides active/active 
controllers, storage reliability and flexibility and better network 
pathways.  Why 4 x nodes?  Need at least N+1 for reliability.  The extra 
4th node is merely capacity.  Why 3 x storage?  Need at least N+1 for 
reliability.


Obviously, you'll still want to back things up and test the ability to 
restore components like the ovirt engine from scratch.


Btw, my recommended minimum above is regardless of hypervisor cluster 
choice (could be VMware).

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node Resize tool for local storage

2018-03-28 Thread Christopher Cox

On 03/28/2018 01:37 AM, Pavol Brilla wrote:

Hi

AFAIK ext4 is not supporting online shrinking of filesystem,
to shrink storage you would need to unmount filesystem,
thus it is not possible to do with VM online.


Correct.  Just saying it's not possible at all with XFS, be that online or 
offline.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] dns vm and ovirt

2018-03-16 Thread Christopher Cox

On 03/16/2018 07:58 AM, Nathanaël Blanchet wrote:

Hi all,

I'd need some piece of good practice about dealing a DNS server in or 
out of ovirt.
Until now we never wanted to integrate the DNS vm into ovirt because of 
the strong dependency. if the DNS server fails for any reason, it 
becomes difficult ot join the webadmin (except with a static etc hosts) 
and the nodes may become unvailable if they had been configured with fqdn.
We could consider a DNS failover setup, but in a self hosted engine 
setup (and more globally an hyperconverged setup) , it doesn't make 
sense of setting up a stand alone DNS vm outside of ovirt.


So what about imitating engine vm status in a hosted engine setup? Is 
there a way to install the DNS vm outside of ovirt but on the ovirt host 
(and why not in a HA mode)?
Second option could be installing the named service on the hosted engine 
vm?


Any suggestion or return of experience would be much appreciated.



You are wise to think of this as a dependency problem.  When dealing 
with any "in band" vs. "out of band" type of scenario you want to 
properly address how things work "without" the dependency.


So.. for example, you could maintain a static host table setup for your 
ovirt nodes.  Thus, they could find each other without DNS.  Also, those 
nodes might have an external DNS configured for lookups (something you 
don't own) just so things like updates can happen.
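
A trivial sketch of what I mean by a static host table (hostnames and 
addresses made up), kept on every node and on the engine:

  # /etc/hosts
  10.0.10.10   ovirt-engine.example.local   ovirt-engine
  10.0.10.21   ovirt-node1.example.local    ovirt-node1
  10.0.10.22   ovirt-node2.example.local    ovirt-node2
  10.0.10.23   ovirt-node3.example.local    ovirt-node3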


There are risks to everything.  Putting key (normally) out of band 
infrastructure into your oVirt, including the engine, always involves 
more risk.


With that said, if you think about your key infrastructure as being a 
separate oVirt datacenter, it would have things like the "static host" 
maps and such.  Some of the infrastructure VMs housed there could 
include the engine for the "general" datacenters (the ones not providing 
VMs for key infrastructure).  These "general" purpose datacenters 
would house the normal VMs and potentially use VMs out of the 
"infrastructure" datacenter.  Does that make sense?


It's not unlike how a lot of cloud providers operate.  In fact, one well 
known provider used to house their core cloud infrastructure in VMware 
and use "cheaper" hypervisors for their cloud clients.


Summary:
static confs for infrastructure ovirt datacenter containing key core 
infrastructure VMs (including things like DNS, DHCP, Active Directory, 
and oVirt engines) used by general purpose ovirt datacenters.


Obviously the infrastructure datacenter becomes very important, much 
like your base network and should be thought of as "first" priority, 
much like the network.  And much like the network, depends on some 
kickstarter static configs.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Quick question about oVirt 3.6 and vdsm log in DEBUG mode (apparently by default??)

2018-03-16 Thread Christopher Cox

On 03/14/2018 09:10 AM, Christopher Cox wrote:

On 03/14/2018 01:34 AM, Yaniv Kaul wrote:



On Mar 13, 2018 11:48 PM, "Christopher Cox" <c...@endlessnow.com
<mailto:c...@endlessnow.com>> wrote:


...snip...


    What we are seeing more and more is that if we do an operation 
like expose a
    new LUN and configure a new storage domain, that all of the 
hyervisors go
    "red triangle" and "Connecting..." and it takes a very long time 
(all day)

    to straighten out.

    My guess is that there's too much to look at vdsm wise and so it's 
waiting a
    short(er) period of time for a completed response than what vdsm 
is going to

    us, and it just cycles over and over until it just happens to work.


Please upgrade. We have solved issues and improved performance and scale
substantially since 3.6.
You may also wish to apply lvm filters.
Y.


Oh, we know and are looking at what we'll have to do to upgrade.  With 
that said, is there more information on what you mentioned as "lvm 
filters" posted somewhere?


Also, would VM reduction, and IMHO, virtual disk reduction help this 
problem?


Is there and engine config parameters that might help as well?

Thanks for any help on this.


Based on a different older thread about having lots of virtual networks 
which sounded somewhat similar, I have increased our vdsTimeout value. 
Any opinions on whether or not that might help?  Right now I'm forced to 
tell my management that we'll have to "roll the dice" to find out.  But 
kind of hoping to hear someone "say" it should help. Anyone?
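
(For anyone searching later, the change itself was just this on the 
engine host -- a sketch, 300 here is only an example value:)

  engine-config -g vdsTimeout        # show the current value
  engine-config -s vdsTimeout=300
  systemctl restart ovirt-engine     # takes effect after an engine restart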


Just looking for something more substantial...

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] dns vm and ovirt

2018-03-16 Thread Christopher Cox



On 03/16/2018 12:28 PM, Nathanaël Blanchet wrote:

Thanks for precious advices!

So  it means that people who thought about hosted engine feature didn't 
get into your philosophy of running the engine into a second datacenter


Again, strictly a "risk" thing.  Hosted engine is by definition a 
"chicken and egg" thing.  It's great for learning and for lab... but if 
you're going to run production, I'd at least consider the latter option 
I presented.


With that said, we run dedicated engines today, not hosted.  Remember, 
ovirt nodes run even while the engine is down.  So you can tolerate an 
engine outage for a time period, just can't have reliability in case of 
node failures, etc.  So for us, most of the risk is in rebuilding a new 
engine if we have to... but certainly considered a "rare" case.


Putting key infrastructure inside the very thing that needs the key 
infrastructure to run is just fraught with problems.


Everything has costs and typically, the more robust/reliable your setup, 
the more it's going to cost.  I just wanted to present an "in between" 
style setup that gives you more reliability, but perhaps not the "best", 
while keeping costs way down.


To me, if you're running any datacenter cluster (for example), you need 
to have a minimum of 3 nodes.  People might not like that, but it's my 
minimum for reliability and flexibility.


So... if you wanted to use VMs for core infrastructure, that's 3 nodes. 
That core infrastructure datacenter might have a hosted engine, but 
likely also has "static definitions".  It's part of the "core", at least 
several parts of it are.  But the idea is it could hold: DNS, DHCP, 
Active Directory/LDAP, files shares (even storage domains for other 
datacenters), etc.  Obviously a "core" failure is a "core" failure and 
thus needs the same treatment as whatever you consider to be "core" today.


(thus on total "outage" bring up, you bring up the core, which now
includes this core infrastructure datacenter... your core "tests" are 
run to verify, and then the rest is brought up)


Then each general production datacenter cluster would have 3 nodes with 
the engine(s) being a VM(s) off the infrastructure datacenter using core 
infrastructure off that infrastructure datacenter as well.  Again, this 
is very much like most cloud service providers today.


Again, just ideas, mainly thinking on the "cheap", though some might not 
think so (you'll just have to trust me, what I'm presenting here is 
incredibly cheap for the reliability and flexibility it provides).

Just my opinion.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Any monitoring tool provided?

2018-03-22 Thread Christopher Cox

On 03/21/2018 10:41 PM, Terry hey wrote:

Dear all,

Now, we can just read how many storage used, cpu usage on ovirt dashboard.
But is there any monitoring tool for monitoring virtual machine time to 
time?

If yes, could you guys give me the procedure?


A possible option, for a full OS with network connectivity, is to 
monitor the VM like you would any other host.


We use omd/check_mk.

Right now there isn't an oVirt specific monitor plugin for check_mk.

I know what I said is probably pretty obvious, but just in case.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Start VM automatically

2018-02-26 Thread Christopher Cox

On 02/26/2018 05:33 PM, Fabrice SOLER wrote:

Hi Hanson,

Thank you for your answer, but the routeur is a VM.

So I need that this VM start without the engine. It has to start when 
the node start (after a power failure).


Only when the routeur will be up, I will be able to manage the node and 
the VMs.


Do you think there a solution ?


Obviously this is a bit more "chicken-and-egg" than running a hosted 
engine.  Even if there was a "start automatically" sort of thing, this 
is still going to be fraught with potential issues, because almost 
anything could happen to prevent the "router vm" from starting.


I think you've created a fairly high risk scenario.

In other words, even if you got this working, I wouldn't trust it.

Just my opinion.  At a minimum I'd pull the "router" out of the VM 
stack.  Now, you could have a separate hypervisor cluster stack for the 
router, just don't put its mgmt engine (separate engine req'd) behind 
the router :-)





Sincerely,

Fabrice


On 26/02/2018 at 18:18, Hanson Turner wrote:


Hi Fabrice,

If there's an issue with the hypervisor, the VM should pause. In the 
highly available section, (edit the advanced options on the vm) you 
can set the resume options restart/resume/stay off


The engine needs to be able to see + manage the node. You'll have to 
take care of the networking/port forwarding/vpn/vlan etc to make sure 
the engine can control the node.


Once the node's in control, the engine can restore the VM when it 
knows the node is good.


Thanks,

Hanson


On 02/26/2018 04:09 PM, Fabrice SOLER wrote:


Hello,

My node (IP ovirtmgmt) is behind a routeur that is running on the 
hypervisor (the node itself).


So, I need that the VM (routeur) start automatically after the node 
start.


The ovirt engine is running on another infrastructure and the version 
is 4.2.0. The node is also in this version.


Is there a solution ?

Sincerely,

--


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Re: Local disk RAID best practices (HW vs SW RAID)

2018-10-18 Thread Christopher Cox

On 10/18/18 11:00 AM, Vinícius Ferrão wrote:

Hello,

I’m RAIDing the local hypervisor disks. I’m aware that in some scenarios this 
isn’t even needed.

But I would like to know which is the best practice in this scenario: Hardware 
RAID from a cheap Dell H330 controller, or just leave the controller in HBA 
mode and make a software RAID during install of the Hypervisor?


Normal best practice is to use the storage's RAID.  Not saying there 
aren't cases for when you'd want to create a RAID on virtual disks, but 
performance usually wouldn't be one of the reasons.


You'll get better features usually from the HW RAID with regards to 
failure detection and automatic rebuild.


Now...after setting storage up using your storage solution's RAID, 
anything on top of that is totally up to you.


With that said, I've never used the H330 RAID controller.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JHCWCN42ZDS3ZE7AEGPXD7AI4W2KKDO4/


[ovirt-users] Re: Replicated storage gets erased when added to new environment.

2018-11-13 Thread Christopher Cox
Don't think so.  AFAIK, but this might have changed in 4.x.  But usually 
to float "stuff" between datacenters, you have to export to an export 
domain and then do an import.


1. Attach non-attached export domain
2. Export VMs (must include templates if from templates)
3. Detach export domain.
4. On new datacenter, attach export domain
5. Import VMs
6. Detach export domain.

There might be some sort of unsupported way to take a Storage Domain 
into a different datacenter, I'm just not aware of such.  Thus, the idea 
of a "replicate" would have to be a full "replicate" in a failover style 
scenario where you can't really tell the difference between the two (the 
storage domain works on the replicate because it can't tell the 
difference).  Thus, only one could ever be "up" at any time.  Even so, I 
haven't tried to do this.  If the metadata contains something so 
intrinsic to the the hypervisor cluster that it can't be effectively 
replicated, obviously this isn't going to work without some sort of 
assist that can massage/convert the metadata.


IMHO (again, I'm not up on any 4.x stuff that might have made this 
easier/possible)



On 11/13/18 5:01 PM, Jacob Green wrote:
Ok I found my answer in the engine.log: "Domain 
'80dcf277-9958-4368-b7dd-2a5d5d29b3ec' is already attached to a 
different storage pool, clean the storage domain metadata."


So I am assuming in a real disaster recovery scenario the Ansible stuff 
is doing some magic there.


However, I would like to know if I properly detach a Fiber Channel 
storage domain, is it simple to import it to a new environment. Any 
guidance before I undertake such a task?



On 11/13/2018 04:34 PM, Jacob Green wrote:


Ok I hear what your saying and thank you for your input! By the way 
this is a test environment I am working with so I can learn, so there 
is no risk for dual brainy-ness. Also I figured out where I was going 
wrong earlier on site B, I was clicking "New Domain" instead of 
"Import Domain" So I was able to import the replicated iSCSI domain, 
however now when I try to attach the storage domain, I get the 
following in the even log.


"VDSM jake-ovirt2 command CleanStorageDomainMetaDataVDS failed: Cannot 
acquire host id: ('80dcf277-9958-4368-b7dd-2a5d5d29b3ec', 
SanlockException(5, 'Sanlock lockspace add failure', 'Input/output 
error'))"


It is my opinion that my problem now is that since the data domain was 
not properly detached before replication, that my replicated storage 
will not attach because of some "lock" on the storage domain?Is that 
what this error means or am I missing the mark completely?




On 11/13/2018 02:41 PM, Christopher Cox wrote:
Normally a "replicate" or RAID 1 style scenario is handled by a SAN 
frontend (like IBM's SVC) or some other mirroring mechanism that 
presents an abstracted mirrored LUN as a Storage Domain to oVirt.


So, the answer lies with your storage supplier and/or SAN abstractor.

With that said, reading your full text, this would still be likely 
for a failover scenario and not a "live/live" scenario. A "live/live" 
scenarios risks the problem inherent to two things being "split 
brained".  And this is usually very bad for non-cluster aware storage 
(and the complexity of cluster aware storage could be great in this 
case).



On 11/13/18 2:24 PM, Jacob Green wrote:
So I was testing with two Identical ovirt environments running the 
latest 4.2 environment. I have iscsi storage set up at Site A, and I 
have that same storage replicated to Side B, before I get into 
learning disaster recovery I wanted to see what importing the 
replicated storage would look like. However when I import it I get 
the follow, and the vms are wiped from the replicated storage I 
presented with iscsi.



So is this possible with iscsi? Is there another way to go about 
doing this?


My iscsi solution is freenas.








--
Jacob Green

Systems Admin

American Alloy Steel

713-300-5690


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LUXGTSTEFRWICYBUNI66ZDQRE2BXGU3Y/



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/553A54IW23CBC2W7YMSCOYU2O46YGZIO/


--
Jacob Green

Systems Admin

American Alloy Steel

713-300-5690


___
User

[ovirt-users] Re: Replicated storage gets erased when added to new environment.

2018-11-13 Thread Christopher Cox
Normally a "replicate" or RAID 1 style scenario is handled by a SAN 
frontend (like IBM's SVC) or some other mirroring mechanism that 
presents an abstracted mirrored LUN as a Storage Domain to oVirt.


So, the answer lies with your storage supplier and/or SAN abstractor.

With that said, reading your full text, this would still be likely for a 
failover scenario and not a "live/live" scenario.   A "live/live" 
scenarios risks the problem inherent to two things being "split 
brained".  And this is usually very bad for non-cluster aware storage 
(and the complexity of cluster aware storage could be great in this case).



On 11/13/18 2:24 PM, Jacob Green wrote:
So I was testing with two Identical ovirt environments running the 
latest 4.2 environment. I have iscsi storage set up at Site A, and I 
have that same storage replicated to Side B, before I get into learning 
disaster recovery I wanted to see what importing the replicated storage 
would look like. However when I import it I get the follow, and the vms 
are wiped from the replicated storage I presented with iscsi.



So is this possible with iscsi? Is there another way to go about doing this?

My iscsi solution is freenas.








--
Jacob Green

Systems Admin

American Alloy Steel

713-300-5690


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LUXGTSTEFRWICYBUNI66ZDQRE2BXGU3Y/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/553A54IW23CBC2W7YMSCOYU2O46YGZIO/


[ovirt-users] Re: iSCSI multipath with Dell Compellent

2018-10-02 Thread Christopher Cox

Let me also add... make sure your LUN masking allows all nodes to see them.

On 10/02/2018 09:59 AM, Christopher Cox wrote:
You usually have to specify that the LUN has to be accessible to 
multiple systems (e.g. like a clustered filesystem would).  It's not 
unusual for a system to default to allowing only one initiator to connect.


On 10/02/2018 05:51 AM, Bernhard Dick wrote:


Hi,

I'm trying to achieve iSCSI multipathing with an Dell Compellent 
SC4020 array. As the Dell Array does not work as an ALUA system it 
displays available LUNs only on the currently active system (here is a 
good description that I found: 
https://niktips.wordpress.com/2016/05/16/dell-compellent-is-not-an-alua-storage-array/ 
). As a result I cannot add the currently non-active controller to an 
iSCSI-Bond (as it does not present the LUNs to oVirt) and so the path 
to the second controller will not be up. Is there any way to solve this?


   Regards
 Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MTE3KM4L2762KHBMN2XJ5ZFU32M236OY/ 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EWPFWLBLPYGGBPGU3QAZXNL2JWTLYDKG/ 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LF2MG6K3AEIISTH44CYZKWVVLZYZGZ5Q/


[ovirt-users] Re: iSCSI multipath with Dell Compellent

2018-10-02 Thread Christopher Cox
You usually have to specify that the LUN has to be accessible to 
multiple systems (e.g. like a clustered filesystem would).  It's not 
unusual for a system to default to allowing only one initiator to connect.


On 10/02/2018 05:51 AM, Bernhard Dick wrote:


Hi,

I'm trying to achieve iSCSI multipathing with an Dell Compellent SC4020 
array. As the Dell Array does not work as an ALUA system it displays 
available LUNs only on the currently active system (here is a good 
description that I found: 
https://niktips.wordpress.com/2016/05/16/dell-compellent-is-not-an-alua-storage-array/ 
). As a result I cannot add the currently non-active controller to an 
iSCSI-Bond (as it does not present the LUNs to oVirt) and so the path to 
the second controller will not be up. Is there any way to solve this?


   Regards
     Bernhard
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MTE3KM4L2762KHBMN2XJ5ZFU32M236OY/ 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EWPFWLBLPYGGBPGU3QAZXNL2JWTLYDKG/


[ovirt-users] Re: iSCSI multipath with Dell Compellent

2018-10-02 Thread Christopher Cox

On 10/02/2018 10:10 AM, Bernhard Dick wrote:

Hi,

Am 02.10.2018 um 16:59 schrieb Christopher Cox:
You usually have to specify that the LUN has to be accessible to 
multiple systems (e.g. like a clustered filesystem would).  It's not 
unusual for a system to default to allowing only one initiator to 
connect.
the problem is not that not multiple initiators are unable to connect 
(that runs well), but that I see the LUNs only on one of the two 
controller ports, as the storage decided that the Top Controller is 
currently the active one. I can login to the second controller, but that 
does not present any LUNs to any of my servers.


Maybe I should have written active controller instead of active system.



In an active active case the exposed LUN will show all paths when you 
allow for this in oVirt.  In oVirt 3.6, because of this oVirt "two 
step", you sometimes ended up with an extraneous path to the storage 
domain (not a problem though).  And that extra path will go away on 
reboot of the nodes.




   Regards
     Bernhard


On 10/02/2018 05:51 AM, Bernhard Dick wrote:


Hi,

I'm trying to achieve iSCSI multipathing with an Dell Compellent 
SC4020 array. As the Dell Array does not work as an ALUA system it 
displays available LUNs only on the currently active system (here is 
a good description that I found: 
https://niktips.wordpress.com/2016/05/16/dell-compellent-is-not-an-alua-storage-array/ 
). As a result I cannot add the currently non-active controller to an 
iSCSI-Bond (as it does not present the LUNs to oVirt) and so the path 
to the second controller will not be up. Is there any way to solve this?


   Regards
 Bernhard

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GUPFXMI2I5JJJN6JU7QLXTY6G6X25QVL/ 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RNQGUMVFRNRCLNTI54Q6OQWTFY4IV3AD/


[ovirt-users] Re: vm names export

2018-09-21 Thread Christopher Cox

Outside of the API, maybe something like this still works:

ovirt-shell -E 'list vms'

On 09/21/2018 10:16 AM, Budur Nagaraju wrote:

Hi

I didn't understand, could you please help me on that ?

Thanks,
Nagaraju



On Fri, Sep 21, 2018 at 8:28 PM Sandro Bonazzola > wrote:




Il giorno gio 20 set 2018 alle ore 09:23 Budur Nagaraju
mailto:nbud...@gmail.com>> ha scritto:


HI

We have deployed vms in oVirt , is there a way to export the vm
names along with owner names ? any script which would help ?


Ondra?


Thanks,
Nagaraju
___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/4GC4RHEQBVGXST5UOLBUNMYOXKKYDSWN/



-- 


SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com 








___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2PMLYK3RJVXONC3OQ4SSHFTOW6IJMDEL/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6UMD7R2UCTXAXJZ4JCMUKPZCNKQEGVGT/


[ovirt-users] Re: vm names export

2018-09-24 Thread Christopher Cox
This doesn't use the SDK and I'm not sure if it works with 4.x, but 
posting anyway: https://endlessnow.com/ten/Source/oVirtVMInfo-py.txt
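
(And on the CA error quoted below: if the script isn't running on the 
engine itself, the engine's CA certificate can be fetched from the engine 
over https -- a sketch, substitute your engine's FQDN:)

  curl -o ca.pem 'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'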





On 09/24/2018 10:08 AM, Budur Nagaraju wrote:

Below is the script and getting error while executing.


#!/usr/bin/env python
# -*- coding: utf-8 -*-

import logging
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

logging.basicConfig(level=logging.DEBUG, filename='example.log')

connection = sdk.Connection(
     url='https://pscloud.bnglab.psecure.net/ovirt-engine/api ',
     username='admin@internal',
     password='password',
     ca_file= '/etc/pki/ovirt-engine/ca.pem',
     debug=True,
     log=logging.getLogger(),
)

vms_service = connection.system_service().vms_service()

vms = vms_service.list()

for vm in vms:
     print("%s: %s" % (vm.name , vm.id ))
connection.close()
=


[root@cephc ovirt-scripts]# python get_vm_names
Traceback (most recent call last):
   File "get_vm_names", line 16, in 
     log=logging.getLogger(),
   File "/usr/lib64/python2.7/site-packages/ovirtsdk4/__init__.py", line 
307, in __init__

     raise Error('The CA file \'%s\' doesn\'t exist' % ca_file)
ovirtsdk4.Error: The CA file '/etc/pki/ovirt-engine/ca.pem' doesn't exist
[root@cephc ovirt-scripts]#

On Mon, Sep 24, 2018 at 8:20 PM Staniforth, Paul 
> wrote:


Hi Nagaraju,

                      I meant not signed by a trusted CA, when you
install oVirt it signs the certificates with it's own local CA, you
can download it the CA certificate  for your engine from
https://your.engine.address/ovirt-engine
/  it you are running the
program on your engine machine I think it's in
/etc/pki/ovirt-engine/ca.pem


Regards,

                Paul S.


*From:* Budur Nagaraju mailto:nbud...@gmail.com>>
*Sent:* 24 September 2018 12:20
*To:* Staniforth, Paul
*Cc:* users
*Subject:* Re: [ovirt-users] Re: vm names export
Am not using any self signed certificate, it was the default
certificate installed at the time of ovirt-engine installation , do
I need to comment for that also ?
Tried commenting the line but still facing issues.

On Mon, Sep 24, 2018 at 4:28 PM Staniforth, Paul
mailto:p.stanifo...@leedsbeckett.ac.uk>> wrote:

Hi Nagaraju,

                      if you are using the self-signed
certificate have you downloaded your CA certificate, if you are
using a certificate from a trusted CA then you should comment 
or remove the CA file line.



Regards,

                Paul S.


*From:* Budur Nagaraju mailto:nbud...@gmail.com>>
*Sent:* 24 September 2018 11:16
*To:* Sandro Bonazzola
*Cc:* users
*Subject:* [ovirt-users] Re: vm names export

Have tried the below URL , getting the below error

Script:


https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/list_vms.py1

Error:

https://pastebin.com/EnJqA2Tr

Thanks,
Nagaraju

On Fri, Sep 21, 2018 at 8:50 PM Sandro Bonazzola
mailto:sbona...@redhat.com>> wrote:



Il giorno ven 21 set 2018 alle ore 17:16 Budur Nagaraju
mailto:nbud...@gmail.com>> ha scritto:

Hi

I didn't understand, could you please help me on that ?


I was asking Ondra to follow up on your question


Thanks,
Nagaraju



On Fri, Sep 21, 2018 at 8:28 PM Sandro Bonazzola
mailto:sbona...@redhat.com>> wrote:



Il giorno gio 20 set 2018 alle ore 09:23 Budur
Nagaraju mailto:nbud...@gmail.com>> ha scritto:


HI

We have deployed vms in oVirt , is there a way
to export the vm names along with owner names ?
any script which would help ?


Ondra?


Thanks,
Nagaraju
___
Users mailing list -- users@ovirt.org

To unsubscribe send an email to
users-le...@ovirt.org 
Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:

https://www.ovirt.org/community/about/community-guidelines/
List Archives:
 

[ovirt-users] Re: How to import from another oVirt / RHV environment

2019-01-17 Thread Christopher Cox

On 1/17/19 8:49 AM, Gianluca Cecchi wrote:

Hello,
I have two different oVirt 4.2 environments and I want to migrate some 
big VMs from one to another.
I'm not able to detach and attach the block based domain where are the 
disks of source.

And I cannot use export domain functionality.


The only way I've ever done this is via an Export Domain.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PDV55PCYIBBIHXMG3ZE5ZN5YQ3RVCBIA/


[ovirt-users] Re: Which option is best for storage?

2019-01-04 Thread Christopher Cox

Somebody recommended NFS?  Interesting.

Use iSCSI or FC.  They work.


On 1/4/19 3:30 PM, Fabrice Bacchella wrote:

It works with SAN on SAS too, with a cost between iSCSI and FC.


On 4 Jan. 2019 at 08:31, ge...@pdclouds.com.au wrote:

Hi,

Which is the preferred option for connecting oVirt VM farm to SAN/NAS?

NFS (10G), iSCSI (10G) or FC (8G)?

We are confused, some people say iSCSI is preferred and others say NFS performs 
the better than iSCSI, but FC is most expensive but performs the best 
overall

Would value expert opinion.

Cheers
Geoff
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HEIWQ5DOON47MPF4ZKE2RGD3ZEJAQ5QK/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RPFVRRNIQMGNEXQ7ESF6XWAZS5GZOAPD/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZLJ3KZXLOFWM35MXZSAURLJYBVPSRW5C/


[ovirt-users] Re: virsh edit vm not saving after reboot

2019-03-28 Thread Christopher Cox

On 3/28/19 6:22 PM, Dev Ops wrote:

Figured out that Ovirt doesn't allow for virsh to edit XML files and have them 
be persistent through reboots. I needed to edit the SMBIOS settings in an XML 
of a VM. Once the VM is off the XML is gone. VDSM apparently builds the XML's 
out on boot up. To get this working this has to be done with VDSM hooks and 
that only took me 8 hours to figure out and sort out. :)


...snip


Refresh the Capabilities in the UI for the host in question. I could then build 
out a VM and set the Custom Properties for smbios to this:
{'product': 'KVM'}


You know, my VM shows SMBIOS 0x0100 System Information Manufacturer: 
oVirt and Product Name: oVirt Node


Without me doing anything.
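
(That's just what the guest itself reports, e.g.:)

  # inside a Linux guest
  dmidecode -t system
  # inside a Windows guest, something like:
  wmic csproduct get vendor,name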

Not saying you don't have a particular need, just saying in case 
somebody wanted to know.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4VHZHYFBOGDUTH2V3SYWSC77DQ35MJK2/


[ovirt-users] Re: Windows Server 2019 Drivers

2019-05-22 Thread Christopher Cox
Just saying, but I tested Server 2019 (months ago) under our oVirt 3.6 with virt 
drivers...and everything went fine.


I took it all down after testing.  Just saying there should be a way to have 
this work in newer versions of oVirt.


On 5/22/19 11:19 AM, Timmi wrote:

Hi,

the easiest would be to change the disk interface to IDE.
I guess it is currently VirtIO-SCSI or so.

Best regards
Timmi

Am 22.05.19 um 18:09 schrieb racev...@lenovo.com:
We have updated to oVirt 4.3.3.1 and I'm trying to create a Windows Server 
2019 VM. When It's time to select the drivers so the windows installer can 
detect the hard disk, I am unable to find working drivers.  I've tried using 
virtio-win-0.1.141.iso, oVirt-toolsSetup_4.3-2.el7.iso as well as attaching 
virtio-0.1.141_amd64.fvd and virtio-0.1.141_x86.fvd as a floppy disk.


I've eventually tried every directory manually in those iso's. Can someone 
link proper documentation on how to get Windows Server 2019 to work or point 
to the correct drivers?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZMVP5AYR6KEXZW23GWQHA56XHGXHKMOX/


[ovirt-users] Re: All Windows vms report "Actual timezone in guest doesn't match configuration"

2019-05-07 Thread Christopher Cox

On 5/7/19 12:04 PM, Tomáš Golembiovský wrote:

On Tue, 07 May 2019 13:43:03 -
mich...@wanderingmad.com wrote:


I wonder if it's something that has changed in server 2016 / server 2019?  I 
haven't loaded server 2012 in over a year.


I tried also with Windows 10 (not Server I admit) and EST timezone
worked for me.



Since windows runs in local time, did you set the hardware clock offset 
in oVirt for the guest to its local timezone?  You have to do that and 
set the timezone in Windows itself.


ovirt is going to default to use UTC offset for a guest.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OXPAWQ7Q5MPTPWQRB4T2EBBXS7RPTCG/


[ovirt-users] Re: Graphics performance and Windows VM

2019-06-26 Thread Christopher Cox

On 6/26/19 9:21 AM, Ciro Iriarte wrote:

Hello!,

I'm wondering what would be a good fit for decent graphics performance
on Windows 10/2016/2019 + oVirt/KVM.

Is there some kind of offloading with SPICE?, should I use RDP
instead?, will vGPU backed by Nvidia cards help somehow?. The idea is
to build a VDI environment for Windows VM, probably using
OpenThinClient or a custom Linux image at the endpoint.

The clients will use office suites, Java IDE, AutoCAD and Sketchup.

Comments?



Use RDP.  (you may need to check your licensing with regards to running Windows 
VDI, there may be some cost)

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/54W73EWK5ZPHRDJX2FF6TTEUEQTOSSYZ/


[ovirt-users] oVirt 3.6: Node went into Error state while migrations happening

2019-07-08 Thread Christopher Cox
On the node in question, the metadata isn't coming across (state) wise. 
It  shows VMs being in an unknown state (some are up and some are down), 
some show as migrating and there are 9 forever hung migrating tasks.  We 
tried to bring up some of the down VMs that had a state of Down, but 
that ended up getting them the state of "Wait for Lauch", though those 
VMs are actually started.


Right now, my plan is to attempt a restart of vdsmd on the node in 
question.  Just trying to get the node to a working state again.  There 
are a total of 9 nodes in our cluster, but we can't manage any VMs on the 
affected node right now.


Is there a way in 3.6 to cancel the hung tasks?  I'm worried that if 
vdsmd is restarted on the node, the tasks might be "attempted"... I 
really need them to be forgotten if possible.
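
For what it's worth, the route I've been eyeing is vdsClient on the 
affected node -- a sketch only, task UUIDs come from the first command, 
and I have not pulled the trigger on it yet:

  vdsClient -s 0 getAllTasksStatuses      # list task UUIDs and their state
  vdsClient -s 0 stopTask <task-uuid>
  vdsClient -s 0 clearTask <task-uuid>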


Ideally I want all "Unknown" to return to either an "up" or "down" state 
(depending if the VM is up or down), for those in "Wait for Launch" 
to go to "up", and for all the "Migrating" to go to "up" or "down" (I 
think only one is actually down).


I'm concerned that any attempt to manually manipulate the state in the ovirt 
mgmt head db will be moot because the node will be queried for state and 
that state will override anything I attempt to do.


Thoughts??
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q3AG2HTDIUWLZIINI5OBDZ37A5PEM7N7/


[ovirt-users] Re: Hardware Used

2019-07-17 Thread Christopher Cox

On 7/17/19 8:23 AM, Strahil wrote:


As far as I know several orgs  are  using AMD Ryzen CPUs with oVirt.

Ryzen CPUs are quite nice from price/performance perspective.



While this may be true, today 99% of Ryzen setups (if you're running the new 
Ryzens there are a select few, and I mean few, that might have 128GB) are 
going to be 64GB or less, which might not make for the most ideal 
hypervisor environment. It's possible that you have very, very heavy 
CPU-bound VMs that do not require much memory, though.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FJ73VCKVUHHBOAJ4OKIW2QCBSHWKIH3W/


[ovirt-users] Re: Windows Sluggish Performance

2019-12-03 Thread Christopher Cox
I tend to VNC my Windows VMs, including server.  Last one I tested was 
2019 Server, seemed to work ok.


On 12/3/19 1:19 PM, Vijay Sachdeva wrote:

Anyone?

On Tue, 3 Dec 2019 at 11:21 PM, Vijay Sachdeva 
mailto:vijay.sachd...@indiqus.com>> wrote:


Hello Everyone,

I managed to install Windows Server 2016 server on Ovirt node using
ovirt engine.All virtio-Win drivers also installed, but the
performance of Windows using console is very sluggish and mouse
pointer is not at all responsive.

Can anyone let me know, what could be the reason. Not able to to
even add IP to VM as mouse doesn’t work at all.

Thanks


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GG2L2K3QDGURBZ37HPQAEF64BSUTTWCN/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4K63KMJC4NLPFLXX54FKO3KEUHHJIBIP/


[ovirt-users] Re: EqualLogic SAN controller switchover

2020-02-27 Thread Christopher Cox
Since EqualLogic arrays are active/passive only, it can be a pretty long 
pause.  Dell recommends setting your multipath parameters to allow for 1 minute.


(btw, there's a reason why Equallogic is dead now)



On 2/27/20 10:59 AM, Chris Adams wrote:

How well does oVirt handle an EqualLogic SAN controller switchover
event?  IIRC that can result in a short iSCSI "pause" (can't remember
how long it takes) - I'm not sure what oVirt's threshold is before VMs
(including the hosted engine) get paused for storage timeouts.

I've got a small setup where the active SAN controller's battery has
gone bad, so I need to switch to the other controller, and I'm trying to
figure out the impact - do I need to shut all VMs (including the engine)
down first, will they just briefly pause and then continue, etc.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G4A5RZCEWIIVC3MJZOR6CCNECJ5ZTYBS/


[ovirt-users] Re: EqualLogic SAN controller switchover

2020-02-27 Thread Christopher Cox
This might help. 
https://www.dell.com/community/EqualLogic/Linux-multipath-conf-Multipathd/td-p/5113421


What the first poster said about Dell's recommendations is accurate 
(I'm a long-time EqualLogic user).
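
For reference, the EqualLogic device stanza Dell used to recommend looks 
roughly like the sketch below.  This is from memory, so treat every value as 
illustrative and take the real ones from the Dell document linked above:

   # goes inside the devices { } section of /etc/multipath.conf
   device {
       vendor                "EQLOGIC"
       product               "100E-00"
       path_grouping_policy  multibus
       path_selector         "round-robin 0"
       path_checker          tur
       failback              immediate
       rr_min_io             10
       no_path_retry         12    # illustrative; roughly 60s with the default 5s polling interval
   }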


On 2/27/20 12:34 PM, eev...@digitaldatatechs.com wrote:

I worked with EqualLogic in the past. Doesn't it have the ability to replicate 
to a partner EqualLogic?
If so, replicate and, once complete, do a failover to the new one.
That may be a simplistic approach, and I'm not sure if it's Dell best practice to 
do so.
Just a thought.

Eric Evans
Digital Data Services LLC.
304.660.9080


-Original Message-
From: Chris Adams 
Sent: Thursday, February 27, 2020 12:55 PM
To: users@ovirt.org
Subject: [ovirt-users] Re: EqualLogic SAN controller switchover

Once upon a time, Strahil Nikolov  said:

Do you have an idea how long it will take?


No, it has been years and years since I had to do a switchover on an EqualLogic 
(they mostly just run).  I know I've read of others using EqualLogics for 
oVirt, so I'm hoping for someone who's experienced a switchover...


Keep in mind that in case the domain is declared unavailable (it reached a 
threshold, which I don't know), all VMs using it will be paused and oVirt 
will try to recover them once the storage is available again.


Right - with the hosted engine on the SAN, I am also curious how that is 
impacted (how will the engine HA tooling handle a pause).
--
Chris Adams 


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/42FL4GE6KRNSY6KR2PIGJU5RH3G35Y2D/


[ovirt-users] Re: Virtual machine replica - DR

2020-04-26 Thread Christopher Cox

On 4/26/20 4:19 PM, ccesa...@blueit.com.br wrote:

Hello,
Does someone know of any tool/method to replicate the VMs from a "Production" cluster to 
a "Secondary" cluster, to provide a DR solution without a storage replication dependency or 
Gluster storage,
like the Veeam and Zerto tools do with other hypervisors?

Is there any way, tool, or product to do it!?


In all fairness, most of your "likes" play some roulette.  Taking snapshots and 
copying is fine and dandy, but it cannot ensure application data integrity.  Some 
might have agents for popular databases, but not necessarily for everything (see 
the next paragraph).


Ideally "writers" have to be quiesced (suspend write actions in a good way) 
somehow so that the data on disk prior to snapshot is integral.  Now, with that 
said, the roulette wheel is definitely slanted in your favor.  Just noting that 
the techniques that many things do aren't as fool proof as they might have you 
believe.


(btw, I think I just hinted at the common recipe that is used to pull this 
off (with the stated flaws), at least part of it)
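
As a concrete (if hand-wavy) illustration of the quiesce step: with 
qemu-guest-agent running in the guest, you can freeze and thaw the guest 
filesystems around the snapshot, something along these lines (run on the host 
where the VM lives; "myvm" is a made-up name, and the snapshot step is 
whatever tool you actually use):

   virsh domfsfreeze myvm    # flush and freeze guest filesystems via the guest agent
   # ... take the snapshot here ...
   virsh domfsthaw myvm      # thaw immediately afterwards

That gets the filesystems consistent, but it still doesn't make the 
application logic itself snapshot-aware.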


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PXYKE2RTDY6YO2G4AWREWJF7BIXPE2CB/


[ovirt-users] Re: Migrate VMs from oVirt (3.5) to oVirt (4.4)

2020-07-29 Thread Christopher Cox

On 7/29/20 4:19 PM, Ian Easter wrote:

Hey folks,

What is the proper course of action to migrate VMs managed by an older oVirt 
instance into a new oVirt instance?


Assuming this is the best way to upgrade, given the age of the older instance.


This one is tough.  AFAIK, you have to upgrade incrementally.  There's a mixture of OS 
levels as well.  That might force a "hop" using an export domain for the latter.


I know when we went from 3.4 to 3.6, we used an export domain.

Somebody might know of what "shortcuts" are safe.  But I sort of doubt it.
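
From memory, the export-domain hop we used went roughly like this; details 
differ between versions, so treat it as an outline rather than a procedure:

  1. Create (or attach) an NFS export domain in the old data center.
  2. Shut the VMs down and export them to that domain.
  3. Detach the export domain from the old setup and attach it to the new one.
  4. Import the VMs, verify them, and only then clean up the old side.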
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CNGKDAZ3FX73P37EJ2LFBYNEGNUZLSXG/


[ovirt-users] Re: teaming vs bonding

2020-06-10 Thread Christopher Cox

On 6/10/20 1:30 PM, Diggy Mc wrote:

Does 4.4.x support adapter teaming?  If yes, which is preferred, teaming or 
bonding?


(just an informational post)

Linux (not necessarily oVirt) supports various "bond modes"; see:


https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-using_channel_bonding

(you are welcome to Google other sources besides Red Hat)


Linux bond mode 4 is LACP (802.3ad), which is traditionally what you'd want, and it is for 
sure supported on oVirt host nodes (because we use it, so I know it works).   We 
use this for the non-SAN side of the fence, with multiple VLAN tags. 
Essentially, due to the limited number of ports on the nodes, we run everything non-SAN 
on the LACP bond.
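
For the curious, outside of oVirt's own network setup an 802.3ad bond with 
tagged VLANs on top looks roughly like this with NetworkManager.  Interface 
names and the VLAN ID are placeholders, and on a managed oVirt node you would 
normally let the engine/VDSM drive this instead of doing it by hand:

   # create the LACP (802.3ad) bond
   nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad,miimon=100"
   # enslave two physical NICs (names are made up)
   nmcli con add type ethernet con-name bond0-p1 ifname eno1 master bond0
   nmcli con add type ethernet con-name bond0-p2 ifname eno2 master bond0
   # put a tagged VLAN on top of the bond (VLAN 100 is made up)
   nmcli con add type vlan con-name bond0.100 ifname bond0.100 dev bond0 id 100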


With that said, iSCSI multipathing (as in oVirt) is not a "bond" as 
discussed thus far, even though it was called "bonding" in early oVirt; it is just 
multipathing, and it is what you'd use for storage coming off a SAN.  I mention 
this because of the historical confusion over what it was called in early oVirt. 
 In our case, we have 4 x 10Gbit paths on each node going to our SAN.


LACP requires the cooperation of the switches involved (I say switches because 
multi-chassis link aggregation is often supported).  Usually you work with your 
network admin to configure the switch side (of course, one person could be wearing all 
of the hats).


Microsoft teaming has various configurations, and some of these map closely to Linux 
bond modes (the default is something like mode 2 or mode 6 in Linux, I think, but there 
may not be anything close to a direct mapping).  Obviously, there's still the 
ubiquitous LACP, which is certainly the one "standard" present in both OSes.  I 
mention this because usually when I hear someone say "teaming" they are coming 
from a Microsoft background.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O63ZZ5KKY7ZIESDU73MTH2OGNIQ2Q427/


[ovirt-users] Re: Shrink iSCSI Domain

2020-12-27 Thread Christopher Cox

On 12/27/20 12:15 PM, Vinícius Ferrão via Users wrote:

Hello,

Is there any way to reduce the size of an iSCSI Storage Domain? I can’t seem to 
figure this out myself. It’s probably unsupported, and the path would be to create a 
new iSCSI Storage Domain with the reduced size, move the virtual disks 
there, and then delete the old one.

But I would like to confirm if this is the only way to do this…

In the past I had a requirement, so I created the VM domains with 10TB; now 
it’s just too much, and I need to use the space on the storage for other 
activities.

Thanks all and happy new year.


Not sure.  I mean, ideally there might be a way to shrink the LUN, but how you'd 
have to tell oVirt about it is unknown (packing, moving blocks, etc.).


With that said, I often have to tell my managers (over and over and over again) 
that it requires space to reorganize and possibly even reduce space.


So, you know what "works" in adding, moving and removing.  Might seem clunky, 
but it's a proven pattern.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XHGDKKYJGBQ5L47HAPMVFYW6RQRFHG3R/


[ovirt-users] Re: Another illegal disk snapshot problem!

2020-12-08 Thread Christopher Cox

On 12/8/20 5:30 AM, Magnus Isaksson wrote:

Hi

We have the same issue as you, and we are also using vProtect.
I have no solution, but I'm very interested in how to address this.

For some VMs we have managed to remove the illegal snapshots after changing 
storage for the VMs' disks, but we have 3-4 VMs where the illegal snapshot will not 
be removed.

As for us, this issue has escalated the last couple of months.

Is it only us who have these issues, or do people not take backups of their 
VMs? It feels like more people should be having these issues.


We back up our oVirt VMs just like we back up physical hosts.

With that said, it's a backup system I wrote some 12 years ago.

We're in the process of moving from oVirt to VMware, and our home-grown backup 
system has made moving very, very easy.  I've migrated several VMs across.


There's more than one way to skin a cat.  Snapshots don't buy you much with 
regard to data integrity.  They aren't (and never will be) application 
logic aware (for example).  And at the virtual disk level, it obviously becomes 
even more of a "black box".


(I'm mainly answering the question "do people not take backups of their VMs?")

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QOAEPZFHD2YFMH3OGKOFN7QXT4AJZ6AT/


[ovirt-users] Re: CentOS 8 is dead

2020-12-08 Thread Christopher Cox

On 12/8/20 2:20 PM, Michael Watters wrote:

This was one of my fears regarding the IBM acquisition.  I guess we
can't complain too much, it's not like anybody *pays* for CentOS.  :)


Yes, but this greatly limits oVirt use to short-lived dev labs only.

Maybe oVirt should look into what it would take to move to one of the long-term 
Debian-based distros.


...snippity
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GWFN3FJSJ6I4JK6OGQ7GEIR22PLOWNJZ/


[ovirt-users] Re: Improve glusterfs performance

2020-10-28 Thread Christopher Cox

On 10/28/20 12:46 PM, supo...@logicworks.pt wrote:

I think I have a problem with a NIC on one host. This host is the SPM.
That's probably why Gluster is slow! How can I change the SPM to another host, by 
increasing the SPM priority to high?

On a different host in the cluster, just select it to be the SPM; it should move.
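
In the Admin Portal that should be the "Select as SPM" action on the host.  If 
you would rather script it, the REST API has a force-select-SPM action along 
these lines, where the engine URL, credentials and host ID are all 
placeholders (I have not run this exact command recently, so check it against 
the API docs for your version):

   curl -k -u admin@internal:PASSWORD \
        -H "Content-Type: application/xml" -d "<action/>" \
        https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/forceselectspm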
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QWYBUCAJCZK7NFQOT63HO3VJHFTPAYU6/