[ovirt-users] Re: Ovirt 4.3.5.4-1.el7 noVNC keeps disconnecting with 1006

2019-08-08 Thread Ryan Barry
On Thu, Aug 8, 2019 at 4:26 AM Sandro Bonazzola  wrote:
>
>
>
> Il giorno dom 4 ago 2019 alle ore 16:11 Strahil Nikolov 
>  ha scritto:
>>
>> Hello Community,
>>
>> did anyone experience disconnects after a minute or 2 (seems random, but I
>> will check it out) with error code 1006?
>> Can someone with noVNC reproduce that behaviour ?
>>
>> As I manage to connect, it seems strange to me to lose the connection like
>> that. The VM was not migrated - so it should be something else.
>

Can you please post your firewall details and the console.vv file? It should
not be possible to connect at all with the firewall in the way, but I do
wonder whether something is changing it behind the scenes.
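
For reference, something like this is what I'm after (run on the host the VM
is on; the console.vv path is just wherever your browser saved it, so adjust):

    # firewall state on the host
    firewall-cmd --list-all
    iptables -L -n
    # the console file the portal hands out; strip the password line before posting
    cat ~/Downloads/console.vv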

>
> @Ryan Barry , @Michal Skrivanek any clue?
>
>>
>>
>> Best Regards,
>> Strahil Nikolov
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UVBK5NWKAHXH2KREVRSVES3U75ZDQ34L/
>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbona...@redhat.com
>
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.



-- 

Ryan Barry

Associate Manager - RHV Virt/SLA

rba...@redhat.comM: +16518159306 IM: rbarry
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YONQ3MYAIGPKFJJE3BNXABI3FDX6VWSL/


[ovirt-users] Re: oVirt 4.3 - create windows2012 vm failed - kvm_init_vcpu failed

2019-03-26 Thread Ryan Barry
Hey Jingjie -

Do you have any more information? This isn't something we've seen before,
but it looks like your VM may be initialized with the wrong CPU type. The
following would help:

(On the engine)
/var/log/ovirt-engine/engine.log

(On the host)
lscpu
virsh domcapabilities
virsh capabilities
/var/log/vdsm/vdsm.log
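
If it helps, a rough way to bundle those up for attaching (filenames are just
suggestions):

    # on the host the VM was scheduled on
    lscpu > lscpu.txt
    virsh -r domcapabilities > domcapabilities.xml
    virsh -r capabilities > capabilities.xml
    tar czf host-info.tar.gz lscpu.txt domcapabilities.xml capabilities.xml \
        /var/log/vdsm/vdsm.log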

On Thu, Mar 21, 2019 at 2:48 PM Jingjie Jiang 
wrote:

> I tried on oVirt4.2.8 and it worked fine.
>
> Is this oVirt 4.3 problem only?
>
>
>
> On 3/19/19 4:42 PM, jingjie.ji...@oracle.com wrote:
> > Hi,
> >
> > The oVirt 4.3 is installed on CentOS 7.6.
> > Windows2012 vm failed with following error messages:
> >
> > 2019-03-18 09:19:12,941-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (ForkJoinPool-1-worker-15) [] EVENT_ID: VM_DOWN_ERROR(119), VM win12_nfs is
> down with error. Exit message: internal error: qemu unexpectedly closed the
> monitor: 2019-03-18T13:19:11.837850Z qemu-kvm: warning: All CPU(s) up to
> maxcpus should be described in NUMA config, ability to start up with
> partial NUMA mappings is obsoleted and will be removed in future
> > Hyper-V SynIC is not supported by kernel
> > 2019-03-18T13:19:11.861071Z qemu-kvm: kvm_init_vcpu failed: Function not
> implemented.
> >
> > Any suggestions?
> >
> > Thanks,
> > Jingjie
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GHUBLKYNM4G5QOIEFY5H7QKPLBDU5H7Z/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TZACJF7WRHZ3PYZ2GKHW4SO3ZS5MM6T7/
>


-- 

Ryan Barry

Associate Manager - RHV Virt/SLA

rba...@redhat.comM: +16518159306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T5KFMZ2ROLWPYHFXMNFSQRBWSXC3UXNX/


[ovirt-users] Re: Migrate HE between hosts failed.

2019-03-21 Thread Ryan Barry
Do you have logs?

On Thu, Mar 21, 2019 at 8:35 AM  wrote:

> I have 3 nodes for oVirt.
>
> 1. Intel Westmere IBRS SSBD Family
> 2. Intel Westmere IBRS SSBD Family
> 3. Intel SandyBridge IBRS SSBD Family - if install HE on this node
>
> HE cannot migrate to other node.
> 1. Intel Westmere IBRS SSBD Family
> 2. Intel Westmere IBRS SSBD Family - if install HE on this node
> 3. Intel SandyBridge IBRS SSBD Family
>
> HE migrate to all nodes.
> In all cases cluster is set to Westmere CPU type.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SEYW5ZX5OHGC46BLFACBIVOVCLARHKBA/
>


-- 

Ryan Barry

Associate Manager - RHV Virt/SLA

rba...@redhat.comM: +16518159306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HUAXLXB3AYLHR24F65ZCQ347DUUPP2MH/


[ovirt-users] Re: Hosted Engine I/O scheduler

2019-03-20 Thread Ryan Barry
On Wed, Mar 20, 2019, 1:16 PM Darrell Budic  wrote:

> Inline:
>
> On Mar 20, 2019, at 4:25 AM, Roy Golan  wrote:
>
> On Mon, 18 Mar 2019 at 22:14, Darrell Budic 
> wrote:
>
>> I agree, been checking some of my more disk intensive VMs this morning,
>> switching them to noop definitely improved responsiveness. All the virtio
>> ones I’ve found were using deadline (with RHEL/Centos guests), but some of
>> the virt-scsi were using deadline and some were noop, so I’m not sure of a
>> definitive answer on that level yet.
>>
>> For the hosts, it depends on what your backend is running. With a
>> separate storage server on my main cluster, it doesn’t matter what the
>> hosts set for me. You mentioned you run hyper converged, so I’d say it
>> depends on what your disks are. If you’re using SSDs, go none/noop as they
>> don’t benefit from the queuing. If they are HDDs, I’d test cfq or deadline
>> and see which gave better latency and throughput to your vms. I’d guess
>> you’ll find deadline to offer better performance, but cfq to share better
>> amongst multiple VMs. Unless you use ZFS underneath, then go noop and let
>> ZFS take care of it.
>>
>> On Mar 18, 2019, at 2:05 PM, Strahil  wrote:
>>
>> Hi Darrel,
>>
>> Still, based on my experience we shouldn't queue our I/O in the VM, just
>> to do the same in the Host.
>>
>> I'm still considering if I should keep deadline  in my hosts or to switch
>> to 'cfq'.
>> After all, I'm using Hyper-converged oVirt and this needs testing.
>> What I/O scheduler  are  you using on the  host?
>>
>>
> Our internal scale team is testing now 'throughput-performance' tuned
> profile and it gives
> promising results, I suggest you try it as well.
> We will go over the results of a comparison against the virtual-guest
> profile
> , if there will be evidence for improvements we will set it as the default
> (if it won't degrade small,medium scale envs).
>
>
> I don’t think that will make a difference in this case. Both virtual-host
> and virtual-guest include the throughput-performance profile, just with
> “better” virtual memory tunings for guest and hosts. None of those 3 modify
> the disk queue schedulers, by default, at least not on my Centos 7.6
> systems.
>
> Re my testing, I have virtual-host on my hosts and virtual-guest on my
> guests already.
>

Unfortunately, the ideal scheduler really depends on the storage configuration.
Gluster, ZFS, iSCSI, FC, and NFS don't align on a single "best" configuration
(to say nothing of direct LUNs on guests), and then there are workload
considerations.

The scale team is aiming for a balanced "default" policy rather than one
which is best for a specific environment.

That said, I'm optimistic that the results will let us give better
recommendations if your workload/storage benefits from a different scheduler.

>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FH5LLYXSEJKXTVVOAZCSMV6AAU33CNCA/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/74CWGUYUKEKSV3ANEGGEE2L5GJZVCN23/


[ovirt-users] Re: Migrate HE between hosts failed.

2019-03-19 Thread Ryan Barry
Just to confirm, the entire cluster is set to Westmere as the CPU type?

Can you please attach vdsm logs and libvirt logs from the host you are
trying to migrate to?
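
As a quick sanity check on the destination host, something like this (read-only
libvirt queries) will show what the host itself reports:

    virsh -r capabilities | grep -i '<model>'
    virsh -r domcapabilities | grep -i 'westmere\|sandybridge'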

On Mon, Mar 18, 2019 at 4:20 AM  wrote:
>
> All my VM's, including VM with HE:
>
> Guest CPU Type: Intel Westmere Family
>
> All VM migrating, excluding VM with HE.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UJNYD54C4727TXF6MN26LADGEETIYMQ4/



-- 

Ryan Barry

Associate Manager - RHV Virt/SLA

rba...@redhat.comM: +16518159306 IM: rbarry
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WXRXFQPF3MSCPXB7A4MA5GR5RDVQZBVA/


[ovirt-users] Re: Live migration failed

2019-03-19 Thread Ryan Barry
dstQemu='192.168.138.135'}), log id: 5cef4981
> 2019-03-12 14:39:30,039+08 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
> (default task-131) [7f0cf113-55e8-4def-9c68-de3b91d6d641] FINISH,
> MigrateBrokerVDSCommand, return: , log id: 5cef4981
> 2019-03-12 14:39:30,048+08 INFO
> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-131)
> [7f0cf113-55e8-4def-9c68-de3b91d6d641] FINISH, MigrateVDSCommand, return:
> MigratingFrom, log id: 7eeb678c
> 2019-03-12 14:39:30,067+08 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-131) [7f0cf113-55e8-4def-9c68-de3b91d6d641] EVENT_ID:
> VM_MIGRATION_START(62), Migration started (VM: Win_2016_1, Source:
> host2..com, Destination: host3, User: admin@internal-authz).
> 2019-03-12 14:39:33,901+08 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (ForkJoinPool-1-worker-3) [] VM '5cad5c5f-5aab-46ec-a28e-d484abc0401d' was
> reported as Down on VDS '1bc9b9e9-1e90-4570-9930-08416d1927cc'(host3)
> 2019-03-12 14:39:33,903+08 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
> (ForkJoinPool-1-worker-3) [] START, DestroyVDSCommand(HostName = host3,
> DestroyVmVDSCommandParameters:{hostId='1bc9b9e9-1e90-4570-9930-08416d1927cc',
> vmId='5cad5c5f-5aab-46ec-a28e-d484abc0401d', secondsToWait='0',
> gracefully='false', reason='', ignoreNoVm='true'}), log id: c853ba5
> 2019-03-12 14:39:34,211+08 INFO
> [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> (EE-ManagedThreadFactory-engineScheduled-Thread-73) []
> BaseAsyncTask::onTaskEndSuccess: Task
> '67631cf6-4c75-4681-88ef-fd4af56c0363' (Parent Command 'RemoveDisk',
> Parameters Type
> 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
> successfully.
> 2019-03-12 14:39:34,604+08 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
> (ForkJoinPool-1-worker-3) [] Failed to destroy VM
> '5cad5c5f-5aab-46ec-a28e-d484abc0401d' because VM does not exist, ignoring
> 2019-03-12 14:39:34,605+08 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
> (ForkJoinPool-1-worker-3) [] FINISH, DestroyVDSCommand, return: , log id:
> c853ba5
> 2019-03-12 14:39:34,605+08 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (ForkJoinPool-1-worker-3) [] VM
> '5cad5c5f-5aab-46ec-a28e-d484abc0401d'(Win_2016_1) was unexpectedly
> detected as 'Down' on VDS '1bc9b9e9-1e90-4570-9930-08416d1927cc'(ohost3)
> (expected on 'f9014bc4-485c-4eb0-a9bc-42d13ed68f41')
> 2019-03-12 14:39:34,605+08 ERROR
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (ForkJoinPool-1-worker-3) [] Migration of VM 'Win_2016_1' to host 'host3'
> failed: VM destroyed during the startup.
> 2019-03-12 14:39:34,615+08 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (ForkJoinPool-1-worker-10) [] VM
> '5cad5c5f-5aab-46ec-a28e-d484abc0401d'(Win_2016_1) moved from
> 'MigratingFrom' --> 'Up'
> 2019-03-12 14:39:34,615+08 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (ForkJoinPool-1-worker-10) [] Adding VM
> '5cad5c5f-5aab-46ec-a28e-d484abc0401d'(Win_2016_1) to re-run list
> 2019-03-12 14:39:34,621+08 ERROR
> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
> (ForkJoinPool-1-worker-10) [] Rerun VM
> '5cad5c5f-5aab-46ec-a28e-d484abc0401d'. Called from VDS 'host2..com'
> 2019-03-12 14:39:34,752+08 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-53959) [] START,
> MigrateStatusVDSCommand(HostName = host2..com,
> MigrateStatusVDSCommandParameters:{hostId='f9014bc4-485c-4eb0-a9bc-42d13ed68f41',
> vmId='5cad5c5f-5aab-46ec-a28e-d484abc0401d'}), log id: 7ded4ad7
> 2019-03-12 14:39:34,760+08 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-53959) [] FINISH,
> MigrateStatusVDSCommand, return: , log id: 7ded4ad7
> 2019-03-12 14:39:34,786+08 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-53959) [] EVENT_ID:
> VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed  (VM: Win_2016_1,
> Source: host2..com, Destination: host3).
>
> Any help is greatly appreciated.
>
>
> regards,
> Bong SF
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DWCLF4NYZWWN43K6774YGQKANYHCWFTL/
>


-- 

Ryan Barry

Associate Manager - RHV Virt/SLA

rba...@redhat.comM: +16518159306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PB2OLTT2WCCR6HBUJJGEIRSD7ASX654O/


[ovirt-users] Re: oVirt Node install - kickstart postintall

2019-01-16 Thread Ryan Barry
The quick answer is that operations in %post must be performed before
'nodectl init'. If that's done, they'll happily stick.
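
As a minimal sketch (the key and paths are only examples), the ordering in the
kickstart would look like:

    %post --erroronfail
    # custom changes first, while the layer is still being assembled
    mkdir -p -m 0700 /root/.ssh
    echo "ssh-rsa AAAA... admin@example" > /root/.ssh/authorized_keys
    chmod 0600 /root/.ssh/authorized_keys
    # sealing the layer comes last, so the changes above are carried over
    nodectl init
    %end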

On Mon, Jan 14, 2019, 9:29 PM Brad Riemann  wrote:

> Can you send over the whole %post segment via pastebin? I think I've run
> across something similar that I addressed, but just want to be sure.
>
>
> Brad Riemann
> Sr. System Architect
> Cloud5 Communications
>
> --
> *From:* jeanbapti...@nfrance.com 
> *Sent:* Friday, January 11, 2019 1:51:14 AM
> *To:* users@ovirt.org
> *Subject:* [ovirt-users] oVirt Node install - kickstart postintall
>
> CAUTION: This email originated from outside of CLOUD5. Do not click links
> or open attachments unless you recognize the sender and know the content is
> safe.
>
>
> Hello everybody,
>
> Since days, I'm trying to install oVirt (via Foreman) in network mode
> (TFTP net install).
> All is great, but I want to make some actions in postinstall (%post).
> Some actions are relatated to /etc/sysconfig/network-interfaces and
> another action is related to root authorized_keys.
>
> When I try to add a pub key to a newly created authorized_keys for root, it
> works (verified inside anaconda).
> But after installation and the anaconda reboot, I've noticed that all my
> %post actions in / (root) are discarded. After the reboot, there is nothing
> in /root/.ssh, for example.
> Whereas, in /etc, all my modifications are preserved.
>
> I thought it was a SELinux-related issue, but it is not related to SELinux.
>
> I am missing something. Can you please help me understand how the oVirt
> install / partitioning works?
>
> thanks for all
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BCQEKHCPSFDEFDPI4OVSXRTJS3BSJH5R/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TGT7K3N245SH37OIKCTG75WI5PM46OOI/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6EFH7W5L6R6PZ3UADFU5ZQUKT2Y543NX/


[ovirt-users] Re: migrate hosted-engine vm to another cluster?

2019-01-16 Thread Ryan Barry
Re-raising this for discussion.

As I commented on the bug, Hosted Engine is such a special use case in
terms of setup, configuration, and migration that I'm not sure engine
itself is the right place to handle this. We have the option of changing
the ha broker|agent to use the engine API to initiate migrations, but
there's still a risk that the hosts in the secondary cluster will not be
able to reach the storage, etc.

It would be great to get this resolved if there's not currently a way to do
it, but we need to decide on a long-term direction for it. Currently, HE
can run on additional hosts in the datacenter as an emergency fallback, but
it reverts once the HE cluster is back out of maintenance. My ideal would
be to extend the hosted-engine utility with an additional parameter which
reaches out to the Engine API in order to handle the needed database
updates after some safety checks (probably over ansible) to ensure that the
HE storage domain is reachable from hosts in the other cluster.

But I'm not a hosted engine expert. Is there currently a way to do this? If
there isn't, do we want to add additional logic to ha agent|broker, or
reach out to the Engine?

On Tue, Jan 15, 2019 at 8:27 AM Douglas Duckworth 
wrote:

> Hi
>
> I opened a BugZilla at https://bugzilla.redhat.com/show_bug.cgi?id=1664777
> but no steps have been shared on how to resolve.  Does anyone know how this
> can be fixed without destroying the data center and building a new hosted
> engine?
>
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit <https://scu.med.cornell.edu>
> Weill Cornell Medicine
> 1300 York Avenue
> New York, NY 10065
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
>
> On Wed, Jan 9, 2019 at 10:22 AM Douglas Duckworth 
> wrote:
>
>> Hi
>>
>> Should I open a Bugzilla to resolve this problem?
>>
>> Thanks,
>>
>> Douglas Duckworth, MSc, LFCS
>> HPC System Administrator
>> Scientific Computing Unit <https://scu.med.cornell.edu>
>> Weill Cornell Medicine
>> 1300 York Avenue
>> New York, NY 10065
>> E: d...@med.cornell.edu
>> O: 212-746-6305
>> F: 212-746-8690
>>
>>
>> On Wed, Dec 19, 2018 at 1:13 PM Douglas Duckworth <
>> dod2...@med.cornell.edu> wrote:
>>
>>> Hello
>>>
>>> I am trying to migrate my hosted-engine VM to another cluster in the
>>> same data center.  Hosts in both clusters have the same logical networks
>>> and storage.  Yet migrating the VM isn't an option.
>>>
>>> To get the hosted-engine VM on the other cluster I started the VM on
>>> host in that other cluster using "hosted-engine --vm-start."
>>>
>>> However HostedEngine still associated with old cluster as shown
>>> attached.  So I cannot live migrate the VM.  Does anyone know how to
>>> resolve?  With other VMs one can shut them down then using the "Edit"
>>> option.  Though that will not work for HostedEngine.
>>>
>>>
>>> Thanks,
>>>
>>> Douglas Duckworth, MSc, LFCS
>>> HPC System Administrator
>>> Scientific Computing Unit <https://scu.med.cornell.edu>
>>> Weill Cornell Medicine
>>> 1300 York Avenue
>>> New York, NY 10065
>>> E: d...@med.cornell.edu
>>> O: 212-746-6305
>>> F: 212-746-8690
>>>
>> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/J2GZK5PUBZIQLGLNZ2UUCSIES6HSZLHC/
>


-- 

Ryan Barry

Associate Manager - RHV Virt/SLA

rba...@redhat.comM: +16518159306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IRKEWL34BRPBQSMR5HGHT5RI6O2PQA63/


[ovirt-users] Re: Ovirt Node NG (4.2.3.1-0.20180530) Boot fail ISCSI using ibft after installation

2018-06-18 Thread Ryan Barry
On Mon, Jun 11, 2018 at 12:06 PM, Ralf Schenk  wrote:

> Hello,
>
> I successfully installed Ovirt Node NG from ISO to an ISCSI target
> attached via first network interface by using following extensions to the
> grub cmdline:
>
> "rd.iscsi.ibft=1 ip=ibft ip=eno2:dhcp"
>
> I want to use the server as diskless ovirt-node-ng Server.
>
> After successful install the system reboots and starts up but it fails
> later in dracut even having detected correctly the disk and all the LV's.
>
> I think "iscsistart" is run multiple times even after already being logged
> in to the ISCSI-Target and that fails finally like that:
>
> *[  147.644872] localhost dracut-initqueue[1075]: iscsistart: initiator
> reported error (15 - session exists)*
> [  147.645588] localhost dracut-initqueue[1075]: iscsistart: Logging into
> iqn.2018-01.de.databay.office:storage01.epycdphv02-disk1 172.16.1.3:3260,1
> [  147.651027] localhost dracut-initqueue[1075]: Warning: 'iscsistart -b '
> failed with return code 0
> [  147.807510] localhost systemd[1]: Starting Login iSCSI Target...
> [  147.809293] localhost iscsistart[6716]: iscsistart: TargetName not set.
> Exiting iscsistart
> [  147.813625] localhost systemd[1]: iscsistart_iscsi.service: main
> process exited, code=exited, status=7/NOTRUNNING
> [  147.824897] localhost systemd[1]: Failed to start Login iSCSI Target.
> [  147.825050] localhost systemd[1]: Unit iscsistart_iscsi.service entered
> failed state.
> [  147.825185] localhost systemd[1]: iscsistart_iscsi.service failed.
>

Hey Ralf -

I don't have an ibft environment to test on, but my understanding from the
documentation is that TargetName shouldn't be required when using ibft. Can
you please send over the whole dracut log, and I'll see if I can run down a
root cause?
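
If you only have the dracut emergency shell available, its built-in report is
usually the easiest thing to grab (device name below is just a placeholder):

    # inside the dracut emergency shell
    rdsosreport
    # then copy the report off the machine, e.g. to a USB stick
    mount /dev/sdX1 /mnt && cp /run/initramfs/rdsosreport.txt /mnt/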


> --
> Mit freundlichen Grüßen
>
> Ralf Schenk
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/VVWCTGGBVCPEFKO6U7FMJNWQ6XXTSJO7/
>
>


-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FRITUVYCWGIP2K35VP5FAFXKKNQJ5B27/


Re: [ovirt-users] Non-RHEL RPMs gone after RHV-H upgrade

2018-01-11 Thread Ryan Barry
Hey Colin -

You're correct -- they will persist. However, there's only a plugin for yum,
not for bare rpm. If the packages were installed with "rpm -Uvh ...", they
won't be 'sticky'. I believe this is also covered in the documentation.

If you don't remember whether you used yum or not, you can check
/var/imgbased/persisted-rpms/ to see if anything is present.
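
In other words, something along these lines (package name is only an example)
is what survives an upgrade:

    # installed through yum, so the imgbased plugin records it
    yum install -y ./xymon-client.rpm
    # copies of yum-installed packages end up here and are reinstalled
    # on the next image
    ls /var/imgbased/persisted-rpms/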

On Tue, Jan 9, 2018 at 7:24 PM, Colin Coe  wrote:

> Hi all
>
> We're running RHV 4.1.6.  Yesterday I upgraded the RHV-H nodes in our DEV
> environment to 20180102 and found that all the non-RHEL RPMs are now gone.
> Their associated config files in /etc are still there.  The RPMs in
> question were from HPE SPP plus a monitoring system client (Xymon).
>
> I had thought that non-RHEL RPMs would persist after a host upgrade.
>
> Am I wrong on this?
>
> Thanks
>
> CC
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node 4.1 question, writing files to /root and RPMs

2018-01-11 Thread Ryan Barry
Next-gen node (since 4.0) doesn't have any strong notion of "persistence",
unlike previous versions of oVirt Node.

My guess would be that /exports was not moved across to the new system.

The update process for Node is essentially:

* create a new lv
* unpack the squashfs in the new RPM and put it on the LV
* Go through /var, /etc, and /root to see if any files were modified/added,
and copy those to the new layer
* Add a new bootloader entry
* Re-install any persistent RPMs

I would guess that /exports was not on its own partition, so it didn't carry
across. Mounting the LV for the old image somewhere (/tmp/..., for example)
will make the old contents visible again.
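
Roughly like this (the VG/LV names come from 'lvs' or 'imgbase layout' on your
host, so substitute your own):

    lvs                                   # find the previous ovirt-node-ng-* layer
    mkdir -p /tmp/old-root
    mount /dev/onn/ovirt-node-ng-4.1.X-0.YYYYMMDD.0+1 /tmp/old-root
    ls /tmp/old-root/exports              # the old contents should be visible here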


On Tue, Dec 19, 2017 at 9:26 AM, Kasturi Narra  wrote:

> Hello Matt,
>
>All the partitions will be persisted when gluster is installed on
> the ovirt node since gluster recommends user not to create bricks in root
> directory. If the gluster bricks are created in root partition then once
> the update of the node is done, you will not be able to see any of the
> bricks.
>
> Hope this helps !!!
>
> Thanks
> kasturi.
>
> On Tue, Dec 19, 2017 at 4:28 AM, Matt Simonsen  wrote:
>
>> On 12/15/2017 03:06 AM, Simone Tiraboschi wrote:
>>
>> On Fri, Dec 15, 2017 at 4:45 AM, Donny Davis 
>> wrote:
>>
>>> have you gotten an image update yet?
>>>
>>> On Thu, Dec 14, 2017 at 8:08 PM, Matt Simonsen  wrote:
>>>
>>>> Hello all,
>>>>
>>>> I read at https://www.ovirt.org/develop/projects/node/troubleshooting/
>>>> that "Changes made from the command line are done at your own risk. Making
>>>> changes has the potential to leave your system in an unusable state." It
>>>> seems clear that RPMs should not be installed.
>>>>
>>>
>> That document mainly refers to vintage node.
>> In Next Generation Node now we have rpm persistence; please check
>> https://www.ovirt.org/develop/release-management/features/no
>> de/node-next-persistence/
>>
>>
>>
>>
>> I'm sure glad we tested!
>>
>> On one Node image we had images locally stored in /exports and shared out
>> via NFS. After an upgrade & reboot, images are gone.
>>
>> If we "Convert to local storage" will the data persist?  I am planning to
>> test, but want to be sure how this is designed.
>>
>> I assume during a Gluster installation something is also updated in oVirt
>> Node to allow for the Gluster partition to persist?
>>
>> At this point I'm thinking I should manually install via CentOS7 to
>> ensure folders and partitions are persistent. Is there any downside to
>> installing over CentOS7?
>>
>> Thanks
>> Matt
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt NGN image customization troubles

2018-01-11 Thread Ryan Barry
I haven't tried to build on EL for a long time, but the easiest way to modify
the image may simply be to unpack the squashfs, chroot into it, and repack it.
Have you tried this?
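
Something like this is what I mean, using squashfs-tools (filenames are just
examples):

    unsquashfs -d squashfs-root ovirt-node-ng-image.squashfs
    # if the rootfs is wrapped in a LiveOS/rootfs.img inside the squashfs,
    # loop-mount that and chroot into the mountpoint instead
    chroot squashfs-root /bin/bash        # add/remove RPMs here, then exit
    mksquashfs squashfs-root modified.squashfs -comp xz -noappend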

On Wed, Dec 27, 2017 at 11:33 AM, Giuseppe Ragusa <
giuseppe.rag...@hotmail.com> wrote:

> Hi all,
>
> I'm trying to modify the oVirt NGN image (to add RPMs, since imgbased
> rpmpersistence currently seems to have a bug: https://bugzilla.redhat.com/
> show_bug.cgi?id=1528468 ) but I'm unfortunately stuck at the very
> beginning: it seems that I'm unable to recreate even the standard 4.1
> squashfs image.
>
> I'm following the instructions at https://gerrit.ovirt.org/
> gitweb?p=ovirt-node-ng.git;a=blob;f=README
>
> I'm working inside a CentOS7 fully-updated vm (hosted inside VMware, with
> nested virtualization enabled).
>
> I'm trying to work on the 4.1 branch, so I issued a:
>
> ./autogen.sh --with-ovirt-release-rpm-url=http://resources.ovirt.org/pub/
> yum-repo/ovirt-release41.rpm
>
> And after that I'm stuck in the "make squashfs" step: it never ends (keeps
> printing dots forever with no errors/warnings in log messages nor any
> apparent activity on the virtual disk image).
>
> Invoking it in debug mode and connecting to the VNC console shows the
> detailed Plymouth startup listing stuck (latest messages displayed:
> "Starting udev Wait for Complete Device Initialization..." and "Starting
> Device-Mapper Multipath Device Controller...")
>
> I wonder if it's actually supposed to be run only from a recent Fedora
> (the "dnf" reference seems a good indicator): if so, which version?
>
> I kindly ask for advice: has anyone succeeded in modifying/reproducing NGN
> squash images recently? If so, how? :-)
>
> Many thanks in advance,
>
> Giuseppe
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node 4.1.8 -> 4.2 upgrade

2018-01-11 Thread Ryan Barry
   179
> >> ovirt-4.2-centos-gluster312/x86_64
> >> CentOS-7 - Gluster 3.12
> >>
> >>   93
> >> ovirt-4.2-centos-opstools/x86_64
> >> CentOS-7 - OpsTools - release
> >>
> >>  421
> >> ovirt-4.2-centos-ovirt42/x86_64
> >> CentOS-7 - oVirt 4.2
> >>
> >>  201
> >> ovirt-4.2-centos-qemu-ev/x86_64
> >> CentOS-7 - QEMU EV
> >>
> >>   39
> >> ovirt-4.2-epel/x86_64
> >> Extra Packages for Enterprise Linux 7
> >> - x86_64
> >>   12,184
> >> ovirt-4.2-virtio-win-latest
> >> virtio-win builds roughly matching
> >> what will be shipped in upcoming RHEL
> >>  35
> >> ovirt-centos-ovirt41/x86_64
> >> CentOS-7 - oVirt 4.1
> >>
> >>  456
> >> sac-gdeploy/x86_64
> >> Copr repo for gdeploy owned by sac
> >>
> >>4
> >> virtio-win-stable
> >> virtio-win builds roughly matching
> >> what was shipped in latest RHEL
> >>   5
> >> repolist: 34,890
> >
> > I think it’s missing the base centos repos, maybe that is a bug in the
> initial deployment.
> > Try to add base CentOS 4.2 repo from CentOS site, all the above are
> supposed to be “on top” of that one
> >
> > Thanks,
> > michal
>
> Much obliged - the CentOS-Base.repo is there, but set to 'enabled=0' -
> perhaps a bug somewhere along the line as you say - with it enabled it
> lets the yum update run as far as confirming the 'Transaction
> Summary', suspect it will be ok now.  Many thanks.
>
> >
> >>
> >>>
> >>>>
> >>>>
> >>>> OS Version:
> >>>> RHEL - 7 - 4.1708.el7.centos
> >>>> OS Description:
> >>>> oVirt Node 4.1.8
> >>>> Kernel Version:
> >>>> 3.10.0 - 693.11.1.el7.x86_64
> >>>> KVM Version:
> >>>> 2.9.0 - 16.el7_4.8.1
> >>>> LIBVIRT Version:
> >>>> libvirt-3.2.0-14.el7_4.5
> >>>> VDSM Version:
> >>>> vdsm-4.19.43-1.el7.centos
> >>>> SPICE Version:
> >>>> 0.12.8 - 2.el7.1
> >>>> GlusterFS Version:
> >>>> glusterfs-3.8.15-2.el7
> >>>> CEPH Version:
> >>>> librbd1-0.94.5-2.el7
> >>>>
> >>>> Cheers,
> >>>>
> >>>> Ed
> >>>> ___
> >>>> Users mailing list
> >>>> Users@ovirt.org
> >>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>>
> >>>>
> >>>
> >
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node ng upgrade failed

2017-11-23 Thread Ryan Barry
> File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 4910, in
> install
>
> raise Errors.InstallError, _('No package(s) available to install')
>
> InstallError: Kein(e) Paket(e) zum Installieren verfügbar.
>
>
> ###
>
>
>
> Some more information on my system:
>
>
> ###
>
>
> $ mount
>
> ...
>
> /dev/mapper/onn-ovirt--node--ng--4.1.1.1--0.20170504.0+1 on / type ext4
> (rw,relatime,discard,stripe=128,data=ordered)
>
>
>
> $ imgbase layout
>
> ovirt-node-ng-4.1.1.1-0.20170406.0
>
> ovirt-node-ng-4.1.1.1-0.20170504.0
>
>  +- ovirt-node-ng-4.1.1.1-0.20170504.0+1
>
> ovirt-node-ng-4.1.7-0.20171108.0
>
>  +- ovirt-node-ng-4.1.7-0.20171108.0+1
>
>
>
>
>
> $ rpm -q ovirt-node-ng-image
>
> Das Paket ovirt-node-ng-image ist nicht installiert
>
>
>
> $ nodectl check
>
> Status: OK
>
> Bootloader ... OK
>
>   Layer boot entries ... OK
>
>   Valid boot entries ... OK
>
> Mount points ... OK
>
>   Separate /var ... OK
>
>   Discard is used ... OK
>
> Basic storage ... OK
>
>   Initialized VG ... OK
>
>   Initialized Thin Pool ... OK
>
>   Initialized LVs ... OK
>
> Thin storage ... OK
>
>   Checking available space in thinpool ... OK
>
>   Checking thinpool auto-extend ... OK
>
> vdsmd ... OK
>
>
> ###
>
>
> I can restart my Node and VMs are running, but oVirt Engine tells me no
> update is available. It seems 4.1.7 is installed, but Node still boots the
> old 4.1.1 image.
>
>
> Can i force run the upgrade again or is there another way to fix this?
>
>
> Thanks
>
> Greets
>
> Kilian
>
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
> --
>
> Lev Veyde
>
> Software Engineer, RHCE | RHCVA | MCITP
>
> Red Hat Israel
>
> <https://www.redhat.com>
>
> l...@redhat.com | lve...@redhat.com
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>
-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306  IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.2 Lab on ESXi

2017-10-31 Thread Ryan Barry
Simone, any thoughts?

Based on some old bugs, I suspect the machine type is incorrect (there's
another bug about how the HE VM simply appears as "Linux"). Using
pc-i440fx-rhel7.2.0 is reported to work.
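
If someone can check, the machine type the HE VM actually got should be visible
with a read-only query on the host (VM name is normally HostedEngine):

    virsh -r dumpxml HostedEngine | grep -i machine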

On Tue, Oct 31, 2017 at 10:26 AM, Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:

> We had a system some nodes on VMware in nested mode, I think they may have
> worked in 3.6 and VMware 6.0, when they were upgraded to VMware 6.5 and oVirt 
> 4.1 when
> the VMs were started they hung at the SeaBIOS prompt, if
> they were migrated to a oVirt bare metal node the VM would finish booting.
>
>
> I had a Centos KVM VM on the VMware host in nested mode and it would boot
> VMs without a problem.
>
>
> Regards,
>
>   Paul S.
> --
> *From:* users-boun...@ovirt.org  on behalf of
> Mustapha Aissat 
> *Sent:* 31 October 2017 14:01
> *To:* users@ovirt.org
> *Subject:* [ovirt-users] Ovirt 4.2 Lab on ESXi
>
> Dears,
>
> I'm trying to setup a lab for Ovirt 4.2 on VMware ESXi 6.5.
> I checked "Expose hardware assisted virtualization to the guest OS" and 
> "Enable
> virtualized CPU performance counters" options to activate the nested
> virtualization.
>
> I installed Ovirt node and I try to setup a self hosted engine on it.
> All pre-configrations were done successfully. But when arriving to step
> "Running engine-setup on the appliance", the Hosted engine VM wouldn't
> start.
>
> I open the VM console, it stuck a bios level. It display the following :
>
> SeaBIOS (Version 1.10.2-3.el7_4.1)
> Machine UUID ced5025d-eec1-458a-991c-cc3bba9392dd
>
> Does anybody already encountered this issue? Is there any more parameters
> to set in the ESXi for it to work?
>
> Thanks for your help
> Regards,
> To view the terms under which this email is distributed, please go to:-
> http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.2 Lab on ESXi

2017-10-31 Thread Ryan Barry
It's been a while since I've used ESXi, but it's possible that the vSwitch
must be in promiscuous mode.

Have you tried other guests with the same configuration?

On Tue, Oct 31, 2017 at 10:01 AM, Mustapha Aissat 
wrote:

> Dears,
>
> I'm trying to setup a lab for Ovirt 4.2 on VMware ESXi 6.5.
> I checked "Expose hardware assisted virtualization to the guest OS" and 
> "Enable
> virtualized CPU performance counters" options to activate the nested
> virtualization.
>
> I installed Ovirt node and I try to setup a self hosted engine on it.
> All pre-configrations were done successfully. But when arriving to step
> "Running engine-setup on the appliance", the Hosted engine VM wouldn't
> start.
>
> I open the VM console, it stuck a bios level. It display the following :
>
> SeaBIOS (Version 1.10.2-3.el7_4.1)
> Machine UUID ced5025d-eec1-458a-991c-cc3bba9392dd
>
> Does anybody already encountered this issue? Is there any more parameters
> to set in the ESXi for it to work?
>
> Thanks for your help
> Regards,
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to start oVirt in GUI

2017-10-31 Thread Ryan Barry
oVirt Node does not ship with an X server.

The recommended setup path is to install oVirt, then open a web browser and
browse to:

http://ip.address.of.node:9090

And click the "Virtualization" tab after you log in. Use this to configure
oVirt Hosted Engine; the web console for the Engine will manage everything
that Cockpit cannot.

On Tue, Oct 31, 2017 at 9:51 AM, Stephen Liu  wrote:

> Hi all,
>
> I have ovirt-node-ng-installer-ovirt-4.1-2017103006.iso
> installed on KVM as VM.  It is now running but only a console without GUI.
> I can login to run commands.  Copy and Paste between Host and VM is NOT
> working (I suppose because not running on graphic mode).
>
> Please advise how to start GUI oVirt?
>
> Thanks
>
> Regards
> SL
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node 4.1.6 on IBM x3650 M3

2017-10-31 Thread Ryan Barry
We have an outstanding problem with 7.4 images on EFI, since the platform moved
"grub2-efi" to "grub2-efi-x86_64". There's a patch pending here:
https://gerrit.ovirt.org/#/c/83008/

Hopefully merged soon.

In the meantime, using Anaconda from CentOS 7.3 via kickstart (with the
latest Node image) would work, but this isn't very convenient.

Otherwise, you can drop to a shell and manually copy
/run/sysroot/.../grubx64.efi to /boot/efi, but this isn't very nice.
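
From that shell, roughly (the source path varies per build, hence the find; the
EFI subdirectory is an assumption to adjust):

    find /run /boot -name 'grubx64.efi' 2>/dev/null
    cp /run/sysroot/.../grubx64.efi /boot/efi/EFI/centos/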

On Tue, Oct 31, 2017 at 4:46 AM, Yuval Turgeman  wrote:

> Hi,
>
> We did have some problems in the past with efi, but they should be fixed
> by now.
> Did you use the ISO for installation ?  What error are you seeing - which
> file is missing there ?
>
> Thanks,
> Yuval.
>
> On Thu, Oct 26, 2017 at 11:03 PM, Jonathan Baecker 
> wrote:
>
>> Thank you, good to know that this works! I need to play a bit with it.
>>
>>
>>
>> Am 26.10.2017 um 21:59 schrieb Eduardo Mayoral:
>>
>>> Yes, I use power management with ipmilan, no issues.
>>>
>>> I do have license on the IMM for remote console, but that is not a
>>> requirement, AFAIK.
>>>
>>> I remember I first tried to use for oVirt a dedicated login on the IMM
>>> with just "Remote Server Power/Restart Access" and I could not get to
>>> work, so I just granted "Supervisor" to the dedicated login. Other than
>>> that, no problem.
>>>
>>> Eduardo Mayoral Jimeno (emayo...@arsys.es)
>>> Administrador de sistemas. Departamento de Plataformas. Arsys internet.
>>> +34 941 620 145 ext. 5153
>>>
>>> On 26/10/17 21:47, Jonathan Baecker wrote:
>>>
>>>> Thank you, for your commands! I have now also install CentOS minimal,
>>>> this works. I only though that oVirt Node have some optimizations, but
>>>> maybe not.
>>>>
>>>> @Eduardo Mayoral, can I ask you that you are able with this servers,
>>>> to use the power management? As I understand, they support ipmilan,
>>>> but I don't know how...
>>>>
>>>> Regards
>>>> Jonathan
>>>>
>>>> Am 24.10.2017 um 23:41 schrieb Sean McMurray:
>>>>
>>>>> I have seen this problem before. For some reason, oVirt Node 4.1.x
>>>>> does not always install everything right for efi. In my limited
>>>>> experience, it fails to do it correctly 4 out of 5 times. The mystery
>>>>> to me is why it gets it right sometimes. I solve the problem by
>>>>> manually copying the missing file into my efi boot partition.
>>>>>
>>>>>
>>>>> On 10/24/2017 12:46 PM, Eduardo Mayoral wrote:
>>>>>
>>>>>> 3 of my compute nodes are IBM x3650 M3 . I do not use oVirt Node but
>>>>>> rather plain CentOS 7 for the compute nodes. I use 4.1.6 too.
>>>>>>
>>>>>> I remember I had a bad time trying to disable UEFI on the BIOS of
>>>>>> those servers. In my opinion, the firmware in that model ridden with
>>>>>> problems. In the end, I installed with UEFI (You will need a
>>>>>> /boot/efi partition)
>>>>>>
>>>>>> Once installed, I have not had any issues with them.
>>>>>>
>>>>>> Eduardo Mayoral Jimeno (emayo...@arsys.es)
>>>>>> Administrador de sistemas. Departamento de Plataformas. Arsys
>>>>>> internet.
>>>>>> +34 941 620 145 ext. 5153
>>>>>> On 24/10/17 09:57, Jon bae wrote:
>>>>>>
>>>>>>> Hello everybody,
>>>>>>> I would like to install oVirt Node on a IBM Machine, but after the
>>>>>>> installation it can not boot. I get the message:
>>>>>>>
>>>>>>> "/boot/efi/..." file not found
>>>>>>>
>>>>>>> I try many different things like turn of uefi options in bios etc.
>>>>>>> but with no effect.
>>>>>>>
>>>>>>> Now I figure out that when I install full CentOS 7.3 from live DVD
>>>>>>> it just boot normal.
>>>>>>>
>>>>>>> Is there any workaround to get this to work?
>>>>>>>
>>>>>>> Regards
>>>>>>>
>>>>>>> Jonathan
>>>>>>>
>>>>>>>
>>>>>>> ___
>>>>>>> Users mailing list
>>>>>>> Users@ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>
>>>>>>
>>>>>>
>>>>>> ___
>>>>>> Users mailing list
>>>>>> Users@ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>> ___
>>>>> Users mailing list
>>>>> Users@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cockpit oVirt support

2017-10-18 Thread Ryan Barry
This looks great, guys. Congrats!

Does this also work with plain libvirt?

On Wed, Oct 18, 2017 at 3:24 AM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

> Hi all,
> I’m happy to announce that we finally finished initial contribution of
> oVirt specific support into the Cockpit management platform
> See below for more details
>
> There are only limited amount of operations you can do at the moment, but
> it may already be interesting for troubleshooting and simple admin actions
> where you don’t want to launch the full blown webadmin UI
>
> Worth noting that if you were ever intimidated by the complexity of the
> GWT UI of oVirt portals and it held you back from contributing, please take
> another look!
>
> Thanks,
> michal
>
> Begin forwarded message:
>
> *From: *Marek Libra 
> *Subject: **Re: Cockpit 153 released*
> *Date: *17 October 2017 at 16:02:59 GMT+2
> *To: *Development discussion for the Cockpit Project  fedorahosted.org>
> *Reply-To: *Development discussion for the Cockpit Project <
> cockpit-de...@lists.fedorahosted.org>
>
> Walk-through video for the new "oVirt Machines" page can be found here:
> https://youtu.be/5i-kshT6c5A
>
> On Tue, Oct 17, 2017 at 12:08 PM, Martin Pitt  wrote:
>
>> http://cockpit-project.org/blog/cockpit-153.html
>>
>> Cockpit is the modern Linux admin interface. We release regularly. Here
>> are the release notes from version 153.
>>
>>
>> Add oVirt package
>> -
>>
>> This version introduces the "oVirt Machines" page on Fedora for
>> controlling
>> oVirt virtual machine clusters.  This code was moved into Cockpit as it
>> shares
>> a lot of code with the existing "Machines" page, which manages virtual
>> machines
>> through libvirt.
>>
>> This feature is packaged in cockpit-ovirt and when installed it will
>> replace
>> the "Machines" page.
>>
>> Thanks to Marek Libra for working on this!
>>
>> Screenshot:
>>
>> http://cockpit-project.org/images/ovirt-overview.png
>>
>> Change: https://github.com/cockpit-project/cockpit/pull/7139
>>
>>
>> Packaging cleanup
>> -
>>
>> This release fixes a lot of small packaging issues that were spotted by
>> rpmlint/lintian.
>>
>> Get it
>> --
>>
>> You can get Cockpit here:
>>
>> http://cockpit-project.org/running.html
>>
>> Cockpit 153 is available in Fedora 27:
>>
>> https://bodhi.fedoraproject.org/updates/cockpit-153-1.fc27
>>
>> Or download the tarball here:
>>
>> https://github.com/cockpit-project/cockpit/releases/tag/153
>>
>>
>> Take care,
>>
>> Martin Pitt
>>
>> ___
>> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
>> To unsubscribe send an email to cockpit-devel-le...@lists.fedo
>> rahosted.org
>>
>>
>
>
> --
> Marek Libra
>
> senior software engineer
> Red Hat Czech
>
> <https://www.redhat.com/>
> ___
> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
> To unsubscribe send an email to cockpit-devel-le...@lists.fedorahosted.org
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt node with SHE on FC

2017-09-14 Thread Ryan Barry
On Mon, Sep 11, 2017 at 10:39 AM, Gianluca Cecchi  wrote:

> On Mon, Sep 11, 2017 at 4:13 PM, Simone Tiraboschi 
> wrote:
> Are the ovirt repos already setup when I have installed from the iso or do
> I have to install ovirt-release-xx rpm?
>

They're already ready to go, so the engine will pick up any updates when
they're available.


>
> I'm exploring situations where NGN could be suitable better than full
> CentOS OS for hypervisors and I would like to know clearly advantages and
> limits.
>

The primary advantage is that it's a system which comes ready for oVirt,
and the entire thing is updated in one shot, with rollback capability. So
you can "yum upgrade" from 4.0.3 (for example) to 4.1.5 in one command, and
go back to the old version if something doesn't work as expected.

As of 4.1, we also reinstall any vendor tooling you've installed with yum,
so packages "stick" across images. If all you want is a hypervisor, and you
don't need the flexibility of a full CentOS host, Node is probably a good
choice.
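
In practice the cycle looks roughly like this (layer names below are examples
from 'imgbase layout'):

    yum update            # pulls the new ovirt-node-ng image as a single update
    imgbase layout        # the new layer appears alongside the old one
    # if the new layer misbehaves, boot the previous entry from the grub menu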


>
> Thanks in advance
>
>>
>> On Sat, Sep 9, 2017 at 8:25 PM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> Hello,
>>> it is not clear to me the exact workflow in case of testing what in
>>> object.
>>>
>>> For sure first step is to install base node from iso.
>>> NGN has no more read only filesystem.
>>> Does this mean that I have to set up eg multipath for FC LUNs (engine
>>> and data domain) before running the engine setup via cockpit?
>>> Or when I specify FC in type of storage domain it would ask and setup
>>> automatically multipath for me?
>>>
>>
>> No, it doesn't: HBA and mutipath are supposed to be directly configured
>> before running hosted-engine-setup.
>> It's exactly the same as for vlan and bonding.
>>
>
> Ah, ok.
> So I can also customize my multipath.conf file on oVirt Node as desired,
> if storage vendor requires it, using
>
> # VDSM REVISION 1.3
> # VDSM PRIVATE
>
> as I do in regular CentOS 7, correct?
>

Yep, you can configure it exactly like CentOS 7.

The VDSM header is not required AFAIK.


>
> And if so, what is the mapping of "1.3" in VDSM REVISION line above?
> I didn't find reference in oVirt docs and also in RHEV kb I only found
> this:
> https://access.redhat.com/solutions/43458
>

The idea of persistence is gone in oVirt Node 4.x, so you don't need to
worry about any of this.


> that doesn't clarify much in my opinion and seems to cover version 4.x in
> the summary of knowledge base article, but then I don't seem to find  its
> real application...
>
> Thanks,
> Gianluca
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] V4.1 questions

2017-09-14 Thread Ryan Barry
On Thu, Sep 14, 2017 at 8:36 AM, Colin Coe  wrote:

> Hi all
>
> We're in the process of upgrading from RHEV 3.5 to 4.1.5 and I have a
> couple of questions.
>
> RHV 4.1.5 takes between 10-15 minutes to become usable after a service
> restart, is this expected or is something wrong with our environment?
>

10-15 minutes seems long, but it depends on the database. How much
historical data is present?


>
> The webUI seems to timeout and go back to the logon page after a very
> short time.  Can this behavior be changed?
>
> Lastly, the RH318 course book states that to upgrade a RHEV-H node via
> RHEV-M, you need to install the "rhev-hypervisor7" package on the RHEV-M
> node.  Is this still correct?
>

Node/RHVH for 4.x now upgrades straight from yum, so nothing is needed on
the engine :)


>
> Thanks
>
> CC
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.comM: +1-651-815-9306 IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] node ng upgrade failed

2017-07-10 Thread Ryan Barry
Ok, so Python may be confused here.

As one final question, what about:

lvm lvs

?

On Mon, Jul 10, 2017 at 8:10 AM, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> cat /etc/fstab
>
> #
>
> # /etc/fstab
>
> # Created by anaconda on Wed Dec 14 15:19:56 2016
>
> #
>
> # Accessible filesystems, by reference, are maintained under '/dev/disk'
>
> # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
>
> #
>
> /dev/onn_cs-kvm-001/ovirt-node-ng-4.1.2-0.20170523.0+1 / xfs
> defaults,discard 0 0
>
> UUID=7899e80f-5066-4eca-a7d6-afa52a512039 /boot   ext4
> defaults1 2
>
> UUID=42B1-C1A3  /boot/efi   vfat
> umask=0077,shortname=winnt 0 0
>
> /dev/mapper/onn_cs--kvm--001-var /var xfs defaults,discard 0 0
>
> /dev/mapper/onn_cs--kvm--001-swap swapswap
> defaults0 0
>
>
>
> mount
>
> sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
>
> proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
>
> devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=
> 49382320k,nr_inodes=12345580,mode=755)
>
> securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,
> relatime)
>
> tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
>
> devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,
> seclabel,gid=5,mode=620,ptmxmode=000)
>
> tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
>
> tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,
> seclabel,mode=755)
>
> cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,
> relatime,xattr,release_agent=/usr/lib/systemd/systemd-
> cgroups-agent,name=systemd)
>
> pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
>
> efivarfs on /sys/firmware/efi/efivars type efivarfs
> (rw,nosuid,nodev,noexec,relatime)
>
> cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,
> relatime,blkio)
>
> cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,
> relatime,cpuset)
>
> cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup
> (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
>
> cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,
> relatime,cpuacct,cpu)
>
> cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,
> relatime,devices)
>
> cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,
> relatime,memory)
>
> cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,
> relatime,freezer)
>
> cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,
> relatime,pids)
>
> cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,
> relatime,hugetlb)
>
> cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,
> relatime,perf_event)
>
> configfs on /sys/kernel/config type configfs (rw,relatime)
>
> /dev/mapper/onn_cs--kvm--001-ovirt--node--ng--4.1.2--0.20170523.0+1 on /
> type xfs (rw,relatime,seclabel,attr2,inode64,logbsize=64k,sunit=
> 128,swidth=128,noquota)
>
> rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>
> selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
>
> systemd-1 on /proc/sys/fs/binfmt_misc type autofs
> (rw,relatime,fd=34,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
>
> debugfs on /sys/kernel/debug type debugfs (rw,relatime)
>
> mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
>
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
>
> nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
>
> /dev/mapper/3600605b003af3f301666cedd1935887c2 on /boot type ext4
> (rw,relatime,seclabel,data=ordered)
>
> /dev/mapper/onn_cs--kvm--001-var on /var type xfs
> (rw,relatime,seclabel,attr2,discard,inode64,logbsize=64k,
> sunit=128,swidth=128,noquota)
>
> /dev/mapper/3600605b003af3f301666cedd1935887c1 on /boot/efi type vfat
> (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=
> ascii,shortname=winnt,errors=remount-ro)
>
> tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,
> seclabel,size=9881988k,mode=700)
>
>
>
>
>
> *Von:* Ryan Barry [mailto:rba...@redhat.com]
> *Gesendet:* Montag, 10. Juli 2017 14:09
>
> *An:* Grundmann, Christian 
> *Cc:* Yuval Turgeman ; users@ovirt.org
> *Betreff:* Re: [ovirt-users] node ng upgrade failed
>
>
>
> What does `mount` look like? I'm wondering whether /var/log is already a
> partition/in fstab or whether os.path.ismount is confused here.
>
>
>
> On Mon, Jul 10, 2017 at 7:48 AM, Grundmann, Christian <
> christian.gr

Re: [ovirt-users] node ng upgrade failed

2017-07-10 Thread Ryan Barry
What does `mount` look like? I'm wondering whether /var/log is already a
partition/in fstab or whether os.path.ismount is confused here.
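
(To see what Python itself thinks, independent of imgbased -- just an illustration,
not the imgbased code:)

mountpoint /var/log                                          # util-linux: "is a mountpoint" or not
python -c 'import os; print(os.path.ismount("/var/log"))'    # the same test imgbased relies on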

On Mon, Jul 10, 2017 at 7:48 AM, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> Hi,
>
> no I did no custom partitioning I used the autopartition from installer
>
> the upgrade did work until some version (don’t know which)
>
>
>
> here my nodectl check output
>
>
>
> nodectl check
>
> Status: OK
>
> Bootloader ... OK
>
>   Layer boot entries ... OK
>
>   Valid boot entries ... OK
>
> Mount points ... OK
>
>   Separate /var ... OK
>
>   Discard is used ... OK
>
> Basic storage ... OK
>
>   Initialized VG ... OK
>
>   Initialized Thin Pool ... OK
>
>   Initialized LVs ... OK
>
> Thin storage ... OK
>
>   Checking available space in thinpool ... OK
>
>   Checking thinpool auto-extend ... OK
>
> vdsmd ... OK
>
>
>
> Thx Christian
>
>
>
>
>
> *Von:* Ryan Barry [mailto:rba...@redhat.com]
> *Gesendet:* Montag, 10. Juli 2017 13:39
> *An:* Grundmann, Christian 
> *Cc:* Yuval Turgeman ; users@ovirt.org
>
> *Betreff:* Re: [ovirt-users] node ng upgrade failed
>
>
>
> Christian -
>
> It looks like perhaps you used custom partitioning. That's ok -- we
> support that. We check for the existence of partitions here:
>
> https://gerrit.ovirt.org/gitweb?p=imgbased.git;a=blob;
> f=src/imgbased/plugins/osupdater.py;h=032d7e95d475b1085dab7c3f776873
> f6d6d4a280;hb=HEAD#l180
>
> It's possible that os.path.ismount doesn't think /var/log is a mount for
> some reason. We'll try to reproduce. But can you provide a little more
> information about your environment? Did you use autopartitioning or custom
> partitioning?
>
>
>
> On Mon, Jul 10, 2017 at 7:16 AM, Grundmann, Christian <
> christian.grundm...@fabasoft.com> wrote:
>
> attached
>
>
>
> *Von:* Yuval Turgeman [mailto:yuv...@redhat.com]
> *Gesendet:* Montag, 10. Juli 2017 13:10
> *An:* Grundmann, Christian 
> *Cc:* users@ovirt.org
> *Betreff:* Re: [ovirt-users] node ng upgrade failed
>
>
>
> Hi,
>
>
>
> Can you please attach /tmp/imgbased.log ?
>
>
>
> Thanks,
>
> Yuval.
>
>
>
>
>
> On Mon, Jul 10, 2017 at 11:27 AM, Grundmann, Christian <
> christian.grundm...@fabasoft.com> wrote:
>
> Hi,
>
> I tried to update to node ng 4.1.3 (from 4.1.1) which failed
>
>
>
> Jul 10 10:10:49 imgbased: 2017-07-10 10:10:49,986 [INFO] Extracting image
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.1.3-0.
> 20170709.0.el7.squashfs.img'
>
> Jul 10 10:10:50 imgbased: 2017-07-10 10:10:50,816 [INFO] Starting base
> creation
>
> Jul 10 10:10:50 imgbased: 2017-07-10 10:10:50,816 [INFO] New base will be:
> ovirt-node-ng-4.1.3-0.20170709.0
>
> Jul 10 10:10:51 imgbased: 2017-07-10 10:10:51,539 [INFO] New LV is:  'onn_cs-kvm-001/ovirt-node-ng-4.1.3-0.20170709.0' />
>
> Jul 10 10:10:53 imgbased: 2017-07-10 10:10:53,070 [INFO] Creating new
> filesystem on base
>
> Jul 10 10:10:53 imgbased: 2017-07-10 10:10:53,412 [INFO] Writing tree to
> base
>
> Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,344 [INFO] Adding a new
> layer after 
>
> Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,344 [INFO] Adding a new
> layer after 
>
> Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,345 [INFO] New layer will
> be: 
>
> Jul 10 10:12:52 imgbased: 2017-07-10 10:12:52,714 [ERROR] Failed to
> migrate etc#012Traceback (most recent call last):#012  File
> "/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/
> imgbased/plugins/osupdater.py", line 119, in on_new_layer#012
> check_nist_layout(imgbase, new_lv)#012  File "/tmp/tmp.EMsKrrbmZs/usr/lib/
> python2.7/site-packages/imgbased/plugins/osupdater.py", line 173, in
> check_nist_layout#012v.create(t, paths[t]["size"],
> paths[t]["attach"])#012  File "/tmp/tmp.EMsKrrbmZs/usr/lib/
> python2.7/site-packages/imgbased/volume.py", line 48, in create#012
> "Path is already a volume: %s" % where#012AssertionError: Path is already a
> volume: /var/log
>
> Jul 10 10:12:53 python: detected unhandled Python exception in
> '/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/imgbased/__main__.py'
>
> Jul 10 10:12:53 abrt-server: Executable '/tmp/tmp.EMsKrrbmZs/usr/lib/
> python2.7/site-packages/imgbased/__main__.py' doesn't belong to any
> package and ProcessUnpackaged is set to 'no'
>
> Jul 10 10:15:10 imgbased: 2017-07-10 10:15:10,079 [INFO] Extracting image
> '/usr/share/

Re: [ovirt-users] node ng upgrade failed

2017-07-10 Thread Ryan Barry
Christian -

It looks like perhaps you used custom partitioning. That's ok -- we support
that. We check for the existence of partitions here:

https://gerrit.ovirt.org/gitweb?p=imgbased.git;a=blob;f=src/imgbased/plugins/osupdater.py;h=032d7e95d475b1085dab7c3f776873f6d6d4a280;hb=HEAD#l180

It's possible that os.path.ismount doesn't think /var/log is a mount for
some reason. We'll try to reproduce. But can you provide a little more
information about your environment? Did you use autopartitioning or custom
partitioning?

On Mon, Jul 10, 2017 at 7:16 AM, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> attached
>
>
>
> *Von:* Yuval Turgeman [mailto:yuv...@redhat.com]
> *Gesendet:* Montag, 10. Juli 2017 13:10
> *An:* Grundmann, Christian 
> *Cc:* users@ovirt.org
> *Betreff:* Re: [ovirt-users] node ng upgrade failed
>
>
>
> Hi,
>
>
>
> Can you please attach /tmp/imgbased.log ?
>
>
>
> Thanks,
>
> Yuval.
>
>
>
>
>
> On Mon, Jul 10, 2017 at 11:27 AM, Grundmann, Christian <
> christian.grundm...@fabasoft.com> wrote:
>
> Hi,
>
> I tried to update to node ng 4.1.3 (from 4.1.1) which failed
>
>
>
> Jul 10 10:10:49 imgbased: 2017-07-10 10:10:49,986 [INFO] Extracting image
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.1.3-0.
> 20170709.0.el7.squashfs.img'
>
> Jul 10 10:10:50 imgbased: 2017-07-10 10:10:50,816 [INFO] Starting base
> creation
>
> Jul 10 10:10:50 imgbased: 2017-07-10 10:10:50,816 [INFO] New base will be:
> ovirt-node-ng-4.1.3-0.20170709.0
>
> Jul 10 10:10:51 imgbased: 2017-07-10 10:10:51,539 [INFO] New LV is:  'onn_cs-kvm-001/ovirt-node-ng-4.1.3-0.20170709.0' />
>
> Jul 10 10:10:53 imgbased: 2017-07-10 10:10:53,070 [INFO] Creating new
> filesystem on base
>
> Jul 10 10:10:53 imgbased: 2017-07-10 10:10:53,412 [INFO] Writing tree to
> base
>
> Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,344 [INFO] Adding a new
> layer after 
>
> Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,344 [INFO] Adding a new
> layer after 
>
> Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,345 [INFO] New layer will
> be: 
>
> Jul 10 10:12:52 imgbased: 2017-07-10 10:12:52,714 [ERROR] Failed to
> migrate etc#012Traceback (most recent call last):#012  File
> "/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/
> imgbased/plugins/osupdater.py", line 119, in on_new_layer#012
> check_nist_layout(imgbase, new_lv)#012  File "/tmp/tmp.EMsKrrbmZs/usr/lib/
> python2.7/site-packages/imgbased/plugins/osupdater.py", line 173, in
> check_nist_layout#012v.create(t, paths[t]["size"],
> paths[t]["attach"])#012  File "/tmp/tmp.EMsKrrbmZs/usr/lib/
> python2.7/site-packages/imgbased/volume.py", line 48, in create#012
> "Path is already a volume: %s" % where#012AssertionError: Path is already a
> volume: /var/log
>
> Jul 10 10:12:53 python: detected unhandled Python exception in
> '/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/imgbased/__main__.py'
>
> Jul 10 10:12:53 abrt-server: Executable '/tmp/tmp.EMsKrrbmZs/usr/lib/
> python2.7/site-packages/imgbased/__main__.py' doesn't belong to any
> package and ProcessUnpackaged is set to 'no'
>
> Jul 10 10:15:10 imgbased: 2017-07-10 10:15:10,079 [INFO] Extracting image
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.1.3-0.
> 20170622.0.el7.squashfs.img'
>
> Jul 10 10:15:11 imgbased: 2017-07-10 10:15:11,226 [INFO] Starting base
> creation
>
> Jul 10 10:15:11 imgbased: 2017-07-10 10:15:11,226 [INFO] New base will be:
> ovirt-node-ng-4.1.3-0.20170622.0
>
> Jul 10 10:15:11 python: detected unhandled Python exception in
> '/tmp/tmp.pqf2qhifaY/usr/lib/python2.7/site-packages/imgbased/__main__.py'
>
> Jul 10 10:15:12 abrt-server: Executable '/tmp/tmp.pqf2qhifaY/usr/lib/
> python2.7/site-packages/imgbased/__main__.py' doesn't belong to any
> package and ProcessUnpackaged is set to 'no'
>
>
>
>
>
> How can I fix it?
>
>
>
> Thx Christian
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.com    M: +1-651-815-9306    IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 4.1 appliance install fails due to missing GPG key

2017-05-21 Thread Ryan Barry
There was a bug filed about a very similar problem when using Cockpit:
https://bugzilla.redhat.com/show_bug.cgi?id=1421249

This looks like a problem with ovirt-hosted-engine-setup not accepting your
response. Can you attachs logs from /var/log/ovirt-hosted-engine-setup ?

As a workaround, you can "yum install ovirt-engine-appliance", which will
prompt you to accept the GPG key even if you don't actually install it,
though the root cause is hard to determine without logs.
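
If the prompt keeps looping, importing the key by hand before re-running setup should
also get past it -- a sketch, assuming the key under /etc/pki/rpm-gpg is the one you
want (check the actual filename first; the one below is a placeholder):

ls /etc/pki/rpm-gpg/                                        # find the oVirt key file
rpm --import /etc/pki/rpm-gpg/<ovirt-key>                   # placeholder filename
rpm -q gpg-pubkey --qf '%{name}-%{version} %{summary}\n'    # confirm rpm now knows the key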

On Sat, May 20, 2017 at 3:23 PM, Michael Hall  wrote:

> Note: screenshot attached.
>
> I installed an oVirt node 4.1, then attempted to install oVirt engine on
> that using ovirt-hosted-engine-setup The installer downloads the appliance
> RPM, but installation fails due to missing GPG key. Installation fails
> whether I answer yes or no to "Use this key?" question. There is what
> appears to be an oVirt key in /etc/pki/rpm-gpg.
>
>
> Have I missed something?
>
> Thanks, Mike
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.com    M: +1-651-815-9306    IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Internet access for oVirt Nodes?

2017-05-16 Thread Ryan Barry
On Mon, May 15, 2017 at 1:09 PM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> thanks, i guess configuring repositories in oVirt Node can only be
> achieved when using Foreman/Satellite integration, is that correct? i've
> just started to use oVirt Node and i'm beginning to realize that things are
> a _little_ bit different compared to a standard linux host.
>

Well, yes/no. We whitelist which packages are able to be updated from the
oVirt repositories and disable the base centos repositories, but you can
easily change "enabled=0" to "enabled=1" in any of them, or add your own
repos just like you would with CentOS.

In general, I'd recommend not including updates for any packages which are
part of Node itself, but that decision is yours to make.
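
As a sketch (file names here are examples -- look at what's actually under
/etc/yum.repos.d/ on the node):

# re-enable a shipped repo
sed -i 's/^enabled=0/enabled=1/' /etc/yum.repos.d/CentOS-Base.repo
# or drop in your own
cat > /etc/yum.repos.d/local-tools.repo <<'EOF'
[local-tools]
name=Local tools
baseurl=http://repo.example.com/el7/tools/
enabled=1
gpgcheck=0
EOF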


>
> this brings me to another update related question:
> right now oVirt Nodes in my test environment can connect to the internet
> and there recently was an update available which i applied through the
> engine gui, which seemed to finish successfully. i remember wondering how i
> could check what actually changend, there was eg. no kernel change IIRC.
> today i discovered that on both updated hosts /tmp/imgbased.log exists and
> ends in an error:
>

Node is still an A/B image, so you'd need to reboot in order to see a new
kernel, if it's part of a new image.
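
(Quick way to tell, once it's rebooted:)

imgbase w        # which layer you are booted into right now
imgbase layout   # all bases/layers present on the node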


>
> subprocess.CalledProcessError: Command '['lvcreate', '--thin',
> '--virtualsize', u'8506048512B', '--name', 
> 'ovirt-node-ng-4.1.1.1-0.20170406.0',
> u'HostVG/pool00']' returned non-zero exit status 5
>
> i have to mention i manually partitioned my oVirt Node host when i
> installed it from the installer ISO (because i want to use software raid).
> i used partitioning recommendations from https://bugzilla.redhat.com/sh
> ow_bug.cgi?id=1369874 (doubling size recommendations).
>

As long as you're thinly provisioned, this should update normally, though
I have to say that I haven't tried software RAID.


>
> did my oVirt Node update complete successfully?
> how can i check this?
> why was there an lvcreate error?
>

I'll try to reproduce this, but attempting the lvcreate by hand may give
some usable debugging information.
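
Something along these lines, lifted straight from the failing call in imgbased.log
(the size and names are the ones from your log; substitute your own):

lvm lvs HostVG                                  # what already exists in the pool?
lvcreate --thin --virtualsize 8506048512B \
  --name ovirt-node-ng-4.1.1.1-0.20170406.0 HostVG/pool00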


>
> 'imgbase layout' says:
> ovirt-node-ng-4.1.1.1-0.20170406.0
>  +- ovirt-node-ng-4.1.1.1-0.20170406.0+1
>

If 'imgbase layout' only shows these, then it's likely that it didn't
update. Node uses LVM directly, so "lvm lvs" may show a new device, but
from the command above, I'm guessing it wasn't able to create it. I'd
suspect that it wasn't able to create it because it's the same version, and
LVM sees a duplicate LV. Can you attach your engine log (or the yum log
from the host) so we can see what it pulled?


>
> kernel version is:
> 3.10.0-514.10.2.el7.x86_64
>
> thanks a lot again
>




-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.com    M: +1-651-815-9306    IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Internet access for oVirt Nodes?

2017-05-15 Thread Ryan Barry
On Mon, May 15, 2017 at 5:00 AM,  wrote:

>
> hi,
>
> do hypervisors, that are running oVirt Node (not standard CentOS/RHEL),
> need internet access for updates or can they be in a private, non routed
> network (and updates happen via engine)? it seems the latter is the
> case, but i want to be sure
>
> thx
> matthias


Engine isn't very particular about updating in this case. As long as any
repository is configured where 'yum check-update
ovirt-node-ng-image-update' is true, upgrades from engine will work.

In general, otopi's miniyum is a bit smarter than base yum, so
'check-update ...' is not always a reliable mechanism to verify this, but
yes, a local repo in a non-routed network which presents the update will
show an update from engine.
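
A minimal sketch of such a repo file on the host (the URL is an example for an
internal mirror):

cat > /etc/yum.repos.d/ovirt-local.repo <<'EOF'
[ovirt-local]
name=Local oVirt Node mirror
baseurl=http://mirror.example.internal/ovirt-node/el7/
enabled=1
gpgcheck=0
EOF
yum check-update ovirt-node-ng-image-update    # engine effectively looks for this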
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade hypervisor to 4.1.1.1

2017-04-10 Thread Ryan Barry
Hey Eric -

It's possible that there are some silent denials in some policy. Can you
please try "semodule -DB" to see if anything pops up?

I'll also try to reproduce this...
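
For completeness, the sequence I have in mind is just the standard SELinux tooling
(nothing oVirt-specific):

semodule -DB                   # rebuild policy with dontaudit rules disabled
systemctl restart virtlogd     # or just retry starting the VM
ausearch -m avc -ts recent     # any denials that were previously silent?
semodule -B                    # turn dontaudit back on when done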

On Mon, Apr 10, 2017 at 5:11 AM, eric stam  wrote:

> We are on the good way.
> I first put SELinux in permissive mode, and now it is possible to start
> virtual machine's.
>
>
> I will test the "restorecon" next.
>
> Regards,
> Eric
>
> 2017-04-10 14:00 GMT+02:00 Yuval Turgeman :
>
>> restorecon on virtlogd.conf would be enough
>>
>> On Apr 10, 2017 2:51 PM, "Misak Khachatryan"  wrote:
>>
>>> Is it node setup? Today i tried to upgrade my one node cluster, after
>>> that VM's fail to start, it turns out that selinux prevents virtlogd to
>>> start.
>>>
>>> ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
>>> semodule -i my-virtlogd.pp
>>> /sbin/restorecon -v /etc/libvirt/virtlogd.conf
>>>
>>> fixed things for me, YMMV.
>>>
>>>
>>> Best regards,
>>> Misak Khachatryan
>>>
>>> On Mon, Apr 10, 2017 at 2:03 PM, Sandro Bonazzola 
>>> wrote:
>>>
>>>> Can you please provide a full sos report from that host?
>>>>
>>>> On Sun, Apr 9, 2017 at 8:38 PM, Sandro Bonazzola 
>>>> wrote:
>>>>
>>>>> Adding node team.
>>>>>
>>>>> On 09/Apr/2017 15:43, "eric stam"  wrote:
>>>>>
>>>>> Yesterday I executed an upgrade on my hypervisor to version 4.1.1.1
>>>>> After the upgrade, it is impossible to start a virtual machine on it.
>>>>> The messages I found: Failed to connect socket to
>>>>> '/var/run/libvirt/virtlogd-sock': Connection refused
>>>>>
>>>>> [root@vm-1 log]# hosted-engine --vm-status | grep -i engine
>>>>>
>>>>> Engine status  : {"reason": "bad vm status",
>>>>> "health": "bad", "vm": "down", "detail": "down"}
>>>>>
>>>>> state=EngineUnexpectedlyDown
>>>>>
>>>>> The redhead version: CentOS Linux release 7.3.1611 (Core)
>>>>>
>>>>> Is this a known problem?
>>>>>
>>>>> Regards, Eric
>>>>>
>>>>>
>>>>>
>>>>> ___
>>>>> Users mailing list
>>>>> Users@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> SANDRO BONAZZOLA
>>>>
>>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>>>
>>>> Red Hat EMEA <https://www.redhat.com/>
>>>> <https://red.ht/sig>
>>>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>
>
> --
> Gr. Eric Stam
> *Mob*.: 06-50278119
>



-- 

RYAN BARRY

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR

Red Hat NA <https://www.redhat.com/>

rba...@redhat.com    M: +1-651-815-9306    IM: rbarry
<https://red.ht/sig>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] node-ng update failed from ovirt-node-ng-4.1.0-0 to ovirt-node-ng-image-4.1.0-1, and NM + iscsi boo issue

2017-02-06 Thread Ryan Barry
Hey Sergey -

If you check "lvs" and ensure that there's not actually a new LV from the
update, you can cleanly 'rpm -e ovirt-node-ng-image-update', and be ok
without redeploying.
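
Roughly (assuming the package name matches what yum shows as installed):

lvm lvs | grep ovirt-node-ng          # no 4.1.0-1 LV means the new image never landed
rpm -e ovirt-node-ng-image-update     # drop the rpmdb entry so the update is offered again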

Unfortunately, it's hard to tell from the logs (and '--justdb' hanging)
what's happening here, but I'll try to reproduce.

NetworkManager disablement should "stick" across upgrades, but it's
possible that iscsi roots are doing something here. I'll check for a dracut
flag, also...

On Mon, Feb 6, 2017 at 1:14 PM, Sandro Bonazzola 
wrote:

> Adding Douglas and Ryan
>
> On 06/Feb/2017 13:32, "Sergey Kulikov"  wrote:
>
>>
>> 1) I've updated from 4.0.6 to 4.1.0 (on Feb 01 node-ng was at version
>> 4.1.0-0)
>> After some time engine alerted, that this node have updates to
>> ovirt-node-ng-image-4.1.0-1,
>> but update from engine timed out, there were hanging processes in ps on
>> this node:
>>
>> root 36309  0.0  0.0 113120  1564 ?Ss   19:04   0:00 bash -c
>> umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t
>> ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm
>> -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C
>> "${MYTMP}" -x &&  "${MYTMP}"/ovirt-host-mgmt DIALOG/dialect=str:machine
>> DIALOG/customization=bool:True
>> root 36339  0.2  0.0 496700 94208 ?S19:04   0:21
>> /bin/python /tmp/ovirt-GCmVusccfe/pythonlib/otopi/__main__.py
>> "BASE/pluginPath=str:/tmp/ovirt-GCmVusccfe/otopi-plugins"
>> APPEND:BASE/pluginGroups=str:ovirt-host-common:ovirt-host-mgmt
>> DIALOG/dialect=str:machine DIALOG/customization=bool:True
>> root 37498  0.0  0.0 113124  1452 ?S19:09   0:00 /bin/sh
>> /var/tmp/rpm-tmp.4UqJ4e 1
>> root 37560  0.0  0.0  0 0 ?S<   21:42   0:00
>> [kworker/21:2H]
>> root 37626  0.0  0.0 174516  5996 ?S19:09   0:00 rpm -Uvh
>> --quiet --justdb /usr/share/imgbased/ovirt-node
>> -ng-image-update-4.1.0-1.el7.centos.noarch.rpm
>>
>> they were hanging forever, I ended up with rebooting the node, no errors
>> in log, it was just hanging at:
>>
>> 2017-02-03 19:09:16 DEBUG otopi.plugins.otopi.dialog.machine
>> dialog.__logString:204 DIALOG:SEND   ***CONFIRM GPG_KEY Confirm use of
>> GPG Key userid=oVirt  hexkeyid=FE590CB7
>> 2017-02-03 19:09:16 DEBUG otopi.plugins.otopi.dialog.machine
>> dialog.__logString:204 DIALOG:SEND   ###
>> 2017-02-03 19:09:16 DEBUG otopi.plugins.otopi.dialog.machine
>> dialog.__logString:204 DIALOG:SEND   ### Please confirm 'GPG_KEY'
>> Confirm use of GPG Key userid=oVirt  hexkeyid=FE590CB7
>> 2017-02-03 19:09:16 DEBUG otopi.plugins.otopi.dialog.machine
>> dialog.__logString:204 DIALOG:SEND   ### Response is CONFIRM
>> GPG_KEY=yes|no or ABORT GPG_KEY
>> 2017-02-03 19:09:16 DEBUG otopi.plugins.otopi.dialog.machine
>> dialog.__logString:204 DIALOG:RECEIVECONFIRM GPG_KEY=yes
>> 2017-02-03 19:09:16 INFO otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.info:80 Yum Status: Running Test Transaction
>> Running Transaction Check
>> 2017-02-03 19:09:16 INFO otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.info:80 Yum Status: Running Transaction
>> 2017-02-03 19:09:16 INFO otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.info:80 Yum install: 1/2: ovirt-node-ng-image-4.1.0-1.el
>> 7.centos.noarch
>> 2017-02-03 19:09:20 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Done: ovirt-node-ng-image-4.1.0-1.el
>> 7.centos.noarch
>> 2017-02-03 19:09:20 INFO otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.info:80 Yum install: 2/2: ovirt-node-ng-image-update-4.1
>> .0-1.el7.centos.noarch
>>
>> now my node have this layout:
>> # imgbase layout
>> ovirt-node-ng-4.1.0-0.20170201.0
>>  +- ovirt-node-ng-4.1.0-0.20170201.0+1
>> (so update failed)
>> but 4.1.0-1 rpms are marked as "installed" and yum can't find any
>> updates, can I rollback to base layout without installed  4.1.0-1 rms ?
>> imgbase rollback needs at least 2 layers over base.
>>
>> Or maybe the only way is to reinstall this node?
>>
>> 2) And another question, how can I disable NetworkManger permanently, or
>> exclude some interfaces permanently?
>> I've tried to disable NetworkManger by systemctl, but after update from
>> 4.0 to 4.1 it was re-enabled(so it's not persistent between updates).
>> I've an issue with iscsi root and enabled NetworkManger, because NM tries
>> to bring down\up my iscsi interfaces on boot, and sometimes FS remounting RO
>> because of IO errors, I can't put NM_CONTROLLED=no in ifcfg, because
>> ifcfg is generated by dracut at every boot.
>>
>>
>> -
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] Invitation: Next-generation Node package persistence for oVirt 4.1 @ Tue Jan 24, 2017 8am - 9am (MST) (de...@ovirt.org)

2017-01-24 Thread Ryan Barry
Sorry everyone -- accidentally created this with the wrong account, and I
can't start the stream (thanks Youtube).

I'll reschedule for Thursday. New invite to follow

On Thu, Jan 19, 2017 at 6:21 AM,  wrote:

> more details »
> 
> Next-generation Node package persistence for oVirt 4.1
> oVirt Node is a composed hypervisor image which can be used to provision
> virtualization hosts for use with oVirt out of the box, with no additional
> package installation necessary.
>
> oVirt Node is upgraded via yum in an A/B fashion, but the installation of
> a completely new image means that packages which have been installed on a
> previous version of the hypervisor were lost on upgrades, or required
> reinstallation.
>
> With oVIrt 4.1, oVirt Node will cache and reinstall packages installed
> with yum or dnf onto the new image, ensuring that customizations made by
> users or administrators are kept.
>
> The advantages of keeping packages across upgrades:
> - ability to persistently modify oVirt Node with additions for tooling or
> support
> - removes the need to build a brand-new image with a modified kickstart to
> modify oVirt Node
> - simplified management
>
> Session outline:
> - Next-generation oVirt Node overview
> - yum API overview
> - oVIrt Node integration with yum/dnf to persist RPMs across upgrades
>
> Session link:
> https://www.youtube.com/watch?v=VAznsxvZpuk
>
> Feature Page:
> https://www.ovirt.org/develop/release-management/features/
> node/node-next-persistence
> 
>
> *When*
> Tue Jan 24, 2017 8am – 9am Mountain Time - Arizona
>
> *Where*
> https://www.youtube.com/watch?v=VAznsxvZpuk
>
> *Calendar*
> de...@ovirt.org
>
> *Who*
> •
> rba...@redhat.com - organizer
> •
> users@ovirt.org
> •
> de...@ovirt.org
>
>
>
> ___
> Devel mailing list
> de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Black Screen Issue when installing Ovirt Hypervisor bare metal

2017-01-17 Thread Ryan Barry
Hey Jeramy -

Can you boot anaconda with a serial console or SSH enabled to grab logs? It
would be interesting to see what happens here.
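
Something like the following on the installer's kernel command line should do it
(these are the usual el7 Anaconda options; the syslog target is just an example):

inst.sshd console=ttyS0,115200 inst.syslog=192.168.1.10:514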

On Mon, Jan 16, 2017 at 2:43 AM, Sandro Bonazzola 
wrote:

>
>
> On Wed, Jan 11, 2017 at 4:15 PM, Jeramy Johnson 
> wrote:
>
>> Hey Support, Im new to Ovirt and wanted to know if you can help me out
>> for some strange reason when i try to install Ovirt Node Hypervisor on a
>> machine (baremetal) using the ISO,
>
>
> Hi, welcome aboard!
> Which ISO are you using for the installation?
>
>
> I get a black screen after I select Install Ovirt Hypervisor and nothing
>> happens. Can someone help assist? The machine i'm using for deployment is
>> HP 280 Business PC, i5 processor, 8gigs memory, 1tb hard drive.
>>
>
> Please note that oVirt is not designed for a single host use case. If you
> need to run VMs on a single host there are other solutions designed for it
> like Kimchi : https://github.com/kimchi-project/kimchi
>
>
>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] RHEV-H 7.2 Beta Install

2016-02-23 Thread Ryan Barry
Hi Christopher --

You can get to a console by pressing F2 (I think this is blocked while the
progress bar is moving, but should be available after the exception, and
it's available at any point the installer is interactive).

This is a new issue to me.

Can you please file a bug report? An sosreport may be helpful (this works
normally from the rescue shell at )

If possible, it would be best to get debug logs.

To do that, press F2 immediately after entering the installer, and run:

python -m ovirt.node.installer --debug

Logs will be available at /tmp/ovirt-node.debug.log or
/var/log/ovirt-node.debug.log, depending on how far through the installer
it gets.

On Tue, Feb 23, 2016 at 9:32 AM,  wrote:

> I know this is the ovirt-list, but I've had great success with help
> here as I have been running a couple of ovirt instances for some time
> (as part of a lab/testing).
>
> In any case, I'm trying to install the RHEV-H beta (7.2 latest ISO),
> and for whatever reason, my install gets to 22% (right after
> partitioning up the local drive, I believe) and fails.
>
> I have no idea how to troubleshoot this.  I've tried changing install
> options (i've been installing via PXE) to no avail, and I notice that
> there don't appear to be any virtual terminals to switch to look at
> things and/or see where things failed.
>
> The errors that I get say something along the lines of:
>
> unexpected EOF while looking for matching ''
>
> I'll try and get more details, but i've attached a screenshot from the
> console.
>
> Any help on HOW to troubleshoot this, pull logs, etc.. would be most
> appreciated as I'd like to get this moving forward.
>
> Thanks for all of your hard work and the help of the community!
>
> -- Chris
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA cluster

2015-12-02 Thread Ryan Barry
On Wed, Dec 2, 2015 at 5:05 AM,  wrote:

> On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju  wrote:
>
> > I have installed KVM in the nested environment  in ESXi6.x version is
> that
> > recommended ?
> >
>
> I often use KVM over KVM in nested environment but honestly I never tried
> to run KVM over ESXi but I suspect that all of your issues comes from
> there.
>
> It should be fine in ESXi as well, as long as the VMX for that VM has
vhv.enabled=true, and the config for the VM is reloaded.

>
> > apart from Hosted engine is there any other alternate way to configure
> > Engine HA cluster ?
> >
>
> Nothing else from the project. You can use two external VMs in cluster with
> pacemaker but it's completely up to you.
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt node weekly talk

2015-01-21 Thread Ryan Barry

> From: "Fabian Deutsch" 
> To: dougsl...@redhat.com
> Cc: "Tolik Litovsky" , users@ovirt.org
> Sent: Wednesday, 21 January, 2015 8:30:40 PM
> Subject: Re: [ovirt-users] oVirt node weekly talk
>
> - Original Message -
>> On 01/21/2015 10:05 AM, Tolik Litovsky wrote:
>>> Hello Fabian
>>>
>>> Can we move the oVirt node weekly talk to another week day?
>>> Or just a bit earlier ?
>>
>> Both are ok for me.
>
> Let's see: I'd suggest to move it to
>
>  Mondays, 3 p.m. UTC
>
> Does that work for everybody?
>
> Greetings
> fabian

Works for me, though I feel like there's another meeting happening at 
that time; I'd have to check. Being somewhere without daylight savings 
plays tricks on me.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [OVIRT-3.5-TEST-DAY-3] backup/restore/dr + Node

2014-09-17 Thread Ryan Barry

I installed a fresh engine on CentOS 6.5, and 3.5 nodes on two nested VMs.

A significant portion of the day was spent testing basic Node 
functionality to ensure we didn't have any more SELinux denials, install 
problems on EFI, or other issues, after which I moved onto engine.


DR was simply tested by pulling the network cable in the middle of an 
operation, disconnecting both nodes and waiting for storage arbitration 
to succeed, and uncleanly terminating the VM running engine then restarting.


Backup and restore was tested with ovirt-engine-backup (which I haven't 
used before, and is quite nice).
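
For anyone who wants to repeat it, the command (engine-backup, run on the engine
machine) is driven roughly like this -- file names are placeholders:

engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log
engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log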

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Persisting glusterfs configs on an oVirt node

2014-05-28 Thread Ryan Barry

On 05/28/2014 10:23 AM, Fabian Deutsch wrote:

Am Mittwoch, den 28.05.2014, 14:22 + schrieb Simon Barrett:

I just wasn't sure if I was missing something in the configuration to enable 
this.

I'll stick with the workarounds I have for now and see how it goes.

Thanks again.


You are welcome! :)


Simon

-Original Message-
From: Fabian Deutsch [mailto:fdeut...@redhat.com]
Sent: 28 May 2014 15:20
To: Simon Barrett
Cc: Ryan Barry; Doron Fediuck; users@ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node

Am Mittwoch, den 28.05.2014, 14:14 + schrieb Simon Barrett:

I did a "persist /var/lib/glusterd" and things are looking better. The gluster 
config is now still in place after a reboot.

As a workaround to getting glusterd running on boot, I added "service glusterd 
start" to /etc/rc.local and ran persist /etc/rc.local. It appears to be working but 
feels like a bit of a hack.

Does anyone have any other suggestions as to the correct way to do this?


Hey Simon,

I was also investigating both the steps you did. And was also about to 
recommend them :) They are more a workaround.

We basically need some patches to change the defaults on Node, to let gluster 
work out of the box.

This would include persisting the correct paths and enabling glusterd if 
enabled.

- fabian


Thanks,

Simon

-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On
Behalf Of Simon Barrett
Sent: 28 May 2014 14:12
To: Ryan Barry; Fabian Deutsch; Doron Fediuck
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt
node

Thanks for the replies.

I cannot get glusterd to start on boot and I lose all gluster config every 
reboot.

The following shows what I did on the node to start glusterd, create a volume 
etc, followed by the state of the node after a reboot.


[root@ovirt_node]# service glusterd status glusterd is stopped

[root@ovirt_node]# chkconfig --list glusterd
glusterd0:off   1:off   2:off   3:off   4:off   5:off   6:off

[root@ovirt_node]# service glusterd start Starting glusterd:[  OK  ]

gluster> volume create vmstore 10.22.8.46:/data/glusterfs/vmstore
volume create: vmstore: success: please start the volume to access
data

gluster> vol start vmstore
volume start: vmstore: success

gluster> vol info
Volume Name: vmstore
Type: Distribute
Volume ID: 5bd01043-1352-4014-88ca-e632e264d088
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.22.8.46:/data/glusterfs/vmstore

[root@ovirt_node]# ls -1 /var/lib/glusterd/vols/vmstore/ bricks node_state.info 
trusted-vmstore-fuse.vol cksum
rbstate
vmstore.10.22.8.46.data-glusterfs-vmstore.vol
info
run
vmstore-fuse.vol

[root@ovirt_node]# grep gluster /etc/rwtab.d/*
/etc/rwtab.d/ovirt:files   /var/lib/glusterd

[root@ovirt_node]# chkconfig glusterd on [root@ovirt_node]# chkconfig --list 
glusterd
glusterd0:off   1:off   2:on3:on4:on5:on6:off



  I then reboot the node and see the following:


[root@ovirt_node]# service glusterd status glusterd is stopped

[root@ovirt_node]# chkconfig --list glusterd
glusterd0:off   1:off   2:off   3:off   4:off   5:off   6:off

[root@ovirt_node]# ls -l /var/lib/glusterd/vols/ total 0
I believe that we intentionally do not start glusterd, since glusterfsd 
is all that's required for the engine to manage volumes, but I could be 
mis-remembering this, and I don't have any real objection to starting 
glusterd at boot unless somebody speaks up against it.


No more gluster volume configuration files.

I've taken a look through 
http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes
 but I'm unsure what needs to be done to persist this configuration.

To get glusterd to start on boot, do I need to manually persist /etc/rc* files?

I see "files /var/lib/glusterd" mentioned in /etc/rwtab.d/ovirt. Is this a list 
of the files/dirs that should be persisted automatically? If so, is it recursive and 
should it include everything in /var/lib/glusterd/vols?
rwtab is a mechanism from readonly-root, which walks through the 
filesystem and says "copy these files to 
/var/lib/stateless/writable/${path} and bind mount them back in their 
original location". So you can write files there, but they don't survive 
reboots on Node.


Since Node is booting from the same ramdisk every time (essentially the 
ISO copied to the hard drive), this mechanism doesn't really work for 
us, and persistence is a different mechanism entirely.
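
(On Node, the supported way to keep a path across reboots is the persist tooling
instead, e.g.:)

persist /var/lib/glusterd      # copy to /config and bind-mount back at every boot
unpersist /var/lib/glusterd    # undo it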


TIA for any help with this.

Simon



-Original Message-
From: Ryan Barry [mailto:rba...@redhat.com]
Sent: 27 May 2014 14:01
To: Fabian Deutsch; Doron Fediuck; Simon Barrett
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt
node

On 05/26/2

Re: [ovirt-users] workflow node installation

2014-05-27 Thread Ryan Barry

On 05/26/2014 04:15 AM, Fabian Deutsch wrote:

Am Sonntag, den 25.05.2014, 09:01 -0400 schrieb Doron Fediuck:

Don't we have ovirtmgmt defined by default
during the registration process?


Hey,

IIRC that is done by ovirt-host-deploy during the registration, yes.

- fabian


- Forwarded Message -

From: "Jorick Astrego" 
To: "users" 
Sent: Monday, May 19, 2014 1:11:58 PM
Subject: [ovirt-users] workflow node installation

Hi,

I don't know if I'm doing things wrong but the workflow for node installation
is a bit cumbersome for me.

The things I have to do to get a node up in ovirt engine:

- install node image from pxe
- approve node
- put node into maintenance mode (storage not available)
- Setup Host Networks
- Sync network (otherwise I cannot save the network changes)
- Setup Host Networks again
- Save Network setup
- activate node


This seems problematic to me -- the host networks should be
automatically set when you approve the node joining a given cluster.

Are you setting up the host networks on the engine side, or do you have
to approve, log back into the node, configure networking, etc?

And this I have to do for every node. It could save me some time if I could
setup the networks on approval or engine puts it directly into maintenance
mode without the sync requirement.

Kind regards,

Jorick Astrego
Netbulae B.V.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Persisting glusterfs configs on an oVirt node

2014-05-27 Thread Ryan Barry

On 05/26/2014 04:14 AM, Fabian Deutsch wrote:

Am Sonntag, den 25.05.2014, 08:18 -0400 schrieb Doron Fediuck:


- Original Message -

From: "Simon Barrett" 
To: users@ovirt.org
Sent: Friday, May 23, 2014 11:29:39 AM
Subject: [ovirt-users] Persisting glusterfs configs on an oVirt node



I am working through the setup of oVirt node for a 3.4.1 deployment.



I setup some glusterfs volumes/bricks on oVirt Node Hypervisor release 3.0.4
(1.0.201401291204.el6) and created a storage domain. All was working OK
until I rebooted the node and found that the glusterfs configuration had not
been retained.



Is there something I should be doing to persist any glusterfs configuration
so it survives a node reboot?



Many thanks,



Simon



Hi Simon,
it actually sounds like a bug to me, as node are supposed to support
gluster.

Ryan / Fabian- thoughts?


Hey,

I vaguely remember that we were seeing a bug like this some time ago.
We fixed /var/lib/glusterd to be writable (using tmpfs), but it can
actually be that we need to persist those contents.

But Simon, can you give details which configuration files are missing
and why glusterd is not starting?
Is glusterd starting? I'm getting the impression that it's starting, but 
that it has no configuration. As far as I know, Gluster keeps most of 
the configuration on the brick itself, but it finds brick information in 
/var/lib/glusterd.


The last patch simply opened the firewall, and it's entirely possible 
that we need to persist this. It may be a good idea to just persist the 
entire directory from the get-go, unless we want to try to have a thread 
watching /var/lib/glusterd for relevant files, but then we're stuck 
trying to keep up with what's happening with gluster itself...


Can we


Thanks
fabian


Either way I suggest you take a look in the below link-
http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes

Let s know how it works.

Doron





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] virt-manager migration

2014-04-03 Thread Ryan Barry

On 04/03/2014 03:58 PM, Jeremiah Jahn wrote:

Well, that was fun. So I let the ovirt engine install to a running
host that already had kvm/libvirt running on it. Don't ask why, but it
did happen.  After figuring out how to setup a sasl user/password and
adding qemu to the disk group I could startup all of my guests again.
   My host now shows up in the list of hosts, but has "One of the
Logical Networks defined for this Cluster is Unreachable by the Host."
  error sitting on it.  ovirt-node-setup also tells me I should setup a
network.   I currently have 6 bridges running on this thing all one
for each vlan. I'm unsure as to how to meld the 'bondX' in
ovirt-node-setup with my current network configuration to resolve the
error.  esp given that I don't actually want to bond any of my NIC's
together at this point.  I do realise I'm doing this the hard way.  My
goal at the moment is to just get the host to fully report in the
engine, at which point I think I'll be able to use v2v to finish up
the rest.

Thanks for any suggestions or pointers.

That's ok. I probably would have skipped ovirt-node-setup (which is 
intended to run on the oVirt Node ISO) and used the "New" wizard from 
the Engine to add it (which would install requisite RPMs, etc).


The "one of the networks is not reachable" error isn't really my area, 
but it's probably looking for the host to be reachable by IP on a bridge 
called ovirtmgmt.


An engine developer can verify this, but I'd guess that adding a virtual 
NIC to whatever VLAN has the IP the engine sees and bridging that to 
ovirtmgmt with the appropriate address would work. It may only need the 
right bridge (ovirtmgmt, probably) defined. I've never experimented with 
this aspect of it, to be honest.
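
As a very rough sketch of what that would look like under
/etc/sysconfig/network-scripts (device names and addresses are placeholders --
normally the engine/vdsm writes this for you when the host is added):

# ifcfg-eth0.100 -- the VLAN carrying the management IP
DEVICE=eth0.100
VLAN=yes
ONBOOT=yes
BRIDGE=ovirtmgmt

# ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0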


On Thu, Apr 3, 2014 at 10:29 AM, Ryan Barry  wrote:

From: "Jeremiah Jahn" 
To: users@ovirt.org
Sent: Wednesday, April 2, 2014 8:38:02 PM
Subject: [Users] virt-manager migration







Anyway, long story short. I'm having a difficult time finding
documentation
on migrating from virt-manager to oVirt. As well as installing ovirt-nodes
manually. I'd love to find this perfect world where I can just install the
ovirt-node RPMs on my already running Hosts and begin to have them managed
by the oVirt engine. Without and serious downtime.


The usual way is to go through virt-v2v. Essentially, you'd install the
engine somewhere and configure a storage domain (the properties of which
vary, but it's UUIDed and the UUID must match the engine) to bring the
datacenter up, then add an export domain (which is also UUIDed).

Once an export domain is created, virt-v2v can move your VMs over, but with
downtime.

As far as turning your existing hosts into nodes, adding them from the
engine is the easiest way (there's a wizard for this). It's possible to
install the ovirt-node RPMs directly, but they take over your system a bit,
and it's probably not what you're looking for. The engine can manage regular
EL6/fedora hosts.

But registering to the engine will reconfigure libvirt, so the general path
is:

Install engine.
Live-migrate VMs off one of your hosts.
Add that host as a node.
> virt-v2v machines that can take downtime (can you get a maintenance window)?
Bring them up on the new node.
Repeat until your environment is converted.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] virt-manager migration

2014-04-03 Thread Ryan Barry

> From: "Jeremiah Jahn" 
> To: users@ovirt.org
> Sent: Wednesday, April 2, 2014 8:38:02 PM
> Subject: [Users] virt-manager migration
>

> Anyway, long story short. I'm having a difficult time finding documentation
> on migrating from virt-manager to oVirt. As well as installing ovirt-nodes
> manually. I'd love to find this perfect world where I can just install the
> ovirt-node RPMs on my already running Hosts and begin to have them managed
> by the oVirt engine. Without and serious downtime.

The usual way is to go through virt-v2v. Essentially, you'd install the 
engine somewhere and configure a storage domain (the properties of which 
vary, but it's UUIDed and the UUID must match the engine) to bring the 
datacenter up, then add an export domain (which is also UUIDed).


Once an export domain is created, virt-v2v can move your VMs over, but 
with downtime.


As far as turning your existing hosts into nodes, adding them from the 
engine is the easiest way (there's a wizard for this). It's possible to 
install the ovirt-node RPMs directly, but they take over your system a 
bit, and it's probably not what you're looking for. The engine can 
manage regular EL6/fedora hosts.


But registering to the engine will reconfigure libvirt, so the general 
path is:


Install engine.
Live-migrate VMs off one of your hosts.
Add that host as a node.
virt-v2v machines that can take downtime (can you get a maintenance window)?
Bring them up on the new node.
Repeat until your environment is converted.
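
For the virt-v2v step itself, something along these lines (this is the old
RHEL6-era syntax; hostnames, export path and VM name are placeholders):

virt-v2v -ic qemu+ssh://root@oldhost.example.com/system \
  -o rhev -os nfs.example.com:/export/domain \
  --network ovirtmgmt myguest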
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt Node Hypervisor 3.0.3 (f19) Installation Failed - Failed to Partition/format

2014-02-12 Thread Ryan Barry
>
> Message: 2
> Date: Wed, 12 Feb 2014 15:17:18 +0800 (SGT)
> From: Udaya Kiran P 
> To: users 
> Subject: [Users] oVirt Node Hypervisor 3.0.3 (f19) Installation Failed
> -   Failed to Partition/format
> Message-ID:
> <1392189438.17050.yahoomail...@web193206.mail.sg3.yahoo.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi All,
>
> I am trying to install oVirt Node Hypervisor 3.0.3 (f19). I am getting
> below error.
>
> Exception: RuntimeError('Failed to Partition/format')
>
> The progress bar stopped at 40%.
>
> Please suggest how to fix or which logs to be checked?
>
> Thank You,
> Udaya Kiran


Can you check /tmp/ovirt.log and /var/log/ovirt.log?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Users Digest, Vol 26, Issue 72

2013-11-17 Thread Ryan Barry
Without knowing how the disks are split among the controllers, I don't want
to make any assumptions about how shared it actually is, since it may be
half and half with no multipathing.

While a multi-controller DAS array *may* be shared storage, it may not be.
Moreover, I have no idea whether VDSM looks at by-path, by-bus, dm-*, or
otherwise, and there are no guarantees that a SAS disk will present like a
FC LUN (by-path/pci...-fc-$wwn...), whereas OCFS POSIXFS is assured to
work, albeit with a more complex setup and another intermediary layer.
On Nov 17, 2013 10:00 AM,  wrote:

> Send Users mailing list submissions to
> users@ovirt.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.ovirt.org/mailman/listinfo/users
> or, via email, send a message with subject or body 'help' to
> users-requ...@ovirt.org
>
> You can reach the person managing the list at
> users-ow...@ovirt.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Users digest..."
>
>
> Today's Topics:
>
>1. Re: oVirt and SAS shared storage?? (Jeff Bailey)
>
>
> --
>
> Message: 1
> Date: Sat, 16 Nov 2013 21:39:35 -0500
> From: Jeff Bailey 
> To: users@ovirt.org
> Subject: Re: [Users] oVirt and SAS shared storage??
> Message-ID: <52882c67.9000...@cs.kent.edu>
> Content-Type: text/plain; charset=ISO-8859-1
>
>
> On 11/16/2013 9:22 AM, Ryan Barry wrote:
> >
> > unfortunally, I didn't got a reply for my question. So.. let's try
> > again.
> >
> > Does oVirt supports SAS shared storages (p. e. MSA2000sa) as
> > storage domain?
> > If yes.. what kind of storage domain I've to choose at setup time?
> >
> > SAS is a bus which implements the SCSI protocol in a point-to-point
> > fashion. The array you have is the effective equivalent of attaching
> > additional hard drives directly to your computer.
> >
> > It is not necessarily faster than iSCSI or Fiber Channel; almost any
> > nearline storage these days will be SAS, almost all the SANs in
> > production, and most of the tiered storage as well (because SAS
> > supports SATA drives). I'm not even sure if NetApp uses FC-AL drives
> > in their arrays anymore. I think they're all SAS, but don't quote me
> > on that.
> >
> > What differentiates a SAN (iSCSI or Fiber Channel) from a NAS is that
> > a SAN presents raw devices over a fabric or switched medium rather
> > than point-to-point (point-to-point Fiber Channel still happens, but
> > it's easier to assume that it doesn't for the sake of argument). A NAS
> > presents network file systems (CIFS, GlusterFS, Lustre, NFS, Ceph,
> > whatever), though this also gets complicated when you start talking
> > about distributed clustered network file systems.
> >
> > Anyway, what you have is neither of these. It's directly-attached
> > storage. It may work, but it's an unsupported configuration, and is
> > only shared storage in the sense that it has multiple controllers. If
> > I were going to configure it for oVirt, I would:
> >
>
> It's shared storage in every sense of the word.  I would simply use an
> FC domain and choose the LUNs as usual.
>
> > Attach it to a 3rd server and export iSCSI LUNs from it
> > Attach it to a 3rd server and export NFS from it
> > Attach it to multiple CentOS/Fedora servers, configure clustering (so
> > you get fencing, a DLM, and the other requisites of a clustered
> > filesystem), and use raw cLVM block devices or GFS2/OCFS filesystems
> > as POSIXFS storage for oVirt.
> >
>
> These would be terrible choices for both performance and reliability.
> It's exactly the same as fronting an FC LUN would be with all of that
> crud when you could simply access the LUN directly.  If the array port
> count is a problem then just toss an SAS switch in between and you have
> an all SAS equivalent of a Fibre Channel SAN.  This is exactly what we
> do in production vSphere environments and there are no technical reasons
> it shouldn't work fine with oVirt.
>
> > Thank you for your help
> >
> > Hans-Joachim
> >
> >
> > Hans
> >
> > --
> > while (!asleep) { sheep++; }
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
> End of Users Digest, Vol 26, Issue 72
> *
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt and SAS shared storage??

2013-11-16 Thread Ryan Barry
>
> unfortunally, I didn't got a reply for my question. So.. let's try again.
>
> Does oVirt supports SAS shared storages (p. e. MSA2000sa) as storage
> domain?
> If yes.. what kind of storage domain I've to choose at setup time?
>
> SAS is a bus which implements the SCSI protocol in a point-to-point
fashion. The array you have is the effective equivalent of attaching
additional hard drives directly to your computer.

It is not necessarily faster than iSCSI or Fiber Channel; almost any
nearline storage these days will be SAS, almost all the SANs in production,
and most of the tiered storage as well (because SAS supports SATA drives).
I'm not even sure if NetApp uses FC-AL drives in their arrays anymore. I
think they're all SAS, but don't quote me on that.

What differentiates a SAN (iSCSI or Fiber Channel) from a NAS is that a SAN
presents raw devices over a fabric or switched medium rather than
point-to-point (point-to-point Fiber Channel still happens, but it's easier
to assume that it doesn't for the sake of argument). A NAS presents network
file systems (CIFS, GlusterFS, Lustre, NFS, Ceph, whatever), though this
also gets complicated when you start talking about distributed clustered
network file systems.

Anyway, what you have is neither of these. It's directly-attached storage.
It may work, but it's an unsupported configuration, and is only shared
storage in the sense that it has multiple controllers. If I were going to
configure it for oVirt, I would:

Attach it to a 3rd server and export iSCSI LUNs from it
Attach it to a 3rd server and export NFS from it
Attach it to multiple CentOS/Fedora servers, configure clustering (so you
get fencing, a DLM, and the other requisites of a clustered filesystem),
and use raw cLVM block devices or GFS2/OCFS filesystems as POSIXFS storage
for oVirt.

Thank you for your help
>
> Hans-Joachim
>

Hans

-- 
while (!asleep) { sheep++; }
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Users Digest, Vol 25, Issue 120

2013-10-25 Thread Ryan Barry


On 10/25/2013 04:41 AM, users-requ...@ovirt.org wrote:

 I haven't looked into this very much, but it sounds promising.
   Anyone on list familiar with it?

It is, in essence, LXC containers combined with an overlay filesystem. 
It's basic PaaS with a Go binary ("docker") wrapped around LXC. It's 
neat in the same sense as Vagrant -- you can ship a Dockerfile which can 
reproduce your environment very easily, and the Docker team itself has 
wrapped all the images in a git repository you can easily branch 
from/etc. That said, Docker support won't land in Fedora until F20, and 
CentOS around the same time (officially).


I'll admit that I don't get the hype around Docker, since it doesn't do 
anything that LXC doesn't already do, but the templating and a 
user-friendly binary is nice.


   I wonder if there's interest in shipping oVirt docker containers.

I'm interested in Docker to ease the process of building Node images, at 
least. oVirt Docker containers would be interesting, assuming LXC 
support isn't painful, since CoreOS (where Docker originated) also 
relies on an image with readonly root and overlays on top, so there's 
some overlap.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Customizing node configuration

2013-10-02 Thread Ryan Barry
> 
> From: users-boun...@ovirt.org on behalf of Allen Belletti <al...@ggc.edu>
> Sent: Wednesday, October 02, 2013 4:40 PM
> To: users@ovirt.org
> Subject: [Users] Customizing node configuration
> 
> Hello All,
> 
> I've been searching the archives and not yet managed to find anything on the 
> subject, so I'm hoping that someone can point me in the right direction.
> 
> I am using nodes with Intel 10G network cards. The driver for these cards has 
> been tweaked to complain (and fail to start the interface) if a non-Intel 
> approved optical transceiver is in use. It's easy to override with the 
> following options:
> 
> ixgbe.allow_unsupported_sfp=1
> 
> The usual ways of doing this are either to place it on the kernel command 
> line at boot time, or in /etc/modprobe.d/ixgbe.conf where it would read 
> "options ixgbe allow_unsupported_sfp=1". Unfortunately I have no idea how to 
> make either of these permanent on an oVirt node.
> 
> I'm running the ovirt-node-iso-3.0.1-1.0.2.vdsm.el6.iso image. It has an 
> /etc/modprobe.d directory where changes do not persist across reboots. It 
> also has /config/etc where I've added modprobe.d and modprobe.d/ixgbe.conf. 
> By adding this file to /config/files, I've been able to make it appear in 
> /etc/modprobe.d at boot time. This has no effect. It appears to take place 
> after the ixgbe driver has already been started. If I "rmmod ixgbe" and 
> "modprobe ixgbe" manually, it works fine.
> 
> So the general question is, how do I make configuration changes which persist 
> across reboots? Surely others have run into the same situation when trying to 
> support specific devices.
> 
rc.local is probably the easiest way for now. There's a patch at 
http://gerrit.ovirt.org/#/c/19811/1 which gives us the ability to arbitrarily 
modify boot loader arguments, but I'm not sure that we have any plans to add a 
field in the TUI to do so, though it may be a good idea, since this is coming 
up more and more often.  
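
A sketch of the rc.local route (reloading the driver drops those links briefly, so
keep that in mind):

cat >> /etc/rc.local <<'EOF'
rmmod ixgbe
modprobe ixgbe allow_unsupported_sfp=1
EOF
persist /etc/rc.local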

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] RAID1 mirror on ovirt-node + engine nfs server

2013-07-07 Thread Ryan Barry
>
> From: Jason Keltz 
> To: users@ovirt.org
> Subject: [Users] RAID1 mirror on ovirt-node + engine nfs server
> Message-ID: <51d5a06e.10...@cse.yorku.ca>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi.
>
> I've been reading about ovirt, and ready to try my own experiments with
> it.  I have two small questions..
>
> When an ovirt-node is installed from the standard ISO, there's no
> mention in the documentation about setting up a RAID1 mirror for the
> root disk on the node.  I'm sure that once I get around to working out
> kickstarting the node, I could easily install the raid1 mirror, but I'm
> just wondering why I don't see that in the default ISO. Maybe I'm just
> missing something.  Is redundancy of the disk on the node not
> important?  Sure, if the node goes down, I guess the VMs could be run on
> other nodes, but if we can prevent the node from going down in the first
> place, then why not?
>
> Kickstarting the Node is an interesting proposition, but not one that the
current image is well-suited for. Nor is support for mdraid devices, to be
honest. It's something worth considering (whether btrfs redundancy or
mdraid), but it's not currently implemented to my knowledge. I could be
wrong about that.

I also have a question about the storage backend.  In particular, I have
> a pretty powerful server that I intend to use as the NFS server, and a
> few servers to use as nodes.  On the other hand, I don't have a powerful
> machine (at the moment) to use for the ovirt-engine.  Would it be poor
> practice to run the ovirt-engine ON the NFS server?  During engine
> setup, I see that you can setup an NFS share for ISOs from the
> ovirt-engine, but I don't think there's mention of just generalized
> storage there.  I suspect it's "poor practice", but I thought I'd ask
> anyway.   My setup will be relatively small (say, 4 nodes), and this
> would let me reduce 1 general server from the infrastructure (dedicated
> ovirt-engine).
>

I run the engine virtualized as a guest on the NFS server. This is
definitely a workable use case, especially for small environments.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Ovirt Live CD root password

2013-07-07 Thread Ryan Barry
>
> I am trying to install ovirt using the Live CD Image located at
> http://resources.ovirt.org/releases/3.2/tools/ovirt-live-1.0.iso
>
> but the install to HDD menu is prompting for root password?
>
> Could someone please point me to the root password for the installation of
> oVirt using install to HDD menu?
>
> Best Regards,
> Moto
>

Moto, please use the oVirt Node release found here instead:
http://resources.ovirt.org/releases/3.2/iso/ovirt-node-iso-2.6.1-20120228.fc18.iso
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users