On 03/20/2017 05:09 AM, /dev/null wrote:
Hi,
How do I make the hosted_storage aware of a Gluster server failure? In --deploy I
cannot provide backup-volfile-servers. In /etc/ovirt-hosted-engine/hosted-engine.conf
there is an mnt_options line, but I read
Hi All,
Thanks to the mailing list support and advice, I have now finished installing
oVirt 4.1; I have created a few virtual machines and it is working fine. Now I am
thinking about migrating some virtual machines from VMware/KVM to oVirt.
I am checking the docs at
Thanks Arik
On Monday, 20 March 2017 11:41 AM, Arik Hadas wrote:
On Mon, Mar 20, 2017 at 9:15 AM, Yedidyah Bar David wrote:
On Mon, Mar 20, 2017 at 9:07 AM, John Joseph wrote:
> Hi All,
> Like to know is there any other simple
On Mon, Mar 20, 2017 at 9:15 AM, Yedidyah Bar David wrote:
> On Mon, Mar 20, 2017 at 9:07 AM, John Joseph wrote:
> > Hi All,
> > Thanks to the mailing list support and advice, now I have finished
> > installing oVirt 4.1, I have created a few virtual machines
I haven't seen it used at any of my RHV customers - but I can see this
capability becoming popular as people learn about it.
- Greg
On Sun, Mar 19, 2017 at 10:17 AM, Leon Goldberg wrote:
> Hey,
>
> We've been wondering whether the ability to add custom iptables rules
> to
On Mon, Mar 20, 2017 at 9:07 AM, John Joseph wrote:
> Hi All,
> Thanks to the mailing list support and advice, now I have finished
> installing oVirt 4.1, I have created a few virtual machines and it is working
> fine. Now I am thinking about migrating some virtual machines from
Hi Ian,
Please include only the relevant files of the specified date, I could not
figure out which ones to look at.
There are also no supervdsm logs (except one for node2, but for different
dates). Are there such logs at all?
Thanks,
Edy.
On Mon, Mar 20, 2017 at 2:59 AM, Ian Neilsen
knara
Looks like your conf is incorrect for the mnt option.
It should be, I believe: mnt_options=backupvolfile-server=<server name>
not
mnt_options=backup-volfile-servers=host2
If your DNS isn't working or your hosts file is incorrect, this will prevent
it as well.
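For reference, this is roughly what the Gluster backup-server line in /etc/ovirt-hosted-engine/hosted-engine.conf could look like — a sketch only, with placeholder hostnames; note that the option spelling has differed between GlusterFS versions (backupvolfile-server vs. backup-volfile-servers), which is exactly the confusion in this thread:

```ini
# /etc/ovirt-hosted-engine/hosted-engine.conf (excerpt, hostnames are placeholders)
storage=gluster1.example.com:/engine
mnt_options=backup-volfile-servers=gluster2.example.com:gluster3.example.com
```

After editing and remounting, you can verify whether the option took effect by checking the mount options shown by `mount` for the engine storage domain.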
On 21 March 2017 at 03:30,
Hi Yedidyah,
On 19/03/2017 11:55, Yedidyah Bar David wrote:
> On Sat, Mar 18, 2017 at 12:25 PM, Paolo Margara
> wrote:
>> Hi list,
>>
>> I'm working on a system running on oVirt 3.6 and the Engine is reporting
>> the warning "The Hosted Engine Storage Domain
Edy
Ah ok, my fault. I will push up the files a bit later.
Ian
On 20 March 2017 at 16:47, Edward Haas wrote:
> Hi Ian,
>
> Please include only the relevant files of the specified date, I could not
> figure out which ones to look at.
> There are also no supervdsm logs
The best and easiest way is via the Import dialog, as you saw in the
'virt-v2v-integration' wiki page.
Another way to do it, and only for VMware (not KVM/libvirt), is via the
virt-v2v utility with the option '-o rhev', which exports the VM to an oVirt
export domain; then you can import the VM to a data
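A hedged sketch of that virt-v2v invocation — the vCenter path, the export-domain NFS path, and the VM name are all placeholders, and the exact options may differ between virt-v2v versions (check `virt-v2v --help`):

```shell
# Export a VMware guest to an oVirt export storage domain (placeholders throughout).
virt-v2v -ic 'vpx://vcenter.example.com/Datacenter/esxi1?no_verify=1' \
         -o rhev -os nfs.example.com:/export/ovirt-export-domain \
         myvm
```

Once it completes, the VM should appear in the export domain and can then be imported into a data domain from the webadmin.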
Heya,
I've put together another blog post, now on a slightly more advanced
topic of hostdev passthrough. Currently only looking into generic PCI
devices, but I hope to summarize other passthrough related
technologies soon!
https://mpolednik.github.io/2017/03/19/hostdev-passthrough-pci/
Hi Fernando,
There was a problem with the version you're using; the details can be found
here [1].
Please try to use a newer release of node-ng-4.1 for the upgrade.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1427468
On Wed, Mar 15, 2017 at 8:50 PM, FERNANDO FREDIANI <
Hi,
Since I've upgraded to 4.1 I no longer have the "actual downtime" displayed
after VM migration; it displays N/A:
Duration: 1 minute 59 seconds, Total: 1 minute 59 seconds, Actual
downtime: (N/A) .
As far as I remember this value was available in 3.6; does anybody else
have the same
On Mon, Mar 20, 2017 at 12:28 PM, Arsène Gschwind wrote:
> Hi,
>
> Since I've upgraded to 4.1 I don't have "actual downtime" after VM
> migration displayed, it displays N/A
>
> Duration: 1 minute 59 seconds, Total: 1 minute 59 seconds, Actual
> downtime: (N/A)
That is another odd aspect about this. The ovirt-engine service is up, as
are the httpd and ovirt-engine-dwhd services.
Any ideas on how to fix it?
Logan
On Mon, Mar 20, 2017 at 11:02 AM, Simone Tiraboschi
wrote:
>
>
> On Mon, Mar 20, 2017 at 4:39 PM, Logan Kuhn
On Mon, Mar 20, 2017 at 5:16 PM, Logan Kuhn
wrote:
> That is another odd aspect about this. The ovirt-engine service is up, as
> are the httpd and ovirt-engine-dwhd services.
>
> Any ideas on how to fix it?
>
Please start running
curl --insecure
Hi -
I am wondering why OSSEC would be reporting hidden processes on my oVirt
nodes. I run OSSEC across the infrastructure, and multiple oVirt clusters
have assorted nodes that will report a process is running but has no entry
in /proc, and thus a "possible rootkit" alert is fired.
I am
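A minimal sketch of the kind of check OSSEC's rootcheck performs (this is not OSSEC's actual code, just the idea, assuming a Linux host with procfs): compare the PIDs that ps reports with the numeric entries under /proc. Short-lived processes racing between the two snapshots produce benign mismatches, which may explain false positives on busy hypervisor nodes.

```shell
# Snapshot the PIDs visible under /proc and the PIDs reported by ps.
proc_pids=$(ls /proc | grep -E '^[0-9]+$' | sort -n)
ps_pids=$(ps -eo pid= | tr -d ' ' | sort -n)
# Print PIDs present in only one of the two snapshots; these are the
# "hidden process" candidates (or just processes that started/exited
# between the two commands).
printf '%s\n%s\n' "$proc_pids" "$ps_pids" | sort -n | uniq -u
```

On a quiet machine this usually prints nothing; any output is worth cross-checking with a second run before suspecting a rootkit.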
On Mon, Mar 20, 2017 at 4:39 PM, Logan Kuhn
wrote:
> So that sounds like the host isn't able to communicate properly with the
> HEVM. The cluster is still in global maintenance, but the HEVM still
> thinks that it isn't because the database says it isn't in global
>
Hi,
there's an updated version of scrat14/ovirt-guest-agent available at
https://forge.puppet.com/scrat14/ovirt_guest_agent
Latest version includes support for installing oVirt guest agent on Ubuntu.
Regards,
René
So that sounds like the host isn't able to communicate properly with the
HEVM. The cluster is still in global maintenance, but the HEVM still
thinks that it isn't because the database says it isn't in global
maintenance.
Logan
On Mon, Mar 20, 2017 at 10:34 AM, Simone Tiraboschi
Dear Ian and Didi,
> Found a working option to get second and subsequent hosts deployed with
> oVirt 4.1:
>
> 1. Set the second host into maintenance
> 2. Highlight the second host and choose "Installation --> Reinstall", edit
>    params in the popup and click OK
> 3. Ignore the warning that pops up and watch the vdsm.log
Hello Yaniv.
It also looked to me initially that multi-queue would not be necessary for
1 Gbps; however, the virtual machine is relatively busy, and the CPU needed
to process the traffic may (or may not) be competing with the processes
running in the guest.
The network is as follows: 3 x 1 Gb
On Monday, March 20, 2017 9:14:51 AM EDT Logan Kuhn wrote:
> Starting at 1:09am on Saturday the Hosted Engine has been rebooting because
> it failed its liveness check. This is due to the webadmin not loading.
> Nothing changed, as far as I can tell, on the engine since its last
> successful
Yup, ovirttest1 ran out of disk space on Friday, we recovered it and
everything seemed completely normal.
The postgres service is down on the HEVM, but that is because it's on our
postgresql cluster and has been for weeks. I can connect to its database
from within the HEVM using the credentials
We have a hosted-engine running on 4.1 with an iSCSI hosted_storage domain, and
are able to import the domain. However, we cannot attach the domain to the
data center.
Just to make sure I'm not missing something basic, does the engine VM need to
be able to connect to the iSCSI target itself?
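One quick way to check basic reachability of the target from the engine VM is an iscsiadm discovery against the portal — a sketch only; the portal address below is a placeholder and the iscsi-initiator-utils package must be installed:

```shell
# Ask the portal which targets it exposes (portal address is a placeholder).
iscsiadm -m discovery -t sendtargets -p 192.168.2.10:3260
```

If this lists the target IQN, the network path and portal are fine; whether the engine VM itself actually needs that access is the question for the list.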
On Mon, Mar 20, 2017 at 4:28 PM, Logan Kuhn
wrote:
> Yup, ovirttest1 ran out of disk space on Friday, we recovered it and
> everything seemed completely normal.
>
> the postgres service is down on the HEVM, but that is because it's on our
> postgresql cluster, has
On Monday, March 20, 2017 11:34:49 AM EDT Simone Tiraboschi wrote:
> On Mon, Mar 20, 2017 at 4:28 PM, Logan Kuhn
>
> wrote:
> > Yup, ovirttest1 ran out of disk space on Friday, we recovered it and
> > everything seemed completely normal.
> >
> > the postgres service
On Thu, Mar 9, 2017 at 1:01 PM, Fred Rolland wrote:
> I don't think it will work.
> We rely heavily on LVM when working with iSCSI and FC and I am not sure
> how LVM will handle this kind of operation.
> A storage domain is a VG that contains PVs (LUNS), and each disk is a
On Mon, Mar 20, 2017 at 4:30 PM, Devin A. Bougie
wrote:
> We have a hosted-engine running on 4.1 with an iSCSI hosted_storage
> domain, and are able to import the domain. However, we cannot attach the
> domain to the data center.
>
The engine should import it by
It hangs with no output.
We managed to get it working by manually changing it to global maintenance
in the database after stopping the engine and dwhd services and dumping the
database.
One more bit that may or may not be useful: this was being output to
the postgres logs quite a bit
Hi Kasturi,
thank you. I tested it and it seems not to work; even after rebooting, the
current mount does not show the mnt_options, nor does the switchover work.
[root@host2 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
gateway=192.168.2.1
iqn=