Hi,
Will there be a cleaner approach? I can't tolerate a full stop of all
VMs just to enable it; that seems too disruptive for a real production
environment. Will there be some migration mechanism in the future?
Best regards,
Misak Khachatryan
On Fri, Nov 10, 2017 at 12:35 AM, Darrell Budic
Or, as a second option, you can estimate file space usage with:
# du -h $PATH_TO_DISKS
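Worth noting for sparse (thin) images: by default du reports the blocks actually allocated, while --apparent-size reports the virtual size. A minimal sketch of the difference, using an arbitrary demo file name:

```shell
# Make a 100 MiB sparse file to demonstrate (name is arbitrary):
truncate -s 100M demo.img

# Allocated (on-disk) usage -- what the storage actually consumes:
du -h demo.img

# Apparent (virtual) size of the same file, for comparison:
du -h --apparent-size demo.img

rm demo.img
```

For a freshly truncated sparse file the allocated size is close to zero while the apparent size is the full 100 MiB.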
On Wed, Nov 15, 2017 at 12:19 AM, andre...@starlett.lv wrote:
> Dumb error.
> ls -la doesn't show real allocated size of sparse image,
> ls -lsha does the trick.
>
>
> On 11/15/2017 12:55 AM,
Hi,
Can somebody help me?
Thanks.
T.
- On 13 Nov 2017, 21:36, Demeter Tibor wrote:
> Dear Users,
> I have a disk of a VM that has a snapshot. It is very interesting,
> because
> there are two other disks of that VM, but there are no snapshots of them.
> I
Hi Bryan,
In your output, if you see -drive file=gluster:///,
it means that the VM disk drives are being accessed using libgfapi.
If it were FUSE, you would have seen something like
Dumb error.
ls -la doesn't show the real allocated size of a sparse image;
ls -lsha does the trick.
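A minimal demonstration of the difference (file name is arbitrary): ls -l prints only the apparent size, while the extra leading column added by -s shows the space actually allocated.

```shell
# A 1 GiB sparse file: apparent size is 1 GiB, allocated blocks ~0.
truncate -s 1G sparse.img

ls -lh sparse.img    # shows the apparent size (1.0G)
ls -lsh sparse.img   # leading column shows the allocated size (~0)

# The same two numbers via stat:
echo "apparent:  $(stat -c %s sparse.img) bytes"
echo "allocated: $(( $(stat -c %b sparse.img) * 512 )) bytes"

rm sparse.img
```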
On 11/15/2017 12:55 AM, andre...@starlett.lv wrote:
> Hi,
>
> Run into strange problem with newly installed latest 4.1.
> Defined local storage domain.
> Create virtual disk - thin provisioning (in web
Hi,
Ran into a strange problem with a newly installed latest 4.1.
Defined a local storage domain.
Creating a virtual disk with thin provisioning (in the web interface) makes RAW
images and PREALLOCATES them to full size instead of the minimum size of 1 GB.
Preallocation occurs even before formatting to ext4.
Tried several times,
On Tue, Nov 14, 2017 at 4:44 PM, Kyle Conti
wrote:
> Sending this again in case I sent this prior to being fully setup as an
> ovirt-user subscriber (received confirmation after I sent). If you did
> receive this already, my apologies:
>
> Hello,
>
> I'm brand new
Looked into CPU settings for KVM. Looks like nested virtualization is not
the default behavior in KVM.
Check for Nested Virtualization:
sudo virt-host-validate
[root@ovirt1 ~]# sudo virt-host-validate
QEMU: Checking for hardware virtualization : FAIL (Only emulated CPUs are available,
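Besides virt-host-validate, a quick check is the KVM module parameter itself. A sketch assuming an Intel host (on AMD the module is kvm_amd); the modprobe.d file name below is arbitrary:

```shell
# Check whether nested virtualization is currently enabled:
nested=$(cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || echo "unknown")
echo "kvm_intel nested: $nested"   # "Y" or "1" when enabled

# To enable it persistently and reload the module (run as root;
# the modprobe.d file name is arbitrary):
#   echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
#   modprobe -r kvm_intel && modprobe kvm_intel
```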
Hi Kasturi,
Thanks a lot for taking a look at this. I think it's
"grafton-sanity-check.sh". Following is the complete output from the
install attempt. The Ansible version is 2.4; gdeploy is 2.0.2.
Do you have a tested step-by-step guide for 4.1.6/7? It would be great if you
could share it.
PLAY
Hi, I had similar problems with the RN104; I think they share the same OS inside. I haven't found an answer for using NFS, only iSCSI. Have you seen this thread on the Netgear forum: https://community.netgear.com/t5/New-to-ReadyNAS/ReadyNAS-4220-NFS-configuration/td-p/1191554 ? There is a working solution (as a
Hi,
Can someone please help me "move" the network bridge to another port? I
have tried a couple times but it doesn't seem to work.
Our internal LAN IP range is 192.168.102.0/24 and the storage area network
is 10.10.10.0/24
When I installed hosted-engine, I accidentally told it to use the
Any advice or other things to check? I'm at a loss as to what's causing this
and I can't add more nodes to the same cluster without a working NFS as I
understand it.
-Walt
- Original Message -
From: "Walt Holman"
To: "Fred Rolland"
Cc:
Thanks, Alan.
On Tue, Nov 14, 2017 at 10:11 PM, Alan Griffiths
wrote:
> Mount the Gluster volume and delete everything on it. Should be a
> directory with a UUID name and a file called __DIRECT_IO_TEST__
>
> On 14 November 2017 at 18:54, Rudi Ahlers
On Tue, Nov 14, 2017 at 7:09 PM, Artem Tambovskiy
wrote:
> Thanks, Darrell!
>
> Restarted vdsmd but it didn't help.
> systemctl status vdsmd -l shows the following:
>
> ● vdsmd.service - Virtual Desktop Server Manager
>Loaded: loaded
Thanks, Darrell!
Restarted vdsmd, but it didn't help.
systemctl status vdsmd -l shows the following:
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
Active: active (running) since Tue 2017-11-14
Try restarting vdsmd from the shell, “systemctl restart vdsmd”.
> From: Artem Tambovskiy
> Subject: [ovirt-users] Non-responsive host, VM's are still running - how to
> resolve?
> Date: November 14, 2017 at 11:23:32 AM CST
> To: users
>
> Apparently, I lost the
Looks like you need to clean out your storage domain left over from
the previous install attempt. What are you using, gluster, NFS?
On 14 November 2017 at 14:35, Rudi Ahlers wrote:
> Hi,
>
> Can someone please help?
>
> I installed hosted-engine but specified the wrong
Apparently, I lost the host which was running the hosted engine and another 4
VMs exactly during migration of the second host from bare metal into the
cluster. For some reason the first host entered the "Non responsive"
state. The interesting thing is that the hosted engine and all other VMs are
up and
Can you please provide full vdsm logs (only the engine log is attached) and
the versions of the engine, vdsm, gluster?
On Tue, Nov 14, 2017 at 6:16 PM, Bryan Sockel wrote:
> Having an issue moving a hard disk from one VM data store to a newly
> created gluster data
Having an issue moving a hard disk from one VM data store to a newly
created gluster data store. I can shut down the machine, copy the hard
drive, detach the old hard drive and attach the new one, but I would
prefer to keep the VM online when moving the disk.
I have attached a
Hi,
Here it is with the --verbose option.
Please note this is a draft test all-in-one system (everything on one PC, not
a self-hosted engine).
Although it is not recommended, somehow it works.
[root@vm-hostengine vmhosts]# engine-iso-uploader --verbose --ssh-user=root
--iso-domain=iso upload
Hmm, not sure what I am doing wrong then; it does not seem to be working for
me. I am not using the hosted engine, but a direct install on a physical
server. I thought I had enabled support for libgfapi with this command:
# engine-config -g LibgfApiSupported
LibgfApiSupported: false version:
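Note that -g only reads the current value; it does not enable anything. A hedged sketch of actually turning it on (the cluster compatibility level 4.1 below is an assumption; substitute your own):

```shell
# Set the flag for a given cluster compatibility level, then restart
# the engine so it takes effect (4.1 is an assumed example level):
engine-config -s LibgfApiSupported=true --cver=4.1
systemctl restart ovirt-engine

# Running VMs keep using FUSE until they are powered off and started
# again; a live migration alone does not switch the access method.
```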
Sending this again in case I sent this prior to being fully set up as an
ovirt-users subscriber (I received confirmation after I sent it). If you did
receive this already, my apologies:
Hello,
I'm brand new to oVirt and trying to get my Hosted Engine setup configured with
iSCSI storage. I have
Hello,
I'm brand new to oVirt and trying to get my Hosted Engine setup configured with
iSCSI storage. I have ~8TB of usable storage available on an LVM partition. This
storage is on the same server that is hosting the oVirt engine virtual machine.
After I use the discovery/sendtargets command
Hi,
Can someone please help?
I installed hosted-engine but specified the wrong interface, and thus
couldn't access it. So I removed it and reinstalled it (yum install), and
re-ran the deploy, but got the following error:
Please confirm installation settings (Yes, No)[Yes]:
[ INFO ]
Great answer. Thanks!
Best regards,
Arthur Melo
Linux User #302250
2017-11-12 6:23 GMT-02:00 Yaniv Kaul :
>
>
> On Thu, Nov 9, 2017 at 8:28 PM, Arthur Melo wrote:
>
>> Is it possible to mount a export share using CIFS?
>>
>
> We generally support any
In the meantime, as I had to give an answer about the snapshotted VM, I
decided to follow one of the suggestions to run engine-setup, and so also to
move my engine from 4.1.6 to 4.1.7.
And indeed the 2 stale tasks have been cleaned up.
The lock symbol has gone from the apex VM too.
Probably the
Can you connect to http://hostname:8080/ovirt-engine/api/ using these
credentials?
Even though the already posted stack trace looks as expected, maybe you
can share your /etc/ovirt-provider-ovn (without the
ovirt-sso-client-secret, which seems to be correct)?
Thanks,
Dominik
On Tue, 14 Nov 2017
On Tue, Nov 14, 2017 at 10:06 AM, Gianluca Cecchi wrote:
> On Tue, Nov 14, 2017 at 9:33 AM, Artem Tambovskiy <
> artem.tambovs...@gmail.com> wrote:
>
>> Trying to configure power management for a certain host and fence agent
>> always fail when I'm pressing Test
On Tue, Nov 14, 2017 at 10:10 AM, Artem Tambovskiy <
artem.tambovs...@gmail.com> wrote:
> Hi,
>
> In the engine.log appears following:
>
> 2017-11-14 12:04:33,081+03 ERROR
> [org.ovirt.engine.core.bll.pm.FenceProxyLocator]
> (default task-184) [32fe1ce0-2e25-4e2e-a6bf-59f39a65b2f1] Can not run
>
Hello everybody,
I have a node where I installed NFS storage, and I have a second NFS network
storage. My node has a bond (4) with two NICs and the other storage has a
bond with 4 NICs.
My oVirt engine runs as a VM on my network storage.
The bond on the node side is relatively new; before, I had
Hi,
In the engine.log appears following:
2017-11-14 12:04:33,081+03 ERROR
[org.ovirt.engine.core.bll.pm.FenceProxyLocator] (default task-184)
[32fe1ce0-2e25-4e2e-a6bf-59f39a65b2f1] Can not run fence action on host
'ovirt.prod.env', no suitable proxy host was found.
2017-11-14 12:04:36,534+03
On Tue, Nov 14, 2017 at 9:33 AM, Artem Tambovskiy <
artem.tambovs...@gmail.com> wrote:
> Trying to configure power management for a certain host and fence agent
> always fail when I'm pressing Test button.
>
> At the same time from command line on the same host all looks good:
>
> [root@ovirt
Hi,
could you please provide engine logs so we can investigate?
Thanks
Martin
On Tue, Nov 14, 2017 at 9:33 AM, Artem Tambovskiy <
artem.tambovs...@gmail.com> wrote:
> Trying to configure power management for a certain host and fence agent
> always fail when I'm pressing Test button.
>
> At
Hi David,
Thanks for the reply. I have an oVirt env which is in very bad shape, and as a
new employee I have to fix it :). I have 5 hosts in the whole env. HA for the
hosted engine is broken, so the engine can only work on host1. I can't add
another host because it shows me that it was deployed with version
On Tue, Nov 14, 2017 at 12:44 AM, Sverker Abrahamsson <
sver...@abrahamsson.com> wrote:
> Since upgrading my test lab to ovirt 4.2 I can't get ovirt-provider-ovn to
> work. From ovirt-provider-ovn.log:
>
> 2017-11-14 00:40:15,795 Request: POST : /v2.0///tokens
> 2017-11-14 00:40:15,795
Trying to configure power management for a certain host, and the fence agent
always fails when I press the Test button.
At the same time, from the command line on the same host, everything looks good:
[root@ovirt ~]# fence_ipmilan -a 172.16.22.1 -l user -p pwd -o status -v -P
Executing: /usr/bin/ipmitool -I
On Mon, Nov 13, 2017 at 3:33 PM, Rudi Ahlers wrote:
> Hi,
>
> Can someone please give me some pointers, what would be the best setup for
> performance and reliability?
>
> We have the following hardware setup:
>
> 3x Supermicro server with following features per server:
>
Usually the DB is inside the engine VM. It is Postgres. Deleting it is not a
problem; the engine installation tools should recreate it.
On 14 Nov 2017 at 8:05 AM, "Rudi Ahlers" wrote:
> Hi Luca,
>
> Where is the engine DB stored? Can I simply delete it, or would that
>
On Tue, Nov 14, 2017 at 8:19 AM, Rudi Ahlers wrote:
> Hi,
>
> I have setup an oVirt cluster and did some tests. But how do I redo
> everything, without reinstalling CentOS as well?
> Would it be as simple as uninstalling all the oVirt packages? Or do I need to
> manually delete some