Hello,
Just would like to check if it is really possible to convert an old CentOS 6 based
physical box into an oVirt 4.3 VM? I haven't been able to find any success
stories on this, and the process seems a bit complicated.
As I understand it, I need a virt-v2v conversion proxy + an image with virt-p2v
running on
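(For anyone who finds this later: a minimal sketch of the usual virt-p2v flow, assuming a USB stick at /dev/sdX and that the p2v/v2v packages are available for your distro - names may differ.)

# on any Linux box with the libguestfs p2v tools, build bootable media:
virt-p2v-make-disk -o /dev/sdX    # may need an os-version argument, see the man page
# on the conversion proxy (any host reachable over SSH from the old box):
yum install virt-v2v
# then boot the CentOS 6 box from the stick; virt-p2v ships the disks to the
# proxy over SSH and virt-v2v writes the guest out, e.g. to an oVirt export domain.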
>>> Can you please share your vdsm log?
>>> I suppose you do manage to SSH to that inactive host (correct me if I'm
>>> wrong).
>>> While getting the vdsm log, maybe try to restart the network and vdsmd
>>> services on the host.
>>>
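>>> A sketch of what that could look like, assuming standard EL7 service and
>>> log names:
>>>
>>> systemctl restart network vdsmd
>>> journalctl -u vdsmd -b              # unit log since boot
>>> less /var/log/vdsm/vdsm.log         # the log to share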
e/chap-Clusters.html#introduction-to-clusters
>
>
> *Regards,*
>
> *Shani Leviim*
>
>
> On Mon, Jun 10, 2019 at 4:42 PM Artem Tambovskiy <
> artem.tambovs...@gmail.com> wrote:
>
>> Hello,
>>
>> May I ask you for advice?
>> I'm running a smal
Hello,
May I ask you for advice?
I'm running a small oVirt cluster, and a couple of months ago I decided to do
an upgrade from oVirt 4.2.8 to 4.3, and I have been having issues since then. I
can only guess what I did wrong - probably one of the problems is that I
haven't switched the cluster from
Hi,
I have exactly the same issue after upgrading from 4.2.8 to 4.3.2. I can
reach the host from the SHE, but VDSM is constantly failing to start on the
host after the upgrade.
Thu, Mar 21, 2019, 19:48 Simone Tiraboschi :
>
>
> On Thu, Mar 21, 2019 at 3:47 PM Arif Ali wrote:
>
>> Hi all,
>>
>>
Hello,
Just started upgrading my small cluster from 4.2.8 to 4.3.2 and ended up in
a situation where one of the hosts is not working after the upgrade.
For some reason vdsmd is not starting up; I have tried to restart it
manually with no luck:
Any ideas on what could be the reason?
[root@ovirt2
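(For anyone hitting the same: a hedged checklist for a host where vdsmd won't start, using standard systemd tools and the default vdsm log location.)

systemctl status vdsmd -l
journalctl -xe -u vdsmd
tail -n 100 /var/log/vdsm/vdsm.log    # vdsm's own log usually names the real cause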
Hello,
I have a question indirectly related to oVirt - I have a VM with CentOS 6
running on my cluster, which has 6 virtual interfaces (eth0 - eth5). Now
it's time to do an upgrade to CentOS 7, and I did a VM clone to test
the upgrade process and was a bit surprised to see that now I have
Hi,
Just ran into an issue during a cluster upgrade from 4.2.4 to 4.2.6.1. I'm
running a small cluster with 2 hosts and gluster storage. Once I upgraded one
of the hosts to 4.2.6.1 something went wrong (it looks like it tried to start
the HE instance) and I can't connect to the hosted-engine any longer.
As I
Hello,
I'm upgrading my cluster from 4.2.2 to 4.2.3. The HE upgrade went well, but
I'm having some issues with the hosts upgrade: for some reason yum is complaining
about conflicts during the transaction check:
Transaction check error:
file /usr/share/cockpit/networkmanager/manifest.json from install of
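(A hedged way to inspect such a conflict; the package name below is an assumption based on the file path.)

rpm -qf /usr/share/cockpit/networkmanager/manifest.json    # who owns the file now
# if an obsolete cockpit subpackage owns it, removing it and retrying may help:
yum remove cockpit-networkmanager
yum update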
Hi,
How many hosts do you have? Check hosted-engine.conf on all hosts, including
the one you have a problem with, and look whether all host_id values are unique. It
might happen that you have several hosts with host_id=1.
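For example, on each host:

grep host_id /etc/ovirt-hosted-engine/hosted-engine.conf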
Regards,
Artem
Wed, Mar 28, 2018, 20:49 Jamie Lawrence :
Hello Krzysztof,
As I can see, both hosts have the same host_id=1, which is causing the conflict.
You need to fix this manually on the newly deployed host and restart
ovirt-ha-agent.
You may run the following command on the engine VM in order to find the correct
host_id values for your hosts.
sudo -u postgres psql
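(A sketch of the kind of query meant here, assuming the engine database is named "engine"; vds_spm_id is what hosted-engine uses as the host_id.)

sudo -u postgres psql -d engine -c "SELECT vds_name, vds_spm_id FROM vds;"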
Hello,
I'm still troubleshooting my cluster and trying to figure out which
lockspaces should be present and which shouldn't.
If the HE VM is not running, both ovirt-ha-agent and ovirt-ha-broker are down,
and the storage is disconnected by hosted-engine --disconnect-storage, should I see
something related to
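(One hedged way to check, while digging into this: sanlock itself can report the lockspaces it currently holds on a host.)

sanlock client status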
t 6:45 PM, Martin Sivak <msi...@redhat.com> wrote:
>
>> Hi Artem,
>>
>> Just a restart of the ovirt-ha-agent services should be enough.
>>
>> Best regards
>>
>> Martin Sivak
>>
>> On Mon, Feb 19, 2018 at 4:40 PM, Artem Tambovskiy
>> <art
> Martin Sivak
>
> On Mon, Feb 19, 2018 at 4:40 PM, Artem Tambovskiy
> <artem.tambovs...@gmail.com> wrote:
> > Ok, understood.
> > Once I set the correct host_id on both hosts, how do I bring the changes
> > into force? With
> > minimal downtime? Or do I need to reboot both
y opposite values.
>> So how do I get this fixed in a simple way? Update the engine DB?
>>
>
> I'd suggest manually fixing /etc/ovirt-hosted-engine/hosted-engine.conf
> on both hosts
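> A hypothetical sketch, assuming the engine reports spm_ids 1 and 2 for the
> two hosts:
>
> # on host 1:
> sed -i 's/^host_id=.*/host_id=1/' /etc/ovirt-hosted-engine/hosted-engine.conf
> # on host 2:
> sed -i 's/^host_id=.*/host_id=2/' /etc/ovirt-hosted-engine/hosted-engine.conf
> # then restart the HA services on both:
> systemctl restart ovirt-ha-agent ovirt-ha-broker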
>
>
>>
>> Regards,
>> Artem
>>
>> On Mon, Feb 19, 2018
, 2018 at 12:13 PM, Artem Tambovskiy <
> artem.tambovs...@gmail.com> wrote:
>
>> Hello,
>>
>> Last weekend my cluster suffered from a massive power outage due to a human
>> mistake.
>> I'm using an SHE setup with Gluster; I managed to bring the cluster up
Hello,
Last weekend my cluster suffered from a massive power outage due to a human
mistake.
I'm using an SHE setup with Gluster; I managed to bring the cluster up
quickly, but once again I have a problem with a duplicated host_id (
https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on the second host and
force-clean"
>
> Hope this helps!
>
> Thanks
> kasturi
>
>
> On Fri, Jan 19, 2018 at 12:07 AM, Artem Tambovskiy <
> artem.tambovs...@gmail.com> wrote:
>
>> Hi,
>>
>> Ok, I decided to remove the second host from the cluster.
>> I reinstal
e.owner-gid 36
> gluster volume set volume server.allow-insecure on
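> For completeness, the commonly recommended set for oVirt volumes (36 is the
> vdsm uid/gid; <volume> is a placeholder for your volume name):
>
> gluster volume set <volume> storage.owner-uid 36
> gluster volume set <volume> storage.owner-gid 36
> gluster volume set <volume> server.allow-insecure on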
>
>
>
> I have problems with hosted engine storage on a gluster replica 3 arbiter
> with oVirt 4.1.
> I recommend updating oVirt to 4.2. I have no problems with 4.2.
>
>
> 19.01.2018, 16:43, "Arte
I'm still troubleshooting my oVirt 4.1.8 cluster, and an idea came to my
mind that I have an issue with the storage settings for the hosted_engine storage
domain.
But in general, if I have 2 oVirt nodes running gluster + a 3rd host as an
arbiter, how should the settings look?
Let's say I have a 3
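For example, something like this is what I'd expect (a sketch only; host and brick paths are placeholders):

gluster volume create engine replica 3 arbiter 1 \
  ovirt1:/gluster/engine/brick \
  ovirt2:/gluster/engine/brick \
  arbiter:/gluster/engine/brick
gluster volume start engine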
ge=True
maintenance=False
state=AgentStopped
stopped=True
!! Cluster is in GLOBAL MAINTENANCE mode !!
Thank you in advance!
Regards,
Artem
On Wed, Jan 17, 2018 at 6:47 PM, Artem Tambovskiy <
artem.tambovs...@gmail.com> wrote:
> Hello,
>
> Any f
Jan 2018 at 17:00, user "Artem Tambovskiy" <
artem.tambovs...@gmail.com> wrote:
> Hi Martin,
>
> Thanks for the feedback.
>
> All hosts and hosted-engine running 4.1.8 release.
> The strange thing: I can see that the host ID is set to 1 on both hosts at
> Martin Sivak
>
> On Tue, Jan 16, 2018 at 1:16 PM, Derek Atkins <de...@ihtfp.com> wrote:
> > Why are both hosts reporting as ovirt 1?
> > Look at the hostname fields to see what I mean.
> >
> > -derek
> > Sent using my mobile device. Please excuse any typos.
> >
> 4) uncheck 'automatically configure host firewall'
> 5) click on 'Deploy' tab
> 6) click Hosted Engine deployment as 'Deploy'
>
> And once the host installation is done, wait till the active score of the
> host shows 3400 in the General tab, then check hosted-engine --vm-status.
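> (For reference, the score is also visible from the shell; 3400 is the
> maximum for a healthy host:)
>
> hosted-engine --vm-status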
r services and check if things are working fine?
>
> Thanks
> kasturi
>
> On Sat, Jan 13, 2018 at 12:33 AM, Artem Tambovskiy <
> artem.tambovs...@gmail.com> wrote:
>
>> Explored logs on both hosts.
>> broker.log shows no errors.
>>
>> agent.log
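>> (For anyone following along: the HA daemons log under a standard path.)
>>
>> tail -f /var/log/ovirt-hosted-engine-ha/agent.log
>> tail -f /var/log/ovirt-hosted-engine-ha/broker.log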
)
Connecting the storage
MainThread::INFO::2018-01-12
22:02:29,586::storage_server::220::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(validate_storage_server)
Validating storage server
Any suggestions on how to resolve this?
Regards,
Artem
On Fri, Jan 12, 2018 at 7:08 PM, Artem Tambovs
Trying to fix one thing, I broke another :(
I fixed the mnt_options for the hosted engine storage domain and installed the
latest security patches on my hosts and hosted engine. All VMs are up and running,
but hosted-engine --vm-status reports issues:
[root@ovirt1 ~]# hosted-engine --vm-status
--==
2018 at 1:22 PM, Artem Tambovskiy <
> artem.tambovs...@gmail.com> wrote:
>
>> Hi,
>>
>> I have deployed a small cluster with 2 ovirt hosts and GlusterFS cluster
>> some time ago. And recently during software upgrade I noticed that I made
>> some mistakes duri
Hi,
I deployed a small cluster with 2 oVirt hosts and a GlusterFS cluster
some time ago. And recently, during a software upgrade, I noticed that I made
some mistakes during the installation:
if the host which was deployed first is taken down for an upgrade
(powered off or rebooted), the engine
Hi,
AFAIK, during hosted engine deployment the installer will check the GlusterFS
replica type, and replica 3 is a mandatory requirement. Previously, I got
advice within this mailing list to look at a DRBD solution if you don't
have a third node to run a GlusterFS replica 3.
Dec 14, 2017
backups, is there a best
>> practice that should be followed?
>>
>> On Tue, Dec 12, 2017 at 8:59 AM, Artem Tambovskiy <
>> artem.tambovs...@gmail.com> wrote:
>>
>>> I did exactly the same mistake with my standalone GlusterFS cluster and
>>> now
I made exactly the same mistake with my standalone GlusterFS cluster, and now
I need to take down all Storage Domains in order to fix it.
It's probably worth adding a few words about this to the Installation Guide!
On Tue, Dec 12, 2017 at 4:52 PM, Simone Tiraboschi
wrote:
>
>
I have a question indirectly related to oVirt. I need to move one old setup
into a VM running in an oVirt cluster. The VM was based on Debian 8.9, so I
took a Debian cloud image from
https://cdimage.debian.org/cdimage/openstack/8.9.8-20171105/ uploaded it
into my cluster and attached it to a VM. All
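(In case it helps: a hedged sketch for inspecting and, if needed, converting the image before upload; the filename is hypothetical.)

qemu-img info debian-8.9.8-20171105-openstack-amd64.qcow2
# convert qcow2 to raw if your storage domain prefers raw:
qemu-img convert -f qcow2 -O raw debian-8.9.8-20171105-openstack-amd64.qcow2 debian-8.9.raw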
': (), 'guestIPs': '', 'disksUsage': []}}
On Tue, Nov 14, 2017 at 8:49 PM, Darrell Budic <bu...@onholyground.com>
wrote:
> Try restarting vdsmd from the shell, “systemctl restart vdsmd”.
>
>
> ------
> *From:* Artem Tambovskiy <artem.tambovs...@gmail.
Apparently, I lost the host which was running the hosted-engine and another 4
VMs exactly during the migration of the second host from bare-metal into the
cluster. For some reason the first host entered the "Non Responsive"
state. The interesting thing is that the hosted-engine and all other VMs are up
and
>
>
> On Tue, Nov 14, 2017 at 9:33 AM, Artem Tambovskiy <
> artem.tambovs...@gmail.com> wrote:
>
>> Trying to configure power management for a certain host and fence agent
>> always fail when I'm pressing Test button.
>>
>> At the same t
Trying to configure power management for a certain host, and the fence agent
always fails when I'm pressing the Test button.
At the same time, from the command line on the same host, all looks good:
[root@ovirt ~]# fence_ipmilan -a 172.16.22.1 -l user -p pwd -o status -v -P
Executing: /usr/bin/ipmitool -I
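(A guess worth checking: the GUI Test is typically executed from another host in the cluster acting as the fence proxy, not from the host itself, so try the very same command from the peer host:)

fence_ipmilan -a 172.16.22.1 -l user -p pwd -o status -P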
Any suggestions on what could be the reason for these strange messages
(repeating every hour) in the web GUI event log:
Nov 11, 2017 7:07:01 PM Status of host ovirt2.prod.env was set to Up.
Nov 11, 2017 7:06:54 PM Failed to update OVF disks
94b7554b-4c18-4296-b795-98ca6c0fb251,
ards,
Artem
On Thu, Nov 9, 2017 at 3:56 PM, Artem Tambovskiy <artem.tambovs...@gmail.com
> wrote:
> Hi,
>
> Just realized that I probably went the wrong way. Reinstalled
> everything from scratch and added 4 volumes (hosted_engine, data, export,
> iso). All looks good so far
Yet another attempt to get help on a hosted-engine deployment with a
glusterfs cluster.
I have already spent a day trying to bring such a setup to work, with no
luck.
The hosted engine was successfully deployed, but I can't activate the
host; the storage domain for the host is missing and I can't
; gluster part.
>
> Denis, Sahina: can you please help me here?
>
> Best regards
>
> Martin Sivak
>
> On Fri, Nov 3, 2017 at 11:29 AM, Artem Tambovskiy
> <artem.tambovs...@gmail.com> wrote:
> > Thanks for the article, Martin!
> > Any chance to configur
onverged/
>
> It uses three hosts and collocates the VMs together with Gluster storage.
>
> Best regards
>
> --
> Martin Sivak
> SLA /oVirt
>
> On Fri, Nov 3, 2017 at 8:39 AM, Artem Tambovskiy
> <artem.tambovs...@gmail.com> wrote:
> > Thanks Eduardo!
Good point! Need to focus on this first.
Thanks,
Artem
On Fri, Nov 3, 2017 at 10:50 AM, Karli Sjöberg <ka...@inparadise.se> wrote:
> On fre, 2017-11-03 at 10:39 +0300, Artem Tambovskiy wrote:
> > Thanks Eduardo!
> >
> > I think I can find a third server to bu
ive you quorum, avoid
> split-brains and have something that you can call "HA" with a straight face.
>
> Eduardo Mayoral Jimeno (emayo...@arsys.es)
> Administrador de sistemas. Departamento de Plataformas. Arsys internet.+34
> 941 620 145 ext. 5153 <+34%20941%2062%2
Looking for design advice on oVirt provisioning. I'm running a PoC lab on a
single bare-metal host (it was set up with just a Local Storage domain), and
now I'd like to rebuild the setup by making a cluster of 2 physical servers;
no external storage array is available. What are the options here?