Hi all,
I want to reproduce the log shown in the "event console" of my monitoring
system, but I can't find the right filter to use. I'm parsing engine.log
on the hosted engine VM. Is this the right log file to parse? Is there any
other way?
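For reference, the event-console entries normally reach engine.log through the AuditLogDirector handler, so grepping for that class name is a reasonable first filter. A minimal sketch against a sample line (the log format below is an assumption modeled on a typical 4.1 engine.log entry; the real file lives at /var/log/ovirt-engine/engine.log):

```shell
# Write one sample line in the shape engine.log usually uses (assumed format),
# then filter it the way you would filter the real engine.log.
printf '%s\n' '2018-01-08 15:00:01,123+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [] EVENT_ID: USER_VDC_LOGIN(30), User admin logged in.' > /tmp/engine.log.sample

# Keep only event-console style entries and strip everything before EVENT_ID:
grep 'AuditLogDirector' /tmp/engine.log.sample | sed 's/.*EVENT_ID: //'
```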
Thank you
___
User
Hi,
Is there any problem with using the "hosted_engine" data domain to store
disks of other VMs? I created a "hosted_engine" data domain that is too big,
so I want to use that spare space...
Thank you
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailm
2018-01-08 15:12 GMT+01:00 Yedidyah Bar David :
>
> Please see this page:
>
> https://www.ovirt.org/documentation/self-hosted/
> chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment/
>
> There, note that when you restore the engine ('engine-backup
> --mode=restore'),
> you should pass
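For context, a restore on a freshly installed engine VM typically looks roughly like the sketch below (flag names assumed from the engine-backup help; paths are placeholders, and the exact options to pass are what the documentation linked above covers):

```shell
# Command sketch only: restore a previously taken backup on a fresh
# engine VM, then re-run setup. Paths are placeholders.
engine-backup --mode=restore \
  --file=/root/engine-backup.tar.gz \
  --log=/root/engine-restore.log \
  --provision-db \
  --restore-permissions
engine-setup
```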
Hi,
Sorry for asking again, but the steps are not clear ...
Thank you
2018-01-07 16:39 GMT+01:00 yayo (j) :
> Hi,
>
> Sorry, but I need to migrate from one hosted-engine to another, so where
> can I restore the backup? Before or after the auto-import is triggered?
>
> * Create new
Example here:
http://lists.ovirt.org/pipermail/users/2017-June/082466.html
Best regards
--
Martin Sivak
SLA / oVirt
On Wed, Jan 3, 2018 at 12:20 PM, yayo (j) wrote:
> Hi all,
>
> We have the "hosted engine" data domain on an FC LUN that is too big for
> only the hosted engine, so we
Hi all,
We have the "hosted engine" data domain on an FC LUN that is too big for
only the hosted engine, so we want to create another, smaller FC LUN, move
the hosted-engine VM onto this new LUN, and destroy the old one ...
Is there any official workflow or how-to for this operation? Or can someone
guide
Hi,
2017-10-11 16:13 GMT+02:00 Adam Litke :
> What is the status of your Datacenter?
>
The status of the datacenter is "operational".
> Are these hosts both operational?
>
yes
> Are you experiencing other problems with your storage other than the
> inconsistent task state?
>
No, what kind of
Hi all,
oVirt 4.1 hosted engine on a 2-node cluster with FC LUN storage.
I'm trying to clear some tasks that have been pending for months using
vdsClient, but I can't do anything. Below are the steps (on node 1, the SPM):
1. Show all tasks:
# vdsClient -s 0 getAllTasksInfo
fd319af4-d160-48ce-b68
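Stale SPM tasks are usually cleared with the stopTask/clearTask verbs of vdsClient; a command sketch (the UUID below is a placeholder, substitute the id printed by getAllTasksInfo):

```shell
# Placeholder task id -- use the one reported by getAllTasksInfo.
TASK_ID="00000000-0000-0000-0000-000000000000"
vdsClient -s 0 stopTask  "$TASK_ID"   # ask vdsm to stop the task
vdsClient -s 0 clearTask "$TASK_ID"   # remove it from the task list
```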
2017-07-19 11:22 GMT+02:00 yayo (j) :
> Running "gluster volume heal engine" doesn't solve the problem...
>
> Some extra info:
>
> We have recently changed the gluster setup from 2 (fully replicated) + 1
> arbiter to a 3-node fully replicated cluster, but I don't kno
>
>
>
> Better workaround is to download newer python-apt from here [1] and
> install it manually with dpkg. The guest agent seems to work OK with it.
> There's much less chance of breaking something else, also newer
> python-apt will be picked-up automatically on upgrades.
>
> Tomas
>
>
> [1]
>
> Agreed.
>>
>> Open a bug with Zentyal. They broke the packages from Ubuntu and should
>> fix it themselves. They have to backport newer version of python-apt.
>> The one from yakkety (1.1.0~beta5) should be good enough to fix the
>> problem.
>>
>> In the bug report note that the ovirt-guest-age
>
> This is the problem!
>
> I looked at the packages for conflicts and figured the issue is in gnupg.
> The Zentyal repository contains gnupg version 2.1.15-1ubuntu6, which breaks
> python-apt <= 1.1.0~beta4.
>
>
Ok, thank you! Is there any workaround (something like package pinning?) to
fix this problem?
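One pinning-style workaround (an untested sketch; the file name and priority are illustrative) would be to pin gnupg to the stock Ubuntu build so the Zentyal version that breaks python-apt is never pulled in:

```
# /etc/apt/preferences.d/pin-gnupg  (illustrative file name)
Package: gnupg
Pin: release o=Ubuntu
Pin-Priority: 1001
```

That said, installing the newer python-apt via dpkg, as suggested elsewhere in this thread, is probably the cleaner fix.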
>
>
> python-apt per se or with ovirt-guest-agent using python-apt?
>
>
In the past, with Zentyal 5 Dev Edition, I had the same error: I added the
suggested repository, which wanted to install "python-apt" and remove
"apt-get" (because of conflicts).
>
> >
> >
> > On Fri, Aug 4
Hi all,
I have this problem: I'm trying to install the guest tools following this
guide:
https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-ubuntu/#for-ubuntu-1604
These are the sources:
deb http://it.archive.ubuntu.com/ubuntu/ xenial main restricted
deb http://it.arc
2017-07-25 11:31 GMT+02:00 Sahina Bose :
>
>> Other errors on unsynced gluster elements still remain... This is a
>> production env, so is there any chance to subscribe to RH support?
>>
>
> The unsynced entries - did you check for disconnect messages in the mount
> log as suggested by Ravi?
>
>
Hi
2017-07-25 7:42 GMT+02:00 Kasturi Narra :
> These errors are because glusternw is not assigned to the correct
> interface. Once you attach that, these errors should go away. This has
> nothing to do with the problem you are seeing.
>
Hi,
Are you talking about errors like these?
2017-07-24 15:5
>
> All these IPs are pingable and the hosts are resolvable across all 3 nodes,
>> but only the 10.10.10.0 network is the dedicated network for gluster
>> (resolved using gdnode* host names) ... Do you think that removing the
>> other entries can fix the problem? If so, sorry, but how can I remove the
>> other entries?
>>
>
>
> Regarding the UI showing incorrect information about engine and data
> volumes, can you please refresh the UI and see if the issue persists plus
> any errors in the engine.log files ?
>
> Thanks
> kasturi
>
> On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N
> w
hard: on
user.cifs: off
storage.owner-gid: 36
features.shard-block-size: 512MB
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: on
auth.allow: *
server.allow-insecure: on
2017-07-21 19:13 GMT+02:00 yayo (j) :
> 2017-07-20 14:48 GM
2017-07-20 14:48 GMT+02:00 Ravishankar N :
>
> But it does say something. All these gfids of completed heals in the log
> below are for the ones that you have given the getfattr output of. So
> what is likely happening is that there is an intermittent connection
> problem between your mount and th
2017-07-20 11:34 GMT+02:00 Ravishankar N :
>
> Could you check if the self-heal daemon on all nodes is connected to the 3
> bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using `gluster volume start
> engine force`, then launch the heal
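The check/restart/heal sequence suggested above can be sketched as follows (volume name "engine" as used in this thread; a command sketch, not verified on this cluster):

```shell
# 1. Check whether the self-heal daemon sees all three bricks:
grep -i 'connected' /var/log/glusterfs/glustershd.log
gluster volume status engine          # shows a Self-heal Daemon line per node

# 2. If the shd is not connected, force-restart it:
gluster volume start engine force

# 3. Launch the heal again and watch the pending entries:
gluster volume heal engine
gluster volume heal engine info
```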
Hi,
Thank you for the answer, and sorry for the delay:
2017-07-19 16:55 GMT+02:00 Ravishankar N :
> 1. What does glustershd.log say on all 3 nodes when you run the
> command? Does it complain about these files?
>
No, glustershd.log is clean; there are no extra log entries after the
command on all 3 nodes.
> 2
Hi all,
We have a hyperconverged oVirt cluster with hosted engine on 3 fully
replicated nodes. This cluster has 2 gluster volumes:
- data: volume for the Data (Master) Domain (for VMs)
- engine: volume for the hosted_storage Domain (for the hosted engine)
We have this problem: the "engine" gluster volum
king
> the 'Maintenance' button. This should allow you to then put the last host
> into maintenance mode.
>
> However, I think your initial deployment needs to be fixed before adding
> more hosts. Hopefully, I or someone else will be able to help you with that
> once you'
Hi all,
I have correctly deployed a hosted engine using node01 via:
hosted-engine --deploy
using FC shared storage.
Everything seems to work, but when I log in to the oVirt web interface I
can't find the hosted engine under the VM tab (nor the node01 server).
So I tried to add node02 (
sed, paths: 2 active
Can I go forward or is this totally not supported?
Thank you
2017-07-06 11:43 GMT+02:00 yayo (j) :
>
> Hi all,
>
> I'm trying to install a new cluster, oVirt 4.1 (CentOS 7), configured to
> use a SAN that exposes LUNs via SAS. When I start to deploy ovi
Hi all,
I'm trying to install a new cluster, oVirt 4.1 (CentOS 7), configured to use
a SAN that exposes LUNs via SAS. When I start to deploy oVirt and the engine
using "hosted-engine --deploy", the only options I have are:
(glusterfs, iscsi, fc, nfs3, nfs4)
There is no option for "local" storage (tha
2017-07-03 15:42 GMT+02:00 knarra :
> So, please power off your VMs while performing this.
Thank you,
Ok, no problem, cluster is not (yet) in production
Thank you again!
___
Hi,
And sorry for the delay
2017-06-30 14:09 GMT+02:00 knarra :
> To add a fully replicated node you need to reduce the replica count to 2
> and add new brick to the volume so that it becomes replica 3. Reducing
> replica count by removing a brick from replica / arbiter cannot be done
> from UI cur
2017-06-30 12:54 GMT+02:00 yayo (j) :
> The current arbiter must be removed because it is too old. So I need to
> add the new "fully replicated" node, but I want to know the steps to add
> a new "fully replicated" node and remove the arbiter node (also a
2017-06-30 11:01 GMT+02:00 knarra :
> You do not need to remove the arbiter node as you are getting the
> advantage of saving on space by having this config.
>
> Since you have a new server, you can add it as a fourth node and create
> another gluster volume (replica 3) out of this node plus the other two
Hi all,
we have a 3-node cluster with this configuration:
oVirt 4.1, 3 nodes hyperconverged with gluster. 2 nodes are "fully
replicated" and 1 node is the arbiter.
Now we have a new server to add to the cluster, so we want to add this new
server and remove the arbiter (or make this new serve
Hi,
Do you have any news on this topic? I did some other tests, but without
success... When a VM migrates to the second node, the internet connection
is lost.
2017-05-15 9:36 GMT+02:00 Sandro Bonazzola :
>
>
> On Mon, May 15, 2017 at 9:33 AM, yayo (j) wrote:
>
>>
>>
2017-05-11 17:08 GMT+02:00 Sandro Bonazzola :
> Can you be a bit more specific? Is it a Hosted Engine deployment?
> Hyperconverged? Using oVirt Node for nodes?
>
Hi, and sorry for the delay. It's a hosted engine deployment, hyperconverged
with gluster, 2 nodes + 1 arbiter ... We have used the official r
Hi all,
I have a simple 3-node oVirt 4.1 gluster setup. All works fine, but when I
create or move a VM to node 2 the connection is lost; if I move the VM back
to node 1, everything works fine again. Looking at the oVirt engine, the
network seems identical (for now I have only the ovirtmgmt network). If I
ping something from