Olaf, thank you very much for this feedback, I was just about to upgrade my
12-node 4.2.8 production cluster, and it seems you have spared me a lot of
trouble.
Though, I thought that 4.3.1 comes with Gluster 5.5, which has solved the
issues, and that the upgrade procedure works seamlessly.
Not
Forgot one more issue with oVirt: on some hypervisor nodes we also run Docker,
and it appears vdsm tries to get a hold of the interfaces Docker
creates/removes, and this spams the vdsm and engine logs with:
Get Host Statistics failed: Internal JSON-RPC error: {'reason': '[Errno 19]
Dear All,
I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While previous
upgrades from 4.1 to 4.2 etc. went rather smoothly, this one was a different
experience. After first trying a test upgrade on a 3-node setup, which went
fine, I headed to upgrade the 9-node production
On Thu, Mar 28, 2019 at 2:28 PM Krutika Dhananjay
wrote:
> Gluster 5.x does have two important performance-related fixes that are not
> part of 3.12.x -
> i. in shard-replicate interaction -
> https://bugzilla.redhat.com/show_bug.cgi?id=1635972
>
Sorry, wrong bug-id. This should be
Gluster 5.x does have two important performance-related fixes that are not
part of 3.12.x -
i. in shard-replicate interaction -
https://bugzilla.redhat.com/show_bug.cgi?id=1635972
ii. in qemu-gluster-fuse interaction -
https://bugzilla.redhat.com/show_bug.cgi?id=1635980
The two fixes do improve
Hi Krutika,
I have noticed some performance penalties (10%-15%) when using sharding in
v3.12.
What is the situation now with 5.5?
Best Regards,
Strahil Nikolov
On Mar 28, 2019 08:56, Krutika Dhananjay
wrote:
>
> Right. So Gluster stores what are called "indices" for each modified file (or
>
Right. So Gluster stores what are called "indices" for each modified file
(or shard)
under a special hidden directory of the "good" bricks at
$BRICK_PATH/.glusterfs/indices/xattrop.
When the offline brick comes back up, the file corresponding to each index
is healed, and then the index is deleted
to
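The index mechanism described above can be sketched as a toy model (illustrative Python only, not Gluster's implementation; all class and variable names are invented):

```python
# Toy model of index-based healing: while a replica is down, the "good"
# brick records each modified file's ID in an index set (standing in for
# .glusterfs/indices/xattrop); on reconnect, only indexed files are copied
# to the peer, and each index is deleted after its file is healed.

class GoodBrick:
    def __init__(self):
        self.files = {}       # file_id -> contents
        self.indices = set()  # file_ids the downed peer has missed

    def write(self, file_id, data, peer_up):
        self.files[file_id] = data
        if not peer_up:       # peer missed this write: index the file
            self.indices.add(file_id)

    def heal_peer(self, peer_files):
        # Heal only the indexed files, then drop their indices.
        for file_id in list(self.indices):
            peer_files[file_id] = self.files[file_id]
            self.indices.discard(file_id)

good = GoodBrick()
good.write("shard.1", b"a", peer_up=True)
good.write("shard.2", b"b", peer_up=False)  # peer was down for this write
peer = {"shard.1": b"a"}                    # peer's stale view
good.heal_peer(peer)                        # only shard.2 gets re-synced
```

This is also why no full-file comparison is needed: the good bricks already know exactly which files changed while the peer was away.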
Hi Krutika,
So how does the Gluster node know which shards were modified after it went
down?
Do the other Gluster nodes keep track of it?
Regards,
Indivar Nair
On Thu, Mar 28, 2019 at 9:45 AM Krutika Dhananjay
wrote:
> Each shard is a separate file of size equal to value of
>
Each shard is a separate file of size equal to the value of
"features.shard-block-size".
So when a brick/node was down, only those shards belonging to the VM that
were modified will be sync'd later when the brick's back up.
Does that answer your question?
-Krutika
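To make the arithmetic concrete, here is a small sketch (illustrative Python, not oVirt/Gluster code; 64MB is Gluster's default shard block size):

```python
# Which shards does a write touch? A file is split into fixed-size shards,
# so only the shards overlapping a modified byte range need healing later.

SHARD_BLOCK_SIZE = 64 * 1024 * 1024  # Gluster's default shard size

def shards_touched(offset, length, shard_size=SHARD_BLOCK_SIZE):
    """Return the shard indices covered by a write of `length` bytes at `offset`."""
    first = offset // shard_size
    last = (offset + length - 1) // shard_size
    return list(range(first, last + 1))

# A 4KB write at the 1GB mark of a VM image touches a single 64MB shard,
# so only that one shard file needs re-syncing when the brick comes back:
print(shards_touched(1 << 30, 4096))  # [16]
```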
On Wed, Mar 27, 2019 at 7:48 PM
On Wed, Mar 27, 2019 at 7:40 PM Indivar Nair wrote:
>
> Hi Strahil,
>
> Ok. Looks like sharding should make the resyncs faster.
>
> I searched for more info on it, but couldn't find much.
> I believe it will still have to compare each shard to determine whether there
> are any changes that need
Hi Strahil,
Ok. Looks like sharding should make the resyncs faster.
I searched for more info on it, but couldn't find much.
I believe it will still have to compare each shard to determine whether
there are any changes that need to be replicated.
Am I right?
Regards,
Indivar Nair
On Wed, Mar
By default oVirt uses 'sharding', which splits the files into logical chunks.
This greatly reduces healing time, as a VM's disk is not always completely
overwritten and only the shards that differ will be healed.
Maybe you should change the default shard size.
Best Regards,
Strahil
Hi Krutika, Leo,
Sounds promising. I will test this too, and report back tomorrow (or
maybe sooner, if corruption occurs again).
-- Sander
On 27-03-19 10:00, Krutika Dhananjay wrote:
> This is needed to prevent any inconsistencies stemming from buffered
> writes/caching file data during live
Following up on this, my test/dev cluster is now completely upgraded to ovirt
4.3.2-1 and gluster 5.5, and I've bumped the op-version on the gluster volumes.
It’s behaving normally and gluster is happy, no excessive healing or crashing
bricks.
I did encounter
I’m not quite done with my test upgrade to ovirt 4.3.x with gluster 5.5, but so
far it’s looking good. I have NOT encountered the upgrade bugs listed as
resolved in the 5.5 release notes. Strahil, I didn’t encounter the brick death
issue and don’t have a bug ID handy for it, but so far I
I just used the repo below on CentOS 7 and was able to install the latest
*gluster-ansible-roles-1.0.4-4.el7.noarch*
[sac-gluster-ansible]
name=Copr repo for gluster-ansible owned by sac
baseurl=
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-$basearch/
type=rpm-md
We do have an updated rpm gluster-ansible-roles. +Sachidananda URS
On Sun, Mar 10, 2019 at 7:00 PM Hesham Ahmed wrote:
>
> sac-gluster-ansible is there and is enabled:
>
> [sac-gluster-ansible]
> enabled=1
> name = Copr repo for gluster-ansible owned by sac
> baseurl =
>
sac-gluster-ansible is there and is enabled:
[sac-gluster-ansible]
enabled=1
name = Copr repo for gluster-ansible owned by sac
baseurl =
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-$basearch/
type = rpm-md
skip_if_unavailable = False
gpgcheck = 1
gpgkey =
Check if you have a repo called sac-gluster-ansible.
Best Regards,
Strahil Nikolov
On Mar 10, 2019 08:21, Hesham Ahmed wrote:
>
> On a new 4.3.1 oVirt Node installation, when trying to deploy HCI
> (also when trying adding a new gluster volume to existing clusters)
> using Cockpit, an error is
On Fri, Mar 1, 2019 at 12:57 PM Jayme wrote:
> These are both reported bugs
>
yes, just adding some pointers:
>
> On Fri, Mar 1, 2019 at 7:34 AM Stefano Danzi wrote:
>
>> Hello,
>>
>> I've just upgraded to version 4.3.1 and I can see this message in the
>> gluster log of all my hosts
These are both reported bugs
On Fri, Mar 1, 2019 at 7:34 AM Stefano Danzi wrote:
> Hello,
>
> I've just upgraded to version 4.3.1 and I can see this message in the
> gluster log of all my hosts (running oVirt Node):
>
> The message "E [MSGID: 101191]
> [event-epoll.c:671:event_dispatch_epoll_worker]
I got Gluster working, but I can't finish setting up the engine: it fails,
saying it can't query DNS for the engine, and then I can't remove the storage
domain without redoing everything all over again.
On Tue, Feb 26, 2019 at 2:27 AM Parth Dhanjal wrote:
> Hey Matthew!
>
> Can you please provide me
Hey Matthew!
Can you please provide me with the following to help you debug the issue
that you are facing?
1. oVirt and gdeploy version
2. /var/log/messages file
3. /root/.gdeploy file
On Mon, Feb 25, 2019 at 1:23 PM Parth Dhanjal wrote:
> Hey Matthew!
>
> Can you please provide which oVirt
Thank you Krutika,
Does it mean that by turning that setting off I risk data corruption?
It seems to have a pretty big impact on VM performance...
On Mon, Feb 25, 2019, 12:40 Krutika Dhananjay wrote:
> Gluster's write-behind translator by default buffers writes for flushing
>
Gluster's write-behind translator by default buffers writes for flushing to
disk later, *even* when the file is opened with O_DIRECT flag. Not honoring
O_DIRECT could mean a reader from another client could be READing stale
data from bricks because some WRITEs may not yet be flushed to disk.
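The behavior above is controlled per volume by Gluster volume options; a minimal sketch, assuming a volume named "myvol" (the volume name is a placeholder, and the option names should be verified against your Gluster version before applying):

```shell
# Sketch only: "myvol" is a placeholder volume name.
# Make the write-behind translator honor O_DIRECT opens:
gluster volume set myvol performance.strict-o-direct on
# Keep O_DIRECT intact for remote I/O instead of dropping it:
gluster volume set myvol network.remote-dio off
```

The trade-off discussed in this thread applies: honoring O_DIRECT avoids stale reads from other clients, at some cost in VM write performance.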
Hey Matthew!
Can you please provide which oVirt and gdeploy versions you have installed?
Regards
Parth Dhanjal
On Mon, Feb 25, 2019 at 12:56 PM Sahina Bose wrote:
> +Gobinda Das +Dhanjal Parth can you please check?
>
> On Fri, Feb 22, 2019 at 11:52 PM Matthew Roth wrote:
> >
> > I have 3
+Gobinda Das +Dhanjal Parth can you please check?
On Fri, Feb 22, 2019 at 11:52 PM Matthew Roth wrote:
>
> I have 3 servers, Node 1 is 3tb /dev/sda, Node 2, 3tb /dev/sdb, node3 3tb
> /dev/sdb
>
> I start the process for gluster deployment. I change node 1 to sda and all
> the other ones to
> On Feb 7, 2019, at 11:55 AM, supo...@logicworks.pt wrote:
>
> Hi,
>
> What Glusterfs version should I use with oVirt 4.3.0 ?
Gluster 5.2 is the release used by oVirt 4.3
Simon
>
> Thanks
>
> --
> Jose Ferradeira
> http://www.logicworks.pt
>
On Thu, Feb 7, 2019 at 11:57 AM wrote:
> Hi,
>
> What Glusterfs version should I use with oVirt 4.3.0 ?
>
4.3.0 is using Gluster 5
>
> Thanks
>
> --
> --
> Jose Ferradeira
> http://www.logicworks.pt
Hi Marco,
It looks like I'm suffering from the same issue; see:
https://lists.gluster.org/pipermail/gluster-users/2019-January/035602.html
I've included a simple GitHub gist there, which you can run on the machines
with the stale shards.
However, I haven't tested the full purge; it works well
Hi,
Is it possible to remove a gluster folder and files (corresponding to a disk)
from the command line?
Thanks
José
From: supo...@logicworks.pt
To: "users"
Sent: Monday, December 17, 2018 11:46:54 PM
Subject: [ovirt-users] Gluster Disk Full
Hi,
I have a gluster volume with disk
Hi,
is there a way to recover files from "Stale file handle" errors?
Here some of the tests we have done:
- compared the extended attributes of all of the three replicas of the
involved shard. Found identical attributes.
- compared SHA512 message digest of all of the three replicas of the
I would recommend just putting a gluster Arbiter on the 3rd node, then you can
use normal ovirt tools more easily.
If you really want to do this, I wouldn't bother with ctdb. I used to do it,
but switched to a simpler DNS trick: just put entries in your hosts file with
the storage IP of both nodes,
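The hosts-file variant of that trick might look like this; the IPs and hostnames below are invented for illustration:

```shell
# Illustration only: IPs and hostnames are made up. Append the same
# entries to /etc/hosts on every node so the storage hostnames resolve
# locally, without ctdb or an external DNS round-robin.
cat >> /etc/hosts <<'EOF'
10.10.10.1  node1-storage
10.10.10.2  node2-storage
EOF
```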
On Wed, Oct 17, 2018 at 7:15 PM wrote:
> Thank you for this information.
> I guess I should at least wait for that bug to be resolved before
> deploying in production. Do you have the bugzilla reference so I could
> track it?
>
https://bugzilla.redhat.com/show_bug.cgi?id=1600156
Thank you for this information.
I guess I should at least wait for that bug to be resolved before deploying in
production. Do you have the bugzilla reference so I could track it?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to
On Wed, Oct 17, 2018 at 6:42 PM wrote:
> Hi,
>
> Anyone with experience with vdo on Hyperconverged with ovirt 4.2.7?
> Should I force thin provisioning for lv in gdeploy's conf in order to have
> working gluster snapshots?
>
> I am not sure about the status of dedup in ovirt 4.2/hyperconverged.
Hi,
Anyone with experience with vdo on Hyperconverged with ovirt 4.2.7?
Should I force thin provisioning for lv in gdeploy's conf in order to have
working gluster snapshots?
I am not sure about the status of dedup in ovirt 4.2/hyperconverged. Supported,
advised or not?
Bug filed
https://bugzilla.redhat.com/show_bug.cgi?id=1637302
On Mon, 8 Oct 2018 at 11:21, Sahina Bose wrote:
> Thanks for reporting this issue. Can you please log a bug report on this?
>
> On Mon, Oct 8, 2018 at 3:20 PM Kaustav Majumder
> wrote:
>
>> Hi,
>> You can find the related logs in
Thanks for reporting this issue. Can you please log a bug report on this?
On Mon, Oct 8, 2018 at 3:20 PM Kaustav Majumder wrote:
> Hi,
> You can find the related logs in supervdsm.log
>
> MainProcess|jsonrpc/4::DEBUG::2018-10-05
> 06:05:18,038::commands::65::root::(execCmd) /usr/bin/taskset
Hi,
You can find the related logs in supervdsm.log
MainProcess|jsonrpc/4::DEBUG::2018-10-05
06:05:18,038::commands::65::root::(execCmd) /usr/bin/taskset --cpu-list 0-3
/usr/sbin/gluster --mode=script volume heal gv0 info --xml (cwd None)
MainProcess|jsonrpc/4::ERROR::2018-10-05
This error was raised on the vdsm side here [1]. I was unable to find
'getiterator' in the vdsm code base.
Please provide gluster related logs.
This error means that the 'bool' object had no attribute 'getiterator' and
the call failed with a runtime issue.
Thanks,
Piotr
[1]
Hi,
I don't see any errors in the vdsm logs you have sent. Can you forward
engine.log as well?
On Fri, Oct 5, 2018 at 11:56 AM Sahina Bose wrote:
> Can you provide the vdsm.log and supervdsm.log with the relevant log.
> Adding Kaustav to look into this
>
> On Fri, Oct 5, 2018 at 11:00 AM Maton,
Can you provide the vdsm.log and supervdsm.log with the relevant log.
Adding Kaustav to look into this
On Fri, Oct 5, 2018 at 11:00 AM Maton, Brett
wrote:
>
> I'm seeing the following errors appear in the event log every 10 minutes
> for each participating host in the gluster cluster
>
>
Hi, there was a memory leak in the gluster client that is fixed in
release 3.12.13
(https://github.com/gluster/glusterdocs/blob/master/docs/release-notes/3.12.13.md).
What version of gluster are you using?
Paolo
On 11/09/2018 16:51, Endre Karlson wrote:
> Hi, we are seeing some issues
I just use engine and data in mine
On Wed, Sep 12, 2018, 1:33 AM femi adegoke wrote:
> For the engine, you will need at least 58 GB. I always use 62 just to be
> on the safe side. If you use 50, your install will fail.
>
> You don't need an ISO domain. ISO files can be stored in "data" or
>
For the engine, you will need at least 58 GB. I always use 62 just to be on the
safe side. If you use 50, your install will fail.
You don't need an ISO domain. ISO files can be stored in "data" or "vmstore".
Each vm you create should have 2 disks, 1 for the o/s & 1 for the data.
The o/s disk
You don't really need both a data and a vmstore domain. Vmstore, I believe,
is meant to be the new ISO domain, but even it is not needed, as all data
domains act the same. You can use separate data and vmstore domains because
it will give you greater flexibility in terms of backing up the volumes, so
you can choose
Sorry, please ignore, incorrect mailing list (doh!)
--
Sam McLeod (protoporpoise on IRC)
https://twitter.com/s_mcleod
https://smcleod.net
Words are my own opinions and do not necessarily represent those of my
employer or partners.
On Mon, 3 Sep 2018, at 12:30 PM, Sam McLeod wrote:
> We've got
Hi,
The problem is solved. I found that the problem was with Ansible: it
couldn't ssh (SSH Error) to one of the nodes. With that fixed, it installed
oVirt successfully.
Thank you for your support
On Tue, Jul 17, 2018 at 2:05 PM, Gobinda Das wrote:
> Hi Sakhi,
> Can you please provide
Hi Sakhi,
Can you please provide the engine log and ovirt-host-deploy log? You
mentioned that you had attached a log, but unfortunately I can't find the
attachment.
On Tue, Jul 17, 2018 at 3:12 PM, Sakhi Hadebe wrote:
> Hi,
>
> Why does the gluster deployment hang on enabling or disabling the chronyd
>
Ok,
So removing one downed node cleared all the non-syncing issues.
In the meantime, when that one node was coming back, it seems to have
corrupted the hosted-engine VM.
Remote-Viewer nodeip:5900, the console shows:
Probing EDD (edd=off to disable)... ok
Doesn't matter which of the three
Yes Greg, I thought the same thing...like there is no way the folks at RH would
make a page & expect us to leave it blank!!
Hm, that is confusing. There should be some wording there to clarify,
perhaps via info icons / tooltips. cc'ing our UXD lead Liz.
Thanks for raising it.
Greg
On Thu, Jun 14, 2018 at 2:56 AM, Karli Sjöberg wrote:
> On Wed, 2018-06-13 at 23:47 -0700, femi adegoke wrote:
> > Forgot to attach
On Wed, 2018-06-13 at 23:47 -0700, femi adegoke wrote:
> Forgot to attach picture.
>
> On 2018-06-13 23:43, femi adegoke wrote:
> > In Step 2 of the HE deployment what should be filled in here?
Nothing, if you don't have any special packages that you'd like to add.
/K
> > Repositories: ??
> >
Forgot to attach picture.
On 2018-06-13 23:43, femi adegoke wrote:
In Step 2 of the HE deployment what should be filled in here?
Repositories: ??
Packages: ??
Is storage working as it should? Does the gluster mount point respond as
it should? Can you write files to it? Do the physical drives say that
they are OK? Can you write to the physical drives (you shouldn't bypass the
gluster mount point, but you need to test the drives)?
For me this sounds
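A few concrete checks along those lines; the mount path and device below are placeholders, not taken from this thread:

```shell
# Placeholders throughout: adjust mount point and device to your setup.
MOUNT=/rhev/data-center/mnt/glusterSD/example:_data

# Is the gluster mount alive and writable?
df -h "$MOUNT"
touch "$MOUNT/.healthcheck" && rm "$MOUNT/.healthcheck"

# Do the underlying drives respond? (read-only checks)
smartctl -H /dev/sdb
dd if=/dev/sdb of=/dev/null bs=1M count=100 iflag=direct
```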
At the moment, it is responding like I would expect. I do know I have one
failed drive on one brick (hardware failure, OS removed drive completely;
the underlying /dev/sdb is gone). I have a new disk on order (overnight),
but that is also one brick of one volume that is replica 3, so I would
On Wed, May 30, 2018 at 10:42 AM, Jim Kusznir wrote:
> hosted-engine --deploy failed (would not come up on my existing gluster
> storage). However, I realized no changes were written to my existing
> storage. So, I went back to trying to get my old engine running.
>
> hosted-engine --vm-status
Dear Jim,
Thank you for your help, now it's working again!!! :)
Have a nice day!
Regards,
Tibor
- On 29 May 2018, at 23:57, Jim Kusznir wrote:
> I had the same problem when I upgraded to 4.2. I found that if I went to the
> brick in the UI and selected it, there was a "start" button
hosted-engine --deploy failed (would not come up on my existing gluster
storage). However, I realized no changes were written to my existing
storage. So, I went back to trying to get my old engine running.
hosted-engine --vm-status is now taking a very long time (5+ minutes) to
return, and it
Well, things went from bad to very, very bad
It appears that during one of the 2 minute lockups, the fencing agents
decided that another node in the cluster was down. As a result, 2 of the 3
nodes were simultaneously reset with fencing agent reboot. After the nodes
came back up, the engine
Adding Ravi to look into the heal issue.
As for the fsync hang and subsequent IO errors, it seems a lot like
https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from
qemu had pointed out that this would be fixed by the following commit:
commit
I also finally found the following in my system log on one server:
[10679.524491] INFO: task glusterclogro:14933 blocked for more than 120
seconds.
[10679.525826] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
this message.
[10679.527144] glusterclogro D 97209832bf40 0
I think this is the profile information for one of the volumes that lives
on the SSDs and is fully operational with no down/problem disks:
[root@ovirt2 yum.repos.d]# gluster volume profile data info
Brick: ovirt2.nwfiber.com:/gluster/brick2/data
--
I had the same problem when I upgraded to 4.2. I found that if I went to
the brick in the UI and selected it, there was a "start" button in the
upper-right of the GUI. Clicking that resolved this problem a few minutes
later.
I had to repeat for each volume that showed a brick down for which
Thank you for your response.
I have 4 gluster volumes. 3 are replica 2 + arbitrator. replica bricks
are on ovirt1 and ovirt2, arbitrator on ovirt3. The 4th volume is replica
3, with a brick on all three ovirt machines.
The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD (same
Due to the cluster spiraling downward and increasing customer complaints, I
went ahead and finished the upgrade of the nodes to ovirt 4.2 and gluster
3.12. It didn't seem to help at all.
I DO have one brick down on ONE of my 4 gluster
filesystems/exports/whatever. The other 3 are fully
I would check disks status and accessibility of mount points where your
gluster volumes reside.
On Tue, May 29, 2018, 22:28 Jim Kusznir wrote:
> On one ovirt server, I'm now seeing these messages:
> [56474.239725] blk_update_request: 63 callbacks suppressed
> [56474.239732] blk_update_request:
On one ovirt server, I'm now seeing these messages:
[56474.239725] blk_update_request: 63 callbacks suppressed
[56474.239732] blk_update_request: I/O error, dev dm-2, sector 0
[56474.240602] blk_update_request: I/O error, dev dm-2, sector 3905945472
[56474.241346] blk_update_request: I/O error,
I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
May 29 11:54:41 ovirt3 ovs-vsctl:
ovs|1|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database
connection failed (No such file or directory)
May 29 11:54:51 ovirt3 ovs-vsctl:
Do you see errors reported in the mount logs for the volume? If so, could
you attach the logs?
Any issues with your underlying disks. Can you also attach output of volume
profiling?
On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir wrote:
> Ok, things have gotten MUCH worse this morning. I'm
Ok, things have gotten MUCH worse this morning. I'm getting random errors
from VMs, right now, about a third of my VMs have been paused due to
storage issues, and most of the remaining VMs are not performing well.
At this point, I am in full EMERGENCY mode, as my production services are
now
[Adding gluster-users to look at the heal issue]
On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir wrote:
> Hello:
>
> I've been having some cluster and gluster performance issues lately. I
> also found that my cluster was out of date, and was trying to apply updates
> (hoping to fix some of
Hi,
Ok I will try it.
In this case, is it possible to remove and re-add a host that is a member of
the HA gluster? This is another task, but I need to separate my gluster
network from my ovirtmgmt network.
What is the recommended way to do this?
It is not important now, but I need to do in
On Mon, May 28, 2018 at 4:47 PM, Demeter Tibor wrote:
> Dear Sahina,
>
> Yes, exactly. I can check that check box, but I don't know how safe that
> is. Is it safe?
>
It is safe - if you can ensure that only one host is put into maintenance
at a time.
>
> I want to upgrade
Dear Sahina,
Yes, exactly. I can check that check box, but I don't know how safe that is.
Is it safe?
I want to upgrade all of my hosts. Once that is done, will the monitoring
work perfectly?
Thanks.
R.
Tibor
- On 28 May 2018, at 10:09, Sahina Bose wrote:
> On
On Mon, May 28, 2018 at 1:06 PM, Demeter Tibor wrote:
> Hi,
>
> Somebody could answer to my question please?
> It is very important for me; I have not been able to finish my upgrade
> process (from 4.1 to 4.2) since 9th May!
>
Can you explain how the upgrade process is blocked due to
Hi,
Could somebody answer my question, please?
It is very important for me; I have not been able to finish my upgrade
process (from 4.1 to 4.2) since 9th May!
Meanwhile - I don't know why - one of my two gluster volumes seems UP (green)
on the GUI. So, now only one is down.
I need help. What can I
Hi,
I've updated again to the latest version, but there are no changes. All of
the bricks on my first node are down in the GUI (in the console they are OK).
An interesting thing: the "Self-Heal info" column shows "OK" for all hosts
and all bricks, but the "Space used" column is zero for all hosts/bricks.
Can I
Hello!
On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor wrote:
>
> Is there any change with this bug?
>
> I still haven't finished the upgrade process that I started on 9th May :(
>
> Please help me if you can.
>
>
Looks like all required patches are already merged, so
Dear Sahina,
Is there any change with this bug?
I still haven't finished the upgrade process that I started on 9th May :(
Please help me if you can.
Thanks
Tibor
- On 18 May 2018, at 9:29, Demeter Tibor wrote:
> Hi,
> Do I have to update the engine again?
>
Hi,
Do I have to update the engine again?
Thanks,
R
Tibor
- On 18 May 2018, at 6:47, Sahina Bose wrote:
> Thanks for reporting this. https://gerrit.ovirt.org/91375 fixes this. I've
> re-opened bug [
>
Thanks for reporting this. https://gerrit.ovirt.org/91375 fixes this. I've
re-opened bug https://bugzilla.redhat.com/show_bug.cgi?id=1574508
On Thu, May 17, 2018 at 10:12 PM, Demeter Tibor wrote:
> Hi,
>
> 4.2.4-0.0.master.20180515183442.git00e1340.el7.centos
>
> Firstly, I
Hi,
4.2.4-0.0.master.20180515183442.git00e1340.el7.centos
Firstly, I did a yum update "ovirt-*-setup*";
secondly, I ran engine-setup to upgrade.
I didn't remove the old repos, just installed the nightly repo.
Thank you again,
Regards,
Tibor
- On 17 May 2018, at 15:02, Sahina
It doesn't look like the patch was applied. Still see the same error in
engine.log
"Error while refreshing brick statuses for volume 'volume1' of cluster
'C6220': null"
Did you use engine-setup to upgrade? What's the version of ovirt-engine
currently installed?
On Thu, May 17, 2018 at 5:10 PM,
[+users]
Can you provide the engine.log to see why the monitoring is not working
here? Thanks!
On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor wrote:
> Hi,
>
> Meanwhile, I did the upgrade engine, but the gluster state is same on my
> first node.
> I've attached some
On Tue, May 15, 2018 at 1:28 PM, Demeter Tibor wrote:
> Hi,
>
> Could you explain how I can use this patch?
>
You can use the 4.2 nightly to test it out -
http://resources.ovirt.org/pub/yum-repo/ovirt-release42-snapshot.rpm
> R,
> Tibor
>
>
> - On 14 May 2018, at 11:18,
Hi,
Could you explain how I can use this patch?
R,
Tibor
- On 14 May 2018, at 11:18, Demeter Tibor wrote:
> Hi,
> Sorry for my question, but can you tell me please how I can use this patch?
> Thanks,
> Regards,
> Tibor
> - On 14 May 2018, at 10:47, Sahina Bose
Hi,
Sorry for my question, but can you tell me please how I can use this patch?
Thanks,
Regards,
Tibor
- On 14 May 2018, at 10:47, Sahina Bose wrote:
> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor <tdeme...@itsmart.hu> wrote:
On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor wrote:
> Hi,
>
> Could someone help me please ? I can't finish my upgrade process.
>
https://gerrit.ovirt.org/91164 should fix the error you're facing.
Can you elaborate why this is affecting the upgrade process?
> Thanks
>
Meanwhile I just changed my gluster network to 10.104.0.0/24, but nothing
happened.
Regards,
Tibor
- On 14 May 2018, at 9:49, Demeter Tibor wrote:
> Hi,
> Yes, I have a gluster network, but it's "funny" because that is the
> 10.105.0.x/24. :( Also, the
Hi,
Yes, I have a gluster network, but it's "funny" because that is the
10.105.0.x/24. :( Also, n4.itsmart.cloud means 10.104.0.4.
The 10.104.0.x/24 is my ovirtmgmt network.
However, 10.104.0.x is accessible from all hosts.
What should I do?
Thanks,
R
Tibor
- 2018.
The two key errors I'd investigate are these...
2018-05-10 03:24:21,048+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:
> /gluster/brick/brick1' of volume
Hi,
Could someone help me please? I can't finish my upgrade process.
Thanks
R
Tibor
- On 10 May 2018, at 12:51, Demeter Tibor wrote:
> Hi,
> I've attached the vdsm and supervdsm logs. But I don't have engine.log
> here, because that is on the hosted-engine VM. Should
On Wed, May 9, 2018 at 12:56 PM, wrote:
> Hi, have a quick question regarding ovirt UI and provisioning of gluster
> volumes.
> I've found an old thread -
> https://lists.ovirt.org/pipermail/users/2015-February/064602.html -
> where it's said that creating dispersed volumes
There's a bug here. Can you log one, attaching this engine.log and also the
vdsm.log & supervdsm.log from n3.itsmart.cloud?
On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor wrote:
> Hi,
>
> I found this:
>
>
> 2018-05-10 03:24:19,096+02 INFO
Hi,
I found this:
2018-05-10 03:24:19,096+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
(DefaultQuartzScheduler7) [43f4eaec] FINISH,
GetGlusterVolumeAdvancedDetailsVDSCommand, return:
This doesn't affect the monitoring of state.
Any errors in vdsm.log?
Or errors in engine.log of the form "Error while refreshing brick statuses
for volume"
On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor wrote:
> Hi,
>
> Thank you for your fast reply :)
>
>
> 2018-05-10
Hi,
Thank you for your fast reply :)
2018-05-10 11:01:51,574+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler6) [7f01fc2d] START,
GlusterServersListVDSCommand(HostName = n2.itsmart.cloud,
Could you check the engine.log if there are errors related to getting
GlusterVolumeAdvancedDetails ?
On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor wrote:
> Dear Ovirt Users,
> I've followed the self-hosted-engine upgrade documentation and upgraded
> my 4.1 system to