On Wed, May 31, 2017 at 4:16 AM, wrote:
> Hi,
>
> I found the cause of this problem. I had to turn off sharding.
>
Did you have sharding enabled but no sharded VM images, or were
shards missing on some bricks?
On Fri, Jun 30, 2017 at 10:34 AM, cmc wrote:
> Hi Denis,
>
> Yes, I did check that and it said it was out of global maintenance
> ('False' I think it said).
>
>
Did you check that the storage the hosted-engine VM attaches to is mounted
and in a healthy state, and that the broker and agent services
ed up just doing
so manually and after that I could bring up my VM's.
If this was caused by some error, would I likely find it in the engine
logs on the engine VM, or in one of the vdsm logs on a host?
I'm running oVirt Engine Version: 3.6.1.3-1.el7.centos
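Both logs are worth checking. On a default install the engine log lives on the engine VM and the vdsm log on each host; below is a minimal sketch of a filter for pulling error lines out of either log stream. The helper name and the sample input are made up for illustration:

```shell
#!/bin/sh
# Typical locations on a default oVirt install:
#   engine VM : /var/log/ovirt-engine/engine.log
#   each host : /var/log/vdsm/vdsm.log

# Tiny filter that keeps only lines containing ERROR:
errors_only() {
    grep 'ERROR'
}

# Example with inline sample text (real log formats differ):
printf 'INFO starting\nERROR storage domain missing\nINFO done\n' | errors_only
```

On a live system the same filter can be run directly against a log, e.g. `errors_only < /var/log/vdsm/vdsm.log`.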
On Sat, Jun 4, 2016 at 11:02 AM, Nir Soffer wrote:
> On Sat, Jun 4, 2016 at 5:17 PM, David Gossage
> wrote:
> > This morning I updated my gluster version to 3.7.11 and during this I
> > shutdown ovirt completely and all VM's. On bringing them back up ovirt
> >
I am looking to update my gluster from 3.7.11->3.7.12
There were some bugs in 3.7.12 related to libgfapi, but if I recall
correctly, with oVirt 3.6 and CentOS 7 traffic should still be using the
FUSE mount, which would avoid any issues they are having with libgfapi,
correct?
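One way to confirm traffic is going through the FUSE client rather than libgfapi is that the storage domain appears as a `fuse.glusterfs` entry in the mount table. A minimal sketch; the helper name and the sample mount line are made up:

```shell
#!/bin/sh
# Succeeds when a /proc/mounts line describes a GlusterFS FUSE mount.
is_fuse_gluster() {
    case "$1" in
        *fuse.glusterfs*) return 0 ;;
        *) return 1 ;;
    esac
}

# Sample line as it might appear on an oVirt host (names are illustrative):
line="gfs1:/data /rhev/data-center/mnt/glusterSD/gfs1:_data fuse.glusterfs rw,relatime 0 0"
is_fuse_gluster "$line" && echo "fuse mount"
```

On a live host, `grep fuse.glusterfs /proc/mounts` performs the same check directly.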
[2016-07-09 15:27:59.613781] W [fuse-bridge.c:2227:fuse_readv_cbk]
0-glusterfs-fuse: 168: READ => -1 gfid=deb61291-5176-4b81-8315-3f1cf8e3534d
fd=0x7f5224002f68 (Operation not permitted)
I'm creating a test box I can mess with more thoroughly, so I can submit
something to bugzilla. Since my errors all popped up while I was trying to
get oVirt and gluster functional again, rather than while thoroughly
gathering logs and testing, my data is kind of sketchy.
What back end storage do you run gluster on? xfs/zfs/ext4 etc?
*David Gossage*
*Carousel Checks Inc. | System Administrator*
*Office* 708.613.2284
On Thu, Jul 21, 2016 at 8:18 AM, Scott wrote:
> I get similar problems with oVirt 4.0.1 and hosted engine. After
> upgrading all my ho
d. I'm wondering now if the issue was ZFS settings.
Hopefully I should have a test machine up soon that I can play around with more.
Scott
>
> On Thu, Jul 21, 2016 at 11:36 AM David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> What back end storage do you run gluster o
On Thu, Jul 21, 2016 at 1:24 PM, Karli Sjöberg wrote:
>
> Den 21 jul 2016 7:54 em skrev David Gossage :
> >
> > On Thu, Jul 21, 2016 at 11:47 AM, Scott wrote:
> >>
> >> Hi David,
> >>
> >> My backend storage is ZFS.
> >>
> >&g
detach the storage that's on xfs and attach
the one that's on zfs with sharding enabled. My test is 3 bricks on the same
local machine, with 3 different volumes, but I think I'm running into a
sanlock issue or something, as it won't mount more than one volume that was
created locally.
-Krutika
> -Krutika
>
> On Mon, Jul 25, 2016 at 4:57 PM, Samuli Heinonen
> wrote:
>
>> Hi,
>>
>> > On 25 Jul 2016, at 12:34, David Gossage
>> wrote:
>> >
>> > On Mon, Jul 25, 2016 at 1:01 AM, Krutika Dhananjay
>> wrote:
>>
On Mon, Jul 25, 2016 at 1:00 PM, David Gossage
wrote:
>
> On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay
> wrote:
>
>> OK, could you try the following:
>>
>> i. Set network.remote-dio to off
>> # gluster volume set network.remote-dio off
>>
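For reference, gluster volume options like the one quoted above are set and read back per volume. `VOLNAME` below is a placeholder, and which options to change should follow the advice in the thread rather than this sketch:

```shell
# Placeholder volume name; substitute the actual data volume.
VOLNAME=datastore1

# Apply the suggested option:
gluster volume set $VOLNAME network.remote-dio off

# Read back the effective value to confirm it took:
gluster volume get $VOLNAME network.remote-dio
```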
On Mon, Jul 25, 2016 at 1:07 PM, David Gossage
wrote:
>
> On Mon, Jul 25, 2016 at 1:00 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>>
>> On Mon, Jul 25, 2016 at 9:58 AM, Krutika Dhananjay
>> wrote:
>>
>>> OK, could you try the
>
> > My test install of ovirt 3.6.7 and gluster 3.7.13 with 3 bricks on a
> > local disk right now isn't allowing me to add the gluster storage at all.
> >
> > Keep getting some type of UI error
>
> Yes that is definitely a UI error. To get a better stack trace can you
> in
On Mon, Jul 25, 2016 at 1:39 PM, Alexander Wels wrote:
> On Monday, July 25, 2016 01:37:47 PM David Gossage wrote:
> > > > My test install of ovirt 3.6.7 and gluster 3.7.13 with 3 bricks on a
> > > > local disk right n
On Mon, Jul 25, 2016 at 3:48 PM, Alexander Wels wrote:
> On Monday, July 25, 2016 01:49:32 PM David Gossage wrote:
> > On Mon, Jul 25, 2016 at 1:39 PM, Alexander Wels
> wrote:
> > > On Monday, July 25, 2016 01:37:47 PM David Gossage wrote:
> > > > > > My
ver setup and some the week after the upgrade, as I was preparing to move
disks off and back onto storage to get them sharded, and felt it would be
easier to just recreate some disks that had no data yet rather than move
them off and on later.
>
> -Krutika
>
> On Mon, Jul 25, 2016 at 11:30 PM, Davi
On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi
wrote:
> Hi,
>
> Issue: Cannot find master domain
> Changes applied before issue started to happen: replaced
> 172.16.0.12:/data/brick1/brick1
> with 172.16.0.12:/data/brick3/brick3, did minor package upgrades for vdsm
> and glusterfs
>
> vdsm log: h
On Thu, Jul 28, 2016 at 9:28 AM, Siavash Safi
wrote:
>
>
> On Thu, Jul 28, 2016 at 6:29 PM David Gossage
> wrote:
>
>> On Thu, Jul 28, 2016 at 8:52 AM, Siavash Safi
>> wrote:
>>
>>> Hi,
>>>
>>> Issue: Cannot find master domain
How about on the bricks, is anything out of place?
Is gluster still using the same options as before? Could it have reset the
user and group to something other than 36?
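On oVirt hosts vdsm:kvm maps to uid/gid 36 by default, so a quick numeric ownership check on the brick directories can be sketched like this (the helper name and brick path are illustrative):

```shell
#!/bin/sh
# Print a path's numeric owner and group, e.g. "36:36" for vdsm:kvm.
owner_of() {
    stat -c '%u:%g' "$1"
}

# Example against a brick directory (path is illustrative):
#   owner_of /data/brick1/brick1    # expect 36:36 on a healthy oVirt brick
```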
>
> On Thu, Jul 28, 2016 at 7:03 PM David Gossage
> wrote:
>
>> On Thu, Jul 28, 2016 at 9:28 AM, Siavash Safi
>> wrote:
>
On Thu, Jul 28, 2016 at 10:00 AM, Siavash Safi
wrote:
>
>
> On Thu, Jul 28, 2016 at 7:19 PM David Gossage
> wrote:
>
>> On Thu, Jul 28, 2016 at 9:38 AM, Siavash Safi
>> wrote:
>>
>>> file system: xfs
>>> features.shard: off
>>>
>&
>
> On Thu, Jul 28, 2016 at 7:40 PM David Gossage
> wrote:
>
>> On Thu, Jul 28, 2016 at 10:00 AM, Siavash Safi
>> wrote:
>>
>>>
>>>
>>> On Thu, Jul 28, 2016 at 7:19 PM David Gossage <
>>> dgoss...@carouselchecks.com>
bout.
>
> On Thu, Jul 28, 2016 at 9:06 PM Sahina Bose wrote:
>
>>
>>
>> ----- Original Message -----
>> > From: "Siavash Safi"
>> > To: "Sahina Bose"
>> > Cc: "David Gossage" , "users" <
>> users@ovirt.org>
a few days and make sure no locking or freezing
occurs, but it looks hopeful so far.
On Tue, Jul 26, 2016 at 8:15 AM, David Gossage
wrote:
> On Tue, Jul 26, 2016 at 4:37 AM, Krutika Dhananjay
> wrote:
>
>>
not been tested with oVirt, and such versions will break
something. Same with kernels, libraries, etc.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org
sue. Neither of those two, I think, has ever been suggested as
good practice, at least not for VM storage.
> Hopefully we can see Gluster 3.7.14 moved out of testing repo soon.
>
> Scott
>
> On Tue, Aug 2, 2016 at 9:05 AM, David Gossage wrote:
>
>> So far gluster 3.7.
On Sat, Aug 13, 2016 at 11:00 AM, David Gossage wrote:
> On Sat, Aug 13, 2016 at 8:19 AM, Scott wrote:
>
>> Had a chance to upgrade my cluster to Gluster 3.7.14 and can confirm it
>> works for me too where 3.7.12/13 did not.
>>
>> I did find that you should NOT
> On Fri, Aug 12, 2016 at 9:01 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> Is it considered safe from an ovirt stability standpoint to
On Fri, Apr 15, 2016 at 8:00 AM, Luiz Claudio Prazeres Goncalves <
luiz...@gmail.com> wrote:
> I'm not planning to move to oVirt 4 until it gets stable, so it would be
> great to backport this to 3.6 or, ideally, have it developed in the next
> release of the 3.6 branch. Considering the urgency (it's a single poi
:
INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting
storage server
Aug 25 15:38:47 ccovirt3 ovirt-ha-agent:
INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting
storage server
s still applied or that I still get logs at
all.
> Renout
>
> On Thu, Aug 25, 2016 at 10:39 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> This service seems to be logging to both /var/log/messages
>> and /var/log/ovirt-hosted-engine-ha/agent.lo
On Mon, Aug 29, 2016 at 10:47 AM, Simone Tiraboschi
wrote:
>
>
> On Fri, Aug 26, 2016 at 8:54 AM, Sandro Bonazzola
> wrote:
>
>>
>>
>> On Tue, Aug 23, 2016 at 8:44 PM, David Gossage <
>> dgoss...@carouselchecks.com> wrote:
>>
>>>
r matters, so I could never quite
say for certain whether it was working or not.
licensed for some production servers, and I don't know why I didn't log
back in to check that doc out, as now that you mention it I do recall
perusing it long ago and it had decent explanations on that.
Thanks for the reminder and the answer as well.
>
> Daniel
>
>
>
>
On Thu, Oct 27, 2016 at 10:05 AM, Bryan Sockel
wrote:
> Hi,
>
> We currently have our HostedEngine VM running on a gluster Replica 3
> storage domain. Is there a way to modify the mount options in oVirt to
> specify the Backup-Volfile servers?
>
>
I think all steps needed may be found in this th
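For context, the option being asked about is the FUSE mount option shown below; in oVirt it normally goes into the storage domain's custom mount options field rather than a manual mount command. Host and volume names here are placeholders:

```shell
# backup-volfile-servers lets the FUSE client fetch the volfile from
# other replica nodes if the first server is unreachable at mount time.
mount -t glusterfs \
    -o backup-volfile-servers=gfs2:gfs3 \
    gfs1:/engine /mnt/engine
```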
On Tue, Jan 24, 2017 at 4:56 PM, Devin Acosta
wrote:
>
> I have created an oVirt 4.0.6 cluster; it has 2 compute nodes and 3
> dedicated Gluster nodes. The Gluster nodes are configured correctly and
> they have the replica set to 3. I'm trying to figure out when I go to
> attach the Data (Master
ve set using --add-console-password for logging in
** (remote-viewer:11479): WARNING **: Could not open X display
Cannot open display:
Run 'remote-viewer --help' to see a full list of available command line
options
There are 7 other VMs running. I was able to manually continue from
image back over to oVirt and it started up. A little lengthy,
but it got me back up.
On Tue, Nov 27, 2018 at 12:54 AM Yedidyah Bar David wrote:
> On Mon, Nov 26, 2018 at 8:18 PM David Gossage
> wrote:
> &