storage domain for each ovirt-node?
No, you don't need to.
This seems like a bug in the Create storage domain UI. Does refreshing
your browser fix the greyed-out input field? Any errors seen in engine logs?
On Tue, Jun 21, 2016 at 7:15 PM, Sahina Bose wrote:
/2016/01/ovirt-and-gluster-hyperconvergence.html
On Fri, Jun 24, 2016 at 3:15 PM, Sahina Bose wrote:
On 06/24/2016 11:25 AM, Dewey Du wrote:
I prefer deploying as a hyperconverged setup, but it is still
under experiment, right?
Hype
What's the output of the command below from the node?
# gluster volume info mgmttank --remote-host=storage1.nullaxiom.com
On 07/01/2016 08:52 AM, Kevin Hung wrote:
Hello,
I thought I would ask the list if anyone is aware of this issue (or if
I am doing something obviously wrong) before I submit a bug report.
Sorry, missed that you already ran it.
Version of glusterfs on node and server?
On 07/01/2016 10:15 AM, Kevin Hung wrote:
The same as the other command, two blank lines.
On 7/1/2016 12:41 AM, Sahina Bose wrote:
What's the output of the command below from the node?
# gluster volume info mgmttank --remote-host=storage1.nullaxiom.com
when there's a version
mismatch)
On 07/01/2016 10:25 AM, Kevin Hung wrote:
Version 3.7.11 on node and version 3.7.6 on server. The node was set
up using the ovirt-node-ng-installer-ovirt-4.0-2016062412 ISO.
On 7/1/2016 12:50 AM, Sahina Bose wrote:
Sorry, missed that you already ran it.
Yes, fuse access is not affected by the bugs. oVirt 3.6 uses fuse to access
the gluster storage domain, so there should be no issues upgrading.
- Original Message -
From: "David Gossage"
To: "users"
Sent: Friday, July 8, 2016 8:23:28 AM
Subject: [ovirt-users] Gluster 3.7.12 update
I am
Did you see any errors in the gluster mount logs during the time when
the VMs were frozen (I assume the I/O was not responding during this
time)? There have been bugs fixed around concurrent I/O on gluster volumes
and VMs pausing in 3.7.12 - the mount logs can tell us if you ran into
similar issues.
First off, a replica 2 gluster volume is not recommended if you want to survive
power loss - you need to have a replica 3 gluster volume.
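For reference, creating a replica 3 volume looks roughly like the following (the volume name, host names and brick paths are placeholders, not from this thread):

```shell
# Sketch: create a replica 3 gluster volume across three hosts,
# so the volume can survive the loss of one host.
gluster volume create datavol replica 3 \
    host1:/bricks/data/brick1 \
    host2:/bricks/data/brick1 \
    host3:/bricks/data/brick1
gluster volume start datavol
```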
From engine logs, I see "Could not connect host to Data Center(Storage issue)"
but there are no errors in the attached vdsm.log. Can you provide the relevant logs?
Error from vdsm log: Permission settings on the specified path do not allow
access to the storage. Verify permission settings on the specified storage
path.: 'path = /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt'
I remember another thread about a similar issue - can you check the ACL
settings?
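One way to inspect permissions and ACLs on the mount path from the error above (a suggested diagnostic, not a step from this thread; vdsm runs as vdsm:kvm, uid/gid 36):

```shell
# Check ownership and ACLs on the gluster mount used by vdsm.
ls -ld /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt
getfacl /rhev/data-center/mnt/glusterSD/172.16.0.11:_ovirt
# vdsm expects vdsm:kvm (36:36) on the storage path; on the gluster
# side this corresponds to storage.owner-uid/storage.owner-gid = 36.
```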
- Original Message -
> From: "Siavash Safi"
> To: "Sahina Bose"
> Cc: "David Gossage" , "users"
> Sent: Thursday, July 28, 2016 8:35:18 PM
> Subject: Re: [ovirt-users] Cannot find master domain
>
> [root@node1 ~]# ls -ld /r
- Original Message -
> From: "Siavash Safi"
> To: "Sahina Bose"
> Cc: "David Gossage" , "users" ,
> "Nir Soffer" ,
> "Allon Mureinik"
> Sent: Thursday, July 28, 2016 9:04:32 PM
> Subject: Re: [ovirt-users] Cannot find master domain
On Thu, Aug 4, 2016 at 8:13 PM, Florian Nolden wrote:
> Hi,
>
> I try to install the hosted_engine on 3 Servers which host also the
> gluster data storage.
> When I install now the hosted_engine, which storage path should I use?
>
> Setup 1:
> server1: server1.san:/hosted_engine
> server2: server
/
>
We're working on updating the docs and feature page for hyperconverged
deployment. Meanwhile, please feel free to contribute and update this info
at the link.
>
> 2016-08-04 17:36 GMT+02:00 Sahina Bose :
>
>>
>>
>> On Thu, Aug 4, 2016 at 8:13 PM, Florian Nolden
>
On Mon, Aug 8, 2016 at 3:37 AM, Luiz Claudio Prazeres Goncalves <
luiz...@gmail.com> wrote:
> Hi, it seems the ovirt-3.6-dependencies.repo is pointing the yum repo to
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/
> EPEL.repo/epel-$releasever/$basearch/ and http://download.gluster.org
On Tue, Aug 9, 2016 at 12:49 AM, Dan Lavu wrote:
> So this is on 4.0.1, Fedora 23, the engine is running fine on glusterfs,
> on a 3 replica arbiter 1 volume, the hosted_engine attached storage domain
> is *stuck* in offline, wondering if there is any way to fix that? Since the
> domain cannot be
On Tue, Aug 9, 2016 at 4:28 PM, Arsène Gschwind
wrote:
> Hi,
>
> We have setup an oVirt 4 cluster using Gluster as storage backend.
> I've setup a separate 10 Gb Network for Gluster traffic and tried to use
> that network as written in : https://www.ovirt.org/develop/
> release-management/feature
On Fri, Aug 19, 2016 at 12:29 PM, Nicolas Ecarnot
wrote:
> Hello,
>
> I'm digging out this thread because I now had the time to work on this
> subject, and I'm stuck.
>
> This oVirt setup has a standalone engine, and 3 hosts.
> These 3 hosts are hypervisors and gluster nodes, each using one NIC f
Gluster hyperconverged will be integrated and fully supported in 4.1 - but
already available as preview from 3.6.8.
I think there are couple of trackers around this. One of which you can look
at for list of upcoming features/fixes -
https://bugzilla.redhat.com/showdependencytree.cgi?id=1277939&hid
On Fri, Aug 19, 2016 at 2:33 PM, Nicolas Ecarnot
wrote:
> Le 19/08/2016 à 09:55, Sahina Bose a écrit :
>
>
>
> On Fri, Aug 19, 2016 at 12:29 PM, Nicolas Ecarnot
> wrote:
>
>> Hello,
>>
>> I'm digging out this thread because I now had the time
On Fri, Aug 19, 2016 at 6:20 PM, Nicolas Ecarnot
wrote:
> Le 19/08/2016 à 13:43, Sahina Bose a écrit :
>
>
> Or are you adding the 3 nodes to your existing cluster? If so, I suggest
>> you try adding this to a new cluster
>>
>> OK, I tried and succeed to create
On Tue, Aug 23, 2016 at 2:21 PM, Sandro Bonazzola
wrote:
>
>
> On Fri, Aug 19, 2016 at 10:05 AM, Sahina Bose wrote:
>
>> Gluster hyperconverged will be integrated and fully supported in 4.1 -
>> but already available as preview from 3.6.8.
>>
>> I think there
On Tue, Sep 20, 2016 at 5:07 PM, Fedele Stabile <
fedele.stab...@fis.unical.it> wrote:
> I confirm:
> I am not able to modify compatibility level using engine UI.
> I'm running oVirt Version 3.6.7.5-1.el7.centos and I would upgrade to
> 4.0
> I was able to migrate HostedEngine to Default Cluster
>
On Fri, Sep 23, 2016 at 4:14 PM, Davide Ferrari wrote:
> I'm struggling with the same problem (I say struggling because I'm still
> having stability issues for what I consider a stable cluster) but you can:
> - create a replica 3 engine gluster volume
> - create replica 2 data, iso and export vol
On Fri, Sep 23, 2016 at 5:02 PM, Davide Ferrari wrote:
>
> 2016-09-23 13:17 GMT+02:00 Sahina Bose :
>
>>
>> What are the stability issues you're facing? Data volume if used as a
>> data storage domain should be a replica 3 volume as well.
>>
>
> Basi
On Fri, Sep 23, 2016 at 7:09 PM, Davide Ferrari wrote:
>
>
> 2016-09-23 13:50 GMT+02:00 Sahina Bose :
>
> Ok, if I encounter again similiar problems I will post logs here
>
>
>>
>> If you have additional capacity on the other 3 hosts , then yes, you can
>
On Fri, Sep 23, 2016 at 7:41 PM, Davide Ferrari wrote:
>
>
> 2016-09-23 15:57 GMT+02:00 Sahina Bose :
>
>>
>> You could do this - where Node3 & Node 2 also has arbiter bricks. Arbiter
>> bricks only store metadata and require very low storage capacity compared
volume are not from the same host, thus you avoid the
problem you describe above
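An arbiter volume of this kind can be created as below (volume, host and brick names are illustrative; requires a gluster version with arbiter support):

```shell
# Replica 3 arbiter 1: the third brick of each subvolume stores
# only metadata, so it needs far less capacity than the data bricks.
gluster volume create datavol replica 3 arbiter 1 \
    node1:/bricks/data/brick1 \
    node2:/bricks/data/brick1 \
    node3:/bricks/arbiter/brick1   # arbiter brick, metadata only
gluster volume start datavol
```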
>
>
> 2016-09-23 16:11 GMT+02:00 Davide Ferrari :
>
>>
>>
>> 2016-09-23 15:57 GMT+02:00 Sahina Bose :
>>
>>>
>>> You could do this - where Node3 & Node 2 al
ever you see now should improve further.
> Regards,
>
> Bertjan
>
> On Fri, Sep 23, 2016 at 04:47:25PM +0530, Sahina Bose wrote:
> >
> >
> > On Fri, Sep 23, 2016 at 4:14 PM, Davide Ferrari
> wrote:
> >
> > I'm struggling with the same problem
[Forwarding to a wider audience]
Feature page outlining the proposed solution is at
http://www.ovirt.org/develop/release-management/features/gluster/gluster-dr/
Please review and provide feedback.
thanks,
sahina
-- Forwarded message --
From: Sahina Bose
Date: Wed, Sep 14, 2016
Yes, this is a GlusterFS problem. Adding gluster users ML
On Thu, Sep 29, 2016 at 5:11 PM, Davide Ferrari wrote:
> Hello
>
> maybe this is more glusterfs than oVirt related but since oVirt integrates
> Gluster management and I'm experiencing the problem in an ovirt cluster,
> I'm writing here.
>
[Adding gluster-users ML]
The brick logs are filled with errors :
[2016-10-05 19:30:28.659061] E [MSGID: 113077]
[posix-handle.c:309:posix_handle_pump] 0-engine-posix: malformed internal
link
/var/run/vdsm/storage/0a021563-91b5-4f49-9c6b-fff45e85a025/d84f0551-0f2b-457c-808c-6369c6708d43/1b5a5e34-81
cdd2818>), in state , has disconnected from glusterd.
> Thanks
>
>
>
> *From:* Sahina Bose [mailto:sab...@redhat.com]
> *Sent:* 05 October 2016 08:11
> *To:* Jason Jeffrey ; gluster-us...@gluster.org;
> Ravishankar Narayanankutty
> *Cc:* Simone Tiraboschi
On Tue, Oct 4, 2016 at 9:51 PM, Hanson wrote:
> Running iperf3 between node1 & node2, I can achieve almost 10gbps without
> ever going out to the gateway...
>
> So switching between port to port on the switch is working properly on the
> vlan.
>
> This must be a problem in the gluster settings? W
This looks like a bug displaying status in the UI (similar to
https://bugzilla.redhat.com/show_bug.cgi?id=1381175 ?). Could you also
attach engine logs from the timeframe that you notice the issue in UI.
Do all nodes in the cluster return peer status as Connected? (Engine logs
will help determine
Hi!
Looks like you have setup your gluster volume outside of oVirt and your
bricks are identified via "rdkvm1-data" , "rdkvm2-data" and not the IP
addresses associated with the gluster network ( 10.3.10.5, 10.3.10.6)
What does "gluster volume info gluvol2" return?
Currently we cannot identify mul
7.16 my external gluster server's
> ? The plan is to upgrade to oVirt 3.6.7 after this gluster server's upgrade.
>
>
>
Yes, it is safe.
>
> Thanks
>
> -Luiz
>
>
>
>
> 2016-08-08 4:26 GMT-03:00 Sahina Bose :
>
>>
>>
>> On Mon, Au
On Tue, Oct 25, 2016 at 12:32 PM, knarra wrote:
> Hi All,
>
> I am currently using ovirt master branch, having hosted-engine setup
> with glusterfs as the storage domain. I launched some vms and try to start
> them. VMs fails to start on two other hosts in the cluster and both of them
> go in
If you want these hosts to run only gluster service - create a cluster with
"Enable gluster service" checked and "Enable Virt Service" unchecked (i.e
disabled).
You should then add your 3 hosts to this cluster. Are the three hosts
subscribed to the ovirt repos? During installation, the required packages
will try again doing so. I didn't notice this as a requirement so I
>> assumed the method was the ovirt server uploaded the needed packages to
>> each host.
>>
>>
>>
>> On 26 October 2016 at 17:52, Sahina Bose wrote:
>>
>>> If you want these hosts to run only gluster service
On Fri, Oct 28, 2016 at 8:14 AM, Thing wrote:
> Hi,
>
> So was was trying to make a 3 way mirror and it reported failed. Now I
> get these messages,
>
> On glusterp1,
>
> =
> [root@glusterp1 ~]# gluster peer status
> Number of Peers: 1
>
> Hostname: 192.168.1.32
> Uuid: ef780f56-267f-4a6
On Thu, Oct 27, 2016 at 4:28 AM, Thing wrote:
> Hi,
>
> I have 3 machines imported into ovirt 4.0.4 just to do storage. I have no
> storage setup. I am a bit confused, can I add new storage from scratch via
> ovirt? ie picking mount points (I have /gv1 so set on each) or do I create
> the glust
From the host-deploy logs - the error seems to be "RuntimeError: Failed to
start service 'glusterd'"
Can you try to start the glusterd service (systemctl start glusterd) on the
failed nodes? Please attach
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log to analyze the failure.
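A minimal sketch of the suggested check on a failed node:

```shell
# Try starting glusterd manually and inspect why it failed.
systemctl start glusterd
systemctl status glusterd
# The glusterd log usually states the reason for a startup failure.
tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
```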
On Wed, Nov 2, 20
A wild guess, not sure if it is related - can you check if multipathd
service is enabled. If you set up your oVirt-Gluster hyperconverged
environment via gdeploy, multipathd service is disabled and the
/etc/multipath.conf is edited to blacklist all devices - this was to fix
https://bugzilla.redhat.
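The check described above might look like this (a diagnostic sketch, not commands from the original mail):

```shell
# Is multipathd enabled/running, and did gdeploy blacklist devices?
systemctl is-enabled multipathd
systemctl is-active multipathd
# Look for the blacklist stanza gdeploy adds to multipath.conf.
grep -A2 '^blacklist' /etc/multipath.conf
```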
On Mon, Nov 7, 2016 at 11:20 AM, Gary Pedretty wrote:
> As a storage domain, this gluster volume will not work whether it is
> preallocated or thin provision. It will work as a straight gluster volume
> mounted directly to any VM on the ovirt Cluster, or any physical machine,
> just not as a da
On Wed, Nov 23, 2016 at 5:41 AM, Oscar Segarra
wrote:
> Hi,
>
> As on oVirt it is possible to attach local storage, I suppose it can be used
> to run virtual machines:
>
> I have drawn a couple of diagrams in order to know if is it possible to
> set up this configuration:
>
> 1.- In on-going scenar
On Wed, Nov 23, 2016 at 1:18 PM, Oscar Segarra
wrote:
> Hi,
>>
>> As on oVirt it is possible to attach local storage, I suppose it can be
>> used to run virtual machines:
>>
>> I have drawn a couple of diagrams in order to know if is it possible to
>> set up this configuration:
>>
>> 1.- In on-goin
On Fri, Nov 25, 2016 at 10:11 PM, Nathanaël Blanchet
wrote:
> Hello,
>
> After registering a new host in an existing gluster cluster, it can't
> activate because it has no other gluster peer registered, so no existing
> declared gluster domain storage. It has no more /gluster-bricks directory.
>
On Thu, Dec 1, 2016 at 10:54 AM, Bill James wrote:
> I have a 3 node cluster with replica 3 gluster volume.
> But for some reason the volume is not using the full size available.
> I thought maybe it was because I had created a second gluster volume on
> same partition, so I tried to remove it.
t.org/67844
>>
>> I also need more information on your setup, to add more
>> details to the commit message.
>>
>> Thanks for reporting this,
>> Nir
>>
>> On Mon, Dec 5, 2016 at 10:28 AM, 胡茂荣 wrote:
>>
>>>
>>> Thanks for
On Thu, Dec 8, 2016 at 2:58 PM, Yedidyah Bar David wrote:
> On Wed, Dec 7, 2016 at 6:46 PM, Sven Achtelik
> wrote:
> > Hi All,
> >
> >
> >
> > I’m planning to install oVirt 4 with the HC setup from this blogpost and
> did
> > a demo setup.
> >
> >
> >
> > http://www.ovirt.org/blog/2016/08/up-and
[+ gluster-users]
Could you be more specific about the error?
On Wed, Dec 14, 2016 at 1:50 AM, lejeczek wrote:
> libvirt/qemu does not get to gluster vols when one has these:
>
> Upgraded:
> glusterfs.x86_64 3.7.17-1.el7 glusterfs-api.x86_64
> 3.7.17-1.el7
> glusterfs-cli.x86_6
On Wed, Dec 14, 2016 at 9:48 PM, Nathanaël Blanchet
wrote:
> Hi,
>
> I changed the previous all in one network to a dedicated gluster storage
> at the network level. But when doing netstat (or gluster peer status), I
> can see that listening connection still use the previous vlan and gluster
> br
Could you attach the engine log with this error?
On Fri, Dec 16, 2016 at 4:29 PM, Nathanaël Blanchet
wrote:
> Hi,
>
> I used to successfully run a replica 3 gluster volume, but since the last
> 4.0.5 update, they can't connect each other with the message : gluster
> [gluster peer status guadalup
>
>
> Le 16/12/2016 à 14:02, Sahina Bose a écrit :
>
> Could you attach the engine log with this error?
>
> On Fri, Dec 16, 2016 at 4:29 PM, Nathanaël Blanchet <
> blanc...@abes.fr> wrote:
>
>> Hi,
>>
>> I used to successfully run a replica 3 gluster
On Fri, Dec 16, 2016 at 11:00 PM, Nathanaël Blanchet
wrote:
>
>
> Le 16/12/2016 à 16:34, Sahina Bose a écrit :
>
> Failed to find host 'Host[guadalupe1,7a30c899-a317-479a-b07b-244bc2374485]'
> in gluster peer list from
> 'Host[guadalupe1,7a30c899-a317-479
On Thu, Dec 29, 2016 at 10:53 AM, Jim Kusznir wrote:
> Hello:
>
> I've been involved in virtualization from its very early days, and been
> running linux virtualization solutions off and on for a decade.
> Previously, I was always frustrated with the long feature list offered by
> many linux virt
On Sat, Dec 31, 2016 at 12:01 AM, Jim Kusznir wrote:
> Hi all:
>
> I'm trying to set up a new ovirt cluster. I got it "mostly working"
> earlier, but wanted to change some physical networking stuff, and so I
> thought I'd blow away my machines and rebuild. I followed the same recipe
> to build
Can you provide the gluster mount logs?
On Fri, 6 Jan 2017 at 12:01 PM, Rodrick Brown
wrote:
> I'm using gluster/zfs for the backing store on my Ovirt VM's its seems our
> gluster volume may have ran low on space and a few VM's we're paused due to
> long i/o wait times.
>
> I'm no longer able to b
On Tue, Jan 10, 2017 at 8:12 PM, Doug Ingham wrote:
> Hey all,
> Each of my hosts/nodes also hosts its own gluster bricks for the storage
> domains, and peers over a dedicated FQDN & interface.
>
> For example, the first server is setup like the following...
> eth0: v0.dc0.example.com (10.10.10.
You will need to add the hosts to oVirt only to manage the gluster
deployment -i.e creation of volumes, setting volume options, add/remove
bricks, monitoring gluster volume.
To consume the gluster volume as storage domain, you don't necessarily need
to add the hosts (in a non-hyperconverged requir
On Fri, Jan 20, 2017 at 3:01 PM, Liebe, André-Sebastian <
andre.li...@gematik.de> wrote:
> Hello List,
>
> I run into trouble after moving our hosted engine from nfs to
> hyperconverged glusterFS by backup/restore[1] procedure. The engine logs
> it can't import and activate the hosted-storage alt
The qemu logs (for a VM) contain the gluster client logs as well
On Fri, Jan 27, 2017 at 11:14 PM, Doug Ingham wrote:
> Hey guys,
> Would anyone be able to tell me the name/location of the gluster client
> log when mounting through libgfapi?
>
> Cheers,
> --
> Doug
>
> _
On Mon, Feb 6, 2017 at 6:36 PM, p...@email.cz wrote:
> Hello everybody,
>
> We are using oVirt Engine Version: 4.0.6.3-1.el7.centos on centos 7.3
> with gluster replica 3 arbiter = (1+1)+1
>
> I'm confused with GUI delaying - if node details are wanted ( cluster ->
> nodes -> node detail = clic
cle DB requests licences for all nodes in cluster ( lic. per each
> available CPU socket in cluster ). Nobody will spend money for an
> unusable CPU socket, especially a dedicated cluster user :-( )
>
For this one, I have no solution
>
> From that reasons arbiter not in cluster
Any errors in the gluster mount logs for this gluster volume?
How about "gluster volume heal <volname> info" - does it list any entries to
heal?
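Assuming a volume named e.g. `datavol` (a hypothetical name), the heal check reads:

```shell
# List files pending heal on each brick of the replicated volume.
# An empty list means the replicas are in sync.
gluster volume heal datavol info
```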
On Thu, Feb 9, 2017 at 11:57 PM, Doug Ingham wrote:
> Some interesting output from the vdsm log...
>
>
> 2017-02-09 15:16:24,051 INFO (jsonrpc/1) [storage.Stora
vdsm logs indicate that the gluster volume is either not online or issues
connecting to the bricks. Could you also share the gluster mount logs?
On Sun, Feb 12, 2017 at 8:23 PM, Bryan Sockel wrote:
> Attached are the requested logs. Is there away to just try and kill these
> jobs?
>
>
> -Or
On Fri, Feb 10, 2017 at 11:04 PM, Doug Ingham wrote:
> Hey Guys,
> I currently use dedicated interfaces & hostnames to separate gluster
> traffic on my "hyperconverged" hosts.
>
> For example, the first node uses "v0" for its management interface & "s0"
> for its gluster interface.
>
> With this
You will need to pass additional mount options while creating the storage
domain (transport=rdma)
Please let us know if this works.
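In the storage domain dialog this goes into the custom mount options field; the equivalent manual mount, with hypothetical server and volume names, would be:

```shell
# Mount a gluster volume over RDMA by passing transport=rdma.
mount -t glusterfs -o transport=rdma server1:/datavol /mnt/test
```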
On Thu, Mar 2, 2017 at 2:42 PM, Arman Khalatyan wrote:
> Hi,
> Are there way to force the connections over RDMA only?
> If I check host mounts I cannot see rdma m
mt_getspec_cbk+0x3c1) [0x7f524c9dbeb1]
>>> -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x172) [0x7f524c9d65d2]
>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-:
>>> received signum (1), shutting down
>>> [2017-03-02 11:49:47.830053] I
On Mon, Mar 6, 2017 at 3:21 PM, Arman Khalatyan wrote:
>
>
> On Fri, Mar 3, 2017 at 7:00 PM, Darrell Budic
> wrote:
>
>> Why are you using an arbitrator if all your HW configs are identical? I’d
>> use a true replica 3 in this case.
>>
>>
> This was just a GUI suggestion when I was creating the cl
Are you shrinking the gluster volume by removing bricks, or are you
shrinking the underlying LV/filesystem of the bricks associated with
gluster volume?
If the latter, you need to move the storage domain to maintenance and
unmount it from all hosts.
On Thu, Mar 9, 2017 at 4:56 AM, Ian Neilsen wrote:
>
> Hi
Responses inline
On Tue, Mar 21, 2017 at 5:42 PM, Davide Ferrari wrote:
> I may add that as of now I've just added the bricks and nothing more, no
> VMs/new disks created in the oVirt cluster (in case i should remove the
> bricks)
>
> 2017-03-21 13:09 GMT+01:00 Davide Ferrari :
>
>> Hello
>>
>>
On Wed, Mar 22, 2017 at 8:29 PM, Davide Ferrari wrote:
>
>
> 2017-03-22 8:09 GMT+01:00 Sahina Bose :
>
>
>>
>> A gluster volume cannot span multiple clusters. If you want to create a
>> separate cluster with the Broadwell nodes, then create a new gluster vol
Could you provide engine.log from engine and vdsm.log & supervdsm.log from
this host?
On Wed, Mar 22, 2017 at 8:02 PM, Davide Ferrari wrote:
> I get this error when I click "Gluster status is disconnected for this host.
> Restart Glusterd service" in the General tab of the sole host in a new
Is "spfy-hw01" resolvable from engine? Have you associated network named
"gluster" to interface associated with "spfy-hw01" under "Setup Host
Network" of host?
On Sat, Mar 25, 2017 at 5:56 PM, Arsène Gschwind
wrote:
> Hi,
>
> I do have a recuring warning in engine.log and i'm not able to figure
ace associated with host that has gluster network
tagged to it
> 2017-03-27 10:12 GMT+02:00 Sahina Bose :
>
>> Is "spfy-hw01" resolvable from engine? Have you associated network named
>> "gluster" to interface associated with "spfy-hw01" under &
to be it, if you have identified the brick using an
FQDN instead of ip address.
>
> 2017-03-27 12:14 GMT+02:00 Sahina Bose :
>
>>
>>
>> On Mon, Mar 27, 2017 at 2:06 PM, Davide Ferrari
>> wrote:
>>
>>> I have the same error (warning) and in my case th
On Thu, Mar 30, 2017 at 1:23 PM, Liron Aravot wrote:
> Hi Jim, please see inline
>
> On Thu, Mar 30, 2017 at 4:08 AM, Jim Kusznir wrote:
>
>> hello:
>>
>> I've been running my ovirt Version 4.0.5.5-1.el7.centos cluster for a
>> while now, and am now revisiting some aspects of it for ensuring tha
On Wed, Mar 29, 2017 at 5:22 PM, Goorkate, B.J.
wrote:
> Hi all,
>
> When creating a gluster brick in oVirt, I have to fill in the parameters
> of the
> RAID volume the brick is on (that's how I understand it anyway):
> RAID-type, number of disks and stripe size.
>
> What is the reason for that?
On Wed, Mar 29, 2017 at 10:36 PM, Liron Aravot wrote:
>
>
> On Wed, Mar 29, 2017 at 4:35 PM, Nicolas Ecarnot
> wrote:
>
>> [Please ignore the previous msg]
>>
>> Hello,
>>
>> One of our DC is a very small one, though quite critical.
>> It's almost hyper converged : hosts are compute+storage, but
On Thu, Mar 30, 2017 at 4:39 PM, Sahina Bose wrote:
>
>
> On Wed, Mar 29, 2017 at 10:36 PM, Liron Aravot wrote:
>
>>
>>
>> On Wed, Mar 29, 2017 at 4:35 PM, Nicolas Ecarnot
>> wrote:
>>
>>> [Please ignore the previous msg]
>>>
>&g
256
> Block device 253:1
>
> --- Logical volume ---
> LV Path                /dev/centos_ovirt/root
> LV Name                root
> VG Name                centos_ovirt
> LV UUID                g2h2fn-sF0r-Peos-hAE1-WEo9-WENO-MlO3ly
> LV Write Access
On Wed, Apr 12, 2017 at 11:15 PM, Jamie Lawrence
wrote:
>
> > On Apr 12, 2017, at 1:31 AM, Evgenia Tokar wrote:
> >
> > Hi Jamie,
> >
> > Are you trying to setup hosted engine using the "hosted-engine --deploy"
> command, or are you trying to migrate existing he vm?
> >
> > For hosted engine set
Could you try "gluster volume start VGSF1 force" to make sure the brick
processes are restarted.
From the status output, it looks like the brick processes are not online.
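That is, roughly:

```shell
# Force-start the volume so offline brick processes are respawned.
gluster volume start VGSF1 force
# Verify the bricks now report as online in the status output.
gluster volume status VGSF1
```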
On 04/22/2015 09:14 PM, p...@email.cz wrote:
Hello dears,
i've got some troubles with reattaching gluster volumes with data.
On 05/07/2015 01:34 PM, Jorick Astrego wrote:
On 05/06/2015 08:15 PM, knarra wrote:
On 05/06/2015 11:22 PM, Jorick Astrego wrote:
On 05/06/2015 06:24 PM, knarra wrote:
On 05/06/2015 06:59 PM, Jorick Astrego wrote:
On 05/06/2015 02:49 PM, knarra wrote:
On 05/06/2015 05:33 PM, Jorick As
Is vdsm-gluster installed on your node?
From the logs, it seems it is not.
On 05/12/2015 03:02 PM, supo...@logicworks.pt wrote:
This is the engine log:
2015-05-12 10:27:44,012 INFO
[org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-2)
[76c5a7e7] Lock Acquired to o
[+ gluster list]
On 05/25/2015 09:32 PM, p...@email.cz wrote:
Hello,
can anybody help me with hanging replica2 stripe2 datastore on 4
nodes cluster ??
oVirt - ovirt-engine-lib-3.5.2.1-1.el7.centos.noarch
gluster - glusterfs-server-3.7.0-2.el7.x86_64
VM - Centos 7.1
If I use any bigger writ
It looks like there is already a remove-brick operation going on for the
volume.
remove-brick in gluster is a two-step process: the data is first migrated
from the chosen bricks to the other available bricks in the volume. This
could be a long-running task, and can be monitored via engine using
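The two steps map to these gluster CLI commands (volume and brick names are placeholders):

```shell
# Step 1: start migrating data off the brick being removed.
gluster volume remove-brick datavol host1:/bricks/data/brick1 start
# Monitor how much data has been migrated so far.
gluster volume remove-brick datavol host1:/bricks/data/brick1 status
# Step 2: once migration completes, commit to actually remove the brick.
gluster volume remove-brick datavol host1:/bricks/data/brick1 commit
```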
On 06/29/2015 06:50 PM, Frank Rothenstein wrote:
Hi,
I just added a gluster-node. I wanted to replace one brick of the data
domain. I did with "gluster vol replace-brick
commit force" and all gluster processes did well.
My Cluster volumes now shows this one down and does not
show the , where
On 07/29/2015 05:37 AM, John Gardeniers wrote:
What makes me say that RHEV and the community version of Gluster are
incompatible is a message from Red Hat saying so.
Could you point me to this message?
Of particular note, the RHSS version of Gluster uses afr-v1, whereas
the community vers
On 08/06/2015 02:38 PM, Vered Volansky wrote:
- Original Message -
From: "Nicolas Ecarnot"
To: "users@ovirt.org"
Sent: Wednesday, August 5, 2015 5:32:38 PM
Subject: [ovirt-users] ovirt+gluster+NFS : storage hicups
Hi,
I used the two links below to setup a test DC :
http://communi
On 07/30/2015 08:11 PM, Adrian Lewis wrote:
Hi,
Just wondering if it will be possible to create gluster replica 3
volumes in oVirt 3.6 that use the arbiter function instead of actually
storing three copies of the data? If so, could this be used for the
hosted engine on gluster feature which
On 08/13/2015 11:48 PM, Chris Liebman wrote:
I've just force deleted a DC. I did this because gluster was
completely hosed. Multiple nodes with broken disks - don't ask...
Anyway - now I see that the Cluster still exists with the hosts. And
I cant remove, re-install etc the hosts, nor can
Please provide output of "gluster volume info" command, vdsm.log &
engine.log
There could be a mismatch between node information in engine database
and gluster - one of the reasons is because the gluster server uuid
changed on the node and we will need to see why.
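The gluster server UUID on a node can be compared against what the engine recorded (a diagnostic sketch):

```shell
# UUID glusterd identifies this node by; the engine database stores
# the same value, so a mismatch here explains the sync failure.
cat /var/lib/glusterd/glusterd.info
# Peer UUIDs as seen from this node, for cross-checking:
gluster peer status
```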
On 08/17/2015 12:35 AM, D
16.0.*
storage.owner-uid: 36
storage.owner-gid: 36
Also, I have attached the logs .
Thanks in advance,
Tibor
- On Aug 17, 2015, at 8:40, Sahina Bose sab...@redhat.com wrote:
Please provide output of "gluster volume info" command, vdsm.log &
engine.log
There could be a mismatch
Hi all
When I send a message to #ovirt on OFTC , I get a response - #ovirt
:Cannot send to channel
Anyone else facing this?
thanks
sahina
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Hi all,
While installing a host to ovirt-3.6 engine, the host installation is
stuck at "Enrolling serial console certificate"
I installed the engine from ovirt-release36, and answered No to setting
up WebConsole-proxy as well as VM Console proxy on the engine.
Does anyone know how to debug
helping, please send me "ps -efa" output so at least I see what
is running.
thanks!
output of ps -efa
http://fpaste.org/257513/44015204/
- Original Message -
From: "Sahina Bose"
To: "Alon Bar-Lev"
Cc: "users"
Sent: Friday, August 21, 2015 12:
.lab.eng.blr.redhat.com-ssh-cert.pub:
id "rhsdev9.lab.eng.blr.redhat.com" serial 0 for
rhsdev9.lab.eng.blr.redhat.com valid from 2015-08-21T02:51:27 to
2020-08-19T03:51:27
let's see what happens.
- Original Message -
From: "Sahina Bose"
To: "Alon Bar-Lev"