I am having some trouble deploying an oVirt 4.1 hosted engine.
When I am just about to finish the installation and the hosted-engine setup script is
about to start the engine VM (appliance), it fails saying "The VM is not
If I double-check the vdsmd service i
Why are you using an arbiter if all your HW configs are identical? I'd use a
true replica 3 in this case.
Also in my experience with gluster and vm hosting, the ZIL/slog degrades write
performance unless it’s a truly dedicated disk. But I have 8 spinners backing
my ZFS volumes, so trying to
I think I was not clear in my explanation, so let me try again:
we have an oVirt 18.104.22.168 cluster with multiple hosts (CentOS 7.2).
In this cluster, we added a SAN volume (iSCSI) a few months ago, directly in the GUI.
Later we had to remove a DATA volume (SAN iSCSI). Below are the steps we have
Ok thanks, I'll give that another go shortly
On 3 March 2017 at 15:24, Yaniv Kaul wrote:
> On Fri, Mar 3, 2017 at 5:21 PM Maton, Brett
>> Hi Simone,
>> I just tried to install the RPM but got a dependency issue:
>> As you can see from my previous email that the RDMA connection tested with
I think you have the wrong command: you're testing TCP, not RDMA. Also check whether you
have the RDMA & IB modules loaded on your hosts.
[root@clei26 ~]# qperf clei22.vib tcp_bw tcp_lat
bw = 475 MB/sec
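Following the advice above, a sketch of how to confirm the IB stack and then exercise the RDMA path itself (the hostname is taken from the thread; qperf's rc_bw/rc_lat tests use reliable-connected RDMA, unlike tcp_bw/tcp_lat which go over IPoIB):

```shell
# Check that the InfiniBand/RDMA kernel modules are loaded
lsmod | grep -E 'ib_core|rdma_cm|mlx4'

# Check HCA port state (from the infiniband-diags package)
ibstat

# RDMA bandwidth/latency tests, as opposed to the TCP tests run above
qperf clei22.vib rc_bw rc_lat
```

If rc_bw fails while tcp_bw succeeds, the userspace RDMA libraries or the IB modules are likely missing on one end.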
On 03/03/2017 12:27 PM, Arman Khalatyan wrote:
> Dear Deepak, thank you for the hints, which gluster are you using?
> As you can see from my previous email that the RDMA connection tested
> with qperf. It is working as expected. In my case the clients are
> servers as well, they are
I have been testing GlusterFS over RDMA & below is the command I use. Reading
the logs, it looks like your IB (InfiniBand) device is not being initialized.
I am not sure whether the issue is on the client IB or the storage server IB.
Also, have you configured your IB devices correctly? I am using
On Fri, Mar 3, 2017 at 5:21 PM Maton, Brett
> Hi Simone,
> I just tried to install the RPM but got a dependency issue:
> Error: Package: ovirt-hosted-engine-ha-22.214.171.124-1.el7.centos.noarch
I just tried to install the RPM but got a dependency issue:
Error: Package: ovirt-hosted-engine-ha-126.96.36.199-1.el7.centos.noarch
Requires: vdsm-client >= 4.18.6
I haven't tried to install vdsm-client as I'm not sure
cd to inside the pool path
then run: dd if=/dev/zero of=test.tt bs=1M
leave it running for 5-10 minutes,
then do Ctrl+C and paste the result here.
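The steps above can be sketched as follows (the mount path is illustrative; status=progress is a GNU dd option that prints a running throughput figure):

```shell
# Benchmark sequential writes from inside the pool's mount path
cd /zclei22/pool  # hypothetical mount point of the pool
dd if=/dev/zero of=test.tt bs=1M status=progress
# let it run 5-10 minutes, then Ctrl+C; dd reports bytes written and MB/s
```

One caveat: /dev/zero is trivially compressible, so if the ZFS dataset has compression enabled the reported figure can be much higher than real-world write throughput.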
2017-03-03 11:30 GMT-03:00 Arman Khalatyan :
> No, I have one pool made of the one disk and ssd as a cache and log device.
> I have 3 Glusterfs
No, I have one pool made of the one disk and ssd as a cache and log device.
I have 3 GlusterFS bricks on 3 separate hosts; volume type: Replicate
(Arbiter) = replica 2+1!
That's as much as you can push into the compute nodes (they only have 3 disk slots).
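For reference, a "replica 2+1" arbiter volume like the one described is created with two full data bricks plus one metadata-only arbiter brick (hostnames and brick paths below are illustrative, not taken from this setup):

```shell
# replica 3 arbiter 1: the third brick stores only file metadata,
# giving split-brain protection without a third full copy of the data
gluster volume create engine replica 3 arbiter 1 \
    clei21:/bricks/engine clei22:/bricks/engine clei26:/bricks/engine
gluster volume start engine
```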
On Fri, Mar 3, 2017 at 3:19 PM, Juan Pablo
OK, you have 3 pools (zclei22, logs, and cache); that's wrong. You should have
1 pool, with zlog+cache, if you are looking for performance.
Also, don't mix drives.
What's the performance issue you are facing?
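The single-pool layout suggested here might be sketched like this (device names and the pool name are illustrative, and the SSD would need to be partitioned first):

```shell
# One pool backed by the spinner, with the SSD split into two partitions:
# one as a dedicated SLOG (log vdev) and one as L2ARC (cache vdev)
zpool create zclei22 /dev/sdc \
    log /dev/sda1 \
    cache /dev/sda2

# Verify the resulting layout
zpool status zclei22
```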
2017-03-03 11:00 GMT-03:00 Arman Khalatyan :
> This is CentOS
This is CentOS 7.3 with ZoL version 0.6.5.9-1.
[root@clei22 ~]# lsscsi
[2:0:0:0] disk ATA INTEL SSDSC2CW24 400i /dev/sda
[3:0:0:0] disk ATA HGST HUS724040AL AA70 /dev/sdb
[4:0:0:0] disk ATA WDC WD2002FYPS-0 1G01 /dev/sdc
[root@clei22 ~]# pvs ;vgs;lvs
Which operating system version are you using for your ZFS storage?
zfs get all your-pool-name
Use arc_summary.py from the FreeNAS git repo if you wish.
2017-03-03 10:33 GMT-03:00 Arman Khalatyan :
> Pool load:
> [root@clei21 ~]# zpool iostat -v 1
[root@clei21 ~]# zpool iostat -v 1
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Glusterfs now in healing mode:
[root@clei21 ~]# arcstat.py 1
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
13:24:49 0 0 0 00 00 00 4.6G 31G
13:24:50 15480 5180 51 0080 51 4.6G 31G
> > Thanks for your reply. Maybe I did not explain myself correctly.
> > > > I have a Hosted engine Setup, 2 physical hosts, 1 for virtualization
> > > > and another for managing oVirt.
> > >
> > > The host that runs the engine VM could also run other VMs at the same
I think there is a bug in the vdsmd checks;
2017-03-03 11:15:42,413 ERROR (jsonrpc/7) [storage.HSM] Could not connect
to storageServer (hsm:2391)
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 2388, in connectStorageServer
I'll give that a go this evening, I'm remote at the moment.
On 3 March 2017 at 10:48, Simone Tiraboschi wrote:
> On Fri, Mar 3, 2017 at 11:35 AM, Maton, Brett
>> VM Up not responding - Yes that
Thank you all for the nice hints.
Somehow my host was not able to access userspace RDMA; after
yum install -y libmlx4.x86_64
I can mount:
/usr/bin/mount -t glusterfs -o
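The mount command above is truncated in the archive; for reference, a full invocation selecting the RDMA transport might look like this (server and volume names are illustrative):

```shell
# Mount a Gluster volume over RDMA instead of TCP; the volume must have
# been created with (or reconfigured for) the rdma transport
/usr/bin/mount -t glusterfs -o transport=rdma clei21.vib:/GluReplica /mnt/engine
```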
On Fri, Mar 3, 2017 at 11:35 AM, Maton, Brett
> VM Up not responding - Yes that seems to be the case.
> I did actually try hosted-engine --console
> hosted-engine --console
> The engine VM is running on this host
> Connected to domain HostedEngine
VM Up not responding - Yes that seems to be the case.
I did actually try hosted-engine --console
The engine VM is running on this host
Connected to domain HostedEngine
Escape character is ^]
error: internal error: cannot find character device
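That libvirt error usually means the guest has no serial/console character device defined. One way to check, read-only and assuming the standard qemu URI, is to inspect the domain XML:

```shell
# If no <serial>/<console> devices appear, `hosted-engine --console`
# has nothing to attach to, which produces the error above
virsh -r -c qemu:///system dumpxml HostedEngine | grep -E -A2 '<(serial|console)'
```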
On 3 March 2017 at 08:39,
I am interested in being part of oVirt for GSoC 2017.
I have looked into the oVirt project ideas, and the project that I find
interesting is configuring the backup storage in oVirt.
Since the oVirt online docs have sufficient info on getting started with
development, I don't have a
On Thu, Mar 2, 2017 at 11:42 AM, Maton, Brett
> What is the correct way to shut down a cluster?
> I shut down my 4.1 cluster following this guide
> irt-3.6-Cluster-Shutdown-and-Startup as the NAS