Re: [Users] Ovirt Gluster problems

2014-01-29 Thread Sahina Bose


On 01/29/2014 04:15 AM, Steve Dainard wrote:
Not sure if this is exactly your issue, but this post here: 
http://comments.gmane.org/gmane.comp.emulators.ovirt.user/12200 might 
lead you in the right direction.


"one note - if you back it up while its attached to an engine, you will
need to edit its meta data file to remove the association to allow the
other engine to connect it to the new pool for restore."
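For reference, a hedged sketch of what that metadata edit involves (paths and
key names are from memory, so verify against your own domain before touching
anything): the domain metadata is a plain text file under dom_md/ on the
storage, and the pool association is the POOL_UUID line.

# assuming the domain is mounted and <SD_UUID> is the storage domain ID
cat /path/to/domain-mount/<SD_UUID>/dom_md/metadata   # inspect it first
# blank out the POOL_UUID= value (on versions that checksum the file, the
# _SHA_CKSUM line may also need to be removed) before attaching the domain
# to the new engine's pool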



Did this solve your issue?

If not, could you let us know the error messages from the logs? (engine.log)
If you're looking to remove a host from a gluster cluster when there are 
no online hosts, checking the Force option should do this for you.



*Steve Dainard *



On Tue, Jan 28, 2014 at 12:41 PM, Juan Pablo Lorier <jplor...@gmail.com> wrote:


Hi,

I had some issues with a gluster cluster and after some time trying to
get the storage domain up or delete it (I opened a BZ about a deadlock
in the process of removing the domain) I gave up and destroyed the DC.
The thing is that I want to add the hosts that were part of the DC and
now I get that I can't, as they still have the volume. I try to stop the
volume, but I can't as no host is running in the deleted cluster and for
some reason ovirt needs that.
I can't delete the hosts either, as they have the volume... so I'm back
in another chicken-and-egg problem.
Any hints??

PS: I can't nuke the whole ovirt platform as I have another DC in
production, otherwise I would :-)

Regards,


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [URGENT] hacking the DB

2014-01-31 Thread Sahina Bose


On 01/31/2014 04:49 PM, Juan Pablo Lorier wrote:

Hi,

I'm about to hack the engine db to remove a gluster volume and I'd like
someone to advise me on this. The environment is in production. I've
backed up the db and I'm planning to remove every entry I can find of
the volume (host members, cluster, etc.), but I found that there's no
clusters table (and I need to remove a cluster that no longer exists
after a force remove of the DC, but somehow the volume is still attached
to it).


delete from gluster_volume_bricks where volume_id = '..';
delete from gluster_volume_options where volume_id = '..';
delete from gluster_volume_transport_types where volume_id = '..';
delete from gluster_volume_access_protocols where volume_id = '..';
delete from gluster_volumes where id = '..';

The above would delete all gluster volume information stored in the db.

Cluster information is stored in vds_groups table.

You should have been able to remove the host using the "Force" option.
Didn't that work for you? I would be interested to see the logs if it
did not.


thanks
sahina

I'll try IRC also to see if someone can give me a hand.
Regards,



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [URGENT] hacking the DB

2014-01-31 Thread Sahina Bose


On 01/31/2014 05:49 PM, Juan Pablo Lorier wrote:

Hi again,

I'm now in the process of deleting the removed cluster from the hosts. I
can't just delete it from vds_groups as it is referenced as a foreign key
from vds_static, and it looks like vds_static holds the host definitions.
Can I safely update the vds_group_id to ' ' in the hosts to "detach"
them from the cluster?
Regards,


I think it would be safer to remove the host and then remove the cluster.
You would have to remove the host entry from vds_static as well as 
vds_dynamic.
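As a hedged illustration of that order (assuming the engine database and DB
user are both named "engine", with <HOST_ID> and <CLUSTER_ID> as placeholders
for the actual UUIDs - verify the column names against your schema first):

psql -U engine -d engine <<'SQL'
begin;
-- vds_dynamic references vds_static, so remove the dynamic row first
delete from vds_dynamic where vds_id = '<HOST_ID>';
delete from vds_static where vds_id = '<HOST_ID>';
-- then the now-unreferenced cluster entry
delete from vds_groups where vds_group_id = '<CLUSTER_ID>';
commit;
SQL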




On 31/01/14 09:35, Sahina Bose wrote:

delete from gluster_volume_bricks where volume_id = "




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Error message constantly being reported

2014-02-02 Thread Sahina Bose


On 02/03/2014 05:02 AM, Itamar Heim wrote:

On 02/02/2014 08:01 PM, Jon Archer wrote:

Hi All,

Constantly seeing this message in the logs:
vdsm vds ERROR vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 952, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 306, in tasksList
    status = self.svdsmProxy.glusterTasksList(taskIds)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterTasksList
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: tasks is not a valid status option
Usage: volume status [all | <VOLNAME> [nfs|shd|<BRICK>]] [detail|clients|mem|inode|fd|callpool]
return code: 1


looks like an option which isn't recognised by the "gluster volume
status" command.

Any ideas how to resolve? It's not causing any problems, but I would
like to stop it.

Cheers

Jon
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


sahina - iirc, there is a patch removing that noise?


Yes, there was a patch removing this for clusters < 3.4 compatibility 
version


For 3.4 gluster clusters, we need a version of gluster (>= 3.5) to 
support the gluster async task feature. This version has the support for 
"gluster volume status tasks"



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Error message constantly being reported

2014-02-02 Thread Sahina Bose


On 02/03/2014 12:06 PM, Itamar Heim wrote:

On 02/03/2014 07:35 AM, Sahina Bose wrote:


On 02/03/2014 05:02 AM, Itamar Heim wrote:

On 02/02/2014 08:01 PM, Jon Archer wrote:

Hi All,

Constantly seeing this message in the logs:
vdsm vds ERROR vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 952, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 306, in tasksList
    status = self.svdsmProxy.glusterTasksList(taskIds)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterTasksList
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: tasks is not a valid status option
Usage: volume status [all | <VOLNAME> [nfs|shd|<BRICK>]] [detail|clients|mem|inode|fd|callpool]
return code: 1


looks like an option which isn't recognised by the "gluster volume
status" command.

Any ideas how to resolve? It's not causing any problems, but I would
like to stop it.

Cheers

Jon
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


sahina - iirc, there is a patch removing that noise?


Yes, there was a patch removing this for clusters < 3.4 compatibility
version

For 3.4 gluster clusters, we need a version of gluster (>= 3.5) to
support the gluster async task feature. This version has the support for
"gluster volume status tasks"




was this backported to stable 3.3 ?


Unfortunately, no - missed this.

Have submitted a patch now - http://gerrit.ovirt.org/23982


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] My first wiki page

2014-02-04 Thread Sahina Bose


On 02/03/2014 07:18 PM, Juan Pablo Lorier wrote:

Hi,

I've created my first wiki page and I'd like someone to review it and
tell me if there's something that needs to be changed (besides the fact
that it does not have any style yet).
The URL is
http://www.ovirt.org/oVirt_Wiki:How_to_change_Gluster%27s_network_interface
Regards,



Firstly, thanks for putting this information up!

Some comments -

1. When you use different IP addresses for engine-to-gluster-host
communication (say IP1) and gluster-to-gluster communication (say IP2),
operations from the oVirt engine like add brick or remove brick would fail
(the brick is added with IP1, which gluster does not recognize).


To work around this, it is better to use an FQDN both for registering the
host with the engine and for peer probing the host from the gluster CLI.

The same FQDN can then resolve to different IP addresses depending on which
host resolves it (for example, via /etc/hosts entries on each host).
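A hedged example, with node01.example.com as a placeholder FQDN - register
the host in the engine with that name and use the same name when probing it
from an existing gluster peer:

gluster peer probe node01.example.com
gluster peer status   # the peer should show up under its FQDN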

2. To reuse a brick directory, gluster provides the force option during
volume creation as well as when adding bricks. This is available from
gluster 3.5 onwards.
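For illustration (hedged - volume name and brick paths are placeholders),
the force keyword goes at the end of the create command:

gluster volume create data replica 3 \
  node01.example.com:/bricks/b1 node02.example.com:/bricks/b1 \
  node03.example.com:/bricks/b1 force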


[Adding Vijay to correct me, if I'm wrong]






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Dedicated Bonding interface for Gluster

2014-02-11 Thread Sahina Bose


On 02/12/2014 12:28 AM, ml ml wrote:

...hmm, do i want/mean geo-replication?

if yes, i a missing this tab here:
http://www.ovirt.org/images/5/51/Georepsession1list.png


No. Geo-replication is for disaster recovery by replicating your gluster 
volume to a cluster in a different geography. And note that the 
management and monitoring of gluster geo-replication has not yet been 
added in oVirt.


I think what you are looking for is the synchronous replication provided 
by the "replicated" volume types. Currently, there's no way to specify 
the interface to be used for gluster communication from within oVirt.  
Please refer to this thread - 
http://lists.ovirt.org/pipermail/users/2014-February/020894.html - for 
help in setting this up.





On Tue, Feb 11, 2014 at 5:40 PM, ml ml wrote:


Hello List,

i would like to use a dedicated Bonding Interface for Gluster
Replication and Migration.

I was already able to add my Bonding Interfaces in my
Hosts-Network Tab, however i am failing to tell Gluster to use
this interface. It only offers me nodeX.local (which is my
management network).

More infos are collected here: http://oi58.tinypic.com/v6qaa1.jpg

Can someone tell me how to configure Gluster to use the dedicated
bond interface?

I already asked in the IRC Channel but nobody seems to know :-/

Thanks,
Mario




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Dedicated Bonding interface for Gluster

2014-02-12 Thread Sahina Bose


On 02/12/2014 11:21 PM, Gianluca Cecchi wrote:

On Wed, Feb 12, 2014 at 6:18 PM, ml ml wrote:


I guess the brick details are stored in the postgres database and everything
else after will fail?!


Yes, we fixed the issue with resolving brick's host while syncing with 
gluster CLI in oVirt 3.4. However, when you use multiple addresses, you 
will need to use the workaround below.




Am i the only one with dedicated migration/storage interfaces? :)

Thanks,
Mario


One of the workarounds I found and that works for me as I'm not using
dns is this:

- for the engine, hosts node1 and node2 have IPs on the mgmt network
- node1 and node2 have their own IP addresses on the dedicated gluster network

so for example

10.4.4.x = mgmt
192.168.3.x = gluster dedicated

before:

on engine
/etc/hosts
10.4.4.58 node01
10.4.4.59 node02
10.4.4.60 engine

on node01
10.4.4.58 node01
10.4.4.59 node02
10.4.4.60 engine


after:

on engine (the same as before)
/etc/hosts
10.4.4.58 node01
10.4.4.59 node02
10.4.4.60 engine

on node01
#10.4.4.58 node01
#10.4.4.59 node02
192.168.3.1 node01
192.168.3.3 node02
10.4.4.60 engine

No operations on RDBMS.


Thanks, Gianluca!

I will update the wiki page so that this workaround is clear.



HTH,
Gianluca


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.5 planning

2014-02-27 Thread Sahina Bose


On 02/28/2014 01:14 AM, Itamar Heim wrote:

On 02/27/2014 09:35 PM, Liviu Elama wrote:

I think it will be nice to have:

1. A VM appliance to do backups using the backup APIs or add this
functionality in engine


I'd rather see this tackled by integrating with backup solutions
(hopefully, there are relevant open source ones as well)



2. engine to create and maintain gluster geo-replication


sahina - thoughts?


Yes, managing and monitoring gluster geo-replication from oVirt is on 
the roadmap.
We had started some work on designing it 
(http://www.ovirt.org/Features/Gluster_Geo_Replication), but had to 
abandon it due to other priorities.


Though this may not make it to 3.5, could you review the feature page 
and tell us if these are the features of Geo replication that you were 
looking for?






3. freeBSD as supported guest and guest agent for it


adam - where are we with getting freebsd added as guest?
http://gerrit.ovirt.org/#/c/22273/
Liviu - note it is just a config file (other than the icon).

vinzenz - i don't remember anyone volunteered to test and package the 
guest agent on freebsd?

(liviu - maybe you can try to tackle that?)



Regards
Liviu




On Fri, Feb 28, 2014 at 2:42 AM, Maurice James <midnightst...@msn.com> wrote:

Its a "plug" icon


 > Date: Thu, 27 Feb 2014 15:31:34 +0200
 > From: ih...@redhat.com
 > To: jo...@israelsson.com; users@ovirt.org; mskri...@redhat.com
 > Subject: Re: [Users] oVirt 3.5 planning
 >
 > On 02/25/2014 12:00 PM, Jonas Israelsson wrote:
 > > Not sure if this already exist but I have had to help quite a
few users
 > > that have only user-permissions to restart their VM if they
managed to
 > > hang the OS.
 > > This since they lack the permission to power off the 
machine, and

 > > shutdown simply is not enough. Giving them more permission can
help,
 > > since they
 > > then will have the extended tab with more options including the
ability
 > > to power off a VM , this however IMO is overkill since they are
then
 > > presented
 > > with a vast number of options such as add disk, nic, networks
etc, all
 > > not working since they have no (and should have none)
 > > permission to those objects.
 > >
 > > So adding to the basic view in the user portal a power off
button and
 > > extending the ordinary user-permission to also include power
off I think
 > > would be a good idea.
 > >
 > > Rgds Jonas
 > >
 > > ___
 > > Users mailing list
 > > Users@ovirt.org 
 > > http://lists.ovirt.org/mailman/listinfo/users
 >
 > michal - don't we have power off vm in the basic user portal?
 > ___
 > Users mailing list
 > Users@ovirt.org 
 > http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] method "glusterHostsList" is not supported

2014-03-03 Thread Sahina Bose


On 03/04/2014 01:59 AM, Itamar Heim wrote:

On 03/03/2014 07:26 PM, René Koch wrote:

Hi list,

My hosted engine is running again, so I want to start a new thread for
another issue with my setup.

I have a GlusterFS storage domain, which can be mounted from CLI without
problems. oVirt is 3.4 from ovirt-3.4.0-prerelease repository running on
CentOS 6.5 with latest updates (both OS and oVirt).

Both hosts, which act as hypervisors and GlusterFS nodes are in state
"Non Operational" in oVirt because "Gluster command [Non interactive
user] failed on server ovirt-host02.dmz.linuxland.at."

In engine.log I see the entry "glusterHostList is not supported"
(attached are the log entries when activating one of the hosts):

2014-03-03 18:17:11,764 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(org.ovirt.thread.pool-6-thread-21) [6eee3cbd] Command
GlusterServersListVDSCommand(HostName = ovirt-host02.dmz.linuxland.at,
HostId = dd399eeb-f623-457a-9986-a7efc69010b2) execution failed.
Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException: <type
'exceptions.Exception'>:method "glusterHostsList" is not supported

Can you give me a hint what this means and how I can activate my hosts
and storage again?
Thanks a lot!




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



sahina ?


Do you have vdsm-gluster on the node?
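A quick hedged check/fix on the host itself:

rpm -q vdsm-gluster || yum install -y vdsm-gluster
# then re-activate (or reinstall) the host from the engine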
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] method "glusterHostsList" is not supported

2014-03-04 Thread Sahina Bose


On 03/04/2014 01:46 PM, René Koch wrote:

On 03/04/2014 05:14 AM, Sahina Bose wrote:


On 03/04/2014 01:59 AM, Itamar Heim wrote:

On 03/03/2014 07:26 PM, René Koch wrote:

Hi list,

My hosted engine is running again, so I want to start a new thread for
another issue with my setup.

I have a GlusterFS storage domain, which can be mounted from CLI 
without
problems. oVirt is 3.4 from ovirt-3.4.0-prerelease repository 
running on

CentOS 6.5 with latest updates (both OS and oVirt).

Both hosts, which act as hypervisors and GlusterFS nodes are in state
"Non Operational" in oVirt because "Gluster command [Non interactive
user] failed on server ovirt-host02.dmz.linuxland.at."

In engine.log I see the entry "glusterHostList is not supported"
(attached are the log entries when activating one of the hosts):

2014-03-03 18:17:11,764 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(org.ovirt.thread.pool-6-thread-21) [6eee3cbd] Command
GlusterServersListVDSCommand(HostName = ovirt-host02.dmz.linuxland.at,
HostId = dd399eeb-f623-457a-9986-a7efc69010b2) execution failed.
Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException: <type
'exceptions.Exception'>:method "glusterHostsList" is not supported

Can you give me a hint what this means and how I can activate my hosts
and storage again?
Thanks a lot!




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



sahina ?


Do you have vdsm-gluster on the node?


No, I didn't have it on (both) nodes.

After installing vdsm-gluster the storage works fine again. Thanks a lot!

But there's one strange thing. According to oVirt logs and yum.log,
GlusterFS storage worked fine until yesterday's "yum update", where the
following packages were updated:


Mar 03 10:01:09 Updated: ovirt-hosted-engine-ha-1.1.0-1.el6.noarch
Mar 03 10:01:10 Updated: otopi-1.2.0-0.5.rc.el6.noarch
Mar 03 10:01:11 Updated: ovirt-engine-sdk-python-3.4.0.6-1.el6.noarch
Mar 03 10:01:12 Updated: ovirt-hosted-engine-setup-1.1.0-1.el6.noarch
Mar 03 10:01:13 Updated: libtiff-3.9.4-10.el6_5.x86_64

According to yum.log vdsm-gluster was never installed on these hosts, 
but storage did work.


Shouldn't vdsm-gluster be a requirement for hosts and therefore be 
installed during host setup?


Do you have any clue why the storage did work until the update of these 
packages?


The host moving to Non-Operational state with error "Gluster command 
failed..." is dependent on whether "Enable gluster service" is checked 
on your cluster. This check indicates that you also want to manage 
gluster storage provisioning on the nodes.


A recent change now checks that vdsm-gluster support is available for 
such clusters. That's probably why you are seeing this error after the update.





Regards,
René



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SPICE causes migration failure?

2014-03-04 Thread Sahina Bose


On 03/05/2014 03:14 AM, Ted Miller wrote:


On 3/3/2014 12:26 PM, Dafna Ron wrote:
I don't see a reason why an open monitor would fail migration - at most,
if there is a problem I would close the spice session on src and
restart it at the dst.
can you please attach vdsm/libvirt/qemu logs from both hosts and 
engine logs so that we can see the migration failure reason?


Thanks,
Dafna



On 03/03/2014 05:16 PM, Ted Miller wrote:
I just got my Data Center running again, and am proceeding with some 
setup & testing.


I created a VM (not doing anything useful)
I clicked on the "Console" and had a SPICE console up (viewed in Win7).
I had it printing the time on the screen once per second (while 
date;do sleep 1; done).

I tried to migrate the VM to another host and got in the GUI:

Migration started (VM: web1, Source: s1, Destination: s3, User: 
admin@internal).


Migration failed due to Error: Fatal error during migration (VM: 
web1, Source: s1, Destination: s3).


As I started the migration I happened to think "I wonder how they 
handle the SPICE console, since I think that is a link from the host 
to my machine, letting me see the VM's screen."


After the failure, I tried shutting down the SPICE console, and 
found that the migration succeeded.  I again opened SPICE and had a 
migration fail.  Closed SPICE, migration failed.


I can understand how migrating SPICE is a problem, but, at least 
could we give the victim of this condition a meaningful error 
message?  I have seen a lot of questions about failed migrations 
(mostly due to attached CDs), but I have never seen this discussed. 
If I had not had that particular thought cross my brain at that 
particular time, I doubt that SPICE would have been where I went 
looking for a solution.


If this is the first time this issue has been raised, I am willing 
to file a bug.


Ted Miller
Elkhart, IN, USA

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



In finding the right one-minute slice of the logs, I saw something 
that makes me think this is due to a missing method in the glusterfs 
support.  Others who understand more of what the logs are saying can 
verify or correct my hunch.



If you are referring to 'method "glusterTasksList" is not supported' 
error reported in the logs - this is not related to the migration 
failure. This is used to monitor gluster asynchronous tasks like 
rebalance and should not affect anything else.





Was trying to migrate from s2 to s1.

Logs on fpaste.org:
http://ur1.ca/gr48c
http://ur1.ca/gr48r
http://ur1.ca/gr493
http://ur1.ca/gr49e
http://ur1.ca/gr49i
http://ur1.ca/gr49x
http://ur1.ca/gr4a6

Ted Miller
Elkhart, IN, USA



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Gluster command [gluster peer status IP] failed on server "Name"

2014-03-27 Thread Sahina Bose


On 03/27/2014 09:51 PM, Gabi C wrote:

Update:
On the node I downgraded 10 packages, back to the ones from the 3.3.3
release, rebooted the node and got the same message.

I suspect something on the engine. Restoring it now and will let you know.


Could you share the vdsm log for this error?





On Thu, Mar 27, 2014 at 5:54 PM, Gabi C wrote:


Hello!


Fedora 19 for engine and nodes.

Ovirt-engine just upgraded from 3.3.3 to 3.4, running on a
separate VM (ESXi).

3 nodes acting also as gluster nodes (replicated) also on 3.3.3.

After the engine upgrade, on one of the hosts I ran

yum update
http://resources.ovirt.org/pub/yum-repo/ovirt-release-3.4.noarch.rpm
yum clean all
yum update


snip " /var/log/yum.log"


Mar 27 17:24:47 Updated: ovirt-release-11.1.0-1.noarch
Mar 27 17:26:25 Installed:
vdsm-python-zombiereaper-4.14.6-0.fc19.noarch
Mar 27 17:26:25 Updated: vdsm-python-4.14.6-0.fc19.x86_64
Mar 27 17:26:25 Updated: vdsm-xmlrpc-4.14.6-0.fc19.noarch
Mar 27 17:26:46 Updated: vdsm-4.14.6-0.fc19.x86_64
Mar 27 17:26:46 Updated: vdsm-api-4.14.6-0.fc19.noarch
Mar 27 17:26:46 Updated: file-libs-5.11-13.fc19.x86_64
Mar 27 17:26:46 Updated: file-5.11-13.fc19.x86_64
Mar 27 17:26:47 Updated: python-magic-5.11-13.fc19.noarch
Mar 27 17:26:47 Updated: vdsm-gluster-4.14.6-0.fc19.noarch
Mar 27 17:26:47 Updated: vdsm-jsonrpc-4.14.6-0.fc19.noarch
Mar 27 17:26:47 Updated: vdsm-cli-4.14.6-0.fc19.noarch



Rebooting the node, I can see on "Events"

the 'Subj' message!

Any ideas, hints?

Thanks!










___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Glusterd and Glusterfsd services are not set to autostart on install

2014-03-28 Thread Sahina Bose


On 03/28/2014 04:02 PM, Joop wrote:
Setting up oVirt with a separate cluster for storage using gluster we 
discovered that the gluster services are not set to autostart for the 
default runlevels.
Host(s) were clean Centos-6.5 hosts with the correct repositories and 
then added from the webui. Hosts got all the packages and gluster was 
correctly started for that session but after a reboot gluster isn't 
running and the host was set to non-operational.


Is this a known problem? If not, should I file a BZ, and under which
category?


Which version of gluster are you using?

[Adding gluster-users]



Regards,

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How do you oVirt? Here the answers!

2017-04-17 Thread Sahina Bose
[+gluster-users]

On Fri, Apr 14, 2017 at 1:57 PM, Sandro Bonazzola 
wrote:

> Hi,
> a few weeks ago I opened a survey asking oVirt users about how they use
> oVirt.
> Here are the analytics of the survey:
> https://docs.google.com/forms/d/1s3k4pq_U4vHTuUUKr6dOosGj_
> EGFxqEisansHlqnyyM/viewanalytics#responses
>
> Most notable facts:
> - 197 responses, two times the responses we got with the 3.6 survey!
> - CentOS is confirmed as the most used distribution for running oVirt engine
> (89.8%) and Hypervisors (74.3% which become 94.2% including oVirt Node
> which is a CentOS spin)
> - Fedora is the second distribution (4.6%), mostly used only for running
> oVirt engine
> - oVirt Node jumped from 7.1% on 3.6 survey to 19.9% in 4.1!
> - Surprisingly we have someone running both engine and hypervisor on
> Ubuntu based system, within VMs. I'd like to hear more about this use case.
> - Within other OS we had Red Hat Enterprise Linux 6 and 7 and Scientific
> Linux (I guess 7, wasn't specified)
> - Within the storage, there isn't a "winner" between NFS, Gluster and
> iSCSI. Within Other Storage, Fiber Channel is the most used. We had also:
> Ceph, DAS, EMC ScaleIO, GPFS
> - Hosted Engine is the most used deployment choice with 58.4% (was 50.0%
> in 3.6)
> - 65.8% of Hosted Engine users use it in Hyperconverged setup.
>
> Thank you for your feedback and appreciation, we'll look into suggestions
> you sent in the survey.
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster and oVirt 4.0 questions

2017-04-25 Thread Sahina Bose
On Tue, Apr 25, 2017 at 9:18 PM, Jim Kusznir  wrote:

> So with arbiter, I actually only have two copies of data...Does arbiter
> have at least checksum or something to detect corruption of a copy? (like
> old RAID-4 disk configuration)?
>

Yes, the arbiter brick stores metadata about the files, which is used to
decide which replica holds the good copy of the data in case of conflict.


>
> Ok...Related question:  Is there a way to set up an offsite gluster
> storage server to mirror the contents of my main server?  As "fire"
> insurance basically?  (eventually, I'd like to have an "offsite" DR
> cluster, but I don't have the resources or scale yet for that).
>
> What I'd like to do is place a basic storage server somewhere else and
> have it sync any gluster data changes on a regular basis, and be usable to
> repopulate storage should I loose all of my current cluster (eg, a building
> fire or theft).
>

Yes, the geo-replication feature can help with that. There's a remote data
sync feature introduced for gluster storage domains, that helps with this.
You can set this up such that data from your storage domain is regularly
synced to a remote gluster volume, while ensuring data consistency. The
remote gluster volume does not have to be a replica 3 volume.
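A rough, hedged sketch of the gluster side of such a setup (volume and host
names are placeholders; the oVirt-side scheduling of the sync is not shown):

# after creating the slave volume on the remote site, run on a master node:
gluster system:: execute gsec_create
gluster volume geo-replication data remotehost::data-slave create push-pem
gluster volume geo-replication data remotehost::data-slave start
gluster volume geo-replication data remotehost::data-slave status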


>
> I find gluster has amazing power from what I hear, but I have a hard time
> finding documentation at "the right level" to be useful.  I've found some
> very basic introductory guide, then some very advanced guides that require
> extensive knowledge of gluster already.  Something in the middle to explain
> some of these questions (like arbitrar and migration strategies,
> geo-replication, etc; and how to deploy them) are absent (or at least, i
> haven't found them yet).  I still feel like I'm using something I don't
> understand, and the only avenue I have to learn more is to ask questions
> here, as the docs aren't at an accessible level.
>

Thanks for the feedback. Are you looking for documentation organized on a
use-case basis?


>
> Thanks!
> --Jim
>
> On Mon, Apr 3, 2017 at 10:34 PM, Sahina Bose  wrote:
>
>>
>>
>> On Sat, Apr 1, 2017 at 10:32 PM, Jim Kusznir  wrote:
>>
>>> Thank you!
>>>
>>> Here's the output of gluster volume info:
>>> [root@ovirt1 ~]# gluster volume info
>>>
>>> Volume Name: data
>>> Type: Replicate
>>> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
>>> Status: Started
>>> Number of Bricks: 1 x (2 + 1) = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
>>> Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
>>> Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
>>> Options Reconfigured:
>>> performance.strict-o-direct: on
>>> nfs.disable: on
>>> user.cifs: off
>>> network.ping-timeout: 30
>>> cluster.shd-max-threads: 6
>>> cluster.shd-wait-qlength: 1
>>> cluster.locking-scheme: granular
>>> cluster.data-self-heal-algorithm: full
>>> performance.low-prio-threads: 32
>>> features.shard-block-size: 512MB
>>> features.shard: on
>>> storage.owner-gid: 36
>>> storage.owner-uid: 36
>>> cluster.server-quorum-type: server
>>> cluster.quorum-type: auto
>>> network.remote-dio: enable
>>> cluster.eager-lock: enable
>>> performance.stat-prefetch: off
>>> performance.io-cache: off
>>> performance.read-ahead: off
>>> performance.quick-read: off
>>> performance.readdir-ahead: on
>>> server.allow-insecure: on
>>>
>>> Volume Name: engine
>>> Type: Replicate
>>> Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
>>> Status: Started
>>> Number of Bricks: 1 x (2 + 1) = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
>>> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
>>> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
>>> Options Reconfigured:
>>> performance.readdir-ahead: on
>>> performance.quick-read: off
>>> performance.read-ahead: off
>>> performance.io-cache: off
>>> performance.stat-prefetch: off
>>> cluster.eager-lock: enable
>>> network.remote-dio: off
>>> cluster.quorum-type: auto
>>> cluster.server-quorum-type: server
>>> storage.owner-uid: 36
>>> storage.owner-gid: 36
>>> features.shard: on
>>> features.shard-block-size: 512MB
>>> performance.low-prio-threads: 32
>>> cluster

Re: [ovirt-users] oVirt Hosted Engine Setup fails

2017-05-01 Thread Sahina Bose
There's an error creating the HE VM. Which version of qemu-kvm are you
using?

2017-05-01 11:47:45,038+0100 ERROR (vm/92faf919) [virt.vm]
(vmId='92faf919-b7b1-42dc-a70c-15c6eb32ba2f') The vm start process failed
(vm:632)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 563, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 2021, in _run
self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
123, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in
wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in
createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed',
conn=self)
libvirtError: internal error: process exited while connecting to monitor:
/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x7
-msg timestamp=on
Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied


On Mon, May 1, 2017 at 6:44 PM, Manuel Luis Aznar <
manuel.luis.az...@gmail.com> wrote:

> Hello there,
>
> I have been looking in the internet using google why my installation of
> ovirt-hosted-engine is failing.
>
> I have found this link:
>
>  https://www.mail-archive.com/users@ovirt.org/msg40864.html (Hosted
> engine install failed; vdsm upset about broker)
>
> It seems to be the same error...
>
> So to knarra and Jamie Lawrence my question is:
>
> Did you manage to discover the problem?? In my installation I am using
> nfs and not gluster...
>
> I have read the error and is the same error "BrokerConnectionError: ...".
> The ovirt-ha-agent and ovirt-ha-broker did not start when the installation
> is creating the engine VM...
>
> As I have said before any help would be very appreciated...no matter whom
> will give it
> Thanks for all in advance
> Manuel Luis Aznar
>
>
>
> 2017-05-01 12:21 GMT+01:00 Manuel Luis Aznar  >:
>
> > Hello Simone and all the comunity,
> >
> > I have been doing the instalation of ovirt hosted engine again and it
> > fails, libvirtd and vdsmd services are failing. They are failing with the
> > following errors:
> >
> >
> > libvirt daemon
> >
> >   libvirtd.service - Virtualization daemon
> >Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
> > vendor preset: enabled)
> >   Drop-In: /etc/systemd/system/libvirtd.service.d
> >└─unlimited-core.conf
> >Active: active (running) since lun 2017-05-01 11:43:49 WEST; 14min ago
> >  Docs: man:libvirtd(8)
> >http://libvirt.org
> >  Main PID: 21993 (libvirtd)
> >CGroup: /system.slice/libvirtd.service
> >└─21993 /usr/sbin/libvirtd --listen
> >
> > may 01 11:43:49 host1.bajada.es systemd[1]: Starting Virtualization
> > daemon...
> > may 01 11:43:49 host1.bajada.es systemd[1]: Started Virtualization
> daemon
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Hosted Engine Setup fails

2017-05-01 Thread Sahina Bose
On Mon, May 1, 2017 at 9:39 PM, Manuel Luis Aznar <
manuel.luis.az...@gmail.com> wrote:

> Hello there Sahina Bose
>
> First of all thanks for your answer...
>
> Yeah it seems there is a problem creating the Hosted Engine VM (I mean the
> VM that will have the ovirt-engine). I have done the installation before
> and sometimes fails and others not...
>
> To the question: Which version on qemu-kvm are you using?
>
> [root@host1 manuel]# yum list | grep qemu-kvm
> qemu-kvm-common-ev.x86_64  10:2.6.0-28.el7_3.9.1
> @ovirt-4.1-snapshot
> qemu-kvm-ev.x86_64 10:2.6.0-28.el7_3.9.1
> @ovirt-4.1-snapshot
> qemu-kvm-ev-debuginfo.x86_64   10:2.6.0-28.el7_3.9.1
> @ovirt-4.1-snapshot
> qemu-kvm-tools-ev.x86_64   10:2.6.0-28.el7_3.9.1
> @ovirt-4.1-snapshot
> qemu-kvm.x86_6410:1.5.3-126.el7_3.6
>  updates
> qemu-kvm-common.x86_64 10:1.5.3-126.el7_3.6
>  updates
> qemu-kvm-tools.x86_64  10:1.5.3-126.el7_3.6
>  updates
>
> It seems I am using version 2.6.0. Do you know which version is fine?¿?
>


The qemu version looks to be fine, but adding the virt team in case they
know the cause of the error.

"Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied"
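A few hedged checks that usually narrow this down on the host:

lsmod | grep kvm     # kvm plus kvm_intel (or kvm_amd) should be loaded
ls -l /dev/kvm       # the qemu/kvm user needs read-write access here
getenforce           # if Enforcing, check /var/log/audit/audit.log for denials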


>
> Thanks for all in advance
> I will be waiting for an answer
> Manuel Luis Aznar
>
>
> 2017-05-01 16:07 GMT+01:00 Sahina Bose :
>
>> There's an error creating the HE VM. Which version on qemu-kvm are you
>> using?
>>
>> 2017-05-01 11:47:45,038+0100 ERROR (vm/92faf919) [virt.vm]
>> (vmId='92faf919-b7b1-42dc-a70c-15c6eb32ba2f') The vm start process
>> failed (vm:632)
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/virt/vm.py", line 563, in _startUnderlyingVm
>> self._run()
>>   File "/usr/share/vdsm/virt/vm.py", line 2021, in _run
>> self._connection.createXML(domxml, flags),
>>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
>> line 123, in wrapper
>> ret = f(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in
>> wrapper
>> return func(inst, *args, **kwargs)
>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in
>> createXML
>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>> conn=self)
>> libvirtError: internal error: process exited while connecting to monitor:
>> /dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x7
>> -msg timestamp=on
>> Could not access KVM kernel module: Permission denied
>> failed to initialize KVM: Permission denied
>>
>>
>> On Mon, May 1, 2017 at 6:44 PM, Manuel Luis Aznar <
>> manuel.luis.az...@gmail.com> wrote:
>>
>>> Hello there,
>>>
>>> I have been looking in the internet using google why my installation of
>>> ovirt-hosted-engine is failing.
>>>
>>> I have found this link:
>>>
>>>  https://www.mail-archive.com/users@ovirt.org/msg40864.html (Hosted
>>> engine install failed; vdsm upset about broker)
>>>
>>> It seems to be the same error...
>>>
>>> So to knarra and Jamie Lawrence my question is:
>>>
>>> Did you manage to discover the problem?? In my instalation I am using
>>> nfs and not gluster...
>>>
>>> I have read the error and is the same error "BrokerConnectionError: ...".
>>> The ovirt-ha-agent and ovirt-ha-broker did not start when the
>>> installation
>>> is creating the engine VM...
>>>
>>> As I have said before any help would be very appreciated...no matter whom
>>> will give it
>>> Thanks for all in advance
>>> Manuel Luis Aznar
>>>
>>>
>>>
>>> 2017-05-01 12:21 GMT+01:00 Manuel Luis Aznar <
>>> manuel.luis.az...@gmail.com>:
>>>
>>> > Hello Simone and all the comunity,
>>> >
>>> > I have been doing the instalation of ovirt hosted engine again and it
>>> > fails, libvirtd and vdsmd services are failing. They are failing with
>>> the
>>> > following errors:
>>> >
>>> >
>>> > libvirt daemon
>>> >
>>> >   libvirtd.service - Virtualization daemon
>>> >Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
>>> > vendor preset: enabled)
>>> >   Drop-In: /etc/systemd/system/libvirtd.service.d
> >>> >└─unlimited-core.conf
>>> >Active: active (running) since lun 2017-05-01 11:43:49 WEST; 14min
>>> ago
>>> >  Docs: man:libvirtd(8)
>>> >http://libvirt.org
>>> >  Main PID: 21993 (libvirtd)
>>> >CGroup: /system.slice/libvirtd.service
> >>> >└─21993 /usr/sbin/libvirtd --listen
>>> >
>>> > may 01 11:43:49 host1.bajada.es systemd[1]: Starting Virtualization
>>> > daemon...
>>> > may 01 11:43:49 host1.bajada.es systemd[1]: Started Virtualization
>>> daemon
>>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Trouble Adding Gluster Host in oVirt Manager

2017-05-02 Thread Sahina Bose
On Tue, May 2, 2017 at 9:59 AM, Langley, Robert 
wrote:

> Okay, no, the Engine does not have that communication.
> So, this answers a question I have had. The requirements for the Engine
> are similar in nature as vSphere's VCenter server, where the VCenter server
> has to have a direct connection to the storage networks. Makes sense, they
> have to be managed by the Engine. I wasn't sure if the Engine somehow used
> the host's network connection without having its own.
>
> Then, what I need to know, is how to go about connecting the storage
> network to the Engine VM? I have not found this in documentation.
> Also, do I need to do the same thing for the Engine's storage network
> (192.168.3.x)? Or, is that supposed to have been done already, as part of
> setup?
> I have looked around in the Manager and attempted to add an interface for
> the VM storage network (192.168.2.x) for the Engine. I believe I was met
> with resistance. I feel as though I may have been going about it wrong.
>

Is the gsa-stor1e.engine.local accessible to the engine VM? If so, you
should be adding the hosts to the engine using this address.
Once the host is added, create a Logical network in the cluster (non-VM
network) and assign it a role of "Gluster" network in the cluster. You
can then assign this network to gsa-stor1s.local using the host's
"Setup Networks" option. This will ensure that gluster is aware of this
network to be used while creating bricks from the UI.




> Thank you,
> Robert
> --
> *From:* knarra 
> *Sent:* Tuesday, May 2, 2017 3:44:53 AM
> *To:* Langley, Robert; users
> *Subject:* Re: [ovirt-users] Trouble Adding Gluster Host in oVirt Manager
>
> Hi,
>
> Can you please tell me which is the version of ovirt you are using ? I
> looked at the engine log and i see that engine failed to establish SSH
> session with the host . Can you check if your hosts are reachable from the
> engine ?
>
> 2017-04-25 16:46:23,944-07 ERROR 
> [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand]
> (default task-34) [0335c88e-96cc-46f4-ab22-cbf10d4645a2] Failed to
> establish session wi
> th host 'gsa-stor1s.stor.local': SSH connection timed out connecting to '
> root@192.168.2.5'
> 2017-04-25 16:46:23,944-07 WARN  
> [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand]
> (default task-34) [0335c88e-96cc-46f4-ab22-cbf10d4645a2] Validation of
> action 'AddVds'
> failed for user admin@internal-authz. Reasons:
> VAR__ACTION__ADD,VAR__TYPE__HOST,$server 192.168.2.5,VDS_CANNOT_
> CONNECT_TO_SERVER
> 2017-04-25 16:47:53,357-07 ERROR 
> [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand]
> (default task-59) [3724100f-2593-41d6-b8fc-513c24cb2074] Failed to
> establish session wi
> th host 'gsa-stor1s.stor.local': SSH connection timed out connecting to '
> root@192.168.2.5'
>
> Thanks
> kasturi
>
> On 05/02/2017 01:47 AM, Langley, Robert wrote:
>
> Attempt #3 to send engine log file with the compressed file. -Robert
> These log files can be large for sending in email. So, I’m guessing it is
> best to send them as compressed. I’m learning here with the mailing list.
>
> _
> *From:* Langley, Robert
> *Sent:* Monday, May 1, 2017 12:58 PM
> *To:* 'users'  
> *Cc:* 'Fred Rolland'  
> *Subject:* Re: Trouble Adding Gluster Host in oVirt Manager
>
>
> Engine.log attached from 20170427 (only including the one day, in order to
> decrease size)
> Please, bear with me, I’m not sure about the best practice for sending the
> log. I hope the attachment goes through okay. << File:
> engine.log-20170427.txt >>
>
>
>
> ___
> Users mailing listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Newly deployed cluster with glusterfs doesn't show engine and gives errors

2017-05-02 Thread Sahina Bose
On Thu, Apr 27, 2017 at 7:22 AM, Endre Karlson 
wrote:

> VDSM ovhost20 command GlusterTaskListVDS failed: 'AutoProxy[instance]
> object has no attribute 'glusterTaskList' do you guys have any idea on this?
>

Is vdsm-gluster installed on the host?


>
> Also I cannot select hosts as hosted engine host when I add a new host nor
> does the HostedEgnine vm show in the vm's pane for the cluster.
>

You will need to add a master storage domain before the HE VM is imported
and shown in the vm's pane


>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster Bricks Out of Space

2017-05-10 Thread Sahina Bose
On Mon, May 8, 2017 at 10:46 PM, Ryan Housand <
rhous...@empoweredbenefits.com> wrote:

> We have three gluster shares (_data, _engine, _export) created by a brick
> located on three of our VM hosts. See output from "gluster volume info"
> below:
>
>  Volume Name: data
> Type: Replicate
> Volume ID: c07fdf43-b838-4e4b-bb26-61dbf406cb57
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: vmhost01-chi.empoweredbenefits.com:/gluster/brick2/data
> Brick2: vmhost02-chi.empoweredbenefits.com:/gluster/brick2/data
> Brick3: vmhost03-chi.empoweredbenefits.com:/gluster/brick2/data (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: engine
> Type: Distributed-Replicate
> Volume ID: 25455f13-75ba-4bc6-926a-d06ee7c5859a
> Status: Started
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: vmhost01-chi.empoweredbenefits.com:/gluster/brick1/engine
> Brick2: vmhost02-chi.empoweredbenefits.com:/gluster/brick1/engine
> Brick3: vmhost03-chi.empoweredbenefits.com:/gluster/brick1/engine
> (arbiter)
> Brick4: vmhost04-chi:/mnt/engine
> Brick5: vmhost05-chi:/mnt/engine
> Brick6: vmhost06-chi:/mnt/engine (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: export
> Type: Replicate
> Volume ID: a4c3a49a-fa83-4a62-9523-989c8e016c35
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: vmhost01-chi.empoweredbenefits.com:/gluster/brick3/export
> Brick2: vmhost02-chi.empoweredbenefits.com:/gluster/brick3/export
> Brick3: vmhost03-chi.empoweredbenefits.com:/gluster/brick3/export
> (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Our issue is that we ran out of space on our gluster-engine bricks which
> caused our Hosted Engine vm to crash. We added additional bricks from new
> VM Hosts (see vmhost05 to vmhost06 above) but we still are unable to
> restart our Hosted Engine due to the first three space being depleted. My
> understanding is that I need to extend the bricks that are 100% full on our
> engine partition. Is it the best practice to stop the glusterd service or
> can I use "gloster volume stop engine" to only stop the volume I need to
> extend? Also, if I need to stop glusterd will my VMs hosted on my ovirt
> cluster be affected by mount points export and data being off line?
>

Adding the 3 bricks to engine does not redistribute the data. You need to
run rebalance on the engine gluster volume for this. There's currently a bug
where rebalance causes corruption when performed with ongoing IO on the
volume.
I think the best way for you to do this is to put hosted-engine into global
maintenance, stop the hosted-engine VM and rebalance the engine gluster volume.
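A hedged sketch of that sequence (run from one of the hosted-engine hosts;
the volume name "engine" is as in your setup):

hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
gluster volume rebalance engine start
gluster volume rebalance engine status   # wait until it completes
hosted-engine --set-maintenance --mode=none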

What was the original size of the engine gluster volume? (Curious to
understand why you ran out of space)

The VMs running on data gluster volume should not be affected by this.



>
> Thanks,
>
> Ryan
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
__

Re: [ovirt-users] Setting up GeoReplication

2017-05-14 Thread Sahina Bose
Adding Aravinda

On Sat, May 13, 2017 at 11:21 PM, Jim Kusznir  wrote:

> Hi All:
>
> I've been trying to set up georeplication for a while now, but can't seem
> to make it work.  I've found documentation on the web (mostly
> https://gluster.readthedocs.io/en/refactor/Administrator%20Guide/Geo%
> 20Replication/), and I found http://blog.gluster.org/2015/09/introducing-
> georepsetup-gluster-geo-replication-setup-tool/
>
> Unfortunately, it seems that some critical steps are missing from both,
> and I can't figure out for sure what they are.
>
> My environment:
>
> Production: replica 2 + arbitrator running on my 3-node oVirt cluster, 3
> volumes (engine, data, iso).
>
> New geo-replication: Raspberry Pi3 with USB hard drive shoved in some
> other data closet off-site.
>
> I've installed rasbian-lite, and after much fighting, got
> glusterfs-*-3.8.11 installed.  I've created my mountpoint (USB hard drive,
> much larger than my gluster volumes), and then ran the command.  I get this
> far:
>
> [OK] georep.nwfiber.com is Reachable(Port 22)
> [OK] SSH Connection established r...@georep.nwfiber.com
> [OK] Master Volume and Slave Volume are compatible (Version: 3.8.11)
> [NOT OK] Unable to Mount Gluster Volume georep.nwfiber.com:engine-rep
>
> Trying it with the steps in the gluster docs also has the same problem.
> No long files are generated on the slave.  Log files on the master include:
>
> [root@ovirt1 geo-replication]# more georepsetup.mount.log
> [2017-05-13 17:26:27.318599] I [MSGID: 100030] [glusterfsd.c:2454:main]
> 0-glusterfs: Started running glusterfs version 3.8.11 (args:
>  glusterfs --xlator-option="*dht.lookup-unhashed=off" --volfile-server
> localhost --volfile-id engine -l /var/log/glusterfs/geo-repli
> cation/georepsetup.mount.log --client-pid=-1 /tmp/georepsetup_wZtfkN)
> [2017-05-13 17:26:27.341170] I [MSGID: 101190] 
> [event-epoll.c:628:event_dispatch_epoll_worker]
> 0-epoll: Started thread with index 1
> [2017-05-13 17:26:27.341260] E [socket.c:2309:socket_connect_finish]
> 0-glusterfs: connection to ::1:24007 failed (Connection refused
> )
> [2017-05-13 17:26:27.341846] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify]
> 0-glusterfsd-mgmt: failed to connect with remote-host: local
> host (Transport endpoint is not connected)
> [2017-05-13 17:26:31.335849] I [MSGID: 101190] 
> [event-epoll.c:628:event_dispatch_epoll_worker]
> 0-epoll: Started thread with index 2
> [2017-05-13 17:26:31.337545] I [MSGID: 114020] [client.c:2356:notify]
> 0-engine-client-0: parent translators are ready, attempting co
> nnect on transport
> [2017-05-13 17:26:31.344485] I [MSGID: 114020] [client.c:2356:notify]
> 0-engine-client-1: parent translators are ready, attempting co
> nnect on transport
> [2017-05-13 17:26:31.345146] I [rpc-clnt.c:1965:rpc_clnt_reconfig]
> 0-engine-client-0: changing port to 49157 (from 0)
> [2017-05-13 17:26:31.350868] I [MSGID: 114020] [client.c:2356:notify]
> 0-engine-client-2: parent translators are ready, attempting co
> nnect on transport
> [2017-05-13 17:26:31.355946] I [MSGID: 114057] [client-handshake.c:1440:
> select_server_supported_programs] 0-engine-client-0: Using P
> rogram GlusterFS 3.3, Num (1298437), Version (330)
> [2017-05-13 17:26:31.356280] I [rpc-clnt.c:1965:rpc_clnt_reconfig]
> 0-engine-client-1: changing port to 49157 (from 0)
> Final graph:
> +---
> ---+
>   1: volume engine-client-0
>   2: type protocol/client
>   3: option clnt-lk-version 1
>   4: option volfile-checksum 0
>   5: option volfile-key engine
>   6: option client-version 3.8.11
>   7: option process-uuid ovirt1.nwfiber.com-25660-2017/
> 05/13-17:26:27:311929-engine-client-0-0-0
>   8: option fops-version 1298437
>   9: option ping-timeout 30
>  10: option remote-host ovirt1.nwfiber.com
>  11: option remote-subvolume /gluster/brick1/engine
>  12: option transport-type socket
>  13: option username 028984cf-0399-42e6-b04b-bb9b1685c536
>  14: option password eae737cc-9659-405f-865e-9a7ef97a3307
>  15: option filter-O_DIRECT off
>  16: option send-gids true
>  17: end-volume
>  18:
>  19: volume engine-client-1
>  20: type protocol/client
>  21: option ping-timeout 30
>  22: option remote-host ovirt2.nwfiber.com
>  23: option remote-subvolume /gluster/brick1/engine
>  24: option transport-type socket
>  25: option username 028984cf-0399-42e6-b04b-bb9b1685c536
>  26: option password eae737cc-9659-405f-865e-9a7ef97a3307
>  27: option filter-O_DIRECT off
>  28: option send-gids true
>  29: end-volume
>  30:
>  31: volume engine-client-2
>  32: type protocol/client
>  33: option ping-timeout 30
>  34: option remote-host ovirt3.nwfiber.com
>  35: option remote-subvolume /gluster/brick1/engine
>  36: option transport-type socket
>  37: option username 028984cf-0399-42e6-b04b-bb9b1685c536
>  38: option passwo

Re: [ovirt-users] Setting up GeoReplication

2017-05-15 Thread Sahina Bose
On Mon, May 15, 2017 at 7:40 PM, Jim Kusznir  wrote:

> I tried to create a gluster volume on the georep node by running:
>
> gluster volume create engine-rep replica 1 georep.nwfiber.com:/mnt/gluste
> r/engine-rep
>
> I got back an error saying replica must be > 1.  So I tried to create it
> again:
>

"replica 1" is not required on command when you're not replicating to
another server.
So,

gluster volume create engine-rep georep.nwfiber.com:/mnt/gluster/engine-rep
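Once created, the volume also needs to be started before the geo-replication
setup can mount it:

gluster volume start engine-rep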




> gluster volume create engine-rep replica 2 
> georep.nwfiber.com:/mnt/gluster/engine-rep
> server2.nwfiber.com:/mnt/gluster/engine-rep
>
> where server2 did not exist.  That failed too, but I don't recall the
> error message.
>
> gluster is installed, but when I try and start it with the init script, it
> fails to start with a complaint about reading the block file; my googling
> indicated that's the error you get until you've created a gluster volume,
> and that was the first clue to me that maybe I needed to create one first.
>
> So, how do I create a replica 1 volume?
>
>
> Thinking way ahead, I have a related replica question:  Currently my ovirt
> nodes are also my gluster nodes (replica 2 arbitrar 1).  Eventually I'll
> want to pull my gluster off onto dedicated hardware I suspect.  If I do so,
> do I need 3 servers, or is a replica 2 sufficient?  I guess I could have an
> ovirt node continue to be an arbitrar...  I would eventually like to
> distribute my ovirt cluster accross multiple locations with the option for
> remote failover (say location A looses all its network and/or power; have
> important VMs started at location B in addition to location B's normal
> VMs).  I assume at this point the recommended arch would be:
>
> 2 Gluster servers at each location
> Each location has a gluster volume for that location, and is georep for
> the other location (so all my data will physically exist on 4 gluster
> servers).  I probably won't have more than 2 or 3 ovirt hosts at each
> location, so I don't expect this to be a "heavy use" system.
>

To move to a dedicated gluster setup, 2 gluster servers + 1 oVirt node
holding the arbiter brick should work.

Distributed or stretch cluster does not work off the bat yet. If a volume
is geo-replicated to another location, the destination volume cannot be a
storage domain in a cluster, as geo-replication needs the destination
volume data to be in sync with master.
Another option is for the master volume/cluster to be spread across
geographies - but this requires the network latency to be very low in order
for this to work, as all operations are synchronous in gluster.
We're working on features in both gluster and Ovirt to support the
stretched cluster requirements.


> Am I on track?  I'd be interested to learn what others suggest for this
> deployment model.
>
> On Sun, May 14, 2017 at 11:09 PM, Sahina Bose  wrote:
>
>> Adding Aravinda
>>
>> On Sat, May 13, 2017 at 11:21 PM, Jim Kusznir 
>> wrote:
>>
>>> Hi All:
>>>
>>> I've been trying to set up georeplication for a while now, but can't
>>> seem to make it work.  I've found documentation on the web (mostly
>>> https://gluster.readthedocs.io/en/refactor/Administr
>>> ator%20Guide/Geo%20Replication/), and I found http://blog.gluster.org/
>>> 2015/09/introducing-georepsetup-gluster-geo-replication-setup-tool/
>>>
>>> Unfortunately, it seems that some critical steps are missing from both,
>>> and I can't figure out for sure what they are.
>>>
>>> My environment:
>>>
>>> Production: replica 2 + arbitrator running on my 3-node oVirt cluster, 3
>>> volumes (engine, data, iso).
>>>
>>> New geo-replication: Raspberry Pi3 with USB hard drive shoved in some
>>> other data closet off-site.
>>>
>>> I've installed rasbian-lite, and after much fighting, got
>>> glusterfs-*-3.8.11 installed.  I've created my mountpoint (USB hard drive,
>>> much larger than my gluster volumes), and then ran the command.  I get this
>>> far:
>>>
>>> [OK] georep.nwfiber.com is Reachable(Port 22)
>>> [OK] SSH Connection established r...@georep.nwfiber.com
>>> [OK] Master Volume and Slave Volume are compatible (Version: 3.8.11)
>>> [NOT OK] Unable to Mount Gluster Volume georep.nwfiber.com:engine-rep
>>>
>>> Trying it with the steps in the gluster docs also has the same problem.
>>> No long files are generated on the slave.  Log files on the master include:
>>>
>>> [root@ovirt1 geo-replication]# more georepset

Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-18 Thread Sahina Bose
On Thu, May 18, 2017 at 3:20 PM, Mike DePaulo  wrote:

> Well, I tried both of the following:
> 1. Having only a boot partition and a PV for the OS that does not take
> up the entire disk, and then specifying "sda" in Hosted Engine Setup.
> 2. Having not only a boot partition and a PV for the OS, but also an
> empty (and not formatted) /dev/sda3 PV that I created with fdisk.
> Then, specfying "sda3" in Hosted Engine Setup.
>
> Both attempts resulted in errors like this:
> failed: [centerpoint.ad.depaulo.org] (item=/dev/sda3) => {"failed":
> true, "failed_when_result": true, "item": "/dev/sda3", "msg": "
> Device /dev/sda3 not found (or ignored by filtering).\n", "rc": 5}
>

Can you provide the gdeploy logs? I think it's at ~/.gdeploy/gdeploy.log


>
> It seems like having gluster bricks on the same disk as the OS doesn't
> work at all.
>
> I am going to buy separate OS SSDs.
>
> -Mike
>
> On Tue, May 9, 2017 at 6:22 AM, Mike DePaulo  wrote:
> > On Mon, May 8, 2017 at 9:00 AM, knarra  wrote:
> >> On 05/07/2017 04:48 PM, Mike DePaulo wrote:
> >>>
> >>> Hi. I am trying to follow this guide. Is it possible to use part of my
> >>> OS disk /dev/sda for the bricks?
> >>>
> >>> https://www.ovirt.org/blog/2017/04/up-and-running-with-
> ovirt-4-1-and-gluster-storage/
> >>>
> >>> I am using oVirt Node 4.1.1.1. I am aware of the manual partitioning
> >>> requirements. I am guessing I have to create an LV for the OS that
> >>> does not take up the entire disk during install, manually create a pv
> >>> like /dev/sda3 afterwards, and then run Hosted Engine Setup and
> >>> specify /sda3 rather than sdb?
> >>>
> >>> Thanks,
> >>> -Mike
> >>> ___
> >>> Users mailing list
> >>> Users@ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/users
> >>
> >>
> >> Hi Mike,
> >>
> >> If you create gluster bricks on the same disk as OS it works but we
> do
> >> not recommend setting up gluster bricks on the same disk as the os. When
> >> user tries to create a gluster volume using by specifying the bricks
> from
> >> root partition it displays an error message "Bricks in root parition not
> >> recommended and use force at the end to create volume".
> >>
> >> Thanks
> >>
> >> kasturi
> >>
> >
> > Thank you very much. Is my process for doing this (listed in my
> > original email) correct though?
> >
> > -Mike
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error when taking snapshot on gluster SD

2017-05-23 Thread Sahina Bose
Engine.log only indicated a timeout. Could you check the vdsm.log and the
gluster mount logs to see if there are any errors related to snapshot creation?
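
For example, on the host that was running the VM (paths assumed to be the
defaults):

zgrep 3102e526-172b-49a7-ae64-5879a7eb8079 /var/log/vdsm/vdsm.log*
grep -i error /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log

That should show whether the snapshot request reached vdsm and whether the
gluster mount reported errors around that time.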

On Tue, May 23, 2017 at 6:47 PM, Sven Achtelik 
wrote:

> Hi All,
>
>
>
> I was taking a snapshot from a gluster based SD and got this errors:
>
>
>
> [root@ovirt-engine ovirt-engine]# zgrep -e 
> 3102e526-172b-49a7-ae64-5879a7eb8079
> engine.log-*
>
> engine.log-20170523.gz:2017-05-23 01:54:13,223-05 INFO
> [org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
> (default task-18) [3102e526-172b-49a7-ae64-5879a7eb8079] Lock Acquired to
> object 'EngineLock:{exclusiveLocks='[4eeaa622-17f9-4586-b99a-cddb3ad942de= ACTION_TYPE_FAILED_SNAPSHOT_IS_BEING_TAKEN_FOR_VM$VmName wc_app01>]',
> sharedLocks='null'}'
>
>
>
> engine.log-20170523.gz:2017-05-23 01:54:13,247-05 INFO
> [org.ovirt.engine.core.bll.memory.MemoryStorageHandler] (default task-18)
> [3102e526-172b-49a7-ae64-5879a7eb8079] The memory volumes of VM (name
> 'wc_app01', id '4eeaa622-17f9-4586-b99a-cddb3ad942de') will be stored in
> storage domain (name 'data', id '185718c7-1da9-442c-bedc-776e4ac95c33')
>
>
>
> engine.log-20170523.gz:2017-05-23 01:54:13,295-05 INFO
> [org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
> (default task-18) [3102e526-172b-49a7-ae64-5879a7eb8079] Running command:
> CreateAllSnapshotsFromVmCommand internal: false. Entities affected :  ID:
> 4eeaa622-17f9-4586-b99a-cddb3ad942de Type: VMAction group
> MANIPULATE_VM_SNAPSHOTS with role type USER
>
>
>
> engine.log-20170523.gz:2017-05-23 01:54:17,717-05 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-18) [2ff883d4] EVENT_ID: USER_CREATE_SNAPSHOT(45),
> Correlation ID: 3102e526-172b-49a7-ae64-5879a7eb8079, Job ID:
> 020b9b14-16d9-4303-8fa3-d60b248bc158, Call Stack: null, Custom Event ID:
> -1, Message: Snapshot 'Sesam_SC20170523015128500@5aH9YZNk4-A' creation
> for VM 'wc_app01' was initiated by sven@asl.local@asl.local-authz.
>
>
>
> engine.log-20170523.gz:2017-05-23 01:57:27,103-05 WARN
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler1) [1b15b09b] EVENT_ID: 
> USER_CREATE_LIVE_SNAPSHOT_FINISHED_FAILURE(170),
> Correlation ID: 3102e526-172b-49a7-ae64-5879a7eb8079, Job ID:
> 020b9b14-16d9-4303-8fa3-d60b248bc158, Call Stack:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
> VDSGenericException: VDSNetworkException: Message timeout which can be
> caused by communication issues (Failed with error VDS_NETWORK_ERROR and
> code 5022)
>
>
>
> engine.log-20170523.gz:2017-05-23 01:58:11,969-05 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler1) [] EVENT_ID: 
> USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69),
> Correlation ID: 3102e526-172b-49a7-ae64-5879a7eb8079, Job ID:
> 020b9b14-16d9-4303-8fa3-d60b248bc158, Call Stack:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
> VDSGenericException: VDSNetworkException: Message timeout which can be
> caused by communication issues (Failed with error VDS_NETWORK_ERROR and
> code 5022)
>
>
>
> [root@ovirt-engine ovirt-engine]# zgrep -e  
> 020b9b14-16d9-4303-8fa3-d60b248bc158
> engine.log-*
>
> engine.log-20170523.gz:2017-05-23 01:54:17,717-05 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-18) [2ff883d4] EVENT_ID: USER_CREATE_SNAPSHOT(45),
> Correlation ID: 3102e526-172b-49a7-ae64-5879a7eb8079, Job ID:
> 020b9b14-16d9-4303-8fa3-d60b248bc158, Call Stack: null, Custom Event ID:
> -1, Message: Snapshot 'Sesam_SC20170523015128500@5aH9YZNk4-A' creation
> for VM 'wc_app01' was initiated by sven@asl.local@asl.local-authz.
>
>
>
> engine.log-20170523.gz:2017-05-23 01:57:27,103-05 WARN
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler1) [1b15b09b] EVENT_ID: 
> USER_CREATE_LIVE_SNAPSHOT_FINISHED_FAILURE(170),
> Correlation ID: 3102e526-172b-49a7-ae64-5879a7eb8079, Job ID:
> 020b9b14-16d9-4303-8fa3-d60b248bc158, Call Stack:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
> VDSGenericException: VDSNetworkException: Message timeout which can be
> caused by communication issues (Failed with error VDS_NETWORK_ERROR and
> code 5022)
>
>
>
> engine.log-20170523.gz:2017-05-23 01:58:11,969-05 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler1) [] EVENT_ID: 
> USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69),
> Correlation ID: 3102e526-172b-49a7-ae64-5879a7eb8079, Job ID:
> 020b9b14-16d9-4303-8fa3-d60b248bc158, Call Stack:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetwo

Re: [ovirt-users] vm has been paused due to unknown storage

2017-05-24 Thread Sahina Bose
The glusterfs logs contain below errors:
[2017-05-22 18:12:50.941883] E [MSGID: 133010]
[shard.c:1725:shard_common_lookup_shards_cbk] 0-gv2-shard: Lookup on shard
50 failed. Base file gfid = 33f1fe3e-c626-49f2-861e-2259c972931d [No data
available]
[2017-05-22 18:12:50.945085] W [fuse-bridge.c:1291:fuse_err_cbk]
0-glusterfs-fuse: 61306713: FSYNC() ERR => -1 (No data available)

Krutika, could you take a look?

On Thu, May 25, 2017 at 1:02 AM,  wrote:

> Hi,
>
> I setup an ovirt hosted enine, in only one server with local gluster
> bricks.
>
> When running a MS SQL 2012 process to rebuild a data base, which take
> around 4 hours, after a while the VM is paused with the error:
>
> vm has been paused due to unknown storage
>
> The VM disk is in Thin provision
>
> Ovirt and gluter versions:
>
> Version 4.1.1.8-1.el7.centos
>
> glusterfs-cli-3.8.11-1.el7.x86_64
> glusterfs-libs-3.8.11-1.el7.x86_64
> glusterfs-3.8.11-1.el7.x86_64
> glusterfs-client-xlators-3.8.11-1.el7.x86_64
> glusterfs-fuse-3.8.11-1.el7.x86_64
> glusterfs-api-3.8.11-1.el7.x86_64
> glusterfs-server-3.8.11-1.el7.x86_64
>
>
> I can find the reason why
> The logs are attached.
>
> Any idea?
>
> Thanks
>
> --
> --
> Jose Ferradeira
> http://www.logicworks.pt
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GetStorageDeviceListVDS failed

2017-05-25 Thread Sahina Bose
Could you provide the supervdsm.log from either host2 or host3?

Were the packages on these hosts updated?
What's the version of python-blivet? Is this different from host1?

On Wed, May 24, 2017 at 6:26 PM, Joel Diaz  wrote:

> Good morning oVirt community,
>
> I need some assistance.
>
> I am running a 3 host, hosted engine, gluster environment. The hosts are
> running centos 7.3 and the engine is version 4.1.2.3
>
> Since yesterday, every 2 hours, the engine reports the error below on host
> 2 and 3.
>
> event ID 10802
>
> VDSM  command GetStorageDeviceListVDS failed:
> 'gluster_vg_sdb-/dev/mapper/36589cfc00f05aea0f2b50f8d76e5: read
> failed after 0 of 4096 at 0: Input/output error'
>
> I've attached logs from both hosts. Hosts 3 is the SPM and holds the
> arbiter brick of all 4 gluster volumes.
>
> As always, your help is appreciated.
>
> Joel
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-node-ng & gluster installation question

2017-05-25 Thread Sahina Bose
On Thu, May 25, 2017 at 3:13 PM, NUNIN Roberto 
wrote:

> I’m trying to install latest oVirt release on a cluster lab.
>
>
>
> I have 6 servers and I need to use gluster from hyperconverged hosted
> engine lab, all of them have volumes for gluster storage, apart OS disks.
>
>
>
> Servers are installed with ovirt-node-ng-installer-ovirt-4.1-pre-2017051210
> iso image.
>
> After installation, full update.
>
>
>
> I’m try to follow the guide in : http://www.ovirt.org/blog/
> 2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/
>
> In this guide, I can seen in the host selection, the possibility to add
> more than three hosts to the cluster.
>
> In current cockpit version, I can’t see this.
>

Yes, we changed it to a 3 host setup only and removed the possibility to
add more hosts in the initial setup.


>
>
> These are cockpit components, installed in all six servers:
>
>
>
> cockpit-ws-140-1.el7.centos.x86_64
>
> cockpit-system-140-1.el7.centos.noarch
>
> cockpit-140-1.el7.centos.x86_64
>
> cockpit-dashboard-140-1.el7.centos.x86_64
>
> cockpit-storaged-140-1.el7.centos.noarch
>
> cockpit-ovirt-dashboard-0.10.7-0.0.18.el7.centos.noarch
>
> cockpit-bridge-140-1.el7.centos.x86_64
>
>
>
> Two questions:
>
>
>
> Must I proceed with only three servers and only after add the remainder
> three to the cluster, to have distributed&replicated gluster solution or I
> must change something in the first installation phase, to add all six
> servers from the beginning ?
>

Please proceed with 3 servers, and then add the remaining 3 to the cluster
from the oVirt UI.

You can add additional gluster volumes from the 3 new servers and create
new storage domain.

If you want to expand the gluster volume that was created in the initial
setup (i.e. add new bricks to it) - it's advised to add the bricks while there
are no VMs running (this applies if you're on gluster version <= 3.8, as there's
a bug with rebalance during ongoing I/O)
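
As a rough sketch, expanding a plain replica 3 volume named 'data' with bricks
from the three new hosts would look like the following (host names and brick
paths are placeholders; for an arbiter volume the new brick set must follow the
same 2 data + 1 arbiter pattern):

gluster volume add-brick data newhost1:/gluster_bricks/data/data newhost2:/gluster_bricks/data/data newhost3:/gluster_bricks/data/data
gluster volume rebalance data start
gluster volume rebalance data status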


>
>
> In the second cockpit screen there aren’t default repo/packages, must I
> leave it blank ? (servers are already updated).
>

Yes, leave this blank


>
>
> Thanks in advance for any hints.
>
>
>
> *Roberto *
>
>
>
> --
>
> Questo messaggio e' indirizzato esclusivamente al destinatario indicato e
> potrebbe contenere informazioni confidenziali, riservate o proprietarie.
> Qualora la presente venisse ricevuta per errore, si prega di segnalarlo
> immediatamente al mittente, cancellando l'originale e ogni sua copia e
> distruggendo eventuali copie cartacee. Ogni altro uso e' strettamente
> proibito e potrebbe essere fonte di violazione di legge.
>
> This message is for the designated recipient only and may contain
> privileged, proprietary, or otherwise private information. If you have
> received it in error, please notify the sender immediately, deleting the
> original and all copies and destroying any hard copies. Any other use is
> strictly prohibited and may be unlawful.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GetStorageDeviceListVDS failed

2017-05-25 Thread Sahina Bose
On Thu, May 25, 2017 at 8:29 PM, Joel Diaz  wrote:

> I almost forgot about python-blivet. All 3 hosts are on version
> 0.61.15.59-1.
>
> The hosts and engine were updated using the snapshot repo last week. I
> updated them just now. I just noticed that host 2 and required the same 20
> updates but host 1 required an additional update, ovirt-engine-appliance.
>
>
>
> On May 25, 2017 10:40 AM, "Joel Diaz"  wrote:
>
> Hello Sahina,
>
> Thanks for the response.
>
> Attached are the requested supervsdm logs from both hosts.
>
>
This seems to be an error returned by the lvm module.

"lvm lvs -a --unit k --nosuffix --nameprefixes --unquoted --noheadings
-ovg_name,lv_name,lv_uuid,lv_size,lv_attr,segtype" - running this seems to
throw the error. Can you check?

Do you have stale devicemapper entries on these hosts?
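
For example, something like the below (illustrative only - the map name is a
placeholder) lists the device-mapper maps so you can spot entries pointing at
devices that no longer exist:

dmsetup ls
dmsetup info -c
dmsetup remove <name-of-stale-map>

Only remove a map after confirming it is stale and unused. If the lvm commands
run cleanly afterwards, that would confirm the cause.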


> Joel
>
>
>
> On May 25, 2017 5:41 AM, "Sahina Bose"  wrote:
>
>> Could you provide the supervdsm.log from either host2 or host3.
>>
>> Were the packages on these hosts updated?
>> What's the version of python-blivet? Is this different from host1?
>>
>> On Wed, May 24, 2017 at 6:26 PM, Joel Diaz  wrote:
>>
>>> Good morning oVirt community,
>>>
>>> I need some assistance.
>>>
>>> I am running a 3 host, hosted engine, gluster environment. The hosts are
>>> running centos 7.3 and the engine is version 4.1.2.3
>>>
>>> Since yesterday, every 2 hours, the engine reports the error below on
>>> host 2 and 3.
>>>
>>> event ID 10802
>>>
>>> VDSM  command GetStorageDeviceListVDS failed:
>>> 'gluster_vg_sdb-/dev/mapper/36589cfc00f05aea0f2b50f8d76e5: read
>>> failed after 0 of 4096 at 0: Input/output error'
>>>
>>> I've attached logs from both hosts. Hosts 3 is the SPM and holds the
>>> arbiter brick of all 4 gluster volumes.
>>>
>>> As always, your help is appreciated.
>>>
>>> Joel
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] GetStorageDeviceListVDS failed

2017-05-29 Thread Sahina Bose
ut error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/outbox: read failed after 0
>> of 4096 at 134209536: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/outbox: read failed after 0
>> of 4096 at 4096: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
>> of 4096 at 0: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
>> of 4096 at 1073676288: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
>> of 4096 at 1073733632: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
>> of 4096 at 4096: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
>> of 4096 at 0: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
>> of 4096 at 2147418112: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
>> of 4096 at 2147475456: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
>> of 4096 at 4096: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
>> 4096 at 0: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
>> 4096 at 134152192: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
>> 4096 at 134209536: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
>> 4096 at 4096: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0
>> of 4096 at 0: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0
>> of 4096 at 134152192: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0
>> of 4096 at 134209536: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0
>> of 4096 at 4096: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
>> of 4096 at 0: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
>> of 4096 at 1073676288: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
>> of 4096 at 1073733632: Input/output error
>>
>>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
>> of 4096 at 4096: Input/output error
>>
>>   
>> /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/d7e958f7-e307-4c33-95ea-f98532ad6fd0:
>> read failed after 0 of 4096 at 0: Input/output error
>>
>>   
>> /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/d7e958f7-e307-4c33-95ea-f98532ad6fd0:
>> read failed after 0 of 4096 at 21608988672: Input/output error
>>
>>   
>> /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/d7e958f7-e307-4c33-95ea-f98532ad6fd0:
>> read failed after 0 of 4096 at 21609046016: Input/output error
>>
>>   
>> /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/d7e958f7-e307-4c33-95ea-f98532ad6fd0:
>> read failed after 0 of 4096 at 4096: Input/output error
>>
>>   /dev/mapper/36589cfc00b8e821765d5febefda2: read failed after 0 of
>> 4096 at 0: Input/output error
>>
>>   /dev/mapper/36589cfc00b8e821765d5febefda2: read failed after 0 of
>> 4096 at 536870846464: Input/output error
>>
>>  /dev/mapper/36589cfc00b8e821765d5febefda2: read failed after 0 of
>> 4096 at 536870903808: Input/output error
>>
>>   /dev/mapper/36589cfc00b8e821765d5febefda2: read failed after 0 of
>> 4096 at 4096: Input/output error
>>
>>   LVM2_VG_NAME=centos_ovirt-hyp-03 LVM2_LV_NAME=home
>> LVM2_LV_UUID=d4pR6F-JD7f-JR7F-RjYv-EkB9-ka7b-4BVJqZ
>> LVM2_LV_SIZE=63832064.00 LVM2_LV_ATTR=-wi-ao LVM2_SEGTYPE=linear
>>
>>   LVM2_VG_NAME=centos_ovirt-hyp-03 LVM2_LV_NAME=root
>> LVM2_LV_UUID=Dk3ihf-qtXB-8h7m-0zul-d9wo-oTGd-4gijh1
>> LVM2_LV_SIZE=52428800.00 LVM2_LV_ATTR=-wi-ao LVM2_SEGTYPE=linear
>>
>>   LVM2_VG_NAME=centos_ovirt-hyp-03 LVM2_LV_NAME=swap
>> LVM2_LV_UUID=RVT0Wl-f3Kx-vxha-CIFA-nK1M-IEeU-CNyFFu
>> LVM2_LV_SIZE=8192000.00 LVM2_LV_ATTR=-wi-ao LVM2_SEGTYPE=linear
>>
>>   LVM2_VG_NAME=gluster_vg_sdb LVM

Re: [ovirt-users] Cannot connect to gluster storage after HE installation

2017-05-30 Thread Sahina Bose
On Tue, May 30, 2017 at 6:56 PM, NUNIN Roberto 
wrote:

> Hi
>
>
>
> Ovirt-node-ng installation using iso image 20170526.
>
> I’ve made five attempts, each one ended with different fail, still loking
> at : http://www.ovirt.org/blog/2017/04/up-and-running-with-
> ovirt-4-1-and-gluster-storage/
>
>
>
> The last one was done, successfully (I hope)  taking care of :
>
>
>
> 1)Configure networking before to set date & time, to have chronyd up
> & running
>
> 2)Modifying gdeploy generated script, still looking at ntpd instead
> of chronyd
>
> 3)Being a gluster based cluster, configured partition on each data
> disk (sdb > sdb1 type 8e + partprobe)
>
> 4)Blacklisted all nodes on multipath.conf
>
> 5)Double check if refuses from previous attempts was already visible
> (for example gluster volume group > vgremove –f –y ).
>
>
>
> After HE installation and restart, no advisory about additional servers to
> add to the cluster, so manually added as new servers. Successfully.
>
>
>
> Now I must add storage, but unfortunately, nothing is shown in the Gluster
> drop-down list, even if I change the host.
>
> I’ve chosen “Use managed gluster”.
>

I'm assuming HE is up and running, as you're able to access the engine.

Can you check if the Default cluster has "Gluster service" enabled? (This
would have been a prompt during HE install, and the service is enabled
based on your choice.)


Are the gluster volumes listed in the "Volumes" tab? The engine needs to be
aware of the volumes to use in the New storage domain dialog.



>
> AT a first look, glusterd is up & running (but disabled at system startup
> !) :
>
>
>
> aps-te65-mng.mydomain.it:Loaded: loaded 
> (/usr/lib/systemd/system/glusterd.service;
> disabled; vendor preset: disabled)
>
> aps-te65-mng.mydomain.it:Active: active (running) since Tue
> 2017-05-30 09:54:23 CEST; 4h 40min ago
>
>
>
> aps-te66-mng.mydomain.it:Loaded: loaded 
> (/usr/lib/systemd/system/glusterd.service;
> disabled; vendor preset: disabled)
>
> aps-te66-mng.mydomain.it:Active: active (running) since Tue
> 2017-05-30 09:54:24 CEST; 4h 40min ago
>
>
>
> aps-te67-mng.mydomain.it:Loaded: loaded 
> (/usr/lib/systemd/system/glusterd.service;
> disabled; vendor preset: disabled)
>
> aps-te67-mng.mydomain.it:Active: active (running) since Tue
> 2017-05-30 09:54:24 CEST; 4h 40min ago
>
>
>
> data gluster volume is ok :
>
>
>
> [root@aps-te65-mng ~]# gluster volume info data
>
>
>
> Volume Name: data
>
> Type: Replicate
>
> Volume ID: ea6a2c9f-b042-42b4-9c0e-1f776e50b828
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 1 x (2 + 1) = 3
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: aps-te65-mng.mydomain.it:/gluster_bricks/data/data
>
> Brick2: aps-te66-mng.mydomain.it:/gluster_bricks/data/data
>
> Brick3: aps-te67-mng.mydomain.it:/gluster_bricks/data/data (arbiter)
>
> Options Reconfigured:
>
> cluster.granular-entry-heal: enable
>
> performance.strict-o-direct: on
>
> network.ping-timeout: 30
>
> storage.owner-gid: 36
>
> storage.owner-uid: 36
>
> user.cifs: off
>
> features.shard: on
>
> cluster.shd-wait-qlength: 1
>
> cluster.shd-max-threads: 8
>
> cluster.locking-scheme: granular
>
> cluster.data-self-heal-algorithm: full
>
> cluster.server-quorum-type: server
>
> cluster.quorum-type: auto
>
> cluster.eager-lock: enable
>
> network.remote-dio: off
>
> performance.low-prio-threads: 32
>
> performance.stat-prefetch: off
>
> performance.io-cache: off
>
> performance.read-ahead: off
>
> performance.quick-read: off
>
> transport.address-family: inet
>
> performance.readdir-ahead: on
>
> nfs.disable: on
>
> [root@aps-te65-mng ~]#
>
>
>
> [root@aps-te65-mng ~]# gluster volume status data
>
> Status of volume: data
>
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> 
> --
>
> Brick aps-te65-mng.mydomain.it:/gluster_bric
>
> ks/data/data49153 0  Y
> 52710
>
> Brick aps-te66-mng.mydomain.it:/gluster_bric
>
> ks/data/data49153 0  Y
> 45265
>
> Brick aps-te67-mng.mydomain.it:/gluster_bric
>
> ks/data/data49153 0  Y
> 45366
>
> Self-heal Daemon on localhost   N/A   N/AY
> 57491
>
> Self-heal Daemon on aps-te67-mng.mydomain.it N/A   N/AY
> 46488
>
> Self-heal Daemon on aps-te66-mng.mydomain.it N/A   N/AY
> 46384
>
>
>
> Task Status of Volume data
>
> 
> --
>
> There are no active volume tasks
>
>
>
> [root@aps-te65-mng ~]#
>
>
>
> Any hints on this ? May I send logs ?
>
> In hosted-engine log, apart fencing problems with HPE iLO3 agent, I can
> find only these errors:
>
>
>
> 2017-05-30 11:58:56,981+02 ERROR 
> [org.ovirt.engine.core.utils.servlet.ServletUtils]
> (default task-23) [] Can't read f

Re: [ovirt-users] Cannot connect to gluster storage after HE installation

2017-06-01 Thread Sahina Bose
On Tue, May 30, 2017 at 9:20 PM, NUNIN Roberto 
wrote:

>
>
> I'm assuming HE is up and running, as you're able to access the engine.
>
> Yes. It is up and running.
>
> Can you check if the Default cluster has "Gluster service" enabled. (This
> would have been a prompt during HE install, and the service is enabled
> based on your choice)
>
> This was not activated. Did you mean that, during HE setup the prompt was
> “Do you want to activate Gluster service on this host ?” Honestly,
> considering that it was under gluster installation, and the default value
> was “No”, I’ve left No.
>
> Now I’ve activated on the GUI.
>
>
>  Are the gluster volumes listed in the "Volumes" tab? The engine needs to
> be aware of the volumes to use in the New storage domain dialog.
>
> Yes, now the gluster volumes are shown.
>
>
> Last question Sahina:
>
>
>
> It is correct that glusterd service is disabled on hypervisors ?
>

The glusterd service is not supposed to be disabled. Glusterd needs to be
running for gluster peer communication.
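
If it shows as disabled on a host, enabling and starting it with the usual
systemd commands is enough:

systemctl enable glusterd
systemctl start glusterd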


>
>
> Thanks for pointing me to the right solution.
>
>
>
>
>
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
>
> Questo messaggio e' indirizzato esclusivamente al destinatario indicato e
> potrebbe contenere informazioni confidenziali, riservate o proprietarie.
> Qualora la presente venisse ricevuta per errore, si prega di segnalarlo
> immediatamente al mittente, cancellando l'originale e ogni sua copia e
> distruggendo eventuali copie cartacee. Ogni altro uso e' strettamente
> proibito e potrebbe essere fonte di violazione di legge.
>
> This message is for the designated recipient only and may contain
> privileged, proprietary, or otherwise private information. If you have
> received it in error, please notify the sender immediately, deleting the
> original and all copies and destroying any hard copies. Any other use is
> strictly prohibited and may be unlawful.
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vm has been paused due to unknown storage

2017-06-01 Thread Sahina Bose
On Wed, May 31, 2017 at 3:01 PM, David Gossage 
wrote:

> On Wed, May 31, 2017 at 4:16 AM,  wrote:
>
>> Hi,
>>
>> I found the cause of this problem. I had to turn off sharding.
>>
>
>
> Did you have sharding enabled but not have any sharded VM images or were
> their shards missing on some bricks?
>


Yes, interested to know why you had to turn off sharding. Could you
elaborate?


>
>
>
>> ------
>> *De: *supo...@logicworks.pt
>> *Para: *"Sahina Bose" 
>> *Cc: *"ovirt users" 
>> *Enviadas: *Sexta-feira, 26 De Maio de 2017 12:27:43
>> *Assunto: *Re: [ovirt-users] vm has been paused due to unknown storage
>>
>> Hi,
>>
>> I updated glusterfs:
>> glusterfs-client-xlators-3.8.12-1.el7.x86_64
>> glusterfs-cli-3.8.12-1.el7.x86_64
>> glusterfs-api-3.8.12-1.el7.x86_64
>> glusterfs-fuse-3.8.12-1.el7.x86_64
>> glusterfs-server-3.8.12-1.el7.x86_64
>> glusterfs-libs-3.8.12-1.el7.x86_64
>> glusterfs-3.8.12-1.el7.x86_64
>>
>> Now I cannot add a volume disk preallocated, after a while it breaks.
>>
>> message log:
>> May 26 11:18:16 node journal: vdsm root ERROR VM metrics collection
>> failed#012Traceback (most recent call last):#012  File
>> "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line 221, in
>> send_metrics#012diskinfo['readOps']#012KeyError: 'readOps'
>>
>> vdsm.log
>> 2017-05-26 11:18:16,715+0100 ERROR (periodic/3) [root] VM metrics
>> collection failed (vmstats:264)
>> 2017-05-26 11:19:39,369+0100 ERROR (tasks/5) [storage.Volume] Unexpected
>> error (fileVolume:456)
>> 2017-05-26 11:19:39,373+0100 ERROR (tasks/5) [storage.Volume] Unexpected
>> error (volume:1107)
>> 2017-05-26 11:19:39,374+0100 ERROR (tasks/5) [storage.TaskManager.Task]
>> (Task='5b2adb9a-e24e-48fa-9f01-f21c23588aef') Unexpected error (task:870)
>>
>> glusterfs
>> [2017-05-26 10:53:08.247219] W [MSGID: 114031]
>> [client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-gv2-client-0: remote
>> operation failed. Path: /.shard/55b94942-dee5-4f69-8b0f-52e251ac6f5e.164
>> (----) [No data available]
>> [2017-05-26 10:53:14.899499] W [MSGID: 114031]
>> [client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-gv2-client-0: remote
>> operation failed. Path: /.shard/55b94942-dee5-4f69-8b0f-52e251ac6f5e.167
>> (----) [No data available]
>> [2017-05-26 10:53:14.899526] E [MSGID: 133010]
>> [shard.c:1725:shard_common_lookup_shards_cbk] 0-gv2-shard: Lookup on
>> shard 167 failed. Base file gfid = 55b94942-dee5-4f69-8b0f-52e251ac6f5e
>> [No data available]
>> [2017-05-26 10:53:19.712567] W [MSGID: 114031]
>> [client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-gv2-client-0: remote
>> operation failed. Path: /.shard/55b94942-dee5-4f69-8b0f-52e251ac6f5e.169
>> (----) [No data available]
>> [2017-05-26 10:53:19.712614] E [MSGID: 133010]
>> [shard.c:1725:shard_common_lookup_shards_cbk] 0-gv2-shard: Lookup on
>> shard 169 failed. Base file gfid = 55b94942-dee5-4f69-8b0f-52e251ac6f5e
>> [No data available]
>> [2017-05-26 10:53:29.419317] W [MSGID: 114031]
>> [client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-gv2-client-0: remote
>> operation failed. Path: /.shard/55b94942-dee5-4f69-8b0f-52e251ac6f5e.173
>> (----) [No data available]
>> [2017-05-26 10:53:29.419369] E [MSGID: 133010]
>> [shard.c:1725:shard_common_lookup_shards_cbk] 0-gv2-shard: Lookup on
>> shard 173 failed. Base file gfid = 55b94942-dee5-4f69-8b0f-52e251ac6f5e
>> [No data available]
>>
>>
>> thanks
>>
>> --
>> *De: *"Sahina Bose" 
>> *Para: *supo...@logicworks.pt, "Krutika Dhananjay" 
>> *Cc: *"ovirt users" 
>> *Enviadas: *Quinta-feira, 25 De Maio de 2017 7:12:40
>> *Assunto: *Re: [ovirt-users] vm has been paused due to unknown storage
>>
>> The glusterfs logs contain below errors:
>> [2017-05-22 18:12:50.941883] E [MSGID: 133010]
>> [shard.c:1725:shard_common_lookup_shards_cbk] 0-gv2-shard: Lookup on
>> shard 50 failed. Base file gfid = 33f1fe3e-c626-49f2-861e-2259c972931d
>> [No data available]
>> [2017-05-22 18:12:50.945085] W [fuse-bridge.c:1291:fuse_err_cbk]
>> 0-glusterfs-fuse: 61306713: FSYNC() ERR => -1 (No data available)
>>
>> Krutika, could you take a look?
>>
>> On Thu, May 25, 2017 at 1:02 AM,  wrote:
>>
>>> Hi,
>>>
>>> I setu

Re: [ovirt-users] oVirt gluster sanlock issue

2017-06-05 Thread Sahina Bose
Can we have the gluster mount logs and brick logs to check if it's the same
issue?
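
The mount logs are usually under /var/log/glusterfs/ on each host (named after
the mount point, e.g. rhev-data-center-mnt-glusterSD-<host>:_<volume>.log), and
the brick logs are under /var/log/glusterfs/bricks/.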

On Sun, Jun 4, 2017 at 11:21 PM, Abi Askushi 
wrote:

> I clean installed everything and ran into the same.
> I then ran gdeploy and encountered the same issue when deploying engine.
> Seems that gluster (?) doesn't like 4K sector drives. I am not sure if it
> has to do with alignment. The weird thing is that gluster volumes are all
> ok, replicating normally and no split brain is reported.
>
> The solution to the mentioned bug (1386443
> ) was to format with
> 512 sector size, which for my case is not an option:
>
> mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine
> illegal sector size 512; hw sector is 4096
>
> Is there any workaround to address this?
>
> Thanx,
> Alex
>
>
> On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi 
> wrote:
>
>> Hi Maor,
>>
>> My disk are of 4K block size and from this bug seems that gluster replica
>> needs 512B block size.
>> Is there a way to make gluster function with 4K drives?
>>
>> Thank you!
>>
>> On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk  wrote:
>>
>>> Hi Alex,
>>>
>>> I saw a bug that might be related to the issue you encountered at
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1386443
>>>
>>> Sahina, maybe you have any advise? Do you think that BZ1386443is related?
>>>
>>> Regards,
>>> Maor
>>>
>>> On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi 
>>> wrote:
>>> > Hi All,
>>> >
>>> > I have installed successfully several times oVirt (version 4.1) with 3
>>> nodes
>>> > on top glusterfs.
>>> >
>>> > This time, when trying to configure the same setup, I am facing the
>>> > following issue which doesn't seem to go away. During installation i
>>> get the
>>> > error:
>>> >
>>> > Failed to execute stage 'Misc configuration': Cannot acquire host id:
>>> > (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22,
>>> 'Sanlock
>>> > lockspace add failure', 'Invalid argument'))
>>> >
>>> > The only different in this setup is that instead of standard
>>> partitioning i
>>> > have GPT partitioning and the disks have 4K block size instead of 512.
>>> >
>>> > The /var/log/sanlock.log has the following lines:
>>> >
>>> > 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace
>>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/m
>>> nt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-
>>> c2b8-46e7-b2c8-91e4a5bb2047/dom_md/ids:0
>>> > 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource
>>> > ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/m
>>> nt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-
>>> c2b8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576
>>> > for 2,9,23040
>>> > 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace
>>> > a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/m
>>> nt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-
>>> c8b4d5e5e922/dom_md/ids:0
>>> > 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD
>>> > 0x7f59b8c0:0x7f59b8d0:0x7f59b0101000 result -22:0 match res
>>> > 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader
>>> offset
>>> > 127488 rv -22
>>> > /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e
>>> 7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids
>>> > 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450
>>> > 88c2244c-a782-40ed-9560-6cfa4d46f853.v0.neptune
>>> > 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result -22
>>> >
>>> > And /var/log/vdsm/vdsm.log says:
>>> >
>>> > 2017-06-03 19:19:38,176+0200 WARN  (jsonrpc/3)
>>> > [storage.StorageServer.MountConnection] Using user specified
>>> > backup-volfile-servers option (storageServer:253)
>>> > 2017-06-03 19:21:12,379+0200 WARN  (periodic/1) [throttled] MOM not
>>> > available. (throttledlog:105)
>>> > 2017-06-03 19:21:12,380+0200 WARN  (periodic/1) [throttled] MOM not
>>> > available, KSM stats will be missing. (throttledlog:105)
>>> > 2017-06-03 19:21:14,714+0200 WARN  (jsonrpc/1)
>>> > [storage.StorageServer.MountConnection] Using user specified
>>> > backup-volfile-servers option (storageServer:253)
>>> > 2017-06-03 19:21:15,515+0200 ERROR (jsonrpc/4) [storage.initSANLock]
>>> Cannot
>>> > initialize SANLock for domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922
>>> > (clusterlock:238)
>>> > Traceback (most recent call last):
>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py",
>>> line
>>> > 234, in initSANLock
>>> > sanlock.init_lockspace(sdUUID, idsPath)
>>> > SanlockException: (107, 'Sanlock lockspace init failure', 'Transport
>>> > endpoint is not connected')
>>> > 2017-06-03 19:21:15,515+0200 WARN  (jsonrpc/4)
>>> > [storage.StorageDomainManifest] lease did not initialize successfully
>>> > (sd:557)
>>> > Traceback (most recent call last):
>>> >   File "/usr/share/vdsm/storage/sd.py", line 552, in initDomainLock
>>> > self._domainLock.initLock(self.getDomainLease())
>>> >   File "/usr/lib/python2.7/site-packages/vdsm

Re: [ovirt-users] Hosted engine

2017-06-14 Thread Sahina Bose
What do "hosted-engine --vm-status" and "gluster volume status
engine" tell you? Are all the bricks running as per gluster volume status?

Can you try to restart the ovirt-ha-agent and ovirt-ha-broker services?
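
For example, on each host:

systemctl restart ovirt-ha-broker
systemctl restart ovirt-ha-agent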

If HE still has issues powering up, please provide agent.log and broker.log
from /var/log/ovirt-hosted-engine-ha and the gluster mount logs from
/var/log/glusterfs/rhev-data-center-mnt-<...>.log

On Thu, Jun 8, 2017 at 6:57 PM, Joel Diaz  wrote:

> Good morning oVirt community,
>
> I'm running a three host gluster environment with hosted engine.
>
> Yesterday the engine went down and has not been able to come up properly.
> It tries to start on all three host.
>
> I have two gluster volumes, data and engne. The data storage domian volume
> is no longer mounted but the engine volume is up. I've restarted the
> gluster service and make sure both volumes were running. The data volume
> will not mount.
>
> How can I get the engine running properly again?
>
> Thanks,
>
> Joel
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Deploy Ovirt VM's By Ansible Playbook Issue

2017-06-15 Thread Sahina Bose
On Thu, Jun 15, 2017 at 9:30 PM, khalid mahmood 
wrote:

> Dear Users
>
> *Procedure :*
> 1- create clean volume replica 2 distributed with glusterfs .
>

Only replica 3 or replica 3 with arbiter is supported as a storage domain in
oVirt.
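
For example, a supported layout would be a replica 3 volume with an arbiter
brick (host names and brick paths below are placeholders):

gluster volume create data replica 3 arbiter 1 host1:/gluster_bricks/data/data host2:/gluster_bricks/data/data host3:/gluster_bricks/data/data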


> 2- create clean ovirt-engine machine .
> 3- create clean vm from scratch then create template from this vm.
> 4- then create two vm from this template (vm1) & (vm2).
> 5- then delete the two vm .
> 6- create new two vm with the same name (vm1) & (vm2) from the template .
> 7- till now the two vm stable and work correctly .
> 8- repeat no (7) three time all vm's is working correctly .
>
> *issue :*
> i have ansible playbook to deploy vm's to our ovirt , my playbook use the
> above template to deploy the vm's .
> my issue is after ansible script deploy the vm's , all vm's disk crash and
> the template disk is crash also and the script make change into the
> template checksum hash .
>
> you can look at ansible parameters :
>
> - hosts: localhost
> connection: local
> gather_facts: false
> tasks:
>   - name: entering
> ovirt_auth:
> url: https://ovirt-engine.elcld.net:443/ovirt-engine/api
> username: admin@internal
> password: pass
> insecure: yes
>   - name: creating
> ovirt_vms:
>   auth: "{{ ovirt_auth }}"
>   name: myvm05
>   template: mahdi
>   #state: present
>   cluster: Cluster02
>   memory: 4GiB
>   cpu_cores: 2
>   comment: Dev
>   #type: server
>   cloud_init:
> host_name: vm01
> user_name: root
> root_password: pass
> nic_on_boot: true
> nic_boot_protocol: static
> nic_name: eth0
> dns_servers: 109.224.19.5
> dns_search: elcld.net
> nic_ip_address: 10.10.20.2
> nic_netmask: 255.255.255.0
> nic_gateway: 10.10.20.1
>   - name: Revoke
> ovirt_auth:
>   state: absent
>   ovirt_auth: "{{ ovirt_auth }}"
>
> can you assist me with this issue by checking if that any missing in my
> ansible .
>
> best regards
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine

2017-06-15 Thread Sahina Bose
 └─3150 /usr/bin/python 
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent
> --no-daemon
>
>
>
> Jun 15 08:56:15 ovirt-hyp-01.example.lan systemd[1]: Started oVirt Hosted
> Engine High Availability Monitoring Agent.
>
> Jun 15 08:56:15 ovirt-hyp-01.example.lan systemd[1]: Starting oVirt Hosted
> Engine High Availability Monitoring Agent...
>
> Jun 15 09:17:18 ovirt-hyp-01.example.lan ovirt-ha-agent[3150]:
> ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine
> ERROR Engine VM stopped on localhost
>
> [root@ovirt-hyp-01 ‾]# systemctl status ovirt-ha-broker.service
>
> ● ovirt-ha-broker.service - oVirt Hosted Engine High Availability
> Communications Broker
>
>Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service;
> enabled; vendor preset: disabled)
>
>Active: active (running) since Thu 2017-06-15 08:54:06 EDT; 24min ago
>
> Main PID: 968 (ovirt-ha-broker)
>
>CGroup: /system.slice/ovirt-ha-broker.service
>
>└─968 /usr/bin/python 
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker
> --no-daemon
>
>
>
> Jun 15 08:54:06 ovirt-hyp-01.example.lan systemd[1]: Started oVirt Hosted
> Engine High Availability Communications Broker.
>
> Jun 15 08:54:06 ovirt-hyp-01.example.lan systemd[1]: Starting oVirt Hosted
> Engine High Availability Communications Broker...
>
> Jun 15 08:56:16 ovirt-hyp-01.example.lan ovirt-ha-broker[968]:
> ovirt-ha-broker ovirt_hosted_engine_ha.broker.listener.ConnectionHandler
> ERROR Error handling request, data: '...1b55bcf76'
>
> Traceback
> (most recent call last):
>
>   File
> "/usr/lib/python2.7/site-packages/ovirt...
>
> Hint: Some lines were ellipsized, use -l to show in full.
>
>
>
>
>
>
>
>
>
> [root@ovirt-hyp-01 ‾]# systemctl restart ovirt-ha-agent.service
>
> [root@ovirt-hyp-01 ‾]# systemctl status ovirt-ha-agent.service
>
> ● ovirt-ha-agent.service - oVirt Hosted Engine High Availability
> Monitoring Agent
>
>Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service;
> enabled; vendor preset: disabled)
>
>Active: active (running) since Thu 2017-06-15 09:19:21 EDT; 26s ago
>
> Main PID: 8563 (ovirt-ha-agent)
>
>CGroup: /system.slice/ovirt-ha-agent.service
>
>└─8563 /usr/bin/python 
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent
> --no-daemon
>
>
>
> Jun 15 09:19:21 ovirt-hyp-01.example.lan systemd[1]: Started oVirt Hosted
> Engine High Availability Monitoring Agent.
>
> Jun 15 09:19:21 ovirt-hyp-01.example.lan systemd[1]: Starting oVirt Hosted
> Engine High Availability Monitoring Agent...
>
> [root@ovirt-hyp-01 ‾]# systemctl restart ovirt-ha-broker.service
>
> [root@ovirt-hyp-01 ‾]# systemctl status ovirt-ha-broker.service
>
> ● ovirt-ha-broker.service - oVirt Hosted Engine High Availability
> Communications Broker
>
>Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service;
> enabled; vendor preset: disabled)
>
>Active: active (running) since Thu 2017-06-15 09:20:59 EDT; 28s ago
>
> Main PID: 8844 (ovirt-ha-broker)
>
>CGroup: /system.slice/ovirt-ha-broker.service
>
>└─8844 /usr/bin/python 
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker
> --no-daemon
>
>
>
> Jun 15 09:20:59 ovirt-hyp-01.example.lan systemd[1]: Started oVirt Hosted
> Engine High Availability Communications Broker.
>
> Jun 15 09:20:59 ovirt-hyp-01.example.lan systemd[1]: Starting oVirt Hosted
> Engine High Availability Communications Broker...
>
>
> On Jun 14, 2017 4:45 AM, "Sahina Bose"  wrote:
>
>> What's the output of "hosted-engine --vm-status" and "gluster volume
>> status engine" tell you? Are all the bricks running as per gluster vol
>> status?
>>
>> Can you try to restart the ovirt-ha-agent and ovirt-ha-broker services?
>>
>> If HE still has issues powering up, please provide agent.log and
>> broker.log from /var/log/ovirt-hosted-engine-ha and gluster mount logs
>> from /var/log/glusterfs/rhev-data-center-mnt .log
>>
>> On Thu, Jun 8, 2017 at 6:57 PM, Joel Diaz  wrote:
>>
>>> Good morning oVirt community,
>>>
>>> I'm running a three host gluster environment with hosted engine.
>>>
>>> Yesterday the engine went down and has not been able to come up
>>> properly. It tries to start on all three host.
>>>
>>> I have two gluster volumes, data and engne. The data storage domian
>>> volume is no longer mounted but the engine volume is up. I've restarted the
>>> gluster service and make sure both volumes were running. The data volume
>>> will not mount.
>>>
>>> How can I get the engine running properly again?
>>>
>>> Thanks,
>>>
>>> Joel
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine

2017-06-16 Thread Sahina Bose
I don't notice anything wrong on the gluster end.

Maybe Simone can help take a look at HE behaviour?

On Fri, Jun 16, 2017 at 6:14 PM, Joel Diaz  wrote:

> Good morning,
>
> Info requested below.
>
> [root@ovirt-hyp-02 ~]# hosted-engine --vm-start
>
> Exception in thread Client localhost:54321 (most likely raised during
> interpreter shutdown):VM exists and its status is Up
>
>
>
> [root@ovirt-hyp-02 ~]# ping engine
>
> PING engine.example.lan (192.168.170.149) 56(84) bytes of data.
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=1 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=2 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=3 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=4 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=5 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=6 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=7 Destination
> Host Unreachable
>
> From ovirt-hyp-02.example.lan (192.168.170.143) icmp_seq=8 Destination
> Host Unreachable
>
>
>
>
>
> [root@ovirt-hyp-02 ~]# gluster volume status engine
>
> Status of volume: engine
>
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> 
> --
>
> Brick 192.168.170.141:/gluster_bricks/engin
>
> e/engine49159 0  Y
> 1799
>
> Brick 192.168.170.143:/gluster_bricks/engin
>
> e/engine49159 0  Y
> 2900
>
> Self-heal Daemon on localhost   N/A   N/AY
> 2914
>
> Self-heal Daemon on ovirt-hyp-01.example.lan   N/A   N/A
> Y   1854
>
>
>
> Task Status of Volume engine
>
> 
> --
>
> There are no active volume tasks
>
>
>
> [root@ovirt-hyp-02 ~]# gluster volume heal engine info
>
> Brick 192.168.170.141:/gluster_bricks/engine/engine
>
> Status: Connected
>
> Number of entries: 0
>
>
>
> Brick 192.168.170.143:/gluster_bricks/engine/engine
>
> Status: Connected
>
> Number of entries: 0
>
>
>
> Brick 192.168.170.147:/gluster_bricks/engine/engine
>
> Status: Connected
>
> Number of entries: 0
>
>
>
> [root@ovirt-hyp-02 ~]# cat /var/log/glusterfs/rhev-data-c
> enter-mnt-glusterSD-ovirt-hyp-01.example.lan\:engine.log
>
> [2017-06-15 13:37:02.009436] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile, continuing
>
>
>
>
>
> Each of the three host sends out the following notifications about every
> 15 minutes.
>
> Hosted engine host: ovirt-hyp-01.example.lan changed state:
> EngineDown-EngineStart.
>
> Hosted engine host: ovirt-hyp-01.example.lan changed state:
> EngineStart-EngineStarting.
>
> Hosted engine host: ovirt-hyp-01.example.lan changed state:
> EngineStarting-EngineForceStop.
>
> Hosted engine host: ovirt-hyp-01.example.lan changed state:
> EngineForceStop-EngineDown.
>
> Please let me know if you need any additional information.
>
> Thank you,
>
> Joel
>
>
>
> On Jun 16, 2017 2:52 AM, "Sahina Bose"  wrote:
>
>> From the agent.log,
>> MainThread::INFO::2017-06-15 11:16:50,583::states::473::ovi
>> rt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine
>> vm is running on host ovirt-hyp-02.reis.com (id 2)
>>
>> It looks like the HE VM was started successfully? Is it possible that the
>> ovirt-engine service could not be started on the HE VM. Could you try to
>> start the HE vm using below and then logging into the VM console.
>> #hosted-engine --vm-start
>>
>> Also, please check
>> # gluster volume status engine
>> # gluster volume heal engine info
>>
>> Please also check if there are errors in gluster mount logs - at
>> /var/log/glusterfs/rhev-data-center-mnt...log
>>
>>
>> On Thu, Jun 15, 2017 at 8:53 PM, Joel Diaz  wrote:
>>
>>> Sorry. I forgot to attached the requested logs in the previous email.
>>>
>>> Thanks,
>>>
>>> On Jun 15, 2017 9:38 AM, "Joel Diaz"  wrote:
>>>
>>> Good morning,
>>>
>>> Requested info below. Along with some additional info.
>>>
>>> You

Re: [ovirt-users] Very poor GlusterFS performance

2017-06-20 Thread Sahina Bose
[Adding gluster-users]

On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot  wrote:

> Hi folks,
>
> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
> 6 bricks, which themselves live on two SSDs in each of the servers (one
> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
> SSDs. Connectivity is 10G Ethernet.
>
> Performance within the VMs is pretty terrible. I experience very low
> throughput and random IO is really bad: it feels like a latency issue.
> On my oVirt nodes the SSDs are not generally very busy. The 10G network
> seems to run without errors (iperf3 gives bandwidth measurements of >=
> 9.20 Gbits/sec between the three servers).
>
> To put this into perspective: I was getting better behaviour from NFS4
> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
> feel right at all.
>
> My volume configuration looks like this:
>
> Volume Name: vmssd
> Type: Distributed-Replicate
> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet6
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 1
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard-block-size: 128MB
> performance.strict-o-direct: on
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
>
> I would really appreciate some guidance on this to try to improve things
> because at this rate I will need to reconsider using GlusterFS altogether.
>


Could you provide the gluster volume profile output while you're running
your I/O tests?

# gluster volume profile <VOLNAME> start
to start profiling

# gluster volume profile <VOLNAME> info

for the profile output.
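
For example, with the volume above (profiling adds a little overhead, so stop
it when done):

gluster volume profile vmssd start
(run the I/O test inside a VM)
gluster volume profile vmssd info
gluster volume profile vmssd stop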


>
> Cheers,
> Chris
>
> --
> Chris Boot
> bo...@bootc.net
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster storage network not being used for Gluster

2017-07-03 Thread Sahina Bose
Could you  provide output of "gluster peer status" and "gluster volume
info" ?


On Sun, Jul 2, 2017 at 9:33 AM, Mike DePaulo  wrote:

> Hi,
>
>
> I configured a "Gluster storage" network, but it doesn't look like it
> is being used for Gluster. Specifically, the switch's LEDs are not
> blinking, and the hosts' "Total Tx" and "Total Rx" counts are not
> changing (and they're tiny, under 1 MB.) The management network must
> still be being used.
>
> I have 3 hosts running oVirt Node 4.1.x. I set them up via the gluster
> hosted engine. The gluster storage network is 10.0.20.x. These are the
> contents of /var/lib/glusterd/peers:
> [root@centerpoint peers]# cat 8a83fddd-df7e-4e3b-9fc7-ca1c9bf9deaa
> uuid=8a83fddd-df7e-4e3b-9fc7-ca1c9bf9deaa
> state=3
> hostname1=death-star.ad.depaulo.org
> hostname2=death-star
> hostname3=192.168.1.52
> hostname4=10.0.20.52
> [root@centerpoint peers]# cat b6b96427-a0dd-47ff-b3e0-038eb0967fb9
> uuid=b6b96427-a0dd-47ff-b3e0-038eb0967fb9
> state=3
> hostname1=starkiller-base.ad.depaulo.org
> hostname2=starkiller-base
> hostname3=192.168.1.53
>
> Thanks in advance,
> -Mike
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster issue with /var/lib/glusterd/peers/ file

2017-07-03 Thread Sahina Bose
On Sun, Jul 2, 2017 at 5:38 AM, Mike DePaulo  wrote:

> Hi everyone,
>
> I have ovirt 4.1.1/4.1.2 running on 3 hosts with a gluster hosted engine.
>
> I was working on setting up a network for gluster storage and
> migration. The addresses for it will be 10.0.20.x, rather than
> 192.168.1.x for the management network.  However, I switched gluster
> storage and migration back over to the management network.
>
> I updated and rebooted one of my hosts (death-star, 10.0.20.52) and on
> reboot, the glusterd service would start, but wouldn't seem to work.
> The engine webgui reported that its bricks were down, and commands
> like this would fail:
>
> [root@death-star glusterfs]# gluster pool list
> pool list: failed
> [root@death-star glusterfs]# gluster peer status
> peer status: failed
>
> Upon further investigation, I had under /var/lib/glusterd/peers/ the 2
> existing UUID files, plus a new 3rd one:
> [root@death-star peers]# cat 10.0.20.53
> uuid=----
> state=0
> hostname1=10.0.20.53
>

[Adding gluster-users]

How did you add this peer "10.0.20.53"? Is this another interface for an
existing peer?


> I moved that file out of there, restarted glusterd, and now gluster is
> working again.
>
> I am guessing that this is a bug. Let me know if I should attach other
> log files; I am not sure which ones.
>
> And yes, 10.0.20.53 is the IP of one of the other hosts.
>
> -Mike
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-03 Thread Sahina Bose
On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham  wrote:

>
> Only problem I would like to manage is that I have gluster network shared
>> with ovirtmgmt one.
>> Can I move it now with these updated packages?
>>
>
> Are the gluster peers configured with the same hostnames/IPs as your hosts
> within oVirt?
>
> Once they're configured on the same network, separating them might be a
> bit difficult. Also, the last time I looked, oVirt still doesn't support
> managing HCI oVirt/Gluster nodes running each service on a different
> interface (see below).
>
> In theory, the procedure would involve stopping all of the Gluster
> processes on all of the peers, updating the peer addresses in the gluster
> configs on all of the nodes, then restarting glusterd & the bricks. I've
> not tested this however, and it's not a "supported" procedure. I've no idea
> how oVirt would deal with these changes either.
>

Which version of glusterfs do you have running now? With glusterfs >= 3.9,
there's a reset-brick command that can help you do this.

It's possible to move to the new interface for gluster.

The procedure would be:

1. Create a network with "gluster" network role.
2. On each host, use "Setup networks" to associate the gluster network on
the desired interface. (This would ensure that the engine will peer probe
this interface's IP address as well, so that it can be used to identify the
host in the brick definition.)
3. For each of the volume's bricks, change the definition of the brick so
that the new IP address is used. Ensure that there's no pending heal (i.e.
"gluster volume heal <VOLNAME> info" should list 0 entries) before you start
this (see https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/ -
Introducing reset-brick command).

gluster volume reset-brick VOLNAME <HOSTNAME>:BRICKPATH start
gluster volume reset-brick VOLNAME <HOSTNAME>:BRICKPATH <NEW-HOSTNAME>:BRICKPATH commit force
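
For illustration, a sketch with hypothetical names (volume "data", current
hostname ovirt01.example.com, new gluster-network hostname gl01.example.com,
brick path /gluster/brick1/data):

gluster volume reset-brick data ovirt01.example.com:/gluster/brick1/data start
gluster volume reset-brick data ovirt01.example.com:/gluster/brick1/data gl01.example.com:/gluster/brick1/data commit force

Repeat the pair of commands for every brick of every volume, and make sure the
new hostname resolves to the gluster network IP on all nodes.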




>
>
> To properly separate my own storage & management networks from the
> beginning, I configured each host with 2 IPs on different subnets and a
> different hostname corresponding to each IP. For example, "v0" points to
> the management interface of the first node, and "s0" points to the storage
> interface.
>
> oVirt's problem is that, whilst it can see the pre-configured bricks and
> volumes on each host, it can't create any new bricks or volumes because it
> wants to use the same hostnames it uses to manage the hosts. It also means
> that it can't fence the hosts correctly, as it doesn't understand that "v0"
> & "s0" are the same host.
> This isn't a problem for me though, as I don't need to manage my Gluster
> instances via the GUI, and automatic fencing can be done via the IPMI
> interfaces.
>
> Last I read, this is a recognised problem, but a fix isn't expect to
> arrive any time soon.
>
> --
> Doug
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Sahina Bose
On Tue, Jul 4, 2017 at 3:18 PM, Gianluca Cecchi 
wrote:

>
>
> On Mon, Jul 3, 2017 at 12:48 PM, Sahina Bose  wrote:
>
>>
>>
>> On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham  wrote:
>>
>>>
>>> Only problem I would like to manage is that I have gluster network
>>>> shared with ovirtmgmt one.
>>>> Can I move it now with these updated packages?
>>>>
>>>
>>> Are the gluster peers configured with the same hostnames/IPs as your
>>> hosts within oVirt?
>>>
>>> Once they're configured on the same network, separating them might be a
>>> bit difficult. Also, the last time I looked, oVirt still doesn't support
>>> managing HCI oVirt/Gluster nodes running each service on a different
>>> interface (see below).
>>>
>>> In theory, the procedure would involve stopping all of the Gluster
>>> processes on all of the peers, updating the peer addresses in the gluster
>>> configs on all of the nodes, then restarting glusterd & the bricks. I've
>>> not tested this however, and it's not a "supported" procedure. I've no idea
>>> how oVirt would deal with these changes either.
>>>
>>
>> Which version of glusterfs do you have running now? With glusterfs>= 3.9,
>> there's a reset-brick command that can help you do this.
>>
>
> At this moment on my oVirt nodes I have gluster packages as provided by
> 4.1.2 repos, so:
>
> glusterfs-3.8.13-1.el7.x86_64
> glusterfs-api-3.8.13-1.el7.x86_64
> glusterfs-cli-3.8.13-1.el7.x86_64
> glusterfs-client-xlators-3.8.13-1.el7.x86_64
> glusterfs-fuse-3.8.13-1.el7.x86_64
> glusterfs-geo-replication-3.8.13-1.el7.x86_64
> glusterfs-libs-3.8.13-1.el7.x86_64
> glusterfs-server-3.8.13-1.el7.x86_64
> vdsm-gluster-4.19.15-1.el7.centos.noarch
>
> Is 3.9 version of Gluster packages provided when updating to upcoming
> 4.1.3, perhaps?
>

No, it's not. One option is to update glusterfs packages to 3.10.


>
>
>
>>
>> It's possible to move to the new interface for gluster.
>>
>> The procedure would be:
>>
>> 1. Create a network with "gluster" network role.
>> 2. On each host, use "Setup networks" to associate the gluster network on
>> the desired interface. (This would ensure that the engine will peer probe
>> this interface's IP address as well, so that it can be used to identify the
>> host in the brick definition.)
>> 3. For each of the volume's bricks - change the definition of the brick,
>> so that the new IP address is used. Ensure that there's no pending heal
>> (i.e. "gluster volume heal <VOLNAME> info" should list 0 entries) before you start
>> this (see https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/ -
>> Introducing reset-brick command)
>>
>> gluster volume reset-brick VOLNAME <HOSTNAME>:BRICKPATH start
>> gluster volume reset-brick VOLNAME <HOSTNAME>:BRICKPATH <NEW-HOSTNAME>:BRICKPATH commit force
>>
>>
>>
>
> So do you think I can use any other commands with oVirt 4.1.2 and gluster
> 3.8?
> Can I safely proceed with steps 1 and 2? When I setup a gluster network
> and associated it to one host, what are exactly the implications? Will I
> disrupt anything, or is it seen only as an option for having gluster traffic
> going on...?
>

Steps 1 & 2 will ensure that the IP address associated with the gluster
network is peer probed. It does not ensure that brick communication happens
using that interface. This happens only when the brick is identified using
that IP as well. (Step 3)
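
As a quick check (a sketch - the output will of course differ on your setup),
after steps 1 & 2 the new address should show up under "Other names" for each
peer in:

gluster peer status

If the gluster-network IP is not listed there, the brick changes in step 3
will not be matched to the gluster network.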


>
> BTW: How would I complete the webadmin gui part of step 3? I don't see an
> "edit" brick funcionality; I only see "Add" and "Replace Brick"...
>

There's an RFE open to add this to the GUI. For now, this has to be done from
the command line of one of the gluster nodes.


>
> Thanks,
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading HC from 4.0 to 4.1

2017-07-04 Thread Sahina Bose
On Wed, Jul 5, 2017 at 3:10 AM, Gianluca Cecchi 
wrote:

> On Tue, Jul 4, 2017 at 2:57 PM, Gianluca Cecchi  > wrote:
>
>>
>>> No, it's not. One option is to update glusterfs packages to 3.10.
>>>
>>
>> Is it supported throughout oVirt to use CentOS Storage SIG packages
>> instead of ovirt provided ones? I imagine you mean it, correct?
>>
>> If this is a case, would I have to go with Gluster 3.9 (non LTS)
>> https://lists.centos.org/pipermail/centos-announce/2017-Janu
>> ary/022249.html
>>
>> Or Gluster 3.10 (LTS)
>> https://lists.centos.org/pipermail/centos-announce/2017-March/022337.html
>>
>> I suppose the latter...
>> Any problem then with updates of oVirt itself, eg going through 4.1.2 to
>> 4.1.3?
>>
>> Thanks
>> Gianluca
>>
>>>
>>> Is 3.9 version of Gluster packages provided when updating to upcoming
>>> 4.1.3, perhaps?
>>>
>>
> Never mind, I will verify. At the end this is a test system.
> I put the nodes in maintenance one by one and then installed glusterfs
> 3.10 with;
>
> yum install centos-release-gluster
> yum update
>
> All were able to self heal then and I see the 4 storage domains (engine,
> data, iso, export) up and running.
> See some notes at the end of the e-mail.
> Now I'm ready to test the change of gluster network traffic.
>
> In my case the current hostnames that are also matching the ovirtmgmt
> network are ovirt0N.localdomain.com with N=1,2,3
>
> On my vlan2, defined as gluster network role in the cluster, I have
> defined (on each node /etc/hosts file) the hostnames
>
> 10.10.2.102 gl01.localdomain.local gl01
> 10.10.2.103 gl02.localdomain.local gl02
> 10.10.2.104 gl03.localdomain.local gl03
>
> I need more details about command to run:
>
> Currently I have
>
> [root@ovirt03 ~]# gluster peer status
> Number of Peers: 2
>
> Hostname: ovirt01.localdomain.local
> Uuid: e9717281-a356-42aa-a579-a4647a29a0bc
> State: Peer in Cluster (Connected)
> Other names:
> 10.10.2.102
>
> Hostname: ovirt02.localdomain.local
> Uuid: b89311fe-257f-4e44-8e15-9bff6245d689
> State: Peer in Cluster (Connected)
> Other names:
> 10.10.2.103
>
> Suppose I start form export volume, that has these info:
>
> [root@ovirt03 ~]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt01.localdomain.local:/gluster/brick3/export
> Brick2: ovirt02.localdomain.local:/gluster/brick3/export
> Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
> ...
>
> then the commands I need to run would be:
>
> gluster volume reset-brick export 
> ovirt01.localdomain.local:/gluster/brick3/export
> start
> gluster volume reset-brick export 
> ovirt01.localdomain.local:/gluster/brick3/export
> gl01.localdomain.local:/gluster/brick3/export commit force
>
> Correct?
>

Yes, correct. gl01.localdomain.local should resolve correctly on all 3
nodes.


> Is it sufficient to run it on a single node? And then on the same node, to
> run also for the other bricks of the same volume:
>

Yes, it is sufficient to run it on a single node. You can run the reset-brick
command for all bricks from the same node.


>
> gluster volume reset-brick export 
> ovirt02.localdomain.local:/gluster/brick3/export
> start
> gluster volume reset-brick export 
> ovirt02.localdomain.local:/gluster/brick3/export
> gl02.localdomain.local:/gluster/brick3/export commit force
>
> and
>
> gluster volume reset-brick export 
> ovirt03.localdomain.local:/gluster/brick3/export
> start
> gluster volume reset-brick export 
> ovirt03.localdomain.local:/gluster/brick3/export
> gl03.localdomain.local:/gluster/brick3/export commit force
>
> Correct? Do I have to wait self-heal after each commit command, before
> proceeding with the other ones?
>

Ideally, gluster should recognize this as the same brick as before, and heal
will not be needed. Please confirm that this is indeed the case before
proceeding.
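
For example, before and after each commit you can run (using the volume name
from this thread):

gluster volume heal export info

It should report "Number of entries: 0" for every brick; if entries show up,
wait for self-heal to finish before resetting the next brick.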


>
> Thanks in advance for input so that I can test it.
>
> Gianluca
>
>
> NOTE: during the update of gluster packages from 3.8 to 3.10 I got these:
>
> warning: /var/lib/glusterd/vols/engine/engine.ovirt01.localdomain.
> local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/
> engine.ovirt01.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/engine.ovirt02.localdomain.
> local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/
> engine.ovirt02.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/engine.ovirt03.localdomain.
> local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/
> engine.ovirt03.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol saved
> as /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/engine.tcp-fuse.vol saved as
> /var/lib/glusterd/vo

[ovirt-users] op-version for reset-brick (Was: Re: Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Sahina Bose
On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi 
wrote:

>
>
> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose  wrote:
>
>>
>>
>>> ...
>>>
>>> then the commands I need to run would be:
>>>
>>> gluster volume reset-brick export 
>>> ovirt01.localdomain.local:/gluster/brick3/export
>>> start
>>> gluster volume reset-brick export 
>>> ovirt01.localdomain.local:/gluster/brick3/export
>>> gl01.localdomain.local:/gluster/brick3/export commit force
>>>
>>> Correct?
>>>
>>
>> Yes, correct. gl01.localdomain.local should resolve correctly on all 3
>> nodes.
>>
>
>
> It fails at first step:
>
>  [root@ovirt01 ~]# gluster volume reset-brick export
> ovirt01.localdomain.local:/gluster/brick3/export start
> volume reset-brick: failed: Cannot execute command. The cluster is
> operating at version 30712. reset-brick command reset-brick start is
> unavailable in this version.
> [root@ovirt01 ~]#
>
> It seems somehow in relation with this upgrade not of the commercial
> solution Red Hat Gluster Storage
> https://access.redhat.com/documentation/en-US/Red_Hat_
> Storage/3.1/html/Installation_Guide/chap-Upgrading_Red_Hat_Storage.html
>
> So ti seems I have to run some command of type:
>
> gluster volume set all cluster.op-version X
>
> with X > 30712
>
> It seems that latest version of commercial Red Hat Gluster Storage is 3.1
> and its op-version is indeed 30712..
>
> So the question is which particular op-version I have to set and if the
> command can be set online without generating disruption
>

It should have worked with the glusterfs 3.10 version from the CentOS repo.
Adding gluster-users for help on the op-version.
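
A hedged sketch of what raising the op-version would look like (please confirm
the exact value against the gluster release notes for your build before
running it; cluster.max-op-version is only available from 3.10 onwards):

gluster volume get all cluster.max-op-version
gluster volume set all cluster.op-version 31000

reset-brick was introduced with glusterfs 3.9, so any op-version of 30900 or
higher that all nodes support should make the command available; 31000 is what
a pure 3.10 cluster reports as its maximum.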


>
> Thanks,
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-19 Thread Sahina Bose
[Adding gluster-users]

On Wed, Jul 19, 2017 at 2:52 PM, yayo (j)  wrote:

> Hi all,
>
> We have an ovirt cluster hyperconverged with hosted engine on 3 full
> replicated node . This cluster have 2 gluster volume:
>
> - data: volume for the Data (Master) Domain (For vm)
> - engine: volume fro the hosted_storage  Domain (for hosted engine)
>
> We have this problem: "engine" gluster volume have always unsynced
> elements and we cant' fix the problem, on command line we have tried to use
> the "heal" command but elements remain always unsynced 
>
> Below the heal command "status":
>
> [root@node01 ~]# gluster volume heal engine info
> Brick node01:/gluster/engine/brick
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.64
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.2
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-
> 52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-
> a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.61
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.1
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.20
> /__DIRECT_IO_TEST__
> Status: Connected
> Number of entries: 12
>
> Brick node02:/gluster/engine/brick
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-
> 52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
> 
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
> 
> 
> 
> /__DIRECT_IO_TEST__
> 
> 
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-
> a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
> 
> 
> Status: Connected
> Number of entries: 12
>
> Brick node04:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
>
>
> running the "gluster volume heal engine" don't solve the problem...
>
> Some extra info:
>
> We have recently changed the gluster from: 2 (full repliacated) + 1
> arbiter to 3 full replicated cluster but i don't know this is the problem...
>
> The "data" volume is good and healty and have no unsynced entry.
>
> Ovirt refuse to put the node02 and node01 in "maintenance mode" and
> complains about "unsynced elements"
>
> How can I fix this?
> Thank you
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-24 Thread Sahina Bose
On Tue, Jul 25, 2017 at 11:12 AM, Kasturi Narra  wrote:

> These errors are because not having glusternw assigned to the correct
> interface. Once you attach that these errors should go away.  This has
> nothing to do with the problem you are seeing.
>
> sahina any idea about engine not showing the correct volume info ?
>

Please provide the vdsm.log (containing the gluster volume info) and
engine.log.


> On Mon, Jul 24, 2017 at 7:30 PM, yayo (j)  wrote:
>
>> Hi,
>>
>> UI refreshed but problem still remain ...
>>
>> No specific error, I've only these errors but I've read that there is no
>> problem if I have this kind of errors:
>>
>>
>> 2017-07-24 15:53:59,823+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] START, GlusterServersListVDSCommand(HostName =
>> node01.localdomain.local, VdsIdVDSCommandParametersBase:{runAsync='true',
>> hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 29a62417
>> 2017-07-24 15:54:01,066+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] FINISH, GlusterServersListVDSCommand, return: 
>> [10.10.20.80/24:CONNECTED,
>> node02.localdomain.local:CONNECTED, gdnode04:CONNECTED], log id: 29a62417
>> 2017-07-24 15:54:01,076+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] START, GlusterVolumesListVDSCommand(HostName =
>> node01.localdomain.local, GlusterVolumesListVDSParameters:{runAsync=
>> 'true', hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 7fce25d3
>> 2017-07-24 15:54:02,209+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode01:/gluster/engine/brick' of
>> volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,212+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode02:/gluster/engine/brick' of
>> volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,215+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode04:/gluster/engine/brick' of
>> volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,218+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode01:/gluster/data/brick' of
>> volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,221+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode02:/gluster/data/brick' of
>> volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,224+02 WARN  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2)
>> [b7590c4] Could not associate brick 'gdnode04:/gluster/data/brick' of
>> volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no
>> gluster network found in cluster '0002-0002-0002-0002-017a'
>> 2017-07-24 15:54:02,224+02 INFO  [org.ovirt.engine.core.vdsbro
>> ker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2)
>> [b7590c4] FINISH, GlusterVolumesListVDSCommand, return: {d19c19e3-910d
>> -437b-8ba7-4f2a23d17515=org.ovirt.engine.core.
>> common.businessentities.gluster.GlusterVolumeEntity@fdc91062, c7a5dfc9
>> -3e72-4ea1-843e-c8275d4a7c2d=org.ovirt.engine.core.c
>> ommon.businessentities.gluster.GlusterVolumeEntity@999a6f23}, log id: 7
>> fce25d3
>>
>>
>> Thank you
>>
>>
>> 2017-07-24 8:12 GMT+02:00 Kasturi Narra :
>>
>>> Hi,
>>>
>>>Regarding the UI showing incorrect information about engine and data
>>> volumes, can you please refresh the UI and see if the issue persists  plus
>>> any errors in the engine.log files ?
>>>
>>> Thanks
>>> kasturi
>>>
>>> On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N 
>>> wrote:
>>>

 On 07/21/2017 11:41 PM, yayo (j) wrote:

 Hi,

 Sorry for follow up again, but, checking the ovirt interface I've found
 that ovirt report the "engine" volume as an "arbiter" configuration and the
 "data" volume as full replicated volume. Check these screenshots:


 This is probably some refresh bug in the UI, 

Re: [ovirt-users] [Gluster-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-25 Thread Sahina Bose
On Tue, Jul 25, 2017 at 1:45 PM, yayo (j)  wrote:

> 2017-07-25 7:42 GMT+02:00 Kasturi Narra :
>
>> These errors are because not having glusternw assigned to the correct
>> interface. Once you attach that these errors should go away.  This has
>> nothing to do with the problem you are seeing.
>>
>
> Hi,
>
> You talking  about errors like these?
>
> 2017-07-24 15:54:02,209+02 WARN  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode01:/gluster/engine/brick' of volume
> 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
> network found in cluster '0002-0002-0002-0002-017a'
>
>
> How to assign "glusternw (???)" to the correct interface?
>

https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
"Storage network" section explains this. Please make sure that gdnode01 is
resolvable from engine.



>
> Other errors on unsync gluster elements still remain... This is a
> production env, so, there is any chance to subscribe to RH support?
>

The unsynced entries - did you check for disconnect messages in the mount
log as suggested by Ravi?

For Red Hat support, the best option is to contact your local Red Hat
representative.


> Thank you
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Recovering from a multi-node failure

2017-08-16 Thread Sahina Bose
On Sun, Aug 6, 2017 at 4:42 AM, Jim Kusznir  wrote:

> Well, after a very stressful weekend, I think I have things largely
> working.  Turns out that most of the above issues were caused by the linux
> permissions of the exports for all three volumes (they had been reset to
> 600; setting them to 774 or 770 fixed many of the issues).  Of course, I
> didn't find that until a much more harrowing outage, and hours and hours of
> work, including beginning to look at rebuilding my cluster
>
> So, now my cluster is operating again, and everything looks good EXCEPT
> for one major Gluster issue/question that I haven't found any references or
> info on.
>
> my host ovirt2, one of the replica gluster servers, is the one that lost
> its storage and had to reinitialize it from the cluster.  the iso volume is
> perfectly fine and complete, but the engine and data volumes are smaller on
> disk on this node than on the other node (and this node before the crash).
> On the engine store, the entire cluster reports the smaller utilization on
> mounted gluster filesystems; on the data partition, it reports the larger
> size (rest of cluster).  Here's some df statments to help clarify:
>
> (brick1 = engine; brick2=data, brick4=iso):
> Filesystem Size  Used Avail Use% Mounted on
> /dev/mapper/gluster-engine  25G   12G   14G  47% /gluster/brick1
> /dev/mapper/gluster-data   136G  125G   12G  92% /gluster/brick2
> /dev/mapper/gluster-iso 25G  7.3G   18G  29% /gluster/brick4
> 192.168.8.11:/engine15G  9.7G  5.4G  65%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine
> 192.168.8.11:/data 136G  125G   12G  92%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_data
> 192.168.8.11:/iso   13G  7.3G  5.8G  56%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_iso
>
> View from ovirt2:
> Filesystem Size  Used Avail Use% Mounted on
> /dev/mapper/gluster-engine  15G  9.7G  5.4G  65% /gluster/brick1
> /dev/mapper/gluster-data   174G  119G   56G  69% /gluster/brick2
> /dev/mapper/gluster-iso 13G  7.3G  5.8G  56% /gluster/brick4
> 192.168.8.11:/engine15G  9.7G  5.4G  65%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_engine
> 192.168.8.11:/data 136G  125G   12G  92%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_data
> 192.168.8.11:/iso   13G  7.3G  5.8G  56%
> /rhev/data-center/mnt/glusterSD/192.168.8.11:_iso
>
> As you can see, in the process of rebuilding the hard drive for ovirt2, I
> did resize some things to give more space to data, where I desperately need
> it.  If this goes well and the storage is given a clean bill of health at
> this time, then I will take ovirt1 down and resize to match ovirt2, and
> thus score a decent increase in storage for data.  I fully realize that
> right now the gluster mounted volumes should have the total size as the
> least common denominator.
>
> So, is this size reduction appropriate?  A big part of me thinks data is
> missing, but I even went through and shut down ovirt2's gluster daemons,
> wiped all the gluster data, and restarted gluster to allow it a fresh heal
> attempt, and it again came back to the exact same size.  This cluster was
> originally built about the time ovirt 4.0 came out, and has been upgraded
> to 'current', so perhaps some new gluster features are making more
> efficient use of space (dedupe or something)?
>

The used capacity should be consistent on all nodes - I see you have a
discrepancy with the data volume brick. What does "gluster vol heal data
info" tell you? Are there entries to be healed?

Can you provide the glustershd logs?



>
> Thank  you for your assistance!
> --JIm
>
> On Fri, Aug 4, 2017 at 7:49 PM, Jim Kusznir  wrote:
>
>> Hi all:
>>
>> Today has been rough.  two of my three nodes went down today, and self
>> heal has not been healing well.  4 hours later, VMs are running.  but the
>> engine is not happy.  It claims the storage domain is down (even though it
>> is up on all hosts and VMs are running).  I'm getting a ton of these
>> messages logging:
>>
>> VDSM engine3 command HSMGetAllTasksStatusesVDS failed: Not SPM
>>
>> Aug 4, 2017 7:23:00 PM
>>
>> VDSM engine3 command SpmStatusVDS failed: Error validating master storage
>> domain: ('MD read error',)
>>
>> Aug 4, 2017 7:22:49 PM
>>
>> VDSM engine3 command ConnectStoragePoolVDS failed: Cannot find master
>> domain: u'spUUID=5868392a-0148-02cf-014d-0121,
>> msdUUID=cdaf180c-fde6-4cb3-b6e5-b6bd869c8770'
>>
>> Aug 4, 2017 7:22:47 PM
>>
>> VDSM engine1 command ConnectStoragePoolVDS failed: Cannot find master
>> domain: u'spUUID=5868392a-0148-02cf-014d-0121,
>> msdUUID=cdaf180c-fde6-4cb3-b6e5-b6bd869c8770'
>>
>> Aug 4, 2017 7:22:46 PM
>>
>> VDSM engine2 command SpmStatusVDS failed: Error validating master storage
>> domain: ('MD read error',)
>>
>> Aug 4, 2017 7:22:44 PM
>>
>> VDSM engine2 command ConnectStoragePoolVDS failed: Cannot find master
>

Re: [ovirt-users] NTP

2017-08-16 Thread Sahina Bose
On Thu, Aug 10, 2017 at 7:09 PM, Sandro Bonazzola 
wrote:

>
>
> 2017-08-10 15:21 GMT+02:00 Moacir Ferreira :
>
>> Hi Sandro,
>>
>>
>> I found that I can install ntpd enabling the CentOS base repository that
>> comes disabled by default in oVirt. This said, the GUI gdeploy's generated
>> script for deploying the hosted-engine + GlusterFS is still expecting to
>> disable chrony by enabling ntpd. So my question now is if we
>> need/should keep ntpd or if we should just keep chronyd.
>>
>>
>>
> Looks like a gdeploy bug. Adding Sahina and Sacchi. chronyd should be used
> instead of ntpd.
>

https://bugzilla.redhat.com/show_bug.cgi?id=1450152 - was fixed to use
chronyd instead of ntpd in gdeploy.

If you're still seeing the issue, can you re-open the bug with version
details of cockpit-ovirt?

thanks!


>
>
>
>
>> Moacir
>>
>>
>> --
>> *From:* Sandro Bonazzola 
>> *Sent:* Thursday, August 10, 2017 2:06 PM
>> *To:* Moacir Ferreira
>> *Cc:* users@ovirt.org
>> *Subject:* Re: [ovirt-users] NTP
>>
>>
>>
>> 2017-08-07 16:53 GMT+02:00 Moacir Ferreira :
>>
>>> I found that NTP does not get installed on oVirt node on the latest
>>> version ovirt-node-ng-installer-ovirt-4.1-2017052309
>>> <%28201%29%20705-2309>.iso.
>>>
>>>
>>> Also the installed repositories does not have it. So, is this a bug or
>>> NTP is not considered appropriated anymore?
>>>
>>>
>>> vdsm is now requiring chronyd but we have re-added ntpd in ovirt-node
>> for 4.1.5 RC3 (https://bugzilla.redhat.com/1476650)
>> I'm finishing to test the release before announcing it today.
>>
>>
>>
>>
>>
>>> Thanks.
>>>
>>> Moacir
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>
>> Red Hat EMEA 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] glusterfs E­rror message constantly b­eing reported

2017-08-16 Thread Sahina Bose
Can you check if you have vdsm-gluster rpm installed on the hosts?

On Wed, Aug 16, 2017 at 7:08 PM, Vadim  wrote:

> In vdsm.log
>
> 2017-08-16 16:34:15,314+0300 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:577)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 572, in _handle_request
> res = method(**params)
>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198,
> in _dynamicMethod
> result = fn(*methodArgs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py",
> line 117, in status
> return self._gluster.volumeStatus(volumeName, brick, statusOption)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89,
> in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 411,
> in volumeStatus
> data = self.svdsmProxy.glusterVolumeStatvfs(volumeName)
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in
> __call__
> return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in <lambda>
> getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
> AttributeError: 'AutoProxy[instance]' object has no attribute
> 'glusterVolumeStatvfs'
>
>
> 2017-08-16 16:37:39,566+0300 ERROR (jsonrpc/3) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:577)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 572, in _handle_request
> res = method(**params)
>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198,
> in _dynamicMethod
> result = fn(*methodArgs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py",
> line 109, in list
> return self._gluster.tasksList(taskIds)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89,
> in wrapper
> rv = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 507,
> in tasksList
> status = self.svdsmProxy.glusterTasksList(taskIds)
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in
> __call__
> return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in <lambda>
> getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
> AttributeError: 'AutoProxy[instance]' object has no attribute
> 'glusterTasksList'
>
>
> Срд 16 Авг 2017 16:08:24 +0300, Vadim  написал:
> > Hi, All
> >
> > ovirt 4.1.4 fresh install
> > Constantly seeing this message in the logs, how to fix this:
> >
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed:
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed:
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> >
> > --
> > Thanks,
> > Vadim
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Thanks,
> Vadim
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] gluster­fs E­rror message constan­tly b­eing reported

2017-08-17 Thread Sahina Bose
On Thu, Aug 17, 2017 at 12:21 PM, Vadim  wrote:

> Hi, Sahina
>
> on all hosts
> #  rpm -qa | grep vdsm-gluster
> vdsm-gluster-4.19.24-1.el7.centos.noarch
>

Does this version match the vdsm version installed?

If yes, can you try restarting supervdsmd service?
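
For example (oVirt 4.1 hosts are systemd based):

systemctl restart supervdsmd

and then check whether the GlusterTasksListVDS errors stop appearing in the
engine events.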


>
>
>
> Срд 16 Авг 2017 17:49:14 +0300, Sahina Bose  написал:
> > Can you check if you have vdsm-gluster rpm installed on the hosts?
> >
> > On Wed, Aug 16, 2017 at 7:08 PM, Vadim  wrote:
> > In vdsm.log
> >
> > 2017-08-16 16:34:15,314+0300 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:577)
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 572, in _handle_request
> > res = method(**params)
> >   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198,
> in _dynamicMethod
> > result = fn(*methodArgs)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py",
> line 117, in status
> > return self._gluster.volumeStatus(volumeName, brick, statusOption)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89,
> in wrapper
> > rv = func(*args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line
> 411, in volumeStatus
> > data = self.svdsmProxy.glusterVolumeStatvfs(volumeName)
> >   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53,
> in __call__
> > return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in <lambda>
> > getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
> > AttributeError: 'AutoProxy[instance]' object has no attribute
> 'glusterVolumeStatvfs'
> >
> > 2017-08-16 16:37:39,566+0300 ERROR (jsonrpc/3) [jsonrpc.JsonRpcServer]
> Internal server error (__init__:577)
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> 572, in _handle_request
> > res = method(**params)
> >   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198,
> in _dynamicMethod
> > result = fn(*methodArgs)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py",
> line 109, in list
> > return self._gluster.tasksList(taskIds)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 89,
> in wrapper
> > rv = func(*args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line
> 507, in tasksList
> > status = self.svdsmProxy.glusterTasksList(taskIds)
> >   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53,
> in __call__
> > return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 50, in <lambda>
> > getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
> > AttributeError: 'AutoProxy[instance]' object has no attribute
> 'glusterTasksList'
> >
> > Срд 16 Авг 2017 16:08:24 +0300, Vadim  написал:
> > > Hi, All
> > >
> > > ovirt 4.1.4 fresh install
> > > Constantly seeing this message in the logs, how to fix this:
> > >
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed:
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > > VDSM kvm04 command GetGlusterVolumeAdvancedDetailsVDS failed:
> 'AutoProxy[instance]' object has no attribute 'glusterVolumeStatvfs'
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > > VDSM kvm04 command GlusterTasksListVDS failed: 'AutoProxy[instance]'
> object has no attribute 'glusterTasksList'
> > >
> > > --
> > > Thanks,
> > > Vadim
> > > ___
> > > Users mailing list
> > > Users@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> >
> > --
> > Thanks,
> > Vadim
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Thanks,
> Vadim
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] install of engine

2017-08-17 Thread Sahina Bose
If you are installing engine via the hosted engine install, you can
download and install the ovirt-engine-appliance rpm on the first host.
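
For example (a sketch - adjust to however you get packages past your proxy),
on the first host:

yum install ovirt-engine-appliance

or download the ovirt-engine-appliance rpm on a machine that does have
internet access, copy it over with scp and install it with "yum localinstall".
hosted-engine --deploy (or the cockpit installer) should then offer the locally
installed appliance image instead of trying to download one.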

On Thu, Aug 17, 2017 at 9:53 PM, david caughey  wrote:

> Hi folks,
>
> I have installed the ovirt OS on 3 dl380's.
> It has been deployed with gluster and all is fine up to a point.
> When I try to install the engine it is trying to get out to the internet
> but unfortunately I am behind a proxy with strict rules, is there a way to
> point to a local copy of the engine using the GUI install or can I scp the
> iso over to the first node for local install, is there a particular
> partition where the iso should be stored.
>
> Any help appreciated,
>
> BR/David
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hyperconverged question

2017-08-31 Thread Sahina Bose
To the OP question, when you set up a gluster storage domain, you need to
specify backup-volfile-servers=<server2>:<server3>, where server2 and
server3 also have bricks running. When server1 is down, and the volume is
mounted again - server2 or server3 are queried to get the gluster volfiles.

@Jim, if this does not work, are you using 4.1.5 build with libgfapi
access? If not, please provide the vdsm and gluster mount logs to analyse

If VMs go to paused state - this could mean the storage is not available.
You can check "gluster volume status <volname>" to see if at least 2 bricks
are running.

On Fri, Sep 1, 2017 at 11:31 AM, Johan Bernhardsson  wrote:

> If gluster drops in quorum so that it has less votes than it should it
> will stop file operations until quorum is back to normal. If I remember it
> right you need two bricks to write for quorum to be met and that the
> arbiter only is a vote to avoid split brain.
>
>
> Basically what you have is a raid5 solution without a spare. And when one
> disk dies it will run in degraded mode. And some raid systems will stop the
> raid until you have removed the disk or forced it to run anyway.
>
> You can read up on it here: https://gluster.readthedocs.io/en/latest/
> Administrator%20Guide/arbiter-volumes-and-quorum/
>
> /Johan
>
> On Thu, 2017-08-31 at 22:33 -0700, Jim Kusznir wrote:
>
> Hi all:
>
> Sorry to hijack the thread, but I was about to start essentially the same
> thread.
>
> I have a 3 node cluster, all three are hosts and gluster nodes (replica 2
> + arbiter).  I DO have the mnt_options=backup-volfile-servers= set:
>
> storage=192.168.8.11:/engine
> mnt_options=backup-volfile-servers=192.168.8.12:192.168.8.13
>
> I had an issue today where 192.168.8.11 went down.  ALL VMs immediately
> paused, including the engine (all VMs were running on host2:192.168.8.12).
> I couldn't get any gluster stuff working until host1 (192.168.8.11) was
> restored.
>
> What's wrong / what did I miss?
>
> (this was set up "manually" through the article on setting up self-hosted
> gluster cluster back when 4.0 was new..I've upgraded it to 4.1 since).
>
> Thanks!
> --Jim
>
>
> On Thu, Aug 31, 2017 at 12:31 PM, Charles Kozler 
> wrote:
>
> Typo..."Set it up and then failed that **HOST**"
>
> And upon that host going down, the storage domain went down. I only have
> hosted storage domain and this new one - is this why the DC went down and
> no SPM could be elected?
>
> I dont recall this working this way in early 4.0 or 3.6
>
> On Thu, Aug 31, 2017 at 3:30 PM, Charles Kozler 
> wrote:
>
> So I've tested this today and I failed a node. Specifically, I setup a
> glusterfs domain and selected "host to use: node1". Set it up and then
> failed that VM
>
> However, this did not work and the datacenter went down. My engine stayed
> up, however, it seems configuring a domain to pin to a host to use will
> obviously cause it to fail
>
> This seems counter-intuitive to the point of glusterfs or any redundant
> storage. If a single host has to be tied to its function, this introduces a
> single point of failure
>
> Am I missing something obvious?
>
> On Thu, Aug 31, 2017 at 9:43 AM, Kasturi Narra  wrote:
>
> yes, right.  What you can do is edit the hosted-engine.conf file and there
> is a parameter as shown below [1] and replace h2 and h3 with your second
> and third storage servers. Then you will need to restart ovirt-ha-agent and
> ovirt-ha-broker services in all the nodes .
>
> [1] 'mnt_options=backup-volfile-servers=<host2>:<host3>'
>
> On Thu, Aug 31, 2017 at 5:54 PM, Charles Kozler 
> wrote:
>
> Hi Kasturi -
>
> Thanks for feedback
>
> > If cockpit+gdeploy plugin would be have been used then that would have
> automatically detected glusterfs replica 3 volume created during Hosted
> Engine deployment and this question would not have been asked
>
> Actually, doing hosted-engine --deploy it too also auto detects
> glusterfs.  I know glusterfs fuse client has the ability to failover
> between all nodes in cluster, but I am still curious given the fact that I
> see in ovirt config node1:/engine (being node1 I set it to in hosted-engine
> --deploy). So my concern was to ensure and find out exactly how engine
> works when one node goes away and the fuse client moves over to the other
> node in the gluster cluster
>
> But you did somewhat answer my question, the answer seems to be no (as
> default) and I will have to use hosted-engine.conf and change the parameter
> as you list
>
> So I need to do something manual to create HA for engine on gluster? Yes?
>
> Thanks so much!
>
> On Thu, Aug 31, 2017 at 3:03 AM, Kasturi Narra  wrote:
>
> Hi,
>
>During Hosted Engine setup question about glusterfs volume is being
> asked because you have setup the volumes yourself. If cockpit+gdeploy
> plugin would be have been used then that would have automatically detected
> glusterfs replica 3 volume created during Hosted Engine deployment and this
> question would not have been asked.
>
>During new storage domain creation when glusterfs is

[ovirt-users] Re: Native Access on gluster storage domain

2017-09-10 Thread Sahina Bose
You could try to enable the config option for the 4.1 cluster level - using
engine-config tool from the Hosted Engine VM. This will require a restart
of the engine service and will enable gfapi access for all clusters at 4.1
level though - so try this option if this is acceptable.
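
A minimal sketch, assuming the relevant key is LibgfApiSupported (please
verify the available keys with "engine-config -l" first), run on the Hosted
Engine VM:

engine-config -s LibgfApiSupported=true --cver=4.1
systemctl restart ovirt-engine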

On Wed, Aug 30, 2017 at 8:02 PM, Stefano Danzi  wrote:

> above the logs.
> PS cluster compatibility level is 4.1
>
> engine:
>
> 2017-08-30 16:26:07,928+02 INFO  
> [org.ovirt.engine.core.bll.UpdateClusterCommand]
> (default task-8) [56d090c5-1097-4641-b745-74af8397d945] Lock Acquired to
> object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
> 2017-08-30 16:26:07,951+02 WARN  
> [org.ovirt.engine.core.bll.UpdateClusterCommand]
> (default task-8) [56d090c5-1097-4641-b745-74af8397d945] Validation of
> action 'UpdateCluster' failed for user admin@internal. Reasons:
> VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,CLUSTER_CANNOT_
> UPDATE_SUPPORTED_FEATURES_WITH_LOWER_HOSTS
> 2017-08-30 16:26:07,952+02 INFO  
> [org.ovirt.engine.core.bll.UpdateClusterCommand]
> (default task-8) [56d090c5-1097-4641-b745-74af8397d945] Lock freed to
> object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
>
> vdsm:
>
> 2017-08-30 16:29:23,310+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call GlusterHost.list succeeded in 0.15 seconds (__init__:539)
> 2017-08-30 16:29:23,419+0200 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC
> call Host.getAllVmStats succeeded in 0.01 seconds (__init__:539)
> 2017-08-30 16:29:23,424+0200 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
> call Host.getAllVmIoTunePolicies succeeded in 0.00 seconds (__init__:539)
> 2017-08-30 16:29:23,814+0200 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
> call GlusterHost.list succeeded in 0.15 seconds (__init__:539)
> 2017-08-30 16:29:24,011+0200 INFO  (Reactor thread) 
> [ProtocolDetector.AcceptorImpl]
> Accepted connection from ::1:51862 (protocoldetector:72)
> 2017-08-30 16:29:24,023+0200 INFO  (Reactor thread)
> [ProtocolDetector.Detector] Detected protocol stomp from ::1:51862
> (protocoldetector:127)
> 2017-08-30 16:29:24,024+0200 INFO  (Reactor thread) [Broker.StompAdapter]
> Processing CONNECT request (stompreactor:103)
> 2017-08-30 16:29:24,031+0200 INFO  (JsonRpc (StompReactor))
> [Broker.StompAdapter] Subscribe command received (stompreactor:130)
> 2017-08-30 16:29:24,287+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
> call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:539)
> 2017-08-30 16:29:24,443+0200 INFO  (jsonrpc/7) [vdsm.api] START
> getSpmStatus(spUUID=u'0002-0002-0002-0002-01ef',
> options=None) from=:::192.168.1.55,46502, flow_id=1f664a9,
> task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:46)
> 2017-08-30 16:29:24,446+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH
> getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver':
> 1430L}} from=:::192.168.1.55,46502, flow_id=1f664a9,
> task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:52)
> 2017-08-30 16:29:24,447+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> call StoragePool.getSpmStatus succeeded in 0.00 seconds (__init__:539)
> 2017-08-30 16:29:24,460+0200 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
> call GlusterHost.list succeeded in 0.16 seconds (__init__:539)
> 2017-08-30 16:29:24,467+0200 INFO  (jsonrpc/1) [vdsm.api] START
> getStoragePoolInfo(spUUID=u'0002-0002-0002-0002-01ef',
> options=None) from=:::192.168.1.55,46506, flow_id=1f664a9,
> task_id=029ec55e-9c47-4a20-be44-8c80fd1fd5ac (api:46)
>
>
> Il 30/08/2017 16:06, Shani Leviim ha scritto:
>
> Hi Stefano,
> Can you please attach your engine and vdsm logs?
>
>
> *Regards, *
>
> *Shani Leviim *
>
> On Wed, Aug 30, 2017 at 12:46 PM, Stefano Danzi  wrote:
>
>> Hello,
>> I have a test environment with a sigle host and self hosted engine
>> running oVirt Engine: 4.1.5.2-1.el7.centos
>>
>> I what to try the option "Native Access on gluster storage domain" but I
>> get an error because I have to put the
>> host in maintenance mode. I can't do that because I have a single host so
>> the hosted engine can't be migrated.
>>
>> There are a way to change this option but apply it at next reboot?
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


Re: [ovirt-users] Recommended gluster setup

2017-09-15 Thread Sahina Bose
On Fri, Sep 15, 2017 at 5:48 PM, Alex K  wrote:

> In the replica 3 + 1 arbiter does this mean that if I lose the two nodes
> having the normal volumes and am left only with the node that has the arbiter
> volume, I lose all data?
>

Yes!


>
> Thanx,
> Alex
>
> On Fri, Sep 15, 2017 at 11:25 AM, Kasturi Narra  wrote:
>
>> Hi ,
>>
>>yes, you are right. Since arbiter brick  has only metadata and  data
>> for the vm has to be served from one of the other two replicas, read is
>> slow.
>>
>> Arbiter is a special subset of replica 3 volumes and is aimed at
>> preventing split-brains and providing same consistency as a normal replica
>> 3 volume with out consuming 3x space. You could use replica 3 and no issues
>> with that.
>>
>> Thanks
>> kasturi
>>
>> On Fri, Sep 15, 2017 at 12:41 PM, Abi Askushi 
>> wrote:
>>
>>> Hi all,
>>>
>>> I see in the ovirt guides that a gluster volume replica 3 with 1 arbiter
>>> is recommended.
>>> Why not simple replica 3? Is it due to the higher replication data that
>>> would cause performance issues?
>>>
>>> What I am observing is that a VM running on the server which has the
>>> arbiter brick has slower read performance then when the same VM runs on
>>> another server with a normal brick. Has anyone observed this? Is it because
>>> the arbiter does not have the real data on it?
>>>
>>> Thanx,
>>> Alex
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Question about gluster network

2017-10-16 Thread Sahina Bose
On Mon, Oct 16, 2017 at 2:33 PM, Arsène Gschwind 
wrote:

> Hi,
>
> My setup uses a separate physical network for gluster storage, this
> network is available on all hosts and defined as gluster network in the
> engine but the engine itself has no connection to that network.
> Does the engine need to have a connection to the gluster network?
>

No, it does not.


> engine.log reports a lot of those warnings:
>
> 2017-10-16 10:56:58,789+02 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler8) [44485247] START, 
> GlusterVolumesListVDSCommand(HostName
> = spfy-hw03, GlusterVolumesListVDSParameters:{runAsync='true',
> hostId='78cb5865-a771-415b-af27-79e4681a6831'}), log id: 75765557
> 2017-10-16 10:56:58,987+02 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler8) [44485247] Could not associate brick
> 'spfy-hw01:/gluster/engine/brick' of volume 
> 'af169181-d72e-4325-9947-d7dd09e512f0'
> with correct network as no gluster network found in cluster
> 'd9a88245-00ef-4d8b-b8f9-0948cde2b3e4'
> 2017-10-16 10:56:58,991+02 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler8) [44485247] Could not associate brick
> 'spfy-hw02:/gluster/engine/brick' of volume 
> 'af169181-d72e-4325-9947-d7dd09e512f0'
> with correct network as no gluster network found in cluster
> 'd9a88245-00ef-4d8b-b8f9-0948cde2b3e4'
> 2017-10-16 10:56:58,994+02 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler8) [44485247] Could not associate brick
> 'spfy-hw03:/gluster/engine/brick' of volume 
> 'af169181-d72e-4325-9947-d7dd09e512f0'
> with correct network as no gluster network found in cluster
> 'd9a88245-00ef-4d8b-b8f9-0948cde2b3e4'
> 2017-10-16 10:56:59,016+02 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler8) [44485247] FINISH, GlusterVolumesListVDSCommand,
> return: {7ef3b155-47d7-4405-aa70-82a9b8be4033=org.ovirt.engine.
> core.common.businessentities.gluster.GlusterVolumeEntity@34662183,
> af169181-d72e-4325-9947-d7dd09e512f0=org.ovirt.engine.
> core.common.businessentities.gluster.GlusterVolumeEntity@5f70adf6,
> 6d3c8561-4e52-4221-9473-88fd48ef4909=org.ovirt.engine.
> core.common.businessentities.gluster.GlusterVolumeEntity@30e34b53,
> 9ecd980c-512a-4f9e-ae5e-fa1708c74076=org.ovirt.engine.
> core.common.businessentities.gluster.GlusterVolumeEntity@bb50177f}, log
> id: 75765557
>
> What does this mean?
>
spfy-hw01, spfy-hw02, spfy-hw03 - Do these resolve correctly to the gluster
network's IP address that the engine knows? This message indicates that the
engine cannot resolve spfy-hw01 to associate it with one of the host's networks.
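
For example, a quick check on the engine machine (and on the hosts), assuming
standard name resolution:

getent hosts spfy-hw01 spfy-hw02 spfy-hw03

Each name should return the address of the host's gluster-network interface
that the engine knows about; otherwise the bricks cannot be associated with
the gluster network.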


> Thanks for any info / help.
>
> rgds,
> Arsène
> --
>
> *Arsène Gschwind*
> Fa. Sapify AG im Auftrag der Universität Basel
> IT Services
> Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
> Tel. +41 79 449 25 63  |  http://its.unibas.ch
> ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] gluster-server 3.5.2 requies gluasterfs-3.52

2014-11-03 Thread Sahina Bose
ovirt 3.5 does not have a dependency on glusterfs-3.6.0, AFAIK. 3.6 is 
in beta stage still.


Are you installing hosted engine? Do you by any chance have additional 
repos for glusterfs configured?


On 11/03/2014 05:44 AM, Juan Carlos YJ. Lin wrote:
gluster-server 3.5.2 requies gluasterfs-3.52, but ovirt3.5 install 
glusterfs-3.6.0

how to solve the dependency?

Juan Carlos Lin
Unisoft S.A.
+595-993-288330





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] gluster-server 3.5.2 requies gluasterfs-3.52

2014-11-03 Thread Sahina Bose


On 11/03/2014 01:44 PM, Sahina Bose wrote:
ovirt 3.5 does not have a dependency on glusterfs-3.6.0, AFAIK. 3.6 is 
in beta stage still.


My bad. glusterfs 3.6 was released on Oct 31.



Are you installing hosted engine? Do you by any chance have additional 
repos for glusterfs configured?


On 11/03/2014 05:44 AM, Juan Carlos YJ. Lin wrote:
gluster-server 3.5.2 requies gluasterfs-3.52, but ovirt3.5 install 
glusterfs-3.6.0

how to solve the dependency?

Juan Carlos Lin
Unisoft S.A.
+595-993-288330





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Slow resync with GlusterFS Replication

2014-11-04 Thread Sahina Bose

[+gluster-users]

On 11/04/2014 05:05 PM, ml ml wrote:

Hello List,

i have a 2 Node Cluster with GlusterFS. After putting one machine in
Maintance and turning it off and on again the glustefs starts its
resync.

However, this process seems to be very slow.

iotop and iftop only show me a few hundred K/s.

Is this normal?

Thanks,
Mario
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] gluster-server 3.5.2 requies gluasterfs-3.52

2014-11-05 Thread Sahina Bose
It looks like you have run into this issue - 
https://www.mail-archive.com/gluster-devel@gluster.org/msg02310.html

[Thanks, Humble for helping on this]

One workaround is to install the yum-plugin-priorities and add a 
priority = 1 to the ovirt35-glusterfs-epel repo. Meanwhile, the problem 
is going to be solved by the release of glusterfs 3.6.1 in the following 
days.
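
A sketch of that workaround (the file name below is an assumption - use
whichever file under /etc/yum.repos.d/ defines the ovirt-3.5-glusterfs-epel
repo on your host):

yum install yum-plugin-priorities

# add the following line to the [ovirt-3.5-glusterfs-epel] section,
# e.g. in /etc/yum.repos.d/ovirt-3.5-dependencies.repo:
priority=1

yum clean all
yum install glusterfs-server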




On 11/05/2014 05:48 AM, Juan Carlos YJ. Lin wrote:

Reinstall ovirt hosted engine today, and get this

[root@node1 ~]# yum list *gluster*
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.xpg.com.br
 * epel: mirror.ci.ifes.edu.br
 * extras: mirror.ci.ifes.edu.br
 * ovirt-3.5: www.gtlib.gatech.edu
 * ovirt-3.5-epel: mirror.ci.ifes.edu.br
 * ovirt-3.5-jpackage-6.0-generic: ftp.heanet.ie
 * updates: mirror.ci.ifes.edu.br
Installed Packages
glusterfs.x86_64  3.6.0.29-2.el6 @updates
glusterfs-api.x86_64  3.6.0.29-2.el6 @updates
glusterfs-cli.x86_64  3.6.0.29-2.el6 @updates
glusterfs-fuse.x86_64 3.6.0.29-2.el6 @updates
glusterfs-libs.x86_64 3.6.0.29-2.el6 @updates
glusterfs-rdma.x86_64 3.6.0.29-2.el6 @updates
Available Packages
glusterfs-api-devel.x86_643.6.0.29-2.el6 updates
glusterfs-debuginfo.x86_643.5.2-1.el6 ovirt-3.5-glusterfs-epel
glusterfs-devel.x86_643.6.0.29-2.el6 updates
glusterfs-extra-xlators.x86_643.5.2-1.el6 ovirt-3.5-glusterfs-epel
glusterfs-geo-replication.x86_64  3.5.2-1.el6 ovirt-3.5-glusterfs-epel
glusterfs-regression-tests.x86_64 3.5.2-1.el6 ovirt-3.5-glusterfs-epel
glusterfs-resource-agents.noarch  3.5.2-1.el6 
ovirt-3.5-glusterfs-noarch-epel

glusterfs-server.x86_64   3.5.2-1.el6 ovirt-3.5-glusterfs-epel
puppet-gluster.noarch 0.0.3-1.el6 epel
puppet-gluster-doc.noarch 0.0.3-1.el6 epel
samba-glusterfs.x86_643.6.23-12.el6 base
vdsm-gluster.noarch   4.16.7-1.gitdb83943.el6
 ovirt-3.5

Juan Carlos Lin
Unisoft S.A.
+595-993-288330


*De: *"Sahina Bose" 
*Para: *"Juan Carlos YJ Lin" 
*Enviados: *Martes, 4 de Noviembre 2014 4:55:45
*Asunto: *Re: [ovirt-users] gluster-server 3.5.2 requies gluasterfs-3.52

The node would pull the latest rpms based on your repo configuration. 
vdsm on the node has a requirement on glusterfs > 3.4.2.


Which OS are you using. I wonder how you got the glusterfs 3.6 as they 
are not available in the download.gluster.org repo.


Could you tell me the output of > rpm -qa | grep gluster

On 11/03/2014 08:39 PM, Juan Carlos YJ. Lin wrote:

So is the dependency correct?
I tried to install glusterfs-server on the node host and got the error.

Juan Carlos Lin
Unisoft S.A.
+595-993-288330

--------
*De: *"Sahina Bose" 
*Para: *"Juan Carlos YJ Lin" , users@ovirt.org
*Enviados: *Lunes, 3 de Noviembre 2014 5:17:30
*Asunto: *Re: [ovirt-users] gluster-server 3.5.2 requies
gluasterfs-3.52


On 11/03/2014 01:44 PM, Sahina Bose wrote:

ovirt 3.5 does not have a dependency on glusterfs-3.6.0,
AFAIK. 3.6 is in beta stage still.


My bad. glusterfs 3.6 was released on Oct 31.


Are you installing hosted engine? Do you by any chance have
additional repos for glusterfs configured?

On 11/03/2014 05:44 AM, Juan Carlos YJ. Lin wrote:

glusterfs-server 3.5.2 requires glusterfs-3.5.2, but oVirt 3.5
installs glusterfs-3.6.0.
How do I solve the dependency?

Juan Carlos Lin
Unisoft S.A.
+595-993-288330




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






Re: [ovirt-users] upgrade glusterfs under ovirt

2014-11-12 Thread Sahina Bose


On 11/12/2014 12:58 PM, Demeter Tibor wrote:

Hi,

I see that glusterfs 3.6 has released.
What is the official way to upgrade this under a production ovirt 
cluster?
Can I use the "yum update" command, or do I need to switch the host to 
maintenance and select the "Reinstall" button?


If you do not want to change the compatibility version of the cluster, 
the best option is to do a yum update on the nodes.
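
As a rough illustration only (the package list below is an assumption - adjust
it to whatever `rpm -qa | grep gluster` shows on your nodes), on each node in
turn, after moving it to maintenance in the engine:

yum update glusterfs glusterfs-server glusterfs-fuse glusterfs-api vdsm-gluster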





Thanks
Tibor



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] upgrade glusterfs under ovirt

2014-11-12 Thread Sahina Bose


On 11/12/2014 05:47 PM, Demeter Tibor wrote:

Hi,

After the yum update, do I just need a 'service glusterd restart'?
Is that possible on an active node that uses glusterfs?



Updating in the offline mode is the recommended approach. See 
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6 
for more details
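
Roughly, the per-node steps from that guide look like the following - this is
only a sketch, so verify the exact commands and service names against the
linked page for your distribution and gluster version:

service glusterd stop
pkill glusterfs            # stop client/self-heal processes
pkill glusterfsd           # stop brick processes
yum update glusterfs glusterfs-server glusterfs-fuse
service glusterd start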






Thank you.

Tibor

On 11/12/2014 12:58 PM, Demeter Tibor wrote:

Hi,

I see that glusterfs 3.6 has released.
What is the official way to upgrade this under a production
ovirt cluster?
Can I use the "yum update" command  or I need to switch to the
host to maintenance and select "reinstall" button?


If you do not want to change the compatibility version of the
cluster, the best option is to do a yum update on the nodes.



Thanks
Tibor



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] separate ovirtmgmt from glusterfs traffic

2014-11-19 Thread Sahina Bose
We will try and address this in the next release of oVirt - there were 
some issues earlier with identifying the bricks correctly, but with the 
gluster host uuid support in gluster xml commands, we can fix this.


https://bugzilla.redhat.com/show_bug.cgi?id=1049994 - marked for 3.6.0



On 11/19/2014 01:30 AM, Demeter Tibor wrote:

Hi,

I think this is a really good question: why is it not possible to separate storage 
network traffic like display, etc.?
This is not an easy task to set up, and it's more difficult on a live system.

Tibor


- Eredeti üzenet -

On 18/11/14 06:25, Juan Pablo Lorier wrote:

Hi,

In my experience, having ovirt traffic on the same NIC as gluster
can make your platform unstable. I was using it for large file
storage, and gluster generates so much traffic that ovirt got confused and
started marking hosts as unavailable because of high latency.
I opened an RFE over a year ago, but had no luck with the team to
get it done. In the RFE I was asking for a way in the UI to decide
which NIC to use for gluster other than the MGMT net, which is the one
ovirt lets you use.
There's another way to do this, and it's from outside ovirt. There you
have to unregister and re-register the bricks using gluster console
commands. This way, when you register the bricks, you can specify the
IP address of the spare NIC and then the traffic will not interfere
with the mgmt network.
There's a step that I don't recall well, but ovirt is going to need to
know that the bricks are no longer on the mgmt IP; maybe someone else
on the list can help with this. I can tell you that if you search the
list you'll see my posts about this and the replies of those who helped
me back then.
Regards,


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



I think that the possibility to add a separate storage network is a
must. It doesn't make sense to have Gluster support if you can't separate
management network traffic from storage network traffic. In 3.5 you can
define management/vm/display/migration interfaces but not storage
interfaces. Why? I don't know. But I think that Gluster support is
incomplete without exclusive storage networking.

Regards

Federico
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-12-10 Thread Sahina Bose


On 11/03/2014 09:09 PM, Dan Kenigsberg wrote:

On Sun, Sep 14, 2014 at 03:17:39PM +0300, Itamar Heim wrote:

On 09/12/2014 03:22 PM, Itamar Heim wrote:

With oVirt 3.5 nearing GA, time to ask for "what do you want to see in
oVirt 3.6"?


Even the short list of items mentioned so far includes big features we would need
to do justice to.
This probably means a slightly longer cycle than the 6 months we currently
do.

We're not so good at keeping up with deadlines - our recent 3.5 version
slipped 2.5 months past the original plan. With longer cycles, this
is even harder to control.

Can we instead break the "heavier" features into smaller phases that are
easier to maintain, and deliver the complete feature in 3.7?


+1 for a smaller release cycle.

It would be great if we could have a 3-month cycle.





with many of the features requiring way newer versions of some of the
components, I think we should plan for "3.6 cluster" to require an .el7 host
(expecting even 7.1 to be out before we release oVirt 3.6).

so i want us to plan for:
- oVirt 3.6 supports .el6 and .el7 hosts in 3.5 cluster (and for fedora
   - fc20?)
- oVirt 3.6 supports .el7 in 3.6 cluster (and for fedora - fc21?)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot activate storage domain

2014-12-18 Thread Sahina Bose


On 12/18/2014 01:35 PM, Vered Volansky wrote:

Adding Sahina.

- Original Message -

From: "Brent Hartzell" 
To: users@ovirt.org
Sent: Thursday, December 18, 2014 3:38:11 AM
Subject: [ovirt-users] Cannot activate storage domain



Have the following:



6 hosts – virt + Gluster shared



Gluster volume is distributed-replicate – replica 2



Shutting down servers one at a time all works, except for 1 brick. If we shut
down one specific brick (1 brick per host) - we're unable to activate the
storage domain. VMs that were actively running from other bricks continue
to run. Whatever was running from that specific brick fails to run, gets
paused etc.



Error log shows the entry below. I’m not certain what it’s saying is read
only…nothing is read only that I can find.





2014-12-17 19:57:13,362 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
(DefaultQuartzScheduler_Worker-47) [4e9290a2] Command
SpmStatusVDSCommand(HostName = U23.domainame.net, HostId =
0db58e46-68a3-4ba0-a8aa-094893c045a1, storagePoolId =
7ccd6ea9-7d80-4170-afa1-64c10c185aa6) execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
SpmStatusVDS, error = [Errno 30] Read-only file system, code = 100

2014-12-17 19:57:13,363 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(DefaultQuartzScheduler_Worker-47) [4e9290a2] hostFromVds::selectedVds -
U23.domainname.net, spmStatus returned null!





According to Ovirt/Gluster, if a brick goes down, the VM should be able to be
restarted from another brick without issue. This does not appear to be the
case... If we take other bricks offline, it appears to work as expected.
Something with this specific brick causes everything to break, which then
makes any VMs that were running from the brick unable to start.


Do you have the recommended options for using the volume as a virt store 
turned on? Is client-side quorum turned on for the volume? Is the brick 
that causes the issue the first brick in the replica set?





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot activate storage domain

2014-12-18 Thread Sahina Bose

[+Sas - thanks for the link to virt-store usecase article inline]

On 12/18/2014 06:56 PM, Brent Hartzell wrote:

Hello,

I had actually gotten this sorted out, somewhat. If I disable server quorum
on the volume, the storage domain will activate. The volume is/was optimized
for virt store via oVirt. The brick in question was not the first brick
added to the volume through oVirt; however, it appears that it may have been
the first brick in the replica being used, but I'm not certain how to find
this out.


The recommended setting is to have both client and server side quorum 
turned on. But turning on server-side quorum with a 2-way replica volume 
would mean that your volume goes offline when one of the bricks goes down.


"gluster volume info" command will give you information about the volume 
topology. So will the bricks sub-tab for Volume in oVirt. The order in 
which the bricks are listed, is the order of the replica sets.



Disabling quorum allowed me to get the affected VMs back online; however, is
this the recommended procedure? I tried to use replace-brick with another
node but it failed because the failed brick was not available. Would we
leave quorum disabled until that brick gets replaced? I.e. - rebuild the
server with the same hostname/IP/file structure and rebalance the cluster?


http://www.gluster.org/community/documentation/index.php/Virt-store-usecase 
- for recommendations on volume tunables.
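
For reference, the tunables from that article boil down to something like the
following sketch (the volume name is a placeholder; oVirt's "Optimize for Virt
Store" applies an equivalent set of options):

gluster volume set VOLNAME group virt
gluster volume set VOLNAME storage.owner-uid 36
gluster volume set VOLNAME storage.owner-gid 36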


You could add another brick to your volume to make it a replica 3 and 
then turn on quorum?
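
As a hedged sketch of that suggestion (host and brick names are placeholders;
for a distributed-replicate volume you would need one new brick per replica
set, not just one):

# grow the 2-way replica to replica 3 with a brick on a third host
gluster volume add-brick VOLNAME replica 3 host3:/bricks/b1

# then enable client-side and server-side quorum
gluster volume set VOLNAME cluster.quorum-type auto
gluster volume set VOLNAME cluster.server-quorum-type server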


For help on recovering your volume, I suggest you write to 
gluster-us...@gluster.org







While that happened, I read somewhere about this happening with a replica 2
- I've created a new volume with replica 3 and plan to test this again. Is
there any info you can point me to for how to handle this when it happens or
what the correct procedure is when a "first" brick fails?






-Original Message-
From: Sahina Bose [mailto:sab...@redhat.com]
Sent: Thursday, December 18, 2014 3:51 AM
To: Vered Volansky; Brent Hartzell
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain


On 12/18/2014 01:35 PM, Vered Volansky wrote:

Adding Sahina.

- Original Message -

From: "Brent Hartzell" 
To: users@ovirt.org
Sent: Thursday, December 18, 2014 3:38:11 AM
Subject: [ovirt-users] Cannot activate storage domain



Have the following:



6 hosts - virt + Gluster shared



Gluster volume is distributed-replicate - replica 2



Shutting down servers one at a time all work except for 1 brick. If
we shut down one specific brick (1 brick per host) - we're unable to
activate the storage domain. VM's that were actively running from
other bricks continue to run. Whatever was running form that specific
brick fails to run, gets paused etc.



Error log shows the entry below. I'm not certain what it's saying is
read only.nothing is read only that I can find.





2014-12-17 19:57:13,362 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
(DefaultQuartzScheduler_Worker-47) [4e9290a2] Command
SpmStatusVDSCommand(HostName = U23.domainame.net, HostId =
0db58e46-68a3-4ba0-a8aa-094893c045a1, storagePoolId =
7ccd6ea9-7d80-4170-afa1-64c10c185aa6) execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
SpmStatusVDS, error = [Errno 30] Read-only file system, code = 100

2014-12-17 19:57:13,363 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(DefaultQuartzScheduler_Worker-47) [4e9290a2]
hostFromVds::selectedVds - U23.domainname.net, spmStatus returned null!





According to Ovirt/Gluster, if a brick goes down, the VM should be
able to be restarted from another brick without issue. This does not
appear to be the case. If we take other bricks offline, it appears to

work as expected.

Something with this specific brick cases everything to break which
then makes any VM's that were running from the brick unable to start.

Do you have the recommended options for using volume as virt store turned
on? Is client-side quorum turned on for the volume? Is the brick that causes
the issue, the first brick in the replica set?



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using gluster on other hosts?

2015-01-08 Thread Sahina Bose


On 01/08/2015 12:07 AM, Will K wrote:

Hi

I would like to see if anyone has good suggestion.

I have two physical hosts with 1GB connections to switched networks. 
The hosts also have 10GB interface connected directly using Twinax 
cable like copper crossover cable.  The idea was to use the 10GB as a 
"private network" for GlusterFS till the day we want to grow out of 
this 2 node setup.


GlusterFS was setup with the 10GB ports using non-routable IPs and 
hostnames in /etc/hosts, for example, gfs1 192.168.1.1 and gfs2 
192.168.1.2.  I'm following example from 
community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/ 
 
, Currently I'm only using Gluster volume on node1, but `gluster probe 
peer` test worked fine with node2 through the 10GB connection.


oVirt engine was setup on physical host1 with hosted engine.  Now, 
when I try to create new Gluster storage domain, I can only see the 
host "node1" available.


Is there anyway I can setup oVirt on node1 and node2, while using 
"gfs1" and "gfs2" for GlusterFS? or some way to take advantage of the 
10GB connection?


If I understand right, you have 2 interfaces on each of your hosts, and 
you want oVirt to communicate via 1 interface and glusterfs to use the other?


While adding the hosts to oVirt you could use ip1, and then while 
creating the volume, add the brick using the other IP address.

For instance, gluster volume create <volname> 192.168.1.2:/bricks/b1

Currently, there's no way to specify the IP address to use while adding 
a brick from oVirt UI (we're working on this for 3.6), but you could do 
this from the gluster CLI commands. This would then be detected in the 
oVirt UI.
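
For example, a minimal sketch using the private-network names from this thread
(the brick paths are made up):

gluster peer probe gfs2                   # run once from gfs1
gluster volume create datavol replica 2 gfs1:/bricks/b1 gfs2:/bricks/b1
gluster volume start datavol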





Thanks
W


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using gluster on other hosts?

2015-01-08 Thread Sahina Bose


On 01/08/2015 09:41 PM, Will K wrote:

That's what I did, but didn't work for me.

1. use the 192.168.x interface to setup gluster. I used hostname in 
/etc/hosts.

2. setup oVirt using the switched network hostnames, let's say 10.10.10.x
3. oVirt and all that comes up fine.
4. When try to create a storage domain, it only shows the 10.10.10.x 
hostnames available.



Tried to add a brick and I would get something like
Host gfs2 is not in 'Peer in Cluster' state  (while node2 is the 
hostname and gfs2 is the 192.168 name)



Which version of glusterfs do you have?

Kaushal, will this work in glusterfs3.6 and above?




Running the command `gluster peer probe gfs2` or `gluster peer probe 
192.168.x.x` didn't work:

peer probe: failed: Probe returned with unknown errno 107

Running the probe again with the switched-network hostname or IP worked fine. 
Maybe it is not possible with the current GlusterFS version?

http://www.gluster.org/community/documentation/index.php/Features/SplitNetwork


Will


On Thursday, January 8, 2015 3:43 AM, Sahina Bose  
wrote:




On 01/08/2015 12:07 AM, Will K wrote:

Hi

I would like to see if anyone has good suggestion.

I have two physical hosts with 1GB connections to switched networks. 
The hosts also have 10GB interface connected directly using Twinax 
cable like copper crossover cable.  The idea was to use the 10GB as a 
"private network" for GlusterFS till the day we want to grow out of 
this 2 node setup.


GlusterFS was setup with the 10GB ports using non-routable IPs and 
hostnames in /etc/hosts, for example, gfs1 192.168.1.1 and gfs2 
192.168.1.2.  I'm following example from 
community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/ 
<http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/> 
, Currently I'm only using Gluster volume on node1, but `gluster 
probe peer` test worked fine with node2 through the 10GB connection.


oVirt engine was setup on physical host1 with hosted engine.  Now, 
when I try to create new Gluster storage domain, I can only see the 
host "node1" available.


Is there anyway I can setup oVirt on node1 and node2, while using 
"gfs1" and "gfs2" for GlusterFS? or some way to take advantage of the 
10GB connection?


If I understand right, you have 2 interfaces on each of your hosts, 
and you want oVirt to communicate via 1 interface and glusterfs to use 
other?


While adding the hosts to oVirt you could use ip1 and then.while 
creating the volume, add the brick using the other ip address.

For instance, gluster volume create  192.168.1.2:/bricks/b1

Currently, there's no way to specify the IP address to use while 
adding a brick from oVirt UI (we're working on this for 3.6), but you 
could do this from the gluster CLI commands. This would then be 
detected in the oVirt UI.






Thanks
W



___
Users mailing list
Users@ovirt.org  <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using gluster on other hosts?

2015-01-09 Thread Sahina Bose


On 01/09/2015 01:13 PM, Kaushal M wrote:

Hey Will,

It seems to me you are trying to manage GlusterFS from oVirt, and trying to get 
your multi-network setup to work. As Sahina mentioned already, this is not 
currently possible as oVirt doesn't have the required support.

If you want to make this work right now, I suggest you manage GlusterFS 
manually. You could do the following,

- Install GlusterFS on both the hosts and set up a GlusterFS trusted storage 
pool using the 'gluster peer probe' commands. Run 'gluster peer probe gfs2' 
from node1 (and the reverse just for safety).
- Create a GlusterFS volume, 'gluster volume create <volname> gfs1:<brick-path> 
gfs2:<brick-path>', and start it, 'gluster volume start <volname>'.
After this you'll have GlusterFS setup on the particular network and you'll 
have volume ready to be added as a oVirt storage domain.



To enable oVirt to use the node1 interface, is it possible to peer 
probe using the node1 and node2 interfaces in the steps above - i.e. "gluster peer 
probe node2"? (This is essentially what happens when a host is added with 
host address node1 or node2.)


and then create a GlusterFS volume from CLI using the command you 
mentioned above?




- Now setup oVirt on the nodes with the node* network.
- Add the gfs* network to oVirt. I'm not sure if this would be required, but 
you can try it anyway.
- Add the created GlusterFS volume as a storage domain using a gfs* address.

You should now be ready to begin using the new storage domain.

If you would want to expand the volume later, you will need to do it manually 
with an explicit 'gluster volume add-brick' command.

You could possibly add the GlusterFS cluster to the oVirt interface, just so 
you can get stats and monitoring. But even then you shouldn't use the oVirt 
interface to do any management tasks.

Multi-network support for GlusterFS within oVirt is an upcoming feature, and 
Sahina can give you more details on when to expect it to be available.

Thanks,
Kaushal


- Original Message -

From: "Sahina Bose" 
To: "Will K" , users@ovirt.org, "Kaushal M" 

Sent: Friday, 9 January, 2015 11:10:48 AM
Subject: Re: [ovirt-users] Using gluster on other hosts?


On 01/08/2015 09:41 PM, Will K wrote:

That's what I did, but didn't work for me.

1. use the 192.168.x interface to setup gluster. I used hostname in
/etc/hosts.
2. setup oVirt using the switched network hostnames, let's say 10.10.10.x
3. oVirt and all that comes up fine.
4. When try to create a storage domain, it only shows the 10.10.10.x
hostnames available.


Tried to add a brick and I would get something like
 Host gfs2 is not in 'Peer in Cluster' state  (while node2 is the
hostname and gfs2 is the 192.168 name)


Which version of glusterfs do you have?

Kaushal, will this work in glusterfs3.6 and above?



Ran command `gluster probe peer gfs2` or `gluster probe peer
192.168.x.x` didn't work
 peer probe: failed: Probe returned with unknown errno 107

Ran probe again with the switched network hostname or IP worked fine.
May be it is not possible with current GlusterFS version?
http://www.gluster.org/community/documentation/index.php/Features/SplitNetwork


Will


On Thursday, January 8, 2015 3:43 AM, Sahina Bose 
wrote:



On 01/08/2015 12:07 AM, Will K wrote:

Hi

I would like to see if anyone has good suggestion.

I have two physical hosts with 1GB connections to switched networks.
The hosts also have 10GB interface connected directly using Twinax
cable like copper crossover cable.  The idea was to use the 10GB as a
"private network" for GlusterFS till the day we want to grow out of
this 2 node setup.

GlusterFS was setup with the 10GB ports using non-routable IPs and
hostnames in /etc/hosts, for example, gfs1 192.168.1.1 and gfs2
192.168.1.2.  I'm following example from
community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
<http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/>
, Currently I'm only using Gluster volume on node1, but `gluster
probe peer` test worked fine with node2 through the 10GB connection.

oVirt engine was setup on physical host1 with hosted engine.  Now,
when I try to create new Gluster storage domain, I can only see the
host "node1" available.

Is there anyway I can setup oVirt on node1 and node2, while using
"gfs1" and "gfs2" for GlusterFS? or some way to take advantage of the
10GB connection?

If I understand right, you have 2 interfaces on each of your hosts,
and you want oVirt to communicate via 1 interface and glusterfs to use
other?

While adding the hosts to oVirt you could use ip1 and then.while
creating the volume, add the brick using the other ip address.
For instance, gluster volume create  192.168.1.2:/bricks/b1

Currently, there's no way to specify the IP address to use while
adding a brick from oVirt UI (we're 

[ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-12 Thread Sahina Bose

Hi all,

Please review the feature page for this proposed solution and provide 
your inputs - http://www.ovirt.org/Features/Select_Network_For_Gluster


thanks
sahina


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-13 Thread Sahina Bose


On 01/12/2015 06:14 PM, Oved Ourfali wrote:

Hi Sahina,

Some comments:

1. As far as I understand, you might not have an IP available immediately after 
setupNetworks runs (getCapabilities should run, but it isn't run automatically, 
afair).
2. Perhaps you should pass not the IP but the name of the network? IPs might 
change.
3. Adding to "2", perhaps using DNS names is a more valid approach?


To the gluster volume add-brick command, the brick information needs to 
be passed in the form <hostname-or-IP>:<brick-directory>.


So even if we do show the network names in the UI, we will need the 
underlying IP address to form this command.
Regarding DNS names, is there currently a way to query for the DNS 
aliases of a host? I would need to use the hostname in the command above, 
and assume that the user has set up his DNS outside of oVirt to correctly 
resolve to the internal/external network, correct?




4. You're using the terminology "role", but it might be confusing, as we have "roles" with regard 
to permissions. Consider using "storage usage" and not "storage role" in the feature page.

Thanks,
Oved

- Original Message -

From: "Sahina Bose" 
To: de...@ovirt.org, "users" 
Sent: Monday, January 12, 2015 2:00:16 PM
Subject: [ovirt-users] [Feature review] Select network to be used for   
glusterfs

Hi all,

Please review the feature page for this proposed solution and provide
your inputs - http://www.ovirt.org/Features/Select_Network_For_Gluster

thanks
sahina


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-13 Thread Sahina Bose


On 01/12/2015 06:21 PM, Lior Vernia wrote:

Hi Sahina! :)

Cool feature, and I think long-awaited by many users. I have a few comments:

1. In the "Add Bricks" dialog, it seems like the "IP Address" field is a
list box - I presume the items contained there are all IP addresses
configured on the host's interfaces.

1. a. May I suggest that this contain network names instead of IP
addresses? Would be easier for users to think about things (they surely
remember the meaning of network names, not necessarily of IP addresses).





1. b. If I correctly understood the mock-up, then configuring a "Storage
Network" role only affects the default entry chosen in the list box. Is
it really worth the trouble of implementing this added role? It's quite
different than display/migration roles, which are used to determine what
IP address to use at a later time (i.e. not when configuring the host),
when a VM is run/migrated in the cluster.



If not for "Storage network" role, how would we default which network to 
use. In fact, we are planning to remove the drop down to choose network 
from the Add Brick UI, to avoid confusion and just use the network with 
this role, if available - otherwise use the host address. (host_address 
in vds_static)


Will update page accordingly




1. c. A word of warning: sometimes a host interface's IP address is
missing in the engine - this usually happens when they're configured for
the first time with DHCP, and the setup networks command returns before
an IP address is allocated (this can later be resolved by refreshing
host capabilities, there's a button for that). So when displaying items
in the list box, you should really check that an IP address exists for
each network.

2. "Storage Network": if you intend to keep this role in the feature (I
don't think it adds a lot of functionality, see article 1b), it might be
better to call it "Gluster Network" - otherwise people using virt mode
might think this network is gonna be used to communicate with other
types of storage domains.



Could this network be reused for other storage needs also? If not, we 
can rename it "gluster network".




Yours, Lior.

On 12/01/15 14:00, Sahina Bose wrote:

Hi all,

Please review the feature page for this proposed solution and provide
your inputs - http://www.ovirt.org/Features/Select_Network_For_Gluster

thanks
sahina


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-13 Thread Sahina Bose


On 01/12/2015 08:52 PM, Dan Kenigsberg wrote:

On Mon, Jan 12, 2015 at 02:59:50PM +0200, Lior Vernia wrote:


On 12/01/15 14:44, Oved Ourfali wrote:

Hi Sahina,

Some comments:

1. As far as I understand, you might not have an IP available immediately after 
setupNetworks runs (getCapabilities should run, but it isn't run automatically, 
afair).
2. Perhaps you should pass not the IP but the name of the network? IPs might 
change.

Actually, IP address can indeed change - which would be very bad for
gluster functioning! I think moving networks or changing their IP
addresses via Setup Networks should be blocked if they're used by
gluster bricks.

In the suggested feature, there is no real storage "role". The "storage
role" title means only "default value for glusterfs IP".

For example, once a brick is created, nothing protects the admin from
accidentally removing the storage network, or changing its IP address.

Another "proof" that this is not a real "role", is that it affects only
GUI: I am guessing that REST API would not make use of it at all. (maybe
I'm wrong; for sure, REST must be defined in the feature page)


REST API that lists the available networks (with IP addresses) would be 
used to select the network and pass to the create gluster volume API


I'll update the feature page with the REST API changes as well.



Maybe that's the behavior we want. But alternatively, Engine can enforce
a stronger linkage between the brick to the network that it uses. When
adding a brick, the dialog would list available networks instead of the
specific IP. As long as the brick is being used, the admin would be
blocked/warned against deleting the network.


Is there a way to block against changing IP address used by a network?



I'm missing a discussion regarding the upgrade path. If we opt to
require a single storage-role network in a cluster, in an upgraded
cluster the management network should take this role.


There would not be any change to existing volumes on upgrade, as bricks 
have already been added. Users can use the Edit brick option to update 
the network to be used, if required, as mentioned in "Change network used 
by brick".






3. Adding to "2", perhaps using DNS names is a more valid approach?
4. You're using the terminology "role", but it might be confusing, as we have "roles" with regards 
to permissions. Consider changing "storage usage" and not "storage role" in the feature page.

Well, we've already been using this terminology for a while now
concerning display/migration roles for networks... That's probably the
terminology to use.


Thanks,
Oved

- Original Message -

From: "Sahina Bose" 
To: de...@ovirt.org, "users" 
Sent: Monday, January 12, 2015 2:00:16 PM
Subject: [ovirt-users] [Feature review] Select network to be used for   
glusterfs

Hi all,

Please review the feature page for this proposed solution and provide
your inputs - http://www.ovirt.org/Features/Select_Network_For_Gluster

thanks
sahina

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-14 Thread Sahina Bose


On 01/13/2015 09:45 PM, Dan Kenigsberg wrote:

On Tue, Jan 13, 2015 at 02:51:34PM +0200, Lior Vernia wrote:


On 13/01/15 10:21, Sahina Bose wrote:

On 01/12/2015 08:52 PM, Dan Kenigsberg wrote:

On Mon, Jan 12, 2015 at 02:59:50PM +0200, Lior Vernia wrote:

On 12/01/15 14:44, Oved Ourfali wrote:

Hi Sahina,

Some comments:

1. As far as I understand, you might not have an IP available
immediately after setupNetworks runs (getCapabilities should run,
but it isn't run automatically, afair).
2. Perhaps you should pass not the IP but the name of the network?
IPs might change.

Actually, IP address can indeed change - which would be very bad for
gluster functioning! I think moving networks or changing their IP
addresses via Setup Networks should be blocked if they're used by
gluster bricks.

In the suggested feature, there is no real storage "role". The "storage
role" title means only "default value for glusterfs IP".

For example, once a brick was created, nothing protects the admin from
accidently removing the storage network, or changing its IP address.

Another "proof" that this is not a real "role", is that it affects only
GUI: I am guessing that REST API would not make use of it at all. (maybe
I'm wrong; for sure, REST must be defined in the feature page)

REST API that lists the available networks (with IP addresses) would be
used to select the network and pass to the create gluster volume API

My question regarded the argument of the add brick API (in Engine
level). Is it an IPv4 address (like it seems) or could it be a network
name?


I've updated the feature page with the REST API and other comments. On 
further thought, there will be no change to Add brick API, as the engine 
will select the network to be used based on the networks setup for the 
host. If "Storage network" role is associated with any of the networks, 
this will be used. Otherwise, the host's address will be used to add the 
brick.


There is a NEW API to allow updating a brick's address.




I'll update the feature page with the REST API changes as well.


If REST allows to choose the network used for gluster traffic, then I
think so should the GUI - I would not drop the list box from the design
in that case.


See above - have kept REST API consistent.




Maybe that's the behavior we want. But alternatively, Engine can enforce
a stronger linkage between the brick to the network that it uses. When
adding a brick, the dialog would list available networks instead of the
specific IP. As long as the brick is being used, the admin would be
blocked/warned against deleting the network.

Is there a way to block against changing IP address used by a network?


Yes, this should be implemented at least in the canDoAction() method of
SetupNetworksCommand (most of it is done in the SetupNetworksHelper
class). And perhaps this should be blocked in the GUI as well.

Note that by the time 3.6 is released, the REST (and probably GUI) are
supposed to work with a different backend command that is currently
being implemented - so maybe you'll need to modify that instead, or on
top of the changes in SetupNetworksHelper.



Ok. Thanks!





I'm missing a discussion regarding the upgrade path. If we would opt to
requiring a single storage role network in a cluster, in an upgraded
cluster the management network should take this role.

There would not be any change to existing volumes on upgrade, as bricks
have already been added. Users can use the Edit brick option to update
the network to be used, if required as mentioned in "Change network used
by brick "


I suspect Dan referred to the upgrade path of the engine itself - if you
add a new "Gluster Network" boolean column to the DB, it will initially
be null for all current networks. You'd likely need to write an upgrade
script to assign the role by default to the existing management networks
in each cluster.

yep.


Aah..ok! The "Gluster network" is not a mandatory role. That is, we 
could have a case where the user does not want to select any network as 
"Gluster network" and instead choose to continue using host's address 
for adding bricks.


So existing deployments would continue to work as before - without this 
role assigned to any of the networks.






3. Adding to "2", perhaps using DNS names is a more valid approach?
4. You're using the terminology "role", but it might be confusing,
as we have "roles" with regards to permissions. Consider changing
"storage usage" and not "storage role" in the feature page.

Well, we've already been using this terminology for a while now
concerning display/migration roles for networks... That's probably the
terminology to use.

If I am not mistaken, it could make sense to have a setup with one brick
using network A and another - 

Re: [ovirt-users] [Feature review] Select network to be used for glusterfs

2015-01-15 Thread Sahina Bose


On 01/15/2015 02:27 PM, Dan Kenigsberg wrote:

On Thu, Jan 15, 2015 at 12:34:18PM +0530, Sahina Bose wrote:


I've updated the feature page with the REST API and other comments. On
further thought, there will be no change to Add brick API, as the engine
will select the network to be used based on the networks setup for the host.
If "Storage network" role is associated with any of the networks, this will
be used. Otherwise, the host's address will be used to add the brick.




The paragraph above rules out the use case I lay out below. Could you relate
to it? Isn't it a reasonable use case?


If I am not mistaken, it could make sense to have a setup with one brick
using network A and another - using network B. Does your design support
this? I think that this would be particularly important on upgraded
clusters, where the management network is already used, but newly
created bricks should start using another network.




On upgraded clusters, the user would have to assign a network with the 
role "Storage network". Any newly created brick would then start using 
this, rather than the management network.


I'm not sure if the use case where each brick on a host is added using 
different networks is a common one (apart from the upgrade scenario, 
that is). If it is, we could provide an Advanced edit option in the UI 
to select the network in the Add Bricks dialog.
The entity design supports setting a different network per brick, and the 
REST API already provides a way to set this as an optional parameter.



May I repeat my follow-up request? It would help me understand the content
of the feature.


Sorry, I missed these before!



Would you add a feature page section regarding modification to the
Vdsm/Engine API?


http://www.ovirt.org/Features/Select_Network_For_Gluster#Change_to_VDSM_API
http://www.ovirt.org/Features/Select_Network_For_Gluster#Change_to_REST_API



One last comment - may I ask that new APIs accept both ipv4 and ipv6
addresses? There is an ongoing effort to support ipv6 on Vdsm.



Glusterfs does not support ipv6 yet, so addition of bricks using ipv6 
addresses would not work.


thanks,
sahina

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade to 3.5.1

2015-01-22 Thread Sahina Bose


On 01/22/2015 08:03 PM, Demeter Tibor wrote:

Hello,

I have an ovirt 3.5.0 cluster with three nodes, and we are using glusterfs 
to serve backend storage for VMs. Glusterfs is on the same servers 
as ovirt.

We have Gluster 3.5.1 on all of the nodes.

How can I upgrade to ovirt 3.5.1 with the new glusterfs? I don't want data 
inconsistency/data loss/split-brains.


Do you have a replica 3 volume set up across these 3 nodes?

Glusterfs supports rolling upgrades for replica volumes.
But there are some issues upgrading from 3.5 to 3.6 (depending 
on the version used).

Adding gluster-users & Pranith for recommended procedures.


thanks
sahina



Do I need to update gluster to the latest version?
How do I need to upgrade the nodes? Is there any whitepaper about this?


Thanks in advance,


Tibor



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade to 3.5.1

2015-01-22 Thread Sahina Bose

Sorry, wrong ML earlier

On 01/23/2015 12:33 PM, Sahina Bose wrote:


On 01/22/2015 08:03 PM, Demeter Tibor wrote:

Hello,

I have an ovirt 3.5.0 cluster with three nodes and we using glusterfs 
for serving backend storage for VM-s. Glusterfs are on same servers 
with ovirt.

We have Gluster 3.5.1 on all of nodes.

How can I upgrade to ovirt 3.5.1 with new glusterfs? I don't want 
data inconsistency/data loss/split brains?


Do you have a replica 3 volume setup across these 3 nodes?

Glusterfs supports rolling upgrade for replica volumes.
But there are some issues to upgrade from 3.5 to 3.6 versions 
(depending on version used)

Adding gluster-users & Pranith for recommended procedures.


thanks
sahina



Need I update gluster version to the latest?
How I need to upgrade the nodes ?  Is there any whitepaper about this?


Thanks in advance,


Tibor



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade to 3.5.1

2015-02-02 Thread Sahina Bose


On 02/01/2015 04:40 PM, Demeter Tibor wrote:

Hi,

Is glusterfs 3.6 necessary for ovirt 3.5.1?
I have 3.5.2 at this moment.

Is 3.6 necessary for the upgrade?



If you only want to upgrade the engine to oVirt 3.5.1 - you don't need 
to upgrade the gluster nodes. I understand that your gluster nodes are 
separate from the virt nodes. That is, you do not have any cluster 
enabled with both virt + gluster service.


glusterfs 3.6 is not necessary to upgrade engine to oVirt 3.5.1. It 
would only be required if you were upgrading the cluster level (the 
compatibility version) which in turn would require newer vdsm versions 
and gluster versions. But in your case, I think you already have your 
cluster running with glusterfs 3.5.2. What is the cluster compatibility 
version?



I'm worried about what will happen with my gluster volumes after the 
upgrade... :(


Please help me.

Also, how can I upgrade the nodes? Do I need to do a "yum upgrade" on the 
compute nodes?


By upgrade the nodes - you mean only the compute nodes? yum upgrade on 
each node should work.
You can also move the Host to maintenance mode and click on Reinstall or 
upgrade from engine.
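
For the yum route, a hypothetical per-node sequence would look like this (put
the host into maintenance in the UI first; package and service names are the
usual vdsm ones, adjust to your setup):

yum update vdsm vdsm-cli vdsm-gluster
service vdsmd restart      # or: systemctl restart vdsmd on el7/Fedora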





Thanks.

Tibor




Hi,

No, I have only one replica between two nodes.
The third node is only a host for VMs and it uses this replica.

Thanks
Tibor




On 01/22/2015 08:03 PM, Demeter Tibor wrote:

Hello,

I have an ovirt 3.5.0 cluster with three nodes and we
using glusterfs for serving backend storage for VM-s.
Glusterfs are on same servers with ovirt.
We have Gluster 3.5.1 on all of nodes.

How can I upgrade to ovirt 3.5.1 with new glusterfs? I
don't want data inconsistency/data loss/split brains?


Do you have a replica 3 volume setup across these 3 nodes?

Glusterfs supports rolling upgrade for replica volumes.
But there are some issues to upgrade from 3.5 to 3.6 versions
(depending on version used)
Adding gluster-users & Pranith for recommended procedures.


thanks
sahina


Need I update gluster version to the latest?
How I need to upgrade the nodes ?  Is there any whitepaper
about this?


Thanks in advance,


Tibor



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Help understanding Gluster in oVirt

2015-02-03 Thread Sahina Bose


On 01/28/2015 08:59 AM, Groten, Ryan wrote:


I was planning on making a Gluster Data domain to test, and found some 
great information on this page: 
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/


The article the author uses the CTDB service for VIP failover.  Is it 
possible/recommended to not do this, and just create a gluster volume 
on all the hosts in a cluster, then create the Gluster data domain as 
localhost:?




Theoretically, it should work - if you make sure that you have a replica 
3 gluster volume spread across 3 nodes, and these 3 nodes are your 
compute nodes as well - you should be fine without CTDB setup for 
failover and mounting as localhost.


But I've not tried this to recommend it. Maybe if others have tried it, 
they can chime in?
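
For anyone wanting to experiment, the setup being discussed would look roughly
like this (host and volume names are placeholders, and as noted above this is
an untested sketch):

gluster volume create datavol replica 3 \
    host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1
gluster volume start datavol
# then in oVirt: New Domain -> Data / GlusterFS -> Path: localhost:/datavol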


Btw, is there any reason you do not want to set up CTDB?


Thanks,

Ryan



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade to 3.5.1

2015-02-03 Thread Sahina Bose


On 02/02/2015 08:12 PM, Demeter Tibor wrote:

Hi Sahina,

Unfortunately my compute nodes and gluster nodes are on the same servers. 
I am running my VMs on the gluster nodes and vice versa.
Also, my ovirt portal is running on the first node, which is a gluster 
and compute node too.


Is this hosted-engine setup? Have you looked at 
http://www.ovirt.org/Hosted_Engine_Howto#Upgrade_Hosted_Engine ?




I have a gluster-based installation with three servers, two for 
replica & VMs, one for VMs only.


If you're upgrading from oVirt 3.5 to 3.5.1 - there's no new dependency 
on glusterfs. So, you could decide not to upgrade glusterfs and just 
upgrade the vdsm on the nodes.




This is a production environment, so I'm quite afraid of 
upgrading... :( (I don't want glusterfs mistakes, split-brains, etc.)


Also, I could arrange a little downtime for upgrading, but I need to know 
what the trustworthy upgrade procedure is. Is there official upgrade 
documentation for ovirt that contains the glusterfs upgrade methods?


There is separate documentation for the oVirt upgrade and the glusterfs 
upgrade. There is no mechanism to upgrade glusterfs on the nodes from within oVirt.







My cluster level is 3.5.

Thanks a lot.

Tibor





On 02/01/2015 04:40 PM, Demeter Tibor wrote:

Hi,

Is glusterfs 3.6 is neccessary for ovirt 3.5.1?
I have 3.5.2 at this moment.

3.6 neccessary for upgade ?


If you only want to upgrade the engine to oVirt 3.5.1 - you don't
need to upgrade the gluster nodes. I understand that your gluster
nodes are separate from the virt nodes. That is, you do not have
any cluster enabled with both virt + gluster service.

glusterfs 3.6 is not necessary to upgrade engine to oVirt 3.5.1.
It would only be required if you were upgrading the cluster level
(the compatibility version) which in turn would require newer vdsm
versions and gluster versions. But in your case, I think you
already have your cluster running with glusterfs 3.5.2. What is
the cluster compatibility version?


I'm worried what will happed with my gluster volumes after the
upgrade... :(

Plese help me.

Also, how can I upgrade the nodes? Need i'm to a "yum upgrade"
on the compute nodes?


By upgrade the nodes - you mean only the compute nodes? yum
upgrade on each node should work.
You can also move the Host to maintenance mode and click on
Reinstall or upgrade from engine.



Thanks.

Tibor




Hi,

No, I have only one replica beetwen two nodes.
The third node is only a host for vms and that using this
replica.

Thanks
Tibor





On 01/22/2015 08:03 PM, Demeter Tibor wrote:

Hello,

I have an ovirt 3.5.0 cluster with three nodes and
we using glusterfs for serving backend storage for
VM-s. Glusterfs are on same servers with ovirt.
We have Gluster 3.5.1 on all of nodes.

How can I upgrade to ovirt 3.5.1 with new
glusterfs? I don't want data inconsistency/data
loss/split brains?


Do you have a replica 3 volume setup across these 3 nodes?

Glusterfs supports rolling upgrade for replica volumes.
But there are some issues to upgrade from 3.5 to 3.6
versions (depending on version used)
Adding gluster-users & Pranith for recommended procedures.


thanks
sahina


Need I update gluster version to the latest?
How I need to upgrade the nodes ?  Is there any
whitepaper about this?


Thanks in advance,


Tibor



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster services won't start any more

2015-03-12 Thread Sahina Bose

Did you upgrade glusterfs on the node?
Looks like there's some problem with your volume file?

[Adding gluster-users for further help]

On 03/12/2015 03:57 PM, RASTELLI Alessandro wrote:

Hi,
tonight - without any apparent reason - the /var/log/gluster directory filled 
up the disk space on one node.
I shut down the services, cleaned the logs, and rebooted, but the services won't start any more.

glusterd log says:
[2015-03-12 09:08:14.919478] I [MSGID: 100030] [glusterfsd.c:2018:main] 
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.6.2 (args: 
/usr/sbin/glusterd --pid-file=/var/run/glusterd.pid)
[2015-03-12 09:08:14.935111] I [glusterd.c:1214:init] 0-management: Maximum 
allowed open file descriptors set to 65536
[2015-03-12 09:08:14.935142] I [glusterd.c:1259:init] 0-management: Using 
/var/lib/glusterd as working directory
[2015-03-12 09:08:14.953202] W [rdma.c:4221:__gf_rdma_ctx_create] 
0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
[2015-03-12 09:08:14.953221] E [rdma.c:4519:init] 0-rdma.management: Failed to 
initialize IB Device
[2015-03-12 09:08:14.953229] E [rpc-transport.c:333:rpc_transport_load] 
0-rpc-transport: 'rdma' initialization failed
[2015-03-12 09:08:14.953280] W [rpcsvc.c:1524:rpcsvc_transport_create] 
0-rpc-service: cannot create listener, initing the transport failed
[2015-03-12 09:08:14.956004] I [glusterd.c:413:glusterd_check_gsync_present] 
0-glusterd: geo-replication module not installed in the system
[2015-03-12 09:08:14.958341] I 
[glusterd-store.c:2063:glusterd_restore_op_version] 0-management: Detected new 
install. Setting op-version to maximum : 30600
[2015-03-12 09:08:15.166709] E [xlator.c:425:xlator_init] 0-management: 
Initialization of volume 'management' failed, review your volfile again
[2015-03-12 09:08:15.166729] E [graph.c:322:glusterfs_graph_init] 0-management: 
initializing translator failed
[2015-03-12 09:08:15.166737] E [graph.c:525:glusterfs_graph_activate] 0-graph: 
init failed
[2015-03-12 09:08:15.166987] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: 
received signum (0), shutting down

Can you please help?
Thank you

Alessandro
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt GusterFS assistance

2015-03-23 Thread Sahina Bose

What is the type of volume that you've created? Is it a replicate volume?

# gluster volume info - should give you this information

If you're replicating the volume across 3 nodes, even when one of the 
server goes down, your storage domain should still be UP.


thanks
sahina
On 03/23/2015 02:10 PM, Jonathan Mathews wrote:
Hi, I am trying to set up Ovirt, Glusterfs, and VM servers. I have 
followed examples on setting up Ovirt and they have helped me so far, 
but not to the end point that I am looking for.

The web sites are:
http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/09/2012-lpc-virt-storage-virt-kvm-rao.pdf

I am running 3 HP micro servers and 2 HP DL360 G5
The 3 micro servers are my glusterfs storage and have been provisioned 
for virt storage.

The 2 DL360 are my processing machines.

Now my 3 gluster hosts are in one cluster, the volume is in up status 
and has been provisioned for Virt Storage. But the problem is that my 
mount point is directed to one server, so when that server goes down, 
the volume storage domain goes down. I am not sure whether there is a 
way of mounting it by a "volume identity", so when a server goes down 
the storage domain stays up.


With my 2 processing hosts, I have them in one cluster, but I have not 
gotten anywhere with this, as I want the virtual machines to use the 
gluster volume as storage but use the processing hosts' hardware for 
processing power.


I would appreciate any assistance.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 3.5 problem adding GlusterFS servers

2015-03-23 Thread Sahina Bose

Is the ovirt repo configured on the 2 gluster servers as well?
ovirt-engine requires vdsm-gluster to communicate with gluster. Please 
check whether vdsm-gluster can be installed on these servers.
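
For example, on each of the gluster servers (assuming the oVirt release repo
is enabled there), a quick check would be:

yum install vdsm-gluster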


On 03/23/2015 10:19 PM, wodel youchi wrote:

Hi,

I have Ovirt 3.5 (oVirt Engine Version: 3.5.1.1-1.fc20), hosted engine on 
Fedora 20 (latest updates),


and I have two glusterfs servers (glusterfs-3.6.2-1), actually two 
GlusterFS VMs on another machine on CentOS 7 x64 from  (latest updates), 
using this repository: 
http://download.gluster.org/pub/gluster/glusterfs/LATEST/


I've created a new GlusterFS cluster in Ovirt and tried to import the 
two hosts, without success.

I am getting this error

Host gluster1.domain.tld installation failed. Command returned failure 
code 1 during SSH session 'root@192.168.1.72 '.



and from the host deploy on the engine

2015-03-23 17:35:53 DEBUG otopi.context context._executeMethod:152 
method exception

Traceback (most recent call last):
  File "/tmp/ovirt-ZZd4JGVWVI/pythonlib/otopi/context.py", line 142, 
in _executeMethod

method['method']()
  File 
"/tmp/ovirt-ZZd4JGVWVI/otopi-plugins/ovirt-host-deploy/gluster/packages.py", 
line 66, in _validation

'Cannot locate gluster packages, '
RuntimeError: Cannot locate gluster packages, possible cause is 
incorrect channels
2015-03-23 17:35:53 ERROR otopi.context context._executeMethod:161 
Failed to execute stage 'Setup validation': Cannot locate gluster 
packages, possible cause is incorrect channels
2015-03-23 17:35:53 DEBUG otopi.context context.dumpEnvironment:490 
ENVIRONMENT DUMP - BEGIN
2015-03-23 17:35:53 DEBUG otopi.context context.dumpEnvironment:500 
ENV BASE/error=bool:'True'
2015-03-23 17:35:53 DEBUG otopi.context context.dumpEnvironment:500 
ENV BASE/exceptionInfo=list:'[(, 
RuntimeError('Cannot locate gluster packages, possible cause is 
incorrect channels',), )]'



GlusterFS is running on the two VMs, and I did configure a volume and 
change the uid/gid to 36.

So what's the problem?

thanks in advance.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt GlusterFS assistance

2015-03-24 Thread Sahina Bose


On 03/24/2015 12:13 PM, Jonathan Mathews wrote:
The gluster volume is raid 0, replicated over 3 nodes. Yes, you are 
correct that when one server goes down the volume stays up, but the 
storage domain goes down, and that is where I need to find a volume 
identity to mount it by in the storage domain.


Aah, ok. Maybe the "backupvolfile-server" mount option would help you?

[+ Allon] for storage domain issue.




On Mon, Mar 23, 2015 at 12:02 PM, Sahina Bose <sab...@redhat.com> wrote:


What is the type of volume that you've created? Is it a replicate
volume?

# gluster volume info - should give you this information

If you're replicating the volume across 3 nodes, even when one of
the servers goes down, your storage domain should still be UP.

thanks
sahina

On 03/23/2015 02:10 PM, Jonathan Mathews wrote:

Hi I am trying to setup an Ovirt, Glusterfs, VM servers. I have
followed examples on setting up Ovirt and they have helped me so
far, but not the end point that I am looking for.
The web sites are:
http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/

http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/09/2012-lpc-virt-storage-virt-kvm-rao.pdf

I am running 3 HP micro servers and 2 HP DL360 G5
The 3 micro servers are my glusterfs storage and have been
provisioned for virt storage.
The 2 DL360 are my processing machines.

Now my 3 gluster hosts are in one cluster, the volume is in up
status and has been provisioned for Virt Storage. But the problem
is that my mount point is directed to one server, so when that
server goes down, the volume storage domain goes down. I am not
sure whether there is a way of mounting it by a "volume
identity", so when a server goes down the storage domain stays up.

With my 2 processing hosts, I have them in one cluster, but I
have not gotten any where with this, as I want the Virtual
machines to use the gluster volume as storage but use the
processing hosts hardware for processing power.

I would appreciate any assistance.


___
Users mailing list
Users@ovirt.org  <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt GlusterFS assistance

2015-03-24 Thread Sahina Bose

[+ ovirt-users]
On 03/24/2015 01:28 PM, Jonathan Mathews wrote:
Please note you have not been helpful at all: I have tried many things 
and have not been able to solve the issue. Please also realize that I 
am trying to do this through the oVirt front end.


You can provide the mount options from the oVirt front end while 
creating the storage domain.
I'm assuming you selected Storage type as "Data/Posix compliant FS" or 
"Data/GlusterFS". There is a "Mount Options" field where you can specify 
"backupvolfile-server=<server name>". These are the other nodes on 
which the volume is replicated. So if the primary server is down, it 
will try to use the other servers to get the volume file. This is 
useful only when the virt server is first trying to mount the gluster 
volume.
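
As a rough illustration (gluster1/gluster2 are made-up host names, and 
the option syntax should be checked against your gluster version), the 
storage domain could look like:

  Path:          gluster1:/vmstore
  Mount Options: backupvolfile-server=gluster2

  # manual equivalent, run from a virt host, to verify the option behaves as expected
  mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/vmstore /mnt/test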


Ideally, if your virt hosts are UP, your storage domain should stay UP 
when one of the gluster servers goes down (even without the 
backupvolfile-server option).


I'm not very familiar with the oVirt storage domain side, so someone 
from the oVirt storage team should be able to guide you further.





On Tue, Mar 24, 2015 at 9:50 AM, Sahina Bose <sab...@redhat.com> wrote:



On 03/24/2015 12:13 PM, Jonathan Mathews wrote:

the gluster volume is raid 0 replicated over 3 nodes and yes you
are correct when one server goes down the volume stays up, but
the storage domain goes down and that is where I need to find a
volume identity to mount it by in storage domain.


Aah, ok. Maybe the "backupvolfile-server" mount option would
help you?

[+ Allon] for storage domain issue.




On Mon, Mar 23, 2015 at 12:02 PM, Sahina Bose <sab...@redhat.com> wrote:

What is the type of volume that you've created? Is it a
replicate volume?

# gluster volume info - should give you this information

If you're replicating the volume across 3 nodes, even when
one of the server goes down, your storage domain should still
be UP.

thanks
sahina

On 03/23/2015 02:10 PM, Jonathan Mathews wrote:

Hi I am trying to setup an Ovirt, Glusterfs, VM servers. I
have followed examples on setting up Ovirt and they have
helped me so far, but not the end point that I am looking for.
The web sites are:
http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/

http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/09/2012-lpc-virt-storage-virt-kvm-rao.pdf

I am running 3 HP micro servers and 2 HP DL360 G5
The 3 micro servers are my glusterfs storage and have been
provisioned for virt storage.
The 2 DL360 are my processing machines.

Now my 3 gluster hosts are in one cluster, the volume is in
up status and has been provisioned for Virt Storage. But the
problem is that my mount point is directed to one server, so
when that server goes down, the volume storage domain goes
down. I am not sure whether there is a way of mounting it by
a "volume identity", so when a server goes down the storage
domain stays up.

With my 2 processing hosts, I have them in one cluster, but
I have not gotten any where with this, as I want the Virtual
machines to use the gluster volume as storage but use the
processing hosts hardware for processing power.

I would appreciate any assistance.


___
Users mailing list
Users@ovirt.org  <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] DB error on fresh ovirt install

2013-02-20 Thread Sahina Bose
We ran into the same issue yesterday. The call to back up the database 
seems to be causing this:
engine_setup.py still uses the old method call and not the one changed 
in http://gerrit.ovirt.org/10548.


On 02/20/2013 04:25 PM, Eli Mesika wrote:


- Original Message -

From: "Dead Horse" 
To: "" 
Sent: Tuesday, February 19, 2013 11:58:01 PM
Subject: [Users] DB error on fresh ovirt install




Normally I upgrade from prior test installs. However for the past two
weeks or so attempting a fresh install yields:

oVirt Engine will be installed using the following configuration:
=
override-httpd-config: yes
http-port: 80
https-port: 443
host-fqdn: ovirtfoo.test.domain
auth-pass: 
org-name: DHC
application-mode: both
default-dc-type: NFS
db-remote-install: local
db-local-pass: 
config-nfs: no
override-firewall: None
Proceed with the configuration listed above? (yes|no): yes

Installing:
Configuring oVirt Engine... [ DONE ]
Configuring JVM... [ DONE ]
Creating CA... [ DONE ]
Updating ovirt-engine service... [ DONE ]
Setting Database Configuration... [ DONE ]
Setting Database Security... [ DONE ]
Upgrading Database Schema... [ ERROR ]
dictionary update sequence element #0 has length 1; 2 is required
Please check log file
/var/log/ovirt-engine/engine-setup_2013_02_19_15_31_02.log for more
information


log attached

Hi,
Seems like a bug in the setup Python script; see
http://www.gossamer-threads.com/lists/python/python/917709
Alex, Moran, can you check please?




- DHC





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Cannot add host - non-responsive.

2013-03-15 Thread Sahina Bose
Maybe you need to set ssl=false as per 
http://www.ovirt.org/Vdsm_Getting_Started
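
Roughly, what that page describes is something like the following - 
please verify the exact keys against the wiki, I'm writing this from memory:

  # on the host, in /etc/vdsm/vdsm.conf
  [vars]
  ssl = false

  # restart vdsm afterwards
  service vdsmd restart

On the engine side there is a matching change (via engine-config, e.g. 
setting SSLEnabled to false and restarting ovirt-engine) so both ends 
agree on unencrypted communication.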


thx
sahina

On 03/15/2013 11:02 AM, snmis...@linux.vnet.ibm.com wrote:


I am trying to add a Fedora 18 host to an engine that was built from 
source. SELinux and firewalld are disabled/stopped on both of the Fedora 
boxes. I can ping between the two machines using IP and FQDN. But when 
I go to add the host, it gets a connection-refused error and goes 
non-responsive. Here are the engine logs.


Any Clues?
Sharad Mishra


2013-03-15 01:21:07,208 INFO [org.ovirt.engine.core.bll.AddVdsCommand] 
(http--0.0.0.0-8700-1) [2ebde801] Running command: AddVdsCommand 
internal: false. Entities affected :  ID: 
99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups
2013-03-15 01:21:07,279 INFO 
[org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (http--0.0.0.0-8700-1) 
Running command: AddVdsSpmIdCommand internal: true. Entities affected 
:  ID: f66bcf41-56bc-4ac9-b702-c5674060a1a3 Type: VDS
2013-03-15 01:21:07,293 INFO 
[org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (http--0.0.0.0-8700-1) 
Lock freed to object EngineLock [exclusiveLocks= key: 
5849b030-626e-47cb-ad90-3ce782d831b3 value: REGISTER_VDS

, sharedLocks= ]
2013-03-15 01:21:07,297 ERROR 
[org.ovirt.engine.core.vdsbroker.ResourceManager] 
(http--0.0.0.0-8700-1) Cannot get vdsManager for 
vdsid=f66bcf41-56bc-4ac9-b702-c5674060a1a3
2013-03-15 01:21:07,313 INFO 
[org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand] 
(http--0.0.0.0-8700-1) START, RemoveVdsVDSCommand(HostName = ovirt2, 
HostId = f66bcf41-56bc-4ac9-b702-c5674060a1a3), log id: 4a95fb09
2013-03-15 01:21:07,314 ERROR 
[org.ovirt.engine.core.vdsbroker.ResourceManager] 
(http--0.0.0.0-8700-1) Cannot get vdsManager for 
vdsid=f66bcf41-56bc-4ac9-b702-c5674060a1a3
2013-03-15 01:21:07,315 INFO 
[org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand] 
(http--0.0.0.0-8700-1) FINISH, RemoveVdsVDSCommand, log id: 4a95fb09
2013-03-15 01:21:07,317 ERROR 
[org.ovirt.engine.core.vdsbroker.ResourceManager] 
(http--0.0.0.0-8700-1) Cannot get vdsManager for 
vdsid=f66bcf41-56bc-4ac9-b702-c5674060a1a3
2013-03-15 01:21:07,333 INFO 
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] 
(http--0.0.0.0-8700-1) START, AddVdsVDSCommand(HostName = ovirt2, 
HostId = f66bcf41-56bc-4ac9-b702-c5674060a1a3), log id: 2621f1c9
2013-03-15 01:21:07,334 INFO 
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] 
(http--0.0.0.0-8700-1) AddVds - entered , starting logic to add VDS 
f66bcf41-56bc-4ac9-b702-c5674060a1a3
2013-03-15 01:21:07,350 INFO 
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] 
(http--0.0.0.0-8700-1) AddVds - VDS 
f66bcf41-56bc-4ac9-b702-c5674060a1a3 was added, will try to add it to 
the resource manager
2013-03-15 01:21:07,354 INFO 
[org.ovirt.engine.core.vdsbroker.VdsManager] (http--0.0.0.0-8700-1) 
Entered VdsManager constructor
2013-03-15 01:21:07,360 INFO 
[org.ovirt.engine.core.vdsbroker.VdsManager] (http--0.0.0.0-8700-1) 
Initialize vdsBroker (9.27.28.20,54,321)
2013-03-15 01:21:07,398 INFO 
[org.ovirt.engine.core.vdsbroker.ResourceManager] 
(http--0.0.0.0-8700-1) VDS f66bcf41-56bc-4ac9-b702-c5674060a1a3 was 
added to the Resource Manager
2013-03-15 01:21:07,399 INFO 
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] 
(http--0.0.0.0-8700-1) FINISH, AddVdsVDSCommand, log id: 2621f1c9
2013-03-15 01:21:10,499 WARN 
[org.ovirt.engine.core.vdsbroker.VdsManager] 
(DefaultQuartzScheduler_Worker-8) Failed to refresh VDS , vds = 
f66bcf41-56bc-4ac9-b702-c5674060a1a3 : ovirt2, VDS Network Error, 
continuing.

java.net.ConnectException: Connection refused
2013-03-15 01:21:13,529 WARN 
[org.ovirt.engine.core.vdsbroker.VdsManager] 
(DefaultQuartzScheduler_Worker-10) Failed to refresh VDS , vds = 
f66bcf41-56bc-4ac9-b702-c5674060a1a3 : ovirt2, VDS Network Error, 
continuing.

java.net.ConnectException: Connection refused
2013-03-15 01:21:16,556 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsManager] 
(DefaultQuartzScheduler_Worker-4) Server failed to respond, vds_id = 
f66bcf41-56bc-4ac9-b702-c5674060a1a3, vds_name = ovirt2, error = 
java.net.ConnectException: Connection refused
2013-03-15 01:21:16,577 INFO 
[org.ovirt.engine.core.bll.VdsEventListener] (pool-10-thread-8) 
ResourceManager::vdsNotResponding entered for Host 
f66bcf41-56bc-4ac9-b702-c5674060a1a3, 9.27.28.20
2013-03-15 01:21:16,644 WARN 
[org.ovirt.engine.core.bll.VdsNotRespondingTreatmentCommand] 
(pool-10-thread-8) [734218b6] CanDoAction of action 
VdsNotRespondingTreatment failed. Reasons:VDS_FENCE_DISABLED


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Package installation error

2013-03-19 Thread Sahina Bose

You are right - Gluster 3.4 is only required to manage gluster clusters.

Currently the question asked at setup is this:

"The engine can be configured to present the UI in three different 
application modes. virt [Manage virtualization only], gluster [Manage 
gluster storage only],
and both [Manage virtualization as well as gluster storage]" where both 
is the default.


If this is confusing to the user, we can change this message. Suggestions?


On 03/19/2013 01:41 PM, Dave Neary wrote:

Hi,

On 03/19/2013 08:16 AM, Alon Bar-Lev wrote:

Now I am confused
Do you or don't you need vdsm-gluster on your system?


Allow me to clarify.

There have been several messages from users since the oVirt 3.2 
release asking why they need Gluster 3.4 pre-releases to run oVirt.


My understanding is that you don't need Gluster 3.4 unless you want to 
manage a Gluster cluster with oVirt.


So my question is: are we sure that we are not leading users wrong, 
and confusing them during the installation & set-up process?


Thanks,
Dave.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Unable to add data/NFS domain.

2013-05-16 Thread Sahina Bose

Hi Sharad,

Are you running ovirt-engine in a dev environment? There are some errors 
related to infinispan in the engine logs; I'm not sure if this is the 
cause of your error, though. (Refer to Michael Kublin's message - 
http://lists.ovirt.org/pipermail/engine-devel/2013-April/004344.html - 
for infinispan-related changes.)


Also, from the vdsm logs - someone else could shed light on this:
  File "/usr/share/vdsm/storage/clusterlock.py", line 187, in acquireHostId
raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: 
('47c26e8e-c28f-451d-8a5d-9ae9154b5d08', SanlockException(19, 'Sanlock 
lockspace add failure', 'No such device'))
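
A few things that may be worth checking on the host for that sanlock 
error (service and log names assuming a Fedora/EL host of that era):

  # is sanlock running at all?
  service sanlock status
  # sanlock's own log usually says why the lockspace add failed
  tail -n 50 /var/log/sanlock.log
  # the wdmd watchdog service that sanlock depends on
  service wdmd status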


-sahina


On 05/17/2013 06:38 AM, snmis...@linux.vnet.ibm.com wrote:


After filling in the required entries in the "New Domain" dialog for a 
"Data/NFS" domain, I hit "OK" and get an error dialog - "Error while 
executing action. Add Storage Connection: General Exception".


Attached are the engine and vdsm logs. I think it's not the code but 
something to do with my environment.


-Sharad


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] gluster performance oVirt 3.4

2014-05-13 Thread Sahina Bose


On 05/13/2014 07:27 PM, Vadims Korsaks wrote:

Quoting Humble Devassy Chirammal:
   
  |

  | Quoting Vijay Bellur:
  | >  On 05/11/2014 02:04 AM, Vadims Korsaks wrote:
  | >  > HI!
  | >  >
  | >  > Created 2 node setup with oVirt 3.4 and
  | CentOS 6.5, for storage created
  | >  > 2 node replicated gluster (3.5) fs on same
  | hosts with oVirt.
  | >  > mount looks like this:
  | >  > 127.0.0.1:/gluster01 on
  | >  >
  |

/rhev/data-center/mnt/glusterSD/127.0.0.1:_gluster01

  | type fuse.glusterfs
  | >  >
  |

(rw,default_permissions,allow_other,max_read=131072)

  | >  >
  | >  > when i making gluster test with dd,

something

  | like
  | >  > dd if=/dev/zero bs=1M count=2
  | >  >
  |

of=/rhev/data-center/mnt/glusterSD/127.0.0.1\:_gluster01/kaka

  | >  > i'm gettting speed ~ 110 MB/s, so this is
  | 1Gbps speed of ethernet adapter
  | >  >
  | >  > but with in VM created in oVirt speed is
  | lower than 20 MB/s
  | >  >
  | >  > why there is so huge difference?
  | >  > how can improve VMs disks speed?
  | >  >
  | >
  | >  What are your gluster volume settings?

Have you

  | applied the following
  | >  performance tunables in gluster's virt

profile:

  | >
  | >  eager-lock=enable
  | >  remote-dio=enable
  | >
  | >  Regards,
  | >  Vijay
  | >
  | setting were:
  | [root@centos155 ~]# gluster volume info gluster01
  |
  | Volume Name: gluster01
  | Type: Replicate
  | Volume ID: 436edaa3-ac8b-421f-aa35-68b5bd7064b6
  | Status: Started
  | Number of Bricks: 1 x 2 = 2
  | Transport-type: tcp
  | Bricks:
  | Brick1: 10.2.75.152:/mnt/gluster01/brick
  | Brick2: 10.2.75.155:/mnt/gluster01/brick
  | Options Reconfigured:
  | storage.owner-gid: 36
  | storage.owner-uid: 36
  |
  |
  | add your settings settings now it looks
  |
  | [root@centos155 ~]# gluster volume info gluster01
  |
  | Volume Name: gluster01
  | Type: Replicate
  | Volume ID: 436edaa3-ac8b-421f-aa35-68b5bd7064b6
  | Status: Started
  | Number of Bricks: 1 x 2 = 2
  | Transport-type: tcp
  | Bricks:
  | Brick1: 10.2.75.152:/mnt/gluster01/brick
  | Brick2: 10.2.75.155:/mnt/gluster01/brick
  | Options Reconfigured:
  | network.remote-dio: enable
  | cluster.eager-lock: enable
  | storage.owner-gid: 36
  | storage.owner-uid: 36
  |
  |
  | but this didn't affected performace in any

big way

  | should hosts to be restarted?
  |
  
  glusterfs storage domain configuration GUI got

""Optimize for virt. store" option which have to
be enabled when configuring for virt store.
  


Ref#http://www.ovirt.org/Features/GlusterFS_Storage_Domain
  
  If the configuration is manual,  you need to

set the group to 'virt' as shown below
  
  #gluster volume set VOLNAME group virt
  
  This will enable below options on gluster volume :
  
  quick-read=off

  read-ahead=off
  io-cache=off
  stat-prefetch=off
  eager-lock=enable
  remote-dio=on
  
  
  Can you please make sure group has set properly ?
  
  Also, invoke  "dd" with oflag=direct option and

check whether it helps ..
  
  
  --Humble

tnx a lot!! Now it's much better - from the VM I can
get dd to ~ 60 MB/s.
This is still ~2x lower than from the host, but 3x
better than it was before :)

BTW, I could not find the GUI "Optimize for virt.
store" option in oVirt 3.5



The option "Optimize for Virt store" is available when you select a 
volume in Ovirt - both as a right click menu option as well as a button 
in the top sub navigation bar.


You also can check this option while creating a gluster volume using the 
oVirt GUI
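
If you prefer to do (or verify) the same thing from the command line, 
the CLI equivalent is roughly:

  # apply the virt profile to the volume (replace VOLNAME with your volume, e.g. gluster01)
  gluster volume set VOLNAME group virt
  # confirm the options (remote-dio, eager-lock, etc.) show up under "Options Reconfigured"
  gluster volume info VOLNAME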





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] gluster performance oVirt 3.4

2014-05-14 Thread Sahina Bose


On 05/14/2014 02:36 PM, Vadims Korsaks wrote:

Quoting Sahina Bose:
  
  On 05/13/2014 07:27 PM, Vadims Korsaks wrote:

  > Quoting Humble Devassy Chirammal:
  >>
  >> |
  >> | Quoting Vijay Bellur:
  >> | > On 05/11/2014 02:04 AM, Vadims Korsaks

wrote:

  >> | > > HI!
  >> | > >
  >> | > > Created 2 node setup with oVirt 3.4 and
  >> | CentOS 6.5, for storage created
  >> | > > 2 node replicated gluster (3.5) fs on same
  >> | hosts with oVirt.
  >> | > > mount looks like this:
  >> | > > 127.0.0.1:/gluster01 on
  >> | > >
  >> |
  >

/rhev/data-center/mnt/glusterSD/127.0.0.1:_gluster01

  >> | type fuse.glusterfs
  >> | > >
  >> |
  >

(rw,default_permissions,allow_other,max_read=131072)

  >> | > >
  >> | > > when i making gluster test with dd,
  > something
  >> | like
  >> | > > dd if=/dev/zero bs=1M count=2
  >> | > >
  >> |
  >

of=/rhev/data-center/mnt/glusterSD/127.0.0.1\:_gluster01/kaka

  >> | > > i'm gettting speed ~ 110 MB/s, so this is
  >> | 1Gbps speed of ethernet adapter
  >> | > >
  >> | > > but with in VM created in oVirt speed is
  >> | lower than 20 MB/s
  >> | > >
  >> | > > why there is so huge difference?
  >> | > > how can improve VMs disks speed?
  >> | > >
  >> | >
  >> | > What are your gluster volume settings?
  > Have you
  >> | applied the following
  >> | > performance tunables in gluster's virt
  > profile:
  >> | >
  >> | > eager-lock=enable
  >> | > remote-dio=enable
  >> | >
  >> | > Regards,
  >> | > Vijay
  >> | >
  >> | setting were:
  >> | [root@centos155 ~]# gluster volume info

gluster01

  >> |
  >> | Volume Name: gluster01
  >> | Type: Replicate
  >> | Volume ID:

436edaa3-ac8b-421f-aa35-68b5bd7064b6

  >> | Status: Started
  >> | Number of Bricks: 1 x 2 = 2
  >> | Transport-type: tcp
  >> | Bricks:
  >> | Brick1: 10.2.75.152:/mnt/gluster01/brick
  >> | Brick2: 10.2.75.155:/mnt/gluster01/brick
  >> | Options Reconfigured:
  >> | storage.owner-gid: 36
  >> | storage.owner-uid: 36
  >> |
  >> |
  >> | add your settings settings now it looks
  >> |
  >> | [root@centos155 ~]# gluster volume info

gluster01

  >> |
  >> | Volume Name: gluster01
  >> | Type: Replicate
  >> | Volume ID:

436edaa3-ac8b-421f-aa35-68b5bd7064b6

  >> | Status: Started
  >> | Number of Bricks: 1 x 2 = 2
  >> | Transport-type: tcp
  >> | Bricks:
  >> | Brick1: 10.2.75.152:/mnt/gluster01/brick
  >> | Brick2: 10.2.75.155:/mnt/gluster01/brick
  >> | Options Reconfigured:
  >> | network.remote-dio: enable
  >> | cluster.eager-lock: enable
  >> | storage.owner-gid: 36
  >> | storage.owner-uid: 36
  >> |
  >> |
  >> | but this didn't affected performace in any
  > big way
  >> | should hosts to be restarted?
  >> |
  >>
  >> glusterfs storage domain configuration GUI got
  > ""Optimize for virt. store" option which have to
  > be enabled when configuring for virt store.
  >>
  >>
  >

Ref#http://www.ovirt.org/Features/GlusterFS_Storage_Domain

  >>
  >> If the configuration is manual, you need to
  > set the group to 'virt' as shown below
  >>
  >> #gluster volume set VOLNAME group virt
  >>
  >> This will enable below options on gluster

volume :

  >>
  >> quick-read=off
  >> read-ahead=off
  >> io-cache=off
  >> stat-prefetch=off
  >> eager-lock=enable
  >> remote-dio=on
  >>
  >>
  >> Can you please make sure group has set

properly ?

  >>
  >> Also, invoke "dd" with oflag=direct option and
  > check whether it helps ..
  >>
  >>
  >> --Humble
  > tnx a lot!! now it's much better - from VM i can
  > get dd with ~ 60 MB/s
  > this is still ~ x2 lower than from host, but x3
  > better than it was before :)
  >
  > BTW could not found "GUI got ""Optimize for virt.
  > store" option" in oVirt 3.5
  
  
  The option "Optimize for Virt store" is

available when you select a

  volume in Ovirt - both as a right click menu

option as well as a button

  in the top sub navigation bar.
  
  You also can check this option while creating a

gluster volume using the

  oVirt GUI
  
  

I have glusterfs as my master storage, but there
is nothing under Volumes - it is empty. And I can't
create a gluster volume; there is no choice for
Data Center and Volume Cluster.




To be able to manage gluster (i.e., create volumes, add bricks and peer 
probe hosts) from the oVirt UI, the cluster should have the "Enable gluster 
service" option checked. I think you may not have enabled this when you 
created the cluster. To use a gluster volume as a storage domain, this 
option is not required.


Hope this helps.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] gluster performance oVirt 3.4

2014-05-14 Thread Sahina Bose


On 05/14/2014 02:55 PM, Vadims Korsaks wrote:

Citējot "Sahina Bose" :

On 05/14/2014 02:36 PM, Vadims Korsaks wrote:

Quoting Sahina Bose:

On 05/13/2014 07:27 PM, Vadims Korsaks wrote:

Quoting Humble Devassy Chirammal:

Quoting Vijay Bellur:

On 05/11/2014 02:04 AM, Vadims Korsaks

wrote:

HI!

Created 2 node setup with oVirt 3.4 and

CentOS 6.5, for storage created

2 node replicated gluster (3.5) fs on same

hosts with oVirt.

mount looks like this:
127.0.0.1:/gluster01 on


/rhev/data-center/mnt/glusterSD/127.0.0.1:_gluster01

type fuse.glusterfs

(rw,default_permissions,allow_other,max_read=131072)

when i making gluster test with dd,

something

like

dd if=/dev/zero bs=1M count=2


of=/rhev/data-center/mnt/glusterSD/127.0.0.1\:_gluster01/kaka

i'm gettting speed ~ 110 MB/s, so this is

1Gbps speed of ethernet adapter

but with in VM created in oVirt speed is

lower than 20 MB/s

why there is so huge difference?
how can improve VMs disks speed?


What are your gluster volume settings?

Have you

applied the following

performance tunables in gluster's virt

profile:

eager-lock=enable
remote-dio=enable

Regards,
Vijay


setting were:
[root@centos155 ~]# gluster volume info

gluster01

Volume Name: gluster01
Type: Replicate
Volume ID:

436edaa3-ac8b-421f-aa35-68b5bd7064b6

Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.2.75.152:/mnt/gluster01/brick
Brick2: 10.2.75.155:/mnt/gluster01/brick
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36


add your settings settings now it looks

[root@centos155 ~]# gluster volume info

gluster01

Volume Name: gluster01
Type: Replicate
Volume ID:

436edaa3-ac8b-421f-aa35-68b5bd7064b6

Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.2.75.152:/mnt/gluster01/brick
Brick2: 10.2.75.155:/mnt/gluster01/brick
Options Reconfigured:
network.remote-dio: enable
cluster.eager-lock: enable
storage.owner-gid: 36
storage.owner-uid: 36


but this didn't affected performace in any

big way

should hosts to be restarted?


glusterfs storage domain configuration GUI got

""Optimize for virt. store" option which have to
be enabled when configuring for virt store.



Ref#http://www.ovirt.org/Features/GlusterFS_Storage_Domain

If the configuration is manual, you need to

set the group to 'virt' as shown below

#gluster volume set VOLNAME group virt

This will enable below options on gluster

volume :

quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=on


Can you please make sure group has set

properly ?

Also, invoke "dd" with oflag=direct option and

check whether it helps ..


--Humble

tnx a lot!! now it's much better - from VM i can
get dd with ~ 60 MB/s
this is still ~ x2 lower than from host, but x3
better than it was before :)

BTW could not found "GUI got ""Optimize for virt.
store" option" in oVirt 3.5


The option "Optimize for Virt store" is

available when you select a

volume in Ovirt - both as a right click menu

option as well as a button

in the top sub navigation bar.

You also can check this option while creating a

gluster volume using the

oVirt GUI



i have glusterfs as my master storage, but there
is nothing in volumes, empty. And i can't create
gluster volume, there is no choise in Data Center
and Volume Cluster.



To be able to manage gluster (i.e create volume,

add bricks and peer

probe hosts) from Ovirt UI, the cluster should

have the "Enable gluster

service" option checked. I think you may not

have enabled this when you

created the cluster. To use gluster volume as

storage domain, this

option is not required.

Hope this helps.



Ok, i see, tnx a lot.
any chances to change this for Default Cluster,
because i have self hosted engine and just 2 hosts?




You can edit the cluster and change this option. Please make sure that 
vdsm-gluster is installed on the hosts in the cluster, and that a gluster 
peer probe has been done if there is more than one host in the cluster.
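
For example, something along these lines on the hosts (host names are 
placeholders):

  # on every host in the cluster
  yum install -y vdsm-gluster
  # from one host, probe the other host(s) so they form a trusted storage pool
  gluster peer probe host2
  # confirm the pool membership
  gluster peer status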







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] post glusterfs 3.4 -> 3.5 upgrade issue in ovirt (3.4.0-1.fc19): bricks unavailable

2014-05-28 Thread Sahina Bose

Hi Alastair,

This could be a mismatch in the hostname identified in ovirt and gluster.

You could check for any exceptions from GlusterSyncJob in engine.log.

Also, what version of oVirt are you using, and what is the compatibility 
version of your cluster?
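
To check, something like this is usually enough (log path assuming a 
default engine install):

  # on the engine machine - look for GlusterSyncJob exceptions
  grep -i GlusterSyncJob /var/log/ovirt-engine/engine.log | tail -n 50
  # on one of the gluster nodes - compare the peer hostnames here with the
  # names/addresses the hosts were added with in oVirt
  gluster peer status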


On 05/28/2014 12:40 AM, Alastair Neil wrote:
Hi thanks for the reply. Here is an extract from a grep I ran on the 
vdsm log grepping for the volume name vm-store.  It seems to indicate 
the bricks are ONLINE.


I am uncertain how to extract meaningful information from the 
engine.log; can you provide some guidance?


Thanks,

Alastair

Thread-100::DEBUG::2014-05-27
15:01:06,335::BindingXMLRPC::1067::vds::(wrapper) client
[129.174.94.239]::call volumeStatus with ('vm-store', '', '') {}
Thread-100::DEBUG::2014-05-27
15:01:06,356::BindingXMLRPC::1074::vds::(wrapper) return
volumeStatus with {'volumeStatus': {'bricks': [{'status':
'ONLINE', 'brick': 'gluster0:/export/brick0', 'pid': '2675',
'port': '49158', 'hostuuid':
'bcff5245-ea86-4384-a1bf-9219c8be8001'}, {'status': 'ONLINE',
'brick': 'gluster1:/export/brick4/vm-store', 'pid': '2309',
'port': '49158', 'hostuuid':
'54d39ae4-91ae-410b-828c-67031f3d8a68'}], 'nfs': [{'status':
'ONLINE', 'hostname': '129.174.126.56', 'pid': '27012', 'port':
'2049', 'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'},
{'status': 'ONLINE', 'hostname': 'gluster0', 'pid': '12875',
'port': '2049', 'hostuuid':
'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'shd': [{'status':
'ONLINE', 'hostname': '129.174.126.56', 'pid': '27019',
'hostuuid': '54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status':
'ONLINE', 'hostname': 'gluster0', 'pid': '12882', 'hostuuid':
'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'name': 'vm-store'},
'status': {'message': 'Done', 'code': 0}}
Thread-16::DEBUG::2014-05-27
15:01:15,339::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:01:25,381::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:01:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:01:45,465::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:01:55,507::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:02:05,549::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:02:15,590::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:02:25,657::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:02:35,698::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:02:45,740::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:02:55,784::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:03:05,827::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-45

Re: [ovirt-users] post glusterfs 3.4 -> 3.5 upgrade issue in ovirt (3.4.0-1.fc19): bricks unavailable

2014-05-28 Thread Sahina Bose


On 05/28/2014 08:36 PM, Alastair Neil wrote:

I just noticed this in the console and I don't know if it is relevant.

When I look at the "General" tab on the hosts under "GlusterFS 
Version" it shows "N/A".


That's not related. The GlusterFS version in the UI is populated from the 
getVdsCaps output from vdsm - it looks like the vdsm running on your 
gluster node is not returning that?
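
You can check what vdsm is actually reporting directly on the node, for 
example with vdsClient (the -s flag assumes vdsm is running with SSL 
enabled):

  # dump vdsm capabilities on the local host and see whether a gluster version is included
  vdsClient -s 0 getVdsCaps | grep -i gluster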


Could you share the engine.log so that we can look at how the gluster 
status was interpreted and updated? The log from the last 10 minutes 
should do.


thanks!





On 28 May 2014 11:03, Alastair Neil <ajneil.t...@gmail.com> wrote:


ovirt version is 3.4.  I did have a slightly older version of vdsm
on gluster0 but I have updated it and the issue persists.  The
compatibility version on the storage cluster is 3.3.

I checked the logs for GlusterSyncJob notifications and there are
none.






On 28 May 2014 10:19, Sahina Bose <sab...@redhat.com> wrote:

Hi Alastair,

This could be a mismatch in the hostname identified in ovirt
and gluster.

You could check for any exceptions from GlusterSyncJob in
engine.log.

Also, what version of ovirt are you using. And the
compatibility version of your cluster?


On 05/28/2014 12:40 AM, Alastair Neil wrote:

Hi thanks for the reply. Here is an extract from a grep I ran
on the vdsm log grepping for the volume name vm-store.  It
seems to indicate the bricks are ONLINE.

I am uncertain how to extract meaningful information from the
engine.log can you provide some guidance?

Thanks,

Alastair

Thread-100::DEBUG::2014-05-27
15:01:06,335::BindingXMLRPC::1067::vds::(wrapper) client
[129.174.94.239]::call volumeStatus with ('vm-store', '',
'') {}
Thread-100::DEBUG::2014-05-27
15:01:06,356::BindingXMLRPC::1074::vds::(wrapper) return
volumeStatus with {'volumeStatus': {'bricks': [{'status':
'ONLINE', 'brick': 'gluster0:/export/brick0', 'pid':
'2675', 'port': '49158', 'hostuuid':
'bcff5245-ea86-4384-a1bf-9219c8be8001'}, {'status':
'ONLINE', 'brick': 'gluster1:/export/brick4/vm-store',
'pid': '2309', 'port': '49158', 'hostuuid':
'54d39ae4-91ae-410b-828c-67031f3d8a68'}], 'nfs':
[{'status': 'ONLINE', 'hostname': '129.174.126.56',
'pid': '27012', 'port': '2049', 'hostuuid':
'54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status':
'ONLINE', 'hostname': 'gluster0', 'pid': '12875', 'port':
'2049', 'hostuuid':
'bcff5245-ea86-4384-a1bf-9219c8be8001'}], 'shd':
[{'status': 'ONLINE', 'hostname': '129.174.126.56',
'pid': '27019', 'hostuuid':
'54d39ae4-91ae-410b-828c-67031f3d8a68'}, {'status':
'ONLINE', 'hostname': 'gluster0', 'pid': '12882',
'hostuuid': 'bcff5245-ea86-4384-a1bf-9219c8be8001'}],
'name': 'vm-store'}, 'status': {'message': 'Done',
'code': 0}}
Thread-16::DEBUG::2014-05-27
15:01:15,339::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:01:25,381::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:01:35,423::fileSD::225::Storage.Misc.excCmd::(getReadDelay)
'/bin/dd iflag=direct

if=/rhev/data-center/mnt/glusterSD/gluster0:vm-store/6d637c7f-a4ab-4510-a0d9-63a04c55d6d8/dom_md/metadata
bs=4096 count=1' (cwd None)
Thread-16::DEBUG::2014-05-27
15:01:45,465::fileSD::225::Storage.Misc.excCmd::(getReadDelay)

Re: [ovirt-users] Issue / strange behavior with GlusterFS nodes

2014-06-06 Thread Sahina Bose


On 06/06/2014 02:03 PM, Joop wrote:

René Koch wrote:

Hi,

I have the following oVirt setup:
2 storage hosts with GlusterFS (replicated) - will be extended to 4 
hosts (distributed-replicated)

4 virtualization hosts
1 datacenter
2 clusters (1 only for virtualization, 1 only for GlusterFS)

The strange thing is now:
One of the 2 GlusterFS hosts (these hosts and the Gluster volumes are 
managed by oVirt) fails to mount one of the storage domains, but I 
can mount it manually.


So the first question is: why should these hosts mount the storage 
domains? They are in a cluster without virtualization, so it's not 
possible to run virtual machines on these hosts. I think they try to 
mount the storage domains because the storage domains are attached to 
the data center, but imho the cluster should be checked as well if 
virtualization is enabled.


So I was thinking of moving the GlusterFS cluster to a new data 
center without storage domains, but this is impossible as Gluster 
volumes are created. The only possibility would be to remove the 
Gluster volume.


So do you have any idea how I can solve this issue? I mean the 
Gluster volume is working fine, but it's really disturbing that I get 
an error message every 5 minutes.


Just to let you know you're not the only one with this problem. I have 
had it too a couple of times and still don't know the right 
incantation to get rid of it. I have the same setup: 2 clusters, 
storage and virt, 2 nodes each, one datacenter. Currently on 3.4.1 and 
had the problem on the 3.2/3 series. For me it seemed related to 
upgrading but that could be just chance. It might as well be related 
to a complete shutdown/restart of all machines ( this test setup is 
not behind a ups which gives some rare opportunities to troubleshoot 
problems :-) )



Yes, I would think this is a bug. For a gluster-only cluster, the 
storage domain is not relevant.


Could you log a bug with the error attached, and we will try to fix 
this? If it's a simple enough fix, we could backport it to 3.4.


thanks!
sahina



Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Feature GlusterVolumeSnapshots not in 3.5?

2014-06-11 Thread Sahina Bose


On 06/11/2014 01:19 PM, Jorick Astrego wrote:

Hi again,

After reading up on all the backup possibilities of our ovirt cluster 
(we now do in VM backup with our traditional backup software), I came 
across http://www.ovirt.org/Features/GlusterVolumeSnapshots


*Name*: Gluster Volume Snapshot
*Modules*: engine
*Target version*: 3.5
*Status*: Not Started
*Last updated*: 2014-01-21 by Shtripat

It was originally planned for 3.5 but I haven't seen it on the oVirt 
Planning & Tracking document on google docs. It looks like a great 
feature, is this still planned somewhere in the future?



Being able to manage gluster volume snapshots is on our to-do list. 
We're currently prioritizing the backlog to see which of the gluster 
features (managing/monitoring geo-replication, quota, snapshots) need 
to be added to the next release.


Meanwhile, you could use the gluster CLI to create a volume snapshot.
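
For example (this needs a gluster release with snapshot support, 3.6 or 
later, and thinly provisioned LVM bricks; the names are placeholders):

  # create a snapshot of the volume
  gluster snapshot create snap1 VOLNAME
  # list and inspect snapshots
  gluster snapshot list VOLNAME
  gluster snapshot info snap1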




Kind regards,

Jorick Astrego
Netbulae BV


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host installation failed. Unexpected connection termination.

2014-07-28 Thread Sahina Bose


 Original message 
From: Itamar Heim  
Date: 2014/07/27  08:28  (GMT-05:00) 
To: Punit Dambiwal ,users@ovirt.org,Dan Kenigsberg 
,aha...@redhat.com,Sven Kieske ,Michal 
Skrivanek ,Antoni Segura Puimedon 
,Frantisek Kobzik ,Sahina Bose 
 
Subject: Re: Host installation failed. Unexpected connection termination. 
 
On 07/26/2014 08:25 PM, Punit Dambiwal wrote:
> Hi,
>
> I have successfully installed ovirt engine 3.4.3...but when i try to add
> host in the cluster it failed with the following error :-
>
> Host compute1 installation failed. Unexpected connection termination.
>
> Engine Log :-
>
> 
> 2014-07-26 17:18:37,140 INFO
> [org.ovirt.engine.core.bll.InstallerMessages]
> (org.ovirt.thread.pool-6-thread-20) [40582e53] Installation
> 43.252.176.13 <http://43.252.176.13>: Connected to host 43.252.176.13
> with SSH key fingerprint: 1e:38:88:c3:20:0f:cb:08:6c:ae:cb:87:12:c1:01:50

what do you see in engine Events tab (GUI)?

> 2014-07-26 17:18:37,165 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (org.ovirt.thread.pool-6-thread-20) [40582e53] Correlation ID: 40582e53,
> Call Stack: null, Custom Event ID: -1, Message: Installing Host
> compute1. Connected to host 43.252.176.13 with SSH key fingerprint:
> 1e:38:88:c3:20:0f:cb:08:6c:ae:cb:87:12:c1:01:50.
> 2014-07-26 17:18:37,194 INFO  [org.ovirt.engine.core.bll.VdsDeploy]
> (org.ovirt.thread.pool-6-thread-20) [40582e53] Installation of
> 43.252.176.13. Executing command via SSH umask 0077; MYTMP="$(mktemp -t
> ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1;
> rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; rm -fr "${MYTMP}" && mkdir
> "${MYTMP}" && tar --warning=no-timestamp -C "${MYTMP}" -x &&
> "${MYTMP}"/setup DIALOG/dialect=str:machine
> DIALOG/customization=bool:True <
> /var/cache/ovirt-engine/ovirt-host-deploy.tar
> 2014-07-26 17:18:37,201 INFO
> [org.ovirt.engine.core.utils.ssh.SSHDialog]
> (org.ovirt.thread.pool-6-thread-20) SSH execute root@43.252.176.13
> <mailto:root@43.252.176.13> 'umask 0077; MYTMP="$(mktemp -t
> ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1;
> rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; rm -fr "${MYTMP}" && mkdir
> "${MYTMP}" && tar --warning=no-timestamp -C "${MYTMP}" -x &&
> "${MYTMP}"/setup DIALOG/dialect=str:machine DIALOG/customization=bool:True'
> 2014-07-26 17:18:39,871 INFO
> [org.ovirt.engine.core.vdsbroker.VdsManager]
> (DefaultQuartzScheduler_Worker-33) Initializing Host: compute1
> 2014-07-26 17:19:11,798 INFO
> [org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService]
> (DefaultQuartzScheduler_Worker-18) No up server in cluster
> 2014-07-26 17:19:11,799 ERROR
> [org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob]
> (DefaultQuartzScheduler_Worker-18) Error updating tasks from CLI:
> org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException:
> NO_UP_SERVER_FOUND (Failed with error NO_UP_SERVER_FOUND and code 7000)
>  at
> org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService.getTaskListForCluster(GlusterTasksService.java:30)
> [bll.jar:]
>  at
> org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob.updateGlusterAsyncTasks(GlusterTasksSyncJob.java:84)
> [bll.jar:]
>  at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
> [:1.7.0_65]
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [rt.jar:1.7.0_65]
>  at java.lang.reflect.Method.invoke(Method.java:606)
> [rt.jar:1.7.0_65]
>  at
> org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60)
> [scheduler.jar:]
>  at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> [quartz.jar:]
>  at
> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
> [quartz.jar:]

Sahina - above looks like an unrelated bug - please investigate


The error related to GlusterTasks is unrelated; it is the error from syncing 
gluster task information when no server in the cluster is in the Up state. 
This job runs in a separate thread.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] oVirt 3.5 beta2 - results

2014-07-29 Thread Sahina Bose

Piotr,

Thanks for the test report!

On 07/29/2014 08:48 PM, Piotr Kliczewski wrote:

Hi all,

I tested gluster related features:


Nagios Integration -> http://www.ovirt.org/Features/Nagios_Integration#HOW_TO

I installed the Nagios dependencies on f20, which went smoothly, but when I
did the same for rhel6 I noticed that I had to manually install an
additional rpm which was not covered by the howto.

rrdtool-perl-1.3.8-6.el6.x86_64.rpm


I will retry this and update the How_To




During discovery of the Nagios server I got following issue:

[root@rhel gluster]# /usr/lib64/nagios/plugins/gluster/discovery.py -c
Default -H 192.168.1.9
Failed to execute NRPE command 'discover_volume_list' in host '192.168.1.9'
Error : Make sure NPRE server in host '192.168.1.9' is configured to
accept requests from Nagios server


Did you get this error even after following the step to edit 
allowed_hosts in /etc/nagios/nrpe.cfg?
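
For reference, the relevant bits would be something like this (the IP is 
the Nagios server's address, as an example):

  # in /etc/nagios/nrpe.cfg on the monitored gluster node
  allowed_hosts=127.0.0.1,192.168.1.10

  # restart nrpe so the change takes effect (the service name may differ per distro)
  service nrpe restart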





so I followed http://tecadmin.net/install-nrpe-on-centos-rhel/.

Nagios server reported status of the cluster. When I had configured
first nagios server I saw:

"OK : None of the Volumes in the cluster are in Critical State"

but for the second there was:

(null).


Do you mean configuring a second cluster in the same Nagios server?




I followed howto and installed oVirt UI plugin but after restart I was
not able to see monitoring details tab so I opened:
https://bugzilla.redhat.com/show_bug.cgi?id=1124371




Volume performance stats ->
http://www.ovirt.org/Features/Gluster_Volume_Performance_Statistics#HOW_TO

I reused already existing setup. I enabled stats and added a volume.
When checking stats details I saw "could not fetch stats".

I wanted to generate some stats so I mount volume previously created using:

mount -t nfs 192.168.1.9:/vol1 /media/volume

I had to redo it several times due to:

mount.nfs: requested NFS version or transport protocol is not supported

After several attempts I lost connectivity to the machine. After host
recovered I tried to run:

mount -o mountproto=tcp -t nfs 192.168.1.9:/vol1 /media/volume

but the result was the same.

I opened: https://bugzilla.redhat.com/show_bug.cgi?id=1124376


I checked whether gluster still works with jsonrpc. I removed the host
that I installed before and added a new one using the jsonrpc protocol.
After the installation I noticed that the host was moved to
Non-Operational state. In the logs I found:

{"jsonrpc": "2.0", "id": "101bf460-6529-42d6-9370-a9629daad628",
"error": {"message": "The method does not exist / is not available.",
"code": -32601}}

I checked what was the reason and there was no apiwrapper.py module so I opened:

https://bugzilla.redhat.com/show_bug.cgi?id=1124481



Thanks,
Piotr
___
Devel mailing list
de...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network Question

2014-07-30 Thread Sahina Bose


On 07/30/2014 01:21 PM, Dan Kenigsberg wrote:

On Wed, Jul 30, 2014 at 01:04:48PM +0800, Punit Dambiwal wrote:

Hi,

I am facing one strange problem with Ovirt network...i have the below
network in my ovirt cluster :-

eth0+eth1=bond0

bond0.10= ovirtmgmt (43.250.X.X)
bond0.37= VMnetwork

eth2+eth3=bond1 (Storage) (10.10.10.X)

Glusterfs running on 4 HV's with each have 2 bricks...

But the glusterfs data still goes over the ovirtmgmt network instead of
the storage network. Is there any further setting required to move it
from ovirtmgmt to storage?

The title "Storage" means nothing to oVirt per se. The user needs to make sure
that her NFS or iSCSI storage server uses the specific network.
But unfortunately - and Bala is welcome to correct me if I'm wrong - there
is no means to set the IP address which gluster uses via oVirt.

Can it be done "underneath" oVirt?


For glusterd communication - that is, gluster peer to gluster peer - 
there's a workaround to achieve this. Please see 
http://lists.ovirt.org/pipermail/users/2014-February/021213.html


For VM-to-gluster-host traffic, the host IP used to mount the volume is 
the one oVirt uses for communication.
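
As far as I recall, the workaround in that thread boils down to making the 
peers know each other by their storage-network addresses - a minimal sketch, 
assuming the 10.10.10.x addresses resolve via dedicated names (the names 
below are placeholders):

  # in /etc/hosts on each gluster node
  10.10.10.1   gstor1
  10.10.10.2   gstor2

  # probe peers using those names, so glusterd and brick traffic stay on the storage network
  gluster peer probe gstor2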




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

