Re: [Users] Package installation error

2013-03-22 Thread Shireesh Anjal

On 03/22/2013 12:11 AM, Alon Bar-Lev wrote:


- Original Message -

From: Simon Grinberg <si...@redhat.com>
To: Alon Bar-Lev <alo...@redhat.com>
Cc: users@ovirt.org, Sahina Bose <sab...@redhat.com>, Dave Neary <dne...@redhat.com>
Sent: Thursday, March 21, 2013 7:40:17 PM
Subject: Re: [Users] Package installation error



- Original Message -

From: Alon Bar-Lev <alo...@redhat.com>
To: Dave Neary <dne...@redhat.com>
Cc: users@ovirt.org, Sahina Bose <sab...@redhat.com>, Simon Grinberg <si...@redhat.com>
Sent: Thursday, March 21, 2013 6:35:56 PM
Subject: Re: [Users] Package installation error

Simon?

- Original Message -

From: Dave Neary <dne...@redhat.com>
To: Sahina Bose <sab...@redhat.com>
Cc: Alon Bar-Lev <alo...@redhat.com>, users@ovirt.org
Sent: Thursday, March 21, 2013 6:34:56 PM
Subject: Re: [Users] Package installation error

Hi,

On 03/19/2013 05:24 AM, Sahina Bose wrote:

You are right - Gluster 3.4 is only required to manage gluster clusters.

Currently the question asked at setup is this:

"The engine can be configured to present the UI in three different application modes: virt [Manage virtualization only], gluster [Manage gluster storage only], and both [Manage virtualization as well as gluster storage], where both is the default."

If this is confusing to the user, we can change this message. Suggestions?

I would either:

(a) put Virt as the default, or

I would go for this one; it keeps backwards compatibility.
Is there an easy method to change this configuration later?

Yes, changing the cluster settings.


No - you can change the cluster settings (virt service / gluster service) only if the application mode is "both". In virt-only mode, the cluster is virt-only, and the same goes for gluster-only. So if you want the user to be able to change the mode *after* installation, this config (ApplicationMode) must be exposed in the engine-config tool.
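
For illustration, once ApplicationMode is exposed there, the post-install change would presumably be a one-liner. This is a sketch - the numeric values are an assumption about the vdc_options encoding, and the engine needs a restart to pick the change up:

# assumed encoding: 1 = virt only, 2 = gluster only, 255 = both
engine-config -s ApplicationMode=255
service ovirt-engine restart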





(b) remove this question entirely - if someone is installing oVirt, assume they want to manage virt. If someone is installing Gluster, have an option during the Gluster install to have the oVirt management console.

We do want people to get exposed to the Gluster option.


If someone wants to manage both their Gluster bricks and their virt with oVirt, I would make that a post-install option to oVirt - perhaps with an extra package to install (which would depend on the later version of Gluster).

Cheers,
Dave.




On 03/19/2013 01:41 PM, Dave Neary wrote:

Hi,

On 03/19/2013 08:16 AM, Alon Bar-Lev wrote:

Now I am confused.
Do you or don't you need vdsm-gluster on your system?

Allow me to clarify.

There have been several messages from users since the oVirt 3.2 release asking why they need Gluster 3.4 pre-releases to run oVirt.

My understanding is that you don't need Gluster 3.4 unless you want to manage a Gluster cluster with oVirt.

So my question is: are we sure that we are not leading users wrong, and confusing them during the installation & set-up process?

Thanks,
Dave.


--
Dave Neary - Community Action and Impact
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-06 Thread Shireesh Anjal

On 03/05/2013 06:08 AM, Rob Zwissler wrote:

Running CentOS 6.3 with the following VDSM packages from dre's repo:

vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch
vdsm-gluster-4.10.3-0.30.19.el6.noarch
vdsm-python-4.10.3-0.30.19.el6.x86_64
vdsm-4.10.3-0.30.19.el6.x86_64
vdsm-cli-4.10.3-0.30.19.el6.noarch

And the following gluster packages from the gluster repo:

glusterfs-3.3.1-1.el6.x86_64
glusterfs-fuse-3.3.1-1.el6.x86_64
glusterfs-vim-3.2.7-1.el6.x86_64
glusterfs-server-3.3.1-1.el6.x86_64


oVirt 3.2 needs a newer (3.4.0) version of glusterfs, which is currently 
in alpha and hence not available in stable repositories.

http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/

This issue has been reported multiple times now, and I think it needs an update to the oVirt 3.2 release notes. I have added a note to this effect at:

http://www.ovirt.org/OVirt_3.2_release_notes#Storage
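
Until the stable packages land, a dedicated repo file is one way to pull the alpha builds in. A sketch, to be saved as /etc/yum.repos.d/gluster-34-alpha.repo (verify the exact directory layout under the link above, e.g. whether an x86_64/ subdirectory is needed):

[gluster-34-alpha]
name=GlusterFS 3.4.0 alpha (pre-release)
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/x86_64/
gpgcheck=0
enabled=1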


I get the following errors in vdsm.log:

Thread-1483::DEBUG::2013-03-04
16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client
[10.33.9.73]::call volumesList with () {}
MainProcess|Thread-1483::DEBUG::2013-03-04
16:35:27,429::misc::84::Storage.Misc.excCmd::(lambda)
'/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
MainProcess|Thread-1483::DEBUG::2013-03-04
16:35:27,480::misc::84::Storage.Misc.excCmd::(lambda) SUCCESS: err
= ''; rc = 0
MainProcess|Thread-1483::ERROR::2013-03-04
16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 430, in volumeInfo
    except (etree.ParseError, AttributeError, ValueError):
AttributeError: 'module' object has no attribute 'ParseError'
Thread-1483::ERROR::2013-03-04
16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList
    return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
  File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeInfo
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
AttributeError: 'module' object has no attribute 'ParseError'

Which corresponds to the following in the engine.log:

2013-03-04 16:34:46,231 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-86) START,
GlusterVolumesListVDSCommand(HostName = xor-q-virt01, HostId =
b342bf4d-d9e9-4055-b662-462dc2e6bf50), log id: 987aef3
2013-03-04 16:34:46,365 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(QuartzScheduler_Worker-86) Failed in GlusterVolumesListVDS method
2013-03-04 16:34:46,366 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(QuartzScheduler_Worker-86) Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to
GlusterVolumesListVDS, error = Unexpected exception
2013-03-04 16:34:46,367 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase]
(QuartzScheduler_Worker-86) Command GlusterVolumesListVDS execution
failed. Exception: VDSErrorException: VDSGenericException:
VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected
exception
2013-03-04 16:34:46,369 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-86) FINISH, GlusterVolumesListVDSCommand, log
id: 987aef3
2013-03-04 16:34:46,370 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterManager]
(QuartzScheduler_Worker-86) Error while refreshing Gluster lightweight
data of cluster qa-cluster1!:
org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GlusterVolumesListVDS, error = Unexpected exception
at 
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:168)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.runVdsCommand(GlusterManager.java:258)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:454)
[engine-bll.jar:]
at 
org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:440)
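
As an aside, the AttributeError at the bottom of the vdsm traceback is itself a Python 2.6 artifact: xml.etree only grew a ParseError class in Python 2.7, so merely evaluating etree.ParseError in that except clause blows up on CentOS 6. A version-tolerant pattern, sketched here as an assumption about what vdsm's gluster/cli.py intends, would be:

import xml.etree.cElementTree as etree

# ParseError exists only on Python >= 2.7; older interpreters raise
# the underlying expat error instead (assumed fallback).
try:
    ParseError = etree.ParseError
except AttributeError:
    from xml.parsers.expat import ExpatError as ParseError

def parseVolumeInfo(xmlText):
    # Return the parsed XML tree, or None if gluster's output is
    # not parseable (e.g. an older gluster without --xml support).
    try:
        return etree.fromstring(xmlText)
    except (ParseError, ValueError):
        return None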

Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-06 Thread Shireesh Anjal

On 03/06/2013 10:29 PM, Rob Zwissler wrote:

On Wed, Mar 6, 2013 at 12:34 AM, Shireesh Anjal <san...@redhat.com> wrote:

oVirt 3.2 needs a newer (3.4.0) version of glusterfs, which is currently in
alpha and hence not available in stable repositories.
http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/

This issue has been reported multiple times now, and I think it needs an update to the oVirt 3.2 release notes. I have added a note to this effect at:
http://www.ovirt.org/OVirt_3.2_release_notes#Storage


On one hand I like oVirt, I think you guys have done a good job with
this, and it is free software so I don't want to complain.

But on the other hand, if you release a major/stable release (i.e. oVirt 3.2) that relies on a major/critical component (a clustering filesystem server) in alpha - not even beta, but alpha prerelease form - you really should be up front and communicative about this. My searches turned up nothing except an offhand statement from a GlusterFS developer, and nothing from the oVirt team until now.

It is not acceptable to expect people to run something as critical as
a cluster filesystem server in alpha form on anything short of a
development test setup.  Are any other components of oVirt 3.2
dependent on non-stable general release packages?

What is the latest release of oVirt considered to be stable and
considered safe for use on production systems?


Hi Rob,

Your points are completely valid, and it's my fault (and not the oVirt release team's) for not mentioning this important information when providing details of gluster-related features for the oVirt 3.2 release notes. Genuine apologies for the same.


Having said this, I believe the stable release of glusterfs 3.4.0 should be coming out very soon (some time this month, if I'm correct), which will provide some relief.


Regards,
Shireesh



Rob


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] databases naming way is so confusing

2013-03-06 Thread Shireesh Anjal

On 03/07/2013 08:23 AM, bigclouds wrote:
Hi, today I looked into the databases, and I find the naming of the tables confusing.

For example, vds_group == cluster and storage_pool == datacenter. I think straightforward names would be better.


I agree :)
I believe it's more of a 'legacy' issue.



Also, I have not found which table is HOST.


The view VDS should provide most details of a HOST. The underlying tables are VDS_STATIC, VDS_DYNAMIC and VDS_STATISTICS.

Once you get that VDS is HOST, VDS_GROUP suddenly starts making sense :)
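
For example, something along these lines against the engine database (the database name, role and column names here are assumptions based on the defaults):

psql -U postgres engine -c "SELECT vds_name, host_name, status FROM vds LIMIT 5;"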



I appreciate your hard work, but I hope my suggestion is meaningful.

thanks




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [Users] Testday aftermath

2013-02-01 Thread Shireesh Anjal

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

Looks like the problem is with the glusterfs version you are using - vdsm could not parse the output from gluster.


Can you update glusterfs to the build at http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and check it out?

How??

I tried adding this repo, but yum says that there are no updates available - at least it did yesterday.


[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum foo isn't that good, so I don't know how to force it. I also tried yum localinstall, but it reverts when yum update is run. It looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs rpms in the fedora repository are of the form 3.3.1-8, whereas the ones from the above QA release are v3.4.0qa7. I think that because of the "v" before 3.4, these are considered a lower version, and by default yum picks up the rpms from the fedora repository.
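
You can check rpm's verdict directly with rpmdev-vercmp from the rpmdevtools package (treat the invocation as a sketch):

rpmdev-vercmp v3.4.0qa7-1 3.3.1-8
# expected: the first argument compares as older, because the leading 'v'
# makes that segment sort as alphabetic, and alphabetic loses to numeric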


To work around this issue, you could try:

yum --disablerepo=* --enablerepo=gluster-nieuw install glusterfs 
glusterfs-fuse glusterfs-geo-replication glusterfs-server




Joop




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




[Users] Managing Gluster from oVirt

2012-12-11 Thread Shireesh Anjal

Hi all,

Here is the link to the video of the talk I gave at FOSS.IN 2012, Bangalore, on the topic "Managing Gluster from oVirt":


http://youtu.be/NBu3wAXxq-Q

Regards,
Shireesh
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Is there a way to force remove a host?

2012-09-28 Thread Shireesh Anjal

On Friday 28 September 2012 01:00 PM, Itamar Heim wrote:

On 09/25/2012 01:45 PM, Shireesh Anjal wrote:

On Tuesday 25 September 2012 04:04 PM, Itamar Heim wrote:

On 09/25/2012 12:32 PM, Shireesh Anjal wrote:

On Tuesday 25 September 2012 01:42 PM, Itamar Heim wrote:

On 09/25/2012 09:44 AM, Shireesh Anjal wrote:

On Tuesday 25 September 2012 03:25 AM, Itamar Heim wrote:

On 09/24/2012 11:53 PM, Jason Brooks wrote:

On Mon 24 Sep 2012 01:24:44 PM PDT, Itamar Heim wrote:

On 09/24/2012 08:49 PM, Dominic Kaiser wrote:
This conversation is fine, but if I want to force remove, no matter what, I should be able to from the GUI. The nodes are no longer available, I want to get rid of them, and ovirt does not let me. I can delete from the database, but why not from the GUI? I am sure others may run into this problem as well.


what happens to the status of the host when you right-click on the host and confirm it was shut down?


I'm having this same issue. Confirming the host is shut down doesn't make a difference.

I'm seeing lots of "Failed to GlusterHostRemoveVDS, error = Unexpected exception" errors in my engine log that seem to correspond with the failed remove host attempts.


is cluster defined as gluster as well?
what is the status of the host after you confirm shutdown?
any error on log on this specific command?

shireesh - not sure if relevant to this flow, but we need to make sure removing a host from the engine isn't blocked on gluster needing to remove it from the gluster cluster, if the host is not available any more or is the last host in the gluster cluster?


Yes, currently the system tries the 'gluster peer detach hostname' command when trying to remove a server, which fails if the server is unavailable. This can be enhanced to show the error to the user and then allow 'force remove', which can use the 'gluster peer detach hostname *force*' command that forcefully removes the server from the cluster, even if it is not available or has bricks on it.
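
For reference, the two CLI forms look like this (the force variant exists in the gluster releases of this period; check your version):

gluster peer detach HOSTNAME          # fails if the server is unreachable
gluster peer detach HOSTNAME force    # detaches it even if unreachable or holding bricks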


what if it is the last server in the cluster?
what if there is another server in the cluster but no communication to it as well?


A quick look at the code tells me that in case of virt, we don't allow removing a host if it has VM(s) on it (even if the host is currently not available), i.e. vdsDynamic.getvm_count() > 0. Please correct me if I'm wrong. If that's correct, and if we want to keep it consistent for gluster as well, then we should not allow removing a host if it has gluster volume(s) on it. This is how it behaves in case of 'last server in cluster' today.


true, but the user can fence the host or confirm shutdown manually, which will release all resources on it; then it can be removed.


I see. In that case, we can just remove the validation and allow
removing the host irrespective of whether it contains volume(s) or not.
Since it's the only host in the cluster, this won't cause any harm.





In case no 'up' server is available in the cluster, we can show the error and provide a 'force' option that will just remove it from the engine DB and will not attempt a gluster peer detach.


something like that.
i assume the gluster storage will handle this somehow?


What would you expect gluster storage to do in such a case? If none of the servers are accessible to a gluster client, the client can't read from or write to volumes of the cluster. Cluster management operations in gluster (like removing a server from the cluster) are always done from one of the servers of the cluster. So if no servers are available, nothing can be done. Vijay can shed more light on this if required.

Assuming that some of the servers come up at a later point in time, they would continue to consider this (removed from engine) server as one of the peers. This would create an inconsistency between the actual gluster configuration and the engine DB. This, however, can be handled once we have a feature to sync configuration with gluster (this is WIP). This feature will automatically identify such servers, and allow the user to either import them into the engine, or remove (peer detach) them from the gluster cluster.


why is that an issue though - worst case, the server wouldn't appear in the admin console[1] if it is alive, and if it is dead, it is something the gluster cluster is supposed to deal with?


It's just that I think it's not good to have the management console out of sync with the gluster configuration. However, as I said, we will soon have a mechanism to handle such cases.


Also, we're thinking of a simpler approach: just providing a 'force remove' checkbox on the remove host confirmation dialog (only if the host belongs to a gluster-enabled cluster). The user can then tick this checkbox when the normal remove flow doesn't work in the scenarios discussed above.




[1] though I assume the admin console will continue to alert on its presence, for being out of sync with the list of servers in the cluster.


Yes - this feature is WIP.


Dominic

On Sep 22, 2012 4:19 PM, Eli Mesika emes

Re: [Users] Is there a way to force remove a host?

2012-09-25 Thread Shireesh Anjal

On Tuesday 25 September 2012 01:42 PM, Itamar Heim wrote:

On 09/25/2012 09:44 AM, Shireesh Anjal wrote:

On Tuesday 25 September 2012 03:25 AM, Itamar Heim wrote:

On 09/24/2012 11:53 PM, Jason Brooks wrote:

On Mon 24 Sep 2012 01:24:44 PM PDT, Itamar Heim wrote:

On 09/24/2012 08:49 PM, Dominic Kaiser wrote:

This conversation is fine, but if I want to force remove, no matter what, I should be able to from the GUI. The nodes are no longer available, I want to get rid of them, and ovirt does not let me. I can delete from the database, but why not from the GUI? I am sure others may run into this problem as well.


what happens to the status of the host when you right-click on the host and confirm it was shut down?


I'm having this same issue. Confirming the host is shut down doesn't make a difference.

I'm seeing lots of "Failed to GlusterHostRemoveVDS, error = Unexpected exception" errors in my engine log that seem to correspond with the failed remove host attempts.


is cluster defined as gluster as well?
what is the status of the host after you confirm shutdown?
any error on log on this specific command?

shireesh - not sure if relevant to this flow, but we need to make sure removing a host from the engine isn't blocked on gluster needing to remove it from the gluster cluster, if the host is not available any more or is the last host in the gluster cluster?


Yes, currently the system tries the 'gluster peer detach hostname' command when trying to remove a server, which fails if the server is unavailable. This can be enhanced to show the error to the user and then allow 'force remove', which can use the 'gluster peer detach hostname *force*' command that forcefully removes the server from the cluster, even if it is not available or has bricks on it.


what if it is the last server in the cluster?
what if there is another server in the cluster but no communication to it as well?


A quick look at the code tells me that in case of virt, we don't allow removing a host if it has VM(s) on it (even if the host is currently not available), i.e. vdsDynamic.getvm_count() > 0. Please correct me if I'm wrong. If that's correct, and if we want to keep it consistent for gluster as well, then we should not allow removing a host if it has gluster volume(s) on it. This is how it behaves in case of 'last server in cluster' today.


In case no 'up' server is available in the cluster, we can show the error and provide a 'force' option that will just remove it from the engine DB and will not attempt a gluster peer detach.


Dominic

On Sep 22, 2012 4:19 PM, Eli Mesika <emes...@redhat.com> wrote:



- Original Message -
  From: Douglas Landgraf <dougsl...@redhat.com>
  To: Dominic Kaiser <domi...@bostonvineyard.org>
  Cc: Eli Mesika <emes...@redhat.com>, users@ovirt.org, Robert Middleswarth <rob...@middleswarth.net>
  Sent: Friday, September 21, 2012 8:12:27 PM
  Subject: Re: [Users] Is there a way to force remove a host?
 
  Hi Dominic,

  On 09/20/2012 12:11 PM, Dominic Kaiser wrote:
   Sorry I did not explain.

   I had tried to remove the host and had no luck troubleshooting it. I
   then removed it and used it for a storage unit, reinstalling fedora
   17. I foolishly thought that I could just remove the host manually.
   It physically is not there. (My fault, I know.) Is there a way that
   you know of to remove a host by brute force?

   dk
 
  Feel free to try the below script (not part of the official project) for
  brute force:

  (from the engine side)
  # yum install python-psycopg2 -y
  # wget https://raw.github.com/dougsland/misc-rhev/master/engine_force_remove_Host.py
  # (edit the file and change the db password)
  # python ./engine_force_remove_Host.py

Hi, I had a look at the Python script you provided:
First, I must say that handling the database directly may leave the DB in an inconsistent state; therefore, if there is no other option, the database should be backed up prior to this operation.
In addition, I do not like the execution of the SQL statements in the script.
There is an SP called DeleteVds(v_vds_id UUID) and you should use that, since it encapsulates all the details.
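
(For reference, invoking the SP directly would look roughly like this - the database name and role here are the defaults, and the UUID is a placeholder:)

psql -U postgres engine -c "SELECT DeleteVds('HOST-UUID-HERE');"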
For example, your script does not handle permission clean-up as the SP does, and therefore leaves garbage in the database.
In addition, a failure in your script may leave the database in an inconsistent state, while the SP is executed in one transaction and will leave the DB consistent.
So, in short, I would prefer in this case that the relevant SP do the clean-up, since this is the one that is used by the code and that ensures (at least I hope so

Re: [Users] Is there a way to force remove a host?

2012-09-25 Thread Shireesh Anjal

On Tuesday 25 September 2012 04:04 PM, Itamar Heim wrote:

On 09/25/2012 12:32 PM, Shireesh Anjal wrote:

On Tuesday 25 September 2012 01:42 PM, Itamar Heim wrote:

On 09/25/2012 09:44 AM, Shireesh Anjal wrote:

On Tuesday 25 September 2012 03:25 AM, Itamar Heim wrote:

On 09/24/2012 11:53 PM, Jason Brooks wrote:

On Mon 24 Sep 2012 01:24:44 PM PDT, Itamar Heim wrote:

On 09/24/2012 08:49 PM, Dominic Kaiser wrote:

This conversation is fine, but if I want to force remove, no matter what, I should be able to from the GUI. The nodes are no longer available, I want to get rid of them, and ovirt does not let me. I can delete from the database, but why not from the GUI? I am sure others may run into this problem as well.


what happens to the status of the host when you right-click on the host and confirm it was shut down?


I'm having this same issue. Confirming the host is shut down doesn't make a difference.

I'm seeing lots of "Failed to GlusterHostRemoveVDS, error = Unexpected exception" errors in my engine log that seem to correspond with the failed remove host attempts.


is cluster defined as gluster as well?
what is the status of the host after you confirm shutdown?
any error on log on this specific command?

shireesh - not sure if relevant to this flow, but we need to make sure removing a host from the engine isn't blocked on gluster needing to remove it from the gluster cluster, if the host is not available any more or is the last host in the gluster cluster?


Yes, currently the system tries the 'gluster peer detach hostname' command when trying to remove a server, which fails if the server is unavailable. This can be enhanced to show the error to the user and then allow 'force remove', which can use the 'gluster peer detach hostname *force*' command that forcefully removes the server from the cluster, even if it is not available or has bricks on it.


what if it is the last server in the cluster?
what if there is another server in the cluster but no communication to it as well?


A quick look at the code tells me that in case of virt, we don't allow removing a host if it has VM(s) on it (even if the host is currently not available), i.e. vdsDynamic.getvm_count() > 0. Please correct me if I'm wrong. If that's correct, and if we want to keep it consistent for gluster as well, then we should not allow removing a host if it has gluster volume(s) on it. This is how it behaves in case of 'last server in cluster' today.


true, but the user can fence the host or confirm shutdown manually, which will release all resources on it; then it can be removed.


I see. In that case, we can just remove the validation and allow 
removing the host irrespective of whether it contains volume(s) or not. 
Since it's the only host in the cluster, this won't cause any harm.






In case no 'up' server is available in the cluster, we can show the error and provide a 'force' option that will just remove it from the engine DB and will not attempt a gluster peer detach.


something like that.
i assume the gluster storage will handle this somehow?


What would you expect gluster storage to do in such a case? If none of the servers are accessible to a gluster client, the client can't read from or write to volumes of the cluster. Cluster management operations in gluster (like removing a server from the cluster) are always done from one of the servers of the cluster. So if no servers are available, nothing can be done. Vijay can shed more light on this if required.


Assuming that some of the servers come up at a later point in time, they would continue to consider this (removed from engine) server as one of the peers. This would create an inconsistency between the actual gluster configuration and the engine DB. This, however, can be handled once we have a feature to sync configuration with gluster (this is WIP). This feature will automatically identify such servers, and allow the user to either import them into the engine, or remove (peer detach) them from the gluster cluster.



Dominic

On Sep 22, 2012 4:19 PM, Eli Mesika <emes...@redhat.com> wrote:



- Original Message -
  From: Douglas Landgraf <dougsl...@redhat.com>
  To: Dominic Kaiser <domi...@bostonvineyard.org>
  Cc: Eli Mesika <emes...@redhat.com>, users@ovirt.org, Robert Middleswarth <rob...@middleswarth.net>
  Sent: Friday, September 21, 2012 8:12:27 PM
  Subject: Re: [Users] Is there a way to force remove a host?
 
  Hi Dominic,

  On 09/20/2012 12:11 PM, Dominic Kaiser wrote:
   Sorry I did not explain.

   I had tried to remove the host and had no luck troubleshooting it. I
   then removed it and used it for a storage unit, reinstalling fedora
   17. I foolishly thought that I could just remove the host
   manually

Re: [Users] Problem with creating a glusterfs volume

2012-09-05 Thread Shireesh Anjal

Hi Dominic,

Looking at the engine log immediately after trying to create the volume should tell us on which node the gluster volume creation was attempted. Then looking at the vdsm log on that node should help us identify the exact reason for failure.


In case this doesn't help you, can you please simulate the issue again and send back all 5 log files? (engine.log from the engine server and vdsm.log from the 4 nodes)
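
Something along these lines should narrow it down (the log locations are the defaults; the strings to grep for are my assumption):

# on the engine server:
grep -i glustervolume /var/log/ovirt-engine/engine.log | tail -20
# then, on the node the engine picked:
grep -i -A 10 'volume create' /var/log/vdsm/vdsm.log | tail -40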


Regards,
Shireesh

On Wednesday 05 September 2012 11:50 PM, Dominic Kaiser wrote:

So I have a problem creating glusterfs volumes.  Here is the install:

 1. Ovirt 3.1
 2. 4 Nodes are Fedora 17 with kernel 3.3.4 - 5.fc17.x86_64
 3. 4 nodes peer joined and running
 4. 4 nodes added as hosts to ovirt
 5. created a directory on each node at the path /data
 6. chown 36.36 -R /data on all nodes
 7. went back to ovirt and created a distributed/replicated volume and
added the 4 nodes with brick path of /data

I received this error:

"Creation of Gluster Volume maingfs1 failed."

I went and looked at the vdsm logs on the nodes and the ovirt server, which did not say much. Where else should I look? Also, this error is vague - what does it mean?



--
Dominic Kaiser
Greater Boston Vineyard
Director of Operations

cell: 617-230-1412
fax: 617-252-0238
email: domi...@bostonvineyard.org mailto:domi...@bostonvineyard.org




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [Users] Anyone started working on the documentation for GlusterFS and Ovirt 3.1 yet?

2012-06-11 Thread Shireesh Anjal

On Sunday 10 June 2012 02:02 AM, Robert Middleswarth wrote:

On 06/09/2012 11:33 AM, Haim Ateya wrote:

adding Shireesh\Daniel

Hi Robert,

afaik, there is no official documentation that shows how to work with glusterFS using oVirt, since it's too new and we are in early integration phases (shireesh, correct me if i'm wrong).

That is why I asked if anyone has started working on the docs.


Here is the feature page: http://ovirt.org/wiki/Features/Gluster_Support
We intend to add more documentation soon, though nothing else is 
available right now.


I would start by browsing to http://www.gluster.org/ and reading about the underlying technology (glusterFS; the admin guide is a good place to start).
I read through their docs but wasn't sure how much needs to be done using native glusterfs tools and how much is done using the ovirt engine; since it was recently integrated, there are likely still holes in the process.


Most of the features mentioned in the feature page, except async tasks (rebalance volume, migrate brick) and bootstrapping changes, are currently available from the UI and REST API. You should run the engine with the option InstallVds set to false, and SSL disabled (on both engine and vdsm). When creating clusters, make sure you select the checkbox to enable gluster services, and always add hosts to such clusters from the UI only.
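
A sketch of that setup - the option names are my best recollection for this release, so verify before relying on them:

# on the engine:
engine-config -s InstallVds=false
engine-config -s SSLEnabled=false
service ovirt-engine restart
# on each host, disable SSL in /etc/vdsm/vdsm.conf:
#   [vars]
#   ssl = false
# then: service vdsmd restart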


as for tips and hints, I will describe what you need in general (very 
high-level):


- cluster and data-center - edit cluster and set the gluster check box
- host side (vds):
   * install vdsm-gluster package
   * install all gluster-related packages
   * make sure all gluster related services are fully operational
I didn't see vdsm-gluster when I built the nodes, but when I looked for the package it was there. I already had all the glusterfs services running.
- create new volume (a volume is a collection of bricks, which represent a collection of export domains):

   * allocate storage space on your host
   * make sure this space is exposed via export (NFS) directory
   * create new volume/bricks
This is where I am confused. glusterfs supports 3 methods of exposing the cluster. Is ovirt using the NFS or the native (fs) method? If it is NFS, why the interface for bricks? Wouldn't you just create a giant nfs share? This is the area I am most confused by and looking for hints on.


To be more precise, glusterfs supports 3 methods of exposing a *volume*, not a cluster. A cluster can contain a number of volumes. In Gluster terminology, bricks (server:directory) are the building blocks of a volume. Volumes can be exported using the native (GLUSTER), NFS or CIFS protocols, though only GLUSTER and NFS are available from the UI for now. A volume need not necessarily have bricks on all servers of the cluster. If you want one giant NFS share spread over all servers of the cluster, you'll have to use one brick from each of the servers when creating the volume, and then export the volume via NFS.
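
In CLI terms (oVirt drives the same operations through vdsm-gluster), the one-giant-share case would look roughly like this - the host names, volume name and replica count are placeholders:

gluster volume create bigshare replica 2 \
    server1:/data server2:/data server3:/data server4:/data
gluster volume start bigshare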


Feel free to ask glusterfs specific questions on the #gluster IRC 
channel on FreeNode.




Thanks
Robert

Thanks,

Haim
- Original Message -

From: Robert Middleswarth <rob...@middleswarth.net>
To: users@ovirt.org
Sent: Friday, June 8, 2012 9:23:40 PM
Subject: [Users] Anyone started working on the documentation for GlusterFS and Ovirt 3.1 yet?


I installed ovirt 3.1 using the repo provided by http://www.dreyou.org/ovirt/ and it works really well. There have been a few glitches, but overall it works well. I see an interface for GlusterFS is in there, but no clue how to use it - have any hints or info been created yet?

Thanks
Robert

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





