Re: [ovirt-users] remove a lost gluster based datacenter

2014-07-12 Thread Kanagaraj Mayilsamy
Looks like the cluster has both virt and gluster services enabled; otherwise, a
host that is part of a gluster-only cluster could not be the SPM.

I am not sure about the standard procedure to remove an SPM host.

Once SPM is disabled, you should be able to remove all the gluster hosts 
forcefully.
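
For example, via the REST API (a sketch, not a tested recipe: substitute your
engine address, credentials and host id; the force action is how I recall the
3.x API handling dead hosts, so verify against your version's docs):

  curl -k -X DELETE -u admin@internal:password \
       -H "Content-Type: application/xml" \
       -d '<action><force>true</force></action>' \
       https://engine.example.com/api/hosts/<host-id>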

Thanks,
Kanagaraj

- Original Message -
 From: Alexander Wels aw...@redhat.com
 To: users@ovirt.org
 Cc: Demeter Tibor tdeme...@itsmart.hu, Kanagaraj kmayi...@redhat.com
 Sent: Saturday, July 12, 2014 2:06:38 AM
 Subject: Re: [ovirt-users] remove a lost gluster based datacenter
 
 On Friday, July 11, 2014 10:11:00 PM Demeter Tibor wrote:
  Hi,
  Can somebody help me?
  
  Tibor
  
 
 Tibor,
 
 This is not an officially supported solution, but I had a similar issue
 myself at some point (virtual gluster hosts that I was silly enough to delete
 without first stopping the volumes). Basically, the status of the volumes is
 stored in the status column of the gluster_volumes table. If you manually
 update the status to 'DOWN' instead of 'UP' for the volumes that are giving
 you trouble, then in the UI they will show as down as well, at which point
 you should be able to properly remove them using the UI.
 
 Once that is done, you should be able to simply remove the offending hosts,
 as there are no longer any gluster volumes associated with them.
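 
 For example, something like this on the engine machine (a sketch: the
 database name 'engine' and the column name 'vol_name' are assumptions, so
 inspect the table with \d gluster_volumes first, and back up the database
 before touching it):
 
   # su - postgres -c "psql engine"
   engine=# UPDATE gluster_volumes SET status = 'DOWN' WHERE vol_name = 'myvolume';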
 
 I am sure there is a much better way of doing this (maybe REST api?), but I
 couldn't find it (I am not a gluster expert).
 
 Alexander
 
  - Original Message -
  
   Hi,
   
   I tried it before writing this mail.
   It is impossible; it is the SPM host.
   
   Error while executing action: Cannot switch Host to Maintenance mode.
   Host is Storage Pool Manager and is in Non Responsive state.
   - If power management is configured, engine will try to fence the host
   automatically.
   - Otherwise, either bring the node back up, or release the SPM resource.
   To do so, verify that the node is really down by right clicking on the
   host
   and confirm that the node was shutdown manually.
   
   When I want to switch a non-SPM host to maintenance:
   
   Error while executing action:
   
   gluster4:
   Cannot remove Host. Server having Gluster volume.
   
   When I want to remove a gluster brick:
   
   Error while executing action: Cannot stop Gluster Volume. No up server
   found in glusters.
   
   When I want to remove a VM that was on this gluster based datacenter:
   
   Error while executing action:
   
   F19glusters:
   Cannot remove VM: Storage Domain cannot be accessed.
   -Please check that at least one Host is operational and Data Center state
   is up.
   
   So it is a vicious circle.
   
   Tibor
   
   - Original Message -
   
On 07/09/2014 01:45 AM, Demeter Tibor wrote:
 Dear list members,
 
 I have created a test environment for oVirt using four PCs and a server.
 
 I have two datacenters on the main server:
 
 - a local datacenter on the server
 
 - a gluster-based datacenter on the four desktop PCs
 
 Two weeks ago someone took my PCs (for doing other things :)
 
 I could not remove the dead gluster-based datacenter from oVirt.
 
 - It always wants its own servers :)
 
 - The force option doesn't work in this case.
 
 - The gluster bricks cannot be removed.
 
 - The cluster is not removable.
 
 - Everything depends on everything else :)

Move all the gluster hosts to maintenance. Then select all of them, click
'Remove', check the 'Force' checkbox, and click OK.

 I don't want to reinstall the whole oVirt setup because I have a lot of VMs
 on the local datacenter.
 
 So, how can I remove a completely dead datacenter from oVirt?
 
 Thanks in advance.
 
 Tibor
 
 
 
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] UI Plugin: shellinabox did not work in ovirt-3.4.3

2014-08-06 Thread Kanagaraj Mayilsamy
The url should be plugin/ShellInABoxPlugin/start.html in shellinabox.json.
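
That is, something like this in shellinabox.json (a sketch; only the url line
is the point here, and the other fields follow the usual UI-plugin descriptor
layout, so they may differ in your copy):

  {
    "name": "ShellInABoxPlugin",
    "url": "plugin/ShellInABoxPlugin/start.html",
    "resourcePath": "shellinabox-files"
  }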

Thanks,
Kanagaraj

- Original Message -
 From: Einav Cohen eco...@redhat.com
 To: lofyer lof...@gmail.com, Daniel Erez de...@redhat.com
 Cc: users users@ovirt.org
 Sent: Wednesday, August 6, 2014 9:32:57 PM
 Subject: Re: [ovirt-users] UI Plugin: shellinabox did not work in ovirt-3.4.3
 
 the URL structure has changed from 3.3 to 3.4;
 maybe the problem is in the URL within the UI Plugin json file?
 
 url: /webadmin/webadmin/plugin/ShellBoxPlugin/start.html
 (I copied it from [1], not sure what exists in the repo)
 
 maybe @Daniel would know better.
 
 [1]
 http://derezvir.blogspot.co.il/2013/01/ovirt-webadmin-shellinabox-ui-plugin.html
 
 - Original Message -
  From: lofyer lof...@gmail.com
  To: users users@ovirt.org
  Sent: Wednesday, August 6, 2014 10:45:35 AM
  Subject: [ovirt-users] UI Plugin: shellinabox did not work in ovirt-3.4.3
  
  My OS is CentOS-6.5.
  I cloned the samples from gerrit, but it did not work as it used to.
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Running a Gluster volume off of a separate network interface and managing it via the oVirt engine Web-GUI

2014-10-09 Thread Kanagaraj Mayilsamy
You can refer to http://www.ovirt.org/Change_network_interface_for_Gluster

Thanks,
Kanagaraj

- Original Message -
 From: Thomas Keppler (PEBA) thomas.kepp...@kit.edu
 To: users@ovirt.org
 Sent: Thursday, October 9, 2014 7:36:51 PM
 Subject: [ovirt-users] Running a Gluster volume off of a separate network
 interface and managing it via the oVirt engine Web-GUI
 
 Hello!
 
 I have a rather quick question about GlusterFS. My nodes are set up so that
 they have two NICs with four ports in total. The NIC installed in addition to
 the motherboard's internal NIC is a 10 Gbps one that I want to use for
 Gluster communication in the Gluster cluster.
 
 Now, I could just create a plain old Gluster volume, run everything from the
 terminal, and be done with it, but that's really not what I want to do! I
 want to manage as much as possible through the oVirt engine, and I saw it's
 possible to manage Gluster volumes, too.
 If I try to create one there though, I can only see the IP addresses I used
 to communicate with oVirt (on the internal 1Gbps NIC).
 
 I have tried creating another network and bound all my nodes to it, but
 still no success.
 
 Any ideas how I can do this?
 
 --
 Best regards
 Thomas Keppler
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster volumes - space used not reporting

2014-10-20 Thread Kanagaraj Mayilsamy
Also make sure the compatibility version of the cluster is 3.5.

If the issue still persists, look for errors in engine.log and vdsm.log
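
For example (standard log locations):

  grep -i error /var/log/ovirt-engine/engine.log | tail -n 50   # on the engine
  grep -i error /var/log/vdsm/vdsm.log | tail -n 50             # on each host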

Thanks,
Kanagaraj

- Original Message -
 From: Ryan Nix ryan@gmail.com
 To: Users@ovirt.org users@ovirt.org
 Sent: Tuesday, October 21, 2014 1:38:12 AM
 Subject: [ovirt-users] Gluster volumes - space used not reporting
 
 Is there anything I need to do to get the space used reporting working in
 ovirt 3.5? Thanks!
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] problem adding a gluster storage domain.

2012-11-29 Thread Kanagaraj Mayilsamy
Have you tried mounting the gluster volume (to make sure the problem is not in
gluster)? Something like 'mount -t glusterfs localhost:/volume /tmp/volume'.

You could execute '/usr/bin/sudo -n /usr/bin/mount -t glusterfs -o vers=3
localhost:/volume /rhev/data-center/mnt/localhost:_volume' and see what happens.


- Original Message -
From: yoshinobu.ushida yoshinobu.ush...@gmail.com
To: Kanagaraj Mayilsamy kmayi...@redhat.com
Cc: users@ovirt.org
Sent: Thursday, November 29, 2012 8:09:33 PM
Subject: Re: [Users] problem adding a gluster storage domain.

Hi Kanagaraj-san, 


Thank you for your reply. The checklist items are all okay. Do you have any
other checks I should try?
I have created another volume. However, the result is the same. vdsm.log is as
follows.


vdsm.log 

Thread-3849::DEBUG::2012-11-29 07:34:11,810::BindingXMLRPC::156::vds::(wrapper) 
[192.168.77.1] 
Thread-3849::DEBUG::2012-11-29 
07:34:11,811::task::588::TaskManager.Task::(_updateState) 
Task=`e2011748-ff36-4ffe-bb2d-4244ba461f29`::moving from state init -> state
preparing
Thread-3849::INFO::2012-11-29 07:34:11,811::logUtils::37::dispatcher::(wrapper) 
Run and protect: validateStorageServerConnection(domType=6, 
spUUID='----', conList=[{'port': '', 
'connection': 'localhost:/volume', 'mnt_options': 'vers=3', 'portal': '', 
'user': '', 'iqn': '', 'vfs_type': 'glusterfs', 'password': '**', 'id': 
'----'}], options=None) 
Thread-3849::INFO::2012-11-29 07:34:11,811::logUtils::39::dispatcher::(wrapper) 
Run and protect: validateStorageServerConnection, Return response: 
{'statuslist': [{'status': 0, 'id': '----'}]} 
Thread-3849::DEBUG::2012-11-29 
07:34:11,811::task::1172::TaskManager.Task::(prepare) 
Task=`e2011748-ff36-4ffe-bb2d-4244ba461f29`::finished: {'statuslist': 
[{'status': 0, 'id': '----'}]} 
Thread-3849::DEBUG::2012-11-29 
07:34:11,812::task::588::TaskManager.Task::(_updateState) 
Task=`e2011748-ff36-4ffe-bb2d-4244ba461f29`::moving from state preparing ->
state finished
Thread-3849::DEBUG::2012-11-29 
07:34:11,812::resourceManager::809::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {} 
Thread-3849::DEBUG::2012-11-29 
07:34:11,812::resourceManager::844::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {} 
Thread-3849::DEBUG::2012-11-29 
07:34:11,812::task::978::TaskManager.Task::(_decref) 
Task=`e2011748-ff36-4ffe-bb2d-4244ba461f29`::ref 0 aborting False 
Thread-3850::DEBUG::2012-11-29 07:34:11,828::BindingXMLRPC::156::vds::(wrapper) 
[192.168.77.1] 
Thread-3850::DEBUG::2012-11-29 
07:34:11,828::task::588::TaskManager.Task::(_updateState) 
Task=`b21afac6-67df-49f2-9d5e-57888a8471a6`::moving from state init -> state
preparing
Thread-3850::INFO::2012-11-29 07:34:11,829::logUtils::37::dispatcher::(wrapper) 
Run and protect: connectStorageServer(domType=6, 
spUUID='----', conList=[{'port': '', 
'connection': 'localhost:/volume', 'mnt_options': 'vers=3', 'portal': '', 
'user': '', 'iqn': '', 'vfs_type': 'glusterfs', 'password': '**', 'id': 
'a3675b5e-9433-435f-aecb-d9cad60f6d36'}], options=None) 
Thread-3850::DEBUG::2012-11-29 
07:34:11,840::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n 
/usr/bin/mount -t glusterfs -o vers=3 localhost:/volume 
/rhev/data-center/mnt/localhost:_volume' (cwd None) 
Thread-3850::ERROR::2012-11-29 
07:34:11,906::hsm::1932::Storage.HSM::(connectStorageServer) Could not connect 
to storageServer 
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 1929, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 179, in connect
    self._mount.mount(self.options, self._vfsType)
  File "/usr/share/vdsm/storage/mount.py", line 190, in mount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 206, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (1, 'unknown option vers (ignored)\nMount failed. Please check the
log file for more details.\n;ERROR: failed to create logfile
/var/log/glusterfs/rhev-data-center-mnt-localhost:_volume.log (Permission
denied)\nERROR: failed to open logfile
/var/log/glusterfs/rhev-data-center-mnt-localhost:_volume.log\n')
Thread-3850::DEBUG::2012-11-29 
07:34:11,907::lvm::457::OperationMutex::(_invalidateAllPvs) Operation 'lvm 
invalidate operation' got the operation mutex 
Thread-3850::DEBUG::2012-11-29 
07:34:11,907::lvm::459::OperationMutex::(_invalidateAllPvs) Operation 'lvm 
invalidate operation' released the operation mutex 
Thread-3850::DEBUG::2012-11-29 
07:34:11,907::lvm::469::OperationMutex::(_invalidateAllVgs) Operation 'lvm 
invalidate operation' got the operation mutex 
Thread-3850::DEBUG::2012-11-29 
07:34:11,908::lvm::471::OperationMutex::(_invalidateAllVgs) Operation 'lvm 
invalidate operation' released the operation mutex 
Thread-3850::DEBUG::2012-11-29 
07:34

Re: [Users] problem adding a gluster storage domain.

2012-11-29 Thread Kanagaraj Mayilsamy
I am not sure how you created a gluster volume named 'volume'; gluster does not
allow a volume to be named 'volume'. If I try to do that, gluster says 'volume
cannot be the name of a volume'.

Can you replace 'volume' with the actual volume name while creating the
storage domain and repeat the steps?
Something like: 192.168.77.107:/test_vol or localhost:/test_vol

- Original Message -
From: yoshinobu.ushida yoshinobu.ush...@gmail.com
To: Kanagaraj Mayilsamy kmayi...@redhat.com
Cc: users@ovirt.org
Sent: Thursday, November 29, 2012 8:57:12 PM
Subject: Re: [Users] problem adding a gluster storage domain.

Thank you very much. I have executed the mount. The result is below.



# /usr/bin/sudo -n /usr/bin/mount -t glusterfs -o vers=3 localhost:/volume 
/rhev/data-center/mnt/localhost\:_volume 
unknown option vers (ignored) 


I could mount the volume when I created it using gluster's own commands.



# gluster volume create test_vol replica 2 transport tcp 192.168.77.107:/test 
192.168.77.108:/test 
Creation of volume test_vol has been successful. Please start the volume to 
access data. 

# gluster volume start test_vol 
Starting volume test_vol has been successful 
# mount.glusterfs localhost:/test_vol /mnt 

# df -h | grep test_vol 
localhost:/test_vol 50G 2.2G 45G 5% /mnt 


Regards, 
Ushida 



2012/11/29 Kanagaraj Mayilsamy  kmayi...@redhat.com  


Have tried mounting the gluster volume? (to make sure problem is not there in 
gluster) Something like 'mount -t glusterfs localhost:/volume /tmp/volume'. 

You could execute '/usr/bin/sudo -n /usr/bin/mount -t glusterfs -o vers=3 
localhost:/volume /rhev/data-center/mnt/localhost:_volume' and see what 
happens. 



- Original Message - 
From: yoshinobu.ushida  yoshinobu.ush...@gmail.com  


To: Kanagaraj Mayilsamy  kmayi...@redhat.com  
Cc: users@ovirt.org 
Sent: Thursday, November 29, 2012 8:09:33 PM 
Subject: Re: [Users] problem adding a gluster storage domain. 

Hi Kanagaraj-san, 


Thank you for your reply. The checklist items are all okay. Do you have any
other checks I should try?
I have created another volume. However, the result is the same. vdsm.log is as
follows.


vdsm.log [snipped: same log as quoted earlier in this thread]

Re: [Users] gluster volume creation error

2013-01-21 Thread Kanagaraj Mayilsamy
Hi Jithin,

 By looking at the logs, it looks like you already had a volume named 'vol1'
in gluster and you tried to create another volume with the same name from the
UI. That's why you were able to see the volume 'vol1' even after the creation
failed.

 I am not sure which version of ovirt-engine you are using. The recent
releases (3.2) and the current upstream code support reflecting existing
volumes in the UI whether they were created via the UI or directly from the
CLI. With this change, vol1 would have appeared in the UI even before the
creation attempt.

So it looks like there is no issue with the creation of the volume. I am not
familiar with the mount issues; someone else will help you out.
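
For reference, you can check from the CLI whether a name is already taken
before creating the volume from the UI:

  # gluster volume info vol1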

Thanks,
Kanagaraj

- Original Message -
 From: Jithin Raju rajuj...@gmail.com
 To: Kanagaraj Mayilsamy kmayi...@redhat.com, users@ovirt.org
 Sent: Monday, January 21, 2013 1:33:56 PM
 Subject: Re: [Users] gluster volume creation error
 
 
 Hi Kanagaraj,
 
 
 PFA,
 
 
 gluster version info:
 glusterfs-geo-replication-3.2.7-2.fc17.x86_64
 glusterfs-3.2.7-2.fc17.x86_64
 glusterfs-fuse-3.2.7-2.fc17.x86_64
 glusterfs-rdma-3.2.7-2.fc17.x86_64
 vdsm-gluster-4.10.0-10.fc17.noarch
 glusterfs-server-3.2.7-2.fc17.x86_64
 
 
 Thanks,
 Jithin
 
 
 
 On Mon, Jan 21, 2013 at 1:15 PM, Kanagaraj Mayilsamy 
 kmayi...@redhat.com  wrote:
 
 
 
 
 
 - Original Message -
  From: Jithin Raju  rajuj...@gmail.com 
  To: users@ovirt.org
  Sent: Monday, January 21, 2013 1:10:15 PM
  Subject: [Users] gluster volume creation error
  
  
  
   Hi,
  
  
  Volume creation is failing in posixfs data center.
  
  
   While trying to create a distribute volume, the web UI exits with an error:
   'creation of volume failed', and the volume is not listed in the web UI.
  
 Can you please provide the engine.log and vdsm.log (from all the hosts
 in the cluster)?
 
 
  
   From the backend I can see the volume got created.
  
  
  gluster volume info
  
  
  
  Volume Name: vol1
  Type: Distribute
  Status: Created
  Number of Bricks: 2
  Transport-type: tcp
  Bricks:
  Brick1: x.250.76.71:/data
  Brick2: x.250.76.70:/data
  
  
   When I try to mount the volume manually to /mnt, it does not print any
   message; the exit status is zero.
  
  
  mount command listed as below:
  
  
  fig:/vol1 on /mnt type fuse.glusterfs
  (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
  
  
  
   when I run df, it gives me the following:
  df: `/mnt': Transport endpoint is not connected
  
  
  
   So I just tailed
   /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:
  
  
  
  [2013-01-21 11:30:07.828518] W
  [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
  reading from socket failed. Error (Transport endpoint is not
  connected), peer ( 135.250.76.70:1009 )
  [2013-01-21 11:30:10.839882] W
  [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
  reading from socket failed. Error (Transport endpoint is not
  connected), peer ( 135.250.76.70:1007 )
  [2013-01-21 11:30:13.852374] W
  [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
  reading from socket failed. Error (Transport endpoint is not
  connected), peer ( 135.250.76.70:1005 )
  [2013-01-21 11:30:16.864634] W
  [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
  reading from socket failed. Error (Transport endpoint is not
  connected), peer ( 135.250.76.70:1003 )
  [2013-01-21 11:30:19.875986] W
  [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
  reading from socket failed. Error (Transport endpoint is not
  connected), peer ( 135.250.76.70:1001 )
  [2013-01-21 11:30:22.886854] W
  [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
  reading from socket failed. Error (Transport endpoint is not
  connected), peer ( 135.250.76.70:999 )
  [2013-01-21 11:30:25.898840] W
  [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
  reading from socket failed. Error (Transport endpoint is not
  connected), peer ( 135.250.76.70:997 )
  [2013-01-21 11:30:28.91] W
  [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
  reading from socket failed. Error (Transport endpoint is not
  connected), peer ( 135.250.76.70:995 )
  [2013-01-21 11:30:31.922336] W
  [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
  reading from socket failed. Error (Transport endpoint is not
  connected), peer ( 135.250.76.70:993 )
  [2013-01-21 11:30:34.934772] W
  [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
  reading from socket failed. Error (Transport endpoint is not
  connected), peer ( 135.250.76.70:991 )
  [2013-01-21 11:30:37.946215] W
  [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
  reading from socket failed. Error (Transport endpoint is not
  connected), peer ( 135.250.76.70:989 )
  
  
  
  
   Just wanted to know: what am I doing wrong here?
  
  
  package details:
  
  
  
  vdsm-python-4.10.0-10.fc17

Re: [Users] Issues using local storage for gluster shared volume

2013-03-28 Thread Kanagaraj Mayilsamy


- Original Message -
 From: Tony Feldmann trfeldm...@gmail.com
 To: users@ovirt.org
 Sent: Thursday, March 28, 2013 8:19:17 PM
 Subject: [Users] Issues using local storage for gluster shared volume
 
 
 I have been trying for a month or so to get a 2-node cluster up and
 running. I have the engine installed on the first node, then added
 each system as a host to a POSIX DC. Both boxes have 4 data disks.
 After adding the hosts I create a distributed-replicate volume using
 3 disks from each host with ext4 filesystems. I click the 'optimize
 for virt' option on the volume. There is a message in events that
 says it can't set a volume option, then it sets 2 volume options.
 Checking the options tab I see that it added the gid/uid options. I
 was unable to find in the logs which option was not set; I just see
 a usage message for 'volume set volname option'.

The gid and uid options are enough to make a gluster volume ready for use as a
virt store. The third option sets a group (called the 'virt' group) of options
on the volume, mainly related to performance tuning. To make this option work,
you have to copy the file
https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example to
/var/lib/glusterd/groups/ and name it 'virt'. Then you can click 'Optimize for
Virt Store' again to set the virt group. Setting this group option is
recommended, but not necessary, for the gluster volume to be used as a virt
store.
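
For example (a sketch; the URL below is the assumed raw counterpart of the
GitHub link above, so adjust it for your gluster version or copy the file
from a source checkout instead):

  # on each gluster host, as root:
  curl -o /var/lib/glusterd/groups/virt \
       https://raw.githubusercontent.com/gluster/glusterfs/master/extras/group-virt.example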

I am not sure about the below errors; other people on the list can help you out.

Thanks,
Kanagaraj

 The volume starts fine and I am able to create a data domain on the
 volume. Once the domain is created I try to create a VM, and it fails
 creating the disk. The error messages are along the lines of 'task
 file exists' and 'can't remove task files'. There are directories
 under tasks, and when trying to manually remove them I get a
 'directory not empty' error. Can someone please shed some light on
 what I am doing wrong to get this 2-node cluster with local disk as
 shared storage up and running?
 
 
 Thanks,
 
 
 Tony
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Host Agent-35 moved to Non-Operational state as host does not meet the cluster's minimum CPU level. Missing CPU features : model_Penryn

2013-08-16 Thread Kanagaraj Mayilsamy
The first question would be:
1) Do you want to use oVirt only for managing gluster storage,
(or)
2) Do you want to manage both your storage and virtualization?

For case 1)

You can create a new cluster (or modify the existing one) with 'Enable Gluster
Service' checked and 'Enable Virt Service' unchecked. Then add hosts to that
cluster; you won't see any errors complaining about missing CPU features, since
this cluster is not meant for virt. Gluster only needs the machines to be x86,
nothing else.

For case 2)

Here you would need to create a cluster with both 'Gluster' and 'Virt' services
enabled. The CPU type you select here should match the CPU of the hosts.
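
For case 1, the cluster can also be created through the REST API; a hedged
sketch (the name and datacenter are placeholders, and the element names are
from the 3.x API as I recall them, so verify against your engine's
/api?schema):

  curl -k -u admin@internal:password -H "Content-Type: application/xml" \
       -d '<cluster>
             <name>gluster_only</name>
             <data_center><name>Default</name></data_center>
             <gluster_service>true</gluster_service>
             <virt_service>false</virt_service>
           </cluster>' \
       https://engine.example.com/api/clusters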


Thanks,
Kanagaraj

- Original Message -
 From: higkoohk higko...@gmail.com
 To: users@ovirt.org
 Sent: Friday, August 16, 2013 6:02:21 PM
 Subject: [Users] Host Agent-35 moved to Non-Operational state as host does 
 not meet the cluster's minimum CPU level.
 Missing CPU features : model_Penryn
 
 Hello,
 
 I'm using oVirt 3.3 beta for gluster, but when I add hosts into the cluster,
 many machines change to non-operational.
 The error is: 'Host Agent-35 moved to Non-Operational state as host does not
 meet the cluster's minimum CPU level. Missing CPU features : model_Penryn'.
 
 This one is OK:
 Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc
 arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est
 tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm dts tpr_shadow vnmi flexpriority
 
 This one is FAIL:
 Intel(R) Xeon(R) CPU 5110 @ 1.60GHz
 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc
 arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx tm2
 ssse3 cx16 xtpr pdcm dca lahf_lm dts tpr_shadow
 
 Does this mean that the failed machine cannot run VMs?
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Creating dispersed volume

2015-02-05 Thread Kanagaraj Mayilsamy
This is currently not supported in oVirt. It will be available in the next
oVirt version.

Thanks,
Kanagaraj

- Original Message -
 From: RASTELLI Alessandro alessandro.raste...@skytv.it
 To: users@ovirt.org
 Sent: Tuesday, February 3, 2015 5:06:02 PM
 Subject: [ovirt-users] Creating dispersed volume
 
 Hi,
 is it possible to create dispersed-distributed volumes using oVirt?
 Our gluster nodes are v3.6.2
 
 Thanks
 Alessandro Rastelli
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to find host Host in gluster peer list from Host

2015-01-10 Thread Kanagaraj Mayilsamy
Looks like some issue while peer probing cpu02.

Can you provide the 'gluster peer status' output from host cpu04? Also, on the
same host, search for any errors in the vdsm.log file.
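
Something like:

  # gluster peer status
  # tail -n 500 /var/log/vdsm/vdsm.log | grep -iE 'error|warn'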

Thanks,
Kanagaraj

- Original Message -
 From: Punit Dambiwal hypu...@gmail.com
 To: Kanagaraj Mayilsamy kmayi...@redhat.com
 Cc: Martin Pavlík mpav...@redhat.com, gluster-us...@gluster.org, Kaushal 
 M kshlms...@gmail.com,
 users@ovirt.org
 Sent: Friday, January 9, 2015 3:44:58 PM
 Subject: Re: [ovirt-users] Failed to find host Host in gluster peer list from 
 Host
 
 Hi,
 
 Again, I am facing the same issue.
 
 Hi Kanagaraj,
 
 Please find the attached logs :-
 
 Engine Logs :- http://ur1.ca/jdopt
 VDSM Logs :- http://ur1.ca/jdoq9
 
 
 
 On Thu, Jan 8, 2015 at 10:00 AM, Punit Dambiwal hypu...@gmail.com wrote:
 
  Yes... the Gluster service is running on the host... disabling selinux and
  reinstalling the host worked for me.
 
  On Thu, Jan 8, 2015 at 1:20 AM, Kanagaraj Mayilsamy kmayi...@redhat.com
  wrote:
 
  Can you check if glusterd service is running on the host?
 
  Regards,
  Kanagaraj
 
  - Original Message -
   From: Martin Pavlík mpav...@redhat.com
   To: Punit Dambiwal hypu...@gmail.com
   Cc: gluster-us...@gluster.org, Kaushal M kshlms...@gmail.com,
  users@ovirt.org
   Sent: Wednesday, January 7, 2015 9:36:24 PM
   Subject: Re: [ovirt-users] Failed to find host Host in gluster peer
  list  from Host
  
   Hi Punit,
  
   could you describe steps which led to this result?
  
   regards
  
   Matin Pavlik - RHEV QE
  
  
  
   On 07 Jan 2015, at 14:27, Punit Dambiwal  hypu...@gmail.com  wrote:
  
   Hi,
  
    I am facing one strange issue in oVirt with glusterfs... I want to
    reactivate one of my host nodes, but it fails with the following
    error:
  
   Gluster command [gluster peer status cpu04.zne01.hkg1.ovt.com ] failed
  on
   server cpu04.
  
   Engine Logs :- http://ur1.ca/jczdp
  
  
  
 
 
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to create volume in OVirt with gluster

2015-01-12 Thread Kanagaraj Mayilsamy
I can see the failures in the glusterd log.

Can someone from the glusterfs dev team please help with this?

Thanks,
Kanagaraj

- Original Message -
 From: Punit Dambiwal hypu...@gmail.com
 To: Kanagaraj kmayi...@redhat.com
 Cc: Martin Pavlík mpav...@redhat.com, Vijay Bellur 
 vbel...@redhat.com, Kaushal M kshlms...@gmail.com,
 users@ovirt.org, gluster-us...@gluster.org
 Sent: Monday, January 12, 2015 3:36:43 PM
 Subject: Re: Failed to create volume in OVirt with gluster
 
 Hi Kanagaraj,
 
 Please find the logs from here :- http://ur1.ca/jeszc
 
 [image: Inline image 1]
 
 [image: Inline image 2]
 
 On Mon, Jan 12, 2015 at 1:02 PM, Kanagaraj kmayi...@redhat.com wrote:
 
  Looks like there are some failures in gluster.
   Can you send the log output from the glusterd log file on the relevant hosts?
 
  Thanks,
  Kanagaraj
 
 
  On 01/12/2015 10:24 AM, Punit Dambiwal wrote:
 
  Hi,
 
    Is there anyone from gluster who can help me here:
 
   Engine logs :-
 
   2015-01-12 12:50:33,841 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:34,725 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:36,824 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:36,853 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:36,866 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:37,751 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:39,849 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:39,878 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:39,890 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:40,776 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:42,878 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:42,903 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:42,916 INFO
   [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
  (DefaultQuartzScheduler_Worker-12) Failed to acquire lock and wait lock
  EngineLock [exclusiveLocks= key: 0001-0001-0001-0001-0300
  value: GLUSTER
  , sharedLocks= ]
  2015-01-12 12:50:43,771 INFO
   [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
  (ajp--127.0.0.1-8702-1) [330ace48] FINISH, CreateGlusterVolumeVDSCommand,
  log id: 303e70a4
  2015-01-12 12:50:43,780 ERROR
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
  (ajp--127.0.0.1-8702-1) [330ace48] Correlation ID: 330ace48, Job ID:
  896a69b3-a678-40a7-bceb-3635e4062aa0, Call Stack: null, Custom Event ID:
  -1, Message: Creation of Gluster Volume vol01 failed.
  2015-01-12 12:50:43,785 INFO
   

Re: [ovirt-users] Failed to find host Host in gluster peer list from Host

2015-01-07 Thread Kanagaraj Mayilsamy
Can you check if the glusterd service is running on the host?
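
For example:

  # service glusterd status     (on systemd hosts: systemctl status glusterd)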

Regards,
Kanagaraj

- Original Message -
 From: Martin Pavlík mpav...@redhat.com
 To: Punit Dambiwal hypu...@gmail.com
 Cc: gluster-us...@gluster.org, Kaushal M kshlms...@gmail.com, 
 users@ovirt.org
 Sent: Wednesday, January 7, 2015 9:36:24 PM
 Subject: Re: [ovirt-users] Failed to find host Host in gluster peer list  
 from Host
 
 Hi Punit,
 
 could you describe steps which led to this result?
 
 regards
 
 Matin Pavlik - RHEV QE
 
 
 
 On 07 Jan 2015, at 14:27, Punit Dambiwal  hypu...@gmail.com  wrote:
 
 Hi,
 
 I am facing one strange issue in oVirt with glusterfs... I want to
 reactivate one of my host nodes, but it fails with the following error:
 
 Gluster command [gluster peer status cpu04.zne01.hkg1.ovt.com ] failed on
 server cpu04.
 
 Engine Logs :- http://ur1.ca/jczdp
 
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users