[ovirt-users] why replica 3

2015-09-03 Thread Richard Neuboeck
Hi oVirt group,

I'm currently trying to set up an oVirt 3.6 self-hosted engine on a
replica 2 gluster volume. As you can imagine it fails, since it expects a
replica 3 volume.

Since this part is hard-coded in the installation it could easily be
remedied, but on the other hand I'm not familiar with the oVirt setup
internals, so I'm not sure whether I would be disturbing something further
ahead that again stumbles over the lack of a replica 3 volume.

Is there a reason why it has to be exactly replica 3?
Shouldn't the redundancy of the storage back end be the responsibility
of the admin, and shouldn't the choice (at least replicated and
distributed-replicated) therefore be open?

I'm asking because the storage back end I'm using is kind of a 'big'
thing and its hardware is put together to facilitate the use of replica
2. Sure, I could create multiple bricks on those two machines, but other
than a negative performance impact I would gain nothing. Since the price
tag of one of those servers is quite high, the likelihood of getting
another one is exactly zero. So I'm sure it makes sense to you that I
would rather choose my own replication level than have one predetermined.

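For reference, the difference on the gluster side is just a third brick and
the replica count. A sketch only; the host and brick names here are made up:

# gluster volume create engine replica 2 srv1:/bricks/engine srv2:/bricks/engine
# gluster volume create engine replica 3 srv1:/bricks/engine srv2:/bricks/engine srv3:/bricks/engine

With only two copies there is no third vote to break a tie in a split-brain
situation, which is presumably why the installer insists on replica 3.
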
All the best
Richard



signature.asc
Description: OpenPGP digital signature
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Invitation: [3.6 deep dive] - AAA - local user management @ Mon 2015-09-07 17:00 - 17:45 (bazu...@redhat.com)

2015-09-03 Thread Barak Azulay
BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:REQUEST
BEGIN:VEVENT
DTSTART:20150907T140000Z
DTEND:20150907T144500Z
DTSTAMP:20150903T221527Z
ORGANIZER;CN=Barak Azulay:mailto:bazu...@redhat.com
UID:7ufq6cqoqeufmtu78j2v7cp...@google.com
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=
 TRUE;CN=Oved Ourfali;X-NUM-GUESTS=0:mailto:ov...@redhat.com
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=ACCEPTED;RSVP=TRUE
 ;CN=Martin Perina;X-NUM-GUESTS=0:mailto:mper...@redhat.com
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=ACCEPTED;RSVP=TRUE
 ;CN=bazu...@redhat.com;X-NUM-GUESTS=0:mailto:bazu...@redhat.com
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=
 TRUE;CN=ih...@redhat.com;X-NUM-GUESTS=0:mailto:ih...@redhat.com
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=
 TRUE;CN=users@ovirt.org;X-NUM-GUESTS=0:mailto:users@ovirt.org
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=
 TRUE;CN=de...@ovirt.org;X-NUM-GUESTS=0:mailto:de...@ovirt.org
CREATED:20150719T165813Z
DESCRIPTION:abstract:\n  oVirt 3.6 comes by default with a new AAA-JDBC ext
 ension which stores authentication  \n  and authorization data in relationa
 l database and provides these data using standardized \n  oVirt AAA API sim
 ilarly to already existing AAA-LDAP extension.\n  In this session we will d
 iscuss the design/usage/features &  customization  \n  of the AAA-JDBC exte
 nsion\n\nFeature page:\n  http://www.ovirt.org/Features/AAA_JDBC\n\nGoogle 
 hangout link:\n  https://plus.google.com/events/c45mkdo294kkjlcfiknk1bjc2bo
 \n\nYoutube link:\n  http://www.youtube.com/watch?v=CUsaqLQIkuQ\nView your 
 event at https://www.google.com/calendar/event?action=VIEW&eid=N3VmcTZjcW9x
 ZXVmbXR1NzhqMnY3Y3BxcG8gdXNlcnNAb3ZpcnQub3Jn&tok=MTgjYmF6dWxheUByZWRoYXQuY2
 9tODZlNjNiODY0YzJhOTIwOWVkZjU0NmIzYjk2ZGFmODk0N2EzMDgxYQ&ctz=Asia/Jerusalem
 &hl=en.
LAST-MODIFIED:20150903T221526Z
LOCATION:http://www.youtube.com/watch?v=CUsaqLQIkuQ
SEQUENCE:2
STATUS:CONFIRMED
SUMMARY:[3.6 deep dive]  - AAA - local user management
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR


invite.ics
Description: application/ics
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

2015-09-03 Thread Matthew Lagoe
Somehow my Outlook broke; sorry, everyone.

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Matthew Lagoe
Sent: Thursday, September 03, 2015 11:15 AM
To: 'knarra'; supo...@logicworks.pt; 'Ramesh Nachimuthu'
Cc: Users@ovirt.org
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

 

199.180.152.220 please

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
knarra
Sent: Thursday, September 03, 2015 09:38 AM
To: supo...@logicworks.pt; Ramesh Nachimuthu
Cc: Users@ovirt.org
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

 

On 09/03/2015 07:15 PM, supo...@logicworks.pt wrote:

Hi, I did a reinstall on the Host, and everything comes up again.

Then I put the Host in maintenance, reboot it, Confirm 'Host has been
Rebooted', Activate, and the error comes up again: Gluster command
[<UNKNOWN>] failed on server

 

??

once the reboot happens and the host comes back up, can you please check
whether glusterd is running and operational?

 

  _  

From: supo...@logicworks.pt
To: "Ramesh Nachimuthu"  

Cc: Users@ovirt.org
Sent: Thursday, September 3, 2015 14:13:55
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

 

I just updated it to Version 3.5.4.2-1.el7.centos

but the problem still remains.

 

Any idea?

 

 

  _  

De: "Ramesh Nachimuthu"   
Para: supo...@logicworks.pt
Cc: Users@ovirt.org
Enviadas: Quinta-feira, 3 De Setembro de 2015 13:11:52
Assunto: Re: [ovirt-users] Gluster command [] failed on server

 

 

On 09/03/2015 05:35 PM, supo...@logicworks.pt wrote:

On the gluster node (server)

It's not a replicated setup, only one gluster node

 

# gluster peer status
Number of Peers: 0


Strange. 



Thanks

 

José

 

  _  

De: "Ramesh Nachimuthu"   
Para: supo...@logicworks.pt, Users@ovirt.org
Enviadas: Quinta-feira, 3 De Setembro de 2015 12:55:31
Assunto: Re: [ovirt-users] Gluster command [] failed on server

 

Can you post the output of 'gluster peer status' on the gluster node?

Regards,
Ramesh

On 09/03/2015 05:10 PM, supo...@logicworks.pt wrote:

Hi,

 

I just installed Version 3.5.3.1-1.el7.centos, on CentOS 7.1, no HE.

 

for storage, I have only one server with glusterfs:

glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64

# service glusterd status
Redirecting to /bin/systemctl status  glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
(code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
   ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
   └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id
gv0.gfs...

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered
f
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered
fi
Hint: Some lines were ellipsized, use -l to show in full.

 

Everything was running until I needed to restart the node (host); after that
I was not able to make the host active. This is the error message:

Gluster command [<UNKNOWN>] failed on server

 

 

I also disabled the JSON protocol, but no success

 

vdsm.log:

Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call
getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with
{'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName':
'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily':
'SERVER', 'systemVersion': 'GS01', 'systemUUID':
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call
hostsList with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in 
**kwargs)
  File "", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
_

Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

2015-09-03 Thread Matthew Lagoe
199.180.152.220 please

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
knarra
Sent: Thursday, September 03, 2015 09:38 AM
To: supo...@logicworks.pt; Ramesh Nachimuthu
Cc: Users@ovirt.org
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

 

On 09/03/2015 07:15 PM, supo...@logicworks.pt wrote:

Hi, I did a reinstall on the Host, and everything comes up again.

Then I put the Host in maintenance, reboot it, Confirm 'Host has been
Rebooted', Activate, and the error comes up again: Gluster command
[<UNKNOWN>] failed on server

 

??

once the reboot happens and the host comes back up, can you please check
whether glusterd is running and operational?



 

  _  

From: supo...@logicworks.pt
To: "Ramesh Nachimuthu"  

Cc: Users@ovirt.org
Sent: Thursday, September 3, 2015 14:13:55
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

 

I just updated it to Version 3.5.4.2-1.el7.centos

but the problem still remains.

 

Any idea?

 

 

  _  

De: "Ramesh Nachimuthu"   
Para: supo...@logicworks.pt
Cc: Users@ovirt.org
Enviadas: Quinta-feira, 3 De Setembro de 2015 13:11:52
Assunto: Re: [ovirt-users] Gluster command [] failed on server

 

 

On 09/03/2015 05:35 PM, supo...@logicworks.pt wrote:

On the gluster node (server)

It's not a replicated setup, only one gluster node

 

# gluster peer status
Number of Peers: 0


Strange. 




Thanks

 

José

 

  _  

De: "Ramesh Nachimuthu"   
Para: supo...@logicworks.pt, Users@ovirt.org
Enviadas: Quinta-feira, 3 De Setembro de 2015 12:55:31
Assunto: Re: [ovirt-users] Gluster command [] failed on server

 

Can you post the output of 'gluster peer status' on the gluster node?

Regards,
Ramesh

On 09/03/2015 05:10 PM, supo...@logicworks.pt wrote:

Hi,

 

I just installed Version 3.5.3.1-1.el7.centos, on CentOS 7.1, no HE.

 

for storage, I have only one server with glusterfs:

glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64

# service glusterd status
Redirecting to /bin/systemctl status  glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
(code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
   ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
   └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id
gv0.gfs...

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered
f
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered
fi
Hint: Some lines were ellipsized, use -l to show in full.

 

Everything was running until I needed to restart the node (host); after that
I was not able to make the host active. This is the error message:

Gluster command [<UNKNOWN>] failed on server

 

 

I also disabled the JSON protocol, but no success

 

vdsm.log:

Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call
getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with
{'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName':
'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily':
'SERVER', 'systemVersion': 'GS01', 'systemUUID':
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call
hostsList with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in 
**kwargs)
  File "", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
_callmethod
raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1

 

supervdsm.log:

MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback

Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

2015-09-03 Thread suporte
No, it's not. 
I just started it and the host is up again. 
Many thanks 
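
(For the archive: the symptom suggests glusterd was simply not enabled at
boot. Assuming CentOS 7 with systemd, as in the logs below, something like
this should keep it from recurring:

# systemctl enable glusterd
# systemctl is-enabled glusterd
enabled

After the next reboot glusterd should then come up before the host is
activated.)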

- Original Message -

From: "knarra"  
To: supo...@logicworks.pt, "Ramesh Nachimuthu"  
Cc: Users@ovirt.org 
Sent: Thursday, September 3, 2015 17:37:30 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 

On 09/03/2015 07:15 PM, supo...@logicworks.pt wrote: 



Hi, I did a reinstall on the Host, and everything comes up again. 
Then I put the Host in maintenance, reboot it, Confirm 'Host has been 
Rebooted', Activate, and the error comes up again: Gluster command [<UNKNOWN>] 
failed on server 

?? 


once the reboot happens and the host comes back up, can you please check 
whether glusterd is running and operational? 




- Original Message -

From: supo...@logicworks.pt 
To: "Ramesh Nachimuthu"  
Cc: Users@ovirt.org 
Sent: Thursday, September 3, 2015 14:13:55 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 

I just updated it to Version 3.5.4.2-1.el7.centos 
but the problem still remains. 

Any idea? 


- Original Message -

From: "Ramesh Nachimuthu"  
To: supo...@logicworks.pt 
Cc: Users@ovirt.org 
Sent: Thursday, September 3, 2015 13:11:52 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 



On 09/03/2015 05:35 PM, supo...@logicworks.pt wrote: 



On the gluster node (server) 
It's not a replicated setup, only one gluster node 

# gluster peer status 
Number of Peers: 0 




Strange. 




Thanks 

José 

- Original Message -

From: "Ramesh Nachimuthu"  
To: supo...@logicworks.pt , Users@ovirt.org 
Sent: Thursday, September 3, 2015 12:55:31 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 

Can you post the output of 'gluster peer status' on the gluster node? 

Regards, 
Ramesh 

On 09/03/2015 05:10 PM, supo...@logicworks.pt wrote: 



Hi, 

I just installed Version 3.5.3.1-1.el7.centos, on CentOS 7.1, no HE. 

for storage, I have only one server with glusterfs: 
glusterfs-fuse-3.7.3-1.el7.x86_64 
glusterfs-server-3.7.3-1.el7.x86_64 
glusterfs-libs-3.7.3-1.el7.x86_64 
glusterfs-client-xlators-3.7.3-1.el7.x86_64 
glusterfs-api-3.7.3-1.el7.x86_64 
glusterfs-3.7.3-1.el7.x86_64 
glusterfs-cli-3.7.3-1.el7.x86_64 

# service glusterd status 
Redirecting to /bin/systemctl status glusterd.service 
glusterd.service - GlusterFS, a clustered file-system server 
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled) 
Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago 
Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
(code=exited, status=0/SUCCESS) 
Main PID: 1387 (glusterd) 
CGroup: /system.slice/glusterd.service 
├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid 
└─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs... 

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered 
f 
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered 
fi 
Hint: Some lines were ellipsized, use -l to show in full. 


Everything was running until I needed to restart the node (host); after that I 
was not able to make the host active. This is the error message: 
Gluster command [<UNKNOWN>] failed on server 


I also disabled the JSON protocol, but no success 

vdsm.log: 
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call 
getHardwareInfo with () {} 
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with 
{'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 
'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 
'SERVER', 'systemVersion': 'GS01', 'systemUUID': 
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}} 
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call 
hostsList with () {} flowID [4acc5233] 
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured 
Traceback (most recent call last): 
File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper 
res = f(*args, **kwargs) 
File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper 
rv = func(*args, **kwargs) 
File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList 
return {'hosts': self.svdsmProxy.glusterPeerStatus()} 
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ 
return callMethod() 
File "/usr/share/vdsm/supervdsm.py", line 48, in  
**kwargs) 
File "", line 2, in glusterPeerStatus 
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
_callmethod 
raise convert_to_error(kind, result) 
GlusterCmdExecFailedException: Command execution failed 
error: Connection failed. Please check if gluster daemon is operational. 
return code: 1 


supervdsm.log: 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call 
getHardw

Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

2015-09-03 Thread knarra

On 09/03/2015 07:15 PM, supo...@logicworks.pt wrote:

Hi, I did a reinstall on the Host, and everything comes up again.
Then I put the Host in maintenance, reboot it, Confirm 'Host has been 
Rebooted', Activate, and the error comes up again: Gluster command 
[<UNKNOWN>] failed on server


??
once the reboot happens and the host comes back up, can you please check if 
glusterd is running and operational?

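For example, a quick sketch of the checks (exact output will differ):

# systemctl is-active glusterd
active
# gluster peer status
# gluster volume status

If glusterd is down, vdsm's gluster calls fail with exactly this kind of
error.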


*From: *supo...@logicworks.pt
*To: *"Ramesh Nachimuthu" 
*Cc: *Users@ovirt.org
*Sent: *Thursday, September 3, 2015 14:13:55
*Subject: *Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

I just updated it to Version 3.5.4.2-1.el7.centos
but the problem still remains.

Any idea?



*De: *"Ramesh Nachimuthu" 
*Para: *supo...@logicworks.pt
*Cc: *Users@ovirt.org
*Enviadas: *Quinta-feira, 3 De Setembro de 2015 13:11:52
*Assunto: *Re: [ovirt-users] Gluster command [] failed on server



On 09/03/2015 05:35 PM, supo...@logicworks.pt wrote:

On the gluster node (server)
It's not a replicated setup, only one gluster node

# gluster peer status
Number of Peers: 0


Strange.

Thanks

José


*De: *"Ramesh Nachimuthu" 
*Para: *supo...@logicworks.pt, Users@ovirt.org
*Enviadas: *Quinta-feira, 3 De Setembro de 2015 12:55:31
*Assunto: *Re: [ovirt-users] Gluster command [] failed on
server

Can you post the output of 'gluster peer status' on the gluster node?

Regards,
Ramesh

On 09/03/2015 05:10 PM, supo...@logicworks.pt wrote:

Hi,

I just installed Version 3.5.3.1-1.el7.centos, on CentOS 7.1,
no HE.

for storage, I have only one server with glusterfs:
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64

# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service;
enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p
/var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
   ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
   └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt
--volfile-id gv0.gfs...

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS,
a clustered f
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS,
a clustered fi
Hint: Some lines were ellipsized, use -l to show in full.


Everything was running until I needed to restart the node
(host); after that I was not able to make the host active.
This is the error message:
Gluster command [<UNKNOWN>] failed on server


I also disabled the JSON protocol, but no success

vdsm.log:
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper)
client [192.168.6.200]::call
getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper)
return getHardwareInfo with {'status': {'message': 'Done',
'code': 0}, 'info': {'systemProductName': 'PRIMERGY RX2520
M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily':
'SERVER', 'systemVersion': 'GS01', 'systemUUID':
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer':
'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper)
client [192.168.6.200]::call hostsList
with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper)
vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in
wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in 
**kwargs)
  File "", line 2, in glusterPeerStatus
  

Re: [ovirt-users] Change IP Address of Ovirt Engine

2015-09-03 Thread Sandro Bonazzola
On Thu, Aug 20, 2015 at 7:42 AM, Phil Gersekowski  wrote:

> We have an operational ovirt cluster where all nodes on 1 IP Network, and
> the oVirt Engine is on another IP Network and are wanting to change IP
> Address of the host of the ovirt engine so that it is on the same network
> as the nodes that are managed.
>
> I have not been able to find a definitive answer, but since I am NOT
> changing the name of the ovirt engine host, from what I have read it seems
> that all I will need to do is alter the IP Address in the DNS of the
> hostname for the ovirt engine host (apart from plumbing an address on the
> new network into the oVirt engine host, of course).
>
> Is this correct, or is there some configuration file on either the engine
> host or the nodes that needs to be updated to reflect the new IP Address of
> the engine host ?
>


Hi, if you configured everything using the FQDN only and you're changing the
IP while preserving the FQDN, everything should continue working.
BTW, I suggest waiting for someone from the network team to confirm.

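As an illustrative sanity check (the FQDN below is hypothetical), you can
verify from the engine and from each node that the name already resolves to
the new address before switching:

# getent hosts engine.example.com
# ping -c1 engine.example.com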

>
>
>
>
> --
> Regards,
> Phil Gersekowski
> IT Director
> http://www.aspedia.net
> | ph...@aspedia.net
> --
> Phone: *1800 677 656*
> Mobile: *0447 546 890*
> Suite 1, 1 Clunies Ross Court, Eight Mile Plains QLD 4113 | Map
> 
> --
> This message and any files transmitted with it are confidential and should
> be read only by those persons to whom it is addressed. It may contain
> sensitive and private proprietary or legally privileged information. No
> confidentiality or privilege is waived or lost by any mistransmission. If
> you are not the intended recipient, please immediately delete this message
> and notify the sender Aspedia Australia Pty Ltd. You must not, directly or
> indirectly, use, disclose, distribute, print, or copy any part of this
> message if you are not the intended recipient. Unless otherwise expressly
> stated by an authorised representative of Aspedia Australia Pty Ltd, any
> views, opinions and other information expressed in this message and any
> attachments are solely those of the sender and do not constitute formal
> views or opinions of our company. *Please consider the environment before
> printing*.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

2015-09-03 Thread suporte
Hi, I did a reinstall on the Host, and everything comes up again. 
Then I put the Host in maintenance, reboot it, Confirm 'Host has been 
Rebooted', Activate, and the error comes up again: Gluster command [<UNKNOWN>] 
failed on server 

?? 

- Original Message -

From: supo...@logicworks.pt 
To: "Ramesh Nachimuthu"  
Cc: Users@ovirt.org 
Sent: Thursday, September 3, 2015 14:13:55 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 

I just updated it to Version 3.5.4.2-1.el7.centos 
but the problem still remains. 

Any idea? 


- Original Message -

From: "Ramesh Nachimuthu"  
To: supo...@logicworks.pt 
Cc: Users@ovirt.org 
Sent: Thursday, September 3, 2015 13:11:52 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 



On 09/03/2015 05:35 PM, supo...@logicworks.pt wrote: 



On the gluster node (server) 
It's not a replicated setup, only one gluster node 

# gluster peer status 
Number of Peers: 0 




Strange. 




Thanks 

José 

- Original Message -

From: "Ramesh Nachimuthu"  
To: supo...@logicworks.pt , Users@ovirt.org 
Sent: Thursday, September 3, 2015 12:55:31 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 

Can you post the output of 'gluster peer status' on the gluster node? 

Regards, 
Ramesh 

On 09/03/2015 05:10 PM, supo...@logicworks.pt wrote: 



Hi, 

I just installed Version 3.5.3.1-1.el7.centos, on CentOS 7.1, no HE. 

for storage, I have only one server with glusterfs: 
glusterfs-fuse-3.7.3-1.el7.x86_64 
glusterfs-server-3.7.3-1.el7.x86_64 
glusterfs-libs-3.7.3-1.el7.x86_64 
glusterfs-client-xlators-3.7.3-1.el7.x86_64 
glusterfs-api-3.7.3-1.el7.x86_64 
glusterfs-3.7.3-1.el7.x86_64 
glusterfs-cli-3.7.3-1.el7.x86_64 

# service glusterd status 
Redirecting to /bin/systemctl status glusterd.service 
glusterd.service - GlusterFS, a clustered file-system server 
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled) 
Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago 
Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
(code=exited, status=0/SUCCESS) 
Main PID: 1387 (glusterd) 
CGroup: /system.slice/glusterd.service 
├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid 
└─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs... 

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered 
f 
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered 
fi 
Hint: Some lines were ellipsized, use -l to show in full. 


Everything was running until I needed to restart the node (host); after that I 
was not able to make the host active. This is the error message: 
Gluster command [<UNKNOWN>] failed on server 


I also disabled the JSON protocol, but no success 

vdsm.log: 
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call 
getHardwareInfo with () {} 
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with 
{'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 
'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 
'SERVER', 'systemVersion': 'GS01', 'systemUUID': 
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}} 
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call 
hostsList with () {} flowID [4acc5233] 
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured 
Traceback (most recent call last): 
File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper 
res = f(*args, **kwargs) 
File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper 
rv = func(*args, **kwargs) 
File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList 
return {'hosts': self.svdsmProxy.glusterPeerStatus()} 
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ 
return callMethod() 
File "/usr/share/vdsm/supervdsm.py", line 48, in  
**kwargs) 
File "", line 2, in glusterPeerStatus 
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
_callmethod 
raise convert_to_error(kind, result) 
GlusterCmdExecFailedException: Command execution failed 
error: Connection failed. Please check if gluster daemon is operational. 
return code: 1 


supervdsm.log: 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call 
getHardwareInfo with () {} 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return 
getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520 M1', 
'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 
'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 
'systemManufacturer': 'FUJITSU'} 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.S

Re: [ovirt-users] ovirt+gluster+NFS : storage hicups

2015-09-03 Thread Nicolas Ecarnot

Le 06/08/2015 16:36, Tim Macy a écrit :

Nicolas,  I have the same setup dedicated physical system running engine
on CentOS 6.6 three hosts running CentOS 7.1 with Gluster and KVM, and
firewall is disabled on all hosts.  I also followed the same documents
to build my environment so I assume they are very similar.  I have on
occasion had the same errors and have also found that "ctdb rebalanceip
" is the only way to resolve the problem.  I intend to
remove ctdb since it is not needed with the configuration we are
running.  CTDB is only needed for hosted engine on a floating NFS mount,
so you should be able to change the gluster storage domain mount paths to
"localhost:".  The only thing that has prevented me from making
this change is that my environment is live with running VM's.  Please
let me know if you go this route.

>
> Thank you,
> Tim Macy

This week, I eventually took the time to change this, as this DC is not 
in production.


- Our big NFS storage domain was the master, it contained some VMs
- I wiped all my VMs
- I created a very small temporary NFS master domain, because I did not 
want to bother with any issue related to erasing the last master 
storage domain

- I removed the big NFS SD
- I wiped all that was inside, on a filesystem level
[I also disabled ctdb, and removed the "meta" gluster volume that ctdb used
for its locks]
- I added a new storage domain, using your advice:
  - gluster type
  - localhost:
- I removed the temp SD, and all switched correctly on the big glusterFS

I then spent some time playing with P2V, and storing new VMs on this 
new-style glusterFS storage domain.
I'm watching the CPU and I/O on the hosts, and yes, they are working, 
but the load stays sane.


On this particular change (NFS to glusterFS), everything was very smooth.
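
For readers of the archive, the resulting gluster-type storage domain path
has the form localhost:/<volname>; the volume name was left out above, so
the 'gv0' below is purely illustrative:

Storage Type: GlusterFS
Path: localhost:/gv0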

Regards,

--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

2015-09-03 Thread suporte
I just updated it to Version 3.5.4.2-1.el7.centos 
but the problem still remains. 

Any idea? 


- Original Message -

From: "Ramesh Nachimuthu"  
To: supo...@logicworks.pt 
Cc: Users@ovirt.org 
Sent: Thursday, September 3, 2015 13:11:52 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 



On 09/03/2015 05:35 PM, supo...@logicworks.pt wrote: 



On the gluster node (server) 
It's not a replicated setup, only one gluster node 

# gluster peer status 
Number of Peers: 0 




Strange. 




Thanks 

José 

- Original Message -

From: "Ramesh Nachimuthu"  
To: supo...@logicworks.pt , Users@ovirt.org 
Sent: Thursday, September 3, 2015 12:55:31 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 

Can you post the output of 'gluster peer status' on the gluster node? 

Regards, 
Ramesh 

On 09/03/2015 05:10 PM, supo...@logicworks.pt wrote: 



Hi, 

I just installed Version 3.5.3.1-1.el7.centos, on CentOS 7.1, no HE. 

for storage, I have only one server with glusterfs: 
glusterfs-fuse-3.7.3-1.el7.x86_64 
glusterfs-server-3.7.3-1.el7.x86_64 
glusterfs-libs-3.7.3-1.el7.x86_64 
glusterfs-client-xlators-3.7.3-1.el7.x86_64 
glusterfs-api-3.7.3-1.el7.x86_64 
glusterfs-3.7.3-1.el7.x86_64 
glusterfs-cli-3.7.3-1.el7.x86_64 

# service glusterd status 
Redirecting to /bin/systemctl status glusterd.service 
glusterd.service - GlusterFS, a clustered file-system server 
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled) 
Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago 
Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
(code=exited, status=0/SUCCESS) 
Main PID: 1387 (glusterd) 
CGroup: /system.slice/glusterd.service 
├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid 
└─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs... 

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered 
f 
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered 
fi 
Hint: Some lines were ellipsized, use -l to show in full. 


Everything was running until I needed to restart the node (host); after that I 
was not able to make the host active. This is the error message: 
Gluster command [<UNKNOWN>] failed on server 


I also disabled the JSON protocol, but no success 

vdsm.log: 
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call 
getHardwareInfo with () {} 
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with 
{'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 
'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 
'SERVER', 'systemVersion': 'GS01', 'systemUUID': 
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}} 
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call 
hostsList with () {} flowID [4acc5233] 
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured 
Traceback (most recent call last): 
File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper 
res = f(*args, **kwargs) 
File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper 
rv = func(*args, **kwargs) 
File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList 
return {'hosts': self.svdsmProxy.glusterPeerStatus()} 
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ 
return callMethod() 
File "/usr/share/vdsm/supervdsm.py", line 48, in  
**kwargs) 
File "", line 2, in glusterPeerStatus 
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
_callmethod 
raise convert_to_error(kind, result) 
GlusterCmdExecFailedException: Command execution failed 
error: Connection failed. Please check if gluster daemon is operational. 
return code: 1 


supervdsm.log: 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call 
getHardwareInfo with () {} 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return 
getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520 M1', 
'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 
'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 
'systemManufacturer': 'FUJITSU'} 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call 
wrapper with () {} 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd) /usr/sbin/gluster --mode=script peer 
status --xml (cwd None) 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd) FAILED: <err> = ''; <rc> = 1 
MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm.ServerCallback::(wrapper) Error in 
wrapper 
Traceback (most

Re: [ovirt-users] [ANN] oVirt 3.5.4 Final Release is now available

2015-09-03 Thread suporte
Sorry, it was just a matter of time. 
It is updated now: 
Version 3.5.4.2-1.el7.centos 
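
A side note for the archive: when a mirror has only just synchronized, yum's
cached metadata can still report nothing to update. Forcing fresh metadata
is plain yum behaviour, nothing oVirt-specific:

# yum clean expire-cache
# yum update "ovirt-engine-setup*"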


- Original Message -

From: supo...@logicworks.pt 
To: "Sandro Bonazzola"  
Cc: annou...@ovirt.org, "users" , de...@ovirt.org 
Sent: Thursday, September 3, 2015 14:04:01 
Subject: Re: [ovirt-users] [ANN] oVirt 3.5.4 Final Release is now available 

I have Version 3.5.3.1-1.el7. 
Doing a yum update "ovirt-engine-setup*" 


I got 
Loaded plugins: fastestmirror, versionlock 
Loading mirror speeds from cached hostfile 
* base: ftp.dei.uc.pt 
* extras: ftp.dei.uc.pt 
* ovirt-3.5: ftp.nluug.nl 
* ovirt-3.5-epel: ftp.up.pt 
* updates: ftp.dei.uc.pt 
No packages marked for update 
[root@engine ovirt-engine]# 
[root@engine ovirt-engine]# 


- Original Message -

From: "Sandro Bonazzola"  
To: annou...@ovirt.org, "users" , de...@ovirt.org 
Sent: Thursday, September 3, 2015 13:56:14 
Subject: Re: [ovirt-users] [ANN] oVirt 3.5.4 Final Release is now available 



On Thu, Sep 3, 2015 at 2:55 PM, Sandro Bonazzola <sbona...@redhat.com> wrote: 



The oVirt team is pleased to announce that the oVirt 3.5.4 Final Release is 
now available as of September 3rd 2015. 

oVirt is an open source alternative to VMware vSphere, and provides an 
excellent KVM management interface for multi-node virtualization. 
oVirt is available now for Fedora 20, 
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and 
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar). 

This release of oVirt includes numerous bug fixes. 
See the release notes [1] for a list of the new features and bugs fixed. 

Please refer to release notes [1] for Installation / Upgrade instructions. 

A new oVirt Live ISO will soon be available [2] 

Please note that mirrors [3] usually need about one day before being synchronized. 

Please refer to the release notes for known issues in this release. 

[1] http://www.ovirt.org/OVirt_3.5.3_Release_Notes 




Sorry, http://www.ovirt.org/OVirt_3.5.4_Release_Notes 




[2] http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-live/ 
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors 

-- 
Sandro Bonazzola 
Better technology. Faster innovation. Powered by community collaboration. 
See how it works at redhat.com 






-- 
Sandro Bonazzola 
Better technology. Faster innovation. Powered by community collaboration. 
See how it works at redhat.com 

___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 


___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 3.5.4 Final Release is now available

2015-09-03 Thread suporte
I have Version 3.5.3.1-1.el7. 
Doing a yum update "ovirt-engine-setup*" 


I got 
Loaded plugins: fastestmirror, versionlock 
Loading mirror speeds from cached hostfile 
* base: ftp.dei.uc.pt 
* extras: ftp.dei.uc.pt 
* ovirt-3.5: ftp.nluug.nl 
* ovirt-3.5-epel: ftp.up.pt 
* updates: ftp.dei.uc.pt 
No packages marked for update 
[root@engine ovirt-engine]# 
[root@engine ovirt-engine]# 


- Original Message -

From: "Sandro Bonazzola"  
To: annou...@ovirt.org, "users" , de...@ovirt.org 
Sent: Thursday, September 3, 2015 13:56:14 
Subject: Re: [ovirt-users] [ANN] oVirt 3.5.4 Final Release is now available 



On Thu, Sep 3, 2015 at 2:55 PM, Sandro Bonazzola <sbona...@redhat.com> wrote: 



The oVirt team is pleased to announce that the oVirt 3.5.4 Final Release is 
now available as of September 3rd 2015. 

oVirt is an open source alternative to VMware vSphere, and provides an 
excellent KVM management interface for multi-node virtualization. 
oVirt is available now for Fedora 20, 
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and 
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar). 

This release of oVirt includes numerous bug fixes. 
See the release notes [1] for a list of the new features and bugs fixed. 

Please refer to release notes [1] for Installation / Upgrade instructions. 

A new oVirt Live ISO will soon be available [2] 

Please note that mirrors [3] usually need about one day before being synchronized. 

Please refer to the release notes for known issues in this release. 

[1] http://www.ovirt.org/OVirt_3.5.3_Release_Notes 




Sorry, http://www.ovirt.org/OVirt_3.5.4_Release_Notes 




[2] http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-live/ 
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors 

-- 
Sandro Bonazzola 
Better technology. Faster innovation. Powered by community collaboration. 
See how it works at redhat.com 






-- 
Sandro Bonazzola 
Better technology. Faster innovation. Powered by community collaboration. 
See how it works at redhat.com 

___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 3.5.4 Final Release is now available

2015-09-03 Thread suporte
I have 

- Original Message -

From: "Sandro Bonazzola"  
To: annou...@ovirt.org, "users" , de...@ovirt.org 
Sent: Thursday, September 3, 2015 13:56:14 
Subject: Re: [ovirt-users] [ANN] oVirt 3.5.4 Final Release is now available 



On Thu, Sep 3, 2015 at 2:55 PM, Sandro Bonazzola <sbona...@redhat.com> wrote: 



The oVirt team is pleased to announce that the oVirt 3.5.4 Final Release is 
now available as of September 3rd 2015. 

oVirt is an open source alternative to VMware vSphere, and provides an 
excellent KVM management interface for multi-node virtualization. 
oVirt is available now for Fedora 20, 
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and 
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar). 

This release of oVirt includes numerous bug fixes. 
See the release notes [1] for a list of the new features and bugs fixed. 

Please refer to release notes [1] for Installation / Upgrade instructions. 

A new oVirt Live ISO will soon be available [2] 

Please note that mirrors [3] usually need about one day before being synchronized. 

Please refer to the release notes for known issues in this release. 

[1] http://www.ovirt.org/OVirt_3.5.3_Release_Notes 




Sorry, http://www.ovirt.org/OVirt_3.5.4_Release_Notes 




[2] http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-live/ 
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors 

-- 
Sandro Bonazzola 
Better technology. Faster innovation. Powered by community collaboration. 
See how it works at redhat.com 






-- 
Sandro Bonazzola 
Better technology. Faster innovation. Powered by community collaboration. 
See how it works at redhat.com 

___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 3.5.4 Final Release is now available

2015-09-03 Thread Sandro Bonazzola
On Thu, Sep 3, 2015 at 2:55 PM, Sandro Bonazzola 
wrote:

> The oVirt team is pleased to announce that the oVirt 3.5.4 Final Release
> is now available as of September 3rd 2015.
>
> oVirt is an open source alternative to VMware vSphere, and provides an
> excellent KVM management interface for multi-node virtualization.
> oVirt is available now for Fedora 20,
> Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
> Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
>
> This release of oVirt includes numerous bug fixes.
> See the release notes [1] for a list of the new features and bugs fixed.
>
> Please refer to release notes [1] for Installation / Upgrade instructions.
>
> A new oVirt Live ISO will soon be available [2]
>
> Please note that mirrors [3] usually need about one day before being
> synchronized.
>
> Please refer to the release notes for known issues in this release.
>
> [1] http://www.ovirt.org/OVirt_3.5.3_Release_Notes
>

Sorry, http://www.ovirt.org/OVirt_3.5.4_Release_Notes


>
> [2] http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-live/
> 
> [3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] oVirt 3.5.4 Final Release is now available

2015-09-03 Thread Sandro Bonazzola
The oVirt team is pleased to announce that the oVirt 3.5.4 Final Release is
now available as of September 3rd 2015.

oVirt is an open source alternative to VMware vSphere, and provides an
excellent KVM management interface for multi-node virtualization.
oVirt is available now for Fedora 20,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).

This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed.

Please refer to release notes [1] for Installation / Upgrade instructions.

A new oVirt Live ISO will soon be available [2]

Please note that mirrors [3] usually need about one day before being
synchronized.

Please refer to the release notes for known issues in this release.

[1] http://www.ovirt.org/OVirt_3.5.3_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.5/iso/ovirt-live/

[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt on OFTC issues? #ovirt :Cannot send to channel

2015-09-03 Thread Ramesh Nachimuthu
I suddenly faced this problem too. I solved it by registering my nick 
again using

"/msg NickServ REGISTER  "

Regards,
Ramesh

On 08/23/2015 01:43 AM, Greg Sheremeta wrote:

Happening to me too. Started out of nowhere -- I didn't change
anything with IRC.

On Fri, Aug 21, 2015 at 2:03 AM, Sahina Bose  wrote:

Hi all

When I send a message to #ovirt on OFTC , I get a response -  #ovirt :Cannot
send to channel

Anyone else facing this?

thanks
sahina
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

2015-09-03 Thread Ramesh Nachimuthu



On 09/03/2015 05:35 PM, supo...@logicworks.pt wrote:

On the gluster node (server)
It's not a replicated setup, only one gluster node

# gluster peer status
Number of Peers: 0



Strange.


Thanks

José


*De: *"Ramesh Nachimuthu" 
*Para: *supo...@logicworks.pt, Users@ovirt.org
*Enviadas: *Quinta-feira, 3 De Setembro de 2015 12:55:31
*Assunto: *Re: [ovirt-users] Gluster command [] failed on server

Can you post the output of 'gluster peer status' on the gluster node?

Regards,
Ramesh

On 09/03/2015 05:10 PM, supo...@logicworks.pt wrote:

Hi,

I just installed Version 3.5.3.1-1.el7.centos, on CentOS 7.1, no HE.

for storage, I have only one server with glusterfs:
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64

# service glusterd status
Redirecting to /bin/systemctl status  glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p
/var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
   ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
   └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt
--volfile-id gv0.gfs...

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a
clustered f
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a
clustered fi
Hint: Some lines were ellipsized, use -l to show in full.


Everything was running until I needed to restart the node (host);
after that I was not able to make the host active. This is the
error message:
Gluster command [<UNKNOWN>] failed on server


I also disabled the JSON protocol, but no success

vdsm.log:
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper)
client [192.168.6.200]::call
getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper)
return getHardwareInfo with {'status': {'message': 'Done', 'code':
0}, 'info': {'systemProductName': 'PRIMERGY RX2520 M1',
'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER',
'systemVersion': 'GS01', 'systemUUID':
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer':
'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper)
client [192.168.6.200]::call hostsList with
() {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper)
vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in 
**kwargs)
  File "", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line
773, in _callmethod
raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is
operational.
return code: 1


supervdsm.log:
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call getHardwareInfo with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
return getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520
M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER',
'systemVersion': 'GS01', 'systemUUID':
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer':
'FUJITSU'}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
call wrapper with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd)
/usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd)
FAILED: <err> = ''; <rc> = 1
MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm

Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server

2015-09-03 Thread suporte
On the gluster node (server) 
It's not a replicated setup, only one gluster node 

# gluster peer status 
Number of Peers: 0 

Thanks 

José 

- Original Message -

From: "Ramesh Nachimuthu"  
To: supo...@logicworks.pt, Users@ovirt.org 
Sent: Thursday, September 3, 2015 12:55:31 
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server 

Can you post the output of 'gluster peer status' on the gluster node? 

Regards, 
Ramesh 

On 09/03/2015 05:10 PM, supo...@logicworks.pt wrote: 



Hi, 

I just installed Version 3.5.3.1-1.el7.centos, on CentOS 7.1, no HE. 

for storage, I have only one server with glusterfs: 
glusterfs-fuse-3.7.3-1.el7.x86_64 
glusterfs-server-3.7.3-1.el7.x86_64 
glusterfs-libs-3.7.3-1.el7.x86_64 
glusterfs-client-xlators-3.7.3-1.el7.x86_64 
glusterfs-api-3.7.3-1.el7.x86_64 
glusterfs-3.7.3-1.el7.x86_64 
glusterfs-cli-3.7.3-1.el7.x86_64 

# service glusterd status 
Redirecting to /bin/systemctl status glusterd.service 
glusterd.service - GlusterFS, a clustered file-system server 
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled) 
Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago 
Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid 
(code=exited, status=0/SUCCESS) 
Main PID: 1387 (glusterd) 
CGroup: /system.slice/glusterd.service 
├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid 
└─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs... 

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered 
f 
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered 
fi 
Hint: Some lines were ellipsized, use -l to show in full. 


Everything was running until I needed to restart the node (host); after that I 
was not able to make the host active. This is the error message: 
Gluster command [<UNKNOWN>] failed on server 


I also disabled the JSON protocol, but no success 

vdsm.log: 
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call 
getHardwareInfo with () {} 
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with 
{'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 
'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 
'SERVER', 'systemVersion': 'GS01', 'systemUUID': 
'4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}} 
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call 
hostsList with () {} flowID [4acc5233] 
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured 
Traceback (most recent call last): 
File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper 
res = f(*args, **kwargs) 
File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper 
rv = func(*args, **kwargs) 
File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList 
return {'hosts': self.svdsmProxy.glusterPeerStatus()} 
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ 
return callMethod() 
File "/usr/share/vdsm/supervdsm.py", line 48, in  
**kwargs) 
File "", line 2, in glusterPeerStatus 
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
_callmethod 
raise convert_to_error(kind, result) 
GlusterCmdExecFailedException: Command execution failed 
error: Connection failed. Please check if gluster daemon is operational. 
return code: 1 


supervdsm.log: 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call 
getHardwareInfo with () {} 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return 
getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520 M1', 
'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 
'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 
'systemManufacturer': 'FUJITSU'} 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call 
wrapper with () {} 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd) /usr/sbin/gluster --mode=script peer 
status --xml (cwd None) 
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd) FAILED: <err> = ''; <rc> = 1 
MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm.ServerCallback::(wrapper) Error in 
wrapper 
Traceback (most recent call last): 
File "/usr/share/vdsm/supervdsmServer", line 104, in wrapper 
res = func(*args, **kwargs) 
File "/usr/share/vdsm/supervdsmServer", line 414, in wrapper 
return func(*args, **kwargs) 
File "/usr/share/vdsm/gluster/__init__.py", line 31, in wrapper 
return func(*args, **kwargs) 
File "/usr/share/vdsm/gluster/cli.py", line 909, in peerStatus 
xmltree = _execGlusterXml(command) 
Fi

Re: [ovirt-users] Gluster command [] failed on server

2015-09-03 Thread Ramesh Nachimuthu

Can you post the output of 'gluster peer status' on the gluster node?

Regards,
Ramesh
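
For context, on a healthy node that command lists each server in the trusted storage pool with state "Peer in Cluster (Connected)"; a standalone setup such as the one described below simply prints "Number of Peers: 0". A sketch of typical output from a two-node pool (hostname and UUID here are illustrative):

    # gluster peer status
    Number of Peers: 1

    Hostname: gfs4.example.org
    Uuid: 2c3e4f50-6a7b-4c8d-9e0f-112233445566
    State: Peer in Cluster (Connected)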

On 09/03/2015 05:10 PM, supo...@logicworks.pt wrote:

Hi,

I just installed Version 3.5.3.1-1.el7.centos on CentOS 7.1, no HE.

For storage, I have only one server running glusterfs:
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64

# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs...

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered f...
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered fi...

Hint: Some lines were ellipsized, use -l to show in full.


Everything was running until I needed to restart the node (host); after that I was not able to make the host active. This is the error message:

Gluster command [] failed on server


I also disabled the JSON protocol, but with no success.

vdsm.log:
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call hostsList with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured

Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in 
**kwargs)
  File "", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, 
in _callmethod

raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
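
The traceback above shows the call path: the engine's hostsList verb reaches vdsm over XML-RPC, and vdsm hands glusterPeerStatus to supervdsm, the root-privileged helper that actually invokes the gluster CLI. That is why the same failure appears again in supervdsm.log below.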


supervdsm.log:
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call wrapper with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd) /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd) FAILED: <err> = ''; <rc> = 1
MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm.ServerCallback::(wrapper) Error in wrapper

Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 104, in wrapper
res = func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 414, in wrapper
return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/__init__.py", line 31, in wrapper
return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 909, in peerStatus
xmltree = _execGlusterXml(command)
  File "/usr/share/vdsm/gluster/cli.py", line 90, in _execGlusterXml
raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1



Any idea?

Thanks

José


-- 
Jose Ferradeira
http://www.logicworks.pt
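
For anyone debugging the same symptom: the supervdsm log pinpoints the exact command that fails, so a reasonable first step is to re-run it by hand and then look at the management daemon itself. A minimal sketch, assuming a stock glusterd on its default port:

    # the exact probe vdsm runs
    /usr/sbin/gluster --mode=script peer status --xml

    # if it reports "Connection failed", inspect glusterd
    systemctl -l status glusterd.service
    journalctl -u glusterd --no-pager | tail -n 50

    # glusterd listens on TCP 24007 by default; confirm it is accepting connections
    ss -tlnp | grep 24007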

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Trying hosted-engine on ovirt-3.6 beta

2015-09-03 Thread Simone Tiraboschi
On Sun, Aug 30, 2015 at 11:54 AM, Elad Ben Aharon wrote:

>
> 2015-08-28 17:18:30 DEBUG
> otopi.plugins.ovirt_hosted_engine_setup.storage.heconf
> heconflib.create_heconfimage:230 stderr: dd: failed to open
> ‘/rhev/data-center/mnt/lich**FILTERED**:_nfs_ovirt-he_data/03eb6ca0-b532-4949-a0dd-085520bc54eb/images/bad49156-7aa8-448f-b8b7-174854821668/04bd6885-d1e4-4943-9450-638541234339’:
> Permission denied


Can you please share the configuration line of your NFS export?
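
For reference, a dd "Permission denied" at this stage usually means the NFS export does not map requests to the vdsm user and kvm group (UID/GID 36). A minimal /etc/exports sketch, with an illustrative path, that squashes all access to 36:36:

    /nfs/ovirt-he/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

After editing, 'exportfs -ra' reloads the export table; mounting the share by hand and touching a file as the vdsm user is a quick sanity check.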
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Videos on 3.6 features

2015-09-03 Thread Eli Mesika
Hi all

The following 3.6 videos on new features were omitted from the last Brian P. report:

oVirt 3.6 power management UI changes:
https://www.youtube.com/watch?v=AkfAMpEykdU&html5=1
oVirt 3.6 external status for host & storage domain:
https://www.youtube.com/watch?v=xUIbNeN-AxA&html5=1

Thanks
Eli Mesika

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users