Re: [Users] Testday aftermath

2013-02-22 Thread noc

On 22-2-2013 11:37, Josué Delgado wrote:

Hello,


I had the same XML error when I tried to configure GlusterFS on a 
node. I followed the instructions and updated glusterfs; now I have 
the following packages:


glusterfs-server-3.4.0alpha-1.el6.x86_64
glusterfs-fuse-3.4.0alpha-1.el6.x86_64
vdsm-gluster-4.10.3-8.fc18.noarch
glusterfs-3.4.0alpha-1.el6.x86_64

But now I have the following error when I try to start glusterd:


systemctl start glusterd
Failed to issue method call: Access denied


In the logs I have this:
Starting glusterd (via systemctl):  Failed to issue method call: 
Access denied
Feb 22 05:32:22 grc003 systemd[1]: Failed to get security context on 
/usr/lib/systemd/system/glusterd.service: No such file or directory

 [FAILED]


There's no systemd unit file for gluster, but there is a SysV script in 
/etc/init.d. Is that the cause of the problem? Is the package somehow 
broken?
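
(A minimal way to check this from the shell is sketched below; the unit 
path is taken from the error message above, and the SysV fallback is an 
assumption, not something verified against the alpha package:)

# does the package actually ship a systemd unit?
ls -l /usr/lib/systemd/system/glusterd.service
rpm -ql glusterfs-server | grep -E 'systemd|init\.d'

# if only the SysV script exists, let systemd pick it up and retry,
# or fall back to the script directly
systemctl daemon-reload
systemctl start glusterd.service || service glusterd start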


What can I do now to fix and start glusterfs in order to test again?


Check the owner:group on the gluster folders; they should be 36:36. You 
can achieve this by using the 'optimize' button in the web UI while 
looking at the Volume.
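
(From the shell, a minimal equivalent would look like the following; the 
brick path is the one used earlier in this thread, and 36:36 is the 
vdsm:kvm user and group that oVirt expects:)

# give the brick directory to vdsm:kvm (uid 36, gid 36)
chown -R 36:36 /home/gluster-data
ls -ldn /home/gluster-data   # verify the numeric owner:group is 36:36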


Joop

--
irc: jvandewege
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-04 Thread noc

On 3-2-2013 19:59, Joop wrote:
I can report back that 3.4.0qa8 does work. oVirt also picked up the 
volumes that I had created but that didn't show up in the interface. I 
could start them and will test whether they are fully usable.


I added my export domain and re-imported the VMs, and all is well despite 
the fact that the original creation of the gluster volumes failed because 
of gluster version 3.3.1.


So let's TEST ;-)

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-03 Thread Joop

Vijay Bellur wrote:

On 02/01/2013 07:38 PM, Kanagaraj wrote:

On 02/01/2013 06:47 PM, Joop wrote:

Shireesh Anjal wrote:

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are
using. vdsm could not parse the output from gluster.

 Can you update the glusterfs to
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and
check it out?

How??

I tried adding this repo, but yum says that there are no updates
available; at least it did yesterday.

[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I
tried yum localinstall, but it will revert when yum update is
run. It looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs rpms in the Fedora repository
are of the form 3.3.1-8, whereas the ones from the above QA release are
v3.4.0qa7. I think that because of the 'v' before 3.4, these are
considered a lower version, and by default yum picks up the rpms
from the Fedora repository.


The 'v' is 99.9% the culprit. I had 3.4.0qa6 before I wiped, and I just
had a look: that folder and repo don't have the 'v' in front of it.


That's correct.

[kanagaraj@localhost ~]$ rpmdev-vercmp glusterfs-3.3.1-8.fc18.x86_64
glusterfs-v3.4.0qa7-1.el6.x86_64
glusterfs-3.3.1-8.fc18.x86_64 > glusterfs-v3.4.0qa7-1.el6.x86_64


Is there someone on this list who has the 'powers' to change that?



[Adding Vijay]



3.4.0qa8 is available now. Can you please check with that?

Thanks,
Vijay
I can report back that 3.4.0qa8 does work. oVirt also picked up the 
volumes that I had created but that didn't show up in the interface. I could 
start them and will test whether they are fully usable.


Thanks for the quick response.

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-01 Thread Kanagaraj

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?
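
(To see the raw output that vdsm has to parse, one could also ask the 
gluster CLI for XML directly; a minimal check, assuming the CLI's --xml 
option available in gluster 3.3 and later:)

# dump volume information in the XML format vdsm consumes; if this is
# malformed or empty on 3.3.1, the parse error quoted below is expected
gluster volume info --xml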


Thanks,
Kanagaraj

On 02/01/2013 03:23 PM, Joop wrote:

Yesterday was testday but not much fun :-(

I had a reasonably working setup, but for testday I decided to start from
scratch, and that ended rather soon. Installing and configuring the engine
was not a problem, but I want a setup where I have two gluster hosts and two
hosts as VM hosts.
I added a second cluster using the web interface, set it to gluster storage,
and added two minimally installed Fedora 18 hosts on which I set up static
networking and verified that it worked.
Adding the two hosts went OK, but adding a Volume gives the following error
on the engine:

2013-02-01 09:32:39,084 INFO
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] Running command:
CreateGlusterVolumeCommand internal: false. Entities affected :  ID:
8720debc-a184-4b61-9fa8-0fdf4d339b9a Type: VdsGroups
2013-02-01 09:32:39,117 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] START,
CreateGlusterVolumeVDSCommand(HostName = st02, HostId =
e7b74172-2f95-43cb-83ff-11705ae24265), log id: 4270f4ef
2013-02-01 09:32:39,246 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Weird return value: StatusForXmlRpc
[mCode=4106, mMessage=XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
]
2013-02-01 09:32:39,248 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Weird return value: StatusForXmlRpc
[mCode=4106, mMessage=XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
]
2013-02-01 09:32:39,249 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Failed in CreateGlusterVolumeVDS method
2013-02-01 09:32:39,250 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
error = XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>

2013-02-01 09:32:39,254 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--127.0.0.1-8702-4)
[5ea886d] Command CreateGlusterVolumeVDS execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
CreateGlusterVolumeVDS, error = XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>

2013-02-01 09:32:39,255 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] FINISH, CreateGlusterVolumeVDSCommand,
log id: 4270f4ef
2013-02-01 09:32:39,256 ERROR
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] Command
org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc Bll
exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
error = XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>

2013-02-01 09:32:39,268 INFO
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] Lock freed to object EngineLock
[exclusiveLocks= key: 8720debc-a184-4b61-9fa8-0fdf4d339b9a value: GLUSTER
, sharedLocks= ]
2013-02-01 09:32:40,902 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-85) START, GlusterVolumesListVDSCommand(HostName =
st02, HostId = e7b74172-2f95-43cb-83ff-11705ae24265), log id: 61cafb32

And on ST01 the 

Re: [Users] Testday aftermath

2013-02-01 Thread noc

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?

How??

I tried adding this repo, but yum says that there are no updates 
available; at least it did yesterday.


[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I 
tried yum localinstall, but it will revert when yum update is 
run. It looks like yum thinks that 3.3.1 is newer than 3.4.
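
(One way to see what the extra repo actually offers, and what yum would 
prefer, is sketched below; the repo id is taken from the config above:)

# list what the gluster-nieuw repo provides, ignoring all other repos
yum --disablerepo="*" --enablerepo=gluster-nieuw list available "glusterfs*"

# and what is currently installed from Fedora
rpm -q glusterfs glusterfs-fuse glusterfs-server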


Joop


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-01 Thread Shireesh Anjal

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?

How??

I tried adding this repo, but yum says that there are no updates 
available; at least it did yesterday.


[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I 
tried yum localinstall, but it will revert when yum update is 
run. It looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs rpms in the Fedora repository are 
of the form 3.3.1-8, whereas the ones from the above QA release are v3.4.0qa7. 
I think that because of the 'v' before 3.4, these are considered a lower 
version, and by default yum picks up the rpms from the Fedora repository.


To work around this issue, you could try:

yum --disablerepo=* --enablerepo=gluster-nieuw install glusterfs 
glusterfs-fuse glusterfs-geo-replication glusterfs-server
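
(The ordering itself can be checked with rpmdev-vercmp from the 
rpmdevtools package; a minimal illustration of why the 'v' prefix loses 
against 3.3.1:)

# compare the two EVRs the way rpm/yum does; the numeric 3.3.1 sorts
# higher than the alphabetic 'v3.4.0qa7', so yum keeps the Fedora build
rpmdev-vercmp 3.3.1-8.fc18 v3.4.0qa7-1.el6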




Joop




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-01 Thread Joop

Shireesh Anjal wrote:

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?

How??

I tried adding this repo, but yum says that there are no updates 
available; at least it did yesterday.


[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I 
tried yum localinstall, but it will revert when yum update is 
run. It looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs rpms in the Fedora repository 
are of the form 3.3.1-8, whereas the ones from the above QA release are 
v3.4.0qa7. I think that because of the 'v' before 3.4, these are considered 
a lower version, and by default yum picks up the rpms from the Fedora 
repository.


To work around this issue, you could try:

yum --disablerepo=* --enablerepo=gluster-nieuw install glusterfs 
glusterfs-fuse glusterfs-geo-replication glusterfs-server


[root@st01 ~]# yum --disablerepo=* --enablerepo=gluster-nieuw 
install glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server


Loaded plugins: langpacks, presto, refresh-packagekit
Package matching glusterfs-v3.4.0qa7-1.el6.x86_64 already installed. 
Checking for update.
Package matching glusterfs-fuse-v3.4.0qa7-1.el6.x86_64 already 
installed. Checking for update.
Package matching glusterfs-server-v3.4.0qa7-1.el6.x86_64 already 
installed. Checking for update.

Resolving Dependencies
--> Running transaction check
---> Package glusterfs-geo-replication.x86_64 0:v3.4.0qa7-1.el6 will be 
installed
--> Processing Dependency: glusterfs = v3.4.0qa7-1.el6 for package: 
glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64

--> Finished Dependency Resolution
Error: Package: glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64 
(gluster-nieuw)

  Requires: glusterfs = v3.4.0qa7-1.el6
  Installed: glusterfs-3.3.1-8.fc18.x86_64 (@updates)
  glusterfs = 3.3.1-8.fc18
  Available: glusterfs-v3.4.0qa7-1.el6.x86_64 (gluster-nieuw)
  glusterfs = v3.4.0qa7-1.el6
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest


[root@st01 ~]# yum --disablerepo=* --enablerepo=gluster-nieuw 
install glusterfs glusterfs-fuse glusterfs-geo-replication 
glusterfs-server --skip-broken


Loaded plugins: langpacks, presto, refresh-packagekit
Package matching glusterfs-v3.4.0qa7-1.el6.x86_64 already installed. 
Checking for update.
Package matching glusterfs-fuse-v3.4.0qa7-1.el6.x86_64 already 
installed. Checking for update.
Package matching glusterfs-server-v3.4.0qa7-1.el6.x86_64 already 
installed. Checking for update.

Resolving Dependencies
--> Running transaction check
---> Package glusterfs-geo-replication.x86_64 0:v3.4.0qa7-1.el6 will be 
installed
--> Processing Dependency: glusterfs = v3.4.0qa7-1.el6 for package: 
glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64
gluster-nieuw/filelists 
| 7.2 kB  00:00:00


Packages skipped because of dependency problems:
   glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64 from gluster-nieuw

Last post, probably, until Sunday evening/Monday morning; off to FOSDEM ;-)

Joop


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-01 Thread Joop

Shireesh Anjal wrote:

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?

How??

I tried adding this repo, but yum says that there are no updates 
available; at least it did yesterday.


[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I 
tried yum localinstall, but it will revert when yum update is 
run. It looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs rpms in the Fedora repository 
are of the form 3.3.1-8, whereas the ones from the above QA release are 
v3.4.0qa7. I think that because of the 'v' before 3.4, these are considered 
a lower version, and by default yum picks up the rpms from the Fedora 
repository.


The 'v' is 99.9% the culprit. I had 3.4.0qa6 before I wiped, and I just had 
a look: that folder and repo don't have the 'v' in front of it.


Is there someone on this list who has the 'powers' to change that?

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-01 Thread Kanagaraj

On 02/01/2013 06:47 PM, Joop wrote:

Shireesh Anjal wrote:

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are 
using. vdsm could not parse the output from gluster.


 Can you update the glusterfs to 
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and 
check it out?

How??

I tried adding this repo, but yum says that there are no updates 
available; at least it did yesterday.


[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I 
tried yum localinstall, but it will revert when yum update is 
run. It looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs rpms in the Fedora repository 
are of the form 3.3.1-8, whereas the ones from the above QA release are 
v3.4.0qa7. I think that because of the 'v' before 3.4, these are 
considered a lower version, and by default yum picks up the rpms 
from the Fedora repository.


The 'v' is 99.9% the culprit. I had 3.4.0qa6 before I wiped, and I just 
had a look: that folder and repo don't have the 'v' in front of it.



That's correct.

[kanagaraj@localhost ~]$ rpmdev-vercmp glusterfs-3.3.1-8.fc18.x86_64 
glusterfs-v3.4.0qa7-1.el6.x86_64

glusterfs-3.3.1-8.fc18.x86_64 > glusterfs-v3.4.0qa7-1.el6.x86_64


Is there someone on this list who has the 'powers' to change that?



[Adding Vijay]


Joop



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Testday aftermath

2013-02-01 Thread Vijay Bellur

On 02/01/2013 07:38 PM, Kanagaraj wrote:

On 02/01/2013 06:47 PM, Joop wrote:

Shireesh Anjal wrote:

On 02/01/2013 05:13 PM, noc wrote:

On 1-2-2013 11:07, Kanagaraj wrote:

Hi Joop,

 Looks like the problem is because of the glusterfs version you are
using. vdsm could not parse the output from gluster.

 Can you update the glusterfs to
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and
check it out?

How??

I tried adding this repo, but yum says that there are no updates
available; at least it did yesterday.

[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum-foo isn't that good, so I don't know how to force it. Besides, I
tried yum localinstall, but it will revert when yum update is
run. It looks like yum thinks that 3.3.1 is newer than 3.4.


The problem is that the released glusterfs rpms in the Fedora repository
are of the form 3.3.1-8, whereas the ones from the above QA release are
v3.4.0qa7. I think that because of the 'v' before 3.4, these are
considered a lower version, and by default yum picks up the rpms
from the Fedora repository.


The 'v' is 99.9% the culprit. I had 3.4.0qa6 before I wiped, and I just
had a look: that folder and repo don't have the 'v' in front of it.


That's correct.

[kanagaraj@localhost ~]$ rpmdev-vercmp glusterfs-3.3.1-8.fc18.x86_64
glusterfs-v3.4.0qa7-1.el6.x86_64
glusterfs-3.3.1-8.fc18.x86_64 > glusterfs-v3.4.0qa7-1.el6.x86_64


Is there someone on this list who has the 'powers' to change that?



[Adding Vijay]



3.4.0qa8 is available now. Can you please check with that?

Thanks,
Vijay


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users