The error indicates: OSError: [Errno 30] Read-only file system

A replica 3 volume typically turns read-only on the client when quorum is lost, i.e. when not enough bricks are reachable - which is why I suspect connectivity between the nodes.
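If you want to confirm that the mount really went read-only on the host, a quick check like this works (the mount path below is my assumption based on vdsm's usual glusterSD layout - use whatever "mount | grep gv0" actually shows):

# mount | grep gv0
# touch /rhev/data-center/mnt/glusterSD/host01.ovirt.forest.go.th:_gv0/.rwtest

If the touch fails with "Read-only file system", the client has indeed lost write access to the volume.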

Can you check the output of "gluster volume status gv0" on host01.ovirt.forest.go.th? Also, please make sure that the firewall is not blocking the Gluster ports between the 3 nodes.
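For example, with firewalld on CentOS 7, something along these lines should do it (a rough sketch - glusterd uses 24007/tcp and each brick listens on a port from 49152 upward; "gluster volume status" prints the exact brick ports, so adjust the range if needed):

# gluster volume status gv0
# firewall-cmd --list-all
# firewall-cmd --permanent --add-port=24007-24008/tcp
# firewall-cmd --permanent --add-port=49152-49251/tcp
# firewall-cmd --reload

Run this on all 3 nodes, then verify the ports are reachable from the other nodes, e.g. "telnet host01.ovirt.forest.go.th 24007".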

On a different note, since you are using gv0 as a storage domain, set the virt group profile on this volume: "gluster volume set gv0 group virt".
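The group is just a bundle of volume options that glusterd reads from a file on disk, so you can inspect what it will set before applying it (the path below is the usual location in a default install):

# cat /var/lib/glusterd/groups/virt
# gluster volume set gv0 group virt
# gluster volume info gv0

After applying, the options appear under "Options Reconfigured" in the volume info output.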

On 02/23/2016 01:39 PM, Wee Sritippho wrote:
Hi,

I'm trying to deploy an oVirt Hosted Engine environment using this GlusterFS volume:

# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 37bba03b-7276-421a-8960-81e28196ebde
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host01.ovirt.forest.go.th:/data/brick1/gv0
Brick2: host03.ovirt.forest.go.th:/data/brick1/gv0
Brick3: host02.ovirt.forest.go.th:/data/brick1/gv0
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
performance.readdir-ahead: on

But the deployment failed with this error message:

[ ERROR ] Failed to execute stage 'Misc configuration': Error creating a storage domain: ('storageType=7, sdUUID=be5f66d8-57ef-43c8-90a5-e9132e0c95b4, domainName=hosted_storage, domClass=1, typeSpecificArg=host01.ovirt.forest.go.th:/gv0 domVersion=3',)

I tried to figure out what was happening via the log files:

Line ~7243 of vdsm.log
Line ~2930 of ovirt-hosted-engine-setup-20160223204857-585hqv.log

But I couldn't make sense of them at all.

Please guide me on how to solve this problem.

Here is my environment:

CentOS Linux release 7.2.1511 (Core)
ovirt-hosted-engine-setup-1.3.2.3-1.el7.centos.noarch
vdsm-4.17.18-1.el7.noarch
glusterfs-3.7.8-1.el7.x86_64

Thank you,
Wee


_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
