Re: [ovirt-users] ovirt - import detached gluster volumes

2015-04-30 Thread Sahina Bose
Could you try "gluster volume start VGFS1 force" to make sure the brick
processes are restarted.

From the status output, it looks like the brick processes are not online.
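
A minimal sketch of that recovery, using the volume name VGFS1 from the
info output below (a plain start is refused because the volume is already
marked started, while "force" respawns dead brick processes):

# gluster volume start VGFS1 force
# gluster volume status VGFS1      <- the Online column should now show Y
# gluster volume heal VGFS1 info   <- optional: check replica heal state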

On 04/22/2015 09:14 PM, p...@email.cz wrote:


[ovirt-users] ovirt - import detached gluster volumes

2015-04-27 Thread p...@email.cz

Hello all,
I've got some trouble reattaching gluster volumes that still hold data.

1) Based on a lot of tests, I decided to clear the oVirt database
( # engine-cleanup ; # yum remove ovirt-engine ; # yum -y install
ovirt-engine ; # engine-setup )

2) the cleanup finished successfully, so I started with an empty oVirt
environment
3) then I added networks and nodes and made the basic network
adjustments = all works fine
4) time to attach the volumes/domains with the original data (a lot of
VMs, ISO files, ...)


So the main question is: how do I attach these volumes when I haven't
defined any storage domain yet and therefore can't simply import them?
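
One sanity check before any import (a sketch; /mnt/test is an arbitrary
mount point I made up): once the bricks are online again, the volume should
mount by hand, and a former oVirt data domain should show a domain-UUID
directory with a dom_md/ subdirectory inside:

# mkdir -p /mnt/test
# mount -t glusterfs 1kvm1:/VGFS1 /mnt/test
# ls /mnt/test
# umount /mnt/test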


Current status of the nodes: no glusterfs NFS mounts are present, but the bricks themselves are OK

# gluster volume info

Volume Name: VGFS1
Type: Replicate
Volume ID: b9a1c347-6ffd-4122-8756-d513fe3f40b9
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1kvm2:/FastClass/p1/GFS1
Brick2: 1kvm1:/FastClass/p1/GFS1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36

Volume Name: VGFS2
Type: Replicate
Volume ID: b65bb689-ecc8-4c33-a4e7-11dea6028f83
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 1kvm2:/FastClass/p2/GFS1
Brick2: 1kvm1:/FastClass/p2/GFS1
Options Reconfigured:
storage.owner-uid: 36
storage.owner-gid: 36


[root@1kvm1 glusterfs]# gluster volume status
Status of volume: VGFS1
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 1kvm1:/FastClass/p1/GFS1                  N/A     N       N/A
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A

Task Status of Volume VGFS1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: VGFS2
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 1kvm1:/FastClass/p2/GFS1                  N/A     N       N/A
NFS Server on localhost                         N/A     N       N/A
Self-heal Daemon on localhost                   N/A     N       N/A

Task Status of Volume VGFS2
------------------------------------------------------------------------------
There are no active volume tasks
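
Worth noting: the status output lists only the local (1kvm1) brick of each
replica pair; if 1kvm2 were connected, its bricks would be listed too, so
peer connectivity is worth checking as well (standard gluster commands):

# gluster peer status
# gluster pool list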

[root@1kvm1 glusterfs]# gluster volume start VGFS1
volume start: VGFS1: failed: Volume VGFS1 already started
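
To find out why the brick processes refuse to start, the brick and glusterd
logs are the usual place to look (a sketch; gluster derives brick log names
from the brick path, so the exact filenames here are assumptions):

# less /var/log/glusterfs/bricks/FastClass-p1-GFS1.log
# less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
# systemctl status glusterd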



# mount | grep mapper    # base XFS mounts
/dev/mapper/3600605b0099f9e601cb1b5bf0e9765e8p1 on /FastClass/p1 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/3600605b0099f9e601cb1b5bf0e9765e8p2 on /FastClass/p2 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
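
Since the underlying XFS mounts are back, it is also worth confirming the
brick directories still carry their gluster metadata (a sketch; the
trusted.glusterfs.volume-id xattr must match the Volume ID from "gluster
volume info", otherwise glusterd refuses to start the brick):

# ls -ld /FastClass/p1/GFS1 /FastClass/p2/GFS1
# getfattr -d -m . -e hex /FastClass/p1/GFS1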



5) import screen
the /VGFS1 dir exists and iptables has been flushed


# cat rhev-data-center-mnt-glusterSD-1kvm1:_VGFS1.log
[2015-04-22 15:21:50.204521] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.2 (args: /usr/sbin/glusterfs --volfile-server=1kvm1 --volfile-id=/VGFS1 /rhev/data-center/mnt/glusterSD/1kvm1:_VGFS1)
[2015-04-22 15:21:50.220383] I [dht-shared.c:337:dht_init_regex] 0-VGFS1-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
[2015-04-22 15:21:50.55] I [client.c:2280:notify] 0-VGFS1-client-1: parent translators are ready, attempting connect on transport
[2015-04-22 15:21:50.224528] I [client.c:2280:notify] 0-VGFS1-client-2: parent translators are ready, attempting connect on transport

Final graph:
+--+
  1: volume VGFS1-client-1
  2: type protocol/client
  3: option ping-timeout 42
  4: option remote-host 1kvm2
  5: option remote-subvolume /FastClass/p1/GFS1
  6: option transport-type socket
  7: option username 52f1efd1-60dc-4fb1-b94f-572945d6eb66
  8: option password 34bac9cd-0b4f-41c6-973b-7af568784d7b
  9: option send-gids true
 10: end-volume
 11:
 12: volume VGFS1-client-2
 13: type protocol/client
 14: option ping-timeout 42
 15: option remote-host 1kvm1
 16: option remote-subvolume /FastClass/p1/GFS1
 17: option transport-type socket
 18: option username 52f1efd1-60dc-4fb1-b94f-572945d6eb66
 19: option password 34bac9cd-0b4f-41c6-973b-7af568784d7b
 20: option send-gids true
 21: end-volume
 22:
 23: volume VGFS1-replicate-0
 24: type cluster/replicate
 25: subvolumes VGFS1-client-1 VGFS1-client-2
 26: end-volume
 27:
 28: volume VGFS1-dht
 29: type cluster/distribute
 30: subvolumes VGFS1-replicate-0
 31: end-volume
 32:
 33: volume VGFS1-write-behind
 34: type performance/write-behind
 35: subvolumes VGFS1-dht
 36: end-volume
 37:
 38: volume VGFS1-read-ahead
 39: type performance/read-ahead
 40: subvolumes VGFS1-write-behind
 41: end-volume
 42:
 43: volume VGFS1-io-cache
 44: type performance/io-cache
 45: subvolumes VGFS1-read-ahead
 46: end-volume
 47:
 48: volume VGFS1-quick-read
 49: type performance/quick-read
 50: subvolumes VGFS1-io-cache
 51: end-volume
 52:
 53: volume VGFS1-open-behind
 54: