[ovirt-users] Gluster storage and TRIM VDO

2022-03-29 Thread Oleh Horbachov
Hello everyone. I have a Gluster distributed-replicate cluster deployed; the 
cluster is the storage for oVirt, and each brick is VDO on top of a raw disk. When 
discarding via 'fstrim -av', the storage hangs for a few seconds and the connection 
is lost. Does anyone know the best practices for using TRIM with VDO in the 
context of oVirt?
ovirt - v4.4.10
gluster - v8.6
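
One way to soften the impact (a sketch only, not an oVirt-specific recommendation) is to discard one brick mount at a time instead of hitting everything at once with 'fstrim -av'; the brick paths below are placeholders for your actual mounts:

# Trim bricks one at a time so only one VDO volume handles discards at any moment.
# /gluster_bricks/* are placeholder mount points - adjust to your layout.
for mnt in /gluster_bricks/engine /gluster_bricks/data; do
    echo "Trimming $mnt"
    fstrim -v "$mnt"
    sleep 60   # let VDO and Gluster settle before the next brick
done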


[ovirt-users] Gluster Storage

2021-01-25 Thread dkiplagat
Hi, I'm new to oVirt and I would like to know whether I can deploy oVirt and use 
it to deploy and manage Gluster storage.


[ovirt-users] Gluster storage options

2020-01-23 Thread Shareef Jalloq
Hi there,

I want to build a 3-node Gluster hyperconverged setup but am
struggling to find documentation and examples of the storage setup.

There seems to be a dead link to an old blog post on the Gluster section of
the documentation:
https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/

Is the flow to install the oVirt Node image on a boot drive and then add
disks for Gluster? Or is Gluster set up first, with oVirt installed on top?

Thanks.


Re: [ovirt-users] Gluster storage network not being used for Gluster

2017-07-03 Thread Sahina Bose
Could you provide the output of "gluster peer status" and "gluster volume info"?


On Sun, Jul 2, 2017 at 9:33 AM, Mike DePaulo  wrote:

> Hi,
>
>
> I configured a "Gluster storage" network, but it doesn't look like it
> is being used for Gluster. Specifically, the switch's LEDs are not
> blinking, and the hosts' "Total Tx" and "Total Rx" counts are not
> changing (and they're tiny, under 1 MB.) The management network must
> still be being used.
>
> I have 3 hosts running oVirt Node 4.1.x. I set them up via the gluster
> hosted engine. The gluster storage network is 10.0.20.x. These are the
> contents of /var/lib/glusterd/peers:
> [root@centerpoint peers]# cat 8a83fddd-df7e-4e3b-9fc7-ca1c9bf9deaa
> uuid=8a83fddd-df7e-4e3b-9fc7-ca1c9bf9deaa
> state=3
> hostname1=death-star.ad.depaulo.org
> hostname2=death-star
> hostname3=192.168.1.52
> hostname4=10.0.20.52
> [root@centerpoint peers]# cat b6b96427-a0dd-47ff-b3e0-038eb0967fb9
> uuid=b6b96427-a0dd-47ff-b3e0-038eb0967fb9
> state=3
> hostname1=starkiller-base.ad.depaulo.org
> hostname2=starkiller-base
> hostname3=192.168.1.53
>
> Thanks in advance,
> -Mike


[ovirt-users] Gluster storage network not being used for Gluster

2017-07-01 Thread Mike DePaulo
Hi,


I configured a "Gluster storage" network, but it doesn't look like it
is being used for Gluster. Specifically, the switch's LEDs are not
blinking, and the hosts' "Total Tx" and "Total Rx" counts are not
changing (and they're tiny, under 1 MB). The management network must
still be in use.

I have 3 hosts running oVirt Node 4.1.x. I set them up via the gluster
hosted engine. The gluster storage network is 10.0.20.x. These are the
contents of /var/lib/glusterd/peers:
[root@centerpoint peers]# cat 8a83fddd-df7e-4e3b-9fc7-ca1c9bf9deaa
uuid=8a83fddd-df7e-4e3b-9fc7-ca1c9bf9deaa
state=3
hostname1=death-star.ad.depaulo.org
hostname2=death-star
hostname3=192.168.1.52
hostname4=10.0.20.52
[root@centerpoint peers]# cat b6b96427-a0dd-47ff-b3e0-038eb0967fb9
uuid=b6b96427-a0dd-47ff-b3e0-038eb0967fb9
state=3
hostname1=starkiller-base.ad.depaulo.org
hostname2=starkiller-base
hostname3=192.168.1.53

Thanks in advance,
-Mike
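
As a rough illustration of what the commands requested in the reply above will show, and of one way a storage-network hostname can be attached to an existing peer (a sketch; the *-storage names are hypothetical and assumed to resolve to the 10.0.20.x addresses):

# Which address was each peer probed with, and which addresses do the bricks use?
gluster peer status
gluster volume info | grep -E '^Brick[0-9]+:'

# Probing an already-known peer under an additional hostname adds that name to
# its address list (the hostname1..N entries seen in /var/lib/glusterd/peers).
gluster peer probe death-star-storage
gluster peer probe starkiller-base-storage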


Re: [ovirt-users] Gluster storage question

2017-02-11 Thread Bartosiak-Jentys, Chris
Thanks for the links, I will add them to my reading list. I would absolutely 
read the docs before deploying oVirt in production, and I definitely would not 
use this storage configuration there; this is purely to keep from 
wasting electricity.


Chris.

On 2017-02-11 19:18, Doug Ingham wrote:

On 11 February 2017 at 15:39, Bartosiak-Jentys, Chris 
 wrote:



Thank you for your reply Doug,

I didn't use localhost as I was preparing to follow instructions (blog 
post: 
http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/) 
 for setting up CTDB and had already created hostnames for the 
floating IP when I decided to ditch that and go with the hosts file 
hack. I already had the volumes mounted on those hostnames but you are 
absolutely right, simply using localhost would be the best option.


oVirt 3.5? 2014? That's old. Both oVirt & Gluster have moved on a 
lot since then. I would strongly recommend studying Gluster's 
documentation before implementing it in production. It's not 
complicated, but you have to have a good understanding of what you're 
doing & why if you want to protect the integrity of your data & avoid 
waking up one day to find everything in meltdown.


https://gluster.readthedocs.io/en/latest/

Red Hat's portal is also very good & full of detailed tips for tuning 
your setup, however their "stable" versions (which they have to 
support) are of course much older than the project's own latest stable, 
so keep this in mind when considering their advice.


https://access.redhat.com/documentation/en/red-hat-storage/

Likewise with their oVirt documentation, although their supported oVirt 
versions are much closer to the current stable release. It also 
features a lot of very good advice for configuring & tuning an oVirt 
(RHEV) & GlusterFS (RHGS) hyperconverged setup.


https://access.redhat.com/documentation/en/red-hat-virtualization/

For any other Gluster specific questions, you can usually get good & 
timely responses on their mailing list & IRC channel.


Thank you for your suggested outline of how to power up/down the 
cluster, I hadn't considered the fact that turning on two out of date 
nodes would clobber data on the new node. This is something I will need 
to be very careful to avoid. The setup is mostly for lab work so not 
really mission critical but I do run a few VM's (freeIPA, GitLab and 
pfSense) that I'd like to keep up 24/7. I make regular backups (outside 
of ovirt) of those just in case.


Thanks, I will do some reading on how gluster handles quorum and heal 
operations but your procedure sounds like a sensible way to operate 
this cluster.


Regards,

Chris.

On 2017-02-11 18:08, Doug Ingham wrote:

On 11 February 2017 at 13:32, Bartosiak-Jentys, Chris 
 wrote:

Hello list,

Just wanted to get your opinion on my ovirt home lab setup. While this 
is not a production setup I would like it to run relatively reliably so 
please tell me if the following storage configuration is likely to 
result in corruption or just bat s**t insane.


I have a 3 node hosted engine setup, VM data store and engine data 
store are both replica 3 gluster volumes (one brick on each host).
I do not want to run all 3 hosts 24/7 due to electricity costs, I only 
power up the larger hosts (2 Dell R710's) when I need additional 
resources for VM's.


I read about using CTDB and floating/virtual IP's to allow the storage 
mount point to transition between available hosts but after some 
thought decided to go about this another, simpler, way:


I created a common hostname for the storage mount points: gfs-data and 
gfs-engine


On each host I edited /etc/hosts file to have these hostnames resolve 
to each hosts IP i.e. on host1 gfs-data & gfs-engine --> host1 IP

on host2 gfs-data & gfs-engine --> host2 IP
etc.

In ovirt engine each storage domain is mounted as gfs-data:/data and 
gfs-engine:/engine
My thinking is that this way no matter which host is up and acting as 
SPM it will be able to mount the storage as its only dependent on that 
host being up.


I changed gluster options for server-quorum-ratio so that the volumes 
remain up even if quorum is not met, I know this is risky but its just 
a lab setup after all.


So, any thoughts on the /etc/hosts method to ensure the storage mount 
point is always available? Is data corruption more or less inevitable 
with this setup? Am I insane ;) ?


Why not just use localhost? And no need for CTDB with a floating IP, 
oVirt uses libgfapi for Gluster which deals with that all natively.


As for the quorum issue, I would most definitely *not* run with quorum 
disabled when you're running more than one node. As you say you 
specifically plan for when the other 2 nodes of the replica 3 set will 
be active or not, I'd do something along the lines of the following...


Going from 3 nodes to 1 node:
- Put nodes 2 & 3 in maintenance to offload their virtual load;
- Once 

Re: [ovirt-users] Gluster storage question

2017-02-11 Thread Doug Ingham
On 11 February 2017 at 15:39, Bartosiak-Jentys, Chris <
chris.bartosiak-jen...@certico.co.uk> wrote:

> Thank you for your reply Doug,
>
> I didn't use localhost as I was preparing to follow instructions (blog
> post: http://community.redhat.com/blog/2014/11/up-and-
> running-with-ovirt-3-5-part-two/)  for setting up CTDB and had already
> created hostnames for the floating IP when I decided to ditch that and go
> with the hosts file hack. I already had the volumes mounted on those
> hostnames but you are absolutely right, simply using localhost would be the
> best option.
>
oVirt 3.5? 2014? That's old. Both oVirt & Gluster have moved on a lot
since then. I would strongly recommend studying Gluster's documentation
before implementing it in production. It's not complicated, but you have to
have a good understanding of what you're doing & why if you want to protect
the integrity of your data & avoid waking up one day to find everything in
meltdown.

https://gluster.readthedocs.io/en/latest/

Red Hat's portal is also very good & full of detailed tips for tuning your
setup, however their "stable" versions (which they have to support) are of
course much older than the project's own latest stable, so keep this in
mind when considering their advice.

https://access.redhat.com/documentation/en/red-hat-storage/

Likewise with their oVirt documentation, although their supported oVirt
versions are much closer to the current stable release. It also features a
lot of very good advice for configuring & tuning an oVirt (RHEV) &
GlusterFS (RHGS) hyperconverged setup.

https://access.redhat.com/documentation/en/red-hat-virtualization/

For any other Gluster specific questions, you can usually get good & timely
responses on their mailing list & IRC channel.

Thank you for your suggested outline of how to power up/down the cluster, I
> hadn't considered the fact that turning on two out of date nodes would
> clobber data on the new node. This is something I will need to be very
> careful to avoid. The setup is mostly for lab work so not really mission
> critical but I do run a few VM's (freeIPA, GitLab and pfSense) that I'd
> like to keep up 24/7. I make regular backups (outside of ovirt) of those
> just in case.
>
> Thanks, I will do some reading on how gluster handles quorum and heal
> operations but your procedure sounds like a sensible way to operate this
> cluster.
>
> Regards,
>
> Chris.
>
>
> On 2017-02-11 18:08, Doug Ingham wrote:
>
>
>
> On 11 February 2017 at 13:32, Bartosiak-Jentys, Chris <
> chris.bartosiak-jen...@certico.co.uk> wrote:
>
>> Hello list,
>>
>> Just wanted to get your opinion on my ovirt home lab setup. While this is
>> not a production setup I would like it to run relatively reliably so please
>> tell me if the following storage configuration is likely to result in
>> corruption or just bat s**t insane.
>>
>> I have a 3 node hosted engine setup, VM data store and engine data store
>> are both replica 3 gluster volumes (one brick on each host).
>> I do not want to run all 3 hosts 24/7 due to electricity costs, I only
>> power up the larger hosts (2 Dell R710's) when I need additional resources
>> for VM's.
>>
>> I read about using CTDB and floating/virtual IP's to allow the storage
>> mount point to transition between available hosts but after some thought
>> decided to go about this another, simpler, way:
>>
>> I created a common hostname for the storage mount points: gfs-data and
>> gfs-engine
>>
>> On each host I edited /etc/hosts file to have these hostnames resolve to
>> each hosts IP i.e. on host1 gfs-data & gfs-engine --> host1 IP
>> on host2 gfs-data & gfs-engine --> host2 IP
>> etc.
>>
>> In ovirt engine each storage domain is mounted as gfs-data:/data and
>> gfs-engine:/engine
>> My thinking is that this way no matter which host is up and acting as SPM
>> it will be able to mount the storage as its only dependent on that host
>> being up.
>>
>> I changed gluster options for server-quorum-ratio so that the volumes
>> remain up even if quorum is not met, I know this is risky but its just a
>> lab setup after all.
>>
>> So, any thoughts on the /etc/hosts method to ensure the storage mount
>> point is always available? Is data corruption more or less inevitable with
>> this setup? Am I insane ;) ?
>
>
> Why not just use localhost? And no need for CTDB with a floating IP, oVirt
> uses libgfapi for Gluster which deals with that all natively.
>
> As for the quorum issue, I would most definitely *not* run with quorum
> disabled when you're running more than one node. As you say you
> specifically plan for when the other 2 nodes of the replica 3 set will be
> active or not, I'd do something along the lines of the following...
>
> Going from 3 nodes to 1 node:
>  - Put nodes 2 & 3 in maintenance to offload their virtual load;
>  - Once the 2 nodes are free of load, disable quorum on the Gluster
> volumes;
>  - Power down the 2 nodes.
>
> Going from 1 node to 3 nodes:
>  - Power on *only* 1 of the 

Re: [ovirt-users] Gluster storage question

2017-02-11 Thread Bartosiak-Jentys, Chris
Thank you for your reply Doug, 

I didn't use localhost as I was preparing to follow instructions (blog
post:
http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/)
 for setting up CTDB and had already created hostnames for the floating
IP when I decided to ditch that and go with the hosts file hack. I
already had the volumes mounted on those hostnames but you are
absolutely right, simply using localhost would be the best option. 

Thank you for your suggested outline of how to power up/down the
cluster, I hadn't considered the fact that turning on two out of date
nodes would clobber data on the new node. This is something I will need
to be very careful to avoid. The setup is mostly for lab work so not
really mission critical but I do run a few VM's (freeIPA, GitLab and
pfSense) that I'd like to keep up 24/7. I make regular backups (outside
of ovirt) of those just in case. 

Thanks, I will do some reading on how gluster handles quorum and heal
operations but your procedure sounds like a sensible way to operate this
cluster. 

Regards, 

Chris. 

On 2017-02-11 18:08, Doug Ingham wrote:

> On 11 February 2017 at 13:32, Bartosiak-Jentys, Chris 
>  wrote:
> 
>> Hello list,
>> 
>> Just wanted to get your opinion on my ovirt home lab setup. While this is 
>> not a production setup I would like it to run relatively reliably so please 
>> tell me if the following storage configuration is likely to result in 
>> corruption or just bat s**t insane.
>> 
>> I have a 3 node hosted engine setup, VM data store and engine data store are 
>> both replica 3 gluster volumes (one brick on each host).
>> I do not want to run all 3 hosts 24/7 due to electricity costs, I only power 
>> up the larger hosts (2 Dell R710's) when I need additional resources for 
>> VM's.
>> 
>> I read about using CTDB and floating/virtual IP's to allow the storage mount 
>> point to transition between available hosts but after some thought decided 
>> to go about this another, simpler, way:
>> 
>> I created a common hostname for the storage mount points: gfs-data and 
>> gfs-engine
>> 
>> On each host I edited /etc/hosts file to have these hostnames resolve to 
>> each hosts IP i.e. on host1 gfs-data & gfs-engine --> host1 IP
>> on host2 gfs-data & gfs-engine --> host2 IP
>> etc.
>> 
>> In ovirt engine each storage domain is mounted as gfs-data:/data and 
>> gfs-engine:/engine
>> My thinking is that this way no matter which host is up and acting as SPM it 
>> will be able to mount the storage as its only dependent on that host being 
>> up.
>> 
>> I changed gluster options for server-quorum-ratio so that the volumes remain 
>> up even if quorum is not met, I know this is risky but its just a lab setup 
>> after all.
>> 
>> So, any thoughts on the /etc/hosts method to ensure the storage mount point 
>> is always available? Is data corruption more or less inevitable with this 
>> setup? Am I insane ;) ?
> 
> Why not just use localhost? And no need for CTDB with a floating IP, oVirt 
> uses libgfapi for Gluster which deals with that all natively. 
> 
> As for the quorum issue, I would most definitely *not* run with quorum 
> disabled when you're running more than one node. As you say you specifically 
> plan for when the other 2 nodes of the replica 3 set will be active or not, 
> I'd do something along the lines of the following...
> 
> Going from 3 nodes to 1 node: 
> - Put nodes 2 & 3 in maintenance to offload their virtual load; 
> - Once the 2 nodes are free of load, disable quorum on the Gluster volumes; 
> - Power down the 2 nodes.
> 
> Going from 1 node to 3 nodes: 
> - Power on *only* 1 of the pair of nodes (if you power on both & self-heal is 
> enabled, Gluster will "heal" the files on the main node with the older files 
> on the 2 nodes which were powered down); 
> - Allow Gluster some time to detect that the files are in split-brain; 
> - Tell Gluster to heal the files in split-brain based on modification time; 
> - Once the 2 nodes are in sync, re-enable quorum & power on the last node, 
> which will be resynchronised automatically; 
> - Take the 2 hosts out of maintenance mode. 
> 
> If you want to power on the 2nd two nodes at the same time, make absolutely 
> sure self-heal is disabled first! If you don't, Gluster will see the 2nd two 
> nodes as in quorum & heal the data on your 1st node with the out-of-date 
> data. 
> 
> -- 
> Doug


Re: [ovirt-users] Gluster storage question

2017-02-11 Thread Doug Ingham
On 11 February 2017 at 13:32, Bartosiak-Jentys, Chris <
chris.bartosiak-jen...@certico.co.uk> wrote:

> Hello list,
>
> Just wanted to get your opinion on my ovirt home lab setup. While this is
> not a production setup I would like it to run relatively reliably so please
> tell me if the following storage configuration is likely to result in
> corruption or just bat s**t insane.
>
> I have a 3 node hosted engine setup, VM data store and engine data store
> are both replica 3 gluster volumes (one brick on each host).
> I do not want to run all 3 hosts 24/7 due to electricity costs, I only
> power up the larger hosts (2 Dell R710's) when I need additional resources
> for VM's.
>
> I read about using CTDB and floating/virtual IP's to allow the storage
> mount point to transition between available hosts but after some thought
> decided to go about this another, simpler, way:
>
> I created a common hostname for the storage mount points: gfs-data and
> gfs-engine
>
> On each host I edited /etc/hosts file to have these hostnames resolve to
> each hosts IP i.e. on host1 gfs-data & gfs-engine --> host1 IP
> on host2 gfs-data & gfs-engine --> host2 IP
> etc.
>
> In ovirt engine each storage domain is mounted as gfs-data:/data and
> gfs-engine:/engine
> My thinking is that this way no matter which host is up and acting as SPM
> it will be able to mount the storage as its only dependent on that host
> being up.
>
> I changed gluster options for server-quorum-ratio so that the volumes
> remain up even if quorum is not met, I know this is risky but its just a
> lab setup after all.
>
> So, any thoughts on the /etc/hosts method to ensure the storage mount
> point is always available? Is data corruption more or less inevitable with
> this setup? Am I insane ;) ?
>

Why not just use localhost? And no need for CTDB with a floating IP, oVirt
uses libgfapi for Gluster which deals with that all natively.

As for the quorum issue, I would most definitely *not* run with quorum
disabled when you're running more than one node. As you say you
specifically plan for when the other 2 nodes of the replica 3 set will be
active or not, I'd do something along the lines of the following...

Going from 3 nodes to 1 node:
 - Put nodes 2 & 3 in maintenance to offload their virtual load;
 - Once the 2 nodes are free of load, disable quorum on the Gluster volumes;
 - Power down the 2 nodes.

Going from 1 node to 3 nodes:
 - Power on *only* 1 of the pair of nodes (if you power on both & self-heal
is enabled, Gluster will "heal" the files on the main node with the older
files on the 2 nodes which were powered down);
 - Allow Gluster some time to detect that the files are in split-brain;
 - Tell Gluster to heal the files in split-brain based on modification time;
 - Once the 2 nodes are in sync, re-enable quorum & power on the last node,
which will be resynchronised automatically;
 - Take the 2 hosts out of maintenance mode.

If you want to power on the 2nd two nodes at the same time, make absolutely
sure self-heal is disabled first! If you don't, Gluster will see the 2nd
two nodes as in quorum & heal the data on your 1st node with the
out-of-date data.
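
A rough sketch of the above in Gluster CLI terms (the volume name and file path are placeholders; check the option names against your Gluster version):

VOL=data   # placeholder volume name; repeat for the engine volume

# 3 nodes -> 1 node: relax quorum before powering the two nodes down.
gluster volume set $VOL cluster.server-quorum-type none
gluster volume set $VOL cluster.quorum-type none

# 1 node -> 3 nodes: keep the self-heal daemon from healing in the wrong direction.
gluster volume set $VOL cluster.self-heal-daemon off

# After powering on one stale node, inspect and resolve split-brain, preferring
# the most recently modified copy (per file; path as seen from the volume root).
gluster volume heal $VOL info split-brain
gluster volume heal $VOL split-brain latest-mtime /path/to/file

# Once the nodes are back in sync, restore the defaults.
gluster volume set $VOL cluster.self-heal-daemon on
gluster volume set $VOL cluster.quorum-type auto
gluster volume set $VOL cluster.server-quorum-type server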


-- 
Doug


[ovirt-users] Gluster storage question

2017-02-11 Thread Bartosiak-Jentys, Chris

Hello list,

Just wanted to get your opinion on my oVirt home lab setup. While this 
is not a production setup, I would like it to run relatively reliably, so 
please tell me whether the following storage configuration is likely to 
result in corruption or is just bat s**t insane.


I have a 3 node hosted engine setup, VM data store and engine data store 
are both replica 3 gluster volumes (one brick on each host).
I do not want to run all 3 hosts 24/7 due to electricity costs; I only 
power up the larger hosts (2 Dell R710's) when I need additional 
resources for VMs.


I read about using CTDB and floating/virtual IP's to allow the storage 
mount point to transition between available hosts but after some thought 
decided to go about this another, simpler, way:


I created a common hostname for the storage mount points: gfs-data and 
gfs-engine


On each host I edited the /etc/hosts file to have these hostnames resolve to 
that host's own IP, i.e. on host1 gfs-data & gfs-engine --> host1 IP

on host2 gfs-data & gfs-engine --> host2 IP
etc.
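
For illustration, on each host that amounts to something like the following (addresses are placeholders):

# On host1: gfs-data/gfs-engine resolve to the local host, so the storage
# mount only depends on the host itself being up.
echo "192.168.1.11  gfs-data gfs-engine" >> /etc/hosts
# On host2:
echo "192.168.1.12  gfs-data gfs-engine" >> /etc/hosts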

In ovirt engine each storage domain is mounted as gfs-data:/data and 
gfs-engine:/engine
My thinking is that this way, no matter which host is up and acting as 
SPM, it will be able to mount the storage, as it is only dependent on that 
host being up.


I changed the gluster server-quorum-ratio option so that the volumes 
remain up even if quorum is not met; I know this is risky, but it's just a 
lab setup after all.


So, any thoughts on the /etc/hosts method to ensure the storage mount 
point is always available? Is data corruption more or less inevitable 
with this setup? Am I insane ;) ?


Thanks,


Re: [ovirt-users] Gluster storage expansion

2017-01-23 Thread Goorkate, B.J.
Hi,

Thanks for the answer!

Adding nodes in sets of 3 makes sense. The disadvantage is having multiple 
storage domains when you do that. Or is it possible to combine them?

Regards,

Bertjan
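
For context, if the goal were a single larger storage domain rather than a second one, a replica-3 volume can also be grown into a distributed-replicate volume by adding bricks in complete sets of three (a sketch; the hostnames, volume name and brick paths are placeholders):

# Add a full replica set on the three spare nodes, then spread existing data
# across the new bricks.
gluster volume add-brick myvol replica 3 \
    host4:/gluster_bricks/myvol host5:/gluster_bricks/myvol host6:/gluster_bricks/myvol
gluster volume rebalance myvol start
gluster volume rebalance myvol status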


On Fri, Jan 20, 2017 at 11:27:28AM +0530, knarra wrote:
> On 01/19/2017 09:15 PM, Goorkate, B.J. wrote:
> > Hi all,
> > 
> > I have an oVirt environment with 5 nodes. 3 nodes offer a replica-3 gluster 
> > storage domain for the virtual
> > machines.
> > 
> > Is there a way to use storage in the nodes which are no member of the 
> > replica-3 storage domain?
> > Or do I need another node and make a second replica-3 gluster storage 
> > domain?
> since  you have 5 nodes in your cluster, you could add another node and make
> replica-3 gluster storage domain out of these three nodes which are no
> member of the already existing replica-3 storage domain.
> > 
> > In other words: I would like to expand the existing storage domain by 
> > adding more nodes, rather
> > than adding disks to the existing gluster nodes. Is that possible?
> > 
> > Thanks!
> > 
> > Regards,
> > 
> > Bertjan
> > 
> > 
> > 


Re: [ovirt-users] Gluster storage expansion

2017-01-19 Thread knarra

On 01/19/2017 09:15 PM, Goorkate, B.J. wrote:

Hi all,

I have an oVirt environment with 5 nodes. 3 nodes offer a replica-3 gluster 
storage domain for the virtual
machines.

Is there a way to use storage in the nodes which are no member of the replica-3 
storage domain?
Or do I need another node and make a second replica-3 gluster storage domain?
Since you have 5 nodes in your cluster, you could add another node and 
make a replica-3 gluster storage domain out of the three nodes which are 
not members of the already existing replica-3 storage domain.


In other words: I would like to expand the existing storage domain by adding 
more nodes, rather
than adding disks to the existing gluster nodes. Is that possible?

Thanks!

Regards,

Bertjan





[ovirt-users] Gluster storage expansion

2017-01-19 Thread Goorkate, B.J.
Hi all,

I have an oVirt environment with 5 nodes. 3 nodes offer a replica-3 gluster 
storage domain for the virtual
machines.

Is there a way to use storage in the nodes which are not members of the replica-3 
storage domain?
Or do I need another node and make a second replica-3 gluster storage domain?

In other words: I would like to expand the existing storage domain by adding 
more nodes, rather
than adding disks to the existing gluster nodes. Is that possible?

Thanks!

Regards,

Bertjan 





Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-06 Thread Stefano Danzi

I patched the code as an "emergency" fix.
I can't find how to change the configuration.

But I think that's a bug:

- Everything worked in oVirt 3.5; after the upgrade it stopped working.
- The log shows a Python exception.

One more thing:

If there are changes to the configuration requirements, I should be warned 
during the upgrade, or at least find a specific error in the log.
...removing something that doesn't exist from a list and leaving a cryptic 
Python exception in the error log isn't the best solution...


On 06/11/2015 8:12, Nir Soffer wrote:



On 5 Nov 2015, 8:18 PM, "Stefano Danzi" wrote:

>
> To temporary solve the problem I patched storageserver.py as 
suggested on link above.


I would not patch the code but change the configuration.

> I can't find a related issue on bugzilla.

Would you file bug about this?

>
>
>> On 05/11/2015 11:43, Stefano Danzi wrote:
>>
>> My error is related to this message:
>>
>> http://lists.ovirt.org/pipermail/users/2015-August/034316.html
>>
>>> On 05/11/2015 0:28, Stefano Danzi wrote:
>>>
>>> Hello,
>>> I have an Ovirt installation with only 1 host and self-hosted engine.
>>> My Master Data storage domain is GlusterFS type.
>>>
>>> After upgrading to Ovirt 3.6 data storage domain and default 
dataceter are down.

>>> The error in vdsm.log is:
>>>
>>> Thread-6585::DEBUG::2015-11-04 
23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init -> 
state preparin

>>> g
>>> Thread-6585::INFO::2015-11-04 
23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect: 
connectStorageServer(domType=7, 
spUUID=u'----', conLi
>>> st=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', 
u'connection': u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': 
u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':

>>>  '', u'port': u''}], options=None)
>>> Thread-6585::DEBUG::2015-11-04 
23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating 
directory: 
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data mode: Non

>>> e
>>> Thread-6585::WARNING::2015-11-04 
23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already 
exists
>>> Thread-6585::ERROR::2015-11-04 
23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not 
connect to storageServer

>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/hsm.py", line 2462, in 
connectStorageServer

>>> conObj.connect()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 224, in 
connect

>>> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 323, in 
options

>>> backup_servers_option = self._get_backup_servers_option()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 340, in 
_get_backup_servers_option

>>> servers.remove(self._volfileserver)
>>> ValueError: list.remove(x): x not in list
>>> Thread-6585::DEBUG::2015-11-04 
23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: 
{46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
>>> Thread-6585::INFO::2015-11-04 
23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect: 
connectStorageServer, Return response: {'statuslist': [{'status': 100, 
'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
>>> Thread-6585::DEBUG::2015-11-04 
23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist': 
[{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
>>> Thread-6585::DEBUG::2015-11-04 
23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state 
preparing -> state finished



Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-06 Thread Stefano Danzi
oVirt is configured to use ovirtbk-mount.hawai.lan, but Gluster uses 
ovirt01.hawai.lan.

ovirtbk-mount.hawai.lan is an alias of ovirt01 and is in /etc/hosts

On 06/11/2015 8:01, Nir Soffer wrote:



On 5 Nov 2015, 1:47 AM, "Stefano Danzi" wrote:

>
> Hello,
> I have an Ovirt installation with only 1 host and self-hosted engine.
> My Master Data storage domain is GlusterFS type.
>
> After upgrading to Ovirt 3.6 data storage domain and default 
dataceter are down.

> The error in vdsm.log is:
>
> Thread-6585::DEBUG::2015-11-04 
23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init -> 
state preparin

> g
> Thread-6585::INFO::2015-11-04 
23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect: 
connectStorageServer(domType=7, 
spUUID=u'----', conLi
> st=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', u'connection': 
u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': u'', u'tpgt': 
u'1', u'vfs_type': u'glusterfs', u'password':

>  '', u'port': u''}], options=None)

The error below suggest that ovirt and gluster are not configured in 
the same way, one using a domain name and the other ip address.


Can you share the output of
gluster volume info

On one of the bricks, or on the host (you will need to use --remote-host)

Nir

> Thread-6585::DEBUG::2015-11-04 
23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating 
directory: 
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data mode: Non

> e
> Thread-6585::WARNING::2015-11-04 
23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already 
exists
> Thread-6585::ERROR::2015-11-04 
23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not 
connect to storageServer

> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2462, in 
connectStorageServer

> conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 224, in connect
> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>   File "/usr/share/vdsm/storage/storageServer.py", line 323, in options
> backup_servers_option = self._get_backup_servers_option()
>   File "/usr/share/vdsm/storage/storageServer.py", line 340, in 
_get_backup_servers_option

> servers.remove(self._volfileserver)
> ValueError: list.remove(x): x not in list
> Thread-6585::DEBUG::2015-11-04 
23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: 
{46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
> Thread-6585::INFO::2015-11-04 
23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect: 
connectStorageServer, Return response: {'statuslist': [{'status': 100, 
'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
> Thread-6585::DEBUG::2015-11-04 
23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist': 
[{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
> Thread-6585::DEBUG::2015-11-04 
23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state 
preparing -> state finished



Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-06 Thread Nir Soffer
On Fri, Nov 6, 2015 at 10:38 AM, Stefano Danzi  wrote:
> I pathced the code for "emergency"

A safer way is to downgrade vdsm to the previous version.

> I can't find how change confguration.

1. Put the gluster domain in maintenance
- select the domain in the storage tab
- in the "data center" sub tab, click maintenance
- the domain will turn to locked, and then to maintenance mode

2. Edit the domain
3. Change the address to the same one configured in gluster (ovirt01...)
4. Activate the domain
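
To see which address Gluster itself reports for step 3, something like this can be run on the host (a sketch; the volume name 'data' is taken from the mount path in the log):

# The brick lines show the server name Gluster was configured with
# (ovirt01.hawai.lan rather than the ovirtbk-mount.hawai.lan alias used by oVirt).
gluster volume info data | grep -E '^(Volume Name|Brick[0-9]+):'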

> But I think that's a bug:
>
> - All work in ovirt 3.5, after upgrade stop working

Yes, it should keep working after an upgrade

However, using a straightforward setup, such as the same server address
in both oVirt and Gluster, will increase the chance that things continue
to work after an upgrade.

> - The log show a python exception.

This is good, and makes debugging this issue easy.

> I think a thing:
>
> If there are chages on configuration requirements I have to be warned during
> upgrade, or I have to find a specific error in log.

Correct

> ...remove something that non exist from a list,

The code assumes that the server address is in the list, so removing
it is correct.

This assumption is wrong; we will have to change the code to handle this case.

> and leave a cryptic python
> exception as error log, isn't the better solution...

The traceback in the log is very important; without it, this issue would be
very hard to debug.

>
>
> On 06/11/2015 8:12, Nir Soffer wrote:
>
>
> On 5 Nov 2015, 8:18 PM, "Stefano Danzi" wrote:
>>
>> To temporary solve the problem I patched storageserver.py as suggested on
>> link above.
>
> I would not patch the code but change the configuration.
>
>> I can't find a related issue on bugzilla.
>
> Would you file bug about this?
>
>>
>>
>> On 05/11/2015 11:43, Stefano Danzi wrote:
>>>
>>> My error is related to this message:
>>>
>>> http://lists.ovirt.org/pipermail/users/2015-August/034316.html
>>>
>>> On 05/11/2015 0:28, Stefano Danzi wrote:

 Hello,
 I have an Ovirt installation with only 1 host and self-hosted engine.
 My Master Data storage domain is GlusterFS type.

 After upgrading to Ovirt 3.6 data storage domain and default dataceter
 are down.
 The error in vdsm.log is:

 Thread-6585::DEBUG::2015-11-04
 23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState)
 Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init -> 
 state
 preparin
 g
 Thread-6585::INFO::2015-11-04
 23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect:
 connectStorageServer(domType=7,
 spUUID=u'----', conLi
 st=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', u'connection':
 u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': u'', u'tpgt': u'1',
 u'vfs_type': u'glusterfs', u'password':
  '', u'port': u''}], options=None)
 Thread-6585::DEBUG::2015-11-04
 23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating
 directory: /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data
 mode: Non
 e
 Thread-6585::WARNING::2015-11-04
 23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir
 /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already exists
 Thread-6585::ERROR::2015-11-04
 23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not
 connect to storageServer
 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/hsm.py", line 2462, in
 connectStorageServer
 conObj.connect()
   File "/usr/share/vdsm/storage/storageServer.py", line 224, in connect
 self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
   File "/usr/share/vdsm/storage/storageServer.py", line 323, in options
 backup_servers_option = self._get_backup_servers_option()
   File "/usr/share/vdsm/storage/storageServer.py", line 340, in
 _get_backup_servers_option
 servers.remove(self._volfileserver)
 ValueError: list.remove(x): x not in list
 Thread-6585::DEBUG::2015-11-04
 23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
 {46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
 Thread-6585::INFO::2015-11-04
 23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect:
 connectStorageServer, Return response: {'statuslist': [{'status': 100, 
 'id':
 u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
 Thread-6585::DEBUG::2015-11-04
 23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare)
 Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist':
 [{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
 Thread-6585::DEBUG::2015-11-04
 23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState)
 

Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-05 Thread Stefano Danzi
To temporarily solve the problem I patched storageServer.py as suggested 
in the link above.

I can't find a related issue on bugzilla.

On 05/11/2015 11:43, Stefano Danzi wrote:

My error is related to this message:

http://lists.ovirt.org/pipermail/users/2015-August/034316.html

On 05/11/2015 0:28, Stefano Danzi wrote:

Hello,
I have an Ovirt installation with only 1 host and self-hosted engine.
My Master Data storage domain is GlusterFS type.

After upgrading to oVirt 3.6, the data storage domain and the default 
datacenter are down.

The error in vdsm.log is:

Thread-6585::DEBUG::2015-11-04 
23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init 
-> state preparin

g
Thread-6585::INFO::2015-11-04 
23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect: 
connectStorageServer(domType=7, 
spUUID=u'----', conLi
st=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', u'connection': 
u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': u'', u'tpgt': 
u'1', u'vfs_type': u'glusterfs', u'password':

 '', u'port': u''}], options=None)
Thread-6585::DEBUG::2015-11-04 
23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating 
directory: 
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data mode: Non

e
Thread-6585::WARNING::2015-11-04 
23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir 
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already 
exists
Thread-6585::ERROR::2015-11-04 
23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could 
not connect to storageServer

Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2462, in 
connectStorageServer

conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 224, in connect
self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/share/vdsm/storage/storageServer.py", line 323, in options
backup_servers_option = self._get_backup_servers_option()
  File "/usr/share/vdsm/storage/storageServer.py", line 340, in 
_get_backup_servers_option

servers.remove(self._volfileserver)
ValueError: list.remove(x): x not in list
Thread-6585::DEBUG::2015-11-04 
23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) 
knownSDs: {46f55a31-f35f-465c-b3e2-df45c05e06a7: 
storage.nfsSD.findDomain}
Thread-6585::INFO::2015-11-04 
23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect: 
connectStorageServer, Return response: {'statuslist': [{'status': 
100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
Thread-6585::DEBUG::2015-11-04 
23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist': 
[{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
Thread-6585::DEBUG::2015-11-04 
23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state 
preparing -> state finished



Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-05 Thread Nir Soffer
On 5 Nov 2015, 1:47 AM, "Stefano Danzi" wrote:
>
> Hello,
> I have an Ovirt installation with only 1 host and self-hosted engine.
> My Master Data storage domain is GlusterFS type.
>
> After upgrading to Ovirt 3.6 data storage domain and default dataceter
are down.
> The error in vdsm.log is:
>
> Thread-6585::DEBUG::2015-11-04
23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState)
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init ->
state preparin
> g
> Thread-6585::INFO::2015-11-04
23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID=u'----', conLi
> st=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', u'connection':
u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': u'', u'tpgt': u'1',
u'vfs_type': u'glusterfs', u'password':
>  '', u'port': u''}], options=None)

The error below suggests that oVirt and Gluster are not configured in the
same way, one using a domain name and the other an IP address.

Can you share the output of
gluster volume info

On one of the bricks, or on the host (you will need to use --remote-host)

Nir

> Thread-6585::DEBUG::2015-11-04
23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data
mode: Non
> e
> Thread-6585::WARNING::2015-11-04
23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already exists
> Thread-6585::ERROR::2015-11-04
23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2462, in
connectStorageServer
> conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 224, in connect
> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>   File "/usr/share/vdsm/storage/storageServer.py", line 323, in options
> backup_servers_option = self._get_backup_servers_option()
>   File "/usr/share/vdsm/storage/storageServer.py", line 340, in
_get_backup_servers_option
> servers.remove(self._volfileserver)
> ValueError: list.remove(x): x not in list
> Thread-6585::DEBUG::2015-11-04
23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
{46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
> Thread-6585::INFO::2015-11-04
23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 100,
'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
> Thread-6585::DEBUG::2015-11-04
23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare)
Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist':
[{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
> Thread-6585::DEBUG::2015-11-04
23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState)
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state preparing ->
state finished


Re: [ovirt-users] Gluster storage domain error after upgrading to 3.6

2015-11-05 Thread Nir Soffer
On 5 Nov 2015, 8:18 PM, "Stefano Danzi" wrote:
>
> To temporary solve the problem I patched storageserver.py as suggested on
link above.

I would not patch the code but change the configuration.

> I can't find a related issue on bugzilla.

Would you file a bug about this?

>
>
> Il 05/11/2015 11.43, Stefano Danzi ha scritto:
>>
>> My error is related to this message:
>>
>> http://lists.ovirt.org/pipermail/users/2015-August/034316.html
>>
>> Il 05/11/2015 0.28, Stefano Danzi ha scritto:
>>>
>>> Hello,
>>> I have an Ovirt installation with only 1 host and self-hosted engine.
>>> My Master Data storage domain is GlusterFS type.
>>>
>>> After upgrading to Ovirt 3.6 data storage domain and default dataceter
are down.
>>> The error in vdsm.log is:
>>>
>>> Thread-6585::DEBUG::2015-11-04
23:55:00,173::task::595::Storage.TaskManager.Task::(_updateState)
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state init ->
state preparin
>>> g
>>> Thread-6585::INFO::2015-11-04
23:55:00,173::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID=u'----', conLi
>>> st=[{u'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499', u'connection':
u'ovirtbk-mount.hawai.lan:data', u'iqn': u'', u'user': u'', u'tpgt': u'1',
u'vfs_type': u'glusterfs', u'password':
>>>  '', u'port': u''}], options=None)
>>> Thread-6585::DEBUG::2015-11-04
23:55:00,235::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data
mode: Non
>>> e
>>> Thread-6585::WARNING::2015-11-04
23:55:00,235::fileUtils::152::Storage.fileUtils::(createdir) Dir
/rhev/data-center/mnt/glusterSD/ovirtbk-mount.hawai.lan:data already exists
>>> Thread-6585::ERROR::2015-11-04
23:55:00,235::hsm::2465::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/hsm.py", line 2462, in
connectStorageServer
>>> conObj.connect()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 224, in connect
>>> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 323, in options
>>> backup_servers_option = self._get_backup_servers_option()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 340, in
_get_backup_servers_option
>>> servers.remove(self._volfileserver)
>>> ValueError: list.remove(x): x not in list
>>> Thread-6585::DEBUG::2015-11-04
23:55:00,235::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
{46f55a31-f35f-465c-b3e2-df45c05e06a7: storage.nfsSD.findDomain}
>>> Thread-6585::INFO::2015-11-04
23:55:00,236::logUtils::51::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 100,
'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
>>> Thread-6585::DEBUG::2015-11-04
23:55:00,236::task::1191::Storage.TaskManager.Task::(prepare)
Task=`86e72580-fa76-4347-b919-a73970d12682`::finished: {'statuslist':
[{'status': 100, 'id': u'dc0eaef1-8494-4e35-abea-80a4e7f37499'}]}
>>> Thread-6585::DEBUG::2015-11-04
23:55:00,236::task::595::Storage.TaskManager.Task::(_updateState)
Task=`86e72580-fa76-4347-b919-a73970d12682`::moving from state preparing ->
state finished


[ovirt-users] gluster storage ( wanted / supported / recommended configuration )

2015-05-11 Thread p...@email.cz

Hello dears,

Is anybody here up for a serious storage conversation?
I've got some ideas and a lot of errors from checking a Gluster filesystem concept.

Let me know and I will send a diagram and questions.

Regards,
Pavel