Re: [Gluster-users] gluster over vdo, problem with gfapi

2020-08-12 Thread dm

btw, all I wrote before was about the raw file format;
if it is qcow2 then, using gfapi:


 virsh create /kvmconf/stewjon.xml
error: Failed to create domain from /kvmconf/stewjon.xml
error: internal error: process exited while connecting to monitor:
[2020-08-13 04:17:37.326933] E [MSGID: 108006] [afr-common.c:6073:__afr_handle_child_down_event] 0-pool-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2020-08-13 04:17:47.220840] I [io-stats.c:4054:fini] 0-pool: io-stats translator unloaded
2020-08-13T04:17:47.222064Z qemu-kvm: -drive file=gluster://127.0.0.1:24007/pool/stewjon.qcow2,file.debug=4,format=qcow2,if=none,id=drive-virtio-disk0,cache=directsync: Could not read qcow2 header: Invalid argument


very interesting...

The only problem here: should I report this to qemu, gluster, or vdo? :-(
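For reference, "Could not read qcow2 header: Invalid argument" means qemu's very first read of the image returned EINVAL, not that the header bytes themselves are wrong; all qemu gets to check is the 4-byte magic "QFI\xfb". A small self-contained sketch of that magic check follows; the real image path (e.g. /root/pool/stewjon.qcow2 over the FUSE mount) is an assumption based on this thread.

```shell
# qemu's qcow2 probe reads the start of the image and compares it against
# the magic "QFI\xfb" (hex 51 46 49 fb). Simulate a header here; to check
# the real image through the FUSE mount, point head at the actual file
# (path is an assumption):  head -c 4 /root/pool/stewjon.qcow2 | od -An -tx1
printf 'QFI\373' > /tmp/fake.qcow2        # \373 octal == 0xfb
head -c 4 /tmp/fake.qcow2 | od -An -tx1   # prints: 51 46 49 fb
```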


On 13.08.2020 08:14, Dmitry Melekhov wrote:

On 13.08.2020 07:31, Strahil Nikolov wrote:
I meant: did you use C7 with Gluster 7 (or older), or C7 with the new
Gluster 8?

Frankly, I don't know what you mean by C7.. :-(


Anyways,
if it worked before, it should run now - open an issue on GitHub and
I guess someone from the devs will take a look.




No, it never worked...


But opening an issue is a good idea, thank you!








Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-12 Thread Sachidananda Urs
On Thu, Aug 13, 2020 at 12:58 AM Strahil Nikolov 
wrote:

> I couldn't make it work  on C8...
> Maybe I was cloning the wrong branch.
>
> Details can be found at
> https://github.com/gluster/gstatus/issues/30#issuecomment-673041743


I have commented on the issue:
https://github.com/gluster/gstatus/issues/30#issuecomment-673238987
These are the steps:

 $ git clone https://github.com/gluster/gstatus.git
 $ cd gstatus
 $ VERSION=1.0.0 make gen-version
 # python3 setup.py install

make gen-version will create version.py

Thanks,
sac






Re: [Gluster-users] gluster over vdo, problem with gfapi

2020-08-12 Thread Dmitry Melekhov


On 12.08.2020 23:25, Strahil Nikolov wrote:

I am not sure that it is OK to use any caching (at least oVirt doesn't use any).

Have you set the 'virt' group of settings? They seem to be optimal, but keep in
mind that if you enable them -> you will enable sharding, which cannot be
'disabled' afterwards.



Sorry, I don't follow; as I said, everything works until we set
cache=none or cache=directsync in libvirt,


i.e. there is no relation with other gluster settings.



The fact that it works on C7 is strange; with which version of gluster did you
test?


Dunno, we run qemu on the same host as gluster itself, so we have the 
same gfapi version as gluster server.









Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-12 Thread Strahil Nikolov
I couldn't make it work  on C8...
Maybe I was cloning the wrong branch.

Details can be found at 
https://github.com/gluster/gstatus/issues/30#issuecomment-673041743

Best Regards,
Strahil Nikolov

On 12 August 2020 20:01:45 GMT+03:00, Gilberto Nunes wrote:
>It works!
>./gstatus -a
>
>Cluster:
>         Status: Healthy      GlusterFS: 8.0
>         Nodes: 2/2           Volumes: 1/1
>
>Volumes:
>VMS          Replicate    Started (UP) - 2/2 Bricks Up
>             Capacity: (28.41% used) 265.00 GiB/931.00 GiB (used/total)
>             Bricks:
>                Distribute Group 1:
>                   glusterfs01:/DATA/vms (Online)
>                   glusterfs02:/DATA/vms (Online)
>
>
>
>Awesome, thanks!
>---
>Gilberto Nunes Ferreira
>
>(47) 3025-5907
>(47) 99676-7530 - Whatsapp / Telegram
>
>Skype: gilberto.nunes36
>
>
>
>
>
>On Wed, 12 Aug 2020 at 13:52, Sachidananda Urs wrote:
>
>>
>>
>> On Sun, Aug 9, 2020 at 10:43 PM Gilberto Nunes
>
>> wrote:
>>
>>> How did you deploy it ? - git clone, ./gstatus.py, and python
>gstatus.py
>>> install then gstatus
>>>
>>> What is your gluster version ? Latest stable to Debian Buster (v8)
>>>
>>>
>>>
>> Hello Gilberto. I just made a 1.0.0 release.
>> gstatus binary is available to download from (requires python >= 3.6)
>> https://github.com/gluster/gstatus/releases/tag/v1.0.0
>>
>> You can find the complete documentation here:
>> https://github.com/gluster/gstatus/blob/master/README
>>
>> Follow the below steps for a quick method to test it out:
>>
>> # curl -LO
>> https://github.com/gluster/gstatus/releases/download/v1.0.0/gstatus
>>
>> # chmod +x gstatus
>>
>> # ./gstatus -a
>> # ./gstatus --help
>>
>> If you like what you see, you can move it to /usr/local/bin.
>>
>> Would like to hear your feedback. Any feature requests/bugs/PRs are
>> welcome.
>>
>> -sac
>>






Re: [Gluster-users] gluster over vdo, problem with gfapi

2020-08-12 Thread Strahil Nikolov
I am not sure that it is OK to use any caching (at least oVirt doesn't use any).

Have you set the 'virt' group of settings? They seem to be optimal, but keep
in mind that if you enable them -> you will enable sharding, which cannot be
'disabled' afterwards.

The fact that it works on C7 is strange; with which version of gluster did you
test?

Best Regards,
Strahil Nikolov

On 12 August 2020 18:03:29 GMT+03:00, Dmitry Melekhov wrote:
>
>On 12.08.2020 17:50, Strahil Nikolov wrote:
>> Libgfapi brings far better performance ,
>
>Yes, and several vms do not rely on the same mount point...
>
>
>> but qemu has some limitations.
>>
>>
>> If it works on FUSE , but not on libgfapi -> it seems obvious.
>
>
>Not obvious to me: we tested vdo locally, i.e. without gluster, and qemu
>works with cache=none or cache=directsync without problems,
>
>so the problem is somewhere in gluster.
>
>>
>> Have you tried to connect from C7 to the Gluster TSP via libgfapi?
>No, but we tested the same setup with gluster 7 with the same result 
>before we upgraded to 8.
>>
>> Also, is SELinux enforcing or not?
>
>selinux is disabled...
>
>
>Thank you!
>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On 12 August 2020 16:34:26 GMT+03:00, Satheesaran Sundaramoorthi wrote:
>>> On Wed, Aug 12, 2020 at 2:30 PM Dmitry Melekhov 
>wrote:
>>>
On 12.08.2020 12:55, Amar Tumballi wrote:
> Hi Dimitry,
>
> Was this working earlier and now failing on Version 8 or is this a
>>> new
> setup which you did first time?
>
 Hello!


 This is first time we  are testing gluster over vdo.

 Thank you!


 Hello Dmitry,
>>> I have been testing the RHEL downstream variant of gluster with RHEL
>>> 8.2,
>>> where VMs are created with their images on fuse mounted gluster
>volume
>>> with
>>> VDO.
>>> This worked well.
>>>
>>> But I see you are using 'gfapi', so that could be different.
>>> Though I don't have valuable inputs to help you, do you find 'gfapi'
>>> better than using a fuse-mounted volume?
>
>
>We think that gfapi is better for 2 reasons:
>
>1. it is faster;
>
>2. each qemu process connects to the gluster cluster, so there is no single
>point of failure (the fuse mount)...
>
>
>Thank you!
>
>>>
>>> -- Satheesaran S






Re: [Gluster-users] Pending healing...

2020-08-12 Thread Gilberto Nunes
Well, after a couple of hours the healing process was completed. Thanks
anyway.

On Wed, 12 Aug 2020 at 19:04, Artem Russakovskii wrote:

> Remove the "summary" part of the command, which should list the exact file
> pending heal.
>
> Then launch the heal manually. If it still doesn't heal, try running
> md5sum on the file and see if it heals after that.
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police , APK Mirror
> , Illogical Robot LLC
> beerpla.net | @ArtemR 
>
>
> On Fri, Aug 7, 2020 at 11:03 AM Gilberto Nunes 
> wrote:
>
>> Hi
>>
>> I have a pending entry like this
>>
>> gluster vol heal VMS info summary
>> Brick glusterfs01:/DATA/vms
>> Status: Connected
>> Total Number of entries: 1
>> Number of entries in heal pending: 1
>> Number of entries in split-brain: 0
>> Number of entries possibly healing: 0
>>
>> Brick glusterfs02:/DATA/vms
>> Status: Connected
>> Total Number of entries: 1
>> Number of entries in heal pending: 1
>> Number of entries in split-brain: 0
>> Number of entries possibly healing: 0
>>
>> How can I solve this?
>> Should I follow this?
>>
>>
>> https://icicimov.github.io/blog/high-availability/GlusterFS-metadata-split-brain-recovery/
>>
>>
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>>
>>
>>
>> 
>>
>>
>>
>>
>






Re: [Gluster-users] performance

2020-08-12 Thread Artem Russakovskii
Hmm, in our case of running gluster across Linode block storage (which
itself runs inside Ceph, as I found out), the only thing that helped with
the hangs so far was defragmenting xfs.

I tried changing many things, including the scheduler to "none" and
this performance.write-behind-window-size setting, and nothing seemed to
help or provide any meaningful difference.

Sincerely,
Artem

--
Founder, Android Police , APK Mirror
, Illogical Robot LLC
beerpla.net | @ArtemR 


On Fri, Aug 7, 2020 at 11:28 AM Computerisms Corporation <
b...@computerisms.ca> wrote:

> Hi Artem and others,
>
> Happy to report the system has been relatively stable for the remainder
> of the week.  I have one wordpress site that seems to get hung processes
> when someone logs in with an incorrect password.  Since it is only one,
> and reliably reproduceable, I am not sure if the issue is to do with
> Gluster or Wordpress itself, but afaik it was not doing it some months
> back before the system was using Gluster so I am guessing some combo of
> both.
>
> Regardless, that is the one and only time apache processes stacked up to
> over 150, and that still only brought the load average up to just under
> 25; the system did go a bit sluggish, but remained fairly responsive
> throughout until I restarted apache.  Otherwise 15 minute load average
> consistently runs between 8 and 11 during peak hours and between 4 and 7
> during off hours, and other than the one time I have not seen the
> one-minute load average go over 15.  all resources still spike to full
> capacity from time to time, but it never remains that way for long like
> it did before.
>
> For site responsiveness, first visit to any given site is quite slow,
> like 3-5 seconds on straight html pages, 10-15 seconds for some of the
> more bloated WP themes, but clicking links within the site after the
> first page is loaded is relatively quick, like 1 second on straight html
> pages, and ~5-6 seconds on the bloated themes.  Again, not sure if that
> is a Gluster related thing or something else.
>
> So, still holding my breath a bit, but seems this solution is working,
> at least for me.  I haven't played with any of the other settings yet to
> see if I can improve it further, probably will next week.  thinking to
> increase the write behind window size further to see what happens, as
> well as play with the settings suggested by Strahil.
>
> On 2020-08-05 5:28 p.m., Artem Russakovskii wrote:
> > I'm very curious whether these improvements hold up over the next few
> > days. Please report back.
> >
> > Sincerely,
> > Artem
> >
> > --
> > Founder, Android Police , APK Mirror
> > , Illogical Robot LLC
> > beerpla.net  | @ArtemR 
> >
> >
> > On Wed, Aug 5, 2020 at 9:44 AM Computerisms Corporation
> > mailto:b...@computerisms.ca>> wrote:
> >
> > Hi List,
> >
> >  > So, we just moved into a quieter time of the day, but maybe I just
> >  > stumbled onto something.  I was trying to figure out if/how I
> could
> >  > throw more RAM at the problem.  gluster docs says write behind is
> > not a
> >  > cache unless flush-behind is on.  So seems that is a way to throw
> > ram to
> >  > it?  I put performance.write-behind-window-size: 512MB and
> >  > performance.flush-behind: on and the whole system calmed down
> pretty
> >  > much immediately.  could be just timing, though, will have to see
> >  > tomorrow during business hours whether the system stays at a
> > reasonable
> >  > load.
> >
> > so reporting back that this seems to have definitely had a
> significant
> > positive effect.
> >
> > So far today I have not seen the load average climb over 13 with the
> > 15minute average hovering around 7.  cpus are still spiking from
> > time to
> > time, but they are not staying maxed out all the time, and
> frequently I
> > am seeing brief periods of up to 80% idle.  glusterfs process still
> > spiking up to 180% or so, but consistently running around 70%, and
> the
> > brick processes still spiking up to 70-80%, but consistently running
> > around 20%.  Disk has only been above 50% in atop once so far today
> > when
> > it spiked up to 92%, and still lots of RAM left over.  So far nload
> > even
> > seems indicates I could get away with a 100Mbit network connection.
> > Websites are snappy relative to what they were, still a bit sluggish
> on
> > the first page of any given site, but tolerable or close to.  Apache
> > processes are opening and closing right away, instead of stacking up.
> >
> > Overall, system is performing pretty much like I would expect it to
> > without gluster.  I haven't played with any of the other settings
> yet,
> > just going to leave it like this for a 
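For anyone wanting to try the tuning described in this thread, the two settings map to these volume options; MYVOL is a placeholder for the actual volume name, and the values are the ones reported to help here (treat this as a sketch, not a recommendation for every workload):

```shell
# Hedged sketch: the write-behind settings discussed above.
# MYVOL is a placeholder; run against a live gluster cluster.
gluster volume set MYVOL performance.write-behind-window-size 512MB
gluster volume set MYVOL performance.flush-behind on
```

These commands require a running cluster, so they are shown as a configuration fragment only.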

Re: [Gluster-users] Pending healing...

2020-08-12 Thread Artem Russakovskii
Remove the "summary" part of the command, which should list the exact file
pending heal.

Then launch the heal manually. If it still doesn't heal, try running md5sum
on the file and see if it heals after that.
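Spelled out, the advice above corresponds to the following commands; the volume name VMS is taken from the quoted heal output, and this assumes a live cluster:

```shell
# List the exact entries pending heal (i.e. drop "summary" from the command):
gluster volume heal VMS info
# Then trigger the self-heal manually:
gluster volume heal VMS
```

Shown as a configuration/CLI fragment only, since it needs a running cluster.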

Sincerely,
Artem

--
Founder, Android Police , APK Mirror
, Illogical Robot LLC
beerpla.net | @ArtemR 


On Fri, Aug 7, 2020 at 11:03 AM Gilberto Nunes 
wrote:

> Hi
>
> I have a pending entry like this
>
> gluster vol heal VMS info summary
> Brick glusterfs01:/DATA/vms
> Status: Connected
> Total Number of entries: 1
> Number of entries in heal pending: 1
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> Brick glusterfs02:/DATA/vms
> Status: Connected
> Total Number of entries: 1
> Number of entries in heal pending: 1
> Number of entries in split-brain: 0
> Number of entries possibly healing: 0
>
> How can I solve this?
> Should I follow this?
>
>
> https://icicimov.github.io/blog/high-availability/GlusterFS-metadata-split-brain-recovery/
>
>
>
> ---
> Gilberto Nunes Ferreira
>
>
>
>
> 
>
>
>
>






Re: [Gluster-users] gluster over vdo, problem with gfapi

2020-08-12 Thread Strahil Nikolov
Libgfapi brings far better performance, but qemu has some limitations.


If it works on FUSE , but not on libgfapi -> it seems obvious.

Have you tried to connect from C7 to the Gluster TSP via libgfapi?

Also, is SELinux enforcing or not?

Best Regards,
Strahil Nikolov

On 12 August 2020 16:34:26 GMT+03:00, Satheesaran Sundaramoorthi wrote:
>On Wed, Aug 12, 2020 at 2:30 PM Dmitry Melekhov  wrote:
>
>> On 12.08.2020 12:55, Amar Tumballi wrote:
>> > Hi Dimitry,
>> >
>> > Was this working earlier and now failing on Version 8 or is this a
>new
>> > setup which you did first time?
>> >
>> Hello!
>>
>>
>> This is first time we  are testing gluster over vdo.
>>
>> Thank you!
>>
>>
>> Hello Dmitry,
>
>I have been testing the RHEL downstream variant of gluster with RHEL
>8.2,
>where VMs are created with their images on fuse mounted gluster volume
>with
>VDO.
>This worked well.
>
>But I see you are using 'gfapi', so that could be different.
>Though I don't have valuable inputs to help you, do you find 'gfapi'
>better than using a fuse-mounted volume?
>
>-- Satheesaran S






Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-12 Thread Gilberto Nunes
It works!
./gstatus -a

Cluster:
         Status: Healthy      GlusterFS: 8.0
         Nodes: 2/2           Volumes: 1/1

Volumes:
VMS          Replicate    Started (UP) - 2/2 Bricks Up
             Capacity: (28.41% used) 265.00 GiB/931.00 GiB (used/total)
             Bricks:
                Distribute Group 1:
                   glusterfs01:/DATA/vms (Online)
                   glusterfs02:/DATA/vms (Online)



Awesome, thanks!
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Wed, 12 Aug 2020 at 13:52, Sachidananda Urs wrote:

>
>
> On Sun, Aug 9, 2020 at 10:43 PM Gilberto Nunes 
> wrote:
>
>> How did you deploy it ? - git clone, ./gstatus.py, and python gstatus.py
>> install then gstatus
>>
>> What is your gluster version ? Latest stable to Debian Buster (v8)
>>
>>
>>
> Hello Gilberto. I just made a 1.0.0 release.
> gstatus binary is available to download from (requires python >= 3.6)
> https://github.com/gluster/gstatus/releases/tag/v1.0.0
>
> You can find the complete documentation here:
> https://github.com/gluster/gstatus/blob/master/README
>
> Follow the below steps for a quick method to test it out:
>
> # curl -LO
> https://github.com/gluster/gstatus/releases/download/v1.0.0/gstatus
>
> # chmod +x gstatus
>
> # ./gstatus -a
> # ./gstatus --help
>
> If you like what you see, you can move it to /usr/local/bin.
>
> Would like to hear your feedback. Any feature requests/bugs/PRs are
> welcome.
>
> -sac
>






Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-12 Thread Sachidananda Urs
On Sun, Aug 9, 2020 at 10:43 PM Gilberto Nunes 
wrote:

> How did you deploy it ? - git clone, ./gstatus.py, and python gstatus.py
> install then gstatus
>
> What is your gluster version ? Latest stable to Debian Buster (v8)
>
>
>
Hello Gilberto. I just made a 1.0.0 release.
gstatus binary is available to download from (requires python >= 3.6)
https://github.com/gluster/gstatus/releases/tag/v1.0.0

You can find the complete documentation here:
https://github.com/gluster/gstatus/blob/master/README

Follow the below steps for a quick method to test it out:

# curl -LO
https://github.com/gluster/gstatus/releases/download/v1.0.0/gstatus

# chmod +x gstatus

# ./gstatus -a
# ./gstatus --help

If you like what you see, you can move it to /usr/local/bin.

Would like to hear your feedback. Any feature requests/bugs/PRs are welcome.

-sac






Re: [Gluster-users] gluster over vdo, problem with gfapi

2020-08-12 Thread Dmitry Melekhov


On 12.08.2020 17:50, Strahil Nikolov wrote:

Libgfapi brings far better performance,


Yes, and several vms do not rely on the same mount point...



but qemu has some limitations.


If it works on FUSE , but not on libgfapi -> it seems obvious.



Not obvious to me: we tested vdo locally, i.e. without gluster, and qemu
works with cache=none or cache=directsync without problems,


so the problem is somewhere in gluster.



Have you tried to connect from C7 to the Gluster TSP via libgfapi?
No, but we tested the same setup with gluster 7 with the same result 
before we upgraded to 8.


Also, is SELinux enforcing or not?


selinux is disabled...


Thank you!



Best Regards,
Strahil Nikolov

On 12 August 2020 16:34:26 GMT+03:00, Satheesaran Sundaramoorthi wrote:

On Wed, Aug 12, 2020 at 2:30 PM Dmitry Melekhov  wrote:


On 12.08.2020 12:55, Amar Tumballi wrote:

Hi Dimitry,

Was this working earlier and now failing on Version 8 or is this a

new

setup which you did first time?


Hello!


This is first time we  are testing gluster over vdo.

Thank you!


Hello Dmitry,

I have been testing the RHEL downstream variant of gluster with RHEL
8.2,
where VMs are created with their images on fuse mounted gluster volume
with
VDO.
This worked well.

But I see you are using 'gfapi', so that could be different.
Though I don't have valuable inputs to help you, do you find 'gfapi'
better than using a fuse-mounted volume?



We think that gfapi is better for 2 reasons:

1. it is faster;

2. each qemu process connects to the gluster cluster, so there is no single
point of failure (the fuse mount)...



Thank you!



-- Satheesaran S







Re: [Gluster-users] gluster over vdo, problem with gfapi

2020-08-12 Thread Satheesaran Sundaramoorthi
On Wed, Aug 12, 2020 at 2:30 PM Dmitry Melekhov  wrote:

> On 12.08.2020 12:55, Amar Tumballi wrote:
> > Hi Dimitry,
> >
> > Was this working earlier and now failing on Version 8 or is this a new
> > setup which you did first time?
> >
> Hello!
>
>
> This is first time we  are testing gluster over vdo.
>
> Thank you!
>
>
> Hello Dmitry,

I have been testing the RHEL downstream variant of gluster with RHEL 8.2,
where VMs are created with their images on fuse mounted gluster volume with
VDO.
This worked well.

But I see you are using 'gfapi', so that could be different.
Though I don't have valuable inputs to help you, do you find 'gfapi' better
than using a fuse-mounted volume?

-- Satheesaran S






Re: [Gluster-users] gluster over vdo, problem with gfapi

2020-08-12 Thread Dmitry Melekhov

On 12.08.2020 12:55, Amar Tumballi wrote:

Hi Dimitry,

Was this working earlier and now failing on Version 8 or is this a new 
setup which you did first time?



Hello!


This is first time we  are testing gluster over vdo.

Thank you!








Re: [Gluster-users] gluster over vdo, problem with gfapi

2020-08-12 Thread Amar Tumballi
Hi Dimitry,

Was this working earlier and now failing on Version 8 or is this a new
setup which you did first time?

-Amar

On Wed, Aug 12, 2020 at 1:17 PM dm  wrote:

> On 12.08.2020 11:39, dm wrote:
> > Some more info, really we have lvm over lvm here:
> >
> > lvm-vdo-lvm...
> >
> > Thank you!
> >
>
> Sorry, this is wrong, I forgot we replaced this,
>
> vdo now is over physical drive...
>
> So, only one lvm layer here.
>
> >
> > On 12.08.2020 11:00, Dmitry Melekhov wrote:
> >> Hello!
> >>
> >>
> >> We are testing gluster 8 on centos 8.2 and we try to use volume
> >> created over vdo.
> >>
> >> This is 2 nodes setup.
> >>
> >> There is lvm created over vdo, and xfs filesystem.
> >>
> >>
> >> Test vm runs just fine if we run vm over fuse:
> >>
> >> [libvirt <disk> XML stripped by the list archive]
> >>
> >> /root/pool/ is fuse mount.
> >>
> >>
> >> but if we try to run:
> >>
> >> [libvirt <disk> XML with the gluster (gfapi) source stripped by the list archive]
> >>
> >> then vm boot dies; qemu says: no bootable device.
> >>
> >>
> >> It works without cache='directsync' though.
> >>
> >> But live migration does not work.
> >>
> >>
> >> btw, everything works OK if we run VM on gluster volume without vdo...
> >>
> >> Any ideas what can cause this and how it can be fixed?
> >>
> >>
> >> Thank you!
> >>
> >
>
> 
>
>
>
>


-- 
--
https://kadalu.io
Container Storage made easy!




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster over vdo, problem with gfapi

2020-08-12 Thread dm

Some more info, really we have lvm over lvm here:

lvm-vdo-lvm...

Thank you!


On 12.08.2020 11:00, Dmitry Melekhov wrote:

Hello!


We are testing gluster 8 on centos 8.2 and we try to use volume 
created over vdo.


This is 2 nodes setup.

There is lvm created over vdo, and xfs filesystem.


Test vm runs just fine if we run vm over fuse:

[libvirt <disk> XML stripped by the list archive]

/root/pool/ is fuse mount.


but if we try to run:

[libvirt <disk> XML with the gluster (gfapi) source stripped by the list archive]

then vm boot dies; qemu says: no bootable device.


It works without cache='directsync' though.

But live migration does not work.


btw, everything works OK if we run VM on gluster volume without vdo...

Any ideas what can cause this and how it can be fixed?


Thank you!









Re: [Gluster-users] gluster over vdo, problem with gfapi

2020-08-12 Thread dm

btw, part of brick log:

[2020-08-12 07:08:32.646082] I [MSGID: 115029] [server-handshake.c:561:server_setvolume] 0-pool-server: accepted client from CTX_ID:9eea4bec-a522-4a29-be83-5d66c04ce6ee-GRAPH_ID:0-PID:7652-HOST:nabu-PC_NAME:pool-client-2-RECON_NO:-0 (version: 8.0) with subvol /wall/pool/brick
[2020-08-12 07:08:32.669522] E [MSGID: 113040] [posix-inode-fd-ops.c:1727:posix_readv] 0-pool-posix: read failed on gfid=231fbad6-8d8d-4555-8137-2362a06fc140, fd=0x7f342800ca38, offset=0 size=512, buf=0x7f345450f000 [Invalid argument]
[2020-08-12 07:08:32.669565] E [MSGID: 115068] [server-rpc-fops_v2.c:1374:server4_readv_cbk] 0-pool-server: READ info [{frame=34505}, {READV_fd_no=0}, {uuid_utoa=231fbad6-8d8d-4555-8137-2362a06fc140}, {client=CTX_ID:9eea4bec-a522-4a29-be83-5d66c04ce6ee-GRAPH_ID:0-PID:7652-HOST:nabu-PC_NAME:pool-client-2-RECON_NO:-0}, {error-xlator=pool-posix}, {errno=22}, {error=Invalid argument}]
[2020-08-12 07:08:33.241625] E [MSGID: 113040] [posix-inode-fd-ops.c:1727:posix_readv] 0-pool-posix: read failed on gfid=231fbad6-8d8d-4555-8137-2362a06fc140, fd=0x7f342800ca38, offset=0 size=512, buf=0x7f345450f000 [Invalid argument]
[2020-08-12 07:08:33.241669] E [MSGID: 115068] [server-rpc-fops_v2.c:1374:server4_readv_cbk] 0-pool-server: READ info [{frame=34507}, {READV_fd_no=0}, {uuid_utoa=231fbad6-8d8d-4555-8137-2362a06fc140}, {client=CTX_ID:9eea4bec-a522-4a29-be83-5d66c04ce6ee-GRAPH_ID:0-PID:7652-HOST:nabu-PC_NAME:pool-client-2-RECON_NO:-0}, {error-xlator=pool-posix}, {errno=22}, {error=Invalid argument}]
[2020-08-12 07:09:45.897326] W [socket.c:767:__socket_rwv] 0-tcp.pool-server: readv on 192.168.222.25:49081 failed (No data available)
[2020-08-12 07:09:45.897357] I [MSGID: 115036] [server.c:498:server_rpc_notify] 0-pool-server: disconnecting connection [{client-uid=CTX_ID:9eea4bec-a522-4a29-be83-5d66c04ce6ee-GRAPH_ID:0-PID:7652-HOST:nabu-PC_NAME:pool-client-2-RECON_NO:-0}]
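A note on the failed reads in this log: size=512 with [Invalid argument] (errno 22) on a brick opened with O_DIRECT is what you would expect if the underlying device needs larger-aligned I/O. VDO volumes typically expose a 4096-byte logical block size (this value is an assumption about this setup; verify with `blockdev --getss` on the VDO device). A minimal sketch of the alignment rule:

```shell
# O_DIRECT I/O must be a multiple of the device's logical block size.
# VDO commonly reports 4096 (verify: blockdev --getss /dev/mapper/<vdo-dev>);
# the brick log shows 512-byte reads, which would then fail with EINVAL.
LBS=4096   # assumed VDO logical block size
REQ=512    # read size seen in the brick log
if [ $(( REQ % LBS )) -eq 0 ]; then
    echo "aligned: O_DIRECT read allowed"
else
    echo "unaligned: O_DIRECT read fails with EINVAL"
fi
```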


Thank you!

On 12.08.2020 11:00, Dmitry Melekhov wrote:

Hello!


We are testing gluster 8 on centos 8.2 and we try to use volume 
created over vdo.


This is 2 nodes setup.

There is lvm created over vdo, and xfs filesystem.


Test vm runs just fine if we run vm over fuse:

[libvirt <disk> XML stripped by the list archive]

/root/pool/ is fuse mount.


but if we try to run:

[libvirt <disk> XML with the gluster (gfapi) source stripped by the list archive]

then vm boot dies; qemu says: no bootable device.


It works without cache='directsync' though.

But live migration does not work.


btw, everything works OK if we run VM on gluster volume without vdo...

Any ideas what can cause this and how it can be fixed?


Thank you!









[Gluster-users] gluster over vdo, problem with gfapi

2020-08-12 Thread Dmitry Melekhov

Hello!


We are testing gluster 8 on centos 8.2 and we try to use volume created 
over vdo.


This is 2 nodes setup.

There is lvm created over vdo, and xfs filesystem.


Test vm runs just fine if we run vm over fuse:

[libvirt <disk> XML stripped by the list archive]

/root/pool/ is fuse mount.


but if we try to run:

[libvirt <disk> XML with the gluster (gfapi) source stripped by the list archive]

then vm boot dies; qemu says: no bootable device.


It works without cache='directsync' though.

But live migration does not work.


btw, everything works OK if we run VM on gluster volume without vdo...

Any ideas what can cause this and how it can be fixed?


Thank you!




