Re: [Gluster-users] GlusterFS Replica over ZFS

2024-09-21 Thread Gilberto Ferreira
Hi

Yes! I always use some of the features in the virt group.
It has been my best practice for years now.
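For anyone reading this in the archives: the whole virt group can be applied in one command. A minimal sketch, assuming a volume named VMS (the volume name is a placeholder):

```shell
# Apply the predefined 'virt' option group (shipped at
# /var/lib/glusterd/groups/virt on the server nodes) to a volume
gluster volume set VMS group virt

# Confirm which options the group enabled
gluster volume info VMS
```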

Thanks Strahil.
---


Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Sat, Sep 21, 2024 at 04:56, Strahil Nikolov wrote:

> I assume you will be using the volumes for VM workload.
> There is a 'virt' group of settings optimized for virtualization (located
> at /var/lib/glusterd/groups/virt) which is also used by oVirt. It
> guarantees that VMs can live migrate without breaking.
>
>
> Best Regards,
> Strahil Nikolov
>
> On Fri, Sep 20, 2024 at 19:00, Gilberto Ferreira
>  wrote:
> Hi there.
>
> I am about to set up 3 servers with GlusterFS over ZFS, running on Proxmox
> VE 8.
> Any advice or warnings about this setup, or am I on the right track?
>
> Thanks for any advice.
> ---
>
>
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>






[Gluster-users] GlusterFS Replica over ZFS

2024-09-20 Thread Gilberto Ferreira
Hi there.

I am about to set up 3 servers with GlusterFS over ZFS, running on Proxmox
VE 8.
Any advice or warnings about this setup, or am I on the right track?

Thanks for any advice.
---


Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






[Gluster-users] Change brick-port

2024-08-23 Thread Gilberto Ferreira
Hi folks.

I need to pin the brick port for a Gluster replicated volume.
Every time I start the volume it is assigned a different port.
Is there any way to fix it?
Thanks
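For later readers: there is no per-volume option to pin a brick to a single port, but glusterd's allocation range can be narrowed so bricks always come up inside a small, predictable window. A sketch assuming stock packaging paths; verify the option names against your own glusterd.vol:

```shell
# In /etc/glusterfs/glusterd.vol, inside the "volume management" block,
# narrow the range glusterd hands out to bricks (values are examples):
#
#     option base-port 49152
#     option max-port  49156
#
# Apply the edit on every node, then restart the management daemon:
systemctl restart glusterd
```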






Re: [Gluster-users] geo-rep will not initialize

2024-08-22 Thread Gilberto Ferreira
Perhaps you can use these tools:
https://aravindavk.in/blog/gluster-georep-tools/

I am using it with great success.
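For the archives, the tools from that post wrap the whole session setup into one command. A sketch with placeholder names (primary volume "gvol", secondary host "backup1"); check the project's README for the exact installation method:

```shell
# Install the helper tools (method per the project's instructions)
pip install gluster-georep-tools

# Create and start a geo-replication session from primary volume "gvol"
# to volume "gvol" on secondary host "backup1" (placeholder names)
gluster-georep-setup gvol root@backup1::gvol

# Summarize the health of all geo-rep sessions
gluster-georep-status
```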






On Thu, Aug 22, 2024 at 17:36, Karl Kleinpaste wrote:

> On 8/22/24 14:08, Strahil Nikolov wrote:
>
> I can try to reproduce it if you could provide the gluster version,
> operating system and volume options.
>
>
> Most kind.
>
> Fedora 39. Packages:
>
> $ grep gluster /var/log/rpmpkgs
> gluster-block-0.5-11.fc39.x86_64.rpm
> glusterfs-11.1-1.fc39.x86_64.rpm
> glusterfs-cli-11.1-1.fc39.x86_64.rpm
> glusterfs-client-xlators-11.1-1.fc39.x86_64.rpm
> glusterfs-cloudsync-plugins-11.1-1.fc39.x86_64.rpm
> glusterfs-coreutils-0.3.2-1.fc39.x86_64.rpm
> glusterfs-events-11.1-1.fc39.x86_64.rpm
> glusterfs-extra-xlators-11.1-1.fc39.x86_64.rpm
> glusterfs-fuse-11.1-1.fc39.x86_64.rpm
> glusterfs-geo-replication-11.1-1.fc39.x86_64.rpm
> glusterfs-resource-agents-11.1-1.fc39.noarch.rpm
> glusterfs-server-11.1-1.fc39.x86_64.rpm
> glusterfs-thin-arbiter-11.1-1.fc39.x86_64.rpm
> libglusterfs0-11.1-1.fc39.x86_64.rpm
> libvirt-daemon-driver-storage-gluster-9.7.0-4.fc39.x86_64.rpm
> python3-gluster-11.1-1.fc39.x86_64.rpm
> qemu-block-gluster-8.1.3-5.fc39.x86_64.rpm
>
> (Somewhere along the way, I'm sure I just did "dnf install *gluster*".)
>
> The two volumes were created using the quick start guide:
> https://docs.gluster.org/en/main/Quick-Start-Guide/Quickstart/
> which means that, after establishing peering, I used these simple commands:
>
> (on pjs) gluster volume create j pjs:/xx/brick
> (on pms) gluster volume create n pms:/xx/brick
>
> where /xx on these 2 systems are small, spare, otherwise empty, identical
> filesystems of about 40G, formatted ext4. No other options were used in
> creation.
>
> As I said in my initial note, it seems that the underlying problem (from
> logged complaints) is lack of a geo-rep template configuration from which
> to set up, and I simply don't know where/how/when that should have been
> created. But this is just a surmise on my part.
> 
>
>
>
>






Re: [Gluster-users] Geo Replication sync intervals

2024-08-18 Thread Gilberto Ferreira
Oh! Is that so?
Never mind!
I just wanted to know whether it is possible to control how long it takes
between the async jobs.
Thank you for the response.
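For later readers: there is no fixed-interval knob (replication is changelog-driven), but session behaviour can be inspected, and parts of it tuned, through the `config` subcommand. A sketch with placeholder volume/host names; verify option names against your version's documentation:

```shell
# Show all current settings of a geo-rep session
gluster volume geo-replication gvol backup1::gvol config

# Raise the number of parallel sync workers (example value)
gluster volume geo-replication gvol backup1::gvol config sync-jobs 3

# Set a checkpoint, then watch status to see when the sites catch up
gluster volume geo-replication gvol backup1::gvol config checkpoint now
gluster volume geo-replication gvol backup1::gvol status detail
```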


---


Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Sun, Aug 18, 2024 at 17:46, Strahil Nikolov wrote:

> Hi Gilberto,
>
> I doubt you can change that stuff. Officially it's async replication and
> it might take some time to replicate.
>
> What do you want to improve ?
>
> Best Regards,
> Strahil Nikolov
>
> On Friday, August 16, 2024 at 20:31:25 GMT+3, Gilberto Ferreira <
> gilberto.nune...@gmail.com> wrote:
>
>
> Hi there.
>
> I have two sites with Gluster geo-replication, and everything works pretty
> well.
> But I want to check the sync intervals and whether there is some way to
> change them.
> Thanks for any tips.
> ---
>
>
> Gilbert
>
>
>
>
> 
>
>
>
>






[Gluster-users] Geo Replication sync intervals

2024-08-16 Thread Gilberto Ferreira
Hi there.

I have two sites with Gluster geo-replication, and everything works pretty
well.
But I want to check the sync intervals and whether there is some way to
change them.
Thanks for any tips.
---


Gilbert






Re: [Gluster-users] Upgrade 10.4 -> 11.1 making problems

2024-01-19 Thread Gilberto Ferreira
> 2271
> > >> Brick glusterpub3:/gluster/md4/workdata    49461   0   Y   2399
> > >> Brick glusterpub1:/gluster/md5/workdata    54651   0   Y   4208
> > >> Brick glusterpub2:/gluster/md5/workdata    49685   0   Y   2751
> > >> Brick glusterpub3:/gluster/md5/workdata    59202   0   Y   2803
> > >> Brick glusterpub1:/gluster/md6/workdata    55829   0   Y   4583
> > >> Brick glusterpub2:/gluster/md6/workdata    50455   0   Y   3296
> > >> Brick glusterpub3:/gluster/md6/workdata    50262   0   Y   3237
> > >> Brick glusterpub1:/gluster/md7/workdata    52238   0   Y   5014
> > >> Brick glusterpub2:/gluster/md7/workdata    52474   0   Y   3673
> > >> Brick glusterpub3:/gluster/md7/workdata    57966   0   Y   3653
> > >> Self-heal Daemon on localhost              N/A    N/A   Y   4141
> > >> Self-heal Daemon on glusterpub1            N/A    N/A   Y   5570
> > >> Self-heal Daemon on glusterpub2            N/A    N/A   Y   4139
> > >>
> > >> "gluster volume heal workdata info" lists a lot of files per brick.
> > >> "gluster volume heal workdata statistics heal-count" shows thousands
> > >> of files per brick.
> > >> "gluster volume heal workdata enable" has no effect.
> > >>
> > >> gluster volume heal workdata full
> > >> Launching heal operation to perform full self heal on volume workdata
> > >> has been successful
> > >> Use heal info commands to check status.
> > >>
> > >> -> not doing anything at all. And nothing happening on the 2 "good"
> > >> servers in e.g. glustershd.log. Heal was working as expected on
> > >> version 10.4, but here... silence. Someone has an idea?
> > >>
> > >>
> > >> Best regards,
> > >> Hubert
> > >>
> > >> On Tue, Jan 16, 2024 at 13:44, Gilberto Ferreira wrote:
> > >>>
> > >>> Ah! Indeed! You need to perform an upgrade on the clients as well.
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>
> > >>>> On Tue, Jan 16, 2024 at 03:12, Hu Bert <revi...@googlemail.com> wrote:
> > >>>>
> > >>>> morning to those still reading :-)
> > >>>>
> > >>>> i found this:
> https://docs.gluster.org/en/main/Troubleshooting/troubleshooting-glusterd/#common-issues-and-how-to-resolve-them
> > >>>>
> > >>>> there's a paragraph about "peer rejected" with the same error
> message,
> > >>>> telling me: "Update the cluster.op-version" - i had only updated the
> > >>>> server nodes, but not the clients. So upgrading the
> cluster.op-version
> > >>>> wasn't possible at this time. So... upgrading the clients to version
> > >>>> 11.1 and then the op-version should solve the problem?
> > >>>>
> > >>>>
> > >>>> Thx,
> > >>>> Hubert
> > >>>>
> > >>>> On Mon, Jan 15, 2024 at 09:16, Hu Bert <revi...@googlemail.com> wrote:
> > >>>>>
> > >>>>> Hi,
> > >>>>> just upgraded some gluster servers from version 10.4 to version
> 11.1.
> > >>>>> Debian bullseye & bookworm. When only installing the packages:
> good,
> > >>>>> servers, volumes etc. work as expected.
> > >>>>>
> > >>>>> But one needs to test if the systems work after a daemon and/or
> server
> > >>>>> restart. Well, did a reboot, and after that the rebooted/restarted
> > >>>>> system is "out". Log message from working node:
> > >>>>>
> > >>>>> [2024-01-15 08:02:21.585694 +] I [MSGID: 106163]
> > >>>>> [glusterd-handshake.c:1501:__glusterd_mgmt_hndsk_versions_ack]
> > >>>>> 0-management: using the op-version 10
> > >>>>> [2024-01-15 08:02:21.589601 +] I [MSGID: 106490]
> > >>>>> [glusterd-handler.c:2546:__glusterd_handle_incoming_friend_req]
> > >>>>> 0-gluster

Re: [Gluster-users] Upgrade 10.4 -> 11.1 making problems

2024-01-16 Thread Gilberto Ferreira
Ah! Indeed! You need to perform an upgrade on the clients as well.
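The op-version check and bump discussed in this thread look like this. A sketch; 110000 is the value matching the 11 release series, but confirm with cluster.max-op-version before setting:

```shell
# Current and maximum supported cluster op-version
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version

# After ALL servers and clients run 11.x, raise the op-version
gluster volume set all cluster.op-version 110000
```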








On Tue, Jan 16, 2024 at 03:12, Hu Bert wrote:

> morning to those still reading :-)
>
> i found this:
> https://docs.gluster.org/en/main/Troubleshooting/troubleshooting-glusterd/#common-issues-and-how-to-resolve-them
>
> there's a paragraph about "peer rejected" with the same error message,
> telling me: "Update the cluster.op-version" - i had only updated the
> server nodes, but not the clients. So upgrading the cluster.op-version
> wasn't possible at this time. So... upgrading the clients to version
> 11.1 and then the op-version should solve the problem?
>
>
> Thx,
> Hubert
>
> On Mon, Jan 15, 2024 at 09:16, Hu Bert wrote:
> >
> > Hi,
> > just upgraded some gluster servers from version 10.4 to version 11.1.
> > Debian bullseye & bookworm. When only installing the packages: good,
> > servers, volumes etc. work as expected.
> >
> > But one needs to test if the systems work after a daemon and/or server
> > restart. Well, did a reboot, and after that the rebooted/restarted
> > system is "out". Log message from working node:
> >
> > [2024-01-15 08:02:21.585694 +] I [MSGID: 106163]
> > [glusterd-handshake.c:1501:__glusterd_mgmt_hndsk_versions_ack]
> > 0-management: using the op-version 10
> > [2024-01-15 08:02:21.589601 +] I [MSGID: 106490]
> > [glusterd-handler.c:2546:__glusterd_handle_incoming_friend_req]
> > 0-glusterd: Received probe from uuid:
> > b71401c3-512a-47cb-ac18-473c4ba7776e
> > [2024-01-15 08:02:23.608349 +] E [MSGID: 106010]
> > [glusterd-utils.c:3824:glusterd_compare_friend_volume] 0-management:
> > Version of Cksums sourceimages differ. local cksum = 2204642525,
> > remote cksum = 1931483801 on peer gluster190
> > [2024-01-15 08:02:23.608584 +] I [MSGID: 106493]
> > [glusterd-handler.c:3819:glusterd_xfer_friend_add_resp] 0-glusterd:
> > Responded to gluster190 (0), ret: 0, op_ret: -1
> > [2024-01-15 08:02:23.613553 +] I [MSGID: 106493]
> > [glusterd-rpc-ops.c:467:__glusterd_friend_add_cbk] 0-glusterd:
> > Received RJT from uuid: b71401c3-512a-47cb-ac18-473c4ba7776e, host:
> > gluster190, port: 0
> >
> > peer status from rebooted node:
> >
> > root@gluster190 ~ # gluster peer status
> > Number of Peers: 2
> >
> > Hostname: gluster189
> > Uuid: 50dc8288-aa49-4ea8-9c6c-9a9a926c67a7
> > State: Peer Rejected (Connected)
> >
> > Hostname: gluster188
> > Uuid: e15a33fe-e2f7-47cf-ac53-a3b34136555d
> > State: Peer Rejected (Connected)
> >
> > So the rebooted gluster190 is not accepted anymore. And thus does not
> > appear in "gluster volume status". I then followed this guide:
> >
> >
> https://gluster-documentations.readthedocs.io/en/latest/Administrator%20Guide/Resolving%20Peer%20Rejected/
> >
> > Remove everything under /var/lib/glusterd/ (except glusterd.info) and
> > restart glusterd service etc. Data get copied from other nodes,
> > 'gluster peer status' is ok again - but the volume info is missing,
> > /var/lib/glusterd/vols is empty. When syncing this dir from another
> > node, the volume then is available again, heals start etc.
> >
> > Well, and just to be sure that everything's working as it should,
> > rebooted that node again - the rebooted node is kicked out again, and
> > you have to restart bringing it back again.
> >
> > Sry, but did i miss anything? Has someone experienced similar
> > problems? I'll probably downgrade to 10.4 again, that version was
> > working...
> >
> >
> > Thx,
> > Hubert
> 
>
>
>
>






Re: [Gluster-users] Gluster Performance - 12 Gbps SSDs and 10 Gbps NIC

2023-12-14 Thread Gilberto Ferreira
Thanks for the advice.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Thu, Dec 14, 2023 at 09:54, Strahil Nikolov wrote:

> Hi Gilberto,
>
>
> Have you checked
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance
>  ?
>
> I think that you will need to test the virt profile as the settings will
> prevent some bad situations - especially VM live migration.
> You should also consider sharding which can reduce healing time but also
> makes your life more difficult if you need to access the disks of the VMs.
>
> I think that client.event-thread , server.event-thread and
> performance.io-thread-count can be tuned in your case. Consider setting up
> a VM using the gluster volume as backing store and run the tests inside the
> VM to simulate real workload (best is to run a DB, webserver, etc inside
> a VM).
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
>
>
> On Wednesday, December 13, 2023, 2:34 PM, Gilberto Ferreira <
> gilberto.nune...@gmail.com> wrote:
>
> Hi all
> Aravinda, usually I set this in two server env and never get split brain:
> gluster vol set VMS cluster.heal-timeout 5
> gluster vol heal VMS enable
> gluster vol set VMS cluster.quorum-reads false
> gluster vol set VMS cluster.quorum-count 1
> gluster vol set VMS network.ping-timeout 2
> gluster vol set VMS cluster.favorite-child-policy mtime
> gluster vol heal VMS granular-entry-heal enable
> gluster vol set VMS cluster.data-self-heal-algorithm full
> gluster vol set VMS features.shard on
>
> Strahil, in general, I get 0.06 ms with a dedicated 1G NIC.
> My environment is very simple: Proxmox + QEMU/KVM, with 3 or 5 VMs.
>
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> On Wed, Dec 13, 2023 at 06:08, Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
>
> Hi Aravinda,
>
> Based on the output it’s a ‘replica 3 arbiter 1’ type.
>
> Gilberto,
> What’s the latency between the nodes ?
>
> Best Regards,
> Strahil Nikolov
>
>
>
> On Wednesday, December 13, 2023, 7:36 AM, Aravinda 
> wrote:
>
> Only Replica 2 or Distributed Gluster volumes can be created with two
> servers. High chance of split brain with Replica 2 compared to Replica 3
> volume.
>
> For NFS Ganesha, no issue exporting the volume even if only one server is
> available. Run NFS Ganesha servers in Gluster server nodes and NFS clients
> from the network can connect to any NFS Ganesha server.
>
> You can use Haproxy + Keepalived (or any other load balancer) if high
> availability required for the NFS Ganesha connections (Ex: If a server node
> goes down, then nfs client can connect to other NFS ganesha server node).
>
> --
> Aravinda
> Kadalu Technologies
>
>
>
> On Wed, 13 Dec 2023 01:42:11 +0530, Gilberto Ferreira wrote:
>
> Ah, that's nice.
> Does somebody know if this can be achieved with two servers?
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
>
> On Tue, Dec 12, 2023 at 17:08, Danny wrote:
>
> 
>
>
>
>
> Wow, HUGE improvement with NFS-Ganesha!
>
>
> sudo dnf -y install glusterfs-ganesha
> sudo vim /etc/ganesha/ganesha.conf
>
> NFS_CORE_PARAM {
> mount_path_pseudo = true;
> Protocols = 3,4;
> }
> EXPORT_DEFAULTS {
> Access_Type = RW;
> }
>
> LOG {
> Default_Log_Level = WARN;
> }
>
> EXPORT{
> Export_Id = 1 ; # Export ID unique to each export
> Path = "/data"; # Path of the volume to be exported
>
> FSAL {
> name = GLUSTER;
> hostname = "localhost"; # IP of one of the nodes in the trusted
> pool
> volume = "data";# Volume name. Eg: "test_volume"
> }
>
> Access_type = RW;   # Access permissions
> Squash = No_root_squash;# To enable/disable root squashing
> Disable_ACL = TRUE; # To enable/disable ACL
> Pseudo = "/data";   # NFSv4 pseudo path for this export
> Protocols = "3","4" ;   # NFS protocols supported
> Transports = "UDP","TCP" ;  # Transport protocols supported
> 

Re: [Gluster-users] Gluster Performance - 12 Gbps SSDs and 10 Gbps NIC

2023-12-13 Thread Gilberto Ferreira
Hi all
Aravinda, usually I set this in two server env and never get split brain:
gluster vol set VMS cluster.heal-timeout 5
gluster vol heal VMS enable
gluster vol set VMS cluster.quorum-reads false
gluster vol set VMS cluster.quorum-count 1
gluster vol set VMS network.ping-timeout 2
gluster vol set VMS cluster.favorite-child-policy mtime
gluster vol heal VMS granular-entry-heal enable
gluster vol set VMS cluster.data-self-heal-algorithm full
gluster vol set VMS features.shard on

Strahil, in general, I get 0.06 ms with a dedicated 1G NIC.
My environment is very simple: Proxmox + QEMU/KVM, with 3 or 5 VMs.
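The latency figure above can be measured with a plain ping between peers; with an 8972-byte unfragmented payload the same command also verifies MTU 9000, as suggested elsewhere in this thread. "gluster2" is a placeholder peer hostname:

```shell
# Round-trip latency summary to a peer node
ping -c 10 -q gluster2

# Jumbo-frame check: 8972 bytes payload + 28 bytes of headers = MTU 9000
ping -c 10 -M do -s 8972 gluster2
```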


---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Wed, Dec 13, 2023 at 06:08, Strahil Nikolov wrote:

> Hi Aravinda,
>
> Based on the output it’s a ‘replica 3 arbiter 1’ type.
>
> Gilberto,
> What’s the latency between the nodes ?
>
> Best Regards,
> Strahil Nikolov
>
>
>
> On Wednesday, December 13, 2023, 7:36 AM, Aravinda 
> wrote:
>
> Only Replica 2 or Distributed Gluster volumes can be created with two
> servers. High chance of split brain with Replica 2 compared to Replica 3
> volume.
>
> For NFS Ganesha, no issue exporting the volume even if only one server is
> available. Run NFS Ganesha servers in Gluster server nodes and NFS clients
> from the network can connect to any NFS Ganesha server.
>
> You can use Haproxy + Keepalived (or any other load balancer) if high
> availability required for the NFS Ganesha connections (Ex: If a server node
> goes down, then nfs client can connect to other NFS ganesha server node).
>
> --
> Aravinda
> Kadalu Technologies
>
>
>
> On Wed, 13 Dec 2023 01:42:11 +0530, Gilberto Ferreira wrote:
>
> Ah, that's nice.
> Does somebody know if this can be achieved with two servers?
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
>
> On Tue, Dec 12, 2023 at 17:08, Danny wrote:
>
> 
>
>
>
>
> Wow, HUGE improvement with NFS-Ganesha!
>
>
> sudo dnf -y install glusterfs-ganesha
> sudo vim /etc/ganesha/ganesha.conf
>
> NFS_CORE_PARAM {
> mount_path_pseudo = true;
> Protocols = 3,4;
> }
> EXPORT_DEFAULTS {
> Access_Type = RW;
> }
>
> LOG {
> Default_Log_Level = WARN;
> }
>
> EXPORT{
> Export_Id = 1 ; # Export ID unique to each export
> Path = "/data"; # Path of the volume to be exported
>
> FSAL {
> name = GLUSTER;
> hostname = "localhost"; # IP of one of the nodes in the trusted
> pool
> volume = "data";# Volume name. Eg: "test_volume"
> }
>
> Access_type = RW;   # Access permissions
> Squash = No_root_squash;# To enable/disable root squashing
> Disable_ACL = TRUE; # To enable/disable ACL
> Pseudo = "/data";   # NFSv4 pseudo path for this export
> Protocols = "3","4" ;   # NFS protocols supported
> Transports = "UDP","TCP" ;  # Transport protocols supported
> SecType = "sys";# Security flavors supported
> }
>
>
> sudo systemctl enable --now nfs-ganesha
> sudo vim /etc/fstab
>
> localhost:/data /data nfs
> defaults,_netdev  0 0
>
>
> sudo systemctl daemon-reload
> sudo mount -a
>
> fio --name=test --filename=/data/wow --size=1G --readwrite=write
>
> Run status group 0 (all jobs):
>   WRITE: bw=2246MiB/s (2355MB/s), 2246MiB/s-2246MiB/s (2355MB/s-2355MB/s),
> io=1024MiB (1074MB), run=456-456msec
>
> Yeah 2355MB/s is much better than the original 115MB/s
>
> So in the end, I guess FUSE isn't the best choice.
>
> On Tue, Dec 12, 2023 at 3:00 PM Gilberto Ferreira <
> gilberto.nune...@gmail.com> wrote:
>
> FUSE adds some overhead.
> Take a look at libgfapi:
>
> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/
>
> I know this doc is somewhat out of date, but it could be a hint
>
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
>
> On Tue, Dec 12, 2023 at 16:29, Danny wrote:
>
> Nope, not a caching thing. I've tried multiple different types of fio
> tests, all produce the same results. Gbps when hitting the disk

Re: [Gluster-users] Gluster Performance - 12 Gbps SSDs and 10 Gbps NIC

2023-12-12 Thread Gilberto Ferreira
Ah, that's nice.
Does somebody know if this can be achieved with two servers?

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Tue, Dec 12, 2023 at 17:08, Danny wrote:

> Wow, HUGE improvement with NFS-Ganesha!
>
> sudo dnf -y install glusterfs-ganesha
> sudo vim /etc/ganesha/ganesha.conf
>
> NFS_CORE_PARAM {
> mount_path_pseudo = true;
> Protocols = 3,4;
> }
> EXPORT_DEFAULTS {
> Access_Type = RW;
> }
>
> LOG {
> Default_Log_Level = WARN;
> }
>
> EXPORT{
> Export_Id = 1 ; # Export ID unique to each export
> Path = "/data"; # Path of the volume to be exported
>
> FSAL {
> name = GLUSTER;
> hostname = "localhost"; # IP of one of the nodes in the trusted
> pool
> volume = "data";# Volume name. Eg: "test_volume"
> }
>
> Access_type = RW;   # Access permissions
> Squash = No_root_squash;# To enable/disable root squashing
> Disable_ACL = TRUE; # To enable/disable ACL
> Pseudo = "/data";   # NFSv4 pseudo path for this export
> Protocols = "3","4" ;   # NFS protocols supported
> Transports = "UDP","TCP" ;  # Transport protocols supported
> SecType = "sys";# Security flavors supported
> }
>
>
> sudo systemctl enable --now nfs-ganesha
> sudo vim /etc/fstab
>
> localhost:/data /data nfs
> defaults,_netdev  0 0
>
> sudo systemctl daemon-reload
> sudo mount -a
>
> fio --name=test --filename=/data/wow --size=1G --readwrite=write
>
> Run status group 0 (all jobs):
>   WRITE: bw=2246MiB/s (2355MB/s), 2246MiB/s-2246MiB/s (2355MB/s-2355MB/s),
> io=1024MiB (1074MB), run=456-456msec
>
> Yeah 2355MB/s is much better than the original 115MB/s
>
> So in the end, I guess FUSE isn't the best choice.
>
> On Tue, Dec 12, 2023 at 3:00 PM Gilberto Ferreira <
> gilberto.nune...@gmail.com> wrote:
>
>> FUSE adds some overhead.
>> Take a look at libgfapi:
>>
>> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/
>>
>> I know this doc is somewhat out of date, but it could be a hint
>>
>>
>> ---
>> Gilberto Nunes Ferreira
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>>
>>
>>
>>
>>
>> On Tue, Dec 12, 2023 at 16:29, Danny wrote:
>>
>>> Nope, not a caching thing. I've tried multiple different types of fio
>>> tests, all produce the same results. Gbps when hitting the disks locally,
>>> slow MB\s when hitting the Gluster FUSE mount.
>>>
>>> I've been reading up on glustr-ganesha, and will give that a try.
>>>
>>> On Tue, Dec 12, 2023 at 1:58 PM Ramon Selga 
>>> wrote:
>>>
>>>> Dismiss my first question: you have SAS 12 Gbps SSDs. Sorry!
>>>>
>>>> On 12/12/23 at 19:52, Ramon Selga wrote:
>>>>
>>>> May I ask which kind of disks you have in this setup? Rotational, SSD
>>>> SAS/SATA, NVMe?
>>>>
>>>> Is there a RAID controller with writeback caching?
>>>>
>>>> It seems to me your fio test on the local brick has an unclear result
>>>> due to some caching.
>>>>
>>>> Try something like (you can consider to increase test file size
>>>> depending of your caching memory) :
>>>>
>>>> fio --size=16G --name=test --filename=/gluster/data/brick/wow --bs=1M
>>>> --nrfiles=1 --direct=1 --sync=0 --randrepeat=0 --rw=write --refill_buffers
>>>> --end_fsync=1 --iodepth=200 --ioengine=libaio
>>>>
>>>> Also remember a replica 3 arbiter 1 volume writes synchronously to two
>>>> data bricks, halving throughput of your network backend.
>>>>
>>>> Try similar fio on gluster mount but I hardly see more than 300MB/s
>>>> writing sequentially on only one fuse mount even with nvme backend. On the
>>>> other side, with 4 to 6 clients, you can easily reach 1.5GB/s of aggregate
>>>> throughput
>>>>
>>>> To start, I think is better to try with default parameters for your
>>>> replica volume.
>>>>
>>>> Best regards!
>>>>
>>>> Ramon
>>>>
>>>>
>>>> On 12/12/23 at 19:10, Danny wrote:
>>>>
>>>> Sorry, I noticed that too after I posted, so I instantly upgraded to
>>>> 10. 

Re: [Gluster-users] Gluster Performance - 12 Gbps SSDs and 10 Gbps NIC

2023-12-12 Thread Gilberto Ferreira
FUSE adds some overhead.
Take a look at libgfapi:
https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/

I know this doc is somewhat out of date, but it could be a hint
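As a concrete illustration of skipping FUSE, QEMU can open an image straight through libgfapi using a gluster:// URL. A sketch; host, volume and image names are placeholders:

```shell
# Create a qcow2 image directly on the Gluster volume "data" via libgfapi
qemu-img create -f qcow2 gluster://gluster1/data/vm1.qcow2 20G

# Attach it to a guest without any FUSE mount in the data path
qemu-system-x86_64 -m 2048 \
  -drive file=gluster://gluster1/data/vm1.qcow2,if=virtio,format=qcow2
```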


---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Tue, Dec 12, 2023 at 16:29, Danny wrote:

> Nope, not a caching thing. I've tried multiple different types of fio
> tests, all produce the same results. Gbps when hitting the disks locally,
> slow MB\s when hitting the Gluster FUSE mount.
>
> I've been reading up on glustr-ganesha, and will give that a try.
>
> On Tue, Dec 12, 2023 at 1:58 PM Ramon Selga  wrote:
>
>> Dismiss my first question: you have SAS 12 Gbps SSDs. Sorry!
>>
>> On 12/12/23 at 19:52, Ramon Selga wrote:
>>
>> May I ask which kind of disks you have in this setup? Rotational, SSD
>> SAS/SATA, NVMe?
>>
>> Is there a RAID controller with writeback caching?
>>
>> It seems to me your fio test on the local brick has an unclear result due
>> to some caching.
>>
>> Try something like (you can consider to increase test file size depending
>> of your caching memory) :
>>
>> fio --size=16G --name=test --filename=/gluster/data/brick/wow --bs=1M
>> --nrfiles=1 --direct=1 --sync=0 --randrepeat=0 --rw=write --refill_buffers
>> --end_fsync=1 --iodepth=200 --ioengine=libaio
>>
>> Also remember a replica 3 arbiter 1 volume writes synchronously to two
>> data bricks, halving throughput of your network backend.
>>
>> Try similar fio on gluster mount but I hardly see more than 300MB/s
>> writing sequentially on only one fuse mount even with nvme backend. On the
>> other side, with 4 to 6 clients, you can easily reach 1.5GB/s of aggregate
>> throughput
>>
>> To start, I think is better to try with default parameters for your
>> replica volume.
>>
>> Best regards!
>>
>> Ramon
>>
>>
>> On 12/12/23 at 19:10, Danny wrote:
>>
>> Sorry, I noticed that too after I posted, so I instantly upgraded to 10.
>> Issue remains.
>>
>> On Tue, Dec 12, 2023 at 1:09 PM Gilberto Ferreira <
>> gilberto.nune...@gmail.com> wrote:
>>
>>> I strongly suggest you update to version 10 or higher.
>>> It comes with significant performance improvements.
>>> ---
>>> Gilberto Nunes Ferreira
>>> (47) 99676-7530 - Whatsapp / Telegram
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Dec 12, 2023 at 13:03, Danny wrote:
>>>
>>>> MTU is already 9000, and as you can see from the IPERF results, I've
>>>> got a nice, fast connection between the nodes.
>>>>
>>>> On Tue, Dec 12, 2023 at 9:49 AM Strahil Nikolov 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Let’s try the simple things:
>>>>>
>>>>> Check if you can use MTU9000 and if it’s possible, set it on the Bond
>>>>> Slaves and the bond devices:
>>>>>  ping GLUSTER_PEER -c 10 -M do -s 8972
>>>>>
>>>>> Then try to follow up the recommendations from
>>>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance
>>>>>
>>>>>
>>>>>
>>>>> Best Regards,
>>>>> Strahil Nikolov
>>>>>
>>>>> On Monday, December 11, 2023, 3:32 PM, Danny <
>>>>> dbray925+glus...@gmail.com> wrote:
>>>>>
>>>>> Hello list, I'm hoping someone can let me know what setting I missed.
>>>>>
>>>>> Hardware:
>>>>> Dell R650 servers, Dual 24 Core Xeon 2.8 GHz, 1 TB RAM
>>>>> 8x SSDs, negotiated speed 12 Gbps
>>>>> PERC H755 Controller - RAID 6
>>>>> Created virtual "data" disk from the above 8 SSD drives, for a ~20 TB
>>>>> /dev/sdb
>>>>>
>>>>> OS:
>>>>> CentOS Stream
>>>>> kernel-4.18.0-526.el8.x86_64
>>>>> glusterfs-7.9-1.el8.x86_64
>>>>>
>>>>> IPERF Test between nodes:
>>>>> [ ID] Interval   Transfer Bitrate Retr
>>>>> [  5]   0.00-10.00  sec  11.5 GBytes  9.90 Gbits/sec0
>>>>> sender
>>>>> [  5]   0.00-10.04  sec  11.5 GBytes  9.86 Gbits/sec
>>>>>  receiver
>>>>>

Re: [Gluster-users] Gluster Performance - 12 Gbps SSDs and 10 Gbps NIC

2023-12-12 Thread Gilberto Ferreira
I strongly suggest you update to version 10 or higher.
It comes with significant performance improvements.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Tue, Dec 12, 2023 at 13:03, Danny wrote:

> MTU is already 9000, and as you can see from the IPERF results, I've got a
> nice, fast connection between the nodes.
>
> On Tue, Dec 12, 2023 at 9:49 AM Strahil Nikolov 
> wrote:
>
>> Hi,
>>
>> Let’s try the simple things:
>>
>> Check if you can use MTU9000 and if it’s possible, set it on the Bond
>> Slaves and the bond devices:
>>  ping GLUSTER_PEER -c 10 -M do -s 8972
>>
>> Then try to follow up the recommendations from
>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance
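As an aside: the 8972 in the ping test above is not arbitrary, it is the 9000-byte MTU minus 28 bytes of IPv4 + ICMP headers. A tiny hypothetical helper makes the arithmetic explicit:

```shell
# Hypothetical helper (not from the thread): largest ICMP payload that fits
# a given MTU without fragmenting. Overhead: 20-byte IPv4 header + 8-byte
# ICMP header = 28 bytes.
mtu_probe_size() {
  echo $(( $1 - 28 ))
}

mtu_probe_size 9000   # prints 8972
# Usage: ping GLUSTER_PEER -c 10 -M do -s "$(mtu_probe_size 9000)"
```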
>>
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Monday, December 11, 2023, 3:32 PM, Danny 
>> wrote:
>>
>> Hello list, I'm hoping someone can let me know what setting I missed.
>>
>> Hardware:
>> Dell R650 servers, Dual 24 Core Xeon 2.8 GHz, 1 TB RAM
>> 8x SSDs, negotiated speed 12 Gbps
>> PERC H755 Controller - RAID 6
>> Created virtual "data" disk from the above 8 SSD drives, for a ~20 TB
>> /dev/sdb
>>
>> OS:
>> CentOS Stream
>> kernel-4.18.0-526.el8.x86_64
>> glusterfs-7.9-1.el8.x86_64
>>
>> IPERF Test between nodes:
>> [ ID] Interval   Transfer Bitrate Retr
>> [  5]   0.00-10.00  sec  11.5 GBytes  9.90 Gbits/sec0
>> sender
>> [  5]   0.00-10.04  sec  11.5 GBytes  9.86 Gbits/sec
>>  receiver
>>
>> All good there. ~10 Gbps, as expected.
>>
>> LVM Install:
>> export DISK="/dev/sdb"
>> sudo parted --script $DISK "mklabel gpt"
>> sudo parted --script $DISK "mkpart primary 0% 100%"
>> sudo parted --script $DISK "set 1 lvm on"
>> sudo pvcreate --dataalignment 128K /dev/sdb1
>> sudo vgcreate --physicalextentsize 128K gfs_vg /dev/sdb1
>> sudo lvcreate -L 16G -n gfs_pool_meta gfs_vg
>> sudo lvcreate -l 95%FREE -n gfs_pool gfs_vg
>> sudo lvconvert --chunksize 1280K --thinpool gfs_vg/gfs_pool
>> --poolmetadata gfs_vg/gfs_pool_meta
>> sudo lvchange --zero n gfs_vg/gfs_pool
>> sudo lvcreate -V 19.5TiB --thinpool gfs_vg/gfs_pool -n gfs_lv
>> sudo mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10
>> /dev/mapper/gfs_vg-gfs_lv
>> sudo vim /etc/fstab
>> /dev/mapper/gfs_vg-gfs_lv   /gluster/data/brick   xfs
>> rw,inode64,noatime,nouuid 0 0
>>
>> sudo systemctl daemon-reload && sudo mount -a
>> fio --name=test --filename=/gluster/data/brick/wow --size=1G
>> --readwrite=write
>>
>> Run status group 0 (all jobs):
>>   WRITE: bw=2081MiB/s (2182MB/s), 2081MiB/s-2081MiB/s
>> (2182MB/s-2182MB/s), io=1024MiB (1074MB), run=492-492msec
>>
>> All good there. 2182MB/s =~ 17.5 Gbps. Nice!
>>
>>
>> Gluster install:
>> export NODE1='10.54.95.123'
>> export NODE2='10.54.95.124'
>> export NODE3='10.54.95.125'
>> sudo gluster peer probe $NODE2
>> sudo gluster peer probe $NODE3
>> sudo gluster volume create data replica 3 arbiter 1
>> $NODE1:/gluster/data/brick $NODE2:/gluster/data/brick
>> $NODE3:/gluster/data/brick force
>> sudo gluster volume set data network.ping-timeout 5
>> sudo gluster volume set data performance.client-io-threads on
>> sudo gluster volume set data group metadata-cache
>> sudo gluster volume start data
>> sudo gluster volume info all
>>
>> Volume Name: data
>> Type: Replicate
>> Volume ID: b52b5212-82c8-4b1a-8db3-52468bc0226e
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.54.95.123:/gluster/data/brick
>> Brick2: 10.54.95.124:/gluster/data/brick
>> Brick3: 10.54.95.125:/gluster/data/brick (arbiter)
>> Options Reconfigured:
>> network.inode-lru-limit: 20
>> performance.md-cache-timeout: 600
>> performance.cache-invalidation: on
>> performance.stat-prefetch: on
>> features.cache-invalidation-timeout: 600
>> features.cache-invalidation: on
>> network.ping-timeout: 5
>> transport.address-family: inet
>> storage.fips-mode-rchecksum: on
>> nfs.disable: on
>> performance.client-io-threads: on
>>
>> sudo vim /etc/fstab
>> localhost:/data /data glusterfs
>> defaults,_netdev  0 0
>>
>> sudo systemctl daemon-reload && sudo mount -a
>> fio --name=test --filename=/data/wow --size=1G --readwrite=write
>>
>> Run status group 0 (all jobs):
>>   WRITE: bw=109MiB/s (115MB/s), 109MiB/s-109MiB/s (115MB/s-115MB/s),
>> io=1024MiB (1074MB), run=9366-9366msec
>>
>> Oh no, what's wrong? From 2182MB/s down to only 115MB/s? What am I
>> missing? I'm not expecting the above ~17 Gbps, but I'm thinking it should
>> at least be close(r) to ~10 Gbps.
>>
>> Any suggestions?
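One variable worth ruling out (an aside, not from the thread): both fio runs above use fio's defaults, i.e. buffered 4 KiB sequential writes, and the page cache flatters a local XFS mount far more than a FUSE mount. Below is a sketch of an invocation that bypasses the cache with a larger block size; the flags are standard fio options, but the values are illustrative assumptions, not tuning advice from this thread:

```shell
# Sketch: O_DIRECT plus a 1 MiB block size and some queue depth makes the
# local-brick and gluster-mount numbers comparable. Path and values are
# illustrative assumptions.
FIO_CMD="fio --name=test --filename=/data/wow --size=1G --readwrite=write --direct=1 --bs=1M --ioengine=libaio --iodepth=16"
echo "$FIO_CMD"
```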
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>> 
>
>
>

Re: [Gluster-users] Announcing Gluster release 11.1

2023-11-27 Thread Gilberto Ferreira
It works for me:
```
rm -rf /var/lib/apt/lists/download.gluster.org_pub_gluster_glusterfs_10_LATEST_Debian_bookworm_amd64_apt_dists_bookworm_*
rm -rf /var/lib/dpkg/info/glusterfs-c*
rm -rf /var/cache/apt/archives/partial/*
apt update
apt dist-upgrade -y
```
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Mon, Nov 27, 2023 at 11:28, Gilberto Ferreira <
gilberto.nune...@gmail.com> wrote:

> I tried downloading the file directly from the website, but wget gave me
> errors:
> wget
> https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
> --2023-11-27 11:25:50--
> https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
> Resolving download.gluster.org (download.gluster.org)... 8.43.85.185
> Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...
> connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 3188596 (3,0M)
> Saving to: ‘glusterfs-client_11.1-1_amd64.deb’
>
> glusterfs-client_11.1-1_amd64.deb   1%[                    ]  47,65K   311KB/s    in 0,2s
>
> 2023-11-27 11:25:51 (311 KB/s) - Connection closed at byte 48792. Retrying.
>
> So I suppose something is unavailable at the moment on the gluster
> website.
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> Em seg., 27 de nov. de 2023 às 11:07, Hu Bert 
> escreveu:
>
>> Hi,
>> on console with wget i see this:
>>
>> 2023-11-27 15:04:35 (317 MB/s) - Read error at byte 32408/3166108
>> (Error decoding the received TLS packet.). Retrying.
>>
>> that looks strange :-)
>>
>> Best regards,
>> Hubert
>>
>>
>> Am Mo., 27. Nov. 2023 um 14:44 Uhr schrieb Gilberto Ferreira
>> :
>> >
> > I am getting these errors:
>> >
>> > Err:10
>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt
>> bookworm/main amd64 glusterfs-server amd64 11.1-1
>> >   Error reading from server - read (5: Input/output error) [IP:
>> 8.43.85.185 443]
>> > Fetched 35.9 kB in 36s (1,006 B/s)
>> > E: Failed to fetch
>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libglusterfs0_11.1-1_amd64.deb
>> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
>> 443]
>> > E: Failed to fetch
>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfxdr0_11.1-1_amd64.deb
>> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
>> 443]
>> > E: Failed to fetch
>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfrpc0_11.1-1_amd64.deb
>> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
>> 443]
>> > E: Failed to fetch
>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfchangelog0_11.1-1_amd64.deb
>> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
>> 443]
>> > E: Failed to fetch
>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfapi0_11.1-1_amd64.deb
>> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
>> 443]
>> > E: Failed to fetch
>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-common_11.1-1_amd64.deb
>> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
>> 443]
>> > E: Failed to fetch
>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
>> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
>> 443]
>> > E: Failed to fetch
>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-cli_11.1-1_amd64.deb
>> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
>> 443]
>> > E: Failed to fetch
>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-server_11.1-1_amd64.deb
>> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
>> 443]
>> > E: Unable to fetch some archives, maybe run apt-get update or try with
>> -

Re: [Gluster-users] Announcing Gluster release 11.1

2023-11-27 Thread Gilberto Ferreira
I tried downloading the file directly from the website, but wget gave me
errors:
wget
https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
--2023-11-27 11:25:50--
https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
Resolving download.gluster.org (download.gluster.org)... 8.43.85.185
Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...
connected.
HTTP request sent, awaiting response... 200 OK
Length: 3188596 (3,0M)
Saving to: ‘glusterfs-client_11.1-1_amd64.deb’

glusterfs-client_11.1-1_amd64.deb   1%[                    ]  47,65K   311KB/s    in 0,2s

2023-11-27 11:25:51 (311 KB/s) - Connection closed at byte 48792. Retrying.

So I suppose something is unavailable at the moment on the gluster
website.
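Since the failure is a dropped connection partway through, resuming with wget -c in a retry loop will usually fetch the whole file eventually. A hedged sketch (the retry helper is hypothetical; the URL is the one from above):

```shell
# Hypothetical retry helper: re-run a command until it succeeds or the
# attempt budget is exhausted. Combined with wget -c, each attempt resumes
# the partial download instead of starting over.
retry() {
  max=$1; shift
  n=0
  until "$@"; do
    n=$((n+1))
    [ "$n" -ge "$max" ] && return 1
  done
  return 0
}

# retry 20 wget -c https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
```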
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Mon, Nov 27, 2023 at 11:07, Hu Bert 
wrote:

> Hi,
> on console with wget i see this:
>
> 2023-11-27 15:04:35 (317 MB/s) - Read error at byte 32408/3166108
> (Error decoding the received TLS packet.). Retrying.
>
> that looks strange :-)
>
> Best regards,
> Hubert
>
>
> Am Mo., 27. Nov. 2023 um 14:44 Uhr schrieb Gilberto Ferreira
> :
> >
> > I am getting these errors:
> >
> > Err:10
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt
> bookworm/main amd64 glusterfs-server amd64 11.1-1
> >   Error reading from server - read (5: Input/output error) [IP:
> 8.43.85.185 443]
> > Fetched 35.9 kB in 36s (1,006 B/s)
> > E: Failed to fetch
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libglusterfs0_11.1-1_amd64.deb
> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
> 443]
> > E: Failed to fetch
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfxdr0_11.1-1_amd64.deb
> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
> 443]
> > E: Failed to fetch
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfrpc0_11.1-1_amd64.deb
> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
> 443]
> > E: Failed to fetch
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfchangelog0_11.1-1_amd64.deb
> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
> 443]
> > E: Failed to fetch
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfapi0_11.1-1_amd64.deb
> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
> 443]
> > E: Failed to fetch
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-common_11.1-1_amd64.deb
> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
> 443]
> > E: Failed to fetch
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
> 443]
> > E: Failed to fetch
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-cli_11.1-1_amd64.deb
> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
> 443]
> > E: Failed to fetch
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-server_11.1-1_amd64.deb
> Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
> 443]
> > E: Unable to fetch some archives, maybe run apt-get update or try with
> --fix-missing?
> >
> > Anybody can help me???
> > Thanks a lot.
> > ---
> > Gilberto Nunes Ferreira
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> >
> >
> >
> >
> >
> > Em sáb., 25 de nov. de 2023 às 09:00, Strahil Nikolov <
> hunter86...@yahoo.com> escreveu:
> >>
> >> Great news!
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
> >> On Fri, Nov 24, 2023 at 3:32, Shwetha Acharya
> >>  wrote:
> >> The Gluster community is pleased to announce the release of Gluster11.1
> >> Packages available at [1].
> >> Release notes for the release can be found at [2].
> >>
> >> Highlights of Release:
> >>
> >> -  Fix

Re: [Gluster-users] Announcing Gluster release 11.1

2023-11-27 Thread Gilberto Ferreira
I am getting these errors:

Err:10
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt
bookworm/main amd64 glusterfs-server amd64 11.1-1
  Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
Fetched 35.9 kB in 36s (1,006 B/s)

E: Failed to fetch
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libglusterfs0_11.1-1_amd64.deb
 Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
E: Failed to fetch
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfxdr0_11.1-1_amd64.deb
 Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
E: Failed to fetch
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfrpc0_11.1-1_amd64.deb
 Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
E: Failed to fetch
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfchangelog0_11.1-1_amd64.deb
 Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
E: Failed to fetch
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/libgfapi0_11.1-1_amd64.deb
 Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
E: Failed to fetch
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-common_11.1-1_amd64.deb
 Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
E: Failed to fetch
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
 Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
E: Failed to fetch
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-cli_11.1-1_amd64.deb
 Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
E: Failed to fetch
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt/pool/main/g/glusterfs/glusterfs-server_11.1-1_amd64.deb
 Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
E: Unable to fetch some archives, maybe run apt-get update or try with
--fix-missing?

Can anybody help me?
Thanks a lot.
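A hedged aside: for transient "Error reading from server" failures like these, apt can also be told to retry fetches on its own via the standard Acquire::Retries option (the drop-in file name below is an arbitrary choice):

```shell
# Sketch: persist a retry count so apt re-attempts flaky downloads itself
# before giving up.
conf='Acquire::Retries "5";'
echo "$conf"   # put this line in /etc/apt/apt.conf.d/80-retries, then re-run apt update / apt dist-upgrade
```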
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Sat, Nov 25, 2023 at 09:00, Strahil Nikolov 
wrote:

> Great news!
>
> Best Regards,
> Strahil Nikolov
>
> On Fri, Nov 24, 2023 at 3:32, Shwetha Acharya
>  wrote:
> The Gluster community is pleased to announce the release of Gluster11.1
> Packages available at [1].
> Release notes for the release can be found at [2].
>
>
> *Highlights of Release: *
> -  Fix upgrade issue by reverting posix change related to storage.reserve
> value
> -  Fix possible data loss during rebalance if there is any link file on
> the system
> -  Fix maximum op-version for release 11
>
> Thanks,
> Shwetha
>
> References:
>
> [1] Packages for 11.1
> https://download.gluster.org/pub/gluster/glusterfs/11/11.1/
>
> [2] Release notes for 11.1:
> https://docs.gluster.org/en/latest/release-notes/11.1/
> 
>
>
>
>
> 
>
>
>
>






Re: [Gluster-users] Apt doesn't work.

2023-11-23 Thread Gilberto Ferreira
Hi
I'm getting this error again!
Any ideas?

Thanks

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Tue, Jul 12, 2022 at 10:32, Gilberto Ferreira <
gilberto.nune...@gmail.com> wrote:

> Hi there.
> I don't know if this happens only to me but I am trying to install
> GlusterFS Latest using apt from
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
> but it's not working.
> I get stuck with an I/O error like this:
> Do you want to continue? [Y/n]
> Get:1
> https://download.gluster.org/pub/gluster/glusterfs/10/LATEST/Debian/bullseye/amd64/apt
> bullseye/main amd64 glusterfs-client amd64 10.2-1 [3,116 kB]
> Err:1
> https://download.gluster.org/pub/gluster/glusterfs/10/LATEST/Debian/bullseye/amd64/apt
> bullseye/main amd64 glusterfs-client amd64 10.2-1
>   Error reading from server - read (5: Input/output error) [IP:
> 8.43.85.185 443]
> Get:2
> https://download.gluster.org/pub/gluster/glusterfs/10/LATEST/Debian/bullseye/amd64/apt
> bullseye/main amd64 libgfxdr0 amd64 10.2-1 [3,109 kB]
> Err:2
> https://download.gluster.org/pub/gluster/glusterfs/10/LATEST/Debian/bullseye/amd64/apt
> bullseye/main amd64 libgfxdr0 amd64 10.2-1
>
> It has always worked fine before.
> Anybody else?
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>






Re: [Gluster-users] Core file right on the root partition?!?!

2023-07-13 Thread Gilberto Ferreira
Yep
After sending this message, the thought crossed my mind, indeed!
Thank you for your confirmation.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Thu, Jul 13, 2023 at 01:40, Anoop C S 
wrote:

> On Wed, 2023-07-12 at 14:47 -0300, Gilberto Ferreira wrote:
> > pve01:/# file core
> > core: ELF 64-bit LSB core file, x86-64, version 1 (SYSV), SVR4-style,
> > from '/usr/sbin/glusterfs --process-name fuse --volfile-
> > server=gluster1 --volfile-ser', real uid: 0, effective uid: 0, real
> > gid: 0, effective gid: 0, execfn: '/usr/sbin/glusterfs', platform:
> > 'x86_64'
>
> In case you haven't figured it out yet, this is clearly a core-dump
> file which is set to be created when a process is terminated
> unexpectedly whose location/nature is configured based on the contents
> of /proc/sys/kernel/core_pattern.
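For reference, that pattern can be inspected directly; a bare relative name such as "core" means the dump is written into the crashing process's current working directory, which for a daemon is often /:

```shell
# Show where the kernel writes core dumps.
cat /proc/sys/kernel/core_pattern

# Optional (an assumption, not advice from the thread): redirect future
# dumps to a dedicated directory instead of the daemon's cwd.
# mkdir -p /var/crash && echo '/var/crash/core.%e.%p' > /proc/sys/kernel/core_pattern
```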
>
>
> Anoop C S.
>






[Gluster-users] Core file right on the root partition?!?!

2023-07-12 Thread Gilberto Ferreira
Hi
I wonder what the purpose of this file named "core" is!

-rw---   1 root root  93M Jun  4 13:00 core
drwxr-xr-x  20 root root 5.0K Jun 12 16:37 dev
drwxr-xr-x  94 root root 8.0K Jun 23 14:05 etc
drwxr-xr-x   2 root root6 Dec  9  2022 home
drwxr-xr-x   7 root root 4.0K May  2 16:49 isos
lrwxrwxrwx   1 root root7 Mar 22 11:38 lib -> usr/lib
lrwxrwxrwx   1 root root9 Mar 22 11:38 lib32 -> usr/lib32
lrwxrwxrwx   1 root root9 Mar 22 11:38 lib64 -> usr/lib64
lrwxrwxrwx   1 root root   10 Mar 22 11:38 libx32 -> usr/libx32
drwxr-xr-x   2 root root6 Mar 22 11:38 media
drwxr-xr-x   4 root root   34 Jun  1 12:38 mnt
drwxr-xr-x   2 root root6 Mar 22 11:38 opt
dr-xr-xr-x 872 root root0 Jun  4 13:21 proc
drwxr-xr-x  16 root root 4.0K Jul 12 14:43 root
drwxr-xr-x  35 root root 1.6K Jul 12 14:44 run
lrwxrwxrwx   1 root root8 Mar 22 11:38 sbin -> usr/sbin
drwxr-xr-x   2 root root6 Mar 22 11:38 srv
dr-xr-xr-x  13 root root0 Jun  4 13:21 sys
drwxrwxrwt  10 root root 4.0K Jul 12 04:29 tmp
drwxr-xr-x  14 root root  160 May 15 04:59 usr
drwxr-xr-x  12 root root  153 May 29 15:28 var
drwxr-xr-x   8 root root  178 Jun  6 08:08 vms1
pve01:/# file core
core: ELF 64-bit LSB core file, x86-64, version 1 (SYSV), SVR4-style, from
'/usr/sbin/glusterfs --process-name fuse --volfile-server=gluster1
--volfile-ser', real uid: 0, effective uid: 0, real gid: 0, effective gid:
0, execfn: '/usr/sbin/glusterfs', platform: 'x86_64'

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






Re: [Gluster-users] Using glusterfs for virtual machines with qcow2 images

2023-06-07 Thread Gilberto Ferreira
Hi everybody

Regarding the issue with mount, usually I am using this systemd service to
bring up the mount points:
/etc/systemd/system/glusterfsmounts.service
[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
ExecStart=/bin/mount -a -t glusterfs
TimeoutSec=600
SuccessExitStatus=15
Restart=on-failure
RestartSec=60
StartLimitBurst=6
StartLimitInterval=3600

[Install]
WantedBy=multi-user.target

After creating it, remember to reload the systemd daemon and enable the service:
systemctl daemon-reload
systemctl enable glusterfsmounts.service

Also, I am using /etc/fstab to mount the GlusterFS mount point properly,
since the Proxmox GUI seems a little broken in this regard:
gluster1:VMS1 /vms1 glusterfs
defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0
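As a sanity check before running mount -a (a hypothetical helper, not part of the original setup): an fstab entry needs exactly six whitespace-separated fields, and a malformed line can fail quietly:

```shell
# Hypothetical check: count the fields of the fstab line above; fstab
# entries always have exactly six.
FSTAB_LINE="gluster1:VMS1 /vms1 glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0"
set -- $FSTAB_LINE
echo "fields: $#"   # prints: fields: 6
```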

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Wed, Jun 7, 2023 at 01:51, Strahil Nikolov 
wrote:

> Hi Chris,
>
> here is a link to the settings needed for VM storage:
> https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
>
> You can also ask in ovirt-users for real-world settings.Test well before
> changing production!!!
>
> IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!
>
> Best Regards,
> Strahil Nikolov
>
> On Mon, Jun 5, 2023 at 13:55, Christian Schoepplein
>  wrote:
> Hi,
>
> we'd like to use glusterfs for Proxmox and virtual machines with qcow2
> disk images. We have a three node glusterfs setup with one volume and
> Proxmox is attached and VMs are created, but after some time, and I think
> after much i/o is going on for a VM, the data inside the virtual machine
> gets corrupted. When I copy files from or to our glusterfs
> directly everything is OK, I've checked the files with md5sum. So in
> general
> our glusterfs setup seems to be OK I think..., but with the VMs and the
> self
> growing qcow2 images there are problems. If I use raw images for the VMs
> tests look better, but I need to do more testing to be sure, the problem
> is
> a bit hard to reproduce :-(.
>
> I've also asked on a Proxmox mailinglist, but got no helpfull response so
> far :-(. So maybe you have any helping hint what might be wrong with our
> setup, what needs to be configured to use glusterfs as a storage backend
> for
> virtual machines with self growing disk images. e.g. Any helpfull tip
> would
> be great, because I am absolutely no glusterfs expert and also not a
> expert
> for virtualization and what has to be done to let all components play well
> together... Thanks for your support!
>
> Here some infos about our glusterfs setup, please let me know if you need
> more infos. We are using Ubuntu 22.04 as operating system:
>
> root@gluster1:~# gluster --version
> glusterfs 10.1
> Repository revision: git://git.gluster.org/glusterfs.git
> Copyright (c) 2006-2016 Red Hat, Inc. 
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> It is licensed to you under your choice of the GNU Lesser
> General Public License, version 3 or any later version (LGPLv3
> or later), or the GNU General Public License, version 2 (GPLv2),
> in all cases as published by the Free Software Foundation.
> root@gluster1:~#
>
> root@gluster1:~# gluster v status gfs_vms
>
> Status of volume: gfs_vms
> Gluster processTCP Port  RDMA Port  Online  Pid
>
> --
> Brick gluster1.linova.de:/glusterfs/sde1enc
> /brick  584480  Y
> 1062218
> Brick gluster2.linova.de:/glusterfs/sdc1enc
> /brick  502540  Y
> 20596
> Brick gluster3.linova.de:/glusterfs/sdc1enc
> /brick  528400  Y
> 1627513
> Brick gluster1.linova.de:/glusterfs/sdf1enc
> /brick  498320  Y
> 1062227
> Brick gluster2.linova.de:/glusterfs/sdd1enc
> /brick  560950  Y
> 20612
> Brick gluster3.linova.de:/glusterfs/sdd1enc
> /brick  512520  Y
> 1627521
> Brick gluster1.linova.de:/glusterfs/sdg1enc
> /brick  549910  Y
> 1062230
> Brick gluster2.linova.de:/glusterfs/sde1enc
> /brick  608120  Y
> 20628
> Brick gluster3.linova.de:/glusterfs/sde1enc
> /brick  592540  Y
> 1627522
> Self-heal Daemon on localhost  N/A  N/AY
> 1062249
> Bitrot Daemon on localhost  N/A  N/AY
> 3591335
> Scrubber Daemon on localhostN/A  N/AY
> 3591346
> Self-heal Daemon o

Re: [Gluster-users] [EXT] [Glusterusers] Using glusterfs for virtual machines with qco

2023-06-05 Thread Gilberto Ferreira
Hi there.
I don't know if you are using a 2-node GlusterFS solution, but here is my
approach in this scenario, and it works great for me
(VMS1 is the gluster volume, as you can see):

gluster vol heal VMS1 enable
gluster vol set VMS1 network.ping-timeout 2
gluster vol set VMS1 performance.quick-read off
gluster vol set VMS1 performance.read-ahead off
gluster vol set VMS1 performance.io-cache off
gluster vol set VMS1 performance.low-prio-threads 32
gluster vol set VMS1 performance.write-behind off
gluster vol set VMS1 performance.flush-behind off
gluster vol set VMS1 network.remote-dio disable
gluster vol set VMS1 performance.strict-o-direct on
gluster vol set VMS1 cluster.quorum-type fixed
gluster vol set VMS1 cluster.server-quorum-type none
gluster vol set VMS1 cluster.locking-scheme granular
gluster vol set VMS1 cluster.shd-max-threads 8
gluster vol set VMS1 cluster.shd-wait-qlength 1
gluster vol set VMS1 cluster.data-self-heal-algorithm full
gluster vol set VMS1 cluster.favorite-child-policy mtime
gluster vol set VMS1 cluster.quorum-count 1
gluster vol set VMS1 cluster.quorum-reads false
gluster vol set VMS1 cluster.self-heal-daemon enable
gluster vol set VMS1 cluster.heal-timeout 5
gluster vol heal VMS1 granular-entry-heal enable
gluster vol set VMS1 features.shard on
gluster vol set VMS1 user.cifs off
gluster vol set VMS1 cluster.choose-local off
gluster vol set VMS1 client.event-threads 4
gluster vol set VMS1 server.event-threads 4
gluster vol set VMS1 performance.client-io-threads on
gluster vol set VMS1 network.ping-timeout 20
gluster vol set VMS1 server.tcp-user-timeout 20
gluster vol set VMS1 server.keepalive-time 10
gluster vol set VMS1 server.keepalive-interval 2
gluster vol set VMS1 server.keepalive-count 5
gluster vol set VMS1 cluster.lookup-optimize off
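The long run of gluster vol set calls above can also be driven from a single list; a sketch, with echo as a dry run so nothing is actually changed (drop the echo to apply for real):

```shell
# Sketch: feed option/value pairs from a here-doc through one loop.
# VOL and the sample options mirror the list above.
VOL=VMS1
apply_opts() {
  while read -r opt val; do
    echo gluster vol set "$VOL" "$opt" "$val"
  done
}

apply_opts <<'EOF'
network.ping-timeout 20
features.shard on
performance.client-io-threads on
EOF
```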

I created the replica 2 volume like this:
gluster vol create VMS1 replica 2 gluster1:/mnt/pve/dataglusterfs/vms/
gluster2:/mnt/pve/dataglusterfs/vms/
And to avoid split-brain I enabled those options above.
Then I created a folder:
mkdir /vms1
After that I edited /etc/fstab:
on the first node:
gluster1:VMS1 /vms1 glusterfs
defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0
on the second node:
gluster2:VMS1 /vms1 glusterfs
defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster1 0 0
gluster1 and gluster2 are on a dedicated 10G NIC and are listed in
/etc/hosts like this:
172.16.20.10 gluster1
172.16.20.20 gluster2

Then on both nodes I run:
mount /vms1
Now everything is OK.
As I am using Proxmox VE here, I just created a storage entry in the Proxmox
/etc/pve/storage.cfg file like:
dir: STG-VMS-1
        path /vms1
        content rootdir,images
        preallocation metadata
        prune-backups keep-all=1
        shared 1

And I am ready to fly!

Hope this can help you in any way!

Cheers




---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Mon, Jun 5, 2023 at 12:20, Christian Schoepplein <
christian.schoeppl...@linova.de> wrote:

> Hi Gilberto, hi all,
>
> thanks a lot for all your answers.
>
> At first I changed both settings mentioned below and first test look good.
>
> Before changing the settings I was able to crash a new installed VM every
> time after a fresh installation by producing much i/o, e.g. when
> installing
> Libre Office. This always resulted in corrupt files inside the VM, but
> researching the qcow2 file with the qemu-img tool showed no errors for the
> file.
>
> I'll do further testing and will run more VMs on the volume during the
> next
> days, lets see how things go on and if further tweaking of the volume is
> necessary.
>
> Cheers,
>
>   Chris
>
>
> On Fri, Jun 02, 2023 at 09:05:28AM -0300, Gilberto Ferreira wrote:
>Try turning off these options:
> >performance.write-behind
> >performance.flush-behind
> >
> >---
> >Gilberto Nunes Ferreira
> >(47) 99676-7530 - Whatsapp / Telegram
> >
> >
> >
> >
> >
> >
> >Em sex., 2 de jun. de 2023 às 07:55, Guillaume Pavese <
> >guillaume.pav...@interactiv-group.com> escreveu:
> >
> >On oVirt / Redhat Virtualization,
> >the following Gluster volumes settings are recommended to be applied
> >(preferably at the creation of the volume)
> >These settings are important for data reliability, ( Note that
> Replica 3 or
> >Replica 2+1 is expected )
> >
> >performance.quick-read=off
> >performance.read-ahead=off
> >performance.io-cache=off
> >performance.low-prio-threads=32
> >network.remote-dio=enable
> >cluster.eager-lock=enable
> >cluster.quorum-type=auto
> >cluster.server-quorum-type=server
> >cluster.data-self-heal-algorithm

Re: [Gluster-users] [EXT] [Glusterusers] Using glusterfs for virtual machines with qco

2023-06-02 Thread Gilberto Ferreira
Try turning off these options:
performance.write-behind
performance.flush-behind

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> On oVirt / Redhat Virtualization,
> the following Gluster volumes settings are recommended to be applied
> (preferably at the creation of the volume)
> These settings are important for data reliability. (Note that Replica 3
> or Replica 2+1 is expected.)
>
> performance.quick-read=off
> performance.read-ahead=off
> performance.io-cache=off
> performance.low-prio-threads=32
> network.remote-dio=enable
> cluster.eager-lock=enable
> cluster.quorum-type=auto
> cluster.server-quorum-type=server
> cluster.data-self-heal-algorithm=full
> cluster.locking-scheme=granular
> cluster.shd-max-threads=8
> cluster.shd-wait-qlength=10000
> features.shard=on
> user.cifs=off
> cluster.choose-local=off
> client.event-threads=4
> server.event-threads=4
> performance.client-io-threads=on
>
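The option list above can be scripted rather than typed one by one. A minimal sketch (the volume name "myvol" is a placeholder, and the options shown are a subset copied from the list above):

```python
# Sketch: expand "key=value" option lines like the list above into the
# corresponding `gluster volume set` commands. "myvol" is a placeholder
# volume name; the options are a subset copied from the list above.
VIRT_OPTIONS = """\
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
features.shard=on
"""

def set_commands(volume: str, options: str) -> list[str]:
    cmds = []
    for line in options.splitlines():
        key, _, value = line.partition("=")
        cmds.append(f"gluster volume set {volume} {key} {value}")
    return cmds

for cmd in set_commands("myvol", VIRT_OPTIONS):
    print(cmd)
```

Note that `gluster volume set <volname> group virt` applies the whole group in one step; the group file lives at /var/lib/glusterd/groups/virt, as mentioned elsewhere in this thread.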
>
>
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Fri, Jun 2, 2023 at 5:33 AM W Kern  wrote:
>
>> We use qcow2 with libvirt based kvm on many small clusters and have
>> found it to be exremely reliable though maybe not the fastest, though
>> some of that is most of our storage is SATA SSDs in a software RAID1
>> config for each brick.
>>
>> What problems are you running into?
>>
>> You just mention 'problems'
>>
>> -wk
>>
>> On 6/1/23 8:42 AM, Christian Schoepplein wrote:
>> > Hi,
>> >
>> > we'd like to use glusterfs for Proxmox and virtual machines with qcow2
>> > disk images. We have a three node glusterfs setup with one volume and
>> > Proxmox is attached and VMs are created, but after some time, and I
>> think
>> > after much i/o is going on for a VM, the data inside the virtual machine
>> > gets corrupted. When I copy files from or to our glusterfs
>> > directly everything is OK, I've checked the files with md5sum. So in
>> general
>> > our glusterfs setup seems to be OK I think..., but with the VMs and the
>> self
>> > growing qcow2 images there are problems. If I use raw images for the VMs
>> > tests look better, but I need to do more testing to be sure, the
>> problem is
>> > a bit hard to reproduce :-(.
>> >
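The md5sum check described above can be reproduced in a few lines; a sketch using temporary files rather than the original Gluster mount paths:

```python
# Sketch of the checksum comparison described above: hash a file, copy it,
# and verify both copies produce the same digest. Temporary files stand in
# for the real Gluster mount paths.
import hashlib
import os
import shutil
import tempfile

def file_digest(path: str) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

src = tempfile.NamedTemporaryFile(delete=False)
src.write(os.urandom(4096))
src.close()
dst = src.name + ".copy"
shutil.copyfile(src.name, dst)
print(file_digest(src.name) == file_digest(dst))  # True when the copy is intact
os.unlink(src.name)
os.unlink(dst)
```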
>> > I've also asked on a Proxmox mailinglist, but got no helpfull response
>> so
>> > far :-(. So maybe you have any helping hint what might be wrong with our
>> > setup, what needs to be configured to use glusterfs as a storage
>> backend for
>> > virtual machines with self growing disk images. e.g. Any helpfull tip
>> would
>> > be great, because I am absolutely no glusterfs expert and also not a
>> expert
>> > for virtualization and what has to be done to let all components play
>> well
>> > together... Thanks for your support!
>> >
>> > Here some infos about our glusterfs setup, please let me know if you
>> need
>> > more infos. We are using Ubuntu 22.04 as operating system:
>> >
>> > root@gluster1:~# gluster --version
>> > glusterfs 10.1
>> > Repository revision: git://git.gluster.org/glusterfs.git
>> > Copyright (c) 2006-2016 Red Hat, Inc. 
>> > GlusterFS comes with ABSOLUTELY NO WARRANTY.
>> > It is licensed to you under your choice of the GNU Lesser
>> > General Public License, version 3 or any later version (LGPLv3
>> > or later), or the GNU General Public License, version 2 (GPLv2),
>> > in all cases as published by the Free Software Foundation.
>> > root@gluster1:~#
>> >
>> > root@gluster1:~# gluster v status gfs_vms
>> >
>> > Status of volume: gfs_vms
>> > Gluster process TCP Port  RDMA Port
>> Online  Pid
>> >
>> --
>> > Brick gluster1.linova.de:/glusterfs/sde1enc
>> > /brick  58448 0  Y
>>  1062218
>> > Brick gluster2.linova.de:/glusterfs/sdc1enc
>> > /brick  50254 0  Y
>>  20596
>> > Brick gluster3.linova.de:/glusterfs/sdc1enc
>> > /brick  52840 0  Y
>>  1627513
>> > Brick gluster1.linova.de:/glusterfs/sdf1enc
>> > /brick  49832 0  Y
>>  1062227
>> > Brick gluster2.linova.de:/glusterfs/sdd1enc
>> > /brick  56095 0  Y
>>  20612
>> > Brick gluster3.linova.de:/glusterfs/sdd1enc
>> > /brick  51252 0  Y
>>  1627521
>> > Brick gluster1.linova.de:/glusterfs/sdg1enc
>> > /brick  54991 0  Y
>>  1062230
>> > Brick gluster2.linova.de:/glusterfs/sde1enc
>> > /brick  60812 0  Y
>>  20628
>> > Brick gluster3.linova.de:/glusterfs/sde1enc
>> > /brick  59254 0  Y
>>  1627522
>> > Self-heal Da
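The brick table in the status output above is line-wrapped by the mail archive; a small sketch that parses a rejoined brick entry back into a record (the sample value reproduces the first brick entry above):

```python
# Sketch: parse a `gluster volume status` brick entry into a record of
# (brick, tcp_port, rdma_port, online, pid). SAMPLE rejoins the first
# wrapped entry from the status output quoted above.
import re

SAMPLE = (
    "Brick gluster1.linova.de:/glusterfs/sde1enc"
    "/brick  58448 0  Y 1062218"
)

BRICK_RE = re.compile(
    r"Brick\s+(?P<brick>\S+)\s+(?P<tcp>\d+)\s+(?P<rdma>\d+)"
    r"\s+(?P<online>[YN])\s+(?P<pid>\d+)"
)

def parse_brick(line: str):
    m = BRICK_RE.search(line)
    if not m:
        return None
    return (m["brick"], int(m["tcp"]), int(m["rdma"]),
            m["online"] == "Y", int(m["pid"]))

print(parse_brick(SAMPLE))
```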

Re: [Gluster-users] [Gluster-devel] Error in gluster v11

2023-05-17 Thread Gilberto Ferreira
Nevertheless, I got an error which broke the apt install...
I just fixed it by manually overwriting the files with rsync... :-/
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Wed, May 17, 2023 at 05:09, Xavi Hernandez 
wrote:

> On Tue, May 16, 2023 at 4:00 PM Gilberto Ferreira <
> gilberto.nune...@gmail.com> wrote:
>
>> Hi again
>> I just noticed that there is some updates from glusterd
>>
>> apt list --upgradable
>> Listing... Done
>> glusterfs-client/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
>> glusterfs-common/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
>> glusterfs-server/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
>> libgfapi0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
>> libgfchangelog0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
>> libgfrpc0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
>> libgfxdr0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
>> libglusterfs0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
>>
>> Perhaps this could fix the issue?
>>
>
> No. I think this is just to fix a packaging problem in the latest version.
> The patch won't be included in any official version until it's properly
> tested and merged to the main code. Hopefully the reporter of the GitHub
> issue will be able to test it so that it can be verified and provided in
> the next 11.x release.
>
> Regards,
>
> Xavi
>
> ---
>> Gilberto Nunes Ferreira
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>>
>>
>>
>>
>>
>> On Tue, May 16, 2023 at 09:31, Gilberto Ferreira <
>> gilberto.nune...@gmail.com> wrote:
>>
>>> Ok. No problem. I can test it in a virtual environment.
>>> Send me the patch.
>>> Oh, by the way, I don't compile gluster from scratch.
>>> I just used the deb files from
>>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
>>>
>>> ---
>>> Gilberto Nunes Ferreira
>>> (47) 99676-7530 - Whatsapp / Telegram
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, May 16, 2023 at 09:21, Xavi Hernandez <
>>> jaher...@redhat.com> wrote:
>>>
>>>> Hi Gilberto,
>>>>
>>>> On Tue, May 16, 2023 at 12:56 PM Gilberto Ferreira <
>>>> gilberto.nune...@gmail.com> wrote:
>>>>
>>>>> Hi Xavi
>>>>> That depends. Is it safe? I have this in a production environment, you know.
>>>>>
>>>>
>>>> It should be safe, but I wouldn't test it on production. Can't you try
>>>> it in any test environment before ?
>>>>
>>>> Xavi
>>>>
>>>>
>>>>>
>>>>> ---
>>>>> Gilberto Nunes Ferreira
>>>>> (47) 99676-7530 - Whatsapp / Telegram
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, May 16, 2023 at 07:45, Xavi Hernandez <
>>>>> jaher...@redhat.com> wrote:
>>>>>
>>>>>> The referenced GitHub issue now has a potential patch that could fix
>>>>>> the problem, though it will need to be verified. Could you try to apply 
>>>>>> the
>>>>>> patch and check if the problem persists ?
>>>>>>
>>>>>> On Mon, May 15, 2023 at 2:10 AM Gilberto Ferreira <
>>>>>> gilberto.nune...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi there, anyone in the Gluster Devel list.
>>>>>>>
>>>>>>> Any fix about this issue?
>>>>>>>
>>>>>>> May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000]
>>>>>>> C [gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64
>>>>>>> -linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
>>>>>>> -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5)
>>>>>>> [0x7fb4ebad42e5] -->/lib
>>>>>>> /x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5) [0x7fb4ebad41a5] )
>>>>>>> 0-: Assertion failed:
>>>>>>> May 14 07:05:39 srv01 vms[9404]: patchset: git://
>>>>>>> git.gluster.org/glusterfs.git
>>>>>>> May 14 07:05:39 srv01 vms[9404]: package-string: glusterfs 11.0
>>>>>>> ---
>>>>>>> Gilberto Nunes Ferreira

Re: [Gluster-users] [Gluster-devel] Error in gluster v11

2023-05-16 Thread Gilberto Ferreira
Hi again
I just noticed that there is some updates from glusterd

apt list --upgradable
Listing... Done
glusterfs-client/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
glusterfs-common/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
glusterfs-server/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
libgfapi0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
libgfchangelog0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
libgfrpc0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
libgfxdr0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
libglusterfs0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
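Output like the `apt list --upgradable` listing above can be parsed mechanically; a sketch (the line shape is inferred from the listing above):

```python
# Sketch: parse `apt list --upgradable` lines such as the listing above
# into (package, candidate_version, installed_version) tuples. Lines that
# do not match (e.g. "Listing... Done") yield None.
import re

LINE_RE = re.compile(
    r"^(?P<pkg>[^/]+)/\S+\s+(?P<cand>\S+)\s+\S+\s+"
    r"\[upgradable from:\s+(?P<inst>[^\]]+)\]"
)

def parse_upgradable(line: str):
    m = LINE_RE.match(line)
    return (m["pkg"], m["cand"], m["inst"]) if m else None

print(parse_upgradable(
    "glusterfs-server/unknown 11.0-2 amd64 [upgradable from: 11.0-1]"
))
```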

Perhaps this could fix the issue?
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Tue, May 16, 2023 at 09:31, Gilberto Ferreira <
gilberto.nune...@gmail.com> wrote:

> Ok. No problem. I can test it in a virtual environment.
> Send me the patch.
> Oh, by the way, I don't compile gluster from scratch.
> I just used the deb files from
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> On Tue, May 16, 2023 at 09:21, Xavi Hernandez 
> wrote:
>
>> Hi Gilberto,
>>
>> On Tue, May 16, 2023 at 12:56 PM Gilberto Ferreira <
>> gilberto.nune...@gmail.com> wrote:
>>
>>> Hi Xavi
>>> That depends. Is it safe? I have this in a production environment, you know.
>>>
>>
>> It should be safe, but I wouldn't test it on production. Can't you try it
>> in any test environment before ?
>>
>> Xavi
>>
>>
>>>
>>> ---
>>> Gilberto Nunes Ferreira
>>> (47) 99676-7530 - Whatsapp / Telegram
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, May 16, 2023 at 07:45, Xavi Hernandez <
>>> jaher...@redhat.com> wrote:
>>>
>>>> The referenced GitHub issue now has a potential patch that could fix
>>>> the problem, though it will need to be verified. Could you try to apply the
>>>> patch and check if the problem persists ?
>>>>
>>>> On Mon, May 15, 2023 at 2:10 AM Gilberto Ferreira <
>>>> gilberto.nune...@gmail.com> wrote:
>>>>
>>>>> Hi there, anyone in the Gluster Devel list.
>>>>>
>>>>> Any fix about this issue?
>>>>>
>>>>> May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
>>>>> [gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64
>>>>> -linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
>>>>> -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
>>>>> -->/lib
>>>>> /x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5) [0x7fb4ebad41a5] ) 0-:
>>>>> Assertion failed:
>>>>> May 14 07:05:39 srv01 vms[9404]: patchset: git://
>>>>> git.gluster.org/glusterfs.git
>>>>> May 14 07:05:39 srv01 vms[9404]: package-string: glusterfs 11.0
>>>>> ---
>>>>> Gilberto Nunes Ferreira
>>>>> (47) 99676-7530 - Whatsapp / Telegram
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Sun, May 14, 2023 at 16:53, Strahil Nikolov <
>>>>> hunter86...@yahoo.com> wrote:
>>>>>
>>>>>> Looks similar to https://github.com/gluster/glusterfs/issues/4104
>>>>>> I don’t see any progress there.
>>>>>> Maybe asking in gluster-devel (in CC) could help.
>>>>>>
>>>>>> Best Regards,
>>>>>> Strahil Nikolov
>>>>>>
>>>>>>
>>>>>> On Sunday, May 14, 2023, 5:28 PM, Gilberto Ferreira <
>>>>>> gilberto.nune...@gmail.com> wrote:
>>>>>>
>>>>>> Has anybody else seen this error?
>>>>>>
>>>>>> May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
>>>>>> [gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64
>>>>>> -linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
>>>>>> -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
>>>>>> -->/lib
>>>>>> /x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5) [0x7fb4ebad41a5] ) 0-:
>>>>>> Assertion failed:
>>>>>> May 14 07:05:39 srv01 vms[9404]: patchset: git://
>>>>>> git.gluster.org/glusterfs.git
>>>>>> May 14 07:05:39 srv01 vms[9404]: package-string: glusterfs 11.0
>>>>>>
>>>>>> ---
>>>>>> Gilberto Nunes Ferreira
>>>>>> (47) 99676-7530 - Whatsapp / Telegram
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> 
>>>>>>
>>>>>>
>>>>>>
>>>>>> Community Meeting Calendar:
>>>>>>
>>>>>> Schedule -
>>>>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>>>>> Bridge: https://meet.google.com/cpu-eiue-hvk
>>>>>> Gluster-users mailing list
>>>>>> Gluster-users@gluster.org
>>>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>>>>
>>>>>> ---
>>>>>
>>>>> Community Meeting Calendar:
>>>>> Schedule -
>>>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>>>> Bridge: https://meet.google.com/cpu-eiue-hvk
>>>>>
>>>>> Gluster-devel mailing list
>>>>> gluster-de...@gluster.org
>>>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>>>
>>>>>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Error in gluster v11

2023-05-16 Thread Gilberto Ferreira
Ok. No problem. I can test it in a virtual environment.
Send me the patch.
Oh, by the way, I don't compile gluster from scratch.
I just used the deb files from
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Tue, May 16, 2023 at 09:21, Xavi Hernandez 
wrote:

> Hi Gilberto,
>
> On Tue, May 16, 2023 at 12:56 PM Gilberto Ferreira <
> gilberto.nune...@gmail.com> wrote:
>
>> Hi Xavi
>> That depends. Is it safe? I have this in a production environment, you know.
>>
>
> It should be safe, but I wouldn't test it on production. Can't you try it
> in any test environment before ?
>
> Xavi
>
>
>>
>> ---
>> Gilberto Nunes Ferreira
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>>
>>
>>
>>
>>
>> On Tue, May 16, 2023 at 07:45, Xavi Hernandez 
>> wrote:
>>
>>> The referenced GitHub issue now has a potential patch that could fix the
>>> problem, though it will need to be verified. Could you try to apply the
>>> patch and check if the problem persists ?
>>>
>>> On Mon, May 15, 2023 at 2:10 AM Gilberto Ferreira <
>>> gilberto.nune...@gmail.com> wrote:
>>>
>>>> Hi there, anyone in the Gluster Devel list.
>>>>
>>>> Any fix about this issue?
>>>>
>>>> May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
>>>> [gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64
>>>> -linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
>>>> -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
>>>> -->/lib
>>>> /x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5) [0x7fb4ebad41a5] ) 0-:
>>>> Assertion failed:
>>>> May 14 07:05:39 srv01 vms[9404]: patchset: git://
>>>> git.gluster.org/glusterfs.git
>>>> May 14 07:05:39 srv01 vms[9404]: package-string: glusterfs 11.0
>>>> ---
>>>> Gilberto Nunes Ferreira
>>>> (47) 99676-7530 - Whatsapp / Telegram
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Sun, May 14, 2023 at 16:53, Strahil Nikolov <
>>>> hunter86...@yahoo.com> wrote:
>>>>
>>>>> Looks similar to https://github.com/gluster/glusterfs/issues/4104
>>>>> I don’t see any progress there.
>>>>> Maybe asking in gluster-devel (in CC) could help.
>>>>>
>>>>> Best Regards,
>>>>> Strahil Nikolov
>>>>>
>>>>>
>>>>> On Sunday, May 14, 2023, 5:28 PM, Gilberto Ferreira <
>>>>> gilberto.nune...@gmail.com> wrote:
>>>>>
>>>>> Has anybody else seen this error?
>>>>>
>>>>> May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
>>>>> [gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64
>>>>> -linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
>>>>> -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
>>>>> -->/lib
>>>>> /x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5) [0x7fb4ebad41a5] ) 0-:
>>>>> Assertion failed:
>>>>> May 14 07:05:39 srv01 vms[9404]: patchset: git://
>>>>> git.gluster.org/glusterfs.git
>>>>> May 14 07:05:39 srv01 vms[9404]: package-string: glusterfs 11.0
>>>>>
>>>>> ---
>>>>> Gilberto Nunes Ferreira
>>>>> (47) 99676-7530 - Whatsapp / Telegram
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> 
>>>>>
>>>>>
>>>>>
>>>>> Community Meeting Calendar:
>>>>>
>>>>> Schedule -
>>>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>>>> Bridge: https://meet.google.com/cpu-eiue-hvk
>>>>> Gluster-users mailing list
>>>>> Gluster-users@gluster.org
>>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>>>
>>>>> ---
>>>>
>>>> Community Meeting Calendar:
>>>> Schedule -
>>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>>> Bridge: https://meet.google.com/cpu-eiue-hvk
>>>>
>>>> Gluster-devel mailing list
>>>> gluster-de...@gluster.org
>>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>>>
>>>>






Re: [Gluster-users] [Gluster-devel] Error in gluster v11

2023-05-16 Thread Gilberto Ferreira
Hi Xavi
That depends. Is it safe? I have this in a production environment, you know.

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Tue, May 16, 2023 at 07:45, Xavi Hernandez 
wrote:

> The referenced GitHub issue now has a potential patch that could fix the
> problem, though it will need to be verified. Could you try to apply the
> patch and check if the problem persists ?
>
> On Mon, May 15, 2023 at 2:10 AM Gilberto Ferreira <
> gilberto.nune...@gmail.com> wrote:
>
>> Hi there, anyone in the Gluster Devel list.
>>
>> Any fix about this issue?
>>
>> May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
>> [gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64
>> -linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
>> -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
>> -->/lib
>> /x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5) [0x7fb4ebad41a5] ) 0-:
>> Assertion failed:
>> May 14 07:05:39 srv01 vms[9404]: patchset: git://
>> git.gluster.org/glusterfs.git
>> May 14 07:05:39 srv01 vms[9404]: package-string: glusterfs 11.0
>> ---
>> Gilberto Nunes Ferreira
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>>
>>
>>
>>
>>
>> On Sun, May 14, 2023 at 16:53, Strahil Nikolov <
>> hunter86...@yahoo.com> wrote:
>>
>>> Looks similar to https://github.com/gluster/glusterfs/issues/4104
>>> I don’t see any progress there.
>>> Maybe asking in gluster-devel (in CC) could help.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>> On Sunday, May 14, 2023, 5:28 PM, Gilberto Ferreira <
>>> gilberto.nune...@gmail.com> wrote:
>>>
>>> Has anybody else seen this error?
>>>
>>> May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
>>> [gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64
>>> -linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
>>> -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
>>> -->/lib
>>> /x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5) [0x7fb4ebad41a5] ) 0-:
>>> Assertion failed:
>>> May 14 07:05:39 srv01 vms[9404]: patchset: git://
>>> git.gluster.org/glusterfs.git
>>> May 14 07:05:39 srv01 vms[9404]: package-string: glusterfs 11.0
>>>
>>> ---
>>> Gilberto Nunes Ferreira
>>> (47) 99676-7530 - Whatsapp / Telegram
>>>
>>>
>>>
>>>
>>> 
>>>
>>>
>>>
>>> Community Meeting Calendar:
>>>
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://meet.google.com/cpu-eiue-hvk
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>> ---
>>
>> Community Meeting Calendar:
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>>
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>>






Re: [Gluster-users] Error in gluster v11

2023-05-14 Thread Gilberto Ferreira
Hi there, anyone in the Gluster Devel list.

Any fix about this issue?

May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
[gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64
-linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
-->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
-->/lib
/x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5) [0x7fb4ebad41a5] ) 0-:
Assertion failed:
May 14 07:05:39 srv01 vms[9404]: patchset: git://
git.gluster.org/glusterfs.git
May 14 07:05:39 srv01 vms[9404]: package-string: glusterfs 11.0
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Sun, May 14, 2023 at 16:53, Strahil Nikolov 
wrote:

> Looks similar to https://github.com/gluster/glusterfs/issues/4104
> I don’t see any progress there.
> Maybe asking in gluster-devel (in CC) could help.
>
> Best Regards,
> Strahil Nikolov
>
>
> On Sunday, May 14, 2023, 5:28 PM, Gilberto Ferreira <
> gilberto.nune...@gmail.com> wrote:
>
> Has anybody else seen this error?
>
> May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
> [gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64
> -linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
> -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
> -->/lib
> /x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5) [0x7fb4ebad41a5] ) 0-:
> Assertion failed:
> May 14 07:05:39 srv01 vms[9404]: patchset: git://
> git.gluster.org/glusterfs.git
> May 14 07:05:39 srv01 vms[9404]: package-string: glusterfs 11.0
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>






[Gluster-users] Error in gluster v11

2023-05-14 Thread Gilberto Ferreira
Has anybody else seen this error?

May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
[gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64
-linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
-->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
-->/lib
/x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5) [0x7fb4ebad41a5] ) 0-:
Assertion failed:
May 14 07:05:39 srv01 vms[9404]: patchset: git://
git.gluster.org/glusterfs.git
May 14 07:05:39 srv01 vms[9404]: package-string: glusterfs 11.0

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






Re: [Gluster-users] Rename volume?

2023-04-12 Thread Gilberto Ferreira
I think gluster volume rename has not been available since version 6.5.

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Wed, Apr 12, 2023 at 11:51, Ruediger Kupper 
wrote:

> Hi!
> Is it possible to rename a gluster volume? If so, what is to be done?
> (Context: I'm trying to recover from a misconfiguration by copying all
> contents of a volume to a new one. After that the old volume will be
> removed and the new one needs to be renamed to the old name.)
>
> Thanks for you help!
> Rüdiger
>
> --
> OStR Dr. R. Kupper
> Kepler-Gymnasium Freudenstadt
>
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>






[Gluster-users] GlusterFS 11 is out!

2023-02-11 Thread Gilberto Ferreira
https://download.gluster.org/pub/gluster/glusterfs/11/11.0/Debian/bullseye/amd64/apt/pool/main/g/glusterfs/
---
Gilberto Nunes Ferreira






Re: [Gluster-users] GlusterFS 11 is out!

2023-02-08 Thread Gilberto Ferreira
Well
I am sorry!
Apparently it is just the folder structure that has been created.
There's nothing inside.
---
Gilberto Nunes Ferreira






On Tue, Feb 7, 2023 at 17:11, Gilberto Ferreira <
gilberto.nune...@gmail.com> wrote:

> Here we go
>
> https://download.gluster.org/pub/gluster/glusterfs/11/
> ---
> Gilberto Nunes Ferreira
>
>
>
>
>
>
> On Tue, Feb 7, 2023 at 17:05, sacawulu 
> wrote:
>
>> But *is* it out?
>>
>> I don't see it anywhere...
>>
>> MJ
>>
>>
>> On 07-02-2023 at 18:07, Gilberto Ferreira wrote:
>> > Hello guys!
>> > So what is the good news about this new release?
>> > Is anybody using it?
>> >
>> > Thanks for any feedback!
>> >
>> >
>> > ---
>> > Gilberto Nunes Ferreira
>> >
>> >
>> >
>> >
>> >
>> > 
>> >
>> >
>> >
>> > Community Meeting Calendar:
>> >
>> > Schedule -
>> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> > Bridge: https://meet.google.com/cpu-eiue-hvk
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>






Re: [Gluster-users] GlusterFS 11 is out!

2023-02-07 Thread Gilberto Ferreira
Here we go

https://download.gluster.org/pub/gluster/glusterfs/11/
---
Gilberto Nunes Ferreira






On Tue, Feb 7, 2023 at 17:05, sacawulu 
wrote:

> But *is* it out?
>
> I don't see it anywhere...
>
> MJ
>
>
> On 07-02-2023 at 18:07, Gilberto Ferreira wrote:
> > Hello guys!
> > So what is the good news about this new release?
> > Is anybody using it?
> >
> > Thanks for any feedback!
> >
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> >
> >
> >
> >
> > 
> >
> >
> >
> > Community Meeting Calendar:
> >
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://meet.google.com/cpu-eiue-hvk
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
>






[Gluster-users] GlusterFS 11 is out!

2023-02-07 Thread Gilberto Ferreira
Hello guys!
So what is the good news about this new release?
Is anybody using it?

Thanks for any feedback!


---
Gilberto Nunes Ferreira






Re: [Gluster-users] GlusterFS and sysctl tweaks.

2022-12-18 Thread Gilberto Ferreira
Thanks.

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Sun, Dec 18, 2022 at 20:03, Strahil Nikolov 
wrote:

> oVirt (upstream of RHV which is also KVM-based) uses sharding, which
> reduces sync times as only the changed shards are synced.
>
> Check the virt group's gluster tunables at /var/lib/glusterd/groups/virt .
> Also in the source:
> https://github.com/gluster/glusterfs/blob/devel/extras/group-virt.example
>
> WARNING: ONCE THE SHARDING IS ENABLED, NEVER EVER DISABLE IT !
>
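Why sharding reduces sync (heal) times can be seen with a small calculation: a write dirties only the shards its byte range overlaps, so a heal copies those shards instead of the whole image. A sketch, assuming a 64 MiB shard block size (a commonly used value, not stated in this message):

```python
# Sketch: compute which shard indices a write touches. After a brick
# outage, only these shards need healing, not the whole VM image.
# The 64 MiB shard size is an assumed, commonly used value.
SHARD_SIZE = 64 * 1024 * 1024

def touched_shards(offset: int, length: int, shard_size: int = SHARD_SIZE) -> list[int]:
    if length <= 0:
        return []
    first = offset // shard_size
    last = (offset + length - 1) // shard_size
    return list(range(first, last + 1))

# A 10 MiB write at offset 130 MiB inside a large image touches one shard:
print(touched_shards(130 * 1024 * 1024, 10 * 1024 * 1024))  # [2]
```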
> Best Regards,
> Strahil Nikolov
>
> On Sun, Dec 18, 2022 at 6:51, Gilberto Ferreira
>  wrote:
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>






Re: [Gluster-users] GlusterFS and sysctl tweaks.

2022-12-17 Thread Gilberto Ferreira
On Sat, Dec 17, 2022 at 13:20, Strahil Nikolov 
wrote:

> Gluster's tuned profile 'rhgs-random-io' has the following :
>
> [main]
> include=throughput-performance
>
> [sysctl]
> vm.dirty_ratio = 5
> vm.dirty_background_ratio = 2
>

Nice
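The two sysctl values from the profile quoted above can also be applied ad hoc for testing; a sketch (requires root, values copied from the rhgs-random-io profile above):

```shell
# Sketch: apply the dirty-page settings from the rhgs-random-io profile
# quoted above at runtime (requires root). Persist them in a file under
# /etc/sysctl.d/ for a permanent change.
sysctl -w vm.dirty_ratio=5
sysctl -w vm.dirty_background_ratio=2
```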


> What kind of workload do you have (sequential IO or not)?
>

It's for KVM images, which means big files.
My main concern is the healing time after failures.


> Best Regards,
> Strahil Nikolov
>
> On Fri, Dec 16, 2022 at 21:31, Gilberto Ferreira
>  wrote:
> Hello!
>
> Is there any sysctl tuning to improve glusterfs regarding network
> configuration?
>
> Thanks
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>






[Gluster-users] GlusterFS and sysctl tweaks.

2022-12-16 Thread Gilberto Ferreira
Hello!

Is there any sysctl tuning to improve glusterfs regarding network
configuration?

Thanks
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






Re: [Gluster-users] [Community Announcement] Announcing Kadalu Storage 1.0 Beta

2022-08-08 Thread Gilberto Ferreira
Ok, thank you.
---
Gilberto Nunes Ferreira






On Mon, Aug 8, 2022 at 14:21, Aravinda Vishwanathapura
wrote:

> It is planned, but not available with the first release.
>
> On Mon, 8 Aug 2022 at 9:57 PM, Gilberto Ferreira <
> gilberto.nune...@gmail.com> wrote:
>
>> Is there any web gui?
>> I would like to try that.
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>>
>>
>>
>>
>>
>> On Mon, Aug 8, 2022 at 13:09, Aravinda Vishwanathapura
>> wrote:
>>
>>> Hi All,
>>>
>>> Kadalu Storage is a modern storage solution based on GlusterFS. It uses
>>> core file system layer from GlusterFS and provides a modern management
>>> interface, ReST APIs, and many features.
>>>
>>> We are happy to announce the beta release of Kadalu Storage 1.0. This
>>> release includes many features from GlusterFS along with many improvements.
>>>
>>> Following quick start guide will help you to try out Kadalu Storage.
>>> Please provide your valuable feedback and feel free to open issues with
>>> feature requests or bug reports (Github Issues <
>>> https://github.com/kadalu/moana/issues>)
>>>
>>> https://kadalu.tech/storage/quick-start
>>>
>>> A few other additional links to understand the similarities/differences
>>> between Kadalu Storage and Gluster.
>>>
>>> - Gluster vs Kadalu Storage: https://kadalu.tech/gluster-vs-kadalu/
>>> - Try Kadalu Storage with containers:
>>> https://kadalu.tech/blog/try-kadalu-storage/
>>> - Project repository: https://github.com/kadalu/moana
>>>
>>> Notes:
>>>
>>> - 1.0 Beta release of Kubernetes integration is expected in a couple of
>>> weeks.
>>> - Packages for other distributions are work in progress and will be
>>> available after the 1.0 release.
>>>
>>> Blog: https://kadalu.tech/blog/announcing-kadalu-storage-1.0-beta
>>>
>>> --
>>> Thanks and Regards
>>> Aravinda Vishwanathapura
>>> https://kadalu.tech
>>> 
>>>
>>>
>>>
>>>
>>






Re: [Gluster-users] [Community Announcement] Announcing Kadalu Storage 1.0 Beta

2022-08-08 Thread Gilberto Ferreira
Is there any web gui?
I would like to try that.

---
Gilberto Nunes Ferreira






On Mon, Aug 8, 2022 at 13:09, Aravinda Vishwanathapura
 wrote:

> Hi All,
>
> Kadalu Storage is a modern storage solution based on GlusterFS. It uses
> core file system layer from GlusterFS and provides a modern management
> interface, ReST APIs, and many features.
>
> We are happy to announce the beta release of Kadalu Storage 1.0. This
> release includes many features from GlusterFS along with many improvements.
>
> Following quick start guide will help you to try out Kadalu Storage.
> Please provide your valuable feedback and feel free to open issues with
> feature requests or bug reports (Github Issues <
> https://github.com/kadalu/moana/issues>)
>
> https://kadalu.tech/storage/quick-start
>
> A few other additional links to understand the similarities/differences
> between Kadalu Storage and Gluster.
>
> - Gluster vs Kadalu Storage: https://kadalu.tech/gluster-vs-kadalu/
> - Try Kadalu Storage with containers:
> https://kadalu.tech/blog/try-kadalu-storage/
> - Project repository: https://github.com/kadalu/moana
>
> Notes:
>
> - 1.0 Beta release of Kubernetes integration is expected in a couple of
> weeks.
> - Packages for other distributions are work in progress and will be
> available after the 1.0 release.
>
> Blog: https://kadalu.tech/blog/announcing-kadalu-storage-1.0-beta
>
> --
> Thanks and Regards
> Aravinda Vishwanathapura
> https://kadalu.tech
> 
>
>
>
>






[Gluster-users] 2node gluster server with replica 2 and no split-brain!

2022-07-29 Thread Gilberto Ferreira
Hello there.
I am using a two-node Gluster setup with replica 2, and I have set the
following options in GlusterFS.
Here is the command to create the Gluster volume:
gluster vol create VMS replica 2 server1:/data/vms server2:/data/vms
Then I just issue the following:
gluster volume set VMS cluster.heal-timeout 5
gluster volume heal VMS enable
gluster volume set VMS cluster.quorum-reads false
gluster volume set VMS cluster.quorum-count 1
gluster volume set VMS network.ping-timeout 2
gluster volume set VMS cluster.favorite-child-policy mtime
gluster volume heal VMS granular-entry-heal enable
gluster volume set VMS cluster.data-self-heal-algorithm full
And that's it! Ready to go.

I am using this scenario for several clients now, mainly with Proxmox VE
with several Linux and Windows VM.
Some people complain to me about split-brains and the like, but I have had
none since I started using this configuration, about 6 or 8 months ago.
I am not sure if I need to change something or can just ignore the bad
comments I have heard and move on.
I just want to share that, in my experience, this setup has worked
flawlessly.
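For anyone running a similar two-node setup, one way to watch for files
stuck in heal (and catch split-brain early) is to parse the output of
`gluster volume heal VMS info`. The sample below is fabricated for
illustration; on a live system you would pipe the real command through the
same awk one-liner:

```shell
# Fabricated sample of 'gluster volume heal VMS info' output
cat > /tmp/heal_info.txt <<'EOF'
Brick server1:/data/vms
/images/100/vm-100-disk-0.qcow2
Number of entries: 1

Brick server2:/data/vms
Number of entries: 0
EOF
# Print one "brick: pending-entries" line per brick
awk '/^Brick /{brick=$2} /^Number of entries:/{print brick ": " $4}' /tmp/heal_info.txt
```

A non-zero count that never drains back to zero is the thing to
investigate before it turns into a split-brain.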



---
Gilberto Nunes Ferreira






Re: [Gluster-users] Gluster web gui

2022-07-20 Thread Gilberto Ferreira
Yep! I agree about Ovirt. It's awesome.
But I am looking for something specific just to gluster itself.


On Wed, Jul 20, 2022 at 12:39, Strahil Nikolov
wrote:

> oVirt is the upstream of the Red Hat Gluster Storage console.
> Also, it has an API for automation purposes.
>
> Best Regards,
> Strahil Nikolov
>
> On Wed, Jul 20, 2022 at 15:18, Gilberto Ferreira
>  wrote:
>
> Hello there.
>
> Does anybody know a good web interface to create and manage gluster nodes?
> Thanks in advance.
>
>
> Gilberto
>
>
>
> 
>
>
>
>
>






[Gluster-users] Gluster web gui

2022-07-20 Thread Gilberto Ferreira
Hello there.

Does anybody know a good web interface to create and manage gluster nodes?
Thanks in advance.


Gilberto






Re: [Gluster-users] Apt doesn't work.

2022-07-13 Thread Gilberto Ferreira
Well... it's working now.






On Tue, Jul 12, 2022 at 14:59, Gilberto Ferreira <
gilberto.nune...@gmail.com> wrote:

> Exactly my problem too.
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> On Tue, Jul 12, 2022 at 14:09, J B wrote:
>
>> I am also seeing this issue.  If I test with a wget on the package that
>> is failing it comes back with a ‘certificate doesn’t match hostname’ error
>> for that particular IP
>>
>>
>>
>>
>>
>>
>> 
>>
>>
>>
>>
>






Re: [Gluster-users] Apt doesn't work.

2022-07-12 Thread Gilberto Ferreira
Exactly my problem too.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Tue, Jul 12, 2022 at 14:09, J B wrote:

> I am also seeing this issue.  If I test with a wget on the package that is
> failing it comes back with a ‘certificate doesn’t match hostname’ error for
> that particular IP
>
>
>
>
>
>
> 
>
>
>
>






[Gluster-users] Apt doesn't work.

2022-07-12 Thread Gilberto Ferreira
Hi there.
I don't know if this happens only to me but I am trying to install
GlusterFS Latest using apt from
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
but it's not working.
I get stuck with an I/O error like this:
Do you want to continue? [Y/n]
Get:1
https://download.gluster.org/pub/gluster/glusterfs/10/LATEST/Debian/bullseye/amd64/apt
bullseye/main amd64 glusterfs-client amd64 10.2-1 [3,116 kB]
Err:1
https://download.gluster.org/pub/gluster/glusterfs/10/LATEST/Debian/bullseye/amd64/apt
bullseye/main amd64 glusterfs-client amd64 10.2-1
  Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
Get:2
https://download.gluster.org/pub/gluster/glusterfs/10/LATEST/Debian/bullseye/amd64/apt
bullseye/main amd64 libgfxdr0 amd64 10.2-1 [3,109 kB]
Err:2
https://download.gluster.org/pub/gluster/glusterfs/10/LATEST/Debian/bullseye/amd64/apt
bullseye/main amd64 libgfxdr0 amd64 10.2-1

It always worked fine before.
Anybody else?
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






Re: [Gluster-users] Create more would increse performance?

2022-03-05 Thread Gilberto Ferreira
That's nice to hear.

Regarding libgfapi, do you mean this:

# gluster volume set VOL_NAME server.allow-insecure on

Can you point me to some docs about creating Gluster subvolumes?

Thanks
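To make the sharding point in this thread concrete: when a large VM image
is sharded, its shards spread over the replica subvolumes, so a full-file
read is served by several bricks in parallel. A toy model follows;
round-robin placement here is purely illustrative, since real Gluster
places each shard via DHT hashing, not modular arithmetic:

```python
# Toy model: how shards of one big file could spread over replica subvolumes.
# Round-robin is illustrative only; Gluster's DHT hashes each shard's name.
def place_shards(num_shards: int, num_subvols: int) -> dict[int, list[int]]:
    placement: dict[int, list[int]] = {sv: [] for sv in range(num_subvols)}
    for shard in range(num_shards):
        placement[shard % num_subvols].append(shard)
    return placement

# 8 shards over 2 replica sets: each set serves half of a full-file read
print(place_shards(8, 2))  # {0: [0, 2, 4, 6], 1: [1, 3, 5, 7]}
```

The takeaway is only the shape of the distribution: more replica sets means
each one holds (and serves) a smaller fraction of any large sharded file.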

On Sat, Mar 5, 2022 at 19:37, Strahil Nikolov
wrote:

> Sharding is created for virtualization and it  provides better performance
> in a distributed- replicated volumes, as each shard is "placed" ontop of
> DHT.
> This way when the VM reads a large file (which spans over several shards),
> each shard can be read from a different brick -> speeding up the read.
>
> Also, you can explore libgfapi which , despite it's drawbacks , brings a
> lot of performance (at least based on several reports in the oVirt list).
>
> Overall, more subvolumes (replica sets) will bring better performance
> (most probably you will feel it in the reads) and with libgfapi the
> performance can go better.
>
> Best Regards,
> Strahil Nikolov
>
> On Sun, Mar 6, 2022 at 0:23, Gilberto Ferreira
>  wrote:
> Hi
>
> I'm working with kvm/qemu virtualization here.
> I already activated the virt group.
> However, I am considering making some changes.
> Mostly the workload is really big files.
>
>
> On Sat, Mar 5, 2022 at 18:21, Strahil Nikolov
> wrote:
>
> It depends. What kind of workload do you have ?
>
> Best Regards,
> Strahil Nikolov
>
> On Sat, Mar 5, 2022 at 17:22, Gilberto Ferreira
>  wrote:
> Hi there.
> Usually I create one Gluster volume with one brick, /mnt/data. If I create
> more than one brick, like server1:/data1, server1:/data2, and so on, would
> this increase overall performance?
> Thanks
> ---
> Gilberto Nunes Ferreira
>
>
>
>
> 
>
>
>
>
>






Re: [Gluster-users] Create more would increse performance?

2022-03-05 Thread Gilberto Ferreira
Hi

I'm working with kvm/qemu virtualization here.
I already activated the virt group.
However, I am considering making some changes.
Mostly the workload is really big files.


On Sat, Mar 5, 2022 at 18:21, Strahil Nikolov
wrote:

> It depends. What kind of workload do you have ?
>
> Best Regards,
> Strahil Nikolov
>
> On Sat, Mar 5, 2022 at 17:22, Gilberto Ferreira
>  wrote:
> Hi there.
> Usually I create one Gluster volume with one brick, /mnt/data. If I create
> more than one brick, like server1:/data1, server1:/data2, and so on, would
> this increase overall performance?
> Thanks
> ---
> Gilberto Nunes Ferreira
>
>
>
>
> 
>
>
>
>
>






[Gluster-users] Create more would increse performance?

2022-03-05 Thread Gilberto Ferreira
Hi there.
Usually I create one Gluster volume with one brick, /mnt/data. If I create
more than one brick, like server1:/data1, server1:/data2, and so on, would
this increase overall performance?
Thanks
---
Gilberto Nunes Ferreira






Re: [Gluster-users] Arbiter

2022-02-08 Thread Gilberto Ferreira
Yes! That's what I meant: two nodes plus the arbiter to achieve quorum.
Sorry if I made some confusion.
Thanks a lot.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
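For reference, the replica-3-with-arbiter layout described below in
Karthik's reply is created like this. Hostnames and paths are hypothetical;
the arbiter brick stores only file names and metadata, so it can live on
much smaller storage than the data bricks:

```shell
# Hypothetical hosts/paths; node3's brick is metadata-only (the arbiter).
gluster volume create VMS replica 3 arbiter 1 \
    node1:/data/brick/vms \
    node2:/data/brick/vms \
    node3:/data/arbiter/vms
gluster volume start VMS
```

This gives the same split-brain protection as a full replica-3 volume at
roughly two copies' worth of disk.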






On Tue, Feb 8, 2022 at 08:18, Karthik Subrahmanya <
ksubr...@redhat.com> wrote:

>
>
> On Tue, Feb 8, 2022 at 4:28 PM Gilberto Ferreira <
> gilberto.nune...@gmail.com> wrote:
>
>> Forgive me if I am wrong, but AFAIK, arbiter is for a two-node
>> configuration, isn't it?
>>
> Arbiter is to give the same consistency as replica-3 with 3 nodes, without
> the need to have a full sized 3rd brick [1]. It will store the files and
> their metadata but no data. This acts as a quorum brick to
> avoid split-brains.
> Since there are 4 nodes available here, and based on the configuration of
> the available volumes (requested volume info for the same) I was thinking
> whether the arbiter brick can be hosted on one of those nodes itself, or a
> new node is required.
>
> [1]
> https://docs.gluster.org/en/latest/Administrator-Guide/arbiter-volumes-and-quorum/
>
> Regards,
> Karthik
>
>> ---
>> Gilberto Nunes Ferreira
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>>
>>
>>
>>
>>
>> On Tue, Feb 8, 2022 at 07:17, Karthik Subrahmanya <
>> ksubr...@redhat.com> wrote:
>>
>>> Hi Andre,
>>>
>>> Striped volumes are deprecated long back, see [1] & [2]. Seems like you
>>> are using a very old version. May I know which version of gluster you are
>>> running and the gluster volume info please?
>>> Release schedule and the maintained branches can be found at [3].
>>>
>>>
>>> [1] https://docs.gluster.org/en/latest/release-notes/6.0/
>>> [2]
>>> https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html
>>> [3] https://www.gluster.org/release-schedule/
>>>
>>> Regards,
>>> Karthik
>>>
>>> On Mon, Feb 7, 2022 at 9:43 PM Andre Probst 
>>> wrote:
>>>
>>>> I have a striped and replicated volume with 4 nodes. How do I add an
>>>> arbiter to this volume?
>>>>
>>>>
>>>> --
>>>> André Probst
>>>> Consultor de Tecnologia
>>>> 43 99617 8765
>>>> 
>>>>
>>>>
>>>>
>>>>
>>> 
>>>
>>>
>>>
>>>
>>






Re: [Gluster-users] Arbiter

2022-02-08 Thread Gilberto Ferreira
Forgive me if I am wrong, but AFAIK, arbiter is for a two-node
configuration, isn't it?
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Tue, Feb 8, 2022 at 07:17, Karthik Subrahmanya <
ksubr...@redhat.com> wrote:

> Hi Andre,
>
> Striped volumes are deprecated long back, see [1] & [2]. Seems like you
> are using a very old version. May I know which version of gluster you are
> running and the gluster volume info please?
> Release schedule and the maintained branches can be found at [3].
>
>
> [1] https://docs.gluster.org/en/latest/release-notes/6.0/
> [2]
> https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html
> [3] https://www.gluster.org/release-schedule/
>
> Regards,
> Karthik
>
> On Mon, Feb 7, 2022 at 9:43 PM Andre Probst 
> wrote:
>
>> I have a striped and replicated volume with 4 nodes. How do I add an
>> arbiter to this volume?
>>
>>
>> --
>> André Probst
>> Consultor de Tecnologia
>> 43 99617 8765
>> 
>>
>>
>>
>>
> 
>
>
>
>






Re: [Gluster-users] Permission denied closing file when accessing GlusterFS via NFS

2021-08-11 Thread Gilberto Ferreira
You need to put those options in the NFS server configuration, generally
in /etc/exports.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
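As a hedged illustration only (the exported path and client network below
are hypothetical), such an /etc/exports entry could look like:

```conf
# /etc/exports -- exporting a FUSE-mounted Gluster volume over NFS
/var/lib/gfs  192.168.1.0/24(rw,async,no_root_squash,no_subtree_check)
```

After editing /etc/exports, re-export with `exportfs -ra`. Note that
`async` and `no_root_squash` trade safety for convenience, so weigh them
against your durability and security requirements.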






On Tue, Aug 10, 2021 at 18:24, David Cunningham <
dcunning...@voisonics.com> wrote:

> Hi Strahil and Gilberto,
>
> Thanks very much for your replies. SELinux is disabled on the NFS server
> (and the client too), and both have the same UID and GID for the user who
> owns the files.
>
> On the NFS mount we had options "rw,noatime,hard,bg,intr,vers=4". I added
> "async" which did not solve the problem, and the NFS client mount gave an
> error when trying to use "no_root_squash" or "no_subtree_check". Gilberto,
> is there a specific reason why you suggested those options?
>
> Thanks again.
>
>
> On Wed, 11 Aug 2021 at 03:55, Gilberto Ferreira <
> gilberto.nune...@gmail.com> wrote:
>
>> HOw about the NFS options?
>> (rw,async,no_root_squash,no_subtree_check)
>> ---
>> Gilberto Nunes Ferreira
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>>
>>
>>
>>
>>
>> On Tue, Aug 10, 2021 at 12:46, Strahil Nikolov <
>> hunter86...@yahoo.com> wrote:
>>
>>> Hey David,
>>>
>>> can you give the volume info ?
>>>
>>> Also, I assume SELINUX is in permissive/disabled state.
>>>
>>> What about the UID of the user on the NFS client and the NFS server? Is
>>> it the same?
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Tue, Aug 10, 2021 at 5:52, David Cunningham
>>>  wrote:
>>> Hello,
>>>
>>> We have a GlusterFS node which also uses the FUSE client to mount the
>>> filesystem. The same GlusterFS node server also runs an NFS server which
>>> exports the FUSE client mount, and another machine NFS mounts it.
>>>
>>> When the NFS client writes data to the mounted filesystem we are seeing
>>> "Permission denied" errors like this:
>>>
>>> cp: closing
>>> `/var/lib/gfs/company/david/1075/Copyrec/1628448189883606-203-17184805327-out-08-08-21-14~43~10-203.mp3':
>>> Permission denied
>>>
>>> The file mentioned in the error is actually created on the GlusterFS
>>> filesystem, but has zero size, so the problem is not a normal Linux
>>> filesystem permission one.
>>>
>>> In the brick log nodirectwritedata-gluster-gvol0.log on the GlusterFS
>>> node we see an error as follows. Would anyone have a suggestion on what the
>>> problem might be? Thank you in advance!
>>>
>>> [2021-08-10 02:30:20.359159] I [MSGID: 139001]
>>> [posix-acl.c:262:posix_acl_log_permit_denied] 0-gvol0-access-control:
>>> client:
>>> CTX_ID:8f69363a-f0f4-44e1-84e9-69dfa77a8164-GRAPH_ID:0-PID:2657-HOST:gfs1.company.com-PC_NAME:gvol0-client-0-RECON_NO:-0,
>>> gfid: f70b1cd6-745a-4ea6-b0a5-1fcfef960f15,
>>> req(uid:106,gid:111,perm:2,ngrps:0),
>>> ctx(uid:106,gid:111,in-groups:1,perm:000,updated-fop:SETATTR, acl:-)
>>> [Permission denied]
>>> [2021-08-10 02:30:20.359187] E [MSGID: 115070]
>>> [server-rpc-fops_v2.c:1502:server4_open_cbk] 0-gvol0-server: 5554927: OPEN
>>> /company/david/1075/Copyrec/1628448189883606-203-17184805327-out-08-08-21-14~43~10-203.mp3
>>> (f70b1cd6-745a-4ea6-b0a5-1fcfef960f15), client:
>>> CTX_ID:8f69363a-f0f4-44e1-84e9-69dfa77a8164-GRAPH_ID:0-PID:2657-HOST:gfs1.company.com-PC_NAME:gvol0-client-0-RECON_NO:-0,
>>> error-xlator: gvol0-access-control [Permission denied]
>>>
>>> --
>>> David Cunningham, Voisonics Limited
>>> http://voisonics.com/
>>> USA: +1 213 221 1092
>>> New Zealand: +64 (0)28 2558 3782
>>> 
>>>
>>>
>>>
>>>
>>> 
>>>
>>>
>>>
>>>
>>
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
>






Re: [Gluster-users] Permission denied closing file when accessing GlusterFS via NFS

2021-08-10 Thread Gilberto Ferreira
HOw about the NFS options?
(rw,async,no_root_squash,no_subtree_check)
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Tue, Aug 10, 2021 at 12:46, Strahil Nikolov
wrote:

> Hey David,
>
> can you give the volume info ?
>
> Also, I assume SELINUX is in permissive/disabled state.
>
> What about the UID of the user on the NFS client and the NFS server? Is
> it the same?
>
> Best Regards,
> Strahil Nikolov
>
> On Tue, Aug 10, 2021 at 5:52, David Cunningham
>  wrote:
> Hello,
>
> We have a GlusterFS node which also uses the FUSE client to mount the
> filesystem. The same GlusterFS node server also runs an NFS server which
> exports the FUSE client mount, and another machine NFS mounts it.
>
> When the NFS client writes data to the mounted filesystem we are seeing
> "Permission denied" errors like this:
>
> cp: closing
> `/var/lib/gfs/company/david/1075/Copyrec/1628448189883606-203-17184805327-out-08-08-21-14~43~10-203.mp3':
> Permission denied
>
> The file mentioned in the error is actually created on the GlusterFS
> filesystem, but has zero size, so the problem is not a normal Linux
> filesystem permission one.
>
> In the brick log nodirectwritedata-gluster-gvol0.log on the GlusterFS node
> we see an error as follows. Would anyone have a suggestion on what the
> problem might be? Thank you in advance!
>
> [2021-08-10 02:30:20.359159] I [MSGID: 139001]
> [posix-acl.c:262:posix_acl_log_permit_denied] 0-gvol0-access-control:
> client:
> CTX_ID:8f69363a-f0f4-44e1-84e9-69dfa77a8164-GRAPH_ID:0-PID:2657-HOST:gfs1.company.com-PC_NAME:gvol0-client-0-RECON_NO:-0,
> gfid: f70b1cd6-745a-4ea6-b0a5-1fcfef960f15,
> req(uid:106,gid:111,perm:2,ngrps:0),
> ctx(uid:106,gid:111,in-groups:1,perm:000,updated-fop:SETATTR, acl:-)
> [Permission denied]
> [2021-08-10 02:30:20.359187] E [MSGID: 115070]
> [server-rpc-fops_v2.c:1502:server4_open_cbk] 0-gvol0-server: 5554927: OPEN
> /company/david/1075/Copyrec/1628448189883606-203-17184805327-out-08-08-21-14~43~10-203.mp3
> (f70b1cd6-745a-4ea6-b0a5-1fcfef960f15), client:
> CTX_ID:8f69363a-f0f4-44e1-84e9-69dfa77a8164-GRAPH_ID:0-PID:2657-HOST:gfs1.company.com-PC_NAME:gvol0-client-0-RECON_NO:-0,
> error-xlator: gvol0-access-control [Permission denied]
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
> 
>
>
>
>
> 
>
>
>
>






Re: [Gluster-users] Gluster x Ceph: disk space consuming...

2021-08-05 Thread Gilberto Ferreira
Ok, thanks.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Thu, Aug 5, 2021 at 00:49, Strahil Nikolov
wrote:

> VDO cannot compress everything very well, but the deduplication feature
> can deal pretty well with VMs of same type.
> Imagine that you don't use template for OS disks, then 10 VMs' OS disks
> should eat less space ontop VDO.
>
>
> Best Regards,
> Strahil Nikolov
>
> On Wed, Aug 4, 2021 at 20:58, Gilberto Ferreira
>  wrote:
> 
>
>
>
>
>






Re: [Gluster-users] Gluster x Ceph: disk space consuming...

2021-08-04 Thread Gilberto Ferreira
Back to square one.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Wed, Aug 4, 2021 at 14:10, Gilberto Ferreira <
gilberto.nune...@gmail.com> wrote:

> Ok.
> But, with VDO I should see less consumed space, compared to a mounted
> point without VDO.
> But this does not happen.
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> On Wed, Aug 4, 2021 at 13:41, Dmitry Melekhov
> wrote:
>
>> On 04.08.2021 20:19, Gilberto Ferreira wrote:
>>
>> Well
>> I make some POC but doesn't seems to me any advantage in use VDO.
>> Without VDO I have created a single Ubuntu VM Server, and the SO just
>> used 3.6G.
>> But using VDO I noticed that the underlay used the very same 3.6G.
>> Perhaps I am doing something wrong.
>> I had have used the following command in order to create VDO:
>> vdo create --name=vdo1 --device=/dev/sdb --emulate512=enabled
>> --vdoLogicalSize=200G
>>
>> Thanks for any advice.
>>
>>
>> Well, I use VDO on RHEL based backup server and here is how it looks like:
>>
>>  vdostats --si
>> DeviceSize  Used Available Use% Space saving%
>> /dev/mapper/vdo1  3.8T  2.3T  1.6T  59%   54%
>>
>>
>> df -h
>>
>> /dev/mapper/vdo17,5T 4,6T  3,0T   61% /BACKUP
>>
>> As you can see I saved 1/2 of space.
>>
>> On filesystem level file sizes will be the same , of course...
>>
>>
>>
>>
>>
>> ---
>> Gilberto Nunes Ferreira
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>>
>>
>>
>>
>>
>> On Wed, Aug 4, 2021 at 07:26, Dmitry Melekhov
>> wrote:
>>
>>> On 04.08.2021 14:19, Strahil Nikolov wrote:
>>>
>>> Are you sure you did enable the '--emulate512' when creating VDO ?
>>>
>>> I'm sure we did not add  this option.
>>>
>>> Thank you, we'll try .
>>>
>>>
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Wed, Aug 4, 2021 at 5:36, Dmitry Melekhov
>>>   wrote:
>>> 
>>>
>>>
>>>
>>>
>>>
>>> 
>>>
>>>
>>>
>>>
>>






Re: [Gluster-users] Gluster x Ceph: disk space consuming...

2021-08-04 Thread Gilberto Ferreira
Ok.
But with VDO I should see less consumed space, compared to a mount point
without VDO.
But this does not happen.

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Wed, Aug 4, 2021 at 13:41, Dmitry Melekhov
wrote:

> On 04.08.2021 20:19, Gilberto Ferreira wrote:
>
> Well,
> I did a PoC, but I don't see any advantage in using VDO.
> Without VDO I created a single Ubuntu Server VM, and the OS used just
> 3.6G.
> But using VDO I noticed that the underlying device used the very same 3.6G.
> Perhaps I am doing something wrong.
> I used the following command to create the VDO volume:
> vdo create --name=vdo1 --device=/dev/sdb --emulate512=enabled
> --vdoLogicalSize=200G
>
> Thanks for any advice.
>
>
> Well, I use VDO on RHEL based backup server and here is how it looks like:
>
>  vdostats --si
> DeviceSize  Used Available Use% Space saving%
> /dev/mapper/vdo1  3.8T  2.3T  1.6T  59%   54%
>
>
> df -h
>
> /dev/mapper/vdo17,5T 4,6T  3,0T   61% /BACKUP
>
> As you can see I saved 1/2 of space.
>
> On filesystem level file sizes will be the same , of course...
>
>
>
>
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
>
>
> On Wed, Aug 4, 2021 at 07:26, Dmitry Melekhov
> wrote:
>
>> On 04.08.2021 14:19, Strahil Nikolov wrote:
>>
>> Are you sure you enabled '--emulate512' when creating the VDO volume?
>>
>> I'm sure we did not add this option.
>>
>> Thank you, we'll try.
>>
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Wed, Aug 4, 2021 at 5:36, Dmitry Melekhov
>>   wrote:
>> 
>>
>>
>>
>>
>>
>> 
>>
>>
>>
>>
>






Re: [Gluster-users] Gluster x Ceph: disk space consuming...

2021-08-04 Thread Gilberto Ferreira
Well,
I ran a small POC, but I don't see any advantage in using VDO.
Without VDO I created a single Ubuntu Server VM, and the OS used
just 3.6G.
But with VDO I noticed that the underlying storage used the very same 3.6G.
Perhaps I am doing something wrong.
I used the following command to create the VDO volume:
vdo create --name=vdo1 --device=/dev/sdb --emulate512=enabled
--vdoLogicalSize=200G

Thanks for any advice.
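One possible explanation (a toy sketch, not VDO itself): block-level dedup and compression can only save space when data repeats or compresses well, and a freshly installed VM image is mostly unique, already-packed data. The demo below uses md5sum to stand in for a content-addressed store; the file names and sizes are made up:

```shell
#!/bin/sh
# Two "VM images": one unique, one an exact clone of the first.
set -e
workdir=$(mktemp -d)
dd if=/dev/urandom of="$workdir/vm1.img" bs=1M count=4 2>/dev/null
cp "$workdir/vm1.img" "$workdir/vm2.img"   # cloned VM: fully duplicate data

# A dedup layer stores each distinct content only once:
distinct=$(md5sum "$workdir/vm1.img" "$workdir/vm2.img" | awk '{print $1}' | sort -u | wc -l)
echo "images: 2, distinct contents: $distinct"   # prints: images: 2, distinct contents: 1
rm -rf "$workdir"
```

With only unique data, the distinct count equals the file count and the dedup layer saves nothing, which is consistent with the 3.6G-vs-3.6G observation above.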



---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Wed, Aug 4, 2021 at 07:26, Dmitry Melekhov wrote:

> On 04.08.2021 14:19, Strahil Nikolov wrote:
>
> Are you sure you enabled '--emulate512' when creating the VDO volume?
>
> I'm sure we did not add this option.
>
> Thank you, we'll try.
>
>
>
> Best Regards,
> Strahil Nikolov
>
> On Wed, Aug 4, 2021 at 5:36, Dmitry Melekhov
>   wrote:
> 
>
>
>
>
>
> 
>
>
>
>






[Gluster-users] Gluster x Ceph: disk space consuming...

2021-08-03 Thread Gilberto Ferreira
Hi there

Is there any paper or howto that compares Gluster and Ceph with regard
to disk space consumption?
It seems to me that GlusterFS is more efficient in this respect.
Any advice will be welcome!

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
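One concrete way to frame the space question: with 3-way replication, both Gluster (replica 3) and Ceph (size=3) store three full copies, so usable space is raw/3; erasure coding (Ceph EC pools, or Gluster dispersed volumes) stores k data plus m parity chunks, so usable space is raw*k/(k+m). A back-of-the-envelope sketch with made-up numbers (6 nodes x 10 TB raw):

```shell
#!/bin/sh
raw_tb=60   # hypothetical: 6 nodes x 10 TB raw

# 3-way replication (Gluster replica 3, Ceph size=3): three full copies.
replica3_tb=$((raw_tb / 3))

# Erasure coding k=4, m=2 (Ceph EC pool or Gluster disperse 4+2):
# 6 chunks stored for every 4 data chunks.
ec42_tb=$((raw_tb * 4 / (4 + 2)))

echo "replica 3 usable: ${replica3_tb} TB"   # 20 TB
echo "EC 4+2 usable:    ${ec42_tb} TB"       # 40 TB
```

So for pure space efficiency the replication-vs-erasure-coding choice matters far more than Gluster-vs-Ceph as such.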






[Gluster-users] GlusterFS for virtualization + RAID10 (HDD)

2021-04-16 Thread Gilberto Ferreira
Hi there

I am about to deploy a Gluster setup with Proxmox VE.
Both servers have 4 SAS disks configured as RAID10.
Is RAID10 OK? The network has four 1 GbE NICs, and I am thinking of
bonding three of them. What do you think?
Thanks for any advice.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
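On the bonding idea: of the modes that spread traffic across multiple 1 GbE ports, balance-alb (mode 6) is the usual choice when the switch cannot be reconfigured, since it needs no switch-side support. A sketch with iproute2 (the interface names and address are hypothetical):

```shell
# balance-alb bond of three 1 GbE ports; no switch configuration required
ip link add bond0 type bond mode balance-alb miimon 100
for nic in enp1s0 enp2s0 enp3s0; do
    ip link set "$nic" down
    ip link set "$nic" master bond0
done
ip link set bond0 up
ip addr add 192.168.10.11/24 dev bond0   # hypothetical storage-network address
```

Note that any single TCP flow is still limited to one NIC's speed, so the bond helps aggregate throughput across VMs rather than any one transfer.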






[Gluster-users] GlusterFS and Network

2021-03-12 Thread Gilberto Ferreira
Hi there

I have some issues with GlusterFS (v8) regarding the network connection.
I tried running about 22 VMs, each with a 15 GB disk, across 2 nodes
over a single 1 GbE NIC, and I ran into a lot of trouble, as you can
imagine.
Now I am considering one of two scenarios:

1 - Bond four 1 GbE NICs in mode 2 or mode 6. Question: which mode is
better, and which one does not require special switch configuration?

2 - Use a 10 GbE NIC. I know this is the best choice, but I need to
know whether the first option would work as well as the second.

Thanks a lot
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
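On mode 2 vs mode 6 specifically, a summary of the kernel bonding documentation, sketched as a lookup: mode 2 (balance-xor), like mode 0, expects a static EtherChannel-style port group on the switch, while mode 6 (balance-alb) and mode 1 (active-backup) need no special switch configuration; mode 4 (802.3ad) needs LACP on the switch.

```shell
#!/bin/sh
# Which Linux bonding modes need switch-side configuration
# (summarized from the kernel's bonding documentation).
bond_mode_info() {
    case "$1" in
        0) echo "balance-rr: static EtherChannel needed on the switch" ;;
        1) echo "active-backup: no switch configuration needed" ;;
        2) echo "balance-xor: static EtherChannel needed on the switch" ;;
        4) echo "802.3ad: LACP needed on the switch" ;;
        6) echo "balance-alb: no switch configuration needed" ;;
        *) echo "mode $1: see the kernel bonding documentation" ;;
    esac
}

bond_mode_info 2   # balance-xor: static EtherChannel needed on the switch
bond_mode_info 6   # balance-alb: no switch configuration needed
```

So for throughput without touching the switch, mode 6 is the common pick; either way a single flow is capped at one NIC's speed, which is why 10 GbE remains the better option.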



