Re: [Gluster-devel] [Gluster-users] Error in gluster v11

2023-05-14 Thread Strahil Nikolov
Looks similar to https://github.com/gluster/glusterfs/issues/4104 - I don't see
any progress there. Maybe asking in gluster-devel (in CC) could help.

Best Regards,
Strahil Nikolov



On Sunday, May 14, 2023, 5:28 PM, Gilberto Ferreira wrote:

Anybody also has this error?
May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +] C [gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5) [0x7fb4ebad41a5] ) 0-: Assertion failed:
May 14 07:05:39 srv01 vms[9404]: patchset: git://git.gluster.org/glusterfs.git
May 14 07:05:39 srv01 vms[9404]: package-string: glusterfs 11.0

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram













Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
gluster-us...@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users



---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Regarding Glusterfs file locking

2023-02-02 Thread Strahil Nikolov
 As far as I remember there are only 2 types of locking in Linux:
- Advisory
- Mandatory

In order to use mandatory locking, you need to pass the "mand" mount option to
the FUSE client (mount -o mand, ...) and chmod g+s,g-x //
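
A minimal sketch of what that looks like in practice (the server, volume and
mount point names below are made up for illustration; whether mount.glusterfs
passes the flag through depends on the client version):

# mount the volume with mandatory locking enabled on the client
mount -t glusterfs -o mand node1:/myvol /mnt/myvol
# mark the shared file for mandatory locking (setgid on, group execute off)
chmod g+s,g-x /mnt/myvol/shared.dat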


Best Regards,
Strahil Nikolov
On Wednesday, 1 February 2023 at 13:22:59 GMT+2, Maaz Sheikh wrote:
 
Team, please let us know if you have any feedback.

From: Maaz Sheikh
Sent: Wednesday, January 25, 2023 4:51 PM
To: gluster-devel@gluster.org ; 
gluster-us...@gluster.org 
Subject: Regarding Glusterfs file locking

Hi, greetings of the day.

Our configuration is as follows: we have installed both the GlusterFS server and
the GlusterFS client on node1 as well as node2, and we have mounted the node1
volume on both nodes.

Our use case is: from GlusterFS node1, we have to take an exclusive lock, open a
file (which is shared between both nodes) and read/write to that file. From
GlusterFS node2, it should not be possible to read/write that file.

The problem we are facing is: from node1 we are able to take the exclusive lock
and the program has started writing to the shared file, yet from node2 we are
still able to read and write that file, which should not happen because node1
has already acquired the lock on it.

Therefore, requesting you to please provide us a solution asap.

Thanks,
Maaz Sheikh
Associate Software Engineer, Impetus Technologies India
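
For reference, advisory locks only block a process that also asks for the lock;
a plain open()/write() from node2 is never stopped by them. A quick way to check
this from node2 (sketch; the file path is illustrative):

# fails only if node1 really holds an exclusive lock AND node2 asks for one too
flock -xn /mnt/myvol/shared.dat -c 'echo "got the lock"' || echo "locked elsewhere"
# a plain write ignores advisory locks entirely, which matches the behaviour above
echo test >> /mnt/myvol/shared.dat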






NOTE: This message may contain information that is confidential, proprietary, 
privileged or otherwise protected by law. The message is intended solely for 
the named addressee. If received in error, please destroy and notify the 
sender. Any use of this email is prohibited when received in error. Impetus 
does not represent, warrant and/or guarantee, that the integrity of this 
communication has been maintained nor that the communication is free of errors, 
virus, interception or interference.
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

  ---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Gluster 5.10 rebalance stuck

2022-11-07 Thread Strahil Nikolov
Hi Dev list,
How can I find the details about the rebalance_status/status ids ? Is it 
actually normal that some systems are in '4' , others in '3' ?
Is it safe to forcefully start a new rebalance ?
Best Regards,
Strahil Nikolov
 
On Mon, Nov 7, 2022 at 9:15, Shreyansh Shah wrote:

Hi Strahil,
Adding the info below:

--
Node IP = 10.132.0.19
rebalance_status=1
status=4
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=27054
size=7104425578505
scanned=72141
failures=10
skipped=19611
run-time=92805.00
--
Node IP = 10.132.0.20
rebalance_status=1
status=4
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=23945
size=7126809216060
scanned=71208
failures=7
skipped=18834
run-time=94029.00
--
Node IP = 10.132.1.12
rebalance_status=1
status=4
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=12533
size=12945021256
scanned=40398
failures=14
skipped=1194
run-time=92201.00
--
Node IP = 10.132.1.13
rebalance_status=1
status=3
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=41483
size=8845076025598
scanned=179920
failures=25
skipped=62373
run-time=130017.00
--
Node IP = 10.132.1.14
rebalance_status=1
status=3
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=43603
size=7834691799355
scanned=204140
failures=2878
skipped=87761
run-time=130016.00
--
Node IP = 10.132.1.15
rebalance_status=1
status=4
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=29968
size=6389568855140
scanned=69320
failures=7
skipped=17999
run-time=93654.00
--
Node IP = 10.132.1.16
rebalance_status=1
status=4
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=23226
size=5899338197718
scanned=56169
failures=7
skipped=12659
run-time=94030.00
--
Node IP = 10.132.1.17
rebalance_status=1
status=4
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=17538
size=6247281008602
scanned=50038
failures=8
skipped=11335
run-time=92203.00
--
Node IP = 10.132.1.18
rebalance_status=1
status=4
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=20394
size=6395008466977
scanned=50060
failures=7
skipped=13784
run-time=92103.00
--
Node IP = 10.132.1.19
rebalance_status=1
status=1
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=0
size=0
scanned=0
failures=0
skipped=0
run-time=0.00
--
Node IP = 10.132.1.20
rebalance_status=1
status=3
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=0
size=0
scanned=24
failures=0
skipped=2
run-time=1514.00

On Thu, Nov 3, 2022 at 10:10 PM Strahil Nikolov  wrote:

And the other servers ?
 
 
On Thu, Nov 3, 2022 at 16:21, Shreyansh Shah wrote:

Hi Strahil,
Thank you for your reply. node_state.info has the below data:


root@gluster-11:/usr/var/lib/glusterd/vols/data# cat node_state.info
 rebalance_status=1
status=3
rebalance_op=19
rebalance-id=39a89b51-2549-4348-aa47-0db321c3a32f
rebalanced-files=0
size=0
scanned=24
failures=0
skipped=2
run-time=1514.00



On Thu, Nov 3, 2022 at 4:00 PM Strahil Nikolov  wrote:

I would check the details in 
/var/lib/glusterd/vols//node_state.info
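
To compare all peers at once, something along these lines works (sketch; the
host list and volume name are placeholders):

VOL=data
for h in server1 server2 server3; do
    echo "== $h =="
    ssh "$h" cat "/var/lib/glusterd/vols/$VOL/node_state.info"
done
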
Best Regards,
Strahil Nikolov
 
 
On Wed, Nov 2, 2022 at 9:06, Shreyansh Shah wrote:

Hi,
I would really appreciate it if someone would be able to help with the above
issue. We are stuck as we cannot run rebalance due to this and thus are not 
able to extract peak performance from the setup due to unbalanced data.
Adding gluster info (without the bricks) below. Please let me know if any other 
details/logs are needed.


Volume Name: data
Type: Distribute
Volume ID: 75410231-bb25-4f14-bcde-caf18fce1d31
Status: Started
Snapshot Count: 0
Number of Bricks: 41
Transport-type: tcp
Options Reconfigured:
server.event-threads: 4
network.ping-timeout: 90
client.keepalive-time: 60
server.keepalive-time: 60
storage.health-check-interval: 60
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.cache-size: 8GB
performance.cache-refresh-timeout: 60
cluster.min-free-disk: 3%
client.event-threads: 4
performance.io-thread-count: 16


On Fri, Oct 28, 2022 at 11:40 AM Shreyansh Shah wrote:

Hi,
We are running glusterfs 5.10 server volume. Recently we added a few new bricks 
and started a rebalance operation. After a couple of days the rebalance 
operation was just stuck, with one of the peers showing In-Progress with no 
file being read

[Gluster-devel] no SAMBA group settings in gluster v9

2021-10-05 Thread Strahil Nikolov
Hello All,

does anyone know why the SAMBA group settings are missing on Gluster v9.3 ?
The system is Ubuntu 20.04.3 LTS and it seems that the samba group of settings 
is no longer there:
# ls -l /var/lib/glusterd/groups
total 20
-rw-r--r-- 1 root root 337 Sep 30 12:25 db-workload
-rw-r--r-- 1 root root 787 Sep 30 12:25 gluster-block
-rw-r--r-- 1 root root 198 Sep 30 12:25 metadata-cache
-rw-r--r-- 1 root root 159 Sep 30 12:25 nl-cache
-rw-r--r-- 1 root root 674 Sep 30 12:25 virt

I checked the release notes for v8 & v9 and I can't find anything about such a change.
It seems that group-samba still exists upstream:
https://github.com/gluster/glusterfs/blob/devel/extras/group-samba
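
If the file simply did not get packaged, it can be dropped back in from the
upstream copy linked above and applied per volume (sketch; 'myvol' is a
placeholder volume name, and the raw URL is just the raw view of the file above):

# fetch the upstream group definition into the groups directory
curl -o /var/lib/glusterd/groups/samba \
    https://raw.githubusercontent.com/gluster/glusterfs/devel/extras/group-samba
# apply the whole set of options to a volume in one go
gluster volume set myvol group samba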

Best Regards,
Strahil Nikolov 
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Ansible - allow ctdb communication in firewalld

2021-08-31 Thread Strahil Nikolov
Hello All,

While I was working on a custom version of the ctdb role, I noticed that we got 
no firewalld task to open the ctdb traffic.
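
In firewalld terms the task boils down to opening the CTDB inter-node port
(4379/tcp) on the storage nodes; the manual equivalent is roughly:

firewall-cmd --add-port=4379/tcp --permanent   # CTDB inter-node traffic
firewall-cmd --reload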

I have opened a PR: https://github.com/gluster/gluster-ansible-features/pull/50

Any comments are welcome.

P.S.: The logic and variable names were derived from gluster.infra


Best Regards,
Strahil Nikolov
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] Who can review Ansible code

2021-05-30 Thread Strahil Nikolov
Hello All,

I have opened https://github.com/gluster/gluster-ansible-infra/pull/122 and it 
seems that it hasn't been reviewed for a while.

Who can assist with the review ? Are there any maintainers of the code ?


Best Regards,
Strahil Nikolov
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [CentOS-devel] Multiple Geo Rep issues due to SELINUX on CentOS 8.3

2021-01-08 Thread Strahil Nikolov

Hi Srijan,

I have compiled the rpm from the git repo and zeroed-out the
lab and redeployed with that package:


[root@glustera ~]# rpm -qa | grep glusterfs-selinux
glusterfs-selinux-0.1.0-2.el8.noarch
[root@glustera ~]# rpm -ql glusterfs-selinux
/usr/share/selinux/devel/include/contrib/ipp-glusterd.if
/usr/share/selinux/packages/targeted/glusterd.pp.bz2
/var/lib/selinux/targeted/active/modules/200/glusterd

Yet, the result is very bad. I had to set SELinux to permissive on all source
nodes to establish a successful geo-rep, and all 7 denials occurred again
on the source volume nodes:

[root@glustera ~]# sealert -a /var/log/audit/audit.log 
100% done
found 7 alerts in /var/log/audit/audit.log
-
---

Are you sure that glusterfs-selinux is suitable for EL8 and glusterfs
v8.3 ?
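
As a stop-gap, the recorded denials can be turned into a local policy module
instead of leaving the nodes permissive (sketch; the module name is arbitrary):

ausearch -m AVC -ts today | audit2allow -M gluster-georep-local
semodule -i gluster-georep-local.pp
setenforce 1   # back to enforcing once the module is loaded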


Best Regards,
Strahil Nikolov



At 19:47 + on 06.01.2021 (Wed), Strahil Nikolov via CentOS-devel
wrote:
> Hi Srijan,
> 
> I just checked the gluster repo 'centos-gluster8' and it seems that
> there is no gluster package containing 'selinux' in the name . Same
> is valid for CentOS 7.
> Maybe I have to address that to the CentOS Storage SIG ?
> 
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Wednesday, 6 January 2021 at 10:44:37 GMT+2, Strahil Nikolov via
> CentOS-devel wrote:
> 
> 
> 
> 
> 
> Hi Srijan,
> 
> I will redeploy the scenario and I will check if the steps include
> that package.
> Shouldn't glusterfs-selinux be a dependency?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Wednesday, 6 January 2021 at 07:29:27 GMT+2, Srijan Sivakumar <
> ssiva...@redhat.com> wrote:
> 
> 
> 
> 
> 
> Hi Strahil,
> 
> Selinux policies and rules have to be added for gluster processes to
> work as intended when selinux is in enforced mode. Could you confirm
> if you've installed the glusterfs-selinux package in the nodes ?
> If not then you can check out the repo at 
> https://github.com/gluster/glusterfs-selinux.
> 
> Regards,
> Srijan
> 
> On Wed, Jan 6, 2021 at 2:15 AM Strahil Nikolov wrote:
> > Did anyone receive that e-mail ?
> > Any hints ?
> > 
> > Best Regards,
> > Strahil Nikolov
> > 
> > At 19:05 + on 30.12.2020 (Wed), Strahil Nikolov wrote:
> > > Hello All,
> > > 
> > > I have been testing Geo Replication on Gluster v 8.3 on top of CentOS
> > > 8.3.
> > > It seems that everything works until SELINUX is added to the
> > > equation.
> > > 
> > > So far I have identified several issues on the Master Volume's
> > > nodes:
> > > - /usr/lib/ld-linux-x86-64.so.2 has a different SELINUX Context
> > > than
> > > the target that it is pointing to. For details check 
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1911133
> > > 
> > > - SELINUX prevents /usr/bin/ssh from search access to
> > > /var/lib/glusterd/geo-replication/secret.pem
> > > 
> > > - SELinux is preventing /usr/bin/ssh from search access to .ssh
> > > 
> > > - SELinux is preventing /usr/bin/ssh from search access to
> > > /tmp/gsyncd-aux-ssh-
> > > tnwpw5tx/274d5d142b02f84644d658beaf86edae.sock
> > > 
> > > Note: Using 'semanage fcontext' doesn't work due to the fact that
> > > files created are inheriting the SELINUX context of the parent
> > > dir
> > > and you need to restorecon after every file creation by the geo-
> > > replication process.
> > > 
> > > - SELinux is preventing /usr/bin/rsync from search access on 
> > > .gfid/----0001
> > > 
> > > Obviously, those need fixing before anyone is able to use Geo-
> > > Replication with SELINUX enabled on the "master" volume nodes.
> > > 
> > > Should I open a bugzilla at bugzilla.redhat.com for the selinux
> > > policy?
> > > 
> > > Further details:
> > > [root@glustera ~]# cat /etc/centos-release
> > > CentOS Linux release 8.3.2011
> > > 
> > > [root@glustera ~]# rpm -qa | grep selinux | sort
> > > libselinux-2.9-4.el8_3.x86_64
> > > libselinux-utils-2.9-4.el8_3.x86_64
> > > python3-libselinux-2.9-4.el8_3.x86_64
> > > rpm-plugin-selinux-4.14.3-4.el8.x86_64
> > > selinux-policy-3.14.3-54.el8.noarch
> > > selinux-policy-devel-3.14.3-54.el8.noarch
> > > selinux-policy-doc-3.14.3-54.el8.noarch
> > > selinux-policy-targeted-3.14.3-54.el8.noarch
> > > 
> > > [root@glustera ~]# rpm -qa | grep 

Re: [Gluster-devel] [CentOS-devel] Multiple Geo Rep issues due to SELINUX on CentOS 8.3

2021-01-06 Thread Strahil Nikolov
Hi Srijan,

I just checked the gluster repo 'centos-gluster8' and it seems that there is no
gluster package containing 'selinux' in the name. The same is true for CentOS 7.
Maybe I have to address that to the CentOS Storage SIG ?


Best Regards,
Strahil Nikolov






On Wednesday, 6 January 2021 at 10:44:37 GMT+2, Strahil Nikolov via CentOS-devel
wrote:





Hi Srijan,

I will redeploy the scenario and I will check if the steps include that package.
Shouldn't glusterfs-selinux be a dependency?

Best Regards,
Strahil Nikolov






On Wednesday, 6 January 2021 at 07:29:27 GMT+2, Srijan Sivakumar
wrote:





Hi Strahil,

Selinux policies and rules have to be added for gluster processes to work as 
intended when selinux is in enforced mode. Could you confirm if you've 
installed the glusterfs-selinux package in the nodes ?
If not then you can check out the repo at 
https://github.com/gluster/glusterfs-selinux.

Regards,
Srijan

On Wed, Jan 6, 2021 at 2:15 AM Strahil Nikolov  wrote:
> Did anyone receive that e-mail ?
> Any hints ?
> 
> Best Regards,
> Strahil Nikolov
> 
> At 19:05 + on 30.12.2020 (Wed), Strahil Nikolov wrote:
>> Hello All,
>> 
>> I have been testing Geo Replication on Gluster v 8.3 on top of CentOS
>> 8.3.
>> It seems that everything works until SELINUX is added to the
>> equation.
>> 
>> So far I have identified several issues on the Master Volume's nodes:
>> - /usr/lib/ld-linux-x86-64.so.2 has a different SELINUX Context than
>> the target that it is pointing to. For details check 
>> https://bugzilla.redhat.com/show_bug.cgi?id=1911133
>> 
>> - SELINUX prevents /usr/bin/ssh from search access to
>> /var/lib/glusterd/geo-replication/secret.pem
>> 
>> - SELinux is preventing /usr/bin/ssh from search access to .ssh
>> 
>> - SELinux is preventing /usr/bin/ssh from search access to
>> /tmp/gsyncd-aux-ssh-tnwpw5tx/274d5d142b02f84644d658beaf86edae.sock
>> 
>> Note: Using 'semanage fcontext' doesn't work due to the fact that
>> files created are inheriting the SELINUX context of the parent dir
>> and you need to restorecon after every file creation by the geo-
>> replication process.
>> 
>> - SELinux is preventing /usr/bin/rsync from search access on 
>> .gfid/----0001
>> 
>> Obviously, those need fixing before anyone is able to use Geo-
>> Replication with SELINUX enabled on the "master" volume nodes.
>> 
>> Should I open a bugzilla at bugzilla.redhat.com for the selinux
>> policy?
>> 
>> Further details:
>> [root@glustera ~]# cat /etc/centos-release
>> CentOS Linux release 8.3.2011
>> 
>> [root@glustera ~]# rpm -qa | grep selinux | sort
>> libselinux-2.9-4.el8_3.x86_64
>> libselinux-utils-2.9-4.el8_3.x86_64
>> python3-libselinux-2.9-4.el8_3.x86_64
>> rpm-plugin-selinux-4.14.3-4.el8.x86_64
>> selinux-policy-3.14.3-54.el8.noarch
>> selinux-policy-devel-3.14.3-54.el8.noarch
>> selinux-policy-doc-3.14.3-54.el8.noarch
>> selinux-policy-targeted-3.14.3-54.el8.noarch
>> 
>> [root@glustera ~]# rpm -qa | grep gluster | sort
>> centos-release-gluster8-1.0-1.el8.noarch
>> glusterfs-8.3-1.el8.x86_64
>> glusterfs-cli-8.3-1.el8.x86_64
>> glusterfs-client-xlators-8.3-1.el8.x86_64
>> glusterfs-fuse-8.3-1.el8.x86_64
>> glusterfs-geo-replication-8.3-1.el8.x86_64
>> glusterfs-server-8.3-1.el8.x86_64
>> libglusterd0-8.3-1.el8.x86_64
>> libglusterfs0-8.3-1.el8.x86_64
>> python3-gluster-8.3-1.el8.x86_64
>> 
>> 
>> [root@glustera ~]# gluster volume info primary
>>  
>> Volume Name: primary
>> Type: Distributed-Replicate
>> Volume ID: 89903ca4-9817-4c6f-99de-5fb3e6fd10e7
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 5 x 3 = 15
>> Transport-type: tcp
>> Bricks:
>> Brick1: glustera:/bricks/brick-a1/brick
>> Brick2: glusterb:/bricks/brick-b1/brick
>> Brick3: glusterc:/bricks/brick-c1/brick
>> Brick4: glustera:/bricks/brick-a2/brick
>> Brick5: glusterb:/bricks/brick-b2/brick
>> Brick6: glusterc:/bricks/brick-c2/brick
>> Brick7: glustera:/bricks/brick-a3/brick
>> Brick8: glusterb:/bricks/brick-b3/brick
>> Brick9: glusterc:/bricks/brick-c3/brick
>> Brick10: glustera:/bricks/brick-a4/brick
>> Brick11: glusterb:/bricks/brick-b4/brick
>> Brick12: glusterc:/bricks/brick-c4/brick
>> Brick13: glustera:/bricks/brick-a5/brick
>> Brick14: glusterb:/bricks/brick-b5/brick
>> Brick15: glusterc:/bricks/brick-c5/brick
>> Options Reconfigured:
>> changelog.changelog: on
>> geo-replication.ignore-pid-check: on
>> geo-replication.indexin

Re: [Gluster-devel] Multiple Geo Rep issues due to SELINUX on CentOS 8.3

2021-01-06 Thread Strahil Nikolov
Hi Srijan,

I will redeploy the scenario and I will check if the steps include that package.
Shouldn't glusterfs-selinux be a dependency?

Best Regards,
Strahil Nikolov






On Wednesday, 6 January 2021 at 07:29:27 GMT+2, Srijan Sivakumar
wrote:





Hi Strahil,

Selinux policies and rules have to be added for gluster processes to work as 
intended when selinux is in enforced mode. Could you confirm if you've 
installed the glusterfs-selinux package in the nodes ?
If not then you can check out the repo at 
https://github.com/gluster/glusterfs-selinux.

Regards,
Srijan

On Wed, Jan 6, 2021 at 2:15 AM Strahil Nikolov  wrote:
> Did anyone receive that e-mail ?
> Any hints ?
> 
> Best Regards,
> Strahil Nikolov
> 
> At 19:05 + on 30.12.2020 (Wed), Strahil Nikolov wrote:
>> Hello All,
>> 
>> I have been testing Geo Replication on Gluster v 8.3 on top of CentOS
>> 8.3.
>> It seems that everything works until SELINUX is added to the
>> equation.
>> 
>> So far I have identified several issues on the Master Volume's nodes:
>> - /usr/lib/ld-linux-x86-64.so.2 has a different SELINUX Context than
>> the target that it is pointing to. For details check 
>> https://bugzilla.redhat.com/show_bug.cgi?id=1911133
>> 
>> - SELINUX prevents /usr/bin/ssh from search access to
>> /var/lib/glusterd/geo-replication/secret.pem
>> 
>> - SELinux is preventing /usr/bin/ssh from search access to .ssh
>> 
>> - SELinux is preventing /usr/bin/ssh from search access to
>> /tmp/gsyncd-aux-ssh-tnwpw5tx/274d5d142b02f84644d658beaf86edae.sock
>> 
>> Note: Using 'semanage fcontext' doesn't work due to the fact that
>> files created are inheriting the SELINUX context of the parent dir
>> and you need to restorecon after every file creation by the geo-
>> replication process.
>> 
>> - SELinux is preventing /usr/bin/rsync from search access on 
>> .gfid/----0001
>> 
>> Obviously, those need fixing before anyone is able to use Geo-
>> Replication with SELINUX enabled on the "master" volume nodes.
>> 
>> Should I open a bugzilla at bugzilla.redhat.com for the selinux
>> policy?
>> 
>> Further details:
>> [root@glustera ~]# cat /etc/centos-release
>> CentOS Linux release 8.3.2011
>> 
>> [root@glustera ~]# rpm -qa | grep selinux | sort
>> libselinux-2.9-4.el8_3.x86_64
>> libselinux-utils-2.9-4.el8_3.x86_64
>> python3-libselinux-2.9-4.el8_3.x86_64
>> rpm-plugin-selinux-4.14.3-4.el8.x86_64
>> selinux-policy-3.14.3-54.el8.noarch
>> selinux-policy-devel-3.14.3-54.el8.noarch
>> selinux-policy-doc-3.14.3-54.el8.noarch
>> selinux-policy-targeted-3.14.3-54.el8.noarch
>> 
>> [root@glustera ~]# rpm -qa | grep gluster | sort
>> centos-release-gluster8-1.0-1.el8.noarch
>> glusterfs-8.3-1.el8.x86_64
>> glusterfs-cli-8.3-1.el8.x86_64
>> glusterfs-client-xlators-8.3-1.el8.x86_64
>> glusterfs-fuse-8.3-1.el8.x86_64
>> glusterfs-geo-replication-8.3-1.el8.x86_64
>> glusterfs-server-8.3-1.el8.x86_64
>> libglusterd0-8.3-1.el8.x86_64
>> libglusterfs0-8.3-1.el8.x86_64
>> python3-gluster-8.3-1.el8.x86_64
>> 
>> 
>> [root@glustera ~]# gluster volume info primary
>>  
>> Volume Name: primary
>> Type: Distributed-Replicate
>> Volume ID: 89903ca4-9817-4c6f-99de-5fb3e6fd10e7
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 5 x 3 = 15
>> Transport-type: tcp
>> Bricks:
>> Brick1: glustera:/bricks/brick-a1/brick
>> Brick2: glusterb:/bricks/brick-b1/brick
>> Brick3: glusterc:/bricks/brick-c1/brick
>> Brick4: glustera:/bricks/brick-a2/brick
>> Brick5: glusterb:/bricks/brick-b2/brick
>> Brick6: glusterc:/bricks/brick-c2/brick
>> Brick7: glustera:/bricks/brick-a3/brick
>> Brick8: glusterb:/bricks/brick-b3/brick
>> Brick9: glusterc:/bricks/brick-c3/brick
>> Brick10: glustera:/bricks/brick-a4/brick
>> Brick11: glusterb:/bricks/brick-b4/brick
>> Brick12: glusterc:/bricks/brick-c4/brick
>> Brick13: glustera:/bricks/brick-a5/brick
>> Brick14: glusterb:/bricks/brick-b5/brick
>> Brick15: glusterc:/bricks/brick-c5/brick
>> Options Reconfigured:
>> changelog.changelog: on
>> geo-replication.ignore-pid-check: on
>> geo-replication.indexing: on
>> storage.fips-mode-rchecksum: on
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>> cluster.enable-shared-storage: enable
>> 
>> I'm attaching the audit log and sealert analysis from glustera (one
>> of the 3 nodes consisting of the 'master' volume).
>> 
>> 
>> Best Regards,
>> Strahil Nikolov
> 
>> 
> 
> ---
> 
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> 
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
> 
> 
> 
---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Multiple Geo Rep issues due to SELINUX on CentOS 8.3

2021-01-05 Thread Strahil Nikolov
Did anyone receive that e-mail ?
Any hints ?

Best Regards,
Strahil Nikolov

At 19:05 + on 30.12.2020 (Wed), Strahil Nikolov wrote:
> Hello All,
> 
> I have been testing Geo Replication on Gluster v 8.3 on top of CentOS
> 8.3.
> It seems that everything works until SELINUX is added to the
> equation.
> 
> So far I have identified several issues on the Master Volume's nodes:
> - /usr/lib/ld-linux-x86-64.so.2 has a different SELINUX Context than
> the target that it is pointing to. For details check 
> https://bugzilla.redhat.com/show_bug.cgi?id=1911133
> 
> - SELINUX prevents /usr/bin/ssh from search access to
> /var/lib/glusterd/geo-replication/secret.pem
> 
> - SELinux is preventing /usr/bin/ssh from search access to .ssh
> 
> - SELinux is preventing /usr/bin/ssh from search access to
> /tmp/gsyncd-aux-ssh-tnwpw5tx/274d5d142b02f84644d658beaf86edae.sock
> 
> Note: Using 'semanage fcontext' doesn't work due to the fact that
> files created are inheriting the SELINUX context of the parent dir
> and you need to restorecon after every file creation by the geo-
> replication process.
> 
> - SELinux is preventing /usr/bin/rsync from search access on 
> .gfid/----0001
> 
> Obviously, those need fixing before anyone is able to use Geo-
> Replication with SELINUX enabled on the "master" volume nodes.
> 
> Should I open a bugzilla at bugzilla.redhat.com for the selinux
> policy?
> 
> Further details:
> [root@glustera ~]# cat /etc/centos-release
> CentOS Linux release 8.3.2011
> 
> [root@glustera ~]# rpm -qa | grep selinux | sort
> libselinux-2.9-4.el8_3.x86_64
> libselinux-utils-2.9-4.el8_3.x86_64
> python3-libselinux-2.9-4.el8_3.x86_64
> rpm-plugin-selinux-4.14.3-4.el8.x86_64
> selinux-policy-3.14.3-54.el8.noarch
> selinux-policy-devel-3.14.3-54.el8.noarch
> selinux-policy-doc-3.14.3-54.el8.noarch
> selinux-policy-targeted-3.14.3-54.el8.noarch
> 
> [root@glustera ~]# rpm -qa | grep gluster | sort
> centos-release-gluster8-1.0-1.el8.noarch
> glusterfs-8.3-1.el8.x86_64
> glusterfs-cli-8.3-1.el8.x86_64
> glusterfs-client-xlators-8.3-1.el8.x86_64
> glusterfs-fuse-8.3-1.el8.x86_64
> glusterfs-geo-replication-8.3-1.el8.x86_64
> glusterfs-server-8.3-1.el8.x86_64
> libglusterd0-8.3-1.el8.x86_64
> libglusterfs0-8.3-1.el8.x86_64
> python3-gluster-8.3-1.el8.x86_64
> 
> 
> [root@glustera ~]# gluster volume info primary
>  
> Volume Name: primary
> Type: Distributed-Replicate
> Volume ID: 89903ca4-9817-4c6f-99de-5fb3e6fd10e7
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 5 x 3 = 15
> Transport-type: tcp
> Bricks:
> Brick1: glustera:/bricks/brick-a1/brick
> Brick2: glusterb:/bricks/brick-b1/brick
> Brick3: glusterc:/bricks/brick-c1/brick
> Brick4: glustera:/bricks/brick-a2/brick
> Brick5: glusterb:/bricks/brick-b2/brick
> Brick6: glusterc:/bricks/brick-c2/brick
> Brick7: glustera:/bricks/brick-a3/brick
> Brick8: glusterb:/bricks/brick-b3/brick
> Brick9: glusterc:/bricks/brick-c3/brick
> Brick10: glustera:/bricks/brick-a4/brick
> Brick11: glusterb:/bricks/brick-b4/brick
> Brick12: glusterc:/bricks/brick-c4/brick
> Brick13: glustera:/bricks/brick-a5/brick
> Brick14: glusterb:/bricks/brick-b5/brick
> Brick15: glusterc:/bricks/brick-c5/brick
> Options Reconfigured:
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> cluster.enable-shared-storage: enable
> 
> I'm attaching the audit log and sealert analysis from glustera (one
> of the 3 nodes consisting of the 'master' volume).
> 
> 
> Best Regards,
> Strahil Nikolov
> 

---

Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Replica 3 volume with forced quorum 1 fault tolerance and recovery

2020-12-01 Thread Strahil Nikolov
Replica 3 with quorum 1?
This is not good, and I doubt anyone will help you with this. The idea of replica 3
volumes is to tolerate the loss of 1 node: once a second one is dead, only 1 brick
would accept writes.

Imagine the situation where 2 bricks are down and data is written to brick 3. What
happens when bricks 1 and 2 come back up - how is gluster going to decide where to
heal from? 2 is more than 1, so the third node should delete the file rather than
the opposite.

What are you trying to achieve with quorum 1?
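
For reference, the usual way to get the intended replica-3 behaviour is to let
client quorum follow the majority instead of pinning it to 1 (sketch, using the
'test0' volume shown below):

gluster volume set test0 cluster.quorum-type auto
gluster volume reset test0 cluster.quorum-count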


Best Regards,
Strahil Nikolov






On Tuesday, 1 December 2020 at 14:09:32 GMT+2, Dmitry Antipov
wrote:





It seems that consistency of replica 3 volume with quorum forced to 1 becomes
broken after a few forced volume restarts initiated after 2 brick failures.
At least it breaks GFAPI clients, and even volume restart doesn't help.

Volume setup is:

Volume Name: test0
Type: Replicate
Volume ID: 919352fb-15d8-49cb-b94c-c106ac68f072
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.1.112:/glusterfs/test0-000
Brick2: 192.168.1.112:/glusterfs/test0-001
Brick3: 192.168.1.112:/glusterfs/test0-002
Options Reconfigured:
cluster.quorum-count: 1
cluster.quorum-type: fixed
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Client is fio with the following options:

[global]
name=write
filename=testfile
ioengine=gfapi_async
volume=test0
brick=localhost
create_on_open=1
rw=randwrite
direct=1
numjobs=1
time_based=1
runtime=600

[test-4-kbytes]
bs=4k
size=1G
iodepth=128

How to reproduce:

0) start the volume;
1) run fio;
2) run 'gluster volume status', select 2 arbitrary brick processes
    and kill them;
3) make sure fio is OK;
4) wait a few seconds, then issue 'gluster volume start [VOL] force'
    to restart bricks, and finally issue 'gluster volume status' again
    to check whether all bricks are running;
5) restart from 2).

This is likely to work for a few times but, sooner or later, it breaks
at 3) and fio detects an I/O error, most probably EIO or ENOTCONN. Starting
from this point, killing and restarting fio yields in error in glfs_creat(),
and even the manual volume restart doesn't help.
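
Roughly, steps 2)-5) above can be scripted like this (sketch; it assumes the brick
processes carry the volume name on their command line, which glusterfsd normally
does):

VOL=test0
while true; do
    mapfile -t PIDS < <(pgrep -f "glusterfsd.*${VOL}")
    [ "${#PIDS[@]}" -ge 2 ] || break
    kill "${PIDS[0]}" "${PIDS[1]}"        # step 2: take two bricks down
    sleep 10                              # step 3: give fio time to notice
    gluster volume start "$VOL" force     # step 4: bring the bricks back
    gluster volume status "$VOL"
    sleep 10
done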

NOTE: as of 7914c6147adaf3ef32804519ced850168fff1711, fio's gfapi_async
engine is still incomplete and _silently ignores I/O errors_. Currently
I'm using the following tweak to detect and report them (YMMV, consider
experimental):

diff --git a/engines/glusterfs_async.c b/engines/glusterfs_async.c
index 0392ad6e..27ebb6f1 100644
--- a/engines/glusterfs_async.c
+++ b/engines/glusterfs_async.c
@@ -7,6 +7,7 @@
  #include "gfapi.h"
  #define NOT_YET 1
  struct fio_gf_iou {
+    struct thread_data *td;
      struct io_u *io_u;
      int io_complete;
  };
@@ -80,6 +81,7 @@ static int fio_gf_io_u_init(struct thread_data *td, struct 
io_u *io_u)
      }
      io->io_complete = 0;
      io->io_u = io_u;
+    io->td = td;
      io_u->engine_data = io;
      return 0;
  }
@@ -95,7 +97,20 @@ static void gf_async_cb(glfs_fd_t * fd, ssize_t ret, void 
*data)
      struct fio_gf_iou *iou = io_u->engine_data;

      dprint(FD_IO, "%s ret %zd\n", __FUNCTION__, ret);
-    iou->io_complete = 1;
+    if (ret != io_u->xfer_buflen) {
+        if (ret >= 0) {
+            io_u->resid = io_u->xfer_buflen - ret;
+            io_u->error = 0;
+            iou->io_complete = 1;
+        } else
+            io_u->error = errno;
+    }
+
+    if (io_u->error) {
+        log_err("IO failed (%s).\n", strerror(io_u->error));
+        td_verror(iou->td, io_u->error, "xfer");
+    } else
+        iou->io_complete = 1;
  }

  static enum fio_q_status fio_gf_async_queue(struct thread_data fio_unused * 
td,

--

Dmitry




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
gluster-us...@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] NFS Ganesha (Storage SIG) node fails to boot after fencing

2020-11-23 Thread Strahil Nikolov
Hi Kaleb,

I have opened https://github.com/gluster/glusterfs/issues/1830 .

I was not sure where to open it, as I saw that the
'nfs-ganesha-selinux-3.3-2.el8.noarch' rpm is provided by the Storage SIG repo
'centos-nfs-ganesha3'.

I'm adding Gluster-devel here, as the current version of NFS Ganesha will
not be usable without a proper workaround in the SELINUX policy.


Best Regards,
Strahil Nikolov






On Sunday, 22 November 2020 at 22:47:42 GMT+2, Kaleb Keithley
wrote:







On Sat, Nov 21, 2020 at 4:48 PM Strahil Nikolov  wrote:
> Hi All,
> 
> The reason seems to be the link "/var/lib/nfs" pointing to the shared
> storage. When the cluster software is stopped gracefully, no issues are
> observed, as the nfs_setup resource restores /var/lib/nfs.
> 
> Should I open a bug to bugzilla.redhat.com or it's specific to CentOS only ?
> 

Not sure what you'd file it under.

I'd suggest opening an issue in github at 
https://github.com/gluster/glusterfs/issues

--

Kaleb

___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] NFS Ganesha fails to export a volume

2020-11-20 Thread Strahil Nikolov
Hm...

In that case, we have only 1 real problem left -> ocf:heartbeat:portblock
supports only IPTABLES.
In my EL8 tests, NFS-Ganesha works if firewalld is disabled and I install
iptables & iptables-services, yet this is strongly discouraged by Red Hat.
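
For the record, the (discouraged) EL8 workaround amounts to:

dnf install -y iptables-services
systemctl disable --now firewalld
systemctl enable --now iptables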

I have opened a feature request at
https://github.com/ClusterLabs/resource-agents/issues/1577 , but I believe that
we also need a bugzilla entry on bugzilla.redhat.com. What do you think?

I do not know of another distro that uses NFTABLES by default.


Any advice on that?

Best Regards,
Strahil Nikolov







On Friday, 20 November 2020 at 04:55:26 GMT+2, Shwetha Acharya
wrote:





Thanks for bringing this to our attention @Strahil

[1] and [2] were worked on separately, handling different issues in the same span
of time. [2] was supposed to be merged before [1], but it ended up the other way around.

[1] https://review.gluster.org/#/c/glusterfs/+/23039/
[2] https://review.gluster.org/#/c/glusterfs/+/22636/

Regards,
Shwetha

On Wed, Nov 18, 2020 at 12:41 PM Ravishankar N  wrote:
> 
> On 18/11/20 12:17 pm, Strahil Nikolov wrote:
>> Nope, it's a deeper s**t.
>> I had to edit the ".spec.in" file so it has Source0 point to local tar.gz.
>> Then I edited the Requires in both ".spec" & ".spec.in" and also I had to 
>> remove an obsolete stanza in the glusterfs section.
>>
>> In the end, I got the source - extracted, copied the spec & spec.in , and 
>> then tar.gz-ed again and put it into the dir.
>>
>> Only then the rpms were properly built.
>>
>> The proposed patch is fixing the issue.
> Thanks for confirming!
>>
>> Why do we have line 285 in 
>> https://raw.githubusercontent.com/gluster/glusterfs/devel/glusterfs.spec.in ?
>>
>> I guess I need to open 2 issues for the glusterfs:
>> - that obsolete stanza is useless
> 
> Using git blame points me to 
> https://github.com/gluster/glusterfs/commit/f9118c2c9389e0793951388c2d69ce0350bb9318.
>  
> Adding Shwetha to confirm if the change was intended.
> 
> -Ravi
> 
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>> On Tuesday, 17 November 2020 at 14:16:36 GMT+2, Ravishankar N
>> wrote:
>>
>>
>>
>>
>>
>> Hi Strahil,
>>
>> I would have imagined editing the 'Requires' section in
>> glusterfs.spec.in would have sufficed. Do you need rpms though? A source
>> install is not enough?
>>
>> Regards,
>> Ravi
>>
>> On 17/11/20 5:32 pm, Strahil Nikolov wrote:
>>> Hi Ravi,
>>>
>>>
>>> Any idea how to make the glusterfs-ganesha.x86_64 require resource-agents 
>>> >= 4.1.0 (instead of 4.2.0) ?
>>> I've replaced every occurrence I found and still it tries to grab 
>>> resource-agents 4.2 (which is not available on EL8).
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Monday, 16 November 2020 at 13:15:54 GMT+2, Ravishankar
>>> N wrote:
>>>
>>>
>>>
>>>
>>>
>>>
>>> I am surprised too that it wasn't caught earlier.
>>>
>>>
>>> Steps:
>>>
>>> 1. Clone the gluster repo
>>>
>>> 2. Compile  the 
>>> sourcehttps://docs.gluster.org/en/latest/Developer-guide/Building-GlusterFS/
>>>
>>> 3. Make the changes (in a different branch if you prefer), compile again 
>>> and install
>>>
>>> 4.  Test it out:
>>>
>>> [root@linuxpad glusterfs]#  gluster v create testvol  
>>> 127.0.0.2:/home/ravi/bricks/brick{1..2} force
>>> volume create: testvol: success: please start the volume to access data
>>> [root@linuxpad glusterfs]#
>>> [root@linuxpad glusterfs]# gluster v start testvol
>>> volume start: testvol: success
>>> [root@linuxpad glusterfs]#
>>> [root@linuxpad glusterfs]# gluster v set testvol ganesha.enable on
>>> volume set: failed: The option nfs-ganesha should be enabled before setting 
>>> ganesha.enable.
>>> [root@linuxpad glusterfs]#
>>>      
>>>
>>> I just tried the change and it looks like some new error shows up. Not too 
>>> familiar with these settings; I will need to debug further.
>>>
>>> Thanks,
>>>
>>> Ravi
>>>
>>>
>>> On 16/11/20 4:05 pm, Strahil Nikolov wrote:
>>>
>>>
>>>>      I can try to help with the testing (I'm quite new to that).
>>>> Can someone share documentation of that proc

Re: [Gluster-devel] NFS Ganesha fails to export a volume

2020-11-18 Thread Strahil Nikolov
Nope, it's a deeper s**t.
I had to edit the ".spec.in" file so that Source0 points to a local tar.gz.
Then I edited the Requires in both ".spec" & ".spec.in", and I also had to remove
an obsolete stanza in the glusterfs section.

In the end, I took the source, extracted it, copied in the spec & spec.in, and then
tar.gz-ed it again and put it into the dir.

Only then the rpms were properly built.
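
Roughly, the rebuild went like this (sketch; the version and paths are
illustrative):

tar -xzf glusterfs-8.3.tar.gz
cp glusterfs.spec glusterfs.spec.in glusterfs-8.3/          # the edited copies
tar -czf ~/rpmbuild/SOURCES/glusterfs-8.3.tar.gz glusterfs-8.3
rpmbuild -ba glusterfs.spec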

The proposed patch is fixing the issue.


Why do we have line 285 in 
https://raw.githubusercontent.com/gluster/glusterfs/devel/glusterfs.spec.in ?

I guess I need to open 2 issues for the glusterfs:
- that obsolete stanza is useless


Best Regards,
Strahil Nikolov



On Tuesday, 17 November 2020 at 14:16:36 GMT+2, Ravishankar N
wrote:





Hi Strahil,

I would have imagined editing the 'Requires' section in 
glusterfs.spec.in would have sufficed. Do you need rpms though? A source 
install is not enough?

Regards,
Ravi

On 17/11/20 5:32 pm, Strahil Nikolov wrote:
> Hi Ravi,
>
>
> Any idea how to make the glusterfs-ganesha.x86_64 require resource-agents >= 
> 4.1.0 (instead of 4.2.0) ?
> I've replaced every occurrence I found and still it tries to grab 
> resource-agents 4.2 (which is not available on EL8).
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Monday, 16 November 2020 at 13:15:54 GMT+2, Ravishankar
> N wrote:
>
>
>
>
>
>
> I am surprised too that it wasn't caught earlier.
>
>
> Steps:
>
> 1. Clone the gluster repo
>
> 2. Compile  the 
> sourcehttps://docs.gluster.org/en/latest/Developer-guide/Building-GlusterFS/
>
> 3. Make the changes (in a different branch if you prefer), compile again and 
> install
>
> 4.  Test it out:
>
> [root@linuxpad glusterfs]#  gluster v create testvol  
> 127.0.0.2:/home/ravi/bricks/brick{1..2} force
> volume create: testvol: success: please start the volume to access data
> [root@linuxpad glusterfs]#
> [root@linuxpad glusterfs]# gluster v start testvol
> volume start: testvol: success
> [root@linuxpad glusterfs]#
> [root@linuxpad glusterfs]# gluster v set testvol ganesha.enable on
> volume set: failed: The option nfs-ganesha should be enabled before setting 
> ganesha.enable.
> [root@linuxpad glusterfs]#
>    
>
> I just tried the change and it looks like some new error shows up. Not too 
> familiar with these settings; I will need to debug further.
>
> Thanks,
>
> Ravi
>
>
> On 16/11/20 4:05 pm, Strahil Nikolov wrote:
>
>
>>    I can try to help with the testing (I'm quite new to that).
>> Can someone share documentation of that process ?
>>
>> yet we have another problem -> ganesha is deployed with 
>> ocf:heartbeat:portblock which supports only IPTABLES, while EL8 uses 
>> NFTABLES ...
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Monday, 16 November 2020 at 10:47:43 GMT+2, Yaniv
>> Kaul wrote:
>>
>>
>>
>>
>>
>>
>>
>> On Mon, Nov 16, 2020 at 10:26 AM Ravishankar N  
>> wrote:
>>
>>>    On 15/11/20 8:24 pm, Strahil Nikolov wrote:
>>>
>>>>    Hello All,
>>>>
>>>> did anyone get a chance to look 
>>>> athttps://github.com/gluster/glusterfs/issues/1778  ?
>>>>
>>> A look at
>>> https://review.gluster.org/#/c/glusterfs/+/23648/4/xlators/mgmt/glusterd/src/glusterd-op-sm.c@1117
>>>   
>>> seems to indicate this could be due to a typo error. Do you have a
>>> source install where you can apply this simple diff and see if it fixes
>>> the issue?
>>>
>> I think you are right - I seem to have introduced it as part 
>> ofhttps://github.com/gluster/glusterfs/commit/e081ac683b6a5bda54891318fa1e3ffac981e553
>>   - my bad.
>>
>> However, it was merged ~1 year ago, and no one has complained thus far... :-/
>> 1. Is no one using NFS Ganesha?
>> 2. We are lacking tests for NFS Ganesha - code coverage indicates this path 
>> is not covered.
>>
>> Y.
>>
>>
>>>      
>>> diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
>>> b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
>>> index 558f04fb2..d7bf96adf 100644
>>> --- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
>>> +++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
>>> @@ -1177,7 +1177,7 @@ glusterd_op_stage_set_volume(dict_t *dict, char
>>> **op_errstr)
>>>    }
>>>    } else if (len_strcmp(key, keylen, "ganesha.enable")) {
>>>    key_matched = _gf_true;
>>> 

Re: [Gluster-devel] NFS Ganesha fails to export a volume

2020-11-17 Thread Strahil Nikolov
Hi Ravi,


Any idea how to make the glusterfs-ganesha.x86_64 require resource-agents >= 
4.1.0 (instead of 4.2.0) ?
I've replaced every occurrence I found and still it tries to grab 
resource-agents 4.2 (which is not available on EL8).

Best Regards,
Strahil Nikolov






On Monday, 16 November 2020 at 13:15:54 GMT+2, Ravishankar N
wrote:






I am surprised too that it wasn't caught earlier. 


Steps:

1. Clone the gluster repo

2. Compile  the source 
https://docs.gluster.org/en/latest/Developer-guide/Building-GlusterFS/

3. Make the changes (in a different branch if you prefer), compile again and 
install

4.  Test it out:

[root@linuxpad glusterfs]#  gluster v create testvol  
127.0.0.2:/home/ravi/bricks/brick{1..2} force
volume create: testvol: success: please start the volume to access data
[root@linuxpad glusterfs]#
[root@linuxpad glusterfs]# gluster v start testvol
volume start: testvol: success
[root@linuxpad glusterfs]#
[root@linuxpad glusterfs]# gluster v set testvol ganesha.enable on
volume set: failed: The option nfs-ganesha should be enabled before setting 
ganesha.enable.
[root@linuxpad glusterfs]# 
  

I just tried the change and it looks like some new error shows up. Not too 
familiar with these settings; I will need to debug further.

Thanks,

Ravi


On 16/11/20 4:05 pm, Strahil Nikolov wrote:


>  I can try to help with the testing (I'm quite new to that).
> Can someone share documentation of that process ?
> 
> yet we have another problem -> ganesha is deployed with 
> ocf:heartbeat:portblock which supports only IPTABLES, while EL8 uses NFTABLES 
> ...
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Monday, 16 November 2020 at 10:47:43 GMT+2, Yaniv Kaul
> wrote:
> 
> 
> 
> 
> 
> 
> 
> On Mon, Nov 16, 2020 at 10:26 AM Ravishankar N  wrote:
> 
>>  On 15/11/20 8:24 pm, Strahil Nikolov wrote:
>> 
>>>  Hello All,
>>> 
>>> did anyone get a chance to look at 
>>> https://github.com/gluster/glusterfs/issues/1778 ?
>>> 
>> A look at 
>> https://review.gluster.org/#/c/glusterfs/+/23648/4/xlators/mgmt/glusterd/src/glusterd-op-sm.c@1117
>>  
>> seems to indicate this could be due to a typo error. Do you have a 
>> source install where you can apply this simple diff and see if it fixes 
>> the issue?
>> 
> I think you are right - I seem to have introduced it as part of 
> https://github.com/gluster/glusterfs/commit/e081ac683b6a5bda54891318fa1e3ffac981e553
>  - my bad.
> 
> However, it was merged ~1 year ago, and no one has complained thus far... :-/
> 1. Is no one using NFS Ganesha? 
> 2. We are lacking tests for NFS Ganesha - code coverage indicates this path 
> is not covered.
> 
> Y.
> 
> 
>>
>> diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c 
>> b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
>> index 558f04fb2..d7bf96adf 100644
>> --- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
>> +++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
>> @@ -1177,7 +1177,7 @@ glusterd_op_stage_set_volume(dict_t *dict, char 
>> **op_errstr)
>>   }
>>   } else if (len_strcmp(key, keylen, "ganesha.enable")) {
>>   key_matched = _gf_true;
>> -    if (!strcmp(value, "off") == 0) {
>> +    if (strcmp(value, "off") == 0) {
>>       ret = ganesha_manage_export(dict, "off", _gf_true, 
>> op_errstr);
>>   if (ret)
>>   goto out;
>> 
>> Thanks,
>> 
>> Ravi
>> 
>>>  It's really strange that NFS Ganesha has ever passed the tests.
>>> How do we test NFS Ganesha exporting ?
>>> 
>>> Best Regards,
>>> Strahil Nikolov
>>> ___
>>> 
>>> Community Meeting Calendar:
>>> 
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://bluejeans.com/441850968
>>> 
>>> 
>>> 
>>> 
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>> 
>> 
>>>  
>> ___
>> 
>> Community Meeting Calendar:
>> 
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>> 
>> 
>> 
>> 
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>> 
>> 
>> 
> 

___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] NFS Ganesha fails to export a volume

2020-11-16 Thread Strahil Nikolov
I was thinking about an automatic way for testing ...

I will use my notes to rebuild a fresh cluster on EL8 and I will give feedback
on whether the patch fixes it.


Thanks all for the assistance.

Best Regards,
Strahil Nikolov






On Monday, 16 November 2020 at 13:15:54 GMT+2, Ravishankar N
wrote:






I am surprised too that it wasn't caught earlier. 


Steps:

1. Clone the gluster repo

2. Compile  the source 
https://docs.gluster.org/en/latest/Developer-guide/Building-GlusterFS/

3. Make the changes (in a different branch if you prefer), compile again and 
install

4.  Test it out:

[root@linuxpad glusterfs]#  gluster v create testvol  
127.0.0.2:/home/ravi/bricks/brick{1..2} force
volume create: testvol: success: please start the volume to access data
[root@linuxpad glusterfs]#
[root@linuxpad glusterfs]# gluster v start testvol
volume start: testvol: success
[root@linuxpad glusterfs]#
[root@linuxpad glusterfs]# gluster v set testvol ganesha.enable on
volume set: failed: The option nfs-ganesha should be enabled before setting 
ganesha.enable.
[root@linuxpad glusterfs]# 
  

I just tried the change and it looks like some new error shows up. Not too 
familiar with these settings; I will need to debug further.

Thanks,

Ravi


On 16/11/20 4:05 pm, Strahil Nikolov wrote:


>  I can try to help with the testing (I'm quite new to that).
> Can someone share documentation of that process ?
> 
> yet we have another problem -> ganesha is deployed with 
> ocf:heartbeat:portblock which supports only IPTABLES, while EL8 uses NFTABLES 
> ...
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Monday, 16 November 2020 at 10:47:43 GMT+2, Yaniv Kaul
> wrote:
> 
> 
> 
> 
> 
> 
> 
> On Mon, Nov 16, 2020 at 10:26 AM Ravishankar N  wrote:
> 
>>  On 15/11/20 8:24 pm, Strahil Nikolov wrote:
>> 
>>>  Hello All,
>>> 
>>> did anyone get a chance to look at 
>>> https://github.com/gluster/glusterfs/issues/1778 ?
>>> 
>> A look at 
>> https://review.gluster.org/#/c/glusterfs/+/23648/4/xlators/mgmt/glusterd/src/glusterd-op-sm.c@1117
>>  
>> seems to indicate this could be due to a typo error. Do you have a 
>> source install where you can apply this simple diff and see if it fixes 
>> the issue?
>> 
> I think you are right - I seem to have introduced it as part of 
> https://github.com/gluster/glusterfs/commit/e081ac683b6a5bda54891318fa1e3ffac981e553
>  - my bad.
> 
> However, it was merged ~1 year ago, and no one has complained thus far... :-/
> 1. Is no one using NFS Ganesha? 
> 2. We are lacking tests for NFS Ganesha - code coverage indicates this path 
> is not covered.
> 
> Y.
> 
> 
>>
>> diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c 
>> b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
>> index 558f04fb2..d7bf96adf 100644
>> --- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
>> +++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
>> @@ -1177,7 +1177,7 @@ glusterd_op_stage_set_volume(dict_t *dict, char 
>> **op_errstr)
>>   }
>>   } else if (len_strcmp(key, keylen, "ganesha.enable")) {
>>   key_matched = _gf_true;
>> -    if (!strcmp(value, "off") == 0) {
>> +    if (strcmp(value, "off") == 0) {
>>   ret = ganesha_manage_export(dict, "off", _gf_true, 
>> op_errstr);
>>   if (ret)
>>   goto out;
>> 
>> Thanks,
>> 
>> Ravi
>> 
>>>  It's really strange that NFS Ganesha has ever passed the tests.
>>> How do we test NFS Ganesha exporting ?
>>> 
>>> Best Regards,
>>> Strahil Nikolov
>>> ___
>>> 
>>> Community Meeting Calendar:
>>> 
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://bluejeans.com/441850968
>>> 
>>> 
>>> 
>>> 
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>> 
>> 
>>>  
>> ___
>> 
>> Community Meeting Calendar:
>> 
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>> 
>> 
>> 
>> 
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>> 
>> 
>> 
> 

___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Docs on gluster parameters

2020-11-16 Thread Strahil Nikolov
Hi Ravi,

I can propose a pull request if someone gives me a general idea of each setting.
Do we have comments in the source code that can be used as a description ?
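
For a starting point, the CLI itself can already dump a short description for
most options (sketch; 'myvol' is a placeholder volume name):

gluster volume set help       # every settable option with default and description
gluster volume get myvol all  # effective values, defaults included, for one volume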

Best Regards,
Strahil Nikolov






On Monday, 16 November 2020 at 10:36:09 GMT+2, Ravishankar N
wrote:









On 14/11/20 3:23 am, Mahdi Adnan wrote:


>  
Hi, 



Definitely, the Gluster docs are missing quite a bit regarding the available
options that can be used in the volumes.

Not only that, there are some options that might corrupt data and do not have
proper documentation. For example, disabling Sharding will lead to data
corruption and I think it does not give any warning ("maybe I'm wrong regarding
the warning tho"), and I cannot find any details about it in the official
Gluster docs. The same goes for multiple clients accessing a volume with
Sharding enabled.

also, in some cases, write-behind and stat-prefetch can lead to data 
inconsistency if multiple clients accessing the same data.

I think having solid "Official" Gluster docs with all of these details is 
essential to have stable Gluster deployments.




On Thu, Nov 12, 2020 at 7:34 PM Eli V  wrote:


> I think docs.gluster.org needs a section on the available parameters,
> especially considering how important some of them can be. For example
> a google for performance.parallel-readdir, or
> features.cache-invalidation only seems to turn up some hits in the
> release notes on docs.gluster.org. I wouldn't expect a new user to have
> to go read the release notes for all previous releases to understand
> the importance of these parameters, or what paremeters even exist.
> 





https://docs.gluster.org/en/latest/  can be updated by sending pull requests to 
https://github.com/gluster/glusterdocs. It would be great if you can send some 
patches regarding the changes you would like to see. It doesn't have to be 
perfect. I can help in getting them reviewed and merged.
Thanks,
Ravi





>  
>  
>>  
>> 
>> 
>> 
>> Community Meeting Calendar:
>> 
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>> 
> 
> 
> 
> 
> 
> -- 
> 
>  
> Respectfully 
> Mahdi
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Community Meeting Calendar:
> 
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 





Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
gluster-us...@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] NFS Ganesha fails to export a volume

2020-11-16 Thread Strahil Nikolov
I can try to help with the testing (I'm quite new to that).
Can someone share documentation of that process ?

Yet we have another problem -> ganesha is deployed with ocf:heartbeat:portblock 
which supports only IPTABLES, while EL8 uses NFTABLES ...

Best Regards,
Strahil Nikolov






On Monday, 16 November 2020 at 10:47:43 GMT+2, Yaniv Kaul
wrote:







On Mon, Nov 16, 2020 at 10:26 AM Ravishankar N  wrote:
> 
> On 15/11/20 8:24 pm, Strahil Nikolov wrote:
>> Hello All,
>>
>> did anyone get a chance to look at 
>> https://github.com/gluster/glusterfs/issues/1778 ?
> 
> A look at 
> https://review.gluster.org/#/c/glusterfs/+/23648/4/xlators/mgmt/glusterd/src/glusterd-op-sm.c@1117
>  
> seems to indicate this could be due to a typo error. Do you have a 
> source install where you can apply this simple diff and see if it fixes 
> the issue?

I think you are right - I seem to have introduced it as part of 
https://github.com/gluster/glusterfs/commit/e081ac683b6a5bda54891318fa1e3ffac981e553
 - my bad.

However, it was merged ~1 year ago, and no one has complained thus far... :-/
1. Is no one using NFS Ganesha? 
2. We are lacking tests for NFS Ganesha - code coverage indicates this path is 
not covered.

Y.

>  
> diff --git a/xlators/mgmt/glusterd/src/glusterd-op-sm.c 
> b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
> index 558f04fb2..d7bf96adf 100644
> --- a/xlators/mgmt/glusterd/src/glusterd-op-sm.c
> +++ b/xlators/mgmt/glusterd/src/glusterd-op-sm.c
> @@ -1177,7 +1177,7 @@ glusterd_op_stage_set_volume(dict_t *dict, char 
> **op_errstr)
>   }
>   } else if (len_strcmp(key, keylen, "ganesha.enable")) {
>   key_matched = _gf_true;
> -    if (!strcmp(value, "off") == 0) {
> +    if (strcmp(value, "off") == 0) {
>   ret = ganesha_manage_export(dict, "off", _gf_true, 
> op_errstr);
>   if (ret)
>   goto out;
> 
> Thanks,
> 
> Ravi
>>
>> It's really strange that NFS Ganesha ever passed the tests.
>> How do we test NFS Ganesha exporting ?
>>
>> Best Regards,
>> Strahil Nikolov
>> ___
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>>
>>
>>
>>
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
> 
>>
> 
> ___
> 
> Community Meeting Calendar:
> 
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
> 
> 
> 
> 
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
> 
> 
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] NFS Ganesha fails to export a volume

2020-11-15 Thread Strahil Nikolov
Hello All,

did anyone get a chance to look at 
https://github.com/gluster/glusterfs/issues/1778 ?

It's really strange that NFS Ganesha ever passed the tests.
How do we test NFS Ganesha exporting ?

Best Regards,
Strahil Nikolov
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] NFS Ganesha on EL8 needs redesign

2020-11-09 Thread Strahil Nikolov
Hello All,

I have been playing around with Ganesha on CentOS 8 and I have noticed some 
issues so far. Could you assist me in pinpointing the correct location for 
tracking the issues:
- 'gluster nfs-ganesha enable' builds pacemaker groups that use
'ocf:heartbeat:portblock', but that resource relies on IPTABLES, while EL8 uses
NFTABLES.
- nfs-ganesha.service starts the process via "/bin/bash -c", which SELinux blocks.
Custom SELinux policies were needed (a sketch of the workaround follows below).
- the glusterfs-ganesha.x86_64 rpm sets the boolean via 'semanage boolean -m
ganesha_use_fusefs --on', but something seems to disable it. Setting
'setsebool -P ganesha_use_fusefs 1' manually is necessary.
- rpcbind is blocked by SELinux; I had to enable 'rpcd_use_fusefs'.
- nfs-ganesha.service starts before the shared volume is mounted locally; a
dependency like this one is needed:
[root@glustera ~]# cat /etc/systemd/system/nfs-ganesha.service.d/01-debug.conf  

[Unit]
After=run-gluster-shared_storage.mount
Requires=run-gluster-shared_storage.mount
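
A generic sketch of the SELinux workaround mentioned above (the module name is
arbitrary and the generated rules depend on the AVC denials logged on your
system, so treat this as an outline rather than a recipe):

# Re-enable the booleans mentioned above:
setsebool -P ganesha_use_fusefs 1
setsebool -P rpcd_use_fusefs 1
# Build and load a local policy module from the AVC denials logged for ganesha.nfsd:
ausearch -m AVC -ts recent -c ganesha.nfsd | audit2allow -M ganesha-local
semodule -i ganesha-local.pp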

I will rebuild my cluster to find out if I missed something.
Any feedback is appreciated.

P.S.: If anyone is interested, I can share the deployment procedure.


Best Regards,
Strahil Nikolov
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Gluster block storage status

2020-11-05 Thread Strahil Nikolov
Seems still alive:
https://marc.info/?l=gluster-users&m=160147697907996&w=3

Best Regards,
Strahil Nikolov






On Thursday, 5 November 2020 at 16:24:10 GMT+2, Alex K wrote:





Hi friends, 

I am using gluster for some years though only as a file storage. 
I was wandering what is the status of block storage through gluster.

I see the following project: 

https://github.com/gluster/gluster-block

Is this still receiving updates and could be used to production or is it 
discontinued?

Thanx, 
Alex




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
gluster-us...@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 7.7

2020-07-30 Thread Strahil Nikolov
In CentOS7 , the packages were available several hours before the announcement.

Best Regards,
Strahil Nikolov

On 30 July 2020 at 22:32:24 GMT+03:00, Artem Russakovskii wrote:
>Hi,
>
>https://download.opensuse.org/repositories/home:/glusterfs:/Leap15.1-7/openSUSE_Leap_15.1/x86_64/
>is still missing 7.7. Is there an ETA please?
>
>Thanks.
>
>
>Sincerely,
>Artem
>
>--
>Founder, Android Police <http://www.androidpolice.com>, APK Mirror
><http://www.apkmirror.com/>, Illogical Robot LLC
>beerpla.net | @ArtemR <http://twitter.com/ArtemR>
>
>
>On Wed, Jul 22, 2020 at 9:27 AM Rinku Kothiya 
>wrote:
>
>> Hi,
>>
>> The Gluster community is pleased to announce the release of
>Gluster7.7
>> (packages available at [1]).
>> Release notes for the release can be found at [2].
>>
>> Major changes, features and limitations addressed in this release:
>> None
>>
>> Please Note: Some of the packages are unavailable and we are working
>on
>> it. We will release them soon.
>>
>> Thanks,
>> Gluster community
>>
>> References:
>>
>> [1] Packages for 7.7:
>> https://download.gluster.org/pub/gluster/glusterfs/7/7.7/
>>
>> [2] Release notes for 7.7:
>> https://docs.gluster.org/en/latest/release-notes/7.7/
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>>
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Gluster-devel Digest, Vol 76, Issue 1

2020-07-02 Thread Strahil Nikolov
Hi ,

for the gfid  I use method  2  described in 
https://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/ .
Once  you identify the file you can check the extended attributes on all bricks 
 for any mismatch. Usually the gfid is the culprit on replica volumes.
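
For reference, a minimal sketch of that xattr check (the brick path below is just
an example); run it on every brick that holds the file and compare the
trusted.gfid and trusted.afr.* values:

# Dump all extended attributes of the file as stored on a brick:
getfattr -d -m . -e hex /bricks/brick1/myvol/path/to/file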

By the way, what is your volume type (gluster vol info)  ?

Best Regards,
Strahil Nikolov



On 2 July 2020 at 15:00:01 GMT+03:00, gluster-devel-requ...@gluster.org wrote:
>Send Gluster-devel mailing list submissions to
>   gluster-devel@gluster.org
>
>To subscribe or unsubscribe via the World Wide Web, visit
>   https://lists.gluster.org/mailman/listinfo/gluster-devel
>or, via email, send a message with subject or body 'help' to
>   gluster-devel-requ...@gluster.org
>
>You can reach the person managing the list at
>   gluster-devel-ow...@gluster.org
>
>When replying, please edit your Subject line so it is more specific
>than "Re: Contents of Gluster-devel digest..."
>
>
>Today's Topics:
>
>   1. heal info output (Emmanuel Dreyfus)
>
>
>--
>
>Message: 1
>Date: Thu, 2 Jul 2020 03:05:27 +0200
>From: m...@netbsd.org (Emmanuel Dreyfus)
>To: gluster-devel@gluster.org (Gluster Devel)
>Subject: [Gluster-devel] heal info output
>Message-ID: <1osw8ax.rmfglc10lttijm%m...@netbsd.org>
>
>Hello
>
>gluster volume heal info show me questionable entries. I wonder if
>these
>are bugs, or if I shoud handle them and how.
>
>bidon# gluster volume heal gfs info 
>Brick bidon:/export/wd0e_tmp
>Status: Connected
>Number of entries: 0
>
>Brick baril:/export/wd0e
>/.attribute/system 
> 
>Status: Connected
>Number of entries: 2
>
>(...)
>Brick bidon:/export/wd2e
> 
> 
>/owncloud/data 
> 
> 
> 
> 
>
>There are three cases:
>1) /.attribute directory is special on NetBSD, it is where extended
>attributes are stored for the filesystem. The posix xlator takes care
>of
>screening it, but there must be some other softrware component that
>should learn it must disregeard it. Hints are welcome about where I
>should look at.
>
>2) /owncloud/data  is a directory. mode, owner and groups are the same
>on bricks. Why is it listed here?
>
>3)  What should I do with this?
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Gluster Test Day

2020-06-24 Thread Strahil Nikolov
Hi Rinku,

can you tell me how the packages for CentOS 7 are built, as I had issues
yesterday building both latest and v7 branches?
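
For context, this is the plain source-build flow I was attempting (a sketch only;
it does not cover the CentOS RPM packaging itself, and configure flags may differ):

git clone https://github.com/gluster/glusterfs.git
cd glusterfs
git checkout release-7        # or stay on the default branch for latest
./autogen.sh
./configure
make -j"$(nproc)"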

Best Regards,
Strahil Nikolov

On 24 June 2020 at 14:00:47 GMT+03:00, Rinku Kothiya wrote:
>Hi All,
>
>Release-8 RC0 packages are built. As this is a major release we want to
>ensure that the
>upgrades are possible without any problems from gluster version 5, 6
>and 7.
>For this we have organized an online test day on Tuesday 30-Jun-2020,
>from
>11am to 4:30pm IST.
>
>Prerequisite would be to have your specific distro machine where you
>can
>clone gluster and work on it.
>I was facing challenges compiling glusterfs release-7 on Fedora 32, so I
>used Fedora 30.
>If others are seeing similar problems, please help with a workaround
>or a fix for this before the test day.
>Another option would be to use some other version of the distro like I
>used
>Fedora30.
>
>Also, people who would like to contribute to automating this test can
>join and contribute.
>Sanju Rakonde will be leading this effort. I request you all to
>participate
>and make this
>a successful event.
>
>[1] To join the video meeting, click the below link :
>https://meet.google.com/qey-fvmt-uda
>you can also join via phone, Dial-in: (US) +1 361-271-3542 PIN: 122 789
>544#
>
>[2] Date and Time : Tuesday 30-Jun-2020, from 11am to 4:30pm IST
>
>[3] The document link where the results can be shared :
>
>https://docs.google.com/spreadsheets/d/16NaDFgiotJRLBgMQV4-Z0j0qhqdwenkQa_MkY7XHBQ8/edit?usp=sharing
>
>[4] Packages are available at :
> https://download.gluster.org/pub/gluster/glusterfs/qa-releases/8.0rc0/
>
>[5] Packages are signed and the public key is at :
>https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub
>
>Regards
>Rinku
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Gluster-devel Digest, Vol 75, Issue 4

2020-06-11 Thread Strahil Nikolov
Hi Federico,

I'm not a dev, but based on my admin experience Gluster has  some weaknesses:

- Negative lookup is  bad ,  so if we got a fast way to identify a missing  
file would be nice - but this  will be against the nature of Gluster and it's 
decentralised approach to metadata 
- Small  file performance boost is also a good one, as currently working with 
small files  is not as fast as some users  would like to. Checking the contents 
of a dir that is having 5 small files is taking multiple times than the 
response of the bricks. For example  XFS responds  for  0.2 secs while FUSE 
needs at least 3 sec  (replica volume).

- Performance tuning is quite  hard, so  anything that could help the admin to 
set the optimal settings  would help alot.
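
A rough illustration of the small-file comparison above (paths are examples; the
first command hits the brick's XFS directly, the second goes through the Gluster
FUSE mount of the same volume):

# Directly on the brick filesystem:
time ls -l /bricks/brick1/myvol/somedir > /dev/null
# Through the FUSE mount:
time ls -l /mnt/myvol/somedir > /dev/null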

Best Regards,
Strahil Nikolov 

On 11 June 2020 at 15:00:01 GMT+03:00, gluster-devel-requ...@gluster.org wrote:
>Send Gluster-devel mailing list submissions to
>   gluster-devel@gluster.org
>
>To subscribe or unsubscribe via the World Wide Web, visit
>   https://lists.gluster.org/mailman/listinfo/gluster-devel
>or, via email, send a message with subject or body 'help' to
>   gluster-devel-requ...@gluster.org
>
>You can reach the person managing the list at
>   gluster-devel-ow...@gluster.org
>
>When replying, please edit your Subject line so it is more specific
>than "Re: Contents of Gluster-devel digest..."
>
>
>Today's Topics:
>
>   1. Introducing me, questions on general improvements in gluster
>  re. latency and throughput (Federico Strati)
>
>
>--
>
>Message: 1
>Date: Thu, 11 Jun 2020 09:51:52 +0200
>From: Federico Strati 
>To: gluster-devel@gluster.org
>Subject: [Gluster-devel] Introducing me, questions on general
>   improvements in gluster re. latency and throughput
>Message-ID: <0c283930-ebb7-4cc0-a063-2fb40a8b8...@gmail.com>
>Content-Type: text/plain; charset=utf-8; format=flowed
>
>Dear All,
>
>I just started working for a company named A3Cube, who produces HPC 
>supercomputers.
>
>I was assigned the task to investigate which improvements to gluster
>are 
>viable
>
>in order to lead to overall better performance in latency and
>throughput.
>
>I'm quite new to SDS and so pardon me if some questions are naive.
>
> From what I've understood so far, possible bottlenecks are
>
>in FUSE and transport.
>
>Generally speaking, if you have time to just drop me some pointers,
>
>1] FUSE + splice has never been considered (issue closed without real 
>discussions)
>
>(probably because it conflicts with the general architecture and in 
>particular
>
>with the write-behind translator)
>
>Recently, it has been announced a new userspace fs kernel module, ZUFS,
>
>whose aim
>
>is to zero copy and improving vastly over FUSE: would you be interested
>
>in investigating it ?
>
>(ZUFS: https://github.com/NetApp/zufs-zuf ; 
>https://lwn.net/Articles/756625/)
>
>2] Transport over RDMA (Infiniband) has been recently dropped:
>
>may I ask you what considerations have been made ?
>
>3] I would love to hear what you consider real bottlenecks in gluster
>
>right now regarding latency and thruput.
>
>Thanks in advance
>
>Kind regards
>
>Federico
>
>
>
>--
>
>___
>Gluster-devel mailing list
>Gluster-devel@gluster.org
>https://lists.gluster.org/mailman/listinfo/gluster-devel
>
>End of Gluster-devel Digest, Vol 75, Issue 4
>
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Announcing Gluster release 7.5

2020-04-20 Thread Strahil Nikolov
Hi Rinku,

I already got it running (yesterday) on CentOS7.

I have noticed that for 1 volume (out of 6) the arbiter failed to sync (pending
heal forever) and I had to 'reset-brick' in order to bring it back into
operation.
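
For anyone hitting the same thing, the reset-brick sequence I used looks roughly
like this (volume and brick names are examples; double-check the syntax against
the official docs before running it):

gluster volume reset-brick myvol node3:/bricks/arbiter/myvol start
gluster volume reset-brick myvol node3:/bricks/arbiter/myvol node3:/bricks/arbiter/myvol commit force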

My ACL xlator issue was still valid, but I finally gave up and moved the data to
fresh volumes, and everything is working.

Best Regards,
Strahil Nikolov


On Monday, 20 April 2020 at 17:02:46 GMT+3, Rinku Kothiya wrote:





Hi,

The Gluster community is pleased to announce the release of Gluster7.5 
(packages available at [1]).
Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:
None

Please Note: Some of the packages are unavailable and we are working on it. We 
will release them soon.

Thanks,
Gluster community

References:

[1] Packages for 7.5:
https://download.gluster.org/pub/gluster/glusterfs/7/7.5/

[2] Release notes for 7.5:
https://docs.gluster.org/en/latest/release-notes/7.5/




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
gluster-us...@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Strahil Nikolov
Take a look at Stefan Solbrig's e-mail 


Best Regards,
Strahil Nikolov


On Wednesday, 25 March 2020 at 22:55:23 GMT+2, Mauro Tridici wrote:





Hi Strahil,

unfortunately, no process is holding the file or the directory.
Do you know if some other community user could help me?

Thank you,
Mauro



> On 25 Mar 2020, at 21:08, Strahil Nikolov  wrote:
> 
> You can also check if there is a process holding a file that was deleted 
> there:
> lsof /tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505
> 
> If it's not that one , I'm out of ideas :)
> 
> It's not recommended to delete it from the bricks , so avoid that if possible.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Wednesday, 25 March 2020 at 21:12:58 GMT+2, Mauro Tridici wrote:
> 
> 
> 
> 
> 
> Hi Strahil,
> 
> thank you for your answer.
> Directory is empty and no immutable bit has been assigned to it.
> 
> [athena-login2][/tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505]>
>  ls -la
> total 8
> drwxr-xr-x 2 das oclab_prod 4096 Mar 25 10:02 .
> drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 ..
> 
> Any other ideas related to this issue?
> Many thanks,
> Mauro
> 
> 
>> On 25 Mar 2020, at 18:32, Strahil Nikolov  wrote:
>> 
>> On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici 
>>  wrote:
>>> Dear All,
>>> 
>>> some users that regularly use our gluster file system are experiencing a
>>> strange error when attempting to remove an empty directory.
>>> All bricks are up and running and no particular error has been detected,
>>> but they are not able to remove it successfully.
>>> 
>>> This is the error they are receiving:
>>> 
>>> [athena-login2][/tier2/OPA/archive/GOFS]> rm -rf RECOVERY20190416/
>>> rm: cannot remove `RECOVERY20190416/GlobNative/20190505': Directory not
>>> empty
>>> 
>>> I tried to delete this directory from root user without success.
>>> Do you have some suggestions to solve this issue?
>>> 
>>> Thank you in advance.
>>> Kind Regards,
>>> Mauro
>> 
>> What do you have in 'RECOVERY20190416/GlobNative/20190505' ?
>> 
>> Maybe you got an immutable bit (chattr +i) on any file/folder  ?
>> 
>> Best Regards,
>> Strahil Nikolov
> 
> 
> 



___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Strahil Nikolov
You can also check if there is a process holding a file that was deleted there:
lsof /tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505

If it's not that one , I'm out of ideas :)

It's not recommended to delete it from the bricks , so avoid that if possible.
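
A read-only way to see whether a stale entry on the backend is what keeps the
directory "non-empty" (the brick path is a placeholder; check every brick of the
volume, but don't delete anything there by hand):

# Run on each brick host, against the brick's own export directory:
ls -la <brick-export-path>/<path-to>/RECOVERY20190416/GlobNative/20190505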

Best Regards,
Strahil Nikolov






On Wednesday, 25 March 2020 at 21:12:58 GMT+2, Mauro Tridici wrote:





Hi Strahil,

thank you for your answer.
Directory is empty and no immutable bit has been assigned to it.

[athena-login2][/tier2/OPA/archive/GOFS/RECOVERY20190416/GlobNative/20190505]> 
ls -la
total 8
drwxr-xr-x 2 das oclab_prod 4096 Mar 25 10:02 .
drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 ..

Any other ideas related to this issue?
Many thanks,
Mauro


> On 25 Mar 2020, at 18:32, Strahil Nikolov  wrote:
> 
> On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici  
> wrote:
>> Dear All,
>> 
>> some users that regularly use our gluster file system are experiencing a
>> strange error when attempting to remove an empty directory.
>> All bricks are up and running and no particular error has been detected,
>> but they are not able to remove it successfully.
>> 
>> This is the error they are receiving:
>> 
>> [athena-login2][/tier2/OPA/archive/GOFS]> rm -rf RECOVERY20190416/
>> rm: cannot remove `RECOVERY20190416/GlobNative/20190505': Directory not
>> empty
>> 
>> I tried to delete this directory from root user without success.
>> Do you have some suggestions to solve this issue?
>> 
>> Thank you in advance.
>> Kind Regards,
>> Mauro
> 
> What do you have in 'RECOVERY20190416/GlobNative/20190505' ?
> 
> Maybe you got an immutable bit (chattr +i) on any file/folder  ?
> 
> Best Regards,
> Strahil Nikolov



___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Proposing to previous ganesha HA clustersolution back to gluster code as gluster-7 feature

2019-07-18 Thread Strahil Nikolov
Hi,

I'm posting this again as it got bounced.
Keep in mind that corosync/pacemaker is hard for new admins/users to set up
properly.

I'm still trying to remediate the effects of poor configuration at work.
Also, storhaug is nice for hyperconverged setups where the host is not only
hosting bricks but also other workloads.
Corosync/pacemaker require proper fencing to be set up, and most of the stonith
resources 'shoot the other node in the head'.
I would be happy to see something easy to deploy (say 'cluster.enable-ha-ganesha
true'), with gluster bringing up the floating IPs and taking care of the NFS
locks, so that no disruption is felt by the clients.

Still, this will be a lot of work to achieve.

Best Regards,
Strahil Nikolov

On Apr 30, 2019 15:19, Jim Kinney  wrote:
>  
> +1!
> I'm using nfs-ganesha in my next upgrade so my client systems can use NFS 
> instead of fuse mounts. Having an integrated, designed in process to 
> coordinate multiple nodes into an HA cluster will very welcome.
> 
> On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan  
> wrote:
>>  
>> Hi all,
>> 
>> Some of you folks may be familiar with the HA solution provided for nfs-ganesha
>> by gluster using pacemaker and corosync.
>>
>> That feature was removed in glusterfs 3.10 in favour of the common HA project
>> "Storhaug". Even Storhaug has not progressed much in the last two years, and its
>> development is currently halted, hence the plan to restore the old HA ganesha
>> solution back to the gluster code repository, with some improvements, targeting
>> the next gluster release, 7.
>> 
>>  I have opened up an issue [1] with details and posted initial set of 
>>patches [2]
>> 
>> Please share your thoughts on the same
>> 
>> 
>> Regards,
>> 
>> Jiffin  
>> 
>> [1] https://github.com/gluster/glusterfs/issues/663
>> 
>> [2] 
>> https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)
>> 
>> 
> 
> -- 
> Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
> reflect authenticity.

___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel