[Gluster-users] Gluster 5.5 slower than 3.12.15

2019-04-02 Thread Strahil Nikolov
-qlength: 1, features.shard: on, user.cifs: off, storage.owner-uid: 36, storage.owner-gid: 36, network.ping-timeout: 30, performance.strict-o-direct: on, cluster.granular-entry-heal: enable, cluster.enable-shared-storage: enable. Network: 1 gbit/s. Filesystem: XFS. Best Regards, Strahil Nikolov
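For context, options like these are applied per volume with `gluster volume set`. A minimal sketch mirroring a few of the options quoted above; the volume name `myvol` is assumed, since the truncated post does not show it:

```
gluster volume set myvol features.shard on
gluster volume set myvol network.ping-timeout 30
gluster volume set myvol performance.strict-o-direct on
gluster volume set myvol cluster.granular-entry-heal enable
```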

Re: [Gluster-users] [ovirt-users] Re: Hosted-Engine constantly dies

2019-04-05 Thread Strahil Nikolov
oVirt and Gluster dev teams. Best Regards, Strahil Nikolov

[Gluster-users] Gluster snapshot fails

2019-04-09 Thread Strahil Nikolov
t2 are on /dev/gluster_vg_ssd/gluster_lv_engine, while the arbiter is on /dev/gluster_vg_sda3/gluster_lv_engine. Is that the issue? Should I rename my brick's VG? If so, why is there no mention of it in the documentation? Best Regards, Strahil Nikolov

Re: [Gluster-users] Gluster snapshot fails

2019-04-11 Thread Strahil Nikolov
0.16   1.58 As you can see, all bricks are thin LVs and space is not the issue. Can someone hint me how to enable debug, so the gluster logs can show the reason for that pre-check failure? Best Regards, Strahil Nikolov. On Wednesday, April 10, 2019, 9:05:15 GMT-4, Rafi Kavungal Chun
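One way to surface such a pre-check failure is raising the glusterd log level, since glusterd performs the snapshot pre-validation. A sketch, assuming a stock EL-style layout (paths may differ on other distributions):

```
# Assumption: default /etc/glusterfs/glusterd.vol. Inside the
# "volume management" block, add the line:
#     option log-level DEBUG
systemctl restart glusterd
tail -f /var/log/glusterfs/glusterd.log   # snapshot pre-validation messages land here
```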

Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
I hope this is the last update on the issue -> opened a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1699309 Best regards, Strahil Nikolov. On Friday, April 12, 2019, 7:32:41 GMT-4, Strahil Nikolov wrote: Hi All, I have tested gluster snapshot without systemd.automo

Re: [Gluster-users] Gluster 5.6 slow read despite fast local brick

2019-04-22 Thread Strahil Nikolov
reads: off cluster.enable-shared-storage: enable Are any issues expected when downgrading the version? Best Regards, Strahil Nikolov. On Monday, April 22, 2019, 0:26:51 GMT-4, Strahil wrote: Hello Community, I have been left with the impression that FUSE mounts will read from both

Re: [Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-04-30 Thread Strahil Nikolov
floating IPs and taking care of the NFS locks, so no disruption will be felt by the clients. Still, this will be a lot of work to achieve. Best Regards, Strahil Nikolov On Apr 30, 2019 15:19, Jim Kinney wrote: > +1! > I'm using nfs-ganesha in my next upgrade so my client s

Re: [Gluster-users] Cannot see all data in mount

2019-05-15 Thread Strahil Nikolov
It seems that I got confused. So you see the files on the bricks (servers), but not when you mount glusterfs on the clients? If so, this is not the sharding feature, as it works the opposite way. Best Regards, Strahil Nikolov. On Thursday, May 16, 2019, 0:35:04 GMT+3, Paul van der

Re: [Gluster-users] Client Handling of Elastic Clusters

2019-10-16 Thread Strahil Nikolov
you can keep the number of servers the same... still, the server bandwidth will be a limit at some point. I'm not sure how other SDSs deal with such elasticity. I guess many users on the list will hate me for saying this, but have you checked CEPH for your needs? Best Regards, Strahil Ni

[Gluster-users] Gluster v6.6 replica 2 arbiter 1 - gfid on arbiter is different

2019-11-13 Thread Strahil Nikolov
ter Docs Project documentation for Gluster Filesystem. Thanks in advance for your response. Best Regards, Strahil Nikolov

Re: [Gluster-users] Gluster v6.6 replica 2 arbiter 1 - gfid on arbiter is different

2019-11-13 Thread Strahil Nikolov
It seems that whenever I reboot a gluster node, I get this problem - so it's not an arbiter issue. Obviously there is something wrong with v6.6, as I never had such issues with v6.5. Any ideas where I should start? Best Regards, Strahil Nikolov. On Wednesday, November 13, 2019, 22:

Re: [Gluster-users] Use GlusterFS as storage for images of virtual machines - available issues

2019-11-27 Thread Strahil Nikolov
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh [Install] RequiredBy=shutdown.target Of course systemd has to be reloaded :) Best Regards, Strahil Nikolov. On Wednesday, November 27, 2019, 8:07:52 GMT-5, Sankarshan Mukhopadhyay wrote: On Wed, Nov 27, 2019 at 6:10 PM Rav
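A sketch of such a unit, built around the script path and the `RequiredBy=shutdown.target` install section quoted above; the unit name is assumed:

```
cat > /etc/systemd/system/gluster-stop-all.service <<'EOF'
[Unit]
Description=Stop all Gluster processes cleanly at shutdown
DefaultDependencies=no
Before=shutdown.target

[Service]
Type=oneshot
ExecStart=/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh

[Install]
RequiredBy=shutdown.target
EOF
# "Of course systemd has to be reloaded"
systemctl daemon-reload && systemctl enable gluster-stop-all.service
```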

Re: [Gluster-users] GFS performance under heavy traffic

2019-12-19 Thread Strahil Nikolov
mance. Best Regards, Strahil Nikolov. On Thursday, December 19, 2019, 02:28:55 GMT+2, David Cunningham wrote: Hi Raghavendra and Strahil, We are using GFS version 5.6-1.el7 from the CentOS repository. Unfortunately we can't modify the application and it expects to read

Re: [Gluster-users] No possible to mount a gluster volume via /etc/fstab?

2020-01-24 Thread Strahil Nikolov
a sysctl with the following parameters: net.ipv6.conf.all.disable_ipv6 = 1, net.ipv6.conf.default.disable_ipv6 = 1. That did not help. Volumes are configured with inet. sudo glu

Re: [Gluster-users] [Errno 107] Transport endpoint is not connected

2020-01-30 Thread Strahil Nikolov
ompatibility mode 4.2 and there were 2 older VMs which had snapshots from prior versions, while the leaf was in compatibility level 4.2. Note: the backup was taken on the engine running 4.3. Thanks, Olaf. On Tue, Jan 28, 2020 at 17:31, Strahil Nikolo

Re: [Gluster-users] interpreting heal info and reported entries

2020-01-30 Thread Strahil Nikolov
Hi Ravi, This is the third time an oVirt user (one is me, and I think my email is in the list) has reported such an issue. We need a thorough investigation, as this keeps recurring. Best Regards, Strahil Nikolov

[Gluster-users] Upgrade gluster v7.0 to 7.2 posix-acl.c:262:posix_acl_log_permit_denied

2020-01-31 Thread Strahil Nikolov
Hello Community, I am again experiencing the issue with the ACL, and none of the previously stated fixes are helping. Bug report -> https://bugzilla.redhat.com/show_bug.cgi?id=1797099 Any ideas would be helpful. Best Regards, Strahil Nikolov

Re: [Gluster-users] [ovirt-users] Re: ACL issue v6.6, v6.7, v7.1, v7.2

2020-02-07 Thread Strahil Nikolov
e bug report, some other things should be different. >> Greetings, >> Paolo. On 06/02/20 23:30, Christian Reiss wrote: >>> Hey, I hit this bug too, with disastrous results. I second this po

Re: [Gluster-users] Permission denied at some directories/files after a split brain

2020-02-10 Thread Strahil Nikolov
What version of gluster are you using? In my case, only a downgrade restored the operation of the cluster, so you should consider that as an option (a last one, but still an option). You can try to run a find against the FUSE mount: 'find /path/to/fuse -exec setfacl -m u:root:rw {} \;' Ma

Re: [Gluster-users] It appears that readdir is not cached for FUSE mounts

2020-02-10 Thread Strahil Nikolov
On February 10, 2020 5:32:29 PM GMT+02:00, Matthias Schniedermeyer wrote: >On 10.02.20 16:21, Strahil Nikolov wrote: >> On February 10, 2020 2:25:17 PM GMT+02:00, Matthias Schniedermeyer wrote: >>> Hi, I would describe our basic use

Re: [Gluster-users] GlusterFS problems & alternatives

2020-02-11 Thread Strahil Nikolov
an 2 weeks where 2k+ machines were read-only before the vendor provided a new patch), so the issues in Gluster are nothing new, and we should not forget that Gluster is free (and doesn't cost millions like some arrays). The only mitigation is to thoroughly test each patch on a cluster th

Re: [Gluster-users] Gluster Performance Issues

2020-02-20 Thread Strahil Nikolov
ng%20Workload/ This information will allow more experienced administrators and the developers to identify any pattern that could cause the symptoms. Tuning Gluster is one of the hardest topics, so you should prepare yourself for a lot of tests until you reach the optimal settings for your

Re: [Gluster-users] Brick Goes Offline After server reboot/Or Gluster Container is restarted, on which a gluster node is running

2020-02-28 Thread Strahil Nikolov
Met vriendelijke groet, With kind regards,

Re: [Gluster-users] Advice on moving volumes/bricks to new servers

2020-02-29 Thread Strahil Nikolov
didn't have the necessary data. Another way to migrate the data is to: 1. Add the new disks on the old srv1,2,3; 2. Add the new disks to the VG; 3. pvmove all LVs to the new disks (I prefer to use the '--atomic' option); 4. vgreduce with the old disks; 5. pvremove the old disks; 6. The
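A sketch of those steps with hypothetical device and VG names:

```
# Assumed names: VG vg_bricks, old PV /dev/sdb, new PV /dev/sdd
pvcreate /dev/sdd                  # steps 1-2: prepare the new disk
vgextend vg_bricks /dev/sdd        # ...and add it to the VG
pvmove --atomic /dev/sdb /dev/sdd  # step 3: move all extents off the old PV
vgreduce vg_bricks /dev/sdb        # step 4: drop the old PV from the VG
pvremove /dev/sdb                  # step 5: wipe the LVM label
```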

Re: [Gluster-users] Geo-replication

2020-03-01 Thread Strahil Nikolov
node's authorized_keys file, so that anyone gaining access with this key can run only the gsyncd command. ``` command=gsyncd ssh-key…. ``` Thanks for your help.

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread Strahil Nikolov
Hi Felix, can you test /on a non-prod system/ the latest minor version of gluster v6? Best Regards, Strahil Nikolov. On Monday, March 2, 2020, 21:43:48 GMT+2, Felix Kölzow wrote: Dear Community, this message appears for me too on GlusterFS 6.0. Before that, we had GlusterFS

Re: [Gluster-users] Geo-replication

2020-03-02 Thread Strahil Nikolov
Security: a command prefix is added while adding the public key to the remote node's authorized_keys file, so that anyone gaining access with this key can run only the gsyncd command. ``` command=gsyncd ssh-key…. ``` Thanks for your help. -- David Cunningham, Voisonics Limited, http://voisonics.com/, USA: +1 213 221 1092, New Zealand: +64 (0)28 2558 3782 -- regards, Aravinda Vishwanathapura, https://kadalu.io Hey David, Why don't you set the B cluster's hostnames in /etc/hosts of all A cluster nodes? Maybe you won't need to rebuild the whole B cluster. I guess the A cluster nodes need to be able to reach all nodes from the B cluster, so you might need to change the firewall settings. Best Regards, Strahil Nikolov
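A sketch of the /etc/hosts idea, with hypothetical hostnames and documentation-range IPs:

```
# Run on every A (master) node so the B (slave) cluster resolves locally
cat >> /etc/hosts <<'EOF'
203.0.113.11  b-node1
203.0.113.12  b-node2
203.0.113.13  b-node3
EOF
```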

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-03 Thread Strahil Nikolov
emains in v6.0. Actually, we do not have a non-prod gluster system, so it will take some time to do this. Regards, Felix. On 02/03/2020 23:25, Strahil Nikolov wrote: >> Hi Felix, can you test /on non-prod system/ the latest minor

Re: [Gluster-users] Disk use with GlusterFS

2020-03-06 Thread Strahil Nikolov
mber of Bricks: 1, Transport-type: tcp, Bricks: Brick1: myhost:/nodirectwritedata/gluster/gvol0, Options Reconfigured: transport.address-family: inet

Re: [Gluster-users] Erroneous "No space left on device." messages

2020-03-10 Thread Strahil Nikolov
Pid: 4650, File System: xfs, Device: /dev/sda, Mount Options: rw, Inode Size: 512, Disk Space Free: 325.3GB, Total Disk Space: 91.0TB, Inode Count: 6920019

Re: [Gluster-users] Erroneous "No space left on device." messages

2020-03-10 Thread Strahil Nikolov
M, Pat Haley wrote: >> Hi, I get the following: [root@mseas-data2 bricks]# gluster volume get data-volume all | grep cluster.min-free, which shows cluster.min-free-disk 10% and cluster.min-free-inodes 5%. On 3

Re: [Gluster-users] Gluster Performance Issues

2020-03-10 Thread Strahil Nikolov
c. As before, the performance is lower than the individual brick performance. Is this normal behavior, or what can be done to improve the single-client performance as pointed out in this case? Regards, Felix. On 20/02/2020 22:26,

Re: [Gluster-users] geo-replication sync issue

2020-03-11 Thread Strahil Nikolov
-mediaslave-node Active Hybrid Crawl N/A. Any idea, please? Thank you. Hi Etem, Have you checked the logs on both source and destination? Maybe they can hint at what the issue is. Best Regards, Strahil Nikolov

Re: [Gluster-users] Erroneous "No space left on device." messages

2020-03-11 Thread Strahil Nikolov
_all_smalldom.m pe_PB.in PE_Data_Comparison_glider_sp011_smalldom.m pe_PB.log PE_Data_Comparison_glider_sp064_smalldom.m pe_PB_short.in PeManJob.log PlotJob mseas(DSMccfzR75deg_001b)% ls PeManJob PeManJob mseas(DSMccfzR75deg_001b)% ls

Re: [Gluster-users] geo-replication sync issue

2020-03-11 Thread Strahil Nikolov
[2020-03-11 20:08:55.286410] I [master(worker /srv/media-storage):1441:process] _GMaster: Batch Completed changelog_end=1583917610 entry_stime=None changelog_start=1583917610 stime=None duration=153.5185 num_changelogs=1 mode=xsync [2020-03-11 20:08:55.315442] I [master(worker /srv/media

Re: [Gluster-users] geo-replication sync issue

2020-03-12 Thread Strahil Nikolov
[2020-03-08 09:49:51.705982] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-media-storage-client-0: Connected to media-storage-client-0, attached to remote volume '/srv/media-storage'. [2020-03-08 09:49:51.707627] I [fu

Re: [Gluster-users] Stale file handle

2020-03-12 Thread Strahil Nikolov
-=-=-=-=-=-=- Pat Haley, Email: pha...@mit.edu, Center for Ocean Engineering, Phone: (617) 253-6824, Dept. of Mechanical Engineering, Fax: (617) 253-8125, MIT, Room 5-213, http://web.mit.edu/phaley/www/, 77 Massachusetts Avenue

Re: [Gluster-users] geo-replication sync issue

2020-03-18 Thread Strahil Nikolov
; Could you try disabling syncing xattrs and check? gluster vol geo-rep :: config sync-xattrs false. On Fri, Mar 13, 2020 at 1:42 AM Strahil Nikolov wrote: >>> On March 12, 2020 9:41:45 AM GMT+02:00, "Etem Bayoğlu"

Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Strahil Nikolov
On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici wrote: >Dear All, some users that regularly use our gluster file system are experiencing a strange error when attempting to remove an empty directory. All bricks are up and running, no particular error has been detected, but they are not

Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Strahil Nikolov
tal 8 drwxr-xr-x 2 das oclab_prod 4096 Mar 25 10:02 . drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 .. Any other idea related to this issue? Many thanks, Mauro > On 25 Mar 2020, at 18:32, Strahil Nikolov wrote: > On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici wrote: >

Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Strahil Nikolov
Take a look at Stefan Solbrig's e-mail. Best Regards, Strahil Nikolov. On Wednesday, March 25, 2020, 22:55:23 GMT+2, Mauro Tridici wrote: Hi Strahil, unfortunately, no process is holding the file or directory. Do you know if some other community user could help me? Thank you,

Re: [Gluster-users] Can't mount NFS, please help!

2020-04-01 Thread Strahil Nikolov
Erik Jacobson, Software Engineer, erik.jacob...@hpe.com, +1 612 851 0550 Office, Eagan, MN

Re: [Gluster-users] not support so called “structured data”

2020-04-01 Thread Strahil Nikolov
nd thus a 'replica 3' volume or a 'replica 3 arbiter 1' volume should be used, and a different set of options is needed (compared to other workloads). Best Regards, Strahil Nikolov

Re: [Gluster-users] Red Hat Bugzilla closed for GlusterFS bugs?

2020-04-03 Thread Strahil Nikolov
Everything was moved

Re: [Gluster-users] Replica 2 to replica 3

2020-04-09 Thread Strahil Nikolov
n SSD (for example mounted on /gluster) from which you can create 6 directories. The arbiter stores only metadata, and the SSD's random-access performance will be the optimal approach. Something like: arbiter:/gluster/data1 arbiter:/gluster/data2 arbiter:/gluster/data3 arbiter:/gluster/data4 arb
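A sketch of one such replica set; the data-brick hosts and paths are assumed, while the arbiter:/gluster/dataN layout follows the post:

```
# One of the six volumes; data bricks assumed on srv1/srv2
gluster volume create data1 replica 3 arbiter 1 \
  srv1:/bricks/data1/brick srv2:/bricks/data1/brick arbiter:/gluster/data1
```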

Re: [Gluster-users] gluster v6.8: systemd units disabled after install

2020-04-11 Thread Strahil Nikolov
ready known? >> Regards, >> Hubert

Re: [Gluster-users] gluster v6.8: systemd units disabled after install

2020-04-11 Thread Strahil Nikolov
Best Regards, Hubert. On Sat., Apr. 11, 2020 at 11:12, Strahil Nikolov wrote: >> On April 11, 2020 8:40:47 AM GMT+03:00, Hu Bert wrote: >Hi, so no one has seen the problem of disabled systemd units be

Re: [Gluster-users] gluster v6.8: systemd units disabled after install

2020-04-11 Thread Strahil Nikolov

Re: [Gluster-users] Working with uid/guid

2020-04-16 Thread Strahil Nikolov
d for your user, or to use ACLs (maybe with a find -exec). You still have the option of '0777', but then security will be just a word. I think the first one is easier to implement. Best Regards, Strahil Nikolov
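A sketch of the ACL approach, with a hypothetical user and mount point; the volume has to be mounted with -o acl for this to work:

```
# Grant an extra user rwX via ACLs instead of loosening the mode to 0777
setfacl -R -m u:appuser:rwX /mnt/glustervol/shared
# Default ACLs apply to directories only, hence the find -type d
find /mnt/glustervol/shared -type d -exec setfacl -m d:u:appuser:rwX {} +
```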

Re: [Gluster-users] [Gluster-devel] Announcing Gluster release 7.5

2020-04-20 Thread Strahil Nikolov
moved the data to fresh volumes and everything is working. Best Regards, Strahil Nikolov. On Monday, April 20, 2020, 17:02:46 GMT+3, Rinku Kothiya wrote: Hi, The Gluster community is pleased to announce the release of Gluster 7.5 (packages available at [1]). Release not

Re: [Gluster-users] Lightweight read

2020-04-29 Thread Strahil Nikolov
d the opposite on brick2 - then only the metadata at the Arbiter level can show us which data is good and which has to be fixed. On Sat, 25 Apr 2020 at 19:41, Strahil Nikolov wrote: >> On April 25, 2020 9:00:30 AM GMT+03:00, David Cunningham <dcunning...@voisonic

Re: [Gluster-users] Upgrade from 5.13 to 7.5 full of weird messages

2020-05-05 Thread Strahil Nikolov
't have a hunch on which patch would have caused an increase in logs! -Amar. On Sat, May 2, 2020, 12:47 AM Strahil Nikolov wrote: >>> On May 1, 2020 8:03:50 PM GMT+03:00, Artem Russakovskii <arch

Re: [Gluster-users] Gluster on top of xfs inode size 1024

2020-05-12 Thread Strahil Nikolov
Inode size 1024 is the recommended one for Gluster used with OpenStack (SWIFT), so it shouldn't have any issues. Best Regards, Strahil Nikolov

Re: [Gluster-users] Configuring legacy Gluster NFS

2020-05-25 Thread Strahil Nikolov
On May 25, 2020 at 5:49:00 GMT+03:00, Olivier wrote: >Strahil Nikolov writes: >> On May 23, 2020 7:29:23 AM GMT+03:00, Olivier wrote: >>> Hi, I have been struggling with NFS Ganesha: one gluster node with ganesha serving

Re: [Gluster-users] Configuring legacy Gluster NFS

2020-05-25 Thread Strahil Nikolov
I forgot to mention that you need to verify/set the VMware machines for a high-performance/low-latency workload. On May 25, 2020 at 17:13:52 GMT+03:00, Strahil Nikolov wrote: > On May 25, 2020 at 5:49:00 GMT+03:00, Olivier wrote: >> Strahil Nikolov writes: >

Re: [Gluster-users] File system very slow

2020-05-27 Thread Strahil Nikolov
Also, can you provide a ping between the nodes, so we get an idea of the latency between them? I'm also interested in how much time a 'du' takes on the bricks. Best Regards, Strahil Nikolov. On May 27, 2020 at 10:27:34 GMT+03:00, Karthik Subrahmanya wrote: >Hi,

Re: [Gluster-users] Readdirp (ls -l) Performance Improvement

2020-05-27 Thread Strahil Nikolov
Hi Rafi, I have a test oVirt 4.3.9 cluster with Gluster v7.5 on CentOS 7. Can you provide the rpms? I will try to test. Also, please share the switch that disables this behaviour (in case something goes wrong). Best Regards, Strahil Nikolov. On May 27, 2020 at 14:54:34 GMT+03:00, RAFI KC

Re: [Gluster-users] Readdirp (ls -l) Performance Improvement

2020-05-28 Thread Strahil Nikolov
Hey Rafi, what do you mean by volume configuration and tree structure? Best Regards, Strahil Nikolov. On May 27, 2020 at 16:18:36 GMT+03:00, RAFI KC wrote: >Sure, I have back-ported the patch to release-7. Now I will see how I can build the rpms. On the other hand, if possibl

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-05-30 Thread Strahil Nikolov
move the file away from the slave, does that fix the issue? Best Regards, Strahil Nikolov. On May 30, 2020 at 1:10:56 GMT+03:00, David Cunningham wrote: >Hello, We're having an issue with a geo-replication process with unusually high CPU use and giving "En

Re: [Gluster-users] GlusterFS saturates server disk IO due to write brick temporary file to ".glusterfs" directory

2020-05-30 Thread Strahil Nikolov
article (it's for small-file tuning, but describes the options above): https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/small_file_performance_enhancements Best Regards, Strahil Nikolov. On May 29, 2020 at 22:25:29 GMT+03:00, Qing Wang wrote:

Re: [Gluster-users] One error/warning message after upgrade 5.11 -> 6.8

2020-05-30 Thread Strahil Nikolov
Yesterday another oVirt user hit the issue (or a similar one) after a gluster v6.6 to 6.8 upgrade. I guess Adrian can provide the logs, so we can check whether it is the same issue or not. Best Regards, Strahil Nikolov. On May 28, 2020 at 14:15:55 GMT+03:00, Alan Orth wrote: >We upgraded from 5.10 or 5

Re: [Gluster-users] Faulty staus in geo-replication session of a sub-volume

2020-05-30 Thread Strahil Nikolov
Hello Naranderan, what OS are you using? Do you have SELinux in enforcing mode (verify via 'sestatus')? Best Regards, Strahil Nikolov. On Saturday, May 30, 2020, 13:33:05 GMT+3, Naranderan Ramakrishnan wrote: Dear Developers/Users, A geo-rep session of a sub-vo

Re: [Gluster-users] Expecting to achieve atomic read in a FUSE mount of Gluster

2020-05-30 Thread Strahil Nikolov
ncy level 3 (8+3); 12 bricks with redundancy level 4 (8+4). In your case, if 2 bricks fail the volume will remain available without any disruption. Sadly there is no way to convert a replicated volume to a dispersed one, and depending on your workload a dispersed volume might not be suitable. Best Regards, Stra
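For reference, a dispersed volume with the 8+3 geometry would be created roughly like this; the volume and server names are assumed:

```
# 11 bricks total = 8 data + 3 redundancy; survives up to 3 brick failures
gluster volume create dispvol disperse 11 redundancy 3 \
  server{1..11}:/bricks/dispvol/brick
```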

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-02 Thread Strahil Nikolov
antly looping over some data, causing the CPU hog. Sadly, I can't find instructions for increasing the log level of the geo-rep log. Best Regards, Strahil Nikolov. On June 2, 2020 at 6:14:46 GMT+03:00, David Cunningham wrote: >Hi Strahil and Sunny, Thank you for the replies.

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-06 Thread Strahil Nikolov
-slave logs. Does the issue still occur? Best Regards, Strahil Nikolov. On June 6, 2020 at 1:21:55 GMT+03:00, David Cunningham wrote: >Hi Sunny and Strahil, Thanks again for your responses. We don't have a lot of renaming activity - maybe some, but not a lot. We do have

Re: [Gluster-users] One error/warning message after upgrade 5.11 -> 6.8

2020-06-08 Thread Strahil Nikolov
on't forget the 'ionice' to reduce the pressure). Once you have the list of files, stat them via the FUSE client and check whether they got healed. I fully agree that you need to heal the volumes first before proceeding further, or you might get into a nasty situation. Best Regards, S
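A sketch of that check, with the volume name and file path assumed:

```
# List entries still pending heal, then stat each one through the FUSE mount
gluster volume heal myvol info
stat /mnt/myvol/some/reported/file   # a client lookup can trigger/verify the heal
```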

Re: [Gluster-users] unable to connect to download.gluster.org

2020-06-08 Thread Strahil Nikolov
I'm using OS repos. Have you checked for a system/package-manager proxy? Is there any difference between the nodes (for example, some package)? Best Regards, Strahil Nikolov. On June 8, 2020 at 15:26:08 GMT+03:00, Hu Bert wrote: >Hi @ll, on 2 of 3 identical servers (hosts, resol

Re: [Gluster-users] One error/warning message after upgrade 5.11 -> 6.8

2020-06-08 Thread Strahil Nikolov
Hm... That's something I didn't expect. By the way, have you checked if all clients are connected to all bricks (if using FUSE)? Maybe you have some clients that cannot reach a brick. Best Regards, Strahil Nikolov. On June 8, 2020 at 12:48:22 GMT+03:00, Hu Bert wrote: >Hi Strahi

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-09 Thread Strahil Nikolov
one). As this script is Python, I guess you can put some debug print statements in it. Best Regards, Strahil Nikolov. On June 9, 2020 at 5:07:11 GMT+03:00, David Cunningham wrote: >Hi Sankarshan, Thanks for that. So what should we look for to figure out what this process

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-10 Thread Strahil Nikolov
' can be used?) - Find (on all replica sets) the file and check the gfid - Check for heals pending for that gfid. Best Regards, Strahil Nikolov. On June 10, 2020 at 6:37:35 GMT+03:00, David Cunningham wrote: >Hi Strahil, Thank you for that. Do you know if these "Stale file

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-11 Thread Strahil Nikolov
problem? Best Regards, Strahil Nikolov. On June 11, 2020 at 3:15:36 GMT+03:00, David Cunningham wrote: >Hi Strahil, Thanks for that. I did search for a file with the gfid in the name, on both the master nodes and the geo-replication slave, but none of them had such a file

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-12 Thread Strahil Nikolov
topics is discussing open bugs and issues reported in the mailing list. It would be nice to join the meeting and discuss that by audio, as there could be other devs willing to join the 'fight'. @Sankarshan, any idea how to enable debug on the Python script? Best Regards, Strahil Nikolo

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread Strahil Nikolov
Hi Ahemad, You can simplify it by creating a systemd service that will call the script. It was already mentioned in a previous thread (with an example), so you can just use it. Best Regards, Strahil Nikolov. On June 16, 2020 at 16:02:07 GMT+03:00, Hu Bert wrote: >Hi, if y

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread Strahil Nikolov
illall -HUP glusterfsd || /bin/true" [Install] WantedBy=multi-user.target Best Regards, Strahil Nikolov. On June 16, 2020 at 18:41:59 GMT+03:00, ahemad shaik wrote: > Hi, I see there is a script file in the below-mentioned path on all nodes, using which a gluster volume c
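One plausible completion of the unit the preview truncates; the unit name is assumed, and only the ExecStop command and [Install] section are taken from the quoted fragment:

```
cat > /etc/systemd/system/glusterfsd-stop.service <<'EOF'
[Unit]
Description=Stop GlusterFS brick processes at shutdown

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/bin/sh -c "/usr/bin/killall -HUP glusterfsd || /bin/true"

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
```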

Re: [Gluster-users] Issues with replicated gluster volume

2020-06-16 Thread Strahil Nikolov
s will also be killed, both by the script and by the glusterfsd service. Best Regards, Strahil Nikolov. On June 16, 2020 at 19:48:32 GMT+03:00, ahemad shaik wrote: > Hi Strahil, I have the gluster setup on a CentOS 7 cluster. I see the glusterfsd service and it is in inactive

Re: [Gluster-users] State of Gluster project

2020-06-16 Thread Strahil Nikolov
Hey Mahdi, To me it looks like Red Hat is focusing more on CEPH than on Gluster. I hope the project remains active, because it's very difficult to find a software-defined storage as easy and as scalable as Gluster. Best Regards, Strahil Nikolov. On June 17, 2020 at 0:06:33 GMT+03:00,

Re: [Gluster-users] State of Gluster project

2020-06-18 Thread Strahil Nikolov
d of comparison) - the issue rate is not so big, but the price for Gluster is not millions :) Best Regards, Strahil Nikolov. On June 17, 2020 at 19:15:00 GMT+03:00, Erik Jacobson wrote: >> It is very hard to compare them because they are structurally very different. For exam

Re: [Gluster-users] re-use a brick from an old gluster volume to create a new one.

2020-06-18 Thread Strahil Nikolov
Best Regards, Strahil Nikolov. On June 18, 2020 at 19:22:46 GMT+03:00, Computerisms Corporation wrote: >Hi Gluster Gurus, Due to some hasty decisions and inadequate planning/testing, I find myself with a single-brick Distributed gluster volume. I had initially intended to ex

Re: [Gluster-users] State of Gluster project

2020-06-20 Thread Strahil Nikolov
eplace-brick'. >Note that I am not implying that Ceph is faster; rather, that a small Gluster setup with few bricks can be slower than expected. I would love to hear other opinions and in-the-field experiences. Thanks. [1] https://lists.gluster.org/pipermail/glust

Re: [Gluster-users] State of Gluster project

2020-06-21 Thread Strahil Nikolov
On June 21, 2020 at 10:53:10 GMT+03:00, Gionatan Danti wrote: >On 2020-06-21 01:26, Strahil Nikolov wrote: >> The effort is far less than reconstructing the disk of a VM from CEPH. In gluster, just run a find on the brick searching for the name of the VM d

Re: [Gluster-users] One error/warning message after upgrade 5.11 -> 6.8

2020-06-21 Thread Strahil Nikolov
D. Hubert. On Mon., June 8, 2020 at 15:36, Strahil Nikolov wrote: >> Hm... That's something I didn't expect. >> By the way, have you checked if all clients are connected to all bricks (if using FUSE)? >> Ma

Re: [Gluster-users] State of Gluster project

2020-06-22 Thread Strahil Nikolov
Hi Hubert, keep in mind RH recommends disks of 2-3 TB, not 10. I guess that has changed the situation. For NVMe/SSD a RAID controller is pointless, so JBOD makes the most sense. Best Regards, Strahil Nikolov. On June 22, 2020 at 7:58:56 GMT+03:00, Hu Bert wrote: >On Sun., June 21, 2020

Re: [Gluster-users] State of Gluster project

2020-06-22 Thread Strahil Nikolov
much more sense for any kind of software-defined storage (no matter whether Gluster, CEPH or Lustre). Of course, I could be wrong, and I would be glad to read benchmark results on this topic. Best Regards, Strahil Nikolov. On June 22, 2020 at 18:48:43 GMT+03:00, Erik Jacobson wrote: >> For

Re: [Gluster-users] One of cluster work super slow (v6.8)

2020-06-23 Thread Strahil Nikolov
What is the OS and its version? I have seen similar behaviour (with a different workload) on RHEL 7.6 (and below). Have you checked which processes are in 'R' or 'D' state on st2a? Best Regards, Strahil Nikolov. On June 23, 2020 at 19:31:12 GMT+03:00, Pavel Znamensky

Re: [Gluster-users] Gluster Test Day

2020-06-24 Thread Strahil Nikolov
Hi Rinku, can you tell me how the packages for CentOS 7 are built, as I had issues yesterday building both the latest and v7 branches? Best Regards, Strahil Nikolov. On June 24, 2020 at 14:00:47 GMT+03:00, Rinku Kothiya wrote: >Hi All, Release-8 RC0 packages are built. As this i

Re: [Gluster-users] build options

2020-06-25 Thread Strahil Nikolov
Hey Bob, Are you going to build the rpms? If yes, could you share your results? For me, building the rpms from the gluster source was easy /on CentOS 8/, but on CentOS 7 I got errors. Best Regards, Strahil Nikolov. On June 25, 2020 at 4:22:29 GMT+03:00, Computerisms Corporation wrote:

Re: [Gluster-users] Latest NFS-Ganesha Gluster Integration docs

2020-06-29 Thread Strahil Nikolov
can test on another volume, setting a bigger shard size. Best Regards, Strahil Nikolov. On June 29, 2020 at 5:00:22 GMT+03:00, "wkm...@bneit.com" wrote: >For many years, we have maintained a number of standalone, hyperconverged Gluster/Libvirt clusters, Replica 2 + Arbiter, usi

Re: [Gluster-users] Problems with qemu and disperse volumes (live merge)

2020-06-30 Thread Strahil Nikolov
t the virt group options and try again. Does the issue occur on another VM? Best Regards, Strahil Nikolov. On June 30, 2020 at 1:59:36 GMT+03:00, Marco Fais wrote: >Hi, I have recently been having a problem with Gluster disperse volumes and live merge on qemu-kvm. I am usi
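The virt option group can be (re)applied in one command; a sketch, assuming the volume is named SSD_Storage as in the follow-up logs:

```
# Applies the settings from /var/lib/glusterd/groups/virt in one shot
gluster volume set SSD_Storage group virt
```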

Re: [Gluster-users] Latest NFS-Ganesha Gluster Integration docs

2020-06-30 Thread Strahil Nikolov
On June 30, 2020 at 3:02:32 GMT+03:00, WK wrote: >On 6/28/2020 8:52 PM, Strahil Nikolov wrote: >> The last time I did storhaug+NFS-Ganesha I used https://github.com/gluster/storhaug/wiki . >Well, that certainly helps, but since I have no experience with Samba, I

Re: [Gluster-users] Latest NFS-Ganesha Gluster Integration docs

2020-06-30 Thread Strahil Nikolov
Also, Ganesha is not the only user of libgfapi - qemu can use it directly, but it has some limitations. Best Regards, Strahil Nikolov. On June 30, 2020 at 11:29:49 GMT+03:00, "Felix Kölzow" wrote: >Dear Users, >> On this list I keep seeing comments that VM performance

Re: [Gluster-users] volume process does not start - glusterfs is happy with it?

2020-07-01 Thread Strahil Nikolov
=network-online.target I have created systemd mount units due to VDO, but most probably local-fs.target will generate the mount units for you from the fstab. Best Regards, Strahil Nikolov. On July 1, 2020 at 20:57:22 GMT+03:00, "Felix Kölzow" wrote: >Hey, what a
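A sketch of the fstab route, with device and mount point assumed; the x-systemd option makes the generated mount unit depend on the VDO service so the device exists before the brick is mounted:

```
# Brick on a VDO device: order the mount after vdo.service
echo '/dev/mapper/vdo_brick /bricks/brick1 xfs defaults,x-systemd.requires=vdo.service 0 0' >> /etc/fstab
```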

Re: [Gluster-users] Problems with qemu and disperse volumes (live merge)

2020-07-02 Thread Strahil Nikolov
so use it for analysis of the logs. Most probably the brick logs can provide useful information. >> Check the oVirt engine logs (on the HostedEngine VM or your standalone engine), the vdsm logs on the host that was running the VM, and next check the brick logs.

Re: [Gluster-users] "Mismatching layouts" in glusterfs client logs after new brick addition and rebalance

2020-07-02 Thread Strahil Nikolov
put - it won't be soon. Best Regards, Strahil Nikolov. On July 2, 2020 at 17:39:25 GMT+03:00, Shreyansh Shah wrote: >Hi All, *We are facing "Mismatching layouts for , gfid = " errors.* We have a distributed glusterfs 5.10, no replication, 2 bricks (4TB

Re: [Gluster-users] Geo-replication completely broken

2020-07-03 Thread Strahil Nikolov
Hi Felix, It seems I missed your reply with the changelog that Shwetha requested. Best Regards, Strahil Nikolov. On July 3, 2020 at 11:16:30 GMT+03:00, "Felix Kölzow" wrote: >Dear Users, the geo-replication is still broken. This is not really a comfortable situation.

Re: [Gluster-users] Restore a replica after failed hardware

2020-07-07 Thread Strahil Nikolov
and then create XFS with the necessary options to align it properly. I have never used xfsdump to recover a brick. Just ensure the gluster brick process is not running on the node during the restore. Best Regards, Strahil Nikolov. On July 6, 2020 at 23:32:28 GMT+03:00, Shanon Swafford wrote: >Hi
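A sketch of recreating the brick filesystem, with the device name assumed; -i size=512 is the long-standing Gluster recommendation so extended attributes fit in the inode:

```
mkfs.xfs -f -i size=512 /dev/vg_bricks/brick1
mount /dev/vg_bricks/brick1 /bricks/brick1
```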

Re: [Gluster-users] Problems with qemu and disperse volumes (live merge)

2020-07-08 Thread Strahil Nikolov
ts.c:1548:default_lookup_cbk] 0-stack-trace: stack-address: 0x7f0dc007c428, SSD_Storage-disperse-5 returned -1 error: Stale file handle [Stale file handle] [2020-07-07 21:23:06.839835] D [MSGID: 0] [dht-common.c:998:dht_discover_cbk] 0-SSD_Storage-dht: lookup of (null) on

Re: [Gluster-users] "Mismatching layouts" in glusterfs client logs after new brick addition and rebalance

2020-07-08 Thread Strahil Nikolov
At least for EL7, there are 2 modules for sosreport: gluster & gluster_block. Best Regards, Strahil Nikolov. On July 8, 2020 at 9:02:10 GMT+03:00, Artem Russakovskii wrote: >I think it'd be extremely helpful if gluster had a feature to grab all the necessary logs/debug

Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Strahil Nikolov
same mechanism mdadm is using -> it should be possible. Best Regards, Strahil Nikolov. On July 29, 2020 at 0:10:44 GMT+03:00, Darrell Budic wrote: >ZFS isn't that resource-intensive, although it does like RAM. But why not just add additional bricks? Gluster is kind of built to use

Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Strahil Nikolov
LVM allows creating/converting striped/mirrored LVs without any downtime, and it uses the md module. Best Regards, Strahil Nikolov. On July 28, 2020 at 22:43:39 GMT+03:00, Gilberto Nunes wrote: >Hi there, 'till now I have been using glusterfs over XFS, and so far so g
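A sketch of such an online conversion, with VG/LV and PV names assumed:

```
# Convert a linear LV into a 2-way mirror without unmounting it,
# placing the new leg on the named PV
lvconvert --type raid1 -m 1 vg_bricks/brick_lv /dev/sdd
```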

Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Strahil Nikolov
I guess there is no automatic hot-spare replacement in LVM, but mdadm has that functionality. Best Regards, Strahil Nikolov. On July 30, 2020 at 15:39:18 GMT+03:00, Gilberto Nunes wrote: >Doing some research on the chvg command, which is responsible for creating hotspare dis
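For comparison, the mdadm side looks roughly like this (device names assumed):

```
# A disk added to a healthy array becomes a hot-spare and is pulled in
# automatically when a member fails
mdadm --add /dev/md0 /dev/sde
mdadm --detail /dev/md0   # should list /dev/sde as "spare"
```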
