Re: [Gluster-users] Client Handling of Elastic Clusters

2019-11-24 Thread Vlad Kopylov
Something like this? http://hrkscribbles.blogspot.com/2018/11/elastic-search-on-gluster.html I was never able to make it work on gluster; I just have it set up so Kibana syncs shards. v On Wed, Oct 16, 2019 at 4:06 PM Timothy Orme wrote: > I did explore CEPH a bit, and that might be an option as

Re: [Gluster-users] how to downgrade GlusterFS from version 7 to 3.13?

2019-11-24 Thread Vlad Kopylov
Just delete all gluster stuff, including volumes, configs, and the .glusterfs folder on the bricks. Point new 3.13 volumes at the same brick locations and run stat on each file. https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html v On Wed, Nov 6, 2019 at 3:35 AM Riccardo Murri wrote: >
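Roughly, the drill looks like this - a sketch only; volume name gv0, brick path /data/brick1, hosts vm1-vm3 and mount point /mnt/gv0 are placeholders, not from the original thread:
  # on every node: remove the old volume and its gluster metadata
  gluster volume stop gv0
  gluster volume delete gv0
  rm -rf /var/lib/glusterd/vols/gv0   # or wipe /var/lib/glusterd entirely and re-probe peers
  rm -rf /data/brick1/.glusterfs
  setfattr -x trusted.glusterfs.volume-id /data/brick1
  setfattr -x trusted.gfid /data/brick1
  # downgrade the packages, then recreate the volume on the same brick paths
  gluster volume create gv0 replica 3 vm1:/data/brick1 vm2:/data/brick1 vm3:/data/brick1
  gluster volume start gv0
  # stat every file through the mount so .glusterfs and the gfid xattrs get rebuilt
  mount -t glusterfs vm1:/gv0 /mnt/gv0
  cd /data/brick1 && sudo find . -path ./.glusterfs -prune -o -exec stat '/mnt/gv0/{}' \;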

Re: [Gluster-users] gluster-block v0.4 is alive!

2019-05-20 Thread Vlad Kopylov
issue you have noticed: > https://github.com/gluster/gluster-block/pull/233 > > Thanks! > -- > Prasanna > > On Sat, May 18, 2019 at 4:48 AM Vlad Kopylov wrote: > > > > > > straight from > > > > ./autogen.sh && ./configure && make -j in

Re: [Gluster-users] gluster-block v0.4 is alive!

2019-05-17 Thread Vlad Kopylov
straight from ./autogen.sh && ./configure && make -j install CentOS Linux release 7.6.1810 (Core) May 17 19:13:18 vm2 gluster-blockd[24294]: Error opening log file: No such file or directory May 17 19:13:18 vm2 gluster-blockd[24294]: Logging to stderr. May 17 19:13:18 vm2

Re: [Gluster-users] Prioritise local bricks for IO?

2019-03-26 Thread Vlad Kopylov
I don't remember if NUFA still works: https://github.com/gluster/glusterfs-specs/blob/master/done/Features/nufa.md v On Tue, Mar 26, 2019 at 7:27 AM Nux! wrote: > Hello, > > I'm trying to set up a distributed backup storage (no replicas), but I'd > like to prioritise the local bricks for
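If it is still supported in your release, NUFA is a per-volume switch, something along the lines of (volume name is a placeholder):
  gluster volume set backupvol cluster.nufa enable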

Re: [Gluster-users] Elasticsearch on gluster

2018-11-13 Thread Vlad Kopylov
Such an approach never worked for me. If nodes fail and restart, you need to do everything by hand. My suggestion is to leave index replication to Elasticsearch and Kibana, even though they would then be replicating the index on top of an already replicated gluster file system.

Re: [Gluster-users] latency limits of glusterfs for replicated mod

2018-11-13 Thread Vlad Kopylov
Works with the same latency for me on 3.12.15. The only problem is that reads and writes get slow. If you host VMs, have the hypervisor use gfapi: writes will be acceptable, but reads will wander through the nodes, obviously slowing things down. Going from 0.3 ms to 13 ms, times 2, is around 100 times slower. There is select local and

Re: [Gluster-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Vlad Kopylov
The good thing about gluster is that you have files as files. Whatever happens, good old file access is still there - if you need a backup, or to rebuild volumes, every replica brick has your files. Contrary to object (blue..something) storage with separate metadata, where, if the metadata gets lost/mixed up, you will be

Re: [Gluster-users] Should I be using gluster 3 or gluster 4?

2018-11-06 Thread Vlad Kopylov
to test 3.13.2 tiering feature, but have my thoughts about if >> 3.12.6 or 4.1.5 should be tested instead. >> >> Regards, >> Jeevan. >> >> >> On Nov 4, 2018 5:11 AM, "Vlad Kopylov" wrote: >> >> If you doing replica - start with 5, ru

Re: [Gluster-users] Should I be using gluster 3 or gluster 4?

2018-10-30 Thread Vlad Kopylov
3.12.14 is working fine in production for file access; you can find vol and mount settings in the mailing list archive. On Tue, Oct 30, 2018 at 11:05 AM Jeevan Patnaik wrote: > Hi All, > > I see gluster 3 has reached end of life and gluster 5 has just been > introduced. > > Is gluster 4.1.5 stable

Re: [Gluster-users] Filesystem problem

2018-10-30 Thread Vlad Kopylov
A simple working solution for such cases is rebuilding the volume along with .glusterfs: recover the dead node, create fresh bricks, copy the files there, and then stat/attr them to regenerate .glusterfs - something like what is mentioned here https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html On Mon, Oct

Re: [Gluster-users] How to use system.affinity/distributed.migrate-data on distributed/replicated volume?

2018-10-30 Thread Vlad Kopylov
NUFA helps you write to the local brick; if replication is involved it will still copy the file to the other bricks (or is supposed to). What might be happening is that when the initial file was created the other nodes were down, so it didn't replicate properly and now heal is failing. Check your gluster vol heal

Re: [Gluster-users] client glusterfs connection problem

2018-10-30 Thread Vlad Kopylov
Try adding routes so it can connect to all of them. Also curious whether a fuse mount really needs access to all nodes. Supposedly it writes to all of them at the same time unless you have the halo feature enabled. v On Sun, Oct 28, 2018 at 1:07 AM Oğuz Yarımtepe wrote: > My two nodes are at another vlan. Should my

Re: [Gluster-users] Gluster client

2018-10-18 Thread Vlad Kopylov
The maximum number of connection attempts to the server. On Wed, Oct 17, 2018 at 11:30 AM Alfredo De Luca wrote: > What does fetch-attempts=5 do? > > On Wed, Oct 17, 2018 at 12:05 AM Vlad Kopylov wrote: > >> You can add fetch-attempts=5 to fstab, so it will try to connect more, >>

Re: [Gluster-users] architecture suggestions

2018-10-18 Thread Vlad Kopylov
e: > Any idea about the Halo configuration? Didn't find any documentation about > it. > > On Wed, Oct 17, 2018 at 1:10 AM Vlad Kopylov wrote: > >> if you going for redundancy go for 3 full nodes, arbiter setup seen bugs >> doubt anything good will come out of u

Re: [Gluster-users] architecture suggestions

2018-10-16 Thread Vlad Kopylov
If you're going for redundancy, go for 3 full nodes; the arbiter setup has seen bugs. I doubt anything good will come out of using NFS. If you're doing websites, use VMs, since hypervisors use libgfapi - or implement libgfapi in your app directly. v On Sun, Oct 14, 2018 at 2:13 PM Oğuz Yarımtepe wrote: > Hi, >

Re: [Gluster-users] Gluster client

2018-10-16 Thread Vlad Kopylov
You can add fetch-attempts=5 to fstab so it will retry the connection more times; I never had an issue after this. The problem might be that it connects to another server rather than the local one and starts pushing all reads through the network - so close the client ports on the other nodes, leaving only the local one open. v On Tue, Oct 16,

Re: [Gluster-users] "Solving" a recurrent "performing entry selfheal on [...]" on my bricks

2018-10-12 Thread Vlad Kopylov
. > > Hoggins! > > Le 10/10/2018 à 07:05, Vlad Kopylov a écrit : > > isn't it trying to heal your dovecot-uidlist? try updating, restarting > > and initiating heal again > > > > -v > > > > On Sun, Oct 7, 2018 at 12:54 PM Hoggins! > <mailto:fu

Re: [Gluster-users] "Solving" a recurrent "performing entry selfheal on [...]" on my bricks

2018-10-09 Thread Vlad Kopylov
isn't it trying to heal your dovecot-uidlist? try updating, restarting and initiating heal again -v On Sun, Oct 7, 2018 at 12:54 PM Hoggins! wrote: > Hello list, > > My Gluster cluster has a condition, I'd like to know how to cure it. > > The setup: two bricks, replicated, with an arbiter. >

Re: [Gluster-users] Quick and small file read/write optimization

2018-10-09 Thread Vlad Kopylov
It also matters how you mount it: glusterfs defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5 0 0 Options Reconfigured: performance.io-thread-count: 8 server.allow-insecure: on cluster.shd-max-threads: 12
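As a complete fstab line, those mount options would look roughly like this (server name vm1, volume gv0 and mount point /mnt/gv0 are placeholders):
  vm1:/gv0 /mnt/gv0 glusterfs defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5 0 0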

Re: [Gluster-users] Poor performance compared to Netapp NAS with small files

2018-09-23 Thread Vlad Kopylov
Forgot the mount options for small files: defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5 On Sat, Sep 22, 2018 at 10:14 PM, Vlad Kopylov wrote: > Here is what I have for small files. I don't think you really need much > f

Re: [Gluster-users] Data on gluster volume gone

2018-09-22 Thread Vlad Kopylov
Check that you have files on the bricks, then run attr as described here while recreating the volume - or maybe even recreate the volume to be on the safe side: https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html -v On Thu, Sep 20, 2018 at 4:37 AM, Pranith Kumar Karampuri <

Re: [Gluster-users] Poor performance compared to Netapp NAS with small files

2018-09-22 Thread Vlad Kopylov
Here is what I have for small files. I don't think you really need much for git Options Reconfigured: performance.io-thread-count: 8 server.allow-insecure: on cluster.shd-max-threads: 12 performance.rda-cache-limit: 128MB cluster.readdir-optimize: on cluster.read-hash-mode: 0
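Applied with gluster volume set, the options listed above would look roughly like this (gv0 is a placeholder volume name):
  gluster volume set gv0 performance.io-thread-count 8
  gluster volume set gv0 server.allow-insecure on
  gluster volume set gv0 cluster.shd-max-threads 12
  gluster volume set gv0 performance.rda-cache-limit 128MB
  gluster volume set gv0 cluster.readdir-optimize on
  gluster volume set gv0 cluster.read-hash-mode 0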

Re: [Gluster-users] systemd-script

2018-07-26 Thread Vlad Kopylov
or maybe something like fetch-attempts=5 vm1:/v /var/volumes/vol-1 glusterfs defaults,_netdev,negative-timeout=10,attribute-timeout=30,fopen-keep-cache,direct-io-mode=enable,fetch-attempts=5 0 0 On Thu, Jul 26, 2018 at 6:36 AM, Anoop C S wrote: > On Thu, 2018-07-26 at 12:23 +0200, Stefan

Re: [Gluster-users] trying to figure out the best solution for vm and email storage

2018-07-25 Thread Vlad Kopylov
Just create one replica 3 volume with 1 brick on each of the 3 storage servers. RAID5 on the servers will be more than enough - it is already replica 3. Use oVirt to mount glusterfs to the VM from the hosts (as it uses libgfapi) rather than fuse mounting from the VM itself. libgfapi is supposedly faster. Might depend
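A rough sketch of that layout (hostnames stor1-stor3 and the brick path are placeholders):
  gluster volume create gv0 replica 3 stor1:/data/brick1/gv0 stor2:/data/brick1/gv0 stor3:/data/brick1/gv0
  gluster volume start gv0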

Re: [Gluster-users] delettion of files in gluster directories

2018-07-04 Thread Vlad Kopylov
If you delete those from the bricks it will start healing them - restoring them from the other bricks. I have a similar issue with email storage, which uses the maildir format with millions of small files; doing the delete on the server takes days. Sometimes it is worth recreating the volumes, wiping .glusterfs on bricks,

Re: [Gluster-users] Files not healing & missing their extended attributes - Help!

2018-07-04 Thread Vlad Kopylov
You'll need to query the attrs of those files for them to be updated in .glusterfs. Regarding wiping .glusterfs - I've done it half a dozen times on live data: it is a simple drill which fixes almost everything. Often you don't have time to ask around etc.; you just need it working ASAP, so you delete

Re: [Gluster-users] Files not healing & missing their extended attributes - Help!

2018-07-03 Thread Vlad Kopylov
Might be too late, but a simple, always-working solution for such cases is rebuilding .glusterfs: kill it and query the attrs for all files again, and it will recreate .glusterfs on all bricks - something like what is mentioned here https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html On
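As a sketch, with placeholder paths (brick at /data/brick1, volume mounted at /mnt/gv0):
  # on each brick, remove the broken .glusterfs tree and restart the daemon
  rm -rf /data/brick1/.glusterfs
  systemctl restart glusterd
  # stat every file through the mount point; gluster recreates the
  # .glusterfs entries and gfid xattrs as the files are looked up
  cd /data/brick1 && sudo find . -path ./.glusterfs -prune -o -exec stat '/mnt/gv0/{}' \;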

Re: [Gluster-users] Replicated volume read request are served by remote brick

2018-04-18 Thread Vlad Kopylov
I was trying to use http://lists.gluster.org/pipermail/gluster-users/2015-June/022322.html as an example and it never worked. Neither did gluster volume set cluster.nufa enable, with cluster.choose-local: on and cluster.nufa: on. It still reads data from network bricks. Was thinking to block

Re: [Gluster-users] Unreasonably poor performance of replicated volumes

2018-04-12 Thread Vlad Kopylov
Guess you went through the user lists and tried something like this already: http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html I have the same exact setup, and below is as far as it went after months of trial and error. We all have somewhat the same setup and the same issue with this - you

Re: [Gluster-users] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs

2018-04-10 Thread Vlad Kopylov
Founder, Android Police <http://www.androidpolice.com>, APK Mirror > <http://www.apkmirror.com/>, Illogical Robot LLC > beerpla.net | +ArtemRussakovskii > <https://plus.google.com/+ArtemRussakovskii> | @ArtemR > <http://twitter.com/ArtemR> > > On Mon, Apr 9

Re: [Gluster-users] performance.cache-size for high-RAM clients/servers, other tweaks for performance, and improvements to Gluster docs

2018-04-09 Thread Vlad Kopylov
You definitely need mount options in /etc/fstab; use the ones from here: http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html I went on with using local mounts to achieve performance as well. Also, the 3.12 or 3.10 branches would be preferable for production. On Fri, Apr 6, 2018 at 4:12

Re: [Gluster-users] Invisible files and directories

2018-04-03 Thread Vlad Kopylov
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html On Tue, Apr 3, 2018 at 6:43 PM, Serg Gulko wrote: > Hello! > > We are running distributed volume that contains 7 bricks. > Volume is mounted using native fuse client. > > After an unexpected system

Re: [Gluster-users] cluster.readdir-optimize and disappearing files/dirs bug

2018-04-03 Thread Vlad Kopylov
I was able to get out of that issue with listings by recreating everything on 3.12.X. Or maybe it was some setting conflict, as I ran tests of each setting for performance and kept only those that gave better results. Ended up with the settings below for the distributed mail server using maildir

Re: [Gluster-users] Announcing Gluster release 4.0.1 (Short Term Maintenance)

2018-03-28 Thread Vlad Kopylov
I think we are missing 3.12.7 in CentOS releases. On Wed, Mar 28, 2018 at 4:47 AM, Niels de Vos wrote: > On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote: >> Hi, >> >> Thanks, yes, not very familiar with Centos and hence googling took a while >> to find a 4.0 version at,

Re: [Gluster-users] [ovirt-users] GlusterFS performance with only one drive per host?

2018-03-22 Thread Vlad Kopylov
The bottleneck is definitely not the disk speed with GlusterFS; there is no point in using SSDs for bricks whatsoever. -v On Thu, Mar 22, 2018 at 6:01 AM, Sahina Bose wrote: > > > On Mon, Mar 19, 2018 at 5:57 PM, Jayme wrote: >> >> I'm spec'ing a new oVirt build

Re: [Gluster-users] Write speed of gluster volume reduced

2018-03-08 Thread Vlad Kopylov
http://lists.gluster.org/pipermail/gluster-users/2017-July/031788.html http://lists.gluster.org/pipermail/gluster-users/2017-September/032385.html also try disperse.eager-lock off On Tue, Mar 6, 2018 at 7:40 AM, Sherin George wrote: > Hi Guys, > > I have a gluster volume
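That toggle is simply (volume name is a placeholder):
  gluster volume set gv0 disperse.eager-lock off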

Re: [Gluster-users] new Gluster cluster: 3.10 vs 3.12

2018-02-26 Thread Vlad Kopylov
Thanks! On Mon, Feb 26, 2018 at 4:26 PM, Ingard Mevåg wrote: > After discussing with Xavi in #gluster-dev we found out that we could > eliminate the slow lstats by disabling disperse.eager-lock. > There is an open issue here : >

Re: [Gluster-users] Re-adding an existing brick to a volume

2018-02-25 Thread Vlad Kopylov
eed > to be synced, and the heal operation finishes much faster. > Do I have this right? > > Kind regards, > Mitja > > > On 25/02/2018 17:02, Vlad Kopylov wrote: >> >> .gluster and attr already in that folder so it would not connect it as a >> brick >> I don'

Re: [Gluster-users] Re-adding an existing brick to a volume

2018-02-25 Thread Vlad Kopylov
.glusterfs and the attrs are already in that folder, so it would not connect it as a brick. I don't think there is an option to "reconnect a brick back". What I did many times: delete .glusterfs and reset the attrs on the folder, connect the brick, and then regenerate those attrs with stat commands - example here
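Roughly, with placeholder paths (brick at /data/brick1, volume mounted at /mnt/gv0) - a sketch, not the exact commands from the thread:
  # clear the stale gluster metadata from the old brick directory
  rm -rf /data/brick1/.glusterfs
  setfattr -x trusted.glusterfs.volume-id /data/brick1
  setfattr -x trusted.gfid /data/brick1
  # reconnect the brick (add-brick or replace-brick, depending on the layout),
  # then stat every file through the mount so the xattrs and .glusterfs
  # entries are regenerated
  cd /data/brick1 && sudo find . -path ./.glusterfs -prune -o -exec stat '/mnt/gv0/{}' \;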

Re: [Gluster-users] new Gluster cluster: 3.10 vs 3.12

2018-02-23 Thread Vlad Kopylov
http://lists.gluster.org/pipermail/gluster-users/2018-February/033532.html On Fri, Feb 23, 2018 at 10:38 AM, Anatoliy Dmytriyev wrote: > Hi > > We are planning to install new gluster cluster and I am wondering which > gluster LTM version will you suggest to install: 3.10 or

Re: [Gluster-users] Very slow rsync to gluster volume UNLESS `ls` or `find` scan dir on gluster volume first

2018-02-04 Thread Vlad Kopylov
Are you mounting it on the local bricks? I'm struggling with the same performance issues; try using this volume setting http://lists.gluster.org/pipermail/gluster-users/2018-January/033397.html performance.stat-prefetch: on. It might be that once it gets into the cache it is fast - those stat fetches which seem

Re: [Gluster-users] Tiered volume performance degrades badly after a volume stop/start or system restart.

2018-01-30 Thread Vlad Kopylov
Tested it in two different environments lately with exactly the same results. I was trying to get better read performance from local mounts with hundreds of thousands of maildir email files by using an SSD, hoping that the .glusterfs file stat reads would improve, since those do migrate to the hot tier. After seeing what

Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4

2018-01-26 Thread Vlad Kopylov
l-readdir and readdir-ahead options: > > $ gluster volume set homes group metadata-cache > > I'm hoping Atin or Poornima can shed some light and squash this bug. > > [0] > https://github.com/gluster/glusterfs/blob/release-3.11/doc/release-notes/3.11.0.md > > Regards, &

Re: [Gluster-users] Replacing a third data node with an arbiter one

2018-01-25 Thread Vlad Kopylov
Having "expanding volume corruption" issue fixed only in 3.13 brunch you better off recreating the thing use the trick mentioned here http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html kill volume, reset attributes, delete .glusterfs, add new and run stat seems that

Re: [Gluster-users] It necessary make backup the .glusterfs directory ?

2018-01-25 Thread Vlad Kopylov
In my experience .glusterfs is easily recoverable by going to the brick path (if you have files there) and running stat on each object, but through the mount point - something like: cd BRICKPATH; sudo find . -path ./.glusterfs -prune -o -exec stat 'MOUNTPATH/{}' \; for example, if you need to recreate
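For example, with a brick at /data/brick1 and the volume mounted at /mnt/gv0 (placeholder paths), that becomes:
  cd /data/brick1
  sudo find . -path ./.glusterfs -prune -o -exec stat '/mnt/gv0/{}' \;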

Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4

2018-01-25 Thread Vlad Kopylov
Can you please test whether parallel-readdir or readdir-ahead gives the disconnects, so we know which one to disable? parallel-readdir does magic, according to the PDF from last year: https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf -v On Thu, Jan 25, 2018 at 8:20 AM, Alan
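The two can be toggled independently to narrow it down, e.g. (volume name is a placeholder):
  gluster volume set gv0 performance.parallel-readdir off
  gluster volume set gv0 performance.readdir-ahead off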

Re: [Gluster-users] parallel-readdir is not recognized in GlusterFS 3.12.4

2018-01-25 Thread Vlad Kopylov
Same here, even after update to 3.12.5-2 [2018-01-26 02:48:58.113996] W [MSGID: 101174] [graph.c:363:_log_if_unknown_option] 0-q-readdir-ahead-0: option 'parallel-readdir' is not recognized -v On Tue, Jan 23, 2018 at 12:09 PM, Alan Orth wrote: > Hello, > > I saw that

Re: [Gluster-users] Documentation on readdir performance

2018-01-19 Thread Vlad Kopylov
It would all be fine if any of this worked anywhere other than in the PDF; see the readdir-ahead issues http://lists.gluster.org/pipermail/gluster-users/2018-January/033170.html http://lists.gluster.org/pipermail/gluster-users/2018-January/033179.html

[Gluster-users] no luck making read local ONLY local work

2018-01-18 Thread Vlad Kopylov
Standard fuse mount - the mount and the brick are on the same server. 3.13 branch. Volume created and fstab-mounted via server names from the hosts file (vm1 vm2 vm3 ...) pointing to the interface IPs. Trying: cluster.nufa on, cluster.choose-local on, cluster.read-hash-mode: 0, cluster.choose-local on (by default)
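For reference, the settings above were applied along these lines (volume name gv0 is a placeholder):
  gluster volume set gv0 cluster.nufa on
  gluster volume set gv0 cluster.choose-local on
  gluster volume set gv0 cluster.read-hash-mode 0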

[Gluster-users] fail-over network for nodes

2018-01-17 Thread Vlad Kopylov
What is the best way to make the cluster switch to a different route/network if the primary network is down? Let's say there is a separate Gluster network for data exchange; if it is not available, the cluster should utilize the production network until the issue is resolved. Separate interfaces for different

[Gluster-users] performance.readdir-ahead on volume folders not showing with ls command 3.13.1-1.el7

2018-01-06 Thread Vlad Kopylov
Guess it is the same as [Gluster-users] A Problem of readdir-optimize http://lists.gluster.org/pipermail/gluster-users/2018-January/033170.html but on 3.13

[Gluster-users] performance.readdir-ahead on volume folders not showing with ls command 3.13.1-1.el7

2018-01-06 Thread Vlad Kopylov
With performance.readdir-ahead on on the volume, folders on the mounts become invisible to the ls command, though it shows files fine; ls on the bricks shows the folders fine as well. What am I missing? Maybe some settings are incompatible - guess over-tuning happened. vm1:/t1 /home/t1 glusterfs
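If over-tuning is the suspect, the usual first step is to turn the option back off and remount - roughly, using the volume name t1 from the fstab line above:
  gluster volume set t1 performance.readdir-ahead off
  umount /home/t1 && mount /home/t1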