Re: [Gluster-users] java application crashes while reading a zip file

2019-01-21 Thread Amar Tumballi Suryanarayan
Dmitry, Thanks for the detailed updates on this thread. Let us know how your 'production' setup is running. To make the next upgrade much smoother, we request your help with some early testing of the glusterfs-6 RC builds, which are expected to be out by the first week of February. Also, if it is possible for you to

Re: [Gluster-users] 'dirfingerprint' to get glusterfs directory stats

2019-01-21 Thread Amar Tumballi Suryanarayan
Thanks for working on this, Manhong! We always like people trying out new things with glusterfs. It would be great if you could publish a blog post and let Amye know about it, so we can also publish the same, or link to it from our website. An added bonus if you can tweet about it with a mention of

Re: [Gluster-users] usage of harddisks: each hdd a brick? raid?

2019-01-21 Thread Amar Tumballi Suryanarayan
On Thu, Jan 10, 2019 at 1:56 PM Hu Bert wrote: > Hi, > > > > We are also using 10TB disks, heal takes 7-8 days. > > > You can play with the "cluster.shd-max-threads" setting. It is default 1 I > > > think. I am using it with 4. > > > Below you can find more info: > > >
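The heal-thread tuning mentioned in the snippet is a per-volume option; a minimal sketch, assuming a hypothetical volume named `gv0` and the default value of 1:

```shell
# Check the current self-heal daemon thread count (default is 1)
gluster volume get gv0 cluster.shd-max-threads

# Raise it to 4 so the self-heal daemon works with more parallelism
gluster volume set gv0 cluster.shd-max-threads 4
```

Higher values can shorten long heals of large disks, at the cost of extra CPU and disk I/O on the bricks.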

Re: [Gluster-users] Samba+Gluster: Performance measurements for small files

2019-01-21 Thread Amar Tumballi Suryanarayan
For the Samba use case, please make sure you have nl-cache (ie, 'negative-lookup cache') enabled. We have seen some improvements from this option. -Amar On Tue, Dec 18, 2018 at 8:23 PM David Spisla wrote: > Dear Gluster Community, > > it is a known fact that Samba+Gluster has a bad smallfile
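Enabling the negative-lookup cache suggested above is a one-line volume option; a sketch, again with a hypothetical volume `gv0`:

```shell
# Cache negative lookups (ENOENT results), which Samba issues heavily
# when Windows clients probe for files that do not exist
gluster volume set gv0 performance.nl-cache on

# Optionally keep cached negative entries longer (value in seconds)
gluster volume set gv0 performance.nl-cache-timeout 600
```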

Re: [Gluster-users] Glusterfs backup and restore

2019-01-21 Thread Amar Tumballi Suryanarayan
Kannan, Currently GlusterFS depends on a backend LVM snapshot for its snapshot feature, and LVM is not very friendly with migrating blocks. Currently the best way of sending the data out from a snapshot would be to do an 'xfsdump' from the snapshot and send it to another place. Maybe that would work faster.
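The snapshot-plus-xfsdump flow described above might look like the sketch below; the volume name, snapshot name, and backup path are hypothetical, and the snapshot brick mount point under /run/gluster/snaps/ varies by version and distribution, so check `gluster snapshot info` first:

```shell
# Take and activate an LVM-backed snapshot of the volume
gluster snapshot create backup-snap gv0 no-timestamp
gluster snapshot activate backup-snap

# Dump the snapshot brick's XFS filesystem (level 0) to a backup file;
# the brick path below is illustrative only
xfsdump -l 0 -f /backup/gv0.xfsdump /run/gluster/snaps/backup-snap/brick1

# Clean up once the dump has been shipped off-site
gluster snapshot deactivate backup-snap
gluster snapshot delete backup-snap
```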

Re: [Gluster-users] [Bugs] Bricks are going offline unable to recover with heal/start force commands

2019-01-21 Thread Amar Tumballi Suryanarayan
Hi Shaik, Can you check what is in the brick logs? They are located in /var/log/glusterfs/bricks/*. Looks like the samba hooks script failed, but that shouldn't matter in this use case. Also, I see that you are trying to set up heketi to provision volumes, which means you may be using gluster

Re: [Gluster-users] Increasing Bitrot speed glusterfs 4.1.6

2019-01-22 Thread Amar Tumballi Suryanarayan
On Tue, Jan 22, 2019 at 1:50 PM Amudhan P wrote: > > Bitrot feature in Glusterfs is production ready or is it in beta phase? > > We have not done extensive performance testing with BitRot, as it is known to consume resources, and depending on the resources (CPU/Memory) available, the speed would

Re: [Gluster-users] Gluster performance issues - need advise

2019-01-23 Thread Amar Tumballi Suryanarayan
I didn't understand the issue properly; most likely I missed something. Are you concerned that the performance is 49MB/s both with and without perf options? Or are you expecting it to be 123MB/s, as that is the speed you get over the n/w? If it is the first problem, then you are actually having

Re: [Gluster-users] writev: Transport endpoint is not connected

2019-01-23 Thread Amar Tumballi Suryanarayan
Hi Lindolfo, Can you now share the 'gluster volume info' from your setup? Please note some basic documentation on shard is available @ https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/shard/ -Amar On Wed, Jan 23, 2019 at 7:55 PM Lindolfo Meira wrote: > Does this

Re: [Gluster-users] Gluster 5.5 slower than 3.12.15

2019-04-03 Thread Amar Tumballi Suryanarayan
Strahil, With some basic testing, we are noticing similar behavior too. One of the issues we identified was increased n/w usage in the 5.x series (being addressed by https://review.gluster.org/#/c/glusterfs/+/22404/), and there are a few other features which write extended attributes, which caused

[Gluster-users] Proposal: Changes in Gluster Community meetings

2019-03-25 Thread Amar Tumballi Suryanarayan
All, We currently have 3 meetings which are public: 1. Maintainer's Meeting - Runs once every 2 weeks (on Mondays); current attendance is around 3-5 on average, and not much is discussed. - Without majority attendance, we can't make any decisions either. 2. Community meeting - Supposed to happen

[Gluster-users] GlusterFS v7.0 (and v8.0) roadmap discussion

2019-03-25 Thread Amar Tumballi Suryanarayan
Hello Gluster Members, We are now done with the glusterfs-6.0 release, and next up is glusterfs-7.0. But considering that 3-4 months is not enough time to complete the tasks for many 'initiatives', we would like to call for a road-map discussion meeting for calendar year 2019 (covers both

Re: [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-03-25 Thread Amar Tumballi Suryanarayan
ing 12 hours opposed to the one you’re proposing? > > -Darrell > > On Mar 25, 2019, at 4:25 AM, Amar Tumballi Suryanarayan < > atumb...@redhat.com> wrote: > > All, > > We currently have 3 meetings which are public: > > 1. Maintainer's Meeting > > -

[Gluster-users] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-02-25 Thread Amar Tumballi Suryanarayan
Hi all, We are calling on our users and developers to contribute to validating the ‘glusterfs-6.0rc’ build in their use case, especially for the cases of upgrade, stability, and performance. Some of the key highlights of the release are listed in the release-notes draft

Re: [Gluster-users] Version uplift query

2019-02-27 Thread Amar Tumballi Suryanarayan
GlusterD2 is not yet called out for standalone deployments. You can happily update to glusterfs-5.x (we recommend waiting for glusterfs-5.4, which is already tagged and waiting for packages to be built). Regards, Amar On Wed, Feb 27, 2019 at 4:46 PM ABHISHEK PALIWAL wrote: > Hi, > > Could

Re: [Gluster-users] GlusterFS - 6.0RC - Test days (27th, 28th Feb)

2019-03-04 Thread Amar Tumballi Suryanarayan
Thanks to those who participated. Update at present: We found 3 blocker bugs in upgrade scenarios, and hence have marked release as pending upon them. We will keep these lists updated about progress. -Amar On Mon, Feb 25, 2019 at 11:41 PM Amar Tumballi Suryanarayan < atumb...@redhat.com>

Re: [Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

2019-03-04 Thread Amar Tumballi Suryanarayan
What do the self-heal pending numbers show? On Mon, Mar 4, 2019 at 7:52 PM Hu Bert wrote: > Hi Alberto, > > wow, good hint! We switched from old servers with version 4.1.6 to new > servers (fresh install) with version 5.3 on february 5th. I saw that > there was more network traffic on server
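The pending-heal numbers asked about above can be read with the heal CLI; `gv0` is a placeholder volume name:

```shell
# Per-brick list of entries still needing heal
gluster volume heal gv0 info

# Just the counts, without listing every entry
gluster volume heal gv0 statistics heal-count
```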

Re: [Gluster-users] glusterfsd Ubuntu 18.04 high iowait issues

2019-02-20 Thread Amar Tumballi Suryanarayan
If you have both systems available for comparison, can you get the `gluster volume profile info` output from each? That helps a bit in understanding the issue. On Thu, Feb 21, 2019 at 8:20 AM Kartik Subbarao wrote: > We're running gluster on two hypervisors running Ubuntu. When we > upgraded from Ubuntu 14.04 to 18.04,
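Collecting the profile output requested above takes two steps, since profiling is off by default; `gv0` is a placeholder:

```shell
# Start collecting per-FOP latency and throughput counters
gluster volume profile gv0 start

# ...reproduce the high-iowait workload, then dump the counters
gluster volume profile gv0 info

# Stop profiling when done to avoid the (small) bookkeeping overhead
gluster volume profile gv0 stop
```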

Re: [Gluster-users] GlusterFS Scale

2019-02-20 Thread Amar Tumballi Suryanarayan
On Mon, Feb 18, 2019 at 11:23 PM Lindolfo Meira wrote: > We're running some benchmarks on a striped glusterfs volume. > > Hi Lindolfo, We are not supporting Stripe anymore, and planning to remove it from the build too by glusterfs-6.0 (ie, the next release). See if you can use 'Shard' for the use case.
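Moving from the removed Stripe translator to Shard, as suggested above, is a per-volume setting; a sketch with a hypothetical volume `gv0` (sharding only affects files created after it is enabled):

```shell
# Enable sharding so large files are split into fixed-size pieces
# spread across bricks (the usual Stripe replacement for large files)
gluster volume set gv0 features.shard on

# Optional: choose the shard size (default is 64MB)
gluster volume set gv0 features.shard-block-size 64MB
```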

Re: [Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-17 Thread Amar Tumballi Suryanarayan
On Fri, Mar 15, 2019 at 9:24 PM Taste-Of-IT wrote: > Hi, > > ok this seems to be a bug. I now update from 3.x to 4.x to 5.x Latest > Debian Releases. After each Upgrade i run remove-brick and the problem is > still the same. Gluster ignores storage.reserve Option. Positiv is that the >

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread Amar Tumballi Suryanarayan
more_data = 1 >> #22 0x3fffb7842ec0 in .xdr_gfx_dirplist () from >> /usr/lib64/libgfxdr.so.0 >> No symbol table info available. >> #23 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20, >> pp=0x3fffa81090f0, size=, proc=) at >> xdr_ref.c:84 >

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-13 Thread Amar Tumballi Suryanarayan
/5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1: >>> not a dynamic executable >>> pabhishe@arn-build3$ldd >>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1 >>> not a dynamic executable >>> >>> >>> For backtraces I ha

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-11 Thread Amar Tumballi Suryanarayan
Hi Abhishek, Can you check and get back to us?

```
bash# ldd /usr/lib64/libglusterfs.so
bash# ldd /usr/lib64/libgfrpc.so
```

Also, considering you have the core, can you do `(gdb) thr apply all bt full` and pass it on? Thanks & Regards, Amar On Mon, Mar 11, 2019 at 3:41 PM ABHISHEK PALIWAL

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-06 Thread Amar Tumballi Suryanarayan
We are talking days, not weeks. Considering it is already Thursday here, plus 1 more day for tagging and packaging, it may be reasonable to expect it on Monday. -Amar On Thu, Mar 7, 2019 at 11:54 AM Artem Russakovskii wrote: > Is the next release going to be an imminent hotfix, i.e. something like >

Re: [Gluster-users] Help analise statedumps

2019-03-19 Thread Amar Tumballi Suryanarayan
It is really good to hear the good news. The one thing we did in 5.4 (and which is present in 6.0 too) is implementing garbage-collection logic in the fuse module, which keeps a check on memory usage. Looks like the feature is working as expected. Regards, Amar On Wed, Mar 20, 2019 at 7:24 AM

Re: [Gluster-users] recovery from reboot time?

2019-03-19 Thread Amar Tumballi Suryanarayan
There are 2 things that happen after a reboot. 1. glusterd (the management layer) does a sanity check of its volumes, sees if anything changed while it was down, and tries to correct its state. - This is fine as long as the number of volumes is small, or the number of nodes is small. (less is

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-19 Thread Amar Tumballi Suryanarayan
you refer to this bug: > >>> https://bugzilla.redhat.com/show_bug.cgi?id=1674225 : in the test > >>> setup i haven't seen those entries, while copying & deleting a few GBs > >>> of data. For a final statement we have to wait until i updated ou

Re: [Gluster-users] Transport endpoint is not connected failures in 5.3 under high I/O load

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Brandon, There were a few concerns raised about 5.3 issues recently, and we fixed some of them and made 5.5 (in 5.4 we faced an upgrade issue, so 5.5 is the recommended upgrade version). Can you please upgrade to 5.5? -Amar On Mon, Mar 18, 2019 at 10:16 PM wrote: > Hello list, > > > >

Re: [Gluster-users] Constant fuse client crashes "fixed" by setting performance.write-behind: off. Any hope for a 4.1.8 release?

2019-03-18 Thread Amar Tumballi Suryanarayan
Due to this issue, along with a few other logging issues, we did make a glusterfs-5.5 release, which has the fix for this particular crash. Regards, Amar On Tue, 19 Mar, 2019, 1:04 AM , wrote: > Hello Ville-Pekka and list, > > > > I believe we are experiencing similar gluster fuse client crashes on

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Jim, On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney wrote: > > Issues with glusterfs fuse mounts cause issues with python file open for > write. We have to use nfs to avoid this. > > Really want to see better back-end tools to facilitate cleaning up of > glusterfs failures. If system is going to

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Amar Tumballi Suryanarayan
Hi Hans, Thanks for the honest feedback. Appreciate this. On Tue, Mar 19, 2019 at 5:39 PM Hans Henrik Happe wrote: > Hi, > > Looking into something else I fell over this proposal. Being a shop that > are going into "Leaving GlusterFS" mode, I thought I would give my two > cents. > > While

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-18 Thread Amar Tumballi Suryanarayan
archon...@gmail.com wrote: > Hi Amar, > > Any updates on this? I'm still not seeing it in OpenSUSE build > repos. Maybe later today?

[Gluster-users] Gluster Container Storage: Release Update

2019-02-13 Thread Amar Tumballi Suryanarayan
Hello everyone, We are announcing v1.0RC release of GlusterCS this week!** The version 1.0 is due along with *glusterfs-6.0* next month. Below are the Goals for v1.0: - RWX PVs - Scale and Performance - RWO PVs - Simple, leaner stack with Gluster’s Virtual Block. - Thin Arbiter (2

Re: [Gluster-users] gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC

2019-01-23 Thread Amar Tumballi Suryanarayan
On Thu, Jan 24, 2019 at 12:47 PM Hu Bert wrote: > Good morning, > > we currently transfer some data to a new glusterfs volume; to check > the throughput of the new volume/setup while the transfer is running i > decided to create some files on one of the gluster servers with dd in > loop: > >

Re: [Gluster-users] Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume

2019-01-23 Thread Amar Tumballi Suryanarayan
> From: Shaik Salam/HYD/TCS > To: "Amar Tumballi Suryanarayan" , b...@gluster.org, "gluster-users@gluster.org List" < gluster-users@gluster.org> > Cc: "Murali Kottakota" , "Sanju Rakonde" > Dat

Re: [Gluster-users] Can't write to volume using vim/nano

2019-01-23 Thread Amar Tumballi Suryanarayan
I suspect this is a bug in the 'Transport: rdma' part. We have called out that feature for de-scoping, as we are lacking experts in that domain right now. We recommend using the IPoIB option with the tcp/socket transport type (which is the default). That should fix most of the issues. -Amar On Thu,
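The IPoIB-plus-tcp recommendation above amounts to creating the volume with the default tcp transport and letting the IPoIB interface carry the traffic; the hostnames and brick paths below are hypothetical:

```shell
# Probe peers via their IPoIB addresses/hostnames, then create the
# volume with the default tcp transport instead of rdma
gluster peer probe server1-ib
gluster volume create gv0 replica 3 transport tcp \
    server1-ib:/bricks/b1 server2-ib:/bricks/b1 server3-ib:/bricks/b1
gluster volume start gv0
```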

Re: [Gluster-users] Failed to provision volume with StorageClass "glusterfs-storage": glusterfs: create volume err: error creating volume

2019-01-23 Thread Amar Tumballi Suryanarayan
On Thu, Jan 24, 2019 at 12:08 PM Shaik Salam wrote: > Hi All, > > Could you please help us to resolve issue (atleast workaround). > 429 volumes are not requested at all in cluster. I am trying to create > only one volume at a time. > > BR > Salam > > > > From

Re: [Gluster-users] Access to Servers hangs after stop one server...

2019-01-24 Thread Amar Tumballi Suryanarayan
Also note that this way of mounting with a 'static' volfile is not recommended, as you wouldn't get any features of gluster's Software Defined Storage behavior. This was an approach we used to have, say, 8 years ago. With the introduction of the management daemon called glusterd, the way of

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-01-31 Thread Amar Tumballi Suryanarayan
Hi Artem, Opened https://bugzilla.redhat.com/show_bug.cgi?id=1671603 (ie, as a clone of other bugs where recent discussions happened), and marked it as a blocker for the glusterfs-5.4 release. We already have fixes for the log flooding - https://review.gluster.org/22128 - and are in the process of

Re: [Gluster-users] Corrupted File readable via FUSE?

2019-02-04 Thread Amar Tumballi Suryanarayan
Hi David, I guess https://review.gluster.org/#/c/glusterfs/+/21996/ helps to fix the issue. I will leave it to Raghavendra Bhat to reconfirm. Regards, Amar On Fri, Feb 1, 2019 at 8:45 PM David Spisla wrote: > Hello Gluster Community, > I have got a 4 Node Cluster with a Replica 4 Volume, so

Re: [Gluster-users] SEGFAULT in FUSE layer

2019-04-11 Thread Amar Tumballi Suryanarayan
Thanks for the report Florian. We will look into this. On Wed, Apr 10, 2019 at 9:03 PM Florian Manschwetus < manschwe...@cs-software-gmbh.de> wrote: > Hi All, > > I’d like to bring this bug report, I just opened, to your attention. > > https://bugzilla.redhat.com/show_bug.cgi?id=1697971 > > > >

Re: [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-04-11 Thread Amar Tumballi Suryanarayan
carve out an AI to get them fixed. RoundTable: https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both#RoundTable Regards, Amar On Mon, Mar 25, 2019 at 8:53 PM Amar Tumballi Suryanarayan < atumb...@redhat.com> wrote: > Thanks for the feedback Darrell, > > The new proposal is to hav

Re: [Gluster-users] XFS, WORM and the Year-2038 Problem

2019-04-15 Thread Amar Tumballi Suryanarayan
On Mon, Apr 15, 2019 at 2:40 PM David Spisla wrote: > Hi folks, > I tried out default retention periods e.g. to set the Retention date to > 2071. When I did the WORMing, everything seems to be OK. From FUSE and also > at Brick-Level, the retention was set to 2071 on all nodes. Additionally I >

[Gluster-users] Update: GlusterFS code coverage

2019-06-05 Thread Amar Tumballi Suryanarayan
All, I just wanted to update everyone about one of the initiatives we have undertaken, ie, increasing the overall code coverage of GlusterFS above 70%. You can have a look at current code coverage here: https://build.gluster.org/job/line-coverage/lastCompletedBuild/Line_20Coverage_20Report/ (This

Re: [Gluster-users] GlusterFS on ZFS

2019-05-01 Thread Amar Tumballi Suryanarayan
On Tue, Apr 23, 2019 at 11:38 PM Cody Hill wrote: > > Thanks for the info Karli, > > I wasn’t aware ZFS Dedup was such a dog. I guess I’ll leave that off. My > data get’s 3.5:1 savings on compression alone. I was aware of stripped > sets. I will be doing 6x Striped sets across 12x disks. > > On

Re: [Gluster-users] Gluster 5 Geo-replication Guide

2019-05-01 Thread Amar Tumballi Suryanarayan
On Fri, Apr 26, 2019 at 7:00 PM Shon Stephens wrote: > Dear All, > Is there a good, step by step guide for setting up geo-replication > with Glusterfs 5? The docs are a difficult to decipher read, for me, and > seem more feature guide than actual instruction. > > Geo-Replication steps in

Re: [Gluster-users] gluster mountbroker failed after upgrade to gluster 6

2019-05-01 Thread Amar Tumballi Suryanarayan
Few questions inline. On Fri, Apr 12, 2019 at 1:09 PM Benedikt Kaleß wrote: > Hi, > > I updated gluster to gluster 6 and now the geo-replication remains > in status "Faulty". From which version did you upgrade? And what does the volume info look like? (Helps us to understand if this

Re: [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-05-01 Thread Amar Tumballi Suryanarayan
l > > On Apr 22, 2019, at 4:20 PM, FNU Raghavendra Manjunath > wrote: > > > Hi, > > This is the agenda for tomorrow's community meeting for NA/EMEA timezone. > > https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both > > > > > On Thu, Apr 11, 2019 at 4:56 AM

Re: [Gluster-users] Hard Failover with Samba and Glusterfs

2019-05-01 Thread Amar Tumballi Suryanarayan
On Wed, Apr 17, 2019 at 1:33 PM David Spisla wrote: > Dear Gluster Community, > > I have this setup: 4-Node Glusterfs v5.5 Cluster, using SAMBA/CTDB v4.8 to > access the volumes (each node has a VIP) > > I was testing this failover scenario: > > 1. Start Writing 940 GB with small files

Re: [Gluster-users] performance - what can I expect

2019-05-01 Thread Amar Tumballi Suryanarayan
Hi Pascal, Sorry for the long delay on this one. And thanks for testing out different scenarios. A few questions before others can have a look and advise you: 1. What is the volume info output? 2. Do you see any concerning logs in the glusterfs log files? 3. Please use `gluster volume profile`

Re: [Gluster-users] adding thin arbiter

2019-05-01 Thread Amar Tumballi Suryanarayan
On Mon, Apr 22, 2019 at 3:12 PM Karthik Subrahmanya wrote: > Hi, > > Currently we do not have support for converting an existing volume to a > thin-arbiter volume. It is also not supported to replace the thin-arbiter > brick with a new one. > You can create a fresh thin arbiter volume using GD2

Re: [Gluster-users] parallel-readdir prevents directories and files listing - Bug 1670382

2019-05-01 Thread Amar Tumballi Suryanarayan
On Mon, Apr 29, 2019 at 3:56 PM João Baúto < joao.ba...@neuro.fchampalimaud.org> wrote: > Hi, > > I have an 8 brick distributed volume where Windows and Linux clients mount > the volume via samba and headless compute servers using gluster native > fuse. With parallel-readdir on, if a Windows

Re: [Gluster-users] Community Happy Hour at Red Hat Summit

2019-05-01 Thread Amar Tumballi Suryanarayan
On Mon, Apr 22, 2019 at 8:14 PM Amye Scavarda wrote: > The Ceph and Gluster teams are joining forces to put on a Community > Happy Hour in Boston on Tuesday, May 7th as part of Red Hat Summit. > > I will be there at Gluster Booth in Red Hat Summit. If you, or your colleagues/friends are

Re: [Gluster-users] performance - what can I expect

2019-05-02 Thread Amar Tumballi Suryanarayan
ntly they are not auto-tuned). * If one has good n/w and disk speed, even back end filesystem configuration (because of the layout we have with gfid etc) too matter a bit. Best thing is to understand the workload first, and then tuning for it (at present). cheers > > Pascal > On 0

Re: [Gluster-users] gluster-block v0.4 is alive!

2019-05-06 Thread Amar Tumballi Suryanarayan
On Thu, May 2, 2019 at 1:35 PM Prasanna Kalever wrote: > Hello Gluster folks, > > Gluster-block team is happy to announce the v0.4 release [1]. > > This is the new stable version of gluster-block, lots of new and > exciting features and interesting bug fixes are made available as part > of this

Re: [Gluster-users] [External] Re: anyone using gluster-block?

2019-05-06 Thread Amar Tumballi Suryanarayan
Davide, With release 0.4, gluster-block now has more functionality, and we made many stability fixes. Feel free to try it out, and let us know how it goes. -Amar On Fri, Nov 9, 2018 at 3:36 AM Davide Obbi wrote: > Hi Vijay, > > The Volume has been created using heketi-cli blockvolume

[Gluster-users] [Announcement] Gluster Community Update

2019-07-09 Thread Amar Tumballi Suryanarayan
Hello Gluster community, Today marks a new day in the 26-year history of Red Hat. IBM has finalized its acquisition of Red Hat, which will operate as a distinct

Re: [Gluster-users] What is TCP/4007 for ?

2019-07-03 Thread Amar Tumballi Suryanarayan
I am not aware of any usage of port 4007 in Gluster, nor in any dependent projects. -Amar On Wed, Jul 3, 2019 at 9:44 PM wrote: > Does anybody have info about this port, is it mandatory, is there any way > to disable it if so ? > > - Mail original - > De: n...@furyweb.fr > À:

[Gluster-users] One more way to contact Gluster team - Slack (gluster.slack.com)

2019-04-26 Thread Amar Tumballi Suryanarayan
Hi All, We have wanted to move from IRC to Slack for our official communication channel for some time, but couldn't, as we didn't have a proper URL to register. 'gluster' was taken and we didn't know who had registered it. Thanks to constant asking from Satish, the Slack team has now agreed to let us

Re: [Gluster-users] [Gluster-devel] One more way to contact Gluster team - Slack (gluster.slack.com)

2019-04-26 Thread Amar Tumballi Suryanarayan
cture needs, and has valid concerns from his perspective :-) I, on the other hand, bother more about code, users, and how to make sure we are up-to-date with other technologies and communities, from the engineering view point. > On Fri, Apr 26, 2019, 3:16 AM Michael Scherer wrote: >&g

Re: [Gluster-users] [ovirt-users] Update 4.2.8 --> 4.3.5

2019-07-16 Thread Amar Tumballi Suryanarayan
There were some issues with gluster v3.12.x to v5.x, but we haven't heard of any major problems with v3.12.x to v6.x, other than https://bugzilla.redhat.com/show_bug.cgi?id=1727682, which also seems to be resolved now. Regards, Amar On Thu, Jul 11, 2019 at 11:16 PM Strahil wrote: >