Re: [Gluster-devel] Dead translators

2016-11-17 Thread Niels de Vos
On Thu, Nov 17, 2016 at 12:21:52PM -0500, Jeff Darcy wrote:
> As the first part of the general cleanup and technical-debt-reduction
> process, I'd like to start nuking some of the unused translators.  If
> any of the following are still useful and not broken, please speak up.
> They'll always be in our git history, but there seems to be little
> reason to keep building them or risk confusing future developers by
> having them around.
> 
>   cluster/ha
>   cluster/map
>   features/filter
>   features/mac-compat
>   features/path-convertor
>   features/protect

I'm not aware of any users. +1 from me on removing them.

It may be nice to keep a list of xlators (and other pieces of code) that
have been removed, so that when someone comes along later and wants to
resurrect or add similar functionality, it is easy to find, ideally with
the reasoning for its removal.

Thanks,
Niels



Re: [Gluster-devel] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-17 Thread Kaushal M
On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M  wrote:
> IMPORTANT: Until this is fixed, please stop merging changes into release-3.7
>
> I made a mistake.
>
> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
> not correct, so I corrected them with a new commit, c11131f, directly on
> top of my local release-3.7 branch (I'm sorry I didn't use Gerrit), and
> tagged this commit as v3.7.17.
>
> Unfortunately, when pushing I only pushed the tags and didn't push my
> updated branch to release-3.7. Because of this I inadvertently created
> a new (virtual) branch. Any new changes merged into release-3.7 since
> then have landed on top of 8b95eba, which was the HEAD of release-3.7
> when I made the mistake, so v3.7.17 now exists as a virtual branch.
>
> The current branching for release-3.7 and v3.7.17 looks like this:
>
>   | release-3.7 CURRENT HEAD
>   |
>   | new commits
>   |              c11131f (tag: v3.7.17)
>   |             /
>   8b95eba------/
>   |
>   | old commits
>
> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
> push this as the new release-3.7.
>
>             Merge commit  <-- release-3.7 NEW HEAD
>            /            \
>   | release-3.7          \
>   | CURRENT HEAD          \
>   |                        |
>   | new commits            | c11131f (tag: v3.7.17)
>   |                       /
>   8b95eba----------------/
>   |
>   | old commits
>
> I'd like to avoid doing a rebase because it would lead to changed
> commit-ids and break any existing clones.
>
> The actual commands I'll be running on my local system are:
> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
> to the v3.7.17 virtual branch in the picture above)
> ```
> $ git fetch origin                          # fetch latest origin
> $ git checkout release-3.7                  # checking out my local release-3.7
> $ git merge origin/release-3.7              # merge updates from origin into my local
>                                             # release-3.7; this will create a merge commit
> $ git push origin release-3.7:release-3.7   # push my local branch to remote and point
>                                             # remote release-3.7 to my release-3.7,
>                                             # i.e. the merge commit
> ```
>
> After this, users with existing clones should get the changes on their
> next `git pull`.

I've tested this out locally, and it works.

>
> I'll do this in the next couple of hours, if there are no objections.
>
> ~kaushal


[Gluster-devel] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-17 Thread Kaushal M
IMPORTANT: Until this is fixed, please stop merging changes into release-3.7

I made a mistake.

When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
not correct, so I corrected them with a new commit, c11131f, directly on
top of my local release-3.7 branch (I'm sorry I didn't use Gerrit), and
tagged this commit as v3.7.17.

Unfortunately, when pushing I only pushed the tags and didn't push my
updated branch to release-3.7. Because of this I inadvertently created
a new (virtual) branch. Any new changes merged into release-3.7 since
then have landed on top of 8b95eba, which was the HEAD of release-3.7
when I made the mistake, so v3.7.17 now exists as a virtual branch.

The current branching for release-3.7 and v3.7.17 looks like this:

| release-3.7 CURRENT HEAD
|
| new commits
|              c11131f (tag: v3.7.17)
|             /
8b95eba------/
|
| old commits

The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
push this as the new release-3.7.

          Merge commit  <-- release-3.7 NEW HEAD
         /            \
| release-3.7          \
| CURRENT HEAD          \
|                        |
| new commits            | c11131f (tag: v3.7.17)
|                       /
8b95eba----------------/
|
| old commits

I'd like to avoid doing a rebase because it would lead to changed
commit-ids and break any existing clones.

The actual commands I'll be running on my local system are:
(NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
to the v3.7.17 virtual branch in the picture above)
```
$ git fetch origin                          # fetch latest origin
$ git checkout release-3.7                  # checking out my local release-3.7
$ git merge origin/release-3.7              # merge updates from origin into my local
                                            # release-3.7; this will create a merge commit
$ git push origin release-3.7:release-3.7   # push my local branch to remote and point
                                            # remote release-3.7 to my release-3.7,
                                            # i.e. the merge commit
```

After this, users with existing clones should get the changes on their
next `git pull`.

I'll do this in the next couple of hours, if there are no objections.

~kaushal


Re: [Gluster-devel] [GD2] New dev release - GlusterD2 v4.0dev-3

2016-11-17 Thread Kaushal M
On Fri, Nov 18, 2016 at 12:37 PM, Humble Devassy Chirammal wrote:
> Good going!
>
> Is this embedded etcd capable of connecting to or working with an
> external etcd if one is available?

Nope. This etcd is private to GD2, but GD2 itself will become capable of
connecting to external stores at some point in the future. External etcd
clients other than GD2 could connect to this one, but that may change.

>
> --Humble
>
>
> On Thu, Nov 17, 2016 at 8:36 PM, Kaushal M  wrote:
>>
>> I'm pleased to announce the third development release of GD2.
>>
>> The big news in this release is the move to embedded etcd. You no
>> longer need to install etcd separately. You just install GD2 and do
>> your work, just like old times with GD1.
>>
>> Prashanth was all over this release: in addition to doing the
>> embedding work, he also did a lot of minor cleanup and fixes, and a
>> lot of linter fixes.
>>
>> Prebuilt binaries for Linux x86-64 are available from the release page
>> [1]. A docker image gluster/glusterd2-test [2] has also been created
>> with this release.
>>
>> Please refer to the 'Testing releases' wiki [3] page for more
>> information on how to go about installing, running and testing GD2.
>> This also contains instructions on how to make use of the docker image
>> with Vagrant to set up a testing environment.
>>
>> Thanks,
>> Kaushal
>>
>> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-3
>> [2]:
>> https://hub.docker.com/r/gluster/glusterd2-test/builds/bgr3aitcjysvfmgubfk3ud/
>> [3]: https://github.com/gluster/glusterd2/wiki/Testing-releases


Re: [Gluster-devel] [GD2] New dev release - GlusterD2 v4.0dev-3

2016-11-17 Thread Mohammed Rafi K C
This is great news, kudos guys.

I created my first gluster volume using glusterd2 and postman ;).

One time I got an error saying "FATA[2921] removexattr failed
brickPath=/home/bricks/b1 error=no data available host=hostname
xattr=trusted.glusterfs.test" and the process died, but I was not able to
reproduce it.
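
For context, here is a minimal sketch of what that error corresponds to at
the syscall level. This is only my illustration, not GD2's actual code; the
brick path and xattr name are simply the values from the log above.
removexattr returns ENODATA ("no data available") when the attribute is not
present, and a caller could choose to treat that as non-fatal instead of
aborting:

```
// Hypothetical sketch (not GD2's actual code): remove a volume xattr from a
// brick path and tolerate ENODATA, which the kernel returns when the xattr
// is already absent.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func removeVolumeXattr(brickPath, xattr string) error {
	err := unix.Removexattr(brickPath, xattr)
	if err == unix.ENODATA {
		// The attribute does not exist; nothing to clean up, so don't fail.
		return nil
	}
	return err
}

func main() {
	if err := removeVolumeXattr("/home/bricks/b1", "trusted.glusterfs.test"); err != nil {
		fmt.Println("removexattr failed:", err)
	}
}
```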

btw, great going.


Rafi KC




On 11/17/2016 08:36 PM, Kaushal M wrote:
> I'm pleased to announce the third development release of GD2.
>
> The big news in this release is the move to embedded etcd. You no
> longer need to install etcd separately. You just install GD2 and do
> your work, just like old times with GD1.
>
> Prashanth was all over this release: in addition to doing the
> embedding work, he also did a lot of minor cleanup and fixes, and a
> lot of linter fixes.
>
> Prebuilt binaries for Linux x86-64 are available from the release page
> [1]. A docker image gluster/glusterd2-test [2] has also been created
> with this release.
>
> Please refer to the 'Testing releases' wiki [3] page for more
> information on how to go about installing, running and testing GD2.
> This also contains instructions on how to make use of the docker image
> with Vagrant to set up a testing environment.
>
> Thanks,
> Kaushal
>
> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-3
> [2]: 
> https://hub.docker.com/r/gluster/glusterd2-test/builds/bgr3aitcjysvfmgubfk3ud/
> [3]: https://github.com/gluster/glusterd2/wiki/Testing-releases



Re: [Gluster-devel] [GD2] New dev release - GlusterD2 v4.0dev-3

2016-11-17 Thread Humble Devassy Chirammal
Good going!

Is this embedded etcd capable of connecting to or working with an
external etcd if one is available?

--Humble


On Thu, Nov 17, 2016 at 8:36 PM, Kaushal M  wrote:

> I'm pleased to announce the third development release of GD2.
>
> The big news in this release is the move to embedded etcd. You no
> longer need to install etcd separately. You just install GD2 and do
> your work, just like old times with GD1.
>
> Prashanth was all over this release: in addition to doing the
> embedding work, he also did a lot of minor cleanup and fixes, and a
> lot of linter fixes.
>
> Prebuilt binaries for Linux x86-64 are available from the release page
> [1]. A docker image gluster/glusterd2-test [2] has also been created
> with this release.
>
> Please refer to the 'Testing releases' wiki [3] page for more
> information on how to go about installing, running and testing GD2.
> This also contains instructions on how to make use of the docker image
> with Vagrant to set up a testing environment.
>
> Thanks,
> Kaushal
>
> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-3
> [2]: https://hub.docker.com/r/gluster/glusterd2-test/builds/bgr3aitcjysvfmgubfk3ud/
> [3]: https://github.com/gluster/glusterd2/wiki/Testing-releases

[Gluster-devel] Dead translators

2016-11-17 Thread Jeff Darcy
As the first part of the general cleanup and technical-debt-reduction process, 
I'd like to start nuking some of the unused translators.  If any of the 
following are still useful and not broken, please speak up.  They'll always be 
in our git history, but there seems to be little reason to keep building them 
or risk confusing future developers by having them around.

  cluster/ha
  cluster/map
  features/filter
  features/mac-compat
  features/path-convertor
  features/protect


[Gluster-devel] [GD2] New dev release - GlusterD2 v4.0dev-3

2016-11-17 Thread Kaushal M
I'm pleased to announce the third development release of GD2.

The big news in this release is the move to embedded etcd. You no
longer need to install etcd separately. You just install GD2 and do
your work, just like old times with GD1.

Prashanth was all over this release: in addition to doing the
embedding work, he also did a lot of minor cleanup and fixes, and a
lot of linter fixes.

Prebuilt binaries for Linux x86-64 are available from the release page
[1]. A docker image gluster/glusterd2-test [2] has also been created
with this release.

Please refer to the 'Testing releases' wiki [3] page for more
information on how to go about installing, running and testing GD2.
This also contains instructions on how to make use of the docker image
with Vagrant to set up a testing environment.

Thanks,
Kaushal

[1]: https://github.com/gluster/glusterd2/releases/tag/v4.0dev-3
[2]: 
https://hub.docker.com/r/gluster/glusterd2-test/builds/bgr3aitcjysvfmgubfk3ud/
[3]: https://github.com/gluster/glusterd2/wiki/Testing-releases


[Gluster-devel] Fwd: Feature: Rebalance completion time estimation

2016-11-17 Thread Nithya Balachandran
On 14 November 2016 at 05:10, Shyam  wrote:

> On 11/11/2016 05:46 AM, Susant Palai wrote:
>
>> Hello All,
>> We have been receiving many requests from users to provide a "Rebalance
>> completion time estimation". This email is to gather ideas and feedback
>> from the community on this. We have one proposal, but nothing is
>> concrete. Please feel free to give your input on this problem.
>>
>> A brief about the rebalance operation:
>> - The rebalance process is used to rebalance data across the cluster,
>> most likely in the event of an add-brick or remove-brick operation. A
>> rebalance process is spawned on each node. Its job is to read
>> directories, fix their layout to include the newly added brick, read the
>> directories' child files (only those residing on local bricks), and
>> migrate them if the new layout requires it.
>>
>>
>> Here is one solution, pitched by Manoj Pillai.
>>
>> Assumptions for this idea:
>>  - files are of similar size.
>>  - Max 40% of the total files will be migrated
>>
>> 1- Do a statfs on the local bricks. Say the total size is St.
>>
>
> Why not use f_files from statfs, which gives the inode count, and
> possibly f_ffree, to determine how many inodes there are, and then use
> the crawl to figure out how many we have visited and how many are
> pending, to determine rebalance progress?
>
> I am not sure if the local FS (XFS, say) fills in this data for use, but
> if it does, it may provide a better estimate.

Thanks Shyam, that is a good idea.

I tried out a very rough version of this. statfs does return the inode
info (available and used) on my XFS brick. However, those numbers are
thrown way off by the entries in the .glusterfs directory. In my very
limited, file-only data set, there were almost twice as many inodes in use
as there were files in the volume. I have yet to try this with a
directory-heavy data set.
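
To make the statfs idea concrete, here is a rough sketch in Go of the
file-count estimate. This is only my illustration (the real rebalance
process lives in the glusterfs code base, not here); the brick path is an
arbitrary example, and halving the used-inode count to compensate for
.glusterfs entries is just an assumption based on the observation above.

```
// Rough sketch: derive a file-count estimate for a brick from statfs inode
// counters (f_files and f_ffree). Illustrative only.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// estimateFileCount returns an approximate number of files on the
// filesystem backing brickPath, based on the number of inodes in use.
func estimateFileCount(brickPath string) (uint64, error) {
	var fs unix.Statfs_t
	if err := unix.Statfs(brickPath, &fs); err != nil {
		return 0, err
	}
	used := fs.Files - fs.Ffree // inodes in use on the backing filesystem
	// Assumption: entries under .glusterfs roughly double inode usage on a
	// file-only data set, so halve the count as a crude correction.
	return used / 2, nil
}

func main() {
	n, err := estimateFileCount("/home/bricks/b1")
	if err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	fmt.Println("estimated file count:", n)
}
```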


High level algorithm:

1. When rebalance starts up, get the estimated number of files on the brick
using the statfs inode count.
2. As rebalance proceeds, calculate the rate at which files are being
looked up. This is based on the assumption that a rebalance cannot complete
until the filesystem crawl is complete. Actual file migration operations do
not seem to contribute greatly to this time but that still needs to be
validated with more realistic data sets.
3. Using the calculated rate and the estimated number of files, calculate
the time it would take to process all the files on the brick. That would
be our estimate for how long rebalance would take to complete on that
node (a rough sketch of this calculation follows below).
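
A rough sketch of the rate-based projection in step 3, again only my own
illustration (not the actual rebalance status code), assuming the total
comes from the statfs-based estimate above and the lookup count and
elapsed time come from the crawl:

```
// Rough sketch of the rate-based ETA from the algorithm above: given how
// many files have been looked up so far and how long that took, project
// the time needed to process the estimated total. Illustrative only.
package main

import (
	"fmt"
	"time"
)

// estimateRemaining projects the remaining rebalance time on one node.
func estimateRemaining(estimatedTotal, lookedUp uint64, elapsed time.Duration) time.Duration {
	if lookedUp == 0 || elapsed <= 0 {
		return 0 // not enough data yet to make an estimate
	}
	rate := float64(lookedUp) / elapsed.Seconds() // files looked up per second
	remaining := float64(estimatedTotal) - float64(lookedUp)
	if remaining < 0 {
		remaining = 0 // the statfs estimate was too low; report completion
	}
	return time.Duration(remaining / rate * float64(time.Second))
}

func main() {
	// Example numbers: 1,000,000 estimated files, 250,000 looked up in 10 minutes.
	eta := estimateRemaining(1000000, 250000, 10*time.Minute)
	fmt.Println("estimated time remaining:", eta) // prints 30m0s
}
```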

Things to be considered/assumptions:

1. A single filesystem partition contains a single brick in order for the
statfs info to be valid
2. My test was run on a single brick volume to which I added another brick
and started rebalance. More nodes and bricks in the cluster would mean that
the total number of files might change more frequently as files are not
just moved off the brick but to it as well.

That being said, the initial results are encouraging. The estimates
generated were fairly close to the times actually taken. The estimates are
regenerated every time the `gluster v rebalance  status` command is run,
and the values auto-correct to take the latest data into consideration.
However, mine was a limited setup and most rebalance runs took around 10
minutes or so. It would be interesting to see the numbers for larger data
sets where rebalance takes days or weeks.

Regards,
Nithya




Re: [Gluster-devel] Community Meetings - Feedback on new meeting format

2016-11-17 Thread Atin Mukherjee
On Thu, Nov 17, 2016 at 5:23 PM, Kaushal M  wrote:

> Hi All,
>
> We have begun following a new format for the weekly community meetings
> for the past 4 weeks.
>
> The new format is just a massive Open floor for everyone to discuss a
> topic of their choice. The old boring weekly updates have been
> relegated to just being notes in the meeting agenda. The meetings are
> being captured in the wiki [1][2][3], and give a good picture of what's
> been happening in the community in the past week.
>
> The format was trialed for 3 weeks (we actually did an extra week, and
> will follow it next week as well). We'd like to hear feedback about this
> from the community. It'll be good if your feedback
> covers the following,
> 1. What did you like or not like about the new format?
>

I like the new format because (a) it doesn't feel as monotonous as it
used to, and (b) it's much more productive, as the floor remains open for
discussion for most of the slot we have.


> 2. What could be done better?
>

I'd definitely like to see updates coming from all the major
components/initiatives, to get a sense of the overall work going on
across the Gluster world.

> 3. Should we continue with the format?
>

A big +1 from me.


>
> ---
> I'll begin with my feedback.
>
> This has resulted in several good changes,
> a. Meetings are now livelier, with more people speaking up and
> making themselves heard.
> b. Each topic in the open floor gets a lot more time for discussion.
> c. Developers are sending out weekly updates of works they are doing,
> and linking those mails in the meeting agenda.
>
> Though the response and attendance at the initial 2 meetings were
> good, they dropped for the last 2. This week in particular didn't have a
> lot of updates added to the meeting agenda. It seems like interest has
> dropped already.
>
> We could probably do a better job of collecting updates to make it
> easier for people to add their updates, but the current format of
> adding updates to etherpad(/hackmd) is simple enough. I'd like to know
> if there is anything else preventing people from providing updates.
>
> I vote we continue with the new format.
> ---
>
> Everyone please provide your feedback by replying to this mail. We'll
> be going over the feedback in the next meeting.
>
> Thanks.
> ~kaushal
>
> [1]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-16
> [2]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-09
> [3]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-02



-- 

~ Atin (atinm)

Re: [Gluster-devel] Community Meetings - Feedback on new meeting format

2016-11-17 Thread Jeff Darcy
> This has resulted in several good changes,
> a. Meetings are now livelier, with more people speaking up and
> making themselves heard.
> b. Each topic in the open floor gets a lot more time for discussion.
> c. Developers are sending out weekly updates of works they are doing,
> and linking those mails in the meeting agenda.

I agree with these points.  People seem much more engaged during the
meeting, which is a good thing.

> Though the response and attendance at the initial 2 meetings were
> good, they dropped for the last 2. This week in particular didn't have a
> lot of updates added to the meeting agenda. It seems like interest has
> dropped already.
> 
> We could probably do a better job of collecting updates to make it
> easier for people to add their updates, but the current format of
> adding updates to etherpad(/hackmd) is simple enough. I'd like to know
> if there is anything else preventing people from providing updates.

I'm one of the culprits here.  As an observation, not an excuse, I'll
point out that we were already missing lots of updates from people
who didn't even show up to the meetings.  Has the overall level of
missed updates gone up or down?  Has the level of attention paid to
them?  If people provide updates about as consistently, and those
updates are at least as detailed (possibly more because they're
written and meant to be read asynchronously), then we might actually
be *ahead* of where we were before.

The new format gets a big +1 from me.


[Gluster-devel] Community Meetings - Feedback on new meeting format

2016-11-17 Thread Kaushal M
Hi All,

We have begun following a new format for the weekly community meetings
for the past 4 weeks.

The new format is just a massive Open floor for everyone to discuss a
topic of their choice. The old boring weekly updates have been
relegated to just being notes in the meeting agenda. The meetings are
being captured in the wiki [1][2][3], and give a good picture of what's
been happening in the community in the past week.

The format was trialed for 3 weeks (we actually did an extra week, and
will follow it next week as well). We'd like to hear feedback about this
from the community. It'll be good if your feedback
covers the following,
1. What did you like or not like about the new format?
2. What could be done better?
3. Should we continue with the format?

---
I'll begin with my feedback.

This has resulted in several good changes,
a. Meetings are now livelier, with more people speaking up and
making themselves heard.
b. Each topic in the open floor gets a lot more time for discussion.
c. Developers are sending out weekly updates of works they are doing,
and linking those mails in the meeting agenda.

Though the response and attendance at the initial 2 meetings were
good, they dropped for the last 2. This week in particular didn't have a
lot of updates added to the meeting agenda. It seems like interest has
dropped already.

We could probably do a better job of collecting updates to make it
easier for people to add their updates, but the current format of
adding updates to etherpad(/hackmd) is simple enough. I'd like to know
if there is anything else preventing people from providing updates.

I vote we continue with the new format.
---

Everyone please provide your feedback by replying to this mail. We'll
be going over the feedback in the next meeting.

Thanks.
~kaushal

[1]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-16
[2]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-09
[3]: https://github.com/gluster/glusterfs/wiki/Community-Meeting-2016-11-02


Re: [Gluster-devel] Notice: https://download.gluster.org:/pub/gluster/glusterfs/LATEST has changed

2016-11-17 Thread Michael Scherer
On Wednesday, 16 November 2016 at 12:51 -0500, Kaleb S. KEITHLEY wrote:
> Hi,
> 
> As some of you may have noticed, GlusterFS-3.9.0 was released. Watch
> this space for the official announcement soon.
> 
> If you are using Community GlusterFS packages from download.gluster.org
> you should check your package metadata to be sure that an update doesn't
> inadvertently update your system to 3.9.
> 
> There is a new symlink:
> https://download.gluster.org:/pub/gluster/glusterfs/LTM-3.8 which will
> remain pointed at the GlusterFS-3.8 packages. Use this instead of
> .../LATEST to keep getting 3.8 updates without risk of accidentally
> getting 3.9. There is also a new LTM-3.7 symlink that you can use for
> 3.7 updates.
> 
> Also note that there is a new package signing key for the 3.9 packages
> that are on download.gluster.org. The old key remains the same for 3.8
> and earlier packages. New releases of 3.8 and 3.7 packages will continue
> to use the old key.

The new key had the wrong username/group and SELinux context. Someone came
on IRC to notify us, and I fixed that.

I am not sure how we can improve this, except with some upload script
rather than direct access.

> GlusterFS-3.9 is the first "short term" release; it will be supported
> for approximately six months. 3.7 and 3.8 are Long Term Maintenance
> (LTM) releases. 3.9 will be followed by 3.10; 3.10 will be a LTM release
> and 3.9 and 3.7 will be End-of-Life (EOL) at that time.
> 
> 

-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS





Re: [Gluster-devel] [Gluster-users] Notice: https://download.gluster.org:/pub/gluster/glusterfs/LATEST has changed

2016-11-17 Thread Pranith Kumar Karampuri
On Wed, Nov 16, 2016 at 11:47 PM, Serkan Çoban wrote:

> Hi,
> Will disperse-related new features be ported to 3.7, or should we
> upgrade for those features?
>

Hi Serkan,
  Unfortunately, no, they won't be backported to 3.7. We are adding new
features only to the latest releases, to prevent accidental bugs from
slipping into stable releases. While the features are working well, we did
see a performance problem very late in the cycle in the I/O path, just
with EC, for small files. You should wait before you upgrade, IMO.

You were trying to test how long it takes to heal data with multi-threaded
heal in EC, right? Would you like to give us feedback by trying this
feature out?


> On Wed, Nov 16, 2016 at 8:51 PM, Kaleb S. KEITHLEY wrote:
> > Hi,
> >
> > As some of you may have noticed, GlusterFS-3.9.0 was released. Watch
> > this space for the official announcement soon.
> >
> > If you are using Community GlusterFS packages from download.gluster.org
> > you should check your package metadata to be sure that an update doesn't
> > inadvertently update your system to 3.9.
> >
> > There is a new symlink:
> > https://download.gluster.org:/pub/gluster/glusterfs/LTM-3.8 which will
> > remain pointed at the GlusterFS-3.8 packages. Use this instead of
> > .../LATEST to keep getting 3.8 updates without risk of accidentally
> > getting 3.9. There is also a new LTM-3.7 symlink that you can use for
> > 3.7 updates.
> >
> > Also note that there is a new package signing key for the 3.9 packages
> > that are on download.gluster.org. The old key remains the same for 3.8
> > and earlier packages. New releases of 3.8 and 3.7 packages will continue
> > to use the old key.
> >
> > GlusterFS-3.9 is the first "short term" release; it will be supported
> > for approximately six months. 3.7 and 3.8 are Long Term Maintenance
> > (LTM) releases. 3.9 will be followed by 3.10; 3.10 will be a LTM release
> > and 3.9 and 3.7 will be End-of-Life (EOL) at that time.
> >
> >
> > --
> >
> > Kaleb



-- 
Pranith