Re: [Gluster-devel] [gluster-packaging] Release 4.0: Making it happen! (GlusterD2)

2018-01-16 Thread Shyam Ranganathan
On 01/12/2018 06:31 AM, Niels de Vos wrote:
> I think it will be really painful to maintain a .spec that has the
> current (already very complex) glusterfs bits, and the new GD2
> components. Packaging Golang is quite different from anything written in
> C, and will make the mixed language .spec most ugly. (Remember the
> difficulties with the gluster-swift/ufo bundling?)
> 
> If GD2 evolves at a different rate than glusterfs, it seems better to
> package it separately. This will make it possible to update it more
> often if needed. Maintaining the packages will also be simpler. Because
> GD2 is supposed to be less intimate with the glusterfs internals, there
> may come a point where the GD2 version does not need to match the
> glusterfs version anymore.
> 
> Keeping the same major versions would be good, and that makes it easy to
> set the correct Requires: in the .spec files.
> 
> Packaging GD2 by itself for Fedora should not be a problem. There are
> several package maintainers in the Gluster Community and all can
> propose, review and approve new packages. If two packages is the right
> technical approach, we should work to make that happen.

So, I would say either way is fine, as GD2 exists as a separate package
in either approach, in my eyes.

If package maintainers feel that creating a separate package is easy,
clean, and also the right way forward, and can make the new package
appear in Fedora and other distributions by the 4.0 release (I assume
that pre-release RC builds do not have that problem), we can go ahead.

Can one of the package maintainers finalize the decision on this?

Also, the GD2 reference spec file is here [1]

[1] GD2 Reference spec file:
https://github.com/gluster/glusterd2/blob/master/extras/rpms/glusterd2.spec
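
For illustration only (the version numbers below are hypothetical and not
taken from the reference spec), keeping the major versions aligned, as
Niels suggests, could be expressed in the GD2 spec with something like:

    # keep GD2 on the same major version as the glusterfs it manages
    Requires: glusterfs-server >= 4.0
    Requires: glusterfs-server < 5.0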
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Release 4.0: Making it happen!

2018-01-16 Thread Shyam Ranganathan
On 01/10/2018 01:14 PM, Shyam Ranganathan wrote:
> Hi,
> 
> 4.0 branching date is slated on the 16th of Jan 2018 and release is
> slated for the end of Feb (28th), 2018.

This is today! So read on...

Short update: I am going to wait a couple more days before branching, to
settle release content and exceptions. Branching is hence on Jan 18th
(Thursday).

> 
> We are at the phase where we need to ensure our release scope is correct
> and *must-release* features are landing. Towards this we need the
> following information from all contributors.
> 
> 1) Features that are making it to the release by branching date
> 
> - There are currently 35 open github issues marked as 4.0 milestone [1]
> - Need contributors to look at this list and let us know which will meet
> the branching date

Other than the protocol changes (from Amar), I did not receive any
requests for features that are making it to the release. I have compiled
a list of features, based on open patches in Gerrit, to check which
features are viable to make it into 4.0. This can be found here [3].

NOTE: All features, other than the ones in [3], are being moved out of
the 4.0 milestone.

> - Need contributors to let us know which may slip and hence need a
> backport exception to the 4.0 branch (post branching).
> - Need milestone corrections on features that are not making it to the
> 4.0 release

I need the following contributors to respond and state whether their
features in [3] should still be tracked against 4.0, and how much time
is needed to make that happen.

- Poornima, Amar, Jiffin, Du, Susant, Sanoj, Vijay

> 
> NOTE: Slips are accepted if they fall 1-1.5 weeks post branching, not
> later than that, and must be called out before branching!
> 
> 2) Reviews needing priority
> 
> - There could be features that are up for review, and considering we
> have about 6-7 days before branching, we need a list of the commits
> that you want review attention on.
> - These will be added to the dashboard [2], easing contributor access to
> top-priority reviews before branching

As of now, I am adding a few from the list in [3] for further review
attention as I see things evolving; more will be added as the point
above is answered by the respective contributors.

> 
> 3) Review help!
> 
> - This link [2] contains reviews that need attention, as they are
> targeted for 4.0. Request maintainers and contributors to pay close
> attention to this list on a daily basis and help out with reviews.
> 
> Thanks,
> Shyam
> 
> [1] github issues marked for 4.0:
> https://github.com/gluster/glusterfs/milestone/3
> 
> [2] Review focus for features planned to land in 4.0:
> https://review.gluster.org/#/q/owner:srangana%2540redhat.com+is:starred

[3] Release 4.0 features with pending code reviews: http://bit.ly/2rbjcl8

> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] query about why glustershd can not afr_selfheal_recreate_entry because of "afr: Prevent null gfids in self-heal entry re-creation"

2018-01-16 Thread Ravishankar N



On 01/16/2018 02:22 PM, Lian, George (NSB - CN/Hangzhou) wrote:


Hi,

Thanks a lot for your update.

I would like to give some more detail on where the issue came from.

This issue came from a test case in our team; the steps are as follows:


1) Set up a glusterfs environment with a replica-2 volume, 2 storage 
server nodes and 2 client nodes


2) Generate a split-brain file: the copy on sn-0 is normal, the copy on 
sn-1 is dirty.

Hi, sorry I did not understand the test case. What type of split-brain 
did you create (data/metadata, gfid, or file type mismatch)?


3) Delete the directory before heal begins (in this phase, the normal, 
correct file on sn-0 is deleted by the "rm" command; the dirty file is 
still there)



Delete from the backend brick directly?


4) After that, the self-heal process always fails, with the log that was 
attached in the last mail


Maybe you can write a script or a .t file (like the ones in 
https://github.com/gluster/glusterfs/tree/master/tests/basic/afr) so 
that your test can be understood unambiguously.
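
For reference, a rough skeleton of such a .t file, following the pattern 
of the existing afr tests (a sketch only; $V0, $B0, $H0, $M0, the 
TEST/EXPECT macros and get_pending_heal_count come from the test harness 
include.rc/volume.rc, and the commented steps are placeholders for your 
actual reproduction steps):

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc
    cleanup;

    TEST glusterd
    TEST pidof glusterd
    # 2-brick replica volume, fuse-mounted on $M0
    TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{0,1}
    TEST $CLI volume start $V0
    TEST glusterfs --volfile-id=$V0 --volfile-server=$H0 $M0

    # <steps that create the split-brain file and remove the directory>

    # trigger heal and verify the outcome
    TEST $CLI volume heal $V0
    EXPECT "0" get_pending_heal_count $V0

    cleanup;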



Also attaching some command output FYI.

From my understanding, glusterfs may not be able to handle the 
split-brain file in this case. Could you share your comments and confirm 
whether some enhancement is needed for this case or not?


If you create a split-brain in gluster, self-heal cannot heal it. You 
need to resolve it using one of the methods listed in 
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/#heal-info-and-split-brain-resolution
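
For example (not a prescription; the volume, brick and file names below 
are taken from your outputs and may need adjusting), the CLI-based 
resolution for a data/metadata split-brain looks like this:

    # pick the copy on one brick as the heal source
    gluster volume heal export split-brain source-brick sn-0.local:/mnt/bricks/export/brick /testdir/test_file

    # or let gluster choose based on file size or mtime
    gluster volume heal export split-brain bigger-file /testdir/test_file
    gluster volume heal export split-brain latest-mtime /testdir/test_file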


Thanks,
Ravi

rm -rf /mnt/export/testdir
rm: cannot remove '/mnt/export/testdir/test file': No data available

[root@sn-1:/root]
# ls -l /mnt/export/testdir/
ls: cannot access '/mnt/export/testdir/IORFILE_82_2': No data available
total 0
-? ? ? ? ?    ? test_file

[root@sn-1:/root]
# getfattr -m . -d -e hex /mnt/bricks/export/brick/testdir/
getfattr: Removing leading '/' from absolute path names
# file: mnt/bricks/export/brick/testdir/
trusted.afr.dirty=0x0001
trusted.afr.export-client-0=0x0054
trusted.gfid=0xb217d6af49024f189a69e0ccf5207572
trusted.glusterfs.dht=0x0001

[root@sn-0:/var/log/glusterfs]
# getfattr -m . -d -e hex /mnt/bricks/export/brick/testdir/
getfattr: Removing leading '/' from absolute path names
# file: mnt/bricks/export/brick/testdir/
trusted.gfid=0xb217d6af49024f189a69e0ccf5207572
trusted.glusterfs.dht=0x0001

Best Regards

George

From: gluster-devel-boun...@gluster.org 
[mailto:gluster-devel-boun...@gluster.org] On Behalf Of Ravishankar N
Sent: Tuesday, January 16, 2018 1:44 PM
To: Zhou, Cynthia (NSB - CN/Hangzhou); Gluster Devel
Subject: Re: [Gluster-devel] query about why glustershd can not 
afr_selfheal_recreate_entry because of "afr: Prevent null gfids in 
self-heal entry re-creation"


+ gluster-devel

On 01/15/2018 01:41 PM, Zhou, Cynthia (NSB - CN/Hangzhou) wrote:

Hi glusterfs expert,

Good day,

When I do some tests of glusterfs self-heal, I find the following
prints showing that when a dir/file type gets an error, it cannot be
self-healed.

Could you help check whether this is expected behavior? I find that the
code change https://review.gluster.org/#/c/17981/ adds a check for
iatt->ia_type, so what if a file's ia_type gets corrupted? In that case,
should it not get self-healed?


Yes, without knowing the ia_type, afr_selfheal_recreate_entry() 
cannot decide what type of FOP (mkdir/link/mknod) to use to create the 
appropriate file on the sink. You would need to find out why the 
source brick is not returning a valid ia_type, i.e. why 
replies[source].poststat is not valid.
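
As a starting point (paths are taken from your earlier output; adjust as 
needed), checking the entry directly on the backend should show whether 
the brick itself has a sane type and gfid for it:

    # run on the brick that afr considers the source for this entry
    stat /mnt/bricks/export/brick/testdir/test_file
    getfattr -m . -d -e hex /mnt/bricks/export/brick/testdir/test_file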

Thanks,
Ravi


Thanks!

//heal info output

[root@sn-0:/home/robot]

# gluster v heal export info

Brick sn-0.local:/mnt/bricks/export/brick

Status: Connected

Number of entries: 0

Brick sn-1.local:/mnt/bricks/export/brick

/testdir - Is in split-brain

Status: Connected

Number of entries: 1

//sn-1 glustershd log//

[2018-01-15 03:53:40.011422] I [MSGID: 108026]
[afr-self-heal-entry.c:887:afr_selfheal_entry_do]
0-export-replicate-0: performing entry selfheal on
b217d6af-4902-4f18-9a69-e0ccf5207572

[2018-01-15 03:53:40.013994] W [MSGID: 114031]
[client-rpc-fops.c:2860:client3_3_lookup_cbk] 0-export-client-1:
remote operation failed. Path: (null)
(----) [No data available]

[2018-01-15 03:53:40.014025] E [MSGID: 108037]
[afr-self-heal-entry.c:92:afr_selfheal_recreate_entry]
0-export-replicate-0: Invalid ia_type (0) or

[Gluster-devel] Coverity covscan for 2018-01-16-7ba7a4b2 (master branch)

2018-01-16 Thread staticanalysis
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-01-16-7ba7a4b2
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] cluster/dht: restrict migration of opened files

2018-01-16 Thread Raghavendra Gowdappa
All,

Patch [1] prevents migration of opened files during the rebalance
operation. If patch [1] affects you, please voice your concerns. [1] is a
stop-gap fix for the problem discussed in issues [2] and [3].

[1] https://review.gluster.org/#/c/19202/
[2] https://github.com/gluster/glusterfs/issues/308
[3] https://github.com/gluster/glusterfs/issues/347

regards,
Raghavendra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel