Re: [Gluster-users] [Gluster-devel] Release 3.12.6: Scheduled for the 12th of February

2018-02-02 Thread Nithya Balachandran
On 2 February 2018 at 11:16, Jiffin Tony Thottan wrote:

> Hi,
>
> It's time to prepare the 3.12.6 release. The release window falls on the 10th of
> each month; as the 10th is a weekend this time around, the release date is 12-02-2018.
>
> This mail is to call out the following,
>
> 1) Are there any pending **blocker** bugs that need to be tracked for
> 3.12.6? If so mark them against the provided tracker [1] as blockers
> for the release, or at the very least post them as a response to this
> mail
>
> 2) Pending reviews in the 3.12 dashboard will be part of the release,
> **iff** they pass regressions and have the review votes, so use the
> dashboard [2] to check on the status of your patches to 3.12 and get
> these going
>
> 3) I have checked what went into 3.10 after the 3.12 release and whether
> those fixes are already included in the 3.12 branch. The status here is
> **green**, as all fixes ported to 3.10 have also been ported to 3.12.
>

Hi Jiffin,

We will need to get https://review.gluster.org/19468 in. It is currently
pending regressions and should make it well in time, but this is a heads-up.

Regards,
Nithya

> Thanks,
> Jiffin
>
> [1] Release bug tracker:
> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.6
>
> [2] 3.12 review dashboard:
> https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Gluster release 3.10.10 (Long Term Maintenance)

2018-02-02 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
3.10.10 (packages available at [1]).

Release notes for the release can be found at [2].

The corruption issue seen when sharded volumes are rebalanced is fixed with
this release.

Thanks,
Gluster community

[1] Packages:
https://download.gluster.org/pub/gluster/glusterfs/3.10/3.10.10/

[2] Release notes: http://docs.gluster.org/en/latest/release-notes/3.10.10/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] How to trigger a resync of a newly replaced empty brick in replicate config ?

2018-02-02 Thread Serkan Çoban
If I were you, I would follow these steps. Stop the rebalance and fix the
cluster health first: bring up the down server, replace server4:brick4 with a
new disk, format it, and make sure the brick is started, then start a full
heal. A full heal will not start unless all bricks are up. After that you can
continue with the rebalance.
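
A minimal command sketch of that sequence, assuming the volume is named "home"
as in your status output and using only the standard gluster CLI (brick paths
are taken from that output; adjust to your setup):

# 1. Stop the rebalance that is currently in progress
gluster volume rebalance home stop

# 2. Once the new disk on server4 is formatted and mounted at the brick path,
#    force-start the volume; this only starts bricks that are currently down
gluster volume start home force

# 3. Confirm every brick now shows Online = Y
gluster volume status home

# 4. Trigger a full self-heal and watch its progress
gluster volume heal home full
gluster volume heal home info

# 5. When healing has finished, resume the rebalance
gluster volume rebalance home start

This is only a sketch of the steps described above; if the freshly formatted
brick refuses to start (for example because the new filesystem lacks the
volume-id extended attribute), gluster's reset-brick/replace-brick commands
are usually used to re-register it, which goes beyond the steps above.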


On Fri, Feb 2, 2018 at 1:27 PM, Alessandro Ipe wrote:
> Hi,
>
>
> I simplified the config in my first email, but I actually have 2x4 servers in
> a replicate-distribute setup, with 4 bricks on 6 of the servers and 2 bricks
> on the remaining 2. A full heal will just take ages... for just a single
> brick to resync!
>
>> gluster v status home
> volume status home
> Status of volume: home
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick server1:/data/glusterfs/home/brick1  49157 0  Y   5003
> Brick server1:/data/glusterfs/home/brick2  49153 0  Y   5023
> Brick server1:/data/glusterfs/home/brick3  49154 0  Y   5004
> Brick server1:/data/glusterfs/home/brick4  49155 0  Y   5011
> Brick server3:/data/glusterfs/home/brick1  49152 0  Y   5422
> Brick server4:/data/glusterfs/home/brick1  49152 0  Y   5019
> Brick server3:/data/glusterfs/home/brick2  49153 0  Y   5429
> Brick server4:/data/glusterfs/home/brick2  49153 0  Y   5033
> Brick server3:/data/glusterfs/home/brick3  49154 0  Y   5437
> Brick server4:/data/glusterfs/home/brick3  49154 0  Y   5026
> Brick server3:/data/glusterfs/home/brick4  49155 0  Y   5444
> Brick server4:/data/glusterfs/home/brick4  N/A   N/AN   N/A
> Brick server5:/data/glusterfs/home/brick1  49152 0  Y   5275
> Brick server6:/data/glusterfs/home/brick1  49152 0  Y   5786
> Brick server5:/data/glusterfs/home/brick2  49153 0  Y   5276
> Brick server6:/data/glusterfs/home/brick2  49153 0  Y   5792
> Brick server5:/data/glusterfs/home/brick3  49154 0  Y   5282
> Brick server6:/data/glusterfs/home/brick3  49154 0  Y   5794
> Brick server5:/data/glusterfs/home/brick4  49155 0  Y   5293
> Brick server6:/data/glusterfs/home/brick4  49155 0  Y   5806
> Brick server7:/data/glusterfs/home/brick1  49156 0  Y   22339
> Brick server8:/data/glusterfs/home/brick1  49153 0  Y   17992
> Brick server7:/data/glusterfs/home/brick2  49157 0  Y   22347
> Brick server8:/data/glusterfs/home/brick2  49154 0  Y   18546
> NFS Server on localhost 2049  0  Y   683
> Self-heal Daemon on localhost   N/A   N/AY   693
> NFS Server on server8  2049  0  Y   18553
> Self-heal Daemon on server8N/A   N/AY   18566
> NFS Server on server5  2049  0  Y   23115
> Self-heal Daemon on server5N/A   N/AY   23121
> NFS Server on server7  2049  0  Y   4201
> Self-heal Daemon on server7N/A   N/AY   4210
> NFS Server on server3  2049  0  Y   5460
> Self-heal Daemon on server3N/A   N/AY   5469
> NFS Server on server6  2049  0  Y   22709
> Self-heal Daemon on server6N/A   N/AY   22718
> NFS Server on server4  2049  0  Y   6044
> Self-heal Daemon on server4N/A   N/AY   6243
>
> server2 is currently powered off while we wait for a replacement RAID
> controller, and server4:/data/glusterfs/home/brick4 is down as well.
>
> And as I said, there is a rebalance in progress
>> gluster rebalance home status
> Node        Rebalanced-files    size       scanned    failures    skipped    status         run time in h:m:s
> ---------   ----------------    -------    -------    --------    -------    -----------    -----------------
> localhost   42083               23.3GB     1568065    1359        303734     in progress    16:49:31
> server5     35698               23.8GB     1027934    0           240748     in progress    16:49:23
> server4     35096               23.4GB     899491     0           229064     in progress    16:49:18
> server3     27031               18.0GB     701759

Re: [Gluster-users] How to trigger a resync of a newly replaced empty brick in replicate config ?

2018-02-02 Thread Alessandro Ipe
Hi,


I simplified the config in my first email, but I actually have 2x4 servers in
a replicate-distribute setup, with 4 bricks on 6 of the servers and 2 bricks
on the remaining 2. A full heal will just take ages... for just a single
brick to resync!

> gluster v status home
volume status home
Status of volume: home
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick server1:/data/glusterfs/home/brick1  49157 0  Y   5003 
Brick server1:/data/glusterfs/home/brick2  49153 0  Y   5023 
Brick server1:/data/glusterfs/home/brick3  49154 0  Y   5004 
Brick server1:/data/glusterfs/home/brick4  49155 0  Y   5011 
Brick server3:/data/glusterfs/home/brick1  49152 0  Y   5422 
Brick server4:/data/glusterfs/home/brick1  49152 0  Y   5019 
Brick server3:/data/glusterfs/home/brick2  49153 0  Y   5429 
Brick server4:/data/glusterfs/home/brick2  49153 0  Y   5033 
Brick server3:/data/glusterfs/home/brick3  49154 0  Y   5437 
Brick server4:/data/glusterfs/home/brick3  49154 0  Y   5026 
Brick server3:/data/glusterfs/home/brick4  49155 0  Y   5444 
Brick server4:/data/glusterfs/home/brick4  N/A   N/AN   N/A  
Brick server5:/data/glusterfs/home/brick1  49152 0  Y   5275 
Brick server6:/data/glusterfs/home/brick1  49152 0  Y   5786 
Brick server5:/data/glusterfs/home/brick2  49153 0  Y   5276 
Brick server6:/data/glusterfs/home/brick2  49153 0  Y   5792 
Brick server5:/data/glusterfs/home/brick3  49154 0  Y   5282 
Brick server6:/data/glusterfs/home/brick3  49154 0  Y   5794 
Brick server5:/data/glusterfs/home/brick4  49155 0  Y   5293 
Brick server6:/data/glusterfs/home/brick4  49155 0  Y   5806 
Brick server7:/data/glusterfs/home/brick1  49156 0  Y   22339
Brick server8:/data/glusterfs/home/brick1  49153 0  Y   17992
Brick server7:/data/glusterfs/home/brick2  49157 0  Y   22347
Brick server8:/data/glusterfs/home/brick2  49154 0  Y   18546
NFS Server on localhost 2049  0  Y   683  
Self-heal Daemon on localhost   N/A   N/AY   693  
NFS Server on server8  2049  0  Y   18553
Self-heal Daemon on server8N/A   N/AY   18566
NFS Server on server5  2049  0  Y   23115
Self-heal Daemon on server5N/A   N/AY   23121
NFS Server on server7  2049  0  Y   4201 
Self-heal Daemon on server7N/A   N/AY   4210 
NFS Server on server3  2049  0  Y   5460 
Self-heal Daemon on server3N/A   N/AY   5469 
NFS Server on server6  2049  0  Y   22709
Self-heal Daemon on server6N/A   N/AY   22718
NFS Server on server4  2049  0  Y   6044 
Self-heal Daemon on server4N/A   N/AY   6243 

server2 is currently powered off while we wait for a replacement RAID
controller, and server4:/data/glusterfs/home/brick4 is down as well.

And as I said, there is a rebalance in progress
> gluster rebalance home status
Node        Rebalanced-files    size       scanned    failures    skipped    status         run time in h:m:s
---------   ----------------    -------    -------    --------    -------    -----------    -----------------
localhost   42083               23.3GB     1568065    1359        303734     in progress    16:49:31
server5     35698               23.8GB     1027934    0           240748     in progress    16:49:23
server4     35096               23.4GB     899491     0           229064     in progress    16:49:18
server3     27031               18.0GB     701759     8           182592     in progress    16:49:27
server8     0                   0Bytes     327602     0           805        in progress    16:49:18
server6     35672               23.9GB     1028469    0           240810     in progress    16:49:17
server7     1                   45Bytes    53         0           0