Re: [Gluster-users] Setting up geo replication with GlusterFS 3.6.5

2015-09-16 Thread Saravanakumar Arumugam

Hi,
Replies inline.


On 09/16/2015 01:34 AM, ML mail wrote:

Thanks for your detailed example. Based on that, it looks like my issue is
SSH-based. Now I have the following two SSH-related questions:

1) For the setup of a passwordless SSH account on the slave, does it need to
use the same SSH public key as stored by GlusterFS in the
/var/lib/glusterd/geo-replication directory, or can I simply generate my own
with ssh-keygen?

You can simply generate your own with ssh-keygen.

You should be able to log in from the master node to the slave node without a password.
(# ssh root@ ); that's it.
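
For example, a minimal sketch of the passwordless setup from the master
("slavehost" is just a placeholder for your slave's hostname):

# ssh-keygen                   (generate a key pair for root; an empty passphrase keeps it non-interactive)
# ssh-copy-id root@slavehost   (appends the public key to the slave's authorized_keys)
# ssh root@slavehost           (should now log in without a password prompt)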



2) Is it possible to use a user other than root for geo-replication with
GlusterFS v3.6?
It is supported in 3.6.5. It involves more steps in addition to those
mentioned below (which are for the root user).
Please refer to the link you
mentioned (http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html).
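
In short (quoting the steps roughly from memory of the admin guide, so please
verify them against that link; "geoaccount", "geogroup", "slavehost",
"mastervol" and "slavevol" are placeholders), the extra work is on the slave side:

# groupadd geogroup
# useradd -G geogroup geoaccount
# mkdir -p /var/mountbroker-root && chmod 0711 /var/mountbroker-root

then add the mountbroker options to /etc/glusterfs/glusterd.vol on every slave node:

    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.geoaccount slavevol
    option geo-replication-log-group geogroup
    option rpc-auth-allow-insecure on

restart glusterd on the slave, and create the session from the master with

# gluster volume geo-replication mastervol geoaccount@slavehost::slavevol create push-pem

Finally, as root on the slave, run /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh
with the geo user (and, depending on the release, the master and slave volume
names) as arguments.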




Regards
ML



On Tuesday, September 15, 2015 9:16 AM, Saravanakumar Arumugam wrote:
Hi,
You are right; this tool may not be compatible with 3.6.5.

I tried it myself with 3.6.5, but faced this error:
==
georepsetup tv1 gfvm3 tv2
Geo-replication session will be established between tv1 and gfvm3::tv2
Root password of gfvm3 is required to complete the setup. NOTE: Password
will not be stored.

root@gfvm3's password:
[OK] gfvm3 is Reachable(Port 22)
[OK] SSH Connection established root@gfvm3
[OK] Master Volume and Slave Volume are compatible (Version: 3.6.5)
[OK] Common secret pub file present at
/var/lib/glusterd/geo-replication/common_secret.pem.pub
[OK] common_secret.pem.pub file copied to gfvm3
[OK] Master SSH Keys copied to all Up Slave nodes
[OK] Updated Master SSH Keys to all Up Slave nodes authorized_keys file
[NOT OK] Failed to Establish Geo-replication Session
Command type not found while handling geo-replication options
[root@gfvm3 georepsetup]#
==
So, some more changes are required in this tool.


Coming back to your question:

I have set up geo-replication using the commands in 3.6.5.
Please recheck all the commands (with the necessary changes at your end).


[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# cat /etc/redhat-release
Fedora release 21 (Twenty One)
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# rpm -qa | grep glusterfs
glusterfs-devel-3.6.5-1.fc21.x86_64
glusterfs-3.6.5-1.fc21.x86_64
glusterfs-rdma-3.6.5-1.fc21.x86_64
glusterfs-fuse-3.6.5-1.fc21.x86_64
glusterfs-server-3.6.5-1.fc21.x86_64
glusterfs-debuginfo-3.6.5-1.fc21.x86_64
glusterfs-libs-3.6.5-1.fc21.x86_64
glusterfs-extra-xlators-3.6.5-1.fc21.x86_64
glusterfs-geo-replication-3.6.5-1.fc21.x86_64
glusterfs-api-3.6.5-1.fc21.x86_64
glusterfs-api-devel-3.6.5-1.fc21.x86_64
glusterfs-cli-3.6.5-1.fc21.x86_64
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# service glusterd start
Redirecting to /bin/systemctl start  glusterd.service
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# service glusterd status
Redirecting to /bin/systemctl status  glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
 Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled)
 Active: active (running) since Tue 2015-09-15 12:19:32 IST; 4s ago
Process: 2778 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
(code=exited, status=0/SUCCESS)
   Main PID: 2779 (glusterd)
 CGroup: /system.slice/glusterd.service
 └─2779 /usr/sbin/glusterd -p /var/run/glusterd.pid
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# ps aux | grep glus
root  2779  0.0  0.4 448208 17288 ?Ssl  12:19   0:00
/usr/sbin/glusterd -p /var/run/glusterd.pid
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume create tv1
gfvm3:/opt/volume_test/tv_1/b1 gfvm3:/opt/volume_test/tv_1/b2 force
volume create: tv1: success: please start the volume to access data
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume create tv2
gfvm3:/opt/volume_test/tv_2/b1 gfvm3:/opt/volume_test/tv_2/b2 force
volume create: tv2: success: please start the volume to access data
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#  gluster volume start tv1
volume start: tv1: success
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster volume start tv2
volume start: tv2: success
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# mount -t glusterfs gfvm3:/tv1 /mnt/master/
[root@gfvm3 georepsetup]# mount -t glusterfs gfvm3:/tv2 /mnt/slave/
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]#
[root@gfvm3 georepsetup]# gluster system:: execute gsec_create
Common secret pub file present at
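
For completeness, the remaining commands for the root-user flow are just the
following (a sketch; add "force" only if glusterd asks for it, for example
because the master and slave volumes share a node here):

# gluster volume geo-replication tv1 gfvm3::tv2 create push-pem
# gluster volume geo-replication tv1 gfvm3::tv2 start
# gluster volume geo-replication tv1 gfvm3::tv2 status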

Re: [Gluster-users] Rebalance failures

2015-09-16 Thread Davy Croonen
Hi all

After some testing and debugging I was able to reproduce the problem in our
lab. It turned out that this behaviour happens when root-squashing is turned
on; see the details below. Without root-squashing turned on, rebalancing
happens just fine.

Volume Name: public
Type: Distributed-Replicate
Volume ID: 158bf6ae-a486-4164-bb39-ca089ecdf767
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gfs01a-dcg:/mnt/public/brick1
Brick2: gfs01b-dcg:/mnt/public/brick1
Brick3: gfs02a-dcg.intnet.be:/mnt/public/brick1
Brick4: gfs02b-dcg.intnet.be:/mnt/public/brick1
Options Reconfigured:
server.anongid: 33
server.anonuid: 33
server.root-squash: on

Now only one question remains: what is the way to get the cluster back into
a healthy state?
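
(One option I am considering, assuming the diagnosis above is correct, is to
temporarily disable root-squash for the duration of the rebalance and re-enable
it afterwards, along these lines:

$ gluster volume set public server.root-squash off
$ gluster volume rebalance public start
$ gluster volume rebalance public status      (wait until it completes)
$ gluster volume set public server.root-squash on

but I would like confirmation that this is safe before trying it on the
production cluster.)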

Any help would be really appreciated.

Kind regards
Davy

On 15 Sep 2015, at 17:04, Davy Croonen wrote:

Hi all

After expanding our cluster we are facing failures while rebalancing. In my
opinion this doesn’t look good, so can anybody explain how these failures
could arise, how they can be fixed, or what the consequences could be?

$ gluster volume rebalance public status
                 Node  Rebalanced-files    size  scanned  failures  skipped       status  run time in secs
            ---------  ----------------  ------  -------  --------  -------  -----------  ----------------
            localhost                 0  0Bytes    49496     23464        0  in progress           3821.00
 gfs01b-dcg.intnet.be                 0  0Bytes    49496         0        0  in progress           3821.00
 gfs02a-dcg.intnet.be                 0  0Bytes    49497         0        0  in progress           3821.00
 gfs02b-dcg.intnet.be                 0  0Bytes    49495         0        0  in progress           3821.00

After looking in public-rebalance.log, this is one paragraph that shows up;
the whole log is filled with these.

[2015-09-15 14:50:58.239554] I [dht-common.c:3309:dht_setxattr] 0-public-dht: 
fixing the layout of /ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355
[2015-09-15 14:50:58.239730] I [dht-selfheal.c:960:dht_fix_layout_of_directory] 
0-public-dht: subvolume 0 (public-replicate-0): 251980 chunks
[2015-09-15 14:50:58.239750] I [dht-selfheal.c:960:dht_fix_layout_of_directory] 
0-public-dht: subvolume 1 (public-replicate-1): 251980 chunks
[2015-09-15 14:50:58.239759] I 
[dht-selfheal.c:1065:dht_selfheal_layout_new_directory] 0-public-dht: chunk 
size = 0x / 503960 = 0x214a
[2015-09-15 14:50:58.239784] I 
[dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-public-dht: assigning 
range size 0x7ffe51f8 to public-replicate-0
[2015-09-15 14:50:58.239791] I 
[dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-public-dht: assigning 
range size 0x7ffe51f8 to public-replicate-1
[2015-09-15 14:50:58.239816] I [MSGID: 109036] 
[dht-common.c:6296:dht_log_new_layout_for_dir_selfheal] 0-public-dht: Setting 
layout of /ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355 with 
[Subvol_name: public-replicate-0, Err: -1 , Start: 0 , Stop: 2147373559 ], 
[Subvol_name: public-replicate-1, Err: -1 , Start: 2147373560 , Stop: 
4294967295 ],
[2015-09-15 14:50:58.306701] I [dht-rebalance.c:1405:gf_defrag_migrate_data] 
0-public-dht: migrate data called on 
/ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355
[2015-09-15 14:50:58.346531] W [client-rpc-fops.c:1090:client3_3_getxattr_cbk] 
0-public-client-2: remote operation failed: Permission denied. Path: 
/ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355/1.1 rationale getallen.pdf 
(ba5220be-a462-4008-ac67-79abb16f4dd9). Key: trusted.glusterfs.pathinfo
[2015-09-15 14:50:58.354111] W [client-rpc-fops.c:1090:client3_3_getxattr_cbk] 
0-public-client-3: remote operation failed: Permission denied. Path: 
/ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355/1.1 rationale getallen.pdf 
(ba5220be-a462-4008-ac67-79abb16f4dd9). Key: trusted.glusterfs.pathinfo
[2015-09-15 14:50:58.354166] E [dht-rebalance.c:1576:gf_defrag_migrate_data] 
0-public-dht: /ka1hasselt/Lqw9pnXKV8ojBzzzsqHyChSU914422947204355/1.1 rationale 
getallen.pdf: failed to get trusted.distribute.linkinfo key - Permission denied
[2015-09-15 14:50:58.356191] I [dht-rebalance.c:1649:gf_defrag_migrate_data] 
0-public-dht: Migration operation on dir 

Re: [Gluster-users] autosnap feature?

2015-09-16 Thread Avra Sengupta

Hi,

Could you please raise an RFE for this? We will triage it and update the
RFE. Thanks.


Regards,
Avra

On 09/15/2015 07:30 PM, Alastair Neil wrote:
Not really. This is useful, as it distributes snapshot control over all the
cluster members, but I am looking for the ability to specify a snapshot
schedule like this:


frequent  snapshots every 15 mins, keeping 4 snapshots
hourly    snapshots every hour,    keeping 24 snapshots
daily     snapshots every day,     keeping 31 snapshots
weekly    snapshots every week,    keeping 7 snapshots
monthly   snapshots every month,   keeping 12 snapshots.
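
(Expressed with the snapshot scheduler from the feature page linked below,
that schedule would look roughly like the following; the syntax is from memory
and "myvol" is a placeholder, so treat it as a sketch:

snap_scheduler.py add "frequent" "*/15 * * * *" "myvol"
snap_scheduler.py add "hourly"   "0 * * * *"    "myvol"
snap_scheduler.py add "daily"    "0 0 * * *"    "myvol"
snap_scheduler.py add "weekly"   "0 0 * * 0"    "myvol"
snap_scheduler.py add "monthly"  "0 0 1 * *"    "myvol"

but as far as I can tell it has no notion of per-schedule retention counts or
labels, which is what I am after.)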

Clearly this could be handled via the scheduling as described,  but 
the feature that is missing is user friendly labeling so that users 
don't have to parse long time-stamps in the snapshot name to figure 
out what is the most recent snapshot.  Ideally they could have labels 
like "Now", "Fifteen Minutes Ago",  "Thirty Minutes Ago", "Sunday", 
"Last Week" etc.  The system should handle rotating the labels 
automatically, when necessary.  So some sort of ability to create and 
manipulate labels on snapshots and then expose them as links in the 
.snaps directory would probably be a start.


-Alastair



On 15 September 2015 at 01:35, Rajesh Joseph wrote:




- Original Message -
> From: "Alastair Neil" >
> To: "gluster-users" >
> Sent: Friday, September 11, 2015 2:24:32 AM
> Subject: [Gluster-users] autosnap feature?
>
> Wondering if there were any plans for a flexible and easy to use snapshotting
> feature along the lines of the zfs autosnap scripts. I imagine at the least it
> would need the ability to rename snapshots.
>

Are you looking for something like this?

http://www.gluster.org/community/documentation/index.php/Features/Scheduling_of_Snapshot

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> http://www.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Advice for auto-scaling

2015-09-16 Thread Paul Thomas

Hi,

I’m new to shared file systems and horizontal cloud scaling.

I have already played with auto-scaling on AWS/EC2, in terms of spawning
and destroying instances, and I can achieve that.


I just want some advice on how best to implement syncing for web files,
infrastructure, data, etc.


I have pretty much decided to put the database side of things on a
private instance.
I'll worry about DB clustering later; I'm not too bothered about this now,
because the software supports it.


It seems logical to put the web folder / application layer on a shared 
file system, maybe some configuration too.


What I'm really unsure about is how to ensure that the current system is 
up to date and the configuration tweaked for the physical specs.


How do people typically approach this? I'm guessing it's not always viable
to have a shared file system for everything.


Is the approach a disciplined one, where, say, I have a development instance
for infrastructure changes, and then a deployment flow where production
instances are somehow refreshed without downtime?


Or is there some other approach?

I notice that on sites like Yahoo things are often noticeably out of sync,
mostly on the data front, but also in other respects.

This would be unacceptable in my case.

I appreciate any help I can get regarding this.

My typical load is from php-fpm/nginx processes, with MySQL below this.

Should the memory cache also be separated, or is it, as I think, better for
this to be divided up with the infrastructure so that each public instance
is supported individually?


Paul
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Advice for auto-scaling

2015-09-16 Thread Mathieu Chateau
Hello,

I am doing that in production for a web farm.
My experience:

   - Gluster is synchronous (the client writes to all replicated nodes), so
   there is no issue with stale content
   - Gluster is slow with small files in replicated mode, due to the metadata
   overhead
   - for configuration, I ended up replicating it locally instead, for availability

So it works as you would expect (good), just slow.
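
For the web root itself we simply mount the volume on every web node, e.g.
with an /etc/fstab line like the following (server and volume names are
placeholders; backupvolfile-server just gives the mount a second node to
fetch the volfile from):

gluster01:/webvol  /var/www  glusterfs  defaults,_netdev,backupvolfile-server=gluster02  0  0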

Cordialement,
Mathieu CHATEAU
http://www.lotp.fr

2015-09-16 14:23 GMT+02:00 Paul Thomas :

> Hi,
>
> I’m new to shared file systems and horizontal cloud scaling.
>
> I have already played with auto-scaling on aws/ec2. In term of spawning a
> destroying and I can achieve that.
>
> I just want to some advice of how best implement syncing for web files,
> infrastructure, data, etc.
>
> I have pretty much decided to put the database side of things on a private
> instance.
> I'll worry about db clustering later I’m not to bothered about this not,
> because the software supports it.
>
> It seems logical to put the web folder / application layer on a shared
> file system, maybe some configuration too.
>
> What I'm really unsure about is how to ensure that the current system is
> up to date and the configuration tweaked for the physical specs.
>
> How do people typically approach this? I'm guessing it not always viable
> to have a shared file system for everything.
>
> Is the approach a disciplined one? Where say I have development instance
> for infrastructure changes.
> Then there is a deployment flow where production instances are somehow
> refreshed without downtime.
>
> Or is there some other approach?
>
> I notice on sites like yahoo, things are often noticeably unsynced, mostly
> on the data front, but also other things.
> This would be unacceptable in my case.
>
> I appreciate any help I can get regarding this.
>
> My typical load is from php-fpm/nginx processes, mysql bellow this.
>
> Should the memory cache also be separated, or as I think it is quite good
> for this to be divided up with the infrastructure to support each public
> instance individually?
>
> Paul
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] REMINDER: Weekly gluster community meeting to start in ~2 hours

2015-09-16 Thread Krutika Dhananjay
Hi All, 

In about 2 hours from now we will have the regular weekly Gluster 
Community meeting. 

Meeting details: 
- location: #gluster-meeting on Freenode IRC 
- date: every Wednesday 
- time: 12:00 UTC, 14:00 CEST, 17:30 IST 
(in your terminal, run: date -d "12:00 UTC") 
- agenda: https://public.pad.fsfe.org/p/gluster-community-meetings 

Currently the following items are listed: 
* Roll Call 
* Status of last week's action items 
* Gluster 3.7 
* Gluster 3.8 
* Gluster 3.6 
* Gluster 3.5 
* Gluster 4.0 
* Open Floor 
- bring your own topic! 

The last topic has space for additions. If you have a suitable topic to 
discuss, please add it to the agenda. 


-Krutika 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] question about how to handle bugs filed against End-Of-Life versions of glusterfs

2015-09-16 Thread Kaleb S. KEITHLEY
Hi,

A question was raised during Tuesday's (2015-09-15) Gluster Bug Triage
meeting[1], and discussed today (2015-09-16) at the Gluster Community
meeting[2] about how to handle currently open bugs and new bugs filed
against GlusterFS versions which have reached end-of-life (EOL).

As an example, Fedora simply closes any remaining open bugs when the
version reaches EOL. It's incumbent on the person who filed the bug to
reopen it if it still exists in newer versions.

Option A is: create a new set of 'umbrella' Versions, e.g.
3.4-end-of-life, 3.3-end-of-life, etc.; _reassign_ all bugs filed
against 3.4.x to 3.4-end-of-life; then delete the 3.4.x Versions from
bugzilla. Any new bugs filed against, e.g., any 3.4.x version are
assigned to 3.4-end-of-life.

Option B is: create a new set of 'umbrella' Versions, e.g.
3.4-end-of-life, 3.3-end-of-life, etc.; _close_ all bugs filed against
3.4.x; then delete the 3.4.x Versions from bugzilla. Any new bugs filed
against, e.g., any 3.4.x version are assigned to 3.4-end-of-life.

The main difference is whether existing bugs are reassigned or simply
closed. In either case if a new bug is filed against an EOL version then
during bug triage the bug will be checked to see if it still exists in
newer versions and reassigned to the later version, or closed as
appropriate.

You may reply to this email — Reply-to: is set to
mailto:gluster-de...@gluster.org — to register your opinion.

Thanks,


[1]
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-15/gluster-meeting.2015-09-15-12.02.log.html
[2]
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-16/gluster-meeting.2015-09-16-12.01.log.html

-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Hi new to Gluster

2015-09-16 Thread Nagaprasad Sathyanarayana
Hello Tarkeshwar,

GlusterFS uses file-level and range locks, depending on the operation.

With the description you have given below, Gluster definitely seems to be the
right choice for your workload. I recommend that you try a proof of concept
with Gluster under your application workload to experience the benefits.

You can always reach out to the GlusterFS community (this mailing list) for
any queries and help.

Regards
Nagaprasad. 


> On 15-Sep-2015, at 3:16 pm, M.Tarkeshwar Rao  wrote:
> 
> Hi Nagaprasad,
>  
> Thanks for reply.
>  
>  
> Nature of I/O workload your application is generating:
> It is very high. Our product works on files. It collects data (multi-process
> and multi-threaded) from remote nodes, processes it, and then sends it to
> remote locations.
> Our execution engine runs the processing business logic, so there is a huge
> number of open, read, write and rename calls in our code per second.
>
> Recently we made it scalable as well, so our business logic runs horizontally
> across multiple nodes. We collect data in a common directory and read it from
> the same directory for processing.
>
> We are using the Veritas cluster file system, which locks at the directory
> level. Since the same directory is accessed from multiple nodes, there is a
> delay in processing; further, it reduces performance drastically.
>
> To improve performance we made some changes in our application by breaking up
> the directories for collection and processing.
> This gave us a performance improvement.
>
> We feel that if we change our file system we will get more improvement.
> Please suggest.
>  
>  
> Regards
> Tarkeshwar
> 
>> On Tue, Sep 15, 2015 at 2:27 PM, Nagaprasad Sathyanarayana wrote:
>> Hello Tarakeshwar,
>> 
>> Firstly, welcome to the Gluster community.
>> 
>> Please visit 
>> http://www.gluster.org/community/documentation/index.php/GlusterFS_General_FAQ,
>>  which answers some of your queries about GlusterFS capabilities.
>> If you could share with us the nature of I/O workload your application is 
>> generating, the performance need of your application, type of client access 
>> (NFS, CIFS etc.,)
>> that users of your application need etc, we will be in a better position to 
>> guide.
>> 
>> Regards
>> Nagaprasad
>> 
>> - Original Message -
>> From: "M.Tarkeshwar Rao" 
>> To: gluster-users@gluster.org
>> Sent: Tuesday, 15 September, 2015 12:15:23 PM
>> Subject: [Gluster-users] Hi new to Gluster
>> 
>> Hi all,
>> We have a product which is written in C++ on Red Hat.
>> In production our customers use our product with the Veritas cluster file
>> system for HA and with shared storage (EMC).
>> Initially this product ran on only a single node. In our last release we made
>> it scalable (more than one node).
>> Due to excessive locking (CFS) we are not getting the performance we need.
>> Can you please suggest whether Gluster will resolve our problem, as it is a
>> distributed file system.
>> Is Gluster POSIX compliant?
>> Can we use it in production? Please suggest.
>> If there is any other file system, please suggest.
>> Regards
>> Tarkeshwar
>> 
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Advice for auto-scaling

2015-09-16 Thread Paul Thomas

Would you run Puppet in init.d on the new node to sync the infrastructure?

Then you could use Rundeck to trigger the shared config on each
instance, for on-demand syncing.


On 16/09/15 13:23, Paul Thomas wrote:

Hi,

I’m new to shared file systems and horizontal cloud scaling.

I have already played with auto-scaling on aws/ec2. In term of 
spawning a destroying and I can achieve that.


I just want to some advice of how best implement syncing for web 
files, infrastructure, data, etc.


I have pretty much decided to put the database side of things on a 
private instance.
I'll worry about db clustering later I’m not to bothered about this 
not, because the software supports it.


It seems logical to put the web folder / application layer on a shared 
file system, maybe some configuration too.


What I'm really unsure about is how to ensure that the current system 
is up to date and the configuration tweaked for the physical specs.


How do people typically approach this? I'm guessing it not always 
viable to have a shared file system for everything.


Is the approach a disciplined one? Where say I have development 
instance for infrastructure changes.
Then there is a deployment flow where production instances are somehow 
refreshed without downtime.


Or is there some other approach?

I notice on sites like yahoo, things are often noticeably unsynced, 
mostly on the data front, but also other things.

This would be unacceptable in my case.

I appreciate any help I can get regarding this.

My typical load is from php-fpm/nginx processes, mysql bellow this.

Should the memory cache also be separated, or as I think it is quite 
good for this to be divided up with the infrastructure to support each 
public instance individually?


Paul


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] question about how to handle bugs filed against End-Of-Life versions of glusterfs

2015-09-16 Thread Sankarshan Mukhopadhyay
[for reference]

On Wed, Sep 16, 2015 at 6:35 PM, Kaleb S. KEITHLEY  wrote:
> As an example, Fedora simply closes any remaining open bugs when the
> version reaches EOL. It's incumbent on the person who filed the bug to
> reopen it if it still exists in newer versions.

 - All bugs
for EOL releases are automatically closed on the EOL date after
providing a warning in the bug comments, 30 days prior to EOL.


- The bug is reported against a version of Fedora that is no longer
maintained. Thank you for your bug report. We are sorry, but the Fedora
Project is no longer releasing bug fixes or any other updates for this
version of Fedora. This bug will be set to CLOSED:WONTFIX to reflect
this, but please reopen it if the problem persists after upgrading to
the latest version of Fedora, which is available from:
http://fedoraproject.org/get-fedora





-- 
sankarshan mukhopadhyay

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] question about how to handle bugs filed against End-Of-Life versions of glusterfs

2015-09-16 Thread Pat Riehecky



On 09/16/2015 08:05 AM, Kaleb S. KEITHLEY wrote:

Hi,

A question was raised during Tuesday's (2015-09-15) Gluster Bug Triage
meeting[1], and discussed today (2015-09-16) at the Gluster Community
meeting[2] about how to handle currently open bugs and new bugs filed
against GlusterFS versions which have reached end-of-life (EOL).

As an example, Fedora simply closes any remaining open bugs when the
version reaches EOL. It's incumbent on the person who filed the bug to
reopen it if it still exists in newer versions.

Option A is: create a new set of 'umbrella' Versions, e.g.
3.4-end-of-life, 3.3-end-of-life, etc.; _reassign_ all bugs filed
against 3.4.x to 3.4.x-end-of-life; then delete the 3.4.x Versions from
bugzilla. Any new bugs filed against, e.g., any 3.4.x version are
assigned to, 3.4-end-of-life.

Option B is: create a new set of 'umbrella' Versions, e.g.
3.4-end-of-life, 3.3-end-of-life, etc.; _close_ all bugs filed against
3.4.x; then delete the 3.4.x Versions from bugzilla. Any new bugs filed
against, e.g., any 3.4.x version are assigned to, 3.4-end-of-life.

The main difference is whether existing bugs are reassigned or simply
closed. In either case if a new bug is filed against an EOL version then
during bug triage the bug will be checked to see if it still exists in
newer versions and reassigned to the later version, or closed as
appropriate.

You may reply to this email — Reply-to: is set to
mailto:gluster-de...@gluster.org — to register your opinion.

Thanks,


[1]
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-15/gluster-meeting.2015-09-15-12.02.log.html
[2]
http://meetbot.fedoraproject.org/gluster-meeting/2015-09-16/gluster-meeting.2015-09-16-12.01.log.html



I'd go with Option B (close the bugs), so long as it is possible to 
either (a) re-open or (b) connect to the older bug - so that any logs 
are not misplaced.


There would probably need to be some sort of "This version is EOL; if
you are still having this issue on a non-EOL version, do X, Y, and Z" message.


Pat

--
Pat Riehecky
Scientific Linux developer

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Why does glusterfs take 1/4 of the CPU while beeing idle?

2015-09-16 Thread Merlin Morgenstern
I am experiencing unusual CPU usage on an Ubuntu 14.04.3 box which is
supposed to be idle. It is around 25% on a 4-core system, and htop says the
load is about 1.0, while the underlying processes all show between 0 and a
small percentage of CPU load.

The only services running are a glusterfsd and a glusterfs client. There is
no load on the glusterfs shares. The high CPU consumption has now persisted
for over 4 hours.

How can I determine which process is eating up a quarter of the CPU power,
or, if it is Gluster, how can I fix that?
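
So far I have only looked at htop. I guess something like the following would
at least narrow it down to the offending process or thread (suggestions welcome):

top -H -p $(pgrep -d, -f gluster)        (per-thread CPU for all gluster processes)
pidstat -p $(pgrep -d, -f gluster) 5     (same idea, if the sysstat package is installed)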

Thank you in advance for any help.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users