Re: [Gluster-users] How to stop glusterfsd ?

2015-09-10 Thread Merlin Morgenstern
Thank you, that helped. Just for reference, the command is different in
3.7.x. Use instead:

sudo gluster v stop vol1
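
Putting the two messages together, a hedged sketch of the full shutdown
sequence on 3.7.x (volume name vol1 is taken from this thread; the service
name for glusterd varies by distribution):

```shell
# 1. Stop the volume; this brings down the per-volume glusterfsd brick
#    processes and the auxiliary glusterfs daemons (NFS, self-heal):
sudo gluster volume stop vol1

# 2. Then stop the management daemon itself:
sudo systemctl stop glusterd          # systemd-based distributions
# sudo service glusterfs-server stop  # e.g. Ubuntu with SysV init
```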

2015-09-09 14:06 GMT+02:00 A Ghoshal :

> The service is halted already. Note that by service, you would mean the
> /usr/sbin/glusterd process. The other processes are specific to the volume.
> If you wish to stop these, you must stop the volume using
>
> # gluster v <volname> stop
>
> You could also kill them, but that might come with additional
> repercussions like data loss, etc.
>
>
>
> From: Merlin Morgenstern
> To: gluster-users
> Date: 09/09/2015 05:27 PM
> Subject: [Gluster-users] How to stop glusterfsd ?
> Sent by: gluster-users-boun...@gluster.org
> --
>
>
>
> I am running Gluster 3.7.x on 3 nodes and want to stop the service.
Unfortunately this does not seem to work:
> sudo /usr/sbin/glusterd stop
> user@fx2:~$ ps -ef | grep gluster
> root  2334 1  0 Sep08 ?00:00:03 /usr/sbin/glusterfs -s
> localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l
> /var/log/glusterfs/nfs.log -S
> /var/run/gluster/f66ad4ca3f2b040a2b828e28e9648b0d.socket
> root  2342 1  0 Sep08 ?00:00:04 /usr/sbin/glusterfs -s
> localhost --volfile-id gluster/glustershd -p
> /var/lib/glusterd/glustershd/run/glustershd.pid -l
> /var/log/glusterfs/glustershd.log -S
> /var/run/gluster/673e39003114a00621cd86113e27d107.socket --xlator-option
> *replicate*.node-uuid=1a401094-307d-4ada-b710-58a906e97e66
> root  2348 1  0 Sep08 ?00:00:06 /usr/sbin/glusterfsd -s
> node2 --volfile-id vol1.node2.bricks-brick1 -p
> /var/lib/glusterd/vols/vol1/run/node2-bricks-brick1.pid -S
> /var/run/gluster/8b23a0563fdaecb0c7023644ffb933f1.socket --brick-name
> /bricks/brick1 -l /var/log/glusterfs/bricks/bricks-brick1.log
> --xlator-option *-posix.glusterd-uuid=1a401094-307d-4ada-b710-58a906e97e66
> --brick-port 49152 --xlator-option vol1-server.listen-port=49152
> user  8189  6703  0 13:51 pts/000:00:00 grep --color=auto gluster
>
> I tried all sorts of stop commands on gluster-server without success. How
> can I stop Gluster?
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] News of the week missing

2015-09-10 Thread André Bauer
Hey guys,

it seems the news blog for the last week(s) is missing.
The last one I found on the planet.gluster.org blog is for week 32:
https://atinmu.wordpress.com/2015/08/17/gluster-news-of-week-322015-2/

I already put some stuff in the Etherpad, in case it's because of a lack of
news: https://public.pad.fsfe.org/p/gluster-weekly-news

Maybe it's also an idea to switch to monthly news, if that would be
easier to maintain.

It might also be a good idea to make the news more prominent on
gluster.org. Maybe you could filter for news blogs (only) in the "Planet
Gluster News" section, or have a separate "Gluster news" section?

-- 
Regards
André Bauer


Re: [Gluster-users] "gluster v heal info split-brain" show solved split-brain

2015-09-10 Thread Jesper Led Lauridsen TS Infra server


From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of Игорь Бирюлин
Sent: 9 September 2015 12:18
To: gluster-users@gluster.org
Subject: [Gluster-users] "gluster v heal info split-brain" show solved split-brain

Hello all.
Today I got a split-brain on a 2-node replica installation.
I solved the problem by removing the files from the bricks:

find /storage/gluster_brick_repo -samefile \
  /storage/gluster_brick_repo/.glusterfs/f7/bd/f7bdbdab-8dda-498f-ab03-6dcdfa2ed435 \
  -delete -print

But in the output of "gluster v heal repofiles info split-brain" I still see
the removed file.
Try restarting the self-heal daemon.
http://article.gmane.org/gmane.comp.file-systems.gluster.user/21968/match=splitbrain+log+entries
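
The reason the find command works: the posix translator keeps a hard link to
every file on a brick under .glusterfs/<aa>/<bb>/<gfid>, so `-samefile`
matches both the named path and the gfid link. A scratch-directory demo of
the pattern (the gfid value is the one from this thread; f7/bd are its first
four hex digits):

```shell
BRICK=$(mktemp -d)                 # stand-in for the brick root
GFID=f7bdbdab-8dda-498f-ab03-6dcdfa2ed435
mkdir -p "$BRICK/.glusterfs/f7/bd"
echo data > "$BRICK/file.txt"
ln "$BRICK/file.txt" "$BRICK/.glusterfs/f7/bd/$GFID"   # gfid hard link

# -samefile matches every hard link to that inode, so a single find
# deletes both the named file and its .glusterfs link:
find "$BRICK" -samefile "$BRICK/.glusterfs/f7/bd/$GFID" -delete -print
```

On a live volume you would run this against each brick that holds the bad
copy, then restart the self-heal daemon so stale entries drop out of the
split-brain listing.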

How can I remove it from output of this command?

More information:
[13:02:04] root@dist-master01:/ # cat /etc/issue
Ubuntu 12.04.5 LTS \n \l

[13:02:07] root@dist-master01:/ # dpkg -l | grep glusterfs
ii  glusterfs-client 3.6.4-7   
clustered file-system (client package)
ii  glusterfs-common 3.6.4-7   
GlusterFS common libraries and translator modules
ii  glusterfs-server 3.6.4-7   
clustered file-system (server package)

[13:02:13] root@dist-master01:/ # gluster volume info

 Volume Name: repofiles
 Type: Replicate
 Volume ID: ae71fb65-d477-492b-9701-cec2a6986759
 Status: Started
 Number of Bricks: 1 x 2 = 2
 Transport-type: tcp
 Bricks:
 Brick1: xxx.xxx.xxx.xxx:/storage/gluster_brick_repo
 Brick2: yyy.yyy.yyy.yyy:/storage/gluster_brick_repo
 Options Reconfigured:
 cluster.data-self-heal-algorithm: diff
 cluster.self-heal-window-size: 2
 cluster.background-self-heal-count: 10
 server.allow-insecure: on
 performance.quick-read: on
 performance.cache-size: 6442450944
 performance.io-thread-count: 48
 performance.stat-prefetch: on
 performance.write-behind-window-size: 4MB
 performance.read-ahead: on
 nfs.disable: on

 [13:02:45] root@dist-master01:/ # gluster v heal repofiles info
 Brick dist-master01.xxx:/storage/gluster_brick_repo/
 Number of entries: 0

 Brick dist-master02.xxx:/storage/gluster_brick_repo/
 Number of entries: 0


 [13:03:19] root@dist-master01:/ # gluster v heal repofiles info split-brain
 Gathering list of split brain entries on volume repofiles has been successful

 Brick xxx.xxx.xxx.xxx:/storage/gluster_brick_repo
 Number of entries: 8
 atpath on brick
 ---
 2015-09-09 02:17:33 
 2015-09-09 02:18:34 
 2015-09-09 02:19:35 
 2015-09-09 02:20:36 
 2015-09-09 02:21:37 
 2015-09-09 02:22:38 
 2015-09-09 02:23:39 
 2015-09-09 02:24:40 

 Brick yyy.yyy.yyy.yyy:/storage/gluster_brick_repo
 Number of entries: 0

 [13:03:27] root@dist-master01:/ #
Best regards,
Igor



Re: [Gluster-users] "gluster v heal info split-brain" show solved split-brain

2015-09-10 Thread Игорь Бирюлин
Hello,
indeed, restarting the self-heal daemon solved my problem.
Thank you very much!

Best regards,
Igor





2015-09-10 11:30 GMT+03:00 Jesper Led Lauridsen TS Infra server :


Re: [Gluster-users] Gluster 3.7 and OpenShift Enterprise v3

2015-09-10 Thread Vijay Bellur

On Sunday 06 September 2015 07:34 AM, Ryan Nix wrote:

Hello,

We're evaluating OpenShift v3, which uses Docker. The manual recommends
using NFS v4 for storage. I would much prefer to use Gluster's native
client instead of NFS, mainly because the Gluster client is so easy to
use. One of our Red Hat SEs says it is probably fine to use Gluster
instead of NFS v4 for now until OpenShift gets formal support for
Gluster in OpenShift 3.1. I asked him why Gluster wasn't initially
supported and he said it probably came down to not having enough time to
test thoroughly before OSE's release.



Hi Ryan - I checked internally with the corresponding Red Hat teams and 
got a similar response. Lack of time is the core reason for going ahead 
with NFS. You can follow up with your Red Hat representatives to ensure 
that support for fuse lands in a subsequent OSE release.



Does anyone have any experience with running Docker images with Gluster
using the Gluster client? Is it possible that Redhat's recommendation is
based on the performance of NFS v4 vs Gluster?


No performance numbers have been gathered with the fuse client in OSE, and 
hence the question of comparison does not arise. What 
applications/workloads do you intend to run in OSE?




I can't seem to find
anything that says one is faster than the other. Has anyone used Gluster
with Docker (which powers OpenShift)?



There have been various write ups on the web about using Gluster and 
Docker. I would be very interested if someone on this list has direct 
experience with this combination.


Thanks,
Vijay






Re: [Gluster-users] Gluster 3.7 and OpenShift Enterprise v3

2015-09-10 Thread Ryan Nix
Hey Vijay,

Thanks for the response.

OSE 3.1, due to be released late fall, is supposed to formally support
Gluster for persistent storage. Right now Gluster support is offered as a
technology preview. We're not planning on running any CPU or network
intensive applications on OSE, at least not yet. I've seen a few of the
write-ups on Gluster and Docker; however, many of them seem to be about
running *Gluster in a Docker container*, which is just mind-blowing to me.
:)

I suppose we should simply try Gluster as the backend for OSE v3.0 and see
how it goes. We're not planning on launching OSE v3 in production just yet.
Sadly, we've had slow internal adoption of OSE v2.2. Developers seem to
have 99 problems and deployment/hosting isn't one of them.

But are there any performance benchmarks pitting NFS v4 vs Gluster client
3.7.x?

Thanks,

Ryan



Re: [Gluster-users] oVirt replacement?

2015-09-10 Thread Sachidananda URS
Hi Ryan,

On Wed, Sep 9, 2015 at 5:20 PM, Ryan Nix  wrote:

> At the Redhat conference in June one of the lead developers mentioned
> Gluster would be getting a new GUI management tool to replace oVirt. Has
> there been any formal announcement on this?
>
>

Currently we have gdeploy, a CLI for deployment and management of
Gluster clusters. It is an Ansible-based tool which can be used to:

* Set up backend bricks.
* Create volumes.
* Mount the clients, and a bunch of other things.

The project is hosted at:
https://github.com/gluster/gdeploy/tree/1.0

I suggest using the 1.0 branch; master is unstable and a work in progress.

The tool is very easy to use. It relies on configuration files. Writing a
configuration file is explained in the example configuration file:

https://github.com/gluster/gdeploy/blob/1.0/examples/gluster.conf.sample

A more detailed explanation of the fields can be found at:
https://github.com/gluster/gdeploy/blob/1.0/examples/README

`gdeploy --help' will print a help message on usage.

Please let us know if you have any questions. It would be awesome if
you could send us pull requests.

-sac
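
For flavor, a minimal illustrative config in the style of that sample (the
host addresses and device names here are hypothetical; check
gluster.conf.sample for the authoritative field list):

```ini
[hosts]
10.0.0.1
10.0.0.2

[devices]
/dev/sdb

[volume]
action=create
volname=glustervol
replica=yes
replica_count=2
force=yes

[clients]
action=mount
volname=glustervol
hosts=10.0.0.10
fstype=glusterfs
client_mount_points=/mnt/gluster
```

It is then invoked with something like `gdeploy -c gluster.conf`.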

Re: [Gluster-users] oVirt replacement?

2015-09-10 Thread Sachidananda URS
On Fri, Sep 11, 2015 at 1:07 AM, Ryan Nix  wrote:

> Thanks. Seems like just about everything open source is using Ansible for
> deployment.
>


Considering the flexibility of Ansible and the ease of bootstrapping, it's a
natural choice for deployment.




Re: [Gluster-users] oVirt replacement?

2015-09-10 Thread Ryan Nix
Thanks. Seems like just about everything open source is using Ansible for
deployment.


[Gluster-users] autosnap feature?

2015-09-10 Thread Alastair Neil
Wondering if there are any plans for a flexible and easy-to-use
snapshotting feature along the lines of the ZFS autosnap scripts. I imagine
at the least it would need the ability to rename snapshots.
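
In the meantime, a rough sketch of what date-based rotation can look like
with the CLI that already exists (the naming scheme, retention count, and
cron wrapper are all hypothetical; `--mode=script` suppresses the
interactive delete confirmation):

```shell
#!/bin/sh
# Hypothetical daily rotation for volume vol1: create a dated snapshot,
# then delete all but the newest 7. Note there is no snapshot rename,
# which is the limitation pointed out above.
VOL=vol1
KEEP=7
gluster snapshot create "daily-$(date +%Y%m%d-%H%M)" "$VOL"
gluster snapshot list "$VOL" | grep '^daily-' | sort | head -n -"$KEEP" |
while read -r snap; do
    gluster --mode=script snapshot delete "$snap"
done
```

(head -n -N keeps all but the last N lines and is a GNU coreutils extension.)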