Re: [Gluster-users] [Gluster-devel] Idea: Alternate Release process

2016-05-18 Thread M S Vishwanath Bhat
On 13 May 2016 at 13:46, Aravinda  wrote:

> Hi,
>
> Based on the discussion in last community meeting and previous discussions,
>
> 1. Too frequent releases are difficult to manage (without a dedicated
> release manager)
> 2. Users want to see features early for testing or POC.
> 3. Backporting patches to more than two release branches is a pain
>
> Enclosed visualizations to understand existing release and support cycle
> and proposed alternatives.
>
> - Each grid interval is 6 months
> - Green rectangle shows supported release or LTS
> - Black dots are minor releases till it is supported (once a month)
> - Orange rectangle is a non-LTS release with minor releases (support ends
> when the next version is released)
>
> Enclosed following images
> 1. Existing Release cycle and support plan(6 months release cycle, 3
> releases supported all the time)
> 2. Proposed alternative 1 - One LTS every year and non LTS stable release
> once in every 2 months
> 3. Proposed alternative 2 - One LTS every year and non LTS stable release
> once in every 3 months
>

+1 for the 3rd option: non-LTS stable release once every three months.

Best Regards,
Vishwanath



> 4. Proposed alternative 3 - One LTS every year and non LTS stable release
> once in every 4 months
> 5. Proposed alternative 4 - One LTS every year and non LTS stable release
> once in every 6 months (Similar to existing but only alternate one will
> become LTS)
>
> Please do vote for the proposed alternatives about release intervals and
> LTS releases. You can also vote for the existing plan.
>
> Do let me know if I missed anything.
>
> regards
> Aravinda
>
> On 05/11/2016 12:01 AM, Aravinda wrote:
>
> I couldn't find any solution for the backward incompatible changes. As you
> mentioned this model will not work for LTS.
>
> How about adopting this only for non LTS releases? We will not have
> backward incompatibility problem since we need not release minor updates to
> non LTS releases.
>
> regards
> Aravinda
>
> On 05/05/2016 04:46 PM, Aravinda wrote:
>
>
> regards
> Aravinda
>
> On 05/05/2016 03:54 PM, Kaushal M wrote:
>
> On Thu, May 5, 2016 at 11:48 AM, Aravinda 
>  wrote:
>
> Hi,
>
> Sharing an idea to manage multiple releases without maintaining
> multiple release branches and backports.
>
> This idea is heavily inspired by the Rust release model(you may feel
> exactly same except the LTS part). I think Chrome/Firefox also follows
> the same model.
>
> http://blog.rust-lang.org/2014/10/30/Stability.html
>
> Feature Flag:
> --
> Compile time variable to prevent compiling feature-related code when
> disabled. (For example, ./configure --disable-geo-replication
> or ./configure --disable-xml etc)
>
> Plan
> -
> - Nightly build with all the features enabled(./build --nightly)
>
> - All new patches will land in Master; if the patch belongs to an
>    existing feature then it should be written behind that feature flag.
>
> - If a feature is still work in progress then it will be only enabled in
>nightly build and not enabled in beta or stable builds.
>Once the maintainer thinks the feature is ready for testing then that
>feature will be enabled in beta build.
>
> - Every 6 weeks, a beta branch will be created by enabling all the
>    features which maintainers think are stable, and the previous beta
>    branch will be promoted as stable.
>    All the previous beta features will be enabled in stable unless a
>    feature is marked as unstable during beta testing.
>
> - LTS builds are the same as stable builds but without enabling all the
>    features. If we decide the last stable build will become the LTS release,
>    then the feature list from the last stable build will be saved as
>    `features-release-.yaml`. For example:
>    `features-release-3.9.yaml`
>Same feature list will be used while building minor releases for the
>LTS. For example, `./build --stable --features
> features-release-3.8.yaml`
>
> - Three branches, nightly/master, testing/beta, stable
>
> To summarize,
> - One stable release once in 6 weeks
> - One Beta release once in 6 weeks
> - Nightly builds every day
> - LTS release once in 6 months or 1 year, Minor releases once in 6 weeks.
>
> Advantages:
> -
> 1. No more backports required to different release branches (only
> exceptional backports, discussed below)
> 2. Non-feature bug fixes will never get missed in releases.
> 3. Release process can be automated.
> 4. Bugzilla process can be simplified.
>
> Challenges:
> 
> 1. Enforcing Feature flag for every patch
> 2. Tests also should be behind feature flag
> 3. New release process
>
> Backports, Bug Fixes and Features:
> --
> - Release bug fix - Patch only to Master, which will be available in
>next beta/stable build.
> - Urgent bug fix - Patch to Master and Backport to beta and stable
>branch, and early release stable and beta build.
> - Beta bug fix - Patch to Master 

Re: [Gluster-users] [Gluster-devel] Show and Tell sessions for Gluster 4.0

2016-05-10 Thread M S Vishwanath Bhat
On 9 May 2016 at 23:56, Vijay Bellur  wrote:

> On Mon, May 9, 2016 at 2:01 PM, Amye Scavarda  wrote:
> >
> >
> > On Sun, May 8, 2016 at 11:50 PM, Raghavendra Talur 
> > wrote:
> >>
> >>
> >>
> >> On Mon, May 9, 2016 at 11:55 AM, Atin Mukherjee 
> >> wrote:
> >>>
> >>> In the view of keeping the visibility of the work completed vs work in
> >>> progress and getting some fruitful feedback from the community we are
> >>> thinking of having a hangout/bluejeans session for 30-45 minutes once
> in
> >>> every month. The sessions will be concentrating on discussing the work
> >>> done over the last month and demoing few pieces of Gluster 4.0 projects
> >>> and what's the expectation for the coming month(s).
> >>>
> >>> There are couple of options we have w.r.t scheduling these sessions.
> >>>
> >>> 1. We use the weekly community meeting slot once in a month
> >>> or 2. Have a dedicated separate slot (probably @ 12:00 UTC, last
> >>> Thursday of each month)
> >>
> >>
> >> I would prefer dedicated time slot.
> >>
> >>>
> >>>
> >>> I'd request you to vote for any of these two and based on that we can
> >>> move forward.
> >>>
> >>> Thanks,
> >>> Atin
> >
> >
> > If we use a dedicated timeslot of 12:00 UTC, we'll never be able to have
> > folks from Pacific weigh in.
> > I'd suggest more of a rotating timeslot, but we've not had good luck
> getting
> > traction around that.
> > - amye
> >
>
> +1 to this. How about 1500 UTC? That should give time zones where we
> have more presence (North America, Europe, India) an equal chance to
> be present.
>

+1 for a dedicated time slot at 1500 UTC.

Best Regards,
Vishwanath



> I think accommodating one more meeting in everybody's busy schedules
> might be tough. We already have 4 bug triage meetings and 4 community
> meetings in a month. Adding one more might cause us to spread
> ourselves thin across these meetings. How about not doing a community
> or bug triage meeting in the same week when we schedule this meeting?
>
> We could turn this meeting into a Gluster.next meeting so that
> everybody who is working on something new in Gluster gets a chance to
> showcase their work.
>
> Thanks!
> Vijay
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Minutes of Gluster Community Bug Triage meeting at 12:00 UTC on 26th April 2016

2016-04-27 Thread M S Vishwanath Bhat
On 27 April 2016 at 16:49, Jiffin Tony Thottan  wrote:

> Hi all,
>
> Minutes:
> https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.html
> Minutes (text):
> https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.txt
> Log:
> https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.log.html
>
>
> Meeting summary
> ---
> * agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (jiffin,
>   12:11:39)
> * Roll call  (jiffin, 12:12:07)
>
> * msvbhat  will look into lalatenduM's automated Coverity setup in
>   Jenkins   which need assistance from an admin with more permissions
>   (jiffin, 12:18:13)
>   * ACTION: msvbhat  will look into lalatenduM's automated Coverity
> setup in   Jenkins   which need assistance from an admin with more
> permissions  (jiffin, 12:21:04)
>
> * ndevos need to decide on how to provide/use debug builds (jiffin,
>   12:21:18)
>   * ACTION: Manikandan to followup with kashlm to get access to
> gluster-infra  (jiffin, 12:24:18)
>   * ACTION: Manikandan and Nandaja will update on bug automation
> (jiffin, 12:24:30)
>
> * msvbhat  provide a simple step/walk-through on how to provide
>   testcases for the nightly rpm tests  (jiffin, 12:25:09)
>

I have already added how to write test cases here. This was completed the
last time I attended the meeting (which was a month ago).

Best Regards,
Vishwanath


>   * ACTION: msvbhat  provide a simple step/walk-through on how to
> provide testcases for the nightly rpm tests  (jiffin, 12:27:00)
>
> * rafi needs to followup on #bug 1323895  (jiffin, 12:27:15)
>
> * ndevos need to decide on how to provide/use debug builds (jiffin,
>   12:30:44)
>   * ACTION: ndevos need to decide on how to provide/use debug builds
> (jiffin, 12:32:09)
>   * ACTION: ndevos to propose some test-cases for minimal libgfapi test
> (jiffin, 12:32:21)
>   * ACTION: ndevos need to discuss about writing a script to update bug
> assignee from gerrit patch  (jiffin, 12:32:31)
>
> * Group triage  (jiffin, 12:33:07)
>
> * openfloor  (jiffin, 12:52:52)
>
> * gluster bug triage meeting schedule May 2016  (jiffin, 12:55:33)
>   * ACTION: hgowtham will host meeting on 03/05/2016  (jiffin, 12:56:18)
>   * ACTION: Saravanakmr will host meeting on 24/05/2016  (jiffin,
> 12:56:49)
>   * ACTION: kkeithley_ will host meeting on 10/05/2016  (jiffin,
> 13:00:13)
>   * ACTION: jiffin will host meeting on 17/05/2016  (jiffin, 13:00:28)
>
> Meeting ended at 13:01:34 UTC.
>
>
>
>
> Action Items
> 
> * msvbhat  will look into lalatenduM's automated Coverity setup in
>   Jenkins   which need assistance from an admin with more permissions
> * Manikandan to followup with kashlm to get access to gluster-infra
> * Manikandan and Nandaja will update on bug automation
> * msvbhat  provide a simple step/walk-through on how to provide
>   testcases for the nightly rpm tests
> * ndevos need to decide on how to provide/use debug builds
> * ndevos to propose some test-cases for minimal libgfapi test
> * ndevos need to discuss about writing a script to update bug assignee
>   from gerrit patch
> * hgowtham will host meeting on 03/05/2016
> * Saravanakmr will host meeting on 24/05/2016
> * kkeithley_ will host meeting on 10/05/2016
> * jiffin will host meeting on 17/05/2016
>
> People Present (lines said)
> ---
> * jiffin (87)
> * rafi1 (21)
> * ndevos (10)
> * hgowtham (8)
> * kkeithley_ (6)
> * Saravanakmr (6)
> * Manikandan (5)
> * zodbot (3)
> * post-factum (2)
> * lalatenduM (1)
> * glusterbot (1)
>
>
> Cheers,
>
> Jiffin
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Backup data from gluster Brick

2016-02-18 Thread M S Vishwanath Bhat
On 16 February 2016 at 18:03, Kim Jahn  wrote:

> Hello,
>
> is it safe to create disaster recovery backups from the bricks
> filesystem instead of the gluster storage? It would be much faster. Can
> there be some data missing or be wrong? I've tried to find an answer to
> that in the documentation but couldn't find anything.
>

When you back up your disks instead of the gluster storage, you will have to
reconstruct the volume again to get the full view of your data. Also, the
data in the bricks may be incomplete or incorrect depending on the type of
volume and its status.

So I would suggest using some of the features provided by gluster itself,
such as geo-rep[1] or snapshot[2]. You can also use the tool glusterfind[3]
to list the files which have been changed and build your own backup with it.
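
For example, a rough glusterfind based sketch (the volume name "myvol" and
session name "backupsession" below are just placeholders, and the actual
backup step is up to you):

glusterfind create backupsession myvol
glusterfind pre backupsession myvol /tmp/changed-files.txt
# back up the files listed in /tmp/changed-files.txt with your tool of choice
glusterfind post backupsession myvol

Each pre/post cycle should then give you the list of files changed since the
previous run.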

Hope it helps

Best Regards,
Vishwanath

[1] -
https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Distributed%20Geo%20Replication/
[2] -
https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Snapshots/
[3] -
https://gluster.readthedocs.org/en/latest/GlusterFS%20Tools/glusterfind/



Cheers
> Kim
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Gluster Monthly Newsletter, January 2015 Edition

2016-01-27 Thread M S Vishwanath Bhat
On 22 January 2016 at 22:59, Niels de Vos  wrote:

> On Mon, Jan 18, 2016 at 07:46:16PM -0800, Amye Scavarda wrote:
> > We're kicking off an updated Monthly Newsletter, coming out mid-month.
> > We'll highlight special posts, news and noteworthy threads from the
> > mailing lists, events, and other things that are important for the
> > Gluster community.
>
> ... snip!
>
> > FOSDEM:
> > * Gluster roadmap, recent improvements and upcoming features - Niels De
> Vos
>
> More details about the talk and related interview here:
>
>   https://fosdem.org/2016/schedule/event/gluster_roadmap/
>   https://fosdem.org/2016/interviews/2016-niels-de-vos/
>
> > * Go & Plugins - Kaushal Madappa
> > * Gluster Stand
> > DevConf
> > * small Gluster Developer Gathering
> > * Heketi GlusterFS volume management - Lusis Pabon
>

Sorry for the delay in replying. But I am talking about DiSTAF on 7th
February, the last day of DevConf. It's not really 100% related to gluster,
but about test automation of glusterfs.

Best Regards,
Vishwanath



> > * Gluster roadmap, recent improvements and upcoming features - Niels De
> Vos
>
> Sorry, this is not correct. That talk was proposed, but not accepted.
> I'll be giving a workshop though:
>
>   Build your own Scale-Out Storage with Gluster
>   http://sched.co/5m1X
>
> > FAST
> >
> > ==
> > Questions? Comments? Want to be involved?
>
> Can the newsletter get posted in a blog as well? I like reading posts
> like this through the RSS feed from http://planet.gluster.org/ .
>
> Thanks!
> Niels
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] DiSTAF demo and feedback

2015-12-18 Thread M S Vishwanath Bhat
On 17 December 2015 at 16:28, M S Vishwanath Bhat <msvb...@gmail.com> wrote:

> Please use the following links to join the session remotely.
>
> Event page: https://plus.google.com/events/c1u9mm8s59772sfpstbbb96odts
> Video link: http://www.youtube.com/watch?v=CpHRtsWiCSg
>

Depending on the time taken by other speakers, I may start 30 min early.
Sorry for the last minute notice.

Best Regards,
Vishwanath



>
> Best Regards,
> Vishwanath
>
>
> On 12 December 2015 at 00:53, M S Vishwanath Bhat <msvb...@gmail.com>
> wrote:
>
>>
>>
>> On 10 December 2015 at 22:01, M S Vishwanath Bhat <msvb...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I have made some changes to distaf (github.com/gluster/distaf) project
>>> to suit glusterfs testing more. I want to demo the changes and collect the
>>> feedback.
>>>
>>> I will do a hangouts session on Monday, 14th Dec 2015 at 1300 hours *UTC*
>>>  (6:30PM IST). Please attend the event and provide your feedback. If
>>> you want to attend and the date/time is not suitable for you, please let me
>>> know and I will try to change the schedule.
>>>
>>
>> Since I received a few concerns, I am moving it to 1130 hours UTC on
>> 18-12-2015 (Friday of next week). I will be presenting this as part of
>> gluster meetup in Bangalore (
>> http://www.meetup.com/glusterfs-India/events/227287952/). I will send
>> out a hangout link later on for the same.
>>
>> Best Regards,
>> Vishwanath
>>
>>
>>
>>> I will share the Google hangout link here soon.
>>>
>>> Best Regards,
>>> Vishwanath
>>>
>>>
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] DiSTAF demo and feedback

2015-12-17 Thread M S Vishwanath Bhat
Please use the following links to join the session remotely.

Event page: https://plus.google.com/events/c1u9mm8s59772sfpstbbb96odts
Video link: http://www.youtube.com/watch?v=CpHRtsWiCSg

Best Regards,
Vishwanath


On 12 December 2015 at 00:53, M S Vishwanath Bhat <msvb...@gmail.com> wrote:

>
>
> On 10 December 2015 at 22:01, M S Vishwanath Bhat <msvb...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I have made some changes to distaf (github.com/gluster/distaf) project
>> to suit glusterfs testing more. I want to demo the changes and collect the
>> feedback.
>>
>> I will do a hangouts session on Monday, 14th Dec 2015 at 1300 hours *UTC*
>>  (6:30PM IST). Please attend the event and provide your feedback. If you
>> want to attend and the date/time is not suitable for you, please let me
>> know and I will try to change the schedule.
>>
>
> Since I received a few concerns, I am moving it to 1130 hours UTC on
> 18-12-2015 (Friday of next week). I will be presenting this as part of
> gluster meetup in Bangalore (
> http://www.meetup.com/glusterfs-India/events/227287952/). I will send out
> a hangout link later on for the same.
>
> Best Regards,
> Vishwanath
>
>
>
>> I will share the Google hangout link here soon.
>>
>> Best Regards,
>> Vishwanath
>>
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Proposal to improve DiSTAF

2015-09-08 Thread M S Vishwanath Bhat

Hi,

I have sent a proposal doc [1] outlining the changes that are required 
to make the distaf [2] project more stable and user friendly.


Please go through the doc once and let me know your input. If you need
any other functionality not mentioned in the doc, please add it as a review
comment. Any suggestion/advice is greatly appreciated.


Thanks

[1] http://review.gluster.org/#/c/12048/
[2] https://github.com/gluster/distaf

Best Regards,
Vishwanath

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] What is the recommended backup strategy for GlusterFS?

2015-09-05 Thread M S Vishwanath Bhat
MS
On 5 Sep 2015 12:57 am, "Mathieu Chateau"  wrote:
>
> Hello,
>
> so far I use rsnapshot. This script do rsync with rotation, and most
important same files are stored only once through hard link (inode). I save
space, but still rsync need to parse all folders to know for new files.
>
> I am also interested in solution 1), but need to be stored on distinct
drives/servers. We can't afford to lose data and snapshots in case of human
error or disaster.
>
>
>
> Cordialement,
> Mathieu CHATEAU
> http://www.lotp.fr
>
> 2015-09-03 13:05 GMT+02:00 Merlin Morgenstern <
merlin.morgenst...@gmail.com>:
>>
>> I have about 1M files in a GlusterFS with rep 2 on 3 nodes runnnig
gluster 3.7.3.
>>
>> What would be a recommended automated backup strategy for this setup?
>>
>> I already considered the following:

Have you considered glusterfs geo-rep? It's actually meant for disaster
recovery, but it might suit your backup use case as well.

My two cents

//MS

>>
>> 1) glusterfs snapshots in combination with dd. This unfortunatelly was
not possible so far as I could not find any info on how to make a image
file out of the snapshots and how to automate the snapshot procedure.
>>
>> 2) rsync the mounted file share to a second directory and do a tar on
the entire directory after rsync completed
>>
>> 3) combination of 1 and 2. Doing a snapshot that gets mounted
automaticaly and then rsync from there. Problem: How to automate snapshots
and how to know the mount path
>>
>> Currently I am only able to do the second option, but the fist option
seems to be the most atractive.
>>
>> Thank you for any help on this.
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] geo-replication master is distributed-replicated, slave is distributed only?

2015-08-23 Thread M S Vishwanath Bhat
On 24 August 2015 at 01:11, Christian Rice cr...@pandora.com wrote:

 Thanks so much for the response.  I want to be sure I understand your
 caveat about slave volume being larger—that is not intuitive.  I’d think
 the slave volume could be the same size, that is, same useable space as
 seen by a fuse client.  Where does a larger slave volume size requirement
 come from, if I may ask?


Well, the slave *can* be of the same size as the master (useable space).
There is no need for the slave to be *bigger* than the master.

But if you expand your master volume, make sure to expand the slave volume
as well.
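
For example, if you grow a distributed-replicated (replica 2) master like
this, grow the slave by a similar amount (all host and brick names below are
just placeholders):

gluster volume add-brick mastervol mnode3:/bricks/b1 mnode4:/bricks/b1
gluster volume rebalance mastervol start

gluster volume add-brick slavevol snode2:/bricks/b1
gluster volume rebalance slavevol start

This is only a sketch; for a replica volume you have to add bricks in
multiples of the replica count.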

//MS




 From: M S Vishwanath Bhat msvb...@gmail.com
 Date: Saturday, August 22, 2015 at 10:59 AM
 To: Christian Rice cr...@pandora.com
 Cc: gluster-users@gluster.org gluster-users@gluster.org
 Subject: Re: [Gluster-users] geo-replication master is
 distributed-replicated, slave is distributed only?



 On 21 August 2015 at 23:46, Christian Rice cr...@pandora.com wrote:

 I’d like to have a distributed-replicated master volume, and
 distributed-only slave.

 Can this be done?  Just beginning the research, but so far I’ve only done
 geo-replication with distributed-only volumes.  Tips/caveats on this kind
 architecture are welcome.


 Yes, This can be done. Both master and slave can be of different
 configurations.

 But make sure that your slave volume has more effective size available
 than the master volume.

 HTH

 //MS



 The rationale is straightforward—the master volume should be able to stay
 available with all data when suffering a node loss, but the geo-replicated
 volumes can be taken offline for repairs and resync as soon as possible.

 Cheers,
 Christian

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] geo-replication master is distributed-replicated, slave is distributed only?

2015-08-22 Thread M S Vishwanath Bhat
On 21 August 2015 at 23:46, Christian Rice cr...@pandora.com wrote:

 I’d like to have a distributed-replicated master volume, and
 distributed-only slave.

 Can this be done?  Just beginning the research, but so far I’ve only done
 geo-replication with distributed-only volumes.  Tips/caveats on this kind
 architecture are welcome.


Yes, this can be done. Both master and slave can be of different
configurations.

But make sure that your slave volume has more effective size available than
the master volume.

HTH

//MS



 The rationale is straightforward—the master volume should be able to stay
 available with all data when suffering a node loss, but the geo-replicated
 volumes can be taken offline for repairs and resync as soon as possible.

 Cheers,
 Christian

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Gluster News of the week #30/2015

2015-08-18 Thread M S Vishwanath Bhat
On 18 August 2015 at 08:09, Sankarshan Mukhopadhyay 
sankarshan.mukhopadh...@gmail.com wrote:

 On Tue, Aug 18, 2015 at 12:44 AM, M S Vishwanath Bhat msvb...@gmail.com
 wrote:
 
 
  On 17 August 2015 at 21:44, Sankarshan Mukhopadhyay
  sankarshan.mukhopadh...@gmail.com wrote:

  And now, a very primitive first draft of what it could look like -
  https://public.pad.fsfe.org/p/mock-gluster-weekly-news
 
  (If the pad does get borked, here's the content pasted)
 
  Feedback welcome and appreciated.
 
 
  +1 This seems better.

 Thank you!

  In commit logs we can mention about the major bug fixes and feature
 commits
  of the past week.

 Who can help me see how this plays out by adding a few lines and text
 around the commits?


I just added a few lines about the commits which happened last week. We
might have to select the major commits in case there are many.

//MS





 --
 sankarshan mukhopadhyay
 https://about.me/sankarshan.mukhopadhyay
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Gluster News of the week #30/2015

2015-08-17 Thread M S Vishwanath Bhat
On 17 August 2015 at 21:44, Sankarshan Mukhopadhyay 
sankarshan.mukhopadh...@gmail.com wrote:

 On Mon, Aug 17, 2015 at 10:35 AM, sankarshan
 sankarshan.mukhopadh...@gmail.com wrote:
  On Mon, 17 Aug 2015 06:27:53 +0200, Niels de Vos wrote:
 
  It would be good to have some standard structure that includes
  interesting statistics.
 
  I'm not absolutely sure about the statistics but I was thinking about
  highlighting specific threads that are interesting.

 And now, a very primitive first draft of what it could look like -
 https://public.pad.fsfe.org/p/mock-gluster-weekly-news

 (If the pad does get borked, here's the content pasted)

 Feedback welcome and appreciated.


+1 This seems better.

In the commit logs section we can mention the major bug fixes and feature
commits of the past week.

//MS


 /s

 === BEGIN ===

 This is the work-in-progress mock Gluster Weekly News pad.

 The proposed approach is to (a) break out the weekly news into
 sections (b) addition of content to the sections can be managed by
 specific contributors


 Week 33: 17-23 August - assigned blog poster for this week:
 replace_with_your_name

 == News about the Gluster Project from around the world

 TBD: work out more avenues than just #gluster on Twitter

 == Conversations from the mailing lists

 Avra responded to a query around backup -
 http://article.gmane.org/gmane.comp.file-systems.gluster.user/21936
 suggesting that snapshots of the volume is a preferred good approach

 Atin indicates that the failed after update on centos 7
 http://article.gmane.org/gmane.comp.file-systems.gluster.user/21924
 could actually be a split-brain issue and suggests a recovery path

 Interesting thread around a single node cluster -
 http://article.gmane.org/gmane.comp.file-systems.gluster.user/21912
 with Kaushal deducing that the issue might be due to name
 resolution/FQDN being at fault. More
 at http://article.gmane.org/gmane.comp.file-systems.gluster.user/21940


 == Commit Logs



 == Meetup News and Events

 http://kvmforum2015.sched.org/event/43b9efa496fa5ecf0effa0e98ce2aeba -
 Martin Sivak talks about oVirt and Gluster - Hyperconvergence

 http://www.meetup.com/glusterfs-India/events/01221/ - GlusterFS
 group has a meetup at Bangalore; distaf and more will be talked about




 --
 sankarshan mukhopadhyay
 https://about.me/sankarshan.mukhopadhyay
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster News of the week #30/2015

2015-08-13 Thread M S Vishwanath Bhat
You can find the gluster news of the week #30/2015 below

https://medium.com/@msvbhat/gluster-news-of-the-week-30-2015-30452f44a144

You should be able to see the same on www.planet.gluster.org very shortly.

If you have anything that needs to be mentioned in the news of the week,
please add it here: https://public.pad.fsfe.org/p/gluster-weekly-news

Best Regards,
Vishwanath
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Restful Api Gluster 3.6

2015-08-04 Thread M S Vishwanath Bhat
On 5 August 2015 at 00:01, John S bun...@gmail.com wrote:

 Is there any restful api available for 3.6.4 version. Was trying options
 to manage cluster using apis.


There is no RESTful API support in gluster to manage your cluster yet. AFAIK
it might make it into glusterfs-3.8 or glusterfs-4.0.

http://www.gluster.org/community/documentation/index.php/Features/rest-api




 I have one more question is: is there any way to execute the gluster peer
 probe commands from the new server, I know due to trusted only existing
 nodes in the cluster need to execute commands to add and replace the older
 brick.


Yes, peer probe can be done only from a server which is already part of the
cluster (trusted storage pool). Servers from outside cannot peer probe into
it.


 Was checking any option to allow new  bricks to join the cluster and
 replace the older brick address?


You can peer probe a new server from within the cluster. And then you can
use either replace-brick or remove-brick/add-brick to replace your existing
brick.
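
Roughly something like this, run from a server that is already part of the
cluster (the host and brick names below are only placeholders, and for
replicated volumes you replace bricks per replica set):

gluster peer probe newserver
gluster volume replace-brick myvol oldserver:/bricks/b1 newserver:/bricks/b1 commit force

or, with remove-brick/add-brick:

gluster volume add-brick myvol newserver:/bricks/b1
gluster volume remove-brick myvol oldserver:/bricks/b1 start
gluster volume remove-brick myvol oldserver:/bricks/b1 status
gluster volume remove-brick myvol oldserver:/bricks/b1 commit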

HTH

Best Regards,
Vishwanath




 Regards
 John

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Expanding a replicated volume

2015-07-03 Thread M S Vishwanath Bhat
On 3 July 2015 at 15:02, Sjors Gielen sj...@sjorsgielen.nl wrote:

 Hi Vishwanath,

 On Thu, 2 Jul 2015 at 21:51, M S Vishwanath Bhat msvb...@gmail.com
 wrote:

 AFAIK there are two ways you can trigger the self-heal

 1. Use the gluster CLI heal command. I'm not sure why it didn't work
 for you and needs to be investigated.


 Do you think I should file a bug for this? I can reliably reproduce using
 the steps in my original e-mail. (This is Gluster 3.7.2, by the way.)

Yes, you should file a bug if it's not working.

Meanwhile Pranith or Xavi (self-heal developers) might be able to help you.

Best Regards,
Vishwanath



 2. Running 'stat' on files on gluster volume mountpoint, So if you run
 stat on the entire mountpoint, the files should be properly synced across
 all the replica bricks.


 This indeed seems to do the same as the `du`: when run as root on the
 server running the complete brick, the file appears on the incomplete brick
 as well. Initially as an empty file, but after a few seconds the complete
 file exists. When the `stat` is not ran as root, this doesn't happen, which
 I still think is bizarre.

 Thanks,
 Sjors

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Can't start geo-replication with version 3.6.3

2015-07-03 Thread M S Vishwanath Bhat
On 3 July 2015 at 16:45, John Ewing johnewi...@gmail.com wrote:

 I am only allowing port 22 inbound on the slave server , I thought that
 the traffic would be tunnelled over ssh, is this not the case ?


Well, to mount the volume the client needs to communicate with glusterd
(which runs on port 24007), so that needs to be open. The client also talks
to the bricks once mounted, so the brick port (49152 in your case) should be
open too.
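
Something along these lines on the slave might help (just a sketch using
iptables; adjust to whatever firewall/security-group setup you actually use),
followed by a manual mount test from the master side:

iptables -I INPUT -p tcp --dport 24007 -j ACCEPT
iptables -I INPUT -p tcp --dport 49152 -j ACCEPT

# from the master, check that the slave volume can actually be mounted
mount -t glusterfs X.X.X.X:/myvol /mnt/slave-test

(/mnt/slave-test is just a temporary mount point for the test.)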

Best Regards,
Vishwanath



 Thanks

 J.


 On Fri, Jul 3, 2015 at 12:09 PM, M S Vishwanath Bhat msvb...@gmail.com
 wrote:



 On 3 July 2015 at 15:54, John Ewing johnewi...@gmail.com wrote:

 Hi Vishwanath,

 The slave volume is definitely started

 [root@ip-192-168-4-55 ~]# gluster volume start myvol start
 volume start: myvol: failed: Volume myvol already started


 IIRC geo-rep create first tries to mount the slave volume in master in
 some temporary location.

 Can you try mounting the slave in master once manually? If the mount
 doesn't work, there might some firewall restrictions in the slave volume?
 Can you please check that as well?

 Hope it helps...

 Best Regards,
 Vishwanath


 [root@ip-192-168-4-55 ~]# gluster volume status
 Status of volume: myvol
 Gluster process PortOnline
  Pid

 --
 Brick 192.168.4.55:/export/xvdb1/brick  49152   Y
 9972
 NFS Server on localhost 2049Y
 12238

 Task Status of Volume myvol

 --
 There are no active volume tasks


 Anyone have any debugging suggestions ?

 Thanks

 John.

 On Thu, Jul 2, 2015 at 8:46 PM, M S Vishwanath Bhat msvb...@gmail.com
 wrote:



 On 2 July 2015 at 21:33, John Ewing johnewi...@gmail.com wrote:

 Hi,

 I'm trying to build a new geo-replicated cluster using Centos 6.6 and
 Gluster 3.6.3

 I've got as far creating a replicated volume with two peers on site,
 and a slave volume in EC2.

 I've set up passwordless ssh from one of the pair to the slave server,
 and I've run

 gluster system:: execute gsec_create


 When I try and create the geo-replication relationship between the
 servers I get:


 gluster volume geo-replication myvol X.X.X.X::myvol create  push-pem
 force

  Unable to fetch slave volume details. Please check the slave cluster
 and slave volume.
  geo-replication command failed


 I remember seeing this error when the slave volume is either not
 created or not started or not present in your x.x.x.x host.

 Can you check if the slave volume is started?

 Best Regards,
 Vishwanath




 The geo-replication-slaves log file from the master looks like this


 [2015-07-02 15:13:37.324823] I [rpc-clnt.c:1761:rpc_clnt_reconfig]
 0-myvol-client-0: changing port to 49152 (from 0)
 [2015-07-02 15:13:37.334874] I
 [client-handshake.c:1413:select_server_supported_programs]
 0-myvol-client-0: Using Program GlusterFS 3.3, Num (1298437), Version 
 (330)
 [2015-07-02 15:13:37.335419] I
 [client-handshake.c:1200:client_setvolume_cbk] 0-myvol-client-0: Connected
 to myvol-client-0, attached to remote volume '/export/sdb1/brick,'.
 [2015-07-02 15:13:37.335493] I
 [client-handshake.c:1210:client_setvolume_cbk] 0-myvol-client-0: Server 
 and
 Client lk-version numbers are not same, reopening the fds
 [2015-07-02 15:13:37.336050] I [MSGID: 108005]
 [afr-common.c:3669:afr_notify] 0-myvol-replicate-0: Subvolume
 'myvol-client-0' came back up; going online.
 [2015-07-02 15:13:37.336170] I [rpc-clnt.c:1761:rpc_clnt_reconfig]
 0-myvol-client-1: changing port to 49152 (from 0)
 [2015-07-02 15:13:37.336298] I
 [client-handshake.c:188:client_set_lk_version_cbk] 0-myvol-client-0: 
 Server
 lk version = 1
 [2015-07-02 15:13:37.343247] I
 [client-handshake.c:1413:select_server_supported_programs]
 0-myvol-client-1: Using Program GlusterFS 3.3, Num (1298437), Version 
 (330)
 [2015-07-02 15:13:37.343964] I
 [client-handshake.c:1200:client_setvolume_cbk] 0-myvol-client-1: Connected
 to myvol-client-1, attached to remote volume '/export/sdb1/brick'.
 [2015-07-02 15:13:37.344043] I
 [client-handshake.c:1210:client_setvolume_cbk] 0-myvol-client-1: Server 
 and
 Client lk-version numbers are not same, reopening the fds
 [2015-07-02 15:13:37.351151] I [fuse-bridge.c:5080:fuse_graph_setup]
 0-fuse: switched to graph 0
 [2015-07-02 15:13:37.351491] I
 [client-handshake.c:188:client_set_lk_version_cbk] 0-myvol-client-1: 
 Server
 lk version = 1
 [2015-07-02 15:13:37.352078] I [fuse-bridge.c:4009:fuse_init]
 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 
 kernel
 7.14
 [2015-07-02 15:13:37.355056] I
 [afr-common.c:1477:afr_local_discovery_cbk] 0-myvol-replicate-0: selecting
 local read_child myvol-client-0
 [2015-07-02 15:13:37.396403] I [fuse-bridge.c:4921:fuse_thread_proc]
 0-fuse: unmounting /tmp/tmp.NPixVv7xk9
 [2015-07-02 15:13:37.396922] W [glusterfsd.c:1194:cleanup_and_exit

Re: [Gluster-users] Expanding a replicated volume

2015-07-02 Thread M S Vishwanath Bhat
On 2 July 2015 at 18:35, Sjors Gielen sj...@sjorsgielen.nl wrote:

 2015-07-02 14:25 GMT+02:00 Sjors Gielen sj...@sjorsgielen.nl:

 At this point, /local/glustertest/stor1 is still filled on mallorca, and
 empty on hawaii (except for .glusterfs). Here is the actual question: how
 do I sync the contents of the two?


 I found another way:  by doing a `du -hc /stor1` on mallorca, all files
 instantly appear on hawaii as well. Bizarrely, this only works when running
 `du` as root on Mallorca; running it as another user does give the correct
 output but does not make the files appear on Hawaii.


AFAIK there are two ways you can trigger self-heal:

1. Use the gluster CLI heal command. I'm not sure why it didn't work for
you; that needs to be investigated.

2. Running 'stat' on the files on the gluster volume mountpoint. So if you
run stat on the entire mountpoint, the files should get properly synced
across all the replica bricks (see the sketch below).
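
A rough sketch of both approaches, assuming the volume is called stor1 and
is mounted at /mnt/stor1 (adjust to your actual volume name and mount point):

gluster volume heal stor1 full

# or walk the whole mountpoint so every file gets looked up (and healed)
find /mnt/stor1 -exec stat {} \; > /dev/null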

*my two cents*

Cheers,
Vishwanath


 Sjors

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Can't start geo-replication with version 3.6.3

2015-07-02 Thread M S Vishwanath Bhat
On 2 July 2015 at 21:33, John Ewing johnewi...@gmail.com wrote:

 Hi,

 I'm trying to build a new geo-replicated cluster using Centos 6.6 and
 Gluster 3.6.3

 I've got as far creating a replicated volume with two peers on site, and a
 slave volume in EC2.

 I've set up passwordless ssh from one of the pair to the slave server, and
 I've run

 gluster system:: execute gsec_create


 When I try and create the geo-replication relationship between the servers
 I get:


 gluster volume geo-replication myvol X.X.X.X::myvol create  push-pem force

  Unable to fetch slave volume details. Please check the slave cluster and
 slave volume.
  geo-replication command failed


I remember seeing this error when the slave volume is either not created,
not started, or not present on your X.X.X.X host.

Can you check if the slave volume is started?
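
For example, on the X.X.X.X host something like this (myvol being the slave
volume name from your command):

gluster volume info myvol
gluster volume status myvol
gluster volume start myvol    # only if it is still in "Created" state

The info output should show "Status: Started" before geo-rep create is run.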

Best Regards,
Vishwanath




 The geo-replication-slaves log file from the master looks like this


 [2015-07-02 15:13:37.324823] I [rpc-clnt.c:1761:rpc_clnt_reconfig]
 0-myvol-client-0: changing port to 49152 (from 0)
 [2015-07-02 15:13:37.334874] I
 [client-handshake.c:1413:select_server_supported_programs]
 0-myvol-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
 [2015-07-02 15:13:37.335419] I
 [client-handshake.c:1200:client_setvolume_cbk] 0-myvol-client-0: Connected
 to myvol-client-0, attached to remote volume '/export/sdb1/brick,'.
 [2015-07-02 15:13:37.335493] I
 [client-handshake.c:1210:client_setvolume_cbk] 0-myvol-client-0: Server and
 Client lk-version numbers are not same, reopening the fds
 [2015-07-02 15:13:37.336050] I [MSGID: 108005]
 [afr-common.c:3669:afr_notify] 0-myvol-replicate-0: Subvolume
 'myvol-client-0' came back up; going online.
 [2015-07-02 15:13:37.336170] I [rpc-clnt.c:1761:rpc_clnt_reconfig]
 0-myvol-client-1: changing port to 49152 (from 0)
 [2015-07-02 15:13:37.336298] I
 [client-handshake.c:188:client_set_lk_version_cbk] 0-myvol-client-0: Server
 lk version = 1
 [2015-07-02 15:13:37.343247] I
 [client-handshake.c:1413:select_server_supported_programs]
 0-myvol-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
 [2015-07-02 15:13:37.343964] I
 [client-handshake.c:1200:client_setvolume_cbk] 0-myvol-client-1: Connected
 to myvol-client-1, attached to remote volume '/export/sdb1/brick'.
 [2015-07-02 15:13:37.344043] I
 [client-handshake.c:1210:client_setvolume_cbk] 0-myvol-client-1: Server and
 Client lk-version numbers are not same, reopening the fds
 [2015-07-02 15:13:37.351151] I [fuse-bridge.c:5080:fuse_graph_setup]
 0-fuse: switched to graph 0
 [2015-07-02 15:13:37.351491] I
 [client-handshake.c:188:client_set_lk_version_cbk] 0-myvol-client-1: Server
 lk version = 1
 [2015-07-02 15:13:37.352078] I [fuse-bridge.c:4009:fuse_init]
 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel
 7.14
 [2015-07-02 15:13:37.355056] I [afr-common.c:1477:afr_local_discovery_cbk]
 0-myvol-replicate-0: selecting local read_child myvol-client-0
 [2015-07-02 15:13:37.396403] I [fuse-bridge.c:4921:fuse_thread_proc]
 0-fuse: unmounting /tmp/tmp.NPixVv7xk9
 [2015-07-02 15:13:37.396922] W [glusterfsd.c:1194:cleanup_and_exit] (--
 0-: received signum (15), shutting down
 [2015-07-02 15:13:37.396970] I [fuse-bridge.c:5599:fini] 0-fuse:
 Unmounting '/tmp/tmp.NPixVv7xk9'.
 [2015-07-02 15:13:37.412584] I [MSGID: 100030] [glusterfsd.c:2018:main]
 0-glusterfs: Started running glusterfs version 3.6.3 (args: glusterfs
 --xlator-option=*dht.lookup-unhashed=off --volfile-server X.X.X.X
 --volfile-id myvol -l /var/log/glusterfs/geo-replication-slaves/slave.log
 /tmp/tmp.am6rnOYxE7)
 [2015-07-02 15:14:40.423812] E [socket.c:2276:socket_connect_finish]
 0-glusterfs: connection to X.X.X.X:24007 failed (Connection timed out)
 [2015-07-02 15:14:40.424077] E [glusterfsd-mgmt.c:1811:mgmt_rpc_notify]
 0-glusterfsd-mgmt: failed to connect with remote-host: X.X.X.X (Transport
 endpoint is not connected)
 [2015-07-02 15:14:40.424119] I [glusterfsd-mgmt.c:1817:mgmt_rpc_notify]
 0-glusterfsd-mgmt: Exhausted all volfile servers
 [2015-07-02 15:14:40.424557] W [glusterfsd.c:1194:cleanup_and_exit] (--
 0-: received signum (1), shutting down
 [2015-07-02 15:14:40.424626] I [fuse-bridge.c:5599:fini] 0-fuse:
 Unmounting '/tmp/tmp.am6rnOYxE7'.


 I'm confused by the error message about not being able to connect to the
 slave on port 24007. Should it not be connecting over ssh ?

 Thanks

 John.

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Looking for Fedora Package Maintainers.

2015-06-24 Thread M S Vishwanath Bhat
On 24 June 2015 at 14:56, Humble Devassy Chirammal humble.deva...@gmail.com
 wrote:

 Hi All,

 As we maintain 3 releases ( currently 3.5, 3.6 and 3.7)  of GlusterFS and
 having an average of  one release per week , we need more helping hands on
 this task.

 The responsibility includes building fedora and epel rpms using koji build
 system and deploying  the rpms to download.gluster.org [1] after signing
 and creating repos.

 If any one is interested to help us on maintaining fedora GlusterFS
 packaging, please let us ( kkeithley,  ndevos or myself )  know.


I'm interested in helping with maintaining the gluster packaging.

Best Regards,
Vishwanath


 [1] http://download.gluster.org/pub/gluster/glusterfs/

 --Humble


 ___
 Gluster-devel mailing list
 gluster-de...@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] geo-replication settings: ignore-deletes

2015-06-18 Thread M S Vishwanath Bhat
On 18 June 2015 at 19:51, Sander Zijlstra sander.zijls...@surfsara.nl
wrote:

 LS,

 I’ve created a replication session (cascaded btw) from our production
 cluster to a backup cluster with version 3.6.2 on CentOS 6.6 and when I
 want to deactivate the “ignore-deletes” option I get the following error:

 [root@b02-bkp-01 ]# gluster volume geo-replication bkp01gv0
 b02-bkp-02::bkp02gv0 config '!ignore-deletes'
 Reserved option
 geo-replication command failed

 According to the docs this is the way to deactivate the option, but
 clearly it doesn’t work!

 How do I de-actvate this option as I clearly don’t want deletes to be
 ignored when syncing 50TB …..


I believe the command is \!ignore-deletes

Like below

gluster volume geo-replication bkp01gv0 b02-bkp-02::bkp02gv0 config
\!ignore-deletes

Hope it works :)

Greetings,
Vishwanath




 Met vriendelijke groet / kind regards,

 *Sander Zijlstra*

 | Linux Engineer | SURFsara | Science Park 140 | 1098XG Amsterdam | T +31
 (0)6 43 99 12 47 | sander.zijls...@surfsara.nl | www.surfsara.nl |

 *Regular day off on friday*


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] brick become to offline

2015-06-15 Thread M S Vishwanath Bhat
On 15 June 2015 at 07:38, 何亦军 heyi...@greatwall.com.cn wrote:

  Hi,



  I have done a gluster upgrade to 3.7.1 and then downgraded back to
 3.6.2. I don't know what went wrong, but the server bricks have gone
 offline.  How to fix that problem?  Thanks so much.



 [root@gwgfs01 ~]# gluster volume status

 Status of volume: vol01

 Gluster process PortOnline  Pid


 --

 Brick gwgfs01:/data/brick1/vol01N/A N   N/A

 Brick gwgfs03:/data/brick2/vol0149152   Y
 17566

 Brick gwgfs01:/data/brick2/vol01N/A N   N/A

 Brick gwgfs02:/data/brick2/vol0149152   Y
 4109

 Brick gwgfs02:/data/brick1/vol0149153   Y
 4121

 Brick gwgfs03:/data/brick1/vol0149153   Y
 17623

 Self-heal Daemon on localhost   N/A Y
 12720

 Quota Daemon on localhost   N/A Y
 12727

 Self-heal Daemon on gwgfs02 N/A Y
 4412

 Quota Daemon on gwgfs02 N/A Y
 4422

 Self-heal Daemon on gwgfs03 N/A Y
 17642

 Quota Daemon on gwgfs03 N/A Y
 17652



 Task Status of Volume vol01


 --

 Task : Rebalance

 ID   : 0bb30902-e7d3-4b2f-9c83-b708ebbad592

 Status   : failed





 some log in data-brick1-vol01.log :



 [2015-06-15 02:06:54.757968] I [MSGID: 100030] [glusterfsd.c:2018:main]
 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.6.2
 (args: /usr/sbin/glusterfsd -s gwgfs01 --volfile-id
 vol01.gwgfs01.data-brick1-vol01 -p
 /var/lib/glusterd/vols/vol01/run/gwgfs01-data-brick1-vol01.pid -S
 /var/run/ecf2e5c591c01357cf33cbaf3b700bc6.socket --brick-name
 /data/brick1/vol01 -l /var/log/glusterfs/bricks/data-brick1-vol01.log
 --xlator-option *-posix.glusterd-uuid=b80f71d0-6944-4236-af96-e272a1f7e739
 --brick-port 49152 --xlator-option vol01-server.listen-port=49152)

 [2015-06-15 02:06:56.808733] W [xlator.c:191:xlator_dynload] 0-xlator:
 /usr/lib64/glusterfs/3.6.2/xlator/features/trash.so: cannot open shared
 object file: No such file or directory

 [2015-06-15 02:06:56.808793] E [graph.y:212:volume_type] 0-parser: Volume
 'vol01-trash', line 9: type 'features/trash' is not valid or not found on
 this machine

 [2015-06-15 02:06:56.808896] E [graph.y:321:volume_end] 0-parser: type
 not specified for volume vol01-trash

 [2015-06-15 02:06:56.809044] E [MSGID: 100026]
 [glusterfsd.c:1892:glusterfs_process_volfp] 0-: failed to construct the
 graph

 [2015-06-15 02:06:56.809369] W [glusterfsd.c:1194:cleanup_and_exit] (--
 0-: received signum (0), shutting down


I'm not an expert at analysing the logs, but it looks like the trash xlator
(which is present only in 3.7.x) is causing problems, because that feature is
unavailable in the 3.6.x versions. Maybe it's picking up the wrong volfile?

Also, did you downgrade the whole setup or just part of the cluster?
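
A quick (hypothetical) check would be to grep the volfiles that glusterd
generated while you were on 3.7.1, usually under /var/lib/glusterd:

grep -rn "features/trash" /var/lib/glusterd/vols/vol01/

If those volfiles still reference features/trash, the 3.6.2 brick processes
will fail to load them, which matches the "type 'features/trash' is not
valid" error in your log.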

Cheers,
Vishwanath




 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] geo-replication vs replicated volumes

2015-06-10 Thread M S Vishwanath Bhat
On 10 June 2015 at 12:24, Gabriel Kuri gk...@ieee.org wrote:

 I need some clarification on how geo-replication (gluster = 3.5)
 operates, as I'm not fully understanding how the new and improved version
 works from the docs.

 Let's assume the following scenario, three servers setup in a geo-rep
 cluster all separated by a WAN:

 server a -- WAN -- server b -- WAN -- server c

 Does this scenario allow for a client with the a volume mounted to write
 to any of the servers directly and the write then gets replicated from that
 server to the other servers? For example, a client has the volume mounted
 (via FUSE) on server C and writes a file, does that file get written to
 server C directly and then the file replicates (asynchronously) to server A
 and server B ? Or is it that the writes only occur on the master of that
 geo-rep volume? I'm trying to understand if the replication for geo-rep
 occurs in a master-master setup or if it's still master-slave ? What was
 the big change for geo-rep in 3.5 ?


glusterfs doesn't support master-master yet. In your case, one of the
servers (A or B or C) should be the master and your client should write only
to that volume. The other two volumes should be read-only until the volume
on the master server fails for some reason.

The big change in glusterfs-3.5 was the redesign of geo-replication.
Earlier, one single node in the master volume was responsible for syncing
data to the slave (which had lots of performance problems). From
glusterfs-3.5 onwards, the responsibility of syncing is shared across the
servers of the master volume.


 If it's not master-master, how does one get master-master replication
 working over a WAN?


AFAIK, there is no workaround as of now; at least I am not aware of one.

Best Regards,
Vishwanath


 Thanks ...


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Any plans for a c API?

2015-06-10 Thread M S Vishwanath Bhat
On 10 June 2015 at 14:06, Andreas Hollaus andreas.holl...@ericsson.com
wrote:

 Hi,

 I wonder if there are any plans for a c API which would let me manage my
 GlusterFS
 volume (create volume, addr brick, remove brick, examine status and so on)?


I am not aware of any plans for a C API. But there are plans for REST API
support to manage gluster:

http://www.gluster.org/community/documentation/index.php/Features/rest-api

Greetings,
Vishwanath


 Regards
 Andreas
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] geo-replication vs replicated volumes

2015-06-10 Thread M S Vishwanath Bhat
On 10 June 2015 at 22:38, Aravinda avish...@redhat.com wrote:



 On 06/10/2015 09:43 PM, Gabriel Kuri wrote:

   glusterfs doesn't support master-master yet. In your case, one of the
 servers (A or B or C) should be a master and your client should write to
 only that volume.
  Other two volumes should be read-only till volume in server-A fails for
 some reason.

  So the writes from the client will go directly to whichever server is the
 master, even though the client has mounted the volume on one of the slaves?
 What about the reads, do they still hit the server (ie slave) the client is
 connected to or do the reads hit the master as well?

 To be specific, Gluster Geo-rep doesn't support Master-Master. That means
 no automatic failover when master Volume goes down. Gluster replicated
 volumes support Master-Master within the Volume.

 Replicated Volumes:
 --
 The replication is synchronous, all writes on the mount will be copied to
 multiple bricks(replica count) synchronously. If one node is down during
 the write, other nodes takes care of syncing data when node comes online.
 This is automatic using self-heal.

 Replication is between bricks of single volume.

 Geo-replicated Volumes:
 
 Asynchronous replication of whole Gluster Volume. That means their will be
 delay in syncing data from Master Volume to Slave Volume. Volume topology
 does not matter Geo-replication works even if Master and Slave Volume types
 are different.

 Geo-replication is mainly used as disaster recovery mecanism like backups.
 Manual failover failback is also supported, if Master Volume goes down
 Slave Volume can become Master and can establish connection back. It is not
 supported to have Georep running in both ways at same time.


  In the case of geo-rep, how is split-brain handled? If the network is
 down between server A (master) and server B (slave) and the client has
 mounted to server B, I assume server B will then become the master and
 writes will then be committed directly to server B, but if writes were also
 committed to server A by other clients while the network was down, what
 happens when the network is back up between server A and B, does it just
 figure out which files had the most recent time stamp and commit those
 changes across all the servers?

 Since Master-Master is not supported in Geo-rep, Split brain is not
 handled.  I think there is some confusion between replicated volume and
 geo-rep. Replicated Volume replicates data within Volume. For example,
 Create a Gluster Volume with two bricks with replica count as 2.
 Bricks/Nodes cannot be across data centers. In case of Geo-rep, replication
 is between two Gluster Volumes.


Yes, I think you are confused between replication and geo-replication.

Just to add to what Aravinda mentioned: in AFR (Automatic File Replication,
that's what it is called in glusterfs) the replication happens between the
bricks of the same volume. The bricks are expected to be part of the same
network, and the replication is synchronous. You can read more at
http://gluster.readthedocs.org/en/latest/Features/afr-v1/index.html?highlight=afr

Glusterfs geo-replication, on the other hand, is between two (or more)
glusterfs volumes. These volumes in turn can have a replicated setup
internally, and the volumes are generally on different networks. It is
asynchronous, one-way replication and is mainly used for disaster recovery.
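
A rough illustration of the difference (all host, volume and brick names
below are placeholders):

# AFR: replication between bricks of a single volume, synchronous
gluster volume create myvol replica 2 node1:/bricks/b1 node2:/bricks/b1

# geo-replication: a one-way async session between two separate volumes
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start
gluster volume geo-replication mastervol slavehost::slavevol status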

I found the link below, which is very old, but the content still seems to
be valid:

http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Replicated_Volumes_vs_Geo-replication

HTH

Greetings,
Vishwanath


  If it's not master-master, how does one get master-master replication
 working over a WAN?
AFAIK, there is no work around as of now, at least I am not aware of
 it

  Does the basic replicated volume work in this fashion, reads and writes
 to all servers? The only problem is it's meant for a low latency network
 environment?

  Thanks ...




 ___
 Gluster-users mailing 
 listGluster-users@gluster.orghttp://www.gluster.org/mailman/listinfo/gluster-users


 --
 regards
 Aravinda


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Monitorig gluster 3.6.1

2015-06-08 Thread M S Vishwanath Bhat
On 1 June 2015 at 12:28, Félix de Lelelis felix.deleli...@gmail.com wrote:

 Hi,

 I am monitoring gluster with scripts that launch other scripts. All scripts
 are redirected to one script that checks whether any glusterd process is
 active, and if the response is false, the script launches the check.

 All checks are:

- gluster volume volname info
- gluster volume heal volname info
- gluster volume heal volname split-brain
- gluster volume volname status detail
- gluster volume volname statistics

 Since I enabled the monitoring in our pre-production gluster, the gluster
 has gone down 2 times. We suspect that the monitoring is overloading it,
 but it should not.

 The question is, is there any way to check those states otherwise?


You can make use of https://github.com/keithseahus/fluent-plugin-glusterfs
as well.

http://docs.fluentd.org/articles/collect-glusterfs-logs

HTH

Best Regards,
Vishwanath



 Thanks

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] slave is rebalancing, master is not?

2015-06-08 Thread M S Vishwanath Bhat
On 5 June 2015 at 20:46, Dr. Michael J. Chudobiak m...@avtechpulse.com
wrote:

 I seem to have an issue with my replicated setup.

 The master says no rebalancing is happening, but the slave says there is
 (sort of). The master notes the issue:

 [2015-06-05 15:11:26.735361] E
 [glusterd-utils.c:9993:glusterd_volume_status_aggregate_tasks_status]
 0-management: Local tasks count (0) and remote tasks count (1) do not
 match. Not aggregating tasks status.

 The slave shows some odd messages like this:
 [2015-06-05 14:44:56.525402] E [glusterfsd-mgmt.c:1494:mgmt_getspec_cbk]
 0-glusterfs: failed to get the 'volume file' from server

 I want the supposed rebalancing to stop, so I can add bricks.

 Any idea what is going on, and how to fix it?

 Both servers were recently upgraded from Fedora 21 to 22.

 Status output is below.

 - Mike



 Master: [root@karsh ~]# /usr/sbin/gluster volume status
 Status of volume: volume1
 Gluster process                             Port    Online  Pid
 ------------------------------------------------------------------------------
 Brick karsh:/gluster/brick1/data            49152   Y       4023
 Brick xena:/gluster/brick2/data             49152   Y       1719
 Brick karsh:/gluster/brick3/data            49153   Y       4015
 Brick xena:/gluster/brick4/data             49153   Y       1725
 NFS Server on localhost                     2049    Y       4022
 Self-heal Daemon on localhost               N/A     Y       4034
 NFS Server on xena                          2049    Y       24550
 Self-heal Daemon on xena                    N/A     Y       24557

 Task Status of Volume volume1

 --
 There are no active volume tasks


 [root@xena glusterfs]# /usr/sbin/gluster volume status
 Status of volume: volume1
 Gluster process                             Port    Online  Pid
 ------------------------------------------------------------------------------
 Brick karsh:/gluster/brick1/data            49152   Y       4023
 Brick xena:/gluster/brick2/data             49152   Y       1719
 Brick karsh:/gluster/brick3/data            49153   Y       4015
 Brick xena:/gluster/brick4/data             49153   Y       1725
 NFS Server on localhost                     2049    Y       24550
 Self-heal Daemon on localhost               N/A     Y       24557
 NFS Server on 192.168.0.240                 2049    Y       4022
 Self-heal Daemon on 192.168.0.240           N/A     Y       4034

 Task Status of Volume volume1

 --
 Task : Rebalance
 ID   : f550b485-26c4-49f8-b7dc-055c678afce8
 Status   : in progress

 [root@xena glusterfs]# gluster volume rebalance volume1 status
 volume rebalance: volume1: success:


This is weird. Did you start the rebalance yourself? What does gluster volume
rebalance volume1 status say? Also check whether both nodes are properly
connected using gluster peer status.

If it says completed/stopped, you can go ahead and add the bricks. Also, can
you check whether a rebalance process is running on your second server (xena)?
See the sketch below.
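
A rough sketch of those checks, run from any node of the cluster (the volume
name is taken from your output):

    gluster peer status
    gluster volume rebalance volume1 status
    # only if a rebalance really is running and you want it stopped
    # before adding bricks:
    gluster volume rebalance volume1 stop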

BTW, there is *no* master and slave in a single gluster volume :)

Best Regards,
Vishwanath




 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Features - Object Count

2015-06-01 Thread M S Vishwanath Bhat
On 29 May 2015 at 18:11, aasenov1989 aasenov1...@gmail.com wrote:

 Hi,
 So is there a way to find how many files I have on each brick of the
 volume?

I don't think gluster provides a way to get the exact number of files in
a brick or volume.

Sorry if my solution is very obvious, but I generally use find to get the
number of files in a particular brick (the pattern is quoted so the shell does
not expand it, and -type f skips directories):

find /brick/path ! -path '/brick/path/.glusterfs/*' -type f | wc -l


Best Regards,
Vishwanath


 Regards,
 Asen Asenov

 On Fri, May 29, 2015 at 3:33 PM, Atin Mukherjee 
 atin.mukherje...@gmail.com wrote:

 Sent from Samsung Galaxy S4
 On 29 May 2015 17:59, aasenov1989 aasenov1...@gmail.com wrote:
 
  Hi,
  Thnaks for the help. I was able to retrieve number of objects for
 entire volume. But I didn't figure out how to set quota for particular
 brick. I have replicated volume with 2 bricks on 2 nodes:
  Bricks:
  Brick1: host1:/dataDir
  Brick2: host2:/dataDir
  Both bricks are up and files are replicated. But when I try to set
 quota on a particular brick:
 IIUC, You won't be able to set quota at brick level as multiple bricks
 comprise a volume which is exposed to the user. Quota team can correct me
 if I am wrong.

 
  gluster volume quota TestVolume limit-objects /dataDir/
 9223372036854775807
  quota command failed : Failed to get trusted.gfid attribute on path
 /dataDir/. Reason : No such file or directory
  please enter the path relative to the volume
 
  What should be the path to brick directories relative to the volume?
 
  Regards,
  Asen Asenov
 
 
  On Fri, May 29, 2015 at 12:35 PM, Sachin Pandit span...@redhat.com
 wrote:
 
  - Original Message -
   From: aasenov1989 aasenov1...@gmail.com
   To: Humble Devassy Chirammal humble.deva...@gmail.com
   Cc: Gluster-users@gluster.org List gluster-users@gluster.org
   Sent: Friday, May 29, 2015 12:22:43 AM
   Subject: Re: [Gluster-users] Features - Object Count
  
   Thanks Humble,
   But as far as I understand the object count is connected with the
 quotas set
   per folders. What I want is to get number of files I have in entire
 volume -
   even when volume is distributed across multiple computers. I think
 the
   purpose of this feature:
  
 http://gluster.readthedocs.org/en/latest/Feature%20Planning/GlusterFS%203.7/Object%20Count/
 
  Hi,
 
  You are absolutely correct. You can retrieve number of files in the
 entire
  volume if you have the limit-objects set on the root. If
 limit-objects
  is set on the directory present in a mount point then it will only show
  the number of files and directories of that particular directory.
 
  In your case, if you want to retrieve number of files and directories
  present in the entire volume then you might have to set the object
 limit
  on the root.
 
 
  Thanks,
  Sachin Pandit.
 
 
   is to provide such functionality. Am I right or there is no way to
 retrieve
   number of files for entire volume?
  
   Regards,
   Asen Asenov
  
   On Thu, May 28, 2015 at 8:09 PM, Humble Devassy Chirammal 
   humble.deva...@gmail.com  wrote:
  
  
  
   Hi Asen,
  
  
 https://gluster.readthedocs.org/en/latest/Features/quota-object-count/ ,
 hope
   this helps.
  
   --Humble
  
  
   On Thu, May 28, 2015 at 8:38 PM, aasenov1989  aasenov1...@gmail.com
  wrote:
  
  
  
   Hi,
   I wanted to ask how to use this feature in gluster 3.7.0, as I was
 unable to
   find anything. How can I retrieve number of objects in volume and
 number of
   objects in particular brick?
  
   Thanks in advance.
  
   Regards,
   Asen Asenov
  
   ___
   Gluster-users mailing list
   Gluster-users@gluster.org
   http://www.gluster.org/mailman/listinfo/gluster-users
  
  
  
   ___
   Gluster-users mailing list
   Gluster-users@gluster.org
   http://www.gluster.org/mailman/listinfo/gluster-users
 
 
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://www.gluster.org/mailman/listinfo/gluster-users



 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Network Topology Question

2015-05-22 Thread M S Vishwanath Bhat
On 22 May 2015 at 12:11, Patrick Ernst patrick.er...@bullites.com wrote:

 Hi there,

 I'm planning to setup a 3-Node Cluster for oVirt and would like to use 56
 GBe (RoCe)
 exclusively for GlusterFS. Since 56 GBe switches are far too expensive
 and  it's not
 planned to add more nodes and furthermore this would add a SPOF I'd like
 to
 cross connect the nodes as shown in the diagram below:

 Node 1   Node 2Node3
   ||___||||
   |___|

 This way there's a dedicated 56 Gbit connection to/from each member node.

 Is is possible to do this with GlusterFS?


IIUC, you have a 56 GbE dedicated network among the glusterfs servers. Right?

If that is correct, it's not going to help much. In glusterfs, clustering
(distribution/replication) is done by the client, so having a faster
connection from the clients to the servers would be more helpful.
Only the glusterds (and glustershd as well?) talk to each other among the
servers. Maybe someone can elaborate more.

My first thought was to have different IPs in each node's /etc/hosts mapped
 to the node hostnames, but I'm unsure if I can force GlusterFS to use
 hostnames instead of IPs.


Yes, you can use any properly resolvable hostname with gluster.
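
For example, something along these lines should work; the addresses and names
below are only illustrative. Each node's /etc/hosts maps the cross-connect IPs
to storage hostnames:

    192.168.10.2   node2-storage
    192.168.20.3   node3-storage

and the pool and volume are then built with those names:

    gluster peer probe node2-storage
    gluster peer probe node3-storage
    gluster volume create gv0 replica 3 node1-storage:/bricks/b1 \
        node2-storage:/bricks/b1 node3-storage:/bricks/b1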

HTH

Best Regards,
Vishwanath


 Patrick

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Distributed volume going to Read only mode if any of the Brick is not available

2015-05-20 Thread M S Vishwanath Bhat
On 20 May 2015 at 17:18, Varadharajan S rajanvara...@gmail.com wrote:

 Hi Team,
 Anyone can suggest my below query, so that I can get clear idea

 Regards,
 Varad
 On 19 May 2015 20:28, Varadharajan S rajanvara...@gmail.com wrote:

 FYI
 On 19 May 2015 20:25, Varadharajan S rajanvara...@gmail.com wrote:

 Hi,
 Replication means I won't get the space. Distribution is not like striping,
 right? If one brick is not available in the volume, can the other bricks
 distribute the data between them? Will any tuning give a solution?

In a pure distribute volume, there is no duplicate of a file. So when the
brick/server containing a file goes down, you lose that data.

And about newly created data: *if* a file gets hashed to the brick/server
which is down, you get errors. Files that get hashed to bricks/servers which
are online work just fine.

NOTE: If you create new directories, they get distributed among the bricks
which are up. Meaning, in your case, they get distributed among the 3 bricks
which are up.


So if you want redundancy but do not want the space disadvantage of pure
replication, why not try a disperse volume?
http://www.gluster.org/community/documentation/index.php/Features/disperse
But for that you will have to upgrade to the latest glusterfs-3.7 version.
See the sketch below.
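
For example (just a sketch reusing your node names; the brick directories would
need to be fresh, empty paths):

    gluster volume create dispvol disperse 4 redundancy 1 \
        rep1:/pool/disp rep2:/pool/disp rep3:/pool/disp st1:/pool/disp
    gluster volume start dispvol

With 4 bricks and redundancy 1 you get the usable space of roughly 3 bricks and
can survive any one brick going down.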

HTH

Best Regards,
Vishwanath

 On 19 May 2015 20:02, Atin Mukherjee atin.mukherje...@gmail.com wrote:


 On 19 May 2015 17:10, Varadharajan S rajanvara...@gmail.com wrote:
 
  Hi,
 
  We are using Ubuntu 14.04 server and for storage purpose we
 configured gluster 3.5 as distributed volume and find the below details,
 
  1).4 Servers - 14.04 Ubuntu Server and each server disks free spaces
 are  configured as ZFS raiddz2 volume
 
  2). Each server has /pool/gluster zfs volume and capacity as - 5 TB,8
 TB,6 TB and 10 TB
 
  3). Bricks are - rep1,rep2,rep3 and st1 and all the bricks are
 connected as Distributed Volume and mounted on each system as,
 
    For example, in rep1 - mount -t glusterfs rep1:/glustervol /data
                    rep2 - mount -t glusterfs rep2:/glustervol /data
                    rep3 - mount -t glusterfs rep3:/glustervol /data
                    st1  - mount -t glusterfs st1:/glustervol  /data
 
  So we get /data is having  around 29 TB and all our applications
 data's are stored in /data mount point.
 
  Details about volume:
 
  volume glustervol-client-0
  type protocol/client
  option send-gids true
  option password b217da9d1d8b-bb55
  option username 9d76-4553-8c75
  option transport-type tcp
  option remote-subvolume /pool/gluster
  option remote-host rep1
  option ping-timeout 42
  end-volume
 
  volume glustervol-client-1
  type protocol/client
  option send-gids true
  option password b217da9d1d8b-bb55
  option username jkd76-4553-5347
  option transport-type tcp
  option remote-subvolume /pool/gluster
  option remote-host rep2
  option ping-timeout 42
  end-volume
 
  volume glustervol-client-2
  type protocol/client
  option send-gids true
  option password b217da9d1d8b-bb55
  option username 19d7-5a190c2
  option transport-type tcp
  option remote-subvolume /pool/gluster
  option remote-host rep3
  option ping-timeout 42
  end-volume
 
  volume glustervol-client-3
  type protocol/client
  option send-gids true
  option password b217da9d1d8b-bb55
  option username c75-5436b5a168347
  option transport-type tcp
  option remote-subvolume /pool/gluster
  option remote-host st1
 
  option ping-timeout 42
  end-volume
 
  volume glustervol-dht
  type cluster/distribute
  subvolumes glustervol-client-0 glustervol-client-1
 glustervol-client-2 glustervol-client-3
  end-volume
 
  volume glustervol-write-behind
  type performance/write-behind
  subvolumes glustervol-dht
  end-volume
 
  volume glustervol-read-ahead
  type performance/read-ahead
  subvolumes glustervol-write-behind
  end-volume
 
  volume glustervol-io-cache
  type performance/io-cache
  subvolumes glustervol-read-ahead
  end-volume
 
  volume glustervol-quick-read
  type performance/quick-read
  subvolumes glustervol-io-cache
  end-volume
 
  volume glustervol-open-behind
  type performance/open-behind
  subvolumes glustervol-quick-read
  end-volume
 
  volume glustervol-md-cache
  type performance/md-cache
  subvolumes glustervol-open-behind
  end-volume
 
  volume glustervol
  type debug/io-stats
  option count-fop-hits off
  option latency-measurement off
  subvolumes glustervol-md-cache
  end-volume
 
 
  ap@rep3:~$ sudo gluster volume info
 
  Volume Name: glustervol
  Type: Distribute
  Volume ID: 165b-X
  Status: Started
  Number of Bricks: 4
  Transport-type: tcp
  Bricks:
  Brick1: rep1:/pool/gluster
  Brick2: rep2:/pool/gluster
  Brick3: rep3:/pool/gluster
  Brick4: st1:/pool/gluster
 
  Problem:
 
  If we shutdown any of the bricks 

Re: [Gluster-users] About gluster fsck question, thanks

2015-05-13 Thread M S Vishwanath Bhat
On 12 May 2015 at 17:48, zhengbin.08...@h3c.com zhengbin.08...@h3c.com
wrote:

  I want to know how glusterfs run fsck, I found some thing about fsck on
 the internet


glusterfs itself will *not* run any fsck on the brick. That should be taken
care of by the filesystem of the backend bricks (XFS, ext4, etc.). gluster just
makes sure that the bricks of a replica pair are consistent with each other.
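
For example, once a crashed brick is back online, something like the following
can be used on a replicated volume to check and trigger the self-heal (VOLNAME
is a placeholder):

    gluster volume heal VOLNAME info    # entries still pending heal
    gluster volume heal VOLNAME         # heal the pending entries
    gluster volume heal VOLNAME full    # force a full heal if needed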

Best Regards,
Vishwanath





 *What happens if a GlusterFS brick crashes?*

 You treat it like any other storage server. The underlying filesystem will
 run fsck and recover from crash. With journaled file system such as Ext3 or
 XFS, recovery is much faster and safer. When the brick comes back,
 glusterfs fixes all the changes on it by its self-heal feature.





 So this means glusterfs just use the underlying filesystem’s fsck, Does
 glusterfs have it’s own fsck?


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Gluster Benchmark Kit

2015-04-29 Thread M S Vishwanath Bhat
On 28 April 2015 at 01:03, Benjamin Turner bennytu...@gmail.com wrote:

 Hi Kiran, thanks for the feedback!  I already put up a repo on githib:

 https://github.com/bennyturns/gluster-bench

 On my TODO list is:

 -The benchmark is currently RHEL / RHGS(Red Hat Gluster Storage) specific,
 I want to make things work with at least non paid RPM distros and Ubuntu.
 -Other filesystems(like you mentioned)
 -No LVM and non thinp config options.
 -EC, tiering, snapshot capabilities.

 I'll probably fork things and have a Red Hat specific version and an
 upstream version.  As soon as I have everything working on Centos I'll let
 the list know and we can enhance things to do whatever we need.  I always
 thought it would be interesting if we had a page where people could submit
 their benchmark data and the HW / config used.  Having a standard tool /
 tool set will help there.


Ben,

Do you think it is a good idea (or is it possible) to integrate these with
distaf? (https://github.com/gluster/distaf)

That would enable us to choose workloads suitable to each scenario for a
single test (or set of tests).

Best Regards,
Vishwanath



 -b


 On Mon, Apr 27, 2015 at 3:31 AM, Kiran Patil ki...@fractalio.com wrote:

 Hi,

 I came across Gluster Benchmark Kit while reading [Gluster-users]
 Disastrous performance with rsync to mounted Gluster volume thread.

 http://54.82.237.211/gluster-benchmark/gluster-bench-README

 http://54.82.237.211/gluster-benchmark

 The Kit includes tools such as iozone, smallfile and fio.

 This Kit is not documented and need to baseline this tool for Gluster
 Benchmark testing.

 The community is going to benefit by adopting and extending it as per
 their needs and the kit should be hosted on Github.

 The init.sh script in the Kit contains only XFS filesystem which can be
 extended to BTRFS and ZFS.

 Thanks Ben Turner for sharing it.

 Kiran.

 ___
 Gluster-devel mailing list
 gluster-de...@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel



 ___
 Gluster-devel mailing list
 gluster-de...@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Add brick question.

2015-03-19 Thread M S Vishwanath Bhat
On 19 March 2015 at 14:59, 何亦军 heyi...@greatwall.com.cn wrote:

  Hi Guys,



  I have two servers in my pool , I plan add new server to that pool.

  My volume info below:



 Volume Name: vol01

 Type: Distributed-Replicate

 Volume ID: 0bcd8d7c-b48a-4408-b7b8-56d4b5f8a97c

 Status: Started

 Number of Bricks: 2 x 2 = 4

 Transport-type: tcp

 Bricks:

 Brick1: gwgfs01:/data/brick1/vol01

 Brick2: gwgfs02:/data/brick1/vol01

 Brick3: gwgfs01:/data/brick2/vol01

 Brick4: gwgfs02:/data/brick2/vol01



 I plan to form a combination:



 Brick1: gwgfs01:/data/brick1/vol01

 Brick2: gwgfs03:/data/brick2/vol01

 Brick3: gwgfs01:/data/brick2/vol01

 Brick4: gwgfs02:/data/brick2/vol01

 Brick5: gwgfs02:/data/brick1/vol01

 Brick6: gwgfs03:/data/brick1/vol01



 My processing steps:

 1.   gluster peer probe gwgfs03

 2.   gluster volume replace-brick vol01 gwgfs02:/data/brick1/vol01
 gwgfs03:/data/brick2/vol01 status

 3.   After replace completed, do: gluster volume rebalance vol01 start





 And  finally  I try to add brick meet problem:

 [root@gwgfs03 vol01]# gluster volume add-brick vol01
 gwgfs02:/data/brick1/vol01  gwgfs03:/data/brick1/vol01

 volume add-brick: failed: /data/brick1/vol01 is already part of a volume



The brick directory will have some xattrs set on it from when it was part of a
volume. So you will have to remove those xattrs from the directory before
adding the brick to a volume again. The simplest way is to delete the directory
and re-create it.
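
If you would rather keep the directory, a commonly used alternative is to strip
the gluster xattrs instead of deleting it. A rough sketch, assuming the brick
path from your mail:

    setfattr -x trusted.glusterfs.volume-id /data/brick1/vol01
    setfattr -x trusted.gfid /data/brick1/vol01
    rm -rf /data/brick1/vol01/.glusterfs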



 [root@gwgfs03 vol01]# gluster volume remove-brick vol01
 gwgfs02:/data/brick1/vol01 start

 volume remove-brick start: failed: Remove brick incorrect brick count of 1
 for replica 2


Since you have a replica 2 volume, you should add/remove bricks in multiples of
2 (whole replica sets). You can try the following (after you delete and
re-create gwgfs02:/data/brick1/vol01):

gluster volume add-brick vol01  gwgfs02:/data/brick1/vol01
gwgfs03:/data/brick1/vol01

HTH
MS



 Current , my volume info below:What can I do now? Help?



 [root@gwgfs03 vol01]# gluster volume info



 Volume Name: vol01

 Type: Distributed-Replicate

 Volume ID: 0bcd8d7c-b48a-4408-b7b8-56d4b5f8a97c

 Status: Started

 Number of Bricks: 2 x 2 = 4

 Transport-type: tcp

 Bricks:

 Brick1: gwgfs01:/data/brick1/vol01

 Brick2: gwgfs03:/data/brick2/vol01

 Brick3: gwgfs01:/data/brick2/vol01

 Brick4: gwgfs02:/data/brick2/vol01

 Options Reconfigured:

 nfs.disable: on

 user.cifs: disable

 auth.allow: *







 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 答复: Add brick question.

2015-03-19 Thread M S Vishwanath Bhat
On 19 March 2015 at 17:48, 何亦军 heyi...@greatwall.com.cn wrote:

  Thanks, MS.

 My problem resolved.

Great...


 Volume Name: vol01
 Type: Distributed-Replicate
 Volume ID: 0bcd8d7c-b48a-4408-b7b8-56d4b5f8a97c
 Status: Started
 Number of Bricks: 3 x 2 = 6
 Transport-type: tcp
 Bricks:
 Brick1: gwgfs01:/data/brick1/vol01
 Brick2: gwgfs03:/data/brick2/vol01
 Brick3: gwgfs01:/data/brick2/vol01
 Brick4: gwgfs02:/data/brick2/vol01
 Brick5: gwgfs02:/data/brick1/vol01
 Brick6: gwgfs03:/data/brick1/vol01


 I have last question, What are correct procedure like my requirement? I
 didn't find any similar case in document.

 Environment: Every node have two brick, Distributed-Replicate
 Requirement: Add node to pool


peer probe to add the new node to the pool, followed by add-brick + rebalance,
*is* the correct procedure to expand the volume (increase the storage space).
A rough sketch of the sequence is below.
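
Roughly, the whole sequence looks like this (using the brick paths from your
mail; adjust as needed):

    gluster peer probe gwgfs03
    gluster volume add-brick vol01 gwgfs02:/data/brick1/vol01 gwgfs03:/data/brick1/vol01
    gluster volume rebalance vol01 start
    gluster volume rebalance vol01 status   # wait until it shows completed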

MS




  --
 *发件人:* M S Vishwanath Bhat [msvb...@gmail.com]
 *发送时间:* 2015年3月19日 18:27
 *收件人:* 何亦军
 *抄送:* gluster-users@gluster.org
 *主题:* Re: [Gluster-users] Add brick question.



 On 19 March 2015 at 14:59, 何亦军 heyi...@greatwall.com.cn wrote:

  Hi Guys,



  I have two servers in my pool , I plan add new server to that pool.

  My volume info below:



 Volume Name: vol01

 Type: Distributed-Replicate

 Volume ID: 0bcd8d7c-b48a-4408-b7b8-56d4b5f8a97c

 Status: Started

 Number of Bricks: 2 x 2 = 4

 Transport-type: tcp

 Bricks:

 Brick1: gwgfs01:/data/brick1/vol01

 Brick2: gwgfs02:/data/brick1/vol01

 Brick3: gwgfs01:/data/brick2/vol01

 Brick4: gwgfs02:/data/brick2/vol01



 I plan to form a combination:



 Brick1: gwgfs01:/data/brick1/vol01

 Brick2: gwgfs03:/data/brick2/vol01

 Brick3: gwgfs01:/data/brick2/vol01

 Brick4: gwgfs02:/data/brick2/vol01

 Brick5: gwgfs02:/data/brick1/vol01

 Brick6: gwgfs03:/data/brick1/vol01



 My processing steps:

 1.   gluster peer probe gwgfs03

 2.   gluster volume replace-brick vol01 gwgfs02:/data/brick1/vol01
 gwgfs03:/data/brick2/vol01 status

 3.   After replace completed, do: gluster volume rebalance vol01
 start





 And  finally  I try to add brick meet problem:

 [root@gwgfs03 vol01]# gluster volume add-brick vol01
 gwgfs02:/data/brick1/vol01  gwgfs03:/data/brick1/vol01

 volume add-brick: failed: /data/brick1/vol01 is already part of a volume



 The brick directory will have some xattrs while being part of the volume.
 So you will have to remove the xattrs from the directory before adding the
 brick to the volume again. You can simply delete the directory and
 re-create it again.



 [root@gwgfs03 vol01]# gluster volume remove-brick vol01
 gwgfs02:/data/brick1/vol01 start

 volume remove-brick start: failed: Remove brick incorrect brick count of
 1 for replica 2


  Since you have replica 2 volume, you should add/remove at least 2 bricks
 at a time. You can try the following (after you delete and re-create
 gwgfs02:/data/brick1/vol01)

  gluster volume add-brick vol01  gwgfs02:/data/brick1/vol01
 gwgfs03:/data/brick1/vol01

  HTH
  MS



 Current , my volume info below:What can I do now? Help?



 [root@gwgfs03 vol01]# gluster volume info



 Volume Name: vol01

 Type: Distributed-Replicate

 Volume ID: 0bcd8d7c-b48a-4408-b7b8-56d4b5f8a97c

 Status: Started

 Number of Bricks: 2 x 2 = 4

 Transport-type: tcp

 Bricks:

 Brick1: gwgfs01:/data/brick1/vol01

 Brick2: gwgfs03:/data/brick2/vol01

 Brick3: gwgfs01:/data/brick2/vol01

 Brick4: gwgfs02:/data/brick2/vol01

 Options Reconfigured:

 nfs.disable: on

 user.cifs: disable

 auth.allow: *







 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-replication (v3.5.3)

2015-03-11 Thread M S Vishwanath Bhat
On 11 March 2015 at 06:30, John Gardeniers jgardeni...@objectmastery.com
wrote:

 Using Gluster v3.5.3 and trying to follow the geo-replication instructions
 (https://github.com/gluster/glusterfs/blob/master/doc/
 admin-guide/en-US/markdown/admin_distributed_geo_rep.md), step by step,
 gets me nowhere.

 The slave volume has been created and passwordless SSH is set up for root
 from the master to slave. Both master and slave volumes are running.

 Running gluster system:: execute gsec_create, no problem.
 Running gluster volume geo-replication master_volume
 slave_host::slave_volume create push-pem [force] (with appropriate
 parameters, with and without force) results in Passwordless ssh login
 has not been setup with slave_server. geo-replication command failed

 As I said, passwordless SSH *is* set up. I can SSH from the master to the
 slave without a password just fine. What gives? More to the point, how do I
 make this work.


Just to make it clear, passwordless ssh needs to be set up between the
master node where you run the geo-rep create command and the slave node
specified in the geo-rep create command. A minimal sketch is below.
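
A minimal sketch of that initial setup; host and volume names are placeholders:

    # on the master node, towards the slave node used in the create command:
    ssh-keygen                     # if no key exists yet
    ssh-copy-id root@slavehost     # passwordless ssh needed for create
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem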

And if you have the identity file saved in a non-standard location, geo-rep had
a bug for that: https://bugzilla.redhat.com/show_bug.cgi?id=1181117

A patch has been sent for it and should be available in the next release of
glusterfs.



 regards,
 John


 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] unbalance

2015-02-28 Thread M S Vishwanath Bhat
MS
On 28 Feb 2015 13:27, Jorick Astrego j.astr...@netbulae.eu wrote:


 On 02/28/2015 08:36 AM, M S Vishwanath Bhat wrote:



 On 28 February 2015 at 02:17, Jorick Astrego j.astr...@netbulae.eu
wrote:

 Hi,

 When I have a 4 or 6 node distributed replicated volume, is there an
 easy way to unbalance the data? The rebalancing is very nice, but I was
 thinking of a scenario where I would have take half the nodes offline.
I
 could shift part or all of the data on one replicated volume and stuff
 it all on the other replicated volume.


 I'm not sure what you meant by unbalance the data ?

 Do you want to replace the bricks (backend disks)? Or do you want to
move the data from one gluster volume to another gluster volume?

 * If you want to replace the bricks, there is gluster replace-brick.
But there are some known issues with it. I would suggest remove-brick and
then add-brick and rebalance instead.

 Say I have two racks with a distributed replicated setup




 And I need to take server3 and server4 offline and I don't have
replacement bricks. How do I move File 2 to  Replicated Volume 0 while the
file is in use (VM storage or big MP4)
If you do a remove-brick of replicate volume 1, it will move (rebalance) all
the data to replicate volume 0.
Note that this turns the distribute-replicate volume into a pure replicate
volume. A rough sketch is below.
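
A rough sketch of that shrink; the brick paths are placeholders standing in for
the pair on server3/server4:

    gluster volume remove-brick VOLNAME server3:/bricks/b1 server4:/bricks/b1 start
    gluster volume remove-brick VOLNAME server3:/bricks/b1 server4:/bricks/b1 status   # wait for completed
    gluster volume remove-brick VOLNAME server3:/bricks/b1 server4:/bricks/b1 commit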



 * If you want to move the data from one gluster volume to another, you
can just rsync the data from mountpoint of volume1 to mountpoint of volume2.

 If I rsync to the other volume, will gluster dynamically remap the file
and keep serving it? What about the old file left over on Server3 and
Server4

Okay, when I said volume, I actually meant rsyncing from the mountpoint of one
Gluster volume to the mountpoint of another Gluster volume. But it looks like
that's not what you want. Sorry for the confusion.

Best Regards,
MS


 HTH

 //MS




 There would have to be enough space and performance will suffer a lot,
 but at least it will continue to be available.






 Met vriendelijke groet, With kind regards,

 Jorick Astrego

 Netbulae Virtualization Experts
 
 Tel: 053 20 30 270
 i...@netbulae.eu
 Staalsteden 4-3A
 KvK 08198180
 Fax: 053 20 30 271
 www.netbulae.eu
 7547 TA Enschede
 BTW NL821234584B01

 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Functional Test Suite of Glusterfs

2015-02-26 Thread M S Vishwanath Bhat
On 26 February 2015 at 18:50, Justin Clift jus...@gluster.org wrote:

 On 26 Feb 2015, at 09:55, Kiran Patil ki...@fractalio.com wrote:
  Hi,
 
  Currently I am aware of Gluster Regression Test suite.
 
  I would like to know if there is a Test suite which covers the
  Functionality of Glusterfs.
 
  If not then what are the options do we have to come up with Functional
  test suite.
 
  The only option we have right now is to use Gluster Regression
  framework to come up with Functional test suite.
 
  Let me know your thoughts.

 Does this look like a potentially useful framework for creating
 it?

   https://github.com/msvbhat/distaf


Kiran,  as Justin pointed out, you can try distaf.

I have written quite a few functional test cases using distaf. Please try
it out once and let me know your input. I need feedback to improve it
further.

I hope the README is clear. But if you have any questions please feel free
to send us a mail.

Also, I have a couple of enhancements planned, for which I have half-written
patches in my local repo. I will complete them by Monday and send out a
notification to the mailing list.

-MS


 I've been meaning to look into it / try it out, as our current
 regression testing approach bothers me a lot. ;)

 + Justin

 --
 GlusterFS - http://www.gluster.org

 An open source, distributed file system scaling to several
 petabytes, and handling thousands of clients.

 My personal twitter: twitter.com/realjustinclift

 ___
 Gluster-devel mailing list
 gluster-de...@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] geo-replication

2015-01-27 Thread M S Vishwanath Bhat

On 26/01/15 21:35, Rosemond, Sonny wrote:
I have a RHEL7 testing environment consisting of 6 nodes total, all 
running Gluster 3.6.1. The master volume is distributed/replicated, 
and the slave volume is distributed. Firewalls and SELinux have been 
disabled for testing purposes. Passwordless SSH has been established 
and tested successfully, however when I try to start geo-replication, 
the process churns for a bit and then drops me back to the command 
prompt with the message, geo-replication command failed.

What should I look for? What am I missing?
Have you created the geo-rep session between master and slave? If yes, I
assume you have run geo-rep create with push-pem? Before that you
need to collect the pem keys using gluster system:: execute
gsec_create. Another thing to note is that passwordless ssh
needs to be established between the master node where you are running the
geo-rep create command and the slave node specified in the geo-rep
create command.


If you have done all of the above properly but it is still failing,
please share the glusterd log files of the node where you are running geo-rep
start and of the slave node specified in the geo-rep start command. The usual
log locations are sketched below.
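
In case it helps, the logs usually end up in these locations; the exact paths
can vary between distributions and versions, so treat them only as a starting
point:

    # on the master node where you ran the geo-rep commands:
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    ls /var/log/glusterfs/geo-replication/
    # on the slave node specified in the create command:
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    ls /var/log/glusterfs/geo-replication-slaves/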


Best Regards,
Vishwanath



~Sonny


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Updates about DiSTAF - Multi Node test framework

2014-12-24 Thread M S Vishwanath Bhat

Hi,

I had sent a mail about new multi node test framework for glusterfs 
(http://www.gluster.org/pipermail/gluster-users/2014-October/019211.html).This 
is an update about the same.


* Have changed the name to distaf - https://github.com/msvbhat/distaf
About the name: DiSTAF stands for Distributed Software Test Automation
Framework.
Also, a distaff is a tool used in spinning. It makes the spinning easy by
holding the unspun fibers together and keeping them untangled. The
framework is supposed to do the same thing: hold the vms/machines
together and ease the process of writing and executing the tests. And
since DiSTAF is pronounced the same as distaff, I felt it's an apt name.


* Have added couple of libraries and utilities.

Things in pipeline
* Using rpyc zero deploy instead of plain rpyc. The zero-deploy rpyc
makes it a *lot* easier to set up and run the tests.

* Better logging to create a test log per test case.
* Integrating with docker/dockit.

There are couple of TODO's mentioned in README file. Please go through 
the framework once and let me know your inputs. The plan is to start 
using it for upstream testing soon.


Best Regards,
Vishwanath

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] New multi node automation framework for glusterfs

2014-10-22 Thread M S Vishwanath Bhat

Hi,

I have written a multi node test automation framework for glusterfs. I 
have hosted it at https://github.com/msvbhat/glusterfs-automation-framework


Although it can be used to automate any distributed system, this is
optimized for glusterfs automation. Please try it out once and let me
know your review/comments/suggestions.


The README describes most of the things. If you have any 
questions/suggestions, please send me a mail.


We are also working on integrating it with docker for simulating
multi-node tests from a single host, but that is a work in progress. We are
also working on a lot of other enhancements. Please let me know how it can
be improved.


Note: My responses might be delayed because of the long festive weekend 
in India.


Best Regards,
Vishwanath
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo-replication breaks on CentOS 6.5 + gluster 3.6.0 beta3

2014-10-21 Thread M S Vishwanath Bhat

On 20/10/14 21:48, Justin Clift wrote:

- Original Message -

The solution involves changelog crash consistency among other things.
Since this feature itself is targeted for glusterfs-3.7, I would say the
complete solution would be available with glusterfs-3.7

One the major challenges in solving it involves ordering/sequencing the
fops that happen on the master volume. Because of the distributed nature
of glusterfs and geo-rep, coordinating between gsyncds for proper
ordering of fops is hard.

I have cc'd Aravinda who is the maintainer of geo-rep. He would have
more details.

Thanks. :)

In the meantime, who is the right person to update our geo-rep
docs to include the gotchas, so people know to avoid them?
I can do it. I need some time (we have a long weekend coming up in
India for Diwali) and some help from Aravinda.


Best Regards,
Vishwanath



+ Justin



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo-replication breaks on CentOS 6.5 + gluster 3.6.0 beta3

2014-10-20 Thread M S Vishwanath Bhat

On 18/10/14 12:46, James Payne wrote:

Not in my particular use case which is where in Windows a new folder or file is 
created through explorer. The new folder is created by Windows with the name 
'New Folder' which almost certainly the user will the rename. The same goes 
with newly created files in explorer.

does this mean the issue shouldn't be there in a replicate only scenario?

Yes. The issue shouldn't be seen in a pure replicate volume.

Best Regards,
Vishwanath



Regards
James

--- Original Message ---

From: M S Vishwanath Bhat vb...@redhat.com
Sent: 17 October 2014 20:53
To: Kingsley glus...@gluster.dogwind.com, James Payne 
jimqwer...@hotmail.com
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] geo-replication breaks on CentOS 6.5 + gluster 
3.6.0 beta3

Hi,

Right now, distributed-geo-rep has bunch of known issues with deletes
and renames. Part of the issue was solved with a patch sent to upstream
recently. But still it doesn't solve complete issue.

So long story short, dist-geo-rep has still issues with short lived
renames where the renamed files are hashed to different subvolume
(bricks). If the renamed file is hashed to same brick then issue should
not be seen (hopefully).

Using volume set, we can force the renamed file to be hashed to same
brick. gluster volume set volname cluster.extra-hash-regex
regex_of_the_renamed_files

For example if you open a file in vi, it will rename the file to
filename.txt~, so the regex should be
gluster volume set VOLNAME cluster.extra-hash-regex '^(.+)~$'

But for this to work, the format of the files created by your
application has to be identified. Does your application create files in
a identifiable format which can be specified in a regex? Is this a
possibility?


Best Regards,
Vishwanath

On 15/10/14 15:41, Kingsley wrote:

I have added a comment to that bug report (a paste of my original
email).

Cheers,
Kingsley.

On Tue, 2014-10-14 at 22:10 +0100, James Payne wrote:

Just adding that I have verified this as well with the 3.6 beta, I added a
log to the ticket regarding this.

https://bugzilla.redhat.com/show_bug.cgi?id=1141379

Please feel free to add to the bug report, I think we are seeing the same
issue. It isn't present in the 3.4 series which in the one I'm testing
currently. (no distributed geo rep though)

Regards
James

-Original Message-
From: Kingsley [mailto:glus...@gluster.dogwind.com]
Sent: 13 October 2014 16:51
To: gluster-users@gluster.org
Subject: [Gluster-users] geo-replication breaks on CentOS 6.5 + gluster
3.6.0 beta3

Hi,

I have a small script to simulate file activity for an application we have.
It breaks geo-replication within about 15 - 20 seconds when I try it.

This is on a small Gluster test environment running in some VMs running
CentOS 6.5 and using gluster 3.6.0 beta3. I have 6 VMs - test1, test2,
test3, test4, test5 and test6. test1, test2 , test3 and test4 are gluster
servers while test5 and test6 are the clients. test3 is actually not used in
this test.


Before the test, I had a single gluster volume as follows:

test1# gluster volume status
Status of volume: gv0
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick test1:/data/brick/gv0                 49168   Y       12017
Brick test2:/data/brick/gv0                 49168   Y       11835
NFS Server on localhost                     2049    Y       12032
Self-heal Daemon on localhost               N/A     Y       12039
NFS Server on test4                         2049    Y       7934
Self-heal Daemon on test4                   N/A     Y       7939
NFS Server on test3                         2049    Y       11768
Self-heal Daemon on test3                   N/A     Y       11775
NFS Server on test2                         2049    Y       11849
Self-heal Daemon on test2                   N/A     Y       11855

Task Status of Volume gv0

--
There are no active volume tasks


I created a new volume and set up geo-replication as follows (as these are
test machines I only have one file system on each, hence using force to
create the bricks in the root FS):

test4# date ; gluster volume create gv0-slave test4:/data/brick/gv0-slave
force; date Mon Oct 13 15:03:14 BST 2014 volume create: gv0-slave: success:
please start the volume to access data Mon Oct 13 15:03:15 BST 2014

test4# date ; gluster volume start gv0-slave; date Mon Oct 13 15:03:36 BST
2014 volume start: gv0-slave: success Mon Oct 13 15:03:39 BST 2014

test4# date ; gluster volume geo-replication gv0 test4::gv0-slave create
push-pem force ; date Mon Oct 13 15:05:59 BST 2014 Creating geo-replication
session between gv0  test4::gv0-slave has been successful Mon Oct 13
15:06:11 BST 2014


I then mount volume gv0 on one of the client machines

Re: [Gluster-users] geo-replication breaks on CentOS 6.5 + gluster 3.6.0 beta3

2014-10-20 Thread M S Vishwanath Bhat

On 18/10/14 20:31, Justin Clift wrote:

- Original Message -
snip

Right now, distributed-geo-rep has bunch of known issues with deletes
and renames. Part of the issue was solved with a patch sent to upstream
recently. But still it doesn't solve complete issue.

snip

Do we have an idea when the complete solution to this might be ready
for testing?
The solution involves changelog crash consistency among other things.
Since this feature itself is targeted for glusterfs-3.7, I would say the
complete solution would be available with glusterfs-3.7.


One of the major challenges in solving it is ordering/sequencing the
fops that happen on the master volume. Because of the distributed nature
of glusterfs and geo-rep, coordinating between the gsyncds for proper
ordering of fops is hard.


I have cc'd Aravinda who is the maintainer of geo-rep. He would have 
more details.


Best Regards,
Vishwanath



We should update the 3.6.0 (and master) geo-rep docs with these Known
Gotcha's too.

   
https://github.com/gluster/glusterfs/blob/master/doc/features/geo-replication/distributed-geo-rep.md

Regards and best wishes,

Justin Clift



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo-replication breaks on CentOS 6.5 + gluster 3.6.0 beta3

2014-10-17 Thread M S Vishwanath Bhat

Hi,

Right now, distributed-geo-rep has a bunch of known issues with deletes
and renames. Part of the issue was solved with a patch sent upstream
recently, but that still doesn't solve the complete issue.


So, long story short, dist-geo-rep still has issues with short-lived
renames where the renamed file is hashed to a different subvolume
(brick). If the renamed file is hashed to the same brick then the issue should
not be seen (hopefully).


Using volume set, we can force the renamed file to be hashed to the same
brick: gluster volume set VOLNAME cluster.extra-hash-regex
<regex_of_the_renamed_files>


For example if you open a file in vi, it will rename the file to 
filename.txt~, so the regex should be

gluster volume set VOLNAME cluster.extra-hash-regex '^(.+)~$'

But for this to work, the naming format of the files created by your
application has to be identified. Does your application create files in
an identifiable format which can be specified in a regex? Is this a
possibility?



Best Regards,
Vishwanath

On 15/10/14 15:41, Kingsley wrote:

I have added a comment to that bug report (a paste of my original
email).

Cheers,
Kingsley.

On Tue, 2014-10-14 at 22:10 +0100, James Payne wrote:

Just adding that I have verified this as well with the 3.6 beta, I added a
log to the ticket regarding this.

https://bugzilla.redhat.com/show_bug.cgi?id=1141379

Please feel free to add to the bug report, I think we are seeing the same
issue. It isn't present in the 3.4 series which in the one I'm testing
currently. (no distributed geo rep though)

Regards
James

-Original Message-
From: Kingsley [mailto:glus...@gluster.dogwind.com]
Sent: 13 October 2014 16:51
To: gluster-users@gluster.org
Subject: [Gluster-users] geo-replication breaks on CentOS 6.5 + gluster
3.6.0 beta3

Hi,

I have a small script to simulate file activity for an application we have.
It breaks geo-replication within about 15 - 20 seconds when I try it.

This is on a small Gluster test environment running in some VMs running
CentOS 6.5 and using gluster 3.6.0 beta3. I have 6 VMs - test1, test2,
test3, test4, test5 and test6. test1, test2 , test3 and test4 are gluster
servers while test5 and test6 are the clients. test3 is actually not used in
this test.


Before the test, I had a single gluster volume as follows:

test1# gluster volume status
Status of volume: gv0
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick test1:/data/brick/gv0                 49168   Y       12017
Brick test2:/data/brick/gv0                 49168   Y       11835
NFS Server on localhost                     2049    Y       12032
Self-heal Daemon on localhost               N/A     Y       12039
NFS Server on test4                         2049    Y       7934
Self-heal Daemon on test4                   N/A     Y       7939
NFS Server on test3                         2049    Y       11768
Self-heal Daemon on test3                   N/A     Y       11775
NFS Server on test2                         2049    Y       11849
Self-heal Daemon on test2                   N/A     Y       11855

Task Status of Volume gv0

--
There are no active volume tasks


I created a new volume and set up geo-replication as follows (as these are
test machines I only have one file system on each, hence using force to
create the bricks in the root FS):

test4# date ; gluster volume create gv0-slave test4:/data/brick/gv0-slave
force; date Mon Oct 13 15:03:14 BST 2014 volume create: gv0-slave: success:
please start the volume to access data Mon Oct 13 15:03:15 BST 2014

test4# date ; gluster volume start gv0-slave; date Mon Oct 13 15:03:36 BST
2014 volume start: gv0-slave: success Mon Oct 13 15:03:39 BST 2014

test4# date ; gluster volume geo-replication gv0 test4::gv0-slave create
push-pem force ; date Mon Oct 13 15:05:59 BST 2014 Creating geo-replication
session between gv0  test4::gv0-slave has been successful Mon Oct 13
15:06:11 BST 2014


I then mount volume gv0 on one of the client machines. I can create files
within the gv0 volume and can see the changes being replicated to the
gv0-slave volume, so I know that geo-replication is working at the start.

When I run my script (which quickly creates, deletes and renames files),
geo-replication breaks within a very short time. The test script output is
in http://gluster.dogwind.com/files/georep20141013/test6_script-output.log
(I interrupted the script once I saw that geo-replication was broken).
Note that when it deletes a file, it renames any later-numbered file so that
the file numbering remains sequential with no gaps; this simulates a real
world application that we use.

If you want a copy of the test script, it's here:
http://gluster.dogwind.com/files/georep20141013/test_script.tar.gz


The various 

Re: [Gluster-users] geo-replication 3.5.2 not working on Ubuntu 12.0.4 - transport.address-family not specified

2014-09-30 Thread M S Vishwanath Bhat
On 30 September 2014 23:29, Bin Zhou lakerz...@yahoo.com wrote:

 I managed to get around the transport.address-family not specified issue
 by using IP address instead of host name in the CLI when creating and
 starting the geo-replication volume.
 However the replication to slave is not happening. The status shows as
 Not Started. Any suggestions?

Can you please check if your iptable/firewall rules are blocking the ports
being used by gluster processes?

Can you also check for any errors in glusterd and geo-rep logs?
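
A rough sketch of those checks, to be run on each node; the port numbers are
the usual defaults and may differ on your setup:

    iptables -L -n
    netstat -tlnp | grep gluster   # glusterd is usually on 24007, bricks on 49152 and up
    # geo-rep logs typically live under /var/log/glusterfs/geo-replication/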

-MS


 ubuntu@local-server-2:~$ sudo gluster volume geo-replication bpv1
 172.16.17.43::bpslave create push-pem force
 --log-file=/var/log/gluster_c.log --log-level=DEBUG
 Creating geo-replication session between bpv1  172.16.17.43::bpslave has
 been successful
 ubuntu@local-server-2:~$ sudo gluster volume geo-replication bpv1
 172.16.17.43::bpslave start --log-file=/var/log/gluster_s_1.log
 --log-level=DEBUG
 Starting geo-replication session between bpv1  172.16.17.43::bpslave has
 been successful
 ubuntu@local-server-2:~$ sudo gluster volume geo-replication bpv1
 172.16.17.43::bpslave status


 MASTER NODE  MASTER VOLMASTER BRICK SLAVE
   STATUS CHECKPOINT STATUSCRAWL STATUS
 ———

 local-server-2 bpv1  /var/gluster/vol1172.16.17.43::bpslave
   Not StartedN/A  N/A
 local-server-0 bpv1  /var/gluster/vol1172.16.17.43::bpslave
   Not StartedN/A  N/A
 local-server-8bpv1  /var/gluster/vol1172.16.17.43::bpslave
   Not StartedN/A  N/A


 Thanks,
 Bin


  On Tuesday, September 30, 2014 1:04 PM, Bin Zhou lakerz...@yahoo.com
 wrote:
  Hi,
 
  I am testing geo-replication 3.5.2 by following the instruction from
 https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md
  All commands are executed successfully without returning any error, but
 no replication is done from master to the slave.
  Enclosed please find the logs when starting the geo-replication volume.
 At the end of the log, it indicated transport.address-family not
 specified. Is it a problem? How to have it fixed? Thanks for the help.
 
  [2014-09-30 16:38:32.162980] D [cli.c:581:cli_rpc_init] 0-cli:
 Connecting to glusterd using default socket
  [2014-09-30 16:38:32.163062] D [rpc-clnt.c:975:rpc_clnt_connection_init]
 0-glusterfs: defaulting frame-timeout to 30mins
  [2014-09-30 16:38:32.163090] D [rpc-transport.c:262:rpc_transport_load]
 0-rpc-transport: attempt to load file
 /usr/lib/glusterfs/3.5.2/rpc-transport/socket.so
  [2014-09-30 16:38:32.163626] D [socket.c:3449:socket_init] 0-glusterfs:
 disabling nodelay
  [2014-09-30 16:38:32.163649] I [socket.c:3561:socket_init] 0-glusterfs:
 SSL support is NOT enabled
  [2014-09-30 16:38:32.163659] I [socket.c:3576:socket_init] 0-glusterfs:
 using system polling thread
  [2014-09-30 16:38:32.163778] D [rpc-clnt.c:975:rpc_clnt_connection_init]
 0-glusterfs: defaulting frame-timeout to 30mins
  [2014-09-30 16:38:32.163801] D [rpc-transport.c:262:rpc_transport_load]
 0-rpc-transport: attempt to load file
 /usr/lib/glusterfs/3.5.2/rpc-transport/socket.so
  [2014-09-30 16:38:32.163831] I [socket.c:3561:socket_init] 0-glusterfs:
 SSL support is NOT enabled
  [2014-09-30 16:38:32.163843] I [socket.c:3576:socket_init] 0-glusterfs:
 using system polling thread
  [2014-09-30 16:38:32.163888] D [registry.c:408:cli_cmd_register] 0-cli:
 Returning 0
  … …
  [2014-09-30 16:38:32.164078] D [registry.c:408:cli_cmd_register] 0-cli:
 Returning 0
  [2014-09-30 16:38:32.225367] D
 [cli-cmd-volume.c:1727:cli_check_gsync_present] 0-cli: Returning 0
  [2014-09-30 16:38:32.225432] D [registry.c:408:cli_cmd_register] 0-cli:
 Returning 0
  … …
  [2014-09-30 16:38:32.225806] D [registry.c:408:cli_cmd_register] 0-cli:
 Returning 0
  [2014-09-30 16:38:32.225906] I [socket.c:2238:socket_event_handler]
 0-transport: disconnecting now
  [2014-09-30 16:38:32.225957] D
 [cli-cmd-parser.c:1795:force_push_pem_parse] 0-cli: Returning 0
  [2014-09-30 16:38:35.164339] W [dict.c:1055:data_to_str]
 (--/usr/lib/glusterfs/3.5.2/rpc-transport/socket.so(+0x4ca4)
 [0x7fcf2540fca4]
 (--/usr/lib/glusterfs/3.5.2/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0x4e)
 [0x7fcf254161be]
 (--/usr/lib/glusterfs/3.5.2/rpc-transport/socket.so(client_fill_address_family+0x200)
 [0x7fcf25415eb0]))) 0-dict: data is NULL
  [2014-09-30 16:38:35.164372] W [dict.c:1055:data_to_str]
 (--/usr/lib/glusterfs/3.5.2/rpc-transport/socket.so(+0x4ca4)
 [0x7fcf2540fca4]
 (--/usr/lib/glusterfs/3.5.2/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0x4e)
 [0x7fcf254161be]
 (--/usr/lib/glusterfs/3.5.2/rpc-transport/socket.so(client_fill_address_family+0x20b)
 [0x7fcf25415ebb]))) 0-dict: data is NULL
  [2014-09-30 16:38:35.164383] E 

Re: [Gluster-users] geo-replication fails on CentOS 6.5, gluster v 3.5.2

2014-09-29 Thread M S Vishwanath Bhat

On 29/09/14 17:42, Kingsley wrote:

On Mon, 2014-09-29 at 13:50 +0530, Aravinda wrote:
[snip]

Are these fop involves renames and delete of the same files? Geo-rep had
issue with short lived renamed files(Now fixed in Master
http://review.gluster.org/#/c/8761/).

Hi,

Apologies if this is a newbie question but how do I get hold of that
patch?

I've got the following repos enabled on my test servers, but yum
update doesn't report any updates due:

http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch/

What is the usual time between a patch being built and the update being
available via the main repos?
The patch should be available in the glusterfs-3.6 version. Currently it's
in the beta phase and hopefully the next rc build will have this patch.


If you are comfortable with a source installation, you can also install from
the glusterfs master branch. The patch is already merged in master.


Best Regards,
Vishwanath





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Use geo-replication without passwordless ssh login

2014-09-15 Thread M S Vishwanath Bhat

On 15/09/14 20:48, Bo Yu wrote:

Hi,

I wonder if it is possible to configure Gluster geo-replication in a 
manner that it does not require passwordless ssh login, since in our 
system passwordless ssh is not allowed.


Or, is it possible to configure passwordless ssh for Gluster only, not 
for every user or programm.
TBH, the passwordless ssh configured by the push-pem option is very
specific to gluster (gsyncd, to be more specific). But that is only used
after the session is created. During the create, gluster needs
passwordless ssh to get the details of the slave cluster (its status,
available size, whether files are present, etc.).


So you need to have passwordless ssh from one node in the master to one in the
slave *only* during the geo-rep create push-pem. After the session is
created, you can actually remove the passwordless ssh and ideally
geo-rep should still work. A rough sketch is below.
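
So one option, if a permanently open passwordless root login is not acceptable,
is to set it up only for the duration of the create step and remove it right
after. A sketch with placeholder names:

    # on the master node:
    ssh-copy-id root@slavehost     # temporary, only for create
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    # afterwards, remove the temporary key from the slave's authorized_keys;
    # the restricted entries pushed by push-pem for gsyncd remain in place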


HTH

 Best Regards,
Vishwanath



Thanks.

Bo




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.5 replication to local filesystem?

2014-09-10 Thread M S Vishwanath Bhat

On 10/09/14 18:43, Danny Sauer wrote:

With the previous version of Gluster, I was able to use geo-replication to
asynchronously synchronize a gluster volume to a local directory.  My
application uses a large number of small files, and as everyone knows,
Gluster's not great for that.  But there are features that I need which
Gluster's good at, so this setup (two systems with a regular replicated-bricks
volume, and then three geo-replicated systems replicating to a local directory)
worked well.

With 3.5, though, I no longer seem to be able to just specify a local path to
replicate to; it seems to only want to accept a second volume.  The
documentation isn't quite complete on the new geo-replication, and I haven't
quite gotten a handle on the source code to just figure it out yet.  Has the
syntax changed in a way that I'm not properly guessing, or is this no longer
supported?
From glusterfs-3.5 onwards, the destination has to be a gluster volume;
geo-replication to a local directory is not supported.


AFAIK this was done because the new geo-rep syncs the data using gfids.
Since plain local file systems don't have gfids, the destination has to be a
volume. One possible workaround is sketched below.
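
So the closest equivalent of the old setup is to wrap the destination directory
in a (possibly single-brick) volume and geo-replicate to that. A sketch with
made-up names:

    # on the destination host:
    gluster volume create slavevol slavehost:/data/georep-slave
    gluster volume start slavevol
    # on the master side:
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start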


Best Regards,
Vishwanath



Thanks,
Danny
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Expanding Volumes and Geo-replication

2014-09-04 Thread M S Vishwanath Bhat

On 04/09/14 00:33, Vijaykumar Koppad wrote:




On Wed, Sep 3, 2014 at 8:20 PM, M S Vishwanath Bhat vb...@redhat.com 
mailto:vb...@redhat.com wrote:


On 01/09/14 23:09, Paul Mc Auley wrote:

Hi Folks,

Bit of a query on the process for setting this up and the best
practices for same.

I'm currently working with a prototype using 3.5.2 on vagrant
and I'm running into assorted failure modes with each pass.

The general idea is I start with two sites A and B where A has
3 bricks used to build a volume vol at replica 3 and
B has 2 bricks with at replica 2 to also build vol.
I create 30 files is A::vol and then set up geo-replication
from A to B after which I verify that the files have appeared
in B::vol.
What I want to do then is double the size of volumes
(presumably growing one and not the other is a bad thing)
by adding 3 bricks to A and 2 bricks to B.

I've had this fail number of ways and so I have a number of
questions.

Is geo-replication from a replica 3 volume to a replica 2
volume possible?

Yes. geo-replication just needs two gluster volumes (master -
slave). It doesn't matter what configuration master and slave has.
But slave should be big enough to have all the data in master.

Should I stop geo-replication before adding additional bricks?
(I assume yes)

There is no need to stop geo-rep while adding more bricks to the
volume.

Should I stop the volume(s) before adding additional bricks?
(Doesn't _seem_ to be the case)

No.

Should I rebalance the volume(s) after adding the bricks?

Yes. After add-brick, rebalance should be run.

Should I need to recreate the geo-replication to push-pem
subsequently, or can I do that out-of-band?
...and if so should I have to add the passwordless SSH key
back in? (As opposed to the restricted secret.pem)
For that matter in the inital setup is it an expected failure
mode that the initial geo-replication create will fail if the
slave host's SSH key isn't known?

After the add-brick, the newly added node will not have any pem
files. So you need to do geo-rep create push-pem force. This
will actually push the pem files to the newly added node as well.
And then you need to do geo-rep start force to start the gsync
processes in newly added node.

So the sequence of steps for you will be,

1. Add new nodes to both master and slave using gluster add-brick
command.

After this, we need to run  gluster system:: execute gsec_create  on 
master node and then proceed with step 2.

Yeah. Missed it... Sorry :)

The pem files need to be generated for the newly added nodes before
pushing them to the slave. The above command does that.


2. Run geo-rep create push-pem force and start force.
3. Run rebalance.
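
A rough end-to-end sketch of the above; volume, host and brick names are
placeholders, and a replica volume of course needs bricks added in full
replica sets:

    # 1. expand both sides
    gluster volume add-brick mastervol newm1:/bricks/b1 newm2:/bricks/b1
    gluster volume add-brick slavevol news1:/bricks/b1 news2:/bricks/b1
    # 1b. regenerate the pem keys on a master node
    gluster system:: execute gsec_create
    # 2. push the pems to the new nodes and restart the workers
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
    gluster volume geo-replication mastervol slavehost::slavevol start force
    # 3. rebalance
    gluster volume rebalance mastervol start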

Hope this works and hope it helps :)


Best Regards,
Vishwanath



Thanks,
Paul
___
Gluster-users mailing list
Gluster-users@gluster.org mailto:Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Expanding Volumes and Geo-replication

2014-09-03 Thread M S Vishwanath Bhat

On 01/09/14 23:09, Paul Mc Auley wrote:

Hi Folks,

Bit of a query on the process for setting this up and the best 
practices for same.


I'm currently working with a prototype using 3.5.2 on vagrant and I'm 
running into assorted failure modes with each pass.


The general idea is I start with two sites A and B where A has 3 
bricks used to build a volume vol at replica 3 and

B has 2 bricks at replica 2 to also build vol.
I create 30 files in A::vol and then set up geo-replication from A to 
B after which I verify that the files have appeared in B::vol.
What I want to do then is double the size of volumes (presumably 
growing one and not the other is a bad thing)

by adding 3 bricks to A and 2 bricks to B.

I've had this fail a number of ways and so I have a number of questions.

Is geo-replication from a replica 3 volume to a replica 2 volume 
possible?
Yes. Geo-replication just needs two gluster volumes (master and slave). 
It doesn't matter what configuration the master and slave have, but the 
slave should be big enough to hold all the data in the master.
Should I stop geo-replication before adding additional bricks? (I 
assume yes)

There is no need to stop geo-rep while adding more bricks to the volume.
Should I stop the volume(s) before adding additional bricks? (Doesn't 
_seem_ to be the case)

No.

Should I rebalance the volume(s) after adding the bricks?

Yes. After add-brick, rebalance should be run.
Should I need to recreate the geo-replication to push-pem 
subsequently, or can I do that out-of-band?
...and if so should I have to add the passwordless SSH key back in? 
(As opposed to the restricted secret.pem)
For that matter, in the initial setup is it an expected failure mode 
that the initial geo-replication create will fail if the slave host's 
SSH key isn't known?
After the add-brick, the newly added node will not have any pem files. 
So you need to do geo-rep create push-pem force. This will actually 
push the pem files to the newly added node as well. And then you need to 
do geo-rep start force to start the gsync processes on the newly added node.


So the sequence of steps for you will be,

1. Add new nodes to both master and slave using gluster add-brick command.
2. Run geo-rep create push-pem force and start force.
3. Run rebalance.

Hope this works and hope it helps :)
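
If it helps, once the session is re-created and started, the workers can be 
checked from the master side; a small sketch with placeholder names 
(mastervol, slavehost, slavevol):

gluster volume geo-replication mastervol slavehost::slavevol status
# Every master brick, including the newly added ones, should show a worker
# in Active or Passive state; Faulty usually means the push-pem or start
# force step was missed on the new node.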


Best Regards,
Vishwanath



Thanks,
Paul
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo replication help

2014-09-03 Thread M S Vishwanath Bhat

On 03/09/14 20:31, David F. Robinson wrote:

Is this bug-fix going to be in the 3.5.3 beta release?
Not sure. I will have to check that. AFAIK the patches are present in 
upstream and in the 3.6 branch. Is upgrading to 3.6 an option?


Best Regards,
Vishwanath



David


-- Original Message --
From: Niels de Vos nde...@redhat.com
To: M S Vishwanath Bhat vb...@redhat.com
Cc: David F. Robinson david.robin...@corvidtec.com; 
gluster-users@gluster.org

Sent: 8/15/2014 6:25:04 AM
Subject: Re: [Gluster-users] geo replication help


On Wed, Aug 13, 2014 at 04:17:11PM +0530, M S Vishwanath Bhat wrote:

 On 13/08/14 02:27, David F. Robinson wrote:
 I was hoping someone could help me debug my geo-replication under
 gluster 3.5.2.
 I am trying to use geo-replication to create a lagged backup of my
 data. What I wanted to do was to turn off the geo-replication
 (gluster volume geo-replication homegfs
 gfsib01bkp.corvidtec.com::homegfs_bkp stop) during the day, and
 then turn it back on at midnight to allow the remote system to
 sync.
 If I stop the geo-replication, delete a file, and then restart the
 geo-replication, the deletion never shows up on the slave system.
 From the website below, I thought that it would pick up these
 changes. Any suggestions for how I can get the changes made while
 the geo-replication is stopped to propagate after restart the
 geo-replication?
 This is a known issue with glusterfs-3.5.2. The problem is xsync can
 not handle deletes and renames. So when you restart the geo-rep, the
 change detection mechanism falls back to xsync even though the
 Changelog was ON, the whole time. So deletes and renames won't be
 propagated to slave in 3.5.2

 The patch to fix this issue is already submitted and is present in
 glusterfs master branch. The fix should be available in
 glusterfs-3.6 soon.


Can you point me to the bug/patch, or clone the bug for 3.5 and provide
a backport?

Thanks,
Niels



 Best Regards,
 Vishwanath


https://medium.com/@msvbhat/distributed-geo-replication-in-glusterfs-ec95f4393c50 


 
 gluster volume geo-replication master_volume
 slave_volume::slave_volume stop [force]
 
 Force option is to be used, when one of the node (or glusterd in
 one of the node) is down. Once stopped, the session can be
 restarted any time. Note that upon restarting of the session, the
 change detection mechanism falls back to xsync mode. This happens
 even though you have changelog generating journals, while the
 geo-rep session is stopped.
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users




 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo replication help

2014-08-18 Thread M S Vishwanath Bhat

On 15/08/14 15:55, Niels de Vos wrote:

On Wed, Aug 13, 2014 at 04:17:11PM +0530, M S Vishwanath Bhat wrote:

On 13/08/14 02:27, David F. Robinson wrote:

I was hoping someone could help me debug my geo-replication under
gluster 3.5.2.
I am trying to use geo-replication to create a lagged backup of my
data. What I wanted to do was to turn off the geo-replication
(gluster volume geo-replication homegfs
gfsib01bkp.corvidtec.com::homegfs_bkp stop) during the day, and
then turn it back on at midnight to allow the remote system to
sync.
If I stop the geo-replication, delete a file, and then restart the
geo-replication, the deletion never shows up on the slave system.

From the website below, I thought that it would pick up these
changes. Any suggestions for how I can get the changes made while
the geo-replication is stopped to propagate after restart the
geo-replication?

This is a known issue with glusterfs-3.5.2. The problem is xsync can
not handle deletes and renames. So when you restart the geo-rep, the
change detection mechanism falls back to xsync even though the
Changelog was ON, the whole time. So deletes and renames won't be
propagated to slave in 3.5.2

The patch to fix this issue is already submitted and is present in
glusterfs master branch. The fix should be available in
glusterfs-3.6 soon.

Can you point me to the bug/patch, or clone the bug for 3.5 and provide
a backport?
Apologies for the delay. We had a long weekend and I was out of town with 
no access to email.


The issue was a design limitation/flaw in xsync in 3.5, and it is fixed by 
introducing the changelog history APIs. These changes are present only in 
upstream as of now. Not sure if they can be backported to 3.5, but they 
should be included in 3.6 at least.


I am cc'ing Aravinda, who is maintaining geo-rep. He can provide 
the list of bugs/patches which should be backported.


Best Regards,
Vishwanath




Thanks,
Niels


Best Regards,
Vishwanath


https://medium.com/@msvbhat/distributed-geo-replication-in-glusterfs-ec95f4393c50

gluster volume geo-replication master_volume
slave_volume::slave_volume stop [force]

Force option is to be used, when one of the node (or glusterd in
one of the node) is down. Once stopped, the session can be
restarted any time. Note that upon restarting of the session, the
change detection mechanism falls back to xsync mode. This happens
even though you have changelog generating journals, while the
geo-rep session is stopped.



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo replication help

2014-08-18 Thread M S Vishwanath Bhat

On 18/08/14 14:35, M S Vishwanath Bhat wrote:

On 15/08/14 15:55, Niels de Vos wrote:

On Wed, Aug 13, 2014 at 04:17:11PM +0530, M S Vishwanath Bhat wrote:

On 13/08/14 02:27, David F. Robinson wrote:

I was hoping someone could help me debug my geo-replication under
gluster 3.5.2.
I am trying to use geo-replication to create a lagged backup of my
data. What I wanted to do was to turn off the geo-replication
(gluster volume geo-replication homegfs
gfsib01bkp.corvidtec.com::homegfs_bkp stop) during the day, and
then turn it back on at midnight to allow the remote system to
sync.
If I stop the geo-replication, delete a file, and then restart the
geo-replication, the deletion never shows up on the slave system.

From the website below, I thought that it would pick up these
changes. Any suggestions for how I can get the changes made while
the geo-replication is stopped to propagate after restart the
geo-replication?

This is a known issue with glusterfs-3.5.2. The problem is xsync can
not handle deletes and renames. So when you restart the geo-rep, the
change detection mechanism falls back to xsync even though the
Changelog was ON, the whole time. So deletes and renames won't be
propagated to slave in 3.5.2

The patch to fix this issue is already submitted and is present in
glusterfs master branch. The fix should be available in
glusterfs-3.6 soon.

Can you point me to the bug/patch, or clone the bug for 3.5 and provide
a backport?
Apologies for the delay. We had a long weekend and I was out of town 
with no access to email.


The issue was a design limitation/flaw in xsync in 3.5, and it is fixed 
by introducing the changelog history APIs. These changes are present 
only in upstream as of now. Not sure if they can be backported to 
3.5, but they should be included in 3.6 at least.


I am cc'ing Aravinda, who is maintaining geo-rep. He can provide 
the list of bugs/patches which should be backported.

Forgot to cc :P



Best Regards,
Vishwanath




Thanks,
Niels


Best Regards,
Vishwanath

https://medium.com/@msvbhat/distributed-geo-replication-in-glusterfs-ec95f4393c50 



gluster volume geo-replication master_volume
slave_volume::slave_volume stop [force]

Force option is to be used, when one of the node (or glusterd in
one of the node) is down. Once stopped, the session can be
restarted any time. Note that upon restarting of the session, the
change detection mechanism falls back to xsync mode. This happens
even though you have changelog generating journals, while the
geo-rep session is stopped.



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo replication help

2014-08-18 Thread M S Vishwanath Bhat

On 13/08/14 20:02, David F. Robinson wrote:
One other question... Is there a way to set a config variable to turn 
off the compression for the rsync?
You can use rsync-options to specify any rsync options that you want 
rsync to use. Just make sure they do not conflict with the 
default rsync options used by geo-rep.


For example,

#gluster volume geo-replication MASTER SLAVE config rsync-options 
'--bwlimit=value'
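
To make that concrete, a short sketch with placeholder volume names 
(mastervol, slavehost, slavevol); running config with no value should 
simply print the current setting:

# Show the rsync options currently in effect for this session
gluster volume geo-replication mastervol slavehost::slavevol config rsync-options

# Example override: cap rsync bandwidth at roughly 1 MB/s (rsync's --bwlimit is in KB/s)
gluster volume geo-replication mastervol slavehost::slavevol config rsync-options '--bwlimit=1024'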


Best Regards,
Vishwanath


David
-- Original Message --
From: M S Vishwanath Bhat vb...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com; gluster-users@gluster.org

Sent: 8/13/2014 6:47:11 AM
Subject: Re: [Gluster-users] geo replication help

On 13/08/14 02:27, David F. Robinson wrote:
I was hoping someone could help me debug my geo-replication under 
gluster 3.5.2.
I am trying to use geo-replication to create a lagged backup of my 
data. What I wanted to do was to turn off the geo-replication 
(gluster volume geo-replication homegfs 
gfsib01bkp.corvidtec.com::homegfs_bkp stop) during the day, and then 
turn it back on at midnight to allow the remote system to sync.
If I stop the geo-replication, delete a file, and then restart the 
geo-replication, the deletion never shows up on the slave system.  
From the website below, I thought that it would pick up these 
changes.  Any suggestions for how I can get the changes made while 
the geo-replication is stopped to propagate after restart the 
geo-replication?
This is a known issue with glusterfs-3.5.2. The problem is xsync can 
not handle deletes and renames. So when you restart the geo-rep, the 
change detection mechanism falls back to xsync even though the 
Changelog was ON, the whole time. So deletes and renames won't be 
propagated to slave in 3.5.2


The patch to fix this issue is already submitted and is present in 
glusterfs master branch. The fix should be available in glusterfs-3.6 
soon.


Best Regards,
Vishwanath


https://medium.com/@msvbhat/distributed-geo-replication-in-glusterfs-ec95f4393c50

gluster volume geo-replication master_volume 
slave_volume::slave_volume stop [force]


Force option is to be used, when one of the node (or glusterd in one 
of the node) is down. Once stopped, the session can be restarted any 
time. Note that upon restarting of the session, the change detection 
mechanism falls back to xsync mode. This happens even though you 
have changelog generating journals, while the geo-rep session is 
stopped.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] geo replication help

2014-08-13 Thread M S Vishwanath Bhat

On 13/08/14 02:27, David F. Robinson wrote:
I was hoping someone could help me debug my geo-replication under 
gluster 3.5.2.
I am trying to use geo-replication to create a lagged backup of my 
data. What I wanted to do was to turn off the geo-replication (gluster 
volume geo-replication homegfs gfsib01bkp.corvidtec.com::homegfs_bkp 
stop) during the day, and then turn it back on at midnight to allow 
the remote system to sync.
If I stop the geo-replication, delete a file, and then restart the 
geo-replication, the deletion never shows up on the slave system.  
From the website below, I thought that it would pick up these 
changes.  Any suggestions for how I can get the changes made while the 
geo-replication is stopped to propagate after restart the geo-replication?
This is a known issue with glusterfs-3.5.2. The problem is that xsync cannot 
handle deletes and renames. So when you restart the geo-rep, the change 
detection mechanism falls back to xsync, even though the changelog was 
on the whole time. So deletes and renames won't be propagated to the slave 
in 3.5.2.


The patch to fix this issue is already submitted and is present in 
glusterfs master branch. The fix should be available in glusterfs-3.6 soon.
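
If you want to check which change detection mechanism a session is actually 
using, the geo-rep config interface should show it; a hedged sketch with 
placeholder names, assuming the change_detector option exposed by gsyncd 
in these releases:

# Prints the mechanism in use, e.g. 'changelog' or 'xsync'
gluster volume geo-replication mastervol slavehost::slavevol config change_detector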


Best Regards,
Vishwanath


https://medium.com/@msvbhat/distributed-geo-replication-in-glusterfs-ec95f4393c50

gluster volume geo-replication master_volume 
slave_volume::slave_volume stop [force]


Force option is to be used, when one of the node (or glusterd in one 
of the node) is down. Once stopped, the session can be restarted any 
time. Note that upon restarting of the session, the change detection 
mechanism falls back to xsync mode. This happens even though you have 
changelog generating journals, while the geo-rep session is stopped.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] geo replication, invalid slave name and gluster 3.5.1

2014-07-15 Thread M S Vishwanath Bhat

On 15/07/14 15:08, Stefan Moravcik wrote:

Hello Guys,

I have been trying to set up geo-replication in our glusterfs test 
environment and ran into a problem with the message invalid slave name.


So first things first...

I have 3 nodes configured in a cluster. Those nodes are configured as 
replica. On this cluster I have a volume created with let say name 
myvol1. So far everything works and looks good...


Next step was to create a geo replication off site.. So i followed 
this documentation:
http://www.gluster.org/community/documentation/index.php/HowTo:geo-replication 

These are old docs. I have edited the page to mention that these are the 
old geo-rep docs.


Please refer to 
https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md 
or 
https://medium.com/@msvbhat/distributed-geo-replication-in-glusterfs-ec95f4393c50 
for latest distributed-geo-rep documentation.


I had peered the slave server, created secret.pem, was able to ssh 
without a password, and tried to create the geo-replication volume 
with the command from the documentation, but got the following error:


on master:
gluster volume geo-replication myvol1 1.2.3.4:/shared/myvol1_slave start

on master:
[2014-07-15 09:15:37.188701] E 
[glusterd-geo-rep.c:4083:glusterd_get_slave_info] 0-: Invalid slave name
[2014-07-15 09:15:37.188827] W [dict.c:778:str_to_data] 
(--/usr/lib64/glusterfs/3.5.1/xlator/mgmt/glusterd.so(glusterd_op_stage_gsync_create+0x1e2) 
[0x7f979e20f1f2] 
(--/usr/lib64/glusterfs/3.5.1/xlator/mgmt/glusterd.so(glusterd_get_slave_details_confpath+0x116) 
[0x7f979e20a306] (--/usr/lib64/libglusterfs.so.0(dict_set_str+0x1c) 
[0x7f97a322045c]))) 0-dict: value is NULL
[2014-07-15 09:15:37.188837] E 
[glusterd-geo-rep.c:3995:glusterd_get_slave_details_confpath] 0-: 
Unable to store slave volume name.
[2014-07-15 09:15:37.188849] E 
[glusterd-geo-rep.c:2056:glusterd_op_stage_gsync_create] 0-: Unable to 
fetch slave or confpath details.
[2014-07-15 09:15:37.188861] E 
[glusterd-syncop.c:912:gd_stage_op_phase] 0-management: Staging of 
operation 'Volume Geo-replication Create' failed on localhost


There are no logs on the slave whatsoever.
I also tried different documentation with create push-pem and got the 
very same problem as above...


I tried to start the volume as node:/path/to/dir and also created a 
volume on the slave and started it as node:/slave_volume_name, always with 
the same result...


Tried to search for a solution and found this 
http://fpaste.org/114290/04117421/


It was a different user with the very same problem... The issue was raised 
on the IRC channel, but never answered.


This is a fresh install of 3.5.1, so no upgrade should be needed, I 
guess... Any help solving this problem would be appreciated.
From what you have described, it looks like your slave is not a gluster 
volume. In the latest geo-rep, the slave has to be a gluster volume; 
glusterfs no longer supports a plain directory as a slave.


Please follow the new documentation and try once more.
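
To illustrate with the names from this thread: the first command below (a 
plain directory path, single colon) is the old-style invocation that now 
produces the invalid slave name error, while the second (a slave gluster 
volume, double colon) is what distributed geo-rep expects:

# Not supported any more -- directory as slave:
gluster volume geo-replication myvol1 1.2.3.4:/shared/myvol1_slave create push-pem

# Supported -- the slave is itself a gluster volume (note the double colon):
gluster volume geo-replication myvol1 1.2.3.4::myvol1_slave create push-pem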

HTH

Best Regards,
Vishwanath



Thank you and best regards,
Stefan




___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] geo replication, invalid slave name and gluster 3.5.1

2014-07-15 Thread M S Vishwanath Bhat

On 15/07/14 18:13, Stefan Moravcik wrote:

Hello Vishwanath

thank you for your quick reply but i have a follow up question if it 
is ok... Maybe a different issue and i should open a new thread, but i 
will try to continue to use this one...


So I followed the new documentation... let me show you what i have 
done and what is the final error message...



I have 3 servers node1, node2 and node3 with IPs 1.1.1.1, 1.1.1.2 and 
1.1.1.3


I installed glusterfs-server and glusterfs-geo-replication on all 3 of 
them... I created replica volume called myvol1 and run the command


gluster system:: execute gsec_create

this created 4 files:
secret.pem
secret.pem.pub
tar_ssh.pem
tar_ssh.pem.pub

The pub file is different on all 3 nodes so I copied all 3 
secret.pem.pub to slave authorized_keys. I tried to ssh directly to 
slave server from all 3 nodes and got through with no problem.


So I connected to slave server installed glusterfs-server and 
glusterfs-geo-replication there too.


Started the glusterd and created a volume called myvol1_slave

Then I peer probed one of the masters with slave. This showed the 
volume in my master and peer appeared in peer status.


From here i run the command in your documentation

volume geo-replication myvol1 1.2.3.4::myvol1_slave create push-pem
Passwordless ssh login has not been setup with 1.2.3.4.
geo-replication command failed

Couple of things here.

I believe it was not clear enough in the docs and I apologise for that. 
But this is the prerequisite for dist-geo-rep:


* There should be password-less ssh set up from at least one node in the 
master volume to one node in the slave volume. The geo-rep create command 
should be executed from that node, which has password-less ssh set up to 
the slave.


So in your case, you can set up password-less ssh from 1.1.1.1 (one 
master volume node) to 1.2.3.4 (one slave volume node). You can use 
ssh-keygen and ssh-copy-id to do that.
After the above step is done, execute gluster system:: execute 
gsec_create. You don't need to copy anything to the slave authorized_keys; 
geo-rep create push-pem takes care of it for you.


Now, you should execute gluster volume geo-rep myvol1 
1.2.3.4::myvol1_slave create push-pem from 1.1.1.1 (because this node 
has passwordless ssh to 1.2.3.4, as mentioned in the command).


That should create a geo-rep session for you, which can be started later on.
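
Putting it together, a minimal sketch of the flow, run from 1.1.1.1 unless 
noted (assuming a root-to-root ssh setup; adjust users and names as needed):

# 1. Password-less ssh from one master node (1.1.1.1) to one slave node (1.2.3.4)
ssh-keygen
ssh-copy-id root@1.2.3.4

# 2. Generate the geo-rep pem keys on the master cluster
gluster system:: execute gsec_create

# 3. Create the session (push-pem distributes the keys to the slave), then start it
gluster volume geo-replication myvol1 1.2.3.4::myvol1_slave create push-pem
gluster volume geo-replication myvol1 1.2.3.4::myvol1_slave start

# 4. Check the session
gluster volume geo-replication myvol1 1.2.3.4::myvol1_slave status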

And you don't need to peer probe the slave from the master or vice versa. 
Logically the master and slave volumes are in different clusters (in 
two different geographic locations).


HTH,
Vishwanath



In the secure log file i could see the connection though.

2014-07-15T13:26:56.083445+01:00 1testlab sshd[23905]: Set 
/proc/self/oom_score_adj to 0
2014-07-15T13:26:56.089423+01:00 1testlab sshd[23905]: Connection from 
1.1.1.1 port 58351
2014-07-15T13:26:56.248687+01:00 1testlab sshd[23906]: Connection 
closed by 1.1.1.1


and in the logs of one of the masters

[2014-07-15 12:26:56.247667] E 
[glusterd-geo-rep.c:1889:glusterd_verify_slave] 0-: Not a valid slave
[2014-07-15 12:26:56.247752] E 
[glusterd-geo-rep.c:2106:glusterd_op_stage_gsync_create] 0-: 
1.2.3.4::myvol1_slave is not a valid slave volume. Error: Passwordless 
ssh login has not been setup with 1.2.3.4.
[2014-07-15 12:26:56.247772] E 
[glusterd-syncop.c:912:gd_stage_op_phase] 0-management: Staging of 
operation 'Volume Geo-replication Create' failed on localhost : 
Passwordless ssh login has not been setup with 1.2.3.4.


there is no log in the other masters in the cluster nor on slave..

I even tried with force option, but same result... I disabled firewall 
and selinux just to make sure those parts of the system do not 
interfere. Searched a google for same problem and found one... 
http://irclog.perlgeek.de/gluster/2014-01-16 but again no answer or 
solution.


Thank you for your time and help.

Best regards,
Stefan

On 15/07/14 12:26, M S Vishwanath Bhat wrote:

On 15/07/14 15:08, Stefan Moravcik wrote:

Hello Guys,

I have been trying to set a geo replication in our glusterfs test 
environment and got a problem with a message invalid slave name


So first things first...

I have 3 nodes configured in a cluster. Those nodes are configured 
as replica. On this cluster I have a volume created with let say 
name myvol1. So far everything works and looks good...


Next step was to create a geo replication off site.. So i followed 
this documentation:
http://www.gluster.org/community/documentation/index.php/HowTo:geo-replication 

These are old docs. I have edited the page to mention that these are the 
old geo-rep docs.


Please refer to 
https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md 
or 
https://medium.com/@msvbhat/distributed-geo-replication-in-glusterfs-ec95f4393c50 
for latest distributed-geo-rep documentation.


I had peered the slave server, created secret.pem was able to ssh 
without the password and tried to create the geo replication

Re: [Gluster-users] [Gluster-devel] autodelete in snapshots

2014-06-02 Thread M S Vishwanath Bhat
On 3 June 2014 01:02, M S Vishwanath Bhat msvb...@gmail.com wrote:




 On 2 June 2014 20:22, Vijay Bellur vbel...@redhat.com wrote:

 On 04/23/2014 05:50 AM, Vijay Bellur wrote:

 On 04/20/2014 11:42 PM, Lalatendu Mohanty wrote:

 On 04/16/2014 11:39 AM, Avra Sengupta wrote:

 The whole purpose of introducing the soft-limit is, that at any point
 of time the number of
 snaps should not exceed the hard limit. If we trigger auto-delete on
 hitting hard-limit, then
 the purpose itself is lost, because at that point we would be taking a
 snap, making the limit
 hard-limit + 1, and then triggering auto-delete, which violates the
 sanctity of the hard-limit.
 Also what happens when we are at hard-limit + 1, and another snap is
 issued, while auto-delete
 is yet to process the first delete. At that point we end up at
 hard-limit + 1. Also what happens
 if for a particular snap the auto-delete fails.

 We should see the hard-limit, as something set by the admin keeping in
 mind the resource consumption
 and at no-point should we cross this limit, come what may. If we hit
 this limit, the create command
 should fail asking the user to delete snaps using the snapshot
 delete command.

 The two options Raghavendra mentioned are applicable for the
 soft-limit only, in which cases on
 hitting the soft-limit

 1. Trigger auto-delete

 or

 2. Log a warning-message, for the user saying the number of snaps is
 exceeding the snap-limit and
 display the number of available snaps

 Now which of these should happen also depends on the user, because the
 auto-delete option
 is configurable.

 So if the auto-delete option is set as true, auto-delete should be
 triggered and the above message
 should also be logged.

 But if the option is set as false, only the message should be logged.

 This is the behaviour as designed. Adding Rahul, and Seema in the
 mail, to reflect upon the
 behaviour as well.

 Regards,
 Avra


 This sounds correct. However we need to make sure that the usage or
 documentation around this is good enough, so that users
 understand each of the limits correctly.


 It might be better to avoid the usage of the term soft-limit.
 soft-limit as used in quota and other places generally has an alerting
 connotation. Something like auto-deletion-limit might be better.


 I still see references to soft-limit and auto deletion seems to get
 triggered upon reaching soft-limit.

 Why is the ability to auto delete not configurable? It does seem pretty
 nasty to go about deleting snapshots without obtaining explicit consent
 from the user.


 I agree with Vijay here. It's not good to delete a snap (even though it is
 the oldest) without explicit consent from the user.

 FYI, it took me more than 2 weeks to figure out that my snaps were getting
 autodeleted after reaching the soft-limit. For all I knew I had not done
 anything, and yet my snap restores were failing.

 I propose to remove the terms soft and hard limit. I believe there
 should be a limit (just limit) after which all snapshot creates should
 fail with proper error messages. And there can be a water-mark after which
 the user should get warning messages. So below is my proposal.

 *auto-delete + snap-limit*: If the snap-limit is set to *n*, the next snap
 create (the (n+1)th) will succeed *only if* auto-delete is set to on/true/1,
 and the oldest snap will get deleted automatically. If autodelete is set to
 off/false/0, the (n+1)th snap create will fail with a proper error message
 from the gluster CLI command. But again, by default autodelete should be off.

 *snap-water-mark*: This should come into the picture only if autodelete is
 turned off. It should not have any meaning if auto-delete is turned ON.
 Basically its purpose is to warn the user that the limit is almost
 reached and it is time for the admin to decide which snaps should be deleted
 (or which should be kept).

 *my two cents*

Adding gluster-users as well.

-MS
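
To make the proposal easier to picture, a purely hypothetical CLI sketch 
(these option names only illustrate the proposal above and are not existing 
gluster commands):

# Hypothetical illustration only -- not actual gluster CLI options
gluster snapshot config myvol snap-limit 256       # hard cap: the 257th create fails
gluster snapshot config myvol auto-delete off      # default: fail instead of deleting the oldest snap
gluster snapshot config myvol snap-water-mark 240  # warn once 240 snaps exist (meaningful only with auto-delete off)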


 -MS



 Cheers,

 Vijay

 ___
 Gluster-devel mailing list
 gluster-de...@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Request for feedback on staging.gluster.org

2014-05-30 Thread M S Vishwanath Bhat

Awesome. +1

I hope all the docs here are 'google'able :)

Best Regards,
Vishwanath

On 30/05/14 04:40, Eco Willson wrote:

Dear Community members,

We have been working on a new site design and we would love to get your 
feedback.  You can check things out at staging.gluster.org.  Things are still 
very much in beta (a few pages not displaying properly or at all, etc), but we 
decided to roll things out so that we can get a list of improvements (and 
hopefully help from) the community.  If you would like to test the most recent 
changes on your own machine, you can do the following:

  git clone g...@forge.gluster.org:gluster-site/gluster-site.git

  git clone 
g...@forge.gluster.org:~eco/gluster-docs-project/ecos-gluster-docs-project.git 
(this is the temporary testing grounds, we will switch back to the official 
docs-project once initial evaluation is complete)

  cd gluster-site

  gem install bundle

  gem install middleman

  bundle exec middleman

You will need to manually copy or symlink the contents of the 
gluster-docs-project/htmltext/documentation folder into the 
gluster-site/source/documentation directory for testing currently. Once you 
`bundle exec middleman`, you can point a browser to localhost:4567

Looking forward to feedback and submissions!

Regards,

Eco
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster command not found

2014-04-13 Thread M S Vishwanath Bhat

On 13/04/14 13:46, Gopu Krishnan wrote:

Hi Joe,

You are right. When I installed from the ppa, the command works fine. I
referred to the blog:
http://blogs.reliablepenguin.com/2013/09/05/glusterfs-cluster-with-ubuntu-on-rackspace-cloud
and successfully installed gluster. I am not configuring a client as I
only need to replicate across those two master servers and use them
under a load balancer. The setup completed successfully, but a file I add
to /data on the first server is not replicated to the second server and vice
versa. Following are the outputs from the servers:


root@gluster1:/data# gluster volume info

Volume Name: testvol
Type: Replicate
Volume ID: ceb5e415-8194-4e30-85b6-d92492e86a11
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/data
Brick2: gluster2:/data





root@gluster1:~# gluster volume status
Status of volume: testvol
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------
Brick gluster1:/data                            24009   Y       794
Brick gluster2:/data                            24009   Y       784
NFS Server on localhost                         38467   Y       800
Self-heal Daemon on localhost                   N/A     Y       806
NFS Server on gluster2                          38467   Y       790
Self-heal Daemon on gluster2                    N/A     Y       796





root@gluster1:/data# gluster peer status
Number of Peers: 1

Hostname: gluster2
Uuid: 5fead283-8f03-4495-86e4-2d80d2fefb1b
State: Peer in Cluster (Connected)



root@gluster2:~# gluster volume status
Status of volume: testvol
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------
Brick gluster1:/data                            24009   Y       794
Brick gluster2:/data                            24009   Y       784
NFS Server on localhost                         38467   Y       2208
Self-heal Daemon on localhost                   N/A     Y       2214
NFS Server on 10.181.106.254                    38467   Y       2511
Self-heal Daemon on 10.181.106.254              N/A     Y       2517



root@gluster2:~# gluster peer status
Number of Peers: 1

Hostname: 10.181.106.254
Uuid: c3af97be-4ef1-4908-89e2-0d1699329f2a
State: Peer in Cluster (Connected)



root@gluster2:~# gluster volume info

Volume Name: testvol
Type: Replicate
Volume ID: ceb5e415-8194-4e30-85b6-d92492e86a11
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/data
Brick2: gluster2:/data



root@gluster1:/data# touch g
root@gluster1:/data# ls
g


root@gluster2:/data# ls
root@gluster2:/data#

Nothing came in the second server !!

Kindly assist.
glusterfs handles replication through the client. You can't directly add 
the data to your backend brick. It has to be added from the client (or 
using libgfapi).


Mount the gluster volume on the client (if you don't have the gluster client 
bits, mount via NFS), and then create a file. That will be replicated to 
both of your backend bricks.
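
For example (the mount point path is arbitrary; use an NFS mount instead if 
you don't have the gluster fuse client):

mkdir -p /mnt/testvol
mount -t glusterfs gluster1:/testvol /mnt/testvol

# Files written through the mount are replicated to both bricks
touch /mnt/testvol/g
ls /data    # run on gluster1 and gluster2: the file should now be on both bricks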


Best Regards,
Vishwanath


Thanks,
Gopu


-
On 4/13/14, Joe Julian j...@julianfamily.org wrote:

Are you installing from the ppa?

On April 12, 2014 7:58:02 PM PDT, Baochuan Wu wildpointe...@gmail.com
wrote:

It is strange. I have a virtual machine running Ubuntu 12.04 server.
After
running sudo apt-get install glusterfs-server, I can run gluster
command.
You can try to install from source code.

Thanks,
Baochuan


2014-04-13 1:19 GMT+08:00 Gopu Krishnan gopukrishnan...@gmail.com:


Hi,

I am trying to configure GlusterFS in my rackspace cloud servers and
facing multiple issues. I have installed fuse and glusterfs packages but
glusterfs is not running at all. Could you please guide me through the
installation on Ubuntu servers? Below are the steps I have tried:
apt-get install libfuse2

apt-get install  fuse-utils

apt-get install glusterfs-server
service glusterd start

root@test1:~# service glusterfs-server status

   * GlusterFS server is not running.

root@test1:~# gluster peer probe test2
  -bash: gluster: command not found

  So I am stuck here with the remaining steps. Please let me know if I
  missed something.

Thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [Gluster-users] Expanding volume

2013-12-05 Thread M S Vishwanath Bhat
On 4 December 2013 04:33, Will Glass-Husain wgl...@forio.com wrote:

 Hi,

 I'm trying to expand my replicated volume but am getting an error. My goal
 is to double my capacity by adding new volumes.  Any suggestions?

 # gluster volume info

 Volume Name: eproduct
 Type: Replicate
 Volume ID: 1325b138-9777-4d76-9701-6410c0ff35d9
 Status: Started
 Number of Bricks: 1 x 2 = 2
 Transport-type: tcp
 Bricks:
 Brick1: staging1:/export/brick1
 Brick2: staging2:/export/brick1

 (I created a new volume /export/brick2 on each of the two servers)

 # gluster volume add-brick eproduct staging1:/export/brick2
 staging2:/export/brick2
 volume add-brick: failed: /export/brick2 or a prefix of it is already part
 of a volume

 Looks like the brick /export/brick2 was part of some other gluster
volume before. The brick has some xattrs which need to be cleared before
adding it to the gluster volume.

This should help you.

http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/

Note that you will have to run rebalance after adding new bricks.
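
In short, the cleanup from that post amounts to removing the stale gluster 
metadata from each reused brick and then retrying; a sketch using your brick 
paths (run on the node that owns each brick, and only if whatever is on 
/export/brick2 is disposable):

# Clear the leftover gluster xattrs and metadata on the reused brick
setfattr -x trusted.glusterfs.volume-id /export/brick2
setfattr -x trusted.gfid /export/brick2
rm -rf /export/brick2/.glusterfs

# Then retry the expansion and rebalance
gluster volume add-brick eproduct staging1:/export/brick2 staging2:/export/brick2
gluster volume rebalance eproduct start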





 ---

 Would appreciate suggestions as to what I need to do to add new volumes to
 give me larger storage capacity.

 Thanks!

 WILL

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Resync or how to force the replication

2013-11-26 Thread M S Vishwanath Bhat

On 26/11/13 12:47, gandalf istari wrote:
Hi, I have set up a two-node replicated glusterfs. After the initial 
installation the master node was put into the datacenter, and after 
two weeks we moved the second one to the datacenter as well.


But the sync has not started yet.

On the master

gluster volume info all

Volume Name: datastore1

Type: Replicate

Volume ID: fdff5190-85ef-4cba-9056-a6bbbd8d6863

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: nas-01-data:/datastore

Brick2: nas-02-data:/datastore

gluster peer status

Number of Peers: 1


Hostname: nas-02-data

Uuid: 71df9f86-a87b-481d-896c-c0d4ab679cfa

State: Peer in Cluster (Connected)


On the slave

gluster peer status

Number of Peers: 1

Hostname: 192.168.70.6

Uuid: 97ef0154-ad7b-402a-b0cb-22be09134a3c

State: Peer in Cluster (Connected)


gluster volume status all

Status of volume: datastore1

Gluster process                                 Port    Online  Pid
------------------------------------------------------------------
Brick nas-01-data:/datastore                    49152   Y       2130
Brick nas-02-data:/datastore                    N/A     N       N/A
NFS Server on localhost                         2049    Y       8064
Self-heal Daemon on localhost                   N/A     Y       8073
NFS Server on 192.168.70.6                      2049    Y       3379
Self-heal Daemon on 192.168.70.6                N/A     Y       3384


Which version of glusterfs are you running?

volume status suggests that the second brick (nas-02-data:/datastore) is 
not running.


Can you run gluster volume start volname force on either of these two 
nodes and try again?
Then you would also be required to run `find . | xargs stat` on the 
mountpoint of the volume. That should trigger the self-heal.
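
Concretely, something along these lines (the mount point path is just an 
example):

gluster volume start datastore1 force

# From a client mount of the volume, stat every file to kick off self-heal
mount -t glusterfs nas-01-data:/datastore1 /mnt/datastore1
cd /mnt/datastore1
find . | xargs stat > /dev/null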


There are no active volume tasks


I would like to run on the slave gluster volume sync nas-01-data 
datastore1


BTW, there is no concept of master and slave in AFR (replication). 
However, there is a concept of master volume and slave volume in 
gluster geo-replication.


But then the hosted virtual machines will be unavailable. Is there 
another way to start the replication?



Thanks






___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Where is glusterfs QA forum?

2013-08-02 Thread M S Vishwanath Bhat

Hi,

I was having a conversation today in IRC and found this broken link. 
http://goo.gl/F6jqx


To fix this, I wanted to find out where our glusterfs QA forum was moved 
to, but asking around, nobody seems to know.


Does anybody know where our glusterfs QA forum is now?

Best Regards,
Vishwanath
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Where is glusterfs QA forum?

2013-08-02 Thread M S Vishwanath Bhat

On 02/08/13 19:21, John Mark Walker wrote:
On Fri, Aug 2, 2013 at 9:45 AM, Gowrishankar Rajaiyan g...@redhat.com wrote:



Something like http://ask.gluster.org would be great. We have one
similar for openstack ref: https://ask.openstack.org



Agreed. I've looked at that, as well as discourse - 
http://meta.discourse.org/


And HyperKitty, which is a web-based interface for Mailman 3: 
http://lists-dev.cloud.fedoraproject.org/hyperkitty


I've CC'd gluster-infra in the hopes that we can revive this discussion.

I remember us having something similar to Quora. Even that would be good.

-MS


-JM







___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Log files to monitor ?????

2013-06-28 Thread M S Vishwanath Bhat

On 27/06/13 14:17, Bobby Jacob wrote:


Hi All,

Once GlusterFS is set up and running, which are the actual log 
files to monitor?


1)/var/log/glusterfs/glustershd.log

2)/var/log/glusterfs/nfs.log

3)/var/log/glusterfs/bricks/brick.log

4)Client Server :

a./var/log/glusterFS/mountpoint.log

b./var/log/glusterFS/mountpoint*-*.log

Among the above-mentioned logs, can anyone give a clear idea as to 
which we have to track, and please mention any other logs if I've 
missed any.




There is one more:
5. glusterd log file -- /var/log/glusterfs/*.glusterd.vol.log

If something goes wrong while self-healing, you should look at the 
glustershd logs.
If you have mounted the gluster volume via NFS, then nfs.log is the log file 
to monitor.
The gluster mount log (mountpoint.log) is where the client process will log, 
so this is the place to look for anything related to I/O.
Each glusterfsd (brick process) will log to its brick.log, so this is 
one more place to look when you're doing I/O or when something 
unexpected happens with a glusterfsd process.
For anything related to glusterd (the management daemon) you should look at 
the glusterd log. If any gluster CLI command returns a failure, this is 
the file to look at.
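
If it helps, the usual way to keep an eye on these is simply to tail them; 
the paths below are the default locations, and the exact glusterd and mount 
log file names can vary between installs:

tail -F /var/log/glusterfs/glustershd.log                   # self-heal daemon
tail -F /var/log/glusterfs/nfs.log                          # gluster NFS server
tail -F /var/log/glusterfs/bricks/*.log                     # brick (glusterfsd) processes
tail -F /var/log/glusterfs/mnt-myvol.log                    # client mount log (named after the mount point)
tail -F /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # glusterd management daemon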



Best Regards,
Vishwanath


Thanks & Regards,

Bobby Jacob

Senior Technical Systems Engineer | eGroup

SAVE TREES. Please don't print this e-mail unless you really need to.



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users