[Gluster-devel] gluster-block v0.5.1 is alive!

2020-09-30 Thread Prasanna Kalever
Hello Gluster folks,

The gluster-block team is happy to announce the v0.5.1 release [1].

This is a security and bug-fix release of gluster-block; the CVE fix and a
few bug fixes are made available as part of this release. Please find
the release notes with notable fixes at [2].

Details about prerequisites, installation, and setup can be found at
[3]. If you are a new user, check out the demo video linked in the
README doc [4], which is a good introduction to the project. There are
good examples of how to use gluster-block in both the man pages [5]
and the test file [6] (also in the README).
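To give a flavor of those examples, a typical CLI session looks roughly like the following (a sketch based on the README examples; the volume name, block name, and host address are placeholders, and the commands assume a running Gluster volume with gluster-blockd active):

```shell
# Create a 1 GiB block device backed by the gluster volume "block-test",
# exported via one gateway (ha 1); the IP address is a placeholder
gluster-block create block-test/sample-block ha 1 192.168.1.11 1GiB

# List block devices hosted on the volume and inspect one of them
gluster-block list block-test
gluster-block info block-test/sample-block

# Remove the block device when it is no longer needed
gluster-block delete block-test/sample-block
```

The man pages [5] document the full set of commands and options.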

If you want to quickly bring up a gluster-block cluster environment
locally, please head to the Vagrant setup details [7].

gluster-block is part of the Fedora package collection; an updated
package for release v0.5.1 will be made available. Community-provided
packages will soon be available at [8].

Please take a minute to report any issue you notice using this handy
link [9]. We look forward to your feedback, which will help
gluster-block get better!

We would like to thank all our users and contributors for filing bugs
and contributing fixes, as well as the whole team involved in the huge
effort of pre-release testing.

[1] https://github.com/gluster/gluster-block/releases
[2] https://github.com/gluster/gluster-block/releases/tag/v0.5.1
[3] https://github.com/gluster/gluster-block#install
[4] https://github.com/gluster/gluster-block#demo
[5] https://github.com/gluster/gluster-block/tree/master/docs
[6] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
[7] https://github.com/gluster/gluster-block#how-to-quickly-bringup-gluster-block-environment-locally-
[8] https://download.gluster.org/pub/gluster/gluster-block/
[9] https://github.com/gluster/gluster-block/issues/new

Cheers,
--
Team Gluster-Block!

___

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968




Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



[Gluster-devel] gluster-block v0.5 is alive!

2020-05-13 Thread Prasanna Kalever
Hello Gluster folks,

The gluster-block team is happy to announce the v0.5 release [1].

This is the new stable version of gluster-block; a good number of
features and interesting bug fixes are made available as part of this
release. Please find the list of release highlights and notable fixes
at [2].

Details about prerequisites, installation, and setup can be found at
[3]. If you are a new user, check out the demo video linked in the
README doc [4], which is a good introduction to the project. There are
good examples of how to use gluster-block in both the man pages [5]
and the test file [6] (also in the README).

If you want to quickly bring up a gluster-block cluster environment
locally, please head to the Vagrant setup details [7].

gluster-block is part of the Fedora package collection; an updated
package for release v0.5 will soon be made available. Community-provided
packages will soon be available at [8].

Please take a minute to report any issue you notice using this handy
link [9]. We look forward to your feedback, which will help
gluster-block get better!

We would like to thank all our users and contributors for filing bugs
and contributing fixes, as well as the whole team involved in the huge
effort of pre-release testing.

[1] https://github.com/gluster/gluster-block/releases
[2] https://github.com/gluster/gluster-block/releases/tag/v0.5
[3] https://github.com/gluster/gluster-block#install
[4] https://github.com/gluster/gluster-block#demo
[5] https://github.com/gluster/gluster-block/tree/master/docs
[6] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
[7] https://github.com/gluster/gluster-block#how-to-quickly-bringup-gluster-block-environment-locally-
[8] https://download.gluster.org/pub/gluster/gluster-block/
[9] https://github.com/gluster/gluster-block/issues/new

Cheers,
Team Gluster-Block!

___




Re: [Gluster-devel] Gluster on RHEL 8

2019-07-23 Thread Prasanna Kalever
On Tue, Jul 23, 2019 at 4:41 PM Barak Sason  wrote:
>
> Greeting all,
>
> I've made a fresh installation of RHEL 8 on a VM and have been trying to set 
> up Gluster on that system.
>
> Running ./autogen.sh completes OK, but running ./config results in an error 
> regarding missing 'rpcgen'.
> 'libtirpc-devel package is installed.
> Running ./configure --without-libtirp  results in the same error.

I see:
[root@localhost src]# ./configure --without-libtirp
configure: WARNING: unrecognized options: --without-libtirp

should it be '--without-libtirpc' instead of '--without-libtirp' ?
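(For reference: on RHEL 8 and recent Fedora releases, rpcgen is no longer bundled with glibc and is shipped as a separate package, so the missing-'rpcgen' error is typically resolved by installing it before re-running configure. A sketch, assuming RHEL 8 package names:)

```shell
# rpcgen moved out of glibc; install it alongside the TI-RPC headers
dnf install -y rpcgen libtirpc-devel

# then reconfigure (note the trailing 'c' if you use the flag at all)
./autogen.sh && ./configure
```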

BRs,
--
Prasanna





> I'm attaching the relevant terminal output.
> I'm currently out of ideas.
>
> I appreciate any help you may offer,
>
> Barak
> ___
>
> Community Meeting Calendar:
>
> APAC Schedule -
> Every 2nd and 4th Tuesday at 11:30 AM IST
> Bridge: https://bluejeans.com/836554017
>
> NA/EMEA Schedule -
> Every 1st and 3rd Tuesday at 01:00 PM EDT
> Bridge: https://bluejeans.com/486278655
>
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
___

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655

Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] gluster-block v0.4 is alive!

2019-05-21 Thread Prasanna Kalever
On Mon, May 20, 2019 at 9:05 PM Vlad Kopylov  wrote:
>
> Thank you Prasanna.
>
> Do we have architecture somewhere?

Vlad,

Although the complete set of details may not be in one place right
now, some pointers to start with are available at
https://github.com/gluster/gluster-block#gluster-block and
https://pkalever.wordpress.com/2019/05/06/starting-with-gluster-block,
which should give some clarity about the project. Also check out the
man pages.

> Does it bypass FUSE and go directly to gfapi?

Yes, we don't use FUSE access with gluster-block. Management as well
as IO happens over gfapi.

Please go through the docs pointed to above; if you have any specific
queries, feel free to ask them here or on GitHub.

Best Regards,
--
Prasanna

>
> v
>
> On Mon, May 20, 2019, 8:36 AM Prasanna Kalever  wrote:
>>
>> Hey Vlad,
>>
>> Thanks for trying gluster-block. Appreciate your feedback.
>>
>> Here is the patch which should fix the issue you have noticed:
>> https://github.com/gluster/gluster-block/pull/233
>>
>> Thanks!
>> --
>> Prasanna
>>
>> On Sat, May 18, 2019 at 4:48 AM Vlad Kopylov  wrote:
>> >
>> >
>> > straight from
>> >
>> > ./autogen.sh && ./configure && make -j install
>> >
>> >
>> > CentOS Linux release 7.6.1810 (Core)
>> >
>> >
>> > May 17 19:13:18 vm2 gluster-blockd[24294]: Error opening log file: No such 
>> > file or directory
>> > May 17 19:13:18 vm2 gluster-blockd[24294]: Logging to stderr.
>> > May 17 19:13:18 vm2 gluster-blockd[24294]: [2019-05-17 23:13:18.966992] 
>> > CRIT: trying to change logDir from /var/log/gluster-block to 
>> > /var/log/gluster-block [at utils.c+495 :]
>> > May 17 19:13:19 vm2 gluster-blockd[24294]: No such path 
>> > /backstores/user:glfs
>> > May 17 19:13:19 vm2 systemd[1]: gluster-blockd.service: main process 
>> > exited, code=exited, status=1/FAILURE
>> > May 17 19:13:19 vm2 systemd[1]: Unit gluster-blockd.service entered failed 
>> > state.
>> > May 17 19:13:19 vm2 systemd[1]: gluster-blockd.service failed.
>> >
>> >
>> >
>> > On Thu, May 2, 2019 at 1:35 PM Prasanna Kalever  
>> > wrote:
>> >>
>> >> Hello Gluster folks,
>> >>
>> >> Gluster-block team is happy to announce the v0.4 release [1].
>> >>
>> >> This is the new stable version of gluster-block, lots of new and
>> >> exciting features and interesting bug fixes are made available as part
>> >> of this release.
>> >> Please find the big list of release highlights and notable fixes at [2].
>> >>
>> >> Details about installation can be found in the easy install guide at
>> >> [3]. Find the details about prerequisites and setup guide at [4].
>> >> If you are a new user, checkout the demo video attached in the README
>> >> doc [5], which will be a good source of intro to the project.
>> >> There are good examples about how to use gluster-block both in the man
>> >> pages [6] and test file [7] (also in the README).
>> >>
>> >> gluster-block is part of fedora package collection, an updated package
>> >> with release version v0.4 will be soon made available. And the
>> >> community provided packages will be soon made available at [8].
>> >>
>> >> Please spend a minute to report any kind of issue that comes to your
>> >> notice with this handy link [9].
>> >> We look forward to your feedback, which will help gluster-block get 
>> >> better!
>> >>
>> >> We would like to thank all our users, contributors for bug filing and
>> >> fixes, also the whole team who involved in the huge effort with
>> >> pre-release testing.
>> >>
>> >>
>> >> [1] https://github.com/gluster/gluster-block
>> >> [2] https://github.com/gluster/gluster-block/releases
>> >> [3] https://github.com/gluster/gluster-block/blob/master/INSTALL
>> >> [4] https://github.com/gluster/gluster-block#usage
>> >> [5] https://github.com/gluster/gluster-block/blob/master/README.md
>> >> [6] https://github.com/gluster/gluster-block/tree/master/docs
>> >> [7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
>> >> [8] https://download.gluster.org/pub/gluster/gluster-block/
>> >> [9] https://github.com/gluster/gluster-block/issues/new
>> >>
>> >> Cheers,
>> >> Team Gluster-Block!
>> >> ___
>> >> Gluster-users mailing list
>> >> gluster-us...@gluster.org
>> >> https://lists.gluster.org/mailman/listinfo/gluster-users
___




Re: [Gluster-devel] [Gluster-users] gluster-block v0.4 is alive!

2019-05-20 Thread Prasanna Kalever
Hey Vlad,

Thanks for trying gluster-block. Appreciate your feedback.

Here is the patch which should fix the issue you have noticed:
https://github.com/gluster/gluster-block/pull/233

Thanks!
--
Prasanna

On Sat, May 18, 2019 at 4:48 AM Vlad Kopylov  wrote:
>
>
> straight from
>
> ./autogen.sh && ./configure && make -j install
>
>
> CentOS Linux release 7.6.1810 (Core)
>
>
> May 17 19:13:18 vm2 gluster-blockd[24294]: Error opening log file: No such 
> file or directory
> May 17 19:13:18 vm2 gluster-blockd[24294]: Logging to stderr.
> May 17 19:13:18 vm2 gluster-blockd[24294]: [2019-05-17 23:13:18.966992] CRIT: 
> trying to change logDir from /var/log/gluster-block to /var/log/gluster-block 
> [at utils.c+495 :]
> May 17 19:13:19 vm2 gluster-blockd[24294]: No such path /backstores/user:glfs
> May 17 19:13:19 vm2 systemd[1]: gluster-blockd.service: main process exited, 
> code=exited, status=1/FAILURE
> May 17 19:13:19 vm2 systemd[1]: Unit gluster-blockd.service entered failed 
> state.
> May 17 19:13:19 vm2 systemd[1]: gluster-blockd.service failed.
>
>
>
> On Thu, May 2, 2019 at 1:35 PM Prasanna Kalever  wrote:
>>
>> Hello Gluster folks,
>>
>> Gluster-block team is happy to announce the v0.4 release [1].
>>
>> This is the new stable version of gluster-block, lots of new and
>> exciting features and interesting bug fixes are made available as part
>> of this release.
>> Please find the big list of release highlights and notable fixes at [2].
>>
>> Details about installation can be found in the easy install guide at
>> [3]. Find the details about prerequisites and setup guide at [4].
>> If you are a new user, checkout the demo video attached in the README
>> doc [5], which will be a good source of intro to the project.
>> There are good examples about how to use gluster-block both in the man
>> pages [6] and test file [7] (also in the README).
>>
>> gluster-block is part of fedora package collection, an updated package
>> with release version v0.4 will be soon made available. And the
>> community provided packages will be soon made available at [8].
>>
>> Please spend a minute to report any kind of issue that comes to your
>> notice with this handy link [9].
>> We look forward to your feedback, which will help gluster-block get better!
>>
>> We would like to thank all our users, contributors for bug filing and
>> fixes, also the whole team who involved in the huge effort with
>> pre-release testing.
>>
>>
>> [1] https://github.com/gluster/gluster-block
>> [2] https://github.com/gluster/gluster-block/releases
>> [3] https://github.com/gluster/gluster-block/blob/master/INSTALL
>> [4] https://github.com/gluster/gluster-block#usage
>> [5] https://github.com/gluster/gluster-block/blob/master/README.md
>> [6] https://github.com/gluster/gluster-block/tree/master/docs
>> [7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
>> [8] https://download.gluster.org/pub/gluster/gluster-block/
>> [9] https://github.com/gluster/gluster-block/issues/new
>>
>> Cheers,
>> Team Gluster-Block!
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
___




[Gluster-devel] gluster-block v0.4 is alive!

2019-05-02 Thread Prasanna Kalever
Hello Gluster folks,

The gluster-block team is happy to announce the v0.4 release [1].

This is the new stable version of gluster-block; lots of new and
exciting features and interesting bug fixes are made available as part
of this release.
Please find the long list of release highlights and notable fixes at [2].

Details about installation can be found in the easy install guide at
[3], and details about prerequisites and setup at [4].
If you are a new user, check out the demo video linked in the README
doc [5], which is a good introduction to the project.
There are good examples of how to use gluster-block in both the man
pages [6] and the test file [7] (also in the README).

gluster-block is part of the Fedora package collection; an updated
package for release v0.4 will soon be made available. Community-provided
packages will soon be available at [8].

Please take a minute to report any issue you notice using this handy
link [9].
We look forward to your feedback, which will help gluster-block get better!

We would like to thank all our users and contributors for filing bugs
and contributing fixes, as well as the whole team involved in the huge
effort of pre-release testing.


[1] https://github.com/gluster/gluster-block
[2] https://github.com/gluster/gluster-block/releases
[3] https://github.com/gluster/gluster-block/blob/master/INSTALL
[4] https://github.com/gluster/gluster-block#usage
[5] https://github.com/gluster/gluster-block/blob/master/README.md
[6] https://github.com/gluster/gluster-block/tree/master/docs
[7] https://github.com/gluster/gluster-block/blob/master/tests/basic.t
[8] https://download.gluster.org/pub/gluster/gluster-block/
[9] https://github.com/gluster/gluster-block/issues/new

Cheers,
Team Gluster-Block!
___


Re: [Gluster-devel] Network Block device (NBD) on top of glusterfs

2019-03-21 Thread Prasanna Kalever
On Thu, Mar 21, 2019 at 6:31 PM Xiubo Li  wrote:

> On 2019/3/21 18:09, Prasanna Kalever wrote:
>
>
>
> On Thu, Mar 21, 2019 at 9:00 AM Xiubo Li  wrote:
>
>> All,
>>
>> I am one of the contributor for gluster-block
>> <https://github.com/gluster/gluster-block>[1] project, and also I
>> contribute to linux kernel and open-iscsi <https://github.com/open-iscsi>
>> project.[2]
>>
>> NBD was around for some time, but in recent time, linux kernel’s Network
>> Block Device (NBD) is enhanced and made to work with more devices and also
>> the option to integrate with netlink is added. So, I tried to provide a
>> glusterfs client based NBD driver recently. Please refer github issue
>> #633 <https://github.com/gluster/glusterfs/issues/633>[3], and good news
>> is I have a working code, with most basic things @ nbd-runner project
>> <https://github.com/gluster/nbd-runner>[4].
>>
>> While this email is about announcing the project, and asking for more
>> collaboration, I would also like to discuss more about the placement of the
>> project itself. Currently nbd-runner project is expected to be shared by
>> our friends at Ceph project too, to provide NBD driver for Ceph. I have
>> personally worked with some of them closely while contributing to
>> open-iSCSI project, and we would like to take this project to great success.
>>
>> Now few questions:
>>
>>1. Can I continue to use http://github.com/gluster/nbd-runner as home
>>for this project, even if its shared by other filesystem projects?
>>
>>
>>- I personally am fine with this.
>>
>>
>>1. Should there be a separate organization for this repo?
>>
>>
>>- While it may make sense in future, for now, I am not planning to
>>start any new thing?
>>
>> It would be great if we have some consensus on this soon as nbd-runner is
>> a new repository. If there are no concerns, I will continue to contribute
>> to the existing repository.
>>
>
> Thanks Xiubo Li, for finally sending this email out. Since this email is
> out on gluster mailing list, I would like to take a stand from gluster
> community point of view *only* and share my views.
>
> My honest answer is "If we want to maintain this within gluster org, then
> 80% of the effort is common/duplicate of what we did all these days with
> gluster-block",
>
> The great idea came from Mike Christie days ago and the nbd-runner
> project's framework is initially emulated from tcmu-runner. This is why I
> name this project as nbd-runner, which will work for all the other
> Distributed Storages, such as Gluster/Ceph/Azure, as discussed with Mike
> before.
>
> nbd-runner(NBD proto) and tcmu-runner(iSCSI proto) are almost the same and
> both are working as lower IO(READ/WRITE/...) stuff, not the management
> layer like ceph-iscsi-gateway and gluster-block currently do.
>
> Currently since I only implemented the Gluster handler and also using the
> RPC like glusterfs and gluster-block, most of the other code (about 70%) in
> nbd-runner are for the NBD proto and these are very different from
> tcmu-runner/glusterfs/gluster-block projects, and there are many new
> features in NBD module that not yet supported and then there will be more
> different in future.
>
> The framework coding has been done and the nbd-runner project is already
> stable and could already work well for me now.
>
> like:
> * rpc/socket code
> * cli/daemon parser/helper logics
> * gfapi util functions
> * logger framework
> * inotify & dyn-config threads
>
> Yeah, these features were initially from tcmu-runner project, Mike and I
> coded two years ago. Currently nbd-runner also has copied them from
> tcmu-runner.
>

I don't think tcmu-runner has any of:

-> cli/daemon approach routines
-> rpc low-level clnt/svc routines
-> gfapi level file create/delete util functions
-> Json parser support
-> socket bound/listener related functionalities
-> automake build framework, and
-> many other maintenance files

I can actually go into detail and furnish a long list of the references
made here, and you cannot deny the fact; but it is **all okay** to take
references from other similar projects. My intention was not to point out
the copying made here, but rather to say that we are just wasting our
effort rewriting, copy-pasting, maintaining, and fixing the same
functionality framework.

Again, all I'm trying to say is: if at all you want to maintain an NBD
client as part of gluster.org, why not use gluster-block itself, which is
well tested and stable enough?

Apart from all the examples I have mentioned in my previous thread, there
are

Re: [Gluster-devel] Network Block device (NBD) on top of glusterfs

2019-03-21 Thread Prasanna Kalever
On Thu, Mar 21, 2019 at 9:00 AM Xiubo Li  wrote:

> All,
>
> I am one of the contributor for gluster-block
> [1] project, and also I
> contribute to linux kernel and open-iscsi 
> project.[2]
>
> NBD was around for some time, but in recent time, linux kernel’s Network
> Block Device (NBD) is enhanced and made to work with more devices and also
> the option to integrate with netlink is added. So, I tried to provide a
> glusterfs client based NBD driver recently. Please refer github issue #633
> [3], and good news is I
> have a working code, with most basic things @ nbd-runner project
> [4].
>
> While this email is about announcing the project, and asking for more
> collaboration, I would also like to discuss more about the placement of the
> project itself. Currently nbd-runner project is expected to be shared by
> our friends at Ceph project too, to provide NBD driver for Ceph. I have
> personally worked with some of them closely while contributing to
> open-iSCSI project, and we would like to take this project to great success.
>
> Now few questions:
>
>1. Can I continue to use http://github.com/gluster/nbd-runner as home
>for this project, even if its shared by other filesystem projects?
>
>
>- I personally am fine with this.
>
>
>1. Should there be a separate organization for this repo?
>
>
>- While it may make sense in future, for now, I am not planning to
>start any new thing?
>
> It would be great if we have some consensus on this soon as nbd-runner is
> a new repository. If there are no concerns, I will continue to contribute
> to the existing repository.
>

Thanks Xiubo Li, for finally sending this email out. Since this email is
out on gluster mailing list, I would like to take a stand from gluster
community point of view *only* and share my views.

My honest answer is: if we want to maintain this within the gluster
org, then 80% of the effort is common with, or duplicates, what we have
done all these days in gluster-block,

like:
* rpc/socket code
* cli/daemon parser/helper logics
* gfapi util functions
* logger framework
* inotify & dyn-config threads
* configure/Makefile/specfiles
* docsAboutGluster and etc ..

The gluster-block repository is actually the home for all the block-related
stuff within Gluster, and it is designed to accommodate similar
functionality. If I were you, I would have simply copied nbd-runner.c into
https://github.com/gluster/gluster-block/tree/master/daemon/ just like Ceph
does it here
https://github.com/ceph/ceph/blob/master/src/tools/rbd_nbd/rbd-nbd.cc and
be done.

Advantages of keeping the NBD client within gluster-block:
-> No worry about the code-maintenance burden
-> No worry about monitoring a new component
-> Shipping packages to Fedora/CentOS/RHEL is already handled
-> It helps improve and stabilize the current gluster-block framework
-> We can build a common CI
-> We can reuse a common test framework, etc.

If you have the impression that gluster-block is only for management, then
I would really like to correct you at this point.

Some of my near-future plans for gluster-block:
* Allow exporting blocks with FUSE access via a fileIO backstore to improve
large-file workloads, draft:
https://github.com/gluster/gluster-block/pull/58
* Accommodate kernel loopback handling for local-only applications
* In the same way we can accommodate the NBD app/client, and IMHO this
effort shouldn't take more than a day or two to get merged into
gluster-block and be ready for a release.


Hope that clarifies it.


Best Regards,
--
Prasanna


> Regards,
> Xiubo Li (@lxbsz)
>
> [1] - https://github.com/gluster/gluster-block
> [2] - https://github.com/open-iscsi
> [3] - https://github.com/gluster/glusterfs/issues/633
> [4] - https://github.com/gluster/nbd-runner
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
___

[Gluster-devel] gluster-block Dev Stream Update

2018-10-08 Thread Prasanna Kalever
Hello Community!

Starting this week, we will be sending a (bi)weekly update about the
gluster-block development activities.

For someone who is catching up with gluster's block storage project
for the first time, here is everything that you need to know:
- https://github.com/gluster/gluster-block/blob/master/README.md

Dev update for the week:

What is done:
-> Dynamic config reloading feature
 - https://github.com/gluster/gluster-block/pull/88
-> CLI audit log feature
 - https://github.com/gluster/gluster-block/pull/83
-> Enforcing the minimum kernel version requirement on various distros
 - https://github.com/gluster/gluster-block/pull/119

What is in-progress:
-> Design of the various locking APIs required by the ALUA feature
 - https://github.com/gluster/glusterfs/issues/466
 - https://github.com/gluster/gluster-block/issues/53
-> Design of the block configuration self-heal feature
 - TODO: Will share the link in the next update
-> Get rid of huge buffer allocation for reading the configuration file
 - https://github.com/gluster/gluster-block/pull/123
-> Dump all failure msgs to stderr
 - https://github.com/gluster/gluster-block/pull/121
-> Support new glibc versions by adopting libtirpc for Fedora >= 28
 - https://github.com/gluster/gluster-block/pull/57

What is coming up-next:
-> v0.4 release of gluster-block, mainly waiting on
 - https://github.com/gluster/gluster-block/pull/57
-> Package update for Fedora 28
 - currently waiting on dependent projects package updates

How can one be part of gluster-block:
-> Share your experience with gluster-block:
 - More information about how to use/test it can be found via '# man
gluster-block' or by referring to basic.t
-> Report new issues or submit a pull request for existing issues:
 - https://github.com/gluster/gluster-block/issues


Cheers!
--
Gluster-block Team.
___


Re: [Gluster-devel] reflink support for glusterfs and gluster-block using it for taking snapshots

2017-11-06 Thread Prasanna Kalever
On Tue, Nov 7, 2017 at 7:43 AM, Pranith Kumar Karampuri
 wrote:
> hi,
>  I just created a github issue for reflink support(#349) in glusterfs.
> We are intending to use this feature to do block snapshots in gluster-block.
>
> Please let us know your comments on the github issue. I have added the
> changes we may need for xlators I know a little bit about. Please help in
> identifying gaps in implementing this FOP.

Pranith,

You might be interested in taking a look at an approach taken earlier
to support snapshots using the XFS reflink feature.

Patch with working code: https://review.gluster.org/#/c/13979/
Spec: https://github.com/gluster/glusterfs-specs/blob/master/under_review/reflink-based-fsnap.md

Note: this work is more than a year and a half old (April 2016), so
expect a rebase :-)
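For context, a reflink clone shares data extents between two files and copies blocks only on write, which is what makes it attractive for cheap snapshots. On an XFS filesystem created with reflink support, the primitive can be exercised from the shell (a sketch; the device and paths are placeholders):

```shell
# Make an XFS filesystem with reflink enabled (the default on modern xfsprogs)
mkfs.xfs -m reflink=1 /dev/sdX
mount /dev/sdX /mnt/test

# Clone a file: shares extents with the original, copy-on-write on modification
cp --reflink=always /mnt/test/block-image /mnt/test/block-image.snap
```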

Thanks,
--
Prasanna

>
> --
> Pranith
___


[Gluster-devel] gluster-block v0.3 is alive!

2017-10-16 Thread Prasanna Kalever
Hello Gluster folks,

We are happy to announce the release of gluster-block [1] v0.3. Please
find the highlights and notable fixes which went into this release at [2].

The packages are made available on Copr for Fedora users [3]. For
other distributions, one can easily compile it from source; details
about installation can be found in the easy install guide at [4]. The
source tarball and community-provided packages will be available soon
at [5].

The next release, v0.4, is planned to have some exciting features and
improvements. Here is a potential list of features for v0.4:
* A replace-node feature to substitute a faulty node
* Support for snapshots
* Ability to resize existing block devices
* Performance improvements (IO and management)
* Containerized gluster-block, and more!

gluster-block is now part of the Fedora package collection (f26); an
updated package with release version v0.3 will soon be made available.

Please report any issues that you observe using [6].

We look forward to your feedback to help us get better!


[1] https://github.com/gluster/gluster-block
[2] https://github.com/gluster/gluster-block/blob/master/NEWS
[3] https://copr.fedorainfracloud.org/coprs/pkalever/gluster-block/build/637643/
[4] https://github.com/gluster/gluster-block/blob/master/INSTALL
[5] https://download.gluster.org/pub/gluster/gluster-block/
[6] https://github.com/gluster/gluster-block/issues/new


Thanks!
___


Re: [Gluster-devel] some gluster-block fixes

2017-06-22 Thread Prasanna Kalever
On Thu, Jun 22, 2017 at 10:25 PM, Michael Adam  wrote:
> On 2017-06-22 at 16:26 +0200, Michael Adam wrote:
>> Hi all,
>>
>> I have created a few patches to gluster-block.
>> But am  a little bit at a loss how to create
>> gerrit review requests. Hence I have created
>> github PRs.
>>
>> https://github.com/gluster/gluster-block/pull/29
>> https://github.com/gluster/gluster-block/pull/30
>>
>> Prasanna, I hope you can convert those to gerrit
>> again... :-D
>
> Ok, thanks to Niels, I was able to move them to gerrit:
>
> https://review.gluster.org/#/c/17613/
> https://review.gluster.org/#/c/17614/
>
> Updated according to review commits on github.

Thanks Michael, will take a look.

--
Prasanna

>
> Cheers - Michael
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
___


[Gluster-devel] gluster-block v0.2.1 is alive!

2017-06-07 Thread Prasanna Kalever
Hello Gluster folks,

gluster-block [1] release 0.2.1 is tagged; this release is focused
mostly on bug fixing. All the documents are updated, and packages are
made available on Copr for Fedora users [2].

However, for other distros one can easily compile it from source; find
the install guide at [3].

The source tarball and community-provided packages will soon be made
available at [4].


Highlights:
-
* Implemented an LRU cache to hold glfs objects, which makes the CLI
commands run fast.
For example, on a single node the create command now takes ~1 sec,
while it took ~5 sec before.
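The idea behind that cache can be sketched in a few lines of Python (an illustration of the general LRU technique only, not the actual C implementation; `fake_connect` stands in for the expensive glfs object initialization):

```python
from collections import OrderedDict

class GlfsCache:
    """Tiny LRU cache: keeps recently used handles, evicts the oldest."""

    def __init__(self, capacity=5):
        self.capacity = capacity
        self.cache = OrderedDict()  # volume name -> handle

    def get(self, volume, connect):
        if volume in self.cache:
            self.cache.move_to_end(volume)  # mark as most recently used
            return self.cache[volume]
        handle = connect(volume)            # slow path: initialize a new handle
        self.cache[volume] = handle
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used
        return handle

# Demo: the second lookup of "vol1" is served from the cache.
created = []
def fake_connect(vol):
    created.append(vol)              # record every expensive initialization
    return f"handle:{vol}"

cache = GlfsCache(capacity=2)
cache.get("vol1", fake_connect)
cache.get("vol1", fake_connect)      # cache hit: no new handle created
cache.get("vol2", fake_connect)
cache.get("vol3", fake_connect)      # exceeds capacity: "vol1" is evicted
```

In the real daemon the cached objects are initialized glfs handles, which is why skipping re-initialization on repeated commands gives the reported speedup.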

* The log severity level is now configurable;
see the --log-level option of the daemon and '/etc/sysconfig/gluster-blockd'.


Other Notable Fixes:
---
* improved messages on failure
* fixed a heap buffer overflow
* prevent crashes when errMsg is not set
* print human-readable timestamps in log files
* improved logging on the server side
* handle SIGPIPE in the daemon
* update journal-data/block metadata synchronously
* reuse port 24006 (SO_REUSEADDR) on bind
* added a manual for gluster-blockd
* updated README
* and many more ...

Read more about gluster-block at [5].


Please report issues using [6].
Also do let us know your feedback and help us get better :-)

[1] https://github.com/gluster/gluster-block
[2] https://copr.fedorainfracloud.org/coprs/pkalever/gluster-block/build/562504/
[3] https://github.com/gluster/gluster-block/blob/master/INSTALL
[4] https://download.gluster.org/pub/gluster/gluster-block/
[5] https://github.com/gluster/gluster-block/blob/master/README.md
[6] https://github.com/gluster/gluster-block/issues/new


Thanks!


[Gluster-devel] gluster-block v0.2 is alive!

2017-05-05 Thread Prasanna Kalever
Hello Gluster folks,

gluster-block [1] is ready with its v0.2 tag; all documents are
updated, and packages are available on Copr for Fedora users [2]

For other distros, you can easily compile it from source; find the
install guide at [3]

The source tarball and community-provided packages will soon be made
available at [4]


Highlights of release:
---
One command for logging in to all gateways of a target (#9)
Add support for one-way authentication (#5)
Support JSON responses (#3)

Other notable fixes:
---
Increase the clnt_call() total timeout (#15)
Hint when the gluster-block daemon is not operational (#14)
Show the reason for command failure on a non-existent or not-started volume (#10)
Redirect configshell logs to GB_LOGDIR (#13)
Use rpcgen to generate all XDR code (#2)

Read more about gluster-block at [5]


Please report issues using [6].
Also, do let us know your feedback and help us get better :-)

[1] https://github.com/gluster/gluster-block
[2] https://copr.fedorainfracloud.org/coprs/pkalever/gluster-block/build/547645/
[3] https://github.com/gluster/gluster-block/blob/master/INSTALL
[4] https://download.gluster.org/pub/gluster/gluster-block/
[5] https://github.com/gluster/gluster-block/blob/master/README.md
[6] https://github.com/gluster/gluster-block/issues/new


Thanks!


[Gluster-devel] Elasticsearch with gluster-block

2017-03-16 Thread Prasanna Kalever
If you have missed our post on "Elasticsearch with gluster-block" in
social media feeds, then here is the nexus [1]

[1] https://pkalever.wordpress.com/2017/03/14/elasticsearch-with-gluster-block/


Cheers!
--
prasanna


Re: [Gluster-devel] Announcing gluster-block v0.1

2017-03-02 Thread Prasanna Kalever
Heads up!

Packages are made available at download.gluster.org [1]

[1] https://download.gluster.org/pub/gluster/gluster-block/


Cheers.

On Thu, Mar 2, 2017 at 11:47 PM, Prasanna Kalever <pkale...@redhat.com> wrote:
> gluster-block [1] is a block device management framework which aims at
> making gluster backed block storage creation and maintenance as simple
> as possible. With this release, gluster-block provisions block devices
> and exports them using iSCSI. Read about usage, examples and more at
> [2]
>
> The initial gluster-block is ready with its v0.1 tagging and the
> packages are available at copr for fedora users [3]
>
> However one can compile it from source, find the install guide at [4]
>
> The source tar file and community provided packages will be soon made
> available at download.gluster.org.
>
> We will be iterating and improve gluster-block with a release every
> 2-3 weeks. Please let us know your feedback and help us get better :-)
>
>
> [1] https://github.com/gluster/gluster-block
> [2] https://github.com/gluster/gluster-block/blob/master/README.md
> [3] 
> https://copr.fedorainfracloud.org/coprs/pkalever/gluster-block/build/520204/
> [4] https://github.com/gluster/gluster-block/blob/master/INSTALL
>
>
> Thanks!


[Gluster-devel] Announcing gluster-block v0.1

2017-03-02 Thread Prasanna Kalever
gluster-block [1] is a block device management framework that aims to
make gluster-backed block storage creation and maintenance as simple
as possible. With this release, gluster-block provisions block devices
and exports them using iSCSI. Read about usage, examples, and more at
[2]

The initial gluster-block release is tagged v0.1, and packages are
available on Copr for Fedora users [3]

You can also compile it from source; find the install guide at [4]

The source tarball and community-provided packages will soon be made
available at download.gluster.org.

We will iterate on and improve gluster-block with a release every
2-3 weeks. Please let us know your feedback and help us get better :-)


[1] https://github.com/gluster/gluster-block
[2] https://github.com/gluster/gluster-block/blob/master/README.md
[3] https://copr.fedorainfracloud.org/coprs/pkalever/gluster-block/build/520204/
[4] https://github.com/gluster/gluster-block/blob/master/INSTALL


Thanks!


[Gluster-devel] gluster-block design is out

2017-01-22 Thread Prasanna Kalever
Hello,

Here [1] is the gluster-block design document, please feel free to comment.


[1] 
https://docs.google.com/document/d/1psjLlCxdllq1IcJa3FN3MFLcQfXYf0L5amVkZMXd-D8/edit?usp=sharing


Thanks,
Prasanna


Re: [Gluster-devel] Release 3.10 feature proposal : Gluster Block Storage CLI Integration

2016-12-21 Thread Prasanna Kalever
[Top posting]

I agree with Niels and Shyam here. We are now trying to decouple the
gluster-block CLI from the gluster CLI.
Since it doesn't depend on core gluster changes anyway, I think it's
better to move it out. Also, I do not see a decent tool/util that does
these jobs, hence it's better to make it a separate project (maybe
gluster-block).
The design-side changes are still under discussion; I shall give an
update once we conclude on them.

Since gluster-block will be maintained as a separate project, I don't
think we still need to make it a 3.10 feature.
With gluster-block we aim to support all possible versions of gluster.

Thanks,
--
Prasanna


On Mon, Dec 19, 2016 at 5:10 PM, Shyam <srang...@redhat.com> wrote:
> On 12/14/2016 01:38 PM, Niels de Vos wrote:
>>
>> On Wed, Dec 14, 2016 at 12:40:53PM +0530, Prasanna Kumar Kalever wrote:
>>>
>>> On 16-12-14 07:43:05, Niels de Vos wrote:
>>>>
>>>> On Fri, Dec 09, 2016 at 11:28:52AM +0530, Prasanna Kalever wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> As we know gluster block storage creation and maintanace is not simple
>>>>> today, as it involves all the manual steps mentioned at [1]
>>>>> To make this basic operations simple we would like to integrate the
>>>>> block story with gluster CLI.
>>>>>
>>>>> As part of it, we would like Introduce the following commands
>>>>>
>>>>> # gluster block create 
>>>>> # gluster block modify   
>>>>> # gluster block list
>>>>> # gluster block delete 
>>>>
>>>>
>>>> I am not sure why this needs to be done through the Gluster CLI.
>>>> Creating a file on a (how to select?) volume, and then export that as a
>>>> block device through tcmu-runner (iSCSI) seems more like a task similar
>>>> to what libvirt does with VM images.
>>>
>>>
>>> May be not exactly, but similar
>>>
>>>>
>>>> Would it not be more suitable to make this part of whatever tcmu admin
>>>> tools are available? I assume tcmu needs to address this, with similar
>>>> configuration options for LVM and other backends too. Building on top of
>>>> that may give users of tcmu a better experience.
>>>
>>>
>>> s/tcmu/tcmu-runner/
>>>
>>> I don't think there are separate tools/utils for tcmu-runner as of now.
>>> Also currently we are using tcmu-runner to export the file in the
>>> gluster volume as a iSCSI block device, in the future we may move to
>>> qemu-tcmu (which does the same job of tcmu-runner, except it uses
>>> qemu gluster driver) for benefits like snapshots ?
>>
>>
>> One of the main objections I have, is that the CLI is currently very
>> 'dumb'. Integrating with it to have it generate the tcmu-configuration
>> as well as let the (current management only!) CLI create the disk-images
>> on a volume seems to break the current separation of tasks. Integrations
>> are good to have, but they should be done on the appropriate level.
>>
>> Teaching the CLI all it needs to know about tcmu-runner, including
>> setting suitable permissions on the disk-image on a volume, access
>> permissions for the iSCSI protocol and possibly more seems quite a lot
>> of effort to me. I prefer to keep the CLI as simple as possible, and any
>> integration should use the low-level tools (CLI, gfapi, ...) that are
>> available.
>
>
> +1, I agree. This seems more like a task for a tool using gfapi in parts for
> the file creation and other CLI/deploy options for managing tcmu-runner. The
> latter a more tcmu project, or gluster-block as the abstraction if we want
> to gain eyeballs into the support.
>
>
>>
>> When we integrate tcmu-runner now, people will hopefully use it. That
>> means it can not easily be replaced by an other project. qemu-tcmu would
>> be an addition to the tcmu-integration, leaving a huge maintenance
>> burden.
>>
>> I have a strong preference to see any integrations done on a higher
>> level. If there are no tcmu-runner tools (like targetcli?) to configure
>> iSCSI backends and other options, it may make sense to start a new
>> project dedicated to iSCSI access for Gluster. If no suitable projects
>> exist, a gluster-block-utils project can be created. Management
>> utilities also benefit from being written in languages other than C, a
>> new project offers you many options there ;-)
>>
>>> Also configuring and running tcmu-runner on each node in the c

Re: [Gluster-devel] Release 3.10 feature proposal : Gluster Block Storage CLI Integration

2016-12-13 Thread Prasanna Kalever
On Mon, Dec 12, 2016 at 11:57 PM, Shyam <srang...@redhat.com> wrote:
> Prasanna,
>
> When can the design be ready for review? I ask this as feature completion
> for 3.10 is slated around 17th Jan, 2017.

Shyam, I am currently working on a libvirt bug in its gluster driver.
Hopefully I will wind that up by the end of the week, and will be able
to update the design doc sometime early next week.

>
> Based on the above, it would be good to close the design reviews by end of
> Dec (or very early Jan), so that we just deal with code later.

Sure. I'll definitely align to the plan.

Thanks,
--
Prasanna

>
> Let us know your plans, and what help you may need.
>
> Thanks,
> Shyam
>
>
> On 12/09/2016 03:28 AM, Prasanna Kalever wrote:
>>
>> On Fri, Dec 9, 2016 at 1:28 PM, Atin Mukherjee <amukh...@redhat.com>
>> wrote:
>>>
>>> I'd like to see more details around the feature page about it. Currently
>>> it
>>> just merely talks about the CLI semantics and nothing else.
>>
>>
>> Sorry, but please do not expect much about the design details now, as
>> it is still not done with design phase. Once the block storage team
>> sticks on the design, one of us will definitely update all the
>> required information.
>>
>> --
>> Prasanna
>>
>>>
>>> On Fri, Dec 9, 2016 at 12:38 PM, Prasanna Kalever <pkale...@redhat.com>
>>> wrote:
>>>>
>>>>
>>>> Feature Page at https://github.com/gluster/glusterfs-specs/pull/10
>>>>
>>>> --
>>>> Prasanna
>>>>
>>>> On Fri, Dec 9, 2016 at 11:28 AM, Prasanna Kalever <pkale...@redhat.com>
>>>> wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> As we know gluster block storage creation and maintanace is not simple
>>>>> today, as it involves all the manual steps mentioned at [1]
>>>>> To make this basic operations simple we would like to integrate the
>>>>> block story with gluster CLI.
>>>>>
>>>>> As part of it, we would like Introduce the following commands
>>>>>
>>>>> # gluster block create 
>>>>> # gluster block modify   
>>>>> # gluster block list
>>>>> # gluster block delete 
>>>>>
>>>>>
>>>>> [1]
>>>>>
>>>>> https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
>>>>>
>>>>>
>>>>> Thanks,
>>>>> --
>>>>> Prasanna
>>>>
>>>> ___
>>>> Gluster-devel mailing list
>>>> Gluster-devel@gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>> ~ Atin (atinm)
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>


Re: [Gluster-devel] Release 3.10 feature proposal : Gluster Block Storage CLI Integration

2016-12-09 Thread Prasanna Kalever
On Fri, Dec 9, 2016 at 1:28 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
> I'd like to see more details around the feature page about it. Currently it
> just merely talks about the CLI semantics and nothing else.

Sorry, but please do not expect many design details now, as the
design phase is not yet complete. Once the block storage team settles
on the design, one of us will definitely update all the required
information.

--
Prasanna

>
> On Fri, Dec 9, 2016 at 12:38 PM, Prasanna Kalever <pkale...@redhat.com>
> wrote:
>>
>> Feature Page at https://github.com/gluster/glusterfs-specs/pull/10
>>
>> --
>> Prasanna
>>
>> On Fri, Dec 9, 2016 at 11:28 AM, Prasanna Kalever <pkale...@redhat.com>
>> wrote:
>> > Hi all,
>> >
>> > As we know gluster block storage creation and maintanace is not simple
>> > today, as it involves all the manual steps mentioned at [1]
>> > To make this basic operations simple we would like to integrate the
>> > block story with gluster CLI.
>> >
>> > As part of it, we would like Introduce the following commands
>> >
>> > # gluster block create 
>> > # gluster block modify   
>> > # gluster block list
>> > # gluster block delete 
>> >
>> >
>> > [1]
>> > https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
>> >
>> >
>> > Thanks,
>> > --
>> > Prasanna
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
>
>
> --
>
> ~ Atin (atinm)


Re: [Gluster-devel] Release 3.10 feature proposal : Gluster Block Storage CLI Integration

2016-12-08 Thread Prasanna Kalever
Feature Page at https://github.com/gluster/glusterfs-specs/pull/10

--
Prasanna

On Fri, Dec 9, 2016 at 11:28 AM, Prasanna Kalever <pkale...@redhat.com> wrote:
> Hi all,
>
> As we know gluster block storage creation and maintanace is not simple
> today, as it involves all the manual steps mentioned at [1]
> To make this basic operations simple we would like to integrate the
> block story with gluster CLI.
>
> As part of it, we would like Introduce the following commands
>
> # gluster block create 
> # gluster block modify   
> # gluster block list
> # gluster block delete 
>
>
> [1]  
> https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
>
>
> Thanks,
> --
> Prasanna


[Gluster-devel] Release 3.10 feature proposal : Gluster Block Storage CLI Integration

2016-12-08 Thread Prasanna Kalever
Hi all,

As we know, gluster block storage creation and maintenance are not
simple today, as they involve all the manual steps mentioned at [1].
To make these basic operations simple, we would like to integrate the
block story with the gluster CLI.

As part of this, we would like to introduce the following commands:

# gluster block create 
# gluster block modify   
# gluster block list
# gluster block delete 


[1]  
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/


Thanks,
--
Prasanna


Re: [Gluster-devel] GlusterFs upstream bugzilla components Fine graining

2016-09-30 Thread Prasanna Kalever
On Fri, Sep 30, 2016 at 3:16 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Wed, Sep 28, 2016 at 10:09:34PM +0530, Prasanna Kalever wrote:
>> On Wed, Sep 28, 2016 at 11:24 AM, Muthu Vigneshwaran
>> <mvign...@redhat.com> wrote:
>> >
>> > Hi,
>> >
>> > This an update to the previous mail about Fine graining of the
>> > GlusterFS upstream bugzilla components.
>> >
>> > Finally we have come out a new structure that would help in easy
>> > access of the bug for reporter and assignee too.
>> >
>> > In the new structure we have decided to remove components that are
>> > listed as below -
>> >
>> > - BDB
>> > - HDFS
>> > - booster
>> > - coreutils
>> > - gluster-hdoop
>> > - gluster-hadoop-install
>> > - libglusterfsclient
>> > - map
>> > - path-converter
>> > - protect
>> > - qemu-block
>>
>> Well, we are working on bringing qemu-block xlator to alive again.
>> This is needed in achieving qcow2 based internal snapshots for/in the
>> gluster block store.
>
> We can keep this as a subcomponent for now.

What should be the main component in this case?

>
>> Take a look at  http://review.gluster.org/#/c/15588/  and dependent patches.
>
> Although we can take qemu-block back, we need a plan to address the
> copied qemu sources to handle the qcow2 format. Reducing the bundled
> sources (in contrib/) is important. Do you have a feature page in the
> glusterfs-specs repository that explains the usability of qemu-block? I
> have not seen a discussion on gluster-devel about this yet either,
> otherwise I would have replied there...

Yeah, I have already refreshed part of the code (locally). The
current code is quite old (2013) and misses the compat 1.1 (qcow2v3)
features and more. We are cross-checking the merits of using this
in the block store. Once we are in a state to say yes/continue with
this approach, I'm glad to take the initiative in refreshing the
complete source and flushing out the unused bundled code.

Well, I do not know of any qcow libraries other than [1], and I don't
think we have the option of keeping this outside the repo tree.

And I currently don't have a feature page; I will update it after the
summit time frame, and will make a note to post the complete details
to the devel mailing list.

>
> Nobody used this before, and I wonder if we should not design and
> develop a standard file-snapshot functionality that is not dependent on
> qcow2 format.

IMO, that will take another year or more to bring into block-store use.


[1] https://github.com/libyal/libqcow

--
Prasanna

>
> Niels


Re: [Gluster-devel] GlusterFs upstream bugzilla components Fine graining

2016-09-30 Thread Prasanna Kalever
On Wed, Sep 28, 2016 at 11:24 AM, Muthu Vigneshwaran
 wrote:
>
> Hi,
>
> This an update to the previous mail about Fine graining of the
> GlusterFS upstream bugzilla components.
>
> Finally we have come out a new structure that would help in easy
> access of the bug for reporter and assignee too.
>
> In the new structure we have decided to remove components that are
> listed as below -
>
> - BDB
> - HDFS
> - booster
> - coreutils
> - gluster-hdoop
> - gluster-hadoop-install
> - libglusterfsclient
> - map
> - path-converter
> - protect
> - qemu-block

Well, we are working on bringing the qemu-block xlator back to life.
This is needed to achieve qcow2-based internal snapshots for/in the
gluster block store.

Take a look at  http://review.gluster.org/#/c/15588/  and dependent patches.

--
Prasanna

[...]
> Thanks and regards,
>
> Muthu Vigneshwaran & Niels de vos
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-19 Thread Prasanna Kalever
On Mon, Sep 19, 2016 at 4:09 PM, Niels de Vos <nde...@redhat.com> wrote:
>
> On Mon, Sep 19, 2016 at 03:34:29PM +0530, Prasanna Kalever wrote:
> > On Mon, Sep 19, 2016 at 10:13 AM, Niels de Vos <nde...@redhat.com> wrote:
> > > On Tue, Sep 13, 2016 at 12:06:00PM -0400, Luis Pabón wrote:
> > >> Very good points.  Thanks Prasanna for putting this together.  I agree 
> > >> with
> > >> your comments in that Heketi is the high level abstraction API and it 
> > >> should have
> > >> an API similar of what is described by Prasanna.
> > >>
> > >> I definitely do not think any File Api should be available in Heketi,
> > >> because that is an implementation of the Block API.  The Heketi API 
> > >> should
> > >> be similar to something like OpenStack Cinder.
> > >>
> > >> I think that the actual management of the Volumes used for Block storage
> > >> and the files in them should be all managed by Heketi.  How they are
> > >> actually created is still to be determined, but we could have Heketi
> > >> create them, or have helper programs do that.
> > >
> > > Maybe a tool like qemu-img? If whatever iscsi service understand the
> > > format (at the very least 'raw'), you could get functionality like
> > > snapshots pretty simple.
> >
> > Niels,
> >
> > This is brilliant and subset of the Idea falls in one among my
> > thoughts, only concern is about building dependencies of qemu with
> > Heketi.
> > But at an advantage of easy and cool snapshots solution.
>
> And well tested as I understand that oVirt is moving to use qemu-img as
> well. Other tools are able to use the qcow2 format, maybe the iscsi
> servce that gets used does so too.
>
> Has there already been a decision on what Heketi will configure as iscsi
> service? I am aware of the tgt [1] and LIO/TCMU [2] projects.

Niels,

Yes, we will be using TCMU (kernel module) and tcmu-runner (user-space
service) to expose a file in a Gluster volume as an iSCSI target.
More at [1], [2] & [3]

[1] 
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
[2] 
https://pkalever.wordpress.com/2016/06/29/non-shared-persistent-gluster-storage-with-kubernetes/
[3] 
https://pkalever.wordpress.com/2016/08/16/read-write-once-persistent-storage-for-openshift-origin-using-gluster/

--
Prasanna

>
> Niels
>
> 1. http://stgt.sourceforge.net/
> 2. https://github.com/open-iscsi/tcmu-runner
>http://blog.gluster.org/2016/04/using-lio-with-gluster/
>
> >
> > --
> > Prasanna
> >
> > >
> > > Niels
> > >
> > >
> > >> We also need to document the exact workflow to enable a file in
> > >> a Gluster volume to be exposed as a block device.  This will help
> > >> determine where the creation of the file could take place.
> > >>
> > >> We can capture our decisions from these discussions in the
> > >> following page:
> > >>
> > >> https://github.com/heketi/heketi/wiki/Proposed-Changes
> > >>
> > >> - Luis
> > >>
> > >>
> > >> - Original Message -
> > >> From: "Humble Chirammal" <hchir...@redhat.com>
> > >> To: "Raghavendra Talur" <rta...@redhat.com>
> > >> Cc: "Prasanna Kalever" <pkale...@redhat.com>, "gluster-devel" 
> > >> <gluster-devel@gluster.org>, "Stephen Watt" <sw...@redhat.com>, "Luis 
> > >> Pabon" <lpa...@redhat.com>, "Michael Adam" <ma...@redhat.com>, 
> > >> "Ramakrishna Yekulla" <rre...@redhat.com>, "Mohamed Ashiq Liyazudeen" 
> > >> <mliya...@redhat.com>
> > >> Sent: Tuesday, September 13, 2016 2:23:39 AM
> > >> Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
> > >> discussion
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> - Original Message -
> > >> | From: "Raghavendra Talur" <rta...@redhat.com>
> > >> | To: "Prasanna Kalever" <pkale...@redhat.com>
> > >> | Cc: "gluster-devel" <gluster-devel@gluster.org>, "Stephen Watt" 
> > >> <sw...@redhat.com>, "Luis Pabon" <lpa...@redhat.com>,
> > >> | "Michael Adam" <ma...@red

Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-19 Thread Prasanna Kalever
On Mon, Sep 19, 2016 at 10:13 AM, Niels de Vos <nde...@redhat.com> wrote:
> On Tue, Sep 13, 2016 at 12:06:00PM -0400, Luis Pabón wrote:
>> Very good points.  Thanks Prasanna for putting this together.  I agree with
>> your comments in that Heketi is the high level abstraction API and it should 
>> have
>> an API similar of what is described by Prasanna.
>>
>> I definitely do not think any File Api should be available in Heketi,
>> because that is an implementation of the Block API.  The Heketi API should
>> be similar to something like OpenStack Cinder.
>>
>> I think that the actual management of the Volumes used for Block storage
>> and the files in them should be all managed by Heketi.  How they are
>> actually created is still to be determined, but we could have Heketi
>> create them, or have helper programs do that.
>
> Maybe a tool like qemu-img? If whatever iscsi service understand the
> format (at the very least 'raw'), you could get functionality like
> snapshots pretty simple.

Niels,

This is brilliant, and a subset of the idea matches one of my own
thoughts; my only concern is about building a dependency of qemu into
Heketi.
But it comes with the advantage of an easy and neat snapshot solution.

--
Prasanna

>
> Niels
>
>
>> We also need to document the exact workflow to enable a file in
>> a Gluster volume to be exposed as a block device.  This will help
>> determine where the creation of the file could take place.
>>
>> We can capture our decisions from these discussions in the
>> following page:
>>
>> https://github.com/heketi/heketi/wiki/Proposed-Changes
>>
>> - Luis
>>
>>
>> - Original Message -
>> From: "Humble Chirammal" <hchir...@redhat.com>
>> To: "Raghavendra Talur" <rta...@redhat.com>
>> Cc: "Prasanna Kalever" <pkale...@redhat.com>, "gluster-devel" 
>> <gluster-devel@gluster.org>, "Stephen Watt" <sw...@redhat.com>, "Luis Pabon" 
>> <lpa...@redhat.com>, "Michael Adam" <ma...@redhat.com>, "Ramakrishna 
>> Yekulla" <rre...@redhat.com>, "Mohamed Ashiq Liyazudeen" 
>> <mliya...@redhat.com>
>> Sent: Tuesday, September 13, 2016 2:23:39 AM
>> Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
>> discussion
>>
>>
>>
>>
>>
>> - Original Message -
>> | From: "Raghavendra Talur" <rta...@redhat.com>
>> | To: "Prasanna Kalever" <pkale...@redhat.com>
>> | Cc: "gluster-devel" <gluster-devel@gluster.org>, "Stephen Watt" 
>> <sw...@redhat.com>, "Luis Pabon" <lpa...@redhat.com>,
>> | "Michael Adam" <ma...@redhat.com>, "Humble Chirammal" 
>> <hchir...@redhat.com>, "Ramakrishna Yekulla"
>> | <rre...@redhat.com>, "Mohamed Ashiq Liyazudeen" <mliya...@redhat.com>
>> | Sent: Tuesday, September 13, 2016 11:08:44 AM
>> | Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
>> discussion
>> |
>> | On Mon, Sep 12, 2016 at 11:30 PM, Prasanna Kalever <pkale...@redhat.com>
>> | wrote:
>> |
>> | > Hi all,
>> | >
>> | > This mail is open for discussion on gluster block store integration with
>> | > heketi and its REST API interface design constraints.
>> | >
>> | >
>> | >  ___ Volume Request ...
>> | > |
>> | > |
>> | > PVC claim -> Heketi --->|
>> | > |
>> | > |
>> | > |
>> | > |
>> | > |__ BlockCreate
>> | > |   |
>> | > |   |__ BlockInfo
>> | > |   |
>> | > |___ Block Request (APIS)-> |__ BlockResize
>> | > |
>> | > |__ BlockList
>> | > |
>> | > |__ BlockDelete
>> | >
>> | > Heketi will have block API and volume API, when user submit a Persistent
>> | > volume claim, Kubernetes provisioner based on the 

Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-19 Thread Prasanna Kalever
On Mon, Sep 19, 2016 at 9:05 AM, Luis Pabón <lpa...@redhat.com> wrote:
> Hi Prasanna,
>   I started the wiki page with the documentation on the API.  There
> still needs to be more information added, and we still need to work
> on the workflow, but at least it is a start.
>
> Please take a look at the wiki:
>
> https://github.com/heketi/heketi/wiki/Proposed-API:-Block-Storage

This is cool, Luis; we will append the workflow and other needed info
there based on our future discussions.

Thanks for laying this out so well; you made it simple and easy to
append info to the doc.

--
Prasanna

>
> - Luis
>
> - Original Message -
> From: "Luis Pabón" <lpa...@redhat.com>
> To: "Humble Chirammal" <hchir...@redhat.com>
> Cc: "gluster-devel" <gluster-devel@gluster.org>, "Stephen Watt" 
> <sw...@redhat.com>, "Ramakrishna Yekulla" <rre...@redhat.com>
> Sent: Tuesday, September 13, 2016 12:06:00 PM
> Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
> discussion
>
> Very good points.  Thanks Prasanna for putting this together.  I agree with
> your comments in that Heketi is the high level abstraction API and it should 
> have
> an API similar of what is described by Prasanna.
>
> I definitely do not think any File Api should be available in Heketi,
> because that is an implementation of the Block API.  The Heketi API should
> be similar to something like OpenStack Cinder.
>
> I think that the actual management of the Volumes used for Block storage
> and the files in them should be all managed by Heketi.  How they are
> actually created is still to be determined, but we could have Heketi
> create them, or have helper programs do that.
>
> We also need to document the exact workflow to enable a file in
> a Gluster volume to be exposed as a block device.  This will help
> determine where the creation of the file could take place.
>
> We can capture our decisions from these discussions in the
> following page:
>
> https://github.com/heketi/heketi/wiki/Proposed-Changes
>
> - Luis
>
>
> - Original Message -
> From: "Humble Chirammal" <hchir...@redhat.com>
> To: "Raghavendra Talur" <rta...@redhat.com>
> Cc: "Prasanna Kalever" <pkale...@redhat.com>, "gluster-devel" 
> <gluster-devel@gluster.org>, "Stephen Watt" <sw...@redhat.com>, "Luis Pabon" 
> <lpa...@redhat.com>, "Michael Adam" <ma...@redhat.com>, "Ramakrishna Yekulla" 
> <rre...@redhat.com>, "Mohamed Ashiq Liyazudeen" <mliya...@redhat.com>
> Sent: Tuesday, September 13, 2016 2:23:39 AM
> Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
> discussion
>
>
>
>
>
> - Original Message -
> | From: "Raghavendra Talur" <rta...@redhat.com>
> | To: "Prasanna Kalever" <pkale...@redhat.com>
> | Cc: "gluster-devel" <gluster-devel@gluster.org>, "Stephen Watt" 
> <sw...@redhat.com>, "Luis Pabon" <lpa...@redhat.com>,
> | "Michael Adam" <ma...@redhat.com>, "Humble Chirammal" 
> <hchir...@redhat.com>, "Ramakrishna Yekulla"
> | <rre...@redhat.com>, "Mohamed Ashiq Liyazudeen" <mliya...@redhat.com>
> | Sent: Tuesday, September 13, 2016 11:08:44 AM
> | Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
> discussion
> |
> | On Mon, Sep 12, 2016 at 11:30 PM, Prasanna Kalever <pkale...@redhat.com>
> | wrote:
> |
> | > Hi all,
> | >
> | > This mail is open for discussion on gluster block store integration with
> | > heketi and its REST API interface design constraints.
> | >
> | >
> | >  ___ Volume Request ...
> | > |
> | > |
> | > PVC claim -> Heketi --->|
> | > |
> | > |
> | > |
> | > |
> | > |__ BlockCreate
> | > |   |
> | > |   |__ BlockInfo
> | > |   |
> | > |___ Block Request (APIS)-> |__ BlockResize
> | > |
> | > |__ BlockList
> | >

Re: [Gluster-devel] Checklist for QEMU integration for upstream release

2016-09-06 Thread Prasanna Kalever
On Mon, Sep 5, 2016 at 9:27 PM, Niels de Vos  wrote:
> On Sat, Sep 03, 2016 at 01:04:37AM +0530, Pranith Kumar Karampuri wrote:
>> hi Bharata,
>>What tests are run before the release of glusterfs so that we make
>> sure this integration is stable? Could you add that information here so
>> that I can update it at
>> https://public.pad.fsfe.org/p/gluster-component-release-checklist
>
> I normally run some qemu-img commands to create/copy/... VM-images. When
> I have sufficient time, I start a VM based on a gluster:// URL on the
> commandline (through libvirt XML files), similar to this:
>   http://blog.nixpanic.net/2013/11/initial-work-on-gluster-integration.html

Certainly this is a good way of testing, but unfortunately it is not enough.

With the recent changes to support multiple volfile servers in QEMU, I feel
we need more tests in that area (e.g. switching volfile servers both at
initial client selection time and at run-time).

Niels,

Why don't we add some testcases/scripts for this?
I shall create a repository for this in my free time, and we can keep
adding test cases there to be run once per release. (Let me
know if you are in favor of adding them to the gluster repo itself.)

And I also feel we should be responsible for some checks of libvirt
compatibility; testing with virsh commands would be super cool.

>
> In case Bharata is not actively working (or interested) in QEMU and its
> Gluster driver, Prasanna and I should probably replace or get added in
> the MAINTAINERS file, both of us get requests from the QEMU maintainers
> directly.

I am happy to take this responsibility.

Thanks,
--
Prasanna

>
> Niels
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] mainline compilation fails

2016-08-27 Thread Prasanna Kalever
Oops!
Didn't notice these changes were part of parent/child patches; I just
noticed "BUILD BROKEN" and went into action :)

I'm not sure about it!

If it takes time to decide whether the other set of patches needs to
be taken or not, at least my patch will fix the broken build (that
much I can assure).

Let's see if the regressions break after this patch goes in (mostly
not, from what I see in the code).

Thanks,
--
Prasanna


On Sat, Aug 27, 2016 at 8:34 PM, Atin Mukherjee
<atin.mukherje...@gmail.com> wrote:
>
>
> On Saturday 27 August 2016, Prasanna Kalever <pkale...@redhat.com> wrote:
>>
>> Here is the patch that should fix it
>> http://review.gluster.org/#/c/15331/
>
>
> Thanks! Well, that's an easy way, but the question here is don't we need the
> parent patch to be merged to ensure there is no other functionality broken?
> Currently I see that the parent patch has a -1; in that case is it required
> to revert 15225?
>>
>>
>> Happy weekend!
>>
>> --
>> Prasanna
>>
>>
>> On Sat, Aug 27, 2016 at 7:49 PM, Atin Mukherjee <amukh...@redhat.com>
>> wrote:
>> > [1] has broken mainline compilation, and I feel this could be because its
>> > parent patch has not been merged; otherwise smoke should have caught it.
>> > Please resolve it at the earliest.
>> >
>> > [1] http://review.gluster.org/#/c/15225/
>> >
>> >
>> > --Atin
>> >
>> > ___
>> > Gluster-devel mailing list
>> > Gluster-devel@gluster.org
>> > http://www.gluster.org/mailman/listinfo/gluster-devel
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
>
> --
> --Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] mainline compilation fails

2016-08-27 Thread Prasanna Kalever
Here is the patch that should fix it
http://review.gluster.org/#/c/15331/

Happy weekend!

--
Prasanna


On Sat, Aug 27, 2016 at 7:49 PM, Atin Mukherjee  wrote:
> [1] has broken mainline compilation, and I feel this could be because its
> parent patch has not been merged; otherwise smoke should have caught it.
> Please resolve it at the earliest.
>
> [1] http://review.gluster.org/#/c/15225/
>
>
> --Atin
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] CFP for Gluster Developer Summit

2016-08-16 Thread Prasanna Kalever
Hey All,

Here is my topic to present at the Gluster summit

Abstract:

Title: GLUSTER AS BLOCK STORE IN CONTAINERS

As we all know, containers are stateless entities used to deploy
applications, and hence they need persistent storage to keep
application data available across container incarnations.

Persistent storage in containers is of two types: shared and non-shared.

Shared storage:
Consider this as a volume/store where multiple Containers perform both
read and write operations on the same data. Useful for applications
like web servers that need to serve the same data from multiple
container instances.

Non Shared Storage:
Only a single container can perform write operations to this store at
a given time.

This presentation intends to show/discuss how Gluster plays a role as a
non-shared block store in containers.
It introduces the background terminology (LIO, iSCSI,
tcmu-runner, targetcli) and explains the solution achieving 'block
store in containers using Gluster', followed by a demo.

The demo will showcase a basic Gluster setup (which could be elaborated,
based on the audience), then show nodes initiating the iSCSI session,
attaching the iSCSI target as a block device, and serving it to containers
where the application is running and requires persistent storage.

It will show working demos of its integration with
* Docker
* Kubernetes
* OpenShift
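The initiator-side flow described above can be outlined as a short plan (purely illustrative; the portal, IQN, device, and application names below are made-up placeholders, and the real demo drives targetcli on the server side and open-iscsi on the initiator side):

```python
# Hypothetical outline of the initiator-side steps: discover the target,
# log in, format/mount the new block device, and hand it to a container.
# All names (portal, IQN, device, app) are illustrative assumptions.
DEMO_FLOW = [
    "iscsiadm -m discovery -t sendtargets -p gluster-node1:3260",
    "iscsiadm -m node -T iqn.2016-08.org.gluster:block0 --login",
    "mkfs.xfs /dev/sdb && mount /dev/sdb /mnt/block",
    "docker run -v /mnt/block:/data some-app",
]
for step in DEMO_FLOW:
    print(step)
```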

The intention of this presentation is to get feedback from people who
use similar solutions, and also to learn about potential risks for a
better defence.
While discussing the TODOs (access locking, encryption, snapshots,
etc.) we could gather some education around them.


Cheers,
--
Prasanna


On Tue, Aug 16, 2016 at 7:23 PM, Kaushal M  wrote:
> Okay. Here's another proposal from me.
>
> # GlusterFS Release process
> An overview of the GlusterFS release process
>
> The GlusterFS release process has been recently updated and been
> documented for the first time. In this presentation, I'll be giving an
> overview the whole release process including release types, release
> schedules, patch acceptance criteria and the release procedure.
>
> Kaushal
> kshlms...@gmail.com
> Process & Infrastructure
>
> On Mon, Aug 15, 2016 at 5:30 AM, Amye Scavarda  wrote:
>> Kaushal,
>>
>> That's probably best. We'll be able to track similar proposals here.
>> - amye
>>
>> On Sat, Aug 13, 2016 at 6:30 PM, Kaushal M  wrote:
>>>
>>> How do we submit proposals now? Do we just reply here?
>>>
>>>
>>> On 13 Aug 2016 03:49, "Amye Scavarda"  wrote:
>>>
>>> GlusterFS for Users
>>> "GlusterFS for users" introduces you to GlusterFS, its terminology,
>>> its features, and how to manage your GlusterFS cluster.
>>>
>>> GlusterFS is a scalable network filesystem. Using commodity hardware, you
>>> can create large, distributed storage solutions for media streaming, data
>>> analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
>>> and open source software.
>>>
>>> This session is more intended for users/admins.
>>> Scope of this session :
>>>
>>> * What is Glusterfs
>>> * Glusterfs terminologies
>>> * Easy steps to get started with glusterfs
>>> * Volume topologies
>>> * Access protocols
>>> * Various features from user perspective :
>>> Replication, Data distribution, Geo-replication, Bit rot detection,
>>> data tiering,  Snapshot, Encryption, containerized glusterfs
>>> * Various configuration files
>>> * Various logs and their locations
>>> * Various custom profiles for specific use-cases
>>> * Collecting a statedump and its usage
>>> * Few common problems like :
>>>1) replacing a faulty brick
>>>2) resolving split-brain
>>>3) peer disconnect issue
>>>
>>> Bipin Kunal
>>> bku...@redhat.com
>>> User Perspectives
>>>
>>> On Fri, Aug 12, 2016 at 3:18 PM, Amye Scavarda  wrote:

 Demo : Quickly set up a GlusterFS cluster
 This demo will let you understand how to set up a GlusterFS cluster and how
 to exploit its features.

 GlusterFS is a scalable network filesystem. Using commodity hardware, you
 can create large, distributed storage solutions for media streaming, data
 analysis, and other data and bandwidth-intensive tasks. GlusterFS is free
 and open source software.

 This demo is intended for a new user who is willing to set up a GlusterFS
 cluster.

 Scope of this session :

 1) Install GlusterFS packages
 2) Create a trusted storage pool
 3) Create a GlusterFS volume
 4) Access GlusterFS volume using various protocols
a) FUSE b) NFS c) CIFS d) NFS-ganesha
 5) Using Snapshot
 6) Creating geo-rep session
 7) Adding/removing/replacing bricks
 8) Bit-rot detection and correction

 Bipin Kunal
 bku...@redhat.com
 User Perspectives

 On Fri, Aug 

Re: [Gluster-devel] Suggestions for improving the block/gluster driver in QEMU

2016-07-28 Thread Prasanna Kalever
On Thu, Jul 28, 2016 at 4:13 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Thu, Jul 28, 2016 at 03:51:11PM +0530, Prasanna Kalever wrote:
>> On Thu, Jul 28, 2016 at 3:32 PM, Niels de Vos <nde...@redhat.com> wrote:
>> > There are some features in QEMU that we could implement with the
>> > existing libgfapi functions. Kevin asked me about this a while back, and
>> > I have finally (sorry for the delay Kevin!) taken the time to look into
>> > it.
>> >
>> > There are some optional operations that can be set in the BlockDriver
>> > structure. The ones missing that we could have, or have useless
>> > implementations are these:
>> >
>> >   .bdrv_get_info/.bdrv_refresh_limits:
>> > This seems to set values in a BlockDriverInfo and BlockLimits
>> > structure that is used by QEMUs block layer. By setting the right
>> > values, we can use glfs_discard() and glfs_zerofill() to reduce the
>> > writing of 0-bytes that QEMU falls back on at the moment.
>>
>> Hey Niels and Kevin,
>>
>> In one of our discussions Jeff showed his interest in knowing about
>> discard support in gluster upstream.
>> I think his intention was the same here.
>>
>> >
>> >   .bdrv_has_zero_init / qemu_gluster_has_zero_init:
>> > Currently always returns 0. But if a file gets created on a Gluster
>> > volume, it should never have old contents in it. Rewriting it with
>> > 0-bytes looks unneeded to me.
>>
>> I agree
>>
>> >
>> > With these improvements the gluster:// URL usage with QEMU (and now also
>> > the new JSON QAPI), certain operations are expected to be a little
>> > faster. Anyone starting to work on this would want to trace the actual
>> > operations (on a single-brick volume) with ltrace/wireshark on the
>> > system where QEMU runs.
>> >
>> > Who is interested to take this on?
>>
>> Of course I am very much interested to do this work :)
>>
>> But please expect at least a week or two before I can start on this,
>> as currently my plate is filled with block store tasks.
>>
>> Hopefully this is meant for 2.8 (as 2.7 is in hard-freeze), so I think
>> the delay should be acceptable.
>
> Thanks! There are no strict timelines for any of the community work. It
> all depends on what your manager(s) want to see in future productized
> versions. At the moment, and for all I know, this is just an improvement
> that we should do at one point.

Yeah, right!

Since we are okay with the timelines, I shall get this done in time for
qemu-2.8.

Thanks for bringing this to our notice :)

--
Prasanna

>
> Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Suggestions for improving the block/gluster driver in QEMU

2016-07-28 Thread Prasanna Kalever
On Thu, Jul 28, 2016 at 3:32 PM, Niels de Vos  wrote:
> There are some features in QEMU that we could implement with the
> existing libgfapi functions. Kevin asked me about this a while back, and
> I have finally (sorry for the delay Kevin!) taken the time to look into
> it.
>
> There are some optional operations that can be set in the BlockDriver
> structure. The ones missing that we could have, or have useless
> implementations are these:
>
>   .bdrv_get_info/.bdrv_refresh_limits:
> This seems to set values in a BlockDriverInfo and BlockLimits
> structure that is used by QEMUs block layer. By setting the right
> values, we can use glfs_discard() and glfs_zerofill() to reduce the
> writing of 0-bytes that QEMU falls back on at the moment.

Hey Niels and Kevin,

In one of our discussions Jeff showed his interest in knowing about
discard support in gluster upstream.
I think his intention was the same here.

>
>   .bdrv_has_zero_init / qemu_gluster_has_zero_init:
> Currently always returns 0. But if a file gets created on a Gluster
> volume, it should never have old contents in it. Rewriting it with
> 0-bytes looks unneeded to me.

I agree

>
> With these improvements the gluster:// URL usage with QEMU (and now also
> the new JSON QAPI), certain operations are expected to be a little
> faster. Anyone starting to work on this would want to trace the actual
> operations (on a single-brick volume) with ltrace/wireshark on the
> system where QEMU runs.
>
> Who is interested to take this on?

Of course I am very much interested to do this work :)

But please expect at least a week or two before I can start on this,
as currently my plate is filled with block store tasks.

Hopefully this is meant for 2.8 (as 2.7 is in hard-freeze), so I think
the delay should be acceptable.

Thanks,
--
Prasanna


> Niels
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] What does the transport type in glfs_set_volfile_server() exactly mean?

2016-07-18 Thread Prasanna Kalever
On Mon, Jul 18, 2016 at 4:31 PM, Raghavendra Gowdappa
<rgowd...@redhat.com> wrote:
>
>
> - Original Message -
>> From: "Prasanna Kalever" <pkale...@redhat.com>
>> To: "gluster-devel" <gluster-devel@gluster.org>
>> Cc: "Kaushal Madappa" <kmada...@redhat.com>
>> Sent: Monday, July 18, 2016 4:10:24 PM
>> Subject: [Gluster-devel] What does the transport type in glfs_set_volfile_server()
>>  exactly mean?
>>
>> Hey Team,
>>
>>
>> My understanding is that the @transport argument in
>> glfs_set_volfile_server() is meant for specifying the transport used for
>> fetching the volfile from the server;
>> IIRC it currently supports tcp and unix only...
>
> Yes, it's hard-coded to use only the "socket" transport. See [1]. However, if
> required, it's not difficult to add an rdma option. However, infiniband need not
> be present on all nodes.
>
> [1] 
> https://github.com/gluster/glusterfs/blob/master/glusterfsd/src/glusterfsd-mgmt.c#L2132
>

Thanks for the info Raghavendra,

In my view, the benefit of adding an rdma transport for fetching the
volfile is zero, and it is perhaps not needed at all. :)

--
Prasanna

>>
>> The doc here https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h
>> +166 shows the rdma as well, which is something I cannot digest.
>>
>>
>> Can someone correct me ?
>>
>> Have we ever supported volfile fetch over rdma ?
>>
>>
>> Thanks,
>> --
>> Prasanna
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] What does the transport type in glfs_set_volfile_server() exactly mean?

2016-07-18 Thread Prasanna Kalever
Hey Team,


My understanding is that the @transport argument in
glfs_set_volfile_server() is meant for specifying the transport used for
fetching the volfile from the server;
IIRC it currently supports tcp and unix only...

The doc here https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h
+166 shows the rdma as well, which is something I cannot digest.


Can someone correct me ?

Have we ever supported volfile fetch over rdma ?


Thanks,
--
Prasanna
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] iSCSI

2016-07-08 Thread Prasanna Kalever
Hi Francis,


tcmu-runner is not available as a pre-built package in the Ubuntu distro,
so you need to build it on your own. Honestly, I have not tried this
myself, but the suggested solution should solve it for you.

glfs has been supported since the initial version of tcmu-runner, so you
can take up any recent version of it.

Run 'cmake . -Dwith-glfs=true' followed by 'make'.

That's not all: you need to manually copy the systemd unit file
(tcmu-runner.service) to the respective directory, copy the handlers (in
our case 'handler_glfs.so') to /usr/lib/tcmu-runner/, and copy the other
required conf files.


Take a look at the Fedora RPM spec, which should be a good guide to follow:
[...]
%install
make install DESTDIR=%{buildroot}
mkdir -p %{buildroot}%{_mandir}/man8/
install -m 644 tcmu-runner.8.gz %{buildroot}%{_mandir}/man8/

%post -n libtcmu -p /sbin/ldconfig
%postun -n libtcmu -p /sbin/ldconfig

%files
%{_bindir}/tcmu-runner
%dir %{_libdir}/tcmu-runner
%{_libdir}/tcmu-runner/*
%{_sysconfdir}/dbus-1/system.d/tcmu-runner.conf
%{_datarootdir}/dbus-1/system-services/org.kernel.TCMUService1.service
%{_unitdir}/tcmu-runner.service
%doc README.md
%license LICENSE
%{_mandir}/man8/tcmu-runner.8.gz

%files -n libtcmu
%{_libdir}/*.so.*

%files -n libtcmu-devel
%{_includedir}/libtcmu.h
%{_includedir}/libtcmu_common.h
%{_libdir}/*.so
[...]


List of things you need to do:
1. make the path changes in the systemd unit file tcmu-runner.service and
copy it to /lib/systemd/system/
2. copy the libraries to their respective paths
3. mkdir and copy handler_glfs.so to /usr/lib/tcmu-runner/
4. copy tcmu-runner.conf to /etc/dbus-1/system.d/
5. copy org.kernel.TCMUService1.service to /etc/dbus-1/system-services/
6. make sure target_core_user is loaded and target.service is running
7. run systemctl start tcmu-runner.service
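The steps above can be sketched as a small dry-run plan (the destination paths are assumptions taken from the Fedora spec quoted earlier; verify them for your distro before copying anything for real):

```python
# Dry-run sketch of the manual install steps: nothing is copied here,
# the script only prints the commands that would be run.
INSTALL_STEPS = [
    ("tcmu-runner.service",             "/lib/systemd/system/"),
    ("libtcmu.so.*",                    "/usr/lib/"),
    ("handler_glfs.so",                 "/usr/lib/tcmu-runner/"),
    ("tcmu-runner.conf",                "/etc/dbus-1/system.d/"),
    ("org.kernel.TCMUService1.service", "/etc/dbus-1/system-services/"),
]

def plan():
    """Build the list of copy commands (returned, not executed)."""
    return ["cp %s %s" % (src, dst) for src, dst in INSTALL_STEPS]

for cmd in plan():
    print(cmd)
print("systemctl start tcmu-runner.service")
```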

Hopefully that should work!

Looking forward to your update.

Cheers,
--
Prasanna

On Fri, Jul 8, 2016 at 3:51 AM, francis Lavalliere <
francis.lavalli...@gmail.com> wrote:

> Hello,
>
> I've been messing around with Gluster FS version 3.7 and 3.8.
>
> Currently i am trying to implement iSCSI with Gluster FS.
>
> I've installed ubuntu 16.04 and have installed various components:
>
> I didnt found any tcmu-runner packages for ubuntu, so i manually built it
> using their git repository:
>
> I've tried : cmake . -Dwith-glfs=true  and successfully installed it using
> make/make install..
>
> when I do : targetcli ls  i do not see the output of the user:glfs
>
>
> Here is an example of the output. I've managed to do it via fileio, but
> this is not the right way.
>
> targetcli ls
>
> o- / ..................................................................... [...]
>   o- backstores .......................................................... [...]
>   | o- fileio .............................................. [1 Storage Object]
>   | | o- testfs ............................. [711.0M, /nfs/testfs.img, in use]
>   | o- iblock .............................................. [0 Storage Object]
>   | o- pscsi ............................................... [0 Storage Object]
>   | o- rd_mcp .............................................. [0 Storage Object]
>   o- ib_srpt ..................................................... [0 Targets]
>   o- iscsi ........................................................ [1 Target]
>   | o- iqn.2015-04.com.example:target1 ............................... [1 TPG]
>   |   o- tpg1 ....................................................... [enabled]
>   |     o- acls ...................................................... [0 ACLs]
>   |     o- luns ....................................................... [1 LUN]
>   |     | o- lun0 .......................... [fileio/testfs (/nfs/testfs.img)]
>   |     o- portals ................................................. [1 Portal]
>   |       o- 0.0.0.0:3260 ................................. [OK, iser enabled]

Re: [Gluster-devel] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-06-15 Thread Prasanna Kalever
On Wed, Jun 15, 2016 at 2:41 PM, André Bauer  wrote:
>
> Hi Lists,
>
> I just updated one of my Ubuntu KVM servers from 14.04 (Trusty) to 16.04
> (Xenial).
>
> I use the GlusterFS packages from the official Ubuntu PPA and my own
> Qemu packages (
> https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7 )
> which have libgfapi enabled.
>
> On Ubuntu 14.04 everything is working fine. I only had to add the
> following lines to the Apparmor config in
> /etc/apparmor.d/abstractions/libvirt-qemu to get it work:
>
> # for glusterfs
> /proc/sys/net/ipv4/ip_local_reserved_ports r,
> /usr/lib/@{multiarch}/glusterfs/**.so mr,
> /tmp/** rw,
>
> In Ubuntu 16.04 I'm not able to start my VMs via libvirt or to
> create new images via qemu-img using libgfapi.
>
> Mounting the volume via fuse does work without problems.
>
> Examples:
>
> qemu-img create gluster://storage.mydomain/vmimages/kvm2test.img 1G
> Formatting 'gluster://storage.intdmz.h1.mdd/vmimages/kvm2test.img',
> fmt=raw size=1073741824
> [2016-06-15 08:15:26.710665] E [MSGID: 108006]
> [afr-common.c:4046:afr_notify] 0-vmimages-replicate-0: All subvolumes
> are down. Going offline until atleast one of them comes back up.
> [2016-06-15 08:15:26.710736] E [MSGID: 108006]
> [afr-common.c:4046:afr_notify] 0-vmimages-replicate-1: All subvolumes
> are down. Going offline until atleast one of them comes back up.
>
> Libvirtd log:
>
> [2016-06-13 16:53:57.055113] E [MSGID: 104007]
> [glfs-mgmt.c:637:glfs_mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch
> volume file (key:vmimages) [Invalid argument]
> [2016-06-13 16:53:57.055196] E [MSGID: 104024]
> [glfs-mgmt.c:738:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with
> remote-host: storage.intdmz.h1.mdd (Permission denied) [Permission denied]
> 2016-06-13T16:53:58.049945Z qemu-system-x86_64: -drive
> file=gluster://storage.intdmz.h1.mdd/vmimages/checkbox.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=writeback:
> Gluster connection failed for server=storage.intdmz.h1.mdd port=0
> volume=vmimages image=checkbox.qcow2 transport=tcp: Permission denied

I think you have missed enabling bind-insecure, which is needed for
libgfapi access. Please try again after following the steps below:

=> edit /etc/glusterfs/glusterd.vol, adding "option
rpc-auth-allow-insecure on" #(on all nodes)
=> gluster vol set $volume server.allow-insecure on
=> systemctl restart glusterd #(on all nodes)

In case this does not work,
help us by providing the output of the below, along with the log files:
# gluster vol info
# gluster vol status
# gluster peer status

--
Prasanna

>
> I don't see anything in the apparmor logs when setting everything to
> complain or audit.
>
> It also seems GlusterFS servers don't get any request because brick logs
> are not complaining anything.
>
> Any hints?
>
>
> --
> Regards
> André Bauer
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-infra] Please test Gerrit 2.12.2

2016-05-31 Thread Prasanna Kalever
Hi Kotresh,

This is where I was peeping in
http://review.nigelb.me/#/c/14346/1/xlators/features/index/src/index.c

May be this patch could have been posted before upgrade ?


Thanks,
--
Prasanna


On Tue, May 31, 2016 at 12:39 PM, Kotresh Hiremath Ravishankar
<khire...@redhat.com> wrote:
> Hi Prasanna,
>
> The 'Fix' button is visible. Maybe you are missing something; please check.
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -----
>> From: "Prasanna Kalever" <pkale...@redhat.com>
>> To: "Nigel Babu" <nig...@redhat.com>
>> Cc: "gluster-infra" <gluster-in...@gluster.org>, "gluster-devel" 
>> <gluster-devel@gluster.org>
>> Sent: Tuesday, May 31, 2016 12:13:47 PM
>> Subject: Re: [Gluster-devel] [Gluster-infra] Please test Gerrit 2.12.2
>>
>> Hi Nigel,
>>
>> I don't see the 'Fix' button in the comment section, which was introduced
>> in 2.12.2 along with the "fix for a remote code execution exploit". It
>> lets us edit the code in the Gerrit web editor instantaneously, so we
>> don't have to cherry-pick the patch every time to address minor code
>> changes.
>>
>> I feel that is really helpful for the developers to address comments
>> faster and easier.
>>
>> Please see [1], it also has attachments showing how this looks
>>
>> [1] http://www.gluster.org/pipermail/gluster-devel/2016-May/049429.html
>>
>>
>> Thanks,
>> --
>> Prasanna
>>
>> On Tue, May 31, 2016 at 10:39 AM, Nigel Babu <nig...@redhat.com> wrote:
>> > Hello,
>> >
>> > A reminder: I'm hoping to get this done tomorrow morning at 0230 GMT[1].
>> > I'll have a backup ready in case something goes wrong. I've tested this
>> > process on review.nigelb.me and it's gone reasonably smoothly.
>> >
>> > [1]:
>> > http://www.timeanddate.com/worldclock/fixedtime.html?msg=Maintenance=20160601T08=176=1
>> >
>> > On Mon, May 30, 2016 at 7:26 PM, Nigel Babu <nig...@redhat.com> wrote:
>> >>
>> >> Hello,
>> >>
>> >> I've now upgraded Gerrit on http://review.nigelb.me to 2.12.2. Please
>> >> spend a few minutes testing that everything works as you expect it to. If
>> >> I
>> >> don't hear anything negative by tomorrow, I'd like to schedule an upgrade
>> >> this week.
>> >>
>> >> --
>> >> nigelb
>> >
>> >
>> >
>> >
>> > --
>> > nigelb
>> >
>> > ___
>> > Gluster-infra mailing list
>> > gluster-in...@gluster.org
>> > http://www.gluster.org/mailman/listinfo/gluster-infra
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] changing client insecure port ceil

2016-05-13 Thread Prasanna Kalever
Hello all,

please review [1]

the above patch changes the client insecure port ceiling from 65535 to 49151.

Current port allocation to various processes (clumsy):
    1 - 1023   -> client port range if bind-secure is turned on
 1024 - 49151  -> fall back to this if the above ports are exhausted
 1024 - 65535  -> client port range if bind-insecure is on
49152 - 65535  -> brick port range

Now we have segregated the port range 0 - 65535 into the 3 ranges below:
    1 - 1023   -> client port range if bind-secure is turned on
 1024 - 49151  -> client port range if bind-insecure is on (also the
fall-back if the above ports are exhausted)
49152 - 65535  -> brick port range

What does this address?
With the above approach we achieve a clean segregation of the port mapping.
Previously, with bind-insecure on, client ports were allocated downwards
from 65535 while brick ports grow upwards from 49152, so there was a
possibility of a port clash between brick and client ports.
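The segregated layout can be sketched as a tiny classifier (illustrative only; it encodes the ranges from this description, not glusterd's actual allocation code):

```python
# Illustrative classifier for the segregated port ranges described above.
# Not glusterd code; it only mirrors the layout in the patch description.
def classify_port(port, bind_secure=False):
    if not 0 <= port <= 65535:
        raise ValueError("not a valid port: %d" % port)
    if port >= 49152:
        return "brick"                       # 49152 - 65535
    if bind_secure and port <= 1023:
        return "client (secure)"             # 1 - 1023
    return "client (insecure / fallback)"    # 1024 - 49151

print(classify_port(49200))                   # brick
print(classify_port(1000, bind_secure=True))  # client (secure)
print(classify_port(2000))                    # client (insecure / fallback)
```

Note how the brick range and the insecure client range no longer overlap, which is the clash the patch removes.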


All the above is for now.
In upcoming releases we (Raghavendra Talur and myself) would like to
stop glusterd from doing port allocation,
and also honor "ip_local_port_range" and "ip_local_reserved_ports" by
leaving the allocation to the kernel.
I shall write a separate email when we start doing that, but for
now [1] is important.

[1] http://review.gluster.org/#/c/14326


Thanks,
--
Prasanna
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] git-branch-diff: wrapper script for git to visualize backports

2016-05-09 Thread Prasanna Kalever
Hi all,

Check out http://review.gluster.org/14230.
I have added more cool features in patch set 2 to meet both developers'
and release maintainers' expectations.

Thanks,
--
Prasanna


On Fri, May 6, 2016 at 1:22 PM, Prasanna Kalever <pkale...@redhat.com> wrote:
> On Fri, May 6, 2016 at 12:03 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>>
>>
>> On 05/05/2016 11:25 PM, Prasanna Kalever wrote:
>>> Hi Team,
>>>
>>> Check out the glusterfs script that is capable of showing your list of commits
>>> missed (backports) in other branches (say 3.7.12) w.r.t. master
>>>
>>> http://review.gluster.org/#/c/14230/
>>>
>>>
>>> This script helps in visualizing backported and missed commits between two
>>> different branches.
>>>
>>> While backporting a commit to another branch, only the subject of the patch
>>> remains unchanged; everything else, such as the commit message, commit ID,
>>> change ID, and bug ID, will change. This script works by taking the commit
>>> subject as the key for comparing two git branches, which can be local or remote.
>> Nice work!! This will be really helpful in situations like now, where we
>> have to backport almost all our fixes from mainline to the 3.7 & 3.8 branches.
>>
>> I'd also request you to send a report on the patches missing in 3.8
>> branch and send a reminder to all the developers asking for the backports.
>>
>> Also, is there a way to limit the comparison? For example, I may be
>> interested in seeing the delta between mainline and the last 3.7 release,
>> i.e. 3.7.11, and do not want to compare the whole 3.7 branch?
>
> Thank you Atin, that's a good idea.
> That can be done using git tags, but I need to do some more investigation
> with git :)
>
> I would like to hear more from the developers to make this tool better.
>
> --
> Prasanna
>>
>> ~Atin
>>>
>>>
>>>
>>> Help:
>>>
>>> $ ./extras/git-branch-diff.py --help
>>> usage: git-branch-diff.py [-h] [-s SOURCE_BRANCH] -t TARGET_BRANCH
>>>   [-a GIT_AUTHOR] [-p REPO_PATH]
>>>
>>> git wrapper to diff local/remote branches
>>>
>>> optional arguments:
>>>   -h, --helpshow this help message and exit
>>>   -s SOURCE_BRANCH, --source-branch SOURCE_BRANCH
>>> source branch name
>>>   -t TARGET_BRANCH, --target-branch TARGET_BRANCH
>>> target branch name
>>>   -a GIT_AUTHOR, --author GIT_AUTHOR
>>> default: git config name/email
>>>   -p REPO_PATH, --path REPO_PATH
>>> show branches diff specific to path
>>>
>>>
>>> Sample usages:
>>>
>>>   $ ./extras/git-branch-diff.py -t origin/release-3.8
>>>   $ ./extras/git-branch-diff.py -s local_branch -t origin/release-3.7
>>>   $ ./extras/git-branch-diff.py -t origin/release-3.8
>>> --author="us...@redhat.com"
>>>   $ ./extras/git-branch-diff.py -t origin/release-3.8 --path="xlators/"
>>>
>>>   $ ./extras/git-branch-diff.py -t origin/release-3.8 --author=""
>>>
>>>
>>>
>>> Example output:
>>>
>>> $ ./extras/git-branch-diff.py -t origin/release-3.8 --path=./rpc
>>>
>>> 
>>>
>>> [ ✔ ] Successfully Backported changes:
>>> {from: remotes/origin/master  to: origin/release-3.8}
>>>
>>> glusterd: try to connect on GF_PMAP_PORT_FOREIGN aswell
>>> rpc: fix gf_process_reserved_ports
>>> rpc: assign port only if it is unreserved
>>> server/protocol: option for dynamic authorization of client permissions
>>> rpc: fix binding brick issue while bind-insecure is enabled
>>> rpc: By default set allow-insecure, bind-insecure to on
>>>
>>> 
>>>
>>> [ ✖ ] Missing patches in origin/release-3.8:
>>>
>>> glusterd: add defence mechanism to avoid brick port clashes
>>> rpc: define client port range
>>>
>>> 
>>>
>>>
>>> Note: This script may ignore commits whose subjects were altered
>>> while backporting patches. Also, this script doesn't have any intelligence
>>> to detect squashed commits.
>>>
>>>
>>>
>>> Thanks,
>>> --
>>> Prasanna
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] gerrit-2.12.2 will help us improve address comments faster

2016-05-07 Thread Prasanna Kalever
Hello Everyone,

Gerrit has exciting news for all of us!
The latest version, gerrit-2.12.2, includes a fix for a remote code
execution exploit; see [1].

The Gerrit version we currently use is 2.9.4; on a comment at a code
line (say for a typo, a whitespace issue, or correcting a log message),
it shows two options, 'Reply' and 'Done'. Gerrit-2.12.2
introduces a new option, 'Fix', which allows us to make
minor changes in the Gerrit editor and post them directly from the GUI.

I feel this will help all of us to address minor comments faster
and unstick the patches.


Please find the attached images to visualize this better.

So when are we upgrading to gerrit-2.12.2 ?


[1] 
https://gerrit-documentation.storage.googleapis.com/ReleaseNotes/ReleaseNotes-2.12.2.html



Thanks,
--
Prasanna
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] git-branch-diff: wrapper script for git to visualize backports

2016-05-05 Thread Prasanna Kalever
Hi Team,

Check out the glusterfs script that can show the list of commits
missing (i.e. not yet backported) in other branches (say 3.7.12) w.r.t. master:

http://review.gluster.org/#/c/14230/


This script helps in visualizing backported and missed commits between two
different branches.

When a commit is backported to another branch, only the subject of the
patch usually remains unchanged; everything else, such as the commit
message, commit Id, change Id, and bug Id, will differ. This script
therefore uses the commit subject as the key for comparing two git
branches, which can be local or remote.
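The subject-key comparison described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual extras/git-branch-diff.py; the `subjects` helper and its use of `git log --pretty=format:%s` are my assumptions about how such a script would gather its input.

```python
import subprocess

def subjects(branch, repo_path="."):
    """Return the list of commit subjects on a branch (newest first)."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%s", branch],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def missing_subjects(source_subjects, target_subjects):
    """Subjects present on the source branch but absent from the target.

    Commit Id, change Id, and bug Id all differ across backports, so the
    subject is the only stable key available for the comparison.
    """
    target = set(target_subjects)
    return [s for s in source_subjects if s not in target]
```

For example, `missing_subjects(subjects("master"), subjects("origin/release-3.8"))` would approximate the "Missing patches" section of the script's output.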



Help:

$ ./extras/git-branch-diff.py --help
usage: git-branch-diff.py [-h] [-s SOURCE_BRANCH] -t TARGET_BRANCH
  [-a GIT_AUTHOR] [-p REPO_PATH]

git wrapper to diff local/remote branches

optional arguments:
  -h, --help            show this help message and exit
  -s SOURCE_BRANCH, --source-branch SOURCE_BRANCH
source branch name
  -t TARGET_BRANCH, --target-branch TARGET_BRANCH
target branch name
  -a GIT_AUTHOR, --author GIT_AUTHOR
default: git config name/email
  -p REPO_PATH, --path REPO_PATH
show branches diff specific to path


Sample usages:

  $ ./extras/git-branch-diff.py -t origin/release-3.8
  $ ./extras/git-branch-diff.py -s local_branch -t origin/release-3.7
  $ ./extras/git-branch-diff.py -t origin/release-3.8
--author="us...@redhat.com"
  $ ./extras/git-branch-diff.py -t origin/release-3.8 --path="xlators/"

  $ ./extras/git-branch-diff.py -t origin/release-3.8 --author=""



Example output:

$ ./extras/git-branch-diff.py -t origin/release-3.8 --path=./rpc



[ ✔ ] Successfully Backported changes:
{from: remotes/origin/master  to: origin/release-3.8}

glusterd: try to connect on GF_PMAP_PORT_FOREIGN aswell
rpc: fix gf_process_reserved_ports
rpc: assign port only if it is unreserved
server/protocol: option for dynamic authorization of client permissions
rpc: fix binding brick issue while bind-insecure is enabled
rpc: By default set allow-insecure, bind-insecure to on



[ ✖ ] Missing patches in origin/release-3.8:

glusterd: add defence mechanism to avoid brick port clashes
rpc: define client port range




Note: this script may ignore commits whose commit subjects were altered
while backporting patches. Also, this script doesn't have any intelligence
to detect squashed commits.



Thanks,
--
Prasanna
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] changes to client port range in release 3.1.3

2016-05-03 Thread Prasanna Kalever
Hi all,

The various port ranges in glusterfs as of now (a very high-level view):


client:
  In case of bind-secure:
    starts from 1023 and counts down to 1; if all of these ports are
    exhausted, it binds to a random port (a connect() without a bind() call)
  In case of bind-insecure:
    starts from 65535 all the way down to 1

bricks/server:
  any port from 49152 to 65535

glusterd:
  24007


There was a recent bug: in the bind-secure case, the client saw all
ports as exhausted and connected via a random port which, unfortunately,
was in the brick port-map range. So the client successfully got a
connection on that port. glusterd, having no knowledge of this (since
pmap allocation is done only at start), passed the same port to a
brick, and the brick then failed to bind to it (also consider the race
situation).


To solve this issue, we decided to split the client and brick port ranges [1].

As usual, the brick port-map range will be the IANA ephemeral port
range, i.e. 49152-65535.
For clients, only in case the secure ports are exhausted (which is
rare), we decided to fall back to the registered ports, i.e. 49151 - 1024.
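The client-side selection order described above can be sketched as a simple generator. This is a hypothetical illustration of the policy, not the actual glusterfs code: secure ports from 1023 down to 1 first, then the rare fallback over the registered ports from 49151 down to 1024.

```python
def client_candidate_ports():
    """Yield client bind candidates in the order described above."""
    # Secure (privileged) ports first: 1023 down to 1.
    yield from range(1023, 0, -1)
    # Rare fallback once the secure range is exhausted:
    # registered ports, 49151 down to 1024. The brick range
    # 49152-65535 is never touched, avoiding the clash above.
    yield from range(49151, 1023, -1)
```

A client would try bind() on each candidate in turn and stop at the first success; the key property is that no candidate ever lands in the brick port-map range.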


Looking at the ephemeral port standards:
1. The Internet Assigned Numbers Authority (IANA) suggests the range
49152 to 65535.
2. Many Linux kernels use the port range 32768 to 61000.
More at [2].
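For reference, on Linux the kernel's actual ephemeral range can be read from the `ip_local_port_range` sysctl. A small sketch (an illustration; the procfs path exists only on Linux, so the function degrades to None elsewhere):

```python
def linux_ephemeral_range(path="/proc/sys/net/ipv4/ip_local_port_range"):
    """Return (low, high) from the kernel sysctl, or None if unreadable."""
    try:
        with open(path) as f:
            # The file holds two whitespace-separated integers, e.g. "32768 61000".
            low, high = map(int, f.read().split())
    except (OSError, ValueError):
        return None
    return low, high
```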

One thought was to split the current brick port range (~16K) into two
(maybe ~8K each, or some other ratio) and use one part for clients and
the other for bricks; that could solve the problem, but it would also
introduce a limitation on scalability.

The patch [1] goes into 3.1.3; we wanted to know if these changes
cause any impact for you.


[1] http://review.gluster.org/#/c/13998/
[2] https://en.wikipedia.org/wiki/Ephemeral_port


Thanks,
--
Prasanna
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regression failure ./tests/bitrot/bug-1244613.t

2016-04-28 Thread Prasanna Kalever
On Thu, Apr 28, 2016 at 3:01 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Thu, Apr 28, 2016 at 02:21:17PM +0530, Prasanna Kalever wrote:
>> Hi all,
>>
>>
>> 12:53:34 Test Summary Report
>> 12:53:34 ---
>> 12:53:34 ./tests/bitrot/bug-1244613.t (Wstat: 0 Tests: 35 Failed: 2)
>> 12:53:34   Failed tests:  11, 34
>>
>> which is "TEST mount_nfs $H0:/$V0 $N0;" while trying to mount using nfs
>>
>> I am suspecting that the reason could be "glusterd: default value of
>> nfs.disable, change from false to true" ?
>
> That looks possible indeed.
>
>> do some one already send a patch for this?
>
> I could not spot one quickly in this list:
>
>   
> http://review.gluster.org/#/q/project:glusterfs+branch:master+after:2016-04-27
>
> A patch would be most welcome :)

Here is the fix; please review and merge it:
http://review.gluster.org/#/c/14103/

Thanks,
--
Prasanna

>
> Thanks,
> Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] regression failure ./tests/bitrot/bug-1244613.t

2016-04-28 Thread Prasanna Kalever
Hi all,


12:53:34 Test Summary Report
12:53:34 ---
12:53:34 ./tests/bitrot/bug-1244613.t (Wstat: 0 Tests: 35 Failed: 2)
12:53:34   Failed tests:  11, 34

The failing test is "TEST mount_nfs $H0:/$V0 $N0;", which tries to mount using NFS.

I suspect the reason could be "glusterd: default value of
nfs.disable, change from false to true"?

Has someone already sent a patch for this?

Thanks,
--
Prasanna
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] regressions: tests/basic/afr/self-heald.t fails

2016-04-27 Thread Prasanna Kalever
Hi all,

Noticed that 'tests/basic/afr/self-heald.t' fails on both CentOS and NetBSD:

https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/16201/console
https://build.gluster.org/job/rackspace-regression-2GB-triggered/20043/console

http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/16199/console
http://build.gluster.org/job/rackspace-regression-2GB-triggered/20042/console

--
Prasanna
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] regression machines reporting slowly ? here is the reason ...

2016-04-24 Thread Prasanna Kalever
On Sun, Apr 24, 2016 at 7:29 PM, Niels de Vos <nde...@redhat.com> wrote:
> On Sun, Apr 24, 2016 at 04:22:55PM +0530, Prasanna Kalever wrote:
>> On Sun, Apr 24, 2016 at 7:11 AM, Vijay Bellur <vbel...@redhat.com> wrote:
>> > On Sat, Apr 23, 2016 at 9:30 AM, Prasanna Kalever <pkale...@redhat.com> 
>> > wrote:
>> >> Hi all,
>> >>
>> >> Noticed our regression machines are reporting back really slow,
>> >> especially CentOs and Smoke
>> >>
>> >> I found that most of the slaves are marked offline, this could be the
>> >> biggest reasons ?
>> >>
>> >>
>> >
>> > Regression machines are scheduled to be offline if there are no active
>> > jobs. I wonder if the slowness is related to LVM or related factors as
>> > detailed in a recent thread?
>> >
>>
>> Sorry, the previous mail was sent incomplete (blame some Gmail shortcut)
>>
>> Hi Vijay,
>>
>> Honestly I was not aware of this case where the machines move to
>> offline state by them self, I was only aware that they just go to idle
>> state,
>> Thanks for sharing that information. But we still need to reclaim most
>> of machines, Here are the reasons why each of them are offline.
>
> Well, slaves go into offline, and should be woken up when needed.
> However it seems that Jenkins fails to connect to many slaves :-/
>
> I've rebooted:
>
>  - slave46
>  - slave28
>  - slave26
>  - slave25
>  - slave24
>  - slave23
>  - slave21
>
> These all seem to have come up correctly after clicking the 'Lauch slave
> agent' button on the slave's status page.
>
> Remember that anyone with a Jankins account can reboot VMs. This most
> often is sufficient to get them working again. Just go to
> https://build.gluster.org/job/reboot-vm/ , login and press some buttons.
>
> One slave is in a weird status, maybe one of the tests overwrote the ssh
> key?
>
> [04/24/16 06:48:02] [SSH] Opening SSH connection to 
> slave29.cloud.gluster.org:22.
> ERROR: Failed to authenticate as jenkins. Wrong password. 
> (credentialId:c31bff89-36c0-4f41-aed8-7c87ba53621e/method:password)
> [04/24/16 06:48:04] [SSH] Authentication failed.
> hudson.AbortException: Authentication failed.
> at 
> hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:1217)
> at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:711)
> at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:706)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> [04/24/16 06:48:04] Launch failed - cleaning up connection
> [04/24/16 06:48:05] [SSH] Connection closed.
>
> Leaving slave29 as is, maybe one of our admins can have a look and see
> if it needs reprovisioning.

That's really cool, Niels, thank you!

It would be helpful if somebody with Jenkins login permissions could
reboot the NetBSD slave nbslave72.cloud.gluster.org.

The below-mentioned NetBSD slaves were marked offline intentionally;
noting them here in case we forget to restore them to the online state.
(Please ignore this if they are still needed for other jobs or have issues.)


Kaushal :
nbslave74.cloud.gluster.org on Mar 21, 2016 10:59:43 PM
nbslave7h.cloud.gluster.org on Apr 13, 2016 3:15:06 AM


Raghavendra Talur:
nbslave7g.cloud.gluster.org on Mar 29, 2016 2:27:20 AM


Jeff Darcy:
nbslave7i.cloud.gluster.org on Feb 27, 2016 9:09:09 PM


Thanks,
--
Prasanna

>
> Cheers,
> Niels
>
>>
>>
>> CentOs slaves: Hardly (2/14) salves are online [1]
>>
>> slave20.cloud.gluster.org (online)
>> slave21.cloud.gluster.org [Offline Reason: This node is offline
>> because Jenkins failed to launch the slave agent on it.]
>> slave22.cloud.gluster.org (online)
>> slave23.cloud.gluster.org [Offline Reason: This node is offline
>> because Jenkins failed to launch the slave agent on it.]
>> slave24.cloud.gluster.org [Offline Reason: This node is offline
>> because Jenkins failed to launch the slave agent on it.]
>> slave25.cloud.gluster.org [Offline Reason: This node is offline
>> because Jenkins failed to launch the slave agent on it.]
>> slave26.cloud.gluster.org [Offline Reason: This node is offline
>> because Jenkins failed to launch the slave agent on it.]
>> slave27.cloud.gluster.org [Offline Reason: Disconnected by rastar :
>> rastar taking this down for pranith. Needed

Re: [Gluster-devel] [Gluster-infra] regression machines reporting slowly ? here is the reason ...

2016-04-24 Thread Prasanna Kalever
On Sun, Apr 24, 2016 at 7:11 AM, Vijay Bellur <vbel...@redhat.com> wrote:
> On Sat, Apr 23, 2016 at 9:30 AM, Prasanna Kalever <pkale...@redhat.com> wrote:
>> Hi all,
>>
>> Noticed our regression machines are reporting back really slow,
>> especially CentOs and Smoke
>>
>> I found that most of the slaves are marked offline, this could be the
>> biggest reasons ?
>>
>>
>
> Regression machines are scheduled to be offline if there are no active
> jobs. I wonder if the slowness is related to LVM or related factors as
> detailed in a recent thread?
>

Sorry, the previous mail was sent incomplete (blame some Gmail shortcut)

Hi Vijay,

Honestly, I was not aware that the machines move to the offline state
by themselves; I only knew that they go to an idle state.
Thanks for sharing that information. But we still need to reclaim most
of the machines. Here are the reasons why each of them is offline.


CentOS slaves: only 2 of 14 slaves are online [1]

slave20.cloud.gluster.org (online)
slave21.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave22.cloud.gluster.org (online)
slave23.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave24.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave25.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave26.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave27.cloud.gluster.org [Offline Reason: Disconnected by rastar :
rastar taking this down for pranith. Needed for debugging with tar
issue.  Apr 20, 2016 3:44:14 AM]
slave28.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave29.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]

slave32.cloud.gluster.org [Offline Reason: idle]
slave33.cloud.gluster.org [Offline Reason: idle]
slave34.cloud.gluster.org [Offline Reason: idle]

slave46.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]




Smoke slaves: only 2 of 15 slaves are online [2]

slave20.cloud.gluster.org (online)
slave21.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave22.cloud.gluster.org (online)
slave23.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave24.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave25.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave26.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave27.cloud.gluster.org [Offline Reason: Disconnected by rastar :
rastar taking this down for pranith. Needed for debugging with tar
issue.Apr 20, 2016 3:44:14 AM]
slave28.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave29.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]

slave32.cloud.gluster.org [Offline Reason: idle]
slave33.cloud.gluster.org [Offline Reason: idle]
slave34.cloud.gluster.org [Offline Reason: idle]

slave46.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave47.cloud.gluster.org [Offline Reason: idle]




NetBSD slaves: only 6 of 11 are online [3]

nbslave71.cloud.gluster.org (online)
nbslave72.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
nbslave74.cloud.gluster.org [Offline Reason: Disconnected by kaushal
Mar 21, 2016 10:59:43 PM]
nbslave75.cloud.gluster.org (online)
nbslave77.cloud.gluster.org (online)
nbslave79.cloud.gluster.org (online)

nbslave7c.cloud.gluster.org (online)
nbslave7g.cloud.gluster.org [Offline Reason: Disconnected by rastar :
anoop is using this to debug netbsd related issue Mar 29, 2016 2:27:20
AM]
nbslave7h.cloud.gluster.org [Offline Reason: Disconnected by kaushal
Apr 13, 2016 3:15:06 AM]
nbslave7i.cloud.gluster.org [Offline Reason: Disconnected by jdarcy :
Consistently generating spurious failures due to ping timeouts. This
costs people *hours* for a platform nobody uses except as a test for
perfused. Feb 27, 2016 9:09:09 PM]
nbslave7j.cloud.gluster.org (online)


Summary:

For CentOS regressions: 9/14 slaves were completely down [not just idle]
For Smoke: 9/15 slaves were completely down
For Netbsd Regressio

Re: [Gluster-devel] [Gluster-infra] regression machines reporting slowly ? here is the reason ...

2016-04-24 Thread Prasanna Kalever
On Sun, Apr 24, 2016 at 7:11 AM, Vijay Bellur <vbel...@redhat.com> wrote:
> On Sat, Apr 23, 2016 at 9:30 AM, Prasanna Kalever <pkale...@redhat.com> wrote:
>> Hi all,
>>
>> Noticed our regression machines are reporting back really slow,
>> especially CentOs and Smoke
>>
>> I found that most of the slaves are marked offline, this could be the
>> biggest reasons ?
>>
>>
>
> Regression machines are scheduled to be offline if there are no active
> jobs. I wonder if the slowness is related to LVM or related factors as
> detailed in a recent thread?

Hi Vijay,

Honestly, I was not aware that the machines move to the offline state
by themselves; I only knew that they go to an idle state. Thanks for
sharing this information.
But we still need to reclaim most of the machines.


CentOS slaves: only 2 of 14 slaves are online [1]

slave20.cloud.gluster.org (online)
slave21.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave22.cloud.gluster.org (online)
slave23.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave24.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave25.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave26.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave27.cloud.gluster.org [Offline Reason: Disconnected by rastar :
rastar taking this down for pranith. Needed for debugging with tar
issue.  Apr 20, 2016 3:44:14 AM]
slave28.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave29.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]

slave32.cloud.gluster.org [Offline Reason: idle]
slave33.cloud.gluster.org [Offline Reason: idle]
slave34.cloud.gluster.org [Offline Reason: idle]

slave46.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]




Smoke slaves: only 2 of 15 slaves are online [2]

slave20.cloud.gluster.org (online)
slave21.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave22.cloud.gluster.org (online)
slave23.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave24.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave25.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave26.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave27.cloud.gluster.org [Offline Reason: Disconnected by rastar :
rastar taking this down for pranith. Needed for debugging with tar
issue.Apr 20, 2016 3:44:14 AM]
slave28.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave29.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]

slave32.cloud.gluster.org [Offline Reason: idle]
slave33.cloud.gluster.org [Offline Reason: idle]
slave34.cloud.gluster.org [Offline Reason: idle]

slave46.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
slave47.cloud.gluster.org [Offline Reason: idle]



NetBSD slaves: only 6 of 11 are online [3]

nbslave71.cloud.gluster.org (online)
nbslave72.cloud.gluster.org [Offline Reason: This node is offline
because Jenkins failed to launch the slave agent on it.]
nbslave74.cloud.gluster.org [Offline Reason: Disconnected by kaushal
Mar 21, 2016 10:59:43 PM]
nbslave75.cloud.gluster.org (online)
nbslave77.cloud.gluster.org (online)
nbslave79.cloud.gluster.org (online)

nbslave7c.cloud.gluster.org (online)
nbslave7g.cloud.gluster.org [Offline Reason: Disconnected by rastar :
anoop is using this to debug netbsd related issue Mar 29, 2016 2:27:20
AM]
nbslave7h.cloud.gluster.org [Offline Reason: Disconnected by kaushal
Apr 13, 2016 3:15:06 AM]
nbslave7i.cloud.gluster.org [Offline Reason: Disconnected by jdarcy :
Consistently generating spurious failures due to ping timeouts. This
costs people *hours* for a platform nobody uses except as a test for
perfused.
Feb 27, 2016 9:09:09 PM]
nbslave7j.cloud.gluster.org (online)









For CentOS regressions: 9/14 slaves were completely down [not just idle]
For Smoke: 9/15 slaves were down; that's a big number to reclaim
NetBSD regressions: we can reclaim 5/11 slaves; that's a good number










https://build.gluster.org/label/rackspace_regression_2gb/
https://build.

[Gluster-devel] regression machines reporting slowly ? here is the reason ...

2016-04-23 Thread Prasanna Kalever
Hi all,

Noticed our regression machines are reporting back really slowly,
especially CentOS and Smoke.

I found that most of the slaves are marked offline; this could be the
biggest reason.


Here are their numbers :

CentOS slaves: only 2 of 14 slaves are online [1]

slave20.cloud.gluster.org (online)
slave21.cloud.gluster.org
slave22.cloud.gluster.org (online)
slave23.cloud.gluster.org
slave24.cloud.gluster.org
slave25.cloud.gluster.org
slave26.cloud.gluster.org
slave27.cloud.gluster.org
slave28.cloud.gluster.org
slave29.cloud.gluster.org

slave32.cloud.gluster.org
slave33.cloud.gluster.org
slave34.cloud.gluster.org

slave46.cloud.gluster.org


Smoke slaves: only 2 of 15 slaves are online [2]

slave20.cloud.gluster.org (online)
slave21.cloud.gluster.org
slave22.cloud.gluster.org (online)
slave23.cloud.gluster.org
slave24.cloud.gluster.org
slave25.cloud.gluster.org
slave26.cloud.gluster.org
slave27.cloud.gluster.org
slave28.cloud.gluster.org
slave29.cloud.gluster.org

slave32.cloud.gluster.org
slave33.cloud.gluster.org
slave34.cloud.gluster.org

slave46.cloud.gluster.org
slave47.cloud.gluster.org


NetBSD slaves: only 6 of 11 are online [3]

nbslave71.cloud.gluster.org (online)
nbslave72.cloud.gluster.org
nbslave74.cloud.gluster.org
nbslave75.cloud.gluster.org (online)
nbslave77.cloud.gluster.org (online)
nbslave79.cloud.gluster.org (online)

nbslave7c.cloud.gluster.org (online)
nbslave7g.cloud.gluster.org
nbslave7h.cloud.gluster.org
nbslave7i.cloud.gluster.org
nbslave7j.cloud.gluster.org (online)


If we can reclaim these slaves, that will be super useful.


[1] https://build.gluster.org/label/rackspace_regression_2gb/
[2] https://build.gluster.org/label/smoke_tests/
[3] https://build.gluster.org/label/netbsd7_regression/


Thanks,
--
Prasanna
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] Smoke results voting

2016-04-05 Thread Prasanna Kalever
On Tue, Apr 5, 2016 at 4:24 PM, Kaushal M <kshlms...@gmail.com> wrote:
>
> On 5 Apr 2016 3:43 p.m., "Kaushal M" <kshlms...@gmail.com> wrote:
>>
>> So gerrit voting seems to be working again. It required all the jobs
>> to have the same trigger configuration.
>>
>> Smoke vote was given for [1] a test change that I posted for review.
>> Now just need to check if it works for changes which already have
>> regressions run on them.
>> I'll be following [2] to see it works.
>
> Working now. If anyone notices anything amiss, please let gluster-infra know
> of it.

Kaushal,

Now that we have a collated smoke report-back, it would be really
helpful if we could make "event based on regexp match with comment"
available for other jobs such as:

compare-bug-version-and-git-branch
netbsd6-smoke
freebsd-smoke


Once a user fixes the cause of a failure, they can re-trigger via 'recheck *'.

Thanks,
--
Prasanna




>
>>
>> [1] https://review.gluster.org/13898
>> [2] https://review.gluster.org/13869
>>
>> On Tue, Apr 5, 2016 at 12:46 PM, Prasanna Kalever <pkale...@redhat.com>
>> wrote:
>> > On Tue, Apr 5, 2016 at 12:39 PM, Kaushal M <kshlms...@gmail.com> wrote:
>> >> On Tue, Apr 5, 2016 at 11:26 AM, Kaushal M <kshlms...@gmail.com> wrote:
>> >>> On Tue, Apr 5, 2016 at 11:10 AM, Atin Mukherjee <amukh...@redhat.com>
>> >>> wrote:
>> >>>>
>> >>>>
>> >>>> On 04/05/2016 11:06 AM, Kaushal M wrote:
>> >>>>> I did some changes so that all smoke jobs (linux, *bsd smoke jobs,
>> >>>>> devrpm jobs etc) are triggered for `recheck smoke`.
>> >>>>>
>> >>>>> The collated results are being reported back and the 'Smoke' flag is
>> >>>>> being set. But sometimes, if regression jobs have been already run
>> >>>>> on
>> >>>>> the patchset, jenkins is collating those results as well.
>> >>>>> When this happens, a '-Verified' flag is being set.
>> >>>>>
>> >>>>> Jenkins and its gerrit plugin collate results for jobs launched by
>> >>>>> the
>> >>>>> same event. The regression and smoke jobs should be triggered for
>> >>>>> different events,
>> >>>>> but for some reason jenkins is assuming that they're being triggered
>> >>>>> by the same event and collating all of them together.
>> >>>>>
>> >>>>> I need some time to figure out why this is happening, and fix it.
>> >>>> Is there a way that this doesn't impact the merging as until we get
>> >>>> all
>> >>>> the positive votes, web interface doesn't provide a submit button.
>> >>>
>> >>> This would require manually running the gerrit ssh command as the
>> >>> build user to set the flag, which requires sudo access on
>> >>> build.gluster.org.
>> >>
>> >> Alternatively, administrators can spoof other users. Administrators can
>> >> do
>> >> `ssh @review.gluster.org  suexec --as
>> >> jenk...@build.gluster.org -- gerrit review --label Smoke=+1
>> >> ,`
>> >
>> > As Kaushal mentioned above
>> >
>> > 1. users are free to comment "recheck smoke" (which will  trigger smoke)
>> > 2. only after success on step 1, administrators will get +1 done with
>> > 'ssh ...'
>> >
>> >
>> > Thanks,
>> > --
>> > Prasanna
>> >
>> >
>> >
>> >>
>> >> I'm still figuring out how to solve it properly though.
>> >>
>> >>>
>> >>>
>> >>>>>
>> >>>>> ~kaushal
>> >>>>>
>> >>>>>
>> >>>>> On Mon, Apr 4, 2016 at 10:39 PM, Prasanna Kalever
>> >>>>> <pkale...@redhat.com> wrote:
>> >>>>>> On Mon, Apr 4, 2016 at 9:58 PM, Atin Mukherjee
>> >>>>>> <atin.mukherje...@gmail.com> wrote:
>> >>>>>>> Did anyone notice that for few of the patches smoke results are
>> >>>>>>> not voted
>> >>>>>>> back?
>> >>>>>>>
>> >>>>>>> http://review.gluster.org/#/c/13869 is one of them.
>> >>>>>>
>> >>>>>> +1
>> >>>>>>
>> >>>>>> Here is another one http://review.gluster.org/#/c/11083/
>> >>>>>> I have also re-triggered it with "recheck smoke", after which
>> >>>>>> it return success "http://build.gluster.org/job/smoke/26373/ :
>> >>>>>> SUCCESS"
>> >>>>>>
>> >>>>>> but again failed to report back ...
>> >>>>>>
>> >>>>>> --
>> >>>>>> Prasanna
>> >>>>>>
>> >>>>>>
>> >>>>>>>
>> >>>>>>> -Atin
>> >>>>>>> Sent from one plus one
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> ___
>> >>>>>>> Gluster-infra mailing list
>> >>>>>>> gluster-in...@gluster.org
>> >>>>>>> http://www.gluster.org/mailman/listinfo/gluster-infra
>> >>>>>> ___
>> >>>>>> Gluster-devel mailing list
>> >>>>>> Gluster-devel@gluster.org
>> >>>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>> >>>>> ___
>> >>>>> Gluster-devel mailing list
>> >>>>> Gluster-devel@gluster.org
>> >>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>> >>>>>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] Smoke results voting

2016-04-05 Thread Prasanna Kalever
On Tue, Apr 5, 2016 at 12:39 PM, Kaushal M <kshlms...@gmail.com> wrote:
> On Tue, Apr 5, 2016 at 11:26 AM, Kaushal M <kshlms...@gmail.com> wrote:
>> On Tue, Apr 5, 2016 at 11:10 AM, Atin Mukherjee <amukh...@redhat.com> wrote:
>>>
>>>
>>> On 04/05/2016 11:06 AM, Kaushal M wrote:
>>>> I did some changes so that all smoke jobs (linux, *bsd smoke jobs,
>>>> devrpm jobs etc) are triggered for `recheck smoke`.
>>>>
>>>> The collated results are being reported back and the 'Smoke' flag is
>>>> being set. But sometimes, if regression jobs have been already run on
>>>> the patchset, jenkins is collating those results as well.
>>>> When this happens, a '-Verified' flag is being set.
>>>>
>>>> Jenkins and its gerrit plugin collate results for jobs launched by the
>>>> same event. The regression and smoke jobs should be triggered for
>>>> different events,
>>>> but for some reason jenkins is assuming that they're being triggered
>>>> by the same event and collating all of them together.
>>>>
>>>> I need some time to figure out why this is happening, and fix it.
>>> Is there a way that this doesn't impact the merging as until we get all
>>> the positive votes, web interface doesn't provide a submit button.
>>
>> This would require manually running the gerrit ssh command as the
>> build user to set the flag, which requires sudo access on
>> build.gluster.org.
>
> Alternatively, administrators can spoof other users. Administrators can do
> `ssh @review.gluster.org  suexec --as
> jenk...@build.gluster.org -- gerrit review --label Smoke=+1
> ,`

As Kaushal mentioned above:

1. Users are free to comment "recheck smoke" (which will trigger smoke).
2. Only after step 1 succeeds will administrators apply the +1 with 'ssh ...'.


Thanks,
--
Prasanna



>
> I'm still figuring out how to solve it properly though.
>
>>
>>
>>>>
>>>> ~kaushal
>>>>
>>>>
>>>> On Mon, Apr 4, 2016 at 10:39 PM, Prasanna Kalever <pkale...@redhat.com> 
>>>> wrote:
>>>>> On Mon, Apr 4, 2016 at 9:58 PM, Atin Mukherjee
>>>>> <atin.mukherje...@gmail.com> wrote:
>>>>>> Did anyone notice that for few of the patches smoke results are not voted
>>>>>> back?
>>>>>>
>>>>>> http://review.gluster.org/#/c/13869 is one of them.
>>>>>
>>>>> +1
>>>>>
>>>>> Here is another one http://review.gluster.org/#/c/11083/
>>>>> I have also re-triggered it with "recheck smoke", after which
>>>>> it return success "http://build.gluster.org/job/smoke/26373/ : SUCCESS"
>>>>>
>>>>> but again failed to report back ...
>>>>>
>>>>> --
>>>>> Prasanna
>>>>>
>>>>>
>>>>>>
>>>>>> -Atin
>>>>>> Sent from one plus one
>>>>>>
>>>>>>
>>>>>> ___
>>>>>> Gluster-infra mailing list
>>>>>> gluster-in...@gluster.org
>>>>>> http://www.gluster.org/mailman/listinfo/gluster-infra
>>>>> ___
>>>>> Gluster-devel mailing list
>>>>> Gluster-devel@gluster.org
>>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>> ___
>>>> Gluster-devel mailing list
>>>> Gluster-devel@gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] Smoke results voting

2016-04-04 Thread Prasanna Kalever
On Mon, Apr 4, 2016 at 9:58 PM, Atin Mukherjee
 wrote:
> Did anyone notice that for few of the patches smoke results are not voted
> back?
>
> http://review.gluster.org/#/c/13869 is one of them.

+1

Here is another one: http://review.gluster.org/#/c/11083/
I also re-triggered it with "recheck smoke", after which it returned
success ("http://build.gluster.org/job/smoke/26373/ : SUCCESS"),
but it again failed to report back ...

--
Prasanna


>
> -Atin
> Sent from one plus one
>
>
> ___
> Gluster-infra mailing list
> gluster-in...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] freebsd-smoke failures

2016-04-02 Thread Prasanna Kalever
On Sat, Apr 2, 2016 at 2:13 AM, Jeff Darcy  wrote:
>
> I've seen a lot of patches blocked lately by this:
>
> > BD xlator requested but required lvm2 development library not found.
>
Hi Jeff,

IIRC, this happens because the build job uses the "--enable-bd-xlator"
option during configure, but it looks like the BD library dependencies
are missing on this slave.

/me looking into configure.ac

[part of configure.ac]

if test "x$enable_bd_xlator" != "xno"; then
  AC_CHECK_LIB([lvm2app],
   [lvm_init,lvm_lv_from_name],
   [HAVE_BD_LIB="yes"],
   [HAVE_BD_LIB="no"])
[...]

if test "x$enable_bd_xlator" = "xyes" -a "x$HAVE_BD_LIB" = "xno"; then
   echo "BD xlator requested but required lvm2 development library not found."
   exit 1
fi

From the above, I can say that either liblvm2app.so does not export the
lvm_init/lvm_lv_from_name functions, or liblvm2app.so is not present on
this slave; hence 'HAVE_BD_LIB' gets set to 'no'.
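
As a quick way to tell which of the two cases applies on a slave, one can
check whether the library is known to the dynamic linker at all. This is
only an illustrative sketch (it assumes a Linux box with ldconfig in PATH),
mirroring configure's HAVE_BD_LIB result in a shell variable:

```shell
# Check whether liblvm2app is visible to the dynamic linker at all;
# bd_lib mirrors configure's HAVE_BD_LIB result.
if ldconfig -p 2>/dev/null | grep -q liblvm2app; then
    bd_lib="yes"
    echo "liblvm2app found: the BD xlator check should pass"
else
    bd_lib="no"
    echo "liblvm2app missing: install the lvm2 development package, or drop --enable-bd-xlator"
fi
```

Note that ldconfig only sees the runtime library; the configure check also
needs the development headers, so this is a first-pass diagnosis only.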

IMO, hardly anyone really uses the BD xlator, and I think we should
remove this option from the build script's configure invocation.


Thanks,
-- Prasanna

> It doesn't happen all the time, so there must be something about
> certain patches that triggers it.  Any thoughts?
> ___
> Gluster-infra mailing list
> gluster-in...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Requesting for help with gluster test framework

2016-04-01 Thread Prasanna Kalever
On Fri, Apr 1, 2016 at 5:37 PM, Karthik Subrahmanya  wrote:
>
> Hi all,
>
> As I am trying to write a test for the WORM translator
> which I am working on right now, I am facing some issues
> while executing the test framework.
> I followed the steps in
> https://github.com/gluster/glusterfs/blob/master/tests/README.md
>
>
> [Issue #1]
> While running the run-tests.sh
>
> ... GlusterFS Test Framework ...
>
>
> ==
> Running tests in file ./tests/basic/0symbol-check.t
> [11:48:09] ./tests/basic/0symbol-check.t .. Dubious, test returned 1 (wstat 
> 256, 0x100)
> No subtests run
> [11:48:09]
>
> Test Summary Report
> ---
> ./tests/basic/0symbol-check.t (Wstat: 256 Tests: 0 Failed: 0)
>   Non-zero exit status: 1
>   Parse errors: No plan found in TAP output
> Files=1, Tests=0,  0 wallclock secs ( 0.01 usr +  0.00 sys =  0.01 CPU)
> Result: FAIL
> End of test ./tests/basic/0symbol-check.t
> ==
>
>
> Run complete
> 1 test(s) failed
> ./tests/basic/0symbol-check.t
> 0 test(s) generated core
>
> Slowest 10 tests:
> ./tests/basic/0symbol-check.t  -  1
> Result is 1
>
>
>
> [Issue #2]
> While running a single .t file using "prove -vf"
>
> tests/features/worm.t ..
> Aborting.
> Aborting.
>
> env.rc not found
> env.rc not found
>
> Please correct the problem and try again.
> Please correct the problem and try again.
>
> Dubious, test returned 1 (wstat 256, 0x100)
> No subtests run
>
> Test Summary Report
> ---
> tests/features/worm.t (Wstat: 256 Tests: 0 Failed: 0)
>   Non-zero exit status: 1
>   Parse errors: No plan found in TAP output
> Files=1, Tests=0,  0 wallclock secs ( 0.02 usr +  0.01 sys =  0.03 CPU)
> Result: FAIL
>

This is due to the missing configure step:
run ./autogen.sh && ./configure, then try the tests again.
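
A minimal sketch of the check, assuming it is run from the top of a
glusterfs source checkout (the file name env.rc comes from the error above):

```shell
# The test harness sources env.rc, which exists only after configure
# has been run; report which state the tree is in.
if [ -f env.rc ]; then
    configured="yes"
    echo "env.rc present: run e.g. prove -vf tests/features/worm.t"
else
    configured="no"
    echo "env.rc not found: run ./autogen.sh && ./configure first"
fi
```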

--
Prasanna

>
> It would be awesome if someone can guide me with this.
>
> Thanks & Regards,
> Karthik Subrahmanya
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Wanted - 3.7.5 release manager

2015-09-02 Thread Prasanna Kalever
Hi Pranith,

If you need any assistance please let me know, I will be happy to learn this.

Thanks & regards,
Prasanna Kumar K

- Original Message -
From: "Pranith Kumar Karampuri" 
To: "Vijay Bellur" , "Atin Mukherjee" 

Cc: "Gluster Devel" 
Sent: Wednesday, September 2, 2015 7:34:08 PM
Subject: Re: [Gluster-devel] Wanted - 3.7.5 release manager



On 09/02/2015 07:33 PM, Vijay Bellur wrote:
> On Wednesday 02 September 2015 06:38 PM, Atin Mukherjee wrote:
>> IIRC, Pranith already volunteered for it in one of the last community
>> meetings?
>>
>
> Thanks Atin. I do recollect it now.
>
> Pranith - can you confirm being the release manager for 3.7.5?
Yes, I can do this.

Pranith
>
> -Vijay
>
>> -Atin
>> Sent from one plus one
>>
>> On Sep 2, 2015 6:00 PM, "Vijay Bellur" > > wrote:
>>
>> Hi All,
>>
>> We have been rotating release managers for minor releases in the
>> 3.7.x train. We just released 3.7.4 and are looking for volunteers
>> to be release managers for 3.7.5 (scheduled for 30th September). If
>> anybody is interested in volunteering, please drop a note here.
>>
>> Thanks,
>> Vijay
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org 
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Plans for Gluster 3.8

2015-08-17 Thread Prasanna Kalever
Hi Atin :)

I shall take Bug 1245380
[RFE] Render all mounts of a volume defunct upon access revocation 
https://bugzilla.redhat.com/show_bug.cgi?id=1245380 

Thanks & Regards,
Prasanna Kumar K.


- Original Message -
From: Atin Mukherjee atin.mukherje...@gmail.com
To: Kaushal M kshlms...@gmail.com
Cc: Csaba Henk ch...@redhat.com, gluster-us...@gluster.org, Gluster Devel 
gluster-devel@gluster.org
Sent: Thursday, August 13, 2015 8:58:20 PM
Subject: Re: [Gluster-users] [Gluster-devel] Plans for Gluster 3.8



Can we have some volunteers for these BZs? 

-Atin 
Sent from one plus one 
On Aug 12, 2015 12:34 PM, Kaushal M  kshlms...@gmail.com  wrote: 


Hi Csaba, 

These are the updates regarding the requirements, after our meeting 
last week. The specific updates on the requirements are inline. 

In general, we feel that the requirements for selective read-only mode 
and immediate disconnection of clients on access revocation are doable 
for GlusterFS-3.8. The only problem right now is that we do not have 
any volunteers for it. 

 1. Bug 829042 - [FEAT] selective read-only mode 
 https://bugzilla.redhat.com/show_bug.cgi?id=829042 
 
 absolutely necessary for not getting tarred & feathered in Tokyo ;) 
 either resurrect http://review.gluster.org/3526 
 and _find out integration with auth mechanism for special 
 mounts_, or come up with a completely different concept 
 

With the availability of client_t, implementing this should become 
easier. The server xlator would store the incoming connections common 
name or address in the client_t associated with the connection. The 
read-only xlator could then make use of this information to 
selectively allow read-only clients. The read-only xlator would need 
to implement a new option for selective read-only, which would be 
populated with lists of common-names and addresses of clients which 
would get read-only access. 

 2. Bug 1245380 - [RFE] Render all mounts of a volume defunct upon access 
 revocation 
 https://bugzilla.redhat.com/show_bug.cgi?id=1245380 
 
 necessary to let us enable a watershed scalability 
 enhancement 
 

Currently, when auth.allow/reject and auth.ssl-allow options are 
changed, the server xlator does a reconfigure to reload its access 
list. It just does a reload, and doesn't affect any existing 
connections. To bring this feature in, the server xlator would need to 
iterate through its xprt_list and check every connection for 
authorization again on a reconfigure. Those connections which have 
lost authorization would be disconnected. 
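
The reconfigure-time re-check described above can be sketched as follows;
the peer addresses and allow list here are invented for illustration, and
"drop" stands in for an actual disconnect of the transport:

```shell
# On reconfigure, walk the list of connected peers (akin to the server
# xlator's xprt_list) and drop every one the new allow list rejects.
allow_list="10.0.0.1 10.0.0.3"               # hypothetical new auth.allow
connected_peers="10.0.0.1 10.0.0.2 10.0.0.3"
dropped=""
for peer in $connected_peers; do
    case " $allow_list " in
        *" $peer "*) echo "keep $peer" ;;
        *) dropped="$dropped$peer "          # would disconnect this transport
           echo "drop $peer" ;;
    esac
done
echo "dropped: $dropped"
```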

 3. Bug 1226776 – [RFE] volume capability query 
 https://bugzilla.redhat.com/show_bug.cgi?id=1226776 
 
 eventually we'll be choking in spaghetti if we don't get 
 this feature. The ugly version checks we need to do against 
 GlusterFS as in 
 
 https://review.openstack.org/gitweb?p=openstack/manila.git;a=commitdiff;h=29456c#patch3
  
 
 will proliferate and eat the guts of the code out of its 
 living body if this is not addressed. 
 

This requires some more thought to figure out the correct solution. 
One possible way to get the capabilities of the cluster would be to 
look at the clusters running op-version. This can be obtained using 
`gluster volume get all cluster.op-version` (the volume get command is 
available in glusterfs-3.6 and above). But this doesn't provide much 
improvement over the existing checks being done in the driver. 
___ 
Gluster-devel mailing list 
Gluster-devel@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-devel 

___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

2015-07-30 Thread Prasanna Kalever
Hi Josh Boon,

Can you provide the load test simulation steps in detail, so that I can
try to reproduce the scenario? I am planning to run Iozone in the VM; how
about that?

Well, I can also share a session on my machine through tmate, but that
involves some process to complete, so we can think about that later.

I am expecting the kind of scripts that helped you produce segfaults with
QEMU 2.0 + Ubuntu 14.04 + gluster 3.6.3 (libgfapi).


Thanks & Regards,
Prasanna Kumar K.



- Original Message -
From: Josh Boon glus...@joshboon.com
To: Prasanna Kalever pkale...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Wednesday, July 29, 2015 8:15:02 PM
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

Hey Prasanna,

Thanks for your help! One of the issues we've had is DD doesn't seem to 
reproduce it. Anything that logs and handles large volumes, think mail and web 
servers, tends to segfault the most frequently. I could write up a load test 
and we could put apache on it and try that as that's closet to what we run. 
Also if you don't object would I be able to get on the machine to figure out 
apparmor and do a writeup? Most folks probably won't be able to disable it 
completely. 

Best,
Josh

- Original Message -
From: Prasanna Kalever pkale...@redhat.com
To: Josh Boon glus...@joshboon.com
Cc: Pranith Kumar Karampuri pkara...@redhat.com, Gluster Devel 
gluster-devel@gluster.org
Sent: Wednesday, July 29, 2015 1:54:34 PM
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

Hi Josh Boon,

Below are my setup details:


# qemu-system-x86_64 --version
QEMU emulator version 2.3.0 (Debian 1:2.3+dfsg-5ubuntu4), Copyright (c) 2003-2008 Fabrice Bellard

# gluster --version
glusterfs 3.6.3 built on Jul 29 2015 16:01:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

# lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 14.04 LTS
Release:        14.04
Codename:       trusty

# gluster vol info
Volume Name: vol1
Type: Replicate
Volume ID: ad78ac6c-c55e-4f4a-8b1b-a11865f1d01e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.1.156:/brick1

Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

2015-07-29 Thread Prasanna Kalever
   49152   Y   3726
Brick 10.70.1.156:/brick2       49153   Y   7014
NFS Server on localhost         2049    Y   7028
Self-heal Daemon on localhost   N/A     Y   7035

Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks


# du -sh /brick1/vm1.img
8.6G    /brick1/vm1.img

# du -sh /brick2/vm1.img
8.6G    /brick2/vm1.img


# cat vm1.xml

...

<devices>
  <emulator>/usr/bin/qemu-system-x86_64</emulator>
  <disk type='network' device='disk'>
    <driver name='qemu' type='qcow2' cache='none'/>
    <source protocol='gluster' name='vol1/vm1.img'>
      <host name='10.70.1.156' port='24007'/>
    </source>
    <target dev='vda' bus='virtio'/>
    <alias name='virtio-disk0'/>
    <address type='pci' domain='0x' bus='0x00' slot='0x04' function='0x0'/>
  </disk>
  <disk type='file' device='cdrom'>
    <driver name='qemu' type='raw'/>
    <source file='/home/pkalever/work/qemu/ubuntu-14.04-server-amd64.iso'/>
    <target dev='hdb' bus='ide'/>
    <readonly/>
    <alias name='ide0-0-1'/>
    <address type='drive' controller='0' bus='0' target='0' unit='1'/>
  </disk>
...
</devices>



With the setup given above, once the VM booted successfully I wrote 5G
using the dd command, but I did not encounter any crashes.

I feel qemu 2.3.0 doesn't have the segfault issue.

Please write back if any further information is required.


Thanks & regards,
Prasanna Kumar K.



- Original Message -
From: Prasanna Kalever pkale...@redhat.com
To: Josh Boon glus...@joshboon.com
Cc: Pranith Kumar Karampuri pkara...@redhat.com, SATHEESARAN 
sasun...@redhat.com
Sent: Tuesday, July 28, 2015 5:04:46 PM
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

Hi Josh Boon,

finally able to boot the VM

For now I disabled apparmor via 'update-rc.d -f apparmor remove'.

Thanks for the support. Now I shall try to reproduce the actual problem :)

Best Regards,
Prasanna Kumar K.




- Original Message -
From: Prasanna Kalever pkale...@redhat.com
To: Josh Boon glus...@joshboon.com
Cc: Pranith Kumar Karampuri pkara...@redhat.com, SATHEESARAN 
sasun...@redhat.com
Sent: Tuesday, July 28, 2015 3:45:12 PM
Subject: Re: [Gluster-devel] gfapi 3.6.3 QEMU 2.3 Ubuntu 14.04 testing

Hi Josh Boon,

The apparmor trick didn't work.

Re: [Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

2015-07-02 Thread Prasanna Kalever

This happens because, when bind-insecure is turned on (which is now the
default), the brick may fail to bind to the port assigned by Glusterd
(for example 49192-49195): the rpc_clnt connections are binding to ports
in the same range, so the brick fails to bind to a port that is already
in use by someone else.

This bug already existed before http://review.gluster.org/#/c/11039/ when
using rdma, i.e. even previously rdma would bind to a port <= 1024 if it
could not find a free port > 1024, even when bind-insecure was turned off
(ref. commit '0e3fd04e'). Since we don't have rdma-related tests, we did
not discover this issue earlier.

http://review.gluster.org/#/c/11039/ exposed the bug we encountered;
it can now be fixed by http://review.gluster.org/#/c/11512/, which makes
rpc_clnt pick port numbers starting from 65535 in descending order. As a
result, port clashes are minimized, and it fixes the rdma issues too.

Thanks to Raghavendra Talur for help in discovering the real cause


Regards,
Prasanna Kalever



- Original Message -
From: Raghavendra Talur raghavendra.ta...@gmail.com
To: Krishnan Parthasarathi kpart...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Thursday, July 2, 2015 6:45:17 PM
Subject: Re: [Gluster-devel] spurious failures  
tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t



On Thu, Jul 2, 2015 at 4:40 PM, Raghavendra Talur  raghavendra.ta...@gmail.com 
 wrote: 





On Thu, Jul 2, 2015 at 10:52 AM, Krishnan Parthasarathi  kpart...@redhat.com  
wrote: 



  
  A port assigned by Glusterd for a brick is found to be in use already by 
  the brick. Any changes in Glusterd recently which can cause this? 
  
  Or is it a test infra problem? 

This issue is likely to be caused by http://review.gluster.org/11039 
This patch changes the port allocation that happens for rpc_clnt based 
connections. Previously, allocated ports were < 1024. With this change, 
these connections (typically mount processes, gluster-nfs server processes, 
etc.) could end up using ports that bricks are being assigned to. 

IIUC, the intention of the patch was to make server processes lenient to 
inbound messages from ports > 1024. If we don't require the use of ports 
> 1024, we could leave the port allocation for rpc_clnt connections as before. 
Alternately, we could reserve the range of ports starting from 49152 for bricks 
by setting net.ipv4.ip_local_reserved_ports using sysctl(8). This is specific 
to Linux. 
I'm not aware of how this could be done in NetBSD for instance though. 


It seems this is exactly whats happening. 

I have a question, I get the following data from netstat and grep 

tcp 0 0 f6be17c0fbf5:1023 f6be17c0fbf5:24007 ESTABLISHED 31516/glusterfsd 
tcp 0 0 f6be17c0fbf5:49152 f6be17c0fbf5:490 ESTABLISHED 31516/glusterfsd 
unix 3 [ ] STREAM CONNECTED 988353 31516/glusterfsd 
/var/run/gluster/4878d6e905c5f6032140a00cc584df8a.socket 

Here 31516 is the brick pid. 

Looking at the data, line 2 is very clear, it shows connection between brick 
and glusterfs client. 
unix socket on line 3 is also clear, it is the unix socket connection that 
glusterd and brick process use for communication. 

I am not able to understand line 1; which part of brick process established a 
tcp connection with glusterd using port 1023? 
Note: this data is from a build which does not have the above mentioned patch. 


The patch which exposed this bug is being reverted till the underlying bug is 
also fixed. 
You can monitor revert patches here 
master: http://review.gluster.org/11507 
3.7 branch: http://review.gluster.org/11508 

Please rebase your patches after the above patches are merged to ensure that 
you patches pass regression. 





-- 
Raghavendra Talur 




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Proposal: Using LLVM clang-analyzer in gluster development

2015-05-27 Thread Prasanna Kalever

Niels de Vos, I wish to get access to your setup :)

We know that Clang is a compiler more than an analyzer; it supports many
architectures. You can have a glance at
http://llvm.org/docs/doxygen/html/Triple_8h_source.html

Further, compiling clang from source should not be that difficult on many
of the distros.

Since our purpose is to use Clang Analyzer only in the development of
glusterfs, only the distributions that developers use matter, i.e. Fedora,
CentOS, RHEL, Ubuntu and very few others.

Given the above, I hope integrating scan-build into a script such as
checkpatch.pl (or another) will be a good idea, as Atin proposed.


Thanks & Regards,
Prasanna Kumar K



- Original Message -
From: Atin Mukherjee amukh...@redhat.com
To: Niels de Vos nde...@redhat.com, Atin Mukherjee 
atin.mukherje...@gmail.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Wednesday, May 27, 2015 9:58:00 AM
Subject: Re: [Gluster-devel] Proposal: Using LLVM clang-analyzer in gluster 
development



On 05/27/2015 12:24 AM, Niels de Vos wrote:
 On Tue, May 26, 2015 at 11:00:25PM +0530, Atin Mukherjee wrote:
 On 26 May 2015 17:30, Prasanna Kalever pkale...@redhat.com wrote:

 Hi gluster team,

 Proposal:

 Using Clang static analyzer tool for gluster project


 About Clang:

 From a very high level view, Clang has two features

 1. Clang as a compiler
 2. Clang as a code analyzer

 The Idea hear is to use second point i.e Clang as code analyzer and still
 gcc
 will be our default compiler.

 The Clang Static Analyzer is a source code analysis tool that finds bugs
 in C,
 C++, and Objective-C programs. Given the exact same code base,
 clang-analyzer
 reported ~70 potential issues. clang is an awesome and free tool.

 The reports from clang-analyzer are in HTML and there's a single file for
 each
 issue and it generates a nice looking source code with embedded comments
 about
 which flow that was followed all the way down to the problem.


 Why Clang-Analyzer: (Advantages)

  1. Since its is an open source tool:

   * Available to all the developers
   * Easy Access, we can run the tool while we compile the code (say $
 scan-build make)
   * No restrictions on Number of Runs per week/day/hour/min ..
   * Defects are Identified before submitting a patch, thus very less
 chance of
 defect injection into project

  2. The Html view of clang is very impressive with all the source code
 including
 comments of clang-analyzer, which lead to defect line number directly
 .



 I have attached a sample clang results for geo-replication module run on
 latest
 3.7+ glusterfs code, please find them above.


 Thanks for your time.
 On a relative note, I feel we should try to integrate any of these static
 analyzer as part of our checkpatch.pl and compare the pre and post report
 and proceed if the change doesn't introduce any new defects. Thoughts?
 
 That sounds more like a test we can run in Jenkins. Having this check in
 checkpatch.pl might be difficult because that should be able to run on
 any OS/distribution we support. When we move the test to Jenkins, we can
 run it on the regression test slaves and have them post -1 verified in
 case of issues.
Sounds good to me, be it at local or jenkins, my only intention is to
refrain introducing defects for a new patch.
 
 Are there tools to do the pre/post result comparing? I have recently
 setup a test environment for Jenkins jobs and am happy to give you (or
 any one else) access to it for testing (sorry, my setup is on the Red
 Hat internal network only).
We need to explore on that part, I am hoping that we should have such
kind of tools available. However if not at worst case can't we compare
them through our own scripts?
 
 Niels
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel
 

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Proposal: Using LLVM clang-analyzer in gluster development

2015-05-26 Thread Prasanna Kalever
Hi gluster team, 

Proposal:

Using Clang static analyzer tool for gluster project


About Clang:

From a very high level view, Clang has two features

1. Clang as a compiler
2. Clang as a code analyzer

The idea here is to use the second point, i.e. Clang as a code analyzer,
while gcc remains our default compiler.

The Clang Static Analyzer is a source code analysis tool that finds bugs in C,
C++, and Objective-C programs. Given the exact same code base, clang-analyzer
reported ~70 potential issues. clang is an awesome and free tool.

The reports from clang-analyzer are in HTML and there's a single file for each
issue and it generates a nice looking source code with embedded comments about
which flow that was followed all the way down to the problem.


Why Clang-Analyzer: (Advantages)

 1. Since it is an open source tool:
   
  * Available to all the developers
  * Easy Access, we can run the tool while we compile the code (say $ 
scan-build make)
  * No restrictions on Number of Runs per week/day/hour/min ..
  * Defects are identified before submitting a patch, so there is much
less chance of injecting defects into the project

 2. The HTML view of clang is very impressive, with all the source code
including clang-analyzer's comments, which lead directly to the defect's
line number.
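
For reference, a guarded sketch of the workflow (the report directory below
is an arbitrary choice, not a project convention):

```shell
# scan-build wraps the normal build and writes per-issue HTML reports.
if command -v scan-build >/dev/null 2>&1; then
    have_scan_build="yes"
    echo "scan-build available; run the build as: scan-build -o /tmp/clang-reports make"
else
    have_scan_build="no"
    echo "scan-build not installed; install clang (analyzer), then run: scan-build -o /tmp/clang-reports make"
fi
```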



I have attached sample clang results for the geo-replication module, run on
the latest 3.7+ glusterfs code; please find them attached.


Thanks for your time.


Best Regards, 
Prasanna Kumar K. 



scan-build-geo-repln.tar.bz2
Description: application/bzip-compressed-tar
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

