[Gluster-devel] No more unused-variables in the glusterfs sources!

2016-09-18 Thread Niels de Vos
After ~80 patches from Kaleb, the unused variables have been removed
from the glusterfs sources. The change that makes it possible to pass
the compile options through to all files is included below. It was a
major cleanup to place the generated XDR-files in the correct location.

While the 'unused variable' patches were getting merged, many new unused
variables were introduced elsewhere... It is also dangerous to include
#pragma statements that override compiler options in header files that
are included almost everywhere.

Thanks for your persistence, Kaleb!


- Forwarded message from "Niels de Vos (Code Review)"  -

> Date: Sun, 18 Sep 2016 09:34:38 -0700
> From: "Niels de Vos (Code Review)" 
> To: Niels de Vos , Kaleb KEITHLEY 
> CC: Gluster Build System , Kaushal M 
> , Milind Changire , Anoop C S 
> 
> Subject: Change in glusterfs[master]: build: out-of-tree builds generates 
> files in the wrong direc...
> 
> Niels de Vos has submitted this change and it was merged.
> 
> Change subject: build: out-of-tree builds generates files in the wrong 
> directory
> ..
> 
> 
> build: out-of-tree builds generates files in the wrong directory
> 
> And minor cleanup of a few of the Makefile.am files while we're
> at it.
> 
> Rewrite the make rules to do what xdrgen does. Now we can get rid
> of xdrgen.
> 
> Note 1. netbsd6's sed doesn't do -i. Why are we still running
> smoke tests on netbsd6 and not netbsd7? We barely support netbsd7
> as it is.
> 
> Note 2. Why is/was libgfxdr.so (.../rpc/xdr/src/...) linked with
> libglusterfs? A cut-and-paste mistake? It has no references to
> symbols in libglusterfs.
> 
> Note 3. "/#ifndef\|#define\|#endif/" (note the '\'s) is a _basic_
> regex that matches the same lines as the _extended_ regex
> "/#(ifndef|define|endif)/". To match the extended regex sed needs to
> be run with -r on Linux; with -E on *BSD. However NetBSD's and
> FreeBSD's sed helpfully also provide -r for compatibility. Using a
> basic regex avoids having to use a kludge in order to run sed with
> the correct option on OS X.
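The difference Note 3 describes can be checked quickly. A minimal sketch with a made-up header fragment, run against GNU sed on Linux (BSD sed's handling of `\|` in basic regexes may differ):

```shell
hdr='#ifndef FOO_H
#define FOO_H
#endif
int x;'

# Basic regex with backslash-escaped alternation: no -r/-E flag needed
printf '%s\n' "$hdr" | sed -n '/#ifndef\|#define\|#endif/p'

# Equivalent extended regex: needs -r on Linux, -E on *BSD and macOS
printf '%s\n' "$hdr" | sed -rn '/#(ifndef|define|endif)/p'
```

Both invocations print the same three preprocessor lines; only the flag handling differs.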
> 
> Note 4. Not copying the bit of xdrgen that inserts copyright/license
> boilerplate. AFAIK it's silly to pretend that machine generated
> files like these can be copyrighted or need license boilerplate.
> The XDR source files have their own copyright and license; and
> their copyrights are bound to be more up to date than old
> boilerplate inserted by a script. From what I've seen of other
> Open Source projects -- e.g. gcc and its C parser files generated
> by yacc and lex -- IIRC they don't bother to add copyright/license
> boilerplate to their generated files.
> 
> It appears that it's a long-standing feature of make (SysV, BSD,
> gnu) for out-of-tree builds to helpfully pretend that the source
> files it can find in the VPATH "exist" as if they are in the $cwd.
> rpcgen doesn't work well in this situation and generates files
> with "bad" #include directives.
> 
> E.g. if you `rpcgen ../../../../$srcdir/rpc/xdr/src/glusterfs3-xdr.x`,
> you get an #include directive in the generated .c file like this:
> 
>   ...
>   #include "../../../../$srcdir/rpc/xdr/src/glusterfs3-xdr.h"
>   ...
> 
> which (obviously) results in compile errors on out-of-tree build
> because the (generated) header file doesn't exist at that location.
> Compared to `rpcgen ./glusterfs3-xdr.x` where you get:
> 
>   ...
>   #include "glusterfs3-xdr.h"
>   ...
> 
> Which is what we need. We have to resort to some Stupid Make Tricks
> like the addition of various .PHONY targets to work around the VPATH
> "help".
> 
> Warning: When doing an in-tree build, -I$(top_builddir)/rpc/xdr/...
> looks exactly like -I$(top_srcdir)/rpc/xdr/...  Don't be fooled though.
> And don't delete the -I$(top_builddir)/rpc/xdr/... bits
> 
> Change-Id: Iba6ab96b2d0a17c5a7e9f92233993b318858b62e
> BUG: 1330604
> Signed-off-by: Kaleb S KEITHLEY 
> Reviewed-on: http://review.gluster.org/14085
> Tested-by: Niels de Vos 
> Smoke: Gluster Build System 
> NetBSD-regression: NetBSD Build System 
> CentOS-regression: Gluster Build System 
> Reviewed-by: Niels de Vos 
> ---
> M Makefile.am
> M api/src/Makefile.am
> D build-aux/xdrgen
> M cli/src/Makefile.am
> M contrib/umountd/Makefile.am
> M extras/LinuxRPM/Makefile.am
> M extras/geo-rep/Makefile.am
> M geo-replication/src/Makefile.am
> M glusterfsd/src/Makefile.am
> M heal/src/Makefile.am
> M libglusterfs/src/Makefile.am
> M libglusterfs/src/gfdb/Makefile.am
> M rpc/rpc-lib/src/Makefile.am
> M rpc/rpc-transport/rdma/src/Makefile.am
> M rpc/rpc-transport/socket/src/Makefile.am
> M rpc/xdr/src/Makefile.am
> M rpc/xdr/src/acl3-xdr.x
> M 

Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-18 Thread Luis Pabón
Hi Prasanna,
  I started the wiki page with the documentation on the API.  There
still needs to be more information added, and we still need to work
on the workflow, but at least it is a start.

Please take a look at the wiki:

https://github.com/heketi/heketi/wiki/Proposed-API:-Block-Storage

- Luis

- Original Message -
From: "Luis Pabón" 
To: "Humble Chirammal" 
Cc: "gluster-devel" , "Stephen Watt" 
, "Ramakrishna Yekulla" 
Sent: Tuesday, September 13, 2016 12:06:00 PM
Subject: Re: [Gluster-devel] [Heketi] Block store related API design discussion

Very good points.  Thanks Prasanna for putting this together.  I agree with
your comments in that Heketi is the high level abstraction API and it should
have an API similar to what is described by Prasanna.

I definitely do not think any File API should be available in Heketi,
because that is an implementation of the Block API.  The Heketi API should
be similar to something like OpenStack Cinder.

I think that the actual management of the Volumes used for Block storage
and the files in them should be all managed by Heketi.  How they are
actually created is still to be determined, but we could have Heketi
create them, or have helper programs do that.

We also need to document the exact workflow to enable a file in
a Gluster volume to be exposed as a block device.  This will help
determine where the creation of the file could take place.

We can capture our decisions from these discussions on the following
page:

https://github.com/heketi/heketi/wiki/Proposed-Changes

- Luis


- Original Message -
From: "Humble Chirammal" 
To: "Raghavendra Talur" 
Cc: "Prasanna Kalever" , "gluster-devel" 
, "Stephen Watt" , "Luis Pabon" 
, "Michael Adam" , "Ramakrishna Yekulla" 
, "Mohamed Ashiq Liyazudeen" 
Sent: Tuesday, September 13, 2016 2:23:39 AM
Subject: Re: [Gluster-devel] [Heketi] Block store related API design discussion





- Original Message -
| From: "Raghavendra Talur" 
| To: "Prasanna Kalever" 
| Cc: "gluster-devel" , "Stephen Watt" 
, "Luis Pabon" ,
| "Michael Adam" , "Humble Chirammal" , 
"Ramakrishna Yekulla"
| , "Mohamed Ashiq Liyazudeen" 
| Sent: Tuesday, September 13, 2016 11:08:44 AM
| Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
discussion
| 
| On Mon, Sep 12, 2016 at 11:30 PM, Prasanna Kalever 
| wrote:
| 
| > Hi all,
| >
| > This mail is open for discussion on gluster block store integration with
| > heketi and its REST API interface design constraints.
| >
| >
| >                          ___ Volume Request ...
| >                         |
| >                         |
| > PVC claim -> Heketi --->|
| >                         |
| >                         |                             __ BlockCreate
| >                         |                            |
| >                         |                            |__ BlockInfo
| >                         |                            |
| >                         |___ Block Request (APIS)--> |__ BlockResize
| >                                                      |
| >                                                      |__ BlockList
| >                                                      |
| >                                                      |__ BlockDelete
| >
| > Heketi will have a block API and a volume API. When a user submits a
| > Persistent Volume Claim, the Kubernetes provisioner, based on the storage
| > class (from the PVC), talks to heketi for storage; heketi in turn calls
| > the block or volume APIs based on the request.
| >
| 
| This is probably wrong. It won't be Heketi calling block or volume APIs. It
| would be Kubernetes calling block or volume API *of* Heketi.
| 
| 
| > With my limited understanding, heketi currently creates clusters from
| > provided nodes, creates volumes and hands them over to the user.
| > For block related APIs, it has to deal with files, right?
| >
| > Here is how the block APIs look in short-
| > Create: heketi has to create a file in the volume, export it as an iSCSI
| > target device and hand it over to the user.
| > Info: show block store information across all the clusters; connection
| > info, size etc.
| > Resize: resize the file in the volume, refresh connections from the
| > initiator side
| > List: list the connections
| > Delete: log out the connections and delete the file in the gluster volume
| >
| > Couple of questions:
| > 1. Should Block API have sub API's 
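For a sense of how the five operations above could surface over REST, a purely illustrative sketch (the endpoint paths and payloads are invented here; the Heketi wiki page holds the actual proposal):

```
# Hypothetical Heketi block endpoints -- illustration only
POST   /blockvolumes             {"size": 10, "name": "blk1"}   # BlockCreate
GET    /blockvolumes/blk1                                       # BlockInfo
POST   /blockvolumes/blk1/resize {"size": 20}                   # BlockResize
GET    /blockvolumes                                            # BlockList
DELETE /blockvolumes/blk1                                       # BlockDelete
```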

Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-18 Thread Vijay Bellur
On Tue, Sep 13, 2016 at 12:10 PM, Stephen Watt  wrote:

>
> Also, some important requirements to figure out/think about are:
>
> - How are you managing locking a block device against a container (or a
> host?)
> - Will your implementation work with OpenShift volume security for block
> devices (FSGroups + Recursive chown, chmod and SELinux labeling)
>
> If these aren't already figured out, would it be possible to create separate
> cards in your trello board so we can track the progress on the resolution of
> these two topics?
>

Tracking through Trello cards [1] and [2].

Thanks,
Vijay

[1] 
https://trello.com/c/LvfuP1cB/4-read-write-once-limit-a-block-device-to-a-single-container

[2] 
https://trello.com/c/Ne9NeU2y/43-review-openshift-volume-security-for-block-devices


> On Tue, Sep 13, 2016 at 11:06 AM, Luis Pabón  wrote:
>>
>> Very good points.  Thanks Prasanna for putting this together.  I agree
>> with your comments in that Heketi is the high level abstraction API and
>> it should have an API similar to what is described by Prasanna.
>>
>> I definitely do not think any File API should be available in Heketi,
>> because that is an implementation of the Block API.  The Heketi API should
>> be similar to something like OpenStack Cinder.
>>
>> I think that the actual management of the Volumes used for Block storage
>> and the files in them should be all managed by Heketi.  How they are
>> actually created is still to be determined, but we could have Heketi
>> create them, or have helper programs do that.
>>
>> We also need to document the exact workflow to enable a file in
>> a Gluster volume to be exposed as a block device.  This will help
>> determine where the creation of the file could take place.
>>
>> We can capture our decisions from these discussions in the
>> following page:
>>
>> https://github.com/heketi/heketi/wiki/Proposed-Changes
>>
>> - Luis
>>
>>
>> - Original Message -
>> From: "Humble Chirammal" 
>> To: "Raghavendra Talur" 
>> Cc: "Prasanna Kalever" , "gluster-devel"
>> , "Stephen Watt" , "Luis Pabon"
>> , "Michael Adam" , "Ramakrishna
>> Yekulla" , "Mohamed Ashiq Liyazudeen"
>> 
>> Sent: Tuesday, September 13, 2016 2:23:39 AM
>> Subject: Re: [Gluster-devel] [Heketi] Block store related API design
>> discussion
>>
>>
>>
>>
>>
>> - Original Message -
>> | From: "Raghavendra Talur" 
>> | To: "Prasanna Kalever" 
>> | Cc: "gluster-devel" , "Stephen Watt"
>> , "Luis Pabon" ,
>> | "Michael Adam" , "Humble Chirammal"
>> , "Ramakrishna Yekulla"
>> | , "Mohamed Ashiq Liyazudeen" 
>> | Sent: Tuesday, September 13, 2016 11:08:44 AM
>> | Subject: Re: [Gluster-devel] [Heketi] Block store related API design
>> discussion
>> |
>> | On Mon, Sep 12, 2016 at 11:30 PM, Prasanna Kalever 
>> | wrote:
>> |
>> | > Hi all,
>> | >
>> | > This mail is open for discussion on gluster block store integration
>> with
>> | > heketi and its REST API interface design constraints.
>> | >
>> | >
>> | >                          ___ Volume Request ...
>> | >                         |
>> | >                         |
>> | > PVC claim -> Heketi --->|
>> | >                         |
>> | >                         |                             __ BlockCreate
>> | >                         |                            |
>> | >                         |                            |__ BlockInfo
>> | >                         |                            |
>> | >                         |___ Block Request (APIS)--> |__ BlockResize
>> | >                                                      |
>> | >                                                      |__ BlockList
>> | >                                                      |
>> | >                                                      |__ BlockDelete
>> | >
>> | > Heketi will have a block API and a volume API. When a user submits a
>> | > Persistent Volume Claim, the Kubernetes provisioner, based on the
>> | > storage class (from the PVC), talks to heketi for storage; heketi in
>> | > turn calls the block or volume APIs based on the request.
>> | >
>> |
>> | This is probably wrong. It won't be Heketi calling block or volume APIs.
>> It
>> | would be Kubernetes calling block or volume API *of* Heketi.
>> |
>> |
>> | > With my limited understanding, heketi currently creates clusters from
>> | > provided nodes, creates volumes and hands them over to the user.
>> | > For block related APIs, it has to deal with files, right?
>> | >
>> | > Here is how block API's look like in 

[Gluster-devel] Upcall details for NLINK

2016-09-18 Thread Niels de Vos
Hey Soumya,

do we have a description of the different actions that we expect/advise
users of upcall to take? I'm looking at the flags that are listed in
libglusterfs/src/upcall-utils.h and api/src/glfs-handles.h and passed in
the glfs_callback_inode_arg structure from api/src/glfs-handles.h.

We have a NLINK flag, but that does not seem to carry the stat/iatt
attributes for the changed inode. It seems we send an upcall on file
removal that includes NLINK, and just after that we send another one
with FORGET.

This attachment in Bugzilla shows the behaviour:
  https://bugzilla.redhat.com/attachment.cgi?id=1202190

You'll need https://code.wireshark.org/review/17776 to decode the flags,
so I'll attach the tshark output to this email for your convenience.
  $ tshark -r /tmp/upcall_xid.1474190284.pcap.gz -V 'glusterfs.cbk'

Question: For the NLINK flag, should we not include the stat/iatt of the
modified inode? And should a FORGET only be sent when iatt->nlink is 0?
NLINK would then only be sent when a hardlink (but not the last one) is
removed.

Thanks,
Niels


signature.asc
Description: PGP signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Upcall details for NLINK

2016-09-18 Thread Niels de Vos
Duh, and now with the attachment. I'm going to get some coffee now.


On Mon, Sep 19, 2016 at 06:22:58AM +0200, Niels de Vos wrote:
> Hey Soumya,
> 
> do we have a description of the different actions that we expect/advise
> users of upcall to take? I'm looking at the flags that are listed in
> libglusterfs/src/upcall-utils.h and api/src/glfs-handles.h and passed in
> the glfs_callback_inode_arg structure from api/src/glfs-handles.h.
> 
> We have a NLINK flag, but that does not seem to carry the stat/iatt
> attributes for the changed inode. It seems we send an upcall on file
> removal that includes NLINK, and just after that we send another one
> with FORGET.
> 
> This attachment in Bugzilla shows the behaviour:
>   https://bugzilla.redhat.com/attachment.cgi?id=1202190
> 
> You'll need https://code.wireshark.org/review/17776 to decode the flags,
> so I'll attach the tshark output to this email for your convenience.
>   $ tshark -r /tmp/upcall_xid.1474190284.pcap.gz -V 'glusterfs.cbk'
> 
> Question: For the NLINK flag, should we not include the stat/iatt of the
> modified inode? And should a FORGET only be sent when iatt->nlink is 0?
> NLINK would then only be sent when a hardlink (but not the last one) is
> removed.
> 
> Thanks,
> Niels




Frame 37: 468 bytes on wire (3744 bits), 468 bytes captured (3744 bits)
Encapsulation type: Linux cooked-mode capture (25)
Arrival Time: Sep 18, 2016 11:18:16.284213000 CEST
[Time shift for this packet: 0.0 seconds]
Epoch Time: 1474190296.284213000 seconds
[Time delta from previous captured frame: 0.000289000 seconds]
[Time delta from previous displayed frame: 0.0 seconds]
[Time since reference or first frame: 9.241353000 seconds]
Frame Number: 37
Frame Length: 468 bytes (3744 bits)
Capture Length: 468 bytes (3744 bits)
[Frame is marked: False]
[Frame is ignored: False]
[Protocols in frame: sll:ethertype:ip:tcp:rpc:glusterfs.cbk]
Linux cooked capture
Packet type: Unicast to us (0)
Link-layer address type: 772
Link-layer address length: 6
Source: 00:00:00_00:00:00 (00:00:00:00:00:00)
Protocol: IPv4 (0x0800)
Internet Protocol Version 4, Src: 172.31.122.140, Dst: 172.31.122.140
0100  = Version: 4
 0101 = Header Length: 20 bytes (5)
Differentiated Services Field: 0x00 (DSCP: CS0, ECN: Not-ECT)
 00.. = Differentiated Services Codepoint: Default (0)
 ..00 = Explicit Congestion Notification: Not ECN-Capable Transport 
(0)
Total Length: 452
Identification: 0xf440 (62528)
Flags: 0x02 (Don't Fragment)
0...  = Reserved bit: Not set
.1..  = Don't fragment: Set
..0.  = More fragments: Not set
Fragment offset: 0
Time to live: 64
Protocol: TCP (6)
Header checksum: 0xf79b [validation disabled]
[Good: False]
[Bad: False]
Source: 172.31.122.140
Destination: 172.31.122.140
Transmission Control Protocol, Src Port: 49152, Dst Port: 49150, Seq: 1, Ack: 
1, Len: 400
Source Port: 49152
Destination Port: 49150
[Stream index: 4]
[TCP Segment Len: 400]
Sequence number: 1(relative sequence number)
[Next sequence number: 401(relative sequence number)]
Acknowledgment number: 1(relative ack number)
Header Length: 32 bytes
Flags: 0x018 (PSH, ACK)
000.   = Reserved: Not set
...0   = Nonce: Not set
 0...  = Congestion Window Reduced (CWR): Not set
 .0..  = ECN-Echo: Not set
 ..0.  = Urgent: Not set
 ...1  = Acknowledgment: Set
  1... = Push: Set
  .0.. = Reset: Not set
  ..0. = Syn: Not set
  ...0 = Fin: Not set
[TCP Flags: ···AP···]
Window size value: 475
[Calculated window size: 475]
[Window size scaling factor: -1 (unknown)]
Checksum: 0x4f0e [validation disabled]
[Good Checksum: False]
[Bad Checksum: False]
Urgent pointer: 0
Options: (12 bytes), No-Operation (NOP), No-Operation (NOP), Timestamps
No-Operation (NOP)
Type: 1
0...  = Copy on fragmentation: No
.00.  = Class: Control (0)
...0 0001 = Number: No-Operation (NOP) (1)
No-Operation (NOP)
Type: 1
0...  = Copy on fragmentation: No
.00.  = Class: Control (0)
...0 0001 = Number: No-Operation (NOP) (1)
Timestamps: TSval 1196377, TSecr 1180839
Kind: Time Stamp Option (8)
Length: 10
Timestamp value: 1196377
Timestamp echo reply: 1180839
[SEQ/ACK analysis]
[Bytes in flight: 400]
[Bytes sent since last 

Re: [Gluster-devel] Review request for 3.9 patches

2016-09-18 Thread Poornima Gurusiddaiah
Hi All,

There are 3 more patches that we need for enabling md-cache invalidation in 3.9.
Request your help with the reviews:

http://review.gluster.org/#/c/15378/   - afr: Implement IPC fop
http://review.gluster.org/#/c/15387/   - ec: Implement IPC fop
http://review.gluster.org/#/c/15398/   - mdc/upcall/afr: Reduce the window of 
stale read


Thanks,
Poornima

- Original Message -
> From: "Poornima Gurusiddaiah" 
> To: "Gluster Devel" , "Raghavendra Gowdappa" 
> , "Rajesh Joseph"
> , "Raghavendra Talur" , "Soumya 
> Koduri" , "Niels de Vos"
> , "Anoop Chirayath Manjiyil Sajan" 
> Sent: Tuesday, August 30, 2016 5:13:36 AM
> Subject: Re: [Gluster-devel] Review request for 3.9 patches
> 
> Hi,
> 
> A few more patches where I have addressed the review comments; could you
> please review these patches:
> 
> http://review.gluster.org/15002   md-cache: Register the list of xattrs with
> cache-invalidation
> http://review.gluster.org/15300   dht, md-cache, upcall: Add invalidation of
> IATT when the layout changes
> http://review.gluster.org/15324   md-cache: Process all the cache
> invalidation flags
> http://review.gluster.org/15313   upcall: Mark the clients as accessed even
> on readdir entries
> http://review.gluster.org/15193   io-stats: Add stats for upcall
> notifications
> 
> Regards,
> Poornima
> 
> - Original Message -
> 
> > From: "Poornima Gurusiddaiah" 
> > To: "Gluster Devel" , "Raghavendra Gowdappa"
> > , "Rajesh Joseph" , "Raghavendra
> > Talur" , "Soumya Koduri" , "Niels de
> > Vos" , "Anoop Chirayath Manjiyil Sajan"
> > 
> > Sent: Thursday, August 25, 2016 5:22:43 AM
> > Subject: Review request for 3.9 patches
> 
> > Hi,
> 
> > There are few patches that are part of the effort of integrating md-cache
> > with upcall.
> > Hope to take these patches for 3.9, it would be great if you can review
> > these
> > patches:
> 
> > upcall patches:
> > http://review.gluster.org/#/c/15313/
> > http://review.gluster.org/#/c/15301/
> 
> > md-cache patches:
> > http://review.gluster.org/#/c/15002/
> > http://review.gluster.org/#/c/15045/
> > http://review.gluster.org/#/c/15185/
> > http://review.gluster.org/#/c/15224/
> > http://review.gluster.org/#/c/15225/
> > http://review.gluster.org/#/c/15300/
> > http://review.gluster.org/#/c/15314/
> 
> > Thanks,
> > Poornima


Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-18 Thread Niels de Vos
On Tue, Sep 13, 2016 at 12:06:00PM -0400, Luis Pabón wrote:
> Very good points.  Thanks Prasanna for putting this together.  I agree with
> your comments in that Heketi is the high level abstraction API and it should
> have an API similar to what is described by Prasanna.
> 
> I definitely do not think any File API should be available in Heketi,
> because that is an implementation of the Block API.  The Heketi API should
> be similar to something like OpenStack Cinder.
> 
> I think that the actual management of the Volumes used for Block storage
> and the files in them should be all managed by Heketi.  How they are
> actually created is still to be determined, but we could have Heketi
> create them, or have helper programs do that.

Maybe a tool like qemu-img? If whatever iSCSI service is used understands
the format (at the very least 'raw'), you could get functionality like
snapshots pretty simply.
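A minimal sketch of that idea (the temp directory stands in for a mounted gluster volume; the qemu-img lines are left as comments since they assume qemu-img is installed):

```shell
# Stand-in directory for a FUSE-mounted gluster volume
vol=$(mktemp -d)

# A 'raw' backing file is just a sparse file on the volume
truncate -s 1G "$vol/blk1.img"
stat -c '%s' "$vol/blk1.img"    # prints 1073741824

# qemu-img would add richer formats and snapshots on the same storage:
#   qemu-img create -f qcow2 "$vol/blk1.qcow2" 1G
#   qemu-img snapshot -c clean-state "$vol/blk1.qcow2"
```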

Niels


> We also need to document the exact workflow to enable a file in
> a Gluster volume to be exposed as a block device.  This will help
> determine where the creation of the file could take place.
> 
> We can capture our decisions from these discussions in the
> following page:
> 
> https://github.com/heketi/heketi/wiki/Proposed-Changes
> 
> - Luis
> 
> 
> - Original Message -
> From: "Humble Chirammal" 
> To: "Raghavendra Talur" 
> Cc: "Prasanna Kalever" , "gluster-devel" 
> , "Stephen Watt" , "Luis Pabon" 
> , "Michael Adam" , "Ramakrishna Yekulla" 
> , "Mohamed Ashiq Liyazudeen" 
> Sent: Tuesday, September 13, 2016 2:23:39 AM
> Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
> discussion
> 
> 
> 
> 
> 
> - Original Message -
> | From: "Raghavendra Talur" 
> | To: "Prasanna Kalever" 
> | Cc: "gluster-devel" , "Stephen Watt" 
> , "Luis Pabon" ,
> | "Michael Adam" , "Humble Chirammal" 
> , "Ramakrishna Yekulla"
> | , "Mohamed Ashiq Liyazudeen" 
> | Sent: Tuesday, September 13, 2016 11:08:44 AM
> | Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
> discussion
> | 
> | On Mon, Sep 12, 2016 at 11:30 PM, Prasanna Kalever 
> | wrote:
> | 
> | > Hi all,
> | >
> | > This mail is open for discussion on gluster block store integration with
> | > heketi and its REST API interface design constraints.
> | >
> | >
> | >                          ___ Volume Request ...
> | >                         |
> | >                         |
> | > PVC claim -> Heketi --->|
> | >                         |
> | >                         |                             __ BlockCreate
> | >                         |                            |
> | >                         |                            |__ BlockInfo
> | >                         |                            |
> | >                         |___ Block Request (APIS)--> |__ BlockResize
> | >                                                      |
> | >                                                      |__ BlockList
> | >                                                      |
> | >                                                      |__ BlockDelete
> | >
> | > Heketi will have a block API and a volume API. When a user submits a
> | > Persistent Volume Claim, the Kubernetes provisioner, based on the storage
> | > class (from the PVC), talks to heketi for storage; heketi in turn calls
> | > the block or volume APIs based on the request.
> | >
> | 
> | This is probably wrong. It won't be Heketi calling block or volume APIs. It
> | would be Kubernetes calling block or volume API *of* Heketi.
> | 
> | 
> | > With my limited understanding, heketi currently creates clusters from
> | > provided nodes, creates volumes and hands them over to the user.
> | > For block related APIs, it has to deal with files, right?
> | >
> | > Here is how the block APIs look in short-
> | > Create: heketi has to create a file in the volume, export it as an iSCSI
> | > target device and hand it over to the user.
> | > Info: show block store information across all the clusters; connection
> | > info, size etc.
> | > Resize: resize the file in the volume, refresh connections from the
> | > initiator side
> | > List: list the connections
> | > Delete: log out the connections and delete the file in the gluster volume
> | >
> | > Couple of questions:
> | > 1. Should Block API have sub API's such as FileCreate, FileList,
> | > FileResize, File delete and etc then get it used in Block API as they
> | > mostly deal with files.
> | >
> | 
> | IMO, Heketi should not expose any File related API. It should only have
> | APIs