Hi,

On 26.05.2010 13:10, Dejan Muhamedagic wrote:
> On Wed, May 26, 2010 at 10:32:23AM +0200, Thomas Bätzler wrote:
>> Hi,
>>
>> On 25.05.2010 16:05, Dejan Muhamedagic wrote:
>>> On Fri, May 21, 2010 at 03:38:53PM +0200, Thomas Bätzler wrote:
>>>> [exporting nfs share for *]
>>>> This causes a spurious "Export not reported by showmount -e" error in
>>>> exportfs_start().
>>>>
>>>> I fixed this for myself by shortening
>>>>
>>>>    grep -E "^${OCF_RESKEY_directory}[[:space:]]*${OCF_RESKEY_clientspec}$"
>>>>
>>>> to
>>>>
>>>>    grep -E "^${OCF_RESKEY_directory}[[:space:]]"
>>>>
>>>> The only drawback I see is that it could cause problems if I wanted
>>>> to have multiple exportfs resources for the same filesystem. On the
>>>> other hand, the existing pattern would fail in that case, too, since
>>>> showmount seems to list multiple client specs separated by commas on
>>>> a single line.
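>>>>
>>>> For illustration (hypothetical server name and path), a showmount -e
>>>> entry with two client specs might look like this, all on one line:
>>>>
>>>>    Export list for nfsserver:
>>>>    /srv/nfs 10.0.0.1,10.0.0.2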

>>> True. Perhaps something like this could work:
>>>
>>> for client in $(showmount -e | grep "^${OCF_RESKEY_directory}[[:space:]]" | \
>>>         awk '{print $2}' | sed 's/,/ /g')
>>> do
>>>         [ "$client" = "$OCF_RESKEY_clientspec" ] &&
>>>                 return 0
>>> done
>>> return 1
>>>
>>> Can you test this?

>> I figured that since you're using awk anyway, I might as well go
>> ahead and put all of the string manipulation into an awk script -
>> please see the attached patch against tip/changeset 1813.
>>
>> I've tested this version of exportfs on my testbed, and it seemed to
>> work all right so far.
>>
>> There are still two corner cases where this doesn't work properly, though:
>> - shares containing whitespace in their path names and
>> - exporting to * and another client spec.
>>
>> As for the whitespace-in-name issue, maybe somebody could clarify if
>> this is allowed in the first place.

> Or maybe people could just use '_' or '-' instead of spaces :)

>> At least on Debian "Lenny", all of my attempts to create such a share
>> using exportfs were met with the message "exportfs: Warning: <path>
>> does not support NFS export."
>>
>> The other issue stems from the fact that showmount will not list
>> individual client specs for an export if one of them is "*" -
>> instead it'll just say "(everyone)". So checking whether the export
>> succeeded will not work in this case.
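>>
>> For example (hypothetical path), after "exportfs '*:/srv/nfs'" the
>> showmount -e output reads:
>>
>>    Export list for nfsserver:
>>    /srv/nfs (everyone)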

> So, why not use "(everyone)" as a wildcard:

>> Cheers,
>> Thomas
>>
>> --- a/heartbeat/exportfs        Tue May 25 16:13:19 2010 +0200
>> +++ b/heartbeat/exportfs        Wed May 26 09:53:29 2010 +0200
>> @@ -139,9 +139,19 @@
>>
>>          RETRIES=0
>>          while [ 1 ]; do
>> -               showmount -e | grep -E "^${OCF_RESKEY_directory}[[:space:]]*${OCF_RESKEY_clientspec}$"
>> +               showmount -e | awk -v export="$OCF_RESKEY_directory" -v client="$OCF_RESKEY_clientspec" '

> Instead of these two lines:
>
>> +                       {
>> +                               if( $1 == export ){
>
> I'd put this more awk-like: $1 == export { ...
>
> And here to add: if( $2 == "(everyone)" ) exit 1;

I've modified the patch according to your suggestion (now it's really awk-some ;-)) and verified that it still works. Please find enclosed the revised patch.

Cheers,
Thomas
diff -r 366c346a2514 heartbeat/exportfs
--- a/heartbeat/exportfs        Tue May 25 16:13:19 2010 +0200
+++ b/heartbeat/exportfs        Wed May 26 14:32:21 2010 +0200
@@ -139,9 +139,16 @@
 
        RETRIES=0
        while [ 1 ]; do
-               showmount -e | grep -E "^${OCF_RESKEY_directory}[[:space:]]*${OCF_RESKEY_clientspec}$"
+               showmount -e | awk -v export="$OCF_RESKEY_directory" -v client="$OCF_RESKEY_clientspec" '
+                       $1 == export {
+                               if( $2 == "(everyone)" ) exit 1
+                               split($2,clients,",")
+                               for ( i in clients ){
+                                       if( clients[i] == client ) exit 1
+                               }
+                       }'
                rc=$?
-               if [ $rc -eq 0 ]; then
+               if [ $rc -eq 1 ]; then
                        break
                fi
                RETRIES=`expr ${RETRIES} + 1`
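
P.S.: To spell out the exit-code logic: the awk script exits 1 as soon as
it finds the export for the given client (or sees "(everyone)"), and falls
off the end with exit status 0 otherwise, which is why the shell test is
inverted to [ $rc -eq 1 ]. You can try it standalone like this (directory
and client spec are made up):

   showmount -e | awk -v export="/srv/nfs" -v client="10.0.0.0/24" '
        $1 == export {
                if( $2 == "(everyone)" ) exit 1
                split($2,clients,",")
                for ( i in clients ){
                        if( clients[i] == client ) exit 1
                }
        }'
   [ $? -eq 1 ] && echo "export found"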