Just to follow up: I've been sent an efix today which will hopefully resolve 
this (and also the other LROC bugs), so I'm guessing the fix will make it out 
generally in 4.1.1-02.

Will be testing the fix out over the next few days.

Simon

From: Dean Hildebrand <[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Thursday, 27 August 2015 20:24
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] Using HAWC (write cache)


Hi Simon,

This appears to be a mistake, as using clients for the System.log pool should 
not require a server license (it should be similar to LROC)... thanks for 
opening the PMR...
Dean Hildebrand
IBM Almaden Research Center



From: "Simon Thompson (Research Computing - IT Services)" 
<[email protected]<mailto:[email protected]>>
To: gpfsug main discussion list 
<[email protected]<mailto:[email protected]>>
Date: 08/27/2015 12:42 AM
Subject: Re: [gpfsug-discuss] Using HAWC (write cache)
Sent by: [email protected]

________________________________



Hi Dean,

Thanks. I wasn't sure if the system.log disks on clients in the remote cluster 
would be "valid", as they are essentially NSDs in a different cluster from the 
storage cluster, but it sounds like they are.

Now if I can just get it working ... Looking in mmfsfuncs:

  if [[ $diskUsage != "localCache" ]]
  then
    combinedList=${primaryAdminNodeList},${backupAdminNodeList}
    IFS=","
    for server in $combinedList
    do
      IFS="$IFS_sv"
      [[ -z $server ]] && continue

      $grep -q -e "^${server}$" $serverLicensedNodes > /dev/null 2>&1
      if [[ $? -ne 0 ]]
      then
        # The node does not have a server license.
        printErrorMsg 118 $mmcmd $server
        return 1
      fi
      IFS=","
    done  # end for server in ${primaryAdminNodeList},${backupAdminNodeList}
    IFS="$IFS_sv"
  fi  # end of if [[ $diskUsage != "localCache" ]]

So unless the NSD is defined with usage=localCache, a server license is 
required when you try to create the NSD; but a localCache NSD cannot have a 
storage pool assigned.
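
In case it helps anyone else reproduce this: the list mmfsfuncs greps
($serverLicensedNodes) is just the set of nodes designated as having a server
license, which I think you can check with something like:

  # Show the per-node license designation; the client nodes I want to
  # put the system.log NSDs on show up as "client" here, which is what
  # trips the check above.
  mmlslicense -L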

I've opened a PMR with IBM.

Simon

From: Dean Hildebrand <[email protected]>
Reply-To: gpfsug main discussion list <[email protected]>
Date: Thursday, 27 August 2015 01:22
To: gpfsug main discussion list <[email protected]>
Subject: Re: [gpfsug-discuss] Using HAWC (write cache)

Hi Simon,

HAWC leverages the System.log pool (or the metadata pool if no dedicated log 
pool is defined)... so it's independent of local or multi-cluster modes... 
small writes will be 'hardened' wherever those pools are defined for the file 
system.
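
For completeness, a rough sketch of how the hardening itself is switched on
once those pools exist, going from the 4.1.1 HAWC docs ("gpfs0" is just a
placeholder file system name):

   # Writes at or below the threshold are logged in the system.log
   # (or metadata) pool before being destaged to the data pool;
   # setting the threshold back to 0 should turn it off again.
   mmchfs gpfs0 --write-cache-threshold 64K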

Dean Hildebrand
IBM Master Inventor and Manager | Cloud Storage Software
IBM Almaden Research Center



From: "Simon Thompson (Research Computing - IT Services)" 
<[email protected]<mailto:[email protected]>>
To: gpfsug main discussion list 
<[email protected]<mailto:[email protected]>>
Date: 08/26/2015 05:58 AM
Subject: Re: [gpfsug-discuss] Using HAWC (write cache)
Sent by: [email protected]

________________________________



Oh, and one other question about HAWC: does it work when running
multi-cluster? I.e. can clients in a remote cluster have HAWC devices?

Simon

On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)"
<[email protected]> wrote:

>Hi,
>
>I was wondering if anyone knows how to configure HAWC which was added in
>the 4.1.1 release (this is the hardened write cache)
>(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectrum.scale.v4r11.adv.doc/bl1adv_hawc_using.htm)
>
>In particular I'm interested in running it on my client systems which have
>SSDs fitted for LROC. I was planning to use a small amount of the LROC SSD
>for HAWC on our hypervisors, as it buffers small IO writes, which sounds
>like what we want for running VMs that are doing small IO updates to the
>VM disk images stored on GPFS.
>
>The docs are a little lacking in detail on how you create NSD disks on
>clients; I've tried using:
>%nsd: device=sdb2
>  nsd=cl0901u17_hawc_sdb2
>  servers=cl0901u17
>  pool=system.log
>  failureGroup=90117
>
>(and also with usage=metadataOnly), however mmcrnsd -F tells me:
>"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license
>designation"
>
>
>Which is correct, as it's a client system, though HAWC is supposed to be
>able to run on client systems. I know for LROC you have to set
>usage=localCache; is there a new value for using HAWC?
>
>I'm also a little unclear about failureGroups for this. The docs suggest
>setting the HAWC data to be replicated for client systems, so I guess that
>means putting each client node into its own failure group?
>
>Thanks
>
>Simon
>


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
