Author: buildbot
Date: Fri Apr 12 10:21:41 2013
New Revision: 858217

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/shared-file-system-master-slave.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: 
websites/production/activemq/content/shared-file-system-master-slave.html
==============================================================================
--- websites/production/activemq/content/shared-file-system-master-slave.html 
(original)
+++ websites/production/activemq/content/shared-file-system-master-slave.html 
Fri Apr 12 10:21:41 2013
@@ -82,7 +82,7 @@
 
 <p>From <a shape="rect" class="external-link" 
href="http://sources.redhat.com/cluster/faq.html#gfs_vs_ocfs2" 
rel="nofollow">http://sources.redhat.com/cluster/faq.html#gfs_vs_ocfs2</a> :<br 
clear="none">
 OCFS2: No cluster-aware flock or POSIX locks<br clear="none">
-GFS: Cluster-wide flocks and POSIX locks</p></td></tr></table></div>
+GFS: fully supports cluster-wide flocks and POSIX 
locks.</p></td></tr></table></div>
 
 <div class="panelMacro"><table class="noteMacro"><colgroup span="1"><col 
span="1" width="24"><col span="1"></colgroup><tr><td colspan="1" rowspan="1" 
valign="top"><img align="middle" 
src="https://cwiki.apache.org/confluence/images/icons/emoticons/warning.gif" 
width="16" height="16" alt="" border="0"></td><td colspan="1" 
rowspan="1"><b>NFSv3 Warning</b><br clear="none">In the event of an abnormal 
NFSv3 client termination (i.e., the ActiveMQ master broker), the NFSv3 server 
will not time out the lock held by that client. This effectively renders the 
ActiveMQ data directory inaccessible, because the ActiveMQ slave broker cannot 
acquire the lock and therefore cannot start up. The only remedy with NFSv3 is 
to restart all ActiveMQ instances to reset everything. 
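For context, the page being patched describes brokers competing for a file lock on a shared store. A minimal sketch of such a broker configuration, assuming the standard kahaDB persistence adapter (the directory path and broker name below are hypothetical):

```xml
<!-- Sketch: every broker (master and slaves) points its persistence store
     at the same shared directory on a cluster-aware file system (e.g. GFS).
     The first broker to acquire the file lock becomes master; the others
     block on the lock and take over only if the master releases it. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <persistenceAdapter>
    <!-- Hypothetical mount point; all brokers must see this same path. -->
    <kahaDB directory="/sharedFileSystem/sharedBrokerData"/>
  </persistenceAdapter>
</broker>
```

This lock-based failover is exactly why the NFSv3 warning above matters: if the server never releases a dead master's lock, no slave can ever acquire it.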
 

