Hi, while running "Run Service Check" for HAWQ within Ambari, you may get errors such as the following:
1. 20161207:14:48:29:182323 gpcheck:hlamaster02:gpadmin-[ERROR]:-host(hlamaster01): HDFS configuration: expected 'dfs.namenode.accesstime.precision' for '-1', actual value is '0'

   '0' is the recommended value in the HAWQ documentation. Should the corresponding value in gpcheck.cnf be updated to match? (A rough sketch of what I mean is below my signature.)

2. 20161207:14:48:29:182323 gpcheck:hlamaster02:gpadmin-[ERROR]:-host(hlamaster01): HDFS configuration missing: 'dfs.client.enable.read.from.local' needs to be set to 'true'

   There is a similar parameter in HDFS, 'dfs.client.read.shortcircuit'. Is 'dfs.client.enable.read.from.local' actually needed? If yes, is that because libhdfs3 is used instead of the standard Apache HDFS client library? If yes again, adding a <description> segment for it both in the documentation and in gpcheck.cnf would be a good idea.

3. 20161207:14:48:29:182323 gpcheck:hlamaster02:gpadmin-[ERROR]:-host(hlamaster01): HDFS configuration missing: 'ipc.server.handler.queue.size' needs to be set to '3300'

   I am not able to find any information about 'ipc.server.handler.queue.size' in the HAWQ 2.0.0 documentation. Is this a deprecated parameter inherited from Pivotal HD 2.1.0?
   http://pivotalhd-210.docs.pivotal.io/doc/2010/PreparingtoInstallHAWQ.html

Best regards,
Parker Han
Pivotal CSD - GRC
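
P.S. To make the questions above concrete, here is what gpcheck currently appears to expect in hdfs-site.xml, with the values copied straight from the error messages above. This is only a sketch of what the check enforces, not a recommendation to apply these values:

    <!-- values taken from the gpcheck errors above; verify against the
         HAWQ documentation for your release before applying -->
    <property>
      <name>dfs.namenode.accesstime.precision</name>
      <value>-1</value>
    </property>
    <property>
      <name>dfs.client.enable.read.from.local</name>
      <value>true</value>
    </property>
    <property>
      <name>ipc.server.handler.queue.size</name>
      <value>3300</value>
    </property>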

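And here is roughly how I would expect the corresponding entries in gpcheck.cnf to look if the value from item 1 were updated as asked. The section name, key/value syntax, and comments here are my guesses based on the error output, not copied from the shipped file:

    [hdfs]
    # '0' matches the value recommended in the HAWQ documentation (item 1)
    dfs.namenode.accesstime.precision = 0
    # possibly required by libhdfs3? (item 2) -- a description here would help
    dfs.client.enable.read.from.local = true
    # not mentioned in the HAWQ 2.0.0 documentation (item 3)
    ipc.server.handler.queue.size = 3300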