[ https://issues.apache.org/jira/browse/AMBARI-11366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Onischuk resolved AMBARI-11366.
--------------------------------------
    Resolution: Fixed

Committed to trunk

> Falcon Server Start failed
> --------------------------
>
>                 Key: AMBARI-11366
>                 URL: https://issues.apache.org/jira/browse/AMBARI-11366
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Andrew Onischuk
>            Assignee: Andrew Onischuk
>             Fix For: 2.1.0
>
>
> Latest trunk 2f218e7eadb1b7083835f2542f4fd599b4f85c2f, 2.3 stack
>     
>     
>     
>     2015-05-25 06:24:51,535 - 
> Directory['/var/lib/ambari-agent/data/tmp/AMBARI-artifacts/'] {'recursive': 
> True}
>     2015-05-25 06:24:51,537 - 
> File['/var/lib/ambari-agent/data/tmp/AMBARI-artifacts//jce_policy-8.zip'] 
> {'content': 
> DownloadSource('http://dlysnichenko-ru1-1.c.pramod-thangali.internal:8080/resources//jce_policy-8.zip')}
>     2015-05-25 06:24:51,537 - Not downloading the file from 
> http://dlysnichenko-ru1-1.c.pramod-thangali.internal:8080/resources//jce_policy-8.zip,
>  because /var/lib/ambari-agent/data/tmp/jce_policy-8.zip already exists
>     2015-05-25 06:24:51,537 - Group['spark'] {'ignore_failures': False}
>     2015-05-25 06:24:51,538 - Group['hadoop'] {'ignore_failures': False}
>     2015-05-25 06:24:51,538 - Group['users'] {'ignore_failures': False}
>     2015-05-25 06:24:51,539 - Group['knox'] {'ignore_failures': False}
>     2015-05-25 06:24:51,539 - User['hive'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,540 - User['storm'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,541 - User['zookeeper'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,542 - User['oozie'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'users']}
>     2015-05-25 06:24:51,543 - User['ams'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,544 - User['falcon'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,545 - User['tez'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'users']}
>     2015-05-25 06:24:51,546 - User['mahout'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,546 - User['spark'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,547 - User['ambari-qa'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'users']}
>     2015-05-25 06:24:51,548 - User['flume'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,549 - User['kafka'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,550 - User['hdfs'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,551 - User['sqoop'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,552 - User['yarn'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,553 - User['mapred'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,554 - User['hbase'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,555 - User['knox'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,556 - User['hcat'] {'gid': 'hadoop', 
> 'ignore_failures': False, 'groups': [u'hadoop']}
>     2015-05-25 06:24:51,556 - 
> File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': 
> StaticFile('changeToSecureUid.sh'), 'mode': 0555}
>     2015-05-25 06:24:51,558 - 
> Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
>  {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
>     2015-05-25 06:24:51,598 - Skipping 
> Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa 
> /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa']
>  due to not_if
>     2015-05-25 06:24:51,599 - Directory['/grid/0/hadoop/hbase'] {'owner': 
> 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
>     2015-05-25 06:24:51,600 - 
> File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': 
> StaticFile('changeToSecureUid.sh'), 'mode': 0555}
>     2015-05-25 06:24:51,601 - 
> Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase 
> /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/grid/0/hadoop/hbase'] 
> {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
>     2015-05-25 06:24:51,609 - Skipping 
> Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase 
> /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/grid/0/hadoop/hbase'] 
> due to not_if
>     2015-05-25 06:24:51,610 - Group['hdfs'] {'ignore_failures': False}
>     2015-05-25 06:24:51,610 - User['hdfs'] {'ignore_failures': False, 
> 'groups': [u'hadoop', u'hdfs']}
>     2015-05-25 06:24:51,611 - Directory['/etc/hadoop'] {'mode': 0755}
>     2015-05-25 06:24:51,637 - 
> File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': 
> InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
>     2015-05-25 06:24:51,651 - Execute['('setenforce', '0')'] {'not_if': '(! 
> which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 
> 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
>     2015-05-25 06:24:51,701 - Directory['/var/log/hadoop'] {'owner': 'root', 
> 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
>     2015-05-25 06:24:51,702 - Directory['/var/run/hadoop'] {'owner': 'root', 
> 'group': 'root', 'recursive': True, 'cd_access': 'a'}
>     2015-05-25 06:24:51,703 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 
> 'recursive': True, 'cd_access': 'a'}
>     2015-05-25 06:24:51,707 - 
> File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] 
> {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
>     2015-05-25 06:24:51,709 - 
> File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': 
> Template('health_check.j2'), 'owner': 'hdfs'}
>     2015-05-25 06:24:51,709 - 
> File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': 
> '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
>     2015-05-25 06:24:51,718 - 
> File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] 
> {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
>     2015-05-25 06:24:51,719 - 
> File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': 
> StaticFile('task-log4j.properties'), 'mode': 0755}
>     2015-05-25 06:24:51,720 - 
> File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 
> 'hdfs', 'group': 'hadoop'}
>     2015-05-25 06:24:51,919 - Directory['/var/run/falcon'] {'owner': 
> 'falcon', 'recursive': True}
>     2015-05-25 06:24:51,921 - Directory['/var/log/falcon'] {'owner': 
> 'falcon', 'recursive': True}
>     2015-05-25 06:24:51,921 - 
> Directory['/usr/hdp/current/falcon-server/webapp'] {'owner': 'falcon', 
> 'recursive': True}
>     2015-05-25 06:24:51,922 - Directory['/usr/hdp/current/falcon-server'] 
> {'owner': 'falcon', 'recursive': True}
>     2015-05-25 06:24:51,922 - Changing owner for 
> /usr/hdp/current/falcon-server from 0 to falcon
>     2015-05-25 06:24:51,922 - Directory['/etc/falcon'] {'recursive': True, 
> 'mode': 0755}
>     2015-05-25 06:24:51,922 - 
> Directory['/usr/hdp/current/falcon-server/conf'] {'owner': 'falcon', 
> 'recursive': True}
>     2015-05-25 06:24:51,922 - Changing owner for 
> /usr/hdp/current/falcon-server/conf from 0 to falcon
>     2015-05-25 06:24:51,928 - 
> File['/usr/hdp/current/falcon-server/conf/falcon-env.sh'] {'owner': 'falcon', 
> 'content': InlineTemplate(...)}
>     2015-05-25 06:24:51,928 - Writing 
> File['/usr/hdp/current/falcon-server/conf/falcon-env.sh'] because contents 
> don't match
>     2015-05-25 06:24:51,928 - Changing owner for 
> /usr/hdp/current/falcon-server/conf/falcon-env.sh from 0 to falcon
>     2015-05-25 06:24:51,932 - 
> File['/usr/hdp/current/falcon-server/conf/client.properties'] {'owner': 
> 'falcon', 'content': Template('client.properties.j2'), 'mode': 0644}
>     2015-05-25 06:24:51,933 - Writing 
> File['/usr/hdp/current/falcon-server/conf/client.properties'] because 
> contents don't match
>     2015-05-25 06:24:51,933 - Changing owner for 
> /usr/hdp/current/falcon-server/conf/client.properties from 0 to falcon
>     2015-05-25 06:24:51,934 - 
> PropertiesFile['/usr/hdp/current/falcon-server/conf/runtime.properties'] 
> {'owner': 'falcon', 'mode': 0644, 'properties': 
> {u'*.log.cleanup.frequency.hours.retention': u'minutes(1)', u'*.domain': 
> u'${falcon.app.type}', u'*.log.cleanup.frequency.months.retention': 
> u'months(3)', u'*.log.cleanup.frequency.minutes.retention': u'hours(6)', 
> u'*.log.cleanup.frequency.days.retention': u'days(7)'}}
>     2015-05-25 06:24:51,940 - Generating properties file: 
> /usr/hdp/current/falcon-server/conf/runtime.properties
>     2015-05-25 06:24:51,940 - 
> File['/usr/hdp/current/falcon-server/conf/runtime.properties'] {'owner': 
> 'falcon', 'content': InlineTemplate(...), 'group': None, 'mode': 0644}
>     2015-05-25 06:24:51,944 - Writing 
> File['/usr/hdp/current/falcon-server/conf/runtime.properties'] because 
> contents don't match
>     2015-05-25 06:24:51,945 - Changing owner for 
> /usr/hdp/current/falcon-server/conf/runtime.properties from 0 to falcon
>     2015-05-25 06:24:51,945 - 
> PropertiesFile['/usr/hdp/current/falcon-server/conf/startup.properties'] 
> {'owner': 'falcon', 'mode': 0644, 'properties': ...}
>     2015-05-25 06:24:51,949 - Generating properties file: 
> /usr/hdp/current/falcon-server/conf/startup.properties
>     2015-05-25 06:24:51,950 - 
> File['/usr/hdp/current/falcon-server/conf/startup.properties'] {'owner': 
> 'falcon', 'content': InlineTemplate(...), 'group': None, 'mode': 0644}
>     2015-05-25 06:24:51,978 - Writing 
> File['/usr/hdp/current/falcon-server/conf/startup.properties'] because 
> contents don't match
>     2015-05-25 06:24:51,978 - Changing owner for 
> /usr/hdp/current/falcon-server/conf/startup.properties from 0 to falcon
>     2015-05-25 06:24:51,978 - 
> Directory['/grid/0/hadoop/falcon/data/lineage/graphdb'] {'owner': 'falcon', 
> 'recursive': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
>     2015-05-25 06:24:51,979 - Creating directory 
> Directory['/grid/0/hadoop/falcon/data/lineage/graphdb']
>     2015-05-25 06:24:51,979 - Changing owner for 
> /grid/0/hadoop/falcon/data/lineage/graphdb from 0 to falcon
>     2015-05-25 06:24:51,979 - Changing group for 
> /grid/0/hadoop/falcon/data/lineage/graphdb from 0 to hadoop
>     2015-05-25 06:24:51,980 - Changing permission for 
> /grid/0/hadoop/falcon/data/lineage/graphdb from 755 to 775
>     2015-05-25 06:24:51,980 - Directory['/grid/0/hadoop/falcon/data/lineage'] 
> {'owner': 'falcon', 'recursive': True, 'group': 'hadoop', 'mode': 0775, 
> 'cd_access': 'a'}
>     2015-05-25 06:24:51,980 - Changing owner for 
> /grid/0/hadoop/falcon/data/lineage from 0 to falcon
>     2015-05-25 06:24:51,980 - Changing group for 
> /grid/0/hadoop/falcon/data/lineage from 0 to hadoop
>     2015-05-25 06:24:51,981 - Changing permission for 
> /grid/0/hadoop/falcon/data/lineage from 755 to 775
>     2015-05-25 06:24:51,981 - Directory['/hadoop/falcon/store'] {'owner': 
> 'falcon', 'recursive': True}
>     2015-05-25 06:24:51,981 - Creating directory 
> Directory['/hadoop/falcon/store']
>     2015-05-25 06:24:51,982 - Changing owner for /hadoop/falcon/store from 0 
> to falcon
>     2015-05-25 06:24:51,982 - HdfsResource['/apps/falcon'] 
> {'security_enabled': False, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 
> 'hdfs://dlysnichenko-ru1-1.c.pramod-thangali.internal:8020', 'hdfs_site': 
> ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 
> 'owner': 'falcon', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 
> 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0777}
>     2015-05-25 06:24:52,003 - checked_call['curl -L -w '%{http_code}' -X GET 
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/falcon?op=GETFILESTATUS&user.name=hdfs'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,052 - checked_call returned (0, 
> '{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  does not exist: /apps/falcon"}}404')
>     2015-05-25 06:24:52,055 - checked_call['curl -L -w '%{http_code}' -X PUT 
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/falcon?op=MKDIRS&user.name=hdfs'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,100 - checked_call returned (0, '{"boolean":true}200')
>     2015-05-25 06:24:52,102 - checked_call['curl -L -w '%{http_code}' -X PUT 
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/falcon?op=SETPERMISSION&user.name=hdfs&permission=777'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,141 - checked_call returned (0, '200')
>     2015-05-25 06:24:52,143 - checked_call['curl -L -w '%{http_code}' -X PUT 
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/falcon?op=SETOWNER&user.name=hdfs&owner=falcon&group='']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,186 - checked_call returned (0, '200')
>     2015-05-25 06:24:52,186 - Directory['/hadoop/falcon/store'] {'owner': 
> 'falcon', 'recursive': True}
>     2015-05-25 06:24:52,187 - HdfsResource['/apps/data-mirroring'] 
> {'security_enabled': False, 'hadoop_bin_dir': 
> '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'source': 
> '/usr/hdp/current/falcon-server/data-mirroring', 'default_fs': 
> 'hdfs://dlysnichenko-ru1-1.c.pramod-thangali.internal:8020', 'user': 'hdfs', 
> 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 
> 'recursive_chmod': True, 'recursive_chown': True, 'owner': 'hdfs', 'group': 
> 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 
> 'directory', 'action': ['create_on_execute'], 'mode': 0770}
>     2015-05-25 06:24:52,190 - checked_call['curl -L -w '%{http_code}' -X GET 
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/data-mirroring?op=GETFILESTATUS&user.name=hdfs'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,236 - checked_call returned (0, 
> '{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  does not exist: /apps/data-mirroring"}}404')
>     2015-05-25 06:24:52,239 - checked_call['curl -L -w '%{http_code}' -X PUT 
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/data-mirroring?op=MKDIRS&user.name=hdfs'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,282 - checked_call returned (0, '{"boolean":true}200')
>     2015-05-25 06:24:52,283 - Creating DFS directory 
> /apps/data-mirroring/workflows
>     2015-05-25 06:24:52,285 - checked_call['curl -L -w '%{http_code}' -X PUT 
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/data-mirroring/workflows?op=MKDIRS&user.name=hdfs'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,332 - checked_call returned (0, '{"boolean":true}200')
>     2015-05-25 06:24:52,334 - checked_call['curl -L -w '%{http_code}' -X GET 
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/data-mirroring/workflows/hive-disaster-recovery-workflow.xml?op=GETFILESTATUS&user.name=hdfs'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,391 - checked_call returned (0, 
> '{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  does not exist: 
> /apps/data-mirroring/workflows/hive-disaster-recovery-workflow.xml"}}404')
>     2015-05-25 06:24:52,392 - Creating new file 
> /apps/data-mirroring/workflows/hive-disaster-recovery-workflow.xml in DFS
>     2015-05-25 06:24:52,394 - checked_call['curl -L -w '%{http_code}' -X PUT 
> -T 
> /usr/hdp/current/falcon-server/data-mirroring/workflows/hive-disaster-recovery-workflow.xml
>  
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/data-mirroring/workflows/hive-disaster-recovery-workflow.xml?op=CREATE&user.name=hdfs&overwrite=True'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,487 - checked_call returned (0, '201')
>     2015-05-25 06:24:52,490 - checked_call['curl -L -w '%{http_code}' -X GET 
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/data-mirroring/workflows/hdfs-replication-workflow.xml?op=GETFILESTATUS&user.name=hdfs'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,530 - checked_call returned (0, 
> '{"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  does not exist: 
> /apps/data-mirroring/workflows/hdfs-replication-workflow.xml"}}404')
>     2015-05-25 06:24:52,530 - Creating new file 
> /apps/data-mirroring/workflows/hdfs-replication-workflow.xml in DFS
>     2015-05-25 06:24:52,532 - checked_call['curl -L -w '%{http_code}' -X PUT 
> -T 
> /usr/hdp/current/falcon-server/data-mirroring/workflows/hdfs-replication-workflow.xml
>  
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/data-mirroring/workflows/hdfs-replication-workflow.xml?op=CREATE&user.name=hdfs&overwrite=True'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,601 - checked_call returned (0, '201')
>     2015-05-25 06:24:52,603 - checked_call['curl -L -w '%{http_code}' -X PUT 
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/data-mirroring?op=SETPERMISSION&user.name=hdfs&permission=770'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,646 - checked_call returned (0, '200')
>     2015-05-25 06:24:52,647 - checked_call['curl -L -w '%{http_code}' -X GET 
> 'http://dlysnichenko-ru1-1.c.pramod-thangali.internal:50070/webhdfs/v1/apps/data-mirroring?op=LISTSTATUS&user.name=hdfs'']
>  {'logoutput': None, 'user': 'hdfs', 'quiet': False}
>     2015-05-25 06:24:52,684 - checked_call returned (0, 
> '{"FileStatuses":{"FileStatus":[\r\n{"accessTime":0,"blockSize":0,"childrenNum":2,"fileId":18291,"group":"hdfs","length":0,"modificationTime":1432535092577,"owner":"hdfs","pathSuffix":"workflows","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}\r\n]}}\r\n200')
>     
>     Traceback (most recent call last):
>       File 
> "/var/lib/ambari-agent/cache/common-services/FALCON/0.5.0.2.1/package/scripts/falcon_server.py",
>  line 155, in <module>
>         FalconServer().execute()
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 214, in execute
>         method(env)
>       File 
> "/var/lib/ambari-agent/cache/common-services/FALCON/0.5.0.2.1/package/scripts/falcon_server.py",
>  line 42, in start
>         self.configure(env)
>       File 
> "/var/lib/ambari-agent/cache/common-services/FALCON/0.5.0.2.1/package/scripts/falcon_server.py",
>  line 37, in configure
>         falcon('server', action='config')
>       File 
> "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, 
> in thunk
>         return fn(*args, **kwargs)
>       File 
> "/var/lib/ambari-agent/cache/common-services/FALCON/0.5.0.2.1/package/scripts/falcon.py",
>  line 132, in falcon
>         source=params.local_data_mirroring_dir
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 
> 148, in __init__
>         self.env.run()
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 152, in run
>         self.run_action(resource, action)
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 118, in run_action
>         provider_action()
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 377, in action_create_on_execute
>         self.action_delayed("create")
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 374, in action_delayed
>         self.get_hdfs_resource_executor().action_delayed(action_name, self)
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 231, in action_delayed
>         self._set_mode(self.target_status)
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 334, in _set_mode
>         self._fill_directories_list(self.main_resource.resource.target, 
> results)
>       File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py",
>  line 356, in _fill_directories_list
>         results.add(new_path)
>     AttributeError: 'list' object has no attribute 'add'
>     
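For reference, the error at the bottom of the traceback is the standard Python behaviour when a set-style method is called on a list: `_fill_directories_list` in `hdfs_resource.py` receives `results` as a plain list but calls `.add()`, which only exists on `set`. A minimal standalone reproduction (a simplified illustration, not the actual Ambari source) is:

```python
# Reproduce the AttributeError from the traceback above:
# calling the set method .add() on a list object.
results = []
try:
    results.add("/apps/data-mirroring")  # set API called on a list
except AttributeError as e:
    print(e)  # 'list' object has no attribute 'add'

# The list equivalent succeeds:
results.append("/apps/data-mirroring")
print(results)  # ['/apps/data-mirroring']
```

The fix committed to trunk would therefore amount to either using `append()` on the list or making `results` a `set` so that `add()` is valid.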



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
