http://ambariserver:8080/api/v1/clusters/clustername/services/OOZIE_SERVER
HTTP/1.1 404 Not Found
Set-Cookie: AMBARISESSIONID=1xa5af3najngmuwjytkx6n6rt;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 205
Server: Jetty(7.6.7.v20120910)
{
"status"
Ambari does not have any automated way of doing it. The process is a mix of
manual steps and API calls.
You can refer to
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_using_Ambari_bo
ok/content/ambari-chap9_2x.html for the details of this process.
-Sumit
From: JOAQUIN GUANTER GO
Can you check if you can download the hdp.repo?
wget
http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.3.0/hdp.repo
I just tried and I can download it.
If you can download it, then on one of your agent hosts, verify that
/etc/zypp/repos.d/HDP.repo has the same base url value for
nglia installed,
but other components which I did not install manually failed.
So I think Ambari skipped the component download and install steps;
how do I enable those two steps?
2014/1/13 Sumit Mohanty
> Can you do a "zypper clean" on all the hosts?
>
> Next time you try to
the repo base url problem.
have you used anyone of them ?
2014/1/13 Sumit Mohanty
> Assuming the cluster is only installed and no services are started, you can
> reset and reinstall the whole cluster. The reset does not uninstall the
> already installed packages.
>
> To reset:
&g
.x/updates/1.3.3.0/ ?
Sorry for the late response.
I use SUSE and can download RPMs and components manually using "zypper install
xxx", but Ambari cannot download them automatically.
2014/1/14 Sumit Mohanty
> Just curious – did you intentionally try the centos6 repo url -
> http://public-
Upgrade should not wipe out the database. The only command that cleans up the
database is "ambari-server reset".
Can you share the document link you used for upgrade?
Is the DB Postgres?
-Sumit
On Fri, Jan 24, 2014 at 10:16 AM, Vinod Kumar Vavilapalli <
vino...@hortonworks.com> wrote:
> +user@amb
Meghavi,
Looks like you are using RHEL and it may require registration to download
packages. Try searching "This system is not registered with RHN" on Google
and there are a few articles on this topic.
You can use command line to manually call "yum install lzo" and see if that
works.
-Sumit
On
Are you using APIs to install services? If so you can install only specific
components.
-Sumit
On Fri, Feb 7, 2014 at 1:01 PM, Anisha Agarwal wrote:
> Hi,
>
> We have a custom service, containing multiple components which we add to
> the ambari cluster.
> I wanted to have the option of not be
The state of the service is now calculated based on the actual states of
the *master host components* that belong to the service. E.g. if a master
component (NAMENODE for HDFS) is INSTALLED then the state of the service
will be INSTALLED.
What is happening in the case of client only services is th
Hi Aaron,
Ambari does not support automatic upgrade of the stack yet. What you see
are remnants of the feature that we started working on but it is not
complete.
Upgrade, for now, is manual and there are documented steps on how to do it.
-Sumit
On Tue, Feb 25, 2014 at 11:14 AM, Aaron Cody wro
ual' method?
>
> From: Sumit Mohanty
> Reply-To:
> Date: Tue, 25 Feb 2014 11:26:41 -0800
> To:
> Subject: Re: UPGRADE
>
> Hi Aaron,
>
> Ambari does not support automatic upgrade of the stack yet. What you see
> are remnants of the feature that we started work
the JIRAs with the details and share them.
-Sumit
On Tue, Feb 25, 2014 at 12:26 PM, Aaron Cody wrote:
> Will you be finishing the existing implementation or going in a different
> direction? Can you share any preliminary design docs?
> thanks
>
> From: Sumit Mohanty
> Repl
stack releases and beyond. Let's open a new JIRA (let me know if
you want to do it) and discuss the design and implementation requirements.
-Sumit
On Tue, Feb 25, 2014 at 12:36 PM, Sumit Mohanty wrote:
> We have not thought of that yet. Some of the current design will remain
> but Amba
Looks like datanode start script succeeded but datanode failed soon after.
Try to see if datanode is running by checking the process with id in
/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid.
If not then check the datanode log file at
/var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log.
This should say
apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>
> 2014-02-26 23:34:12,438 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>
>
>
> Thanks,
>
> Akshatha
>
> --- *Original Message*
You need to use this API to stop the component. The reason it's different is
that a stop of the host component is processed on the host component
resource.
curl -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Stop
Component"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://AMBARI_SERVER_H
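A minimal sketch of what that call assembles, with placeholder names (AMBARI_SERVER_HOST, c1, host1, DATANODE are illustrative, not from the thread):

```python
import json

def build_stop_request(ambari, cluster, host, component):
    """Build the URL and body for stopping one host component.

    Setting the desired state to INSTALLED asks Ambari to stop the
    running component on that specific host.
    """
    url = "%s/api/v1/clusters/%s/hosts/%s/host_components/%s" % (
        ambari, cluster, host, component)
    body = json.dumps({
        "RequestInfo": {"context": "Stop Component"},
        "Body": {"HostRoles": {"state": "INSTALLED"}},
    })
    return url, body

url, body = build_stop_request("http://AMBARI_SERVER_HOST:8080",
                               "c1", "host1", "DATANODE")
print(url)
print(body)
```

The pair maps directly onto `curl -u admin:admin -X PUT -d "$body" "$url"`.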
Thanks Gunnar.
You may have already figured it out but the Rhel6 link will be
http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.4.4.23/ambari.repo
We will edit the document.
On Wed, Mar 5, 2014 at 9:10 PM, Tapper, Gunnar wrote:
>
> https://cwiki.apache.org/confluence/display/A
There are no automatic updates. The set call using configs.sh will persist
the change. Persisting a config means adding a new version of the config
and applying it to the cluster resource.
Where did you see it reverted?
Go to the cluster resource - e.g.
http://c6402.ambari.apache.org:8080/api/v
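What configs.sh does under the covers can be sketched as building a new tagged config version for the cluster resource (the tag scheme and property values below are made up for illustration):

```python
import json
import time

def build_config_update(config_type, properties):
    """Persist a config change: a new tagged version of the config
    type is attached to the cluster's desired_config."""
    tag = "version%d" % int(time.time())  # tags just need to be unique
    return json.dumps({
        "Clusters": {
            "desired_config": {
                "type": config_type,
                "tag": tag,
                "properties": properties,
            }
        }
    })

payload = build_config_update("core-site",
                              {"fs.defaultFS": "hdfs://namenode:8020"})
print(payload)
```

This payload would be PUT to the cluster resource; configs.sh wraps the same steps.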
In fact the Python support makes it much easier to add custom
services and custom scripts. So I will encourage you to try that and
provide feedback.
Python support is on par with Puppet support, so it's mature.
On Thu, Mar 20, 2014 at 12:42 PM, Erin Boyd wrote:
> It's my understandi
These could be the reason:
useradd: Can't get unique system GID (no more available GIDs)
useradd: can't create group
See if
http://superuser.com/questions/666505/creating-new-user-using-useradd-but-fails-since-its-unable-to-create-group-in-os
solves it?
On Thu, Mar 20, 2014 at 4:55 PM, Anferne
Try running "/usr/bin/yum -y install hadoop-yarn" as the same user as
ambari-agent and ensure it succeeds.
On Thu, Mar 20, 2014 at 5:14 PM, Sumit Mohanty wrote:
> These could be the reason:
> useradd: Can't get unique system GID (no more available GIDs)
> useradd:
at 5:41 PM, Anfernee Xu wrote:
> Thanks, it works now.
>
> BTW, how can I add one more service to the existing cluster, for instance
> I want to add Ganglia?
>
> Thanks
>
>
> On Thu, Mar 20, 2014 at 5:32 PM, Sumit Mohanty
> wrote:
>
>> Try running "
information should be provided when creating a
> cluster"
> }sh-4.1# curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d
> '{"Clusters": {"desired_configs": { "type": "global", "tag"
> :"version139537192
You can use Ambari support for add/remove host.
For example, if you have not removed (using Ambari) the host already you
can do so now in the "host" page through Delete Host.
After that you can do an "Add Host" on the "hosts" page and add the host back.
"Add Host" will install agent and install compone
host?
>
> -a
>
> On Fri, Mar 21, 2014 at 10:39 AM, Sumit Mohanty
> wrote:
> > You can use Ambari support for add/remove host.
> >
> > For example, if you have not removed (using Ambari) the host already you
> can
> > do so now in the "host" page th
ntly that was
> enough to get the components installed on the newly reinstalled host.
>
> -a
>
> On Fri, Mar 21, 2014 at 10:52 AM, Sumit Mohanty
> wrote:
> > When you do add host, you can install components on the host.
> >
> > Assuming you did not do a "
This should be possible through API (I have not tried it myself).
Here is what you are trying:
* Define a cluster with no HDFS (say just YARN and ZK)
* Add necessary configs for YARN and ZK
* Add/modify core-site and hdfs-site to have the correct property values to
point to the other cluster
* Sta
ack.
>curl -H "X-Requested-By: ambari" -u admin:admin -X DELETE http://
> /services/HDFS
> 4. Configure core-site
>su - hadoop
> /var/lib/ambari-server/resources/scripts/configs.sh -port set
> core-site "fs.defaultFS"
> "hdfs://slc00dgd:55310&
Which version of Ambari are you using?
Depending on the version, hadoop-env.sh.j2 (latest trunk) or
hadoop-env.sh.erb (1.4.4 and before) can tell you how to modify it.
Alternatively, you can drop your jars in the same location as other jars -
e.g. the path that is already in the class path.
If config is changed and saved then it gets saved as a newer version. When
Ambari detects a mismatch in the config versions between "desired at the
cluster/host level" and "what is reported by the agent" it flags it as
restart required.
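That mismatch check can be pictured as comparing config tags per config type (the field names here are simplified stand-ins, not the exact structures the agent reports):

```python
def restart_required(desired_tags, reported_tags):
    """Flag restart when any config type's desired tag differs from
    the tag the agent last reported applying."""
    return any(reported_tags.get(ctype) != tag
               for ctype, tag in desired_tags.items())

# A component that applied version1 of core-site, while the cluster now
# desires version2, gets flagged for restart.
print(restart_required({"core-site": "version2"},
                       {"core-site": "version1"}))  # True
```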
On Tue, Apr 1, 2014 at 6:25 AM, Gerd Koenig
wrote:
> Hi,
>
Count me in as well for Ambari.
-Sumit
On Fri, Apr 11, 2014 at 6:26 PM, Siddharth Wagle wrote:
> I can join you guys as well with Ambari backend support.
>
> -Sid
>
>
> On Fri, Apr 11, 2014 at 6:24 PM, Yusaku Sako wrote:
>
>> Hi Roman,
>>
>> This is a great idea!
>> I'm interested in providing
The shell is indeed useful.
Pls. go ahead and create an Apache Ambari JIRA (at
https://issues.apache.org/jira/browse/AMBARI) to integrate the shell into
Ambari. Looks like contrib/ambari-shell might be a good location.
Those two blueprint related commands would be good candidates. We should
also a
Let's create a JIRA for that as well.
-Sumit
On Wed, Apr 16, 2014 at 8:13 AM, Lajos Papp wrote:
> Hi Sumit,
>
> > Pls. go ahead and create a Apache Ambari JIRA (at
> https://issues.apache.org/jira/browse/AMBARI) to integrate the shell into
> Ambari. Looks like contrib/ambari-shell might be a goo
Can you also share /etc/ambari-agent/conf/ambari-agent.ini and check if
there is any Error/Warning in /var/log/ambari-agent/ambari-agent.log?
-Sumit
On Wed, Apr 23, 2014 at 11:30 AM, Erin Boyd wrote:
> Hi EI,
> Is your agent.ini file pointing to the server?
> Can your agent get out on the netwo
ar/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,
>
>
> /var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive,
>
> /var/log/nagios
>
> rpms=nagios,ganglia,
>
>
> hadoop,hadoop-lzo,hbase,oozie,sqoop,pig,zookeeper,h
nks.. Where can I find the new version of
> ambari-agent.ini?
>
>
>
> Thanks a lot
>
>
>
> *From:* Sumit Mohanty [mailto:smoha...@hortonworks.com]
> *Sent:* Wednesday, April 23, 2014 3:08 PM
>
> *To:* user@ambari.apache.org
> *Subject:* Re: ambari 1.5.1 instal
at 12:27 PM, ILOGLU, EMINE wrote:
> [ei947t@zlxv2263 1.4-3]$ rpm -qa |grep ambari-agent
>
> ambari-agent-1.5.1.110-1.x86_64
>
> [ei947t@zlxv2263 1.4-3]$ rpm -qf /etc/ambari-agent/conf/ambari-agent.ini
>
> ambari-agent-1.5.1.110-1.x86_64
>
>
>
>
>
>
_failures=true
> *settings should solve the problem.*
>
> Emine
>
>
>
>
>
> *From:* Sumit Mohanty [mailto:smoha...@hortonworks.com]
> *Sent:* Wednesday, April 23, 2014 3:32 PM
>
> *To:* user@ambari.apache.org
> *Subject:* Re: ambari 1.5.1 install problem ..
&
I am not going to HBaseCon either.
I would prefer the 7th/8th if we are open for that week.
-Sumit
On Wed, Apr 23, 2014 at 3:13 PM, Yusaku Sako wrote:
> All,
>
> I can be pretty flexible next week or the week after (though I'm not
> going to HBaseCon).
> Hortonworks should be able to host as well.
t
On Wed, Apr 23, 2014 at 3:51 PM, Roman Shaposhnik wrote:
> On Wed, Apr 23, 2014 at 3:21 PM, Sumit Mohanty
> wrote:
> > I am not going to HBaseCon either.
> >
> > I will prefer 7th/8th if we are open for that week.
>
> I'm flying off to a LinuxTAG on 7th, retu
Great. Yusaku and I will look into the logistics requirement and get back.
On Fri, Apr 25, 2014 at 9:14 AM, Roman Shaposhnik wrote:
> On Wed, Apr 23, 2014 at 6:24 PM, Sumit Mohanty
> wrote:
> > In that case, lets do it next Thursday - May 1st, 6:00 PM onwards. If
> this
> &g
Which version of Ambari are you using? The 1.5.x release and also latest
from trunk set them to 711 - based on some installs I checked.
-Sumit
On Mon, Apr 28, 2014 at 11:33 PM, Tapper, Gunnar wrote:
> Ambari seems to set up /apps/hbase as follows:
>
> [hdfs@bronto03 ~]$ hadoop fs -lsr /apps/hb
Which version of Ambari are you using? The 1.5.x release and also latest
from trunk set them to 711 - based on some installs I checked.
You can also manually change the permissions using hdfs commands if it's a
one-time fix you are looking for.
-Sumit
On Thu, May 15, 2014 at 9:46 AM, Tapper, Gun
What was the state of the host after the maintenance? Just asking because
if the directory structure was left untouched (e.g. the name node
directory) then you may be able to start the agent on that host and start
namenode and other mapped components.
I am trying to figure out if there is an easier
ministration tool
> for Vertica and Hadoop at: http://www.vertica.com/marketplace
>
>
>
> *“People don’t know what they want until you show it to them… Our task is
> to read things that are not yet on the page.” *— Steve Jobs
>
>
>
> *From:* Sumit Mohanty [mailto:smoha...@ho
task is
to read things that are not yet on the page.” *— Steve Jobs
*From:* Sumit Mohanty [mailto:smoha...@hortonworks.com]
*Sent:* Friday, May 16, 2014 4:47 PM
*To:* user@ambari.apache.org
*Subject:* Re: HBase HDFS Security
You can use rpm -qa | grep ambari
You can also use (if its a
How are you setting the version?
This is what I have used in the past - mvn -B -e versions:set
-DnewVersion=1.6.1.7.
-Sumit
On Mon, Jun 2, 2014 at 1:20 PM, Aaron Cody wrote:
> what should I be setting AMBARI_VERSION to? I tried 1.6.0 but got some
> regex errors…
> How do I figure this out in ge
2, 2014 at 1:46 PM, Aaron Cody wrote:
> ok four digits… that worked - thanks
> so how do we figure out this version number for a particular branch ? is
> it stored in a file somewhere?
>
> From: Sumit Mohanty
> Reply-To: "user@ambari.apache.org"
> Date: Monda
The implementation of smoke test depends on what will happen when you run
smoke tests. Smoke tests shipped with the stacks test the services for
basic capabilities - e.g. hbase smoke test will create a table. From the
perspective of creating a table it does not make much sense to test it on
all hos
that happen?
>
> Thanks,
> Anisha
>
> From: Sumit Mohanty
> Reply-To: "user@ambari.apache.org"
> Date: Tuesday, June 3, 2014 at 9:45 PM
> To: "user@ambari.apache.org"
> Subject: Re: Running smoke test
>
> The implementation of smoke test de
Ambari 1.2.4 does not have the feature where you can add a custom action of
your own. Are you planning to upgrade to the latest version?
On Thu, Jun 5, 2014 at 11:35 AM, Anisha Agarwal
wrote:
> I am using ambari-1.2.4.
>
> From: Sumit Mohanty
> Reply-To: "user@ambari.ap
The configs that determine the Postgres db are in the ambari.properties file:
- server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
- server.jdbc.user.name=ambari
- server.jdbc.database=ambari
If you are restoring the database on the original host and ambari-server is
running on th
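A quick way to confirm those values on the Ambari Server host is to read them back out of the properties file. The parser below is a simple sketch of the key=value format, not Ambari code, and the sample text just repeats the keys listed above:

```python
def load_properties(text):
    """Parse simple key=value lines, skipping comments and blanks."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

sample = """
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
server.jdbc.user.name=ambari
server.jdbc.database=ambari
"""
props = load_properties(sample)
print(props["server.jdbc.database"])  # ambari
```

On a real server you would read /etc/ambari-server/conf/ambari.properties instead of the inline sample.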
Host check will only report space issues based on available space at "/". It
does not look into the various mounts.
When you deploy the stack using Ambari there are configuration properties
you can change so that various default folders (e.g. HDFS data dirs, log
dirs) points to sub-folders within /grid
What version did you upgrade from? What is the version of the stack?
What do these calls return (assumes default login/password - change as
needed)?
curl -u admin:admin http://:8080/api/v1/clusters/
curl -u admin:admin http://:8080/api/v1/clusters//services
On Mon, Jun 30, 2014 at 6:45 AM, ILOGLU
ackout
> targetService: NAGIOS targetComponent: NAGIOS_SERVER defaultTimeout: 60
> targetType: ANY
>
>
>
> And at this attempt to upgrade, I see HDFS and Nagios are missing and all
> the other services do exist.
>
> Why is it trying to go public-repo? That might be the p
The wiki has a few samples on API usage -
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=41812517
Top level link -
https://cwiki.apache.org/confluence/display/AMBARI/API+usage+scenarios%2C+troubleshooting%2C+and+other+FAQs
thanks
Sumit
On Tue, Jul 8, 2014 at 9:38 AM, Tapper, Gun
By "I restarted the process." do you mean that you restarted installation?
Can you share the command logs for tasks (e.g. 10, 42, 58, etc.)? These
would help debug why the tasks are still active.
If you look at the Ambari UI and look at the past requests (top left) then
the task specific UI will
The Apache Ambari team is proud to announce Apache Ambari version 1.6.1
Apache Ambari is a tool for provisioning, managing, and monitoring Apache
Hadoop clusters. Ambari consists of a set of RESTful APIs and a
browser-based management console UI.
The release bits are at:
http://www.apache.org/dyn
It really depends on the workload you want to run on the cluster.
Are you asking about the node that will host the Ambari Server or all the
nodes in the cluster?
If it's for the node hosting Ambari Server then you should have around 4 GB
of RAM and run Ambari Server, Ganglia, and Nagios on the sam
Whether a component is client or a slave is driven by the metainfo.xml for
the service type in the stack definition.
On Fri, Aug 15, 2014 at 10:58 AM, Anisha Agarwal
wrote:
> Hi,
>
> I was looking at the code to understand how a slave component differs
> from a client component.
>
>1. Is
Could we save these as FAQs on the Ambari wiki?
-Sumit
On Thu, Sep 4, 2014 at 5:53 PM, Siddharth Wagle
wrote:
> Hi Alex,
>
> Replies inline.
>
> 1. If a component exists in the parent stack and is defined again in the
> child stack with just a few attributes, are these values just to override
The Apache Ambari Quick Start Guide describes a way to get the latest Ambari
(trunk or 1.7.0) and use it to install the latest of any supported stack.
https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide
See if that helps.
On Mon, Nov 17, 2014 at 1:47 PM, hsy...@gmail.com wrote:
> Hi guys,
>
> Is
There is another way to find out.
The command-*.json file created while invoking the INSTALL command will
have a property bag by the name of "clusterHostInfo". That has the list of
hosts for the various component types.
For example, one can get the list of rs_hosts using
rs_hosts = default('/clusterHo
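The `default()` helper used in Ambari's resource scripts resolves a /-separated path into the command JSON, falling back when the key is absent. A self-contained sketch (the sample JSON and the `hbase_rs_hosts` key are illustrative, not a real command file):

```python
import json

def default(path, command, fallback=None):
    """Walk a /-separated path through the command dict, mimicking the
    default() lookup used in Ambari's resource scripts."""
    node = command
    for part in path.strip("/").split("/"):
        if not isinstance(node, dict) or part not in node:
            return fallback
        node = node[part]
    return node

command = json.loads("""
{"clusterHostInfo": {"hbase_rs_hosts": ["c6401.ambari.apache.org",
                                        "c6402.ambari.apache.org"]}}
""")
rs_hosts = default("/clusterHostInfo/hbase_rs_hosts", command, [])
print(rs_hosts)
```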
https://issues.apache.org/jira/browse/AMBARI-4223 has the background.
-Sumit
On Tue, Nov 25, 2014 at 5:12 PM, Sumit Mohanty
wrote:
> There is another way to find out.
>
> The command-*.json file created while invoking the INSTALL command will
> have a property bag by
ZK is required when you use the Ambari Web UI. If you install using the
APIs then you can pick and choose.
The best option is to Stop ZK service post installation. You can mark ZK
service to be in maintenance and Start/Stop All services will skip ZK.
On Sun, Nov 30, 2014 at 6:13 PM, Fabio wrote:
1. Is there a way via the API to force it to update the DecomHosts field with
fresh data? There's a slight delay after the decommission process finishes
before it is returned in the DecomHosts field of the NAMENODE, which is
creating a race condition in my automation (sometimes it doesn't see
By original do you mean that you want to reset ambari so that you can do a
fresh installation?
From: Brian Jeltema
Sent: Tuesday, March 03, 2015 9:20 AM
To: user@ambari.apache.org
Subject: 'resetting' Ambari
I have a small cluster that I recently set up,
Brian,
All INSTALLs are scheduled before all STARTs.
Does the install of your service require HDFS to be started? What operations do
you perform on HDFS during the INSTALL? Could you move them to the START of the
CUSTOM_MASTER/SLAVE and if you can the role_command_order you specified should
run if needed. Thank you for the help and
explanation!
Sincerely,
Brian
On Wed, Mar 11, 2015 at 5:47 PM, Sumit Mohanty
mailto:smoha...@hortonworks.com>> wrote:
Brian,
All INSTALLs are scheduled before all STARTs.
Does the install of your service require HDFS to be started? What ope
It is possible that host components, such as HBASE_REGIONSERVER and DATANODE,
are not able to push metrics to Ganglia.
Can you check /var/lib/ganglia/data on the Ganglia gmetad server host to see
if metrics files are being created? You can also check /var/log/messages on the
machines where host
rs/HADOOP_LAB/components?ServiceComponentInfo/categry.in(MASTER,SLAVE)&host_components/HostRoles/host_name=host1&host_components/HostRoles/state=INSTALLED"
On Thu, Apr 9, 2015 at 7:56 AM Krzysztof Adamski
mailto:adamskikrzys...@gmail.com>> wrote:
That's it. A clever sol
You can try something like this
curl -u admin:admin -H "X-Requested-By: ambari" -i -X PUT -d
'{"RequestInfo":{"context":"Stop All Host
Components"},"Body":{"HostRoles":{"state":"INSTALLED"}}}'
"http://u1201.ambari.apache.org:8080/api/v1/clusters/c1/host_components?HostRoles/host_name=u120
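The same PUT can target many host components at once because the predicate goes in the query string of the collection URL. A sketch of assembling such a URL (the host and cluster names are the placeholders from the example above):

```python
def host_components_url(ambari, cluster, predicates):
    """Build a host_components collection URL with query predicates,
    e.g. HostRoles/host_name=... to scope the operation to one host.

    Predicates are joined verbatim (Ambari expects the literal '/'
    in field names like HostRoles/host_name).
    """
    base = "%s/api/v1/clusters/%s/host_components" % (ambari, cluster)
    query = "&".join("%s=%s" % kv for kv in sorted(predicates.items()))
    return base + "?" + query

url = host_components_url(
    "http://u1201.ambari.apache.org:8080", "c1",
    {"HostRoles/host_name": "u1201.ambari.apache.org"})
print(url)
```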
If you have an Ambari UI-based deployment that is configured to use MySQL then
you can export a blueprint from it.
https://cwiki.apache.org/confluence/display/AMBARI/Blueprints has pointers on
how to export.
thanks
Sumit
From: Pratik Gadiya
Sent: Tuesday, Apri
Can you check ambari-agent logs (/var/log/ambari-agent/ambari-agent.log or
/var/log/ambari-agent/ambari-agent.out) and ambari-server logs
(/var/log/ambari-server/ambari-server.log)?
From: Frank Eisenhauer
Sent: Thursday, April 16, 2015 2:32 PM
To: Ambari
/cluster
ERROR [pool-1-thread-5473] AppCookieManager:122 - SPNego authentication
failed, can not get hadoop.auth cookie for URL:
http://:8744/api/v1/cluster
Am 16.04.2015 um 23:47 schrieb Sumit Mohanty:
> Can you check ambari-agent logs (/var/log/ambari-agent/ambari-agent.log or
> /var/log/
These steps seem fine to me. In fact I just tried and deleted some service in
my test cluster (using latest trunk code base though).
What does GET return
curl -u admin:password -H "X-Requested-By: ambari" -X GET
http://localhost:8080/api/v1/clusters/c1/services/STORM ?
the past.
Might that be a problem?
If you need more information from the log file, I'll look for a way to
share the log.
Am 17.04.2015 um 00:19 schrieb Sumit Mohanty:
> Those do not seem related to the error. Try this:
>
> You can pick one agent for which ambari-server is reportin
Not without a code change. This is probably a good feature to add. Can you
create a task?
From: Greg Hill
Sent: Friday, April 17, 2015 8:32 AM
To: user@ambari.apache.org
Subject: adjust the agent heartbeat?
https://github.com/apache/ambari/blob/trunk/ambari-agent
That error is something I am not familiar with. Perhaps someone else can chime
in.
From: dbis...@gmail.com on behalf of Artem Ervits
Sent: Friday, April 17, 2015 8:24 AM
To: user@ambari.apache.org
Subject: Re: delete using API problem
I think the answer lies i
A_LAB/hosts/HADOOP01.BIGDATA.LOCAL";,
"Hosts" : {
"cluster_name" : "BIGDATA_LAB",
"host_name" : "HADOOP01.BIGDATA.LOCAL"
}
},
{
"href" :
"http://localhost:8080/api/v1/clusters/BIGDATA_
Unable to get component REST metrics. No host
name for STORM_UI_SERVER.
I must confess that I erased all storm packages from each server prior to doing
any API calls, if that is of any help.
On Fri, Apr 17, 2015 at 12:29 PM, Yusaku Sako
mailto:yus...@hortonworks.com>> wrote:
Wow
+Alejandro
In theory, you can stop ambari-server, modify all occurrences of the hostname,
and that should be it. There is no first-class support for it.
Alejandro, did you look at the possibility of manually changing all host names
to rename a host (https://issues.apache.org/jira/browse/AMBARI-
ve hive_hostname here.
Please let me know how can I pass the oozie_database_name and the
oozie_hostname in the configuration part.
Also, do let me know if I could skip the entries highlighted in yellow from the
above configs if they are not of use for just configuring all the services to
use MySQL as
;,
"oozie.service.JPAService.create.db.schema" : "false",
"oozie.service.JPAService.jdbc.driver" : "com.mysql.jdbc.Driver",
"oozie.service.JPAService.jdbc.username" : "oozie"
}
}
Let me know if there has to be an
What's your goal in terms of dividing this over two nodes? Generally, such
division depends on what kind of work load you are running (I am no expert
here). So I can easily see moving HBASE_REGIONSERVER and SUPERVISOR to one node
so that Storm and HBase work load can use the available resources.
Some documentation exists at
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133
Rather than the Java/Python code, a better start would be existing metainfo.xml
files such as -
https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/HBA
One addition to question 4.
From: Alejandro Fernandez
Sent: Saturday, May 09, 2015 1:44 PM
To: user@ambari.apache.org; Christopher Jackson
Subject: Re: Ambari Custom Service Questions.
Hi Christopher, these are all very good questions, and it would be useful to
Inline ...
From: Christopher Jackson
Sent: Sunday, May 10, 2015 8:49 AM
To: user@ambari.apache.org
Cc: Sumit Mohanty; Alejandro Fernandez
Subject: Re: Ambari Custom Service Questions.
Thanks for this information. I have a few follow up questions asked inline
Occasions where I do not see the node go to decommission are when the
replication factor (dfs.replication) is equal to or greater than the number of
data nodes that are active.
Hosts get removed from exclude file when the host gets deleted. This was added
at some point so that when the host is
Can you try the delete at the level of "components" -
http://localhost:8080/api/v1/clusters/c1/services/STORM/components/STORM_REST_API
This is what succeeded for me after getting STORM_REST_API to INSTALLED.
[root@c6403 vagrant]# curl -i -uadmin:admin -H "X-Requested-By: ambari" -X GET
http://
From: Eirik Thorsnes
Sent: Sunday, May 17, 2015 6:54 AM
To: user@ambari.apache.org
Subject: Re: Ambari 2.0, stack: HDP-2.2.4: Unable to remove STORM
service/components by using API
On 16.05.15 17:45, Sumit Mohanty wrote:
> Can you try the delete at
Can you call a DELETE on
http://localhost:8080/api/v1/clusters/helm/services/STORM and check if it
succeeds?
From: Eirik Thorsnes
Sent: Sunday, May 17, 2015 7:36 AM
To: user@ambari.apache.org
Subject: Re: Ambari 2.0, stack: HDP-2.2.4: Unable to remove STO
Eirik, you are right. STORM_REST_API does not exist for STORM in HDP-2.2.
The happy path that should work ( verified it yesterday) is
Start at HDP-2.1
Stop STORM
Upgrade to HDP-2.2
Before starting STORM service - delete STORM_REST_API at the level of component
(not host_component)
- DELETE
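The DELETE in that last step targets the component at the service level; a sketch of the URL it hits (cluster and server names are placeholders):

```python
def component_url(ambari, cluster, service, component):
    """URL for a component at the service level (not host_component),
    which is the resource the happy-path DELETE is issued against."""
    return "%s/api/v1/clusters/%s/services/%s/components/%s" % (
        ambari, cluster, service, component)

url = component_url("http://localhost:8080", "c1", "STORM", "STORM_REST_API")
# pair with: curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE <url>
print(url)
```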
.
What is the error you see or what step are you blocked on?
From: Frans Thamura
Sent: Sunday, May 24, 2015 6:39 AM
To: user@ambari.apache.org
Subject: Q Regarding Ambari QuickStart
Hi All
I use the Ambari Quick Start in my notebook
the work are working we
In general, if you removed the service and re-added it, Ambari will call the
install command. If yum (assuming you are on rhel/centos) is refusing to
upgrade then likely it is not finding the new package or expecting a "yum
upgrade" call. In that case, you have to manually upgrade the package.
Wh
It's the implementation of the start command that should create the pid file -
essentially you have to do it yourself in the start script.
You can use the pid file to make the start script idempotent. It's a best
practice for the stop command to delete the pid file after stopping the
component instance.
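A sketch of that pattern for a custom service's start/stop logic; the pid-file path is a stand-in, and the demo records this very process's pid so no daemon is needed:

```python
import os

def pid_alive(pid_file):
    """True if the pid file exists and its process is still running."""
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)  # signal 0 only checks process existence
        return True
    except (IOError, OSError, ValueError):
        return False

def start(pid_file, pid):
    """Idempotent start: do nothing if already running, else record pid."""
    if pid_alive(pid_file):
        return "already running"
    with open(pid_file, "w") as f:
        f.write(str(pid))
    return "started"

def stop(pid_file):
    """Best practice: remove the pid file after stopping the instance."""
    if os.path.exists(pid_file):
        os.remove(pid_file)

pid_file = "/tmp/demo-component.pid"
stop(pid_file)                        # ensure a clean slate for the demo
print(start(pid_file, os.getpid()))   # started
print(start(pid_file, os.getpid()))   # already running
stop(pid_file)
```

In a real script the pid written would be the launched daemon's, and start would fall through to actually launching it.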
Hi Christopher,
Ambari does not support installation of clients. The dependency, for now, only
ensures that they are installed on the same host.
Let's open a JIRA. It's easy to implement an install order as well - basically
call the same helper method as done for START.
thanks
Sumit
Does this have the details you need?
https://github.com/apache/ambari/blob/branch-2.0.0/ambari-server/docs/api/v1/alert-definitions.md
-Sumit
From: Eirik Thorsnes
Sent: Thursday, June 18, 2015 2:58 AM
To: user@ambari.apache.org
Subject: Ambari 2.1.0-snap:
While it is possible the data model is indeed corrupt, more than likely
there are sections of code that stop at the first cluster they see, and that is
creating some confusion. I think you can delete the second/third cluster from
the cluster* tables, and after a restart Ambari Server should get