Yes, there is typically a slight difference between the x.y.z version from
Apache and that offered by Hortonworks. The Hortonworks version usually
contains several additional fixes that did not make it into the official
Apache release. Additionally, there are stacks shipped with the Hortonworks
Thanks Jonathan,
Is there a way to upgrade to the most recent (at least 1.0) Kafka with Ambari
2.6? If not, which Ambari can give me out-of-the-box 1.0 Kafka?
Jacek
> On Jan 11, 2019, at 14:56, Jonathan Hurley
wrote:
>
> HDP 2.6 uses Kafka 0.10.0:
>
https://github.com/apache/
HDP 2.6 uses Kafka 0.10.0:
https://github.com/apache/ambari/blob/branch-2.6/ambari-server/src/main/resources/stacks/HDP/2.6/services/KAFKA/metainfo.xml#L23
The version number which you are seeing is a combination of the HDP stack
version (2.6.1.0-129) along with the Apache version of Kafka which
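As a rough sketch of how such a combined string can be pulled apart, assuming the combined form is the three-field Apache version followed by the HDP stack build (e.g. 0.10.1.2.6.1.0-129; the exact layout is an assumption, not stated in the message above):

```python
def split_hdp_version(combined):
    """Split a combined HDP component version such as '0.10.1.2.6.1.0-129'
    into its Apache part and its HDP stack build. Assumes the first three
    dot-separated fields are the Apache version and the remainder is the
    stack version plus build number."""
    release, _, build = combined.partition("-")
    fields = release.split(".")
    apache = ".".join(fields[:3])
    stack = ".".join(fields[3:]) + ("-" + build if build else "")
    return apache, stack

print(split_hdp_version("0.10.1.2.6.1.0-129"))
```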
Hi,
Please uninstall hadoop_2_6_4_0_91-yarn.x86_64 from my-yum-local-repo. Then,
remove this repo by searching /etc/yum.repos.d for my-yum-local-repo. After
this, you can retry the install.
From: Lian Jiang
Reply-To: "user@ambari.apache.org"
Date: Wednesday, June 13, 2018 at 11:00 PM
To:
I’m not sure if you meant mpack instead of mstack, but the alert below
indicates that there are components installed which are not reporting the
correct versions. It should tell you which components are wrong.
Normally, this happens with an mpack that has installed services which indicate
they
This is expected for now. The problem is that client configs are run and
rendered on the Ambari server itself, which might not even be a part of the
cluster. Some properties, such as the ones you listed below, are rendered on a
per-host basis, and can be different depending on the versions of
Depending on which version of Ambari you are on, a request like this might work:
POST api/v1/clusters//requests
{
"RequestInfo":
{
"command":"RESTART",
"context":"Restart all ZooKeeper Clients Across the Cluster",
"operation_level": {
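The request above is cut off; a complete body might look something like the following sketch. The operation_level and resource_filters fields are assumptions based on typical Ambari custom-command requests, and the cluster and host names are placeholders:

```python
import json

# Hypothetical completion of the truncated RESTART request body.
# Field names below are illustrative, not copied from the thread.
payload = {
    "RequestInfo": {
        "command": "RESTART",
        "context": "Restart all ZooKeeper Clients Across the Cluster",
        "operation_level": {
            "level": "HOST_COMPONENT",
            "cluster_name": "my_cluster",  # placeholder cluster name
        },
    },
    "Requests/resource_filters": [
        {
            "service_name": "ZOOKEEPER",
            "component_name": "ZOOKEEPER_CLIENT",
            "hosts": "host1.example.com,host2.example.com",  # placeholders
        }
    ],
}

# The serialized form is what would be POSTed to the requests endpoint.
print(json.dumps(payload, indent=2))
```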
This is caused by the connection refused exception to your standalone postgres
database. It means the Ambari Server can't connect to it. You can check the
settings in ambari.properties to see if any of them are wrong:
grep jdbc /etc/ambari-server/conf/ambari.properties
And then adjust any which
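As an illustration of what that grep surfaces, here is a small sketch that pulls the jdbc-related keys out of properties-file text. The property names in the sample are illustrative, not a definitive list of what ambari.properties contains:

```python
def jdbc_properties(text):
    """Collect the jdbc-related key/value pairs from properties-file
    text of the form key=value, ignoring comments and blank lines."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        if "jdbc" in key:
            props[key.strip()] = value.strip()
    return props

# Sample text standing in for /etc/ambari-server/conf/ambari.properties.
sample = """\
server.jdbc.url=jdbc:postgresql://db.example.com:5432/ambari
server.jdbc.user.name=ambari
server.os_family=redhat7
"""
print(jdbc_properties(sample))
```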
No, it cannot. The script-approach for SNMP in Ambari 2.2 is meant as a way of
providing your own custom behavior on top of Ambari. The core logic of Ambari
only passes the fields that you see here. (alert state, name, service, etc).
You need to edit the script to provide your own host. I've
ervers Health Summary fires at WARNING level?
On Fri, Mar 24, 2017 at 12:27 PM, Jonathan Hurley
<jhur...@hortonworks.com> wrote:
I'm not sure what you mean when you say "turn down" the process. If you are
shutting down the process, then the port is released and the alert will not be
able to make a socket connection. You will get a CRITICAL right away. The
values in the alert are a round-trip-time coupled with a socket
s alerts.
[1] https://cwiki.apache.org/confluence/display/AMBARI/Known+Issues
-Ganesh
On Fri, Oct 28, 2016 at 12:36 PM, Jonathan Hurley
<jhur...@hortonworks.com> wrote:
It sounds like you're asking two different questions here. Let me see if I can
address them:
Most "CRITICAL" thresholds do contain different text than their OK/WARNING
counterparts. This is because there is different information which needs to be
conveyed when an alert has gone CRITICAL. In
I believe this is because you have a "hadoop" directory in /usr/hdp ...
/usr/hdp should only contain versions and "current". If there's another
directory, it would cause the hdp-select tool to fail.
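A quick way to spot such stray entries is to filter /usr/hdp for anything that is neither "current" nor a version-style name. The version pattern below is an assumption about how hdp-select expects that directory to look:

```python
import re

def unexpected_hdp_entries(entries):
    """Given directory names found under /usr/hdp, return those that are
    neither 'current' nor a version-style name (e.g. 2.6.1.0-129),
    since extra entries can cause hdp-select to fail."""
    version_pattern = re.compile(r"^\d+(\.\d+)*(-\d+)?$")
    return [e for e in entries
            if e != "current" and not version_pattern.match(e)]

# In practice the input would come from os.listdir("/usr/hdp").
print(unexpected_hdp_entries(["current", "2.6.1.0-129", "hadoop"]))
```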
On Jun 15, 2016, at 3:23 PM, Pawel Akonom
We'd need to know which version of Ambari you're using. This type of error can
typically be seen in one of two scenarios:
- You're using MySQL with MyISAM as the database engine. MyISAM doesn't support
transactions or foreign keys and can lead to a corrupted Ambari database.
- You're using an
Ensure that this MySQL JAR file is specified in your
/etc/ambari-server/conf/ambari.properties:
db.mysql.jdbc.name=/var/lib/ambari-server/resources/mysql-connector-java-5.1.36-bin.jar
On May 25, 2016, at 2:23 PM, Anandha L Ranganathan
>
You're hitting an instance of https://issues.apache.org/jira/browse/AMBARI-15482
I don't know of a way around this aside from:
- Finalizing the upgrade
- Starting NameNode manually from the command prompt
It's probably best to just finalize the upgrade and start NameNode from the web
client after
Running two Ambari servers concurrently is not going to work due to the nature
of how the server uses JPA to interact with the database. You can keep a spare
Ambari server ready to startup on another host and use a virtual IP so that the
agents don't need to change who they talk to. But now you
know Ambari automatically invokes the
status function.
On Mon, Apr 18, 2016 at 7:50 PM, Jonathan Hurley
<jhur...@hortonworks.com> wrote:
When your command runs, it will show up in the UI as something like
"command-123.json". You'll match this up
s.format import format
and also given 777 permission to the intended pid file. Still it's not working.
Can you please tell me where I can see the print statements which I
provide in the status function, so that I can debug the function.
On Mon, Apr 18, 2016, 18:35 Jonathan Hurley
<jhur...
What are your import statements? The "format" function provided by Ambari's
common library has a naming conflict with a default python function named
"format". If you don't import the right one, your format("...") command will
fail silently. Make sure you are importing:
from
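To see why the wrong import fails silently: Python's builtin format(value) simply formats a single value and performs no variable substitution, so the template comes back unchanged (Ambari's own format helper, by contrast, substitutes local variables into the braces):

```python
# With the builtin shadowing Ambari's helper, no substitution happens:
port = 2181
result = format("{port}")  # builtin format() just returns the string as-is
print(result)              # the literal text "{port}", not "2181"

# Explicit substitution via str.format, for comparison:
print("{port}".format(port=port))
```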
Sorry for the confusion. In my search for an answer I came across the
host-only alerts and thought it was related.
Thanks again for your help.
Regards,
Henning
Am 06/04/16 um 15:26 schrieb Jonathan Hurley:
I think what you're asking about is a concept known as host-level alerts. These
are alert
wrote:
How can an alert be added to a host?
Am 05/04/16 um 18:41 schrieb Henning Kropp:
Worked now. Thanks.
Am 05/04/16 um 18:01 schrieb Jonathan Hurley:
The alerts.json file is only to pickup brand new alerts that are not currently
defined in the system. It's more of a way to quickly seed Ambari with a default
set of alerts. If the alert has already been created, any updates for that
alert made in alerts.json will not be brought in. You'll need
That's very odd, especially since the upgrade doesn't touch the topology
tables. Are you using MySQL by any chance? If so, can you check to make sure
that your database engine is InnoDB and not MyISAM? You have an integrity
violation here which doesn't seem possible unless you're using a
Maven looks for dependencies in your local repositories (~/.m2). When you have
another compiled project as a dependency (which ambari-server has on
ambari-metrics), you need to "install" this dependency in your local repo. In
the ambari-metrics subproject folder, you'll want to do a: mvn clean
"host_name" :
"ip-10-4-148-160.us-west-2.compute.internal"
}
},
{
"href" :
"http://10.4.148.160:8080/api/v1/clusters/indigo/hosts/ip-10-4-148-49.us-west-2.compute.internal",
"Hosts" : {
"cluster_na
Please clarify?
Thanks
Naga
On Wed, Nov 18, 2015 at 8:43 AM, Jonathan Hurley
<jhur...@hortonworks.com> wrote:
This all kind of depends on how you created your blueprint and what host groups
you have defined. Assuming you have two host groups, here’s an examp
This is a problem that happens when the host with Ambari is not also a part of
the cluster. You should probably downgrade and make the Ambari server a part of
the cluster by installing a simple client on it. Then you can try the upgrade
again.
This is fixed in Ambari 2.1.3
On Nov 17, 2015, at
What this step is doing is loading classes which match an interface and binding
them as individual alert dispatchers in Guice. I haven’t experienced any
slowdown starting Ambari server - usually starts up in about 10 seconds total.
Can you provide a jstack dump during your startup so we can see
When you make a REST request to Ambari, it gives you back some JSON which
contains the data along with some decorator information. The “href” and “items”
elements are only for informational and structure purposes; you wouldn’t want
to include them in a POST going back to the server. What are
The ambari disk usage alerts are meant to check two things: that you have
enough total space and enough percent free space in /usr/hdp for data created by
hadoop and for installing versioned RPMs. Total free space alerts are something
that you’ll probably want to fix since it means you have less
The upgrade should have preserved all of these properties. They are marked in
the upgrade pack as “keep” so if they existed before the upgrade, then they
should have been present.
With that said, the values from the manual upgrade look correct. When you
replace these values, what error does
The ControllerModule is iterating over all classes which are instances of
NotificationDispatcher and binding them to a singleton instance. At this point
in the startup, the classes have been identified and the only real work being
done is a conversion from a String to a Class instance before
There is currently no way to change the directories which are checked by this
alert. The alert is mostly concerned with the free space where stack components
are installed (either /usr/hdp or /usr/lib).
You can easily create another alert script to check a different directory if
desired.
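If it helps, a custom script alert along those lines might look like the sketch below. The execute()/get_tokens() contract (returning a state plus a single-entry label list) follows the usual pattern for Ambari script alerts, and the directory and thresholds are placeholders, not values from this thread:

```python
import os

# Thresholds are placeholders; tune them for the directory being watched.
WARNING_THRESHOLD = 0.20   # warn when less than 20% free
CRITICAL_THRESHOLD = 0.10  # critical when less than 10% free

def get_tokens():
    # This check needs no configuration properties from the cluster.
    return ()

def execute(configurations=None, parameters=None, host_name=None):
    path = "/"  # placeholder: the directory whose free space is checked
    stat = os.statvfs(path)
    free_ratio = float(stat.f_bavail) / stat.f_blocks
    label = "{0:.0%} free on {1}".format(free_ratio, path)
    if free_ratio < CRITICAL_THRESHOLD:
        return ("CRITICAL", [label])
    if free_ratio < WARNING_THRESHOLD:
        return ("WARNING", [label])
    return ("OK", [label])

print(execute())
```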
On
You can update the values, but you’ll need to use the APIs to do this. When
sending a new “source” element, you need to include all fields - it will not
merge omitted source child elements in with the existing source:
PUT api/v1/clusters/cluster-name/alert_definitions/definition-id
{
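For illustration, a complete replacement body might look like the following sketch. The field names follow the usual shape of a PORT-type alert definition and are illustrative, not copied from this thread; the point is that every child of "source" is sent, even the unchanged ones, since omitted elements are not merged:

```python
import json

# Hypothetical full "source" element for a PUT to an alert definition.
definition_update = {
    "AlertDefinition": {
        "source": {
            "type": "PORT",
            "uri": "{{hdfs-site/dfs.namenode.http-address}}",  # placeholder
            "default_port": 50070,
            "reporting": {
                "ok": {"text": "TCP OK - {0:.3f}s response on port {1}"},
                "warning": {
                    "text": "TCP OK - {0:.3f}s response on port {1}",
                    "value": 1.5,
                },
                "critical": {
                    "text": "Connection failed: {0} to {1}:{2}",
                    "value": 5.0,
                },
            },
        }
    }
}

print(json.dumps(definition_update, indent=2))
```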
eirik.thors...@uni.no wrote:
On 18. juni 2015 16:53, Jonathan Hurley wrote:
You can update the values, but you’ll need to use the APIs to do this.
When sending a new “source” element, you need to include all fields - it
will not merge omitted source child elements in with the existing source
the exact same result : tcpdump returns nothing.
From: Jonathan Hurley [mailto:jhur...@hortonworks.com]
Sent: Tuesday, May 19, 2015 2:42 PM
To: user@ambari.apache.org
Subject: Re: Restarting nodes to avoid HTTP 403
It might be because you’re using the loopback adapter. Your command
absolutely nothing.
Does that help you ?
If you want any additional information, feel free to ask.
Regards,
Loïc
From: Jonathan Hurley [mailto:jhur...@hortonworks.com]
Sent: Wednesday, May 13, 2015 11:39 PM
To: user@ambari.apache.org
Subject: Re: Restarting nodes to avoid HTTP
that there is no
network trace.
The command sudo tcpdump -i lo -l -s0 -w - tcp dst port 8042 | strings
executed on the host returning 403 returns absolutely nothing.
Does that help you ?
If you want any additional information, feel free to ask.
Regards,
Loïc
From: Jonathan Hurley
emails.
Hope this will help understand where the problem comes from.
Have a nice day,
Loïc
From: Jonathan Hurley [mailto:jhur...@hortonworks.com]
Sent: Thursday, May 7, 2015 6:53 PM
To: user@ambari.apache.org
Subject: Re: Restarting nodes to avoid HTTP 403
All
From: Jonathan Hurley [mailto:jhur...@hortonworks.com]
Sent: Wednesday, May 6, 2015 4:24 PM
To: user@ambari.apache.org
Subject: Re: Restarting nodes to avoid HTTP 403
OK, so I think I have a clear picture of how you get to this situation. I’d
still like to know a few things:
1
-2.2.4.2-2
Have a nice weekend,
Loïc
From: Jonathan Hurley [mailto:jhur...@hortonworks.com]
Sent: Thursday, May 7, 2015 5:34 PM
To: user@ambari.apache.org
Subject: Re: Restarting nodes to avoid HTTP 403
The logs indicate that the alerts are running correctly and are simply
handles Ambari metrics.
As I am not sure my explanations are quite understandable, do not hesitate to
tell me if something remains unclear.
Thanks,
From: Jonathan Hurley [mailto:jhur...@hortonworks.com]
Sent: Tuesday, May 5, 2015 8:38 PM
To: user@ambari.apache.org
Subject: Re
Can you provide some more information on your environment, such as:
1) Version of Ambari
2) Whether the environment is kerberized
3) Are you running the Ambari agent as root, or another user.
4) Any information from the ambari-agent.log file that might seem to indicate a
problem
5) You said that
up
-- Restarting the agent totally resolves the problem. It does not happen
anymore, and everything runs quite normally.
From: Jonathan Hurley [mailto:jhur...@hortonworks.com]
Sent: Tuesday, May 5, 2015 5:36 PM
To: user@ambari.apache.org
Subject: Re: Restarting nodes
: [ ]
}
$
From: Jonathan Hurley <jhur...@hortonworks.com>
To: user@ambari.apache.org
Cc: user@ambari.apache.org; Jayesh Thakrar
<j_thak...@yahoo.com>
Sent: Thursday, April 23, 2015
You can find all of the common and agent scripts located in site-packages of
python. For example:
ls -l /usr/lib/python2.6/site-packages/
drwxr-xr-x 4 root root 4096 Feb 18 18:21 ambari_agent
drwxr-xr-x 3 root root 4096 Feb 18 18:23 ambari_commons
lrwxrwxrwx 1 root root 46 Feb 18 18:22