Hi everyone,

I find that most people, including myself, still rely on Ambari.
Therefore, my team is planning to bring Ambari back to the Incubator, and we 
have connected with some IPMC members who are willing to help.
We are currently preparing the proposal, and we believe we can start a vote 
on it in the Apache Incubator community in the near future.
Even if the proposal is rejected, we will still maintain a fork on GitHub 
and keep it running.
If you are also interested in this, or want to be part of it, please contact 
me. Thanks!

Best Regards,
Zhiguo Wu

On 2022/05/19 06:41:57 Yuqi Gu wrote:
> Bigtop adopted the Ambari stable version (2.7.5) as the cluster management
> tool for potential developers and administrators. It also provided
> bigtop-ambari-mpack (Bgtp-Mpack) to decouple stack management and definition
> from Ambari's core.
> Currently Bgtp-Mpack still only supports Bigtop-1.5 (Hadoop 2.x), but Bigtop
> has already supported Hadoop 3.x since the Bigtop 3 release. So we plan to
> start working on upgrading Bgtp-Mpack services from Bigtop-1.5 (Hadoop 2.x)
> to Bigtop-3 (Hadoop 3.x).
> 
> BRs,
> Yuqi
> 
> 
> 
> Michiel Verheul <[email protected]> wrote on Thu, May 19, 2022 at 03:56:
> 
> > Personally I'm struggling with exactly the same issue. There was
> > already some discussion on this topic here, earlier:
> >
> > https://lists.apache.org/thread/wj898zq8q348721xf460mttqlty4v3zw
> >
> > Personally I have never run a production cluster without CM before. Bigtop
> > 3 with Ambari seemed ideal, but as there was no ambari-mpack for Bigtop 3
> > yet, I put some energy into porting the HDP mpack to support Bigtop.
> > But I can understand that maintaining such a component under the Bigtop
> > project is a no-go, because of Ambari's attic state, Python 2.7 and the
> > (un)maintainability of such an mpack, so I stopped working on that.
> >
> > From what I also understand from the above thread, 李帅 is already working on
> > a lightweight Ambari alternative. It feels like that would be a good way
> > forward, but I don't know how much work would be needed to make it work,
> > or whether it's still viable.
> >
> > The alternative would be running vanilla hadoop/bigtop. I don't have any
> > experience with it but I guess the most important gaps for me will be:
> >
> > - initial installation (with support for Kerberos and SSL)
> > - (rolling) service restarts
> > - basic service monitoring (is a service running or down?)
> >
> > Maybe we just have to put some energy into creating Puppet/Ansible scripts
> > for this purpose and forget about Ambari.
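> >
> > As a rough illustration of what such scripts could cover, here is a minimal
> > Ansible sketch of the last two gaps above. All host group and service names
> > are hypothetical placeholders, just to show the shape of it:
> >
> > ```yaml
> > # Sketch only: rolling restart plus a basic liveness check for one service.
> > # "datanodes" and "hadoop-hdfs-datanode" are placeholder names.
> > - hosts: datanodes
> >   serial: 1          # one node at a time, i.e. a rolling restart
> >   become: true
> >   tasks:
> >     - name: Restart the DataNode service
> >       ansible.builtin.service:
> >         name: hadoop-hdfs-datanode
> >         state: restarted
> >
> >     - name: Basic monitoring - fail the play if the service is not running
> >       ansible.builtin.command: systemctl is-active hadoop-hdfs-datanode
> >       changed_when: false
> > ```
> >
> > That would still leave the initial installation (Kerberos, SSL) as the
> > hard part.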
> >
> >
> > On Wed, 18 May 2022 at 20:07, Battula, Brahma Reddy
> > <[email protected]> wrote:
> >
> > > I was about to ask the same question as @Martin Blom here.
> > > IMO, Ambari is still a good choice for cluster management. I think
> > > most, or at least some, of the people from Bigtop (who use it for
> > > building the packages) still use Ambari.
> > >
> > > I'm planning to gather collective opinion on bringing it back, even
> > > under some other name if not the same one.
> > >
> > > Any thoughts on this?
> > >
> > >
> > > On 18/05/22, 6:28 PM, "李帅" <[email protected]> wrote:
> > >
> > >     Maintaining Ambari is not easy, given its complex architecture. There
> > >     are many configuration management tools, such as Puppet and Ansible,
> > >     that can serve as alternatives. So for me, I will try to use Puppet
> > >     or Ansible to deploy and monitor the cluster. However, using Puppet
> > >     or Ansible without an Ambari-like web UI will be a gap for common
> > >     users.
> > >
> > >
> > >
> > >     Martin Blom <[email protected]> wrote on Wed, May 18, 2022 at 17:04:
> > >
> > >     >
> > >     > Hi all. I'm new here.
> > >     >
> > >     > So at work we provide a service that's been running flawlessly on
> > >     > top of HDP since at least 2014. The recent "events" (which we became
> > >     > aware of only late last year, when it was already too late) had us
> > >     > panicking a bit at first, since we realised we could no longer
> > >     > manage our cluster, but we've since been able to track down repo
> > >     > mirrors, so the immediate crisis is over (no thanks to Cloudera).
> > >     >
> > >     > However, with HDP dead and CentOS 7 nearing EOL, it's clear that we
> > >     > need to move on. While we have evaluated both CDP and GCP Dataproc,
> > >     > and even considered migrating from HBase to Bigtable, Bigtop seems
> > >     > like the only sensible way forward. We just want a small database/MQ
> > >     > backend cluster that keeps running forever to power our service.
> > >     >
> > >     > Which brings us to Ambari. I really liked Ambari, for cluster
> > >     > management obviously but also for monitoring, and it has worked
> > >     > great for us all those years. A lot of work has obviously been put
> > >     > into Ambari, and it seems like such a waste to throw it all away.
> > >     >
> > >     > What are Bigtop's plans for Ambari, the upstream project being
> > >     > retired and all?
> > >     >
> > >     > For us, we would like to get to the point where, in 2024 when
> > >     > CentOS 7 goes EOL, we can bring up a new cluster on Rocky 8 with
> > >     > HDFS, Yarn, HBase, Kafka and Zookeeper, using Ambari to manage,
> > >     > monitor and keep it up and running like that until at least 2029
> > >     > when RL8 retires. It doesn't seem *that* difficult given the HDP 3.1
> > >     > vs Bigtop 3.0.1 BOMs for the components we need. We would also be
> > >     > willing to put some time into making that happen.
> > >     >
> > >     > What are your thoughts on that? Is anybody here still interested in
> > >     > Ambari? Has anybody actually been using it to manage Bigtop
> > >     > components?
> > >     >
> > >     > PS. Also cross-posting to user. Anybody else in my situation? What
> > >     > are you all using to monitor your clusters once deployed?
> > >     >
> > >     > --
> > >     > Martin Blom
> > >     > [email protected]
> > >     >
> > >     >
> > >
> > >
> >
> 
