Github user matyix commented on the issue:

    https://github.com/apache/spark/pull/19775
  
    @GaalDornick @erikerlandson @jerryshao @felixcheung et al.
    
    We have given up on this - we made the requested changes several times, and I am 
not willing to put more time into this and get caught in the middle of a debate 
that is not my concern. Currently the Spark monitoring architecture is what it 
is - and we made the PR to align with the current architecture of the existing 
sinks and metrics subsystem. What happened is that the debate is now not about 
whether this is good or needed, but about whether it should be part of Spark 
core, whether it should be pluggable, whether the whole metrics subsystem 
should be refactored, etc. Most likely this will still be the case later; once 
these questions are nailed down or agreed on by all parties, I can rework and 
resend the PR... 
    
    Anyway, we (and our customers) have been using this in production for months - 
we have externalized it into a separate jar which we put on the classpath, so 
it does not need to be part of Spark (though I believe it should be, as 
Prometheus is one of the best open source monitoring frameworks). 
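    For anyone wiring an external sink jar in this way, Spark's standard 
metrics configuration mechanism applies; below is a minimal sketch of a 
`metrics.properties` entry, assuming the sink class is named 
`com.banzaicloud.spark.metrics.sink.PrometheusSink` and that it exposes a 
Pushgateway address property (both names are assumptions - check the sink's own 
documentation):

    ```properties
    # metrics.properties - register the external Prometheus sink for all instances
    # (class and property names below are assumptions, not taken from this PR)
    *.sink.prometheus.class=com.banzaicloud.spark.metrics.sink.PrometheusSink
    *.sink.prometheus.pushgateway-address=prometheus-pushgateway:9091
    ```

    The jar itself would be passed via `--jars` (or placed on the classpath) and 
the config picked up with `--conf spark.metrics.conf=metrics.properties`.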
    
    Should anybody need help using this sink with Spark, drop me a mail at 
[email protected] - happy to help anybody who'd like to use Prometheus with 
Spark. We run pretty advanced scenarios with this sink, all open source - you 
can read more about [Monitoring Spark with 
Prometheus](https://banzaicloud.com/blog/spark-monitoring/) and [Federated 
monitoring of multiple Spark 
clusters](https://banzaicloud.com/blog/prometheus-application-monitoring/). 
    
    Thanks for all the support.
    Janos


---
