Hi Steve -

This is a long overdue DISCUSS thread!

Perhaps the UIs can very visibly state (in red) "WARNING: UNSECURED UI
ACCESS - OPEN TO COMPROMISE" - maybe even force a click through the warning
to get to the page, like SSL exceptions in the browser do?
A similar tactic for UI access without SSL?
A new AuthenticationFilter could be added to the filter chains that blocks
API calls unless explicitly configured to be open, and obviously logs a
similar message?
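As a rough sketch of what that filter's decision logic might look like - the
class name, config flag, and warning text here are all hypothetical, and a
real version would implement javax.servlet.Filter and be wired into the
existing filter chain:

```java
// Minimal sketch of the allow/block decision such a filter could wrap.
// Assumptions: the "explicitly opened" flag would come from a config key
// (name TBD), and blocking would map to a 403 in the real servlet filter.
public class InsecureAccessGuard {

    static final String WARNING =
        "WARNING: UNSECURED API ACCESS - OPEN TO COMPROMISE";

    private final boolean explicitlyOpened;

    public InsecureAccessGuard(boolean explicitlyOpened) {
        this.explicitlyOpened = explicitlyOpened;
    }

    /** Decide whether a request may proceed through the chain. */
    public boolean permit(boolean requestAuthenticated) {
        if (requestAuthenticated) {
            return true;                 // authenticated traffic always passes
        }
        if (explicitlyOpened) {
            System.err.println(WARNING); // let it through, but log loudly
            return true;
        }
        return false;                    // blocked: 403 in the real filter
    }
}
```

So out of the box unauthenticated API calls get blocked, and an admin has
to flip a switch - and eat a loud log message - to open things up.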

thanks,

--larry




On Wed, Jul 4, 2018 at 11:58 AM, Steve Loughran <ste...@hortonworks.com>
wrote:

> Bitcoins are profitable enough to justify writing malware to run on Hadoop
> clusters & schedule mining jobs: there have been a couple of incidents of
> this in the wild, generally going in through no security, well known
> passwords, open ports.
>
> Vendors of Hadoop-related products get to deal with their lockdown
> themselves, which they often do by installing kerberos from the outset,
> making users make up their own password for admin accounts, etc.
>
> The ASF releases though: we just provide something insecure out the box
> and some docs saying "use kerberos if you want security"
>
> What can we do here?
>
> Some things to think about
>
> * docs explaining IN CAPITAL LETTERS why you need to lock down your
> cluster to a private subnet or use Kerberos
> * Anything which can be done to make Kerberos easier (?). I see there are
> some outstanding patches for HADOOP-12649 which need review, but what else?
>
> Could we have Hadoop determine when it's coming up on an open network and
> start warning? And how?
>
> At the very least, single node hadoop should be locked down. You shouldn't
> have to bring up kerberos to run it like that. And for more sophisticated
> multinode deployments, should the scripts refuse to work without kerberos
> unless you pass in some argument like "--Dinsecure-clusters-permitted"
>
> Any other ideas?
>
>
