Hi Sanjay, thanks for the response; replying inline:

> - NN on top of HDSL where the NN uses the new block layer (both Daryn and
> Owen acknowledge the benefit of the new block layer). We have two choices
> here:
>   a) Evolve the NN so that it can interact with both the old and the new
>      block layer, or
>   b) Fork and create a new NN that works only with the new block layer; the
>      old NN will continue to work with the old block layer.
> There are trade-offs, but clearly the 2nd option has the least impact on the
> old HDFS code.

Are you proposing that we pursue the 2nd option to integrate HDSL with HDFS?
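
For concreteness, here's the kind of seam I imagine option (a) would require
(purely illustrative; none of these interfaces exist in HDFS today, and the
names are made up):

    import java.util.List;

    // Hypothetical abstraction the NameNode would code against so that
    // either the existing block manager or an HDSL-backed one can plug in
    // behind it.
    public interface BlockLayer {
      /** Opaque handle to a legacy block or an HDSL container chunk. */
      interface BlockHandle {}

      /** Allocate a new block for the given file path. */
      BlockHandle allocateBlock(String path, long preferredSize);

      /** Datanode locations (host:port strings in this sketch) for the block. */
      List<String> getLocations(BlockHandle handle);

      /** Mark the block finalized once the client finishes writing. */
      void completeBlock(BlockHandle handle);
    }

Option (b) avoids threading an abstraction like this through the existing NN
code, which I assume is why it's described as having the least impact.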


> - Share HDSL's netty protocol engine with the HDFS block layer. After HDSL
> and Ozone have stabilized the engine, put the new netty engine in either
> HDFS or in Hadoop common - HDSL will use it from there. The HDFS community
> has been talking about moving to a better thread model for HDFS DNs since
> release 0.16!!

The Netty-based protocol engine seems like it could be contributed
separately from HDSL. I'd be interested to learn more about the performance
and other improvements from this new engine.
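
For anyone who hasn't looked at Netty's threading: the appeal is that a small
fixed pool of event-loop threads multiplexes all connections, rather than
parking a thread per open socket the way the DN's DataXceiver model does
today. A minimal sketch of that shape (this is generic Netty, not the actual
HDSL engine; the port and the echo handler are placeholders):

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public class NettyEngineSketch {
      public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);     // accepts connections
        EventLoopGroup workers = new NioEventLoopGroup(8);  // serves all of them
        try {
          ServerBootstrap b = new ServerBootstrap()
              .group(boss, workers)
              .channel(NioServerSocketChannel.class)
              .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) {
                  // A real engine would install codec/protobuf handlers here.
                  ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                    @Override
                    public void channelRead(ChannelHandlerContext ctx, Object msg) {
                      ctx.writeAndFlush(msg); // echo back, as a placeholder
                    }
                  });
                }
              });
          b.bind(50099).sync().channel().closeFuture().sync();
        } finally {
          boss.shutdownGracefully();
          workers.shutdownGracefully();
        }
      }
    }

Benchmarks comparing this against the current DataXceiver threading would be
a great addition to whatever write-up accompanies the engine.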


> - Shallow copy. Here HDSL needs a way to get the actual linux file system
> links - the HDFS block layer needs to provide a private secure API to get
> the file names of blocks so that HDSL can do a hard link (hence shallow
> copy).

Why isn't this possible with two processes? Short-circuit reads (SCR), for
instance, securely pass file descriptors between the DN and the client over
a unix domain socket. I'm sure we can construct a protocol that securely and
efficiently creates hard links.
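
As a strawman (class and method names invented for illustration), the DN-side
handler could be as small as this: check a block token, resolve the block
file internally, and perform the link itself, so the raw path never leaves
the DN process:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // Strawman only; nothing here is an existing HDFS API.
    public class ShallowCopyHandler {

      /** Reject the request if the token doesn't authorize access to blockId. */
      private void checkBlockToken(long blockId, byte[] blockToken) throws IOException {
        // Token verification would go here, analogous to the block token
        // checks we already do for short-circuit reads.
      }

      /** Resolve the block's on-disk file; DN-internal, never sent to the caller. */
      private Path resolveBlockFile(long blockId) {
        return Paths.get("/data/dn/current", "blk_" + blockId); // illustrative layout
      }

      public void hardLinkBlock(long blockId, byte[] blockToken, String targetDir)
          throws IOException {
        checkBlockToken(blockId, blockToken);
        Path source = resolveBlockFile(blockId);
        Path link = Paths.get(targetDir, source.getFileName().toString());
        // Hard links only work when source and target are on the same volume,
        // so the DN would also need to pick a target directory accordingly.
        Files.createLink(link, source);
      }
    }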

It also sounds like this shallow copy won't work with features like HDFS
encryption or erasure coding, which diminishes its utility. We also don't
even have HDFS-to-HDFS shallow copy yet, so HDFS-to-Ozone shallow copy is
even further out.

Best,
Andrew
