On Wed, Aug 26, 2015 at 03:48:12PM +0000, James wrote:
> Alec Ten Harmsel <alec <at> alectenharmsel.com> writes:
> > 64-bit hardware with the no-multilib profile[1]. I have no "-bin" packages
> > on my system, nor do I run any pre-built 3rd party applications, so I
> > waste no time compiling worthless 32-bit libraries. Therefore, I need
> > grub 2.
> 
> Ok this is interesting. Is this only an AMD64 thing? On Arm64 you'd
> most likely want to run 32 bit binaries.

I don't know anything about arm64, but if it is 64-bit, why would you
need 32-bit binaries?

> This is profile [11], right?
> 
>   default/linux/amd64/13.0/no-multilib

Yes.

> I'm OK with this, but what is the benefit of such a profile selection?
> Curiously, I have no experience with profile selection, despite
> running quite a few amd64 systems. What would the benefits be of
> running this profile on older amd64 hardware?

The main benefit is reduced compile time for some packages, since only
the 64-bit versions get built, plus less clutter on the filesystem. If
you do not run any applications that need the 32-bit version of a
library, that 32-bit copy costs disk space and compile time but is
never used.

I am also a bit of a purist, and just run no-multilib because it is
emotionally satisfying.

> > > AMD64 Team; <amd64 <at> gentoo.org>
> > > grub-1 is not available on no-multilib profiles;
> 
> I had not seen this, but I guess this is well documented...?
> Does that profile selection prevent one from selecting grub-1 during
> an installation?

Yes, although just now was the first time I ever tried installing
grub-1.

> OFF TOPIC
> On another note: have you seen spark-1.5 ? Cleaner build?
> http://apache-spark-developers-list.1001551.n3.nabble.com/Fwd-ANNOUNCE-Spark-1-5-0-preview-package-td13683.html
> ..............................................................

I haven't looked at the new features of 1.5 specifically, but I know
the build process is basically the same. Spark's API is nice, but it is
definitely possible to write a faster job with Hadoop's API since it is
lower-level and can be optimized more, so that is where I spend most of
my time writing jobs.
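
For what it's worth, here is a rough, made-up sketch of what I mean by
"lower-level": a word-count mapper written against Hadoop's mapreduce
API. The class and field names are invented for illustration, but the
pattern of reusing Writable objects across records is the kind of
per-record control a higher-level API hides from you.

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Illustrative word-count mapper; names are invented for the example.
    public class TokenCountMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        // Reused on every map() call, so there is no per-record allocation.
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }

In Spark you would express the same thing with a couple of flatMap/map/
reduceByKey calls, which is much nicer to write, but it leaves fewer of
these low-level knobs exposed.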

Alec
