On Tue, Feb 16, 2016 at 5:22 AM, Martin Panter <vadmium...@gmail.com> wrote:

> On 15 February 2016 at 08:24, Russell Keith-Magee
> <russ...@keith-magee.com> wrote:
> > Hi all,
> >
> > I’ve been working on developing Python builds for mobile platforms, and
> I’m
> > looking for some help resolving a bug in Python’s build system.
> >
> > The problem affects cross-platform builds - builds where you are
> compiling
> > python for a CPU architecture other than the one on the machine that is
> > doing the compilation. This requirement stems from supporting mobile
> > platforms (iOS, Android etc) where you compile on your laptop, then ship
> the
> > compiled binary to the device.
> >
> > In the Python 3.5 dev cycle, Issue 22359 [1] was addressed, fixing
> parallel
> > builds. However, as a side effect, this patch broke (as far as I can
> tell)
> > *all* cross platform builds. This was reported in issue 22625 [2].
> >
> > Since that time, the problem has gotten slightly worse; the addition of
> > changeset 95566 [3] and 95854 [4] has cemented the problem. I’ve been
> able
> > to hack together a fix that enables me to get a set of binaries, but the
> > patch is essentially reverting 22359, and making some (very dubious)
> > assumptions about the order in which things are built.
> >
> > Autoconf et al aren’t my strong suit; I was hoping someone might be able
> to
> > help me resolve this issue.
> Would you mind answering my question in
> <https://bugs.python.org/issue22625#msg247652>? In particular, how did
> cross-compiling previously work before these changes? AFAIK Python
> builds a preliminary Python executable which is executed on the host
> to complete the final build. So how do you differentiate between host
> and target compilers etc?

In order to build for a target platform, you have to compile for the local
platform first - for example, to compile an iOS ARM64 binary, you first have
to compile for OS X x86_64. This gives you a local-platform version of
Python that you can use when building the iOS version.

Early in the Makefile, the variable PYTHON_FOR_BUILD is set. It points at a
version of Python that can be invoked on the local machine, and it is used
for module builds and for compiling the standard library source code. It is
set up via the --host and --build flags to configure, plus CC and LDFLAGS
environment variables that point at the compiler and libraries for the
platform you're compiling *for*, and a PATH variable that provides the local
platform's version of Python.
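
Concretely, the cross-build configure step looks something like this (a
sketch only - the SDK paths and the exact --host triple are illustrative,
though the ac_cv_file_* cache overrides are the usual way to answer
configure checks that can't be run on the build machine):

```sh
# Illustrative cross-compile configure invocation for an iOS ARM64 target.
# A native build of the same Python version must already be on PATH.
export PATH=/path/to/native/python/bin:$PATH

CC="xcrun --sdk iphoneos clang -arch arm64" \
LDFLAGS="-miphoneos-version-min=7.0" \
./configure --build=x86_64-apple-darwin \
            --host=aarch64-apple-darwin \
            ac_cv_file__dev_ptmx=no \
            ac_cv_file__dev_ptys=no
```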

There are two places where special handling is required: the compilation
and execution of the parser generator, and _freeze_importlib. In both
cases, the tool needs to be compiled for the local platform, and then
executed. Historically (i.e., Py3.4 and earlier), this has been done by
spawning a child MAKE to compile the tool; this runs the compilation phase
with the local CPU environment, before returning to the master makefile and
executing the tool. By spawning the child MAKE, you get a “clean”
environment, so the tool is built natively. However, as I understand it, it
causes problems with parallel builds due to race conditions on build rules.
The change in Python 3.5 simplified the rules so that child MAKE calls
aren't used, but as a result pgen and _freeze_importlib are compiled for the
target (e.g., ARM64), so they won't run on the local platform.
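
The old child-MAKE pattern looked roughly like this (a simplified sketch,
not the actual Makefile rule - the Parser/pgen path is real, but the
variable names are illustrative):

```make
# Simplified sketch of the pre-3.5 pattern. The recursive $(MAKE) builds
# pgen as a separate step before the outer rule executes it; per the
# description above, this is what kept pgen runnable on the build machine.
GRAMMAR_INPUT= Grammar/Grammar
GRAMMAR_C= Python/graminit.c
GRAMMAR_H= Include/graminit.h

$(GRAMMAR_C) $(GRAMMAR_H): $(GRAMMAR_INPUT)
	$(MAKE) Parser/pgen
	Parser/pgen $(GRAMMAR_INPUT) $(GRAMMAR_H) $(GRAMMAR_C)
```

The 3.5 fix collapsed this into a single dependency chain (the grammar files
depend directly on Parser/pgen), which removed the parallel-build race but
also removed the separation between building the tool and running it.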

As best as I can work out, the solution is to:

(1) Include the parser generator and _freeze_importlib among the artefacts
of the local platform build. That way, you could reuse the versions of pgen
and _freeze_importlib that were compiled as part of the local platform
build. At present, pgen and _freeze_importlib are used during the build
process, but aren't preserved at the end of it; or

(2) Introduce some concept of the "local compiler" into the build process,
which can be used to compile pgen and _freeze_importlib.
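
Option (1) would amount to a two-stage flow along these lines (a sketch
under stated assumptions: the Parser/pgen and Programs/_freeze_importlib
target paths match the source layout, but the PGEN/FREEZE_IMPORTLIB-style
make overrides are hypothetical - today's Makefile hard-codes the
just-built, cross-compiled binaries):

```sh
# Stage 1: native build; the build-time tools are produced as a side effect.
mkdir native && (cd native && ../configure && make)

# Stage 2: cross build, pointing the Makefile at the native tools.
# The override variables here are hypothetical; they don't exist yet.
mkdir cross && cd cross
CC="xcrun --sdk iphoneos clang -arch arm64" \
../configure --build=x86_64-apple-darwin --host=aarch64-apple-darwin
make PGEN=../native/Parser/pgen \
     FREEZE_IMPORTLIB=../native/Programs/_freeze_importlib
```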

There might be other approaches that will work; as I said, build systems
aren’t my strength.

Russ Magee %-)
Python-Dev mailing list