Probably the best option is to ping the llvm-dev mailing list.

Another possible approach would be to improve Julia's static compilation
support and add cross-targeting: switch the backend to BG/Q and compile
compute kernels into a shared library that doesn't need (most of) the
Julia runtime.


On Thu, Jun 5, 2014 at 10:19 AM, Justin Wozniak <[email protected]>
wrote:

> Thanks, I will try to get in touch with him.
>
> On Wednesday, June 4, 2014 11:17:43 AM UTC-5, Keno Fischer wrote:
>
>> PowerPC shouldn't be a problem. The real problem on BG/Q is that you
>> can't allocate extra executable memory after the program has started (or so
>> I've been told). When I last talked about this with Hal Finkel (he works on
>> BG/Q support for LLVM), he thought that it ought to be possible to just
>> allocate extra writable/executable memory at program start and just JIT
>> into that, but I'm not sure if there have been any updates on that.
>>
>>
>> On Wed, Jun 4, 2014 at 11:50 AM, Justin Wozniak <[email protected]>
>> wrote:
>>
>>> Hi all
>>>     I am trying to call Julia from the Swift language (
>>> https://sites.google.com/site/exmcomputing/swift-t) and run it on large
>>> computers like the Blue Gene/Q.  (This technique currently allows us to run
>>> Python, R, and Tcl on many cores.)  I have been able to get the basic
>>> embedded Julia API working from Swift on a PC but am looking for tips for
>>> other architectures.  Based on my initial attempts and previous threads on
>>> this list it looks like the various library dependencies are the main
>>> challenge.  Has anyone else been able to get Julia running on a Blue Gene,
>>> PowerPC, ppc64, or anything like that?  If I were to dive in and start
>>> modifying the Julia build system scripts, are there any known issues,
>>> workarounds, or blockers?
>>>     Justin
>>>
>>>
>>
