On Sun, May 6, 2012 at 9:14 AM, Nicolas Cellier <[email protected]> wrote:
> 2012/5/6 Igor Stasenko <[email protected]>:
> > On 6 May 2012 17:08, Nicolas Cellier <[email protected]> wrote:
> >> 2012/5/6 Guillermo Polito <[email protected]>:
> >>>
> >>> On Sun, May 6, 2012 at 2:16 PM, Esteban Lorenzano <[email protected]> wrote:
> >>>>
> >>>> <snip>
> >>>> I'm more worried about having all-platforms-specific stuff inside the image...
> >>>> but we can mitigate that with Fuel, and by making loadable packages when
> >>>> running the image... I don't know, I'm just thinking while writing, so this
> >>>> is probably stupid :)
> >>>>
> >>> Just think how many times you took a development image and used it on
> >>> several platforms. At least I don't. The same happened when I used Eclipse:
> >>> I didn't share my Eclipses between systems. I even had several Eclipse
> >>> installations with their own plugins (just like images, hehe).
> >>>
> >>> Probably with Jenkins, Metacello, and kernel/bootstrap we can generate
> >>> distributions per platform (with the possibility of an all-in-one
> >>> distribution for those who like that).
> >>>
> >>> Guille
> >>
> >> Yes, I understand that we can live without this feature...
> >> - If we can reconstruct images easily (one of the goals of Pharo) - I
> >>   mean not only code, but any object (eventually with Fuel)
> >> - If we solve the bootstrap problem (or if we can still prepare an
> >>   image for cross-platform startup)
> >> - If we don't forget to always talk (send messages) through an abstract
> >>   layer, and never directly name the target library.
> >>
> >> Since I didn't have all these tools in the past, I was forced to use
> >> development images across different platforms a lot, and yes, it did
> >> not follow the mainstream rules (a la "we can reconstruct everything
> >> from scratch"), but it was damn powerful.
> >> For deploying applications, it is also very powerful and cheap.
> >> Personally, I would feel sore to lose it.
> >>
> > But look at the root of what we are talking about: N bytes in the VM
> > versus M bytes in the image to support certain functionality.
> > I think if you need it, you will make sure that those bytes are there
> > and properly packaged with your application.
>
> Unfortunately, it's more than moving code...
> What I mean is that when I need to pass an O_NONBLOCK flag to an FFI
> call, it's going to be a problem, because I have to know how this
> information is encoded on each and every platform I want to support.

But there are solutions to this which mean you *don't* have to know. I wrote a prototype for VisualWorks that maps a SharedPool to these externally defined variables. Here's how it works.

For each group of C constants, e.g. I/O constants, one populates a subclass of SharedPoolForC with the variables one wants to define, and in a class-side method one defines the set of include files per platform that one should pull in to evaluate the constants. SharedPoolForC has code in it to automatically generate a C program and compile it, e.g. to produce a shared library/DLL for the current platform. The C program is essentially a name-value dictionary that maps from the abstract name #O_NONBLOCK to the size and value for a particular platform. SharedPoolForC also contains code to load the shared library/DLL, extract the values, and update the pool variables automatically.

The deployment scheme is as follows: at start-up the system asks each SharedPoolForC subclass to check the platform and see if it has changed. If it hasn't changed, nothing needs to happen. If it has changed, the system attempts to locate the shared library/DLL for the current platform (the platform name is embedded in the DLL's name) and updates the pool variables from that DLL, raising an exception if it is unavailable (and the exception could be mapped into a warning or an error to suit). So to deploy, e.g., a one-click, one needs to generate the set of DLLs for the platforms one wants to deploy on.

The development scheme is simply to run a method on the SharedPoolForC when one adds some class variables and/or changes the set of include files, which turns the crank, generating, compiling, and loading the C file to get the value(s) for the new variable(s).

An alternative scheme would generate a program that would print e.g. STON, which could be parsed or evaluated to compute the values. This would have the advantage that the definitions of the values are readable and editable by mere humans. So I think I'd discard the shared library/DLL approach and keep it simple.

> At least, in C code, I just care about the symbolic information and
> have a relatively portable sentence.
> To me that's one of the highest hurdles with FFI, because this is the
> kind of complexity I wish I never had to care about.
> That's just a flavour of #define/#ifdef hell.
> It can be worse if you want to interface with IPC (which has lots of
> different flavours).
> The same goes for functions defined by macros that just use machine-specific
> structure layouts... We can no longer use these structures as opaque
> handles.
>
> > You can ship your product and use it on multiple platforms with ease,
> > granted that the appropriate platform-specific code is loaded into your
> > image.
> > With distribution using VMs it's a bit of a different story: it is a
> > barrier with a high entry cost. Especially if you think about all those
> > RPMs, which are controlled by third-party maintainers. It is not that
> > easy to control them directly, and much, much slower if you need
> > to deal with some problems.
>
> I agree on the principle; I always prefer to develop with FFI rather than
> hack the VM, as I'm far, far more efficient at the former.
> Nonetheless, I don't think FFI can magically solve all our problems.
> In certain ways it can make them worse.
>
> Nicolas
>
> >> Nicolas
>
> > --
> > Best regards,
> > Igor Stasenko.
--
best, Eliot
