Nicolas Cellier wrote:
2012/6/3 Ben Coman <[email protected]>:
Igor Stasenko wrote:
On 30 May 2012 16:04, Ben Coman <[email protected]> wrote:
What ideas are floating around about mixing open source and closed source using Pharo? I am implementing an IEC Standard object model for electrical power systems to provide a platform for developing electrical applications. I am considering the case where a company may maintain the model of their electrical power distribution network in the open source platform, but then buy various commercial plug-ins to perform different calculations upon the shared model. Here are the options I can imagine...
1. Using Fuel to load binary packages within the one image, without the source. This is currently available technology, but viewing and decompiling bytecode is still possible - to what degree this enables reverse engineering I am not sure.
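For concreteness, a minimal sketch of option 1 using Fuel's convenience API (FLSerializer / FLMaterializer are Fuel's entry points; NetworkModel is a hypothetical domain class, and shipping compiled classes/methods rather than plain objects would need Fuel's class-serialization support, which isn't shown here):

  | model loaded |
  model := NetworkModel new.                                "hypothetical domain class"
  FLSerializer serialize: model toFileNamed: 'model.fuel'.  "write a binary snapshot, no source"
  "later, in the receiving image:"
  loaded := FLMaterializer materializeFromFileNamed: 'model.fuel'.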
The decompiler is able to fully reproduce the source code of a method. Only the variable names are lost, but you can see everything else quite clearly.
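For example, assuming the stock Decompiler, recovering near-source from bytecode is a one-liner (the exact output formatting may vary between versions):

  "temporaries and arguments come back with generic names like t1, t2"
  (OrderedCollection >> #add:) decompile.
  "prints something close to:  add: t1  ^ self addLast: t1"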
2. Having VM support for restricting the displaying/decompiling of bytecode. To avoid the ease of switching to another VM without this restriction, the Fuel package could be encrypted with a key matching one compiled into the required VM.
This is not an option. The current Debugger implementation implies that you have access to CM bytecodes.
A required implication of (2.) would be that you could not debug "through" that package. While this might be unfortunate from an open-source debugging viewpoint, in comparison to having the whole application delivered closed-source with development tools stripped, I would still consider this a step up. Ignoring the current implementation of the Debugger, could something like this be reasonably achievable?
To expand on this with a specific use case... The VM could internally generate a public/private key pair. When requesting a plug-in from an App Store, the public key is sent, which the App Store uses to encrypt the bytecode of the plug-in. Once downloaded into the image, upon execution the VM receives the encrypted bytecode, decrypts it with its private key and caches the decrypted bytecode internally, such that it is never visible to the image.
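None of that VM support exists today, so the following is a purely hypothetical sketch of the proposed flow; every class and primitive named below (EncryptedPluginLoader, AppStoreClient, and the messages they receive) is invented for illustration:

  | keyPair encryptedPackage |
  keyPair := EncryptedPluginLoader vmGenerateKeyPair.        "hypothetical VM primitive"
  encryptedPackage := AppStoreClient new
      requestPlugin: 'LoadFlowCalculator'
      encryptFor: keyPair publicKey.                         "hypothetical App Store API"
  "decryption with the private key and caching of the plain bytecode
   would happen inside the VM, never visible to the image"
  EncryptedPluginLoader vmInstall: encryptedPackage.         "hypothetical VM primitive"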
Then what prevents an attacker from dumping process memory and retrieving the bytecodes from well-known image structure patterns, or with a custom image tracer?
Nicolas
Ultimately, at that level there is no security. This only raises the level of difficulty by requiring a certain conjunction of skill, motivation and ethics. It is no worse than the same attack targeted at option (3.). However, there is now some incentive for commercial companies to invest the effort into releasing an App to execute on top of an otherwise open-source platform. The advantage is that if ten plugins were required to do proprietary calculations on one dataset and integrate the results into a single screen, then running within the one image seems more elegant than ten images running on ten VMs.
(btw, I'm assuming CM bytecodes was a typo meant to be VM bytecodes? or
otherwise what is CM?)
3. Running multiple images on a single VM, such that the VM passes message sends efficiently between the two images like an "enterprise bus" - one image open source and one closed source. Any common base objects between the images might be shared on a copy-on-write basis.
What are your thoughts?
I think option 3 is the most viable:
you can communicate between the two images as between two parties who don't trust each other (so the images should handshake, use encryption key(s), and try to log in to one another); then, if that succeeds, one could see the source code, use a remote debugger, etc.
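Roughly what that might look like from the open-source image - RemoteImageBus and everything it is sent are invented for illustration and do not exist in Pharo, and platformEncryptionKey / networkModel stand in for whatever the open image already holds:

  | bus session result |
  bus := RemoteImageBus connectTo: 'closed-plugins.image' port: 4242.  "hypothetical bus endpoint"
  session := bus handshakeUsing: platformEncryptionKey.                "mutual authentication, agree on a session key"
  result := session send: #calculateLoadFlowOn: with: networkModel.    "message forwarded across images"
  "only after the handshake succeeds would the closed image expose source or a remote debugger"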
Nothing new under the sun...
cheers -ben