Whatever technology you choose, it's definitely a step forward IMHO.
JSON-RPC sounds really good.

But if I may suggest, from a user's perspective: I'm not sure about the
idea of "polluting" your great functional scripts with objects. I
personally feel a bit confused when the two styles are mixed together.
But I may very well be the only one in this case.

-- 
best regards,

okay_awright
<okay_awright AT ddcr DOT biz>
[PGP key on request]


On 25/07/2011 17:57, Romain Beauxis wrote:
> 2011/7/25 David Baelde <david.bae...@ens-lyon.org>:
>> Hi Romain,
>>
>> On Mon, Jul 25, 2011 at 12:13 AM, Romain Beauxis <to...@rastageeks.org> 
>> wrote:
>>> Our ultimate goal is to export everything that we used to do through
>>> telnet using json-rpc. The nice idea would be to have a command such
>>> as:
>>>  rpc.register(function)
>>> which would use the function's type to deduce which type of json-rpc
>>> request corresponds to it. For instance, a function "foo" of type:
>>>  (int,float) -> int
>>> would translate into a request of the form:
>>>  { method: "foo", params: { unnamed: [1, 3.14] }, ... }
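>>> To fix ideas, a complete JSON-RPC 2.0 exchange for foo(1, 3.14) could
>>> look like the rough OCaml sketch below (names, error handling and the
>>> use of Yojson are only assumptions, and params are passed by-position
>>> here rather than wrapped in an object):
>>>  (* Request:  {"jsonrpc": "2.0", "method": "foo", "params": [1, 3.14], "id": 1}
>>>     Response: {"jsonrpc": "2.0", "result": 4, "id": 1} *)
>>>  let foo n x = n + int_of_float x  (* stand-in for a (int, float) -> int function *)
>>>  let handle (req : Yojson.Safe.t) : Yojson.Safe.t =
>>>    match req with
>>>    | `Assoc fields ->
>>>        let id = try List.assoc "id" fields with Not_found -> `Null in
>>>        (match List.assoc_opt "params" fields with
>>>         | Some (`List [ `Int n; `Float x ]) ->
>>>             `Assoc [ "jsonrpc", `String "2.0"; "result", `Int (foo n x); "id", id ]
>>>         | _ ->
>>>             `Assoc [ "jsonrpc", `String "2.0";
>>>                      "error", `Assoc [ "code", `Int (-32602);
>>>                                        "message", `String "Invalid params" ];
>>>                      "id", id ])
>>>    | _ -> `Null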
>>
>> Here you're "translating" a liquidsoap function type to one possible
>> call, namely f(1,3.14). Does JSON-RPC have a notion of type, or
>> something that lets clients explore available services / methods? This
>> is what our types would translate to.
>>
>> As a bonus (this is probably asking too much for now): is there a
>> ready-made application for browsing a service described in this way?
>> Using a simple standard is the top priority, but it's even better if
>> it comes with more tools than just libraries in major scripting
>> languages, so that non-programmers can use it more easily.
> 
> I don't think this is provided by json-rpc. I was more thinking of
> extending the current "help" command to return this information in
> json mode.
> There might exist a protocol that does that though.
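> Just to sketch what I mean (the exact shape is completely open), the
> json-mode "help" could simply list each exported command together with
> its liquidsoap type, e.g. built with Yojson on our side:
>   let help : Yojson.Safe.t =
>     `Assoc [ "commands",
>              `List [ `Assoc [ "name", `String "foo";
>                               "type", `String "(int, float) -> int" ] ] ]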
> 
>>> Getting rid of the current mess
>>> ===============================
>>
>> I'll focus more on that part. The main idea is to unify the server
>> interface and the scripting language. Calling a command will be the
>> same in the script language and the server interface: if you want to
>> skip the "before" source in your transition you'll write before#skip,
>> and the server will be a remote REPL (read-eval-print loop, similar to
>> liquidsoap --interactive) in which you'll write the same source#skip.
>>
>> I use the notation object#method(...) rather than object.method(...)
>> for simplicity, since we already use the dot as part of normal
>> identifiers. It would also be nice to give more structure to this use
>> of the dot, introducing a notion of namespace or module, but it makes
>> sense to keep that separate from objects/methods.
>>
>> One question that puzzles me is how to address sources in the server
>> interface. Seeing it as a REPL made me consider things like using the
>> environment at the end of the (final) script as the environment for
>> the REPL: source definitions that are visible at toplevel at the end
>> of the script are accessible under the same name in the server.
>> However this is too limiting as this does not make it possible to
>> access dynamically created sources, and modifications of that idea
>> haven't yielded anything convincing, except addressing sources by
>> their ID like we do currently. This would be an okay solution as a
>> first step, but it has its problems: (1) default IDs are often ugly
>> and explain little and (2) all sources would be accessible through the
>> server interface, even if they have nothing else to offer than
>> source#id and source#skip. As an alternative, we might require the
>> explicit addition of a source to the server interface, for example by
>> setting an explicit ID. I suspect that this won't be very practical --
>> and it only solves the ID issue in the server, not in the logs.
>> Another nice addition would be a graphical view of the source graph,
>> with ID annotations, to identify who's who -- for the logs, this would
>> require being able to view the graph at a given point in time.
> 
> That's a good question. I think that we should have something
> backward-compatible, at least at first, where sources register the same
> functions as they used to, and a global setting that allows disabling
> this.
> 
> Once the user has disabled the automatic registration, I think it
> should simply be his duty to pick and register the things he wants.
> However, we may provide an "export common stuff" helper that would
> export everything that used to be exported (queues, etc.).
> 
>>> However, this approach is very limited for two reasons:
>>>  * Some functionalities are intrinsic to some specific sources, for
>>> instance the queues in a request source
>>>  * The current approach is ugly and ad-hoc: insert_metadata(source)
>>> returns a new source and a function to insert metadata. This is
>>> cumbersome, bloats the type of the function and is just not very
>>> elegant..
>>
>> I will add that the server is string-based, and it's annoying to do
>> server.execute("<id>.<method> #{param}") instead of
>> source#method(param). Parsing the returned string must be simplified
>> as well: for example, to obtain a list of requests from
>> <source>.queue, we currently have to split a string, convert its
>> components to integers, and get the requests from them.
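>> Concretely, the caller ends up doing something like the following (an
>> OCaml rendering of the string juggling, not actual liquidsoap code; the
>> separator is only an assumption):
>>  (* Split the reply of <source>.queue -- assumed here to be a
>>     space-separated list of request ids -- and convert each id. *)
>>  let parse_queue (reply : string) : int list =
>>    reply
>>    |> String.split_on_char ' '
>>    |> List.filter (fun s -> s <> "")
>>    |> List.map int_of_string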
> 
> Very much..
> 
>>> The idea proposed by David some time ago was to lift the liquidsoap
>>> language and add some object-oriented aspects. I think this is really
>>> a good idea!
>>>
>>> For instance, instead of writing:
>>>  s = source.on_metadata(f, s)
>>> one would write:
>>>  s.on_metadata(f)
>>
>> What I have in mind is closer to the current style and clearer from
>> the typing point of view.
>>
>> I don't see a problem with on_metadata, because it doesn't export
>> commands. In the snippet above, it sounds like you can add on_metadata
>> handlers to any source a posteriori. Essentially it boils down to
>> integrating the on_metadata facility into the core source class. It
>> may be convenient but it might be a slippery slope: what else should
>> become a core feature, and what remains an operator? Some answers
>> result in a messy model: if you perform metadata rewriting in the same
>> imperative style, and install several on_metadata handlers, you have
>> to know whether rewriting is performed before or after each
>> handler. Anyway, this is a different question from the server
>> interface.
>>
>>> Similarly:
>>>  x = insert_metadata(s)
>>>  insert_function = fst(x)
>>>  s = snd(x)
>>> would be written:
>>>  insert_function = s.insert_metadata
>>> (or even simpler..)
>>
>> This is more like what I had in mind, except that I would keep the
>> insert_metadata operator.
>>
>> s = playlist(...) # create a source
>> s = insert_metadata(s) # install a metadata insertion point
>> ...
>>  # insert metadata
>>  # in arbitrary pieces of code (or in the server)
>>  s#insert([("title","foo"),...])
>>  another_source#skip
>>  ...
> 
> The idea of registering a method is probably the best indeed.
> On_metadata was just an example and I agree that it's not really
> relevant at this point.
> 
>> (By the way, I realize now that object#method is not a good notation,
>> it creates some confusion with comments...)
>>
>> Keeping an explicit insert_metadata operator, in addition to keeping
>> things similar to the current model, allows us to keep track of
>> available methods: insert_metadata takes a source (with any methods
>> attached) and returns a source with (the base methods and) a special
>> "insert" method. This way, everything is statically known, it'll also
>> make it possible to document available commands together with the
>> documentation API.
>>
>>> Concerning optional functionalities, such as queues in request
>>> sources, I am not sure yet. I think these could be optional methods,
>>> which would return None (or null, or anything else) if they do not
>>> exist..
>>
>> That would be a mess. In the style I propose, only request sources
>> would have the "queue" method, and only request.queue() would have
>> "push", etc.
> 
> Sure. In this case, we'd have to type the methods associated with each
> source, though.
> 
>>> Of course, these are huge changes in the language. They raise many
>>> questions, such as whether we should type the source objects with
>>> their methods or let them be typed "source" as before etc..
>>
>> There are several OO styles, I'm not sure which one is best here.
>>
>> In the nominal style, classes are identified by their name, and
>> related by inheritance relationship. In this style we would have a
>> base source class, and an insert_metadata class providing the same
>> methods plus insertion. An insert_metadata source would be usable
>> everywhere a source is expected. In this style, insert_metadata is not
>> so much a function as a new class whose constructor takes a
>> child source. Instead of having a function of type (source)->source we
>> have a new class which may be written:
>> class insert_metadata =
>>  inherit source
>>  constructor insert_metadata(source)
>>  method insert : (metadata)->unit
>> end
>>
>> The less common structural style is the one used in OCaml. Here a
>> class is described by its structure, i.e. a set of available methods.
>> Two classes with the same methods are identified. Inheritance is a way
>> to build new classes, but at the level of types only subtyping
>> matters: if class A offers the same methods as class B (and possibly
>> some more) then A is a subtype of B and it may be used where B is
>> expected. In this style, insert_metadata is still a function but it
>> returns an object, which has all the methods of the base source class,
>> plus one:
>> insert_metadata : (source) -> object extend source with insert :
>> (metadata)->unit end
>>
>> The notation may be improved. The "extend" is a way of avoiding having
>> to write out all the methods of the base source class.
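>> In plain OCaml, the structural flavour would look something like the
>> toy sketch below (the method bodies are stubs, and base_source /
>> insert_metadata here are our own toy definitions, not liquidsoap
>> internals):
>>  type metadata = (string * string) list
>>  let base_source id = object
>>    method id : string = id
>>    method skip : unit = ()
>>  end
>>  (* The inferred type is < id : string; skip : unit; insert : metadata -> unit >:
>>     the methods of the argument we chose to forward, plus insert. *)
>>  let insert_metadata s = object
>>    method id = s#id
>>    method skip = s#skip
>>    method insert (_m : metadata) : unit = ()
>>  end
>>  (* Anything offering at least id and skip is accepted where a "source"
>>     is expected, including the extended object. *)
>>  let describe (s : < id : string; skip : unit; .. >) = print_endline s#id
>>  let () = describe (insert_metadata (base_source "playlist_1"))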
>>
>> I believe the structural style is conceptually simpler and fits well
>> with "light" OO approaches that would make it very easy to quickly add
>> user-defined methods to a source.
>>
>> In the nominal style, everything is abstract in a sense, while the
>> structural style makes abstraction less natural. Concretely, my
>> problem is with the source type: what is it? In the structural style,
>> if we define it as "object method skip : unit, method id : string
>> end", ie. anything which offers a skip and ID methods, then one can
>> implement something of type source that has nothing to do with
>> streaming. Having an abstract base class source is not something that
>> OCaml would allow as far as I know -- it would be something like a
>> private class, whose methods are known but whose implementation is
>> hidden and which cannot be implemented in another way.
>>
>> In the end, we can also invent our own style. For example, I'm toying
>> with the idea of attaching methods to any value, be it an integer or a
>> source. This way we can have a structural style and naturally keep an
>> abstract source type: source would be totally abstract, as is the case
>> currently; most operators would return a source with a few common
>> methods such as skip; other operators would add their own specific
>> method such as insert.
> 
> Yeah. Honestly, I am not sure how much we actually want to import OO
> aspects into liq scripts. To me, having an "object-oriented feeling"
> is quite enough, and I don't think we need all the burden of
> inheritance, extensions, etc.
> 
> I totally agree with your idea of simply allowing methods to be
> attached to any object. This is pretty much how JS works on the surface
> and I find it simple enough.
> 
> One feature that I like from JS and Ruby that we could think about is
> the possibility to attach a method to a source at run-time:
>   s#foo = fun () -> (...)
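> Implementation-wise I imagine something fairly simple, e.g. each value
> carrying a mutable table of named methods (a very rough sketch, all
> names invented):
>   (* A value plus a string-indexed table of attached methods. *)
>   type value =
>     | Int of int | Float of float | Str of string
>     | Fun of (value list -> value)
>   type obj = { v : value; methods : (string, value) Hashtbl.t }
>   let set_meth o name f = Hashtbl.replace o.methods name (Fun f)
>   let get_meth o name = Hashtbl.find_opt o.methods name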
> 
>>> My concern here is that, although all these ideas are really exciting,
>>> we are currently supposed to be preparing a stable release. It seems
>>> to me that all those ideas would take way too much code and too many
>>> changes right now, and would push a stable release even further back.
>>
>> As said before, I won't start now. Fixing bugs should be the priority,
>> and I don't mind if 1.0 features the current server interface style.
>> However, I don't think it'll be such a big change, not the kind of
>> change that compromises stability anyway. We'll discuss later if it
>> comes with a 2.0 bump or not ;)
> 
> Most of the changes would be in the parsing and typing, which is your
> department, so I take your word for it here :)
> 
> I'm all for a 1.0 release before these changes, and as soon as possible :)
> 
>>> Please, do comment. Once we have a sort of agreement, I'll open some
>>> tickets and sub-tasks in the bug tracker so that we can keep track of
>>> the discussion :)
>>
>> The mailing list (savonet-devl perhaps) seems better for preliminary
>> discussion IMO.
> 
> Yup, moving the discussion there.. I've left a CC to -users but feel
> free to remove it in the next response :)
> 
> Romain
> 
