I'm forwarding this mail to the list; it seems some of my messages to Helge get lost in the wires. Given the opportunity to revise, I've shifted the emphasis slightly.
> you fail to explain whats wrong with using the existing approach.
> Which means reusing already available and tested infrastructure with
> no additional work.

There is nothing wrong with the existing approach itself. But the other
approach seems superior to me. Thus the better becomes the enemy of the
good.

> You seem to be focused on going "your" way

That may be true, too...

> despite the fact that even JSON-RPC spec dropped that approach.

I would like to understand why. I got the impression they focus on
Internet applications for JSON-RPC and do not see any other uses for it.
I thought I saw some. On the other hand, the lack of the communication
layer provided by HTTP might make JSON-over-streams useful in so few
cases that it becomes uninteresting.

> Now thats your choice and I won't attempt to convince you

No, no, this was and is a very helpful and educating discussion for me;
there is nothing wrong with convincing. But you haven't convinced me. I
expected you to say something like "JSON-over-streams won't work because
error recovery is missing or hard to do", or give some other reason why
it won't work in practice. Right now you are saying that it's possible,
but gives no real benefit over the existing and robust XML-RPC. I think
I can see some benefits.

> a) if speed is a concern, I would use a binary protocol on a socket,
> possibly using shared-memory instead of sockets (you said local IO)

There is the 32/64-bit mismatch problem, so I think it would be much
better to serialize. If I serialize, shared memory would do essentially
the same as local sockets (it seems? how are they implemented?), only
sockets give a simpler programmatic interface and can easily be switched
to inet sockets if the need arises in the future - again, just for LAN.
Yes, a binary protocol over sockets looks better. Only I need to devise
one that can handle recursive structures similar to property lists. It
seemed that JSON (being concise!) gives me just that, for the small
price of converting to ASCII the stuff that's 70% ASCII already. And for
extension over LAN a character encoding is wise - no byte-order
problems etc.

> b) if its fast enough, I would use XML-RPC JSON-over-streams seems to
> make no sense for your setup in any case, since its also slow ...

XML is less fun ;) I think JSON-over-streams has the potential of being
much faster than XML-RPC, since all the HTTP stuff is not there and
local sockets are allowed. I got the impression that XML-RPC always
works over TCP/IP, even within one machine - is this true? More
importantly, it's a solution that does not carry machinery I do not
need, and is therefore cleaner inside. Thus JSON-over-streams looks to
me like an in-between solution that attains a good balance: faster than
XML-RPC, more flexible than a binary protocol.

> ... and its not an accepted standard either.

Well, it's again a question of balance. Are we allowed to deviate from
accepted standards? Shall we try to make breakthrough programs, but only
using standard blocks? Besides, JSON-RPC *is* a standard, only not as
widespread as XML-RPC. And the breadth of acceptance does not always
guarantee quality: the "most accepted" Linux toolkit is gtk, yet somehow
we are on a GNUstep list ;-))

I have a feeling that JSON-over-streams is simple and elegant and has
its merits, but I might be quite wrong. I'll try to build a workable
solution to see whether it can work in real life.

> Anyways, I've even pointed you to a toolkit which enhances GNUstep
> with the streaming functionality you want.

Yes, thank you, I'll take a look.

>> I was very confused not seeing the equivalent of getc() in GNUstep.
> Well, as mentioned thats because its usually not required in real
> world applications.

Yes, you gave a very good explanation actually.

Thank you again,
--Tima

_______________________________________________
Discuss-gnustep mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/discuss-gnustep
