On Thursday, April 11, 2013 10:57:44 AM UTC-7, Derek wrote:
>
> TCP is only the transport - you still have your application-specific 
> protocols that you'd need to write. If your logstash gets a message like 
> 'addlog "i like pizza" 07-11-2011' it doesn't know whether 'i like pizza' 
> is an application, a status, or a priority level. So ZeroMQ isn't going to 
> make connecting differing things easier. You still need your application 
> layer messages. For all Logstash would know, 07-11-2011 could be July 11th, 
> or November 7th. If you have a 'version control' ZeroMQ socket, who's to 
> say it's going to use Git? What about Mercurial? Svn? All of those speak 
> different languages, so you won't make anything easier by using ZeroMQ. In 
> fact, you'd make it more difficult since not all those systems would have a 
> ZeroMQ endpoint. Sure, if they had them, then good, but you'd still need to 
> have your application level protocol, and good luck getting all the source 
> control vendors to agree on a single protocol. Also, fwiw I use "fossil" 
> http://fossil-scm.org.
>
>
ZeroMQ is NOT a protocol - it is a messaging library. It already has its 
own wire protocol internally, and it exposes its functionality in the form 
of an API.
It is also NOT a replacement for TCP, and NOT just another 
TCP-kind-of-socket. It is a library built "on top" of TCP/UDP, using a 
software stack.
AMQP IS a protocol, but it is not the same as the one ZeroMQ uses internally.

Of course, you CAN layer a "message-format" on top of that, to agree 
on the semantics "within" the messages themselves.
But that can be a simple matter of reading an application's short 
documentation to know what format it expects its messages in. That will 
exist in most cases anyway, but that is not to say that nothing is gained 
by layering on top of a bare TCP socket... With raw TCP you have to do much 
lower-level work: structuring bytes, defining message sizes, and in effect 
defining a messaging protocol at its lower level - "what defines a 
message body?", "what defines a receiving socket?", "what defines a 
queue?", "what defines a publish/subscribe socket?", and so on. Then there 
is defining "frames" of data packets, making sure it is all bug-free and 
performant, using the existing low-level system APIs in the most efficient 
way to achieve maximum performance, connection pooling, and the list 
goes on...
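
Just to illustrate how much of that you have to invent yourself on a bare 
TCP socket, here is a rough sketch of a made-up length-prefixed framing 
scheme in Python (only an illustration - the format and function names are 
my own, not anything standard):

import struct

def send_message(sock, payload):
    """Prefix each message with its 4-byte length so the receiver can
    find the message boundaries inside the TCP byte stream."""
    data = payload.encode("utf-8")
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_message(sock):
    """Read exactly one framed message back off the stream."""
    header = recv_exactly(sock, 4)
    (length,) = struct.unpack("!I", header)
    return recv_exactly(sock, length).decode("utf-8")

def recv_exactly(sock, n):
    """TCP gives you a byte stream, not messages - keep reading until
    we have the n bytes we asked for."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-message")
        buf += chunk
    return buf

And that is before error handling, reconnects, queuing and all the rest - 
exactly the kind of plumbing ZeroMQ takes off your hands.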

The work that has been done within ZeroMQ is astounding - I think you are 
not giving it its due credit... It is much faster than a "bare-bones" TCP 
socket implemented in pure Python, and it is actually built on top of TCP 
(at the system level it is a compiled C module; it does not go through a 
Python-level TCP socket at all).
Another benefit is that, although it is not a protocol per se, it is 
extremely wide in the number of programming languages it supports, and 
already has bindings for basically any common language in use today.

Here is a good talk about it:
http://blip.tv/pycon-us-videos-2009-2010-2011/pycon-2011-advanced-network-architectures-with-zeromq-4896861

As for Logstash, the whole point of the concept of "centralized logging 
systems" is that every application has its own log format... The whole 
point of using "any" centralized-logging system is to be able to translate 
every application's log format into a consistent interpretation. It is done 
by building plug-ins in the form of a declarative text file, in which you 
describe, for each target application, how its log messages are to be 
interpreted, by "tagging" the components of each message part. After you 
have a plug-in for an application, you add it to a plug-in pool somewhere, 
and others can use it for that application. After a while you get what 
exists today - a library of plug-ins that covers most of the common 
applications out there - you just download the ones you need, and 
everything works. Then Logstash can cross-reference logs from different 
applications into a chronological time-line, so you can see what happened, 
and in what order, in a cross-application kind of fashion.

Here is a great talk about it:
http://www.youtube.com/watch?v=RuUFnog29M4 
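
To make the "tagging" idea concrete, here is a rough sketch of the concept 
in plain Python (Logstash itself uses its own declarative filter files - 
the pattern, the field names and the sample line here are made up just for 
illustration):

import re

# Declarative description of one application's log format: each named
# group "tags" one component of the message.
APACHE_LIKE = re.compile(
    r'(?P<client_ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

line = '127.0.0.1 - - [11/Jul/2011:10:57:44 -0700] "GET /pizza HTTP/1.1" 200 2326'

match = APACHE_LIKE.match(line)
if match:
    event = match.groupdict()   # a consistent, tagged view of the raw line
    print(event["timestamp"], event["status"], event["request"])

Once every application's lines are reduced to tagged fields like that, the 
cross-referencing into one time-line becomes possible.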

The point of using ZeroMQ for this is not to resolve the issue of defining 
a consistent protocol for the "semantics" of the "data" within the messages 
(that will most probably never happen...), but to have a consistent API for 
"establishing" both the "physical connection" AND the "messaging semantics" 
between different applications.
There is a big gap between a TCP socket definition and a 
messaging-semantics definition, and ZeroMQ fills this gap very well. It is 
what "allows" a developer to push the worries of the "physical connection" 
AND the "messaging protocol" down the abstraction stack, so he can focus 
his efforts on bridging the "logical" semantic differences, which will 
always exist.
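
For example, here is roughly what shipping a log line looks like once 
ZeroMQ has taken over the connection and framing worries (a minimal sketch 
assuming the pyzmq bindings, "pip install pyzmq"; the port number and 
function names are made up, and the two sides would run in separate 
processes):

import zmq

def collector():
    """The receiving end - e.g. a central log collector."""
    ctx = zmq.Context()
    pull = ctx.socket(zmq.PULL)
    pull.bind("tcp://*:5556")          # made-up port
    while True:
        line = pull.recv_string()      # one complete message per call
        print(line)                    # hand off to parsing/tagging here

def application():
    """The sending end - any application that wants to ship a log line."""
    ctx = zmq.Context()
    push = ctx.socket(zmq.PUSH)
    push.connect("tcp://localhost:5556")
    # ZeroMQ handles connecting, reconnecting, queueing and framing;
    # the only thing left to agree on is what the message itself says.
    push.send_string('addlog "i like pizza" 07-11-2011')

The library deals with the transport and the message boundaries; the 
"what does the message mean" part stays with the applications, as it 
always will.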
