Thanks for taking the time to answer my questions Ben.
Best Regards,
Joe
On 6/8/2014 9:43 PM, Benjamin Black wrote:
you are talking about the relatively small latency introduced by one
transport vs. another while at the same time saying you want to write
parts in arbitrary languages, including python, and to run it all on a
single machine. this is not a good way to computer.
On Sun, Jun 8, 2014 at 6:38 PM, Benjamin Black <[email protected]> wrote:
the answer is no.
On Sun, Jun 8, 2014 at 6:25 PM, joe roberts <[email protected]> wrote:
Thanks for your response - it is much appreciated. I looked at
Ordasity, but I am not sure it handles writing code / rules in
different languages the way Storm does, which is one of the primary
reasons I wanted to use Storm. Also, regarding scalability: yes, my
plan is to code and test for it from the beginning, but the
requirement is to deploy on one box initially, so Storm also looked
appealing here because it supports both local and clustered setups.
And yes, latency is one of the primary concerns for my client (this
is a game server architecture), which I forgot to mention ... :(.
So, is there no way to achieve a latency of ~100 ms with Storm when
processing tuples via spouts and bolts? I realize this is a loaded
question since it really depends on what the processing logic does,
but assuming that logic is fast, what kind of latency can I expect
from the Storm communication layer itself? Storm looks very
appealing, but if it flat out won't handle < 100 ms latency, then I
cannot use it. If the answer is no, I'll have to keep looking for a
solution.
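To make the question concrete, this is the kind of minimal probe I was
planning to run on a LocalCluster to ballpark the spout-to-bolt overhead
on one box (a rough sketch against the 0.9.x backtype.storm API; the
class and stream names are placeholders I made up):

    import backtype.storm.Config;
    import backtype.storm.LocalCluster;
    import backtype.storm.spout.SpoutOutputCollector;
    import backtype.storm.task.OutputCollector;
    import backtype.storm.task.TopologyContext;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.TopologyBuilder;
    import backtype.storm.topology.base.BaseRichBolt;
    import backtype.storm.topology.base.BaseRichSpout;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;
    import backtype.storm.utils.Utils;

    import java.util.Map;

    public class LatencyProbeTopology {

        // Spout that emits roughly one tuple per millisecond, carrying its
        // emit time. System.nanoTime() is only comparable within one JVM,
        // which is fine for local mode.
        public static class TimestampSpout extends BaseRichSpout {
            private SpoutOutputCollector collector;

            public void open(Map conf, TopologyContext context,
                             SpoutOutputCollector collector) {
                this.collector = collector;
            }

            public void nextTuple() {
                Utils.sleep(1);
                collector.emit(new Values(System.nanoTime()));
            }

            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                declarer.declare(new Fields("emittedAt"));
            }
        }

        // Bolt that prints how long each tuple took to cross the spout->bolt hop.
        public static class LatencyBolt extends BaseRichBolt {
            public void prepare(Map conf, TopologyContext context,
                                OutputCollector collector) {
            }

            public void execute(Tuple tuple) {
                long micros = (System.nanoTime() - tuple.getLong(0)) / 1000;
                System.out.println("spout -> bolt latency: " + micros + " us");
            }

            public void declareOutputFields(OutputFieldsDeclarer declarer) {
            }
        }

        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("probe-spout", new TimestampSpout(), 1);
            builder.setBolt("probe-bolt", new LatencyBolt(), 1)
                   .shuffleGrouping("probe-spout");

            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("latency-probe", new Config(),
                                   builder.createTopology());
            Thread.sleep(30000); // let it run for half a minute, then tear down
            cluster.shutdown();
        }
    }

That would at least tell me the floor for the framework overhead before
any real rule logic is added.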
On 6/8/2014 5:02 PM, Benjamin Black wrote:
tl;dr - no.
the aspect you left out of your list is latency. storm, like
most stream processing systems, is throughput-oriented, not
latency-oriented. think hundreds to thousands of milliseconds
rather than tens. what you've described so far is not a good
candidate for any stream processing system. if your "business
logic" layer is essentially stateless, then have a look at
ordasity: https://github.com/boundary/ordasity
if you intend your system to be distributed, then build it
distributed from day one. running it all on one box is a
reliable way to unknowingly bake in assumptions that later
preclude running on multiple systems.
storm already uses netty:
https://storm.incubator.apache.org/2013/12/08/storm090-released.html
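for what it's worth, switching transports is configuration, not a place
to plug in udp. a java sketch (the keys are the netty transport settings
that came with 0.9.0; the values are only examples, so check them against
the release you actually run):

    import backtype.storm.Config;

    public class NettyTransportConfig {
        public static Config nettyTransport() {
            // switch inter-worker messaging from the zeromq default to the
            // netty transport shipped with storm 0.9.0. both transports are
            // tcp; there is no udp option here.
            Config conf = new Config();
            conf.put("storm.messaging.transport",
                     "backtype.storm.messaging.netty.Context");
            // tuning knobs; example values only.
            conf.put("storm.messaging.netty.server_worker_threads", 1);
            conf.put("storm.messaging.netty.client_worker_threads", 1);
            conf.put("storm.messaging.netty.buffer_size", 5242880);
            conf.put("storm.messaging.netty.max_retries", 100);
            conf.put("storm.messaging.netty.min_wait_ms", 100);
            conf.put("storm.messaging.netty.max_wait_ms", 1000);
            return conf;
        }
    }

the same keys can also go in storm.yaml cluster-wide instead of per topology.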
b
On Sun, Jun 8, 2014 at 12:12 PM, joe roberts <[email protected]> wrote:
Hi,
I am starting to look at Storm as a possible candidate for
writing the business logic for a real-time game server, and I am
interested in your opinion on whether this would be a good use
case for Storm. Here is a rough view of the flow / architecture
I want to use:
Game Client -> UDP/UDT -> Game Server -> Storm (runs business
logic via rules written in different languages: C, C++, Java
(JDBC), C#, Lua, Python) -> Server -> UDP/UDT -> Client.
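For the rules in non-JVM languages, my understanding is that each one
would sit behind Storm's multilang protocol via a ShellBolt, roughly like
this (just a sketch; rules/damage_rule.py and the output fields are names
I made up):

    import backtype.storm.task.ShellBolt;
    import backtype.storm.topology.IRichBolt;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.tuple.Fields;

    import java.util.Map;

    // Wraps a rule written in Python behind Storm's multilang protocol. The
    // script name is only an example; it would live under the topology's
    // multilang resources directory and use the storm.py helper module.
    public class PythonRuleBolt extends ShellBolt implements IRichBolt {

        public PythonRuleBolt() {
            super("python", "rules/damage_rule.py");
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("player", "result"));
        }

        public Map<String, Object> getComponentConfiguration() {
            return null;
        }
    }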
The reasons that I want to use Storm are:
* Scalable
* Parallel processing
* Guarantee of at-least-once message delivery
* Writing rules in different languages
* Fail-over
Regarding scalability, I plan to start with everything running
on one box, with the option to split the work across different
nodes as the number of clients scales up. Ideally, this solution
should perform well on a single node when there are 10 clients,
and scale well as the number of clients increases and I add more
nodes.
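Concretely, I was imagining the deployment path looking like the sketch
below: one worker process on the single box to start, then growing the
same topology later with "storm rebalance" instead of code changes.
TestWordSpout is only a stand-in for the real game-event spout, and
PythonRuleBolt is the placeholder bolt from the sketch above.

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.testing.TestWordSpout;
    import backtype.storm.topology.TopologyBuilder;

    public class GameRulesTopology {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // Stand-in spout; the real one would read game events off the wire.
            builder.setSpout("game-events", new TestWordSpout(), 1);
            // Modest parallelism hint to start; executor and worker counts can
            // be raised later with storm rebalance, no code change needed.
            builder.setBolt("rules", new PythonRuleBolt(), 2)
                   .shuffleGrouping("game-events");

            Config conf = new Config();
            conf.setNumWorkers(1); // single worker while everything is on one box

            StormSubmitter.submitTopology("game-rules", conf,
                                          builder.createTopology());
        }
    }

If I understand the docs correctly, something like
"storm rebalance game-rules -n 4 -e rules=8" would then spread the same
topology over more workers once additional supervisor nodes exist.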
Also, it seems Storm uses TCP via ZeroMQ by default - is that
right? And if so, can it be switched to use UDP or UDT instead,
perhaps by replacing ZeroMQ with Netty?
Please let me know your thoughts on whether this architecture is
a good idea or not.
Regards,
Joe