On Jan 18, 2014 12:00 AM, "sri" <[email protected]> wrote:
>>
>> What a phenomenal story!  I'll add this into the wiki section on the
>> topic.
>
>
> Btw. I've intentionally avoided the word "performance", because
> non-blocking I/O is about "scalability". Those terms are often used
> interchangeably, but they shouldn't be.

This makes sense. But doesn't your application perform poorly when it blocks
unintentionally? For me, when I don't know about non-blocking, performance
is the first place I look.

I can see that ultimately scalability is the right word, but until I knew
better I didn't realize that I needed to be considering scalability when I
was just trying to get merely a second web request to respond in a
reasonable amount of time, and I would glaze right over anything talking
about scalability. In general, people don't consider that they need to
start scaling their program after a single test case, or even after 10 or
100. I generally assumed that scalability is a concept reserved for the
enterprise and that my development or production run of even 10 to 20 users
doesn't need to scale yet.

I recall you mentioning something similar to me a year ago when my
production app for 10-20 users failed and you talked about scalability. It
makes perfect sense now, but at the time I didn't know how to deal with
that. You even gave me the straight-up answer (use a large -w with -c 1),
but I didn't understand it, and I read that message about 100 times. When
you said to set concurrency to 1, I kept thinking that I needed my app to
support concurrency beyond 1 because I needed multiple people to access it
at the same time. I kind of understood that this was what the additional
workers were for, but -- more on this later -- more processes is not
generally what one wants to see. :)
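For anyone else who puzzles over that advice, here is a sketch of what "a
large -w with -c 1" maps to in a Hypnotoad config file -- the worker count
of 20 and the filename myapp.conf are illustrative, but the pattern (many
worker processes, each limited to one concurrent connection, so a blocking
request only ever ties up its own worker) is the documented one:

```perl
# myapp.conf - Hypnotoad server settings (sketch; 20 is illustrative)
{
  hypnotoad => {
    workers => 20,    # many worker processes ("-w large")
    clients => 1      # one concurrent connection per worker ("-c 1")
  }
};
```

The prefork command takes the same two knobs directly on the command line
as its -w/--workers and -c/--clients options.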

The only thing I had to compare it to was Apache, and behind the scenes
Apache would crank out additional processes if needed. I hardly knew -- if
at all -- that it was doing so, let alone why.

As a sysadmin, with most network services that I support, I'm used to the
documentation talking about scalability in terms of hundreds or thousands
of users, not just 10. One NTP process, one DHCP process, one CUPS
process... one of each has been serving hundreds of users well for me for
years. I generally freak out if I see a dozen or more of the same process
running (the only exception I've ever encountered was Samba) and think
something must be wrong, so it went against every good sense I had to
intentionally crank them up. And of course, if my Mojolicious app didn't
block, it too would serve hundreds of users well with just a single
process.
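To make that last point concrete, here is a minimal Mojolicious::Lite
sketch -- the route names and the one-second delay are illustrative. The
first handler blocks the whole worker while it sleeps, so every other
client queued on that process waits too; the second hands the wait to the
event loop, so a single process keeps serving other requests in the
meantime:

```perl
use Mojolicious::Lite;

# Blocking: sleep() stalls the event loop, so while this request
# waits, every other client on this worker waits the full second too
get '/blocking' => sub {
  my $c = shift;
  sleep 1;
  $c->render(text => 'done');
};

# Non-blocking: disable automatic rendering and let a timer on the
# event loop finish the response, leaving the worker free meanwhile
get '/nonblocking' => sub {
  my $c = shift;
  $c->render_later;
  Mojo::IOLoop->timer(1 => sub { $c->render(text => 'done') });
};

app->start;
```

With only the non-blocking style, one process really can serve hundreds of
slow clients, which is why the worker/client knobs above are only needed as
a workaround for blocking code.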

In the end, all I could see was that my app wasn't performing well. Ten
people in a room were getting timeouts. I'm not Google over here;
scalability is a word reserved for folks like them...

Just my two cents on perspective, and on taking into consideration who will
be looking for this kind of information: people who don't yet have the
right information. :)

I'm glad you brought it up though; I will definitely work scalability into
the information and provide readers with a shift of perspective.

I hope this works, but if not I'm open to suggestions, or I can certainly
just adjust the wording if that's what's preferred.

-- 
You received this message because you are subscribed to the Google Groups 
"Mojolicious" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/mojolicious.
For more options, visit https://groups.google.com/groups/opt_out.
