Justin

On Fri, Mar 9, 2012 at 4:05 PM, Justin Slattery
<[email protected]> wrote:
> We ran into this exact same issue with our RabbitMQ and node consumers.
> Given our needs, the performance hit of using the amqp ack was acceptable
> for us and that is how we are running in production right now.
>
> It seemed to us that the amqp messages were simply coming off the queue MUCH
> faster than the workers could perform the work. Incoming messages were
> stacking up thousands of callbacks in the event queue before the first few
> ticks had completed, which meant that every resulting response (in our case
> the workers are doing HTTP requests) ended up at the end of this massive
> queue.
>
> The whole experience made me think that it would be nice for node to have
> some sort of priority queue for elevating or de-prioritizing certain types
> of events.

You could use rabbit to route events by tagging them with a priority.
In the tutorial below, routing keys represent priorities, e.g.
"high", "medium", "low".

http://www.rabbitmq.com/tutorials/tutorial-four-python.html
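For what it's worth, a rough sketch of that wiring with node-amqp might look
like the following (the exchange/queue names and the routingKeyFor helper are
made up for illustration, not taken from the tutorial, and main() assumes a
broker on localhost):

```javascript
// Hedged sketch: routing keys as priorities with the node-amqp 0.x API.
// All names here are hypothetical.
var PRIORITIES = ['high', 'medium', 'low'];

// Map a priority to its routing key, rejecting unknown ones.
function routingKeyFor(priority) {
  if (PRIORITIES.indexOf(priority) === -1) {
    throw new Error('unknown priority: ' + priority);
  }
  return priority;
}

function main() {
  var amqp = require('amqp'); // npm install amqp
  var conn = amqp.createConnection({ host: 'localhost' });
  conn.on('ready', function () {
    var ex = conn.exchange('events', { type: 'direct' });
    // One queue per priority; give 'high' more consumers (or a larger
    // prefetchCount) than 'low' so it drains faster.
    PRIORITIES.forEach(function (p) {
      conn.queue('events.' + p, function (q) {
        q.bind(ex, p);
        q.subscribe(function (msg) { /* handle msg at priority p */ });
      });
    });
    ex.publish(routingKeyFor('high'), { task: 'urgent thing' });
  });
}
// main(); // uncomment with a RabbitMQ broker running
```

The actual prioritisation then comes from how aggressively you consume each
queue, not from the broker itself.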

alexis


>
> Would love to see a solution to this issue with our existing tools. Please
> share if you find one!
>
> Justin
>
> On Thu, Mar 8, 2012 at 4:18 PM, Dan Milon <[email protected]> wrote:
>>
>> Hmm, I believe I'm going that way too. But still, do you think it might
>> be a problem with node-amqp?
>> Why doesn't the same thing happen with, say, an HTTP server in node? That
>> also gets many callbacks fired, but this mess does not happen.
>>
>>
>> On 03/08/2012 02:45 PM, Jeroen Janssen wrote:
>>>
>>> Hi,
>>>
>>> I have a similar situation where I consume rabbitmq messages and
>>> process them into a couchdb document structure.
>>>
>>> Currently my solution is set up as follows:
>>> *) ack enabled, prefetchCount = 1 (only one message can be pending an
>>> ack, which also limits my app to one concurrent access to couchdb)
>>> *) keep an "in memory" document cache
>>> *) upon receiving a message to process, I look in the cache; if the
>>> document isn't there yet, I fetch it from couchdb
>>> *) updates are done to the document in the cache (and then I 'ack' the
>>> message)
>>> *) after a little while I write everything in the cache to couchdb
>>> (I use amqp unsubscribe to suspend incoming amqp traffic until
>>> everything is written to couchdb)
>>>
>>> At the moment this solution 'solves' the 'hit couchdb every time a
>>> message is received' problem, which slowed processing down a lot for me.
>>> It also stops incoming amqp traffic while I'm actually writing data
>>> back to couchdb (and provides a window to exit my app in a 'safe' way).
>>>
>>> The downside of my current implementation is that there is a window
>>> where I could lose data (if my app crashes before writing the cache
>>> into couchdb while I have already 'acked' the amqp messages) - I
>>> believe this is also what Chris is referring to.
>>>
>>> I was thinking about delaying the amqp ack (in combination with a
>>> higher prefetchCount) until after I have actually written the cache to
>>> couchdb. This would ensure that rabbitmq keeps the messages
>>> (persistent) until the data is actually written into couchdb, and I
>>> can have multiple messages 'in processing' so I don't write to couchdb
>>> for every message being processed.
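Jeroen's cache-then-flush pattern could be sketched roughly like this
(DocCache, fetchDoc and writeDoc are hypothetical names; the injected
functions stand in for the couchdb round-trips, which would really be async):

```javascript
// Write-behind cache sketch for the pattern described above.
// fetchDoc(id) loads a document on a cache miss; writeDoc(id, doc)
// persists one document during a flush. Both are supplied by the caller.
function DocCache(fetchDoc, writeDoc) {
  this.docs = {};
  this.fetchDoc = fetchDoc;
  this.writeDoc = writeDoc;
}

// Look in the cache first; fall back to the store on a miss.
DocCache.prototype.get = function (id) {
  if (!(id in this.docs)) this.docs[id] = this.fetchDoc(id);
  return this.docs[id];
};

// Updates touch only memory; nothing hits the store yet.
DocCache.prototype.update = function (id, fields) {
  var doc = this.get(id);
  for (var k in fields) doc[k] = fields[k];
};

// Write everything out in one batch, then empty the cache. In the real
// setup you would unsubscribe from amqp before calling this, and (per
// Jeroen's delayed-ack idea) only ack the buffered messages once the
// write has succeeded.
DocCache.prototype.flush = function () {
  for (var id in this.docs) this.writeDoc(id, this.docs[id]);
  this.docs = {};
};
```

The trade-off is exactly the one described: everything between two flushes
lives only in memory, so the flush interval bounds how much you can lose.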
>>>
>>> I hope this makes sense.
>>>
>>> I was also wondering if anyone else has a similar setup or if there is
>>> some kind of (better?) pattern that can be applied to this problem.
>>>
>>> Best regards,
>>>
>>> Jeroen Janssen
>>>
>>> On 7 mrt, 22:28, Dan Milon<[email protected]>  wrote:
>>>>
>>>> Hello everyone,
>>>>
>>>> I'm facing the following problem. I have just set up a rabbitmq queue,
>>>> and my node app is subscribed to that queue (via node-amqp). When a
>>>> message arrives, the app does an insert and a few updates on a mongodb
>>>> (via mongoose). The problem comes when there are, say, 10000 messages
>>>> in the queue (durable) and I fire up node. It is flooded by the
>>>> messages, and although the mongodb insertions are started, no callback
>>>> is ever called. Eventually all messages are consumed, and after a while
>>>> mongoose throws a timeout exception, or node runs out of memory and
>>>> crashes. During that time, mongodb reports that it received only about
>>>> 30 queries.
>>>>
>>>> A temporary solution to that problem is using rabbitmq's ACKs. That
>>>> instructs the queue not to send another message until I ack the
>>>> previous one. With that technique, even though I ack just after I
>>>> call the inserts and updates (not in their callbacks), things get
>>>> throttled, and the app does what it's supposed to do, albeit with a
>>>> lot less performance.
>>>>
>>>> Anyone knows why this is happening? How to fix/avoid it?
>>>> I'll give example code if needed.
>>>>
>>>> Thank you,
>>>> Dan.
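The throttling effect Dan describes comes from bounding in-flight messages. A
small self-contained simulation (no broker involved; simulate and its counters
are made up for illustration) shows how acking in the write callback caps the
backlog, while an unlimited prefetch models the original flood:

```javascript
// Simulated broker + worker: the broker only delivers while fewer than
// `prefetch` messages are unacked, and each "insert" acks in its callback.
function simulate(prefetch, total) {
  var pendingCallbacks = [];   // stand-in for node's event queue
  var delivered = 0, acked = 0, inFlight = 0, maxInFlight = 0;

  function deliverNext() {
    while (delivered < total && inFlight < prefetch) {
      delivered++;
      inFlight++;
      maxInFlight = Math.max(maxInFlight, inFlight);
      // The "insert" finishes one tick later; we ack in its callback.
      pendingCallbacks.push(function () {
        inFlight--;
        acked++;
        deliverNext();         // broker may send more once we ack
      });
    }
  }

  deliverNext();
  // Drain the simulated event queue deterministically.
  while (pendingCallbacks.length) pendingCallbacks.shift()();
  return { acked: acked, maxInFlight: maxInFlight };
}
```

With prefetch = 1 the backlog never exceeds one message; with
prefetch = Infinity every message is delivered before any callback runs,
which is the flood Dan saw. If memory serves, the real node-amqp equivalent
is subscribing with { ack: true, prefetchCount: N } and acknowledging only
inside the database callback, not right after issuing the insert.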
>>
>>
>> --
>> Job Board: http://jobs.nodejs.org/
>> Posting guidelines:
>> https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
>> You received this message because you are subscribed to the Google
>> Groups "nodejs" group.
>> To post to this group, send email to [email protected]
>> To unsubscribe from this group, send email to
>> [email protected]
>> For more options, visit this group at
>> http://groups.google.com/group/nodejs?hl=en

