Hi,

A common case is deploying a database, running Hadoop jobs, or running SQL scripts.

The request can be small, because files are usually kept out of band and 
downloaded as part of the task.

The output of a task can be pretty long, or the task can run for an extended 
period of time: multiple hours in the case of Hadoop jobs.

I see that we adopted a pattern similar to the shell plugin for Hadoop jobs.

Run the command without waiting for it to exit and return a handle to the 
command to the application, which can then poll in a loop with something like 
tail(handle, offset, size).

That seems like the only sensible approach to me at the moment.
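
Roughly like this, as a sketch of the pattern rather than our actual plugin 
(the agent name, spool path and parameters below are made up, and the DDL is 
left out):

require "securerandom"

module MCollective
  module Agent
    # Hypothetical "longjob" agent illustrating the start-and-tail pattern.
    class Longjob < RPC::Agent
      SPOOL = "/var/spool/longjob" # assumption: any node-local writable directory

      # Start the command in the background and reply immediately with a handle.
      action "start" do
        handle  = SecureRandom.hex(8)
        outfile = File.join(SPOOL, "#{handle}.out")

        # Detach so the RPC reply is not held open; stdout/stderr go to the spool file.
        pid = Process.spawn(request[:command], [:out, :err] => [outfile, "w"])
        Process.detach(pid)

        reply[:handle] = handle
        reply[:pid]    = pid
      end

      # Return up to request[:size] bytes of output starting at request[:offset],
      # so each individual reply stays well under the payload limit.
      action "tail" do
        outfile = File.join(SPOOL, "#{request[:handle]}.out")
        reply.fail! "unknown handle #{request[:handle]}" unless File.exist?(outfile)

        File.open(outfile, "r") do |f|
          f.seek(request[:offset])
          reply[:chunk]  = f.read(request[:size]) || ""
          reply[:offset] = f.pos
        end
      end
    end
  end
end

The client then keeps calling tail with the handle and the last returned 
offset until the job is finished, so no single reply ever approaches the 1MB 
limit.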

Jos

Sent from my iPhone

> On 5 Jul 2017, at 18:29, R.I. Pienaar <r...@devco.net> wrote:
> 
> 
> 
>> On 5 Jul 2017, at 13:45, Christopher Wood <christopher_w...@pobox.com> wrote:
>> 
>> Not sure if there's a consensus, but if you don't know the upper bound to 
>> your message size you could chunk your payloads or distribute files in some 
>> other fashion and pass pointers via mcollective.
>> 
>> People are better at describing these things here:
>> 
>> https://github.com/nats-io/go-nats/issues/63
> 
> Yeah, unfortunately mcollective RPC has no pattern for breaking messages 
> apart into chunks.
> 
> What are these big replies and requests you are sending?
> 
> 
>> 
>>> On Wed, Jul 05, 2017 at 01:43:09AM -0700, jhoutman via mcollective-users 
>>> wrote:
>>>  Fair enough for sending the file. 
>>>  But what about the reply? 
>>>  That can also be pretty big; is there a preferred pattern or helper to
>>>  get that across without hitting the maximum payload?
>>>  With regards,
>>>  Jos Houtman
>>>  On Tuesday, July 4, 2017 at 6:11:02 PM UTC+2, R.I.Pienaar wrote:
>>> 
>>>    On Tue, Jul 4, 2017, at 17:57, jhoutman via mcollective-users wrote:
>>>> Hi group,
>>>> 
>>>> 
>>>> We have built a few custom mcollective plugins on top of the
>>>> rpcclient.
>>>> 
>>>> With one of those we now get the following error:
>>>> natswrapper.rb:138:in `block in start' Disconnected from NATS:
>>>> NATS::IO::ServerError: 'Maximum Payload Violation'
>>>> 
>>>> The code uses Mcollective::RPC#rpcclient. One of the arguments sent over
>>>> the line is the content of a user-provided file.
>>>> And in this scenario the whole message is bigger than the 1MB payload
>>>> limit.
>>>> 
>>>> 
>>>> Is it possible that the mcollective rpcclient starts chunking these
>>>> requests?
>>>> Or should we look towards solving the issue of that transfer in another
>>>> way?
>>> 
>>>    for sure, using middleware to transfer files is not really a great idea;
>>>    I'd use mco to initiate an rsync or something instead
>>> 
