Re: [asterisk-dev] Audio to/from Asterisk

2019-07-22 Thread Luca Pradovera
Hello,
I remember this being talked about, and it's essentially tied to the
mechanism that would allow streaming ASR/TTS services to be used.
+1 on this feature!

On Mon, Jul 22, 2019 at 10:01 AM Dan Jenkins  wrote:

> Also coming back to this with some real-life issues I'm currently
> facing and why I can't use audiosocket :(
>
> I need to be able to ask the ARI/AGI/AMI for an IP/port combo and for an
> external app to then connect into Asterisk, rather than Asterisk connecting
> to a URI elsewhere. Let's say I already have a Node.js (or any other
> language) process taking care of controlling that call via ARI or even AGI
> (all the different ways) - I need that same process to handle the media I'm
> sending to and receiving from Asterisk, so if that process is already up
> and Asterisk then calls out to a generic URI, that media will never make it
> back to the original Node.js process.
>
> I think it's of utmost importance that I be able to ask Asterisk for a
> host:port pair and then be able to connect to that port from my external
> application.
>
> What do people think? I thought we'd talked about this mechanism at devcon?
>
> Dan
>
> On Sat, Jul 20, 2019 at 2:39 PM Dan Jenkins  wrote:
>
>> Just going to chime in and say I don't see a one-way audio solution as
>> enough, and I'd worry that doing that would lead to "oh but only so many
>> people need to chuck audio in". This has been discussed at at least 3
>> devcons now.
>>
>> On Thu, Jul 18, 2019 at 2:09 PM Seán C. McCord  wrote:
>>
>>> I certainly don't mind if a better-designed system comes along and
>>> obviates my AudioSocket implementation.  I built it because I needed it.
>>> However, bidirectional audio flow is critical for me (speech synthesis,
>>> external interfacing, real-time processed audio, processed injections,
>>> etc).  While I would actually prefer a system which was a bit beefier than
>>> my own (simple protocol aside, there's a good deal of range between my
>>> protocol and MRCP), my meagre C skills (and patience) don't allow me to
>>> venture forth into those difficult waters.
>>>
>>> I do like the separate connection (unlike Wazo's) and the support of TLS
>>> (unlike mine)... and yours is certainly (even without looking) more
>>> performant.  Mine also probably needs a multi-threaded, dedicated-receiver
>>> approach like most of the other channels which handle socket-received
>>> media, rather than the simple non-blocking I/O with null frame insertion.
>>> No perfect solution yet.
>>>
>>>
>>>
>>> On Thu, Jul 18, 2019 at 8:01 AM George Joseph 
>>> wrote:
>>>
 Hey Guys,

 I was on vacation when this thread happened but I'm also working on
 this now.  The implementation uses the existing ARI channel and bridge
 recording endpoints and adds the ability to specify a URI in the form of
 (udp|tcp|tls)://hostname:port to stream the media.  This makes use of the
 existing chan_bridge_media driver and only requires a scheme handler
 similar to Seán's res_audiosocket.  This implementation is more targeted
 at real-time speech recognition/transcription/captioning and is therefore
 one-way (outbound).  A future enhancement is planned that would send each
 channel in a bridge as a separate audio channel in a multi-channel
 container.

 I'm not suggesting that this should replace Seán's audiosocket stuff
 but I did want to let you know what was in the pipeline.

 george
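
A sketch of the consuming side of the one-way stream George describes: with a
udp:// URI, the external service only needs to bind a datagram socket on the
advertised port. The payload layout assumed below (one raw signed-linear audio
frame per datagram) is an assumption, not something specified in this thread.

    // Minimal sketch: consume Asterisk's outbound media stream over UDP.
    // Assumptions (not confirmed in this thread): the udp:// scheme is
    // used and each datagram carries one raw signed-linear audio frame.
    import * as dgram from "dgram";

    const PORT = 9999; // the port handed to Asterisk as udp://myhost:9999

    const sock = dgram.createSocket("udp4");

    sock.on("message", (audio, rinfo) => {
      // Hand the raw frame to an ASR/transcription engine here.
      console.log(`got ${audio.length} bytes of audio from ${rinfo.address}`);
    });

    sock.bind(PORT, () => {
      console.log(`listening for media on udp://0.0.0.0:${PORT}`);
    });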

 On Fri, Jul 5, 2019 at 7:38 AM Seán C. McCord  wrote:

> Solutions such as Jack are non-network oriented and severely limited
> in scalability.  There are, of course, many other options, but the closest
> are chan_rtp and chan_nbs.  RTP is a good option except for the difficulty
> for non-telephony people to interact with it.  NBS is deprecated,
> undocumented, and unsupported by any locatable resources.
>
> While the original app interface from last year required dialplan, the
> channel interface does not.  It is a plain channel which can be used by
> ARI directly.
>
>
> On Fri, Jul 5, 2019, 15:28 Sylvain Boily  wrote:
>
>> Hello Seán,
>>
>> On 2019-07-05 4:45 a.m., Seán C. McCord wrote:
>>
>> A brief update:
>>
>> I have adapted my app_audiosocket from last year to become
>> chan_audiosocket, a full bidirectional audio channel interface for
>> Asterisk to any AudioSocket service (which itself is a dead-simple
>> implementation).  I'll be demoing it in my talk next week at CommCon,
>> for anyone who might be interested.  I'm going to try to have it ready
>> to push to gerrit for review this weekend.
>>
>>
>> I'll be there.
>>
>>
>> For now, you can see it in the 'channel' branch of
>> github.com/CyCoreSystems/audiosocket.
>>
>>
>> This is very different from what we did. You need dialplan to use it.
>> In our case we don't need any dialplan to use it; it's more of an ARI
>> approach.
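
To make the comparison concrete: the "dead-simple" AudioSocket service side
amounts to a TCP listener speaking a three-byte-header framing - one type
byte, a 16-bit big-endian payload length, then the payload. A minimal sketch
follows, based on the protocol as documented in the
github.com/CyCoreSystems/audiosocket repo; the type values, the 8 kHz 16-bit
mono slin audio format, and the Dial(AudioSocket/host:port/uuid) dialplan
syntax come from that repo rather than from this thread, so treat them as
assumptions.

    // Minimal sketch of an AudioSocket service: echo audio back to Asterisk.
    // Framing (assumed from github.com/CyCoreSystems/audiosocket):
    //   1 byte type | 2 bytes big-endian payload length | payload
    //   type 0x01 = call UUID, 0x10 = audio (8 kHz, 16-bit, mono slin),
    //   0x00 = terminate.
    import * as net from "net";

    const TYPE_TERMINATE = 0x00;
    const TYPE_UUID = 0x01;
    const TYPE_AUDIO = 0x10;

    const server = net.createServer((conn) => {
      let buf = Buffer.alloc(0);

      conn.on("data", (chunk) => {
        buf = Buffer.concat([buf, chunk]);
        // Peel off complete messages; partial ones wait for more bytes.
        while (buf.length >= 3) {
          const type = buf[0];
          const len = buf.readUInt16BE(1);
          if (buf.length < 3 + len) break;
          const payload = buf.subarray(3, 3 + len);
          buf = buf.subarray(3 + len);

          if (type === TYPE_UUID) {
            console.log(`call ${payload.toString("hex")} connected`);
          } else if (type === TYPE_AUDIO) {
            // Bidirectional: write an audio message straight back.
            const header = Buffer.from([TYPE_AUDIO, 0, 0]);
            header.writeUInt16BE(payload.length, 1);
            conn.write(Buffer.concat([header, payload]));
          } else if (type === TYPE_TERMINATE) {
            conn.end();
          }
        }
      });
    });

    server.listen(9092, () => console.log("AudioSocket service on :9092"));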

Re: [asterisk-dev] Audio to/from Asterisk

2019-07-22 Thread Jean Aunis
It may not be suitable for your use case, but you could instantiate a
UnicastRTP channel. It will allocate an RTP port and store it in a
channel variable.


Jean
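
A sketch of Jean's suggestion, driven from ARI: originate a UnicastRTP channel
toward the address the external process listens on, then read Asterisk's
allocated RTP address and port back out of channel variables. The endpoint
syntax and the UNICASTRTP_LOCAL_ADDRESS / UNICASTRTP_LOCAL_PORT variable names
are assumptions based on chan_rtp; verify them against your Asterisk version.

    // Sketch: ask Asterisk for an RTP port by originating a UnicastRTP
    // channel via ARI, then discover the locally allocated address/port.
    // Uses the global fetch of Node 18+; endpoint syntax and variable
    // names are assumptions based on chan_rtp.
    const ARI = "http://localhost:8088/ari";
    const AUTH = "Basic " + Buffer.from("asterisk:asterisk").toString("base64");

    async function ariGetPort(): Promise<{ host: string; port: string }> {
      const params = new URLSearchParams({
        endpoint: "UnicastRTP/127.0.0.1:4000", // where *we* will listen
        app: "my-stasis-app",                  // hypothetical Stasis app name
      });
      const res = await fetch(`${ARI}/channels?${params}`, {
        method: "POST",
        headers: { Authorization: AUTH },
      });
      const channel = await res.json();

      // Asterisk allocates its own RTP port and stores it in channel vars.
      const get = async (variable: string) => {
        const r = await fetch(
          `${ARI}/channels/${channel.id}/variable?variable=${variable}`,
          { headers: { Authorization: AUTH } },
        );
        return (await r.json()).value as string;
      };

      return {
        host: await get("UNICASTRTP_LOCAL_ADDRESS"),
        port: await get("UNICASTRTP_LOCAL_PORT"),
      };
    }

    ariGetPort().then(({ host, port }) =>
      console.log(`send RTP to Asterisk at ${host}:${port}`),
    );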

On 22/07/2019 at 10:01, Dan Jenkins wrote:
Also coming back to this with some real-life issues I'm currently
facing and why I can't use audiosocket :(


I need to be able to ask the ARI/AGI/AMI for an IP/port combo and for
an external app to then connect into Asterisk, rather than Asterisk
connecting to a URI elsewhere. Let's say I already have a Node.js (or
any other language) process taking care of controlling that call via
ARI or even AGI (all the different ways) - I need that same process to
handle the media I'm sending to and receiving from Asterisk, so if
that process is already up and Asterisk then calls out to a generic
URI, that media will never make it back to the original Node.js
process.


I think it's of utmost importance that I be able to ask Asterisk for a
host:port pair and then be able to connect to that port from my
external application.


What do people think? I thought we'd talked about this mechanism at 
devcon?


Dan

On Sat, Jul 20, 2019 at 2:39 PM Dan Jenkins wrote:


Just going to chime in and say I don't see a one-way audio
solution as enough, and I'd worry that doing that would lead to "oh
but only so many people need to chuck audio in". This has been
discussed at at least 3 devcons now.

On Thu, Jul 18, 2019 at 2:09 PM Seán C. McCord wrote:

I certainly don't mind if a better-designed system comes along
and obviates my AudioSocket implementation.  I built it
because I needed it. However, bidirectional audio flow is
critical for me (speech synthesis, external interfacing,
real-time processed audio, processed injections, etc).  While
I would actually prefer a system which was a bit beefier than
my own (simple protocol aside, there's a good deal of range
between my protocol and MRCP), my meagre C skills (and
patience) don't allow me to venture forth into those difficult
waters.

I do like the separate connection (unlike Wazo's) and the
support of TLS (unlike mine)... and yours is certainly (even
without looking) more performant. Mine also probably needs a
multi-threaded, dedicated-receiver approach like most of the
other channels which handle socket-received media, rather than
the simple non-blocking I/O with null frame insertion.  No
perfect solution yet.



On Thu, Jul 18, 2019 at 8:01 AM George Joseph wrote:

Hey Guys,

I was on vacation when this thread happened but I'm also
working on this now.  The implementation uses the existing
ARI channel and bridge recording endpoints and adds the
ability to specify a URI in the form of
(udp|tcp|tls)://hostname:port to stream the media.  This
makes use of the existing chan_bridge_media driver and
only requires a scheme handler similar to Seán's
res_audiosocket.  This implementation is more targeted at
real-time speech recognition/transcription/captioning and
is therefore one-way (outbound).  A future enhancement is
planned that would send each channel in a bridge as a
separate audio channel in a multi-channel container.

I'm not suggesting that this should replace Seán's
audiosocket stuff but I did want to let you know what was
in the pipeline.

george

On Fri, Jul 5, 2019 at 7:38 AM Seán C. McCord wrote:

Solutions such as Jack are non-network oriented and
severely limited in scalability.  There are, of
course, many other options, but the closest are
chan_rtp and chan_nbs.  RTP is a good option except
for the difficulty for non-telephony people to
interact with it.  NBS is deprecated, undocumented,
and unsupported by any locatable resources.

While the original app interface from last year
required dialplan, the channel interface does not.  It
is a plain channel which can be used by ARI directly.


On Fri, Jul 5, 2019, 15:28 Sylvain Boily wrote:

Hello Seán,

On 2019-07-05 4:45 a.m., Seán C. McCord wrote:

A brief update:

I have adapted my app_audiosocket from last year
to become chan_audiosocket, a full bidirectional
audio channel interface for Asterisk to any
AudioSocket service (which itself is a dead-simple
implementation).

Re: [asterisk-dev] Audio to/from Asterisk

2019-07-22 Thread Dan Jenkins
Also coming back to this with some real-life issues I'm currently
facing and why I can't use audiosocket :(

I need to be able to ask the ARI/AGI/AMI for an IP/port combo and for an
external app to then connect into Asterisk, rather than Asterisk connecting
to a URI elsewhere. Let's say I already have a Node.js (or any other
language) process taking care of controlling that call via ARI or even AGI
(all the different ways) - I need that same process to handle the media I'm
sending to and receiving from Asterisk, so if that process is already up
and Asterisk then calls out to a generic URI, that media will never make it
back to the original Node.js process.

I think it's of utmost importance that I be able to ask Asterisk for a
host:port pair and then be able to connect to that port from my external
application.

What do people think? I thought we'd talked about this mechanism at devcon?

Dan
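
Nothing in Asterisk provided this at the time, so the following is a purely
hypothetical sketch of the mechanism being requested: an invented ARI
operation (allocateMediaListener, a made-up name) that would make Asterisk
listen and hand back a host:port, which the same controlling process then
connects to.

    // Hypothetical sketch of the requested mechanism. "allocateMediaListener"
    // is a made-up ARI operation: ask Asterisk for a host:port it will listen
    // on, then connect to it from the same process already driving the call.
    import * as net from "net";

    async function connectToAsteriskMedia(channelId: string): Promise<void> {
      // HYPOTHETICAL endpoint; nothing like this existed when this was written.
      const res = await fetch(
        `http://localhost:8088/ari/channels/${channelId}/allocateMediaListener`,
        { method: "POST", headers: { Authorization: "Basic ..." } },
      );
      const { host, port } = await res.json();

      // The external app dials in, rather than Asterisk dialing out to a URI,
      // so the process already controlling the call also owns the media.
      const media = net.connect({ host, port }, () =>
        console.log(`media socket to Asterisk open at ${host}:${port}`),
      );
      media.on("data", (audio) => {
        // receive audio here; write() on the same socket to send audio back
      });
    }

    connectToAsteriskMedia("1563789600.42"); // hypothetical channel id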

On Sat, Jul 20, 2019 at 2:39 PM Dan Jenkins  wrote:

> Just going to chime in and say I don't see a one-way audio solution as
> enough, and I'd worry that doing that would lead to "oh but only so many
> people need to chuck audio in". This has been discussed at at least 3
> devcons now.
>
> On Thu, Jul 18, 2019 at 2:09 PM Seán C. McCord  wrote:
>
>> I certainly don't mind if a better-designed system comes along and
>> obviates my AudioSocket implementation.  I built it because I needed it.
>> However, bidirectional audio flow is critical for me (speech synthesis,
>> external interfacing, real-time processed audio, processed injections,
>> etc).  While I would actually prefer a system which was a bit beefier than
>> my own (simple protocol aside, there's a good deal of range between my
>> protocol and MRCP), my meagre C skills (and patience) don't allow me to
>> venture forth into those difficult waters.
>>
>> I do like the separate connection (unlike Wazo's) and the support of TLS
>> (unlike mine)... and yours is certainly (even without looking) more
>> performant.  Mine also probably needs a multi-threaded, dedicated-receiver
>> approach like most of the other channels which handle socket-received
>> media, rather than the simple non-blocking I/O with null frame insertion.
>> No perfect solution yet.
>>
>>
>>
>> On Thu, Jul 18, 2019 at 8:01 AM George Joseph  wrote:
>>
>>> Hey Guys,
>>>
>>> I was on vacation when this thread happened but I'm also working on this
>>> now.  The implementation uses the existing ARI channel and bridge recording
>>> endpoints and adds the ability to specify a URI in the form of
>>> (udp|tcp|tls)://hostname:port to stream the media.  This makes use of the
>>> existing chan_bridge_media driver and only requires a scheme handler
>>> similar to Seán's res_audiosocket.  This implementation is more targeted
>>> at real-time speech recognition/transcription/captioning and is therefore
>>> one-way (outbound).  A future enhancement is planned that would send each
>>> channel in a bridge as a separate audio channel in a multi-channel
>>> container.
>>>
>>> I'm not suggesting that this should replace Seán's audiosocket stuff but
>>> I did want to let you know what was in the pipeline.
>>>
>>> george
>>>
>>> On Fri, Jul 5, 2019 at 7:38 AM Seán C. McCord  wrote:
>>>
 Solutions such as Jack are non-network oriented and severely limited in
 scalability.  There are, of course, many other options, but the closest are
 chan_rtp and chan_nbs.  RTP is a good option except for the difficulty for
 non-telephony people to interact with it.  NBS is deprecated, undocumented,
 and unsupported by any locatable resources.

 While the original app interface from last year required dialplan, the
 channel interface does not.  It is a plain channel which can be used by ARI
 directly.


 On Fri, Jul 5, 2019, 15:28 Sylvain Boily  wrote:

> Hello Seán,
>
> On 2019-07-05 4:45 a.m., Seán C. McCord wrote:
>
> A brief update:
>
> I have adapted my app_audiosocket from last year to become
> chan_audiosocket, a full bidirectional audio channel interface for
> Asterisk to any AudioSocket service (which itself is a dead-simple
> implementation).  I'll be demoing it in my talk next week at CommCon,
> for anyone who might be interested.  I'm going to try to have it ready
> to push to gerrit for review this weekend.
>
>
> I'll be there.
>
>
> For now, you can see it in the 'channel' branch of
> github.com/CyCoreSystems/audiosocket.
>
>
> This is very different from what we did. You need dialplan to use it.
> In our case we don't need any dialplan to use it; it's more of an ARI
> approach.
>
> Sylvain
>