Also, the "proc" recording method has an interesting pipeline: an
ephemeral metadata file, a memory sink exposed through /proc, and a
userspace daemon that picks up the data and writes it to disk (if this
last step doesn't happen, audio frames going into the sink are simply
discarded).

This pipeline leads to a few philosophical questions:

1. Is the purpose of this design to allow audio frames to be collected
in a scatter-gather fashion, so that audio from many streams can be
written serially to storage?

Conventional recording solutions suffer from the problem that, under
high volume, a large number of file handles are written in parallel.
This thrashes the disk. In the days of mechanical disks, it led to
frequent disk burn-out. In the SSD era the situation is somewhat
improved, but, as I understand it, parallel writes are still very
taxing on the SSD in terms of write wear and wear levelling.

This usually leads to a solution like writing recordings to a tmpfs area
and then serially copying them out of there, one at a time.
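To make the workaround concrete, a toy sketch of that copy-out stage
(the paths and function are entirely hypothetical, not anything
rtpengine actually does):

```python
import os
import shutil

# Hypothetical paths for illustration only.
TMPFS_DIR = "/dev/shm/recordings"   # fast, RAM-backed staging area
DEST_DIR = "/var/spool/recordings"  # persistent storage

def drain_tmpfs(src_dir=TMPFS_DIR, dst_dir=DEST_DIR):
    """Copy finished recordings out of tmpfs one at a time, so the
    disk sees a single sequential writer instead of many parallel ones."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):    # one file at a time
        src = os.path.join(src_dir, name)
        dst = os.path.join(dst_dir, name)
        shutil.copyfile(src, dst)   # single sequential write to storage
        os.unlink(src)              # free the tmpfs space immediately
```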

Would I be correct to assume that this pipeline is designed to address
this same problem, but in a different and novel manner?

2. Is the other purpose of an RTP sink to allow the possibility of
real-time call intercept and diversion to live playback? If so, are
there any plans for the recording daemon to expose an RTSP interface or
similar to make this easier?

3. What happens if, under high load and I/O wait conditions, the
userspace recording daemon cannot read frames from the sink fast
enough, or the CPU encoding workload (-> WAV/MP3) is too high?

According to the documentation, the depth of the sink is only 10 frames:

   Packet data is held in kernel memory until retrieved by 
   the userspace component, but only a limited number of 
   packets (default 10) per media stream. If packets are not 
   retrieved in time, they will be simply discarded. This makes 
   it possible to flag all calls to be recorded and then leave 
   it to the userspace component to decide whether to use the 
   packet data for any purpose or not.
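If I read that correctly, the kernel-side buffer behaves like a
fixed-depth per-stream queue that discards on overflow. A toy model of
that behaviour (my own sketch, not the actual module code):

```python
from collections import deque

class PacketSink:
    """Toy model of the documented per-stream packet buffer: a
    fixed-depth queue that silently discards packets when full."""

    def __init__(self, depth=10):   # default depth of 10, per the docs
        self.depth = depth
        self.queue = deque()
        self.dropped = 0

    def push(self, pkt):
        if len(self.queue) >= self.depth:
            self.dropped += 1       # userspace was too slow: discard
            return False
        self.queue.append(pkt)
        return True

    def pull(self):
        # Userspace retrieves packets in arrival order.
        return self.queue.popleft() if self.queue else None
```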

Is it possible to increase that depth? Or is this not a concern because
the userspace component is implemented in an asynchronous/threaded
manner, so frames are retrieved quickly for processing and then enqueued
into a local blocking queue?
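For what it's worth, the decoupling I have in mind would look roughly
like this (purely my own sketch of a reader/encoder split; sink_read()
and encode() are stand-ins, not real APIs):

```python
import queue
import threading

# A fast reader thread drains the kernel sink into a local queue, and
# a separate worker does the slow encoding, so a CPU-heavy encode
# never stalls the reads that keep the shallow sink from overflowing.
frames = queue.Queue()

def reader(sink_read, stop):
    # sink_read() stands in for reading one frame from the /proc sink.
    while not stop.is_set():
        frame = sink_read()
        if frame is not None:
            frames.put(frame)       # cheap: just enqueue

def encoder(encode, stop):
    # Keep encoding until asked to stop AND the backlog is drained.
    while not stop.is_set() or not frames.empty():
        try:
            frame = frames.get(timeout=0.1)
        except queue.Empty:
            continue
        encode(frame)               # expensive: e.g. WAV/MP3 encoding
```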

-- Alex

Alex Balashov | Principal | Evariste Systems LLC

Tel: +1-706-510-6800 / +1-800-250-5920 (toll-free) 

Kamailio (SER) - Users Mailing List
