Hi,

Because FreeSWITCH is calling out both legs and, once the legs are connected, FS bridges them together, it can happen in this setup that leg A is running G.711 while leg B is running G.729; FreeSWITCH then has to transcode the media. Because voice is a real-time application, the VM needs access to the hardware clock, which is very important for conferences and transcoding.

KR
KT
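A quick way to check, from inside the guest, whether the VM is actually being given a sane clock is to look at the kernel's clocksource. Below is a minimal Python sketch of that check; it assumes a Linux KVM guest and the standard sysfs paths, nothing specific to Proxmox or FreeSWITCH:

#!/usr/bin/env python3
# Rough sanity check for the guest clock (assumption: Linux KVM guest,
# standard sysfs layout). "kvm-clock" is the usual paravirtual
# clocksource for KVM guests.

from pathlib import Path

BASE = Path("/sys/devices/system/clocksource/clocksource0")

def read(name: str) -> str:
    return (BASE / name).read_text().strip()

current = read("current_clocksource")
available = read("available_clocksource")

print(f"current clocksource:    {current}")
print(f"available clocksources: {available}")

if current != "kvm-clock":
    print("warning: guest is not using kvm-clock; timer reads may be "
          "expensive or unstable, which can hurt transcoding and conferencing")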
On 11/08/14 19:45, "Adam Thompson" <athom...@athompso.net> wrote:

>On 14-08-11 11:44 AM, Joel S. | VOZELIA wrote:
>> Did you get any feedback or advice? I'm interested too.
>>> We are running Proxmox 3.24 on top of an Ubuntu 12.04 and a FreeSWITCH.
>>> On the FreeSWITCH we have to transcode voice from G711 to G729 and vice versa;
>>> to perform this correctly we have to have the hardware and the software in
>>> perfect clock sync.
>>> My question is now: do you have a best practice for this, or does someone have
>>> experience with Proxmox VE and FreeSWITCH?
>
>I'm running a 4-node PVE cluster, each node being 2 x 4-core (+HT) Xeon
>L5520 with 48GB RAM. None of the nodes are anywhere near capacity, so
>I'm not contending for CPU cycles or network bandwidth right now.
>On top of that, I'm running AsteriskNOW (aka FreePBX Distro) in an
>all-G.711 environment. No analog lines or analog hardware exists except
>for two ATAs.
>I am using Asterisk as a B2BUA, which means it's in the voice path for
>every call; I'm not transcoding, however.
>I very occasionally encounter minor glitches in voice call quality when
>the link to my upstream SIP provider is congested, but I have never
>encountered any problems with calls internally.
>Latency isn't quite as low as I'd like, but it's still well within
>acceptable limits.
>
>I have done NO tweaking yet, nor have I implemented any QoS yet, because
>my results have shown that I don't need to.
>I expect to have to give that VM more CPU cycles as the cluster becomes
>more heavily loaded, and I also expect to have to implement QoS at the
>network layer... I probably won't have to do network QoS on the PVE
>hosts; they each have more than enough bandwidth to the switches right
>now to accommodate VoIP without any packet loss.
>
>Note that the vmbr (Linux bridging module) does add a measurable amount
>of latency; I haven't tried VT-d yet, because I'd rather retain the
>ability to hot-migrate the VM than reduce latency by ~0.5 msec.
>
>However, my question is: what on earth does the OP mean by "clock sync"?
>Unless you're running DAHDI cards, there is no hardware to keep in sync;
>and running DAHDI (or worse, Zaptel) through VT-d would just be...
>well... bizarre. And even in that case, there still isn't really anything
>that can get out of sync; if you lose sync on a T1 trunk, you lose the
>entire set of channels - it's either OK or it's not, and it has nothing
>to do with transcoding.
>
>Perhaps the OP is talking about some sort of PCI co-processor that
>handles transcoding for him???
>
>--
>-Adam Thompson
> athom...@athompso.net

_______________________________________________
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
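For anyone trying to pin down what "clock sync" means in practice inside a VM: what matters for transcoding and conference mixing is that the guest can wake up on a steady 20 ms tick (the usual RTP packetization interval for G.711 and G.729). A rough way to probe that from inside the guest is a small timer-jitter script like the sketch below; it only indicates scheduling/clock stability, it is not a real RTP measurement, and the 20 ms period and sample count are arbitrary choices, not anything from this thread:

#!/usr/bin/env python3
# Crude jitter probe: try to wake up every 20 ms and report how far
# the wake-ups drift from the ideal schedule.

import time

PERIOD = 0.020      # 20 ms, a typical RTP packetization interval (assumption)
SAMPLES = 500       # arbitrary sample count

deadline = time.monotonic()
errors = []
for _ in range(SAMPLES):
    deadline += PERIOD
    time.sleep(max(0.0, deadline - time.monotonic()))
    errors.append(abs(time.monotonic() - deadline))

errors_ms = [e * 1000 for e in errors]
print(f"samples: {SAMPLES}, period: {PERIOD * 1000:.0f} ms")
print(f"mean error: {sum(errors_ms) / len(errors_ms):.3f} ms, "
      f"max error: {max(errors_ms):.3f} ms")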