Pawel

 

Are you familiar with ContactQ?   http://wiki.contactq.org/index.php/Main_Page  
and http://www.contactq.org/

 

They unfortunately started putting this on Asterisk, but the call center 
approach looks interesting.  We talked to them about using FreeSWITCH instead.  
ContactQ does not include the media processing, so they need either 
Asterisk or FreeSWITCH.  

 

--martin

 

 

________________________________

From: [email protected] 
[mailto:[email protected]] On Behalf Of Pawel Pierscionek
Sent: Saturday, May 30, 2009 3:37 PM
To: [email protected]
Subject: Re: [sipX-dev] Fwd: There is an ACD Queue License limit?

 

My idea was to replicate all sipXacd functionality within a single-threaded 
process using an inbound socket connection to the FreeSWITCH bundled with sipXecs 4.0.

Unfortunately, the inbound socket in FS 1.0.3 just could not handle my approach. 

 

1.0.4 is OK, but I still have to use workarounds in several places, like using 
two sockets: one for system-wide events and one for asynchronous commands. 
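For reference, the FreeSWITCH inbound event socket speaks a simple line-based protocol on TCP port 8021 (password "ClueCon" in the stock config): commands are terminated by a blank line, and replies/events arrive as MIME-style header blocks. A minimal framing sketch in Python; the helper functions themselves are hypothetical, not taken from the prototype:

```python
# Minimal framing helpers for the FreeSWITCH inbound event socket.
# Commands end with a blank line; replies and events arrive as
# "Key: value" header blocks, optionally followed by a body of
# Content-Length bytes. These helpers are illustrative only.

def build_command(cmd: str) -> bytes:
    """Serialize one command, e.g. 'auth ClueCon' or 'event plain ALL'."""
    return (cmd + "\n\n").encode("utf-8")

def parse_headers(block: str) -> dict:
    """Parse one header block into a dict, ignoring non-header lines."""
    headers = {}
    for line in block.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            headers[key] = value
    return headers

# Example frame, as sent by FreeSWITCH right after a client connects:
frame = "Content-Type: auth/request\n\n"
hdrs = parse_headers(frame)
```

With two sockets as described above, one connection would subscribe to events (`event plain ALL`) and the other would carry only commands, so slow event processing cannot delay command replies.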

Also, originating calls to several agents at once and tracking down the winner 
is tricky. 
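One way to track the winner among parallel originates is to treat the first leg reaching CHANNEL_ANSWER as the winner and hang up every other leg. A hedged sketch (the event name matches FreeSWITCH's, but the function and data shapes are made up for illustration, not the prototype's actual code):

```python
def pick_winner(events):
    """Given a stream of (event_name, channel_uuid) pairs for a group
    of parallel originates, return (winner_uuid, losers_to_hang_up).
    The first channel to reach CHANNEL_ANSWER wins; all other channels
    seen in the stream should then be hung up (e.g. via uuid_kill)."""
    seen = []
    winner = None
    for name, uuid in events:
        if uuid not in seen:
            seen.append(uuid)
        if name == "CHANNEL_ANSWER" and winner is None:
            winner = uuid  # first answer wins; later answers are ignored
    losers = [u for u in seen if u != winner]
    return winner, losers
```

In practice two agents can answer nearly simultaneously, so the late answerer must still be hung up gracefully, which is part of what makes this tricky.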

 

The way I talk to FS is still not 100% frozen: I have to choose a set of 
commands that produce deterministic results.

Right now I get a different event sequence depending on queue type; e.g. with 
"never" queues, one leg gets the BRIDGED event before the other gets into the 
ANSWERED state, which is the reverse of the sequence for "immediate" queues. 
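Since the ordering of ANSWERED and BRIDGED differs by queue type, one defensive approach is to record flags per leg and test completeness, never assuming a sequence. A hypothetical sketch of such an order-tolerant tracker:

```python
class LegTracker:
    """Track call/agent leg state without assuming event order.
    As noted above, 'never' and 'immediate' queues deliver the answer
    and bridge events in opposite orders, so we only accumulate flags
    and check whether the pair is complete. Illustrative only."""
    def __init__(self):
        self.answered = set()  # leg uuids that reported an answer event
        self.bridged = set()   # leg uuids that reported a bridge event

    def on_event(self, name, uuid):
        if name == "CHANNEL_ANSWER":
            self.answered.add(uuid)
        elif name == "CHANNEL_BRIDGE":
            self.bridged.add(uuid)

    def is_connected(self, caller_uuid, agent_uuid):
        # Connected once both legs have answered and at least one leg
        # has reported a bridge, in any order of arrival.
        both_answered = {caller_uuid, agent_uuid} <= self.answered
        any_bridged = bool(self.bridged & {caller_uuid, agent_uuid})
        return both_answered and any_bridged
```

The same tracker then works for both queue types, which keeps the command set deterministic even if the event stream is not.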

 

Once we test all scenarios I'll release it so everyone can start experimenting. 

 

I probably see no (commercial) reason to implement my prototype in any of 
the languages "supported" by Nortel, e.g. Java or Ruby, so this will probably be 
only a proof of concept for Nortel, not a contribution. 

 

Anyway, while my team is testing the ACD, I am designing ACDv2, a 
load-balancing/clustered one. This one we should discuss before I start any 
kind of prototyping.

 

I have this idea of a distributed presence server.  Each media server gets its 
own presence server that has full control over the calls that are connected to it 
(by SRVs, gateways or other mechanisms) and the agents currently "owned" by that 
presence server. This way data synchronization can be asynchronous, with no 
single point of failure and no delays for ACID agent allocation.

 

Agent "ownership" is distributed among the cluster based on parameters like agent 
physical location and media server capacity.

Agent-available events and new-call events are local in this scenario, but the 
local server makes a cluster-wide decision on where to connect the call or agent.

If the decision (based on different priorities) is to connect to a local 
resource, then that is what happens, and the info on this event is distributed to 
all other members asynchronously (e.g. over the WAN).

But when an agent is to be connected to a call on a remote server (for whatever 
reason, e.g. site load balancing or skills), the local server stops owning the 
agent and "sells" him to the server that contains the winning call. 

Then the remote server makes its own decision on what to do with the agent; 
this will not necessarily match the previous decision, since it is based on 
asynchronous cluster state.  

If the agent gets connected to a local call (it does not matter which one), then 
we win; if the decision is to sell the agent elsewhere, then we lose points. 
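The scoring idea above could be sketched as a single decision function: local calls compete at face value, remote calls pay a migration penalty. Everything here, the function, field names, and the penalty constant, is invented for illustration:

```python
REMOTE_PENALTY = 10  # assumed cost of "selling" an agent to another server

def best_call(local_calls, remote_calls):
    """Pick the best queued call for a newly available agent.
    Calls are dicts with a 'priority' field (hypothetical shape);
    remote calls are penalized so the agent stays local unless a
    remote call's priority clearly outweighs the migration cost."""
    best, best_score = None, None
    for call in local_calls:
        score = call["priority"]
        if best_score is None or score > best_score:
            best, best_score = call, score
    for call in remote_calls:
        score = call["priority"] - REMOTE_PENALTY
        if best_score is None or score > best_score:
            best, best_score = call, score
    return best  # None means: keep the agent idle locally
```

Tuning the penalty is exactly the win/lose-points trade-off described above: too low and agents ping-pong between sites, too high and remote queues starve.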

 

I have to do some simulations and see what the "agent migration rate" is as a 
function of the number of servers and the call pickup rate. 

The idea is to make this work for WAN environments with servers 100-200 ms apart, 
which is OK for agents taking over (RTP) traffic from a failed/overloaded server 
at another site, but not for centralized agent allocation. 
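A toy Monte Carlo version of that simulation might look like the following; the model (each agent-available event migrates with probability 1 minus the pickup rate) and all parameters are assumptions, not measurements:

```python
import random

def migration_rate(n_servers, pickup_rate, n_events=10000, seed=1):
    """Toy simulation of the 'agent migration rate': for each
    agent-available event, with probability `pickup_rate` the winning
    call is local; otherwise the agent migrates to another server.
    Returns the fraction of events that end in a migration."""
    rng = random.Random(seed)
    migrations = 0
    for _ in range(n_events):
        # With a single server there is nowhere to migrate to.
        if n_servers > 1 and rng.random() > pickup_rate:
            migrations += 1
    return migrations / n_events

rate = migration_rate(n_servers=4, pickup_rate=0.8)
```

A real simulation would also model per-site queue depth and the 100-200 ms update lag, since stale cluster state is what drives the re-sell decisions.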

 

There is a multitude of problems with ownership of agents in node-failure 
scenarios, but this gets solved auto-magically if I choose AMQP and message 
queue federation as the means of distributing cluster state updates. In case of 
split-brain scenarios (when there is no quorum in the cluster) I can try to 
verify the cluster state just by sending several SIP "pings" to agents owned by 
the "downed" node, to deduce the actual state of the "downed" peer. If I cannot 
see my peer but can see his agents, then I steal their ownership. 
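The steal-ownership heuristic above reduces to a small decision function; the majority threshold here is illustrative, not part of any stated design:

```python
def should_steal_agents(peer_reachable, agent_ping_results):
    """Split-brain heuristic: if the peer node itself is unreachable
    but its agents still answer SIP 'pings', the peer (not the
    network) is probably down, so take over its agents.
    `agent_ping_results` is a list of booleans, one per pinged agent.
    The 50% threshold is an assumption for illustration."""
    if peer_reachable:
        return False  # peer is fine; nothing to steal
    if not agent_ping_results:
        return False  # nothing observable; do not guess
    alive = sum(1 for ok in agent_ping_results if ok)
    return alive / len(agent_ping_results) >= 0.5
```

If neither the peer nor its agents respond, the safe conclusion is a network partition on our side, so ownership stays untouched until quorum returns.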

 

Pawel,

 

On 2009-05-30, at 14:27, Picher, Michael wrote:





Would be nice if this could be a springboard for an ACD replacement...

 

From: [email protected] 
[mailto:[email protected]] On Behalf Of Pawel Pierscionek
Sent: Saturday, May 30, 2009 7:27 AM
To: [email protected]
Subject: [sipX-dev] Fwd: There is an ACD Queue License limit?

 

 

 

Begin forwarded message:






From: Paweł Pierścionek <[email protected]>

Date: 29 May 2009 19:42:00 GMT+02:00

To: Damian Dowling <[email protected]>

Cc: sipx-dev <[email protected]>

Subject: Re: [sipX-dev] There is an ACD Queue License limit?

 


On 2009-05-29, at 18:03, Damian Dowling wrote:





Paweł Pierścionek wrote:

                        Can you confirm what you mean by a "hardcoded limit of 30 calls to ACD server". Does this 30 call limit mean that only a maximum of 30 calls can be queued at any one time, and if so what happens if you try to queue call 31?

                Queued + bridged to agents = 30.

                [ cut ]

        Is it possible to increase this limit? The reason I ask is that a max of 30 queued is a bit on the smallish side for us. Even if we split the queues and run a second ACD Server we could easily hit the 30 call limit on busy days.

        Damian


You will have to create your own binaries, but they will not be stable.
The limit is there for a reason.
With the restrictions removed and other tweaks to the core libraries I get a 50% 
chance of no audio for new calls with 100 agents talking :(

I have created my own ACD implementation from scratch, based on FreeSWITCH, for 
the purpose of having a stable sipXacd replacement for deployments in the 
100-500 range. It's feature complete but not out of internal tests yet, so I 
have no magic solution for you at the moment :(

Pawel,

 

 

_______________________________________________
sipx-dev mailing list
[email protected]
List Archive: http://list.sipfoundry.org/archive/sipx-dev
Unsubscribe: http://list.sipfoundry.org/mailman/listinfo/sipx-dev
