Introducing OpenSIPS 2.4

Introducing OpenSIPS 2.4

Bogdan-Andrei Iancu-2
One more year, one more evolution cycle, one more OpenSIPS major release. So let me introduce to you the upcoming OpenSIPS 2.4.

For the OpenSIPS 2.4 release we decided to focus on clustering abilities. Today’s VoIP world is becoming more and more dynamic, services are moving into the cloud, and applications need more and more flexibility to fully exploit such environments. But let’s pinpoint the main reasons for going with a clustered approach:

  • scaling up with the processing/traffic load
  • geographical distribution
  • redundancy and High-Availability

For OpenSIPS 2.4 we laid down a roadmap that addresses clustering both at the level of the clustering engine itself (the underlayer) and at the level of the functionalities that run on top of the clustering layer to share data and state.

With OpenSIPS 2.4, it has never been easier to build a consistent and powerful clustered solution:

  • clustering engine – enhanced capabilities for controlling the cluster topology, such as re-routing to bypass broken links, dynamic joining of new nodes, support for multiple capabilities per node, data syncing between nodes and many more;
  • distributed user location – this is a very complex topic, as it goes beyond the simple concept of data sharing. Due to the nature of the data (user registrations), you may have different constraints on how data roams in a cluster – registrations may be tied to a node due to NAT or TCP constraints. Moreover, a sharding aspect must be addressed when it comes to distributing the pinging effort across the cluster. So multiple solutions are viable here, depending on what is to be achieved (scaling, redundancy) and on the network constraints – see a detailed presentation of the available solutions;
  • distributed presence server – quite similar to (but less complex than) distributed user location, a distributed presence server provides a consistent but distributed way of sharing presence information – SIP entities may publish data via different nodes in the SIP cluster, while subscribers may fetch presence data via various nodes. Two approaches are under development: (a) a cluster built around a NoSQL DB as the primary data storage and (b) a cluster relying exclusively on OpenSIPS for data sharing;
  • anycast support – building full-featured anycast support (addressing both redundancy and balancing) requires OpenSIPS to replicate/share transaction state across the nodes in the cluster (the nodes sharing the same anycast IP). Depending on the nature of the replication (full transaction versus transaction meta-data), both full anycast and light anycast support will be available – here is a detailed description of the anycast support;
  • clustered media relays – as OpenSIPS can work with several flavors of media relays (such as RTPproxy, RTPEngine, MediaProxy), the clustering support will help OpenSIPS do distributed load-balancing over the relays – even if a relay is used by multiple nodes in the cluster, all the nodes will share information on the relay’s load, to avoid overloading or idle time;
  • distributed call center – an agent is able to register with multiple queues on different nodes of a cluster. Still, all the queues share the agent’s status/availability and its statistics for call distribution;
  • custom clustering – the OpenSIPS clustering underlayer provides, at script level, the ability to broadcast (across the cluster) or send to a given node a custom message/action (with reply support) – this is a very flexible and powerful way to build your own distributed functionality directly at script level.
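To make the sharding point above concrete, here is a minimal, hypothetical sketch (not OpenSIPS code; node names and AORs are illustrative) of how the pinging effort for registrations could be deterministically split across cluster nodes, so each node pings only its own share without any coordination traffic:

```python
import hashlib

# Hypothetical sketch: deterministically assign each registration (AOR)
# to exactly one cluster node for NAT pinging. Every node computes the
# same assignment locally from the shared node list.
def ping_owner(contact_aor, nodes):
    h = int(hashlib.sha1(contact_aor.encode()).hexdigest(), 16)
    return sorted(nodes)[h % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]
assignment = {aor: ping_owner(aor, nodes)
              for aor in ("alice@example.com", "bob@example.com")}
```

Note that a plain modulo reshuffles most assignments when a node joins or leaves; a real deployment would likely prefer consistent hashing to limit that churn.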

And because we started down the integration path with OpenSIPS 2.3, and because it went well, we decided to push further along this path with the 2.4 version as well:

  • more Homer integration, to report TCP statistics, DB events and media relay events via HEP;
  • SIPREC integration for standard call recording. The new SIPREC module provides a standard way, transparent to the call parties, to do call recording against an external recorder such as Oreka, provided by Orecx;
  • more FreeSWITCH integration, to capture call events (DTMFs, call status) from FreeSWITCH into the OpenSIPS script, or to control a FreeSWITCH call from the OpenSIPS script via ESL;
  • Asterisk-flavored Load-Balancing for more realistic and accurate traffic balancing over Asterisk clusters (the load information is fetched in real time from Asterisk).

The timeline for OpenSIPS 2.4 is:

  • Beta Release – 12-16 March 2018
  • Stable Release – 23-27 April 2018
  • General Availability – 1st of May 2018, during OpenSIPS Summit 2018

To talk more about the features of this new release, a public audio conference will be held on the 21st of November 2017, at 4 pm GMT, thanks to the kind sponsorship of UberConference. Anyone is welcome to join to find out more details or to ask questions about OpenSIPS 2.4.

This is a public and open conference, so no registration is needed, but if you want to announce your intention to participate, please let us know here:

      http://blog.opensips.org/2017/11/01/introducing-opensips-2-4/


Best regards,

-- 
Bogdan-Andrei Iancu
  OpenSIPS Founder and Developer
  http://www.opensips-solutions.com

_______________________________________________
Users mailing list
[hidden email]
http://lists.opensips.org/cgi-bin/mailman/listinfo/users
Re: Introducing OpenSIPS 2.4

Nuno Ferreira
Hi Bogdan,

Do you have further details to share about the "clustered media relays" and "Asterisk flavored Load-Balancing" features? 

Thanks,

Nuno Ferreira

On Wed, Nov 1, 2017 at 5:16 PM, Bogdan-Andrei Iancu <[hidden email]> wrote:
Re: Introducing OpenSIPS 2.4

Bogdan-Andrei Iancu-2
Hi Nuno,

On the Asterisk part, the plan is to do exactly what we already have for FreeSWITCH (see https://blog.opensips.org/2017/03/01/freeswitch-driven-routing-in-opensips-2-3/)

In terms of clustered media relays, it is about the ability to share the relay state (enabled/disabled) between all the cluster nodes using the media relays. Optionally, we are looking into adding the ability to balance traffic between the relays in a cluster-aware way (all the nodes in the cluster will share information on the load of the media relays).
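As an illustration of this shared-state idea (a hypothetical sketch, not the actual OpenSIPS implementation), each node could keep a local view of relay state and merge updates broadcast by its peers, then pick the least-loaded enabled relay cluster-wide:

```python
import time

# Hypothetical sketch of cluster-shared relay state: each node keeps a
# local view mapping a relay address to (enabled, load, updated_at) and
# merges updates broadcast by its peers, last-writer-wins on timestamp.
class RelayState:
    def __init__(self):
        self.view = {}

    def local_update(self, relay, enabled, load):
        entry = (enabled, load, time.time())
        self.view[relay] = entry
        return entry  # in a real cluster this would be broadcast to peers

    def merge_remote(self, relay, entry):
        current = self.view.get(relay)
        if current is None or entry[2] > current[2]:
            self.view[relay] = entry

    def best_relay(self):
        # least-loaded relay among those still enabled anywhere in the cluster
        up = {r: e for r, e in self.view.items() if e[0]}
        return min(up, key=lambda r: up[r][1]) if up else None
```

This way a relay disabled on one node is avoided by every node, even by nodes that never probed it themselves.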

Regards,
Bogdan-Andrei Iancu
  OpenSIPS Founder and Developer
  http://www.opensips-solutions.com
On 11/08/2017 12:17 PM, Nuno Ferreira wrote:
Re: Introducing OpenSIPS 2.4

Maxim Sobolev
Bogdan, with regards to the media relay clustering, what's the advantage of sharing that load info between the signalling nodes versus having each node track it independently? In my view the latter could be a more reliable and much less complicated construct. The only disadvantage is that you'd get more command load on the relays; however, at least with RTPproxy, pulling load stats is a very lightweight operation, so even with tens of signalling nodes polling a single media relay every 1-10 seconds it won't cause any noticeable performance degradation on the relay. On the flip side, each signalling node would get an accurate view from its own vantage point, so in the case of a geographically distributed system where a signalling node can only see a subset of all media nodes, it would still be able to make proper decisions. This is the approach we use in rtp_cluster and it works pretty well with cluster sizes of up to 5 signalling and 10 RTP-handling nodes, 40-50K media sessions in total. It can also give you accurate RTT information, so your signalling node can factor in not only the load but also the proximity of each and every media relay.
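The independent, per-node polling model described above could be sketched roughly like this (the stats query is a placeholder stand-in, not a real RTPproxy control command):

```python
import random

# Sketch of the per-node polling model: each signalling node queries the
# relays it can reach on its own schedule and keeps a private load table.
# query_load() stands in for a real control-socket stats round trip; here
# it just returns a random value for demonstration.
def query_load(relay):
    return random.random()

class LocalLoadTracker:
    def __init__(self, relays):
        self.relays = relays
        self.loads = {}

    def poll_once(self):
        # would run every 1-10 seconds from a timer in a real node
        for relay in self.relays:
            try:
                self.loads[relay] = query_load(relay)
            except OSError:
                # unreachable from this node's vantage point: drop it
                self.loads.pop(relay, None)

    def least_loaded(self):
        return min(self.loads, key=self.loads.get) if self.loads else None
```

Because each node only ever ranks relays it can actually reach, a geographically partial view still yields valid decisions, which is the property argued for above.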

As far as the load tracking is concerned, I think the approach of implementing "b2b-driven routing" using an API that is specific to each particular b2b is somewhat wasteful and not very future-proof. What we would like to see instead is for opensips to publish some kind of API (preferably SIP-based, using an OPTIONS or SUBSCRIBE/NOTIFY mechanism) to pull this information out, and let each b2b vendor implement the proper hooks. Then it could go as far as turning this into some kind of RFC.
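A minimal sketch of what such a SIP-based load report could look like; the `X-Load` header name is purely invented here, which is exactly why a standardized mechanism would be needed:

```python
# Hypothetical sketch of a SIP-based, B2BUA-agnostic load query: the
# proxy sends an OPTIONS and the B2BUA answers with its current load in
# a header. "X-Load" is an invented header name; no such standard exists.
def parse_load(options_reply):
    for line in options_reply.split("\r\n"):
        if line.lower().startswith("x-load:"):
            return float(line.split(":", 1)[1].strip())
    return None

reply = ("SIP/2.0 200 OK\r\n"
         "Via: SIP/2.0/UDP proxy.example.com;branch=z9hG4bK776\r\n"
         "X-Load: 0.42\r\n"
         "Content-Length: 0\r\n"
         "\r\n")
```

Any B2BUA vendor could implement such a report without exposing a product-specific API, which is the point of the proposal.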

Anyhow, just my $0.02. Not volunteering to do the opensips side (ENOTIME), but if the opensips project comes up with a reasonable b2bua-agnostic load query API, we might look at implementing it in the sippy [py/go]-B2BUAs.

-Max

On Wed, Nov 8, 2017 at 9:31 AM, Bogdan-Andrei Iancu <[hidden email]> wrote:
Re: Introducing OpenSIPS 2.4

Bogdan-Andrei Iancu-2
Hi Maxim,

Thank you for the valuable input - what you are saying is true IF the media relay has the ability to publish (or report) its internal load. What you are describing here is what we did for SIP call balancing with FreeSWITCH - local call counting versus pulling call counters from FS.

Regarding the second topic, that is true as well; let me give it some thought. The only issue I see here is with "let each vendor implement the proper hooks" :). But as a concept it is definitely interesting.

Best regards,
Bogdan-Andrei Iancu
  OpenSIPS Founder and Developer
  http://www.opensips-solutions.com
On 11/11/2017 07:01 AM, Maxim Sobolev wrote:
Re: [OpenSIPS-News] Introducing OpenSIPS 2.4

Bogdan-Andrei Iancu-2
Hello all,

Thank you to all the participants in the conf call.

For those who were not able to join us, please find the audio recording at http://opensips.org/html/media/Introducing_OpenSIPS_2-4_2018-11-21.mp3

And keep in mind that any comment, objection, idea or general feedback you may have on the topics we are working on for OpenSIPS 2.4 is more than welcome ;)

Best regards,
Bogdan-Andrei Iancu
  OpenSIPS Founder and Developer
  http://www.opensips-solutions.com
On 11/01/2017 07:16 PM, Bogdan-Andrei Iancu wrote: