Re: [reSIProcate-users] Resiprocate design
Hi Scott and Jason,
We are also facing performance issues similar to those reported by Neil, with
resiprocate (and repro). The receive queue on port 5060 piles up when serving 10k+
users over UDP. Performance degrades further with a large number of TCP
connections (2k+).
Currently we have to assign multiple CPUs to repro in order to scale.
So, as suggested by Neil, do you have any plans to change the resiprocate
design in the near future? Or are you advocating the use of libraries like asio?
Regards,
Suprabhat
-----Original Message-----
From: resiprocate-users-bounces@xxxxxxxxxxxxxxx
[mailto:resiprocate-users-bounces@xxxxxxxxxxxxxxx]On Behalf Of Theo
Zourzouvillys
Sent: Friday, February 01, 2008 1:16 AM
To: Ryan Kereliuk
Cc: resiprocate-users@xxxxxxxxxxxxxxx
Subject: Re: [reSIProcate-users] Resiprocate design
On Jan 31, 2008 6:45 PM, Ryan Kereliuk <ryker@xxxxxxxxx> wrote:
> I just wanted to highlight this point for people experiencing performance
> problems. It is not too difficult to shoehorn epoll into the open source
> implementation but the right approach would be to integrate with libevent.
There are other issues besides simply moving to epoll() - that change is
fairly small in the whole scheme of things on a busy server.
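For anyone wondering what that change amounts to, here is a rough sketch of an epoll()-based loop - purely an illustration of the pattern Ryan mentions, not actual resiprocate code; the transportFds argument and the transportFor() hook are hypothetical stand-ins for however the stack owns its sockets:

    // Sketch only: replace a per-cycle select()/fd_set rebuild with epoll.
    // Assumes the transport sockets already exist and are non-blocking.
    #include <sys/epoll.h>
    #include <unistd.h>
    #include <cstdio>
    #include <vector>

    void runEventLoop(const std::vector<int>& transportFds)
    {
        int epfd = epoll_create1(0);
        if (epfd < 0) { perror("epoll_create1"); return; }

        // Register each transport socket once, instead of rebuilding an
        // fd_set on every pass the way a select()-based loop does.
        for (int fd : transportFds)
        {
            epoll_event ev{};
            ev.events = EPOLLIN;
            ev.data.fd = fd;
            epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
        }

        epoll_event ready[64];
        bool running = true;
        while (running)
        {
            int n = epoll_wait(epfd, ready, 64, 25 /* ms, roughly one stack cycle */);
            for (int i = 0; i < n; ++i)
            {
                int fd = ready[i].data.fd;
                // Hand the readable fd back to its owning transport, e.g.
                // transportFor(fd)->process();   // hypothetical hook
                (void)fd;
            }
            // ... run timers, service the transaction state machine, etc.
        }
        close(epfd);
    }

The win over select() is that the kernel only reports the descriptors that are actually ready, so cost no longer grows with the total number of connections per cycle.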
One of the biggest things we noticed when scaling up with resip in
terms of txns/sec was minor page faults - notably because passing
messages through queues serviced by different threads meant any form of
cache locality was lost.
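To illustrate what I mean, here is a simplified version of that hand-off pattern - a mutex-protected fifo bridging two threads; the SipMessage and MessageFifo types below are hypothetical stand-ins, not resip's actual internal fifo classes:

    // Simplified illustration of the cross-thread hand-off that hurts locality.
    #include <condition_variable>
    #include <deque>
    #include <memory>
    #include <mutex>

    struct SipMessage { /* parsed headers, body, ... */ };

    class MessageFifo
    {
    public:
        void push(std::unique_ptr<SipMessage> msg)
        {
            {
                std::lock_guard<std::mutex> lock(mMutex);
                mQueue.push_back(std::move(msg));
            }
            mCondition.notify_one();
        }

        std::unique_ptr<SipMessage> pop()
        {
            std::unique_lock<std::mutex> lock(mMutex);
            mCondition.wait(lock, [this] { return !mQueue.empty(); });
            auto msg = std::move(mQueue.front());
            mQueue.pop_front();
            return msg;   // consumed on a different thread (often a different
                          // CPU) than the producer, so the message's cache
                          // lines are cold again and you take the faults
        }

    private:
        std::mutex mMutex;
        std::condition_variable mCondition;
        std::deque<std::unique_ptr<SipMessage>> mQueue;
    };

Every layer boundary that uses a queue like this moves the hot message data to another core, which is where the minor page faults and cache misses came from.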
The cautious use of mutexes was another issue we stumbled on with
large (16-way) boxes. Better scalability [1] could perhaps be reached
by using something like asio::strand serviced by a pool of X threads:
pushing work into the strand should mean far fewer locks, and
because you can keep hot data tied to a CPU, locality faults should be
much rarer.
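Something along these lines - just a sketch against boost::asio, not anything that exists in resip today, and the lambda bodies are placeholders for real transaction work:

    // Sketch of the asio::strand idea: handlers posted to one strand never
    // run concurrently, so per-object mutexes go away, and related work
    // tends to stay hot on the same CPU.
    #include <boost/asio.hpp>
    #include <thread>
    #include <vector>

    int main()
    {
        boost::asio::io_service io;

        // One strand per "hot" object, e.g. per SIP transaction.
        boost::asio::io_service::strand txnStrand(io);

        // Push work into the strand from any thread; the strand serialises it.
        txnStrand.post([] { /* process a SIP message for this transaction */ });
        txnStrand.post([] { /* fire a retransmission timer for the same txn */ });

        // Pool of X worker threads all servicing the same io_service.
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i)
        {
            pool.emplace_back([&io] { io.run(); });  // returns once the queue drains
        }
        for (auto& t : pool) { t.join(); }
        return 0;
    }

A real server would keep the io_service alive with a work guard and run until shutdown; the point is only that serialisation comes from the strand rather than from explicit locking.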
(Disclaimer: this was pre-1.0, so things may well have changed a lot since.)
~ Theo
1 - Although there is nothing wrong with the performance IMO - and
hardware is so cheap that scaling horizontally is a far more sensible plan
in almost all cases :-)
_______________________________________________
resiprocate-users mailing list
resiprocate-users@xxxxxxxxxxxxxxx
List Archive: http://list.resiprocate.org/archive/resiprocate-users/