Re: [reSIProcate-users] Resiprocate design
A more interesting question than how many users you have is how many
transactions per second of throughput you are seeing. On servers from
roughly 4-5 years ago, I was seeing around 3000 transactions/second/CPU
of throughput with UDP. If you aren't seeing that, then I expect
there is something wrong in your application.
On Jan 31, 2008 9:35 PM, Suprabhat Chatterjee <suprabhat@xxxxxxxxxxxxx> wrote:
> Hi Scott and Jason,
>
> We are also facing performance issues similar to those Neil reported,
> with resiprocate (and repro). The receive queue on port 5060 piles up
> with 10k+ users over UDP, and performance degrades further with a large
> number of TCP connections (2k+).
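>
> One stopgap worth trying while this is profiled - a generic sketch,
> not a resip API - is enlarging the kernel receive buffer on the UDP
> socket, so short processing stalls don't overflow the port 5060 queue
> (on Linux the effective size is capped by net.core.rmem_max):
>
>     #include <sys/socket.h>
>     #include <cstdio>
>
>     // Generic sketch: request a larger receive buffer on an existing
>     // UDP socket. 4 MB is purely illustrative; Linux silently clamps
>     // the value to net.core.rmem_max.
>     bool enlargeRecvBuffer(int fd)
>     {
>         int bytes = 4 * 1024 * 1024;
>         if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) != 0)
>         {
>             perror("setsockopt(SO_RCVBUF)");
>             return false;
>         }
>         return true;
>     }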
>
> Currently we have to assign multiple CPUs to repro in order to scale.
>
> So, as Neil suggested, do you have any plans to change the resiprocate
> design in the near future? Or are you advocating the use of libraries
> like asio?
>
> regds,
> Suprabhat
>
>
> -----Original Message-----
> From: resiprocate-users-bounces@xxxxxxxxxxxxxxx
> [mailto:resiprocate-users-bounces@xxxxxxxxxxxxxxx] On Behalf Of Theo
> Zourzouvillys
> Sent: Friday, February 01, 2008 1:16 AM
> To: Ryan Kereliuk
> Cc: resiprocate-users@xxxxxxxxxxxxxxx
> Subject: Re: [reSIProcate-users] Resiprocate design
>
>
>
> On Jan 31, 2008 6:45 PM, Ryan Kereliuk <ryker@xxxxxxxxx> wrote:
> > I just wanted to highlight this point for people experiencing performance
> > problems. It is not too difficult to shoehorn epoll into the open-source
> > implementation, but the right approach would be to integrate with libevent.
>
> There are issues beyond simply moving to epoll() - that change is
> fairly small in the grand scheme of things on a busy server.
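>
> For reference, the core of an epoll()-based loop is small. A minimal
> Linux-only sketch (handleReadable() is a hypothetical dispatch hook;
> error handling omitted):
>
>     #include <sys/epoll.h>
>     #include <unistd.h>
>     #include <vector>
>
>     void handleReadable(int fd);  // hypothetical per-socket dispatch
>
>     // Unlike select(), the kernel holds the interest set, so each
>     // iteration costs O(ready sockets) rather than O(all sockets).
>     void epollLoop(const std::vector<int>& fds)
>     {
>         int ep = epoll_create(64);              // size is only a hint
>         for (size_t i = 0; i < fds.size(); ++i)
>         {
>             epoll_event ev = {};
>             ev.events = EPOLLIN;
>             ev.data.fd = fds[i];
>             epoll_ctl(ep, EPOLL_CTL_ADD, fds[i], &ev);
>         }
>         epoll_event ready[64];
>         for (;;)
>         {
>             int n = epoll_wait(ep, ready, 64, -1);  // block until I/O
>             for (int i = 0; i < n; ++i)
>             {
>                 handleReadable(ready[i].data.fd);
>             }
>         }
>     }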
>
> One of the biggest things we noticed when scaling resip up in terms of
> txns/sec was minor page faults - notably because passing messages
> through queues serviced by different threads meant any form of
> locality was lost (roughly the pattern sketched below).
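>
> To make that concrete, the handoff is roughly the following
> (hypothetical names, not resip's actual fifo class). The message is
> built while hot in the producer's cache, then consumed cold on
> whichever CPU runs the consumer thread:
>
>     #include <condition_variable>
>     #include <mutex>
>     #include <queue>
>
>     struct Message;  // stand-in for a decoded SIP message
>
>     class MessageFifo
>     {
>     public:
>         void push(Message* m)            // producer thread
>         {
>             std::lock_guard<std::mutex> lock(mMutex);  // a lock per message
>             mQueue.push(m);
>             mCondition.notify_one();
>         }
>         Message* pop()                   // consumer thread, different CPU
>         {
>             std::unique_lock<std::mutex> lock(mMutex);
>             mCondition.wait(lock, [this]{ return !mQueue.empty(); });
>             Message* m = mQueue.front(); // first touch here misses cache
>             mQueue.pop();
>             return m;
>         }
>     private:
>         std::mutex mMutex;
>         std::condition_variable mCondition;
>         std::queue<Message*> mQueue;
>     };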
>
> Overly cautious use of mutexes was another issue we stumbled on with
> large (16-way) boxes. Better scalability [1] could perhaps be reached
> by using something like asio::strand serviced by a pool of X threads:
> pushing work into the strand should mean far fewer locks, and because
> hot data can stay tied to one CPU, locality faults should drop
> considerably (see the sketch below).
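>
> A rough sketch of that idea (using modern Boost.Asio spelling, which
> postdates this thread): everything posted to one strand runs
> serialized without an explicit mutex, on whichever pool thread is
> free, so per-transaction state needs no locking of its own:
>
>     #include <boost/asio.hpp>
>     #include <thread>
>     #include <vector>
>
>     int main()
>     {
>         boost::asio::io_context io;
>         auto strand = boost::asio::make_strand(io);
>
>         // Work for one transaction is posted to its strand instead of
>         // being pushed through a locked queue.
>         boost::asio::post(strand, []{ /* process one SIP message */ });
>
>         // Pool of X threads servicing the same io_context; a real
>         // server would hold a work guard so run() doesn't return when
>         // the queue momentarily empties.
>         std::vector<std::thread> pool;
>         for (int i = 0; i < 4; ++i)
>         {
>             pool.emplace_back([&io]{ io.run(); });
>         }
>         for (size_t i = 0; i < pool.size(); ++i)
>         {
>             pool[i].join();
>         }
>         return 0;
>     }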
>
> (Disclaimer: this was pre-1.0, so things may well have changed a lot since.)
>
> ~ Theo
>
> 1 - Although there is nothing wrong with the performance IMO - and
> hardware is so cheap that scaling horizontally is a far more sensible
> plan in almost all cases :-)
> _______________________________________________
> resiprocate-users mailing list
> resiprocate-users@xxxxxxxxxxxxxxx
> List Archive: http://list.resiprocate.org/archive/resiprocate-users/