
Re: [reSIProcate] The Million User Dilemma


        tfdum is actually doing the boost::bind trick here, but without an asio 
Strand (the bindings are to the various blahCommand() functions). I just wish 
the compiler spew weren't so bad when you get a parameter not quite right, but 
that's gcc templates for you. An app-writer can easily use boost::bind in their 
app without pulling any boost dependency into resip or DUM, so that is at least 
nice. I'm not familiar enough with asio Strand to say how much work it would be 
to make resip's threading use it; I'm guessing this is a wrapper for 
pthreads/whatever Windows uses/the fancy Intel threading stuff?
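
        For what it's worth, my (possibly shaky) reading of the asio docs is 
that a Strand isn't really a thread wrapper at all: it's a guarantee that 
handlers posted through it never run concurrently, no matter how many threads 
are calling io_service::run(). A minimal, untested sketch (doCommand() is just 
a made-up stand-in for a DUM-style command target):

    #include <boost/asio.hpp>
    #include <boost/bind.hpp>
    #include <boost/thread.hpp>
    #include <iostream>

    // Made-up stand-in for a DUM-style command target.
    void doCommand(int id)
    {
       std::cout << "command " << id << std::endl;
    }

    int main()
    {
       boost::asio::io_service io;

       // Handlers posted through the strand are never run concurrently,
       // regardless of how many threads are running the io_service.
       boost::asio::io_service::strand strand(io);

       for (int i = 0; i < 10; ++i)
       {
          strand.post(boost::bind(&doCommand, i));
       }

       // A small pool of threads all driving the same io_service.
       boost::thread_group pool;
       for (int t = 0; t < 4; ++t)
       {
          pool.create_thread(
             boost::bind(&boost::asio::io_service::run, &io));
       }
       pool.join_all();
       return 0;
    }

So it looks less like a pthreads wrapper and more like a built-in replacement 
for a mutex-protected handler queue.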

        As for using asio to just drive the event loops, Scott, roughly how 
much work would need to be done here? And how many platforms would this 
benefit? I know the epoll stuff works on OS X; how would Windows benefit from 
using asio? I'm thrilled with epoll, but that's just me.

Best regards,
Byron Campen


> Hi Kennard,
> 
> I think you're on the right track with using epoll, but I'd like to go
> one step further and improve cross-platform compatibility in the
> process.  Scott Godin has been keeping header-only asio up to date in
> the resiprocate tree, and it provides support for every platform's most
> sophisticated version of select/epoll/kqueue, etc.  Reimplementing
> things like FdSet and the wait and process functions with the
> async_wait that asio provides could yield a huge performance
> improvement, and asio can be multithreaded easily.
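> 
> As a very rough, untested sketch (with all of the resip plumbing waved
> away, and UdpTransportSketch being a purely illustrative name), the
> FdSet/buildFdSet/process pattern turns into something like:
> 
>    #include <boost/asio.hpp>
>    #include <boost/bind.hpp>
> 
>    using boost::asio::ip::udp;
> 
>    class UdpTransportSketch
>    {
>    public:
>       UdpTransportSketch(boost::asio::io_service& io, int port)
>          : mSocket(io, udp::endpoint(udp::v4(), port))
>       {
>          startReceive();
>       }
> 
>    private:
>       void startReceive()
>       {
>          // asio picks epoll/kqueue/IOCP/etc. under the hood; there is
>          // no FdSet to build and no select() timeout to tune.
>          mSocket.async_receive_from(
>             boost::asio::buffer(mBuffer), mSender,
>             boost::bind(&UdpTransportSketch::onReceive, this,
>                         boost::asio::placeholders::error,
>                         boost::asio::placeholders::bytes_transferred));
>       }
> 
>       void onReceive(const boost::system::error_code& ec,
>                      std::size_t bytes)
>       {
>          if (!ec)
>          {
>             // hand the datagram (mBuffer, bytes) to the stack here
>          }
>          startReceive();
>       }
> 
>       udp::socket mSocket;
>       udp::endpoint mSender;
>       char mBuffer[65536];
>    };
> 
> io_service::run() can then be called from as many threads as you like,
> with a strand per object wherever serialization is still needed.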
> 
> Also consider things like DumCommand.  It can easily be replaced with
> Asio's Strand + boost::bind/boost::function or C++0x lambdas, which are
> much more flexible and require significantly less code.
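> 
> For instance, a quick untested sketch (Registrar and refreshRegistration
> are made-up names, just to show the shape of it):
> 
>    #include <boost/asio.hpp>
>    #include <boost/bind.hpp>
> 
>    // Hypothetical target for the deferred work.
>    struct Registrar
>    {
>       void refreshRegistration(int handle) { /* ... */ }
>    };
> 
>    int main()
>    {
>       boost::asio::io_service io;
>       boost::asio::io_service::strand strand(io);
>       Registrar registrar;
>       int handle = 42;
> 
>       // What used to be a hand-written command class becomes a bind:
>       strand.post(boost::bind(&Registrar::refreshRegistration,
>                               &registrar, handle));
> 
>       // Or, with a C++0x compiler, no bind at all:
>       strand.post([&registrar, handle]()
>       {
>          registrar.refreshRegistration(handle);
>       });
> 
>       io.run();
>       return 0;
>    }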
> 
> Dan
> 
> On 01/26/2011 01:45 PM, Kennard White wrote:
>> Hi Dan,
>> 
>> I found your post very interesting, since we have very similar goals.
>> The changes I've made recently to resip to add epoll support address
>> the first limitation: simply being able to keep many connections open.
>> 
>> I've spent some time profiling resip, and unfortunately I haven't
>> found a single hot spot. SipMessage allocation and destruction are
>> probably the most expensive part, but I haven't looked into it in any
>> detail. For reference, I'm getting about 2k TPS on good hardware in
>> "real" usage scenarios. The first thing to do is probably to look for
>> unnecessary message copies.
>> 
>> For the SIP aspect of NAT traversal, we are switching to TCP/TLS (away
>> from UDP) using RFC 5626 outbound support.
>> 
>> Would like to hear your plans.
>> 
>> Regards,
>> Kennard
>> 
>> On Wed, Jan 26, 2011 at 10:15 AM, Dan Weber <dan@xxxxxxxxxxxxxx> wrote:
>> 
>>    Hi guys,
>> 
>>    I must say I have quite an ambitious goal.  I want to make it so that
>>    I can build a network of repros that can support millions upon millions
>>    of users.  Likewise, I like to consider myself a standards-based guy,
>>    and I want to take as much of everyone's input as possible into the
>>    design process.  In return, everything will be made available for free
>>    under the same Vovida license and/or BSD licensing that is already
>>    available.
>> 
>> 
>>    Several key areas of concern are the following:
>> 
>>    Reliability:
>>    How do we make it so that we can have many repro nodes work together
>>    across a large geographic topology, and allow calls to continue to be
>>    processed in the event of an attack or a failure?
>> 
>>    Scalability:
>>    If you've ever run the testStack application on a modern computer,
>>    you'll notice that no matter how many cores you have, or what clock
>>    rate your processor runs at, there seems to be a magic threshold
>>    around 6500 TPS for non-INVITE scenarios.  Likewise, for calls, I can
>>    get about a third of that.  Also, those are tests done with TCP; when
>>    you add in UDP, you can watch it suck up memory as if that were its
>>    job.  Based on what Byron has shown me, on inferior hardware, the
>>    stack that Estacado/Tekelec has built and modified from the main
>>    resiprocate tree can perform over 12000 TPS for non-INVITE
>>    transactions in a single thread.  This means there are great areas
>>    for improvement beyond just adding concurrency.
>> 
>>    Security:
>>    Resiprocate supports TLS fairly well.  I would like to be able to take
>>    advantage of that with any reliability mechanism put forth, to help
>>    meet HIPAA-style requirements that all data stored to disk be
>>    encrypted and all data in transit be encrypted.  Thankfully, part of
>>    this problem can be more easily resolved by keeping more state in
>>    memory.
>> 
>>    NAT Traversal:
>>    Jeremy Geras and Scott Godin, among others, have worked very hard to
>>    provide NAT traversal mechanisms for calls, registrations, and so
>>    forth through reTurn, reflow, and recon.  Jeremy's branch of recon
>>    utilizes an outdated stack, but supports ICE to a large degree.  It is
>>    missing support for ICE with TURN and has some other quirks that I've
>>    managed to work out.
>> 
>>    In my research around these key areas, I have come up with several
>>    ideas of my own to deal with these issues; however, I would like to
>>    open this up to the community and discuss them in an open forum where
>>    everyone can participate and have their input taken seriously.
>> 
>>    Thanks guys,
>>    Dan
>> 
> 
> 
> _______________________________________________
> resiprocate-devel mailing list
> resiprocate-devel@xxxxxxxxxxxxxxx
> https://list.resiprocate.org/mailman/listinfo/resiprocate-devel