[reSIProcate] [reSIProcate-users] resip/dum performance issue
Derek MacDonald
derek.macdonald at gmail.com
Mon Apr 27 16:22:23 CDT 2009
Yes; on Windows I had to use a loopback UDP socket... I'm not sure what the
performance penalty is like. I use this approach in our client to wake up
the stack when it is otherwise idle and a message needs to be sent.
-Derek
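
A minimal sketch of the loopback-UDP wakeup described above; the class and
member names are illustrative, not from the resip tree. POSIX headers are
shown, but the same calls exist under Winsock (after WSAStartup(), with
closesocket() instead of close()), which is the point: the wakeup fd is a
real socket, so Windows select() accepts it.

    // Hypothetical helper, not part of resip: a UDP socket bound to the
    // loopback address that the TU can "ping" to wake the stack's select().
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstring>

    class UdpWakeup
    {
       public:
          UdpWakeup()
          {
             mSocket = ::socket(AF_INET, SOCK_DGRAM, 0);
             sockaddr_in addr;
             std::memset(&addr, 0, sizeof(addr));
             addr.sin_family = AF_INET;
             addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
             addr.sin_port = 0;                        // let the OS pick a free port
             ::bind(mSocket, (sockaddr*)&addr, sizeof(addr));
             socklen_t len = sizeof(mDest);
             ::getsockname(mSocket, (sockaddr*)&mDest, &len);  // learn the chosen port
          }
          ~UdpWakeup() { ::close(mSocket); }

          int fd() const { return mSocket; }          // add this fd to the select set

          // TU side: call after posting a message to the stack's fifo.
          void wakeup()
          {
             char b = 0;
             ::sendto(mSocket, &b, 1, 0, (sockaddr*)&mDest, sizeof(mDest));
          }

          // Stack side: call when select() reports the fd readable, to drain it.
          void drain()
          {
             char buf[64];
             ::recv(mSocket, buf, sizeof(buf), 0);
          }

       private:
          int mSocket;
          sockaddr_in mDest;
    };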
On Mon, Apr 27, 2009 at 12:47 PM, Gabriel Hege
<gabriel-mailinglists at gmx.de> wrote:
> As far as I know, Windows handles sockets and file descriptors differently.
> You cannot call select() on anything but a set of SOCKET handles, so
> this would be a UNIX-only solution.
>
> regards,
> gabriel
>
> Byron Campen wrote:
>
>> Cross-posting to resip-devel, since we're talking about a pretty
>> significant change here.
>> Yeah, but there is a tradeoff here; when dealing with really high load,
>> the transport fds are firing constantly, meaning that we almost never block.
>> And putting that extra fd into the select call does impose a (small)
>> performance penalty.
>>
>> What does everyone think about this?
>>
>> Best regards,
>> Byron Campen
>>
>>> At this point I am pretty sure that the problem is the UDP buffer size.
>>> After switching to TCP, or when increasing the UDP receive buffer size to 16K
>>> (the default is 8K), I am not seeing the retransmissions and there are no
>>> spikes. I think that the approach of having the TU put something into a pipe
>>> that is part of the select fd set the stack sits on is better than having it
>>> wake up every 25 ms.
>>> I think we should make the buffer size configurable through the stack API in
>>> the future.
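
A sketch of the receive-buffer workaround described above: setsockopt() with
SO_RCVBUF is standard BSD sockets, though making the value configurable for
the resip transports would need the stack API change suggested here. The
helper and fd names are illustrative, not an existing resip API.

    #include <sys/socket.h>

    // Hypothetical helper: enlarge the kernel's UDP receive buffer so a burst
    // of 25 simultaneous 200 OK responses is not dropped before the stack
    // thread reads them. udpFd is the transport's already-created UDP socket.
    void growRcvBuf(int udpFd, int bytes)
    {
       ::setsockopt(udpFd, SOL_SOCKET, SO_RCVBUF,
                    (const char*)&bytes, sizeof(bytes));
    }

    // e.g. growRcvBuf(fd, 64 * 1024);  // size to the largest burst expected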
>>> Ilana
>>>
>>> *From:* Byron Campen [mailto:bcampen at estacado.net]
>>> *Sent:* Monday, April 27, 2009 12:47 PM
>>> *To:* Adam Roach
>>> *Cc:* Scott Godin; Ilana Polyak; resiprocate-users at resiprocate.org
>>> *Subject:* Re: [reSIProcate-users] resip/dum performance issue
>>> We should be taking everything that is there. Also, I'm
>>> wondering how we're seeing INVITE retransmissions 400 ms after the 200 hits
>>> the wire; this doesn't involve DUM in any way.
>>> Best regards,
>>> Byron Campen
>>>
>>>
>>> Are you saying we just take the top message and go back into a wait
>>> instead of draining the queue?
>>>
>>> /a
>>>
>>> On 04/27/2009 11:23 AM, Byron Campen wrote:
>>> Yeah, but we're talking about 25 transactions.
>>> Best regards,
>>> Byron Campen
>>>
>>>
>>> That wouldn't be a bad thing to do, but I don't think it causes the issue
>>> described in the email. There's an order of magnitude difference between 25
>>> ms and 500 ms.
>>>
>>> So it may improve the situation, but it's not the sole cause.
>>>
>>> /a
>>>
>>> On 04/27/2009 10:41 AM, Byron Campen wrote:
>>> A while back, I noticed a slight problem in the way the stack's process
>>> loop is constructed. The issue is that we can get in a situation where we
>>> are blocking on a select call (in the transport code) when there is work to
>>> be done in the transaction layer. Basically, let's say we get down into the
>>> select call; we're waiting for an fd to become ready. Then, the TU sends a
>>> request down to the stack. This does not interrupt the select call; we sit
>>> there waiting for an fd to become ready for 25ms, time out, and then notice
>>> the new work that the TU kicked down to us. We could solve this by using the
>>> "self-pipe" trick: we keep an anonymous pipe, write a single byte to it
>>> whenever something gets put in the state machine fifo, and select on
>>> this pipe's fd along with the transport fds.
>>> Best regards,
>>> Byron Campen
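
A minimal sketch of the self-pipe trick as described above (POSIX-only, per
Gabriel's caveat at the top of the thread about Windows select()); the
function names are illustrative, not from the resip sources.

    #include <fcntl.h>
    #include <sys/select.h>
    #include <unistd.h>

    static int sPipeFds[2];   // [0] = read end, [1] = write end

    void initWakeupPipe()
    {
       ::pipe(sPipeFds);
       ::fcntl(sPipeFds[0], F_SETFL, ::fcntl(sPipeFds[0], F_GETFL) | O_NONBLOCK);
       ::fcntl(sPipeFds[1], F_SETFL, ::fcntl(sPipeFds[1], F_GETFL) | O_NONBLOCK);
    }

    // TU side: called whenever something is posted to the state machine fifo.
    void notifyStack()
    {
       char b = 0;
       ::write(sPipeFds[1], &b, 1);   // one byte is enough to wake select()
    }

    // Stack side: the process loop selects on the pipe's read end along with
    // the transport fds, so a post from the TU interrupts the select at once
    // instead of waiting out the 25 ms timeout.
    void processOnce(fd_set transportFds, int maxFd)
    {
       FD_SET(sPipeFds[0], &transportFds);
       if (sPipeFds[0] > maxFd)
       {
          maxFd = sPipeFds[0];
       }
       ::select(maxFd + 1, &transportFds, 0, 0, 0);   // may now block indefinitely
       if (FD_ISSET(sPipeFds[0], &transportFds))
       {
          char buf[64];
          while (::read(sPipeFds[0], buf, sizeof(buf)) > 0)
          {
             ;   // drain all pending wakeup bytes
          }
       }
       // ...then service the transport fds and the state machine fifo as usual.
    }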
>>>
>>>
>>> That sounds strange. Assuming the process loop is written correctly, I
>>> wouldn't expect this behaviour. Are you using StackThread and DumThread
>>> from the resip code unmodified? Can you post your test program(s)?
>>>
>>> > Also I noticed that the stack gets a select interrupt for each message; is
>>> > it possible that this creates a problem?
>>>
>>> I'm not really sure why you think this is a problem - that's what is
>>> supposed to happen. Can you explain your question in more detail?
>>> Thanks,
>>> Scott
>>> On Wed, Apr 1, 2009 at 10:02 AM, Ilana Polyak
>>> <Ilana.Polyak at audiocodes.com> wrote:
>>>
>>> Hello
>>>
>>> Maybe someone can help me with some performance issues I am seeing.
>>>
>>> I have a test client application that generates 25 simultaneous calls,
>>> and another application that acts as a server and responds with 200 OK.
>>> After all 25 calls are connected, the client app issues another 25 calls, and
>>> so on.
>>>
>>> I am calculating the time from when the first INVITE is sent until the
>>> last OK for an INVITE is received, to see how many connects per second I can
>>> get, and the results vary. Sometimes it is just 60 msec and sometimes 500
>>> msec. It looks like the stack is retransmitting some of the INVITEs because
>>> the OK is not received in time. But when I run Ethereal I see that the OK
>>> came in 400 msec before the INVITE was retransmitted. So it looks like DUM
>>> is not fast enough to keep up with the stack. Does anyone have any DUM
>>> performance measurements?
>>>
>>> I am running the stack using StackThread, and I have tried running DUM both
>>> from my app and separately in a DumThread.
>>>
>>> Also, I noticed that the stack gets a select interrupt for each message; is
>>> it possible that this creates a problem?
>>>
>>> Thanks a lot
>>>
>>>
>>> I am using release 1.3.4, Windows XP, Pentium 3.2 GHz.
>>>
>>>
>>> Ilana
>>>
>>>
>>> ------------------------------------------------------------------------
>>> This email and any files transmitted with it are confidential material.
>>> They are intended solely for the use of the designated individual or entity
>>> to whom they are addressed. If the reader of this message is not the
>>> intended recipient, you are hereby notified that any dissemination, use,
>>> distribution or copying of this communication is strictly prohibited and may
>>> be unlawful.
>>>
>>> If you have received this email in error please immediately notify the
>>> sender and delete or destroy any copy of this message
>>>
>>> _______________________________________________
>>> resiprocate-users mailing list
>>> resiprocate-users at resiprocate.org
>>> List Archive: http://list.resiprocate.org/archive/resiprocate-users/
>
> _______________________________________________
> resiprocate-devel mailing list
> resiprocate-devel at resiprocate.org
> https://list.resiprocate.org/mailman/listinfo/resiprocate-devel
>