
Re: [reSIProcate] Seg. fault in TransactionState


OK great - I've committed this change to SVN and it will show up in the next 1.8.X release.

Scott

On Tue, Sep 18, 2012 at 3:35 AM, Krister Jarl <kj@xxxxxxxxxxx> wrote:
Hi Scott,

Sorry for the lack of feedback. The patch is working fine; it has not introduced any new issues.

Regards,
Krister

Sent: 17 September 2012 16:06

To: Krister Jarl
Cc: resiprocate-devel@xxxxxxxxxxxxxxx
Subject: Re: [reSIProcate] Seg. fault in TransactionState

Hi Krister,

Is this patch working for you?  Has it introduced any new issues?

Thanks for your feedback.  If all is going well, I'd like to get this patch committed to SVN.

Regards,
Scott

On Thu, Aug 30, 2012 at 1:57 AM, Krister Jarl <kj@xxxxxxxxxxx> wrote:
Thanks! I'll try this out.

Regards,
Krister

From: slgodin@xxxxxxxxx [slgodin@xxxxxxxxx] on behalf of Scott Godin [sgodin@xxxxxxxxxxxxxxx]
Sent: 29 August 2012 15:50
To: Krister Jarl
Cc: resiprocate-devel@xxxxxxxxxxxxxxx
Subject: Re: [reSIProcate] Seg. fault in TransactionState

Yup - it looks like reception of the 200 clears the mNextTransmission storage, and handleInternalCancel then blindly accesses it.  That line of code modifies the branch parameter in the CANCEL for cases where we retransmitted the INVITE (with a new branch) because of a transport failure.  I've corrected this by ensuring we move the transaction state to Completed when we receive a 200 response.  This causes the stack to internally generate a 200 for the CANCEL to the TU instead of calling handleInternalCancel to send the CANCEL on the wire.

Can you try out the following patch?

Index: TransactionState.cxx
===================================================================
--- TransactionState.cxx (revision 9869)
+++ TransactionState.cxx (working copy)
@@ -1216,6 +1216,7 @@
                sendToTU(sip); // don't delete msg
                //terminateClientTransaction(mId);
                mMachine = ClientStale;
+               mState = Completed;
                // !bwc! We have a final response. We don't need either of
                // mMsgToRetransmit or mNextTransmission. We ignore further
                // traffic.
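
For anyone following along, the interaction can be sketched with a stripped-down model (hypothetical names and a simplified state machine for illustration only - these are not the real resip classes):

```cpp
// Minimal model of the 200-vs-CANCEL race and the one-line fix.
// Names are simplified stand-ins, not the actual TransactionState API.
#include <cassert>
#include <memory>
#include <string>

enum class State { Calling, Proceeding, Completed };

struct SipMessage { std::string branch; };

struct ClientInviteTransaction
{
    State mState = State::Calling;
    std::unique_ptr<SipMessage> mNextTransmission =
        std::make_unique<SipMessage>();

    // Receiving the 200 clears mNextTransmission (cleanup added after
    // 1.5); the patch also moves the state to Completed so a
    // late-arriving CANCEL is absorbed instead of sent on the wire.
    void on200Response()
    {
        mNextTransmission.reset();
        mState = State::Completed;  // the one-line fix
    }

    // Returns true if the CANCEL goes on the wire. Before the fix,
    // this ran even after the 200 and dereferenced the cleared
    // mNextTransmission (the this=0x0 in the backtrace).
    bool handleInternalCancel(SipMessage& cancel)
    {
        if (mState == State::Completed)
        {
            // Already answered: the real stack instead generates a
            // 200 for the CANCEL up to the TU.
            return false;
        }
        // Safe here: mNextTransmission is still valid pre-Completed.
        cancel.branch = mNextTransmission->branch;
        return true;
    }
};
```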

Thanks,
Scott


On Wed, Aug 29, 2012 at 3:27 AM, Krister Jarl <kj@xxxxxxxxxxx> wrote:
Hi!

After moving from version 1.5 to 1.8.5 we've had a couple of crashes in reSIProcate. The issue seems to be a race between a 200 and a CANCEL: our application constructs a CANCEL and sends it down to the stack at the same time as a 200 to the INVITE is received from the wire. I can see that some cleanup was added to the processing of the 200 in TransactionState that isn't in 1.5. Perhaps this is the problem?

Program terminated with signal 11, Segmentation fault.
#0  0x0000000000705431 in resip::SipMessage::ensureHeaders (this=0x0, headerType=...) at ../../resip/stack/SipMessage.hxx:575
#1  resip::SipMessage::header (this=0x0, headerType=...) at SipMessage.cxx:1550
#2  0x0000000000731e99 in resip::SipMessage::const_header (cancel=0x7f889c660820, clientInvite=...)
    at ../../resip/stack/SipMessage.hxx:428
#3  resip::TransactionState::handleInternalCancel (cancel=0x7f889c660820, clientInvite=...) at TransactionState.cxx:102
#4  0x0000000000733aa5 in resip::TransactionState::processSipMessageAsNew (sip=0x7f889c660820, controller=..., tid=...)
    at TransactionState.cxx:343
#5  0x0000000000734660 in resip::TransactionState::process (controller=..., message=0x7f889c660820) at TransactionState.cxx:743
#6  0x0000000000725ebb in resip::TransactionController::process (this=0x7f88940047f0, timeout=-1) at TransactionController.cxx:141
#7  0x0000000000719c54 in resip::SipStack::processTimers (this=0x7f88a61b3010) at SipStack.cxx:790

Regards,
Krister

_______________________________________________
resiprocate-devel mailing list
resiprocate-devel@xxxxxxxxxxxxxxx
https://list.resiprocate.org/mailman/listinfo/resiprocate-devel