Hello all,
We still have this problem and I still can't solve it unfortunately :(
I have created a barebones testcase that reproduces the scenarios.
The attached sipp scenario XMLs should be run like this:
./sipp -i 127.0.0.1 -p 12010 -sf refer100.xml
or
./sipp -i 127.0.0.1 -p 12010 -sf refer.xml
and then run the BasicCall test app from resip/dum/test, patched with the
attached diff, against them. The test app has only one leg (the client
leg from my original post), and I simulated the BYE from the "A" leg by
calling end() in onNewSubscriptionFromRefer(). The BasicCall app will
crash and produce the stack trace I described below.
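For reference, what the patch does is roughly the following (a sketch
only, the attached diff is authoritative; it assumes the
ServerSubscriptionHandler callback signature from SubscriptionHandler.hxx
and that getAppDialogSet()/end() are usable from the handle):

   // Sketch of the BasicCall patch: end the whole dialog set as soon as
   // the REFER-created subscription shows up, simulating the BYE that
   // the "A" leg would send in the real setup.
   virtual void onNewSubscriptionFromRefer(ServerSubscriptionHandle h,
                                           const SipMessage& msg)
   {
      if (h->getAppDialogSet().isValid())
      {
         h->getAppDialogSet()->end();
      }
   }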
Please take a look!
TIA
br
Szo
On 2016-11-11 11:41, Szokovacs Robert wrote:
> Hi all,
>
> We have found a situation where a malicious third party can possibly
> crash a resiprocate-based application by carefully rearranging SIP
> packets.
>
> We have the following setup: the resiprocate-based B2BUA receives the
> incoming call and calls the application server on another leg, which,
> after some processing, REFERs the call towards its final destination.
>
> Relevant packet flow:
>
> 1, A leg -> INVITE -> B2BUA
>
> 2, A leg <- 100 <- B2BUA
>
> 3, A leg <- 200OK <- B2BUA
>
> 4, B2BUA -> INVITE -> AppServer
>
> 5, B2BUA <- 100 <- AppServer
>
> 6, B2BUA <- 200OK <- AppServer
>
> 7, B2BUA -> ACK -> AppServer
>
> 8, B2BUA <- REFER <- AppServer
>
> 9, B2BUA -> 202 -> AppServer
>
> Etc, etc.
>
>
> In our situation packet #6 got lost, and for reasons not important here
> the AppServer doesn't wait for the ACK (#7) before sending the REFER
> (#8), so #9 is a 491 instead of a 202. All of this is fine and correct
> behaviour; the problem manifests when the A leg sends a BYE, causing
> our B2BUA to end the B leg too (by calling AppDialogSet::end(), which
> in turn calls DialogSet::end()), before #6 was retransmitted and
> received. Upon receiving the BYE, our app crashed:
>
> #16 in resip::Dialog::cancel (this=<optimized out>) at Dialog.cxx:341
>
> line 338:  void
>            Dialog::cancel()
>            {
>               resip_assert(mType == Invitation);
>               ClientInviteSession* uac =
>                  dynamic_cast<ClientInviteSession*>(mInviteSession);
>               resip_assert(uac);
>               uac->cancel();
> line 345:  }
>
> #17 in resip::DialogSet::end (this=0x874a27) at DialogSet.cxx:987
>
> line 983:  for (DialogMap::iterator it = mDialogs.begin();
>                 it != mDialogs.end(); it++)
>            {
>               try
>               {
>                  it->second->cancel();
>               }
>               catch (UsageUseException& e)
>               {
>                  InfoLog(<< "Caught: " << e);
>               }
> line 993:  }
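>
> To spell out how we end up in that loop (a simplified sketch of our
> application side; the class and callback names are made up, only
> AppDialogSet::end() and DialogSet::end() are real):
>
>    // Hypothetical B2BUA glue code, simplified for illustration.
>    void MyB2bua::onByeFromALeg(resip::AppDialogSetHandle bLeg)
>    {
>       // AppDialogSet::end() forwards to DialogSet::end(), which walks
>       // mDialogs and calls Dialog::cancel() on every entry, including
>       // the Dialog left behind by the early REFER, for which the
>       // ClientInviteSession assumptions above no longer hold.
>       if (bLeg.isValid())
>       {
>          bLeg->end();
>       }
>    }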
>
>
> Further investigation uncovered another situation: when both #5 and #6
> are missing and not retransmitted, we hit a different assert when the
> timeout generates the internal 408 response:
>
> #3 in __GI___assert_fail (assertion=0x875e85 "mDialogs.empty()",
> file=0x85b510 "DialogSet.cxx", line=352, function=0x8768c0 "void
> resip::DialogSet::dispatch(const resip::SipMessage&)") at assert.c:103
> #4 in resip::DialogSet::dispatch (this=0x7efec001f8d0, msg=...) at
> DialogSet.cxx:352
>
> line 350:  if (mState == WaitingToEnd)
>            {
>               resip_assert(mDialogs.empty());
>               if (msg.isResponse())
> line 354:     {
>
> #5 in resip::DialogUsageManager::processResponse (this=<optimized out>,
> response=...) at DialogUsageManager.cxx:2173
> #6 in resip::DialogUsageManager::incomingProcess (this=0x7efefa6690c0,
> msg=...) at DialogUsageManager.cxx:1680
> #7 in resip::DialogUsageManager::internalProcess (this=0x7efefa6690c0,
> msg=...) at DialogUsageManager.cxx:1486
> #8 in resip::DialogUsageManager::process (this=0x7efefa6690c0,
> mutex=<optimized out>) at DialogUsageManager.cxx:1710
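>
> Just to see what is actually still in the set at that point, we can
> drop something like this right above the assert in
> DialogSet::dispatch() (a debugging aid only, not a fix; it assumes
> DialogId is streamable like the other dum ids):
>
>    // Debugging sketch: list the dialogs that survive into WaitingToEnd.
>    if (mState == WaitingToEnd && !mDialogs.empty())
>    {
>       for (DialogMap::iterator it = mDialogs.begin();
>            it != mDialogs.end(); ++it)
>       {
>          InfoLog(<< "Dialog still present in WaitingToEnd DialogSet: "
>                  << it->first);
>       }
>    }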
>
>
> What is common in these scenarios is that the DialogSet has unexpected
> Dialogs hanging around (in the wrong state in the first case, and
> existing at all in the second). We think the root of the problem is
> that after DUM sends the 491 response to the wayward REFER, it should
> get rid of the corresponding Dialog, but for some reason it doesn't.
> We don't see where this cleanup would naturally fit, so we have no
> patch proposal yet; please comment and advise!
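>
> Just to make the idea concrete (purely a sketch for discussion, not a
> patch proposal: the helper name and its placement are hypothetical,
> and we have not verified that possiblyDie() is the right way to drop a
> Dialog here), what we have in mind is something along these lines,
> wherever dum generates the 491 for the early REFER:
>
>    // Hypothetical sketch only.
>    void Dialog::rejectStrayRefer(const SipMessage& refer)
>    {
>       SharedPtr<SipMessage> response(new SipMessage);
>       Helper::makeResponse(*response, refer, 491);
>       mDum.send(response);
>       // Once the REFER is rejected, the Dialog created for it has no
>       // usages, so (we assume) it could be torn down here instead of
>       // lingering in the DialogSet until end() or dispatch() trips
>       // over it.
>       possiblyDie();
>    }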
>
> Thanks in advance!
>
> br
>
> Szo
>