Ok, here's what I propose doing:

Index: resip/stack/TransactionState.cxx
===================================================================
*** resip/stack/TransactionState.cxx	(revision 6902)
--- resip/stack/TransactionState.cxx	(working copy)
***************
*** 1758,1778 ****
  else // reuse the last dns tuple
  {
     assert(sip->isRequest());
!    assert(mTarget.getType() != UNKNOWN_TRANSPORT);
!    if (resend)
     {
!       if (mTarget.transport)
        {
!          mController.mTransportSelector.retransmit(sip, mTarget);
        }
        else
        {
!          DebugLog (<< "No transport found(network could be down) for " << sip->brief());
        }
     }
     else
     {
!       mController.mTransportSelector.transmit(sip, mTarget);
     }
  }
  }
--- 1758,1793 ----
  else // reuse the last dns tuple
  {
     assert(sip->isRequest());
!    if(mTarget.getType() != UNKNOWN_TRANSPORT)
     {
!       if (resend)
!       {
!          if (mTarget.transport)
!          {
!             mController.mTransportSelector.retransmit(sip, mTarget);
!          }
!          else
!          {
!             DebugLog (<< "No transport found(network could be down) for " << sip->brief());
!          }
!       }
!       else
!       {
!          mController.mTransportSelector.transmit(sip, mTarget);
!       }
     }
     else
     {
!       // !bwc! While the resolver was attempting to find a target, another
!       // request came down from the TU. This could be a bug in the TU, or
!       // could be a retransmission of an ACK/200. Either way, we cannot
!       // expect to ever be able to send this request (nowhere to store it
!       // temporarily).
!       DebugLog(<< "Received a second request from the TU for a transaction"
!                   " that already existed, before the DNS subsystem was done "
!                   "resolving the target for the first request. Either the TU"
!                   " has messed up, or it is retransmitting ACK/200 (the only"
!                   " valid case for this to happen)");
     }
  }
  }

Any thoughts?

Best regards,
Byron Campen
Byron Campen wrote:
> This is a bug that was noticed a little while back. It was occurring
> when repro was trying to retransmit an ACK before the DNS lookup for
> the first had completed. (This can happen easily if we get multiple
> responses rapidly, or the DNS servers are being very slow.) This bug
> was uncovered in repro due to the fact that it had previously been
> putting a different tid on each ACK, meaning that a separate
> TransactionState was being maintained for each ACK, so we never had a
> second ACK hitting the same TransactionState to trigger the bug. I'll
> look into fixing this.
>
>> Thanks for the quick response - I can confirm that my DNS is
>> occasionally a little slow; the machine in question is about to get a
>> memory upgrade.
>
> Best regards,
> Byron Campen
>
>>> On Saturday, I updated to the latest version of reSIProcate from SVN
>>> (6901). Since then, I've been getting this assert() occasionally:
>>>
>>>    else // reuse the last dns tuple
>>>    {
>>>       assert(sip->isRequest());
>>>       assert(mTarget.getType() != UNKNOWN_TRANSPORT); // line 1761 ****
>>>       if (resend)
>>>       {
>>>          if (mTarget.transport)
>>>
>>> Does anyone have any ideas about the cause of this?
>>>
>>> #0  0xb7343947 in raise () from /lib/tls/libc.so.6
>>> (gdb) bt
>>> #0  0xb7343947 in raise () from /lib/tls/libc.so.6
>>> #1  0xb7345212 in abort () from /lib/tls/libc.so.6
>>> #2  0xb733d05f in __assert_fail () from /lib/tls/libc.so.6
>>> #3  0xb7d3d95b in resip::TransactionState::sendToWire (this=0xb6968c00,
>>>     msg=0xb6985e18, resend=false) at TransactionState.cxx:1761
>>> #4  0xb7d40344 in resip::TransactionState::processStateless
>>>     (this=0xb6968c00, message=0xb6985e18) at TransactionState.cxx:523
>>> #5  0xb7d443e3 in resip::TransactionState::process
>>>     (controller=@0xb725eb8c) at TransactionState.cxx:262
>>> #6  0xb7d338aa in resip::TransactionController::process
>>>     (this=0xb725eb8c, fdset=@0xbfffdcd0) at TransactionController.cxx:83

_______________________________________________
resiprocate-devel mailing list
resiprocate-devel@xxxxxxxxxxxxxxxxxxxx
https://list.resiprocate.org/mailman/listinfo/resiprocate-devel
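To make the "different tid on each ACK" point above concrete, here is a minimal sketch, with hypothetical types rather than repro's actual code: the stack keeps one state object per transaction id (tid), so a retransmitted ACK that reuses the tid lands on the existing TransactionState, possibly while its DNS query is still outstanding, whereas a fresh tid per ACK gets a fresh state and never exercises that path.

// Sketch only: hypothetical types, not repro's actual code. Shows why a
// second ACK with the same tid can observe a transaction whose DNS lookup
// has not completed yet, while per-ACK tids mask the problem entirely.
#include <iostream>
#include <map>
#include <string>

struct TransactionState
{
   bool dnsResolved;                       // flipped when the resolver calls back
   TransactionState() : dnsResolved(false) {}
};

int main()
{
   std::map<std::string, TransactionState> transactions;

   // First ACK: creates the transaction; DNS lookup starts asynchronously.
   transactions["tid-1"];

   // Second ACK with the SAME tid arrives before DNS finishes: it finds the
   // existing, still-unresolved state, which is the case the patch handles.
   std::map<std::string, TransactionState>::iterator it =
      transactions.find("tid-1");
   if (it != transactions.end() && !it->second.dnsResolved)
   {
      std::cout << "second request before DNS done; must not assert\n";
   }

   // With a DIFFERENT tid per ACK (repro's old behavior), a fresh state is
   // created instead, so the unresolved-target path was never exercised.
   transactions["tid-2"];
   return 0;
}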