At the last working group we introduced the idea of a TuSelector which effectively replaces the TuFifo of the SipStack. We are now seeing issues with shutdown, statistics, etc. Here is a design proposal to clean up our use of TUs, which should also help with different threading/processing models.

-= Overview =-

The core of the idea is to not have a component own the fifo which is effectively its next hop. The SipStack currently owns the TuFifo, which needs to be polled with receive or receiveAny. TransactionUser is closer to the new model: it owns its fifo, which is written to from the stack side but owned by the TransactionUser.

In the new approach, the fifo would be abstracted as a three-method interface which represents the next step in the pipeline:

   class HalfPipe
   {
      virtual void post(Message* msg, DepthUsage usage); // may want to clean up depth usage / have 2 posts
      virtual int size() const;
      virtual bool wouldAccept(DepthUsage usage) const;
   };

This would be passed to the SipStack as a constructor parameter, and would replace mTuFifo.

TuSelector would be a subclass of HalfPipe. TuSelector currently relies on information about which Tu sent a request/posted a message; this could be replaced with an opaque piece of userdata which can be used to match requests/responses/failure messages back to the originating Tu.
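A rough sketch of what that could look like (illustrative only; the registration methods, the map, and findTuFor are assumptions, not existing code):

   #include <map>

   class TuSelector : public HalfPipe
   {
      public:
         // Takes the app's fifo for unmatched messages; see the
         // shutdown/statistics section below.
         explicit TuSelector(HalfPipe& applicationFifo);

         // A Tu registers the HalfPipe it wants dispatched to, keyed by the
         // same opaque token it attaches to its outgoing requests.
         void registerTu(void* tuKey, HalfPipe& tuFifo);
         void unregisterTu(void* tuKey);

         // HalfPipe interface
         virtual void post(Message* msg, DepthUsage usage);
         virtual int size() const;
         virtual bool wouldAccept(DepthUsage usage) const;

      private:
         HalfPipe* findTuFor(Message* msg) const;   // match on the opaque userdata
         HalfPipe& mApplicationFifo;
         std::map<void*, HalfPipe*> mTuFifos;
   };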
-= Shutdown & Statistics (the app vs the Tu) =-

With the introduction of the TuSelector, there are new messages that are only of interest to the app (Tu shutdown, statistics) and need to be consumed. Right now these go directly to SipStack::mTuFifo. In the new approach, TuSelector would inherit from HalfPipe, but would also take a HalfPipe as a constructor parameter which represents the app's fifo: TuSelector(HalfPipe& applicationFifo). Any message that isn't matched to a Tu's fifo would go there; currently that is shutdown and statistics messages.
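A minimal sketch of the dispatch itself (same caveats as above; message ownership handling is glossed over):

   // Sketch: anything that matches a registered Tu goes to that Tu's fifo;
   // everything else (currently shutdown and statistics) falls through to
   // the app's fifo.
   void TuSelector::post(Message* msg, DepthUsage usage)
   {
      HalfPipe* tuFifo = findTuFor(msg);   // hypothetical lookup on the opaque userdata
      if (tuFifo)
      {
         tuFifo->post(msg, usage);
      }
      else
      {
         mApplicationFifo.post(msg, usage);
      }
   }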
Once all TransactionUsers have been shut down (the TuSelector could provide a specific method for this), the app can shut down the stack. The app could simply end any stack thread/stop calling process, but it might want to wait for all retransmissions to finish, or for each message to be transmitted at least once. The SipStack already has a shutdown routine like this, which would be the basis for this step.
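Sketched out, the sequencing could be something like the following (all of the method names below are placeholders for illustration, not an existing API):

   // 1. Wind down every TransactionUser, giving the stack cycles so they
   //    can finish.
   tuSelector.requestShutdownOfAllTus();
   while (!tuSelector.allTusHaveShutdown())
   {
      giveStackCycles();                  // the app's normal build/select/process step
   }

   // 2. Then let the stack itself drain, e.g. finish outstanding
   //    retransmissions or get each message out at least once.
   stack.shutdown();
   while (!stack.hasShutdown())
   {
      giveStackCycles();
   }

   // 3. Only now stop the stack thread / stop calling process().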
-= Threading/Processing Models (DumProcessHandler) =-

With the introduction of external transports that are not select based, there was a requirement to give the SipStack/DUM cycles when external events occurred. DumProcessHandler is a rather messy realization of this requirement. Here is the new approach: the SipStack would still have an AsyncProcessHandler, but it would not interact with Tu (i.e. DUM) processing. Tus that need to process incoming events without polling would do this by subclassing the specific TransactionUser or the TuSelector to give cycles to the TransactionUser. Applications that do not wish to poll for shutdown/statistics messages could write a HalfPipe that acts as a synchronous demuxer of events, as sketched below.
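For example (a sketch; the concrete message types tested for and the ownership handling are assumptions for illustration, not something the stack defines for this purpose):

   // Sketch: instead of queueing for a later poll, dispatch in post() on
   // the posting thread.
   class SynchronousDemux : public HalfPipe
   {
      public:
         virtual void post(Message* msg, DepthUsage usage)
         {
            if (dynamic_cast<StatisticsMessage*>(msg))
            {
               onStatistics(msg);          // app-defined
            }
            else
            {
               onTuShutdown(msg);          // app-defined
            }
            delete msg;                    // assumes post() takes ownership
         }
         virtual int size() const { return 0; }               // never queues
         virtual bool wouldAccept(DepthUsage) const { return true; }

      private:
         void onStatistics(Message* msg);
         void onTuShutdown(Message* msg);
   };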
-= Porting existing applications =-

Applications that are built directly on top of the SipStack would just have to provide a TimeLimitFifo<Message> as a SipStack constructor parameter, and do the equivalent of receive/receiveAny directly against that fifo in their build/select/process loop. Applications built on top of DUM will not have to do anything different, as DUM will be ported to this new approach.
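Concretely, a ported application loop might end up looking something like this (a sketch; the fifo construction arguments and the messageAvailable/getNext usage are my assumptions about TimeLimitFifo, and handle() is app-defined):

   // Sketch of the ported loop: the app owns the fifo it handed to the
   // stack and drains it itself, instead of calling receive()/receiveAny().
   TimeLimitFifo<Message> appFifo(0, 0);   // assumed: 0,0 = no time/size limits
   SipStack stack(appFifo);                // proposed ctor parameter replacing mTuFifo

   bool running = true;                    // cleared by the app's shutdown logic
   while (running)
   {
      FdSet fdset;
      stack.buildFdSet(fdset);
      fdset.selectMilliSeconds(stack.getTimeTillNextProcessMS());
      stack.process(fdset);

      while (appFifo.messageAvailable())
      {
         Message* msg = appFifo.getNext();
         handle(msg);                      // app-defined; app owns msg now
      }
   }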
Comments/Suggestions/Requirements?

--Derek