Trade-offs for caching PipelineManagers in PolicyInjectionBehavior?

Sep 27, 2010 at 12:11 PM
Edited Sep 27, 2010 at 3:07 PM

We've had some performance issues with intercepting large classes. Tracing showed that a lot of time was spent in the loop in the PolicyInjectionBehavior constructor.

public PolicyInjectionBehavior(CurrentInterceptionRequest interceptionRequest, InjectionPolicy[] policies, IUnityContainer container)
{
    var allPolicies = new PolicySet(policies);
    bool hasHandlers = false;

    var manager = new PipelineManager();

    foreach (MethodImplementationInfo method in
        interceptionRequest.Interceptor.GetInterceptableMethods(
            interceptionRequest.TypeToIntercept, interceptionRequest.ImplementationType))
    {
        bool hasNewHandlers = manager.InitializePipeline(method,
            allPolicies.GetHandlersFor(method, container));
        hasHandlers = hasHandlers || hasNewHandlers;
    }
    pipelineManager = hasHandlers ? manager : null;
}

One thing that seems to solve the problem for us is caching the PipelineManager in a dictionary using a key built from interceptionRequest.
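
Roughly, the change looks like this (a simplified sketch rather than our exact code; the key is just the pair of types taken from interceptionRequest, and nothing ever invalidates the cache):

private static readonly object cacheLock = new object();
private static readonly Dictionary<KeyValuePair<Type, Type>, PipelineManager> pipelineCache =
    new Dictionary<KeyValuePair<Type, Type>, PipelineManager>();

public PolicyInjectionBehavior(CurrentInterceptionRequest interceptionRequest, InjectionPolicy[] policies, IUnityContainer container)
{
    // Key on the two types; this assumes the interceptable methods and matching
    // policies are the same for every request with this pair of types.
    var key = new KeyValuePair<Type, Type>(
        interceptionRequest.TypeToIntercept, interceptionRequest.ImplementationType);

    lock (cacheLock)
    {
        PipelineManager cached;
        if (!pipelineCache.TryGetValue(key, out cached))
        {
            // Same work as the original constructor, done only once per type pair.
            var allPolicies = new PolicySet(policies);
            bool hasHandlers = false;
            var manager = new PipelineManager();

            foreach (MethodImplementationInfo method in
                interceptionRequest.Interceptor.GetInterceptableMethods(
                    interceptionRequest.TypeToIntercept, interceptionRequest.ImplementationType))
            {
                hasHandlers |= manager.InitializePipeline(method,
                    allPolicies.GetHandlersFor(method, container));
            }

            cached = hasHandlers ? manager : null;
            pipelineCache.Add(key, cached);
        }

        pipelineManager = cached;
    }
}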

We're not changing policies after the container has been set up, and we're only using the standard interceptors. Between two calls with the same type to intercept and the same implementation type, the result of GetInterceptableMethods() shouldn't change, right?

Aside from the increased memory used by the cache, what are the trade-offs? From what I can see, we lose the ability to handle interceptors that return different methods between two calls to GetInterceptableMethods(), and to react when policies change, but neither is a problem for us.

Thanks for any help,
Håkan Canberger

Sep 28, 2010 at 5:20 AM

This has been a common request. During the development of Unity 2.0, I tried an experiment to do exactly this sort of caching. It ended up significantly slower, so I tabled the idea. If you can actually get a speedup out of this, by all means go for it!