When defining or designing front-office, back-office, and middle-office processes, one is frequently asked how six sigma would fit into process improvement in this context. It is true that six sigma is a set of techniques for process improvement. But it is equally true that, in its original incarnation, its aim was to reduce variability in manufacturing output. Six sigma techniques use complex statistical tools such as GLM (General Linear Models) and ANOVA (Analysis of Variance), to name just a couple. Manufacturing and industrial processes are many notches more complex than a (front, back, middle)-office process: there are many more parameters (both input and output) at every step of the process, arising from both human and machine participants. If you find yourself dusting off your old Box-Jenkins notes, or some other statistics book, when dealing with (front, back, middle)-office processes, then everyone would like to know exactly how complex these processes are that such techniques are required to measure their quality.
How exactly would you measure the quality of a claims process that has been closed by an underwriter? By the amount of the payout to the policyholder? Perhaps by the time taken to deal with the claim, or by the avoidance of a runaway legal process? Of course, using the payout amount as a measure of quality is totally immoral. There is a similar difficulty in using “time to closure” as a quality measure: it only works when confined to a small category of policies in a very restricted sense, not across the whole portfolio of policies, and even then it is mostly achieved through business policies and straight-through processing. Quality in the context of a (front, back, middle)-office process can be highly subjective.
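To make the point concrete, here is a minimal sketch of what a “time to closure” measure restricted to one policy category might look like. The claims records, category names, and durations are all hypothetical; the point is that nothing beyond elementary arithmetic is involved:

```python
from datetime import date
from statistics import median

# Hypothetical claims records: (policy_category, opened, closed).
claims = [
    ("motor-standard", date(2023, 1, 5), date(2023, 1, 20)),
    ("motor-standard", date(2023, 2, 1), date(2023, 2, 10)),
    ("motor-standard", date(2023, 3, 3), date(2023, 4, 2)),
    ("liability-complex", date(2023, 1, 1), date(2024, 6, 1)),
]

def median_days_to_closure(claims, category):
    """Median time to closure, restricted to a single policy category."""
    durations = [(closed - opened).days
                 for cat, opened, closed in claims
                 if cat == category]
    return median(durations) if durations else None

# Comparing "motor-standard" against "liability-complex" directly would be
# meaningless -- hence the restriction to one category at a time.
print(median_days_to_closure(claims, "motor-standard"))
print(median_days_to_closure(claims, "liability-complex"))
```

A median over a narrow category is about as sophisticated as the measurement needs to get; no designed experiments or variance decomposition are required.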
There are many techniques and tools for process mining and the optimisation of (front, back, middle)-office processes, and very few of those we have encountered need deep knowledge of statistical modelling. Perhaps there are yet-to-be-identified processes whose quality measurement requires deep statistical or mathematical techniques. But processes that fall in the (front, back, middle)-office category do not produce physical output whose quality is understood through tactile experience or physical measurements or properties. These processes are associated with qualitative measures, whose quantitative aspects are no more complicated than can be solved with some rudimentary mathematics.
The intention here is not to minimise the importance of six sigma techniques or tools, but to ask why such a steam-driven hammer is needed to measure quality in what are essentially far less complex (front, back, middle)-office processes. Techniques such as rule-based engines in conjunction with straight-through processing provide a simpler way to optimise certain parts of the process. Many office processes are, by their very nature, of long duration and involve knowledge workers from many domains; consider a claims process entangled in a judicial process. No amount of six sigma will help there. Optimisation of (front, back, middle)-office processes is essentially traversing the process graph: identifying inactive or unused activities, identifying activities that are bottlenecks in terms of execution time, adjusting activities to changing work practices driven by changing regulations, and so on. This is a highly simplified view, but it gives the gist of it.
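The traversal described above can be sketched with nothing more than counting and averaging. In this hypothetical example, each trace is the path one claim took through the process, with the days spent at each activity; the activity names, designed process, and event log are all invented for illustration:

```python
from collections import defaultdict

# Hypothetical event log: each trace is the sequence of activities one
# claim passed through, with the days spent at each activity.
traces = [
    [("receive", 1), ("triage", 2), ("assess", 10), ("pay", 1)],
    [("receive", 1), ("triage", 3), ("assess", 25), ("pay", 2)],
    [("receive", 2), ("triage", 1), ("assess", 12), ("pay", 1)],
]

# Activities the designed process defines, including ones that may never fire.
designed_activities = {"receive", "triage", "assess", "legal-review", "pay"}

def analyse(traces, designed):
    """Return (unused activities, slowest activity, average dwell times)."""
    seen = set()
    total_days = defaultdict(float)
    visits = defaultdict(int)
    for trace in traces:
        for activity, days in trace:
            seen.add(activity)
            total_days[activity] += days
            visits[activity] += 1
    unused = designed - seen                       # designed but never executed
    avg = {a: total_days[a] / visits[a] for a in seen}
    bottleneck = max(avg, key=avg.get)             # highest average dwell time
    return unused, bottleneck, avg

unused, bottleneck, avg = analyse(traces, designed_activities)
print(unused)      # activities present in the design but absent from the log
print(bottleneck)  # the activity where claims spend the longest on average
```

Spotting that one activity never fires and another dominates elapsed time is exactly the kind of insight that drives office-process optimisation, and it falls out of a single pass over the event log.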