Classical FP&A (Controlling) approaches are headed down a dead-end street.

We are witnessing three fundamental facts of life materialize, and they are forcing us to change:

  • Incomplete data – 1
    The avalanche of newly created data is growing faster than ever before. IoT and Industry 4.0 are real.
  • Incomplete data – 2
    The number of deployed enterprise systems has been growing for more than 30 years now. The knowledge of how these systems interact is fading away, day by day.
  • Incomplete data – 3
    Today, it is not uncommon for vendors not to fully understand in detail how their own ERP systems function. The developers who conceived these products are all over 60 years old.

As a result, classical deterministic approaches are doomed to fail, sooner rather than later. Therefore, deterministic approaches need to be complemented with stochastic, statistical approaches – the sooner the better.

Do these proven approaches, long used by physicists, work in enterprise finance? Yes, they do.

We don’t know exactly how “stochastic controlling” will unfold. Our vision is that it will revolve around performance scoring.

FP&A stands for Financial Planning & Analysis. Today, planning is largely determined by aspirations and ambitions rather than by the capability to execute. Analysis is more or less confined to massaging Excel sheets.

In order to improve planning – advancing it from once a year to continuous – one must understand the business drivers. This means FP&A needs to understand which of their financial indicators depend on which operational process steps. Moreover, FP&A must know about each and every operational process step. Provided FP&A has access to all detailed money and goods movements over a considerable period of time, they can apply statistical correlation analyses. Equipped with data and the right software, FP&A can find out which operational process steps impact which financial indicators, as the sketch below illustrates.
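Here is a minimal sketch of what such a correlation screen could look like, assuming period-level data; the file and column names (on_time_delivery_rate, dso, and so on) are hypothetical:

```python
# Sketch: screening operational process metrics against financial indicators.
# All file and column names are illustrative assumptions, not a real schema.
import pandas as pd

# One row per period (e.g. per week), mixing operational and financial measures.
df = pd.read_csv("process_and_finance_metrics.csv")

operational = ["on_time_delivery_rate", "order_change_rate", "invoice_error_rate"]
financial = ["dso", "operating_cash_flow", "gross_margin"]

# Spearman rank correlation is robust to outliers and to non-linear but
# monotone relationships, both common in transactional business data.
corr = df[operational + financial].corr(method="spearman")

# Keep only the operational-vs-financial block and flag strong candidates.
candidates = corr.loc[operational, financial]
print(candidates[candidates.abs() > 0.5])
```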

By leveraging data and analytics, FP&A can identify business drivers and recognize their relationships to other variables (e.g. whether perfect fulfillment impacts cash or profitability). To take it a step further, data and analytics can estimate the improvement of a performance indicator that follows from a change to a performance driver. For example, today you don’t know how much cash could be set free if perfect fulfillment improved by 2%. Statistical modelling (leveraging machine learning) can be applied here. This way, you can determine that a 2% improvement of perfect fulfillment releases $14m, for example. That’s all you need to know.
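To illustrate how such a quantification could be modelled, here is a sketch using a plain linear regression; the column names and the 2% scenario are assumptions for illustration, not the author’s actual model:

```python
# Sketch: estimating how a change in a performance driver moves an indicator.
# Column names and the scenario figures are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("process_and_finance_metrics.csv")
X = df[["perfect_fulfillment_rate", "invoice_error_rate"]]
y = df["tied_up_cash"]  # e.g. cash locked in receivables and inventory

model = LinearRegression().fit(X, y)

# Scenario: improve perfect fulfillment by 2 percentage points, all else equal.
baseline = X.mean().to_frame().T
scenario = baseline.copy()
scenario["perfect_fulfillment_rate"] += 0.02

released_cash = model.predict(baseline)[0] - model.predict(scenario)[0]
print(f"Estimated cash released: ${released_cash:,.0f}")
```

In practice one would reach for richer models and validate out of sample; the point is only that the driver-to-indicator link becomes an explicit, quantifiable model.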

In summary, stochastic controlling – applying advanced statistics to the wealth of all your business transactions – enables the identification as well as the quantification of performance drivers.

The benefits are manifold. You can set more realistic targets. You can react to deviations much earlier.

In order to improve analysis even further – advancing it to analytics – you have to be able to overcome your bias. What kind of bias exactly? Every report, every data warehouse cube, every fixed hierarchy constitutes a bias, because you have to make assumptions about how your business runs in order to specify a report, a cube, or a hierarchy.

For example, say you and your customers have agreed on certain payment terms and targets. In order to control whether these are met, you have authored a report which monitors payment thresholds. You run this report at regular intervals, expecting the DSO to stay within the projected range. After a while, you realize that this is not happening. What is the reason? To find out with a report, you first need a suspicion, which a newly created report can then verify. As a matter of fact, this problem could not be solved for years. Applying statistical outlier recognition to the whole population of payments revealed the culprit right away.
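A sketch of what such an outlier screen over the whole payment population could look like, here with scikit-learn’s IsolationForest; the field names (invoice_amount, days_to_payment, customer_id) are assumptions:

```python
# Sketch: statistical outlier recognition over the full payment population.
# Field names are illustrative assumptions about the payment table.
import pandas as pd
from sklearn.ensemble import IsolationForest

payments = pd.read_csv("payments.csv")
features = payments[["invoice_amount", "days_to_payment", "discount_taken"]]

# Flag the most anomalous ~1% of payments instead of guessing at a new report.
model = IsolationForest(contamination=0.01, random_state=0)
payments["outlier"] = model.fit_predict(features) == -1

# Inspect the flagged payments, e.g. grouped by customer, to find the culprit.
flagged = payments[payments["outlier"]]
print(flagged.groupby("customer_id").size().sort_values(ascending=False).head())
```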

But if you have incomplete data, you are lost, aren’t you? Hmm. Actually, statistics has always had to deal with incomplete data. As a result, sophisticated and proven algorithms abound that are capable of dealing with this fact of life. We have proven that they are highly applicable in a business context as well.
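One family of such algorithms is model-based imputation. A minimal sketch with scikit-learn’s IterativeImputer (a MICE-style, still officially experimental API), assuming a purely numeric metrics table in which no column is entirely empty:

```python
# Sketch: completing incomplete data via iterative, model-based imputation.
# Assumes a numeric table; entirely empty columns would be dropped by the fit.
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("process_and_finance_metrics.csv")

# Each missing value is estimated from the other columns, round by round.
imputer = IterativeImputer(max_iter=10, random_state=0)
completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
```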

Where is the inflection point? We know that deterministic approaches are limited. We know that statistical approaches are not 100% exact. What do we do with this contradiction? Where do we draw a line in the sand? We think statistical approaches are preferable wherever deterministic approaches fail for one reason or another. Ultimately, knowing something is always better than knowing nothing.