The 15 Pitfalls of Long Range Planning
Common Pitfall #12 – Moving Target
The moving target problem occurs most frequently when a company has many groups that depend on each other for numbers. This is especially true in large matrixed organizations, but it can happen in smaller organizations as well when those firms have multiple business units and functions. In this environment, a moving target situation develops when changes to numbers in one place necessarily result in changes elsewhere in the organization. The process then feeds back on itself: changes in one place necessitate changes in another, which in turn create changes in yet another.
Consider the situation where a company has four business units (let’s call them A, B, C, and D) and four central functions (let’s call them engineering, sales, operations, and finance). Functions generally have budget targets that they operate within. When one business unit increases its resource requirements on one function (let’s say A tells engineering it needs more headcount), engineering is forced to tell another business unit (let’s say B) that it will be able to provide fewer heads than originally anticipated. Because B now has less engineering headcount than it expected, it may have fewer products to sell, so B may reduce the headcount it reports needing from sales. However, sales has revenue targets, and those targets are often tied to individual quotas, so salespeople who want to make their targets will need to find another business unit to support. For this reason, sales may now tell Business Unit C that it has deployed additional sales headcount. Even if Business Unit C is happy to have the additional sales firepower, it may still be concerned about reaching its OPEX targets, so C may tell operations and finance that they will need to do more with fewer people. And the situation goes on and on...
Some of this kind of dialog is healthy. It is organizational alignment, and it is a key component of effective planning. When these discussions become too extended, however, they become counterproductive: they can actually prevent the true work of analyzing and optimizing a portfolio. Warning signs often include missed deadlines, with associated comments like “we’re still waiting for information from X” or “we are late because X kept changing their numbers.” When a process is manual, warning signs will often include extended iterations of planning. Planning processes that involve more than three iterations of numbers, guidance, and deadlines are absorbing too much time and getting bogged down in detail.
When planning processes get confused with alignment processes, both can mire down, and neither is ultimately successful. When alignment breaks down, a disconnect usually persists between different parts of an organization. When this misalignment happens, organizations will generally “move resources around” from other parts of their budget or from future quarters in order to “make their numbers” or “meet their targets.” Usually, this means that organizations will miss their numbers in the later quarters of the fiscal year (third and/or fourth).
As well, extended planning cycles mean that business owners are bogged down in planning and alignment for too long, and often neglect important elements of their business. When this occurs, companies often experience results much poorer than forecast during the quarters of the planning process. Surprises in business results during the planning process can be the result of a process which takes too long and is too detailed.
There are at least two best practices to consider here. Firms that reduce the cycle times involved in the planning process typically provide more direct guidance at the outset. Without being so specific as to dictate a “premature destination,” companies that provide a realistic envelope from the outset of planning can significantly reduce cycle times.
Another best practice is to set expectations, with an expected outcome, for each cycle in the planning process from the outset. For example, a planner might indicate that the “first round” of planning will last two weeks and that constituents in the process should be within a 10% range of their requirements from other parts of the business. The planner would define similar expectations for the second and third rounds. Finally, a planner might specifically state the end goal of the last round, emphasizing a goal or target range of alignment, noting that final alignment would be dictated by the budget process itself. This helps set the mindset among participants that the goal of planning is to optimize investments in the portfolio, not to achieve perfect alignment.
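As a rough illustration, the round-by-round tolerance idea can be sketched in a few lines of Python. The group names, headcount figures, and tolerance values below are invented for illustration; the point is simply that each round’s submissions can be checked mechanically against the band announced for that round.

```python
# Hypothetical sketch: check whether each group's resource request has
# converged to within the tolerance band published for a planning round.

def within_tolerance(previous, current, tolerance):
    """Return True if the new request is within +/- tolerance of the old one."""
    if previous == 0:
        return current == 0
    return abs(current - previous) / abs(previous) <= tolerance

# Tolerances announced at the outset: 10% for round 1, 5% for round 2, etc.
round_tolerances = [0.10, 0.05, 0.02]

prev_round = {"A": 120, "B": 80, "C": 60}   # headcount asked of engineering
this_round = {"A": 128, "B": 66, "C": 61}

unsettled = [
    unit for unit in prev_round
    if not within_tolerance(prev_round[unit], this_round[unit], round_tolerances[0])
]
print(unsettled)  # units still outside the 10% band for round 1
```

A planner could run a check like this at the close of each round to see which constituents still need alignment, rather than letting every number stay in motion indefinitely.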
Finally, companies that automate the planning process can significantly reduce cycle times and increase alignment within the organization while providing optimization tools at all levels of the portfolio. A best practice solution involves a central data repository where each element of the organization can access information entered by other elements that impacts them. This way, as changes are made, pieces of the organization can react to those changes in real time. Using this type of configuration still requires cycle times, but it also frees up the business owners’ time to be more productive – they can spend their cycles actually analyzing and optimizing their expenses, instead of merely reallocating dollars.
The 15 Pitfalls of Long Range Planning
Common Pitfall #11 – Data Inequality
Many companies find themselves with data sets that do not paint equal pictures. This can occur when business owners don’t use the same sets of assumptions or data points when they complete their forecasts. It happens more often with revenue than with cost of goods sold or expense data, because the easiest way to paint a rosy picture for a project or set of projects is to portray a “hockey stick” in revenue – especially when there is little accountability for the outcomes. Cost of goods sold data and operating expenses like headcount, by contrast, are usually published by a company or its functions.
When completing planning information, business owners will often ask, “how are other people answering this question?” These types of questions may express legitimate concern about making adequate comparisons across information in the planning process. If business owners suspect that others might be inflating their answers at their political expense, they may make statements like “We try to be as realistic as possible when answering these questions” or “We don’t just tell you what we think you want to hear.” These kinds of statements are a red flag that the planning process is not adequately calibrated. Usually, there is fire where this kind of smoke exists, so calibration should become a major concern.
For obvious reasons, data which is not accurately calibrated can lead to skewed investment decisions. Many times, decision makers will try to remedy this lack of calibration by compensating for the skew themselves, inherently discounting some data, trusting other data, and even inflating data which they think may be too conservative (yes, this happens too). When decision makers inject themselves into the planning process in this capacity, the result is generally little better than “decision by instinct” – significantly undercutting the purpose of planning in the first place.
When data isn’t adequately calibrated, teams that have painted a rosier scenario for their projects (whether intentionally or not) generally receive an inequitable share of the invested capital. Collaboration between groups can then break down due to political resentment, pressure to meet an impossibly optimistic scenario increases, and targets and plans are often revised. The result can be a capital whipsaw which reallocates resources mid-stream. In this case, generally no one makes their original forecasted plan.
Calibrating data is not always easy to achieve, but a company that makes a commitment in this area can make it happen. When it does, confidence and participation in the planning process itself improves. There are at least four best practices worth mentioning.
The first is information sharing. While the data itself is often confidential, groups that share how they made their calculations can help each other improve their processes. Planners can help coordinate this sharing by identifying best practices in forecasting and scheduling sharing sessions.
Another best practice involves guidance. Planners are generally in a good position to provide guidance for how questions should be answered. For example, planners can publish guidance for a scoring tactic such as: “if X occurs, would you expect your forecast revenue to be a) 3% higher or more; b) 1-3% higher; c) about the same; d) 1-3% lower; or e) 3% lower or more?” When objective criteria are provided to all the constituents of the planning process, planners dramatically increase the odds that answers will be calibrated.
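Banded guidance like the example above can be made mechanical, so that every group classifies a forecast delta the same way. A minimal sketch, assuming deltas are expressed as fractions and using band labels patterned on the example guidance (the boundary treatment here is an assumption):

```python
# Hypothetical calibration helper: map a forecast delta (as a fraction)
# onto published answer bands, so every group scores the same way.

def score_band(delta):
    """Classify a revenue delta against the published guidance bands."""
    if delta >= 0.03:
        return "a) 3% higher or more"
    elif delta >= 0.01:
        return "b) 1-3% higher"
    elif delta > -0.01:
        return "c) about the same"
    elif delta > -0.03:
        return "d) 1-3% lower"
    else:
        return "e) 3% lower or more"

print(score_band(0.02))    # b) 1-3% higher
print(score_band(-0.005))  # c) about the same
```

Publishing the cutoffs explicitly, rather than leaving “higher” or “lower” to interpretation, removes one common source of miscalibration.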
Another best practice is to automate the planning process. This draws on a very human instinct: for some reason, Excel templates will generally draw out a broader range of answers than a centralized repository will. Central repositories tend to make us think more formally, in a more structured and rigid way, while Excel templates encourage answers that are less formal and have a greater range to them. As well, planners have more immediate visibility across results with a central repository than they do with a template, and those completing the information know it. For best calibration results, use a central repository.
Finally, planners should not be afraid to question responses. Sometimes the only way to truly make sure that results are properly calibrated is to talk with the business owners who provided the forecasts. In this way, it is possible to see whether the mindset and approach of those who provided the data are truly similar. Especially when data seems anomalous, planners should not be afraid to discreetly ask about the assumptions used to derive it. Sometimes, those providing the data may even have made an error which only a planner could catch. Other times, a planner may learn about an approach which could be characterized as a best practice and shared back with other groups.
The 15 Pitfalls of Long Range Planning
Common Pitfall #10 – Premature Destination
Some companies put the cart before the horse when it comes to their planning processes. Rather than solicit input from business leaders about potential funding scenarios, these companies provide specific financial guidelines to the business leaders in order to expedite the planning process. These companies typically have finance-driven planning processes with rather static portfolios. Because they tend to “play it safe” by keeping business leaders on a tight leash, they rarely rebalance investments across business units. In effect, the company “roadmaps” the destination to its business leaders at the outset of the LRP process itself.
Because these companies rarely rebalance their investments, their portfolios tend to be fairly static. Since most business leaders will choose to spend their budgets on “keep the lights on” activities, companies in this trap often have rather low “innovation” tendencies. The result is that these companies will often fall behind their competitors – especially problematic in very competitive marketplaces. These companies also foster a business climate which rewards those who do not take risks, because they become complacent in their “business as usual” approach. In the long term, these types of companies will experience deteriorating business results.
Companies who fall prey to the premature destination problem have telltale symptoms. Almost all companies start their planning process with some kind of window of guidance, but some are far too rigid. How much is too much to start with? Usually a company that issues budget constraints based on a percentage of a previous year’s spend (taking a “peanut butter” approach) across all business units is predicting the outcome of its process before it even begins. Business leaders in these environments often say things like “can we just drag and drop our plans from last year?” or may ask “is anyone getting anything different?”
At the outset of the planning process, provide guidance to business leaders which will encourage them to explore. If you have to issue some type of guidance at the outset, couch it in the form of scenarios, such as “what would you do with X% more funding, the same funding, and X% less funding?” Treat business leaders as owners in the planning process, rather than asking them to go through a finance budgeting exercise. While this may make more work for finance, the mentality will cascade throughout the business unit.
The 15 Pitfalls of Long Range Planning
Common Pitfall #9 – Accountability Decoupling
Most companies have this kind of problem: they do not track their long term forecasts against actual business results. Because few companies track the outcome of long range plans, there is little incentive to ascertain the validity of long range forecasting. Absent accountability for the data provided, “gaming” behavior is encouraged. Business leaders know that they can “hockey stick” their revenue or bookings projections for out years in order to obtain more operating expense. Since they know they will not be held accountable for future results, projecting deferred revenue carries no penalty.
Business leaders will determine immediately which metrics are used and which ones are not. Not holding business leaders accountable for their projections leads to poor discipline in portfolio decisions. Pet projects get incubated, projects are hard or impossible to kill, and company performance suffers. Usually this problem manifests itself in revenue and OPEX “misses.” Often companies will incur duplicative charges for capital items, since there is little or no incentive for different parts of a business to work together.
Companies experiencing these pitfalls will usually have business leaders whose feedback on the planning process runs the gamut of emotions. Some will want to hedge the data they provide, and will say things like “I’m not really sure about these forecasts.” Others will ask “what decisions are being made based on this data?”, which is often a way of determining whether, and to what extent, to game the system. The more astute and experienced business leaders may directly ask “how do you plan to track this information?” or complain that forecasts made by other business units are unrealistic. One of the most telling signs that the long range forecasting process has become unreliable is that decision makers no longer put faith in the data. Often they will directly state that they don’t have confidence in future outcomes or projections. In this case, decision makers will use a very limited time horizon on which to build their decisions.
Corrective action is easy to prescribe here, but may be difficult to implement. The most obvious solution is the one most often overlooked: track long term results. Most companies do not track long term business forecasts. However, tracking is not enough; companies must also reward people who project well. Usually, this means tying some portion of compensation to the ability to project accurately and to long term results. Few companies actually do this, but those that do experience better long term performance, for obvious reasons – they foster a culture and cultivate leaders with a long term vision.
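The tracking step can be surprisingly simple to start. The sketch below is one illustrative way to memorialize a long range projection and score it against actuals once those years close; the accuracy metric (mean absolute percentage error) and the sample numbers are assumptions, not a prescription.

```python
# Illustrative sketch: score a memorialized long range plan against actuals.

def forecast_accuracy(projected, actual):
    """Return mean absolute percentage error across the tracked years."""
    errors = [abs(a - p) / abs(a) for p, a in zip(projected, actual) if a != 0]
    return sum(errors) / len(errors)

# Revenue projected in a past long range plan vs. what actually happened.
projected = [100, 130, 170]   # three out years, as planned
actual    = [ 98, 110, 120]   # the same years, as reported

mape = forecast_accuracy(projected, actual)
print(f"average miss: {mape:.1%}")
```

A consistently large average miss for a given business leader is exactly the signal a compensation-linked accountability scheme would act on.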
The 15 Pitfalls of Long Range Planning
Common Pitfall #8 – Objective Obsession
Some companies get carried away with scoring. These companies spend a lot of time thinking about how to quantify almost every aspect of their business. Everything from corporate goals to department culture can be, and has been, translated into numerical values. Many companies who fall into this pitfall use numerical scoring guides as a substitute for difficult qualitative discussions. Still others ask for more data than they can possibly produce or sift through. In these cases, critical resources may be filling out forms or templates at the expense of their business productivity, while important analytical resources spend most of their time accumulating and sorting data and insufficient time actually analyzing it. Ultimately, when the data is accumulated and sorted, decision makers in these environments typically find themselves in information overload – there are simply too many numbers and scores for them to make a real business decision.
Complaints voiced within companies developing an obsession with objective criteria are often numerous and contradictory. One common passive form of response is simply to provide incomplete data. Another common question is “how are other businesses going about completing this information?” Others may indicate a concern with calibration, making remarks such as “We are being honest about our evaluation” or “I wouldn’t necessarily trust other responses, they tend to be exaggerated.” Decision makers in the planning process may passively and politely accept the data, but not provide any real indication that it was used in decision making. A more aggressive response from decision makers may question the validity of the data itself (“how did you come up with these numbers?”), or even openly refuse to accept data after a certain point, indicating that “they have enough data to make a decision.” If the company continues down this path over the long term, the scores will start to normalize to the point that they become similar enough as to no longer be useful in distinguishing between projects. This is because data providers will engage in gaming behavior and/or get the scoring and calibration guidance changed in order to gain better evaluations for their types of projects.
Ironically, objective obsession can be one of the most insidious and harmful roads a company can pursue in its planning process. In the short run, companies that over-quantify will find their lines of business deteriorating during the planning process, as business leaders spend too much of their time on planning. Instead of being a productive, light-weight exercise which feeds into proactive budget formulation, the planning process becomes an encumbrance which weighs on business leaders and drags down corporate performance very quickly. Worse, decision makers may override their own instincts on critical budget allocation decisions based on scoring models which don’t accurately reflect the state of the business – usually gut instinct would be more accurate. In this case, corporate performance will suffer in the year to come – sometimes dramatically. Finally, over the long term, a company which “sticks to its guns” in the quest to relegate almost everything to scoring models will find that its decisions become based on scores which may vary by 1% or less, ignoring the “margin of error” rule. This company will encourage gaming behavior by its business leaders by sending a signal that its leadership considers the planning process more important than actual business results. This is a strategy for going out of business, yet it’s a road that many companies continue to pursue.
Scoring isn’t bad, and a drive to data purity isn’t bad either; there are times when such information is vital to the development of a strategy. But most strengths become weaknesses when overextended, and quantification is a perfect example. When organizations start to exhibit some of the signs mentioned above, it is time to take action toward putting a solution in place. Unfortunately, it is hard to put this genie back in the bottle, for a couple of reasons. The first is that a retreat from asking for certain information risks sending a signal that the business metric associated with that information just doesn’t matter anymore. That isn’t always true: just because a quantitative metric isn’t appropriate for decision makers to use in evaluating businesses does NOT necessarily render the metric irrelevant to every part of a business. For example, the metric “lifetime customer value” may make sense as a business metric to a certain division, like a services organization. However, it may not be a metric that decision makers use to evaluate things like projects, technical spending, etc. So one key solution to an objectively obsessed planning process is to pick the metrics that matter, and then communicate clearly and openly about the ones which are no longer being tracked at the corporate level and the reasons why. It is vital that metrics exist in the context of corporate success and are not just metrics for metrics’ sake.
The 15 Pitfalls of Long Range Planning
Common Pitfall #7 – Risk Homogenization
Firms have varying strategies for dealing with the risk involved in a business. Some of the most simplistic approaches assume that risk is baked into forecasts or into the financial metrics (like the discount rate in NPV, for example). Other approaches do involve some risk analysis, but treat risk as a homogeneous factor: treating risk as a single quantity to be financially analyzed (the classic financial “risk management” approach), or asking business leaders to quantify the risk involved in their business. By treating risk as a single abstract entity, none of these approaches helps business leaders or decision makers really understand the source and composition of risk along various vectors throughout the portfolio. Reducing risk to a single score usually involves a lack of real analysis in calculating that score: either it is done centrally by persons who may not have the right level of visibility, or it is decentralized with no guidance for calibration purposes. Think about the different types of risk – competitive risk, technology risk, demand risk, execution risk, etc. Without adequate stratification of risk, there is the chance that all of a firm’s risk will be concentrated in a single type.
Firms which suffer from a unified view of risk often have difficulty calibrating risk scores, and usually question whether they have fully assessed all aspects of the programs or projects in their portfolio. The first challenge usually manifests itself in questions such as “how do I rate the risk inherent in one project versus another?,” “are we relying on self-scoring here?” or “how can we be sure everyone is answering this question the same way?” The second challenge (not fully articulating all the various types of risk) usually manifests itself in questions about the types of risk themselves. For example, business leaders may ask “how are we accounting for potential competitive pressures across the portfolio?” or “aren’t some investments more exposed than others to potential disintermediation?” or “some of these investments seem right in our wheelhouse, but aren’t some of these outside our expertise?” A truly diversified view of risk manages and measures each of these independently, and is ready to give an account of the approach taken to each of the challenges mentioned above.
Firms which do not have a comprehensive view of risk tend to have investments which are either uniformly conservative (i.e., they keep the firm from embracing enough risk), or which concentrate risk in a particular area, leaving the entire firm undiversified and exposed. For example, a company which has not taken enough risk may have a profile that emphasizes short term returns – a strategy which leaves the firm fighting each year to find projects that can bring incremental growth. These companies usually have lower rates of innovation, and may find themselves outpositioned in the market. Companies which do not adequately describe the various kinds of risk may not recognize or acknowledge that their risk is concentrated in a particular element. For example, a company that does not adequately acknowledge competitive risk may find that its portfolio selection has left the company as a whole vulnerable to competitors. Firms that do not explicitly acknowledge execution risk may end up with an unbalanced portfolio which leaves the firm stretched too thin. These companies usually cannot sustain all their investments, and end up falling short of financial performance, usually in the last quarter of the fiscal year.
Different types of risk need to be acknowledged, but they also need to be quantified in a meaningful way, and each approach needs to be calibrated. Explicitly understanding the different types of risks facing the firm means honestly thinking through all the possible elements of exposure a company may have. For example, some common elements of risk include execution risk, market/demand risk, competitive risk, technology risk, price/supply risk, price pressure risk, political instability risk, and economic risk. Not all of these risk factors apply to every firm, and this list is certainly not an exhaustive one. However, any risk factor identified needs to be defined and thought through in the way it impacts a particular project. Further, guidance must be issued on each relevant risk factor to help ensure that evaluations are similarly calibrated across business units, functions, etc. A system which automates the process of posting guidance, calibrating scores, and capturing those scores is usually vital to achieving a solution here. This approach enables the creation of risk profiles for various funding scenarios – an aggregate view of the types of risk which would be faced by a given permutation of projects. For more information on appropriate quantification and calibration tactics, please contact Agylytyx directly.
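A minimal sketch of the stratified approach follows, using risk types from the discussion above. The project names, 1–5 scores, and the simple per-type averaging are all illustrative assumptions; the point is that a funding scenario gets a risk *profile*, one number per risk vector, rather than a single homogenized score.

```python
# Hypothetical sketch of stratified risk scoring: each project is scored
# per risk type against published guidance, and a funding scenario's risk
# profile is the aggregate along each vector.

RISK_TYPES = ["competitive", "technology", "demand", "execution"]

# Per-project scores (1 = low risk, 5 = high risk), assumed pre-calibrated.
projects = {
    "alpha": {"competitive": 4, "technology": 2, "demand": 3, "execution": 1},
    "beta":  {"competitive": 4, "technology": 1, "demand": 4, "execution": 2},
    "gamma": {"competitive": 1, "technology": 4, "demand": 2, "execution": 3},
}

def risk_profile(scenario):
    """Average each risk type across the funded projects in a scenario."""
    return {
        risk: sum(projects[p][risk] for p in scenario) / len(scenario)
        for risk in RISK_TYPES
    }

# Funding alpha and beta concentrates competitive risk; adding gamma diversifies.
print(risk_profile(["alpha", "beta"]))
print(risk_profile(["alpha", "beta", "gamma"]))
```

Comparing the two printed profiles makes the concentration visible immediately, which is exactly what a single aggregated risk number would hide.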
The 15 Pitfalls of Long Range Planning
Common Pitfall #6 – Pragmatic Profiling
On its face, there is a temptation to reduce all projects to some financial metric. But expressing a project in terms of its NPV or EVA often ignores subtleties which exist in projects. For example, under a strictly financial approach, little or no consideration is given to project interdependencies, balance within a portfolio, risk profiles, forecast uncertainty, or the timing of profitability streams. The danger is that a common financial metric may result in a funding profile which is not the most desirable combination of projects for a firm.
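To make the point concrete, here is a small example with invented numbers: two hypothetical projects whose NPVs are nearly identical at a 10% discount rate, even though one returns cash steadily and the other delivers everything in the final year. A ranking by NPV alone cannot distinguish these two very different bets.

```python
# Minimal sketch: identical NPVs can hide very different timing profiles.
# Cash flows and the 10% discount rate are invented for illustration.

def npv(cash_flows, rate):
    """Net present value of year-end cash flows for years 1..n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

steady = [40, 40, 40]      # returns spread evenly over three years
hockey = [0, 0, 132.4]     # everything arrives in year three

print(round(npv(steady, 0.10), 2))  # ~99.47
print(round(npv(hockey, 0.10), 2))  # ~99.47 -- same metric, very different bet
```

The "hockey" project carries far more forecast uncertainty and timing risk, which is precisely the kind of subtlety a single-metric ranking discards.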
Firms experiencing problems with pragmatic profiling will often hear business leaders ask business-related questions about recommended funding distributions. They will often point out that funding Project A does not make sense without also funding Project X. Business leaders will often express frustration with the financial metrics around their projects and will try to introduce business concerns into the metrics – for example, in the case of NPV, they may argue that discount rates should be changed on particular projects in order to better account for lower or higher risk.
The impacts occur whenever organizations apply a uniform standard to all projects, and therefore fail to make adequate comparisons because they treat all projects the same way. Organizations which rely too heavily on financial metrics in their planning processes will usually skew their decisions over time toward the projects which show the highest financial returns, often at the expense of their ability to execute. Some of the reasons this can happen are obvious, but many are not. For example, a strictly financial/quantitative approach to project selection often leads organizations to focus on short term returns at the expense of long-term investments. They may also find that funding decisions do not take into account potential risk profiles, especially execution risks, within a company. Firms relying too heavily on a finance-driven process will usually find that the data used as input for decision making becomes less and less reliable over time. As a consequence, firms relying almost exclusively on financial data for decision making will miss their forecasts and projected guidance.
Rather than relying solely on financial data, other data important to creating decision making profiles needs to be collected. Capturing qualitative information will actually make the financial metrics more reliable: like projects will be compared to each other, and more balanced profiles will be created. A process which recognizes the need to collect such information will also instill more confidence in the planning process, as will a transparent process which emphasizes the incorporation of that qualitative information.
The 15 Pitfalls of Long Range Planning
Common Pitfall #5 – Forecast Folly
Firms often fall into the trap of assuming that confidence levels around the later years of a forecast should be weighted the same as those for the following year. Many firms ask their business leaders to make long range forecasts over which they have little or no business visibility. Relying on time value of money adjustments to discount the impact of future years is not sufficient to solve this problem, because those adjustments are still being made to point forecasts which may have little validity.
Firms experiencing folly in forecasting often hear it from their business units. Usually, the feedback comes in very specific comments such as “we don’t really know what will happen to the business beyond a year or two,” or “we didn’t know three years ago what would be happening today, so how can we predict so far into the future now?” Another common phenomenon in these situations is the tendency to project “hockey stick” business results, where most or all of the benefits of a project or set of projects occur toward the end of the forecast period. This gaming behavior is designed to couch projects as viable long-term investments with fewer short term commitments, usually because business leaders know long term forecasts are rarely tracked and measured down the road, and/or that they will have the opportunity to revise the forecast in the next annual planning cycle. Finally, many firms experiencing forecast folly will find it necessary to change their forecasts and long range plans frequently and materially.
Forecast folly is one of the most insidious pitfalls to impact a business, and it can be one of the hardest to detect because the impact may not be felt for several years. If a firm has consistently relied on single point forecasts in long range decision making, and has done so for many years – especially without a long range tracking mechanism – the company will miss earnings estimates. When “gaming” behavior is encouraged, accountability is discouraged, and the firm will also lose the ability to course correct.
Avoiding forecast folly requires a firm to take several steps. First, the uncertainty of out-year forecasts by the various business units needs to be recognized and quantified. Calibration of that uncertainty should come in the form of specific guidance (i.e., “here’s how you score the certainty of your forecast”). Decisions should be made based on these banded ranges, not on point forecasts. Second, a system of tracking needs to be put in place which memorializes and tracks the evolution of the long range plan. Of course, point forecasts will still be conducted throughout the year, but a typical solution compares plan to forecast to actual. Further, this process needs to be carried out over the long term, meaning long range plans and forecasts are memorialized each year and revisited in future years. The point of conducting this analysis is to discourage gaming behavior and reward business leaders who develop more accurate long term views of their business.
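The first step above can be sketched simply: convert point forecasts into banded ranges using a self-scored certainty level per year. The certainty scale (1 = low, 3 = high) and the band widths below are illustrative assumptions; in practice a planner would publish and calibrate them as part of the guidance.

```python
# Hypothetical sketch: turn point forecasts into banded ranges using a
# per-year certainty score, so out years carry visibly wider uncertainty.

# Wider uncertainty bands for lower certainty scores (1 = low, 3 = high).
BAND_WIDTH = {3: 0.05, 2: 0.15, 1: 0.30}

def banded_forecast(point_forecasts, certainty_scores):
    """Return (low, high) ranges for each forecast year."""
    return [
        (round(point * (1 - BAND_WIDTH[score]), 1),
         round(point * (1 + BAND_WIDTH[score]), 1))
        for point, score in zip(point_forecasts, certainty_scores)
    ]

# Out years carry lower certainty, so their ranges are wider.
ranges = banded_forecast([100, 120, 150], [3, 2, 1])
print(ranges)  # [(95.0, 105.0), (102.0, 138.0), (105.0, 195.0)]
```

Decision makers would then weigh scenarios against these ranges rather than against the single point numbers, which is the behavior the guidance is meant to encourage.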
The 15 Pitfalls of Long Range Planning
Common Pitfall #4 – Organization Usurpation
Some firms place too much planning authority in the hands of their finance group. These organizations usually have strong finance leaders who tend to speak with authority, often leading decision making discussions. When finance organizations have too much authority, decisions are typically driven by the data collected in the LRP process.
Ideally, data collected in the LRP process mirrors the requirements of the decision making process. Even when it does, having finance control the planning process can create a very negative perception in the rest of the organization. Often, business leaders will make statements like “well, we’ve done what we can, I guess the rest is up to you,” or “beyond our data input, the planning process looks like a black box here.” They may also withhold data, arriving later with “updated” numbers or providing information well beyond stated deadlines. Often, politically powerful individuals will negotiate for additional investments, quota relief, etc. outside the official planning process. These are sure signs that finance is perceived as having too much influence in the planning process.
Organizations which rely too heavily on finance for planning will usually make funding decisions which do not garner the confidence of the various business leaders within the firm. When this circumstance arises, business leaders will often not participate fully in the process, and the entire planning process will ultimately revert to a budgeting exercise. When that happens, the opportunity for corporate portfolio management is usually lost. Investment across the organization is not optimized. Over time, a company in these circumstances will find its funding decisions become less and less linked to its corporate strategic goals. Ultimately these companies will be forced to revise their investment planning processes.
Finance can and should have a crucial role in the planning process. Finance is often the right instigator, collector of data, and source of authoritative information for the planning process. With great authority comes much responsibility, and so the challenge for finance is to play these often high profile and important roles without being perceived as a usurper of the planning process. For this reason, a successful planning process organized by finance will stress collaborative concepts like training, preparation, assistance, consensus building, and transparency.
The 15 Pitfalls of Long Range Planning
Common Pitfall #3 – Manual Manipulation
Most planning processes today are driven through manual cycles. Even though historical data is often pulled from ERP systems, the forward looking data used for planning purposes is most often manually crafted into templates using Microsoft Excel. These templates are typically populated by planning elements throughout a company, and consolidated by the corporate function responsible for LRP (often FP&A).
Firms suffering from manual manipulation usually know it, although they may have become so accustomed to it that they don’t realize there is another way. Companies experiencing manual manipulation of accumulated data typically have finance persons working on data consolidation, parsing, and communication late into evenings and weekends during planning cycles. Worse, these finance personnel are often people whose time would be better spent analyzing data, not consolidating and reissuing it. Companies experiencing problems with manual data processing often find that their long range planning process requires multiple iterative cycles, often lasting several months.
Companies with manual manipulation problems typically experience a degradation of their ability to execute during the planning cycle. This is because business owners and the finance community which supports them spend most of their time manipulating models and spreadsheets, and consolidating them within their teams. This is usually a time intensive process which takes valuable cycles away from actually running the business. These types of companies are usually analysis-starved because their resources are absorbed in data consolidation, leaving little time for critical analysis of data. This means entire FP&A organizations become “big F, little P, no A” in their focus. Thus, these firms’ decisions are often based more on anecdote and instinct, and less on objective evaluation.
For most firms, manual manipulation means using Excel to manage a planning process which has long outgrown it. Most firms instinctively want to adopt automation as an alternative, but are often unclear about how to implement it successfully. Many firms fall into the trap which will be detailed in Common Pitfall #14, using whatever they have at their disposal to attempt to address the problem. Some firms adopt a hybrid approach, choosing to manage only “incremental” investment through the planning process, while allowing existing business units and functions to plan their existing budgets (given appropriate guidance, of course). This approach, while common, is in fact the worst of all possible processes. Incremental investments are almost always associated with existing budgets, resulting in misalignments between investments. Other firms, like the ostrich, bury their heads in the sand and commit to manual manipulation. These firms perpetuate the problems outlined above, and continue to make them worse.
Of all pitfalls, the solution to this one is the most obvious, and requires the most dramatic organizational change. Actually automating the long range planning process involves the implementation of a centralized database which all constituents of the LRP process can access. This database functions as a single source of truth (SSOT). This approach does not eliminate the use of Excel, but it does replace the use of Excel as an alignment or consolidation tool. Phasing in an automated solution typically involves the use of Excel templates (since that is what constituents are familiar with) which can be imported into the centralized repository. Once the data has been imported, the repository’s interface usually makes it easier to revise and change data within the tool itself. Eventually, the use of the Excel template to import data is usually completely replaced in the planning process.
One of the primary advantages of a centralized tool is the capability to expedite alignment. Usually, within matrixed organizations, access controls are put in place which allow various parts of an organization to view the submitted information which affects them. For example, in some companies the sales function should be able to see sales requirements or budget information from various theaters, but does not necessarily need to see services, engineering, or operations information. However, a theater (like North America) needs to see all information pertaining to it, including sales, marketing, engineering, services, and operations. When a centralized repository exists, changes made by the theater or the function are visible to each other, expediting the planning process.
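A minimal sketch of how such access controls might work, assuming hypothetical record fields and role shapes; real planning tools implement far richer rules:

```python
# Hypothetical rows in a centralized planning repository; field names
# and role shapes are illustrative assumptions.
RECORDS = [
    {"theater": "North America", "function": "sales", "heads": 50},
    {"theater": "North America", "function": "engineering", "heads": 80},
    {"theater": "EMEA", "function": "sales", "heads": 30},
]

def visible_to(records, role):
    """A function role sees its own rows in every theater; a theater role
    sees every function's rows within that theater."""
    kind, name = role  # e.g. ("function", "sales") or ("theater", "EMEA")
    return [r for r in records if r[kind] == name]

sales_view = visible_to(RECORDS, ("function", "sales"))      # both sales rows
na_view = visible_to(RECORDS, ("theater", "North America"))  # all NA rows
```

Because both views are filters over the same underlying records, a change made by either party shows up in the other’s view immediately, which is the alignment advantage described above.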
There is always a temptation to fall into what we will describe later as Common Pitfall #12, Moving Target: allowing infinite regression of changes by each organization (“there is no end to that”). On the other hand, consider how much more difficult the manual approach makes this problem. Automating the planning process expedites deadlines and facilitates faster alignment, freeing up finance resources to focus on analysis of the alignment rather than spending cycles facilitating it.
The 15 Pitfalls of Long Range Planning
Common Pitfall #2 – Class Warfare
Everyone is familiar with the common expression “comparing apples to oranges.” The expression is commonly used to communicate the need to compare like objects to each other, and not to compare dissimilar objects to each other. Applied to long range planning, it means to compare common priorities (or projects, or whatever the unit of measure) to each other, and not to attempt to compare dissimilar priorities to each other. Two commonly used illustrations of dissimilar comparisons involve:
priorities associated with innovation and those priorities required to “run the business”
“revenue generating” priorities and “non-revenue generating” (“keep the lights on”) priorities
Most firms tend to be good at separating these priorities into separate classes so they aren’t compared to each other.
What is surprising is that a number of firms still use the same measures to compare priorities within their “buckets.” For example, NPV may be the calculation used to evaluate non-revenue generating priorities, while a different bucket is used for revenue generating projects, yet NPV is still used to evaluate those priorities as well.
When common models and units of measure are used across buckets within an organization, two kinds of behavior are typically observed. The first is a “gaming” behavior: because the models may not be applicable to a particular class of priorities, the owners of those priorities often interpret the need to complete the data as a license to guess, and the guess will generally be more liberal than is warranted. The second is a subtle psychological discrimination that often builds in an organization. In the revenue generating versus non-revenue generating example, the revenue generating initiatives may refer to non-revenue generating initiatives as “sunk costs,” “burdens,” or “organizational taxes.” When the same units of measure are enforced, this type of subtle linguistic discrimination can run rampant. The phenomenon often leads to “class warfare”: the tendency to subtly or psychologically compare the importance of one bucket to another based on the common unit of evaluation.
There is a reason that priorities are often properly sorted into different classes: there is an inherent recognition that the priorities within one class behave differently than priorities in another. Using the same unit of evaluation for objects in different classes defeats the purpose of separating them in the first place. There should be more appropriate measures for the objects in each class; if there aren’t, the need to separate the objects should be reevaluated. Treating the objects in the same way for decision making purposes can lead to improper allocations to certain buckets within an enterprise, because there is always a tendency to aggregate the sum of the parts of each bucket. Ironically, this can result in comparison of those sums on equal footing again. This approach can actually jeopardize an enterprise’s ability to execute on any of the buckets.
Recognize that classes of investments usually deserve different measures by which to evaluate them, and resist the urge to compare one “bucket” to another. In the example of non-revenue generating versus revenue generating initiatives above, forcing non-revenue generating initiatives to calculate NPV is often tantamount to asking them to supply speculative, unreliable information about the benefits to be realized. Decisions within this group may instead need to be made based on their impact on, or necessity for, timely delivery or support of revenue generating projects, for example. Metrics about productivity or efficiency may be even more relevant to this group. EVA may be a more productive measure for revenue generating initiatives than NPV. This approach may require multiple “models” or “templates,” one for each bucket. The point is to avoid open class warfare between your priorities and buckets by measuring them in appropriate ways.
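To make the discounting intuition concrete, here is a minimal NPV sketch with hypothetical cash flows. It shows how a “hockey stick” profile swings from positive to negative as the discount rate rises, which is one reason an NPV built on speculative out-year benefits is an unreliable basis for comparing buckets:

```python
# A minimal NPV sketch; the cash flow figures are hypothetical.
def npv(cash_flows, rate):
    """Net present value; cash_flows[t] is assumed to occur at the end of
    year t+1."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# Costs up front, speculative benefits loaded into the final year.
hockey_stick = [-100.0, -20.0, 10.0, 40.0, 150.0]

npv_at_10 = npv(hockey_stick, 0.10)  # modestly positive
npv_at_20 = npv(hockey_stick, 0.20)  # negative: the out-years discount away
```

A project whose entire case rests on the final-year figure lives or dies on an assumption nobody can verify for years, which is exactly the gaming behavior described under forecast folly.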
The 15 Pitfalls of Long Range Planning
Common Pitfall #1 – Information Overload
Requesting too much information during the LRP Process is probably the most common pitfall of all. This information tends to be quantitative and is usually requested in an Excel spreadsheet or “template” in the LRP process. Those of us in the finance community are especially good at crafting intricate templates. We often want to know things like “how many headcount will be required to execute this program in a certain theater in each quarter for the next five years.”
Workers in the finance community will often push back directly and vocally. Those outside the finance community are often too diplomatic to provide direct feedback, but firms with this problem often get questions like “why do you need to know that?” or “what are you going to do with this information?” Often, the most telling sign that too much information is being requested is that the information simply won’t be provided. A company that finds itself in this situation, especially when there are too many “blanks” to manage when attempting to obtain the information, is almost certainly asking for too much detail.
Asking for too much information is the fastest way to derail an LRP Process. When the requested information isn’t collected, a firm usually has one of two choices. One is to complete the missing data at the corporate level using the rationale “well, we asked for it, they didn’t give it to us, and we told them we would fill in the data they didn’t provide.” This approach is especially tempting when the owners of the LRP process have access to the whole body of LRP data and historical information. However, putting words into the mouths of business owners is dangerous: executives from various functions or business units will disavow the information (“those aren’t my numbers”), undercutting the credibility of LRP in the budgeting and planning process. The other choice firms often make when faced with incomplete information is to revert to a least common denominator approach, essentially simplifying the process to accommodate the data they do receive. This approach usually leads to insufficient information upon which to base conclusions, because various parts of the business have “completed” the template in different ways. Either approach, or a hybrid of the two, leads to a body of information which is unreliable for decision making purposes. At best, this problem results in a lack of sufficient information for decision making; at worst, erroneous decisions may be made because the available data isn’t giving a true picture of the choices facing the enterprise.
Almost all LRP processes that suffer from Information Overload need to follow a simple rule: simplify, simplify, simplify, and then, when you think you are simple enough, simplify some more. In fact, most simplified LRP templates will make the owners of the template feel uncomfortable, because they will always feel that not enough information is being requested. Start by understanding the firm’s decision making processes (business reviews, portfolio reviews, etc.) and the inputs to those decisions. Ask only for information which is germane to those inputs – only that which is necessary to help formulate the inputs to those decisions. The answer to the question “what do you plan to do with this information?” should be self-evident. When those questions aren’t asked any longer, the firm has reached the “right” level of information request.
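One way to operationalize the “germane to decision inputs” test is to map every template field to the decision that consumes it and flag the orphans. The field names and decision mapping below are purely illustrative:

```python
# Hypothetical template audit: every requested field should feed a known
# decision input; None marks a field no decision actually consumes.
TEMPLATE_FIELDS = {
    "total_headcount": "portfolio_review",
    "program_cost": "portfolio_review",
    "expected_revenue": "business_review",
    "headcount_by_theater_by_quarter": None,  # no decision consumes this
}

def orphan_fields(fields):
    """Return requested fields that feed no decision input -- prime
    candidates to cut from the template."""
    return sorted(f for f, decision in fields.items() if decision is None)

to_cut = orphan_fields(TEMPLATE_FIELDS)
```

If the template owners cannot name the decision a field feeds, that field is the detail the pitfall warns about.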
The 15 Pitfalls of Long Range Planning
Many companies have some kind of strategic planning which is, in theory, linked to their budgeting for the following fiscal year. In practice, strategic planning is rarely coupled with budgeting. In many companies strategic plans are merely guidelines which serve as context for the budgeting process. Linking and tracking budgets against strategic plans is the hallmark of successful companies. This paper will help diagnose common issues and suggest potential solutions.
This process goes by different names in different companies. For industries which rely heavily on fixed investments, it is often referred to as “capital planning.” Companies who have little or no capital investment may refer to this exercise simply as “planning,” “strategic planning,” “budgeting,” “annual allocation” or some other term. In this paper, all these processes are referred to as “long range planning” or “LRP.”
LRP typically drives many activities which are crucial to a firm’s future. For example, annual budgeting, forecasting, and even strategic planning and portfolio management often are directly linked to LRP. As important as LRP is to an enterprise’s survival, few companies have efficient, effective LRP processes. In fact, LRP tends to be one of the processes most commonly criticized by both finance and non-finance constituents alike. While there is no “silver bullet” for improving LRP in a corporate environment, there are some immediate steps a firm can take which will set it on a solid path to an improved planning environment.
This series examines some of the common pitfalls associated with the Long Range Planning process. Many are interrelated or associated, others represent different extremes on a spectrum, but each pitfall manifests itself in some unique ways which are easy to identify. Recognizing the existence of these symptoms doesn’t necessarily mean that the problem exists, but the symptoms are warning signs – if they are present, firms should carefully consider the suggested solution. As a general rule, the suggested solutions are ones which will only improve a company’s decision making approach.
In this series, we will follow the same format for each pitfall. First, we will give a brief description of the pitfall, followed by a way to recognize the symptoms of this pitfall in your organization. Next, we will turn our attention to the impact which this particular pitfall can have if unchecked, but we will then offer a solution to help avoid this particular pitfall or head it off.
Finance Analytics Analyzed Concluded
In part one of this series we looked at what the world means by the term “governance” with respect to analytics. In part two, we introduced a completely different idea – that there are actually two governance concepts with respect to analytics. The first is the one commonly called “governance” today, which is actually what we call “data governance.” This common notion of governance is a well-known issue that most analytic vendors have “solved” by now. The second is “presentation governance,” an entirely new but very important concept. In this conclusion we will explain why presentation governance is in fact the new “governance” in analytics.
Data governance has become the “cost of admission” for analytic packages. It has gone from a “me too” feature to a “must have” feature. At this point, an analytic package which does not have functionality enabling the use of a “single source of truth” (SSOT) will not be considered by any serious enterprise. Initially, vendors who did not support the notion of data governance did not really understand the need for their package to stage and enable an SSOT, explaining to clients that they could simply mandate the use of the same data source by their users (as if clients had never thought of that!).
It seems obvious to us now that this type of solution was inadequate. Users across a company are unlikely to use the same data source. Even when they do, the data source may contain multiple tables or multiple time frames. There is also the common problem of single data sources having sufficient complexity that they are often manipulated. And even when none of these conditions is present, there is still the problem of end users developing different interpretations of the same data, then developing graphics which support that narrative.
All of these conditions still exist even if the issue of data governance is solved. Data governance solutions do not ensure that everyone will display or talk about the data with the same narrative. While getting everyone to use the same data is clearly a step in the right direction, it does not ensure that people across the company will discuss the data using the same strategic perspectives.
This does not mean that different interpretations of data cannot be valid. In fact, we have often seen them be the basis for very productive discussions. When productive strategic discussions occur, however, they take place around a common graphic representation of the data. Where we have seen strategic discussions derail, they are almost always centered around debates about the proper way to look at data, or questions about the validity of a graphic representation itself.
Too often, we have seen very well placed executives spend significant time in strategy meetings discussing analytics where the x or y axis has been significantly adjusted away from zero, or where competing analytics applied to the same data result in radically different views. This happens because everyone has an agenda for interpreting portfolio data in their favor, and the lack of uniformity in analytics makes it possible to craft the narrative first. In our last post, we even talked about how the loudest voice in the room or the prettiest picture can sway very important discussions.
It is desirable to head off situations like these. As much as possible, it helps decision makers to agree on analytics so that they can focus on the impact of various strategic options. That is where presentation governance comes in. It redefines the term “governance” to mean that the most meaningful analytics are always applied to the data. True “governance” is built into the Agylytyx Generator. To the extent that a company can control the analytics used across the company, there can be no more debate about the data or how it is displayed. That means no more chance that the prettiest picture will carry the day. The loudest voice in the room becomes irrelevant.
That means companies can now focus on what is really important. Analytics should not be controversial, and they should not be the focus of debate and discussion. Real analytic governance means that there is not only a single source of truth (SSOT) with respect to data, but an SSOT with respect to analytic output (presentation) as well, so the quest for truth can be enhanced.
And Now for Something Completely Different
In part one of this series we introduced the idea that real governance, when applied to analytics, means more than ensuring that everyone is using the same data – it also means ensuring that people are talking about that data in the same way. Part two of this series covered the fact that the term “governance” has only been applied to analytics in the past few years, and that it has been taken to mean that everyone across a company is accessing the same data (often called using a “single source of truth,” which we abbreviate as SSOT).
This application of the term “governance” occurred out of necessity. It was a great leap forward in thinking when the term was finally applied to analytics. Early analytic engines had no data governance built in, so tech-savvy users were soon downloading analytic engines and applying them to any data source, using any hierarchy or other data schema existing in a company. This led to more than one debate about the source of the data used in the creation of graphic output. The better the graphic and the greater its influence on strategic decisions, the more important these debates became.
Ultimately the term “governance” was applied to analytic output in the same way it had always been applied to the production of financial information within a company. As we noted in our previous post, this led to the creation of engines built into the analytic software which allow a company to designate “data custodians” who control the data which goes into the analytic engine used across the company.
In our experience, this “governance” approach is insufficient. We now break the application of the term “governance” into two parts: 1) what is traditionally meant by “governance,” which we now call “data governance,” and 2) a previously unheard-of concept which we call “presentation governance.”
When we encounter new things, they can be difficult to understand, since we have not heard of them before and may not even know they exist. Fortunately, it is pretty easy to see the problems caused by a lack of presentation governance. We have encountered all of these in various places. In all likelihood, you have encountered one or more of these situations:
A “new” or “novel” approach to displaying data captures the imagination of an entire executive team, leading to important strategic decisions being made.
Different analytic approaches to the same data lead to the loudest voice in the room being the owner of the analytic approach from which strategic decisions are made.
A best practice we have seen is for all the executives to agree on the analytic approaches which will be used consistently whenever strategic decisions are to be made. This means that the “constructs” (as we call them) are selected before the data is applied to them. For example, the executive team might decide on a scatter plot format which will depict the share of revenue by channel of distribution by business unit. Anyone attempting to use another format in a meeting (such as a bubble chart or trend line chart) is then invalidated in their attempt to discuss the data by virtue of not using the agreed-upon approach.
This best practice avoids a situation we have seen: the use of important strategic meetings to debate the merits of different approaches to strategic data. We have seen entire meetings completely derailed by this topic, when the executives in the room should have spent that time evaluating the actual results and deciding on a strategy based on them.
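As a rough illustration, the agreed-upon construct described above can be modeled as a small registry that validates chart types before a meeting; the analysis names and chart types here are hypothetical:

```python
# Sketch of "presentation governance": a registry of agreed-upon constructs.
# The analysis names and chart types are illustrative assumptions.
APPROVED_CONSTRUCTS = {
    "revenue_share_by_channel_by_bu": "scatter",  # the agreed format
}

def is_approved(analysis, chart_type):
    """True only if the chart type matches the agreed-upon construct."""
    return APPROVED_CONSTRUCTS.get(analysis) == chart_type

is_approved("revenue_share_by_channel_by_bu", "scatter")  # accepted
is_approved("revenue_share_by_channel_by_bu", "bubble")   # rejected
```

The point of a registry like this is not the code; it is that the debate over formats happens once, up front, rather than inside every strategy meeting.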
There is an analytic application which allows companies to control the output and format used in addition to controlling the data. We call this kind of control “presentation governance” (as opposed to what we currently call “governance” which we call “data governance”). An application environment which not only allows a company to specify the user output type but also enforces that “presentation governance” through the use of restricted built-in “constructs” is the type of application that meets real requirements for the use of analytics in decision making. That is real governance, and it is something completely different.
In part one of this series we introduced the concept of governance within the context of analytics, and we talked about why it is important. A critical assumption implicit in that piece was that data governance inherently addresses any governance issues when it comes to analytics. It does seem reasonable to assume that since the data being analyzed is governed, the analysis of that data will be subject to the same governance by the property of transference.
Debunking this erroneous point of view is the central thrust of this post. When the financial community talks about governance today, it is within the context of ensuring there is uniformity around a Single Source of Truth (SSOT). The “state of the art” in governance today goes beyond that – finance communities realize it is important to develop a common set of definitions as well.
Initial analytic products did not constrain users at all, or even enable basic governance. Even when companies could agree on the same data set to use, different approaches to the data often thwarted attempts at governance. It was possible to use these analytic products to access data and either drop out certain data sets or even to perform analysis based on definitions that resulted in disparate views of performance.
One large company, for example, maintained several different “hierarchies” – ways its offerings were organized. There were valid reasons for maintaining separate hierarchies: one organized offerings according to a “market facing” customer perspective, the other according to the way internal strategic and organizational decisions were made. While the two hierarchies used the same SSOT, and so “tied out” from a totals perspective, they often produced materially different measurements for a discrete entity, such as a business unit, geography, or channel. The result was that executives representing the interests of a particular entity would often present a very different view of that entity’s performance than the corporate executives did.
Most finance communities have now learned to head off these problems by anticipating the need to establish governance around the definitions (the “hierarchies” in the example above), so that executives across the company use not just the same data from an SSOT but also the same approach to that data. This means that not only is everyone “singing from the same songbook,” they are in fact “on the same page” and “singing the same song.”
In fact, a lot of analytic products now have a data governance approach built in. As these data governance approaches have gotten better, they allow companies to create analytics with the same data and definitions. Some products now institutionalize this approach, and allow finance users to certify not just the data but the rules for using that data. The result has been a great leap forward in applying the concept of governance to analytics.
This is the state of the art in the governance approach to analytics today. We think another leap forward awaits us. Because we have not been able to imagine the application of governance to the presentation layer, we have been content not to apply governance to this level. In our next post, we will argue that the previous governance improvements, while they are moves forward, do little to address the overarching need for uniformity in output, and that this is a requirement for true governance.
There are almost as many definitions of the word “governance” in corporations as there are credible sources on the matter; a quick online search shows that. Still, we tend to know what procedures are related to corporate governance. It is almost like the famous quote about pornography from Supreme Court Justice Potter Stewart (actually attributed to his clerk): despite being difficult to define, you “know it when you see it.”
Today governance in a corporate environment makes us think of policies and procedures which help companies adhere to regulations which they must follow and interests they must balance. That may seem vague to most of us, but it is the best way to encompass all the various aspects of things we know as governance. Companies which cast a “wide net” when they consider governance related items are the ones that tend to attract the least unwanted attention in this area. Just about everything can be, and traditionally has been, linked with the concept of governance.
An exception has been the use of governance in conjunction with corporate decision making, but this is changing. In the past, the term “governance” was underused to describe corporate decision making. Since corporate strategic decisions by definition affect the direction of a company, all the interests present in most companies (from shareholders to employees to communities) have a vested stake in their outcome. Deciding what is best for a company’s direction means all interests have to be considered. That is why governance in fact plays such a central role in corporate decisions, and why as far back as 2007 the cover story of the September issue of Strategic Finance was an article titled “Linking Governance to Strategy: The Role of the Finance Organization.”
Analytics are the heart of most companies’ corporate decision making. In setting strategy, companies frequently rely on the use of analytics to support the rationale behind those moves. Such analytics are often presented at investor conferences and road shows, and are used as the underpinning for scripts in quarterly conference calls. These visual aids have come to pervade our approach to corporate decision making.
If corporate decision making is really a governance issue, and analytics are often the key to corporate decision making, it stands to reason that the use of analytics should itself be subject to the same governance. As early as 2007 it was becoming clear that governance was central to strategy; anything that serves as an input to that strategy, such as analytics, should be governed in the same way.
Conversely, the lack of governance in an analytic approach risks a lack of governance in corporate decision making. When the production of analytics is unregulated, there is a perceived or even real lack of corporate control over the governance process related to corporate decision making. The result can be a lack of alignment between corporate governance responsibilities and corporate strategy.
This predicament actually got worse before it got better, and analytic products were often largely to blame. In fact, certain analytic approaches added fuel to this fire by fostering an unregulated “wild west” mentality, encouraging anyone who could use a business intelligence product to create analytics which could then affect corporate strategy. The problem was that some of those people were unintentionally presenting incomplete pictures of the data, or even working from inaccurate data sets to begin with. This mentality obviously circumvented any attempt to govern the process.
This phenomenon did not go unnoticed, especially by analysts covering this marketplace, and products began to spring up to address the very problem of data governance. In order to ensure that all analytics were using the same data, these products leveled the playing field for input into those analytics. Having everyone across the company use the same “single source of truth” (SSOT) in their analytics was a tremendous leap forward in the formal adoption of a governance process for analytics.
It is no longer enough to make sure everyone is using the same data. Governance in strategic decision making means that everyone will not only be using the same data, but telling the same story about it. A new governance crisis is looming in the analytic world: even though the issue of data inequality has been addressed, people are still using very different analytic approaches to describe the same data set.
Truly governing analytics means ensuring that everyone across the company shows similar analytics in the same way, so that corporate strategy can be set objectively. Lou Gerstner, who led companies ranging from American Express to RJR Nabisco to IBM, famously required that all documents be formatted the same way, down to the font size, so that he would not be biased when leading strategic conversations. The same concept applies to the governance of analytics; a leading professor from the University of Maryland has made that very point on this blog. Enforcement of that approach will be vital to lasting and successful analytic governance.
This is part one in a four-part blog post series. The next post, entitled “What the World Thinks Analytic Governance Is” will cover the commonly held belief that data governance is sufficient.
Finance Analytics Analyzed
Our last five blog posts have been dedicated to understanding how to roll out a finance-led strategic analytic process. We have closely examined the best practices of this process from its very conception through to its impact on the decision making process. To give a process like this one the best possible chance of success, planning and forethought are required.
Understanding what works and what does not can help us avoid common pitfalls in finance-led analytic programs. To help with this analysis we used a phase-based approach called the Finance-Led Process Lifecycle, a typical “2 by 2” matrix that we introduced in a previous blog post. The intersection of its two axes forms four quadrants. The x (horizontal) axis is the degree of completeness in program design, ranging from program conceptualization at one extreme to full design at the other. The y (vertical) axis is the degree of corporate involvement, ranging from not involved to highly involved.
Because this matrix was designed to analyze all finance programs, we also talked about each quadrant in the lifecycle and what it means, describing the lifecycle of any finance-led process. The arrow overlaid on the 2x2 is designed specifically to plot the track that finance-led processes should take within a company. Every finance-led process starts with the conception of a project. In this first quadrant, it is desirable not to socialize an envisioned process outside the finance community, which is why this quadrant is named “Conception.” In the second quadrant, the process begins to be socialized. While it is still relatively low on the degree-of-completeness scale, it has started a gentle trajectory to the right as it climbs sharply into the second quadrant. That quadrant is named “Collaboration,” since input from across the company is solicited in this phase and the program is still at a low enough degree of completion to incorporate feedback. The program should become more complete during this phase through input from across the company, and finance should shift the purpose of socialization from gathering feedback to building support as the program moves into the third quadrant, called “Consensus” for its consensus-building intention. As the program moves toward full design, the finance-led process begins to decline again in terms of cross-company involvement. Notice, however, that the arrow overlaid on the matrix does not “fall” quite as low on the y (vertical) axis of corporate involvement. While finance takes back more control over the process, it continues to involve others across the company in its successful execution – that is why this final phase is called “Coordination.”
We applied this matrix to an analytic process, since finance groups are often charged with producing analytics and supporting strategic decision making. First, we spoke generally about how a finance-led strategic analytic process could benefit from using this matrix in its planning. Next, we took a look at each of the four quadrants in the finance-led process matrix and at the specific implications for a process designed to support strategic analytics. In the Conception Stage post, we talked about the importance of taking into account how analytics are used to create strategic influence within a company, and how that can be made easier through self-service facilities, the automation of charts, and the necessity of import mechanisms. In the Collaboration Stage post, we talked about the need to carefully plan the scope of the input and to assess requirements for analytics, security, and data integrity. In the Consensus Stage post we focused on usability, emphasizing the need to think about how other analysts in the company might use the process to help them with their existing systems and templates – while still enabling finance to ensure that the output is standardized so that everyone talks about the data in the same way. In our last post, on the final phase in the process, the Coordination phase, we focused on how to plan a process for maximum support of decision makers, including how to make analytics better, faster, and more consistent.
We were detailed in our analysis for a reason: we have been involved in many analytic processes across many companies of many different sizes and industries and we wanted to provide as much insight as possible. We were so methodical because we felt the information needed to be summarized and organized in order to be actionable. It is very important to remember that these are not merely considerations for a finance-led analytic process at each stage of the lifecycle – these are best practices to be anticipated and even planned for from the very beginning of a strategic analytic exercise. Planning out each one of these sixteen elements (four in each phase) will result in a successful analytic program which supports and improves business decisions on an ongoing basis.
Finance Analytics Analyzed
We have been looking at the elements of success required for a finance-led strategic analytic process. We have closely examined the best practices of this process from its very conception through the socialization and consensus building stages. Once the process reaches maturity, it will be time for finance to assert its ownership and lead the company through the preparation of these strategic analytics.
Of course, finance ownership has been implicit through the entire lifecycle of the process, but the time for consensus building and input should now have clearly ended, and the process should be self-contained and ready for execution. This fact is recognized by the placement of the fourth quadrant on the matrix – remember that as a finance-led process reaches maturity, it moves from the consensus-building third quadrant into the fourth quadrant, Coordination, by “falling” from high to low on the corporate involvement axis. This is because finance is now called on to execute this corporate-wide process, and consensus building should already have been accomplished.
For finance to successfully maintain leadership of an impactful and productive strategic analytic process, four key things should receive emphasis from the finance team. The first two have to do with the generation of the analytics themselves; the last two have to do with the way the analytics are used. The best practices finance follows in each of these four areas will largely determine the success of the finance-led analytic process. As we discussed, each of the first three phases is vital to a successful outcome. However, success in those three phases is irrelevant if the process doesn’t yield useful output.
Let’s look at best practices in each of the following coordination areas:
Presentation Automation. Ensuring that presentation building is expedited by analytic processes and technologies.
Output Uniformity. Obtaining consistent and repeatable analytic formats across time periods and across the company.
Responsive Acceleration. Making sure that analytic output can be generated a lot faster than was previously possible.
Decision Support. Verifying that the analytics produced are actually being used by decision makers.
Let’s take a little closer look at each element and how a finance team can help ensure success in this coordination stage.
First, it is vital for a finance team leading a strategic analytic process to ensure that entire presentations can be produced far faster than was previously possible. We have seen a finance team “can” entire sets of graphs using a spreadsheet, such that they could simply add columns each quarter to update the graphs. Although it worked, it was not without its problems, primarily caused by teams producing inputs to the analytic process in slightly different formats. Still, the amount of time spent was considerably reduced by “canning” analytics, so the team could spend much more of their time on things like diagnostics, root cause analysis, and predictive analytics. Of course, these types of critical analyses will always require human involvement to create an effective presentation. However, software which can help consolidate and generate analytics quickly will increase the amount of time a finance team can devote to this more valuable forensic investigation. A best practice in this phase is to use software which can also be used across the company to speed consolidation and to make analytics readily available.
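The “canned” spreadsheet approach described above can be sketched in a few lines. This is an illustrative sketch only, not any particular product: the chart definitions and quarterly figures are invented, and the point is simply that adding one new quarter of data updates every chart at once.

```python
# Illustrative sketch: a "canned" chart set driven by a quarterly table.
# Adding a new quarter updates every chart definition automatically.
# All names (CANNED_CHARTS, the quarterly figures) are hypothetical.

quarterly = {
    "Q1": {"revenue": 120, "opex": 80},
    "Q2": {"revenue": 135, "opex": 82},
}

# Each canned chart is a fixed recipe applied to whatever data exists.
CANNED_CHARTS = {
    "Revenue Trend": lambda data: [(q, v["revenue"]) for q, v in data.items()],
    "Margin Trend": lambda data: [(q, v["revenue"] - v["opex"]) for q, v in data.items()],
}

def build_chart_set(data):
    """Materialize every canned chart from the current table."""
    return {title: series(data) for title, series in CANNED_CHARTS.items()}

# Close the quarter: one new "column", and every chart picks it up.
quarterly["Q3"] = {"revenue": 150, "opex": 85}
charts = build_chart_set(quarterly)
```

The time saved is exactly what the text describes: the recipes are written once, and each quarter only the data changes.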
Second, it is critical to retain consistent control over the analytic output. We have written extensively about why this is so important. In this context it comes down to a simple question of governance. It is common for vendors to talk about governance in the context of data; finance teams also need to think about it in the context of analytics. We have all heard the adage that “the squeaky wheel gets the grease.” In corporate strategic analytics, it is often the slickest-looking analytic that captures the attention of the executive team, even if the picture misrepresents the data’s story. A common criticism of some analytic packages in this area is that they essentially foster a “Wild West” mentality in a company, in which people compete to become the “squeakiest wheel” – i.e. the person with the best story from the data. In the best practice we’ve seen here, the software used across the company generated the same format and colors for all teams’ analytics, regardless of the slice of data being visualized. This method ensured that everyone across the company was always consistent.
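One way to picture this kind of output uniformity is a single house theme that every team’s charts must pass through. The sketch below is hypothetical (the theme values and the `themed_chart` function are invented for illustration): teams supply only data, never formatting, so every analytic comes out looking the same regardless of the slice.

```python
# Hypothetical sketch: one company-wide chart theme applied to every
# analytic, regardless of which slice of data is being visualized.

HOUSE_THEME = {
    "palette": ["#1f4e79", "#c55a11", "#70ad47"],
    "font": "Arial",
    "font_size": 10,
    "y_axis_format": "${:,.0f}",
}

def themed_chart(title, series, theme=HOUSE_THEME):
    """Return a chart spec: teams supply data, never formatting."""
    return {
        "title": title,
        "series": [
            {"name": name, "points": pts,
             "color": theme["palette"][i % len(theme["palette"])]}
            for i, (name, pts) in enumerate(series.items())
        ],
        "font": theme["font"],
        "font_size": theme["font_size"],
        "y_axis_format": theme["y_axis_format"],
    }

# Two business units, two data slices, identical formatting.
unit_a = themed_chart("BU A Revenue", {"Actual": [1, 2], "Plan": [2, 2]})
unit_b = themed_chart("BU B Revenue", {"Actual": [5, 4], "Plan": [4, 4]})
```

Because formatting lives in one place, nobody can become the “squeakiest wheel” by out-styling everyone else.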
Third, it is important to ensure that requests can be addressed quickly. At first blush, this may seem similar to the first best practice mentioned. While there are similarities, there are some important differences as well. We know how important it is that our teams are responsive to requests. If an analytic process does not increase the substance of responses and decrease the time it takes to deliver them, it may not be perceived to add enough value to justify its existence. Since requests for different views cannot always be anticipated (especially scenarios, the “what happens if we do X” questions), it is important that every view of the consolidated data be readily accessible for generating analytics. A best practice we have seen here accommodates applying entire analytic reports (notice that “reports” is plural) to any slice of data, and allows new sets of data to be created with the equivalent of “save as” functionality. We have even seen teams use this approach to do real-time modeling with decision makers, although we would not recommend that as a best practice – it is almost always better to demonstrate response times measured in hours. Still, we have found that most finance teams are not responsive enough when it comes to analytics. Of course, making a strategic analytic process worthwhile to decision makers means shortening response times significantly, whatever that may mean in your company.
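The “whole report on any slice” plus “save as” pattern can be sketched as follows. This is a toy illustration, not a product feature: the report definitions and data are invented, but it shows how one set of analytics can be pointed either at a baseline or at a cloned what-if data set.

```python
# Toy sketch: apply an entire report (several analytics) to any data
# slice, and clone a data set "save as"-style to model a scenario.
import copy

# Baseline data keyed by (business unit, quarter); values are invented.
base = {
    ("A", "Q1"): 100, ("A", "Q2"): 110,
    ("B", "Q1"): 200, ("B", "Q2"): 190,
}

# A "report" is a named bundle of analytics applied to whatever rows it gets.
REPORT = {
    "Total": lambda rows: sum(rows.values()),
    "Peak quarter": lambda rows: max(rows, key=rows.get)[1],
}

def run_report(data, unit):
    """Run every analytic in the report against one unit's slice."""
    rows = {k: v for k, v in data.items() if k[0] == unit}
    return {name: calc(rows) for name, calc in REPORT.items()}

# "Save as": copy the data set, then model the what-if on the copy.
scenario = copy.deepcopy(base)
scenario[("B", "Q2")] = 250  # what happens if B's Q2 comes in higher?

baseline_view = run_report(base, "B")
whatif_view = run_report(scenario, "B")  # same report, different data set
```

The same report definitions answer both the baseline question and the scenario question, which is what makes fast turnaround possible.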
Finally, it is necessary for a successful strategic analytic process to support decision making. That seems self-evident – the whole reason for a strategic analytic process is to help the business make better decisions. Yet we have often seen this deemphasized or overlooked in the rush to complete a process or to make sure folks across the company are participating. In one notable example, a finance team spent months designing a data consolidation and analytic process, only to see decision makers make a “gut feel” call anyway. In this case, decision makers were essentially expressing their lack of confidence in the analytic output by politely ignoring it. This situation could have been avoided to a large degree if the Phase II (Collaboration) and Phase III (Consensus) stages had been successfully executed. Likewise, if the three elements above are successfully completed, the likelihood of impacting decisions becomes much greater. A best practice we’ve seen here is to measure the impact of the analytic process on decision making. We have seen teams express this in terms of time involved in decision making, debate time in meetings, and scenarios supported. Whatever the key metrics for decisions are in your company, expressing a “before” and an “after” view will help make analytic coordination successful.
Finance Analytics Analyzed
This blog post is part IV in our series which applies the Finance Led Process Lifecycle in an effort to see what we can learn.
The third quadrant of the Finance Led Process Lifecycle is the “Consensus” stage. It is called consensus because the finance-led analytic process is no longer in its formative (envisioned) stages and begins to move into its useful stage. It should now be familiar to all constituents, having been introduced across the company during the previous stage (stage two – collaboration).
When deciding on a process to lead strategic analytics across a company, this quadrant is essential to formally enlisting cross-company participation. When the finance-led process becomes real rather than merely planned, it should be an outgrowth of successful collaboration. In this stage, the roles and agreements made in the collaboration stage become real. As they do, there are some vital elements which will make the consensus-building stage successful. These are the ones on which we will focus in this blog post.
The consensus phase of a strategic analytic process is about actually implementing the idea and forging agreement for formal participation in the process. Although the process is in use as the analytic program becomes real, it is important to communicate at this point that some analytic tweaks remain necessary in order to prove the process’s adaptability. As the program becomes real, consensus for the process can be forged successfully if four major steps are taken:
Template Standardization. Ensuring that all groups use the finance led process in the same way, relying on the same data sources and inputs.
System Compatibility. Making sure that all constituents of the process are able to successfully contribute based on their technologies and processes.
Analyst Usability. Ensuring that constituents of the process can easily use it within the strategic processes of their respective organizations.
Governance Enforcement. Ensuring that constituents of the process, including decision makers, are using the same analytic output and approaches to make consistent decisions.
As the finance-led strategic analytic process moves from the collaboration to the consensus stage, there are a couple of key gating factors to keep in mind. In order to make sure that groups across the company live up to the roles they agreed to play in the collaboration phase, it is essential to ensure that they can use the process as advertised. A key to achieving widespread usage is to make sure that the process is easy to engage with. To achieve that ease of use, the process must have standard templates and be compatible with existing systems.
Templates are almost inevitable in any successful finance led strategic process. When it comes to creating these templates, designing a standard that can be widely used is vital. Two “best practices” are notable here. The first is to use a “least common denominator” approach. This means incorporating only what is necessary to support strategic analytics in the company. The second is to make the template “feel like” something that is already in use in a company. This may be a commonly used type of system in the company. In many companies it is a spreadsheet metaphor. In any case, the best chance to ensure a smooth transition from collaboration to consensus stage is to have an easy to use, engaging template.
Ensuring system compatibility is also a key part of transitioning a finance-led strategic analytics process from collaboration into consensus building. Although the term “system compatibility” sounds strictly technological, it isn’t. Systems can also be processes, namely the strategic processes used by pieces of the business, be it a business unit, region, or channel. In large companies, it is not uncommon for parts of the business to have their own strategic processes. As a strategic analytic process is implemented at a “higher level” (say, a corporate level), it is important during the transition to the consensus stage that these strategic systems, and especially the analytic output they throw off, serve as inputs. Ideally, these strategic systems would embrace and use the finance-led process, or at least the analytic library, to ensure maximum compatibility. At a minimum, compatibility at the system level means that the processes are timed to interlock appropriately.
As the finance-led strategic analytic process matures, the question of analyst usability becomes vital to maintaining consensus. Some of this should already have been achieved with the measures described above. However, assessing the ability of users across the company to easily participate in the process is both an opportunity to continue building consensus and a last chance to facilitate adoption. Although the process should be well defined by now, a last push to demonstrate inclusiveness before taking control of the process for final implementation is warranted.
Once the finance led strategic analytic process has achieved system compatibility and eased adoption through the creation of standard templates and analyst usability, the consensus stage becomes more about leveraging the consensus that has been built through these processes, and not about building it any longer. As the process begins its transition back into the finance realm as a truly finance led process, there is a qualifying gate necessary to leverage a successful implementation.
The last step, governance enforcement, is about using consensus to ensure that everyone across the company will use the same analytics to talk about business strategy. We have written before about the lack of governance at the output level, and how that can lead to lying with statistics. There is a unique opportunity during the rollout of this process: once consensus has been adequately built, it can be used to get all analysts using not just the same analytics, but the same scales, colors, sizes, etc., so that everyone across the company can talk about problems in the same way. It also helps assure consistency in decision making.
The consensus phase of a finance-led strategic analytic process, then, is one which builds on the success of the collaboration stage by ensuring participation across the company. As the program begins its transition to the final stage, which we will look at in our next post, the finance-led strategic analytic process begins to use that consensus support to ensure the ultimate success of the process.
Finance Analytics Analyzed
This blog post is part III in our series which applies the Finance Led Process Lifecycle in an effort to see what we can learn.
The second quadrant of the Finance Led Process Lifecycle is the “Collaboration” stage. It is called collaboration because while the finance-led analytic process is still in the formative (envisioned) stages, it is well formed enough to begin to be “socialized” outside the finance organization. This is a critical transition point in the finance-led process, since this is the phase during which the process will be subject to criticism. It is also the phase during which there is an opportunity to lay the groundwork for the next phase of the project by obtaining the kind of buy-in which will be required to build consensus.
When deciding on a process to lead strategic analytics across a company, this quadrant represents an opportunity to hear and understand requirements from other parts of the organization. Gathering requirements in a sympathetic and methodical way will help instill confidence in the analytic process among organizations outside of finance. Identifying and even sympathizing with concerns in ways that can turn critics into champions is a much more methodical process than most people realize. There are a few simple things to plan for which can help make that happen. These are the ones on which we will focus in this blog post.
The collaboration phase of a strategic analytic process is about determining if the idea is a good one and is in fact one worth rolling out. As soon as a finance team begins to socialize an analytic process that will be corporate wide and is likely to impact corporate decision making, there are some elements to consider in the planning process. The four main potential pitfalls and the real questions to think about during this stage of the process are:
Data Integrity. Understanding all data sources currently being used across a company in order to help generate strategic analytics, and reconcile them.
Scope Determination. Establishing the extent of inclusion necessary and desirable in order to establish a strategic analytic process successfully.
Analytic Assessment. Performing a complete inventory of analytics currently used for strategic analysis within an organization.
Security Verification. Documenting security and access levels required by all constituents within the organization.
First, let’s look at the issue of data integrity. This is often the most vexing issue within large companies. Often, companies will find that the “hierarchies” used by different groups are not the same. Tying out specific data, such as program-level figures, can be difficult or even impossible under these circumstances. In the collaboration phase of introducing a strategic analytic process, transparency about these data difficulties is vital. This kind of openness may be the best hope for achieving a consensus approach to data integrity.
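A minimal sketch of what such a hierarchy tie-out might look like, with invented program codes and names: the goal is simply to surface, transparently, the entries that do not match before anyone tries to join data across the two hierarchies.

```python
# Hypothetical sketch: reconcile two groups' program hierarchies by
# surfacing entries that do not tie out. Program codes are invented.

finance_hierarchy = {"PGM-001": "Cloud", "PGM-002": "Devices", "PGM-004": "Services"}
bu_hierarchy = {"PGM-001": "Cloud", "PGM-003": "Devices", "PGM-004": "Svcs"}

def reconcile(left, right):
    """Report keys unique to each side and shared keys with mismatched names."""
    only_left = sorted(set(left) - set(right))
    only_right = sorted(set(right) - set(left))
    mismatched = sorted(k for k in set(left) & set(right) if left[k] != right[k])
    return {"only_left": only_left, "only_right": only_right,
            "mismatched_names": mismatched}

issues = reconcile(finance_hierarchy, bu_hierarchy)
# issues flags PGM-002 / PGM-003 as unmatched and PGM-004 as
# inconsistently named across the two hierarchies.
```

Publishing a report like this to all constituents is one concrete way to practice the transparency the collaboration phase calls for.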
It is important in this phase to avoid trying to ensure complete data integrity across all systems. All the elements that matter in this phase of the project are tied together: avoid the tendency to tie out data that will not be used in the process (see “scope determination” and “analytic assessment”). In other words, remember that for the success of this project, only the data needed by users to support the strategic analytic process requires data integrity.
Obtaining buy-in from the various constituents regarding data sources and output is necessary to make data integrity persistent. Without agreement from constituents across the company, issues of data integrity may crop up again later, even where they were initially and successfully addressed. This is why transparency is crucial: it ensures that the sources of data, and their application, are used consistently and persistently across the organization.
The issue of data integrity is best established by understanding the analytic requirements across an organization, so we will tackle the analytic assessment next. Although in reality these four critical elements of the collaboration phase of a finance-led strategic analytic process should run simultaneously, if we were proceeding ordinally, this would be the first element in the collaboration phase. The reason should be obvious – understanding the set of analytics used across an organization for decision making helps determine the scope of the project and identifies the data for which the project needs to ensure integrity.
Casting a wide net in the collection of strategic analytics across an organization can have beneficial and long-lasting effects. A best practice for the collaboration stage here is to identify analytic subject matter experts in various parts of the organization and solicit their analytic approaches – and that means not just each analytic but the detail around it (how it is calculated, how it is used, etc.). Next comes the creation of an online repository of these analytics which adheres to a common format, and the sharing of that repository across the organization. In this way, the analytic assessment becomes an exercise that benefits everyone across the organization, and in turn, confidence in the finance-led strategic analytic process grows.
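As a hypothetical illustration of what one entry in such a repository might look like (all field names and values below are assumptions, not a prescribed schema), a common record format is what makes the collected analytics comparable and shareable:

```python
# Illustrative sketch: one entry in a shared analytic repository.
# Field names are assumptions; the point is a consistent format per analytic.
from dataclasses import dataclass, asdict

@dataclass
class AnalyticEntry:
    name: str
    owner: str           # subject matter expert who contributed it
    business_unit: str
    calculation: str     # how the analytic is calculated
    data_sources: list   # where its inputs come from
    used_for: str        # the decision it supports

entry = AnalyticEntry(
    name="Pipeline coverage ratio",
    owner="jdoe",
    business_unit="BU A",
    calculation="open pipeline / remaining quarterly quota",
    data_sources=["CRM export", "quota plan"],
    used_for="Quarterly revenue risk review",
)

# Serialize uniformly so every contribution lands in the same shape.
repository = [asdict(entry)]
```

Because every expert fills in the same fields, the repository can be browsed and compared across the organization, which is what makes the assessment an exercise that benefits everyone.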
Determining the scope of the strategic analytic process is important to avoid over- or under-reaching. It is important to define very carefully what this process will achieve, the time parameters involved, and the roles and responsibilities of each constituent in the process. Collaborating on “who provides what, to whom, and when” is essential. Make sure that the scope of the process is carefully documented in this stage, and make sure constituents sign off on their agreement to the scope.
Finally, it is important in most organizations to ensure that collaboration on security requirements is achieved and understood. In many organizations security is paramount and so well defined that it is already well understood. The collaboration phase presents the opportunity to document any additional security requirements from other parts of the organization. If security is already well understood, the collaboration phase represents a chance to demonstrate the security and compliance of the process. Remember that in this phase, security also means access control: it is important to understand which constituents should be able to see which data, and at which points in the process.
Remember that the collaboration stage is the stage where the finance-led process is still in the idea stage but begins to be socialized outside the finance community. It is important during this stage not only to gather all requirements from across the organization, but also to communicate back to these organizations that a process is being designed in which they can have confidence and participate.
Finance Analytics Analyzed
This blog post is part II in our series which applies the Finance Led Process Lifecycle in an effort to see what we can learn.
The first quadrant of the Finance Led Process Lifecycle is the “Conception” stage. When deciding on a process to lead strategic analytics across a company, this quadrant is where the initial decision and planning about the process take place. As we proceed through a quadrant-by-quadrant analysis of the potential pitfalls an analytic process may face along the way, it is important to remember, while planning in advance for all the quadrants of the lifecycle, that even the ideation and planning process contains some potential pitfalls itself. These are the ones on which we will focus in this blog post.
The conception phase of an analytic process is about determining if the idea is a good one and is in fact one worth rolling out. As soon as a team conceives of an analytic process that will be corporate wide and is likely to impact corporate decision making, there are some elements to consider in the planning process. The four main potential pitfalls and the real questions to think about during this stage of the process are:
Strategic Influence. Understanding how analytics can be a regular and compelling way to influence strategic direction.
Chart Automation. Assessing how easy it is to produce analytics repeatably, consistently, and reliably.
Self-Service Facilities. Assessing whether analysts are able to create and edit their own analytic views on the fly, and if not, understanding what it will take to get to that state.
Import Mechanisms. Understanding how data from systems of record is being incorporated into analytic systems.
First, let’s look at the strategic influence of analytics. In the idea conception stage, the first question to ask is whether or not there is really an appetite and audience for strategic analytics. It is important at this stage not to confuse strategic analytics with analytics in general, or with what most people might call “KPIs” or “operational metrics.” If these kinds of strategic analytics are currently used in corporate decision making, the question becomes how they can be produced in a regular and compelling way so that they become systemically embedded in the company’s decision making process. If they are not being used currently, the question becomes even more difficult. If there is no appetite for this kind of strategic influence using analytics today, designing a process to produce them makes little sense. Creating a desire for the use of strategic metrics is a prerequisite to building an analytic process in the first place.
Second, let’s turn our attention to the idea of chart automation. We have previously described what we mean by chart automation in the context of analytics. Here we mean that entire presentations are essentially shells that can be instantly populated with data or slices of data, so that they can be produced on demand with current information from any perspective. Anyone seeking to introduce an analytics process successfully needs to plan for the ability to automate charts.
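To make the idea of a chart “shell” concrete, the sketch below shows one minimal way it could work. All of the names here (the `populate` function, the template fields, the sample data) are hypothetical and ours, not any particular product’s; the point is only that the template defines *what* to show, and any current slice of data can be poured into it on demand.

```python
# A chart "shell": the template defines the view, and any current data
# slice can be poured into it on demand.

def populate(template, dataset):
    """Fill a chart template with the rows matching its filter."""
    rows = [r for r in dataset
            if all(r.get(k) == v for k, v in template["filter"].items())]
    return {
        "title": template["title"],
        "series": [(r[template["x"]], r[template["y"]]) for r in rows],
    }

# A reusable shell: revenue by quarter for one business unit.
revenue_by_quarter = {
    "title": "Revenue by Quarter",
    "filter": {"unit": "A"},
    "x": "quarter",
    "y": "revenue",
}

# Illustrative data, standing in for a live feed.
data = [
    {"unit": "A", "quarter": "Q1", "revenue": 10},
    {"unit": "A", "quarter": "Q2", "revenue": 12},
    {"unit": "B", "quarter": "Q1", "revenue": 7},
]

chart = populate(revenue_by_quarter, data)
```

The same shell can be repopulated whenever the data refreshes, or repointed at unit “B” by changing only the filter, which is exactly the “instantly produced from any perspective” property described above.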
Next, let’s consider the subject of self-service facilities. Within the context of an analytic process, the term “self-service” as traditionally used has become almost meaningless. As popularly defined, “self-service” simply meant that no IT involvement was required in order to create and use analytics. As more and more vendors began to support this requirement by allowing analysts to write their own queries with ease, “self-service” became a baseline requirement. Quite a bit of perceived flexibility emerged, since analysts could do a lot within the established parameters; of course, IT still had to be called in order to widen those parameters. What we take self-service to mean today is that entire presentation templates can be created by end users without ever contacting IT. For an analytic process being introduced, this means the entire process has to be a flexible platform that all analysts can engage without contacting IT. Ever.
Finally, it behooves those planning an analytic process to ensure the process contains import mechanisms. The ability to choose from existing data sources as a baseline, and then to use those sources to create slices within the data set, is critical to any successful analytic process. This means that analysts must be able to import data from any source of truth within their enterprise, quickly and easily enough that they can perform this task themselves. Designers and planners of a successful analytic system must ensure that analysts will be able to do this on their own.
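As a minimal sketch of what an import mechanism involves, the example below reads a CSV export from a system of record and lets the analyst slice it immediately. The column names and figures are made up for illustration; in practice the source would be a file or API feed from an ERP or similar system rather than an inline string.

```python
import csv
import io

# Stand-in for an export from a system of record (e.g. an ERP extract).
# An inline string keeps the sketch self-contained; a real import would
# read a file or API response instead.
export = io.StringIO(
    "unit,quarter,headcount\n"
    "A,Q1,40\n"
    "B,Q1,25\n"
    "A,Q2,44\n"
)

# Import: parse each row of the export into a dictionary keyed by column.
records = list(csv.DictReader(export))

# Slice: once imported, the analyst can cut the data without IT help,
# e.g. total headcount for business unit A across quarters.
unit_a_total = sum(int(r["headcount"]) for r in records if r["unit"] == "A")
```

The key property is the last line: after a quick, self-service import, creating a new slice is one expression, not a ticket to IT.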
The key to successful “Conception” (design and planning) when it comes to analytic systems is to anticipate some basic requirements, namely for automated, self-service template creation platforms with data import capabilities.
Next week we’ll start to think about how the analytic process works when the idea is first broached outside a finance team, and how to plan for success in the collaboration stage.
Finance Analytics Analyzed
We can learn a lot about Strategic Analytics circulated by Finance by using the Finance Led Process Lifecycle. In order to increase the effectiveness of any finance-led process which results in analytics that affect strategic decisions at a company, we are starting a new series of blog posts. These posts will focus on ways in which finance teams can significantly improve their analytic input to decision makers.
Finance teams are often tasked with owning this type of strategy process within a company, whether it is done in the guise of “long-range planning,” “budgeting,” or any combination of these and other such processes. Any finance-owned process which will ultimately affect the strategic direction of a company can have its chances of success significantly increased by using the Finance Led Process Lifecycle.
Before we embark on this series, it is important to clarify what will and will not be covered. Operational Analytics, while important in their own right, will not be addressed. Nor will the full processes of long range planning and budgeting, which this blog has covered separately in other posts; only their analytic components will be considered. Strategic Analytics usually means something different from common terms like “Operating Metrics,” “KPIs,” or “Balanced Scorecards.” Strategic Analytics are the focus of this series.
This series applies the Finance-Led Process Lifecycle (shown at left) to understand how finance can best roll out, ensure successful adoption of, and make successful use of, strategic analytics. To understand this Lifecycle fully, it may help to review the blog post where this matrix was introduced and explained. In short, it tracks the rollout and execution of any finance-led process through its lifecycle, dividing that lifecycle into four distinct quadrants. Using those quadrants to examine the issues a finance-led process faces during each stage can help a finance group prepare for the issues bound to come up.
In this series, we will apply this Lifecycle to the process of Strategic Analytics as owned by Finance teams. We will assume that the process being adopted is an analytic one, and along the way we may also discover impacts relevant to Finance Led Strategic Processes which introduce analytics, but which may not have that as their focus.
Each blog post will cover a separate quadrant. Along the way, we will be looking at topics of relevance in the Analytic world. We will cover some very salient questions facing finance-led analytic projects today. They include the following topics:
Analyst Usability – how easy is it for analysts to use an analytic system?
Analytic Assessment – which analytics are most widely used in the company today?
Chart Automation – how easy is it to produce analytics repeatably?
Data Integrity – how reliable and consistent is the data which is relied upon to produce analytics?
Decision Support – what role do analytics play in strategic decision making?
Governance Enforcement – are all analysts using the same data when making analytics?
Import Mechanisms – how is data from systems of record being incorporated into analytic systems?
Output Uniformity - are all analysts using the same analytic approaches to similar concepts?
Presentation Automation – are the same or similar presentations made in automated methods today?
Responsive Acceleration – can an analytic process significantly improve decision making times?
Scope Determination – what constitutes and distinguishes analytics which help make strategic decisions?
Security Verification – can access to data used for analytics be controlled and limited?
Self-Service Facilities – are analysts able to create and edit their own analytic views on the fly?
System Compatibility – can analytic systems be seamlessly linked with systems of record?
Template Standardization – can analytics be treated as building blocks for entire reports and dashboards?
We have divided these questions into four quadrants, so we will need four more of these blog posts to cover them. Since this is an ambitious list of topics, we will be taking the next few weeks to get into the “high weeds” on each one. Finally, we will probably make one post which recaps the series. We hope you are as excited about reading these as we are about writing them!
Driver Based Planning Revisited
Combining the best practices identified for Driver Based Planning with our Lifecycle for Finance-Led Processes shows how to optimize such an initiative within your company. If your finance department plans to lead a Driver-Based Planning effort at your firm, designing according to these best practices can help increase your chances for a successful outcome.
Back in November, we wrote about 10 best practices for Driver Based Planning. Since then, Driver Based Planning has become a hot topic within the finance community. A recent publication by the Association for Financial Professionals entitled “AFP® GUIDE TO Driver-based Modeling and How it Works” deals with the topic in great detail. One of our principals is quoted extensively in this document.
The document lays out a very hands-on methodology for actually conducting Driver Based Planning and also contains no fewer than 10 case studies (from companies of very different sizes and industries) where Driver Based Planning was implemented to varying degrees and with varying levels of success.
The document closes with some best practices which largely mirror the best practices we put forward in our blog post back in November. Thinking about these best practices and combining them with our original list, we came up with a chart that explains the recommendations and puts them in a Lifecycle context. It became obvious to us in doing this work that the best practices which are most useful and broadly applicable are those which deal with political factors related to socialization and acceptance of Driver Based Planning, or Phase I and Phase II of our Finance-Led Process Lifecycle.
In this first application of the Finance-Led Process Lifecycle, we map the best practices for Driver-Based Planning in order to increase the chances of success. To ensure that a Driver Based Planning initiative has the best possible chance to succeed in any company, it is vital to design a process which allows for the inclusion of these best practices at the various stages of the process’s lifecycle. While previous writings have identified some best practices for Driver Based Planning, these have been lists with little explanation of how to employ them. Thinking about how to employ these best practices while designing the process itself is necessary not just for including those practices, but for using them most effectively.
What follows is a quick quadrant-by-quadrant summary of the best practices needed to support a Driver-Based Planning initiative in any company. To maximize the chances of the success of a Driver-Based Planning effort, it is best to incorporate the necessary time and steps to support these best practices when planning an initiative. For the sake of making the graphic easy to digest, we have shortened each of the best practices identified in the various literature to a couple of words.
Phase One: The “Conception” Stage
In the first stage of Driver Based Planning, the initiative itself moves from being envisioned to being more fully developed and almost ready to expose for input. In this stage, there are a few best practices that are important. For a model to succeed, the team must identify the influencing factors behind the initiative (here called “know motivations”). As well, it is important that resources within the team be identified and cultivated. Chances are good that there are innate skills on the team which can be harnessed for this particular job.
There are several best practices which are vital at this stage when it comes to initiating the build of the model itself. One of the most difficult factors facing an initiative like this one is knowing how to get started in building the model. For that reason, a recommended best practice is to start small, focusing development efforts on the easiest, known factors. Another best practice is then to grow the model through the incorporation of scenarios, which will enable the model to naturally develop the kind of depth that will prepare it well for the next phase.
Finally, the best practice of embracing robust technology is critical at this stage. This best practice should not be put off until later in the lifecycle, since the choice of technology will become harder and more complicated later on. Building the Driver Based Planning initiative on a solid technology foundation will be vital to the success of the process itself.
Phase Two: The “Collaboration” Stage
During Phase II of the Lifecycle of a Driver-Based Planning initiative, the model and process begin to be exposed to the various constituents necessary to provide input. Several best practices in the literature for Driver-Based Planning map to this phase of a process’s lifecycle. During the Phase I design process, it is vital to ensure that adequate time and procedures are built in to support the exercises associated with the best practices in this phase.
It is very important that all requirements for Driver-Based Planning are identified at this stage. Many requirements will have been “baked into” the process during the Phase I design, but since Driver Based Planning is such an intricate process involving so many players, it is a mistake to think that all requirements will already have been identified. Take the time to fully understand the nuances of each requirement, and be wary of trying to react too quickly to expressed requirements. At the same time, it is important to demonstrate a willingness to incorporate requirements by iterating the Driver Based Planning process quickly. Usually it is easier to listen to requirements and then iterate the process than it is to react to each requirement before iterating.
During this phase, the identification and cultivation of those “champions” from outside the finance organization – those who will advocate for the Driver Based Planning process – is extremely important. In order to make this happen, it is essential that the finance group is adequately resourced and that they partner closely with the requisite organization (usually operations) to cultivate those sponsors. During this stage, resourcing requirements sessions liberally (usually an ‘all-hands-on-deck’ situation) will demonstrate the group’s commitment to requirements gathering for the Driver Based Planning process. Partnering with operations to run these sessions will help maximize the chance of identifying and cultivating advocates for the Driver Based Planning process together.
Phase Three: The “Consensus” Stage
As the Driver Based Planning process becomes more complete based on a full incorporation of requirements, it begins to move into the consensus-building phase. If the Driver Based Planning process has been designed appropriately and has proceeded through phase I and II successfully, Phase III should be easier and quicker to accomplish. Still, there are some best practices for Driver Based Planning which should be incorporated during this phase of the process as well.
First, during this phase, it is important to begin by demonstrating that all requirements expressed have been met. In this way, advocates identified in the previous phase can help build consensus for the Driver Based Planning process. The necessary constituents of the Driver Based Planning process should be identified so that all stakeholders can be brought on board, and it is important in this consensus-building process that they understand and agree on what role their teams will play. In some more complicated Driver Based Planning processes, we have seen stakeholder matrices used in order to facilitate this step.
As this consensus is being built, those members of the team responsible for the actual model can build on their partnership with operations to test the model with historical data. As part of the consensus-building efforts, a model which successfully replicates historical results can help instill confidence for a successful outcome of Driver Based Planning.
Phase Four: Coordination
As a Driver Based Planning process reaches maturity, it is ready to be implemented across the company. While Phases II and III require a close partnership with operations and other personnel across the company, it is essential that in Phase IV control reverts strongly to finance, since Driver Based Planning is a finance-led activity. Successful execution of a Driver Based Planning process requires that Phases I through III be successful, and Phase IV is no time to let off the gas pedal. Since the process is ready for prime time, there are a few best practices to employ in order to ensure the Driver Based Planning process runs smoothly.
Remember not to make any more changes to the Driver Based Planning process. If Phases II and III were successful, additional changes should already have been incorporated and the Driver Based Planning process should be firing on all cylinders. Any new changes would be sure to frustrate the process for those who had already agreed to it. Finally, although it is essential to fully incorporate Driver Based Planning into any relevant process, it is a frequent mistake to assume that every process of any importance in the company must somehow be impacted by a successful Driver Based Planning initiative. In fact, success of a Driver Based Planning initiative will often result in others wanting to incorporate it into other initiatives. We have seen Driver Based Planning processes become “victims of their own success” at this stage, being employed for things other than those for which they were intended. The result is usually damaging to both initiatives. Resist the urge to overextend.
To sum it up, while there is a lot written about best practices in Driver Based Planning, it is helpful to take a methodical approach to planning out such an initiative before undertaking it. Laying out a lifecycle plan which incorporates the best practices for Driver Based Planning at each stage in the process can save a lot of headache down the road.
The Finance Process Lifecycle Quadrants
In our previous post we noted that finance is often called upon to lead corporate processes ranging from strategy to tactics. We noted that most of the processes require input from the larger business community. We also explained how this matrix came into existence. When we started to analyze best practices across finance-led initiatives, a clear picture of the lifecycle emerged naturally. We have introduced this lifecycle and put some terms around it. In our last post we looked at the vertical and horizontal axes as ways of positioning the current state of a process. In this post, we will look at the four quadrants in the lifecycle, put some names around them, and briefly discuss some of the best practices associated with each. Since the arrows indicate the relative path that a process traverses, we will examine the quadrants in order. It is important to realize that even though we discuss the four quadrants separately, the reality of a process lifecycle is that it is much more fluid as it moves from one quadrant to the next.
Phase One: Conception
This stage is formed by the “low” extremes of both the horizontal and the vertical axes. This means that the process is only being envisioned, and that it is at this point limited to finance-only involvement. There are two very important things to note here. The first is that the idea (or something very similar) for the process may have come from some place outside of finance – for example, we have seen these ideas come out of executive leadership meetings. The second is that, wherever the idea originated, the process design is being envisioned within finance. A common mistake we see is the tendency to omit or rush through this stage and move directly to cross-company involvement. Skipping this stage or attempting to move through it too quickly is a mistake. There is immense value in a finance team brainstorming amongst itself before consulting external sources. Best practices in this stage include anticipating potential contingencies, building in time to respond to unknown or unanticipated factors, staging any required data, focusing on desired outcomes, and identifying needed capabilities. Perhaps the most important best practice at this stage is ensuring that the process includes sufficient time to execute best practices as it proceeds through the other quadrants of its lifecycle.
Phase Two: Collaboration
This quadrant is the next point of natural evolution in the lifecycle of a finance-led process. It also represents processes that aren’t yet complete, but notice that they move more dramatically to the “right” in this quadrant. A goal of processes in this second stage, then, is to get them much further along the “completion” axis. Since the process has crossed over into a higher degree of corporate involvement, it is now in the stage where input is being solicited. The process has now been exposed to elements of the company outside finance for the purpose of feedback and redesign. Some of the best practices for this stage are: a high degree of responsiveness, rapid iteration, and advocate cultivation. Remember that the primary objective is to identify and incorporate requirements which the finance team may have missed. Just as proper execution of the conception phase is critical to success here, success in this phase is essential to continuing a process along the lifecycle into the third phase.
Phase Three: Consensus
The third quadrant maintains cross-company involvement, but the process should move to completion in this stage. As the previous quadrant indicated, the process is still in an active feedback period. As the process crosses into the third quadrant, however, changes to the process should become more minor and less frequent. During this phase of a project’s lifecycle, finance works largely with the advocates identified in the collaboration stage (those persons whose valuable feedback helped establish requirements) in order to establish confidence in the process among its team members. Best practices for this phase of a process lifecycle include things that are likely to inspire confidence in the process. These include testing the process, reiterating requirements and how the process meets them, and aligning the process with corporate objectives. Successfully proceeding through this phase means moving to the completion of the process design so it is ready to implement in phase four.
Phase Four: Coordination
While the model remains complete, this stage of the lifecycle falls back into the finance-only domain. This is the point where we typically get a lot of questions, such as: “why would that process seemingly ‘digress’ back into the realm of the finance community?” and “aren’t these ‘two by two’ matrices supposed to show everything moving up and to the right?” These questions deserve a clear answer.
That answer is critical to understanding why the lifecycle works the way it does. Let’s look at the first question. First, and perhaps most importantly, in Phase Four it is vital that finance exerts leadership over the process. When it is time to actually execute the finance-led process, finance must realize it is called finance-led for a reason. While the process still involves persons across the company, the need for collaboration on the process and consensus building around the process should be settled (if Phases II and III were successful). At this time, it is vital for finance to assume leadership of its process across the company. Second, this is why the lifecycle does not fall back to the extreme of finance-only on the vertical axis (as at the beginning of the initial conception in Phase I) but recognizes that at maturity a finance-led process will be truly finance-led.
The second question is purely rhetorical. We developed the finance-led lifecycle model for a reason: it reflects reality and matches our best practice development. It was tempting to “jury-rig” the model and the axes in order to represent the lifecycle of a project as up and to the right. Taking that measure would have distorted the model and made it less relevant to the real world.
In our next blog post, we will apply the model to a popular finance-led process, that of driver-based planning. There has been quite a bit written about best practices for driver-based planning, and we will examine how those best practices fit in the lifecycle model. We will then be able to determine what a successful driver-based planning initiative should incorporate at each stage of the lifecycle.
The Finance Process Lifecycle
Finance is often called upon to lead corporate processes. These processes can run the gamut from strategy input (like long range planning and budgeting) to specific processes (like rolling forecasts). Such processes are rarely insulated within finance and require larger input from the business community. As such, working together with business partners is an important part of the responsibility for the finance organization.
Understanding when and how to engage with these business partners is key to the success of any finance-led, cross-company process. There clearly are times when finance needs to communicate within itself – for example to design the process successfully. When finance does communicate externally with business partners, it is still important for the finance team to communicate within itself too, so that it can put forward a unified front.
We have seen many cases where finance-led processes worked well, and some where they were less successful. Many companies have applied best practice analysis to various specific finance-led processes, but we aren’t aware of any methodology which attempts to explain these best practices in a way which applies to all finance-led initiatives. When we began to map these best practices, we noticed that, across all the finance-led processes we have encountered, they tend to correspond to the (for lack of a better term) “stage” in which a process falls. The result was a very clear picture of a lifecycle.
In order to understand the evolution of a finance-led process, we are introducing a lifecycle diagram. In order to fully understand this diagram, it is necessary to first understand the elements used to plot the process on both the horizontal axis and the vertical axis. Then, the quadrants which make up the lifecycle can be understood. We will deal with the axes in this post, and save our discussion of the quadrants and resulting lifecycle for the next.
The vertical axis represents the continuum by which a process is exposed within a company. At one extreme (the “lowest” end of the continuum) a process is not communicated outside of the immediate finance team responsible for that process. At the other extreme (the “upper” end of the continuum) the process is fully exposed to all relevant parties across a company.
The horizontal axis plots the degree of completion of a particular process. At one extreme (the “left” end) the process is just envisioned and not even really defined yet. At the other extreme (the “right” end of the continuum), the process is fully implemented and in operation.
By using these axes together, it is possible to plot the current position of any finance-led process. By charting the evolution of finance-led processes, it is possible to understand and plan their lifecycle. Further, by understanding the best practices relative to any given process at any given stage, it is possible to optimize the prospects for success for any finance-led process.
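The plotting idea can be sketched in a few lines. The thresholds, scores, and quadrant names below are our own shorthand for the lifecycle described here, not a formal scoring system; the sketch simply shows how a process’s position on the two axes determines its quadrant.

```python
def quadrant(exposure, completion):
    """Map a process's position (each axis scored 0.0 to 1.0) to its lifecycle quadrant.

    exposure:   0.0 = finance-only, 1.0 = fully exposed across the company
    completion: 0.0 = just envisioned, 1.0 = fully implemented
    """
    if exposure < 0.5:
        # Low exposure: either just envisioned (Conception) or mature and
        # back under firm finance leadership (Coordination).
        return "Conception" if completion < 0.5 else "Coordination"
    # High exposure: gathering input (Collaboration) or being finalized
    # with stakeholders (Consensus).
    return "Collaboration" if completion < 0.5 else "Consensus"

# A process that is partly designed but still internal to finance:
stage = quadrant(exposure=0.2, completion=0.3)  # -> "Conception"
```

Note how the mapping captures the path described in the next section: up from Conception to Collaboration, over to Consensus, and back down to Coordination rather than simply “up and to the right.”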
In our next post, we will look at the four quadrants formed by this two-by-two matrix, examining the meaning of each quadrant, which we have labelled (starting in the lower left hand quadrant and working up, over, and then back down) Conception, Collaboration, Consensus, and Coordination. Understanding the meaning of each quadrant will help explain the stages of a finance-led program, as well as the potential pitfalls facing a finance-led project at each stage of its evolution.
Analyzing Small Clinical Trials
When it comes to clinical trials and data analysis, size matters. By definition, even a “smaller” clinical trial still contains a statistically significant sample size. The challenges in analyzing such a dataset are different from those faced by clinical trials with much greater sample sizes. Specific statistical rules usually apply.
Quite a bit has been written on the subject already, and there was even a course devoted to the topic in 2012. Even though much of this writing is more recent, we think the authoritative work on the subject is now over a decade old: a 2001 book called Small Clinical Trials: Issues and Challenges, published by the National Academy Press, which documents a study conducted jointly by the Institute of Medicine, the National Academy of Sciences, the National Academy of Engineering, and the National Research Council. The long list of contributors, editors, and reviewers behind this book is impressive.
It is not uncommon in smaller clinical trials to conduct “rolling trials” which analyze early results as a way to increase focus on certain participant populations in later studies. For these trials, it is vital that information be analyzed and statistically significant results be uncovered as early as possible. Using this strategy can often increase the validity, efficacy, and persuasiveness of results. As the book cited above notes (p. 82), “…combining data from various studies to obtain a common estimate can increase the statistical power for the discovery of treatment efficacy and can increase the precision of the estimate.”
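The quoted point about combining studies can be illustrated with a standard fixed-effect (inverse-variance) pooling, one common way such a common estimate is computed. The effect sizes and standard errors below are made-up numbers for illustration, not data from any actual trial:

```python
import math

def pooled_estimate(effects, std_errors):
    """Fixed-effect (inverse-variance) pooling of per-study treatment effects.

    Each study is weighted by 1/SE^2, so more precise studies count for
    more. The pooled standard error shrinks as studies are combined,
    which is the gain in statistical power and precision the book describes.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three small studies, each underpowered on its own (illustrative numbers).
effects = [0.30, 0.45, 0.25]     # estimated treatment effects
std_errors = [0.20, 0.25, 0.30]  # per-study standard errors

estimate, se = pooled_estimate(effects, std_errors)
# The pooled standard error comes out smaller than any single study's,
# so the combined estimate is more precise than any one trial alone.
```

Note that fixed-effect pooling assumes the studies estimate a common underlying effect; as the discussion of heterogeneity below suggests, understanding *why* study results differ can matter as much as the aggregate number itself.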
Of course, to accomplish this successful analysis, the studies must be successfully and increasingly focused. Both that step and the final outcome of conjoint data analysis are best performed using a highly sophisticated data analysis tool. As the book cited above goes on to explain, “insight into reasons for the heterogeneity of trial results may often be as important as or even more important than producing aggregate results.” In other words, an application which helps uncover the “why” is useful, whereas a tool which merely makes charts summarizing the overall outcomes is not.
Applications like R and Excel have commonly been used to analyze data from small clinical trials. When applied correctly, these applications can be excellent for summarizing the statistical outcomes of small clinical trials. As forensic mechanisms which help uncover the reasons why particular outcomes may have been achieved, however, they are helpful only as “trial and error” tools which build one or two charts at a time as various “power pivots” are selected.
Uncovering reasons for trends which will help focus future study participant selection and summarize reasons for results (not just results themselves) requires an application built specifically to help with forensic investigations of data. The Agylytyx Generator was built specifically for forensic analysis.
Sometimes data analysis of small clinical trials can be tough, especially when there are a lot of potential variables involved. In many ways it is not an exaggeration to say that uncovering the reasons for trends can make or break the perceived success of the trial. Ultimately, there is no substitute for a human looking for the reasons why a trial outcome is what it is. The right application designed to help humans conduct the critical forensic analysis can make a big difference.
CFO Perspectives on 2015
Every year about this time it is common to look back at the last year to see if we can identify some common trends. We also try to determine whether these trends are likely to continue into the next year or were fleeting. Going through this kind of exercise helps us to identify the areas which should receive our focus in the coming year. As 2015 draws to a close, we look back at the trends of the last year using our CFO lens.
Before we launch into these trends, it is important to put these trends in the right context. Every year there is a tendency for many to be overly dramatic as they go through this exercise by saying that this year is unlike any other, or more pivotal, or more important for any number of reasons. We have resisted the urge to summarize previous years (yes, we have been doing this blog for a long time). While we don’t think there is anything magic about the 12 month period we are referring to as 2015, we do think there is something worth noticing in the trends we summarize here.
We do mention these particular trends because they represent a common thread we have noticed among most of our clients. At the same time, we recognize that each case is different. Not all companies have experienced these trends at the same time. They may have started in your company a lot earlier than 2015 and just come to a head this year. They may have started later in the year. Perhaps they have even not begun at all or have already peaked. However, they are significant and common enough to warrant some careful consideration. Since we are not immune from these forces either (especially as they relate to our consulting engagements) we will proceed to write about our observations in the first person as if they happened this year anyway.
First, this is the year we realized the importance of our role.
We started to be invited to more high-level meetings. Executives were reading our emails more carefully and forwarding them more often. Our visibility increased.
The rise of cloud based applications meant people turned to us in meetings to see how we would react. The financial implications, the legal ramifications, and the security characteristics were all items on which we were expected to have an informed opinion. At the same time, the topic was becoming pressing for us, as cloud based vendors like Workday started to enter our consideration for back office ERP applications.
The CEO and other executives started asking us to interpret financials more. They expected us to understand business strategy, and provide budgeting options which reflected strategic choices. Further, they expected us to be able to translate financial results into strategic language, explaining how our actual outcomes reflected what we set out to achieve. Even if this involved a bit of revisionist history, we were asked to be miracle workers in this capacity.
Second, this is the year we realized how limited we were.
With our increased profile came the increased pressure to perform, and the realization that our ability to deliver key insights was held back by our own limitations.
We realized that analytics are not the same thing as analysis. We weren’t getting enough strategic insight, so we started to look for ways to generate better and faster analysis. Our leadership insisted that they didn’t need another “fact pack”; they were looking to our team for key insights and analysis of trends. This perspective came to a head when we realized that the questions being asked were prompted by executives using our reports, dashboards, etc., and that these static tools were no longer enough to answer the real questions facing the business. So we committed ourselves to finding ways to help our team generate these key insights.
Third, this is the year we put buzzwords in context.
As finance organizations, we thrived on building a mystique around our own language, with complicated-sounding terms which made the business perceive we were somehow adding value.
Terms like “zero-based budgeting,” “rolling forecasting,” “balanced scorecard” (as if a scorecard should ever be “unbalanced”), “scenario planning” and many more were not new terms. In fact we have been reading about them for years. We have even used some of them in our company. However, these terms seem to have been used and debated a lot this year.
Especially in light of the trends mentioned above, we began to put these terms in the context of our strategic contribution to the business. For example “zero-based budgeting” really meant “wiping the slate clean” in order to accommodate our strategic choices. “Rolling forecasting” really meant “tracking our plan to actual outlook” in order to assess our progress toward our strategic goals. “Balanced scorecard” was replaced with “strategic tracking.” “Scenario planning” became “strategic options.”
For us, the third trend was just a manifestation of the first two. 2015 was the year we became committed to leading the discussion of which strategies were possible to achieve and what would be needed to achieve them. We also committed to spending 2016 in pursuit of ways to accomplish this objective.
E & Y CFO Digital Divide Survey Recommendations Summary – Actually Solving the Great Divide
A couple of weeks ago, we talked about a recent study of CFO’s by Ernst & Young that had, ostensibly, studied the impact of the CFO, particularly the role a CFO plays and should desire to play, in a company’s “digital” business strategy. We noted the inherent difficulties in identifying a “digital” strategy, particularly for companies that do not have a digital product. We noted how, at its core, the study really was referring to what most of us know as the common organizational misalignment between strategy and execution.
This week, we look at the recommendations from the Ernst & Young study. We focus on the recommendations which, when implemented properly, obviate the need for the other solutions. They are also the only recommendations which have a chance to really address the strategy-execution misalignment we identified as being at the heart of the survey (as we mentioned in our last post).
The study makes four recommendations, which it calls “digital priorities,” for both the CFO and CEO. All four stem from what the study called the need for CFO’s and CEO’s to communicate more completely, effectively linking up what the company can do with what it wants to do. There seem to be two threads in the study. In one thread, the study talks quite a bit about the need for CEO’s and CFO’s to work together to make strategy operational, particularly amid potentially disruptive trends such as digital business models. The link between the two underlying issues – 1) the strategy-execution gap (the natural gap between CEO and CFO thinking), and 2) the “digital divide,” where future strategic choices reflect more forward-thinking business models – is not clear in the survey. Figure One at the left depicts the first issue, the natural gap that exists between CFO and CEO thinking. It also attempts to establish a link to the Digital Divide issue by simply stating reasons the CFO should become more “digital” in their thinking.
At first glance, this generic sounding approach seems to go hand-in-glove with the study recommendations. Of the four main recommendations in the study, two of them - using analytics to measure and predict disruption and creating a governance and risk oversight framework – actually have a chance to create a solid and permanent link between issues #1 and #2 above. If implemented correctly, systemically, and continuously, analytics can embody that framework and apply it to company performance, forecasts, etc. on an ongoing basis.
In our most recent post on this topic, we introduced a graphic which expresses, accurately and in more detail, the reasons for the existing gap between finance departments (and the CFO’s who run them) and corporate strategy. In this post, we illustrate how a correctly configured analytic package with built-in risk and governance frameworks operates.
An analytics package which truly has a risk and governance framework built into it functions as a link between #1 and #2 above. Few analytic packages give a CFO this kind of control; the Agylytyx Generator does. By allowing companies to build in their own preferred risk and governance profiles, those elements become “building blocks” in the same way analytic constructs do.
The result is that users of the Agylytyx Generator get real-time application of those “constructs,” in a user-defined template, to whatever data they select (plan, actual, forecast, budget, etc.). In this way, the Agylytyx Generator is able, through user-applied frameworks, to continuously translate financial forecasts, results, budgets, actuals, plans, etc. into strategic language. The two recommendations that we cover from the Ernst & Young Digital Divide survey do not inherently link the CFO/CEO thought dichotomy with “Digital Divide” issues. But if these recommendations are implemented correctly (as Figure 2 shows), the lines of communication between finance and strategy become continuous, and there is then no “Digital Divide.”
E & Y CFO Digital Divide Survey Summary – The Strategy-Execution Gap in Disguise
While we may not agree with everything in it, the 2015 Ernst & Young sponsored study of CFO’s entitled “High-performing CFO: Driving and enabling the shift to digital. Partnering for performance” is an instructive document, most notably in its recommendations. We will cover the recommendations extensively in the second part of this two-part series; this first part focuses on the results of the study itself.
The overall context for the study was the strategic role of the CFO, how that role has strengthened over the past three years, and the area(s) in which the CFO was having the least strategic impact. The study found that issues of digital strategy were the ones where CFO’s had the least strategic impact. In one sense this was not surprising – the name of the study seems to have been selected after the study, specifically for that reason. In another sense the outcome was surprising, since CFO’s often own IT, the organization one would think is most responsible for driving what Ernst & Young calls “digital disruption” in the study summary.
The study documented something that we have been talking about here at Agylytyx for years. The Ernst & Young graphic shown in figure one explains that CFO’s have a greater strategic role than they used to have, but notes that significant obstacles remain, mostly due to their traditional finance responsibilities.
In the roughly three-quarters of companies responding where things were going well, CFO’s had great influence on corporate strategy. Not surprisingly, the things holding the CFO back from greater input on strategy were traditional finance activities such as cost cutting. CFO’s also continue to suffer from organizational and political boundaries which limit their strategic input.
Early on, we identified the reason these limitations commonly present themselves. We discuss figure two at length in another post, but two salient points put the Ernst & Young study in context. First, finance organizations led by the CFO still have setting financial goals and targets as their primary task. Second, these and other financial outputs serve as context inputs (a kind of feedback loop) to business strategy. In the absence of a continuous translation mechanism, nothing joins corporate strategy directly to financial results and plans.
The next parts of the study are heavy on both anecdotes and statistics which underscore the uncertainty of today’s market and economic climates. The study advocates for a greater understanding of the risks and opportunities posed by such an environment, especially urging CFO’s to be more proactive in assessing the impact of what it calls “the shift to digital.” The study never defines that shift specifically, instead referring in its footnotes to numerous authors and other studies which argue that the shift is taking place. The case studies cited in these sections, and later in the study, seem to involve companies that either 1) are purely or largely digital in the nature of their products or services anyway (like CNBC), or 2) are primarily traditional companies (like the Aviva Insurance Group) for which the study does not define what digital means. None of the case studies seems to identify the role the CFO actually played in the evolution of a company’s “digital” strategy.
In the final three sections, Ernst & Young begins to focus more on recommendations, so we will largely cover that in part 2 of this series. What was interesting to us is how much the recommendations actually focused on closing the gap between strategy and execution. In fact, the recommendations are sound advice for any firms that have this problem. In the same post that we link to above, we cite McKinsey Research statistics which estimate that 90-95% of companies have this problem. In fact, if one were to remove all “digital” references in the Ernst & Young study, that study would still make perfect sense.
If CFO’s really want to increase their influence on strategy then, regardless of whether there is a “digital” component or not, they will want to follow some of the recommendations in this study. In our next post, we will look very closely at one specific recommendation from the study and how that may be the key to linking CFO’s more closely to corporate strategy.
Best Practices for Driver Based Modeling
In our last post, we looked at what driver-based modeling really is, and when it can be used successfully. In this post, we focus on best practices for building driver based models. We have listed 10 best practices to increase the reliability of driver based modeling. We have divided these in three sections: what to do before you start the modeling process, what to do during the modeling process, and what to do when you have completed the modeling process. Following these steps will dramatically improve the likelihood that driver based planning will be successful at your firm.
Before you start:
1. Choose your time wisely. Plan to spend 90% of your time developing the model and 10% tweaking scenarios. Map out a proposed timeline for developing your model. Once you build a model in which you have confidence, running scenarios becomes an easy and fast process.
2. Understand requirements. How you manipulate the model and create output from it depends on decision maker requirements. Ask the decision makers and/or your customers in the business to offer as much detail as they can, before you start modeling, about the relevant scenarios they plan to consider.
3. Build consensus. There is often consensus around the need to do driver based modeling of the business – this kind of scenario planning is hard to argue against. That kind of consensus helps, but real agreement needs to go further. Capitalize on it to make sure there is also agreement that your team is the right one to do this activity, and on the proposed timeline involved. This step is vital, since political “losers” in a scenario are likely to attack the credibility of the exercise.
During the modeling process:
4. Focus on the things you can control. It does no good to build business drivers into a model which your company can’t do anything about. That doesn’t mean all inputs have to be controllable, though. Do not confuse variables with drivers: just because something is a variable in your business model does not mean it is a driver. All drivers are variables by definition, but not all variables are drivers.
5. Establish redundancy. Have more than one person who understands the model completely. This seems obvious, but there continue to be cases where only one person created a model and so only one person can support it. Beyond support, there are plenty of other good reasons for redundancy.
6. Embrace trial and error. Good models take time to build, and they are iterative. You almost certainly won’t choose all the correct variables and the right sensitivity levels the first time. Allow yourself the time and latitude to make major structural changes in the model. The better understood requirements are, the less time this will take.
7. Check in on requirements frequently. Understanding of requirements may evolve, so as you build the model it makes sense to find out whether any changes have occurred. The more changes you can accommodate while you are still building the model, the easier you will find the validation, vetting, and usage process.
After you have completed:
8. Use historical data to vet the model. If a driver based model really describes a business, and the right drivers have been identified with the right settings, it should come close to replicating actual historical results. This step will also help you build political consensus.
9. Avoid making structural changes. Don’t second guess the model after it is completed. If you have followed best practices listed above, the output won’t lie. Avoid the tendency to make any major structural changes in the model, or you risk derailing your timeline and unravelling the consensus you have worked so hard to build.
10. Don’t overextend. A successful driver based modeling effort will naturally lead people to the conclusion that the same model can and should be used for other purposes as well. Although it may be tempting to make some “small tweaks” in the model in order to use it for other purposes, resist that urge. Instead, consider the steps outlined here and start fresh with a new model built specifically for the purpose requested. There may very well be reusable components from the original model, but the requirements assessment should uncover that fact.
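Best practice #8 above amounts to a simple backtest: replay historical driver settings through the finished model and flag any period where the output misses actual results by more than a tolerance. The model function and figures below are hypothetical placeholders for whatever model you have built:

```python
# Vet a driver-based model against history (illustrative model and numbers).

def model_revenue(drivers):
    # Stand-in for the real model under test.
    return drivers["heads"] * drivers["units_per_head"] * drivers["price"]

history = [
    {"drivers": {"heads": 18, "units_per_head": 480, "price": 39.0},
     "actual": 340_000.0},
    {"drivers": {"heads": 20, "units_per_head": 500, "price": 40.0},
     "actual": 405_000.0},
]

def vet(history, tolerance=0.05):
    """Return (period, error) pairs where the model misses actuals
    by more than the relative tolerance."""
    misses = []
    for period in history:
        predicted = model_revenue(period["drivers"])
        error = abs(predicted - period["actual"]) / period["actual"]
        if error > tolerance:
            misses.append((period, error))
    return misses

print(vet(history))  # an empty list means the model replicates history
```

A clean vetting run like this is also useful politically: it is hard to attack a model that demonstrably reproduces results everyone has already accepted.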
When Driver Based Modeling Could Work (or Not)
A driver based model, simply put, allows users to easily create scenarios based on changing key assumptions about the things that matter to your company. The output of such “what-if” scenarios is usually expressed in financials.
Driver based modeling can be overused – it is not always applicable and can be overextended. In this post and our next post, we provide specific guidance to help understand what driver based modeling really is and how to do it successfully.
To develop such a model, it is necessary to understand 1) what the variables are, 2) how the variables impact each other and the rest of the financials, 3) the difference between variables and drivers, and 4) how the variables’ behavior may change over time.
To understand the limitations of driver based modeling, it is important to understand what it isn’t.
Driver based modeling is not synonymous with sensitivity analysis. A successful driver based model must have sensitivity to its variables understood and incorporated. In other words, sensitivity analysis is not an output of the model; it is a prerequisite to building the model itself.
Driver based modeling is not an efficient frontier technique. To make an efficient frontier requires creating all possible scenarios. That is not what driver based models output.
Similarly, driver based modeling is not an optimizer. That is also not what driver based models output. The optimal solution is not always executable. Successful driver based models plan scenarios around what is possible to achieve.
The very concept can be overused. Driver based planning is most applicable in long range planning or other resource allocation exercises. As a general rule the more complexity which exists around these decisions the more useful driver based modeling becomes.
It is not applicable as a predictor of business outcomes for a specific time period. For example, driver based modeling is not appropriate for companies to formulate guidance for investors, predict EVA, calculate expected dividend payments, understand likely treasury yields, etc.
Driver based planning can also become too complex to be useful. It is a good idea to limit the number of scenarios under consideration. It is also a good idea to limit the number of drivers which an end user can change.
It is important to understand the difference between variables and drivers. The difference lies in what a business can control and what it can’t. Drivers should be things which are under the control of a business; they are the subset of variables which can be changed by those seeking to understand the impact of a scenario. Variables which are not drivers are things which are not under the control of a business but which may fluctuate as well. It is important to separate those variables into a different “panel” of the driver based model so that they can be changed as needed but also held constant across scenarios when needed.
A simple illustration helps explain the difference between a variable and a driver. Interest rates may be a variable in a model. They may affect a business, but they should not be a driver, because your company cannot control them. Price could be an example of a driver: the ability to change price assumptions may have a large impact on a business model, it is within your company’s control, and it may fluctuate enough to be a driver.
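The separation of drivers from other variables described above can be sketched in code. This is a toy illustration, assuming made-up driver names, cost assumptions, and numbers rather than any real business model:

```python
# A minimal driver-based model sketch (all names and numbers illustrative).
# Drivers are inputs the business controls; exogenous variables sit in a
# separate "panel" so they can be held constant across scenarios.

def run_scenario(drivers, exogenous):
    """Translate driver settings into a simple financial outcome."""
    units = drivers["sales_heads"] * drivers["units_per_head"]
    revenue = units * drivers["price"]
    # Sensitivity to an exogenous variable the company cannot control:
    # higher interest rates raise the cost of carrying inventory.
    carrying_cost = units * 2.0 * (1 + exogenous["interest_rate"])
    opex = drivers["sales_heads"] * 100_000 + carrying_cost
    return {"revenue": revenue, "opex": opex, "margin": revenue - opex}

# The exogenous panel is held constant while drivers change per scenario.
exogenous = {"interest_rate": 0.05}

baseline = run_scenario(
    {"sales_heads": 20, "units_per_head": 500, "price": 400.0}, exogenous)
price_up = run_scenario(
    {"sales_heads": 20, "units_per_head": 500, "price": 440.0}, exogenous)

print(baseline["margin"], price_up["margin"])
```

Because interest rates live in a separate panel, the two scenarios differ only in the driver the business actually controls (price), which is exactly the comparison a decision maker wants to see.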
In this post we have examined what driver based planning is and when to employ it. In our next post, we will examine some best practices for driver based planning.
Why Finance Should Want to Own Strategic Analytics
Our last post noted that the question of who should own data and analytics has been a popular one lately. Several posts on finance-related blogs and LinkedIn groups have focused on this question recently.
In that previous post we also noted the critical distinction between operational and strategic analytics in most firms, and that it was not desirable for finance to take ownership of operational analytics. In this post we turn our attention to the desirability of finance taking ownership of strategic analytic support.
In many companies, strategic metrics often focus on the same topics as operating metrics. Consider three examples from companies in very different industries; many others could be cited:
Professional services companies may make tactical operating decisions regarding bench strength and utilization of resources, but the strategic decisions in the same companies aim at maximizing those same metrics over the long term.
Retail companies may rely heavily on analyzing the day to day patterns of product sales across their website for operational decisions about pricing, ordering, and discounting patterns. These very same companies make strategic decisions about acquisitions and new product investment based on an abstraction of this information.
Manufacturing companies which analyze their production and distribution patterns in order to make short term operating decisions about raw material inputs, inventory, and means of distribution may rely on analytics around margin analysis and channels in order to make long term decisions.
Many companies which struggle with strategic decisions have well understood short term operating decision making processes. In fact, if a company does not have short term operating decision making in place, there is little point in making strategic decisions, since the company will not be competitive in the first place.
On the other hand, if a company makes strong operating decisions without the ability to make equally strong long term strategic decisions, it will become less competitive over time. One study famously suggested that failing to make effective strategic decisions over time results in a 40% reduction in shareholder value.
There are good reasons why finance involvement in the operating metrics of a company is not desirable, and may even be deleterious; these reasons are documented in our previous post. There is an even better reason why finance should not want to be involved in operating metrics, and it has to do with time allocation.
Even if finance were a strong partner in operating metric analysis, this is rarely a good use of finance’s time. All things being equal, finance should be dedicated to analyzing operating metrics and performance information for one purpose: helping the company better understand its strategic decisions.
Consider the examples provided above. The operating decisions are the business of these companies day in and day out, and the persons who run the business must be expert at quickly analyzing these metrics to help executives make the daily and weekly decisions affecting them. But in each of these examples, strategic decisions are vital to the long term viability of the company.
More importantly, in each of the examples, participation by finance is vital to well-informed strategic decisions. Only finance will typically have the insight and data required to support critical investment decisions regarding product mix and channel mix. Finance is typically best suited, from a skillset perspective, to assess the impact of prospective decisions and choices. Finance will therefore have the best view of the analytics required to inform portfolio evolution decisions regarding bottom line margin analysis.
Finance can produce analytics to support granular operating decisions. But the time finance has, coupled with its unique position within a company, makes it best suited to support strategic decisions. Finance should care a lot about owning strategic analytic support.
Why Finance Should Not Want to Own Operating Analytics
The question of who should own data and analytics has been a popular one lately. Several posts on finance-related blogs and LinkedIn groups have focused on this question recently. Judging from the heavier-than-usual volume on these threads, perspectives on the answer are pretty broad.
The most enlightened answers to this question tend to be the most idealistic – they generally focus on the fact that the notion of “ownership” is an outdated one which shouldn’t be relevant. These perspectives are probably correct in a vacuum. The fact remains that in all companies someone must maintain the single source of truth, and one group is usually looked to in order to interpret that single source of truth.
In real-world corporate environments, the answer to this question varies greatly from company to company, so perspectives tend to cluster along industry lines. The tendency to think about data ownership from our own point of view is very human. The man with a hammer thinks everything is a nail.
Across all company sizes, types, and industries, the difference between operating and strategic metrics is a useful one when addressing this question. The dividing line between these types of metrics is not always clear, and in most companies operating metrics are actually more important to the firm’s short-term survival than strategic ones.
The difference between operating metrics and strategic metrics should not be confused with the importance of decisions in a company. In many companies, the executives involved in strategic decision making are the same executives who make daily decisions, based on operating metrics, which will define the way the company does business in the next week or even the next day. Still, ownership of the data and analytic support for these two different types of metrics has to rest with different teams, almost by definition.
Just a few examples of operating metrics in various companies come from customer service, website performance, project management, clinical trials, and industrial machine manufacturing. All these processes produce lots of data and are potentially very important for their respective companies, but they do not produce the kind of strategic metrics vital to the long-term health of a company. Suggesting that finance should “own” the data and analytics for these metrics is clearly inappropriate.
In fact, in many of these cases, it actually would hurt a company's ability to respond and to do business if finance were to "own" the operating analytic process instead of the relevant business function working directly with executives to understand, evaluate, and support these critical decisions. For a finance team to interject itself into this process at best represents an unacceptable delay – at worst it may actually distort decision making since finance may not have as deep an understanding as those business persons closest to the process.
There is a very important case – strategic decisions - where it is appropriate for finance to “own” data and analytics. We will consider that case in our next post.
Analytic Portals for Customers
Many companies provide, or wish they could provide, data externally to their clients. We have run into several situations where this is happening or being conceptualized at various levels. In one case a company sells data to its customers today but delivers it in a Microsoft Access™ database. In another, a company sells data to its customers but delivers it as PDF reports attached to emails. In yet another, a company has accumulated a lot of data – arguably the most in the world in its particular industry – but has not figured out a way to monetize that information. In all these cases, the sale and delivery of the information is a lot harder than it should be.
Selling and distributing data to customers over the internet does not have to be difficult. Still, there are several challenges that many firms encounter today. One challenge is simple inertia: a company may be so invested in providing data another way that it resists streamlining efforts it doesn’t understand. Another is perceived logistical complexity: attempting to make that jump today requires cobbling together bits and pieces of different technologies.
Certainly the idea is appealing. Provisioning customer interfaces from a single backend infrastructure would be easy if it were systemically possible and reasonably priced. If the backend could be refreshed with automated data updates, such that clients experience those updates in real time, this delivery method would improve the efficiency of data delivery and probably make it more appealing, even to those companies who have not yet determined a way to monetize their data.
The single largest impediment to quickly and easily syndicating data for internet sales and distribution is probably the lack of a turnkey syndication system. Creating a method for storing and updating the data is not easy, but there are content management and database technologies that are pretty strong at that task. There are certainly portal-creation technologies that allow provisioning of “instances” for users and even provide browser-based authentication which restricts access to each environment.
To make syndication easy, a system must allow a company to provision a portal for an end user and also define which data will be accessible to that end user through the portal. To date, there has not been a system which combines both the data storage/updating/access control capability and the portal creation and authentication capability. The combination of that functionality is what is required for a company to really be competitive in the data sales and distribution business.
To succeed, this system will require administration by business users, rather than IT users. For example, a consultant attempting to deliver an order to a customer should have the ability to provision, add users, add data levels, etc. without having to put in a request to IT. Leaving IT in control of these items does not scale – IT becomes a bottleneck as the company attempts to grow.
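As a sketch only (not a description of how any particular product, including the Agylytyx Generator, is implemented), the business-user provisioning just described might be modeled like this; every class, method, and field name here is hypothetical:

```python
# Business-user portal provisioning with per-portal data-scope control.
# All names are hypothetical; authentication is reduced to a membership check.

class Portal:
    def __init__(self, customer):
        self.customer = customer
        self.users = set()        # end users allowed into this portal
        self.data_scopes = set()  # data levels this customer has purchased

    def add_user(self, email):
        self.users.add(email)

    def grant(self, scope):
        self.data_scopes.add(scope)

    def can_access(self, email, scope):
        # A real system would authenticate in the browser; here we only
        # check membership and the purchased data scope.
        return email in self.users and scope in self.data_scopes

# A consultant fulfilling an order provisions the portal directly, no IT ticket:
portal = Portal("Acme Insurance")
portal.add_user("analyst@acme.example")
portal.grant("claims-2015-summary")

print(portal.can_access("analyst@acme.example", "claims-2015-summary"))  # True
print(portal.can_access("analyst@acme.example", "claims-raw"))           # False
```

The point of the sketch is that provisioning, user management, and data-scope grants are all single business-user actions on one object, which is what keeps IT out of the critical path.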
There are numerous benefits to creating a quick and easy way to syndicate data to customers online. In addition to being a superior way to build strong customer relationships which improve retention rates, online management consulting opportunities and improved data resale rates increase transaction sizes and lifetime customer value. Finally, the ease of updating and delivering data through such a portal can dramatically reduce the expense required.
The Agylytyx Generator is a turnkey way to create a consolidated backend infrastructure which a business person can use to define and create analytic portals of high value for your customers.
Report Automation Means Applying Reports to Anything
We all create reports. We have in the past, and we will again. We use different tools to make that happen. Many of us use Microsoft Excel to generate our reports. We may also use another report writer such as Crystal Reports, or Cognos with TM1. Some cloud products like Host Analytics and Adaptive Insights have standard reports built in. We may even achieve some degree of report automation – by using repeatable OLAP queries, designing standardizations using PowerPivot, or saving custom reports in other applications.
Companies used to generate too many reports, but most seem to have found a good balance between reporting and analysis. There was a time when many companies were guilty of over-reporting. In fact, one high-profile consulting group famously recommended that a team stop producing reports altogether, then create only the ones users asked for or noticed were missing. We think the pendulum in most companies has swung back to the middle. Most regularly generated reports do seem to support decision making and analysis.
Creating these reports may be a normal or even frequent task for many of us, but it is rarely, if ever, the bulk of our jobs. Most of us are called upon to perform ad hoc analysis as well. We are expected to generate certain reports, but we typically have other significant responsibilities, usually related to this ad hoc analysis. The faster and more effective we are at creating reports, the more time we have for these other tasks, which typically involve elements of analysis as well. We often wish we could simply apply our reports to whatever data set we happen to be analyzing.
There may be any number of reasons we can’t apply a report to a data set we are using for ad hoc analysis. If we use PowerPivot, the data may not be in a format we have already standardized. If we use system-generated reports, the information may come from a different system than the one in which we built our reports. Often the report format needs customization to analytically support the data set we are looking at – many times it is easier to build a new report from scratch than to repurpose an old one. The net is that our standard report formats don’t always lend themselves well to ad hoc analysis.
Real report automation means being able to apply any report format we have created to any data set, so that we can accelerate our ad hoc analysis. When we are asked a question that requires us to build a model, some charts, a presentation, scenarios (or any combination of these), we are usually using custom-built data sets. There is a reason we save reports and why people think they are valuable – they usually contain key information we use to make decisions. We should be able to quickly and easily apply any report to any slice of data (any scenario we’ve created, any model we’ve built) to help us in our ad hoc analysis. If your team can’t do this today, they aren’t using the Agylytyx Generator and they should be. Contact us today for a free demonstration of how reports can be applied to scenarios or models.
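Conceptually, “apply any report to any data set” is straightforward if a saved report is treated as a list of chart specifications rather than a fixed artifact. The sketch below is our own illustration of that idea, with made-up field names, not a description of any specific product’s internals:

```python
# Illustrative sketch: a saved report format is a list of chart specs,
# and report automation means any format can be applied to any dataset
# that carries the fields the specs ask for.

def apply_report(report, dataset):
    """Render every chart spec in `report` against `dataset`.

    report  : list of dicts like {"title": ..., "metric": ..., "by": ...}
    dataset : list of row dicts (a scenario, a model output, any slice)
    Returns one aggregated series per chart spec.
    """
    charts = []
    for spec in report:
        series = {}
        for row in dataset:
            key = row[spec["by"]]
            series[key] = series.get(key, 0) + row[spec["metric"]]
        charts.append({"title": spec["title"], "series": series})
    return charts

# The same saved format works on an ad hoc data set with no rebuilding:
quarterly_review = [
    {"title": "Revenue by region", "metric": "revenue", "by": "region"},
    {"title": "Revenue by product", "metric": "revenue", "by": "product"},
]
scenario = [
    {"region": "Asia", "product": "P1", "revenue": 10},
    {"region": "Asia", "product": "P2", "revenue": 5},
    {"region": "EMEA", "product": "P1", "revenue": 7},
]
charts = apply_report(quarterly_review, scenario)
assert charts[0]["series"] == {"Asia": 15, "EMEA": 7}
assert charts[1]["series"] == {"P1": 17, "P2": 5}
```

Because the report never hard-codes a data source, swapping in a different scenario or model output is a one-line change rather than a rebuild.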
Achieving Financial Governance through Access Control
Our last blog post ignited some controversy. That post argued that retaining access control as a business user is a much better governance strategy than “outsourcing” grants of access to an IT department. In a LinkedIn group dedicated to the finance community, comments on this perspective ranged from “finance time is better spent doing other things” to “finance does manage access controls at our company.” This concluding post on the subject illustrates how easy it is for finance to retain access control and ensure corporate governance. It focuses on how that happens – how finance departments can practically meet governance requirements by managing grants on behalf of all business users.
The issue of access controls and governance is not a new one. It did surprise us how many people actually considered governance implications when choosing to let their finance departments handle access control grants themselves. It surprised us even more that these stories came in from all over the world, and from different size organizations. One user told us her team would make Hyperion grants themselves, another user mentioned controlling access grants using Host Analytics, still another referred to skills in his finance team in Windows Active Directory and OLAP.
The granting of access to authorized users is nothing new. Sometimes these grants get complex quickly. For example, some companies have a client representative who needs to see all data pertaining to a particular client account – including cost and expense data – for all products and services, across all geographies. In a complex series of data grants, a company might grant one user all the data pertinent to revenue for a single product or service across regions and channels; another user might get access to revenue information for all products and services in all regions, but only for a particular channel of distribution. Others might have the same types of grants, but for expense information. In rare cases, a user like the General Manager of a region might have access to all revenue, cost, and expense information for that particular region only. Many products, including the ones listed above, can handle such grants.
The issue of varying levels of access control, especially grants that “stripe across” other data sets, was relatively new. The Hyperion user referred to previously seemed surprised to hear that this was possible. To effectively create grants that cross lines like the ones mentioned in the previous paragraph, a product must support the creation of dynamic datasets and link its access control strategy to the creation of those datasets. In the illustration provided here, a dataset must be created that represents all the costs, expenses, and revenues related to a client regardless of distribution channel, region of the world, or products and services ordered. Next, access to that dataset must be granted to the individual in the company who “represents” that customer. The Agylytyx Generator is the only product we know of that makes it easy to create those datasets and dynamically assign users access to them. Since all of this can be done by the business user, compliance with access control rules is assured.
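One way to picture a “striped” grant is as a predicate over dimension attributes rather than a fixed, named cube. The sketch below is our own simplified illustration of that idea, under the assumption that rows carry explicit dimension fields:

```python
# Sketch of "striped" access grants: a grant is a predicate over
# dimension attributes. A client representative sees everything for one
# account across all measures, regions, and channels; a channel manager
# sees one channel only, across all accounts.

def make_grant(**required):
    """Return a predicate: a row is visible iff every named dimension
    matches (dimensions not named in the grant are unrestricted)."""
    def allowed(row):
        return all(row.get(dim) == val for dim, val in required.items())
    return allowed

def dataset_for(rows, grant):
    """The dynamic dataset is whatever subset the grant admits."""
    return [r for r in rows if grant(r)]

rows = [
    {"account": "Acme", "measure": "revenue", "region": "EMEA", "channel": "direct",   "value": 9},
    {"account": "Acme", "measure": "cost",    "region": "APAC", "channel": "reseller", "value": 4},
    {"account": "Zeta", "measure": "revenue", "region": "EMEA", "channel": "direct",   "value": 6},
]

client_rep = make_grant(account="Acme")     # all data for one client
channel_mgr = make_grant(channel="direct")  # one channel, all accounts

assert len(dataset_for(rows, client_rep)) == 2
assert {r["account"] for r in dataset_for(rows, channel_mgr)} == {"Acme", "Zeta"}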
Access Control is really about Financial Governance
“Access Control” – even the words make it sound important. There is a good reason for that. The most basic notion of access control – “who can see what” – is extremely important. Very often there are critical issues of confidentiality involved. No firm wants its client lists exposed. There are very specific legal guidelines protecting access to employee information. There are Sarbanes-Oxley Act (“SOX”) rules carefully defining the timing around the release of financial information. In the best case, mishandling access control puts a firm’s reputation at risk. In the worst case, it can result in fines and even jail time. As if this were not enough incentive, there is another reason access control is important, and it has to do with corporate governance. In part one of this two-part series, we will look at what access control has to do with governance. In part two, we will focus on available access control approaches that address the governance problem as well.
“Access control” has often been the purview of IT. When a user needs access to certain information, the manager or executive who decides to authorize this grant typically completes an online form or sends an email to the appropriate contact with IT who makes the necessary grant authorization. Almost by definition, there is a potential governance problem here because decision makers in departments like corporate finance are dependent on their IT partners for access control. “Governance” in this context means that corporate finance controls “who can see what.” In this case, they do not.
For true governance to exist, access control must be in the hands of business users like corporate finance. No matter how automated the process may be, if corporate finance does not have direct control over the assignment of roles and access, the conditions of governance do not exist. Auditors are usually okay with IT being able to assign finance users access to systems – if IT staff cannot actually see the data themselves, they don’t “count” as users with access. We don’t typically dwell on the fact that IT users could “grant” themselves access to the system – we just count on them not to do so. There are myriad other potential problems with this scenario. They may be infrequent, but among the ones we have seen: 1) a user was inadvertently granted access because their corporate email address was one letter different from the intended user’s and the manager made a typo; 2) an employee switched roles and should no longer have been able to access sensitive financial data, but the manager forgot to notify IT to deauthorize access; 3) a request to change access controls was made, but the IT person who handles access control was on extended PTO and wasn’t able to address the request for a couple of months.
The fact that access control involves someone outside business users like corporate finance matters. The examples cited above are human errors, and human errors can happen even when IT is not involved – which is all the more reason access control should remain in the realm of the business user. First, when an “extra” person is involved, the likelihood of this kind of mistake increases. Second, when a mistake does occur, the fact that the solution is out of the hands of corporate finance is not in compliance with most governance requirements.
When it comes to sensitive information, particularly in the realm of corporate finance, access control is really a governance issue. Too much is at stake to cede access control responsibility to any other organization. Fortunately, there are solutions. In the second part of this series, we will look at how access control can remain in the realm of a business department like corporate finance.
You Might Benefit from a Construct Library
A Construct Library is a must for most companies. We created them for clients before our software even existed. In fact, long before we started our company, many of us created Construct Libraries within large companies. This kind of Construct Library is good to have, but an application that builds in the Construct Library and uses it to automate chart building is an even more powerful idea.
Even without an application to use it, a Construct Library can serve as a reference for ways to visualize data – a kind of repository of data visualization best practices. When creating a chart, table, or graph, people across a company can access this online reference library to “short-cut” the selection process. Because each example in the Construct Library displays data using a particular chart type, the library also helps expedite the selection of a chart type for the task at hand.
Further, an externally referenceable Construct Library can expedite the assembly of templates. In the same way that a Construct Library makes it easier to create a single chart by serving as a point of reference, that same process can be repeated to assemble a “template” of sorts manually.
As much as a Construct Library can expedite the creation of a single template, it is not a substitute for a template creation platform. Because such a template is a manually created collection of objects, it is not a real template in the traditional sense of the word. Rather, it represents the manual assembly of chart types.
When a Construct Library is used within an application, the nature of a template changes. When an application treats Constructs as “building blocks” to be used in the creation of reports, dashboards, or scorecards, entire templates are created at once.
This is the real power of a Construct Library. Sure, Construct Libraries are great tools for any organization to have at its disposal. But harnessing a Construct Library within an application realizes the true potential of Constructs: whole “templates” can be created at once. If you don’t have a Construct Library, you are missing out. If you have one but don’t have an application that makes use of it, you are missing an opportunity.
What a “Template Creation Platform for Analytics” Is
The phrase “Template Creation Platform for Analytics” is a mouthful. It sounds technical and intimidating when said all at once, and even for those who understand every word, the implications of the phrase are difficult to process. However, parsing each word into digestible bites and then understanding the phrase in context makes it much easier to grasp. Even though there may be no frame of reference or point of comparison, it becomes a lot easier to understand exactly what the Agylytyx Generator does.
When we used to describe the Agylytyx Generator as a “Template Creation Platform for Analytics,” we would get a lot of glazed-over looks. That was probably because people weren’t really used to thinking in those terms, and all they would hear were what they perceived as buzzwords. To some extent, we know that is still the case. When confronted with entirely new developments we haven’t heard of before or didn’t know existed, our first tendency is to discount things we don’t understand. The more use cases the company accumulates the more understandable this new approach becomes.
It is usually not the first word, “Template,” where we lose people. Everyone knows what a template is – or at least they think they do. (For a full explanation of where the standard notion falls short, read “When a Template is Not a Template.”) At least people have heard the word and may even have used it. In rare cases they may even have created a template before.
Those persons who have created a template don’t usually get lost on the second word either. They are usually the ones hanging in there with “Template Creation…” In fact, even many people who have never actually created a template are still with us at this point. It is not a difficult concept to grasp that a template must have been created for it to exist.
We often start to lose people at “Platform.” Many people are unfamiliar with the word as applied to technology. Those who are familiar with it often hear it in the context of an infrastructure provider whose offering another vendor uses to deliver their product or service (for example, PaaS or “platform as a service”). The concept of a “Template Creation Platform” is too much for most people because it conjures up images of vendors using a product to create templates in order to repackage and sell those templates as part of their own product. We get that, and we concede that it is a bit confusing.
But the Agylytyx Generator is designed for the end user, not for other vendors. When we use the term “Platform” we clearly don’t mean it in the same sense as “platform as a service.” In fact, we put the platform directly in the hands of the end user. What kind of platform is that? A template creation platform, of course. That implication is intentional, and it is why we use the metaphor of “building blocks.”
The “building blocks” are described by the final word in the product description, which explains what kind of “Templates” are being “Created” using the “Platform” – “Analytic” ones. Using the analytic building blocks, users create their own templates. The Agylytyx Generator is the platform users access to do exactly that themselves.
Breaking down each word, it is possible to appreciate the fact that a “Template Creation Platform for Analytics” exists even though it constitutes something for which there is no analog.
What “Data to Charts in One Click” Really Means
It sounds catchy. Who wouldn’t want to be able to do that? There is an appeal to anything which only takes “one click” or for one to be a “click away” from anything. In fact, just about any vendor can (and many do) make similar claims, since technically the final “click” required by a user constitutes “one click” if one starts counting then. Anything can count as one click particularly if a user 1) considers a chart and many charts the same thing; 2) doesn’t count the previous steps required; or 3) considers a “canned” report format the same as dynamic template creation.
First, there is a big difference between one chart and multiple charts. In a previous post (Filtering and Pivoting or Making Templates?) we showed the many steps required to use filters to change a single chart. In the graphic generation package from a leading vendor, the picture the vendor uses to explain its approach portrays a chart with seven different filters that may be adjusted to change that chart. That approach makes an interesting case study – setting multiple filters requires multiple “clicks” to change a single chart. Of course, when the chart changes, the previous chart is lost (unless the user can remember the filter settings used to create it). Changing a whole set of charts by simply pointing at a different data set, or clicking back to restore that set of charts, is a far better approach.
Second, there are always previous steps required. BI vendors typically make their products look easy by leaving out many of these steps. One factor we didn’t even mention in the post referenced above is the amount of preparation work necessary to create the filter alignment in the first place. For products on the market today, things have not moved much beyond the Microsoft Excel metaphor for creating charts – picking rows and columns of data, choosing chart types, playing with attributes, axes, formatting, etc. Usually through a process of trial and error (selecting different sets of data, for example), a user can arrive at a single chart. Today’s BI products have either made that process marginally easier or have offloaded it onto IT to program.
Third, so-called “dynamic” templates really aren’t. A few products offer canned dashboards, scorecards, or reports to which filters can be applied. These products follow the same process: define the format to be viewed by the consumer, define the “attributes” (“filters,” “pivots”) to be applied or changed by the end user, and then map the data fields to the proper part of the report format. The outcome is a dashboard, scorecard, or report that can be repopulated and redrawn by the user simply by changing the filters. Because they are filterable, these formats are called “dynamic.” They are not really dynamic; they are static, because the format itself cannot be changed without reprogramming. A superior, truly dynamic option is one which, in addition to the filtering capability mentioned above, gives users the ability to create and edit as many dashboards, scorecards, or reports as they want. For more information on this critical difference, read “When is a Template Not a Template.”
A true “Data to Charts in One Click” solution means a few things. First, there is no user specification involved – no pivoting, no filtering, no selecting of data elements, no choosing of attributes, no selection of chart types. Second, minimal (or ideally no) data preparation is required. Third, the output is truly dynamic, not predefined.
Governed Data Discovery Should Mean No Lying with Statistics
A lot of vendors are writing about their approach to “Governed Data Discovery.” All vendors are approaching the concept of governance the same way today – ensuring there is uniform source control over the underlying data used by a BI application, so that all the data ties in all analytics. For real governance, that is not enough. Real governance means that in addition, all the analytics are presented using company approved and controlled formats.
The term “governed data discovery” is relatively new one, and most people credit the invention of the term to a single source. According to the Gartner Group’s February 2014 Magic Quadrant for Business Intelligence and Analytic Platforms, “Data discovery capabilities are dominating new purchasing requirements, even for larger deployments, as alternatives to traditional BI tools. But ‘governed data discovery’ — the ability to meet the dual demands of enterprise IT and business users — remains a challenge unmet by any one vendor.”
Gartner got it right – they invented the term out of necessity, based on customer requirements. As one senior executive at a Fortune 100 company asked us recently “how do you keep people from monkeying with the data?” When pressed for specifics, the executive revealed a very common practice in their company (and likely most others) – folks were commonly eliminating certain deals or data points as “outliers” when they prepared their analytics. Governed Data Discovery means enterprise IT controls data so the requirement for governance is met in the sense that there is a single source of truth for all analytics.
This governance leaves companies better off than before – at least they can be sure users are accessing the same data. There is now a control point beyond the corporate edict that “all users must use a certain data source,” an edict that practically dares users to find other sources of data. In that sense, governance is effective.
For real “governance” to be achieved, data control simply isn’t enough. Governance means that a company controls not just the backend data through enterprise IT, but also ensures that business users of the application are all using the same “building blocks” to create their analytics. For example, if something like “evolution of contribution margin by product by region” is produced, the same chart type and even the same colors will be used by users across the company. It does not mean one user can use a bubble chart, another a trend line chart, another a scatter diagram, and yet another a tornado chart. Even elements as basic as color can affect our understanding of data.
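One simple way to picture this kind of presentation governance is a shared library that maps each approved analytic to exactly one sanctioned chart specification. The following sketch is our own illustration (the library contents and function names are invented for the example):

```python
# Sketch: a construct library maps each approved analytic to one
# company-sanctioned presentation, so every user who asks for
# "contribution margin by product by region" gets the same chart type
# and palette -- the formatting decision is governed, not left to the
# individual chart author.

CONSTRUCT_LIBRARY = {
    "contribution_margin_by_product_by_region": {
        "chart_type": "stacked_bar",
        "palette": ["#1f77b4", "#ff7f0e", "#2ca02c"],
    },
}

def render(analytic_name):
    spec = CONSTRUCT_LIBRARY.get(analytic_name)
    if spec is None:
        raise KeyError(f"{analytic_name!r} is not an approved construct")
    return spec  # a real implementation would draw the chart from the spec

# Two different users producing the same analytic get the same format:
a = render("contribution_margin_by_product_by_region")
b = render("contribution_margin_by_product_by_region")
assert a["chart_type"] == b["chart_type"] == "stacked_bar"
assert a["palette"] == b["palette"]
```

The design choice worth noting is that the lookup is the only path to a chart: there is no per-user override, so a bubble chart and a tornado chart of the same data simply cannot coexist.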
In a perverse corporate edition of a “beauty contest,” the best looking chart often provokes the most discussion whether or not it is the most compelling way to present the data. Real “governance” means users are not spending their time trying to create the most impressive version of a chart, or that meetings are not derailed by discussions rooted in the latest eye-catching graphic.
A book published over fifty years ago, “How to Lie with Statistics,” famously documents how graphic images can misrepresent underlying data. Even a user who does not intend to misrepresent facts can still unintentionally lead viewers to incorrect conclusions. The book never alleges that users are accessing incorrect or erroneous data. It assumes the data is valid, but documents the myriad ways end users can and do mislead readers with that valid underlying data. The point applies to the term “governed data discovery” in a very important way.
The point is this: even when data completely ties out in analytics, without control over the output, companies still have no effective governance. Even when users are all accessing the same underlying data, if companies have no control over the way the data is presented, there is no effective governance. True “governed data discovery” means that companies enforce a uniform presentation method as well.
The Difference Between Horizontal and Vertical Drill Down
“Drill down” on a chart is a frequently heard term. It has become so common that analysts who cover business intelligence often make it a category all its own, which they have dubbed “drillability.” Today, we introduce a critical distinction: the drillability offered by today’s applications actually employs a vertical drilling technique, and a new and better way of drilling exists – “horizontal drilling.”
We do not need a distinguishing term until a new alternative appears; a new metaphor needs a new term to define it. Residential piping provides a good example. Until plastic piping was invented, there was no “metal” piping or “PVC” piping – there was just “piping.” One didn’t have to say “plastic” because plastic hadn’t been introduced yet. When it was, the distinction had to be made. Eventually, when we talk about the piping in new houses, it is clear we are talking about plastic piping, since it is now in standard use. In the same way, we need to make a distinction between “vertical” and “horizontal” drilling.
“Vertical drilling” is what we know of as “drilling” today. So far, the common use of the term “drillability” refers to the way all applications handle the exploration of data, and it is an appropriate term for the act of clicking on a chart element to display what is “behind” it. The term means essentially the same thing to everyone – clicking on a chart leads us to “the next level” of data, so that the chart becomes a gateway into data exploration. This method has become so appealing that it is now the metaphor for data exploration.
There are some inherent problems with vertical drilling. It often constrains what we can view. Consider the example provided to the right – a basic trendline chart (the format and chart elements don’t really matter here). Suppose we would like more information on what looks to be revenue acceleration in Asia, so we decide to drill into the chart. Two things happen under the current metaphor. First, we must decide which “point” on the chart to click, and we will be presented with additional information about that quarter rather than exploring the trendline as a whole. Second, the information we are presented will be only a single part of the whole. If we click on Asia Q4, is the next thing we expect to see the Q4 revenue for the entire Asia region by product? Is it Q4 revenue for each of the countries in Asia? Is it Q4 revenue for each of the channels of distribution we use in Asia? Rather than helping our investigation, we are likely headed down the kind of “rat hole” that vertical drilling frequently leads us into.
Horizontal drilling is a different experience entirely. In horizontal drilling, we choose which charts we wish to see as we drill into our data, even to the point of viewing all the pieces of the whole at once. Rather than clicking somewhere on the chart and hoping what we see next will assist our investigation, we take control of the investigation with the same mouse clicks and display what we want instead. In the example above, we decided we need to understand the factors influencing rapid revenue acceleration in Asia. Choosing to drill horizontally on “Asia” would lead us from revenue trends to every possible factor affecting revenue in Asia. Instead of seeing revenue decomposed for one point in time for a single factor (like Q4 product revenue in Asia, for example), we would immediately see multiple charts (examples on the left).
Horizontal drilling enables us to select any element of a graphic and decompose (“explode”) that element into multiple variables. In this example, we have used horizontal drilling to decompose the revenue trend for Asia into multiple charts depicting the various trends that might assist our investigation. We can immediately deduce from a visual review that the growth rate of product 4 (from the first chart displayed), and particularly the dramatic growth of the reseller channel throughout the region (the third chart displayed here), warrant further horizontal investigation. The only chart that doesn’t help us is the decomposition by country (the second chart), since all the Asian countries appear to be growing at roughly equivalent rates.
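The mechanics of a horizontal drill can be sketched in a few lines. This is our own simplified illustration of the idea – selecting one element (“Asia”) decomposes the metric across every other dimension at once, yielding one series per dimension instead of one deeper slice per click:

```python
# Sketch of horizontal drilling: hold one dimension fixed and
# decompose the metric across *all* remaining dimensions in one step,
# producing one chart-ready series per dimension.

def horizontal_drill(rows, fix_dim, fix_val, metric="revenue"):
    """Return {dimension: {member: total}} for every dimension other
    than the one being held fixed."""
    subset = [r for r in rows if r[fix_dim] == fix_val]
    dims = [d for d in subset[0] if d not in (fix_dim, metric)]
    out = {}
    for dim in dims:
        series = {}
        for r in subset:
            series[r[dim]] = series.get(r[dim], 0) + r[metric]
        out[dim] = series
    return out

rows = [
    {"region": "Asia", "product": "P4", "channel": "reseller", "revenue": 8},
    {"region": "Asia", "product": "P1", "channel": "direct",   "revenue": 3},
    {"region": "EMEA", "product": "P4", "channel": "direct",   "revenue": 5},
]

# One step yields a chart per remaining dimension, all scoped to Asia:
charts = horizontal_drill(rows, "region", "Asia")
assert charts["product"] == {"P4": 8, "P1": 3}
assert charts["channel"] == {"reseller": 8, "direct": 3}
```

Contrast this with the vertical metaphor, where each click commits the user to one dimension and one point in time before the others can even be seen.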
Keep in mind that the vertical drilling experience would still have us looking at a single point in time for the “next” layer of data (probably countries). Adjusting that chart view to create a trendline chart would leave us looking at the second chart depicted here. We would then need to start our investigation all over, next vertically drilling into an element of our choosing (say, products) in order to produce and then drill into chart one above.
It doesn’t take much from this simple example to see how much time and effort horizontal drilling saves in our investigation, not to mention the dramatic increase in the likelihood that we can find the answers in our data.
Horizontal drilling, like PVC piping or any such innovation, will take some time to understand. Eventually, this type of drilling will make vertical drilling obsolete and become the standard for drilling.
Redefining Data Discovery Part III – What a new approach to data discovery means
In Parts I and II of this series, we advocated the use of a new approach to business intelligence, especially in the world of corporate FP&A. This week, we conclude this series with three use cases where this different approach can be applied in the real world. These case studies may look familiar, because they describe very common situations which happen in most large companies today. Making these processes a lot easier and more effective is what results from a redefined approach to data discovery.
Use case #1 – New Product Introduction
The approach without Agylytyx:
How the team would do it with the Agylytyx Generator:
Use Case #2 –Analyzing and Reporting on Portfolio Complexity
The team’s approach before Agylytyx:
What the Agylytyx Generator did for the team:
Use Case #3 – Long Range Planning and Annual Budgeting
The team’s approach before Agylytyx:
What Agylytyx did for the team:
Redefining Data Discovery Part II – What a new approach to data discovery means
After taking a “special request” from a LinkedIn FP&A group, we now turn our attention back to what amounts to a quantum leap forward in data discovery. In this post, we will begin to get specific about what a new approach to analytics means.
For many BI use cases, it is more efficient and effective to model the backend source data to fit a comprehensive set of pre-developed visual templates (e.g. dashboards) rather than to embrace the alternative - empowering users to generate multiple dashboards instantly by creating custom report templates and manipulating a data model in order to populate those templates. These benefits are most apparent when multiple complex business scenarios requiring advanced visualizations from different stakeholder viewpoints need to be analyzed. In fact, if your job consists of publishing the same report or dashboard each month or quarter and embraces no ad hoc analysis or field specific queries based on those dashboard or reports, you may not need to look for an alternate approach.
Those of us who find ourselves building strategic presentations can realize a 10:1 reduction in analytical processing time and a marked improvement in business insights from this approach, leading to better decisions and improved business performance.
The business problem so many of us face is the limitation of existing applications for visualizing complex portfolios when creating analytic presentations for key decision makers. Time constraints and governance requirements mean teams often present inferior, incomplete, erroneous, or inaccurate analytics when attempting to support analysis and decision making.
For these reasons, we have created an alternative approach, which we use in our application, the Agylytyx Generator. The Agylytyx Generator includes data preparation methods for creating a unified DataMart, self-service tools to create datasets for analysis (e.g. scenarios), and an extensive library of visualization objects that serve as building blocks users can combine into logical “frameworks.” A unique capability is that users create their own frameworks, and each framework can analyze multiple datasets. With a single click, a whole set of graphs and charts is populated with data from a new dataset, or from multiple datasets. Using the product’s comparison function, users can apply any framework to multiple datasets and compare the output side by side. With other spreadsheet visualization and dashboard tools, each chart or graph would need to be re-created for each dataset. Given time constraints and governance considerations, the visualization “metaphor” used by other products is inferior to the Agylytyx Generator approach.
In the final post in this series, we will examine some common use cases based on the application of this approach. We will close this post with a chart that offers a “deep dive” on the difference in functionality between the approach other applications use when attempting to solve this problem and the approach we have built into the Agylytyx Generator application.
Ten Best Practices for Data Analysis
After we wrote the blog post “10 Signs Your Data Analysis is Inefficient,” it was suggested that we write a follow-up post indicating some ways you can tell if your process is efficient. Incidentally, the suggestion came to us in a very active LinkedIn group called “FP&A Club,” which you may want to check out if you are not already a member. In any case, we felt that suggestion warranted a digression from the series on Data Discovery that we just started.
Here are ten ways you can tell if your data analysis is really firing on all cylinders:
1. Your team has changed their data discovery metaphor.
Since we are in the midst of a series on this topic now, we will not belabor this point too much. We will note that a few companies have successfully changed the way they look at data discovery – from the traditional method of “vertical” drill down to a much faster method of “horizontal” drill down.
We will say more about this in a future post. Horizontal drill down is usually enabled when the second best practice is present.
2. Your team’s charts are built for them.
The time we spend adjusting filters and changing chart types is better spent looking through sets of charts built from different drill-down perspectives. For example, a dashboard composed of several analytics (sales, TAM, market share, revenue, gross margin) about a product might not be meaningful, but it may put us on the right track. Playing with various combinations, like regions or channels of distribution, will eventually (we hope) uncover a key insight. An approach with all these charts already built in allows users to simply view the charts and look through them for key insights, rather than relying on the user to “build” charts by playing with filters, hoping to stumble upon the “right” discoveries.
3. Your team can create and edit entire templates within minutes.
Let’s face it, templates take time to create. Despite having access to preformatted templates (which are even available on the web), customizing a template can be tedious and time consuming. A best practice instead is to leverage a template creation platform where users can focus on analysis rather than on customizing charts.
4. Your team has developed very strong writing skills.
Sounds basic, right? Read on. As many have famously discovered, it is a lot harder to write a little than to write a lot. The shorter and more impactful you can make bullets, the more effective they will be. Too often we leave this as an afterthought in the analytical process. Teams that are successful are invariably very good at writing analytics bullets. It is not something that we can automate – no application can do it for us – at some point there is an inevitable need for human interpretation of the analytics. Of course there are exceptions to all rules, but these bullets are usually:
Positioned properly – Bullets are placed next to an analytic so that they interpret it directly and leave nothing to the reader’s imagination.
Written concisely – A single bullet usually fits on a single line in 18-point font.
Edited well – Adjectives, adverbs and articles are often left out.
Presented consistently – Using the same bullet structure for each item is essential to avoid “cognitive dissonance” – in other words, if you start one bullet with a verb, they all need to start with a verb – throughout the presentation. Also, make sure punctuation is consistent – for example, don’t end some bullets with a period and others without.
Precise linguistically – This may be a bit more of an art than a science, but exciting sounding words like “significantly” are generally less informative than “18% Y/Y growth” for example.
How often have we heard “the charts speak for themselves” or “just slap some analysis on these slides and send them out”?
5. Your team saves time for analysis of the analytics.
We get that a lot of important questions are time sensitive. When facing a lot of important strategy questions coming in from different quarters, we may often have a tendency to complete a presentation so that we can email it off and move on to creating the next presentation deck. As difficult as this may sound to do, it is always better to set expectations appropriately about timing for these responses, so that you can build in the necessary time for analytics. Some of the best practices above (and some mentioned below) focus on ways to free up time for analysis. The teams who are the very best at this spend at least as much time analyzing their charts as they do creating them.
6. Your team has automated the creation of entire presentations.
Unfortunately, one of the most time consuming areas of analytic communications is taking screenshots or copying and pasting from another application into PowerPoint. The most efficient teams have automated the creation of entire presentation decks, so that any of the analytic output (no matter how many charts) they build is exported en masse to their PowerPoint application. Teams that have this capability also don’t fret updates, edits and changes, because they simply re-export rather than going through the copy/paste process again.
7. Your team already knows about “best practices” for data display.
The very best teams don’t need to play around with chart formats for optimal display of financial data. They have already developed chart formats with which they are comfortable, based on the long history of finance persons thinking about the best way to display data and a knowledge of what their leaders are comfortable with seeing. For this reason, your team doesn’t need to play around with chart types – they use prebuilt charts that already use the format they like.
8. Your team is open-minded about data display. This may seem like a contradiction with the point above. It isn’t. The point is that your team pays attention to the best practices out there. They are always looking for the next great innovation and aren’t above taking some guidance on when to use, and when not use, certain views.
9. Your team collaborates on analytics from the beginning of the process. This one is much easier said than done, but the organizations that do it properly save a lot of time. A natural human inclination is to want to show a final draft of a product to get feedback. In the situation we’re describing, it means circulating a presentation deck amongst the team to provide feedback. This is a form of collaboration, but it may be the form of collaboration which requires greater time cycles (reformulating charts and updating presentations).
The most effective teams have developed ways to work collaboratively from the start of the process. Instead of working “in silos” on different problems which they will later share with each other, truly efficient teams share analytical responsibilities on each of the problems. They have applications and processes to support this collaboration, and produce presentation decks together which need very little final editing.
10. Your team makes minimal (or no) data errors.
One of the hardest things about a set of analytics in a presentation is ensuring that the data are internally consistent and will be accepted by stakeholders as accurate. Whenever a team has to introduce extra steps in the process, the potential for manual human error goes way up. Ultimately, we all know that executives will zero in on any data that is incorrect or inconsistent. Even teams who double- and triple-check all their formulas, data, and charts can still make mistakes. The most efficient teams don’t make these errors at all. They have analytic capabilities like the ones described here which are built into (or at least layered on top of) existing systems. By leveraging the ability to analyze data and export their final presentations to PowerPoint directly from the system, these teams avoid the mistakes that can be so costly to a team’s credibility.
Redefining Data Discovery – An Overview
The term "data discovery" is a fairly recent linguistic construct. The idea has captured the imagination of most of us. It already has its own Wikipedia entry, analysts have quantified it, and vendors have invested millions to align their brand with it.
The old method of data discovery was to play with data and chart types in Microsoft Excel. Some products have emerged which make the selection of a chart type and the data to configure it faster and easier. Consequently, end users often use products like Qlikview or Tableau which facilitate the process of data discovery. In many cases, users ask IT departments to build interactive drillable dashboards or preconfigured reports.
In these situations, analysts are using the metaphor for data discovery that was the modus operandi of data scientists before the term "data discovery" even existed. Even using Excel, an end user would conduct essentially forensic business exercises by either 1) drawing charts until something stood out or 2) trying to avoid having IT create a report by canning entire static presentations and arranging them so they automatically repopulate. Many applications have gotten pretty good at helping users draw a chart quickly using filters. They have even figured out a way to make customizing a data set possible.
It is time for a quantum leap in the metaphor we all use for "data discovery." We think it should mean that many charts get built right away from a built-in Construct Library, so that analysts immediately start reviewing data pictorially and let the application surface the insights.
We will elaborate more on these points in our next few posts.
Filtering and Pivoting or Making Templates
We all know the drill – someone important wants to know the answer to a question, so we make a presentation deck full of analytics that address the issue. Pull the data from various sources, merge it together in Excel, and then paste the charts into PowerPoint, adding analysis to the individual slide elements.
Spreadsheets do offer a lot of flexibility when it comes to chart making. They can change chart types easily – so easily that a user can choose a kind of chart that doesn't even display because the selected data does not support that chart type. They can link to data sets so that updates build charts automatically – but if the scale of the data is different, or the format of the data is slightly different, the chart breaks and has to be fixed. When this flexibility takes the form of a pivot table, the charts support a forensic exercise of looking at different data sets – but in this format there is actually less flexibility, since users cannot "save" charts to compare with charts in other pivots; as soon as a new pivot is created, the first chart breaks.
Some products have recently attempted to solve some of these issues through the use of "filters" or ways to slice data into a chart. This method takes the place of the traditional pivot chart metaphor of dragging measure and attribute combinations onto a canvas to make a chart. The promise of the filter is that it is easier to slice and dice data into a chart format which is argued to be more helpful than pivots when conducting a forensic exercise on a piece of data.
When it comes right down to it, "filters" are an improvement on the standard "pivot" for creating charts. But they still use the same paradigm of looking at data, as the diagram here shows. Even though some of the "attributes," like Region, are filterable, the basic chart itself is still set up through the use of pivots (i.e. the highlighted lists on the side of the diagram shown here). Users go through the same forensic exercise as they do with pivots, and they are still manually building one chart at a time. And when the filters are changed, the chart does change, but the previous pivot is lost. Creating an entire presentation deck still requires building one chart at a time, whether one is using pivots or the slightly improved "filters."
A completely different paradigm for creating those ubiquitous presentation decks
No losing charts.
No broken links.
No guessing what combinations of elements make a good chart.
No building of one chart at a time.
Filtering and pivoting are time wasters. When it comes to making presentation decks out of any dataset, there is no substitute for making templates.
The Crucial Difference Between Creating and Comparing Scenarios
In a strategic business environment creating scenarios is hard. Comparing scenarios is even harder – a lot harder.
Sure, creating and comparing simple scenarios is easy. On an academic level, determining mathematical outcomes and sensitivities given a certain set of inputs can be a straightforward exercise. In a capital markets environment, this kind of objectivity often exists. Even though these kinds of scenario models can be complex, they are usually pretty straightforward to build. For example, building a model which answers the question, "what happens to our investment holdings if interest rates rise or fall to X?" is usually a straightforward exercise.
Scenario creation gets more complicated when variables are not always known. These kinds of situations often exist in business strategy. For example, questions such as "what happens to the market if competitor X and competitor Y merge?" or "what happens if we gear our investments more toward emerging markets?" are less straightforward.
That is why creating scenarios is hard work. Since most scenario creation techniques start by identifying drivers of models, and since those drivers are not readily apparent, scenario creation experts will look for useful "proxies" – known pieces of information that should serve as reasonable substitutes for the unknown drivers – which will help them build models.
The one piece of good news about scenario creation is that there are ample tools available for the purpose. There are specific software applications used in various industries to help build scenarios. Microsoft Excel is very strong as a generic tool for scenario creation. Excel features like Goal Seek help build sensitivity analyses, and the built-in Scenario Manager helps develop and keep track of the scenarios that have been created.
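For readers who work outside Excel, Goal Seek's behavior can be approximated in a few lines of code: search for the input value that drives a model's output to a target. A hedged sketch using bisection (the model and figures are invented for illustration, and the function is assumed to be monotonic on the search interval):

```python
def goal_seek(f, target, lo, hi, tol=1e-9):
    """Bisection search for x in [lo, hi] with f(x) == target.
    Assumes f is monotonic on the interval, as Goal Seek is in spirit."""
    f_lo = f(lo) - target
    for _ in range(200):
        mid = (lo + hi) / 2.0
        f_mid = f(mid) - target
        if abs(f_mid) < tol:
            return mid
        if (f_mid < 0) == (f_lo < 0):
            lo, f_lo = mid, f_mid   # root is in the upper half
        else:
            hi = mid                # root is in the lower half
    return (lo + hi) / 2.0

# Invented example: what annual rate makes a $1,000 holding worth
# $1,200 after 5 years?
def future_value(rate, principal=1000.0, years=5):
    return principal * (1 + rate) ** years

rate = goal_seek(future_value, target=1200.0, lo=0.0, hi=0.2)
```

This is the "known output, unknown input" direction of sensitivity analysis; running `f` over a grid of inputs gives the other direction.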
Many of us have known some very strong scenario modelers. Using proxies and variables, they can create models which can be used to build very descriptive scenarios. These scenario modelers often take full advantage of the tools at their disposal.
All the skill in the world at scenario building will not help compare the output of scenarios. It turns out that there are few people who do this well, and even fewer tools available for comparing scenario output.
Let's say that an executive, with the support of the scenario planner, has finally painted two different pictures. For example, let's say two scenarios support the same level of investment in the following year. One scenario depicts what is likely to happen under a "business-as-usual" scenario, and the other scenario depicts what is likely to happen if that same investment level is tilted slightly toward spending on emerging markets.
How can the executive make a strategic choice between the two scenarios? What application can be used to compare them? Trying to use the output from the existing scenario-builder application means toggling back and forth, looking at one set of numbers at a time, or choosing information to print out and trying to place side by side.
This is the exact reason why we have taken the approach we have with the Agylytyx Generator. The Agylytyx Generator is not a modeling tool; it does not build scenarios. There are enough tools like that in the marketplace, and as we've seen, Microsoft Excel is pretty good at that. The Agylytyx Generator allows users to quickly and easily create graphic side-by-side comparisons of scenarios, using perspectives built by end users themselves. Even better, with a click of the mouse, the Agylytyx Generator applies yet another perspective to the scenarios. For example, one perspective might be the CFO's point of view, another the VP of Sales's point of view.
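While the product itself isn't reproduced here, the side-by-side idea is easy to illustrate generically: apply one "perspective" (a fixed set of metrics) to every scenario and tabulate the results next to each other. A minimal Python sketch with invented metrics and numbers, not the product's actual code:

```python
# A "perspective" maps a scenario's line items to a fixed set of
# metrics; comparison is the same perspective over several scenarios.

def cfo_perspective(scenario):
    return {
        "Revenue": sum(p["revenue"] for p in scenario),
        "Margin %": round(100 * sum(p["margin"] for p in scenario)
                          / sum(p["revenue"] for p in scenario), 1),
    }

def compare(perspective, scenarios):
    """Render one perspective over many scenarios, side by side."""
    rows = {name: perspective(data) for name, data in scenarios.items()}
    metrics = list(next(iter(rows.values())))
    lines = ["Metric      " + "  ".join(f"{n:>12}" for n in rows)]
    for m in metrics:
        lines.append(f"{m:<12}" + "  ".join(f"{rows[n][m]:>12}" for n in rows))
    return "\n".join(lines)

business_as_usual = [{"revenue": 100.0, "margin": 40.0}]
emerging_tilt = [{"revenue": 110.0, "margin": 38.0}]
print(compare(cfo_perspective, {"Base": business_as_usual,
                                "EM tilt": emerging_tilt}))
```

Swapping `cfo_perspective` for a hypothetical `sales_perspective` re-renders the same comparison from a different stakeholder's point of view, which is the "one click, another perspective" idea.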
Scenario building is hard enough. Scenario comparison shouldn't be. Graphically understanding a business strategy can help answer questions where staring at numbers may not. "How much risk am I taking?", "What will be the short-term and long-term effects on net margin?", and "What will be the effect on my market share and ability to compete?" are questions that the Agylytyx Generator can help address. That is what we mean by the tagline "Strategy Visualized." In this case, it could just as easily be "Scenarios Visualized."
5 Ways to Help Sell a New Approach to Business Analytics Internally
It is not unusual for a business constituent to identify better solutions to their existing problems only to have IT represent that existing tools can or will be applied to that problem. This roadblock is even more common when prospective solutions are SaaS/Cloud based solutions.
There are various reasons why this happens so often. Below we look at some of the motivations behind IT throwing up these obstacles. These include:
"Not-invented-here" syndrome. This is the idea that anything not created or sponsored by the IT department is not a good option. Particularly when a toolset has recently been purchased in the same perceived space, this problem can be relatively acute.
"Rising Expectations." This is the idea that a recently purchased or deployed solution can or will directly address the business problem identified, when in fact business users find that it partially or completely leaves the problem unaddressed. No one wants to admit that they have adopted a solution which leaves business problems unanswered.
"The Unknown Quantity" factor. When IT is presented with an approach to a business problem which they have not encountered, the approach can be perceived as negative. Ostensibly this can occur because IT doesn't know the impact of having this approach become part of the fabric of the business. In reality, this reaction is motivated by human nature. No one wants to admit that they haven't seen a particular approach before.
As daunting as some of these hurdles can be, some best practices for engaging IT can help head them off or even overcome them. Some of these best practices appear below:
Engage IT early and often. The best way to make IT feel like a partner in any effort is to involve IT in any efforts to identify a solution. In one recent software enterprise implementation in support of finance, IT was involved with the production of the RFP. They were invited to all vendor meetings – IT was involved from the beginning as a key partner.
Produce a Business Requirement Document (BRD). Informally expressing product requirements in meetings, hallway conversations, even emails, are too easily dismissed. A resistant IT organization can ignore gaps in meeting business requirements too easily. Documenting business requirements leaves no room to claim a business challenge is being fully addressed when it isn't. This approach has the benefit of creating an objective definition of requirements which everyone can agree upon. It also has the advantage of improving the way business users elucidate requirements. There is something about being forced to write things down that brings out the best in our communications.
Build an effective business case. Different organizations use different approaches to justify adoption of solutions – some use NPV, some use Payback, some use IRR, some use ROI, etc. In any case, demonstrating how quickly solutions pay for themselves and generate returns for the business often proves difficult to ignore.
Identify ways the suggested approach maximizes the value of existing tools. Positioning a new approach as highly complementary to existing applications in use by the organization can help considerably. Explaining how the prospective solutions can be delivered within the interface of an existing application and provide increased value to that solution can help, because it can increase the value of solutions which have already been adopted. Showing how IT can "take credit" for the impact of adoption by improving the business case for existing applications often works.
Minimize the impact on IT. Using a cloud-based solution often minimizes the impact on IT. This approach should be used carefully and depends on the organization. In some cases – particularly when data integration is required for a cloud solution to work – IT can be minimally involved yet still feel ownership and take credit for the implementation without much resource utilization. In other cases, IT may not need to be involved at all, or even know the application is in use. If a solution is cloud-based, fully self-service, and delivered at an appropriate price point, it can even fly "under the radar." At the appropriate time, this type of implementation can even help make the case directly to IT for why the solution should be integrated.
Business constituents should never accept roadblocks from IT. If a solution is worth adopting, there are plenty of ways to get IT on board.
When a Template is not a Template
In the last blog post, we looked at the way the Agylytyx Generator can be used to create perspectives for various constituents across the business. In this post, we look at how those perspectives represent real and useful templates. By way of contrast, we will more closely examine what we have come to expect templates to be, and how existing products reinforce those perceptions.
The word "templates" means different things in different contexts. Marketing folks may use the word to create consistency in the form of a document or presentation that is prepared in different parts of the company. In the context of reports or analytic packages, they have come to be associated with the mold, pattern, or model, for a particular approach.
Vendors in the business analytics space have come to approach templates like a checkbox – almost as if to say "yeah, we have that too." For example, vendors commonly represent that they have an "out of the box portfolio management template" or a "ready to use gross margin module." What they really mean is "yes, we have some charts we have collected that portray gross margin information." But when it comes time to actually use those charts (or reports, or dashboards, or whatever the "template" concerns), we invariably find out that the template must be "customized" to include our specific information. For example, support may tell us that it is necessary to "associate part id's with the unit of measure for each report" and that "defining regions in this template is as easy as choosing the proper attributes and mapping them to the correct field id's so that the reports will populate correctly." These are actual situations, by the way.
Following these steps in order to "customize a template" often requires an IT-like level of system knowledge. It is little wonder that data analysts spend so much time on technical manipulations of data and usually have little time for valuable strategic analysis. Often, "customizing a template" takes as long as it would take to build all the charts in the template in the first place. So what good are "templates" anyway? When a user needs to choose attributes and units of measure in order to populate the template, the so-called template is not really a template at all.
The Agylytyx Generator offers a real solution to the template problem. Rather than approaching "templates" as a one-size-fits-all tool, the Agylytyx Generator is a template creation platform. The application eliminates the need for end users to define units of measure, map data fields, or choose the order of attributes. The only thing end users customize is templates, not charts. The Agylytyx Generator builds the charts through the application of the templates.
So when is a template not really a template? When an end user has to customize charts in a template. When is a template a real template? When the end user has complete control over the creation and editing of the template.
Defining Business Intelligence
Business intelligence is not as intelligent as we may think it is. This has a lot to do with the way we have come to think about the whole "category" of applications called "Business Intelligence." A quick linguistic analysis is insightful. Sometimes a term or phrase finds its way into our business vernacular. The term may have real relevance before it finds its way into common use, but its meaning is almost always impacted by it.
The common pattern is this:
1) No one has heard the term before, so its use probably indicates advanced knowledge, and it serves as a way of recognizing others who understand it;
Consider the following examples.
1. If a concept has merit, the evolution of the term corresponds to the concept's evolution. For example, the ability of a local computer to access applications running on a server was known as:
"time sharing" in the 1970's;
"client-server" in the 1980's;
"application service provider" (ASP) in the 1990's;
"software as a service (SaaS) in the 2000's;
"cloud" from 2010.
2. Sometimes terms are not time dependent, but they may reveal a lot about the maturity of an organization. Think about the idea of evaluating an investment. Common terms which have been used over the years include:
Payback – the time at which an investment pays for itself
Internal Rate of Return (IRR) – the annualized discount rate at which an investment's net present value equals zero
Return on Investment (ROI) – the total benefit generated by an investment relative to its cost
Net Present Value (NPV) – the total return generated by an investment adjusted for the time value of money
Economic Value Added™ (EVA) – the total return generated by an investment, adjusted for the time value of money, with balance sheet factors included
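For concreteness, the first four metrics above can be computed from a simple yearly cash-flow series. A hedged Python sketch (the cash-flow numbers are invented; EVA is omitted because it requires balance-sheet inputs):

```python
# Investment metrics from a yearly cash-flow series, where index 0 is
# the initial outlay (negative) and later entries are yearly returns.

def payback_years(cash_flows):
    """Payback: first year in which cumulative cash flow turns non-negative."""
    total = 0.0
    for year, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return year
    return None  # never pays back

def roi(cash_flows):
    """ROI: total benefit relative to the initial cost."""
    cost = -cash_flows[0]
    return sum(cash_flows[1:]) / cost

def npv(rate, cash_flows):
    """NPV: total return adjusted for the time value of money."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    """IRR: discount rate at which NPV is zero (bisection; assumes
    one sign change in the cash flows, so NPV is decreasing in rate)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

flows = [-1000.0, 400.0, 400.0, 400.0]  # invest 1000, recover 400/yr
```

With these invented flows, payback lands in year 3, ROI is 1.2x, and the IRR comes out just under 10% – exactly the kind of figures the business-case discussion above relies on.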
Now, think about the concept of "Business Intelligence" (BI). The concept of business intelligence originated as early as the 1950s. The implications for finance have evolved over time, but some of the terms commonly heard are:
There is a dilemma here. For real-time information to populate anything, IT for Finance is usually involved. For financial analysts to produce dashboards, balanced scorecards, or metrics readouts that make sense each quarter, the required information and related queries change frequently. Keeping up with the latest terminology can be almost as important as producing relevant information. There is a very strong argument that the concept of "Business Intelligence" has actually devolved to pattern four or pattern five (see above). In the finance community, "Business Intelligence" is almost synonymous with the terms "dashboard" or "balanced scorecard" or "metrics review" or any other such terms. IT organizations that support finance departments frequently deride these activities. Even when they are successfully implemented, their value as decision making tools tends to be much less than originally claimed.
How can "business intelligence" be redefined in compelling and lasting terms? The answer to that question may lie in the original intent of the term. Wikipedia quotes a famous author who in 2009 defined business intelligence as:
"A set of theories, methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information for business purposes. BI can handle large amounts of information to help identify and develop new opportunities. Making use of new opportunities and implementing an effective strategy can provide a competitive market advantage and long-term stability."
The definition makes sense and may be a compelling, lasting one. Many recoil at the very notion of "business intelligence," yet those same people would endorse the definition. After all, who wouldn't want to find actionable data that becomes useful "for business purposes" and helps implement an "effective strategy"? Unfortunately, many business intelligence solutions do not meet the objectives set out in the very definition.
In order to stay relevant, today's finance professional needs to come to grips with existing approaches to BI. A Pattern I and Pattern II approach to BI will rise above discussions of dashboards, scorecards, and reports. A finance executive seeking to improve relevance and strategic contribution would do well to revert to the original definition of business intelligence and redefine it as "Strategy Visualized."
Executive Dashboards, The Moving Target, and
The marketplace is full of noise about dashboards and balanced scorecards. Checking the box called "dashboards" is a requirement for vendors from project management to business intelligence to manufacturing, planning, and ERP. "Balanced scorecards", "built-in dashboards", "out-of-the-box dashboards", "customizable dashboards", "interactive dashboards", "drillable dashboards", and lots of other buzzwords have found their way into common vernacular. After all, what executive wants to admit that he or she doesn't have "immediate visibility" into the "key metrics" affecting their business?
There are certainly some valid dashboard applications – for example, for a PMO. But there are significant limitations to the use of dashboards at the executive decision-making level in large enterprises.
At this level, these approaches rarely have longevity. In this environment I have rarely (if ever) seen a dashboard obtain critical mass, much less sustain that momentum beyond a quarter. Many times, executives don't even log into the dashboard in the first place. Usually, these executives know that if an issue exists it will be surfaced to their attention. As one manager told me recently about his VP checking dashboards: "I don't think he's logged in to check a number himself in years."
Another common problem is that dashboards tend to be moving targets. I can remember creating a new dashboard format each quarter. In the next quarter, executives would inevitably make modifications to the dashboard to reflect the business metrics that they wanted to see that quarter. This moving-target effect was not because decision makers wanted to make life difficult for corporate finance; it happened because the metrics that made a difference to the business would naturally vary from quarter to quarter. Trying to make a dashboard in today's changing business environment is often like trying to nail jello to a wall.
All of this just underscores the fact that, when it comes to important matters of business strategy in a large enterprise, a dashboard is not the right answer. I certainly can't remember a time that a critical business decision about strategy was made because someone gleaned a critical insight from a red light or an off-track indicator on their dashboard. When it comes to important business decisions, there is a reason executives don't log into dashboards, reports, or scorecards. Pretending that business insights can be gleaned from one of these forms shows how diluted (and deluded) our reliance on traditional business intelligence has become. Looking for business insight about strategy requires context around numbers, and you don't get that from a dashboard.
There is a Big Difference Between a Chart and a Construct
Around here we refer to Constructs when we point to a single Chart in our output. One of the most common questions we get is "what is the difference between a chart and a construct?" In fact, even those who may have once been familiar with the difference need to be reminded until it is etched clearly in their memory. There is a big difference between a chart (and a chart type) and a construct.
The way all other products (Microsoft Excel, Tibco Spotfire, any Tableau product, SAP Lumira, and more) work is to first require a user to select a chart type, then select the data sets (often called things like "measures," "attributes," or "values") required to populate that chart type. Product demonstrations usually skip conveniently over this step or make it look a lot easier than it actually is. The outcome is something we all refer to as a "chart." Creating each chart requires the same process – want ten charts? Select ten chart types one by one. Populate each chart one at a time by choosing the correct measure and attribute combinations – usually a long and laborious process of trial and error.
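To make the per-chart workflow concrete, here is a minimal sketch in Python. The names (build_chart, the sample data, the measure and attribute fields) are purely illustrative assumptions, not the API of any product named above; the point is only that every chart demands its own type/measure/attribute selection.

```python
# Hypothetical sketch of the per-chart workflow: each chart is built by
# manually binding one chart type to one measure/attribute pair.

def build_chart(chart_type, measure, attribute, data):
    """Bind a single chart type to a single measure/attribute selection."""
    series = [(row[attribute], row[measure]) for row in data]
    return {"type": chart_type, "series": series}

data = [
    {"quarter": "Q1", "revenue": 120, "headcount": 40},
    {"quarter": "Q2", "revenue": 135, "headcount": 42},
]

# Want ten charts? Repeat this selection ten times, one chart at a time.
charts = [
    build_chart("bar", "revenue", "quarter", data),
    build_chart("line", "headcount", "quarter", data),
    # ...eight more manual selections...
]
```

Note that nothing in this workflow knows about the next dataset: point the same selections at data with a slightly different shape and each chart must be rebuilt by hand.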
Now want to make a template out of the set of charts so that it can be re-used on any data set, choosing any scenario? Reusing these charts requires ensuring that additional datasets conform exactly to the same pattern as the original data, taking care not to break any links. The result of attempting to make reusable templates this way is usually catastrophic. Now try changing a chart in the "template" a bit and re-applying that chart across all the datasets you have created. Or try creating a new chart, adding it to the "template," and applying it to each and every dataset.
Remember, in this case, one "template" has been created. Now try repeating this process a second time in order to create a totally new template. Try a third time, and a fourth, and more.
Managing a team of analysts in a Fortune 100 company, I saw this firsthand. We worked with one "template" of charts built in Excel, which we tried to update quarterly with new data – an exercise that should have been simpler than the one imagined above. But invariably links would break, and charts would have to be rescaled or even recreated. This simple exercise became so unwieldy that our team ended up making charts for every data set ad hoc, every time one needed to be analyzed.
That is why we created the notion of a Construct. Best said, a Construct is an "idea of a chart" whose creation is completely automated. Constructs rely on the fact that what all other products refer to as "measure/value" and "attribute" combinations – every possible combination – are built in. A construct is populated automatically when a user selects a dataset to view. Since the measure and attribute combinations are already built in and defined, users never make any selection; they see what they think of as a chart instantly.
Now imagine that Constructs are used to create entire templates. Further imagine that a user can add as many Constructs as they want to a template and create as many templates as they want. Since everything is predefined, there are no broken links and no redrawn charts – every chart renders immediately and accurately. Editing templates is as easy as adding and removing Constructs, so entire templates can be created and edited on the fly, with no worries about duplicated effort or copying and pasting charts across various files in an effort to rebuild presentations.
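The Construct idea can be sketched the same way. Again, every name here (CONSTRUCTS, render_template, the scenario data) is an illustrative assumption, not the actual product implementation: the measure/attribute combinations are defined once, and the only choice a user makes is which dataset to view.

```python
# Hypothetical sketch of Constructs: chart definitions are predefined once,
# so rendering a whole template on any dataset requires no per-chart selection.

CONSTRUCTS = [
    {"name": "Revenue by quarter", "type": "bar",
     "measure": "revenue", "attribute": "quarter"},
    {"name": "Headcount by quarter", "type": "line",
     "measure": "headcount", "attribute": "quarter"},
]

def render_template(constructs, dataset):
    """Populate every Construct from the chosen dataset, all at once."""
    return {
        c["name"]: {
            "type": c["type"],
            "series": [(row[c["attribute"]], row[c["measure"]]) for row in dataset],
        }
        for c in constructs
    }

scenario_a = [{"quarter": "Q1", "revenue": 120, "headcount": 40}]
scenario_b = [{"quarter": "Q1", "revenue": 95, "headcount": 38}]

# Switching scenarios re-renders the entire template instantly; adding a
# Construct to CONSTRUCTS adds that chart to every dataset automatically.
charts_a = render_template(CONSTRUCTS, scenario_a)
charts_b = render_template(CONSTRUCTS, scenario_b)
```

The design choice this illustrates: the chart definitions live apart from any one dataset, so there is nothing to re-link or re-populate when the data changes.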
The difference between a Chart and a Construct is vast. One implies lots of manual work to create dozens or even hundreds of charts. The other does not. Anyone doing data analysis needs to spend some time understanding and appreciating the difference. Building charts is a waste of time. Embracing Constructs means saving that time for actual data analysis.