- January 12, 2020
- Marnix Jansen
Despite having developed numerous Financial Models in Excel myself over the years, I am still opposed to using it for that purpose, a bit like ex-smokers being the most fanatical anti-smokers: they know the product inside out and have experienced its downsides first hand. The same applies to me and Excel. Excel and VBA hold no secrets for me; I know how to set up a Financial Model that is both robust and flexible, and how to match its output to the information needs of the end users. But because of that same experience, I am also regularly confronted with the shortcomings of Excel.
Don't get me wrong; I think Excel is a great program, but only for one specific purpose: performing a one-off analysis. An analysis whose results you use yourself or share with a customer, after which you delete the model because you will never use it again. For that, Excel is a wonderfully flexible tool that can quickly answer all kinds of questions. But for developing and maintaining a Financial Model, Excel is the wrong tool for the job. Below are the limitations of Excel that I have encountered in practice and that led to the statement in the title of this blog post.
1. Larger models
Excel cannot handle larger amounts of data. And I'm not just talking about the fact that Excel has only about one million rows (1,048,576, to be exact), although that alone is a big problem. Suppose one of the tables in your model is the basis for further analysis and output: you analyze it with a pivot table, build charts on top of that, and use those for analysis and communication. And suppose that table grows by 100,000 rows every month, which is entirely realistic in many organizations. How do you process the 11th and 12th months? Do you store them in a new worksheet next to it? And what happens to your pivot table, and to any formulas or VBA code that reference the table? That, of course, becomes an unmitigated disaster!
Or, as this is usually solved, do you store all the detail information in a separate Excel file each month and keep only monthly totals in an overall model, so that you can still perform trend analysis? But what do you do when you want to see the trend over the last two years for one specific slice, say one product, region or department? Then you have to open every monthly Excel file one by one, filter out that information, and save it in yet another ad-hoc analysis file. Good luck...
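For contrast, a tool built for tabular data handles that exact question without any restructuring. A minimal sketch in Python with pandas (the column names and the inline data are made up for illustration; in practice the monthly exports would be CSV files read with pd.read_csv):

```python
from io import StringIO
import pandas as pd

# Two hypothetical monthly exports, stood in for by inline CSV text.
jan = "month,product,revenue\n2019-01,A,100\n2019-01,B,250\n"
feb = "month,product,revenue\n2019-02,A,120\n2019-02,B,240\n"

# Stack all months into one table: no 1,048,576-row ceiling,
# no copy-pasting detail sheets into a "total model".
sales = pd.concat([pd.read_csv(StringIO(m)) for m in (jan, feb)],
                  ignore_index=True)

# The ad-hoc question from above: the trend for one specific product.
trend = sales.loc[sales["product"] == "A", ["month", "revenue"]]
print(trend.to_string(index=False))
```

Adding month 13 or month 25 is just one more file in the list; the filter and every downstream calculation stay untouched.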
But in addition to the one-million-row limitation, there is an even bigger problem in this area: well before those million rows are reached, the model becomes slow. Many Excel users will recognize this (a deep sigh, followed by "how slow Excel is again" or "Nooooo, Excel is taking forever to open"). The cause is that Excel keeps the entire model in RAM and, in order to work the way it does, must recalculate it after every change in the model, however small. In small models you don't notice this at all, but as your model grows it slowly becomes a huge limitation.
2. Only two dimensions
Excel has only two dimensions: rows and columns. In very many financial models this is a major shortcoming, for example whenever figures vary by product, region and month at the same time.
The examples are endless. In such situations you need to link information from one table to another, and in Excel that can really only be done with VLOOKUP (or one of the many ways to mimic it). In a one-off ad-hoc analysis that is no problem at all, but in a Financial Model that needs to be maintained, it is a real problem.
Without pretending to be exhaustive: these are major shortcomings that I have run into time and again when linking tables in Excel.
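For comparison, the table links that VLOOKUP approximates are a first-class, checkable operation in relational tools. A sketch in Python with pandas (table and column names are invented for illustration):

```python
import pandas as pd

# Two hypothetical tables that would live on separate worksheets in Excel.
orders = pd.DataFrame({"order_id": [1, 2, 3],
                       "product_id": ["P1", "P2", "P1"],
                       "amount": [100, 250, 80]})
products = pd.DataFrame({"product_id": ["P1", "P2"],
                         "category": ["Hardware", "Software"]})

# One explicit join instead of a column full of VLOOKUP formulas.
# validate="many_to_one" makes the join fail loudly if the lookup
# table ever contains a duplicate key, instead of silently matching
# the first hit the way VLOOKUP does.
linked = orders.merge(products, on="product_id",
                      how="left", validate="many_to_one")
print(linked)
```

The join is written once and recomputed on every run, so inserting rows or reordering columns in either table cannot break it.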
3. Manual work difficult to automate
In any Financial Model, roughly three things happen: 1) data is entered into the model, 2) operations are performed on that data, and 3) the output of those operations is displayed somewhere. In practice, much of this is still done manually. That has three drawbacks:
3.a. Time-consuming
With manual processing, every time new data enters the model, weekly or monthly, many actions are needed. The data is received, usually by e-mail from an (internal) supplier, or extracted from a system by exporting tables to CSV or Excel. That data is then manually copied and pasted into the model. Next, the operations on that data must be adapted to the added data: the ranges of pivot tables must be extended, formulas must be repointed to the newly added rows, and so on. And all charts and other forms of output must be adjusted accordingly. Financials often have multiple models under their care, each based on multiple sources, so they have to repeat all of this in several models every period. For the whole team, this takes up an enormous amount of time each month.
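Those recurring steps (receive, paste, extend ranges, repoint formulas, redraw charts) collapse into one repeatable function once the model lives in code. A hypothetical sketch with pandas; the file pattern, column names and the cost summary are all assumptions:

```python
from pathlib import Path

import pandas as pd


def refresh_model(source_dir: str) -> pd.DataFrame:
    """Load every monthly export and rebuild the summary output.

    Processing a new month means dropping one more CSV into
    source_dir; nothing else in the model needs to be touched.
    """
    files = sorted(Path(source_dir).glob("export_*.csv"))
    data = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
    # The "pivot table" is recomputed from scratch on every run,
    # so its range can never be out of date.
    return data.pivot_table(index="month", columns="department",
                            values="costs", aggfunc="sum")
```

Run it once per period (or on a schedule) and steps 1 through 3 happen identically every time, for every model that uses it.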
3.b. Error prone
And so much manual work, in turn, carries a huge risk of error. As many Financials will recognize, one tiny mistake somewhere can lead to a totally wrong outcome. And a totally wrong outcome is not even the biggest issue; at least it stands out. No, if an error makes the outcome 10% higher or lower, nobody notices, and incorrect information is used or communicated. Several sources suggest that around 90% of all Excel spreadsheets contain errors, and from my own experience I readily believe it!
3.c. Difficult to trace
In addition, after all that manual work it becomes difficult to reconstruct what was done with the input to arrive at the output. When you look at someone else's model, or at a model you created yourself a while ago, you want to know where the source data entered the model and where it came from, and which operations were performed on it to produce the output. In a Financial Model in Excel this is usually impossible. Many cells contain "hard values" without any indication of their source; there are links to another model that only the administrator can access, or that no longer exists; there are calculations based on cells in a pivot table (avoid that in your models, by the way: a pivot table is output, so base calculations on the source table and never on a pivot table); and so on. And documentation of most models is generally absent or poor, or not updated after the model has been modified.
3.d. Automating with VBA
In addition to the manual method, I also see Excel models being automated with the programming language available for that purpose: Visual Basic for Applications, or VBA. What I almost always encounter is that someone with great passion and dedication has written VBA code that, at that moment and in the model's current state, does what it is supposed to do. But as soon as the model changes a bit, the code crashes when it is run. Or, worse, the code runs but no longer does what it is supposed to, and "undo" cannot be clicked, because that functionality does not work after running VBA. More importantly, understanding someone else's VBA code is a particularly difficult task. As I mentioned above, in an Excel model it is often hard to figure out exactly what is happening; if VBA code is layered on top, it becomes even more of a challenge.
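None of this is an argument against automation itself; the problem is that a typical VBA macro is hard to test, diff, or review. In a general-purpose language the same logic becomes a plain function that a unit test can pin down on every change. A small sketch; the depreciation rule is an invented example, not a claim about any particular model:

```python
def straight_line_depreciation(cost: float, salvage: float,
                               years: int) -> list[float]:
    """Yearly depreciation charges, spread evenly over the asset's life."""
    if years <= 0:
        raise ValueError("years must be positive")
    charge = (cost - salvage) / years
    return [charge] * years


# A check that runs on every change, instead of discovering a broken
# macro at month-end when "undo" no longer works.
assert straight_line_depreciation(1000.0, 100.0, 3) == [300.0, 300.0, 300.0]
```

Put functions like this under version control with their tests, and "does the model still do what it is supposed to?" becomes a question the computer answers before anyone relies on the output.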
4. It can be done better
Most Financials who have worked a lot with Excel will recognize this picture. Perhaps they see other disadvantages, or would have categorized these differently, but there will always be recognition. And it can be done better. Much better.
For all Financials who are struggling with their Financial Models in Excel and have become curious: please contact me! I am happy to think along with you, and with my knowledge and experience there is a good chance I can help you.