In either case, it's obvious that analytics isn't very important to your management.
That's one of the points I made in my post, 10 Signs Mgmt Doesn't Really Support Analytics.
So far, I have encountered very few audit departments that track meaningful metrics about their analytics.
Counting the number of projects that include analytics isn’t enough.
Knowing you did more or fewer projects this year than last is nice, but you can't do much with it. The whole point of metrics is knowing your current situation and determining how to improve it.
Metrics are needed to determine:
- What types of analytics people ARE doing, and who is doing them
- What types of analytics people are NOT doing, and who is NOT doing them
- Which factors impact the amount of analytics performed and the success of them
- Whether the analytics program is growing, maintaining, or fading
So here are some metrics to consider (in no particular order). Keep in mind that you can combine some of the categories to identify interesting patterns.
- Type of project (SOX, Operations, Client request, etc..)
- Size of analytic project (in hours, number of tests performed, number of analytic procedures, etc.)
- Frequency/Automation (single-engagement vs. scheduled/continuous)
- System, Department, Area (headquarters, field offices, domestic and foreign locations)
- Risk Level
- Types of analytic procedures performed (e.g., duplicates, joins, compares)
- Number of source files/tables used
- Number of files/tables obtained without assistance
- Software used (Excel, Access, SQL, ACL, IDEA, Power BI, Tableau, etc.)
- Number of audit issues identified, by color/severity
- Auditor name
- Dollars/hours saved
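One lightweight way to start capturing these categories is a single structured record per analytic project. The sketch below is only one possible shape; the field names are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AnalyticProjectMetrics:
    # Hypothetical fields mirroring the categories listed above
    project_type: str          # e.g., "SOX", "Operations", "Client request"
    procedures_performed: int  # count of analytic procedures
    area: str                  # system/department/area
    risk_level: str            # e.g., "High", "Medium", "Low"
    procedure_types: list = field(default_factory=list)  # e.g., ["duplicates", "joins"]
    source_files: int = 0
    files_obtained_unassisted: int = 0
    software: list = field(default_factory=list)         # e.g., ["Excel", "ACL"]
    issues_found: int = 0
    auditor: str = ""

# Example record for one project
p = AnalyticProjectMetrics("SOX", 42, "Headquarters", "High",
                           ["duplicates", "joins"], 3, 2, ["ACL"], 5, "J. Smith")
```

Even a spreadsheet with these columns works; the point is capturing the same fields for every project so the patterns below become visible.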
Now let’s look at each metric and how it might be used or interpreted.
Type of project – Which project types are you doing the most or least analytics for? For which project type do you tend to find the most issues?
Size of analytic project – This reflects the quantity of analytic work, not the overall audit engagement size.
For example, if your metric is the number of procedures performed, you could label a project with 25 or fewer analytic procedures* as Small, 26-100 as Medium, and over 100 as Large. I'd aim for mostly Medium and Large projects.
* Each procedure performed, such as verify data/format, remove duplicates, join files, perform calculations, transform/extract data, etc. For example, if you review 5 files for duplicates, that’s 5 procedures. If you clean 10 fields (remove leading spaces), that’s 10 procedures, etc.
As you can see, you can easily get to 25 procedures pretty quickly.
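The size thresholds above are easy to apply mechanically. Here's a minimal sketch using the cutoffs from the text (the thresholds themselves are the only inputs; adjust them to your own department):

```python
def project_size(procedure_count: int) -> str:
    """Classify an analytic project by number of procedures performed.
    Thresholds from the text: <=25 Small, 26-100 Medium, >100 Large."""
    if procedure_count <= 25:
        return "Small"
    elif procedure_count <= 100:
        return "Medium"
    return "Large"

# Reviewing 5 files for duplicates (5 procedures) plus cleaning
# 10 fields (10 procedures) already counts 15 toward the total.
print(project_size(15))   # Small
print(project_size(60))   # Medium
print(project_size(120))  # Large
```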
Frequency/Automation – Are most of your analytics for just single-audit engagements? Are most of your analytics manual (filtering, applying formulas, or menu-driven), or do you have some automated continuous auditing/monitoring routines that run daily, weekly, monthly, quarterly, or annually? What percentage of your analytics belong to each category, and how is that changing year over year?
System, Department, Area – If it makes sense, define your categories, and track the number of projects by categories so you can see what gets the most analytic focus, and what doesn’t get as much or any.
Risk Level – At what level of risk are you spending the majority of your analytic hours? At what risk level do you tend to find the most issues? What types of procedures are you doing for each risk level, and do higher-risk projects have a higher number of procedures?
Types of analytic procedures – The first few years, you might want to track how many of each analytic procedure are performed in each engagement. Again, if you review 5 files for duplicates, that's 5 procedures, and so on.
While this gets tedious, it can really reap benefits. You can quickly identify the percentage of engagements in which the basic profiling procedures are performed. Almost every file in every engagement should:
- Have the field values verified for correct format, correct data (e.g., no letters in numeric fields, no unprintable characters, no blanks or zeros where inappropriate).
- Be checked for duplicates
- Be classified by categories (e.g., what percentage of Windows servers vs. Unix servers)
- Have outliers identified, and perhaps the top and bottom 10% of transactions
and so on.
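To make the profiling steps concrete, here's a minimal sketch of those four checks on a small set of records. The data and field names are entirely hypothetical; in practice you'd run this against your actual extracts.

```python
from collections import Counter

# Hypothetical server inventory records (field names made up for illustration)
rows = [
    {"server_id": "S1", "os": "Windows", "cpu_count": 4},
    {"server_id": "S2", "os": "Unix",    "cpu_count": 8},
    {"server_id": "S2", "os": "Unix",    "cpu_count": 8},   # duplicate record
    {"server_id": "S3", "os": "",        "cpu_count": 2},   # blank field
]

# 1. Verify field values: flag blanks and non-numeric CPU counts
bad_values = [r for r in rows if not r["os"] or not isinstance(r["cpu_count"], int)]

# 2. Check for duplicates on the key field
duplicates = [k for k, n in Counter(r["server_id"] for r in rows).items() if n > 1]

# 3. Classify by category (e.g., Windows vs. Unix servers)
by_os = Counter(r["os"] for r in rows if r["os"])

# 4. Look at the extremes (the top and bottom of the range)
counts = sorted(r["cpu_count"] for r in rows)
lowest, highest = counts[0], counts[-1]

print(bad_values, duplicates, by_os, lowest, highest)
```

Each check above is one "procedure" under the counting convention described earlier, which is another reason the per-file procedure count climbs quickly.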
By reviewing this metric the first couple of years, I realized that most auditors were not profiling data. I also realized that most auditors using Excel were not using VLOOKUP, and no one was using Excel's Fuzzy Lookup add-in.
Most auditors weren’t doing these procedures because they didn’t know how. So I developed and delivered training to the audit team to cover these gaps.
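The VLOOKUP gap is worth closing because the underlying operation, an exact-match lookup against a reference table, appears in almost every engagement. A minimal illustration of the same operation outside Excel, with made-up data:

```python
# Hypothetical: enrich payment records with vendor names --
# the same exact-match lookup VLOOKUP performs in Excel.
vendors = {"V100": "Acme Corp", "V200": "Globex"}

payments = [
    {"vendor_id": "V100", "amount": 250.0},
    {"vendor_id": "V999", "amount": 75.0},  # no match -> flag for review
]

for pay in payments:
    pay["vendor_name"] = vendors.get(pay["vendor_id"], "UNMATCHED")

print(payments)
```

The unmatched record is often the audit finding: a payment to a vendor that isn't in the master file.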
Number of source files/tables used – If the number of source files per analytic project is 1 or 2, you probably need to think more outside the box and find other files to combine or compare against each other.
Mack did one analytic project where he used 13 separate files (see Server Audit for the Dauntless).
Number of files/tables obtained without assistance – This metric shows the maturity of your analytic efforts. Obtaining more files through your own manual or automated queries indicates either that you are becoming more adept at using the technologies yourselves to get the data, or that you had someone automate a data dump on a scheduled basis, or both. All of these are great.
But if most of your projects depend on asking others for data (then waiting, getting the wrong data, re-requesting the data, and waiting some more), you’re headed for the pit.
You can’t do continuous auditing or monitoring in an automated fashion if you have to manually request the data.
Software used – Usually Excel leads the first couple years, and then as your program matures, other software is used more than Excel. Since you can usually do more complex and automated analysis in software other than Excel, this can be a good gauge of your maturity level.
I'm not saying that if Excel is your main tool, you're not doing "real" analytics. But Excel currently lacks the capabilities of software like ACL or Power BI for complicated analysis and certain operations. In addition, Excel does not lend itself easily to automation.
Number of audit issues – This will help you track whether you are finding more issues as you get more sophisticated in your analytics. Sometimes a mature control environment reduces the number of issues, but I would always challenge auditors in this situation to dig even deeper. Everyone makes mistakes, and software/hardware fail.
You should also note, year over year, whether more issues are found with a particular software tool or analytic procedure. In other words, do you find more issues with Excel or ACL? Do you find more issues by analyzing duplicates or outliers?
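If you log the tool and procedure alongside each issue, those year-over-year comparisons fall out of a simple tally. A minimal sketch with hypothetical data:

```python
from collections import Counter

# Hypothetical issue log: one (tool, procedure) pair per issue found
issues = [
    ("Excel", "duplicates"), ("ACL", "outliers"),
    ("ACL", "duplicates"), ("ACL", "outliers"),
]

by_tool = Counter(tool for tool, _ in issues)
by_procedure = Counter(proc for _, proc in issues)

print(by_tool.most_common())       # which tool surfaces the most issues
print(by_procedure.most_common())  # which procedure surfaces the most issues
```

Run the same tally per year and the trend (more issues from ACL than Excel, more from outliers than duplicates, etc.) becomes obvious.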
Auditor name – Which auditors do the most analytics and find the most issues? Which auditors never do analytics? Which auditors never use ACL? Which auditors do the most automation? The most complicated procedures?
Dollars/hours saved – Be careful with this one, as the numbers are usually highly subjective. Dollars are often soft dollars, and determining the right number of hours or dollars can take more time than this metric is worth.
Some departments use this metric, but fail to take into account all the time to develop, test, and validate the analytic project. What about software licensing, updates, maintenance, troubleshooting, etc., that analytics require?
CAUTION: Keep in mind that correlation does not equal causation; as with any other analysis and metrics, be careful about jumping to conclusions. Other factors might be influencing your stats.
For some reason, audit departments shy away from analytic metrics. But as with anything else, improvement requires tracking progress (or the lack thereof), determining the reasons behind it, and making adjustments.
If your department keeps analytic metrics, which ones does it keep? If none, why not?