Always test the test plan and make sure it actually tests the control or risk being assessed. And make sure the tester (especially when you are observing the tester rather than performing the test yourself) actually follows the test plan.
During a segregation of duties (SOD) test for an expense report approval system, an auditor was observing a client perform a test. The client was supposed to enter his user ID into the Approver field to demonstrate that he could not approve his own expense report.
When the client entered his user ID, an error message said, “ID Not Valid for Approval.” Case closed, right? After all, that’s the same error message documented in last year’s work papers, and the year before that. Not so fast, this astute auditor audibled.
“You typed your ID in all uppercase,” the auditor noted. “Enter your ID in all lowercase.” When the client did so, guess what happened? Grass grew over the SOD.
“Report approved,” the message on the screen said.
“I never saw that happen before,” the client said. “There must be some mistake!”
The auditor directed the client to create additional expense reports and repeat the same steps. Entering the ID in uppercase produced the same “ID Not Valid for Approval” error message, but so did entering the ID in lowercase.
A couple of days later, a sharp developer called a meeting and explained what had happened:
- The “ID Not Valid for Approval” message meant that the ID was not in the list of IDs that allowed approvals (the list contained specific manager IDs only). This was the first check performed during an approval.
- An “Approval Not Allowed” message was supposed to appear when someone tried to approve his own expense report. In other words, when the ID on the report matched the ID attempting the approval, the approval was not accepted. This was the second check performed during an approval.
- The expense report that incorrectly allowed the self-approval was originally rejected by the manager and revised by the submitter. For some reason, rejected expense reports were not subject to the SOD check where the approver’s ID was compared against the list of manager IDs. All subsequent tests proved this to be correct. (I never did hear the details of why the check failed and how they fixed it).
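The two checks and the gap the developer described can be sketched in a few lines. This is a hypothetical reconstruction, not the actual system’s code: the function, field names, and the “resubmitted” status flag are all invented, and the post never disclosed the full details of the defect.

```python
# Hypothetical sketch of the two-check approval flow described above.
# All names are invented; the real system's details were never disclosed.

MANAGER_IDS = {"MGR001", "MGR002"}  # approver allow-list (first check)

def approve(report, approver_id):
    # First check: the approver must be on the manager list.
    # Defect: rejected-and-resubmitted reports skipped this check.
    if report["status"] != "resubmitted":
        if approver_id not in MANAGER_IDS:
            return "ID Not Valid for Approval"
    # Second check: block self-approval. Note this comparison is
    # case-sensitive, so "jon111" does not match a stored "JON111".
    if approver_id == report["submitter_id"]:
        return "Approval Not Allowed"
    return "Report approved"

fresh = {"status": "submitted", "submitter_id": "JON111"}
resub = {"status": "resubmitted", "submitter_id": "JON111"}

approve(fresh, "JON111")  # "ID Not Valid for Approval" (first check fires)
approve(resub, "jon111")  # "Report approved": first check skipped, and the
                          # case-sensitive self-check misses the match
```

In this sketch, a fresh report is protected by the first check alone, which is why the test looked fine for years; only the combination of a resubmitted report and a case mismatch slips through both checks.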
A couple of observations regarding the failings of the auditor who performed the test the previous year:
- He incorrectly assumed that if the test results matched last year’s results, he was good to go (good to go home, maybe!).
- He did not test all scenarios (uppercase and lowercase IDs). This is especially important on UNIX systems, where case matters.
- He did not perform a positive test (using an ID that is expected to be able to approve expense reports). That test would have revealed the uppercase/lowercase difference. If you don’t know what works, how can you rely on a negative test result?
- He was probably in too much of a hurry to complete his audit rather than making sure he was doing the right thing.
- He provided assurance where none existed.
Just because last year’s test plan identified no potholes, do not assume you are on the right road.
Try to Trick the Auditor
On a related note, when observing someone else perform a test, make sure the tester does not enter false data. In the expense report example above, the user could enter an ID that is similar to his (entering JON101 instead of JON111). When dealing with user IDs and the like, always ensure you know what the real data is. If you cannot determine that, ask the tester to explain how you can gain confidence* that the data entered is correct.
* Asking “how do I gain confidence in X?” doesn’t challenge clients the way “can you prove X?” does. The former is much more friendly and less threatening. Try it!
Another trick that works in some systems is to enter a space after the ID, which is not easy to see if you’re not watching closely. Even when the ID entered is shown on the next screen, spaces can be invisible.
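Both tricks (a look-alike ID and a trailing space) work only because the system compares the raw keystrokes. A minimal sketch of the defensive fix, with an invented helper name, is to normalize the entered ID before comparing:

```python
# Hypothetical helper: normalizing an entered ID defeats the
# trailing-space and uppercase/lowercase tricks described above.

def normalize_id(raw):
    # Strip surrounding whitespace and fold case before any comparison.
    return raw.strip().upper()

normalize_id("jon111 ")  # -> "JON111"
normalize_id("JON111")   # -> "JON111"

# A comparison on normalized IDs treats "jon111 " and "JON111" as equal:
normalize_id("jon111 ") == normalize_id("JON111")  # True
```

Of course, as the anecdote shows, an auditor should not assume the system normalizes input; test both forms and watch the keyboard.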
It’s always best to do the testing yourself.
2 responses to “Plan to Test the Test Plan”
Very good one. BTW, it becomes hard if you are dealing with such an auditee, since the auditee in this case is smart enough to realize the points an auditor would pick on. I am sure you must have noticed this throughout your career, and it is especially applicable to internal auditing: as soon as the Terms of Reference is issued to the respective auditees, they start a “cleanup” project to get all their procedures and guidelines in place and begin creating evidence to show the auditors that they have been following those procedures.
I agree, and I’ve had some auditees do that. However, I believe 2 things work against them:
1) They are too busy to spend the time to do a thorough job, which is often why they don’t practice the controls in the first place.
2) Even when they do the cleanup project, they usually miss a few items because a) they’re in a hurry and don’t catch them all, and b) their usual sloppy execution produces errors or omissions even in the items they “cleaned up.”
Another good place to catch them is on the periodic reviews. If the frequency is weekly, monthly, or quarterly, it’s hard to fake these, even if they’re done on paper and signed and dated manually. Usually you can reference something else to see that the accounts, databases, or patches that appear on the review were not on the system at the time of review.
What’s everyone else’s experience?