4Q for quality assurance

To go beyond the usual, downtrodden ‘quality assurance’ epitome of dullness, herewith something worth considering: the assessment of controls, to establish their quality (‘qualifications’) on four successive characteristics [taking some liberties, and applying interpretation and stretching]:

  • Design. The usual suspect here. About how the control, or rather the set of them, should be able to function as a self-righting ship. The point being that you should (must?) evaluate the proposed / implemented set of controls to see whether self-righting mechanisms have been built in, with hopefully graceful degradation when they are not implemented (or maintained) correctly and fully; this should be visible in the design, or else you’re relying on a pipe dream.
  • Installation. Similar to implementation-the-old-way: having the CD in hand and loading / mounting it onto or into a ‘system’.
  • Operational. Specifies the conditions within which the control(s) are expected to operate; the procedural stuff ‘around’ the control.
  • Performance. Both in terms of defining the measuring sticks and the actual metrics on performance attached to the control(s). Here, the (to be established) sufficiency of monitoring and maintenance also comes ’round the corner.
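The four qualifications above can be sketched as a simple scoring structure. This is a hypothetical illustration only; the class name, the 0–5 scale, and the scores are mine, not from any standard:

```python
from dataclasses import dataclass, field

# The four qualification stages, in order of assessment.
STAGES = ["Design", "Installation", "Operational", "Performance"]

@dataclass
class ControlAssessment:
    """Illustrative quality scores (0-5) per qualification stage
    for one control, or one set of controls grouped towards a
    control objective."""
    name: str
    scores: dict = field(default_factory=dict)

    def weakest_stage(self) -> str:
        # A control set is only as qualified as its weakest stage:
        # a brilliant Design with neglected Performance still fails.
        return min(STAGES, key=lambda s: self.scores.get(s, 0))

cm = ControlAssessment(
    "change management",
    scores={"Design": 4, "Installation": 3,
            "Operational": 2, "Performance": 1},
)
print(cm.weakest_stage())  # the stage dragging the qualification down
```

The point of the `weakest_stage` sketch is merely that the four qualities do not average out: each stage gates the next.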

Note: where there’s ‘control(s)’, it goes without saying that all of the above applies to singleton controls as well as to sets of controls grouped towards achieving some (level of) control objective. All too often, the very hierarchy of controls is overlooked, or at best misconstrued to refer to organisational / procedural / technical divisions, whereas my view here is towards a hierarchy that is completely ad hoc.
Note: I have taken some liberty in all of this. The original piece centred around hardware / software, hence the Installation part being so explicit. But on the whole, things shouldn’t be different for any other type of control; or would they, in which case you miss the point.

And the above shouldn’t be done only at risk assessment time; in this case, the time when one establishes the efficacy and effectiveness of current controls, to get from gross to net, from inherent to residual risks, across all one can identify in the audit universe or risk universe, at various levels of detail. On the contrary: auditors in particular should, at the head of any audit, do the above evaluation within the scope of the audit and establish the four qualities. Indeed, focusing on Maturity, Competence, and Testing to establish them. Though maybe Competence (not only the competence of the administrator carrying out the control but, far more importantly, the competence of the control to keep the risk in check) is just that bit more crucial in the Design phase, with Maturity slightly outweighing the others in Installation and Operational, and Testing of course focusing on the Operational and Performance sides of things.
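The shifting emphasis of the three lenses (Maturity, Competence, Testing) across the four stages could be written down as a weight matrix. A minimal sketch, assuming purely illustrative weights; nothing here is a prescribed calibration:

```python
# Hypothetical weighting of the three assessment lenses across the
# four qualification stages, echoing the emphasis suggested above:
# Competence heaviest in Design, Maturity in Installation/Operational,
# Testing in Operational/Performance. Numbers are illustrative only.
WEIGHTS = {
    "Design":       {"Maturity": 0.2, "Competence": 0.6, "Testing": 0.2},
    "Installation": {"Maturity": 0.5, "Competence": 0.3, "Testing": 0.2},
    "Operational":  {"Maturity": 0.4, "Competence": 0.2, "Testing": 0.4},
    "Performance":  {"Maturity": 0.2, "Competence": 0.2, "Testing": 0.6},
}

def stage_score(stage: str, lens_scores: dict) -> float:
    """Weighted average of lens scores (0-5) for one stage."""
    w = WEIGHTS[stage]
    return sum(w[lens] * lens_scores[lens] for lens in w)

# E.g., strong Competence lifts a Design score more than strong Testing would.
design = stage_score("Design", {"Maturity": 3, "Competence": 5, "Testing": 2})
print(design)
```

The design choice here is just that the same three lenses are applied everywhere, with the weights, not the lenses, varying per stage.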

Intermission: The Dutch have the SIVA method for criteria design — which may have some bearing on the structure of controls along the above.

Now, after possibly having gotten into a jumble of elements above, a closing remark: wouldn’t it be possible to build better, more focused and stakeholder-aligned, assurance standards of the ISAE 3402 kind ..? Where Type I and Type II mix up the above, but clients may need only … well, hopefully, only the full picture.
But the Dutch (them again) could at once improve their hazy, inconsistent interpretation of the Design, Existence, and Effectiveness of control(s).
With Design often (mistaken, very much yes, but still) taken to mean whether there’s some design / overall structure of the control set, some top-down detailing structure and a bit of consistency, but with the self-righting part left to the overall blunder-application of PDCA throughout…;
Existence being the actual control having been written out, or more rarely whether the control is found in place when the auditor comes ’round;
Effectiveness… (hard to believe, but still almost always confirmed through clenched teeth) being ‘repeatedly established to Exist’, e.g., at surprise revisits. Complaints that Effectiveness is utterly determined by Design fall on stone-deaf ears, shouted down by mortal impostor-syndrome fears.

Back to the subject: can four separate opinions be generated for the above four qualities ..? Would some stakeholder benefit, and in what way? Should an audit be halted when, at some stage of the four, the audit opinion is less than very Satisfactory; i.e., when things go downhill moving from ideals and plans to nitty-gritty practice? Or should the scope of the audit be adapted, narrowed down on the fly, so the end opinion of In Control applies only to the subset of scope where such an opinion is justified?
But a lot still needs to be figured out. E.g., suppose (really? the following is hard fact on oh so many occasions) change management is so-so, or leaky at best; would it be useful to still look at systems integrity?

Help, much? Plus:
[Photo: An optimal mix of complexity with clarity; Valencia]


Maverisk / Étoiles du Nord