A Tale of Two Conferences - Phil Sheriff

It is hard to imagine two more different conferences.

On the one hand, there was the second iteration of the Global Conference on Cyber Capacity Building, full of international organisations and cyber not-for-profits, with a global representation of government cybersecurity officials and implementers. The setting, Geneva, with its international bodies from the WTO to UEFA and its mountainous scenic backdrop, was spectacular; the conference crowd well attired and familiar; the content reassuringly predictable; and the catering abundant.

On the other hand, there was the UK Evaluation Society conference in Glasgow. No less spectacular in its own way, but with a crowd of evaluation experts and academics who were much less familiar, content that was expansive and original, not a suit in sight, and no free drinks.

And whilst both areas, cybersecurity capacity building and evaluation, are well known and well understood in their own right by seasoned professionals, a Venn diagram of the overlap between the two would show a very thin sliver. This represents both a risk, in terms of a lack of evidence-informed policy, and a lost opportunity, in terms of bringing robust evaluation practice to a relatively new policy area.

In an attempt to expand the tiny sliver of overlap into a more substantial lens, I presented the virtues of investing in impact evaluation to a curious audience in Geneva, and advertised the opportunities in cybersecurity evaluation to a similarly curious, but slightly more wary, crowd in Glasgow. Whilst the approaches came at the problem from two opposing directions, the argument was the same.

The problem space 

The problem, as such, is cybersecurity capacity building (CCB): a relatively new policy area where large sums of money are spent on a range of policy-driven interventions, but where evidence of effectiveness remains weak.

But gathering evidence and evaluating CCB remain challenging because opinions differ about whether this can be done effectively, and, if so, whether the required resources are proportionate to what can realistically be achieved.

For some, there is an underlying belief that the impacts of CCB cannot be accurately collected and assessed, owing to the (seemingly paradoxical) view that they are observable yet intangible. Current practice is not thought to be helpful; available data is not applicable; and the outcomes are too diffuse and too reliant on a large range of externalities and assumptions. Compounding matters further, the evaluation methodology, created in the development sector but now used in conjunction with research methodology from the medical sector, is thought to be unsuitable for cybersecurity. Finally, there is little obvious appetite to spend already stretched budgets on evaluation.

In the other camp, CCB is framed as a policy area that is highly technical, security focused, and dominated by large defence and consulting contractors. I have participated in a number of multi-disciplinary workshops where participants external to cybersecurity happily claim that, as they don't understand it, they cannot engage with it. In Glasgow, while the range of talks and presentations covered evaluation successes, challenges, and outcomes in development, education, policing, health and housing, there was not one mention of security, nor of cyber, let alone of cybersecurity. As a participant in the Glasgow conference, I found it hard not to conclude that, with some notable exceptions, the broader evaluation community is not engaging.

A change in mindset

I believe that a narrative shift is required so that we can build closer working practices. It may be that running randomised controlled trials (RCTs) in cybersecurity is unrealistic (and, as evidence mapping exercises I have conducted show, very infrequently attempted). It may be that external factors are so strong as to drown out the programme impact. And even though cyber is a data-driven environment, there is surprisingly little usable data that demonstrates impact. But similar arguments could have been made in education, policing, and development, where practitioners have nonetheless iterated through to evaluation solutions that enable better policy making and resource allocation.

Cyber is a technical security area that can be intimidating to the uninitiated. But it is surprising how few people in the field are actual technical experts, compared with those from a range of different backgrounds who still manage to be highly impactful without being technical. Whilst technical knowledge is relevant, it is actually a multi-disciplinary approach that is required to engage well in cyber capacity building and its evaluation.

The nascent level of evaluation, coupled with the novel (but genuinely surmountable) challenges in cybersecurity, is surprising, yet oddly encouraging. It gives both researchers and practitioners the space to understand, adapt and iterate theories of change, causal pathways, and complex system modelling.

Regardless of location, the obvious question going forward, one that requires the expertise and experience of both camps, should simply be: "What impact are we trying to achieve in CCB, and is it working?"
