PIG: Safer Data: Awareness and Cybersecurity



Team Members

Nailah Clinton and Lenz Bayas

Short Summary of your improvement idea

Our initiative seeks to challenge what constitutes awareness, and the action it prompts, within cybersecurity. Attackers pursuing data breaches in enterprise security systems are constantly evolving their techniques, and such attacks are on the rise. We see a need for companies to reconsider existing data-breach prevention thinking by challenging the paradigm upon which it is built.


What is the existing target protocol you are hoping to improve or enhance?

The existing target protocol is data breach prevention. Despite significant investments in network systems infrastructure, key stakeholders can overlook the ever-present human component of security within traditional cybersecurity frameworks.

What is the core idea or insight about potential improvement you want to pursue?

Our idea centers on the belief that there is much to be gained from a framework that views humans as both prone to error (i.e., liabilities) and as sources of safety and protection (i.e., assets) within workplace settings. While the former shows a bias toward preventing workers from making errors, the latter takes a more human-centered approach, focused on helping people anticipate and build on success.

What is your discovery methodology for investigating the current state of the target protocol?

We will focus part of our energies on historical data and failure-event analysis, paying special attention to the circumstances surrounding data breaches that result from social engineering. We will also explore situations that could be termed “near misses”: instances where a data breach almost occurred, but ultimately didn’t. In the latter case, the thinking is simple: “What went right?”
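To make the methodology concrete, here is a minimal sketch of how the partitioning might work: split historical incident records into breaches and near misses, then count the contributing factors recorded for each group. The record schema and field names (`outcome`, `factors`) are illustrative assumptions, not a real incident-reporting format.

```python
# Sketch of the discovery methodology: compare "what went wrong" in
# breaches against "what went right" in near misses.
# All data and field names below are hypothetical.
from collections import Counter

incidents = [
    {"outcome": "breach", "factors": ["phishing email opened", "no MFA"]},
    {"outcome": "near_miss", "factors": ["phishing email reported", "MFA enabled"]},
    {"outcome": "near_miss", "factors": ["suspicious caller challenged"]},
]

def factor_counts(records, outcome):
    """Count contributing factors across records with the given outcome."""
    counts = Counter()
    for record in records:
        if record["outcome"] == outcome:
            counts.update(record["factors"])
    return counts

what_went_wrong = factor_counts(incidents, "breach")
what_went_right = factor_counts(incidents, "near_miss")
```

The point of keeping both tallies side by side is that the near-miss column is usually the one traditional post-incident reviews throw away.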

In what form will you prototype your improvement idea?

Our prototype will comprise a draft proposal shared with experts for feedback. The proposal will take the form of action research, to be used in tandem with other approaches deemed complementary.

How will you field-test your improvement idea?

Our field testing will initially take the form of a workshop inviting participants from a broad cross-section of industries. As the work is centered on building awareness and promoting desired action, we’re curious to see how the protocol would be adapted within different industry settings.

Who will be able to judge the quality of your output?

We are targeting cybersecurity and risk-management experts in particular. While we realize and respect the cross-cutting nature of our work, we choose to emphasize enterprises as opposed to, say, law and regulatory bodies, among other actors.

How will you publish and evangelize your improvement idea?

We plan to share our findings in a variety of ways: (1) Submission to an international standards body, and (2) co-authoring and sharing a blog or white paper on our respective company websites.

What is the success vision for your idea?

By reconsidering both awareness and action with a focus on the human component, organizations will be better positioned to respond to the evolving social-engineering techniques used to commit data breaches in continuously changing work environments.


Hey Lenz and Nailah - putting my SoP alum hat on here. I like the angle you two are taking on this. I don’t know much about cybersecurity, but I imagine there’s a lot of overlap with safety concepts. For example, human variability is both a source of error and safety. When it comes to protocols, you can look at people as either attack surfaces or flexible joints that help keep the system together. Both are right.

I’d check out “Human error: models and management” (available on PMC)

and https://www.england.nhs.uk/signuptosafety/wp-content/uploads/sites/16/2015/10/safety-1-safety-2-whte-papr.pdf

and https://prolongedfieldcare.org/wp-content/uploads/2019/09/Transient_Reliability.pdf

Hope you find these useful!


Hi @Timber, these were both useful and fun reads. Thanks for sharing. What was most intriguing was the way one of the articles challenged our notion of systems, specifically organizational systems, and the way in which these systems have become more complex over time.

I’m tempted to lean towards looking at humans as a source of safety just because they’ve been looked at as a source of error traditionally. It’s certainly critical to keep both perspectives in mind. So, maybe the conversation shifts towards when to apply a framework focused on error versus focused on safety. Or better yet, is it possible (and needed) to apply a framework that simultaneously applies both levels of focus?

I’m going to play with that a bit more this week. Appreciate your thoughts on this - thanks for sharing.


Security here really sounds like an education program. Workshops are one form of education, and I’m wondering whether it is meaningful to cast the project in a more general educational framework. A title could even be “Get no phish by learning how to avoid phishing”.

I am curious how such a project associates workshops/education with protocols more explicitly. More specifically, it looks like the kind of project that offers “real education”, in the sense of learning to learn, not just knowledge piling.

In fact, the latter is quite a problem in security requirements (sometimes seen as part of requirements engineering in systems development). Many security practices look like rather passive checklists (do we use HTTPS everywhere, do we use non-deprecated signing algorithms, do we ask two different people to sign off, etc.). Here there seems to be an opportunity for a more active approach, perhaps by identifying the right questions to ask and building a protocol from there.
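One way to picture the shift from passive checklist to active protocol: keep the checklist items, but pair each one with a probing question that a “yes” answer does not close off. A minimal, purely hypothetical sketch (the item names and probe questions are invented for illustration):

```python
# Hypothetical contrast between a passive checklist and an active,
# question-driven protocol. Items and probes are illustrative only.
passive_checklist = {
    "https_everywhere": True,
    "non_deprecated_signing": True,
    "two_person_signoff": False,
}

active_probes = {
    "https_everywhere": "Which endpoints, if any, still accept plain HTTP, and why?",
    "non_deprecated_signing": "Who reviews our algorithm choices when standards change?",
    "two_person_signoff": "When was a sign-off last refused, and what happened next?",
}

def open_questions(checklist, probes):
    """Pair every item, passed or not, with its probe, so even a
    'yes' answer prompts reflection rather than ending it."""
    return [(item, passed, probes[item]) for item, passed in checklist.items()]

for item, passed, question in open_questions(passive_checklist, active_probes):
    status = "ok" if passed else "GAP"
    print(f"[{status}] {item}: {question}")
```

The design choice is deliberate: the probe fires on passing items too, which is where a pure checklist would have stopped asking.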

What are your thoughts on whichever of these comments you deem relevant?

Hi @_ic, education certainly seems like a direction we’re heading in. I like your phrasing of “real education.” Formal training programs, traditionally, are exactly what you describe: “knowledge piling.” The thinking goes something like this: if they know what to do, they’ll do it! It doesn’t help that formal training programs tend to be conducted separately from, and to ignore, the actual workplace itself.

We stand to make greater gains (and save money) by having people learn within the flow of their work - at their respective workplaces. I like to call it simply meeting people where they are. Such an approach also helps us to see the opportunities and barriers inherent in the workplace environment. We stand a better chance at proposing solutions and/or adjusting protocols that respond to these workplace realities.

I appreciate your point about the passive checklists. How much is too much? It would be a fun exercise to see to what extent those checklists are even being used within the flow of work. We’re continuing to think through this idea of self-awareness and how it can be wielded in a manner that simplifies our approach to security. I’m also looking forward to further exploring your point concerning requirements engineering in systems development. Our working hypothesis is that the concept of self-awareness can simplify how we approach cybersecurity, with implications for changing how we design workshops/education and improve workplace performance. So, yes indeed! A more active approach is what we’re advocating for here.

