
The Human Element: Threat Modelling Your Internal Processes

Your threat model probably stops at the code level. But the most exploitable vulnerability in any organisation is its people and the processes they follow.

By Priya Nair
threat modeling, social engineering, insider risk, security culture, process security

Whenever threat modelling comes up, the conversation drifts toward data flow diagrams, SQL injection, and misconfigured S3 buckets. We treat the system like plumbing: a series of pipes and valves to inspect.

But there's one component that's notoriously difficult to patch, frequently bypasses every firewall you own, and can be talked into handing over the keys to the kingdom.

People.

If your threat model stops at the code level, you're protecting the door while leaving the window wide open. The same rigour you apply to your microservices architecture? It needs to extend to your internal processes: how people communicate, how they approve things, how they handle sensitive actions.


Social engineering: the original hack

From an attacker's perspective, tricking a person is orders of magnitude easier than breaking AES-256 encryption. Social engineering is basically a buffer overflow for the human brain. It uses urgency, authority, or empathy to bypass the logical checks we think we have in place.

Some common patterns that keep working, year after year:

  • Pretexting. An attacker calls up posing as IT support, or sends a panicked email "from" an executive who needs a "quick bypass" to make a deadline. It works because people want to be helpful. That's the exploit.
  • MFA fatigue. Bombard someone with login approval notifications until they tap "approve" just to make it stop. Exhaustion beats security awareness training every time.
  • Shadow IT. When official processes are too slow or too rigid, employees find workarounds. Sensitive data ends up in personal Dropbox accounts or WhatsApp threads. Not out of malice, just because someone needed to get their job done and the sanctioned path was too painful.

The fix isn't more training slides telling people "don't get tricked." The fix is process design. Build workflows that require multi-party authorisation for sensitive actions. If a password reset or a large wire transfer needs two approvals from two different departments, a single social engineering success doesn't cascade into a total system failure.
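To make the dual-approval idea concrete, here's a minimal Python sketch. Everything in it (the `SensitiveAction` class, the department rule) is hypothetical and invented for illustration; the point is the invariant it enforces: two sign-offs, from two different departments, and the requester can never approve their own request.

```python
from dataclasses import dataclass, field


@dataclass
class SensitiveAction:
    """Hypothetical dual-approval gate for a sensitive request."""

    description: str
    requested_by: str
    approvals: dict[str, str] = field(default_factory=dict)  # department -> approver

    def approve(self, approver: str, department: str) -> None:
        if approver == self.requested_by:
            raise PermissionError("requesters cannot approve their own actions")
        if department in self.approvals:
            raise PermissionError(f"{department} has already signed off")
        self.approvals[department] = approver

    def is_authorised(self) -> bool:
        # Two approvals, each from a different department, neither the requester.
        return len(self.approvals) >= 2


# Usage: phishing a single approver is no longer enough to move money.
transfer = SensitiveAction("wire £50,000 to new supplier", requested_by="alice")
transfer.approve("bob", "finance")
transfer.approve("carol", "security")
assert transfer.is_authorised()
```

Notice what this buys you: the attacker now has to compromise two people in two departments in the same window, which turns a one-email attack into a coordinated campaign.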


The threat that's already inside

"Insider threat" conjures images of a disgruntled employee sneaking out trade secrets on a USB stick. And sure, that happens. But the far more common, and often more damaging, threat is the accidental insider.

The developer who commits an API key to a public GitHub repo. The HR manager who sends the salary spreadsheet to the "All Staff" distribution list instead of her manager. Nobody meant any harm. The damage is the same.

Here's a question worth asking your team: What's the most sensitive thing a single person can do without a second set of eyes on it?

  • Can one admin delete the production database?
  • Can one developer push directly to main?
  • Can one user export the entire customer list?

If the answer to any of those is yes, you've got a process vulnerability. Not a hypothetical one. A real one, waiting for someone to have a bad morning.
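The "push directly to main" question is one you can answer mechanically rather than by asking around. Here's a sketch using GitHub's branch-protection endpoint from the v3 REST API; the repo name and token handling are placeholders you'd swap for your own:

```python
import os

import requests


def main_requires_review(owner: str, repo: str) -> bool:
    """Check whether `main` demands at least one PR review before merging.

    Assumes a GITHUB_TOKEN env var holding a token with read access.
    """
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/branches/main/protection",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=10,
    )
    if resp.status_code == 404:
        return False  # no protection rules on main at all
    resp.raise_for_status()
    reviews = resp.json().get("required_pull_request_reviews") or {}
    return reviews.get("required_approving_review_count", 0) >= 1
```

Run it across every repo in your org and you have a list of process vulnerabilities by lunchtime.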


Designing for human failure

The goal here isn't suspicion. You're aiming for shared accountability: processes that assume a human will eventually make a mistake, because they will.

Require two keys for the important stuff

Call it the "two-person rule" or whatever you want. The point is: critical infrastructure changes (production deployments, DNS modifications, database migrations) should require a second sign-off. It protects against both malicious intent and fat-finger errors, which honestly are the more frequent cause of outages anyway.
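One lightweight way to enforce this on deployments: refuse to ship a commit unless its message carries Signed-off-by trailers from two different people. The trailer convention itself is real (git's --signoff flag adds it); the enforcement script below is a hypothetical sketch, and in practice you'd also want to verify the sign-offs cryptographically rather than trusting the text.

```python
import re
import subprocess
import sys


def signoffs(commit: str) -> set[str]:
    """Collect the distinct Signed-off-by trailers on a commit."""
    message = subprocess.run(
        ["git", "log", "-1", "--format=%B", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(re.findall(r"^Signed-off-by: (.+)$", message, re.MULTILINE))


if __name__ == "__main__":
    commit = sys.argv[1] if len(sys.argv) > 1 else "HEAD"
    who = signoffs(commit)
    if len(who) < 2:
        sys.exit(f"refusing to deploy {commit}: need two sign-offs, got {sorted(who)}")
    print(f"ok: {commit} signed off by {sorted(who)}")
```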

Verify out-of-band requests

Attackers thrive in the gaps between departments. When someone gets an urgent Slack message asking for a file or a credential, there needs to be a habit of verifying through a separate channel. A quick phone call. A walk over to someone's desk. Something that can't be spoofed as easily as a chat message.

Make honesty the rational choice

If an employee clicks a phishing link and reports it immediately, that's a good outcome. Seriously. You want people surfacing incidents fast, not hiding them because they're afraid of getting blamed.

If your culture punishes mistakes, people will cover them up. And that gives the attacker more dwell time in your system, which is the actual danger.


Code security vs. process security: more alike than you'd think

| Security Layer | Technical Example   | Human/Process Equivalent                    |
|----------------|---------------------|---------------------------------------------|
| Authentication | OAuth / Biometrics  | Visual ID / Verified callbacks              |
| Redundancy     | RAID / Multi-region | Dual-approval workflows                     |
| Logging        | Syslog / CloudWatch | Audit trails of manual changes              |
| Sanitisation   | Input validation    | Awareness training and phishing simulations |

The patterns map surprisingly well. We've spent decades hardening our technical systems with redundancy, logging, and input validation. The same principles apply to how humans interact with those systems. We've just been slower to implement them.
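The "audit trails of manual changes" row is the easiest to retrofit. Here's one sketch: wrap any script that touches production in a decorator that records who did what, and when. The record format below is an assumption, not a standard; adapt the fields to whatever your log pipeline or SIEM expects.

```python
import functools
import getpass
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("manual_changes")


def audited(action: str):
    """Emit a structured audit record every time a manual change runs."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            record = {
                "action": action,
                "operator": getpass.getuser(),
                "at": datetime.now(timezone.utc).isoformat(),
                "args": repr((args, kwargs)),
            }
            log.info(json.dumps(record))
            return fn(*args, **kwargs)
        return inner
    return wrap


@audited("dns.update_record")
def update_dns_record(zone: str, name: str, value: str) -> None:
    ...  # the actual change goes here
```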


One thing to check this week

Go look at your joiners and leavers process. Specifically the leavers part.

Does a departing employee still have access to that one legacy SaaS tool nobody remembers? What about the internal Ollama instance? The shared password manager vault?

Offboarding is a security process, not an HR administrative task. Treat it like one.
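If you want to make that check repeatable rather than a one-off scramble, one hypothetical approach is to keep an inventory of every system that holds accounts, with a callable per system that lists them, and sweep it whenever someone leaves. The inventory entries and stub data below are invented for illustration; in reality each callable would hit that system's admin API.

```python
from typing import Callable

# Stand-ins for real API calls into each system's user directory.
ACCESS_INVENTORY: dict[str, Callable[[], set[str]]] = {
    "legacy-saas-tool":  lambda: {"j.doe", "old.contractor"},
    "internal-ollama":   lambda: {"j.doe"},
    "password-manager":  lambda: {"j.doe", "ops-shared"},
}


def offboarding_sweep(leaver: str) -> list[str]:
    """Return every system where the leaver still holds an account."""
    return [
        system
        for system, list_accounts in ACCESS_INVENTORY.items()
        if leaver in list_accounts()
    ]


if __name__ == "__main__":
    for system in offboarding_sweep("j.doe"):
        print(f"STILL HAS ACCESS: {system}")
```

The hard part isn't the script. It's keeping the inventory honest, which is exactly the kind of process question this whole post is about.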


The next time you sit down for a threat modelling session, don't stop at the API endpoints. Look at the people holding the devices. Ask where the friction is, where shortcuts are being taken, and figure out how to make the secure path the easy path.

Because if the secure way is also the harder way, people will find a workaround. They always do.