Increasingly Sophisticated Phishing Campaigns
At a previous employer, I received a morning email inviting me to view the read-ahead slides for a confirmed meeting later that afternoon regarding an ongoing reorg. It was from a consultancy we’d hired to run the reorg and directed me to the consultancy’s file-sharing site to view the slides. Thinking that a read-ahead was a welcome change of pace from the chaos of the reorg thus far, I clicked the link, only to discover that it was not a read-ahead at all, but a phishing simulation. My first reaction was to feel unfairly tricked – the simulator was playing with five aces, so to speak! – and to grumble that the only way to have spotted the simulation was to squint at the link and notice that a capital I had been substituted for a lowercase l (if those last few words took a couple of read-throughs to understand, that’s the point!). But the truth of the matter is that hackers are both unfair and tricky – so as phishing campaigns evolve, emails like the one I encountered will become the norm, and even security-minded employees will struggle to tell genuine emails from phishing.
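The character swap that fooled me can be made concrete with a short sketch. This is purely illustrative – the domain names below are hypothetical, and real mail-gateway tooling does far more thorough lookalike checking – but it shows how a single substituted glyph produces a "different" domain that reads identically to the eye:

```python
# Illustrative sketch: in many sans-serif fonts a capital "I" is visually
# identical to a lowercase "l", so a one-character swap yields a lookalike
# domain. All domain names here are hypothetical examples.

# Character pairs commonly confused at a glance (a small, non-exhaustive set).
LOOKALIKES = {"I": "l", "l": "I", "0": "O", "O": "0", "1": "l"}

def possible_spoof_of(candidate: str, trusted: str) -> bool:
    """Return True if candidate could be a glyph-swap spoof of trusted."""
    if len(candidate) != len(trusted) or candidate == trusted:
        return False
    # Every position must either match exactly or be a known lookalike swap.
    return all(
        c == t or LOOKALIKES.get(c) == t
        for c, t in zip(candidate, trusted)
    )

trusted_domain = "consultancy-files.com"   # hypothetical legitimate site
lookalike_domain = "consuItancy-files.com" # capital I swapped for lowercase l

print(possible_spoof_of(lookalike_domain, trusted_domain))  # True
print(lookalike_domain == trusted_domain)                   # False
```

To a reader skimming email, the two strings above are indistinguishable; only a character-by-character comparison reveals the swap – which is exactly the inspection burden users are being asked to carry.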
As a result of these trickier campaigns, driven in large part by generative AI, counter-phishing programs seem to have split into two camps: punish users for not being experts at detecting deception, or rely on phishers not to craft clever, emotional attacks. In the long term, neither is sustainable: tricky, punitive training programs damage the employee trust needed to run an agile business, while straightforward, collaborative programs that rely on users reporting “unusual” emails will miss the “normal”-seeming content that AI-generated phishing campaigns produce.
The Folly of Phishing Training
Dependency on phishing training fundamentally offloads responsibility for security from the security team to users: the security team provides training to identify deception, but the day-to-day work of actually doing so shifts to the everyday user. In doing so, organizations make a flawed wager: that their users – who have now had deception detection added to an ever-growing list of “other duties as assigned” – will perform that duty better than an entire organization of adversaries equipped with technical tools, often insider knowledge, and a single primary duty: tricking users into clicking links or otherwise disclosing information.
In the early days of phishing, this approach may have seemed reasonable: attacks were infrequent, often unsophisticated, and riddled with spelling errors and stylistic idiosyncrasies that made them easy to spot. Today, however, adversaries can apply generative AI trained on real communications from executives, exploit insider knowledge of real-world company goings-on, and use typosquatting to make their communications virtually indistinguishable from the real thing – particularly for professionals reading through hundreds of emails per day. A gut-reaction “see something, say something” security program is common sense, but expecting users to conduct a high-attention-to-detail inspection of every email they receive isn’t fair to them – it’s a bit like entering the company’s intramural football team in a professional league!
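Typosquatting works precisely because a single edit – a transposed, dropped, or doubled letter – leaves a name almost identical to the original. A minimal sketch, using Python’s standard-library similarity measure and hypothetical domain names, shows how close such variants score:

```python
# Sketch of why typosquatted domains are hard to catch by eye: one small
# edit keeps the string extremely similar to the legitimate name.
# All domains below are hypothetical examples.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

legit = "examplecorp.com"
squats = [
    "examplecrop.com",   # transposed letters
    "exampleccorp.com",  # doubled letter
    "examplecorp.co",    # dropped TLD letter
]

for s in squats:
    print(f"{s}: {similarity(legit, s):.2f}")
```

Each variant scores well above 0.9 against the legitimate name – differences a similarity metric barely registers, and a hurried reader registers even less.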
The Fair Solution: Protecting Your Users
Organizational leaders seem to face a dilemma: either leave their users at the mercy of tricky hackers whose phishing emails are far craftier than anything those users are used to seeing, or become the tricksters themselves and create a trust deficit with everyday users. But this is a false dichotomy, because the goal of a phishing campaign isn’t just to trick a user into believing an email is real – it’s to trick them into clicking a link that will either execute malicious code or induce them to disclose privileged information, such as a username and password. If those risks can be mitigated, it doesn’t matter whether the initial trick succeeded.
The answer, then, isn’t to train users to detect increasingly devious tricks – an error-prone pursuit with any number of negative side effects, including user distraction, trust deficits, and the potential for “false positives” in which legitimate emails aren’t actioned in a timely manner because they’re suspected to be malicious. Instead, cybersecurity leaders should pursue technical solutions that remove the actual risks of clicking a link in a phishing email by erecting hard controls around untrusted web content and making users aware when a site may not be all it seems.
Hardsec Browser Isolation: Protecting, not Penalizing, Users
Hardsec browser isolation (HBI), such as that provided by Garrison, is a security-first solution that removes the risk of phishing without creating a trust deficit with users or distracting them from their day-to-day business duties. HBI removes the risk of malicious code execution by redirecting all browsing activity that is not explicitly trusted by the security team to purpose-built appliances outside the network perimeter and providing the user a live, interactive audio/video stream of the site. It also provides the user with a separate browser window letting them know they’re viewing risky content and highlighting password fields to increase awareness that the presented content may not be all it seems.
Garrison offers a reassuringly affordable technical solution to phishing that is more robust than user training programs without having to answer the hard HR question of how to handle users who consistently fail or ignore training. Rather than relying on users to conduct detailed analysis of emails and link contents, Garrison ULTRA’s cloud-hosted HBI allows users to move at the speed of business while providing a technical control against malicious code execution and real-time awareness if a site that’s being displayed isn’t trusted.
Contact us to learn more about how Garrison can help you protect your users from phishing instead of punishing them for being tricked.