Australians are more concerned than ever about their privacy, with 76% saying they experienced harm due to the recent spate of data breaches, according to the latest Australian Community Attitudes to Privacy Survey by the Office of the Australian Information Commissioner (OAIC).
It’s a wake-up call for businesses to take action to protect themselves, their workforces and customers against increasing threats to their data, as cybercriminals turn to emerging technologies, chief among them generative AI, to aid in executing breaches.
A popular tactic used by cybercriminals to gain access to sensitive data is the exploitation of human nature, with the human factor accounting for 74% of breaches analysed in Verizon’s 2023 Data Breach Investigations Report (DBIR).
Social engineering attacks, which manipulate people into divulging an organisation’s sensitive information, are growing largely due to pretexting: the practice of employing a fabricated story (or pretext) to trick a user into sharing such data.
Generative AI has the potential to make pretexting an even greater threat for three reasons: it can make pretexting appear more credible, enable such attacks to be scaled significantly and decrease the time it takes to execute them by systematically scanning an organisation for weaknesses.
The success of a pretexting attack hinges on its credibility; the more authentic it looks, the more likely a time-poor person is to click a link or respond to an email without investigating, believing it is legitimate.
Generative AI creates greater realism for pretexting attacks by harnessing increasingly sophisticated natural language processing capabilities, which allow criminals to mimic the writing styles of an organisation or individuals with ease.
Additionally, generative AI can translate pretexting attacks into different languages on the attacker’s behalf, allowing cybercriminals to cast wider nets and reach larger target demographics.
Cybercriminals are also using generative AI to systematically scan an organisation for weaknesses, enabling a single attacker to achieve results that once required a whole team of hackers.
Businesses can expect to see an increase in all forms of pretexting attacks: business email compromise incidents doubled in the last year and now represent over half of all social engineering attacks, according to the 2023 DBIR.
As generative AI technologies continue to develop and play a greater role in social engineering attacks, organisations with distributed or remote workforces face a challenge of growing importance: the creation and strict enforcement of human-centric cybersecurity best practices.
This dangerous union of pretexting attacks and generative AI technologies has made business leaders a key target for cybercriminals, as they hold the keys to an organisation’s most sensitive and lucrative data yet are often exempted from standard security protocols.
For instance, C-level executives are often granted exceptions in areas such as establishing and updating credentials, using their preferred devices for business, and policies around personal and professional device usage away from the office.
Even when organisations have invested in training and protecting their workforces, that investment is undermined when key leaders remain vulnerable to data breaches.
The solution starts with eliminating high-risk exceptions to security protocols and holding key executives to the same rigorous standards applied throughout an organisation’s network. Businesses have in recent times focused heavily on training and educating workforces about cybersecurity protocols, but these must be consistently applied to remain effective.
Human error and pretexting attacks will continue to play a role in the vulnerability of organisations to data breaches, but cybersecurity threats can be minimised by utilising the same tools wielded by cybercriminals, including generative AI.
In the same way that it can streamline hacking, generative AI can improve an organisation’s cybersecurity defences — it is only as good or nefarious as those who use it. Data breaches may start with people in many cases, but so does the solution.