Cybercrime is evolving at an unprecedented pace, and AI-driven attacks are now at the forefront of this shift. In 2024, more than 50% of Australian businesses reported experiencing a cyberattack, with 36% of these being AI-generated – a higher rate than in either the US or the UK. The surge in AI-powered phishing scams, deepfake fraud, and automated hacking has pushed cybersecurity concerns to the top of the agenda for Australian organisations.
With 91% of businesses planning to increase cybersecurity spending by 2025, the challenge now is not just to defend against AI-driven cybercrime, but to harness AI as a defensive tool.
The growing threat of AI in cybercrime
Cybercriminals are using AI to launch more sophisticated, scalable, and targeted attacks. Key trends include:
- AI-generated phishing – AI is being used to craft highly convincing phishing emails, with 61% of all phishing attempts in Australia now AI-generated. The country has also seen a 479% increase in phishing content hosted domestically, making Australia one of the top ten sources of phishing scams globally.
- Deepfake fraud – Attackers are cloning voices and creating fake videos to manipulate employees into transferring funds or disclosing sensitive data. These AI-generated impersonations are becoming increasingly difficult to detect.
- Automated malware attacks – AI-powered malware can evade detection by adapting its behaviour in real time, making it harder for traditional security tools to identify and neutralise threats.
- Exploitation of AI tools – Cybercriminals are embedding malware within AI-powered applications and using generative AI to create more convincing fake identities, documents, and fraud schemes.
The financial and operational impact
The consequences of AI-driven cybercrime are severe:
- $49,600 – Average cost of cybercrime per incident for small businesses.
- $4.26 million – Average cost of a data breach for Australian organisations, marking a 27% increase since 2020.
- 70% of businesses feel AI is advancing faster than their ability to defend against it.
Beyond financial losses, AI-driven attacks also lead to long-term reputational damage, legal liabilities, and business disruption, especially in industries handling sensitive customer data, such as finance, healthcare, and government services.
How Australian businesses are responding
1. Investing in AI-powered security
With cybercriminals using AI, businesses are adopting AI-driven threat detection and response tools. These technologies use machine learning to detect anomalies, analyse user behaviour, and identify threats in real time. Some 41% of organisations are prioritising AI for network security, while 40% are investing in AI-driven cloud security.
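The anomaly-detection idea behind these tools can be illustrated with a very simple statistical baseline: flag any metric that strays too far from its historical mean. The sample data and the 2.5-sigma threshold below are invented for the example, and real AI-driven products use far richer behavioural models, but the principle is the same.

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    """Return the values lying more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Illustrative daily outbound data volumes in MB; the 5000 mimics an exfiltration spike.
daily_mb = [120, 130, 125, 118, 122, 127, 5000, 121, 119, 124]
print(flag_anomalies(daily_mb))
```

A single extreme point inflates the standard deviation and can mask itself at stricter thresholds, which is one reason production systems favour robust statistics or learned baselines over a plain z-score.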
2. Strengthening cybersecurity training
AI-powered phishing and deepfake scams are becoming so convincing that traditional cybersecurity awareness programs are no longer enough. Businesses are updating their training to:
- Help employees identify AI-generated phishing emails and fake websites.
- Teach staff to verify sensitive transactions through multi-step authorisation instead of relying on voice or email confirmations.
- Increase executive awareness of deepfake threats, particularly for finance and HR teams who handle payments and sensitive employee data.
3. Automating compliance and regulatory adherence
AI is being used to streamline compliance with cybersecurity regulations, helping businesses automatically detect policy violations, data exposure risks, and unusual access patterns. This shift moves organisations beyond a checkbox compliance approach to a proactive security model that continuously evolves with emerging threats.
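As a toy illustration of that kind of automated check, the snippet below scans a hypothetical access log for activity outside business hours, one of the simpler "unusual access pattern" rules. The log entries, resource names, and 7am–7pm window are assumptions made for the example.

```python
from datetime import datetime

# Hypothetical access log: (user, ISO timestamp, resource)
ACCESS_LOG = [
    ("alice", "2025-03-03T09:15:00", "payroll"),
    ("alice", "2025-03-03T02:47:00", "payroll"),  # off-hours access
    ("bob",   "2025-03-03T10:05:00", "crm"),
]

def off_hours_events(log, start_hour=7, end_hour=19):
    """Flag accesses outside business hours as potential policy violations."""
    flagged = []
    for user, ts, resource in log:
        hour = datetime.fromisoformat(ts).hour
        if not (start_hour <= hour < end_hour):
            flagged.append((user, ts, resource))
    return flagged

print(off_hours_events(ACCESS_LOG))
```

Real compliance platforms layer many such rules with learned baselines per user and per resource, but each rule reduces to this shape: parse the event, test it against policy, surface the exceptions.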
4. Adopting multi-layered security strategies
Many organisations are shifting to zero-trust security frameworks, requiring users and devices to continuously authenticate before accessing company systems. Other key strategies include:
- Multi-factor authentication (MFA) – Ensuring stolen credentials alone can’t grant unauthorised access.
- Endpoint detection and response (EDR) – Using AI-driven monitoring to detect unusual behaviour on devices.
- Threat intelligence sharing – Collaborating with government bodies and industry groups to track emerging AI-driven cybercrime tactics.
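To make the MFA point above concrete: most authenticator apps generate time-based one-time passwords as specified in RFC 6238, which can be reproduced with only the Python standard library. This is a minimal sketch for illustration, not a production implementation; the demo secret is the RFC's published test key, not a real credential.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step   # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test key "12345678901234567890" in base32 (demo only).
DEMO_SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(DEMO_SECRET))
```

Because the code is derived from a shared secret plus the current time window, a stolen password alone is useless to an attacker, which is exactly the property the MFA bullet describes.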
Challenges for small businesses
While larger enterprises are scaling up their cybersecurity investments, small and medium-sized enterprises (SMEs) face significant barriers to protecting themselves against AI-driven threats:
- Limited resources – Only 44% of SMEs have dedicated cybersecurity budgets, making it difficult to invest in advanced security tools.
- Skills gap – Over 60% of SMEs lack in-house cybersecurity expertise, forcing them to rely on outsourced providers or off-the-shelf security solutions that may not fully protect against AI-powered attacks.
- Regulatory complexity – Many small businesses struggle to keep up with rapidly changing cybersecurity regulations, increasing the risk of non-compliance and potential fines.
The future of AI-driven cyber threats
As we move into 2025, cybersecurity experts predict that:
- AI-powered cyberattacks will become more autonomous, allowing criminals to launch large-scale attacks with minimal human intervention.
- Deepfake scams will escalate, targeting businesses and individuals with more convincing fake audio and video impersonations.
- Supply chain vulnerabilities will increase as more organisations integrate AI into their operations without fully securing their digital ecosystems.
- AI will be used to manipulate public perception, with cybercriminals generating fake content, social media posts, and misinformation campaigns.
With AI-driven threats becoming more aggressive, intelligent, and automated, businesses cannot rely on traditional cybersecurity measures alone. They must take a proactive approach, combining AI-powered defence mechanisms, continuous monitoring, and advanced cybersecurity training to stay ahead.
Conclusion
AI-driven cybercrime is no longer a future threat – it’s here. Phishing emails, malware, and deepfake scams are now being created and deployed at a scale never seen before. Businesses that fail to evolve their security strategies risk financial losses, reputational damage, and legal consequences.
To effectively protect against AI-powered threats, organisations must invest in AI-driven cybersecurity solutions that can detect and prevent advanced attacks. Employee education is equally critical, ensuring staff can identify and respond to AI-generated phishing attempts and deepfake scams. Implementing a multi-layered security approach, including zero-trust frameworks and continuous monitoring, strengthens overall defences. Additionally, collaborating with cybersecurity experts helps businesses stay compliant with evolving regulations and proactively address emerging threats.
References
- https://securitybrief.com.au/story/ai-driven-cybercrime-spikes-in-australia-nz-warns-trend-micro
- https://itwire.com/business-it-news/security/australian-businesses-seek-better,-simpler-security-with-over-half-experiencing-a-cyberattack-in-2024.html
- https://www.techrepublic.com/article/5-emerging-ai-threats-australia/
- https://www.pwc.com.au/cyber-security-digital-trust/global-digital-trust-insights.html
- https://www.redsearch.com.au/resources/ai-spam-statistics-australia/
- https://www.minister.defence.gov.au/media-releases/2024-11-20/annual-cyber-threat-report-highlights-evolving-threat
- https://www.cyberdaily.au/tech/11511-industry-predictions-for-2025-part-1-artificial-intelligence-advances-risks-and-uptake
- https://www.cyber.gov.au/about-us/view-all-content/reports-and-statistics/annual-cyber-threat-report-2023-2024