AI, or Artificial Intelligence, is the development of computer systems that imitate human intelligence. It involves creating algorithms and models that enable machines to learn, recognize patterns, and adapt to new situations.
We left 2022 with the beginning of an AI boom, and here we are a year later fully enthralled by the force of big tech. In that time, San Francisco-based OpenAI has grown from a research unit on the fringes into a tech giant every mega player wants to brush shoulders with, including Microsoft, which holds a whopping $8 billion stake in the company. OpenAI’s flagship product, ChatGPT, now boasts 1.5 billion visits per month.
Elon Musk warned at MIT’s AeroAstro Centennial Symposium that “with artificial intelligence, we’re summoning the demon”, later commenting that “the pace of progress in artificial intelligence is incredibly fast – it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most.”
It seems fitting that the “AI Safety Summit” hosted by the British Government in November 2023 was held at Bletchley Park, home to Alan Turing and his code-breaking colleagues during the Second World War. Reminiscent of Thatcher’s April 1989 Downing Street meetings between thinkers, scientists and cabinet colleagues to discuss climate change, the summit gathered representatives of more than 100 leading AI nations and companies to confront growing fears over development, human displacement and weaponization.
A major concern is the potential for AI to facilitate and amplify phone hacking, through sophisticated social engineering attacks, automated brute-force attacks, AI-enhanced malware, and deep-fake attacks.
Sunak’s summit may have reached a commitment to state-backed testing and evaluation before any AI technology is released, but as Isabel Hardman, assistant editor of the Spectator, put it: “There are going to need to be a lot more summits before it becomes clear whether that agreement is actually going to mean anything”.
Check Point Research revealed an 8% surge in global weekly cyber-attacks in its 2023 Mid-Year Security Report. Meanwhile, 75% of security professionals have witnessed an increase in attacks over the past 12 months, with 85% attributing this rise to bad actors using generative AI.
AI algorithms supercharge password cracking, slashing the time it takes to breach a device. Hackers can leverage machine learning to optimize their attempts, exploiting predictable patterns to access personal and confidential mobile phone data. AI integrated into malware poses a major threat: able to adapt in real time, it can evade detection with ease. These advanced variants bypass security measures, exploit vulnerabilities, and learn from every interaction.
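To make the pattern-exploitation point concrete, here is a minimal, illustrative Python sketch of rule-based guess generation. Everything in it (the word list, substitution rules, and suffixes) is invented for illustration; real cracking tools and the machine-learning models behind them are vastly more sophisticated, but the principle is the same: human habits shrink the search space.

```python
from itertools import product

# Common human habits that pattern-aware attacks learn to exploit:
# capitalised word + recent year + symbol, plus simple "leetspeak" swaps.
LEET = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def leet_variants(word):
    """Yield the word itself plus a version with common letter-to-symbol swaps."""
    yield word
    swapped = "".join(LEET.get(c, c) for c in word)
    if swapped != word:
        yield swapped

def candidate_passwords(base_words, years=("2022", "2023"), suffixes=("", "!", "#")):
    """Generate guesses built from predictable patterns, rather than
    brute-forcing every possible character combination."""
    for word in base_words:
        for variant in leet_variants(word.capitalize()):
            for year, suffix in product(years, suffixes):
                yield f"{variant}{year}{suffix}"

guesses = list(candidate_passwords(["sotera", "phone"]))
print(len(guesses))   # 24 guesses: 2 words x 2 variants x 2 years x 3 suffixes
print(guesses[0])     # Sotera2022
```

A blind brute-force search over nine mixed-case alphanumeric characters would face trillions of combinations; a pattern-driven generator like this covers the guesses people actually choose in a handful of attempts, which is why predictable passwords fall so quickly.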
In a further, unsettlingly dystopian development, the rise of AI-fuelled deep-fake technology enables realistic impersonation, deceiving victims into compromising their own mobile phone security. Some attempt to combat these concerns with a “fight fire with fire” approach. According to James MacKay, COO of MetaCompliance and a recognized security awareness training expert, AI is becoming an increasingly important tool in the struggle against cyber-attacks.
However, AI models require constant monitoring and updating to stay ahead of evolving threats: hackers can deceive AI systems and bypass security measures by intentionally manipulating inputs or injecting malicious data. Ultimately, an AI system is only as good as the data it is trained on, and training datasets rarely cover emerging threats and are prone to bias.
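The input-manipulation risk is easy to demonstrate with a toy example. The sketch below uses a hypothetical keyword-based detector as a stand-in for a trained model (the phrases and names are invented); swapping a few Latin letters for visually identical Cyrillic ones is enough to slip an otherwise obvious phishing message past it.

```python
# Toy phishing detector: a stand-in for a learned classifier, flagging
# messages that contain known scam phrases. (Phrases invented for illustration.)
SCAM_PHRASES = ["password", "verify your account", "urgent transfer"]

def naive_detector(message: str) -> bool:
    """Return True if the message looks like phishing."""
    text = message.lower()
    return any(phrase in text for phrase in SCAM_PHRASES)

plain = "Please verify your account before Friday."
evasive = "Please vеrify your аccount before Friday."  # Cyrillic 'е' and 'а'

print(naive_detector(plain))    # True  - caught by the filter
print(naive_detector(evasive))  # False - homoglyphs slip straight past
```

Real evasion techniques (adversarial perturbations, data poisoning) are far subtler, but the lesson is the same one the paragraph above draws: a model only recognizes what its training data and rules anticipate.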
MacKay highlights: “As cybercriminals refine their AI-based cyber-attack techniques, it may result in an ‘arms race’ between cybersecurity professionals and cybercriminals”. An increasing number of attacks, proving costly for the affected organizations, is generating demand for more sophisticated solutions.
Current approaches, caught in a frenzy of catch-up, leave much to be desired: they are littered with security compromises, vulnerabilities and privacy concerns. The problem is that none of our competitors address the operating system itself.
Standard security features are ‘added on’, leaving vulnerabilities at each integral layer.
“There’s no silver bullet solution with cyber security, a layered defense is the only viable defense.” – James Scott, Senior Fellow, Institute for Critical Infrastructure Technology
Our government-trusted, enterprise-dependable Sotera SecurePhone was built on a multi-layer lockdown, meaning the voice streams and text messages between two Sotera SecurePhones cannot be intercepted and decrypted by a remote third party. Independently tested and verified by DoD-trusted third party Netragard, the Sotera SecurePhone is also trusted by the Ministry of Defence.
The Sotera SecurePhone is the first and only product to safeguard all three pillars of security, leaving it uniquely unfazed by ever-shifting cyber threats.