Microsoft recently announced Security Copilot, an AI-powered assistant intended to make cybersecurity defense more efficient and productive. The tool combines OpenAI's GPT-4 with a proprietary security-specific model that Microsoft built from its own security data.
Security Copilot is currently available to a small number of selected companies for testing, with no official launch date announced. Hackers, however, are not waiting: they have already begun using widely available AI tools to launch attacks. Companies that wait for the public release, or for any other official defensive AI tools, put themselves at a disadvantage, becoming easy targets for attackers who have embraced the new technology.
Companies are postponing authorization of these tools because of the risks they believe AI may bring. In reality, the benefits of using AI in an organization far outweigh the risks of not using it.
To better protect themselves from cyberattacks, organizations must regulate employee usage while integrating AI into their security and other systems, so they can quickly start reaping the benefits AI can bring.
Many businesses are hesitant to let cybersecurity employees use AI tools in their work because the technology is unregulated and still maturing. Influential figures from various industries have signed an open letter demanding a pause on AI experiments more powerful than GPT-4. Some even argue the letter doesn't go far enough and that society isn't ready to handle the ramifications of AI.
Unfortunately, Pandora's box has already been opened, and pretending we can reverse these innovations is delusional.
AI isn’t a new invention, either: we’ve been interacting with limited models for years. Can you count the times you’ve used a website’s chatbot, your smartphone assistant, or an at-home device like Alexa? Artificial intelligence has infiltrated our lives just as the internet, smartphones, and the cloud did before it.
Fear is justifiable, but companies should direct it at cybercriminals and the growing sophistication of their attacks, not at the tools themselves.
Hackers using ChatGPT are faster and more sophisticated than before, and cybersecurity analysts without access to similar tools can quickly find themselves outgunned and outsmarted by AI-assisted attackers. Criminals are using ChatGPT to generate phishing emails, malware, and encryption tools, and even to build dark web marketplaces. The possibilities AI offers hackers are endless, and as a result many analysts are resorting to unauthorized use of AI systems just to do their jobs.
According to HelpNet Security, 96% of security professionals know someone using unauthorized tools within their organization, and 80% admitted to using prohibited tools themselves. This suggests AI is already a widely used asset in the cybersecurity industry, mostly out of necessity. Survey participants said they would opt for unauthorized tools because of a better user interface (47%), more specialized capabilities (46%), and more efficient work (44%).
Corporations are scrambling to figure out governance around AI, and while they do, their employees are openly defying the rules and potentially jeopardizing company operations.
According to a Cyberhaven study of 1.6 million workers, 3.1% have input confidential company information into ChatGPT. Although that number seems small, 11% of users' queries include private information: names, Social Security numbers, internal company files, and other confidential data.
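As an illustration (not something the article describes), a minimal sketch of a pre-submission filter could flag or redact obvious PII patterns before a prompt ever leaves the company. The pattern and placeholder below are assumptions for the sketch, covering only US Social Security numbers; a real deployment would need far broader detection:

```python
import re

# Hypothetical filter: redact SSN-like strings (e.g. "123-45-6789")
# before a prompt is sent to an external AI tool.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_pii(prompt: str) -> str:
    """Replace SSN-like substrings with a placeholder token."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", prompt)

print(redact_pii("Employee 123-45-6789 reported the incident."))
# → Employee [REDACTED-SSN] reported the incident.
```

A filter like this doesn't solve the governance problem, but it shows that basic technical controls can sit between employees and external AI services.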
ChatGPT learns from every conversation and can regurgitate user information if probed correctly. This is a serious flaw for corporate use, since hackers can manipulate the system into revealing previously hidden information. Worse, an AI incorporated into a corporate environment will also know the company's security mechanisms. Armed with that information, an attacker could obtain and distribute confidential data.
Whether it be the cloud or the internet, integration of new technologies has always caused controversy and hesitation. But halting innovation is impossible when criminals have gained access to advanced tools that practically do the job for them.
To address this security issue properly, companies should apply their existing governance frameworks to AI. Reusing historically proven procedures would let companies catch up with their attackers and close the power imbalance.
Streamlined regulation among cybersecurity professionals would allow companies to oversee which tools employees are using, when they use them, and what information is being entered. Contracts between technology providers and organizations are already standard for corporate cloud usage and can be extended to the nebulous sphere of AI.
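The oversight described above (which tool, when, and what was entered) could, as one hypothetical approach, take the shape of a simple audit record. Everything below — the function name, fields, and example values — is an assumption for the sketch, not anything the article prescribes:

```python
import json
from datetime import datetime, timezone

def record_ai_usage(user: str, tool: str, prompt_summary: str) -> dict:
    """Build an audit record of who used which AI tool, when, and on what."""
    return {
        "user": user,
        "tool": tool,
        "prompt_summary": prompt_summary,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# A record like this could be appended to a central log for compliance review.
entry = record_ai_usage("analyst42", "ChatGPT", "summarize firewall alert")
print(json.dumps(entry))
```

The point of such a record is simply that governance becomes enforceable once usage is visible; without logging, any policy on AI tools is unverifiable.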
We’ve passed the point of no return, and careful, critical adoption is the only way to live in an AI-driven world. Heightened innovation, increased public accessibility, and ease of use have given cybercriminals an upper hand that is hard to reverse. To turn things around, companies must embrace AI in a safe, controlled environment.
This advanced technology is nearly uncontrollable, and cybersecurity analysts need to learn how to use it responsibly. Employee training and the development of enterprise tools would strengthen cybersecurity procedures until an industry giant like Microsoft transforms the industry with Security Copilot. In the meantime, companies must stop sticking their heads in the sand hoping reality will change.
Things will become more dystopian if organizations continue to ignore rampant problems instead of dealing with the uncomfortable world we’ve created.