Currently, enterprises are either adopting generative AI (GenAI) solutions, evaluating how to integrate these tools into their business plans, or both. To drive informed decisions and effective planning, obtaining hard data is crucial, yet such data is surprisingly scarce.
The '2025 Enterprise Generative AI Data Security Report' published by LayerX offers rare insight into how AI tools are actually used in the workplace, while revealing key security vulnerabilities. Because it is based on real telemetry data from LayerX's enterprise clients, it is one of the few reliable sources of detailed information on how employees actually use generative AI.
For example, the report reveals that nearly 90% of enterprise AI usage occurs outside the view of the IT department, posing significant risks such as data leakage and unauthorized access.
Below, we share some key findings from the report. Read the full report to refine your security strategy, make data-driven risk-management decisions, and direct resources toward stronger data protection for generative AI.
The use of enterprise generative AI is not yet widespread
Although the hype around generative AI might suggest that entire workforces have moved their daily work to it, LayerX finds actual usage more lukewarm: about 15% of users access generative AI tools every day. That share is not negligible, but it is not yet mainstream.
Still, we agree with LayerX's analysis and expect this trend to accelerate rapidly, especially since 50% of users already use generative AI every week.
Additionally, the report found that 39% of regular generative AI users are software developers. This makes leakage of source code and proprietary code through generative AI the single largest exposure, alongside the risk of unvetted, AI-generated code finding its way into the codebase.
How is generative AI used? No one knows
Because LayerX is embedded in the browser, it can observe the use of 'shadow SaaS': employees using tools that corporate IT has not approved, or accessing approved tools through non-corporate accounts.
Although generative AI tools like ChatGPT are used for work purposes, nearly 72% of employees access them through personal accounts. Even among those who log in with corporate accounts, only about 12% do so via single sign-on (SSO). The result is that nearly 90% of generative AI usage is invisible to the enterprise, leaving it blind to 'shadow AI' applications and to corporate information being shared with AI tools without authorization.
50% of paste actions involve enterprise data
Remember the Pareto principle? In this case, while not every user touches generative AI daily, the users who do tend to paste potentially confidential information frequently.
LayerX found that users who submit data to generative AI tools paste enterprise data nearly four times per day on average. That data may include business information, customer data, financial plans, source code, and more.
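To make that exposure concrete, here is a minimal sketch of the kind of pattern matching a generative AI DLP check might apply to pasted text. The categories, regexes, and function names are illustrative assumptions only, not LayerX's detection logic.

// Minimal sketch: classify pasted text for likely enterprise data.
// All categories and regexes below are illustrative assumptions,
// not LayerX's implementation.

type Finding = { category: string; match: string };

const PATTERNS: Array<{ category: string; regex: RegExp }> = [
  // Credentials and keys often appear in source-code pastes.
  { category: "api-key", regex: /\b(?:sk|pk|AKIA)[A-Za-z0-9_-]{16,}\b/g },
  // Customer data: email addresses as a cheap proxy.
  { category: "email", regex: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },
  // Financial figures: currency amounts with thousands separators.
  { category: "financial", regex: /[$€£]\s?\d{1,3}(?:,\d{3})+(?:\.\d+)?/g },
  // Source code: a crude heuristic for common definition keywords.
  { category: "source-code", regex: /\b(?:def |function |class |import )\w+/g },
];

function classifyPaste(text: string): Finding[] {
  const findings: Finding[] = [];
  for (const { category, regex } of PATTERNS) {
    for (const m of text.matchAll(regex)) {
      findings.push({ category, match: m[0] });
    }
  }
  return findings;
}

// Example: a paste mixing customer data with a credential.
const pasted = "Contact jane.doe@acme-corp.com, key sk_live_abc123def456ghi789";
console.log(classifyPaste(pasted));
// -> one "api-key" finding and one "email" finding

Real products use far more sophisticated techniques, such as document fingerprinting and named-entity recognition; the point here is only the shape of the check.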
How should enterprises plan for the use of generative AI?
Findings in the report indicate an urgent need for new security strategies to manage generative AI risks. Traditional security tools are unable to cope with modern AI-driven, browser-based work environments. They lack the ability to detect, control, and protect AI interactions at the source (i.e., the browser).
Browser-based security solutions can provide visibility into AI SaaS applications, lesser-known AI applications beyond ChatGPT, AI-enabled browser extensions, and more. That visibility can then be used to enforce data loss prevention (DLP) policies for generative AI, allowing enterprises to safely incorporate these tools into their plans and future-proof their business.
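As an illustration of what "control at the source" can look like, here is a minimal sketch of a hypothetical browser-extension content script that intercepts paste events on generative AI sites and blocks anything a DLP check flags. The host list and helper function are assumptions for the example, not LayerX's product.

// Minimal sketch of browser-level paste interception, written as a
// hypothetical extension content script. Illustrative only.

// Sites where the policy applies; illustrative list only.
const GENAI_HOSTS = ["chat.openai.com", "gemini.google.com", "claude.ai"];

// Stand-in for a real DLP classifier (see the earlier sketch).
function looksSensitive(text: string): boolean {
  return /\b[\w.+-]+@[\w-]+\.[\w.]+\b/.test(text); // crude email check
}

if (GENAI_HOSTS.includes(location.hostname)) {
  document.addEventListener(
    "paste",
    (event: ClipboardEvent) => {
      const text = event.clipboardData?.getData("text") ?? "";
      if (looksSensitive(text)) {
        // Block the paste at the source and surface a policy notice.
        event.preventDefault();
        event.stopPropagation();
        console.warn("Paste blocked by DLP policy: sensitive data detected");
        // A real extension would also report the event to a backend
        // for visibility and auditing.
      }
    },
    true // capture phase, so the page's own handlers never see it
  );
}

Because the listener runs in the capture phase, the page's own scripts never receive the blocked paste, which is exactly the enforcement point that traditional network-level tools cannot reach.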
For more data on the use of generative AI, please read the full report.