Currently, enterprises are either adopting generative AI (GenAI) solutions, evaluating how to integrate these tools into their business plans, or both. To drive informed decisions and effective planning, obtaining hard data is crucial, yet such data is surprisingly scarce.

The '2025 Enterprise Generative AI Data Security Report' published by LayerX offers unprecedented insight into how AI tools are actually used in the workplace, while revealing key security vulnerabilities. Based on real telemetry data from LayerX's enterprise clients, it is one of the few reliable sources that details how employees actually use generative AI.

For example, the report reveals that nearly 90% of enterprise AI usage occurs outside the IT department's visibility, posing significant risks such as data leakage and unauthorized access.

Below, we will share some key findings from the report. Read the full report to refine and enhance your security strategy, use data-driven decisions for risk management, and drive resource allocation to strengthen data protection measures for generative AI.

The use of enterprise generative AI is not yet widespread

Although the hype around generative AI might suggest the entire workforce has moved its daily work into generative AI, LayerX finds actual adoption is lukewarm: about 15% of users visit generative AI tools every day. While this proportion is not negligible, it is not yet mainstream.

However, we agree with LayerX's analysis and expect this trend to accelerate rapidly, especially since 50% of users already use generative AI every week.

Additionally, the report found that 39% of regular generative AI tool users are software developers. This makes the leakage of source and proprietary code through generative AI the largest exposure, along with the risk of unvetted, AI-generated code making its way into the codebase.

How is generative AI used? No one knows

Since LayerX is embedded in the browser, it can observe the use of 'shadow SaaS': employees using tools that corporate IT has not approved, or accessing approved tools through non-corporate accounts.

Although generative AI tools like ChatGPT are used for work purposes, nearly 72% of employees access them through personal accounts. Even among those who use corporate accounts, only about 12% sign in via single sign-on (SSO). As a result, nearly 90% of generative AI usage is invisible to the enterprise, leaving organizations blind to 'shadow AI' applications and to corporate information being shared with AI tools without authorization.

50% of paste actions involve enterprise data

Remember the Pareto principle? In this case, while not every user opens generative AI daily, those who do tend to paste potentially confidential information frequently.

LayerX found that among users who submit data to generative AI tools, there are on average nearly four pastes of enterprise data per day. This can include business information, customer data, financial plans, source code, and more.
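As an illustration of how a metric like this could be derived from browser telemetry, here is a minimal sketch. The `PasteEvent` shape and its field names are assumptions for illustration only, not LayerX's actual schema.

```typescript
// Hypothetical telemetry record; field names are illustrative assumptions.
interface PasteEvent {
  userId: string;
  day: string;               // e.g. "2025-03-26"
  containsCorpData: boolean; // flagged by some upstream classifier
}

// Average number of enterprise-data pastes per user-day, counted over
// user-days on which the user pasted anything into a GenAI tool.
function avgCorpPastesPerUserDay(events: PasteEvent[]): number {
  const corpCounts = new Map<string, number>();
  const activeUserDays = new Set<string>();
  for (const e of events) {
    const key = `${e.userId}|${e.day}`;
    activeUserDays.add(key);
    if (e.containsCorpData) {
      corpCounts.set(key, (corpCounts.get(key) ?? 0) + 1);
    }
  }
  if (activeUserDays.size === 0) return 0;
  let total = 0;
  for (const n of corpCounts.values()) total += n;
  return total / activeUserDays.size;
}
```

Note that averaging only over active user-days (rather than all employees) matches how the report frames the figure: it describes the behavior of users who actually submit data, not the whole workforce.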

How are enterprises planning the use of generative AI?

Findings in the report indicate an urgent need for new security strategies to manage generative AI risks. Traditional security tools are unable to cope with modern AI-driven, browser-based work environments. They lack the ability to detect, control, and protect AI interactions at the source (i.e., the browser).

Browser-based security solutions can provide visibility into AI SaaS applications, unknown AI applications outside of ChatGPT, AI-enabled browser extensions, and more. This visibility can be used to deploy data loss prevention (DLP) solutions for generative AI, allowing enterprises to safely incorporate generative AI into their plans and secure their business for the future.
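To make the idea concrete, here is a deliberately naive sketch of the kind of check a browser-based DLP layer might run on pasted text before it reaches a generative AI tool. The patterns below are illustrative assumptions; real products use far richer detection (exact-data matching, document fingerprints, sensitivity labels).

```typescript
// Illustrative DLP-style check: flag pasted text matching patterns
// commonly treated as sensitive. Not a production detection engine.
const SENSITIVE_PATTERNS: Array<[string, RegExp]> = [
  ["email address", /[\w.+-]+@[\w-]+\.[\w.-]+/],
  ["card-like number", /\b(?:\d[ -]?){13,16}\b/],
  ["private key header", /-----BEGIN [A-Z ]*PRIVATE KEY-----/],
  ["AWS-style access key", /\bAKIA[0-9A-Z]{16}\b/],
];

// Returns the labels of every sensitive pattern the pasted text matches.
function classifyPaste(text: string): string[] {
  return SENSITIVE_PATTERNS
    .filter(([, pattern]) => pattern.test(text))
    .map(([label]) => label);
}
```

In a real browser extension, a function like this would run inside a `paste` event listener scoped to GenAI domains, logging or blocking the event whenever the result is non-empty.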

For more data on the use of generative AI, please read the full report.

Reference Source:

89% of Enterprise GenAI Usage Is Invisible to Organizations Exposing Critical Security Risks, New Report Reveals
