Red Hat AI Inference Server
By the Year
In 2025 there have been 2 vulnerabilities in Red Hat AI Inference Server, with an average score of 6.2 out of 10 (the mean of the two CVSS scores below, 7.1 and 5.3).
| Year | Vulnerabilities | Average Score |
|---|---|---|
| 2025 | 2 | 6.20 |
It may take a day or so for new AI Inference Server vulnerabilities to show up in the stats or in the list of recent security vulnerabilities. Additionally, vulnerabilities may be tagged under a different product or component name.
Recent Red Hat AI Inference Server Security Vulnerabilities
vLLM MediaConnector SSRF via load_from_url
CVE-2025-6242
7.1 - High
- October 07, 2025
A Server-Side Request Forgery (SSRF) vulnerability exists in the MediaConnector class within the vLLM project's multimodal feature set. The load_from_url and load_from_url_async methods fetch and process media from user-provided URLs without adequate restrictions on the target hosts. This allows an attacker to coerce the vLLM server into making arbitrary requests to internal network resources.
SSRF
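To illustrate the class of fix this issue calls for, here is a minimal sketch of the standard SSRF defense: validating a user-supplied URL against private, loopback, and link-local address ranges before fetching it. The helper name `is_url_safe` and its policy are illustrative assumptions, not the actual vLLM patch.

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hypothetical policy; the real remediation may differ in detail.
ALLOWED_SCHEMES = {"http", "https"}

def is_url_safe(url: str) -> bool:
    """Return False for URLs that resolve to internal addresses,
    the kind of check missing from load_from_url."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False
    try:
        # Resolve every address the hostname maps to: an attacker can
        # point a public DNS name at an internal IP.
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 443)
    except socket.gaierror:
        return False
    for family, _, _, _, sockaddr in infos:
        addr = ipaddress.ip_address(sockaddr[0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```

With this check, `is_url_safe("http://169.254.169.254/latest/meta-data/")` returns False, blocking the classic cloud-metadata SSRF target. Note that resolving and then fetching in separate steps is still exposed to DNS rebinding; a hardened implementation pins the resolved address when making the actual request.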
Auth Bypass in ai-inference-server /invocations Endpoint
CVE-2025-6920
5.3 - Medium
- July 01, 2025
A flaw was found in the authentication enforcement mechanism of a model inference API in ai-inference-server. All /v1/* endpoints are expected to enforce API key validation. However, the POST /invocations endpoint failed to do so, resulting in an authentication bypass. This vulnerability allows unauthorized users to access the same inference features available on protected endpoints, potentially exposing sensitive functionality or allowing unintended access to backend resources.
Missing Authentication for Critical Function
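The general remediation pattern for a bypass like this is to enforce the API-key check in middleware that covers every route, rather than attaching it only to the /v1/* prefix. Below is a minimal sketch using FastAPI, which vLLM's OpenAI-compatible server is built on; the environment variable name and the handler are illustrative assumptions, not the shipped fix.

```python
import os

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

# Hypothetical: the expected key is read from the environment.
EXPECTED_KEY = os.environ.get("VLLM_API_KEY", "")

@app.middleware("http")
async def enforce_api_key(request: Request, call_next):
    # Check every request path, not just /v1/*, so routes such as
    # POST /invocations cannot slip past authentication.
    auth = request.headers.get("Authorization", "")
    if not EXPECTED_KEY or auth != f"Bearer {EXPECTED_KEY}":
        return JSONResponse(status_code=401,
                            content={"detail": "Unauthorized"})
    return await call_next(request)

@app.post("/invocations")
async def invocations():
    # Placeholder handler standing in for the real inference endpoint.
    return {"ok": True}
```

Because the middleware runs before routing-specific logic, any endpoint added later inherits the same authentication requirement by default, which is exactly the property the vulnerable server lacked.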