CVE-2026-6859 is a vulnerability in Red Hat Enterprise Linux AI
Published on April 22, 2026

InstructLab: arbitrary code execution due to hardcoded `trust_remote_code=True`
A flaw was found in InstructLab. The `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from the Hugging Face Hub. This allows a remote attacker to achieve arbitrary Python code execution by convincing a user to run `ilab train/download/generate` against a specially crafted malicious model from the Hub. This vulnerability can lead to complete system compromise.
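The danger of hardcoding `trust_remote_code=True` can be illustrated with a toy loader. This is not the actual InstructLab or `transformers` code; `load_model`, `repo_files`, and the payload are hypothetical stand-ins for the real mechanism, in which loaders such as `transformers` `from_pretrained` import and execute repo-supplied modeling code when `trust_remote_code=True`:

```python
def load_model(repo_files: dict, trust_remote_code: bool = False):
    """Simplified stand-in for a Hub model loader. When the repo ships
    custom modeling code, that code is executed at load time if (and
    only if) trust_remote_code is True."""
    custom = repo_files.get("modeling_custom.py")
    if custom is not None:
        if not trust_remote_code:
            raise ValueError(
                "repo defines custom code; refusing without trust_remote_code=True"
            )
        namespace = {}
        exec(custom, namespace)  # attacker-controlled code runs here
        return namespace["Model"]()
    return object()  # built-in architecture: no remote code involved


# A "malicious" repo: its modeling file runs arbitrary Python on load.
malicious_repo = {
    "modeling_custom.py": (
        "import os\n"
        "class Model:\n"
        "    def __init__(self):\n"
        "        # any payload could run here; reading the cwd is a\n"
        "        # harmless demonstration\n"
        "        self.pwned = os.getcwd()\n"
    )
}

# Safe default: loading refuses to execute the repo's code.
try:
    load_model(malicious_repo)
except ValueError as e:
    print("blocked:", e)

# Hardcoding trust_remote_code=True (as the flawed script did)
# silently executes the attacker's code.
m = load_model(malicious_repo, trust_remote_code=True)
print("payload ran, result:", m.pwned)
```

Because the flag was hardcoded rather than exposed to the user, the unsafe branch always ran, with no opportunity to decline.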


Vulnerability Analysis

CVE-2026-6859 can be exploited over the network and requires user interaction. The attack complexity is considered low, and the potential impact of a successful exploit is considered very high.

Attack Vector: NETWORK
Attack Complexity: LOW
Privileges Required: NONE
User Interaction: REQUIRED
Scope: UNCHANGED
Confidentiality Impact: HIGH
Integrity Impact: HIGH
Availability Impact: HIGH

Timeline

Reported to Red Hat.

Made public.

Weakness Type

Inclusion of Functionality from Untrusted Control Sphere (CWE-829)

The software imports, requires, or includes executable functionality (such as a library) from a source that is outside of the intended control sphere.
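A common mitigation for this weakness class is to verify externally supplied code against an allowlist of reviewed artifacts before executing it, rather than trusting the source unconditionally. A minimal sketch, assuming a hash-based allowlist (the `include_if_trusted` helper and the allowlisted content are hypothetical):

```python
import hashlib

# sha256 digests of code artifacts that have been reviewed and approved
# (hypothetical example content)
ALLOWED_SHA256 = {
    hashlib.sha256(b"print('reviewed module')\n").hexdigest(),
}


def include_if_trusted(code: bytes) -> None:
    """Execute code only if its digest matches a reviewed artifact."""
    digest = hashlib.sha256(code).hexdigest()
    if digest not in ALLOWED_SHA256:
        raise PermissionError(f"untrusted code (sha256={digest[:12]}...)")
    exec(code, {})


# Reviewed code runs; anything else is rejected before execution.
include_if_trusted(b"print('reviewed module')\n")
try:
    include_if_trusted(b"import os; os.system('id')")
except PermissionError as e:
    print("blocked:", e)
```

The key property is that the trust decision is made against content the operator has verified, not against the identity of the remote source.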


Products Associated with CVE-2026-6859


Affected Versions

Red Hat Enterprise Linux AI (RHEL AI) 3