## Overview
Golf Gateway includes an ML-powered threat detection engine that classifies MCP messages as benign or malicious. The model can run in two modes:

| Mode | Description | Best for |
|---|---|---|
| Remote (recommended) | Model runs on Azure ML in your cloud | Enterprise deployments, data residency |
| Local | Model bundled in gateway container | Air-gapped environments |
## Prerequisites
- Azure subscription with an Azure ML workspace
- Azure CLI with the `ml` extension
- HuggingFace account with access to the Golf Prompt Guard model (gated — request access on the model page)
- Golf Gateway deployed in any mode (standalone, distributed, or hybrid)
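If the `ml` extension is missing, it can be added through the Azure CLI's standard extension mechanism (shown here for convenience; this is not a step from the original checklist):

```shell
# Install, or upgrade to, the latest Azure ML CLI v2 extension
az extension add --name ml --upgrade
```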
## Step 1: Deploy the model to Azure ML
### Create the deployment files

Create the following files in a working directory. Your directory should look like this:
- score.py
- environment/conda.yaml
- deployment.yaml
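A minimal `environment/conda.yaml` for a transformers-based scoring container might look like the following (the package set and pins below are assumptions, not the published environment):

```yaml
# environment/conda.yaml — scoring environment (sketch; pins are assumptions)
name: golf-prompt-guard
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      - torch
      - transformers
      - azureml-inference-server-http
```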
The scoring script that serves the model:
### Create the deployment
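A sketch of `deployment.yaml` and the CLI calls that create the endpoint. The endpoint name, instance SKU, base image, and resource-group/workspace parameters are placeholders, not values from this guide:

```yaml
# deployment.yaml — managed online deployment (sketch; names and SKU are placeholders)
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: golf-prompt-guard
model:
  path: ./model            # model files downloaded from HuggingFace
code_configuration:
  code: .
  scoring_script: score.py
environment:
  conda_file: environment/conda.yaml
  image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
instance_type: Standard_DS3_v2
instance_count: 1
```

```shell
# Create the endpoint, then the deployment behind it
az ml online-endpoint create --name golf-prompt-guard \
  --resource-group <rg> --workspace-name <ws>
az ml online-deployment create --file deployment.yaml \
  --resource-group <rg> --workspace-name <ws> --all-traffic
```

`--all-traffic` routes all endpoint traffic to this deployment once it is healthy.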
## Step 2: Connect Golf Gateway
Configure the gateway to use your Azure ML endpoint. The `azure_ml` protocol is auto-detected from the `*.inference.ml.azure.com` URL — no additional protocol configuration is needed.
The endpoint can be configured either through environment variables or through the gateway's YAML configuration.
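As an illustrative sketch only (the variable and key names below are assumptions; the authoritative names are in the Environment Variables Reference and the YAML Schema Reference):

```shell
# Hypothetical variable names — verify against the Environment Variables Reference
export THREAT_DETECTION_BACKEND_URL="https://<endpoint>.<region>.inference.ml.azure.com/score"
export THREAT_DETECTION_API_KEY="<endpoint-key>"
```

```yaml
# Hypothetical keys — verify against the YAML Schema Reference
threat_detection:
  backend_url: https://<endpoint>.<region>.inference.ml.azure.com/score
  api_key: ${THREAT_DETECTION_API_KEY}
```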
### Azure AD authentication (alternative to API key)
For service principal authentication instead of a static API key, set the gateway's Azure AD fields. The auth mode is auto-detected: when all three Azure AD fields are set, the gateway uses Azure AD token acquisition (the `https://ml.azure.com/.default` scope) instead of key-based auth. Ensure the service principal has the AzureML Data Scientist role on the workspace.
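A sketch of the service principal configuration; the three field names below are hypothetical stand-ins for the gateway's actual Azure AD settings:

```shell
# Hypothetical variable names for the three Azure AD fields; when all three
# are set, key-based auth is skipped in favor of Azure AD token acquisition.
export THREAT_DETECTION_AZURE_TENANT_ID="<tenant-id>"
export THREAT_DETECTION_AZURE_CLIENT_ID="<client-id>"
export THREAT_DETECTION_AZURE_CLIENT_SECRET="<client-secret>"
export THREAT_DETECTION_BACKEND_URL="https://<endpoint>.<region>.inference.ml.azure.com/score"
```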
### Lightweight gateway image

When using the remote backend, the gateway doesn't need local ML dependencies. Use the Docker image without the `-gpu` suffix in the tag for a smaller footprint.

## Related guides
- Configure PII Scrubbing — Data protection
- Set Up Alerting — Alert on threat detections
- Environment Variables Reference — Full configuration reference
- YAML Schema Reference — Declarative gateway configuration