Keeping track of system metrics is crucial for maintaining a stable Rime self-hosted deployment. These insights help guide decisions about scaling and performance. To support this, Rime services expose multiple endpoints that let you monitor system health.
⚠️ More metrics will be added in future releases.

For more detailed operational insights, the model service exposes Prometheus-compatible metrics at the `/metrics` endpoint:
```
curl -X GET "http://localhost:8080/metrics"
```
This endpoint provides telemetry data including:
HTTP Request Counters: Detailed breakdown of requests by endpoint, status code, and HTTP method
Error Tracking: Counts of HTTP errors by type and status code
Example metrics include:
```
# HELP http_requests_total Total number of HTTP requests
# TYPE http_requests_total counter
http_requests_total{endpoint="/invocations",http_status="200",method="POST"} 102018.0
http_requests_total{endpoint="/metrics",http_status="200",method="GET"} 69908.0
http_requests_total{endpoint="/invocations",http_status="500",method="POST"} 38.0
# HELP http_errors_total Total number of HTTP errors
# TYPE http_errors_total counter
http_errors_total{endpoint="/invocations",error_message="cannot access local variable '_var_var_6' where it is not associated with a value",http_status="500",method="POST"} 38.0
```
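Output in this exposition format can be consumed directly by Prometheus, but it is also easy to inspect ad hoc. The sketch below is a minimal stdlib-only Python example (not part of Rime's tooling) that parses counter samples like the ones above and computes an error rate; the `parse_counters` helper and the hardcoded `SAMPLE` text are illustrative assumptions.

```python
import re

# Hardcoded sample of /metrics output, taken from the example above.
# In practice you would fetch this text from http://localhost:8080/metrics.
SAMPLE = """\
# HELP http_requests_total Total number of HTTP requests
# TYPE http_requests_total counter
http_requests_total{endpoint="/invocations",http_status="200",method="POST"} 102018.0
http_requests_total{endpoint="/invocations",http_status="500",method="POST"} 38.0
"""

def parse_counters(text):
    """Return {(metric_name, label_string): value}, skipping # HELP/# TYPE lines."""
    counters = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        m = re.match(r'^(\w+)\{(.*)\}\s+([\d.eE+-]+)$', line)
        if m:
            name, labels, value = m.groups()
            counters[(name, labels)] = float(value)
    return counters

counters = parse_counters(SAMPLE)
total = sum(v for (n, _), v in counters.items() if n == "http_requests_total")
errors = sum(v for (n, labels), v in counters.items()
             if n == "http_requests_total" and 'http_status="500"' in labels)
print(f"error rate: {errors / total:.4%}")
```

A real deployment would let Prometheus do this aggregation via queries such as `rate(http_requests_total[5m])` rather than parsing by hand.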
These metrics can be integrated with a Prometheus monitoring system to build dashboards and alerts for your Rime deployment. See the next page for integration setup.
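As a rough sketch of what that integration involves, a Prometheus scrape configuration pointing at the model service might look like the fragment below. The job name, target host/port, and scrape interval are assumptions for illustration; adjust them to match your deployment.

```yaml
# prometheus.yml fragment (illustrative; names and target are assumptions)
scrape_configs:
  - job_name: "rime-model-service"   # hypothetical job name
    scrape_interval: 15s
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:8080"]  # the model service's /metrics endpoint
```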