15.2.4 Tapping runtime metrics
The /metrics endpoint can report many metrics produced by a running application, including memory, processor, garbage collection, and HTTP requests. Actuator provides more than two dozen categories of metrics out of the box, as evidenced by the following list of metrics categories returned when issuing a GET request to /metrics:
$ curl localhost:8081/actuator/metrics | jq
{
  "names": [
    "jvm.memory.max",
    "process.files.max",
    "jvm.gc.memory.promoted",
    "http.server.requests",
    "system.load.average.1m",
    "jvm.memory.used",
    "jvm.gc.max.data.size",
    "jvm.memory.committed",
    "system.cpu.count",
    "logback.events",
    "jvm.buffer.memory.used",
    "jvm.threads.daemon",
    "system.cpu.usage",
    "jvm.gc.memory.allocated",
    "jvm.threads.live",
    "jvm.threads.peak",
    "process.uptime",
    "process.cpu.usage",
    "jvm.classes.loaded",
    "jvm.gc.pause",
    "jvm.classes.unloaded",
    "jvm.gc.live.data.size",
    "process.files.open",
    "jvm.buffer.count",
    "jvm.buffer.total.capacity",
    "process.start.time"
  ]
}
So many metrics are covered that it would be impossible to discuss them all in any meaningful way in this chapter. Instead, let’s focus on one category of metrics, http.server.requests, as an example of how to consume the /metrics endpoint.
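As with the other Actuator examples in this chapter, these requests assume that the metrics endpoint is exposed over HTTP and that the Actuator endpoints are reachable on port 8081. If that isn’t already true in your project, a minimal application.yml sketch along the following lines would line things up with the curl commands (the standard Spring Boot properties are shown here; the choice of 8081 as a separate management port is only an assumption to match the examples):

management:
  server:
    port: 8081                        # assumed here so the curl examples line up
  endpoints:
    web:
      exposure:
        include: health,info,metrics  # expose /metrics (among others) over HTTP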
If instead of simply requesting /metrics, you were to issue a GET request for /metrics/{metrics name}, you’d receive more detail about the metrics for that category. In the case of http.server.requests, a GET request for /metrics/http.server.requests returns data that looks like the following:
$ curl localhost:8081/actuator/metrics/http.server.requests
{
  "name": "http.server.requests",
  "measurements": [
    { "statistic": "COUNT", "value": 2103 },
    { "statistic": "TOTAL_TIME", "value": 18.086334315 },
    { "statistic": "MAX", "value": 0.028926313 }
  ],
  "availableTags": [
    { "tag": "exception",
      "values": [ "ResponseStatusException",
                  "IllegalArgumentException", "none" ] },
    { "tag": "method", "values": [ "GET" ] },
    { "tag": "uri",
      "values": [
        "/actuator/metrics/{requiredMetricName}",
        "/actuator/health", "/actuator/info", "/ingredients",
        "/actuator/metrics", "/**" ] },
    { "tag": "status", "values": [ "404", "500", "200" ] }
  ]
}
The most significant portion of this response is the measurements section, which includes all the metrics for the requested category. In this case, it reports that there have been 2,103 HTTP requests. The total time spent handling those requests is 18.086334315 seconds, and the maximum time spent processing any request is 0.028926313 seconds.
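Because the response is plain JSON, any HTTP client can consume it. The following is a minimal sketch (the class name, host, and port are assumptions for illustration, not part of the application) showing how another program might read those measurements with Spring’s RestTemplate:

import java.util.List;
import java.util.Map;

import org.springframework.web.client.RestTemplate;

public class HttpRequestMetricsReader {

  @SuppressWarnings("unchecked")
  public static void main(String[] args) {
    RestTemplate rest = new RestTemplate();

    // Fetch the http.server.requests metric as a generic map
    // (assumes the Actuator endpoints are served at localhost:8081).
    Map<String, Object> metric = rest.getForObject(
        "http://localhost:8081/actuator/metrics/http.server.requests",
        Map.class);

    // The "measurements" entry is a list of {statistic, value} objects.
    List<Map<String, Object>> measurements =
        (List<Map<String, Object>>) metric.get("measurements");

    for (Map<String, Object> measurement : measurements) {
      System.out.println(
          measurement.get("statistic") + " = " + measurement.get("value"));
    }
  }
}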
Those generic metrics are interesting, but you can narrow down the results further by using the tags listed under availableTags. For example, you know that there have been 2,103 requests, but what’s unknown is how many of them resulted in an HTTP 200 versus an HTTP 404 or HTTP 500 response status. Using the status tag, you can get metrics for all requests resulting in an HTTP 404 status like this:
$ curl localhost:8081/actuator/metrics/http.server.requests?tag=status:404
{
  "name": "http.server.requests",
  "measurements": [
    { "statistic": "COUNT", "value": 31 },
    { "statistic": "TOTAL_TIME", "value": 0.522061212 },
    { "statistic": "MAX", "value": 0 }
  ],
  "availableTags": [
    { "tag": "exception",
      "values": [ "ResponseStatusException", "none" ] },
    { "tag": "method", "values": [ "GET" ] },
    { "tag": "uri",
      "values": [
        "/actuator/metrics/{requiredMetricName}", "/**" ] }
  ]
}
By specifying the tag name and value with the tag request attribute, you now see metrics specifically for requests that resulted in an HTTP 404 response. This shows that there were 31 requests resulting in a 404, and it took 0.522061212 seconds to serve them all. Moreover, it’s clear that some of the failing requests were GET requests for /actuator/metrics/{requiredMetricName} (although it’s unclear what the {requiredMetricName} path variable resolved to). And some were for some other path, captured by the /** wildcard path.
Hmmm . . . what if you want to know how many of those HTTP 404 responses were for the /** path? All you need to do to filter this further is to specify the uri tag in the request, like this:
$ curl "localhost:8081/actuator/metrics/http.server.requests?tag=status:404&tag=uri:/**"
{
  "name": "http.server.requests",
  "measurements": [
    { "statistic": "COUNT", "value": 30 },
    { "statistic": "TOTAL_TIME", "value": 0.519791548 },
    { "statistic": "MAX", "value": 0 }
  ],
  "availableTags": [
    { "tag": "exception", "values": [ "ResponseStatusException" ] },
    { "tag": "method", "values": [ "GET" ] }
  ]
}
Now you can see that there were 30 requests for some path that matched /** that resulted in an HTTP 404 response, and it took a total of 0.519791548 seconds to handle those requests.
You’ll also notice that as you refine the request, the available tags are more limited. The tags offered are only those that match the requests captured by the displayed metrics. In this case, the exception and method tags each have only a single value; it’s obvious that all 30 of the requests were GET requests that resulted in a 404 because of a ResponseStatusException.
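The same kind of drill-down can be automated. The following sketch (again, the class name and endpoint location are assumptions for illustration) issues one tag-filtered request per status value seen under availableTags and prints the request count for each:

import java.net.URI;
import java.util.List;
import java.util.Map;

import org.springframework.web.client.RestTemplate;
import org.springframework.web.util.UriComponentsBuilder;

public class RequestCountsByStatus {

  @SuppressWarnings("unchecked")
  public static void main(String[] args) {
    RestTemplate rest = new RestTemplate();

    // Status values taken from the availableTags shown earlier.
    for (String status : List.of("200", "404", "500")) {
      // Build .../http.server.requests?tag=status:200 (and so on).
      URI uri = UriComponentsBuilder
          .fromUriString(
              "http://localhost:8081/actuator/metrics/http.server.requests")
          .queryParam("tag", "status:" + status)
          .build()
          .toUri();

      Map<String, Object> metric = rest.getForObject(uri, Map.class);
      List<Map<String, Object>> measurements =
          (List<Map<String, Object>>) metric.get("measurements");

      // Report only the COUNT statistic for each status.
      for (Map<String, Object> measurement : measurements) {
        if ("COUNT".equals(measurement.get("statistic"))) {
          System.out.println(
              "HTTP " + status + ": " + measurement.get("value") + " requests");
        }
      }
    }
  }
}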
Navigating the /metrics endpoint can be a tricky business, but with a little practice, it’s not impossible to get the data you’re looking for. In chapter 16, you’ll see how Spring Boot Admin makes consuming data from the /metrics endpoint much easier.
Although the information presented by Actuator endpoints offers useful insight into the inner workings of a running Spring Boot application, it’s not well suited for human consumption. Because Actuator endpoints are REST endpoints, the data they provide is intended for consumption by some other application, perhaps a UI. With that in mind, let’s see how you can present Actuator information in a user-friendly web application.
