API#
The API
module implements an HTTP RESTful interface for obtaining basic information
about the web server in JSON format, as well as statistics on client
connections, shared memory zones, DNS queries, HTTP requests, the HTTP response cache,
stream module sessions, and the zones of the
limit_conn (http), limit_conn (stream), limit_req, and http upstream modules.
The interface accepts the GET
and HEAD
HTTP methods;
a request with any other method results in an error:
{
"error": "MethodNotAllowed",
"description": "The POST method is not allowed for the requested API element \"/\"."
}
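Because error bodies are themselves JSON, a client can branch on the `error` field instead of parsing prose. A minimal sketch in Python; the response text is copied from the example above:

```python
import json

# Error body returned for a disallowed method, copied from the example above
body = r'''{
"error": "MethodNotAllowed",
"description": "The POST method is not allowed for the requested API element \"/\"."
}'''

err = json.loads(body)
if err.get("error") == "MethodNotAllowed":
    # e.g. retry with GET, or log and give up
    print(err["description"])
```
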
In Angie PRO, this interface includes a dynamic configuration section that allows changing settings without reloading the configuration or
restarting; currently, configuration of individual servers within
upstreams is available.

The api directive enables the HTTP RESTful interface in the surrounding location. The path parameter is mandatory. Similar to the alias directive, it sets the path that replaces the one specified in the location. If the directive is specified in a prefix location, the part of the request URI matching the prefix (for instance, /stats/) is replaced with the path given in the path parameter (/status/http/server_zones/); a request to /stats/foo/ then accesses the API element /status/http/server_zones/foo/. Variables are allowed, e.g. api /status/$module/server_zones/$name/, including inside a regex location, where the path parameter defines the full path to the API element; thus, from a request to /api/location/data/, the final request will be /status/http/location_zones/data/.

Note: In Angie PRO, you can separate the dynamic configuration API from the immutable status API that reflects the current state. The path parameter also allows controlling API access.

The api_config_files directive enables or disables adding the config_files object, which lists the contents of all Angie configuration files currently loaded by the server instance, to the /status/angie/ API section. By default, output is disabled because configuration files may contain
particularly sensitive, confidential information.

Angie publishes usage statistics in the /status/ API section. A set of metrics can be requested as an individual JSON branch by constructing the appropriate request. By default, the module uses ISO 8601 format strings for dates;
to use the integer UNIX epoch format instead,
add the date=epoch parameter to the query string.

The /status/angie section contains the following members:

version: String; version of the running Angie web server
build: String; particular build name, when specified during compilation
build_time: String; build time of the Angie executable, in the date format
address: String; the address of the server that accepted the API request
generation: Number; the total number of configuration reloads since the last start
load_time: String; time of the last configuration reload, in the date format; string values have millisecond resolution
config_files: Object; its members are the absolute pathnames of all Angie configuration files that are currently loaded by the server instance, and their values are string representations of the files' contents

Caution: the config_files object is available only if the api_config_files directive is enabled.

The /status/connections section contains:

accepted: Number; the total number of accepted client connections
dropped: Number; the total number of dropped client connections
active: Number; the current number of active client connections
idle: Number; the current number of idle client connections

To collect resolver statistics,
the resolver directive must set the status_zone parameter (HTTP or Stream). The specified shared memory zone will collect the following statistics:

queries: Object; queries statistics
name: Number; the number of queries to resolve names to addresses (A and AAAA queries)
srv: Number; the number of queries to resolve services to addresses (SRV queries)
addr: Number; the number of queries to resolve addresses to names (PTR queries)

responses: Object; responses statistics
success: Number; the number of successful responses
timedout: Number; the number of timed-out queries
format_error: Number; the number of responses with code 1 (Format Error)
server_failure: Number; the number of responses with code 2 (Server Failure)
not_found: Number; the number of responses with code 3 (Name Error)
unimplemented: Number; the number of responses with code 4 (Not Implemented)
refused: Number; the number of responses with code 5 (Refused)
other: Number; the number of queries completed with another non-zero code

sent: Object; sent DNS queries statistics
a: Number; the number of A-type queries
aaaa: Number; the number of AAAA-type queries
ptr: Number; the number of PTR-type queries
srv: Number; the number of SRV-type queries

The response codes are described in RFC 1035, section 4.1.1. Various DNS record types are detailed in RFC 1035,
RFC 2782, and
RFC 3596. Example:

To collect the HTTP server metrics, set the status_zone directive in the server context. To group the metrics by a custom value, use the alternative syntax.
Here, the metrics are aggregated by $host,
with each group reported as a standalone zone: The specified shared memory zone will collect the following statistics:

ssl: Object; SSL statistics. Present if the server sets listen ssl:
handshaked: Number; the total number of successful SSL handshakes
reuses: Number; the total number of session reuses during SSL handshakes
timedout: Number; the total number of timed-out SSL handshakes
failed: Number; the total number of failed SSL handshakes

requests: Object; requests statistics
total: Number; the total number of client requests
processing: Number; the number of client requests currently being processed
discarded: Number; the total number of client requests completed without sending a response

responses: Object; responses statistics
<code>: Number; a non-zero number of responses with status <code> (100-599)
xxx: Number; a non-zero number of responses with other status codes

data: Object; data statistics
received: Number; the total number of bytes received from clients
sent: Number; the total number of bytes sent to clients

Example:

To collect the location metrics, set the status_zone directive in the context of location, or of an if block inside location. To group the metrics by a custom value, use the alternative syntax.
Here, the metrics are aggregated by $host,
with each group reported as a standalone zone: The specified shared memory zone will collect the following statistics:

requests: Object; requests statistics
total: Number; the total number of client requests
discarded: Number; the total number of client requests completed without sending a response

responses: Object; responses statistics
<code>: Number; a non-zero number of responses with status <code> (100-599)
xxx: Number; a non-zero number of responses with other status codes

data: Object; data statistics
received: Number; the total number of bytes received from clients
sent: Number; the total number of bytes sent to clients

Example:

To collect the stream server metrics, set the status_zone directive in the server context. To group the metrics by a custom value, use the alternative syntax.
Here, the metrics are aggregated by $host,
with each group reported as a standalone zone: The specified shared memory zone will collect the following statistics:

ssl: Object; SSL statistics. Present if the server sets listen ssl:
handshaked: Number; the total number of successful SSL handshakes
reuses: Number; the total number of session reuses during SSL handshakes
timedout: Number; the total number of timed-out SSL handshakes
failed: Number; the total number of failed SSL handshakes

connections: Object; connections statistics
total: Number; the total number of client connections
processing: Number; the number of client connections currently being processed
discarded: Number; the total number of client connections completed without creating a session
passed: Number; the total number of client connections relayed to another listening port with pass directives

sessions: Object; sessions statistics
success: Number; the number of sessions completed with code 200, which means successful completion
invalid: Number; the number of sessions completed with code 400, which happens when client data could not be parsed, e.g. the PROXY protocol header
forbidden: Number; the number of sessions completed with code 403, when access was forbidden, for example, when access is limited for certain client addresses
internal_error: Number; the number of sessions completed with code 500, the internal server error
bad_gateway: Number; the number of sessions completed with code 502, bad gateway, for example, if an upstream server could not be selected or reached
service_unavailable: Number; the number of sessions completed with code 503, service unavailable, for example, when access is limited by the number of connections

data: Object; data statistics
received: Number; the total number of bytes received from clients
sent: Number; the total number of bytes sent to clients

Example:

For each zone configured with proxy_cache, the following data is
stored:

size: Number; the current size of the cache
max_size: Number; the configured limit on the maximum size of the cache
cold: Boolean; true while the cache loader loads data from disk

hit: Object; statistics of valid cached responses (proxy_cache_valid)
responses: Number; the total number of responses read from the cache
bytes: Number; the total number of bytes read from the cache

stale: Object; statistics of expired responses taken from the cache (proxy_cache_use_stale)
responses: Number; the total number of responses read from the cache
bytes: Number; the total number of bytes read from the cache

updating: Object; statistics of expired responses taken from the cache while responses were being updated (proxy_cache_use_stale updating)
responses: Number; the total number of responses read from the cache
bytes: Number; the total number of bytes read from the cache

revalidated: Object; statistics of expired and revalidated responses taken from the cache (proxy_cache_revalidate)
responses: Number; the total number of responses read from the cache
bytes: Number; the total number of bytes read from the cache

miss: Object; statistics of responses not found in the cache
responses: Number; the total number of corresponding responses
bytes: Number; the total number of bytes read from the proxied server
responses_written: Number; the total number of responses written to the cache
bytes_written: Number; the total number of bytes written to the cache

expired: Object; statistics of expired responses not taken from the cache
responses: Number; the total number of corresponding responses
bytes: Number; the total number of bytes read from the proxied server
responses_written: Number; the total number of responses written to the cache
bytes_written: Number; the total number of bytes written to the cache

bypass: Object; statistics of responses not looked up in the cache (proxy_cache_bypass)
responses: Number; the total number of corresponding responses
bytes: Number; the total number of bytes read from the proxied server
responses_written: Number; the total number of responses written to the cache
bytes_written: Number; the total number of bytes written to the cache

Added in version 1.2.0: PRO In Angie PRO, if cache sharding is enabled with proxy_cache_path directives, individual shards are exposed as object members of a shards object:

shards: Object; lists individual shards as members
<shard>: Object; represents an individual shard, with its cache path for name
size: Number; the shard's current size
max_size: Number; the maximum shard size, if configured
cold: Boolean; true while the cache loader loads data from disk

Objects for each configured limit_conn in http or limit_conn in stream contexts have the following fields:

passed: Number; the total number of passed connections
skipped: Number; the total number of connections passed with a zero-length key, or a key exceeding 255 bytes
rejected: Number; the total number of connections exceeding the configured limit
exhausted: Number; the total number of connections rejected due to exhaustion of zone storage

Objects for each configured limit_req have the following fields:

passed: Number; the total number of passed requests
skipped: Number; the total number of requests passed with a zero-length key, or a key exceeding 255 bytes
delayed: Number; the total number of delayed requests
rejected: Number; the total number of rejected requests
exhausted: Number; the total number of requests rejected due to exhaustion of zone storage

Added in version 1.1.0. To enable collection of the following metrics,
set the zone directive in the upstream context,
for instance: where <upstream> is the name of any upstream specified with the zone directive Object; contains the metrics of the upstream's peers as subobjects
whose names are canonical representations of the peers' addresses.
Members of each subobject: String; the parameter of the server directive String; name of service as it's specified in server directive, if configured Number; the specified slow_start value for the server,
expressed in seconds. When setting the value via the
respective subsection
of the dynamic configuration API,
you can specify either a number
or a time value with millisecond precision. Boolean; Number; configured weight String; the current state of the peer and what requests are sent to it: Additional states in Angie PRO: Object; peer selection statistics Number; the current number of connections to peer Number; total number of requests forwarded to peer String or number; time when peer was last selected,
formatted as a date Number; the configured maximum number of simultaneous connections, if specified Object; responses statistics Number; a non-zero number of responses with status <code> (100-599) Number; a non-zero number of responses with other status codes Object; data statistics Number; the total number of bytes received from peer Number; the total number of bytes sent to peer Object; health statistics Number; the total number of unsuccessful attempts to communicate with the peer Number; how many times peer became Number; the total time (in milliseconds) when peer was String or number; time when peer became Number; average time (in milliseconds)
to receive the response headers from the peer;
see response_time_factor (PRO) Number; average time (in milliseconds)
to receive the entire peer response;
see response_time_factor (PRO) String; configured id of the server in upstream group Number; the number of currently cached connections Object; contains the current state of the active backup logic,
present if backup_switch (PRO) is configured for the upstream Number; active group identifier, if any Number; time to expire in milliseconds,
after which the balancer will re-check the groups for healthy peers;
does not appear for the primary group Changed in version 1.2.0: PRO If the upstream has upstream_probe (PRO) probes configured,
the The Counters in Number; total probes for this peer Number; total failed probes String or number; last probe time,
formatted as a date Changed in version 1.4.0: PRO If a request queue is configured for the upstream,
the upstream object also contains a nested Counter values are summed across all worker processes: Number; total number of requests that entered the queue Number; current number of requests in the queue Number; total number of requests removed from the queue
because the client prematurely closed the connection Number; total number of requests removed from the queue due to timeout Number; total number of queue overflow occurrences To enable collection of the following metrics,
set the zone directive in the upstream context,
for instance: Here, <upstream> is the name of an upstream that is
configured with a zone directive. Object; contains the metrics of the upstream's peers as subobjects
whose names are canonical representations of the peers' addresses.
Members of each subobject: String; address set by the server directive String; service name, if set by server directive Number; the specified slow_start value for the server,
expressed in seconds. When setting the value via the
respective subsection
of the dynamic configuration API,
you can specify either a number
or a time value with millisecond precision. Boolean; Number; the weight of the peer String; the current state of the peer and what requests are sent to it: Additional states in Angie PRO: Object; the peer's selection metrics Number; current connections to the peer Number; total connections forwarded to the peer String or number; time when the peer was last selected,
formatted as a date Number;
maximum
number of simultaneous active connections to the peer, if set Object; data transfer metrics Number; total bytes received from the peer Number; total bytes sent to the peer Object; peer health metrics Number; total failed attempts to reach the peer Number; times the peer became Number; total time (in milliseconds) that the peer was
String or number; time when the peer last became Number; average time (in milliseconds)
taken to establish a connection with the peer;
see the response_time_factor (PRO) directive. Number; average time (in milliseconds)
to receive the first byte of the response from the peer;
see the response_time_factor (PRO) directive. Number; average time (in milliseconds)
to receive the complete response from the peer;
see the response_time_factor (PRO) directive. Object; contains the current state of active backup logic,
present if backup_switch (PRO) is configured for the upstream Number; level of the active group
currently used for request balancing.
If the active group is the primary group, the value is 0 Number; remaining wait time in milliseconds
after which the load balancer will recheck for healthy nodes
in groups with lower levels, starting from the primary group,
while groups with higher levels are not checked;
not displayed for the primary group (level 0) Changed in version 1.4.0: PRO In Angie PRO, if the upstream has upstream_probe (PRO) probes configured,
the The Counters in Number; total probes for this peer Number; total failed probes String or number; last probe time,
formatted as a date Added in version 1.2.0: PRO The API includes a Currently, configuration of individual servers within upstreams is available
in the Enables configuring individual upstream peers,
including deleting existing peers or adding new ones. URI path parameters: Name of the upstream; to be configurable via The peer's name within the upstream, defined as
For example, the following configuration: Allows the following peer names: This API subsection enables setting the Note There is no separate Example: Actually available parameters are limited to the ones supported by the
current load balancing method of the upstream.
So, if the upstream is configured with the You will be unable to add a new peer that defines Note Even with a compatible load balancing method, the Allows configuring individual servers within an upstream,
including adding new ones and deleting configured ones. Parameters in the URI path: Name of the Name of a specific server within the specified For example, for the following configuration: These server names are valid: This API subsection allows setting the Note There is no separate Example: Only those parameters that are supported by the current load balancing method
of the upstream will actually be available.
For example, if the upstream is configured with the Then it's impossible to add a new server with the Note Even with a compatible balancing method, the When deleting servers, you can set the
Let's consider the semantics of all HTTP methods applicable to this section,
given this upstream configuration: The For example, the
You can obtain default parameter values with The For example, to set the Verify the changes: The For example, to delete the previously set Verify the changes using The When deleting servers, you can set the The The method operates as follows: if the entities from the new definition
exist in the configuration, they are overwritten; otherwise, they are added. For example, to change the Verify the changes: The JSON object supplied with the The Note This deletion is identical to For example, to delete the Verify the changes: The Directives#
api#
location
.location
, but over the API tree rather than the filesystem.location
:location /stats/ {
api /status/http/server_zones/;
}
/status/http/server_zones/foo/
.location ~^/api/([^/]+)/(.*)$ {
api /status/http/$1_zones/$2;
}
/api/location/data/
the following variables will be extracted:$1 = "location"
$2 = "data/"
/status/http/location_zones/data/
.location /config/ {
api /config/;
}
location /status/ {
api /status/;
}
location /status/ {
api /status/;
allow 127.0.0.1;
deny all;
}
location /blog/requests/ {
api /status/http/server_zones/blog/requests/;
auth_basic "blog";
auth_basic_user_file conf/htpasswd;
}
api_config_files#
config_files
object,
which lists the contents of all Angie configuration files
currently loaded by the server instance,
to the /status/angie/ API section.
For example, with this configuration:location /status/ {
api /status/;
api_config_files on;
}
/status/angie/
returns approximately the following:{
"version":"1.10.0",
"address":"192.168.16.5",
"generation":1,
"load_time":"2025-07-03T12:58:39.789Z",
"config_files": {
"/etc/angie/angie.conf": "...",
"/etc/angie/mime.types": "..."
}
}
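Since the `config_files` values are the raw file contents, a monitoring script can list which files are loaded and how large each one is. A sketch in Python over the response shape shown above (file contents abridged to placeholders):

```python
import json

# Shape of /status/angie/ with api_config_files on; contents are abridged
response = json.loads('''{
  "version": "1.10.0",
  "address": "192.168.16.5",
  "generation": 1,
  "load_time": "2025-07-03T12:58:39.789Z",
  "config_files": {
    "/etc/angie/angie.conf": "server {\\n    listen 80;\\n}\\n",
    "/etc/angie/mime.types": "types {\\n}\\n"
  }
}''')

for path, content in sorted(response["config_files"].items()):
    print(f"{path}: {len(content)} bytes")
```
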
Metrics#
/status/
API section; you can
open access to it by setting the appropriate location
. Full access:location /status/ {
api /status/;
}
location /stats/ {
api /status/http/server_zones/;
}
Example configuration#
location /status/
, resolver
, http
in
upstream
, http server
, location
, cache
, limit_conn
in
http
and limit_req
zones:http {
resolver 127.0.0.53 status_zone=resolver_zone;
proxy_cache_path /var/cache/angie/cache keys_zone=cache_zone:2m;
limit_conn_zone $binary_remote_addr zone=limit_conn_zone:10m;
limit_req_zone $binary_remote_addr zone=limit_req_zone:10m rate=1r/s;
upstream upstream {
zone upstream 256k;
server backend.example.com service=_example._tcp resolve max_conns=5;
keepalive 4;
}
server {
server_name www.example.com;
listen 443 ssl;
status_zone http_server_zone;
proxy_cache cache_zone;
access_log /var/log/access.log main;
location / {
root /usr/share/angie/html;
status_zone location_zone;
limit_conn limit_conn_zone 1;
limit_req zone=limit_req_zone burst=5;
}
location /status/ {
api /status/;
allow 127.0.0.1;
deny all;
}
}
}
curl https://www.example.com/status/
Angie returns:JSON tree
{
"angie": {
"version":"1.10.0",
"address":"192.168.16.5",
"generation":1,
"load_time":"2025-07-03T12:58:39.789Z"
},
"connections": {
"accepted":2257,
"dropped":0,
"active":3,
"idle":1
},
"slabs": {
"cache_zone": {
"pages": {
"used":2,
"free":506
},
"slots": {
"64": {
"used":1,
"free":63,
"reqs":1,
"fails":0
},
"512": {
"used":1,
"free":7,
"reqs":1,
"fails":0
}
}
},
"limit_conn_zone": {
"pages": {
"used":2,
"free":2542
},
"slots": {
"64": {
"used":1,
"free":63,
"reqs":74,
"fails":0
},
"128": {
"used":1,
"free":31,
"reqs":1,
"fails":0
}
}
},
"limit_req_zone": {
"pages": {
"used":2,
"free":2542
},
"slots": {
"64": {
"used":1,
"free":63,
"reqs":1,
"fails":0
},
"128": {
"used":2,
"free":30,
"reqs":3,
"fails":0
}
}
}
},
"http": {
"server_zones": {
"http_server_zone": {
"ssl": {
"handshaked":4174,
"reuses":0,
"timedout":0,
"failed":0
},
"requests": {
"total":4327,
"processing":0,
"discarded":8
},
"responses": {
"200":4305,
"302":12,
"404":4
},
"data": {
"received":733955,
"sent":59207757
}
}
},
"location_zones": {
"location_zone": {
"requests": {
"total":4158,
"discarded":0
},
"responses": {
"200":4157,
"304":1
},
"data": {
"received":538200,
"sent":177606236
}
}
},
"caches": {
"cache_zone": {
"size":0,
"cold":false,
"hit": {
"responses":0,
"bytes":0
},
"stale": {
"responses":0,
"bytes":0
},
"updating": {
"responses":0,
"bytes":0
},
"revalidated": {
"responses":0,
"bytes":0
},
"miss": {
"responses":0,
"bytes":0,
"responses_written":0,
"bytes_written":0
},
"expired": {
"responses":0,
"bytes":0,
"responses_written":0,
"bytes_written":0
},
"bypass": {
"responses":0,
"bytes":0,
"responses_written":0,
"bytes_written":0
}
}
},
"limit_conns": {
"limit_conn_zone": {
"passed":73,
"skipped":0,
"rejected":0,
"exhausted":0
}
},
"limit_reqs": {
"limit_req_zone": {
"passed":54816,
"skipped":0,
"delayed":65,
"rejected":26,
"exhausted":0
}
},
"upstreams": {
"upstream": {
"peers": {
"192.168.16.4:80": {
"server":"backend.example.com",
"service":"_example._tcp",
"backup":false,
"weight":5,
"state":"up",
"selected": {
"current":2,
"total":232
},
"max_conns":5,
"responses": {
"200":222,
"302":12
},
"data": {
"sent":543866,
"received":27349934
},
"health": {
"fails":0,
"unavailable":0,
"downtime":0
},
"sid":"<server_id>"
}
},
"keepalive":2
}
}
},
"resolvers": {
"resolver_zone": {
"queries": {
"name":442,
"srv":2,
"addr":0
},
"responses": {
"success":440,
"timedout":1,
"format_error":0,
"server_failure":1,
"not_found":1,
"unimplemented":0,
"refused":1,
"other":0
}
}
}
}
$ curl https://www.example.com/status/angie
$ curl https://www.example.com/status/connections
$ curl https://www.example.com/status/slabs
$ curl https://www.example.com/status/slabs/<zone>/slots
$ curl https://www.example.com/status/slabs/<zone>/slots/64
$ curl https://www.example.com/status/http/
$ curl https://www.example.com/status/http/server_zones
$ curl https://www.example.com/status/http/server_zones/<http_server_zone>
$ curl https://www.example.com/status/http/server_zones/<http_server_zone>/ssl
date=epoch
parameter to the query string:$ curl https://www.example.com/status/angie/load_time
"2024-04-01T00:59:59+01:00"
$ curl https://www.example.com/status/angie/load_time?date=epoch
1711929599
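The two forms denote the same instant; converting the ISO 8601 string yields the epoch value, as a quick Python check (values copied from the example above) shows:

```python
from datetime import datetime

iso = "2024-04-01T00:59:59+01:00"   # value returned without ?date=epoch
epoch = int(datetime.fromisoformat(iso).timestamp())
print(epoch)  # → 1711929599, the value returned with ?date=epoch
```
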
Server status#
/status/angie
#{
"version": "1.10.0",
"build_time": "2025-07-03T16:05:43.805Z",
"address": "192.168.16.5",
"generation": 1,
"load_time": "2025-07-03T16:15:43.805Z"
"config_files": {
"/etc/angie/angie.conf": "...",
"/etc/angie/mime.types": "..."
}
}
version
build
build_time
address
generation
load_time
config_files
{
"/etc/angie/angie.conf": "server {\n listen 80;\n # ...\n\n}\n"
}
config_files
object is available in /status/angie/
only if the
api_config_files
directive is enabled.
Connections#
/status/connections
#{
"accepted": 2257,
"dropped": 0,
"active": 3,
"idle": 1
}
accepted
dropped
active
idle
DNS queries to resolver#
/status/resolvers/<zone>
#status_zone
parameter
(HTTP or Stream):resolver 127.0.0.53 status_zone=resolver_zone;
queries
name
srv
addr
responses
success
timedout
format_error
server_failure
not_found
unimplemented
refused
other
sent
a
aaaa
ptr
srv
{
"queries": {
"name": 442,
"srv": 2,
"addr": 0
},
"responses": {
"success": 440,
"timedout": 1,
"format_error": 0,
"server_failure": 1,
"not_found": 1,
"unimplemented": 0,
"refused": 1,
"other": 0
},
"sent": {
"a": 185,
"aaaa": 245,
"srv": 2,
"ptr": 12
}
}
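These counters make aggregate rates easy to derive, for example the share of queries that did not succeed. A sketch in Python over the example response above:

```python
import json

# Resolver zone counters, copied from the example response above
resolver = json.loads('''{
  "queries":   {"name": 442, "srv": 2, "addr": 0},
  "responses": {"success": 440, "timedout": 1, "format_error": 0,
                "server_failure": 1, "not_found": 1, "unimplemented": 0,
                "refused": 1, "other": 0},
  "sent":      {"a": 185, "aaaa": 245, "srv": 2, "ptr": 12}
}''')

total = sum(resolver["responses"].values())        # all completed queries
failed = total - resolver["responses"]["success"]  # everything except success
failure_rate = failed / total if total else 0.0
print(f"{failed}/{total} queries failed ({failure_rate:.2%})")
```
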
HTTP server and location#
/status/http/server_zones/<zone>
#server
metrics,
set the status_zone directive in the server context:server {
...
status_zone server_zone;
}
status_zone $host zone=server_zone:5;
ssl
server
sets listen ssl;
handshaked
reuses
timedout
failed
requests
total
processing
discarded
responses
<code>
xxx
data
received
sent
{
"ssl":{
"handshaked":4174,
"reuses":0,
"timedout":0,
"failed":0
},
"requests":{
"total":4327,
"processing":0,
"discarded":0
},
"responses":{
"200":4305,
"302":6,
"304":12,
"404":4
},
"data":{
"received":733955,
"sent":59207757
}
}
/status/http/location_zones/<zone>
#location
metrics, set the status_zone directive
in the context of location, or of an if block inside location:location / {
root /usr/share/angie/html;
status_zone location_zone;
if ($request_uri ~* "^/condition") {
# ...
status_zone if_location_zone;
}
}
status_zone $host zone=server_zone:5;
requests
total
discarded
responses
<code>
xxx
data
received
sent
{
"requests": {
"total": 4158,
"discarded": 0
},
"responses": {
"200": 4157,
"304": 1
},
"data": {
"received": 538200,
"sent": 177606236
}
}
Stream server#
/status/stream/server_zones/<zone>
#server
metrics,
set the status_zone directive in the server context:server {
...
status_zone server_zone;
}
status_zone $host zone=server_zone:5;
ssl
server
sets listen ssl;
handshaked
reuses
timedout
failed
connections
total
processing
discarded
passed
pass
directivessessions
success
invalid
forbidden
internal_error
bad_gateway
service_unavailable
data
received
sent
{
"ssl": {
"handshaked": 24,
"reuses": 0,
"timedout": 0,
"failed": 0
},
"connections": {
"total": 24,
"processing": 1,
"discarded": 0,
"passed": 2
},
"sessions": {
"success": 24,
"invalid": 0,
"forbidden": 0,
"internal_error": 0,
"bad_gateway": 0,
"service_unavailable": 0
},
"data": {
"received": 2762947,
"sent": 53495723
}
}
HTTP caches#
proxy_cache cache_zone;
/status/http/caches/<cache>
#{
"name_zone": {
"size": 0,
"cold": false,
"hit": {
"responses": 0,
"bytes": 0
},
"stale": {
"responses": 0,
"bytes": 0
},
"updating": {
"responses": 0,
"bytes": 0
},
"revalidated": {
"responses": 0,
"bytes": 0
},
"miss": {
"responses": 0,
"bytes": 0,
"responses_written": 0,
"bytes_written": 0
},
"expired": {
"responses": 0,
"bytes": 0,
"responses_written": 0,
"bytes_written": 0
},
"bypass": {
"responses": 0,
"bytes": 0,
"responses_written": 0,
"bytes_written": 0
}
}
}
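A common use of these counters is computing a cache hit ratio. A sketch in Python over the response shape above; the numbers are hypothetical, and counting hit, stale, updating, and revalidated responses as "served from cache" is one reasonable reading of the counters, not the only one:

```python
import json

# Cache zone counters in the shape shown above; the numbers are hypothetical
cache = json.loads('''{
  "hit":         {"responses": 90, "bytes": 921600},
  "stale":       {"responses": 2,  "bytes": 8192},
  "updating":    {"responses": 1,  "bytes": 4096},
  "revalidated": {"responses": 3,  "bytes": 12288},
  "miss":        {"responses": 10, "bytes": 102400, "responses_written": 8, "bytes_written": 81920},
  "expired":     {"responses": 4,  "bytes": 40960,  "responses_written": 4, "bytes_written": 40960},
  "bypass":      {"responses": 1,  "bytes": 4096,   "responses_written": 0, "bytes_written": 0}
}''')

served = sum(v["responses"] for v in cache.values())
from_cache = sum(cache[k]["responses"] for k in ("hit", "stale", "updating", "revalidated"))
ratio = from_cache / served if served else 0.0
print(f"cache served {from_cache}/{served} responses ({ratio:.1%})")
```
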
size
max_size
cold
true
while the cache loader loads data from diskhit
responses
bytes
stale
responses
bytes
updating
responses
bytes
revalidated
responses
bytes
miss
responses
bytes
responses_written
bytes_written
expired
responses
bytes
responses_written
bytes_written
bypass
responses
bytes
responses_written
bytes_written
shards
object:shards
<shard>
size
max_size
cold
true
while the cache loader loads data from disk{
"name_zone": {
"shards": {
"/path/to/shard1": {
"size": 0,
"cold": false
},
"/path/to/shard2": {
"size": 0,
"cold": false
}
}
}
limit_conn#
limit_conn_zone $binary_remote_addr zone=limit_conn_zone:10m;
/status/http/limit_conns/<zone>
, /status/stream/limit_conns/<zone>
#{
"passed": 73,
"skipped": 0,
"rejected": 0,
"exhausted": 0
}
passed
skipped
rejected
exhausted
limit_req#
limit_req_zone $binary_remote_addr zone=limit_req_zone:10m rate=1r/s;
/status/http/limit_reqs/<zone>
#{
"passed": 54816,
"skipped": 0,
"delayed": 65,
"rejected": 26,
"exhausted": 0
}
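From these counters you can estimate how aggressively the limit is firing. A sketch in Python using the example values above; treating the four counters as disjoint outcomes is an assumption made here for illustration:

```python
import json

# limit_req zone counters, copied from the example response above
zone = json.loads('{"passed": 54816, "skipped": 0, "delayed": 65, "rejected": 26, "exhausted": 0}')

# Assumption: passed, delayed, rejected, and exhausted are disjoint outcomes
seen = zone["passed"] + zone["delayed"] + zone["rejected"] + zone["exhausted"]
rejected_share = zone["rejected"] / seen if seen else 0.0
print(f"{zone['rejected']} of {seen} requests rejected ({rejected_share:.3%})")
```
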
passed
skipped
delayed
rejected
exhausted
HTTP upstream#
upstream upstream {
zone upstream 256k;
server backend.example.com service=_example._tcp resolve max_conns=5;
keepalive 4;
}
/status/http/upstreams/<upstream>
#{
"peers": {
"192.168.16.4:80": {
"server": "backend.example.com",
"service": "_example._tcp",
"backup": false,
"weight": 5,
"state": "up",
"selected": {
"current": 2,
"total": 232
},
"max_conns": 5,
"responses": {
"200": 222,
"302": 12
},
"data": {
"sent": 543866,
"received": 27349934
},
"health": {
"fails": 0,
"unavailable": 0,
"downtime": 0
},
"sid": "<server_id>"
}
},
"keepalive": 2
}
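Because each peer reports a `state` field with a known set of values, a health check script only needs to flag peers that are not "up". A sketch in Python; the second peer below is invented for illustration:

```python
import json

# Fragment of /status/http/upstreams/<upstream> in the shape shown above;
# the second peer is hypothetical, added to illustrate a non-"up" state
upstream = json.loads('''{
  "peers": {
    "192.168.16.4:80": {"server": "backend.example.com", "state": "up",
                        "health": {"fails": 0, "unavailable": 0, "downtime": 0}},
    "192.168.16.7:80": {"server": "backend2.example.com", "state": "unavailable",
                        "health": {"fails": 12, "unavailable": 2, "downtime": 4350}}
  },
  "keepalive": 2
}''')

# Peers in any state other than "up" deserve attention
suspect = {addr: p["state"] for addr, p in upstream["peers"].items() if p["state"] != "up"}
print(suspect)
```
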
peers
server
service
slow_start
(PRO 1.4.0+)backup
true
for backup serversweight
state
busy
: indicates that the number of requests to the server
has reached the limit set by max_conns,
and no new requests are sentdown
: manually disabled, no requests are sentrecovering
: recovering after a failure
according to slow_start,
more and more requests are sent over timeunavailable
: reached the max_fails limit,
only trial client requests are sent
at intervals defined by fail_timeout;up
: operational, requests are sent as usualchecking
: configured as essential
and being checked,
only probe requests are sentdraining
: similar to down
,
but requests from previously bound sessions
(via sticky) are still sentunhealthy
: non-operational,
only probe requests are sentselected
current
total
last
max_conns
responses
<code>
xxx
data
received
sent
health
fails
unavailable
unavailable
due to reaching the max_fails limitdowntime
unavailable
for selectiondownstart
unavailable
,
formatted as a dateheader_time
(PRO 1.3.0+)response_time
(PRO 1.3.0+)sid
keepalive
backup_switch
active
timeout
health/probes
(PRO)#health
object also has a probes
subobject
that stores the peer's health probe counters,
while the peer's state
can also be checking
and unhealthy
,
apart from the values listed in the table above:{
"192.168.16.4:80": {
"state": "unhealthy",
"...": "...",
"health": {
"...": "...",
"probes": {
"count": 10,
"fails": 10,
"last": "2025-07-03T09:56:07Z"
}
}
}
}
checking
value of state
isn't counted as downtime
and means that the peer, which has a probe configured as essential
,
hasn't been checked yet;
the unhealthy
value means that the peer is malfunctioning.
Both states also imply that the peer isn't included in load balancing.
For details of health probes, see upstream_probe.probes
:count
fails
last
queue
(PRO)#queue
object
with request queue counters:{
"queue": {
"queued": 20112,
"waiting": 1011,
"dropped": 6031,
"timedout": 560,
"overflows": 13
}
}
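The queue counters distinguish requests that left the queue without being forwarded (`dropped`, `timedout`) from the running total (`queued`), so a loss rate falls out directly. A sketch in Python over the example values above:

```python
import json

# Queue counters, copied from the example response above
queue = json.loads('{"queued": 20112, "waiting": 1011, "dropped": 6031, "timedout": 560, "overflows": 13}')

# Requests that left the queue without being forwarded to a peer
lost = queue["dropped"] + queue["timedout"]
loss_rate = lost / queue["queued"] if queue["queued"] else 0.0
print(f"{lost} of {queue['queued']} queued requests were lost ({loss_rate:.1%})")
```
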
queued
waiting
dropped
timedout
overflows
Stream upstream#
upstream upstream {
zone upstream 256k;
server backend.example.com service=_example._tcp resolve max_conns=5;
keepalive 4;
}
/status/stream/upstreams/<upstream>
#{
"peers": {
"192.168.16.4:1935": {
"server": "backend.example.com",
"service": "_example._tcp",
"backup": false,
"weight": 5,
"state": "up",
"selected": {
"current": 2,
"total": 232
},
"max_conns": 5,
"data": {
"sent": 543866,
"received": 27349934
},
"health": {
"fails": 0,
"unavailable": 0,
"downtime": 0
}
}
}
}
peers
server
service
slow_start (PRO 1.4.0+)
backup: true for backup server
weight
state:
busy: indicates that the number of requests to the server has reached the limit set by max_conns, and no new requests are sent
down: manually disabled, no requests are sent
recovering: recovering after a failure according to slow_start, more and more requests are sent over time
unavailable: reached the max_fails limit, only trial client requests are sent at intervals defined by fail_timeout
up: operational, requests are sent as usual
checking: configured as essential and being checked, only probe requests are sent
draining: similar to down, but requests from previously bound sessions (via sticky) are still sent
unhealthy: non-operational, only probe requests are sent
selected:
current
total
last
max_conns
data:
received
sent
health:
fails
unavailable: times the server became unavailable due to reaching the max_fails limit
downtime: total time the server was unavailable for selection
downstart: the time the server became unavailable, formatted as a date
connect_time (PRO 1.4.0+)
first_byte_time (PRO 1.4.0+)
last_byte_time (PRO 1.4.0+)
backup_switch (PRO 1.10.0+):
active
timeout
The health object also has a probes subobject that stores the peer's health probe counters, while the peer's state can also be checking and unhealthy, apart from the values listed in the table above:

{
"192.168.16.4:80": {
"state": "unhealthy",
"...": "...",
"health": {
"...": "...",
"probes": {
"count": 2,
"fails": 2,
"last": "2025-07-03T11:03:54Z"
}
}
}
}
The checking value of state means that a peer with a probe configured as essential hasn't been checked yet; the unhealthy value means that the peer is malfunctioning. Both states also imply that the peer isn't included in load balancing. For details of health probes, see upstream_probe.

probes:
count
fails
last
Dynamic Configuration API (PRO only)#
Angie PRO provides a /config section that enables dynamic updates to Angie's configuration in JSON with PUT, PATCH, and DELETE HTTP requests. All updates are atomic; new settings are applied as a whole, or none are applied at all. On error, Angie reports the reason.
Subsections of /config#
The /config section is available for the HTTP and stream modules; the number of settings eligible for dynamic configuration is steadily increasing.
/config/http/upstreams/<upstream>/servers/<name>#
<upstream> is the name of the upstream; to be configurable via /config, it must have a zone directive configured, defining a shared memory zone.
<name> identifies a server within the upstream, specified in the format <service>@<host>, where:
<service>@ is an optional service name, used for SRV record resolution.
<host> is the domain name of the service (if resolve is present) or its IP; an optional port can be defined here.

upstream backend {
server backend.example.com service=_http._tcp resolve;
server 127.0.0.1;
zone backend 1m;
}
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/_http._tcp@backend.example.com/
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/127.0.0.1:80/
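A small sketch of how such server keys can be built client-side (the helper server_key and its argument names are hypothetical, not part of the API; the <service>@<host> format itself is as described above):

```python
def server_key(host, service=None, port=None):
    """Build the <name> key used in
    /config/.../upstreams/<upstream>/servers/<name> paths.

    service maps to the optional "<service>@" prefix (SRV resolution);
    port, when given, is appended to the host part.
    """
    key = host if port is None else f"{host}:{port}"
    return f"{service}@{key}" if service else key

# Matches the curl examples above.
print(server_key("backend.example.com", service="_http._tcp"))
# _http._tcp@backend.example.com
print(server_key("127.0.0.1", port=80))
# 127.0.0.1:80
```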
Each server supports the weight, max_conns, max_fails, fail_timeout, backup, down, and sid parameters, as described in server. There is no separate drain (PRO) parameter here; to enable drain, set down to the string value drain:

$ curl -X PUT -d \"drain\" \
http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/down
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com?defaults=on
{
"weight": 1,
"max_conns": 0,
"max_fails": 1,
"fail_timeout": 10,
"backup": true,
"down": false,
"sid": ""
}
When adding a new server to an upstream configured with the random method:

upstream backend {
zone backend 256k;
server backend.example.com resolve max_conns=5;
random;
}
an attempt to set backup fails:

$ curl -X PUT -d '{ "backup": true }' \
http://127.0.0.1/config/http/upstreams/backend/servers/backend1.example.com
{
"error": "FormatError",
"description": "The \"backup\" field is unknown."
}
The backup parameter can only be set when adding a new peer.
/config/stream/upstreams/<upstream>/servers/<name>#
<upstream> is the name of an upstream block; to configure it via /config, it must contain the zone directive that defines a shared memory zone.
<name> is the name of a server within <upstream>, specified in the format <service>@<host>, where:
<service>@ — optional part that specifies the service name for resolving SRV records.
<host> — domain name of the service (when resolve is present) or IP address; port can also be specified.

upstream backend {
server backend.example.com:8080 service=_example._tcp resolve;
server 127.0.0.1:12345;
zone backend 1m;
}
$ curl http://127.0.0.1/config/stream/upstreams/backend/servers/_example._tcp@backend.example.com:8080/
$ curl http://127.0.0.1/config/stream/upstreams/backend/servers/127.0.0.1:12345/
Each server supports the weight, max_conns, max_fails, fail_timeout, backup, and down parameters described in the server section. There is no separate drain parameter (PRO); to enable drain mode, set the down parameter to the string value drain:

$ curl -X PUT -d \"drain\" \
http://127.0.0.1/config/stream/upstreams/backend/servers/backend.example.com/down
$ curl http://127.0.0.1/config/stream/upstreams/backend/servers/backend.example.com?defaults=on
{
"weight": 1,
"max_conns": 0,
"max_fails": 1,
"fail_timeout": 10,
"backup": true,
"down": false
}
When adding a new server to an upstream configured with the random balancing method:

upstream backend {
zone backend 256k;
server backend.example.com resolve max_conns=5;
random;
}
an attempt to set the backup parameter fails:

$ curl -X PUT -d '{ "backup": true }' \
http://127.0.0.1/config/stream/upstreams/backend/servers/backend1.example.com
{
"error": "FormatError",
"description": "The \"backup\" field is unknown."
}
The backup parameter can only be set when adding a new server.
When deleting a server, you can add the connection_drop=<value> argument (PRO) to override the proxy_connection_drop settings:

$ curl -X DELETE \
http://127.0.0.1/config/stream/upstreams/backend/servers/backend1.example.com?connection_drop=off
$ curl -X DELETE \
http://127.0.0.1/config/stream/upstreams/backend/servers/backend2.example.com?connection_drop=on
$ curl -X DELETE \
http://127.0.0.1/config/stream/upstreams/backend/servers/backend3.example.com?connection_drop=1000
HTTP Methods#
http {
# ...
upstream backend {
zone upstream 256k;
server backend.example.com resolve max_conns=5;
# ...
}
server {
# ...
location /config/ {
api /config/;
allow 127.0.0.1;
deny all;
}
}
}
GET#
The GET HTTP method queries an entity at any existing path within /config, just as it does for other API sections. For example, the /config/http/upstreams/backend/servers/ upstream server branch enables these queries:

$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/max_conns
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
$ curl http://127.0.0.1/config/http/upstreams/backend/servers
$ # ...
$ curl http://127.0.0.1/config
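Conceptually, each extra path segment descends one level into the JSON tree. As an illustration (not part of the API: config_get is a hypothetical helper over an already-fetched /config payload, and the values shown are illustrative):

```python
def config_get(tree, path):
    """Resolve a /config-style path against a nested JSON object.

    Each slash-separated segment descends one level, mirroring how
    GET /config/http/upstreams/backend/servers/... addresses entities.
    """
    node = tree
    for segment in path.strip("/").split("/"):
        node = node[segment]
    return node

# A fragment shaped like a /config payload.
config = {"http": {"upstreams": {"backend": {
    "servers": {"backend.example.com": {"max_conns": 5}}}}}}

print(config_get(config, "http/upstreams/backend/servers/backend.example.com/max_conns"))
# 5
```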
Settings left at their default values can be included in the output with defaults=on:

$ curl http://127.0.0.1/config/http/upstreams/backend/servers?defaults=on
{
"backend.example.com": {
"weight": 1,
"max_conns": 5,
"max_fails": 1,
"fail_timeout": 10,
"backup": false,
"down": false,
"sid": ""
}
}
PUT#
The PUT HTTP method creates a new JSON entity at the specified path or entirely replaces an existing one. For example, to set the max_fails parameter, not specified earlier, of the backend.example.com server within the backend upstream:

$ curl -X PUT -d '2' \
http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/max_fails
{
"success": "Updated",
"description": "Existing configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com/max_fails\" was updated with replacing."
}
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
"max_conns": 5,
"max_fails": 2
}
DELETE#
The DELETE HTTP method deletes previously defined settings at the specified path; in doing so, it restores the default values, if any. For example, to delete the max_fails parameter of the backend.example.com server within the backend upstream:

$ curl -X DELETE \
http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com/max_fails
{
"success": "Reset",
"description": "Configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com/max_fails\" was reset to default."
}
A follow-up query with defaults=on shows the result:

$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com?defaults=on
{
"weight": 1,
"max_conns": 5,
"max_fails": 1,
"fail_timeout": 10,
"backup": false,
"down": false,
"sid": ""
}
The max_fails setting is back to its default value.
When deleting a server, you can add the connection_drop=<value> argument (PRO) to override the proxy_connection_drop, grpc_connection_drop, fastcgi_connection_drop, scgi_connection_drop, and uwsgi_connection_drop settings:

$ curl -X DELETE \
http://127.0.0.1/config/http/upstreams/backend/servers/backend1.example.com?connection_drop=off
$ curl -X DELETE \
http://127.0.0.1/config/http/upstreams/backend/servers/backend2.example.com?connection_drop=on
$ curl -X DELETE \
http://127.0.0.1/config/http/upstreams/backend/servers/backend3.example.com?connection_drop=1000
PATCH#
The PATCH HTTP method creates a new entity at the specified path or partially replaces or complements an existing one (RFC 7386) by supplying a JSON definition in its payload. For example, to update the down setting of the backend.example.com server within the backend upstream, leaving the rest intact:

$ curl -X PATCH -d '{ "down": true }' \
http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
"success": "Updated",
"description": "Existing configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com\" was updated with merging."
}
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
"max_conns": 5,
"down": true
}
The entity supplied in the PATCH request was merged with the existing one instead of overwriting it, as would be the case with PUT.
null values are a corner case; they are used to delete specific configuration items during such a merge. Deleting an item this way is similar to DELETE; in particular, it reinstates the default values. For example, to delete the down setting added earlier and simultaneously update max_conns:

$ curl -X PATCH -d '{ "down": null, "max_conns": 6 }' \
http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
"success": "Updated",
"description": "Existing configuration API entity \"/config/http/upstreams/backend/servers/backend.example.com\" was updated with merging."
}
$ curl http://127.0.0.1/config/http/upstreams/backend/servers/backend.example.com
{
"max_conns": 6
}
The down parameter, for which a null was supplied, was deleted; max_conns was updated.
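The merge semantics described above follow RFC 7386 (JSON Merge Patch). A minimal sketch of that algorithm in Python, for reference (not Angie's actual implementation):

```python
def merge_patch(target, patch):
    """Apply an RFC 7386 JSON Merge Patch to target.

    Objects are merged recursively; a null (None) value deletes the
    corresponding key; any non-object patch replaces target outright.
    """
    if not isinstance(patch, dict):
        return patch  # non-objects replace the target wholesale
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null deletes the item
        else:
            result[key] = merge_patch(result.get(key, {}), value)
    return result

# Mirrors the example above: delete "down", update "max_conns".
server = {"max_conns": 5, "down": True}
print(merge_patch(server, {"down": None, "max_conns": 6}))
# {'max_conns': 6}
```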