Admission Control and Query Queuing
Admission control is an Impala feature that imposes limits on concurrent SQL queries, to avoid resource usage spikes and out-of-memory conditions on busy clusters. The admission control feature lets you set an upper limit on the number of concurrent Impala queries and on the memory used by those queries. Any additional queries are queued until the earlier ones finish, rather than being cancelled or running slowly and causing contention. As other queries finish, the queued queries are allowed to proceed.
In Impala 2.5 and higher, you can specify these limits and thresholds for each pool rather than globally. That way, you can balance the resource usage and throughput between steady well-defined workloads, rare resource-intensive queries, and ad-hoc exploratory queries.
In addition to the threshold values for currently executing queries, you can place a limit on the number of queries that are queued (waiting) and on the amount of time they can wait before returning with an error. These queue settings let you ensure that queries do not wait indefinitely, so that you can detect and correct "starvation" scenarios.
Queries, DML statements, and some DDL statements, including CREATE TABLE AS SELECT and COMPUTE STATS, are affected by admission control.
On a busy cluster, you might find there is an optimal number of Impala queries that run concurrently. For example, when the I/O capacity is fully utilized by I/O-intensive queries, you might not find any throughput benefit in running more concurrent queries. By allowing some queries to run at full speed while others wait, rather than having all queries contend for resources and run slowly, admission control can result in higher overall throughput.
For another example, consider a memory-bound workload such as many large joins or aggregation queries. Each such query could briefly use many gigabytes of memory to process intermediate results. Because Impala by default cancels queries that exceed the specified memory limit, running multiple large-scale queries at once might require re-running some queries that are cancelled. In this case, admission control improves the reliability and stability of the overall workload by only allowing as many concurrent queries as the overall memory of the cluster can accommodate.
Concurrent Queries and Admission Control
One way to limit resource usage through admission control is to set an upper limit on the number of concurrent queries. This is the initial technique you might use when you do not have extensive information about memory usage for your workload. The settings can be specified separately for each dynamic resource pool.
- Max Running Queries
The maximum number of queries that can run concurrently in this pool. The default value is unlimited for Impala 2.5 or higher. (optional)
Any queries for this pool that exceed Max Running Queries are added to the admission control queue until other queries finish. You can use Max Running Queries in the early stages of resource management, when you do not have extensive data about query memory usage, to determine whether the cluster performs better overall when throttling is applied to Impala queries.
For a workload with many small queries, you typically specify a high value for this setting, or leave the default setting of "unlimited". For a workload with expensive queries, where some number of concurrent queries saturates the memory, I/O, CPU, or network capacity of the cluster, set the value low enough that the cluster resources are not overcommitted for Impala.
Once you have enabled memory-based admission control using other pool settings, you can still use Max Running Queries as a safeguard. If queries exceed either the total estimated memory or the maximum number of concurrent queries, they are added to the queue.
- Max Queued Queries
- Maximum number of queries that can be queued in this pool. The default value is 200 for Impala 2.1 or higher and 50 for previous versions of Impala. (optional)
- Queue Timeout
- The amount of time, in milliseconds, that a query waits in the
admission control queue for this pool before being canceled. The
default value is 60,000 milliseconds. In the following cases, Queue Timeout is not significant, and you can specify a high value to avoid canceling queries unexpectedly:
- In a low-concurrency workload where few or no queries are queued
- In an environment without a strict SLA, where it does not matter if queries occasionally take longer than usual because they are held in admission control
In a high-concurrency workload, especially for queries with a tight SLA, long wait times in admission control can cause a serious problem. For example, if a query needs to run in 10 seconds, and you have tuned it so that it runs in 8 seconds, it violates its SLA if it waits in the admission control queue longer than 2 seconds. In a case like this, set a low timeout value and monitor how many queries are cancelled because of timeouts. This technique helps you to discover capacity, tuning, and scaling problems early, and helps avoid wasting resources by running expensive queries that have already missed their SLA.
If you identify some queries that can have a high timeout value, and others that benefit from a low timeout value, you can create separate pools with different values for this setting.
You can combine these settings with the memory-based approach described in Memory Limits and Admission Control. If either the maximum number of or the expected memory usage of the concurrent queries is exceeded, subsequent queries are queued until the concurrent workload falls below the threshold again.
Memory Limits and Admission Control
Each dynamic resource pool can have an upper limit on the cluster-wide memory used by queries executing in that pool. This is the technique to use once you have a stable workload with well-understood memory requirements.
Use the following settings to manage memory-based admission control.
- Max Memory
The maximum amount of aggregate memory available across the cluster to all queries executing in this pool. This should be a portion of the aggregate configured memory for Impala daemons, which is shown in the settings dialog next to this option for convenience. Setting this to a non-zero value enables memory-based admission control.
Impala determines the expected maximum memory used by all queries in the pool and holds back any further queries that would result in Max Memory being exceeded.
You set Max Memory in the fair-scheduler.xml file with the maxResources tag. For example: <maxResources>2500 mb</maxResources>
If you specify Max Memory, you should specify the amount of memory to allocate to each query in this pool. You can do this in two ways:
- By setting Maximum Query Memory Limit and Minimum Query Memory Limit. This is preferred in Impala 3.1 and higher and gives Impala flexibility to set aside more memory for queries that are expected to be memory-hungry.
- By setting Default Query Memory Limit to the exact amount of memory that Impala should set aside for queries in that pool.
Note that in the following cases, Impala will rely entirely on memory estimates to determine how much memory to set aside for each query. This is not recommended because it can result in queries not running or being starved for memory if the estimates are inaccurate. And it can affect other queries running on the same node.
- Max Memory, Maximum Query Memory Limit, and Minimum Query Memory Limit are not set, and the MEM_LIMIT query option is not set for the query.
- Default Query Memory Limit is set to 0, and the MEM_LIMIT query option is not set for the query.
- Minimum Query Memory Limit and Maximum Query Memory Limit
These two options determine the minimum and maximum per-host memory limit that will be chosen by Impala Admission control for queries in this resource pool. If set, Impala Admission Control will choose a memory limit between the minimum and maximum values based on the per-host memory estimate for the query. The memory limit chosen determines the amount of memory that Impala Admission control will set aside for this query on each host that the query is running on. The aggregate memory across all of the hosts that the query is running on is counted against the pool’s Max Memory.
Minimum Query Memory Limit must be less than or equal to Maximum Query Memory Limit and Max Memory.
A user can override Impala's choice of memory limit by setting the MEM_LIMIT query option. If the Clamp MEM_LIMIT Query Option setting is set to TRUE and the user sets MEM_LIMIT to a value that is outside of the range specified by these two options, then the effective memory limit will be either the minimum or maximum, depending on whether MEM_LIMIT is lower than or higher than the range.
For example, assume a resource pool with the following parameters set:
- Minimum Query Memory Limit = 2 GB
- Maximum Query Memory Limit = 10 GB
If a user submits a query with the MEM_LIMIT query option set to 14 GB, the following would happen (a sketch at the end of this section illustrates this behavior):
- If Clamp MEM_LIMIT Query Option = true, the admission controller would override MEM_LIMIT with 10 GB and attempt admission using that value.
- If Clamp MEM_LIMIT Query Option = false, the admission controller would retain the MEM_LIMIT of 14 GB set by the user and attempt admission using that value.
- Default Query Memory Limit
- The default memory limit applied to queries executing in this pool when no explicit MEM_LIMIT query option is set. The memory limit chosen determines the amount of memory that Impala Admission control will set aside for this query on each host that the query is running on. The aggregate memory across all of the hosts that the query is running on is counted against the pool's Max Memory. This option is deprecated in Impala 3.1 and higher and is replaced by Maximum Query Memory Limit and Minimum Query Memory Limit. Do not set this if either Maximum Query Memory Limit or Minimum Query Memory Limit is set.
- Clamp MEM_LIMIT Query Option
- If this field is not selected, the MEM_LIMIT query option will not be bounded by the Maximum Query Memory Limit and the Minimum Query Memory Limit values specified for this resource pool. By default, this field is selected in Impala 3.1 and higher. The field is disabled if neither Minimum Query Memory Limit nor Maximum Query Memory Limit is set.
For example, consider the following scenario:
- The cluster is running impalad daemons on five hosts.
- A dynamic resource pool has Max Memory set to 100 GB.
- The Maximum Query Memory Limit for the pool is 10 GB and Minimum Query Memory Limit is 2 GB. Therefore, any query running in this pool could use up to 50 GB of memory (Maximum Query Memory Limit * number of Impala nodes).
- Impala will execute varying numbers of queries concurrently because queries may be given memory limits anywhere between 2 GB and 10 GB, depending on the estimated memory requirements. For example, Impala may execute up to 10 small queries with 2 GB memory limits or two large queries with 10 GB memory limits because that is what will fit in the 100 GB cluster-wide limit when executing on five hosts.
- The executing queries may use less memory than the per-host memory limit or the Max Memory cluster-wide limit if they do not need that much memory. In general this is not a problem so long as you are able to execute enough queries concurrently to meet your needs.
You can combine the memory-based settings with the upper limit on concurrent queries described in Concurrent Queries and Admission Control. If either the maximum number of or the expected memory usage of the concurrent queries is exceeded, subsequent queries are queued until the concurrent workload falls below the threshold again.
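The following impala-shell sketch illustrates the clamping behavior from the example above. The pool name reporting_pool, the table, and the MEM_LIMIT values are illustrative, assuming a pool configured with Minimum Query Memory Limit = 2 GB and Maximum Query Memory Limit = 10 GB:
-- Route this session's queries to the pool described above.
SET REQUEST_POOL=reporting_pool;
-- Ask for more memory than the pool's Maximum Query Memory Limit.
SET MEM_LIMIT=14g;
-- With Clamp MEM_LIMIT Query Option = true, admission control admits the query with a 10 GB per-host limit.
-- With Clamp MEM_LIMIT Query Option = false, the 14 GB value is used as-is for admission.
SELECT COUNT(*) FROM big_table;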
Setting Per-query Memory Limits
Use per-query memory limits to prevent queries from consuming excessive memory resources that impact other queries. We recommend that you set query memory limits whenever possible.
If you set the Max Memory for a resource pool, Impala attempts to throttle queries if there is not enough memory to run them within the specified resources.
Only use admission control with maximum memory resources if you can ensure there are query memory limits. Set the pool Maximum Query Memory Limit to be certain. You can override this setting with the MEM_LIMIT query option, if necessary.
Typically, you set query memory limits using the SET MEM_LIMIT=Xg; query option. When you find the right value for your business case, memory-based admission control works well. The potential downside is that queries that attempt to use more memory might perform poorly or even be cancelled.
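For example, a minimal impala-shell sketch (the 2g value and the table are illustrative; choose a limit that matches your workload):
-- Apply a 2 GB per-host memory limit to queries in this session.
SET MEM_LIMIT=2g;
-- This query is admitted against, and capped at, that per-host limit.
SELECT region, COUNT(*) FROM customers GROUP BY region;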
How Impala Admission Control Relates to Other Resource Management Tools
The admission control feature is similar in some ways to the YARN resource management framework. These features can be used separately or together. This section describes some similarities and differences, to help you decide which combination of resource management features to use for Impala.
Admission control is a lightweight, decentralized system that is suitable for workloads consisting primarily of Impala queries and other SQL statements. It sets "soft" limits that smooth out Impala memory usage during times of heavy load, rather than taking an all-or-nothing approach that cancels jobs that are too resource-intensive.
Because the admission control system does not interact with other Hadoop workloads such as MapReduce jobs, you might use YARN with static service pools on clusters where resources are shared between Impala and other Hadoop components. This configuration is recommended when using Impala in a multitenant cluster. Devote a percentage of cluster resources to Impala, and allocate another percentage for MapReduce and other batch-style workloads. Let admission control handle the concurrency and memory usage for the Impala work within the cluster, and let YARN manage the work for other components within the cluster. In this scenario, Impala's resources are not managed by YARN.
The Impala admission control feature uses the same configuration mechanism as the YARN resource manager to map users to pools and authenticate them.
Although the Impala admission control feature uses a fair-scheduler.xml configuration file behind the scenes, this file does not depend on which scheduler is used for YARN. You still use this file even when YARN is using the capacity scheduler.
How Impala Schedules and Enforces Limits on Concurrent Queries
The admission control system is decentralized, embedded in each Impala daemon and communicating through the statestore mechanism. Although the limits you set for memory usage and number of concurrent queries apply cluster-wide, each Impala daemon makes its own decisions about whether to allow each query to run immediately or to queue it for a less-busy time. These decisions are fast, meaning the admission control mechanism is low-overhead, but might be imprecise during times of heavy load across many coordinators. There could be times when more queries are queued (in aggregate across the cluster) than the specified limit, or when the number of admitted queries exceeds the expected number. Thus, you typically err on the high side for the size of the queue, because there is not a big penalty for having a large number of queued queries; and you typically err on the low side for configuring memory resources, to leave some headroom in case more queries are admitted than expected, without running out of memory and being cancelled as a result.
To avoid a large backlog of queued requests, you can set an upper limit on the size of the queue for queries that are queued. When the number of queued queries exceeds this limit, further queries are cancelled rather than being queued. You can also configure a timeout period per pool, after which queued queries are cancelled, to avoid indefinite waits. If a cluster reaches this state where queries are cancelled due to too many concurrent requests or long waits for query execution to begin, that is a signal for an administrator to take action, either by provisioning more resources, scheduling work on the cluster to smooth out the load, or by doing Impala performance tuning to enable higher throughput.
How Admission Control Works with Impala Clients (JDBC, ODBC, HiveServer2)
Most aspects of admission control work transparently with client interfaces such as JDBC and ODBC:
- If a SQL statement is put into a queue rather than running immediately, the API call blocks until the statement is dequeued and begins execution. At that point, the client program can request to fetch results, which might also block until results become available.
- If a SQL statement is cancelled because it has been queued for too long or because it exceeded the memory limit during execution, the error is returned to the client program with a descriptive error message.
In Impala 2.0 and higher, you can submit a SQL SET statement from the client application to change the REQUEST_POOL query option. This option lets you submit queries to different resource pools, as described in REQUEST_POOL Query Option.
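For example, a client application (or impala-shell) could issue a statement like the following before running its queries; the pool name ad_hoc_pool is illustrative:
-- Route subsequent statements in this session to a specific resource pool.
SET REQUEST_POOL=ad_hoc_pool;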
At any time, the set of queued queries could include queries submitted through multiple different Impala daemon hosts. All the queries submitted through a particular host will be executed in order, so a CREATE TABLE followed by an INSERT on the same table would succeed. Queries submitted through different hosts are not guaranteed to be executed in the order they were received. Therefore, if you are using load-balancing or other round-robin scheduling where different statements are submitted through different hosts, set up all table structures ahead of time so that the statements controlled by the queuing system are primarily queries, where order is not significant. Or, if a sequence of statements needs to happen in strict order (such as an INSERT followed by a SELECT), submit all those statements through a single session, while connected to the same Impala daemon host.
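For example, a dependent sequence like the following sketch should be submitted through one session against a single coordinator so that the statements execute in order (the table and columns are illustrative):
-- All three statements go through the same session and coordinator.
CREATE TABLE recent_orders (id BIGINT, total DECIMAL(12,2));
INSERT INTO recent_orders SELECT id, total FROM orders WHERE order_date >= '2024-01-01';
SELECT COUNT(*) FROM recent_orders;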
Admission control has the following limitations or special behavior when used with JDBC or ODBC applications:
- The other resource-related query options, RESERVATION_REQUEST_TIMEOUT and V_CPU_CORES, are no longer used. Those query options only applied to using Impala with Llama, which is no longer supported.
SQL and Schema Considerations for Admission Control
When queries complete quickly and are tuned for optimal memory usage, there is less chance of performance or capacity problems during times of heavy load. Before setting up admission control, tune your Impala queries to ensure that the query plans are efficient and the memory estimates are accurate. Understanding the nature of your workload, and which queries are the most resource-intensive, helps you to plan how to divide the queries into different pools and decide what limits to define for each pool.
For large tables, especially those involved in join queries, keep their statistics up to date after loading substantial amounts of new data or adding new partitions. Use the COMPUTE STATS statement for unpartitioned tables, and COMPUTE INCREMENTAL STATS for partitioned tables.
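For example (the table names are illustrative):
-- Unpartitioned table: recompute all table and column statistics.
COMPUTE STATS sales_fact;
-- Partitioned table: compute statistics only for new or changed partitions.
COMPUTE INCREMENTAL STATS web_logs;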
When you use dynamic resource pools with a Max Memory setting enabled, you typically override the memory estimates that Impala makes based on the statistics from the COMPUTE STATS statement. You can set the MEM_LIMIT query option within a particular session to set an upper memory limit for queries within that session, set a default MEM_LIMIT for all queries processed by an impalad instance, or set a default MEM_LIMIT for all queries assigned to a particular dynamic resource pool. By designating a consistent memory limit for a set of similar queries that use the same resource pool, you avoid unnecessary query queuing or out-of-memory conditions that can arise during high-concurrency workloads when memory estimates for some queries are inaccurate.
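As a sketch of that workflow in impala-shell (the query option values and the query are illustrative, not prescribed settings), you can inspect Impala's per-host memory estimate for a representative query and then choose a consistent MEM_LIMIT for similar queries:
-- Show a more detailed plan, including per-host resource estimates.
SET EXPLAIN_LEVEL=2;
EXPLAIN SELECT customer_id, SUM(amount) FROM sales GROUP BY customer_id;
-- If the estimate is far from what the query actually needs, pin the limit yourself.
SET MEM_LIMIT=4g;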
Follow other steps from Tuning Impala for Performance to tune your queries.
Guidelines for Using Admission Control
The limits imposed by admission control are decentrally managed "soft" limits. Each Impala coordinator node makes its own decisions about whether to allow queries to run immediately or to queue them. These decisions rely on information passed back and forth between nodes by the StateStore service. If a sudden surge in requests causes more queries than anticipated to run concurrently, then the throughput could decrease due to queries spilling to disk or contending for resources. Or queries could be cancelled if they exceed the MEM_LIMIT setting while running.
In impala-shell, you can also specify which resource pool to direct queries to by setting the REQUEST_POOL query option.
To see how admission control affected a particular query, examine its profile or summary output:
- Profile
The information is available through the PROFILE statement in impala-shell immediately after running a query in the shell, on the queries page of the Impala debug web UI, or in the Impala log file (basic information at log level 1, more detailed information at log level 2). The profile output contains details about the admission decision, such as whether the query was queued or not and which resource pool it was assigned to. It also includes the estimated and actual memory usage for the query, so you can fine-tune the configuration for the memory limits of the resource pools. (A sketch after this list shows how to retrieve it.)
- Summary
Starting in Impala 3.1, the information is available in impala-shell when the LIVE_PROGRESS or LIVE_SUMMARY query option is set to TRUE. You can also start an impala-shell session with the --live_progress or --live_summary flags to monitor all queries in that impala-shell session. The summary output includes the queuing status: whether the query was queued and, if so, the latest queuing reason. (See the sketch after this list.)
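For example, an impala-shell session might look like the following sketch (the table name is illustrative):
-- Show a live summary, including queuing status, while the query runs.
SET LIVE_SUMMARY=TRUE;
SELECT COUNT(*) FROM sales_fact;
-- After the query finishes, show the full profile, including the admission decision.
PROFILE;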
For details about all the Fair Scheduler configuration settings, see Fair Scheduler Configuration, in particular the tags such as <queue> and <aclSubmitApps> to map users and groups to particular resource pools (queues).