Impala includes features that balance and maximize resources in your Apache Hadoop cluster. This topic describes how you can improve the efficiency of your Apache Hadoop cluster using those features.
The configuration options for admission control range from the simple (a single resource pool with a single set of options) to the complex (multiple resource pools with different options, each pool handling queries for a different set of users and groups).
To configure admission control, use a combination of startup options for the Impala daemon, and edit or create the configuration files fair-scheduler.xml and llama-site.xml.
For a straightforward configuration using a single resource pool named default, you can specify configuration options on the command line and skip the fair-scheduler.xml and llama-site.xml configuration files.
For an advanced configuration with multiple resource pools using different settings, set up the fair-scheduler.xml and llama-site.xml configuration files manually. Provide the paths to each one using the impalad command-line options --fair_scheduler_allocation_path and --llama_site_path, respectively.
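For example, a minimal sketch of the advanced case, pointing the Impala daemon at manually created files (the paths shown are illustrative):

impalad --fair_scheduler_allocation_path=/etc/impala/conf/fair-scheduler.xml \
        --llama_site_path=/etc/impala/conf/llama-site.xml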
The Impala admission control feature uses the Fair Scheduler configuration settings to determine how to map users and groups to different resource pools. For example, you might set up different resource pools with separate memory limits and maximum numbers of concurrent and queued queries for different categories of users within your organization. For details about all the Fair Scheduler configuration settings, see the Apache wiki.
The Impala admission control feature uses a small subset of possible settings from the llama-site.xml configuration file:
llama.am.throttling.maximum.placed.reservations.queue_name
llama.am.throttling.maximum.queued.reservations.queue_name
impala.admission-control.pool-default-query-options.queue_name
impala.admission-control.pool-queue-timeout-ms.queue_name
The llama.am.throttling.maximum.placed.reservations.queue_name setting specifies the maximum number of queries that are allowed to run concurrently in this pool.
The llama.am.throttling.maximum.queued.reservations.queue_name setting specifies the maximum number of queries that are allowed to be queued in this pool.
The impala.admission-control.pool-queue-timeout-ms setting specifies the timeout value for this pool, in milliseconds.
The impala.admission-control.pool-default-query-options setting designates the default query options for all queries that run in this pool. Its value is a comma-delimited string of 'key=value' pairs, such as 'key1=val1,key2=val2,...'. For example, this is where you might set a default memory limit for all queries in the pool, using an argument such as MEM_LIMIT=5G.
The impala.admission-control.* configuration settings are available in Impala 2.5 and higher.
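For instance, a minimal sketch of setting that MEM_LIMIT=5G default in llama-site.xml, for a hypothetical pool named root.production:

<property>
  <name>impala.admission-control.pool-default-query-options.root.production</name>
  <value>MEM_LIMIT=5G</value>
</property>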
Here are sample fair-scheduler.xml and llama-site.xml files that define resource pools root.default, root.development, root.production, and root.coords. These files define resource pools for Impala admission control and are separate from the similar fair-scheduler.xml that defines resource pools for YARN.
fair-scheduler.xml:
Although Impala does not use the vcores value, you must still specify it to satisfy YARN requirements for the file contents.
Each <aclSubmitApps> tag (other than the one for root) contains a comma-separated list of users, then a space, then a comma-separated list of groups; these are the users and groups allowed to submit Impala statements to the corresponding resource pool. If you leave the <aclSubmitApps> element empty for a pool, nobody can submit directly to that pool; child pools can specify their own <aclSubmitApps> values to authorize users and groups to submit to those pools.
The <onlyCoordinators> element is a boolean that defaults to false. If this value is set to true, the named request pool will contain only the coordinators and none of the executors. The main purpose of this setting is to enable running queries against the sys.impala_query_live table from workload management. Since the data for this table is stored in the memory of the coordinators, the executors do not need to be involved if the query only selects from this table.
To use an <onlyCoordinators> request pool, set the REQUEST_POOL query option to the name of the <onlyCoordinators> request pool. Caution: even though these request pools do not contain executors, they can still run any query. Thus, while the REQUEST_POOL query option is set to an only-coordinators request pool, queries have the potential to run the coordinators out of resources.
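For example, a session might issue SET REQUEST_POOL='coords'; (the exact pool name to use depends on your configuration; root.coords is defined in the sample files below) before selecting from sys.impala_query_live, so that only the coordinators serve the query.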
Caution: care must be taken when naming the <onlyCoordinators> request pool. If the name has the same prefix as a named executor group set, then queries may be automatically routed to the request pool. For example, if the coordinator is configured with --expected_executor_group_sets=prefix1:10, then an only-coordinators request pool named prefix1-onlycoords will potentially have queries routed to it.
<allocations>
  <queue name="root">
    <aclSubmitApps> </aclSubmitApps>
    <queue name="default">
      <maxResources>50000 mb, 0 vcores</maxResources>
      <aclSubmitApps>*</aclSubmitApps>
    </queue>
    <queue name="development">
      <maxResources>200000 mb, 0 vcores</maxResources>
      <aclSubmitApps>user1,user2 dev,ops,admin</aclSubmitApps>
    </queue>
    <queue name="production">
      <maxResources>1000000 mb, 0 vcores</maxResources>
      <aclSubmitApps> ops,admin</aclSubmitApps>
    </queue>
    <queue name="coords">
      <maxResources>1000000 mb, 0 vcores</maxResources>
      <aclSubmitApps> ops,admin</aclSubmitApps>
      <onlyCoordinators>true</onlyCoordinators>
    </queue>
  </queue>
  <queuePlacementPolicy>
    <rule name="specified" create="false"/>
    <rule name="default" />
  </queuePlacementPolicy>
</allocations>
llama-site.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>llama.am.throttling.maximum.placed.reservations.root.default</name>
    <value>10</value>
  </property>
  <property>
    <name>llama.am.throttling.maximum.queued.reservations.root.default</name>
    <value>50</value>
  </property>
  <property>
    <name>impala.admission-control.pool-default-query-options.root.default</name>
    <value>mem_limit=128m,query_timeout_s=20,max_io_buffers=10</value>
  </property>
  <property>
    <name>impala.admission-control.pool-queue-timeout-ms.root.default</name>
    <value>30000</value>
  </property>
  <property>
    <name>impala.admission-control.max-query-mem-limit.root.default.regularPool</name>
    <value>1610612736</value><!--1.5GB-->
  </property>
  <property>
    <name>impala.admission-control.min-query-mem-limit.root.default.regularPool</name>
    <value>52428800</value><!--50MB-->
  </property>
  <property>
    <name>impala.admission-control.clamp-mem-limit-query-option.root.default.regularPool</name>
    <value>true</value>
  </property>
  <property>
    <name>impala.admission-control.max-query-cpu-core-per-node-limit.root.default.regularPool</name>
    <value>8</value>
  </property>
  <property>
    <name>impala.admission-control.max-query-cpu-core-coordinator-limit.root.default.regularPool</name>
    <value>8</value>
  </property>
</configuration>
Since Impala 4.5, User Quotas can be used to define rules that limit the total number of queries that a user can run concurrently. The rules are based on either usernames or group membership. The username is the short username, and groups are Hadoop groups.
Queries are counted from when they are queued to when they are released. In a cluster without an Admission Daemon, the count of queries is synchronized across all coordinators through the Statestore. In this configuration, over-admission is possible.
If a query fails to pass the User Quota rules, it is rejected when it is submitted.
More specific rules override more general rules: a user rule overrides a group rule or a wildcard rule, and a group rule overrides a wildcard rule. Pool-level rules are evaluated first; if they pass, Root-level rules are then evaluated.
Note that the examples of User Quota rules below are incomplete XML snippets that would form part of an Admission Control configuration. In particular, they omit the essential aclSubmitApps tag.
<userQueryLimit>
  <user>alice</user>
  <user>bob</user>
  <totalCount>3</totalCount>
</userQueryLimit>
This rule limits the users ‘alice’ and ‘bob’ to running 3 queries each. The rule could be placed at the Pool level or the Root level.
<queue name="root">
<queue name="queueE">
<userQueryLimit>
<user>*</user>
<totalCount>2</totalCount>
</userQueryLimit>
</queue>
...
This is a wildcard rule on pool root.queueE that limits all users to running 2 queries in that pool.
<queue name="root">
<userQueryLimit>
<user>userD</user>
<totalCount>2</totalCount>
</userQueryLimit>
...
This is a Root level user rule that limits userD to 2 queries across all pools.
<queue name="root">
<queue name="queueE">
<groupQueryLimit>
<group>group1</group>
<group>group2</group>
<totalCount>2</totalCount>
</groupQueryLimit>
...
This rule limits any user in groups group1 or group2 to running 2 queries in pool root.queueE. Note this rule could also be placed at the root level.
<queue name="group-set-small">
<!-- A user not matched by any other rule can run 1 query in the small pool -->
<userQueryLimit>
<user>*</user>
<totalCount>1</totalCount>
</userQueryLimit>
<!-- The user 'alice' can run 4 queries in the small pool -->
<userQueryLimit>
<user>alice</user>
<totalCount>4</totalCount>
</userQueryLimit>
...
With this rule, the user ‘alice’ can run 4 queries in "root.group-set-small". The more specific User rule overrides the wildcard rule. A user that is not ‘alice’ would be able to run 1 query, per the wildcard rule.
<queue name="group-set-small">
<!-- A user not matched by any other rule can run 1 query in the small pool -->
<userQueryLimit>
<user>*</user>
<totalCount>1</totalCount>
</userQueryLimit>
<!-- Members of the group 'it' can run 2 queries in the small pool -->
<groupQueryLimit>
<group>it</group>
<totalCount>2</totalCount>
</groupQueryLimit>
...
Assuming the user ‘bob’ is in the ‘it’ group, they can run 2 queries in "root.group-set-small". The more specific Group rule overrides the wildcard rule.
<queue name="group-set-small">
<!-- Members of the group 'it' can run 2 queries in the small pool -->
<groupQueryLimit>
<group>it</group>
<totalCount>2</totalCount>
</groupQueryLimit>
<!-- The user 'fiona' can run 3 queries in the small pool -->
<userQueryLimit>
<user>fiona</user>
<totalCount>3</totalCount>
</userQueryLimit>
...
The user 'fiona' can run 3 queries in "root.group-set-small", despite also matching the 'it' group. The more specific user rule overrides the group rule.
<queue name="root">
<groupQueryLimit>
<group>support</group>
<totalCount>6</totalCount>
</groupQueryLimit>
<queue name="group-set-small">
<groupQueryLimit>
<group>dev</group>
<group>support</group>
<totalCount>5</totalCount>
</groupQueryLimit>
</queue>
...
Members of group 'support' are limited to running 6 queries across all the pools. Members of groups 'dev' and 'support' can run 5 queries in "root.group-set-small". For a query to run, both rules must pass. The Pool-level rule is evaluated first, so if both rules would fail, the error reported mentions that pool.
A user may be a member of more than one group, which means that more than one Group Rule may apply to a user at the Pool or Root level. In this case, at each level, the least restrictive rule is applied. For example, if the user is in 2 groups: 'it' and 'support', where group 'it' restricts her to running 2 queries, and group 'support' restricts her to running 5 queries, then the 'support' rule is the least restrictive, so she can run 5 queries in the pool.
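As a sketch, the two rules from that example might look like the following, where the pool name queueF and the counts are illustrative:

<queue name="queueF">
  <!-- Members of 'it' can run 2 queries in this pool -->
  <groupQueryLimit>
    <group>it</group>
    <totalCount>2</totalCount>
  </groupQueryLimit>
  <!-- Members of 'support' can run 5 queries in this pool -->
  <groupQueryLimit>
    <group>support</group>
    <totalCount>5</totalCount>
  </groupQueryLimit>
</queue>

A user in both groups is bounded by the least restrictive matching rule at this level, so she can run 5 concurrent queries in this pool.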
The following Impala configuration options let you adjust the settings of the admission control feature. When supplying the options on the impalad command line, prepend the option name with --.
queue_wait_timeout_ms
Maximum amount of time (in milliseconds) that a request waits to be admitted before timing out.
Type: int64
Default: 60000
default_pool_max_requests
Maximum number of concurrent outstanding requests allowed to run before incoming requests are queued. Because this limit applies cluster-wide, but each Impala node makes independent decisions to run queries immediately or queue them, it is a soft limit; the overall number of concurrent queries might be slightly higher during times of heavy load. Ignored if fair_scheduler_config_path and llama_site_path are set.
Type: int64
Default: -1, meaning unlimited (prior to Impala 2.5 the default was 200)
default_pool_max_queued
Maximum number of requests allowed to be queued before incoming requests are rejected. Because this limit applies cluster-wide, but each Impala node makes independent decisions to run queries immediately or queue them, it is a soft limit; the overall number of queued queries might be slightly higher during times of heavy load. Ignored if fair_scheduler_config_path and llama_site_path are set.
Type: int64
Default: unlimited
default_pool_mem_limit
Maximum amount of memory (across the entire cluster) that all outstanding requests in this pool can use before new requests to this pool are queued. Specified in bytes, megabytes, or gigabytes by a number followed by the suffix b (optional), m, or g, either uppercase or lowercase. You can specify floating-point values for megabytes and gigabytes, to represent fractional numbers such as 1.5. You can also specify the limit as a percentage of the physical memory by using the suffix %. 0 or no setting indicates no limit. Defaults to bytes if no unit is given. Because this limit applies cluster-wide, but each Impala node makes independent decisions to run queries immediately or queue them, it is a soft limit; the overall memory used by concurrent queries might be slightly higher during times of heavy load. Ignored if fair_scheduler_config_path and llama_site_path are set.
Note: Impala relies on the statistics produced by the COMPUTE STATS statement to estimate memory usage for each query. See COMPUTE STATS Statement for guidelines about how and when to use this statement.
Type: string
Default: "" (empty string, meaning unlimited)
disable_pool_max_requests
Disables all per-pool limits on the maximum number of running requests.
Type: Boolean
Default: false
disable_pool_mem_limits
Disables all per-pool memory limits.
Type: Boolean
Default: false
fair_scheduler_allocation_path
Path to the fair scheduler allocation file (fair-scheduler.xml).
Type: string
Default: "" (empty string)
Usage notes: Admission control only uses a small subset of the settings that can go in this file, as described above. For details about all the Fair Scheduler configuration settings, see the Apache wiki.
llama_site_path
Path to the Llama configuration file (llama-site.xml). If set, fair_scheduler_allocation_path must also be set.
Type: string
Default: "" (empty string)
Usage notes: Admission control only uses a few of the settings that can go in this file, as described above.
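As a sketch, a straightforward single-pool configuration supplied entirely on the impalad command line might look like this (all values are illustrative):

impalad --default_pool_max_requests=20 \
        --default_pool_max_queued=50 \
        --default_pool_mem_limit=40g \
        --queue_wait_timeout_ms=30000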