Controls the amount of detail provided in the output of the EXPLAIN statement. The basic output can help you identify high-level performance issues such as scanning a higher volume of data or more partitions than you expect. The higher levels of detail show how intermediate results flow between nodes and how different SQL operations such as ORDER BY, GROUP BY, joins, and WHERE clauses are implemented within a distributed query.
Type: STRING or INT
Default: 1
Arguments:
The allowed range of numeric values for this option is 0 to 3:
0 or MINIMAL: A barebones list, one line per operation. Primarily useful for checking the join order in very long queries where the regular EXPLAIN output is too long to read easily.
1 or STANDARD: The default level of detail, showing the logical way that work is split up for the distributed query.
2 or EXTENDED: Includes additional detail about how the query planner uses statistics in its decision-making process, to understand how a query could be tuned by gathering statistics, using query hints, adding or removing predicates, and so on. In Impala 3.2 and higher, the output also includes the analyzed query with the cast information in the output header, and the implicit cast info in the Predicate section.
3 or VERBOSE: The maximum level of detail, showing how work is split up within each node into "query fragments" that are connected in a pipeline. This extra detail is primarily useful for low-level performance testing and tuning within Impala itself, rather than for rewriting the SQL code at the user level.
Prior to Impala 1.3, the allowed argument range for EXPLAIN_LEVEL was 0 to 1: level 0 had the mnemonic NORMAL, and level 1 was VERBOSE. In Impala 1.3 and higher, NORMAL is not a valid mnemonic value, and VERBOSE still applies to the highest level of detail but now corresponds to level 3. You might need to adjust the values if you have any older impala-shell script files that set the EXPLAIN_LEVEL query option.
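Because the option accepts either a number or its mnemonic, both of the following impala-shell commands request the same level of detail (shown here as an illustrative sketch in the same style as the examples below):
[localhost:21000] > set explain_level=3;
[localhost:21000] > set explain_level=VERBOSE;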
Changing the value of this option controls the amount of detail in the output of the EXPLAIN statement. The extended information from level 2 or 3 is especially useful during performance tuning, when you need to confirm whether the work for the query is distributed the way you expect, particularly for the most resource-intensive operations such as join queries against large tables, queries against tables with large numbers of partitions, and insert operations for Parquet tables. The extended information also helps to check estimated resource usage when you use the admission control or resource management features explained in Resource Management. See EXPLAIN Statement for the syntax of the EXPLAIN statement, and Using the EXPLAIN Plan for Performance Tuning for details about how to use the extended information.
Usage notes:
As always, read the EXPLAIN output from bottom to top. The lowest lines represent the initial work of the query (scanning data files), the lines in the middle represent calculations done on each node and how intermediate results are transmitted from one node to another, and the topmost lines represent the final results being sent back to the coordinator node.
The numbers in the left column are generated internally during the initial planning phase and do not represent the actual order of operations, so it is not significant if they appear out of order in the EXPLAIN output.
At all EXPLAIN levels, the plan contains a warning if any tables in the query are missing statistics. Use the COMPUTE STATS statement to gather statistics for each table and suppress this warning. See Table and Column Statistics for details about how the statistics help query performance.
The PROFILE command in impala-shell always starts with an explain plan showing full detail, the same as with EXPLAIN_LEVEL=3. After the explain plan comes the executive summary, the same output as produced by the SUMMARY command in impala-shell.
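For example, an illustrative impala-shell sequence is to run the query and then issue the PROFILE command; the profile output (omitted here) begins with the full explain plan followed by the executive summary:
[localhost:21000] > select count(*) from t1;
[localhost:21000] > profile;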
Examples:
These examples use a trivial, empty table to illustrate how the essential aspects of query planning are shown in EXPLAIN output:
[localhost:21000] > create table t1 (x int, s string);
[localhost:21000] > set explain_level=1;
[localhost:21000] > explain select count(*) from t1;
+------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=10.00MB VCores=1 |
| WARNING: The following tables are missing relevant table and/or column |
| statistics. |
| explain_plan.t1 |
| |
| 03:AGGREGATE [MERGE FINALIZE] |
| | output: sum(count(*)) |
| | |
| 02:EXCHANGE [PARTITION=UNPARTITIONED] |
| | |
| 01:AGGREGATE |
| | output: count(*) |
| | |
| 00:SCAN HDFS [explain_plan.t1] |
| partitions=1/1 size=0B |
+------------------------------------------------------------------------+
[localhost:21000] > explain select * from t1;
+------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=-9223372036854775808B VCores=0 |
| WARNING: The following tables are missing relevant table and/or column |
| statistics. |
| explain_plan.t1 |
| |
| 01:EXCHANGE [PARTITION=UNPARTITIONED] |
| | |
| 00:SCAN HDFS [explain_plan.t1] |
| partitions=1/1 size=0B |
+------------------------------------------------------------------------+
[localhost:21000] > set explain_level=2;
[localhost:21000] > explain select * from t1;
+------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=-9223372036854775808B VCores=0 |
| WARNING: The following tables are missing relevant table and/or column |
| statistics. |
| explain_plan.t1 |
| |
| 01:EXCHANGE [PARTITION=UNPARTITIONED] |
| | hosts=0 per-host-mem=unavailable |
| | tuple-ids=0 row-size=19B cardinality=unavailable |
| | |
| 00:SCAN HDFS [explain_plan.t1, PARTITION=RANDOM] |
| partitions=1/1 size=0B |
| table stats: unavailable |
| column stats: unavailable |
| hosts=0 per-host-mem=0B |
| tuple-ids=0 row-size=19B cardinality=unavailable |
+------------------------------------------------------------------------+
[localhost:21000] > set explain_level=3;
[localhost:21000] > explain select * from t1;
+------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=-9223372036854775808B VCores=0 |
| WARNING: The following tables are missing relevant table and/or column |
| statistics. |
| explain_plan.t1 |
| |
| F01:PLAN FRAGMENT [PARTITION=UNPARTITIONED] |
| 01:EXCHANGE [PARTITION=UNPARTITIONED] |
| hosts=0 per-host-mem=unavailable |
| tuple-ids=0 row-size=19B cardinality=unavailable |
| |
| F00:PLAN FRAGMENT [PARTITION=RANDOM] |
| DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, PARTITION=UNPARTITIONED] |
| 00:SCAN HDFS [explain_plan.t1, PARTITION=RANDOM] |
| partitions=1/1 size=0B |
| table stats: unavailable |
| column stats: unavailable |
| hosts=0 per-host-mem=0B |
| tuple-ids=0 row-size=19B cardinality=unavailable |
+------------------------------------------------------------------------+
As the warning message demonstrates, most of the information needed for Impala to do efficient query planning, and for you to understand the performance characteristics of the query, requires running the COMPUTE STATS statement for the table:
[localhost:21000] > compute stats t1;
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Updated 1 partition(s) and 2 column(s). |
+-----------------------------------------+
[localhost:21000] > explain select * from t1;
+------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=-9223372036854775808B VCores=0 |
| |
| F01:PLAN FRAGMENT [PARTITION=UNPARTITIONED] |
| 01:EXCHANGE [PARTITION=UNPARTITIONED] |
| hosts=0 per-host-mem=unavailable |
| tuple-ids=0 row-size=20B cardinality=0 |
| |
| F00:PLAN FRAGMENT [PARTITION=RANDOM] |
| DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, PARTITION=UNPARTITIONED] |
| 00:SCAN HDFS [explain_plan.t1, PARTITION=RANDOM] |
| partitions=1/1 size=0B |
| table stats: 0 rows total |
| column stats: all |
| hosts=0 per-host-mem=0B |
| tuple-ids=0 row-size=20B cardinality=0 |
+------------------------------------------------------------------------+
Joins and other complicated, multi-part queries are the ones where you most commonly need to examine the EXPLAIN output and customize the amount of detail in the output. This example shows the default EXPLAIN output for a three-way join query, then the equivalent output with a [SHUFFLE] hint to change the join mechanism between the first two tables from a broadcast join to a shuffle join.
[localhost:21000] > set explain_level=1;
[localhost:21000] > explain select one.*, two.*, three.* from t1 one, t1 two, t1 three where one.x = two.x and two.x = three.x;
+---------------------------------------------------------+
| Explain String |
+---------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=4.00GB VCores=3 |
| |
| 07:EXCHANGE [PARTITION=UNPARTITIONED] |
| | |
| 04:HASH JOIN [INNER JOIN, BROADCAST] |
| | hash predicates: two.x = three.x |
| | |
| |--06:EXCHANGE [BROADCAST] |
| | | |
| | 02:SCAN HDFS [explain_plan.t1 three] |
| | partitions=1/1 size=0B |
| | |
| 03:HASH JOIN [INNER JOIN, BROADCAST] |
| | hash predicates: one.x = two.x |
| | |
| |--05:EXCHANGE [BROADCAST] |
| | | |
| | 01:SCAN HDFS [explain_plan.t1 two] |
| | partitions=1/1 size=0B |
| | |
| 00:SCAN HDFS [explain_plan.t1 one] |
| partitions=1/1 size=0B |
+---------------------------------------------------------+
[localhost:21000] > explain select one.*, two.*, three.*
> from t1 one join [shuffle] t1 two join t1 three
> where one.x = two.x and two.x = three.x;
+---------------------------------------------------------+
| Explain String |
+---------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=4.00GB VCores=3 |
| |
| 08:EXCHANGE [PARTITION=UNPARTITIONED] |
| | |
| 04:HASH JOIN [INNER JOIN, BROADCAST] |
| | hash predicates: two.x = three.x |
| | |
| |--07:EXCHANGE [BROADCAST] |
| | | |
| | 02:SCAN HDFS [explain_plan.t1 three] |
| | partitions=1/1 size=0B |
| | |
| 03:HASH JOIN [INNER JOIN, PARTITIONED] |
| | hash predicates: one.x = two.x |
| | |
| |--06:EXCHANGE [PARTITION=HASH(two.x)] |
| | | |
| | 01:SCAN HDFS [explain_plan.t1 two] |
| | partitions=1/1 size=0B |
| | |
| 05:EXCHANGE [PARTITION=HASH(one.x)] |
| | |
| 00:SCAN HDFS [explain_plan.t1 one] |
| partitions=1/1 size=0B |
+---------------------------------------------------------+
For a join involving many different tables, the default EXPLAIN output might stretch over several pages, and the only details you care about might be the join order and the mechanism (broadcast or shuffle) for joining each pair of tables. In that case, you might set EXPLAIN_LEVEL to its lowest value of 0, to focus on just the join order and join mechanism for each stage. The following example shows how the rows from the first and second joined tables are hashed and divided among the nodes of the cluster for further filtering; then the entire contents of the third table are broadcast to all nodes for the final stage of join processing.
[localhost:21000] > set explain_level=0;
[localhost:21000] > explain select one.*, two.*, three.*
> from t1 one join [shuffle] t1 two join t1 three
> where one.x = two.x and two.x = three.x;
+---------------------------------------------------------+
| Explain String |
+---------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=4.00GB VCores=3 |
| |
| 08:EXCHANGE [PARTITION=UNPARTITIONED] |
| 04:HASH JOIN [INNER JOIN, BROADCAST] |
| |--07:EXCHANGE [BROADCAST] |
| | 02:SCAN HDFS [explain_plan.t1 three] |
| 03:HASH JOIN [INNER JOIN, PARTITIONED] |
| |--06:EXCHANGE [PARTITION=HASH(two.x)] |
| | 01:SCAN HDFS [explain_plan.t1 two] |
| 05:EXCHANGE [PARTITION=HASH(one.x)] |
| 00:SCAN HDFS [explain_plan.t1 one] |
+---------------------------------------------------------+