Using Impala with the Azure Data Lake Store (ADLS)
You can use Impala to query data residing on the Azure Data Lake Store (ADLS) filesystem. This capability allows convenient access to a storage system that is remotely managed, accessible from anywhere, and integrated with various cloud-based services. Impala can query files in any supported file format from ADLS. The ADLS storage location can be for an entire table, or individual partitions in a partitioned table.
The default Impala tables use data files stored on HDFS, which are ideal for bulk loads and queries using
full-table scans. In contrast, queries against ADLS data are less performant, making ADLS suitable for holding
"cold" data that is only queried occasionally, while more frequently accessed "hot" data resides in
HDFS. In a partitioned table, you can set the LOCATION
attribute for individual partitions
to put some partitions on HDFS and others on ADLS, typically depending on the age of the data.
Prerequisites
These procedures presume that you have already set up an Azure account, configured an ADLS store, and configured your Hadoop cluster with appropriate credentials to be able to access ADLS. See the Microsoft Azure Data Lake Store documentation and the Apache Hadoop Azure Data Lake Support documentation for details.
How Impala SQL Statements Work with ADLS
Impala SQL statements work with data on ADLS as follows:
-
The CREATE TABLE or ALTER TABLE statements can specify that a table resides on the ADLS filesystem by encoding an adl:// prefix for the LOCATION attribute. ALTER TABLE can also set the LOCATION property for an individual partition, so that some data in a table resides on ADLS and other data in the same table resides on HDFS. The full format of the location URI is typically: adl://your_account.azuredatalakestore.net/rest_of_directory_path
-
Once a table or partition is designated as residing on ADLS, the SELECT statement transparently accesses the data files from the appropriate storage layer.
-
If the ADLS table is an internal table, the DROP TABLE statement removes the corresponding data files from ADLS when the table is dropped.
-
The TRUNCATE TABLE statement (Impala 2.3 or higher only) always removes the corresponding data files from ADLS when the table is truncated.
-
The LOAD DATA statement can move data files residing in HDFS into an ADLS table.
-
The INSERT statement, or the CREATE TABLE AS SELECT form of the CREATE TABLE statement, can copy data from an HDFS table or another ADLS table into an ADLS table.
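In outline, a minimal sketch of the pattern these statements enable, using a hypothetical table and store name (full worked sessions appear later in this topic):

-- Table data defaults to HDFS; no LOCATION clause needed:
CREATE TABLE logs (msg STRING) PARTITIONED BY (year INT);

-- Designate a single "cold" partition to reside on ADLS:
ALTER TABLE logs ADD PARTITION (year=2015)
  LOCATION 'adl://your_account.azuredatalakestore.net/logs/year=2015';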
For usage information about Impala SQL statements with ADLS tables, see Creating Impala Databases, Tables, and Partitions for Data Stored on ADLS and Using Impala DML Statements for ADLS Data.
Specifying Impala Credentials to Access Data in ADLS
To allow Impala to access data in ADLS, specify values for the following configuration settings in your core-site.xml file:
<property>
<name>dfs.adls.oauth2.access.token.provider.type</name>
<value>ClientCredential</value>
</property>
<property>
<name>dfs.adls.oauth2.client.id</name>
<value>your_client_id</value>
</property>
<property>
<name>dfs.adls.oauth2.credential</name>
<value>your_client_secret</value>
</property>
<property>
<name>dfs.adls.oauth2.refresh.url</name>
<value>refresh_URL</value>
</property>
Check if your Hadoop distribution or cluster management tool includes support for filling in and distributing credentials across the cluster in an automated way.
After specifying the credentials, restart both the Impala and Hive services. (Restarting Hive is required because Impala queries, CREATE TABLE statements, and so on go through the Hive metastore.)
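As a quick check that the credentials are in place before involving Impala, you can list the store from a cluster node with a generic Hadoop filesystem command (the store name here is a placeholder):

hadoop fs -ls adl://your_account.azuredatalakestore.net/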
Loading Data into ADLS for Impala Queries
If your ETL pipeline involves moving data into ADLS and then querying through Impala, you can either use Impala DML statements to create, move, or copy the data, or use the same data loading techniques as you would for non-Impala data.
Using Impala DML Statements for ADLS Data
In Impala 2.9 and higher, the Impala DML statements (INSERT, LOAD DATA, and CREATE TABLE AS SELECT) can write data into a table or partition that resides in the Azure Data Lake Store (ADLS). The syntax of the DML statements is the same as for any other tables, because the ADLS location for tables and partitions is specified by an adl:// prefix in the LOCATION attribute of CREATE TABLE or ALTER TABLE statements.

If you bring data into ADLS using the normal ADLS transfer mechanisms instead of Impala DML statements, issue a REFRESH statement for the table before using Impala to query the ADLS data.
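The following sketch illustrates the DML statements named above against hypothetical tables sales_hdfs (on HDFS) and sales_adls (on ADLS); adapt the names and paths to your own schema:

-- Copy data from an HDFS table into an existing ADLS table:
INSERT INTO sales_adls SELECT * FROM sales_hdfs;

-- Or create and populate an ADLS table in one step:
CREATE TABLE sales_adls_copy
  LOCATION 'adl://your_account.azuredatalakestore.net/sales_copy'
  AS SELECT * FROM sales_hdfs;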
Manually Loading Data into Impala Tables on ADLS
As an alternative, you can use the Microsoft-provided methods to bring data files into ADLS for querying through Impala. See the Microsoft ADLS documentation for details.
After you upload data files to a location already mapped to an Impala table or partition, or if you delete
files in ADLS from such a location, issue the REFRESH table_name
statement to make Impala aware of the new set of data files.
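For example, assuming a table named usa_cities_adls like the one created later in this topic:

-- After uploading or deleting files under the table's ADLS location:
REFRESH usa_cities_adls;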
Creating Impala Databases, Tables, and Partitions for Data Stored on ADLS
Impala reads data for a table or partition from ADLS based on the LOCATION attribute for the table or partition. Specify the ADLS details in the LOCATION clause of a CREATE TABLE or ALTER TABLE statement. The notation for the LOCATION clause is adl://store/path/to/file.

For a partitioned table, either specify a separate LOCATION clause for each new partition, or specify a base LOCATION for the table and set up a directory structure in ADLS to mirror the way Impala partitioned tables are structured in HDFS. Although, strictly speaking, ADLS filenames do not have directory paths, Impala treats ADLS filenames with / characters the same as HDFS pathnames that include directories.
To point a nonpartitioned table or an individual partition at ADLS, specify a single directory path in ADLS, which could be any arbitrary directory. To replicate the structure of an entire Impala partitioned table or database in ADLS requires more care, with directories and subdirectories nested and named to match the equivalent directory tree in HDFS. Consider setting up an empty staging area if necessary in HDFS, and recording the complete directory structure so that you can replicate it in ADLS.
For example, the following session creates a partitioned table where only a single partition resides on ADLS. The partitions for years 2013 and 2014 are located on HDFS. The partition for year 2015 includes a LOCATION attribute with an adl:// URL, and so refers to data residing on ADLS, under a specific path underneath the store impalademo.
[localhost:21000] > create database db_on_hdfs;
[localhost:21000] > use db_on_hdfs;
[localhost:21000] > create table mostly_on_hdfs (x int) partitioned by (year int);
[localhost:21000] > alter table mostly_on_hdfs add partition (year=2013);
[localhost:21000] > alter table mostly_on_hdfs add partition (year=2014);
[localhost:21000] > alter table mostly_on_hdfs add partition (year=2015)
> location 'adl://impalademo.azuredatalakestore.net/dir1/dir2/dir3/t1';
For convenience when working with multiple tables with data files stored in ADLS, you can create a database with a LOCATION attribute pointing to an ADLS path. Specify a URL of the form adl://store/root/path/for/database for the LOCATION attribute of the database. Any tables created inside that database automatically create directories underneath the one specified by the database LOCATION attribute.
The following session creates a database and two partitioned tables residing entirely on ADLS, one partitioned by a single column and the other partitioned by multiple columns. Because a LOCATION attribute with an adl:// URL is specified for the database, the tables inside that database are automatically created on ADLS underneath the database directory. To see the names of the associated subdirectories, including the partition key values, we use an ADLS client tool to examine how the directory structure is organized on ADLS. For example, Impala partition directories such as month=1 do not include leading zeroes, which sometimes appear in partition directories created through Hive.
[localhost:21000] > create database db_on_adls location 'adl://impalademo.azuredatalakestore.net/dir1/dir2/dir3';
[localhost:21000] > use db_on_adls;
[localhost:21000] > create table partitioned_on_adls (x int) partitioned by (year int);
[localhost:21000] > alter table partitioned_on_adls add partition (year=2013);
[localhost:21000] > alter table partitioned_on_adls add partition (year=2014);
[localhost:21000] > alter table partitioned_on_adls add partition (year=2015);
[localhost:21000] > ! hadoop fs -ls -R adl://impalademo.azuredatalakestore.net/dir1/dir2/dir3;
2015-03-17 13:56:34 0 dir1/dir2/dir3/
2015-03-17 16:43:28 0 dir1/dir2/dir3/partitioned_on_adls/
2015-03-17 16:43:49 0 dir1/dir2/dir3/partitioned_on_adls/year=2013/
2015-03-17 16:43:53 0 dir1/dir2/dir3/partitioned_on_adls/year=2014/
2015-03-17 16:43:58 0 dir1/dir2/dir3/partitioned_on_adls/year=2015/
[localhost:21000] > create table partitioned_multiple_keys (x int)
> partitioned by (year smallint, month tinyint, day tinyint);
[localhost:21000] > alter table partitioned_multiple_keys
> add partition (year=2015,month=1,day=1);
[localhost:21000] > alter table partitioned_multiple_keys
> add partition (year=2015,month=1,day=31);
[localhost:21000] > alter table partitioned_multiple_keys
> add partition (year=2015,month=2,day=28);
[localhost:21000] > ! hadoop fs -ls -R adl://impalademo.azuredatalakestore.net/dir1/dir2/dir3;
2015-03-17 13:56:34 0 dir1/dir2/dir3/
2015-03-17 16:47:13 0 dir1/dir2/dir3/partitioned_multiple_keys/
2015-03-17 16:47:44 0 dir1/dir2/dir3/partitioned_multiple_keys/year=2015/month=1/day=1/
2015-03-17 16:47:50 0 dir1/dir2/dir3/partitioned_multiple_keys/year=2015/month=1/day=31/
2015-03-17 16:47:57 0 dir1/dir2/dir3/partitioned_multiple_keys/year=2015/month=2/day=28/
2015-03-17 16:43:28 0 dir1/dir2/dir3/partitioned_on_adls/
2015-03-17 16:43:49 0 dir1/dir2/dir3/partitioned_on_adls/year=2013/
2015-03-17 16:43:53 0 dir1/dir2/dir3/partitioned_on_adls/year=2014/
2015-03-17 16:43:58 0 dir1/dir2/dir3/partitioned_on_adls/year=2015/
The CREATE DATABASE and CREATE TABLE statements create the associated directory paths if they do not already exist. You can specify multiple levels of directories, and the CREATE statement creates all appropriate levels, similar to using mkdir -p.
Use the standard ADLS file upload methods to actually put the data files into the right locations. You can also put the directory paths and data files in place before creating the associated Impala databases or tables, and Impala automatically uses the data from the appropriate location after the associated databases and tables are created.
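For example, you might stage the directories and files with generic Hadoop filesystem commands, which work against adl:// paths once the credentials are configured (the paths and filename here are placeholders):

hadoop fs -mkdir -p adl://your_account.azuredatalakestore.net/dir1/dir2/dir3/t1
hadoop fs -put local_data.parq adl://your_account.azuredatalakestore.net/dir1/dir2/dir3/t1/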
You can switch whether an existing table or partition points to data in HDFS or ADLS. For example, if you have an Impala table or partition pointing to data files in HDFS or ADLS, and you later transfer those data files to the other filesystem, use an ALTER TABLE statement to adjust the LOCATION attribute of the corresponding table or partition to reflect that change. Because Impala does not have an ALTER DATABASE statement, this location-switching technique is not practical for entire databases that have a custom LOCATION attribute.
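A sketch of such a switch, assuming the data files for one partition of the mostly_on_hdfs table shown earlier were moved from HDFS to ADLS:

-- Repoint the partition at the new ADLS location, then refresh the metadata:
ALTER TABLE mostly_on_hdfs PARTITION (year=2013)
  SET LOCATION 'adl://impalademo.azuredatalakestore.net/dir1/dir2/dir3/t1/year=2013';
REFRESH mostly_on_hdfs;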
Internal and External Tables Located on ADLS
Just as with tables located on HDFS storage, you can designate ADLS-based tables as either internal (managed by Impala) or external, by using the syntax CREATE TABLE or CREATE EXTERNAL TABLE respectively. When you drop an internal table, the files associated with the table are removed, even if they are on ADLS storage. When you drop an external table, the files associated with the table are left alone, and are still available for access by other tools or components. See Overview of Impala Tables for details.
If the data on ADLS is intended to be long-lived and accessed by other tools in addition to Impala, create any associated ADLS tables with the CREATE EXTERNAL TABLE syntax, so that the files are not deleted from ADLS when the table is dropped.

If the data on ADLS is only needed for querying by Impala and can be safely discarded once the Impala workflow is complete, create the associated ADLS tables using the CREATE TABLE syntax, so that dropping the table also deletes the corresponding data files on ADLS.
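In outline, the two variants look like this (hypothetical column lists and paths):

-- Files survive DROP TABLE; suitable for long-lived, shared data:
CREATE EXTERNAL TABLE shared_events (event STRING)
  LOCATION 'adl://your_account.azuredatalakestore.net/shared_events';

-- Files are deleted on DROP TABLE; suitable for disposable data:
CREATE TABLE scratch_events (event STRING)
  LOCATION 'adl://your_account.azuredatalakestore.net/scratch_events';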
For example, this session creates a table in ADLS with the same column layout as a table in HDFS, then examines the ADLS table and queries some data from it. The table in ADLS works the same as a table in HDFS as far as the expected file format of the data, table and column statistics, and other table properties. The only indication that it is not an HDFS table is the adl:// URL in the LOCATION property. Many data files can reside in the ADLS directory, and their combined contents form the table data. Because the data in this example is uploaded after the table is created, a REFRESH statement prompts Impala to update its cached information about the data files.
[localhost:21000] > create table usa_cities_adls like usa_cities location 'adl://impalademo.azuredatalakestore.net/usa_cities';
[localhost:21000] > desc usa_cities_adls;
+-------+----------+---------+
| name | type | comment |
+-------+----------+---------+
| id | smallint | |
| city | string | |
| state | string | |
+-------+----------+---------+
-- Now from a web browser, upload the same data file(s) to ADLS as in the HDFS table,
-- under the relevant store and path. If you already have the data in ADLS, you would
-- point the table LOCATION at an existing path.
[localhost:21000] > refresh usa_cities_adls;
[localhost:21000] > select count(*) from usa_cities_adls;
+----------+
| count(*) |
+----------+
| 289 |
+----------+
[localhost:21000] > select distinct state from usa_cities_adls limit 5;
+----------------------+
| state |
+----------------------+
| Louisiana |
| Minnesota |
| Georgia |
| Alaska |
| Ohio |
+----------------------+
[localhost:21000] > desc formatted usa_cities_adls;
+------------------------------+----------------------------------------------------+---------+
| name | type | comment |
+------------------------------+----------------------------------------------------+---------+
| # col_name | data_type | comment |
| | NULL | NULL |
| id | smallint | NULL |
| city | string | NULL |
| state | string | NULL |
| | NULL | NULL |
| # Detailed Table Information | NULL | NULL |
| Database: | adls_testing | NULL |
| Owner: | jrussell | NULL |
| CreateTime: | Mon Mar 16 11:36:25 PDT 2017 | NULL |
| LastAccessTime: | UNKNOWN | NULL |
| Protect Mode: | None | NULL |
| Retention: | 0 | NULL |
| Location: | adl://impalademo.azuredatalakestore.net/usa_cities | NULL |
| Table Type: | MANAGED_TABLE | NULL |
...
+------------------------------+----------------------------------------------------+---------+
In this case, we have already uploaded a Parquet file with a million rows of data to the sample_data directory underneath the impalademo store on ADLS. This session creates a table with matching column settings pointing to the corresponding location in ADLS, then queries the table. Because the data is already in place on ADLS when the table is created, no REFRESH statement is required.
[localhost:21000] > create table sample_data_adls
> (id bigint, val int, zerofill string,
> name string, assertion boolean, city string, state string)
> stored as parquet location 'adl://impalademo.azuredatalakestore.net/sample_data';
[localhost:21000] > select count(*) from sample_data_adls;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
[localhost:21000] > select count(*) howmany, assertion from sample_data_adls group by assertion;
+---------+-----------+
| howmany | assertion |
+---------+-----------+
| 667149 | true |
| 332851 | false |
+---------+-----------+
Running and Tuning Impala Queries for Data Stored on ADLS
Once the appropriate LOCATION attributes are set up at the table or partition level, you query data stored in ADLS exactly the same as data stored on HDFS or in HBase:
- Queries against ADLS data support all the same file formats as for HDFS data.
- Tables can be unpartitioned or partitioned. For partitioned tables, either manually construct paths in ADLS corresponding to the HDFS directories representing partition key values, or use ALTER TABLE ... ADD PARTITION to set up the appropriate paths in ADLS.
- HDFS, Kudu, and HBase tables can be joined to ADLS tables, or ADLS tables can be joined with each other.
- Authorization using the Sentry framework to control access to databases, tables, or columns works the same whether the data is in HDFS or in ADLS.
- The catalogd daemon caches metadata for both HDFS and ADLS tables. Use REFRESH and INVALIDATE METADATA for ADLS tables in the same situations where you would issue those statements for HDFS tables (see the sketch after this list).
- Queries against ADLS tables are subject to the same kinds of admission control and resource management as HDFS tables.
- Metadata about ADLS tables is stored in the same metastore database as for HDFS tables.
- You can set up views referring to ADLS tables, the same as for HDFS tables.
- The COMPUTE STATS, SHOW TABLE STATS, and SHOW COLUMN STATS statements work for ADLS tables also.
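A combined sketch of these metadata and statistics statements, using a hypothetical ADLS-backed table name adls_table:

-- After files change underneath an existing table:
REFRESH adls_table;
-- After the table is created or altered outside this impalad:
INVALIDATE METADATA adls_table;
-- Gather and inspect statistics, same as for HDFS tables:
COMPUTE STATS adls_table;
SHOW TABLE STATS adls_table;
SHOW COLUMN STATS adls_table;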
Understanding and Tuning Impala Query Performance for ADLS Data
Although Impala queries for data stored in ADLS might be less performant than queries against the equivalent data stored in HDFS, you can still do some tuning. Here are techniques you can use to interpret explain plans and profiles for queries against ADLS data, and tips to achieve the best performance possible for such queries.
All else being equal, performance is expected to be lower for queries running against data on ADLS rather
than HDFS. The actual mechanics of the SELECT
statement are somewhat different when the
data is in ADLS. Although the work is still distributed across the datanodes of the cluster, Impala might
parallelize the work for a distributed query differently for data on HDFS and ADLS. ADLS does not have the
same block notion as HDFS, so Impala uses heuristics to determine how to split up large ADLS files for
processing in parallel. Because all hosts can access any ADLS data file with equal efficiency, the
distribution of work might be different than for HDFS data, where the data blocks are physically read
using short-circuit local reads by hosts that contain the appropriate block replicas. Although the I/O to
read the ADLS data might be spread evenly across the hosts of the cluster, the fact that all data is
initially retrieved across the network means that the overall query performance is likely to be lower for
ADLS data than for HDFS data.
Because ADLS does not expose the block sizes of data files the way HDFS does, any Impala INSERT or CREATE TABLE AS SELECT statements use the PARQUET_FILE_SIZE query option setting to define the size of Parquet data files. (Using a large block size is more important for Parquet tables than for tables that use other file formats.)
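For example, to aim for larger Parquet files when writing to an ADLS table (the size value and table names are illustrative):

SET PARQUET_FILE_SIZE=256m;
INSERT OVERWRITE sales_adls SELECT * FROM sales_hdfs;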
When optimizing aspects of complex queries such as the join order, Impala treats tables on HDFS and ADLS the same way. Therefore, follow all the same tuning recommendations for ADLS tables as for HDFS ones, such as using the COMPUTE STATS statement to help Impala construct accurate estimates of row counts and cardinality. See Tuning Impala for Performance for details.
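For example, after computing statistics you can check the row count estimates that feed into the plan (hypothetical table name):

COMPUTE STATS sales_adls;
EXPLAIN SELECT COUNT(*) FROM sales_adls;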
In query profile reports, the numbers for BytesReadLocal, BytesReadShortCircuit, BytesReadDataNodeCached, and BytesReadRemoteUnexpected are blank because those metrics come from HDFS. If you do see any indications that a query against an ADLS table performed "remote read" operations, do not be alarmed. That is expected because, by definition, all the I/O for ADLS tables involves remote reads.
Restrictions on Impala Support for ADLS
Impala requires that the default filesystem for the cluster be HDFS. You cannot use ADLS as the only filesystem in the cluster.
Although ADLS is often used to store JSON-formatted data, the current Impala support for ADLS does not include directly querying JSON data. For Impala queries, use data files in one of the file formats listed in How Impala Works with Hadoop File Formats. If you have data in JSON format, you can prepare a flattened version of that data for querying by Impala as part of your ETL cycle.
You cannot use the ALTER TABLE ... SET CACHED statement for tables or partitions that are located in ADLS.
Best Practices for Using Impala with ADLS
The following guidelines represent best practices derived from testing and real-world experience with Impala on ADLS:
-
Any reference to an ADLS location must be fully qualified. (This rule applies when ADLS is not designated as the default filesystem.)
-
Set any appropriate configuration settings for impalad.
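For instance, a fully qualified ADLS reference spells out the scheme, store, and complete path (a scheme-less path would be resolved against the cluster's default HDFS filesystem); the table name here is a placeholder:

ALTER TABLE t1 SET LOCATION 'adl://your_account.azuredatalakestore.net/dir1/t1';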