Overview of Impala Tables
Tables are the primary containers for data in Impala. They have the familiar row and column layout found in other database systems, plus some features, such as partitioning, that are often associated with higher-end data warehouse systems.
Logically, each table has a structure based on the definition of its columns, partitions, and other properties.
Physically, each table that uses HDFS storage is associated with a directory in HDFS. The table data consists of all the data files underneath that directory:
- Internal tables are managed by Impala, and use directories inside the designated Impala work area.
- External tables use arbitrary HDFS directories, where the data files are typically shared between different Hadoop components.
- Large-scale data is usually handled by partitioned tables, where the data files are divided among different HDFS subdirectories.
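For example, here is a minimal sketch of a partitioned table; the table and column names are hypothetical. Data inserted into each partition is stored under that partition's own HDFS subdirectory, such as .../logs/year=2017/month=6/:
-- Each (year, month) combination gets its own HDFS subdirectory.
create table logs (msg string) partitioned by (year int, month int);
alter table logs add partition (year=2017, month=6);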
Impala tables can also represent data that is stored in HBase, or in the Amazon S3 filesystem (Impala 2.2 or higher), or on Isilon storage devices (Impala 2.2.3 or higher). See Using Impala to Query HBase Tables, Using Impala with the Amazon S3 Filesystem, and Using Impala with Isilon Storage for details about those special kinds of tables.
Impala queries ignore files with extensions commonly used for temporary work files by Hadoop tools. Any files with extensions .tmp or .copying are not considered part of the Impala table. The suffix matching is case-insensitive, so for example Impala ignores both .copying and .COPYING suffixes.
Related statements: CREATE TABLE Statement, DROP TABLE Statement, ALTER TABLE Statement, INSERT Statement, LOAD DATA Statement, SELECT Statement
Internal Tables
The default kind of table produced by the CREATE TABLE statement is known as an internal table. (Its counterpart is the external table, produced by the CREATE EXTERNAL TABLE syntax.)
- Impala creates a directory in HDFS to hold the data files.
- You can create data in internal tables by issuing INSERT or LOAD DATA statements.
- If you add or replace data using HDFS operations, issue the REFRESH command in impala-shell so that Impala recognizes the changes in data files, block locations, and so on.
- When you issue a DROP TABLE statement, Impala physically removes all the data files from the directory.
- To see whether a table is internal or external, and its associated HDFS location, issue the statement DESCRIBE FORMATTED table_name. The Table Type field displays MANAGED_TABLE for internal tables and EXTERNAL_TABLE for external tables. The Location field displays the path of the table directory as an HDFS URI.
- When you issue an ALTER TABLE statement to rename an internal table, all data files are moved into the new HDFS directory for the table. The files are moved even if they were formerly in a directory outside the Impala data directory, for example in an internal table with a LOCATION attribute pointing to an outside HDFS directory.
Examples:
You can switch a table between internal and external with the ALTER TABLE statement:
-- Switch a table from internal to external.
ALTER TABLE table_name SET TBLPROPERTIES('EXTERNAL'='TRUE');
-- Switch a table from external to internal.
ALTER TABLE table_name SET TBLPROPERTIES('EXTERNAL'='FALSE');
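As a broader sketch of the internal table lifecycle described above (the table name t1 and its columns are hypothetical):
-- Impala creates and manages an HDFS directory for the table.
create table t1 (id bigint, s string);
-- Adding data through Impala writes files into that directory.
insert into t1 values (1, 'one'), (2, 'two');
-- If files arrive through HDFS operations instead, make Impala notice them.
refresh t1;
-- Table Type shows MANAGED_TABLE; Location shows the HDFS URI.
describe formatted t1;
-- Dropping the table physically removes the data files.
drop table t1;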
Related information:
External Tables, CREATE TABLE Statement, DROP TABLE Statement, ALTER TABLE Statement, DESCRIBE Statement
External Tables
The syntax CREATE EXTERNAL TABLE sets up an Impala table that points at existing data files, potentially in HDFS locations outside the normal Impala data directories. This operation saves the expense of importing the data into a new table when you already have the data files in a known location in HDFS, in the desired file format.
- You can use Impala to query the data in this table.
- You can create data in external tables by issuing INSERT or LOAD DATA statements.
- If you add or replace data using HDFS operations, issue the REFRESH command in impala-shell so that Impala recognizes the changes in data files, block locations, and so on.
- When you issue a DROP TABLE statement in Impala, that removes the connection that Impala has with the associated data files, but does not physically remove the underlying data. You can continue to use the data files with other Hadoop components and HDFS operations.
- To see whether a table is internal or external, and its associated HDFS location, issue the statement DESCRIBE FORMATTED table_name. The Table Type field displays MANAGED_TABLE for internal tables and EXTERNAL_TABLE for external tables. The Location field displays the path of the table directory as an HDFS URI.
- When you issue an ALTER TABLE statement to rename an external table, all data files are left in their original locations.
- You can point multiple external tables at the same HDFS directory by using the same LOCATION attribute for each one. The tables could have different column definitions, as long as the number and types of columns are compatible with the schema evolution considerations for the underlying file type. For example, for text data files, one table might define a certain column as a STRING while another defines the same column as a BIGINT.
Examples:
You can switch a table between internal and external with the ALTER TABLE statement:
-- Switch a table from internal to external.
ALTER TABLE table_name SET TBLPROPERTIES('EXTERNAL'='TRUE');
-- Switch a table from external to internal.
ALTER TABLE table_name SET TBLPROPERTIES('EXTERNAL'='FALSE');
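A minimal sketch of pointing an external table at existing files; the HDFS path and column definitions here are hypothetical:
-- The data files under /user/etl/sales_data remain owned by the ETL process.
create external table ext_sales (id bigint, amount double)
  row format delimited fields terminated by ','
  location '/user/etl/sales_data';
-- Dropping the table removes only the mapping; the files stay in place.
drop table ext_sales;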
Related information:
Internal Tables, CREATE TABLE Statement, DROP TABLE Statement, ALTER TABLE Statement, DESCRIBE Statement
File Formats
Each table has an associated file format, which determines how Impala interprets the associated data files. See How Impala Works with Hadoop File Formats for details.
You set the file format during the CREATE TABLE statement, or change it later using the ALTER TABLE statement. Partitioned tables can have a different file format for individual partitions, allowing you to change the file format used in your ETL process for new data without going back and reconverting all the existing data in the same table. Any INSERT statements produce new data files with the current file format of the table.
For existing data files, changing the file format of the table does not automatically do any data conversion. You must use TRUNCATE TABLE or INSERT OVERWRITE to remove any previous data files that use the old file format. Then you use the LOAD DATA statement, INSERT ... SELECT, or other mechanism to put data files of the correct format into the table.
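A minimal sketch of these mechanics, with hypothetical table names:
-- Set the file format at creation time.
create table sales (id bigint, amount double) stored as parquet;
-- Subsequent INSERT statements produce text files; existing Parquet
-- files are not converted.
alter table sales set fileformat textfile;
-- A partitioned table can use a different format for an individual partition.
create table events (msg string) partitioned by (year int) stored as textfile;
alter table events add partition (year=2017);
alter table events partition (year=2017) set fileformat parquet;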
The default file format, text, is the most flexible and easy to produce when you are just getting started with
Impala. The Parquet file format offers the highest query performance and uses compression to reduce storage
requirements; therefore, where practical, use Parquet for Impala tables with substantial amounts of data.
Also, the complex types (ARRAY, STRUCT, and MAP) available in Impala 2.3 and higher are currently only supported with the Parquet file type.
Based on your existing ETL workflow, you might use other file formats such as Avro, possibly doing a final
conversion step to Parquet to take advantage of its performance for analytic queries.
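A minimal sketch of such a conversion step, assuming a hypothetical Avro table named events_avro:
-- CREATE TABLE ... AS SELECT rewrites the data in Parquet format.
create table events_parquet stored as parquet as select * from events_avro;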
Kudu Tables
Tables stored in Apache Kudu are treated specially, because Kudu manages its data independently of HDFS files. Some information about the table is stored in the metastore database for use by Impala. Other table metadata is managed internally by Kudu.
When you create a Kudu table through Impala, it is assigned an internal Kudu table name of the form impala::db_name.table_name. You can see the Kudu-assigned name in the output of DESCRIBE FORMATTED, in the kudu.table_name field of the table properties.
The Kudu-assigned name remains the same even if you use ALTER TABLE to rename the Impala table or move it to a different Impala database. If you issue the statement ALTER TABLE impala_name SET TBLPROPERTIES('kudu.table_name' = 'different_kudu_table_name'), the effect is different depending on whether the Impala table was created with a regular CREATE TABLE statement (that is, an internal or managed table), or with a CREATE EXTERNAL TABLE statement (and therefore is an external table). Changing the kudu.table_name property of an internal table physically renames the underlying Kudu table to match the new name. Changing the kudu.table_name property of an external table switches which underlying Kudu table the Impala table refers to; the underlying Kudu table must already exist.
The following example shows what happens with both internal and external Kudu tables as the kudu.table_name property is changed. In practice, external tables are typically used to access underlying Kudu tables that were created outside of Impala, that is, through the Kudu API.
-- This is an internal table that we will create and then rename.
create table old_name (id bigint primary key, s string)
partition by hash(id) partitions 2 stored as kudu;
-- Initially, the name OLD_NAME is the same on the Impala and Kudu sides.
describe formatted old_name;
...
| Location: | hdfs://host.example.com:8020/path/user.db/old_name
| Table Type: | MANAGED_TABLE | NULL
| Table Parameters: | NULL | NULL
| | DO_NOT_UPDATE_STATS | true
| | kudu.master_addresses | vd0342.example.com
| | kudu.table_name | impala::user.old_name
-- ALTER TABLE RENAME TO changes the Impala name but not the underlying Kudu name.
alter table old_name rename to new_name;
describe formatted new_name;
| Location: | hdfs://host.example.com:8020/path/user.db/new_name
| Table Type: | MANAGED_TABLE | NULL
| Table Parameters: | NULL | NULL
| | DO_NOT_UPDATE_STATS | true
| | kudu.master_addresses | vd0342.example.com
| | kudu.table_name | impala::user.old_name
-- Setting TBLPROPERTIES changes the underlying Kudu name.
alter table new_name
set tblproperties('kudu.table_name' = 'impala::user.new_name');
describe formatted new_name;
| Location: | hdfs://host.example.com:8020/path/user.db/new_name
| Table Type: | MANAGED_TABLE | NULL
| Table Parameters: | NULL | NULL
| | DO_NOT_UPDATE_STATS | true
| | kudu.master_addresses | vd0342.example.com
| | kudu.table_name | impala::user.new_name
-- Put some data in the table to demonstrate how external tables can map to
-- different underlying Kudu tables.
insert into new_name values (0, 'zero'), (1, 'one'), (2, 'two');
-- This external table points to the same underlying Kudu table, NEW_NAME,
-- as we created above. No need to declare columns or other table aspects.
create external table kudu_table_alias stored as kudu
tblproperties('kudu.table_name' = 'impala::user.new_name');
-- The external table can fetch data from the NEW_NAME table that already
-- existed and already had data.
select * from kudu_table_alias limit 100;
+----+------+
| id | s |
+----+------+
| 1 | one |
| 0 | zero |
| 2 | two |
+----+------+
-- We cannot re-point the external table at a different underlying Kudu table
-- unless that other underlying Kudu table already exists.
alter table kudu_table_alias
set tblproperties('kudu.table_name' = 'impala::user.yet_another_name');
ERROR:
TableLoadingException: Error opening Kudu table 'impala::user.yet_another_name',
Kudu error: The table does not exist: table_name: "impala::user.yet_another_name"
-- Once the underlying Kudu table exists, we can re-point the external table to it.
create table yet_another_name (id bigint primary key, x int, y int, s string)
partition by hash(id) partitions 2 stored as kudu;
alter table kudu_table_alias
set tblproperties('kudu.table_name' = 'impala::user.yet_another_name');
-- Now no data is returned because this other table is empty.
select * from kudu_table_alias limit 100;
-- The Impala table automatically recognizes the table schema of the new table,
-- for example the extra X and Y columns not present in the original table.
describe kudu_table_alias;
+------+--------+---------+-------------+----------+...
| name | type | comment | primary_key | nullable |...
+------+--------+---------+-------------+----------+...
| id | bigint | | true | false |...
| x | int | | false | true |...
| y | int | | false | true |...
| s | string | | false | true |...
+------+--------+---------+-------------+----------+...
The SHOW TABLE STATS output for a Kudu table shows Kudu-specific details about the layout of the table. Instead of information about the number and sizes of files, the information is divided by the Kudu tablets. For each tablet, the output includes the fields # Rows (although this number is not currently computed), Start Key, Stop Key, Leader Replica, and # Replicas. The output of SHOW COLUMN STATS, illustrating the distribution of values within each column, is the same for Kudu tables as for HDFS-backed tables.
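For example, continuing with the table from the previous listing (output not shown, because tablet layout and replica placement vary by cluster):
show table stats kudu_table_alias;
show column stats kudu_table_alias;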
Kudu tables created entirely through Impala are internal tables; as described above, their names as represented within Kudu include the impala:: prefix and the Impala database name. External Kudu tables are those created by a non-Impala mechanism, such as a user application calling the Kudu APIs. For these tables, the CREATE EXTERNAL TABLE syntax lets you establish a mapping from Impala to the existing Kudu table:
CREATE EXTERNAL TABLE impala_name STORED AS KUDU
TBLPROPERTIES('kudu.table_name' = 'original_kudu_name');
External Kudu tables differ in one important way from other external
tables: adding or dropping a column or range partition changes the
data in the underlying Kudu table, in contrast to an HDFS-backed
external table where existing data files are left untouched.
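For example, a minimal sketch using the kudu_table_alias table from earlier (the added column name is hypothetical):
-- Adding a nullable column here also adds it to the underlying Kudu table.
alter table kudu_table_alias add columns (note string);
-- Dropping it removes the column and its data from the Kudu table as well.
alter table kudu_table_alias drop column note;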