By default, all the data files for a table are located in a single directory. Partitioning is a technique for physically dividing the data during loading, based on values from one or more columns, to speed up queries that test those columns. For example, with a school_records table partitioned on a year column, there is a separate data directory for each different year value, and all the data for that year is stored in a data file in that directory. A query that includes a WHERE condition such as YEAR=1966, YEAR IN (1989,1999), or YEAR BETWEEN 1984 AND 1989 can examine only the data files from the appropriate directory or directories, greatly reducing the amount of data to read and test.
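For instance, a minimal sketch of such a table and a pruning-friendly query (the columns other than year are illustrative, not part of the original example):
-- year is a partition key column, so it is declared in the PARTITIONED BY
-- clause rather than in the main column list.
create table school_records (
  student_id bigint,
  last_name string,
  grade tinyint
)
partitioned by (year smallint)
stored as parquet;

-- Impala reads only the directories for the matching year values.
select count(*) from school_records where year between 1984 and 1989;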
See Attaching an External Partitioned Table to an HDFS Directory Structure for an example that illustrates the syntax for creating partitioned tables, the underlying directory structure in HDFS, and how to attach a partitioned Impala external table to data files stored elsewhere in HDFS.
Parquet is a popular format for partitioned Impala tables because it is well suited to handle huge data volumes. See Query Performance for Impala Parquet Tables for performance considerations for partitioned Parquet tables.
See NULL for details about how NULL values are represented in partitioned tables.
See Using Impala with Amazon S3 Object Store for details about setting up tables where some or all partitions reside on the Amazon Simple Storage Service (S3).
Partitioning is typically appropriate for:
- Tables that are always or almost always queried with conditions on the partitioning columns. In the school_records example, SELECT COUNT(*) FROM school_records WHERE year = 1985 is efficient, only examining a small fraction of the data; but SELECT COUNT(*) FROM school_records has to process a separate data file for each year, resulting in more overall work than in an unpartitioned table. You would probably not partition this way if you frequently queried the table based on last name, student ID, and so on without testing the year.
- Columns that have reasonable cardinality (number of different values). If a column has only a small number of values, for example Male or Female, you do not gain much efficiency by eliminating only about 50% of the data to read for each query. If a column has only a few rows matching each value, the number of directories to process can become a limiting factor, and the data file in each directory could be too small to take advantage of the Hadoop mechanism for transmitting data in multi-megabyte blocks. For example, you might partition census data by year, store sales data by year and month, and web traffic data by year, month, and day. (Some users with high volumes of incoming data might even partition down to the individual hour and minute.)
In terms of Impala SQL syntax, partitioning affects these statements (a combined sketch follows this list):
CREATE TABLE: you specify a PARTITIONED BY clause when creating the table to identify names and data types of the partitioning columns. These columns are not included in the main list of columns for the table. You can also include a PARTITIONED BY clause in a CREATE TABLE AS SELECT statement. This syntax lets you use a single statement to create a partitioned table, copy data into it, and create new partitions based on the values in the inserted data.
ALTER TABLE: you can add or drop partitions, to work with different portions of a huge data set. You can designate the HDFS directory that holds the data files for a specific partition. With data partitioned by date values, you might "age out" data that is no longer relevant. If you are creating a partition for the first time and specifying its location, for maximum efficiency, use a single ALTER TABLE statement including both the ADD PARTITION and LOCATION clauses, rather than separate statements with ADD PARTITION and SET LOCATION clauses.
INSERT: when you insert data into a partitioned table, you identify the partitioning columns. One or more values from each inserted row are not stored in data files, but instead determine the directory where that row value is stored. You can also specify which partition to load a set of data into, with INSERT OVERWRITE statements; you can replace the contents of a specific partition but you cannot append data to a specific partition.
By default, if an INSERT statement creates any new subdirectories underneath a partitioned table, those subdirectories are assigned default HDFS permissions for the impala user. To make each subdirectory have the same permissions as its parent directory in HDFS, specify the --insert_inherit_permissions startup option for the impalad daemon.
SELECT: although the syntax of the SELECT statement is the same whether or not the table is partitioned, the way queries interact with partitioned tables can have a dramatic impact on performance and scalability. The mechanism that lets queries skip certain partitions during a query is known as partition pruning; see Partition Pruning for Queries for details.
SHOW PARTITIONS: this statement displays information about each partition in a table. See SHOW Statement for details.
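The following sketch illustrates these statements for the school_records example; the table names (school_records_by_year, school_records_staging), the partition value, the directory path, and the inserted columns are illustrative rather than taken from the text above:
-- CREATE TABLE AS SELECT with PARTITIONED BY: create the table, copy data
-- into it, and create partitions from the inserted year values.
create table school_records_by_year
  partitioned by (year)
  as select student_id, last_name, year from school_records_staging;

-- ALTER TABLE: add a new partition and set its HDFS location in a single statement.
alter table school_records_by_year add partition (year=2016)
  location '/user/impala/school_records/year=2016';

-- INSERT OVERWRITE: replace the contents of one specific partition.
insert overwrite school_records_by_year partition (year=2016)
  select student_id, last_name from school_records_staging where year = 2016;

-- SHOW PARTITIONS: display information about each partition.
show partitions school_records_by_year;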
Specifying all the partition columns in a SQL statement is called static partitioning, because the statement affects a single predictable partition. For example, you use static partitioning with an ALTER TABLE statement that affects only one partition, or with an INSERT statement that inserts all values into the same partition:
insert into t1 partition(x=10, y='a') select c1 from some_other_table;
When you specify some partition key columns in an INSERT statement, but leave out the values, Impala determines which partition to insert into. This technique is called dynamic partitioning:
insert into t1 partition(x, y='b') select c1, c2 from some_other_table;
-- Create new partition if necessary based on variable year, month, and day; insert a single value.
insert into weather partition (year, month, day) select 'cloudy',2014,4,21;
-- Create new partition if necessary for specified year and month but variable day; insert a single value.
insert into weather partition (year=2014, month=04, day) select 'sunny',22;
The more key columns you specify in the PARTITION clause, the fewer columns you need in the SELECT list. The trailing columns in the SELECT list are substituted in order for the partition key columns with no specified value.
The REFRESH statement is typically used with partitioned tables when new data files are loaded into a partition by some non-Impala mechanism, such as a Hive or Spark job. The REFRESH statement makes Impala aware of the new data files so that they can be used in Impala queries. Because partitioned tables typically contain a high volume of data, the REFRESH operation for a full partitioned table can take significant time.
In Impala 2.7 and higher, you can include a PARTITION (partition_spec) clause in the REFRESH statement so that only a single partition is refreshed. For example, REFRESH big_table PARTITION (year=2017, month=9, day=30). The partition spec must include all the partition key columns. See REFRESH Statement for more details and examples of REFRESH syntax and usage.
Partition pruning refers to the mechanism where a query can skip reading the data files corresponding to one or more partitions. If you can arrange for queries to prune large numbers of unnecessary partitions from the query execution plan, the queries use fewer resources and are thus proportionally faster and more scalable.
For example, if a table is partitioned by columns YEAR, MONTH, and DAY, then WHERE clauses such as WHERE year = 2013, WHERE year < 2010, or WHERE year BETWEEN 1995 AND 1998 allow Impala to skip the data files in all partitions outside the specified range. Likewise, WHERE year = 2013 AND month BETWEEN 1 AND 3 could prune even more partitions, reading the data files for only a portion of one year.
To check the effectiveness of partition pruning for a query, check the EXPLAIN output for the query before running it. The following example shows a table with 3 partitions, where the query reads only 1 of them. The notation #partitions=1/3 in the EXPLAIN plan confirms that Impala can do the appropriate partition pruning.
[localhost:21000] > insert into census partition (year=2010) values ('Smith'),('Jones');
[localhost:21000] > insert into census partition (year=2011) values ('Smith'),('Jones'),('Doe');
[localhost:21000] > insert into census partition (year=2012) values ('Smith'),('Doe');
[localhost:21000] > select name from census where year=2010;
+-------+
| name  |
+-------+
| Smith |
| Jones |
+-------+
[localhost:21000] > explain select name from census where year=2010;
+------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------+
| PLAN FRAGMENT 0 |
| PARTITION: UNPARTITIONED |
| |
| 1:EXCHANGE |
| |
| PLAN FRAGMENT 1 |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 1 |
| UNPARTITIONED |
| |
| 0:SCAN HDFS |
| table=predicate_propagation.census #partitions=1/3 size=12B |
+------------------------------------------------------------------+
For a report of the volume of data that was actually read and processed at each stage of the query, check the output of the SUMMARY command immediately after running the query. For a more detailed analysis, look at the output of the PROFILE command; it includes this same summary report near the start of the profile output.
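For example, a minimal impala-shell sequence might look like the following (output omitted); SUMMARY and PROFILE both report on the most recently executed query:
[localhost:21000] > select name from census where year=2010;
[localhost:21000] > summary;
[localhost:21000] > profile;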
Impala can even do partition pruning in cases where the partition key column is not directly compared to a constant, by applying the transitive property to other parts of the WHERE clause. This technique is known as predicate propagation, and is available in Impala 1.2.2 and later. In this example, the census table includes another column indicating when the data was collected, which happens in 10-year intervals. Even though the query does not compare the partition key column (YEAR) to a constant value, Impala can deduce that only the partition YEAR=2010 is required, and again only reads 1 out of 3 partitions.
[localhost:21000] > drop table census;
[localhost:21000] > create table census (name string, census_year int) partitioned by (year int);
[localhost:21000] > insert into census partition (year=2010) values ('Smith',2010),('Jones',2010);
[localhost:21000] > insert into census partition (year=2011) values ('Smith',2020),('Jones',2020),('Doe',2020);
[localhost:21000] > insert into census partition (year=2012) values ('Smith',2020),('Doe',2020);
[localhost:21000] > select name from census where year = census_year and census_year=2010;
+-------+
| name  |
+-------+
| Smith |
| Jones |
+-------+
[localhost:21000] > explain select name from census where year = census_year and census_year=2010;
+------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------+
| PLAN FRAGMENT 0 |
| PARTITION: UNPARTITIONED |
| |
| 1:EXCHANGE |
| |
| PLAN FRAGMENT 1 |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 1 |
| UNPARTITIONED |
| |
| 0:SCAN HDFS |
| table=predicate_propagation.census #partitions=1/3 size=22B |
| predicates: census_year = 2010, year = census_year |
+------------------------------------------------------------------+
If a view applies to a partitioned table, any partition pruning considers the clauses on both the original query and any additional WHERE predicates in the query that refers to the view. Prior to Impala 1.4, only the WHERE clauses on the original query from the CREATE VIEW statement were used for partition pruning.
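For instance, a sketch using the census table from the earlier examples (the view name is illustrative); the predicate built into the view and the predicate on the query against the view are both considered when pruning partitions:
create view census_recent as select name, year from census where year >= 2011;
-- Both year >= 2011 (from the view) and year = 2012 (from this query)
-- participate in partition pruning.
select name from census_recent where year = 2012;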
In queries involving both analytic functions and partitioned tables, partition pruning only occurs for columns named in the PARTITION BY clause of the analytic function call. For example, if an analytic function query has a clause such as WHERE year=2016, the way to make the query prune all other YEAR partitions is to include PARTITION BY year in the analytic function call; for example, OVER (PARTITION BY year, other_columns other_analytic_clauses).
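For instance, a sketch along these lines (the table and the non-partition columns are illustrative) keeps pruning on year by naming it in the analytic PARTITION BY clause:
-- year appears both in the WHERE clause and in the analytic PARTITION BY,
-- so partitions for other years can still be pruned.
select name, score,
       rank() over (partition by year, school order by score desc) as rnk
from exam_results
where year = 2016;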
The original mechanism used to prune partitions is static partition pruning, in which the conditions in the WHERE clause are analyzed to determine in advance which partitions can be safely skipped. In Impala 2.5 and higher, Impala can perform dynamic partition pruning, where information about the partitions is collected during the query, and Impala prunes unnecessary partitions in ways that were impractical to predict in advance.
For example, if partition key columns are compared to literal values in a WHERE clause, Impala can perform static partition pruning during the planning phase to only read the relevant partitions:
-- The query only needs to read 3 partitions whose key values are known ahead of time.
-- That's static partition pruning.
SELECT COUNT(*) FROM sales_table WHERE year IN (2005, 2010, 2015);
Dynamic partition pruning involves using information only available at run time, such as the result of a subquery. The following example shows a simple case of dynamic partition pruning.
CREATE TABLE yy (s STRING) PARTITIONED BY (year INT);
INSERT INTO yy PARTITION (year) VALUES ('1999', 1999), ('2000', 2000),
('2001', 2001), ('2010', 2010), ('2018', 2018);
COMPUTE STATS yy;
CREATE TABLE yy2 (s STRING, year INT);
INSERT INTO yy2 VALUES ('1999', 1999), ('2000', 2000), ('2001', 2001);
COMPUTE STATS yy2;
-- The following query reads an unknown number of partitions, whose key values
-- are only known at run time. The runtime filters line shows the
-- information used in query fragment 02 to decide which partitions to skip.
EXPLAIN SELECT s FROM yy WHERE year IN (SELECT year FROM yy2);
+--------------------------------------------------------------------------+
| PLAN-ROOT SINK |
| | |
| 04:EXCHANGE [UNPARTITIONED] |
| | |
| 02:HASH JOIN [LEFT SEMI JOIN, BROADCAST] |
| | hash predicates: year = year |
| | runtime filters: RF000 <- year |
| | |
| |--03:EXCHANGE [BROADCAST] |
| | | |
| | 01:SCAN HDFS [default.yy2] |
| | partitions=1/1 files=1 size=620B |
| | |
| 00:SCAN HDFS [default.yy] |
| partitions=5/5 files=5 size=1.71KB |
| runtime filters: RF000 -> year |
+--------------------------------------------------------------------------+
SELECT s FROM yy WHERE year IN (SELECT year FROM yy2); -- Returns 3 rows from yy
PROFILE;
In the above example, Impala evaluates the subquery, sends the subquery results to all Impala nodes participating in the query, and then each impalad daemon uses the dynamic partition pruning optimization to read only the partitions with the relevant key values.
The output query plan from the EXPLAIN statement shows that runtime filters are enabled. The plan also shows that it expects to read all 5 partitions of the yy table, indicating that static partition pruning will not happen.
The Filter summary in the PROFILE output shows that the scan node filtered out files based on a runtime filter produced by dynamic partition pruning.
Filter 0 (1.00 MB):
- Files processed: 3
- Files rejected: 1 (1)
- Files total: 3 (3)
Dynamic partition pruning is especially effective for queries involving joins of several large partitioned tables. Evaluating the ON clauses of the join predicates might normally require reading data from all partitions of certain tables. If the WHERE clauses of the query refer to the partition key columns, Impala can now often skip reading many of the partitions while evaluating the ON clauses. The dynamic partition pruning optimization reduces the amount of I/O and the amount of intermediate data stored and transmitted across the network during the query.
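As an illustrative sketch of that pattern (the dimension table and its columns are made up), the values that survive the filter on the small table determine at run time which partitions of the large partitioned table actually need to be read:
-- sales_table is partitioned by year, as in the static pruning example above.
-- The qualifying year values are only known once calendar_dim is scanned,
-- so Impala relies on a runtime filter rather than static pruning.
select count(*)
from sales_table s
join calendar_dim c on s.year = c.year
where c.is_leap_year = true;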
When the spill-to-disk feature is activated for a join node within a query, Impala does not produce any runtime filters for that join operation on that host. Other join nodes within the query are not affected.
Dynamic partition pruning is part of the runtime filtering feature, which applies to other kinds of queries in addition to queries against partitioned tables. See Runtime Filtering for Impala Queries (Impala 2.5 or higher only) for full details about this feature.
The columns you choose as the partition keys should be ones that are frequently used to filter query results in important, large-scale queries. Popular examples are some combination of year, month, and day when the data has associated time values, and geographic region when the data is associated with some place.
For time-based data, split out the separate parts into their own columns, because Impala cannot partition based on a TIMESTAMP column.
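For example, an illustrative sketch of deriving the partition key columns from a raw TIMESTAMP during the load, using the built-in year(), month(), and day() functions (the web_traffic tables and their columns are made up):
-- The raw feed keeps the full TIMESTAMP; the partitioned table splits it
-- into separate year/month/day partition key columns.
create table web_traffic (url string, ts timestamp)
  partitioned by (year int, month int, day int);

insert into web_traffic partition (year, month, day)
  select url, ts, year(ts), month(ts), day(ts)
  from web_traffic_staging;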
The data type of the partition columns does not have a significant effect on the storage required, because the values from those columns are not stored in the data files; rather, they are represented as strings inside HDFS directory names.
In Impala 2.5 and higher, you can enable the OPTIMIZE_PARTITION_KEY_SCANS query option to speed up queries that only refer to partition key columns, such as SELECT MAX(year). This setting is not enabled by default because the query behavior is slightly different if the table contains partition directories without actual data inside. See OPTIMIZE_PARTITION_KEY_SCANS Query Option (Impala 2.5 or higher only) for details.
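For example, you might enable the option for a session before running such a query:
set optimize_partition_key_scans=1;
select max(year) from school_records;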
Partitioned tables can contain complex type columns. All the partition key columns must be scalar types.
Remember that when Impala queries data stored in HDFS, it is most efficient to use multi-megabyte files to take advantage of the HDFS block size. For Parquet tables, the block size (and ideal size of the data files) is 256 MB in Impala 2.0 and later. Therefore, avoid specifying too many partition key columns, which could result in individual partitions containing only small amounts of data. For example, if you receive 1 GB of data per day, you might partition by year, month, and day; while if you receive 5 GB of data per minute, you might partition by year, month, day, hour, and minute. If you have data with a geographic component, you might partition based on postal code if you have many megabytes of data for each postal code, but if not, you might partition by some larger region such as city, state, or country.
If you frequently run aggregate functions such as MIN(), MAX(), and COUNT(DISTINCT) on partition key columns, consider enabling the OPTIMIZE_PARTITION_KEY_SCANS query option, which optimizes such queries. This feature is available in Impala 2.5 and higher. See OPTIMIZE_PARTITION_KEY_SCANS Query Option (Impala 2.5 or higher only) for the kinds of queries that this option applies to, and slight differences in how partitions are evaluated when this query option is enabled.
Partitioned tables have the flexibility to use different file formats for different partitions. (For background information about the different file formats Impala supports, see How Impala Works with Hadoop File Formats.) For example, if you originally received data in text format, then received new data in RCFile format, and eventually began receiving data in Parquet format, all that data could reside in the same table for queries. You just need to ensure that the table is structured so that the data files that use different file formats reside in separate partitions.
For example, here is how you might switch from text to Parquet data as you receive data for different years:
[localhost:21000] > create table census (name string) partitioned by (year smallint);
[localhost:21000] > alter table census add partition (year=2012); -- Text format;
[localhost:21000] > alter table census add partition (year=2013); -- Text format switches to Parquet before data loaded;
[localhost:21000] > alter table census partition (year=2013) set fileformat parquet;
[localhost:21000] > insert into census partition (year=2012) values ('Smith'),('Jones'),('Lee'),('Singh');
[localhost:21000] > insert into census partition (year=2013) values ('Flores'),('Bogomolov'),('Cooper'),('Appiah');
At this point, the HDFS directory for year=2012 contains a text-format data file, while the HDFS directory for year=2013 contains a Parquet data file. As always, when loading non-trivial data, you would use INSERT ... SELECT or LOAD DATA to import data in large batches, rather than INSERT ... VALUES, which produces small files that are inefficient for real-world queries.
For other file types that Impala cannot create natively, you can switch into Hive and issue the ALTER TABLE ... SET FILEFORMAT statements and INSERT or LOAD DATA statements there. After switching back to Impala, issue a REFRESH table_name statement so that Impala recognizes any partitions or new data added through Hive.
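An illustrative sketch of that workflow, continuing the census example (the Avro format and the data file path are examples only):
-- In Hive: add a partition, switch it to a file format that Impala cannot
-- write natively, and load data into it.
alter table census add partition (year=2014);
alter table census partition (year=2014) set fileformat avro;
load data inpath '/tmp/census_2014.avro' into table census partition (year=2014);

-- Back in impala-shell: make Impala aware of the new partition and data.
refresh census;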
You can add, drop, set the expected file format, or set the HDFS location of the data files for individual partitions within an Impala table. See ALTER TABLE Statement for syntax details, and Setting Different File Formats for Partitions for tips on managing tables containing partitions with different file formats.
If you are creating a partition for the first time and specifying its location, for maximum efficiency, use a single ALTER TABLE statement including both the ADD PARTITION and LOCATION clauses, rather than separate statements with ADD PARTITION and SET LOCATION clauses.
What happens to the data files when a partition is dropped depends on whether the partitioned table is designated as internal or external. For an internal (managed) table, the data files are deleted. For example, if data in the partitioned table is a copy of raw data files stored elsewhere, you might save disk space by dropping older partitions that are no longer required for reporting, knowing that the original data is still available if needed later. For an external table, the data files are left alone. For example, dropping a partition without deleting the associated files lets Impala consider a smaller set of partitions, improving query efficiency and reducing overhead for DDL operations on the table; if the data is needed again later, you can add the partition again. See Overview of Impala Tables for details and examples.
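For example, to "age out" one year of the census table shown earlier (whether the underlying files are deleted depends on whether the table is internal or external):
-- For an internal table this also removes the partition's data files;
-- for an external table the files are left in place in HDFS.
alter table census drop partition (year=2012);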
Kudu tables use a more fine-grained partitioning scheme than tables containing HDFS data files. You specify a PARTITION BY clause with the CREATE TABLE statement to identify how to divide the values from the partition key columns.
See Partitioning for Kudu Tables for details and examples of the partitioning techniques for Kudu tables.
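As an illustrative sketch only (see the reference above for the authoritative syntax and options), a hash-partitioned Kudu table might be declared like this:
-- Kudu tables use PARTITION BY rather than PARTITIONED BY, and require
-- a primary key; the table and columns here are made up.
create table kudu_metrics (
  id bigint,
  metric string,
  value double,
  primary key (id)
)
partition by hash (id) partitions 4
stored as kudu;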
Because the COMPUTE STATS statement can be resource-intensive to run on a partitioned table as new partitions are added, Impala includes a variation of this statement that allows computing statistics on a per-partition basis such that stats can be incrementally updated when new partitions are added.
For a particular table, use either COMPUTE STATS or COMPUTE INCREMENTAL STATS, but never combine the two or alternate between them. If you switch from COMPUTE STATS to COMPUTE INCREMENTAL STATS during the lifetime of a table, or vice versa, drop all statistics by running DROP STATS before making the switch.
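For example, to switch the item_partitioned table used below from full statistics to incremental statistics:
-- Drop the existing full-table stats first, then start using incremental stats.
drop stats item_partitioned;
compute incremental stats item_partitioned;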
When you run COMPUTE INCREMENTAL STATS on a table for the first time, the statistics are computed again from scratch regardless of whether the table already has statistics. Therefore, expect a one-time resource-intensive operation for scanning the entire table when running COMPUTE INCREMENTAL STATS for the first time on a given table.
In Impala 3.0 and lower, approximately 400 bytes of metadata per column per partition are needed for caching. Tables with a large number of partitions and many columns can add up to a significant memory overhead, because the metadata must be cached on the catalogd host and on every impalad host that is eligible to be a coordinator. If this metadata for all tables exceeds 2 GB, you might experience service downtime. In Impala 3.1 and higher, the issue was alleviated with an improved handling of incremental stats.
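As a rough worked example using that estimate (the table dimensions are chosen purely for illustration): a table with 100 columns and 50,000 partitions would need about 400 bytes × 100 × 50,000 ≈ 2 GB of cached metadata for incremental stats alone, already at the threshold mentioned above.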
The COMPUTE INCREMENTAL STATS variation computes statistics only for partitions that were added or changed since the last COMPUTE INCREMENTAL STATS statement, rather than the entire table. It is typically used for tables where a full COMPUTE STATS operation takes too long to be practical each time a partition is added or dropped. See impala_perf_stats.html#perf_stats_incremental for full usage details.
-- Initially the table has no incremental stats, as indicated
-- 'false' under Incremental stats.
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books | -1 | 1 | 223.74KB | NOT CACHED | PARQUET | false
| Children | -1 | 1 | 230.05KB | NOT CACHED | PARQUET | false
| Electronics | -1 | 1 | 232.67KB | NOT CACHED | PARQUET | false
| Home | -1 | 1 | 232.56KB | NOT CACHED | PARQUET | false
| Jewelry | -1 | 1 | 223.72KB | NOT CACHED | PARQUET | false
| Men | -1 | 1 | 231.25KB | NOT CACHED | PARQUET | false
| Music | -1 | 1 | 237.90KB | NOT CACHED | PARQUET | false
| Shoes | -1 | 1 | 234.90KB | NOT CACHED | PARQUET | false
| Sports | -1 | 1 | 227.97KB | NOT CACHED | PARQUET | false
| Women | -1 | 1 | 226.27KB | NOT CACHED | PARQUET | false
| Total | -1 | 10 | 2.25MB | 0B | |
+-------------+-------+--------+----------+--------------+---------+------------------
-- After the first COMPUTE INCREMENTAL STATS,
-- all partitions have stats. The first
-- COMPUTE INCREMENTAL STATS scans the whole
-- table, discarding any previous stats from
-- a traditional COMPUTE STATS statement.
compute incremental stats item_partitioned;
+-------------------------------------------+
| summary |
+-------------------------------------------+
| Updated 10 partition(s) and 21 column(s). |
+-------------------------------------------+
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books | 1733 | 1 | 223.74KB | NOT CACHED | PARQUET | true
| Children | 1786 | 1 | 230.05KB | NOT CACHED | PARQUET | true
| Electronics | 1812 | 1 | 232.67KB | NOT CACHED | PARQUET | true
| Home | 1807 | 1 | 232.56KB | NOT CACHED | PARQUET | true
| Jewelry | 1740 | 1 | 223.72KB | NOT CACHED | PARQUET | true
| Men | 1811 | 1 | 231.25KB | NOT CACHED | PARQUET | true
| Music | 1860 | 1 | 237.90KB | NOT CACHED | PARQUET | true
| Shoes | 1835 | 1 | 234.90KB | NOT CACHED | PARQUET | true
| Sports | 1783 | 1 | 227.97KB | NOT CACHED | PARQUET | true
| Women | 1790 | 1 | 226.27KB | NOT CACHED | PARQUET | true
| Total | 17957 | 10 | 2.25MB | 0B | |
+-------------+-------+--------+----------+--------------+---------+------------------
-- Add a new partition...
alter table item_partitioned add partition (i_category='Camping');
-- Add or replace files in HDFS outside of Impala,
-- rendering the stats for a partition obsolete.
!import_data_into_sports_partition.sh
refresh item_partitioned;
drop incremental stats item_partitioned partition (i_category='Sports');
-- Now some partitions have incremental stats
-- and some do not.
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books | 1733 | 1 | 223.74KB | NOT CACHED | PARQUET | true
| Camping | -1 | 1 | 408.02KB | NOT CACHED | PARQUET | false
| Children | 1786 | 1 | 230.05KB | NOT CACHED | PARQUET | true
| Electronics | 1812 | 1 | 232.67KB | NOT CACHED | PARQUET | true
| Home | 1807 | 1 | 232.56KB | NOT CACHED | PARQUET | true
| Jewelry | 1740 | 1 | 223.72KB | NOT CACHED | PARQUET | true
| Men | 1811 | 1 | 231.25KB | NOT CACHED | PARQUET | true
| Music | 1860 | 1 | 237.90KB | NOT CACHED | PARQUET | true
| Shoes | 1835 | 1 | 234.90KB | NOT CACHED | PARQUET | true
| Sports | -1 | 1 | 227.97KB | NOT CACHED | PARQUET | false
| Women | 1790 | 1 | 226.27KB | NOT CACHED | PARQUET | true
| Total | 17957 | 11 | 2.65MB | 0B | |
+-------------+-------+--------+----------+--------------+---------+------------------
-- After another COMPUTE INCREMENTAL STATS,
-- all partitions have incremental stats, and only the 2
-- partitions without incremental stats were scanned.
compute incremental stats item_partitioned;
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated 2 partition(s) and 21 column(s). |
+------------------------------------------+
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books | 1733 | 1 | 223.74KB | NOT CACHED | PARQUET | true
| Camping | 5328 | 1 | 408.02KB | NOT CACHED | PARQUET | true
| Children | 1786 | 1 | 230.05KB | NOT CACHED | PARQUET | true
| Electronics | 1812 | 1 | 232.67KB | NOT CACHED | PARQUET | true
| Home | 1807 | 1 | 232.56KB | NOT CACHED | PARQUET | true
| Jewelry | 1740 | 1 | 223.72KB | NOT CACHED | PARQUET | true
| Men | 1811 | 1 | 231.25KB | NOT CACHED | PARQUET | true
| Music | 1860 | 1 | 237.90KB | NOT CACHED | PARQUET | true
| Shoes | 1835 | 1 | 234.90KB | NOT CACHED | PARQUET | true
| Sports | 1783 | 1 | 227.97KB | NOT CACHED | PARQUET | true
| Women | 1790 | 1 | 226.27KB | NOT CACHED | PARQUET | true
| Total | 17957 | 11 | 2.65MB | 0B | |
+-------------+-------+--------+----------+--------------+---------+------------------