Using Impala through a Proxy for High Availability
For most clusters that have multiple users and production availability requirements, you might set up a proxy server to relay requests to and from Impala.
Currently, the Impala statestore mechanism does not include such proxying and load-balancing features. Set up a software package of your choice to perform these functions.
Most considerations for load balancing and high availability apply to the impalad daemon. The statestored and catalogd daemons do not have special requirements for high availability, because problems with those daemons do not result in data loss. If those daemons become unavailable due to an outage on a particular host, you can stop the Impala service, delete the Impala StateStore and Impala Catalog Server roles, add the roles on a different host, and restart the Impala service.
Overview of Proxy Usage and Load Balancing for Impala
Using a load-balancing proxy server for Impala has the following advantages:
- Applications connect to a single well-known host and port, rather than keeping track of the hosts where the impalad daemon is running.
- If any host running the impalad daemon becomes unavailable, application connection requests still succeed because you always connect to the proxy server rather than a specific host running the impalad daemon.
- The coordinator node for each Impala query potentially requires more memory and CPU cycles than the other nodes that process the query. The proxy server can issue queries using round-robin scheduling, so that each connection uses a different coordinator node. This load-balancing technique lets the Impala nodes share this additional work, rather than concentrating it on a single machine.
The following setup steps are a general outline that apply to any load-balancing proxy software:
- Select and download the load-balancing proxy software or other load-balancing hardware appliance. It should only need to be installed and configured on a single host, typically on an edge node. Pick a host other than the DataNodes where impalad is running, because the intention is to protect against the possibility of one or more of these DataNodes becoming unavailable.
- Configure the load balancer (typically by editing a configuration file). In particular:
- Set up a port that the load balancer will listen on to relay Impala requests back and forth.
- See Choosing the Load-Balancing Algorithm for load balancing algorithm options.
- For Kerberized clusters, follow the instructions in Special Proxy Considerations for Clusters Using Kerberos.
- If you are using Hue or JDBC-based applications, you typically set up load balancing for both ports 21000 and 21050, because these client applications connect through port 21050 while the impala-shell command connects through port 21000. See Ports Used by Impala for when to use port 21000, 21050, or another value depending on what type of connections you are load balancing.
- Run the load-balancing proxy server, pointing it at the configuration file that you set up.
- For any scripts, jobs, or configuration settings for applications that formerly connected to a specific DataNode to run Impala SQL statements, change the connection information (such as the -i option in impala-shell) to point to the load balancer instead.
Choosing the Load-Balancing Algorithm
Load-balancing software offers a number of algorithms to distribute requests. Each algorithm has its own characteristics that make it suitable in some situations but not others.
- Leastconn
- Connects sessions to the coordinator with the fewest connections, to balance the load evenly. Typically used for workloads consisting of many independent, short-running queries. In configurations with only a few client machines, this setting can avoid having all requests go to only a small set of coordinators.
- Source IP Persistence
- Sessions from the same IP address always go to the same coordinator. A good choice for Impala workloads containing a mix of queries and DDL statements, such as CREATE TABLE and ALTER TABLE. Because the metadata changes from a DDL statement take time to propagate across the cluster, prefer Source IP Persistence in this case. If you are unable to choose Source IP Persistence, run the DDL and any subsequent queries that depend on its results through the same session, for example by running impala-shell -f script_file to submit several statements through a single session.
- Round-robin
- Distributes connections to all coordinator nodes. Typically not recommended for Impala.
You might need to perform benchmarks and load testing to determine which setting is optimal for your use case. Always set up two load-balancing algorithms: Source IP Persistence for Hue and Leastconn for other applications.
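To make the trade-offs concrete, here is a minimal Python sketch of how each algorithm picks a coordinator for an incoming connection. This is an illustration only, not part of Impala or any load balancer; the host names are hypothetical, and real proxies implement these policies internally.

```python
import hashlib
from itertools import cycle

coordinators = ["impala-host-1", "impala-host-2", "impala-host-3"]

# Leastconn: pick the coordinator with the fewest active connections,
# spreading many short-running queries evenly.
active = {host: 0 for host in coordinators}

def leastconn():
    host = min(coordinators, key=lambda h: active[h])
    active[host] += 1
    return host

# Source IP persistence: hash the client address so the same client
# always reaches the same coordinator, which keeps DDL statements and
# the queries that depend on them on one coordinator.
def source_persistence(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return coordinators[int(digest, 16) % len(coordinators)]

# Round-robin: hand out coordinators in a fixed rotation, with no
# affinity between a client and a coordinator.
_rotation = cycle(coordinators)

def round_robin():
    return next(_rotation)
```

The difference that matters for Impala is visible in the persistence function: two connections from the same IP address always map to the same coordinator, whereas round-robin deliberately avoids any such affinity.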
Special Proxy Considerations for Clusters Using Kerberos
In a cluster using Kerberos, applications check host credentials to verify that the host they are connecting to is the same one that is actually processing the request, to prevent man-in-the-middle attacks.
In Impala 2.11 and lower versions, when you enable a proxy server in a Kerberized cluster, users cannot connect to individual Impala daemons directly from impala-shell.
In Impala 2.12 and higher, if you enable a proxy server in a Kerberized cluster, users have the option to connect to Impala daemons directly from impala-shell, using the -b / --kerberos_host_fqdn option when starting impala-shell. This option can be used for testing or troubleshooting purposes, but it is not recommended for live production environments because it defeats the purpose of a load balancer/proxy.
impala-shell -i impalad-1.mydomain.com -k -b loadbalancer-1.mydomain.com
impala-shell --impalad=impalad-1.mydomain.com:21000 --kerberos --kerberos_host_fqdn=loadbalancer-1.mydomain.com
See impala-shell Configuration Options for information about the option.
To clarify that the load-balancing proxy server is legitimate, perform these extra Kerberos setup steps:
- This section assumes you are starting with a Kerberos-enabled cluster. See Enabling Kerberos Authentication for Impala for instructions for setting up Impala with Kerberos. See the documentation for your Apache Hadoop distribution for general steps to set up Kerberos.
- Choose the host you will use for the proxy server. Based on the Kerberos setup procedure, it should already have an entry impala/proxy_host@realm in its keytab. If not, go back over the initial Kerberos configuration steps for the keytab on each host running the impalad daemon.
- Copy the keytab file from the proxy host to all other hosts in the cluster that run the impalad daemon. (For optimal performance, impalad should be running on all DataNodes in the cluster.) Put the keytab file in a secure location on each of these other hosts.
- Add an entry impala/actual_hostname@realm to the keytab on each host running the impalad daemon.
- For each impalad node, merge the existing keytab with the proxy's keytab using ktutil, producing a new keytab file. For example:
$ ktutil
ktutil: read_kt proxy.keytab
ktutil: read_kt impala.keytab
ktutil: write_kt proxy_impala.keytab
ktutil: quit
- To verify that the keytabs are merged, run the command klist -k keytabfile, which lists the credentials for both principal and be_principal on all nodes.
- Make sure that the impala user has permission to read this merged keytab file.
- Change the following configuration settings for each host in the cluster that participates in the load balancing:
  - In the impalad option definition, add:
    --principal=impala/proxy_host@realm
    --be_principal=impala/actual_host@realm
    --keytab_file=path_to_merged_keytab
    Note: Every host has a different --be_principal because the actual hostname is different on each host. Specify the fully qualified domain name (FQDN) for the proxy host, not the IP address. Use the exact FQDN as returned by a reverse DNS lookup for the associated IP address.
  - Modify the startup options. See Modifying Impala Startup Options for the procedure to modify the startup options.
- Restart Impala to make the changes take effect. Restart the impalad daemons on all hosts in the cluster, as well as the statestored and catalogd daemons.
Example of Configuring HAProxy Load Balancer for Impala
If you are not already using a load-balancing proxy, you can experiment with HAProxy, a free, open source load balancer. This example shows how you might install and configure that load balancer on a Red Hat Enterprise Linux system.
- Install the load balancer:
yum install haproxy
- Set up the configuration file: /etc/haproxy/haproxy.cfg. See the following section for a sample configuration file.
- Run the load balancer (on a single host, preferably one not running impalad):
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg
- In impala-shell, JDBC applications, or ODBC applications, connect to the listener port of the proxy host rather than port 21000 or 21050 on a host actually running impalad. The sample configuration file sets haproxy to listen on port 25003, so you would send all requests to haproxy_host:25003.
This is the sample haproxy.cfg used in this example:
global
# To have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
#stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#
# You might need to adjust timing values to prevent timeouts.
#
# The timeout values should be dependent on how you use the cluster
# and how long your queries run.
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
maxconn 3000
timeout connect 5000
timeout client 3600s
timeout server 3600s
#
# This sets up the admin page for HA Proxy at port 25002.
#
listen stats :25002
balance
mode http
stats enable
stats auth username:password
# This is the setup for Impala. Impala clients connect to load_balancer_host:25003.
# HAProxy will balance connections among the list of servers listed below.
# The impalad servers listen on port 21000 for beeswax (impala-shell) or the original ODBC driver.
# For the JDBC or ODBC version 2.x driver, use port 21050 instead of 21000.
listen impala :25003
mode tcp
option tcplog
balance leastconn
server symbolic_name_1 impala-host-1.example.com:21000
server symbolic_name_2 impala-host-2.example.com:21000
server symbolic_name_3 impala-host-3.example.com:21000
server symbolic_name_4 impala-host-4.example.com:21000
# Setup for Hue or other JDBC-enabled applications.
# In particular, Hue requires sticky sessions.
# The application connects to load_balancer_host:21051, and HAProxy balances
# connections to the associated hosts, where Impala listens for JDBC
# requests on port 21050.
listen impalajdbc :21051
mode tcp
option tcplog
balance source
server symbolic_name_5 impala-host-1.example.com:21050 check
server symbolic_name_6 impala-host-2.example.com:21050 check
server symbolic_name_7 impala-host-3.example.com:21050 check
server symbolic_name_8 impala-host-4.example.com:21050 check
Note: Include the check option at the end of each server line in the above file to ensure that HAProxy can detect any unreachable impalad server, so that failover can succeed. Without the TCP check, you may hit an error when the impalad daemon to which Hue tries to connect is down.
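As an illustration, the per-server health check can also be made explicit with HAProxy's tcp-check support. The fragment below is a sketch, not a recommendation: the interval (inter, in milliseconds) and failure threshold (fall) values are illustrative, and the host names are the same placeholders used in the sample above.

```
listen impalajdbc :21051
    mode tcp
    option tcp-check
    balance source
    # 'check' enables health probes; 'inter' sets the probe interval,
    # 'fall' sets how many failed probes mark the server as down.
    server symbolic_name_5 impala-host-1.example.com:21050 check inter 2000 fall 3
    server symbolic_name_6 impala-host-2.example.com:21050 check inter 2000 fall 3
```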
If your JDBC or ODBC application connects to Impala through a load balancer such as haproxy, be cautious about reusing the connections. If the load balancer has set up connection timeout values, either use the connection frequently so that it never sits idle longer than the load balancer timeout value, or check the connection validity before using it and create a new one if the connection has been closed.
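The second approach above can be sketched in Python. The PooledConnection class below is a hypothetical illustration, not part of any Impala client library: it assumes the load balancer silently closes idle connections after a known timeout, and recreates the connection whenever it has sat idle longer than that.

```python
import time

class PooledConnection:
    """Wraps a connection created by connect_fn, recreating it when it
    has been idle longer than the load balancer's timeout."""

    def __init__(self, connect_fn, lb_timeout_seconds):
        self._connect_fn = connect_fn
        self._timeout = lb_timeout_seconds
        self._conn = connect_fn()
        self._last_used = time.monotonic()

    def get(self):
        # If the connection sat idle past the load balancer timeout,
        # assume the proxy closed it and open a fresh one.
        if time.monotonic() - self._last_used > self._timeout:
            self._conn = self._connect_fn()
        self._last_used = time.monotonic()
        return self._conn

# Example with a stand-in connect function (a real application would
# open a JDBC/ODBC/Thrift connection here):
counter = {"opens": 0}

def fake_connect():
    counter["opens"] += 1
    return object()

pool = PooledConnection(fake_connect, lb_timeout_seconds=0.05)
c1 = pool.get()      # still fresh: reuses the initial connection
time.sleep(0.1)      # exceed the (tiny, illustrative) timeout
c2 = pool.get()      # idle too long: a new connection is opened
```

In production the timeout passed to the pool should be slightly shorter than the load balancer's configured idle timeout (for example, the timeout client / timeout server values in the haproxy.cfg sample), so the client always gives up on a connection before the proxy does.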