# Release 348 (14 Dec 2020)
## General

- Add support for the `DISTINCT` clause in aggregations within correlated subqueries. (#5904)
- Support `SHOW STATS` for arbitrary queries. (#3109)
- Improve query performance by reducing worker-to-worker communication overhead. (#6126)
- Improve performance of `ORDER BY ... LIMIT` queries. (#6072)
- Reduce memory pressure and improve performance of queries involving joins. (#6176)
- Fix `EXPLAIN ANALYZE` for certain queries that contain a broadcast join. (#6115)
- Fix planning failures for queries that contain outer joins and aggregations using the `FILTER (WHERE <condition>)` syntax. (#6141)
- Fix incorrect results when a correlated subquery in a join contains aggregation functions such as `array_agg` or `checksum`. (#6145)
- Fix incorrect query results when using `timestamp with time zone` constants with precision higher than 3 that describe the same point in time in different zones. (#6318)
- Fix duplicate query completion events if a query fails early. (#6103)
- Fix query failure when views are accessed and the current session does not specify a default schema and catalog. (#6294)
## Web UI

- Add support for OAuth2 authorization. (#5355)
- Fix invalid operator stats in the Stage Performance view. (#6114)
## JDBC driver

- Allow reading a `timestamp with time zone` value as a `ZonedDateTime` using the `ResultSet.getObject(int column, Class<?> type)` method. (#307)
- Accept `java.time.LocalDate` in `PreparedStatement.setObject(int, Object)`. (#6301)
- Extend `PreparedStatement.setObject(int, Object, int)` to allow setting `time` and `timestamp` values with precision higher than nanoseconds. This can be done by providing a `String` value representing a valid SQL literal. (#6300)
- Change the representation of a `row` value. `ResultSet.getObject` now returns an instance of the `io.prestosql.jdbc.Row` class, which better represents the returned value. Previously, a `row` value was represented as a `Map` instance, with unnamed fields being named like `field0`, `field1`, etc. You can access the previous behavior by invoking `getObject(column, Map.class)` on the `ResultSet` object. (#4588)
- Represent a `varbinary` value using its hex string representation in `ResultSet.getString`. Previously, the return value was useless, similar to `[B@2de82bf8`. (#6247)
- Report the precision of `time(p)`, `time(p) with time zone`, `timestamp(p)` and `timestamp(p) with time zone` in the `DECIMAL_DIGITS` column of the result set returned from `DatabaseMetaData#getColumns`. (#6307)
- Fix the value of the `DATA_TYPE` column for `time(p)` and `time(p) with time zone` in the result set returned from `DatabaseMetaData#getColumns`. (#6307)
- Fix failure when reading a `timestamp` or `timestamp with time zone` value with a seconds fraction greater than or equal to 999999999500 picoseconds. (#6147)
- Fix failure when reading a `time` value with a seconds fraction greater than or equal to 999999999500 picoseconds. (#6204)
- Fix element representation in arrays returned from `ResultSet.getArray`, making it consistent with `ResultSet.getObject`. Previously, the elements were represented using the internal client representation (e.g. `String`). (#6048)
- Fix `ResultSetMetaData.getColumnType` for `timestamp with time zone`. Previously, the type was miscategorized as `java.sql.Types.TIMESTAMP`. (#6251)
- Fix `ResultSetMetaData.getColumnType` for `time with time zone`. Previously, the type was miscategorized as `java.sql.Types.TIME`. (#6251)
- Fix failure when an instance of the `SphericalGeography` geospatial type is returned in the `ResultSet`. (#6240)
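The new `varbinary` rendering can be illustrated with a short, self-contained sketch. The `toHex` helper below is a hypothetical stand-in written for this example; the driver's exact output formatting may differ, but the idea is the same: a readable hex string instead of the opaque default `toString` of a Java byte array.

```java
// Illustration only: hex-encoding a binary value, in the spirit of the new
// ResultSet.getString behavior for varbinary. toHex is an assumption for
// demonstration, not the driver's implementation.
public class VarbinaryHexDemo {
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            // Mask to an unsigned int so negative bytes format correctly
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] value = {(byte) 0xCA, (byte) 0xFE, (byte) 0x01};
        System.out.println(toHex(value)); // prints "cafe01"
    }
}
```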
## CLI

- Fix rendering of `row` values with unnamed fields. Previously, they were printed using fake field names like `field0`, `field1`, etc. (#4587)
- Fix query progress reporting. (#6119)
- Fix failure when an instance of the `SphericalGeography` geospatial type is returned to the client. (#6238)
## Hive connector

- Allow configuring the S3 endpoint in security mapping. (#3869)
- Add support for S3 streaming uploads. Data is uploaded to S3 as it is written, rather than staged to a local temporary file. This feature is disabled by default, and can be enabled using the `hive.s3.streaming.enabled` configuration property. (#3712, #6201)
- Reduce load on the metastore when background cache refresh is enabled. (#6101, #6156)
- Verify that data is in the correct bucket file when reading bucketed tables. This is enabled by default, as incorrect bucketing can cause incorrect query results, but can be disabled using the `hive.validate-bucketing` configuration property or the `validate_bucketing` session property. (#6012)
- Allow fallback to the legacy Hive view translation logic via the `hive.legacy-hive-view-translation` configuration property or the `legacy_hive_view_translation` session property. (#6195)
- Add the deserializer class name to the split information exposed to the event listener. (#6006)
- Improve performance when querying tables that contain symlinks. (#6158, #6213)
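The Hive switches above live in the catalog properties file. A minimal sketch, using the property names from the notes above with illustrative values:

```properties
# Illustrative Hive catalog configuration fragment (values are examples).

# Enable S3 streaming uploads (disabled by default):
hive.s3.streaming.enabled=true

# Bucket validation is on by default; set to false only if you accept
# the risk of incorrect results from badly bucketed data:
hive.validate-bucketing=true

# Opt into the legacy Hive view translation logic if needed:
hive.legacy-hive-view-translation=true
```

The `validate_bucketing` and `legacy_hive_view_translation` session properties can override the corresponding configuration per session.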
## Iceberg connector

- Improve performance of queries containing filters on non-partition columns. Such filters are now used for optimizing split generation and table scan. (#4932)
- Add support for Google Cloud Storage and Azure Storage. (#6186)
## Kafka connector

- Allow writing `timestamp with time zone` values into columns using the `milliseconds-since-epoch` or `seconds-since-epoch` JSON encoders. (#6074)
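These two encoders serialize a point in time as a number. As a sketch of the underlying arithmetic (assuming the encoder names refer to standard Unix-epoch conversions, which is how `java.time` exposes them):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

// Sketch of the epoch arithmetic behind milliseconds-since-epoch and
// seconds-since-epoch encodings (assumed plain Unix-epoch conversions).
public class EpochEncodingDemo {
    public static void main(String[] args) {
        ZonedDateTime value = ZonedDateTime.of(2020, 12, 14, 0, 0, 0, 0, ZoneOffset.UTC);
        Instant instant = value.toInstant();
        long millis = instant.toEpochMilli();    // milliseconds-since-epoch value
        long seconds = instant.getEpochSecond(); // seconds-since-epoch value
        System.out.println(millis + " " + seconds);
    }
}
```

Note that the time zone information is lost in this encoding: both numbers identify only the instant, not the original zone.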
## Other connectors

- Fix ineffective table metadata caching for the PostgreSQL, MySQL, SQL Server, Redshift, MemSQL and Phoenix connectors. (#6081, #6167)