Release 353 (5 Mar 2021)

General
* Add ClickHouse connector. (#4500)
* Extend support for correlated subqueries, including UNNEST (see the correlated UNNEST example after this list). (#6326, #6925, #6951)
* Add to_geojson_geometry() and from_geojson_geometry() functions (see the GeoJSON example after this list). (#6355)
* Add support for values of any integral type (tinyint, smallint, integer, bigint, decimal(p, 0)) in window frame bound specifications (see the window frame example after this list). (#6897)
* Improve query planning time for queries containing IN predicates with many elements. (#7015)
* Fix potential incorrect results when columns from a WITH clause are exposed with aliases. (#6839)
* Fix potential incorrect results for queries containing multiple < predicates. (#6896)
* Always show the SECURITY clause in SHOW CREATE VIEW. (#6913)
* Fix reporting of column references for aliased tables in QueryCompletionEvent. (#6972)
* Fix potential compiler failure when constructing an array with more than 128 elements. (#7014)
* Fail SHOW COLUMNS when column metadata cannot be retrieved. (#6958)
* Fix rendering of function references in EXPLAIN output. (#6703)
* Fix planning failure when a WITH clause contains hidden columns. (#6838)
* Prevent client hangs when OAuth2 authentication fails. (#6659)
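As an illustration of the extended correlated subquery support, the following sketch uses a hypothetical customers table with an array column phone_numbers; the scalar subquery references a column of the outer query inside UNNEST:

```sql
-- Hypothetical table: customers(name varchar, phone_numbers array(varchar)).
-- The scalar subquery is correlated: it refers to c.phone_numbers from the
-- enclosing query inside UNNEST.
SELECT
    c.name,
    (SELECT count(*) FROM UNNEST(c.phone_numbers) AS t(phone)) AS phone_count
FROM customers c;
```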
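A quick sketch of the new GeoJSON conversion functions; the table-free queries below use the existing geospatial functions ST_GeometryFromText and ST_AsText, and the outputs shown in comments are illustrative:

```sql
-- Convert a geometry to its GeoJSON representation.
SELECT to_geojson_geometry(ST_GeometryFromText('POINT (1 2)'));
-- e.g. {"type":"Point","coordinates":[1.0,2.0]}

-- Convert a GeoJSON string back into a geometry and render it as WKT.
SELECT ST_AsText(from_geojson_geometry('{"type":"Point","coordinates":[1.0,2.0]}'));
-- e.g. POINT (1 2)
```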
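A sketch of a window frame bound taken from a column rather than a literal, using a hypothetical readings table:

```sql
-- Hypothetical table: readings(sensor_id bigint, ts timestamp, value double, lookback smallint).
-- The frame offset "lookback" is a smallint column; frame bound values of any
-- integral type (tinyint, smallint, integer, bigint, decimal(p, 0)) are now accepted.
SELECT
    sensor_id,
    ts,
    avg(value) OVER (
        PARTITION BY sensor_id
        ORDER BY ts
        ROWS BETWEEN lookback PRECEDING AND CURRENT ROW) AS trailing_avg
FROM readings;
```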
Server RPM
* Allow configuring process environment variables through /etc/trino/env.sh. (#6635)
BigQuery connector
Hive connector
* Add support for current_user() in Hive-defined views. (#6720)
* Add support for reading and writing column statistics from the Glue metastore. (#6178)
* Improve parallelism of inserts into bucketed tables. Inserts into bucketed tables can now be parallelized within a task using the task.writer-count configuration property (see the example after this list). (#6924, #6866)
* Fix a failure when INSERT writes to a partition created by an earlier INSERT statement. (#6853)
* Fix handling of folders created using the AWS S3 Console. (#6992)
* Fix query failures on the information_schema.views table when there are failures translating Hive view definitions. (#6370)
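A minimal sketch of a parallelized bucketed insert. The table, schema, and source names are hypothetical, and it assumes that task_writer_count is the session-level equivalent of the task.writer-count configuration property:

```sql
-- Hypothetical bucketed table in a Hive catalog named "hive".
CREATE TABLE hive.default.events (
    event_id bigint,
    user_id bigint,
    payload varchar
)
WITH (bucketed_by = ARRAY['user_id'], bucket_count = 32);

-- Assumes task_writer_count is the session property backing task.writer-count;
-- with a value above 1, the insert below can use multiple writers per task.
SET SESSION task_writer_count = 8;

INSERT INTO hive.default.events
SELECT event_id, user_id, payload
FROM hive.default.events_staging;  -- hypothetical source table
```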
Iceberg connector
Kafka connector
MySQL connector
MemSQL connector
Phoenix connector
PostgreSQL connector
* Improve performance of queries with an ORDER BY ... LIMIT clause when the computation can be pushed down to the underlying database. This can be enabled by setting the topn-pushdown.enabled catalog configuration property. Enabling this feature can currently result in incorrect query results when sorting on char or varchar columns (see the example after this list). (#6847)
* Fix incorrect predicate pushdown for char and varchar columns with operators such as <>, <, <=, > and >=, caused by different case collation between Trino and PostgreSQL. (#3645)
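A sketch of the kind of query that benefits, assuming a hypothetical PostgreSQL catalog named postgresql whose catalog properties file sets topn-pushdown.enabled=true:

```sql
-- With topn-pushdown.enabled=true in the (hypothetical) "postgresql" catalog,
-- the ORDER BY ... LIMIT below can be computed by PostgreSQL rather than in Trino.
-- The sort key is numeric to avoid the char/varchar collation caveat noted above.
SELECT product_id, price
FROM postgresql.public.products
ORDER BY price DESC
LIMIT 10;
```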
Redshift connector
* Fix failure when reading a timestamp value with more than 3 decimal digits of the second fraction. (#6893)
SQL Server connector
Other connectors
* Reduce the number of JDBC connections opened during planning for the ClickHouse, Druid, MemSQL, MySQL, Oracle, Phoenix, Redshift, and SQL Server connectors. (#7069)
* Add experimental support for join pushdown in the PostgreSQL, MySQL, MemSQL, Oracle, and SQL Server connectors. It can be enabled with the experimental.join-pushdown.enabled=true catalog configuration property (see the example after this list). (#6874)
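A sketch of a join that is a candidate for pushdown, again assuming a hypothetical PostgreSQL catalog named postgresql with the experimental property enabled; both sides of the join come from the same catalog:

```sql
-- With experimental.join-pushdown.enabled=true in the (hypothetical) "postgresql"
-- catalog, a join between two tables of that catalog may be executed by the
-- remote database instead of being performed in Trino.
SELECT o.order_id, c.name
FROM postgresql.public.orders o
JOIN postgresql.public.customers c ON o.customer_id = c.customer_id;
```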
SPI
* Fix lazy blocks to call listeners that are registered after the top-level block is already loaded. Previously, such listeners were not called when the nested blocks were later loaded. (#6783)
* Fix a case where LazyBlock.getFullyLoadedBlock() would not load nested blocks when the top-level block was already loaded. (#6783)
* Do not include the coordinator node in the result of ConnectorAwareNodeManager.getWorkerNodes() when node-scheduler.include-coordinator is false. (#7007)
* The function name passed to ConnectorMetadata.applyAggregation() is now the canonical function name. Previously, if the query used a function alias, the alias name was passed. (#6189)
* Add support for redirecting table scans to multiple tables that are unioned together. (#6679)
* Change the return type of Range.intersect(Range). The method now returns Optional.empty() instead of throwing when ranges do not overlap. (#6976)
* Change the signature of ConnectorMetadata.applyJoin() to take an additional JoinStatistics argument. (#7000)
* Deprecate io.trino.spi.predicate.Marker.