java - Spark SQL with Hive table delta support
The latest version of Spark (2.1.0) lists major Hive functionality that is still unsupported, for example:

"Tables with buckets: bucket is the hash partitioning within a Hive table partition. Spark SQL doesn't support buckets yet."

This means that Hive tables with a bucketed column structure won't be loaded into a DataFrame and processed properly. This matters for delta support because Hive ACID (transactional) tables require bucketing and store their changes as delta files. There are workarounds to achieve such functionality via JDBC, temp tables, etc. The main question is: how can I get full control over tables materialized as deltas when using Spark SQL?
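For reference, here is a minimal sketch of the JDBC workaround mentioned above. It assumes a HiveServer2 endpoint at jdbc:hive2://localhost:10000 and a transactional table my_db.my_acid_table (both hypothetical names); the idea is that Hive itself merges base and delta files before Spark ever sees the rows, so Spark's missing bucket support doesn't matter for reads:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class HiveAcidViaJdbc {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("hive-acid-via-jdbc")
                .enableHiveSupport()
                .getOrCreate();

        // Read the bucketed/transactional Hive table through HiveServer2,
        // so Hive compacts/merges the delta files server-side.
        Dataset<Row> acidTable = spark.read()
                .format("jdbc")
                .option("url", "jdbc:hive2://localhost:10000/my_db")  // hypothetical endpoint
                .option("driver", "org.apache.hive.jdbc.HiveDriver")
                .option("dbtable", "my_acid_table")                   // hypothetical table
                .load();

        // Expose the snapshot as a temp view so the rest of the job
        // can keep using plain Spark SQL.
        acidTable.createOrReplaceTempView("my_acid_table_snapshot");
        spark.sql("SELECT COUNT(*) FROM my_acid_table_snapshot").show();

        spark.stop();
    }
}

Note that this is only a read-time workaround: everything is funneled through HiveServer2, so it loses Spark's parallel scan of the underlying files, and the Hive JDBC driver has its own quirks (for instance, column names may come back prefixed with the table name). It does not give Spark SQL native write or merge control over the delta files themselves, which is the open part of the question.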