Github user RotemShaul commented on the issue:
https://github.com/apache/spark/pull/13761
Sure - I thought it was closed since the PR got old and had conflicts.
This PR generalizes an already implemented solution to the problem of
Avro schema serialization overhead. The mechanism introduced in Spark 1.5
solves the problem for known, static schemas (via the
spark.avro.registeredSchemas property); this change adds the ability to
solve it for dynamic schemas as well, by using a schema repo.
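
For context, a minimal Scala sketch (not part of this PR) of the existing
static-schema path available since Spark 1.5, which relies on registering
schemas on the SparkConf up front; the `User` schema here is just an
illustrative placeholder:

```scala
import org.apache.avro.{Schema, SchemaBuilder}
import org.apache.spark.SparkConf

// A schema that is known at application submit time.
val userSchema: Schema = SchemaBuilder
  .record("User").fields()
  .requiredString("name")
  .requiredInt("age")
  .endRecord()

// Pre-registering the schema lets Kryo ship a fingerprint instead of the
// full schema text with every record, avoiding the serialization overhead.
val conf = new SparkConf()
  .setAppName("avro-static-schemas")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerAvroSchemas(userSchema) // only works for schemas known in advance
```

This only covers schemas that are fixed when the job is configured; schemas
that appear dynamically at runtime are what the repo-based approach in this
PR is meant to handle.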