[
https://issues.apache.org/jira/browse/HAWQ-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15215135#comment-15215135
]
ASF GitHub Bot commented on HAWQ-577:
-------------------------------------
Github user shivzone commented on a diff in the pull request:
https://github.com/apache/incubator-hawq/pull/522#discussion_r57655549
--- Diff:
pxf/pxf-service/src/main/java/org/apache/hawq/pxf/service/rest/MetadataResource.java
---
@@ -41,6 +41,7 @@
import org.apache.hawq.pxf.api.MetadataFetcher;
import org.apache.hawq.pxf.api.utilities.InputData;
import org.apache.hawq.pxf.service.MetadataFetcherFactory;
+import org.apache.hawq.pxf.service.MetadataResponse;
--- End diff --
Yes, it is. I missed adding it and I've just updated.
> Stream PXF metadata response
> -----------------------------
>
> Key: HAWQ-577
> URL: https://issues.apache.org/jira/browse/HAWQ-577
> Project: Apache HAWQ
> Issue Type: Bug
> Components: PXF
> Reporter: Shivram Mani
> Assignee: Shivram Mani
>
> The getMetadata API returns the metadata corresponding to the user-specified
> pattern. There is no limit to the number of tables the pattern can match, and
> the current approach of building the JSON object in memory might not scale.
> We need to serialize the metadata inside a streaming object, similar to the
> approach used for streaming the FragmentsResponse.
> The same applies to the debug function that prints the metadata of all the
> items: if there are too many of them, the StringBuilder will run out of
> memory. The solution in the fragments case was to log one fragment at a time.
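The streaming idea described above can be sketched as follows. This is a simplified, hypothetical stand-in (names like `writeMetadata` and the plain-string items are illustrative, not the actual PXF `MetadataResponse` code): instead of accumulating the full JSON document in memory, each item is serialized and flushed to the output stream one at a time, so memory use stays bounded regardless of how many tables the pattern matches.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of streaming a metadata response item by item.
public class StreamingMetadataSketch {

    // Writes {"PXFMetadata":[...]} incrementally. Each item is written and
    // flushed individually, so peak memory is bounded by a single item
    // rather than the whole result set.
    static void writeMetadata(List<String> items, OutputStream out) throws IOException {
        out.write("{\"PXFMetadata\":[".getBytes(StandardCharsets.UTF_8));
        boolean first = true;
        for (String item : items) {
            if (!first) {
                out.write(',');
            }
            first = false;
            // In the real response this would be a JSON-serialized Metadata
            // object; a quoted string stands in for it here.
            out.write(("\"" + item + "\"").getBytes(StandardCharsets.UTF_8));
            out.flush(); // push each item downstream as soon as it is ready
        }
        out.write("]}".getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeMetadata(Arrays.asList("table1", "table2"), buf);
        System.out.println(buf.toString("UTF-8"));
    }
}
```

In a JAX-RS service such as PXF, the same loop would typically live inside a `StreamingOutput.write(OutputStream)` implementation, which is the pattern FragmentsResponse uses; the container then pulls the body without the service ever holding the full document.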
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)