[ 
https://issues.apache.org/jira/browse/BEAM-13335?focusedWorklogId=687580&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-687580
 ]

ASF GitHub Bot logged work on BEAM-13335:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Nov/21 19:52
            Start Date: 29/Nov/21 19:52
    Worklog Time Spent: 10m 
      Work Description: robertwb commented on a change in pull request #16066:
URL: https://github.com/apache/beam/pull/16066#discussion_r758687768



##########
File path: sdks/python/apache_beam/dataframe/io.py
##########
@@ -540,10 +540,15 @@ def process(
     reader = self.reader
     if isinstance(reader, str):
       reader = getattr(pd, self.reader)
+    indices_per_file = 10**int(math.log(2**64 // len(path_indices), 10))
+    if readable_file.metadata.size_in_bytes > indices_per_file:
+      raise RuntimeError(
+          f'Cannot safely index records from {len(path_indices)} files '
+          f'of size {readable_file.metadata.size_in_bytes} '
+          f'as their product is greater than 2^64.')

Review comment:
       It's harder to know the number of rows ahead of time (though we could 
keep track after the fact). More importantly, when we split we start the 
index at the byte offset, so we can't actually use most of that headroom 
anyway. 
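The budget computation in the quoted diff can be sketched in isolation. This is a hypothetical standalone version (the function name `indices_per_file` and the bare `num_files` parameter are assumptions for illustration, not the actual `io.py` API): it finds the largest power of ten such that every file can be given its own decimal-aligned block of indices without the product of files and block size exceeding 2^64.

```python
import math

def indices_per_file(num_files):
    # Largest power of ten not exceeding 2**64 // num_files: each of the
    # num_files shards gets a decimal-aligned block of this many indices,
    # mirroring the expression in the quoted diff.
    return 10 ** int(math.log(2**64 // num_files, 10))

# With 1000 files, each file may use at most 10**16 indices; a file whose
# byte size exceeds that budget would trigger the RuntimeError above.
budget = indices_per_file(1000)
```

A power of ten (rather than the exact quotient) is used so that the per-file offset is human-readable in the resulting index values.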




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 687580)
    Time Spent: 1h 50m  (was: 1h 40m)

> DataFrame sources produce excessively large index
> -------------------------------------------------
>
>                 Key: BEAM-13335
>                 URL: https://issues.apache.org/jira/browse/BEAM-13335
>             Project: Beam
>          Issue Type: Improvement
>          Components: dsl-dataframe
>            Reporter: Brian Hulette
>            Assignee: Robert Bradshaw
>            Priority: P2
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> DataFrame reads attempt to match user expectations by giving every element 
> across all
> shards a unique index. This is done by embedding the filepath
> itself in the index, but this results in the (often quite long) path
> being duplicated for every element (sometimes exceeding the size of the
> data itself).
> We should instead generate a guaranteed unique _numeric_ index. 
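
The proposed numeric index can be sketched as follows. This is a hypothetical illustration only (the names `global_index`, `path_index`, and `local_index` are assumptions, not Beam's actual implementation): each file is assigned a fixed block of indices, and an element's global index combines its file's block with a within-file offset, which per the review comment above would be derived from the byte offset at which a split starts.

```python
def global_index(path_index, local_index, indices_per_file):
    # Unique across all shards as long as every local_index stays below
    # the per-file budget, since the blocks are then disjoint.
    assert local_index < indices_per_file
    return path_index * indices_per_file + local_index

# e.g. element 5 of the third file, with a budget of 10**16 per file:
idx = global_index(2, 5, 10**16)
```

This replaces the duplicated filepath string with a single integer per element, which is the size reduction the issue asks for.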



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
