[
https://issues.apache.org/jira/browse/BEAM-2857?focusedWorklogId=236651&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-236651
]
ASF GitHub Bot logged work on BEAM-2857:
----------------------------------------
Author: ASF GitHub Bot
Created on: 02/May/19 23:44
Start Date: 02/May/19 23:44
Worklog Time Spent: 10m
Work Description: udim commented on pull request #8394: [BEAM-2857]
Implementing WriteToFiles transform for fileio (Python)
URL: https://github.com/apache/beam/pull/8394#discussion_r279972734
##########
File path: sdks/python/apache_beam/io/fileio.py
##########
 ##########
 @@ -169,3 +179,476 @@ def __init__(self, compression=None, skip_directories=True):
def expand(self, pcoll):
return pcoll | beam.ParDo(_ReadMatchesFn(self._compression,
self._skip_directories))
+
+
+class FileSink(object):
+ """Specifies how to write elements to individual files in ``WriteToFiles``.
+
+ **NOTE: THIS CLASS IS EXPERIMENTAL.**
+
+ A Sink class must implement the following:
+
 + - The ``open`` method, which initializes writing to a file handler (it is not
 +   responsible for opening the file handler itself).
+ - The ``write`` method, which writes an element to the file that was passed
+ in ``open``.
+ - The ``flush`` method, which flushes any buffered state. This is most often
+ called before closing a file (but not exclusively called in that
+ situation). The sink is not responsible for closing the file handler.
+ """
+
+ def open(self, fh):
+ raise NotImplementedError
+
+ def write(self, record):
+ raise NotImplementedError
+
+ def flush(self):
+ raise NotImplementedError
+
+
+class DefaultSink(FileSink):
+ """A sink that writes elements into file handlers as they come.
+
+ **NOTE: THIS CLASS IS EXPERIMENTAL.**
+
+ This sink simply calls file_handler.write(record) on all records that come
+ into it.
+ """
+
+ def open(self, fh):
+ self._fh = fh
+
+ def write(self, record):
+ self._fh.write(record)
+
+ def flush(self):
+ self._fh.flush()
+
+
+def prefix_naming(prefix):
+ return default_file_naming(prefix)
+
+
+_DEFAULT_FILE_NAME_TEMPLATE = (
+ '{prefix}-{start}-{end}-{pane}-'
+ '{shard:05d}-{total_shards:05d}'
+ '{suffix}{compression}')
+
+
+def destination_prefix_naming():
+
 + def _inner(window, pane, shard_index, total_shards, compression,
 +            destination):
+ args = {'prefix': str(destination),
+ 'start': '',
+ 'end': '',
+ 'pane': '',
+ 'shard': 0,
+ 'total_shards': 0,
+ 'suffix': '',
+ 'compression': ''}
+ if total_shards is not None and shard_index is not None:
+ args['shard'] = int(shard_index)
+ args['total_shards'] = int(total_shards)
+
+ if window != GlobalWindow():
+ args['start'] = window.start.to_utc_datetime().isoformat()
+ args['end'] = window.end.to_utc_datetime().isoformat()
+
+ # If the PANE is the ONLY firing in the window, we don't add it.
+ if pane and not (pane.is_first and pane.is_last):
+ args['pane'] = pane.index
+
+ if compression:
+ args['compression'] = '.%s' % compression
+
+ return _DEFAULT_FILE_NAME_TEMPLATE.format(**args)
+
+ return _inner
+
+
+def default_file_naming(prefix, suffix=None):
+
 + def _inner(window, pane, shard_index, total_shards, compression,
 +            destination):
+ args = {'prefix': prefix,
+ 'start': '',
+ 'end': '',
+ 'pane': '',
+ 'shard': 0,
+ 'total_shards': 0,
+ 'suffix': '',
+ 'compression': ''}
+ if total_shards is not None and shard_index is not None:
+ args['shard'] = int(shard_index)
+ args['total_shards'] = int(total_shards)
+
+ if window != GlobalWindow():
+ args['start'] = window.start.to_utc_datetime().isoformat()
+ args['end'] = window.end.to_utc_datetime().isoformat()
+
+ # If the PANE is the ONLY firing in the window, we don't add it.
+ if pane and not (pane.is_first and pane.is_last):
+ args['pane'] = pane.index
+
+ if compression:
+ args['compression'] = '.%s' % compression
+ if suffix:
+ args['suffix'] = suffix
+
+ return _DEFAULT_FILE_NAME_TEMPLATE.format(**args)
+
+ return _inner
+
+
+class FileResult(object):
Review comment:
I think that this class could be replaced by a `collections.namedtuple`.
Alternatively, you could set `__hash__ = None` if you don't need it to be
hashable. In its current state, however, it's unsafe for it to have a hash
value defined by mutable attributes.
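The suggestion above can be sketched as follows. The field names here are illustrative assumptions, since `FileResult`'s actual attributes are defined elsewhere in the PR diff:

```python
import collections

# Hypothetical immutable replacement for FileResult, per the review
# suggestion. The field names are illustrative only; the real class in
# the PR may declare different attributes.
FileResult = collections.namedtuple(
    'FileResult', ['file_name', 'shard_index', 'total_shards'])

# namedtuple instances are immutable, so defining a hash is safe:
# equal results hash equally, and the hash cannot change after
# construction the way it could with mutable attributes.
a = FileResult('out-00000-of-00002', 0, 2)
b = FileResult('out-00000-of-00002', 0, 2)
assert a == b and hash(a) == hash(b)
```

This is why hashing a class with mutable attributes is unsafe: if an instance is stored in a set or dict and one of its attributes is then mutated, its hash no longer matches its bucket and lookups silently fail.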
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 236651)
Time Spent: 1h 20m (was: 1h 10m)
> Create FileIO in Python
> -----------------------
>
> Key: BEAM-2857
> URL: https://issues.apache.org/jira/browse/BEAM-2857
> Project: Beam
> Issue Type: New Feature
> Components: sdk-py-core
> Reporter: Eugene Kirpichov
> Assignee: Pablo Estrada
> Priority: Major
> Labels: gsoc, gsoc2019, mentor
> Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> Beam Java has a FileIO with operations: match()/matchAll(), readMatches(),
> which together cover the majority of needs for general-purpose file
> ingestion. Beam Python should have something similar.
> An early design document for this: https://s.apache.org/fileio-beam-python
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)