Hi Philipp.

I found this paper: "Sideloading – Ingestion of Large Point Clouds into the
Apache Spark Big Data Engine"
<https://pdfs.semanticscholar.org/efdd/4c6c50cf31c28581fcd7de5eab318c3cd174.pdf>.
GeoTrellis <https://geotrellis.io/> uses PDAL <https://pdal.io/> in
geotrellis-pointcloud
<https://github.com/geotrellis/geotrellis-pointcloud>, and
PDAL has a Java writer for LAS files
<https://pdal.io/stages/writers.las.html>.
spark-iqmulus <https://github.com/IGNF/spark-iqmulus> is a Spark package to
read and write PLY, LAS and XYZ lidar point clouds using Spark SQL.
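If none of those fit, a UDF that packs each row into the fixed-size binary
point record is one way to go. A minimal sketch in plain Python, assuming
Point Data Record Format 0 (20 bytes, little-endian) from the LAS spec;
the function name, parameter names, and the scale/offset defaults are mine,
not from the spec:

```python
import struct

# Point Data Record Format 0: X, Y, Z as scaled int32, then intensity,
# return/flag byte, classification, scan angle, user data, point source ID.
# Little-endian, 20 bytes total, per the LAS specification.
POINT_FORMAT_0 = struct.Struct("<3lHBBbBH")

def pack_point(x, y, z, intensity=0, flags=0, classification=0,
               scan_angle=0, user_data=0, point_source_id=0,
               scale=(0.01, 0.01, 0.01), offset=(0.0, 0.0, 0.0)):
    """Pack one point into a 20-byte Format 0 record.

    Coordinates are stored as int32 after applying the scale/offset
    that the LAS header declares: record = round((value - offset) / scale).
    """
    ix = round((x - offset[0]) / scale[0])
    iy = round((y - offset[1]) / scale[1])
    iz = round((z - offset[2]) / scale[2])
    return POINT_FORMAT_0.pack(ix, iy, iz, intensity, flags,
                               classification, scan_angle, user_data,
                               point_source_id)

record = pack_point(123.45, 678.90, 10.11, intensity=42)
assert len(record) == 20
```

Such a function could be wrapped as a Spark UDF returning BinaryType, with
the LAS header (which carries the scale/offset and point count) written
separately on the driver.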


On Fri, 8 Apr 2022 at 18:20, Philipp Kraus <
philipp.kraus.flashp...@gmail.com> wrote:

> Hello,
>
> > Am 08.04.2022 um 17:34 schrieb Lalwani, Jayesh <jlalw...@amazon.com>:
> >
> > What format are you writing the file to? Are you planning on your own
> custom format, or are you planning to use standard formats like parquet?
>
> I’m dealing with geospatial data (Apache Sedona), so I have a data
> frame with such information and would like to export it to the LAS format
> (see https://en.wikipedia.org/wiki/LAS_file_format )
>
> >
> > Note that Spark can write numeric data in most standard formats. If you
> use  custom format instead, whoever consumes the data needs to parse your
> data. This adds complexity to your and your consumer's code. You will also
> need to worry about backward compatibility.
> >
> > I would suggest that you explore standard formats first before you write
> custom code. If you do have to write data in a custom format, udf is a good
> way to serialize the data into your format
>
> The numerical data must be converted into a binary representation per the
> LAS format specification; see
> http://www.asprs.org/wp-content/uploads/2019/07/LAS_1_4_r15.pdf section
> 2.6, Table 7.
>
> Thanks
>
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>

-- 
Bjørn Jørgensen
Vestre Aspehaug 4, 6010 Ålesund
Norge

+47 480 94 297
