This is an automated email from the ASF dual-hosted git repository.

alamb pushed a commit to branch production
in repository https://gitbox.apache.org/repos/asf/parquet-site.git


The following commit(s) were added to refs/heads/production by this push:
     new e550179  [BLOG] Geospatial Blog (#156)
e550179 is described below

commit e550179c074c1990220630722ee2518229b4bb4a
Author: Jia Yu <[email protected]>
AuthorDate: Fri Feb 13 06:49:38 2026 -0700

    [BLOG] Geospatial Blog (#156)
    
    * [BLOG] Template for Geospatial Blog
    
    * Add the geospatial blog post
    
    * Update content/en/blog/features/_index.md
    
    Co-authored-by: Andrew Lamb <[email protected]>
    
    * Fix comments
    
    * Add WKB link
    
    * Add spatial pruning example
    
    * fix typo
    
    * Update publish date
    
    ---------
    
    Co-authored-by: Andrew Lamb <[email protected]>
---
 content/en/blog/features/_index.md                 |   5 +
 content/en/blog/features/geospatial.md             | 163 +++++++++++++++++++
 static/blog/geospatial/bounding_boxes.png          | Bin 0 -> 256528 bytes
 .../geospatial/bounding_boxes_visualization.py     |  50 ++++++
 static/blog/geospatial/london_nyc_distance.png     | Bin 0 -> 568349 bytes
 static/blog/geospatial/spatial_pruning.png         | Bin 0 -> 325588 bytes
 .../geospatial/spatial_pruning_visualization.py    | 180 +++++++++++++++++++++
 static/blog/geospatial/westminster_bridge.png      | Bin 0 -> 463991 bytes
 8 files changed, 398 insertions(+)

diff --git a/content/en/blog/features/_index.md b/content/en/blog/features/_index.md
new file mode 100644
index 0000000..2719c2e
--- /dev/null
+++ b/content/en/blog/features/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Information on Parquet Features"
+linkTitle: "features"
+weight: 20
+---
diff --git a/content/en/blog/features/geospatial.md b/content/en/blog/features/geospatial.md
new file mode 100644
index 0000000..6ad4177
--- /dev/null
+++ b/content/en/blog/features/geospatial.md
@@ -0,0 +1,163 @@
+---
+title: "Native Geospatial Types in Apache Parquet"
+date: 2026-02-13
+description: "Native Geospatial Types in Apache Parquet"
+author: "[Jia Yu](https://github.com/jiayuasu), [Dewey Dunnington](https://github.com/paleolimbot), [Kristin Cowalcijk](https://github.com/Kontinuation), [Feng Zhang](https://github.com/zhangfengcdt)"
+categories: ["features"]
+---
+
+Geospatial data has become a core input for modern analytics across logistics, 
climate science, urban planning, mobility, and location intelligence. Yet for a 
long time, spatial data lived outside the mainstream analytics ecosystem. In 
primarily non-spatial data engineering workflows, spatial data was common but 
required workarounds to handle efficiently at scale. Formats such as Shapefile, 
GeoJSON, or proprietary spatial databases worked well for visualization and GIS 
workflows, but the [...]
+
+The introduction of native geospatial types in Apache Parquet marks a major 
shift. Geometry and geography are no longer opaque blobs stored alongside 
tabular data. They are now first class citizens in the columnar storage layer 
that underpins modern data lakes and lakehouses.
+
+This post explains why native geospatial support in Parquet matters and gives 
a technical overview of how these types are represented and stored.
+
+## Why Geospatial Types Matter in Analytical Storage
+
+Spatial data storage presents unique challenges: a single geometry may 
represent a point, a road segment, or a complex polygon with thousands of 
vertices. Queries are also different: instead of simple equality or range 
filters, users ask spatial questions such as containment, intersection, 
distance, and proximity in two (XY) or even three (XYZ) dimensions.
+
+Historically, geospatial columns in Parquet were stored as generic binary or 
string values, with spatial meaning encoded in external metadata. This approach 
had several limitations.
+
+1. Query engines could not detect that a column was GEOMETRY or GEOGRAPHY without an explicit function call by the user (even if the engine supported GEOMETRY or GEOGRAPHY types natively).
+2. Query engines could not apply statistics-based pruning: entire Parquet files had to be read even for spatial queries that returned only a small number of rows.
+
+Native geospatial types address these issues directly. By making geometry and 
geography part of the Parquet logical type system, spatial columns become 
visible to query planners, execution engines, and storage optimizers.
+
+A key benefit is the ability to attach spatial statistics such as bounding 
boxes to column chunks and row groups. With bounding boxes available in Parquet 
statistics, engines can skip entire row groups that fall completely outside a 
query window. This dramatically reduces IO for spatial filters and joins, 
especially on large datasets.
+
+In practice, this means that spatial analytics can finally benefit from the 
same performance techniques that made Parquet dominant for non-spatial 
workloads.
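The skip test itself is cheap. The following minimal sketch (plain Python; the bounding-box values are invented for illustration and are not from any real dataset or engine) shows the interval-overlap check an engine can run against each row group's statistics before touching any geometry data:

```python
def boxes_intersect(a, b):
    """True when two (xmin, ymin, xmax, ymax) boxes overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# Illustrative row-group bounding boxes (values made up for this sketch)
row_group_boxes = [
    (-122.5, 37.2, -121.7, 38.0),  # roughly the Bay Area
    (-97.9, 30.1, -97.5, 30.6),    # roughly Austin, Texas
    (-74.3, 40.5, -73.7, 41.0),    # roughly New York City
]

query_window = (-97.8, 30.2, -97.7, 30.3)  # small region around Austin

# Only row groups whose bounding box intersects the window need to be read
to_scan = [i for i, b in enumerate(row_group_boxes)
           if boxes_intersect(b, query_window)]
print(to_scan)  # [1]
```

Because the check uses only min/max statistics, it costs a few comparisons per row group, while each row group it eliminates saves reading potentially millions of geometry values.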
+
+![Building Bounding Boxes Visualization](/blog/geospatial/bounding_boxes.png)
+
+**Figure 1:** Visualization of bounding boxes for 130 million buildings in the contiguous U.S., stored in a Parquet file (Microsoft Buildings dataset, file from geoarrow.org/data; visualization [code](/blog/geospatial/bounding_boxes_visualization.py))
+
+Consider a [SedonaDB](https://sedona.apache.org/sedonadb/) Spatial SQL query 
that filters buildings by intersection with a small region around Austin, Texas:
+
+```sql
+SELECT * FROM buildings
+WHERE ST_Intersects(
+    geometry,
+    ST_SetSRID(
+        ST_GeomFromText('POLYGON((-97.8 30.2, -97.8 30.3, -97.7 30.3, -97.7 30.2, -97.8 30.2))'),
+        4326
+    )
+)
+```
+
+With bounding box statistics attached to each row group, the query engine 
compares the query window against each row group's bounding box before reading 
any geometry data. In the visualization below, the query window (red box) 
overlaps with only 3 row groups out of 2,585 (highlighted in orange). The 
engine skips all other row groups entirely.
+
+![Spatial Pruning Visualization](/blog/geospatial/spatial_pruning.png)
+
+**Figure 2:** Spatial pruning in action: the query window over Austin (red) intersects only 3 row group bounding boxes (orange). The remaining 2,582 row groups (gray) are skipped without reading their data. (Visualization [code](/blog/geospatial/spatial_pruning_visualization.py))
+
+## From GeoParquet Metadata to Native Types
+
+Before Parquet adopted GEOMETRY and GEOGRAPHY types in 2025, the [GeoParquet](https://geoparquet.org/) community had already standardized how geometries should be stored in Parquet as early as 2022, using Well-Known Binary encoding plus a set of metadata keys. This was an important step because it enabled interoperability across tools.
+
+However, geometry columns were still fundamentally binary columns with sidecar metadata. Engines had to explicitly opt in to understanding that metadata, and its placement in the file key/value metadata made it difficult to integrate with primarily non-spatial engines that were not designed to be extended in this way. Moreover, data lake table formats such as Apache Iceberg require concrete, first-class Parquet data types to enable engine interoperability, which sidecar metadata cannot ad [...]
+
+The newer direction, sometimes referred to as [GeoParquet 2.0](https://cloudnativegeo.org/blog/2025/02/geoparquet-2.0-going-native/), moves geospatial concepts directly into the Parquet type system. Geometry and geography are defined as logical types, similar in spirit to decimal or timestamp types. This removes ambiguity, so non-spatial engines can integrate Geometry and Geography concepts more easily, improving type fidelity and performance for spatial and non-spatial users alike.
+
+## Overview of Geospatial Types in Parquet
+
+Parquet introduces two primary [logical types for spatial 
data](https://parquet.apache.org/docs/file-format/types/geospatial/).
+
+### GEOMETRY
+
+The GEOMETRY type represents planar spatial objects. This includes points, 
linestrings, polygons, and multi geometries. The logical type indicates that 
the column contains spatial objects, while the physical storage uses a standard 
binary encoding.
+
+Typical examples include:
+
+1. Engineering or CAD data in local coordinates
+2. Projected map data such as Web Mercator or UTM
+3. Spatial joins and overlays where longitude and latitude data are distributed over a small area or where vertices are closely spaced, such as intersections, unions, clipping, and containment analysis
+
+![Westminster Bridge Engineering 
Precision](/blog/geospatial/westminster_bridge.png)
+
+**Figure 3:** Building the London Westminster Bridge: the Geometry type under 
a local coordinate reference system would provide better precision and 
performance than the Geography type.
+
+### GEOGRAPHY
+
+The GEOGRAPHY type is similar to GEOMETRY but represents objects on a 
spherical or ellipsoidal Earth model. Geography values are encoded using 
longitude and latitude coordinates expressed in degrees.
+
+Common use cases include:
+
+1. Global scale datasets that span large geographic extents (e.g., country 
boundaries)
+2. Distance calculations where curvature of the earth matters (e.g., the 
distance between New York and Beijing)
+3. Use cases such as aviation, maritime tracking, or global mobility
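The use cases above hinge on the difference between planar and spherical computation. The following self-contained sketch (plain Python standard library only; the helper names and rounded coordinates are illustrative, not part of any Parquet API) contrasts a naive planar distance on raw longitude/latitude degrees with a great-circle distance on a sphere for London and New York:

```python
import math

def planar_degrees(p, q):
    """Geometry-style distance: treat (lon, lat) degrees as planar XY."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def great_circle_km(p, q, radius_km=6371.0):
    """Geography-style distance: haversine great-circle distance on a sphere."""
    lon1, lat1, lon2, lat2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

london = (-0.13, 51.51)  # (lon, lat), approximate
nyc = (-74.01, 40.71)

print(f"planar 'distance': {planar_degrees(london, nyc):.1f} degrees")
print(f"great-circle distance: {great_circle_km(london, nyc):.0f} km")
```

The planar figure is a unitless number of degrees with no physical meaning at this scale, while the spherical figure is a real distance in kilometres; the great-circle path it measures is the one shown crossing Canada in Figure 4 below.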
+
+![London NYC Distance Comparison](/blog/geospatial/london_nyc_distance.png)
+
+**Figure 4:** The shortest path between London and NYC passes over Canada when computed with the Geography type, whereas the planar Geometry type incorrectly misses Canada.
+
+Both types integrate into Parquet schemas just like other logical types. From 
the perspective of a schema definition, a geometry column is no longer an 
opaque binary field but a typed spatial column.
+
+## How Geospatial Types Are Stored
+
+Although geospatial types are logical constructs, their physical storage 
follows [Parquet's existing columnar 
design](https://parquet.apache.org/docs/file-format/types/geospatial/). The 
following points highlight key aspects of the geospatial type design.
+
+1. **Physical encoding**
+   Geometry and geography values are stored as binary payloads, using [Well 
Known Binary 
(WKB)](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry#Well-known_binary)
 encoding. This ensures compatibility across engines and languages.
+2. **Spatial statistics**
+   In addition to standard Parquet statistics such as null counts, spatial 
columns can carry bounding box information. Each row group can record the 
minimum and maximum extents of the geometries it contains. Query engines can 
use this information to prune data early when evaluating spatial predicates.
+3. **Engine interoperability**
+   Because the spatial meaning is encoded as a Parquet logical type, engines 
do not need out of band conventions to interpret the column. A reader that 
understands Parquet geospatial types can immediately treat the column as a 
spatial object.
+4. **Coordinate Reference System (CRS) information**
+   CRS information is stored in the file metadata (i.e., type definition) 
using authoritative identifiers or structured definitions such as EPSG codes or 
PROJJSON strings.
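To make point 1 concrete, the WKB layout for a single 2D point can be assembled with nothing but Python's standard library. This is a simplified sketch of only the Point case (byte-order flag, geometry type code, then two little-endian doubles), not a full WKB codec:

```python
import struct

def point_to_wkb(x, y):
    """Encode a 2D point as WKB: 0x01 = little-endian, type 1 = Point, two doubles."""
    return struct.pack("<BIdd", 1, 1, x, y)

def point_from_wkb(buf):
    """Decode the 21-byte little-endian WKB Point payload back into (x, y)."""
    byte_order, geom_type, x, y = struct.unpack("<BIdd", buf)
    assert byte_order == 1 and geom_type == 1
    return x, y

wkb = point_to_wkb(-97.75, 30.25)
print(wkb.hex())
print(point_from_wkb(wkb))  # (-97.75, 30.25)
```

A reader that understands the GEOMETRY logical type knows to interpret each binary value this way, rather than treating it as an opaque blob.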
+
+Native geospatial types align naturally with modern lakehouse architectures 
built on Parquet. Table formats such as [Apache 
Iceberg](https://iceberg.apache.org/) no longer need to reinvent geospatial 
logic since core spatial semantics live in Parquet. Instead, they can focus on 
well defined type mappings between Parquet and Iceberg and on [propagating 
spatial statistics into the 
tables](https://wherobots.com/blog/iceberg-geo-technical-insights-and-implementation-strategies/).
+
+## Implementation status and ecosystem adoption
+
+Native Parquet geo types are not theoretical. Geometry and geography have 
already been implemented across multiple core libraries, indicating broad and 
growing adoption.
+
+Today, support exists in multiple languages and runtimes, including Parquet Java, Arrow C++, Rust, Hyparquet (JavaScript), DuckDB, and more! This ensures that geospatial Parquet files can be produced and consumed consistently across ecosystems, from JVM engines to native and embedded query engines.
+
+An up-to-date view of implementation coverage can be found in the [official Parquet documentation](https://parquet.apache.org/docs/file-format/implementationstatus/).
+
+## Conclusion
+
+Native geospatial support in Apache Parquet represents a foundational 
improvement for spatial analytics and a welcome quality of life improvement for 
general-purpose workloads with a spatial component. By elevating geometry and 
geography to first class logical types, Parquet enables efficient storage, 
meaningful statistics, and true engine interoperability.
+
+Bounding boxes, columnar layout, and standard encodings together allow spatial 
data to participate fully in modern analytics systems. As a result, geospatial 
workloads no longer need specialized storage formats or isolated systems. They 
can live natively inside the open, scalable data lake ecosystem.
+
+To get started with Geometry/Geography in Parquet, see the [example files 
provided by the geoarrow-data repository](https://geoarrow.org/data) or write 
your own using your favourite Parquet implementation!
+
+```python
+import geoarrow.pyarrow as ga  # For GeoArrow extension type registration
+import geopandas
+import pyarrow as pa
+from pyarrow import parquet
+
+# From GeoPandas, create a GeoDataFrame from your favourite data source
+url = "https://raw.githubusercontent.com/geoarrow/geoarrow-data/v0.2.0/natural-earth/files/natural-earth_countries.fgb"
+df = geopandas.read_file(url)
+
+# Write to Parquet using pyarrow.parquet.write_table()
+tab = pa.table(df.to_arrow())
+parquet.write_table(tab, "countries.parquet")
+
+# Verify that the Geometry logical type was written to the file
+parquet.ParquetFile("countries.parquet").schema
+#> <pyarrow._parquet.ParquetSchema object at 0x10776dac0>
+#> required group field_id=-1 schema {
+#>   optional binary field_id=-1 name (String);
+#>   optional binary field_id=-1 continent (String);
+#>   optional binary field_id=-1 geometry (Geometry(crs=));
+#> }
+
+# Geometry is read to a pyarrow.Table as GeoArrow arrays that can be
+# converted back to GeoPandas
+tab = parquet.read_table("countries.parquet")
+df = geopandas.GeoDataFrame.from_arrow(tab)
+df.head(2)
+#> name continent  \
+#> 0                         Fiji   Oceania  
+#> 1  United Republic of Tanzania    Africa  
+#>
+#>                                             geometry 
+#> 0  MULTIPOLYGON (((180 -16.06713, 180 -16.55522, ... 
+#> 1  MULTIPOLYGON (((33.90371 -0.95, 34.07262 -1.05...
+```
+
+
diff --git a/static/blog/geospatial/bounding_boxes.png b/static/blog/geospatial/bounding_boxes.png
new file mode 100644
index 0000000..ad6cd6c
Binary files /dev/null and b/static/blog/geospatial/bounding_boxes.png differ
diff --git a/static/blog/geospatial/bounding_boxes_visualization.py b/static/blog/geospatial/bounding_boxes_visualization.py
new file mode 100644
index 0000000..fe4cd48
--- /dev/null
+++ b/static/blog/geospatial/bounding_boxes_visualization.py
@@ -0,0 +1,50 @@
+# Visualization code for Figure 1 in the "Native Geospatial Types in Apache Parquet" blog post.
+# Generates a bounding box visualization for 130 million buildings stored in a Parquet file
+# from the contiguous U.S. (Microsoft Buildings dataset, file from geoarrow.org/data).
+#
+# Original source: https://gist.github.com/paleolimbot/06303283b42161b57ffc37a8fed60890
+# Author: Dewey Dunnington (paleolimbot)
+
+import geoarrow.pyarrow as ga  # For GeoArrow extension type registration
+import geopandas
+import pyarrow as pa
+from pyarrow import parquet
+import shapely
+import fsspec
+import matplotlib.pyplot as plt
+
+# Load Natural Earth countries for the backdrop
+url = "https://raw.githubusercontent.com/geoarrow/geoarrow-data/v0.2.0/natural-earth/files/natural-earth_countries.fgb"
+df = geopandas.read_file(url)
+
+# Read bounding boxes from row group geo statistics
+url = "https://github.com/geoarrow/geoarrow-data/releases/download/v0.2.0/microsoft-buildings_point.parquet"
+
+with fsspec.open(url) as fsspec_f:
+    f = parquet.ParquetFile(fsspec_f)
+
+    boxes = []
+    for i in range(f.num_row_groups):
+        stats = f.metadata.row_group(i).column(0).geo_statistics
+        box = shapely.box(stats.xmin, stats.ymin, stats.xmax, stats.ymax)
+        boxes.append(box)
+
+# Project to Web Mercator for visualization
+boxes_geo = geopandas.GeoSeries(boxes, crs=4326).to_crs(3857)
+backdrop = df.to_crs(3857)
+backdrop = backdrop.intersection(
+    shapely.buffer(shapely.box(*boxes_geo.total_bounds), 1000000)
+)
+
+# Plot
+plt.rcParams["figure.dpi"] = 300
+
+ax = backdrop.plot(edgecolor="black", facecolor="none", antialiased=True, linewidth=0.5)
+boxes_geo.plot(ax=ax, edgecolor="purple", facecolor="none", antialiased=True, linewidth=0.5)
+bounds = boxes_geo.total_bounds
+x_range = bounds[2] - bounds[0]
+y_range = bounds[3] - bounds[1]
+ax.set_xlim(bounds[0] - 0.05 * x_range, bounds[2] + 0.05 * x_range)
+ax.set_ylim(bounds[1] - 0.05 * y_range, bounds[3] + 0.05 * y_range)
+ax.set_axis_off()
+plt.show()
diff --git a/static/blog/geospatial/london_nyc_distance.png b/static/blog/geospatial/london_nyc_distance.png
new file mode 100644
index 0000000..6410cc2
Binary files /dev/null and b/static/blog/geospatial/london_nyc_distance.png differ
diff --git a/static/blog/geospatial/spatial_pruning.png b/static/blog/geospatial/spatial_pruning.png
new file mode 100644
index 0000000..909b6bf
Binary files /dev/null and b/static/blog/geospatial/spatial_pruning.png differ
diff --git a/static/blog/geospatial/spatial_pruning_visualization.py b/static/blog/geospatial/spatial_pruning_visualization.py
new file mode 100644
index 0000000..f532b45
--- /dev/null
+++ b/static/blog/geospatial/spatial_pruning_visualization.py
@@ -0,0 +1,180 @@
+# Visualization code for the "Spatial Pruning in Action" section in the
+# "Native Geospatial Types in Apache Parquet" blog post.
+#
+# Demonstrates how spatial statistics enable row group pruning by showing
+# a query window that intersects only a few row groups out of thousands.
+#
+# Requirements:
+#   pip install sedonadb pyarrow pyproj pyogrio \
+#       --pre \
+#       --index-url https://repo.fury.io/sedona-nightlies/ \
+#       --extra-index-url https://pypi.org/simple/
+#
+# Also requires: geoarrow-pyarrow, shapely, matplotlib, fsspec, aiohttp
+
+import sedonadb
+import geoarrow.pyarrow as ga  # For GeoArrow extension type registration
+from pyarrow import parquet
+import shapely
+from shapely import box
+from shapely.ops import transform as shapely_transform
+import pyproj
+import fsspec
+import numpy as np
+import matplotlib.pyplot as plt
+from matplotlib.patches import Rectangle, Polygon as MplPolygon
+
+# ── SedonaDB: run the spatial query ──────────────────────────────────────────
+
+sd = sedonadb.connect()
+
+parquet_url = (
+    "https://github.com/geoarrow/geoarrow-data/releases/download/"
+    "v0.2.0/microsoft-buildings_point.parquet"
+)
+
+# Load the 130M buildings Parquet file
+buildings = sd.read_parquet(parquet_url)
+buildings.to_view("buildings")
+
+# Spatial filter: Austin, Texas bounding box
+query_sql = """
+SELECT * FROM buildings
+WHERE ST_Intersects(
+    geometry,
+    ST_SetSRID(
+        ST_GeomFromText('POLYGON((-97.8 30.2, -97.8 30.3, -97.7 30.3, -97.7 30.2, -97.8 30.2))'),
+        4326
+    )
+)
+"""
+
+result = sd.sql(query_sql)
+print(f"Query returned {result.count()} buildings in the Austin query window")
+
+# Show the explain plan with actual execution metrics (includes row-group pruning stats)
+print("\n── Execution plan with pruning metrics ──")
+result.explain(type="analyze").show()
+
+# ── Visualization: bounding boxes + pruning ──────────────────────────────────
+
+# Set up CRS transformer (WGS84 to Web Mercator)
+transformer_4326_to_3857 = pyproj.Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
+
+def to_mercator(geom):
+    """Transform a shapely geometry from EPSG:4326 to EPSG:3857."""
+    return shapely_transform(transformer_4326_to_3857.transform, geom)
+
+# Load Natural Earth countries for the backdrop using SedonaDB's read_pyogrio
+backdrop_url = (
+    "https://raw.githubusercontent.com/geoarrow/geoarrow-data/v0.2.0/"
+    "natural-earth/files/natural-earth_countries.fgb"
+)
+backdrop_df = sd.read_pyogrio(backdrop_url)
+backdrop_geoms = ga.as_geoarrow(backdrop_df.to_arrow().column("geometry")).to_shapely()
+
+# Read bounding boxes from row group geo statistics
+
+with fsspec.open(parquet_url) as fsspec_f:
+    f = parquet.ParquetFile(fsspec_f)
+
+    boxes = []
+    for i in range(f.num_row_groups):
+        stats = f.metadata.row_group(i).column(0).geo_statistics
+        b = box(stats.xmin, stats.ymin, stats.xmax, stats.ymax)
+        boxes.append(b)
+
+# Define query window around Austin, Texas
+# This corresponds to: ST_GeomFromText('POLYGON((-97.8 30.2, -97.8 30.3, -97.7 30.3, -97.7 30.2, -97.8 30.2))')
+query_window = box(-97.8, 30.2, -97.7, 30.3)
+
+# Identify which row groups intersect the query window
+intersects_query = np.array([b.intersects(query_window) for b in boxes])
+
+# Project geometries to Web Mercator for visualization
+boxes_mercator = [to_mercator(b) for b in boxes]
+query_window_mercator = to_mercator(query_window)
+backdrop_mercator = [to_mercator(g) for g in backdrop_geoms]
+
+# Compute bounds from all boxes
+all_bounds = shapely.bounds(shapely.GeometryCollection(boxes_mercator))
+x_range = all_bounds[2] - all_bounds[0]
+y_range = all_bounds[3] - all_bounds[1]
+
+# Clip backdrop to viewing area
+view_bbox = shapely.buffer(box(all_bounds[0], all_bounds[1], all_bounds[2], all_bounds[3]), 1000000)
+backdrop_clipped = [g.intersection(view_bbox) for g in backdrop_mercator if g.intersects(view_bbox)]
+
+# Separate intersecting and non-intersecting row groups
+intersecting_boxes = [boxes_mercator[i] for i in range(len(boxes_mercator)) if intersects_query[i]]
+non_intersecting_boxes = [boxes_mercator[i] for i in range(len(boxes_mercator)) if not intersects_query[i]]
+
+# Plot
+plt.rcParams["figure.dpi"] = 300
+fig, ax = plt.subplots(figsize=(12, 8))
+
+# Draw backdrop (country boundaries)
+for geom in backdrop_clipped:
+    if geom.is_empty:
+        continue
+    if geom.geom_type == 'MultiPolygon':
+        for poly in geom.geoms:
+            if poly.exterior:
+                ax.plot(*poly.exterior.xy, color='black', linewidth=0.5)
+    elif geom.geom_type == 'Polygon' and geom.exterior:
+        ax.plot(*geom.exterior.xy, color='black', linewidth=0.5)
+
+# Draw non-intersecting row groups in light gray
+for geom in non_intersecting_boxes:
+    if geom.geom_type == 'Polygon' and geom.exterior:
+        ax.plot(*geom.exterior.xy, color='#cccccc', linewidth=0.2, alpha=0.4)
+
+# Draw intersecting row groups with bright yellow fill and dark orange border
+for geom in intersecting_boxes:
+    if geom.geom_type == 'Polygon' and geom.exterior:
+        patch = MplPolygon(
+            list(zip(*geom.exterior.xy)),
+            facecolor='#ffdd00',
+            edgecolor='#cc5500',
+            linewidth=2,
+            alpha=0.8
+        )
+        ax.add_patch(patch)
+
+# Draw query window as a thin dashed red outline (no fill, drawn last so it's on top)
+query_bounds = query_window_mercator.bounds
+query_rect = Rectangle(
+    (query_bounds[0], query_bounds[1]),
+    query_bounds[2] - query_bounds[0],
+    query_bounds[3] - query_bounds[1],
+    linewidth=2,
+    edgecolor="red",
+    facecolor="none",
+    linestyle=(0, (5, 3))  # dashed pattern
+)
+ax.add_patch(query_rect)
+
+# Set bounds to show full US
+ax.set_xlim(all_bounds[0] - 0.05 * x_range, all_bounds[2] + 0.05 * x_range)
+ax.set_ylim(all_bounds[1] - 0.05 * y_range, all_bounds[3] + 0.05 * y_range)
+ax.set_axis_off()
+
+# Add legend
+from matplotlib.lines import Line2D
+from matplotlib.patches import Patch
+legend_elements = [
+    Line2D([0], [0], color='#cccccc', linewidth=1, alpha=0.5, label='Skipped row groups'),
+    Patch(facecolor='#ffcc00', edgecolor='#ff3300', linewidth=2, alpha=0.7, label='Scanned row groups'),
+    Line2D([0], [0], color='red', linewidth=2, linestyle='--', label='Query window (Austin, TX)'),
+]
+ax.legend(handles=legend_elements, loc='lower right', fontsize=10)
+
+# Print statistics
+print(f"Total row groups: {len(boxes)}")
+print(f"Intersecting row groups: {intersects_query.sum()}")
+print(f"Skipped row groups: {(~intersects_query).sum()}")
+print(f"Data reduction: {(~intersects_query).sum() / len(boxes) * 100:.1f}% of row groups skipped")
+
+plt.tight_layout()
+plt.savefig("spatial_pruning.png", dpi=300, bbox_inches='tight', facecolor='white')
+plt.show()
diff --git a/static/blog/geospatial/westminster_bridge.png b/static/blog/geospatial/westminster_bridge.png
new file mode 100644
index 0000000..0875a14
Binary files /dev/null and b/static/blog/geospatial/westminster_bridge.png differ
