Hello Apache Beam team,

My name is Sidhartha Jena. I’m an aspiring data engineer who works mainly
with Python, Apache Spark, and SQL; most of my time goes into building and
improving data pipelines and large-scale data processing workflows.

I’ve been learning Apache Beam and really like how it brings batch and
streaming together under one model. It feels very aligned with the kind of
data engineering problems I enjoy solving, and I’d love to start
contributing to the project.
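
For example, here is a rough sketch of the kind of minimal pipeline I’ve
been playing with while learning (a simple word count; the names and data
are just illustrative), which captures what I like about the model:

    import apache_beam as beam

    # A tiny bounded word count; the same transforms would run unchanged
    # on an unbounded (streaming) source, which is what appeals to me
    # about Beam's unified model.
    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "Create" >> beam.Create(["hello beam", "hello world"])
            | "Split" >> beam.FlatMap(lambda line: line.split())
            | "Pair" >> beam.Map(lambda word: (word, 1))
            | "Count" >> beam.CombinePerKey(sum)
            | "Print" >> beam.Map(print)
        )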

I’d like to ask for some guidance on getting started as a contributor,
and also to request access to the ASF Slack workspace so I can follow
discussions, learn from the community, and get involved in ongoing work.

I’m especially interested in pipeline performance, data correctness, and
connectors, but I’m happy to start wherever help is needed, including
beginner-friendly or “good first issue” tasks.

Please let me know the best next steps or any onboarding material I should
look at. I really appreciate the work that goes into maintaining Apache
Beam, and I’m excited about the possibility of contributing.

Regards,
Sidhartha
