Hey Miro,

I completely understand your frustration with tests that run but don’t actually 
fail the build—it’s like checking a burger’s freshness by looking at it but 
never taking a bite. I recently decided to put PyPy to the test on my own 
website, https://costcofoodcourtmenu.com/, to see if it could improve 
performance, reduce execution times, and handle caching more efficiently.

How I Tested PyPy on My Website
Setup & Configuration:

- Installed PyPy 3.10 alongside my standard CPython environment.
- Configured it to work with my WordPress backend, focusing on caching plugins
  (LiteSpeed Cache, Perfmatters) and the JSON-LD schema generation.
- Used benchmarking tools to measure request-handling speed and memory usage.
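When two interpreters are installed side by side, the first sanity check is confirming which one is actually executing a given script. A minimal sketch (not part of my original setup, just the pattern I used to verify each run):

```python
import platform
import sys

def runtime_info():
    """Report which Python implementation is running this script."""
    return {
        "implementation": platform.python_implementation(),  # "CPython" or "PyPy"
        "version": platform.python_version(),
        "executable": sys.executable,
    }

if __name__ == "__main__":
    info = runtime_info()
    print(f"{info['implementation']} {info['version']} at {info['executable']}")
```

Running the same file with `python3` and with `pypy3` should print different implementations; if it doesn't, the wrong interpreter is wired into the stack.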
Benchmarking & Load Testing:

- Ran multiple performance tests on the key scripts that power my website's
  real-time food court menu updates.
- Used ApacheBench (ab) and wrk to simulate high traffic loads.
- Compared execution time for my custom scripts under both CPython and PyPy.
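The per-script comparison boils down to timing the same callable under each interpreter. A sketch of the harness (the workload here is a placeholder, not my production code; the warmup matters because PyPy's tracing JIT needs iterations before it pays off):

```python
import time

def bench(fn, *, warmup=100, iterations=1000):
    """Return the mean seconds per call of fn.

    Warmup iterations are discarded so PyPy's JIT has a chance to
    compile the hot loop before measurement starts.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    elapsed = time.perf_counter() - start
    return elapsed / iterations

# Placeholder standing in for the real menu-update script.
def workload():
    return sum(i * i for i in range(1000))

if __name__ == "__main__":
    print(f"mean: {bench(workload) * 1e6:.1f} µs/call")
```

Running the same file under `python3` and `pypy3` gives directly comparable numbers; `ab` and `wrk` then cover the full request path rather than a single script.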
The Results
✅ The Good:

- PyPy drastically improved execution time on long-running scripts. JSON-LD
  schema generation, which took ~1.8s under CPython, dropped to ~0.9s under
  PyPy, roughly 2x faster.
- Reduced memory footprint when handling large datasets, especially for
  scraping competitor menus.
- WordPress REST API responses showed a 20-25% speed increase on backend
  queries.
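For context, the JSON-LD generation I benchmarked follows this basic pattern; the field values are illustrative, not my production schema:

```python
import json

def menu_item_jsonld(name, price, currency="USD"):
    """Build a schema.org MenuItem as a JSON-LD string."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "MenuItem",
        "name": name,
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
        },
    }, indent=2)

if __name__ == "__main__":
    print(menu_item_jsonld("Hot Dog & Soda Combo", 1.50))
```

It is pure-Python dict building and `json.dumps`, which is exactly the kind of hot loop PyPy's JIT handles well once warmed up.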
❌ The Issues:

- Some third-party WordPress plugins had compatibility issues with PyPy
  (likely due to C extensions).
- Certain Django-based admin panel scripts crashed due to failed test cases
  similar to what you observed, notably test_getsitepackages.
- The build process didn't halt on test failures, which made debugging harder;
  some errors only surfaced in logs.
What This Means for PyPy Testing in Fedora
Your concern is 100% valid—if test failures don’t fail the build, we’re flying 
blind. In my case, some tests silently failed, leading to unexpected behavior 
in production. Here’s what I think should be done:

- Make test failures break the build; otherwise we risk shipping unstable
  packages.
- Classify and document expected failures (e.g., known compatibility issues).
- Automate reporting for failures that need upstream fixes instead of ignoring
  them.
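As a sketch of the first point: if the tests are driven from a wrapper script, it just has to propagate the exit code instead of logging and moving on. The names here are illustrative, not Fedora's actual build tooling:

```python
import subprocess
import sys

def run_tests(cmd):
    """Run a test command and return its exit code, complaining loudly
    on failure instead of burying the error in a log."""
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"tests failed (exit {result.returncode}); aborting build",
              file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    # sys.exit propagates the failure to whatever invoked this script,
    # so a nonzero test result actually fails the build step.
    sys.exit(run_tests([sys.executable, "-m", "pytest", "-x"]))
```

The whole fix is the final `sys.exit`: a wrapper that prints the failure but returns 0 is how errors end up surfacing only in logs.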
Testing should be a safety net, not an afterthought. I’d love to hear how you 
plan to tackle this on Fedora. If there’s a structured way to report or debug 
failures, I’m happy to contribute my findings!
_______________________________________________
pypy-dev mailing list -- pypy-dev@python.org
To unsubscribe send an email to pypy-dev-le...@python.org
https://mail.python.org/mailman3/lists/pypy-dev.python.org/