Public Benchmarks

Every number on this page is measured, verified, and reproducible. We run side-by-side against CPython and check that outputs match exactly.

Benchmark Methodology

Side-by-Side with CPython

Each workload runs on both Pyvorin and CPython 3.12 under identical hardware, OS, and Python configurations. No cherry-picked baselines.

Correctness Verification

Output hashes and numeric results are compared against CPython. If Pyvorin produces a different answer, the run is flagged and execution falls back to CPython.
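The comparison described above can be sketched in a few lines. This is an illustrative harness, not Pyvorin's actual verifier; the function names and the use of pickle-based fingerprints are assumptions:

```python
import hashlib
import pickle

def result_fingerprint(result):
    """Hash an arbitrary Python result so two runs can be compared cheaply.

    pickle gives a stable byte representation for built-in types, so
    bit-identical answers produce identical digests.
    """
    return hashlib.sha256(pickle.dumps(result)).hexdigest()

def verify(workload, baseline_runner, candidate_runner):
    """Run a workload on both runtimes; on mismatch, keep the baseline answer.

    Returns (result, matched) -- matched is False when the candidate's
    fingerprint differs and the baseline result is used instead.
    """
    expected = result_fingerprint(baseline_runner(workload))
    actual = result_fingerprint(candidate_runner(workload))
    if actual != expected:
        return baseline_runner(workload), False
    return candidate_runner(workload), True
```

In practice a numeric-tolerance comparison would be layered on top for floating-point results, but exact hashing is the simplest correctness gate.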

Warm Runs & Compilation Cost

We report warm-run averages after JIT warm-up. Compilation time is tracked separately so you can model amortisation for your use case.
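A warm-run measurement of this shape can be reproduced with the standard library alone. The function below is a minimal sketch of the methodology (discard warm-up iterations, time warm runs, report compilation cost separately); the parameter defaults are placeholders, not Pyvorin's actual settings:

```python
import statistics
import time

def bench(fn, *, warmup=3, runs=10):
    """Average warm runs after a warm-up phase, tracking warm-up cost separately.

    JIT compilation happens during the warm-up iterations, so their total
    time approximates the one-off compilation cost to amortise.
    """
    warmup_start = time.perf_counter()
    for _ in range(warmup):
        fn()  # warm-up: triggers compilation, excluded from the average
    warmup_total = time.perf_counter() - warmup_start

    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)

    return {
        "warm_mean_s": statistics.mean(timings),
        "warmup_total_s": warmup_total,  # reported separately
    }
```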

Honest Disclaimer

Results are workload-specific. Your mileage may vary. Speedups depend on code structure, data size, and how much time is spent in supported vs unsupported language features. We do not report best-case numbers as guarantees.

Proven Speedup Results

Workload               Speedup vs CPython   Source     Fit
ETL Filter / Map       100× – 1,000×+       Phase G7   Strong fit
Group-By Aggregation   1,000×+              Phase G7   Strong fit
Rolling Window         4,000×+              Phase G7   Strong fit
Finance Kernels        10× – 150×+          Phase L    Strong fit
String Tokenisation    ~500×                Phase I    Strong fit
Log Parsing            74×+                 Phase I    Strong fit
JSON ETL               20× – 100×           Phase H    Strong fit
CSV ETL                15× – 80×            Phase H    Strong fit

All results measured on x86-64 Linux, Python 3.12, warm runs after JIT compilation. See honest_benchmark_results.json for raw data.

Workload Classification

Strong fit

High-Confidence Speedups

Workloads that exercise supported language features, spend most of their time in Python bytecode (not C extensions), and have minimal I/O blocking.

  • Tight loops over lists, dicts, and numerics
  • ETL transforms with filter, map, and group-by
  • String scanning and tokenisation
  • Finance kernels and rolling-window math
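A toy example of the profile this category describes: a rolling-window mean written as a tight loop over a list, spending all of its time in Python bytecode with no C extensions or I/O. This is an illustration of the workload shape, not Pyvorin code:

```python
def rolling_mean(values, window):
    """Rolling mean over a list using a running sum -- pure bytecode work."""
    out = []
    acc = 0.0
    for i, v in enumerate(values):
        acc += v
        if i >= window:
            acc -= values[i - window]  # drop the element leaving the window
        if i >= window - 1:
            out.append(acc / window)
    return out
```

Loops like this are exactly where an accelerator has the most room to work, because every iteration is interpreter overhead in CPython.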
Weak fit

Limited or Variable Speedups

Workloads that hit unsupported language features, spend significant time in C extensions, or are bound by I/O, network, or GPU.

  • Heavy NumPy / Pandas / Polars internals
  • async/await and dynamic code (eval/exec)
  • GPU training loops
  • Network-bound microservices

ROI Calculator

Estimate potential savings based on your pipeline. All calculations are performed in your browser; no data is sent to our servers.

Enter your pipeline's figures and the Estimated Impact panel reports:

  • Current, accelerated, and saved vCPU-hours/day
  • Current and estimated monthly cost
  • Estimated monthly and annual saving
  • Wall-clock reduction

This is an illustrative estimate. Actual savings depend on workload-specific speedup, cold compile cost, and infrastructure factors.
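The arithmetic behind an estimate like this is a standard Amdahl-style calculation: only the accelerated fraction of the pipeline shrinks by the speedup. A minimal sketch (the function name, parameters, and the 30-day month are illustrative assumptions, not the calculator's actual formula):

```python
def roi_estimate(vcpu_hours_per_day, speedup, accelerated_fraction,
                 cost_per_vcpu_hour):
    """Estimate savings when a fraction of a pipeline is accelerated.

    Amdahl-style: the unaccelerated fraction is unchanged, while the
    accelerated fraction's cost is divided by the speedup.
    """
    accelerated = vcpu_hours_per_day * (
        (1 - accelerated_fraction) + accelerated_fraction / speedup
    )
    saved = vcpu_hours_per_day - accelerated
    return {
        "accelerated_vcpu_hours_per_day": accelerated,
        "saved_vcpu_hours_per_day": saved,
        "monthly_saving": saved * cost_per_vcpu_hour * 30,
        "wall_clock_reduction": saved / vcpu_hours_per_day,
    }
```

For example, a pipeline burning 100 vCPU-hours/day where half the work sees a 10× speedup drops to 55 vCPU-hours/day: the speedup of the part, not of the whole, is what bounds the saving.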

Run Your Own Benchmark

The best benchmark is your own code. Join the pilot programme, run the benchmark suite against your workloads, and see exactly where Pyvorin helps.