ryankert01 commented on PR #1161:
URL: https://github.com/apache/mahout/pull/1161#issuecomment-4022498562

   MNIST only has 784 features, so the largest run I can do is qubits=9 (2^9 = 512 PCA components) with n_samples=500. At this scale only negligible acceleration is expected.
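   Amplitude encoding packs a feature vector into the 2^q amplitudes of a q-qubit state, which is why the 784 raw pixels are PCA-reduced to 512 = 2^9 dimensions here. A quick sanity check of that arithmetic (stdlib only, not from the benchmark scripts):

   ```python
   import math

   n_features = 784  # raw MNIST pixels
   # Encoding all 784 features directly would need ceil(log2(784)) = 10 qubits
   # and zero-padding up to 1024 amplitudes.
   qubits_full = math.ceil(math.log2(n_features))
   # 512 is the largest power of two <= 784, so PCA to 512-D fits 9 qubits exactly.
   pca_dim = 2 ** 9
   print(qubits_full, pca_dim)  # 10 512
   ```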
   
   ## Experiments
   ### Pennylane baseline
   ```bash
   $ CUDA_VISIBLE_DEVICES=4 VIRTUAL_ENV= uv run --project qdp/qdp-python \
       python benchmark/encoding_benchmarks/pennylane_baseline/mnist_amplitude.py \
       --n-samples 500 --trials 3 --iters 80 --qubits 9
   MNIST amplitude baseline (PennyLane) — 2-class variational classifier
     Data: fetch_openml('mnist_784'), digits 3 vs 6, PCA 512-D, L2 norm (n=1000)
     Qubits: 9, iters: 80, batch_size: 10, layers: 10, lr: 0.05
     Trial 1:
       QML device: cuda
       Compile:   0.1743 s
       Train:     51.1853 s
       Train acc: 0.7973  (n=750)
       Test acc:  0.7760  (n=250)
       Throughput: 15.6 samples/s
     Trial 2:
       QML device: cuda
       Compile:   0.0854 s
       Train:     52.3345 s
       Train acc: 0.8747  (n=750)
       Test acc:  0.8560  (n=250)
       Throughput: 15.3 samples/s
     Trial 3:
       QML device: cuda
       Compile:   0.0862 s
       Train:     50.6249 s
       Train acc: 0.8840  (n=750)
       Test acc:  0.9080  (n=250)
       Throughput: 15.8 samples/s
   
     Best test accuracy:  0.9080  (median: 0.8560, min: 0.7760, max: 0.9080)
   ```
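   For reference, the amplitude encoding the baseline performs per sample amounts to L2-normalizing a PCA-reduced vector into a valid 9-qubit statevector. A minimal NumPy sketch of that step (a stand-in random vector, not the actual benchmark code):

   ```python
   import numpy as np

   rng = np.random.default_rng(0)
   x = rng.normal(size=512)        # stand-in for one PCA-reduced MNIST sample
   state = x / np.linalg.norm(x)   # L2 normalization -> valid 9-qubit amplitudes
   assert state.shape == (2 ** 9,)
   print(float(np.linalg.norm(state)))  # ~1.0 up to float rounding
   ```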
   
   ### QDP pipeline
   ```bash
   $ CUDA_VISIBLE_DEVICES=4 VIRTUAL_ENV= uv run --project qdp/qdp-python \
       python benchmark/encoding_benchmarks/qdp_pipeline/mnist_amplitude.py \
       --n-samples 500 --trials 3 --iters 80 --qubits 9
   MNIST amplitude (QDP encoding) — 2-class variational classifier
     Data: fetch_openml('mnist_784'), digits 3 vs 6, PCA 512-D, QDP amplitude (n=1000)
     Qubits: 9, iters: 80, batch_size: 10, layers: 10, lr: 0.05
     QDP encode:  0.4790 s  (train + test, 750 + 250 samples)
     Trial 1:
       QML device: cuda
       Compile:   0.1420 s
       Train:     50.1078 s
       Train acc: 0.8467  (n=750)
       Test acc:  0.8360  (n=250)
       Throughput: 16.0 samples/s
     Trial 2:
       QML device: cuda
       Compile:   0.0859 s
       Train:     50.1321 s
       Train acc: 0.6787  (n=750)
       Test acc:  0.6480  (n=250)
       Throughput: 16.0 samples/s
     Trial 3:
       QML device: cuda
       Compile:   0.0909 s
       Train:     49.5452 s
       Train acc: 0.8600  (n=750)
       Test acc:  0.8680  (n=250)
       Throughput: 16.1 samples/s
   
     Best test accuracy:  0.8680  (median: 0.8360, min: 0.6480, max: 0.8680)
   ```
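   Summarizing the trial numbers printed above (values copied from the two runs, nothing re-measured):

   ```python
   import statistics

   baseline_train = [51.1853, 52.3345, 50.6249]  # seconds, PennyLane Trials 1-3
   qdp_train      = [50.1078, 50.1321, 49.5452]  # seconds, QDP Trials 1-3
   qdp_encode     = 0.4790                       # one-time QDP encode (train + test)

   print(statistics.median(baseline_train))  # 51.1853
   print(statistics.median(qdp_train))       # 50.1078
   # ~1 s difference on a ~50 s train loop: the expected negligible
   # acceleration at qubits=9 / n_samples=500.
   ```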
   
   ### Hardware
   NVIDIA RTX 6000 Ada

