[TMVA] DNN - Tune minimization test to avoid false positives
The minimizer test quite often did not converge, which caused many spurious test failures. The test is tuned to converge _much_ more reliably by increasing the learning rate (0.0001 -> 0.001) and the number of early-stopping epochs (5 -> 50). This commit also makes `testMinimization` test the minimizer _without_ using momentum (this code path was previously untested here).

The following code was used to benchmark the changes (only the single-precision part of the test was run, and some auxiliary text output was commented out; do the same if you want to reproduce):

```
for i in `seq 100`; do tmva/tmva/test/DNN/testMinimizationCpu; done \
  | awk '{print $6;}' \
  | python -c 'import numpy; import fileinput; a = list(map(float, fileinput.input())); print(numpy.std(a), numpy.mean(a), numpy.min(a), numpy.max(a))'
```

Results (typical values):
- Old version: 2.70*10^{-7} (std dev), 3.34*10^{5} (mean), 2.27*10^{-6} (min), 0.0017 (max)
- New version: 2.59*10^{-6} (std dev), 2.51*10^{-6} (mean), 1.16*10^{-7} (min), 1.37*10^{-5} (max)

Time taken roughly doubles (~1 sec -> ~2 secs). All results were measured locally on a Mac.
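If the shell/awk pipeline above is inconvenient, the same summary statistics can be computed with a small standalone Python sketch. This is not part of the commit; it is a hypothetical helper, and the `losses` values below are made-up placeholders, not the benchmark data:

```python
import numpy as np

def summarize(values):
    """Return (std, mean, min, max) for an iterable of floats,
    mirroring the numpy one-liner used in the benchmark pipeline."""
    a = np.asarray(list(values), dtype=float)
    return float(a.std()), float(a.mean()), float(a.min()), float(a.max())

# Example with made-up final-loss values from three hypothetical runs.
losses = [2.3e-6, 1.1e-7, 1.4e-5]
std, mean, lo, hi = summarize(losses)
print(std, mean, lo, hi)
```

Note that `numpy.std` defaults to the population standard deviation (`ddof=0`), matching the one-liner in the pipeline above.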