ITK Registration Optimization/2007-04-06-tcon
Agenda
Tests
- Two types of tests
  - Baseline: LinearInterp is useful for profiling
    - Profile reports go in the Reports subdirectory
  - Optimization: OptMattesMI is a self-contained test
    - Submit to the dashboard (see the timing-harness sketch after this list):
      - test name / method
      - non-optimized speed
      - optimized speed
      - optimized error (difference from the non-optimized results)
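A minimal sketch of what a submission-producing test could look like. The two evaluate*() functions are hypothetical stand-ins for the real non-optimized and optimized metric bodies (e.g., MattesMI GetValue() calls); std::chrono is used for timing here, though an ITK test would more naturally use itk::TimeProbe:

<pre>
// Hedged sketch: times a baseline and an optimized metric evaluation and
// prints the four dashboard fields listed above.
#include <chrono>
#include <cmath>
#include <iostream>

double evaluateMetric()     // stand-in for the non-optimized implementation
{
  double s = 0.0;
  for (int i = 0; i < 1000000; ++i) { s += std::sin(i * 1.0e-3); }
  return s;
}

double evaluateMetricOpt()  // stand-in for the optimized implementation
{
  double s = 0.0;
  for (int i = 0; i < 1000000; ++i) { s += std::sin(i * 1.0e-3); }
  return s;
}

template <typename F>
double timeSeconds(F f, double & result, int reps = 5)
{
  const auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i < reps; ++i) { result = f(); }
  const std::chrono::duration<double> dt =
    std::chrono::steady_clock::now() - t0;
  return dt.count() / reps;
}

int main()
{
  double vBase = 0.0;
  double vOpt = 0.0;
  const double tBase = timeSeconds(evaluateMetric, vBase);
  const double tOpt = timeSeconds(evaluateMetricOpt, vOpt);

  std::cout << "test name / method : MattesMI / GetValue\n"
            << "non-optimized speed: " << tBase << " s per evaluation\n"
            << "optimized speed    : " << tOpt << " s per evaluation\n"
            << "optimized error    : " << std::fabs(vOpt - vBase) << "\n";
  return 0;
}
</pre>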
Timing
- Priority (scheduling priority of the timing run)
- Thread affinity (pinning threads to cores; see the sketch after this list)
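Both knobs aim at reproducible timings. A minimal sketch, assuming a Linux/glibc host and a benchmark that runs in the calling thread; these calls are platform-specific, not portable ITK API:

<pre>
// Raise scheduling priority and pin the calling thread to one core so the
// timing run is less perturbed by other processes or by core migration.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <pthread.h>
#include <sched.h>
#include <sys/resource.h>
#include <cstdio>

int main()
{
  // Raise process priority (a negative nice value needs privileges).
  if (setpriority(PRIO_PROCESS, 0, -10) != 0)
  {
    std::perror("setpriority");
  }

  // Pin the calling thread to CPU 0.
  cpu_set_t mask;
  CPU_ZERO(&mask);
  CPU_SET(0, &mask);
  const int rc = pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);
  if (rc != 0)
  {
    std::fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);
  }

  // ... run the timing loop here ...
  return 0;
}
</pre>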
Performance Dashboard
- Ranking computers?
- CPU, memory, etc. in the dashboard
- MFlops as measured by Whetstone? (see the flops sketch after this list)
- Public submission of performance
  - Logins and passwords configured in CMake
  - Encryption in CMake?
- Organization of Experiments/Dashboards
  - When is a new experiment warranted?
- Appropriate summary statistics
  - Per machine: batch vs. speed/error
  - Per test: MFlops vs. speed/error
  - All machines: batch vs. % change in performance (see the percent-change sketch after this list)
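For the "% change in performance" statistic, a minimal sketch with made-up example timings; the convention assumed here is percent improvement relative to the non-optimized time:

<pre>
// Derives speedup and percent change from the two timings a test submits.
// The timing values below are made-up examples.
#include <iostream>

int main()
{
  const double tBase = 2.40;  // non-optimized seconds (example)
  const double tOpt = 0.95;   // optimized seconds (example)

  const double speedup = tBase / tOpt;
  const double percentChange = 100.0 * (tBase - tOpt) / tBase;

  std::cout << "speedup       : " << speedup << "x\n"
            << "percent change: " << percentChange << " %\n";
  return 0;
}
</pre>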
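On the MFlops question: the sketch below is not the actual Whetstone benchmark, only a crude floating-point throughput estimate in the same spirit, to illustrate what a single per-machine MFlops number would be based on:

<pre>
// Crude MFlops estimate: times a loop doing two floating-point operations
// per iteration. Real Whetstone exercises a much wider mix of operations.
#include <chrono>
#include <iostream>

int main()
{
  const long n = 50000000L;
  double x = 1.0;
  const auto t0 = std::chrono::steady_clock::now();
  for (long i = 0; i < n; ++i)
  {
    x = x * 1.0000001 + 0.0000001;  // 2 floating-point ops
  }
  const std::chrono::duration<double> dt =
    std::chrono::steady_clock::now() - t0;

  // Print x so the compiler cannot discard the loop.
  std::cout << (2.0 * n) / (dt.count() * 1.0e6) << " MFlops"
            << " (checksum " << x << ")\n";
  return 0;
}
</pre>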
Review of OptMattesMI
- Lessons learned
  - Mutexes are bad (locking in the threaded inner loop is expensive)
  - Memory is good (trading memory, e.g. per-thread copies, for locks pays off; see the sketch after this list)
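A minimal sketch of the pattern behind both lessons, assuming the usual threaded-histogram situation in metric code: rather than guarding one shared histogram with a mutex, each thread fills a private copy that is merged afterwards. Names and sizes are illustrative, not the OptMattesMI code:

<pre>
// "Mutex bad, memory good": no locking on the hot path; each thread owns a
// histogram copy, and the copies are merged single-threaded at the end.
#include <cstddef>
#include <thread>
#include <vector>

int main()
{
  const std::size_t numThreads = 4;
  const std::size_t numBins = 64;

  // One private histogram per thread -- the extra memory buys lock freedom.
  std::vector<std::vector<long> > perThread(
    numThreads, std::vector<long>(numBins, 0));

  std::vector<std::thread> workers;
  for (std::size_t t = 0; t < numThreads; ++t)
  {
    workers.emplace_back([t, numBins, &perThread]() {
      for (long i = 0; i < 1000000; ++i)
      {
        // Stand-in for binning one sample; the real code bins joint
        // image intensities.
        perThread[t][static_cast<std::size_t>(i) % numBins] += 1;
      }
    });
  }
  for (std::size_t t = 0; t < workers.size(); ++t) { workers[t].join(); }

  // Merge the partial histograms after the parallel phase.
  std::vector<long> total(numBins, 0);
  for (std::size_t t = 0; t < numThreads; ++t)
  {
    for (std::size_t b = 0; b < numBins; ++b) { total[b] += perThread[t][b]; }
  }
  return 0;
}
</pre>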
ctest suite
- The ideal set of tests is machine-specific
  - e.g., number of threads and image size (see the sweep sketch below)
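A sketch of such a machine-specific sweep, assuming the ITK 3.x itk::MultiThreader interface and a hypothetical runBenchmark() standing in for one timing run; a site would trim the thread counts and image sizes to what its hardware supports:

<pre>
// Runs the same benchmark body over several thread counts and image sizes
// so each machine can report the combinations that matter for it.
#include "itkMultiThreader.h"
#include <iostream>

// Hypothetical stand-in for one registration timing run.
void runBenchmark(int threads, int imageDim)
{
  std::cout << "threads=" << threads << " image=" << imageDim << "^3\n";
}

int main()
{
  const int threadCounts[] = { 1, 2, 4, 8 };
  const int imageDims[] = { 64, 128, 256 };

  for (int t : threadCounts)
  {
    itk::MultiThreader::SetGlobalDefaultNumberOfThreads(t);
    for (int d : imageDims)
    {
      runBenchmark(t, d);
    }
  }
  return 0;
}
</pre>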