ITK Registration Optimization/2007-04-06-tcon
From NAMIC Wiki
Revision as of 14:02, 6 April 2007
Agenda
Tests
- Two types of tests
  - Baseline: LinearInterp is useful for profiling
    - Profile reports in Reports subdir
  - Optimization: OptMattesMI is a self-contained test
    - Submit to dashboard
      - non-optimized speed
      - optimized speed
      - optimized error (difference from non-optimized results)
Timing
- Priority
- Thread affinity
Performance Dashboard
- Ranking computers?
  - CPU, memory, etc. in dashboard
  - MFlops as measured by Whetstone?
- Public submission of performance
  - Logins and passwords configured in CMake
  - Encryption in CMake?
- Organization of Experiments/Dashboards
  - When new experiment?
- Appropriate summary statistics
  - Per machine: day vs. speed/error
  - Per test: MFlops vs. speed/error
  - All: day vs. change in performance
Review of OptMattesMI
- Lessons learned
  - Mutex bad
  - Memory good
ctest suite
- Ideal set of tests is machine-specific
  - e.g., number of threads and image size