“Before relying on a new experimental device, an experimental scientist always establishes its accuracy. A new detector is calibrated when the scientist observes its responses to known input signals. The results of this calibration are compared against the expected response. An experimental scientist would never conduct an experiment with uncalibrated detectors - that would be unscientific. So too, simulations and analysis with untested software do not constitute science.” (copied from Testing and Continuous Integration with Python, created by Kathryn Huff, see also the Testing chapter in Effective Computation In Physics by Anthony Scopatz and Kathryn Huff)
In software tests, expected results are compared with observed results in order to establish accuracy:
```python
def fahrenheit_to_celsius(temp_f):
    """
    Converts temperature in Fahrenheit
    to Celsius.
    """
    temp_c = (temp_f - 32.0) * (5.0/9.0)
    return temp_c


def test_fahrenheit_to_celsius():
    temp_c = fahrenheit_to_celsius(temp_f=100.0)
    expected_result = 37.777777
    assert abs(temp_c - expected_result) < 1.0e-6
```
Why do we not compare all the digits of the result directly with the expected value?
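The answer is floating-point arithmetic: 100 °F corresponds to 37.777… °C, a repeating decimal that binary floating point can only approximate, so an exact equality check is fragile. Below is a minimal sketch of the same comparison using `math.isclose` from the standard library; it is an alternative to the hand-written tolerance above, not part of the original example.

```python
import math

# 100 degrees Fahrenheit is 37.7777... degrees Celsius, a repeating decimal
# that binary floating point can only approximate.
temp_c = (100.0 - 32.0) * (5.0/9.0)

print(temp_c)                     # 37.77777777777778
print(temp_c == 37.777777)        # False: exact comparison fails
print(math.isclose(temp_c, 37.777777, abs_tol=1.0e-6))  # True: compare within a tolerance
```

In a pytest-based test suite the same idea can be written as `assert temp_c == pytest.approx(37.777777, abs=1.0e-6)`.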
Suiting up to modify untested code: imagine that you need to change code like the following, which has no tests:
```python
def fahrenheit_to_celsius(temp_f):
    # the result depends only on the input argument and is returned to the caller
    temp_c = (temp_f - 32.0) * (5.0/9.0)
    return temp_c

temp_c = fahrenheit_to_celsius(temp_f=100.0)
print(temp_c)
```
The same conversion can also be written with global variables. This version is much harder to test and to modify safely, because the result is passed around through hidden module-level state instead of a return value:

```python
f_to_c_offset = 32.0
f_to_c_factor = 0.555555555
temp_c = 0.0

def fahrenheit_to_celsius_bad(temp_f):
    # the result is written to a module-level variable instead of being returned
    global temp_c
    temp_c = (temp_f - f_to_c_offset) * f_to_c_factor

fahrenheit_to_celsius_bad(temp_f=100.0)
print(temp_c)
```
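To see why the second version is harder to work with, here is a sketch of how each might be tested, assuming the two functions above live in a hypothetical module called temperature.py (the module name and test names are illustrative, not part of the original example):

```python
import math

import temperature  # hypothetical module containing the two functions above


def test_pure_version():
    # input in, output out: no setup or cleanup is needed
    result = temperature.fahrenheit_to_celsius(temp_f=100.0)
    assert math.isclose(result, 37.7778, rel_tol=1.0e-4)


def test_global_version():
    # the result is communicated through temperature.temp_c, so the test
    # must know about that global and reset it to avoid leaking state
    temperature.temp_c = 0.0
    temperature.fahrenheit_to_celsius_bad(temp_f=100.0)
    assert math.isclose(temperature.temp_c, 37.7778, rel_tol=1.0e-4)
```

Any other code that rebinds temperature.temp_c, f_to_c_offset, or f_to_c_factor can silently change what the second test observes, which is exactly why code built on global state is risky to modify without tests.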