The calculations used in many statistical tests and methods require that the input data be “normally distributed”. Such calculations include those for t-tests, ANOVA tables, F-tests, tolerance limits, and process capability indices. Unless the raw data used in such calculations are normally distributed, the resulting conclusions may be incorrect.
Therefore, being able to assess whether data are normally distributed is critical to ensuring that your “valid statistical techniques” are “suitable for their intended use” (as required by the FDA).
Dimensional data (length, width, height) are typically normally distributed, but many other types of data are almost always non-normal, such as tensile strength, burst pressure, and time or cycles to failure. Some non-normal data can be transformed so that the transformed values are approximately normal, allowing statistical calculations run on the transformed data to be valid.
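As a minimal sketch of this idea (using simulated lognormal time-to-failure data and the Shapiro-Wilk test, both illustrative choices not specified in this description), a simple log transformation can turn strongly right-skewed data into approximately normal data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated time-to-failure data: lognormal, hence right-skewed and non-normal.
times = rng.lognormal(mean=3.0, sigma=0.8, size=200)

# Shapiro-Wilk test: a small p-value (e.g. < 0.05) suggests non-normality.
p_raw = stats.shapiro(times).pvalue

# Taking logarithms often normalizes such data: the log of a lognormal
# variable is, by construction, normally distributed.
p_log = stats.shapiro(np.log(times)).pvalue

print(f"raw p-value: {p_raw:.4g}, log-transformed p-value: {p_log:.4g}")
```

The raw data fail the normality test, while the log-transformed values do not, so a t-test or capability index computed on the transformed values would rest on a satisfied assumption.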
This webinar explains what it means to be “normally distributed”, how to assess normality, how to test for normality, and how to transform non-normal data into normal data.
Normality tests and normality transformations are a combination of graphical and numerical methods that have been in use for many decades. These methods are essential whenever a statistical test or method is used whose fundamental assumption is that the input data are normally distributed.
Normality “testing” involves creating a normal probability plot and calculating simple statistics for comparison to critical values in published tables. A normality “transformation” involves making simple changes to each raw-data value, such that the resulting values are more normally distributed than the original raw data.
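Both ideas can be sketched with standard scipy tools (the Weibull-distributed sample and the Box-Cox transformation here are illustrative assumptions, not methods this description prescribes). `scipy.stats.probplot` computes the normal probability plot points along with the correlation coefficient r of the fitted line, and `scipy.stats.boxcox` finds the power transformation that best normalizes the data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Skewed positive data, loosely resembling burst-pressure measurements.
data = rng.weibull(1.5, size=150) * 100

# Normal probability plot: r close to 1 indicates the plotted points
# fall near a straight line, i.e. the data look approximately normal.
(osm, osr), (slope, intercept, r) = stats.probplot(data, dist="norm")

# Box-Cox transformation: estimates the power lambda that makes the
# transformed values as close to normally distributed as possible.
transformed, lam = stats.boxcox(data)
(_, _), (_, _, r_t) = stats.probplot(transformed, dist="norm")

print(f"r before: {r:.4f}, r after Box-Cox: {r_t:.4f}, lambda: {lam:.3f}")
```

Comparing r before and after the transformation gives an objective element to the evaluation; judging how close to 1 is “close enough” for the intended use remains the subjective part discussed below.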
Evaluation of the results of “tests” and “transformations” involves some objective and some subjective decisions; this webinar provides guidance on both types of decision making.
Areas Covered in the Session:
Who Should Attend: