One way to validate and test the performance of a statistical method is to conduct a simulation study. In a simulation study, data are generated with several specific characteristics; these characteristics form the conditions of the study, under which the method is validated or tested. It is often most useful to simulate data close to a real empirical situation, but also to include extreme data settings in order to investigate to what extent the method performs well and when it might fail. A condition that is often used to validate statistical methods is sample size: data can be generated for sample sizes ranging from the average size typically collected in the substantive field where the method is used, down to an unusually small sample size at which the method might fail.
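As a minimal sketch of this idea, the snippet below generates replicated datasets under two hypothetical sample-size conditions: a size typical for an applied field and an unusually small one. The normal data-generating model, the chosen sizes, and the number of replications are all illustrative assumptions, not part of any particular study.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical sample-size conditions: a typical size for the field
# and an unusually small one where a method might start to fail.
sample_sizes = [25, 200]
n_replications = 1000

# For each condition, generate replicated datasets from an assumed
# normal data-generating model (mean 0.5, standard deviation 1.0).
datasets = {
    n: [rng.normal(loc=0.5, scale=1.0, size=n) for _ in range(n_replications)]
    for n in sample_sizes
}
```

Each condition then yields a collection of datasets to which the method under study can be applied repeatedly.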
The design of the simulation study is an important starting point and needs to be thought out well. In my experience it is most useful to start with a limited number of conditions and expand them throughout the study. Avoid using too many conditions, so that you can keep track of the results and interpret them sensibly.
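One way to keep a small design manageable is to enumerate the full grid of condition combinations explicitly, so every cell of the design is visible at a glance. The two factors and their levels below are hypothetical examples:

```python
from itertools import product

# A deliberately small, hypothetical design: two factors with two
# levels each. More levels can be added in later stages of the study.
sample_sizes = [25, 200]
effect_sizes = [0.0, 0.5]

# The fully crossed design: every combination of factor levels.
conditions = list(product(sample_sizes, effect_sizes))
```

With two factors at two levels each, this yields four conditions, which is easy to track and interpret; each added level multiplies the number of cells.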
After the statistical method under validation or testing is applied to the simulated data, its performance can be compared using several measures. Which of the available measures are suitable depends on the design and goal of the simulation study.
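As an illustration, three measures that are commonly reported for estimators in simulation studies are bias, mean squared error, and the empirical standard error. The helper below is a sketch of how these could be computed over the replicated estimates from one condition; the function name and interface are assumptions for this example:

```python
import numpy as np

def performance_measures(estimates, true_value):
    """Compute common performance measures for an estimator,
    given its estimates across replications and the true value
    used to generate the data."""
    estimates = np.asarray(estimates, dtype=float)
    bias = estimates.mean() - true_value            # average deviation from truth
    mse = ((estimates - true_value) ** 2).mean()    # mean squared error
    emp_se = estimates.std(ddof=1)                  # empirical standard error
    return {"bias": bias, "mse": mse, "empirical_se": emp_se}
```

For a design focused on inference rather than point estimation, measures such as coverage of confidence intervals or empirical rejection rates would be more appropriate.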