I know that for digital systems there are "fault simulators". These are basically digital simulators with the added capability of introducing certain kinds of typical faults (typically stuck-at faults, or more sophisticated fault models) and evaluating the system in comparison to a fault-free one. The goal is to find out whether certain patterns of test vectors (input data to the system) show different behavior with and without a fault present.
This is used to determine a set of test vectors for testing complex systems where you may not have access to measure every node, for example large digital chips.
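To make the idea concrete, here is a minimal sketch in Python of single stuck-at fault simulation on a hypothetical two-gate circuit (an AND feeding an OR). The circuit, the node names, and the brute-force search over all input vectors are made up for illustration; a real fault simulator works on large netlists and uses far smarter algorithms:

```python
from itertools import product

def circuit(a, b, c, faults=None):
    """Evaluate y = (a AND b) OR c, with optional stuck-at faults.
    faults maps a node name ('a','b','c','n1','y') to a forced 0 or 1."""
    faults = faults or {}
    def node(name, value):
        return faults.get(name, value)   # forced value wins if node is faulty
    a, b, c = node('a', a), node('b', b), node('c', c)
    n1 = node('n1', a & b)               # internal node: AND gate output
    return node('y', n1 | c)             # primary output: OR gate

# For each single stuck-at fault, list the test vectors that detect it,
# i.e. those where the faulty circuit differs from the fault-free one.
for name in ('a', 'b', 'c', 'n1', 'y'):
    for stuck in (0, 1):
        detecting = [v for v in product((0, 1), repeat=3)
                     if circuit(*v) != circuit(*v, faults={name: stuck})]
        print(f"{name} stuck-at-{stuck}: detected by {detecting}")
```

A set of vectors that together detect every fault in the list is then a usable production test set, which is exactly what the fault simulator is asked to find.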
For analog chips, typically much less complex than digital ones, I know of no such method. Here what is evaluated is rather the sensitivity to changes in component values, e.g. caused by tolerances in production. Apart from exhaustively simulating all components with all possible variations, statistical methods are used, e.g. the Monte Carlo method, which is available in many SPICE-based analog simulation programs.
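As a rough illustration of what such a Monte Carlo run does, here is a minimal Python sketch for a resistive voltage divider. The 1% tolerance, the Gaussian spread with 3 sigma at the tolerance band, and the sample count are assumptions for the example, not taken from any particular SPICE dialect:

```python
import random
import statistics

VIN = 5.0                      # supply voltage
R1_NOM, R2_NOM = 10e3, 10e3    # nominal resistor values
TOL = 0.01                     # assumed 1% tolerance
N = 10_000                     # number of Monte Carlo samples

def sample(nominal, tol):
    # Model the tolerance as a Gaussian with 3 sigma at the tolerance band.
    return random.gauss(nominal, nominal * tol / 3)

vouts = []
for _ in range(N):
    r1, r2 = sample(R1_NOM, TOL), sample(R2_NOM, TOL)
    vouts.append(VIN * r2 / (r1 + r2))

print(f"mean Vout = {statistics.mean(vouts):.4f} V")
print(f"stdev     = {statistics.stdev(vouts) * 1000:.2f} mV")
print(f"min/max   = {min(vouts):.4f} / {max(vouts):.4f} V")
```

The simulator does the same thing, only with the full netlist and a proper transient or AC analysis per sample instead of a closed-form divider equation.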
For other "electrical systems" I agree with Minder. Every kind of fault simulation must resort to a comparatively simple set of fault conditions to be manageable. If you have a simulator for your "electrical system", you can always introduce faults by one or both of the above methods and evaluate the outcome. It doesn't hurt to put some engineering experience into these simulations (where and when is which kind of fault likely to happen and likely to have troublesome consequences) to keep the effort in check. This is e.g. done when UL evaluates the safety of a circuit: certain types of faults (short circuits, open circuits) are manually created within the circuit under test and the behavior is evaluated. Not in a simulation, but on the real hardware.
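In a simulation, that kind of short/open fault enumeration can be sketched like this, reusing the divider from above; the extreme resistor values standing in for shorts and opens and the "safe" output window are purely illustrative assumptions:

```python
def divider(r1, r2, vin=5.0):
    if r1 + r2 == 0:           # both shorted: output tied to the supply
        return vin
    return vin * r2 / (r1 + r2)

SHORT, OPEN = 1e-3, 1e9        # approximate a short / an open with extreme values
faults = {
    'R1 short': (SHORT, 10e3),
    'R1 open':  (OPEN,  10e3),
    'R2 short': (10e3, SHORT),
    'R2 open':  (10e3, OPEN),
}
SAFE_LO, SAFE_HI = 2.0, 3.0    # assumed acceptable output window

for name, (r1, r2) in faults.items():
    v = divider(r1, r2)
    verdict = 'OK' if SAFE_LO <= v <= SAFE_HI else 'OUTSIDE SAFE WINDOW'
    print(f"{name}: Vout = {v:.3f} V -> {verdict}")
```

The engineering experience mentioned above goes into choosing which faults to put in that list at all, so the enumeration stays manageable.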