How to safeguard Artificial Neural Networks against unexpected extrapolation behaviour?

Artificial Neural Networks are often said to be sensitive and critical with respect to extrapolation. It is often hard to predict their behaviour for data that was not part of the training data set. How can one guard against such unexpected behaviour?

The issue is not specific to Artificial Neural Networks. It affects any kind of algorithm, optimization problem or mathematical model with a high degree of functionality in the same way. Extrapolation behaviour has to be checked and approved for analytical or rule-based models and algorithms just as thoroughly and conscientiously as for Machine Learning models. Besides the usual validation procedures, which are standard for any kind of Machine Learning training, some further measures are available for safeguarding the application and operation of Artificial Neural Networks:

  • introduction of dedicated robustness management,
  • accompanying anomaly and incident detection (a minimal out-of-distribution check is sketched after this list),
  • introduction of collective learning with the help of connected systems in the Internet-of-Things (IoT),
  • dedicated requirements management including engineering of the operational design domains (ODD), e.g. with the help of scenario-based approaches,
  • constructive increase of the system robustness by design, e.g. by
    • systematic identification and elimination of functional requirement conflicts,
    • specific parametrization of inputs and outputs (see the bounded-output sketch after this list),
    • special structural design of the Machine Learning models,
    • hybrid modeling techniques (see the hybrid-model sketch after this list),
    • ...
  • ...
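
As an illustration of the accompanying anomaly detection mentioned above, the following sketch flags operational inputs that lie outside the region covered by the training data, so that the network's prediction can be rejected or routed to a fallback. It is only a minimal example in plain NumPy; the range margin and distance percentile are illustrative assumptions and would have to be chosen for the concrete application.

```python
import numpy as np

class InputDistributionGuard:
    """Flags inputs outside the region covered by the training data so the
    network's prediction can be rejected or routed to a fallback."""

    def __init__(self, X_train, margin=0.05, percentile=99.0):
        X_train = np.asarray(X_train, dtype=float)
        lo, hi = X_train.min(axis=0), X_train.max(axis=0)
        span = hi - lo
        # Per-feature bounds, slightly widened to tolerate mild extrapolation.
        self.lower = lo - margin * span
        self.upper = hi + margin * span
        # Mahalanobis-distance threshold taken from the training data itself.
        self.mean = X_train.mean(axis=0)
        self.cov_inv = np.linalg.pinv(np.cov(X_train, rowvar=False))
        self.d_max = np.percentile(self._distance(X_train), percentile)

    def _distance(self, X):
        diff = np.atleast_2d(X) - self.mean
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, self.cov_inv, diff))

    def is_in_domain(self, X):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        in_box = np.all((X >= self.lower) & (X <= self.upper), axis=1)
        in_cloud = self._distance(X) <= self.d_max
        return in_box & in_cloud
```

In operation, predictions would only be used where `is_in_domain` returns True; other inputs would be logged as incidents and handled by a fallback.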
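
One simple form of the specific parametrization of inputs and outputs is to map the unconstrained network output into a physically admissible interval, so that even strongly extrapolated inputs cannot produce implausible predictions. The bounds `y_min` and `y_max` below are placeholders for application-specific limits.

```python
import numpy as np

def bounded_output(raw_output, y_min, y_max):
    """Maps the unconstrained raw network output through a sigmoid into the
    admissible interval [y_min, y_max]. Even for strongly extrapolated
    inputs the prediction cannot leave this physically plausible range."""
    raw = np.asarray(raw_output, dtype=float)
    return y_min + (y_max - y_min) / (1.0 + np.exp(-raw))
```

If the network is trained with this mapping as its final layer, the bound becomes part of the learned function itself rather than a post-hoc clipping step.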
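
The hybrid modeling idea can be sketched as a grey-box combination of an analytical baseline and a bounded data-driven correction. The functions `physical_model` and `ml_correction`, as well as `max_correction`, are placeholders for application-specific components.

```python
import numpy as np

def hybrid_prediction(x, physical_model, ml_correction, max_correction):
    """Grey-box model: an analytical or rule-based baseline carries the
    overall behaviour, the data-driven part only contributes a bounded
    residual correction. Outside the training domain the clipped
    correction keeps the result close to the physically motivated baseline."""
    baseline = physical_model(x)      # analytical / rule-based part
    residual = ml_correction(x)       # data-driven part, trained on the residuals
    return baseline + np.clip(residual, -max_correction, max_correction)
```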

The realization of these measures may look very different depending on the field of application. Further details and information can be requested from info@andata.at.

Last update on 2022-02-20 by Andreas Kuhn.
