Removed info

The info below was cut out because it didn't maintain a Trek perspective, but I'm putting it here in case anyone wants to re-incorporate it somehow. Though technically it seems accurate, I don't think it quite fits MA. Logan 5 04:08, 10 Jan 2006 (UTC)

For example:
Program A attempts to analyse data block 1. The data is faulty, but the 'answer' looks OK to Program A because it has never encountered such an anomalous result before and so doesn't recognise it as such.
Program B takes the answer and performs a more advanced operation on it. Again the outcome looks OK, so an 'OK' message is sent to Program A. Program A, since it is a learning program, moves that method of solving this type of problem up its hierarchical structure. Next time it will waste less time before trying that method. Every time a similar set of data appears, the scenario repeats, until Program A's default method is the 'faulty' one and it will never again obtain a valid resolution of any such data set.
Program B, if not checked by a routine higher up the chain, will be similarly affected. And so on for C, D, E....
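The feedback loop described above can be sketched in Python. This is only an illustrative model, not anything from an actual system: the method names, the priority-swap rule, and the "anomalous data" check are all invented for the example.

```python
def sound_method(block):
    # Correct, but refuses data it cannot validate.
    if any(x < 0 for x in block):
        raise ValueError("anomalous data")
    return max(block)

def faulty_method(block):
    # Never rejects anything; for anomalous data it returns a
    # plausible-looking but wrong answer.
    return sum(block)

class LearningProgram:
    def __init__(self, methods):
        self.methods = list(methods)  # methods[0] is the current default

    def analyse(self, block):
        # Try methods in priority order until one produces an answer.
        for method in self.methods:
            try:
                return method, method(block)
            except ValueError:
                continue
        raise RuntimeError("no method applies")

    def feedback_ok(self, method):
        # An 'OK' message from downstream promotes the method one
        # step up the hierarchy.
        i = self.methods.index(method)
        if i > 0:
            self.methods[i - 1], self.methods[i] = (
                self.methods[i], self.methods[i - 1])

program_a = LearningProgram([sound_method, faulty_method])
anomalous = [-1, 2, 3]  # a faulty data block

for _ in range(3):
    method, answer = program_a.analyse(anomalous)
    program_a.feedback_ok(method)  # Program B reports 'OK' each time

print(program_a.methods[0] is faulty_method)  # True
```

After a few repetitions of the scenario, the 'faulty' method sits at the top of the hierarchy and is tried first for every similar data set.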
Normally, of course, at some point in this process the operating system will notice that the data coming up from its analytical agents is obviously faulty and will terminate the procedure. A well-designed system will also drop the priority of the methods used to process this type of data back to their original settings, which may slow down future operation but stabilises the system.
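The supervisory step might look something like this minimal sketch. The sanity check, the valid range, and the priority table are all assumptions made up for illustration:

```python
# Lower number = tried first; the sound method is the original default.
ORIGINAL_PRIORITY = {"sound_method": 0, "faulty_method": 1}

def supervise(current_priority, answer, valid_range=(0, 100)):
    """Return (priority_table, terminated).

    If an answer is obviously faulty, terminate the procedure and
    restore the original priority settings.
    """
    lo, hi = valid_range
    if answer is None or not (lo <= answer <= hi):
        return dict(ORIGINAL_PRIORITY), True
    return current_priority, False

# After repeated bad feedback, the faulty method has become the default:
drifted = {"sound_method": 1, "faulty_method": 0}
restored, terminated = supervise(drifted, answer=-42)
print(terminated, restored == ORIGINAL_PRIORITY)  # True True
```

Resetting the priorities means the system must again work its way through slower methods, but it prevents the 'faulty' default from becoming permanent.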
Of course, when you have the scenario of an android getting scared, for example, the data may be so foreign to the system's experience that the 'manager' never steps in to rectify the problem until it is far too late to stop the process and the management systems themselves are beginning to be affected. This is cascade failure: 'A' malfunctions, which damages B, which damages C.....