Why do we use machine learning in control?

Historically, control engineers have been quite conservative, and this shows in how the various control theories developed.

Up to now, controller design has largely been based on a deterministic model, such as a transfer function or a state-space model. Whatever we do not know about the system, we tend to phrase as uncertainty or disturbance, and fields such as adaptive control and robust control emerged from exactly this way of thinking.

It would be a disaster, at least in safety-critical applications, for a control engineer not to know the system well before designing the controller, and to simply hope that the controller will learn the uncertainties by itself. The case where the controller fails to learn the system, and therefore fails to complete the task, cannot be ruled out, because learning itself carries a certain amount of uncertainty.

But do we need to know our system very well before anything can be done? Apparently not. Take the PID controller: we may not understand the system completely, yet after a fair amount of trial and error we usually end up with a reasonably well-performing controller. That is fine.
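
As a reminder of how little is needed for this to work, here is a minimal sketch of such a loop. The first-order plant and the PID gains below are made up; the gains stand in for values one would find by trial and error rather than by model-based design.

```python
import numpy as np

# Made-up PID gains, as if found by trial and error rather than analysis.
Kp, Ki, Kd = 2.0, 1.0, 0.1
dt, setpoint = 0.1, 1.0
x, integral, prev_error = 0.0, 0.0, 0.0

for _ in range(200):                      # 20 seconds of simulated time
    error = setpoint - x
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = Kp * error + Ki * integral + Kd * derivative
    prev_error = error
    # A plant the engineer never wrote down explicitly: dx/dt = -x + u.
    x += dt * (-x + u)

print(f"output after 20 s: {x:.3f}")      # should settle near the setpoint of 1.0
```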

Modern control design relies more and more heavily on a system model and on analysis. Good analysis results do not necessarily lead to good performance in reality, but the other way around, good performance without sound analysis, is definitely not going to happen.

Instead of a state-space or transfer-function model, I have chosen another way of modelling: the Gaussian process, a non-parametric regression tool. The appeal of such a model is that it can approximate a mapping from a batch of data, and when new data arrive it can update that mapping to reflect the new system behaviour.
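
Here is a minimal sketch of that kind of regression, kept deliberately toy-sized: the one-dimensional target function, the squared-exponential kernel, the noise level, and all the data below are made up purely for illustration.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = A[:, None] - B[None, :]
    return signal_var * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X_train, y_train, X_test, noise_var=1e-2):
    """Posterior mean and variance of a zero-mean GP at the test inputs."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

# A made-up one-dimensional "system": learn y = f(x) from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 15)
y = np.sin(X) + 0.1 * rng.standard_normal(15)
mean, var = gp_posterior(X, y, np.linspace(-3, 3, 100))

# "Learning" new behaviour = appending data and recomputing the posterior.
X = np.append(X, 4.0)
y = np.append(y, np.sin(4.0))
mean_new, var_new = gp_posterior(X, y, np.linspace(-3, 5, 120))
```

The last few lines are the whole point: updating the model amounts to appending observations and recomputing the posterior, which is exactly what makes the approach attractive and, as argued next, also what makes it uncomfortable.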

This sounds very promising at first glance. But who wants the controller of an airplane to learn something new while the plane is flying? We want an experienced pilot, not one still in training who uses us as the training cases. There is one exception, though: when something truly unexpected happens. In that situation an experienced pilot and a complete novice are in the same position, in the sense that neither has any idea what happened or how to respond. We can only hope for the best, without knowing whether whatever they do will actually work.

It sounds almost ironic to have built something, to have no idea whether it will work, and to be left only hoping for the best. By phrasing the problem this way, you put yourself in a very difficult position. Can you be a bit more optimistic and give the user some guarantee, whether you are learning something new or designing a controller under the assumption that you already know everything?

Reinforcement learning takes a very modest stance in control: the premise is that the controller has little information about the system and has to learn everything by itself. As far as applications go, that by itself is not a big problem.

One proposed arrangement is that the learning component does not interfere with the normal operation of the nominal controller, but provides some assistance when things go wrong. That does not sound like a bad idea. But think carefully: does it really tackle the problem of guarantees? Not interfering with normal operation is fine, and providing assistance in an emergency is fine, but can you guarantee anything about that assistance? I can't. Then what is the point of keeping a backup whose usefulness we can never know in advance?

Is the combination of model predictive control and Gaussian process regression a reasonable one?
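
To make the question concrete, here is a toy sketch of what such a combination could look like, under assumptions I am making up for illustration: a scalar plant true_dynamics that the controller never sees directly, a Gaussian process (scikit-learn's GaussianProcessRegressor) fitted offline to one-step transitions, and a crude random-shooting search standing in for a proper MPC solver.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Made-up scalar system x_{k+1} = f(x_k, u_k); the true f is unknown to the
# controller and is only used here to generate training data and to simulate.
def true_dynamics(x, u):
    return 0.9 * x + 0.5 * np.tanh(u)

rng = np.random.default_rng(1)
X_train = rng.uniform(-2, 2, (200, 2))            # columns: state, input
y_train = np.array([true_dynamics(x, u) for x, u in X_train])
model = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
model.fit(X_train, y_train)

def mpc_step(x0, horizon=5, n_candidates=500, x_ref=1.0):
    """Return the first input of the cheapest random candidate sequence,
    simulating forward with the GP mean as the prediction model."""
    U = rng.uniform(-1, 1, (n_candidates, horizon))
    cost = np.zeros(n_candidates)
    x = np.full(n_candidates, x0)
    for k in range(horizon):
        x = model.predict(np.column_stack([x, U[:, k]]))
        cost += (x - x_ref) ** 2 + 0.01 * U[:, k] ** 2
    return U[np.argmin(cost), 0]

# Receding-horizon loop on the true plant.
x = 0.0
for _ in range(20):
    u = mpc_step(x)
    x = true_dynamics(x, u)
print(f"final state: {x:.3f}")   # should approach x_ref = 1.0
```

Even in this toy setting the uncomfortable questions from above remain: the GP mean is only a guess, and nothing in the loop guarantees how good that guess is far from the training data.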