In this setting, the input data are not generated by an external source: the points at which the unknown function is observed must be chosen by some specific algorithm. This condition arises in applications from, among others, neural networks, robotics, optimal control, and statistics. We consider the case where kernel-based local models are employed under the empirical risk minimization (ERM) principle, which is typical of machine learning problems. Kernel smoothing models have been routinely employed in the literature for many applications, both in contexts where input data are provided by an external source and in those where the choice of the training set is an issue. Other important contexts in which kernel models are applied include, among others, approximate dynamic programming (ADP) and reinforcement learning, density estimation, control, and image processing.

When the observations can be chosen freely, the learning problem leads to the task of generating a good sampling of the input space, i.e., one that positively affects the rate of estimation of the best element within the class of approximating structures. The issue is even more crucial with local models such as kernel smoothing approximators, since their structure depends directly on the observed data. The kind of sampling we focus on as "good" in the terms defined above is the class of so-called low-discrepancy sequences. This setting enables the estimation, through a learning procedure, of the behavior of some complex system as a function of parameters that we can control. For the actual test, we consider a classic system representing a trolley supporting a payload through a cable.
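To make the idea concrete, here is a minimal, self-contained sketch (not the paper's actual algorithm): training inputs are drawn from a Sobol low-discrepancy sequence via scipy.stats.qmc, a Nadaraya-Watson kernel smoothing estimator is fitted, and its test error is compared against i.i.d. uniform sampling on a toy target function. The target function, bandwidth, and sample sizes are all hypothetical choices made only for illustration.

```python
import numpy as np
from scipy.stats import qmc


def target(x):
    """Toy 'unknown' function to be learned (hypothetical example)."""
    return np.sin(2 * np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])


def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.1):
    """Gaussian-kernel smoothing (Nadaraya-Watson) estimate at the queries."""
    # Pairwise squared distances between query and training points.
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2 * bandwidth**2))   # kernel weights
    return (w @ y_train) / w.sum(axis=1)   # locally weighted average


rng = np.random.default_rng(0)
n_train, n_test, dim = 256, 1000, 2  # n_train is a power of 2 for Sobol

# Low-discrepancy training inputs: first 256 points of a Sobol sequence.
x_lds = qmc.Sobol(d=dim, seed=0).random(n_train)
# Baseline: i.i.d. uniform training inputs of the same size.
x_iid = rng.uniform(size=(n_train, dim))

x_test = rng.uniform(size=(n_test, dim))
y_test = target(x_test)

for name, x_tr in [("Sobol", x_lds), ("uniform", x_iid)]:
    y_hat = nadaraya_watson(x_tr, target(x_tr), x_test)
    rmse = np.sqrt(np.mean((y_hat - y_test) ** 2))
    print(f"{name:8s} test RMSE: {rmse:.4f}")
```

The comparison reflects the point made above: because the kernel smoother's structure depends directly on the observed inputs, a sampling scheme that covers the input space more evenly typically yields a better empirical estimate for the same number of observations.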