Statistics Seminars: Imprecise machine learning models using sets of probabilities and the uncertainty trick
28 November 2016 14:00 in CM221
An approach for incorporating imprecise prior knowledge into SVM-based machine learning models is considered. The main idea underlying the approach is to use a double duality representation within the minimax strategy of decision making. This idea yields simple extensions of SVMs with additional constraints on the optimization variables (the Lagrange multipliers) that formalize the incorporated imprecise information. The approach is extended to deal with interval-valued or set-valued training data by means of the so-called uncertainty trick, whereby training examples with interval uncertainty are transformed into training data with probabilistic uncertainty: every interval is replaced by a set of training points such that every point inside the interval has an unknown probability from a predefined set of probabilities. It is also shown how to incorporate imprecise prior knowledge into neural networks, which can be regarded as a promising machine learning tool.
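The uncertainty trick described above can be illustrated with a minimal sketch. The function below is hypothetical (the abstract does not specify the discretization or the form of the probability set); it assumes a one-dimensional interval-valued feature, a uniform grid of surrogate points, and an epsilon-contamination model as the predefined set of probabilities, so each surrogate point carries only lower and upper probability bounds rather than a precise weight.

```python
import numpy as np

def uncertainty_trick(x_lo, x_hi, y, n_points=5, eps=0.2):
    """Hypothetical sketch of the uncertainty trick: replace one
    interval-valued training example [x_lo, x_hi] with a grid of
    point-valued examples, each carrying an imprecise probability.

    The probability set is assumed here to be an epsilon-contamination
    neighbourhood of the uniform distribution over the grid points,
    giving lower and upper bounds on each point's probability."""
    pts = np.linspace(x_lo, x_hi, n_points)
    p_uniform = np.full(n_points, 1.0 / n_points)
    # Epsilon-contamination bounds: the true (unknown) probability of
    # each surrogate point lies between p_lower and p_upper.
    p_lower = (1.0 - eps) * p_uniform
    p_upper = (1.0 - eps) * p_uniform + eps
    labels = np.full(n_points, y)
    return pts, labels, p_lower, p_upper

# One interval example [0, 1] with class label +1 becomes five
# point-valued examples with imprecise probabilities.
pts, labels, p_lo, p_hi = uncertainty_trick(0.0, 1.0, +1)
```

The resulting point-valued data with probability bounds could then enter an SVM dual problem as extra constraints on the Lagrange multipliers, in line with the minimax formulation mentioned in the abstract.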
Contact firstname.lastname@example.org for more information