Presentation Name: Practical Issues and Common Pitfalls in Randomized Learning
Presenter: Dr. Dian-Hua Wang
Date: 2016-11-15
Location: Room 1801, Guanghua East Main Building (光华东主楼)
Abstract: Randomized learning techniques for neural networks have been explored and developed since the late 1980s, and have received considerable attention for their potential to effectively solve modelling problems in the big-data setting. Random Vector Functional-link (RVFL) networks, a class of randomized learner models, can be regarded as feedforward neural networks trained with a specific randomized algorithm: the hidden weights and biases are randomly assigned and kept fixed during the training phase. In this talk, we provide some insights into RVFL networks and highlight some practical issues and common pitfalls associated with RVFL-based modelling techniques. Inspired by the folklore that "all high-dimensional random vectors are almost always nearly orthogonal to each other", we establish a theoretical result on the inability of RVFL networks to universally approximate nonlinear maps when the network is built incrementally with input weights and biases randomly selected from a fixed scope and output weights evaluated constructively. We also address the significance of the scope setting of the random weights and biases with respect to modelling performance, and empirically reveal the correlation between the rank of the hidden output matrix and the learner's generalization capability.
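The RVFL construction described in the abstract — randomly assigned hidden weights and biases kept fixed, with only the output weights trained — can be illustrated with a minimal sketch. This is not the speaker's code; the toy target, node count, and scope value are assumptions chosen for illustration. Note how the scope of the random weights and the rank of the hidden output matrix H, both discussed in the talk, appear explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem (assumption): approximate y = sin(2*pi*x) on [0, 1].
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()

L = 50        # number of hidden nodes (assumption)
scope = 10.0  # random weights/biases drawn from [-scope, scope] (assumption)

# Hidden weights and biases are randomly assigned once and stay fixed.
W = rng.uniform(-scope, scope, size=(X.shape[1], L))
b = rng.uniform(-scope, scope, size=L)

# Hidden output matrix H (sigmoid activation).
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Only the output weights are trained, here by least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ beta

rmse = np.sqrt(np.mean((y - y_hat) ** 2))
rank = np.linalg.matrix_rank(H)
print(f"RMSE: {rmse:.4f}, rank(H): {rank} of {L}")
```

Shrinking `scope` toward zero makes the sigmoid hidden outputs nearly affine in the input, which tends to lower the effective rank of H and degrade the fit — a simple way to observe the scope-setting issue the talk addresses.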
Annual Speech Directory: No. 245