Keynote Speeches and Plenary Talks

December 15, 2015, Monday (Venue: Ballroom, Level 3, Shangri-La Hotel)

Bernard Widrow, Stanford University, USA

8:10am - 8:50am

Nature's Little Secret: Hebbian-LMS Learning Algorithm?
Abstract TBA.
Bernard Widrow received the S.B., S.M., and Sc.D. degrees in Electrical Engineering from the Massachusetts Institute of Technology in 1951, 1953, and 1956, respectively. He joined the MIT faculty and taught there from 1956 to 1959. In 1959, he joined the faculty of Stanford University, where he is currently Professor of Electrical Engineering, Emeritus. He began research on adaptive filters, learning processes, and artificial neural models in 1957. Together with M.E. Hoff, Jr., his first doctoral student at Stanford, he invented the LMS algorithm in the autumn of 1959. Today, this is the most widely used learning algorithm, used in every modem in the world. He has continued working on adaptive signal processing, adaptive controls, and neural networks since that time. Dr. Widrow is a Life Fellow of the IEEE and a Fellow of AAAS. He received the IEEE Centennial Medal in 1984, the IEEE Alexander Graham Bell Medal in 1986, the IEEE Signal Processing Society Medal in 1986, the IEEE Neural Networks Pioneer Medal in 1991, the IEEE Millennium Medal in 2000, and the Benjamin Franklin Medal for Engineering from the Franklin Institute of Philadelphia in 2001. He was inducted into the National Academy of Engineering in 1995 and into the Silicon Valley Engineering Council Hall of Fame in 1999. Dr. Widrow is a past president and member of the Governing Board of the International Neural Network Society. He is associate editor of several journals and is the author of over 125 technical papers and 21 patents. He is co-author of Adaptive Signal Processing and Adaptive Inverse Control, both Prentice-Hall books. A new book, Quantization Noise, was published by Cambridge University Press in June 2008.

Zhaoyang Dong, University of Sydney, Australia

8:50am - 9:30am

Extreme Learning Machines based Intelligent Systems for Power System Security Assessment and Risk Management
Abstract Smart Grid technologies can be used to enable large-scale integration of renewable energies such as wind and solar power. However, the stochastic and volatile nature of such renewables brings significant challenges to the operational security of the smart grid. Conventional security assessment (SA) methods are simulation-based and are not fast enough to accommodate the rapid, random changes in renewable generation output. The University of Sydney Smart Grid research group has developed a series of data-driven approaches to enable real-time SA that protects the smart grid against the risk of blackouts. This talk will introduce an intelligent SA system based on ELM. An ensemble model is developed to generalize over the randomness of single ELMs during training. Benefiting from the unique properties of ELM and strategically designed decision-making rules, the intelligent system learns and operates very quickly and can estimate the credibility of its SA results, allowing an accurate and reliable real-time SA process.
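The ensemble-of-ELMs idea in the abstract can be illustrated with a toy numpy sketch (the synthetic data, network sizes, and majority-vote credibility rule below are illustrative assumptions, not the group's actual system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for operating-point data with binary secure/insecure labels.
X = rng.normal(size=(300, 8))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)

def train_elm(X, y, n_hidden, rng):
    """Single-hidden-layer ELM: random features plus a least-squares readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + 1e-2 * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_score(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# An ensemble of independently initialized ELMs averages out the
# randomness of any single member.
models = [train_elm(X, y, 40, rng) for _ in range(15)]
votes = np.sign(np.stack([elm_score(m, X) for m in models]))  # (15, 300)

prediction = np.sign(votes.sum(axis=0))    # majority vote over the ensemble
credibility = np.abs(votes.mean(axis=0))   # 1.0 when all members agree
print("training accuracy:", np.mean(prediction == y))
```

The agreement ratio gives a simple credibility estimate per sample: predictions on which the ensemble disagrees can be flagged for slower simulation-based checking.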
Zhaoyang Dong obtained his Ph.D. from the University of Sydney in 1999. He is Professor and Head of the School of Electrical and Information Engineering and Director of the Sydney Energy Systems Research Institute at the University of Sydney, and a contractor with Ausgrid and EPRI, USA. He is also director of the Faculty of Engineering and IT research cluster on Clean Intelligent Energy Networks, and Academic Director of the Tsinghua University - Sydney University Research Alliance on Energy Networks at the University of Sydney. Immediately prior, he was Ausgrid Chair and Director of the Centre for Intelligent Electricity Networks (CIEN) at the University of Newcastle, Australia. He has also worked at the Hong Kong Polytechnic University and as Manager for System Planning with Transend Networks (now TasNetworks), the power transmission company for Tasmania, Australia. His research interests include smart grid, power system planning and stability, load modeling, renewable energy, electricity markets, and computational methods. He is an editor of IEEE Transactions on Smart Grid, IEEE PES Letters, IEEE Transactions on Sustainable Energy, IET Renewable Power Generation, and the Springer/State Grid Journal of Modern Power Systems and Clean Energy. He is an international advisor for the journal Automation of Electric Power Systems, and has served as a guest editor for the International Journal of Systems Science.

Guang-Bin Huang, Nanyang Technological University, Singapore

9:30am - 10:10am

Hierarchical Extreme Learning Machines (ELM) – New Trend of Machine Learning
Abstract Neural networks (NN) and support vector machines (SVM) have played key roles in machine learning and data analysis over the past two to three decades. However, these popular learning techniques face some challenging issues, such as intensive human intervention, slow learning speed, and poor learning scalability. The objective of this talk is twofold: 1) it will introduce the concept of hierarchical Extreme Learning Machines (ELMs); 2) it will show the potential of combining ELM and deep learning (DL), which not only expedites learning (up to thousands of times faster) and reduces learning complexity but also improves learning accuracy in benchmark applications such as OCR, traffic sign recognition, hand gesture recognition, object tracking, 3D graphics, etc. ELM theories can indeed give some theoretical support to the local receptive fields and pooling strategies popularly used in deep learning. ELM theories may also explain why the brain is globally ordered but may be locally random. This talk shares with the audience several trends in machine learning: 1) the turning point from machine learning engineering to machine learning science; 2) the convergence of machine learning and biological learning; 3) the move from human and (living) thing intelligence to machine intelligence; 4) the move from the Internet of Things (IoT) to the Internet of Intelligent Things and the Society of Intelligent Things.
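For readers unfamiliar with ELM, the basic single-hidden-layer scheme that hierarchical ELMs stack can be sketched in a few lines (a minimal illustration on arbitrary toy data; the sizes and regularization constant are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: fit y = sin(x) on [-3, 3].
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel()

# ELM: the hidden layer is random and never trained; only the linear
# output weights are learned, in closed form.
n_hidden = 50
W = rng.normal(size=(1, n_hidden))   # random input weights
b = rng.normal(size=n_hidden)        # random biases
H = np.tanh(X @ W + b)               # hidden-layer activations

# Regularized least squares replaces iterative back-propagation,
# which is the source of ELM's training speed.
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ y)
rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
print("training RMSE:", rmse)
```

The one-shot linear solve, rather than gradient descent through the hidden layer, is what makes the speedups cited in the abstract plausible.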
Guang-Bin Huang received the B.Sc. degree in applied mathematics and the M.Eng. degree in computer engineering from Northeastern University, P. R. China, in 1991 and 1994, respectively, and the Ph.D. degree in electrical engineering from Nanyang Technological University, Singapore, in 1999. During his undergraduate studies, he also concurrently studied in the Applied Mathematics and Wireless Communication departments of Northeastern University, P. R. China. He serves as an Associate Editor of Neurocomputing, Cognitive Computation, Neural Networks, and IEEE Transactions on Cybernetics. He is a Senior Member of the IEEE. He was named a "Highly Cited Researcher" and listed in "The World's Most Influential Scientific Minds 2014" by Thomson Reuters. He received the best paper award from IEEE Transactions on Neural Networks and Learning Systems (2013). He has been invited to give keynotes at numerous international conferences. His current research interests include big data analytics, human-computer interfaces, brain-computer interfaces, image processing/understanding, machine learning theories and algorithms, extreme learning machines, and pattern recognition. Since May 2001, he has worked as an Assistant Professor and then Associate Professor (with tenure) in the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. He is Principal Investigator of the BMW-NTU Joint Future Mobility Lab on Human Machine Interface and Assisted Driving, Principal Investigator (data and video analytics) of the Delta - NTU Joint Lab, Principal Investigator (scene understanding) of the ST Engineering - NTU Corporate Lab, and Principal Investigator (marine data analysis and prediction) of the Rolls-Royce - NTU Corporate Lab. He has led and implemented several key industrial projects (e.g., as chief architect/designer and technical leader of the Singapore Changi Airport Cargo Terminal 5 Inventory Control System (T5 ICS) Upgrading Project).
One of his main contributions is a new machine learning theory and family of techniques called Extreme Learning Machines (ELM), which fills the gap between traditional feedforward neural networks, support vector machines, clustering, and feature learning techniques. ELM theories have recently been supported directly by evidence from biological learning, bridging the gap between machine learning and biological learning. ELM theories have also addressed the concern of J. von Neumann, the "father of computers," about why "an imperfect neural network, containing many random connections, can be made to perform reliably those functions which might be represented by idealized wiring diagrams."

December 16, 2015, Tuesday (Venue: Ballroom, Level 3, Shangri-La Hotel)

Sushing Chen, University of Florida, USA

8:30am - 9:10am

Big Data and ELM for Biomedicine
Abstract The Big Data Initiative was proposed by the OSTP (Office of Science and Technology Policy) of the White House in 2012, and it has had a significant worldwide impact. In the biomedical field there are several relevant programs, such as BD2K (Big Data to Knowledge), Big Brain, and Precision Medicine. First, from the Big Brain data we may ask: "Can we infer the true computational model of the human brain? Is it the ELM?" Next, we shall briefly describe these Big Data programs and explain their scientific contexts, which lead to some exemplar research problems requiring statistical modeling with ELM (Extreme Learning Machine) methods. Then we shall describe the Precision Medicine Program, which is essentially the personalized medicine problem. For this problem we shall propose a solution: an integrated framework of diagnostics and therapeutics, for which the exemplar research problems mentioned above lay the foundation. Two important technologies used in these research problems, gene-expression microarrays and NGS (Next-Generation Sequencing) of SNPs (Single Nucleotide Polymorphisms), will be described briefly, along with how ELM is applied to them. Finally, there is a great digital library of all biomedical publications, PubMed, established by the NLM (National Library of Medicine). Classifying this collection according to a given ontology (e.g., GO, the Gene Ontology) is a text-mining problem for which ELM provides a potential solution. In this talk, we wish to explain the usefulness and efficiency of ELM on these research problems.
After Sushing Chen received his BS in Mathematics from the National Taiwan University, he finished his PhD in Mathematics at the University of Maryland in 1970, soon started as Assistant Professor at the University of Florida, and became Professor of Mathematics there in 1980. He became Program Director of Geometric Analysis at the NSF (National Science Foundation) in 1983. In the meantime, his research shifted toward computer science and high-performance computing. In 1984, he took the position of Program Director of Intelligent Systems at NSF and began his research on artificial intelligence, pattern recognition, and machine learning. Since then, his research has focused on these topics and their applications: computer vision, robotics, manufacturing, uncertain reasoning, spatial reasoning, digital libraries, information access, and bioinformatics. He has received numerous NSF and DARPA grants in these areas, and has returned to government service as program director of various programs. He has also worked in industry as a consultant, including at the IBM Watson Research Center, the IBM Scientific Center, and the Boeing High Tech Center. Currently, he is Emeritus Professor of Computer and Information Science and Engineering at the University of Florida, Director of the Systems Biology Lab, and affiliated faculty of the McKnight Brain Institute and the UF Genetics Institute. Sushing Chen's scientific and engineering contributions include the classification of discrete subgroups of Lie groups, isometries of negatively curved manifolds, computer vision of non-rigid motions, spherical modeling of human perception, neural network control of semiconductor manufacturing, evidential reasoning in expert systems, interoperability of distributed digital libraries, genomics of plants, clinical bioinformatics, and text-mining of large corpora in knowledge bases.
His management of the NSF/DARPA/NASA Digital Libraries Initiative in 1994-1995 helped give rise to a new Internet industry, including companies such as Google, Amazon, and Facebook. His current effort is the modernization of TCM (Traditional Chinese Medicine) by using OMICS (genomics and proteomics) technologies in the pharmacology of herbal medicine.

Q. M. Jonathan Wu, University of Windsor, Canada

9:10am - 9:50am

Multi Extreme Learning Machines for Image Feature Representation
Abstract Most real-world images, such as face images, industrial images, and MRI images, are high-dimensional data. Feature representation aims to extract useful information and use it to build unsupervised or supervised classifiers or other types of predictor, since image-processing performance is closely tied to the features extracted and used. At present, three assumptions dominate the feature-representation area: 1) the low-dimensional manifold assumption, that high-dimensional data lie on an embedded low-dimensional manifold; 2) the low-dimensional subspace assumption, that high-dimensional data intrinsically lie in a low-dimensional subspace; and 3) the sparsity assumption, that data representations on an over-complete basis are sparse. Notably, we have found that many multilayer-ELM-based feature extraction and clustering methods closely mirror these three assumptions in their intrinsic operating mechanisms. By exploiting these similarities, we may greatly improve the practical performance of feature representation. In this lecture, we first discuss these similarities and then put forward a generalized ELM learning framework intended to extract optimized features. We then extend and apply this method to application fields such as dimensionality reduction, image identification, and image reconstruction. Experimental results show that the generalization performance of the proposed framework compares favorably with other feature-representation methods.
Jonathan Wu has been a Professor of Electrical and Computer Engineering and a Tier 1 Canada Research Chair in Automotive Sensors and Information Systems since 2005. He is the founding director of the Computer Vision and Sensing Systems Laboratory at the University of Windsor, Canada. Prior to joining the University, Dr. Wu was a Senior Research Officer and Group Leader at the National Research Council of Canada (NRC). He has published one book in the area of 3D vision and more than 300 peer-reviewed papers (including 150 journal publications) in the areas of computer vision, multimedia information processing, and intelligent systems. Dr. Wu is an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems and the journal Cognitive Computation. He has served on the editorial boards of IEEE Transactions on Systems, Man, and Cybernetics and the International Journal of Robotics and Automation, and has been on the Technical Program Committees and International Advisory Committees of many prestigious conferences.

Laurent Daudet, Paris Diderot University, France

9:50am - 10:30am

From Computational Imaging to Optical Computing: ELMs at the Speed of Light
Abstract In recent years, there has been a surge of methods that take advantage of computation to improve imaging systems. Here, we have investigated how a conceptually simple experiment, imaging with coherent light through a layer of multiply scattering material, is indeed close to an idealized physical implementation of compressed sensing. For higher-resolution imaging, we have used amplitude-only spatial light modulators to calibrate the system, which led to the development of new phase-retrieval algorithms that are robust to strong noise. In reverse, we investigate how this physical system can be used as a computing device that provides a large number of random projections of images, which could later be used for classification tasks, e.g., physically approximating a given kernel. This can be seen as the first layer of an ELM system, implemented physically at potentially high speed and low energy consumption. This is joint work with Francesco Caltagirone, Igor Carron, Angélique Drémeau, Sylvain Gigan, Florent Krzakala, Antoine Liutkus, Boshra Rajaei, and Alaa Saade.
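Conceptually, the scattering medium applies a fixed random linear transform and the camera records only intensities; a numerical caricature of this "optical first layer" follows (the complex Gaussian model of the medium and all sizes are simplifying assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_proj = 64, 1024
x = rng.normal(size=n_pixels)   # a flattened input image

# A multiply scattering layer is commonly modeled as a fixed complex
# Gaussian random matrix; the camera then records only intensities.
A = (rng.normal(size=(n_proj, n_pixels))
     + 1j * rng.normal(size=(n_proj, n_pixels))) / np.sqrt(2 * n_pixels)
features = np.abs(A @ x) ** 2   # nonlinear random projections |Ax|^2

# These intensities play the role of a fixed, untrained ELM hidden
# layer; only a linear readout on `features` would be trained digitally.
print(features.shape)
```

Since the random projection happens at the speed of light, only the small linear readout remains as a digital computation.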
Laurent Daudet studied at the Ecole Normale Supérieure in Paris, where he graduated in statistical and non-linear physics. In 2000, he received a PhD in mathematical modeling from the Université de Provence, Marseille, France. After a Marie Curie post-doctoral fellowship at Queen Mary University of London, UK, he worked as an associate professor at UPMC (Paris 6 University). He is now Professor at Paris Diderot University – Paris 7, with research at the Langevin Institute for Waves and Images. He is a junior fellow (2010-2015) of the prestigious Institut Universitaire de France, and a visiting professor (2012-2016) at the National Institute of Informatics, Tokyo, Japan. He is author or co-author of over 170 publications (journal papers or conference proceedings) on various aspects of signal processing, in particular using sparse representations, applied to audio, acoustics, and optics. He is a co-founder of the LightOn project, which develops new technologies for energy-efficient optics-based co-processors.

Newton Howard, Oxford University, UK

3:40pm - 4:40pm

ELM and Brain Sciences: Into the Deep Mind and Beyond
Abstract TBA.
Newton Howard is the Chairman of the Brain Sciences Foundation and currently serves as Associate Professor of Computational Neuroscience and Functional Neurosurgery at the University of Oxford, where he manages the newly formed Computational Neuroscience Lab. He is also the Director of the Synthetic Intelligence Laboratory at MIT, where he served as the Founder and Director of the MIT Mind Machine Project, an interdisciplinary initiative to reconcile natural intelligence with machine intelligence, from 2008 to 2012. While a graduate member of the Faculty of Mathematical Sciences at the University of Oxford, England, he proposed the Theory of Intention Awareness (IA), which made a significant impact on the design of command and control systems and information exchange systems at tactical, operational, and strategic levels. He then went on to receive a PhD in Cognitive Informatics and Mathematics from La Sorbonne, France, where he was also awarded the Habilitation à Diriger des Recherches for his leading work on the Physics of Cognition (PoC) and its applications to complex medical, economic, and security equilibria. In 2011 Dr. Howard established the Brain Sciences Foundation (BSF), a not-for-profit, multidisciplinary research foundation dedicated to developing novel paradigms that enable the study of both mind and brain and, ultimately, the treatment of neurological disorders. In 2014 he received a Doctorate in Neurosurgery from the University of Oxford's Department of Neurosurgery, focused on the early detection of neurodegenerative diseases. Dr. Howard works with multidisciplinary teams of physicists, chemists, biologists, brain scientists, computer scientists, and engineers to reach a deeper understanding of the brain. His research efforts aim to improve the quality of life for the many who suffer from degenerative conditions currently considered incurable.
Advancing the field of brain sciences opens new opportunities for solving brain disorders and for developing artificial intelligence. Dr. Howard's most recent work focuses on the development of functional brain and neuron interfacing capabilities. To better understand the structure and character of this information transfer, he has concentrated on theoretical mathematical models representing the exchange of information inside the human brain. This work, called the Fundamental Code Unit (FCU), has proven applicable to the diagnosis and study of brain disorders and has aided physicians in developing and implementing the necessary pharmacological and therapeutic tools. He has also developed individualized strategies incorporating solutions for psychiatric and brain prosthetics. Through collaborative research with MIT and the University of Oxford, Dr. Howard has been working on interventions for the early detection of, and novel treatment strategies for, neurodegenerative diseases and affective disorders.