Scalable and Parallel Optimization for Large Scale Machine Learning
Speaker: Ji Liu (刘霁)  Views: 1267  Updated: December 28, 2016
Abstract: This talk considers two computational challenges in machine learning (ML): 1) how to improve the efficiency of parallelization for solving large-scale ML problems; and 2) how to systematically solve more sophisticated ML problems that take the form of composition optimization, such as reinforcement learning, nonlinear dimension reduction, and graphical models. For 1), the talk introduces asynchronous parallel optimization, which opens a new gateway to big-data optimization and analytics. Compared with traditional synchronous parallelism, the asynchronous fashion significantly reduces system overhead, maximizes efficiency at the system level, and has recently been applied successfully to deep learning, recommendation systems, NLP, high-performance computing, and many other areas. The talk will cover several recent works by the speaker on asynchronous parallel algorithms, from theoretical foundations to applications, including convergence and speedup properties as well as applications to deep learning, large linear systems, SVM, LASSO, linear programming, etc. For 2), the talk casts a large group of ML problems as composition optimization and provides a systematic algorithm for solving it. The theoretical results provide a fundamental understanding of the complication and complexity of this type of ML problem.
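To make the asynchronous idea concrete, below is a minimal Hogwild!-style sketch (illustrative only, not the speaker's exact algorithm): worker threads apply stochastic-gradient updates to a shared parameter vector with no locks, here on a simple least-squares problem. The function name and all parameter values are assumptions for illustration.

```python
import threading
import numpy as np

def async_sgd(A, b, n_threads=4, epochs=100, lr=0.005):
    """Hogwild!-style asynchronous SGD for min_x ||Ax - b||^2.

    Worker threads update the shared vector x without any locks;
    occasional conflicting writes are tolerated rather than serialized,
    which is the key idea that removes synchronization overhead.
    """
    n, d = A.shape
    x = np.zeros(d)  # shared parameter vector, updated lock-free

    def worker(rows, seed):
        rng = np.random.default_rng(seed)
        for _ in range(epochs):
            for i in rng.permutation(rows):
                grad = 2.0 * (A[i] @ x - b[i]) * A[i]  # stochastic gradient
                x[:] -= lr * grad  # in-place update, no synchronization

    chunks = np.array_split(np.arange(n), n_threads)
    threads = [threading.Thread(target=worker, args=(c, s))
               for s, c in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```

Because the updates are not serialized, two threads may read a stale x; the convergence theory mentioned in the talk quantifies how much such staleness can be tolerated while still guaranteeing speedup.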
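Composition optimization minimizes an objective of the form f(E[g_i(x)]): plain SGD does not apply directly because the inner expectation sits inside a nonlinear outer function. A common remedy, sketched below on a toy linear inner map (a generic stochastic compositional gradient sketch with illustrative names and parameters, not necessarily the speaker's algorithm), keeps a running estimate y of the inner expectation and chains it with a sampled Jacobian:

```python
import numpy as np

def scgd(A_samples, b, alpha=0.02, beta=0.1, iters=8000, seed=0):
    """Stochastic compositional gradient sketch for
    min_x f(E[g_i(x)]) with inner map g_i(x) = A_i x
    and outer loss f(y) = ||y - b||^2.

    The auxiliary vector y tracks the inner expectation E[g_i(x)]
    via an exponential moving average with rate beta.
    """
    m, p, d = A_samples.shape
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    y = np.zeros(p)  # running estimate of E[g_i(x)]
    for _ in range(iters):
        i, j = rng.integers(m), rng.integers(m)
        y = (1 - beta) * y + beta * (A_samples[i] @ x)    # track inner expectation
        x = x - alpha * (A_samples[j].T @ (2 * (y - b)))  # chain-rule update
    return x
```

The two-timescale structure (a fast average for y, a slower step for x) is what makes the nested expectation tractable with only stochastic samples.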
Bio: Ji Liu is currently an assistant professor in Computer Science, Electrical and Computer Engineering, and the Goergen Institute for Data Science at the University of Rochester (UR). He received his Ph.D., Master's, and B.S. degrees from the University of Wisconsin-Madison, Arizona State University, and the University of Science and Technology of China, respectively. His research interests cover a broad scope of machine learning, optimization, and their applications in areas such as healthcare, bioinformatics, computer vision, and many other data-analysis-intensive fields. His recent research focuses on asynchronous parallel optimization, sparse learning (compressed sensing) theory and algorithms, reinforcement learning, structural model estimation, online learning, abnormal-event detection, feature/pattern extraction, etc. He founded the machine learning and optimization group at UR. He won the Best Paper honorable mention award at SIGKDD 2010 and the Facebook Best Student Paper award at UAI 2015.