Zheng Zhang

Professor of Computer Science
NYU Shanghai

Email: zz17@nyu.edu

Zheng Zhang is a professor of computer science at NYU Shanghai. He also holds affiliated appointments with the Department of Computer Science at the Courant Institute of Mathematical Sciences and with the Center for Data Science at NYU's campus in New York City. Prior to joining NYU Shanghai, he founded the System Research Group at Microsoft Research Asia, where he served as principal researcher and research area manager. Before moving to Beijing, he was a project lead and member of technical staff at HP Labs. He holds a Ph.D. from the University of Illinois at Urbana-Champaign, an M.S. from the University of Texas at Dallas, and a B.S. from Fudan University.

Professor Zhang’s research interests span the theory and practice of large-scale distributed computing and its intersection with machine learning, in particular deep learning. He has published extensively in top systems conferences (OSDI, EuroSys, NSDI, etc.), and is also known for his column “Zheng Zhang on Science,” published in Chinese Business.

Professor Zhang is a member of the Association for Computing Machinery and the founder of the SIGOPS APSys workshop and the CHINASYS research community. He regularly serves as a program committee member for leading systems conferences. During his tenure in industrial labs, he was awarded 40 patents and made numerous contributions to product lines. He has received several Best Paper awards, as well as awards for excellence from Microsoft and HP Labs.

Recent Works (2014-2016)

  • MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. In NIPS Workshop on Machine Learning Systems (LearningSys), 2016
  • Multiple Granularity Descriptors for Fine-Grained Categorization. Dequan Wang, Zhiqiang Shen, Jie Shao, Wei Zhang, Xiangyang Xue, Zheng Zhang. In International Conference on Computer Vision 2015 (ICCV 2015)
  • The Application of Two-level Attention Models in Deep Convolutional Neural Network for Fine-grained Image Classification. Tianjun Xiao, Yichong Xu, Kuiyuan Yang, Jiaxing Zhang, Yuxin Peng and Zheng Zhang. In IEEE Conference on Computer Vision and Pattern Recognition 2015 (CVPR 2015)
  • Scale-Invariant Convolutional Neural Networks. Yichong Xu, Tianjun Xiao, Jiaxing Zhang, Kuiyuan Yang and Zheng Zhang. 
  • Distributed Outlier Detection using Compressive Sensing. Ying Yan, Jiaxing Zhang, Bojun Huang, Jiaqi Mu, Zheng Zhang, and Thomas Moscibroda. In ACM SIGMOD International Conference on Management of Data 2015 (SIGMOD 2015)
  • Error-Driven Incremental Learning in Deep Convolutional Neural Network for Large-Scale Image Classification. Tianjun Xiao, Jiaxing Zhang, Kuiyuan Yang, Yuxin Peng, and Zheng Zhang. In Proceedings of ACM Multimedia 2014 (ACM MM 14)
  • Attentional Neural Network: Feature Selection Using Cognitive Feedback. Qian Wang, Jiaxing Zhang, Sen Song, and Zheng Zhang. In Neural Information Processing Systems 2014 (NIPS 2014)
  • Minerva: A Highly Efficient and Scalable Deep Learning Training Platform. Minjie Wang, Tianjun Xiao, Jianpeng Li, Jiaxing Zhang, Chuntao Hong, and Zheng Zhang. In NIPS 2014 Workshop on Distributed Matrix Computations
  • Error-bounded Sampling for Analytics on Big Sparse Data. Ying Yan, Liang Jeff Chen, and Zheng Zhang. In Very Large Data Bases 2014 (VLDB 14), Industrial Track
  • A Scalable and Topology Configurable Protocol for Distributed Parameter Synchronization. Minjie Wang, Hucheng Zhou, Minyi Guo, and Zheng Zhang. In Proceedings of the ACM SIGOPS Asia-Pacific Workshop on Systems 2014 (APSys 14)
  • Impression Store: Compressive Sensing-based Storage for Big Data Analytics. Jiaxing Zhang, Ying Yan, Liang Jeff Chen, Minjie Wang, Thomas Moscibroda, and Zheng Zhang. In the 6th USENIX Workshop on Hot Topics in Cloud Computing 2014 (HotCloud 14)

Additional references can be found on Professor Zhang’s Google Scholar page.
Current totals: 3,948 citations; h-index: 33; i10-index: 64.

Software

I led the development of Minerva, a now open-sourced deep learning training platform that combines model parallelism across multiple GPU cards, data parallelism across multiple machines, and a Python programming interface. Our 4-GPU training speed currently holds the leading record (training GoogLeNet in ~4 days on a 4-GPU workstation).
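Minerva's actual programming interface is not reproduced here. As a rough sketch of the data-parallel half of such a platform, the Python snippet below has several workers compute gradients on their own data shards, then averages those gradients before every parameter update so all replicas stay in sync. All names (gradient, sync_gradients, train) and the synthetic least-squares task are illustrative assumptions, not Minerva's real API.

    import numpy as np

    def gradient(w, X, y):
        # Mean-squared-error gradient on one worker's data shard.
        return 2.0 * X.T @ (X @ w - y) / len(y)

    def sync_gradients(grads):
        # Stand-in for the synchronization step (an all-reduce or a
        # parameter-server exchange in a real multi-machine deployment):
        # average per-worker gradients so every replica applies the same update.
        return sum(grads) / len(grads)

    def train(shards, dim, lr=0.1, steps=300):
        w = np.zeros(dim)  # model parameters, replicated on every worker
        for _ in range(steps):
            # Each worker computes a gradient on its own shard; in a real
            # system these run in parallel on separate machines or GPUs.
            grads = [gradient(w, X, y) for X, y in shards]
            w -= lr * sync_gradients(grads)  # identical update everywhere
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        w_true = rng.normal(size=5)
        # Split a synthetic least-squares problem across 4 hypothetical workers.
        shards = []
        for _ in range(4):
            X = rng.normal(size=(64, 5))
            shards.append((X, X @ w_true))
        w = train(shards, dim=5)
        print("recovered parameters:", np.round(w, 3))
        print("true parameters:     ", np.round(w_true, 3))

In an actual deployment, sync_gradients would be an all-reduce or a parameter-server exchange across machines, while model parallelism would instead partition the parameters themselves across GPU cards; the sketch only illustrates the synchronization pattern.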

Education

  • Ph.D., Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 1996
  • M.S., Electrical and Computer Engineering, University of Texas at Dallas, 1992
  • B.S., Electrical and Computer Engineering, Fudan University, 1987