We are a computer science research group led by Ce Zhang with help from many close collaborators and friends.
Master Thesis: Spring 2018
We are offering a number of master thesis projects in the spring semester of 2018. Drop us an email at email@example.com if you are interested. Some sample topics:
Applications: data-driven astrophysics, meteorology, social sciences, proteomics, reinforcement learning for telescope array control, self-driving, etc.
Systems: scalable deep learning over a thousand GPUs, model compression for deep learning, in-database machine learning, blockchain and cryptocurrency as a general computation platform, system control with predictive models, etc.
Machine Learning: foundations behind system relaxations (decentralized learning, low-precision communication, asynchrony); multi-task learning, etc.
- Ease.ml: Towards Multi-tenant Resource Sharing for Machine Learning Workloads.
- Prof. Heng Guo & Kaan Kara: Layerwise Systematic Scan: Deep Boltzmann Machines and Beyond.
- Demjan Grubic: Synchronous Multi-GPU Training for Deep Learning with Low-Precision Communications: An Empirical Study.
- Oral Presentation: Xiangru Lian and Prof. Ji Liu on decentralized learning [paper].
- A 3-minute teaser video:
- space.ml is featured in a news article in Science magazine [link].
- GalaxyGAN was selected as an Editor's Choice in Science magazine [link].
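The decentralized learning result above replaces a central parameter server with peer-to-peer averaging. A minimal sketch of that averaging primitive (my illustration, not the authors' code): workers on a ring repeatedly mix their parameters with their neighbors via a doubly stochastic matrix, which drives all workers toward the global average.

```python
import numpy as np

# Hypothetical sketch of decentralized (gossip) averaging: each worker
# holds local parameters and, instead of contacting a central server,
# averages only with its two ring neighbors each communication round.

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring of n workers."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    return W

def gossip_step(params, W):
    """One round: every worker replaces its parameters with the
    weighted average of its own and its neighbors' parameters."""
    return W @ params

n = 8
W = ring_mixing_matrix(n)
params = np.arange(n, dtype=float).reshape(n, 1)  # one scalar per worker
for _ in range(100):
    params = gossip_step(params, W)
# repeated gossip converges to the global average, here mean(0..7) = 3.5
```

In decentralized SGD, each worker interleaves such a gossip step with a local gradient update, trading a server bottleneck for neighbor-only communication.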
VLDB 2017 (Munich Aug 28 – Sep 1)
- Come by our two talks: Lele Yu on building Bayesian inference as a new service with hundreds of machines; Zhipeng Zhang on a comparative study of different SimRank algorithms [paper].
- Also don’t miss the demo session: Xupeng Li on ease.ml version 1 — declarative in-database machine learning with a cute homomorphism between relational algebra and linear algebra [paper].
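One way to picture the mapping between relational algebra and linear algebra that the ease.ml demo builds on (my sketch, not the paper's construction): store a matrix as a relation of (row, col, val) tuples, and matrix multiplication becomes a join on the shared inner index followed by a grouped SUM.

```python
from collections import defaultdict

# Hypothetical illustration: a matrix as a relation of (row, col, val)
# tuples, so C = A @ B is the SQL query
#   SELECT A.row, B.col, SUM(A.val * B.val)
#   FROM A JOIN B ON A.col = B.row GROUP BY A.row, B.col

def matmul_relational(A, B):
    C = defaultdict(float)
    for (i, k1, a) in A:            # join ...
        for (k2, j, b) in B:        # ... on A.col = B.row
            if k1 == k2:
                C[(i, j)] += a * b  # GROUP BY (i, j), SUM
    return dict(C)

A = [(0, 0, 1.0), (0, 1, 2.0), (1, 0, 3.0)]  # [[1, 2], [3, 0]]
B = [(0, 0, 4.0), (1, 0, 5.0)]               # [[4], [5]]
print(matmul_relational(A, B))  # {(0, 0): 14.0, (1, 0): 12.0}
```

A side benefit of the relational encoding is that zero entries are simply absent, so the same query evaluates sparse products for free.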
ICML 2017 (Sydney Aug 6 – Aug 11)
- Hantian Zhang will give a talk about ZipML — low-precision machine learning on modern hardware [paper].
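A key ingredient in low-precision training schemes of this kind is unbiased stochastic rounding: quantize each value up or down with probability proportional to proximity, so the quantized value equals the original in expectation. A minimal sketch (illustrative only; the actual ZipML quantizer differs in its details):

```python
import random

# Illustrative stochastic (unbiased) rounding to a uniform grid,
# the kind of quantization underlying low-precision training.
# This is a sketch, not the ZipML implementation.

def stochastic_round(x, step):
    """Round x to a multiple of `step`, up with probability
    proportional to proximity, so that E[quantized] == x."""
    lo = (x // step) * step
    p_up = (x - lo) / step
    return lo + step if random.random() < p_up else lo

random.seed(0)
x, step = 0.30, 0.25  # quantize onto a grid of spacing 0.25
samples = [stochastic_round(x, step) for _ in range(100_000)]
avg = sum(samples) / len(samples)
# each sample is either 0.25 or 0.5, yet the average recovers x ≈ 0.30
```

Unbiasedness is what lets SGD tolerate the extra quantization noise: gradients computed from low-precision data still point in the right direction on average.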
SIGMOD 2017 (Chicago May 14 – May 19)
- Jiawei Jiang gave a talk about a distributed machine learning system designed for heterogeneous infrastructures where stragglers are expected [paper].
- HILDA: Come by to hear our vision for ease.ml — Deep Learning in four lines to serve ETH scientists [paper].
Machine Learning on Modern Hardware
- Kaan Kara: training linear models on FPGAs with low precision [FCCM paper].
- Ewaida Mohsen: XGBoost inference on FPGAs that can process up to 20M tuples per second [FPL paper]!
An ETH Globe article about DS3Lab.
Hantian and Dan gave the ZipML session at NVIDIA GTC 2017.