Projects

NeuronaBox

A Flexible and High-Fidelity Approach to Distributed DNN Training Emulation

CoLExT

Collaborative Learning Experimentation Testbed

SIDCo

An Efficient Statistical-Based Gradient Compression Technique for Distributed Training Systems

OmniReduce

Efficient Sparse Collective Communication

GRACE

GRAdient ComprEssion for distributed deep learning

FairFL

A Systems Approach to Tackling Fairness in Federated Learning

DC2

Delay-aware Compression Control for Distributed ML

SwitchML

Scaling Distributed Machine Learning with In-Network Aggregation

DAIET

In-Network Computation is a Dumb Idea Whose Time Has Come