
Frameworks

DeepSpeed ZeRO++: a framework for accelerating model pre-training, fine-tuning, and RLHF updates by minimizing communication overhead. Likely an essential concept to be very familiar with.
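ZeRO++ is enabled through a few extra flags in a DeepSpeed config on top of ZeRO stage 3. The sketch below is a minimal, hedged example: the key names follow the public ZeRO++ tutorial, but the batch size and partition size are illustrative placeholders, not tuned values.

```python
# Hedged sketch: a DeepSpeed config dict enabling the three ZeRO++ communication
# reducers (quantized weights, hierarchical partitioning, quantized gradients)
# on top of ZeRO stage 3. Values are placeholders, not recommendations.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,      # placeholder batch size
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                           # fully shard params, grads, optimizer state
        "zero_quantized_weights": True,       # qwZ: quantize all-gathered weights
        "zero_hpz_partition_size": 8,         # hpZ: secondary shard per node (assumes 8 GPUs/node)
        "zero_quantized_gradients": True,     # qgZ: quantize gradient reduce-scatter
    },
}

# Typical use inside a launched distributed job (e.g. `deepspeed train.py`):
# import deepspeed
# engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, config=ds_config, model_parameters=model.parameters())
```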

Levanter (not just LLMs): a codebase for training foundation models (FMs) with JAX.

Uses Haliax to name tensor axes with field names instead of indexes (for example Batch, Feature), with full sharding and distributed/parallel training.
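To make the named-axis idea concrete, here is a small sketch in the Haliax style. The Axis names and sizes are made up, and the exact function surface may differ slightly between Haliax/Levanter releases.

```python
import jax.random as jrandom
import haliax as hax

# Named axes: code refers to "batch" and "feature" by name, not by position.
Batch = hax.Axis("batch", 32)
Feature = hax.Axis("feature", 128)

# A named array whose shape is given by axes rather than bare integers.
x = hax.random.uniform(jrandom.PRNGKey(0), (Batch, Feature))

# Reductions name the axis they operate over, which also keeps sharding
# specs (e.g. "shard the batch axis across data-parallel devices") readable.
per_example_mean = hax.mean(x, axis=Feature)
```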

RL4LMs by AllenAI: a modular RL library to fine-tune language models to human preferences.

paper
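As a library-free illustration of the underlying idea (not RL4LMs' API), the toy REINFORCE loop below nudges a one-step "generator" toward outputs that a stand-in preference reward scores highly; in the real setting the policy is a language model and the reward comes from a learned preference model.

```python
# Toy policy-gradient sketch of preference fine-tuning; everything here
# (vocab, rewards, learning rate) is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["good", "okay", "bad"]
logits = np.zeros(len(vocab))                      # one-step "policy"
reward = {"good": 1.0, "okay": 0.2, "bad": -1.0}   # stand-in preference model

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.5
for step in range(200):
    probs = softmax(logits)
    a = rng.choice(len(vocab), p=probs)            # sample a "generation"
    r = reward[vocab[a]]
    grad = -probs                                  # d log pi(a) / d logits
    grad[a] += 1.0
    logits += lr * r * grad                        # REINFORCE update

print({w: round(p, 3) for w, p in zip(vocab, softmax(logits))})
# Probability mass should concentrate on the preferred output "good".
```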
