
# Pretraining

- GitHub - huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
- GitHub - microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- GitHub - huggingface/peft: 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
- GitHub - huggingface/accelerate: 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support.

# torch 
https://github.com/microsoft/DeepSpeed/blob/master/docker/Dockerfile
https://github.com/huggingface/transformers/blob/main/docker/transformers-all-latest-gpu/Dockerfile
https://github.com/huggingface/peft/tree/main/docker/peft-gpu-bnb-source
https://github.com/huggingface/accelerate/blob/main/docker/accelerate-gpu/Dockerfile
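
The Dockerfiles linked above all follow the same basic pattern: start from a CUDA-enabled PyTorch base image, then pip-install the training stack on top. A minimal sketch of that pattern (the base-image tag is an assumption, and versions are left unpinned here; the linked Dockerfiles pin their own):

```dockerfile
# Sketch of a GPU training image in the style of the Dockerfiles above.
# Assumption: an NVIDIA PyTorch base image with CUDA and torch preinstalled.
FROM nvcr.io/nvidia/pytorch:24.05-py3

# Install the pretraining/fine-tuning stack referenced in this note.
RUN pip install --no-cache-dir \
    transformers \
    accelerate \
    peft \
    deepspeed

WORKDIR /workspace
```

In practice the upstream images differ mainly in how they install torch (prebuilt base image vs. pip wheel against a specific CUDA version) and whether extras like bitsandbytes are built from source, so pin versions to match your CUDA toolkit.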