Finetune LLMs (LLaMA, Vicuna, GPT-Neo, Pythia) without any code!

Abhishek Thakur

The video demonstrates how to fine-tune LLMs such as LLaMA, Vicuna, GPT-Neo, and Pythia without writing any code, using Hugging Face's AutoTrain Advanced. The process involves selecting a natural language processing task, choosing a dataset for fine-tuning, picking an appropriate base model, and setting the desired number of models to train. Once the project is created, AutoTrain Advanced takes care of the rest of the training process. The speaker also notes that the trained models can be deployed easily.
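
AutoTrain's LLM trainer typically reads its training data from a CSV with a single `text` column, though the exact schema can vary by version. The sketch below shows one way to collapse instruction/response pairs into that shape; the sample pairs and the `### Instruction:`/`### Response:` template are illustrative assumptions, not a format AutoTrain mandates.

```python
import csv

# Hypothetical instruction/response pairs; in practice these would come
# from the dataset chosen in the AutoTrain UI.
pairs = [
    {"instruction": "Summarize: the cat sat on the mat.",
     "output": "A cat sat on a mat."},
    {"instruction": "Translate 'hello' to French.",
     "output": "bonjour"},
]

def to_text(row):
    # Collapse each pair into one prompt string; the "### ..." markers are
    # an illustrative template, not something AutoTrain requires.
    return (f"### Instruction:\n{row['instruction']}\n\n"
            f"### Response:\n{row['output']}")

# Write the single-column CSV that the trainer would consume.
with open("train.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["text"])
    writer.writeheader()
    for row in pairs:
        writer.writerow({"text": to_text(row)})
```

The resulting `train.csv` can then be uploaded as the fine-tuning dataset in the AutoTrain UI.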

00:00:00

In this section, the speaker demonstrates how to use Hugging Face's AutoTrain Advanced to fine-tune LLMs such as LLaMA, Vicuna, Pythia, and GPT-Neo without writing any code. They start by creating an AutoTrain space from a specific template and hardware flavor, marking it private to protect their Hugging Face token. Next, they select the natural language processing task and choose a dataset for fine-tuning the LLaMA 7B model. After selecting the appropriate base model and setting the desired number of models to train, they create the project and watch as AutoTrain Advanced handles the rest of the training process.
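
The choices made in the UI (task, base model, dataset, number of models, privacy) boil down to a handful of settings. A minimal sketch of such a project configuration follows; the key names and the example model id are illustrative assumptions for this sketch, not AutoTrain's actual schema.

```python
import json

# Illustrative project settings mirroring the UI choices described above.
# Key names are assumptions for this sketch, not AutoTrain's real schema.
project = {
    "task": "llm_finetuning",             # natural language processing task
    "base_model": "huggyllama/llama-7b",  # example base-model id
    "data_path": "train.csv",             # dataset chosen for fine-tuning
    "num_models": 1,                      # number of candidate models to train
    "private": True,                      # keep the space and models private
}

config_json = json.dumps(project, indent=2)
print(config_json)
```

Once these choices are submitted, AutoTrain Advanced launches the training jobs without any further user code.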

00:05:00

In this section, the speaker demonstrates how the models are trained. The process involves creating a project, selecting a dataset, and approving the project for training, after which the training runs begin. Once a model has been trained, it can be deployed easily using Inference Endpoints. The speaker notes that all models produced by AutoTrain are private, and users are free to use them as they wish.
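
Once deployed on an Inference Endpoint, the model is called over HTTPS with a bearer token. The sketch below builds (but does not send) such a request; the endpoint URL and token are placeholders, while the `inputs`/`parameters` payload shape follows Hugging Face's text-generation Inference API convention.

```python
import json
import urllib.request

# Placeholders -- a real endpoint URL and token come from the
# Inference Endpoints dashboard after deployment.
ENDPOINT_URL = "https://example.endpoints.huggingface.cloud"
HF_TOKEN = "hf_xxx"

def build_request(prompt, max_new_tokens=64):
    """Build (but do not send) a text-generation request for the endpoint."""
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }
    return urllib.request.Request(
        ENDPOINT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {HF_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("### Instruction:\nSay hello.\n\n### Response:\n")
```

Passing `req` to `urllib.request.urlopen` would return the generated text; it is not sent here because the URL is a placeholder.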
