Currently, all popular LLM service providers implement fine-tuning via JSONL files that describe the model's inputs and expected outputs. The format varies slightly between providers; for example, Gemini and OpenAI each expect a slightly different schema.
After uploading a specially formatted JSONL file, the provider starts specializing the LLM on the supplied dataset; with all major LLM providers this is a paid service.
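As a hedged illustration of what such a file looks like, the sketch below writes a tiny dataset in the chat-style JSONL format used by OpenAI fine-tuning (one JSON object per line, each with a `messages` list); the example conversation content is invented, and other providers such as Gemini use a similar but not identical schema.

```python
import json

# Each training example is one JSON object on its own line (JSON Lines).
# This follows the OpenAI chat fine-tuning shape; field names for other
# providers may differ.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is JSONL?"},
            {"role": "assistant", "content": "JSON Lines: one JSON object per line."},
        ]
    },
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        # ensure_ascii=False keeps non-ASCII text (e.g. Cyrillic) readable
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```

Because each line is an independent JSON object, datasets in this format can be streamed, concatenated, and validated line by line.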
For fine-tuning on a local machine using Ollama, I recommend the detailed video from the Tech With Tim YouTube channel, "Easiest Way to Fine-Tune a LLM and Use It With Ollama":
https://www.youtube.com/watch?v=pTaSDVz0gok
An example Jupyter notebook that prepares a JSONL dataset from an export of all Telegram messages and launches the local fine-tuning process is available here:
https://github.com/demensdeum/llm-train-example
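To give a rough idea of the dataset-preparation step, here is a minimal hedged sketch that turns a Telegram Desktop chat export (the `result.json` produced by "Export chat history") into prompt/completion JSONL pairs. The field names (`messages`, `from`, `text`) are assumed from Telegram Desktop's JSON export and may vary between versions; this is not the exact code from the repository above.

```python
import json

def flatten_text(text):
    """Telegram stores message text as either a string or a list of parts
    (plain strings mixed with dicts for links, mentions, etc.)."""
    if isinstance(text, str):
        return text
    parts = []
    for part in text:
        parts.append(part if isinstance(part, str) else part.get("text", ""))
    return "".join(parts)

def export_to_jsonl(export, me, out_path):
    """Pair each incoming message with the next reply sent by `me`
    and write the pairs as prompt/completion JSONL records."""
    messages = export["messages"]
    with open(out_path, "w", encoding="utf-8") as f:
        for prev, curr in zip(messages, messages[1:]):
            if curr.get("from") == me and prev.get("from") != me:
                record = {
                    "prompt": flatten_text(prev.get("text", "")),
                    "completion": flatten_text(curr.get("text", "")),
                }
                if record["prompt"] and record["completion"]:
                    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

Usage would look like `export_to_jsonl(json.load(open("result.json", encoding="utf-8")), "Your Name", "train.jsonl")`; pairing adjacent messages is a crude heuristic, and a real pipeline would also handle media-only messages and multi-message replies.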