Cool experiment, Pawel. I think small models for specific tasks are the future too. Right now power consumption is the main drawback, since it's far easier to just connect to an API online.
I'm going deeper right now: a 30B+ model on a Mac Mini :D
Spoiler - it works!
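If anyone wants to try something similar, here's a rough sketch using llama-cpp-python with a quantized GGUF model. The file name, context size, and prompt are just placeholders, not my exact setup:

```python
# Rough sketch: running a quantized ~30B GGUF model locally with llama-cpp-python.
# Assumes the model file is already downloaded; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/qwen2.5-32b-instruct-q4_k_m.gguf",  # placeholder file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to Metal on Apple Silicon
)

out = llm("Summarize why small local models are useful:", max_tokens=128)
print(out["choices"][0]["text"])
```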
Great article! I like how you think about it. I had a similar experience running a local model: slower and less efficient, but I liked it for what I needed. I also appreciate you bringing up the environmental aspect of running AI.
Thanks, and yes! I feel like that's the future: local, well-trained LLMs on our own hardware. I think Apple is trying to do something along these lines.