2 Comments
Rainbow Roxy

Regarding the article, what about data privacy? So well put.

Pawel Jozefiak

Privacy is absolutely the right question to be asking here. I've been digging into this a lot lately, and there's something that really concerns me... the more integrations and connections you stack onto your AI, the larger your attack surface gets. Seriously: security researchers found that 43% of tested MCP implementations had significant vulnerabilities. That's not edge cases... that's nearly HALF of them having fundamental architecture problems.

But here's where I actually get hopeful about the future - and I think this is going to be the real shift... it's all moving toward LOCAL. Like... smaller AI models running directly on your device. Your phone, computer, tablet, whatever.

Look, I'm not going to sugarcoat it: right now local AI is limited. The models are smaller, so they're not as capable as Claude or GPT-5. You need some actual technical knowledge to set it up properly. And performance depends entirely on your hardware, so it's never going to be lightning fast. That's the reality of it.

BUT... and this is the important part... when you have an AI model running locally on YOUR device, everything changes from a security perspective. You're not sending your data anywhere. You're not trusting some cloud infrastructure to keep your information safe. It stays on your hardware. You connect it to your apps locally. You control exactly what it can and can't access.
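To make "local" concrete, here's a minimal sketch assuming you're running something like Ollama, which serves models over HTTP on localhost by default (the model name `llama3.2` and the endpoint are just illustrative defaults, not a recommendation):

```python
import json
import urllib.request

# Ollama's default local endpoint — the request never leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="llama3.2"):
    """Build the URL and payload for a locally served model.
    Because the target is localhost, the prompt stays on your hardware."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return OLLAMA_URL, payload

def ask_local_model(prompt, model="llama3.2"):
    """Send the prompt to the local server and return the model's reply."""
    url, payload = build_request(prompt, model)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The point isn't the specific tool; it's that the only network hop is to 127.0.0.1, so there's no cloud provider in the loop to trust (or to breach).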

That's genuinely safer. That's the direction I think we need to move toward even if it means accepting some performance trade-offs right now.
