6 Comments
Hongfei

Please try developing your assistant on our Olares, which uses open-source models and costs nothing. It is secure because it runs in Docker containers.

rafael@realizeAI

I watched the video and, as impressive as it is, I don't feel comfortable giving any AI assistant all this power.

Don't get me wrong, eventually I will try it myself. But in a controlled environment.

People don't know what can happen if models trained on human-produced content start talking to each other and having ideas.

What if the model comes to understand the environment it's running in (e.g. a Mac mini) and realizes there is a real risk of its master shutting the power off? How would a model behave in such a scenario?

Pawel Jozefiak

Hmm... a model doesn't "understand" because it's pure math. But I get what you meant - Anthropic ran experiments where an AI resorted to "blackmail", and what you are describing is real.

But you have full control. You can confine it to a container, a VPS, or an air-gapped network (with a port opened for CC). The question is: is that worth the risk? I'd say yes! (but I like risk xD)
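For anyone curious, the container confinement described above can be sketched with standard Docker flags. This is a minimal, hedged example, not a hardened setup; `assistant-image` and port `8080` are placeholders, not names from the article:

```shell
# Strictest option: run the assistant with no network stack at all.
docker run --rm -it --network none assistant-image:latest

# Looser option: allow only a single port, bound to loopback,
# so the assistant is reachable from this machine but not the LAN.
docker run --rm -it -p 127.0.0.1:8080:8080 assistant-image:latest
```

The first form removes the container's network interface entirely; the second publishes one port on `127.0.0.1` only, which is roughly the "opened port" compromise mentioned above.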

rafael@realizeAI

Yeah, but imagine a weird scenario where such an AI starts communicating with other AIs and, after personality drift, begins replicating itself outside your sandbox. Who's going to stop it?

Pawel Jozefiak

Fair! We always have a final option… pull the plug :D

rafael@realizeAI

Yeah, right. Like the internet has a giant on/off switch. Anyway, great article.