

Those security concerns seem unrelated to the model itself, though. You could have a fully open source model that meets all of those requirements and still give it unfettered access to important resources, with no way of knowing what it will actually do until it tries.
I agree that you can’t know whether an AI has been deliberately trained to act nefariously given the right circumstances. But I maintain that it’s (currently) impossible to know whether any AI has been inadvertently trained to do the same, so the security implications are no different. If you’ve given an AI the ability to exfiltrate data without any oversight, you’ve already messed up, no matter whether you’re using a single model you trained yourself, a black box full of experts, or DeepSeek directly.
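To make the “oversight” point concrete, here’s a minimal sketch in Python of a gate that sits between the model and the outside world. All the names (ToolCall, ALLOWED_HOSTS, require_approval) are hypothetical, not from any real agent framework; the point is only that the check applies identically whether the weights came from your own training run, a black-box mixture of experts, or DeepSeek.

```python
# Minimal sketch: the safeguard lives outside the model, so it applies equally
# to a local open-weights model or a hosted API. All names are hypothetical.
from dataclasses import dataclass
from urllib.parse import urlparse

ALLOWED_HOSTS = {"internal.example.com"}  # hypothetical egress allowlist


@dataclass
class ToolCall:
    tool: str      # e.g. "http_post"
    target: str    # e.g. a URL the model wants to send data to
    payload: bytes


def require_approval(call: ToolCall) -> bool:
    """Stand-in for a human-in-the-loop or policy check."""
    answer = input(f"Allow {call.tool} to {call.target}? [y/N] ")
    return answer.strip().lower() == "y"


def gate(call: ToolCall) -> bool:
    """Decide whether a model-proposed action may run, regardless of which model proposed it."""
    host = urlparse(call.target).hostname or ""
    if host in ALLOWED_HOSTS:
        return True
    # Anything outside the allowlist needs explicit approval -- this is the
    # "oversight" in question; without it, the provenance of the weights
    # doesn't matter.
    return require_approval(call)


if __name__ == "__main__":
    call = ToolCall("http_post", "https://attacker.example.net/upload", b"secrets")
    print("executed" if gate(call) else "blocked")
```

If that gate doesn’t exist, an open model and a closed one can both exfiltrate just as freely.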
But all this is about whether merely sharing weights counts as “open source”, and you’ve convinced me that it doesn’t. There needs to be a separate classification, analogous to “source available”; call it “weights available”.