With the OpenAI clownshow, there’s been renewed media attention on the xrisk/“AI safety”/doomer nonsense. Personally, I’ve had a fresh wave of reporters asking me naive questions (as well as some contacts from old hands who are on top of how to handle ultra-rich man-children with god complexes).
So her entire rant starts by talking about AI safety, and then reduces the conversation to talking about AGI being created by text generation AI systems. I'm a bit confused: is she specifically just dunking on shitty reporters who only cover the "AGI from text generators" drivel?
EDIT: Alright, I finally got my half-asleep brain around it. I'm gonna leave my AI safety rant below anyways.
Like, AI safety is an actual problem right now.
- We’re coming up on a period where fake AI-generated footage is actually believable at a glance.
- AI voice generation software is REALLY FUCKING good right now.
- We have drone systems which coordinate swarms well enough to perform intricate light shows.
- AI-generated art is also insanely good at the moment.
- Newer facial recognition systems are scary good at identifying people now.
These are each a problem because:
- Forging video for whatever fucked up purpose is easier than ever. Revenge porn, twisting a political narrative, etc.
- Again, forging audio. Combined with the video, the possibilities are endless.
- I don't think I need to go too in-depth on why AI-controlled drones look really good for military prospects.
- The idea of digital-art-for-profit should pretty much be viewed the same way as professional calligraphy within the next couple of decades. The vast majority of digital art jobs are probably going to be dead. Animation is probably next on the chopping block.
- Mass surveillance that's borderline impossible to escape is on the horizon, thanks to AI.
Like, I don’t see how this person can shit on AI safety while completely ignoring the actual vast majority of AI safety issues.
> I’m gonna leave my AI safety rant below anyways.
Back in the day, we’d also leave flaming bags of dogshit on people’s front porches.
> Like, I don’t see how this person can shit on AI safety while completely ignoring the actual vast majority of AI safety issues.
Did we read the same fucking thread?
Cos here: https://dair-community.social/@emilymbender/111464032422389550
> There are important stories to be reporting in this space. When automated systems are being used, who is being left without recourse to challenge decisions? Whose data is being stolen? Whose labor is being exploited? How is mass surveillance being extended and normalized? What are the impacts to the natural environment and information ecosystem?
“This person”? Go look up what Emily does in your time not spent posting here.
oh my poor half-asleep brain making me shitheadedly assume a woman can’t possibly have done pioneering work in a field she specializes in
Opening with “her entire rant” was already telling, and then a pageful of idiocy based on 4 points (at least two of which are so Not Even Wrong I didn’t even have words)