

Just to be clear, companies know that LLMs are categorically bad at giving life advice and emotional guidance. They also know that personal decision-making is the most common use of the software. They could easily put guardrails in place to prevent it from doing that.
They will never do that.
This is by design. They want people to develop pseudo-emotional bonds with the software and to trust its judgment in matters of life guidance. In the next year or so, some LLM projects will become profitable for the first time as advertisers flock to the platforms. Injecting ads into conversations with a trusted confidant is the goal. Influencing human behaviour is the goal.
By 2028, we will be reading about “ChatGPT told teen to drink Pepsi until she went into a sugar coma.”












People don’t read them, but I think that’s not usually the point. The people I know who have written them usually end up with boxes in their garage that they eventually give away to friends and family.
It’s still a nice accomplishment and a good personal-growth thing.