You mean the rumored several alternatives, that I asked you to elaborate on, and you told me to Google? Yes, clearly you listed specifics.
Oh fascinating, the top response is working for a subscription-based publication that has editorial staff and pays them. The second is freelancing for a subscription-based publication by selling articles to them. Wow.
Do tell.
Either it’s not paywalled for them or it’s still good journalism and maybe journalists shouldn’t work for free.
You’re making a hasty generalization here.
I’m really not, though I’ll readily admit I’m simplifying things. An LLM can only produce something based on what it’s been given. I guess it can generate a string of characters and assign a definition to it, but that’s not really intentional creation. There are many similarities between how a human generates something and how an LLM does, but to argue they’re the same radically oversimplifies how humans work. While we can program an LLM, we literally do not have the capability to replicate a human brain.
For example, can you tell me what emotions the LLM had when it produced the output it did? Did its physical condition have any effect? What about its past, not just what it has learned but how it was treated? What is its motivation? A human response to anything involving creativity factors in many things that we aren’t even consciously aware of, and these are things an LLM doesn’t have.
The study you’re citing is from Google, there’s likely some bias and selective reporting. That said, we were talking about creativity, not regurgitating facts or analyzing data. I think it’s universally accepted that as the tech gets better, it’s preferable to have a computer make the first attempt at a diagnosis, especially for a scan or large data analysis, then have a human confirm.
For the remix example, don’t forget that samples get attribution. Artists credit what they sampled and get called out when they don’t. I’m actually unclear on whether an LLM can cite how it derived its output, since the developers haven’t revealed whether there’s some sort of derivation log.
The problem is essentially how do you define ownership? Is there a right to not make something the copyright holder owns publicly available?
I think in the cases of abandonware or more recently the moves by media companies to delist certain media for tax benefits, there’s a good argument to be made over forfeiting the copyright, so it’s now public domain and fair game. But I also think for something like the Star Wars Holiday Special, where the creator/copyright holder (not sure about that status post-Disney acquisition) genuinely hates it and does not want it available to the public, the owner should be allowed to restrict access to it.
An LLM can’t make something original, it can only make something derivative. But that derivative work isn’t the same as when a human makes a derivative work, because a human isn’t writing each word or phrase based on the likely “correct” next word or phrase through an algorithmic process. What humans do is orders of magnitude more complex, though it can at times also be accidental or intentional plagiarism.
In short, an LLM’s output is necessarily a string of preexisting human inputs. A human’s output, while it can be informed by and reference other human inputs, can be an original analysis. The AI that is publicly available is not sophisticated enough to be more than fancy predictive text.
Because the LLM is also outputting the copyrighted material.
But when the answers aren’t original thoughts but regurgitations of other people’s thoughts about the book, then it’s plagiarism. LLMs can’t provide original output, only variations on what people have made available (whether legally or not). The answer might not even be correct or make any sense. It’s just predictive text to a crazy degree.
When you copy someone’s work without attribution, that’s plagiarism. When your output is only possible because of someone else’s work over which they own copyright and the output replicated the copyrighted material, that’s copyright infringement.
Same here, went dark as I was scrolling. Guess that’s it for me on Reddit in any kind of meaningful way.
I agree, the decentralized aspect is a huge plus and is core to what makes this system work. But I think the OP’s approach is fundamentally misguided, and I have my suspicions for a few reasons.
It’s a 45-minute meeting that provides an insight into Meta’s operations. There’s no need to contribute anything, just sit back and listen.
There’s no reason to post about this and brag about it now. Compare this with what Christian did when Reddit tried to claim Apollo was blackmailing them. There’s no leverage now, just some internet points.
We have one email and a response. Was there any further communication? How do we know this is all that was said? I could go further and question the legitimacy of this screencap but I’m willing to give OP the benefit of the doubt here.
As others have pointed out, how does shutting them out completely stay in keeping with fediverse principles? This is a legitimate question since, to me, it seems like despite the risks, it’s antithetical to the spirit of the fediverse until they demonstrate bad behavior here.
To quote OP’s email, “Zero interest in having a conversation with #Meta ‘off the record or otherwise.’” “Otherwise” here is…on the record. So OP also won’t meet with them in a completely open meeting?
Look, I get it, I dislike Meta too. But this just seems like a misstep and bragging for zero actual gain.
Medal of Honor. One of my first experiences with a single player FPS and it was just solid and immersive.
We can’t all be not American.