Any arguments to defend your position? I’m giving you a very clear example of the awful consequences of following that path. And the same applies to any creative work. You are just being dismissive without proposing any real solution. Do better man.
And that’s the whole point of my comment, did you even read it? To summarize, there is currently a loophole in law that allows these bullshit arguments about it being different than straight up copying shit (though this hasn’t been litigated yet, so it’s not yet clear if these arguments are actually valid). This means that while a person reading my AGPL code and copying it (without following the license) is 100% illegal, doing the same through an LLM may be legal. So this means that open source licenses can be bypassed by first training an LLM with the code and then extracting the code from the LLM. This is terrible for open source, and in general for anyone who wants to make a living from creating copyrighted work. So we should close this loophole, and I’m glad there is a push to close it through better laws. Even if these laws are coming from Disney, Sony, and all those awful companies.
So again, what’s the point you are trying to make here? That we shouldn’t make these laws stronger to prevent this bullshit? I honestly don’t understand what you are trying to argue here, nothing of what you have said has anything to do with this conversation.
What point are you trying to make? That the fact that someone can break the law means we should not have laws? I honestly don’t get what you are trying to say.
An engineer at AWS can already just copy your code, make minor modifications, and use it.
You are 100% wrong here my man. If an engineer does this they are creating a derivative work and they have to fulfill the conditions of the license of the code. No wonder you don’t see anything wrong here, you AI people live in a fantasy world when it comes to how copyright works hahahaha. Please stop talking about shit you know nothing about.
Nah my man, you are either brainwashed or are being paid hahaha. Is copyright a mess? Of fucking course, I haven’t met a single person (except crazy ass libertarians funnily enough hahaha) that likes copyright. Are big corporations using copyright to exploit artists, create monopolies, and generally being dicks? Again, of fucking course.
But anyone, like you, saying that we should just let AIs effectively destroy copyright is a fucking prick, that simple. And your arguments are disingenuous at best or outright lies. For example, just as big copyright holder companies are pushing to strengthen copyright law, the big tech companies are pushing for effectively destroying copyright through AI models. I have seen you pushing in multiple threads for open source models like that’s a solution. But if you were a serious person researching the open source software community you would see that pretty much no one there agrees with your position, because it would effectively destroy the copyleft open source licenses. After all, if an “AI” model, open source or not, is allowed to just “train” on my AGPL code and spit it back (with minor modifications at best) to an engineer in AWS that’s it for my project. Amazon will do the Amazon thing and steal the project. So say goodbye to any software freedom we have.
And let’s be 100% clear here, this is not being pushed by the evil copyright holders like you seem to imply (and they are totally evil just to be clear hahahah). This is being pushed by the big tech companies and people like you spreading their propaganda. The fact that the copyright holders happen to be in the right this time is just a broken clock being right and all that, but it’s still good that they are pushing back against big tech. I do agree we have to keep an eye on them, the objective here can’t be to make copyright bigger, just to close the “loophole” that big tech companies are exploiting to steal everything.
People like you who want to destroy copyright without offering any alternatives to allow creatives to work in a market are either misinformed or just assholes. Again, of fucking course it’s not an ideal system, but going full kamikaze and just destroying any possibility for artists and creatives of making a living with their work is the most evil thing going on right now, so bad that the big copyright holders happen to fall on the less bad side this time hahaha. And all for what? So people can be lied to by dumb chatbots? Or so people can create mediocre derivative “art” without putting in any effort? Or so we can get mediocre code autocomplete that is subtly wrong all the time? It’s fucking ridiculous.
The guy you are replying to is in all AI posts defending AIs. He is probably heavily invested in this BS or being paid for it, don’t waste your time with him.
Jesus man, chill. Why are all AI people so sensitive? Hahahaha. My man, during this conversation I have only asked about what are the great apps that LLMs have provided. You answered with the usual ones, chatgpt and copilot. It’s nice that you find them useful, no need to insult me just because I don’t think they are useful. I was honestly hoping for something else, but that’s it. Seriously, chill dude.
So literally you use it for information retrieval hahahaha. I did use copilot, codium, and the jetbrains one for a bit. But I had to disable each one, the amount of wrong code simply doesn’t justify the little boilerplate it generates.
Me? I’m not using LLMs at all hahaha. I’m asking you, who says they have great value, to provide examples of their uses. I just provided pretty much the only one I have heard, which some random dude told me in a different thread. Everyone else, like you, just keeps it abstract and just bullshits and bullshits hahaha.
I always ask all people defending AI, or rather LLMs, what’s the great value they all mention in their comments. So far the “best” answer I got was one dude using LLMs to extract info from decades old reports that no one has checked in 20 years hahaha. So glad we are allowing LLMs to destroy the environment and plagiarize all creative work for that lol.
So, what is the great value you see man?
Probably too late, but just to complement what others have said. The UEFI is responsible for loading the boot software that runs when the computer is turned on. In theory, some malware that wants to make itself persistent and avoid detection could replace/change the boot software to inject itself there.
Secure boot is sold as a way to prevent this. The way it works, at a high level, is that the UEFI has a set of trusted keys that it uses to verify the boot software it loads. So, on boot, the UEFI checks that the boot software it’s loading is signed by one of these keys. If the signature check fails, it will refuse to load the software since it was clearly tampered with.
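To make the idea concrete, here is a toy sketch of that trust-list check. Huge caveat: real Secure Boot uses X.509 certificates and asymmetric signatures (RSA etc.) stored in the firmware's key databases; the `toy_sign` function below is just a hash standing in for a real signature, and the key names are made up.

```python
import hashlib

# TOY stand-in for a real asymmetric signature. Real firmware verifies an
# RSA signature against a vendor certificate; this is NOT real crypto.
def toy_sign(vendor_key: bytes, image: bytes) -> bytes:
    return hashlib.sha256(vendor_key + image).digest()

# The keys the firmware trusts out of the box (in practice, big OS vendors).
TRUSTED_KEYS = [b"microsoft-key", b"canonical-key"]

def firmware_verify(image: bytes, signature: bytes) -> bool:
    # On boot: accept the image only if some trusted key verifies it.
    # Anything signed by a key not in the list is refused.
    return any(toy_sign(key, image) == signature for key in TRUSTED_KEYS)

bootloader = b"\x7fELF...grub"
ok_sig = toy_sign(b"canonical-key", bootloader)            # trusted vendor
bad_sig = toy_sign(b"my-esoteric-distro-key", bootloader)  # key not enrolled

print(firmware_verify(bootloader, ok_sig))   # True: boots
print(firmware_verify(bootloader, bad_sig))  # False: refuses to load
```

Note how the whole question of who gets to be in `TRUSTED_KEYS` is exactly the problem described below: a perfectly valid bootloader signed by a key the firmware doesn’t know is treated the same as malware.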
So far so good, so what’s the problem? The problem is, who picks the keys that the UEFI trusts? By default, the trusted keys are going to be the keys of the big tech companies. So you would get the keys from Microsoft, Apple, Google, Steam, Canonical, etc, i.e. of the big companies making OSes. The worry here is that this will lock users into a set of approved OSes and will prevent any new companies from entering the field. Just imagine telling a not very technical user that to install your esoteric distro they need to disable something called secure boot hahaha.
And then you can start imagining what would happen if companies start abusing this, like Microsoft and/or Apple paying to make sure only their OSes load by default. To be clear, I’m not saying this is happening right now. But the point is that this is a technology with a huge potential for abuse. Some people, myself included, believe that this will result in personal computers moving towards a similar model to the one used in mobile devices and video game consoles where your device, by default, is limited to run only approved software which would be terrible for software freedom.
Do note that, at least for now, you can disable the feature or add custom keys. So a technical user can bypass these restrictions. But this is yet another barrier a user has to bypass to get to use their own computer as they want. And even if we as technical users can bypass this, this will result in us being fucked indirectly. The best example of this are the current Attestation APIs in Android (and iOS, but iOS is such a closed environment that it’s just beating a dead horse hahahah). In theory, you can root and even degoogle (some) android devices. But in practice, this will cause several apps (banks in particular, but others too) to stop working because they detect a modified device/OS. So while my device can technically be opened, in practice I have no choice but to continue using Google’s bullshit. They can afford to do this because 99% of users will just run the default configuration they are provided, so they are ok with losing the remaining users.
But at least we are stopping malware from corrupting boot right? Well, yes, assuming correct implementations. But as you can see from the article that’s not a given. But even if it works as advertised, we have to ask ourselves how much this protects us in practice. For your average Joe, malware that can access user space is already enough to fuck you over. The most common example is ransomware that will just encrypt your personal files without needing to mess with the OS or UEFI at all. Similarly a keylogger can do its thing without messing with boot. Etc, etc. For an average user all this secure boot thing is just security theater, it doesn’t stop the real security problems you will encounter in practice. So, IMO it’s just not worth it given the potential for abuse and how useless it is.
It’s worth mentioning that the equation changes for big companies and governments. In their case, other well funded agents are willing to invest a lot of resources to create very sophisticated malware. Like the malware used to attack the nuclear program in Iran. For them, all this may be worth it to lock down their software as much as possible. But they are playing an entirely different game than the rest of us. And their concerns should not infect our day to day lives.
And Apple has earned any trust? Jesus christ people, like less than 2 months ago they were caught restoring “deleted” photos from iCloud to user devices hahahahaha. Of course fans were excusing them talking about disk sectors like that has anything to do with cloud storage being available accidentally hahahaha.
But yeah, Apple cult followers will find a way to justify surrendering even more freedom to Apple with this BS for sure. And they will be paying top dollar for the pleasure hahahaha.
What a load of BS hahahaha. LLMs are not conversation engines (wtf is that lol, more PR bullshit hahahaha). LLMs are just statistical autocomplete machines. Literally, they just predict the next token based on previous tokens and their training data. Stop trying to make them more than they are.
You can make them autocomplete a conversation and use them as chatbots, but they are not designed to be conversation engines hahahaha. You literally have to provide everything in the conversation, including the LLM’s own previous outputs, back to the LLM to get it to autocomplete a coherent conversation. And it’s only coherent if you only care about shape. When you care about content they are pathetically wrong all the time. It’s just a hack to create smoke and mirrors, and it only works because humans are great at anthropomorphizing machines, and objects, and …
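This is easy to show in code. Below is a sketch of what basically every chat wrapper does: the model itself is a stateless text-in, text-out function (the `complete` function here is a hypothetical stand-in, not any real API), and the "conversation" is faked by the app re-sending the entire history, model replies included, every single turn.

```python
# Hypothetical stand-in for any LLM completion call: one big text prompt in,
# next chunk of text out. The model keeps no state between calls.
def complete(prompt: str) -> str:
    return "(model output)"  # dummy body so the sketch runs

def chat_turn(history: list[tuple[str, str]], user_msg: str) -> str:
    # Flatten the ENTIRE conversation so far, including the model's own
    # previous outputs, back into one prompt. The "memory" lives in the
    # chat app, not in the model.
    prompt = ""
    for role, text in history:
        prompt += f"{role}: {text}\n"
    prompt += f"user: {user_msg}\nassistant:"
    reply = complete(prompt)
    # The app appends both sides so the next turn can replay everything.
    history.append(("user", user_msg))
    history.append(("assistant", reply))
    return reply

history: list[tuple[str, str]] = []
chat_turn(history, "hello")
chat_turn(history, "what did I just say?")  # prompt now replays all 3 prior lines
```

The prompt grows every turn, which is also why long chats eventually fall off the model’s context window: the illusion of memory is literally just a bigger and bigger autocomplete prompt.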
Then you go and compare chatgpt to literally the worst search feature in google. Like, have you ever met someone using the I’m feeling lucky button in Google in the last 10 years? Don’t get me wrong, fuck google and their abysmal search quality. But chatgpt is not even close to being comparable to that, which is pathetic.
And then you handwave the real issue with these stupid models when it comes to search results. Like getting 10 or so equally convincing, equally good looking, equally full of bullshit answers from an LLM is equivalent to getting 10 links in a search engine hahahaha. Come on man, the way I filter the search engine results is by reputation of the linked sites, by looking at the content surrounding the “matched” text that google/bing/whatever shows, etc. None of that is available in an LLM output. You would just get 10 equally plausible answers, good luck telling them apart.
I’m stopping here, but jesus christ. What a bunch of BS you are saying.
I get that. But then I don’t go to car forums to complain about my car and then get mad when they suggest my car is shit and I need to change it hahahaha. My point is, these people only want to complain instead of fixing things, and it’s very annoying. Don’t get me wrong, they can complain as much as they want as far as I care. But I wish they would complain somewhere else, it’s just noise if they are not willing to do anything.
To be fair, the conversations I have seen usually start with people complaining about whatever the latest Windows shitfuckery is. Some well intentioned, but clearly naive, linux user suggests just switching to linux. After all, the OPs usually complain in a linux community, what else do they expect?
Then they, or sometimes a different user than the first one, say something like “but switching to linux is work and I have to learn a new thing” like a dumbass. After that it’s almost impossible, IMO, to have a constructive conversation. Other people from the community get so mad that the conversation becomes a religious argument hahaha. After all, how do you help people that want to fix their problems while at the same time they refuse to change or learn anything? And on top of that they get so self righteous when people dare to suggest literally anything. The only solution they want is for Microsoft to magically stop being Microsoft and fix Windows hahaha, I hope they get comfortable while they wait hahaha.
Honestly I don’t even try. Yes, Microsoft is the one fucking people over. But people are proud of their lack of knowledge when it comes to computers and refuse to even learn the little bit that will actually fix their problems. And that’s on people, not on Microsoft. I just let them enjoy Windows, they deserve it.
Or maybe you could actually read the comment you are replying to instead of being so confrontational? They are literally making the same point you are making, except somehow you sound dismissive, like we just need to take it.
In case you missed it they were literally saying that because the real cost of running software (like the AI recall bullshit) is externalized to consumers, companies don’t give a shit about fixing this. Like literally the same thing you are saying. And this means that we all, as a society, are just wasting a fuck ton of resources. But capitalism is so efficient hahaha.
But come on man, you really think that the only option is for us to run corporate machines in our homes? I don’t know if I should feel sorry about your lack of imagination, or if you are trying to strawman us here. I’m going to assume lack of imagination, don’t assume malice and all that.
That’s something simple legislation could fix. For example, let’s say I buy a cellphone/computer, then buy an app/program for that device, and the device has the required specifications to run the software. The company that sold me that software should be obligated by law to give me a version of the software that runs on my machine forever. This is not a lot to ask for, this is literally how software worked before the internet.
But now, behind the cover of security and convenience, this is all out the window. Each new windows/macos/ios/android/adobe/fucking anything update demands more and more hardware while delivering little to no meaningful new functionality. So we need to keep upgrading and upgrading, and spending and spending.
But this is not a given, we can do better with very little sacrifices.
First of all man, chill lol. Second of all, nice way to project here, I’m saying that the “AIs” are overhyped, and they are being used to justify rampant plagiarism by Microsoft (OpenAI), Google, Meta and the like. This is not the same as me saying the technology is useless, though honestly I only use LLMs for autocomplete when coding, and even then it’s meh.
And third dude, what makes you think we have to prove to you that AI is dumb? Way to shift the burden of proof lol. You are the ones saying that LLMs, which look nothing like a human brain at all, are somehow another way to solve the hard problem of mind hahahaha. Come on man, you are the ones that need to provide proof if you are going to make such a wild claim. Your entire post is “you can’t prove that LLMs don’t think”. And yeah, I can’t prove a negative. Doesn’t mean you are right though.
Come on man. This is exactly what we have been saying all the time. These “AIs” are not creating novel text or ideas. They are just regurgitating back the text they get in similar contexts. It’s just that they don’t repeat things verbatim because they use statistics to predict the next word. And guess what, that’s plagiarism by any real world standard you pick, no matter what tech scammers keep saying. The fact that laws haven’t caught up doesn’t change the reality of mass plagiarism we are seeing …
And people like you keep insisting that “AIs” are stealing ideas, not verbatim copies of the words like that makes it ok. Except LLMs have no concept of ideas, and you people keep repeating that even when shown evidence, like this post, that they don’t think. And even if they did, repeat with me, this is still plagiarism even if this was done by a human. Stop excusing the big tech companies man
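The "statistics to predict the next word" point is not hand-waving, by the way. Here is the most stripped-down version of the idea: a bigram model that picks the next word purely from which words followed it in its training text. Obviously real LLMs are vastly bigger and condition on long contexts, so this is only an illustration of the mechanism, not of their scale.

```python
import random

def train_bigrams(text: str) -> dict[str, list[str]]:
    # For each word, record every word that ever followed it in training.
    words = text.split()
    table: dict[str, list[str]] = {}
    for prev, nxt in zip(words, words[1:]):
        table.setdefault(prev, []).append(nxt)
    return table

def generate(table: dict[str, list[str]], start: str, n: int,
             rng: random.Random) -> str:
    # "Predict" each next word by sampling from what followed the
    # current word in the training data. No ideas, no understanding.
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the", 5, random.Random(0)))
# Whatever comes out, every consecutive word pair was literally seen in
# the training text: the model can only remix what it ingested.
```

Scale that mechanism up by a few billion parameters and you get output that rarely matches the training data word for word, but it is still a remix of ingested text, which is the whole point about plagiarism above.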
If you actually read the article you will see that they tested both allowing the students to ask for answers from the LLM, and then limiting the students to only asking for guidance from the LLM. In the first case the students did significantly worse than their peers that didn’t use the LLM. In the second one they performed the same as students who didn’t use it. So, if the results of this study can be replicated, this shows that LLMs are at best useless for learning and most likely harmful. Most students are not going to limit their use of LLMs to guidance.
You AI shills are just ridiculous, you defend this technology without even bothering to read the points under discussion. Or maybe you read an LLM generated summary? Hahahaha. In any case, do better man.