As a skilled migrant who emigrated from the US, no. There are plenty of companies in the EU that would love your talent and are more than willing to give you better job protection and a higher quality of life.
Uh, ACKUALLY, these should be called GNU/Linux because without the Global Nutrition United’s packaging, these cookies would be impossible to ship on their own
haha, yeah I am well aware I could do something like that. Unfortunately, once you start working for larger companies, your options for solving problems typically shrink dramatically and also need to fit into neat little boxes that someone else already drew. And our environment rules are so draconian that we cannot use k8s to its fullest anyhow. Most of the people I work with have never actually touched k8s, much less any kind of server-oriented UNIX. Thanks for the advice though.
This kind of functionality is surprisingly apropos to a problem I have at work, I realize. And yet, I have k8s. More and more I am appreciating the niche systemd can fill with pets instead of cattle, and I wish corps weren’t immediately jumping to managed k8s and all the complexity it entails.
Used directly for such things, it produces somewhat passable mediocrity, very quickly. The stories it writes from the simplest of prompts are always shallow and full of cliché (and over-represented words like “delve”). Getting it to write good prose basically requires breaking writing, the activity, down into a stream of tiny constituent tasks and then treating the model like the machine it is. And this hack generalizes to other tasks, too, including writing code. It isn’t alive. It isn’t even thinking. But if you treat these things as rigid robots getting specific work done, you can make them do real things. The problem is asking experts to do all of that labor to hyper-segment the work and micromanage the robot. Doing that is actually more work than just asking the expert to do the task themselves. It is still a very rough tool. It will definitely not replace the intern just yet. At least my interns submit code changes that compile.
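To make the “micromanage the robot” idea concrete, here is a minimal sketch of driving a model through tiny, pre-decomposed steps instead of one big prompt. It assumes the OpenAI Python client; the model name, the step list, and the CSV-parsing task are illustrative inventions, not anything from the original comment.

```python
# Minimal sketch: the expert pre-decomposes the work into narrow,
# mechanical steps and the model is walked through them one at a time,
# with each step's output fed back in as context for the next.
# Assumes the OpenAI Python client; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

STEPS = [
    "List the exact fields a CSV row of `date,amount,category` contains.",
    "Write only a Python dataclass for one such row, no commentary.",
    "Write only a function that parses a single line into that dataclass.",
    "Write only three pytest cases covering malformed lines.",
]

def run_step(instruction: str, context: str) -> str:
    """Ask for one narrow artifact, with all prior output as context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer with the artifact only."},
            {"role": "user", "content": f"Context so far:\n{context}\n\nTask: {instruction}"},
        ],
    )
    return response.choices[0].message.content

context = ""
for step in STEPS:
    context += "\n\n" + run_step(step, context)

print(context)
```

The decomposition is all human labor done up front, which is exactly the cost being complained about: the model only ever sees one mechanical task at a time.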
Don’t worry, human toil isn’t going anywhere. All of this stuff is super new and still comparatively useless. Right now, the early adopters are mostly remixing what has already worked reliably. We have yet to see truly novel applications. What you will see in the near future is lots of “enhanced” products that you can talk to, whether you want to or not. The human jobs lost to the first wave of AI automation will likely be in the call center. Important industries such as agriculture are already so hyper-automated that it will take an enormous investment to close the remaining 2%. Many, many industries will be that way, even after AI. And for a slightly more cynical take: human labor will never go away, because having power over machines isn’t the same as having power over other humans. We won’t let computers make us all useless.
You’re aware Linux basically runs the world, right?
Billions of devices run Linux. It is an amazing feat!
Only because they massively displaced a shitload of local businesses. Same with Amazon. If you have very few skills, where else are you going to work?
This is a solvable problem. Just make a LoRA of the Alice character. For modifications to the character, you might also need to make more LoRAs, but again totally doable. Then at runtime, you are just shuffling LoRAs when you need to generate.
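For anyone who wants to try that, here is a minimal sketch of loading and hot-swapping per-character LoRAs at generation time with Hugging Face diffusers (PEFT backend). The base model, the local LoRA paths, and the “alice” / “alice_armor” adapter names are placeholders made up for the example.

```python
# Minimal sketch: load each character (and variant) LoRA once under its
# own adapter name, then switch combinations per generation with
# set_adapters. Paths and adapter names below are hypothetical.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("./loras/alice", adapter_name="alice")
pipe.load_lora_weights("./loras/alice_armor", adapter_name="alice_armor")

# Base character only.
pipe.set_adapters(["alice"], adapter_weights=[0.9])
image = pipe("alice reading in a library, soft light").images[0]
image.save("alice.png")

# Character plus a modification LoRA, blended.
pipe.set_adapters(["alice", "alice_armor"], adapter_weights=[0.7, 0.6])
image = pipe("alice in plate armor on a battlefield").images[0]
image.save("alice_armor.png")
```

The nice part is that swapping is just a set_adapters call between generations, so the base model never needs to be reloaded.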
You’re correct that it will struggle to give you exactly what you want, because you need to have some “machine sympathy.” If you think in smaller steps and get the machine to do those smaller, more doable steps, you can eventually accomplish the overall goal. It is the difference between asking a model to write a story and asking it to first generate characters, a scenario, and a plot, and then using that as context to write just a small part of the story. The first story will be bland and incoherent after a while. The second, through better context control, will weave you a pretty consistent story.
These models are not magic (even though it feels like it). That they follow instructions at all is amazing, but they simply will not grasp the nuance of the overall picture and be able to accomplish it unaided. If you think of them as natural-language processors capable of simple, mechanical tasks and drive them mechanistically, you’ll get much better results.
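As one way to picture that staged, context-controlled approach, here is a minimal sketch using the OpenAI Python client; the model name, the prompts, and the beat counts are assumptions made up for the example, not a prescribed workflow.

```python
# Minimal sketch: build up characters, scenario, and plot in separate
# calls, then ask for prose only one small beat at a time, carrying all
# the earlier output along as context. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

characters = ask("Invent two characters for a mystery novella: name, "
                 "occupation, one flaw, one secret. Be terse.")
scenario = ask(f"Characters:\n{characters}\n"
               "Describe the setting and inciting incident in five sentences.")
plot = ask(f"Characters:\n{characters}\nScenario:\n{scenario}\n"
           "Outline a ten-beat plot as a numbered list.")

# Only now ask for prose, and only for one beat of it.
scene = ask(f"Characters:\n{characters}\nScenario:\n{scenario}\nPlot:\n{plot}\n"
            "Write beat 1 only, about 300 words, third person past tense.")
print(scene)
```

Each call is small and mechanical; the coherence comes from what you choose to put in the context, not from the model “understanding” the whole book.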
Leading to either having to carefully double-check what it suggests, or having to fix bugs in code that I wrote but didn’t actually write.
100% this. A recent update from JetBrains turned on the AI shitcomplete (I guess my org decided to pay for it). Not only is it slow af, but in trying it, I discovered that I have to fight the suggestions because they are just wrong. And what is terrible is that I know my coworkers will definitely use it, and I’ll be stuck fixing their low-skill shit that is now riddled with subtle AI shitcomplete. The tools are simply not ready, and anyone who tells you they are does not have the skill or experience to back up that assertion.
https://theauthoritarians.org/