

Microslop Crashpilot



Congrats.
Yes, desktop Linux is generally very usable for the majority of users these days. This was already claimed back in the late 1990s, which is probably why many non-IT-professionals got a bad first impression of desktop Linux. But that has changed since (very roughly) 10 years ago, and for gaming in particular since roughly 5 years ago. This is also why desktop Linux sat at ~1% market share for ages but has grown to ~6% within just the last couple of years. And with higher popularity comes more developer interest and support. Furthermore, Windows is getting worse over time because Nadella is more interested in milking his user base than in nurturing it, and many people want more independence from US-based proprietary software due to the current political situation, so it's very likely that desktop Linux will keep snowballing upwards. The trend looks very positive for desktop Linux; it will probably reach macOS market share within the next couple of years. For gaming specifically, it's already #2.
The most important thing about the Linux ecosystem is of course that most of it (at least the core components) is free/open source software and this is necessary to have digital sovereignty.
Other users interested in making the switch can ease their transition by doing it in 2 steps: first, while still on Windows, replace all the important applications you're using with Linux-compatible ones (for example, drop MS Office and Adobe) and adjust to the changed workflows. Only after that, install Linux as the primary OS. (Dual-boot is an option, but it has disadvantages; best is to physically disconnect the disk containing Windows, so you still have a backup in case you desperately need it, and install Linux on a separate disk.) That way, the culture shock is mitigated because you'll have at least some familiarity (the applications you need) inside an otherwise unfamiliar new OS environment, and the change will feel less overwhelming.
If there are still dependencies which can't be worked around, there's the emergency solution of using either Wine or a Windows VM on Linux. For the latter, it's probably best these days to use WinBoat, which lets you run Windows-only applications inside a dedicated Windows VM or container on Linux. Or you just use a full regular Windows VM on Linux, with a shared folder between both systems for exchanging files.


Unfortunately, most Windows users have a long history of complaining about it and then still continuing to use it.
There’s no way around it: if you keep using abusive software, you’ll stay in an abusive relationship.


It’s for window management related hotkeys. Obviously. All about windows. With a lowercase “w”.


In order of priority:


It’s been downhill since W7


Tips for coping:


Technically, nothing you use in tech is ever really "simple"; there's tons of complexity hidden from the common user. And whenever parts of that complexity fail or don't work like the user expects them to, the superficially simple stuff becomes hard.
Docker and containers are a fairly advanced topic. Don’t think that it’s easy getting into this stuff. Everyone has to learn quite a bit in advance to utilize that.
To play games, you went in the wrong direction by fiddling with Wine directly, or even indirectly via Bottles. You COULD do that, but you've literally chosen the hardest path. You should use something like Heroic Games Launcher, Lutris or Steam to manage, install and launch your games fairly easily. These take care of all the complex stuff behind the scenes for you.


I've been using Arch since approximately 2006 or so. I like its stability (yes!), performance, rapid updates and technical simplicity. It never stands in my way and it's fairly simple to understand, administer and modify. It's probably the most convenient OS I've ever used: sure, it takes time/effort to set up, but once you're past that it's smooth sailing. It also doesn't change dramatically over the years (it doesn't need to), so it's easy to keep up with its development. Plus, I have a custom setup script for it that installs and configures all of the basics, so if I ever need to reinstall, I'm not starting from zero.
I am eyeing NixOS as "the next step" but haven't experimented with it much yet. Arch is just too comfy to use, and the advantages NixOS brings aren't yet significant enough for me to make any kind of switch, but I consider NixOS (as well as its related technologies like the Nix package manager) to be the most interesting and most advanced thing in the Linux world currently.
If you're reading this as a newbie Linux user: probably don't use either of the two mentioned above (yet). They're not considered entry-level, unless you're interested in learning low-level (as in: highly technical) Linux stuff right from the start. NixOS/Nix in particular is fairly complex and can be a challenge even for veteran Linux admins/users to fully understand and utilize well. Start your journey with more common desktop distros like Mint, Fedora or Kubuntu.


Linux phones are usable right now, but of course you have some limitations in practice: many apps aren't available, or you have to use workarounds. If you mostly use open source applications, you could be fine though. It's likely that you'll still need a secondary, small Android-based phone that you turn on just for those rare cases where you absolutely need a certain mobile app and it's only available for Android, at least while Linux mobile OS usage is still low. It's probably going to grow faster in the future, because those monopolistic companies usually enshittify their products and services at some point (Google is already well on its way), and then regular Android/iOS users become so annoyed at what they're using that they also become more open to alternatives. It's basically what's happening in the desktop OS space right now: Windows continues to become more user-hostile and annoying to use, and desktop Linux passively (as well as actively) becomes more popular as a result. At some point, these companies forget what made their products popular in the first place and only operate in the mode of milking users for data and profits, because they don't need to work hard anymore to improve the product; it's already popular enough. At that point, regular users who normally don't care about things like freedom, privacy and ethics in the products they use will notice that things became worse, and might switch simply because of inconveniences they didn't have before.
Another very good option besides a Linux-based mobile OS these days is GrapheneOS. It's the best Android-based distribution you can have currently; nothing comes close (not going to elaborate here because this long post is already long). But you should still be prepared for increasing hostility from Google towards unofficial Android distributions, and for some apps which use the Play Integrity DRM not to work. If you encounter this, make sure to let the app developer(s) know. They need to realize that they are only serving Google's interests with this, not their own.
The current tech/IT sector heavily relies on and rides hype trains. It's a bit like the fashion industry that way. And the AI behind this particular hype has so far only been somewhat useful.
Current general LLMs are decent for prototyping, or for example output that jump-starts you in the general direction of your destination, but their output always needs supervision and most often needs fixing. If you apply unreliable and constantly changing AI to everything and completely throw out humans just because it's cheaper, you'll get vastly inferior results. You'll probably get faster results, but they'll have tons of errors, which introduce tons of extra problems you never had before. I can see AI fully replacing some jobs in specific areas where errors don't matter much. But that's about it. For all other jobs or purposes, AI will be an extra tool, nothing more, nothing less.
AI has its uses within specific domains, when trained only on domain-specific and truthful data. You know, things like AlphaZero or AlphaGo. Or AIs revealing previously unknown methods to reach the same goal. But these general AIs like ChatGPT, which are trained on basically the whole web with all the crap in it… they're never going to be truly great. And they're also becoming worse over time, i.e. not improving much at all, because the web will be even more full of AI-generated crap in the future, and the AIs slurp up all that crap too. The training data gets muddier over time. The promise of AIs getting ever more powerful as time goes on is just a marketing lie. There's most likely a saturation curve, and we're most likely already very close to the saturation point, where it won't really get any better. You could already see this by comparing the jump from GPT-3 to GPT-4 (big) with the jump from GPT-4 to GPT-5 (much smaller). Or take a look at FSD cars: also not really happening, unless you like crashes. Of course, the companies want to keep the illusion rolling, so they'll always claim the next big revolution is just around the corner. They profit from investments and monthly paying customers, and as long as they can keep that illusion up and profit from it, they don't even need to fulfill any more promises.
It’s just a tendency, not a hard rule.


German here. These are some cultural and day-to-day differences compared to the US:
Collapse will definitely come. Our way of living on this planet is not sustainable, especially now that everyone who would have the power/influence to change things literally and openly does the opposite (e.g. the USA turning their back on climate-friendly research/technologies). So I think it's kind of over. I'm kind of an optimist, but time is simply running out: we had the Paris agreement and all that jazz like 10 years ago and almost nothing really changed (the only time something changed for the positive was during 2020/2021, and that was involuntary!). In fact, it's probably worse now than it was back then. Sure, you can and should individually keep fighting, because every small improvement will at least delay the collapse a bit, which is useful, but I'm not going to naively believe that we can still counteract this. It's too little, too late. And that's not even taking into account the possibility of a WW3. The rich/powerful probably also know that the ecological and political situation is becoming increasingly unstable, which is why they are building luxury bunkers. I would build one too, if I had the spare change.


By the way, ignoring as much of this big tech corpo crap as you can also makes you live an easier life.
Whenever I see a story of “some guy who relies on <big tech account> working loses access to it and suddenly can’t do anything anymore” I think “this can never happen to me”. Which means there’s a whole category of problems you’re suddenly never going to see. It also means you’re less naive. So just don’t vendor-lock yourself in. Don’t put a log-in for an account which you don’t control in front of important things you need to do. Simple as that.
On top of that, you'll also leak less private data about yourself, and probably about others as well. So you even make yourself less of a target when it comes to data protection laws. I know, these get routinely ignored. I'm just saying: if you don't use the problematic stuff at all (or almost never), you'll also potentially have fewer legal troubles at hand. And you never know; legal troubles might not appear for a while, but they could lurk far in the future. For example, many Nazis got into legal trouble for their participation in Nazi Germany, even decades later.
I know, the guy from the story probably only needed that account to ensure he can compare some stuff with how MS Office is behaving compared to LibreOffice, or things like that. So it’s probably not a big deal. But generally speaking, you really shouldn’t vendor-lock yourself in.
The 5090 might draw around 350W most of the time, but like many top-end cards (also from AMD), its power draw can spike really high, reaching double that for very short moments. So you need a beefy power supply regardless. For a 5090 in combination with a top-end 16-core CPU, I wouldn't recommend anything under 1200W, so you still have some wiggle room. (Power supplies are also most efficient when running at around 80% capacity, not at ~95-99%.)
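As a rough back-of-the-envelope sketch of that sizing logic (the wattage figures below are loose assumptions for illustration, not vendor specs):

```python
# Rough PSU sizing sketch. All wattages are assumptions, not measured specs.
GPU_SUSTAINED = 350  # typical top-end GPU draw under load (W)
GPU_SPIKE = 700      # transient spikes can reach roughly double sustained (W)
CPU = 230            # top-end 16-core CPU under full load (W)
REST = 100           # mainboard, RAM, SSDs, fans, USB devices (W)

peak = GPU_SPIKE + CPU + REST   # worst-case transient system draw
recommended = peak / 0.8        # aim for ~80% capacity at peak for efficiency/headroom

print(f"worst-case peak: {peak} W")
print(f"recommended PSU: ~{recommended:.0f} W")
```

With these assumed numbers it lands just under 1300W, which is roughly in line with the "nothing under 1200W" advice above.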
Translation help from Fascist English to US English:
Some potential optimization opportunities:
Memory doesn't need to have RGB lighting (unless you want it for the optics); you can get the exact same thing without RGB for a little bit cheaper. IIRC, the non-RGB model is called "Flare X" or similar; "Trident" is the RGB one. Also, CL32 seems slightly slow. I'm not up to date on this, but you can probably get CL30 or CL28 for even more performance. 6400MHz seems OK; there are faster kits, but there's also a trade-off between stability and performance, so I think 6400MHz is fine. It's important to ensure good compatibility with your mainboard. Also, 64 GB is oversized for just a gaming rig. For pure gaming, you get basically no extra value from 64GB compared to 32GB. You only might need more than 32 GB for workstation-like use cases (video editing, for example) and/or when you run VMs in parallel. Unused RAM provides no value and no additional performance.
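The CL vs. frequency trade-off above can be put into numbers: absolute CAS latency in nanoseconds is 2000 × CL / (transfer rate in MT/s), since DDR memory transfers twice per clock. A quick sketch comparing a few common DDR5 ratings:

```python
# Absolute CAS latency: one transfer cycle is 2000 / MT_s nanoseconds long
# (DDR = two transfers per clock), so latency_ns = 2000 * CL / MT_s.
def cas_latency_ns(cl: int, mt_s: int) -> float:
    return 2000 * cl / mt_s

for cl, mt_s in [(32, 6400), (30, 6400), (28, 6400), (30, 6000)]:
    print(f"DDR5-{mt_s} CL{cl}: {cas_latency_ns(cl, mt_s):.2f} ns")
```

This shows why CL28 at 6400 is noticeably snappier (8.75 ns) than CL32 at 6400 (10.0 ns), while DDR5-6000 CL30 and DDR5-6400 CL32 end up with the same absolute latency.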
A CPU with 16 cores could be slightly oversized for a pure gaming use case as well; in most games you won't notice a difference compared to the 12- or even 8-core variant. Again, a higher core count is primarily useful for workstation-like use cases or VMs. Sometimes the 12-core can even be faster in games, for example if it has slightly higher clock speeds. You should look at some benchmarks to see whether the 16-core provides any benefit for gaming.
Mainboard: the MSI Godlike is extremely pricey, with very questionable, maybe zero additional value compared to a moderately priced board. The most important specs are probably the same anyway. You should take a look at a cheaper option here, unless you don't mind throwing money away.
Monitor: If the player isn't playing any fast-paced e-sports titles, I think a 240Hz refresh rate is overkill, but YMMV.
SSDs: not sure if PCIe 5 is worth the extra cash; you could still go with PCIe 4. Those are slightly slower, but it's barely noticeable, and for gaming it only (slightly!) affects loading times, not your performance in actual gameplay. Not sure whether WD is actually a good NVMe SSD brand; consider Samsung or SK Hynix maybe.


AI has some use but it always needs human oversight and the final decision must also be made by a human professional. If you use AI to speed up tasks and you know whether the output of the AI is valid or not, and you have the final decision, then you can safely use it. But if you let AI decide on and execute important tasks basically autonomously, then you have a recipe for disaster. Fully autonomous and mistake-free AI is a naive pipe dream which I don’t see on the horizon at all.
8BitDo Ultimate 2C Wireless here (but only using it wired, on PC, Linux). Good Xbox-style controller.