• 0 Posts
  • 98 Comments
Joined 2 years ago
Cake day: February 10th, 2024

  • Encrypted email in the way that Proton and Tuta do it has a lot of drawbacks. Because I almost never use my personal (non-work) email to communicate with another human, and automated mail tends to have a message body no more sensitive than its subject line and metadata, zero-knowledge encryption at rest for just the mail body has a negligible privacy impact for me.

    It helps to consider your actual needs and privacy goals and use the services or software that fit them best, rather than just following whatever others say has the best privacy.

    I used Proton for two years and similarly migrated off of it just last month. Since I use custom domains for email through it, and I never cared to use their other services outside of Mail (and occasionally VPN), it was a quick and painless migration. Unlike the painful migration of changing my email address everywhere to be non-gmail (which I still haven’t 100% finished after two years), this time I only needed to update DNS records and copy mailbox data. After migrating, having actual IMAP/JMAP access without a bridge is nice.
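
    To make “only needed to update DNS records” concrete, here’s a rough sketch of verifying the mail-related records such a migration touches, using dnspython (example.com and every value shown are placeholders):

    ```python
    # Check the DNS records a mail migration typically touches.
    # Requires dnspython (pip install dnspython); "example.com" is a placeholder.
    import dns.resolver

    domain = "example.com"

    # MX should now point at the new provider's mail servers.
    for rr in dns.resolver.resolve(domain, "MX"):
        print(f"MX {rr.preference} {rr.exchange}")

    # SPF lives in a TXT record on the domain itself...
    for rr in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rr.strings).decode()
        if txt.startswith("v=spf1"):
            print(f"SPF: {txt}")

    # ...and DMARC in a TXT record on the _dmarc subdomain.
    for rr in dns.resolver.resolve(f"_dmarc.{domain}", "TXT"):
        print("DMARC:", b"".join(rr.strings).decode())
    ```

    (DKIM records are provider-specific selectors, so they’re left out here.)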

    Note that you don’t necessarily need to import your entire mailbox when migrating. I never imported my email archive from gmail to proton; an offline archive of all old received emails on my NAS is enough for me if I ever need to search through it. I can even view that archive in Thunderbird.
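
    Searching such an archive doesn’t even strictly need a mail client. A minimal sketch with Python’s standard-library mailbox module, assuming the archive is a single mbox file (the path and search term are made up):

    ```python
    # Search an offline mbox archive (the same format Thunderbird uses
    # for local folders). Path and search term are placeholders.
    import mailbox
    from email.header import decode_header, make_header

    archive = mailbox.mbox("/nas/mail/gmail-archive.mbox")
    needle = "invoice"

    for msg in archive:
        # Subject headers may be RFC 2047-encoded; decode before matching.
        subject = str(make_header(decode_header(msg.get("Subject", ""))))
        if needle.lower() in subject.lower():
            print(msg.get("Date"), "|", msg.get("From"), "|", subject)
    ```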

    My thoughts on a few of the other Proton services:

    • Proton VPN is really nice, and one of the few good options with port forwarding. But outside of the Proton Unlimited bundle, some other providers have better pricing than VPN Plus alone.
    • SimpleLogin (or Proton Pass masks) is nice, though relying on anonymous email masks is a trade-off: it makes you dependent on the masking provider. I prefer disposable addresses under my custom domain for anything tied to my identity anyway (like services that use my billing or shipping info), and shared-domain masks for everything else. My existing shared-domain email masks in Proton still work even after my subscription ended. Addy and Firefox Relay are fine alternatives, and some other mail services like Fastmail include their own equivalent.
    • I’d rather self-host CalDAV/CardDAV than rely on online services for calendar, contacts, etc.
    • I had already been using a local KeePassXC database and a NAS for many years so I had no reason to use Proton Drive and Pass, except for the latter’s email masks.

  • It seems they actually changed its official romanization to “Bracky” about 3 years ago, probably to avoid that problem; I listed them from memory and hadn’t realized it changed. Still, the old name was used on Japanese merch and marketing for decades, though never in any main-series games, since those only use the katakana “ブラッキー”.

    Eevee’s name sounds close enough to being the same in Japanese and English that they even used the same voice clips for both in some anime episodes and Let’s Go Eevee. The official romanization just has a strange spelling.


  • Only the French version uses those names. English has Eevee, Vaporeon, Jolteon, Flareon, Espeon, Umbreon, Leafeon, Glaceon, and Sylveon. The Japanese names can be considered the originals; written in Latin letters, they are Eievui, Showers, Thunders, Booster, Eifie, Blacky, Leafia, Glacia, and Nymphia.

    German, Korean, and Chinese each have different names for them and most other Pokémon too. Other languages like Spanish, Italian, and Portuguese use the same names as English.


  • RISC-V is designed to be an extensible instruction set, where the base is very minimal but a plethora of extensions exists. The ISA can be small for academic and microcontroller uses, large (more than a hundred extensions) for server uses, or anything in between.

    Despite the name, a powerful RISC-V server arguably can’t be considered “RISC”, though that term doesn’t have a single agreed-upon meaning, and some design characteristics strongly associated with RISC still apply, such as restricting memory access to dedicated load/store instructions rather than letting computation instructions operate directly on memory.

    Also, not everything is about CPU instructions. Acceleration for media codecs, for example, normally means offloading those tasks to the GPU rather than the CPU. Even if the CPU and GPU are both part of the same SoC, that doesn’t touch the CPU instruction set.


  • The common issues with RISC-V laptops (or really any laptop built around an SoC that wasn’t designed laptop-first) include sleep not putting the system into a low enough power state (the battery drains if you leave it closed without powering off), an underwhelming GPU, higher idle power draw, and lower peak performance under intermittent load. If none of those are dealbreakers, the newest DeepComputing Framework board (on K3) can arguably be considered a viable daily-driver RISC-V laptop option, though I wouldn’t want to use it as one.

    Nvidia, AMD, and Intel are the big names in GPUs, and all of them have products that integrate a GPU into the same SoC as the CPU, but none of them is likely to license its GPU IP to other SoC vendors these days. The same goes for the in-house GPU designs of Apple, Qualcomm, and Samsung. ARM does license out its Mali GPU IP, and that’s often the go-to option for SoC vendors without an in-house GPU, but RISC-V systems can’t use that. So RISC-V systems’ GPU options effectively amount to:

    1. Use separate processors for your CPU and GPU. Desktop/server can just slot in a video card. Laptops 15-inch or larger often solder a GeForce or Radeon chip to the board; smaller 13-inch laptops normally don’t, because of cooling and battery-life concerns.
    2. License the integrated GPU from Imagination. That seems to be the only notable GPU offering available to license for non-ARM platforms. Users don’t seem very fond of Imagination GPUs, but they’re better than nothing.
    3. Pray that one of the companies with an established GPU portfolio decides not only to enter the RISC-V space but also to make a RISC-V processor that can be used in laptops. I think that’s unlikely; they’ll probably focus on servers only.

  • zarenki@lemmy.ml to Linux@lemmy.ml · Linux and RISC-V by 2030 · 13 days ago

    In the first place, consider why you even want to switch to RISC-V. If it’s because of an enthusiasm for open source and hearing the ISA described as open, know that any performant hardware you get likely won’t be as open as you expect. The SoC won’t be open-source, nor will the CPU cores in it; the firmware and bootloader might be an open-source U-Boot fork, but there’s a good chance they’re proprietary. Even the actual implemented ISA won’t be open, since major core designers add custom instructions that aren’t part of the RISC-V spec.

    Distros like Ubuntu and Fedora seem slated to treat RISC-V as a main architecture, with close to the same package count and the same update schedule as x86/ARM, by the end of next year if not sooner. Just as on ARM, proprietary x86-only software like games can run through emulation with a nontrivial performance overhead, and other binary software distributed through channels outside the distro repos (docker containers, third-party apt/yum repos, AppImage) is often only published for x86, even for things that are open-source and would compile for other arches without issue.
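
    To make the binary-distribution problem concrete: those channels ship one artifact per architecture, so anything upstream never built simply isn’t available, even when the source would compile fine. A hypothetical sketch of the selection logic install scripts commonly use (the project and file names are made up):

    ```python
    # Per-architecture artifact selection, as install scripts commonly do.
    # "sometool" and its file names are hypothetical.
    import platform
    import sys

    ARTIFACTS = {
        "x86_64": "sometool-1.0-linux-x86_64.tar.gz",
        "aarch64": "sometool-1.0-linux-aarch64.tar.gz",
        # No riscv64 entry: upstream never published one, even though
        # the source would build fine with a riscv64 toolchain.
    }

    arch = platform.machine()  # "riscv64" on RISC-V Linux
    artifact = ARTIFACTS.get(arch)
    if artifact is None:
        sys.exit(f"no prebuilt binary for {arch}; build from source instead")
    print("would download", artifact)
    ```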

    The software situation can be either a major annoyance or completely seamless depending on how closely you stick to just the distro repos.

    Hardware vendors will probably have something comparable to recent Intel/AMD desktop parts about a year from now. Likely not better, but in the same realm at least. Within another couple of years after that, you’ll almost certainly see more than one of the established major SoC vendors (Qualcomm, Nvidia, AMD, Samsung, etc.) release something RISC-V in the desktop, server, or mobile space, which is sure to be competitive with x86 and ARM hardware there.

    Laptops might not see anything good. An alternate ISA can be viable on servers and mobile (both Linux-first ecosystems), and desktop can easily inherit from stuff made for server, but laptops have unique hardware needs, and the market isn’t there for vendors to invest much R&D in laptop chips that can run neither Windows nor macOS. RISC-V laptops do exist, but they basically take chips designed for SBC/edge use and throw them in a laptop shell; the result is naturally awful at power draw, since the chip was never meant for laptops, and the iGPU situation is a mess too. That’s unlikely to change in the next few years.


  • 3DS does have an account system and can log in to an account previously used on a different 3DS, but only if that account has already been unlinked from the previous system. Unlinking is easy if the old device still works, and absurdly inconvenient if it’s lost or broken: the latter requires contacting their customer service, which seems utterly insane for what should be part of a standard login flow, and as a non-automated process it also means a human has the ability to refuse. But in almost all cases it is currently possible to “legally get those games back”.

    Wii, in contrast, doesn’t have that ability at all. There’s no account system there.

    Cartridges and game discs don’t pass the “hammer test”. They also have a limited lifespan: disc rot exists, and flash memory loses its data if left unpowered for a long time.

    Regardless of whether the game is a physical copy or has any digital updates/DLC, true game preservation requires creating usable backups, which (for offline games) requires either properly DRM-free releases or viable DRM circumvention. That circumvention doesn’t yet exist for Switch 2, and it’s outlawed by the DMCA and by similar laws in most other countries. The ability to create personal backups (and reasonably short copyright terms) should be a consumer right, and those laws are a major problem, but physical and digital game releases are equally terrible in this particular respect, unless they’re DRM-free, which on consoles they never are.

    In any case, this Pokémon rerelease is for the original Switch, with no differences on Switch 2, so it’s entirely possible to dump a backup. There’s unlikely to be much meaningful difference between that and a dump of the original release, though, aside from the emulator code; community-made emulators have better features, and the only people likely to care are those who want to reverse-engineer Nintendo’s emulator for reasons like making tools compatible with its local wireless connection.


  • Past 2DS/3DS purchases aren’t lost yet. Nintendo shut down the ability to buy additional games or DLC several years ago, but the servers that handle logging in, redownloading “owned” digital games, and downloading update patches are still running.

    And even when those servers are eventually killed (for either 3DS or Switch), any digital games already installed on a system will continue to work for as long as the hardware does, unlike a lot of PC games’ DRM that requires constant or occasional check-ins with license servers.

    Of course, that’s still not proper ownership: you don’t truly own something you bought unless you can freely transfer your purchased data between devices you own without seeking the publisher’s permission (or relying on DRM circumvention) and can transfer ownership through loan or resale. But understanding the actual implications of any restrictions still matters.


  • I’ve been using Proton Unlimited for a few years and I’m planning to switch to Fastmail soon.

    Mostly because I dislike that Proton doesn’t support the standard client protocols. I know Proton’s “zero-knowledge encryption” is the reason why, but that doesn’t feel like a meaningful privacy gain to me, considering it only covers the message body and doesn’t apply to email metadata. Proton could try collaborating on and extending open standards with the encryption features they need, making it feasible for third-party clients to implement sync without a bridge, but they haven’t.

    Needing a mail bridge is a moderate annoyance on desktop, but on mobile it means you’re basically forced to use their app. At least the Proton Android app is GPL and I haven’t had issues with it, but I don’t like the lock-in existing at all. Fastmail, in contrast, has been pushing JMAP forward as an open standard that makes mobile sync in third-party clients better than what’s possible with IMAP.
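
    Part of why that matters: JMAP is just JSON over HTTPS, so a third-party client needs no vendor SDK or bridge at all. A rough sketch of the flow from RFC 8620/8621 using Python’s requests library (the host and token are placeholders):

    ```python
    # Minimal JMAP exchange (RFC 8620/8621): fetch the session object,
    # then batch method calls in a single POST. Host/token are placeholders.
    import requests

    SESSION_URL = "https://mail.example.com/.well-known/jmap"
    HEADERS = {"Authorization": "Bearer <api-token>"}

    # 1. The session object tells the client where the API lives
    #    and which account to use for mail.
    session = requests.get(SESSION_URL, headers=HEADERS).json()
    api_url = session["apiUrl"]
    account = session["primaryAccounts"]["urn:ietf:params:jmap:mail"]

    # 2. One POST can carry several method calls; here, a text search.
    body = {
        "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
        "methodCalls": [
            ["Email/query",
             {"accountId": account, "filter": {"text": "invoice"}, "limit": 10},
             "0"],
        ],
    }
    resp = requests.post(api_url, json=body, headers=HEADERS).json()
    print(resp["methodResponses"][0])
    ```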

    I also don’t like that Proton Unlimited is limited to 3 domains and 15 total addresses (not counting SimpleLogin). Fastmail’s limits there are far higher.

    Both services seem to use a fair bit of proprietary software server-side, but I think Fastmail has more of the important parts as FOSS, including its main IMAP/CalDAV/etc. server (Cyrus).


  • My experience is mostly with Sony TVs, which run near-stock Android TV and do have a settings toggle to disable Bluetooth without needing root. Some models need BT for voice search (when the mic is in the remote), and to many people losing that might be a good thing, but others seem to need it even for basic menu navigation from the stock remote, because features like the trackpad don’t work over IR. Considering how often I see unfamiliar TVs listed in my phone’s Bluetooth pairing menu, I know plenty of other TV vendors keep theirs in constant discoverable mode.

    Having strangers within wireless range (especially on 2.4 GHz, though 5 GHz can be bad too) able to intentionally and repeatedly interrupt whatever you’re doing with a pairing request at any time should absolutely be seen as a severe security flaw. Even if they can’t successfully pair, the request prompt is akin to a denial of service. That the flaw is so blatant people often trigger it by mistake makes it even worse.


  • I think it’s far more common for devices to get pairing wrong than to get it right.

    Just a few of the very common issues I’ve seen in various devices:

    • TVs that are constantly in discoverable mode, even when the screen is off, just in case the owner loses their remote and wants to pair a new one without reaching behind the TV to press a button. There’s no way to avoid this except disabling Bluetooth entirely, which makes the stock remote lose some or all functionality. Pairing requests also interrupt whatever you’re watching. (Discoverability is a controllable adapter setting, at least on Linux; see the sketch after this list.)
    • Audio devices that, after powering on, wait only a very short time for already-paired devices to connect before falling back to pairing mode. So short that a smartphone in a low-power state (e.g. because you haven’t unlocked it for a few minutes) might not connect in time. Most if not all of the Bluetooth-to-3.5mm receivers intended for older cars seem to share this problem.
    • Pairing codes are extremely underused in general, even among input devices. Most things seem to just pair unconditionally with whoever sends a request first.
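
    For contrast, getting this right isn’t hard on platforms that expose the knobs: BlueZ on Linux models discoverability as adapter properties over D-Bus. A sketch using dbus-python, assuming the adapter object is hci0:

    ```python
    # Control Bluetooth discoverability via BlueZ's D-Bus API.
    # Uses dbus-python; assumes the adapter object is /org/bluez/hci0.
    import dbus

    bus = dbus.SystemBus()
    adapter = bus.get_object("org.bluez", "/org/bluez/hci0")
    props = dbus.Interface(adapter, "org.freedesktop.DBus.Properties")

    # Keep the device invisible by default...
    props.Set("org.bluez.Adapter1", "Discoverable", dbus.Boolean(False))

    # ...and when pairing is actually wanted, open a short window
    # instead of staying visible forever (timeout in seconds).
    props.Set("org.bluez.Adapter1", "DiscoverableTimeout", dbus.UInt32(120))
    props.Set("org.bluez.Adapter1", "Discoverable", dbus.Boolean(True))
    ```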


  • “the fact that it still includes USB-A ports”

    Why complain about this? It’s a good thing. Most people have USB-A peripherals, and the majority of new keyboards and mice, even in 2025, still rely on it. Game controllers too: the Switch 2 Pro, Xbox Elite 2, 8BitDo wireless controllers, and many others all include a USB-A-to-C cable for charging and optional wired play (C-to-C cables work too but must be bought separately), and modern wired-only controllers use USB-A cables. Far better for the device to offer USB-A ports than to force most users to buy adapters.

    This system does have one USB-C port on the back, though it would be better if it had one on the front too in addition to the USB-A ones.


  • Similar to the full-app-backup use case mentioned in another comment, I regularly use root (through adb shell) to make a personal backup of my owned Kindle books and their keys, which I then use to convert the books to DRM-free EPUB and read them in non-Amazon-approved apps. The encrypted books are in shared storage, but the key to decrypt them is in an app-private database. I also occasionally back up my own apk/obb files.

    A “security model” designed around the idea that users should never have any kind of access, not even read-only, to the data that app developers store on a device the user owns is fundamentally incompatible with computing freedom.

    I keep a secondary device with rooted Lineage at home for the few apps I want root access to, instead of rooting my daily driver, but I always feel like it would be reassuring to have the ability to make proper backups from my main phone.



  • When compatible hardware is available, packages built for RVA23 are expected to have a big performance impact. You can already see a big part of that with the vector (V) extension: running programs built without it is akin to running x86 programs without SSE or AVX. RVA23 is the first RVA profile that makes V mandatory rather than optional.

    You might see a similar performance impact by targeting something like RVA22+V instead of RVA23, but as far as I know the only hardware that’d benefit is the SpacemiT-based systems (OPi RV2, BPI-F3, Jupiter), while that’d still leave behind the VisionFive 2, Pioneer, P550/Megrez, and even an upcoming processor that UltraRISC announced recently. The profiles aren’t really intended for those kinds of fine-tuned combinations anyway, and it’s possible some of the other RVA23 extensions (Zvbb, Zicond, etc.) have a substantial impact too.
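
    If you’re curious whether a given system would even benefit, the kernel reports the implemented extensions. A small sketch that looks for V in the isa line of /proc/cpuinfo on riscv64 Linux (the parsing is simplified and assumes the usual rv64… format):

    ```python
    # Check whether this RISC-V system reports the vector (V) extension.
    # /proc/cpuinfo on riscv64 has a line like "isa : rv64imafdcv_zicsr_...";
    # single-letter extensions sit before the first underscore.
    def has_vector() -> bool:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("isa"):
                    isa = line.split(":", 1)[1].strip()
                    single_letter = isa.split("_", 1)[0][4:]  # drop "rv64"
                    return "v" in single_letter
        return False

    print("V extension:", has_vector())
    ```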

    Hardware vendors want to showcase their systems at their best, so I expect Ubuntu’s aim is to have RVA23 builds ready before RVA23 hardware arrives, making it the distro of choice for future hardware even at the cost of abandoning all existing RISC-V users. Imo it would’ve been better to ship separate RV64GC and RVA23 builds, but I guess they just don’t care enough about existing RISC-V users to maintain two.


  • zarenki@lemmy.ml to Linux@lemmy.ml · Fan of Flatpaks ...or Not? · 9 months ago

    The parent comment mentions working on security for a paid OS, so consider the perspective of users of something like RHEL and SUSE: supply-chain “paranoia” absolutely does matter a lot to enterprise users, many of whom are bound by contract to specific security standards (especially when governments are involved). I noted that concerns at that level are rather meaningless to home users.

    On a personal system, people generally do whatever they need to in order to get the software they want. Those things I listed are very common options for installing software outside of your distro’s repos, and all of them offer less inherent vetting than Flathub while also tampering with your system more substantially. Though most of them at least use system libraries.

    “they added ‘bash scripts you find online’, which are only a problem if you don’t look them over or cannot understand them”

    I would honestly expect that the vast majority of people who see installation steps including curl [...] | sh (so common that even reputable projects like cargo/rust recommend it) simply run the command as-is without checking the downloaded script, and do the same even when it’s sudo sh. That can still be more or less fine if you trust the vendor/host, its SSL certificate, and your ability to type or copy the domain without error. Even reading the script might not get you far if it happens to be a self-extracting one, unless you also check its payload.
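
    For what it’s worth, the marginally safer habit is to split the download from the execution so there’s at least a chance to read the script first. A sketch of that pattern in Python (the URL is a placeholder), with the same self-extracting caveat applying:

    ```python
    # "Download, inspect, then run" instead of curl | sh.
    # The URL is a placeholder; this shows the pattern, nothing more.
    import subprocess
    import urllib.request

    url = "https://example.com/install.sh"
    path = "/tmp/install.sh"

    urllib.request.urlretrieve(url, path)

    # Force a manual read before anything executes. As noted above,
    # a self-extracting script can still hide its payload from review.
    subprocess.run(["less", path], check=True)
    if input("run it? [y/N] ").strip().lower() == "y":
        subprocess.run(["sh", path], check=True)
    ```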


  • zarenki@lemmy.ml to Linux@lemmy.ml · Fan of Flatpaks ...or Not? · 9 months ago

    A few reasons security people might hesitate on Flatpak:

    • In comparison to sticking with the strictly vetted repos of big distros like Debian, RHEL, etc., using Flathub and other sources normalizes installing software that isn’t as strongly vetted. Flathub does at least have a review process, but by necessity it’s fairly lax.
    • Bundling libraries with an application means you can still be vulnerable to an exploit in some library even after your OS vendor has rolled out the fix, because a Flatpak app may still load the vulnerable version. The freedesktop runtimes at least help limit the scope of this issue but don’t eliminate it.
    • The sandboxing isn’t as secure as many users might expect, which can further encourage installing untrusted software.

    From a typical home user’s perspective this probably seems like nothing; in terms of security you’re still usually better off with Flatpak than installing random AUR packages, adding random PPAs, using AppImages, installing a bunch of Steam games, blindly building an unfamiliar project cloned from GitHub, or running bash scripts you find online. But in many contexts none of that is acceptable.