

yeah at this point it’s just skill issue
grow a plant, hug your dog, lift heavy, eat healthy, be a nerd, play a game and help each other out
I feel that, I just wanted to set your expectations. I prefer and will continue to use CalyxOS, but I have no expectation that they will deliver the same level of protections/mitigations on the OS side as Graphene, given that their project scope is different.
CalyxOS aims for a private, yet simple (attainable) Android experience, and I align more closely with their ideology on having a FOSS replacement for Google Play Services in MicroG.
I suppose one thing you could leverage is work profiles on Calyx to “jail” apps you do not trust, though I’m not sure that meaningfully builds upon Android 15’s own application sandboxing.
Perhaps as a long term goal you could look into making a custom fork of CalyxOS for your device and incorporating parts of Graphene’s hardening but this will be a lot of work.
As a CalyxOS user, if your key concerns are security and device hardening, I’d recommend you just make a Seedvault backup and switch to Graphene.
The two projects have somewhat different scopes and I don’t think you’ll achieve the same degree of software security on Calyx.
I’m aligned on this. Server side ought to be the way.
Also fuck cheaters.
Yeah like, as a keen advocate for Linux desktop use, this is a wildly dishonest take / headline to run with.
Yup. It’s a cat-and-mouse game until server-side becomes economical enough to broadly deploy (computational & network constraints).
by partially open source are you referring to Darwin or are there other system components which this applies to?
I don’t believe FPO was expanded to >2 displays, though having identical panels would eliminate VBI complexities. Will be interested in understanding how this behaves with RDNA 4.
Huh interesting. Are these displays all the exact same model?
There’s a relatively new feature in the AMD display abstraction layer called FreeSync Power Optimization (FPO). This feature leverages panel VRR to help mclk idle low at the desktop. It was introduced for single-display use with RDNA 2, and expanded to dual display along with RDNA 3’s MALL advancements. I’m not sure if this is expanded further with RDNA 4 but I can try to find out.
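If you want to check whether FPO is actually letting your memory clock drop, a quick sketch like this (assuming amdgpu and that the dGPU is card0; adjust the index otherwise) polls pp_dpm_mclk and prints the currently active state, which the kernel marks with a '*':

```python
# Rough sketch, assuming amdgpu with the dGPU exposed as card0.
# Polls pp_dpm_mclk and prints whichever memory clock state is currently
# active (the kernel marks it with '*').
import time

MCLK_PATH = "/sys/class/drm/card0/device/pp_dpm_mclk"  # adjust card index if needed

def active_mclk_state(path: str = MCLK_PATH) -> str:
    with open(path) as f:
        for line in f:
            if line.strip().endswith("*"):  # active DPM state is starred
                return line.strip()
    return "unknown"

if __name__ == "__main__":
    for _ in range(5):
        print(active_mclk_state())
        time.sleep(2)
```

At an idle desktop with FPO engaged, you’d hope to see the lowest state starred most of the time.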
This isn’t true for Vega 10 and 20 due to their use of HBM2. In general, what you’re describing is a comparative weakness of GDDR as a technology. I don’t think there’s anything to suggest an inherent issue with idle power on older-gen ASICs at >60 Hz, save for the typical limitations with VBI compatibility in an array of panels or display bandwidth thresholds. In the case of VBI compatibility issues, modifying EDIDs can indeed help.
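On the VBI point, if you want to compare panels before resorting to EDID edits, a rough sketch like this (assuming Linux with readable /sys/class/drm/<connector>/edid nodes, and only looking at the first detailed timing descriptor, which is usually the preferred mode) pulls the vertical active/blanking lines out of each connected display’s EDID:

```python
# Rough sketch, assuming a Linux box with readable
# /sys/class/drm/card*-*/edid nodes. Reads each connected panel's EDID and
# prints the vertical active/blanking lines from the first detailed timing
# descriptor (usually the preferred mode), as a quick VBI comparison.
import glob

def first_dtd_vertical(edid: bytes):
    """Return (v_active, v_blank) in lines from the first DTD, or None."""
    if len(edid) < 72 or edid[0] != 0x00 or edid[1] != 0xFF:
        return None
    dtd = edid[54:72]
    if int.from_bytes(dtd[0:2], "little") == 0:  # not a timing descriptor
        return None
    v_active = dtd[5] | ((dtd[7] & 0xF0) << 4)
    v_blank = dtd[6] | ((dtd[7] & 0x0F) << 8)
    return v_active, v_blank

for path in sorted(glob.glob("/sys/class/drm/card*-*/edid")):
    with open(path, "rb") as f:
        data = f.read()
    if not data:  # connector with nothing attached
        continue
    timing = first_dtd_vertical(data)
    if timing:
        connector = path.split("/")[-2]
        print(f"{connector}: v_active={timing[0]} lines, v_blank={timing[1]} lines")
```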
That’s kind of curious. I don’t think 3D_FULLSCREEN should inherently determine idle mclk behaviour in and of itself. Is this just a single 1440p display at 120Hz? Is VRR enabled?
huh, I generally expect Vega 10 based GPUs to idle at ~3 W TGP (not inclusive of other board power losses like VRM). Can you tell us what your display setup is? Can you get a reading of your idle mclk using something like CoreCtrl?
Video playback will likely kick the ASIC out of idle. What’s your power use at true desktop idle?
Generally speaking, Vega 10 (56, 64) and Vega 20 (Radeon VII) are able to achieve decently low desktop idle power with varied display configs due to the memory technology they employ.
Can you tell us which distro and display config this is with?
I’m also in 3D_FULLSCREEN on NV21XT but my idle mclk is 96 MHz. TGP (not to be confused with TBP which is only communicated on RDNA 3 and up) is 6 watts at idle with my browsers open. GNOME 47 + Wayland, 2560 x 1440 @ 180Hz + 1920 x 1080 @ 60Hz, VRR enabled on both displays. This is with Fedora 41, kernel 6.13.6-200.fc41.x86_64
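If anyone wants to sanity-check those numbers themselves, here’s a rough sketch (assuming amdgpu with the dGPU exposed as card0) that reads the power sensor from the hwmon interface; keep in mind that on pre-RDNA 3 parts this is TGP rather than full board power:

```python
# Rough sketch, assuming amdgpu exposed as card0. Reads the GPU power sensor
# from the hwmon interface (power1_average on older ASICs, power1_input on
# some newer ones); values are reported in microwatts. On pre-RDNA 3 parts
# this is TGP, not full board power (TBP).
import glob
import os

def gpu_power_watts(card: str = "card0"):
    for hwmon in glob.glob(f"/sys/class/drm/{card}/device/hwmon/hwmon*"):
        for sensor in ("power1_average", "power1_input"):
            path = os.path.join(hwmon, sensor)
            if os.path.exists(path):
                with open(path) as f:
                    return int(f.read()) / 1_000_000  # microwatts -> watts
    return None

print(f"GPU power: {gpu_power_watts()} W")
```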
For context, the GPU index (0,1,2 etc) will depend on the number of video adapters you have in the system. If you have a CPU with integrated graphics, there’s a chance this will be registered as 0000 whereas the dGPU will be 0001.
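If you want to confirm which index maps to which adapter, a small sketch like this (Linux, PCI GPUs assumed) lists each DRM card alongside its PCI vendor/device IDs:

```python
# Rough sketch for Linux systems with PCI GPUs: lists each DRM card with its
# PCI vendor/device IDs so you can tell which index is the iGPU and which is
# the dGPU (vendor 0x1002 = AMD, 0x8086 = Intel, 0x10de = NVIDIA).
import glob
import os

for card in sorted(glob.glob("/sys/class/drm/card[0-9]")):
    dev = os.path.join(card, "device")
    try:
        with open(os.path.join(dev, "vendor")) as f:
            vendor = f.read().strip()
        with open(os.path.join(dev, "device")) as f:
            device = f.read().strip()
    except FileNotFoundError:
        continue  # non-PCI or virtual device
    print(f"{os.path.basename(card)}: vendor={vendor} device={device}")
```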
I mean, it runs everything I need. But what is mainstream gaming to everyone else? Is it fortnite? Call of duty? Destiny 2? Pubg? Valorant? GTA? Battlefield? (weirdly a lot of shooters), Apex? Siege?
May not matter to people like us, but they each command something to the effect of hundreds of thousands of concurrent players. Capable as Linux distros are for gaming (truly the best way to experience classic games), the anticheat situation is no less dire.
Idle power is determined by your display setup. Is your friend running a comparable arrangement to yours? Do you run a couple of high-res, high-refresh-rate displays?
I’m a little surprised you’re reaching the same idle draw as Vega 10 given its use of HBM2. Also worth noting that everything prior to RDNA 3 reported TGP instead of TBP. This was particularly annoying as it didn’t account for other board power losses (like VRM), and didn’t give monitoring software an accurate read :/
For perf deficits in real-time ray tracing with heavier effects (reflections, GI, AO), you may find some luck in leveraging AMDVLK instead of RADV. I don’t doubt the latter will catch up in good time.
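If you want to test that per-game rather than system-wide, a rough sketch like this forces a specific Vulkan ICD for a single command via VK_ICD_FILENAMES (the JSON paths below are the usual locations but vary by distro, so treat them as assumptions):

```python
# Rough sketch, assuming both RADV and AMDVLK are installed and their ICD
# JSONs sit in the usual /usr/share/vulkan/icd.d/ paths (these vary by
# distro, so treat them as assumptions). Launches a command with the chosen
# Vulkan driver forced via VK_ICD_FILENAMES.
import os
import subprocess
import sys

ICDS = {
    "radv": "/usr/share/vulkan/icd.d/radeon_icd.x86_64.json",
    "amdvlk": "/usr/share/vulkan/icd.d/amd_icd64.json",
}

def run_with_icd(driver: str, cmd: list) -> int:
    env = dict(os.environ, VK_ICD_FILENAMES=ICDS[driver])
    return subprocess.call(cmd, env=env)

if __name__ == "__main__":
    # e.g.  python run_icd.py amdvlk vulkaninfo --summary
    sys.exit(run_with_icd(sys.argv[1], sys.argv[2:]))
```

The same idea works as a Steam launch option, e.g. VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/amd_icd64.json %command%.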
Distros building from source for their own repos may also be exempt then?
tech can be tasty too :)
well, at least they provided some rationale for switching browsers. Still, it’s a good thing we have Bazzite.