

It doesn’t.
The game was under an exclusivity contract with Epic Games, but the developers were still allowed to sell copies on their own website. Now that the contract is up, the game can be sold on Steam. Granting players who bought the game from the website free Steam keys is a nice touch.
OCIS/OpenCloud can integrate with Collabora and OnlyOffice, but they don't currently have things like CalDAV, CardDAV, E2EE, Forms, Kanban boards, or the other extensible features installable as plugins in Nextcloud.
If you want a snappy, responsive cloud storage experience and don't particularly need those features integrated into the service, then OCIS or OpenCloud might be something to look into.
This is the same for Intel variant Framework boards.
For what it's worth, I do think OCIS is worth switching to if you don't make use of all the various apps Nextcloud offers. OCIS can hook into an online office provider, but it doesn't do much more than cloud storage as of right now.
That said, the difference in cloud storage and UX performance between Nextcloud/Owncloud and OCIS is night and day. If you're using an S3 provider as a storage backend, then you only need to back up the S3 objects and the small metadata volume the OCIS container needs in order to preserve file integrity.
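As a rough sketch of what the backup side can look like (the container name, backup path, and /var/lib/ocis metadata location are assumptions based on a typical Docker deployment, not something pulled from the OCIS docs; the S3 objects themselves are left to the provider's own versioning/replication):
#!/usr/bin/env bash
# Assumed setup: OCIS running in a container named "ocis" with its metadata under /var/lib/ocis
docker run --rm --volumes-from ocis -v "$PWD/backups":/backup alpine \
    tar czf "/backup/ocis-metadata-$(date +%F).tar.gz" /var/lib/ocis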
Another thing to note about OCIS: unlike Nextcloud, it provides no at-rest encryption module. If that's important to your use case, either stick with Nextcloud or be prepared to figure out how to roll your own.
I know that OCIS does intend to bring more features into the stack eventually (CalDAV, CardDAV, etc.). As it stands currently though, OCIS isn't the behemoth that Nextcloud/Owncloud are, and the architecture and maintenance are more straightforward overall.
As for open source: OCIS was released under Apache 2.0 and has remained so for its entire lifespan thus far. If you don't trust Owncloud over the drama that created Nextcloud, then I guess remain wary? Otherwise OCIS looks fine to use.
What hardware, audio interface, and sound server is in use for your 5.1 Surround setup?
I am under the presumption that the current state of the Intel Arc Alchemist GPUs would likely remain about the same under Mesa even if Intel dropped support today. Am I mistaken about the amount of continued driver effort Intel has been putting into the Mesa GPU drivers?
Obviously, if this is true, one should probably remain wary of the upcoming Battlemage GPUs.
A key list of compatible/incompatible components to look for:
The explanations for this are pretty long, but they are meant to be fairly exhaustive in order to catch most, if not all, of the pitfalls one could possibly encounter.
A big one is the choice between AMD, Intel, and NVidia. I am going to leave Intel out for compute as I know little about the state it is in. For desktop and gaming usage, go with AMD or Intel. NVidia is better than it used to be, but it still lags behind in proper Wayland support, and the lack of in-tree kernel drivers still makes it more cumbersome to install and update on many distros, whereas using an AMD or Intel GPU is fairly effortless.
For compute, NVidia is still the optimal choice for Blender, Resolve, and LLM workloads. That isn't to say that modern AMD cards don't work for these tasks. For Blender and Davinci Resolve, you can get them to use RDNA+ AMD cards through ROCm + HIP without requiring the proprietary AMD drivers. For Resolve especially there is some serious setup involved, but it is made easier through this flatpak for Resolve and this flatpak for the ROCm runtime. ML tasks depend on the software used. For instance, Pytorch has alternate builds that can use ROCm instead of CUDA. Tools depending on Pytorch will often have you change the Pytorch source, or you may have to manually patch in the ROCm Pytorch for the tool to work correctly on an AMD card.
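For example, swapping Pytorch over to its ROCm build is usually just a matter of installing from the ROCm wheel index instead of the default CUDA one. The exact ROCm version in the URL changes between releases, so treat this as a sketch and check pytorch.org for the current one:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0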
Additionally, I don't have performance benchmarks, but I would have to guess that all of these tasks aren't as performant as on closely equivalent NVidia hardware currently.
One area of hardware I don't see brought up much is NICs (including the ones on the motherboard). Not all NICs play as nicely as others. I will typically recommend getting Ethernet and wireless network interfaces from Intel and Qualcomm over others like Realtek, Broadcom, and Ralink/Mediatek. Many Realtek and Mediatek NICs are hit-or-miss, and a majority of the Broadcom NICs I have seen are just garbage. I have not tested the AMD+Mediatek collaboration Wi-Fi cards, so I can't say how well they work.
Bluetooth generally falls into this category as well. Bluetooth provided by a reputable PCIe/M.2 wireless card is often much more reliable than most of the Realtek, Broadcom, and Mediatek USB dongles.
This one isn't as much of a problem as it used to be. A lot of cards that worked but had many quirks under PulseAudio (mainly a wide variety of Realtek on-board chipsets) tend to work just fine with Pipewire. For external audio interfaces: if it is compliant with spec, it likely works just fine. Avoid those that require proprietary drivers to function.
Hard drives and SSDs are mostly fine. I would personally avoid generic cheap-quality SSDs and those manufactured by Samsung. A lot of SATA drives have various issues, though I haven't seen many new products from reputable companies actually releasing with broken behavior as documented by the kernel. If you wish to take a detailed look at the devices the kernel has restricted broken functionality on, here is the list.
Additionally, drives may be one component besides the motherboard where you might actually see firmware updates for the product. Many vendors only release EXE files for updating device firmware from Windows, but many of the nicer vendors actually publish to the LVFS. You can search whether a vendor/device has firmware supplied here.
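If a device is on the LVFS, applying updates from Linux is just a few fwupd commands:
fwupdmgr refresh         # pull the latest metadata from the LVFS
fwupdmgr get-updates     # list devices with pending firmware updates
fwupdmgr update          # download and apply them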
In particular, motherboards are included mainly because they have audio chipsets and network interfaces soldered and/or socketed to them. Like disks, motherboards may or may not have firmware updates available on the LVFS. However, most motherboard manufacturers allow updating the BIOS via a USB stick. Some laptops I have seen only publish EXE files to do so, but for most desktop boards one should always be able to update the motherboard BIOS fine from a Linux PC.
Some motherboards have quirky Secure Boot behavior that prevents them from working properly with Linux. Additionally, some boards (mostly on laptops again) have either broken or adjustable power state modes; those that are adjustable allow switching between Windows-specific and standard-compliant modes.
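If you want to check from within Linux whether Secure Boot is currently being enforced, mokutil can report it (assuming the mokutil package is installed, which most distros ship alongside shim):
mokutil --sb-state    # prints "SecureBoot enabled" or "SecureBoot disabled"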
Besides getting a Framework laptop ‘Chromebook edition’, I don’t think there is much you will find for modern boards supporting coreboot or libreboot.
For your use case, this doesn't really matter. Pretty much every modern x86 CPU will work fine on Linux. You only have to hunt for device support if you are running on ARM or RISC-V, since not every kernel supports every ARM or RISC-V CPU or SoC.
Obviously, one of the biggest factors for many new users switching to Linux is their existing peripherals that require proprietary software on Windows either missing functionality or not working at all on Linux. Some peripherals have been reverse engineered to work on Linux (see Piper, ckb-next, OpenRazer, StreamController, OpenRGB).
Some peripherals like printers may just not work on Linux or may even work better than they ever did on Windows. For problematic printers, there is a helpful megalist on ArchWiki.
For any other peripherals, it’s best to just do a quick search to see if anyone else has used it and if problems have occurred.
A couple things to check using a quick bash script:
#!/usr/bin/env bash
# Report battery wear from the kernel's sysfs battery interface
cd /sys/class/power_supply/BAT*/ || exit 1
echo "Charge cycles: $(cat cycle_count)"
# Note: some batteries expose energy_full/energy_full_design instead of charge_*
echo "Health: $(bc <<< "scale=3; ($(cat charge_full) / $(cat charge_full_design)) * 100")%"
That should print out the wear cycles the battery has endured and its reported capacity relative to design capacity. If your battery has fewer than 1000 cycles and the health it reports is less than 80%, it might be best to contact Framework for a warranty replacement, as the battery is likely defective.
For multi-monitor: use Wayland. As for 2.5Gbps Ethernet NICs, they never seem to perform properly on any system, but I presume you are referring to the subpar Realtek NICs not connecting at all? Depending on the distro, you likely won't have the driver and/or firmware package preinstalled to make it work.
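A quick way to check whether a driver actually got bound to the NIC (the package names in the comment are only examples and vary by distro and chipset):
lspci -nnk | grep -iA3 ethernet
# If no "Kernel driver in use:" line shows up for the 2.5GbE controller, you are likely
# missing the driver and/or firmware package (e.g. r8168-dkms or linux-firmware).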
As I understand it, this driver isn’t ready for personal use unless you don’t care about the contents of your btrfs partitions mounted on Windows.
I know ArchLinuxArm (a fork of the ArchLinux project) supports the Hisense C11. It does seem to be a fairly involved process, and it (potentially?) requires using external media rather than the onboard eMMC storage to boot a Linux system.
Your particular Chromebook contains the same SoC (Rockchip RK3288) as the Asus C201, which Debian has an install guide for. Once again, it's a fairly involved process, and this one isn't guaranteed to work if the C11 has quirks not present in the C201.
Just took a couple of minutes to install and set up the fork to try it out. Turns out there is a flatpak on Flathub under the ID dog.unix.cantata.Cantata that looks to be maintained directly by nullobsi. I'll have to see where rough edges show up, but this fork looks good thus far. A full port from Qt5 to Qt6 isn't a trivial amount of effort, so mad respect to everyone working on this ported version.
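If anyone else wants to give it a spin, it should just be the usual install (assuming the Flathub remote is already added):
flatpak install flathub dog.unix.cantata.Cantata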
The easiest way to run custom executables alongside Proton titles is going to be either SteamTinkerLaunch or my shim script.
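For anyone curious what that kind of shim boils down to, here is a minimal sketch (not the script linked above) that relies on Steam's %command% substitution in a game's launch options; the log path and pre-launch step are just placeholders:
#!/usr/bin/env bash
# Set the game's Steam launch options to:  /path/to/shim.sh %command%
# Steam substitutes %command% with the full command it would normally run,
# so this script wraps the game's launch.
# Hypothetical pre-launch step: start a helper, export env vars, etc.
echo "launching: $*" >> /tmp/proton-shim.log
# Hand control back to the original Proton/game command unchanged
exec "$@"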
The question that I have to ask: what category of CLI apps (or even some examples) exist that are too complex to maintain a few versions simultaneously as native packages but are not complex enough to just use an OCI container for them instead?
Both not possible and unnecessary on Wayland.
The flatpak documentation has a semi-relevant page on setting up a flatpak repo using GitLab Pages and GitLab's CI runners in a pipeline. Obviously, you'd need to swap GitLab Pages out for a webserver of your choice and port the CI logic over to Gitea Actions (ensuring your Gitea instance is set up for it).
A flatpak repo itself is little more than a web server plus a GPG key for checking the signatures of assembled packages. The docs recommend having the CI pipeline run less on-commit to the package repos and more along the lines of checking for available updates on an interval, though I imagine other setups might offer some flexibility in a fully controlled environment such as a selfhosted one.
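Stripped down, the CI job more or less boils down to something like this (the app manifest, key ID, and webserver target are placeholders, not anything from the docs' example pipeline):
flatpak-builder --force-clean --gpg-sign=KEYID --repo=repo builddir com.example.App.yaml
flatpak build-update-repo --generate-static-deltas --gpg-sign=KEYID repo
rsync -a repo/ user@host:/var/www/flatpak-repo/   # publish to whatever serves the repo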
As I am currently teaching myself maintainable selfhost setups using popular apps (admittedly with Kubernetes rather than something minimal in functionality like Docker Desktop), there is a lot of complexity involved in getting these services both functional and maintainable while also considering the security implications of various setups.
While I agree the concept of selfhosting is a good thing to advocate, I think the complexity and difficulty involved, not just to do it but to do it right, are going to be a straight cliff of a learning curve for those not already technically inclined in databases, networking, and filesystems/block storage.
Honestly, taking on the burden of being IT in exchange for a reasonable subscription fee for your efforts is a better way to go, especially if the setup allows for expanding your offerings to other members of a localized community.
Alongside many others, I agree with using QEMU through GUI frontends like virt-manager or GNOME Boxes, or even server-focused solutions like Cockpit with the VM plugin or Proxmox layered on top of your installation.
I just want to note a decent point against other solutions like VirtualBox or the VMWare products that work on Linux: these solutions that don't rely on QEMU almost certainly need the user to install out-of-tree kernel modules (which in some cases may also be proprietary). QEMU and its frontends don't need out-of-tree modules on a majority of distros and can work out of the box with all features (given that the host's BIOS configuration and hardware support them).
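A quick way to confirm the in-tree KVM stack is good to go (virt-host-validate ships with libvirt, so this assumes libvirt is installed):
lsmod | grep kvm          # should list kvm plus kvm_intel or kvm_amd
virt-host-validate qemu   # flags missing virtualization/IOMMU support in firmware settings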
Reading up on RDP, as it's something I don't use, I wondered just how encumbered RDP is compared to Spice and VNC. I wonder how third-party servers and clients are handling the patent-encumbered protocol.
Do third parties implement an older standard of the RDP protocol that isn’t as encumbered?