• 11 Posts
  • 193 Comments
Joined 2 years ago
Cake day: August 10th, 2023


  • Which means my distro-morphing idea should work in theory with OpenStack

    I also don’t recommend doing a manual install, though, as it’s extremely complex compared to automated deployment solutions like kolla-ansible (OpenStack in Docker containers), openstack-ansible (host OS/LXC containers), or openstack-helm/genestack/atmosphere (OpenStack on Kubernetes). They make the install much simpler and less time consuming, while still being highly configurable.


  • Personally, I think Proxmox is somewhat insecure too.

    Proxmox is unique among similar projects in that it’s much more hacky, and much of the stack is custom rather than standard. For example: for networking, they maintain a fork of the older ifupdown network configuration tooling, called ifupdown2, whereas similar projects, like OpenStack or Incus, use either the standard Linux kernel networking or a project called Open vSwitch.

    I think Proxmox is definitely secure enough, but I don’t know if I would really trust it for higher-value use cases, due to some of their stack being custom rather than standard and maintained by the wider community.

    If I end up wanting to run Proxmox, I’ll install Debian, distro-morph it to Kicksecure

    If you’re interested in deploying a hypervisor on top of an existing operating system, I recommend looking into Incus or OpenStack. They have packages/deployments that can be done on Debian or Red Hat distros, and I would argue that they are designed in a more secure manner (since they include multi-tenancy) than Proxmox. In addition to that, they also use standard tooling for networking: for example, both can use a Linux bridge (in-kernel networking) for networking operations, as sketched below.
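
    To illustrate the “standard tooling” point: a Linux bridge is created through the kernel’s normal netlink interface, the same mechanism the ip link command drives. Here’s a minimal sketch, assuming a Linux host with root privileges and the pyroute2 library (my choice of library for the example, not something either project mandates):

```python
# Minimal sketch: create a Linux bridge via netlink, the in-kernel interface
# that "ip link add br-demo type bridge" also drives. Assumes `pip install pyroute2`
# and root privileges; the bridge name "br-demo" is just a placeholder.
from pyroute2 import IPRoute

ipr = IPRoute()
ipr.link("add", ifname="br-demo", kind="bridge")   # ip link add br-demo type bridge
idx = ipr.link_lookup(ifname="br-demo")[0]
ipr.link("set", index=idx, state="up")             # ip link set br-demo up
ipr.close()
```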

    I would trust OpenStack the most when it comes to security, because it is designed to be used as a public cloud (like having your own AWS), and it is deployed in the real world with components publicly accessible.



  • This is moving the goalposts. You went from “ssh is not fine to expose” to “VPNs add security”. While the second is true, it’s not what was being argued.

    Never expose your SSH port on the public web,

    Linux was designed as a multi-user system. My college, Cal State Northridge, has an SSH server you can connect to and put your site up. Many colleges continue to have a similar setup; by putting files in your home directory, you can have a website at no cost.

    There are plenty of use cases that involve exposing SSH to the public internet.

    And when it comes to raw vulnerabilities, SSH has had vastly fewer than something like Apache httpd, which powers WordPress sites everywhere but has had many path traversal and RCE vulns over the years.


  • Firstly, Xen is considered secure by Qubes — but that’s mainly the security of the hypervisor and virtualization system itself. They make a very compelling argument that escaping a Xen-based virtual machine is going to be more difficult than escaping a KVM virtual machine.

    But threat model matters a lot. Qubes aims to be the most secure OS ever, for use cases like high profile journalists or other people who absolutely need security, because they will literally get killed without it.

    Amazon moved to KVM because, despite the security trade-offs, it’s “good enough” for their use case, and KVM is easier to manage because it’s in the Linux kernel itself, meaning you get it if you install Linux on a machine.
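
    As a concrete illustration of “you get it if you install Linux”: checking whether KVM is usable on a host mostly comes down to whether the /dev/kvm device exists and is accessible. A minimal sketch, assuming a Linux box:

```python
# Minimal sketch: KVM ships with the mainline kernel, so "is KVM available?"
# is essentially "does /dev/kvm exist and can this user open it?"
import os

kvm_ok = os.path.exists("/dev/kvm") and os.access("/dev/kvm", os.R_OK | os.W_OK)
print("KVM available:", kvm_ok)
```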

    In addition to that, security is about more than just the hypervisor. You noted that Proxmox is Debian, and XCP-NG is CentOS or a RHEL rebuild similar to Rocky/Alma, I think. I’ll get to this later.

    Xen (and by extension XCP-NG) was better known for security whilst KVM (and thus Proxmox)

    I did some research on this and was planning to make a blog post, but I never got around to it. I still have the draft saved, though.

    The papers (name, summary, source, and notes):

    • “Performance Evaluation and Comparison of Hypervisors in a Multi-Cloud Environment” (springer.com, html): compares WSL (kind of Hyper-V), VirtualBox, and VMware Workstation. Not an honest comparison, since WSL is likely using inferior drivers for filesystem access to promote integration with the host.
    • “Performance Overhead Among Three Hypervisors: An Experimental Study using Hadoop Benchmarks” (pdf): compares Xen, KVM, and an unnamed commercial hypervisor, simply referred to as CVM.
    • “Hypervisors Comparison and Their Performance Testing” (2018) (springer.com, html): compares Hyper-V, XenServer, and vSphere.
    • “Performance comparison between hypervisor- and container-based virtualizations for cloud users” (2017) (ieee, html): compares Xen, native, and Docker. Docker and native have negligible performance differences.
    • “Hypervisors vs. Lightweight Virtualization: A Performance Comparison” (2015) (ieee, html): Docker vs LXC vs native vs KVM. Containers have near-identical performance; KVM is only slightly slower.
    • “A component-based performance comparison of four hypervisors” (2015) (ieee, html): Hyper-V vs KVM vs vSphere vs Xen.
    • “Virtualization Costs: Benchmarking Containers and Virtual Machines Against Bare-Metal” (2021) (springer, html): VMware Workstation vs KVM vs Xen. The most rigorous and in-depth on the list; note that Workstation, not ESXi, is tested.

    The short version is: it depends, and they can fluctuate slightly on certain tasks, but they are mostly the same in performance.

    default PROXMOX and XCP-NG installations.

    What do you mean by hardening? Are you talking about hardening the management operating system (Proxmox’s Debian or XCP’s RHEL-like base), or the hypervisor itself?

    I agree with the other poster about CIS hardening and generally hardening the base operating system used. But I will note that XCP-NG is designed more as an “appliance” and you’re not really supposed to touch it. I wouldn’t be surprised if it’s immutable nowadays.

    For the hypervisor itself, it depends on how secure you want things, but I’ve heard that at Microsoft Azure datacenters, they disable hyperthreading because it becomes a security risk. In fact, some of the Spectre/Meltdown-class side-channel vulnerabilities can be mitigated by disabling hyperthreading. Of course, there are other ways to mitigate those vulnerabilities, but by disabling hyperthreading, you can eliminate that entire class of cross-thread attacks — at the cost of performance.
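
    If you want to check or flip that on a given host, recent Linux kernels expose a runtime SMT switch in sysfs. A minimal sketch (my example, assuming a Linux management host with the standard /sys/devices/system/cpu/smt interface; changing it requires root):

```python
# Minimal sketch: check, and optionally disable, SMT/hyperthreading via the
# kernel's sysfs interface. Equivalent to `cat /sys/devices/system/cpu/smt/control`
# and `echo off > /sys/devices/system/cpu/smt/control` (the latter needs root).
from pathlib import Path

SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")

def smt_status() -> str:
    return SMT_CONTROL.read_text().strip()   # "on", "off", "forceoff", or "notsupported"

def disable_smt() -> None:
    SMT_CONTROL.write_text("off")            # sibling hyperthreads go offline immediately

print("SMT is currently:", smt_status())
```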


  • Now, I don’t write code. So I can’t really tell you if this is the truth or not — but:

    I’ve heard from software developers on the internet that OpenCL is much more difficult and less accessible to write than CUDA code. CUDA is easier to write, and thus gets picked up and used by more developers.

    In addition to that, someone in this thread mentions CUDA “sometimes” having better performance, but I don’t think it’s only sometimes. I think that due to the existence of the tensor cores (which are really good at neural nets and matrix multiplication), CUDA has vastly better performance when taking advantage of those hardware features.

    Dedicated matrix units like tensor cores are not Nvidia-specific, but Nvidia is the furthest ahead: they have the most of them in their GPUs, and, probably most importantly, CUDA only supports Nvidia hardware, and therefore, by extension, their tensor cores.
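
    A rough way to see that gap in practice (this is my own sketch, assuming PyTorch and an Nvidia GPU from the Volta generation or newer; fp16 matmuls can be dispatched to tensor cores, while plain fp32 ones generally are not unless TF32 is enabled):

```python
# Rough sketch: time large matrix multiplies in fp32 vs fp16 on the GPU.
# On tensor-core hardware the fp16 case is typically several times faster,
# which is the gap CUDA-based frameworks exploit.
import time
import torch

def avg_matmul_seconds(dtype, n=4096, iters=20):
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return (time.time() - start) / iters

if torch.cuda.is_available():
    print("fp32:", avg_matmul_seconds(torch.float32))
    print("fp16:", avg_matmul_seconds(torch.float16))
```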

    There are alternative projects, like how Leela Chess Zero mentions TensorFlow for Google’s Tensor Processing Units, but those aren’t anywhere near as popular, due to performance and software support.



  • I despise the way Canonical pretends Discourse forum posts by their team members* are documentation.

    I’ve noticed they have been a bit better lately and have migrated many of the posts to their documentation, but it seems they are doing it again.

    As this is developed, we will update this post to link to the new documentation and feature release notes.

    Pro tip: You could have just made the documentation directly, with the content of this post. Or maybe a blog post. But please stop with the forum posts. They are very confusing for people not used to these… unique locations.

    *Not that people can easily find this out, since they don’t give any indication that the forum post is anything other than just another post by a rando. Actually, I’m just guessing here based on the quoted reply; for all I know, this could be a post by someone unrelated to Canonical. The account is 3 months old, and the post itself is identical to a regular forum post from a regular forum member…



  • This is so horrifically wrong, I don’t even know where to start.

    The short version is that phone and computer makers aren’t stupid, and they will kill things or shut down when overheating happens. If you were a phone maker, why tf would you allow someone to fry their own phone?
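
    For what it’s worth, this protection isn’t hidden either; on Linux the kernel exposes its thermal zones and the trip points at which it throttles or powers off. A minimal sketch for peeking at them (my example, assuming a Linux machine with the standard /sys/class/thermal interface):

```python
# Minimal sketch: list the kernel's thermal zones and their trip points,
# i.e. the temperatures at which Linux throttles ("passive") or powers
# off ("critical"). Values in sysfs are millidegrees Celsius.
from pathlib import Path

for zone in sorted(Path("/sys/class/thermal").glob("thermal_zone*")):
    kind = (zone / "type").read_text().strip()
    temp_c = int((zone / "temp").read_text()) / 1000
    print(f"{zone.name} ({kind}): {temp_c:.1f} C")
    for trip_type in sorted(zone.glob("trip_point_*_type")):
        n = trip_type.name.split("_")[2]
        trip_c = int((zone / f"trip_point_{n}_temp").read_text()) / 1000
        print(f"  trip {n}: {trip_type.read_text().strip()} at {trip_c:.1f} C")
```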

    My laptop has shut itself off when I was trying to compile code while playing video games and watching Twitch. My Android phone has killed apps when I try to do too much as well.


  • I don’t see anything about Turing completeness or programmatic capabilities in their GitHub. Any language that doesn’t have programmatic abilities will inevitably get them hacked on when someone needs them, like what happened to YAML a bunch of times for a bunch of different software. This is one of people’s many frustrations with YAML: the fact that doing a loop, an if statement, or templating is different for every single piece of software that uses YAML. Even within Kubernetes, there exist different ways to do templates.
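
    To make the “hacked on” part concrete, here’s a small sketch of the usual pattern (my example, assuming Python with jinja2 and pyyaml installed; Ansible uses this same Jinja2 engine, while Helm does the equivalent job with Go templates):

```python
# Small sketch: the loop lives in a templating layer (Jinja2 here), not in
# YAML itself. Each tool picks its own layer, which is why loops and ifs
# look different in Ansible, Helm, and friends.
import yaml
from jinja2 import Template

template = Template("""
servers:
{% for name in names %}
  - host: {{ name }}
    port: 22
{% endfor %}
""")

rendered = template.render(names=["web1", "web2"])
print(yaml.safe_load(rendered))   # YAML only ever sees the already-expanded list
```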

    I would much rather see the language consider those things first than see it repeat one of the biggest mistakes of YAML. This is why I am more eager for things like Nickel, or even Nix, as a configuration language, and am skeptical of any new standard that doesn’t have those features.


  • See also: noyaml.com

    I personally like YAML, though. Although I won’t deny it can be hellish to write without a linter, it’s just like any other language, with tab autocomplete and warnings for sus things, if you have the right software set up.
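
    As an example of the kind of “sus thing” a linter catches for you (my example, using PyYAML, which follows the YAML 1.1 scalar rules behind the famous “Norway problem”):

```python
# The classic "Norway problem": in YAML 1.1, unquoted no/yes/on/off are
# booleans, so an innocent-looking value silently changes type.
# Assumes `pip install pyyaml`.
import yaml

print(yaml.safe_load("country: no"))     # {'country': False}
print(yaml.safe_load("country: 'no'"))   # {'country': 'no'}
```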

    I use the Ansible and Kubernetes VSCode extensions, and I really like them both. With the Kubernetes one, you can just start typing the name of the resource you want to create, press tab, and boom, a template is created.

    I would much rather see something like Nix be the norm, but I find Nix very frustrating to edit because the language servers for it are nowhere near as developed.







  • also as a bonus question, why does every IDE seem to require you to configure every single option before it can run code

    What IDEs have you tried?

    Kate (and VSCode) aren’t really IDEs; they’re more like extremely extensible text editors. You can turn them into IDEs, but they don’t come like that out of the box.

    On the other hand, actual IDEs often have the built-in capability to install and manage the programming-language-related software.





  • Maybe not some obscure ones, but here are some lesser known ones:

    Talos Linux. It’s an immutable operating system designed specifically to deploy Kubernetes.

    OpenSUSE Harvester. Think Proxmox, but instead of VMs and LXC containers, it’s VMs and Kubernetes.

    XCP-NG is a RHEL-based distro designed for managing Linux virtual machines using the Xen hypervisor, as opposed to KVM. Think Proxmox, but RHEL and Xen (also no LXC). However, it does not come with a web UI out of the box; you have to deploy that yourself. Technically, XCP-NG is a Xen distribution, since Xen itself is just a bare hypervisor kernel that runs underneath the main distro, but the primary management virtual machine (dom0) is RHEL-based and runs Linux.

    Speaking of Proxmox, Proxmox is technically a Linux distro.

    SnowflakeOS is a project that aims to bring a GUI focused experience to NixOS.

    TurnkeyLinux (the site is loading very, very slowly for me right now) is not a single distribution, but rather a set of Debian-based distributions designed to be turnkey appliance virtual machines that contain and host a specific app. To deploy the app, all you have to do is set up the virtual machine.

    Now, here are some not-linux, but interesting distros:

    SmartOS. It’s illumos-based; they ported KVM to illumos, and it can also use Linux syscall translation (similar to Wine) to run Linux apps in containers as well. There is also bhyve. It’s a very interesting hypervisor platform.

    OmniOS is similar: bhyve, KVM, and Linux syscall translation in containers.