Ah, wasn’t aware. Will have to look into it more.
In the case of Steam and my web browser, containerization means I can control their access permissions via Flatseal. That adds another layer of security, since they’re both web-facing applications, and it’s easier than setting up a VM to run them.
I prefer certain apps to be containerized on my system. It’s case-by-case for me: I keep Steam containerized, my web browser containerized, etc.
Yeah, Flatseal should come stock with Flatpak IMO. You’ll have to configure many apps to get them to play nice with your system.
I use apt and Flatpak. They’re both good at what they do.
Yeah, that’s the nature of mass adoption: people you don’t like start coming into the community.
This always happens. A community stays intact so long as it stays niche, but once it reaches mass adoption, that community splinters into smaller communities.
The ‘Linux community’ will eventually cease to exist and will be replaced by communities that use Linux in specific ways.
You don’t need to worry about it. His new content is that of a completely different person, and it’s been more than half a decade since his last controversy.
Again, I genuinely feel that Pewds’ content has changed dramatically since his early days, for the better. He seems much more considerate, thoughtful with what he says, and mellower in general. I don’t think anyone should feel bad at all for liking his recent content.
But you clicked this so…
I’m gonna stick to things Pewds drew controversy over that were actually on video. I’m not making accusations, just laying out why he’s a controversial figure in an unbiased fashion.
PewDiePie was a very edgy YouTuber for most of his career, usually laying out unfiltered hot takes. They were all pretty standard edgy-guy things. Think iDubbbz in terms of flavor.
I will say that I personally noticed a major change in his content after his slogan "Subscribe to PewDiePie" was used in the Christchurch mosque shooting in 2019. I think he realized just how toxic portions of his audience were from that tragedy, and ended up changing direction because of that. That was also mostly not in his control.
There were something like half a dozen controversies directly caused by his own actions, such as when he paid two people on Fiverr who didn’t speak English to hold up a sign reading “death to all Jews”, and calling it a social experiment to show how people will do anything for money.
Another time he faced backlash for saying the N word during a livestream.
He also said he joined ISIS as a joke at one point.
Most of his controversies are like this: he does some edgy, inflammatory shit, realizes he went too far, apologizes, and then everything goes back to the way it was before.
He started YouTube in 2006, then again in 2010 under PewDiePie. It’s been more than a decade either way. It’s been 6 years since his last controversy. I wouldn’t worry about how people judge him now.
Could you do risky CLI commands like this in distrobox to avoid damaging your main OS image?
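For context, here’s a rough sketch of the throwaway-container workflow I’m imagining, scripted through Python’s subprocess purely for illustration. The container name, base image, and the ‘risky’ command are all placeholders, and it assumes distrobox’s stock create/enter/rm subcommands; note that distrobox shares your home directory with the host by default, so this protects the OS image rather than your personal files.

```python
import subprocess

# Placeholder names: a throwaway container and an arbitrary base image.
BOX = "scratchbox"
IMAGE = "ubuntu:24.04"

# Create the disposable container.
subprocess.run(["distrobox", "create", "--name", BOX, "--image", IMAGE], check=True)

# Run the risky command inside the container instead of on the host.
subprocess.run(
    ["distrobox", "enter", BOX, "--", "bash", "-c", "echo 'risky command goes here'"],
    check=True,
)

# Tear the container down when finished (--force also stops it if still running).
subprocess.run(["distrobox", "rm", "--force", BOX], check=True)
```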
That’s a view from the perspective of utility, yeah. The downvotes here are likely also from an ethics standpoint, since most LLMs are currently trained on other people’s work without permission, all while using large amounts of water for cooling and energy from a grid that still leans heavily on fossil fuels. That’s also not mentioning the physical and emotional labor that many untrained workers are required to do when sifting through these LLMs’ datasets, removing unsavory content for extremely low wages.
A smaller, more specialized LLM could likely provide this same functionality with much less training, on a more focused dataset (probably only a couple of terabytes at its largest, I’d wager), and would likely be small enough to run on most users’ computers after training. That’d be the more ethical version of this use case.
I think it’s important to use the more specific term here: LLM. We’ve been building AI automation for ourselves for years; the difference is that software vendors are now adding LLMs to the mix.
I’ve heard this argument before in other contexts. Ghidra, for example, just had an LLM pipeline rigged up by LaurieWired to take care of the tedious process of renaming functions during reverse engineering. It isn’t the end of the analysis process; it just takes out a large amount of busywork. I don’t know about the use case you described, but it sounds similar. It also seems feasible that you could train such a system yourself (given you have enough reverse-engineered programs) and then run it locally to do this kind of work, which is a far cry from the disturbingly large LLMs that guzzle massive amounts of data and energy to train and run.
EDIT: To be clear, because LaurieWired’s pipeline still relies on mainstream LLMs, which are unethically trained, her pipeline is also unethical. It has the potential to be ethical, but currently isn’t.
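To make the ‘run it locally’ idea a bit more concrete, here’s a minimal sketch of that kind of renaming helper, assuming a small model served on your own machine through something like Ollama’s /api/generate endpoint on its default port; the model name, prompt, and sample input are placeholders, and this is not LaurieWired’s actual pipeline.

```python
import json
import urllib.request

# Ask a locally served model to propose a name for one decompiled function.
# Assumes an Ollama-compatible server on localhost:11434; the model name and
# prompt wording are illustrative, not taken from any real pipeline.
def suggest_function_name(decompiled_c: str, model: str = "codellama:7b") -> str:
    payload = {
        "model": model,
        "prompt": (
            "Suggest a concise snake_case name for this decompiled function. "
            "Reply with the name only.\n\n" + decompiled_c
        ),
        "stream": False,  # request a single JSON response instead of a stream
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

# Placeholder Ghidra-style decompiler output, just for demonstration.
print(suggest_function_name("int FUN_00401000(char *p) { return strlen(p) * 2; }"))
```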
And we can see by the ratio that this was in fact a hot take.
It feels like you’re making a semantic argument to downplay how tight a grip these programs have on their respective industry markets.
If you are only ever considered for a job if you have Photoshop experience, and that is the normal treatment across the majority of the industry, that’s a standard that the industry is now holding you to - an industry standard if you will. It does not need to be backed by a governing body for it to still count.
My current understanding is that you will not get a job at a major CGI company by knowing Blender (though the film ‘Flow’ shows that might change going forward). You have to know software like Houdini, 3ds Max, Maya, etc., if you want to be taken seriously.
That entire solution falls apart the moment the paradigm is patented by the vendor, who immediately sues any competing software with UI elements even vaguely similar to theirs. This has been going on for decades, and one of three things usually happens: the competitor gets bought up, gets sued out of existence, or has to keep its UI different enough that there is little to no bleedover between the user bases (and usually starves to death from too little revenue).
There is a practice where software companies either provide their software to schools and colleges for free or pay the schools to use it. Students then learn that software, and that software’s paradigm alone, which essentially forces them to keep using it going forward because of how difficult it is to shift to another program with a different paradigm. This is vendor lock-in: the vendor locks you into their software.
This leads to all future workers being trained in that software, so of course businesses opt to use it instead of retraining employees in another. It undercuts the idea of an ‘industry standard’: the name suggests the software is used in the industry because it’s better than other software, but in reality it’s standard simply because of lock-in.
This is how Windows cornered the operating system market - by partnering with vendors to ship their systems with Windows pre-installed.
I was specifically trying not to sound conspiratorial. I’m pointing out that it’s a matter of having already learned a paradigm vs. having to learn a new one.
Devs have already gotten used to the CLI and very rarely build full P&CI suites because of it. Even when the original dev only made a CLI for an app and someone later came back and added a P&CI, those interfaces are still fairly barebones. That’s a mix of devs knowing how capable the CLI can be and the fact that it’s all open-source volunteer work.
Layman users of P&CI-focused DEs actively avoid the CLI so they don’t have to learn it. That means most Linux apps are something to be avoided for most Windows users, which makes the OS largely unusable for them.
To be clear, when I talk about P&CI-focused (point-and-click interface-focused) DEs, like Windows and macOS, I mean that if you cannot perform an action with the P&CI, then that action essentially does not exist for the average user. Contrast that with Linux DEs, where it’s quite common to have to directly edit configs or use the CLI to perform various actions.
As a veteran user, I’m not bothered by the CLI. I do understand the frustration of those who want some Linux DEs to become as much of a default as Windows and macOS, because the lack of P&CI does damage that effort.
This isn’t every app on Linux, obviously, but the ones that are best at keeping the P&CI full-fledged are the apps that develop for Windows and macOS as well as Linux - Blender, LibreOffice, Logseq, Godot, etc. The most common offenders are the utility apps, such as those that handle drivers, sound systems, DE functions, etc.
It’s not that they’re mad others use the CLI; it’s that they’re mad Linux devs regularly skip building P&CI features, opting instead for CLI functionality with no P&CI equivalent.
It’s kind of obvious why - the CLI is already very flexible right out of the box, and it takes much less work to add functionality there than to build it into a P&CI.
At the same time, I understand the P&CI folks’ frustration, since one of the biggest obstacles to getting more people on Linux is the lack of P&CI solutions, along with the fact that many actions on Linux are explained solely via the CLI.
CLI folks have invested the time to use terminals effectively and view overuse of the P&CI as beneath them, and P&CI folks have no interest in dumping time into learning CLI to do something they could do on Windows with P&CI.
I love this idea of the asshole getting transferred to a new project, removed from said project shortly after, only to be put back on the project after another short period.
Haha! Take that, asshole!
A polyglot is anyone who speaks way more languages than you feel comfortable with. /j