

Hey, the design specs never said the program shouldn’t blast out an air raid siren at full volume every time the user clicks a button. Can’t be a bug, must be user error.
Any service you use passkeys with instead of passwords won’t put you in another leaked password database. The public key just needs to be invalidated and you can move on with your life.
Does it though? Is there anything wrong with your public key being, um, public? All anyone can do with it is verify who you are (or technically encrypt things that only you can read, not that passkeys are used in this way).
Passwords can be secure when the end user picks a strong one. But that is the biggest problem with them: the end user. People don’t pick good passwords, and decades of experience have shown us the general public is bad at passwords.
Passkeys are not biometrics. They are much simpler. In a very simple way you can think of them as a secure, long, random password that is stored on your device, generated per device, and never sent over the wire to the other side (it is public/private key cryptography under the hood).
The passkey on your device can be stored in an encrypted vault or even secure hardware that requires a pin/password or key to unlock.
They do not get rid of multifactor codes and can be used alongside them. But by protecting them locally you still have two factors just to access them: the hardware/vault that contains them, and the PIN/password/biometric that unlocks the vault. And that is in addition to any server-side multifactor systems.
But even without all that you still gain massive benefits over passwords, as it stops cross-site compromises when one site gets its password database leaked, and it stops brute-forcing access to systems by guessing the weak passwords most people use.
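Roughly, the flow looks like this. A minimal sketch using an Ed25519 key pair via Python’s cryptography package; real passkeys use the WebAuthn protocol with more metadata, and all the names here are made up for illustration:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature
import os

# Registration: the private key is generated on (and never leaves) your device.
# Only the public key is sent to the server.
device_private_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_private_key.public_key()

# Login: the server sends a random challenge, the device signs it,
# and the server verifies the signature with the stored public key.
challenge = os.urandom(32)                       # generated by the server
signature = device_private_key.sign(challenge)   # produced on the device

try:
    server_stored_public_key.verify(signature, challenge)
    print("login ok")
except InvalidSignature:
    print("login rejected")
```

A leak of the server’s copy of the public key tells an attacker nothing useful: they still cannot produce valid signatures, which is why a breached database is not the disaster it is with passwords.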
By far the most important thing is consistency
This is not true. The most important thing is correctness: the code should do what you expect/want it to do. This is followed closely by maintainability: the code should be easy to read and modify. These are the two most important aspects, and I believe all other rules or methodologies out there are in service of these two things - normally the maintainability side, as correctness is not that hard to achieve with any system of rules out there.
You must resist the urge to make your little corner of the code base nicer than the rest of it.
Ugh. I really don’t like these words. I agree with their sentiment, to a degree, but they make it sound like you should not try to improve anything at all - just leave it as it is and write your new code in the same old crappy way it always has been. Which is terrible advice. But I get what they are trying to say: you should not jump into an area swinging a wrecking ball around, trying to make the code as locally nice as possible at the expense of the rest of the code base and the development practices around it.
In reality there is a middle ground. You should strive to make your corner of the code base as nice as possible, but you should also take into account the rest of the code base and current practices. Sometimes slightly better local maintainability is not worth the cost of making the code base as a whole less maintainable. Sometimes a big improvement to local maintainability is worth a minor inconvenience to the code base as a whole - especially for fast-moving parts of it. You don’t want something that no one has touched in 10 years to drastically slow down the features you are working on now just to keep things consistent.
Yes, consistency is important. But things are far more nuanced than that statement alone. You should strive for consistency in a code base - it does after all have a big effect on maintainability. But there are times when it hampers maintainability as well, and in those situations always go for maintainability over consistency.
Say for instance some new library, or an update to a library, introduces a new, much better way of working. Your code base is full of the old way though. Should you stick to the old way just to keep things consistent? If the improvement is good enough it might be worthwhile. Ideally, if you can, you would go through and update the whole code base to the new way of working - that way you improve things overall and keep the code base consistent. But that is not always practical. It might be better to decide that the new way is worth switching to for new code, and worth refactoring old code when you are working in that area anyway, but not worth the effort of converting the whole code base at once. This makes maintainability of the new code better, at the expense of old, less-used code.
But the new way might not be a big enough jump in maintainability of new code to be worth sacrificing the maintainability of the code base as a whole. Every situation like this needs to be discussed with your team, and you need to decide on what makes the most sense for your project. But the answer is not always that consistency is the most important aspect, even if it is an important one.
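As a purely hypothetical example of that kind of split, imagine a Python code base that historically used os.path everywhere and is gradually adopting pathlib for new code (the file names here are made up):

```python
import os.path
from pathlib import Path

# Old style, used throughout the existing code base.
def old_config_path(home: str) -> str:
    return os.path.join(home, ".config", "myapp", "settings.toml")

# New style, adopted for new code (and for old code touched anyway).
def new_config_path(home: str) -> Path:
    return Path(home) / ".config" / "myapp" / "settings.toml"

print(old_config_path("/home/alice"))
print(new_config_path("/home/alice"))
```

Neither version is wrong; the team decision is whether the readability gain justifies the two styles coexisting for a while.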
I do use scripts for more complex things. But even then I have a few very frequent one-liners in my history that are 3-4 commands chained together that I have not bothered to convert. It tends to only be when they start to have logic in them that I will write a script, or for more one-off commands that are easier to edit in a multi-line editor than to get right in the shell’s prompt.
I used to know a guy who would put everything into aliases or scripts in order to avoid remembering them. It worked well most of the time, but when something went wrong or was not covered by his scripts he would struggle a lot. He avoided learning the underlying commands and what they did, and so could not adapt when circumstances changed even a little - which happens quite a lot.
Which is probably another reason I don’t use them. I don’t like to set them up straight away while I am learning a tool, and once I am comfortable with it a reverse history search is just as good/quick as a true alias anyway, and means I never forget what I am doing and can edit it on the fly easily when needed.
TBH, not quite the same. You have to know which one you want; if you don’t quite remember it or get it wrong you need to clear the line and start again. I quite like that I can reverse search and keep typing, or undo what I had typed and still see a list of the most recent matches, then select from that list once I see what I want. This works for any command I have previously typed and I don’t need to set up specific key sequences for it - any part of that command will find it again. It also works for complex chains of commands or pipes, which I don’t think aliases handle well.
I seem to be one of very few people who does not use shell aliases. I much prefer just using the reverse history search for previous commands instead. That way I don’t have to remember what letter I picked for different things - just Ctrl+R, then partially type out the command, and I can see what it will execute. Bonus: I don’t need to set them up beforehand, and I can edit them before executing for those times when I need to do something slightly different.
Sounds like a stuck button. Personally I would disassemble the device and have a look at the button and surrounding parts for any damage, liquid, or debris at all. If there is any physical damage then an RMA, or maybe replacement parts, can be ordered (you can buy replacement rubber if that feels worn at all). Otherwise I would ensure everything is clean and free of any liquid, stickiness, or debris, then reassemble the device. Even if nothing looked wrong I would test it again and see if the act of disassembly solved the issue, which it sometimes does.
iFixit has guides for the LCD and OLED versions, and overall the Steam Deck is not very hard to disassemble compared to other small electronics. Though if you are unsure about this you may just want to talk to Valve support first - if you accidentally damage something, that could affect your ability to get an RMA.
SIGINT is sent when you press Ctrl+C. SIGTERM is sent in just about every other situation - basically whenever the system wants the program to end. For instance when systemd wants to stop the service, or as the default signal with programs like kill, pkill, htop, etc. You should catch both of these signals.
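A minimal sketch of what catching both looks like in Python (the shutdown function and its message are just placeholders for whatever cleanup your program needs):

```python
import signal
import sys

def shutdown(signum, frame):
    # Run the same cleanup path whether we got Ctrl+C (SIGINT)
    # or a termination request (SIGTERM) from systemd, kill, etc.
    print(f"received signal {signum}, shutting down cleanly")
    sys.exit(0)

signal.signal(signal.SIGINT, shutdown)
signal.signal(signal.SIGTERM, shutdown)

# Placeholder main loop; a real service would do its work here.
signal.pause()  # wait until a signal arrives
```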
I have updated Arch systems that had not been powered on for years before. It was fine - no issues whatsoever. Arch is not some flaky distro that breaks if you look away for a minute. My main system has had the same install for over 5 years now and I regularly forget to update it for months at a time. Again, no issues.
Not quite the same, as you have no tactile feedback on when you are about to enter the full-pull part.
But it applies to features, not coding practices
I disagree. It applies to everything. I would argue it applies to SOLID most of all. I do not find the SOLID principles to be good ones to follow most of the time. Situationally they can be useful, but I have seen so many projects that strictly follow SOLID become an unmaintainable mess.
If you struggle to understand the SOLID principles or think they are too general, then I would suggest you follow my SOLID Training Wheels until you understand them better.
I hate this excuse. If the answer to the problem is that you are just not doing it right, then it is a terrible answer. But let’s look at some of this advice:
Summary: 1 piece of code has 1 responsibility. The inverse: 1 responsibility of code has 1 piece of code
Training Wheels:
Follow the 10/100 Principle
Do not write methods over 10 lines
Do not write classes over 100 lines
No. Just no. Making everything as small as possible is exactly what is wrong with the single responsibility principle. I agree that everything should have one responsibility, but that responsibility might be complex and require a lot of code. Hiding the code behind other functions does not make it easier to read; it only means you need to jump around a lot in order to understand what it is doing, which IMO makes things harder to read. Every time you jump location it gets harder to remember where you came from or what the wider context is. Keeping related code together is more important than creating small functions.
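A toy illustration of what I mean (a made-up example, not from the article):

```python
# One cohesive function: the whole responsibility reads top to bottom.
def normalise_scores(scores: list[float]) -> list[float]:
    if not scores:
        return []
    low, high = min(scores), max(scores)
    if high == low:
        return [0.0 for _ in scores]
    return [(s - low) / (high - low) for s in scores]

# The same logic split up to satisfy a "no function over N lines" rule.
# Each piece is tiny, but now you chase three definitions to understand
# one responsibility.
def _bounds(scores):
    return min(scores), max(scores)

def _scale(s, low, high):
    return (s - low) / (high - low)

def normalise_scores_fragmented(scores):
    if not scores:
        return []
    low, high = _bounds(scores)
    if high == low:
        return [0.0 for _ in scores]
    return [_scale(s, low, high) for s in scores]
```

Both versions have exactly one responsibility; splitting the second one up did not reduce it, it just scattered it.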
Just take a look at the stdlib of almost any mainstream language - like ArrayList in Java, or Vec in Rust. These classes are thousands of lines long, with many methods being 10-20 lines of code and some even longer than that. Is this code bad or hard to read? Not for what it is doing. And code like this is not atypical in stdlibs; you can jump to almost any class/struct in a language of your choice and see similarly structured code. And in all cases the classes represent one thing and their methods do one thing on that object, regardless of how many lines of code they contain.
If you have to change a class that already breaks the 10/100 Principle:
take your code out of that class and put it in a new class first so the original class is smaller
Check-in this refactor without your new code
make your changes in the new class
Check-in your new code
IMO this breaks the single responsibility rule. If new code is mostly related to a single class then it should be added to that class, as that is what the class is responsible for. Adding a new class for every bit of logic just splits up the responsibility and makes it far harder to find what is responsible for something.
I could go on about the rest of that training guide - which this whole post seems to be an advert for.
YAGNI, will ruin your code base if you apply it to how you code.
It applies just as much to how you code as to what you are coding. If you added every programming paradigm and principle to your code base it would be an unreadable mess. Not to mention impossible to do, as loads of these conflict with each other.
Pick the right tool for the job. Don’t blindly apply anything to every situation. There are times when the SOLID principles can help, but there are also times when they make code worse. Instead, always ask yourself if there is a simpler way you could be doing something, and when applying a principle, whether it actually made the code easier to read (ask someone else as well, as it can be hard to tell yourself). Don’t be afraid to break a principle if it is not helping.
Older software is the most noticeable thing. Enterprise does not mean it is better - just that it is supported for a long time, and they do that by not changing much. They are designed more for servers than workstations, and are generally not a great experience unless you are running hundreds or thousands of them in an enterprise situation.
Professional just means paid for. What you are paying for is support in managing the systems, not a great user experience.
For home desktops it is far nicer to be on newer software rather than things that came out 5 to 10 years ago.
Um, no. Containers are not just chroot. Chroot is a way to isolate or namespace the filesystem, giving the process run inside access only to those files. Containers do this, but they also isolate the process IDs, the network, and various other system resources.
Additionally, runtimes like docker bring in vastly better tooling around all of this, making containers much easier to work with. They are like chroot on steroids, not simply marketing fluff.
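A small sketch of that extra isolation, assuming Python 3.12+ on Linux and root privileges (os.unshare wraps the same syscall container runtimes use, alongside cgroups and a lot more):

```python
import os
import socket

print("host hostname:", socket.gethostname())

pid = os.fork()
if pid == 0:
    # Child: give it its own UTS namespace (hostname/domainname),
    # one of several namespaces a container runtime sets up.
    os.unshare(os.CLONE_NEWUTS)
    socket.sethostname("inside-container")
    print("child hostname:", socket.gethostname())
    os._exit(0)

os.waitpid(pid, 0)
# The parent ("the host") is unaffected by the child's change.
print("host hostname after:", socket.gethostname())
```

Container runtimes do the same kind of thing for the PID, network, mount, user, and IPC namespaces, which is the part plain chroot never gave you.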
They refuse to make changes to their C code, so it can cooperate with Rust code via bindings.
I don’t even think the Rust devs were asking for that. They are refusing changes by Rust devs that help with Rust while making the C code clearer, and even refuse to answer questions about the semantics behind the C code. At least as far as I can see from the outside.
Did you read the article at all?
“Putting all new code aside, fortunately, neither this document nor the U.S. government is calling for an immediate migration from C/C++ to Rust — as but one example,” he said. “CISA’s Secure by Design document recognizes that software maintainers simply cannot migrate their code bases en masse like that.”
Companies have until January 1, 2026, to create memory safety roadmaps.
All they are asking for by that date is a roadmap for dealing with memory safety issues, not a rewrite of everything.
I don’t get it? They seem to be arguing in favor of bootc over systemd because bootc supports both split /usr and merged /usr? But systemd is the same. There is really nothing in systemd that requires it one way or the other; even the linked post about systemd says:
Note that this page discusses a topic that is actually independent of systemd. systemd supports both systems with split and with merged /usr, and the /usr merge also makes sense for systemd-less systems.
I don’t really get his points for it either. It basically boils down to not liking a mutable root filesystem because the symlinks are so load-bearing... but most distros before the usr merge had a writable /bin anyway, and nothing is stopping you from mounting the root fs as read-only on a usr-merged distro.
And their main argument is that /opt and similar don’t follow the /usr merge, as well as things like docker. But /opt is just a dumping ground for things that don’t fit the file hierarchy, and in docker containers you can do whatever you want - like any package, really, nothing needs to follow the unix filesystem hierarchy. I don’t get what any of that has to do with bootc or the /usr merge at all.
I don’t mind ads so much. What I don’t want is invasive tracking and the collection of every scrap of data they can get to push ads on you. Give me some dumb ads based on the damned contents of the page and I would be fine. But no, ads are basically a synonym for tracking these days.
That is the type of thinking that causes a massive number of CVEs in those languages.