When you run out of characters, you simply create another 0 byte file to encode the rest.
Check mate, storage manufacturers.
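(If you actually want to try the bit, here’s a rough sketch of my own, assuming an ext4-style 255-byte filename limit; secret.txt is a stand-in for whatever you’re “storing”. Base64 can emit ‘/’, which is illegal in filenames, so it has to be remapped.)

    # Sketch: smuggle a file's data into the NAMES of 0-byte files.
    data=$(base64 -w0 secret.txt | tr '/' '_')       # '/' is illegal in filenames
    i=0
    while [ -n "$data" ]; do
        printf -v name '%06d.%s' "$i" "${data:0:200}"  # zero-padded index keeps sort order
        touch "$name"                                  # 0 bytes of "content", per the joke
        data=${data:200}
        i=$((i+1))
    done
    # "Read" it back: ls | sort | cut -d. -f2- | tr -d '\n' | tr '_' '/' | base64 -d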
That’s not really the point. The point this post is making is that third-party software is often not available as a package for your distro. It’s been a minute since I used Slackware, but I doubt you can find neatly built tgz Slackware packages of Steam or the Nvidia drivers.
I know Slackware has slackbuilds and you can install sbopkg to search for packages and automatically build them, but that goes a bit beyond “just use your package manager”.
So you can’t become root on your system unless you switch to that tty? That sounds like a gigantic pain in the ass.
Me use apt. Why use many letter when few letter do trick?
LOL yes, I had a look at those too when I was looking for a more minimal terminal. Noped the fuck out when I read you had to recompile the tools to configure them.
It’s not that this is beyond my skill level, but that is just so … why would I want to do that?
I guess it’s why some Jellyfin streams started transcoding for me.
You’re better off using the Jellyfin Media Player standalone application anyway.
The flag is called --no-preserve-root, but the flag wouldn’t do anything here because you’re not deleting root (/), you’re deleting all non-hidden files and directories under root (/*), and rm will just let you do it.
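To make that concrete (my own illustration, do not actually run these): modern GNU rm has the preserve-root failsafe on by default, but it only triggers on the literal path /; the shell expands the glob before rm ever sees it.

    rm -rf /     # refused: GNU rm warns it is "dangerous to operate recursively on '/'"
    rm -rf /*    # the shell expands /* to /bin /boot /etc /home ... and rm obliges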
It’s apparently a hobby, and to be competitive you need to be able to spew bullshit at amazing rates. Personally I’ve maxed out at 140 wpm. I’m limited by the rate at which I can think of bullshit.
yet all I needed was a “this side up” symbol …
Since you forgot to add --preserve-root, it won’t go too far
Go on then … try it.
Or don’t, because you will erase your system. (Hint: it’s in the asterisk.)
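You can peek at what the asterisk does without risking anything by sticking echo in front (a harmless substitute of my own, not part of the dare):

    echo rm -rf /*   # just prints the expanded command: rm -rf /bin /boot /dev /etc ...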
as the binary is already loaded into memory
That’s not the reason why it continues. It’s because there’s still a file descriptor open to rm.
In Unix/Linux, a removed file only disappears when the last file descriptor to it is gone. As long as the file /usr/bin/rm is still opened by a process (and it is, because it is running), it will not actually be deleted from disk from the perspective of that process.
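You can watch that happen with nothing but shell redirection (a minimal sketch; demo.txt is a throwaway file):

    echo "hello" > demo.txt
    exec 3< demo.txt   # hold an open file descriptor (fd 3) on it
    rm demo.txt        # removes the directory entry, not the inode
    cat <&3            # still prints "hello": the data lives while fd 3 is open
    exec 3<&-          # closing the last descriptor is what actually frees it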
This is also why removing a log file that’s actively being written to doesn’t clear up filesystem space, and why it’s more effective to truncate it instead (e.g. run > /var/log/myhugeactivelogfile.log instead of rm /var/log/myhugeactivelogfile.log), and why Linux can upgrade a package that’s currently running and the running process will just keep chugging along as the old version, until restarted.
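Here’s a safe way to see that with a fake log (my own sketch; the /proc trick at the end also lets you truncate through the writer’s own descriptor):

    while :; do echo "log line"; sleep 0.1; done > fake.log &
    writer=$!
    rm fake.log                 # entry gone, but the space is still allocated
    ls -l /proc/$writer/fd      # fd 1 shows as '.../fake.log (deleted)' and keeps growing
    : > /proc/$writer/fd/1      # truncating through the fd releases the blocks for real
    kill "$writer"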
Sometimes you can even use this to recover an accidentally deleted file, if it’s still held open in a process. You can go to /proc/$PID/fd, where $PID is the process ID of the process holding the file open, find all the file descriptors it has in use, and then copy the lost content from there.
kill -9 1
Leave the poor kernel out of it, it has nothing to do with this. It’s Lennart, not Linus.
I don’t think it’s intended as a “solution”; it just lets the clobbering caused by the case insensitivity happen.
So git just goes with it: if you add a third or fourth file, it would just continue; whichever file gets checked out first gets the filename, and whichever gets checked out last gets the content.
It tells you there’s a name clash, and then it clones it anyway and you end up with the contents of README.MD in README.md as an unstaged change.
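You can reproduce the whole mess from any case-sensitive filesystem (a sketch of my own; git’s exact warning wording may differ):

    git init caseclash && cd caseclash
    echo upper > README.MD
    echo lower > README.md     # a second file on ext4, the same file on NTFS/APFS defaults
    git add . && git commit -m "collide"
    # Clone this on a case-insensitive filesystem: git warns that the paths
    # collided, checks both out anyway, and the last one checked out wins the content.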
That’s some suckless level cope
Thanks, really constructive way of arguing your point…
Who really cares about some programming purity aspect?
People who create operating systems and file systems, or programs that interface with those, should, because behind every computing aspect there is still a physical reality of how that data is structured and stored.
What’s correct is the way that creates the least friction for the end users
Treating different characters as different characters is objectively the most correct and predictable way. Case has meaning, both in natural language as well as in almost anything computer related, so users should be allowed to express case canonically in filenames as well. If you were never exposed to a case insensitive filesystem first, you would find case sensitive the most natural way. Give end users some credit, it’s really not rocket science to understand that f and F are not the same; most people handle this “mindblowing” concept just fine.
Also, the reason Microsoft made NTFS case insensitive by default was not “user friction” but backwards compatibility with MS-DOS FAT16 all-uppercase 8.3 file names. However, when they created a new file system for the cloud, Azure Blob Storage, guess what: they made it case sensitive.
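The concept in its entirety, for reference (on ext4 or any other case-sensitive filesystem):

    touch f F
    ls      # two files: F f
    # On a case-insensitive filesystem the second touch just updates the first file's timestamp.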
You can give me any file, and I can create a compression algorithm that reduces it to 1 bit. (*)
(*) No guarantees about the size of the decompression algorithm or its efficacy on other files
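For anyone planning to collect their information-theory Nobel, a sketch of the scheme (the payload just moves into the “decompressor”, which is the entire joke; paths and names are made up):

    compress()   { cp "$1" /tmp/decompressor.payload; printf '1' > "$2"; }  # 1 byte, close enough to 1 bit
    decompress() { cp /tmp/decompressor.payload "$1"; }
    # Usage: compress bigfile.bin out.bit && decompress restored.bin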