Or just std::bitset<8> for C++.
Bit fields are neat though; they can store weird stuff like a 3-bit integer packed next to booleans.
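The same idea works in Python too, since ctypes supports bit fields in a Structure (a sketch; exact packing is up to the underlying C compiler):

```python
import ctypes

# A 3-bit integer packed next to two one-bit booleans,
# all sharing a single byte.
class Packed(ctypes.Structure):
    _fields_ = [
        ("value",  ctypes.c_uint8, 3),  # 3 bits: holds 0..7
        ("flag_a", ctypes.c_uint8, 1),  # 1 bit
        ("flag_b", ctypes.c_uint8, 1),  # 1 bit
    ]

p = Packed(5, 1, 0)
print(ctypes.sizeof(Packed))  # 1 -- all five bits fit in one byte
print(p.value, p.flag_a)      # 5 1
```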
This still isn’t precise enough to specify exactly what the computer will do. There are infinitely many Python programs that could print Hello World in the terminal.
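For instance, here are three of the infinitely many Python programs that produce the same line, so the output alone doesn’t pin down the program:

```python
# Three different programs, one identical result.
a = "Hello World"                              # a literal
b = "Hello" + " " + "World"                    # concatenation
c = "".join(ch for ch in "Hello World")        # character-by-character
print(a)
print(b)
print(c)
```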
Imo if they can’t max out their hard drive for at least 24 hours without it breaking, their computer was already broken. They just didn’t know it yet.
Any reasonable SSD would just throttle if it was getting too hot, and I’ve never heard of an HDD overheating on its own, only with some external heat source, like running it in a 60°C room.
Now print “¯\_(ツ)_/¯” with the quotes
And it makes the 64GB models on eBay actually a good deal, since you can upgrade the SSD and have a full-performance 2TB Steam Deck.
I prefer my tutorials without reading someone’s life story at the beginning. The intro contains so little info compared to the number of words being used. This reminds me of looking up a recipe and having to scroll past an essay on the history of someone’s grandmother. I like it when documentation is as dense as possible, and ideally formatted in a logical way so it’s easy to skim. Big paragraphs of English do not achieve this.
I got the same sort of impression in the “Write for beginners” section. The “good” example is like 3x as long but contains less actual information. The reader is already looking up a tutorial, you don’t need to sell them on what they’re about to do with marketing speak. I’ve really come to value conciseness in recent years.
Also, a key part of how GPT-based LLMs work today is that they get the entire context window as input all at once, whereas a human has to listen/read one word at a time and remember the start of the conversation on their own.
I have a theory that this is one of the reasons LLMs don’t understand the progression of time.
The context window is a fixed size. If the conversation gets too long, the start gets pushed out and the AI won’t remember anything from the beginning of the conversation. It’s more like a human with a notepad in front of them: the AI can reference it, but not learn from it.
the kind of stuff that people with no coding experience make
The first complete program I ever wrote was in Basic. It took an input number and rounded it to the 10s or 100s digit. I had learned just enough to get it running. It used strings and a bunch of if statements, so it didn’t work for more than 3-digit numbers. I didn’t learn about modulo operations until later.
In all honesty, I’m still pretty proud of it, I was in 4th or 5th grade after all 😂. I’ve now been programming for 20+ years.
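For the curious, the modulo trick I learned later looks something like this (a Python sketch, not the original Basic):

```python
def round_to(n, place):
    """Round a non-negative integer to the nearest multiple of `place` (10, 100, ...)."""
    remainder = n % place
    if remainder * 2 >= place:
        return n - remainder + place  # round up
    return n - remainder              # round down

print(round_to(1234, 10))   # 1230
print(round_to(1256, 100))  # 1300
```

No strings, no pile of if statements, and it works for numbers of any length.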
I think part of the problem is that LLMs stop learning at the end of the training phase, while a human never stops taking in new information.
Part of why I think AGI is so far away is because to run the training in real-time like a human, it would take more compute than currently exists. They should be focusing on doing more with less compute to find new more efficient algorithms and architectures, not throwing more and more GPUs at the problem. Right now 10x the GPUs gets you like 5-10% better accuracy on whatever benchmarks, which is not a sustainable direction to go.
And this is why the WSL1 filesystem was so damn slow. WSL2 uses a native ext4 filesystem (usually, you can format it to whatever)
Thank god we were able to ditch flash player!
Ah shit, I’ve got one of those for spare car parts…
I’m not arguing against charging based on bandwidth speeds. You’re right that the total data transferred doesn’t really make a difference.
My point is that even just charging per Mbps, internet will always be cheaper within a data center. Just like water utility service is going to be cheaper next to a freshwater river than in the middle of the desert. There’s millions of dollars in equipment you’re effectively renting to get the internet to your house from the nearest datacenter. Your OVH server in comparison only needs maybe 1 extra network switch installed to get it online, and you’re in a WAY bigger pool of customers to split the cost of service to the building.
If you’re fine with living in a datacenter where the direct connections to Internet backbones are available, then sure. It does cost money to install and maintain fiber/copper lines to individual residences. Of course running a new ethernet cable across an existing building designed for running cables is going to be dirt cheap.
Fines and taxes are incentives. Companies will do whatever’s cheapest, so you can make the good thing cheaper, or the bad thing more expensive. Both will have a similar effect, it’s just a question of where the margins are.
If a company is selling something at-cost and gets taxed, then they’ll have to raise prices for the consumer, but if they’re getting a stimulus from the government, it gets covered by taxpayers. Which one ends up being the right choice depends on the product and company in question.
I think the strawberry problem is to ask it how many R’s are in strawberry. Current AI gets it wrong almost every time.
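Counting the letters is trivial for actual code, which is part of the joke. In Python, for example:

```python
word = "strawberry"
print(word.count("r"))  # 3
```

The LLM struggles because it sees tokens, not individual characters.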
I’ve actually noticed this exact thing with elevators before… I was kind of amazed the beep and light were hooked up completely independently from the actual floor selection logic.
It sort of makes sense that the light in the button would just be hooked directly up to the button contacts. The computer would then poll the buttons separately, and it’s possible to miss a button press… These sorts of buttons shouldn’t need a debounce period, since pressing any of them a second time doesn’t do anything. If the buttons were interrupt-based, this probably wouldn’t happen.
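A toy simulation of why polling can miss a short press (purely illustrative, I have no idea how a real elevator controller is wired; the 50 ms poll rate is an assumption):

```python
POLL_INTERVAL_MS = 50  # hypothetical polling rate

def press_seen(press_start_ms, press_len_ms, total_ms=200):
    """Return True if any fixed-interval poll samples the button while it's held."""
    polls = range(0, total_ms, POLL_INTERVAL_MS)
    return any(press_start_ms <= t < press_start_ms + press_len_ms for t in polls)

print(press_seen(60, 10))  # False: a 10 ms tap falls between the 50 ms polls
print(press_seen(60, 60))  # True: long enough to span a poll tick
```

An interrupt or hardware latch would capture the edge of the press no matter how short it is, which is why the light (wired to the contacts) can blink even when the selection logic misses you.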