  • C, C++, C#, to name the main ones. And quite a lot of languages are compiled similarly to these.

    To be clear, there’s a lot of caveats to the statement, and it depends on architecture as well, but at the end of the day, it’s rare for a byte or bool to be mapped directly to a single byte in memory.

    Say, for example, you have this function…

    public void Foo()
    {
        bool someFlag = false;
        int counter = 0;
    
        // ...
    }
    

    The someFlag and counter variables are getting allocated on the stack, and (depending on architecture) that probably means each one is aligned to a 32-bit or 64-bit word boundary, since many CPUs require that for whole-word load and store instructions, or only support a stack pointer that increments in whole words. If the function were to have multiple byte or bool variables allocated, it might be able to pack them together, if the CPU supports single-byte load and store instructions, but the next int variable that follows might still need some padding space in front of it, so that it aligns on a word boundary.

    A very similar concept applies to most struct and object implementations. A single byte or bool field within a struct or object will likely result in a whole word being allocated, so that other variables can be word-aligned, or so that the whole object meets some optimal word-aligned size. But if you have multiple less-than-a-word fields, they can be packed together. C# does this, for sure, and has some mechanisms by which you can customize field packing.
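
    To put rough numbers on that, here’s a minimal C# sketch (the type names are made up, and the exact sizes depend on platform and runtime) showing the padding inserted around a single byte field, and the Pack setting that C#’s StructLayout attribute exposes for customizing it:

    using System;
    using System.Runtime.InteropServices;

    // Default sequential layout: 3 bytes of padding are inserted after Flag
    // so that Counter stays aligned on a 4-byte boundary.
    [StructLayout(LayoutKind.Sequential)]
    struct Padded
    {
        public byte Flag;
        public int Counter;
    }

    // Pack = 1 tells the runtime/marshaler not to insert alignment padding.
    [StructLayout(LayoutKind.Sequential, Pack = 1)]
    struct Packed
    {
        public byte Flag;
        public int Counter;
    }

    static class AlignmentDemo
    {
        static void Main()
        {
            Console.WriteLine(Marshal.SizeOf<Padded>()); // 8 on typical platforms
            Console.WriteLine(Marshal.SizeOf<Packed>()); // 5
        }
    }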




  • My big reason would be “it hurts readability”. That is, when writing code, readability for others who aren’t familiar with it (including future me) is my top priority, and that means indentation and alignment are HIGHLY important. If I spend the time to write code with specific indentation and alignment, to make it readable at a glance, I want to be certain that it’s always going to display exactly that way. Tabs specifically break that guarantee, because they’re subject to editor settings, which means shit like the below example can occur:

    I write the following code with an editor that uses a tab size of 4.

    myObject.DoSomething(
        someParameter:      "A",
        someOtherParameter: "B",
        value:              "C");
    

    If someone pulls this up in an editor that uses a tab size of 8, they get…

    myObject.DoSomething(
        someParameter:          "A",
        someOtherParameter:     "B",
        value:                          "C");
    

    Not really a big deal, in this simple case, but it illustrates the point.

    My second reason would be that it makes code more difficult to WRITE, I.E. it’s not that hard to insert spaces when you mean to insert tabs, considering that you’re not LITERALLY using only tabs for indentation and alignment. And if you do accidentally have spaces mixed in, you’re not going to be able to tell. The guy on another machine with different editor settings will, though.

    I’m aware there are fonts that can make spaces and tabs visible and distinct, but that sounds like a NIGHTMARE to write and read code with. I mentioned above, my top priority is easy readability, and introducing more visual noise to make tabs and spaces distinct can only hurt readability.
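
    For what it’s worth, the usual way teams pin this down (so nobody’s editor settings silently change how the code renders or what gets inserted) is an EditorConfig file checked into the repo. A minimal sketch, with example values rather than a recommendation:

    # .editorconfig at the repository root
    root = true

    [*.cs]
    indent_style = space
    indent_size = 4
    trim_trailing_whitespace = true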










  • This really reads to me like the perspective of a business major whose only concept of productivity is about what looks good on paper. He seems to think it’s a desirable goal for EVERY project to be completed with 0 latency. That’s absurd. If every single incoming requirement is a “top priority, this needs to go out as soon as possible” that’s a management failure. They either need to ACTUALLY prioritize requirements properly, or they need to bring in more people.

    For the Chuck and Patty example, he describes Chuck finishing a task and sending it to Patty for review, and Patty not picking it up because she’s “busy.” Busy with what? If this task is the higher priority, why is she not switching to it as soon as it’s ready? Does either Chuck or Patty not know that this task is the current highest priority? Sounds like management failure. Is there not a system in place (whether automatic or not) for notifying people when high priority tasks are assigned? Also sounds like management failure. Is Patty just incapable of switching tasks within 30-60 minutes? Then she needs to work on her organization skills, or management isn’t providing sufficient tooling for multitasking.

    When a top-priority “this needs to go out ASAP” task is in play on my team, I’m either working on it, or I know it’s coming my way soon, and who it’s coming from, because my Project Lead has already coordinated that among all of us. Because that’s her job.

    From the article…

    Project A should take around 2 weeks

    Project B should take around 2 weeks

    That’s 4 weeks to complete them both

    But only if they’re done in sequence!

    If you try to do them at the same time, with the same team, don’t be surprised if it ends up taking 6 weeks!

    Nonsense. If these are both top priorities, the team has proper leadership, and the 2-week estimates are actually accurate, 4 weeks is entirely achievable. If these are not top priorities, and the team has other work as well, then yeah, no shit it might be 6 weeks. You can’t just ignore the 2 weeks from Project C if it’s prioritized similarly to A and B. If A and B NEED to go out in 4 weeks, then prioritize them higher, and coordinate your team to make that happen.


  • As I understand it (and assuming you know what asymmetric keys are)…

    It’s about using public/private key pairs and swapping them in wherever you would use a password. Except, passwords are things users can actually remember in their head, and are short enough to be typed in to a UI. Asymmetric keys are neither of these things, so trying to actually implement passkeys means solving this newly-created problem of “how the hell do users manage them” and the tech world seems to be collectively failing to realize that the benefit isn’t worth the cost. That last bit is subjective opinion, of course, but I’ve yet to see any end-users actually be enthusiastic about passkeys.

    If that’s still flying over your head, there’s a direct real-world analog that you’re probably already familiar with, but I haven’t seen mentioned yet: Chip-enabled Credit Cards. Chip cards still use symmetric cryptography, instead of asymmetric, but the “proper” implementation of passkeys, in my mind, would be basically chip cards. The card keeps your public/private key pair on it, with embedded circuitry that can perform the private-key operations without the key ever being exposed. Of course, the problem would be the same as the problem with chip cards in the US, the one that quite nearly killed the existence of them: everyone that wants to support or use passkeys would then need to have a passkey reader that you plug into when you want to log in somewhere. We could probably make a lot of headway on this by just using USB, but that would make passkey cards more complicated, more expensive, and more prone to being damaged over time. Plus, that doesn’t really help people wanting to log in to shit with their phones.
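
    If it helps to see the moving parts, here’s a rough C# sketch of the challenge/response idea passkeys are built on. This is a simplified assumption-laden illustration using .NET’s ECDsa, not any actual passkey/WebAuthn API:

    using System;
    using System.Security.Cryptography;

    class PasskeySketch
    {
        static void Main()
        {
            // "Enrollment": the authenticator generates a key pair and the
            // server stores only the public key.
            using var userKey = ECDsa.Create(ECCurve.NamedCurves.nistP256);
            byte[] publicKey = userKey.ExportSubjectPublicKeyInfo();

            // "Login": the server sends a random challenge...
            byte[] challenge = RandomNumberGenerator.GetBytes(32);

            // ...the authenticator signs it with the private key (which never
            // leaves the device)...
            byte[] signature = userKey.SignData(challenge, HashAlgorithmName.SHA256);

            // ...and the server verifies the signature against the stored
            // public key. No shared secret ever crosses the wire.
            using var verifier = ECDsa.Create();
            verifier.ImportSubjectPublicKeyInfo(publicKey, out _);
            bool ok = verifier.VerifyData(challenge, signature, HashAlgorithmName.SHA256);

            Console.WriteLine(ok ? "authenticated" : "rejected");
        }
    }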


  • Automated certificate lifecycle management is going to be the norm for businesses moving forward.

    This seems counter-intuitive to the goal of “improving internet security”. Automation is a double-edged sword. Convenient, sure, but also an attack vector, one where malicious activity is less likely to be noticed, because actual people aren’t involved in the process anymore.

    We’ve got ample evidence of this kinda thing with passwords: increasing complexity requirements and lifetime requirements improves security, only up to a point. Push it too far, and it actually ends up DECREASING security, because it encourages bad practices to get around the increased burden of implementation.



  • It’s the capability of a program to “reflect” upon itself, I.E. to inspect and understand its own code.

    As an example, In C# you can write a class…

    public class MyClass
    {
        public void MyMethod()
        {
            // ...
        }
    }
    

    …and you can create an instance of it, and use it, like this…

    var myClass = new MyClass();
    myClass.MyMethod();
    

    Simple enough, nothing we haven’t all seen before.

    But you can do the same thing with reflection, as such…

    // Look the type up by name from the currently-executing assembly
    // (use the namespace-qualified name if MyClass lives in a namespace).
    var type = System.Reflection.Assembly.GetExecutingAssembly()
        .GetType("MyClass");

    // Grab the parameterless constructor and invoke it to get an instance.
    var constructor = type.GetConstructor(Array.Empty<Type>());
    var instance = constructor.Invoke(Array.Empty<object>());

    // Look the method up by name and bind it to the instance as a delegate.
    // ("delegate" is a reserved word in C#, so the variable is named "action".)
    var method = type.GetMethod("MyMethod");
    var action = method.CreateDelegate(typeof(Action), instance);

    action.DynamicInvoke(Array.Empty<object>());
    

    Obnoxious and verbose, and it tosses basically all type safety out the window, but it does enable some pretty interesting things, like self-discovery and dynamic loading of plugins, or self-configuration of apps. It’s also often useful when messing with generics. I could dig up some practical use-cases, if you’re curious.
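
    As a taste of the plugin-discovery use case, here’s a rough sketch (IPlugin, HelloPlugin, and PluginLoader are made-up names for illustration): scan the loaded assembly for types implementing an interface and instantiate them, without ever naming the concrete classes at compile time.

    using System;
    using System.Linq;
    using System.Reflection;

    public interface IPlugin
    {
        void Run();
    }

    public class HelloPlugin : IPlugin
    {
        public void Run() => Console.WriteLine("Hello from a plugin");
    }

    public static class PluginLoader
    {
        public static void Main()
        {
            // Find every concrete type in this assembly that implements IPlugin...
            var pluginTypes = Assembly.GetExecutingAssembly()
                .GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t)
                            && !t.IsAbstract && !t.IsInterface);

            // ...then instantiate and invoke each one, with no compile-time
            // reference to the concrete classes.
            foreach (var type in pluginTypes)
            {
                var plugin = (IPlugin)Activator.CreateInstance(type);
                plugin.Run();
            }
        }
    }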



  • I think the big reasons for most people boil down to one or both of two things:

    A) People having 0 trust in Google. I.E. people do not believe that paying for their services will exempt them from being exploited, so what’s the point?

    B) YouTube’s treatment of its content creators, who are what people actually come to YouTube for. Advertisers and copyright holders (and copyright trolls) get first-class treatment, while the majority of content creators get little to no support for anything.