
  • And before anyone chastises me for being “lazy” or relying on extractive services, I highly favor ordering directly from the restaurant and picking up. The deeply abusive nature of Doordash et al towards both customers and restaurants is not lost on me.

    Doordash prices can also be higher than the restaurant’s own, so even if you do use it, it’s worth comparing against what the restaurant would charge if you ordered directly.

    I’ve seen differences in price of nearly 2x in some cases.

    Also, if you have the time and means, my personal suggestion is to always pick up, not have it delivered. Saves a ton of money, plus gives you an opportunity to go outside, even if picking up isn’t a whole lot of human interaction (still better than none).



  • Mixins are composition! They don’t describe what a type is (a “circle” is a “shape”, etc.) but rather what it can do (a “circle” can have its area calculated, it can be drawn, it can be serialized, etc.). Mixins in Python just so happen to be implemented by adding base classes.

    Inheritance itself isn’t really a problem. It only becomes one when hierarchies get unnecessarily deep, where a change in a base class can unintentionally alter behavior in dozens of subclasses. Deep hierarchies can also add complexity, but mostly when too much gets thrown into the base classes.

    Python’s ABCs are closer to interfaces, though. Even though Python uses base classes to “inherit” them, much of that is really composition (putting a class together from parts) rather than inheriting and overriding implementation details from a parent/grandparent/etc. type.
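
    For example, here’s a minimal Python sketch of that distinction (toy names, nothing from any real codebase):

    ```python
    import json
    import math
    from abc import ABC, abstractmethod

    class Drawable(ABC):
        """ABC used like an interface: declares what implementers can do."""
        @abstractmethod
        def draw(self) -> None: ...

    class JsonMixin:
        """Mixin: adds a capability, says nothing about what the class is."""
        def to_json(self) -> str:
            return json.dumps(vars(self))

    class Circle(JsonMixin, Drawable):
        """Composed from parts: a drawable, JSON-serializable circle."""
        def __init__(self, radius: float) -> None:
            self.radius = radius

        def area(self) -> float:
            return math.pi * self.radius ** 2

        def draw(self) -> None:
            print(f"circle(r={self.radius})")

    c = Circle(2.0)
    c.draw()            # fulfills the Drawable contract
    print(c.to_json())  # capability composed in via the mixin
    ```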




  • I miss the days when it was simpler as well. Back before there were botnets of hundreds of thousands of compromised routers across several countries that could send tens of terabytes per second at your server for sustained periods. Back before there were thousands of bots crawling every IP and domain imaginable for exposed, abusable ports and wp-admin endpoints. Back before people started competing over how many 9s of uptime they could support (before LLMs killed all that anyway).

    Sadly, we can’t go back to those times. Doing so with a production service would not end well.

    The issue is not npm. Npm is a solution to a problem, even if it isn’t perfect.

    The issue is we live in a different landscape.

    I used Eclipse in the past and it was great, but its features are not exclusive to Eclipse: I can do the same inlining and extracting of code in vscode with code actions. Compile times weren’t a matter of seconds for me back then, but they are now, and Vite improves that even further (though that’s comparing JS to Java).


  • I agree with the list in general, but there is still some stuff I disagree with. For example, the very first section: “Work on more than one thing”.

    Like a CPU thread, if you’re responsible for multiple streams of work, you can deal with one stream getting blocked by rolling onto another one.

    This is written from the perspective of the developer, not the stakeholders. Unlike a CPU, you are a single thread: you cannot work on two things at the same time. What this describes is not parallelism but a form of concurrency. As with a single CPU thread executing two tasks concurrently, one task is always blocked. So while you, the developer, are always working, at least one of your tasks is always sitting blocked, waiting on you.

    Instead of working on two tasks at once, pick up the second task only when the first becomes blocked.

    I believe this might be what the author was trying to convey, but the title, some wording in the section, and the bullet point at the end (“Working on at least two things at a time, so when one gets blocked you can switch to the other”) contradict that and give the impression that you should always be working on two or more things at a time.
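
    To stretch the analogy into actual code, here’s a toy asyncio sketch (the task names are made up): one event loop stands in for you, the single thread, and a switch only happens when the running task blocks.

    ```python
    import asyncio

    async def review_pr() -> None:
        print("reviewing PR")
        await asyncio.sleep(1)  # blocked, e.g. waiting on CI or answers
        print("PR unblocked, finishing review")

    async def build_feature() -> None:
        print("picked up feature work while the PR is blocked")
        await asyncio.sleep(2)
        print("feature done")

    async def main() -> None:
        # Both tasks run concurrently on one event loop: at any instant
        # exactly one makes progress while the other sits blocked.
        await asyncio.gather(review_pr(), build_feature())

    asyncio.run(main())
    ```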

    use as normal a developer stack as possible.

    This, I mostly agree with, but I disagree with the wording. You should be using the same tools as the rest of your team when the tool matters. However, using different Git interfaces shouldn’t matter. I’d argue the same holds true for editors as long as the editors all have the features needed for the project.

    For application work, some variety in dev environments can even help you find bugs sooner: when developers work in different environments, those environments get tested naturally. For services, this is less relevant.


  • This is a super interesting approach to JS. Conceptually, it’s really cool. In practice, I don’t think I’d use it (at least for any project I can think of), because explaining it to others would be difficult and representing complex logic as “commands” sounds unwieldy.

    In a weird way, it reminds me of actor frameworks though. The difference is of course the separation of effects.

    One thing I wish the author had done, though, is add some type hints. I know it’s about JS, but even some jsdoc types would have helped; it was a bit hard at first to tell what the input types of these functions were.


  • Rust currently isn’t as performant as optimized C code, and I highly doubt that even unsafe rust can beat hand optimized assembly — C can’t, anyways.

    A bit tangential, but to answer that: nothing beats the most optimized assembly code. At best, programming languages can only hope to match it.

    Rust does have macros for inlining assembly into your program, but inline assembly is horribly unsafe and not super easy to work with.

    Rewriting ffmpeg in Rust is not a solution here, as you say.




  • I don’t understand how a bug is supposed to know whether it’s triggered inside or outside of a google service.

    Who found the bug, and what triggered it? Does it affect all users, or does it only affect one specific service that uses it in one specific way due to a weird, obscure set of preconditions or extraordinarily uncommon environment configuration?

    Most security vulnerabilities in projects this heavily used are hyper obscure.

    If the bug is manifestly present in ffmpeg and it’s discovered at google, what are you saying is supposed to happen?

    e) Report it with the usual 90 day disclosure rule, then fix the bug, or at least reduce the burden as much as possible on those who do need to fix it.

    Google is the one with the vulnerable service. ffmpeg itself is a tool, but the vast majority of end users don’t use it directly, so the ffmpeg devs are not the ones directly (or possibly at all) affected by the bug.

    There are a bunch of Rust zealots busily rewriting GNU Coreutils which in practice have been quite reliable and not that badly in need of rewriting. Maybe the zealots should turn their attention to ffmpeg (a bug minefield of long renown) instead.

    This is weirdly off-topic, a gross misrepresentation of what they are doing, and horribly dismissive of the fact that none of the people doing the real work here are being paid support fees by Google. Do not dictate what they should do with their time until you enter a contract with them. Until that point, what they do is none of your business.

    Alternatively (or in addition), some effort should go into sandboxing ffmpeg so its bugs can be contained.

    And who will put in that effort?


  • Bug reports that apply only to Google’s services, or that surface only because of them, are bugs Google needs to fix. They can submit all the bug reports they want; nobody is obligated to fix them.

    The other part of this is, of course, disclosure. Google’s disclosure of these bugs discredits the ffmpeg developers and shifts the blame onto them if they fail to fix the vulnerabilities. Google can acknowledge the project as a volunteer, hobby project created by others and treat it like one, but if it does, it should not be putting responsibilities on those volunteers.

    If Google wants to use ffmpeg, it can. But a bug in ffmpeg that affects Google’s services is a bug in Google’s services. It is not the responsibility of unpaid volunteers to maintain those services for Google.





  • Guess I’ll post another update. The block-based data structure makes no sense to me. At some point the paper claims that deleting a pair from the data structure takes O(1) time:

    To delete the key/value pair ⟨a,b⟩, we remove it directly from the linked list, which can be done in O(1) time.

    This has me very confused. First, it doesn’t explain how to find which linked list to remove it from (every block is a linked list, and there are many blocks). You can binary search for the blocks that can contain the value and search them in order based on their upper bounds, but that’d be O(M * |D_0|) just to search the non-bulk-prepended values.

    Second, it feels like the data structure is described purely from a theoretical perspective. Linked lists here are only solid in theory; from a practical standpoint, it’s better to initialize each block as a preallocated array (vector) of size M. It’s also not clear whether each block’s elements should be sorted by key within the block itself, but in my opinion that would make the most sense: it would cut the split operation from O(M) to O(1), and it would answer how PULL() returns “the smallest M values”.
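
    Roughly what I have in mind, as a Python sketch of that array-backed variant (the names and the plain integer keys are my own simplification, not the paper’s):

    ```python
    import bisect

    M = 4  # example block capacity

    class BlockList:
        """Blocks as fixed-capacity sorted arrays instead of linked lists,
        ordered by per-block upper bounds."""

        def __init__(self) -> None:
            self.upper_bounds: list[int] = []              # parallel to blocks
            self.blocks: list[list[tuple[int, int]]] = []  # each sorted by key

        def _find_block(self, key: int) -> int | None:
            # Binary search over upper bounds: O(log #blocks) to locate the
            # one block that can contain `key`. This is the step the O(1)
            # deletion claim seems to gloss over.
            i = bisect.bisect_left(self.upper_bounds, key)
            return i if i < len(self.blocks) else None

        def delete(self, key: int) -> None:
            i = self._find_block(key)
            if i is None:
                return
            block = self.blocks[i]
            # Within a sorted block, the pair is found in O(log M)...
            j = bisect.bisect_left(block, (key,))
            if j < len(block) and block[j][0] == key:
                block.pop(j)  # ...but removal shifts elements: O(M), not O(1)
    ```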

    Anyway, it’s also possible that the language of the paper is just beyond me.

    I like the divide-and-conquer approach, but in my opinion the paper is difficult to implement from.