• 1 Post
  • 131 Comments
Joined 3 years ago
Cake day: June 14th, 2023


  • I learned z80 assembly back when the cutting edge of technology was a ZX Spectrum, and 68k assembly when I upgraded to an Amiga. That knowledge served me quite well for my early career in industrial automation - it was hard real-time coding on eZ80’s and 65c02 processors, but the knowledge transfers.

    Back in the day, when input was mapped straight to a memory location and the display output was another memory location, assembly seemed like magic. Read the byte that corresponds to the right-hand middle row of the keyboard, check whether a certain bit is set in that byte, and you know a key is held down. Call your subroutine that copies a sequence of bytes into a known location. Boom, pressing a key updates the screen. Awesome.
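    That style is easy to sketch in C-ish terms. Everything below is illustrative - the bit layout is made up and I’ve parameterised the ‘hardware’ locations rather than hard-coding real addresses, but the shape is the old memory-mapped approach:

```cpp
#include <cstdint>

// Illustrative sketch of memory-mapped I/O. On a real 8-bit machine these
// pointers would be fixed hardware addresses; here they're parameters.
bool key_held(volatile const std::uint8_t *key_row, unsigned bit) {
  // Active-low on many old machines: a cleared bit means the key is down.
  return (*key_row & (1u << bit)) == 0;
}

void draw_glyph(volatile std::uint8_t *screen, const std::uint8_t *glyph, int n) {
  // "Copy a sequence of bytes into a known location" - that's the renderer.
  for (int i = 0; i < n; ++i)
    screen[i] = glyph[i];
}
```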

    Modern assembly (x64 and the like) has masses of rules about pointer alignment for the stack, which you deal with so often that you might as well write a macro for it. Since the OS doesn’t let you write to system memory any more (a good thing), you need to make system calls and call library functions to do the same things. You do that so often that you might as well write a macro for that as well. Boom, now your assembly looks almost exactly like C. Might as well learn that instead.

    In fact, that’s almost the purpose of C - a more readable, somewhat portable assembly language. Experienced C developers will know which sequence of opcodes they’d expect from any language construct. It’s quite a simple mapping in that regard.
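    As a sketch of that mapping - the assembly in the comments is only the flavour of what an unoptimising x86-64 compiler might emit, not any particular compiler’s exact output:

```cpp
// A C construct and the rough x86-64 you'd expect it to become.
// Exact registers and instruction selection vary with compiler and flags.
int sum_to(int n) {
  int total = 0;                 //   xor  eax, eax          ; total in eax
  for (int i = 1; i <= n; ++i)   //   cmp  ecx, edi / jg end ; i vs n
    total += i;                  //   add  eax, ecx / inc ecx / jmp top
  return total;                  // end: ret                 ; result in eax
}
```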

    It’s handy to know a little assembly occasionally, but unless you’re writing eg. crypto implementations, which must take the exact same time and power to execute regardless of the input, then it’s impractical for almost any purpose nowadays.


  • Enough of that crazy talk - plainly WheeledDeviceServiceFactoryBeanImpl is where the dependency injection annotations are placed. If you can tell what the code does without stepping through it in a debugger, and any backtrace doesn’t have at least two hundred lines of Spring Boot, then plainly it isn’t enterprise enough.

    Fair enough, though. You can write stupid overly-abstract shit in any language, but Java does encourage it.



  • Well now. My primary exposure to Go would be using it to take first place in my company’s ‘Advent of Code’ several years ago, in order to see what it was like, after which I’ve been pleased never to have to use it again. Some of our teams have used it to provide microservices - REST APIs that do database queries, some lightweight logic, and conversion to and from JSON - and my experience of working with that is that they’ve inexplicably managed to scatter all the logic among dozens of files, for what might be done with 80 lines of Python. I suspect the problem in that case is the developers, though.

    It has some good aspects - I like how easy it is to do a static build that can be deployed in a container.

    The actual language itself I find fairly abominable. The lack of exceptions means that error handling is threaded through everything, and it isn’t necessarily any better than in other modern languages. The lack of overloads means that you’ll have multiple definitions of eg. math.Min cluttering things up. I don’t think the container classes are particularly good. And the implementation of pointers seems to exist solely to let you have nil pointer dereferences - it’s a pointless wart.

    If what you’re wanting to code is the kind of thing that Google do, in the exact same way that Google do it, and you have a team of hipsters who all know how it works, then it may be a fine choice. Otherwise I would probably recommend using something else.


  • I feel that Python is a bit of a ‘Microsoft Word’ of languages. Your own scripts are obviously completely fine, using a sensible and pragmatic selection of the language features in a robust fashion, but everyone else’s are absurd collections of hacks that fall to pieces at the first modification.

    To an extent, ‘other people’s C++ / Bash scripts’ have the same problem. I’m usually okay with ‘other people’s Java’, which to me is one of the big selling points of the language - the slight wordiness and lack of ‘really stupid shit’ makes collaboration easier.

    Now, a Python script that’s more than about two pages long? That makes me question its utility. The ‘duck typing’ everywhere makes any code that you can’t ‘keep in your head’ very difficult to reason about.


  • Frezik has a good answer for SQL.

    In theory, Ansible should be used for creating ‘playbooks’ listing the packages and configuration files which are present on a server or collection of servers, and then ‘playing the playbook’ arranges it so that those servers exist and are configured as you specified. You shouldn’t really care how that is achieved; it is declarative.

    However, in practice it has input, output, loops, conditional branching, and the ability to execute subtasks recursively. (In fact, it can be quite difficult to stop people from using those features, since ‘declarative’ doesn’t necessarily come easily to everyone, and it makes for very messy config.) I think those are all the features required for Turing equivalence?

    Being able to deploy a whole fleet of servers in a very straightforward way comes as close to the ‘infinite memory’ requirement as any programming language can get, although you do need basically infinite money to do that on a cloud service.



  • To be fair, compiling C code with a C++ compiler gets you all the warnings from C++'s strong-typing rules. That’s a big bonus for me, even if it only highlights the areas of your C that are likely to become a maintenance hazard - all those void* casts want some documentation about what assumptions make them safe. Clang will still compile variable-length arrays in C++, though you might want to switch off the warning for them, since you presumably intended them. It does mean you can’t use designated initialisers, since C++ uses constructors for that and there’s no C equivalent. I’d be happy describing code that compiles either way as “C+”.

    Also stops anyone using auto, constexpr or nullptr as variable names, which will help if you want to copy-paste some well-tested code into a different project later.
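    As a tiny sketch of what that “C+” style looks like (the function itself is made up, but the pattern isn’t): the explicit cast on malloc is redundant in C but required in C++, so writing it keeps the code valid in both and documents the conversion.

```cpp
#include <stdlib.h>
#include <string.h>

/* Compiles as both C and C++. Omitting the cast on malloc is fine in C
 * but an error in C++ - the implicit void* -> char* conversion is gone. */
char *duplicate(const char *s) {
  size_t n = strlen(s) + 1;
  char *copy = (char *)malloc(n); /* not: char *copy = malloc(n); */
  if (copy != NULL)
    memcpy(copy, s, n);
  return copy;
}
```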


  • Man alive, don’t get the managers working with audio. “Doubling the stream” might work if you’re using a signed audio format rather than an unsigned one, and if the data is in the same endianness as the host computer. Neither of which is guaranteed when working with audio.

    But of course, the ear perceives loudness in a logarithmic way (the decibel scale), so for it to be perceived as “twice as loud”, it generally needs an exponential increase. Very high and low frequencies need more, since we’re less sensitive to them and don’t perceive increases so well.
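    A sketch of what a saner gain control looks like (the helper name is mine): treat “twice as loud” as roughly +10 dB, convert decibels to an amplitude ratio, and saturate rather than letting the signed samples wrap around:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Scale 16-bit signed PCM samples by a gain given in decibels,
// clamping to the valid range instead of overflowing.
std::vector<int16_t> apply_gain_db(const std::vector<int16_t> &in, double db) {
  const double gain = std::pow(10.0, db / 20.0); // dB -> amplitude ratio
  std::vector<int16_t> out;
  out.reserve(in.size());
  for (int16_t s : in) {
    double scaled = std::clamp(s * gain, -32768.0, 32767.0); // saturate
    out.push_back(static_cast<int16_t>(std::lrint(scaled)));
  }
  return out;
}
```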


  • I know - thank you, though, good to know it’s not just me. Not the first puzzle that I’ve solved using GraphViz, either.

    Some of them do depend on some unstated property of the input that can only be discerned by inspecting it - I don’t feel too bad about that kind of ‘cheat’, as long as it goes from “the input” -> “your code” -> “the output”.

    Some of them - and I’m thinking of that ludicrous “line up the hailstones” one from day 24 last year - are the kind where you parse the input just so you can output it in the right format for Wolfram Alpha. Most unsatisfying as a coding puzzle.


  • Every “pair” of bits has the same pattern of AND and XOR, and the previous “carry bit” is passed into the same OR / (XOR + AND) combo to produce an output bit and the next “carry bit”. The “whole chain” is nearly right - otherwise your 44 bit inputs wouldn’t give a 45 bit output - it’s just a few are swapped over. (In my case, anyway - haven’t seen any others.) All my swaps were either in the “same column” of GraphViz output, or the next column.

    So, yeah. Either “random swaps” with “nearby” outputs, because it’s nearly right and you don’t need to check further away; or use the fact that this is the well-known pattern for adding two numbers in a CPU’s ALU to generate the “correct” sequence, identify which ones are wrong, and output them in alphabetical order… The answer you need doesn’t even require you to pair them up.
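    For reference, the structure being described is a textbook ripple-carry adder - one full adder per bit, with the carry chained along, which is why 44-bit inputs give a 45-bit output. A sketch (the names are mine, not the puzzle’s):

```cpp
#include <cstdint>
#include <utility>

// One "column" of the puzzle's wiring: the classic full adder.
// sum = a XOR b XOR cin;  cout = (a AND b) OR ((a XOR b) AND cin)
std::pair<int, int> full_adder(int a, int b, int cin) {
  int axb = a ^ b;
  return {axb ^ cin, (a & b) | (axb & cin)};
}

// Ripple the carry through n bits, exactly like the puzzle's chain.
// The final carry pops out as the extra top bit of the result.
std::uint64_t ripple_add(std::uint64_t x, std::uint64_t y, int n) {
  std::uint64_t z = 0;
  int carry = 0;
  for (int i = 0; i < n; ++i) {
    auto [s, c] = full_adder((x >> i) & 1, (y >> i) & 1, carry);
    z |= std::uint64_t(s) << i;
    carry = c;
  }
  z |= std::uint64_t(carry) << n;
  return z;
}
```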


  • Yeah - dumped out the input into GraphViz, and then inspected it ‘by eye’ to get the swaps. Nearly finished in the top 100 in the world, too. Feels like a really bad way to get the solution, though.

    If you add eg. 1111 and 1111 and expect 11110, then you’ll get an output like 11010 if there’s a mistake in “bit 2”. Can try all the swaps between x2 / y2 / z2 until you get the “right answer”, and then continue. There’s only about five different ops for each “bit” of the input, so trying all of them won’t take too long.


  • C++ / Boost

    Ah, cunning - my favourite one so far this year, I think. Nothing too special compared to the other solutions - floods the map using Dijkstra, then checks “every pair” for how much of a time saver it is. 0.3s on my laptop; it iterates through every pair twice since it does part 1 and part 2 separately, which could easily be improved upon.

    spoiler
    #include <boost/log/trivial.hpp>
    #include <boost/unordered/unordered_flat_map.hpp>
    #include <boost/unordered/unordered_flat_set.hpp>
    #include <cstddef>
    #include <cstdlib>
    #include <fstream>
    #include <limits>
    #include <queue>
    #include <stdexcept>
    #include <string>
    #include <utility>
    #include <vector>
    
    namespace {
    
    using Loc = std::pair<int, int>;
    using Dir = std::pair<int, int>;
    template <class T>
    using Score = std::pair<size_t, T>;
    template <class T>
    using MinHeap = std::priority_queue<Score<T>, std::vector<Score<T>>, std::greater<Score<T>>>;
    using Map = boost::unordered_flat_set<Loc>;
    
    auto operator+(const Loc &l, const Dir &d) {
      return Loc{l.first + d.first, l.second + d.second};
    }
    
    auto manhattan(const Loc &a, const Loc &b) {
      return std::abs(a.first - b.first) + std::abs(a.second - b.second);
    }
    
    auto dirs = std::vector<Dir>{
        {0,  -1},
        {0,  1 },
        {-1, 0 },
        {1,  0 }
    };
    
    struct Maze {
      Map map;
      Loc start;
      Loc end;
    };
    
    auto parse() {
      auto rval = Maze{};
      auto line = std::string{};
      auto ih = std::ifstream{"input/20"};
      auto row = 0;
      while (std::getline(ih, line)) {
        for (auto col = 0; col < int(line.size()); ++col) {
          auto t = line.at(col);
          switch (t) {
          case 'S':
            rval.start = Loc{col, row};
            rval.map.insert(Loc{col, row});
            break;
          case 'E':
            rval.end = Loc{col, row};
            rval.map.insert(Loc{col, row});
            break;
          case '.':
            rval.map.insert(Loc{col, row});
            break;
          case '#':
            break;
          default:
            throw std::runtime_error{"oops"};
          }
        }
        ++row;
      }
      return rval;
    }
    
    auto dijkstra(const Maze &m) {
      auto unvisited = MinHeap<Loc>{};
      auto visited = boost::unordered_flat_map<Loc, size_t>{};
    
      for (const auto &e : m.map)
        visited[e] = std::numeric_limits<size_t>::max();
    
      visited[m.start] = 0;
      unvisited.push({0, {m.start}});
    
      while (!unvisited.empty()) {
        auto next = unvisited.top();
        unvisited.pop();
    
        if (visited.at(next.second) < next.first)
          continue;
    
        for (const auto &dir : dirs) {
          auto prospective = Loc{next.second + dir};
          if (!visited.contains(prospective))
            continue;
          auto pscore = next.first + 1;
          if (visited.at(prospective) > pscore) {
            visited[prospective] = pscore;
            unvisited.push({pscore, prospective});
          }
        }
      }
    
      return visited;
    }
    
    using Walk = decltype(dijkstra(Maze{}));
    
    constexpr auto GOOD_CHEAT = 100;
    
    auto evaluate_cheats(const Walk &walk, int skip) {
      auto rval = size_t{};
      for (auto &start : walk) {
        for (auto &end : walk) {
          auto distance = manhattan(start.first, end.first);
          if (distance <= skip && end.second > start.second) {
            auto improvement = int(end.second) - int(start.second) - distance;
            if (improvement >= GOOD_CHEAT)
              ++rval;
          }
        }
      }
      return rval;
    }
    
    } // namespace
    
    auto main() -> int {
      auto p = parse();
      auto walk = dijkstra(p);
      BOOST_LOG_TRIVIAL(info) << "01: " << evaluate_cheats(walk, 2);
      BOOST_LOG_TRIVIAL(info) << "02: " << evaluate_cheats(walk, 20);
    }
    




  • My workplace is a strictly BitBucket shop, was interested in expanding my skillset a little, experiment with different workflows. Was using it as a fancy ‘todo’ list - you can raise tickets in various categories - to remind myself what I was wanting to do next in the game I was writing. It’s a bit easier to compare diffs and things in a browser when you’ve been working on several machines in different libraries than it is in the CLI.

    Short answer: bit of timesaving and nice-to-haves, but nothing that you can’t do with the command line and ssh. But it’s free, so there’s no downside.


  • Ah, nice. Had been experimenting with using my Raspberry Pi 3B as my home Git server for all my personal projects - easy sync between my laptop and desktop, and another backup for the stuff that I’d been working on.

    Tried running Gitea on it to start with, but it’s a bit too heavy for a device like that. Forgejo runs perfectly, and has almost exactly the same, “very Github inspired” interface. Time to run some updates…


  • Most common example would be a bicycle, I think - your pedals tighten in the same direction the wheel turns as you look at them. So your left pedal has a left-hand thread, and goes on and comes off backwards.

    The effect of precession also means that you can tighten the pedals just finger tight and a good long ride will make them absolutely solid - you need to bounce up and down on a spanner to loosen them.