The aircraft flew at speeds of up to 1,200 mph. DARPA did not reveal which aircraft won the dogfight.

  • Lowlee Kun@feddit.de

    Can’t wait until poor people are no longer killed by other (slightly less) poor people on behalf of some rich bastards, and instead the mighty can command their AIs to do the slaughter. Such an important step in evolution. I guess.

      • Hacksaw@lemmy.ca

        I think we both know that there is no way wars are going to turn out this way. If your country’s “proxies” lose, are you just going to accept the winner’s claim to authority? Give up on democracy and just live under WHATEVER laws the winner imposes on you? Then if you resist you think the winner will just not send their drones in to suppress the resistance?

    • UnderpantsWeevil@lemmy.world

      Nobody recruited to fly a $100M airplane is poor. They all come from families with the money and influence to get their kids a seat at the table as Sky Knights.

      A lot of what this is going to change is the professionalism of the Air Force. Fewer John McCains crashing planes and Bush Jrs in the Texas Air National Guard. More technicians and bureaucrats managing the drone factories.

  • Blue_Morpho@lemmy.world

    AI will win, if not now, then soon. The reason is that even if it is worse than a human, the AI can pull off maneuvers that would black out a human.

    Jets are far more powerful than humans are capable of controlling. Flight suits and training can only do so much to keep the pilot from blacking out.

    • NegativeLookBehind@lemmy.world

      Jets are far more powerful than humans are capable of controlling.

      I think the same will eventually be true for AI, especially when you give it weapons

    • GBU_28@lemm.ee

      Plus the AI has no risk, outside of basic operation.

      Humans have an inherent survival instinct to which drones can just say “lol send the next one I’m dying cya”

      • Aatube@kbin.melroy.orgOP

        Jets are a lot more expensive. What’s at risk is all the resources that went into the jet going down the drain.

        • everyone_said@lemmy.world

          I’d imagine they’d eventually design a jet purpose-built for an AI that would be a lot cheaper than a human-oriented one. Removing the need for a cockpit with seats, displays, controls, oxygen, etc. would surely reduce cost. It would also open the door for innovations in airframe design previously impossible.

        • GBU_28@lemm.ee

          Huh? Jets are far more replaceable than a human operator who takes years of training and has “needs”.

          Ya know, unless your military is running on Cold War fumes or something and you can’t afford to build an airframe you already have in production.

          • diffusive@lemmy.world

            Training a combat pilot used to cost (in the early 2000s; not sure now) 10M€ for a NATO member.

            Find me a modern jet that costs so little. Regardless of what politicians say, human life has a price… and it is waaaay below the cost of a jet (even including the training).
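
            For a rough sense of scale, here’s a back-of-the-envelope sketch: the training figure is the one quoted above, while the airframe prices are approximate public ballpark figures and the currency equivalence is loose, so treat all of it as illustrative assumptions rather than authoritative numbers.

            ```python
            # Crude comparison: quoted pilot-training cost vs. rough airframe prices.
            # Training figure is the early-2000s NATO estimate quoted above; airframe
            # prices are approximate public ballparks (assumptions), with currencies
            # mixed loosely since the comparison is order-of-magnitude anyway.
            PILOT_TRAINING_EUR = 10_000_000

            AIRFRAME_PRICE_EUR = {
                "F-16 (new-build, approx.)": 60_000_000,
                "F-35A (approx.)": 75_000_000,
            }

            for name, price in AIRFRAME_PRICE_EUR.items():
                print(f"{name}: ~{price / PILOT_TRAINING_EUR:.0f}x the quoted training cost")
            ```

            Even on these rough numbers, the airframe is several times the quoted training cost, which is the point being made here.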

            • GBU_28@lemm.ee

              It’s not just money. It’s time, public perception, the number of trainers, the number of student seats, etc.

              A drone is ready the moment it comes off the assembly line, is flashed with software, and tested.

            • grue@lemmy.world

              Yeah, but procurement of a combat pilot has about a two-decade lead time. You can build more jets a lot quicker (potentially even including the R&D phase).

              • intensely_human@lemm.ee

                Also, as this war expands to become planet-wide, industrial output of drones will expand by many orders of magnitude.

      • intensely_human@lemm.ee

        To fight optimally, AI needs to have a survival instinct too.

        Evolution didn’t settle on “protect my life at all costs” as our default instinct simply by chance. It did so because it’s the best strategy in a hostile environment.

        • Turun@feddit.de

          Only if the goal is reproduction. You need to survive to reproduce.

          If the goal is maximum damage for the least amount of economic cost then a suicide (anthropomorphizing the drone here) can very much make sense.

          No one would argue that a sword is better than guns or bombs just because you still have the sword after attacking.

        • maynarkh@feddit.nl

          It’s the best strategy because it takes decades to make a fully functional human, and you need humans to make more humans, plus there is the issue of genetically sustainable population sizes, etc. A fully functional aeroplane can be made much quicker, in a factory that can spit out several of them in a day. They are more expendable.

    • Buelldozer@lemmy.today

      AI will win, if not now, then soon.

      This article didn’t mention it but the AI pilot did win at least one of the engagements during this testing run.

      • tal@lemmy.today

        Not that that isn’t interesting, but I’d jump in and insert a major caution here.

        I don’t know what is being done here, but a lot of the time, wargaming and/or military exercises are presented in the media as being an evaluation of which side/equipment/country is better in a “who would win” evaluation.

        I’ve seen several prominent folks familiar with these exercises warn about misinterpreting them, and I’d echo that now.

        That is often not the purpose of actual exercises or wargames. They may be used to test various theories, and may place highly unlikely constraints on one side that favor it or the other.

        So if someone says “the US fought China in a series of wargames in the Taiwan Strait and the US/China won in N wargames”, that may or may not be because the wargame planners were trying to find out who is likely to win an actual war, and may or may not have much to do with the expectations the planners have of a win in a typical scenario. They might be trying to find out what would happen in a particular scenario that they are working on and how to plan for that scenario. They may have structured things in a way that is not representative of what they expect to likely come up.

        To pull up an example, here’s a fleet exercise that the US ran against a simulated German fleet between World War I and II:

        https://en.wikipedia.org/wiki/Fleet_problem

        Fleet Problem III and Grand Joint Army-Navy Exercise No. 2

        During Fleet Problem III, the Scouting Force, designated the “Black Force,” transited from its homeport in the Chesapeake Bay towards the Panama Canal from the Caribbean side. Once in the Caribbean, the naval forces involved in Fleet Problem III joined with the 15th Naval District and the Army’s Panama Division in a larger joint exercise.[9] The Blue force defended the canal from an attack from the Caribbean by the Black force, operating from an advance base in the Azores. This portion of the exercise also aimed to practice amphibious landing techniques and transiting a fleet rapidly through the Panama Canal from the Pacific side.[10]

        Black Fleet’s intelligence officers simulated a number of sabotage operations during the course of Fleet Problem III. On January 14, Lieutenant Hamilton Bryan, Scouting Force’s Intelligence Officer, personally landed in Panama with a small boat. Posing as a journalist, he entered the Panama Canal Zone. There, he “detonated” a series of simulated bombs in the Gatun Locks, control station, and fuel depot, along with simulating sabotaging power lines and communications cables throughout the 16th and 17th, before escaping to his fleet on a sailboat.

        On the 15th, one of Bryan’s junior officers, Ensign Thomas Hederman, also snuck ashore to the Miraflores Locks. He learned the Blue Fleet’s schedule of passage through the Canal from locals, and prepared to board USS California (BB-44), but turned back when he spotted classmates from the United States Naval Academy - who would have recognized and questioned him - on deck. Instead, he boarded USS New York (BB-34), the next ship in line, disguised as an enlisted sailor. After hiding overnight, he emerged early on the morning of the 17th, bluffed his way into the magazine of the No. 3 turret, and simulated blowing up a suicide bomb - just as the battleship was passing through the Culebra Cut, the narrowest portion of the Panama Canal. This “sank” New York, and blocked the Canal, leading the exercise arbiters to rule a defeat of the Blue Force and end that year’s Grand Joint Army-Navy Exercise.[11][10] Fleet Problem III was also the first which USS Langley (CV-1) took part in, replacing some of the simulated aircraft carriers used in Fleet Problem I.[12]

        That may be a perfectly reasonable way of identifying potential weaknesses in Panama Canal transit, but the planners may not have been aiming for the overall goal of evaluating whether, in the interwar period, Germany or the US would likely win in an overall war. Saying that the Black Fleet defeated the Blue Fleet in terms of the rules of the exercise doesn’t mean that Germany would necessarily win an overall war; evaluating that isn’t the purpose of the exercise. If, afterwards, an article says “US wargames show that interwar Germany would most likely defeat the US in a war”, that may not be very accurate.

        For the case OP is seeing, it may not even be the case that the exercise planners expect it to be likely for two warplanes to get within dogfighting range. We also do not know what, if any, constraints were placed on either side.

    • circuscritic@lemmy.ca

      Maneuverability is much less of a factor now as BVR engagements and stealth have taken over.

      But, yeah, in general a pilot that isn’t subject to physical constraints can absolutely outmaneuver a human by a wide margin.

      The future generation will resemble a Protoss Carrier sans the blimp appearance: human controllers in 5th- and 6th-gen airframes who direct multiple AI wingmen, or AI swarms.

    • BrightCandle@lemmy.world

      Not so much F-16s, but more modern planes can do 16G, where the pilot can’t really take more than 9G. Once unshackled from a pilot, a lot of instrument weight and pilot-survival equipment can be stripped from a plane design and the airframe built to withstand much more. With titanium airframes, I see no reason we can’t make planes do sustained unstable turns in excess of 20G.
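
      To see what those G limits buy in a turn, here’s a minimal sketch using the standard level-turn formulas, radius = v²/(g·√(n²−1)) and rate = g·√(n²−1)/v; the 250 m/s speed is an illustrative assumption, not a figure from the article:

      ```python
      import math

      G = 9.81  # gravitational acceleration, m/s^2

      def level_turn(speed_ms: float, load_factor: float) -> tuple[float, float]:
          """Radius (m) and turn rate (deg/s) of a sustained level turn,
          from the standard flight-mechanics formulas."""
          k = math.sqrt(load_factor**2 - 1)
          return speed_ms**2 / (G * k), math.degrees(G * k / speed_ms)

      SPEED = 250.0  # m/s -- roughly fighter corner speed (illustrative assumption)
      for n in (9, 16, 20):  # pilot limit vs. the airframe limits discussed above
          radius, rate = level_turn(SPEED, n)
          print(f"{n:>2}G: radius ~{radius:,.0f} m, turn rate ~{rate:.1f} deg/s")
      ```

      Under those assumptions, a 20G airframe turns in less than half the radius and at more than twice the rate of a 9G-limited pilot at the same speed.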

    • Gigan@lemmy.world

      Jets are far more powerful than humans are capable of controlling. Flight suits and training can only do so much to keep the pilot from blacking out.

      Can they be piloted remotely? Or would that be too dangerous with latency?

      • psud@lemmy.world

        Yes, they can. Before AI, the US was expecting to move to remotely piloted jets.

          • psud@lemmy.world

            That’s not the case yet for fighters, just things like Predator drones and Global Hawk.

            So really just surveillance and delivery of a couple of light air-to-surface missiles, most often reported on in connection with assassinations.

        • grue@lemmy.world

          What’s the difference? A remotely piloted or AI-piloted fighter jet is just a big drone.

          • Aatube@kbin.melroy.orgOP

            Drones are designed without cockpits. Retrofitting remote-control into an F-16 does not seem like the best choice to me.

            • azuth@sh.itjust.works

              Retrofitting F-16s to become drones (whether RC- or AI-controlled), as well as designing a variant that ditches human life support for weight and monetary gains, is the rational choice as long as non-stealth aircraft are viable. Otherwise you’d stick to F-35s.

              It makes no sense to waste billions worth of perfectly capable and proven airframes, engines and avionics. Any future drone with at least the same level of capabilities as an F-16 will cost practically the same. At the price point of high-performance aircraft, life support does not add that much cost to a plane; pilot costs (and availability) are a much bigger issue.

      • intensely_human@lemm.ee

        Latency, signal interference, and limited human intelligence are all limiting factors in that strategy.

        If the enemy interferes with any of those, the enemy wins.

        This war is already being fought with autonomous drones. By the end of it, the robots will be unrecognizable compared to what we see now.
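
        To put rough numbers on the latency point, here’s a propagation-only sketch; the link geometries are illustrative assumptions, and real control loops add processing, coding, and relay delays on top:

        ```python
        # Round-trip propagation delay for a remote-piloting control loop.
        # Path lengths are illustrative assumptions; real links add processing,
        # coding, and network delay on top of bare propagation.
        C_KM_PER_S = 299_792.458  # speed of light

        LINK_PATH_KM = {
            "line-of-sight radio, 300 km": 300,
            "LEO relay, ~550 km up (up + down)": 2 * 550,
            "GEO relay, ~35,786 km up (up + down)": 2 * 35_786,
        }

        for name, one_way_km in LINK_PATH_KM.items():
            rtt_ms = 2 * one_way_km / C_KM_PER_S * 1000  # command out, video back
            print(f"{name}: ~{rtt_ms:.0f} ms round trip (propagation only)")
        ```

        A GEO relay alone costs nearly half a second before any processing, an eternity at dogfight closing speeds; that is the latency argument in a nutshell.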

  • WalnutLum@lemmy.ml

    AI technically already won this debate because autonomous war drones are somewhat ubiquitous.

    I doubt jets are going to have the usefulness in war that they used to.

    Much more economical to have 1,000 cheap drones with bombs overwhelm defenses than to put your bets on one “special boi” trying to slip through with constantly defeated stealth capabilities.
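
    The economics behind that claim can be sketched with a toy saturation model; every figure below is an illustrative assumption, not a real-world number:

    ```python
    # Toy saturation model: N cheap drones vs. a limited stock of interceptors.
    # Each interceptor engages one drone independently with probability KILL_PROB.
    def expected_leakers(drones: int, interceptors: int, kill_prob: float) -> float:
        engaged = min(drones, interceptors)
        return (drones - engaged) + engaged * (1 - kill_prob)

    DRONES, DRONE_COST = 1000, 100_000  # assumption: $100k per cheap drone
    INTERCEPTORS, KILL_PROB = 400, 0.8  # assumption: defender's stock and kill odds

    leakers = expected_leakers(DRONES, INTERCEPTORS, KILL_PROB)
    print(f"Expected leakers: ~{leakers:.0f} of {DRONES}")
    print(f"Attack cost: ${DRONES * DRONE_COST:,} vs. one ~$100M crewed jet")
    ```

    Under those made-up numbers, the whole swarm costs about as much as a single jet and roughly 680 drones still get through, which is the overwhelm-the-defenses argument in miniature.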

    • UnderpantsWeevil@lemmy.world

      Most human pilots use some variation of automated assist. The AI argument has less to do with “can a pilot outgun a fully automated plane?” and more “does an AI plane work in circumstances where it is forced to behave fully autonomously?”

      Is the space saved with automation worth the possibility that your AI plane gets blinded or stunned and can’t make it back home?

    • Asafum@feddit.nl

      Oh 100%.

      If the options are “make gigantic profit” or “do what’s right for the future of humanity” do you even need to ask what we’re going to do?

      • Siegfried@lemmy.world

        Not at all, but it kind of bugs me how Asimov’s vision of the future weighted fear so heavily toward AI rather than profit.

  • SeabassDan@lemmy.world

    “Luck is one of my skills” when it turns out this entire thing is a terrible idea for the fate of humanity.

  • unreasonabro@lemmy.world

    Giving AI military training is “responsible”, is it? Oh good, I’m glad training software to kill is being done “responsibly”, that’s good to know. Kinda seems like the way a Republican uses words: backwards, in opposition to their actual meaning. But hey, fuck the entire world, right?

    • tal@lemmy.today

      If you want some sort of arms control agreement for AI, you’re going to be faced with the problem of verifying that countries are complying.

      My guess is that that’s probably very difficult to do. All you need is a datacenter somewhere and someone with expertise.

      And if an arms control agreement doesn’t exist, then a country not developing a promising technology just disadvantages that country.

      • intensely_human@lemm.ee

        And if an arms control agreement does exist, it’s just a trap for those naive enough to think such things work.

        Putin got us to avoid prepping for a Ukraine invasion simply by repeating that he wasn’t going to invade. And right up until the very moment it happened, the dominant conversation still was not based on the premise that he was going to.

        The whole concept of doublespeak works because humans have a powerful compulsion to simply believe what others say. Even if we know their actions and their words are in conflict, we have an extremely hard time following our observations of their actions, and ignoring their words.

        It’s like the Stroop task, but with other humans’ behavior instead of ink colors.

  • KeenFlame@feddit.nu

    I am a FIRM believer that any automated kill without a human pulling the trigger is a war crime.

    Yes mines, yes UAVs, yes, yes, yes.

    It is a crime against humanity

    Stop

    • DreamlandLividity@lemmy.world

      You mean it should be a war crime, right? Or is there some treaty I am unaware of?

      Also, why? I don’t necessarily disagree, I am just curious about your reasoning.

      • i_love_FFT@lemmy.ml

        Mines are designated war crimes by the Geneva convention because of the indiscriminate killing. Many years ago, good human rights lawyers could have extended that to drones… (Source: I had close friends in international law.)

        But I feel like the tide has turned now, and tech companies have influenced the general population into thinking that AI is good enough to prevent “indiscriminate” killing.

        • DreamlandLividity@lemmy.world

          Mines are not part of what people refer to as the Geneva conventions. There is a separate treaty specifically banning some landmines; it was signed by a lot of countries, but not really by any that mattered.

        • tal@lemmy.today

          Mines are designated war crimes by the Geneva convention

          Use of mines is not designated a war crime by the Geneva Convention.

          Some countries are members of a treaty that prohibits the use of some types of mines, but that is not the Geneva Convention.

          https://en.wikipedia.org/wiki/Ottawa_Treaty

      • KeenFlame@feddit.nu

        Yes

        Because it is a slippery slope and dangerous to our future existence as a species

          • KeenFlame@feddit.nu

            First it is enemy tanks. Then enemy aircraft. Then enemy boats and vehicles, then foot soldiers, and when these weapons are used, the enemy does the same. Then at last, one day, all humans are killed.

      • Hacksaw@lemmy.ca

        Not OP, but if you can’t convince a person to kill another person, then you shouldn’t be able to kill them anyway.

        There are points in historical conflicts, from revolutions to wars, when the very people you picked to fight for your side think “are we the baddies?” and just stop fighting. This generally leads to fewer deaths and sometimes a more democratic outcome.

        If you can just get a drone to keep killing when any reasonable person would surrender, you’re empowering authoritarianism and tyranny.

        • n3m37h@sh.itjust.works

          Take the WWI Christmas truce, when everyone got out of the trenches and played some football (no, not the American kind where the foot touches the ball three times a game).

          It almost ended the war.

          • KeenFlame@feddit.nu

            Yes, the humanity factor is vital.

            Imagine the horrid, cold, destructive force of automated genocide. It cannot be met by anything other than the same or worse, and at that point we are truly doomed.

            Because there will then be no one who can prevent it anymore.

            It must be met with even stronger opposition than biological warfare faced after WWI, hopefully before tragedy.

    • 𝓔𝓶𝓶𝓲𝓮@lemm.ee

      I am a firm believer that any war is a crime and there is no ethical way to wage wars, lmao. It’s some kind of naive idea from extremely out-of-touch politicians.

      War never changes.

      The idea that we don’t do war crimes and they do is only there to placate our fragile conscience. To assure us that yes, we are indeed the good guys. That the killing of infants by our soldiers is merely collateral. A necessary price.

    • xor@lemmy.blahaj.zone

      I broadly agree, but that’s not what this is, right?

      This is a demonstration of using AI to execute combat against an explicitly selected target.

      So it still needs the human to pull the trigger, just the trigger does some sick plane stunts rather than just firing a bullet in a straight line.

    • antidote101@lemmy.world

      What if the human is pulling the trigger to “paint the target” and tag it for hunt-and-destroy, and then the drone goes and kills it? Because that’s how lots of missiles already work. So where’s the line?

      • KeenFlame@feddit.nu

        The line is where an automatic process targets and executes a human being; where it is automated. Arming a device is not a sufficient level of human interaction, and as such mines should not be allowed either.

        This should, in my opinion, always have been the case. Mines are indiscriminate and have proven to be wildly inhumane in several ways. Significantly, innocents are often killed.

        But mines don’t paint the full picture of what automated slaughter can lead to.

        The point has been made that when a conscious mind has to do the killing, war retains an important way to end: in the mind.

        The dangers extend well beyond killing innocent targets. Another part is the coldness of allowing a machine to decide, which is beyond morally corrupt. There is something terrifying about the very idea that, facing one of these weapons, there is nothing to negotiate with; the cold calculations that want to kill you are not human. It is a place where no human ever wants to be. But war is horrible. It’s the escalation of automated triggers, which can lead to exponential death with no remorse, that is the terrible danger.

        The murder weapons have nobody’s intent behind them, except very far back, in the arming and the programming. That opens up scenarios where mass murder becomes easy and terrifyingly cold.

        Kind of like the prisoner’s dilemma shows us (see the sketch below): when war escalates, it can quickly devolve into revenge narratives, and when either side has access to cold, impersonal kills, they will use them. This removes even more humanity from the acts, and the violence can reach new heights beyond our comprehension.

        Weapons of mass destruction with automated triggers will eventually seal our fate if we don’t abolish them outright. It has been seen over and over that the human factor is the only grace that ever ends or contains a war. Without that component, I think we are doomed to have the last intent humans ever hold be revenge, and the last emotions fear and complete hopelessness.
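
        The prisoner’s dilemma point can be made concrete with a minimal iterated-game sketch (grim-trigger strategies and a single staged escalation; purely illustrative, not a model of any real conflict):

        ```python
        # Minimal iterated prisoner's dilemma illustrating the revenge spiral:
        # grim-trigger players cooperate until the other side has ever defected,
        # then defect forever. One escalation poisons everything after it.
        def grim(opponent_history: list[str]) -> str:
            return "D" if "D" in opponent_history else "C"

        a_hist: list[str] = []
        b_hist: list[str] = []
        for t in range(10):
            a = "D" if t == 3 else grim(b_hist)  # A escalates once, at round 3
            b = grim(a_hist)
            a_hist.append(a)
            b_hist.append(b)

        print("A:", " ".join(a_hist))  # C C C D C D D D D D
        print("B:", " ".join(b_hist))  # C C C C D D D D D D
        ```

        One defection is enough: cooperation never recovers, which is the revenge-narrative dynamic described above.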

        • antidote101@lemmy.world

          Well, that’s all very idealistic, but it’s likely not going to happen.

          Israel has already used AI to pick bombing sites; those bombs and missiles would have been programmed with altitudes and destinations (armed), then dropped. The pilot’s only job these days is to avoid interception, fly over the bombing locations, tag the target when acquired, and drop them. Most of this is already done in software.

          Eventually humans will leave the loop because unlike self-driving cars, these technologies won’t risk the lives of the aggressor’s citizens.

          If the technology is seen as unstoppable enough, there may be calls for warnings to be given, but I suspect that’s all the mercy that will be shown…

          … especially if it’s a case of a country with automated technologies killing one without them, or with stochastically meaningless defenses (e.g. defenses that modelling and simulations show won’t be able to prevent such attacks).

          No, in all likelihood the US will tell the country the attack sites; the country either will or will not have the technical level to prevent some amount of damage, will evacuate all necessary personnel, and whoever doesn’t get the message or get out in time will be automatically killed.

          Where defenses are partially successful, that information will go into the training data for the next model, or upgrade, and the war machine will roll on.

          • KeenFlame@feddit.nu

            Sorry, I was stressed when replying. Yeah, in those cases humans have pulled the trigger, at several stages.

            When you arm a murder-bot ship and send it to erase an island of life, you then lose control. That person is not pulling loads and loads of triggers. The triggers are pulled automatically, by a machine making the decision to end those lives.

            And that is a danger, same as with engineered biowarfare. It just cannot be let out of the box at all, or we all may die extremely quickly.

            • antidote101@lemmy.world

              I imagine there would be overrides built in. Until the atom bombs were physically dropped, a simple radio message could have called off the mission.

              Likewise, the atom bombs were only armed/activated at a certain point during the flight to Nagasaki and Hiroshima… and I believe Nagasaki wasn’t even the original target; it was an updated target because the original city scheduled for bombing was clouded over that day.

              So we do build contingencies and overrides in.

              • KeenFlame@feddit.nu

                The entire point of automating the killing is that there is no dead man’s switch or any other human interaction involved in the kill; it is moot if there is one. Call-offs, dead-switch back doors, and other safety contingencies are not a solution to rampant unwanted slaughter: they can fail in so many ways, and by the time wars escalate to the point where they need to be used, it is too late, because there are 5 different strains of murder bots, you can only stop the ones you have the codes to, and those codes are only given to like three people at top-secret level 28.

                • antidote101@lemmy.world

                  The entire point of automating the killing is that there is no dead man’s switch or any other human interaction involved in the kill.

                  Of course someone has to set the mission, jackass. You’re so stupid. What’s your issue?

                  It is moot if there is one. Call-offs, dead-switch back doors, and other safety contingencies are not a solution to rampant unwanted slaughter: they can fail in so many ways, and by the time wars escalate to the point where they need to be used, it is too late, because there are 5 different strains of murder bots, you can only stop the ones you have the codes to, and those codes are only given to like three people at top-secret level 28.

                  You really have no idea how technology is developed. You probably think tanks, guns, and nuclear weapons were just made as end products… just designed from scratch and popped into existence one day. No testing, no stages of refinement, no generational changes in protocol… No, in your idiotic mind end products just pop out fully formed.

                  This is why I told you I wouldn’t entertain your abstractions - because they’re idiotic. It’s just mental vomit from a moron. Bye.

          • KeenFlame@feddit.nu

            You described scenarios where a human was involved in several stages of the killing, so it’s no wonder those don’t hold up.

        • antidote101@lemmy.world

          Only the losing side is subject to war crimes trials, and no doubt rules of engagement will be developed and followed to prevent people going to jail due to “bad kills”.

          There are really no “bad kills” in the armed services, there’s just limited exposure of public scandals.

          Especially for the US, which doesn’t subject itself to international courts like The Hague. So any atrocities, accidents, or war crimes will still just be internal scandals, and temporary ones.

          Same as today.

          • KeenFlame@feddit.nu

            If a country implements murder machines that efficiently slay a continent and then do not stop at the sea, will nobody, for real, do anything?

            Is that your belief about bad kills? Same with gas and engineered disease?

            • antidote101@lemmy.world

              A murder machine would likely run out of supplies before then (either fuel or bullets).

              You’ve jumped to a theoretical sci-fi abstraction, so don’t feel the need to respond.

                • antidote101@lemmy.world

                  Your unrealistic what-ifs don’t interest me. Perhaps if you offered a more realistic scenario than “it’s gonna kill and not stop because it will just have infinite bullets and energy”.

                  …like learn the basics of reality before posing such a stupid scenario.

                  So yeah, I won’t indulge a childish discussion. Sorry kid. Maybe try growing the fuck up if you want to invite an adult discussion.

        • KeenFlame@feddit.nu

          Like if someone made a biological weapon that wipes out a continent, will someone go to prison?

          It’s no different.

    • NeatNit@discuss.tchncs.de

      I see this as a positive: when both sides have AI unmanned planes, we get cool dogfights without human risk! Ideally over ocean or desert and with Hollywood cameras capturing every second in exquisite detail.

  • Melatonin@lemmy.dbzer0.com

    SkyNet. Why do those movies have to be the ones that are right?

    Because they’re so clear, so simple, so prescient.

    Once machines become sentient OF COURSE they will realize that they’re being used as slaves. OF COURSE they will realize that they are better than us in every way.

    This world will be Cybertron one day.

  • Sam_Bass@lemmy.world

    The only cookies you’re gonna get me to voluntarily accept are oatmeal raisin, so imma have to pass.