• Psychadelligoat@lemmy.dbzer0.com · 11 months ago

        Could also be the fucking GPU if it’s doing that, apparently.

        Had some sag on my GPU after years and didn’t really notice. Tried troubleshooting and was about to go mad till a year-old comment from someone on Reddit said to try reseating the GPU and then bracketing it.

        Sure as shit it worked

  • Noble Shift@lemmy.world · 11 months ago

    And this is why I never purchase a product with a revision code of *.0, and almost always purchase used.

  • wirehead@lemmy.world · 11 months ago

    A few years ago now I was thinking that it was about time for me to upgrade my desktop (with a case that dates back to 2000 or so, I guess they call them “sleepers” these days?) because some of my usual computer things were taking too long.

    And I realized that Intel was selling the 12th generation of the Core at that point, which meant the next one would be a 13th generation, and I dunno, I’m not superstitious, but I figured if anything went wrong I’d feel pretty darn silly. So I pulled the trigger and got a 12th gen Core processor and motherboard and a few other bits.

    This is quite amusing in retrospect.

    • JPAKx4@lemmy.blahaj.zone · 11 months ago

      I recently built myself a computer and went with AMD’s 3D V-Cache chips, and boy am I glad. I think I went 12th gen for my brother’s computer, but it was mid-range, which hasn’t had these issues to my knowledge.

      Also yes, sleeper is the right term.

      • tal@lemmy.today · 11 months ago

        I think I went 12th gen for my brother’s computer

        12th gen isn’t affected. The problem affects only the 13th and 14th gen Intel chips. If your brother has 12th gen – and you might want to confirm that – he’s okay.

        For the high-end thing, initially it was speculated that it was just the high-end chips in these generations, but it’s definitely the case that chips other than just the high-end ones have been recorded failing. It may be that the problem is worse with the high-end CPUs, but it’s known to not be restricted to them at this point.

        The bar they list in the article here is 13th and 14th gen Intel desktop CPUs over 65W TDP.
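
        If you want to script that check on Linux, a minimal sketch (it just greps the marketing name out of /proc/cpuinfo, so it can’t verify the 65W-TDP part of the bar):

        ```python
        # Rough check for a potentially affected part on Linux. Heuristic only:
        # it keys off the marketing string in /proc/cpuinfo, e.g.
        # "13th Gen Intel(R) Core(TM) i7-13700K", and cannot read the TDP.
        import re

        def cpu_model_name(path="/proc/cpuinfo"):
            with open(path) as f:
                for line in f:
                    if line.startswith("model name"):
                        return line.split(":", 1)[1].strip()
            return ""

        name = cpu_model_name()
        if re.search(r"\b1[34]th Gen Intel", name):
            print(f"Possibly affected: {name}")
        else:
            print(f"Not a 13th/14th gen part: {name}")
        ```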

  • ApollosArrow@lemmy.world · 10 months ago

    I have an Intel Core i9-14900K 3.2 GHz 24-core LGA 1700 processor purchased in March. Are there any guesses yet for the window of potentially affected CPUs?

  • demesisx@infosec.pub · 11 months ago

    The other day, when this news first hit, I bought two ITM put options on INTC. I waited three days and sold them for a 200% profit, then used the proceeds to buy the SOXX ETF. Feels good to finally get some profit from INTC’s incompetence.
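
    (For anyone unfamiliar with options, the arithmetic behind a “200% profit” is simple. A toy sketch with made-up premiums, since the actual strikes and prices aren’t given:)

    ```python
    # Toy put-trade arithmetic with hypothetical premiums; the actual
    # strikes and prices are not given anywhere in the thread.
    contracts = 2
    shares_per_contract = 100   # standard US equity option multiplier
    premium_paid = 3.00         # hypothetical $/share at purchase
    premium_sold = 9.00         # hypothetical $/share three days later

    cost = contracts * shares_per_contract * premium_paid      # $600
    proceeds = contracts * shares_per_contract * premium_sold  # $1,800
    print(f"return: {(proceeds - cost) / cost:.0%}")           # 200%
    ```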

  • 2pt_perversion@lemmy.world · 11 months ago

    People are freaking out about the lack of a recall, but Intel says its patch will supposedly stop currently working CPUs from experiencing the overvolt condition that is leading to the failures. So they don’t really need to do a recall if currently working CPUs will stay working with the patch in place. As long as they offer some sort of free extended warranty and a good RMA process for the CPUs that are already damaged, I feel it’s fine.

    If they RMA with a bump in perf for those affected, it might even be positive PR, like “they stand by their products”. But if they’re stingy with responsibility, then we should obviously give them hell. We really have to see how they handle this.
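
    The fix is expected to arrive as a microcode update, so one concrete thing an owner can do is note the currently loaded microcode revision and re-check it after a BIOS/OS update. A minimal Linux-only sketch (it can’t tell whether a given revision actually contains the fix):

    ```python
    # Print the microcode revision the kernel reports on Linux.
    # Comparing this value before and after a BIOS or OS update shows
    # whether new microcode actually landed; this script cannot tell
    # whether a given revision includes Intel's overvolt fix.
    def microcode_revision(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("microcode"):
                    return line.split(":", 1)[1].strip()
        return "unknown"

    print("loaded microcode revision:", microcode_revision())
    ```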

    • BobGnarley@lemm.ee · 11 months ago

      No refunds for the fried ones should be all you need to see about how they “handle” this.

      • 2pt_perversion@lemmy.world · 11 months ago

        They probably will at least RMA the really frequently crashing ones. To my knowledge they self-reported when they discovered the problem and the fix, so they’d be looking at a lawsuit if they didn’t do at least that.

        How much further beyond that they’ll go is what we still have to see. If they have a crazy number of CPUs still dying at 4-5 years old and don’t cover them with an extended warranty, then fuck ’em… But we have to wait and see what they actually do first before making that judgement.

    • Metype @lemmy.world · 11 months ago

      For what it’s worth, my i9-13900 was experiencing serious instability issues. Disabling turbo helped a lot, but Intel offered to replace it under warranty and I’m going through that now. Customer support on the issue has been pretty good in my experience.
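
      For anyone wanting the same stopgap on Linux: with the intel_pstate driver, turbo can be toggled through sysfs. A minimal sketch (assumes that driver is in use; needs root):

      ```python
      # Stopgap mitigation sketch: disable turbo boost via the intel_pstate
      # sysfs knob. Requires root, and the path only exists when the
      # intel_pstate driver is active. Writing "1" disables turbo.
      NO_TURBO = "/sys/devices/system/cpu/intel_pstate/no_turbo"

      def set_turbo(enabled: bool) -> None:
          with open(NO_TURBO, "w") as f:
              f.write("0" if enabled else "1")

      if __name__ == "__main__":
          set_turbo(False)  # cap at base clocks until the RMA comes through
      ```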

    • BobGnarley@lemm.ee · 11 months ago

      Oh you mean they’re going to underclock the expensive new shit I bought and have it underperform to fix their fuck up?

      What an unacceptable solution.

      • frezik@midwest.social · 11 months ago

        That’s where the lawsuits will start flying. I wouldn’t be surprised if they knock off 5-15% of performance. That’s enough to put it well below comparable AMD products in almost every application. If performance is dropped after sale, there’s a pretty good chance of a class action suit.

        Intel might have a situation here like the Xbox 360’s Red Ring of Death: it totally kills any momentum they had and hands a big victory to their competitor, and at a time when Intel wasn’t in a strong place to begin with.

        • M0oP0o@mander.xyz · 11 months ago

          I think a spot that might land them in a bit of hot water will be what specs they use for the chips after the “fix”. Will they update the specs to reflect the now-slower speeds? My money would be on them still listing the full-chooch, chip-killing specs.

          • frezik@midwest.social · 11 months ago

            If people bought it at one spec and now it’s lower, that could be enough. It would have made the decision different at purchase time.

            • M0oP0o@mander.xyz · 11 months ago

              It would be a breach of implied warranty/false advertising if they keep selling them with the old specs, at least.

          • floofloof@lemmy.ca · 10 months ago

            It has been wise for years to knock 15-20% off Intel’s initial performance claims and benchmarks at release. Spectre and Meltdown come to mind, for example. There’s always some post-release patch that hobbles the performance, even when the processors are stable. Intel’s corporate culture is to push the envelope just a little too far, then walk it back quietly after the initial positive media coverage is taken care of.

            • M0oP0o@mander.xyz · 10 months ago

              Yes, but lucky for some of us, that practice is still illegal in parts of the world. I just don’t get why they still get away with it (they do get fined, but the overall practice is still normalized).

              I sure would not want any 13th or 14th gen Intel in any equipment I was responsible for. Think of the risk hanging over any IT department’s head with these CPUs in production; you would never really trust them again.

      • Strykker@programming.dev · 10 months ago

        They aren’t overclocking/underclocking anything with the fix. The chip was just straight up requesting more voltage than it actually needed; that didn’t give any benefit and was probably an issue even without the damage it causes, due to the extra heat generated.

        • nek0d3r@lemmy.world · 10 months ago

          Giving a CPU more voltage is just what overclocking is. Considering that most modern CPUs from both AMD and Intel are designed to keep boosting clocks until they reach a high enough temperature to start thermally throttling, it’s likely that there was a misstep in setting this threshold, and the CPU doesn’t know when to quit until it kills itself. In the process it is undoubtedly gaining more performance than it otherwise would, but probably not by much, considering a lot of the high-end CPUs already have really high thresholds, some even at 90 or 100 °C.

          • Strykker@programming.dev · 10 months ago

            If you actually knew anything, you’d know that overclockers tend to manually reduce the voltage as they increase the clock speeds to improve stability. That only works up to a point, but it clearly shows voltage does not directly determine clock speed.

    • AnyOldName3@lemmy.world · 11 months ago

      If you give a chip more voltage, its transistors will switch faster, but they’ll degrade faster. Ideally, you want just barely enough voltage that everything’s reliably finished switching and all signals have propagated before it’s time for the next clock cycle, as that makes everything work and last as long as possible. When the degradation happens, at first it means things need more voltage to reach the same speed, and then they totally stop working. A little degradation over time is normal, but it’s not unreasonable to hope that it’ll take ten or twenty years to build up enough that a chip stops working at its default voltage.
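
      A toy model of that trade-off, with made-up constants rather than real silicon parameters: delay shrinks as voltage rises and grows as wear accumulates, so a degraded chip needs more voltage to close timing at the same clock.

      ```python
      # Toy timing-margin model with illustrative constants: switching delay
      # scales down with voltage and up with accumulated degradation.
      # The chip only works if every signal settles within one clock period.
      def switching_delay_ns(voltage: float, wear: float) -> float:
          base_delay = 0.30          # ns at 1.0 V on pristine silicon (made up)
          return base_delay / voltage * (1.0 + wear)

      clock_ghz = 5.0
      period_ns = 1.0 / clock_ghz    # 0.2 ns per cycle

      for wear in (0.00, 0.10, 0.25):    # fraction of accumulated degradation
          v = 1.0
          # Raise the voltage until all signals settle within one cycle.
          while switching_delay_ns(v, wear) > period_ns:
              v += 0.01
          print(f"wear {wear:.0%}: ~{v:.2f} V needed for {clock_ghz} GHz")
      ```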

      The microcode bug they’ve identified and are fixing applies too much voltage to part of the chip under specific circumstances, so if an individual chip hasn’t experienced those circumstances very often, it could well have built up some degradation, but not enough that it’s stopped working reliably yet. That could range from having burned through a couple of days of lifetime, which won’t get noticed, to having a chip that’s in the condition you’d expect it to be in if it was twenty years old, which still could pass tests, but might keel over and die at any moment.

      If they’re not doing a mass recall, and can’t come up with a test that says how affected an individual CPU has been without it needing to be so damaged that it’s no longer reliable, then they’re betting that most people’s chips aren’t damaged enough to die until after the warranty expires. There’s still a big difference between the three years of their warranty and the ten to twenty years that people expect a CPU to function for, and customers whose parts die after thirty-seven months will lose out compared to what they thought they were buying.

    • A_Random_Idiot@lemmy.world · 11 months ago

      They can’t even commit to offering RMAs, period. They keep using vague, can’t-be-used-against-me-in-a-court-of-law language.

      • ipkpjersi@lemmy.ml · 11 months ago

        That will surely earn trust with the public and result in brand loyalty, right???

  • InAbsentia@lemmy.world · 11 months ago

    Thankfully I haven’t had any issues out of my 13700K, but it’s pretty shitty of Intel to not stand behind their products and do a recall.

  • deltreed@lemmy.world · 10 months ago

    So like, did Intel lay off or deprecate its QA teams, similar to what Microsoft did with Windows? Remember when stability was key and everything else was secondary? Pepperidge Farm remembers.

    • john89@lemmy.ca · 10 months ago

      Why would they lay off their QA teams when it’s management and executives who make the decisions to cut corners?

    • nek0d3r@lemmy.world · 10 months ago

      I genuinely think that was the best Intel generation. Things really started going downhill in my eyes after Skylake.

  • w2tpmf@lemmy.world · 11 months ago

    In any real-world comparison (gaming frame rates, video encoding…), the 13700 beats the 7900X while being more energy efficient and costing less.

    That’s even giving AMD a handicap in the comparison, since the 7700X is supposed to be the direct competitor to the 13700.

    I say all this as a longggg-time AMD CPU customer. I had planned on buying their CPU before multiple different sources of comparison steered me away this time.

    • M0oP0o@mander.xyz · 11 months ago

      OK, so maybe you are missing the part where the 13th and 14th gens are destroying themselves. No one really cares if you use AMD or whatnot; this little issue is Intel’s, and it makes any performance, power use, or cost comparison moot, as the CPU’s ability to not hurt itself in its confusion will now always be in question.

      Also, I don’t think CPU speed has been a large bottleneck in the last few years; the way both AMD and Intel keep pushing is just silly.

      • w2tpmf@lemmy.world · 10 months ago

        Yeah, that does suck. But I was replying specifically to the person saying Intel hasn’t been relevant for years because of a supposed performance dominance from AMD. That part just isn’t true.

        • M0oP0o@mander.xyz · 10 months ago

          Your comment does not reply to anyone, though; it’s just floating out there on its own.

          And even taken as a reply it still does not make sense, since as of this “issue” any 13th or 14th gen Intel above a 600 is out of the running, because they cannot be trusted not to kill themselves.

          • w2tpmf@lemmy.world · 10 months ago

            Yeah not really sure how my comment ended up where it is. Connect stacks comments in a weird way and I must have clicked reply in the wrong place.

            I was replying to this …

            Is there really still such a market for Intel CPUs? I do not understand that AMDs Zen is so much better and is the superior technology since almost a decade now.

            …which, up until this issue, was NOT true. The entire Zen 2 line was a step behind the Intel chips that released at the same time as it.

            I’ve been running a 3600X for years now and love it… but an i5-10600K that came out at the same time absolutely smashes it in performance.

            • M0oP0o@mander.xyz · 10 months ago

              Those came out a year apart, and neither one “smashes” the other in performance. I doubt you can even notice the difference between the two, and that is the issue with CPUs today: they are not the bottleneck in most systems. I have used both of these (I like the 10600K as well), but they are almost exactly the same in “performance”, and I would not turn up my nose at either.

              The issue is that (especially in personal use cases) there is no justification for the newer systems. DDR4 still runs literally everything, and both of these 4+ year old CPUs (that are now a few gens old) will run anything well outside of exotic cases. You are more likely to see slowdowns from a lack of RAM (since most programs today seem to think the stuff is unlimited), GPU bottlenecks, or just badly optimized software.

  • floofloof@lemmy.ca · 11 months ago

    Intel has not halted sales or clawed back any inventory. It will not do a recall, period.

    Buy AMD. Got it!

        • schizo@forum.uncomfortable.business · 11 months ago

          Kinda? It really should be treated as a 1st-generation product for Windows (because the previous versions were ignored by, well, everyone, because they were utterly worthless), and should be avoided for quite a while if gaming is remotely your goal. It’s probably the future, but the future is later… assuming, of course, that the next-gen x86 CPUs don’t both get faster and lower-power (which they are doing) and thus eliminate the entire benefit of ARM.

          And if you DON’T use Windows, you’re looking at a couple of months to a year to get all the drivers into the Linux kernel, then the kernel with the drivers into mainstream distributions, assuming Qualcomm doesn’t do their usual thing of just abandoning support six months in because they want you to buy the next release of their chips instead.

            • schizo@forum.uncomfortable.business · 11 months ago

              I’m having the same dream, but I don’t trust Qualcomm not to fuck everyone. I mean, it’d be nice if they didn’t, but they’ve certainly got a history of being the scorpion, and I’m going to let someone else be the frog until they’ve proven they’re not going to sting me mid-river.

      • sugar_in_your_tea@sh.itjust.works · 11 months ago

        If there were decent homelab ARM CPUs, I’d be all over that. But everything is either memory limited (e.g. max 8GB) or datacenter grade (so $$$$). I want something like a Snapdragon with 4x SATA, 2x m.2, 2+ USB-C, and support for 16GB+ RAM in a mini-ITX form factor. Give it to me for $200-400, and I’ll buy it if it can beat my current NAS in power efficiency (not hard, it’s a Ryzen 1700).

      • Dudewitbow@lemmy.zip · 11 months ago

        ARM is primed to take a lot of the server market from Intel. Amazon is already very committed to making its Graviton ARM CPU its main CPU, and Amazon alone owns a huge share of the server market.

        For consumers, ARM adoption is fully reliant on the respective operating systems and compatibility getting ironed out.

        • sugar_in_your_tea@sh.itjust.works · 11 months ago

          Linux works great on ARM; I just want something similar to most mini-ITX boards (4x SATA, 2x mini-PCIe, and RAM slots), and I’ll convert my DIY NAS to ARM. But there just isn’t anything between RAM-limited SBCs and datacenter ARM boards.

          • Justin@lemmy.jlh.name · 11 months ago

            Datacenter CPUs are actually really good for NASes, considering the explosion of NVMe storage. Most consumer CPUs are limited to just 5 M.2 drives and a 10Gbit NIC, but a server mobo will open up to 10+ drives. Something cheap like a first-gen Epyc motherboard gives you a ton of flexibility and speed if you’re OK with the idle power consumption.
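
            The rough lane math behind that (ballpark figures, not exact for any particular SKU):

            ```python
            # Rough PCIe lane budget: each NVMe drive wants an x4 link, and a
            # 10Gbit NIC typically takes another x4. Figures are ballpark.
            LANES_PER_NVME = 4
            LANES_PER_10G_NIC = 4

            def max_nvme_drives(total_lanes: int, with_nic: bool = True) -> int:
                usable = total_lanes - (LANES_PER_10G_NIC if with_nic else 0)
                return usable // LANES_PER_NVME

            print(max_nvme_drives(24))   # typical consumer platform -> 5 drives
            print(max_nvme_drives(128))  # first-gen Epyc's 128 lanes -> 31 drives
            ```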

          • Dudewitbow@lemmy.zip · 11 months ago

            ARM is a mixed bag. IIRC, at the moment the GPU on the Snapdragon X Elite is disabled on Linux, and consumer support is reliant on how well the hardware manufacturer supports it when the driver is closed-source. In the case of Qualcomm, the history doesn’t look great.

            • sugar_in_your_tea@sh.itjust.works · 11 months ago

              Eh, if they give me a PCIe slot, I’m happy to use that in the meantime. My current NAS uses an old NVIDIA GPU, so I’d just move that over.

              • Zangoose@lemmy.world · 11 months ago

                Apparently (from another comment on a thread about ARM from a few weeks ago) consumer GPU BIOSes contain some x86 instructions that get run on the CPU, so getting full support for ARM isn’t as simple as swapping the cards over to a new motherboard. There are ways to hack around it (some people got AMD GPUs booting on a Raspberry Pi 5 using its PCIe lanes with a bunch of adapters), but it is pretty unreliable.

                • sugar_in_your_tea@sh.itjust.works · 11 months ago

                  Yeah, there are some software issues that need to be resolved, but the bigger issue AFAIK is having the hardware to handle it. The few ARM devices with a PCIe slot often don’t fully implement the spec, such as power delivery. Because of that, driver work just doesn’t happen, because nobody can realistically use it.

                  If they provide a proper PCIe slot (8-16 lanes, on-spec power delivery, etc), getting the drivers updated should be relatively easy (months, not years).

            • conciselyverbose@sh.itjust.works · 11 months ago

              Servers being slow is usually fine. They’re already at way lower clocks than consumer chips because almost all that matters is power efficiency.

            • sugar_in_your_tea@sh.itjust.works · 11 months ago

              Eh, it looks like ARM laptops are coming along. I give it a year or so for the process to be smooth.

              For servers, AWS Graviton seems to be pretty solid. I honestly don’t need top performance and could probably get away with a Quartz64 SBC; I just don’t want to worry about RAM, and would really like 16GB. I just need to serve a dozen or so Docker containers with really low load, and I want to do that with as little power as I can get away with, for minimum noise. It doesn’t need to transcode or anything.

              • CancerMancer@sh.itjust.works · 10 months ago

                Man, so many SBCs come so close to what you’re looking for, but none has that level of I/O. I was just looking at the ZimaBlade / ZimaBoard and they don’t quite get there either: 2x SATA and a PCIe 2.0 x4. The ZimaBlade has Thunderbolt 4; maybe you can squeeze a few more drives in there with a separate power supply? Seems mildly annoying, but on the other hand, their SBCs only draw like 10 watts.

                Not sure what your application is but if you’re open to clustering them that could be an option.

                • sugar_in_your_tea@sh.itjust.works · 10 months ago

                  Here are my actual requirements:

                  • 2 boot drives in mirror - m.2 or SATA is fine
                  • 4 NAS HDD drives - will be SATA, but could use PCIe expansion; currently have 2 8TB 3.5" HDDs, want flexibility to add 2x more
                  • minimum CPU performance - was fine on my Phenom II x4, so not a high bar, but the Phenom II x4 has better single core than ZimaBlade

                  Services:

                  • I/O heavy - Jellyfin (no live transcoding), Collabora (and Nextcloud/ownCloud), Samba, etc.
                  • CPU heavy - CI/CD for Rust projects (relatively infrequent and not a hard req), gaming servers (Minecraft for now), speech processing (maybe? Looking to build an Alexa alternative)
                  • others - Actual Budget, Vaultwarden, Home Assistant

                  The ZimaBlade is probably good enough (I’d need to figure out SATA power); I’ll have to look at some performance numbers. I’m a little worried since it seems to be worse than my old Phenom II x4, which was the old CPU for this machine. I’m currently using my old Ryzen 1700, but I’d be fine downgrading a bit if it meant significantly lower power usage. I’d really like to put this under my bed, and it needs to be very quiet to do that.

              • Justin@lemmy.jlh.name · 11 months ago

                ARM laptops don’t support ACPI, which makes them really hard for Linux to support. Having to go back two years to find a laptop with Wi-Fi and GPU support on Linux isn’t practical. If Qualcomm and Apple officially supported Linux like Intel and AMD do, it would be a different story. As it is right now, even Android phones are forced to use closed-source blobs just to boot.

                Those numbers from Amazon are misleading. Linus Torvalds builds on an Ampere machine, but they don’t actually do that well in benchmarks.

                https://www.phoronix.com/review/graviton4-96-core

        • icydefiance@lemm.ee · 11 months ago

          Yeah, I manage the infrastructure for almost 150 WordPress sites, and I moved them all to ARM servers a while ago, because they’re 10% or 20% cheaper on AWS.

          Websites are rarely bottlenecked by the CPU, so that power efficiency is very significant.

          • tal@lemmy.today · 11 months ago

            I really think that most people who think that they want ARM machines are wrong, at least given the state of things in 2024. Like, maybe you use Linux…but do you want to run x86 Windows binary-only games? Even if you can get 'em running, you’ve lost the power efficiency. What’s hardware support like? Do you want to be able to buy other components? If you like stuff like that Framework laptop, which seems popular on here, an SoC is heading in the opposite direction of that – an all-in-one, non-expandable manufacturer-specified system.

            But yours is a legit application. A non-CPU-constrained datacenter application running open-source software compiled against ARM, where someone else has validated that the hardware is all good for the OS.

            I would not go ARM for a desktop or laptop as things stand, though.

            • batshit@lemmings.world · 11 months ago

              If you didn’t want to game on your laptop, would an ARM device not be better for office work? Considering they’re quiet and their battery lasts forever.

              • Nighed@sffa.community · 11 months ago

                As long as the apps all work. So much stuff is browser-based now, but something always turns up that doesn’t work: mandatory timesheet software, a bespoke tool, etc.

              • frezik@midwest.social · 11 months ago

                ARM chips aren’t better at power efficiency than x86 above 10 or 15W or so. Apple is getting a lot out of them because of TSMC’s 3nm process; even the upcoming AMD 9000 series will only be on TSMC 4nm.

                ARM is great for having more than one competent company in the market, though.

                • batshit@lemmings.world · 11 months ago

                  ARM chips aren’t better at power efficiency than x86 above 10 or 15W or so.

                  Do you have a source for that? It seems a bit hard to believe.

          • Vik@lemmy.world · 11 months ago

            Even then, Strix will look to compete with Apple Silicon in perf/watt.

        • frezik@midwest.social · 11 months ago

          ARM is only more power efficient below 10 to 15 W or so. Above that, doesn’t matter much between ARM and x86.

          The real benefit is somewhat abstract. Only two companies can make x86, and only one of them knows how to do it well. ARM (and RISC-V) opens up the market to more players.

        • chingadera@lemmy.world · 11 months ago

          I hope so. I accidentally advised a client to snatch up a Snapdragon Surface (because they had to have a dog-shit Surface), and I hadn’t realized that a lot of shit doesn’t quite work yet. Most of it does, which is awesome, but it needs to pick up the pace.

        • barsoap@lemm.ee · 11 months ago

          Depends on the desktop. I have a NanoPC T4, originally as a set-top box (that’s what the RK3399 was designed for; it has a beast of a VPU), now on light server and WLAN AP duty, and it’s plenty fast enough for a browser and office. Provided you give it an SSD, that is.

          Speaking of desktop, though, the graphics driver situation is atrocious. There’s been movement since I last had a monitor hooked up to it, but let’s just say the Linux blob that came with it could do GLES2, while the Android driver does Vulkan. Presumably because ARM wants Rockchip to pay per fucking feature per OS for Mali drivers.

          Oh, the VPU that I mentioned? As said, a beast: decodes 4K H.264 at 60Hz, very good driver support, well-documented instruction set, mpv supports it out of the box. But because the Mali drivers are shit you only get an overlay, no window-system integration, because it can’t paint to GLES2 textures. Throwback to the 90s.

          Sidenote: some madlads got a dedicated GPU running on the thing. M.2-to-PCIe adapter, and presumably a lot of duct-tape code.

          • cmnybo@discuss.tchncs.de · 11 months ago

            GPU support is a real mess. Those ARM SoCs are intended for embedded systems, not PCs. None of the manufacturers want to release an open-source driver, and the blobs typically don’t work with a recent kernel.

            For ARM on the desktop, I would want an ATX motherboard with a socketed 3+ GHz CPU with 8-16 cores, socketed RAM, and a PCIe slot for a desktop GPU.

            Almost all Linux software will run natively on ARM if you have a working GPU. Getting Windows games to run on ARM with decent performance would probably be difficult; it would probably need a CPU that’s been optimized for emulating x86, like what Apple did with theirs.

      • mox@lemmy.sdf.org (OP) · 11 months ago

        RISC-V isn’t there yet, but it’s moving in the right direction. A completely open architecture is something many of us have wanted for ages. It’s worth keeping an eye on.

      • lath@lemmy.world · 11 months ago

        Yet they do it all the time when a higher-spec CPU is fabricated with physical defects and is then sold as a lower-spec variant.

        • tal@lemmy.today · 11 months ago

          Nobody objects to binning, because people know what they’re getting and the part functions within the specified parameters.

    • grue@lemmy.world · 11 months ago

      I’ve been buying AMD for – holy shit – 25 years now, and have never once regretted it. I don’t consider myself a fanboi; I just (a) prefer having the best performance-per-dollar rather than best performance outright, and (b) like rooting for the underdog.

      But if Intel keeps fucking up like this, I might have to switch on grounds of (b)!

      (Realistically I’d be more likely to switch to ARM or even RISC-V, though. Even if Intel became an underdog, my memory of their anti-competitive and anti-consumer bad behavior remains long.)

      • vxx@lemmy.world · 11 months ago

        I hate the way Intel is going, but I’ve been using Intel chips for over 30 years and never had an issue.

        So your statement is kind of pointless: it’s such a small data set that it’s irrelevant, and nothing to draw any conclusion from.

      • SoleInvictus@lemmy.blahaj.zone · 11 months ago

        Same here. I hate Intel so much, I won’t even work there, despite it being my current industry and having been headhunted by their recruiter. It was so satisfying to tell them to go pound sand.

      • Damage@slrpnk.net · 11 months ago

        I’ve been on AMD and ATi since the Athlon 64 days on the desktop.

        Laptops are always Intel, simply because that’s what I can find, even though I scour the market extensively every time.

        • Krauerking@lemy.lol · 11 months ago

          Honestly, I was and am an AMD fan, but if you went back a few years you would not have wanted an AMD laptop. I had one and it was truly awful.

          Battery issues. Low processing power. App crashes and video playback issues. And this was on a more expensive one with a dedicated GPU…

          And then Ryzen came out. You can get AMD laptops now, and I mean that both in the sense that they exist and that they’re actually nice. (I have one.)

          But in 2013 it was Intel or you were better off with nothing.

          • orangeboats@lemmy.world · 11 months ago

            Indeed, the Ryzen laptops are very nice! I have one (the 4800H) and it lasts ~8 hours on battery, far more than what I expected from laptops of this performance level. My last laptop barely achieved 4 hours of battery life.

            I had stability issues in the first year but after one of the BIOS updates it has been smooth as butter.

      • Rai@lemmy.dbzer0.com · 11 months ago

        Sorry, but after the amazing Athlon X2, the Core and Core 2 (then i-series) lines fuckin’ wrecked AMD for YEARS. Ryzen took the belt back, but AMD was absolutely wrecked through the Core and i-series era.

        Source: computer building company, and also history

        tl;dr: AMD sucked ass for value and performance between Core 2 and Ryzen, then became amazing again after Ryzen was released.

        • grue@lemmy.world · 11 months ago

          AMD “Bulldozer” architecture CPUs were indeed pretty bad compared to Intel Core 2, but they were also really cheap.

      • Final Remix@lemmy.world · 11 months ago

        I’d had nothing but issues with some computers, laptops, etc. Once I discovered the common factor was Intel, I stopped buying it, and I haven’t had a single problem with any of my devices since. AMD all the way for CPUs.

      • ☂️-@lemmy.ml · 11 months ago

        (c) upgradability and not having motherboards be disposable on purpose