AI is the ultimate enshittification of the world.
Maybe the digital world. We could always go back to living in the real world I guess.
Things easily could be better for the vast majority of us in the present day, but let’s not forget how shit we were in the past as well.
oh no i hate that place. i’m scared
I love how ppl who don’t have a clue what AI is or how it works say dumb shit like this all the time.
I also love making sweeping generalizations about a stranger’s knowledge on this forum. The smaller the data sample the better!
The base comment was very broad
There is no AI. It’s all shitty LLMs. But keep sucking those techbros’ cheesy balls. They will never invite you to the table.
Honest question, but aren’t LLMs a form of AI, and thus… maybe not AI as people expect, but still AI?
The issue is that “AI” has become a marketing buzzword instead of anything meaningful. When someone says “AI” these days, what they’re actually referring to is “machine learning”. Like in LLMs for example: what’s actually happening (at a very basic level, and please correct me if I’m wrong, people) is that given one or more words/tokens, it tries to calculate the most probable next word/token based on its model (trained on ridiculously large bodies of text written by humans). It does this well enough and at a large enough scale that the output is cohesive, comprehensive, and useful.
While the results are undeniably impressive, this is not intelligence in the traditional sense; there is no reasoning or comprehension, and definitely no consciousness or awareness here. To grossly oversimplify, LLMs are really, really good word calculators and can be very useful. But leave it to tech bros to make them sound like the second coming and shove them where they don’t belong just to get more VC money.
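To make the “word calculator” description above concrete, here’s a toy sketch in Python. The probability table is completely made up and hard-coded purely for illustration; a real LLM computes a distribution over its whole vocabulary with a neural network, but the loop is the same idea: look at the context, pick a plausible next token, repeat.

```python
import random

# Toy next-token table: invented probabilities for one context, standing in
# for what a real model would compute over its full vocabulary.
next_token_probs = {
    ("the", "cat", "sat", "on"): {"the": 0.55, "a": 0.30, "my": 0.10, "mars": 0.05},
}

def sample_next_token(context, temperature=1.0):
    """Sample the next token from the (toy) distribution for this context."""
    probs = next_token_probs[tuple(context)]
    tokens = list(probs.keys())
    # Raising probabilities to 1/T and renormalizing is equivalent to
    # temperature-scaling the logits: low T is greedier, high T is more random.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(["the", "cat", "sat", "on"]))  # usually "the" or "a"
```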
Sure, but people seem to buy into that very buzzwordiness and ignore the usefulness of the technology as a whole because “ai bad.”
True. Even I’ve been guilty of that at times. It’s just hard right now to see the positives through the countless downsides and the fact that the biggest application we’re moving towards seems to be taking value from talented people and putting it back into the pockets of companies that were already hoarding wealth and treating their workers like shit.
So usually when people say “AI is the next big thing”, I say “Eh, idk how useful an automated idiot would be” because it’s easier than getting into the weeds of the topic with someone who’s probably not interested haha.
Edit: Exhibit A
No, they are autocomplete functions of varying effectiveness. There is no “intelligence”.
Ah, Mr. Dunning-Kruger, it’s nice to meet you.
Almost as if it’s artificial.
LLMs, maybe. Most AI is useful
ClosedAI. Or maybe MicroAI?
Sam Altman is demonstrating the power of AI. He’s showing how a single CEO can fire the entire company and continue to develop the product to be even better than when humans were involved.
“OpenAI. No real humans involved!” ™
Looks like it was a long game, and Altman didn’t just win, that fucker WON!
ALT-MAN? Holy shit!
Sounds like the name of a Kojima game character
I’m sure they were dead weight. I trust open AI completely and all tech gurus named Sam. Btw, what happened to that Crypto guy? He seemed so nice.
I hope I won’t undermine your entirely justified trust, but Altman is also a crypto guy, cf. Worldcoin. /$
If you want to get really mad, read On The Edge by Nate Silver.
He is taking a time out with a friend in an involuntary hotel room.
With Puff Daddy? Tech bros do the coolest stuff.
There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in a way that actually benefits actual use cases.
Cancer screenings approved by a doctor could be accurate enough to save so many lives and so much suffering through early detection.
Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.
Or we get to a time where we send a reprogrammed terminator back in time to kill altman 🤓
No, there isn’t really any such alternate timeline. Good honest causes are not profitable enough to survive against the startup scams. Even if the non-profit side won internally, OpenAI would just be left behind, funding would go to its competitors, and OpenAI would shut down. Unless you mean a radically different alternate timeline where our economic system is fundamentally different.
There are infinite timelines, so it has to exist some(where/when/[insert w word for additional dimension]).
I mean wikipedia managed to do it. It just requires honest people to retain control long enough. I think it was allowed to happen in wikipedia’s case because the wealthiest/greediest people hadn’t caught on to the potential yet.
There’s probably an alternate timeline where wikipedia is a social network with paid verification by corporate interests who write articles about their own companies and state-funded accounts spreading conspiracy theories.
What is OpenAI doing with cancer screening?
AI models can outmatch most oncologists and radiologists in recognition of early tumor stages in MRI and CT scans.
Further developing this strength could lead to earlier diagnosis with less-invasive methods, saving not only countless lives and prolonging the remaining quality of life for the individual, but also saving a shit ton of money.
That is a different kind of machine learning model, though.
You can’t just plug your pathology images into their multimodal generative models and expect them to pop out something usable.
And those image recognition models aren’t something OpenAI is currently working on, iirc.
Not only that, image analysis and statistical guesses have always been around and do not need ML to work. It’s just one more tool in the toolbox.
Don’t know about image recognition, but they released DALL-E, which is an image generation and inpainting model.
Fun thing is, most of the things AI can do, they never planned for it to be able to do. All they tried to build was an autocompletion tool.
I’m fully aware that those are different machine learning models, but instead of focusing on LLMs with only limited use for mankind, advancing image recognition models would have been much better.
I agree, but I’d also like to point out that the AI craze started with LLMs, and those ML models were around before OpenAI.
So if OpenAI had never released ChatGPT, it wouldn’t have become synonymous with crypto in terms of false promises.
Wasn’t it proven that an AI was getting amazing results because it noticed the cancer scans had a doctor’s signature at the bottom? Or did they do another run with the signatures hidden?
There was more than one system proven to “cheat” through biased training material. One model told ducks and chickens apart because it was trained with pictures of ducks in the water and chickens on sandy ground, if I remember correctly.
Since multiple image recognition systems are in development, I can’t imagine they’re all this faulty.
They are not “faulty”; they have been fed the wrong training data.
This is the most important aspect of any AI: it’s only as good as its training dataset. If you don’t know the dataset, you know nothing about the AI.
That’s why every claim of “super efficient AI” needs to be investigated more deeply. But that goes against the line-goes-up principle, so don’t expect that to happen a lot.
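If anyone wants to see what that duck/chicken style “cheating” looks like mechanically, here’s a rough sketch (Python with NumPy and scikit-learn, entirely invented toy data): a classifier trained on a set where the background almost perfectly tracks the label learns the background, looks great on its own data, and then falls apart on a fair test set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Labels: 0 = chicken, 1 = duck (toy data, invented for illustration).
y_train = rng.integers(0, 2, n)

# Feature 0: "background is water" -- in the biased training set this almost
# perfectly tracks the label (ducks photographed on water, chickens on sand).
water_bg = (rng.random(n) < np.where(y_train == 1, 0.95, 0.05)).astype(float)

# Feature 1: a genuinely bird-related feature, but much noisier.
bird_shape = y_train + rng.normal(0, 1.5, n)

X_train = np.column_stack([water_bg, bird_shape])
clf = LogisticRegression().fit(X_train, y_train)

# Fair test set: the background no longer correlates with the label at all.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([
    (rng.random(n) < 0.5).astype(float),   # random background
    y_test + rng.normal(0, 1.5, n),        # same noisy real feature
])

print("train accuracy:", clf.score(X_train, y_train))  # high, thanks to the background shortcut
print("test accuracy:", clf.score(X_test, y_test))     # much worse once the shortcut is gone
```

Nothing here is specific to images or medical scans; any training set with a hidden shortcut like a signature, a ruler, or a watermark invites the same failure.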
You know guys, I’m starting to think what we heard about Altman when he was removed a while ago might actually have been real.
/s
What was the behind-the-scenes deal on this? I remember it happening but not the details.
I wonder if all those people who supported him like the taste of their feet.
like the taste of their feet.
And it’s kinda funny that they are now the ones being removed
It just came to me that his Alt-man name is quite fitting for AI.
Was Alt-man AI-generated all along? Impressive if true.
When he’s done he’ll be known as Skynet.
Hehehehehe it’s the exact same naming strategy used in Death Stranding. Dr. Heartman, Deadman…
Trust me, I’m a tech bro.
At least TSMC realizes that
https://www.digitaltrends.com/computing/tsmc-rejects-podcasting-bro-sam-altman-openai/
TSMC’s leadership dismissed Altman as a “podcasting bro” and scoffed at his proposed $7 trillion plan to build 36 new chip manufacturing plants and AI data centers.
This is how we get Terminators in this timeline?!
To be fair, the article linked this idiotic one about OpenAI’s “thirsty” data centers, where they talk about water “consumption” of cooling cycles… which are typically closed-loop systems.
They are typically closed-loop for home computers. Datacenters are a different beast and a fair amount of open-loop systems seem to be in place.
But even then, is the water truly consumed? Does it get contaminated with something like the cooling water of a nuclear power plant? Or does the water just get warm and then either be pumped into a water body somewhere or ideally reused to heat homes?
There’s loads of problems with the energy consumption of AI, but I don’t think the water consumption is such a huge problem? Hopefully, anyway.
It evaporates. A lot of datacenters use evaporative cooling. They take water from a usable source like a river, and turn it into unusable water vapor.
But even then, is the water truly consumed?
Yes. People and crops can’t drink steam.
Does it get contaminated with something like the cooling water of a nuclear power plant?
That’s not a thing in nuclear plants that are functioning correctly. Water that may be evaporated is kept from contact with fissile material, by design, to prevent regional contamination. Now, Cold War era nuclear jet airplanes were a different matter.
Or does the water just get warm and then either be pumped into a water body somewhere or ideally reused to heat homes?
A minority of datacenters use water in such a way; Helsinki is the only one that comes to mind. This would be an excellent way of reducing the environmental impact, but it requires investments that corporations are seldom willing to make.
There’s loads of problems with the energy consumption of AI, but I don’t think the water consumption is such a huge problem? Hopefully, anyway.
Unfortunately, it is, primarily due to climate change. Water insecurity is an issue of increasing importance, and some companies, like Nestlé (fuck Nestlé), are accelerating it for profit. Of vital importance to human lives is getting ahead of the problem, rather than trying to fix it when it inevitably becomes a disaster and millions are dying of thirst.
In addition to all the other comments, pumping warm water into natural bodies of water can also be bad for the environment.
I know of one nuclear power plant that does this and it’s pretty bad for the coral population there.
Search for the “water positive” commitment. You will quickly see it’s a “goal”, which means it is consequently NOT yet the case. In some places where water is abundant it might not be a problem; where it’s scarce, it’s literally a choice between crops to feed people and… compute cycles.
Does it get contaminated with something like the cooling water of a nuclear power plant?
This doesn’t happen unless the reactor was sabotaged. Cooling water that interacts with the core is always a closed-loop system. For exactly this reason.
They should be required to change their name
SkyNet.
Please take no offense at this; I will probably not use your name suggestions, SatansMaggotyCumFart.
I’m deeply offended.
I mean killer robots from the future could solve many problems. I can elaborate, but you’re going to have to think 4th dimensionally.
Could solve a lot of problems for the rich, that’s for sure.
It’s amusing. Meta’s AI team is more open than "Open"AI ever was - they publish so many research papers for free, and the latest versions of Llama are very capable models that you can run on your own hardware (if it’s powerful enough) for free as long as you don’t use it in an app with more than 700 million monthly users.
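For the curious, here is roughly what “run it on your own hardware” can look like with the Hugging Face transformers library. The model ID and generation settings are just examples, the Llama weights are gated (you have to accept Meta’s license on Hugging Face and log in first), and an 8B model realistically wants a decent GPU:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example model ID; gated on Hugging Face, so accept Meta's license and run
# `huggingface-cli login` before this will download anything.
model_id = "meta-llama/Llama-3.1-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" needs the `accelerate` package; drop it to load on CPU (slow).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write one sentence about open-weight language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short continuation; max_new_tokens is an arbitrary example value.
output_ids = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```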
It’s the famous “as long as you’re not Google, Amazon or Apple” licence.
which seems like a decent license idea to me
Everything should be licensed like that
Needs Microsoft added to the list.
That’s because Facebook is selling your data and access to advertise to you. The better AI gets across the board, the more money they make. AI isn’t the product, you are.
OpenAI makes money off selling AI to others. AI is the product, not you.
The fact Facebook releases more code, in this instance, isn’t a good thing. It’s a reminder of how fucked we all are, because they make so much off our personal data that they can afford to give away literally BILLIONS of dollars in IP.
Facebook doesn’t sell your data, nor does Google. That’s a common misconception. They sell your attention. Advertisers can show ads to people based on some targeting criteria, but they never see any user data.
They may also sell the data.
I bet the NSA backdoor isn’t free.
Selling your data would be stupid, because they make money with the fact that they have data about you nobody else has. Selling it would completely break their business model.
He’s gonna be the first one the AI kills, and I look forward to it
AI is already a bubble, he will be the scapegoat
I’d look forward to it more if we could stop the AI at that point.
Why would it?
Putting my tin foil hat on… Sam Altman knows the AI train might be slowing down soon.
The OpenAI brand is the most valuable part of the company right now, since the models from Google, Anthropic, etc. can beat or match what ChatGPT does, but they aren’t taking off coz they aren’t as cool as OpenAI.
The business model of training & running these models is not sustainable. If there is any money to be made, it is NOW, while the speculation is at its highest. The nonprofit is just getting in the way.
This could be wishful thinking coz fuck corporate AI, but no one can deny AI is in a speculative bubble.
Classic pump and dump at this point. He wants to cash in while he can.
If you can’t make money without stealing copyrighted works from authors without proper compensation, you should be shut down as a company
AI is such a dead end. It can’t operate without a constant inflow of human creations, and people are trying to replace human creators with AI. It’s fundamentally unsustainable. I am counting the days until the AI bubble pops and everyone can move on. Although AI-generated images, video, and audio will still probably be abused for the foreseeable future (propaganda, porn, etc).
Take the hat off. This was the goal. Whoops, gotta cash in and leave! I’m sure it’s super great, but I’m gone.
That’s an excellent point! Why oh why would a tech bro start a non-profit? It’s always been PR.
It honestly just never occurred to me that such a transformation was allowed/possible. A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it. Still, it would almost seem like the company benefits from the goodwill that comes with being a nonprofit but then gets to transform that goodwill into real gains when they drop the act and cease being a nonprofit.
I don’t really understand most of this shit though, so I’m probably missing some key component that makes it make a lot more sense.
A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it
A lifetime of propaganda got people confused lol
Nonprofit merely means that their core income-generating activities are not subject to income tax regimes.
While some nonprofits are charities, many are just shelters for rich people’s bullshit behaviors, like foundations, lobby groups, propaganda orgs, political campaigns, etc.
Non profit == inflated costs
(Sometimes)
Thank you! Like I said, I figured there was something I was missing; that would appear to be it.
Canceled my sub as a means of protest. I used it for research and testing purposes and $20 wasn’t that big of a deal. But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies. Voting with our wallets may be the very last vestige of freedom we have left, since money equals speech.
I hope he gets raped by an irate Roomba with a broomstick.
But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies.
I mean it was already not open-source, right?
“Private Stabby reporting for duty!”
Good. If people would actually stop buying all the crap assholes are selling we might make some progress.
Whoa, slow down there bruv! Rape jokes aren’t ok - that Roomba can’t consent!
The reverse coup from Sam
Aren’t they going bankrupt next year?
They’ll just get a check for Infinity Money to keep going, because otherwise something something China Will Win.
But their operating cost is $5 billion per year. They plan to raise $6.5 billion from Microsoft, Apple, and Nvidia this year, and they have not raised it yet. If their model fails next year and the sales don’t happen, will the shareholders of the big three pay $6.5 billion in 2026? There were a couple of companies that raised that kind of money at the start, for example Docker Inc. Where is Docker now in the enterprise? They needed to change their licensing model to even survive, and their operating cost is just the storage of Docker containers. I doubt OpenAI will survive this decade. Sam Altman is just preparing for a Microsoft takeover before the ship sinks.
Where is docker in enterprise??? Lol
Um everywhere!
Docker fired 80% of their staff and almost went bankrupt; they were literally a dead company, and they make like $100 million a year right now, after 13 years. Docker Inc was founded in Oct 2011. They got $435.9M in funding according to Crunchbase, so they were valued at around $4 billion.
https://sacra.com/research/docker-plg-pivot/
https://www.crunchbase.com/organization/docker
OpenAI wants an order of magnitude more: $6.5 billion for a single year. They are valued at around $100 billion, but they are nowhere near where Docker was when it was receiving big money. They want to be a consumer product; Docker wanted to be a consumer product and it failed. GitHub wanted to be a consumer product and got acquired by Microsoft before it went bankrupt.
Just from this month, they’re trying to sell it as much as they can.
OpenAI COO Says ChatGPT Passed 11 Million Paying Users.
https://www.theinformation.com/articles/openai-coo-says-chatgpt-passed-11-million-paying-subscribers
OpenAI hits more than 1 million paid business users.
https://www.reuters.com/technology/artificial-intelligence/openai-considers-pricier-subscriptions-its-chatbot-ai-information-reports-2024-09-05/
The $6.5 billion they are seeking, divided by 11 million customers, is about $590 per year, and they charge 20 bucks per month, which is $240 per year before taxes. They are losing roughly $350 per customer, so they need at least double the number of customers next year. Who is willing to pay $240 per year for technology that tells them what to do? If I’m told what to do, it’s called a job, and normally my employer pays me for that, not the other way around.
This is just another corporate product nobody wants, so corporations will buy it, and they will need to pay something like $6,500 per year to use it, given an adoption of 1 million business users. Who is willing to pay $6,500 per year per user for technology that needs so much computing power to stay relevant that Microsoft needs to revive a power plant to cut costs?
https://www.msn.com/en-us/money/other/microsoft-wants-three-mile-island-to-fuel-its-ai-power-needs/ar-AA1qUc5g
This won’t survive.
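Quick sanity check of the per-user math in that comment, using only the rough figures quoted in this thread (not audited financials):

```python
# Back-of-the-envelope check of the figures quoted above (thread numbers, not audited financials).
funding_sought = 6.5e9       # dollars OpenAI is reportedly seeking
paying_subscribers = 11e6    # ChatGPT paying subscribers per the COO quote
monthly_price = 20           # dollars per month for a ChatGPT subscription

needed_per_user = funding_sought / paying_subscribers   # ~$590 per user per year
revenue_per_user = monthly_price * 12                    # $240 per user per year
shortfall = needed_per_user - revenue_per_user           # ~$350 per user per year

print(f"needed per user:   ${needed_per_user:,.0f}/yr")
print(f"charged per user:  ${revenue_per_user:,.0f}/yr")
print(f"shortfall:         ${shortfall:,.0f}/yr")

# The business-user version: $6.5B spread over the ~1M paid business users.
print(f"per business seat: ${funding_sought / 1e6:,.0f}/yr")  # ~$6,500
```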