

Oh no, educated workers who don't want to be taken advantage of and who know their worth. Maybe companies should value their employees if they want company loyalty.
And OpenAI is not personal use?
Your description is how pre-LLM chatbots work.
Not really. We just parallelized the computation and used other models to filter our training data and tokenize it. Sure, the loop looks more complex because of the parallelization and the tokenization of the words used as inputs and selections, but it doesn't change what the underlying principles are here.
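A toy version of that tokenize-and-count pipeline, as a minimal sketch: the whitespace "tokenizer" and bigram counts here are illustrative stand-ins, not any real model's subword tokenizer or neural scoring.

```python
# Toy sketch of the tokenize-then-rank pipeline: split text into tokens,
# then count which token tends to follow which. Real LLMs use subword
# tokenizers (BPE etc.) and a neural net instead of raw counts.
from collections import Counter, defaultdict

def tokenize(text):
    # Whitespace split keeps the example simple; this is not a real tokenizer.
    return text.lower().split()

def train_bigram(corpus):
    counts = defaultdict(Counter)
    for line in corpus:
        toks = tokenize(line) + ["<end>"]
        for prev, nxt in zip(toks, toks[1:]):
            counts[prev][nxt] += 1
    return counts

corpus = ["the cat sat", "the cat ran"]
model = train_bigram(corpus)
print(model["the"]["cat"])  # 2
```

The counts become "how likely does this word come after that one," which is the ranking step the comment describes, just at toy scale.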
Emergent properties don’t require feedback. They just need components of the system to interact to produce properties that the individual components don’t have.
Yes, they need proper interaction, or, you know, feedback, for this to occur. Glad we covered that. Having more items but gating their interaction is not adding more components to the system; it's creating a new system to follow the old one, which in this case is still just more probability calculations. Sorry, but chaining probability calculations is not going to somehow make something sentient or aware. For that to happen, it would need to be able to influence its internal weighting or training data without external aid. Hint: these models are deterministic, meaning there is zero feedback or interaction to create emergent properties in this system.
Emergent properties are literally the only reason LLMs work at all.
No, LLMs work because we massively increased the size and throughput of our probability calculations, allowing increased precision in the predictions, which makes them look more intelligible. That's it. Garbage in, garbage out still applies, and making the model larger doesn't mean that garbage is going to magically create new control loops in your code. It might increase precision, since you have more options to compare and weigh against, but it doesn't change the underlying system.
No, the queue will now add popular playlists to whatever you were listening to when you restart the app, if your previous queue was a generated one. Not sure of the exact steps to cause it, but it seems like if you were listening to a daily playlist, closed the app, and the next day the playlist has updated, then instead of pointing to the new daily it decides to point to one of the popular playlists for your next songs in the queue. It doesn't stop the song you paused on; it just adds new shit to the queue after it once it loses track of where to point. Seems like it should just start shuffling your liked songs in that case, but nope, it points to a random pop playlist.
And I’d like to see that contract hold up in court lol
You have no idea what you are talking about. When they train a model they have two data sets: one that fine-tunes it and another that evaluates it. You never have training data in the evaluation set or vice versa.
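In code, that disjoint train/evaluation split might look like this (a minimal sketch; the function name and fraction are illustrative, not taken from any training framework):

```python
# Sketch of a disjoint train/evaluation split: shuffle once, cut once,
# so every sample lands in exactly one of the two sets.
import random

def split(data, eval_frac=0.2, seed=0):
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - eval_frac))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
train, evaluation = split(data)
# The two sets never overlap, which is the whole point of holding out data.
assert set(train).isdisjoint(evaluation)
assert len(train) + len(evaluation) == len(data)
```

Keeping the two sets disjoint is what makes the evaluation score meaningful: the model is graded on statements it never saw during training.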
That's not what I said at all. I said, as the paper stated, that the model encodes trueness into its internal weights during training; this was then demonstrated to be more effective when data sets with a more equal distribution of true and false data points were used during training. If one-sided training data was used, the effect was significantly biased. That's all the paper is describing.
If you give it 10 statements, 5 of which are true and 5 of which are false, and ask it to correctly label each statement, and it does so, and then you negate each statement and it correctly labels the negated truth values, there’s more going on than simply “producing words.”
It's not that there's more going on; it's that it had such a large training data set that these true vs. false statements are likely covered somewhere in its set, and the probabilities state it should assign true or false to the statement.
And then, look at that, your next paragraph states exactly that: the models trained on true/false data sets performed extremely well at labeling true or false. It's saying the model is encoding, or setting weights for, the true and false values when that's the majority of its data set. That's basically it; you're reading too much into the paper.
AI has been a thing for decades. It means artificial intelligence; it does not mean a large language model. A specially designed system that operates on predefined choices or operations is still AI, even if it's not a neural network and looks like classical programming. The computer enemies in games are AI; they mimic an intelligent player artificially. The computer opponent in Pong is also AI.
Now, if we want to talk about how stupid it is to use a predictive algorithm to run your markets when it really only knows about previous events and can never truly extrapolate new data points and trends into actionable trades, then we could be here for hours. Just know it's not an LLM; there are different categories of AI, and an LLM is its own category.
Do you understand how they work or not? First, take all human text online. Next, rank how likely each word is to come after the others. Last, write a loop that picks the next most probable word until the end-of-sequence token is deemed most probable. There you go; that's essentially the loop of an LLM. There are design elements that make creating the training data quicker, or the model quicker at picking the next word, but at its core this is all it does.
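The three steps above can be sketched as a toy greedy-decoding loop. The probability table here is invented for illustration; a real LLM computes these scores with a huge neural network, but the loop structure is the same.

```python
# Minimal sketch of the generation loop: repeatedly pick the most
# probable next word until the end marker wins. Probabilities are
# hard-coded stand-ins for a trained model's output.
probs = {
    "<start>": {"hello": 0.9, "<end>": 0.1},
    "hello":   {"world": 0.8, "<end>": 0.2},
    "world":   {"<end>": 0.95, "hello": 0.05},
}

def generate(probs, max_steps=10):
    word, out = "<start>", []
    for _ in range(max_steps):
        # Greedy decoding: take the single most probable continuation.
        word = max(probs[word], key=probs[word].get)
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate(probs))  # hello world
```

Real models sample from the distribution instead of always taking the argmax, but that only changes which word gets picked, not the shape of the loop.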
It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck, for a lot (but not all) of important purposes.
I.e., the only duck it walks and quacks like is autocomplete; it does not have agency or any other "emergent" features. For something to even have an emergent property, the system needs feedback from itself, which an LLM does not have.
You mean when riders they didn't like were added on? If the plan I was responding about were actually put in place, you wouldn't be able to get away with riders, and so subsequently there would be no reason for them to kill it, hence it being a bad idea.
Because you are forgetting about framing the narrative. Fox doesn't need to tell its base about shit; it just points to events in the past where the Dems blocked GOP initiatives while parroting on about how great the GOP is for crossing the aisle to get something passed. They won't tell the whole story, and their base isn't going to seek out the truth; they're going to eat up whatever Fox and the like serve up on a platter for them.
No, but she also doesn't need to for her base; in fact, it works better when they play the idiot stooge. It makes them relatable to their base and makes them seem incompetent to the opposition. Having such a naive take doesn't help us when we are actively fighting a fascist takeover.
Well, that’s sad to hear. I remember playing it in the beginning, and most of the servers I joined at least tried to protect the environment. I guess times have changed, it’s been a few years since I actively played it.
Have you ever tried out the game ECO?
Realistically, because they couldn't care less about opposing the Dems if it keeps their power. If they actually tried some reverse-psychology shit like that, the GOP would happily let it pass and show how they are more bipartisan than those "filthy" liberals. They aren't all complete idiots; they are fascists trying to dismantle our democracy.
The problem is that those nuggets of content are near impossible to find on today's YouTube unless you had found them before all the AI bloat channels started using AI to crank out videos of nothing.
How about people pay attention to local elections? The reason we are not seeing funding for EV infrastructure is that most small towns can be bought by the local dealership family, who would rather see continued profits from ICE vehicle maintenance than investments in EV infrastructure. That then conveniently feeds this bullshit narrative that nothing can be sold and we have no infrastructure, so give up on EVs.
Sure, but it's not like it was all sunshine and roses either; there were more frequent malicious ads. But then again, maybe those who are brain-dead, clicking everything in site (pun intended), should get blocked from the internet by a ransomware attack encrypting their drive lmaoo
It would work the way the internet worked before Google and Facebook monetised monitoring everyone to sell ads.
You mean the ads on the side of the screen that told you to play some interactive game in them so they could install malware? Ads of some form were always a thing on the internet: first in forum posts, then website ads, then Google started essentially buying ad space on other websites and paying you for it. I hate Google, but when that first came out, at least most ads weren't filled with malware at that point.
Lmao, alright bud, go fire all your employees and see how you do. Then you'll understand who needs to be loyal to whom.