

Be ready to pay up (premium, boosts, ads) or don’t count on achieving anything. Also, if you gave them money once - you’re labeled as a mark - and if you stop paying you will be punished by the algorithm until you’re ready to pony up again.
Isles of Sea and Sky, Logiart Grimoire and Persona 3 Reload.
All feel like they’ve been made for deck. Great games, too.
I just got an OLED and had to RMA it after 1 day because the audio jack was completely messed up. Crackling, interference, static.
Some research online shows it is a very widespread issue and they keep selling the broken units, since only a percentage of users use the 3.5mm audio jack and only a percentage of those will bother to return the item.
Maybe the refurbished ones will be fixed in the process.
It’s perfect. My only gripe is that it sometimes reads analogue joystick input as a double move - but then you can just use the pad or tweak the Steam Input profile.
Performance-wise it feels like a game made for the Deck.
Wilmot Works It Out - much simpler than Wilmot’s Warehouse, an extremely enjoyable puzzle game.
Persona 5 Reloaded!
The one where the zoom animation plays several times in quick succession? Yea, I get that bug too. Very annoying.
Lol, of course, right after I put my deck on eBay
All this assumes the top selling list is automatically generated based on sales data and not human-curated like most “top/trending” lists on many platforms.
Probably some kind of exoskeleton. This thing is heavy.
“metadata” is such a pretty word. How about “recipe” instead? It stores all the information necessary to reproduce the work verbatim or grab any aspect of it.
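For instance, here’s a minimal Python sketch of reading that “recipe” back out, assuming an AUTOMATIC1111-style PNG that embeds its generation parameters in a text chunk (the filename is hypothetical):

```python
# Read the embedded generation "recipe" (prompt, seed, sampler, model hash, ...)
# from a PNG whose generator wrote a "parameters" text chunk.
from PIL import Image

img = Image.open("output.png")            # hypothetical filename
recipe = img.info.get("parameters")       # None if no such chunk exists
print(recipe if recipe else "No embedded generation metadata found.")
```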
The legal issue of copyright is a tricky one, especially in the US, where copyright is often weaponized by corporations. The gist of it is: the training model itself was an academic endeavor and therefore falls under fair use. Companies like StabilityAI or OpenAI then used these datasets and monetized products built on them, which, in my understanding, skirts the gray zone of legality.
If these private for-profit companies had simply taken the same data and built their own, identical dataset, they would be liable to pay the authors for the use of their work in a commercial product. They get around it by using the existing model, originally created for research and not commercial use.
Lemmy is full of open source and FOSS enthusiasts, I’m sure someone can explain it better than I do.
All in all I don’t argue about the legality of AI, but as a professional creative I highlight the ethical (plagiarism) risks that are beginning to arise in the majority of models. We all know the Joker, Marvel superheroes, popular Disney and WB cartoon characters - and can spot when “our” generations cross the line into copying someone else’s work. But how many of us are familiar with Polish album cover art, Brazilian posters, Chinese film superheroes or Turkish logos? How sure can we be that the work “we” produced using AI is truly original and not a perfect copy of someone else’s work? Does our ignorance excuse this second-hand plagiarism? Or should the companies releasing AI models stop adding features and fix that broken foundation first?
Actually no, but thanks for letting me know, I like his content.
In many cases the AI company is “selling you” the image by making users pay to use the generator. Sure, there are free options too - I’m just giving you an example.
I was on the same page as you for the longest time. I cringed at the whole “No AI” movement and the artists’ protests. I used the very same argument: generations of artists honed their skills by observing the masters, copying their techniques and only then developing their own unique style. Why should AI be any different? Surely AI would not just copy works wholesale, and would instead learn color, composition, texture and other aspects of various works to find its own identity.
It was only when my very own prompts started producing results I recognized as “homages” at best and “rip-offs” at worst that it gave me pause.
I suspect that earlier generations of text-to-image models had better moderation of training data. As the arms race heated up and the pace of development picked up, the companies running these services started rapidly incorporating whatever training data they could get their hands on - ethics, copyright or artists’ rights be damned.
I remember when MidJourney introduced Niji (their anime model) and I could often identify the mangas and characters used to train it. The imagery Niji produced kept certain distinct and unique elements of the character designs from that training data - as a result, a lot of characters exhibited the “Chainsaw Man” pointy teeth and sticking-out tongue - without so much as a mention of the source material or even its themes.
I think the problem is that you cannot ask AI not to plagiarize. I love the potential of AI and use it a lot in my sketching and ideation work. I am very wary of publicly publishing a lot of it though, since, especially recently, the models seem to be more and more at ease producing ethically questionable content.
The problem here is that while the Joker is a pretty recognizable cultural icon, somebody using an AI may have a genuinely original idea for an image that just happens to have been independently developed by someone before. As a result, the AI can produce an image that’s a copy or close reproduction of an original artwork without disclosing its similarity to the source material. The new “author” will then unknowingly rip off the original.
The prompts used to reproduce the Joker and other superhero movies were quite specific, but asking for an “Animated Sponge” is pretty innocent. It is not unthinkable that someone may not be familiar with Mr. Squarepants and think they developed an original character using AI.
These models were trained on datasets that used the authors’ work as training material without compensating them. It’s not every picture on the net, but a lot of it comes from scraping websites, portfolios and social networks wholesale.
A similar situation happens with large language models. Recently Meta admitted to using pirated books (the Books3 dataset, to be precise) to train their LLM, with no plans to compensate the authors or even so much as pay for a single copy of each book used.
Because the original Joker design is not just something that occurred in nature, out of nowhere. It was created by another artist(s) who don’t get credit or compensation for their work.
When YouTube “essayists” cobble a script together by copy-pasting paragraphs and changing some words around, and then earn money off the end product with zero attribution, we all agree it’s wrong. Corporations doing the same to images are no different.
Yes it is. Honest answer.
Love this one. Brilliant.