

KStars
I’ll add that KStars has a really powerful astrophotography suite called Ekos. It has lots of helpful automation features that make imaging relatively simple to set up.
A lot of the cheap tablet SoC vendors like Rockchip (whose SoCs end up in low-cost SBCs) really only do the bare minimum when it comes to proper Linux support. There’s usually next to no effort to upstream their patches, so oftentimes you’re stuck on their vendor kernel. Luckily for the RK3588(S), Collabora has done a considerable amount of work on supporting the SoC and its peripherals upstream. I run my Orange Pi 5 Plus (RK3588) on a mainline kernel and it works for my needs.
This practice is a lot easier to defend for a low cost SoC compared to something as expensive as a Snapdragon Elite though…
Yep, and for good reason honestly. I work in CV, and while I don’t work on autonomous vehicles, many of the folks I know have previously worked on these kinds of problems at companies or research institutes, and all of them agree that in a scenario like this, you should treat the state of the vehicle as compromised and go into an error/shutdown mode.
Nobody wants to give their vehicle an override that can potentially harm the safety of those inside it or around it, and practically speaking there aren’t many options that guarantee safety other than this.
Yeah, I think Lemmy would actually work pretty reasonably. It reminds me of how lots of software projects have Reddit communities. I agree that being able to share one account across many services, and especially not having to pay for infrastructure, is something that drives Discord use over forum-based platforms.
Personally, I’d prefer that projects use forums for community discussions rather than realtime chat platforms like Discord or Matrix. I think the bigger problem with projects using Discord is not that it’s closed source, but rather that it’s difficult to search (since it isn’t indexed by search engines) and the format deprioritizes long-running discussion on a single topic. Since Matrix is also intended for chat, it has these same issues (though at least you can preview a room without making an account).
Afaik the StarFive SoCs used in SBCs are a lot slower than current ARM offerings. Part of that might be because software support is worse, so maybe compilers and related tooling aren’t yet optimized for them?
Hopefully development on these continues to improve though. The biggest nail in the coffin for Pi alternatives has been software support.
I’m a researcher in ML and that’s not the definition that I’ve heard. Normally the way I’ve seen AI defined is any computational method with the ability to complete tasks that are thought to require intelligence.
This definition admittedly sucks. It’s very vague, and it comes with the problem that the bar for requiring intelligence shifts every time the field solves something new. We sort of go “well, given these relatively simple methods could solve it, I guess it couldn’t have really required intelligence.”
The definition you listed is generally more in line with AGI, which is what people likely think of when they hear the term AI.
I believe this is the referenced article:
I’ve been using FreeTube since Piped was very inconsistent for me, but I guess that’s just the nature of these services. I’ll have to check out Invidious again, last time I tried it was several years ago and I stopped using it after the main instance shut down. Is it still under active development? I remember its development status being unclear, partially because the language it uses is not super mainstream, but it’s probably changed since then.
Fortunately, Invidious, Piped, Libretube and Newpipe all exist and work flawlessly so there’s no excuse to use proprietary trash like that.
Isn’t the very point of this post that Invidious and Piped don’t work flawlessly?
Can’t you still modify and distribute Grayjay, just not commercially? I understand that still prevents the app from being considered open source, but their reasoning is valid IMO (to prevent people from making ad-infested clones on the play store, which has happened with NewPipe before).
I think what they mean is that ML models generally don’t directly store their training data; instead, they use it to form a compressed latent space. Some elements of the training data may be perfectly recoverable from the latent space, but most won’t be. As a result, it’s not very surprising that you can get it to reproduce some copyrighted material word for word.
Not sure what other people were claiming, but normally the point being made is that it’s not possible for a network to memorize a significant portion of its training data. It can definitely memorize significant portions of individual copyrighted works (like shown here), but the whole dataset is far too large compared to the model’s weights to be memorized.
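The size argument above is easy to check with back-of-envelope arithmetic. The numbers here are made up but plausible (a hypothetical 7B-parameter model stored at 2 bytes per weight versus a hypothetical 10 TB text corpus), not figures from the thread:

```python
# Back-of-envelope sketch: compare the storage footprint of a model's
# weights to the size of its training corpus. Illustrative numbers only.
param_bytes = 7e9 * 2    # hypothetical 7B parameters at 2 bytes each, ~14 GB
corpus_bytes = 10e12     # hypothetical ~10 TB of training text

ratio = corpus_bytes / param_bytes
print(f"corpus is roughly {ratio:.0f}x larger than the model's weights")
```

Weights that are hundreds of times smaller than the corpus can’t hold all of it even in principle, though they can still hold many individual passages.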
The big thing you get with Frameworks is super simple repairability. This means service manuals, parts availability, and easy access to components like the battery, RAM, SSD, etc. Customizable ports are also a nice feature. You can even upgrade the motherboard later down the line instead of buying a whole new laptop.
I haven’t read the article myself, but it’s worth noting that in CS as a whole, and especially in ML/CV/NLP, selective conferences are generally seen as the gold standard for publications compared to journals. The top venues include NeurIPS, ICLR, and ICML for ML broadly, CVPR for CV, and EMNLP for NLP.
It looks like the journal in question is a physical sciences journal as well, though I haven’t looked much into it.
I’m curious what field you’re in. I’m in computer vision and ML and most conferences have clauses saying not to use ChatGPT or other LLM tools. However, most of the folks I work with see no issue with using LLMs to assist in sentence structure, wording, etc, but they generally don’t approve of using LLMs to write accuracy critical sections (such as background, or results) outside of things like rewording.
I suspect part of the reason conferences are hesitant to allow LLM usage has to do with copyright, since that’s still somewhat of a gray area in the US AFAIK.
Also, one very important aspect of this is that it must be possible to backpropagate through the discriminator. If you just have access to inference on a detector of some kind, but not the model weights and architecture itself, you won’t be able to perform backpropagation and therefore can’t compute gradients to update your generator’s weights.
That said, yes, GANs have somewhat fallen out of favor due to their relatively poor sample diversity compared to diffusion models.
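The backprop point above can be sketched with a toy one-parameter example (all names and numbers here are illustrative, not from any real GAN library): computing the generator’s gradient analytically requires knowing the discriminator’s internals, whereas a black-box detector would only permit finite-difference estimates, which scale terribly with parameter count.

```python
import math

def discriminator(x, v=1.5):
    """Logistic 'real vs fake' score: d = sigmoid(v * x). The weight v is
    the 'internal' we need access to for analytic gradients."""
    return 1.0 / (1.0 + math.exp(-v * x))

def generator(z, w):
    """Trivial linear generator: x = w * z."""
    return w * z

def gen_loss(w, z=0.7, v=1.5):
    """Non-saturating generator loss: L(w) = -log(d(g(z; w)))."""
    return -math.log(discriminator(generator(z, w), v))

def gen_grad(w, z=0.7, v=1.5):
    """Chain rule THROUGH the discriminator: dL/dw = -(1 - d) * v * z.
    This uses v, i.e. white-box access to the discriminator."""
    d = discriminator(generator(z, w), v)
    return -(1.0 - d) * v * z

# Sanity check against a central finite difference, which is all a
# black-box detector would allow (one pair of queries per parameter):
w, eps = 0.3, 1e-6
numeric = (gen_loss(w + eps) - gen_loss(w - eps)) / (2 * eps)
print(gen_grad(w), numeric)  # the two estimates should agree closely
```

With millions of generator parameters, the finite-difference route needs millions of detector queries per update, which is why gradient access to the discriminator matters in practice.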
For reference, ICML is one of the most prestigious machine learning conferences alongside ICLR and NeurIPS.
I suspect this is an Akonadi issue as I have the same problem with Merkuro Calendar. My Nextcloud account (configured via CalDAV and CardDAV endpoints) also disables itself if my network gets signed out, which is mildly annoying. Not sure if there’s an easy fix.