No, I was being unfairly accusatory. I've updated my post.
How was jumping from Windows to NixOS?
I might take a screenshot and keep it in memory, and only save it to disk after some image processing that detects whether there is sensitive data.
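A minimal sketch of the idea in Python, assuming the mss and Pillow packages; detect_sensitive_data is hypothetical, a stand-in for whatever OCR or image check you'd actually run:

```python
import mss
from PIL import Image

def detect_sensitive_data(img: Image.Image) -> bool:
    """Hypothetical check, e.g. OCR the image and scan for secrets."""
    return False  # placeholder

with mss.mss() as sct:
    shot = sct.grab(sct.monitors[1])                   # raw pixels, in memory only
    img = Image.frombytes("RGB", shot.size, shot.rgb)  # wrap as a PIL image
    if not detect_sensitive_data(img):
        img.save("screenshot.png")                     # touch disk only if clean
```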
Hey OP, how do you follow all these updates? Is it RSS feeds for these projects? You're on top of it this morning.
First off, understanding the different data structures at a high level is mandatory. I would learn what the differences between a DataFrame, a Series, and an Index are. Further, learn how numpy's ndarrays play a role.
From there, unfortunately, I had to learn by doing…or rather struggling. It was one question at a time to Stack Overflow, like "how to filter on a column in pandas". Maybe in the modern era of LLMs, this part might be easier. And eventually, I learned some patterns and internalized the data structures.
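To make that concrete, here's a tiny sketch (column names made up) of the structures I mentioned, plus that exact Stack Overflow question:

```python
import pandas as pd

df = pd.DataFrame({"name": ["ann", "bob"], "score": [91, 78]})  # DataFrame
col = df["score"]            # a single column is a Series
labels = df.index            # row labels live in an Index
arr = col.to_numpy()         # the underlying numpy ndarray

high = df[df["score"] > 80]  # "how to filter on a column in pandas"
```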
You are correct. Some data sources like Parquet include metadata that helps with this, but it's not as robust as a database's, I don't think. And of course, CSVs have no metadata (other than, I guess, a header row).
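For example, with pyarrow you can poke at what a Parquet file does carry (the file name is made up):

```python
import pyarrow.parquet as pq

meta = pq.read_metadata("data.parquet")    # row counts, row groups, etc.
print(meta.num_rows, meta.num_row_groups)
print(pq.read_schema("data.parquet"))      # column names and types
```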
The actual specification for how to efficiently store tabular data in memory while also permitting quick execution of filtering, pivoting, i.e. all the transformations you need…is called Apache Arrow. It is the backend of Polars and is also a non-default backend of pandas. I'm unfamiliar with the complexity of the format itself.
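If I have it right, pandas 2.x exposes that as an opt-in dtype backend, something like this (file name made up):

```python
import pandas as pd

# Arrow-backed dtypes instead of the default numpy-backed ones
df = pd.read_csv("data.csv", dtype_backend="pyarrow")
print(df.dtypes)  # e.g. int64[pyarrow], string[pyarrow]
```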
I learned SQL before pandas. It's still tabular data, but the mechanisms to mutate/modify/filter the data are different. It took a long time to get comfy with pandas. It wasn't until I understood that the way you interact with a database table and the way you interact with a dataframe are very different that I finally started to get a grasp on pandas.
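A toy example of the same question in both idioms (table and column names are made up), which shows how differently they read:

```python
# SQL:  SELECT city, AVG(price) FROM listings WHERE price > 100 GROUP BY city
import pandas as pd

listings = pd.DataFrame({"city": ["nyc", "nyc", "sf"], "price": [120, 90, 200]})
result = (
    listings[listings["price"] > 100]  # WHERE becomes a boolean mask
    .groupby("city")["price"]          # GROUP BY becomes .groupby()
    .mean()                            # AVG becomes an aggregation method
)
```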
If it works, don’t fix it!
A big feature of Polars is only loading applicable data from disk. But during exploratory data analysis (EDA) you often have the whole dataset in memory, in which case filters won't help much. Polars has a good page in their docs about all the possible optimizations it is capable of: https://docs.pola.rs/user-guide/lazy/optimizations/
One I see off the top is projection pushdown, which only selects the columns relevant to the final transformation. In pandas, if you perform a group by with aggregation and then only look at a few columns, you still performed the aggregation across all the data. In Polars' lazy API, you define the entire process upfront, so it knows not to aggregate certain columns, for instance.
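A rough sketch of the idea in Polars' lazy API (file and column names are made up):

```python
import polars as pl

lazy = (
    pl.scan_csv("sales.csv")        # nothing is read from disk yet
    .group_by("region")
    .agg(pl.all().mean())           # looks like it aggregates every column...
    .select(["region", "revenue"])  # ...but pushdown prunes to what's used
)
print(lazy.explain())               # the optimized plan shows the pruning
df = lazy.collect()                 # only now does anything execute
```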
Imo Rust already has the perfect book. I would make a resource for C developers. Especially since you know C already.
It's a paradigm shift from pandas. In Polars, you define a pipeline, or a set of instructions, to perform on a dataframe, and only execute them all at once at the end of your transformation. In other words, it's lazy. Pandas is eager, where every part of the transformation happens sequentially and in isolation. Polars also has an eager API, but you likely want the lazy API in a production script.
Because it's lazy, Polars performs query optimization, like a database does with a SQL query. At the end of the day, if you're using Polars for data engineering or in a pipeline, it'll likely run much faster and use less memory. It also executes operations in parallel.
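Shape-wise, the difference looks something like this (toy data):

```python
import polars as pl

df = pl.DataFrame({"x": [1, 2, 3], "y": [10, 20, 30]})

# eager API: each step runs immediately, like pandas
eager = df.filter(pl.col("x") > 1).select("y")

# lazy API: steps only build a plan; the optimizer sees the whole
# pipeline when collect() finally executes it
lazy = df.lazy().filter(pl.col("x") > 1).select("y").collect()
```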
How do you use Godot for data science?
Paperless-ngx will store PDFs and index their contents for searching. It's not necessarily meant for books, but I think it would work.
I use todo lists for groceries. So getting things set up on Nextcloud and then on mobile devices with any CalDAV-compatible app is pretty easy. We have a couple of shared lists.
You can use Tasks.org for Android and Reminders for iOS.
I recently built a site with Hugo. It's very easy: you pick a theme, then write some markdown files. And when you need flexibility, you have it for later. I also think it's the most popular right now, which means there are a lot of themes to pick from and a lot of community support.
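From memory, the quickstart is roughly this (the theme is just an example, swap in whichever one you pick):

```sh
hugo new site mysite && cd mysite
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke themes/ananke
echo "theme = 'ananke'" >> hugo.toml
hugo new content posts/hello.md   # write your markdown here
hugo server                       # live preview at localhost:1313
```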
Use a RAID array, and replace drives as they fail. Ideally they wouldn't fail behind your back, like an optical disc would.
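E.g. on Linux with mdadm, a two-disk mirror is along these lines (device names are placeholders):

```sh
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
cat /proc/mdstat   # check this periodically so a dead drive can't hide
```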
I've used MinIO briefly, and I've never used any other self-hosted object storage. In the context of spinning it up with Docker, it's pretty easy. The difficult part in my project was that I wanted some buckets predefined. The Docker image doesn't provide this functionality directly, so I had to spin up an adjacent container with the MinIO CLI that would create the buckets automatically every time I spun up MinIO (see the sketch below).
But for your use case you would manage bucket creation manually, from the UI. It seems straightforward enough, and I don't have complaints. I think it would work for your use case, but I can't say whether it's any worse or better than the alternatives.
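For reference, the adjacent-container trick I mentioned looked roughly like this (the bucket name and the default minioadmin credentials are just for illustration):

```sh
docker run -d --name minio -p 9000:9000 -p 9001:9001 \
  quay.io/minio/minio server /data --console-address ":9001"

# one-shot sidecar: point the MinIO client at the server, then make the bucket
docker run --rm --network container:minio --entrypoint sh quay.io/minio/mc -c \
  "mc alias set local http://localhost:9000 minioadmin minioadmin && \
   mc mb --ignore-existing local/mybucket"
```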
Has anyone ever used the enterprise version of DBeaver? Does it do as good a job interfacing with NoSQL databases as it does with relational databases?
Thanks for keeping the Lemmy community up to date. It's been cool hearing about how you've grown this project from engine, to website, to online cloud platform, and now to a game cohesive enough to sell to a casual Steam audience. Congratulations on this achievement. Your passion for backgammon, and this bgammon project, is inspiring.
Have you tried this?