I’m the administrator of kbin.life, a general-purpose/tech-orientated kbin instance.

  • 0 Posts
  • 518 Comments
Joined 2 years ago
Cake day: June 29th, 2023



  • Whenever anyone asks if I use AI, my answer is that so far it hasn’t ever delivered working code. However, the majority of times I’ve used it, the code it did provide sent me in the right direction.

    So it’s not useless, and I know the tools have gotten better. But when I see companies seriously talking “AI first” and wanting vibe coding to be a main development strategy, I do really worry.


  • I did defederate from hexbear for a while, a year or so ago, because at the time their users were generally just actively trolling for reactions in pretty much every community and it got to the point where defederating felt necessary. I’ve since removed them from the defed list.

    Generally I agree. But ML seems to have become a bit more clearly biased in their moderation. To me it’s not a reason to defed, but a reason to view the content they do allow in their hosted communities with that bias in mind.


  • I know the OP is using wifi calling as a solution, but since we’re talking VoIP providers:

    I use voxbeam. They’re a wholesale provider: you need a fixed IP for incoming calls, and while their support is good, they’re probably not going to want to help you with end-user type questions. They only support SIP. But pricing is generally good, and there are plenty of reasonably priced DID options.






  • Yeah, I didn’t understand what they meant by the wtf there. It seemed to me someone wondered whether the Action would use a localised version of i (making this stay lowercase on a phone was harder than it should be) or the same i, so I made a simple test for it.

    Not really sure it’s a wtf unless they expected a different result.
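
    For what it’s worth, here’s the kind of thing I mean as a quick rust sketch (not the actual test, just an analogue): rust’s to_uppercase() uses the locale-independent Unicode default mapping, whereas a Turkish-locale-aware uppercase of i would give İ.

    fn main() {
        // Unicode default case mapping, no locale involved: "i" always becomes "I".
        println!("{}", "i".to_uppercase()); // prints "I"

        // For comparison, under Turkish locale rules a lowercase 'i' uppercases
        // to 'İ' (dotted capital I); std alone won't do that mapping for you.
        println!("tr-style: i -> İ");
    }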




  • If you’re running nginx, I use the following:

    if ($http_user_agent ~* "SemrushBot|Semrush|AhrefsBot|MJ12bot|YandexBot|YandexImages|MegaIndex.ru|BLEXbot|BLEXBot|ZoominfoBot|YaK|VelenPublicWebCrawler|SentiBot|Vagabondo|SEOkicks|SEOkicks-Robot|mtbot/1.1.0i|SeznamBot|DotBot|Cliqzbot|coccocbot|python|Scrap|SiteCheck-sitecrawl|MauiBot|Java|GumGum|Clickagy|AspiegelBot|Yandex|TkBot|CCBot|Qwantify|MBCrawler|serpstatbot|AwarioSmartBot|Semantici|ScholarBot|proximic|GrapeshotCrawler|IAScrawler|linkdexbot|contxbot|PlurkBot|PaperLiBot|BomboraBot|Leikibot|weborama-fetcher|NTENTbot|Screaming Frog SEO Spider|admantx-usaspb|Eyeotabot|VoluumDSP-content-bot|SirdataBot|adbeat_bot|TTD-Content|admantx|Nimbostratus-Bot|Mail.RU_Bot|Quantcastboti|Onespot-ScraperBot|Taboolabot|Baidu|Jobboerse|VoilaBot|Sogou|Jyxobot|Exabot|ZGrab|Proximi|Sosospider|Accoona|aiHitBot|Genieo|BecomeBot|ConveraCrawler|NerdyBot|OutclicksBot|findlinks|JikeSpider|Gigabot|CatchBot|Huaweisymantecspider|Offline Explorer|SiteSnagger|TeleportPro|WebCopier|WebReaper|WebStripper|WebZIP|Xaldon_WebSpider|BackDoorBot|AITCSRoboti|Arachnophilia|BackRub|BlowFishi|perl|CherryPicker|CyberSpyder|EmailCollector|Foobot|GetURL|httplib|HTTrack|LinkScan|Openbot|Snooper|SuperBot|URLSpiderPro|MAZBot|EchoboxBot|SerendeputyBot|LivelapBot|linkfluence.com|TweetmemeBot|LinkisBot|CrowdTanglebot|ClaudeBot|Bytespider|ImagesiftBot|Barkrowler|DataForSeoBo|Amazonbot|facebookexternalhit|meta-externalagent|FriendlyCrawler|GoogleOther|PetalBot|Applebot") { return 403; }

    That will block those that actually use recognisable user agents. I add any I find as I go on. It will catch a lot!

    I also have a huuuuuge IP based block list (generated by adding all ranges returned from looking up the following AS numbers):

    AS45102 (Alibaba cloud)
    AS136907 (Huawei SG)
    AS132203 (Tencent)
    AS32934 (Facebook)

    Since these guys run or have run bots that impersonate real browser agents.

    There are various tools online to return prefix/ip lists for an autonomous system number.

    I put both into a single file and include it into my web site config files.
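
    If you ever want to sanity-check an address against one of those prefix lists outside nginx, the matching is just a bitmask comparison. A minimal rust sketch, std only; the prefix and address below are purely illustrative, substitute whatever your AS lookups actually return:

    use std::net::Ipv4Addr;

    // Does `addr` fall inside the `net`/`prefix_len` CIDR block?
    fn in_prefix(addr: Ipv4Addr, net: Ipv4Addr, prefix_len: u32) -> bool {
        let mask = if prefix_len == 0 { 0 } else { u32::MAX << (32 - prefix_len) };
        (u32::from(addr) & mask) == (u32::from(net) & mask)
    }

    fn main() {
        // Illustrative documentation range only, not taken from a real lookup.
        let (net, len) = (Ipv4Addr::new(203, 0, 113, 0), 24);
        let client = Ipv4Addr::new(203, 0, 113, 7);
        println!("blocked: {}", in_prefix(client, net, len));
    }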

    EDIT: Just to add, keeping on top of this is a full-time job!

    EDIT 2: Removed Mojeek bot as it seems to be a normal web crawler.


  • I think it’ll be a “we’ll see” situation. This was the main concern for y2k, and I don’t doubt there’s some stuff still around that was only partially patched for y2k and is still using string dates.

    But the vast majority of software now works with timestamps, and of course some things will need work. With y2k, though, the vast majority of business software needed changing. I think in this case most of it will already be working correctly, and it’ll be the job of developers (probably in a panic less than a year before, as is the custom) to catch the few outliers. Yes, some will slip through the cracks, but that was also the case last time round.


  • You’re right on every point, but I’m not sure how that goes against what I said.

    Most applications now use the epoch for date and time storage, and for the 2038 problem the issues will come down to making sure either time_t or 64-bit long values (and matching storage) are used, which will be a much smaller change than was the case for y2k. Since more people also use libraries for date and time handling, it’s likely this will already be handled.

    Most databases have datetime types which again are almost certainly already ready for 2038.

    I just don’t think the scale is going to be close to the same.


  • Not really processor-based. The timestamp needs to be ulong (not advised, but good for dates up to around 2106, though it cannot express dates before 1970) or llong (long long). I think the unsigned route is a bad idea, but I bet some people too lazy to change their database schema will just do this internally.

    The type time_t in Linux is now 64-bit regardless, so recompiling applications that used it will be fine. Of course it’s a problem if the database is storing 32-bit signed integers, but the column type on the database can be changed too, and that isn’t really hard.
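
    To illustrate the 32-bit signed problem (just a throwaway rust sketch, not code from anything real):

    fn main() {
        // One second past the signed 32-bit limit: 2^31 seconds after the epoch,
        // i.e. just after 2038-01-19 03:14:07 UTC.
        let after_2038: i64 = 2_147_483_648;

        // Shoved into a 32-bit signed integer it wraps negative, which an
        // epoch-based system reads as a date back in 1901.
        let truncated = after_2038 as i32;

        println!("64-bit: {after_2038}, as i32: {truncated}");
    }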

    As for the Y10K problem, I think it will almost entirely be formatting problems. In the 80s and 90s, storage was at a premium, databases were generally much simpler, and as such dates were very often stored as YYMMDD. There also wasn’t so much use of standard libraries, so fixing the Y2K problem required quite some work. In some cases there wasn’t time to make a proper solution. Where I was working there was a two-step solution.

    One team made the interim change: adjust everywhere dates were read so that anything <30 (it wasn’t 30, it was another number, but I forget which) was treated as 2000+number and anything else as 1900+number. This meant the existing product would be fine for another 30 years or so.
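
    In code, that interim fix was essentially this kind of pivot (rust here just for brevity, the original obviously wasn’t, and 30 is only a stand-in for the real cutoff):

    // Expand a two-digit year using a pivot: below the cutoff is 20xx, otherwise 19xx.
    fn expand_year(two_digit: u32) -> u32 {
        if two_digit < 30 { 2000 + two_digit } else { 1900 + two_digit }
    }

    fn main() {
        println!("{} {} {}", expand_year(5), expand_year(29), expand_year(75));
        // prints: 2005 2029 1975
    }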

    The other team was writing the new version of the software, which used MSSQL server as a back-end, with proper datetime typed columns and worked properly with years before and after 2000.

    I suspect this approach wasn’t unusual, and most software now uses some form of epoch datatype, which should be fine for storing, reading and writing dates beyond Y10K. But some hard-coded date format strings will need to be changed.

    Source: I was there, 3000 years ago.


  • r00ty@kbin.life to Linux@lemmy.ml: Intel or AMD CPUs for new Laptops?

    Well, for a gamer, no real comment. But there is one metric where Intel still trashes AMD’s APUs: hardware video acceleration/encoding. The quality is objectively better on Intel Quicksync.

    When I was getting a home box that also needed to do transcoding, an Intel CPU was a requirement. My desktop development/gaming system? Ryzen + NVidia.


  • The problem with rust, I always find, is that I’m from the previous coding generation: I grew up on 8-bit machines with BASIC and assembly language that you could actually use, then moved into OO languages. With rust, I’m always trying to shove a round block into a square hole.

    When I look at other projects done originally in rust, I think they’re using a different design paradigm.

    Not to say what I make doesn’t work, or isn’t still fast and mostly efficient (mostly…). But one example: because I’m used to working with references and shoving them in different storage, everything ends up surrounded by Rc<xxx> or Rc<RefCell<xxx>> and accessed with blah.as_ptr().borrow().x etc.
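
    Roughly what I mean, as a stripped-down sketch (made-up Node type, obviously):

    use std::cell::RefCell;
    use std::rc::Rc;

    // A made-up type that several owners want to read and mutate, OO-style.
    struct Node { x: i32 }

    fn main() {
        // Shared ownership plus interior mutability: the Rc<RefCell<...>> wrapping.
        let node = Rc::new(RefCell::new(Node { x: 1 }));

        // "Shoving the reference into different storage" is just cloning the Rc.
        let stored_elsewhere = Rc::clone(&node);

        stored_elsewhere.borrow_mut().x += 1; // runtime-checked mutable borrow
        println!("x = {}", node.borrow().x);  // prints "x = 2"
    }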

    Nothing wrong with that, but the code (to me at least) feels messy in comparison to, say, C#, which is where I do most of my day-job work these days. And since things are often done very differently in the rust projects I see online, I feel like to really get on with the language I need a design paradigm shift somewhere.

    I do still persist with rust because I think it’s way more portable than other languages. By that I mean it will make executables for linux and windows from the same code, needing only the standard libraries installed on the machine. So when I think of writing a project I want to work across platforms, I’m generally looking at rust first these days.

    I just realised this is programmerhumor. Sorry, not a very funny comment. Unless you’re a rust developer and laughing at my plight of trying to make rust work for me.