  • Video encoding is generally not a likely workload in an HPC environment. Also, I’m not sure whether those results are really FreeBSD versus everyone else or clang versus everyone else; I would have liked to see clang results on the Linux side. It’s possible that the BSDs’ core libraries did better, but they probably weren’t doing that much, and odds are the compiler made all the difference. HPC is notorious for offering users every compiler it can get its hands on.

    The kernel specifically makes a difference in some of those tests (forking strongly favors Linux; semaphores strongly favor BSD). The vector math, and particularly the AVX-512 results, would be the most applicable to HPC users, and there the Linux results are astoundingly better. This might be due to a linear algebra library that only bothered with Linux, with the test suite using it when available. Alternatively, BSD may have lacked a CPU frequency management strategy, or defaulted to a different one that got in the way of vector math performance.
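    To illustrate the compiler point: a loop like the one below is exactly where clang, gcc, and vendor compilers can diverge regardless of OS. This is a generic sketch, not one of the actual tests from the article; the flags are real, the comparison illustrative.

```c
/* saxpy.c - a trivially vectorizable kernel; for loops like this the
 * compiler and its flags often matter far more than the kernel or libc.
 * Build it two ways and compare, e.g.:
 *   clang -O3 -march=skylake-avx512 saxpy.c -o saxpy_clang
 *   gcc   -O3 -march=skylake-avx512 saxpy.c -o saxpy_gcc */
#include <stdio.h>
#include <stdlib.h>

#define N (1L << 24)

int main(void) {
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    if (!x || !y) return 1;
    for (long i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The hot loop: whether this becomes AVX-512, AVX2, or scalar code
     * is decided entirely at compile time, not by the OS. */
    const float a = 3.0f;
    for (long i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("%f\n", y[0]);  /* keep the result live so it isn't optimized away */
    free(x);
    free(y);
    return 0;
}
```

    That compile-time dependence is exactly how a clang-versus-gcc difference can masquerade as a FreeBSD-versus-Linux difference.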


  • Keep in mind that AVX-512 would be a key factor in HPC (in fact the factor for Top500 specifically), and there the BSDs lag hugely. The memory copy result, for whatever reason, also favors Linux, and STREAM is another common HPC benchmark (see the sketch after this comment).

    It’s unclear how much of the benefit, where it appeared, was compiler versus OS; you can run clang on Linux too, and HPC shops frequently have multiple compilers available.

    And that’s before considering that a lot of HPC participants only bother with Linux. For the best linear algebra library, the best interconnect, the best MPI, your chances are much better under Linux just by popularity.
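    For reference, the memory-copy result mentioned above is the kind of thing STREAM measures; its ‘triad’ kernel is essentially the loop below. A minimal sketch assuming a POSIX clock, not the official benchmark, which repeats the kernel many times, takes the best run, and validates the results.

```c
/* Minimal STREAM-style triad: estimates sustainable memory bandwidth.
 * Bytes moved per pass = 3 arrays * N * sizeof(double). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 20000000L   /* ~160 MB per array, large enough to blow out caches */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    const double scalar = 3.0;
    for (long i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];   /* the triad kernel */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("a[0]=%.1f, triad: %.2f GB/s\n",
           a[0], 3.0 * N * sizeof(double) / secs / 1e9);
    free(a);
    free(b);
    free(c);
    return 0;
}
```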



  • FreeBSD is unlikely to squeeze more performance out of these. It’s particularly disadvantaged because the high-speed networking vendors favored in this space ignore FreeBSD (Windows is at best an afterthought); only Linux is thoroughly supported.

    Broadly speaking, FreeBSD was left behind in part because of copyleft and in part by doing too good a job of packaging.

    In the 90s, if a company made a go of a commercial operating system sourced from a community, it either went FreeBSD, effectively forking it, keeping its variant closed source, and contributing nothing upstream, or it went Linux and was generally forced by copyleft to upstream its changes.

    Part of it may be that a Linux installation doesn’t come from a single upstream, but is assembled from disparate projects by a ‘distribution’. There’s no canonical set of kernel + GUI + compilers + utilities for Linux, whereas FreeBSD runs a much more prescriptive project. That has loosened a bit over time, but back in the 90s FreeBSD was a one-stop-shop, batteries-included project that maintained everything the OS needed under a single authority. Linux needed distributions, and that created room for entities like Red Hat and SUSE to make their mark.

    So ultimately, when those traditionally commercial Unix shops started seeing x86 hardware with a commercially supported Unix-alike, they could pull the trigger. FreeBSD was a tougher pitch, since it had never attracted something like a Red Hat or SUSE that also opted into that open-source model of business engagement.

    Looking at the performance of these applications on these systems, it’s hard to imagine an OS doing better. Moving data is generally about as close to zero copy as a use case can get, and these systems tend to run essentially a single application at a time, so CPU and I/O scheduling hardly matter. The community used to sweat ‘jitter’, but at this point those background tasks are such a rounding error in overall system performance that they aren’t worth thinking about anymore.
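    As a concrete example of what ‘close to zero copy’ means: on Linux, sendfile(2) lets the kernel push file data straight from the page cache to a socket without bouncing it through user-space buffers. A minimal sketch with error handling trimmed; HPC interconnects go further still with RDMA, which bypasses the kernel entirely.

```c
/* Zero-copy file-to-socket transfer via sendfile(2). Linux signature
 * shown; FreeBSD has its own sendfile with a different prototype.
 * send_whole_file() is just an illustrative helper name. */
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Assumes sock_fd is an already-connected socket. */
long send_whole_file(int sock_fd, const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return -1; }

    off_t offset = 0;
    long total = 0;
    while (offset < st.st_size) {
        /* The kernel moves the pages itself; the data never visits
         * a user-space buffer. */
        ssize_t n = sendfile(sock_fd, fd, &offset, st.st_size - offset);
        if (n <= 0) break;   /* error or peer closed */
        total += n;
    }
    close(fd);
    return total;
}
```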


  • Surprisingly, there’s not a lot of ‘exciting tuning’; a lot of these systems are exceedingly conservative in that respect. From a software perspective, the most common ‘weird’ thing is the affinity for diskless boot, and that mostly comes from a history of hard drives being a frequent cause of failure-driven downtime (the stateless nature of diskless boot continues to be desired, but the community would likely never have bothered if not for OS HDD failures). They also sometimes like managing the OS as, to oversimplify, a common chroot, but that’s mostly about running hundreds of thousands of copies of what should be exactly the same thing, rather than any exotic nature of their workload.

    Linux is largely the choice by virtue of how this market evolved: it was largely Unix-based, but most of the applications in use were open source out of necessity, so institutions could bid, say, Sun versus IBM versus SGI and keep working regardless of who was awarded the business. In that time frame Windows NT wasn’t even an idea, and most of these institutions wouldn’t touch ‘freeware’ for such important tasks.

    In the 90s Linux happened, and critically for this market, Red Hat and SUSE happened. Now institutions could have a much more vibrant and fungible set of hardware vendors, with a credible commercial software vendor able to support all of them. As a bonus, you could run the distributions, or clones of them, for free, which helped smaller academic institutions get a reasonable shot without diverting money from hardware to software. Sure, some aggressively exotic things might have been possible under the prior norm of proprietary systems, but mostly the win was improved vendor-to-vendor consistency.

    Microsoft tried to get into this market in the late 2000s, but no one had asked for them: poor compatibility with existing code, higher cost, and much worse tooling for managing headless, multi-user compute nodes at scale.


  • Well, there’s only so much in gaming that can reasonably be done server side.

    Sure, in theory the server could determine that a player shouldn’t be visible and simply not transmit that player’s location to the client, addressing seeing through walls (see the sketch at the end of this comment).

    But once a player is even potentially visible, an aimbot can act. If you are crawling in a ghillie suit in the grass, but the other player has a client that skips rendering grass and replaces the ghillie suit model with a suit made of traffic cones…

    Now, intrusive anti-cheat isn’t worth it, but it’s an unavoidable reality that preserving integrity is ultimately up to the client.

    The closest you could get is streamed gameplay, where even the rendering is server side. That’s also not worth it, and even then I could see machine-vision cheats with faked controls being used to gain an unfair edge.
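    To make the server-side filtering concrete, here is a rough sketch of the visibility culling described above. Every name in it is a hypothetical stand-in; a real engine would use its own entity types and an occlusion raycast rather than this placeholder distance check.

```c
/* Server-side interest management: only entities the receiving player
 * could plausibly see are included in their state update, so a hacked
 * client has nothing to wall-hack with. All names here are hypothetical
 * stand-ins for a real engine's equivalents. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;
typedef struct { int id; vec3 pos; } entity;

/* Placeholder occlusion test: a real engine would raycast against level
 * geometry; here "visible" just means within 50 units. */
static bool has_line_of_sight(vec3 from, vec3 to) {
    float dx = to.x - from.x, dy = to.y - from.y, dz = to.z - from.z;
    return dx * dx + dy * dy + dz * dz < 50.0f * 50.0f;
}

/* Copy into `out` only the entities `viewer` could see; anything
 * filtered here is simply never sent over the wire. */
static size_t filter_visible(const entity *viewer,
                             const entity *all, size_t n,
                             entity *out) {
    size_t kept = 0;
    for (size_t i = 0; i < n; i++) {
        if (all[i].id == viewer->id) continue;   /* skip the viewer itself */
        if (has_line_of_sight(viewer->pos, all[i].pos))
            out[kept++] = all[i];
    }
    return kept;
}

int main(void) {
    entity players[3] = { {1, {0, 0, 0}}, {2, {10, 0, 0}}, {3, {100, 0, 0}} };
    entity visible[3];
    size_t n = filter_visible(&players[0], players, 3, visible);
    printf("player 1 is sent %zu of 2 other entities\n", n);  /* expects 1 */
    return 0;
}
```

    Anything dropped here never reaches the network, so no client-side hack can reveal it; everything that survives the filter is, as argued above, back in the client’s hands.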



  • I’m old enough that the vaccine was unavailable, so I got the illness and at least one scar; my kid was vaccinated, and all my peers’ kids are vaccinated, so they’ll simply never know what it’s like.

    Some countries seem to think it’s better to keep the virus circulating so that previously infected people are re-exposed, keeping their immune systems primed against shingles. But the data from countries that vaccinate most kids seem to show this doesn’t actually matter, so we may see more countries embrace vaccinating against it.


  • The NIH director was appointed by Trump, which came with pretty strong anti-mask, anti-vax, and general ‘COVID was a hoax’ baggage, so he is unfortunately not that credible.

    There is a study covering the ages he specifies, but its conclusion was that the risks from the vaccine were still lower than the risks of COVID itself, even for that age group; and no matter how they sliced it, the risks either way were minimal for that group: neither the vaccine nor COVID was very risky overall. Pre-vaccine chicken pox was deadlier to kids than COVID was to that age group, and we didn’t consider chicken pox particularly risky; vaccinating was mostly worthwhile to head off the chance of shingles later.