There’s certainly a history of Unix and Unix-like forks, which is rather simple compared to the Linux distro forks (go right to the big pic).
Some other folks just took the bus.
Back in 2000, there was something like that for the kernel: SELinux (Security-Enhanced Linux), which continues to live on in various distributions’ kernels. Not a full O/S, though, and not generally regarded as a PoS.
There was an interesting post on Kagi a few days ago, with an alternative take on how it operates.
What if the RAID 5 array gets encrypted by ransomware? How many backups are there then?
As to how rationales go, this is the clearest.
I hate it.
LibreOffice does “develop and maintain a certification system for professionals of various kinds who deliver and sell services around LibreOffice.”
After a bit of research, I’m forced by facts (NS records can be cached for an undetermined time) to see what you’re saying. Thank you for teaching me.
The workings are, of course, a bit more complicated than what either of us have said (here’s a taste), but there is a situation as you describe, where separating the registrar from the name servers, and the name servers from the domain, could save the domain from going down.
If a registrar goes out of business, ICANN transfers the domain(s) to another registrar.
If a name server business fails, you change name servers through your registrar.
You can’t really fix registrar services in your name server, nor name server problems through your registrar. (Unless, of course, your registrar is also your name server.)
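The caching behavior behind all this can be sketched as a toy resolver cache: downstream resolvers keep serving a cached NS set until its TTL runs out, which is why a nameserver change doesn’t take effect everywhere at once. All names and TTL values below are hypothetical, and real resolvers are far more involved than this.

```python
import time

class NSCache:
    """Toy NS-record cache: entries are served until their TTL expires."""

    def __init__(self):
        self._cache = {}  # domain -> (ns_list, expires_at)

    def put(self, domain, ns_list, ttl, now=None):
        now = time.time() if now is None else now
        self._cache[domain] = (ns_list, now + ttl)

    def get(self, domain, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(domain)
        if entry and now < entry[1]:
            return entry[0]   # still within TTL: served from cache
        return None           # expired: re-query the parent zone

cache = NSCache()
# Hypothetical domain with a 1-day TTL, cached at t=0.
cache.put("example.test", ["ns1.old-host.test"], ttl=86400, now=0)
print(cache.get("example.test", now=3600))   # old NS set still cached
print(cache.get("example.test", now=90000))  # TTL expired, cache miss
```

An hour in, the resolver still hands back the old nameserver; only after the TTL lapses does it go back to the registry-published delegation, which is the window where a registrar-side NS change actually lands.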
Like, say, slow down an older phone so one has to buy a new faster phone? Source
A registration system where only registered parts are allowed, so no clean-room third-party manufacturing (in the clean room (software engineering) sense)? Every single part has to be registered with the original device manufacturer? This seems like an end run around right to repair.
Source: Passmark (CPU Benchmark).
Sounds like a job for Lenny bot. (There are samples on video sites.)
This would be seriously useful, what are the impeccable primary sources?
In 2016, HDDs were more reliable (MTBF).
In 2022, for the first 5 years of service, SSDs are looking more reliable, with a roughly constant failure rate (~1%/yr), versus HDDs’ failure rate, which climbs after 5 years.
(Caveat: not just bit rot, but general failure data.)
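The difference between a constant failure rate and a rising one is easy to put in numbers. A rough sketch: the ~1%/yr SSD figure is from the comparison above, while the rising HDD curve below is made-up illustrative data, not anyone’s published stats.

```python
def survival(annual_rates):
    """Probability a drive survives every year, given per-year failure rates."""
    p = 1.0
    for r in annual_rates:
        p *= (1.0 - r)
    return p

# SSD: constant 1%/yr for 5 years (figure from the comment above).
ssd = survival([0.01] * 5)
# HDD: hypothetical rates that climb in later years.
hdd = survival([0.01, 0.01, 0.02, 0.04, 0.07])

print(f"SSD 5-yr survival: {ssd:.3f}")  # ~0.951
print(f"HDD 5-yr survival: {hdd:.3f}")
```

Even modest late-life rate increases compound noticeably over a 5-year window, which is the shape of the caveat above.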
Small enough to fit on a CD, which isn’t everyone’s definition of “small.” There are, of course, much smaller Linux distros, less than a tenth the size, particularly if a CLI is adequate.
Maybe, if items are under-priced as often as over-priced.