A thousand novel mistakes.
The two cases: either they knew what it was and did it maliciously, or they didn’t know what they were doing and got socially engineered in the process. Both cases are cause for failure.
Sure. But for an entry level interview as a pen tester… Scanning with Kali should be an easy task.
Using Kali? Easy if you have training. The capstone for our security course a decade ago was to find and exploit 5 remote machines (4 on the same network, 1 on a second network that only one of the machines had access to) in an hour with Kali. I found all 5 but could only exploit 3 of them. If I didn’t have to exploit any of them, finding all 5 would have been reasonably easy.
Kali basically ships with a library of known exploits, and you just run the scanner against a target.
This isn’t novel exploit discovery. This is “which of these 10 Windows machines hasn’t been updated in 3 years?”
Ah. I see. So it’s not that you can’t get them it’s that they are expensive and you are looking for a reasonably priced way to get one. That makes sense.
Online? I’m confused, do they not ship to Greece?
There is probably an opportunity in this space to provide ultra-low-cost single-board SPA/Electron-serving applications. But getting it adopted is going to be an issue.
A good industrial engineer is going to look at it kind of suspiciously, kind of like how Tesla rightfully got raked over the coals for using consumer-grade electronics in cars when their screens started melting.
I guarantee that some of them are airgapped, or that private-network support was provided as part of securing them.
The Windows compatibility subsystem also supports applications that would otherwise not have survived an upgrade.
Who are they going to pay to maintain FLTK? There are still companies that are averse to using Linux because they don’t know what is going to happen when Linus dies. That might sound strange to us, but companies need legal protections that they can enforce through contracts, and support contracts make that happen.
The laggy bit can be explained this way: all of these decisions are made because in theory this all sounds “right” (to the company) but then they get their prototype out with a medium level hardware solution and they look for places to squeeze. Oh, you mean I can take this half price min spec machine and it works 98% of the time? Sold.
I’m not trying to say these are good practices; I am trying to explain the decisions that are made.
Many used to (pre-Windows CE), but writing the whole stack was more expensive than license + support costs.
Many still do, but they aren’t full-fledged kiosks. By the time you get to full-HD screens, the cost of the chips needed to refresh the screen reliably outpaces the cost of going with standard consumer electronics. Cost for parts/replacement is also lower that way. This dovetails into needing an OS that supports those chips, and suddenly we are into a full OS.
A question to consider seriously: name a company that has a full OS supporting modern tooling/development environments with consistent graphical fidelity across a wide range of hardware, that a manufacturer can pay to maintain the host OS, that provides guarantees on OS LTS/security patching, and that has a proven track record of deploying, supporting, and delivering kiosk support.
The only serious answer is Microsoft, and maybe Canonical… But Canonical hasn’t been around for as long as most of these kiosks have.
There are a couple of huge blockers for manufacturers looking at companies that provide Linux support:
Industry track record. Red Hat, Canonical, Google, and Oracle are basically the only large-scale players in enterprise Linux support. Red Hat basically only provides support for server/backend infrastructure. Has Google had anything other than Gmail and Maps last for more than five years? So that leaves us with Canonical. What’s the longest release Canonical has? 4 years now? Microsoft has 15-year support contracts. The only other player in the market that even comes close is Oracle (Oracle still supports Java 1.4, for example: 22 years).
Consistent graphical performance: until the last 5 years, graphical fidelity on Linux has been a shit show. A decade ago, getting even the largest players to support Linux was a huge undertaking. Basically the only consistent graphics support was the result of Android, and that is basically only MediaTek.
Development environments. Windows wins this hands down, without even a question. Go back 15-20 years and it’s even more obviously in Microsoft’s favor. .NET GUI apps are brain-dead easy to make, super consistent, and stupid easy to maintain. This drastically decreases development time and cost, allowing companies to pay for the crazy-expensive support contracts.
The numbers these companies deal with aren’t thousands or even hundreds of thousands of dollars; they’re tens or hundreds of millions. There is no way in hell a manufacturer is going to give an untested bespoke Linux distro maintainer 25 million to keep that distro running for the next 10-20 years. There isn’t a feasible way for a small company to even offer support at that price for that length of time.
Oracle and Red Hat are the only truly feasible options, and it costs more to develop GUI apps on either platform when there isn’t a 20-year track record of known success. It’s obvious why companies pick Microsoft.
It amazes me that people don’t realize that most kiosks run Windows.
Win12 confirmed 2044 release date.
Win12 confirmed as a Linux mint cinnamon derivative distro.
I too have forgotten to memset my structs in C++ TensorFlow after prototyping in Python.
If it’s not specified, monthly. Otherwise it’s specified.
I don’t think either is actually true. I know many programmers who can fix a problem once the bug is identified but wouldn’t be able to find it themselves nor would they be able to determine if a bug is exploitable without significant coaching.
Exploit finding is a specific skill set that requires thinking about multiple levels of abstraction simultaneously (or at least intentionally and methodically). I have found that most programmers simply don’t do this.
I think the definition of “good” comes into play here, because the vast majority of programmers need to dependably discover solutions to problems that other people find. Ingenuity and multilevel abstract thinking are not critically important and many of these engineers who reliably fix problems without hand holding are good engineers in my book.
I suppose that it could be argued that finding the source of a bug from a bug report requires detective skills, but even this is mostly guided inspection with modern tooling.
For lithium batteries (phone batteries), it’s actually more important than draining to 0. Many studies indicate that the average phone battery should last several thousand cycles while losing only 5-10% of total capacity, provided it is never charged above 80%. The minimum charge (even down to 0%) is unrestricted, as is any charge rate below 70%.
The tl;dr is that every time you charge to 100%, it costs the same cycle life as 50-100 charges to 80%. Draining a lithium-chemistry battery to 0 isn’t an issue as long as you don’t leave it in a discharged state (i.e., you charge it immediately).
Them: “How do I get to your place in my career?”
Me: “What do you mean?”
Them: “You… Have the position I want eventually. What did you do?”
Me: “Well. 20… No, that can’t be right… I mean… Yeah… 20 years ago… I graduated college… Then uhh. I’m… Uh…”
At this point either you make up some bullshit or you say it’s just experience. Then you realize what a midlife crisis is and wonder if you’re having one, which is like 20% of the definition of a midlife crisis.
Not having to interact with Nazis is tied to which instance I signed up on? I’m confused by this argument.