• 1 Post
  • 82 Comments
Joined 1 year ago
Cake day: July 2nd, 2023




  • Using Kali? Easy if you have training. The capstone for our security course a decade ago was to find and exploit 5 remote machines (4 on one network, 1 on a second network that only one of those machines could reach) in an hour with Kali. I found all 5 but could only exploit 3 of them. If I hadn’t had to exploit any of them, finding them all would have been reasonably easy.

    Kali basically has a library of known exploits, and you just run the scanner on a target (a rough sketch of that workflow is below).

    This isn’t novel exploit discovery. This is “which of these 10 Windows machines hasn’t been updated in 3 years?”
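    A minimal sketch of that scan-and-match loop, assuming nmap and searchsploit (both ship with Kali) are on the PATH; the target addresses and search keywords are made up for illustration:

    ```python
    import subprocess

    targets = ["10.0.2.4", "10.0.2.5"]  # hypothetical lab hosts

    for host in targets:
        # nmap's "vuln" script category flags services that match known CVEs
        scan = subprocess.run(
            ["nmap", "-sV", "--script", "vuln", host],
            capture_output=True, text=True, check=False,
        )
        print(scan.stdout)

    # cross-reference an interesting hit against Exploit-DB's local index
    lookup = subprocess.run(
        ["searchsploit", "smb", "windows", "remote"],
        capture_output=True, text=True, check=False,
    )
    print(lookup.stdout)
    ```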


  • Separating the data between accounts makes it fall under different retrieval requirements.

    With one merged account, a request for all of that account’s data has to include both chunks. Keeping the accounts separate means a request against one doesn’t have to cover data held by the other.

    It can also mean that internally they have a mechanism for turning previously identifying data into data that is no longer identifying (breaking userid-to-data pairings, for example), which “anonymizes” it to the point that it no longer needs to be reported or retained (a rough sketch of that idea is below).
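    A minimal, made-up sketch of that “break the pairing” idea; the field names, the token scheme, and whether this actually clears an anonymization bar are all assumptions for illustration, not anything from GOG or CDPR:

    ```python
    import secrets

    # hypothetical records; the field names are invented for this example
    records = [
        {"user_id": "u-1001", "purchase": "game A", "country": "DE"},
        {"user_id": "u-1002", "purchase": "game B", "country": "PL"},
    ]

    def break_pairing(rows):
        # Replace the stable account id with a random, unlinkable token and keep
        # only non-identifying fields, so aggregate data survives but nothing
        # ties a row back to a specific account any more.
        return [
            {"token": secrets.token_hex(8), "purchase": r["purchase"], "country": r["country"]}
            for r in rows
        ]

    print(break_pairing(records))
    ```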


  • GDPR and PII reasons, most likely. It’s a nightmare keeping track of why certain data is on certain accounts, and this can vastly simplify the GDPR compliance mechanisms. If your GOG account is merged with your CDPR account, there is probably significantly more “sensitive” data (CC numbers, addresses, etc.) in the GOG account. This probably exempts some data that either CDPR or GOG tracks from deletion or retrieval requests.


  • There is probably an opportunity in this space to provide ultra-low-cost single-board computers serving SPA/Electron applications. But getting it adopted is going to be an issue.

    A good industrial engineer is going to look at it kinda suspiciously, kinda like how Tesla got rightfully raked over the coals for trying to use consumer-grade electronics in cars and then having their screens melt.



  • Who are they going to pay to maintain FLTK? There are still companies that are averse to using Linux because they don’t know what is going to happen when Linus dies. That might sound strange to us, but companies need legal protections that they can enforce through contracts, and support contracts make that happen.

    The laggy bit can be explained this way: all of these decisions are made because in theory this all sounds “right” (to the company), but then they get their prototype out on a mid-range hardware solution and look for places to squeeze. Oh, you mean I can take this half-price min-spec machine and it works 98% of the time? Sold.

    I’m not trying to say these are good practices; I am trying to explain the decisions that are made.


  • Many used to (pre-Windows CE), but writing the whole stack was more expensive than license+support costs.

    Many still do, but they aren’t full-fledged kiosks. By the time you get to full-HD screens, the cost of the chips needed to refresh the screen reliably outpaces the cost of going with standard consumer electronics. Cost for parts/replacement is also lower that way. This dovetails into needing an OS that supports those chips, and suddenly we are into a full OS.


  • A question to consider seriously: name a company with a full OS that supports modern tooling/development environments and consistent graphical fidelity across a wide range of hardware, that a manufacturer can pay to maintain the host OS, that provides guarantees on OS LTS/security patching, and that has a proven track record in deploying and supporting kiosks.

    The only serious answer is Microsoft, and maybe Canonical… But Canonical hasn’t been around for as long as most of these kiosks have.

    There are a couple of huge blockers for manufacturers looking at companies that provide Linux support:

    1. Industry track record. Red Hat, Canonical, Google, and Oracle are basically the only large-scale players in enterprise Linux support. Red Hat basically only provides support for server/backend infrastructure. Has Google had anything other than Gmail and Maps last for more than five years? So that leaves us with Canonical. What’s the longest release Canonical has? 4 years now? Microsoft has 15-year support contracts. The only other player in the market that even comes close is Oracle (Oracle still supports Java 1.4, for example: 22 years).

    2. Consistent graphical performance. Until the last 5 years, graphical fidelity on Linux was a shit show. A decade ago, getting even the largest players to support Linux was a huge undertaking. Basically the only consistent graphics support was the result of Android, and that is basically only MediaTek.

    3. Development environments. Windows wins this hands down, without even a question. Go back 15–20 years and it’s even more obviously in Microsoft’s favor. .NET GUI apps are brain-dead easy to make, super consistent, and stupid easy to maintain. This drastically decreases development time and cost, allowing companies to pay for the crazy expensive support contracts.

    The numbers these companies deal with aren’t thousands or even hundreds of thousands of dollars; they’re tens or hundreds of millions. There is no way in hell a manufacturer is going to give an untested, bespoke Linux distro maintainer 25 million to keep that distro running for the next 10–20 years, and there isn’t a feasible way for a small company to even offer support at that price for that length of time.

    Oracle and Red Hat are the only truly feasible options, and it costs more to develop GUI apps on either platform when there isn’t a 20-year track record of known success. It’s obvious why companies pick Microsoft.






  • I don’t think either is actually true. I know many programmers who can fix a problem once the bug is identified but wouldn’t be able to find it themselves, nor would they be able to determine whether a bug is exploitable without significant coaching.

    Exploit finding is a specific skill set that requires thinking about multiple levels of abstraction simultaneously (or at least intentionally and methodically). I have found that most programmers simply don’t do this.

    I think the definition of “good” comes into play here, because the vast majority of programmers need to dependably discover solutions to problems that other people find. Ingenuity and multilevel abstract thinking are not critically important, and many of these engineers who reliably fix problems without hand-holding are good engineers in my book.

    I suppose that it could be argued that finding the source of a bug from a bug report requires detective skills, but even this is mostly guided inspection with modern tooling.