CPU Vulnerabilities and Design Choices


By Sean Leahy

Over the past several days, four white papers from different security teams were released detailing major and previously unknown CPU vulnerabilities (links are provided below).

To quickly summarize: these vulnerabilities allow one process to read memory belonging to other processes running on the same physical system. Code running on a host without privileged access has free rein to view the data of any other process on that computer (including processes inside virtual machines). While this is a major vulnerability, it is not beyond the assumptions that Data Machines makes when planning for security in our architectures.
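For administrators who want to check whether a given Linux host reports mitigations for issues like these, the kernel exposes per-vulnerability status files under sysfs. Below is a minimal sketch that reads them; it assumes a Linux kernel recent enough to include mitigation reporting (the directory is absent on older kernels).

```python
# Minimal sketch: query the Linux kernel's self-reported CPU vulnerability
# status. Requires a kernel that exposes the sysfs path below; older kernels
# predate this reporting.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report_cpu_vulnerabilities() -> None:
    if not VULN_DIR.is_dir():
        print("Kernel does not expose CPU vulnerability status.")
        return
    # Each file (e.g. mds, meltdown, spectre_v2) holds a one-line status
    # such as "Mitigation: ..." or "Vulnerable".
    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")

if __name__ == "__main__":
    report_cpu_vulnerabilities()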

Host privilege escalation is always a risk that needs to be considered, and we plan for it in our larger security strategy. Toward this end, we have minimized scenarios in which any type of host privilege escalation can compromise data in a manner that is problematic for our clients, partners, and their research. Data Machines uses perimeter access (via a VPN) to provide strong attribution. We log suspicious user behavior on our systems, and we mitigate and report any attempted exploit or unplanned data movement we see. Our clients do not experience any limitations on system or data access in the interim.
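As an illustration of the kind of check that can surface unplanned data movement, the sketch below flags sessions whose outbound transfer volume exceeds a per-user baseline. The record format, baselines, and threshold logic are assumptions made up for this example, not a description of our actual monitoring pipeline.

```python
# Hypothetical illustration: flag sessions whose outbound transfer volume
# exceeds a per-user baseline. All names and numbers here are assumptions
# for the example, not Data Machines' production monitoring.
from dataclasses import dataclass

@dataclass
class SessionRecord:
    user: str        # attributed identity (no shared accounts)
    bytes_out: int   # total bytes sent off-host during the session

# Assumed per-user daily baselines (bytes); real values would come from
# observed history.
BASELINES = {"alice": 5_000_000_000, "bob": 1_000_000_000}
DEFAULT_BASELINE = 500_000_000

def flag_unplanned_movement(sessions: list[SessionRecord]) -> list[SessionRecord]:
    """Return sessions whose outbound volume exceeds the user's baseline."""
    return [
        s for s in sessions
        if s.bytes_out > BASELINES.get(s.user, DEFAULT_BASELINE)
    ]

if __name__ == "__main__":
    demo = [SessionRecord("alice", 6_000_000_000),
            SessionRecord("bob", 10_000_000)]
    for s in flag_unplanned_movement(demo):
        print(f"ALERT: {s.user} moved {s.bytes_out} bytes off-host")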

We generally allow researchers to have access to computation resources in any manner they see fit, with two caveats:

  1. We need to know who they are at all times (strong attribution, no shared accounts, etc.).

  2. We do not let them create conditions that allow inbound connections from the Internet except in very tightly controlled situations (e.g., HTTPS behind our hardened authentication, SSH, or SCP); a sketch of how such a constraint might be audited follows this list.
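As one concrete illustration of the second caveat, the sketch below audits a Linux host's listening TCP sockets against an allowlist of inbound services. The allowlist (SSH on 22, HTTPS on 443) and the parsing of /proc/net/tcp are assumptions for the example, not our production tooling.

```python
# Minimal sketch (Linux, IPv4 only): list TCP sockets in LISTEN state and
# warn about any outside an assumed allowlist of controlled inbound services.
ALLOWED_PORTS = {22, 443}  # assumed: SSH and HTTPS behind hardened auth

def listening_ports(proc_file: str = "/proc/net/tcp") -> set[int]:
    """Parse /proc/net/tcp for sockets in LISTEN state (state code 0A)."""
    ports = set()
    with open(proc_file) as f:
        next(f)  # skip header line
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            if state == "0A":  # TCP_LISTEN
                # Local address is "hex_ip:hex_port"; take the port.
                ports.add(int(local_addr.split(":")[1], 16))
    return ports

if __name__ == "__main__":
    unexpected = listening_ports() - ALLOWED_PORTS
    for port in sorted(unexpected):
        print(f"WARNING: unexpected inbound listener on port {port}")
```

A fuller audit would also read /proc/net/tcp6 for IPv6 listeners and run on a schedule so that drift from the allowlist is caught quickly.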

Data we hold for our clients falls into two basic groups: partner data and research data. Partner data is isolated on separate VLANs and is not kept on systems shared by the larger body of users. More specific constraints are put in place on a case-by-case basis. At the other extreme, we can and have kept data offline in cold storage in a GSA-approved safe in a secure facility, where it is brought out only for tightly controlled research sprints. In short, Data Machines works with each client to provide data handling and security practices that meet their specific needs, and our capabilities are broad enough to accommodate nearly any request.

Security is a critical requirement for every project at Data Machines and should be designed with as few assumptions as possible. The recent exploits listed below demonstrate how a bad assumption can become a single point of failure that compromises system behavior. Data Machines strives to build robust security controls and practices around our systems in anticipation of novel methods of exploitation. With good architecture and system design choices, the impact of individual security failures can be minimized even when they are as extreme as CPU hardware vulnerabilities.

Here are links to the announced vulnerabilities:

https://cpu.fail/

https://mdsattacks.com/

https://zombieloadattack.com/

https://www.wired.com/story/intel-mds-attack-speculative-execution-buffer/

