Detecting and Preventing Cyber Crimes

Catching malware and hackers in a General-Purpose Computer (GPC) is hard and expensive because it must be done by software. The GPC lacks hardware checks: the compiled binary image is blindly trusted. As a result, software checks are added externally to monitor for and catch corruption. These checks are indirect, incomplete, and disconnected from the task at hand, and the best hacks prove impossible to find. These so-called zero-day attacks, once exposed, require a new method of detection or a patched upgrade. All of this takes time and effort, and in the intervening months serious damage takes place. Attacks can be grouped into three levels and addressed accordingly.
  1. The first level, the 'Script Kiddie' attack, is the easiest to address. Script attacks are easy to find and repel automatically using simple code-analysis tools, and several off-the-shelf solutions do this well. Attack tools such as keystroke loggers have dropped from $250 in 2008 to as low as $25 today. At this price, anyone can get into the game, targeting close friends more easily than strangers.
  2. The second level, the 'Commercial' attack, requires software tripwires for intrusion detection. Even skilled attackers trip special software alarms that check for the unusual or exceptional actions hackers and malware need to perform; for example, executing PowerShell commands against an organization's production code on production servers (a minimal sketch of such a tripwire follows this list). It takes skilled experience and deep understanding to select alarm triggers that do not fire unnecessarily. This is expensive and impossible to do with complete accuracy; there is a complex tradeoff between overzealous detection and weak detection. Commercial attacks look to steal credit cards, passwords, and data that can be resold.
  3. The third level is the 'Enemy' nation-state attack, mounted by trained and skilled staff in a foreign intelligence agency, a homeland security service, or departments comparable to the CIA, NSA, FBI, and so on. They spend time spying to discover tripwires in advance and specifically avoid them. The research is part of a long-term plan to undermine the American way of life. They select targets to perform the best attacks and to escape unnoticed by hiding their tracks. They know in advance what they want to steal or corrupt and where to find the data to attack. They conduct advanced, disposable penetrations to spy, put in place their own tripwires, reconnoiter, and select the weakest attack vector as the main attack that meets their goal. They use unrecognized 'zero-day' exploits, attack vectors unknown to the defender. In the main, these attacks go undetected. They are caught only by accident, if at all, and only when the defenders get lucky or the story is later published by the press or by WikiLeaks.
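As a rough illustration of the tripwire idea in level 2, the Python sketch below flags one kind of unusual action: a PowerShell process appearing on a production host. The event format, host-naming convention, and watch-list are assumptions invented for this example, not the interface of any particular intrusion-detection product.

```python
# Minimal tripwire sketch: scan an event stream and flag actions a
# legitimate production workload rarely performs. Event fields, the
# "prod-" host prefix, and the watch-list are hypothetical.

SUSPICIOUS_PROCESSES = {"powershell.exe", "pwsh"}   # assumed watch-list
PRODUCTION_PREFIX = "prod-"                          # assumed host naming

def check_event(event):
    """Return an alert message if the event trips a wire, else None."""
    host = event.get("host", "")
    process = event.get("process", "").lower()
    if host.startswith(PRODUCTION_PREFIX) and process in SUSPICIOUS_PROCESSES:
        return f"ALERT: {process} executed on production host {host}"
    return None

events = [
    {"host": "prod-web-01", "process": "PowerShell.exe", "user": "svc_deploy"},
    {"host": "dev-build-07", "process": "python", "user": "ci"},
]

for e in events:
    alert = check_event(e)
    if alert:
        print(alert)   # fires only for the production PowerShell event
```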
At every level, preventing computer memory corruption is the central task of cybersecurity. If a computer is an engine, then the information stored in memory is the fuel. Memory segmentation prevents memory corruption by limiting access rights to approved use. This essential technology is missing for software stored in a general-purpose computer. The acknowledged superior approach to memory segmentation is called Capability-Limited Addressing. However, it is a common misconception that memory protection is all that Capability-Limited Addressing achieves. This misconception stems from the prior, poor results of hybrid designs that retain the default privileges systemic to all existing General-Purpose Computers. The hybrid mistake is perpetuated by CHERI, Cambridge University's second attempt at building a Capability-Based Computer. In this latest hybrid design, capability-based addressing registers are added to von Neumann's memory-sharing architecture. The dangerous 'superuser/operating system' privileges remain in place, and the machine is permanently flawed by the limitations of Identity-Based Access Monitors and the three classes of malware and hacking above. Hybrid solutions bypass the capability-based hardware and continue to flood the memory with all forms of digital corruption.
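To make the idea of Capability-Limited Addressing concrete, the sketch below models a capability as an immutable (base, length, rights) descriptor and checks every load and store against it. This is a conceptual Python model only; the field names, rights encoding, and fault type are assumptions for illustration, not the PP250 or CHERI hardware format.

```python
# Conceptual model of Capability-Limited Addressing: every memory access
# is checked against an immutable capability before it is allowed.

from dataclasses import dataclass

READ, WRITE = 1, 2          # illustrative rights bits

class CapabilityFault(Exception):
    """Raised when an access violates the capability's bounds or rights."""

@dataclass(frozen=True)     # immutable, like a hardware-protected descriptor
class Capability:
    base: int
    length: int
    rights: int

class Memory:
    def __init__(self, size):
        self.cells = [0] * size

    def load(self, cap, offset):
        self._check(cap, offset, READ)
        return self.cells[cap.base + offset]

    def store(self, cap, offset, value):
        self._check(cap, offset, WRITE)
        self.cells[cap.base + offset] = value

    def _check(self, cap, offset, needed):
        if not (0 <= offset < cap.length):
            raise CapabilityFault("access outside the capability's bounds")
        if not (cap.rights & needed):
            raise CapabilityFault("access right not granted by the capability")

mem = Memory(1024)
buf = Capability(base=100, length=16, rights=READ | WRITE)
mem.store(buf, 0, 42)
print(mem.load(buf, 0))     # 42
# mem.load(buf, 16)         # would raise CapabilityFault: outside the segment
```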

An extreme Church-Turing Machine like PP250 is designed without any default privileges to abuse. Even a bad application downloaded in error can only hurt itself. It is the extreme implementation of Capability-Limited Addressing as a Church-Turing Machine that makes error detection easy: every machine action against memory is checked by capability-based hardware. Sadly, this competition was washed away when the general-purpose microprocessor flooded the market. The PP250 was the only commercial example, but it detected all software errors and removed all hacking by mathematically leveling the computational playing field right across the network. You can find the book on Amazon; it is an interesting story.

The architecture of PP250 as a Church-Turing Machine is extreme, removing all shared memory and every default privilege. The operating system and the superuser are replaced by protected function abstractions that can run safely inline using nature's universal model of computation. By dynamically binding imperative Turing-Commands to authorized Lambda Calculus symbols, all default privileges used by malware and network hackers vanish. It is the archetype of a Church-Turing Machine that obeys nature's universal computational model as defined by the Lambda Calculus. The Capability Keys create a DNA of approved functional relationships: each mathematical symbol directly equates to an immutable, fraud- and forgery-resistant Capability Key representing one object or a frame of computation in an object-oriented Namespace. PP250 needed no central operating system and no almighty superuser privileges, just individually protected digital objects implementing the required classes of function abstraction. Named immutable Capability Keys also define approved access to complex nodes, with a DNA string for the abstraction of many function abstractions. To be used, a key must formally exist locally, either by design or delegated by the functional owner for an authorized purpose. Furthermore, this deconstructs the 'monolithic' security policy of fragmented compilations by placing Capability Keys in the hands of private citizens using a personal Namespace as a secure repository.
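As a loose illustration of symbols bound to Capability Keys in a personal Namespace, the Python sketch below dispatches a call only when the caller's namespace already holds a key naming that function abstraction. The class names (Namespace, Key) and their methods are invented for this example; they are not the PP250 instruction set.

```python
# Sketch: a symbol is callable only through a Capability Key held in the
# caller's private Namespace; no key, no call.

class Key:
    """An unforgeable token naming one protected function abstraction."""
    def __init__(self, target):
        self._target = target          # holders cannot re-point the key

class Namespace:
    def __init__(self):
        self._keys = {}                # symbol -> Key, private to the owner

    def bind(self, symbol, key):
        self._keys[symbol] = key

    def call(self, symbol, *args):
        key = self._keys.get(symbol)
        if key is None:
            raise PermissionError(f"no capability for symbol '{symbol}'")
        return key._target(*args)      # dispatch only through the key

def audited_transfer(amount):          # a protected function abstraction
    return f"transferred {amount}"

user_space = Namespace()
user_space.bind("transfer", Key(audited_transfer))
print(user_space.call("transfer", 100))   # allowed: the key is present
# user_space.call("format_disk")          # PermissionError: no key held
```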
Any democracy must be a level playing field to avoid the centralized threats of cyber-dictatorship and to prevent undetected malware and remote hacking. These pervasive problems in General-Purpose Computer Science must be tamed. In a Church-Turing Machine, all hacking, malware, and undiscovered design bugs that lead to zero-day attacks are stopped. All errors, whatever the cause, are detected and prevented, in advance, on the spot.
Since no unfair (default) privileges exist, the ambient authorities in a General-Purpose Computer that allow the direct attacks you correctly suggest no longer exist. To say it another way, the mathematical model of immutable Capability Keys not only covers 'least access rights' for memory protection; it also authorizes any 'need to know.' Need to know defines a DNA skeleton for a species of software that protects the individual abstractions from abuse, inside or outside, everywhere across Cyberspace. It means infallible, detailed, in-depth clockwork automation with mathematical precision. Infallible mathematics exists in every mechanical computer, starting with the abacus, improving with the slide rule, leading to Babbage's infallibly automated Analytical Engine, and culminating with the Church-Turing Thesis and Church-Turing Machines.
Even if some access right to some abstraction is too generous, the Capability Key is still locally confined to a private Namespace. Unless the owner delegates a 'dynamic' Capability Key for some specific use by another, it always remains private, protected, and inaccessible to all others.
As my book explains (see http://amazon.com/author/kenneth...), checks and balances exist whenever both sides of the Church-Turing Thesis are present. The computer becomes a network-ready, self-standing software machine. At every step, for every action, a Capability Key governs and approves some limited use of an authorized resource, with a proven need-to-know but limited access rights.
Thus, in advance, a specific Capability Key must be formally shared by the owner with the authorized user before any attempted use. Default rights do not exist, and because each dynamic relationship is temporary, it is also removable. Shared keys are transparently, efficiently, and immediately withdrawn by deleting the previously shared Capability Key. Possessing a Capability Key to an external object is achieved only after a valid introduction, again using the simple laws of the Lambda Calculus and the clockwork of Capability-Limited Addressing.
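A minimal, self-contained sketch of this share-then-revoke pattern is below. Each Namespace is reduced to a private dictionary of keys; the names, the dictionary model, and revocation by deleting the recipient's copy are simplifying assumptions for illustration only.

```python
# Delegation and revocation, modeled with plain dictionaries: sharing means
# the owner places one key into another namespace, and revocation means
# deleting that previously shared key.

owner_keys = {"report": lambda: "quarterly report"}   # owner's private Namespace
guest_keys = {}                                       # another user's Namespace

def call(namespace, symbol):
    if symbol not in namespace:
        raise PermissionError(f"no capability for '{symbol}'")
    return namespace[symbol]()

# Explicit introduction: the owner delegates one key for one named purpose.
guest_keys["report"] = owner_keys["report"]
print(call(guest_keys, "report"))        # allowed while the key is held

# Revocation: deleting the shared key immediately withdraws access.
del guest_keys["report"]
# call(guest_keys, "report")             # PermissionError: key withdrawn
```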
