Computer security experts warned government and business as early as 1965 that the increasing ability of computers to exchange data across communication lines would inevitably invite attempts to penetrate those lines and access the data being exchanged. At the 1967 Joint Computer Conference, which brought together more than 15,000 computer experts, government and business analysts discussed their concern that computer communication lines could be penetrated, coining the term "penetration" and identifying what has become perhaps the major challenge in computer communications today. The idea of actually testing systems to ensure their integrity arose with the organizations, such as the RAND Corporation, that first identified this now major threat to networked communication. The RAND Corporation, in cooperation with the Advanced Research Projects Agency (ARPA) in the USA, produced a seminal report, generally known as the Ware Report after its lead author, Willis H. Ware.
The report analyzed the security problem and proposed policy and technical measures that even today lay the groundwork for security practice. Following the report, government and business began assembling teams that would probe computer networks and systems for vulnerabilities in order to protect them from unethical hacking and penetration. So-called tiger teams, named after specialized military units, were formed in the late 1960s to test the ability of computer networks to resist attack. Most systems failed quickly and abysmally. This penetration testing, largely carried out by the RAND Corporation and government, demonstrated two things: first, that systems could be penetrated, and second, that using penetration testing techniques to identify vulnerabilities in systems, networks, hardware, and software was a useful exercise that deserved further study and development.
James P. Anderson
One of the early pioneers of penetration testing was James P. Anderson. In his 1972 report, Anderson outlined a series of definitive steps a tiger team could take to test whether a system could be penetrated and compromised. His approach was first to identify a vulnerability and design an attack against it, and then to find the weaknesses in that attack and ways to neutralize the threat. This fundamental method is still in use today. In the 1970s and 1980s, research into building secure systems was still novel. Anderson's 1980 publication, which showed how to design a program that monitors a computer system's use and flags unusual activity that might signal a hacker, is so simple that any savvy computer user today would readily understand how it works and could point to any number of ways around it. Still, the work was groundbreaking at the time, and many of its methods form part of standard system protection today.
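The core idea of monitoring audit records for unusual use can be sketched very simply. The following is a minimal illustration in that spirit, not Anderson's actual design; the event format, names, and threshold are assumptions made for this sketch.

```python
from collections import Counter

def flag_unusual_logins(events, threshold=3):
    """Flag accounts with suspiciously many failed logins.

    events: iterable of (user, outcome) pairs, where outcome is 'ok' or 'fail'.
    Returns the set of users with more than `threshold` failed logins.
    (Threshold and record format are illustrative assumptions.)
    """
    failures = Counter(user for user, outcome in events if outcome == "fail")
    return {user for user, count in failures.items() if count > threshold}

# A toy audit log: 'mallory' fails four times, exceeding the threshold of 3.
log = [("alice", "ok"), ("mallory", "fail"), ("mallory", "fail"),
       ("mallory", "fail"), ("mallory", "fail"), ("bob", "ok")]
print(flag_unusual_logins(log))  # flags 'mallory'
```

A fixed per-user threshold is exactly the kind of rule a savvy user could evade, for instance by spreading attempts across accounts or staying just under the limit, which is the weakness noted above.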
Multiplexed Information and Computing Service
Another system developed and used by a broad range of government, military, and corporate entities was Multics (Multiplexed Information and Computing Service). It may be the granddaddy of computer systems, operating in some form or other from 1965 to 2000, and arguably still running today. Honeywell eventually purchased the system and used it to serve education, government, and industry customers. The key development to come from Multics was that it delivered secure computing service to users in remote locations, a radical advance for the period. Fundamental designs from Multics still survive in other operating systems such as UNIX.
The Multics security system was so good that it became the first, and for many years the only, operating system awarded a B2 rating by the US government. Even so, in 1974 the US Air Force conducted an ethical hack of its Multics system, one of the earliest known white-hat attacks in the USA, and plenty of vulnerabilities were revealed. Regardless, Multics is still considered one of the most secure systems ever built, in part because all of its security features are part of the standard product rather than supplementary or add-on features. As a result, application designers had to ensure that their products met Multics' access-control security requirements if they wanted them to work on the system. Today, when security features can be, and often are, optional, applications may not meet such requirements, leaving individual computer users vulnerable to hacking.
In the 1990s, the Security Administrator Tool for Analyzing Networks (SATAN) became available. The name scandalized some, and its developers added a feature allowing it to be renamed SANTA, a testament to the perhaps naturally mischievous streak of penetration testers and hackers. The tool let administrators run a series of tests against their own networks to identify areas of possible vulnerability, producing a report along with a tutorial explaining the issues that might arise. SATAN is no longer in development and has been superseded by tools such as Nmap and Nessus, to name a few.
Today, the available options for penetration testing are numerous and highly specialized. Many systems bundle tools for a wide range of security testing. One example among many is Kali Linux, a distribution used in digital forensics and penetration testing. It ships with standard security tools including Nmap, Aircrack-ng, Kismet, Wireshark, the Metasploit Framework, Burp Suite, and John the Ripper. That a single system contains so many penetration testing tools demonstrates how much more sophisticated today's technology has become, and how many ways ingenious hackers are finding to create mischief in shared computing environments, especially the Internet. Pentoo is a similar penetration-testing-focused distribution.
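To give a sense of what the simplest of these tools automate: at its most basic, a port scanner such as Nmap checks whether a TCP connection to a given host and port succeeds. The following is a minimal sketch of that one idea only, not of Nmap itself or its many scan types; the function name and timeout are assumptions for illustration.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Minimal TCP connect check: returns True if a TCP handshake to
    (host, port) completes within the timeout, False otherwise.
    This is the simplest kind of probe a full scanner automates at scale."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Even this toy version should only ever be pointed at hosts you own or are explicitly authorized to test; running probes against third-party systems without permission is exactly the unethical hacking penetration testing is meant to prevent.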
The statistics on the threats posed by hackers are sobering. A recent RAND report suggests that, in a single year, as many as 65 million people in the USA alone had their personal data breached in some way, and that cyber-crime generates billions of dollars in revenue each year. Moreover, the very tools created by those who work to secure information can also be used to exploit it.
Today, on-demand penetration testing is one of the latest methods for testing a network for ways it could be breached and its information accessed. This hybrid approach combines manual, real-time attempts by ethical hackers to breach a system's security with automated tools that run checks against it. Together, the two are thought to offer a broader and more rigorous security review. The method has evolved to include subscription-based services, which allow smaller companies that cannot afford either the wide array of penetration testing tools or an expert to operate them all to hire one to check their systems as needed. Since many system-wide checks run only semi-annually, this approach can be cost-effective, especially for smaller organizations.