ETHICAL HACKING WITH KALI LINUX
Learn Fast How to Hack Like a Pro
By Hugo Hoffman
All rights reserved. No part of this book may be reproduced in any form or by any electronic, print, or mechanical means, including information storage and retrieval systems, without permission in writing from the publisher. Copyright © 2020
Disclaimer
Professionals should be consulted as needed before undertaking any of the actions endorsed herein. Under no circumstances will any legal responsibility or blame be held against the publisher for any reparation, damages, or monetary loss due to the information herein, either directly or indirectly. This declaration is deemed fair and valid by both the American Bar Association and the Committee of Publishers Association and is legally binding throughout the United States. There are no scenarios in which the publisher or the original author of this work can in any fashion be deemed liable for any hardship or damages that may befall the reader or anyone else after undertaking the information described herein. The information in the following pages is intended only for informational purposes and should thus be thought of as universal. As befitting its nature, it is presented without assurance regarding its continued validity or interim quality. Trademarks that are mentioned are used without written consent and can in no way be considered an endorsement from the trademark holder.
Intended Audience
This book is designed for anyone who wishes to become an Ethical Hacker or Penetration Tester in the field of Information Security. It is written in everyday English, and no technical background is necessary. The contents of this book provide a practical guide on how you can use Kali Linux to implement various attacks on both wired and wireless networks. If you are preparing to become an IT professional, such as an Ethical Hacker, IT Security Analyst, IT Security Engineer, Network Analyst, Network Engineer, or Penetration Tester, yet are still in doubt and want to know about network security, you will find this book extremely useful. You will learn key concepts and methodologies revolving around network security, as well as key technologies you should be mindful of. If you are truly interested in becoming an Ethical Hacker or Penetration Tester, this book is for you. Assuming you are preparing to become an Information Security professional, this book will certainly provide great details that will benefit you as you enter this industry.
Introduction
First, we're going to start with an introduction to Linux, so that you have a general idea of what this operating system is about. Next, we are going to look at some software and hardware recommendations for ethical hackers, and then jump right into the installation of VirtualBox and Kali Linux. This book is mainly about Kali Linux tools and how to deploy them, yet first we have to look at understanding penetration testing and how it works with reconnaissance and footprinting. We will look at each and every step you should take as a penetration tester, including Stage 1, Stage 2, and Stage 3. This is important so you understand how to take on a job as an ethical hacker, for example, what kind of questions you should ask when getting hired by a client. So in this section, we are going to cover the what, the when, and the how, but the legal requirements as well, so you can cover your back. We are also going to look at penetration testing standards so you can decide which one suits you best. Next, we are going to get more practical by understanding footprinting and host discovery with port scanning. After that, we are going to get our hands dirty by understanding how you can discover devices with Hping3, how to set up a proxy for Burp Suite, and how to target devices with Burp Scanner. Next, we are going to look at some application testing, such as randomizing session tokens, spidering, and SQL injection with SQLmap. Then we move on and start looking at both wired and wireless attacks using Kali Linux. We are going to look at a dictionary attack with Airodump-ng, ARP poisoning with Ettercap, and implementing passive reconnaissance. Next, we are going to look at capturing both wired and wireless traffic using port mirroring, deploying a SYN scan attack, and using Xplico. Next, we are going to deploy MITM attacks in various ways, such as using Ettercap or SSLstrip. Moving on, you will learn how to manipulate packets using the tool called Scapy, and how to capture IPv6 traffic with Parasite6. Next, we are going to implement DoS attacks in various ways, by either using a deauthentication attack or creating a rogue access point or an evil twin with a tool called mdk3. Next, we are going to look at implementing a brute force attack with THC Hydra, and then we will look at implementing various attacks at the same time, on demand, with some very powerful and dangerous tools such as Armitage's Hail Mary, the Metasploit Framework, or SET (the Social-Engineering Toolkit). These tools are available for both white hat and black hat hacking. Once applied, the outcome will be the same in both cases.
What you must understand is that using such hacking tools in any unauthorized manner can lead to a dreadful situation for the person doing so, as it might cause system damage or a system outage. If you attempt to use any of these tools on a wired or wireless network without being authorized and you disturb or damage any systems, that would be considered illegal black hat hacking. Therefore, I would like to encourage all readers to implement any tool described in this book for WHITE HAT USE ONLY. Anything legally authorized to help individuals or companies find vulnerabilities and identify potential risks is fine. All the tools I will describe, you should use for improving your security posture only. If you are eager to learn about hacking and penetration testing, it's recommended to build a home lab and practice using these tools in an isolated network that you have full control over, and that is not connected to any production environment or the internet. If you use these tools for black hat purposes and you get caught, it will be entirely on you, and you will have no one to blame. So, again, I would highly recommend you stay behind the lines, and anything you do should be completely legit and fully authorized. If you are not sure about anything that you are doing and don't have a clue about the outcome, ask your manager or DO NOT DO IT. This book is for education purposes. It is for those who are interested in learning what is behind the curtains and would like to become an ethical hacker or penetration tester. Besides the legal issues, before using any of these tools, it is recommended that you have fundamental knowledge of networking concepts.
Table of Contents
Chapter 1 Introduction to Linux
Chapter 2 Software & Hardware Recommendations
Chapter 3 Installing Virtual Box & Kali Linux
Chapter 4 Introduction to Penetration Testing
Chapter 5 Pen Testing @ Stage 1
Chapter 6 Pen Testing @ Stage 2
Chapter 7 Pen Testing @ Stage 3
Chapter 8 Penetration Testing Standards
Chapter 9 Introduction to Footprinting
Chapter 10 Host Discovery with Port Scanning
Chapter 11 Device Discovery with Hping3
Chapter 12 Burp Suite Proxy Setup
Chapter 13 Target Setup for Burp Scanner
Chapter 14 Randomizing Session Tokens
Chapter 15 Burp Spidering & SQL Injection
Chapter 16 SQL Injection with SQLmap
Chapter 17 Dictionary Attack with Airodump-ng
Chapter 18 ARP Poisoning with Ettercap
Chapter 19 Capturing Traffic with Port Mirroring
Chapter 20 Passive Reconnaissance with Kali
Chapter 21 Capturing SYN Scan Attack
Chapter 22 Traffic Capturing with Xplico
Chapter 23 MITM Attack with Ettercap
Chapter 24 MITM Attack with SSLstrip
Chapter 25 Packet Manipulation with Scapy
Chapter 26 Deauthentication Attack against Rogue AP
Chapter 27 IPv6 Packet Capturing with Parasite6
Chapter 28 Evil Twin Deauthentication Attack with mdk3
Chapter 29 DoS Attack with mdk3
Chapter 30 Brute Force Attack with THC Hydra
Chapter 31 Armitage Hail Mary
Chapter 32 The Metasploit Framework
Chapter 33 Social-Engineering Toolkit
Conclusion
About the Author

Chapter 1 Introduction to Linux
Understanding Linux, the leading operating system of the cloud, Internet of Things, DevOps, and enterprise server worlds, is essential to an IT career. Comprehending the world of open software licensing is not easy, but let me give you some highlights. If you're planning to work with free software like Linux, you should understand the basics of the rules that govern it. Let's first look at licensing. There are three main approaches to licensing: the Free Software Foundation, founded in 1985 by Richard Stallman; the younger Open Source Initiative; and Creative Commons. First of all, the Free Software Foundation wants software to be free, not as in free of charge, but in allowing users the freedom to do whatever they like with it. Think about it like this: you may have to pay for it, but once it's yours, you can do whatever you want with it. Richard Stallman and his foundation are the original authors of the GPL, the GNU General Public License, which allows users the right to do whatever they like with their software, including modifying it and selling it, as long as they don't make any changes to the original license conditions. The Linux kernel is the most significant piece of software released under the GPL. The Open Source Initiative, while cooperating with the Free Software Foundation where possible, believes that there should be more flexible licensing arrangements available if open source software is to achieve the greatest possible impact on the larger software market. Open source means that the original programming code of a piece of software is made freely available to users, along with the program itself. Licenses that line up more closely with the OSI goals include various versions of the Berkeley Software Distribution (BSD) license, which requires little more than that redistributions display the original software's copyright notice and disclaimer. This makes it easier for commercial developers to deploy their modified software under new license models without having to worry about breaking earlier terms. The FOSS and FLOSS designations may help reflect the differences between these two visions: FOSS only implies that the software can be acquired free of charge, while FLOSS focuses on what you can do with the software once you obtain it.
The Creative Commons license authorizes creators of nearly anything, such as software, films, music, or books, to select exactly the rights they wish to reserve for themselves. Under the Creative Commons system, a creator can pick any combination of the following five elements: attribution, which allows modification and redistribution as long as the creator attribution is included; share-alike, which requires the original license conditions to be included in all future distributions and copies; non-commercial, which permits only non-commercial use; no derivative works, which permits further redistribution, but only of unmodified copies; and public domain, which allows all possible usage. It's essential when using software released under Creative Commons to be aware of exactly which elements have been selected by the author. The Creative Commons share-alike condition, along with Stallman's GPL, are in practical terms related to the copyleft distribution system. Copyleft licenses permit full reuse and redistribution of a software package, but only when the original essential permissions are included in the next level of distribution. This can be valuable for authors who don't want their software to ever evolve into closed license types, but want its derivatives to remain free forever. Non-copyleft open source licenses are frequently referred to as permissive licenses. Permissive licenses will typically not require adherence to any parent restrictions. Examples of such licenses, which often allow just about any use of the licensed software as long as the original work is attributed in derivatives, are the MIT, BSD, and Apache licenses. Nowadays, Apache and MIT are the ones most widely used. But just because open source software is free doesn't mean that it has no place within the operations of for-profit companies. In fact, the products of many of the largest and most profitable companies are built using open source software. In many cases, companies will freely release their software as open source, while also providing premium service and support to paying customers.
The Ubuntu and CentOS Linux distributions, for example, follow that model: they're supported by Canonical and Red Hat respectively, both of which are in the business of providing support for enterprise clients, and these are very serious businesses. Red Hat itself was purchased by IBM for over $30 billion. It's worth noticing that the majority of programming code contributions to the Linux kernel are written by full-time employees of large technology companies, including Google and Microsoft. Oddly, viewing the license of the open source software on your device isn't always so easy. Desktop apps will frequently make their license information available through the Help and About menu selections, but in other cases the best way to find licensing information for a specific product is to visit its website. The original Linux kernel was created by Linus Torvalds in the early 90s and then donated to the community. Community means anyone, anytime, anywhere, and donated means that the programming code of any Linux component is freely available for anyone to download, modify, and do anything they might want with, including profiting from their own customized versions if they want to. A computer operating system, or OS, is a set of software tools designed to interpret a user's commands, so they can be translated into terms that the host computer can understand. Just about any operating system can be installed and launched on most standard hardware architectures, assuming there is enough memory and processing power to support the OS's features. Hence, you can load Linux natively on any PC or Mac, on a tiny development board running an ARM processor, or as a virtualized container image within a Docker environment. Nearly all desktop operating systems provide two ways to access their tools: through a graphical user interface, also known as a GUI, and through a command-line interface, or CLI. Every modern operating system allows you to securely and consistently run sophisticated productivity and entertainment tools through the GUI, and provides a suitable environment where you can develop your own software, which was the only thing the first personal computers could do.
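To give you a quick taste of the CLI side of that equation, here are two commands that work on virtually any modern distribution; this is just an illustrative sketch, and the exact output will vary from system to system:

    # Print the kernel name, release, and machine architecture
    uname -a
    # Show which distribution and version is installed
    cat /etc/os-release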
All Linux distributions have that in common, but what they do differently is what's more interesting. The most obvious difference between Linux and its commercial competitors is commercial limitations: the others have them, and Linux does not. This means that you're free to install as many copies of Linux on as many hardware devices as you wish, and no one will tell you otherwise. This freedom changes the way you'll use your operating system, because it gives you the flexibility to make the changes and customizations that fit your requirements best. It's not unusual to take a hard drive with a Linux file system installed from one computer and drop it into another, and it'll work just fine, unlike with either Windows or macOS. I often have as many as half a dozen virtual and physical Linux instances running at a single time as I test various software processes and network designs, something that I'd perhaps never try if I needed to obtain separate licenses. This has two immediate advantages for you: one, you can spend lots of time experimenting with various Linux distributions and desktops as your Linux skills grow, and two, you can launch test deployments before you roll out your company's new Linux-based resources, to ensure that they're running properly. A Linux environment contains three kinds of software: the Linux kernel, the desktop interface such as GNOME or Cinnamon, and the customizations provided by your specific distribution, such as Ubuntu or Red Hat. Generally, you're not going to download or directly manage the Linux kernel; that will be handled for you by the installation and update processes used by the distribution you pick. To maintain stability, it's not unusual for distributions to largely ignore non-critical new kernel releases for many months. Distributions, particularly the larger and better-known ones, are commonly updated, while security and critical feature patches are made available almost instantly. Most distributions maintain managed third-party software repositories and package management tools for handling updates. If you look at the Software and Updates dialog on a Linux box, you can choose how you'd like updates to be applied.
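For example, on a Debian-based distribution such as Kali, a typical update cycle from the command line looks like this; a minimal sketch, assuming the default repositories are configured:

    # Refresh the package index from the configured repositories
    sudo apt update
    # Download and apply all available package upgrades
    sudo apt full-upgrade -y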
In addition to the operating system, there are thousands of free software packages available that allow you to perform just about any compute task imaginable, more quickly and safely than you could on other platforms. Whether you're looking for office productivity suites or web server and security services, it will all be integrated into the fabric of the Linux system by reliable package managers. For example, if you want to use editing software such as Adobe's on Windows or Mac, to get it working effectively without running into system slowdowns you would need a fast CPU, 32 GB of RAM, and dedicated video RAM. Such rigs can cost thousands of dollars and require cooling systems to keep them from melting down. Nevertheless, if you use Linux, you can run virtualized processes, along with your regular daily tasks, on a simple PC built for less than $300. Because Linux is open source, many people have created their own versions of the OS, known as distributions or "distros", to fit specialized needs. The most famous of these is Google's Android OS for smartphones, but there are hundreds of others, including enterprise deployment distros such as Red Hat Enterprise Linux and its free community rebuild, CentOS. There's a distribution specially optimized for scientific and advanced mathematical applications called Scientific Linux; there's Kali Linux for network security testing and management, which we will dive into in more depth shortly; and there are distributions built to be embedded in IoT, or Internet of Things, devices, such as Raspbian for the ultra-cheap Raspberry Pi development board. Distributions are often grouped into families. For example, a specific distribution might earn a reputation for stability, good design, quick patching, and a healthy ecosystem of third-party software. Instead of having to re-invent the wheel, other communities might fork derivative versions of that parent distro, add their own customizations, and distribute it under a new name, but the original parent-child relationship remains.
Updates and patches are pushed from the upstream parent downstream to all the children. This is an efficient and effective way to maintain autonomous systems. The best-known distribution families are Debian, which maintains a downstream ecosystem that includes the all-purpose Ubuntu, Mint, and Kali Linux; Red Hat, which is responsible for CentOS and the consumer-focused Fedora distros; SUSE, which provides openSUSE; and the infamously complex but ultra-efficient Arch Linux, whose downstream followers include LinHES for home entertainment management and the GUI-focused Manjaro. You'll also find Linux distribution images for all kinds of dedicated deployments. Extremely lightweight distros can be embedded in Internet of Things devices such as fridges or light bulbs. Docker containers are fast and efficient because they share the OS kernel with their Linux host environments, and they can be built using a wide range of Linux-based images.
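As a quick, hedged demonstration of that kernel sharing, you can print the kernel release from inside a throwaway container and compare it with the host's; the debian:stable image name here is just an example:

    # Print the kernel release from inside a temporary Debian container
    docker run --rm debian:stable uname -r
    # Compare with the host: the two match, because containers
    # share the host kernel rather than booting their own
    uname -r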
The cloud, led by AWS (Amazon Web Services) and Azure, delivers virtualized, on-demand computing services and now touches just about everything we know about computing. Linux is multipurpose and free, therefore it's the perfect operating system for cloud deployments; in fact, Linux is used to run a significant majority of cloud instances, including a majority of those hosted on Microsoft's own Azure cloud platform. One consequence of the industry-wide shift to the cloud is the appearance of specialized Linux distributions that are designed to deliver the best conceivable cloud experience by being as small and fast as possible. These specialty distros will frequently include out-of-the-box functionality that allows you to take advantage of your specific cloud host environment; they include AWS's Amazon Linux AMI, for example (AMI stands for Amazon Machine Image), and purpose-built long-term support Ubuntu releases. Long-term support, or LTS, releases are built to be as stable as possible, using fully tested software and configurations. The reliability of such configurations makes it possible for the distro managers to continue to provide security and feature updates to a release for five years. You can deploy an LTS release as a server without worrying about rebuilding it during all that time. If you like to try out the latest and greatest versions of software, you might go ahead and install the most recent interim release, but for stable environments, you want an LTS. In summary, open source software can be delivered using various license models. The GPL, the GNU General Public License, permits any use, modification, or redistribution as long as the original license terms aren't changed. Creative Commons licenses permit more restrictive license conditions to give greater choice to software creators. Other major licensing models include Apache, BSD, and MIT. Linux is a flexible platform that can be customized to power any compute device, both physical and virtual. You learned about Linux distributions that package the Linux kernel along with GUI desktops and specialized software and configurations. The distribution families we discussed include Red Hat Enterprise Linux, Debian, and Arch. You now also have a basic understanding of the ways distributions patch and maintain the software on Linux machines, as well as how they frequently make new releases available, including LTS, or Long Term Support, releases. Before you install any Linux, I want to say that Linux installation is not a simple task. There are so many platforms on which you can install Linux, so many distros and distro releases, each with its own installation program, so many configuration options, and so many uniquely different installation pathways that presenting even a small subset of the topic in a logical way is a challenge. You can install Linux on PCs and traditional servers, and given that the Android OS itself is built on a Linux kernel, there's nothing stopping you from installing a more mainstream distro on a phone, but keep in mind that such experiments can end badly for the device. What about a refrigerator, or something smaller like a kid's toy, which are likely to be produced in very large numbers, or virtual servers that are designed to live for a few seconds, perform a specific time-sensitive task, and then shut themselves down forever?
Well, the regular install processes won't work properly in those scenarios, so you'll often need to think outside the box. Many Internet of Things devices use tiny development boards, such as the inexpensive Raspberry Pi, to run their compute operations. In the case of the Pi, you can build an OS image on your own PC and flash it onto an SD card, which you can then insert into the device and boot up. Virtual servers can be provisioned using scripts that define the precise operating system and configuration details you're after; sometimes in response to an external trigger, the scripts will automatically activate resources in your target environment and deploy them as needed to meet changing demand. The variety and flexibility inherent in the Linux and open source ecosystem make it possible to assemble the right combination of software layers necessary to match the hardware resources you're using and your compute workload.
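Returning to the Raspberry Pi example, here is a sketch of what that flashing step can look like from a Linux PC with dd, assuming the downloaded image is named raspbian.img; note that /dev/sdX is a placeholder for your card's device name, and writing to the wrong device will destroy its data:

    # Identify the SD card's device name first (e.g. /dev/sdb)
    lsblk
    # Write the image to the card, then flush all buffers to disk
    sudo dd if=raspbian.img of=/dev/sdX bs=4M status=progress
    sync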
In the course of a traditional Linux installation, you're going to face choices regarding some of the environment settings within which your OS will operate, how your computer will connect to the network, what kind of user account you'll create for day-to-day administration, and what storage devices you'll use for the software and data used by your system. Let's talk about those one at a time. Linux distros allow you to interact with the GUI in any one of many languages, but you'll need to specify which language you want and which keyboard layout you're using. The language you choose will determine what you'll see in dialog boxes and configuration menus throughout the desktop. You'll also need to set your location, so Linux will know your time zone. Many of your network and file handling operations will depend on the time zone setting, so you want to get this right. These settings can be updated later, either using the GUI or the CLI. If possible, you're better off enabling internet access before your installation gets going. This way, your distro can download the latest updates that might not be included in your installation archive, so you'll have one less thing to do when you log in to your new workstation. The CentOS installation program will ask you whether you want to set up a regular user for your system or if you're fine with just the root user. While you're not forced to create a regular user, to harden your security posture it's highly recommended that you avoid logging in as the root user for normal operations. Instead, it's much better to log in and get your work done as a regular user who can, when necessary, invoke administration powers using sudo. Standard Ubuntu install processes, for example, won't even offer the option of using root. You can always opt to go with the default approach for storage devices, where in most cases the entire file system will be installed within a single partition, but you might want to explore other options for more complicated or unusual use cases. Many server admins prefer keeping the /var directory hierarchy isolated in a separate partition to ensure that system log data doesn't overwhelm the rest of the system. You can use a small but fast SSD, or solid-state drive, for most of the system files, while the larger /home and /var directories are mounted on a larger but much slower hard drive. This allows you to leverage the speed of the SSD for running Linux binaries while getting away with a less expensive magnetic hard drive for your data, where the performance difference wouldn't be as noticeable. You'll also be asked whether you want your storage devices to be managed as LVM volumes. But what is an LVM volume? LVM stands for Logical Volume Manager, which is a way to virtualize storage devices so they're easier to manipulate later on. How does it function? Well, let's imagine that you've got three separate physical drives on your system. LVM would turn them all into a single volume group, whose capacity equals the total aggregate space of all three drives. At any time you'll be free to create as many logical volumes from that volume group as you'd like, using any combination of individual capacities, up to the total available volume.
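Here is a minimal sketch of what that looks like with the standard LVM command-line tools, using the three-drive scenario described next; the device names and the volume group name datavg are assumptions for illustration:

    # Mark the three physical drives as LVM physical volumes
    sudo pvcreate /dev/sdb /dev/sdc /dev/sdd
    # Pool them into one volume group called datavg
    sudo vgcreate datavg /dev/sdb /dev/sdc /dev/sdd
    # Carve out a 2.3 TB logical volume for data
    sudo lvcreate -L 2.3T -n data datavg
    # Give whatever space remains to a second volume
    sudo lvcreate -l 100%FREE -n extra datavg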
If your three drives were 2 TB, 500 GB, and 200 GB in size respectively, and you needed to work with a data drive of at least 2.3 TB, you could use LVM to create one logical volume of 2.3 TB and a second volume of 400 GB for everything else. If your requirements change in the future, you can reduce the size of your data drive and reallocate the extra space to the second volume, or to a new volume. Adding or swapping out volumes can be relatively simple operations. LVM can give you fantastic configuration flexibility, but for simple setups it's normally not essential. Now that you're aware of some of the theory, you can go ahead and jump right into the Kali Linux installation, but before you do that, I would like to recommend a few other software and hardware tools that you should get hold of as a pen tester.
Chapter 2 Software & Hardware Recommendations

Tcpdump: https://www.tcpdump.org/
Microsoft Network Monitor: https://www.microsoft.com/en-us/Download/confirmation.aspx?id=4865
LanDetective: https://landetective.com/download.html
Chanalyzer: https://www.metageek.com/support/downloads/
Ettercap: https://www.ettercap-project.org/downloads.html
NetworkMiner: https://www.netresec.com/?page=NetworkMiner
Fiddler: https://www.telerik.com/fiddler
Wireshark: https://www.wireshark.org/download.html
Kali Linux: https://www.kali.org/downloads/
VMware: https://my.vmware.com/web/vmware/downloads
VirtualBox: https://www.virtualbox.org/wiki/Downloads
Many people seem to get confused when we talk about wireless adapters and wireless cards. They don't know what they are, why we need them, or how to select the right one, because there are so many brands and so many models. What we mean by a wireless adapter is a device that you connect to your computer through a USB port and that allows you to communicate with other devices over Wi-Fi, so you can use it to connect to wireless networks and communicate with other computers that use Wi-Fi. You might be thinking that your laptop already has this, and yes, most laptops and smartphones have it built in. But there are two problems with that. The first issue is that you can't access a built-in wireless adapter from Kali Linux if Kali is installed as a virtual machine, and the second issue is that built-in wireless adapters are not good for penetrating wireless networks. Even if you installed Kali Linux as the main system on your laptop and therefore had access to your built-in wireless card, you still wouldn't be able to use it for penetration testing, because it typically doesn't support monitor mode or packet injection. You want an adapter you can use to crack Wi-Fi passwords and do all the awesome stuff that we can do in Kali Linux with aircrack-ng and other tools. Before we start talking about the brands and the models that will work with Kali Linux, I want to talk about a more important factor, which is the chipset used inside the wireless adapter. Forget about the brand for now. Instead, we're going to talk about the brains that do all the calculations inside the wireless adapter. This is what determines whether the adapter is good or bad, whether it supports injection and monitor mode, and whether it works with Kali Linux; the brand is irrelevant. What's used inside the adapter, the chipset, is what matters. There are many chipsets that support monitor mode, packet injection, and Kali Linux. One of them is made by the company called Atheros, and its model is the AR9271. This chipset supports monitor mode and packet injection, and you can use it to create a fake access point or to hack into networks, so you can use this chipset for pretty much all Kali Linux attacks.
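Once you have an adapter with a supported chipset plugged in, enabling monitor mode in Kali typically looks like the following sketch; the interface name wlan0 is an assumption and will vary from system to system:

    # List detected wireless interfaces and their chipsets/drivers
    sudo airmon-ng
    # Stop network services that could interfere with monitor mode
    sudo airmon-ng check kill
    # Enable monitor mode (usually creates an interface like wlan0mon)
    sudo airmon-ng start wlan0

From there, tools like airodump-ng can listen to all nearby wireless traffic, which is exactly what the later chapters build on.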