Category: Software

  • Set Up a Putty Session w Cool Options

    Set Up a Putty Session w Cool Options

    Introduction

    I use PuTTY as my primary terminal program. For one thing, it will hold an SSH session open all day without timing out (unlike PowerShell). It is also easy to customize the look and feel of your shell session. Finally, you can save your session settings for subsequent logins.

    My top recommendations are to save your login name, add your private key, and change the font size and color. It is relatively easy, and once you set these up, you'll be grateful for the amount of time saved.

    Save your login name

    To avoid typing your login name each time you start a session, go to Connection > Data and add your username on the right side.

    Add the path to your private key

    If you want to log in to a server without typing a password, add the path to your private key in a saved session. This is a good method if you only log into a few servers, since each server needs its own saved session. If you have more than a few servers, you should run 'Pageant' instead, to present your key upon each server's login request.

    To add the path to your private key, go to Connection > SSH > Auth and provide the path to your private key file.

    Change the font color & size

    To make things easier to read, you can enlarge the font and change its color. Select Colours > Default Foreground > Modify and pick a color.

    Now when you open PuTTY, it is easier to read.

    For font size, go to Appearance > Change and select your preferred font and size.

    Save all of the options to a session

    After you have everything set up the way you want, save the settings as a session. Select Session, enter an IP & port, give it a name (such as the hostname of the server; in my case I am just using "Web Server"), and hit "Save".

    Now when you want to start an SSH session with your web server, just launch PuTTY, hit "Load", and then "Open". It will take you right into a session, with no need to enter a username or password.

  • Ansible Ad-Hoc Commands

    Ansible Ad-Hoc Commands

    Introduction

    Ansible gives you a powerful option to run commands ad hoc. This negates the need to write a playbook if you only need some quick information. There are two options for gathering data ad hoc: if you are running a single, simple command, use the 'command' module; if you need shell features such as pipes, redirection, or multiple chained commands, use the 'shell' module.

    You may or may not need to reference your inventory file: if you pass resolvable DNS names or IP addresses directly as a comma-separated list, you probably do not need it.

    -m = module (shell or command).
    -a = argument (the command you want to run on the remote system).

    Run a Single Command against Multiple Hosts

    ansible -i inventory.ini -m command -a 'ip a' server1,server2

    Run Multiple Commands against a Single Machine
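
    With the 'shell' module, several commands can be chained in one ad-hoc run. A sketch of the idea; the host and inventory names below are placeholders to adapt:

```shell
# Placeholder host and inventory names; adjust to your environment.
# The 'shell' module runs its argument through a shell, so '&&' can
# chain multiple commands in a single ad-hoc call.
ansible server1 -i inventory.ini -m shell -a 'uptime && df -h && free -m'
```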

  • Run a Basic NMap Scan

    Run a Basic NMap Scan

    Introduction

    NMAP (Network Mapper) is a utility for identifying all hosts on a network and what ports are open on those devices. It can also report the OS of the identified hosts and what services are running on the open ports, although I have not found the OS or service identifiers to be very accurate.

    Below are some common use cases for nmap.

    NMAP Command

    What are the common switches

    -sS = TCP SYN scan (is the port listening? does not complete the handshake). The default scan.
    -sT = TCP connect scan. Use this if -sS is not available.
    -sU = UDP scan.
    -sV = probe open ports and determine what services are running.
    -p = only scan the specified ports or ranges [53, 443, ssh, 22-23, 80-443, 1-17000].
    -p- = scan all 65,535 ports.
    -v = verbose.
    -T5 = set timing to the highest level. Higher is faster (-T3 is the default).
    -F = scan the 100 most common ports (fast scan).
    -O = detect the OS; -A = detect the OS plus service versions, scripts, and traceroute.
    -n = do not do DNS resolution.
    --open = only display open ports.

    State of the ports

    • open – An application is actively accepting TCP connections or UDP datagrams.
    • closed – The port is accessible, but nothing is listening on it.
    • filtered – Cannot determine whether the port is open (typically blocked by a firewall).
    • unfiltered – The port is accessible, but it cannot be determined whether it is open or closed.
    • open|filtered – Unable to determine whether the port is open or filtered.
    • closed|filtered – Unable to determine whether the port is closed or filtered.

    Run a TCP Scan

    -sS = TCP SYN scan.
    -v = verbose.
    -p 443 = scan port 443.
    -sV = get the running service.
    -O = determine the OS.
    -T4 = set timing to aggressive.
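
    Putting those switches together, the scan might look like this (the target address is hypothetical; -sS and -O require root privileges):

```shell
# Hypothetical target address; SYN scans and OS detection need root.
sudo nmap -sS -v -p 443 -sV -O -T4 192.168.1.10
```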

    Run a UDP scan

    -sU = conduct a UDP scan.
    --open = display only open ports.
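
    A sketch of the command (the target address is hypothetical; UDP scans also require root privileges):

```shell
# Hypothetical target; UDP scans need root.
sudo nmap -sU --open 192.168.1.10
```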

    Run a TCP scan and display only the open ports

    -sV = determine the running services.
    -T4 = use the number 4 timing template (aggressive: fast scan; -T3 is the default).
    -p- = scan all 65,535 ports.
    -n = do not do DNS resolution.
    --open = only show open ports.
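
    Combined, that looks like this (the target address is a placeholder):

```shell
# Hypothetical target address.
nmap -sV -T4 -p- -n --open 192.168.1.10
```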

    Scan multiple IP addresses using TCP & send results to a text file

    nmap -sV -T4 -p- -n -iL /opt/targets --open -oX /opt/colo_20221108

    -sV = get the running service.
    -T4 = conduct an aggressive scan.
    -p- = scan all 65,535 ports.
    -n = do not convert to a DNS name.
    --open = list only the open ports.
    -oX /path/filename.xml = output the scan in XML format.
    -iL /path/filename.txt = read the list of targets from a file.

    Ping a specific port
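
    Nmap has no literal port "ping", but a scan of a single port with host discovery disabled (-Pn) is the closest equivalent; the target address below is a placeholder:

```shell
# -Pn skips the ICMP host-discovery step and probes the port directly.
nmap -Pn -p 443 192.168.1.10
```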

    Check a cipher suite
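
    A sketch using nmap's ssl-enum-ciphers NSE script, which lists the TLS ciphers a server offers (the hostname is a placeholder):

```shell
# Enumerate the TLS cipher suites offered on port 443.
nmap --script ssl-enum-ciphers -p 443 example.com
```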

    Reference:

    https://nmap.org

  • Export a KeePass Master Key File

    Export a KeePass Master Key File

    To increase security, you can require KeePass to use both a key file and a password to open the database. This makes it, technically, two-factor authentication (2FA).

    Go to File > Change Master Key and check 'Show expert options'.

    Enter a new master password, check the key file box, and select Create. When completed, save the key file to a secure location, such as a USB stick with drive letter G:.

    Plug in the USB stick, launch KeePass, enter the password, and make sure "Key file/provider:" points at your USB stick. The database will now open.

    Finally, be sure to back up the key file to your backup location (external hard drive, cloud, etc.). If the key file is ever lost, there is no way to open the database.

  • Ping Multiple Hosts using PowerShell

    Ping Multiple Hosts using PowerShell

    Are you working in a Windows environment and need to check whether a large number of hosts are online? The below ps1 script may be what you are looking for. Place all hostnames to be checked in a text file called computer.txt, one per line. Modify the script as necessary, then run it.
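
    A minimal sketch of such a script, assuming computer.txt (one hostname per line) sits in the same directory:

```powershell
# Minimal sketch; assumes computer.txt is in the current directory.
$computers = Get-Content -Path .\computer.txt
foreach ($c in $computers) {
    # -Quiet makes Test-Connection return a simple $true/$false.
    if (Test-Connection -ComputerName $c -Count 1 -Quiet) {
        Write-Output "$c is online"
    } else {
        Write-Output "$c is offline"
    }
}
```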

  • Backup Files to S3 using Bash

    Backup Files to S3 using Bash

    Description

    A bash script will be used to copy a file from a Linux server to an S3 bucket. Next, it will run a checksum on the result to verify the upload. Finally, it will output the local file size, the local ETag, the AWS file size, and the AWS ETag value for easy comparison. This should give the end user enough confidence that the uploaded file has maintained its integrity.

    The script assumes you have an AWS account with login credentials, and that the AWS CLI tools are installed with credentials saved to /home/user/.aws/config and /home/user/.aws/credentials. These two files are needed to successfully authenticate to the S3 bucket.

    Amazon Web Service S3 Bucket

    S3 is a flat file system: there are no folders or directories. The "full" name of a file includes all of the subdirectories as well, i.e. "/file1/file2/file3.txt" is the file name, not "file3.txt". AWS will show the subdirectories as folders in the console for ease of human navigation.

    Begin

    Start the script by defining that it will run as bash and add any notes to the head.

    Send any log output to a custom log file, and add code to exit the script if any command in a pipeline fails.
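
    A sketch of the script header: the log path is just an example, -e exits on the first error, and pipefail makes a pipeline fail if any command within it fails:

```shell
#!/bin/bash
# Exit on the first error (-e), treat unset variables as errors (-u),
# and fail a pipeline if any command within it fails (pipefail).
set -euo pipefail

# Append all script output to a custom log file (example path).
LOG=./s3_backup.log
exec >>"$LOG" 2>&1
echo "backup started: $(date)"
```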

    Get the number of processing units available and add it to a variable.
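
    For example:

```shell
# Capture the number of available processing units for any later
# parallel work (e.g. hashing chunks concurrently).
THREADS=$(nproc)
echo "processing units: $THREADS"
```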

    Define the remaining local variables.

    Define the AWS variables.
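
    A sketch of a combined variable block; every value here is a placeholder to adapt:

```shell
# All values below are placeholders; adapt them to your environment.
SRC_FILE="/opt/backups/dump.txt"     # local file to upload
CHUNK_MB=8                           # multipart chunk size used by 'aws s3 cp'
BUCKET="my-backup-bucket"            # destination S3 bucket
S3_KEY="backups/dump.txt"            # object key inside the bucket
AWS_PROFILE="default"                # profile from ~/.aws/credentials
```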

    When a file is uploaded to AWS, it will calculate what is called an ETag value: the checksum of the uploaded file. To verify file integrity, we will compare the AWS-calculated ETag of the upload against the local file's calculated ETag.

    The ETag will match a true MD5 hash if the file size is < 5 GB. If the file is > 5 GB, the aws 'cp' command will automatically break the file into 8 MB chunks and upload 4 threads of data simultaneously until the upload is complete. An MD5 is calculated for each uploaded chunk, and the resulting ETag is a hash over all of the chunk digests, rather than a true MD5 hash of the complete file.

    In order to compare the ETags and verify they match, we must calculate the local file's ETag value, then compare it to the value calculated by AWS. The script contains two methods to calculate the ETag value; review them and consider which one you need. In my case, I know the files I upload will always be > 5 GB.

    To calculate the local file's ETag value for files < 5 GB, use:
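
    For a single-part upload the ETag is just the hex MD5 of the file. A self-contained sketch (the sample file is created inline for illustration):

```shell
# Create a small sample file so the snippet runs on its own.
SRC_FILE=./example.txt
printf 'hello world\n' > "$SRC_FILE"

# For files under 5 GB the S3 ETag equals the plain MD5 of the file.
LOCAL_ETAG=$(md5sum "$SRC_FILE" | awk '{print $1}')
echo "local etag: $LOCAL_ETAG"
```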

    For files > 5 GB, we can use the code from https://gist.github.com/rajivnarayan/1a8e5f2b6783701e0b3717dbcfd324ba.
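
    A self-contained sketch of the same idea as the linked gist: split the file into the 8 MB chunks 'aws s3 cp' uses, MD5 each chunk, MD5 the concatenated binary digests, and append the chunk count. The sample file is generated inline for illustration:

```shell
# Build a 20 MB sample file so the snippet runs on its own.
FILE=./big_sample.bin
dd if=/dev/zero of="$FILE" bs=1M count=20 2>/dev/null

CHUNK=$((8 * 1024 * 1024))                              # 8 MB parts
PARTS=$(( ($(stat -c%s "$FILE") + CHUNK - 1) / CHUNK )) # ceiling division

# MD5 each chunk and concatenate the hex digests in order.
split -b "$CHUNK" "$FILE" ./etag_part_
HEX=$(md5sum ./etag_part_* | awk '{printf "%s", $1}')

# Convert the hex digests back to raw bytes, MD5 the result, and
# append "-<number of parts>", matching the multipart ETag format.
MULTIPART_ETAG="$(printf "$(echo "$HEX" | sed 's/../\\x&/g')" | md5sum | awk '{print $1}')-$PARTS"
echo "multipart etag: $MULTIPART_ETAG"
rm -f ./etag_part_*
```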

    Next, we will copy the files to the S3 bucket using the 'cp' command. We will use the CLI copy command rather than the s3api command, as the api cannot handle files larger than 5 GB. Copy the content to S3 and tell AWS that the data is just a plain text file.
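
    A sketch of the copy step; the file, bucket, and key names are placeholders:

```shell
# Placeholder file and bucket names; --content-type marks the object
# as plain text so AWS records the right MIME type.
aws s3 cp /opt/backups/dump.txt s3://my-backup-bucket/backups/dump.txt \
    --content-type text/plain
```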

    Get the ETAG value that AWS calculated during the upload.
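
    A sketch of retrieving that value with the s3api 'head-object' command; bucket and key are placeholders, and the surrounding quotes AWS returns are stripped with tr:

```shell
# Placeholder bucket/key; head-object returns the ETag AWS computed
# during the upload, wrapped in double quotes, which tr removes.
AWS_ETAG=$(aws s3api head-object --bucket my-backup-bucket \
    --key backups/dump.txt --query ETag --output text | tr -d '"')
echo "aws etag: $AWS_ETAG"
```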

    Next, we will get both the local and the uploaded file sizes.
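
    A sketch of both size lookups; paths and bucket names are placeholders:

```shell
# Placeholder paths; local size via stat, remote size via head-object.
LOCAL_SIZE=$(stat -c%s /opt/backups/dump.txt)
AWS_SIZE=$(aws s3api head-object --bucket my-backup-bucket \
    --key backups/dump.txt --query ContentLength --output text)
echo "local: $LOCAL_SIZE bytes, aws: $AWS_SIZE bytes"
```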

    Finally, display the file sizes and the ETAG values of both the uploaded file and the local file side by side for comparison.
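
    The side-by-side summary can be sketched with printf; the values below are placeholders standing in for the variables gathered in the earlier steps:

```shell
# Placeholder values standing in for the results of the earlier steps.
LOCAL_SIZE=1048576; LOCAL_ETAG="6f5902ac237024bdd0c176cb93063dc4-3"
AWS_SIZE=1048576;   AWS_ETAG="6f5902ac237024bdd0c176cb93063dc4-3"

printf '%-8s %-12s %s\n' "SOURCE" "BYTES" "ETAG"
printf '%-8s %-12s %s\n' "local" "$LOCAL_SIZE" "$LOCAL_ETAG"
printf '%-8s %-12s %s\n' "aws"   "$AWS_SIZE"   "$AWS_ETAG"
```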