This is work completed for an assignment for Ashford University's CPT200: Fundamentals of Programming Languages

Problem:

The colors red, blue, and yellow are known as the primary colors because they cannot be made by mixing other colors. When you mix two primary colors, you get a secondary color, as shown here:
  • Red and blue = purple
  • Red and yellow = orange
  • Blue and yellow = green
#Note: The assignment called for lists to be used but I opted for a tuple because I don't think the primary colors are changing any time soon. 

Script:
primary_colors = ('red', 'yellow', 'blue')
user_input = ''

print('*******************************************')
print('*                                         *')
print('*       Welcome to the Color Mixer!       *')
print('*                                         *')
print('*******************************************\n')

def get_color(ordinal):
    color = input(f'{ordinal} primary color: \t').lower()
    if color in primary_colors:
        return color
    else:
        print('Invalid entry, please enter red, yellow, or blue')
        return get_color(ordinal)

def mix_colors(color1, color2):
    print('\nMixing colors...')
    if color1 == color2:
        return color1.capitalize()
    elif (color1 == 'red' and color2 == 'blue') or (color1 == 'blue' and color2 == 'red'):
        return 'Purple'
    elif (color1 == 'yellow' and color2 == 'blue') or (color1 == 'blue' and color2 == 'yellow'):
        return 'Green'
    else:
        return 'Orange'

def display_results(color1, color2, mixed_color):
    print(f'{color1.capitalize()} and {color2.capitalize()} = \t{mixed_color}\n')

def keep_going():
    global user_input
    # Slicing with [0:1] returns '' for empty input instead of raising an
    # error, so no try/except is needed here.
    user_input = input("Enter 'q' to quit or press any other key to continue mixing colors: \t")[0:1].lower()

while user_input != 'q':
    print('\nPlease enter two primary colors (red, yellow, blue).')
    color1 = get_color('First')
    color2 = get_color('Second')
    result = mix_colors(color1, color2)
    display_results(color1, color2, result)
    keep_going()
else:
    print('Goodbye!')

Results:


Ideas for Rework and expansion:
  • I like the idea of importing colorama and formatting the output so that there is some actual color involved. 
  • I've seen this done using dictionaries and lists for comparison; I could refactor it to accommodate that. 
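As a taste of that refactor, here's a minimal sketch of how the mixing logic might look with a dictionary (the frozenset keys are my own choice; they make the lookup order-independent):

```python
# A sketch of the dictionary-based refactor mentioned above.
# frozenset keys make the lookup order-independent, so
# ('red', 'blue') and ('blue', 'red') hit the same entry.
SECONDARY_COLORS = {
    frozenset(['red', 'blue']): 'Purple',
    frozenset(['red', 'yellow']): 'Orange',
    frozenset(['blue', 'yellow']): 'Green',
}

def mix_colors(color1, color2):
    # Mixing a primary color with itself just returns that color.
    if color1 == color2:
        return color1.capitalize()
    return SECONDARY_COLORS[frozenset([color1, color2])]

print(mix_colors('blue', 'red'))       # Purple
print(mix_colors('yellow', 'yellow'))  # Yellow
```

This collapses the chain of elif comparisons into a single lookup.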

Hope this helps, leave me feedback if you'd like to see something specific!


Python has many different containers available; here we'll cover a couple of them.
Containers are objects “which contain references to other objects instead of data” and are used to group related variables together (3.2)
There are many types of containers, including:
  • lists
  • dictionaries
  • tuples
  • and more…
Lists
Lists are Python’s version of arrays and are declared using square brackets []. Lists are ordered sequences that can hold a variety of object types (not limited to a single type). For instance,
my_list = ['one', 2, 3.0]
holds a string, an int, and a float.
Lists support indexing, where elements are accessed by their sequential order, starting from 0. In the above example:
  • my_list[0] holds a value of 'one'
  • my_list[1] holds a value of 2
  • my_list[2] holds a value of 3.0
Objects in a list can also be referenced from right to left using negative indexes:
  • my_list[-1] holds a value of 3.0
  • my_list[-2] holds a value of 2
  • my_list[-3] holds a value of 'one'
Slicing
Lists also support slicing, which is sort of like an expanded capability of indexing. The syntax for slicing is:
object_to_be_sliced[start:stop:step]
Like slicing with strings, slicing a list enables you to grab a subsection of the sequential objects using indexes.
Example:
my_nums = [0,1,2,3,4,5,6,7,8,9,10]
print(my_nums[0:6])
>> [0,1,2,3,4,5]  
#Note that this goes up to but does not include the stop index
print(my_nums[0::2])
>> [0,2,4,6,8,10]
#Step is the stride: how far the index advances with each iteration

print(my_nums[::-1])
>> [10,9,8,7,6,5,4,3,2,1,0]
#Using only a step value of -1 returns the list reversed 
print(my_nums[0:10:3])
>> [0,3,6,9]

Dictionaries:
In contrast to lists, dictionaries are unordered mappings for storing objects. Dictionaries store objects in key-value pairs, wherein the key (often a string) is used to retrieve its associated value.
Consider a dictionary of object – price key-value pairs:
menu_prices = {'cheeseburger': 2.99, 'dbl_cheeseburger': 3.99, 'bacon_cheeseburger': 3.49, 'bacon_dbl_cheeseburger': 4.49}
menu_prices['bacon_cheeseburger']
>> 3.49

Unlike lists, dictionaries do not support positional indexing or slicing; values are retrieved by key instead. That difference is often the deciding factor between the two: choose a list when you need ordered, index-based access, and a dictionary when you need lookup by key.
Like lists, dictionaries can contain multiple object types simultaneously, to include nested dictionaries and nested lists. 
Both lists and dictionaries are mutable, meaning the values can be changed. 
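A quick sketch of that mutability, reusing the menu_prices example from above (the nested menu dictionary is my own addition):

```python
# Both lists and dictionaries are mutable: their contents change in place.
menu_prices = {'cheeseburger': 2.99, 'dbl_cheeseburger': 3.99}
menu_prices['cheeseburger'] = 3.29   # update an existing value
menu_prices['fries'] = 1.99          # add a new key-value pair

my_list = ['one', 2, 3.0]
my_list[1] = 'two'                   # replace an element by index

# Containers can nest: here a dictionary holds lists as its values.
menu = {'burgers': ['cheeseburger', 'dbl_cheeseburger'], 'sides': ['fries']}
menu['sides'].append('onion rings')

print(my_list)  # ['one', 'two', 3.0]
```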

Python variables do not need explicit declaration to reserve memory space. 

Here I will:
  • Describe the difference between explicit typed variables and implicit typed variables using examples.
  • Explain whether, in large-scale applications, explicit declaration has advantages over implicit declaration.

Type expression can be either explicit or implicit. 

  • Explicit typed variables are manually set; that is, their type is stated clearly in the code.
  • Implicit typed variables are typed by their context. 

Is there any performance difference?

Strictly speaking, there shouldn't be much (if any) performance difference when it comes to the use of implicit declaration vs explicit declaration in large scale applications.
The key is that variables don't have types, per se - values do. In Python, variables aren't so much declared (as in other languages), rather variables are assigned a name. 
  • If I were to say myint = 5, then I should be able to infer pretty easily that myint is an int. 
  • Similarly, myfloat = 2.0 is quite clearly a floating point number. 
  • And mystring = "I'm a string" is quite clearly a string. 
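In Python terms (variable names are my own), implicit typing is just assignment, while optional type hints let you state the type explicitly:

```python
# Python infers each value's type from the assignment (implicit typing).
myint = 5
myfloat = 2.0
mystring = "I'm a string"
print(type(myint), type(myfloat), type(mystring))
# <class 'int'> <class 'float'> <class 'str'>

# Optional type hints state the intended type explicitly for readers and
# tools like mypy, though Python does not enforce them at runtime.
count: int = 5
price: float = 2.0
greeting: str = "I'm a string"
```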

With all that said...

I've always been under the impression that explicit typing lends itself toward easier maintainability of code, which is important when the majority of the work done on code is maintenance. 
Honestly, I think this will be my first time grappling with a dynamically typed language and the paradigm shift is...interesting. 

What are your thoughts?




This is intended to be a short introduction to the C# language. I won’t go into syntax or cover the principles of Object-Oriented Programming but I will be describing the language through reference to other languages.

Please let me know if there’s any confusion and I will edit the post suitably.

So… you want to learn C#.

Me too!

I’ve started learning C# through Udemy courses and books (links for each at the end of the article) with the intent of developing multi-platform applications. But enough about me, on to what you really care about.


C# (C Sharp)

C# was developed by Microsoft as an Object-Oriented version of the C language for their .NET Framework (be on the lookout for separate posts on OOP and the .NET Framework in the future).

C# is statically typed (and strongly typed), which means that every variable and object has a well-defined type (Ky, 2013).

Every variable and constant has a type, as does every expression that evaluates to a value.

Methods have unique signatures which specify the type(s) of input parameters that they can accept and also what type of value they return.

Statically-typed languages check to make sure all operations use the appropriate types at compile time. 

C# is a High-Level Language (HLL)

Programming languages can be categorized based on their level of abstraction from the details of the computer.

Early computers relied on a series of binary switches that were configured manually or according to a program of 0s and 1s, or binary digits (bits). (Quantum processors, which exploit superposition, operate on an entirely different paradigm, but they remain the exception.) While it would be possible to build every program using 0s and 1s, it would be very difficult for most people to read (Vahid & Lysecky, 2017).

This led to the development of assembly language, which was a little easier to read; that is, closer to the way humans communicate. Assembly language compiles (translates) to the binary machine code (0s and 1s) which your computer understands.

Assembly language can be thought of as being one level of abstraction from the computer.

Abstraction?

Think of it like your smartphone. It has only a few buttons on it. You know WHAT those buttons do but don’t need to know HOW they do it.

Make sense? Think of it like Back to the Future…

Marty McFly didn’t need to understand how Doc’s Delorean Time-Machine worked or what a Flux Capacitor does. He simply needed to know how to turn it on, put in a date, and floor it until 88mph! The details which made this amazing machine work were abstracted from Marty.
(Gale & Canton, 1985).



/*Abstraction is also a fundamental aspect of Object-Oriented Programming which can be explained in a similar manner. More on this in a later post. This is also how to use a block comment in C#!*/

//This a single-line comment in C#!

As the field of software development evolved, languages like COBOL, Fortran, and others began to crop up with the same intent as assembly language: they were easier for people to read and write. C# doesn’t have the programmer directly dealing with the details of the computer. The programmer doesn't need to know what's going on behind the scenes to make her program work. That said, knowing can be very valuable. 

Languages similar to C#:
  • C
  • C++
  • Java

C# was derived from C and also incorporated some aspects of C++ and Java. If you’ve written code in any of these languages, C# syntax shouldn’t be terribly difficult to pick up on (Ky, 2013).
Unlike C++ and C, C# handles garbage collection for us! This means that memory management or direct memory manipulation isn’t necessary.

You see, processors have limited amounts of memory (cache) and use Random Access Memory (RAM) as a buffer. The more complicated an operation or series of operations is, the more often it needs to use the RAM as a buffer. But the RAM isn’t accessed as quickly as the processor’s cache (Vahid & Lysecky, 2017).

Some languages require the direct manipulation of memory, assigning variable values to specific addresses and more. In C++, raw pointers (variables which hold an address in memory) were often used and required that the pointers be deleted when they were no longer needed. The ways that values were passed as method arguments often resulted in memory leaks: loss of access to portions of memory because of declared, unused, un-deleted pointers. The introduction of smart pointers (self-deleting pointers) helped to alleviate this, but in C#, memory management requires almost no developer involvement (Microsoft, 2019).

This memory management is referred to as garbage collection.



Furthermore, where you might see multiple inheritance in other languages like C++, C# does not support multiple inheritance.

In a personnel management software we might have two classes Faculty and Student that have specific characteristics (like PayRate for Faculty and TuitionRate for Students).

But what about a Student Aide that’s on the payroll?

In C++ you could have a StudentAide class which inherits properties from both the Faculty and Student classes.

Not so in C#...

The issue is that multiple inheritance can cause what is referred to as the “diamond problem”. Essentially, if both the Faculty and Student classes had properties or methods with the same name, the compiler wouldn’t be able to determine which one to use when a new instance of a StudentAide object was created.

C# Doesn’t Compile to Executable Code – at least not directly

Yep, you read that correctly.

Because C# was designed to work within the .NET Framework, C# doesn’t compile to machine-executable code, at least not directly. The .NET Framework enables a C# developer to code once and push to multiple platforms (Windows, iOS, Android, Xbox, and more). In order to enable this, code written in C# is compiled to Common Intermediate Language (CIL) which can only run on the Common Language Runtime (CLR). 

“The CLR is a native application that interprets CIL code” and then compiles that into the appropriate native code for the platform it’s on (Ky, 2013).

I’ll go into a bit more detail on the .NET Framework, C# syntax, OOP and other topics covered above at a later date. But for now, I hope that this has given you a general idea of what C# is.

Important takeaways:
C# is an Object-Oriented Programming Language (OOP)
C# is statically typed
C# is a high-level language
C# handles garbage collection!
C# doesn't support multiple inheritance
C# doesn't compile to machine-executable code
C# compiles to Common Intermediate Language (CIL)
CIL runs on the Common Language Runtime (CLR)
CLR compiles to the native code for the appropriate platform


References:
Gale, B. & Canton, N. (1985). Back to the Future. Universal Studios.

Ky, J. (2013). C#: A beginner’s tutorial. Montreal, CAN: Brainy Software Inc. Retrieved from https://search.ebscohost.com.proxy-library.ashford.edu/login.aspx?direct=true&db=cat02191a&AN=aul.10882077&site=eds-live&scope=site

Microsoft. (4 April 2019). A Tour of the C# Language. Retrieved from https://docs.microsoft.com/en-us/dotnet/csharp/tour-of-csharp/index

Vahid, F., & Lysecky, S. (2017). Computing technology for all. Retrieved from zybooks.zyante.com/



While the ping command is incredibly helpful in determining the reachability of different IP addresses, it has the potential to be used maliciously. 
The Ping of Death attack was a popular denial of service (DoS) attack between 1996 and 1997 which involved sending deliberately fragmented IP packets that reassembled to more than the maximum allowed 65,535 bytes. A denial of service attack derives its name from the impact that it has: users are denied service by the servers. Operating system vendors provided patches to protect against these attacks, but many websites continue to block ICMP ping messages.

Further, attackers use tools such as whois to determine the IP addresses of target organizations and then use automated ping-sweeping tools to methodically ping the public addresses within a range or subnet. From there they use port scanning to search for open ports and determine what applications or operating systems are being used and whether there is an exploitable vulnerability. These vulnerabilities might include the absence of patches to operating systems, firmware, and more. For instance, an operating system that went unpatched against the Ping of Death attack would be vulnerable to future Ping of Death attacks.

In contrast, social engineering is a tactic utilized by attackers which exploits human failure. Social engineering attacks may include phone calls, phishing emails, watering hole attacks and more. Attackers using social engineering methods will often take weeks and months getting to know a place before even coming in the door or making a phone call. Their preparation might include finding a company phone list or org chart and researching employees on social networking sites like LinkedIn or Facebook.

In truth, networks will always be vulnerable. 
The proper approach is to reduce vulnerability. 

To reduce vulnerability, avoid the following: 

  • Misconfigured firewalls
  • Unpatched vulnerabilities
  • Unsecured wireless access points
  • Default/overused passwords. 


With regards to preventing social engineering schemes, employees should be trained to identify phishing emails, inform IT specialists within the company when those emails are received and how to handle the email itself. Further, badged access and/or 2-factor authentication can be used to further reduce the likelihood of malicious intrusion into networks.

The selected industry I’ve chosen is the air traffic management industry where computers play an incredibly important role – automation. 

A brief history

Historically, the bottleneck for national airspace access has been air traffic controllers. Early air traffic control was accomplished by the post office using signal fires, flags, and large painted arrows on the ground. Aircrews would fly relatively low so as to be able to see these navigational aids. 

As planes grew more complex, so too did the technology necessary to guide them. In the 1930s, navigational aids evolved in rotating lighted beacons. Air traffic controllers began operating over radios, controlling aircraft using time over fix, airspeed, estimated time over next fix, and other tools of the trade to guide aircraft without tracking their location via radar. Flight progress was tracked on chalkboards and relied heavily on the mental acuity of the controllers, but responsibility for safety of flight fell squarely in the laps of the pilots. 

Post WWII, ever-increasing air traffic congestion led to multiple midair collisions which had the public demanding radar installation throughout the country in the late 1950s. Further evolution saw aircraft transponding via beacons which provided secondary radar information to supplement the primary radar returns (which had previously been tracked with strips of paper on “shrimp boat” strip holders that air traffic controllers manually followed the primary radar returns on their scopes with).

Air traffic management systems continued to evolve to meet the increased demands that stressed the limited situational awareness of air traffic controllers. Automated air traffic management systems were developed which could recognize future conflicts, often hours in advance. The FAA has been attempting to continuously upgrade air traffic management automation since the 1970s, with mixed success. The first attempted project was such an abysmal failure that it is widely regarded as one of the most terribly managed projects in project management history; it also led to the 1981 air traffic controller strike which saw President Reagan fire nearly every air traffic controller in the nation. 

Use of computers in Air Traffic Management

While individual use of personal computers in air traffic control is somewhat limited, systems continue to be developed which provide increased automation and enable controllers to handle greater workloads. It’s crucial that controllers continue to familiarize themselves with these systems and their inner workings in order to have a greater understanding of the limitations of these technologies. Unfortunately, most controllers have only a limited understanding of these systems because they’re so focused on keeping aircraft from colliding. 

More recently, technologies such as Traffic Collision Avoidance System (TCAS) and Automatic Dependent Surveillance-Broadcast (ADS-B) have been implemented. The former enables two equipped aircraft to detect potential collision hazards between themselves at a greater distance without relying so heavily on ATC or the pilots’ own MK-I eyeballs. The latter serves a similar purpose but was originally intended to be implemented as a replacement for radar coverage in regions where radar coverage wasn’t feasible, specifically the Caribbean. 

ADS-B

ADS-B highlights an important lack of security and privacy-mindedness regarding computers in the governments of the world. ADS-B has become a world-wide mandate despite numerous cybersecurity concerns. A little detail is necessary to explain just how bad the situation is. 

  1. First, ADS-B broadcasts aircraft identity, location details, airspeed, and more without any encryption.
  2. Second, these broadcasts are picked up by a terrestrial network of transceivers, many of which are privately owned.
  3. Third, no handshake or independent verification of the received information is possible – it’s quite simple to spoof an aircraft’s identity.
  4. Fourth, because the data is not encrypted and broadcast in real time (at 1Hz), ADS-B can actually be used to derive a targeting solution.
  5. Lastly, ADS-B has two major bandwidth issues:
    1. When message overlap occurs, the entire system becomes unreliable. Limited bandwidth combined with minimum transmission power makes this more likely to occur; in fact it has happened, numerous times, in the airspace over Florida.
    2. There’s a user-interface bandwidth issue. ADS-B displays on aircraft do not have an altitude filter, which makes it nigh impossible to discern location data on potential threat aircraft when there’s significant congestion. Again, this happens regularly over Florida. 

In fact, a white-hat hacker who goes by the handle RenderMan was able to teach himself how to inject a false ADS-B signal into the national airspace in just one weekend. He managed to do so responsibly, injecting an aircraft with the callsign “YOURMOM” into the SFO Class B aerodrome and buzzing the tower repeatedly (the controllers at SFO tower did not receive the transmission and there was no impact to flight safety). 


However, he gave up trying to convince people of the dangers of ADS-B after it was clear nobody would listen. He has since moved on to “the internet of dongs” and is advocating for a practical cybersecurity and privacy mentality as it regards IoT-enabled sex toys. 

Getting away from “the internet of dongs” and back to the ADS-B woes, this isn’t something limited to the United States, nor just to aviation. Aviation is a global logistics backbone. Consider that the economic impact of the drone incident at Gatwick was at least $124M. The absolute lack of cybersecurity mindedness with regards to the treatment of the national airspace as a network is both appalling and rampant. 

Recently, Boeing has been in hot water for espousing a short-term profit culture which prevented critical software risks from being mitigated, but Airbus will soon be eating crow. Airbus’s most modern helicopters and passenger aircraft have incorporated ADS-B collision alerts into their AUTOPILOT. Moreover, airlines (as a result of insurers guarding against human error) mandate autopilot while enroute, and in some cases until 50 feet off the ground. So, any malicious actor could effectively shut down the next-generation primary location information source for aircraft to prevent air traffic control from doing its job, and inject false, non-verifiable signals which effectively steer airborne aircraft with up to 800 passengers on board. This is a systemic weakness which is being ignored, a vulnerability that non-state actors could easily exploit to wreak economic havoc, and an asymmetric warfighting capability that the world has handed over on a silver, winged, publicly-broadcast platter. 

The current trajectory, no pun intended, of air traffic computer systems and networking is a move toward Four-Dimensional Trajectory Based Optimization (TBO) wherein aircraft are delayed on the ground for a couple of minutes to provide them with optimized routing hours later. 

However, new airspace entrants to include small unmanned aircraft and autonomous Urban Air Mobility aircraft (unmanned flying taxis) will throw a few wrenches into the works. Ultimately, privatization of unmanned air traffic management technologies will lead to the eventual replacement of both pilots and air traffic controllers in favor of automated systems with human-in-the-loop oversight. 

After all, the cause of most accidents is human error. 

However, that replacement is probably something like 30 years out. In the nearer-term (10 years) we will likely see an implementation of 4-D TBO and the start of the use of remote tower technologies to provide air traffic control services for terminal aerodromes without air traffic control towers, or that don’t operate 24/7. These remote tower technologies can also be used to augment the controller capabilities with infared optics, datablock overlays (instead of flight strips) and improvements in weather forecasting capabilities. 

The UAS realm will see the implementation of remote ID capabilities similar to those afforded by ADS-B (FAA indicates this is approximately 2 years out, so we can expect it in 3-4) but hopefully not ADS-B based. This will enable a greater scope of unmanned operations intermixed with manned aviation. As a result, the business case for manned aviation will slowly give way to unmanned as insurers come to recognize the increased risk of manned aviation. 

Hardware upgrades will be very slow. I recall having to change out 12-inch tape reels for our facility’s communications recorder and being excited that we were transitioning to cassette tapes, in 2009. FAA facilities will be upgraded sooner than USAF facilities, but these technological paradigm shifts move at a glacial pace as a result of their governance by an insurmountably glacial Congress. 
Note: The views expressed here are my own and do not reflect the opinions of my employer or the USAF. All of the information discussed above is publicly available. 
            Today I chose to ping and tracert nats.aero and zilliqa.io. NATS UK is the private company responsible for providing air traffic services to the UK and elsewhere. Zilliqa is a cryptocurrency based out of Singapore that I am heavily invested in.
            As you can see from the first image, www.google.com was successfully pinged 4 times with 4 packets sent and no packet loss, with times ranging from 35ms to 42ms. The nats.aero ping was similarly successful, though the average time was much higher, at 154ms. The zilliqa.io ping was also successful with no packet loss and an average time of 60ms. The longer times taken for nats.aero and zilliqa.io make sense given their global locations relative to my own (Albuquerque, NM). The relative length of this trip can be demonstrated through a tracert.
            As seen in the Nats.aero tracert image, there were a total of 10 hops, not including the two timed-out requests. These timed-out requests can occur for a number of reasons, but the most likely in this case is an increased traffic load at the IP addresses that later responded successfully. Tracert maps out the pathways by sending ICMP ping packets, which tend to be assigned lower priority (or outright blocked by certain firewalls) (Susan, 2017). The routing of the packets was from my local network outward until it reached the regional Comcast router, then outbound to Los Angeles before hopping twice more to 66.155.26.134. In comparison, the Zilliqa.io tracert also made a total of 10 hops, again routing through regional Comcast routers to Los Angeles before finally reaching 192.64.119.53.
            GeeksForGeeks explains that ping “is a utility that helps one to check if a particular IP address is accessible or not” and that it can also be used to see if computers on a local network are active. Traceroute, on the other hand, provides the exact route taken to reach the server and the time taken by each step. A reason why a ping might time out is that the IP address being pinged is unreachable; this could be for any number of reasons, including a lack/loss of internet connectivity between the computer pinging and the IP address being pinged.  

My prior experience utilizing Microsoft’s Office Suite is extensive, though this is my first foray into Microsoft Access. For those who have been in the Air Force, death by PowerPoint is real, but it’s also two-sided. I can’t put a number to how many weeks I have spent utilizing PowerPoint to put together presentations which utilize almost none of PowerPoint’s features, because we have a standard format which must be strictly observed.

            That said, PowerPoint is a powerful tool, but not precisely appropriate for the required content for this assignment. I prefer PowerPoint when presenting to an audience physically or telephonically with the ability to provide more detail. My preferred method for this, when I’m not pigeon-holed by standards, is Pecha Kucha. In Pecha Kucha, 20 slides are presented with each being up for only 20 seconds, for a presentation length of only 6 minutes 40 seconds. The slides favor visuals with very limited (almost never justified) text that serve as memorable anchors for the message delivered with each slide by the presenter.

            Microsoft Word is a powerful word processor which produces formatted text, unlike plain text documents, which are unformatted. Word also supports the inclusion of images and drawings, though I’ve never been a fan of Word’s native shape manipulation.

            Excel is another software application that I have extensive experience utilizing. One of the most frustrating aspects of working on DoD computers is that I’m unable to create macros for my Excel workbooks, but I have still managed to create a workbook with extensive computations that saves me a lot of time and energy. For the purposes of this assignment, Excel was useful in documenting the time spent on activities and creating visual depictions of the same.

            Microsoft Access is a database management system (DBMS) that enables the creation, maintenance, and use of databases. Common database operations include adding new data, editing existing data, deleting data, and querying the database for information. While databases are quite powerful, I do not believe that Access is the appropriate software application for storing this kind of data: not because it is incapable, but because I see little use for it in the specific case of tracking task/activity priority for one individual. However, Access would be appropriate for tracking task/activity priority for a large group of individuals.

            Ultimately, Microsoft Word feels like the most appropriate software application for detailing my activities in one day, and I would recommend it for this and any other use case where narrative information will be provided. Excel would be useful for conducting an analysis of the time spent on different activities each day for multiple days. Access would be useful for doing the same for a large group of people. The visuals from Excel and Access would be useful inclusions for the Word document or for a PowerPoint presentation. A PowerPoint presentation would be best utilized to present the data to a large audience simultaneously, especially if that presentation is accompanied by someone to provide further details on the information in the slideshow.