This is intended to be a short introduction to the C# language. I won’t go into syntax or cover the principles of Object-Oriented Programming, but I will describe the language by comparing it with other languages.

Please let me know if there’s any confusion and I will edit the post suitably.

So… you want to learn C#.

Me too!

I’ve started learning C# through Udemy courses and books (links for each at the end of the article) with the intent of developing multi-platform applications. But enough about me, on to what you really care about.


C# (C Sharp)

C# was developed by Microsoft as an object-oriented language in the C family, built for their .NET Framework (be on the lookout for separate posts on OOP and the .NET Framework in the future).

C# is statically typed (and strongly typed), which means that every variable and object has a well-defined type (Ky, 2013).

Every variable and constant has a type, as does every expression that evaluates to a value.

Methods have signatures which specify the type(s) of input parameters they accept and the type of value they return.

Statically-typed languages check to make sure all operations use the appropriate types at compile time. 
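
To make that concrete, here's a minimal, purely illustrative sketch of static typing in C# (the class, variable, and method names are my own, not from any particular source):

using System;

class TypeDemo
{
    // The signature says: this method takes two ints and returns an int.
    static int Add(int a, int b)
    {
        return a + b;
    }

    static void Main()
    {
        int age = 17;             // an integer variable
        string name = "Marty";    // a string variable
        // age = "eighty-eight"; // compile-time error: a string can't be assigned to an int

        Console.WriteLine($"{name} will be {Add(age, 1)} next year.");
    }
}

The commented-out line is exactly the kind of mistake a statically-typed compiler catches before the program ever runs.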

C# is a High-Level Language (HLL)

Programming languages can be categorized based on their level of abstraction from the details of the computer.

Early computers relied on a series of binary switches that were configured manually or according to a program of 0s and 1s, known as binary digits (bits). (Quantum processors, which rely on superposition, operate on an entirely different paradigm, but they are the exception.) While it would be possible to build every program using 0s and 1s, it would be very difficult for most people to read (Vahid & Lysecky, 2017).

This led to the development of assembly language, which is a little easier to read; that is, it's closer to the way humans communicate. Assembly language is translated into the binary machine code (0s and 1s) which your computer understands.

Assembly language can be thought of as being one level of abstraction from the computer.

Abstraction?

Think of it like your smartphone. It has only a few buttons on it. You know WHAT those buttons do but don’t need to know HOW they do it.

Make sense? Think of it like Back to the Future…

Marty McFly didn’t need to understand how Doc’s DeLorean time machine worked or what a Flux Capacitor does. He simply needed to know how to turn it on, put in a date, and floor it until it hit 88 mph! The details which made this amazing machine work were abstracted from Marty (Gale & Canton, 1985).



/*Abstraction is also a fundamental aspect of Object-Oriented Programming which can be explained in a similar manner. More on this in a later post. This is also how to use a block comment in C#!*/

//This is a single-line comment in C#!

As the field of software development evolved, languages like COBOL, Fortran, and others began to crop up with the same intent as assembly language: they were easier for people to read and write. C# doesn’t have the programmer dealing directly with the details of the computer. The programmer doesn't need to know what's going on behind the scenes that enables her program to work. That said, knowing can be very valuable.

Languages similar to C#:
  • C
  • C++
  • Java

C# was derived from C and also incorporated some aspects of C++ and Java. If you’ve written code in any of these languages, C# syntax shouldn’t be terribly difficult to pick up on (Ky, 2013).
Unlike C++ and C, C# handles garbage collection for us! This means that manual memory management and direct memory manipulation aren’t necessary.

You see, processors have limited amounts of fast on-chip memory (cache) and rely on Random Access Memory (RAM) for everything that doesn't fit. The more complicated an operation or series of operations is, the more often the processor needs to go out to RAM. But RAM isn’t accessed as quickly as the processor’s cache (Vahid & Lysecky, 2017).

Some languages require the direct manipulation of memory, assigning variable values to specific addresses, and more. In C++, raw pointers (variables which hold an address in memory) were often used and had to be deleted when they were no longer needed. The ways that values were passed as method arguments often resulted in memory leaks – that is, loss of access to portions of memory because of declared, unused, un-deleted pointers. The introduction of smart pointers (self-deleting pointers) helped alleviate this, but with C#, memory management requires nearly no developer involvement (Microsoft, 2019).

This memory management is referred to as garbage collection.
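
As a rough sketch (the Student class and the loop are mine, purely illustrative), here's what garbage collection means day to day: you create objects with new and never explicitly free them.

using System;

class Student
{
    public string Name { get; set; }
}

class GcDemo
{
    static void Main()
    {
        for (int i = 0; i < 1000; i++)
        {
            // Allocate an object on the managed heap.
            var s = new Student { Name = $"Student {i}" };
            Console.WriteLine(s.Name);
            // No delete, no free: once nothing references the object,
            // the garbage collector reclaims its memory for us.
        }
    }
}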



Furthermore, where you might see multiple inheritance in other languages like C++, C# does not support multiple inheritance of classes.

In personnel management software, we might have two classes, Faculty and Student, that have specific characteristics (like PayRate for Faculty and TuitionRate for Student).

But what about a Student Aide that’s on the payroll?

In C++ you could have a StudentAide class which inherits properties from both the Faculty and Student classes.

Not so in C#...

The issue is that multiple inheritance can cause what is referred to as the “diamond problem”. Essentially, if both the Faculty and Student classes had properties or methods with the same name, the compiler wouldn’t be able to determine which one to use when a StudentAide object was instantiated.
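
Here's a hedged sketch of the StudentAide example (the member names are my own guesses, purely illustrative). C# refuses multiple base classes, but a class may implement any number of interfaces, which is the usual workaround:

// class StudentAide : Faculty, Student { }  // compile-time error: a class can have only one base class

interface IPayable
{
    decimal PayRate { get; }
}

interface IEnrollable
{
    decimal TuitionRate { get; }
}

// One (optional) base class, plus as many interfaces as needed.
class StudentAide : IPayable, IEnrollable
{
    public decimal PayRate { get; set; }
    public decimal TuitionRate { get; set; }
}

Interfaces sidestep the diamond problem because they declare what a member looks like without supplying competing implementations.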

C# Doesn’t Compile to Executable Code – at least not directly

Yep, you read that correctly.

Because C# was designed to work within the .NET Framework, C# doesn’t compile to machine-executable code, at least not directly. The .NET Framework enables a C# developer to code once and push to multiple platforms (Windows, iOS, Android, Xbox, and more). In order to enable this, code written in C# is compiled to Common Intermediate Language (CIL) which can only run on the Common Language Runtime (CLR). 

“The CLR is a native application that interprets CIL code” and then compiles that into the appropriate native code for the platform it’s on (Ky, 2013).
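
If you want to see this for yourself, here's a minimal program with the pipeline sketched in comments (csc and ildasm are the classic .NET Framework tools; the exact commands vary by setup, so treat these as illustrative):

using System;

class Hello
{
    static void Main()
    {
        // csc Hello.cs      -> compiles this C# source into CIL inside Hello.exe
        // ildasm Hello.exe  -> lets you inspect the CIL the compiler produced
        // At run time, the CLR just-in-time compiles that CIL into native code
        // for whatever platform it happens to be running on.
        Console.WriteLine("Hello from CIL!");
    }
}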

I’ll go into a bit more detail on the .NET Framework, C# syntax, OOP and other topics covered above at a later date. But for now, I hope that this has given you a general idea of what C# is.

Important takeaways:
  • C# is an object-oriented programming (OOP) language
  • C# is statically typed
  • C# is a high-level language
  • C# handles garbage collection!
  • C# doesn't support multiple inheritance
  • C# doesn't compile directly to machine-executable code
  • C# compiles to Common Intermediate Language (CIL)
  • CIL runs on the Common Language Runtime (CLR)
  • The CLR compiles CIL to native code for the appropriate platform


References:
Gale, B. & Canton, N. (1985). Back to the Future. Universal Studios.

Ky, J. (2013). C#: A beginner’s tutorial. Montreal, CAN: Brainy Software Inc. Retrieved from https://search.ebscohost.com.proxy-library.ashford.edu/login.aspx?direct=true&db=cat02191a&AN=aul.10882077&site=eds-live&scope=site

Microsoft. (4 April 2019). A Tour of the C# Language. Retrieved from https://docs.microsoft.com/en-us/dotnet/csharp/tour-of-csharp/index

Vahid, F., & Lysecky, S. (2017). Computing technology for all. Retrieved from zybooks.zyante.com/



While the ping command is incredibly helpful in determining the reachability of different IP addresses, it has the potential to be used maliciously. 
The Ping of Death attack was a popular denial of service (DoS) attack between 1996 and 1997 which involved deliberately fragmenting IP packets so that, once reassembled, they exceeded the maximum allowed size of 65,535 bytes. A denial of service (DoS) attack derives its name from the impact that it has – users are denied service by the servers. Operating system vendors provided patches to protect against these attacks, but many websites continue to block ICMP ping messages.

Further, attackers use tools such as whois to determine the IP addresses of target organizations and then use automated ping sweeping tools to methodically ping the public addresses within a range or subnet. From there, they use port scanning to search for open ports and determine what applications or operating systems are being used and whether there is an exploitable vulnerability. These vulnerabilities might include missing patches for operating systems, firmware, and more. For instance, an operating system that was never patched against the Ping of Death would remain vulnerable to future Ping of Death attacks.

In contrast, social engineering is a tactic utilized by attackers which exploits human failure. Social engineering attacks may include phone calls, phishing emails, watering hole attacks and more. Attackers using social engineering methods will often take weeks and months getting to know a place before even coming in the door or making a phone call. Their preparation might include finding a company phone list or org chart and researching employees on social networking sites like LinkedIn or Facebook.

In truth, networks will always be vulnerable. 
The proper approach is to reduce vulnerability. 

To reduce vulnerability, avoid the following: 

  • Misconfigured firewalls
  • Unpatched vulnerabilities
  • Unsecured wireless access points
  • Default/overused passwords. 


With regards to preventing social engineering schemes, employees should be trained to identify phishing emails, to inform IT specialists within the company when those emails are received, and to handle the emails themselves appropriately. Badged access and/or two-factor authentication can further reduce the likelihood of malicious intrusion into networks.

The industry I’ve chosen is air traffic management, where computers play an incredibly important role – automation.

A brief history

Historically, the bottleneck for national airspace access has been air traffic controllers. Early air traffic control was accomplished by the post office using signal fires, flags, and large painted arrows on the ground. Aircrews would fly relatively low so as to be able to see these navigational aids. 

As planes grew more complex, so too did the technology necessary to guide them. In the 1930s, navigational aids evolved into rotating lighted beacons. Air traffic controllers began operating over radios, controlling aircraft using time over fix, airspeed, estimated time over next fix, and other tools of the trade to guide aircraft without tracking their location via radar. Flight progress was tracked on chalkboards and relied heavily on the mental acuity of the controllers, but responsibility for safety of flight fell squarely in the laps of the pilots.

Post-WWII, ever-increasing air traffic congestion led to multiple midair collisions, which had the public demanding radar installation throughout the country in the late 1950s. Further evolution saw aircraft transponding via beacons, which provided secondary radar information to supplement the primary radar returns (which controllers had previously tracked manually on their scopes using strips of paper in “shrimp boat” strip holders).

Air traffic management systems continued to evolve to meet the increased demands that stressed the limited situational awareness of air traffic controllers. Automated air traffic management systems were developed which could recognize future conflicts, often hours in advance. The FAA has been attempting to continuously upgrade air traffic management automation since the 1970s, with mixed success. The first attempted project was such an abysmal failure that it is widely regarded as one of the most terribly managed projects in project management history. That era also saw the 1981 air traffic controller strike, which ended with President Reagan firing the striking controllers – the majority of the nation’s air traffic control workforce.

Use of computers in Air Traffic Management

While individual use of personal computers in air traffic control is somewhat limited, systems continue to be developed which provide increased automation and enable controllers to handle greater workloads. It’s crucial that controllers continue to familiarize themselves with these systems and their inner workings in order to have a greater understanding of the limitations of these technologies. Unfortunately, most controllers have only a limited understanding of these systems because they’re so focused on keeping aircraft from colliding. 

More recently, technologies such as the Traffic Collision Avoidance System (TCAS) and Automatic Dependent Surveillance-Broadcast (ADS-B) have been implemented. The former enables two equipped aircraft to detect potential collision hazards between themselves at a greater distance without relying so heavily on ATC or the pilots’ own Mk-I eyeballs. The latter serves a similar purpose but was originally intended to be implemented as a replacement for radar in regions where radar coverage wasn’t feasible, specifically the Caribbean.

ADS-B

ADS-B highlights an important lack of security and privacy-mindedness regarding computers in the governments of the world. ADS-B has become a world-wide mandate despite numerous cybersecurity concerns. A little detail is necessary to explain just how bad the situation is. 

  1. First, ADS-B broadcasts aircraft identity, location details, airspeed, and more without any encryption.
  2. Second, these broadcasts are picked up by a terrestrial network of transceivers, many of which are privately owned.
  3. Third, no handshake or independent verification of the received information is possible – it’s quite simple to spoof an aircraft’s identity.
  4. Fourth, because the data is not encrypted and broadcast in real time (at 1Hz), ADS-B can actually be used to derive a targeting solution.
  5. Lastly, ADS-B has two major bandwidth issues:
    1. When message overlap occurs, the entire system becomes unreliable. Limited bandwidth combined with minimum transmission power makes this more likely to occur, and in fact it has occurred numerous times in the airspace over Florida.
    2. There’s also a user interface bandwidth issue. ADS-B displays on aircraft do not have an altitude filter, which makes it nigh impossible to discern location data on potential threat aircraft when there’s significant congestion. Again, this happens regularly over Florida.

In fact, a white-hat hacker who goes by the handle RenderMan was able to teach himself how to inject a false ADS-B signal into the national airspace in just one weekend. He did so responsibly, injecting an aircraft with the callsign “YOURMOM” into the SFO Class B airspace and buzzing the tower repeatedly (the controllers at SFO tower did not receive the transmission and there was no impact to flight safety).


However, he gave up trying to convince people of the dangers of ADS-B after it was clear nobody would listen. He has since moved on to “the internet of dongs” and is advocating for a practical cybersecurity and privacy mentality as it regards IoT-enabled sex toys.

Getting away from “the internet of dongs” and back to the ADS-B woes, this isn’t something limited to the United States, nor just to aviation. Aviation is a global logistics backbone. Consider that the economic impact of the drone incident at Gatwick was at least $124M. The absolute lack of cybersecurity mindedness with regards to the treatment of the national airspace as a network is both appalling and rampant. 

Recently, Boeing has been in hot water for espousing a short-term profit culture which prevented critical software risks from being mitigated – but Airbus will soon be eating crow. Airbus’s most modern helicopters and passenger aircraft have incorporated ADS-B collision alerts into their AUTOPILOT. Moreover, airlines (at the behest of insurers guarding against human error) mandate autopilot use while enroute, and in some cases until 50 feet off the ground. So a malicious actor could effectively shut down the next-generation primary location information source for aircraft, preventing air traffic controllers from doing their job, and inject false, non-verifiable signals which effectively steer airborne aircraft with up to 800 passengers on board. This is a systemic weakness which is being ignored, a vulnerability that non-state actors could easily exploit to wreak economic havoc, and an asymmetric warfighting capability that the world has handed over on a silver, winged, publicly-broadcast platter.

The current trajectory, no pun intended, of air traffic computer systems and networking is a move toward Four-Dimensional Trajectory Based Optimization (TBO) wherein aircraft are delayed on the ground for a couple of minutes to provide them with optimized routing hours later. 

However, new airspace entrants to include small unmanned aircraft and autonomous Urban Air Mobility aircraft (unmanned flying taxis) will throw a few wrenches into the works. Ultimately, privatization of unmanned air traffic management technologies will lead to the eventual replacement of both pilots and air traffic controllers in favor of automated systems with human-in-the-loop oversight. 

After all, the cause of most accidents is human error. 

However, that replacement is probably something like 30 years out. In the nearer term (10 years) we will likely see the implementation of 4-D TBO and the start of the use of remote tower technologies to provide air traffic control services for terminal aerodromes without air traffic control towers, or that don’t operate 24/7. These remote tower technologies can also augment controller capabilities with infrared optics, datablock overlays (instead of flight strips), and improvements in weather forecasting capabilities.

The UAS realm will see the implementation of remote ID capabilities similar to those afforded by ADS-B (FAA indicates this is approximately 2 years out, so we can expect it in 3-4) but hopefully not ADS-B based. This will enable a greater scope of unmanned operations intermixed with manned aviation. As a result, the business case for manned aviation will slowly give way to unmanned as insurers come to recognize the increased risk of manned aviation. 

Hardware upgrades will be very slow. I recall having to change out 12-inch tape reels for our facility communications recorder and being excited that we were transitioning to cassette tapes – in 2009. FAA facilities will be upgraded sooner than USAF facilities, but these technological paradigm shifts move at a glacial pace as a result of their governance by an insurmountably glacial Congress.
Note: The views expressed here are my own and do not reflect the opinions of my employer or the USAF. All of the information discussed above is publicly available. 
            Today I chose to ping and tracert nats.aero and zilliqa.io. NATS UK is the private company responsible for providing air traffic services to the UK and elsewhere. Zilliqa is a cryptocurrency based out of Singapore that I am heavily invested in.
            As you can see from the first image, www.google.com was successfully pinged 4 times with 4 packets sent and no packet loss, with times ranging from 35ms to 42ms. The nats.aero ping was similarly successful, though the average time was much higher, at 154ms. The zilliqa.io ping was also successful with no packet loss and an average time of 60ms. The longer times taken for nats.aero and zilliqa.io make sense given their global locations in relation to my own (Albuquerque, NM). The relative length of this trip can be demonstrated through a tracert.
            As seen in the Nats.aero tracert image, there were a total of 10 hops, not including the two timed-out requests. These timed-out requests can occur for a number of reasons, but the most likely in this case is an increased traffic load at the IP addresses that were later reached successfully. Tracert maps out the pathway by sending ICMP ping packets, which tend to be assigned lower priority (or are outright blocked by certain firewalls) (Susan, 2017). The routing of the packets was from my local network outward until it reached the regional Comcast router, then outbound to Los Angeles before hopping twice more to 66.155.26.134. In comparison, the Zilliqa.io tracert also made a total of 10 hops, again routing through regional Comcast routers to Los Angeles before finally reaching 192.64.119.53.
            GeeksForGeeks explains that ping “Is a utility that helps one to check if a particular IP address is accessible or not” and that it can also be used to see if computers on a local network are active. Traceroute, on the other hand, provides the exact route taken to reach the server and the time taken by each step. A reason why a ping might time out is that the IP address being pinged is unreachable – this could be for any number of reasons, including a lack/loss of internet connectivity between the computer pinging and the IP address being pinged.
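
For readers following the C# posts above, here's a rough sketch of running the same experiments programmatically with .NET's System.Net.NetworkInformation.Ping class (the host name, timeout, and hop limit are illustrative values, not the exact commands I ran):

using System;
using System.Net.NetworkInformation;
using System.Text;

class PingAndTrace
{
    static void Main()
    {
        using (var ping = new Ping())
        {
            // One ICMP echo request with a 4-second timeout, like a single line of `ping`.
            PingReply reply = ping.Send("nats.aero", 4000);
            Console.WriteLine(reply.Status == IPStatus.Success
                ? $"Reply from {reply.Address}: time={reply.RoundtripTime}ms"
                : $"Ping failed: {reply.Status}");

            // A crude tracert: raise the TTL one hop at a time and note which router answers.
            byte[] buffer = Encoding.ASCII.GetBytes("trace");
            for (int ttl = 1; ttl <= 30; ttl++)
            {
                var options = new PingOptions(ttl, true);
                PingReply hop = ping.Send("nats.aero", 4000, buffer, options);

                // TtlExpired means an intermediate router answered; Success means we've arrived.
                Console.WriteLine($"{ttl,2}  {hop.Address}  {hop.Status}");
                if (hop.Status == IPStatus.Success) break;
            }
        }
    }
}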

My prior experience utilizing Microsoft’s Office Suite is extensive, though this is my first foray into Microsoft Access. For those who have been in the Air Force, death by PowerPoint is real, but it’s also two-sided. I can’t put a number to how many weeks I have spent utilizing PowerPoint to put together presentations which utilize almost none of PowerPoint’s features because we have a standard format which must be strictly observed.

            That said, PowerPoint is a powerful tool, but not precisely appropriate for the required content for this assignment. I prefer PowerPoint when presenting to an audience physically or telephonically with the ability to provide more detail. My preferred method for this, when I’m not pigeon-holed by standards, is Pecha Kucha. In Pecha Kucha, 20 slides are presented with each being up for only 20 seconds, for a presentation length of only 6 minutes 40 seconds. The slides favor visuals with very limited (almost never justified) text that serves as memorable anchors for the message delivered with each slide by the presenter.

            Microsoft Word is a powerful word processor which is used to produce formatted text, unlike plain text documents, which are unformatted. Word also supports the inclusion of images and drawings, though I’ve never been a fan of Word’s native shape manipulation.

            Excel is another software application that I have extensive experience utilizing. One of the most frustrating aspects of working on DoD computers is that I’m unable to create macros for my Excel workbooks, but I have still managed to create a workbook with extensive computations that saves me a lot of time and energy. For the purposes of this assignment, Excel was useful in documenting the time spent on activities and creating visual depictions of the same.

            Microsoft Access is a database management system (DBMS) that enables creation, maintenance, and access to databases. Common database operations include adding new data, editing existing data, deleting data, and querying the database for information. While databases are quite powerful, I do not believe that Microsoft Access is the appropriate software application for storing this kind of data. Not because it is incapable of it, but because there’s not a whole lot of use that I see in the specific case of tracking task/activity priority for one individual. However, Access would be appropriate for tracking task/activity priority for a large group of individuals.

            Ultimately, Microsoft Word feels like the most appropriate software application for detailing my activities in one day, and I would recommend it for this and any other use cases where narrative information will be provided. Excel would be useful for conducting an analysis of the time spent on different activities each day for multiple days. Access would be useful for doing the same for a large group of people. The visuals from Excel and Access would be useful inclusions in the Word document or in a PowerPoint presentation. A PowerPoint presentation would be best utilized to present the data to a large audience simultaneously, especially if that presentation is accompanied by someone to provide further details on the information in the slideshow.


I recently reviewed an app called what3words which overlays the world with a grid of 3m x 3m squares and assigns each square a unique 3-word address.
As you can see from the screenshot below, the address for this specific 3mx3m square located in front of the Flix Brewhouse in Albuquerque is ///fence.looked.inner.


The intent of this new location reference paradigm is similar to the intent of assembly language vs machine language in that we are used to communicating in words rather than numbers. 
Here is the same address expressed in lat/longs, compared with the what3words address.
Which is easier for you to remember/communicate?
 - 35°09’32.3”N by 106°40’52.3”W. 
///fence.looked.inner

The app is designed with the user in mind, to be sure. You zoom in on the map by spreading two fingers apart and zoom out by pinching. Further, the app supports multiple languages. The app opens to your precise location.
The map featured on the app is that provided by Google, at least on my Android device. The app may utilize Apple’s map on Apple devices. The map can be set to satellite imagery or to graphical representation as with Google Maps. There is a target icon which enables the user to look at their precise location.
Once a tile is selected, the address is provided in a bar approximately 10% below the top of the screen. This address screen also provides three options: navigate here, share, and save to list which all have familiar/appropriate icons.

In the top left corner of the app is the traditional hamburger menu with more options, which enables users to change their language and other settings and also offers a tutorial for those users who are unfamiliar.
The compass icon as seen in the fence.looked.inner screenshot orients the map so that north is at the top of the screen, as per the norm. The microphone icon enables users to speak 3-word addresses and will provide approximate matches. It provides an example: ///limit.broom.flip, which when uttered provided ///limit.broom.flipped (near Osbourne, Kansas), ///limit.broom.flip (near Camden Town, London), and ///limit.room.flip (near Sao Felix do Xingu, Para, Brazil).
As stated previously, the app supports tie-ins with navigation apps and enabled me to utilize my phone’s native Map app, Uber, Compass, Zillow, and Zoom. I don’t believe Zoom is on my phone, and I’m not familiar with the app itself. As a suggested improvement, I might limit the available options to choices already installed, unless there’s a revenue stream from partnering with apps and encouraging their download. In that case, I might highlight those apps not installed and directly recommend them. When I clicked on the Zoom icon, the navigate here menu disappeared but nothing else occurred.

Where this app truly shines is the ability to provide location information to users in areas where addresses aren’t prevalent or don’t exist. For instance, if  I were on a hike and injured to such an extent that I couldn’t walk but had my phone, I could provide the 3-word address to rescue personnel rather than “I think I’m about 3 miles from the start of the 10K trail”.

3 recommendations for improvements:
  • Only show navigation apps which are already installed on the device (or alternative as described prior).
  • Enable voice search of known businesses/locations rather than voice search of what3word addresses.
  • Assign an address to known locations on Google Maps to prevent the need to zoom in all the way and select a random tile. 

p.s. - I mentioned (on LinkedIn) to Chris Sheldrick, CEO & co-founder of What3Words, that I'd done this review as a class assignment, and he asked me to provide my feedback.


Pete the Piranha Scratch Project: https://scratch.mit.edu/projects/317422602/

Compelled to do so by an assignment for my INT100 coursework, I created a relatively simple game in MIT's block language Scratch. The link to the project is above but I enjoyed making the game and it appears that my wife, kids, friends, and coworkers are all fans. So it's inspired me to recreate the game as a mobile app! Will provide more info on this as work on that app progresses. 

Having made simple programs before, I found Scratch to be a bit of a headache for two reasons.
  1. There’s a learning curve associated with the graphic user interface and the specific limitations of the language.  
  2. Having to hunt for the appropriate block(s) when I could have typed out the code was frustrating.

How did I get around these issues?

These difficulties were primarily solved by a thorough use of Google and Youtube, as well as looking at projects that other people have shared. What I gained from this experience were some insights into how modern games are coded and how variables in the program are manipulated to provide some of the mechanics for the game.

Assessment of the languages
It’s plainly evident that Scratch makes programming many simple programs significantly easier. The exercises involving Python, named for Monty Python, were my first real experience programming in the language and, to be frank, it also took some getting used to. Given my recent self-education in C++ and C#, the differences in syntax required constant attention. That said, I would prefer Python over Scratch for most software development projects, given what I can only assume are numerous limitations that Scratch has as a very high-level language.

The differences in these languages stem from the era of their development. Machine language is a means of communicating with a CPU in binary. Humans have had thousands of years to develop languages comprised of thousands of symbols and phonemes in order to communicate complex messages – while communication of this nature is possible in binary it is very difficult. As such, Assembly language was developed to make it easier for programmers to instruct CPUs in a human-readable manner.  

Python is a comparatively new high-level language which uses indentation (favored by programmers for increased readability) instead of curly brackets to structure its programs and scripts into blocks, and it is significantly easier to work with than assembly language. Scratch is a block programming language created to help children learn to break problems into chunks of logic and apply that logic toward the creation of programs – that is, to help children learn to think like programmers.

Machine code is used anywhere a traditional computer is in use – higher-level languages are translated into machine code. This includes assembly language which is typically found in drivers and the like. While you could accomplish in assembly anything you could with higher level languages, it becomes impractically long and difficult to debug. Python, on the other hand, can be used to create desktop programs, web applications and more. Recently, it has gained popularity as a language for data analysis. Scratch is useful for its intended purpose – helping newcomers to the art of programming learn to think in a way that enables them to create programs from code.

Which language is most popular? – That depends on how popularity is defined. 

Of those discussed, machine language is certainly the most omnipresent (as nearly all programs are eventually converted into those ones and zeros); however, Python is the most likely to be selected for nearly any software development project not requiring direct processor interaction. Its comparative ease of use without the limitations of a language like Scratch makes it ideal for most development (in comparison to the languages discussed herein).