His name is Noob Noob, for those who, ironically, don't get it. 

I'm taking classes toward a Computer Software Technology degree and was tasked with creating the standard "Hello World!" program...after downloading the Java Development Kit (JDK) and an Integrated Development Environment (IDE), which proved to be harder than I thought.

You know what they say about assumptions, right?
The links within the class took me to NetBeans, which I have no experience with, but I thought I recalled that Microsoft's Visual Studio supported Java development...

[Spoiler Alert] -> It doesn't. 


So Microsoft Visual Studio doesn't support Java, but Visual Studio Code does. As I'm not familiar with Visual Studio Code, though I'm sure it's wonderful, I opted for Eclipse at the recommendation of a friend.

But, I've gotten ahead of myself.

First you'll need the Java Development Kit (JDK), which allows developers to create Java programs that are executed by the Java Virtual Machine (JVM), which is in turn provided by the Java Runtime Environment (JRE).

Once you've downloaded and installed the appropriate JDK for your operating system (pay attention to 32- vs 64-bit versions for your OS of choice), you can move on to downloading and installing Eclipse.

There are plenty of tutorials out there to guide you through your first HelloWorld program in the Eclipse IDE, but I thought I'd save you the trouble and share one with you.
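In case it helps to have the finished code in front of you, here's the whole program; the class name HelloWorld is just the conventional choice for this exercise, and the file has to be named to match it.

// HelloWorld.java - the file name must match the public class name
public class HelloWorld {
    // The JVM looks for this exact main method signature as the program's entry point
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}

Eclipse compiles and runs this for you (Run As > Java Application); under the hood that's roughly equivalent to running javac HelloWorld.java and then java HelloWorld at a command prompt.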


That's great and all, but what's Java?

Well, you see, Java is a class-based, object-oriented, general-purpose programming language that was developed by Sun Microsystems in the 1990s. It's designed so that developers don't have to write multiple versions of the same application in different languages for different platforms.

Many of the applications you've used have likely been written, at least in part, in Java. In fact, it's still a top language today, and was the primary official language for Android mobile application development until Kotlin came into the picture.

Syntactically, it is very similar to C# (which is often criticized as a copy of Java). It also shares many features with C and C++, which helped shape Java's design.

Every time a meme is read aloud, a noob noob gets its wings...



Object-Oriented Design Principles

There are many object-oriented design principles to choose from, though at an introductory level, the 4 major object-oriented design principles are:
  1. Encapsulation
  2. Abstraction
  3. Inheritance
  4. Polymorphism
You'll come across these words a lot in discussing object-oriented programming, but fear not - they're really not too difficult to understand once you view them from a different perspective. But first we need to understand what an object is in this context, and to do that we need to look into how OOP came to be...

Early programming relied on procedural programming practices, which broke seemingly endless legacy code down into manageable pieces. A procedure, or function, could now be written once and called from anywhere - saving a great deal of duplication.

Then came the idea of modules - a set of functions and a data structure that those functions operate on. An object is the logical evolution of the module into a single entity which encapsulates data (values) and behavior (functions/methods that operate on the stored values). 

Encapsulation 

The concept of encapsulation is that an object encapsulates, or contains, these values and functions/methods while hiding its data from everything except those accessors and mutators that are appropriately permissioned.
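A minimal sketch of what that looks like in Java (the Person class and its fields are invented purely for illustration):

// The data is private; the outside world goes through the accessor
// (getter) and mutator (setter) the class chooses to expose.
public class Person {
    private double heightMeters;  // hidden state
    private double weightKg;      // hidden state

    public Person(double heightMeters, double weightKg) {
        this.heightMeters = heightMeters;
        this.weightKg = weightKg;
    }

    // Accessors: the sanctioned way to read the values
    public double getHeightMeters() {
        return heightMeters;
    }

    public double getWeightKg() {
        return weightKg;
    }

    // Mutator: the sanctioned way to change a value, with a sanity check
    public void setWeightKg(double weightKg) {
        if (weightKg > 0) {
            this.weightKg = weightKg;
        }
    }
}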

Abstraction

Abstraction is a logical continuance of encapsulation. Suppose you -

yes! you. the person reading this.

                                                                                                               - were an object

YOU ARE

And suppose I were to write a health app with a function that calculated your Body Mass Index (BMI) by taking your height, weight, etc. The abstraction principle would have us do it differently: instead of having me take your measurements and do the math, I can just ask you for the answer, because you (the object) should already know it.

To understand why this is beneficial, let's take the BMI example except IRL. 

If you've ever been in the military, they really take Ford-era manufacturing concepts to heart. 

Instead of having one doctor do all this kind of measuring by hand, the Abstraction Principle would have the soldiers all already have calculated and stored their BMI in memory. Suppose there are 50 soldiers. 

Before, you'd be waiting for one function to be carried out 50 times, with multiple values and measurements communicated for each calculation (eating up CPU time and memory making copies).

Now, applying the principle of abstraction, there would still be a call to all 50 soldiers - continuing the analogy, the doctor still has to ask - but each soldier already knows their own BMI, so the doctor simply collects 50 ready-made answers instead of taking 50 sets of measurements and doing 50 calculations.
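Here's a rough sketch of that in Java; the Soldier and Doctor classes and the getBmi() method are invented for this analogy. Each soldier object holds its own measurements and exposes a single getBmi() method, so the "doctor" code never handles raw heights and weights or does the math itself.

import java.util.List;

public class Soldier {
    private final double heightMeters;
    private final double weightKg;

    public Soldier(double heightMeters, double weightKg) {
        this.heightMeters = heightMeters;
        this.weightKg = weightKg;
    }

    // The calculation lives with the data it needs; callers just ask for the answer.
    public double getBmi() {
        return weightKg / (heightMeters * heightMeters);
    }
}

class Doctor {
    // The doctor only knows soldiers can report a BMI - not how it's computed.
    static void screen(List<Soldier> platoon) {
        for (Soldier s : platoon) {
            System.out.println("BMI: " + s.getBmi());
        }
    }
}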

Inheritance

Inheritance is the major principle of object-oriented design which is probably the easiest to understand - especially by analogy. 

We're all relatively familiar with the high level taxonomy of life on earth. At the root of the taxonomy is life. From life we have flora (plants) and fauna (animals). From fauna we have reptiles, mammals, marsupials, etc. 

Eventually we get down to humans. All 50 soldiers are humans, but not all humans are soldiers. Similarly, not all fauna are human, but all humans are fauna.

Inheritance is the principle of establishing "is-a" relationships between objects: every square is-a rectangle, and every soldier is-a human. ("Has-a" relationships - every circle has-a circumference - are modeled instead through composition, by making one object a field of another.)

By purposely applying these relationships, we're able to extend or modify the behavior and/or characteristics of one type of object to produce a specialized version of it. 
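In Java, the taxonomy analogy might look something like this (all class names are illustrative):

// Every Human "is-a" Animal; every Soldier "is-a" Human.
class Animal {
    void breathe() {
        System.out.println("breathing...");
    }
}

class Human extends Animal {
    void speak() {
        System.out.println("hello");
    }
}

class Soldier extends Human {
    void march() {
        System.out.println("left, right, left...");
    }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        Soldier s = new Soldier();
        s.breathe(); // inherited from Animal
        s.speak();   // inherited from Human
        s.march();   // the specialized behavior Soldier adds
    }
}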

Polymorphism (many shapes)

Don't worry, its bark is worse than its bite. 


As is subtly included in the mini-header, polymorphism means many shapes. There are two kinds of polymorphism:

Static polymorphism in which methods are overloaded. Method overloading is when a class has more than one method of the same name but the methods take different parameters. Thus when a call is made for the method, the method with the matching parameter/argument list is selected. This is also called compile time polymorphism.

Dynamic polymorphism is when methods are overridden - meaning that a program has two methods with the same name and parameter list, except that one of the methods is in the parent class and the other is in the child class. This allows a child class to have a specific, tailored version of a method inherited from its parent. Dynamic polymorphism is also called runtime polymorphism.
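A sketch of both flavors in Java, with invented Shape and Circle classes:

class Shape {
    // Dynamic (runtime) polymorphism: subclasses override this method.
    double area() {
        return 0.0;
    }

    // Static (compile-time) polymorphism: same name, different parameter lists.
    void describe() {
        System.out.println("a shape");
    }

    void describe(String label) {
        System.out.println("a shape called " + label);
    }
}

class Circle extends Shape {
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    @Override
    double area() {
        return Math.PI * radius * radius; // the child's tailored version wins at runtime
    }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Shape s = new Circle(2.0);
        System.out.println(s.area()); // calls Circle.area() even though the variable's type is Shape
        s.describe();                 // overload chosen at compile time by the argument list
        s.describe("unit circle");
    }
}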




Operating Systems
Let’s face it, comprehending a lot of binary at scale is difficult and tedious work for people.

This is saying something because machine code, traditionally written in binary, is itself a shorthand means of communicating switch positions on mechanical computer components.

In the old days this meant walls of switches to be manually configured, often by teams of women as seen below. The ENIAC (Electronic Numerical Integrator and Computer), completed in 1946 and generally regarded as the first general-purpose electronic digital computer, was programmed primarily by a team of six female "computers" who manually set thousands of 10-way switches (Lightfoot, 2016).

The ENIAC, University of Pennsylvania (Nicholle, n.d.)

If you look at it a certain way, the evolution of higher-level programming languages and operating systems has been about getting the end-user as far away from dealing with all that pesky switch configuration as possible.

In a sense, the operating system is the last layer of abstraction from the user’s perspective and the first layer from the system’s perspective.

The user’s intent is passed through to the operating system in a number of programming languages that are compiled, assembled, and packaged for the operating system to conduct the movement of data to and from the hardware components.

Features of contemporary operating systems and their structures.
In truth, the above analogy falls short of giving enough credit to what the ENIAC programmers and operating systems really did and do, respectively.  

Operating systems exist without an industry-wide accepted definition as they can vary wildly based on the hardware they interface with. In general, however, they can be thought of as the mutual interface between the user and the hardware.

Contemporary operating systems have many features, as depicted in the following visualization.


What’s a Process?
Silberschatz, Galvin, and Gagne (2014) explain that a process "is a program in execution" (p. 150). A program alone accomplishes nothing; a program not in execution is in a passive state, while a program in execution is in an active state. What differentiates a program from a process is that state of activity. When a program is loaded into memory, it becomes a process.
Here we see a visualization of a process inside memory.
1.      Text: The program code itself, along with the current activity as represented by the program counter value and the contents of the processor's registers
2.      Data: Global and static variables
3.      Heap: Memory dynamically allocated to the process during run time
4.      Stack: Temporary data such as method/function parameters, return addresses, and local variables
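As a rough way to tie those regions back to the Java we've been writing, here's a loose mapping (it's a simplification; the JVM layers its own memory management on top of the operating system's):

public class MemoryRegions {
    static int globalCounter = 0;          // data region: static/global variables

    public static void main(String[] args) {
        int localValue = 42;               // stack: local variables and parameters
        int[] buffer = new int[1024];      // heap: dynamically allocated objects
        buffer[0] = localValue + globalCounter;
        System.out.println(buffer[0]);
        // the compiled code of this method itself corresponds to the text region
    }
}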
Process States
When processes execute, they exist in one of five states: New, Running, Waiting, Ready, and Terminated (Silberschatz, Galvin, & Gagne, 2014). Consider the example of an IT help desk worker processing trouble tickets.


How operating systems enable processes to share and exchange information
Modern computing systems utilize numerous types of memory to store data and instructions throughout multiple queuing processes. In effect, each interaction between devices must be choreographed.

For instance, processes awaiting CPU utilization must be scheduled appropriately so that they use the CPU, and any shared variables, in the appropriate sequence. Further, the operating system must have places to store the values that will be used.
In general, memory that’s fast tends to be more expensive per byte of storage. Conversely, memory that’s cheap per byte of storage tends to be slower – and speed is of the essence.

Consider that computers are intended to increase the efficiency of the execution of our will. As such, it makes sense that they should be configured with efficiency in mind. This means placing the faster, more expensive memory closer to the CPU so that the CPU can quickly call upon stored data or instructions.

Threads
A thread is the basic unit of CPU utilization (Silberschatz, Galvin, & Gagne, 2014). If a CPU were a bridge, then threads would be cars. A single-lane bridge would be similar to a single-threaded process: only one stream of execution makes progress at any given time. Add multiple lanes and you have the multithreaded model.

Back to our IT trouble ticket analogy from above - what we've been discussing thus far is an IT worker who only works one ticket, to completion, at a time. This wouldn't be a very efficient use of the technician's skills and labor. No, instead you'd expect her to work a ticket until a delay occurred, then focus her attention on the next highest-priority trouble ticket.
Multithreading is motivated by the ability to maximize resource utilization through resource sharing, which enables scalable and responsive designs. As you can imagine, multithreading is common in modern software applications.
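In Java, spinning up extra "lanes" is about as simple as it gets. Here's a toy sketch in which the two trouble tickets are just invented Runnable tasks:

public class HelpDeskDemo {
    public static void main(String[] args) throws InterruptedException {
        // Two "trouble tickets" worked concurrently instead of one at a time.
        Runnable ticketA = () -> System.out.println("Resetting a password on " + Thread.currentThread().getName());
        Runnable ticketB = () -> System.out.println("Re-imaging a laptop on " + Thread.currentThread().getName());

        Thread t1 = new Thread(ticketA, "worker-1");
        Thread t2 = new Thread(ticketB, "worker-2");
        t1.start();
        t2.start();

        // Wait for both threads to finish before the main thread exits.
        t1.join();
        t2.join();
    }
}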
Scheduling and the PCB:
Think of process control blocks, or PCBs, as an instance of a process with a list of attributes and values that can be referenced just like those of an instance of a user-defined class within a software application's source code.
Process control blocks (PCB), or task control blocks, are representations of processes which are comprised of numerous pieces of information, namely: Process states, process number, program counter, registers, scheduling information, memory management information, accounting information, and I/O status. 
A computer is, in essence, a miniaturized factory which takes data values and instructions as inputs and outputs transformed data according to the instructions it was given. An operating system, then, is responsible for many things to include the scheduling of movement of packets of data from one system resource to another - such as copying a file in RAM to HDD for non-volatile safekeeping. 
The Operating System maintains three separate scheduling queues: the Job Queue, the Ready Queue, and the Device Queue.
Lightfoot, 2017
  • The job queue keeps all the processes in the system. 
  • The ready queue contains all processes loaded into main memory and ready to execute. 
  • The Device queue contains those processes which are delayed due to the lack of availability of an I/O device. 

I/O (Input/Output) Devices:
There are three types of input/output (I/O) operations: sensors, control, and data transfer. Some examples of output devices include speakers, projectors, monitors, and traffic lights. Some examples of input devices include cameras, anemometers (wind sensors), keyboards, mice, and touchscreens.
The operating system is responsible for managing various input and output devices such as those described above and many, many more. An I/O system takes I/O requests coming from applications and sends those requests to the physical device, then accepts whatever response comes back from the device and returns that result to the application that originated the request.
The Critical-Section Problem:
Scheduling of processes is incredibly important. Operating systems must ensure that processes are executed in an appropriate order. It wouldn’t make any sense for the IT technician to work on the print server installation if the print server has been purchased but not yet received. 

Worse - what if there were two IT technicians both accidentally working the same trouble ticket – that would get problematic quickly.

Comparably, operating systems must concern themselves with the order in which processes read from and write to memory. The critical-section problem is normally solved by satisfying three requirements: mutual exclusion, progress, and bounded waiting.
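Java's synchronized keyword is one way to get the mutual-exclusion piece of that. A minimal sketch, with an invented shared ticket counter:

public class TicketCounter {
    private int ticketsClosed = 0;

    // Only one thread may be inside this method (the critical section) at a time.
    public synchronized void closeTicket() {
        ticketsClosed++;   // a read-modify-write that would be unsafe without mutual exclusion
    }

    public synchronized int getTicketsClosed() {
        return ticketsClosed;
    }

    public static void main(String[] args) throws InterruptedException {
        TicketCounter counter = new TicketCounter();
        Thread a = new Thread(() -> { for (int i = 0; i < 10_000; i++) counter.closeTicket(); });
        Thread b = new Thread(() -> { for (int i = 0; i < 10_000; i++) counter.closeTicket(); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter.getTicketsClosed()); // always 20000 with synchronization
    }
}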
Memory Management
Virtual address space is the set of all virtual addresses generated by a given program. When the program generates logical addresses, they're converted into physical addresses within the physical address space, the set of all physical addresses corresponding to those logical addresses. The memory management unit (MMU) is the hardware component that performs this run-time translation. Virtual memory is, in essence, an abstraction of addresses that lets the operating system protect processes against unsafe concurrent use of memory.
For instance, assume that Program A and Program B both manipulate the value stored at address X. Let’s say that Program A multiplies the value of X by 7 while Program B cubes the value.
If X0 = 4, then with Program A running first, X = (4 × 7)³ = 21,952; with Program B running first, X = (4³) × 7 = 448.
The result after both programs have run is dependent on the order of operations. Similarly, operating systems must provide a means by which to sequence memory access. This ensures that the data stored in the physical memory is being manipulated in the correct order.
While jobs/tasks must be scheduled for execution in order to maximize CPU utilization, memory access must be managed to protect against inadvertent data manipulation.
TutorialsPoint explains that "[p]aging is a memory management technique in which process address space is broken into blocks of the same size called pages" (n.d.). Main memory is similarly divided into frames of the same size as the pages. A virtual address is comprised of the page number and the offset, while a physical address is comprised of the frame number and the offset. Paging is only one memory management technique. Another, segmentation, divides a job into many pieces (segments), just as paging does; the main difference is that in segmentation the blocks of memory are variable in size.
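To make the page-number/offset split concrete, here's a toy translation sketch; the page size and page-table values are made up, and a real MMU does this in hardware:

public class PagingDemo {
    static final int PAGE_SIZE = 4096;                 // assumed 4 KB pages
    static final int[] PAGE_TABLE = {7, 3, 12, 5};     // pageNumber -> frameNumber (toy values)

    // Split a virtual address into (page, offset), then rebuild the physical address.
    static int translate(int virtualAddress) {
        int pageNumber = virtualAddress / PAGE_SIZE;
        int offset = virtualAddress % PAGE_SIZE;
        int frameNumber = PAGE_TABLE[pageNumber];
        return frameNumber * PAGE_SIZE + offset;
    }

    public static void main(String[] args) {
        int virtual = 2 * PAGE_SIZE + 100;             // page 2, offset 100
        System.out.println(translate(virtual));        // frame 12, offset 100 -> 49252
    }
}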



File Systems
The objective of file-system management is to provide an efficient means of manipulating files. These files exist as an abstraction of collected data packaged for user interaction: a "named collection of related information that is recorded on secondary storage" (Silberschatz, Galvin, & Gagne, 2014).
The supported operations for a file management system vary depending on the specific application but tend to include the following at a minimum: creating a file, writing a file, reading a file, repositioning within a file, deleting a file, and truncating a file (Silberschatz, Galvin, & Gagne, 2014).
Two functions with which many casual computer users are familiar are open() and close(). Opening a file stores information about it for the period it's in use, so that the file can be quickly accessed for reads, writes, and other operations. Otherwise, the system would have to inefficiently spend time searching for the file during each interaction.
Other important functions include enabling users to access a file's attributes. Windows File Explorer, for instance, includes the option to view file properties after a file's icon has been right-clicked. Another important piece of per-file bookkeeping is the internal read/write pointer. Think of that pointer like the blinking cursor in a word processor: it lets you know where the characters you type or delete will take effect. You know that if you press backspace, it will delete the character to the left of the cursor and move the cursor one position to the left. In fact, a text file could be thought of as an array of characters, and the pointer simply represents the index at which the next action will be taken.
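Java's RandomAccessFile exposes exactly that pointer through its seek() method. A small sketch (the file name is made up):

import java.io.IOException;
import java.io.RandomAccessFile;

public class FilePointerDemo {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile("notes.txt", "rw")) {
            file.writeBytes("hello world");
            file.seek(6);                        // move the read/write pointer to index 6
            file.writeBytes("there");            // overwrites "world" with "there"
            file.seek(0);                        // rewind to the beginning
            System.out.println(file.readLine()); // prints "hello there"
        }
    }
}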
Mass storage on a modern operating system sees the OS sending read and write requests in an order governed by a scheduling model such as First-Come, First-Served (FCFS), Shortest-Seek-Time-First (SSTF), SCAN, or C-SCAN. If you imagine a hard disk as the cross-section of a heavily ringed tree, think of each ring as a very thin three-dimensional cylinder; this is how hard disks are divided into tracks. They are further divided into sectors, the grains of wood within those rings, to continue the analogy.
Protection and Security
Silberschatz, Galvin, and Gagne (2014) describe a computer system as a collection of processes and objects (sec. 13.3). These objects can be tangible hardware objects such as memory segments, CPUs, and printers, or abstract software objects such as files and programs.
As you can see from the graph below, protection and security are related but distinct functions of operating systems. Protection focuses on managing internal threats to objects and domains, while security is concerned with preventing intentional or unintentional misuse of the system for purposes other than those for which it was designed.

In object-oriented programming (OOP), we're introduced to the concept of inheritance and of private, public, and protected (in some languages) methods and attributes of objects. The idea is that only the objects which genuinely need a given attribute or method should be able to affect that attribute or call that method.
Similarly, systems must ensure, via protection against internal threats and security against external ones, that their hardware and software objects can only be interacted with in the ways the system's creators and users expect.
Domains are essentially predefined sets of permissions to read, write, execute, print, switch, etc. on specific objects, such as files. In the access matrix below, the columns form access lists and the rows form capability lists. An access list is the set of permissions for interacting with a given object; a capability list is the inverse idea, representing the set of object permissions held by a given domain.
In the access matrix provided by Silberschatz, Galvin, & Gagne (2014), the capability list for Domain 1 (D1) could look something like {F1, read; F3, read}. F3's access list, in comparison, might look like {D1, read; D3, execute; D4, read/write}.
The fundamental difference between protection and security has to do with the origin of the threats to data that they address. Protection addresses threats that are internal to the system such that bugs and other eventualities do not affect the data stored by the system. Conversely, security addresses threats that are external to the system such as malicious or ignorant users.
One example of a means by which security is enforced is the implementation of bounds checking to prevent exploitation of stack or buffer overflows. In essence, stack/buffer overflow attacks take advantage of how computers store processes and the data associated with them. By supplying an input that's too large for the buffer the program set aside for it, an attacker can overwrite adjacent memory on the stack with data of their choosing; through experimentation, they can craft that data so the computer treats it as valid code to execute.
Think of it like the Konami code for Contra, except that instead of giving you infinite lives it injected malicious code onto the NES, recorded your key presses, and stored them in memory somewhere for later retrieval. Bounds checking ensures that a program only accepts the amount of data it was designed to handle.
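Java already bounds-checks every array access at runtime, but the same idea applied deliberately looks roughly like this (the buffer size and inputs are invented):

public class BoundsCheckDemo {
    static final int BUFFER_SIZE = 16;

    // Reject input that would overflow the buffer instead of blindly copying it.
    static byte[] copyInput(byte[] input) {
        if (input.length > BUFFER_SIZE) {
            throw new IllegalArgumentException("input too large: " + input.length + " bytes");
        }
        byte[] buffer = new byte[BUFFER_SIZE];
        System.arraycopy(input, 0, buffer, 0, input.length);
        return buffer;
    }

    public static void main(String[] args) {
        copyInput("ok".getBytes());          // fits, copies fine
        try {
            copyInput(new byte[64]);         // too big - rejected instead of overflowing
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}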
Excitement Abounds
These concepts aren't pure academia; they're the building blocks for understanding how the software applications I intend to develop will interact with my customers' hardware. Additionally, the final topics of security and protection have kindled an interest in the responsible development of my app.

I’ve really come to appreciate the numerous layers of abstraction we enjoy. More important is that it’s slowly dawning on me that this elephantine, constantly growing mountain of as-of-yet-unobtained software development knowledge and experience is driven by a need to abstract the human just a bit further from the machine.

References
Lightfoot, J. (2016, July 31). Introducing ENIAC Six: Atomic's room named for the women who programmed the ENIAC. Retrieved from https://spin.atomicobject.com/2016/07/31/eniac-programmers/

Nicholle, M. (n.d.). The ICT Lounge: History of computers timeline (1910-1959). Retrieved from https://www.ictlounge.com/html/history_of_computers_1910-1960.htm

Silberschatz, A., Galvin, P. B., & Gagne, G. (2014). Operating system concepts essentials (2nd ed.). Retrieved from https://redshelf.com/