Monday, January 24, 2022

CPT307 - Week 5 - Interactive Activity 1



    Programming relies on a set of algorithms that help a programmer complete a variety of tasks, some of which require searching or sorting information within a system. In this blog post, we will discuss which algorithms might be best suited for searching and sorting data in a program and how they can be applied differently in different programs. Bear in mind that for each of these algorithms we want to understand its worst, best, and average cases before using it in the development of your program. We can review a chart that provides some insight into these different cases later in this post.

    What type of algorithm someone should use depends entirely on what they want their program to do, so in case you are unsure, let's go through some of the basic searching algorithms first. The two searching algorithms I will be talking about today are linear and binary search. A linear search algorithm checks each element in a list and stops once it has found the desired element. A binary search algorithm first examines the middle element of a list: if that element matches the search value, the search can stop; if the middle element is greater than the value we are searching for, we repeat the search on the portion of the list before the middle element; and if it is less, we repeat the search on the portion of the list after the middle element. However, for a binary search to work, the list being searched needs to be sorted, so another algorithm must be used to sort the list of elements beforehand.
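    To make the difference concrete, here is a minimal Java sketch of both searches (the array contents and method names are just illustrative):

    import java.util.Arrays;

    public class SearchDemo {

        // Linear search: check each element until the target is found.
        static int linearSearch(int[] data, int target) {
            for (int i = 0; i < data.length; i++) {
                if (data[i] == target) {
                    return i; // index of the first match
                }
            }
            return -1; // not found
        }

        // Binary search: repeatedly halve a SORTED array.
        static int binarySearch(int[] sorted, int target) {
            int low = 0, high = sorted.length - 1;
            while (low <= high) {
                int mid = (low + high) / 2;
                if (sorted[mid] == target) {
                    return mid;
                } else if (sorted[mid] > target) {
                    high = mid - 1; // search the portion before the middle
                } else {
                    low = mid + 1;  // search the portion after the middle
                }
            }
            return -1; // not found
        }

        public static void main(String[] args) {
            int[] data = {42, 7, 19, 3, 25};
            System.out.println(linearSearch(data, 19)); // 2

            int[] sorted = data.clone();
            Arrays.sort(sorted); // binary search requires a sorted list
            System.out.println(binarySearch(sorted, 19));
        }
    }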

    There are a variety of algorithms that can be used to sort a list of elements within a program, but what exactly is a sorting algorithm? Sorting algorithms organize an array or list of elements in a predictable order, generally alphabetical or ascending/descending numerical order, which makes it easier to search for an element in the array or list. Following this link Here will describe some of the different sorting methods and which might be best suited for sorting your list of elements.
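    As one concrete example of such a method, here is a short Java sketch of insertion sort, one of the simpler sorting algorithms (the sample array is just illustrative). Note how its best, average, and worst cases differ, as mentioned earlier:

    public class InsertionSortDemo {
        // Insertion sort: grows a sorted prefix, inserting each new element
        // into place. O(n^2) worst/average case, O(n) best case when the
        // array is already sorted.
        static void insertionSort(int[] a) {
            for (int i = 1; i < a.length; i++) {
                int key = a[i];
                int j = i - 1;
                while (j >= 0 && a[j] > key) {
                    a[j + 1] = a[j]; // shift larger elements one slot right
                    j--;
                }
                a[j + 1] = key;
            }
        }

        public static void main(String[] args) {
            int[] a = {5, 2, 4, 6, 1, 3};
            insertionSort(a);
            System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5, 6]
        }
    }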


     One really important factor to bear in mind when developing a program is how much memory will be allocated for the program to run. This matters because of efficiency: when data needs to be sorted or searched, we want the program to be optimal, using a searching/sorting algorithm combination that lowers both the memory needed and the time taken for each task.

    Now that we have established sorting and searching algorithms, let's take a look at data structures. Data structures are organized ways to store information in a program. The typical data structures are arrays, linked lists, stacks, queues, and trees. They all share similar reasons to be used; however, they differ in how a program stores information and how that information can be accessed. Arrays allow elements to be accessed directly by index, in any order. Stacks and queues, by contrast, impose an order: first-in-first-out for queues and last-in-first-out for stacks. Linked lists and trees use nodes to store information. Linked lists allow information to be inserted or removed anywhere in the list, which makes them easy to work with. Trees are a more complex data structure, as they start from a root node and branch into sub-nodes.
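    A short Java sketch shows the stack and queue orderings side by side, using the standard ArrayDeque class (which can serve as either):

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class StructureDemo {
        public static void main(String[] args) {
            // Queue: first-in-first-out.
            Deque<String> queue = new ArrayDeque<>();
            queue.addLast("first");
            queue.addLast("second");
            System.out.println(queue.removeFirst()); // "first"

            // Stack: last-in-first-out.
            Deque<String> stack = new ArrayDeque<>();
            stack.push("first");
            stack.push("second");
            System.out.println(stack.pop()); // "second"
        }
    }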
    After learning more about data structures and algorithm concepts throughout this course, I will apply these newly acquired skills to my programs to make them better organized, so that searching through my code to make improvements will be much easier.

Tuesday, December 14, 2021

CPT307 - Week 1 - Interactive Activity 2

 Hello everyone,


This week, I will be discussing how to install Java and the NetBeans IDE, as well as the main concepts and features of Object-Oriented Programming (OOP).


JDK/NetBeans Guidance

To start, we are going to install Java (JDK). Visit this website - Java (JDK). From this link, choose which operating system you would like to install Java on; in this case we are working with Windows, so select whichever bit version of Windows you are running (in my personal case, x64), then click one of the various links provided. I find it easiest to use the installer, which is about 152MB. Once it has downloaded, open the executable and follow the visual prompts from the installer. Once it completes, to verify that Java has been installed correctly, open the command prompt: press the Windows key, choose Run, and type CMD, or press the Windows key and type "command prompt". A black window will open.

From here in the command prompt, type the command javac -version. If Java is installed correctly, you will receive a message confirming which version you have installed. See the image below.

If you did not receive the message above, try reinstalling Java and/or restarting the computer; sometimes this can fix the issue. Now that we have successfully installed Java, we need to get the NetBeans IDE. We can download the installer from the following link - Netbeans IDE. As of this writing, the current release of the NetBeans IDE appears to be 12.6. Download it, then install the software following the visual prompts. After successfully installing the program, we can open the NetBeans IDE application. Now that everything is set up, visit this link to the NetBeans IDE quick start guide - Quickstart. It is the best resource for creating our first application, "Hello World!", and it provides an up-to-date walkthrough of the different features found in the NetBeans 12.6 environment.
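For reference, the first program that guide walks you through boils down to something like this:

    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello World!"); // printed to the console
        }
    }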

Object-Oriented Programming Concepts

Object-Oriented Programming (OOP) is a methodology for designing a program using objects and classes. Essentially, objects are entities with two key aspects: states and behaviors. An object's state describes the entity; for example, a lamp's state might include its base, its height, and perhaps its brightness level, while its behaviors would be turning on and off. Classes, in turn, are blueprints from which objects are created. Building on objects, there is the concept of inheritance, where a class has all the attributes of a parent class, which leads to the concept of polymorphism, where the same task can be performed in different ways. Those are just some of the key concepts of Object-Oriented Programming, but why would anyone use this style of programming in Java? Object-Oriented Programming makes the development and maintenance of a project easier as the project grows in size. It also provides the ability to hide critical data within the program. The following resource will help anyone new to the Java language get a better understanding of how Object-Oriented Programming works and how we can use it to improve our skills. Link to resource - Object-Oriented Programming
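A small Java sketch ties these concepts together; the Lamp example and class names are mine, chosen to mirror the description above:

    // A class is a blueprint; each Lamp object has state and behavior.
    class Lamp {
        private int brightness; // state (hidden from outside: data hiding)
        private boolean on;     // state

        void toggle() { on = !on; }      // behavior
        boolean isOn() { return on; }

        String describe() { return "a plain lamp"; }
    }

    // Inheritance: a DeskLamp has everything a Lamp has.
    class DeskLamp extends Lamp {
        // Polymorphism: the same task, performed a different way.
        @Override
        String describe() { return "a desk lamp"; }
    }

    public class OopDemo {
        public static void main(String[] args) {
            Lamp lamp = new DeskLamp(); // a DeskLamp is-a Lamp
            lamp.toggle();
            System.out.println(lamp.isOn());     // true
            System.out.println(lamp.describe()); // "a desk lamp"
        }
    }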

Saturday, December 11, 2021

CPT304: Week 5 Project

 

   Operating Systems
Brandon Fillian
CPT304: Operating Systems Theory & Design
Bret Konsavage
December 13, 2021

             

Operating Systems

Section 1 – Major Functions of an Operating System

Operating Systems (OS) are essentially the way an end-user interfaces with a computer's software and hardware. The key function of an operating system is to allow applications to run on the computer system's hardware, which is coordinated through processing on the central processing unit (CPU). There are a total of nine key concepts that make up an operating system; they are as follows: error detection, protection and security, accounting, resource allocation, communication, file systems, I/O operations, program execution, and user interface. These nine concepts can be brought together to form the five major functionality groupings that will be discussed throughout this paper, alongside illustrative diagrams to help create a better understanding of how these functions interact.

The ideal concept of an operating system is to manage the end-user experience and ensure the user can operate the system with ease. To do this, the operating system is made up of the components mentioned previously: six of these key functions make the operating system easier for the end-user, while the other three ensure that the computer runs efficiently. The six user-facing functions are program execution, I/O operations, user interface, file-system manipulation, error detection, and communications. For an end-user to establish any communication with the computer itself, there must be some form of user interface or command line; an example of a user interface is the Windows operating system environment. Once an end-user has access to this interface, there needs to be input of some sort, which is handled through I/O operations; these allow the user to interact with the interface and send operations to the CPU for execution. Within the operating system's user interface are file-system manipulation tools, which allow the user to read, write, move, delete, and otherwise access files in storage. The error-detection function of an operating system detects errors as they arise and corrects them before any larger issue occurs. The last of the six user-facing functions is communications, which allows processes to manage shared data. The remaining three functions support the computer system's efficiency. Resource allocation makes sure that running processes do not overuse the system's resources, such as memory. The protection and security function helps prevent unwanted or malicious activity on the system. Finally, accounting keeps track of the user's activity.


Diagram of Major Functions of Operating Systems

 

 Section 2 – Processes and Synchronization

            Processes are units of work, the user programs or tasks that are executed within an operating system (Silberschatz et al., 2014). In other words, when a user opens an application, a process (an operation or task) is created and executed, passing directly through the computer's operating system. Process states are the stages a process is currently in, referencing the five different states: new, running, waiting, ready, and terminated (Silberschatz et al., 2014). Process states signify how a program or task progresses from creation to completion; for example, starting an application creates a new process in the operating system. Running means the process's instructions are currently being executed on the CPU; waiting means the process is waiting for some event to occur; and ready means the process is waiting to be assigned to a processor. Terminated means the task has been completed, or that it failed and ended before the entire process could complete, whether because of a fault in hardware or software or because the task was interrupted by the user. The process control block (PCB) is a representation of a process, holding the pieces of information specific to it: process state, process number, program counter, registers, memory limits, list of open files, and so on (Silberschatz et al., 2014). The diagram below illustrates these states and the PCB.

 

Diagram of Process States and Process Control Block
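As a rough illustration, this bookkeeping can be modeled in Java. This is a toy sketch of the five states and a few PCB fields, not how a real kernel stores them:

    // Toy model of the five process states and a few PCB fields.
    enum ProcessState { NEW, READY, RUNNING, WAITING, TERMINATED }

    class ProcessControlBlock {
        int processNumber;                     // unique process id
        ProcessState state = ProcessState.NEW; // every process starts as NEW
        long programCounter;                   // address of the next instruction
        long memoryLimit;                      // bound on the address space

        void admit()     { state = ProcessState.READY; }      // new -> ready
        void dispatch()  { state = ProcessState.RUNNING; }    // ready -> running
        void block()     { state = ProcessState.WAITING; }    // running -> waiting on an event
        void terminate() { state = ProcessState.TERMINATED; } // finished or failed
    }

    public class ProcessDemo {
        public static void main(String[] args) {
            ProcessControlBlock pcb = new ProcessControlBlock();
            pcb.processNumber = 1;
            pcb.admit();     // the OS admits the new process
            pcb.dispatch();  // the scheduler assigns it to the CPU
            pcb.terminate(); // the task completes
            System.out.println("final state: " + pcb.state);
        }
    }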



    Processes have a special significance. Each process consists of one or more threads; the threads share the process's code, data, and files, while each thread has its own stack and registers. There are two types of processes: single-threaded and multi-threaded. A single-threaded process operates on its own with a single thread of control, whereas a multi-threaded process has multiple threads of control. What a multi-threaded process is capable of doing is running multiple operations in parallel, also known as parallelism (Patterson & Hennessy, 2014). Below are diagrams of a single-threaded process and a multi-threaded process.

 

Diagram of Single-Thread Process and Multi-Threaded Process
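In Java, creating a second thread of control within the same process looks like this minimal sketch:

    public class ThreadDemo {
        public static void main(String[] args) throws InterruptedException {
            // A second thread of control within the same process;
            // it shares the process's code and data with the main thread.
            Thread worker = new Thread(() -> {
                for (int i = 0; i < 3; i++) {
                    System.out.println("worker: " + i);
                }
            });
            worker.start(); // both threads now run in parallel
            System.out.println("main thread continues");
            worker.join();  // wait for the worker to finish
        }
    }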


  

 

    Due to the nature of how processes are established, sharing the same variable between two different processes can be an issue; this is the critical-section problem. A critical section is a segment of code in which a process accesses shared variables, and only one process is allowed to execute in its critical section at any given time; all other processes must wait for that process to exit its critical section before they can enter their own. Peterson's solution for two processes, i and j, requires two shared data items: an integer turn and a boolean array flag. To enter its critical section, process i sets flag[i] = true and turn = j, then waits in a while loop as long as flag[j] is true and turn == j; when it leaves its critical section, it resets flag[i] to false (Silberschatz et al., 2014). Both processes may assign turn at nearly the same time, but only one of the assignments lasts; the other is overwritten immediately. With this algorithm, mutual exclusion is preserved, the progress requirement is satisfied, and the bounded-waiting requirement is met, because process i can enter its critical section only if flag[j] == false or turn == i. Process i is prevented from entering its critical section only while it is stuck in the while loop with the condition flag[j] == true and turn == j; if process j is not ready to enter its critical section, then flag[j] is false and process i may enter. Once process j exits and resets flag[j] to false, process i proceeds, entering after at most one entry by process j (bounded waiting) (Silberschatz et al., 2014). In much simpler wording, this is a way of alternating between two processes that are both trying to access the same shared variables.
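Here is the algorithm written out as a Java sketch for two threads. The volatile keyword makes writes to the flags and turn visible across threads; production Java code would normally use the locks in java.util.concurrent, so this sketch only mirrors the structure described above:

    // Peterson's solution for two threads (0 and 1), mirroring the
    // shared data items described above.
    public class Peterson {
        private volatile boolean flag0 = false; // thread 0 wants to enter
        private volatile boolean flag1 = false; // thread 1 wants to enter
        private volatile int turn = 0;          // whose turn it is to yield

        void enter(int i) {
            if (i == 0) {
                flag0 = true;
                turn = 1;                      // give way to the other thread
                while (flag1 && turn == 1) { } // busy-wait outside the critical section
            } else {
                flag1 = true;
                turn = 0;
                while (flag0 && turn == 0) { }
            }
        }

        void exit(int i) {
            if (i == 0) flag0 = false;         // leave the critical section
            else        flag1 = false;
        }

        public static void main(String[] args) throws InterruptedException {
            Peterson lock = new Peterson();
            int[] counter = {0};               // the shared variable
            Runnable worker0 = () -> {
                for (int k = 0; k < 100_000; k++) {
                    lock.enter(0); counter[0]++; lock.exit(0);
                }
            };
            Runnable worker1 = () -> {
                for (int k = 0; k < 100_000; k++) {
                    lock.enter(1); counter[0]++; lock.exit(1);
                }
            };
            Thread t0 = new Thread(worker0), t1 = new Thread(worker1);
            t0.start(); t1.start();
            t0.join(); t1.join();
            System.out.println(counter[0]);    // 200000: no updates were lost
        }
    }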

Section 3 – Memory Management in Operating Systems

            As previously mentioned, memory management plays an important role in computer efficiency. Memory management systems are found within operating systems so that executing programs have their own process memory space for full functionality. The size of a process is limited to the size of the computer's physical memory. When a new process is created, the system loads it using a base register, also known as the relocation register. According to Silberschatz et al. (2014), "the value in the relocation register is added to every address generated by a user process at the time the address is sent to memory" (Section 7.1.3). To illustrate, Silberschatz et al. (2014) use an example with a relocation register of 14000 and a new process with the logical address 346; to find the process's memory location, the 346 address is added to the base register of 14000, making the physical address 14346 (Section 7.1.3). Because processes are placed into physical memory locations this way, all programs must be loaded into physical memory before a process can execute.

Diagram to express Virtual Address translation to Physical Address
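The translation itself is a one-line formula, shown here as a Java sketch alongside the textbook's numbers:

    public class RelocationDemo {
        // Relocation-register translation: physical = base + logical.
        static long toPhysical(long relocationRegister, long logicalAddress) {
            return relocationRegister + logicalAddress;
        }

        public static void main(String[] args) {
            // The textbook example: base 14000, logical address 346.
            System.out.println(toPhysical(14000, 346)); // 14346
        }
    }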


                          Once addresses have been created for a process, the process is stored in memory. There are several different types of memory in a computer system, organized into what is known as the memory hierarchy. The memory hierarchy separates a computer's entire storage system into levels based on storage capacity and performance. It is usually drawn as a pyramid, with registers at the top: this is the storage the CPU interacts with most closely, where computation begins, and information is held there only for a short time. Below the registers sits the cache, which generally holds volatile data such as the working data of user processes; when an application in main memory is executed, its working data is placed in the cache. The figure below shows the entire memory hierarchy.

Diagram of Memory Hierarchy




Section 4 – I/O Operations and Mass Storage

Throughout a computer system, there has to be some form of file-system management; this topic can be separated into three sections: mass storage, user and programmer interfacing, and internal data structures and algorithms. The computation itself is done within the processor by the Arithmetic Logic Unit (ALU), with information transferred back and forth over the system buses. Mass storage is how a user accesses the information stored within a computer system; the operating system lets the user move different file types throughout the system, though only within the bounds of the system's storage locations. The main storage locations are flash disks and traditional disks, but operating systems also allow access to other, secondary storage locations: remote storage (cloud/network-based) and devices on an external I/O bus (external disk drives).

The file-system interface is where users and programmers can view and access the data and programs of an operating system. File systems are notable for having two parts: a collection of files storing related data, and a directory structure (Silberschatz et al., 2014). The stored information is known as a file; a file is the operating system's abstraction of the physical storage properties into a logical storage unit. Files are stored on nonvolatile storage devices such as hard drives. A file is also defined as a sequence of bits, bytes, lines, or records whose meaning is defined by the file's creator or user (Silberschatz et al., 2014). The file-system interface allows for creating, reading, writing, repositioning, deleting, and truncating files, and it supports executing a large variety of file types.
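From a programmer's point of view, those file-system operations surface as ordinary library calls. A minimal Java sketch (the file name is just an example):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class FileDemo {
        public static void main(String[] args) throws Exception {
            Path p = Path.of("notes.txt");               // hypothetical file name
            Files.writeString(p, "hello file system\n"); // create + write
            List<String> lines = Files.readAllLines(p);  // read
            System.out.println(lines);
            Files.delete(p);                             // delete
        }
    }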

File-system management also supports I/O functionality, allowing external storage devices to be managed by a computer system. This function lets multiple users access the same I/O devices connected to a system, producing the same functional operations for each: mouse clicks, keyboard input, external storage devices, and so on. External I/O storage devices can also help prevent data loss: if a computer system needs to be reformatted and have all stored information wiped clean, these devices hold accessible backups when a backup cannot be created within the computer's own operating system. They are reliable for quick data transfers from one system to another and as backup storage that can be used to rebuild a system's operating system through the motherboard's BIOS. This reduces the risk of total failure, since an operating-system backup on an external drive can be read as an ISO file, allowing an entire operating system to be rewritten to factory settings or to other custom configurations if the ISO has been built for such use.

            Directory structures help organize folders and files. The common types are single-level directories, two-level directories, tree-structured directories, acyclic-graph directories, and general graph directories, and each functions much as its name states. In a single-level directory, all files are located in the same directory, which is limited to a single user, so each file must have a unique name. In a two-level directory, each user of a computer can create their own directory within the operating system, making it much easier to search for files and folders; these directories allow shared use because files and folders can be indexed through the user entries. Tree-structured directories start at two levels but can extend the structure through branches; subdirectories can be created and full path names are provided, although files can end up in multiple directories, which can make them difficult to locate. An acyclic-graph directory allows directories to share subdirectories and files; a shared file is not a copy, so a change made through one directory is visible through the other. Lastly, the general-graph directory structure allows cycles to occur and lets a directory be created from more than one parent directory. Below is a diagram of the tree-structured directory, the type most commonly found in computer systems for managing the locations of information.

Diagram of Tree-Structured Directory

    The last major operating-system concepts are security and protection. There are many different forms of threats that may impact the user experience or be destructive to the computer system, and there are many ways a user and the operating system can use software and hardware to protect themselves and the system. One very meaningful topic in protection is domains and access rights. The goal of a domain is to specify the resources that a process may need to access. Formally, "domains are a collection of access rights, each of which is an ordered pair <object-name, rights-set>" (Silberschatz et al., 2014). To understand what domains do, consider a domain X that has the access rights read, write, and execute on a file x: a process executed within this domain can read, write, and execute files, but if those are the domain's only access rights, processes executed within it can perform no other operations. Because a system has multiple domains, each with its own set of access rights, there will be cases where domains need to share access rights, and the structure of domains makes that possible. Below is a diagram to help visualize domains and access rights.

Diagram of the principles of Domains
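To make the <object-name, rights-set> idea concrete, here is a toy Java model of a domain; the class names and file names are hypothetical:

    import java.util.EnumSet;
    import java.util.HashMap;
    import java.util.Map;

    // Toy model of a protection domain: a collection of access rights,
    // each an ordered pair <object-name, rights-set>.
    enum Right { READ, WRITE, EXECUTE }

    class Domain {
        private final Map<String, EnumSet<Right>> rights = new HashMap<>();

        void grant(String object, EnumSet<Right> set) {
            rights.put(object, set);
        }

        // A process running in this domain may perform an operation
        // only if the domain holds that right on the object.
        boolean mayAccess(String object, Right r) {
            EnumSet<Right> set = rights.get(object);
            return set != null && set.contains(r);
        }
    }

    public class DomainDemo {
        public static void main(String[] args) {
            Domain x = new Domain();
            x.grant("fileX", EnumSet.of(Right.READ, Right.WRITE, Right.EXECUTE));
            System.out.println(x.mayAccess("fileX", Right.READ));  // true
            System.out.println(x.mayAccess("fileY", Right.WRITE)); // false
        }
    }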


    Another form of protection is language-based protection. Traditionally, protection is enforced by the operating system kernel, which acts as a security agent that inspects and validates each attempt to access a protected resource (Silberschatz et al., 2014). As operating systems have become more advanced and developers provide higher-level user interfaces, the goals of protection have become more refined (Silberschatz et al., 2014). Protection systems are now concerned not only with the identity of a resource and the accesses attempted on it, but also with the functional nature of that access. Protection systems have also begun to extend beyond system-defined functions, such as the standard file-access methods, to include functions that are user-defined.

            One way protection is enforced on domains and objects is through the access matrix. Each entry in the matrix is a set of access rights, drawn from a defined group of rights, and the mechanism defines what a process can do within the domain in which it executes. Below is a diagram that puts the matrix into a more concrete form, showing which access rights belong to processes in each domain.

Diagram of Matrix Access



Security systems are put in place to prevent malicious and unwanted activity that may compromise a system, a network, or even personal information. Security works as intended as long as users use the system correctly and have appropriate access rights to its resources; if all resources within a computer system are accessed appropriately and as intended, we can consider the system to be working accurately and securely. Several methods are used to protect against malicious activity. The first is physical security: any site containing computer systems must be physically secured against unwanted or malicious activity (Silberschatz et al., 2014). Another method is human authentication, which grants authorization only to the appropriate users who need access to certain files, systems, or information. The operating system itself must also have its own security measures to help prevent malicious activity and attacks on the computer system. Another technique to protect systems, programs, and networks is cryptography. According to Silberschatz et al. (2014), "cryptography is used to constrain the potential senders and/ or receivers of a message" (Section 14.4). This lets a computer recognize whether a sent or received message is authentic by using keys; keys are how users encode and decode messages so that only a computer holding the right key can read them. This leads to encryption, the process of converting information or data into a code to prevent unauthorized access. Another strong form of security found in operating systems is the firewall, a software security layer that limits communication to and from a given host; it can, for example, stop malicious software from sending stolen information from a victim's computer to an attacker over the internet.

Concept Map of Security and Protection



 

 

References

Patterson, D. A., & Hennessy, J. L. (2014). Computer organization and design: The hardware/software interface (5th ed.). Retrieved from https://zybooks.zyante.com/#/zybook/jCx8rOUvAL/gettingstarted

 

Silberschatz, A., Galvin, P. B., & Gagne, G. (2014). Operating system concepts essentials (2nd ed.). Retrieved from https://redshelf.com/

 

 

          


Monday, June 21, 2021

Tech Topic Connections

    Blogs are one of the most ideal ways to communicate with the world when sharing experiences and educating others. Throughout this course, we have been educated in the roots of information technology. Drawing on everything learned in this course, the focus of this post will be the chosen tech topic of web and mobile applications. As the world steadily progresses toward a more mobile existence, applications are being designed to function on mobile devices for ease of access; whether these applications are built strictly as apps or designed for a web browser, their functionality resembles what an application does on a computer. Mobile devices have become a gateway that lets users run, on the go, applications that were originally built for computers. Technology has advanced so much in the last century that computers have become smaller and smaller and far more accessible; Vahid & Lysecky (2017) explain, "1940's computers, with thousands of baseball-sized switches, occupied entire rooms" (Section 1.1, para. 2). The original computers were massive, yet over time the technology has advanced to the point that these devices fit in a hand. Without knowing the history of computer technology, one would never realize how far we have come in shrinking such a large system down to such a small device. And that does not even scratch the surface of how the components in these devices have changed over the years.

            The hardware advancements alone have been dramatic: from the 1940's onward, everything that allowed a computer to function properly has changed. As Vahid & Lysecky (2017) mention, "by the 1970's, an entire computer could fit on a one-coin sized device known as a computer chip" (Section 1.1, para. 2). How quickly computer systems went from the size of a room to the size of a coin is incredible, and that does not even count the other parts of a computer developed over the years. Another critical piece of hardware that has kept advancing in the mobile world is the processor. Processors, as Vahid & Lysecky (2017) explain, are "hardware that runs program instructions and controls other hardware" (Section 2.1, para. 1). These processing units drive a computer through numerous tasks during a session of use and control how quickly tasks can be completed or accessed; the original processors were slow, but they have rapidly advanced to much faster operating speeds. Mobile devices have gained computing power because of these processor enhancements: Reddi et al. (2018) explain that "a smartphone today has enough computing power to be on par with the fastest supercomputers from the 1990s" (para. 1). That mobile processors had already advanced this far a few years ago shows how steadily technology has been trending upward. However, mobile devices do not run on hardware alone; software design also plays a critical role in how devices run.

           

            Hardware is designed to coexist and work in parallel with software, which is designed through programming. Programming languages are what allow hardware to interact with the software in programs and operating systems; this gives users more control over their systems and lets them run programs. Mobile devices use programming practices similar to computers to run their hardware, and all programs are written in some programming language. The two platforms share a common language in Java: Java is the main language for Android mobile development, while on computers Java is widely used for applications. Software programming is the framework on which all technology runs, and it is the most critical link between the hardware and the device.

            As mobile devices become accepted as the main technological devices people use, the applications developed for them are the main reason why. Mobile applications are usually known for their compatibility between computer systems and mobile devices; many applications are developed so that people have access to their email, important documents, and even, in some cases, video games. Some people use their mobile devices to work from anywhere; mobile devices let users view different parts of their businesses, and some companies use them to view their databases. As Li et al. (2020) mention, "When users allow their data to be collected via software applications and mobile devices, users need to have some level of trust and control over their data" (p. 1). That level of trust in mobile devices managing business databases means a lot. While users trust their mobile devices to manage their databases, these devices are protected from security risks by features that programmers build into their apps. One such feature is that when the user switches applications on the device, the program's security takes effect by logging the user out of the application. Another way apps are secured is through account authentication, which asks for private credentials that only the user would know, protecting the account from outsiders acting without the device owner's permission. Other designs include 2-step authentication, which requires an outside source to approve the login to the application, whether a text to a phone number, an email, or a login from a computer. These types of authentication require the owner of the device to have the proper credentials to log in, and then additional credentials to approve the login request. These are some of the ways applications are developed to secure the user in the application they are using.

 

References


Li, Z., Peng, B., & Weng, C. (2020). XeFlow: Streamlining inter-processor pipeline execution for the discrete CPU-GPU platform. IEEE Transactions on Computers, 69(6), 819–831. https://doi-org.proxy-library.ashford.edu/10.1109/TC.2020.2968302

Reddi, V. J., Yoon, H., & Knies, A. (2018). Two Billion Devices and Counting. IEEE Micro, 38(1), 6–21.

Vahid, F., & Lysecky, S. (2017). Computing technology for all. Retrieved from zybooks.zyante.com/

Computers in the Workplace

    The specific industry that I will be discussing is the manufacturing industry, based on my current knowledge of technology in the computer world. At my current job in manufacturing, we have our own IT department that helps maintain our company's servers, which hold all of our private documents, designs, and work instructions. Our company keeps all of its important documents private, where only certain individuals, based on their job codes, are able to see them. The entire database is secured through a private network established at the company itself, where internet access from outside sources is largely disabled. Given that within the last 50 years there has apparently not been any breach of our network, it seems our company has a very established security team. The IT department also has an additional team that supports the entire company when technical issues occur, whether hardware or operating-system related. They also write scripts that control how the computers provided to the manufacturing teams behave, letting users access what they need and limiting startup programs to what each team requires.




    Since I currently work mostly on the assembly side of the manufacturing business, I feel it is in my best interest to describe how the IT department supports this team. When the servers that bring up our work instructions are down, they try to find a fast workaround that lets us continue working while they address the larger issue; this generally only happens when there is a hardware issue or the network for the assembly line is down. When the network is down, they generally loan us laptops that connect to different servers until they are able to bring the servers for our manufacturing line back up. If it is not a network or hardware issue, they will screen-share to see what the problem is and try to troubleshoot it remotely. When the issue does turn out to be hardware, they place an order for a replacement computer and provide us a loaner in the meantime that can handle our daily tasks.



    With basic technological knowledge, some of the issues that occur in manufacturing can be troubleshot without contacting the IT department; sometimes it is just a connectivity issue with the computer itself, for example, an ethernet cable not being connected, or a display adapter that came loose and caused the monitor to stop working. It is important to have some general knowledge of computer technology, as this creates a self-reliant atmosphere where reliance on another team is not always necessary, preventing more "down-time" from accruing. With the way technology is evolving, I would imagine that in the next ten years issues with programs, hardware, and servers will become less likely as our IT department builds more established servers; as computer technology continues to advance, I expect stability to increase at the server level. I would also like to believe that manufacturing businesses will replace desktop computers with tablets, depending on the business team. Tablets are becoming very powerful devices with many options, and their touch-screen capability and mobility make them better suited to assembly work, whereas a monitor is only mobile to a certain extent. With hardware becoming more revolutionized, task operations would transition more smoothly from one program to the next; perhaps data would be stored on the device itself rather than on a huge central server that could be breached, since data stored on a device with limited networking would be more secure. These are just some ideas based on what I have seen at my company and what the IT department does to support the business.

Network Security


    As technology evolves, personal information is increasingly used, saved, and accessed on technological devices. This information is very personal, and if it is stolen it can cause distress and many other issues in an individual's life. Most of the time a person's stolen information is sold on the dark web, the part of the internet where users are anonymous and untraceable, which enables illegal activities: impersonation of identities, stolen credit and debit card information, stolen bank account information, and many other devastating outcomes once a system is hacked. A hack, as Vahid & Lysecky (2017) explain, is "a malicious breach done by unauthorized access" (Section 8.1, para. 1), and hacks are among the leading causes of personal information being stolen. One kind of attack builds on the same ping command we use to check whether a connection is available between a computer and a server: overloading a server with pings to deny its service, known as a Denial of Service (DoS) attack or, when launched from many machines at once, a Distributed Denial of Service (DDoS) attack. In this attack, the targeted server is flooded with an overwhelming number of ping requests until it can no longer respond, effectively shutting the victim's network down. The likelihood of a single person successfully shutting down a network or computer this way is low; with a botnet, however, the success rate increases significantly. A botnet is a large number of hijacked machines controlled remotely by the attacker; the attacker commands all of these systems at once, and directing all of their ping traffic at one computer or server would most likely cause it to fail and shut down. I will also cover two other attacks that pose security risks for users and businesses: phishing and password cracking.

    Different technology devices are vulnerable to all kinds of threats. Vahid & Lysecky (2017) explain that "phishing is an internet scam that baits a user to share sensitive information like a password or credit card number" (Section 8.4, para. 1). These attacks deserve close attention, as they can lead to devastating losses of financial information and other sensitive data such as social security numbers. Technology is so vulnerable to this threat because many of these scams arrive as an email crafted to look like it comes from a website where the user has data stored; prime examples are websites like Netflix or Amazon. An email sent to a user's personal address asks them to click a link leading to a site that looks like Netflix's official website, and when the user enters a username and password, the information actually goes to the creator of the falsified website, who steals the account. That is on the lower tier of these scams; the same method creates far larger problems when the email impersonates a bank and the user enters their bank account information and password. As Jensen et al. (2017) note, "the U.S. Federal Bureau of Investigation posted a warning in April 2016 that it received reports from more than 17,000 victims, which accounted for $2.3 billion in losses". This expresses how dangerous phishing can be: it can result in severe financial loss that most likely can never be traced back to the original developer of the phishing links or programs. A few recommendations help prevent such breaches of privacy: do not click links in unsolicited emails sent to your personal address, and never submit personal information on a website that is not well known or reputable, as such sites can themselves be scams built to steal your information. This leads to another threat, known as password cracking.

    Password cracking is a leading cause of user accounts being compromised and accessed, notoriously for email and other personal accounts. Password-cracking software is designed to recover a lost password, which may not sound malicious; however, it can be used to access accounts without the user's authorization, leading down a path of personal information being stolen through email and other means. Shi et al. (2021) note that "strong passwords are always hard to remember, so it is not surprising that users often create easy-to-guess passwords for convenience, which puts password-based authentication systems in a high-risk situation" (para. 1). This indicates that passwords should be hard to guess, even against brute-force software, which keeps trying random character combinations against an account until it eventually gains access. It is a time-consuming process, but eventually access is granted, and that is when the attacker begins their illegal activity. There are a few ways to prevent this. One recommendation is to create a password that is entirely random and keep a copy of it somewhere safe, perhaps on a piece of paper hidden away. Another is to change your password frequently; that way, even a program using brute force to establish a connection to the account will keep failing.

    Having established what phishing and password cracking are, computer threats still lurk, and no one is entirely safe from them; however, with the recommendations presented here, the chances of being phished or having a password cracked should be minimal. There will always be some form of computer breach no matter how hard a user tries to protect themselves, but practicing good fundamental cybersecurity will help prevent the negative outcomes that may arise, and will help keep personal information from ending up on the dark web or being sold to others for illegal purposes.




References

Himawan Pramaditya. (2017). Brute Force Password Cracking Dengan Menggunakan Graphic Processing Power. Jurnal Teknologi Dan Manajemen Informatika, 2(1). https://doi-org.proxy-library.ashford.edu/10.26905/jtmi.v2i1.615

Jensen, M. L., Dinger, M., Wright, R. T., & Thatcher, J. B. (2017). Training to Mitigate Phishing Attacks Using Mindfulness Techniques. Journal of Management Information Systems, 34(2), 597–626.

Shi, R., Zhou, Y., Li, Y., & Han, W. (2021). Understanding Offline Password-Cracking Methods: A Large-Scale Empirical Study. Security & Communication Networks, 1–16.

Thursday, June 17, 2021

 What are Ping and Traceroute Commands


    Ping and traceroute are two commands run from the command prompt of an operating system. They are used to identify whether your computer can reach another server and whether a connection can be established. They also show where along the route a fault may exist between the two endpoints and give a rough picture of the distance between them.
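A reachability check in the spirit of ping can even be done from Java. This is only an approximation of the real command, since isReachable uses an ICMP echo when it has the privileges and falls back to a TCP echo otherwise (the host name is just an example):

    import java.net.InetAddress;

    public class PingDemo {
        public static void main(String[] args) throws Exception {
            InetAddress host = InetAddress.getByName("google.com");
            long start = System.currentTimeMillis();
            boolean reachable = host.isReachable(5000); // 5-second timeout
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(host + " reachable=" + reachable
                    + " in " + elapsed + " ms");
        }
    }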



Assessment of Ping and Traceroute Commands

    After reviewing the pings and traceroutes, we can identify from the results how close the servers are to our internet service provider's servers. When I used the ping command on the Google website, it took about 35 milliseconds to transmit the 32 bytes of data from my computer to their servers. When I pinged the website weblio.jp, the same 32-byte transmission averaged 321 milliseconds; the communication between my machine and their servers took evidently longer, though the transmission itself seemed stable. I found another website, scielo.br, and when I pinged this server the transmission averaged 739 milliseconds, significantly higher than the previous ping commands, with a maximum of 1726 milliseconds. This suggested a possible problem with the connection, because the minimum time during the same transmission was 280 milliseconds, which makes me believe there were two possible kinds of interruption in my communication. One possibility is a disruption in my internet connection causing a longer delay sending out the data; the other is a disruption at the website's receiving end or in its response time. These are only potential explanations; with the ping command alone there is no way to know for certain why the transmission delays varied so widely.



Ping Results:
Google.com – Packets Sent: 4, Received: 4, Lost: 0, Approximate round trip times – Min. 35ms, Max. 36ms, Average: 35ms
Weblio.jp – Packets Sent: 4, Received: 4, Lost: 0, Approximate round trip times – Min. 289ms, Max. 339ms, Average: 321ms
Scielo.br – Packets Sent: 4, Received: 4, Lost: 0, Approximate round trip times – Min. 280ms, Max. 1726ms, Average: 739ms





Traceroute Results:
Google.com – Over a maximum of 30 hops, range – 1ms – 34ms; at hops 5, 6, and 7 the ping rose to 83-105ms, then packet loss at hop 8.
Weblio.jp – Over a maximum of 30 hops, range – 1ms – 218ms; hops 9, 11, 12, 13, and 16 all ran about 100-220ms, doubling the latency by hop 16. Hops 10, 14, and 15 had packet losses.
Scielo.br – Over a maximum of 30 hops, range – 1ms – 171ms; hop 10 had a packet loss, and at hops 16, 17, and 18 the ping rose to 170ms.



    During the traceroute command for Google's website, we can see the entire route from the starting location (my home), where the data is transmitted, to the end. Along the route, my ping rose by about 100ms from its average in the late 20s to mid-30s; hops 6 and 7 were where the transmission peaked, and hop 8 timed out. During the trace to weblio.jp, the overall ping gradually increased from the start of the route; by hop 9 the ping had doubled, which makes me believe this is where the route crossed into a different geographic region. At hop 10 the transmission hit a packet loss, the ping then doubled again to 200ms, more packet losses occurred at hops 14 and 15, and the trace ended at hop 16 at 220ms. With the scielo.br website, the ping again rose gradually from start to finish; there was a single packet loss at hop 10, and the ping continued to climb afterward, reaching 173ms at hops 16, 17, and 18, where the route ended.



    After reviewing the traceroute round-trip times, one could assume that the lower pings, averaging 20-60ms, stay within the country, while routes leaving the country nearly double or fully double the round-trip time. Realistically, after analyzing the difference between each website, we can see that geographic location certainly impacts round-trip times: the farther away the server, the higher the ping.



    Internet connection problems can be identified using the ping and traceroute commands. When pinging a server, if there is a problem with the host server, we might see no responses from it at all. Traceroute offers another way to spot a networking issue: it establishes how long a packet of data takes to reach a host and return, which can be used to judge whether the connection has a problem. In most cases, based on our experimental routes, anything within the country we are pinging from stays below 100ms; if the connection exceeds 100ms, we could probably associate that with a problem. This rule of thumb only applies within the country, since geographic location increases the time for data to be sent and received, adding latency. The ping command may also time out if the host we are pinging has a network that is not set up to accept the connection, or if a firewall is blocking the connection.