The fourth article of my series is an overview of Input/Output devices for the computer. Without input and output, a CPU, RAM and drives cannot get data or display it for use. I/O devices are what make a CPU useful to humans.
How I/O Works
The core components of modern microcomputers are a CPU, RAM and a drive of some sort. However, without some method of reading and writing data, these components can only process items contained within them. For some control systems for manufacturing processes, or for firmware embedded in devices like cars, no human-readable I/O is needed. But even in those cases, there is some way to hook up devices to allow input of new data or to read the results of computations by the CPU. Without I/O and human interaction, computers aren't very useful. They become big bricks that are very good at crunching numbers and saving them.
The early methods of input were based on real world devices like the teletype and typewriter. A teletype terminal allowed typing in messages and sending them to others. Teletype messages were printed on a roll of paper as they were typed or received by the unit. It was a common interface that did not require new training on how to use it. A typewriter prints out text based on keys pressed. The output is the text printed on a piece of paper.
Initial output was printed on paper. Computer programs were run one at a time and used to perform calculations. Printing out the results allowed human analysis so that the results could be adjusted in the future. A teletype machine combined input and output, which made it easy to use for early communication with computers.
With microcomputers, a monitor was added for output. With the addition of a monitor, a more interactive, real-time experience could occur with a computer. On a mainframe, a program was input on cards and the results printed on paper. The results were reviewed and analyzed so adjustments could be made to the software code on the cards. Once the cards were updated, the software was run again. If the mainframe was busy, you might have to wait in line to submit your program and find out the updated results. On a microcomputer, software could be run, results obtained and the software adjusted on the spot.
This led to the development of formatted software forms for input of data and output of results. The initial forms were crude by today's standards, simply dumping data on the screen without formatting. The invention of VisiCalc, a predecessor to Excel, was a huge leap for the computer. It emulated accounting ledgers in a format familiar to accountants, which made it easier for non-programmers to use computers.
The original interfaces used what is called a command line interface. A command line interface accepts only typed text drawn from a fixed list of commands. The user had to memorize many commands or keep a reference book handy in order to interact with the computer. Mistyping a command could result in an error, ranging from the software not running to wiping out software on a drive. Eventually software was written that performed error checking and protected the user from mistakes, but even then a mistyped bit of information could cause data loss.
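The idea behind that error checking can be sketched in a few lines of modern Python. This is a toy illustration, not any historical shell; the command names here are invented for the example. The interpreter compares what the user typed against a fixed list of known commands and reports a mistake instead of blindly acting on it.

```python
# A minimal sketch of a command line interpreter with error checking.
# The command names below are hypothetical, chosen for illustration.

COMMANDS = {
    "list": lambda: "listing files...",
    "run":  lambda: "running program...",
    "quit": lambda: "goodbye",
}

def interpret(line):
    """Look up a typed command; report an error rather than acting on a typo."""
    command = line.strip().lower()
    if command in COMMANDS:
        return COMMANDS[command]()
    return "error: unknown command '" + command + "'"

print(interpret("list"))   # a recognized command runs: listing files...
print(interpret("lost"))   # a typo is caught and reported, not executed
```

A real early interpreter was far less forgiving: on some systems an unrecognized or mistyped command could still reach the disk, which is why this kind of validation layer mattered.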
Through the years, input and output methods have changed. Simple text screens eventually became Graphical User Interfaces (GUIs), as seen on current Windows and Apple computers and on smartphones. The intent was to mimic the real world by creating a virtual desktop that contained folders and files. The graphics emulated real-world items, such as a Recycle Bin or a manila-folder icon for a folder storing files on a drive.
The input changed from typing commands to moving a cursor on the screen and selecting objects to move, open or delete them. A keyboard can be used to move the cursor; however, a dedicated device, the mouse, made moving the cursor on the screen easier. The mouse is just one solution for moving items on the screen. Other methods include pointing devices such as a joystick or digital pen, or a touch screen used to move the cursor or items directly.
New methods of input and output continue to be added. At this time, microphones can be used for voice commands on the computer. Voice command technology has gradually evolved so that it is reliable and useful for many people. Speech from the computer is another option for output, reading out text or describing pictures on the screen. These methods provide additional access for the disabled, the blind, or those who have difficulty manipulating a keyboard and mouse. In the future, input may occur through virtual reality devices or sensors that read our motions and generate output based on them.
The Impacts of I/O
We take for granted the many methods of I/O for computers, yet these methods shape how we use these tools. They also shape the design of our software. Using a keyboard requires some manual dexterity and knowledge to find and press the appropriate keys. With different keyboards for different languages, the underlying Operating System (OS) must support multiple layouts for typed input. If a keyboard is not mapped correctly, the user will not be able to type meaningful content. For languages like Chinese, there is the challenge of mapping the characters of the language to something that can be typed and that the computer will recognize.
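Layout mapping can be illustrated with a small Python sketch. Real OS keymaps are much larger tables, but the principle is the same: the keyboard reports physical key positions, and the loaded layout decides which character each position produces. The position names below are invented for the example; the Q/A and W/Z swap between US QWERTY and French AZERTY layouts is real.

```python
# A toy illustration of keyboard layout mapping: the same physical
# key positions produce different characters depending on the layout
# the operating system has loaded. Position names are hypothetical.

POSITIONS = ["key_1", "key_2", "key_3"]  # three keys on the top row

LAYOUTS = {
    # On a US QWERTY keyboard these positions type q, w, e...
    "us_qwerty": {"key_1": "q", "key_2": "w", "key_3": "e"},
    # ...but on a French AZERTY keyboard the same keys type a, z, e.
    "fr_azerty": {"key_1": "a", "key_2": "z", "key_3": "e"},
}

def translate(layout, positions):
    """Map a sequence of physical key presses to characters."""
    keymap = LAYOUTS[layout]
    return "".join(keymap[p] for p in positions)

print(translate("us_qwerty", POSITIONS))  # qwe
print(translate("fr_azerty", POSITIONS))  # aze
```

If the OS loads the wrong table, every keystroke still "works" mechanically, but the characters on screen are not the ones printed on the keys, which is exactly the mismapping problem described above.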
The use of a keyboard and monitor requires a user to be literate in the language used on the computer. With the use of a microphone and speaker, or graphics and a touch screen, literacy is less important. With a command line interface, memorization or reference material is required for operating the computer. With voice commands, the user can ask questions on how to operate the computer.
If different methods of I/O had been developed, our experience with computers would be different. For example, suppose there had been devices that required twisting a device with both hands for input. What if the input had originally been based on pictures that were selected, or on reading our hand motions as we moved things around on our desk while items moved around on the computer? This may sound unlikely, but there have been demonstrations of virtual keyboards that read a user's finger motions as they type on a keyboard made of light. The Xbox and the Wii used controllers that reacted to the user's movement to interact with the software. Virtual reality goggles and gloves are opening up new ways to interface with computers and software.
We are used to interacting with physical objects in our environment, such as using hand tools or kitchen utensils to manipulate objects. With a computer, the world is virtual, a make-believe world that we interact with to create text, artwork, plans, and designs that are only emulations of the physical world. Our choice of I/O devices influences how we interact with the virtual world. When we only have a keyboard and text displayed, the focus is on text files and commands. With the development of graphics and real-time interaction with them, we can design virtual worlds that contain our designs. These designs can be printed and interacted with in the real world. There are 3D printers that allow us to design objects in the virtual world and have them printed in ours.
In the end, all of this I/O rests on mathematical models that translate our actions into data the computer can process. With a broad range of I/O methods, more people can interact with these virtual worlds. As more people interact with worlds built on such a complex structure, they run into unexpected limitations and bugs. While the appearance of the real world is simulated, it is built from simple low-level commands, and the emulations we desire require a large amount of complexity on top of them.
In the next article I'll provide some explanation of the base languages, machine language and assembly language, on which software is built.
Picture taken by J.T. Harpster, originally used on 21 December 2017.