As a civil engineer, I feel my education didn't focus enough on the fundamentals of programming.
As an electrical engineer, I KNOW my education didn't focus enough on the fundamentals of programming. Apparently that kind of instruction was reserved for Computer Science majors. The only
required course for me was one semester of FORTRAN. BASIC was available as an interpreted, time-shared program on university terminals, but you were on your own as far as learning how to program with it. The computer sci weenies pooh-poohed BASIC as an "unstructured, toy" language. That didn't stop a whole army of hackers from writing games in it, or techs and engineers from using it to solve real math problems. Even Microsoft gave it its blessing by eventually releasing Visual Basic 6.0 for Windows.
But it seemed to me that most of my FORTRAN course was devoted to learning how to format a line of code properly so the FORTRAN compiler would accept it instead of barfing it back and refusing to continue. This was especially troublesome with input and output statements. Even if your program compiled correctly and executed, there was always the possibility of a logical or mathematical error embedded in the code that would cause the operating system to terminate execution. Stupid stuff like infinite loops and division by zero was always caught and quickly dispatched to a dump file, but you didn't find out about your mistakes until late the next day, or sometimes the next week.
Clearly this was no way to run a railroad or a human-friendly computer science department!
Neither compiled FORTRAN code nor interpreted BASIC code would accept any input or produce any output except on computer terminals, typically ASR33 teletype machines. You could theoretically punch BASIC programs onto paper tape on the teletype machines for later on-line execution, but I discovered it was easier to use an on-line text editor and save the program as a text file instead.
Then I found out that the FORTRAN compiler, which normally ran as a batch job from a punched card deck, could be invoked (late at night) from my ASR33 computer terminal, with program input taken from a text file saved on hard disk (precious storage in the 1960s!). This discovery eliminated many trips to a common room across campus from where I worked, where a half-dozen or so Hollerith card keypunch machines were available for student use. Because FORTRAN was a required course for an engineering degree, there was usually a long line of students waiting for the few available machines. But no more submitting Hollerith decks for me. I programmed from the ASR33 in my office, invoked the FORTRAN compiler, and directed output back to my ASR33. Wash, rinse, repeat until everything was debugged and copacetic. I then directed the program output to the line printer located near the mainframe, for compatibility with my classmates' submissions, and punched the (now debugged) source card deck there too. The only problem with that was the card punch machine did not print anything across the top of the cards! My instructor wouldn't accept that. He claimed not to be able to read EBCDIC punch code and wanted to see the translation printed neatly across the top of each card in my program deck.
So I made friends with one of the night-shift computer "priests" (in actuality, a grad student) and arranged to have my deck of cards sent through a card reader attached to a small IBM 360 mainframe generally reserved for computer science student use. This card reader had a ribbon and could read the EBCDIC (Extended Binary Coded Decimal Interchange Code) keypunch code and translate it into human-readable dot-matrix characters, printed along the top of each card "on the fly" as the card was being read. So in the end, although I was able to program and debug from my office ASR33 terminal, I still had to turn in a Hollerith card deck with my program, as well as a printed copy of the run results.
In the Research Institute, where I worked as an electronics technician, we had two keypunch machines that the administrative staff used for some arcane purpose. Since they all went home promptly at 4:00 or 5:00 PM, I was always able to get at one machine to punch my FORTRAN deck before I learned about the on-line work-around. These keypunch machines also had a typewriter ribbon that printed, along the top edge of each card, what the EBCDIC punched code underneath represented. That meant a dropped box of cards that became inadvertently shuffled could be put back into the correct order fairly quickly. It also made it fairly easy to swap out cards that were incorrectly coded. So the drill was this: get a class assignment to program something; write out the program on paper; take the paper across campus, along with a box of blank Hollerith cards, and punch the program onto the cards; submit the card deck for execution as part of a batch FORTRAN job that would run later that evening, when the time-sharing network wasn't very busy. Come back the next afternoon, or the next week, to see the results of your effort. Blech!
About the time I was learning how much I hated FORTRAN, Intel was developing microprocessors for Japanese pocket calculators, beginning with the 4004 and, much later, the 4040. I got into the game late, with a project to embed a microprocessor in an x-ray-excited, photon-absorbing, electron-emitting spectrometer used for Electron Spectroscopy for Chemical Analysis (ESCA). This machine basically illuminated the surface of a specimen in vacuum with monochromatic x-rays and measured the energy of the photo-electrons that were emitted, that energy being characteristic of the chemical signature of whatever absorbed the x-ray photon.
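In equation form, that last bit is just the standard photoelectric relation behind ESCA/XPS (the symbols below are the textbook ones, nothing specific to our machine): the kinetic energy you measure is what's left of the photon energy after the electron pays its binding-energy "exit fee",

    E_{kinetic} = h\nu - E_{binding} - \phi

where h\nu is the known, monochromatic x-ray photon energy, E_{binding} is the electron's binding energy in the specimen (the chemical signature you're after), and \phi is the spectrometer's work-function correction.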
This ESCA was an all-analog machine that the Surface Analysis Laboratory, right across the hall from the Electronics Laboratory where I worked, wanted to bring into the digital world for data acquisition and analysis by computer, instead of lengthy, time-consuming hand interpretation of strip-chart ink recordings. The Intel 8080 was so new then that the vendor who sold us the system had the CPU on its own dedicated card, as it did with the static random access memory cards (1k bytes each!) and the peripheral controller ICs, which were NOT microprocessors like the PICs that I didn't even know existed. The entire "embedded" microprocessor was contained in a nineteen-inch-wide by six-inch-high rack full of cards, along with separate power supplies. A far cry from the way it is done today, but pretty much "state of the art" for our little electronics laboratory.
Anyhoo, there was very little RAM (Random Access Memory) available for program storage and execution back in those days, although by the mid-1970s dynamic RAM (DRAM) was making major inroads and eventually replaced static RAM for everything but the highest-speed applications. One result of this shortage of memory was that I had to learn to program in 8080 assembly language to use what little RAM I had as efficiently as possible. And I had no idea what that was (assembly language) or how to use it (to write programs). So I signed up for a computer science course titled "IBM 360 Programming in Assembly Language". I really needed to learn how to get close to the hardware, "banging on the bits" as it later came to be known: turn things on and off at the bit level, like motors and lights and "stuff like that". But the course was aimed at very senior computer science majors who wanted to learn how to program mainframes using macro assembly language constructs, and mostly canned macros at that: little program snippets written by systems analysts for systems programmers.
Not exactly what I was looking for, but Intel provided plenty of reading material and I finally got the hang of it, and even learned what a macro was and when to use one (or a few of them). I even learned how to read machine-executable code, which came in handy when IBM released the assembly source code for their BIOS (Basic Input Output System) used to "boot up" their personal computers. That decision to "open source" the BIOS was truly a godsend for the international hacker community. So-called "IBM-compatible clones" sprang up like dandelions on a suburban spring lawn within a year of the release date of the first genuine IBM Personal Computer! My very first home computer was one of these clones, and all the many that followed used the same building paradigm. Many years later I built my last computer, based on an Intel Core i7 CPU. It was of course immediately obsolete the day I finished it and turned it on, but it works "gud enuf" for what this engineer needs.
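For readers who never had to bang bits: here is a minimal sketch of both ideas, "turning things on and off at the bit level" and "a macro is a named snippet expanded inline", written in C since that is the lingua franca on today's PICs. The port and the bit assignments are made up for illustration, nothing from the original 8080 project; on real hardware the port would be a volatile memory-mapped register, but a plain variable lets the sketch compile and run anywhere.

    #include <stdio.h>
    #include <stdint.h>

    /* Macros: named snippets the preprocessor expands inline wherever
       they are invoked -- no call, no stack frame, just pasted text. */
    #define SET_BIT(reg, n)  ((reg) |=  (uint8_t)(1u << (n)))  /* force bit n to 1 */
    #define CLR_BIT(reg, n)  ((reg) &= ~(uint8_t)(1u << (n)))  /* force bit n to 0 */

    /* Hypothetical 8-bit output port. On a real PIC or 8080 system this
       would be a volatile register at a fixed address; a plain variable
       keeps the sketch portable. */
    static uint8_t port = 0x00;

    enum { MOTOR = 0, LIGHT = 3 };   /* made-up bit assignments */

    int main(void)
    {
        SET_BIT(port, MOTOR);            /* motor on:  port |= 0x01 */
        SET_BIT(port, LIGHT);            /* light on:  port |= 0x08 */
        printf("port = 0x%02X\n", port); /* prints port = 0x09 */

        CLR_BIT(port, MOTOR);            /* motor off: port &= ~0x01 */
        printf("port = 0x%02X\n", port); /* prints port = 0x08 */
        return 0;
    }

The 8080 assembly version was the same idea with ORA and ANA instructions against an output port; the macros just gave the repeated snippets a name.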
So now I've pretty much regressed back to the early days, when TTL (Transistor-Transistor Logic) was King and you hooked up a few dozen TTL ICs to do some logical thing based on a few inputs and probably even fewer outputs. The way we would do it today is much better, for two reasons. First: the old-school method required that your design be 100% correct, or else some usually expensive hardware changes would be necessary. Second: if the customer decided to add or subtract "features" of a completed design... some hardware changes would be necessary. And those were guaranteed to be expensive.
With one or a few PICs in the mix, you get to keep most of the hardware and make almost any required changes in software. And all without the need for gigabytes of RAM, terabytes of rotating disk storage, or even an Internet connection (unless you really want and need one). Of course PICs keep getting bigger and more powerful with each passing year, the older versions being replaced by newer and better ones, so you still have to swim like hell if you want to keep up. Or sit back and let the next generation of engineers and technicians take over the major swimming, while you just dip your toe into the water once in a while to see how it feels...
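To make that concrete, here is a hedged sketch of how a fistful of TTL gates collapses into firmware. The truth table below (a 2-of-3 majority vote) is invented for illustration, not any actual customer job: the combinational logic becomes a lookup table in C, and a "feature change" becomes an edit to that table plus a reflash, instead of a soldering iron and a new wire-wrap.

    #include <stdio.h>
    #include <stdint.h>

    /* Three inputs (say, sensors) in bits 0..2, one output bit.
       In the TTL days this truth table was a handful of soldered-in
       gates; in firmware it is just data. */
    static const uint8_t truth_table[8] = {
    /* inputs:  000 001 010 011 100 101 110 111 */
                0,  0,  0,  1,  0,  1,  1,  1   /* 2-of-3 majority vote */
    };

    /* On a real PIC these would be port reads and writes; parameters
       and return values keep the sketch runnable anywhere. */
    static uint8_t logic(uint8_t inputs)
    {
        return truth_table[inputs & 0x07];
    }

    int main(void)
    {
        for (uint8_t in = 0; in < 8; in++)
            printf("inputs=%d%d%d -> output=%d\n",
                   (in >> 2) & 1, (in >> 1) & 1, in & 1, logic(in));
        return 0;
    }

When the customer decides the output should also assert on input pattern 010, you change one byte in the table. Try that with a board full of 7400s.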
The mainframe computer I first learned to "program" on was an RCA Spectra 70 with a time-sharing operating system, intended by RCA to fill a perceived niche in the IBM line of mainframe computers available at the time. It was built and sold to another (unidentified) school that decided not to accept it, for whatever reasons. RCA was therefore stuck with big iron that they couldn't move, until the University of Dayton school of computer science expressed an interest. RCA practically fell all over themselves selling the Spectra 70 to UD (at a loss, I heard), along with a coterie of engineers and techs to keep it running. By the time I graduated UD in 1978, the writing was on the wall for anyone who cared to read it: mainframes were obsolete, expensive dinosaurs. Personal computers eventually overtook everything except Cray supercomputers in performance, and beat virtually everything in terms of price (bang for the buck). I have heard that cooperative networks of personal computers can even out-perform a Cray.
So what's the "next big thing", you (didn't) ask? Implantable, tissue-compatible bionics... the conjoining of living cells with nano-machines designed to augment or replace human organs. Everyone speculates that means nanobots: little cell-sized robots programmed to perform a specific task, like removing the plaque from your clogged arteries without damaging the underlying tissue. That may happen eventually, but something simpler and more mundane is more likely: an artificial kidney or pancreas perhaps, or maybe a replacement for a damaged or detached retina. Someone who visits here at Electronics Point is working on an artificial muscle that will interface to the nervous system. Such a thing has all sorts of applications in prosthetic appliances, but even better applications for robots with the flexibility and dexterity of human beings. Send a few hundred of those robots ahead to build habitats for humanity on planets the human race hasn't had time to terraform yet. Now that would be a civil engineering project for an adventurous sort to lead! Can't just let that many robots run around unsupervised, ya know? They might get ideas and want a planet of their own... for their own kind.
So, have some fun while practicing engineering. Cross over a little bit into electronics. Never stop learning. Seek new adventures beyond the next visible horizon. Live long and prosper! Buy and use a PIC for FUN!