While the Twin Cities (Minneapolis & St. Paul) and Philadelphia were the birthplaces of the computer industry, Silicon Valley became the incubator for integrated circuits. Silicon Valley also became the ecosystem which propelled the industry into continuous growth with an endless variety of innovative uses for these integrated circuits. This creative ecosystem, a vast network combining imagination, technical skills, and risk-taking venture capital, became the envy of the world.
A couple of decades ago, Fast Company, a popular technology-oriented magazine, did an issue containing interviews with the diverse Silicon Valley leadership group covering all the component parts of this highly successful ecosystem. The Editor of the magazine became intrigued with an undercurrent of concern that he detected as a common thread across this diverse group of very successful people. He took the opportunity to describe this concern in an article at the end of the issue.
He described this concern as follows: “A good many of our interviewees made comments related to this: ‘It is bothersome that we have created a society here in Silicon Valley where our children could not afford to live – unless we shared some of our wealth with them.’”
Silicon Valley is now one of the most expensive places in the world to live; this unintended consequence of their success continues to this day.
State of the Art Operating System (OS)
One of my assignments with Sperry Univac was to develop a state-of-the-art operating system to support a new line of mainframe computers which would position the company in the computing world we could see coming.
These computers would replace Univac’s line of 1100 computers with a new PL1-based hardware architecture.
Univac was ‘all in’ on this new product line, so I was given carte blanche to recruit a design and implementation team from anywhere in the company.
At one time Univac had produced and supported 14 different operating systems. While the number we could afford to support over time diminished steadily, this still meant that we had a plentiful supply of OS design team expertise. I ended up with at least 5 ‘lead’ designers and others who had played key roles with competitive OS’s. We had key people from the 1100 Exec 8, 494 Omega, 418 Exec 1 & Exec 2, LDC RTOS, and Virtual Memory IBM compatible systems.
This team was charged to create state-of-the-art: executive hardware management software, emulation compatibility for existing 1100 hardware customers, emulation compatibility for 9000 hardware customers, emulation for IBM hardware compatibility, batch processing for complex operational tasks, transaction processing for real-time operational tasks, database, communications networking, high-level languages, and a transparent user interface. In addition, this work had to meet state-of-the-art testing and validation procedures.
Three hundred and fifty people were recruited to build this OS of the future. The work was bundled so that five senior people were responsible for providing all this functionality and for ensuring that each of these component parts was state of the art. As the leader, my job was to make sure these results happened. They were the experts; I was the boss. How could I make sure that it all came together seamlessly?
As a team we knew that most software errors arose at the boundaries where one set of functionalities passed off to another function. A state-of-the-art OS would have to make this integration seamless and error free both at the design level and at the implementation code level. The amount of detail involved with all of this was far beyond what I could master. I had to figure out how to delegate the interaction at the boundaries to the five team leaders.
Each team had to produce a design specification that contained at least two things: the functions they would provide and justification that what they were providing was state of the art. Typically, each would produce their specification and come to me for sign off. How could I hope to keep up with this much talent? After long reflection, a solution arose: each team leader would have to get his/her four peers to agree that what was being proposed was what they needed and to the best of their knowledge, it was a state-of-the-art leap forward. Five minds were better than one!
For reasons beyond our control, this wonderful project was canceled by the Sperry Chairman of the Board.
One unintended consequence of this thoughtful work was creating insights for software people into what it took to develop tested and validated software. In subsequent years this led Univac to collaborate across its software development centers to produce a 13-stage software development process to be followed by all the centers.
Computer Operating Systems (OS’s)
Operating Systems are the software mechanisms which enable computer users to access the power of computer hardware. OS’s provide interfaces to users in everyday language which then get transferred into commands that drive the hardware to perform an endless array of functions: storing data for later retrieval, developing application software which provides a steady stream of functionality, interacting with phones and other computers to connect people and organizations globally, etc.
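That translation from everyday requests into hardware commands can be hinted at with a small sketch. Python is used here purely for illustration, and the filename is hypothetical; the point is that one simple request passes through the OS's narrow system-call interface:

```python
import os

# A high-level request ("save this note") becomes OS system calls:
# open allocates a file descriptor, write moves bytes to the device,
# close releases the descriptor. The OS hides disk geometry, buffering,
# and device scheduling behind this small interface.
fd = os.open("note.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
written = os.write(fd, b"stored for later retrieval\n")
os.close(fd)

# The same OS services, wrapped in the language's friendlier interface.
with open("note.txt") as f:
    content = f.read()
```

Every layer above – languages, databases, networks – ultimately funnels through calls like these.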
Over my 40+ year career in the computer industry, I was seriously involved with 5 major operating systems: as an implementer, designer, and manager. At the end of this career, any satisfaction that could be derived from this career received a heavy jolt: computer hacking! I had a good friend who served on ‘tiger teams’, groups of programmers who were given the job to see if they could ‘penetrate’ any of the computer OSs in operation. His comment: “It never took more than two or three days”.
The realization hit me that in all my 40 years, computer security was NEVER a consideration: not in design and certainly not in implementation.
Now, decades of very hard work (retrofitting fixes into millions of lines of code) are slowly but surely making these ‘penetrations’ much more difficult, but not impossible as the hackers continue to become creative in their techniques.
The unintended consequences of our lack of insight related to computer security are billions of dollars in cost and a serious lack of confidence in the integrity of our global computing systems.
Knowledge Systems Center
In 1984, Gerry Probst, Chairman of Sperry, convened a meeting to address the future of his mainframe computer-based company, Sperry Univac. His opening words:
- Gentlemen, our company missed the importance of the emerging mini-computer. Then we missed the importance of the emerging personal computer. Now, every day, I am hearing about this new technology, Artificial Intelligence. My question to you is: Is this the ‘next big thing?’
No one at that meeting had the slightest idea about the importance of Artificial Intelligence (AI). Given this lack of awareness Mr. Probst asked that a task force be initiated to investigate AI and determine if Sperry Univac should get involved.
I was selected to lead that task force, and we formed a team including technical and financial people to ensure we looked at both the technology and its business potential. This exploration took 6 months. We met with AI technology leaders such as MIT and Stanford University. MIT led us to Texas Instruments (TI), which was in the process of building hardware based on AI. They were excited about meeting with a computer company that could take their new computer to market. Stanford led us to Intellicorps, a start-up company which was looking to market an operating system built on an AI platform. Our interaction with Intellicorps proved to be the foundation for our later decision to enter the AI marketplace.
One of our top software programmers attended a 3-day workshop to be introduced to Intellicorps’ AI-based software environment. By the 3rd day, he had implemented a solution to an issue he wanted to explore. Then he reflected on what he had done and determined that while the tool set was new and he had mastered it, the process he used to get to the solution was NO different than what he had done for many years with a wide range of software tools.
This bothered him, so he went to Intellicorps’ management to ask, ‘What am I missing?’ They assigned one of their graduate student employees to discuss this with him. She took 2 hours to explain how she would have addressed his problem, taking advantage of what the AI tool set offered. He realized that she had given him a totally new perspective to use to solve problems.
Reflecting on this more deeply, he realized that while the computer industry had about 1200 programming languages, every one of them used the same procedure for solving problems – a procedural process. Digging deeper, we realized that AI had 3 major tools – Lisp, Prolog, and Smalltalk – and each of them offered a totally new perspective for solving problems. We were looking at adding 3 new perspectives to the one that had dominated the industry up until then.
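The difference in perspective can be hinted at with a small sketch. Python stands in here purely for illustration (the AI tools themselves were Lisp, Prolog, and Smalltalk); the task and function names are hypothetical:

```python
from functools import reduce

# Procedural perspective: spell out each step and mutate running state.
def total_even_squares_procedural(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Lisp-like functional perspective: compose expressions, no mutation.
# The same result is stated as a pipeline of filter and reduce.
def total_even_squares_functional(numbers):
    return reduce(lambda acc, n: acc + n * n,
                  filter(lambda n: n % 2 == 0, numbers),
                  0)
```

Both compute the same answer, but the second describes *what* the result is rather than *how* to step through it – the kind of shift in perspective the graduate student demonstrated.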
The challenge then became ‘how to present these insights to Mr. Probst and Univac management.’
The presentation we came up with had 3 major points:
- Sperry Univac was primarily a problem solving company delivering customized solutions to our customers.
- The ability to solve problems often depends on the capability of the tools available to us, and these AI knowledge-based tools were the most powerful tools for representing problems that we had seen.
- In the entire history of the computer industry up to that point, our computers were only able to do exactly what the users asked them to do. Small errors in input could be very frustrating to find and correct. In this new AI-environment, the computer leads users through good practice – a complete switch of roles!
Either the presentation was very convincing or the company was desperate to find an alternative revenue stream to their historic mainframe computers. Regardless, the decision was made at that meeting to pursue the AI market. I was picked to lead this new initiative which was funded at $250 million over a 5-year period.
One insight I started with was ‘do not challenge the existing power structure within the company.’ This led to setting up a small group of ‘champions’ who were selected from across Univac’s myriad divisions. This became the Knowledge Systems Center (KSC).
This structure worked amazingly well. Within our first two years we:
- We OEM’d AI hardware from TI, and we bought marketing rights to Intellicorps’ AI knowledge-based environment, giving us the most powerful AI environment possible.
- We committed to building Expert Systems which involved extracting the knowledge from experts, so the solution could lead less expert people through sound practice.
- We had over 100 projects started across all the company’s divisions.
- Over 350 programmers were involved.
- Over 50 customers were contracted to work with us on solutions.
- We collaborated with 43 universities across the globe, giving them AI HW and SW on which to train our employees of the future.
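The Expert Systems idea above – knowledge captured from experts as if-then rules, applied to lead less expert people through sound practice – can be sketched as a minimal forward-chaining rule engine. This is an illustrative sketch only; the rules, facts, and function names below are hypothetical and not the KSC's actual tools:

```python
# Minimal forward chaining: repeatedly apply if-then rules to the known
# facts, adding each rule's conclusion once its conditions hold, until
# no new facts can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical diagnostic rules of the kind an expert might supply.
rules = [
    ({"engine cranks", "engine won't start"}, "suspect fuel system"),
    ({"suspect fuel system", "fuel gauge reads empty"}, "advise refueling"),
]
result = forward_chain(
    {"engine cranks", "engine won't start", "fuel gauge reads empty"}, rules)
```

The engine is generic; the expertise lives entirely in the rules, which is why extracting knowledge from the experts was the heart of the work.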
We were launched and racing forward!
Then Burroughs bought Sperry Univac and quickly stopped all forward-looking activities, so they could pay off the $4.5 billion debt the purchase incurred.
The unintended consequence of this decision by Burroughs and its Chairman Mr. Blumenthal was two-fold:
- The merger of two companies, each with $5 billion in yearly revenue, saw revenue quickly plunge to $3 billion yearly – a complete failure.
- Expert Systems technology lost its major champion and eventually lost its momentum leaving room for machine learning technology to get attention.
In 1987 with no backing for the KSC after the merger of Burroughs and Univac, I left with 4 people from the KSC to form PEAKSolutions (PEAKS) and continue the initiative we had started with Univac. Our focus continued to be on Expert Systems. Being a start-up company with little capital, we were forced to abandon the TI AI hardware and move to the Lisp environment on personal computers.
The first 4 months were more than challenging as we struggled to find paying customers. Necessity is the Mother of Invention. We soon realized that programming demos was a very costly process and did not mean we would even get the sale. In a meeting to discuss this, one of our Founders said, “What we are saying is that we need demos, but we cannot afford the cost of programming them. What we need then is something that looks like a demo, but does NOT require programming. What we need are mirages.”
After discussing this, we realized he was exactly right, so we set to work to find a tool that would produce demos and require no programming. We found that tool, and mirages became the heart of our selling technique.
One of our early experiences with a mirage demonstrates their effectiveness. We were working with the Minnesota Department of Transportation to address their challenge of routing oversize or overweight vehicles on the Minnesota road system. Think about it – how do you deliver a 50-foot-high crane to its work site without hitting bridges?
We met with their team that was charged with this responsibility – finding a route through our maze of roads without incident. Working with them, we created the mirage demo and took it to the responsible Director. He did not have a computer, so we wandered through the cubicles outside his office to find a PC to demo on. We found one, and the cubicle occupant asked to stay and watch our demo. In the middle of the demo, we heard her shout, “If we would have had this tool when I was routing vehicles, I would still be there!” That made the sale.
We proceeded to implement Routebuilder, which worked with the detail of the thousands of miles of MN roads, weight limits, road widths, bridge heights, and 400 State Laws that affected who could travel on which roads, and when. We created the first solution to this issue in the country and eventually sold versions of it to 4 other states.
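The kind of computation at Routebuilder's core can be sketched as a constrained shortest-path search: find the shortest route while skipping any road segment the vehicle cannot legally or physically use. This is a hypothetical miniature, not the actual Routebuilder implementation; the road network, clearances, and limits below are invented for illustration:

```python
import heapq

def route(graph, start, goal, height_ft, weight_tons):
    # graph: node -> list of (neighbor, miles, max_height_ft, max_weight_tons)
    # Dijkstra's algorithm over only the edges this vehicle may use.
    pq, seen = [(0, start, [start])], set()
    while pq:
        dist, node, path = heapq.heappop(pq)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, miles, max_h, max_w in graph.get(node, []):
            if height_ft <= max_h and weight_tons <= max_w:
                heapq.heappush(pq, (dist + miles, nxt, path + [nxt]))
    return None  # no legal route exists for this vehicle

# Hypothetical network: a short route under a 14-ft bridge, and a longer
# unrestricted bypass.
roads = {
    "depot":  [("bridge", 5, 14, 40), ("bypass", 12, 99, 60)],
    "bridge": [("site", 3, 14, 40)],
    "bypass": [("site", 4, 99, 60)],
}
```

An ordinary truck takes the short route under the bridge; a 50-foot crane is forced onto the longer bypass – exactly the kind of answer the routing team had been working out by hand.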
The mirage idea led to another innovation, Reflective Knowledge Engineering (RKE). What our programmers learned to do was to continue the mirage process by adopting an incremental approach to solution implementation. They would meet with the expert for a couple of hours; then go away and implement what they were told; and then demo these changes to the expert who would either endorse what they did or indicate where they had not understood the process properly.
This incremental approach to implementation was extremely effective. While we were successfully implementing, there were articles being written across the country which stated, “It is NOT possible to get the expertise from the experts.” Given this was our business, I would meet with the programmers and ask, “What are we going to do about this issue?” Their response: “We do NOT have that problem.”
By our 3rd year, we were at $3 million revenue and maintained this level for 3 years. We were confident in our future. Then AI Winter hit. So many other AI companies had NOT figured out how to deliver effective Expert Systems that the country decided they did not work! No matter what we did, we could not overcome this negativity. We had delivered 39 of 40 customer solutions, but in one month sales went to 0 – and stayed there.
The unintended consequence of the PEAKS bankruptcy was that there were no companies left committed to Expert System technology. This made room for machine learning to fill the vacuum even though it has not resolved how to deliver reliable solutions to very complex issues. The nice thing about our Expert Systems was that the client’s experts confirmed the solution process was correct.
Year 2000 (Y2K)
As we neared the end of the 20th Century, awareness arose that the ‘date field’ in many, many operational forms would have to change from 19__ to 20__. In many businesses, having the wrong date could be disastrous. Companies began deploying their programming staff to head off this disaster. Media jumped on the opportunity to predict a monstrous disaster.
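The underlying problem and one common remedy fit in a few lines. A two-digit year field is ambiguous once dates span the century boundary; a widely used fix was “windowing”: choose a pivot year and interpret two-digit years on either side of it. This is an illustrative sketch; the pivot value below is an assumption, not a standard:

```python
# Windowing: two-digit years at or above the pivot are read as 19xx,
# those below it as 20xx. The pivot (70 here) is a per-application
# choice, made based on what date ranges the system actually handles.
def expand_year(two_digit_year, pivot=70):
    if two_digit_year >= pivot:
        return 1900 + two_digit_year
    return 2000 + two_digit_year
```

Windowing avoided rewriting every stored record, but it only deferred the ambiguity rather than eliminating it.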
The unintended consequence of not thinking ahead was the expenditure of unplanned monies – but disaster did not strike.
Internet and Social Media
The Internet, which has the potential to connect the Earth’s 8 billion people, represents a phenomenon that is difficult to comprehend. While the computer industry has been in continuous flux since its inception, the Internet’s potential is difficult to get one’s mind around, especially after the introduction of the visual World Wide Web, which provided access to more and more people. Social Media then propelled another level of change: designed to connect people, it drew individuals in droves.
Then in 2009, Facebook and Twitter introduced two functions: Like and Share. Use of these two functions was immediate. While these features were intended to provide useful new capabilities, they instead created viral responses, which in turn have led to misinformation and social dysfunction. Our Democracy is threatened.
An unintended consequence of Like and Share is to permit Evil to overwhelm Social Media – and the Internet. The Like and Share features should immediately be removed.
Internet of Things (IOT)
I only have arm’s-length knowledge related to the Internet of Things; however, I just read a novel, The Steel Kiss by Jeffery Deaver, which suggests that just as computers have been hacked, so can the components within the IOT. In this novel, evil people hack devices such as microwave ovens, gas stoves, precision mechanical equipment, and cars to injure and even kill people. The companies which supply these components scramble to resolve the issue, but soon realize it will take years as they did not consider the hacking issue from the start.
The unintended consequence of ignoring the possibility of evil can be delay in the spread of IOT.
Artificial Intelligence (AI)
With the introduction of ChatGPT by OpenAI in November 2022, Artificial Intelligence is sweeping across the globe. With its success holding conversations with users, replacing draft materials with improved versions, drafting essays, passing professional certification exams, scripting plays, revising text to appeal to different age groups, creating poetry in the voices of well-known poets, creating art, etc., AI shows the potential to impact wide swaths of society.
New versions of GPT show promise for improved problem solving with speeds as much as a million times faster than we can achieve today.
Specialized LLMs are being pursued by multiple organizations. Providing access to more data for training these new versions has the potential to improve their performance in a variety of arenas.
This level of success and this range of potential uses demand that the organizations doing this work answer some important questions:
- How will they overcome the challenge that it is difficult to know why machine learning algorithms have returned the answers they do?
- What are they doing to explore potential unintended consequences?
- Given the computer industry’s history of significant failures, how will they avoid creating the most disastrous failure ever?
- How are intentional ‘wrongs’ considered at the design level?
- Given that ‘wrongdoers’ will have access to these tools, what are they doing to anticipate and prevent ‘evil’ uses of their tools?
- Given the broad functionality provided, how do they test?
- How are new releases validated to ensure results are reasonable?
If we took this questioning to its logical extent, we would never end. Even if we could partition the questions by category, including time, the task is one that could drive anyone in computing mad. Polite society would prefer to discuss these issues in the context of complexity, which can be informative and even effective in certain situations.
In the spirit of that last sentence, the reference below is to an article from the October issue of Communications of the ACM. After looking at the latest systems, represented by ChatGPT, Bard, and more, the article presents issues that are of interest to us and will be pursued further. Its last section addresses the topic of this article: unintended consequences.
- Michael A. Cusumano, “Generative AI as a New Innovation Platform,” Communications of the ACM, October 2023.