There is a marriage between innovation and ideology. It's the blend of the two that shapes the world and drives the problems you deem worthy of solving. It is important for your ideology to align with a source of influence that is constant, so that your perspective on the problems you want to solve does not shift with the times.
When I started my design company in 2004, I worked with creatives of many different religions and political backgrounds (and still do). And for nearly 12 years we managed to work together without any major clashes or disagreements. But a time came when a problem we had the opportunity to solve revealed our differences in belief.
A passionate member of the team was trying to force the company to make an ideological decision: to refuse to work with anyone who held a certain set of beliefs different from his own. This situation helped me realize that although we were striving to be neutral, we never were. We were simply avoiding situations where our biases clashed.
A vision that drives your organization forward requires an absolute truth about what is and what should be. From that day forward I had to become clear about my ideology and the purpose behind my work, so that in the future I could attract like-minded individuals who shared the ideology behind my innovation.
There is no such thing as neutrality. At the root of every innovation is an ideology:
- Innovation is achieved when we find a new way to solve a problem.
- Problems are solved with ideas.
- Ideas are generated through a way of thinking.
- A way of thinking is our ideology.
- Ideology is our view of the world and the future we want to create.
We’re at the beginning of what Artificial Intelligence can do for the world. AI is beginning to be incorporated into all aspects of our lives.
The three stages of Artificial Intelligence:
A couple of years ago Andreessen Horowitz (a venture capital firm) published a talk called “AI: What’s working, what’s not, and where do we go from here?” In that talk they described how AI has three stages of intelligence:
- Narrow AI
- General AI
- Super AI
Most of what we use today is considered “Narrow AI.” For example, the recommendations Netflix makes after learning which shows you like, or Amazon Echo’s ability to tell you a joke or order refills from amazon.com.
“General AI” is something we as an industry are striving to accomplish. Let’s say, for example, you decide to teach a robot how to swim in an indoor pool.
If you were to take that same robot and place it in the ocean, it would drown. The environment and its variables are very different. The goal of General AI is to mimic the human ability to adapt what we have learned to new environments.
If what we believe influences what we create, how would a use case for AI change based on our beliefs?
The next stage of Artificial Intelligence is “Super AI.” This is where AI exceeds the human brain’s ability to process information and learn. Imagine you’re working for a technology startup. One day you’re at the office and an engineer rolls his chair over beside you and says, “Should we as humans continue to be allowed to make decisions on our own?” And you may wonder to yourself, “Where is he going with this?”
Then he continues: “Should we give up our agency to decide and let technology decide on our behalf?” Said differently, should Siri make all of my decisions for me? (With Siri’s track record, I don’t know if I would trust it to do anything.)
Should we as humans be allowed to make decisions on our own? Or should we give up our ability to decide and let technology decide on our behalf? For much of the community working on AI, this is a tough question.
If within your ideology there is a belief that we as a human race were created to create, and were given agency, or dominion over all things, by the creator, then the answer is easy: all technology should give us unbiased information to help us (the human race) make decisions.
But if you believe life was chance, a big coincidence that came to be after an explosion and millions of years of evolution, you would likely be more open to creating technology that has the agency to make decisions on your behalf.
If you really think about both of these examples, giving up agency leads to a world where no one is allowed to think on their own. Rather, they must rely on the technology’s interpretation of reality.
If the algorithm and machine learning behind that technology are built on a relative view of truth, then what it tells you is acceptable today may be unacceptable tomorrow.
An example of this is the ongoing research into self-driving cars, or autonomous driving: a vehicle that can drive you from point A to point B without human intervention. This is the future. One day we will never have to learn how to drive or even get a driver’s license.
We will be able to summon our cars with our phones; the car will arrive, we will get in, and it will take us to our destination. During the trip we will be able to eat, sleep, or do anything else while the car carries us from point A to point B.
While this is a good application of AI, and I truly believe it will make the roads safer, when you take what AI does for driving to an extreme, you find yourself in a debate over whether humans should still be allowed to operate a motor vehicle at all.
Obviously, this is an extreme example, and we will probably pass laws that make it harder for humans to drive a vehicle than for a computer to drive it for them. There is a good chance insurance companies will charge more for individuals who want to operate a vehicle manually.
On the other hand, if the machine learning algorithm (that is powering the artificial intelligence) is based on truth, a constant that never changes, then you can live with confidence: as long as you align yourself with truth, there is never a reason to fear tomorrow, because truth never changes.
Here is another example. Today it is perfectly acceptable to greet someone with the words “hello, welcome.” But if truth is relative, there is no guarantee this phrase will always be an acceptable greeting.
That would mean anything written or created using this phrase could one day be considered hateful, or hate speech. That’s what happens in a world where truth becomes relative: what’s OK today is not OK tomorrow, and what’s not OK today is OK tomorrow.
Think back to the example of Super AI. If Super AI is created on a foundation of absolute truth, it means we will always design it to be subservient to the human race, rather than give it agency to make decisions on our behalf.
Whether you are a creative, a speaker, an author, or an entrepreneur, make sure your ideology and the innovation you want to create are in alignment with truth. Truth is not relative; it is absolute and constant. What you believe determines what you create.