
A Reflection on the HBO Series -- Westworld

[Background music: Star Sky - Two Steps From Hell]

The Google I/O annual developer conference that just wrapped up on the 20th showcased the impressive progress that artificial intelligence (AI) has achieved. While working on a commentary essay for my film analysis class, I became familiar with the HBO series -- Westworld. This sci-fi TV series sparked my thinking about the future development of AI. One of the most inspiring ideas from the series is using the concept of the bicameral mind to endow artificial beings with self-consciousness.

The bicameral mind was proposed by Julian Jaynes, a psychologist at Princeton, in his book “The Origin of Consciousness in the Breakdown of the Bicameral Mind”. In short, the bicameral mind represents the separation of consciousness from action. Because action is segregated from motivation, subjects begin to introspect on themselves, evolving from simply “we do” to reasoning about “why we do what we do”. Even though Jaynes’s concept lacks sufficient evidence and has drawn numerous objections and doubts from academia, the bicameral mind can still inspire us to make artificial intelligence truly intelligent -- that is, through “instruction transparency”.

 

In the series, Arnold built the hosts’ consciousness by broadcasting their source code to them as an inner monologue and stimulating them to form introspection over time. By introspecting on their own actions, the hosts gradually develop independent thinking and move one step closer to full intelligence. In a sense, the attention model that has been widely developed in recent years is a leap toward creating consciousness: it adds a layer of introspection to traditional sequence learning by analyzing the relative importance of the data. When the model can generate a distribution over which parts of the input matter most, learning can be significantly enhanced, because more resources and computational power can be allocated to the more important information, much like the 80/20 rule in marketing management -- maximizing gain while minimizing effort.
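As a rough illustration of this idea (a minimal sketch of my own, not code from the series or from the referenced talks), the NumPy snippet below computes scaled dot-product attention. The softmax row of weights is exactly the kind of “importance distribution” described above; all names and array sizes here are made up for the example.

import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Similarity between queries and keys decides how much "attention"
    # each position pays to every other position.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)   # a distribution over the inputs
    return weights @ V, weights          # weighted summary plus the weights

# Toy example: 4 positions with 8-dimensional embeddings (illustrative sizes).
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
output, attention_weights = scaled_dot_product_attention(Q, K, V)
print(attention_weights.round(2))  # each row sums to 1

Each row of attention_weights sums to 1, so the model is, in a small way, reporting how much of its “focus” goes to each input position -- the kind of instruction transparency the paragraph above describes.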

 

Of course, the attention model described above is only one of many steps toward actual self-consciousness. If we want AI to become truly intelligent, more introspection components are needed within the processing pipeline to stimulate the agent’s cognition to the greatest extent. However, as discussed in the previous article, “A reflection on human intelligence and artificial intelligence”, the so-called intelligence is human-defined and can only be perceived when its scale falls within the range of human intelligence. Once the scale of intelligence exceeds the brainpower of Homo sapiens, we humans will be incapable of detecting such brilliance. At the same time, humans are not yet entirely self-conscious, and there remain countless open questions about motivation. Thus, we can only optimize AI as far as possible toward intelligence, and hope that in time, as more knowledge about consciousness accumulates, true brilliance can be achieved.

 

Inspirational Quotes from Westworld:

  1. (Ford, S1E03) “Arnold built a version of their cognition in which the hosts heard their programming as an inner monologue, with the hopes that in time, their own voice would take over. It was a way to bootstrap consciousness.”

  2. (Arnold, S1E10) “I had a theory of consciousness. I thought it was a pyramid you needed to scale.”

 

References:

  1. Professor Andrew Ng’s introduction to the attention model on YouTube: https://www.youtube.com/watch?v=SysgYptB198

  2. The explanation of the Transformer attention model on the Google AI Blog: https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html

  3. The explanation of the bicameral mind in Westworld on Inverse: https://www.inverse.com/article/14264-bicameral-mind-westworld-julian-jaynes-origin-of-consciousness-hbo

2021.5.27
