
The Rise of Artificial Intelligence

Updated: Aug 5, 2020

by Carolyne Im



On February 14, 2019, OpenAI, a nonprofit research firm, released a new language model that can generate prose so convincing that the organization refused to release the full code, to avoid the possibility of people using it to produce fake news. OpenAI also released a study, headed by Alec Radford, that revealed the learning process behind the model. The study, entitled “Language Models are Unsupervised Multitask Learners,” reveals that the team at OpenAI approached natural-language processing (NLP), or the attempt to code programs to genuinely understand language, with a philosophy of language-driven development called “grounded semantics.” Grounded semantics relies on the assumption that “language derives meaning from lived experience.” It assumes humans created language to achieve goals, so language understanding and the development of NLP are placed in a goal-oriented context. This approach mimics how humans naturally pick up language throughout life--it starts with a blank slate and slowly learns words and meanings through “conversation and interaction.” Grounded semantics works by issuing a command to the program, then modeling correct behavior to fulfill the command. Over time, the program learns what the command means.
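To make the generative side of this concrete, here is a minimal sketch, assuming the publicly released small GPT-2 checkpoint and the Hugging Face transformers library rather than OpenAI’s own code, of how such a language model continues a prompt by repeatedly predicting the next word. The prompt text is purely illustrative.

```python
# Illustrative sketch only: load a publicly released GPT-2 checkpoint via the
# Hugging Face "transformers" library and sample a continuation of a prompt.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

# The model has no explicit goal or command; it simply predicts the most
# plausible next token, over and over, until the length limit is reached.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That next-word prediction is all the model does, which is exactly why its fluent output is both impressive and easy to misuse for fabricated text.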


OpenAI’s study exemplifies grounded semantics and reveals one of its greatest advantages--that with this approach, language models can learn “tasks without any explicit supervision when trained on a new dataset.” In simpler language, Radford and his colleagues found that once their language model was trained, it could pick up new tasks on its own, without additional task-specific programming. To achieve this, the researchers’ primary method was to collect training examples of correct behavior, train the system to imitate that behavior, and then test its performance on independent, identically distributed examples. Though the study was successful, Radford et al. also acknowledged that their findings on what they called “multitask learning” were still in a nascent phase and displayed modest performance improvements at best. Upon testing the model on a variety of tasks, they had varying success; though they found high performance in reading comprehension, tasks like summarization performed only “rudimentar[ily],” revealing the model is far from usable for practical applications. Furthermore, artificial intelligence (AI) and language models are far from reaching true comprehension of language. Though NLP continues to see progress, all known technologies have shown only limited ability to reach true comprehension.
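The collect-imitate-test paradigm described above can be illustrated with a small, hypothetical example; the dataset, features, and classifier below are stand-ins chosen for brevity (scikit-learn’s bundled newsgroup texts and a logistic-regression classifier), not anything used in the OpenAI study.

```python
# Illustrative sketch of supervised "collect, imitate, test":
# gather labeled examples, train a model to imitate the labels,
# and score it on held-out examples from the same distribution.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Collect examples of "correct behavior" (texts with known topic labels).
data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

vectorizer = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000)

# Train the system to imitate that behavior.
clf.fit(vectorizer.fit_transform(X_train), y_train)

# Test on independent, identically distributed held-out examples.
preds = clf.predict(vectorizer.transform(X_test))
print(f"Held-out accuracy: {accuracy_score(y_test, preds):.2f}")
```

The key point is that such a model is judged on examples it never saw during training, and every new task needs its own labeled dataset; the zero-shot learning OpenAI reports is striking precisely because it skips that task-specific supervision.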


This new language model and OpenAI’s findings raise broader, and perhaps frightening, questions about the role of AI technology in our everyday lives. While some are touting the positive impacts of AI’s popularity and development, others are not so sure. In September 2018, Forbes published an article by Jon Markman claiming humans are “smarter from the rise of AI.” Indeed, Markman praised AI’s ability to analyze incredible amounts of information and data quickly, which may be a “game-changer for corporate profits.” He dismissed the rising fear of AI, blaming “science fiction” and “our innate fear of redundancy” for what he believes to be unfounded worry. Markman gave the example of AI algorithms trained to detect tumors and gauge their malignancy: in less than two hours, the algorithms were able to recognize tumors with a success rate of 95%, compared to the 86.6% success rate of human doctors.


AI has made incredible strides in health care in particular, with many programs designed to catch signs of cancer, eye disease, heart disease, and Alzheimer's, among others. However, this reliance on AI also raises the stakes: one of the greatest risks is placing too much trust in the very systems we’ve built. A BBC article pinpoints this issue, noting that “a system is only as good as the data it learns from.” If the system fails, the effects can cascade and could result in ruin. There is also great concern over how AI and automation may transform employment sectors and the future of work. These fears exist on a spectrum, from predictions of “the end of work” to dismissals of AI that insist “little will change.”


While there is no consensus yet, it is obvious that AI will have a “disruptive effect” on the future of work, with some jobs being lost, some being created, and some changing in nature. Regardless, we must ponder what it means to exist and thrive in an age of technology. Both the threats and the benefits of AI are mounting: faster data analysis and increased manufacturing on one hand, and concerns over privacy, data mining, fake news, and internet bots on the other. AI is no longer a niche topic; it has invaded every part of our lives, seen most prevalently in assistants like Siri and Alexa. We must give due attention to AI software and what it means for our constantly evolving society.


 

Carolyne Im is the Editor-in-Chief of the GW Scope and the Managing Editor of the GW Undergraduate Review. She is a junior majoring in Political Communication and minoring in Music. Currently, she is a Luther Rice Undergraduate Research Fellow conducting research on the discrepancy between professed attitudes of racial awareness and performed antiracist behavior.
