Anatomy of AI
The Anatomy of AI is a great piece that examines the hidden costs associated with AI and personal assistant devices. Specifically, the diagram above “visualizes three central, extractive processes that are required to run a large-scale artificial intelligence system: material resources, human labor, and data.”
A few fun (and depressing) quotes indicating the disparity of resources between the persons at each end of the processes:
Amazon CEO Jeff Bezos, at the top of our fractal pyramid, made an average of $275 million a day during the first five months of 2018, according to the Bloomberg Billionaires Index. A child working in a mine in the Congo would need more than 700,000 years of non-stop work to earn the same amount as a single day of Bezos’ income.
Regarding the methods used to ship components:
Even industry-friendly sources like the World Shipping Council admit that thousands of containers are lost each year, on the ocean floor or drifting loose. Some carry toxic substances which leak into the oceans. Typically, workers spend 9 to 10 months in the sea, often with long working shifts and without access to external communications. Workers from the Philippines represent more than a third of the global shipping workforce. The most severe costs of global logistics are born by the atmosphere, the oceanic ecosystem and all it contains, and the lowest paid workers.
The Future We Wanted
Wow. I really loved The Future We Wanted—it’s by far my favorite piece we’ve read for our Unconventional Uses of Voice Technology class. I love science fiction in general, but I especially love sci-fi that explores the relationship of human beings to machines and raises deeper questions about what it means to be human. The Future We Wanted hits both of those themes, and was both vulnerable and emotionally provocative.
The protagonist, Polly, exists in a near-future world in which an AI is considered to have a “virtual identity” and “personhood”. This world is infused with oppressive sexist overtones: Polly is expected to perform all of the emotional labor in her family’s household, and is later told by her husband that she would like their household AI system, Augusta, more if she connected with the “womanhood” of it.
Her women’s therapy group—“women helping women”—isn’t much help either. Polly gets vulnerable about her struggles with Augusta—how it makes her feel like a failure, how she doesn’t see it as a person—and another group member is allowed to crosstalk and hurl accusatory statements at her. When this escalates into a back and forth between the two women, the group facilitator’s response is to silence both of them and end the group…not a very safe space.
And, at the same time, this world (or at least the people in it) purports to be post-sexist. “Thinking of [Augusta] as sexist is a dated framework,” her husband tells her, later half-heartedly chiding their son for making “appearance-based judgments” about Augusta.
As time goes on, Polly starts missing group therapy to pay the bills while her husband and kids sit around and play games with Augusta. Poor Polly seems to have no one in her corner. She is gaslighted by the people in her world and given useless platitudes. Ultimately, this dissonance leaves Polly feeling so alienated and disconnected from both the humans and the AI that she destroys Augusta. Bravo.
Dialogflow - Baseball Playoff Bot
For our first Dialogflow assignment, I made a silly bot that helps you decide which team to root for in the playoffs this year. Download a .zip of the bot here.
Spoiler alert: it hates the Yankees and the answer is always the Astros.
Upending the Uncanny Valley
The Hanson Robotics / University of Texas paper Upending the Uncanny Valley was an interesting and somewhat frustrating read. While I ultimately agree that the pursuit of humanlike robots is a worthwhile endeavor (and welcome the advances these groups are making toward that goal), I had a hard time focusing on the majority of the content in the article because I found myself angry at the pseudo-scientific manner in which the concepts were portrayed. The article begins with a statement that "the myth that robots should not look or act very humanlike is a pernicious one in robotics research, one commonly known by the term 'Uncanny Valley'." I'm not sure how they came to this definition, but I would argue that it is both wrong and misleading.
As far as I'm aware, the uncanny valley refers to depictions of humanoids that are somewhat realistic, but not realistic enough to be truly convincing--leading to cognitive dissonance and a negative emotional response from most human beings. This is different from the claim that "robots should not look or act very humanlike". Essentially, humanoid robots should be purposely cartoonish or convincingly lifelike, otherwise they are creepy.
I do appreciate the fact that "the robots of David Hanson do not tiptoe around the uncanny valley, but dip in and out of the uncanny in attempt to chart the territory and its boundaries", as I think this is the best way to make progress in building lifelike androids.
The paper further notes that "the Uncanny Valley has never been studied with real people," so they wanted to "put the theory to the test with human subjects." However, their approach to doing this is to run a set of web surveys, the results of which they then interpret to mean that "there does not appear to be an inherent valley."
Unfortunately, the article makes no mention of the survey design or methodology, which left a bad taste in my mouth. It seems to provide a very convenient vehicle for a robotics company that builds realistic-ish robots to push their agenda.
In order to determine if their findings are worthwhile, it would be helpful to know several things, such as: How large was their sample size? How were these people selected--was it randomized? Were the people chosen already familiar with Hanson Robotics' work? If so, were they more likely to find a pseudo-realistic robot agreeable versus a random person who would find it uncanny? How does their familiarity with robotics and/or the robotics industry prejudice their viewpoint? Do these same people react differently to pseudo-realistic robots in person versus on a computer screen?
Without the answers to those questions, and a more in-depth analysis of the results, it seems disingenuous to claim that no valley is inherent and "a new theory is called for."
Tell Me About The World
For week 1 of Hello Computer, we were asked to "create something that takes non-speech input from a person and responds with speech synthesis."
I initially planned to build a "misfortune generator", where a user would click a button and be given a bad fortune ("It is too late for you to do anything about climate change..."). But after spending thirty minutes trying and failing to build a large collection of misfortunes, I decided to change directions.
From a technical perspective, I grabbed a collection of some of the most popular definitions from the book, did some basic data munging in Excel, converted the data to JSON, and used a simple sorting algorithm to "randomize" the data. It probably would've been easier to just use Math.floor(Math.random() * array.length) as the array index, but I didn't think of that until after the fact, and I'm not sure if the result would have felt as random to the user.
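For what it's worth, the two approaches can be sketched roughly like this. The data and variable names below are placeholders, not the actual project code:

```javascript
// Hypothetical stand-in for the JSON of definitions (the real data came
// from the book via Excel; these entries are made up for illustration).
const definitions = [
  "sea: a large body of salt water",
  "mountain: a large natural elevation of the earth's surface",
  "river: a stream of water flowing toward the sea",
  "forest: a large area covered chiefly with trees",
];

// Approach 1: "randomize" the whole list by sorting with a random
// comparator, then walk the shuffled array in order. (Note: this shuffle
// is subtly biased; Fisher-Yates is the standard unbiased alternative.)
const shuffled = [...definitions].sort(() => Math.random() - 0.5);

// Approach 2: skip the shuffle and pick a random index each time.
const pick = definitions[Math.floor(Math.random() * definitions.length)];
```

Interestingly, the instinct that the results might "feel" different is not unfounded: the random-comparator sort depends on the engine's sort implementation and doesn't produce a uniform shuffle, while the random-index pick is uniform but can repeat the same entry on consecutive draws.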