AI and Public Opinion

But how do you really feel?

I have a lot of topics I want to look into when it comes to the amorphous idea of AI. One I'm probably not going to get to before I wrap up my MA program is the issue of public opinion, but it's going to be a continuing interest.

A number of people have written about the "mental models" that the public has (Ryan Calo is one). These are the images that mention of "AI" conjures up in people's minds. The visions are often shaped by movies, television, and books, and not so much by the flood of information coming out about the many exciting but more practical ways these computing advances can be used in daily life. (That's part of the reason I'm working on getting Everyday AI off the ground.)

A few info points jumped out at me recently as warranting deeper investigation when it comes to public opinion. 

MIT Tech Review started tracking all of the estimates of jobs lost to and created by automation, and the numbers are all over the map (they vary by tens of millions of jobs). 

Meanwhile, NPR/Marist recently came out with a poll in which 94% of full-time workers said it was not very likely or not likely at all that they'd lose their jobs to automation. But a slightly older Pew poll indicated that 72% of people are worried about a future in which computers can do many human jobs. 

I'd love to have the time to dive into this more, statistically and anecdotally. It's common to see polls and stories in which people expect a certain thing to happen, but not to happen to them. But what factors are influencing these polling outcomes? Lack of exposure to these topics? Psychological factors? Socioeconomics? Type of information consumed about the topics? 

If you have any stories to share, or if you're aware of anyone who has done good digging into the factors that influence opinion on automation or AI in general, I'd appreciate tips!

The High Bias Struggle is Real

My brain works better when I get it all out.

I'm doing research this semester on machine learning and fairness. But before I can really dig into that, I need a firmer grasp of machine learning terminology. So I've been working my way through a paper called "A Few Useful Things to Know about Machine Learning" by Pedro Domingos.

Sounds sweet and gentle, right? It's not (for little ol' me at least).

Part of the problem is that I'm a bit of an island on this one. This research is for an independent study that covers a lot more than the details of machine learning algorithms. It's about the process of regulating these tools, and involves public policy, law, ethical frameworks, corporations, nongovernmental regulatory bodies, individual psychology, education, and more.

That means I'm making the most out of my interdisciplinary program. But it also means that I have to save up my big computer science questions and try to find a nice computer scientist to talk them out with me, because I often learn best when I can ask questions and interrogate responses. Until then, I'm left to my own devices. And that often involves writing out ideas to try to get them straight in my head. 

Last night, I tackled this sentence by Googling things, searching the trusty Artificial Intelligence: A Modern Approach, and asking questions of a kind software engineer:

"A linear learner has high bias, because when the frontier between two classes is not a hyperplane the learner is unable to induce it."

This is describing an area in which data scientists need to be particularly careful when designing machine learning algorithms. Here's my translation:

"A computer model that is designed to find a way to explain patterns in example data by drawing a clear-cut, straight boundary between Things of Type 1 and Things of Type 2 can be way off in some cases. For one, the patterns in the data might be messier than can be represented by a straight boundary. It might be impossible to clearly say, 'Hey, everyone on that side of the line is this thing, and every on this side of the line is that thing.' Things that fall into the same category may be more mixed together.

Applying a computer model that tries to find a straight boundary to that kind of mixed-up data will give you some sort of model that works. It will find some sort of boundary. But it won't draw the right conclusions from the examples it is fed, and it won't tell you about the way the real world works. This is one reason it is important to make sure you understand your data and are looking for the right kinds of patterns."
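The limitation that Domingos sentence describes can be made concrete with a toy example. Here's a minimal Python sketch (my own illustration, not from his paper): with XOR-patterned data, no straight-line rule classifies all four points correctly, but adding a single nonlinear feature makes the same kind of rule succeed.

```python
# Minimal sketch: a straight-line (linear) boundary cannot separate
# XOR-patterned data, no matter which line you pick. That built-in
# inability is the "high bias" of a linear learner.

import itertools

# Four example points and their classes (the classic XOR pattern:
# opposite corners belong together, so no single line splits them).
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]

def accuracy(predict):
    """Fraction of the four points a decision rule gets right."""
    hits = sum(predict(x) == y for x, y in zip(points, labels))
    return hits / len(points)

# Try every linear rule w1*x1 + w2*x2 + b > 0 over a coarse weight grid.
grid = [i / 2 for i in range(-6, 7)]  # -3.0 .. 3.0 in steps of 0.5
best_linear = max(
    accuracy(lambda p, w1=w1, w2=w2, b=b: int(w1 * p[0] + w2 * p[1] + b > 0))
    for w1, w2, b in itertools.product(grid, repeat=3)
)

# Add one nonlinear feature (x1 * x2) and a rule of the same shape works.
def richer_rule(p):
    x1, x2 = p
    return int(x1 + x2 - 2 * x1 * x2 - 0.5 > 0)

print(best_linear)           # the best any line manages is 3 out of 4
print(accuracy(richer_rule)) # the nonlinear feature gets all 4 right
```

The brute-force grid search stands in for "trying every possible line"; real linear learners search the same family of boundaries far more cleverly, but they can never step outside it.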

And here's how I got to that... (If you're a machine learning person and happen to be reading this, please let me know if I got anything wrong so I can learn!)

Machine Learning and Refugee Resettlement

It's nice to see an international application, but this is just a start.

A lot of talk about AI in the United States focuses on shiny new tech advances, threats of superintelligent robots, or the impact on businesses and workers in an already-polarized society. There's not much out there on what impact the technology can have on the rest of the world, particularly the developing world (though the issue of low-skilled jobs in the developing world being replaced by machines in the developed world pops up quite a bit). So I'm always on the lookout for more international stories.

One of those stories came out on Friday in Science. Researchers developed a computer program that can help resettle refugees in particular host countries. This is a great use of machine learning that could help improve people's lives and build global community. Still, it's worth proceeding with caution, not least because this technology is pitched as a quick fix to a thorny problem. 
 

What the Researchers Did, and What the Model Can Do

The researchers use employment as the marker of a successful resettlement. And their method of algorithmic resettlement can in theory get more refugees jobs than the current approach to resettlement in the United States and Switzerland. That is, it's better at determining where in the countries refugees will be most likely to get jobs. 

To create the model, the researchers used existing data on resettlement outcomes, coding it according to refugee characteristics and employment success in a way a computer program can understand. Then they used that data to teach their algorithm which combinations of characteristics predict employment success. There's a lot more to it, but the bottom line is that the algorithm can in theory be applied to new refugees to determine where they should be resettled to increase their chances of getting a job.
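The shape of that pipeline — code past outcomes as features, learn which combinations predict employment, then score a new arrival against each location — can be sketched roughly like this. Every name, feature, and the naive frequency-based "model" below is a hypothetical stand-in of my own, not the researchers' actual method or data:

```python
# Rough sketch of the idea (not the paper's actual model): learn from
# past outcomes which placements predict employment, then recommend a
# location for a new arrival. All data and feature names are made up.

from collections import defaultdict

# Historical resettlement records: (location, profile, found_job).
# "profile" is a tiny stand-in for the coded refugee characteristics.
history = [
    ("city_a", ("speaks_english", "skilled"), True),
    ("city_a", ("speaks_english", "unskilled"), True),
    ("city_a", ("no_english", "skilled"), False),
    ("city_b", ("no_english", "skilled"), True),
    ("city_b", ("no_english", "unskilled"), True),
    ("city_b", ("speaks_english", "skilled"), False),
]

# "Training": tally the employment rate for each (location, profile) pair.
counts = defaultdict(lambda: [0, 0])  # (location, profile) -> [jobs, total]
for location, profile, employed in history:
    stats = counts[(location, profile)]
    stats[0] += int(employed)
    stats[1] += 1

def predicted_rate(location, profile):
    """Observed employment rate for this profile at this location."""
    jobs, total = counts[(location, profile)]
    return jobs / total if total else 0.0  # unseen combination: no evidence

def recommend(profile, locations=("city_a", "city_b")):
    """Pick the location with the highest predicted employment rate."""
    return max(locations, key=lambda loc: predicted_rate(loc, profile))

print(recommend(("no_english", "skilled")))      # the history favors city_b
print(recommend(("speaks_english", "skilled")))  # the history favors city_a
```

A real system would use a proper statistical model rather than raw frequency counts, and would have to handle capacity limits, family units, and profiles it has never seen — but the match-people-to-places-by-predicted-outcome logic is the core of the approach.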

The researchers point out that there are a number of things left to figure out to improve the accuracy of the model. And they call for the approach to be tested in the field with a controlled trial and random participants. 

But based on a quick read, it seems like a few questions could use answering first.

Hello, World!

It’s time to come out of my hole and serve up thoughts for public consumption.

When I went back to school about a year and a half ago, I wanted to absorb as much information as I could about topics I’d had at the back of my mind for a while but lacked the time to explore in the depth I desired. As I start my last semester, I’m coming up for air and bringing the thoughts that have been brewing to the surface. 

In case you’re curious about what I’ve been doing, head on over to my portfolio page and take a gander. I spread my media wings with work in graphic design, and explored a new form of storytelling with my radio show called Home. I investigated robots and games that help build critical-thinking skills, and found new ways to explore audiences. And I honed my quantitative statistical skills while diving into the world of AI and machine learning. 

All of this is an outgrowth of my effort to understand technology and people better. A path that began with a drive to investigate the polarization of views and extremism online, along with the technologies that exacerbate or help bridge those divides, has led me to AI, fairness, and public policy. 

More broadly speaking, I’ve ended up at the problem of how to go about building global societies that can more easily adapt to technological developments while supporting citizens caught in the turbulence. The solution involves education and efforts by siloed forces—corporations, programmers, governments, philosophers, NGOs, representatives of the general public, and more—to communicate and forge agreements about goals, principles, and standards.

This is where I find myself in January 2018, and hopefully for years to come. Expect to hear more from me as I work things out!
 

The Learning Turtle (Robot)

Turtles can be teachers.

 

UPDATE: Project complete! Check it out here.

 

At Georgetown, I'm exploring a number of things that could be lumped under the heading of educational technologies. I'm trying to get both a broad and a deep view of what's out there and what could be integrated into non-classroom educational environments to help people develop problem-solving and critical-thinking skills. These things range from scalable, inexpensive tech like video games and simulations that don't require serious processing power, to the less accessible world of VR headsets, to the principles of human-computer interaction, user interface design, and more.

I'm particularly excited that I get to take a closer look at Seymour Papert's work in education this semester. Papert, among other things, is famous for creating the Logo educational programming language and accompanying floor turtles, which children could use to learn by doing. Much of his work was focused on children, but his ideas have been widely applied and have influenced many projects, including Alan Kay's Dynabook.

For the next couple of months, I'll be working with a group to dig into Papert's constructionist theories and see how they're being applied in educational robotics today. If you're curious, I've included my project proposal below.

Big Ideas and Small Revolutions

That time Becky got really interested in stuff that happened at Xerox PARC half a century ago.

I took a class last semester with an intimidating title: Semiotics and Cognitive Technologies. It ended up being a revelation. 

Bear with me for just a moment while I spew some words that might not be familiar. We covered a lot of ground, moving from humans' first use of tools through extended cognition and Engelbart to embodied technologies and artificial intelligence. Along the way, we applied the theories of a really smart, pretty eccentric guy named C. S. Peirce. He operated in the field of semiotics, along with many others, and was set on figuring out a system of signs and symbolic logic that could be applied to all forms of human behavior. 

All that basically means we looked at how humans make sense of the world around them. More specifically, we looked at how humans use things that they have created to learn, share memories, and build up the communal store of knowledge for current and future generations. That includes stone axes and beads as well as early computers and virtual reality technology.

The whole class shifted the way I think about the world. But I found myself especially interested in the application of these principles to interface design, particularly the work of Alan Kay at Xerox PARC and his influences. They looked at computing devices not just as tools humans could use to do work, but as partners of a sort in a symbiotic relationship. That was the beginning of personal computing and the drive to create devices with which humans could interact. Machines that could be integrated more seamlessly into normal human life than, say, room-sized computers that performed a series of calculations using punched cards.

More on all this later. For now, though, feel free to take a look at my final project for the class: "Big Ideas and Small Revolutions." It's just a first draft. I plan to work and rework this base as I move through my studies at Georgetown.  

Bringing Death to Life

If an obituary can do this, so can writing about the living.

The Economist's obituaries are some of my favorite pieces of writing.

That sounds morbid, I'm sure. But take the recent obit for Sashimani Devi, the "last human consort of the god Jagannath."

Never heard of Sashimani Devi? What about Jagannath? That the obit editor of this major publication decided to focus on this topic for the week's one remembrance in print is a bit confusing. Yet, that's not unusual for this feature. And it's brilliant.

The prose is unpretentious and absorbing. And for a few minutes every week, the words transport you elsewhere. Here's the author describing Sashimani Devi's relationship with her husband:

"Each day she would rise early, bathe and go to the vast temple complex, where on the highest spire Lord Jagannath’s red pennants flew to show he was inside. There he was, her wooden spouse, freshly dressed and decked in flowers, on a high jewelled platform beside his sibling gods. She would arrive as his breakfast was served to him of coconut, sweets and ripe bananas, and afterwards she danced in the main hall for the visitors." 

These obituaries are, naturally, about the person who died, and they give you a glimpse into one person's existence.

But they're much more than that. They're deep commentaries on living and some of life's most difficult themes, from the personal to the international.

The author adds texture and personality to these themes. She makes big, intangible issues bite-size and digestible. She leaves you with firm takeaways and roots in the present despite looking to the past.

A reflection on the life of Naty Revuelta, Fidel Castro's mistress, blossoms into a story of unflinching devotion to a man and an idea, no matter the cost. It never mentions the thaw in U.S. relations with Cuba going on at the time of writing or revolutionary upheaval around the world, but it doesn't have to flaunt its relevance.

An offering on Billie Whitelaw, Samuel Beckett's muse, is supremely personal. The story about deep insecurity and success in the face of adversity is both inspiring and sad.

And then there's Sashimani, whose story highlights the struggle between modernity and tradition along with all sorts of questions about religion, devotion, love, and more.

All that is wrapped up in just three words in the quote above: "her wooden spouse." The narrative is about Sashimani's love of her husband, a god and "a mere stump of wood with round, staring eyes," as the author puts it. She was married to him at a young age and remained devoted to serving him her entire life, mourning the loss of her beloved when the piece of wood was replaced with a new one every twelve years.

This is writing that sheds light on some of the world's deepest divisions, questions, and problems, but it doesn't do that in 15,000 words or with bullet points or through policy recommendations. These are engrossing and unique narratives that make very real some very abstract ideas.

If an obituary can do that, so can writing about the living.