How to Talk About AI

Take it down a notch.

A lot of the material about AI that claims to be written for the general public actually just regurgitates terms from more technical papers. It isn't working to "democratize" the topic, despite claims to the contrary.

To get information to the public and empower people to critically assess developments, it's important to speak in terms that laypeople can understand (something I've done in the international affairs space throughout my career).

This is a list-in-progress of best practices for talking about AI to the general public—a style guide of sorts. 

  • Don’t use the term AI unless you have to. It's overused, and no one can really agree about what it means. Instead, describe what the technique is doing. It will take more words, but it will be clearer.
  • Take the agency off of the computer and put it back into the hands of the person, even if that means using the passive voice. AI shouldn't be doing things. People should be creating programs to do things. Not as sexy, but also not as scary. 

AI and Public Opinion

But how do you really feel?

I have a lot of topics I want to look into when it comes to the amorphous idea of AI. One I'm probably not going to get to before I wrap up my MA program is the issue of public opinion, but it's going to be a continuing interest.

A number of people have written about the "mental models" that the public has (Ryan Calo is one). These are the images that mention of "AI" conjures up in people's minds. The visions are often shaped by movies, television, and books, and not so much by the flood of information coming out about the many exciting but more practical ways these computing advances can be used in daily life. (That's part of the reason I'm working on getting Everyday AI off the ground.)

A few data points jumped out at me recently as warranting deeper investigation when it comes to public opinion.

MIT Tech Review started tracking all of the estimates of jobs lost to and created by automation, and the numbers are all over the map (they vary by tens of millions of jobs).

Meanwhile, NPR/Marist recently came out with a poll in which 94% of full-time workers said it was not very likely, or not likely at all, that they'd lose their jobs to automation. But a slightly older Pew poll indicated that 72% of people are worried about a future in which computers can do many human jobs.

I'd love to have the time to dive into this more, statistically and anecdotally. It's common to see polls and stories in which people expect a certain thing to happen, but not to happen to them. But what factors are influencing these polling outcomes? Lack of exposure to these topics? Psychological factors? Socioeconomics? Type of information consumed about the topics? 

If you have any stories to share, or you're aware of anyone who has done good digging into the factors that influence opinion on automation or AI in general, I'd appreciate tips!

Hello, World!

It’s time to come out of my hole and serve up thoughts for public consumption.

When I went back to school about a year and a half ago, I wanted to absorb as much information as I could about topics I’ve had at the back of my mind for a while but lacked the time to explore in the depth I desired. As I start my last semester, I’m coming up for air and bringing the thoughts that have been brewing to the surface.

In case you’re curious about what I’ve been doing, head on over to my portfolio page and take a gander. I spread my media wings with work in graphic design and explored a new form of storytelling with my radio show, Home. I investigated robots and games that help build critical-thinking skills, and found new ways to explore audiences. And I honed my quantitative and statistical skills while diving into the world of AI and machine learning.

All of this is an outgrowth of my effort to understand technology and people better. A path that began with a drive to investigate the polarization of views and extremism online, along with the technologies that exacerbate or help bridge those divides, has led me to AI, fairness, and public policy. 

More broadly speaking, I’ve ended up at the problem of how to go about building global societies that can more easily adapt to technological developments while supporting citizens caught in the turbulence. The solution involves education and efforts by siloed forces—corporations, programmers, governments, philosophers, NGOs, representatives of the general public, and more—to communicate and forge agreements about goals, principles, and standards.

This is where I find myself in January 2018, and hopefully for years to come. Expect to hear more from me as I work things out!