Session Transcripts

A live transcription team captured the SRCCON sessions that were most conducive to a written record—about half the sessions, in all.

Why your bot is nothing without a human

Session facilitator(s): Millie Tran, Stacy-Marie Ishmael

Day & Time: Friday, 10:30-11:30am

Room: Innovation Studio

PARTICIPANT: Hello! It is 10:30 and we’re starting on time, very robotically. So my name is Stacy-Marie Ishmael, this is Millie Tran, and we are going to talk to you today about bots. We have about seven different titles for this presentation. It’s why your bot is nothing without a human. Your bots are bad, stop building them. Which should give you some sense of where we’re going with this. What we’re hoping to achieve is that by the end of this, you’ll have a better understanding of what are some of the things that we, particularly we in the newsroom sense, are doing when we’re writing and creating bots that are driving our audiences insane, and how we can stop doing that and create better user experiences with them. We would like this to be as interactive as possible, because none of us like talking at people, so if you have questions in between, yell them, you know, we’ll listen. All right, let us begin.

Why did we get into bots? I spent up until extremely recently working at BuzzFeed News, building the BuzzFeed News app and having strong opinions about notifications. Millie was also on my team and she is now the director of adaptation at BuzzFeed. She builds various things that she will talk to you about as well, but one of the things that unites us, other than general nerdiness, is that we really think about bots as user experiences, bots as interface, and bots as life hacks, which is generally in contrast to how newsrooms have started to approach bots, which is to say, here is another way that we can broadcast what we think is important back at our audiences, without necessarily thinking about why somebody who has taken the time to find a bot, download it, and say yes, you can send me things, would actually do that. We’re not always giving them anything meaningful. I’m going to start with a question. How many of you use bots? How many of you have built one? How many of you have never installed a Facebook Messenger bot, because it’s too hard and too annoying? Yes? So if you have not yet installed a Facebook Messenger bot, I would invite you to try. Let me know if you succeed by the end of this session. They don’t make it easy.

We built one which we’re extremely biased towards, but anyway, one of the problems that we had when we were coming up with this session is definitions. Right? When we talk about bots, what do we mean? And this is something that I think has been a point of confusion as well, for many of us, from the perspective of why do we want to create this. We’ve been through a period where we—I’m not going to ask any of you how old you are. Do you remember IRC bots? OK, cool, people over 22. We’ve had a conflation of bots as interfaces, with bots as apps, with bots as ways of sending information out into the world that is just triggered by one specific thing. When we think about bots as interfaces, we’re like, OK, what do we have? We have Alexa on the Amazon Echo. We have Siri, we have Google Now. But most commonly when newsrooms think about bots, they’re thinking of, ooh, this kind of chat-like messengery thing where you can type something in, that’s really cool, it will be human, and therefore people will be likely to engage, because it feels like something a friend would send them. Well, what we’ve done in practice is we’ve taken something that works pretty well when it’s a voice user interface and turned it into something that is incredibly annoying when it is text-based. Because I have an Amazon Echo and I love her very much, but she drives me insane sometimes, because I have a weird accent, and so I’ll say things like, Alexa, what’s the temperature? And she’ll be like, excuse me? And this is even more frustrating when I’m trying to interact with the CNN bot, and I’ll tell the CNN bot, hello, and it will say, I don’t understand you, and you have to try 17 different key configurations that nobody has bothered to define.

When we talk about humans, and the reason that humans is part of this presentation, it’s because the thing that makes all of those different kinds of bots successful, whether we’re thinking about those interfaces or triggers or newsroom hacks for engagement, is the script, right? A bot is a decision tree, and it’s a decision tree that in an ideal world would be more akin to a super obvious, user-friendly choose-your-own-adventure game, but is usually an exercise in zero to swear words in under 10 seconds, because you can’t figure out what the triggers for this thing are and how to get a way to interact with it. And that is, and here I’m showing my bias, often a consequence of them being entirely written by people who think in loops, and not necessarily conversation: hi, greetings, salutations. We don’t necessarily think about building these things in ways that somebody would actually talk to something, because we’re only designing the back end, and we’re assuming that the back end and the front end are the same, and this has been a really common thing that we’ve noticed in some of the bots that we’re going to take a look at and describe to you. Here’s a good example. This is someone interacting with the 1-800-Flowers bot. Because according to Facebook, it is much easier to send 72 different messages to accomplish something you could achieve in 2 seconds on one of the websites. One of the points of frustration is here. It’s like, hey, I am in Canada, what is your address? Canada. Do you deliver to Canada? What? And then the person enters a series of leave-me-alone loops. Quit, I don’t want to hear from you anymore, unsubscribe, please stop talking to me, and the bot keeps replying, because it’s doing what it’s supposed to do. It’s like, oh, here’s a trigger that I don’t know how to respond to, so I’ll ask another question that hopefully this person can figure out.
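[Transcription note: the "bot as decision tree" idea above can be sketched in a few lines of Python. This is an illustrative sketch, not any real bot's code; the node names, prompts, and tree shape are all invented. The point it demonstrates is the one from the talk: every unrecognized input should hit an explicit fallback that tells the user what the bot *can* do, instead of looping.]

```python
# A minimal sketch of a bot as a decision tree: each node maps expected
# user inputs to a next node, with an explicit fallback so unrecognized
# input never dead-ends the conversation. All names here are invented.

TREE = {
    "start": {
        "prompt": "Hi! Want headlines or weather?",
        "options": {"headlines": "headlines", "weather": "weather"},
    },
    "headlines": {"prompt": "Here are today's top stories.", "options": {}},
    "weather": {"prompt": "It's 72F and sunny.", "options": {}},
}

def respond(state, user_input):
    """Return (reply, next_state) for one user message."""
    node = TREE[state]
    choice = user_input.strip().lower()
    if choice in node["options"]:
        next_state = node["options"][choice]
        return TREE[next_state]["prompt"], next_state
    # Fallback: say what the bot CAN do instead of erroring opaquely.
    valid = ", ".join(node["options"]) or "nothing else right now"
    return f"Sorry, I can't handle that. Try: {valid}", state

reply, state = respond("start", "Weather")
print(reply)  # prints: It's 72F and sunny.
```

The 1-800-Flowers failure above is exactly the fallback branch firing over and over without ever listing the valid options.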

Here’s another example: This was a perfectly good text messaging service. I really enjoyed using it when it just sent us text messages, and here you have an example of Millie trying to interact with Poncho, saying, hey, what’s the weather? Are you on a boat? Why is Poncho asking, after you tried to give them like a config, why are they asking, are you on a boat? It’s because somebody got overly clever with the script, right? One of the things I’ve noticed with bots is they’re either super clever, weirdly passive-aggressive, or completely useless, and there are only a few examples of interactions where you’re like, oh, I actually solved my problem more quickly than I could have solved that same problem either by Googling or firing up a browser or using an app.

MILLIE: I would like to see someone try to install or interact with a bot right now. Spend the next three minutes interacting with something and let us know how that goes.

PARTICIPANT: For those of you who’ve said you’ve never interacted with a bot before, and you have Facebook Messenger installed, or they’re going to make you install it anyway, try to use any of the Messenger bots that are available. Which assumes that you can discover them, and assumes that you can get them to start and stop talking to you.

PARTICIPANT: I got the Guardian’s recipe bot pretty seamlessly. Obviously in the last three minutes I haven’t –

PARTICIPANT: Do you work for the Guardian?

PARTICIPANT: I don’t, but I was reading a story and they had a link that opened it, and that saved me a lot of hassle.

PARTICIPANT: Has anyone seen one of those stories where they’ve embedded it? It’s not technically a download. Who else has successfully installed one?

PARTICIPANT: Did you raise your hand?

PARTICIPANT: I did. The Yahoo weather bot.

PARTICIPANT: You work for Yahoo?

PARTICIPANT: No, I just said weather, that’s the thing I experience daily, so I should maybe install this. And now it wants me to caption my weather and share it with a –

PARTICIPANT: The request to caption was punctuated with lots of emojis, which made me feel pleasant.

PARTICIPANT: Like as a place that is very fond of emojis, I can tell you that emojis are not a substitute for actual emotions.

PARTICIPANT: Here’s Millie interacting with TechCrunch, which is one of the earliest.

PARTICIPANT: So TechCrunch was one of the launch partners when Facebook launched their Messenger platform, and it is the one that is the most rage-quit after people have successfully installed it, for these reasons.

PARTICIPANT: This is at least a month of tolerance, by the way, and then just –

PARTICIPANT: No matter what you do, no matter what you’ve told TechCrunch, it will just send you things, all the time. And this I think is one of the clearest examples of the conflation of bot with notification with utility, when none of those words are in fact justified by the use case presented here. She eventually made it stop. It’s like, are you sure you want to completely unsubscribe, type yes, and it went away. So here’s the thing with bots. Even more than apps, they have a discoverability problem, they have a retention problem, and they have an inspiring-rage-quit-emotions-in-your-audiences problem. So we are taking a piece of technology, an approach to our audiences, that is very recent, and we’re managing to piss off even the power users right out of the gate, which generally does not bode well for widespread adoption. And it’s the kind of thing where eventually we hire people who can actually write the sentences to do the scripts for these, but by then everybody else has moved on to the next thing already. So the goal is to say, we are in positions in newsrooms where we can stop this from happening. This does not have to be us, and we can do better, and I’m going to give you some reasons and descriptions for how.

MILLIE: Not yet, though.

STACY-MARIE: The reason we put this up here is the challenge that bots have is contextual awareness and any kind of understanding. Especially Facebook Messenger bots: you are very limited in the universe of possibilities that they can handle, right? The Facebook approach was, we will make this as constrained as possible in order to minimize abuse. Which is fine. I totally understand that. It also means, we will make this as constrained as possible, so they are the least useful possible things that you can offer to the same people who otherwise want the information. So if you are building a Messenger bot and you are not willing to staff it in a way that, oh, what happened in the news last night, we ought to have some contextually relevant things that we can apply to this, then this is what’s going to happen. You are an NBA bot and you can’t answer the question who won last night. Now, who won last night is a nontrivial question. It assumes a series of inputs on the back end: what game are you talking about? What does last night mean? What’s your time zone? Like, these are nontrivial things that are easily solved by a social media editor tweeting, or your home page editor making a decision to splash that, or your newsletter team sending that information out, or the people who are doing the push notifications sending that out. We are probably already staffed to answer those questions and as a result, we have trained our audiences to expect us to be able to answer those questions, and yet we introduce them to new things and somehow expect that their expectations will have changed, because they appreciate this is a new and much more difficult technology. Which has never been true in the history of audiences.

Bots can be art. This is howdy.ai, it’s a Slack bot. Usually quite useful. For any of you—do any of you use Howdy? So Howdy is built into Slack and it lets you do things like onboarding, managing things, etc. This is Scott Lamb, the day after Brexit, and he tries to interact with Howdy. This is one of the problems with conversational interfaces: you create cues and you create expectations. You can say, hi, how are you? What are you doing? When’s your birthday?

He says, bummer about the UK, right? Howdy says, I have not been programmed to handle that yet, sorry. That’s a good answer. It didn’t feel weird and passive-aggressive. It’s the kind of thing where you go, OK, but what do I do about that? My question is, are they looking at the logs of the kinds of questions that people are asking them, and are they learning from them?

MILLIE: Another really important part of this is the transparency in recognizing the limits of what you’ve built. So it’s not trying to engage with Scott here. It’s saying, nope, like, I’m the bot, leave me alone.

[laughter]

STACY-MARIE: So, here’s one of the things that is hard about this, right? It’s like we’re trying, we’re really trying.

MILLIE: It gets better.

STACY-MARIE: Yes, we will find the 17 people who know how to use them, we will try to get them to do interesting things, but we’re constrained both by the platform and the fact that the expectations that people have for interacting with things aren’t aligned. There are two really interesting blog posts on this subject. We will send all of these links around at the end. One is by Fanny Brown and it’s about how bots in general are incredibly gendered, and the way that people talk to Alexa and the way people talk to Siri tends to reflect a fetch-me-darling attitude. There is another post by a person named Sandy … because the second you give somebody something that has the appearance of being conversational, they assume it’s all-knowing. They’re like, whatever I ask you, you should be able to deal with, you are clearly Google, right? And so people will try throwing things at bots and we have nothing to give them, and not all of us are as transparent as Howdy is. So this is me trying to interact with something called beta bot. I was like, all right, start, begin. I can’t, I’m sorry, I don’t know how to handle the command begin. Right? Like, throw a thesaurus of words that align with start and stop and unsubscribe at it. Because when even the very first interaction with your bot is, some things are too complicated for a simple bot like myself, including what do you do? We have not thought through how we want these interactions to work.
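[Transcription note: the "throw a thesaurus at it" fix is cheap to implement. This is an illustrative sketch, with invented synonym lists, of mapping many phrasings onto a few canonical commands so "begin" works wherever "start" does.]

```python
# Map common synonyms onto a small set of canonical commands so that
# "begin", "go", and "start" all trigger the same branch. The synonym
# lists here are illustrative, not exhaustive.

SYNONYMS = {
    "start": {"start", "begin", "go", "hi", "hello", "subscribe"},
    "stop": {"stop", "quit", "unsubscribe", "end", "leave me alone", "cancel"},
    "help": {"help", "what do you do", "options"},
}

def canonical_command(text):
    """Normalize a user message to a canonical command, or None."""
    cleaned = text.strip().lower().rstrip("!.?")
    for command, variants in SYNONYMS.items():
        if cleaned in variants:
            return command
    return None

assert canonical_command("Begin!") == "start"
assert canonical_command("Leave me alone") == "stop"
```

A `None` result is where the transparent Howdy-style fallback from earlier belongs, rather than "I don't know how to handle the command begin."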

PARTICIPANT: This is—this is something that happened to me with Alexa. So Alexa is very sensitive.

[laughter]

And you can—so you can say, Alexa, play Spotify, it’s usually play Lemonade, and at some point, my phone rang and I went to take the call, and I said pause, and it didn’t. No. Alexa, play. Alexa, stop. Alexa was just, no, never mind. I call this the rage escalation. Which is, how do I go from a state where I asked you to solve a simple problem, and you don’t, and in not solving my problem, you also are not giving me ways to figure out what I need to change to get you to understand me? And that is a problem, right? I am a power user and therefore I’m willing to adjust my behavior, because I’m an early adopter of things and I expect things to not work. But we keep designing interfaces and putting them out in front of our non-power-user audiences, and also expecting those people to change their behavior. We’re not giving them contextual cues, we’re not helping them understand what went wrong. You know, it’s the equivalent of, like, error. Most of our bots just error. And we don’t tell them how to solve the problem. We don’t tell them what the problem is. We don’t give them any off-ramps so that they can go and figure out, OK, fine, if this is a dead end, where should I turn instead?

VIGIT: That’s actually an emoji …

STACY-MARIE: So how do we get to here, to creating bots that are frictionless, proactive, and context aware? Frictionless: somebody can find them, they can figure out how to use them, and they can figure out how to make them go away when they don’t want to use them anymore. Proactive: you don’t need to hand-hold your bot into giving you the information that you want. You don’t need to say, no, media company, I don’t want this irrelevant story about chocolate, I was asking about Belgium. Please give me the story about Belgium that I’m trying to find instead.

Context aware: what’s the weather shouldn’t need to be what’s the weather in Canada today, because that’s where I am. What team won last night shouldn’t need to be, I’m talking about basketball and I want to know what’s happening with the Warriors, or whoever your team is at the moment.

Again, nontrivial problems that we are trying to solve in many other places, with many other resource—many resources—well, some resources.

MILLIE: Those are also all things we worked on at the BuzzFeed lab. Just trying to look at it through this lens: what does frictionless mean? What does proactive mean? It means being context aware enough to know that your audience is going to want a notification about a new Beyoncé video. So we’ve been thinking about these problems for a long time, and this is just another format to do that.

PARTICIPANT: So now we have some good news. Which is, it is in fact possible to achieve this. Here’s an example. This is the Sous Chef bot. This one has a couple of things I want to highlight. First is, it gives you further cues, right? So like, I’m looking for a recipe for gazpacho. Here are some options, did I solve your problem? Yes, no. Something that somebody had to think through and offer: what those different trees would send someone to. And the person who thought it through needed to know a little bit about food and recipes and what some of the alternatives were, and what a good, completed, yes-you’ve-solved-my-problem-I’m-going-to-go-off-and-cook-this feels like.

MILLIE: Something that I also wanted to highlight that I thought was really good was defining the input. And it gives you a variety of things, like cuisine, ingredient. So it gives you a guideline, which sets expectations.

STACY-MARIE: How many of you use Purple? It’s a quasi-bot, I will explain quasi in a moment. It started as a way to alert people in our gripping, gripping electoral cycle, and it’s run by a person with a team of others, but it’s formatted in such a way that you might think you’re interacting with a bot, but you’re actually interacting with a human. It’s not perfectly scalable. But it will send you a text message that says, yesterday was pretty historic, we have the first woman nominee in a major political party. And in that text message there will be a word that is in all caps. Which is a pretty subtle cue that you should do something with that word, and then you send that word back, and that gives you further cues. So the person behind it, or the team of people behind it, even within the message, they’re giving you cues for how to use the medium. The previous one, the Sous Chef bot, did something very similar, and it’s a good example of what Facebook does allow you to do, which is signal to your audience what some of your options are and make those very clear and very discoverable.
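[Transcription note: "defining the input" has direct platform support. The sketch below builds a Messenger-style quick-reply message, where the user taps one of a few offered choices instead of free-typing. The field names follow Facebook's Messenger Platform `quick_replies` format as documented at the time; treat the exact shape as an assumption to verify against the current docs, and the category names are invented.]

```python
# Sous-Chef-style "define the input": instead of free text, offer a few
# tappable choices. Builds a Messenger-style quick-reply payload; the
# exact field names should be checked against Facebook's current docs.

def recipe_prompt(categories):
    """Return a message dict offering one quick reply per category."""
    return {
        "text": "What are you in the mood for?",
        "quick_replies": [
            {
                "content_type": "text",
                "title": category,                      # label the user sees
                "payload": f"CATEGORY_{category.upper()}",  # what the bot receives
            }
            for category in categories
        ],
    }

msg = recipe_prompt(["Cuisine", "Ingredient", "Surprise me"])
```

Because the payload is fixed per button, the bot never has to guess what the user meant, which is exactly the expectation-setting Millie describes.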

But before we go any further, I want to go back a little bit. How did we get here? How can we get to a point where people are raising money for bot startups, you’re at a presentation about bots at SRCCON, and essentially somebody went to Japan, somebody went to China, and it was like, ooh, messengers, interesting. How do we get in on this action? And this goes back to what I said at the beginning about the conflation of a series of different trends in media and technology. There is the messenger trend, right? The platform trend, and the hacking engagement trend. How do you get people to care about what you are sending them? And that last bit is the most important. The reason that so many of our newsrooms are so excited about bots is because we think that we can get through that valley of, oh, it’s from a news organization, I don’t care, to, ooh, it’s kind of cool and kind of like a human, because I do care about my friends. Bots aren’t our friends, yet, I don’t suppose. So what we’ve done is we’ve thought, these messenger apps are super successful, they have incredible retention, everybody who uses them is obsessed with them. But what everybody who uses them is obsessed with is their friends, right? Like, the reason that those platforms are so sticky is because something like 85 to 95% of the interactions on platforms like Line are with other human beings, and the other 5% is like the ancillary services that they’ve built up around those human beings. Now that you’ve spent time talking about your movie tickets with your friends, do you want to buy some movie tickets? Do you want to make dinner reservations? We have taken the humanness out of these platforms, and that’s part of the problem, because that’s the thing that actually drives people to them. I want to talk to somebody, I want to feel like I’m having a conversation in a way that’s accessible and interesting, but we’re taking the people out of it. 
Whether it’s in terms of the actual scripts that we’re writing, the language that we’re using, or how hard we’re making it for people in the first place. But I promised you some good news.

MILLIE: I’m the representative of good news. So Meekan is one of my favorite Slack bots. This message says, OK, this just came up. Proactive. So context aware: I’m at work, this is useful, I don’t want overlapping meetings. The meetings were my no-meetings block and an animals brainstorm, because I work at BuzzFeed. So it will ask you if you want to delete or reschedule. Sometimes I will interact in here. Sometimes I’ll just go into my calendar, because I am calendaring, but, yeah, this is I think one example of what can be really useful and where we can go toward. And this is another good thing. Humans shouldn’t do what a bot can do, it’s weird. I’m quoting Stacy as she’s standing next to me.

STACY-MARIE: This is from a BuzzFeed Slack room where we had just created all the integrations in the world, back when integrations were a thing. Like, if a breaking news thing had happened, in the news team Slack we would get Reuters and breaking news. If somebody was going to be away from the office that day, we’d get like a calendar reminder in the thing, and the goal of that is most journalists have too many tabs open, and so we should bring some of the information that makes their job easier to them. Proactive, frictionless, context aware, news, right? Like, we were very specific about the things that we were setting up. But this one was very responsive: somebody was being, not necessarily yelled at, but chastised, and I was like, oh, this is stupid, nobody should have to go to Instagram to see what the last thing we posted was. So we scheduled an If This Then That trigger, so that every time we posted something on Instagram, it would spit something out into the room. Because somebody was getting stressed out every single day that they were forgetting to check Instagram, right? But this is a very different bot experience, this isn’t something that somebody is interacting with. 
This is something that is solving a problem in the background, and bots that solve problems in the background, bots that are interfaces, and bots that solve problems in the foreground are three totally separate categories of things that we have allowed to blend together confusingly, in the way that we are generally writing them, talking about them, and attempting to get people to use them. And so the key thing that I want to emphasize here is we need to be much more focused on: what is the exact problem that we are trying to solve, either for us internally or for the people who have to interact with it externally, when they use this thing.
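[Transcription note: the background-bot pattern described here, an event fires and a message lands in a Slack channel, takes very little code. This sketch posts to a Slack incoming webhook; the webhook URL is a placeholder, and building the payload is separated from sending so the formatting logic can be checked without a network call.]

```python
# A background "if this, then that" bot like the Instagram reminder:
# when a post event fires, drop a one-line message into a Slack channel
# via an incoming webhook. The URL below is a placeholder.

import json
from urllib import request

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def build_payload(event_source, url):
    """Format the one-line channel notification."""
    return {"text": f"New post on {event_source}: {url}"}

def notify(payload):
    """Fire-and-forget POST to the webhook; nobody has to go check the feed."""
    req = request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)

payload = build_payload("Instagram", "https://example.com/p/abc")
# notify(payload)  # would actually post; left commented in this sketch
```

This is the foreground/background distinction made concrete: nothing here parses user input at all, it just solves one problem quietly.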

Because I haven’t seen any good examples of any bots that successfully combined those three. I have seen good examples of hyperfocused, hypertargeted things, like Sous Chef, like Meekan, but generally we get too ambitious with these things if we try to blend all of those into one. This is not exactly a manifesto. But some thoughts. I’ll read them out because it’s really small. All bots are decision trees; these trees should make it as easy as possible for someone to provide a response that triggers something meaningful to them and useful to us. If we are building these things and we are not learning from them, what is the actual point? Right, there are a bunch of people that I asked, if you’re building bots, how often are you looking at your logs? This is like the early freaking days of push notifications; we can’t be sending things out into the world and not looking at what are the different ways that people are interacting with them, what are some conditions that we haven’t thought of, how do we need to make our scripts better? Because we are in the service of providing useful tools and useful experiences to our audiences, or at least we should be, but yet we’re treating bots like, OK, we built it, it’s out there, we can let it keep doing what it’s been doing. The next one is about friction. If your bot makes it a more painful experience for a user to accomplish a given task than anything else, your bot is bad and should go away. And this is what I mean. If it takes me longer to get Alexa to play my Spotify list than it takes me to take my phone out and press Spotify and play my list. Too many of these bots create more complexity to accomplish a task that was previously simple in another medium. However, if your bot makes it possible to seamlessly integrate multiple workflows or reduce the steps in a given workflow, keep going. Like Meekan and Howdy in general. Don’t confuse this is cool with this is useful. Those are not the same. 
They feel the same to us, but they are so not the same, OK? Cool and useful: if they are the same, you’ve won, and you like don’t even need to be here anymore and you can carry on. But in the meantime we have too many examples of, here is a cool thing I built that is useful only to the part of my brain that needs to tinker, versus, here is a thing that is genuinely useful to the people who have taken the time to go through all the steps to figure out how to interact with it.

One other caveat about transparency: Most times when you’re interacting with a bot, and this includes things like Purple, you don’t know whether it’s a human. You don’t know who’s behind it, you don’t know what information is being collected. There’s not necessarily an institutional voice, so there isn’t even a personal voice, and one of the things that we’ve started to realize at BuzzFeed is people appreciate when we tell them who is doing the talking, right? Like, these notifications are brought to you by BuzzFeed News, and this is the team behind it. And a lot of these bots, you’re like, ooh, I don’t even know who built this, and they’re asking you all these questions, who are you, where do you live, how do you feel today, and you’re like, oh, you creep, I don’t need to tell you all that. That is something that we need to build in. We are increasingly asking people to trust these things without giving them any sense of what we’re doing with the stuff that they’re telling us on the other side.

And finally, to the point about learning, please review your logs. I know that is an incredibly old-school thing to say, but it is the only place where you get a sense of what your zero-to-swearing threshold is. There are ideas that your users have for you that you aren’t even seeing, and the logs are how you might get at that. That’s the formal presentation. Questions?
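[Transcription note: the "zero to swearing threshold" from the log review can be measured directly. This is an illustrative sketch with an invented log format and frustration-phrase list: for each session, count how many messages it takes before the user hits a rage-quit phrase.]

```python
# Review bot logs for the "zero to swearing" threshold: for each session
# (a list of user messages), find how many messages it took before the
# user hit a frustration phrase. Format and phrase list are invented.

FRUSTRATION = {"stop", "quit", "unsubscribe", "useless", "leave me alone"}

def messages_until_ragequit(session):
    """Return the 1-based index of the first frustrated message, or None."""
    for i, message in enumerate(session, start=1):
        if any(phrase in message.lower() for phrase in FRUSTRATION):
            return i
    return None

session = ["hi", "weather in canada?", "canada", "QUIT"]
print(messages_until_ragequit(session))  # prints: 4
```

Tracking that number per script release is one cheap way to tell whether a rewrite actually lowered the rage escalation.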

MILLIE: Now we can just talk.

DAVID: Just really briefly, one thing that came up when you were talking about this: right in the middle of the talk you were talking about Alexa, and I’m trying to think through the psychological effects of, you have this one voice and this one window and this one fake person you’re talking to, and behind it are different bots with different aptitudes.

MILLIE: As in the skills?

DAVID: As in the skills, right. Like, sometimes you’ll have a great bot like Meekan, in a way that I would like. How do you think that that affects trust of the voice in general, and of that channel in general as a speaker?

STACY-MARIE: I think this is especially relevant for news organizations. There was a period where it was a little troublesome, who was allowed to tweet. We’ve sort of loosened up on social media. We don’t always think about how our user experience undermines the trust and credibility of what we’re trying to build. So we will obsess over typos, or at least I obsess over typos. We will obsess over, is this the best art? We don’t think about, if somebody is swearing at the bot that we have built for them, are they ever going to come back to us? And I think that is the problem. It’s also the problem with bots as platforms: if you have one bad experience with Facebook Messenger bots, what is the experience that you might have with the next one, which might be the Sous Chef bot? If you don’t say, I hear you, we’re not giving you the tools that you need to solve those problems either.

PARTICIPANT: Most of the bots you’ve talked about are really applications, and since a lot of us are here with media intentions, I’ve been trying to disentangle how you address content desires with a bot, with something that’s less transactional than a bot is. How do news organizations use bots if what people want is less information and more like services or functions?

STACY-MARIE: I think we have to go back to, we are not yet at the stage where bots can provide that. I think we are most comfortable in the information-purveying business, but are increasingly in the emotional labor of services business, and how do we recognize the fact that most of the tools that we’ve built and most of the infrastructure that we have is optimized for that information-conveying, and not the “are you having a bad day?” kinds of questions. We said we were going to talk about the BuzzFeed bot, and we’re going to. One of the people who’s directly responsible for it is sitting right in this room. So if I get anything wrong, she can tell me immediately. So the BuzzFeed news bot came about to ask people how they were interacting with the Republican and Democratic National Conventions. What it did extremely well is it sounded—it hit the right balance of, here is useful stuff, presented in a way that you would expect from BuzzFeed, and we stole this completely from the BuzzFeed News app. The same people were responsible for coming up with the language for the BuzzFeed news bot. This morning I got a notification from BuzzBot that says, before they unplug me, will you send me some feedback about how I’ve performed? And it was a little bit Mars Rover-y. Remember when the Mars rover sent that tweet that was like, well, guys, it was nice, now I’m going to die? But it felt like, yes, I empathize here. I actually feel like they’ve created an experience that is more of that emotional connection, more of that thing, but in a way that was both useful to me, right? Like, my opinion is very important; before news organizations do anything, they should ask me. So you consulted me, you said, how did I do? They did well, and it also gave me a sense of closure. It has served its purpose, and I felt like I had good interactions with it. But that was all in the language. 
It was all in the way that they thought about what are the kinds of questions that are likely to prompt the kinds of responses that are useful to the journalists on the other end, but that don’t feel like work for the people who are sending replies in, because too often it feels like work. There’s a section on the Amazon Echo that’s like, train it to recognize your voice. Do you have any idea how much time I’ve spent trying to get Alexa to recognize my accent? And so it feels like work when I interact with it. But what the BuzzFeed thing did, it would ask questions: did you watch Hillary Clinton’s speech? OK, you watched it, what emoji represents how you felt about it? Like a very, very low-friction way of providing a response. And then they follow up saying, here are the most commonly used responses, or here are the most common responses we got from this question. So you got a feeling of, oh, it wasn’t me, other people had these things to offer. So it did a very good job of closing loops. It closed emotional loops, it closed content loops, and it closed experience loops, in a way that never made me feel I had to enter the rage spiral. Any other questions?

You still have 15 more minutes.

PARTICIPANT: Just a question more on the technical side. What applications are people using to build up the, like, the intelligent side of the bot?

Stacy-Marie: So Facebook has a mostly documented series of approaches that you can use for this. I've seen people do just, like, hacked-together automation scripts, and I'm going to ask this gentleman over here to give a little more context about some of the things that he's been working on.

PARTICIPANT: Well, I think in general, a problem that arises with bots that are sort of transactional and take that input from the user, as opposed to sort of generative bots that are just publishing things, is that, you know, the point about reviewing the input logs, I think it goes deeper than just reviewing the input logs. I think you could do engineering around that. A lot of the time, the engineering resources that go into building bot-like applications are focused on decision trees and how we can get to pushing something out, whereas the split in a lot of cases should be a lot more like right down the middle: more going into the analysis of the input so that you can make the bot smarter over time. But that's not something that's immediately evident in terms of, you know, more updates, more content, so I think that's something we tend to skimp on a lot.

PARTICIPANT: If any of you are interested in building bots, some of the best documentation I've seen is Kik's. Kik has a dev tools site where they kind of go through how you can build a bot. Facebook is application-pending-approval: you can build one, but you can't actually use it unless they whitelist you. Slack bots are pretty easy, for varying degrees of easy, to build and configure, and then figure out how you can do more useful things with them. And if you've never used If This Then That but you're interested in the basics of automation, that's a good place to start.
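Whichever platform you start with, the core of a simple bot like the ones discussed here is the same trigger-to-reply mapping. A platform-agnostic sketch (all trigger phrases and reply copy are invented for illustration; a real Slack, Kik, or Messenger integration would call this from its webhook handler):

```python
# Platform-agnostic trigger matching: a Slack, Kik, or Messenger
# webhook handler would call respond() on each incoming message.
TRIGGERS = {
    "headlines": "Here are today's top three headlines: ...",
    "feedback": "Thanks! A human reads every reply.",
}

FALLBACK = "Sorry, I didn't get that. Try 'headlines' or 'feedback'."

def respond(message: str) -> str:
    """Return the reply for the first trigger found in the message."""
    text = message.lower()
    for trigger, reply in TRIGGERS.items():
        if trigger in text:
            return reply
    return FALLBACK
```

The fallback copy matters as much as the triggers: as the session argues, a dead-end "I didn't understand" with no suggested next step is what sends people into the rage spiral.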

PARTICIPANT: I just wanted to throw out, because of the particular circumstances of trying to run a live beta demo during the RNC: we skipped the part where we usually open up our code before we even run it, so we've still got our code under wraps because we're spooked. But the DNC is over, and we will be publishing the code early next week, as soon as we confirm that there aren't any, like, tokens still in there. So if you're interested in building that, take a look; we will be posting it, and if anybody wants to talk about sort of what we learned and what we experimented with... I run the Open Lab at BuzzFeed. We built the bot. Secret truth: it's not entirely true that the news curation team wrote the content. I mostly had to write the content myself. But we spent the last two weeks really actively testing how we could use a Facebook Messenger bot to engage people: to get people to share stories with us, to get people to tell us how we should cover the conventions. And not surprisingly, the second convention went a lot better than the first, because we had a little bit better sense of what we were doing. But I'm happy to talk about some of the things that we figured out about what we could get from people and what we couldn't in that kind of breaking-news context.

Any other questions?

PARTICIPANT: And then I want somebody to tell me about an amazing experience. What you learned about….

PARTICIPANT: I guess I'm curious about the extent to which the frameworks, you know, Botkit, etc., the frameworks around building these bots, end up affecting their language and their agendas, and the way in which, if multiple organizations are using the same frameworks to build things, like, what does it mean to build a language from the ground up, and how will they be similar? Should there be a common framework, or should, like, you know, BuzzFeed's bot have nothing to do agenda-wise or framework-wise with another organization's bot? How will that grow?

Stacy-Marie: So I think about that in a couple of ways. I think about this in the context of, like, notifications. Almost every news organization is using, like, two people. I think about it in terms of, like, material design, right, where you have an extremely clear, consistent framework that still manages to feel very different, and most of that feeling comes from copy. I am wildly biased, I'm mostly an editor, but I do think that one of the things that's been most interesting about the research we were doing for this is the way that you can very quickly get a sense of who knows their editorial voice and who is going to convey that through the language that you even have available and the buttons that you can use, right? So Facebook gives you different options, you know, three different options; how do you frame those? Do you use exclamation points? Do you use emojis? How do you allow people to interact with you? What kinds of triggers do you recognize? Are you extremely buttoned up, or are you BuzzFeed? It can go either way, and so I do think there's a sense in which the sameness provides constraints, but I do think there's a tremendous amount of creativity that we are not necessarily employing that is still available to us.

PARTICIPANT: So I've been working on a Slack bot for a couple of weeks or months or so, and I mean, it's fun and it's cool, but the one thing that's kind of frustrating me is I feel like I'm not getting a ton of analytics-type stuff from Slack, so I'm just kind of curious. I mean, reviewing the input logs has been kind of a goal of mine, but I'm curious to hear about your process, technically how you're saving those inputs, and if there are any kind of, like, privacy things to worry about, like to tell the user: hey, everything you type into this, we're saving to a database, just FYI, and we're not going to publish it, but it's just, you know, for our own use.

Stacy-Marie: There have been a couple of interesting sessions here. There was one yesterday about "you're the reason my name is on Google." There's one today. I think this is also true with bots. There was a Fusion piece from September of 2015 about a bot that was extremely popular in mainland China that was in fact recording every single keystroke that people typed into it and other places, and keeping that, and it was mostly owned by the Chinese government, and you can think about the implications of, you know, yet more surveillance from a government that might not be super into what you're typing into the bot. I think this is real. I'm answering your questions in reverse. This goes back to the disclosure point. We are asking people for a lot of things, we're not necessarily giving anything back, and we need to flip that relationship a little bit and be a little bit clearer about it. Most people don't read privacy policies, but they are grateful to hear about changes, as long as that change is not "by the way, this is mandatory arbitration." So I do think that's important to consider.

You have almost no information other than extremely explicit actions that people are taking, and those explicit actions aren't always, like, meaningful in a way that allows you to do that. So with Facebook, you can get the stuff out; it's not easy, and it's not necessarily fun to parse, but it's there. And as you've experienced with Slack, it's not easy to say how many times somebody triggered this and what did they do after, those kinds of feedback loops, and this reflects the fact that a lot of these services are very top-down, and we are the down. Right? Like, we are the ones having to work around the constraints, but we don't necessarily have a way of feeding back to them: it would make my life easier if you were doing this. And I use that analogy very specifically because it's like getting an app approved in the App Store. They say you have to jump really high, and you're like, sure, because I really want you to approve my app, and if they say, we're not going to tell you anything useful, but if you want to be here, that's how you have to play. That's where we are. But I think it's still early enough that if media organizations got our shit together and pushed back a little bit more, we still have the room to ask for those things. Because most of the time it's not, like, malicious information hoarding. Sometimes it is. But sometimes it was just, like, I didn't think this was useful to you. Like, this isn't a use case that we've ever had before. And we've had some success pushing back and getting some great information as a result.
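If the platform won't hand you analytics, the workaround discussed here is keeping your own input log and disclosing it up front. A minimal sketch of that pattern using Python's standard `sqlite3`; the table layout and disclosure copy are invented for illustration, not how any newsroom's bot actually does it:

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical disclosure copy, sent on a user's first message.
DISCLOSURE = ("Heads up: we save what you type here so our journalists "
              "can review it. We won't publish it without asking you.")

def open_log(path=":memory:"):
    """Open (or create) the input-log database."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS inputs "
               "(user TEXT, ts TEXT, message TEXT)")
    return db

def log_message(db, user, message):
    """Store the input; return the disclosure text on first contact."""
    first_contact = db.execute(
        "SELECT COUNT(*) FROM inputs WHERE user = ?", (user,)
    ).fetchone()[0] == 0
    db.execute("INSERT INTO inputs VALUES (?, ?, ?)",
               (user, datetime.now(timezone.utc).isoformat(), message))
    db.commit()
    return DISCLOSURE if first_contact else None
```

Keeping the log yourself also means the input-analysis work the earlier answer argued for (what people actually type, where they drop off) doesn't depend on whatever the platform chooses to expose.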

PARTICIPANT: If you're playing with … reaching bots. It's like if you took parts of Facebook, parts of Twitter, and Facebook Messenger and, like, text messages and a payments app and shoved them all into one thing and then put about 800 million people on it. You use it just for getting around in China, and there are, like, lots of bots that you use, like, to check into a hostel, and it's partially automated, and then when it struggles, because the cost of labor is lower, it goes to a person. It struggles a lot with my responses because my Mandarin is shitty. Do you think there's a possibility of, like, these quasi-bots appearing here, or is the labor cost just too high?

PARTICIPANT: I mean, these are phone banks, right? This approach is what we've done with customer service; it's what we've done sometimes in newsrooms, where we're like, escalate to the most senior editor. I do think that the resource cost is something that we continue to underestimate. If we want to build really good experiences, we need to throw more people at the problem, and we're currently approaching bots as throwing fewer people at the problem, and there's a short-term gain that leads to a long-term problem, which is something that I'm sure is very familiar to all of you in this room, and that is where we've made the mistake. We see this as a way of cutting people out of the process and being like, if somebody wants to know what the top three headlines are, they can just ask, what are the top three headlines, and we don't have to do anything. True, but they could also go to your website or use your app or do something else. So if we're going to put dev resources into building something that conveys editorial information, that editorial information has to be manifestly better, easier to get to, and somehow more convenient than all of the other places that people already are. And if we get into services and let people escalate and do things that they can't get other places, if we get into the expectation that there is going to be a human at the other end of this, or that there's going to be a bot so good we don't need a human, are we in fact able to deliver on that?

Millie: I think that's it. We're out of time.

Stacy-Marie: There's also an Etherpad with, like, 10,000 links that we found, and we're all on Twitter, so say hi. Up to you. Thank you for coming …

[applause]