Session Transcripts

A live transcription team captured the SRCCON sessions that were most conducive to a written record—about half the sessions, in all.

Building news apps for humanity

Session facilitator(s): Thomas Wilburn

Day & Time: Thursday, 12-1pm

Room: Innovation Studio

So if you’re just coming in, if you want to fill in this table here, that would be great. OK, hey, I’m going to go ahead and get started, just real quick, and then people can trickle in, but I did want to cover some real simple preliminary material. Welcome, by the way, to Building News Apps for Humanity. I’m Thomas Wilburn. I did want to call attention: this is one of the rooms where we’ve got our wonderful transcription taking place, which is fantastic. I wanted to go over a few notes about it. If you are going to talk, and this is mostly going to be a discussion-focused session, please introduce yourself, say your name, and it says affiliation. I don’t know if you want to network that hard, but you know, go for it, I guess. [laughter] For the transcriber: if you need to go off the record, so if you’re going to say something that you’re worried about being transcribed and then your boss reading it later as they are browsing through the SRCCON transcripts, as I’m sure they will do, please feel free to say “the next comment is off the record, don’t transcribe it,” and then when you’re done, let them know, let Norma know that it is back on the record, so that she can start writing it back down again. And then also, please remember that because this is being recorded, do speak at a glacial pace. Take your time, and that way we can make sure that everything gets captured.

So let’s go ahead. I did want to start out with a brief introduction to kind of focus and figure out where that discussion is going to go and direct it a little bit. So what we’ve got here is: in 2014, there’s a guy called Eric Meyer, who is tremendously influential on the web. In 2014, his daughter, Rebecca, died of cancer. She was six years old. It was unbelievably tragic, and at the end of the year, Facebook showed him this: here’s what your year looked like, with party balloons and partygoers, and his daughter, who had died of cancer, right? So this is not a thing where Facebook meant to be cruel, but he described this as inadvertent algorithmic cruelty, right? The people at Facebook had never assumed that your year in review would be anything other than celebratory, right, that it would be anything other than people surrounded by partygoers. So when this hit him, he wrote a post about it, and he started thinking about: as we build these systems that are automated, or that are not necessarily human-controlled, how do we make sure that this kind of inadvertent algorithmic cruelty is not a constant going forward?

Thomas: And then he wrote a book about it that’s really good, called Design for Real Life, and I read that book, and I started thinking about it. For me, this is really interesting, because as journalists, like 90% of what we do is bad news, and so we’re in this situation a lot of times where what we’re giving people—we’re telling them about disasters, we’re telling them about bad things. How do we think about how to do that in ways that are humane? In ways that are not going to be inadvertently cruel? And then the other question is, as we start to move towards a future—according to people who pontificate about the future of news—where it’s going to be increasingly automated, where it’s going to be increasingly fed by all of this different data, how do we make sure that we don’t run into all the same problems that Facebook did? How do we keep from having all of that cruelty and stress? And I think that news should be to some extent stressful. I believe very strongly in the “afflict the comfortable” part of our mission, but I don’t think we ought to be cruel about it, is my goal here.

So one of the things they talk about in the book is stress cases. You shouldn’t think of things as edge cases, because edge cases are dismissible, but if you think in terms of stress cases, that allows you to empathize with your users a lot more. And they give the example that when they were redesigning content for a home-repair big box store, all of the material for this store was phrased in this very peppy, like, oh, yay, you’re redoing your kitchen kind of way, and then when they talked to customers, they realized that people who go to, like, a Home Depot are stressed and pissed off, right? Like, their water heater just exploded, or they just had a hole knocked in the door, or they have termites, right? They’re not in a mood for someone to be peppy at them. They want simple and clear instructions, and they want things to be broken down for them. So it was a different way of thinking about that in a systematic way, and I think we could think about how we disclose stuff in a systematic way.

I sat in a session a little while back with our crime team, and I was thinking about this: they have rules about how they disclose names and descriptions of suspects. And the rule for them is that they release physical descriptions of suspects, which includes things like racial identity and gender, when it’s a suspect in a crime that’s ongoing, when somebody is still on the loose, and the theoretical idea behind that is that, oh, well, then people could avoid or capture that person. And setting aside whether or not you think that’s realistic, and I kind of have opinions on that, I thought that was an interesting question, where they had kind of sidled into this question of: this is information that’s going to cause stress; what are the guidelines that we have around that? Are those good guidelines? Maybe not, but they have at least started thinking about it, just not really in these terms.

Thomas: Another thing that immediately comes to mind for me is ads, right? For most publishers, I think, probably the biggest automated system on your site that you don’t think about is your advertisements.

And you can feel about that in different ways. Just for reference, this slide used to be titled “Ads Will Murder Us All,” so you can maybe guess how I feel about it. But ads are complicated, right? For example, I’m from Seattle, and we are undergoing a homelessness crisis. We have declared a state of emergency; homelessness is a serious problem. But homeless people don’t buy ads, so you end up in a situation like this, where there’s a shooting in our largest homeless encampment over drugs and money, and also you can win a dream house raffle. That’s fun, that’s a great juxtaposition. This kind of thing happens a lot. Or this is maybe a less extreme case: this story on a motorcyclist killed in an I-5 crash, brought to you by the Seattle auto show. We don’t have a system at the Times to do anything other than turn ads off or on. There’s not been any thought given in the system to the idea that we need to have more control on a story or topic basis. We haven’t planned for stress cases except in really the most brutal way. It’s as if we’re a doctor whose only prescription is to just kill you when you get sick. We don’t have any palliative measures, and for me that’s really frustrating.
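To make that gap concrete, here is a rough sketch of what per-story or per-topic ad control could look like. It is a hypothetical illustration only: the type names, topic tags, and ad categories are all invented, and nothing here reflects the Seattle Times’ actual ad stack or any real CMS or ad-server API.

```typescript
// Hypothetical sketch: per-story ad rules instead of a single on/off switch.
// All names are illustrative; this is not any real CMS or ad-server API.

type AdSensitivity = "normal" | "no-upbeat" | "house-only" | "none";

interface StoryAdRules {
  sensitivity: AdSensitivity;
  blockedCategories: string[]; // e.g. ["real-estate", "automotive"]
}

interface Story {
  id: string;
  topics: string[];
  adRules?: StoryAdRules; // explicit editorial override, if anyone set one
}

// Topic-level defaults mean a breaking story gets safe rules even when
// nobody has time to configure anything by hand.
const topicDefaults: Record<string, StoryAdRules> = {
  homelessness: {
    sensitivity: "house-only",
    blockedCategories: ["real-estate", "sweepstakes"],
  },
  "traffic-fatality": {
    sensitivity: "no-upbeat",
    blockedCategories: ["automotive"],
  },
};

function adRulesFor(story: Story): StoryAdRules {
  if (story.adRules) return story.adRules; // editors win over defaults
  for (const topic of story.topics) {
    const rules = topicDefaults[topic];
    if (rules) return rules;
  }
  return { sensitivity: "normal", blockedCategories: [] };
}

function shouldServeAd(story: Story, adCategory: string, isHouseAd: boolean): boolean {
  const rules = adRulesFor(story);
  if (rules.sensitivity === "none") return false;
  if (rules.sensitivity === "house-only" && !isHouseAd) return false;
  return !rules.blockedCategories.includes(adCategory);
}
```

The point of the topic defaults is exactly the stress case Audrey describes later: the bad juxtaposition gets noticed quickly, but there is no lever to pull once the page is live.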

Moving forward. I don’t know how many of you are familiar with Tay, which was the bot that Microsoft put together, and immediately the internet taught it to be a Nazi. So this was a bot that they released into the wild, and it’s something that—I believe there are even a couple of sessions here, right—we’re going to use bots, we’re going to bring robots into the newsroom. Facebook is starting to push this hard, the Washington Post just rolled out bots for people to talk to, and I think we should be asking ourselves how this can go wrong. There’s a really good Motherboard article, which I will link to, where they actually asked people who build bots: how do you stop your bot from being racist? And it turns out that they spend a lot of time thinking about this. Has your newsroom spent a lot of time thinking about this? I guarantee you that ours has not. If we were building bots, like, that’s just not going to happen.

And then this one. Is anyone here from BuzzFeed? Oh, thank God, that was going to be awkward. I’m not really picking on them; this is just where this happened. There was a writer called Kate Leth. She wrote on Twitter, she said, all I want is a check box that says “do not embed this tweet on BuzzFeed,” because she wrote a series of tweets about a superhero costume. She’s a writer on a couple of different comic books, and she wrote these things, and then BuzzFeed did what BuzzFeed does, which is they aggregated them and put snarky little—not snarky, like, cute little posts in between them. And it’s supposed to be a feel-good story, but the problem is that a Twitter embed is active, right? It sends people right to that tweet and right to the profile. So she was immediately swamped by the kind of people who taught Tay to be a Nazi, right? That just happened. This is something that social networks really want us to do; they really, really want us to feed into their network and make their products better by using it and embedding it.

But as we all know, social networks are also really, really bad at fighting harassment, and when you put somebody in front of your size of audience, are you actually helping them? Is that beneficial? Are you giving them exposure? People die of exposure, right? Like, that’s the saying, and it’s true. Or are you basically siccing your audience on them? And if you think about your commenters, are those people that you want going after you? Because I personally would not touch them with a ten-foot pole, right? That’s terrifying.

And that brings me to the last thing that I’ve been talking about, which is: we have a comment system at the Times, and it’s a cesspool. The person who used to moderate it left, and bad moderation is no moderation, because people just circulate, and as a result we’re kind of having this conversation. But one of the things that’s been really interesting that I’ve been hearing from reporters and photographers is that they go out and people don’t want to talk to the Times at all because of the comments. Like, people will actually say to our reporters, oh, I don’t want to talk to you, because your comments are so nasty that it’s too toxic. I don’t want to be involved, I don’t want my name in your paper, I don’t want those people approaching me. I don’t know how widespread that is, but I’ve heard it from multiple people, which really surprises me. It’s another one of those cases where you’ve let this happen. You put a system in place where people could publish sort of whatever they want—maybe there’s a flagging mechanism if somebody catches it, or maybe there’s whatever—but you’ve just hoped that they would police themselves. Is that actually something that you’re able to maintain? Is that something that you can keep away from cruelty, or do we need to think about it in a way that prevents abuse, right? So Zoe Quinn, of Gamergate infamy and all that crap, said—and I think this is very true—it’s 2016: if you’re not asking yourself “how could this be used to hurt someone?” when you design a product or a web page or a news app, you’ve failed.

And so that’s where I want to open this up now, and bring it to the room, and have a discussion about this. How can we make sure that the things that we build are not going to be harmful, that they’re not going to be used to hurt someone? I wanted to focus maybe on these three questions, and we can work our way through them, and if it drifts off, that’s cool, too. So first: what are the worst-case scenarios that we’re not thinking about? Is there stuff that I have not put up here that is uniquely vulnerable, or intrinsically vulnerable, to design that’s inhumane? Second: what are the mistakes and successes we can learn from? And this is the part where, if you have to be like, “I have to go off the record because I’m going to talk about something stupid that my employer did,” that’s cool. But I’m curious where the cases are where your organization has gone through this, and what you have learned, and I’m really interested in how I can learn from you all on how to make this work. And lastly: how can we make that empathy a process? I know a lot of us have an intake process when we’re working as part of a newsroom dev team or in the newsroom itself. How can we make this a part of what we’re thinking about when we pitch stories, when we develop them, and when we publish them? How do we make sure that they are carrying forward those ideals?

Thomas: So we’ve got a fair number of people. I’m going to try to keep this a reasonably fluid discussion, and I’m going to try not to have to pass the mic around, but if you feel like you need the mic, feel free to raise your hand. And yeah, does anyone want to get started? Like, are there any worst-case scenarios that you want to bring up?

PARTICIPANT: I can talk about the area where we’ve—hi, my name is Lauren, I work at Vox Media, and we have stopped ourselves from putting a few different tools out into the world. Up until recently, one of the teams that I worked with was a tools team that made it easier for a reporter to throw a title around something, define a context around it, and embed an interactive thing into their articles. And we built a really cool tool that allowed you to define a foreground, and then the user could upload a custom background image, and we thought about different ways of using that—like, The Verge is our tech site, and when the Amazon Dash came out, doing a button just as a funny, jokey thing. There are a lot of different examples of this, but we’ve stopped ourselves every time from putting that kind of user-engagement-style product out into the world. I guess we weren’t intentionally thinking about this, but we just knew that somebody is going to put a penis on it and it’s going to have Vox Media’s logo on it. We don’t know how to prevent against that. So I guess that’s the context I’m thinking about this in: user-generated content, or allowing the community to be creative. Like, how do you do that in a way that the Nazis don’t come and take over your system?

PARTICIPANT: I’m Audrey Carlson. I also work at the Seattle Times, and just to give—

Thomas: She’s a plant.

Audrey: I’m here of my own free will. Thomas was talking about the example of poorly juxtaposed ads on a page, so for instance, when you have this big breaking news story about a shooting in a homeless encampment, and the ads surrounding it are all about this dream home raffle. For me, that was in some ways a worst-case scenario, because we all noticed it pretty quickly, and it still stayed up for the next 24 hours on our site, because it was a special takeover ad section that had been planned, you know, weeks or months in advance. And the ad seller was even consulted about it, and they knew the context, and they chose not to take it down. And so from within—I guess it wasn’t the newsroom, but within our own marketing department—the decision was made to defer to that decision, and the newsroom had no say. So as the days rolled on and we had new stories about this same shooting, we had to just keep living with the fact that all of that was showing up with those ads. So sort of the second step is, once you identify it, there aren’t even ways to fix it on the fly.

PARTICIPANT: I’m M—, I work at The Intercept. As far as the ads topic is concerned, one thing that really irks me is when I’m at a news site and there’s an ad for a politician. So recently, you know, you’d read the Times, and, like, Hillary banners were all over the place. They were on the right team, I guess, so that’s good, but still, it kind of feels weird that this is a news organization, which is supposed to be, you know, an objective, balanced place, but the Hillary campaign is, like, funding some part of it. Of course, this is a bigger discussion, but it almost makes me feel that there should be some rules about what types of ads news organizations should be willing to put on their sites, and I think political ads may not be one of those. Or should not be one of those.

Thomas: So these are really good points. I was a little bit worried, because everybody loves to bitch about ads, and so I kind of wondered whether or not I should even introduce it. I do want to bring up one of the successes that I’ve seen lately, actually, and maybe that will trigger other people to think about it. One of the things that I saw lately was that Breaking News rolled out this interface where you can inform your area that there’s breaking news, but they didn’t allow you to type your own description of what it is. You could just choose an emoji, so it was kind of presanitized: like, these are the options we’re going to give you. They may carry implication, but A, we can preselect them, and B, we’re not going to let you fill out free text so that you can use this to cause a riot or incite some sort of mass rampage or mass evacuation, right? We’re going to sanitize that down, but then it still flags it for the newsroom and for users. I thought that was a really smart way of taking user input that you might be worried about and putting it behind a translation layer that would keep it from being too toxic. And I don’t know if anyone has done other stuff like that. I think a lot of the concerns that we have are around user-generated content, and that’s one of the ways that we’ve maybe tried to keep it from sinking into the abyss.

Yeah?

Yeah, if you don’t mind.

PARTICIPANT: Hi, I’m Lynn with Civil, and you know, we sat around thinking all day about how to make comment sections better, and one of the things that surprised us—we were focusing on how you get your community to police itself. We put in a pause whenever somebody’s making a comment, because we were realizing that when you’re out in the real world, your social interactions are very different from your online interactions, to say the least, right? And just kind of by happenstance, when people were making a comment, we put in a pause where we asked them: is your comment civil? And we weren’t really expecting much to happen with that. Except what we found out is that by putting that pause in—when you’re online, you want everything to be faster, and I think when you’re on a news site, you want to make your site as fast as possible, everything needs to be fast—suddenly this person who just wants to shoot from the hip and make a comment is asked to think, like, to stop. And we found, to our total surprise, that 25% of users, in their first five comments, went back and rewrote their comment when we asked them to sit and think for a minute about what they were writing. And we were just floored, and we’re seeing it time and time again. So, you know, people actually self-police whenever you ask them: hold on a second, before you sling that mud, is that really what you want to say? It was just an interesting observation, you know, again contrasting with how we try to do everything faster. Are there other ways in the newsroom that you can put that pause in with your audience?

And get them to think before they act or before they react? Thank you.

Thomas: So that’s interesting. And when you do that, is it just, like, a button that they have to click, yes or no, or how do they indicate it?

PARTICIPANT: Sorry, I should not have taken the mic away from you.

Lynn: No, it’s fine, I’ve got a big voice. So it’s a platform where, when you do make your comment, you’re asked to evaluate your tone, but then you’re also asked to evaluate the comments of two other people. And what this does is give you a front line of review, so every comment going onto your site is being looked at by a human. And what we find is, if you look at any given community, about 90% of what’s in the middle is pretty consistent for every community, whether you’re NPR or ESPN; it’s that five percent on either side where your civility tolerance is going to be a little bit different. We find that what’s in the middle is what people expect when they go online, and that people with a vested interest in being online are very willing to go in and not only self-moderate but moderate their peers. So that’s the whole premise of the system, and that’s how we’re finding some really interesting data on how people act in what they think is a typical comment section, where they can just shoot from the hip, and they’re being asked not only to be more introspective, but also to influence how others are behaving online. And, you know, it doesn’t become an echo chamber; it’s really people just making sure that there’s a basic level of civility, so that it doesn’t just get stomped out by a few loud voices. Does that make sense?

Thomas: Do you do anything—like, one of the things that has come up a little bit is self-care, and if you’re a person who maintains the comment section, you’re face to face with that a lot, and it’s really draining. Is there a worry that—like, is there a prefilter to keep—

Lynn: Yes. Yeah. It’s kind of like the best of the algorithms plus the human intervention. And you know, when you put the bot up there, I had to laugh, because if you could bot your way out of this, Google and Facebook and all of these smart companies would have done it by now. But I feel like there’s an environment where human interaction isn’t a part of their DNA, so they’re just trying to bot their way to a solution, and I think by adding that human piece, the worst of the worst is filtered out: all the spam, and the “make all this money while you work from home” stuff. So that’s all filtered out from the beginning, and all of your obvious stuff that’s going to be really awful to look at. But it’s more that human language is so nuanced: how do you deal with racism and misogyny? Those are the things that come through. Or even things like doxing—a lot of people online are in smaller communities where everybody knows each other’s pseudonym and knows each other’s handle, and it’s like they don’t realize that most of that is really against the rules. And so we find that, having that attached to it, you’re able to police a community that is so desperately wanting to be policed and moderated. Like, they want this level of moderation, so that they can keep contributing.
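For anyone trying to picture the mechanics Lynn describes, here is a minimal sketch of the two pieces of that flow: the pre-submit pause and the peer-review step. Every name and threshold below is an illustrative guess, not Civil’s actual system.

```typescript
// Hypothetical sketch of a "civility pause" plus peer review.
// Not Civil's real API; names and thresholds are invented.

type CivilityRating = "civil" | "borderline" | "uncivil";

interface PendingComment {
  id: string;
  authorId: string;
  body: string;
  peerRatings: CivilityRating[];
}

let nextId = 0;

// The pause: before a comment is queued, the author is asked to rate
// their own tone, and can go back and rewrite instead of submitting.
function submitComment(
  draft: { authorId: string; body: string },
  selfRating: CivilityRating,
  queue: PendingComment[]
): "queued" | "returned-for-edit" {
  if (selfRating !== "civil") {
    // Don't block anything; just hand the draft back to the author,
    // which is where the 25%-rewrite effect Lynn mentions shows up.
    return "returned-for-edit";
  }
  queue.push({
    id: String(nextId++),
    authorId: draft.authorId,
    body: draft.body,
    peerRatings: [],
  });
  return "queued";
}

// The peer-review step: each submitter also rates two other pending
// comments, so every comment is seen by humans before publication.
function pickCommentsToReview(queue: PendingComment[], authorId: string): PendingComment[] {
  return queue.filter((c) => c.authorId !== authorId).slice(0, 2);
}

function isPublishable(comment: PendingComment, minRatings = 2): boolean {
  if (comment.peerRatings.length < minRatings) return false;
  const uncivil = comment.peerRatings.filter((r) => r === "uncivil").length;
  return uncivil / comment.peerRatings.length < 0.5; // majority must call it civil
}
```

The design choice worth noticing is that the pause censors nothing; it just adds friction at exactly the moment the commenter wants speed.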

Thomas: OK, thank you. OK. Excellent. Yeah.

PARTICIPANT: I will not try to project. So my name is Marie, I also work at Vox Media, on the platform team, and I’m on the team building what we don’t exactly call the content management system—a publishing platform, there we go. And my job is really communicating to all of the people who use that platform, and helping them learn about new features and how we’re rolling things out. When I first started, we had basically one way of doing that within the app itself, and it was a big kind of flash message at the top. And the voice of the app early on was really fun and cool, and so I was like—I don’t know, it was my second day—oh, cool, we released a new feature, and I get to be funny and quirky, and I get to do this in my new job. And the next day my boss came in and she was like, hey, this is OK, but maybe think about what it’s going to look like for somebody who’s opening the app and writing a story about something that is not fun and goofy at all. And after also reading Eric and Sara’s book—which, if you haven’t read it, is phenomenal—it really got me thinking: OK, every time we’re communicating, even internally within our newsroom—and we have some brands that are really kind of fun and goofy—all of them ultimately are talking about serious issues at some point or another. You know, what is the tone that we’re taking when we’re trying to let people know things, and how do you inform people in a way that’s interesting, but also respectful of the fact that sometimes some really heavy shit is happening—a lot of times, now. And one of the things that we ended up doing—again, you can’t always solve the problem right in the moment—was to let people remove those notifications so they’re not so disruptive, so you do get to have that choice. And we’ve also talked about how we expand that and make it something that our editorial teams can access as well, so that when they have either kind of more day-to-day mundane things, or when there’s, say, a situation where you want to let everybody know, like, hey, let’s put a pause on our social updates while this breaking news situation unfolds and we don’t want to be in the middle of it, you have that medium to communicate to people. So it’s interesting to think about. There are lots of things you can do.

PARTICIPANT: What was the book you mentioned?

Marie: It’s Design for Real Life. And I’ll put a link to that.

Thomas: Anyone else before I take it back to Lauren?

Lauren: I would just say the first obvious thing that we should all be thinking about when it comes to designing for these types of use cases, or specifically the thing that you pulled up, is just: think about hiring diversely. The more diverse the people you have contributing to the actual act of creating the stuff, the more they’re going to know about the nuances that a team of white men would not—like, how this thing is really gendered. I’m not going to—no offense to you. But have a diverse staff, a diverse collection of contributors to the things that you’re building. And the second thing, which we think a lot about when we do product design, and which my team is now doing more of, is user testing. I know that’s hard to do with breaking news on the fly, or with something like the Facebook situation where it’s dynamically generated and it’s going to be personalized for every single person, but the more that you can low-fi test it on people before it goes out to the public, or in a small beta, the more you’re going to catch something that you didn’t think about previously.

Thomas: So that leads, actually, kind of indirectly, to something that I had hoped would come up. A lot of this for me is that I’m really interested in breaking news, because I think it’s really easy to screw this stuff up when you don’t have time to think about it, which is part of why I think we want it to be a part of the process. But also I think it’s instructive to think about the ways—like, for example, we have a breaking news system that is built to send out alerts to email, to the mobile app, and so on. Is the language in there chosen so that it’s going to be relatively neutral towards whatever it is? Because we send out breaking news alerts for natural disasters, but also for things that are more positive, and so it might be easy to assume one way or the other. I think it might be more interesting to think about it the other way: what if your whole breaking news system is engineered for doom and gloom, and then you send out puppies, and it seems like they’re puppies of doom? So that’s what I would like to avoid.
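One way to picture the fix Thomas is gesturing at: if alert copy comes out of templates, the templates themselves can carry a tone, so a pipeline tuned for disasters doesn’t make puppies sound like a catastrophe. The tones, prefixes, and function below are invented for illustration; the session doesn’t describe the Times’ actual alert system.

```typescript
// Hypothetical sketch: tone-aware alert templates, so breaking-news copy
// isn't hard-wired for doom and gloom. All names here are invented.

type AlertTone = "urgent" | "neutral" | "positive";

interface AlertTemplate {
  prefix: string;
  allowExclamation: boolean;
}

const templates: Record<AlertTone, AlertTemplate> = {
  urgent: { prefix: "BREAKING:", allowExclamation: false },
  neutral: { prefix: "News:", allowExclamation: false },
  positive: { prefix: "Just in:", allowExclamation: true },
};

function formatAlert(tone: AlertTone, headline: string): string {
  const t = templates[tone];
  // Strip trailing exclamation points unless the tone explicitly allows them.
  const body = t.allowExclamation ? headline : headline.replace(/!+\s*$/, ".");
  return `${t.prefix} ${body}`;
}

// formatAlert("positive", "Shelter puppies find homes!")  -> "Just in: Shelter puppies find homes!"
// formatAlert("urgent", "Earthquake reported downtown")   -> "BREAKING: Earthquake reported downtown"
```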

So let’s, if you don’t mind—I’m interested in continuing this forward. Is there anyone here who has tried to make this kind of thing a part of your process? Like, is this something that, as a part of every story, you try to think about? Or every news app, for example, when you’re designing a visualization or something: is this something that anyone has brought into their process or introduced?

PARTICIPANT: Yeah.

PARTICIPANT: Well, I’m Martin McClellan, and I work at Breaking News, so the feature that you were talking about, which we call tipping, is something that we’ve thought about from beginning to end exactly in those terms. Can we allow people to upload photos? Well, if we do, they’re going to send us pictures of their penis. Can we allow people to upload photos that we can filter somehow, and how do we know that they’re actually current photos of something they’re witnessing, or are they fake? There are a lot of fakes that come up; you see them again and again and again, of a particular type—like the storm clouds over New York, or the crowds rioting, the same ones over and over and over, and a lot of times people believe they are real and retweet them or send them along. So that was one of the things that we decided when we went with the emojis: the emoji is a marker of the user. We would never use an emoji as part of our editorial style, so it sets them apart immediately. It actually gives the user a little bit of a voice, but without too much, and it makes it so that we don’t have to edit them, except for abuse, which we can catch in another way, so…
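Martin’s emoji tips are a tidy instance of a general pattern: constrain user input to a fixed vocabulary, and sanitization collapses into simple validation. A small sketch of that pattern follows; the emoji set and labels are invented, since the session doesn’t spell out Breaking News’ actual implementation.

```typescript
// Hypothetical sketch of presanitized tip input: users pick from a fixed
// emoji vocabulary, never free text. The vocabulary here is invented.

const TIP_EMOJI = {
  "🚨": "emergency services visible",
  "🔥": "fire or smoke",
  "🌊": "flooding",
  "👥": "large crowd",
  "🚧": "road closed",
} as const;

type TipEmoji = keyof typeof TIP_EMOJI;

interface Tip {
  emoji: TipEmoji;
  label: string; // editorial wording, never the user's own text
  location: { lat: number; lon: number };
  receivedAt: Date;
}

// Because the input is an enum, "sanitizing" is just validation: anything
// outside the fixed set is rejected before it reaches the newsroom.
function acceptTip(raw: string, location: { lat: number; lon: number }): Tip | null {
  if (!(raw in TIP_EMOJI)) return null; // not one of the preselected options
  const emoji = raw as TipEmoji;
  return { emoji, label: TIP_EMOJI[emoji], location, receivedAt: new Date() };
}
```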

Thomas: It’s interesting to me that for at least several people, the process of selling this thinking to management has been “let’s avoid dick pics,” which is, I guess, one way to get people to take it seriously in a newsroom, but it also seems kind of terrifying to me that that’s how we have to actually get this through. Does anyone have, like, times when this has not been a part of your newsroom process and you wish it had been?

Lauren: I’m sorry, someone else can talk. I’m just—…

Thomas: No.

Lauren: I’ll say that Vox Media has had some pretty public examples of stories that we’ve written recently that did not go through a process of thinking about how this could hurt someone. One of them was—oh, gosh—this football player who had been accused by many, many people; like, it’s very clear that he had allegedly raped someone. And the whole story that an editor on one of our teams put out was looking at it from the perspective of how could this have happened to this white man who was on such a strong path in the beginning, and how did he turn? It was all told from the perspective of what went wrong with him, and not from the perspective of the people who were actually hurt by him, and it was just a bad story all the way around. And we realized that among our errors there was a lack of diversity on the staff of that team. There were concerns raised by a few people in leadership who had read it that ultimately got overruled, so there was a lot of bad process in place. We’ve now had a committee go through it, figure out what exactly went wrong, and put fail-safes in place to prevent that from happening in the future. It goes back to having diversity not only on your team, but also in positions of leadership.

And what was I going to say? Something else? I’m done talking.

PARTICIPANT: My name is Bea Cordelia. The last place I was at was an environmental nonprofit that had a quarterly magazine, so we weren’t doing breaking news, but our staff was, as most environmental nonprofits are, very, very white and very, very middle class. So one of the things that I did was, we were going through a redesign, and we did our demographic analysis, and I started creating user personas that were for people who weren’t our basic constituency. So I was putting in lower-class women, communities of color, and I was giving those personas to all of our content creators, including all of our writers and editors, and telling them: OK, when you’re coming up with a new issue, think about whether you’re talking to these people and whether or not you’re leaving them out. And that is starting to actually change how they write. And it’s nice that not every story is only going to a 65-plus middle-class woman, which is what environmentalism generally targets.

Audrey: This is Audrey again. I think this brings up a really interesting point, too, of, like, what do we consider to be cruel? Because I don’t think that’s, you know, a completely well-defined thing, or that we would all agree on it. Something like Thomas’s first example, I think everyone would have that cringe moment, but there are other things where people would say, oh, they’re being too sensitive about it. There’s kind of a disconnect there between maybe some actual hurt being felt versus the person who chose the headline, or chose the picture to go with it, or whatever it is—media isn’t taking that seriously. And I think that’s also an interesting point in all of this, and it goes back to what Lauren was saying: having different perspectives in the room can bring different ideas of when something is okay and when it’s not, and can catch things before they go out the door. And I think a really important part of a process that’s trying to avoid this—or create these stress-free environments as much as possible—is recognizing that we all have a different definition of what’s going to be stressful or hurtful or cruel, or what will negatively impact different groups that we may or may not be thinking about. And it’s one thing to be thinking about building these apps that maybe just accidentally juxtapose things: a robot isn’t going to have the same sort of judgment that we could make, so we have to go in there and put in those catches. But then the side of it where the cruelty or the hurt comes in is us as humans not thinking about it from someone else’s perspective, which is maybe something that is a little—well, I was going to say easier to tackle, but sometimes it’s also a lot harder to tackle, because it involves a lot more, I think, actual conversations and trying to understand people from different perspectives and what might be considered OK or not.

Thomas: Oh, sorry. Yeah. Absolutely.

PARTICIPANT: I’m Lisa Wilkins, and Martin knows that I’m always playing devil’s advocate, so I will play it here. There’s a part of me that wants people to be sensitive to people’s backgrounds, and where they come from and who they are, and gender and race and all that. And there’s another side of me that’s like, well, the story is the story, and you can kind of make it a sensitive story for the people who are reading it, but you also realize that there’s just the story here. So, like, you were talking about the Vox Media article where it was asking how did this guy go wrong? That’s really important. There are a lot of guys—or, you know, people—who go wrong, and I think that story is valuable. The other side of that—well, not the other side, but what causes you to maybe unpublish a story like that is that knee-jerk, reactionary Twitter mob, or the Twitterverse, that comes out and is like, I can’t believe you just said that. And it’s like, well, these are all very different viewpoints. I mean, the story is the story. You can’t just eliminate some facts or give it a nice rosy polish because you don’t want to insult somebody. It’s news, it’s happening. So I would rather get all of the facts and maybe be offended than have some things held back and not get the full version of what’s going on.

Thomas: No, I think that’s a good point to make. I would want to distinguish a little bit between inadvertent cruelty and—I don’t want to say “advertent” cruelty—because I think sometimes you’re telling a story the way you mean to tell it. I worry more about the times when you didn’t mean to tell it that way.

PARTICIPANT: But when you’re not thinking about it and it happens, and you’re not intending it to, you will still have the wrath of the Twitterverse on you regardless, and then you’re pretty much blackballed, you know? It can really destroy a person: they’re not intending to do something, they do it, and then they’re left wondering, what shitstorm did I just pull up there?

Thomas: Sure, yeah. It’s an interesting question. We had a thing a little while back at the Times where we wrote a headline, unintentionally, in a way that turned out to really offend the person that the headline was about. And I’m really less concerned about the fact that we wrote the headline, although it bothers me a little bit. I’m more concerned with the fact that it took us eight hours to acknowledge that the headline might not have been a good idea. So I think that’s a good point. What I’d like to do now, since we’ve got about 15 to 20 minutes left—oh, I’m sorry.

PARTICIPANT: Yeah, I just wanted to ask a question. I’m Sandra. I work at … [inaudible] I’m not a journalist. I work with … but I’m not a journalist. So I think that after your comment I will ask some questions, because I maybe think that all of you are journalists. So this honesty that’s being brought up with, for example, the Vox story—I mean, where is the honesty in your articles or in your writing? Where is the border between that and where you’re just provoking your audience? I would ask that. Because if we are talking about a story about someone being raped, and you’re writing about the story of the rapist instead of the victim—OK, I’m OK with it, but then you would have to be honest and say, I’m telling the story of the rapist, not the victim, right? So where is the line for that with you when you put content online? You said, for example, there are limits in how you subjectively gauge something as cruel or not, but I don’t know—I’m from Mexico, so for me there are various lines of what cruelty means in media, like having people without heads in the media all the time, for example. So I think that there are, like, human guidelines, and this is my view as just a consumer of news, because I’m not a producer. So I would ask you as journalists: how do you get to human empathy? Is that something that you think about, or is it just like, OK, I want my story, and I don’t think about empathy with other people or other countries or other cultures? I would like to ask that and open that conversation, too. Like, what’s your border, or your front line, for deciding what to publish or not? I mean, what’s the ethics of the stories and the breaking news and this kind of thing? Is it subjective, or do journalists have, like, general ethics, or how is it? Thanks.

Thomas: I think that’s a really good question, and maybe to broaden it, or to consider as people are thinking about how they want to answer: whether people have this written down, or whether it’s just a thing that they do. Like, in our discussion with the crime team, a lot of the time there was a really general feeling in the room of, oh, this is the policy. But then everybody kind of had a different take, and there was no written policy of: this is when we publish an address, this is when we publish a suspect description. It wasn’t nailed down in a really specific, documented way, which gave people a little bit of leeway, and sometimes that slipperiness is concerning, right? So does anyone have a case where you’ve got that written down, or where you’ve tried to create a policy specifically for that? I’ve noticed that, for me, management is often very nervous about writing these things down. There’s a lot of nervousness around the legal responsibility that you would have if you had a policy towards these kinds of things, and so there are often a lot of concerns surrounding this question of what we do with people’s information, and that will be used as a tactic to not introduce it into the discussion. For me, at least.

I don’t think you have any takers. I’m sorry.

PARTICIPANT: It’s OK.

Thomas: But it’s good. I think that’s a really good question to ask. What I’d like to do for the next ten minutes—and I appreciate everyone who spoke up—is, at your table, a quick kind of guided five minutes. If your table could pick something about your site or your organization and ask: how could I redesign this? Like, what’s the thing that you’re worried about, and how would we fix it? And then, if we’ve got time, report that back out at the end of the session. I think that would be really good. So if you could take, like, five minutes and do that, that would be great, and if anybody needs Post-it notes or anything like that, I have those back here, and I can bring them around.

[group activity]

Thomas: OK, you’re all having really intense conversations, which I think is a really good thing. I’m going to be really cruel and cut them off, but I hope you’ll continue these conversations outside. Given that these are really intense, I wanted to open it up—we’ve got three minutes before we’re technically done here. If there was anything that came up at your table that you think was really interesting, would anyone want to report out to the whole group about what you’ve been discussing, or what it is that you’re trying to fix as far as this goes? Yes. Thank you.

PARTICIPANT: OK, so this wasn’t really the discussion at the table, but I feel like, as we get drowned in software, we are off-loading a lot of decision-making to computer programs, and software is never going to be good at making these complicated decisions about very subtle things, right? And I think as media companies get heavily influenced by Silicon Valley and try to turn themselves into software companies, and less and less into news organizations, this is one thing that they need—that we need to keep in mind, right? Because our goal is to provide information and news on, a lot of times, very sensitive issues, and if we just mimic them—like, for example, recently some huge VC went off on Twitter about how, oh, there should be a startup thinking about an app to solve police shootings, right? And he thought that was a valid—a good idea that he should tweet out, so that people in Silicon Valley could start making apps about that. And I think it’s that kind of thinking that will lead us in a horrible direction, so I wanted to make that spiel. OK.

Thomas: OK, thank you. Anyone else want to report out from what you were discussing at your table? No? OK. Someone over here I haven’t heard from.

PARTICIPANT: I think one of the things I’m interested in is—so I do a lot of visual journalism, creating visualizations or charts or whatever based on some underlying dataset—how do you not turn someone into just an abstraction, whether that’s a lot of points or little cartoon figures or something, when the numbers are high? How do you keep stories humane when you’re using visualizations, which are just sort of lines and dots and things like that?

Thomas: OK, cool. So technically it’s now 12:59. Lunch will start in half an hour with the lunch conversations. Nobody has told me that you can’t all sit here, so if you’re having a good time continuing the discussion, feel free to continue. And if you have anything in mind that you want to share with the group, please keep in mind that we do have the etherpad for the session, so you can share links, or ideas, or problems that you’ve solved or run into, there. I really appreciate all of you joining me here today. Thank you.

[applause]