Session Transcripts

A live transcription team captured the SRCCON sessions that were most conducive to a written record—about half the sessions, in all.

Better Analytics: Why You Have to Define Success Before You Use Analytics—And How To Do It

Session facilitator(s): Tyler Fisher, Sonya Song

Day & Time: Friday, 2:30-3:30pm

Room: Classroom 305

TYLER: Hey, everyone. Everybody, we’re going to get started. Welcome to Better Analytics: Why You Have to Define Success Before You Use Analytics—And How to Do It. Very robust discussion title.

We have a huge number of people, which is awesome. We’re going to be doing a lot of group discussion, so feel free to speak up, especially because we’re having a live transcript in here. So make sure to have your voices heard. Also, I know when people talk in public, they get a little bit squirrelly. I would encourage you not to fear it. But if you have any problems with that, you can also go off the record—if you don’t want something in the live transcript, you’re free to make that private.

So I’m Tyler Fisher, I work at NPR, and we have Sonya, who works at Chartbeat.

So we’re going to talk—both of us are going to do a little bit of what we’ve done so that you know where we’re coming from in terms of big analytics work we’ve done in the past and what we might be able to help you with.

Quickly, there are links to the etherpad and the slides. If anyone wants to take notes, that would be great. Don’t feel pressure to.

Yeah, so I’m Tyler—my name without the E on the social networks. I’m a developer at NPR; I used to be at the Northwestern University Knight Lab and the Chicago Tribune. And most of the work I do is figuring out how we can learn from our storytelling. Most of what I do is visual projects—visual storytelling or election-time pieces, things that aren’t necessarily reproducible. But we still want to learn things about our audience from those scenarios. So I’m usually instrumenting analytics on those sorts of special projects and trying to design an experience that’s useful going forward.

We had a story format for slideshows, with audio and other forms of media too, and we tracked them in very similar ways for the roughly two years we were doing it.

So each of these stories began with a title card and a big begin button. So I tracked the begin rate: of the people who landed on the page, how many actually clicked to begin?
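[Editor’s note: the begin-rate measurement described here reduces to a simple ratio. A minimal sketch, with invented counts—in practice these would come from an analytics event report, such as a pageview count and a custom “begin” click event:]

```python
# Begin rate: of the people who landed on the page, how many clicked Begin?
# The counts are hypothetical; in practice they would come from an analytics
# event report (e.g. a pageview count and a custom "begin" click event).

def begin_rate(pageviews: int, begin_clicks: int) -> float:
    """Fraction of landing visitors who clicked the begin button."""
    if pageviews == 0:
        return 0.0
    return begin_clicks / pageviews

print(f"{begin_rate(10_000, 6_200):.0%}")  # prints "62%"
```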

And sometimes when we had audio in the story, we had a thing that said put on your headphones, with just a little headphone icon. And we found—it’s not a huge sample size, but the bottom four right here are the audio ones. Even the best performing didn’t have a great begin rate. So that suggests we’re losing some of our audience by signaling audio. So does that mean you would stop using audio? Well—well, I’ll get there. We’ll come back to that question.

I did another thing. So these shows—some of them are really long, some of them are shorter, and I wanted to know: was there a relationship between the number of slides and completion rate? And I found that there is a slight negative relationship, and it’s mostly driven by that outlier at the beginning, the crazy 130-slide piece that we did, which was frankly too long. But—especially if you take out the outlier—it was barely a relationship. I was curious to see whether we could build a piece as long as we wanted to build it, and whether, in the slideshow format, people will click through if your story’s compelling. And I would say the ones at the top, in my opinion, are the most compelling stories we told.
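[Editor’s note: the slide-count question can be checked with a plain correlation. A sketch with invented data points—the real numbers aren’t in the transcript—showing how a single long outlier can dominate the slope:]

```python
# Pearson correlation between slide count and completion rate, computed with
# and without an extreme 130-slide outlier. All data points are invented.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

slides     = [130, 12, 20, 35, 18, 40, 25]
completion = [0.15, 0.55, 0.48, 0.52, 0.50, 0.54, 0.47]

r_all = pearson_r(slides, completion)              # strongly negative
r_trimmed = pearson_r(slides[1:], completion[1:])  # weak once the outlier goes
print(round(r_all, 2), round(r_trimmed, 2))
```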

So, yeah. And then back to the audio thing. The other two things I was really tracking a lot were completion rate and what I call engaged user completion rate—completion among only the people who actually clicked begin, so we’re excluding bounces and things like that. How many of those people finished the story? And I found that with our audio pieces—I can’t flip the sorting, this is an image—but if you flip the sorting of this table, the audio ones are a little lower on regular completion rate. But on engaged completion rate, they move up closer to the top.
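[Editor’s note: the two completion metrics contrasted here differ only in denominator. A minimal sketch, with invented counts:]

```python
# Completion rate uses all pageviews as the denominator; "engaged user"
# completion rate uses only the visitors who clicked Begin. Counts invented.

def completion_rate(pageviews: int, completions: int) -> float:
    return completions / pageviews if pageviews else 0.0

def engaged_completion_rate(begin_clicks: int, completions: int) -> float:
    return completions / begin_clicks if begin_clicks else 0.0

# An audio-heavy piece: fewer people click Begin, but those who do finish more.
pageviews, begins, completions = 10_000, 4_000, 2_600
print(completion_rate(pageviews, completions))       # 0.26
print(engaged_completion_rate(begins, completions))  # 0.65
```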

So we’re finding that while we’re losing a larger portion of our audience, the ones who actually want to hear it are more engaged users. So that’s sort of a catch-22. Is your goal just getting a ton of people to finish the story? Then you don’t want audio. If you want engagement with people who are actually interested, audio is a good thing—if you have audio, and at NPR, we do, it’s a really good thing for us to include.

So that’s some of the work I’ve done. I’ve also published a lot of things on our app, and there are some independent projects I’m working on. That’s all I’ve got as an intro.

SONYA: My name is Sonya Song, I’m a media researcher. I was an OpenNews fellow, and I worked with the Boston Globe, and before I joined Chartbeat, I worked at other institutes as well. My background is media psychology. So I analyzed what people would read on Facebook and published a report called Sharing Fast and Slow—you can still find the research. I tried to plug in as many theories as possible: why do people share, why do people click without sharing? Go to the next slide. Yeah.

So here I want to show you some examples—some results generated by Chartbeat data. I want to show you that stories perform differently on different platforms, through social and search. So that’s why sometimes we just cannot expect certain performance from some stories, because they don’t have that nature. So this is one example. I’d like to look at breaking news. On the top you see some labels: Paris attacks, Thanksgiving, San Bernardino shooting, the debates, Republican and Democratic, and Christmas. And here’s the volume of traffic from Google, Facebook, and Twitter. And then we see that for the Paris attacks and for the shootings, yes, the volume went up a lot. So this is, you know, expected, right? No matter from which stories, the volume went up simply because there were surprises over there. And then if you go to the next slide, this is where we cleaned up the data and removed some daily and weekly cycles, and here it gives a variety—which means the number of different articles people would actually read during those moments. So here we see for the Paris attacks, for the shooting, the line actually went down. That means people didn’t necessarily read more articles. During those moments, people would read a smaller number of articles. That means news consumption is targeted. If you look at survey data, people say, no matter where I got news first, I would go to those brand-name news organizations for updates: CNN, BBC, et cetera.

And then of course during Christmas people would read fewer articles.

So this is one example that shows stories just perform differently. So when you think about how to define your success, maybe you should first understand where you will find the benchmark, you know? What categories, what kinds of metrics are you comparing against?

And then the next one is, you know, this is a scatter plot. One axis is social, one is search. This is too complicated—I will talk about it later if we have time. It just shows that stories perform differently on search and social. So this is, again, another example.

TYLER: Cool. So that was a brief introduction to some of our work, so you know where we’re coming from if you have any questions about how we’re instrumenting these kinds of things. But mostly we want to make this about you, in the spirit of SRCCON. So we’re going to do three things. We’re going to have a large group discussion about news stories and success generally and what that means to you. We’re going to have a small group activity where we assign groups specific types of stories, and you’ll define success for that type of story and how you would measure it and how you would report it to your users. And then we’ll come back together and share all the work.

So let’s start with our group discussion. We’re going to keep this to 15 minutes, so we have a little timer here. But just a brief introduction. I want everybody to think about: at your news organization, what’s your average story? What’s the thing you’re publishing every day? Is it a breaking news update? Is it—I don’t know—a feature story? Is it, like, a sports game recap, you know? Stuff like that. It’s different for everybody.

But when you think about those things, and we’ll work through these questions as you think about that kind of—your average story.

In case everybody can’t see it: the first question we’re going to talk about is, what does success look like for your everyday story? What does it mean to your news organization? So does anybody, off the top of your head, have thoughts about what success looks like for your average story?

PARTICIPANT: It grew our audience reach.

TYLER: Sure. So audience reach. How do you measure audience reach?

PARTICIPANT: Page views, content views across all platforms, not just our platform. By segments of types of audience. So, like, audience not just in terms of sheer volume, but we want to reach more women, so looking demographically.

TYLER: Cool.

PARTICIPANT: Page views, and then being tweeted or shared by someone famous or someone influential.

TYLER: Yeah. So when you’re thinking about success, are you thinking, oh, we want to get 100,000 page views? You know, do you set quantitative expectations? Or is it a more qualitative success—we did a story about criminal justice, and we want to reach criminal justice advocates? Yeah, in the back.

PARTICIPANT: Comments in particular—comments that get recommended by users, or big tweets that aren’t just using automated tags or headlines but people genuinely commenting on the stories.

TYLER: Sure, so you’re analyzing what people are saying about the story and doing qualitative analysis on that. Yeah. Cool.

SONYA: Let’s also cover the second question: how do you measure your success? Don’t think about technical constraints—in an ideal world, what kind of metrics would you use to measure your success, right?

PARTICIPANT: We compare, like, performance of one story against other stories of that type because not all stories are created equal.

TYLER: Sure. Uh-huh. So I guess, when you’re measuring your success, what kinds of tools are people using? Chartbeat, Google Analytics.

SONYA: Parse.ly. Omniture.

SONYA: Very easy to use. Or do you also have some kind of homegrown tools, like what Tyler would do at NPR, no?

PARTICIPANT: We have a thing called Billboards that aggregates from Chartbeat and our own internal measuring thing called Provenance, and our data scientists have built a series of algorithms that I don’t understand that make projections and recommendations.

SONYA: Oh, okay. Recommendations to put it on the front page, or to change positions? Or on social? What kind of recommendations?

PARTICIPANT: Data recommendations, and then an A/B headline testing tool that it recommends against as well for—I don’t really know. I’m not in that workflow every day, but to optimize headlines against –

SONYA: Okay. Cool.

TYLER: Let’s see. So—yeah. People are typically using the popular tools I’m hearing: Google Analytics, Parse.ly, Omniture. So I guess when you sit down and think about success, if you’ve had this conversation, is there ever a discussion—and this is sort of a leading question—but is there ever discussion of something that Google Analytics doesn’t give you out of the box? We talk about page views because it’s the easiest metric to get, right? It’s the top of the report. So that’s what we think about.

Are people ever thinking, okay, page views are maybe not our goal here, and we want to use something else that’s hidden, or something we can build ourselves that actually –

PARTICIPANT: Giving away all my secrets, but we want to build something that basically asks is this good journalism? Is this something you want to read?

TYLER: What does that mean to you?

PARTICIPANT: We want to ask that question. We want to get to that story and ask you a simple yes or no question like a reaction question. We want your feedback. But in a structured way.


PARTICIPANT: I was going to say something super similar, like, we don’t have anything in the works, but I would love to have something that does that same thing. Like, if you know a lot about this topic that we wrote about, do you feel that we covered it adequately? Or totally missed the boat? Similarly we have a columnist that gets a large amount of traffic and people are always, like, oh, hey, you’ve got such traffic. But it’s all hate traffic. They’re reading it so that they can share it and say this guy’s an idiot. So is that success?

TYLER: Right. Right.

PARTICIPANT: Not really. But how do you measure that?

TYLER: Yeah, it’s a tough thing. The qualitative question—was this good? Did we satisfy our expectations as an audience member?—is tough to measure. But I think it’s important to stop and think about it, and not just take what your IT team has bought for you and take what it gives you. I think there’s work to be done. Everything I’ve shown you from my end is Google Analytics—that’s what I use. But you can do so much custom work with it; you can build what you want and learn what you want to learn, within reason.

PARTICIPANT: To your question of measuring success, I think for most organizations it’s still determined by income—money.

TYLER: Right.

PARTICIPANT: So, yes, you send out your publications and try to get that kind of success. But the simplest answer is we’re still measuring success by traffic, I think. Mostly as the high-level success rate.

SONYA: So do the different departments in your newsroom define success in the same way, consistently? That’s another question.

TYLER: Right. So the business-end argument is one I ignore.


Which is a bad thing. But—so are people working with, like, ad views? A lot of ads operate on hits, right? Like, did you get 100,000 people to see this thing? Are people having to work with viewability—we need X amount of people to see the ad for ten seconds, or whatever? How does that change things?

PARTICIPANT: We are. I’m with ESPN’s The Undefeated, and it’s ad rolls, usually for video clips, and trying to get people to watch videos. Monetization is pretty –

TYLER: Yeah, so how does that change things? The other thing is, it’s new, right? It just launched. That conversation of success being we need to get this many people to see this ad—how does that change? Have you thought about it that way? How you built it?

TYLER: Okay. But from the beginning, it was: we’re going to do preroll, because that’s how we’re going to get enough people to see this?

PARTICIPANT: It’s a legacy—a legacy preroll. If there was a better way to do ads, maybe there would be investment. But right now it seems to be the best way overall.

TYLER: Okay. Cool.

PARTICIPANT: Time on page.

TYLER: Sure.

PARTICIPANT: And time on-site as well. So did someone just bounce out of the site or did they go to something else? It’s not a measure of success of that story, but it’s a measure of something.


I guess it’s probably a measure of the UX of the promotion of stories around it.

TYLER: Right, so how are you—in your corpus of work, how are you using metrics like time on page and time on site? When you’re trying to see what did well, how do you know?

PARTICIPANT: We use time on-site –

TYLER: And do you have a sense of what’s good and bad—like, what are good and bad numbers, just –

PARTICIPANT: Yeah. Kind of. I mean, it goes back to comparing like with like—comparing a story against stories of the same type. So long-form pieces with lots of time put into them, or whatever—people are going to stay longer. But they cost a lot more.

PARTICIPANT: So there’s the story of an underserved audience being reached. You can see our audience is very skewed, and skewed old. There’s an effort to try to address that—how they do that exactly, I guess, is probably just –

TYLER: Yeah, and Google Analytics at least tries—because they have this massive ad network and they know everything about you—to tell you about your user. It’s, like, age, demographics, location, and things like that. They do a pretty decent job. Yeah?

PARTICIPANT: I heard a couple of people mention comparing stories of the same type, and I don’t do it effectively. It’s, like—a consistent way to know. Somehow the metadata is there, and I don’t know how to do it.

TYLER: So how are other people doing that—that kind of comparison? What’s your taxonomy? How is that structured?

PARTICIPANT: We haven’t started doing this yet, but I’m hoping to work with Parse.ly to add some tags that don’t show up publicly on our site but do get fed into Parse.ly. We could tag things breaking versus enterprise versus multimedia, and then we would be able to pull out data and say: if it’s breaking, and it’s sports—which just comes from the section taxonomy—or if it’s enterprise, then it’s entertainment, or whatever. Then we can segment.

SONYA: You have categories.


SONYA: Best country singer or something like that.

TYLER: So I think—I’m not on our main analytics team, we have a whole separate team for that. But I know, because I have to integrate with their system, that when we track the page view, the initial hit, we’re tracking all of this to Google Analytics, and we can do that filtering immediately. So we can take a look at all the politics stories and all the technology stories and all of that.
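[Editor’s note: the per-story-type filtering described here can be sketched as grouping tracked hits by a story-type tag, analogous to a Google Analytics custom dimension or a hidden Parse.ly tag. The paths and types below are invented:]

```python
# Group pageview hits by a story-type tag. In Google Analytics this would be
# a custom dimension sent with the initial hit; the records here are invented.
from collections import defaultdict

hits = [
    {"path": "/politics/debate-recap",  "story_type": "breaking"},
    {"path": "/tech/ai-feature",        "story_type": "enterprise"},
    {"path": "/politics/investigation", "story_type": "enterprise"},
    {"path": "/sports/game-recap",      "story_type": "breaking"},
]

def pageviews_by_type(hits):
    """Count hits per story type, so you can compare like with like."""
    counts = defaultdict(int)
    for hit in hits:
        counts[hit["story_type"]] += 1
    return dict(counts)

print(pageviews_by_type(hits))  # {'breaking': 2, 'enterprise': 2}
```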

PARTICIPANT: With the audio or video, maybe it matters whether somebody has the bandwidth to do it, or is on Wi-Fi.

TYLER: Yeah, always.

PARTICIPANT: So maybe it converts a lot better if you looked at it just from people on full Wi-Fi or a good network.

TYLER: Yeah, that’s a good one to know for our work.

PARTICIPANT: This is in terms of the last question. Everyone has a different way of measuring: at the end of the month you have a general page view or visit goal, or reporters want to measure impact. Also, we had entire measurements based on the actual audience—like, a very specific audience. They want a specific audience that makes this much money, or lives in this or this place. And the only way to measure that is a mixture of location and this whole learning about the audience and subscriber information.

TYLER: Uh-huh. Yeah, actually, I do want to get to this question in the last few minutes. A lot of us are measuring success, and we know what it means to us, and we have our measurements. But when you communicate it to the newsroom, is it acted on? I find this is a struggle at NPR—we have priorities, and we’re measuring them, but communicating them to your everyday reporting and making that part of the editorial decision-making is hard. I’m curious if others have strategies, or if this is a sticking point for everybody?

PARTICIPANT: We try to manage what success is across the country. But it comes out in an e-mail every day. So I guess it’s hitting their daily number for the day.

PARTICIPANT: What was the comment?

PARTICIPANT: Just a daily e-mail that segments our goals for the month and tracks the progress throughout the month, so that we can see what the expectations are.

SONYA: So what’s your goal, if you don’t mind?


PARTICIPANT: What’s the goal?

PARTICIPANT: Hundreds of millions.

PARTICIPANT: Of page views?

PARTICIPANT: There are a lot of different ones. Yeah. Across social, page views, time on site.

TYLER: But it’s volume of traffic kinds of thing?

PARTICIPANT: Yeah. It just gives you an overall target for the end of the month and how you’re moving on a daily basis. So it can be shares on Facebook, signups, or time spent—there are a couple of them.

SONYA: Yeah, so who sets the goal? The managers? People work in different newsrooms, and they found, at Gawker, that metrics were the thing that measured their performance. So all the reporters were under pressure; they had to have page views and shares. And even reporters at the New York Times don’t have access to the dashboard; they have to listen to their editors, and the editors shape the editorial rather than it being guided by audience preferences.

So there are different ways to measure success.

PARTICIPANT: By contrast, I have to shove analytics down the throats of people at my—not the ad team. The ad team is all analytics. But the writers are, like, very resistant.

TYLER: They just don’t want to hear it?

PARTICIPANT: They don’t want to be driven by analytics, and the editors—we’re a newspaper, so people have a very print mentality as well, I think. But people are pretty intentionally oblivious, I think. I think they’re scared the one-off blog post they do is going to be their best-read article ever and nobody is going to read the stuff they spent three months on.

Which isn’t really the case. But they’re—they don’t want to –


PARTICIPANT: I send out reports to them, and they do get a little competitive. But I think there is a very strong analytics resistance.

SONYA: Yeah.

PARTICIPANT: I mean, I find that I’ve worked with editors to contextualize page views—and something like time on page, contextualizing that for them into a more narrative story of how it equates to their success. That has helped remove some of that fear in places where I’ve worked.

In fact, to the point where they have a desire to get their hands on the analytics, because they’re interested in how—it’s not a story about how many people you got to the site, it’s a story about how engaged your readership is with what you’re working on. And when you talk to editors that way, sometimes that can help the problem. At least that’s what I—

TYLER: Yeah, I think that’s really great advice. Frame it in a way that’s about journalism—about the success of your journalism.

PARTICIPANT: We have an engagement team in the newsroom with the journalists, and it’s run by a journalist.

TYLER: Yeah, we have something similar.

PARTICIPANT: There’s a data guy that runs it with her. But I think you need that level of buy-in to the audience engagement team for it to filter through. And so they will tweak headlines and tweak stories in realtime on the basis of what we’re seeing in the analytics. So they’ll change headlines or develop the page and get the most out of their stories based on realtime analytics.

TYLER: Yeah. That’s a model I think a lot of newsrooms are deciding to replicate.

PARTICIPANT: But it’s new. It’s a new thing.

TYLER: Yeah, and I think it’s a powerful one.

All right. We’re going to move on to the group activity. This is an enormous group, so this might be a little tricky. But—so what we want to do is—whoops. That’s right. So we want groups at tables, I guess, and groups on the floor. We want you to pick a specific story type. And we had some ideas, so you don’t have to go—where is my mouse?

You don’t have to go completely off this—oh, well. I had some specific stories. You could pick a breaking news story, you could pick an investigative long-form piece, you could pick a feature story, a video, you could pick a social-media-native video, the autoplaying video kind of deal. Anything that a news organization might publish. A sports game recap, you know? Things like that.

And what we want you to do with that story type, as a group, is define success for that story. And that doesn’t necessarily mean “we want to get 100,000 page views”; it means something more like “we want to reach a community of this type of people”—something that’s a little bit less about the number and more about quality.

And then define how you’re going to measure that. Like, how are you going to find out you were successful? And then, how are you going to report it back to your users? Does that make sense to everybody?

So we have some paper and sharpies and things on the tables if you want to write stuff down. But let’s try to consolidate this enormous group into a bunch of different groups and see how it goes. You’re going to have 20 minutes.

[Group activity]

TYLER: Everybody, you have five minutes left.

Okay. Everybody, time is up. Wrap up your conversations. We’re going to all report back as a large group.

Okay. So we’re going to go around to each group and share the success statement, the success measurement, and the success report. We’ve got ten minutes left total, so about two minutes for each group.

PARTICIPANT: We misunderstood the statement part of that and we wrote three pages of success. So we’re going to speed read.

TYLER: Go for it.


PARTICIPANT: I’ll be going first?

Okay. The type of story we did was, like, a big project—three-month, six-month, year-long project story. Some statements of success would be that it effected change, where people took action; built brand or company awareness; started influential conversations and strong relationships. We reached new audiences and strengthened loyalty in existing audiences. Tried something innovative or experimental. It inspired or educated others in the newsroom internally. Turned readers into promoters, attracted advertisers, had a long shelf life. Okay, that’s not all of them, but those are the big ones.

And how do we measure those things? The story was talked about in policy situations or referenced—you know, in city hall or on TV or on other websites. Measure the page views, whether they’re new or existing readers. Did we reach a demographic that we don’t traditionally reach? Do user testing or focus groups. Did referrals stay high? Did we get event tracking on things like newsletter signups?

Define the experiment—define what in fact the experiment is, and how you measure it. The number of times the story was referenced somewhere else online, or later stories referenced that story. And measure reshares, e-mail, completion rates, and so forth.

And then the number of unique commenters—not just the comments on the site, but not double-counting two people –

And how do we report that? This was much shorter because we ran out of time. But the key point here is that you can’t just e-mail out a report, you can’t hand out a report saying here’s how it did. It has to be in real life, in person, with the context of what this all means. And then there have to be takeaways from it. So what did we learn? What are we going to do the same? What will we do differently next time, instead of just, hey, we did great. Next.

TYLER: Yeah, that’s great. Thank you. Really thorough. Great. Next group.

PARTICIPANT: Sure. We also picked an investigative story, so a lot of ours were similar. We picked a topic—it would be coverage of lead in water, maybe. So we said there’s a text-driven story with an interactive, and we’d see how it affects things.

So some of the things we thought would help measure success: whether or not it influences policy changes; whether it was reshared by influencers in relevant circles; whether or not reputable referrers link to your story; whether or not it’s mentioned in speeches. We also talked about whether, if we got higher traffic or more time spent on site in a highly affected area, we would consider that a measure of success.

Also, if there was an interactive, whether it helped people understand—if people spend more time with the story after having used that interactive, I think that would be a good sign. If other people wrote about your story; also, we would love to hear about your performance from the brand, but also if a promotion was given. And maybe if your story got covered on TV.

So that’s the list of the success metrics. In terms of how it got reported, we talked a little bit about how things can impact the numbers—whether there was, you know, the time of day or the weather outside, or whether or not a Kardashian was there that day and stole the Internet. So we rolled it up into this elementary-school-style report: it needs improvement, it was good.


And then three bullet points: this is why it was good. And then it wouldn’t be e-mailed to you; it would be discussed directly with you by your manager or editor, someone who has context on the broader coverage initiative over time.

And we thought that would work because an investigative story isn’t necessarily—it has a longer burn, so you don’t need those numbers in the moment. You might talk about it quickly, as long as it’s getting traffic.

TYLER: Awesome. Great. The group in the back.


TYLER: Sorry. The group without a table.

PARTICIPANT: To be honest, we didn’t get that far. But we did talk about measurement, and how we really track the amount of time, whether or not that’s relevant, and the metrics around what demonstrates people were really engaged. We went around on sharing or saving stories—do people come back to them after saving? We talked about—help me out.

PARTICIPANT: Contextual information about stories. Did it succeed on the home page? Or because it was tweeted out from an account? Or quality.

PARTICIPANT: We also talked about how, for success across an organization, sometimes you can focus too micro on the story level. So a story on its own maybe isn’t tremendously valuable, and sometimes it’s about how you can put those little pieces into your narrative as a whole. And a little bit about focus groups, and how it’s not just about numbers—it’s about putting those in context and getting a broader narrative. We trust it; we’re getting verbatim stuff around the stories.

And essentially we also mentioned the narrative going back to whoever sponsored the story. So if you’ve got people who wrote it, how do you get back to them with the success or failure of that story? And then one other thing we mentioned is that these are assignments, at the end of the day, and you have to kind of listen to the editorial response to the story. Maybe you didn’t get the numbers, but everybody is recognizing it. Or maybe you did get the numbers, but people still aren’t convinced that it was a great piece of journalism. And there’s value in that as well.

TYLER: Uh-huh. We’ve got three more groups and three more minutes. So.

PARTICIPANT: We decided to look at the story type of a live video story—specifically because we had live Facebook video on our site. So what we wanted to define success for was viewers; full, or close to full-length, engagement; social lift; and long-tail success—so adding subscribers through Twitter, Facebook, or e-mail newsletter. The metrics that we decided on were, for full video watch, time on page versus length of the video, and time of arrival versus time to completion of the video.

For referrals, we want to know where people are coming from and the amount of traffic. So throughout long periods of video, when we share “this is happening” on the video stream, does that bring people in? How many people does it bring in? What particular platforms are more successful at bringing people in? Watches after the completion of the event—so when people come to the page to replay the event after we’ve completed it. Then we talked about, after the event was completed, pulling down the file, uploading that in our own player in place, and looking at scrubbing of the particular video to particular locations. I realized I was going too slow. Points at which people exit the live video; social media shares; video hashtags, so if you have a hashtag associated with the video, track the social conversation; cutting the long video into highlights and tracking success on those; newsletter signups; tracking unfollow buttons; and highlights versus live stream viewer counts.

And we report with a live stream discussion with our editor, reviewing with our analytics person.



A social engagement report that we send out tracks realtime; we have a Slack bot that would pop up every time it exceeded X number of viewers, and a live dashboard of categorized analytics events.
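[Editor’s note: the Slack alert described here is essentially a threshold check on the live viewer count. A minimal sketch—collecting messages in a list where a real bot would POST to a Slack incoming webhook; the threshold and counts are invented:]

```python
# Alert whenever the live viewer count meets a threshold. A real bot would
# POST each message to a Slack incoming-webhook URL; here we just collect
# them in a list. Threshold and counts are hypothetical.

THRESHOLD = 5_000

def check_viewers(count: int, alerts: list, threshold: int = THRESHOLD) -> None:
    """Record an alert message when the viewer count meets the threshold."""
    if count >= threshold:
        alerts.append(f"Live stream at {count} viewers (threshold {threshold})")

messages: list = []
for count in (1_200, 4_900, 5_300):
    check_viewers(count, messages)
print(messages)  # one alert, for the 5,300-viewer reading
```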

TYLER: Wow. Thorough. Wow.


PARTICIPANT: We had a lot of time.

TYLER: Next group.

PARTICIPANT: So basically we had everything they mentioned. We were just talking about the normal stuff that I think everybody talked about: it was impactful, it’s getting traction, we’ve grown audience, maybe. The measures were, like, e-mail, feedback, longevity, the number of people talking about it. Did it actually effect change or start a movement? And my favorite was: it does not have to be viral in order to be successful. That is, it did reach the audience, and the influencers for that specific topic are talking about it or referencing it. Also, the qualitative data on it might show more than the quantitative data—how it might have trended into a different topic.

PARTICIPANT: Okay. So we decided to do a follow-up to a specific story—a follow-up, like, three months later. Not an update, but a follow-up story. We started our conversation on the difference between how product views success and measures it, versus reporters in the newsroom. So we got a little off track, but I’ll share what we have. We basically learned that product defines success differently, and we had a whole discussion about how story types are different there.

TYLER: Hugely important.

PARTICIPANT: But what we wanted—when we wanted to look at this, we’d compare against the original numbers of the first story to see if we’re reaching or updating the original audience of the story. Also, more mentions by general outlets, whether of our story or the topic in general—whether we’re bringing the story back into the conversation, returning the topic to attention. Looking at traffic to the original story as referrals from this one—so the original story getting traffic, you know, from three months ago on. Are people coming to it for the first time, or wanting to go back and see what happened? And then, generally, what everyone else talked about: respected organizations, more follows on multiple social platforms, commentary in general—which we spent a lot of time discussing—and sentiment about what the story is. Negative not necessarily about us; negative because that sucks, that’s bad news. And then talk about the general impact. Anything else?

TYLER: Thank you. Yeah, so we’re a little over time, so thank you all for coming. I’ll just note some themes I heard. A lot of you were talking about investigative pieces, and a lot of you are not talking about page views—you’re talking about qualitative impact. And I think when you go into your next conversation to define success, think about those things, and don’t just talk about 100,000 page views. If you got anything out of this, I hope it’s that.