The Software Agents - Episode 15: ADL - Dave Sifry, VP for Technology and Society - Fighting Online Hate with an Internet Scale Open Platform

The Anti-Defamation League (ADL) is bringing modern software to the ever-growing task of watching for online hate speech. VP for Technology and Society Dave Sifry explains the open platform the ADL is building to apply machine learning to identify hate speech of all kinds across all social media networks.

Announcer: Welcome to The Software Agents. Meet the people who bring software to every aspect of life, society, and business to handle the challenges of a transforming world.

Christina Noren: Welcome to The Software Agents, a new podcast on how software is helping the world survive and evolve right now, as told by the people who are making it happen. I'm Christina Noren, and my co-host is Paul Boutin.

Paul Boutin: Hello. Thanks for tuning in.

Christina Noren: The Software Agents is sponsored by CloudBees, the enterprise software delivery company. CloudBees provides the industry's leading DevOps technology platform that enables developers to focus on what they do best—build stuff that matters.

So, today, we have a former colleague of mine and a good friend of both Paul's and mine for many years, Dave Sifry. And, you know, many of you listening will remember Dave as the Founder of Technorati, which I frankly miss. Like, I miss being able to search real-time blog posts in a reasonable fashion, and Google doesn't replace it effectively. And Dave and I worked together in the years since he was at Technorati, and frankly, Dave taught me what continuous integration and continuous delivery were about a few years back.

And these days, Dave is leading technology efforts for the ADL, the Anti-Defamation League, and doing something that I consider really interesting, given the renewed focus on social justice and equality and so forth in this crazy year of 2020.

So, Dave—tell us a little bit about who you are and what you're up to these days.

Dave Sifry: Yeah! Well, first off—so great to see you, Christina; Paul, as well. It’s so much fun to kinda hang out with some OGs from the early days of all of this stuff. So, thanks for having me on. It’s an honor and a pleasure.

Well, what can I tell you, Christina? What would you like to know?

Christina Noren: Well, so, we can either go first down the, “Who is Dave?” path, or we can go down the, “What is the ADL doing with tech?” path. And I know you well enough to know you're probably gonna wanna take path number two first, because you're too modest.

Dave Sifry: Ah! Alright, you know what, then I'll go down path number one, just to mess with you.

So, let’s see. Hi, Dave Sifry—nerd by birth, geek by training, trained computer scientist, moved out here to the West Coast about 25 years ago. And I've had the incredible fortune to be able to work in this field. I have started six different companies, you know, as Christina mentioned, one of them was Technorati, which literally just came from—like, I didn't build it to change the world, I just wanted to know what anybody anywhere in the world was saying about me. And it was kind of a surprise—wow, lots of other people felt that way, too.

And then I built a company called Linuxcare before that, which was the world’s largest open source services and support company, and then I worked as an executive at a number of companies, including Interana, where Christina and I worked together, where I was the VP Eng and she was the Head of Product. And at Lyft, where I helped to lead some of the innovation efforts in product engineering and design, including Lyft Transit. And then, over at Reddit, where I ran Revenue Products.

And now I am at the Anti-Defamation League. And, just for those of you who are unfamiliar with the Anti-Defamation League or ADL, as we call it, it's a 107-year-old nonprofit organization that focuses on fighting hate in all its forms. And in particular, the Center for Technology and Society, which I have the incredible privilege to head, really focuses—in a sense, you can think of us as, even though we're virtual, we're kind of the Silicon Valley office of the ADL.

And so, we work both by engaging very closely with all of the major technology companies and platforms, you know, the Facebooks, Googles, Twitter, Reddit, you name it. We work with them in terms of their content moderation strategies and policies, how to deal with targets and victims of hate and harassment, and also how they're dealing with the incredible rush of hate and partisanship and harassment and polarization that often are a byproduct of the algorithmic amplification that is built into their business models. And then we also hold them accountable. We hold their feet to the fire when they don't do the right things or they don't do enough to follow through on the things that they say. You know, we help them with their policies, ensuring that they're building really good policies that protect the civil rights of the marginalized, but we also call them out when their enforcement doesn't meet—when their actions don't meet their words.

So, it's been an incredible experience, because not only do we do research and advocacy and work on helping to change the laws around cyberstalking and doxing and swatting and, you know, nonconsensual pornography on the Internet, but we're also working on some of the cutting-edge issues around misinformation and disinformation, you know, algorithmic amplification, and ensuring that people's civil rights are protected and that the values we're trying to build as a society are incorporated into the kinds of policies and procedures that these companies actually produce.

And lastly, what I brought in the last year that I've been there—I joined a little over a year ago—is a certain product thinking around how we can look both at the products themselves, you know, Facebook is a product, it has affordances, there are ways that it promotes and demotes certain things, you know, YouTube is a product, it does certain things. But also to look at how the Anti-Defamation League can help to build products to actually measure hate online as well, so it's a new initiative called the Online Hate Index.

Christina Noren: So, let's talk about that a little bit. Yeah, so, I was excited when you called me up a year ago to tell me you had taken this job, and I got a certain inkling. And then when we spoke a bit, I don't know, five or six months ago at the beginning of this crazy pandemic, you know, I really got this idea that you were building a new kind of platform or service for the Internet. What I got was that it's kind of like a hate processing platform.

So, tell me a little bit about the technology, the product you're actually building at the ADL.

Dave Sifry: Yeah. So, it really stems from, I think, when I started doing my listening tour after I came on board about a year ago, you know, I think that there are three really big problems that we identified, and what's kind of interesting is how they're all in some ways systemic and complementary to each other, in that, if there were a solution, it might be beneficial in solving all three of these problems.

And the first one was one just around the overall amount of hate and harassment that’s perceived by people. So, you know, at the Anti-Defamation League, we have been doing online nationally representative surveys of people’s experience being harassed online or, you know, experiencing hate. And unfortunately, those numbers are significant and they're only rising of late, right?

And so, you know, we have a number of reports around that, but I won't go into too many details about the number of people who feel that they've been harassed, whether it's because of the way that they speak or whether it's, you know, a protected characteristic like their race or religion or gender or their national origin. And, you know, the kinds of harassment campaigns that they've been subjected to, right, both sustained and serious ones.

And then, you know, so, it’s very real. I think we all know someone or maybe ourselves, like, we've been stalked or we've been doxed or we know somebody who has, you know, we've experienced harassment online. And so, that’s a very common experience.

And then the second big problem that we saw was actually one from the platforms themselves. It was super interesting to talk to some of the insiders at some of the biggest social media platforms: in fact, you know, their experience with hate and harassment is that they know that it exists. They know that people are posting, you know, these terrible comments and posts. They know that people are harassing others, and they're working really, really hard in general. You know, I have not met, in all of my time in Silicon Valley, anyone who walks around twirling their mustache saying, "Mwahaha! I cannot wait to create more hate on the Internet!" In fact, these are people with good intentions, but we also recognize that even being able to identify it at scale is extraordinarily difficult.

And, you know, I'll give you an example: at YouTube, I think that they're doing a tremendous amount of work behind the scenes, literally dozens of engineers and data scientists who are working on these problems every day. And, you know, they had recently taken down a tremendous number of extremists from the platform. But it's still the case that finding new ones is kinda like shooting fish in a barrel, in that, you know, there's a lot of people who will change their name or they'll change the spelling, or maybe they're not caught in an initial net. So, it's pretty easy, regardless of what they say, for someone from The New York Times to do an exposé article or to run a—

Christina Noren: I'm gonna lead the witness a little bit, because when we last talked, it sounded like you were building an AI powered platform for processing all this massive amount of [Cross talk].

Dave Sifry: Yeah, yeah, I'm getting there, but I’d like at least being clear about the problem, right? So, the problem that we found here was that, you know, in the end, nobody believes their first party numbers, right? So, when Google says, “Hey, we took down X number of videos,” there’s no way to validate that other than to just trust them. And that’s a difficult place to be.

And then the third thing that we've noticed, and here's a really big problem, is one that actually came out of our incredibly successful Stop Hate for Profit campaign, which was this work that we did around fighting hate on Facebook by having over 1,100 major advertisers pause their advertising on Facebook for a month, really, to express their feelings about the lack of safety on the platform. That, you know, advertisers have these values that they don't want their advertising to show up next to hateful content.

Christina Noren: You guys led it, the ADL?

Dave Sifry: Yes, we were part of—we actually helped build the coalition that included the NAACP, Color of Change, the NHMC, Mozilla, Common Sense Media, and many other organizations. But, you know, it actually started at the Center for Technology and Society, where we were going to a number of the different platforms, including Facebook to talk to them and to get them to make some changes around how they were treating hate on the platform.

And we really felt like, after engaging and engaging and engaging, they just weren't being serious about dealing with the problem. And for a company that literally made $17 billion in profit last year, you know, they really needed to pay attention to the exhaust, to the societal issues that were being created by all of the hate that was propagating on the platform, right? And as much as, you know, they're in many ways one of the world's greatest inventors of marketing technology, like a great steel factory, they're pouring mercury out the back into the river, and you need to deal with the fact that there's all this pollution that's coming out of the back as well.

But here's the thing—there was no standardized way of looking at how to measure hate in all of its forms on the Internet. And so, what we realized was that this was actually a really big, hairy problem. But there have been some incredible advances over the last few years that might allow us to take some stabs at making some measurement tools to be able to at least start. And it starts with measurement, because I'm a big fan of Peter Drucker: you know, if you can't measure it, you can't manage it. So, to start by just having some third-party measurements of hate online. And so, we've been working, starting with anti-Semitism, because that's actually an area that the Anti-Defamation League knows probably better than anybody else. We've been building these NLP and machine learning based classifiers that allow us to recognize different forms of hate. And also, by the way, to see it in its context, right? So, recognizing the difference between counter-speech, identity speech, and actual, you know, hateful language is super important as well.

Christina Noren: You gave me a great example of this a few months ago. You gave me the example—I think, and you have to correct me on the quote—but it was the difference between “Hitler said, ‘all Jews should die.’ Hitler was wrong when he said, ‘all Jews should die.’” And you gave me a few examples along those lines that your NLP was capable of recognizing the difference between them.

Dave Sifry: That’s exactly right, and if you—you know, again, if you're using just a pure pattern matching or a white list or black list system, you know, when somebody says, “All Jews are vermin,” well, of course, they'll capture that. That looks anti-Semitic. But classic counter-speech would be, “Saying ‘all Jews are vermin’ to someone is totally wrong.” Well, you know, you don’t wanna be catching that in the same filters.

And so, you know, what's been super interesting in the world of natural language processing has been the advances in transformer-based models like BERT and RoBERTa and a variety of these new models that actually allow a deeper understanding of the English language. And then, by being able to understand things like negation and irony, you're actually able to train the models in a much more sophisticated way to recognize the difference between saying "all Jews are vermin" and saying, "When Hitler said that all Jews were vermin, he was wrong," and being able to recognize that that is a really important difference in how you want to measure hate speech.
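
To make the distinction Dave describes concrete, here is a minimal sketch of how a fine-tuned BERT-style classifier might be called to separate hateful statements from counter-speech. It assumes the Hugging Face transformers library and a hypothetical fine-tuned model; it is not the ADL's actual Online Hate Index code.

```python
# Minimal sketch: scoring text with a fine-tuned transformer classifier so that
# counter-speech is not flagged the same way as hateful speech.
# The model name below is hypothetical, not the ADL's actual model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example-org/hate-vs-counterspeech",  # hypothetical fine-tuned model
)

examples = [
    "All Jews are vermin.",                                       # hateful
    "Saying 'all Jews are vermin' to someone is totally wrong.",  # counter-speech
]

for text in examples:
    prediction = classifier(text)[0]  # e.g. {"label": "hateful", "score": 0.97}
    print(f"{prediction['label']:>15}  {prediction['score']:.2f}  {text}")
```

The point is that a keyword filter would flag both sentences because both contain the phrase "all Jews are vermin," while a model that encodes the whole sentence, including negation, can keep them apart.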

Christina Noren: Why is the ADL hiring an awesome VP of Engineering and VP of Technology like you, you know, why is this a technology problem today?

Dave Sifry: I mean, it's really a systems problem, to be clear. I don't think that technology is the only way to solve this. I think, in fact, using technology purely as a way to solve problems is somehow what got us into the systems problem that we have. So, I think part of this is to recognize that you need to take a little bit more of a holistic view than a purely technical solution to the problem, and you need to take one that incorporates the fact that it's actually about the business models and the incentive systems that are built into the Facebooks of the world that cause good-natured engineers to be incentivized to build systems that have these kinds of negative side effects as a part of them.

And what we want to do, and what we're trying to do here, is to say—let's start by at least having a standardized way of measuring this kind of speech so that you can start to look at the interventions that, say, Facebook or Google or Twitter decide to do. And now, in a standardized way, you can look at, "Are those interventions even effective?" Or, "Hey, you know, maybe it's a well-intentioned intervention, but it's actually having a counter-effect," right? Those things are well known in terms of systems and how they work.

So, it starts with, and I guess the positioning that we're thinking about for the Online Hate Index is, it’s kind of like having the Nielsen of hate, right? It’s having an independent third party—and this is, again, why I think you can’t really do this as a for-profit. I think you need to have an organization with the gravitas and the history and the expertise of an Anti-Defamation League that can actually serve as a counterweight to the enormous power that so many of the largest platforms already wield.

Christina Noren: So, talk to me about the Online Hate Index. So, it feels like what you're doing now is, you're at the measurement phase, and if I can lead the witness again a little bit, you know, it feels like you have more plans for how you can get into the intervention. Tell us what the platform does today.

Dave Sifry: We're still in the early stages. So, the first part of this is to recognize what the Anti-Defamation League is good at and what we're not good at, right? So, we really do—you know, we've been tracking anti-Semitism and hate groups for years and years and years. So, we have a lot of expertise and the hate symbols database and all the rest. So, we're able to take that data and turn it into really effective models that are then used to create these classifiers.

But I'll tell you what. Like, we don’t know, say, anti-black racism as well as, say, the NAACP or Color of Change. We don’t know, you know, anti-LGBTQI speech better than, say, the folks who are at GLAAD or academics that are studying it.

So, a really critical part of this is to create a platform and a capability, right, and a lot of this comes down to data collection, or what's called annotation and labeling in the ML world. The idea is to give these tools and a running platform to organizations that may not be able to make the kind of financial and technical investment that the ADL has chosen to make, and to give them the capability and the power to build and maintain their own classifiers. And what that leads to is a deeper and less biased sense of what's really going on in terms of measuring these other forms of hate and harassment as well.
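
As a rough illustration of the annotation and labeling Dave mentions, here is a minimal sketch of what a partner-contributed labeled example might look like. The schema, field names, and label values are assumptions for illustration, not the ADL platform's actual format.

```python
# Minimal sketch of partner-supplied annotations: each record pairs a piece of
# text with a label from a domain-expert organization. Field names and label
# values are hypothetical, not the ADL platform's real schema.
from dataclasses import dataclass

@dataclass
class Annotation:
    text: str           # the post or comment being labeled
    label: str          # "hateful", "counter_speech", "identity_speech", or "neutral"
    hate_category: str  # e.g. "antisemitism", "anti_black_racism", "anti_lgbtq"
    annotator_org: str  # the partner organization that supplied the label

annotations = [
    Annotation("example post text", "hateful", "antisemitism", "ADL"),
    Annotation("example counter-speech text", "counter_speech",
               "anti_black_racism", "Color of Change"),
]

# Pooled across many partner organizations, records like these become the
# training data for classifiers covering the forms of hate each group knows best.
```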

So, we can’t do this alone. We have to do this with friends and allies. And it also means that we have to work with and we're getting a lot of response from the biggest platforms who say, “Oh, please, give us a standardized thing that we can actually measure ourselves against. We hate—we don’t want to be grading our own homework. We would so much rather have an independent third party that we can then point ourselves towards to show that we're actually doing a good job.”

Christina Noren: So, my mental model, Dave, is that you're building an Internet scale platform where you're processing all of these posts that are happening across all these platforms, and you're doing so against training models that you've developed around anti-Semitism, and it’s pluggable where, you know, the NAACP or GLAAD can plug in their own training models for the kind of bias that they're experts in.

Dave Sifry: You're almost there, you're almost there. I would argue that there’s actually two things that are off in terms of the way that you represented that model, Christina—but you're, as usual, so insightful, here.

Christina Noren: I know my way of working is to throw out a straw man and then get you to bat the straw man—so, bat my straw man.

Dave Sifry: Of course. So, number one is, we're looking at, and we've built, that level of Internet-scale infrastructure to be able to handle hundreds of millions of postings a day. You know, we're pulling in the public social networks right now and we're running our classifiers against them, which also improves the classifiers. And by the way, you know, you always have to recognize that there is a constant attacker/defender problem that's going on here. So, these classifiers' efficacy and accuracy degrade over time. So, you always need to be improving them as there are new memes and new spellings and new incidents, unfortunately, that happen and that people refer to, right? And so, this is an important part of it.
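
The point about classifier efficacy degrading over time can be sketched as a simple monitoring loop: periodically score the model against freshly annotated posts and flag it for retraining when accuracy slips. The model interface, function names, and thresholds below are illustrative assumptions, not the ADL's actual pipeline.

```python
# Sketch of drift monitoring: as new memes, spellings, and incidents appear,
# accuracy on freshly labeled posts drops, signaling that retraining is needed.
# The model interface and thresholds are hypothetical.
def accuracy(model, labeled_posts):
    """Fraction of freshly annotated (text, label) pairs the classifier gets right."""
    correct = sum(1 for text, label in labeled_posts if model.predict(text) == label)
    return correct / len(labeled_posts)

def needs_retraining(model, fresh_sample, baseline=0.90, tolerance=0.05):
    """Flag the classifier when accuracy falls meaningfully below its baseline."""
    return accuracy(model, fresh_sample) < baseline - tolerance
```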

The second part—and by the way, from a metaphor perspective, I'd invite you to think of this more as, like, we're trying to make online hate go the way of spam, right? And when you think about how we fought spam 10 years ago, it wasn't like there was some centralized clearinghouse that all e-mail goes through where it's checked for spam. What it was, was a cascading set of work being done: on the one side, at the infrastructure level, like at DNS, where we created some new capabilities to be able to identify senders. The second one was some agreement among the largest e-mail senders and receivers that they would actually use this new system. Third was actually some regulation, right, so you had the FTC come in to say, "If you are going to send unsolicited commercial e-mail, you must include an unsubscribe link, otherwise you will face fines."

And so, you know, what’s wonderful about this system, and I think what’s so analogous here is, how do you do this in a way not only that scales as the Internet scales, but also protects civil liberties? Because we're not trying to censor people. What we're trying to do here is recognize—you know, because listen, if you want a payday loan, like, I've got a whole group of people I can find for you in my e-mail folder that are willing to give you a payday loan, right? Like, you wanna meet a Nigerian prince? I got a raft of people I can introduce you to, right? But it doesn’t show up in my everyday attention. And I think that, to that extent, what we're trying to do here is the same.

So, you know, there are all sorts of issues around privacy and GDPR that, you know, if this was something that everything had to come and run through the ADL, I mean, not only would we be overwhelmed, but it would be violating law.

So, the goal here is to create a set of these regularly updated classifiers and to encourage engineering teams from around the world to participate in improving these classifiers so that they can then be used internally to help to measure and be used inside of the companies so that the data can still be validated and used, but doesn’t necessarily have to violate privacy or data protection laws.

Christina Noren: You're thinking of these as libraries that, if I'm a developer at Facebook developing some new feature for posting or whatever, I can choose to call these libraries and ensure that there’s at least a warning signal if there’s something approaching hate speech that comes through.

Dave Sifry: Yeah, I think that’s the goal, and that you can just do a Git pull or you can pull down the Python library and just immediately start using this as part of the tools that you're actually building internally. And I think, you know, secondly, to encourage the folks—because, you know, listen, Facebook has 30,000 content moderators, right, that are looking at and making policy evaluations all the time. So, you know, being able to take advantage of all of that signal and be able to build that into better classifiers that, then, everyone can use, it’s just a good thing for everyone. Like, this should not be a competitive advantage for companies.
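
As a rough sketch of the developer experience described here, pulling down a Python library of classifiers and calling it from a platform's own posting flow, something like the following is imaginable. The package name, import, and API are entirely hypothetical; no published ADL package is implied.

```python
# Hypothetical sketch of calling a shared hate-speech classifier library from a
# platform's own posting flow. Package name and API are invented for illustration.
# pip install online-hate-index                # hypothetical package name

from online_hate_index import load_classifier  # hypothetical import

clf = load_classifier("antisemitism-en")       # hypothetical classifier name

def should_warn(post_text: str, threshold: float = 0.8) -> bool:
    """Return True when the post should trigger a warning or human review."""
    return clf.score(post_text) > threshold    # hypothetical score() -> probability

if should_warn("example post text"):
    print("Show the user a warning before publishing, or queue it for moderation.")
```

The threshold and the decision of what to do with a flagged post would stay with the integrating team; the shared piece is the regularly updated classifier itself.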

Christina Noren: Does Facebook really have 30,000 moderators?

Dave Sifry: Yeah, Facebook does, believe it or not.

Christina Noren: When I worked at Microsoft in the late ‘90s, we had 30,000 employees at Microsoft overall. That’s insane.

Dave Sifry: It is crazy. But that's the scale that we're talking about here, Christina. You know, there are 2 billion people that interface with a Facebook-related product every month. I mean, we're literally talking a significant fraction of the world's population in different cultures and languages. So, this is not a small issue, it's not a small problem. And frankly, their attention or inattention to this issue can have disastrous effects, right?

So, in Myanmar, for example, a number of years ago, because Facebook essentially wasn't keeping their eye on the ball, there was a set of viral memes that caused a genocide. And, you know, this is something that the U.N. looked into and the rest, and, you know, Facebook has worked to fix it. But I think we need to take responsibility for the kinds of activities that end up getting pushed out and, frankly, amplified by these networks, and to make sure that things like that never happen again.

Christina Noren: I wanna go a couple different directions from here. So, first off, you know, the theme of this podcast is really about how software is helping save the world in this moment of crisis, and this moment of crisis from ________ 2020, you know, February or so. And you took this job several months before that—has there been any shift in the perceived need or what you're doing or the understanding of the problem that has resulted from either the pandemic or the associated focus on social issues that’s come up in the wake of the pandemic and lockdown?

Dave Sifry: I keep asking my colleagues, "This is gonna slow down, right?" [Laughter] I mean, I have colleagues who have worked at the Anti-Defamation League for 20 years or 30 years, and I say, "Really, next month's gonna be a little bit easier, right?" And what has consistently happened over the last 13 months that I've been there is that, every month, there's been an increase; every month, there's been additional impact. Every month, there's been even more polarization and even more hate and harassment, even more misinformation and disinformation.

So, you know, we're working incredibly hard to really help to, number one, identify and make people aware of the problem, but even more importantly, you know, make an impact. And by working with these companies, you know, there’s tremendous impact that’s happening every single day.

I mean, Reddit, for example, for the first time in its 15-year history, came out with an official hate speech policy in June. You know, Twitter kicked QAnon off which, when you think about it, is actually a really difficult thing to do, technically. Because you're not talking about a movement that has a clear leader where you can say, "Well, get rid of that leader and everything else will follow." Like, this is a conspiracy theory.

Christina Noren: [Cross talk] speech. I mean, it is really what NLP is built around, right?

Dave Sifry: Yep. And Facebook, even today, just announced that they were taking enforcement action against QAnon. And so, you know, it’s one thing, though, to say it, and it’s another thing to make sure that they deliver. You know, and that’s where the ADL stands firm, like—we will continue to be watching and we will call them out that, when they make statements, they gotta back it up. And we're gonna continue to do that, throughout the election and beyond.

Christina Noren: So, one thing I wanna get more clear on is, I can imagine three major points on the spectrum of user experience here. I can imagine your algorithms are running and running and running and you're processing all this speech and you're publicizing that Facebook is letting QAnon speech go forward. Or I can imagine that it’s processing things in real time and I can choose as a user to process my feed through whatever platform you've got and have my feed filtered so I don’t see the hateful stuff. Or I can imagine that when I try to post the hateful stuff, it gets blocked.

So, I'm imagining a continuum, here. Where is the reality relative to that continuum?

Dave Sifry: I think we're very, very firmly in that first space right now. And we really view—it’s so important to get these classifiers right, to keep them updated, to have them running in the background, and also to be noting where there are errors, right? You know, so important to recognize that people who are targeted and marginalized are often the first whose voices are stopped and censored. And I think that, you know, it’s so incredibly important to recognize the distinction between a measurement tool that allows us to look at things in a standard way and an enforcement tool which really is far more impactful and therefore has to withstand much, much higher levels of scrutiny.

So, we're really talking about just the measurement piece for now, but this is a long term project that isn’t gonna be—like, we didn't get here in a week or a month or a year, Christina. So, it’s gonna take us some time to build the muscle to ensure that we get out of it successfully.

Christina Noren: So, that brings me to the next avenue I wanna go down. So, when we talked a few months ago, you were very clear that you were building a system for the long haul. So, you've set up a new engineering team and a new product and, you know, you've put in place your best practices. Can you tell us a little bit about how the team operates and how the software gets delivered and released?

Dave Sifry: Hmm—great question. You know what, here’s how I'll answer it. Because we can talk about rituals and we can talk about agile process and we can talk about design thinking and user centered design and all of that stuff.

But honestly, it starts with culture, and it’s about a culture of trust and accountability. And I think that, you know, being able to recruit the kinds of incredible people that we're able to recruit—I mean, ADL is able to literally punch way above our weight in terms of the kinds of people who are willing to come and work for a mission-driven organization. So, we need to make sure that they truly feel like it’s transparent and that they're trusted in the work that they do.

But at the same time, we need to make sure that we're all holding each other accountable for the work that we do and making sure that we can review it and have clear and honest communication. So, I'll say that the very first thing I did when I got there was, we actually instilled a culture of retrospection.

So, if there's one meeting in your entire week that can actually help to change the culture of an organization, it's doing an hour-long retrospective and literally keeping to it. And so, you know, Christina, you remember what we did back at Interana.

Christina Noren: I was gonna push you on that, and I told Paul this in advance, which is, I loved the way that you did retrospectives with real time Google Sheets and finding a way to prompt the introverts in the room to get their opinion out.

Dave Sifry: Yeah, no, and if anything, we've only improved on that process since the time that we worked together on this. And it starts with everybody sitting down, we have a new tab that gets created—so, by the way, if you wanna see the history of all of the other retrospectives that we've ever done, you can literally go back and see them to the beginning of the year—I mean, we started them about a year ago—and see what we actually did.

And we view this as, like, you wanna write down what went well and then you wanna also write down what to improve. And different people do this different ways, like, what to stop, what to start, what to continue—and these are all valid ways of doing it. But the point is to create a culture where there’s no blame, that there’s just an honest reckoning. That we can look at things and we can honestly talk about them without fear of being insulted, without the fear of being judged.

And so, it starts with a quiet time of about 10 minutes of silent writing. And that's actually an important part of the process, because I mean, look, you can tell, I'm kind of an extrovert, you know? I'm ENTP or, you know, ENFP, depending on the day. And so, I can sit and talk and talk and talk, and I can get you all excited about things. But here's the thing: there's also the quiet introvert in the corner who actually has something really important, in fact, more important than me, to say, but maybe they're just not as good at speaking in public. So, this period of quiet writing, especially in a collaborative tool like Google Sheets, it's super magical.

And so, we allow that to happen generally 10, 15 minutes, and it kind of slows down. And then we allow people to vote and they just collaboratively vote on what are the things that either they agree with or that they have a question about. And then you sort it based on the things that got the most votes. And then we spend the remaining 30, 35 minutes on literally proceeding through what went well and then we do what to improve and then the next thing down in what went well and the next thing down in what to improve.

Saving the last 15 minutes for a period of what I call burning desires, right? Because you're never gonna get through everything, so you want to be able to ensure that, if there’s something important or there’s a question that maybe didn't get the most number of votes, that people can bring that up and we can talk about it constructively as a team.

Christina Noren: I'm technically an INTP, but I'm a pretty vocal and loud-mouthed INTP. And, you know, we've worked together, and you and I can dominate a conversation. But the way you run through that process, I've seen, lets the quiet people be heard.

Dave Sifry: And it’s just so magical to watch the constructive flowering that happens. And for the people who are the more extroverted, to let them take a back seat and to see the group work as a team, you know, I think is beneficial for everyone. And it’s a wonderful thing to try.

Christina Noren: My takeaway from this section and, you know, there’s a lot more to talk about and we could go on for hours, especially since you and I are both talkers, but we won’t.

But even though this is a problem that is rising in importance exponentially, month over month, in this crazy year of 2020, you are not shortchanging appropriate processes for building a sustainable software development organization while doing this. You're building for the long haul, and that's something that I think is really special about what you're doing.

Dave Sifry: Well, thank you. And I've gotta tell ya, I have so many scars on my back from doing it the other way that we're just—I figure one of these is gonna work, you know? And this one seems to be working so well so far, so we're gonna keep doing it.

Christina Noren: So, we just have a couple more minutes, and I hope you don’t mind my asking, Dave, and if you want us to cut it out of the recording, we will, but you have an amazing personal family story about why you care about what the ADL is doing and this hate speech. And I think a lot of us who have connections to people who were in the thick of World War II, we kind of have a different reaction to the current moment than most people.

Dave Sifry: Yeah, and I know you have a personal story about that as well.

Christina Noren: Are you willing to share why this is a problem that’s so important to you personally to work on? I think our listeners would appreciate that.

Dave Sifry: Of course. So, one thing that I don’t talk about a lot in public, but I'm happy to since you asked, is my family history. And I think it’s important to note that I am the son of a Holocaust survivor. My mother, Anna, was a hidden child in Belgium during the war years when, you know, so many Jews were found and sent to—you know, deported and sent to concentration and death camps. And we were incredibly fortunate in that my grandfather worked in the diamond business, and so he was able to hide them, his family, away with the French and Belgian resistance and they were able to survive the war.

And I'll tell you, as a child growing up, hearing these stories and learning about my family tree and the effects of fascism and propaganda as they directly impacted us—you know, we did a family tree when I was young, and 70 percent of my family tree is just these little red Xs, where we don't even necessarily know their names, but we know, "Oh, so-and-so had seven children and they all perished."

And so, you know, growing up with this, you really recognize the kinds of evil that people can do to one another. And that, you know, words really do have impact, and it starts with words. And to recognize the kind of division where everyone feels entitled to their own facts, and, you know, the polarization—and quite frankly, in this case, the media that people are getting their information from is, by its very nature, amplifying the divisiveness and amplifying conflict and amplifying fear.

Because that’s what actually drives the core algorithm inside of most of these social networks, right? So, it’s something we call engagement, you know, and having been on the inside of this, right, you wanna get people to like and to share and to comment. And guess what? The things that get people to like and to share and to comment tend to be things that are very salacious or things that are scary or things that are full of conflict and polarization. Things that confirm your own cognitive biases.

And, you know, for me, what I recognized, as someone who participated in this, building Technorati and watching the effects, was that there were days when I would come into work and I knew it was a bad news day, and I knew that that meant we were gonna make a lot of money that day.

Christina Noren: Yeah.

Dave Sifry: And that is not a pleasant place to be. You know, especially when we built these things because we wanted to connect people, we wanted to bring people together, we wanted to create more senses of community and sharing. And to realize that, in fact, while that does happen, there’s also the alienation and the FOMO and the polarization and the conflict that has been happening as well.

And so, I felt like, you know, especially after the 2016 election and the sort of tone in our country that it did make me wonder, you know? And I don’t mean to be alarmist or defeatist, but to recognize and say—you know what, like, could this be happening again? And I don’t believe that history necessarily repeats itself, but gosh darn it, it sure does rhyme.

Christina Noren: When you grow up close to this, you have more of a realization of how bad things actually can be. You know, without touching too much on my own story, my mother watched the family on the farm next door to her in Holland get lined up and shot because they were hiding children like your mother, you know?

And when you know that can happen and it’s only removed from you by one generation and it’s childhood stories, it’s a whole different proposition.

Dave Sifry: Yeah. And so, I think, you know, the number of people who personally know a Holocaust survivor, those numbers are dwindling, right? Because the number of Holocaust survivors is dwindling. And so, these stories, we must not forget, and we have to keep in mind that these stories are real and that human beings can do these things to each other.

And so, I felt, you know, as someone who knew the internals of how all this stuff worked, and I was like, “Do I really wanna go off and just do another startup, you know, make some more widgets, make some more money? Or do I wanna try to do something that could potentially have some deeper impact?” And, you know, the jury’s still out, we'll see, you know, but we're doing our darnedest, and I think we're having a real impact, so, we're gonna keep at it.

Christina Noren: Well, I'm gonna close it there. So, Dave, you know, you and I could talk for three hours on this, but I think we've covered enough to inspire our listeners by what you're doing.

Paul, what’s your take after listening to Dave on this?

Paul Boutin: My only observation is the sheer scale of what ADL is doing in trying to detect hate speech across, as you said, 2 billion people a month on Facebook alone and so many other places. And I know that we see a lot of complaints about what people miss, but just trying to get even a good chunk of it is an extremely huge—I don't have an adjective for it. As you said, 30,000 people, that's more than Microsoft employed for the whole company 20 years ago.

So, bravo. You know, for everything you do succeed at, it’s terrific. Because what if you weren’t doing anything, right?

Dave Sifry: Thanks, Paul. Well, we need—and we need people’s help. And there are people of good will who work inside of these companies and that they can help, too, and they can participate. Like, we all get to decide how we want to build the world that us and our kids are gonna live in. And so, I'm so grateful for people like you and for all the work that you're doing and we'll keep at it.

Christina Noren: So, we're calling this The Software Agents and it’s because we who have the power to build software have agency in this world. And, you know, you're giving a team of people at the ADL the ability to use their agency towards something amazingly good.

Dave Sifry: Thanks.

Christina Noren: Okay, well, thank you, Dave. We will sign off now.

Dave Sifry: Oh, thanks for having me.
