
90: Moderating Digital Toxic Waste with Chris Wexler of Krunam

Published August 31, 2021
Run time: 00:56:46

Last year alone, over 60 million incidents of child sexual abuse materials (CSAM) were reported online. Social enterprise Krunam is stopping that cycle of abuse with its breakthrough platform.

Krunam CEO, Chris Wexler, joins the show to talk about how technologies like computer vision and deep learning are aiding content moderators in identifying and removing CSAM from the internet, how social algorithms incent the distribution of harsh content, and why it’s also important we don’t create an Internet that’s entirely encrypted.

Note: This episode contains discussions around child sexual abuse.

In this episode, you will learn:

  • How content moderation happens on big social platforms
  • Why AI is best for identifying patterns at large scale
  • What computer vision is and how Krunam uses it to infer intent and behavior
  • The challenge of AI training with illegal data
  • How right now is a great growth period for technologies that identify digital toxic waste
  • How humans’ attraction to outliers fuels social algorithms
  • How it takes 30 years for society to adopt a new technology
  • How the progress of communication technology is reorganizing society from around location to around ideas and leading to adverse social outcomes
  • Why we need to have more nuanced conversations

This episode is brought to you by The Jed Mahonis Group, where we make sense of mobile app development with our non-technical approach to building custom mobile software solutions. Learn more at https://jmg.mn.

Recorded July 20, 2021 | Edited by Jordan Daoust | Produced by Jenny Karkowski

Show Links

Krunam’s website | https://krunam.co

Krunam on LinkedIn | https://www.linkedin.com/company/krunam/

Chris Wexler on LinkedIn | https://www.linkedin.com/in/chriswexler/

Chris Wexler on Twitter | https://twitter.com/ChrisWexler

JMG Careers Page | https://jmg.mn/careers

Connect with Tim Bornholdt on LinkedIn | https://www.linkedin.com/in/timbornholdt/

Chat with The Jed Mahonis Group about your app dev questions | https://jmg.mn

Episode Transcript:

Tim Bornholdt 0:00
Welcome to Constant Variables, a podcast where we take a non-technical look at building and growing digital products. I'm Tim Bornholdt. Let's get nerdy.

A quick note, before we jump into this week's episode, you probably hear the old rate and review us at the beginning and end of every podcast you listen to, but it's for a good reason. You see, discovering new podcasts is actually pretty difficult, and the only lever most podcasters have to pull on is getting more people to rate and review a show, so it shows up higher in the algorithms. So if you have a minute, it seriously would help us if you jumped over to ConstantVariables.co/review. Again, the URL is constantvariables.co/review. That site will bring you directly to our page in the Apple Podcast app where we would really love your feedback on the show. Obviously five stars is great. If you hate us, maybe just send me a DM or something. I can take it. Thank you so much and sorry to Chris for making him sit through another one of these.

Another quick note before jumping in. This episode does contain discussions around child sexual abuse.

Today we are chatting with Chris Wexler, founder and CEO of the social enterprise Krunam. Krunam is in the business of removing digital toxic waste from the internet. They use artificial intelligence to identify CSAM, otherwise known as child sexual abuse materials, and other indicative content to improve and speed content moderation. Krunam's technology is already being used by law enforcement and now they're moving into the private sector. So without further ado, here is my interview with Chris Wexler.

Chris, welcome to the show.

Chris Wexler 1:53
I'm really excited to be here, Tim, thanks for having me on.

Tim Bornholdt 1:56
I'd love for you to take this chance to introduce yourself and tell us all about the founding of Krunam.

Chris Wexler 2:01
Yeah, so my name is Chris Wexler. I am the CEO of Krunam. We are a company specifically focused on fighting child sexual abuse materials that are traded daily on the web. In fact, last year alone, there were over 67 million incidents that were reported on major platforms. And so our technology allows for the automated removal of this content, or at the very least, reduces the harm to the people that have to do the removal on these platforms. It should help reduce the re-abuse of these children that have already been through the most horrific thing. The last thing they want is those images being shared. But also, it should help stop the cycle of abuse, because consumption of that kind of material is often a critical trigger for people actually moving from fantasy to abuse of children. So that is what we do in a nutshell.

Tim Bornholdt 3:07
How did you get involved in this space?

Chris Wexler 3:11
I have a bit of a unique background. I have been in corporate America for a very long time. I was on Wall Street, largely following technology, back in the nineties. I was in marketing technology for 15 years, almost 20. But the whole time, for the last 15 years, I was volunteering with a group called Not For Sale, which is actually one of the co-owners of Krunam. They're one of the shareholders. And they attack human trafficking and the exploitation of people through trafficking, which is often sexual trafficking, in a very unique way. They have started companies all around the world that attack the problem head on. For example, a group of high-end brunch restaurants in Amsterdam that actually serves as a training ground for women who are pulled out of trafficking in the red light district in Amsterdam. A beverage company called REBBL. If you're ever in Whole Foods, you'll see it. It's one of the largest natural drinks being sold at Whole Foods right now. But that was literally created to halt trafficking out of a part of the Amazon and create an economy of scale for people that had been displaced by environmental degradation and had been living at subsistence level in that area of the Amazon. And I know that in the last five years, there's been zero trafficking out of that area of the Amazon, after it had been a hotspot for years. And so they've been creating these companies all around the globe to not only fight human trafficking, but also to help fund their nonprofit, Not For Sale.

And so I'd always been helping and volunteering, and consulting with them in their venture arm, Just Business. And when the opportunity came up to have a technology company that was in the world of big data and Silicon Valley big technology, which is the world I'd been working in for years, it was a perfect fit for me to step out of my traditional corporate role and make my avocation a vocation, but do it at scale. And so it's a super exciting opportunity. It's been an exciting year as we put this together and launched it, and we've had a lot of great momentum. So it's been a good time.

Tim Bornholdt 5:52
It's fantastic. It's really great that there are people like you out there using these tools and building these tools to help prevent a lot of what's going on with child trafficking. I think one thing that might be, I guess, eye opening, because we're a podcast, maybe ear opening for our listeners is, you know, you talk about moderation. And one of the things you mentioned in your intro was lowering the human cost of having to go through and moderate a lot of these things. Can you shed some light on how content moderation on big social platforms typically happens these days?

Chris Wexler 6:28
Yeah, it is one of the darker jobs that people have to do. Currently, content moderation is largely manual. There are automated elements of it. But by and large, anytime something objectionable gets onto Facebook, or Google, or fill in the blank, you know, we've all probably done it at least once, where we've hit a flag of, this is inappropriate, this isn't right. Well, that triggers a whole chain of events within those organizations of people actually reviewing the content. And so they have an army of content moderators. And these people are literally viewing and listening to the worst of humanity every day. The average stint in that job is about nine months. And most people come away with post traumatic stress disorder, PTSD, because it is a brutal job. Facebook, YouTube, and Google have all settled with moderators based on the damage that has come out of that. A lot of it was done in the United States early on, and now they've moved it into countries like the Philippines and India, countries like that. They've exported that misery to a workforce that has fewer rights. But you know, our view of this is, not only is that really damaging to those, frankly, heroes that are trying to make the internet a less toxic place for all of us, but it's also, I think, really critical because humans are really bad at this. If you're looking at something that's really damaging to you, beheadings, child sexual abuse materials, you get emotionally drained. And the studies have shown that after 10 to 15 minutes of viewing this content and clearing it off of the platforms, your performance at classifying drops dramatically. And when you have something that is identifiable through pattern, at large scale, it's a perfect thing for AI and deep neural network classifiers to do.

And so that's where we're stepping in. There is a current technology out there called PhotoDNA that Microsoft created back in 2008. It's called perceptual hashing. It's like a fingerprint of a photo. So if they've already found an incident of CSAM, and, just so your listeners understand, the reason I'm saying CSAM, it seems very technical, but it stands for child sexual abuse materials. I think people have parochially referred to that as child pornography. But really, we in the area of fighting this have moved away from that language completely, because pornography implies consent. And there's no way a child can consent to being in these images and these videos. So anyway, I just wanted to clear that up, but that's why I keep saying CSAM.

But, you know, PhotoDNA was really revolutionary at the time, it felt. It fingerprinted previously identified CSAM, and often, you know, what was happening, and still happens to this day, is that people are trading images. And so if it's known to the keepers of PhotoDNA that this is a piece of banned material, it gets put in the data set, and it's identified. The problem is, everyone in the world now has a, you know, high-quality camera and a 4K video camera in their pocket. And this content's getting created every day. And it's not getting into those hash sets. And so, you know, the current estimates are that between 5 and 10% of all CSAM is actually in the current hash sets, which really makes the current technology fairly antiquated.
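To make the "fingerprint of a photo" idea concrete: PhotoDNA's actual algorithm is proprietary and far more robust than this, but a minimal sketch of a perceptual hash (here, a simple "average hash" over a toy 4x4 grayscale image) shows the general shape of the technique. All of the data and thresholds below are illustrative only.

```python
# Simplified illustration of perceptual hashing (an "average hash").
# Real systems first resize a full image down to a small grayscale grid;
# here we start from a tiny grid directly so the sketch is self-contained.

def average_hash(pixels):
    """Hash a tiny grayscale image (list of rows of 0-255 values).

    Each bit records whether a pixel is brighter than the image's mean,
    so small changes (re-encoding, slight brightness shifts) tend to
    leave most bits unchanged.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# A "known" image and a slightly re-encoded copy (one pixel nudged).
known = [[200, 190, 30, 20],
         [210, 185, 25, 15],
         [205, 195, 35, 10],
         [198, 188, 28, 22]]
copy = [row[:] for row in known]
copy[0][2] = 40  # simulate a small compression artifact

# A completely different image.
other = [[10, 240, 10, 240],
         [240, 10, 240, 10],
         [10, 240, 10, 240],
         [240, 10, 240, 10]]

print(hamming_distance(average_hash(known), average_hash(copy)))   # 0: still matches
print(hamming_distance(average_hash(known), average_hash(other)))  # 8: clearly different
```

This also illustrates the limitation Chris describes: a hash can only match content that is already in the data set. Brand-new material produces a fingerprint that matches nothing, which is why hash sets alone cover only a small fraction of what circulates.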

We took it to the next level, where we have partnered with the Home Office in the UK, which, for those in the US, is like the FBI, the local police, and Homeland Security all wrapped into one. So it's their law enforcement department. And they had the foresight back in 2013 to create a database of confiscated materials. They literally keep it in a Faraday cage. For those of you who don't know what that is, it's literally a room that is shielded from Wi-Fi and cell signals and is not connected to the internet. And they have it all stored there. And it's millions and millions of images. We originally built this technology for the police officers, the child sexual abuse investigators, that were spending 80% of their time sorting through confiscated materials, trying to figure out, Is this illegal content? And if so, what classification of it? There are, you know, five or six different classifications, depending on severity. And then they only have 20% of their time left to go and actually investigate the crime. As a result, they'd have to triage who they were saving. It just didn't make sense. And so one of our co-founders, Ben Ganz, was a child sexual assault investigator. And he's like, There has to be a better way. And he knew this database existed. And so he talked to another one of our co-founders, Scott Page, who's our CTO, and said, Hey, how do we use best-in-class computer vision and AI to flip the script, and instead of spending 80% of the time going through this content, make it 20%, so we can do what humans do well, which is complex investigations? And so they set about doing that. This is back in 2015.

Chris Wexler 12:49
You know, a lot of people, I think, have a cursory understanding of what computer vision is. We've often heard about it in the context of facial recognition. Or when you're on Google, and you go, Hey, I want to find a picture of a cat. You know, computer vision is really good at going, Oh, that's a cat or a dog, like broad classifications. It's less good with faces. Like, it's very problematic sometimes in how it identifies faces. And we've seen that in particular with people of color, where it's misidentifying people and they're getting arrested for crimes they didn't commit, or fill in the blank. And in those instances, the algorithm has seen millions of examples. And so we were asking it to go one step further with CSAM, which is not just to identify what it is, but to imply and identify the intent of behavior based on body positioning, relative body size, state of undress, all these things. And, frankly, we didn't know if that was going to be possible back in 2015. It was really cutting edge at the time. And when we dragged all of our equipment into that Faraday cage at the Home Office, and got a result that was in the realm of knowing that we could refine it into something that was going to be really powerful for protecting kids, it was a real eureka moment. Because, you know, currently it's a 10x multiplier in effectiveness over the antiquated perceptual hashing, PhotoDNA approach. And it flipped the script for investigators. They're now spending, you know, maybe 30 or 40% of their time going through confiscated material. And as a result, it's a force multiplier. Now 60% of their time is going out there and protecting kids, exactly what technology should be used for. Take the monotonous, repetitive, and frankly psychologically damaging things off the plate of the investigators. And frankly, they're still looking at the content. But it's being presented in a way that it's bunched together so they can make better decisions.
They're getting warnings if it's particularly heinous, so they can emotionally prepare themselves. So the damage is lower. But more importantly, they're just getting through it quicker. And so they can go and do the things that humans do much better than computers.
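The two approaches described here, a lookup against known fingerprints and a classifier for never-before-seen content, can be sketched as stages of one pipeline. Everything below is hypothetical and illustrative (Krunam's actual system is not public): the hash values, the stub classifier, and the threshold are all invented for the example.

```python
# Hypothetical two-stage moderation pipeline:
#   1. exact lookup against fingerprints of previously identified material
#      (the PhotoDNA-style hash-set approach);
#   2. a classifier fallback for new content, routing high-severity items
#      to a human reviewer with a warning, as described in the interview.

KNOWN_HASHES = {"a3f1", "9c2e"}  # fingerprints of previously identified material

def classify(image_hash):
    """Stand-in for a deep neural network classifier.

    Returns a severity score in [0, 1]. Here it is faked with a lookup
    table so the sketch stays self-contained and runnable.
    """
    fake_scores = {"b7d0": 0.95, "c1a2": 0.05}
    return fake_scores.get(image_hash, 0.0)

def moderate(image_hash, threshold=0.8):
    """Return a routing decision for one piece of content."""
    if image_hash in KNOWN_HASHES:
        return "remove: matches known material"
    score = classify(image_hash)
    if score >= threshold:
        # Route to a human reviewer with a warning, so they can
        # emotionally prepare before viewing.
        return "review: high-severity warning"
    return "allow"

print(moderate("a3f1"))  # known material -> removed automatically
print(moderate("b7d0"))  # new but high-scoring -> human review, with warning
print(moderate("c1a2"))  # low score -> allowed
```

The design point mirrored here is the "force multiplier" Chris describes: the cheap, exact stage handles everything already known, and the classifier shrinks the pile that ever reaches a human.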

Chris Wexler 15:21
And so what we're doing with Krunam is bringing this technology that we built for law enforcement, to help investigators, into the content moderation world of large technology. And, you know, in the last four or five years, I think, for most online communities, the scales have fallen from people's eyes. They've gone, Ooh, we can't just let this run free, it's pretty damaging. And it's hurting the user experience, it's hurting a lot of people when their content is being put out there against their will. And so it's been exciting to have these conversations with these large technology companies as they realize their responsibility to fight this really, really horrifying and growing problem. And COVID only made it worse. During the height of the pandemic, nobody was traveling. People weren't getting out of their houses. They were doing more and more of this on their computers. And so it was the right product at the right time, fighting a really horrendous crime. And so it's been a wild ride the last 12 months. But that's where we're at.

Tim Bornholdt 16:37
Yeah, and I would imagine that it's been a lot of learning on your end as well. As technology continues to improve and, you know, our abilities with neural networks, like you were saying, continue to improve, and we're able to make better AI, I'm sure that the technology then just keeps getting better and better as well. And the results keep being better and better for preventing people from having to be exposed to this content, like you said, especially when it's particularly heinous. But any of this stuff, it's just like, no one wants to have to look through things like this that are just, you know, awful. So it's really interesting. And I guess the question I keep thinking about with something like this is, you know, you kind of alluded to being able to use training data that was well sorted and, you know, organized. Because if you're listening to this and wondering, like, you know, how do you actually go about doing this: you have to, you know, kind of categorize each of these images into certain classifications. And then once you have that data, you can continue to train on it, and it continues to learn and evolve as time goes on. But like, how in particular are you continuing to evolve that? I don't know what social networks you have as customers or anything, but, you know, assume that you get something like a Facebook as a customer. Would you then be kind of incorporating some of that content into continuing to grow the algorithms? Or, like, how do you continue to work on a project like this when, like you said, everyone has a phone in their pocket these days and can continue to produce this at scale? Like, just how do you keep up with the changing criminal mindset with all of this?

Chris Wexler 18:26
Well, you know, we're solving a really unique problem for these platforms. We don't actually take any data from our customers. I think a lot of AI companies essentially provide a framework. So they're like, Oh, we have this thing, and we'll take your data, we'll transform it and give it back to you. That is illegal with CSAM. It is illegal for a Microsoft or a Facebook to send us images that they've found. It's technically trafficking in child sexual assault materials. And so we're in a unique position where we have to train our models on illegal data, data that is illegal for us to hold and illegal for our partners to hold. And that's one of the reasons why this has not been an area of technology that's been able to move very fast, because you have a data issue. We solved that by having this public-private partnership with the Home Office in the UK, and the desire of the Home Office to really take this groundbreaking technology and not just keep it for themselves and their investigators, but bring it more broadly to the world. And I really want to honor their vision there, because with a lot of projects, particularly government projects, they go, Well, we have it, but we don't need to make sure anybody else has it. I mean, let's get the kudos and move on. They really have a heart and a focus to bring this technology more broadly, and have it apply more broadly, to protect more kids.

But as a result, we only train on data that has been confiscated through court-approved confiscations, so following all the privacy laws, and we only train on data that has been classified by three different trained law enforcement investigators. And I think that that's a critical thing for people to think about. We think about AI and we go, Oh, wow, look what it's doing with data. I think the dirty little secret of AI and deep neural networks is most people's data is poorly sourced, or even questionably sourced, scraped. Clearview AI, the big provider for facial recognition, literally is breaking privacy laws scraping photos off of the Internet, and has no qualms talking about it. That's a violation of CCPA, that's a violation of GDPR. Your face and mine are probably in their data set. And we have no way to get them out of there. That's really a violation of privacy. When you're talking about an image of one of the worst moments of a child's or a human's life, a moment of sexual abuse, we definitely can't cross those lines. And so that's why we make sure that we have truly airtight, ethically sourced data. We are working off of the ever-growing data set with the Home Office. And it's been classified by experts.

A lot of data is often labeled, or classified, by, you know, putting it on something like Amazon Mechanical Turk, which is a service where you just kind of ask random people to say, What is this? We obviously can't do that with CSAM. But as a result, I think a lot of the classifications out there are murky at best. We have a data labeling and data cleanliness problem in AI as a whole. But because of our amazing partnership with the Home Office, and the amazing trained men and women who work as child sexual assault investigators in the UK, our data is pristine. I think that's one of the reasons, in fact, I know that's one of the big reasons, why our classifier is so dramatically head and shoulders ahead of other competitors in the space: because we've solved a data problem. We've applied really amazing, cutting-edge computer vision, deep neural network, and AI elements to it. But the secret sauce is ethically sourced and really carefully labeled data. And it sounds like a really boring thing. But boy, is it powerful to have a really amazing data set.
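The labeling discipline described here, where every training example is classified by three trained investigators, amounts to a consensus rule over independent annotations. A minimal sketch of that idea follows; the labels, item IDs, and the unanimity requirement are all hypothetical simplifications, not Krunam's actual process.

```python
# Illustrative consensus-labeling check: accept a training example only
# when all three independent annotators agree on its classification.

from collections import Counter

def consensus_label(labels, required_agreement=3):
    """Return the majority label if enough annotators agree, else None."""
    most_common, count = Counter(labels).most_common(1)[0]
    return most_common if count >= required_agreement else None

# Each item was independently classified by three investigators.
annotations = [
    ("img_001", ["category_a", "category_a", "category_a"]),  # unanimous
    ("img_002", ["category_a", "category_b", "category_a"]),  # disagreement
]

for item_id, labels in annotations:
    label = consensus_label(labels)
    if label is None:
        print(item_id, "-> send back for re-review")
    else:
        print(item_id, "-> accept as", label)
```

The point of a rule like this is exactly the "pristine data" argument: disagreements never silently enter the training set; they get resolved by experts first.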

Tim Bornholdt 23:06
Boring usually makes for good business in the long run. People don't want to touch it, but, you know, at the same time, I can imagine. That's one thing, when we were preparing for this interview and I was thinking through all of this, as somebody that does work with a lot of data and understands how all of this works, that was my key question. Like, you know, you want to save a lot of these content moderators that social network companies employ because of, like you said, all the fast burnout and everything that goes into the job. It's just awful. But, you know, at the end of the day, you still have somebody at your company who has to go through and, you know, deal with this data. And I think having that public-private partnership, where you have people that are on the frontlines actually fighting this, and, you know, I would imagine people that are working in that industry in law enforcement have the resources to deal with the mental health side of seeing all of this content all at once. And so being able to kind of lean on their shoulders a little bit and take their information and apply it to the public space, it's really interesting to me. Because I was scratching my head of, like, yeah, how do you host a classifier with all this illegal material and deal with it? But that's a really, really smart way of going about doing it.

Chris Wexler 24:29
Yeah, I mean, I just have to tip my hat to Scott Page and Ben Ganz, who really kind of pioneered that. And frankly, all those investigators, you know. I think one of the amazing things about those investigators is they do sit in those jobs a long time. But I think partially it's that, A, they're trained and they have support. But they also have agency. They know that when they do their job right, they're actually saving kids.

Chris Wexler 24:57
And you know, a content moderator doesn't have that same agency. All they have is to go, Well, I took that image down. And so I think that there's an element of that too. The other tricky part of our training regimen is, even when we drag our equipment into the Faraday cage to do the training, we still can't look at the content. It's still illegal content. And so we are sitting side by side with the people running CAID, the child abuse image database that the UK Home Office holds. We're sitting side by side with them, back to back often, and they're helping us through a lot of that data. We're working with another major law enforcement organization, and we're doing the training over email, which is even more painful. And so when you can't hold the data, when you can't even look at the data, it is a really, really hard technical challenge. That is one of the amazing things that we've been able to solve.

Tim Bornholdt 26:02
So I think most people listening to this show, their last name isn't Zuckerberg or Dorsey. So I don't know if they would have this specific problem that you're trying to solve here. But more broadly speaking, you know, you've been doing really fascinating things with AI and deep learning. And, along those lines, what do you see with AI and with neural networks? Like, how can we apply those types of solutions to general issues on the internet? Because I mean, I think we can all agree that there's times, and especially we've been seeing this over the last few years, where the internet has become more and more of a toxic place. Where do you see, you know, people being able to take these concepts of deep learning and artificial intelligence and apply them in similar ways to combat, you know, maybe misinformation or other kinds of issues that we're seeing on the internet?

Chris Wexler 27:06
It's actually an amazing growth area in technology right now. And you see the big players, the Facebooks, the Googles, the Snapchats, they are building a lot of their own classifiers based on the data that they have. So if you look at violence, or misinformation, or all of that, they see that data coming through and are building their own classifiers. I think, long term, that's probably not the best approach for the health of communities online. Because what we don't want is 20 different flavors of how this might work, based on different corporate needs and wants. Eventually, this will have to become something more unified. And that's either going to be through regulation or through cooperation. And I think this is an area where, unlike in a lot of areas, big tech is actually really willing to cooperate. You see the heads of trust and safety and integrity, kind of community health is the best way to think about that, they are talking all the time. They are sharing notes, because they know that they're all solving the same problems. And so you see a lot of reciprocity going on in the space. But it's actually one of the reasons why we exist. We obviously are starting with, frankly, what is probably the hardest problem to solve, which is CSAM. But our vision is to really be a part of the solution of removing all digital toxic waste from communities. It's always been there. One of the things that we haven't had until fairly recently is really sophisticated recommendation and distribution engines on these platforms.

When Twitter and Facebook were straight timelines, you wouldn't have something pop up that wasn't from someone you follow. But as you know, the push, when you're one of these online communities, is that they get compensated by how long you spend on the platform, and how much you interact with the platform and the community. And they found that pulling in, you know, the great viral tweet or the great this or that is one way to get you engaged. Unfortunately, we as humans are wired wrong for this form of attention economy. Because all of us love a train wreck. When we drive, we're rubberneckers. It's kind of built into our DNA and our behavior that if you see something that's really an outlier to normal behavior, you pay more attention. That's why outrage works on the internet really well: the algorithms pick it up, because outrage is an outlier, and people are attracted to outliers. That's why, unfortunately, that's one of the reasons why disinformation and incitement to violence and a lot of this often illegal, or borderline illegal, content is getting pushed by these algorithms that are tuned specifically to attention. Because the horrible kind of law of unintended consequences in the space is, the closer content gets to illegal, the more people are interested in it. So literally, if you're optimizing your site to, I want to increase my revenue per user by a hundredth of a cent, and believe me, they're thinking of it in those terms, allowing the most extreme content actually drives the most attention, which then drives higher profits. And so actually the algorithm is incenting the distribution of harsher and harsher content. Often, you know, the psychological, and I don't know if I have the numbers right in my head, but, you know, for every negative you hear, you need to hear a positive five times for it to kind of balance out in your head.
Algorithms are just preying on that mindset. If it's, Boy, I just watched a motorcycle accident on YouTube or something like that, all of a sudden you're down this rabbit hole of really horrifying dashcam videos. It's because the algorithm goes, Well, we can get them to watch one more video, and we can sell one more ad.

And so we need to really try to divorce the toxicity from profit. And that's either going to be through regulation or really intelligent and thoughtful use of technology to kind of draw that line in a more healthy way. One of the ways we're thinking about it at Krunam is in kind of a US legal framework. Because we are a global company. We have our technology people in London. We're talking to people in the EU and Asia and all over. But particularly for listeners of this podcast, and for us, the First Amendment is a really interesting construct to think about. Everybody uses the First Amendment as a synonym for free speech. And it is. It guarantees the right to express yourself, but it is not an absolute right. There are classes of speech that have been determined by the US government to be illegal: obscenity, so CSAM fits into the obscenity space; true threats of violence; incitement to violence. So that's where I think you see extreme Islamic jihadism driving to violence, or extreme right-wing radicalization in the US driving to violence. That's illegal. That's illegal speech. Blackmail. Another big one is slander. The online context of slander is a difficult one to police. But a very clear case of it is revenge porn. That is like the clearest case of online slander and blackmail. It kind of fits both.

And so if we just draw the line at what 200 years of jurisprudence around the First Amendment says is legal and illegal, we're gonna make our communities dramatically more healthy. And so that, I think, is a really critical thing. You know, it typically takes society about 30 years to adapt to a new technology. So, for example, radio. The beginning of radio was all shortwave people. And they were so excited that, I can talk to people, ham radio operators. I can sit in Philadelphia and talk to somebody in San Francisco. It was, like, mind blowing. And it was largely two-way. And the commercial application was ship-to-shore radio, and so all of a sudden, there was huge growth. So first, there were hobbyists, interesting young men who liked to cause trouble. They used it a lot for jokes and humor. Sounds like the early internet. But there was a commercial application of ship-to-shore radio. It was a really powerful thing. And then the Titanic happened. The Titanic disaster happened. And while it probably wasn't the reason that the Titanic didn't get help, amateur ham operators were actually blamed for some of the distress calls not getting picked up. It was probably just a bad connection. And as a result, the government stepped in and regulated. And it moved from a democratized technology of ham radio to a commercialized monopoly, with two companies we know and love today in the US called CBS and NBC.

And so they really came into being out of the regulation of radio, and we did it to protect people. And World War One and World War Two, World War One in particular, really solidified it, because the government could just step in and go, Hey, stop messing with the Navy. You're seeing the same pattern happen on the internet, where the first 10 years was, Hey, this is a real thing. And there were hobbyists. I mean, it was the classic computer nerd that I kept getting accused of being. Right. And I still can't code. It's not fair. I've got the label, and I still can't code. Anyway, that's my embarrassing admission, that I actually can't code. But then you got to the era of growth on a commercial basis. If the equivalent for radio was ship to shore, for us it was hyper connection through social media, and news, et cetera. And so there was explosive growth. We're now in that last stage. We've had some really horrifying experiences. I think January 6, I think some of the disinformation around some of the vaccine deniers, we're seeing the real negative impacts of a free for all. And my worry is where we're going to regulate it to. I think it's very interesting that Facebook's all for internet regulation right now. Because I think the big tech companies really want to use regulation as a way to secure their monopolies. I think if we use technology properly, we can keep the internet small-d democratized, not in a political sense, but in a way that we all can use it and use it freely. But we have to take the toxic elements out. And as hard as it was to grow amazing platforms like Facebook and Twitter to the size they are, and frankly, the platforms in China dwarf them, as Herculean a task as it was to build them, I think it's going to be an even bigger task to build the right technology to make those healthy communities.

And so I'm a white male. I have a very different experience online than a woman of color. It's a much more toxic place for groups that are not in the majority. And we need to deal with that, because otherwise we're driving people off of a platform that is really becoming a proxy for society. Online is how we are organizing. And one of the reasons, this is my pet theory right now, one of the reasons we're in such social uproar right now, around the world, it's not just a US phenomenon, is that technology has really unleashed us from location. And everyone's like, Oh, yeah, I don't have to go to the office anymore. I'm not really talking about that. What I'm talking about is that the progress of technology has always been to expand human interaction. If you look at the size of ancient cities, the space from the center of town to the city walls, where they could be protected from the world, was a 30 minute walk. As horses became domesticated, cities got bigger, and from the center of town to the edge of town was a 30 minute ride on a horse. As cars came on the scene, the center of town was a 30 minute drive to the edge of town. And that was just transportation technology.

As communication technology has come in, we are reorganizing our entire society from organizing around location, where we were limited by how we could connect, to organizing around ideas. You know, you see this in the political space where people are talking about the big sort, where people are literally moving their homes to areas where the people are more ideologically aligned. But you see this in just interest groups. Back in the day, if you were someone who was absolutely passionate about cross stitching kumquats, you were alone in whatever town you were in. Now I guarantee you can find a group online of 1,000 kumquat cross stitch enthusiasts and find a community. That's really great for people who love cross stitch. There's nothing but upside there, right? You found like-minded people. It's really damaging if it's people who like to produce and distribute child sexual abuse material. Because let's say you're in your basement in a small town, and sadly, as is largely the case, you were abused as a child, and so your trauma is coming out unhealthily through the consumption of CSAM. You realize you're outside of the norm in that small town. Nobody else is talking about it, nobody else is doing it. But if you find a kind of community of 100 or 1,000 people through the internet where you're sharing these images, and you're talking about it, and you're doing it, it normalizes that behavior. And as you do that more and more, that social reinforcement actually drives you to action. So it actually puts children at risk. The exact same thing that has helped you and me find like-minded people all around the globe is incredibly damaging in this context, because it leads to really adverse social outcomes. And so we need to find a way. And I think starting with just some basic controls of what content can and can't be on public networks is going to be important. But that's going to take a lot of engineering.
And, frankly, not just by engineers. We couldn't build the pipes without engineers, but the pipes alone aren't enough.

But if you look at how we're built at Krunam, we have amazing engineers. Our AI, deep neural network, and computer vision experts are literally second to none. And they've been working in computer vision since the very start. These guys are amazing. But we also have child sexual abuse investigators. We also have sex trafficking experts, so we understand how the business of human trafficking works and how we can interrupt the business flows. We also have voices of survivors included. It has to be a holistic approach to the problem, not just a technological approach. And that's why this problem is so much harder. We're actually adding the humanities to a science and bringing those together. It's the only way we're going to solve a problem this massive. And so, you know, we're doing it on CSAM right now, but we want to bring this out to revenge porn, non-consensual sexual content distribution. You know, recording yourself has become a normal part of sexual expression. Having that distributed to the world shouldn't be a worry you have. And so, you know, that might be another stage. Another area is violence detection. Another area is disinformation. Another area is radicalization. For all of these things, there's just a ripe space for a multidisciplinary approach to solve them, but they ain't easy. That's often good business. But, you know, I love the challenges. I love that this one will make interaction between humans on a global scale more healthy. And that's what drives us every day here at Krunam.

Tim Bornholdt 44:35
There's, man, so much to unpack in all of that. I think the one thing that I was kind of thinking through was when early on you had mentioned the incentives for, you know, the big social networks would be to get you to stay on for one more click or one more video or one more scroll. Those are the incentives that they're provided. And so of course, if you're going to have those incentives, then you need to have that, you know, kind of technologically provided solution of being able to use, you know, crazy AI and deep learning networks to figure out how to classify different pieces of content and eliminate them. When maybe an easier solution would be, don't try to pull every penny out of somebody that you possibly can. But of course, being in a capitalistic society, that's heresy. So I would never advocate for anything that obvious or simple. But then, to your point, even if incentives were aligned, and there was still this, you know, kind of pervasive thought of, hey, maybe we should be serving up good content and getting the kumquat cross stitchers, you know, to be together in one place, it's like, even if you do have the best intentions in mind, and you're not trying to build a machine that's just keeping people drawn in using fear and uncertainty and doubt, you still do have that problem on the other end of radicalization and all these other problems that can surface. And on one hand, it's sad, you know, that we have to have these tools in place. You know, no offense at all to your business, but wouldn't it be great if it didn't exist? Because the ideal world would be we wouldn't have any CSAM, right?

Chris Wexler 46:23
I would love to be out of business, that would be all awesome.

Tim Bornholdt 46:25
Right. So it's just like, on the one hand, you know, if you were to redo the incentives of the social networks, so that their incentives are not to keep your eyeballs on the page at all times, their incentives are in some other format, they can make money in some other way, you know, that would be ideal. But even if that was the case, you know, I was mentioning the kumquat cross stitchers. It's like, you get them organized and they can use these platforms to find each other. In much the same way, you're always going to have people on the fringes of society with not great ideas or intentions that can also use these platforms to mobilize. So it's like, in one way, it's just going to be a never ending problem, regardless of incentives. Because with what we've unleashed with the internet onto society, we're always going to have problems with people finding each other and, you know, proliferating not great ideas and concepts, and being insular with each other to just keep it going to the point of violence or some other awful incident.

Chris Wexler 47:31
Yeah, I think that's exactly right. You know, I think one of the interesting things on that level is, we're essentially building a set of norms as a society online that took us 2,000, 3,000, 10,000 years to build as humans. And we're trying to do it very quickly. It's very difficult. And we're outsourcing it to companies that have a profit motive. And so that is just a truth. And so I think it's interesting, I think, you know, you see companies taking steps. I think Facebook's kind of council of elders, trying to help them with hard decisions, that is independent and independently funded and binding, I think that's one interesting approach. I don't think it scales, but it's a very interesting approach. But you know, we also have it in new business structures. We at Krunam are a public benefit corporation. And so we have a dual responsibility of fighting CSAM and other intractable problems online, and profit with our shareholders. And so I think we're beginning to realize that unfettered capitalism leads to a lot of negative consequences. That said, you know, to paraphrase what Winston Churchill said about democracy, capitalism is the worst form of economy except for all of the others. He said that about democracy, but we have ways to refine and work on capitalism and make it better.

And I think the markets are starting to actually turn here a little bit. You're seeing 40% of investors caring about ESG ratings, and so you'll hear those initials a lot more in the business world, but that's environmental, social, and governance. That is how these companies are doing beyond simply profits. And I think that with 40% of investors and 33% of dollars being invested in 2019 on that basis, that is going to actually have a moderating impact as it grows. And I think that's, frankly, the impact of millennials, and to a lesser degree Gen Z. They're probably still too young to be investing in the markets. But that's a lot of the impact of the millennials, who have seen the negative impacts of this, demanding more out of their investment portfolio than just pure growth. And so I think we'll find that balance. It's going to take time. It's going to take people that are focused on sustainable business models that also don't have negative impacts. I think we saw the same with fossil fuels, where the companies that don't have sustainable approaches are getting punished right now in the marketplace. You'll see the same with large, broad scale online communities. If you don't have a way to properly police, that might be the word, or at the very least make sure that your community isn't toxic, you will eventually be punished in the marketplace. It's just taking time to get there. And we're just trying to push that change along and do it in a way that is both individual friendly and business friendly. And I think it's important for us to bridge those two.

Tim Bornholdt 51:43
I completely agree. Chris, this was such a great conversation, I think one that more of us need to be having more often. How can people get in touch with you if they have any questions about Krunam or anything that you're doing to try to help solve some of these big issues facing us online?

Chris Wexler 51:59
Yeah, well, you know, I'm one of those weirdos that loves LinkedIn. So I'm open on LinkedIn. Just find Chris Wexler at Krunam on LinkedIn. Please just reach out, that's the easiest way. Twitter, at Chris Wexler, is another good way to reach me personally. Our new website is up at Krunam.co. If you go to Krunam.com, you're gonna get a really kick ass country station down in Texas, KRUN AM. But Krunam.co is our website.

And frankly, you know, because of the diverse elements of solving these hard problems, we have had such amazing people reaching out, people that have either been personally impacted by this horrific thing, them and their children, or that realize, hey, there's something in the financial flows, or there's something in data governance, or there's something here that can help us attack this problem on a more holistic basis. So if you think you have something that might help fight this, please reach out. It takes a large amount of expertise and knowledge.

I think the other thing, and this is always a really esoteric thing to talk about when you're talking about how do you help Krunam or how do you help fight CSAM, is I challenge everyone to have a very nuanced view on two competing goods: regulating to have a less toxic community, which often requires some surveillance, and privacy rights online. These are two competing goods. It's really important that we don't create an internet that is completely encrypted, even though nobody really wants everyone poking in. We need to have a really nuanced conversation about what is a public space and a private space online, what is appropriate for communities to be monitoring and not, and what is appropriate for privacy laws. And the truth is, it's not full surveillance, because that's draconian. And it's not full privacy, because that's just asking for crime and oligarchs to get away with what they're doing. It's finding a middle. And it's only when we as a culture and a community have a nuanced conversation about those two competing goods that we're going to have a more healthy online community.

Tim Bornholdt 54:49
I remember having nuanced conversations at some point in my life, but the last couple of years, that doesn't seem to be a thing anymore. So I completely agree with you. There's a pendulum. So much of life is a pendulum, right, swinging from one end to the other. And there used to be some sort of space where you'd go slightly back and forth. But it seems like, you know, the more that we get connected with each other, that pendulum is now just rapidly going from one end all the way to the other. And it's like, ah, no, we need to introduce nuance and civility and understand that we're all trying to get to roughly the same point of figuring out, okay, these things are definitely bad, and these things are definitely good, but we need to figure out the right balance. And I agree with you, the only way we get there is through nuanced conversations, pretty much just like the one we had.

Chris Wexler 55:44
Yeah, yeah. I really appreciate the conversation we've had, and you letting me talk to your listeners. This has been really amazing. Thank you.

Tim Bornholdt 55:53
Thank you, Chris. I appreciate it.

Thanks to Chris Wexler for joining me on the podcast today. You can learn more about Chris and everything about Krunam at Krunam.co. And as Chris mentioned at the end of the episode, you can find him on LinkedIn.

Show notes for this episode can be found at constantvariables.co. You can get in touch with us by emailing Hello@constantvariables.co. I'm @TimBornholdt on Twitter and the show is @CV_podcast. Today's episode was produced by Jenny Karkowski and edited by the jaw dropping Jordan Daoust. This episode was brought to you by The Jed Mahonis Group. If you're looking for a technical team who can help make sense of mobile software development, give us a shout at JMG.mn.