98: Uncovering Bias in Algorithms with Eri O’Diah of SIID Technologies
Published December 7, 2021
Run time: 00:58:29
With increasing awareness around diversity, equity and inclusion, there’s a greater need to improve conversations on difficult topics.
Eri O’Diah joins the show to talk about how her involvement with the Super Bowl in Minnesota led to her creation of the “Grammarly of bias”. She shares how her company, SIID Technologies, is harnessing the intuitive power of machine learning to build tools that facilitate better understanding of all groups, and how privacy and mental health concerns factor into training an effective model.
In this episode, you will learn:
- How a lack of diverse perspectives impacts algorithms
- How AI can be used to evaluate, uncover, and correct the influence of human biases in communication
- How sometimes the best solutions are ones founders create for themselves
- The monetary cost of bias
- Why there’s fear in addressing bias
- The mental health components of training algorithms
This episode is brought to you by The Jed Mahonis Group, where we make sense of mobile app development with our non-technical approach to building custom mobile software solutions. Learn more at https://jmg.mn.
Recorded November 30, 2021 | Edited by Jordan Daoust | Produced by Jenny Karkowski
SIID Technologies | https://siid.ai/
JMG Pricing Page | https://jmg.mn/pricing
Connect with Tim Bornholdt on LinkedIn | https://www.linkedin.com/in/timbornholdt/
Chat with The Jed Mahonis Group about your app | https://jmg.mn
Rate and review the show on Apple Podcasts | https://constantvariables.co/review
Tim Bornholdt 0:01
Before we get into this week's episode, I want to thank Pejman from Touca.io for his five star review of the show on Apple Podcasts. Reviews like Pejman's help Constant Variables gain more visibility. So if you're listening on the Apple Podcast app right now, please head to our main show page and scroll down to the ratings and review section to leave us a review. If you include your name or company name in the review, like Pejman and his Continuous Delivery Software Touca.io, we will give you a shout out on a future episode. If you're not using an Apple device, you can still leave us a review by visiting constantvariables.co/review.
This episode is brought to you by The Jed Mahonis Group. We build best in class iOS, Android, and web apps. We do this by integrating with teams that lack mobile expertise and work together to deliver creative mobile solutions that solve real business problems. To learn more about us and to see our pricing, something we're very transparent about, visit jmg.mn.
Welcome to Constant Variables, a podcast where we take a non-technical look at building and growing digital products. I'm Tim Bornholdt. Let's get nerdy.
Today, I am joined by Eri O'Diah, founder and chief visionary officer of SIID Technologies. Eri, welcome to the show.
Eri O'Diah 1:32
Tim Bornholdt 1:33
This is gonna be a fun interview. I can already feel it. We've been laughing a lot already. So life is going to go well here today.
Eri O'Diah 1:41
Yes. Absolutely. I'm excited.
Tim Bornholdt 1:45
Tell us a little bit about yourself and why you decided to jump into the world of tech and app ownership.
Eri O'Diah 1:52
Sure. So my name is Eri O'Diah, as Tim just said. I am the founder of SIID Technologies, which stands for Social Impact Identification. We're applying Big Data and emerging technologies to help companies evaluate, uncover and correct the influence of human bias on decision making and communication. And we're doing this through the development of a regulatory AI model. The idea for SIID came to me while my other company, a digital marketing agency called Collectively Digital, was engaged in the 2018 Super Bowl that was held here in Minneapolis. I was introduced to the real life capabilities of AI through discussions that I was privy to around stadium technology. I actually attended a sports conference at Optum that year, just prior to the Super Bowl. And, you know, much of the conversation was really around the POS system, the analytics around that, how the beacons worked and all of that. It was just super fascinating.
That same year, 2018, was also the height of the Black Lives Matter movement. Kaepernick took a knee. Trumpism was on the rise. And as I was learning about the capabilities of AI, you know, SIID initially started as a martech solution to better target women, who are equally as much sports fans, but much of the conversation that I was involved in was very male-skewed. With everything that was going on socially, I just really felt like another martech solution was not something that was dire. And, you know, what I was learning about artificial intelligence and the real life applications of it could be used to really drive change. And I wanted to do that. I wanted to do that to improve the quality of life for myself and others who look like me.
Tim Bornholdt 4:29
I'm curious to hear, like, with you coming from, you know, a non-technical background and jumping into the world of AI. There's jumping into a pool, there's jumping into the deep end, and then there's jumping into, like, I don't know, a riptide, which I feel like matches the amount of confusion and difficulty around AI and what that means compared to things like machine learning or, you know, any other kind of computer-driven, algorithmically-driven type of content or filtering and all that good stuff. I'm just really curious to hear what your experience has been like jumping into the very treacherous deep end of technology, and in trying to use these algorithms and understand how AI actually works.
Eri O'Diah 5:15
The great thing for me is that I did have a background in tech. When you think about my marketing background, many of the solutions that we used to automate had a component of AI, right, for automation purposes, for, you know, ads, social ads, and things like that. Additionally, I have an MCSD, which I received when I was 19. But unfortunately, I was never able to actually get into the tech space, because I couldn't get a job. No one would hire a 19 year old black girl to do anything in tech, back in those days, and still now. So I did have a little bit of foundation. And it has been treacherous since 2019. Diving into Niagara Falls is probably what it's been. But, you know, I did a lot of research on my own to really understand, you know, what modeling was like, what was required, what the issues around algorithmic bias were, really understanding that, and understanding that it was really the lack of diverse data sets, you know. And when you think about it in that sense, it helps in simplifying things. And for me, you know, I wanted to address the biases within the media marketing space, right. For much of my career in that space, especially in the film and entertainment sector, the teams that I worked on were really confirming stereotypes, right? You know, I've been asked to validate, like, urban, Ebonic speech or something. Like, how do you say this? And usually, I don't really even speak like that, for the most part. So I wasn't able to validate or confirm a lot of things on my teams, from a social media marketing campaign perspective, which resulted in me not being black enough, right, on those teams. And that was the problem initially: being black, and then not being black enough. So it's in that space.
And you'll see some of these campaigns, like Dove's beauty campaign, where you have a woman of color getting clean and beautiful, right? And she's taking her skin tone shirt off, and she becomes a white woman. So her becoming clean and beautiful requires her to turn into a white woman. You know, that's Jim Crow-era marketing right there. So I wanted to really mitigate those types of marketing, because the messaging in that is, you need to be lighter skinned, you need to be white, to be considered beautiful. And I mean, you can draw a line to the issues around colorism, and you can draw a line to skin cancer within the black community. In Africa, my country Nigeria specifically, there's a huge bleaching problem, right. And that has caused all types of health problems, and that's because of the social advertising and the social norms of preference and language and imagery around light skin, around representation of light skinned and white women. So a lot of what I wanted to do is really change those dynamics, right? Help marketers understand and better market and be inclusive and sensitive to cultural differences, versus perpetuating harmful stereotypes.
Tim Bornholdt 10:20
I can clearly see why those examples you gave in advertising are problematic, and how Dove was able to put that campaign out without one person raising their eyebrows is interesting. But I hear a lot about algorithms and the dangers of, like, social media; you hear about things like bias and racism and issues when you are just reviewing things through algorithms. So I want to hear, from your perspective, what are the issues with biases in algorithms?
Eri O'Diah 10:56
Well, I think the number one issue is the workforce, right, that are creating the technology and the algorithms. We all have biases. As a woman of color, I'm not immune. But when you have an industry that's predominantly one particular group, right, and that group develops for themselves, they're solving problems also for themselves. I believe at the root of the tech space and the AI space, that is the problem. We're not being inclusive in who, you know, participates in the development of these algorithms. So there is little representation. That's where that gap in data comes from. Does that make sense?
Tim Bornholdt 11:59
Totally. Yeah, that's what I've come to learn too. I always give the example that people think tech is just people going into a room, putting a wizard hat and a wand on, and kind of whispering incantations, and then voila, tech appears. And it isn't like that. This comes from humans, and humans are inherently flawed, and computers are inherently rocks we've tricked into thinking. And so you've got basically these computers that know logic, and logic being ones and zeros, true or false. Like, there's very clear boundaries. And so in order for a computer to make a human equivalent of a decision, you have to tell the computer, What would a human do? And that's effectively what a lot of machine learning and AI is: you feed in a series of inputs. Let's say you want to teach a computer how to identify cars. Well, you would take pictures of cars from all different angles.
Eri O'Diah 11:59
And all different colors.
Tim Bornholdt 12:10
And all different colors. And you would say, this is a Tesla, this is a Ford Focus, this is a Ford Fusion, and you would just tell this algorithm, this model, that here is what these things are. And then it spits out a model by which you can feed in a picture of any kind of car, and the computer will do whatever weird magic it does behind the scenes. This is the part that escapes my understanding; I didn't do well with math in school. But it's a lot of math that then comes out and says, Hey, we have a 96% chance that this is an ambulance, for example. And so when you have humans going through and making these judgments, if you have just a bunch of white dudes that are labeling things and describing things as they are, then that's what the computer knows. And so to your point, if you don't have diverse sets of people coming in, and you can split it across every sort of intersection you want to, whether it's gender or race or location or age, just getting a varying degree of perspective and opinion into this algorithm, frankly, I would imagine, the more you get into it, the computer will just melt, because humans just have so much difference in how they see the world and how they view things. But, you know, to your point, that's how a lot of these algorithms, I would say almost all of these algorithms, work, at least as far as I know.
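The labeled-examples-in, model-out loop Tim describes can be sketched in miniature. The sketch below is a toy nearest-centroid classifier over hand-made feature vectors, not any real vision model; every label, number, and "feature" is illustrative, and the point is only that the model inherits whatever judgments the human labelers fed it.

```python
# Toy illustration of supervised learning: labeled examples in, model out.
# Each "image" is reduced to a made-up feature vector (e.g. size/shape cues).
import math

def train(examples):
    """Average the feature vectors for each label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(model, features):
    """Return (label, confidence) for the nearest centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    distances = {label: dist(features, centroid) for label, centroid in model.items()}
    best = min(distances, key=distances.get)
    total = sum(distances.values()) or 1.0
    confidence = 1.0 - distances[best] / total
    return best, confidence

# The labeled training data: these human judgments are what the model inherits.
examples = [
    ([4.7, 1.4, 0.2], "sedan"),
    ([4.9, 1.5, 0.1], "sedan"),
    ([6.5, 2.5, 1.8], "ambulance"),
    ([6.3, 2.3, 1.9], "ambulance"),
]
model = train(examples)
label, confidence = classify(model, [6.4, 2.4, 1.85])
print(label, round(confidence, 2))
```

If the `examples` list only ever contained vehicles the labelers happened to recognize, anything outside that set would be misclassified with full confidence, which is the data-gap problem Eri describes.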
Eri O'Diah 14:34
And it's really important to have diverse perspectives. You know, it's really important to even challenge your perspective, right. And within the tech space, because as you said, it's predominantly white male, there isn't that challenge, right? Perspectives are very similar. There isn't diversity. And the problem that we have is we've now transferred the social issues of racism, you know, and biases into our tech. And that's a big problem, because we're only evolving in this area of technology. And marginalized people now have to not only deal with real life biases and the impacts from that, right, but from a technological perspective, they also have to deal with a computer that is biased, that doesn't recognize them. My iPhone takes horrible pictures, because I'm a dark melanated person. So I generally need daylight and a lot of bright lights in order to achieve a great photo with any of the smartphones, for the most part, you know. And even the camera on my laptop. So there's that. I mean, there are nuances, right. There are things that you don't even think about that could create challenges for an individual, right? But when you're creating technology like this, and you have a room filled with, let's just say it's, like, all women, right, we will generally solve things differently, right. We'll have a different perspective, and good or bad, those perspectives may not have any challenges if it's just a group of women. There's still room for bias even in that dynamic.
Tim Bornholdt 17:01
Right. It's not that we don't want to have a white voice or a male voice in crafting these algorithms. It's that we want to have as many diverse voices as possible. What I'm striving for with all of this, and what I would think humanity is striving for is for technology that helps everybody.
Eri O'Diah 17:20
Tim Bornholdt 17:20
And to your point, my sister's boyfriend also is a well-melanated individual, and whenever we take family pictures, either we are all blown out, like really overexposed, so that he looks normal, or he looks like a shadow and just disappears into the background. And from a technology standpoint, I've done a lot of photography, and I understand it is hard to balance those things. But people probably underestimate how much processing actually goes into a picture that's taken by an iPhone. There's a ton of software at play making these decisions. And the better the technology gets, the better we move along. And the more that people can, you know, send these examples to Apple, or better yet, get engineers that are black, or have dark skin, that have these problems, that have a vested interest in solving them.
Eri O'Diah 18:19
Because they would notice that as a problem.
Tim Bornholdt 18:21
Right. Exactly. These biases in algorithms can be something as innocuous as just that. Or something as innocuous as, like, they misidentified yoga pants or something, because it was a bunch of guys that did it instead of a bunch of girls, and so it looked a little different. But then it can also extrapolate, where if the recognition we're doing is, say, facial recognition for police cars, or for, like, security in buildings, those systems have been riddled with bias, because there's not a whole lot of people that get into the space of public safety. This is an oversimplification or generalization, but I would think that there tends to be a certain type of person that goes into writing those types of algorithms, as opposed to getting a more diverse approach at this.
Eri O'Diah 19:15
Tim Bornholdt 19:17
So I would love to hear, you know, some examples of how SIID Technologies could be deployed, or, like, some examples of where you're implementing it to make a difference.
Eri O'Diah 19:26
Sure. We're very much an early stage startup. We're pre-revenue right now. Through our engagement in a couple of programs, specifically the MIT Solve digital workforce challenge, as well as their unbundling policing and reimagining public safety incubator, we've identified two very strong use cases and product-market fit. So much of the challenge that I've had since, you know, the inception of SIID has really been gaining access to knowledge. I've been rejected by pretty much every program in Minnesota, up until this year, when we were accepted into the Minnesota SBIR/STTR accelerator, which helped us really hone in a little bit on what our use cases would be, right.
So initially, I was thinking of the marketing approach, for marketers. And through the engagement with MIT, we were able to hone it in a bit more for workforce development purposes, to be deployed into collaborative spaces like Slack, Teams and Discord, or, you know, used as a plugin within any text editor. We're very much like Grammarly; we actually coined it as the Grammarly for bias. And for the civic use case, we're in stealth mode with that right now, so I'm not really sharing too much about that. But we are exploring opportunities to work with police departments and deploy our technology in areas within their work.
Tim Bornholdt 21:53
First of all, getting into the SBIR accelerator is a really cool opportunity, because that's all, like, military-type funded, right? So the idea that that technology of eliminating bias could shape, you know, possibly how we deploy military technology around the world is encouraging. And partnering with the nerds at MIT is a really good opportunity as well. There's so much politics around this, where to me, it just seems like it really shouldn't be an issue. It feels like we should be aware at this point that bias is bad, and that we're trying to find a way to actually level the playing field, so, like, all of humanity can get lifted up instead of just certain sects of it. But anyway, it's heartening, at least to me, to hear that some of these organizations that you might not necessarily think of as progressive in that regard are actually taking it seriously and trying to find ways to better themselves.
Eri O'Diah 23:04
Right. I mean, with Pat Dylan's help, we were able to develop a strong enough project pitch to get the attention of the National Science Foundation and earn ourselves an invitation to submit a proposal, right. And we're using our engagement in MIT's program to build a strong project proposal and plan to submit that to the NSF next May. I agree with you. For me, personally, I am incredibly encouraged by both, you know, Pat Dylan, as well as the team at MIT Solve and their partners. What we're trying to do with SIID is incredibly ambitious, right? It's not easy. This is not an easy startup. But it helps to be passionate about, you know, doing something that is socially good, and that will change the quality of life for not just myself, but so many others, right. And in the space of policing, we've seen what took place last year, and even this year. Minnesota specifically has to come to terms with the problem that exists in this ecosystem. And I think the Minnesota communities are having a very hard time dealing with reality. And before this ecosystem can actually be conducive to underrepresented, underserved communities and individuals, it needs to accept the fact that the microaggressions and passive aggressiveness that Minnesota is known very well for is actually racism.
Tim Bornholdt 25:30
We're not just Minnesota Nice.
Eri O'Diah 25:32
It's racism. It's racism. It's bias. It's pure, unfiltered racism, right. And that's why a human being was literally murdered on camera in broad daylight by people who were supposed to protect him. And it's not a popular opinion. This is a topic that many people are uncomfortable with in this space. But it's the reality of the space. And if we don't talk about it, we can't address it. And because I actually had to sit in my home alone and watch all this happen last year, it affected me. And I absolutely feel called to play a role in minimizing the impacts of human bias in the public policy and public safety sector, because my life literally depends on it, right?
Tim Bornholdt 26:57
Well, yeah, absolutely. And, you know, you said earlier that this is a hard topic to discuss, and it is. We have to come to terms with not just, you know, 100 years of issues, or 500 years of issues; human history is all about, Hey, that's mine, I'm gonna go take it. And so it's kind of coming to grips with who we are as people and, more importantly, who we want to be. And I think that's why I'm really curious about what you do, and whether you have, like, you know, specifics around how you want to address this. Identifying the problem is clear to me, but I'm really just curious to hear: is what you're trying to offer, like, just a way to do checks on models? Or go on, I'm curious.
Eri O'Diah 27:51
For the workforce development use case, we are trying to help users become more aware of their unconscious bias through their communication, right. So for written communication, we're using a natural language processor, very much like Grammarly, right? And it's really just identifying, helping you become aware, that this content has a risk of bias, and this is how it can be perceived, and this is why, right. I think we're deeply ignorant of history. We're culturally insensitive for the most part, and we all have some learning and unlearning to do, right. And our solution, from a workforce development perspective, is about being more mindful about how we communicate with each other. Now that we've gone through this digital transformation, many American corporations have access to talent in emerging markets, right? Their existing workforce needs to be able to work and collaborate with these professionals in different and diverse markets now, right? So we have to learn how to talk and work with each other. Passive aggressiveness and microaggressions, it's not a good thing. It's not nice, right? And it's often baked into the language that we use, you know, especially the English language. So we're giving HR a tool to also keep a pulse on their company culture. Those companies, corporations who are really interested in not only a digital transformation, but a cultural shift that is inclusive, truly inclusive, and drives fair and equitable outcomes for all their employees, it gives them the tool to really do that, right. And it's not just HR's job. It's every employee's job to drive the culture that they want through how they communicate and interact with each other.
Tim Bornholdt 30:30
One thing I think about a lot is a blog post I read a long time ago about how to communicate with junior developers when you're a senior developer. And one piece of advice the author gave was to not feign surprise. What I mean by that is, if somebody was to say, Oh, you know, we'll just use Git to commit our code, and somebody comes back and says, Oh, what's Git? Then the senior person would say, You don't know what Git is? And saying it in such a way, it's something that we all do all the time, like, inherently. It's not even something you think about as being something that could potentially be minimizing, or making somebody feel dumb. Instead of the default being, You don't know what Git is? have a reaction of, like, Oh, yes, I get to explain to you what Git is! Being excited to, you know, bestow that knowledge upon somebody else. And so I think what I love about yours is, obviously, going after the, like, you know, racist thoughts and language that might be microaggressions and whatnot is super, super important. I also think it doesn't necessarily just specifically relate to racism. It relates to all kinds of prejudice and biases and things that we have as humans. And so using race as the lens is a helpful way to explain it. But really, to me, what's exciting is, we can just learn how to be better communicators overall. That's what we're trying to get to at the end of the day: how do we communicate better with each other, when we do have a diverse set of experiences? Like, we are from different cultures and different backgrounds, and we were all raised different ways.
But at the end of the day, if we're trying to work in a company or an organization, and we're moving together towards one goal, and I'm talking to somebody that has no tech background, for example, and I start explaining everything in terms of, like, database tables, those metaphors aren't going to get my point across of what I'm trying to say. So you need to find common ground. You need to find ways to do that. And so that's what to me is really exciting about your technology. Obviously I'm not trying to minimize it, but I am excited about that being part of a bigger thing of, like, let's just be better communicators.
Eri O'Diah 33:00
And race for me, you know, they always say that some of the best solutions out there are solutions that founders created for themselves. Right, and SIID is the solution that I'm creating to improve the quality of life for myself. These are the areas where I have had, and still have, huge challenges in corporate America: how people have communicated with me, spoken to me, treated me, and even the challenges I have in trying to find the language to express myself and to communicate, you know, as well. I need this software. I need a solution that I can also use to validate my concerns from a marketing perspective, right. I can sit in a team meeting and explain why this is not the best approach, why this may be offensive, you know. But having a solution that uses data and an evidence base, that I could share with team leads and say, This is my concern, and we're using the solution, and this is what the solution validates, right. That, for me, would be very helpful right now. Race is something that, if we can solve the race issue, I truly believe that everything else will literally fall into place, right.
Tim Bornholdt 34:53
Because you can apply those same results to any other kind of ism, you know, like ageism, anything else. If you can solve it for one, then yeah, you just apply the solution to the other problems and away we go.
Eri O'Diah 35:08
And away we go. So, for me, I'm solving for a problem that impacts my life directly. And race and communication is really one of those things. But it's not all of the things that it can do. And SIID isn't the be-all, end-all of these problems. It's a tool that can help individuals really become aware. For the civic use case, we are leveraging the tool for training purposes, for de-escalating encounters, and for applying a certain level of accountability.
Tim Bornholdt 36:03
That's a great way to, you know, make a difference. It's nice to find those things that are really hot right now and apply that technology to it. And when you were describing the story before, I kept thinking about an instance recently where I had my own bias showing right in my face. What's cool about tools like Grammarly is that they're active; you're getting feedback right away, where you said something, and then it's like, Hey, you know, you screwed up. I have an app that I built that tracks all the breweries in the state, and I'm trying to get to visit all of them. And I don't make any money off it, right. Like, it's not a big money making venture for me. But we have, like, a tip jar in the app where you can leave a tip of, like, 99 cents or whatever. But as a joke, I put one in there for $100. And I said something like, Well, look at you, Mr. Moneybags. Like, go ahead and donate all this money. And I did that, like, five or six years ago. I was typing it up, I thought it was funny, so I just threw it in there and away I went. And we had somebody come in and do some UX on the app, just give some feedback around things that could be changed. And I noticed this person put in their notes, The title here says Mr. Moneybags. Well, if there's a woman out there that would maybe potentially give you $100, and they see that it says Mr. Moneybags, maybe they might think this isn't for them and move on. And I was like, Oh, my God, I didn't even think about it. Like, it was a joke. I didn't even think about it. And it's an innocuous joke too, right? Like, it's not like I'm putting anybody down or out or anything, but it was the fact that it wasn't an inclusive way of saying it. Like, I just cut out 50%, potentially, of people that would give me $100. You can say what you want about having it be a good social venture.
But if you're going to be a cold blooded, capitalist, like we all have to be in this country, we're talking dollars that you're losing here by not being inclusive.
Eri O'Diah 38:04
I mean, the cost of bias in this country is about $8 billion. Like, $8 billion, right. And the overall cost globally of bias for corporations is about $60 billion. And that's from hiring, across the board. That's a lot of money annually. That's a lot of money.
Tim Bornholdt 38:28
Yeah, I'd take it.
Eri O'Diah 38:33
From a policing perspective, there was a study done on five cities, right. And the cost of bias, you know, from police brutality, the lawsuits totaled over a five year period, it was like $3 billion in five years. $3 billion in five years, across five cities. Could you imagine how that money could be used to improve the quality of life for the people in those cities?
Tim Bornholdt 39:17
Yeah, what a waste, you know.
Eri O'Diah 39:19
A complete waste. And that's taxpayer money.
Tim Bornholdt 39:22
Right. Oh, man. Okay, so, I guess, changing the topic a little bit. You know, one thing that I harp on a lot on this show is privacy. And let me just pick on Grammarly for a minute. If you have a tool that's sitting and reading everything you're typing, one might think, Hmm, where does that data go? And how does that data get processed? And is somebody actually reading everything I'm typing, or is it getting harvested so Zuckerberg can make a few extra billion later on down the road? So I'm curious, from your standpoint with SIID, how do you balance training your model, and having an effective model and an effective deployment of that, with the privacy concerns that people have around AI?
Eri O'Diah 40:11
Well, we're currently training our model in AWS SageMaker. And we plan to mitigate privacy risk using their cloud security, their API gateway, to identify problems and proactively address security issues. From our end, we are not necessarily monitoring or policing, no pun intended, conversations. We are essentially keeping a, it's almost like a sentiment analysis really, right. And based on the content, you know, applying some scores to that. But we are not policing actual conversations. And the caveat, or, you know, the benefit of SIID is that it's a one-on-one interaction with our solution, like, from a Slack perspective, right? Like, it's not necessarily going to send a message out within your Slack DM, within a group DM, of, Oh, hey, you have a bias, and this is your bias, or anything. This is very private, between you and our solution.
And, you know, one thing I've learned over the course of the last few years is addressing bias is, there's a lot of fear around that, you know. There's a lot of fear of being called out, embarrassment. And in order to help people become aware, and unlearn these biases, we're attempting to create an environment that's private, so that they can have this experience and move through this at their own pace, privately, without fear or risk of, you know, huge repercussions, if that makes sense. Employers get to take this analysis, right, to understand what kind of training they need to deploy to their employees. Some of that can be customized for each employee, right, and be digestible for each employee. But overall, it's really providing a tool for our workforce to better communicate. And it's not necessarily recording or tracking actual conversations, if that makes sense.
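The private, Grammarly-style feedback loop Eri describes could be sketched roughly like this. Everything here is hypothetical: SIID's actual model is trained in SageMaker, not a word list, and the flagged terms, alternatives, and explanations below are illustrative stand-ins (the "Mr. Moneybags" entry nods to Tim's tip-jar story earlier in the episode).

```python
# Hypothetical sketch of a private bias check for a message draft.
# The lexicon, scores, and phrasing are illustrative only -- a real
# system would use a trained model, not a hand-written word list.

# Toy lexicon: term -> (suggested alternative, explanation of why).
FLAGGED_TERMS = {
    "mr. moneybags": (
        "big spender",
        "Gendered phrasing can signal that an option is not meant for everyone.",
    ),
    "guys": (
        "everyone",
        "'Guys' reads as male-coded to some audiences; a neutral term is more inclusive.",
    ),
}

def check_message(text):
    """Return private feedback for any flagged terms, or None if clean.

    The key design point from the conversation: this feedback goes back
    only to the author (e.g. as a Slack ephemeral message), never to the
    group channel, so there is no public shaming.
    """
    findings = []
    lowered = text.lower()
    for term, (alternative, why) in FLAGGED_TERMS.items():
        if term in lowered:
            findings.append({"term": term, "suggest": alternative, "why": why})
    return findings or None
```

In a real Slack integration, the findings could be delivered with something like `chat.postEphemeral`, which shows a message only to one user in a channel — matching the "between you and our solution" privacy model described above.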
Tim Bornholdt 42:51
Yeah, and that's good, because I think that is really the only way to go about doing it. And it's good to keep everything private, you know, because, again, with politics and everything around this topic right now, there's already enough out there of people being skeptical of these programs and of training people about systemic bias and racism and things like that. So keeping it so it's just something like, Hey, bud, maybe don't be so racist. Sending it as, like, a private message as opposed to, like, Okay, here's the three most racist people in our company this week, in general in Slack, you know. You don't necessarily need to shame somebody publicly for their words. It's more in the spirit of getting people to be a little bit better.
Eri O'Diah 43:38
And that's the point. This is to prevent you from that shame of sending a message that could potentially be received as biased or offensive, and then having the repercussions that follow, right. Like, you send an email, you post a social media post, and the content's a bit, eh, you know, and that's already public. Now, would you rather have a solution on the back end, very similar to Grammarly, where as you are crafting your message, you can see, and it tells you, you know, and recommends alternative language that's either more inclusive, or less offensive, right, to communicate your point? And then it doesn't just offer you alternative language. It tells you why the language you use can be perceived as offensive or biased. Right. And that education piece is really important, because you don't know what you don't know. Right. If you're not aware, like, I mean, there's so many different cultural nuances in this world based on your background. How are we supposed to know everything? Like, to be honest, how are you supposed to know?
Tim Bornholdt 45:00
It's even little things. Like, I remember when the US invaded Iraq in 2003. Like, military people were going around giving the thumbs-up to everybody, being like, Yeah, we did it. Well, the thumbs-up in Iraqi culture is the equivalent of the middle finger here in the United States.
Eri O'Diah 45:17
Oh wow. I didn't even know that.
Tim Bornholdt 45:18
So it's little things like that, where you don't even know you're doing something that could be potentially offensive. And maybe you don't want to walk around, like, the middle of New York City flipping everybody off, because that would, you know, be pretty bad. So why would you go into somebody else's backyard and do the same thing? It's those little things. Like, it's not trying to shame you. It's just trying to show you, like, we're all kind of different.
Eri O'Diah 45:40
Now I'm wondering, like, in a Slack environment. I mean, that's really interesting, right? Say you're working with a counterpart that is, you know, on that side of the world, and instead of liking with a thumbs-up in Slack, I mean, I feel like that might actually be helpful, because a lot of us work with developers and counterparts in Pakistan and different areas of the world. I mean, I didn't even know that, and now I know when I am in a Slack environment, and I am talking to someone on that side of the world, not to use the thumbs-up.
Tim Bornholdt 46:20
Yeah, use the middle finger instead. No, I'm just kidding. I don't know. I don't know what the equivalent is. I can't remember at this point.
Eri O'Diah 46:32
What I know not to do now is use the thumbs-up. Whatever I use, I'm gonna use a heart. Right.
Tim Bornholdt 46:40
There you go. Hopefully, hopefully a heart is universal. But that's the thing with symbols and everything, is anyone can, it was just like the okay sign being usurped by the Three Percenters or the Proud Boys, whichever group it is now. Like, you can't give an okay symbol, because you don't want to be perceived as a neo-Nazi.
Yeah, there is one question that just popped into my mind. And I've been thinking a lot about this in the context of Facebook, because, of course, we have to spend all this time thinking about Facebook, because of just how awful they are. And it keeps coming out that they're even more awful. But one thing that I read a few years back was the amount of stress that content reviewers undergo by basically having to manually look at the worst of humanity for several hours a day with virtually no breaks. I mean, they're watching people kill themselves. They're watching, like, dog fights. They're watching just, you know, the worst of humanity. And so, like, you know, these people had just terrible mental health crises and have PTSD coming out of these jobs, because of just how awful it is. And so I'm wondering, like, we talked about how a lot of these algorithms get trained: you have to feed it stuff to let it know, Hey, this is bad speech, this is bad content. But I would think that you would have to, at some point, put in pretty messed up things into the algorithm so that the algorithm would know, Hey, that's super messed up. So, you know, you have to put it in in the first place. So my ultimate question is, like, how do you and your team think about keeping, like, a positive mental health, and approaching it with, you know, keeping the spirits up while having to be subjected to things that are very unpleasant?
Eri O'Diah 48:25
That's a good question, because I spent a lot of time moderating. That was a quarter of my job. And it is, it's very hard. It's very hard. And I don't know that I've figured that out myself. You probably see some of my rants on LinkedIn, where I'm in my feelings about things. And, you know, much of it is because, due to the work that I'm doing, I am consuming a lot of that content, right, and moving through it. What has been helpful this year: MIT offered access to therapy and support, because they knew that moving through this incubator would affect a lot of us mentally in many ways, and I thought that that was absolutely brilliant of them to offer and have that support on standby for anyone and everyone, right? And, quite frankly, our sessions were very therapeutic, to be honest. I mean, having a community that understands the work that you're doing, and is maybe even doing the same work, is very helpful, and has been very helpful for me this year. And having people to talk to about the challenges, emotionally and even physically, and hearing that they're also going through the same challenges mentally, and how they're managing it, has been helpful. I didn't have that prior to this year. And I think that for those who are content moderators, I would recommend that they either find a community, a support group, where they can share some of, you know, what they're going through emotionally, or really consider some therapy too, because we're not meant to take on that much negativity. It's part of our work, unfortunately. But, you know, our minds are not meant for that and the imagery that accompanies it.
Tim Bornholdt 51:51
It's a lot. It's overwhelming. And I'm glad to hear that MIT actually is providing you with, like, a therapist and giving you access to those tools and resources. Because, yeah, I think, you know, things have probably changed, but with Facebook, they had just basically subcontracted to a different company. And this company, you know, their only client was Facebook. And it was basically like working in a call center. Like, you show up, you sit at a computer, you see a screen, and it's just picture after picture after video after comment after whatever of, again, the worst things you could possibly imagine. And there's no relief, no reprieve. Like, they didn't give any mental health support; you barely got bathroom breaks. And it's just such a tough, you know, underbelly of all of this connectivity that we have. Like, there's so much greatness that comes from having us all interconnected with the internet and with technology, but with every immensely wonderful thing we have, you have to have that balanced by, you know, some pretty depraved and awful things. There's always a yin and a yang. And so, you know, finding ways to use this technology that you're working on, and, you know, helping to kind of stem the tide from it even coming in the first place, hopefully, but then, you know, maybe working on other tools to help with getting, you know, a reprieve from some of this stuff. I know it's not exactly what you're doing. But whenever I think about AI, that's, you know, that's what I think about.
Eri O'Diah 53:24
We actually, you know, the use cases we're focused on are workforce development and the public safety, policing arena. But the toolkit includes a plugin, a crawler, and some licensing, right, you know, an actual web application. So that crawler component is something that can be used to address that content moderation.
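The transcript only says the toolkit includes a crawler, not how it works, but the general shape of a crawler feeding a moderation model is straightforward: fetch a page, strip the markup down to visible text, then pass that text to a scoring model. Here is a minimal, hypothetical sketch of just the text-extraction step, using Python's standard-library HTML parser; the fetching and scoring pieces are assumed to exist elsewhere.

```python
# Minimal sketch of the text-extraction step a moderation crawler might
# use. Hypothetical: the transcript does not describe SIID's crawler
# internals. The extracted text would be handed to a scoring model.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible text chunks from an HTML page."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Keep only non-empty text nodes; tags themselves are ignored.
        if data.strip():
            self.chunks.append(data.strip())

def extract_text(html):
    """Flatten an HTML document into a single string of visible text."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

In a full pipeline, a fetch loop would download pages, `extract_text` would clean them, and the resulting strings would be scored for harmful content before any human moderator had to see them.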
Tim Bornholdt 54:00
Awesome. Well, and, hopefully on a lighter note to wrap things up here. I want to hear where you see SIID going in 2022. We're almost to the end of the year. So what are your goals coming up here for the next year?
Eri O'Diah 54:12
The next year, I am super excited about the next year. We have a couple of opportunities that we are waiting to hear back on in terms of fundraising. We are also working on bringing on a full-time CTO. We've been working with Jazmia Henry, who was really helpful in building our proof of concept, which is a Slack language bot. And that's what we've really been able to use to prove the concept, to articulate the idea of SIID and the potential of SIID. Much of next year will be spent in a closed beta, or closed pilot. We have one police department who's already signed on to pilot with us. And our pilot will launch, we're planning on launching that, in Q2 of next year. So that's what we'll be spending most of the next year doing. So super excited about that.
Tim Bornholdt 55:36
That is super exciting. Eri, you're doing amazing work. And I am really glad that you're in this space, and that the people that hated on 19-year-old Eri are now being looked back at with just gross disdain because of how great this company is going to be. So I'm very excited to see where things go, and, you know, keep us in touch, and we'll see where things go with SIID.
Eri O'Diah 56:00
Thank you so much. Thank you for having me on your podcast. This has been a great conversation. It's been super easy, you know, talking with you, and I even learned a couple of new things today, and I love learning new things. So this was pretty awesome. And I hope to make it on one of your walks here before it gets too cold. Or even if it's too cold, because I've yet to try snowshoeing. So if you have a walk over one of the lakes here with snowshoeing and stuff, I'm down.
Tim Bornholdt 56:39
Nice. There's a regional park by my house that I know has trails that you can snowshoe on, so, you know, we can try to find one. That would be fun. I've only snowshoed once, but it's fun until the snowshoe falls off your foot, and then you step, you know, three feet down into snow. That's never any fun, but
Eri O'Diah 57:00
It's fun until the 30 below zero wind chill.
Tim Bornholdt 57:06
Well, that's, you know, the opposite of Minnesota Nice. That's kind of how you get weeded out here: if you're tough enough to handle these brutal winters, then you're in.
Eri O'Diah 57:17
Right, right. Thank you so much. This has been fun.
Tim Bornholdt 57:20
Thanks to Eri O'Diah for coming on the show today. You can learn more about SIID Technologies by visiting SIID.ai. That's SIID.ai.
Show notes for this episode can be found at constantvariables.co. You can get in touch with us by emailing Hello@constantvariables.co or you can find us on Twitter @CV_podcast. Personally, I'm most active on LinkedIn so connect with me on there if you haven't already.
Today's episode was produced by Jenny Karkowski and edited by the vehement Jordan Daoust.
As I mentioned at the top of the show, if you could take two minutes to leave us a rating and review on Apple Podcasts, we will give you a mention in a future episode as a thank you. Visit constantvariables.co/review and we'll link you right there, or open up the Apple Podcasts app on your phone and head to the main show page for Constant Variables.
This episode was brought to you by The Jed Mahonis Group. If your team needs mobile expertise, we would love to work with you. Give us a shout at jmg.mn.