Gadget Lab Podcast: Inside Textio’s Anti-Bias Bot

Many companies say they want to diversify their workforce. Far fewer have actually succeeded in doing so, even if they’re earnestly trying. And one of the first hurdles can come before any candidates have even been interviewed: The language used in recruiting emails or job postings is often full of unconscious biases—phrases like “gentleman’s agreement” or even “ninja” can deter women or people of color from applying in the first place.

But how do we check our unconscious biases when, by definition, we don’t know what they are? A Seattle-based startup called Textio says it’s using machine learning to help eliminate those biases in real time, by literally changing the writing of hiring managers who are composing the job postings.

This week on Gadget Lab, WIRED senior writer Lauren Goode talks with Textio CEO Kieran Snyder about the way the software works, how tracking language patterns over time can reveal deep insights about how we see the world, and how this kind of “augmented writing” software could eventually be used in applications beyond job postings.

Show Notes

Read more about Textio here. Check out other conversations from WIRED25 here.

Lauren Goode can be found on Twitter @LaurenGoode. Michael Calore is @snackfight. Bling the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our consulting executive producer is Alex Kapelman (@alexkapelman). Our theme music is by Solar Keys.

How to Listen

You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:

If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Play Music app just by tapping here. We’re on Spotify too. And in case you really need it, here’s the RSS feed.

Transcript

Lauren Goode: Hi, I’m Lauren Goode, cohost of WIRED’s Gadget Lab podcast. And this week we’re talking about AI, specifically how artificial intelligence can be used to make suggestions on your writing as you’re writing things to eliminate bias. Of course, the big question is, will AI ever replace all of our writing tasks? And what happens if the technology that’s designed to eliminate bias is biased itself?

It’s a special episode this week, because a couple of people from our regular Gadget Lab crew are out of the office traveling. So it’s just me in-studio. And the interview you’re about to hear is from our WIRED 25 conference late last year. It’s a conversation I had with Kieran Snyder.

Kieran is the cofounder and CEO of Textio, a Seattle-based startup that has created what Snyder calls an augmented writing platform. And it’s fascinating. We talked a lot about how Textio works, what kinds of words and phrases are biased that you might not even realize are biased, and how she envisions things like Textio being used more broadly in the workplace. All right, without any more windup, here’s Kieran Snyder from Textio.

Announcer: Please welcome Kieran Snyder in conversation with WIRED’s Lauren Goode.

LG: Thank you everybody for joining us today, for being here. And thank you to Kieran Snyder for joining us at WIRED 25.

Kieran Snyder: Thank you for having me.

LG: Just a quick intro for Kieran. Kieran is the cofounder and CEO of Textio, which we’re going to talk all about. And prior to that, you were a program manager at Microsoft for nearly a decade. Is that right?

KS: More or less.

LG: More or less. And she also has a PhD in linguistics and cognitive science. She is, I think by all definitions, a linguistics expert. And so we’re going to talk about that as well.

And I know for some of you who have been in earlier podcast tapings, you’re aware at this point that there is something like a kid carnival going on next door, thanks to Google. And so you may hear some noise as kids take their aggressions out on robots and things like that.

But that’s what we like on the Gadget Lab podcast. It just adds to the atmosphere. So thank you for your patience as we deal with the noise. So I think a good place to start would probably be to ask what is Textio? What does Textio do?

KS: Textio is an augmented writing platform. So then the next question is, what’s an augmented writing platform, right? Think of it as a word processor that is designed to tell you who’s going to respond to the things that you’re writing. So based on the patterns of language that you’re using, you may resonate more with one audience versus another.

Maybe you’re writing a job description, and you’re trying to figure out how to get the most qualified people to apply. Or maybe you are writing a message to a colleague, and you’re trying to figure out how to get them to engage and not kind of select out. But that’s what Textio is.

LG: And who are you selling that product to?

KS: We sell to businesses today, generally to leaders. So people who are involved with hiring or internal communications or HR. But it’s always into businesses, companies of all sizes.

LG: When you say companies of all sizes, what businesses specifically can you share? Some names that people might recognize?

KS: Sure. Everything from Cisco and Johnson & Johnson to Slack to NASA. We’re really excited about the NASA one. When our team got to go visit NASA onsite, they came back with NASA T-shirts. We have a lot of space nerds at the company. But really, a large range of organizations, governments, civil organizations, as well as large enterprises.

LG: I thought it might be helpful to describe for the audience what’s happening as they’re using Textio’s software. So you’re in this augmented writing platform. And one of the examples that we’ve talked about before, because Kieran was also in an issue of WIRED earlier this year, is you’re a hiring manager or recruiter and you’re typing something up and you use a word like, “I’m looking for a ninja,” or “I’m looking for a rock star.”

KS: Don’t use that word. Right? Yeah.

LG: And I see a friend laughing because she now works at LinkedIn and she’s like, “Yes, I see this all day long.” OK. So you see phrases like this. And one of the things that shocked me is you were telling me how coded those words are and that the software actually flags that. So, like, paint a picture for us if someone’s typing up an email or a query or something in your software. What actually happens?

KS: Yeah. You’re a writer, so maybe this never happens to you, but think about the last time you had to write a really sensitive email, a really sensitive piece of communication. And you knew before you pressed Send that it was a little bit risky. And if you’re like other people, what you probably did in that situation is you ask someone you trust to give it a read-through before you send it, right? And maybe they’ll catch something about how it’s going to land, and maybe if they catch something you’re going to make a few changes and you’ll press Send and you won’t be in trouble. Right?

Textio is like 500 million of those second opinions. So as you are writing, Textio is comparing your language to the language of other similar documents that the system has seen before, where it knows who has responded to you. And so you are getting suggestions, you are getting language patterns promoted to you that will work well in your situation. So if you are saying “ninja” in the context of a hiring document, you are statistically very likely to attract only people who identify as men for the role, right?

So we see these patterns all the time. One of our favorite examples is from one of my alma maters. Amazon uses the word “maniacal” on its career site 11 times more often than the rest of the technology industry. And I guarantee you there is no HR person at Amazon running around telling people to describe the workplace as maniacal. Like that’s not a brand goal. But when they do, statistically, it changes their candidate pool. And when all of us hear that, none of us are that surprised, right? Because when you have thousands of people using a pattern in common, it reveals something deep about the culture.

LG: When you say it changes their hiring pattern, how does it skew it exactly? Who does it attract?

KS: “Maniacal” definitely draws fewer candidates who identify as women to the role. Or you hear something like Uber uses “whatever it takes” 30 times more often than the rest of the industry. It has a similar impact. It draws more candidates who identify as men and specifically white men. And that is not their intentional goal, probably, in using that language, but it happens nonetheless.

LG: So you say that the software is comparing the document that you’re working on at that moment in real time with 500 … you said 500 million other documents, potentially?

KS: It’s actually usually close to 600 million.

LG: Close to 600 million.

KS: Now other documents where Textio has measured the response rates in the past.

LG: And those are all Textio documents? Or you’ve pulled that data from somewhere else?

KS: Those are Textio documents. So they’re all documents inside our data stream and training set. And of course, when I’m writing, not all 600 million are relevant to what I’m writing. So Textio kind of slices and dices the data sets so that you get the set that is most relevant. So if I’m hiring an engineer in San Francisco, I have a very different comparison set than if I’m hiring an accountant in New York, for instance.

LG: OK. And you have two different kinds of products. One is, as you described, you’ve done the writing already. And then you’re sort of sending it through this processor. And then the other is that the suggestions are happening in real time.

KS: So I would slice it a little bit differently. All of Textio is a real-time writing experience. But in any kind of learning loop platform—and we think of Textio as a learning loop platform, which means that as you are using it, you’re making the system more intelligent for everybody who is using it—I think there are three key pieces.

The first is about being able to predict what’s going to happen, right? So as you’re writing, can Textio make a prediction of who’s going to engage? The second piece is making a suggestion to change something that you’ve just typed. And say, “OK, actually, instead of the word ‘manage’ in this context, maybe use the word ‘lead,’ or ‘run,’ or ‘handle,’ because you get a different impact.”

And the third, which I think you’re probably talking about, is the ability to create language more proactively. So if the system knows that I’m hiring somebody with a machine-learning background and 10 years of experience, it can proactively morph those notes into language that’s going to work really well for that role.

LG: One of the other stories that you told me that became one of my favorites was how you figured out that big data was over. That people saying “big data” in a job posting was very five years ago. That had its moment, but now it’s seen as a little bit outdated.

KS: Yeah. And the same thing’s happening with AI. So actually, when we started the company, which was about five years ago, it was right as “big data” as a language pattern was on the decline. So if you were to go back maybe six or seven years ago, if you were describing your work as involving big data in the context of a job description, or by the way a startup you were trying to get funding for, it was really, really popular.

If you used it in a hiring context, you got more people to apply. You got more qualified people to apply. It was a pretty hot term at the time. And what happens with any kind of marketing thing that becomes popular is people copy it. And when people copy it, then it becomes so pervasive that it loses that differentiating impact that it once had.

Today, if you were to use “big data,” people would giggle a little bit, right? Because it became so popular that it became a cliché. And so today, if you were to use “big data” in the context of hiring, the jobs would fill significantly more slowly, because fewer people would apply. And the same thing is happening with “artificial intelligence” today. It has clearly hit its saturation point and is beginning the decline.

LG: So in some ways, you’re using AI to determine that the phrase AI may be on the outs.

KS: There you go. I would say we’re using learning loops.

LG: That’s right. Learning loops.

KS: Maybe that’s what’s next.

LG: Right, right. So I think the natural question is, when does this software become more widely available? When does it become something that all of us are using? Like we can plug it into our Google Docs or our Office 365? So that as consumers we have access to these augmented writing tools, and it’s not just something that businesses and hiring managers are using.

KS: Yeah, all of us are both employees and consumers. And I think the line is generally pretty blurry between the things we use at home and the things we use at work. So when we talk about learning loops, I’ll just note that in our consumer lives, we’ve been using similar software for quite a while, right? Every time you’re driving with Waze or Google Maps, it’s fundamentally the same principle involved, right?

You’re sharing your coordinates with the software and so is everyone else on the road. And therefore, we’re all getting where we’re going a little bit faster. Or when you’re trying to listen to music on Spotify and you’re getting recommendations from people who have patterns a lot like you. So the learning loop technology has been present for a really long time in our consumer lives. And I think in the last three to four years, we are increasingly expecting to be augmented at work. Writing is just one domain.

LG: So I guess I’ll ask that again. Do you plan to launch a consumer version of your software?

KS: Perhaps. Right now, there are so many areas inside businesses that have really high impact. We started with hiring because every business really is impacted primarily by who chooses to work there. That’s the thing that makes or breaks the business. But if you think about all the places you’re writing at work, whether you are making software as we do at our company or you’re making T-shirts, the thing you’re actually probably making the most of every day is words.

And the opportunity in a business that’s really interesting is, companies do have voice and culture that emerge in the language that they use. So for Uber to use “whatever it takes” so often is not something that’s just a fact about their hiring language. It’s a fact about their language in general. So I wouldn’t rule out something that is consumer oriented. But the cultural patterns, when you look at large organizations, are very interesting from a language perspective. So we’re really highly focused there right now.

LG: Speaking of large organizations, what happens when companies like Google or a Microsoft start to bake some of these features directly into their very popular productivity suites? When I was at Microsoft earlier this year, and they were making some updates to 365—this was ahead of their big annual software event—one of the things I saw was that in Word now there’s a refine-your-writing tool. Some of the words that specifically were flagged were things like, if you wrote that you are making a “gentleman’s agreement,” the software might actually flag that and say, “That’s not inclusive writing. Here’s another suggestion.” Or if you were to say “housewife,” it would correct it and say, “How about homemaker, because it’s not gendered.”

And so it seems like the big tech companies could have the ability just to build this effectively. How do you stay competitive and differentiate when you know that all of these really smart people are working on these tools at Google and Microsoft?

KS: Well, they’re not working on these tools. They’re working on broad word-processing platforms, which is quite different. That’s the world that both my cofounder and I came from. And actually, we were both involved in productivity software at Microsoft in leadership roles for quite a while. And the challenge with something like Microsoft Word or Google Docs is, paradoxically, its scale. So the fact that a billion people use Microsoft Word every year, it’s a huge market. It’s probably bigger than a billion now. It was a billion a couple of years ago.

They’re using it to write all kinds of things. And that means you lose the concept of outcomes, right? So the best you can do when you’re writing … you’re making software that should be working for a billion people, writing all kinds of things at the same time, is including shallow rules to say, “Hey, ‘gentleman’s agreement.’ That’s a little bit biased.” Or “Hey, you put your comma in the wrong place.” Or “Wow, your sentences are kind of long.”

These are not things that are telling you at all who’s going to respond, because you can’t build a response-based model for a billion kinds of communication at the same time. The opportunity of augmented writing is actually to focus specifically on a domain. Or in the case of jobs, it’s really about who is responding to you. There’s a feedback loop that can be measured. Broad word processors don’t have that kind of feedback loop.

LG: Interesting.

KS: It’s a very different writing scenario.

LG: You’re saying that one of your differentiators is that you can measure the effectiveness? That the project that somebody is working on in your software is actionable, there has to be some type of … the person has to respond to the job posting. As opposed to writing a “Dear diary” entry in Microsoft Word that nobody’s going to see. Or some other sort of draft that potentially might not have impact.

KS: That’s exactly it. So if you think about all the places you use Microsoft Word or Google Docs, when you’re writing, you use it to jot down notes. You use it to write marketing documents. You use it to write specs. You use it to write articles. You use it to write all kinds of things. And it’s not clear across that whole set what kind of feedback loop you would even measure.

It’s a super valuable thing to have a broad word processor, but it’s a different kind of thing. I actually see the Microsoft Word features much more in competition with like a Grammarly, which also is really about shallow rules that work across all the kinds of things you’re writing. Rather than something that superpowers your writing for a very particular scenario.

LG: We’re going to take a quick break here. But stay tuned for the rest of the conversation with Kieran Snyder from Textio.

[Break]

LG: Why do humans need this software? Why is it that we’re not able just to … After working on a couple of drafts or learning a lesson ourselves over time, we’ve put up a bunch of job postings and we’re not getting a response. So we tweak the words and we hope that it’s more effective, right? We have our own abilities to learn patterns over time. Why is it that we’re not able just to, I don’t know, figure out our own biases in writing? And need to even rely on software like this?

KS: I think we can figure out some of our own biases in a very limited way. But we’re always going to be limited by the experience that we’ve had, A, and observed, B. Right? And the opportunity to have my observations augmented by hundreds of millions of other people’s observations means I have the opportunity to learn more.

And the reality is that the more different somebody is from me, the worse I’m going to be at guessing what is going to resonate with or alienate that other person. Right? When we ask that trusted colleague for the second opinion looking over our shoulder, it’s because we inherently know that we’re probably not guessing correctly all the time. There’s probably something a little bit wrong.

Those are the cases we know about. Most of the cases of unconscious bias, by definition, we don’t know that we have, right? That’s what it means for them to be unconscious. And so we’re all limited by what we’ve experienced and been able to observe about that experience. That’s just part of what it means to be human.

LG: So what happens when we get to a point where augmented writing tools and platforms are changing our language in such a way that a lot of our language all starts to look alike? I’m writing the same thing that my colleague Boone is writing, that my editor Mike is writing. What happens when the word corrections that are coming through start making us all look the same in our creative endeavors?

KS: Well, you’re a writer. So let me ask you what you would do in that situation.

LG: Well, yes, I’m a writer. So in some ways, Textio terrifies me and delights me. I don’t want it taking my job. But at the same time, it seems like it could be a really useful tool. So I guess if, let’s say …

KS: Forget Textio.

LG: OK.

KS: What do you do right now? If you notice that tech writers are writing in a similar way about similar themes, what do you do? You’re a writer.

LG: I adjust. I adjust my own output. And I do that a lot, actually. At WIRED, we’re very attuned to what we call echoes. If you’re using the same word over and over again in copy.

And, “Well, you’ve said the word ‘saturation’ three times.” I’m literally just making up a word right now. But, “You’ve said that three times in your copy. Go back and change two of them,” right? So you only have one instance.

So I guess it would be the same thing if I was reading a bunch of articles and I realized, “Huh, we’re all starting to echo each other in some way.” I’d probably try to craft some unique or different way to say it.

KS: And that’s what the best writers do. Whether people converge on a voice because they’re using software or not, the best writers always find a way to sound different. When everybody starts using “big data,” somebody realizes that “artificial intelligence” is next. And when everybody’s using “artificial intelligence,” somebody realizes that “learning loops” are next.

And that’s how language evolves. It’s not a static system. We don’t sound the way our parents sounded or their parents sounded, or certainly the way people sounded a hundred years ago. That’s not how we sound today. We use different words. We put words together differently. The sounds we use have changed over time.

So the system keeps moving. So even if you have software that guided people, in some cases people who are part of the same organization, toward some kind of unified voice, the best writers always find a way to subvert that. And so it means that the software is always learning from what the best writers are doing at any given time. And that’s actually something really important at Textio: we look at the writers who are most successful in the platform. And we look at what patterns they appear to be innovating before the rest of the system has caught on to it.

So a simple example, in the case of job posts over the last year, successful job posts have gotten shorter. They’ve shed about a hundred words. OK? And Textio knew that was going to happen really, really early on. And it’s not because it showed up in our broad data set, it’s because the pattern showed up in the writing of people who were otherwise scoring the highest in the platform. And so the best writers always find a way to stand out, to push the system forward in ways that are really big or really small.

LG: Is it possible for your technology to work in the inverse? So rather than the software being applied at the point that somebody is crafting a document, the technology is actually reading something that’s already been written. Perhaps it’s the response that comes in. Which I think opens up a whole other can of worms, right? When you’re effectively screening candidates for jobs, can your software work that way as well?

KS: So at the language platform level, Textio and any kind of platform that was built similarly would be able to adapt to any kind of text with an outcome attached. The outcome part is really important. Otherwise, you do become just a general purpose word processor, and you lose the predictive power of what the software can do. But it’s all about where is the highest-impact opportunity to help people say what they mean without guessing. Put their best foot forward.

So a totally viable scenario would be to help the job seeker put their best foot forward in an application. And by the way, that is a scenario a lot of the companies we partner with are very interested in us enabling. Because they too want strong job seekers that they can hire.

And it’s not the case that for every job, your ability to write a résumé or a cover letter has anything to do with your qualifications for the job. So if you could help people put their best foot forward, it would help screeners understand the real skills that could be applicable. I think there’s a possibility for this to be very virtuous for both the hirer and the person trying to be hired.

LG: So what’s interesting, and I know that we’ve talked before about how you refer to it as an augmented writing platform, not an AI writing platform. But I think a lot of people are hearing, “OK, AI. This is definitely some type of artificial intelligence that’s being applied here to a very human experience.”

What’s interesting is that you are in some ways using AI to combat biases that creep into our writing. Where AI itself is known to have biases because of the humans that are putting together the data and feeding this data into the AI. So, I guess, how worried are you about your own software potentially absorbing biases as it is being deployed to combat biases in the hiring process?

KS: That’s something we think about all the time. I don’t know if you remember, maybe a year and a half ago, Amazon got in some trouble by trying to build hiring software that automatically matched their candidates with their open roles. They tried to build it for internal use only. And what happened is just what you would expect, which is the software repeated the exact biases that had been present in their hiring patterns for the prior decade. Right?

And so if certain roles were generally filled by men or by white people or by young people, the software went and picked men and white people and young people for those roles. And of course, that’s a huge problem. Which is not at all what they intended when they designed it. And so when we think about that at Textio, it’s really important that people’s data be able to be anonymized and aggregated so you can view it at the organization level.

But also, at the industry level, the geography level, and the role level. And lots of ways to slice and dice it. It’s not a perfect solution. So I’ll say like industries, geographies, and roles also have biases. But the bigger you can make the relevant data set and the more aspects … facets you can cover, the more likely you are to mitigate the bias that can be there in the data set.

And then, of course, the fact that it’s outcome-based, again, means that you have the opportunity to learn pretty quickly. And when we’re talking tens of millions of documents every single month coming into the system with outcomes, you can see pretty quickly if you propagate patterns that are promoting the same kinds of biases, and you have an opportunity to detect that. Which is different, again, from an AI system that is just making a prediction, as opposed to making a prediction, measuring the feedback loop, and then promoting different guidance in response to what happens.

LG: Are you ever using any third-party or outside data to build your technology?

KS: We don’t really use outside data. We definitely have places in the system where we use outside libraries for things that we think are components of the data experience. So this could be things like a parser or a spell-checker, right? These are data-backed experiences that are not the most differentiating part of Textio.

And there are some really good … I think of them as very commoditized components now in the industry. So in those places, where we contribute to open source, we use components pretty actively. But for the parts that are Textio’s core data engine, it’s Textio data.

LG: We’re going to take just another quick break here. And when we come back, we’re going to wrap up the conversation with Kieran.

[Break]

LG: So can you explain a little bit more when you said that you’re looking very closely at the data in terms of things like geographies? Locations and things like that. And by doing that, you’re hoping to thwart bias. Explain that a little bit for people who aren’t AI experts and are trying to understand how you actually prevent bias from creeping into a technology. How does that work?

KS: I have a bunch of geography examples. But let’s start with an industry example, because I think it’ll help people understand the bias part a little bit more. I think we all know that the technology industry has a diversity and inclusion problem that’s gotten a lot of attention in the last decade, especially. However, some companies are doing better with this work than other companies, right? So if you look at the relatively new crop of recently IPO’d tech companies, you look at an Atlassian or you look at like a Slack, they are doing better. Not perfectly, but better relative to some of the old entrenched, larger organizations.

And even within the older organizations, you could look at like a Microsoft compared to like a Cisco. And Cisco is currently the most diverse that they’ve been since the year 2000, right? So there is high variation across individual organizations in how successfully they are combating their issues. And so, if you want to help a company that has highly entrenched bias problems, they can do a lot better if they can learn, while they’re writing, from the outcomes that have helped other companies be a little bit more successful in drawing a diverse pipeline.

One of the wonderful things about working with people leaders or diversity and inclusion leaders is they very much buy into the premise that a rising tide lifts all boats, right? If I’m a diversity and inclusion leader, I have a really hard job. And if you’re succeeding at your company, instead of feeling threatened by that, I’m probably pretty excited about it, because it means we’re working together to create change in the context of an industry. So the opportunity to aggregate and normalize anonymized data, so that companies that might be doing better with one aspect can help another company that isn’t, and vice versa, is generally very appealing to this audience.

LG: How do the companies you work with and sell your software to feel about the fact that some of the technology they’re using is actually being powered by data from other tech companies?

KS: I think generally quite positive. And I think people see it as a big opportunity. Because when we work in technology, we understand that a broader data set is a more powerful data set. They don’t see the outcomes from other companies.

They don’t even necessarily see the unique patterns from other companies. But the broader algorithms, when I say, “Hey, go ahead and replace ‘synergy’ with ‘alignment’ as you’re writing in Textio,” the fact that it’s gleaned over a much larger data set is an asset.

I’ll ask you, do you object to getting better directions in Waze because the other drivers on the road are contributing their coordinates? It’s very, very similar in terms of how people see it. It’s a win-win.

LG: Interesting. So one of the questions that we’ve been asking throughout the conference is … because the theme of the conference is move fast and fix things, as opposed to breaking things. And I know that Textio is certainly trying to fix things in some way. Which is, how optimistic are you feeling right now about tech, the tech industry, and the future of tech? Specifically as it pertains to what you’re building?

KS: So I have, again, a very biased orientation on the question because I look at my own company, right? So I’m making software to help companies, but I also founded and lead a company, right? Our company is minority male, including in engineering. It’s 30 percent black and Latinx. It’s more than 25 percent LGBTQ identifying. We don’t look like other technology companies. Our board is more than half women. Our leadership team is half women. We just don’t look like other companies.

And that tells me it can be done. And we’re not 10 people anymore either, right? So the bigger we get, the more interesting these patterns become, and you develop a reputation as a place that people can come bring their whole selves to work. We’re not the only company that is looking different than the industry has looked before.

And it’s so easy to overlook that, because there are a lot of organizations that look just like companies have looked for the last 10 or 15 years, but increasingly that is less and less true. And so I have tremendous optimism for the industry, because I’ve been able to build a company that looks like what a modern company is supposed to look like in 2019.

LG: Kieran, I think we’re out of time. But thank you so much for joining me on the Gadget Lab podcast.

KS: Thank you.

LG: … and for your thoughtful answers. And thank you to everybody who joined us.

KS: Yeah.

[Music]

LG: All right. That wraps up my conversation with Kieran Snyder from Textio. We’re grateful that she was able to participate in the WIRED 25 conference late last year. And thanks to all of you for listening. If you have feedback, you can find all of us on Twitter. Just check the show notes. Our show is produced by Boone Ashworth. Our consulting executive producer is Alex Kapelman. And we’ll be back next week with our regular cast of characters.

[Outro music]

