We need better startup founders

Day three of our Rethink the Future Summit.

Welcome to the third day of Rethink the Future, a virtual launch summit we’re hosting March 6th-10th, 2023.

Today we’re speaking with Jasmine Sun about the future of technology. In this video, we discuss why the world doesn’t need more Mark Zuckerbergs, what kind of tech founders we need instead, and how the techno-optimism movement can take us into the next phase of humanity. Tech can be both destructive and generative, and Sun wants to see it used for the greater good.

Click the above video to watch or read the complete transcript below.


Elle Griffin: You write a lot about technology, and specifically about techno-optimism, so I want to start there. Can you describe exactly what techno-optimism is and where you see it going?

Jasmine Sun: I don't think that techno-optimism is a cohesive movement or a single thing. The way we arrived at our project: when I moved to the Bay Area for undergrad, it was right after the 2016 election and the techlash, when everyone was really worried about Cambridge Analytica and starting to notice that a lot of these big technology companies that had been making tons of money were also causing a lot of harm, especially to marginalized populations.

As people were reckoning with the techlash, I saw that writing and discourse about technology were splitting into two camps. It felt like in one camp, you had these techno-utopians, techno-solutionists, techno-accelerationists, who sort of believe that technology can solve almost every problem. If you did design thinking or you had a hackathon you could basically deal with any social or political problem by innovating through it with tech and specifically tech companies and startups. So that was one point of view–sort of the carrying on of this “Boy Genius” founder narrative that, actually, we need more Mark Zuckerbergs and not fewer.

And then there was this other camp that took the techlash to another extreme, where you saw a lot of critics, journalists, and academics studying the impacts of technology, accurately diagnosing and uncovering some of the worst things that happened as a result of technology–accelerating political extremism, radicalization, and misinformation–but taking that to the conclusion that perhaps technology, social media, the internet, is this inherently flawed thing: it's inherently racist, it's inherently capitalistic, it's perhaps something that we should abandon altogether.

So I was a college student trying to wrap my head around this ecosystem, this Silicon Valley environment that I was thrown into. I was trying to figure out my own career, and I felt like a lot of my friends and I were experiencing the same tension. On one hand, as Gen Z–we're in our 20s now–we had grown up with the internet, so it's very hard to imagine a world without modern digital technology, without the internet, or where we weren't connected through social media. It has become part of the basic infrastructure of how you form community–how you communicate with your friends and connect with people. At the same time, a lot of us were really concerned about the politics of technology and about the social harms that were resulting from it.

It felt like there was a big gap in the middle: a vision of technology that was positive and generative–"here are the products and here are the things that we should create"–but one pursued toward very humanistic ends, one that could acknowledge that technology has the capacity to harm and that there is a lot of worth in figuring out a human- and community-centered approach to how technology can be used for good. And one developed in collaboration with governments and local communities, not just the Silicon Valley bubble–a positive vision for technology that is collaborative and interdisciplinary.

So that is the spirit that Reboot's techno-optimism was born out of. Sometimes we say that it's not optimism about technology alone–the idea that all technological progress is inherently a good thing–but rather human optimism: optimism that we have agency as people to transform technology and to build tools toward human ends, if we can collectively define those values, those ends, the things that we care about. All humans are toolmakers and technologists by nature–we've been making fire and building civilizations from the beginning–so there is no reason we should not also be able to take technology and use it in a very pro-social, pro-humanistic manner.

Elle Griffin: I think that's something we can all get behind. And I've actually been thinking about this for myself because it does seem like we're just pushing forward with all technology on all fronts without taking a step to think, “Is this really the future that we want to create?”

I was speaking with somebody who did a lot of the design work for Minority Report, and he was telling me, "Whenever I'm working in film and I want to create a beautiful future, it's lots of nature, lots of water, and whenever I want to create a future that's dystopian, I put a lot of tech in it." And I thought, well, that is interesting, because that makes sense, right? If we think about what we want the future to look like, I don't think anybody says, "I just want to be sitting here looking at my screen all day, surrounded by technology, with everything robotic around us."

I think there are totally positive uses for those technologies, but do we really want to develop all of this, and what is the goal here? So are you seeing that happen in the Bay Area in Silicon Valley? Are there people consciously thinking about these technological problems and whether we even want them before we enter into these spaces?

Jasmine Sun: I think more recently, for sure. The tech industry has become a lot more conscious of social impacts in recent years, mainly since 2016. Not that it's perfect, but I feel like the discourse has gotten better. For example, I think it's really interesting that right now everyone's really keen on generative AI–ChatGPT, DALL-E, everything OpenAI is doing, the Microsoft Sydney situation. And I feel that there are a lot of folks in the Bay Area and elsewhere who are really conscious of AI safety and ethics. Some come from a right-now perspective: are we seeing algorithmic bias, is AI starting to reflect a bunch of racist or extremist things from the data it's consuming? Others are thinking from a more long-term perspective: what happens if AI gets out of control, is able to outsmart us, and is able to escape its humans and pursue its own ends?

I am seeing a lot of really vibrant discourse in the AI community, and one thing that's interesting is that, in AI at least, it's the researchers and engineers and companies themselves who are at the forefront of these conversations about safety, governance, and making sure we're ready to release this to the public. How can we have a slow rollout, so that people can test these really powerful tools and see if, for example, Sydney is going to say a bunch of messed-up, creepy things before everyone can have it? Yes, there are mistakes for sure, but seeing that slow-rollout strategy–even when GPT-2 was released a few years ago, seeing OpenAI choose not to release it fully publicly and to audit case by case–I think that shows a lot of progress in the way the tech industry is now approaching these issues.

Elle Griffin: Are you seeing very positive use cases? Thinking of the techno-optimism movement as a whole, are there other people who want to solve this for the public good?

Jasmine Sun: I think technology is so broad, so definitely not all uses are positive, but one thing that Reboot is interested in doing is uplifting and amplifying the positive ones. Sometimes tech journalism can get into a beat of finding the latest messed-up thing that happened–Facebook did this, anti-vax that–and just highlighting all the bad stuff. Somebody has to do that investigative work, and a lot of issues would never be brought to the surface if nobody was doing critical work. But on the other hand, Reboot audiences are mostly young technologists and tech workers–people who are in college, maybe studying computer science, or people in their first few years of working and trying to understand, "How can I contribute to the world with my engineering skill set?" Those people are already engineers. They already love technology. But who are they going to look up to as role models in terms of what kinds of founders or creators they aspire to be? Yes, one version of that is, "I want to be Elon Musk and build tons of huge companies that change the world, even if I end up causing a lot of chaos and harm along the way." But what Reboot really wants to do is take specific instances of people building more pro-social technologies and highlight them.

So one piece in Kernel Magazine, in our second issue, that we're really proud of is called "Prison Phones and the Problem with Profits: Can an Upstart Tech Nonprofit Take on the $1.4 Billion Prison Communications Industry?" When you think about Silicon Valley, you probably don't think about the prison communications industry. Basically, you have this amazing team of young technologists at a nonprofit tech startup called Ameelio that has figured out a way to use technology–cheap and scalable–to build really low-cost software for people who are currently incarcerated to send photos and videos and communicate with their family and friends outside. And that's based on research showing that having these communication lines really lowers recidivism rates.

And yet right now, because you have these huge legacy firms operating in prison communications, it's crazy expensive for a lot of poor families and poor people to reach anyone at all. So you're replicating social harms: recidivism is going to be higher because these oligopolistic firms have locked up prison communications. In this case, having the nonprofit tech startup Ameelio enter the space, apply these startup software principles, and bring their software to people at a relatively low cost is having a really massive impact that I think will benefit society broadly. Those are the kinds of instances of techno-optimism that I'm really excited about highlighting more of, and hopefully, by telling those stories, more people will ask: what is another very specific place where I can see technology making an intervention that materially impacts people's lives?

Elle Griffin: Do you see the opportunity for large companies to do the same? I can understand the small startups wanting to change the world, but there are also big giant tech companies that are doing a lot of harm. Can we go into Facebook and make some changes there? Should the government step in?

Jasmine Sun: Yeah, I think it's really hard. There are a lot of people who are much more knowledgeable than me and probably better equipped to answer those questions. But assuming that you can't do a single thing to change anything that happens at Facebook is a little bit fatalistic. Facebook exists, whether we like it or not, and even if nobody is working on these problems inside, people are going to keep using Facebook anyway. One thing that I think about is the really amazing investigative journalism around content moderation at Facebook–how content moderators were mistreated, underpaid, and having mental breakdowns. There has also been a lot of reporting on the lack of content moderation in other countries: many journalists have uncovered how, because Facebook didn't have moderators who spoke a lot of different languages, that led to the acceleration of these super extremist political groups, which led to real-life violence. In bringing these issues to light, you have the interplay between really smart journalism and critique.

Then Facebook ended up creating a civic integrity team to scale up moderation, figure out how to limit extremism through different interventions in the news feed, and do design research–going into local communities, showing people the feed, and asking, "How do you interpret this information? What are you going to tap on?"

So I do think there are people at these big companies who are going to try to ensure that their product harms are mitigated, but the nature of most of them being large profit-pursuing corporations means they're basically only going to do that in response to journalists or policymakers asking them to. If no journalist had ever published those pieces, Facebook would not have spent a bunch of resources on it. So I feel like it's this very important exchange between the companies building the technology and the researchers, writers, and policymakers identifying the harms the companies might not have prioritized fixing. Like: we are going to say that you need to acknowledge this data privacy risk, or you need to acknowledge radicalization risks, and do something about it, because we're going to impose either a literal government cost–extracting fees for violating some policy–or just a PR cost: we're going to run a bad story about you. So I think it's possible. I just think it requires a lot of advocates on the outside to encourage that change.

Elle Griffin: I am very fascinated by the story of online communication and how that's evolving, because that case you just mentioned is basically using social media/online platforms to correct social media and online platforms.

Jasmine Sun: That's so true. I didn't even realize that.

Elle Griffin: Because we have the ability to speak out, we can, and that's also the curse. I'll be very curious to see how we need to change technology or to change online communications and online media, because on the one hand, it's creating more free speech but there are also a lot of challenges to free speech. So I'm very curious to see how that dynamic plays out.

Jasmine Sun: Yeah, Twitter is breaking, like legitimately! They fired everybody, my app is crazy buggy right now, and it's wild to feel like we're losing a really core communications infrastructure. Twitter was creating culture, creating news that people use for real, and now we're like, “oh my God, one guy can come in and sort of ruin the whole thing!”

Elle Griffin: And that's a good point, because we have all these people at the top of these companies who are in a lot of ways playing the role of government. The things that moderate our lives, the information we get, and the technology we have access to are dictated by a tech CEO, not by a democratically elected government deciding that these are the right things for humanity in this country. How are you feeling about that? Because earlier you pushed back on the optimism that we need more tech founders–do we need more of them, or better ones? Or moderated ones?

Jasmine Sun: To be clear, I don't think we need more Mark Zuckerbergs; I think we need more role models who are more proactively conscious than maybe Mark Zuckerberg is. But I think that's a really good point. I'm being a little bit reductive, but there are a couple of possible reactions you could have to seeing Twitter fall apart, and one of them is, "Oh my God, technology was a mistake. We should have never put everybody in one big room so they could shout at each other. We should just knock on our neighbors' doors and read the local paper and hang out in real life." And I do think those things are important–local community really does matter.

But I think it would be fatalistic and too techno-pessimistic to say that social media–simply having the ability to use the internet to publish your own content and share it with other people–is ruined just because Elon Musk screwed over Twitter. So then the question is: how do we build digital infrastructures? How do we build technologies that might be more resilient, or less able to be co-opted? I know you're talking to other guests who are experts on this issue, and I'm a huge fan of the research and work they're doing, but I think the fact that more alternatives like Mastodon are emerging is exciting. Is Mastodon perfect? No, it has its own issues with its federated structure and stuff like that, but I know that Jack Dorsey has folks working on Bluesky Social, a fully decentralized protocol, and they're working on their own version of Twitter where people can spin it off on their own, without the feudal-overlord situation that Mastodon has.

My day job is at Substack, and the way we get around it, despite being a private company, is that we own the platform but individual creators will always own their email list and their content, and can move it away at any time. So my hope is that we'll see all these other alternatives emerge, and people will have a more expansive imagination around technology rather than assuming that tech always has to come from big companies. It might be a nonprofit that creates technology. It might be a decentralized protocol. It might be, yes, a company, but one that knows things like ownership really matter to users now–because users have seen what happens when they don't own the platform–and that can bake those principles in from the beginning. By seeing technological harms, my optimistic hope is that people arrive at a much more expansive view, starting from a blank slate: technology does not require a single model to be created, so what is the range of ways that we can see it thrive?

Elle Griffin: That's interesting. So do you think restructuring needs to happen at the company level to enact that, or does it come as a response from the outside–like Mastodon emerging in response to Twitter being mishandled? Maybe somebody else will come up with a response and say, "this is even better than Mastodon." Or does the work need to be done at the top, where there's a CEO making these decisions for everyone?

Jasmine Sun: I think you certainly need both. With the Facebook example, you need the integrity team at Facebook working on harm mitigation, but ideally, in the same way that you have public funding for libraries and streets and so on, there are certain technologies–and I would say social media is one of them–where you do want a public option. I think it's honestly odd in some ways that the government does not fund technology creation. Of course, it's hard and expensive, but so are libraries and highways and public parks. When I think about it, if we had never had public libraries or national parks, it would be extremely hard to convince the government as it exists today to allocate massive amounts of money to them. But if you ask American citizens, "Hey, are you happy that books are free and that you can go to Yellowstone for cheap?" they'd say yeah, of course. So how can we re-inspire that with technology? Not to say that all technology has to be government-funded, because I do believe in innovation, competition, and people coming up with their own ideas, but the fact that we are exclusively dependent on profit models feels wrong to me. So I would be really excited about initiatives at the federal or state level to provide both public and philanthropic funding for tech that isn't built within a purely profit-driven model.

Elle Griffin: It's hard to imagine the government creating a Twitter that I would want to actually use. But it’s also kind of weird that our state’s leaders and country’s leaders are communicating with us via Twitter.

Jasmine Sun: I feel like maybe it's some fund that's spun up–I don't feel like this stuff should be built in-house. Especially with parties changing, we don't need every new president appointing a new CEO of "government Twitter"; that sounds awful. Maybe the thing I'm imagining is federal funding allocated to grantmakers, who might be at the National Science Foundation, and in the same way that they allocate science grants, they might allocate digital technology grants. They'd outsource the actual building, management, and oversight of the tools, but they'd ensure that whoever's doing it doesn't have to rely purely on advertising, for example, as a model. Imagine if all science research for the public good, or medical research, had to immediately prove it could make money in the first five or ten years. That's why NSF grants exist. Eventually some of it gets commercialized, and you still have commercial R&D labs, but there's a large amount of both public and philanthropic funding that goes toward public-goods research, and having more of that in technology and software and internet infrastructure feels really important to me.

Elle Griffin: Yeah, I like that combination of grant funding with a capitalist structure, keeping the competition and innovation part of it while allowing long-term growth. That's good. So what about tech criticism–what are you seeing? You write in this world; how does tech criticism play into both the positive and the negative?

Jasmine Sun: I think it's really interesting, and I'd be curious for your thoughts too, as someone who writes often about technology. Tech feels like it has this unusually antagonistic relationship with writers, with media, with journalists. One thing I read recently that I've been thinking about a lot is an essay L.M. Sacasas wrote for another blog, called "What Does the Critic Love?" He talks about criticism as this very old tradition, right? You have film criticism, you have food criticism. Having experts who write about and evaluate cultural works in great depth is not a new thing. And in general, when you have a movie critic, they love movies–that's why they're a critic. So the job they do is almost like a tour guide or a translator: they take somebody who might not be a huge movie buff and say, "Let me show you what's beautiful about this film, notice this," or "this film is bad, and here's why," and they guide you through their discipline.

Whereas when I think about tech criticism as a discipline today: one, it's unclear, and there aren't really official critics. It's mostly academics and journalists and writers, which is fine–I don't think you need an official critic title. But those people are, for the most part, very disconnected from the field. There's a huge gap between builders, critics, and consumers, and they all feel like their own siloed spheres. In an ideal world, you would have this interchange where, yes, builders are selling to consumers, but you also have critics who help interpret. You have Twitter and Mastodon, right? So let's talk in depth about the differences in user experience, the control that you have, the risks, the design–and in telling you about the different models, critics help consumers make more informed decisions. They also encourage builders to aspire toward good craft, toward whatever high-quality technology might be. But you don't really have that going on. And I think part of that is the critics' fault, too.

In that piece, one of the things L.M. Sacasas says is that the food critic loves food and the book critic loves books, but very few technology critics would say that they love technology. And I don't think they have to love all technology, or mainstream technology, or even love tech most of the time–every book reviewer writes plenty of bad and mixed reviews along with their good ones. But when you have critics who basically never write anything positive, I think it makes the discipline worse as a whole. It also makes it possible for builders to dismiss them as negative Nancys, as people trying to ruin us or kill our vibe or whatever, when what you really want from criticism and from writing is being able to say: sometimes this is bad, and other times this is good–aspire to this instead. You have to give people the positive, generative thing to move toward in addition to criticizing the things that are worth criticizing. So that's what I've been thinking a lot about, and what Reboot's role is as a publication that publishes technology writing and tech criticism: how can we ensure that we're doing a mix of both, and helping the people who read us–a lot of whom are technologists and engineers–design products and technologies that aspire toward greatness of craft?

Elle Griffin: I think that's definitely the case. I mean, I have rarely seen an article about Twitter that said, "here's one way that we could fix this, here's an idea for communications in the future." It's just take-down pieces: here are the things that are wrong. I totally agree that there's only so much you can do with that. Is there something in particular going forward that you, as a critic or as a publication, want to either emphasize or critique for the future?

Jasmine Sun: Well, there are a few things we're thinking about. One thing that's interesting about tech criticism is that sometimes you're critiquing the products and the specific technologies, and sometimes you're criticizing the organizations or companies that build them. I actually think both are valuable–I don't think you only have to talk about the technical aspects. It is really important to consider the economic or organizational context through which decisions about technology get made. On that front, one thing Reboot really wants to do is publish more pieces uncovering how tech organizations work–not just big companies, again, but also nonprofits. We just published a deep dive into the Ethereum Foundation, which is sort of a steward of the broader Ethereum ecosystem. They're structured, I think, as a nonprofit really focused on grantmaking: they sometimes build technology in-house, but they often run grants and educational programs to try to push things outward. Publishing about the organizational context in which technology gets built, and the relationship between these organizations and their products, is really interesting because it helps people imagine, again, alternative models for building technology.

And then on the product side, some of that is almost a design review: how can you take a product like Twitter, or something like ChatGPT, and–using a critical eye, a cultural lens, a technological lens–look at the affordances and the specific features and ask what their effects are? With ChatGPT, one thing that's so interesting is that the underlying technology–GPT-3.5, as people call it–is not that much more advanced than GPT-3. And GPT-3 was released openly on a tool OpenAI called Playground, yet there was no freak-out about GPT-3 the way there has been about ChatGPT. Why is that? It's because ChatGPT has this chatbot interface that feels so easy to talk to, almost human. It's something any ordinary person can use without any code and without any experience–they can just have a conversation with it–and that completely transformed, I think, the way the technology was received. I would love to read or publish some sort of analysis of the history and future of chatbots and these human-like, personality-like interfaces for AI, because what the ChatGPT phenomenon and the Sydney phenomenon show is that it's not just about the underlying AI–it's so much about the design. Having a cultural critic, or somebody who understands that, or even someone who knows psychology really well, write about it would be so interesting to see.

Elle Griffin: Personally, I'm very fascinated by that, because I've started using ChatGPT instead of Google. I'll use Google sometimes to fact-check, but I hate the Google interface–it's so hard to find the information you want. You have to open ten links, and they're all SEO content that doesn't actually contain the things you want, and then you have to dig to page five before you find the science, the study. It's so hard to get to the information, so I feel like unlocking that is going to be huge. I'm excited to see where that goes.

Jasmine Sun: It is really interesting–now you have this war of the chatbots, with every company trying to release their own, and I think there are definitely things that could go wrong. I want to see more people taking a very careful look at chatbots, search engines, and AI systems and thinking, "What are the risks? What are the opportunities? And what are the specific design choices or tech choices that can shape the way people interact with it?"
