With extensive research on online harassment, the researcher and artist Caroline Sinders explores the lack of »care« within social media systems through her Care Bot project.
Interview with Caroline Sinders — Dec 20, 2019
Within social media systems, what are the systems of care regarding online harassment? With extensive research on the topic, the researcher and artist Caroline Sinders explores the lack of »care« within social media systems through her Care Bot project. The bot was created for the Solitude & ZKM web residencies on the topic »Engineering Care,« curated by Daphne Dragona. Visit the project and read an interview with the artist about how indifferent and uncaring the policies and procedures of social media platforms are, and about the different approaches to »care« that become visible in her project.
Schlosspost: How was the web residency process and experience for you?
Caroline Sinders: It was very good. It makes me want to do a lot more with the project and expand it. The Web Residency really forces you to do something. I experienced it like a hackathon, in a way.
SP: When you read the concept of the »Engineering Care« open call, did you immediately have a clear idea of what kind of project proposal you wanted to apply with, or did the idea evolve after reading about the concept?
CS: I keep an eye on the Web Residency open calls. When I read this call, I wanted to do some kind of artistic exploration of emotional labor and care. I also like the work of Daphne Dragona very much and I respect her vision. I have been thinking a lot about how we advise people, and I realized that I could probably go a step further with my previous project Social Media Break-Up Coordinator and develop a Care Bot. That’s where the idea came from. I wanted the bot to be a kind of online interaction that also offers apologies. I wanted it to acknowledge that even if a platform doesn’t think that it’s harassment, it doesn’t mean that you weren’t harassed. I’ve been researching online harassment for the past eight years and I’ve been thinking a lot about what it means when someone files a harassment report. How does it feel for a victim to send this report off into the internet without knowing whether anyone will believe them? I have spent a lot of time coaching victims of online harassment. I wanted a bot that acknowledges that it is a bot, but that at the same time says to the victim: it’s not your fault; it’s the platform’s fault. It is really important to me that this is represented in the chat itself, to create a kind of awareness and recognition that these platforms fail regularly. Because it doesn’t mean that your harassment isn’t real.
»I wanted the bot to be a kind of online interaction that also offers apologies. I wanted it to acknowledge that even if a platform doesn’t think that it’s harassment, it doesn’t mean that you weren’t harassed.«
SP: Can you describe more about how the Care Bot is programmed and how it offers an interaction?
CS: It’s a conversational interface that walks a user through mitigating harassment. It defines different kinds of harassment a user could broadly be experiencing, and then offers solutions. But it also points out and reinforces to a victim that the fault for these systems not working lies with the platform, not with the victim. Even if a victim files a harassment report, a platform may not recognize it as harassment for a variety of reasons, but that doesn’t mean the harassment isn’t real. And while it’s so tough to have a platform not recognize the harassment, that shouldn’t delegitimize what the victim has experienced.
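As an illustration of what such a choice-based conversational interface can look like under the hood, here is a minimal, hypothetical sketch in Python: a decision tree in which every node pairs a bot message with a fixed set of answer options. The node names and texts are placeholders, not the Care Bot’s actual script or code.

```python
# Minimal sketch of a choice-based "care bot" flow as a decision tree.
# Hypothetical illustration only; node names and texts are placeholders,
# not the Care Bot's real script.

TREE = {
    "start": {
        "text": "I'm a bot. Even if a platform doesn't recognize what happened "
                "to you as harassment, that doesn't mean it wasn't real. "
                "What are you experiencing?",
        "choices": {
            "1": ("Repeated insults or threats from one account", "single_account"),
            "2": ("Many accounts piling on at once", "dogpile"),
        },
    },
    "single_account": {
        "text": "You can block the account, document the messages with "
                "screenshots, and file a report. If the platform rejects the "
                "report, that is the platform's failure, not yours.",
        "choices": {},
    },
    "dogpile": {
        "text": "Consider muting keywords, limiting who can reply, and asking "
                "a trusted person to monitor your mentions for a while.",
        "choices": {},
    },
}

def run(node_key: str = "start") -> None:
    """Walk the tree: print the bot's message, then ask for a numbered choice."""
    node = TREE[node_key]
    print(node["text"])
    if not node["choices"]:
        return
    for key, (label, _) in node["choices"].items():
        print(f"  {key}. {label}")
    picked = input("> ").strip()
    if picked in node["choices"]:
        run(node["choices"][picked][1])

if __name__ == "__main__":
    run()
```

Because the options are predefined, the bot rather than the user does the work of naming the harassment, a design choice Sinders returns to below.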
Screenshots of a conversation with the Care Bot
SP: You wrote in your project proposal that the Care Bot is not designed to be used in place of a therapist or a security expert. Instead it shows exactly how indifferent and uncaring policies and procedures are. Why do you find it important to share more about »uncare« rather than to offer care?
CS: I really wanted it to be honest that this is a bot. For me it is important that this is transparent. If you scroll down more or interact more with the Care Bot, the bot will reference me, or my writing. In the conversation there is a moment when the bot says: »My writer feels conflicted telling you about this, but it is a safety precaution, so I, the bot, am telling you.« It is set up in a conversational style, and in this conversation it’s important to me that people are aware that they are dealing with a bot. I have thought a lot about Eliza and the structure of Eliza [1] as a bot. I wanted this to be very simple and honest. One thing my collaborator and programmer, Alex Fefegha, talked about was to have people write text into the conversation with the bot. We hope that in a later iteration there will be room for people to write text. But I was concerned that when you have to deal with harassment, it is important to be able to step outside of your harassment, give it a name, and know what harassment you are dealing with. That’s sometimes the problem with automating care systems and the way we design the infrastructure: we as creators know the expert terms, but a person may not know the institutional term for the harassment they’re facing. So when someone visits this bot, the bot gives you choices and determines what those things are.
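The Eliza structure referenced here works differently from the Care Bot’s fixed choices: it matches patterns in free text and substitutes fragments of the user’s words back into canned replies (see the footnote at the end of the interview). The following is a toy, hypothetical sketch of that pattern-and-substitution idea, not Weizenbaum’s original code.

```python
import re

# Toy Eliza-style responder: match a pattern in free text and substitute
# part of the user's words back into a canned reply. Illustrative only.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."

print(respond("I feel overwhelmed by the replies."))
# -> "Why do you feel overwhelmed by the replies?"
```

The contrast makes the design decision visible: free-text pattern matching leaves it to the user to articulate what they are facing, whereas fixed choices let the bot supply the institutional terms.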
»I try to have a friendly and empathetic voice, because I try to give someone options; I try to coach victims based on what they feel and think. If this was a bot used by Twitter, it would probably have a different conversation.«
SP: The way it’s set up as a conversation almost feels like it’s a person talking to you. How do you see the differences between digital care and offline care?
CS: I think the issue is always the context. I do a lot of text interactions or phone calls with victims who are dealing with harassment, and it can be very good when someone tells you: »I know it can be really tough, it’s hard.« It’s easier and at the same time more difficult for me as a third party, because I can reassure a victim and emphasize that he or she is facing something unfair. But if I worked for a platform, I legally shouldn’t say those things. Being on the victim’s side, in a very conversational and informal atmosphere, is something I don’t think platforms can do. For example, to say: »What happened to you is really wrong.« I try to have a friendly and empathetic voice, because I try to give someone options; I try to coach victims based on what they feel and think. If this was a bot used by Twitter, it would probably have a different conversation.
SP: The work is inspired by Social Media Break-Up Coordinator, a performance piece you did in 2015–16, in which you describe the failure of the digital system. Do you feel there is a difference between being physically present, for example during this performance, and being present online on a platform? Does it offer you more possibilities?
CS: I do hope so. Part of the meta level of Social Media Break-Up Coordinator is that I did a kind of seminar and people had to run their own Social Media Break-Up Coordinator franchises. I did that for the Channels Biennale in Australia, where I Skyped in. With the Care Bot I am hoping that it can be installed and presented on its own. It is actually a little bit different from the Social Media Break-Up Coordinator, where I am also trying to help people leave social networks and the goal is to be online less; the Care Bot is much more like: »If something bad happened, come to me.« The difference between the Social Media Break-Up Coordinator and the Care Bot is that with online harassment you need immediate care, now.
Caroline Sinders, as the Social Media Break-Up Coordinator, in a session at Babycastles. Credits: Caroline Sinders
SP: Your project makes clear that social media platforms fail and miss the opportunity to provide immediate care to people who are exposed to violent communication and online harassment. Based on your experiences and research, can you describe why online harassment is so common, especially on social media platforms?
CS: The thing about social media platforms is that they’re poorly designed. They weren’t really fixed, or rather, the conversation about fixing them only became public when this major social media harassment campaign, called GamerGate, came into play. A friend who used to work at Twitter sent me some of their internal design documents. When Twitter first launched, it didn’t have a block button; it took them an entire year to add the block button. People faced harassment on Twitter, like Ariel Waldman, a very famous YouTuber who used to be a designer in Silicon Valley and faced a lot of online harassment in this period from 2007 to 2010. This time period that many researchers have looked at began with discussions about codes of conduct, and those discussions really ramped up in many of the American open-source communities around 2013. These themes ran in parallel; for a whole year Twitter didn’t have a block button. A major oversight! It was 2012 or 2013 when they finally added a mute button and started refining the block button. It’s also about what muting and blocking mean. That’s a big gap, that’s six years! Think about how the internet changed from 2007 to 2013, and then GamerGate happened in 2014 and 2015. We didn’t have things like blocking or muting phrases, words, or hashtags. We didn’t have an easy way to share block lists. Randi Harper created a massive list of GamerGate followers and shared this list with victims of GamerGate. Shared block lists only became a feature toward the end of GamerGate, and the algorithmic quality filter launched after that campaign died down. But any of those tools would have helped during GamerGate, and during previous harassment campaigns. In general, those tools would be helpful for an individual experiencing harassment, or for people being targeted by a harassment campaign. There are many things you still can’t do on Twitter: you can’t disable comments, you can’t make certain content unretweetable, you can’t hide comments. But they now have an algorithmic quality filter. The real reason I would say we have so much online harassment is that the people responsible for these platforms just didn’t face harassment themselves, so it wasn’t something they thought about, it wasn’t something they thought their platform needed.
»The real reason I would say we have so much online harassment is that the people responsible for these platforms just didn’t face harassment themselves, so it wasn’t something they thought about, it wasn’t something they thought their platform needed.«
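The tools mentioned above, muting words or hashtags and sharing block lists, amount to simple filters applied to an incoming timeline. The sketch below is a hypothetical, client-side illustration of that idea; the data structures and names are invented and do not reflect Twitter’s actual implementation or API.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

# Hypothetical, illustrative filter settings; not any real platform's feature.
muted_phrases = {"#gamergate", "get a real job"}
shared_block_list = {"harasser_123", "sockpuppet_456"}  # e.g. imported from a peer

def is_visible(post: Post) -> bool:
    """Hide posts from blocked accounts or containing muted phrases."""
    if post.author in shared_block_list:
        return False
    text = post.text.lower()
    return not any(phrase in text for phrase in muted_phrases)

timeline = [
    Post("friend_789", "Congrats on the launch!"),
    Post("harasser_123", "you should quit"),
    Post("stranger_001", "#GamerGate is back"),
]
print([p.author for p in timeline if is_visible(p)])  # -> ['friend_789']
```

A shared block list like Randi Harper’s is, in this framing, simply a set of account names imported from someone you trust.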
SP: The internet has grown dramatically, and alongside that there is currently a generation that is very much engaged in critical thinking and in opening up discussions, which is also visible in the offline sphere but is even more noticeable online. Would you say that digitalization and today’s society have opened up more room for online harassment?
CS: I’d say there are slightly oversimplified versions of what online harassment is. Harassment is really common. It happens all the time. We need better tools to tackle it. Marginalized groups face more harassment online; black women, for example, historically face the most harassment. Women in general face more harassment than people who present as male. Some of it is really low-level harassment, what I would call »drive-by harassment«: for example, someone jumps into your timeline, says something messed up, and then jumps out again. Sometimes you get harassed by many people. Journalists face a lot of harassment; if you are in any kind of job that puts you in the spotlight, you will face harassment. What I want people to think about is that it happens frequently, and that it doesn’t have to. A lot of it isn’t just because we live online, although that is a major part of it. I want to refocus the reasons that it happens back onto the platforms. These platforms still don’t offer a lot of tools for people to use. They have almost no transparency around who handles harassment, and they don’t publish transparency reports on harassment cases. They really hide a lot of their stats on harassment. The real issue here is not that we live online; it’s that our communication tools are poorly designed and those in control of these communication tools give us so little information and insight into how they function.
SP: There is this huge gap in care when you think about social media systems. What I see with the Care Bot is that it is about filling this gap, when it should actually be a tool that a social media platform offers in its guidelines; it could even be a button that tells you what to do if you are experiencing online harassment. What do you hope your Care Bot can activate? Is it about creating awareness of how care for online harassment functions, or fails to function, or is it more of a guideline for how to deal with online harassment?
CS: I thought about adding something to the website, and I may add it to the website on Schlosspost: »I am a bot that exists to fill in the gaps that platforms refuse to fill. I should be a team, or a series of teams, that platforms pay to help victims.« I may add that above the bot itself. But until these teams exist, I exist.
The interview was conducted by Sarie Nijboer.
[1] Eliza is an early natural language processing computer program created from 1964 to 1966 at the MIT Artificial Intelligence Laboratory by Joseph Weizenbaum. Created to demonstrate the superficiality of communication between humans and machines, Eliza simulated conversation using a pattern-matching and substitution methodology that gave users an illusion of understanding on the part of the program, but had no built-in framework for contextualizing events. Source: Wikipedia.