Ethics of researching Online and #SMSociety conference summary, Day 2
This long-anticipated post has seen several iterations. I began jotting down ideas in March about conducting online research and its associated challenges, and my OISE Write-Ins were occupied with finding information about the ethics of online research. I then began looking at theses and dissertations on anything internet-related to see how they addressed the ethics of online research. By the beginning of June, I felt as though I had done no reading at all; perhaps I hadn’t processed what I had been reading.
Just as I felt ready to post, I learned of the Social Media and Society Conference at Ryerson University, which took place July 19-21. When I glanced through the program, I realized I would have to delay my post yet again because many presentations touched on ethics and research online. And then I took a break from almost everything, only to return to this pending post.
What I knew before July 19
I realized that writing about ethics would take more time than reading a few articles about online research and becoming familiar with online research guidelines. I kept delaying this post, because I did not feel confident sharing what I had learned until after the Social Media and Society Conference. Waiting was the right decision.
From reading some dissertations on social media research, I was already aware that the British Psychological Society has guidelines on internet-mediated research, while the American Educational Research Association has no formal guidance for internet research. I had known of the Association of Internet Researchers since last year, having just missed attending their conference in Montreal; their guidelines were updated in 2012. I also knew there were all sorts of journal articles on online ethics, and that the methodology sections of theses and dissertations devoted a great deal of space to online research ethics; far more room than I ever remember allotting in my master’s thesis.
What I learned after – YMMV
Where you are situated globally determines the ethics you follow. I was surprised to learn that the Tri-Council in Canada does not have a set of guidelines specific to conducting research online; its guidelines are for human participants, who should be treated as such without any distinction for research conducted online, although there is an ongoing conversation to address this gap. This poses challenges, because participants in online research are not always human; for example, we don’t always know whether a participant is actually a bot.
Where your participant is located may also affect the guidelines you have to follow: Canada, the United States, Australia, the United Kingdom, Europe (South-East Europe and other countries within Europe also have different laws for research conducted online) are the ones I have read and with which I have some degree of familiarity.
Of course, today’s guidelines are comprehensive, but social media platforms evolve and change regularly, more often than ethics guidelines do; keeping guidelines current and relevant requires constant revisiting.
Beyond the more obvious issues, such as human subjects research, consent and informed consent, privacy, minors, and recruitment, presenters at the #SMSociety conference shared guidelines for best practices when situations appear less clear-cut. For example, Jacquelyn Burkell discussed the following issues in her session on Saturday:
- Ethnographically speaking, interpretation of observational data when the researcher is not a participant is a challenge.
- Are public spaces really public? When someone has a curated audience on Twitter, for example, is the space they occupy private?
- If individuals cannot be identified, what about pseudonyms? An enduring pseudonym may be very important to its owner, and harm done to a pseudonym can be the same as harm done to the individual behind it.
- Content that researchers de-identify can still be identifiable: removing someone’s Twitter handle while leaving their words intact leaves searchable content that leads back to the “de-identified” individual.
Sometimes “this” is better; sometimes “that” is better.
It is deemed more acceptable if:
- Research use on the social media platform is allowed (e.g. Twitter)
- The group being researched is open (though moderators or members of a Facebook group can deny access to anyone who identifies themselves as a researcher).
- The discussion is archived somewhere.
- Discussions are indexed on the open web.
And back again to the seesaw:
- Is doing no harm equal to pleasing participants?
- Does the audience that social media users imagine include researchers?
- Are some researchers “more” ethical than others?
- Should social media users have a say in which types of researchers (e.g. academic, industry) are more trustworthy?
- What is the duty of care for vulnerable groups? What is a vulnerable group? Is a vulnerable group always vulnerable?
- De-identification is not the same as anonymisation.
- Are social media platforms’ terms of service sufficient notice for users having their words and images researched?
- Is privacy online different from other contexts?
- How do you get consent from all the participants you’ve studied?
The general consensus from Paul Reilly at the conference was:
- Ethical stances should be decided on a case by case basis; each social media platform should be assessed separately.
- Being public and transparent about your research methods is good practice.
- Reaching out to participants helps create a knowing public.
- Not all researchers are aware of all the ethics associated with conducting research online.
- Use a Research Ethics Board throughout the project lifecycle to continually assess benefit versus harm.
What I hadn’t considered
Privacy. I assumed that Twitter was a public forum and that people understood their information is searchable and accessible to anyone with a search engine or on www.twitter.com. I did not consider that where a person tweets from (i.e. their home) is a personal space, and that they are participating in an event, like a Twitter chat, on their own time. At conferences, people who are unwilling to have their photos taken can indicate this, or others can sign a waiver permitting their conference photos to be used. Twitter users have no such option when they choose to participate in a chat.
Another aspect I hadn’t considered is time. What people posted in 2015 may no longer reflect their views, but those posts, if undeleted (or even deleted), remain a permanent record of an individual’s words. Certain celebrities have been put through the public wringer because of concerning comments, likes, or retweets made in the past. The defense that “I don’t think that anymore” wasn’t readily accepted by the public.
I have posts on my blog that I am no longer happy with. Some of them make me cringe. I’ve left them up, despite my itchy delete finger, because they represent progress for me, and progress is sometimes difficult to gauge when we are immersed in a topic like teaching, technology, or research. I hope people will read those posts and then my more recent ones and see that I have changed or evolved, but I can’t guide readers’ interests. The passage of time also affects the interpretation of any information. We view yesterday through today’s lenses; are we capable of being unbiased when we look at content created before this week, month, or year? I remember the collective shock after the last US presidential election, yet many claimed to have known the outcome was inevitable. The problem was that they made those claims after November 2016.
My participation on Twitter is what guided me to research social media analysis and professional development. I found Twitter useful and motivating; it transformed my teaching in some respects, my outlook on professional development, and my interests, including graduate school. It even taught me to express my thoughts succinctly. I chose Twitter to explore professional development because of its open platform, as I’m sure others do. However, I have always been cognisant of what I share, and my willingness to share has changed over time. Initially, I avoided tweeting about certain topics because I didn’t want to be noticed, or was concerned about the trolling I had observed online. As a novice user, I also felt unsure of how to use the platform effectively; I did a lot of lurking and clicking to try to understand how people communicated. It wasn’t intuitive, and there seemed to be a lot of vocal people I didn’t necessarily want to tangle with. Then there was the concern that my words would be misconstrued: that my dry humour, which isn’t always recognised even in person, would be read at face value in my tweets. I did not want to start using emojis to help others interpret my words, because an international audience may understand them differently.
The conversations and exchanges I want to research will very likely involve people I know, some of them well. I wonder how my participation and research will affect my connections and relations with other language teachers on Twitter. I also sometimes hesitate to participate in topics I am interested in, knowing that I might affect conversations. One way to address this is to add the hashtag #MyResearch as a signal whenever I am actively interested in a topic because it relates to my research. I took this idea from Ian Guest’s dissertation and have used it to mitigate the effects of my involvement on Twitter. Lately, because of my concerns, I’ve become a lurker again rather than a participant.
Although there is general agreement that vulnerable groups in the population need to be protected and that risks for these groups can be more complex, who defines what a “vulnerable group” is?
I use the college strike in Ontario as an example. In the fall of 2017, unionized college faculty were on strike for five weeks. Meanwhile, non-unionized faculty (defined as sessional or part-time faculty in Ontario) were put in the horrible position of either crossing picket lines to work or losing employment income. Many conversations and much sharing of information took place on Twitter among the union, faculty from different colleges, and individual college faculty. Traffic increased: new accounts were created, old accounts were reawakened, and many used the platform to keep abreast of news, picket lines, information about the strike, and personal stories. Some people blogged about their experiences, and others told their stories in Twitter threads. Many used Twitter to contradict the collective message from news sources; what was being reported on the radio and on the 6 o’clock news did not represent the faculty experience on the picket line. For the first time, I made my account private because I was concerned that my “liking” a post might be interpreted incorrectly. In some cases, I was reacting to the challenges faced by college faculty; in others, I was showing support for the story being told, because I empathised with the poster.
Twitter, a platform I had joined as a public space, became a minefield during the strike. I recall liking and then immediately unliking posts; I made my account private and then public again; I deleted tweets after 24 hours for fear of reprisal, even though in retrospect there was nothing incendiary about them. Had I been outed during that time by a researcher, I would have found it difficult to return to work; I would have been nervous and concerned about future employment. I’m not sure my words, likes, or retweets would have been interpreted by a researcher in the context I intended. I would not have consented to participate in any kind of interview, and I would not have wanted my words paraphrased for me. When I liked, retweeted, or posted anything strike-related, I was in a private space: the space of a striking college faculty member. I was part of a vulnerable group at that time, even though college faculty are not necessarily viewed by others as a vulnerable group.
I share these concerns because I almost always agree to participate in people’s research: I have agreed to be interviewed by colleagues and strangers, even multiple times; I have responded to requests on Twitter to participate in research; I have agreed to requests at OISE by former classmates to be a research participant; and I’ve participated anonymously in surveys I’ve come across online, providing my contact information for follow-up. But I would not want my words in the context of the Ontario college strike researched, even two years later. No one except someone intimately involved with striking faculty during those five weeks would understand the experience.
I think ethnographic research in social media spaces needs to be conducted by researchers who are familiar with the nuances, challenges, and shifting landscapes of each platform. I would not feel as comfortable conducting research on Facebook as I do on Twitter; I have joined one or two groups on FB for the purposes of lurking and nothing more. I don’t claim to understand the dynamics of the groups I have joined, only the content that is shared. If a netnographer is interested in researching a topic they know little about, I would expect them to go through the same process I went through when I first joined Twitter: learning the people, the styles of communication, and the communities. I would expect the researcher to feel an obligation to protect their research participants.
“First, do no harm” is more challenging and vague than it sounds.