Beyond Pushing Buttons
by Stephen P. Cummings, MSW, ACSW, LISW
Ah, 2018. It seems just like...well, a while ago. You’ll recall the NASW Code of Ethics underwent significant updates that took effect in 2018, including new language on how social workers engage with technology. Now, in the summer of 2021, another update has appeared, with expanded language on technology and the implications of its use. For this column, I’d like to acknowledge what The New Social Worker has already covered, and then focus on how the updated Code of Ethics frames social workers’ use of technology.
The New Social Worker has been featuring articles all summer about the updates to the Code of Ethics. Drs. Erlene Grise-Owens and Justin “Jay” Miller wrote about the big self-care focus (https://www.socialworker.com/feature-articles/self-care/national-association-social-workers-nasw-code-of-ethics-2021-self-care/). Dr. Allan Barsky thoroughly discussed the 2017 revisions to the Code of Ethics and was interviewed by Dr. Jonathan Singer for several episodes of the Social Work Podcast. Regarding technology, Dr. Barsky noted that those revisions addressed issues of communication. Specifically, social workers using technology to communicate are expected to follow the same ethical standards they would if they were communicating face-to-face. Social workers were also guided to understand and assess how technology is a part of culturally competent practice. For example, when establishing a new therapeutic relationship, should the social worker choose to use telecommunication technology (say, the now-ubiquitous Zoom platform), it’s important to assess the ethical implications of that choice. Does the client have access to this technology, and if not, what barriers to access exist?
This summer, Dr. Barsky discussed the 2021 updates to the Code in The New Social Worker (https://www.socialworker.com/feature-articles/ethics-articles/special-report-2021-revisions-nasw-code-of-ethics/). The updated language regarding technology speaks to social workers’ ethical responsibility to remove barriers to technology access. Notably, in Standard 1.05, Cultural Competence, the Code’s updated language makes clear that “social workers should demonstrate awareness and cultural humility by engaging in critical self-reflection (understanding their own bias and engaging in self-correction), recognizing clients as experts of their own culture, committing to lifelong learning, and holding institutions accountable for advancing cultural humility.”
In this same section, the Code includes new language that involves technology in practice: “Social workers who provide electronic social work services should be aware of cultural and socioeconomic differences among clients’ use of and access to electronic technology and seek to prevent such potential barriers [emphasis mine]. Social workers should assess cultural, environmental, economic, mental or physical ability, linguistic, and other issues that may affect the delivery or use of these services.”
While the previous update implied the need to understand the use of technology in the context of clients’ lives, I interpret this update as a call for meaningful action beyond competency and contemplation. In other words, social workers need to go further, to be clear about what we mean by “technology” and about the large-scale implications that are not readily evident but are nonetheless part of our practice and policy landscape.
For me, professionally, this is a far cry from 20 years ago. Back then, I was dissuaded by social workers in my professional network from embracing technology as an essential part of practice. “We are people-people. We don’t push buttons,” I recall being told. I clearly don’t share that view now, but it’s worth deconstructing those words a bit. “Pushing buttons” suggests a reliance on technology as mere function, rather than a critical stance toward its proliferation. After all, pushing buttons uncritically means maintaining the status quo, or worse, exacerbating inequality. So, yes, we are “people-people,” and that is precisely why we must do more than push buttons.
Okay, what’s this “technology” you speak of?
The Code of Ethics appears to focus on technology social workers use in typical practice. Data collection, record keeping, communication: any facet of the work that requires technological support usually falls under the term “technology.” I often associate technology with the tangible. I touch a keyboard, characters appear on my screen, and the graphical user interface creates a readable stream of text. When I complete this column, I’ll proofread, edit, and click “send.” That is a clean, convenient example of technology at work. I’m older, so permit me to acknowledge that this example of technology, while mundane and commonplace, is still amazing. In high school, I chose not to take a course on manual typewriters. I was no futurist anticipating the oncoming renaissance of “E-Mail.” I just figured I would muddle through life as a terrible typist. Now, not only does everyone use keyboards, but proficiency develops simply through daily interaction with the technology.
Technology, of course, is far more than what’s on my smartphone. Social workers must be aware of the risks of ignoring the deeper infrastructure and the nature of technological inequality. When I conceptualize technology, I consider the following:
What’s visible, and what’s hidden. If a client or a whole population can’t access needed social work services, is there a systemic reason? One example close to me: much of my home state, Iowa, has weak internet capability, and in some places, no connection exists at all. Even just outside populated areas (like the college town I live in), internet access drops significantly. This became painfully apparent as the pandemic wore on and internet access became increasingly necessary to attend school or meet with a clinician. Add to this what researchers already knew: the current global pandemic was entirely predictable, and adequate preparation wasn’t limited to stockpiling masks and developing vaccines. Even without the threat of a global crisis, a reliable broadband network that serves everyone was needed long before COVID-19 spread.
What’s past, present, and future. I empathize with the challenges of crafting statements of best practice regarding technology. NASW’s own work on best practices illustrates this challenge. Before the most recent update to the Standards for Technology in Social Work Practice, the previous NASW standards included language on the use of fax machines. That technology is outdated, yet fax machines remain in use. (According to a colleague who directs a hospital social work department, her department’s policy is to use encrypted digital communication to send patient records outside the hospital setting, with fax machines as a last resort if a local provider has no other way to receive the information in a timely fashion.)
Beyond daily use of tech. I believe it’s also the social worker’s ethical responsibility to anticipate what future practice and policy needs may require. Case in point: a colleague recently shared an article about family members using chatbot technology to digitally replicate conversations with deceased loved ones. What ethical implications for practice could we glean from this use of technology? If it proliferates (which it most likely will), what obligations follow? Do the deceased have rights in this expanding virtual world?
But technology is “neutral,” right? Won’t algorithms help with reducing inequality?
That digital algorithms, automation, and artificial intelligence will reduce or eliminate bias is a persistent fallacy. Evidence already suggests the opposite is true. One recent study suggests a majority of commercial facial recognition systems demonstrate consistent bias. In one case, law enforcement using facial recognition misidentified the faces of people from Indigenous populations. Such biased errors aren’t simple data mistakes. Reliance on biased technology has the power to deeply disrupt the lives of innocent people and their families.
What can social workers do?
Social workers will always need to keep learning and take action, especially because the nature of technology use is constantly changing.
- Look beyond everyday technology. It’s easy to overlook how technology is used in our everyday lives, yet it’s present throughout, even when we aren’t the “end users.” How is your neighborhood or jurisdiction using technology? How are your clients being affected? What are the implications?
- Get familiar with NASW’s statements and best practices. The National Association of Social Workers has a series of articles on clinical practice tools for technology, which can be found on its website. Many of these articles are behind a membership paywall. However, the Standards for Technology in Social Work Practice document is openly available.
- Reject the notion of tech neutrality. All technology is created by people with a purpose, and the deployment of technology is never free of creator or user bias.
- Be proactive in policy. As I noted earlier, although the NASW Code of Ethics concentrates most of its updated language on ethics in practice, social workers should also apply ethical standards to the pursuit of social justice and policy change. Is your neighborhood or jurisdiction affected by policies that govern technology use? Does everyone have access to the internet, or is that access limited?
I applaud the new, more direct language of the updates to the Code of Ethics. “Social workers must take action against oppression, racism, discrimination, and inequities, and acknowledge personal privilege.” This standard makes clear we must act for meaningful social change. More than ever, this includes an understanding of what technology is, how it’s used at various levels, and how harm can be caused if we fail to act.
Otherwise, we’re just pushing buttons.
Stephen P. Cummings, MSW, ACSW, LISW, is a clinical assistant professor at the University of Iowa School of Social Work, where he is the administrator for distance education.