If you were told an industry is directly contributing to an increase in teenage eating disorders or suicide, what would your response be? Perhaps you would feel outrage, or simply confusion over how an industry could’ve been allowed to have that effect on teens in the first place. Maybe you would wonder, more specifically, how the government could’ve allowed this to happen. After all, don’t we, as citizens, collectively cede part of our individual power and personal freedoms in exchange for such protection in the first place?
Unfortunately, when it comes to tech companies, people seem to have trouble mustering the same outrage and demand for real change they have for other industries that infringe on their personal rights and privacy. U.S. tech companies contribute greatly to the American and global economy, with Microsoft, Apple, Google, and Amazon worth more than a trillion dollars each. Because of their outsized influence on the economy, government regulators are often reluctant to fine tech companies or curtail their growth in any way. After all, no politician wants to be blamed for layoffs or a slowdown in economic growth. However, with seven in ten Americans using social media, the tech companies that create and run these platforms cannot be left to self-regulate as best they can. We should not be satisfied with evasive congressional answers and half-hearted apologies.
Tech companies need to take responsibility for their actions when it comes to data privacy, social media’s effect on mental and physical health, and the use of their platforms as tools against democracy.
Tech companies, like medical providers and researchers, have a significant effect on the quality of human life and should be regulated in a similar manner. Creating Institutional Review Boards (IRBs) for tech companies could help enforce industry-wide ethical guidelines. Because tech companies create, test, and release products for human consumption, they routinely conduct human testing—but without supervision or regulation. In contrast, medical research proposals that involve human subjects require IRB approval to protect potential subjects and ensure that the scientific benefits of a project outweigh the risks of having human subjects participate. When it comes to social media, humans are currently at the mercy of whatever tech companies deem appropriate to release and test, and are often not aware that they are test subjects at all. This norm of human testing without IRB approval or individual consent needs to be changed through industry and/or governmental regulation. Instituting steep fines for tech companies that cross ethical lines is one way to incentivize companies to pay close attention to ethics, in addition to their bottom line.
Moreover, increasing the demographic diversity of the industry is a must. Tech companies should include sociologists, psychologists, economists, ethicists, and philosophers on their leadership and engineering teams. Having professionals who study human behavior and society as part of a diverse team will help software engineers, who are notorious for embracing the “black and white” simplicity of code, think through the ethical grey areas inherent in their work. For example, human-focused professionals could help illuminate potential areas of concern when it comes to the implications of AI for human employment, the effect of social media on mental health, and the ethical limits of surveillance and “smart” technology. An increase in demographic diversity (in terms of race, gender, class, nationality, etc.) will also help software engineers create more equitable and accurate AI that doesn’t categorize Black people as “gorillas,” among other issues.
As exciting as the world of technology is, this powerful, global industry must be regulated to give it the best chance of enriching, instead of destroying, humanity. I am not a Luddite by any means. However, I do believe that there is, or should be, a limit to scientific and technological discovery. It would be strange to excuse Frankenstein for not anticipating the disasters his monster could create, or to dismiss the necessity of limiting CRISPR gene editing technology to non-human experimentation as scientists become more familiar with the tool and debate the extent to which we can (or should) edit the human genome.
It’s time for us to move beyond the age of tech exceptionalism and treat tech companies like any other industry that has a significant effect on human mental and physical health. Tech leadership should be held responsible for the faulty and/or dangerous products their companies produce. Employees should be expected to take personal responsibility for their contributions to products instead of being given a free pass to hide behind leadership. As individuals who make personal ethical decisions, software engineers should not be allowed to escape the implications of their work. Family and friends can help hold engineers accountable for their ethical decisions, including by refusing to overlook detrimental bystander behavior.
Tech companies’ actions affect all of us. We, as the general public, have a responsibility to fight for our right to be respected as individuals, not products, and to have a government that proactively creates ethical regulations to protect us.
3 thoughts on “Why Does Tech Exceptionalism Exist?”
Hi Rachel, I really enjoyed reading your blog post, and it aligns well with what we are currently discussing in class. I like how you mentioned that social media and tech algorithms are essentially mass, unregulated human trials. I think a lot of people overlook the effect of tech through this lens, since problems like mental illness are slow-developing and more long-term. Unethical medical trials, on the other hand, are more direct violations of human rights, at least on the surface. However, I think it is very important to gauge the long-term scope of the harm tech can cause to humans, and to address it sooner rather than later.
Another point to consider is just how interwoven technology is in our lives today. From the alarm clocks we set in the morning, to communicating with colleagues, to completing school assignments—tech is used in almost every aspect of our lives. That being considered, our ability to function in current society relies heavily on technology. It is astonishing that an industry affecting all aspects of life can simultaneously be one of the most unregulated industries today. It is important to note that technology has been growing at an exponential rate for the past few decades, and the full implications of such unregulated growth are just starting to manifest themselves. As for the accountability aspect of this issue, I think it is up to both legislators and individual companies. The hierarchy in companies makes it extremely difficult to pinpoint one source of this issue; from the software developers to the executive level, people are actively making decisions to put forth such technology. As you mentioned, having ethics boards and increased diversity can make a big difference in the type of tech that reaches the general public.
I really loved this post. It’s so strange that tech companies somehow get a free pass when it comes to regulation. Unlike medical experimentation and regulations governing lawyers, I think tech is different in that it is (on the most surface level) not an industry that directly hurts people. Rather, it tricks and masterfully manipulates people into hurting themselves. If a doctor or lawyer does something bad, it’s easy to blame the problem on that individual specifically. If some Alexa device records a conversation it wasn’t supposed to, it’s more difficult to attach true blame. Was it the consumer’s fault for not reading through the user manual thoroughly? Was it the company’s fault for putting in this feature that could be helpful in other cases, but just not in this one? Was it the marketing team’s fault for not disclosing this openly in advertising (which would be a terrible choice, I think)? Because people are the ones choosing these devices, one party can’t easily be held solely responsible for whatever harm arises. I think there’s also the problem that the US runs away from regulation like it’s the plague; regulation only sprouts up when the public gets angry and rich people’s wallets start suffering. As of now, what the tech industry is doing is not eliciting either of those negative responses.
I in no way think that these harmful issues tech companies are creating should be evaded and ignored. However, while trying not to sound like an out-of-touch person who says “you can’t sue McDonald’s for making you unhealthy,” I think it should be understood that we as users are enabling these actions. While it is horrible that social media companies know their impact on our mental health and still capitalize on it, we are the ones who come to their platforms and give our attention/data/ad-viewership to whichever algorithm can suck us in for the most hours. I’d hate to make it sound like this is nothing to worry about (because it is worth worrying about), but we have a choice as users to engage and disengage. If the terms and conditions are harmful, then the contract must be terminated. Either a company can choose to hear and apply the needs of hurt consumers (like Apple giving users the choice to opt out of data sharing, which tanked Facebook’s value), or the hurt consumer can decide that none of the tech players in this market care about them.
Again, I don’t mean to sound like I’m just saying “if you don’t like social media, don’t use it.” However, as someone who deleted the majority of the social media apps on my phone and saw my screen time drop from 11 hours a day to 3, I think users should remember that we don’t have to give ourselves to algorithms that don’t have our best interests at heart. Choice is what is needed here. Thankfully we have the choice not to use social media, but we should also have the choice to keep our data private—the choice to be free and make our own decisions with our own data/content/posts/etc.