Walls holding back information sharing and participatory decision-making have been breaking down over the past few decades. Many readers will question this claim, citing recent developments in politics, disinformation, and social disintegration. But I hold on to my conviction that our world is getting more open, and I'll examine where it's going in this two-part article. The article is the culmination of a year-long "Open Anniversary" series on the Linux Professional Institute blog. Previous installments in the series are:
January (Free Culture): Introductory explanation to launch the series
February (Open Source): Open source in the worldwide COVID-19 battle
March (Open Business): Who's building businesses around free and open source software?
April (Open Government): Where transparency, crowdsourcing, and open source software meet
May (Open Knowledge): Open knowledge, the Internet Archive, and the history of everything
June (Open Hardware): Open hardware suitable for more and more computing projects
July (Open Education): The many ways to target world disparities
August (Open Web): Open, simple, generative: Why the Web is the dominant internet application
September (Linux): The many meanings of Linux
October (Free Software): Steps toward a great career in free and open source software
November (Open Access): Open Access Flips Hundreds of Years of Scientific Research
This first part of the series defends the cause of openness against critics who blame it for current social and political problems. I try to locate more appropriate targets for this criticism.
Is Openness Dangerous?
There's plenty to lament in what we see online, apparently spiraling out of control: rampant conspiracy theories, the plethora of criminal activity on the "dark internet," and more. Some people stretch their criticisms too far, though. I've heard uninformed and defamatory statements like, "The internet is causing polarization" and "The internet helps lies to travel quickly." When we evaluate technologies, we have to think carefully. What precise technologies are we talking about? Who is using them, and how are they being used?
Such questions become even more complex because the combination of personal digital devices and near-universal networking also hands tools to spies and to governments trying to curtail their population's behavior.
The internet actually is still doing what it did all along, starting from the supposedly golden age when it brought people together around the world and provided safe spaces to discuss stigmatized issues such as gay and lesbian behavior, recovery from child abuse or drug use, non-neurotypical experiences, and so forth. So many topics are now part of the public discourse—just look at recent commitments to address sexual harassment in the workplace, for instance—that were first aired in internet communities.
One goes to meetings today where people say, “I’m on the autistic spectrum” or “I’m a victim of child abuse” or “I spent five years in prison” or “My pronouns are they and them” without shame or stigma. There has to be a connection here; we forget how much more open a society we have become since the internet.
As data looms in importance, the internet is keeping up as a resource for the marginalized. One recent example, NativeDATA, tailors health information to native North American peoples, who suffer disproportionately from health problems related to their environments and social status.
From the beginning, too, there was plenty of evidence that the internet had some pretty nasty corners. Illegal trade, hate speech, and wanton lies were known problems. Attempts to separate the good from the bad started quite some time ago—remember the Communications Decency Act of 1996—but always foundered on the dilemma that different people had different ideas about what was good and bad, and ultimately people realized that they didn't want to hand the decision over to any authority.
It is a tribute to the spirit of the early internet that major social media companies—while investing millions of dollars to take down harmful content—show reluctance to crack down further, and democratic governments are moving cautiously in defining standards (notably the Digital Services Act package in the European Union). For instance, although the EU wants social media sites to label and remove content that is manifestly dangerous, the regulators want transparency in such removal and clear explanations about why it's removed. The regulators are also sensitive to excessive demands on social media sites.
Things have taken a turn for the worse during the past decade, so far as I can see, but the problem is not the internet: it is the services built on top of it by companies, notably search engines and social media. A recent working paper by Suran et al. on "collective intelligence" points to the problem. Successful collective intelligence (related to the ideas of crowdsourcing and the wisdom of crowds) requires two traits: diversity and transparency. The internet is quite capable of fostering these values, but social media works against them.
Regarding diversity, the preference by search engines and social media to display items similar to what one has previously "liked" or clicked on creates the bubbles so often criticized by observers. And the algorithms, of course, are quite opaque. The companies can't afford to be transparent about what they do because revealing the algorithms would make it easier to game their systems. But the problem demonstrates that we need something different from social media for serious discussions and "news."
Some people also claim that social media tends to inflame the discourse, arousing fear and hate. I'm not convinced this is true. People on social media joyfully pile on to express their approval for positive things such as births, marriages, degrees earned, awards, and promotions. Let's just say that social media is designed to evoke emotions instead of cautious consideration, and leave it at that.
I love social media. Like billions of people, I use it to keep up with old college friends, share my pleasures and pains with them, and connect with colleagues who share my interests. Social media was designed for that and does it superbly.
Social media introduces risks when people use it to exchange "news," organize political engagement, or function as a public space in other ways. Those tasks are better served by completely different tools—offline and online—that foster thoughtful debate and intensive research. There are models for such spaces. They use some of the same superficial mechanisms as social media does, such as groups and ratings. But the public spaces deliberately engage interested people in working together to solve their problems. Positive results with broad consensus are their goal.
These platforms can be run by governments, companies, or non-profits. One example is a partner of the Linux Professional Institute, SmartCT in the Philippines.
Source: lpi.org