Ep. 09: Maria Axente of PwC on ethical AI

  

In this episode, we open with a chat about Facebook’s acquisition of GIPHY and what the company may be trying to learn with its AI (hint: hidden meanings), and then we move on to a great conversation with Maria Axente of PwC about ethical AI at PwC and in her non-profit work with groups like UNICEF.

If you’re enjoying our podcast, please share with your friends, subscribe and give us a like—we’d appreciate your help spreading the word.

Facebook's GIPHY acquisition is genius

Helping Facebook's AI understand hidden meaning

This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about how AI is being used to make and break our world. I write this because, while I am optimistic about the technology, I am pessimistic about the power structures that command it. 

If you haven’t signed up yet, you can do that here.

Special announcement…

This summer we’ve carved out time for something really important and close to our hearts: a virtual summer camp for middle and high schoolers who want to learn how to build an AI start-up. We’re looking for enthusiastic self-starters who are interested in what’s possible at the intersection of entrepreneurialism, AI and design. If you can, we’d love it if you could share in your network and help us bring human-centered AI to a Sonder Scheme Junior “Shark Tank.”

We’re also looking for someone to sponsor a couple of scholarships in each class for kids who would otherwise not be able to attend, so if you have any ideas, get in touch.


If necessity is the mother of invention, then Facebook’s AI has been on extra duty since Covid as it tries to identify and understand hidden meaning.

AI now proactively detects 88.8 percent of the hate speech content we remove, up from 80.2 percent the previous quarter. - Facebook, May 2020

Moderators, as contractors, can’t work from home due to security concerns, so removing hate speech has relied more on AI while human moderators prioritize Covid misinformation. Covid is new, so there’s relatively little data available for AI to learn from, which makes humans even more important. But humans alone are not a sustainable strategy for Facebook; the platform is simply too big, too fast and too diverse.

Content moderation is a frontier for AI research. Developments in content moderation will drive a wave of new AI capability, which will be of huge value in Facebook’s core business of micro-targeting ads.

Hate speech is complex for AI: it’s a small proportion of the billions of posts that aren’t problematic; it’s often multimodal; it can be ironic or sarcastic, which is tough for machines; and the people posting it are deliberately trying to avoid detection by manipulating words or creating text that’s ambiguous outside its broader context. Haters use dog whistles to hide the meaning from anyone who doesn’t understand the codes—from those who don’t speak the hidden language.

Of course, if Facebook can use AI to solve hate speech, then it can use the same AI for a lot more. Which is a big incentive.

This week the company made a couple of interesting moves. It announced the acquisition of GIPHY:

GIPHY, a leader in visual expression and creation, is joining the Facebook company today as part of the Instagram team. GIPHY makes everyday conversations more entertaining, and so we plan to further integrate their GIF library into Instagram and our other apps so that people can find just the right way to express themselves. - Facebook

And it published a blog post focusing on the AI research direction for hate speech. The details help us understand Facebook’s AI strategy in the context of moderation as a pain point, but they also give insight into the potential reach of this AI in the future.

To better protect people, we have AI tools to quickly—and often proactively—detect this content. - Facebook

I think there’s a way to put these two things together because GIPHY isn’t just about giving “people meaningful and creative ways to express themselves,” it’s also about giving direct access to data for AI to learn about the creative ways that humans express themselves in non-explicit ways—irony, humor, sarcasm, sleight-of-hand, juxtaposition.

And the work from Facebook’s AI team lays out how and where this data could be made even more valuable.

The unique challenge of hate speech

The real challenge for AI as a moderator is that humans are incredibly adept at manipulating language within the context of other media, especially when they don’t want to get caught. For humans, all media is mixed—vision, language, sound. Memes can use text and images or video together. The text alone can be ambiguous but when it’s combined with the image, the statement takes on another meaning, for example:

Credit: Facebook

But Facebook is showing how slick it is at hybridizing AI techniques and developing ways to recognize meaning in multimodal content.

Facebook’s hybridization of leading-edge AI

In 2019, researchers took Google’s BERT model for language and combined it with a vision system to create Vision-and-Language BERT (ViLBERT). The model is pre-trained on paired images and captions, essentially building a joint visual-linguistic representation. The innovation is linking the two models so they reason jointly between vision and language: separate streams process vision and language, and they communicate with each other through co-attentional transformer layers.
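
Facebook hasn’t released its production system, but the two-stream idea is easy to sketch in PyTorch: one transformer encoder per modality, plus a co-attention step in which each stream attends to the other. Everything below (layer sizes, names, the toy classifier head) is an illustrative assumption, not ViLBERT’s actual code.

```python
# Minimal sketch of a two-stream vision-and-language model with co-attention.
import torch
import torch.nn as nn


class TwoStreamCoAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # Independent encoders for each modality
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True),
            num_layers=2,
        )
        self.vision_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True),
            num_layers=2,
        )
        # Co-attention: text queries attend to image regions, and vice versa
        self.text_to_vision = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.vision_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)  # e.g. benign vs. violating

    def forward(self, text_emb, region_emb):
        t = self.text_encoder(text_emb)          # (batch, tokens, dim)
        v = self.vision_encoder(region_emb)      # (batch, regions, dim)
        t_ctx, _ = self.text_to_vision(t, v, v)  # text enriched with visual context
        v_ctx, _ = self.vision_to_text(v, t, t)  # vision enriched with text context
        pooled = torch.cat([t_ctx.mean(dim=1), v_ctx.mean(dim=1)], dim=-1)
        return self.classifier(pooled)


# Toy usage: random tensors stand in for real token and image-region features.
model = TwoStreamCoAttention()
text = torch.randn(1, 12, 256)     # 12 text tokens
regions = torch.randn(1, 36, 256)  # 36 detected image regions
print(model(text, regions).shape)  # torch.Size([1, 2])
```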

The most important development from an AI perspective is the introduction of self-supervised learning models into production. The best way to think of self-supervised learning is as a “fill in the blanks” learning style. It’s fairly new, and it has been championed by Facebook’s chief AI scientist, Yann LeCun. What makes self-supervised learning different is that the goal is to have AI learn to reason. Modern AI learns through gradient-based methods, and self-supervised learning makes reasoning compatible with gradient-based learning: by filling in missing information, the AI develops a representation of the world, which it can then use to reason more generally about a task, like classifying language. Just as babies learn by observing the world before they undertake a task, self-supervised learning has AI do the same.
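
To make “fill in the blanks” concrete, here is a minimal sketch using the open-source Hugging Face pipeline with a public RoBERTa checkpoint, a stand-in rather than Facebook’s production models. The model predicts the hidden word purely from context it learned on unlabeled text.

```python
# "Fill in the blanks": a pretrained masked language model predicts the hidden
# word from context. This is the self-supervised pretraining task described above.
# Requires the `transformers` package.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

# No labels were ever provided for this; the model learned by reconstructing
# masked words across huge amounts of unlabeled text.
for guess in fill("The meeting was moved to <mask> because of the storm."):
    print(f'{guess["token_str"]!r:>12}  score={guess["score"]:.3f}')
```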

If AI can understand English, it can understand everything

Another theme that’s fascinating is language-universal structures, where AI can detect meanings that are similar in different languages. To be honest, this is kind of spooky—even with languages that have very little training data, the AI can build a model based on the structure of other, more prevalent languages. This graphic illustrates how hate speech in different languages is represented in a single, shared embedding space.

Credit: Facebook

Facebook now really does have the babel fish. It’s like having a translator—one who has their own agenda—involved in every conversation. It’s a strange feeling to see all of human language boiled down to common structures in high-dimensional space.

This allows models like XLM-R to learn in a language-agnostic fashion, taking advantage of transfer learning to learn from data in one language (e.g., Hindi) and use it in other languages (e.g., Spanish and Bulgarian). - Facebook
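
Open-source multilingual encoders descended from the same line of work make the shared embedding space tangible. Here is a minimal sketch using the sentence-transformers library and a public multilingual checkpoint (my choice, not Facebook’s production XLM-R model): sentences with similar meaning land close together regardless of language.

```python
# Sketch of a language-agnostic embedding space: similar meanings map to
# nearby vectors even across languages. Requires `sentence-transformers`.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

sentences = [
    "I really hate waiting in line.",   # English
    "Odio muchísimo hacer cola.",       # Spanish, similar meaning
    "The weather is lovely today.",     # English, unrelated meaning
]
embeddings = model.encode(sentences)

print(util.cos_sim(embeddings[0], embeddings[1]))  # high: same meaning, different language
print(util.cos_sim(embeddings[0], embeddings[2]))  # lower: same language, different meaning
```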

Fusion models: making a superintelligence and not calling it one

Self-supervised learning has enabled Facebook to build a vast array of fusion models that combine images, video, text, people, interactions and external content, which can then all be represented mathematically across all languages, regions and countries.

Credit: Facebook
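
Facebook hasn’t published its production fusion code, but the general pattern is straightforward to sketch: encode each signal separately, project everything into one shared space, and let a single classifier consume the result. All dimensions and names below are illustrative assumptions.

```python
# Toy late-fusion model: separate encoders per signal, one shared representation.
import torch
import torch.nn as nn


class LateFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, interact_dim=32, shared=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared)
        self.image_proj = nn.Linear(image_dim, shared)
        self.interact_proj = nn.Linear(interact_dim, shared)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(3 * shared, 2))  # benign vs. violating

    def forward(self, text_emb, image_emb, interact_feats):
        fused = torch.cat(
            [
                self.text_proj(text_emb),           # post text
                self.image_proj(image_emb),         # attached image or video frame
                self.interact_proj(interact_feats), # shares, reports, reactions
            ],
            dim=-1,
        )
        return self.head(fused)


model = LateFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 512), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 2])
```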

It really is the ultimate way to combine all of Facebook’s data. Actually all data, period. Which, of course, is a much bigger opportunity, by an order of billions.

This allows us to learn from many more examples, both of hate speech and benign content, unlocking the value of unlabeled data. - Facebook

If they called it the “brain of the world,” we’d all be terrified. But instead it’s AI for removing hate speech. And GIPHY, the data set that represents memes as a new form of human communication, is a perfect resource to add to AI’s reasoning. Lucky it’s called meaningful and creative sharing.

Facebook’s AI frontier is formidable. Right now, it’s targeted at hate speech, which we all agree is a good thing. But AI that works on hate speech—which represents a relatively small but technically sophisticated portion of content on the platform—will be massively more powerful on the billions of posts across the world, especially when it can understand all the ways that humans try to stay opaque to the machine.

Where will they aim it next? And once Facebook’s AI understands how humans communicate with hidden meaning—through things like hate speech and sarcasm—how will the AI’s communication with humans change? Will we be unknowingly influenced by the AI because it’s expressing things to us in hidden ways?


Also this week:

  • Since the pandemic and lockdown we have launched our Studio for human-centered AI design, re-crafted all our workshops for use by distributed teams, innovated on design thinking for couples with our couple-centered design system for COVID confinement (free books), and doubled down on ethical implementation of AI in general and in HR in particular. If you haven’t checked out our products and services recently, please do.

  • Clearview AI in NZ, from RNZ. NZ’s experience mirrors the fundamental concern: individuals in law enforcement can trial this without “higher ups” knowing about it. This is the new “move fast and break things,” where privacy commissioners and other watchdogs for democratic public decision making aren’t part of any trials. “Official emails released to RNZ show how police first used the technology: by submitting images of wanted people who police say looked ‘to be of Māori or Polynesian ethnicity’, as well as ‘Irish roof contractors’.” Plus, an interview between Kim Hill and Kashmir Hill, a technology reporter for the New York Times who has been following the company.

  • An essay in Nautilus from the inventor of the Roomba on how designers of robots need to be creative at understanding what a human is doing to complete a task and why they do it. It’s a delight to read. “Robots and people may accomplish the same task in completely different ways. This makes deciding which tasks are robot-appropriate both difficult and, from my perspective, great fun. Every potential task must be reimagined from the ground up.”

  • ICYMI, an article in NYmag that everyone’s talking about: future of higher ed, by Prof Galloway.

  • Great discussion on All Tech Is Human on emotional AI with Rana el Kaliouby and Pamela Pavliscak.

  • Article in the MIT Tech Review about how humans need to step in for failing AI. “Machine-learning models that run behind the scenes in inventory management, fraud detection, and marketing rely on a cycle of normal human behavior. But what counts as normal has changed, and now some are no longer working.” Called it.

Ep. 08: Ted Kwartler of DataRobot on trusted AI

  

In this episode, we open with an opinionated chat about Facebook’s new oversight board and then we have a great conversation with Ted Kwartler, VP of Trusted AI at DataRobot. We think DataRobot is a very interesting AI company and Ted’s role is key to helping their customers with ethical AI.

There is no technology miracle

"All the data in the world are only as useful as the institutions and leaders that govern its use." - Shoshana Zuboff

This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about how AI is being used to make and break our world. I write this because, while I am optimistic about the technology, I am pessimistic about the power structures that command it. 

If you haven’t signed up yet, you can do that here.


Recent headlines about the role of AI in the fight against COVID show the mismatch between people’s expectations and what’s possible. Those who think AI is overhyped are finding plenty of places to point this out, while the hype merchants push isolated solutions that are often divorced from the overall system in which the algorithm operates.

It’s just techies doing techie things because they don’t know what else to do.

- Bruce Schneier, Berkman Klein Center for Internet & Society

Neither extreme is productive but they do show how AI, robotics and autonomous technologies are often misunderstood.

A core misunderstanding is how AI gets its start. AI runs on data and if that data doesn’t exist, there’s no way for AI to begin to understand the world. This means that the current crisis is also a crisis for AI.

You cannot “big data” your way out of a “no data” situation. Period.

- Jason Bay, Government Digital Services, Singapore

The world’s data has been completely disrupted by coronavirus. The impact on the accuracy of AI—particularly AI that relies on supervised learning techniques and in predictive applications such as demand forecasting—must surely be significant. 

Another mistake is to assume that machines will be a welcome substitute for humans. The real answer is a lot more complex. We see autonomous machines welcomed when the job is unsafe for a human or when the machine genuinely adds to a human’s ability to do their job. Autonomous, learning robots still do not have the broad base of skills and dexterity to replace humans in unpredictable environments. This is much more akin to how robots work in disaster situations than to a new acceptance of autonomous machines in normal society.

This hints at the limits of their use: no, we won’t see a huge increase in drone-delivered lattes because people will instead worry about the same drone taking their temperature from afar. And I’m skeptical about this being the break-out moment for autonomous delivery—it’s really hard to see how it would be possible to prevent theft and vandalism of such delivery vehicles in a world where unemployment could exceed 32%, at least for a time.

Cute… I’ll take one!

Many hospitals and essential services are using robots instead of people but jobs are not being automated. A recent survey of the role of robots in the pandemic response conducted by researchers at Texas A&M shows that robots either perform tasks that a person can’t do or do safely, or take on tasks that free up responders to handle the increased workload. 

The majority of robots being used in hospitals treating COVID-19 patients have not replaced health care professionals. These robots are teleoperated, enabling the health care workers to apply their expertise and compassion to sick and isolated patients remotely.

- Dr Robin Murphy

AI operates inside a system: a chain of tasks or a series of human-machine handovers. So while AI can help identify a potential drug or vaccine candidate, it can’t do much to speed up other parts of the process. Maybe AI can help diagnose COVID, but the ethics of some innovations are not consistent with medical ethics, causing entrepreneurs to withdraw MVP-level products as hastily as they launched them.

Within 48 hours, Carnegie Mellon forced the lab to take down the online test, which could have run afoul of FDA guidelines and be misinterpreted by people regardless of the disclaimer. "It's a perfectly valid concern, and my whole team had not thought of that ethical side of things."

- Rita Singh, Carnegie Mellon, per Business Insider. 

Many such innovations have only served to highlight that the core logic of agile development—move fast, experiment, see what works, see what breaks—is completely inappropriate in medicine. Even in a pandemic, there is the right emergency response and the wrong one.

But perhaps the biggest ding on AI right now is that it can’t deliver a miracle. Tech can help with exposure tracking but it can’t do the hard, physical, on-the-ground work of contact tracing, which is what we actually need. Without contact tracing, even with Google and Apple’s joint exposure-notification technology, we could end up with something that doesn’t work.

The US could become a wild west of incompatible contact tracing apps that vary from state to state and city to city, managed by companies with no public health experience that rake in cash via government contracts while providing the people who use them with a false sense of security.

- Buzzfeed

In all this though, there is hope. I have taken great comfort from Shoshana Zuboff’s recent comments. As the author of The Age of Surveillance Capitalism and perhaps the greatest critic of Google’s and Facebook’s data strategies, she has appeared recently in forums—Lavin Live and Unrig Summit—surprisingly upbeat given everything going on.

Zuboff says she has “nothing but optimism.” By this, I infer that she’s not only referring to how Google’s and Facebook’s ad revenue will take a pounding, exposing them to the vulnerabilities of the digital marketplace; I think she’s talking about something bigger. Instead of ignorance or apathy, she sees a groundswell of democratic engagement. Now when she asks, “will the digital future be compatible with democracy?” she doesn’t hear crickets, she hears a whole lot of people asking the same question.

Does fighting COVID-19 mean that we are on a forced march to COVID-1984?

- Shoshana Zuboff.

Extension of surveillance is perhaps the ultimate concern, but so is continuing to propagate myths about AI, including the myth that humans are less valuable in an age of AI.

Humans judge humans differently than they judge machines. We judge people based on their intentions, while we judge machines on consequences. This means that people get rewarded for taking risks while machines get punished for making mistakes. And because AI is probabilistic, with false positives and false negatives, AI operating on its own simply can’t win right now.

What’s fine on Spotify—a song the algorithm predicted you’d like but didn’t, or not being recommended a song you would have liked—is a real problem in medicine. A tracking app harms when it doesn’t work, either by driving unreasonable alarm or by creating a false sense of security. Only humans, applying reason and judgment, can balance the consequences of those errors and make any app useful.
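
To see why those trade-offs need human judgment, here is a toy sketch with invented numbers: the same imperfect risk score produces both false alarms and missed exposures, and moving the alert threshold only trades one for the other. Deciding which mix is acceptable is not a statistical question.

```python
# Toy illustration of the false-positive / false-negative trade-off.
# All scores and numbers are invented, not from any real tracking app.
import numpy as np

rng = np.random.default_rng(0)
exposed = rng.integers(0, 2, size=10_000)  # ground truth: 1 = actually exposed
scores = np.clip(0.6 * exposed + rng.normal(0.3, 0.2, size=10_000), 0, 1)  # imperfect risk score

def error_rates(threshold):
    flagged = scores >= threshold
    false_alarm = np.mean(flagged[exposed == 0])   # unreasonable alarm
    missed = np.mean(~flagged[exposed == 1])       # false sense of security
    return false_alarm, missed

for t in (0.3, 0.5, 0.7):
    fa, miss = error_rates(t)
    print(f"threshold={t:.1f}  false alarms={fa:.1%}  missed exposures={miss:.1%}")
# Lowering the threshold cuts missed exposures but floods people with alarms;
# choosing the acceptable mix is a human decision about consequences.
```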

The end result is an app that doesn't work. People will post their bad experiences on social media, and people will read those posts and realize that the app is not to be trusted. That loss of trust is even worse than having no app at all.

- Bruce Schneier, Berkman Klein Center for Internet & Society

People want humans to lead, to be accountable, to make decisions, to be just, ethical and fair, and most importantly to be present; and they want machines to serve these goals.

We won’t have a technology miracle because one isn’t possible, and because technology isn’t trustworthy without the trustworthiness of the people and systems that surround it.

What people want is a human miracle. And if AI can help with that, great.

There is only one source of miracle—human ingenuity and creativity.
- Shoshana Zuboff.


Also this week:

  • We decided to change it up a bit and repurposed our human-centered AI design into couple-centered design for COVID. You can download the free resources here and join our Facebook group for more.

  • Ethical AI in recruitment and other HR processes is more important than ever. We’ve launched a service specifically to help HR professionals evaluate the underlying AI, evaluate risk and assess vendors. More details here. If you know someone who would be interested, please share.

  • Listen to Shoshana and others on BBC The Real Story: Governments are deploying new technologies to fight coronavirus. But at what cost? An excellent summary on the discussion around mass surveillance. “A heady cocktail of ideas.”

  • Latest on tech ethics from Data and Society. “The keyword inextricably bound up with discussions of these problems has been ethics. It is a concept around which power is contested: who gets to decide what ethics is will determine much about what kinds of interventions technology can make in all of our lives, including who benefits, who is protected, and who is made vulnerable.”

  • An excellent summary editorial on contact tracing apps from Nature.

  • LinkedIn released new AI tools to help prepare for job interviews, including an automated tool that gives feedback on pacing and sensitive words. It can be accessed immediately after applying for jobs on the LinkedIn Jobs homepage.

  • Clarifying “algorithm audits” and “algorithmic impact assessments.” This new report helps break down the approaches.

  • Hospitals turn to an AI tool to predict which COVID-19 patients will become critically ill — without knowing whether it works. But the tool hasn’t been validated, and hospitals are reaching wildly different conclusions on how to apply the tool. Via Stat News.

 

The key metric in AI

Insights from a four year journey into AI design and use

This is a Sonder Scheme newsletter, written by me, Helen Edwards. Artificiality is about how AI is being used to make and break our world. I write this because, while I am optimistic about the technology, I am pessimistic about the power structures that command it. 

If you haven’t signed up yet, you can do that here.


Artificiality has reached its first half-birthday. According to Khe Hy’s ideas on “How starting an email newsletter will change your life,” this is an important milestone.

It’s perhaps fitting that we—Sonder Scheme—released our first major online product this week too. It’s called Sonder Scheme Studio and it’s a complete system for designing human-centered AI for designers, product managers, software developers and technology ethicists. Studio makes AI design easier, more inclusive, more ethical and more human.

Everyone has learned a lot since the peak of the AI hype back in 2016. The technological progress being made every day and the projections for what was going to happen made AI seem both inevitable and invincible. Exponential growth would account for everything. Humans would be redundant (or in utopia.) Bring on UBI.

Most people probably knew in their gut that the story being told by the AI evangelists was too simple at best, plain wrong at worst. What was actually happening at the frontier revealed a more complex story of human-machine than robots doing everything, all the time.

One start-up whose technology showed the power of AI was Nutonian—since acquired by DataRobot. Its AI could evaluate literally millions of predictive models every second, far beyond what engineering-centered statistical models could do. But what was most interesting was the company’s philosophy, which wasn’t about removing or downgrading the value of human experience and expertise, but about amplifying it.

One successful application of Nutonian’s Eureqa was in deriving physical laws from data generated by complex physical systems. This functionality was used by Rio Tinto, a mining giant, to diagnose a production problem that had the process engineers stumped. They had been using statistical methods to analyze process data—hundreds of sensors providing details of processing conditions and the physical and chemical properties of every input and output—but those tools couldn’t handle more than a handful of variables and the analysis was taking months. The engineers hooked up their data to the AI and 45 minutes later had a smoking gun in the form of an equation linking the specification to a variable that was a complete surprise.
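
Eureqa’s engine is proprietary, but the general technique (symbolic regression: searching over candidate equations rather than fitting one fixed model) can be sketched with the open-source gplearn library. The data below is synthetic, and the library choice is mine, not what Nutonian or Rio Tinto used.

```python
# Sketch of equation discovery (symbolic regression) on synthetic data.
# Requires the `gplearn` package.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(42)
X = rng.uniform(-2, 2, size=(500, 3))                          # three "sensor" variables
y = X[:, 0] ** 2 - 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)  # hidden law plus noise

est = SymbolicRegressor(
    population_size=2000,
    generations=15,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.01,
    random_state=0,
)
est.fit(X, y)

# The fitted program is a human-readable equation that engineers can take back
# to first principles, which is the step described at Rio Tinto above.
print(est._program)  # e.g. sub(mul(X0, X0), mul(0.5, X1)), or something close
```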

While this correlation was interesting, what mattered most was what happened next—mobilizing human expertise to find a causal relationship. The plant's R&D engineers searched the metallurgical literature and found some things that explained the phenomenon; things that no one would have even thought to go back to without the pointer from the machine. This was a story of humans and machines both playing to their strengths.

Then came the Princeton/Bath work on bias in AI. This was a critical turning point for many people working in AI because it framed algorithmic bias as human bias. Up until this point, people hadn’t thought a lot about human experience being inherently biased nor how this bias would be learned by an AI. This work held up a mirror—yes, flowers are more pleasant than bees.

So what did this mean for how AI understands the world when AI has no common sense? Common sense can’t be hand-coded and our algorithms aren’t general enough to learn it for themselves, so how could we tell AI what we want it to know about human experience? If we don’t ask an AI to be unbiased and fair, it will not be unbiased and fair. It will simply be what it learns from the data and what we tell it to optimize.

Anytime you train an algorithm based on human culture, you wind up with results that mimic it — Joanna Bryson
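
The underlying measurement is simple: compare how close a target word sits to sets of “pleasant” and “unpleasant” words in an embedding space learned from human text. Here is a minimal sketch of that association test with tiny made-up vectors; the published work used large pretrained embeddings such as GloVe.

```python
# Minimal sketch of a word-embedding association test ("flowers vs. bees").
# The tiny vectors below are invented for illustration only.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, pleasant, unpleasant, emb):
    """How much closer a word sits to pleasant words than to unpleasant ones."""
    return (
        np.mean([cosine(emb[word], emb[p]) for p in pleasant])
        - np.mean([cosine(emb[word], emb[u]) for u in unpleasant])
    )

# Toy embeddings standing in for GloVe/word2vec vectors learned from web text.
emb = {
    "flower": np.array([0.9, 0.1, 0.2]),
    "bee":    np.array([0.2, 0.8, 0.3]),
    "love":   np.array([0.8, 0.2, 0.1]),
    "joy":    np.array([0.9, 0.2, 0.2]),
    "pain":   np.array([0.1, 0.9, 0.4]),
    "hate":   np.array([0.2, 0.9, 0.2]),
}
pleasant, unpleasant = ["love", "joy"], ["pain", "hate"]

print("flower:", round(association("flower", pleasant, unpleasant, emb), 3))
print("bee:   ", round(association("bee", pleasant, unpleasant, emb), 3))
# With real embeddings trained on human text, the same measure reproduces
# human-like biases, including ones about people rather than insects.
```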

There’s no doubt that the AI community has made significant progress on bias and fairness since 2016. But we haven’t got anywhere near where we should be. Back then, Google image search would return CEO Barbie as the first female CEO. Thankfully, CEO Barbie has now gone. Female CEOs are more “fairly” returned but we still have no idea of how the algorithm works. Is it some Googler’s judgment of accurate statistical representation combined with a dose of aspiration? What does the AI do in real-time and how do humans contribute? We have no insight or oversight into the way our worldview is shaped by human decisions about AI in search other than pointing out absurdities from the sidelines.

Tech in general, and AI in particular, desperately needs more diversity in the design process. It’s important because those other voices actually can alter how the intricate dials of algorithmic tuning get set. But only if they have practical, measurable, inclusive and collaborative ways of being involved early. Tweaking things on the back end leads to all sorts of problems, including unethical data gathering practices—such as how Google offered gift cards to black homeless people in exchange for an image of their face in a misguided attempt to improve the company’s facial recognition algorithms.

In 2017, I was lucky enough to visit DeepMind in London. DeepMind is iconic in AI and it was an exciting moment. But it was a bit of a letdown because I wasn’t treated as someone who had anything to contribute, despite having deep domain knowledge in one of their projects at the time (electricity grid operations), as well as great press credentials—Quartz—and a decent, albeit boutique, pedigree in AI market research—Intelligentsia. I left having had a delicious lunch at their in-house cafeteria and with some nice photos of the view of the London skyline. It still, to this day, feels like an opportunity lost.

AI’s culture was observably one of technical arrogance. Many AI experts simply didn’t see non-technical input as valid. Worse was when those developing AI pooh-poohed non-technical people’s concerns. Engineers who make the design decisions can position themselves as being able to use their personal judgment. And if they are the ones seen as best positioned to evaluate a hypothetical harm, they also have the power to dismiss the concern as not realistic, not relevant, or not worth bothering about given the probabilities. As Sam Harris said in his 2016 TED talk:

One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it.” — Sam Harris

We still have to work on this arrogance. And one of the ways to do this is to develop more empathy and compassion towards those harmed—right now, today—by AI. This means that every story we read about AI harm is relevant. It might be discrimination in some far-flung corner of the world that we would otherwise think has nothing to do with our daily lives as white-collar professionals. But every story of harm is relevant to how AI is designed. This is why even designing very narrow AI for a specific commercial use case needs tools informed by research from socially focused non-profit think tanks, even if their work feels remote.

AI in 2017 was a platform story. When Facebook redefined “meaningful connections” for billions of people, Dave wrote a terrifically popular piece about what Facebook could have learned from Kierkegaard, a dead existentialist philosopher.

The core philosophical issue with Facebook’s algorithmic change is the conundrum that the very act of choosing meaningful content for us means that the consumption of that content cannot be meaningful. By filtering our experiences, Facebook removes our agency to choose. And by removing our choice, it eliminates our ability to live authentically. An inauthentic life has no meaning. — Dave Edwards

We’ve become somewhat obsessed with how AI affects human agency. It plays a central role in how we think about AI design. There are a great many upsides and places where AI can contribute to human agency—well-designed nudges, for example, where users have control and get feedback about their own choices. “Pre-commitment” signals are discoverable by AI. Imagine you want to eat fewer unhealthy snacks but find it hard to resist. Order smaller serving sizes and an AI can detect that you intend to eat less of them, then nudge you to keep that commitment to yourself. With the right design, this is agency-increasing AI.
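
As a toy illustration of the pattern (entirely hypothetical product logic, not any real system), a pre-commitment detector can be very simple: notice the signal in the user’s own choices and nudge toward the goal the user has already expressed.

```python
# Hypothetical sketch: detect a pre-commitment signal (smaller serving sizes)
# in order history and turn it into a nudge that supports the user's own goal.
from dataclasses import dataclass

@dataclass
class Order:
    item: str
    serving_size: str  # "small", "medium" or "large"

def detect_precommitment(orders: list[Order]) -> bool:
    """True if the user has mostly chosen small servings recently."""
    recent = orders[-5:]
    return sum(o.serving_size == "small" for o in recent) >= 3

def nudge(orders: list[Order]) -> str:
    if detect_precommitment(orders):
        return "You've been choosing smaller snacks lately. Stick with the small size?"
    return ""  # no nudge: don't manufacture a preference the user never expressed

history = [Order("chips", "small"), Order("cookies", "small"),
           Order("chips", "large"), Order("chips", "small"), Order("soda", "medium")]
print(nudge(history))
```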

But AI that’s designed without conscious consideration of human agency is far more likely to trend towards agency-decreasing. Why? Because AI is incredibly good at creating preferences that favor the goal of the AI. The paradox of personalization is that the most efficient way to personalize someone’s future is to make that future less personal: to increase the efficiency of personalization, designers put individual consumers in a box that fits the prediction of who they will be tomorrow.

Perhaps the most striking impact of AI on human agency is the things we don’t yet know about ourselves. Unsupervised learning techniques, and AI’s ability to find the unintuitive or even the undiscoverable (for a human), haven’t really been tested by society yet.

The “gold standard” for unsupervised learning is a discovery made by DeepMind in 2018. Human ophthalmologists have a 50/50 chance of determining someone’s gender by looking at their retina. It’s a guess; there’s no way to tell. DeepMind’s AI got it right 97% of the time. But one of the researchers said that he thinks the AI’s prediction accuracy is closer to 100%. Who is lying? Who doesn’t know? What don’t we know yet about gender in humans? What should an observer, using the AI, do if it reveals that someone is male yet the subject says they are female? We are not well-prepared to deal with the ethics of such new knowledge.

Everyone is an AI designer now—we all have a stake in how these systems are built, how human values are instantiated in machines and how we hold people accountable. The minimum qualification is lived experience. AI design should be measured by how well it enables participation, because AI design, and the experience of using it in practice, often involves redefining boundaries.

That’s what the product is about: human-centered AI design in one place, so that everyone can understand, participate, be valued and design AI that humans want. For more information on Sonder Scheme Studio, our human-centered system for AI design, go here. You can schedule a demo with us here.


This week, a selection of thoughts on why videoconferencing sucks.

  • From the Chronicle of Higher Education: “I think the exhaustion is not technological fatigue,” Petriglieri says. “It’s compassion fatigue.”

  • From The Convivial Society: “What all of this amounts to, then, is a physically, cognitively, and emotionally taxing experience for many users as our minds undertake the work of making sense of things under such circumstances. We might think of it as a case of ordinarily unconscious processes operating at max capacity to help us make sense of what we’re experiencing.”

  • From The Conversation: “There are no longer two consciousnesses” in a moment of locked eye contact, “but two mutually enfolding glances.”

  • What Facebook is doing in response with Messenger Rooms, via The Verge. Plus what Zuckerberg thinks about videoconference fatigue, courtesy of Casey Newton’s newsletter: “I think some of this also is just about the social dynamics. I get a headache when I sit in the office—or when I used to sit in the office, I guess, before all of this—scheduled minute to minute throughout the day, because I didn’t have time to take a break or think. I think that some people are having that reaction now, where you’re just on videoconferences all day long. But that’s not because you’re on a videoconference all day long, it’s because you’re in meetings all day long, back to back. So I think a lot of this is more about the social dynamics than it is just about the technology.” — Mark Zuckerberg

    Well, yeah.

Have a great week, and to my subscribers in NZ—enjoy level 3. We here in Oregon are very envious, in so many ways.

 
