Move Fast & Fix Things

Prototyping Purpose in the Age of AI

Google image search for AI. I dare you. It’s like diving into the reptile brain of a ’90s teenage gamer.

Is this the best we can do?


So if this is the aesthetic of today’s AI, what might AI’s future morals look like?

To answer this I want to zoom back to the dawn of social media, way before AI was on-trend and even before the Like. You mean 1904? It feels like that, but no, I mean the relatively recent past of 2004.

It’s the year when Facebook was created by the king of nerds himself, and the transformation of today’s society really began. A small team of West Coast engineers building, breaking and hacking their way to the mainstream, then being lifted up by the whirlwind of Venture Capital to race towards a vast user base and sticky-as-hell tech innovation. The ’net’s Wild West.

Inventing the Like button feels somewhat like inventing gravity from today’s vantage point


The mantra of the time, popular across the emerging global startup scene, can be characterised by Facebook’s “Move Fast & Break Things”, which was turned into screen-printed posters (in their Analog Research Lab, of course) and plastered across the walls of their offices, providing a visionary guiding principle for all workers to live by.


The problem is, fast forward a decade or two, and a lot of things are, well, broken. And not just for us users. The Cambridge Analytica scandal marked a turning point in our relationship with the way these corporations and tools are constructed. In 2017 our trust in the biggest network of humans on the planet plummeted, and a global existential crisis began.

The data don’t lie: The morality of the social titan is under question


By 2020, even the Silicon Valley brat pack was publicly shaming Facebook:

Facebook is the new cigarettes: It is bad for you, it is addictive, they do advertising that is not true, they should be regulated very aggressively.
— Marc Benioff, Founder of Salesforce, January 2020

And this week a major new study finds that “the techlash is real, widespread, and bipartisan…From concerns about the spread of misinformation to election interference and data privacy, we’ve documented the deep pessimism of folks across the political spectrum who believe tech companies have too much power — and that they do more harm than good.”

The #techlash is happening, people!

 

So, let’s think about tomorrow

If Facebook is the new cigarettes and next-gen scaled tech is not only invented in Silicon Valley but by artificial algorithms, how will the moral compass of the future be guided towards doing us good, rather than harm?

As Artificial Intelligence advances, the opportunities to scale smart, information-fuelled experiences grow — and yes, unfortunately, so too do the ethical and existential conundrums. The complexity deepens. Which is why we at Another Tomorrow are so fascinated by it. The second Tomorrow Club was on the theme of Purpose, where we brought Sweden’s superstar Agnes Stenbom on stage to share Schibsted’s perspective on responsible AI.

Impressions from Tomorrow Club #2 on Purpose, Responsible AI and Data


It was a glorious evening of existential questions, delicious debate and critical pizza eating that left me not only with nourishing insight but with tantalising questions that needed to be articulated into actions. So, after a weekend’s wrangling, here they are:

 

4 actions we should take to ensure our future is alive with purpose.

Back ache and burnout surely aren’t the height of humanity?


1: Create Systems Steeped in Play & Perspectives

First off, let’s set the operating system.

When it comes to ethics and AI, it is tempting to hard-code ‘true purpose’ into our future machines, but just as the world and us humans shift, change and evolve, our ways of understanding and implementing values should too. So if there’s anything we need to embed into the DNA of how we work with AI, it’s an experimental, agile approach to responsibility.

We develop Human Intelligence by learning and growing through playing and failing, meeting new perspectives and ideas, and developing our sense of purpose along the way. So why should Artificial Intelligence be any different? Let’s create operating systems that maximise play and engage diverse perspectives to create inclusive and agile human-oriented value systems.

2: Design Fiction, Speculative Design & Science Fiction Prototyping

Now for the aesthetic.

Remember the AI Google image search? If we’re going to lead society into a humanity-centred techno future, we need to fix our visual visioning.

Emerging fields grounded in future-thinking may well be the design-based answer. Since the design super-couple Dunne + Raby took over Design Interactions at the RCA and a special SXSW event took place, there have been exciting experiments in the world of speculative design, ranging from the captivating to the absurd.

For us at Another Tomorrow, prototyping as a tool for collaborative sense-making is key: not just for aligning stakeholders around joint ideas, but for edging people into bold futures with visual and narrative experiences that connect the gut and the mind.

Extrapolation Factory develop ‘hypothetical future props’ as core to their speculative approach


Regardless of the aesthetic approach you choose, let’s please steer clear of defaulting to the Matrix, so as to make way for a new collective vision of what our future might look like.

3: Humanize AI + All Future Tech

Let us leverage technology to become more human, not less.

This doesn’t just mean jumping on the Human Centred Design bandwagon and running a few workshops, but embracing the weirdness and uniqueness of ourselves and each other. Chief of Team Human, Douglas Rushkoff, has more than a few things to say on this subject — most poignantly for this piece when digging into the courses that The Z Man would have taken had he not fled school and started FB (see What If Zuckerberg Had Stayed In School).

What if he had chosen to study Steven Pinker’s course “The Human Mind,” which covered psychoanalysis, behaviourism, cognitive neuroscience, and evolutionary psychology? Imagine if Zuckerberg had considered human consciousness in an academic or ethical context before setting upon entraining the collective psyche through his news feed algorithms?

Imagine indeed.

But instead he decided to drop out two years early to chase the data.

Let’s not underestimate the simple things that make us humans human.

4: Create Cultures Built on Long-Term Purpose

Perhaps the most scalable thing about what has now become BigTech is the culture it is built upon. Rebel VCs, brogrammers and the growth-obsessed from many walks of life have scaled products and platforms embedded with the Valley vibe to all corners of the globe. But are the ethics it’s built around going to hold humanity together long into the future?


“Culture eats strategy for breakfast” is a Valley-ism that still sticks. Let’s make our own motivational posters imbued with exciting, responsible values. Let’s clean up the mess left by the Break Things mantra, and then scale purpose-driven prototyping all the way to the singularity.

Text by Joe Coppard, Creative Director and Founder, joe@anothertomorrow.io
