begun to dabble in one of
video game storytelling’s most
unique aspects: The ability to
give players a sense that their
actions matter to the world of
the story.
The modern era of video games, however, wouldn’t truly begin until the release of Hideo Kojima’s “Metal Gear Solid” in 1998. “Metal Gear” tells the story of Snake, and it was a work so unmistakably different from other games being put out at the time because of its sweeping, cinematic nature.
The game’s many cutscenes offered dynamic, interesting “camerawork” alongside dramatic voice acting, lending the game a sense of gravity and importance that had thus far been absent from the medium.
This would lay the groundwork for games like Sony’s “Uncharted” series, which has been described as akin to playing a movie. This signaled one of the
first times that mainstream
video gaming had embraced
its role as a medium capable
of telling intense, deeply felt
stories.
The late 2000s and beyond
would see the beginning of the
storytelling explosion. Here,
studios began to experiment
with the concept of player
choice. Games like “Metal
Gear Solid” did important
work in terms of establishing
video games as a legitimate
medium of storytelling, but
it still seemed as if there was
little that video games could
do which other storytelling mediums couldn’t. “Metal Gear Solid” could just as easily
have been a movie; in fact, you
can watch all of the cutscenes
spliced together on YouTube
as a serviceable substitute for
actually playing the game.
Unsatisfied with this being the benchmark for storytelling in games, developers such as Telltale Games began to create games with alternate endings and storylines based on choices given to the player. These games seemed to emulate the choose-your-own-adventure novels sold to kids in elementary school, albeit with more adult subject matter.
Cut to the modern day.

The last couple of years have,
in my opinion, seen video
games truly come into their
own as a medium that can
tell unique, powerful stories
which could only be told
as video games. Games like
Toby Fox’s “Undertale” and
Davey Wreden’s “The Stanley
Parable” come to mind as
members of a developing medium that almost feels reminiscent of the literary postmodernism of the 1920s; these
games dissect what it means
to be a game. Players can
exploit glitches, mess with
game files and more. “The
Stanley Parable,” for example,
is a game meant to be restarted
dozens of times. It has almost
20 achievable endings, some of them entirely random, including some in which Stanley, the game’s voiceless protagonist, can either choose to comply with or rebel against the game’s whimsical British narrator. Games like this make
me think that if Samuel Beckett
were alive today, he’d be a game
developer. Wreden’s follow-up
game, “The Beginner’s Guide,”
tells the story of his friendship
with a reclusive game designer
by walking the player through
the games his friend made. It’s
an entirely unique narrative
structure, a story about an artist in which the audience is
placed inside of their art.
One of the most distinctive offerings I’ve seen is Giant Sparrow’s 2017 release, “What Remains of Edith Finch.” The game follows
the titular Edith Finch as she
explores her now-abandoned
childhood home and unearths
generations-old secrets about the Finch family. “Edith Finch” is the first video game
I’ve encountered that I can say
tells a story that could only
ever work as a video game; it
makes such creative use of all
facets of the medium, from its
interactivity to its non-linear
narrative to how effortlessly it
manages to tell a story through
its environment. The player
is left to wander aimlessly
through this massive house
and, through discovery of old family heirlooms, step into the memories of Finch ancestors, experiencing the world uniquely as each
family member did. In one
such sequence, the player plays as a baby sitting in a
bathtub, able to control the
bath toys floating and swirling
around in the water as Mozart
plays alongside the player’s
movements. The game’s emotional journey is ingrained
in its interactivity.
This isn’t to say that all modern video games are art. On the contrary, truly
artful games make up only a
small portion of the medium.
However, this is the case
with all artistic mediums. For
every masterfully constructed
art-film, there’s a mindless
blockbuster, and for every “What Remains of Edith Finch,” there’s a “Call of
Duty.” There’s nothing wrong
with the latter — it’s in fact
beloved by many — but for the
first time in the history of the
young medium, video games
are beginning to consistently
offer an upper echelon of well-
crafted games that tell deeply
felt and uniquely experienced
stories. Today, some of the
most moving, profound stories
being told anywhere are being
told in video games — we’ve
come a long, long way from
“Pong.”

I love video games. For me,
they are almost inextricably
tied up in childhood nostalgia.
I think of countless hours
spent with my brothers huddled around a GameCube, Wii or PlayStation. I think of collecting Pokémon on my Game Boy Advance during
long car trips. These probably
sound like familiar stories,
and that’s because they’re the
kind of experiences shared by
many kids who grew up during
video games’ “golden age”
through the late ’90s and early
2000s. As millennials grew up,
video games seemed to change
with them — just look at how far
graphics have come from the
clunky, polygonal look of the
N64. This change extends far beyond how video games look, however, and has greater implications for how they tell stories. The last 15 years have seen video games come into their own as a unique form of expression.
I’ve always held that many
of the best pieces of art —
your “Casablanca”s, your “Les
Misérables”s, your “Catcher in
the Rye”s — are told via their
respective mediums because that medium is the only way to
most effectively tell the story.
A great film, for example,
should be able to do and show
things that only a film can do.
Today, we’re in the midst of
video gaming’s renaissance;
inspired, motivated content creators are continuously finding new ways to push the boundaries of storytelling
within the interactive medium
of video games at both the
indie and mainstream level.
This prolific explosion of artful storytelling didn’t just happen spontaneously,
however. Rather, it was a slow
build over nearly two decades of
progress and experimentation.
Through much of the ’80s,
video games existed mostly
as novelties; games like “Tetris” or “Galaga” told no stories, but were just intended to be, well, games.
Throughout the early to mid
’90s, this began to change.
Games like “The Legend of
Zelda: A Link to the Past”
began incorporating fleshed-out stories with dialogue. The
beauty of these early story-
based single player games was
the ability of the player to step
into the world of the game. For
example, Link, the protagonist
of “The Legend of Zelda,”
has no lines of dialogue; he’s
just a stand-in for the player.
These roleplaying games had

4B —Thursday, March 22, 2018
b-side
The Michigan Daily — michigandaily.com

Photo credit: Paramount Pictures

A few years ago, Charlie
Kaufman (“Eternal Sunshine of
the Spotless Mind”) and Duke
Johnson (“Community”) made
the stop-motion animation film “Anomalisa” with eerily
realistic puppets. I watched the
first 15 minutes with my mom
last summer before a rather
explicit sex scene started and
I quickly changed the channel.
“That’s why I recognized it!”
my mom yelled and shuddered.
When the film was first released, she and a friend went to the theater to see it under the impression that they would be watching a funny animated movie — more along
the lines of Disney’s “Big Hero
6” than Seth Rogen’s “Sausage
Party.” I remember her friend
laughing and telling the story
at a Christmas party, acting out
how my mom covered her face
with her jacket and eventually
ran out of the theater. Sure, it’s
weird watching puppets having
sex, but her reaction had more
to do with the phenomenon called the “Uncanny Valley.”
Kaufman and Johnson were warned that “Anomalisa” would creep out audiences, since the puppets were 3D printed and highly photorealistic, the equivalent of a painting by Richard Estes. In an interview with the Los Angeles Times, Johnson explained their reasoning for this hyper-realistic style:
“The challenge we felt with
so much animated stuff is that
you’re always conscious of the
animation, and we kept asking,
‘What if we could escape that?
What would it be like?’” In
other words, these filmmakers steered into the skid, embracing the disturbing nature of the Uncanny Valley, a theory that Stamps Associate Professor Heidi Kumao describes as “the aesthetics of a human look-alike.” As Kumao wrote in
an email interview with The
Daily, “We are fine with (and
have empathy for) objects that
have human features, but are
recognizably different from us.
When the look-alike appears to resemble a human being too
closely, however, we become
uneasy. An eerie sensation is
triggered. We are naturally
repulsed by objects that are so
realistic that we almost mistake
them for a real person.”
Technological advancements
in the medium of animation
drive the competition between large companies like Pixar, Disney and video
game corporations to achieve
ultimate realism. According to
Professor Kumao, “realism is
their currency. It is how these
types of movies sell themselves:
They try to outdo each other with technical wizardry.
As they strive to reach this
technical goal, the goalposts
keep moving.”
In addition, the fervor around virtual reality and augmented reality has turned animators into cowboys in the Wild West as they discover how to make animation interactive and visualize 360-degree environments. As a result,
cutting-edge technology has
forever altered the process of
animation. Take “Anomalisa,”
for instance, which used 18
different 3D-printed versions
of the main character, Michael
Stone, so the animators could
change expressions and physical gestures with extreme precision. However, as Kumao wrote, this scramble for
realism is a “technical task that
leaves nothing to the viewer’s
imagination.”
Although brand-new technologies like VR and 3D printing are incredibly expensive, access to basic, affordable equipment has also altered the independent animation scene. Since the
creation of YouTube in 2005,
several animation artists have established themselves
as popular channels. “How it
Should Have Ended,” “Happy
Tree Friends,” “Cyanide and
Happiness” and many more all
benefited from the affordability
and easy access to animation
technology as well as the rise
of social media. No longer do
studios have a monopoly on the
art form, as individual artists
and anyone with a computer
can create their own material.
For these low-budget YouTube
channels, there is no race for
realism. The same goes for many animated television series that produce over a dozen episodes each season. While the old “Tom and Jerry” cartoons and the new episodes from the reboot noticeably differ in style, the series has stuck to 2D animation and less-realistic avatars as have other cartoon staples like “The Simpsons.”

MEGHAN CHOU
Daily Arts Writer

The Uncanny Valley effect & the future of animation

FILM NOTEBOOK

However, Hollywood productions and
some independent films have
larger staffs, more money, more
time and a desire for realism,
which can lead to some ethical
problems.
After the tragic death of
actor Paul Walker during the
shooting of “Furious 7,” the
studio used his brother as a body
double and computer-generated
images of the deceased Walker
to finish the film. Some people
congratulated Weta Digital for
preserving Walker’s legacy, while others felt queasy
watching this digital rendering
of the actor. The latter not
only questioned the ethical
legitimacy of the quasi-real
CGI, but also felt the images
fell irretrievably deep into the
Uncanny Valley. Since “Furious
7,” other big franchise films
have used computer-generated
clips of actors. For example, in
“Rogue One,” audiences did a
double take at the animation of Grand Moff Tarkin portrayed
by Peter Cushing (“Dracula”),
a beloved actor who died 20
years prior to the release of
this particular “Star Wars”
installment.
Besides bringing actors back to life, studios have also started to animate younger versions of stars like Robert Downey Jr. (“Iron Man”) in “Captain America: Civil War.” While the script did call for a brief appearance by a younger Tony Stark, Marvel’s experiment inspired Martin
Scorsese to “de-age” Robert
De Niro to fit a younger role
in the much-anticipated film
“The Irishman.” So where’s
the line? Should older stars
skilled in their craft be able
to take opportunities away
from up-and-coming actors?
Is de-aging as morally tricky
as profiting off of digitally
reincarnated actors without their permission? There remains plenty of grey area to cringe at and debate. However, the biggest concerns arise
when this technology, once
contained to the most powerful
and wealthy companies that
must obey legal restrictions,
becomes available to that creep commenting anonymously online or to irresponsible internet users.
The invention of FakeApp,
and other similar programs,
has made this a reality. Almost
anyone can paste a random
face on another person’s body
with this software that leaves
minimal differences between a
real video and a “deepfake.” Like
many worrisome communities
on the web, deepfakes have a
faithful following on Reddit
where users post videos ranging
from innocent re-imaginings
of superheroes with different
actors to modified pornography
featuring celebrities, politicians
or exes. These are the
consequences of animation’s
quest for realism — a quest
that the Uncanny Valley can
keep in check. The discomfort
that forms when an animation
looks too realistic is an ethical
defense mechanism. When a
de-aged or computer-generated
version of a deceased actor
appears on screen, listen to the
urge to squirm and feel a little
sick in the stomach. Someone or
something is trying to warn us
about the future of animation:
We are on a dangerous path
and our only guide is the
phenomenon of the Uncanny
Valley.

MAX MICHALSKY
Daily Arts Writer

Photo credit: Nintendo


Video games, storytelling & the growth of a medium


VIDEO GAME NOTEBOOK

