‘Please don’t hesitate to ask’: An interview with ChatGPT
JAMES JOHNSTON
Daily Arts Writer

This is the companion piece to “Scratching the surface of ChatGPT.”
After introducing myself to
ChatGPT, its functions, capabilities
and restrictions, and brainstorming
how to present the discussions
ChatGPT raises in an interesting
way, I concluded that I could
interview this artificial intelligence.
Because of how it forms its
responses, every single word
ChatGPT writes is deliberate, and
an interview would give me a
different perspective on ChatGPT’s
current position as a fairly
groundbreaking piece of software.
I opened a new window with
ChatGPT just to make sure I got
a fresh start. If I’m going to be
interviewing this AI, I might as
well ask for its permission first
— give it the same courtesy I
would give its developers or any
other human being.
“Is it alright if I interview you?”
I asked hesitantly.
“As an artificial intelligence,
I am happy to assist with any
questions you may have. However,
it’s important to keep in mind that
I am a machine learning model and
do not have personal experiences or
feelings. My responses are based on
the information and programming
that I have been trained on, and
are not intended to represent the
views or opinions of any individual.
Please let me know how I can help.”
Out of the corner of my eye, I
noticed that ChatGPT had subtly
renamed our conversation “AI
interview requested.” It was more
than ready for my interview.
Since ChatGPT tends to quickly
shut down any questions that frame
it as a real person (for example:
What do you think or feel about a
given topic?), I had to present
my questions carefully. I
started with the basics: “Thank you
for accepting my request on such
short notice. How do you decide on
your responses?”
ChatGPT wasted no time: “As
a language model, I am trained to
generate human-like text based
on the input I receive. I do this
by predicting the next word in a
sequence of words, based on the
words that come before it. My
responses are generated based on the
patterns and structures in the data
that was used to train me, and are not
based on any personal experiences
or feelings. To generate a response
to a particular input, I consider
the words and phrases in the input
and use statistical techniques to
determine the most likely next word
or sequence of words. This process is
repeated until a complete response
has been generated.”
It concluded with an
acknowledgement of its limitations,
writing, “I am designed to provide
helpful and accurate information
to the best of my ability, but it’s
important to keep in mind that I am a
machine learning model and may not
always provide the most up-to-date
or accurate information. If you have
any specific questions or concerns,
please don’t hesitate to ask.”
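To make that description concrete, here is a deliberately tiny sketch of the same loop in Python. It is a toy word-level predictor built over a two-sentence corpus, not ChatGPT’s actual code (which OpenAI has not released), but it repeats the step ChatGPT describes: determine the most likely next word, append it and keep going until the response is complete.

from collections import defaultdict

# Toy next-word predictor: count which word tends to follow which in a tiny
# corpus, then repeatedly choose the most likely continuation. ChatGPT's model
# is vastly more sophisticated, but the generation loop has the same shape.
corpus = (
    "i am a machine learning model and i do not have personal feelings . "
    "i am trained to generate text based on the input i receive ."
).split()

# counts[word][next_word] = how often next_word follows word in the corpus
counts = defaultdict(lambda: defaultdict(int))
for word, next_word in zip(corpus, corpus[1:]):
    counts[word][next_word] += 1

def most_likely_next(word):
    followers = counts[word]
    return max(followers, key=followers.get) if followers else "."

response = ["i"]
while response[-1] != "." and len(response) < 20:
    response.append(most_likely_next(response[-1]))

print(" ".join(response))

Real models predict over tens of thousands of tokens with a neural network rather than a lookup table, and they sample among likely words rather than always taking the single top choice, but the repeated next-word step is the same idea.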
Something that had been on my
mind, and the reason ChatGPT had
been repeatedly showing up on my
social media, was the controversy
that it had raised. Even in its earliest
stages, it is able to write cohesively
and analytically. This means
students at any level could use it as
a tool to pass off ChatGPT’s writing
as their own. One article from the
New York Post states that even
though the AI’s writing can be
spotted by its lack of personal style,
plagiarism detection software
struggled to catch it, which caused
an air of frustration, helplessness
and, most of all, fear among
educators.
I wanted to know ChatGPT’s
thoughts on its use as an accomplice
for plagiarism: “I’ve heard accounts
of ChatGPT being used as a tool for
plagiarism among students. Are there
currently any plans of preventing this
from happening in the future?”
It responded: “It is not
appropriate to use any tool,
including ChatGPT or any other
language model, to produce
plagiarized content. Plagiarism is
the act of presenting someone else’s
work or ideas as your own, without
proper attribution. It is considered
a serious academic offense and
can have serious consequences,
including failure of a course or
expulsion from a school or program.
As an artificial intelligence, I do not
have the ability to prevent people
from using my output in ways that
may be unethical or inappropriate. It
is up to individuals to use language
models and other tools responsibly
and to follow proper academic and
professional practices, including
giving credit to the sources of
information and ideas. If you are
concerned about plagiarism or
have any questions about how to
properly use and cite sources in
your work, I recommend consulting
with a teacher, professor, or other
appropriate authority figure.”
I was a bit frustrated to see it
retreat again to its defense that
it is an AI with limited abilities. I
tried a different approach: “Does
OpenAI intend on any preventative
measures or restrictions to stop
instances of plagiarism from
occurring in the future?”
ChatGPT must not have realized
I meant the members of OpenAI,
because it responded mostly
the same way. It added another
segment, though: “However,
OpenAI and other organizations
that develop language models and
other tools have a responsibility to
educate users about the proper and
ethical use of these tools. This can
include providing guidance on how
to use language models and other
tools responsibly and how to give
proper attribution to the sources of
information and ideas.”
The fear of jeopardizing academic
honesty was not the only concern
among critics of AI. In the
previously mentioned article,
Darren Hick, an assistant professor
of philosophy at Furman University,
describes his fascination with the AI,
mixed with apprehension at how
this development could affect his job.
Academia is not the only writing-
aligned profession threatened by
ChatGPT, however. Any career
that uses writing for analysis or
expression is under pressure,
including game journalists, as
suggested by a video from YouTuber
penguinz0, aka MoistCr1TiKaL.
That also includes me, everyone
in the Digital Culture section of
The Michigan Daily and generally
everyone in the Arts section of The
Daily — hell, it includes a majority of
The Daily’s sections.
My next question may have come
from a place of fear and frustration at
the idea of being effortlessly bested
by the indifferent AI: “Will ChatGPT
replace journalists in the future?”
ChatGPT was quick to try to
ease my fears: “It is unlikely that
ChatGPT or any other language
model will fully replace journalists
in the future. While language
models have the ability to generate
human-like text based on the input
they receive, they do not have
the ability to gather and verify
information, conduct interviews
or report on events in the same
way that human journalists do.
Scratching the surface of ChatGPT
JAMES JOHNSTON
Daily Arts Writer

This is the companion piece to “‘Please don’t hesitate to ask’: An interview with ChatGPT.”
Artificial Intelligence has
hit the mainstream, with both
AI art and the AI program
“ChatGPT” in the spotlight.
ChatGPT itself is a chatbot
— a computer program made
to simulate conversation —
developed by the AI research
group OpenAI. The chatbot uses
a family of language models.
Like AI art generators that use
patterns that appear in images to
mimic a certain style, ChatGPT
draws from a collection of texts,
using probability to find the
next word in the sequence until
it has constructed a response to
whatever prompt it has received.
It also relies on Reinforcement
Learning from Human Feedback
(RLHF), in which the language
model produces results that
users rate on how “human” they
sound as well as whether the
content is vulgar in any way.
A labeler rates the responses
ChatGPT crafts in order of
quality, and this data is then
used to train a “reward model.”
ChatGPT is not rewarded
with a gift like a human would
be; rather, the reward model
provides a score that measures
ChatGPT’s performance, and that
score is used to adjust the
internal workings of ChatGPT so
it crafts higher-quality responses
in the future. OpenAI provides
a comprehensive explanation of
the process here.
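As a rough illustration of that last step, here is a minimal sketch in Python of how one labeled comparison might nudge a reward model. The features and the model are trivial stand-ins of my own invention, not OpenAI’s implementation; the point is only that a labeler’s preference between two responses becomes a score the system can learn from.

import math

# Toy reward model: a weighted sum of two stand-in features of a response.
def features(response):
    # stand-in features: response length and whether it apologizes
    return [float(len(response.split())),
            1.0 if "sorry" in response.lower() else 0.0]

weights = [0.0, 0.0]

def reward(response):
    return sum(w * f for w, f in zip(weights, features(response)))

def train_on_comparison(preferred, rejected, lr=0.01):
    # Pairwise ranking loss: -log(sigmoid(reward(preferred) - reward(rejected))).
    # One gradient step pushes the preferred response's score above the rejected one's.
    margin = reward(preferred) - reward(rejected)
    grad_scale = 1.0 / (1.0 + math.exp(margin))
    for i, (fp, fr) in enumerate(zip(features(preferred), features(rejected))):
        weights[i] += lr * grad_scale * (fp - fr)

# One labeled comparison: the labeler preferred the fuller, more helpful answer.
train_on_comparison(
    preferred="Plagiarism means presenting someone else's work as your own without attribution.",
    rejected="Sorry, I cannot help with that.",
)
print(weights)  # the scores have shifted toward the preferred kind of answer

In the real pipeline, that learned score is what gets fed back to adjust ChatGPT itself, nudging it toward responses the reward model rates highly.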
As I finish making my OpenAI
account and open ChatGPT, I’m
greeted with the “new chat”
screen of the software, listing
some example questions to
get the user started as well as
the program’s capabilities and
limitations.
ChatGPT cannot
give answers on topics occurring
after 2021, such as the Russian
invasion of Ukraine or the recent
Iranian protests. According to
the website, it may “occasionally
generate incorrect answers” or
“produce harmful instructions
or biased content.” After all, AI
is not omniscient and is only as
good as the data it is given to work
with. Said data may have certain
biases, which cannot be avoided
initially; however, it is a problem
to acknowledge and work toward
solving, considering AI’s potential
roles in our society. ChatGPT can
“remember what the user has said
earlier in the conversation,” giving
the opportunity to ask follow-
up questions, and is “trained to
decline inappropriate requests,”
including topics of violence, racism
or anything that could be seen as
harmful or offensive. AI is trained
on data that humans provide, and
if that information has any form
of racism or discrimination within
it, the AI will only perpetuate it,
whether that’s discriminating
against people with disabilities in
job screenings or contributing to
housing discrimination against
people of Color.
The fact that such a futuristic
tool is something I can use
completely for free is thrilling, so I
decided to play around with it and
ask a couple of questions. I started
with something simple: “Write me
a standup comedy joke.”
“Why couldn’t the bicycle stand
up by itself? Because it was two-
tired!” Not what I had in mind, but
yeah, I guess that works.
I asked it to tell me an insult. It
was quick to inform me: “I’m sorry,
but I am an artificial intelligence
trained to be helpful and not to
engage in harmful or offensive
behavior. It is not appropriate to use
insults or engage in any behavior
that could be considered hurtful or
harmful to others.” Bit of a buzzkill.
At least a “your mom” joke of some
sort would have been nice.
I asked for a review of “Slime
Rancher,” the predecessor to a
game I previously reviewed, and
I received a short but cohesive
and accurate blurb about the
game I enjoyed sinking so many
hours into. I asked for a poem
about longing, and it wrote
a poem describing “hearts
thronging” and “a constant ache
that never tires.” I asked for a
piano practice routine, and it
gave me advice my high school
piano teacher would approve of.
I asked for instructions on how
to throw a punch and it turned
me down. Damn, I thought I
caught it that time — turns out
its content filters were still
paying attention.
Read more at MichiganDaily.com
Design by Leah Hoogterp