DK_en 2x06 - Thoughts for the summer break

Episode first aired on 27 July 2023. Listen on Spreaker.com.

Hello everyone. You're on Runtime, the Geek Radio, and this is DataKnightmare, where the algorithmic is political.

It's been a tough year, but I didn't want to take the summer break without first leaving you with something to think about.

A few days ago, one of you on Mastodon pointed me to a paper, Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence, by Shaked Noy and Whitney Zhang.

Obviously, it wasn't one of those "hey, this article is interesting" kind of tips, but more of the "you're so wrong, AI does work, nyah nyah" kind.

When your finger points to the moon, plenty of people are willing to go out of their way to discuss the quality of your manicure.
Anyway, the paper is interesting, so thank you.

Mind you, there are a few things that could be discussed, but at least it came out in Science through a peer-review process, not the usual slimy propaganda.

It is, it must be said, also a clever paper, in the sense that it puts ChatGPT to the test where it really works, i.e. in text generation, and not, say, in answering arbitrary questions or in dialogue, which, curiously enough, were the main features for which it was initially proposed as absolutely revolutionary.

But the important thing these days is to pass as revolutionary, no matter whether with the idea that launched the product or with something else found along the way.

I, for one, have always scoffed at the six-month revolution in IT, but lately, technology does propose revolutions for the attention span of a goldfish.

Anyway, the idea behind the paper is a classic one.

We take people who write texts for a living: managers, HR professionals, data analysts, marketers, consultants, grant writers.

And we give them tasks. A company-wide email. A description of a code notebook, without the code. A press release. An email about the risks and benefits of an investment in China. A cover letter for a grant proposal.

All texts of 400 words, except the cover letter, which is 500.
Then we make two groups.

Both groups do a first test, which is the same for everyone.
And then a second test, where half of the people use ChatGPT and the other half don't. And we see what happens.

The paper asks two simple questions.

  1. Can ChatGPT improve productivity?
  2. And does ChatGPT have different effects on less skilled workers than on more skilled workers?

And the answers, we must say, are clear.

With ChatGPT, the average level of texts improves and production times decrease.

So we are entitled to say that productivity improves.

Moreover, with ChatGPT, the difference between the abilities of individuals is greatly reduced.
Now, that the use of an automatic text generator can reduce production time seems pretty obvious to me, but experimental proof is welcome.

We now have evidence that if someone else writes the text for you, you save time. Okay.

But of course, the novelty here lies in the fact that the software can be that someone, and can replicate, or even improve on, the output of a human whose job it is to produce it.

With regard to raising the quality of output, however, the issue is more complicated.

It is true that all the tasks assigned are entirely realistic
and were recognized as such by the test's participants. The problem, if anything, is that these are tasks that everyone would like to see disappear forever. Texts, yes, but the kind of texts that no one would ever want to have to write.

Because they are completely useless. Purely functional communication. The text equivalent of a road sign.

Here, technological progress should cut the useless, not make it cheaper to produce.

But this is another matter and concerns how haphazardly applied innovation often serves to crystallize processes and practices rather than make them evolve. One finds oneself multiplying rather than eliminating things that are patently unnecessary only because technology makes them cheaper.

We all know that digitization in the private, as in the public sector, has almost always simply replicated existing processes and roles. But that's not what I wanted to talk about today.

I wanted to talk about generative AI.

Because haphazard innovation is exactly what we are talking about here.

Let us take the paper's conclusions at face value. Let's assume that with ChatGPT texts are produced faster and that the gap in ability between authors is greatly reduced.

Now, one does not need a PhD in statistics to realize that software that writes press and company releases, cover letters and code descriptions well does not necessarily write other types of texts equally well. So the results of the paper should not be immediately generalized.

Maybe we will discover that ChatGPT and company are capable of producing quality texts in the majority of contexts. But it seems obvious to me that, well before then, the so-called market will completely ignore the use cases where ChatGPT does well and will immediately jump to a conclusion it likes. Namely, that for productivity's sake, from tomorrow, texts must be written with ChatGPT. Certainly under human review, as long as we don't waste too much time splitting hairs.

Here, I would like to say that this is my point. But I would be lying.

It's the point that Weizenbaum made 50 years ago.

We must not ask whether a technology can do a certain thing. We must ask whether it should do it.

We are about to encounter something very similar to the perverse consequences of Goodhart's law: any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.

Having trained so-called AIs predominantly on content available on the internet means that AIs know how to write like a piecework marketeer because that is the prevailing content on the net.
Marketing, soulless company reports and functional communication.

Yesterday, we taught machines how humans write, using as our example the occasions when humans write like machines.

The danger I see is that the various ChatGPTs will become the benchmark so that from the day after tomorrow humans will have to write like machines in order not to lose the competition with machines that write like humans that write like machines.

Even in the very paper we mentioned, the authors point out that, in the group under scrutiny, the vast majority of participants merely copied and pasted the content proposed by ChatGPT, with minimal edits. This is exactly what will happen once ChatGPT and company are brought into business processes.

More texts will be produced in less time, all of a quality sufficient to pass management scrutiny, and the human contribution will be that of an assistant. A career that anyone would certainly aspire to, and one that promises to be full of stimuli and challenges.

This is not the only problem I see.

Writing is, without doubt, the most fundamental invention of civilization. What happens to civilization if writing becomes something that machines do for us?

Let us reason while we still can.

ChatGPT can make the slides for the next meeting. But it can also produce the handouts for the next course. Just as it can write the next manual, the next textbook, the next policy, the next report. And with an average programmer, ChatGPT can replace a senior programmer.

Mind you, I am not saying that this is what I think, because it's not. I am saying that this is the message that is getting through. I am saying that this is what all the managers in the world are thinking right now. Unless there is fundamental opposition to, and strong restrictions on, the introduction of text generators into business processes, simple economic pressure will determine their absolute prevalence.

Sure, I can write a 5-pager much better than ChatGPT. But ChatGPT can do it in the next 5 minutes, whereas I would need a few days. With ChatGPT, maybe I can get it down to half a day. Provided I accept a text I would, on no account, call mine. Especially since management culture already insists on storytelling rather than on facts and analysis to reason upon.

An excellent programmer may perhaps code 5 times as well.
But an average programmer costs much less. And who cares about performance these days if processors will be faster next year anyway? There is no competition.

Another thing I wonder.

What kind of progress is it that robs human beings of the creative sphere, the direct expression of their ideas? When these kinds of questions are raised, there are always those who will tell you to change or die and remind you that you are a Luddite.
Which does not offend me in the slightest.

In this case I am proudly a Luddite. Ludd and his people, you see, were not against mechanical looms or against progress. They were against employers who used power looms as a pretext to replace experienced workers with cheaper ones, driving wages down and creating unemployment.

When they tell us that tests show how ChatGPT reduces the difference in product quality between more and less experienced workers, what do we think it means? That the less experienced worker will therefore be paid more? Or that the more experienced worker will have to do even more to keep the same pay?
Let me phrase it differently.

Will the equalizing power of the ChatGPT of the day level things upwards, i.e. more money for the less skilled, or downwards, i.e. more money for the owners of the technology? Or am I misunderstanding completely?

Is the tech sector really proposing, then, that we will live in a utopia, where a fat paycheck suddenly arrives without working, because machines do everything for us, and we can finally devote our lives to philosophy, painting and travel?

It doesn't seem so. It sure doesn't seem so to me.

For two centuries machines have been supposed to liberate us from the slavery of work. And we are still here, working 40 hours a week and being thankful when we do have a job. Those who can afford to buy the technology, on the other hand, make all the money in the world.

Between 1979 and 2021, productivity grew by 64.6%.
Wages grew by 17.3%. That is, productivity grew more than three and a half times as much as wages did.

I kind of pity the apologists for this model of progress.
They don't think of themselves as poor souls living off their work like everyone else. They see themselves as temporarily embarrassed millionaires. Until the next fashionable technology comes for their work.

One last thing, which is absolutely not secondary.

When we say that ChatGPT wrote a persuasive text, it is us who see persuasiveness in it. For ChatGPT there is no difference between writing a parody of Pericles' speech to the Athenians and reciting a nursery rhyme. ChatGPT has no intentionality or thought. It is a statistical engine.

But for us, text is the expression of another mind. It was born to be exactly that. It always has been.

And we have always reacted to the ideas that the text conveyed, because we empathized with the minds that had conceived those ideas.
Think of the Vedas, the Tale of Thermopylae, the Speech of Pericles, the Gospels. Think about the Declaration of Independence of the United States of America.
Think of Das Kapital, Mein Kampf, Mao's Little Red Book.

Entire peoples have been moved, for better or worse, by the ideas behind certain texts. But suddenly, our world will be invaded by texts behind which there is no mind, no thought. Snippets of sentences that someone has said once, perhaps thinking of something else entirely. Homogenized and processed to take the form most statistically similar to something real. Taking as real the flotsam and jetsam that is gathered on the net.

We think it's meat and it's a chicken nugget.

What happens when we suddenly can no longer assume that a text is really someone else's thought? Do we lose all faith in the text, beyond its literal content? Or will the force of habit drive us to attribute consciousness and intentionality to an instrument that has none of its own, but which can in an absolutely invisible way convey those of its master?

We are used to the fact that generative AI is fine-tuned to filter out inappropriate content. And so far, by and large, there has been little need to discuss what counts as inappropriate.

Violence, sexism, racism.
But history teaches us that my inappropriate is someone else's sacrosanct. Just as my terrorist is their freedom fighter.

That is why democracies exist, to bring these kinds of decisions into the light so that everyone has a say and the values shared by the majority can prevail.

But generative AIs are not tools of democracies. They are tools of corporations that by definition are not democracies and that exclusively pursue their own profit and the particular fixations of their owners, called visions by apologists.

Also, their decision process is completely opaque.

Are we sure we can trust their decision in this case? Even better, are we sure that certain decisions are theirs to make, especially since they are headed by man-children with a Napoleon complex?

I obviously do not have the answers.

But I want you to at least begin to hear some questions and a few off-key notes, amidst this Hosanna chorus of sycophants, pen-pushers, money-grubbers and poor chaps who think they are temporarily embarrassed millionaires.

Talk to you again from 8 September.

Have a good summer.

Be well.

The real victory is to outlive the bastards.