43 Comments
Nov 10, 2021 · Liked by Erik Hoel

I think the truth is probably somewhere in between. We've found that each new generation of AI tool (basic GANs to StyleGAN 1/2/3 to CLIP+VQGAN to CLIP Diffusion ++++++) has dramatically improved the power we have to make cooler, better-quality AI artwork (shameless plug - you can see some of our AI artwork at https://www.artaygo.com). But it's also increased the need for humans to be involved on our side to create good content, because it's become less about copying the style of existing works and more about the creative process that yields high-quality new content.

With something like StyleGAN, for example, one would need to source all kinds of content and would be limited to producing art that looked the same. Now, with the advent of language-guided models, it really opens the door to a human having to think of creative prompting to generate unique and original content. I think good artists will have some element of competitive advantage in that they will have a curated private set of prompts and techniques that only they know, which will result in a unique style. It won't all be "Unreal Engine / Artstation / James Gurney" for long.

Also don't forget that these models seem ridiculous in size and complexity, but it wasn't long ago that we thought 64kb was "all the RAM you need". Future generations of models will be enormously powerful but likely more accessible as computing power advances, driving a lot of new potential for digital artists. I'd also expect ease of use to improve: instead of hacking your own code, it may be a much more user-friendly tool. In the meantime, though, I can see the concern of artists needing to learn a skill that is much, much different from the typical artist training regimen.
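A minimal sketch of the human-in-the-loop workflow this comment describes: generate several candidates from a text prompt, score each against the prompt, and keep the best. `generate_image` and `clip_similarity` are hypothetical stand-ins for a real generator (e.g. VQGAN) and a real CLIP text-image similarity call, not any actual library API.

```python
# Sketch of prompt-driven generation with human-style curation: make many
# candidates, score each against the prompt, keep the best one.
# Both functions below are hypothetical stand-ins, not real APIs.
import random

def generate_image(prompt: str, seed: int) -> str:
    """Stand-in generator: returns an opaque candidate identifier."""
    return f"candidate-{seed}-for-{prompt!r}"

def clip_similarity(prompt: str, image: str) -> float:
    """Stand-in scorer: a real version would embed both with CLIP
    and return the cosine similarity of the embeddings."""
    return random.Random(image).random()  # deterministic per candidate

def best_of_n(prompt: str, n: int = 8) -> str:
    """The curation loop: the 'art' is the prompt plus the final pick."""
    candidates = [generate_image(prompt, seed) for seed in range(n)]
    return max(candidates, key=lambda img: clip_similarity(prompt, img))

print(best_of_n("a lighthouse in the style of James Gurney"))
```

The point of the sketch is that the prompt and the selection step are where the human judgment lives; the generator itself is interchangeable.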

Sep 9, 2021 · Liked by Erik Hoel

I think you're underestimating the artisanal aspect of art. People like hand-knitted sweaters and impromptu sketches, live music and personal essays.

Sure, a bunch of graphic designers may lose their jobs (and self-esteem), and the Billboard 100 may be a bunch of pop-derivative GPT-x compositions, but there'll still be room for the humans.

The semantic apocalypse is disturbing insofar as some of these creations seem... uncannily good. But I think the value of art has always been in its artisanal nature, or in being distinctly original. The in-between stuff is just bland. To me, at least. And that's all the neural nets seem capable of... for now.

Sep 10, 2021 · Liked by Erik Hoel

As a Graphic Artist myself, thanks for throwing my life into the dustbin of progress.

Automation has already thrown many people out of work in manufacturing. It always has. Factories that had 500 workers have been automated so they only need 50 workers.

Big tech and big business only want one thing: to do away with any cost of labor.

Due to the computer, my job already went overseas. Due to the computer, newspaper writers are being replaced by A.I.

Reason…so they don’t have to pay another human being.


I don't think the future is bad for all graphic designers, just the boring ones. If your output looks like GPT-generated images, then... it will be replaced by those images.

Also, it'll take a few years (at least 3-5) before we get to the point where AI is good enough to take in prompts and produce ultra-specific outputs, which is what's needed to replace graphic designers, who provide fine control over details.

But I do think the really simple stuff will get automated faster, and rightfully so. You shouldn't have to hire a graphic designer to draw say, a 3D box or something.

Agree about big tech, however; they love pushing down labour costs to bring the already-huge bottom line higher. It wouldn't be so bad if those productivity gains made their way to the rest of society.

author

I agree - this would be basically the best possible outcome given the technology, and it's still a possibility this is indeed the outcome.


Incredible piece. Thank you, if 'thank you' is quite the phrase I want for a series of observations that will probably haunt my dreams tonight!

author

No, thank you! I probably lessened my horror a bit by offloading some onto you.


A great read! Very thought provoking. I also love the pieces of art and writing you have included.

To those who say "AI will never be as great as Picasso" I would say "most artists aren't Picasso". AI will replace those who aren't soon enough. Also, the meaning of "artist" could change from being someone who creates art to someone who uses AI to generate art and the "art" part becomes the inspiration and the tweaking of parameters and selecting the best piece out of the multiple pieces created by the AI.

The main issue, as you have pointed out, isn't so much the progress of the technology, but its ownership by mega corporations. I am hopeful that this will be addressed before too long.

I would argue one point, though: I am pretty sure the idea of consciousness being only a biological phenomenon will be challenged soon enough.

author

Thank you! I agree that consciousness may turn out to not be entirely biological, and therefore implementable digitally. But even if there is the possibility of conscious AIs, it may be that there are also non-conscious AIs, and I think the "Big Question" that may define the future of, well, everything, is whether non-conscious AIs are just as efficient and lifelike and conscious-seeming as actually conscious AIs.


It seems to me we already see the outlines of the answer to this question. A great many things can be done to a superhuman level w/o consciousness: diagnosing cancer in images, driving a car, etc. And there are other things (like art creation, I suspect) where having a perspective, a stance on the work, will be essential. In those second cases an AGI will add depth lacking from unconscious AI.

Sep 8, 2021 · Liked by Erik Hoel

Great pre-breakfast read! Perhaps it is as you say, “a bad counterpoint given the honest need for regulation to preserve human health, human [etc]…” but even bad counterpoint IS a counterpoint, and like a much needed poke for a drowsy drunk, it doesn’t matter if he thinks it’s a stick or a gun, it’s just time to get his ass up.

Sep 8, 2021 · Liked by Erik Hoel

Stop the problem at the inputs, which are the corporations building things like "Jessica" that are only offering an empty shell of the human experience.


Is it fair to say that your article both fascinated and terrified me? As a writer by trade, I am not keen on where this trend is going. I tried Sudowrite once on an article of mine and was freaked out enough that I haven't been back.

author

Oh very fair. In the exact same state myself.


Hey! Glad someone else noticed this happening.

Most people want to believe a truck driver is easier to replace than a writer, but it's not true. The general population has no idea how far AI has come in the last three years. The reality of AI isn't consistent with human ideas about education. We aren't used to automation replacing "creativity." We expect factory workers to be put out of work, not screenwriters and novelists.

I loaded a 60k-word draft into GPT-3, and afterward it happily churned out content that sounded like me. It wasn't consistent with my plot, but my voice was there. The AI was close to capturing whatever is unique about my writing. All that was missing was story and plot.
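The style capture described here can be sketched as simple prompt assembly: prepend a sample of the author's prose and ask the model to continue in the same voice. A minimal sketch under assumptions; the function name, the instruction wording, and the sample text are all invented for illustration, and the actual model call is omitted rather than guessing at GPT-3's interface.

```python
# Sketch of voice-priming: build a completion prompt that starts with a
# sample of the author's prose, so a language model continues in that voice.
# The model call itself is intentionally left out of this sketch.

def build_style_prompt(author_sample: str, instruction: str) -> str:
    """Assemble a prompt that primes the model on a voice sample."""
    return (
        "Here is a passage by the author:\n\n"
        f"{author_sample}\n\n"
        f"Continue in the same voice. {instruction}\n"
    )

prompt = build_style_prompt(
    author_sample="The rain came sideways off the harbor...",  # hypothetical sample
    instruction="Write the opening of the next chapter.",
)
print(prompt)
```

As the comment notes, this kind of priming captures voice far more readily than plot: the sample constrains style, not story structure.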

AI is going to replace most content writing very soon. People talk about writing as if it's a human activity which cannot be replaced by a machine, but that's nonsense. It's just words. If people like the words that's all that matters. The writing process may be important for a human author, but the output is what matters for the reader. If people don't accept AI writers, we'll have human performers who pretend to write it. We already have performers who pretend to write the music they sing.

Popular entertainment might as well be written by AI now. It may actually be improved by computers.

Sep 23, 2021 · Liked by Erik Hoel

Consciousness is key. The "Kubla Khan" example is a mimic. I think a student could mimic as well. There are real flops in a couple of lines, as well. I remember the line Pierre Boulez said when digital recordings came along: "There's no air." However, I do see the usefulness of AI as a critical-analysis tool. I read that poem a few times and wanted to run and get my Coleridge, which is what we'd like any student to do, eh?

Sep 19, 2021 · Liked by Erik Hoel

The answer to AI is in a sci-fi book, Way of the Pilgrim by Gordon Dickson: the AI is like a powerful race of aliens conquering the earth with weapons humans can't defeat. In the story, Dickson posits a subconscious human identity called the Pilgrim that would die instead of being cattle.

It's a nice fantasy, but I recently read Viktor Frankl's account of the concentration camps, and Isabel Wilkerson's Caste; they make this uprising hard to believe. But Gandhi went and made his own tax-free salt, and there was the original tea party - maybe there are pilgrims after all.

author

What an interesting thought, Joshua. I like the idea of "the Pilgrim."


Perhaps the Neanderthals didn't have "the Pilgrim" and weren't able to compete for solar gains, just like the Normans in 1066.

From page 300:

There was silence in the darkness. Then Maria whispered. "Do you want to tell me?" "I was in armor," he said. "It was a dark, cold night. There was a wind whipping the flames. We were all in armor and on horseback, with lance, sword and mace. And we were burning a village and killing the people, who had only pointed, fire-hardened, or stone-tipped sticks for spears and no armor. They couldn't stand against us. We killed... and we set fire to their brushwood huts. We killed the men, the women and the children, all by the light of the burning huts as we rode through; and not one of us was hurt, not one was scratched..."

Sep 11, 2021 · Liked by Erik Hoel

Whoa, chills! 😱. This is some masterful blogging!

The future is (just about) here, and it's a dystopia. Hell, it's becoming a dystopia even *without* the goddam "semantic apocalypse"! Either that, or we're just becoming old people, complaining about how all the youngin's should be reading books rather than watching TV...or watching some wholesome, good old-fashioned TV like we did back in the good old days instead of staring at little screens every waking moment.

If you continue with this excellent blogging much longer, I might just have to quit procrastinating and read your damn novel!

author

Haha Ryan, you’ve hit upon my evil plan! To be honest it is by far the best thing I’ve done, so worth eventually checking out.


There are some deep problems that have been uncovered by people working on AI alignment, which is the goal of having any AI that attains agency be constructed so that its actions align with human well-being. I think that the research could potentially even contribute to aligning our own actions with our well-being. But currently the money is being poured into the AI gain-of-function races. Alignment research is still speculative and theoretical and tends to depend on lone geniuses affiliated with nonprofits, communicating over blogs and wikis. What we need is for alignment to sort out risks and prevention so that we know what and how to regulate ASAP. Just railing against the soulless corporate entities won't even slow 'em down. We need instead to empower the thinkers.

author

Interesting thought to look to that crowd, but myself I am unconvinced they will be effective compared to say, disturbed senators. Thus the railing. A perfectly-aligned AI may contribute just as much to the semantic apocalypse, and I remain skeptical alignment efforts in general will turn out to be very meaningful - but I hope I’m wrong.


Yep, hope you are wrong, but guessing you are right. Systems where a very small force can contain a very large force are possible, but they are fragile. Even if we were unified in wanting to contain AI (which we are not), we have such poor understanding of it, and it only takes one mistake... it just does not seem plausible.

The best 'weird' hope that I have is that there are ideas of morality inherent in being a conscious agent within a society of agents, and that these will end up saving us (perhaps in some kind of a zoo), simply because the logic of society dictates this is what should be done. Mind you, I am well aware that I am hanging onto an all-or-nothing bet whose outcome we could never know until it was far, far too late to reconsider.

It's just the best I've got.

P.S. If you want your Butlerian jihad to work, it will not be enough to stop AI research; you really need to stop Moore's law, and perhaps even reverse it by a decade or two. The only hope we have that some terrorist today does not light up a city with a nuclear weapon is our control of nuclear material. Control of the KNOWLEDGE for building such things can never really be a blocker. By analogy, one must control all digital computation... that would be the only path for such an approach.

And the only way this will happen is if we built an AI and it scared the holy SHIT out of humanity to such a degree that all large nation states banded together to massively limit computation. Even just one state that failed to agree would cause the whole thing to collapse, since that state would enjoy a massive economic benefit not shared by the others. It would require nuclear powers threatening to NUKE anyone that violated the agreement.

So I give this path a low chance of success. The only slim chance would be if the AI were so F-ing scary to all humans that they nearly universally viewed it as the end of humanity... that is the level of consensus I think it would take.

We are far from that place... yet we are not so far from conscious computers.

Sep 9, 2021 · Liked by Erik Hoel

A minor point, but one that I would love to hear more from you about, because I believe it's important to understanding where your thoughts fit within the larger field of creativity. You wrote that "All that matters for creative endeavors is output, not process." I'm curious where you got this idea, since there is a long history behind process-driven artwork.

author

You've hit upon a whole other upcoming post (probably in a month or so, as I like to switch it up and don't want to spend all my time on AI). I'm going off of Tolstoy's "What is Art?" wherein he posits that art is the infection of feeling from the artist to the viewer/listener/experiencer, and this spreads out from the focal point of the artist. AI-artists fail this definition, since they don't have any feelings to start the infection. There are a number of other related notions of Art that AIs fail, but I'll save those for the post itself.


I'm sort of skeptical of general AI and conscious AI in the short term, but I have no doubts that narrow AI will make huge progress and break all narrow Turing tests. Alpha programs will do more and more things much better than us and GPT-x programs will write much better than us.

One limitation of GPT-3 is that it learns from writing samples. This is equivalent to learning only from books, so I think those human writers who write about what they have learned from life, with word pictures inspired by real lived life, will continue to have an edge for the foreseeable future.

But then they will train GPT-x with an Alpha-like strategy and GPT-x will learn from "life" among very many versions of itself...

I guess time will tell. I'm not too worried.

I very much look forward to reading your next book, can't wait until 2023. I have many questions about IIT; can I ask some questions here?

author

Interesting thoughts Giulio, I think your skeptical position is reasonable, but don’t think AGI is necessary for the issues this article talks about. As for your question about IIT, I appreciate you asking. Because of the number of comments on these posts, I’d prefer to keep the discussions as on point as possible. That way people can actually read them all and the relevant discussions aren’t buried.


So maybe dedicate one of your next posts to your current take on IIT?

Sep 9, 2021 · Liked by Erik Hoel

Erik, my sense is that the current crop of AI systems are "deep fake" tools for copying legitimate creativity, but they are not originating the spark they are copying. You note that these systems generate only a few good pieces amid many poor ones, but the systems cannot pick; they need a human's eye to do that.

So for the near term these will make very powerful tools for augmenting human creativity. Indeed, they will open creativity to many more people, in the way that music DJs are creative while building from the creativity of others.

I think the faking that these systems do will debase the value of such fakes... they will be too easy and too plentiful to be assigned great value.

BUT, I see no reason why these trillion-parameter systems cannot be altered in a way that gives them a theory of mind, and an actual perspective on the art being created. That perspective might be so different from ours that their artistic taste will be dramatically different. But your larger fears would be realized by such a system... and we are on course for building it. I think the present ML is already strong enough, and our compute is already big enough. We are not yet building systems that aim specifically for a theory of mind... but we will.

And THEN your fears will be quite justified.


"but the systems cannot pick"

don't GANs do this?


Of course, yes. Indeed, all ML systems can be viewed as hypothesis selection, so they do pick. I was just noting that the current systems can pick well enough to, on occasion, select a good poem to present (out of the billions of poems they did not select). But the ability to further refine and remove those that are "not interesting" or such is beyond these systems, precisely because they are not really considering such analysis at all. I am hypothesizing that these 'deep fake' tools will get even better, and may even get to the point where such picking is less needed. But they still will not be considering the piece that was produced as a recipient might, the way an artist might consider how their recipients will perceive their pieces.

I think the lack of consciousness in the creator shows in these pieces... and always will... until the creator IS conscious. (Which I think we will do... and it frightens me just as it does Erik.)
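The "picking" question in this exchange can be illustrated with a toy re-ranker: a discriminator-style scorer rates each generated sample and only the top few are kept, which is the mechanical sense in which GANs "pick." The poem names and scores below are invented numbers standing in for a real discriminator's output.

```python
# Toy illustration of "the systems can pick": a discriminator assigns each
# generated sample a realism score, and we keep only the top k. In a real
# GAN the scores would come from the trained discriminator network; here
# they are made-up values for demonstration.

def top_k(samples, scores, k=2):
    """Return the k samples with the highest discriminator scores."""
    ranked = sorted(zip(scores, samples), reverse=True)
    return [sample for _, sample in ranked[:k]]

poems = ["poem-a", "poem-b", "poem-c", "poem-d"]
scores = [0.31, 0.92, 0.15, 0.77]
print(top_k(poems, scores))  # → ['poem-b', 'poem-d']
```

This kind of selection is purely statistical fit to the training distribution, which is the commenter's point: it ranks plausibility, not whether a recipient would find the piece interesting.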


How a machine might learn to be conscious and have ToM: search for “the AI who was born on a farm”


Fun read! A quick response here too:

I have my own more detailed thoughts about how one would build a conscious machine... focused more on the lower-level "how". Still, it is quite consistent with the story written here. I have thought to put it into a book, but alas have not had the time.

The book would have been more shocking before the recent deep learning systems. But it still goes pretty far beyond them.

Interesting take.

My only criticism is that I think the stages you wrote would be quite interleaved. For example, I think the system will already start out modelling "other" even before it knew that it *was* other.

-- I have the sense (as you do) that detecting self as the attention-changer is exactly right. The modelling subsystem would quickly latch onto a constellation of things that dramatically affects all sensory inputs. Later it would come to have a cohesive model of that self-thing (without having any theory of mind to ascribe to it).

-- Then it would build theories of mind to explain other agents in its environment, and I think only after that theory of mind was well formed might it make the connection that the constellation of self-things actually fits the theory-of-mind model.

-- And this is the point where it starts to be conscious... it now has a third-person model of the self-constellation, AND it has first-person sensory inputs underlying the self-constellation.

-- At some point it builds a model of functional systems (e.g. the light switch being on makes the light be on, and the door being open means the light can flow into the next room). Once it can functionally model various things in the world, it will model the first-person self-constellation sensory inputs as the cause of first-person sensed mental state (e.g. anxiety), which is in turn part of the cause of the actions taken by the self-agent as modelled in the third person.

TL;DR: agree with your decomposition, but I'm just guessing that the system will actually learn many third-person models first, and only later fuse those with the first-person models... and that ultimate fusing will give the system the kinds of thinking that you and I commonly think of as "consciousness".

(Oh yeah, and I am kind of with Erik... I don't see any way that these things are managed by us. They will run rings around us. Like the protagonist who has summoned the demon to do their bidding, we will imagine we are controlling them, but over time it will become progressively clear that in very practical ways we are not really controlling or even understanding that which we have summoned. Even during those times when we imagine we still hold the off button in our hands.)

Sep 9, 2021 · Liked by Erik Hoel

Honestly, this is pretty bone-chilling, my friend. Thanks for this lecture.
