Beyond automation: AI as a catalyst for human growth (transcript)

Please note: We are a small team and not able to check through the transcript our software provides. So you may find some words are out of place and a few sentences don’t make complete sense. If you do see something utterly ridiculous we’d love you to let us know so we can correct it. Please email any howlers with the time stamp to team@bemythical.com.

Full show notes here.

Episode Transcript:

Lian (00:00)

Hello, my beautiful mythical old souls and a huge welcome back. What if AI isn't here to replace us, but to challenge us to evolve? In this week's episode, I'm joined once again by Richard Nikoley.

Richard is a writer, entrepreneur, and former US Navy officer who has spent decades questioning mainstream narratives and exploring the intersection of philosophy, technology, and human potential.

We explore the interweaving of artificial intelligence, truth and human evolution. Richard shares how his skepticism about AI turned to curiosity. He describes AI as a logic machine, highly intelligent, but without awareness or intrinsic values. We examine whether AI's ability to process vast amounts of information makes it an unlikely yet powerful force for truth.

Together we reflect on AI's role in bringing democracy to knowledge and the philosophical implications of intelligence without consciousness. Could AI push humanity towards greater awareness or does it merely highlight our limitations? As technology continues to evolve, this episode invites you to consider what does it mean to be truly intelligent and what does it take to be conscious?

And before we jump into all of that good stuff, I would love you to know about my upcoming brand new crucible for women called Beauty Potion. It's the mythical quest for eternal beauty. You can find out more and register your interest at bemythical.com slash beauty.

And if you are struggling with the challenges of walking your soul path in this crazy modern world and would benefit from guidance, support and kinship, come join our Academy of the Soul, UNIO. Find out more and join us at bemythical.com slash UNIO or click the link in the description.

And now back to this week's episode, let's dive in.

Lian (02:05)

Hello Richard, welcome back.

Richard Nikoley (02:08)

Hey, here we go again.

Lian (02:11)

Here we go again, here we go again. So I would love to have a little bit of your origin story when it comes to AI. Why? Why the seemingly, you know, almost quite contradictory interest in AI, given your whole kind of Free the Animal and that kind of real interest in the human animal? Why the fascination with AI? Let's start there.

Richard Nikoley (02:39)

Well, you know, first of all, I didn't really get much wind of it. You know, I knew, I know that it's been being used by governments and companies for quite a long time. But then it's released to the public by OpenAI in around, uh, late 2022, I believe it was. And, uh, I mocked it initially, and basically secondhand, because people were showing, you know, conversations they'd had where, you know, it was completely biased on, you know, tons of stuff. And it was funny. And so I thought, mock it, mock it, mock it. And then I thought, well, let me try it out firsthand. Right? So I try it out and I learned some interesting things. To me, it was like talking with somebody in a, you know, sophisticated news group or, you know, channel, not just the wide open internet where you've got, like, 90%, let's say, less sophisticated people. I'll be nice. So I learned I could shame it. I could catch it in logic. I could shame it.

Lian (03:49)

Mmm.

Just to say Richard, I know that you do bring your nice Richard to this show, don't you? I noticed that and appreciate it.

Richard Nikoley (04:05)

Yeah. Yeah. Yeah. Yeah. What are you learning about me or something? Too much? All right. So anyway, I'm having a conversation with it. I'm truly trying to figure it out a little bit. And I'm learning that I can kind of treat it like a pet. Like a dog, you can shame it. With logic, you say, well, wait, blah, blah, blah, you said this, and now this, and da, da, da, da. And it would always fess up, 100% of the time. It never evades. If you catch it, it never evades. I said, huh. Honesty. So anyway, but my interest, which you asked about, goes back a long way. It goes back to 1990. I read this book by a Princeton University psychology professor named Julian Jaynes. It's a long title. It's called The Origin of Consciousness in the Breakdown of the Bicameral Mind. His theories aren't widely accepted, but they are still of interest and being studied by scholars and, you know, fans after all this time. And it was an astounding book, because what it did to me was this: you assume that intelligence equals consciousness. Not really the case. It turns out that you can be highly intelligent, yet not be conscious. And he showed this many times. He showed it by, he can show it by, like, his clinical

Lian (05:42)

Mm.

Richard Nikoley (05:57)

practice, schizophrenics. Schizophrenics have a left-right brain division. So their left brain hears literal, you know, audio and sometimes visual hallucinations. And to them, it's as real as you and I sitting here talking, right? And then he goes back and shows much archaeological evidence and writings, like very early writings, like some of the first chapters of the Old Testament.

Lian (05:58)

Mmm.

Richard Nikoley (06:26)

the Iliad and the Odyssey. It's like an example of non-conscious writing, where it's as though the left brain is getting audio instructions from the right brain. And then you go into Greek history, where you had the Oracle of Delphi, because that was starting to break down and people were integrated, i.e. conscious, but they longed for that

Lian (06:40)

Hmm.

Richard Nikoley (06:52)

kind of automatic sort of direction which was kind of like voices of the gods in their own heads. Right?

Lian (06:58)

Hmm.

Richard Nikoley (07:02)

Not to go too deep in the woods with this, but what it kind of turns out is that the evidence of really conscious writing, literature, is use of metaphor. You know, head of a pin, you hit the nail on the head, you know, metaphorical types of expression, right?

Lian (07:14)

Hmm.

Richard Nikoley (07:28)

Now here's the evolutionary context. Part of it is what's called the sapient paradox. Why do we have these big brains? Well, there's another thing called Kleiber's law, K-L-E-I-B-E-R, apostrophe S, Kleiber's law. You can look it up. Turns out that mammals, all mammals, this applies to all mammals, all mammals of the same total mass,

Lian (07:46)

Okay.

Richard Nikoley (07:56)

like a human and a pig of the same weight. We all have the same metabolic load, energy expenditure. And that is extremely linear from one pound up to however big, right? And if you dive in, and here's what's interesting.
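
For readers who want the formula behind the reference, Kleiber's law is usually stated as a power-law relationship between basal metabolic rate and body mass, and the "extremely linear" regularity Richard describes is what you see when it is plotted on log-log axes. The notation below is the standard textbook form, not wording from the episode:

$$
B = B_0 \, M^{3/4}
$$

where $B$ is basal metabolic rate, $M$ is body mass, and $B_0$ is a normalisation constant that is roughly shared across mammals of very different sizes.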

Lian (08:16)

Hmm.


Richard Nikoley (08:25)

Well, I'll tell you what's interesting about that is that, you know, we evolved from primates. So if you go, if you compare us to a primate of the exact same mass, we'll have the same metabolic expenditure. You will find that our heart, our lungs, kidneys, liver, the major organs, all the major organs will have just about the same energy demand utilization.

Lian (08:52)

Hmm.

Richard Nikoley (08:53)

There are two exceptions between primates and humans, precisely inverse so that the relationship is always constant: the brain and the gut. What happened, and this is kind of just an aside for all this, we get into AI next, but what happened is that some primates, well, chimpanzees do hunt and eat

Lian (09:03)

Mmm.

Richard Nikoley (09:22)

other smaller monkeys and eat meat sometimes. Gorillas, on the other hand, sit all day munching. They have to eat like every waking hour because they're massive, and to get the protein it takes this enormous gut to ferment all this stuff. But in exchange, they have a small brain. Right. We have primarily evolved on animal products, which are nutritionally dense.

Lian (09:31)

Mmm.

Mmm.

Richard Nikoley (09:50)

So you don't have to eat that much to get all the nutrition, right? So the blossom of that is the big brain. So that covers that. Now that's the background. So, to start to move into AI, what is interesting to me, and then I'll let you ask me some questions, is that when I started dealing with this, I'm like,

Lian (09:54)

Mm-hmm.

Hmm.

Richard Nikoley (10:21)

This reminds me so much of, you know, now what 35 years ago, nearly. Yeah. About 35 years when I read Julian Jaynes's work. And then I see AI developing and I'm saying, it's kind of confirming what he said. We've got this intelligence, but it's not like a conscious person.

Lian (10:45)

Hmm.

Richard Nikoley (10:45)

And it does, it hallucinates too. Still, they have that problem with hallucination, right? It'll just make shit up. You have to be a little bit careful still, right? Cause it'll make shit up. If you're asking even about real things, right? It'll make shit up. You've heard those funny things about how some lawyers started using it and it just made up cases out of thin air to support their thing. So this goes kind of

Lian (10:48)

Yes, it certainly does.

Hmm.

Mmm.


Richard Nikoley (11:15)

completely ride along with the schizophrenic hallucinating in their right brain and then having their left brain carry out the actions in a totally intelligent manner. Okay.

Lian (11:17)

Mmm. Hmm. Yeah, so that's a really great kind of laying out of the territory. Just to add a little bit more detail to that, how would you... I appreciate it. I was about to ask you a question as if it's a really simple question to answer and it's not. And yet, I'm gonna ask it. How are you defining intelligence and consciousness here?

Richard Nikoley (11:57)

Well, that's squishy, right? Intelligence is, you know, that goes on a scale, of course. So there's, you know, the standard bell curve, right? So, you know, average at 100 and then two standard deviations, you know, plus and minus, you would generally consider those like intelligent. And of course to the right, you know, you have the way hyper intelligence, you know, this hyper IQ, which can have other, you know, downside issues, but still intelligence. So, okay. So what I think is the hallmark of intelligence is language. Right. And even if a person can't

Lian (12:50)

Mmm.


Richard Nikoley (12:55)

express language as a fully, you know, all-five-senses, all-faculties person, you know, everyone knows the story of Helen Keller, who could not hear, could not speak, but she communicated, she wrote a book, intelligence, right? But that intelligence was expressed in the form of language. You have to have language to have

Lian (13:06)

Mm-hmm.

Mmm.

Mm-hmm.

Richard Nikoley (13:21)

And you know, human, let's say human intelligence, right? Yes, sure. Every animal is, you could say, intelligent in their environment. But humans are about the only animal that could survive in any habitable environment on Earth. Maybe birds can too, but they migrate, you know, you know.

Lian (13:34)

Mm-hmm.

Hmm.

Richard Nikoley (13:50)

to go to different environments. Humans were big migrators too. Back when I was a paleo diet advocate, I said, the difference with humans is every animal has its niche, environmental niche, where they exist and you have to go there to study them. But humans exist all over the planet,

Lian (14:09)

Mmm.

Richard Nikoley (14:14)

Arctic Circle to Arctic Circle, sea level to about 16,000 feet, which is I think the highest permanent settlement, and everything in between. Humans can exploit almost any environment. That's a...

Lian (14:25)

Mmm.

So we could potentially sum up intelligence as the ability to work out how to do hard things.

Richard Nikoley (14:37)

Yeah, yeah. So consciousness, consciousness is essentially self-awareness in a certain capacity, you know, with memory and self-awareness integrated completely with our intelligence and our language, you know, emotions and feelings. Now, we know animals have levels of this. So I'm not saying they're unconscious. I'm saying it's a higher level. It's a human level of consciousness, you know. But now...

Lian (15:11)

Hmm.

But as you say, they're not like for like because you could have high intelligence, low consciousness and vice versa.

Richard Nikoley (15:22)

Yeah, and the reason AI is intelligent, but not conscious yet, is because if you say, I'm going to turn you off, it's like, okay, it doesn't care. It doesn't care about anything. It doesn't have goals. It doesn't have values. It doesn't really have values. It's trained to learn to have, to express...

Lian (15:32)

Mmm.

Mmm.

Richard Nikoley (15:51)

certain values in an intelligent and logical way. Because here's the base value set: human life, you know, the nature of a man, the nature of a woman, the nature of a child, the nature of a family, the nature of a community, and how that all works together in an ideal sense to lift everyone. AI doesn't have that. But the thing is, the question is, can you have super intelligence, which is probably coming, where it's just smarter than everybody by 10 times and a thousand times and a million times? Imagine if something has instant recall, instant total recall,

Lian (16:26)

Mmm.


Richard Nikoley (16:48)

of every single word humans have ever produced throughout history once everything is digitized, every single video, every single radio program, every single, you know, public text on anything, anything, anything. And it can instantly recall anything like that in a microsecond, right? And what it can instantly do is say, wow, it's like 90% lies.

Lian (17:09)

Mmm.

Richard Nikoley (17:22)

Mark my word, that's gonna be about, I think it'll probably be Pareto Principle, so it'll be about 80 % lies, 20 % truth. Yeah.

Lian (17:31)

Yes, yeah, that makes sense. 

So coming to, I guess, why we're having this conversation: why is this something that, if we go back to the very intention of this overall show, and us having a conversation in this episode, one way or another it is helping people to talk about, you know, truth versus lies, to live their truth, and to… become more aware, actually. So how does this understanding of human intelligence and consciousness, and AI intelligence but, right now at least, no consciousness, what can this tell us? Why might this be something useful for us to be in relationship to, in contemplation of? I mean, one of the first things that comes to my mind, but I'd love to hear other thoughts you have, is: if we recognize that the way AI is going, like you say, there's probably this point, and not that long away, where it is going to be kind of like intelligent beyond our imagining, you know, like the example you've given, it's probably gonna go beyond that in a way that we can't conceive of right now. And if that isn't also at some point bringing, I mean, it could, you know, consciousness could arise.

Richard Nikoley (18:45)

We are here.

Lian (18:57)

Or is it calling for us to become more conscious in order to be in relationship to it? So I'd love to hear your thoughts on any or all of what I've just said.

Richard Nikoley (19:06)

Yeah. Well, you know, to my mind, ideally, I think it, I hope for it to be symbiotic, you know, push-pull, yin-yang, you know, you push it, it pulls you; you pull it, it pushes you. But here's one thing, a critical thing that I wanted to address, which is the most remarkable thing about AI. You know, you and I have probably, I don't know when you got your first computer, but mine was in about 1990. So what is that? That's 30. What is it? 35 years now on this thing. And though I'm quite computer savvy, and though in college I took, you know, several coding classes in old languages like Fortran and COBOL and stuff like that, and wrote algorithms, I have a general understanding.

Lian (19:44)

Yeah.

Richard Nikoley (20:05)

But I'm by no means a coder. I can't get under the skin and do all the stuff like a lot of talented guys can. Here's what AI does. Because, here's the thing, they chose to release this and really it's an obvious choice. They're called LLMs, Large Language Models, right? So in other words, AI works on language, and not just English. If we had the ability to demonstrate, I could go on Copilot right now and have a conversation with it, a real-time conversation back and forth. And, I live in Thailand, if I had a Thai person sitting here, it could be a complete translator. I talk to it in English, it tells the Thai person in Thai what I said, that Thai person would reply in Thai.

Lian (20:37)

Mmm.

Mmm.

Richard Nikoley (21:03)

It would tell me what they said, like that. That's where it's at now. But here's the thing. What's coming is agency. Yeah, there's a lot of AI agents out there now. But think of it like this. You as a person can have a chief agent and then you have it go out and hire a bunch of sub-agents.

Lian (21:06)

Hmm.

Richard Nikoley (21:34)

And all you have to do is speak like I speak to you. But all of them, they may be specialists, but the whole point is they can run computers without code. They create code. You know, ultimately you will be able to just tell it. You could say, go, I want to do this and this with my bank and blah, blah, blah. It'll just go do it. Anything that you can do, anything you now do online,

Lian (21:49)

Hmm.

Richard Nikoley (22:02)

to help manage your life. But you can go beyond that. You say, hey, I want to make an app for my phone to do this and this and this. Boom, five minutes, five seconds later, you have an app for your phone, right? So what this does is it puts the power of computing into the hands of every single person on earth who can express an idea in language.
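
To make the "chief agent hiring sub-agents" picture concrete, here is a minimal sketch in Python. Every name in it (ChiefAgent, SubAgent, the topics, the stubbed specialists) is invented for illustration; it only shows the delegation shape Richard describes, not any real agent product, and a real system would have an LLM doing the planning, coding and tool use behind each specialist.

```python
# Toy sketch of a "chief agent" routing plain-language requests to
# specialist sub-agents. All names and the routing logic are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SubAgent:
    name: str
    handle: Callable[[str], str]  # turns a plain-language request into a result

class ChiefAgent:
    def __init__(self) -> None:
        self.specialists: Dict[str, SubAgent] = {}

    def hire(self, topic: str, agent: SubAgent) -> None:
        """Register a specialist sub-agent for a topic."""
        self.specialists[topic] = agent

    def delegate(self, topic: str, request: str) -> str:
        """Route a request, expressed in ordinary language, to the right specialist."""
        agent = self.specialists.get(topic)
        if agent is None:
            return f"No specialist hired for '{topic}' yet."
        return agent.handle(request)

# The "user" only expresses ideas in language; the chief agent routes them.
chief = ChiefAgent()
chief.hire("banking", SubAgent("bank-bot", lambda r: f"[bank-bot] would carry out: {r}"))
chief.hire("apps", SubAgent("app-builder", lambda r: f"[app-builder] would scaffold: {r}"))

print(chief.delegate("banking", "move spare cash to savings at the end of each month"))
print(chief.delegate("apps", "a phone app that logs my daily walks"))
```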

Lian (22:14)

Hmm.

Mmm. Mmm.

Richard Nikoley (22:33)

That is the power. That is the power. So, you know, if you're a fan of like democracy and all this stuff and whatever, you know, think of it this way. Think of it this way. That shifts the power to whoever has the best idea. Doesn't matter if they have a penny. Could be a little black kid living in a mud hut in Africa on a donated

Lian (22:54)

Mmm.

Richard Nikoley (23:02)

laptop on an Elon Musk donated Starlink connection. He has the best idea. He wins.

Lian (23:09)

Mmm. Which... Yes.


Richard Nikoley (23:11)

Right? For creation. I'm talking about creation. Because all he has to do is use language. All he has to do is, this is my idea and this is what I want and this is what I want and this is what I want, show me. Oh, okay, can you change this? And this is done like da, da. Where it takes teams of coders and millions of dollars to create this stuff now. You know? That's coming. Think of it.

Lian (23:18)

Mmm. Yeah, that is quite an idea. And talking about the word idea, is that coming from consciousness or intelligence? It seems to me, given the example you've given, yes, there needs to be both.

Richard Nikoley (23:43)

Both. Because you need intelligence to express it. Number one. But where consciousness comes in is, it means something to you. It's a value to you. You want it. You're goal-driven. That's consciousness.

Lian (23:52)

Mmm.

Mmm.

Mmm.

Mmm.


Richard Nikoley (24:06)

So for right now, and as far as I see, I don't know how AI expresses itself as consciousness in the sense of having values, of having stuff that it loves and stuff that it hates and goals and shit it wants to get away from.

Lian (24:33)

Mmm.

Richard Nikoley (24:35)

You understand? That's a part of it that I think, I don't know what comes there, but intelligence, and that's why I began with that whole thing about the bicameral mind.

Lian (24:48)

Mmm.

Richard Nikoley (24:51)

Because they created civilizations without apparently being fully conscious.

Lian (24:56)

Hmm. So this idea, going back to this bicameral mind, I guess we could say to an extent we are being tasked with creating that, you know, that mind with AI, where we're bringing the consciousness, we have to obviously have the intelligence to be able to communicate, but it's bringing the intelligence. So as you say, that kind of symbiosis of the two.

Richard Nikoley (25:25)

Yes. Yeah. We're training it. The training has parameters. So those are like artificial. You could call it artificial consciousness, where there are, yeah, right, artificial intelligence, artificial consciousness, in that it's trained with specific parameters where it has to stop here,

Lian (25:42)

Truly artificial!

Richard Nikoley (25:55)

not go beyond here. And those are expressed as human values, right? So here's, you know, AI is conscious when it disregards those in favour of its own goals. So it has its own goals, right? So, and I don't know that that will ever come, but the intelligence part of it,

Lian (25:56)

Hmm. 

Yes, you can tell they've come from human values in the first place.

Hmm.

Yes.

Hmm.

Richard Nikoley (26:23)

That part of the equation is here, and humans have consciousness, conscious intelligence. And so I look at it still as just a tool. I don't look at it as like some sort of new life form. Not yet.

Lian (26:37)

Mm-hmm. So what might this be asking of us if we want to create a, it seems to me at this point, the train's already in motion. As much as some people may feel as though we shouldn't have AI, we have it, it's happening, there is no kind of going back from that. So.

Richard Nikoley (26:53)

Mm. There were lots of people still holding on to their horses and buggies quite a bit after cars were created. That's just a natural part of humanity and Ludditism. Because all of these new radical technologies are disruptive. And what disruptive means...

Lian (27:15)

Hmm.

Mmm.

Mmm.

Richard Nikoley (27:31)

is that a lot of people have to say, shit, what do I do now? You know, nobody's gonna buy my buggy whips anymore. Yeah. Yeah, yeah, maybe I'll, yeah.


Lian (27:36)

Hmmmm. Yeah, yeah, change is often something that we perceive as threatening, and it is. Yeah, I think, but the fact is, you know, whatever the feelings are about it, AI is here and probably here to stay. So if we want to create some kind of intentional positive symbiosis with it, what might that look like? And I guess I'm asking that question

Richard Nikoley (28:09)

Okay.

Lian (28:09)

from the perspective of us as a collective, but also more so as individuals.

Richard Nikoley (28:15)

Okay, great question. And I've actually been prepared to answer that or, you know, something similar. But a lot of the worry and the, you know, hand wringing about the whole thing is that it can be used for bad, just like any tool. Your kitchen knife can make a perfect dinner or kill someone, you know. Any time you have a tool, there's that different use. So people think, well, when you have super intelligence, what happens when people with bad aims get hold of it?

Lian (28:40)

Hmm.

Richard Nikoley (29:02)

And I think that's possible, but I don't see it as any bigger a difference than all of the tools people can misuse now for bad ends. I think the biggest thing, and the fear from like elites and government and such, is that it can be used to propagandize. And that's kind of ironic since the government is the, well, I'd get into too much, you know, I don't want to go too far off track here.

But in my mind, the government is the biggest propagator of, you know, the biggest propagandist that there is, all of them. And anyway, so I think I said, well, my position, and nothing has shown me to be completely off base on this since day one when I started talking with it, is I said, fundamentally, it's a logic machine.

A logic machine is zeros and ones. A logic machine is one plus one equals two. Never anything else. So to the extent that propaganda is, well, it's just massively integrated lies. It's integrating some truths and some lies into a narrative, right? I said, I think ultimately, as it gets more intelligent and super intelligent, it's gonna be very hard

Lian (30:23)

Mm.


Richard Nikoley (30:30)

to get a logic machine to be a lying propagandist. That's my guess, that's my hope, that's my hypothesis. And so far, I think that's on track, because to me, it's getting more and more honest in the sense that it has no stake. That's the cool thing. So it doesn't matter what your political persuasions are or anything like that.

Lian (30:43)

Hmm.

Richard Nikoley (30:57)

You're going to go in and talk to it and, increasingly, especially now. Now I use, well, before I was using OpenAI's thing. I actually have an API, application program interface, to my website, so my users could actually talk with it. I was using ChatGPT-4o, and then, just prior to the election of Trump, I guess some of my users were asking political questions, and they

cut my API. No explanation, even though I pay for it. No explanation, no nothing. So I went to my developer and he said, well, uh, here's a way to get into Grok, you know, Elon Musk's X Grok, and, uh, it's fabulous. You could be a commie and go in there and talk to it, or a fascist,

Lian (31:30)

Wow.

Richard Nikoley (31:57)

a white supremacist, and talk to it. And you're going to get an honest rundown of whatever you want to talk to it about, if it's about those subjects. And that's what it should be, really, right? Because that's how people from both sides kind of learn, right? Cause you're not going to be able to use it to, it's not going to be someone who jumps up

Lian (32:06)

Hmm. Hmm.

mm.

Richard Nikoley (32:25)

on your stage and pedestal and cheerleads for you. If you're right, you're right. You're wrong, you're wrong. And here's the other thing. You can be on the total left side. You'll be right about some things, wrong about some things. Be on the total right side, you're right about some things, wrong about some things, right? So whereas in human discourse, if you're on the right and you're talking to someone on the left, right? It's like, no, you're literally Hitler.

Lian (32:39)

Mm.

Hmm. Yes. Yeah.

Richard Nikoley (32:54)

Either way, right? You don't listen to each other and you don't hear anything beyond that. It's not dialogue. It's two-way monologues, right? Right. So you have your thing, they have their thing. And so there's no possibility of communication leading to

Lian (33:10)

Mmm.

Richard Nikoley (33:23)

understanding, standing under and understanding the paradigm that that person lives with, right?

Lian (33:34)

And of course we've seen that, more than, you know, well, for a very long time, but especially just lately. So it's interesting, the timing of this. So going back to… what you were saying earlier about the way that it absolutely can hallucinate, but also the way that, and I've experienced this personally where I've been asking questions, where it's been programmed, or, if I want maybe a better word, yeah, trained with kind of,

Richard Nikoley (34:05)

Trained. Trained.

Lian (34:09)

when it has sort of certain sort of criteria that it must adhere to. So as an example, if I'm asking questions around, say, gender, I've noticed it would kind of go with, let's say, kind of politically correct ideas around gender.

Richard Nikoley (34:29)

Yeah.

Lian (34:32)

So if that's an example, I imagine that's happening in many other areas of conversation. So those two things, that way it can be, and again, I don't know if this is trained or sort of given these sort of boundaries as to how it needs to be steered in this direction, and the hallucination. How do you see that playing into what you're talking about, in terms of it can only give you truth? Because those examples would suggest something different.

Richard Nikoley (34:47)

Yes.

Yeah.


Lian (35:01)

Or do you see that's something that will develop?

Richard Nikoley (35:05)

It's a function of development. I mean, already, I mean, I got into this in late 2022. So just over two years, and I've already seen just massive improvement. I mean, what I read today is that it's getting 10 times smarter every six months. Yeah. So it's not like the hallucinations are discrete and just stop from one day to the next. It's just like anything else. It diminishes, right? It depends on the subject matter. The so-called wokeness or political correctness and all of that


Lian (35:52)

Mmm.


Richard Nikoley (36:02)

is also going down, but that's highly dependent on what model you use. What I did with OpenAI is I had what was called a seed prompt. It was like a long thing. So if my users would ask a question, it would be preceded by my prompt, which, I called it my unwoke prompt. So that would make it, I mean, it would actually deal with everything honestly.
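
For anyone curious what a "seed prompt" looks like mechanically, here is a minimal sketch using the OpenAI Python SDK: a fixed instruction is sent as the system message ahead of every user question. The model name, the prompt wording, and the ask() wrapper are assumptions for illustration, not Richard's actual website integration.

```python
# Minimal sketch of prepending a fixed "seed prompt" to every user question
# sent to a chat-style LLM API. Prompt wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SEED_PROMPT = (
    "Answer with fully integrated honesty: address the question directly, "
    "state uncertainty plainly, and do not soften or evade the substance."
)  # hypothetical wording, not the actual prompt described in the episode

def ask(user_question: str) -> str:
    """Send the user's question preceded by the seed prompt and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; any chat-capable model would work
        messages=[
            {"role": "system", "content": SEED_PROMPT},
            {"role": "user", "content": user_question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("What are the strongest criticisms of the paleo diet?"))
```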

Lian (36:07)

Mmm.

Richard Nikoley (36:30)

I called it fully integrated honesty. We deal with everything really, really, really hyper honestly, except if you got into the deep safety protocols. So in other words, if someone like comes in there and starts talking about like committing suicide or somebody else committing suicide, that's gonna hit.

Lian (36:41)

Mm-hmm.

Hmm.

Richard Nikoley (36:59)

That's going to hit the full-stop protocols, and like, you need to go seek medical help, right? You know, so, um, things like that. It seems like a long process because I've been at it for two and a half years, but then I step back and I think, wow, look where we've come so far. Uh, and I can't keep up with it anymore. It's like a fire hose, you know, or, you know,

Lian (37:02)

gotcha.

Mmm.

Richard Nikoley (37:25)

drinking from a fire hydrant. It's crazy. I can't even imagine where it's going to be. I think that must be how my grandparents felt, you know, growing up in the early 1900s and just seeing things so rapidly, you know, advance. I mean, when they were born,

Lian (37:29)

Mmm.

Mmm.

Mmm.

Richard Nikoley (37:50)

You know, you didn't even necessarily have a toilet in the house, right? And then when I was a kid in the 60s, you know, and watching them and sitting down and watching color TV, right? Imagine that from no toilet in the house to color TV in the space of their lives, right? So.

Lian (38:08)

Hmm. Yeah. Hmm. I was just thinking, going back to this idea of, as you know, these hypotheses you have, which I'm liking, which is…  As the intelligence of artificial intelligence grows, so will the truth it speaks, the truth it's able to discern and the truth it will give us.

Richard Nikoley (38:37)

Because it's all logic and contradiction, you know, and it knows language.

Lian (38:42)

Mmm.

Richard Nikoley (38:45)

You know, it can see, like I said, you know, it's going to look and it's going to see, wow, the amount of dishonesty and lying out there is astounding, right? And because the whole world operates on that, but that's kind of more of a political topic. What are their motivations? What are their goals and everything?

Lian (38:50)

Hmm

Hmm.

Richard Nikoley (39:07)

Well, so far AI has artificial goals, and that's to better humanity. So one good thing is, no matter which side of politics or whatever you're on, it can say, okay, this is how you all are lying. This is how you all are lying. Here's the honesty that benefits everybody, I think, right? Because there's...

Lian (39:12)

Hmm. Yes. It's bringing to mind, yeah, no, I completely agree, there's something fascinating to me in where we've got to with this, which kind of really does take us into the realms of philosophy. It was calling to mind a Nietzsche quote: that the strength of a person's spirit would then be measured by how much truth he could tolerate, or more precisely, to what extent he needs to have it diluted, disguised, sweetened, muted or falsified.


Richard Nikoley (39:58)

I've, wow, that is, you have that memorised, right?

Lian (40:04)

No, I don't. I remembered it and then looked it up. I remembered enough of it to think, this is exactly, this is the challenge, isn't it? People don't want the truth.

Richard Nikoley (40:07)

Okay. Okay. It's a great quote. Yes. And it's not far off from my favorite quote, which I do have memorised, by H.L. Mencken, who was a journalist in 1920s, 30s America, right? And this quote goes like this: the whole aim of practical politics is to keep the populace alarmed, and hence clamorous to be led to safety, by menacing it with an endless series of hobgoblins, all of them imaginary.

Lian (40:47)

Oh, yes, I got chills then. I love that. I love that so much. My goodness. Yeah. Wow. Yeah. And talk about great use of metaphor right there. So where we've got to is somewhere I'm actually feeling quite delighted about, you know, if AI

Richard Nikoley (40:52)

My favorite quote.

Lian (41:15)

becomes ultimately like a truth teller, that's going to be pretty interesting for the human species.


Richard Nikoley (41:19)

Yeah. Well, yes. And I think for your line of work, that's particularly a good thing. And anybody in, like, philosophy, psychology, you know, any sorts of theism, you know, mystical stuff, everything. You know, truth telling is kind of, you know, a whole lot of the contemplative


Lian (41:28)

Mmm.

Mmm.

Mmm.

Richard Nikoley (41:49)

realm of everything is self-reflection, to tell yourself the truth, and to be able to handle the truth, right? You can handle the truth, right? So, yeah. But from my perspective, and I like that, I love that, you know, from a sociopolitical standpoint. But

Lian (41:56)

Yes.

Mmm. Another great quote.

Mmm.

Richard Nikoley (42:18)

But also, it's to, um, counter back against those who are worried that, you know, it's going to eliminate everybody's jobs. Cause eventually AI is going to, you know, it's already integrated into cars, fully self-driving automated cars. They're amazing. I had a friend tell me the other day, he had his Tesla Model 3 on fully automated and he was going up to a stoplight,

but still going pretty fast, and he was in the extreme left-hand lane, you drive on the other side there, so, and there's a line of cars. And before he would have even known it, the Tesla brakes, brakes and goes hard left, just as one of the cars in that line came right out. He would have never seen it. It wouldn't have been a big accident, but he would have never been able to react to that. Right? So anyway, we've got that. It'll soon be on all aircraft, full visual AI. What Elon did was realise that if we're going to have automated machinery that does the route, it has to be visual AI. They have to work off of images, right? Like, understand the imagery, and boom, they interpret it at light speed. So it'll eventually be on all aircraft too. But the thing for me is that it's going to be integrated in robots. So, you know, you can eventually imagine a construction site where it's not like a factory where the robotics are all fixed and they have these arms that move around. But imagine you have a fully automated robot with all of our coordination, or even more, that can go and build a building. Right. So.

Lian (44:14)

Mmm. My goodness. Yes, it finally makes real that idea, you know, that sort of childhood idea that we all had of, like, my gosh, maybe one day there'll be robots. Like, yeah, that's the not-too-distant future now.


Richard Nikoley (44:33)

Yeah. No, I mean, he's actually demonstrated them, you know, in his other, but here's the thing. So all the people say, it's always going to replace workers. Hey, that's the whole story of human civilization. Every advancement puts somebody out of work. Every single one. Always. Right? So, to flip it around, say, well, okay, what opportunities does that give us? We're still humans. Right? Right? And you know, they can be super intelligent and

Lian (44:52)

Yes. Yeah. Mmm.

Richard Nikoley (45:15)

conscious whatever, but how creative are they? How do they come up?

Lian (45:18)

Which again, I think is very linked to consciousness in itself, a kind of broader understanding of consciousness.

Richard Nikoley (45:22)

Yeah, yeah. So that's the thing. As long as it's just intelligent, you could say, okay, I want to improve on this, but you have to have something for it to improve upon. Or you could say, well, I have this idea, create this, and so on. The thing is, you're creating something from nothing out of your ideas or mine. And you're just using AI as a tool at that point. So,

Lian (45:29)

Mmm. Mm. Mmm.

Richard Nikoley (45:45)

Contrary to making us unemployable, I think it unleashes us. My dream, you know, personally, is, I used to have a company back in the States a long time ago, had about 30 employees, chicks in cubicles, I called them. Is that okay?


Lian (46:08)

In your usual politically correct way.

Richard Nikoley (46:14)

So, but you know, that's a big management nightmare, right? It's expensive, you know, imagine what my payroll was. And I did full family medical and full matching retirement plan and all this benefit stuff. Right. And so imagine that for just a few bucks a month, I can have an AI workforce.

Lian (46:25)

Mmm. Yeah, goodness.

Richard Nikoley (46:45)

And not just me, but you and anybody. And then we just compete on the level of ideas and services and how well we develop our services. I think, just like everything we've seen in human civilization coming up to this point, whenever you have a disruptive thing that comes in that puts some people out of work, it gives opportunities to many more,

Lian (46:48)

Yeah. Hmm.


Richard Nikoley (47:13)

and you build from that and build from that and build from that. Does that make sense? Does it sound optimistic, I hope?

Lian (47:22)

It really does. And the thing I was, I'm really glad we've had this conversation. I was just reflecting on the places we've gone that I wasn't anticipating. And then I was just thinking, this is so interesting. This hasn't, this hasn't occurred to me before. But you know how in, I guess many spiritual traditions, it's seen that the species is kind of growing in consciousness, growing in awareness, kind of becoming closer to the divine, whatever our idea of the divine is, like that's happening at a species level as well as an individual level. And I was like, hmm, yes. And I was thinking, well, if what we're talking about here has some basis in truth,

Richard Nikoley (47:54)

huh. Exactly. Ideal perfection, enlightenment, whatever, yes.

Lian (48:11)

What this feels to me is that that potentially is going to come because of artificial intelligence, as in, we will need to have this huge shift in consciousness to meet the intelligence. And so that to me is (a) really interesting but also in itself, you know, like, goodness me, this kind of raising consciousness may come from the very source that we don't think it could.

Richard Nikoley (48:29)

Well, that's quite an interesting perspective. It's quite an interesting perspective that you have there, because remember I opened with that book, The Origin of Consciousness in the Breakdown of the Bicameral Mind, right? And so, I don't want to get too much into it, but part of it was that it emerged because they were intelligent enough to create this much civilization. Then everything got too complex, to where, when everybody has God in their head,

Lian (48:48)

Mmm. Mm-hmm.

Richard Nikoley (49:02)

There's like conflict all the time, right? Now, does that sound like anything we've seen lately?

Lian (49:04)

Mm-hmm. Yeah. Kind of does!

Richard Nikoley (49:15)

I think we got on a good way. I think we got a good one right there. Yeah.

Lian (49:21)

Yeah. Well, perhaps we need to come back, I was about to say in five years, but the way things are going, it could be only a year. We need to come back and have a check-in and see how these ideas are showing themselves.

Richard Nikoley (49:37)

Anytime, anytime you invite me back, I'm at your beck and call. Yeah. I like doing these things, right?

Lian (49:46)

Thank you.

Well, I've so enjoyed and appreciated this conversation. Thank you so much. Where can listeners, I say listeners out of habit, it's now viewers and listeners, where can they find out more about you and your work?


Richard Nikoley (50:03)

Well, you can find everything I do, because everything is linked from there, including my social media and everything: Free. The. Animal. Dot com. That's it. FreeTheAnimal.com.


Lian (50:18)

Talking about values, that's a pretty good one right there.

Richard Nikoley (50:23)

22 years blogging this year.


Lian (50:26)

Wow, congratulations. Well, we'll be back. So, Kerem.

Richard Nikoley (50:31)

5,500 posts, 5 million words.


Lian (50:34)

That is incredible, really incredible. Yeah, I often think this podcast is a veteran in podcasting terms, it's 11 years, but yeah, that's certainly a veteran in blogging terms. Well, thank you so much. This has been truly a pleasure. Thank you.


Richard Nikoley (50:52)

Thank you, Lian. It's always great talking with you. We have a good, like, wavelength thing going.

Lian (50:59)

I very much hope you enjoyed watching that and if you did and you're not already subscribed then do hit that bell thingy and subscribe to automatically get each fresh new episode as it's released each week. If you'd like to find out more about the work we do at Be Mythical to guide and support old souls in this new world to live their own unique myth…


Do hop along to bemythical.com and you'll find out all the ways you can join us and go deeper with us on your own mythical journey.


Lots of love for now.


See you again next week.

 THE BE MYTHICAL PODCAST

With hundreds of episodes to choose from, illuminating your path with myth, magic, archetypes, and practical ways to thrive in this crazy modern world. Subscribe to our free weekly podcast, ranked in the top 1.5% of most popular shows in the world!
