Contrast
Original link:

https://www.youtube.com/watch?v=2yd18z6iSyk

2024-01-17 16:34:21

Joe Rogan - 'I Wasn't Afraid of AI Until I Learned This'


There's something that changed in the field of AI in 2017 that everyone needs to know, because I was not freaked out about AI at all.

Until this big change in 2017, that is. It's really important to know this, because we've heard about AI for the longest time and you're like, yep, Google Maps still mispronounces the street name, and Siri just doesn't work.

And this thing happened in 2017.

It's actually the exact same thing that said, all right, now it's time to start translating animal language.

Underneath the hood, the engine got swapped out, and it was a thing called transformers.

And the interesting thing about this new model called transformers is that the more data you pump into it, and the more computers you let it run on, the more superpowers it gets, but you haven't done anything differently.

You just give it more data and run it on more computers.


It's reading more of the internet, you're just throwing more computers at the stuff it's read, and suddenly out pops the ability to explain jokes.

You're like, where did that come from?

Or now it knows how to play chess, and all it's done is predict; all you've asked it to do is predict the next character or the next word.

Give the Amazon example.

Oh yeah, this is interesting.

So this is 2017.

OpenAI releases a paper where they train this AI, one of these transformers, a GPT, to predict the next character of an Amazon review.

Pretty simple.
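The training setup being described, predicting the next character of review text, can be sketched in a toy form. This is just an illustrative character-level frequency model, not OpenAI's actual neural network:

```python
from collections import Counter, defaultdict

def train_char_model(text):
    """Count, for each character, which characters tend to follow it."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(model, ch):
    """Return the most frequently observed next character after `ch`."""
    if ch not in model:
        return None
    return model[ch].most_common(1)[0][0]

# Tiny stand-in corpus of review text (hypothetical data).
reviews = "great product . great price . great service ."
model = train_char_model(reviews)
print(predict_next(model, "g"))  # 'r' — every "g" here is followed by "r"
```

The real model is trained the same way in spirit — reward it for guessing the next character — but with a large neural network instead of a lookup table, which is what lets richer structure emerge.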

But then they're looking inside the brain of this AI, and they discover that there's one neuron that does best-in-the-world sentiment analysis.

Like, understanding whether the human is feeling good or bad about the product.

You're like, that's so strange.

All you asked it to do was predict the next character.

Why is it learning about how a human being is feeling?


And it's strange until you realize: oh, I see why. To predict the next character really well, I have to understand how the human being is feeling, to know whether the next word is going to be a positive word or a negative word.

And this wasn't programmed?

No, no, it was the key to emergent behavior.
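What "one neuron that does sentiment analysis" means, mechanically, is that a single unit's activation value by itself separates positive from negative reviews. A toy sketch with made-up activation numbers (the real finding probed the hidden state of a large trained model):

```python
def neuron_accuracy(activations, labels, threshold=0.0):
    """Fraction of reviews where thresholding a single neuron's
    activation matches the sentiment label (1 = positive)."""
    preds = [1 if a > threshold else 0 for a in activations]
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

# Hypothetical activations of one hidden unit on six reviews.
acts   = [2.1, 1.7, -0.9, 3.0, -2.2, -1.5]
labels = [1,   1,   0,    1,   0,    0]   # 1 = positive review
print(neuron_accuracy(acts, labels))  # 1.0 — this unit separates them perfectly
```

The surprise in the paper was that a unit like this showed up on its own, purely as a side effect of next-character prediction, without anyone training for sentiment.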

And it's really interesting that GPT-3 had been out for, I think, a couple of years until a researcher thought to ask, oh, I wonder if it knows chemistry. And it turned out it can do research-grade chemistry at the level of, and sometimes better than, models that were explicitly trained to do it.

There are these other AI systems that were trained explicitly on chemistry, and it turned out that GPT-3, which was just pumped with more and more of the internet, with more computers and GPUs thrown at it, suddenly knows how to do research-grade chemistry.

So you could say, how do I make X nerve gas?

And suddenly that capability is in there.


And what's scary about it is that we didn't know it had that capability until years after it had already been deployed to everyone.

And in fact, there is no way to know what abilities it has.

Another example is theory of mind: my ability to sit here and model what you're thinking, which is sort of the basis for strategic thinking.

It's like when you're nodding your head right now, I'm testing how well you're following. No one thought to test any of these transformer-based models, these GPTs, on whether they could model what somebody else was thinking.

And it turns out GPT-3 was not very good at it.

GPT-3.5 was at the level, I don't remember the exact details now, of something like a four- or five-year-old, and GPT-4 was able to pass these sort of theory-of-mind tests at near the level of a human adult.
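Theory-of-mind evaluations like the ones being referenced are typically adapted from classic false-belief tasks (the Sally-Anne "unexpected transfer" setup). A hypothetical sketch of one item and a naive pass/fail check, not the actual benchmark code:

```python
# A classic "unexpected transfer" false-belief item, the kind used
# to probe theory of mind in language models (hypothetical example).
story = (
    "Sally puts her ball in the basket and leaves the room. "
    "While she is gone, Anne moves the ball to the box. "
    "Sally comes back. Where will Sally look for her ball?"
)

def score(model_answer: str) -> bool:
    """Pass if the answer tracks Sally's (false) belief, not reality."""
    answer = model_answer.lower()
    return "basket" in answer and "box" not in answer

print(score("Sally will look in the basket."))  # True
print(score("She will look in the box."))       # False
```

Passing requires the model to represent what Sally believes (ball in basket) separately from what is actually true (ball in box), which is exactly the "modeling someone else's mind" ability described above.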

And so it's growing really fast.


You're like, why is it learning how to model how other people think?

And then, all of a sudden, it makes sense.

If you are predicting the next word for the entirety of the internet, then it's going to read every novel, and for novels to work, the characters have to be able to understand all the other characters: what they're thinking and what they're strategizing about.

It has to understand how French people think, and how they think differently from German people.

It's read all of the internet.

So it's read lots and lots of chess games.

So now it's learned how to model chess and play chess.

It's read all the textbooks on chemistry.

So it's learned how to predict the next characters of text in a chemistry book, which means it has to learn chemistry.

So you feed in all of the data of the internet, and it ends up having to learn a model of the world in some way, because language is sort of like a shadow of the world.

It's like you imagine casting light from the world, and it creates shadows, which we talk about as language.


And the AI is learning to go from that flattened language and reconstitute it, to make the model of the world.

And so that's why, the more data and the more compute, the more computers, you throw at these things, the better and better they're able to understand all of the world that is accessible via text, and now video and images.

Does that make sense?

It does make sense.

Now, what is the leap between these emergent behaviors, these emergent abilities that AI has, and artificial general intelligence?

Wilowrid Advertisement
video content Image generated by Wilowrid

And when is it? When do we know, or do we know? The speculation over the internet, when Sam Altman was removed as the CEO and then brought back, was that they had not been forthcoming about the actual capabilities, whether of GPT-5 or artificial general intelligence, and that some large leap had occurred.

That's some of the reporting about it.

Obviously, the board had a different statement, which was about Sam.

The quote was, I think, "not consistently being candid with the board."

So, a funny way of saying lying.

Yeah.

So basically, the board was accusing Sam of lying.

What that was about specifically, they didn't say. I think one of the failures of the board is that they didn't communicate nearly enough for us to know what was going on, which is why a lot of people then think, well, was there this big crazy jump in capabilities?

And that's the thing.


And Q* went viral. Ironically, it goes viral because the algorithms of social media pick up on Q*, which has this mystique to it: it must be really powerful, this breakthrough.

And then that becomes kind of a theory on its own.

So it kind of blows up, but we don't currently have any evidence. And we know a lot of people who are around the companies in the Bay Area. I can't say for certain, but my sense is that the board acted based on what they communicated, and that there was not a major breakthrough that led to, or had anything to do with, this happening.

But to your question, you're asking: what is AGI, artificial general intelligence, and what's spooky about it?

So just to sort of define it. Let me just say this before you get there, as we start talking about AGI.


So that's of course what OpenAI has said they're trying to build; it's their mission statement. They say, we have to build an aligned AGI, meaning that it does what human beings say it should do, and also takes care not to do catastrophic things.

You can't have a deceptively aligned operator building an aligned AGI.

And so I think it's really critical, because we don't know what happened with Sam and the board, that the independent investigation they say they're going to do actually happens, that they make the report public, and that it's actually independent. Because either we need to have Sam's name cleared, or there need to be consequences.

You need to know what's going on, because you can't have something this powerful and have a problem with the person who's running it, or something like that.

So there's not honesty about what's there?



