Video? Yeah, a little bit, yeah. Um, what I was told is YouTube's ad revenue now comes more from TVs than even the actual YouTube app. So you can kind of see where people are actually beginning to spend more time, on TVs now, more than even the mobile app. So there's probably something there. Podcasts are growing a lot. A lot of people, and of course you're making one of the most listened-to podcasts, but it's a thing Instagram is not really getting, and it's going more towards Apple Podcasts and Spotify. So there are always people figuring out new forms of content that aren't necessarily going to Instagram. So that's an opportunity.

If I'm able to aggregate every Indian podcaster, and improve the quality of their video, or, I don't know, if I include a chat function where listeners can talk to the podcaster and the guest, have some angle like that, do you think, if I were able to aggregate that, is it a possibility?

Definitely. Another thing that people haven't really tried is live-streaming the podcast. Let's say we're talking now. The way podcasts work is: we record it, you edit it, we post it, and then people listen to it, but there's no communication between us and them, right? X tried that with live streams, and Instagram has it too, Instagram Live, but it's not really podcasts.

Right, right, right. Yeah.

But there's something where you can consume all the podcasts, and you can also talk to the people who did the podcast, and they would respond to you in the comments. You could probably say, "Hey, I want to hear this Nikhil thing, but only the parts where he talks about AI," and it'll just edit it really fast and make a new version, because you just want to listen to that. That's something YouTube doesn't do well.

What do you call it when you convert video to text? There's a word for it. Transcription.

Yeah, they don't do great at transcriptions, right. But again, I'm just saying, why do you even need to see the transcript? Transcripts are there as a hack to get to what you want. If you literally just enter a prompt and say, "Just make a version of this podcast for me with only the parts where Aravind and Nikhil talk about AI or neural networks," it just creates that segment and you just listen to that.

And that could happen now? Because I thought most large language models are text; they're not consuming video yet.

Right, exactly. So you don't need the video part as much; you just need to make sure the transcript is pretty accurate. Or you even take the MP3 file, the audio file, and the long context is good enough to consume all of it. Then you just say, "I want only these parts out," and it'll tell you the timestamps, and you take that and make a video out of it. It's going to have rough edges, I'm sure it's not going to work perfectly, but with some engineering you can make something like this happen. The hard part, honestly, Nikhil, is that you've got to start from scratch. You've got to create incentives for people to consume stuff, so there's some new element needed, and then a lot of sharing.
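A minimal sketch of that "give me only the AI parts" pipeline, assuming the open-source whisper package for timestamped transcription and ffmpeg on PATH for cutting; a simple keyword filter stands in for the LLM call that would actually pick the relevant segments:

```python
# Hedged sketch: transcribe a podcast with timestamps, keep only segments
# matching a topic, and cut a personalized episode. The keyword filter is a
# stand-in for an LLM that would select segments from the transcript.
import subprocess
import whisper  # pip install openai-whisper

model = whisper.load_model("base")
result = model.transcribe("podcast.mp3")  # segments carry start/end times

keywords = ("ai", "neural network")
clips = [
    (seg["start"], seg["end"])
    for seg in result["segments"]
    if any(k in seg["text"].lower() for k in keywords)
]

# Cut each matching span; "-c copy" snaps to keyframes, hence rough edges.
for i, (start, end) in enumerate(clips):
    subprocess.run(
        ["ffmpeg", "-y", "-i", "podcast.mp3", "-ss", str(start),
         "-to", str(end), "-c", "copy", f"clip_{i:03d}.mp3"],
        check=True,
    )

# Concatenate the clips into one custom episode.
with open("clips.txt", "w") as f:
    for i in range(len(clips)):
        f.write(f"file 'clip_{i:03d}.mp3'\n")
subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "clips.txt",
     "-c", "copy", "custom_episode.mp3"],
    check=True,
)
```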
Now after that came the network effects: if you had a personal computer at home and I had one, we could figure out a way to talk to each other, which is the internet, and then the World Wide Web, and then mobile, cloud, and now AI. It's a very simplistic way to describe it, and there are a lot of details here.

No, this is actually very useful, because whenever I try to learn more about this field, online or from the people I speak to, I get the high-level, non-nuanced, generic stuff that everybody is saying, but I don't have that bridge in my brain which goes from, okay, it started like this, then this happened, then that happened. I interviewed Yann recently, Yann LeCun, a few months ago, and we spoke for many hours. I spent a lot of time sitting and trying to learn about JEPA and machine learning and neural networks and what his creations were. But the manner in which I explained it, or we tried to portray it, I think got too muddled, because I don't have a clear understanding of much of this. So when you say we moved from the internet to today's AI, when everybody's talking about AI, what was that one thing, if you had to fixate on one thing? Why is AI in 2025 different from when people spoke about AI in 2010?

I think the biggest change from 2010 to the 2020s, I would say, not just 2025, is that this thing called neural networks actually works. The forefathers, like LeCun or Hinton or Bengio, did a lot of work to establish the foundations, but one guy single-handedly, with, of course, a group of amazing engineers who worked with him, truly made it work, and I'd say that's Ilya Sutskever. And I think the magic sauce was: throw a lot of data and compute at it. Now you can ask, "Oh wait, is that really all? Was it really that simple?" And honestly, yes. It came down to blind faith in doing things.

I'm sorry, I'm interrupting you again, but can you explain what a neural network actually is? I have a little bit of history with this, because I work in the stock investor world, and we've had neural networks for a long time. I remember seeing this over much of the last decade, where you would take a bunch of different data factors, maybe time, price, volume, feed all this data into a neural network, and try to get it to predict what will happen next, to start maybe a robo-advisory kind of service, or to figure out how a computer might be able to predict. But none of this played out in the manner we perceived it when it came to the stock market, and I'm talking about the last decade. Can you define what a neural network is, in very simple words?

So a neural network is a network of artificial neurons connected to each other layer by layer. And an artificial neuron is just a computational unit that takes input numbers and gives you an output number. It's called a neural network because it's inspired by the biological neural network, which is the human brain, but it's not exactly meant to work the same way either.
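In code, that "computational unit that takes input numbers and gives you an output number" can be as small as this; a minimal Python sketch, with a sigmoid as the example nonlinearity:

```python
# One artificial neuron: weighted sum of inputs plus a bias, passed
# through a nonlinearity. The weights and bias are what get learned.
import math

def neuron(inputs, weights, bias):
    # weighted sum (dot product) of inputs and weights
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # sigmoid squashes the result into (0, 1); other nonlinearities work too
    return 1.0 / (1.0 + math.exp(-z))

# e.g. three input numbers in, one output number out
print(neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.2], bias=0.05))
```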
In fact, that mismatch with the brain is actually why it works in practice, because a lot of people tried to make it work the exact same way and failed at it. Think about it as a massive circuit that you're feeding numbers to, and it spits out new numbers.

And it spits out new numbers based on the numbers that I have put in and the patterns it recognizes in those?

Correct. Yeah, exactly.

In the stock market example, if we were to stick to it: you put so much data into a neural network, and it predicts what might happen tomorrow based on what happened yesterday. But stock markets often tend to be random. There is a school of thought, they call it technical analysis, where people believe that patterns exist, and they try to map out what patterns happened in the past and how they will repeat themselves. But, and this is a bit selfish because I'm sticking to the stock market example, what if the past patterns do not recur in the future? Then what does the neural network predict?

That's a good question. Look, neural networks can be trained to predict anything, right? Standing alone, without the prediction task, the loss function, the neural network itself is simply a mathematical function. Mm-hmm. A very nonlinear one. Think about it as some extremely high-order polynomial function.

What was the last word you said?

An extremely high-order polynomial function. By that, all I mean is: very nonlinear, with a lot of higher-order interactions and multiplications.

Can you help me picture a neural network? You said it was meant to mimic brain chemistry, but it doesn't.

Yeah, think about it like this. Let's say you're feeding in three or four numbers at the input layer. The first layer will take that and transform it; imagine it applies some sigmoids or...

What do you mean when you say transform? Are you talking about transformers in Google and their development and stuff like that?

No, I don't mean specifically a transformer. I just mean a mathematical function, some function f of those four numbers, where that function is being learned. The way it's implemented in practice is that there's a matrix, which starts out as a bunch of random numbers, and it multiplies with the input you feed in. Then there's some sigmoid, or some kind of nonlinear function, that takes that and modifies it. Why do you need that? Because that's where you bring in the higher-order dependencies; that's what you're learning. Then imagine doing this over four or five different layers, and then you have a bunch of outputs. It could be four outputs, it could be 40 outputs; that depends on the way you constructed the neural net. And then there's a target output that you have based on the dataset. The current prediction is taken, the target output is taken, the difference is calculated, and you're updating the parameters of the neural net, which are those matrices at each layer, so that you minimize the loss. And not the loss on one single input, but the loss on a giant dataset, like millions and millions of examples.
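That whole recipe, random matrices, a nonlinearity between layers, a loss averaged over the whole dataset, and updates to the matrices that shrink it, fits in a short NumPy sketch. Toy data and assumed shapes only, nothing production-grade:

```python
# Minimal two-layer neural net with manual backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # 200 examples, 4 inputs each
y = np.sin(X @ np.array([1.0, -2.0, 0.5, 0.3]))[:, None]  # toy targets

W1 = rng.normal(scale=0.5, size=(4, 16))      # layer 1: starts random
W2 = rng.normal(scale=0.5, size=(16, 1))      # layer 2: starts random

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    h = sigmoid(X @ W1)                       # matrix multiply + nonlinearity
    pred = h @ W2                             # current predictions
    err = pred - y
    loss = np.mean(err ** 2)                  # loss over the whole dataset
    # backpropagation: gradient of the loss w.r.t. each weight matrix
    dpred = 2 * err / len(X)
    dW2 = h.T @ dpred
    dW1 = X.T @ ((dpred @ W2.T) * h * (1 - h))
    W1 -= lr * dW1                            # update parameters to shrink loss
    W2 -= lr * dW2

print(f"final loss: {loss:.4f}")
```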
To go back to the stock market example: when we would put data into a neural network and didn't get the output we desired, we would go back and curve-fit the data to get a more desirable output. But does it still rely on the premise of recognizing patterns historically?

Implicitly, yeah. If it has to do its job of predicting the output reliably, then it has to recognize whatever patterns it needs to be able to do that, right? Let's say, and I'm going to the example of just predicting the next word: if a neural network has to be good at predicting the next word given the previous words, then it implicitly has to understand grammar, sentence construction, common sense, all that stuff. Or if a neural network has to predict the next character in a program that you're writing, it has to somewhat understand the logic. So whether a neural net captures the useful patterns really depends on what task you're training it on. If you're training it on the raw stock price, let's say you just have the opening price of Nvidia every single day, sure, it's not going to be useful on its own, because there are so many other factors that influence the price. If all it has is each day's opening price, there aren't really that many patterns in that.

Anyway, there's this idea in machine learning: the model can only learn whatever actual patterns exist, and everything else in the data is irreducible noise. By that I mean no loss function can hope to capture any of it. You can exactly fit it, but it's not going to generalize. So as long as there is something that's truly signal in your data, and the way you crafted the task captures the signal, that is, doing the task requires capturing the signal, then yes, the model will definitely be able to capture interesting patterns.

And when you say machine learning, I'm sorry, again, can you distinguish: what is the difference between a neural network and machine learning?

Yeah. Neural networks are one way to do machine learning.

And how would you define machine learning?

Machine learning is, broadly, training a computer program to do something intelligent, to make intelligent predictions on the data you're given, such that, given a recorded bunch of inputs, you can make intelligent predictions on new inputs you've not seen before. The predictions could be anything. Neural networks happen to be a particular way of doing machine learning where the predictions are made through this abstraction called a neural net, which takes an input, applies matrices and nonlinearities, stacks them repeatedly, makes predictions, and updates itself using backpropagation, the way to change the weights depending on your loss. There are many other ways to do machine learning: support vector machines, linear regression, logistic regression, a whole bunch of techniques. But it happens that neural networks are the way to do things when you really want to benefit from scale, where the predictions keep improving the more data or the more compute you throw at the problem. Neural networks happen to be the most scalable way to do things.
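As a hedged illustration of that comparison, here is the same small classification task handled by two of the methods just named, logistic regression and a small neural network, using scikit-learn. On a toy dataset the gap is modest; the scale argument only shows up with far more data and compute:

```python
# Compare a linear method with a small neural net on a toy nonlinear task.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=2000, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LogisticRegression().fit(X_tr, y_tr)
neural = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                       random_state=0).fit(X_tr, y_tr)

print("logistic regression accuracy:", linear.score(X_te, y_te))
print("neural network accuracy:     ", neural.score(X_te, y_te))
```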
But if you have only 100 or 200 examples, other algorithms might work just as well.

So where does a large language model sit amidst all this? What is it?

So a large language model is essentially a giant neural network that's trained on this one task of predicting the next word from the previous words, except it's trained on the whole internet. It's trained on terabytes of text, trillions of tokens: books, code, textbooks, general web pages, news articles, all these things.

The distinction being it's just text? It's not training on videos and pictures and stuff like that?

Uh, I think, I think...
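Next-word prediction in its most stripped-down form can be sketched in a few lines; an LLM does the same job with a giant neural net and trillions of tokens instead of this hypothetical word-pair table:

```python
# Count which word follows which in a tiny corpus, then predict the
# most frequent follower. A crude stand-in for next-word prediction.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # most common word seen after `word` in the training data
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (ties broken by first occurrence)
```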