This video tutorial demonstrates LangChain's expression language for building chains with the pipe operator. It covers `RunnablePassthrough`, `RunnableParallel` (using two ChromaDB retrievers for context), and `RunnableLambda` (turning plain Python functions into chain components). The tutorial works through a question-answering example, showing how to ingest knowledge into ChromaDB, create retrievers, and integrate them with an LLM to generate responses. LangChain tracing is also shown for monitoring chain execution.

Now let's build the chain. We'll create the retrieval part first: I'm asking it to do the retrieval first, and once the retrieval is done, to send that information on to the prompt. Why are we doing it this way? Because, if you look at the diagram, retrieval comes before the prompt; retrieval has to be done before its output is sent to the prompt. Once the prompt is filled in, it is sent to the LLM, and when we get the LLM's output, we send it to the output parser. So the chain is: retrieval, then prompt, then LLM, then output parser. With this, our chain is ready. All we have to do is give it an input and everything will run. Everything from this dotted line onward is ready, which means our RunnableParallel is ready. We are saying, hey, the context in the prompt has to come from retriever A. And the way retrievers work is that they also take a question: when retriever A runs, it takes the question, and that question becomes the basis on which vector store A answers. Whatever it returns becomes part of the context. The question itself has to go through as a passthrough, and that's why you see the question coming in from this side. So the chain will run. And now we'll say chain.
invoke(). Oops, I did not pass any question, so it is saying it needs an input. "When was VP born?", that's my question. Let's see: "Input to ChatPromptTemplate is missing variables." I think there is some problem with the... okay, we have a spelling error here; it should be "question". If you look at the error, it says "questionj". So what is happening is that when you send this as an input, the prompt template is expecting this key; it did not get this key, and therefore it complained. It said, hey, I did not find "questionj", or however it was spelled. So let's run this again. Now, if you go here to generative geek, see, I'm getting a sequence in the trace. The sequence is: there is a retriever, the retriever's query is "when was VP born", and I have two documents, one saying "VP's birthday is on 1st October" and one saying "half of my text will be part of a". This becomes part of the context. ChatOpenAI is now being sent: hey, answer the following question based on the context, where the context is the first document ("VP's birthday is on 1st October") and the second document ("half of my text will be"), and the question is "when was VP born". The answer: "VP was born on 1st October." So it is not able to give me the year, because it doesn't have that information. (There's also a warning, "number of requested results 4 is greater than number of elements in index 2, updating n_results to 2", but that's just a warning.) So what we'll do now is enhance the prompt string. We had this prompt string.
I'm just going to copy this prompt string and enhance it. This time the context is no longer just "context": there is "context_a", which will come from retriever A, and I'll also add a new context to it, which I'll call "context_b". So this becomes my context B, and I now have a more enriched prompt string. My prompt will now expect two context inputs from me. So let's see how we'll do it. Last time, we first created our prompt string, then we created the prompt; same thing here, and I'll run this. After that, we created a retrieval step, and we'll do the same thing. We'll say, hey, retrieval_enhanced is equal to RunnableParallel, and this time we are going to pass it a context_a. We are required to pass it a context_a: context_a is retriever A. Next we pass it context_b, which is basically retriever B. It is also expecting a question. See, when you create a prompt template, it expects all of these variables to come in as part of an input from somewhere, either from a previous layer or from the user's input. It doesn't care where; it just needs them from somewhere. The question, for us, is a RunnablePassthrough: when we say RunnablePassthrough, we are just passing through whatever we got.