Recently I finished Andrew Ng’s course on Coursera – Generative AI for Everyone. These are my notes for week 2 of that course.
The second week focused on using generative AI to create software applications – how much it costs and how long it takes.
Let’s consider building a software application for restaurant reputation monitoring using traditional Machine Learning.
- Collect data (a few hundred to a few thousand examples) and label it. (approx. 1 month)
- Find the right AI model and train it on the data so it learns to output positive/negative depending on the input. (approx. 3 months)
- Find a cloud service to deploy and serve the model. (approx. 3 months)
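To make the traditional route concrete, here is a toy version of that supervised pipeline in plain Python – a tiny naive Bayes sentiment classifier trained on four hand-labeled reviews. The reviews and labels are invented; a real project would use thousands of examples and a proper ML library.

```python
import math
from collections import Counter

# Toy supervised pipeline: count words per class, then classify new
# reviews with naive Bayes. All training data below is invented.
labeled_reviews = [
    ("the food was great and the staff friendly", "positive"),
    ("amazing pasta will come back", "positive"),
    ("terrible service and cold food", "negative"),
    ("long wait and rude waiter", "negative"),
]

def train(examples):
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    return counts, vocab

def predict(model, text):
    counts, vocab = model
    def log_score(label):
        total = sum(counts[label].values())
        # Laplace smoothing so unseen words don't zero out a class;
        # the class prior is skipped because both classes are balanced.
        return sum(math.log((counts[label][w] + 1) / (total + len(vocab)))
                   for w in text.split())
    return max(counts, key=log_score)

model = train(labeled_reviews)
print(predict(model, "great food and friendly staff"))  # positive
print(predict(model, "rude staff and terrible food"))   # negative
```

Even this toy shows where the months go in a real project: gathering and labeling data, and picking and tuning the model.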
This process generally takes months to complete. However, you can do the same using generative AI within weeks. Here are the steps involved –
- Scope the project
- Build an initial prototype, then work on improving it.
- Evaluate the outputs to increase the system’s accuracy.
- Deploy using a cloud service and monitor the system.
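The reason the prompt-based route is so much faster is that there is no model to train – the prototype is mostly prompt construction. A minimal sketch, where `call_llm` is a hypothetical stand-in for whichever LLM API you actually use:

```python
# Prompt-based sentiment classification: no data collection or training,
# just a carefully worded prompt sent to a general-purpose LLM.

def build_prompt(review: str) -> str:
    return (
        "Classify the sentiment of the following restaurant review "
        "as exactly one word, positive or negative.\n\n"
        f"Review: {review}\nSentiment:"
    )

def call_llm(prompt: str) -> str:
    # Placeholder: in a real app this would send `prompt` to an LLM
    # API and return the model's completion.
    raise NotImplementedError

prompt = build_prompt("The pasta was cold and the waiter was rude.")
print(prompt)
```

The iteration loop in the steps above then happens on the prompt wording and the evaluation set, not on model weights.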
Retrieval Augmented Generation –
As we know, an LLM takes a prompt as input, and the length of that prompt is limited – any additional context we provide to improve the answer must fit within that limit.
RAG is a technique that supplies the LLM with additional, relevant context while staying within the prompt length limit. It occurs in three steps –
- Given a prompt, search your documents for the text most relevant to generating the answer.
- Add that retrieved text to the prompt.
- Then feed the new prompt with the additional context to the LLM.
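The three steps above can be sketched in a few lines of Python. Here retrieval is naive word-overlap scoring and the documents are made up – real systems use embeddings and a vector database:

```python
# Minimal RAG sketch: retrieve the most relevant document, add it to
# the prompt, then the augmented prompt would be sent to an LLM.
documents = [
    "Opening hours: the restaurant is open 11am-10pm, Tuesday to Sunday.",
    "Menu: we serve Neapolitan pizza and fresh handmade pasta.",
    "Parking: free parking is available behind the building after 6pm.",
]

def retrieve(question, docs, k=1):
    # Step 1: rank documents by how many words they share with the question
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(question, docs):
    # Step 2: add the retrieved text to the prompt
    context = "\n".join(retrieve(question, docs))
    # Step 3: this augmented prompt is what gets fed to the LLM
    return (f"Context:\n{context}\n\n"
            f"Answer using the context above.\nQuestion: {question}")

prompt = build_rag_prompt("is parking available near the restaurant", documents)
print(prompt)
```

Only the few documents that matter for this question end up in the prompt, which is how RAG stays within the context limit.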
Many applications use this technique – for example, Coursera Coach, which uses information from the Coursera website to answer specific questions that students ask. Many companies are creating chatbots for their offerings and use RAG to provide the additional context.
Fine-tuning –
Most generative AI models are trained on data from the web; they are general-purpose LLMs. We can take such an LLM and fine-tune it on our domain-specific data, so that it learns to give the niche output we want.
For example, a general-purpose LLM will not be able to give correct output on medical or legal records. We can first fine-tune our LLM on these records so that it starts giving correct output for new medical records.
BloombergGPT is one such model, which was built specifically for financial data.
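As a rough sketch of what preparing fine-tuning data can look like: many fine-tuning APIs accept training examples as JSONL (one JSON object per line). The field names and the medical-style examples below are invented – check your provider’s documentation for the exact format:

```python
import json

# Hypothetical fine-tuning dataset: prompt/completion field names and
# example contents are invented for illustration only.
examples = [
    {"prompt": "Extract the diagnosis: Patient reports persistent cough and mild fever.",
     "completion": "Upper respiratory infection."},
    {"prompt": "Extract the diagnosis: Patient reports joint pain and morning stiffness.",
     "completion": "Possible rheumatoid arthritis."},
]

# Write one JSON object per line (JSONL), the format many APIs expect
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The hard part in practice is not this file format but collecting enough high-quality domain examples.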
How to choose a model?
- Based on Model size –
- 1B parameters – pattern matching and basic knowledge of the world.
- 10B parameters – greater world knowledge; can follow basic instructions.
- 100B parameters – rich world knowledge and complex reasoning.
- Open source/closed source
- Closed source models – available through a cloud programming interface (API); easy to use and not that costly.
- Open source models – full control over the model and data; can be run on your own systems.
LLM, Instruction Tuning and RLHF –
Instruction Tuning is training your LLM on a specific set of questions and answers, or examples of the LLM following instructions, so that it learns how to respond to specific types of questions (instructions).
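As an illustration, one common way to present such examples is to render each instruction/response pair into a single training string using a template. The template below is made up – different models use different formats:

```python
# Hypothetical instruction-tuning template: one instruction/response
# pair becomes one training string the LLM learns to continue.
def format_example(instruction: str, response: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"

text = format_example(
    "List three pizza toppings.",
    "Mushrooms, olives, and basil.",
)
print(text)
```

Thousands of such strings teach the model that text after "Instruction" should be answered, not merely continued.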
Reinforcement Learning from Human Feedback (RLHF) – to further improve the LLM, we can use supervised learning to rate its answers. It involves two steps –
- Train an answer quality (reward) model. Given a prompt, we get multiple answers from the LLM, then store these answers and their ratings (scores given by humans) in a dataset. Then we train an ML model on this data to rate answers automatically.
- Have the LLM generate more responses for different prompts, and train the LLM to generate answers that receive higher ratings.
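Here is a toy sketch of the first step – “training” a reward model from human ratings. This one just learns an average rating per word; real reward models are neural networks, and the ratings data below is invented:

```python
from collections import defaultdict

# Invented (answer, human rating 1-5) pairs standing in for the
# human-labeled dataset described above.
rated_answers = [
    ("a clear and helpful explanation", 5),
    ("helpful answer with clear steps", 4),
    ("a vague and wrong reply", 1),
    ("wrong and unclear response", 2),
]

def train_reward_model(data):
    # "Training" here is just averaging the rating of every answer
    # each word appeared in.
    sums, counts = defaultdict(float), defaultdict(int)
    for answer, rating in data:
        for word in answer.split():
            sums[word] += rating
            counts[word] += 1
    return {w: sums[w] / counts[w] for w in sums}

def score(model, answer):
    # Average learned rating of known words; 3.0 (neutral) for unseen ones
    words = answer.split()
    return sum(model.get(w, 3.0) for w in words) / len(words)

model = train_reward_model(rated_answers)
print(score(model, "clear helpful steps") > score(model, "vague wrong reply"))  # True
```

In the second step, this automatic scorer replaces the human raters, so the LLM can be pushed toward higher-rated answers at scale.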
Thanks for stopping by!