Recently I finished Andrew Ng's course on Coursera – Generative AI for Everyone. These are my notes from week 1 of that course.
Generative AI is defined as artificial intelligence systems that can generate high-quality content like images, text, audio and video. Through ChatGPT, we have seen that it can produce text. Adobe has AI built into its tools, which lets us create images from prompts.
Andrew makes the point in one of the videos that AI is a general-purpose technology. Just as electricity is used to power many things, AI can be applied to many different problems. We already see AI applications in our day-to-day lives, like spam filtering, recommendations on Amazon/Netflix, ChatGPT, etc.
Some applications of generative AI –
- Writing – Since LLMs work by predicting the next words and sentences, they can be used to write something for you. For example, you can ask an LLM to write your LinkedIn post or blog post for you. LLMs are also used for translating from one language to another.
- Reading – LLMs can also read long texts and condense them into shorter summaries. For example, you can ask an LLM to go through your resume and create a LinkedIn summary for you (see the sketch after this list).
- Chatting – LLMs can be used to build specialized chatbots for an organization, tailored to its requirements.
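To make the "Reading" example concrete, here is a minimal sketch (not from the course) of calling an LLM API to turn resume text into a LinkedIn summary. It assumes the OpenAI Python SDK, an `OPENAI_API_KEY` in the environment, and a placeholder model name; any hosted LLM would work similarly.

```python
# A minimal sketch of the "Reading" use case: asking an LLM to turn
# resume text into a short LinkedIn summary. The model name and prompt
# wording are assumptions for illustration, not from the course.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

resume_text = """
Software engineer with 5 years of experience in Python and cloud services.
Led migration of a monolith to microservices; mentored two junior engineers.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; swap in whichever model you have access to
    messages=[
        {"role": "system", "content": "You write concise, first-person LinkedIn summaries."},
        {"role": "user", "content": f"Summarize this resume into a 2-3 sentence LinkedIn summary:\n{resume_text}"},
    ],
)

print(response.choices[0].message.content)
```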
Limitations of LLMs –
- Knowledge cutoff – An LLM's knowledge is confined to the data it was trained on, up to its cutoff date.
- Hallucinations – LLMs can make things up while sounding very confident and authoritative.
- Input length (prompt length/context length) and output length are limited.
- LLMs don't work well with tabular (structured) data.
- LLMs can reflect the biases of the data they were trained on.
Thanks for stopping by!