Denken
Menu
  • About Me
  • Deep Learning with Pytorch
  • Generative AI: Tutorial Series
  • Python Tutorials
  • Contact Me

Tag: LLM

Generative AI: LLMs: How to do LLM inference on CPU using Llama-2 1.9

Posted on September 7, 2023 by Aritra Sen

In the last few posts, we talked about how to use the Llama-2 model for different NLP tasks, and in most cases I have used a GPU in Kaggle kernels. However, there may be situations where you don’t have a GPU and need to build apps using only a CPU. In this short…

Read more
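The full post walks through the setup step by step; as a rough companion, here is a minimal sketch of CPU-only Llama-2 inference using llama-cpp-python with a quantized GGUF file. The model path, context size, and generation settings below are illustrative assumptions, not the exact values from the post.

```python
# CPU-only inference with a quantized Llama-2 model via llama-cpp-python.
# Assumption: `pip install llama-cpp-python` and a quantized GGUF file downloaded
# locally (the path below is a placeholder, not the file used in the post).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # quantized weights on disk
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads to use for generation
)

prompt = "Explain in two sentences why quantization makes CPU inference practical."
output = llm(prompt, max_tokens=128)
print(output["choices"][0]["text"])
```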

Generative AI: LLMs: Reduce Hallucinations with Retrieval-Augmented-Generation (RAG) 1.8

Posted on August 26, 2023 by Aritra Sen

Though there is huge hype and excitement around LLMs because they are really good at several NLP-related tasks, they also come with a few of the below-mentioned issues: Frozen in time – LLMs are “frozen in time” and lack up-to-date information, because these LLMs are trained with…

Read more
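To make the RAG idea concrete (ground the answer in retrieved documents instead of relying only on what the model memorized during training), here is a library-light sketch using sentence-transformers for the embeddings. The documents, the question, and the embedding model are toy assumptions for illustration.

```python
# Minimal retrieval-augmented generation (RAG) sketch:
# 1) embed a small document collection, 2) retrieve the chunks most similar to the
# question, 3) stuff them into the prompt so the LLM answers from fresh context.
# Assumption: sentence-transformers is installed; docs and question are toy examples.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Llama-2 was released by Meta in July 2023.",
    "RAG augments the prompt with documents fetched from an external store.",
    "Vector stores index embeddings for fast similarity search.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

question = "How does RAG help with out-of-date knowledge?"
q_vec = embedder.encode([question], normalize_embeddings=True)[0]

scores = doc_vecs @ q_vec                 # cosine similarity (vectors are normalized)
top_k = np.argsort(scores)[::-1][:2]      # indices of the two best matches

context = "\n".join(docs[i] for i in top_k)
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(prompt)  # this grounded prompt would then be sent to the LLM
```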

Generative AI: LLMs: Semantic Search and Conversation Retrieval QA using Vector Store and LangChain 1.7

Posted on August 25, 2023 by Aritra Sen

In the last few blog posts, we have gone through the basics of LLMs, different fine-tuning approaches, and the basics of LangChain. In this post we will mainly work with embeddings from an LLM: how we can store these embeddings in a vector store, and how we can do semantic search using this persistent vector DB…

Read more
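As a rough sketch of the workflow the post describes (embed texts, persist them in a vector store, then query it), here is a minimal example written against the 2023-era LangChain import paths; the texts, embedding model, and persist directory are assumptions for illustration.

```python
# Build a persistent Chroma vector store with LangChain and run a semantic search.
# Assumptions: 2023-era langchain API, chromadb and sentence-transformers installed;
# the texts and the directory name are placeholders.
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

texts = [
    "LoRA adds small trainable low-rank matrices to a frozen model.",
    "QLoRA combines LoRA adapters with a 4-bit quantized base model.",
    "RAG retrieves documents and adds them to the prompt at inference time.",
]

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# persist_directory keeps the index on disk so it can be reloaded later
db = Chroma.from_texts(texts, embeddings, persist_directory="./chroma_db")
db.persist()

# semantic search: nearest neighbours of the query embedding
hits = db.similarity_search("How is QLoRA different from LoRA?", k=2)
for doc in hits:
    print(doc.page_content)
```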

Generative AI: LLMs: LangChain + Llama-2-chat on Amazon mobile review dataset 1.6

Posted on August 17, 2023 by Aritra Sen

In the last post we talked in detail about how we can fine-tune a pretrained Llama-2 model using QLoRA. Llama-2 has two sets of models: the first is the pretrained model used in the previous blog post, and the second is an instruction-finetuned Llama-2 chat model, which we will use in this post…

Read more
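As a hedged sketch of how a Llama-2-chat pipeline can be wired into LangChain, the example below wraps a Hugging Face text-generation pipeline as a LangChain LLM and drives it through a prompt template. The model id, generation settings, and the review text are illustrative assumptions, not the exact setup or data from the post.

```python
# Wrap a Hugging Face text-generation pipeline as a LangChain LLM and run an LLMChain.
# Assumptions: 2023-era langchain imports; access to the gated meta-llama checkpoint
# (any local causal-LM checkpoint could be substituted); accelerate installed.
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=200)
llm = HuggingFacePipeline(pipeline=pipe)

template = "Summarize the following product review in one sentence:\n\n{review}"
prompt = PromptTemplate(template=template, input_variables=["review"])
chain = LLMChain(llm=llm, prompt=prompt)

# toy review text, not taken from the Amazon mobile review dataset
print(chain.run(review="The phone case fits well but the buttons are hard to press."))
```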

Generative AI: LLMs: Finetuning Llama2 with QLoRA on custom dataset 1.5

Posted on July 27, 2023 (updated August 17, 2023) by Aritra Sen

In the last post in this series, we went through the inner workings of the LoRA fine-tuning process. In this blog post we will combine the concepts of LoRA with a quantization method. We will use the newly launched Llama-2, one of the biggest LLM launches in the history of open-source models. Below…

Read more
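A condensed sketch of the QLoRA recipe the post covers: load the base Llama-2 model in 4-bit with bitsandbytes, prepare it for k-bit training, and attach LoRA adapters with peft. The rank, target modules, and model id below are illustrative assumptions rather than the exact hyperparameters used in the post.

```python
# QLoRA sketch: 4-bit quantized base model + trainable LoRA adapters on top.
# Assumptions: transformers, peft and bitsandbytes installed; hyperparameters illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store base weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # do the matmuls in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attach adapters to the attention projections
    bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapter weights are trainable
```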

Generative AI: LLMs: LoRA fine tuning 1.4

Posted on July 19, 2023 by Aritra Sen

In the last post we discussed two approaches to fine-tuning using the feature-based method; these options may not always be efficient in terms of computational as well as time complexity. Full fine-tuning of any LLM needs to stitch the below-mentioned steps together, and the combination of all these steps can produce a lot of…

Read more
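To make the LoRA mechanics concrete, here is a tiny PyTorch sketch of the core trick: the pretrained weight stays frozen and only a low-rank update, scaled by alpha/r, is learned and added to the layer output. Dimensions and rank are arbitrary toy values, and real implementations such as peft wrap existing linear layers rather than defining new ones.

```python
# Core LoRA idea in plain PyTorch: output = frozen_linear(x) + (alpha / r) * B(A(x)),
# where only the low-rank matrices A and B are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)                 # pretrained weight frozen
        self.base.bias.requires_grad_(False)
        self.lora_A = nn.Linear(in_features, r, bias=False)    # down-projection
        self.lora_B = nn.Linear(r, out_features, bias=False)   # up-projection
        nn.init.zeros_(self.lora_B.weight)                     # update starts at zero
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

layer = LoRALinear(768, 768)
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)  # far fewer trainable parameters than a full 768x768 update
```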

Generative AI: LLMs: Feature base finetuning 1.3

Posted on July 12, 2023 by Aritra Sen

In the last post we talked about how to do in-context finetuning using few-shot techniques. In-context finetuning works when we don’t have much data or don’t have access to the full model. This technique has certain limitations: the more examples you add to the prompt, the more the context length increases, and…

Read more
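A small sketch of the feature-based approach this post’s title refers to: run text through a frozen pretrained encoder, keep its pooled embeddings as fixed features, and train only a lightweight classifier on top. The encoder name, texts, and labels are toy assumptions for illustration.

```python
# Feature-based finetuning sketch: a frozen transformer encoder acts as a feature
# extractor, and only a small classifier is trained on the pooled embeddings.
# Assumptions: transformers, torch and scikit-learn installed; data is a toy example.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

texts = ["great battery life", "screen cracked in a week", "fast delivery, works fine"]
labels = [1, 0, 1]  # toy sentiment labels

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")
encoder.eval()  # frozen: we never backpropagate into the encoder

with torch.no_grad():
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state   # (batch, seq_len, hidden_dim)
    features = hidden.mean(dim=1).numpy()         # mean-pooled sentence features

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))  # only this classifier head was trained
```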

Generative AI: LLMs: Finetuning Approaches 1.1

Posted on July 6, 2023 by Aritra Sen

In the last post in this Generative AI with LLMs series we talked about different types of LLM models and how they are generally pre-trained. These deep learning language models with large numbers of parameters are generally trained on open-source data such as Common Crawl, The Pile, MassiveText, blogs, Wikipedia, GitHub, etc. These datasets are generally…

Read more

Generative AI: LLMs: Getting Started 1.0

Posted on July 5, 2023 by Aritra Sen

With the hype around ChatGPT/LLMs, I thought about writing a blog post series on LLMs. This series will focus mainly on finetuning LLMs and how we can leverage LLMs to perform NLP tasks. We will start by discussing different types of LLMs, then slowly move to fine-tuning LLMs for specific downstream…

Read more



Tag Cloud

Announcements Android BERT Bias Celebration Cricket CyanogenMod deep-learning Denken Experience Facebook Features Finetuning GCN GenerativeAI GNN Google HBOOT HBOOT downgrading HTC Wildfire huggingface India Launch Life LLM Lumia 520 MachineLearning mobile My Space nlp Orkut People Python pytorch pytorch-geometric Rooting Sachin Share Social Network transformers Tutorials Twitter weight-initialization Windows Phone







Copyright

AritraSen’s site © This site is protected against copying by Copyscape. Copying from this site is strictly prohibited. Protected by Copyscape Original Content Validator.
© 2025 Denken | Powered by Minimalist Blog WordPress Theme