Denken

Generative AI: LLMs: In Context Learning 1.2

Posted on July 10, 2023 (updated July 11, 2023) by Aritra Sen

From this blog post onwards, we will talk about different fine-tuning approaches for LLMs. As discussed in the last post, in-context learning helps in the following two situations:
1. We don’t have access to the full model; we only have access to the model’s API.
2. We don’t have much data to train a model.
Below, using an OpenAI API key, I show how we can do in-context learning.
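The idea can be sketched as follows: we put a few labelled examples in the prompt, then ask the model to complete the label for a new input. This is a minimal sketch, not the exact code from the post; the sentiment-classification task and example reviews are illustrative, and the model name in the commented-out API call is only an assumption.

```python
# In-context (few-shot) learning sketch: labelled examples go directly
# into the prompt, and the model infers the pattern at inference time.

def build_few_shot_prompt(examples, query):
    """Build a prompt that shows a few labelled examples before
    asking the model to label a new input."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("The movie was fantastic.", "Positive"),
    ("Terrible acting and a dull plot.", "Negative"),
]
prompt = build_few_shot_prompt(examples, "I loved every minute of it.")
print(prompt)

# With an OpenAI API key set, the prompt can then be sent to a model, e.g.:
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# resp = client.chat.completions.create(
#     model="gpt-3.5-turbo",  # illustrative model choice
#     messages=[{"role": "user", "content": prompt}],
# )
# print(resp.choices[0].message.content)
```

No model weights are touched here: the "learning" happens entirely inside the prompt, which is why this works when we only have API access.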

One limitation of in-context learning is that the context length grows with the number of examples we add to the prompt, so it is not an efficient fine-tuning approach. If we have a lot of data, a better approach is fine-tuning with instructions, as described in the OpenAI documentation.
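The growth of the prompt can be made concrete with a quick sketch: every extra in-context example adds a fixed chunk of text, so prompt size (and hence token cost and context-window usage) grows linearly. Word count is used below as a crude stand-in for tokens; the example text is illustrative.

```python
# Rough illustration of why in-context learning scales poorly:
# each added example enlarges the prompt by a fixed amount,
# so context usage grows linearly with the number of examples.

example = "Review: The movie was fantastic.\nSentiment: Positive\n"
query = "Review: I loved it.\nSentiment:"

for k in (1, 5, 20):
    prompt = example * k + query
    # len(prompt.split()) is a crude proxy for the token count
    print(f"{k:>2} examples -> {len(prompt.split())} words in prompt")
```

Fine-tuning moves this cost out of the prompt: the examples are baked into the weights once, and each inference call only pays for the query itself.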

Do like, share and comment if you have any questions.

Category: Aritra Sen, Machine Learning, Python

Post navigation

← Generative AI: LLMs: Finetuning Approaches 1.1
Generative AI: LLMs: Feature base finetuning 1.3 →




Copyright

AritraSen’s site © This site is protected against copying by Copyscape. Copying from this site is strictly prohibited. Protected by Copyscape Original Content Validator.
© 2025 Denken | Powered by Minimalist Blog WordPress Theme